'''Use calculus to set up and solve the word problem:''' Find the length and width of a rectangle that has a perimeter of 48 and the largest possible area.

Labeling the length x and the width y as shown in the image, we need to remember the equation for perimeter, {\displaystyle P\,=\,2x+2y,} and that for area, {\displaystyle A\,=\,xy.} Since the perimeter is fixed, {\displaystyle 48\,=\,P\,=\,2x+2y,} we can solve for y in terms of x: {\displaystyle y=24-x.} Substituting into the area formula gives {\displaystyle A(x)\,=\,xy\,=\,x(24-x)\,=\,24x-x^{2}.} To maximize {\displaystyle A(x)=24x-x^{2},} take the derivative: {\displaystyle A'(x)\,=\,24-2x\,=\,2(12-x).} Setting the derivative to zero gives x = 12, and then y = 24 − 12 = 12. Thus {\displaystyle x=y=12}, and these are the values that will maximize area.
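As a quick numerical sketch (not part of the original page), the calculus result can be checked in a few lines of Python:

```python
# Sketch: verify that x = 12 maximizes A(x) = 24x - x^2
# for a rectangle with perimeter 48.

def area(x: float) -> float:
    """Area of a rectangle with perimeter 48 and length x."""
    y = 24 - x          # from 48 = 2x + 2y
    return x * y        # A(x) = x(24 - x) = 24x - x^2

def d_area(x: float) -> float:
    """Derivative A'(x) = 24 - 2x = 2(12 - x)."""
    return 24 - 2 * x

# The critical point is where A'(x) = 0.
assert d_area(12) == 0
# The area at x = 12 beats nearby values, confirming a maximum.
assert area(12) > area(11.9) and area(12) > area(12.1)
print(area(12))  # 144.0
```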
1 Agribusiness Study Program, Faculty of Agriculture, Muhammadiyah University of Makassar, Makassar, Indonesia. 2 Center for Research and Development of Natural Resources, Hasanuddin University, Makassar, Indonesia.

Abstract: One of the advances in current socio-economic science is the integration of the socio-spatial approach into various socio-economic studies. The socio-spatial approach is a tool that plays an important role in visualizing social data in the dimensions of space and time. In this article, the authors present a model built on a socio-spatial approach for developing the potential of underdeveloped regions in Indonesia. The model provides information on the suitability of land use for the development of agricultural commodities, which is identified as an important factor to serve as a basic reference for regional development planning in disadvantaged areas. Furthermore, relying not only on biophysical land-suitability data but also integrating social data into the spatial model as part of the limiting factors provides more accurate results. The results suggest that biophysically suitable commodities are not necessarily acceptable from the social aspects of the community: in the research carried out, there were 12 types of commodities cultivated by farmers and 10 biophysically suitable types, but only 6 commodities were also feasible on social grounds to become the direction for agribusiness development, consisting of the main commodities coffee, chili, potatoes, and onions and the supporting commodities celery and leeks.

Keywords: Socio-Spatial, Land Use Planning, Agricultural Information, Agribusiness Commodities

The local indicator of spatial association used is Local Moran's I:

I_i = ((y_i − ȳ) / σ²) · Σ_{h=1, h≠i}^{n} [ W_ih (y_h − ȳ) ],

where σ² is the variance of y and W_ih is the spatial weight between locations i and h.

Cite this paper: Junais, I., Samsuar, S., Useng, D., Ali, H.
and Syarif, A. (2019) Integration of Socio-Spatial Approach in Land Use Planning for Agribusiness Commodities: A Case Study of Underdeveloped Districts in South Sulawesi, Indonesia. Open Journal of Social Sciences, 7, 147-159. doi: 10.4236/jss.2019.71013. [1] Perpres No. 131/2015 (2015) Determination of Disadvantaged Regions 2015-2019. Government of the Republic of Indonesia, Indonesia. [2] Booth, A. (1989) Indonesian Agricultural Development in Comparative Perspective. World Development, 17, 1235-1254. [3] Rustiadi, E., Saefulhakim, S. and Panuju, D.R. (2011) Perencanaan dan Pengembangan Wilayah. Crestpent Press dan Yayasan Pustaka Obor Indonesia, Jakarta. [4] FAO (1976) A Framework of Land Evaluation. FAO Soil Bulletin, Rome, No. 6. [5] Balai Besar Penelitian dan Pengembangan Sumberdaya Lahan Pertanian (2011) Petunjuk Teknis Evaluasi Lahan Untuk Komoditas Pertanian. BBPPSDLP Badan Litbang Kementerian Pertanian, Bogor. [6] Goodchild, M.F. and Janelle, D.G. (2004) Thinking Spatially in the Social Sciences. Spatially Integrated Social Science, Oxford University Press, New York. [7] Kosfeld, R., Eckey, H.F. and Dreger, C. (2002) Regional Convergence in Unified Germany: A Spatial Econometric Perspective. Univ., Fachbereich Wirtschaftswiss, Germany. [8] Anselin, L. (1995) Local Indicators of Spatial Association-LISA. Geographical Analysis, 27, 93-115. https://doi.org/10.1111/j.1538-4632.1995.tb00338.x [9] Kalivas, D.P., Kollias, V.J. and Apostolidis, E.H. (2013) Evaluation of Three Spatial Interpolation Methods to Estimate Forest Volume in the Municipal Forest of the Greek Island Skyros. Geo-Spatial Information Science, 16, 100-112. [10] ESRI (2015) ArcGIS Version 10.4.1. [11] Wharton, J. (2017) Subsistence Agriculture and Economic Development. Routledge, Abingdon-on-Thames. https://doi.org/10.4324/9781315130408
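The Local Moran's I statistic used above can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' code; the toy values and the binary contiguity weight matrix are invented for demonstration:

```python
import numpy as np

def local_morans_i(y: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Local Moran's I (Anselin 1995) for values y and spatial weights W.

    I_i = (y_i - ybar) / sigma^2 * sum_{h != i} W_ih (y_h - ybar)
    """
    z = y - y.mean()
    sigma2 = y.var()              # population variance sigma^2
    Wz = W @ z - np.diag(W) * z   # exclude the h = i term
    return z / sigma2 * Wz

# Toy example: 4 regions on a line; neighbors share an edge.
y = np.array([10.0, 12.0, 11.0, 30.0])
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
I = local_morans_i(y, W)
# Region 3 (high value next to low values) gets a negative I,
# flagging a spatial outlier; regions 0-2 cluster with positive I.
print(I)
```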
Composition of relations

In the mathematics of binary relations, the composition of relations is the forming of a new binary relation R ; S from two given binary relations R and S. In the calculus of relations, the composition of relations is called relative multiplication,[1] and its result is called a relative product.[2]: 40  Function composition is the special case of composition of relations in which all the relations involved are functions.

The words uncle and aunt indicate a compound relation: for a person to be an uncle, he must be a brother of a parent (or a sister for an aunt). In algebraic logic it is said that the relation of Uncle (xUz) is the composition of the relations "is a brother of" (xBy) and "is a parent of" (yPz): {\displaystyle U=BP\quad \equiv \quad xByPz\iff xUz.} Beginning with Augustus De Morgan,[3] the traditional form of reasoning by syllogism has been subsumed by relational logical expressions and their composition.[4]

If {\displaystyle R\subseteq X\times Y} and {\displaystyle S\subseteq Y\times Z} are two binary relations, then their composition {\displaystyle R;S} is the relation {\displaystyle R;S=\{(x,z)\in X\times Z\mid \exists y\in Y:(x,y)\in R\land (y,z)\in S\}.} In other words, {\displaystyle R;S\subseteq X\times Z} is defined by the rule that says {\displaystyle (x,z)\in R;S} if and only if there is an element {\displaystyle y\in Y} such that {\displaystyle x\,R\,y\,S\,z} (that is, {\displaystyle (x,y)\in R} and {\displaystyle (y,z)\in S} ).[5]: 13

Notational variations

The semicolon as an infix notation for composition of relations dates back to Ernst Schröder's textbook of 1895.[6] Gunther Schmidt has renewed the use of the semicolon, particularly in Relational Mathematics (2011).[2]: 40 [7] The use of the semicolon coincides with the notation for function composition used (mostly by computer scientists) in category theory,[8] as well as the notation for dynamic conjunction within linguistic dynamic semantics.[9]

A small circle {\displaystyle (R\circ S)} has been used for the infix notation of composition of relations by John M. Howie in his books considering semigroups of relations.[10] However, the small circle is widely used to represent composition of functions {\displaystyle g(f(x))=(g\circ f)(x),} which reverses the text sequence from the operation sequence. The small circle was used in the introductory pages of Graphs and Relations[5]: 18  until it was dropped in favor of juxtaposition (no infix notation). Juxtaposition {\displaystyle (RS)} is commonly used in algebra to signify multiplication, so it can also signify relative multiplication.

Further, with the circle notation, subscripts may be used. Some authors[11] prefer to write {\displaystyle \circ _{l}} and {\displaystyle \circ _{r}} explicitly when necessary, depending on whether the left or the right relation is the first one applied. A further variation encountered in computer science is the Z notation: {\displaystyle \circ } is used to denote the traditional (right) composition, while ⨾ (U+2A3E Z NOTATION RELATIONAL COMPOSITION) denotes left composition.[12][13]

The binary relations {\displaystyle R\subseteq X\times Y} are sometimes regarded as the morphisms {\displaystyle R\colon X\to Y} in a category Rel which has the sets as objects. In Rel, composition of morphisms is exactly composition of relations as defined above. The category Set of sets is a subcategory of Rel that has the same objects but fewer morphisms.

Composition of relations is associative: {\displaystyle R;(S;T)=(R;S);T.} The converse relation of R ; S is (R ; S)T = ST ; RT. This property makes the set of all binary relations on a set a semigroup with involution. If R and S are injective, then R ; S is injective; the converse implication yields only the injectivity of R. If R and S are surjective, then R ; S is surjective; the converse implication yields only the surjectivity of S.
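As an illustrative sketch (not part of the article), the set-theoretic definition of R ; S translates directly into Python; the family names below are invented:

```python
# Composition of relations as sets of ordered pairs, following
# R;S = {(x, z) | exists y: (x, y) in R and (y, z) in S}.

def compose(R: set, S: set) -> set:
    """Relative product R;S of two binary relations given as sets of pairs."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

# "is a brother of" ; "is a parent of" = "is an uncle of"
brother_of = {("Tom", "Ann")}   # Tom is Ann's brother
parent_of  = {("Ann", "Joe")}   # Ann is Joe's parent
assert compose(brother_of, parent_of) == {("Tom", "Joe")}

# Associativity: R;(S;T) == (R;S);T
R, S, T = {(1, 2)}, {(2, 3)}, {(3, 4)}
assert compose(R, compose(S, T)) == compose(compose(R, S), T) == {(1, 4)}
```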
Composition in terms of matrices

Finite binary relations are represented by logical matrices. The entries of these matrices are either zero or one, depending on whether the relation represented is false or true for the row and column corresponding to the compared objects. Working with such matrices involves Boolean arithmetic, with 1 + 1 = 1 and 1 × 1 = 1. An entry in the matrix product of two logical matrices is 1, then, only if the row and column multiplied have a corresponding 1. Thus the logical matrix of a composition of relations can be found by computing the matrix product of the matrices representing the factors of the composition. "Matrices constitute a method for computing the conclusions traditionally drawn by means of hypothetical syllogisms and sorites."[14]

Heterogeneous relations

Main article: Heterogeneous relation

Consider a heterogeneous relation R ⊆ A × B, i.e. where A and B are distinct sets. Then, composing R with its converse RT yields the homogeneous relations R RT (on A) and RT R (on B). If ∀x ∈ A ∃y ∈ B xRy (that is, R is a (left-)total relation), then ∀x xRRTx, so that R RT is a reflexive relation, or I ⊆ R RT, where I is the identity relation {xIx : x ∈ A}. Similarly, if R is a surjective relation then RT R ⊇ I = {xIx : x ∈ B}. In this case R ⊆ R RT R. The opposite inclusion occurs for a difunctional relation. The composition {\displaystyle {\bar {R}}^{\textsf {T}}R} is used to distinguish relations of Ferrers type, which satisfy {\displaystyle R{\bar {R}}^{\textsf {T}}R=R.}

Example: let A = { France, Germany, Italy, Switzerland } and B = { French, German, Italian }, with the relation R given by aRb when b is a national language of a.
Since both A and B are finite, R can be represented by a logical matrix, assuming rows (top to bottom) and columns (left to right) are ordered alphabetically:

{\displaystyle {\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\\1&1&1\end{pmatrix}}.}

The converse relation RT corresponds to the transposed matrix, and the relation composition {\displaystyle R^{\textsf {T}};R} corresponds to the matrix product {\displaystyle R^{\textsf {T}}R} when summation is implemented by logical disjunction. It turns out that the 3 × 3 matrix {\displaystyle R^{\textsf {T}}R} contains a 1 at every position, while the reversed matrix product computes as

{\displaystyle RR^{\textsf {T}}={\begin{pmatrix}1&0&0&1\\0&1&0&1\\0&0&1&1\\1&1&1&1\end{pmatrix}}.}

This matrix is symmetric and represents a homogeneous relation on A. Correspondingly, {\displaystyle R^{\textsf {T}};R} is the universal relation on B; hence any two of the languages share a nation where they both are spoken (in fact: Switzerland). Vice versa, the question whether two given nations share a language can be answered using {\displaystyle R;R^{\textsf {T}}.}

Schröder rules

For a given set V, the collection of all binary relations on V forms a Boolean lattice ordered by inclusion (⊆). Recall that complementation reverses inclusion: {\displaystyle A\subset B\implies B^{\complement }\subset A^{\complement }.} In the calculus of relations[15] it is common to represent the complement of a set by an overbar: {\displaystyle {\bar {A}}=A^{\complement }.} If S is a binary relation, let {\displaystyle S^{\textsf {T}}} represent the converse relation, also called the transpose.
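The matrix computations in the nations/languages example are easy to reproduce. The following is an illustrative Python sketch of Boolean matrix composition, not code from the article; rows of R are France, Germany, Italy, Switzerland, and columns are French, German, Italian:

```python
# Composition of relations via the logical matrix product,
# with Boolean arithmetic (1 + 1 = 1, 1 * 1 = 1).

def bool_matmul(A, B):
    """(AB)[i][k] = OR_j (A[i][j] AND B[j][k])."""
    return [[int(any(A[i][j] and B[j][k] for j in range(len(B))))
             for k in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

R = [[1, 0, 0],   # France: French
     [0, 1, 0],   # Germany: German
     [0, 0, 1],   # Italy: Italian
     [1, 1, 1]]   # Switzerland: all three

RT = transpose(R)
# R^T R: do two languages share a nation? All ones, thanks to Switzerland.
assert bool_matmul(RT, R) == [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
# R R^T: do two nations share a language?
print(bool_matmul(R, RT))
# [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1], [1, 1, 1, 1]]
```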
Then the Schröder rules are {\displaystyle QR\subseteq S\quad \equiv \quad Q^{\textsf {T}}{\bar {S}}\subseteq {\bar {R}}\quad \equiv \quad {\bar {S}}R^{\textsf {T}}\subseteq {\bar {Q}}.} Verbally, one equivalence can be obtained from another: select the first or second factor and transpose it; then complement the other two relations and permute them.[5]: 15–19

Though this transformation of an inclusion of a composition of relations was detailed by Ernst Schröder, in fact Augustus De Morgan first articulated the transformation, as Theorem K, in 1860.[4] He wrote {\displaystyle LM\subseteq N\implies {\bar {N}}M^{\textsf {T}}\subseteq {\bar {L}}.}

With the Schröder rules and complementation one can solve for an unknown relation X in relation inclusions such as {\displaystyle RX\subseteq S\quad {\text{and}}\quad XR\subseteq S.} For instance, by the Schröder rule {\displaystyle RX\subseteq S\implies R^{\textsf {T}}{\bar {S}}\subseteq {\bar {X}},} and complementation gives {\displaystyle X\subseteq {\overline {R^{\textsf {T}}{\bar {S}}}},} which is called the left residual of S by R.

Just as composition of relations is a type of multiplication resulting in a product, some operations compare to division and produce quotients. Three quotients are exhibited here: the left residual, the right residual, and the symmetric quotient. The left residual of two relations is defined presuming that they have the same domain (source); the right residual presumes the same codomain (range, target); the symmetric quotient presumes that the two relations share both a domain and a codomain.

Left residual: {\displaystyle A\backslash B\mathrel {:=} {\overline {A^{\textsf {T}}{\bar {B}}}}}
Right residual: {\displaystyle D/C\mathrel {:=} {\overline {{\bar {D}}C^{\textsf {T}}}}}
Symmetric quotient: {\displaystyle \operatorname {syq} (E,F)\mathrel {:=} {\overline {E^{\textsf {T}}{\bar {F}}}}\cap {\overline {{\bar {E}}^{\textsf {T}}F}}}

Using Schröder's rules, AX ⊆ B is equivalent to X ⊆ A {\displaystyle \backslash } B.
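The left residual can likewise be computed with Boolean matrices. The sketch below is illustrative (the matrices A and B are invented), using the definition A\B = complement(AT · complement(B)) given above:

```python
# Left residual of Boolean-matrix relations: A\B = not(A^T * not(B)).

def bool_matmul(A, B):
    return [[int(any(A[i][j] and B[j][k] for j in range(len(B))))
             for k in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def complement(A):
    return [[1 - a for a in row] for row in A]

def left_residual(A, B):
    """Greatest relation X with A;X contained in B."""
    return complement(bool_matmul(transpose(A), complement(B)))

A = [[1, 0],
     [0, 1]]
B = [[1, 1],
     [0, 1]]
X = left_residual(A, B)
# A is the identity relation, so the greatest X with A;X in B is B itself.
assert X == B
assert bool_matmul(A, X) == B   # in particular A;X is contained in B
```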
Thus the left residual is the greatest relation satisfying AX ⊆ B. Similarly, the inclusion YC ⊆ D is equivalent to Y ⊆ D/C, and the right residual is the greatest relation satisfying YC ⊆ D.[2]: 43–6  One can practice the logic of residuals with Sudoku.

Join: another form of composition

A fork operator (<) has been introduced to fuse two relations c: H → A and d: H → B into c(<)d: H → A × B. The construction depends on the projections a: A × B → A and b: A × B → B, understood as relations, meaning that there are converse relations aT and bT. Then the fork of c and d is given by {\displaystyle c(<)d\mathrel {:=} c;a^{\textsf {T}}\cap \ d;b^{\textsf {T}}.}

Another form of composition of relations, which applies to general n-place relations for n ≥ 2, is the join operation of relational algebra. The usual composition of two binary relations as defined here can be obtained by taking their join, leading to a ternary relation, followed by a projection that removes the middle component. For example, in the query language SQL there is the operation Join (SQL).

^ Bjarni Jónsson (1984) "Maximal Algebras of Binary Relations", in Contributions to Group Theory, K.I. Appel editor, American Mathematical Society ISBN 978-0-8218-5035-0
^ a b c Gunther Schmidt (2011) Relational Mathematics, Encyclopedia of Mathematics and its Applications, vol. 132, Cambridge University Press ISBN 978-0-521-76268-7
^ A. De Morgan (1860) "On the Syllogism: IV and on the Logic of Relations"
^ a b Daniel D. Merrill (1990) Augustus De Morgan and the Logic of Relations, page 121, Kluwer Academic ISBN 9789400920477
^ a b c Gunther Schmidt & Thomas Ströhlein (1993) Relations and Graphs, Springer books
^ Ernst Schröder (1895) Algebra und Logik der Relative
^ Paul Taylor (1999). Practical Foundations of Mathematics. Cambridge University Press. p. 24. ISBN 978-0-521-63107-5.
A free HTML version of the book is available at http://www.cs.man.ac.uk/~pt/Practical_Foundations/
^ Michael Barr & Charles Wells (1998) Category Theory for Computer Scientists, Archived 2016-03-04 at the Wayback Machine, page 6, from McGill University
^ Rick Nouwen and others (2016) Dynamic Semantics §2.2, from Stanford Encyclopedia of Philosophy
^ John M. Howie (1995) Fundamentals of Semigroup Theory, page 16, LMS Monograph #12, Clarendon Press ISBN 0-19-851194-9
^ Kilp, Knauer & Mikhalev, p. 7
^ Unicode character: Z Notation relational composition, from FileFormat.info
^ Irving Copilowish (December 1948) "Matrix development of the calculus of relations", Journal of Symbolic Logic 13(4): 193–203, Jstor link, quote from page 203
^ Vaughn Pratt, The Origins of the Calculus of Relations, from Stanford University
^ De Morgan indicated contraries by lower case, conversion as M−1, and inclusion with )), so his notation was {\displaystyle nM^{-1}))\ l.}
^ Gunther Schmidt and Michael Winter (2018): Relational Topology, page 26, Lecture Notes in Mathematics vol. 2208, Springer books, ISBN 978-3-319-74451-3
M. Kilp, U. Knauer, A.V. Mikhalev (2000) Monoids, Acts and Categories with Applications to Wreath Products and Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, ISBN 3-11-015248-7.
Thermodynamic system

Properties of isolated, closed, and open thermodynamic systems in exchanging energy and matter.

The very existence of thermodynamic equilibrium, defining the states of thermodynamic systems, is the essential, characteristic, and most fundamental postulate of thermodynamics, though it is only rarely cited as a numbered law.[2][3][4] According to Bailyn, the commonly rehearsed statement of the zeroth law of thermodynamics is a consequence of this fundamental postulate.[5] In reality, practically nothing in nature is in strict thermodynamic equilibrium, but the postulate of thermodynamic equilibrium often provides very useful idealizations or approximations, both theoretically and experimentally; experiments can provide scenarios of practical thermodynamic equilibrium.

"With every change of volume (to the working body) a certain amount of work must be done by the gas or upon it, since by its expansion it overcomes an external pressure, and since its compression can be brought about only by an exertion of external pressure. To this excess of work done by the gas or upon it there must correspond, by our principle, a proportional excess of heat consumed or produced, and the gas cannot give up to the 'surrounding medium' the same amount of heat as it receives."

The article Carnot heat engine shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engine; below, we see the Carnot engine as it is typically modeled in current use: heat flows from a high-temperature furnace TH through the fluid of the "working body" (working substance) and into the cold sink TC, thus forcing the working substance to do mechanical work W on the surroundings via cycles of contractions and expansions. In the diagram shown, the "working body" (system), a term introduced by Clausius in 1850, can be any fluid or vapor body through which heat Q can be introduced or transmitted to produce work.
In 1824, Sadi Carnot, in his famous paper Reflections on the Motive Power of Fire, had postulated that the fluid body could be any substance capable of expansion, such as vapor of water, vapor of alcohol, vapor of mercury, a permanent gas, or air. Though in these early years engines came in a number of configurations, typically QH was supplied by a boiler, wherein water boiled over a furnace, and QC was typically a stream of cold flowing water in the form of a condenser located on a separate part of the engine. The output work W was the movement of the piston as it turned a crank-arm, which typically turned a pulley to lift water out of flooded salt mines. Carnot defined work as "weight lifted through a height".

Systems in equilibrium

[Table: types of transfers permitted by types of wall, e.g. walls permeable to matter, walls permeable to energy but impermeable to matter, and adynamic walls.]

A system is enclosed by walls that bound it and connect it to its surroundings.[7][8][9][10][11][12] Often a wall restricts passage across it by some form of matter or energy, making the connection indirect. Sometimes a wall is no more than an imaginary two-dimensional closed surface through which the connection to the surroundings is direct. The walls of a closed system allow transfer of energy as heat and as work, but not of matter, between it and its surroundings. The walls of an open system allow transfer both of matter and of energy.[13][14][15][16][17][18][19] This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is here used.[20][21]

The system is the part of the universe being studied, while the surroundings are the remainder of the universe that lies outside the boundaries of the system. The surroundings are also known as the environment or the reservoir.
Depending on the type of system, the surroundings may interact with the system by exchanging mass, energy (including heat and work), momentum, electric charge, or other conserved properties. The environment is ignored in the analysis of the system, except in regard to these interactions.

Closed system

For a closed system, the first law of thermodynamics reads ΔU = Q − W, where U denotes the internal energy of the system, Q the heat added to the system, and W the work done by the system. In differential form, dU = δQ − δW. If the work is due to a volume expansion by dV at a pressure P, then δW = P dV; for a reversible process, δQ = T dS, where T denotes the thermodynamic temperature and S the entropy, so that dU = T dS − P dV.

In a closed system with chemical reactions, the number of atoms of each element is conserved:

Σ_{j=1}^{m} a_ij N_j = b_i^0,

where N_j is the number of j-type molecules, a_ij is the number of atoms of element i in molecule j, and b_i^0 is the total number of atoms of element i in the system, which remains constant since the system is closed. There is one such equation for each element in the system.

Isolated system

An isolated system is more restrictive than a closed system, as it does not interact with its surroundings in any way. Mass and energy remain constant within the system, and no energy or mass transfer takes place across the boundary. As time passes in an isolated system, internal differences in the system tend to even out; pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is in a state of thermodynamic equilibrium.
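The element-balance constraint for a closed system described above can be illustrated with a small sketch; the reaction 2 H2 + O2 → 2 H2O and the mole numbers are invented for demonstration:

```python
# Sketch: element balance sum_j a_ij * N_j = b_i^0 in a closed system.
# Columns of a: H2, O2, H2O; rows: elements H, O.

a = [[2, 0, 2],   # H atoms per molecule of H2, O2, H2O
     [0, 2, 1]]   # O atoms per molecule of H2, O2, H2O

def element_totals(N):
    """b_i = sum_j a_ij N_j, the total atom count of each element."""
    return [sum(aij * Nj for aij, Nj in zip(row, N)) for row in a]

N_before = [2.0, 1.0, 0.0]   # moles before reaction
N_after  = [0.0, 0.0, 2.0]   # moles after complete reaction

# The element totals are unchanged: the system is closed.
assert element_totals(N_before) == element_totals(N_after) == [4.0, 2.0]
```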
Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere.[22][23][24][25][26] However, real systems may behave nearly as isolated systems for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations and is an acceptable idealization used in constructing mathematical models of certain natural phenomena.

Selective transfer of matter

A thermodynamic operation can render all system walls impermeable to matter except the contact-equilibrium wall for a given substance. This allows the definition of an intensive state variable for that substance, with respect to a reference state of the surroundings. The intensive variable is called the chemical potential; for component substance i it is usually denoted μi. The corresponding extensive variable can be the number of moles Ni of the component substance in the system.

Open system

In an open system, there is an exchange of energy and matter between the system and the surroundings. The presence of reactants in an open beaker is an example of an open system; here the boundary is an imaginary surface enclosing the beaker and reactants. A system is named closed if its borders are impenetrable to substance but allow transit of energy in the form of heat, and isolated if there is no exchange of heat or substances. An open system cannot exist in the equilibrium state. To describe the deviation of a thermodynamic system from equilibrium, a set of internal variables ξ1, ξ2, …, in addition to the constitutive variables described above, has been introduced. The equilibrium state is considered to be stable,
and the main property of the internal variables, as measures of the non-equilibrium of the system, is their tendency to disappear; the local law of disappearance can be written as a relaxation equation for each internal variable:

dξ_i/dt = −(1/τ_i) (ξ_i − ξ_i^(0)),  i = 1, 2, …,     (1)

where τ_i = τ_i(T, x_1, x_2, …, x_n) is the relaxation time of the corresponding variable. It is convenient to consider the initial value ξ_i^0.

The specific contribution to the thermodynamics of open non-equilibrium systems was made by Ilya Prigogine, who investigated a system of chemically reacting substances.[28] In this case the internal variables appear to be measures of the incompleteness of chemical reactions, that is, measures of how far the considered system with chemical reactions is out of equilibrium. The theory can be generalized[29][30][31] to consider any deviation from the equilibrium state, such as the structure of the system, gradients of temperature, differences of concentrations of substances, and so on, as well as degrees of completeness of all chemical reactions, to be internal variables. The increments of the Gibbs free energy G and the entropy S at T = const and p = const are determined as

dG = Σ_j Ξ_j Δξ_j + Σ_α μ_α ΔN_α,     (2)

T dS = ΔQ − Σ_j Ξ_j Δξ_j + Σ_{α=1}^{k} η_α ΔN_α.     (3)

The stationary states of the system exist due to exchange of both thermal energy ΔQ and a stream of particles.
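The relaxation law for the internal variables can be integrated numerically; the following sketch (not from the article) uses forward Euler with invented values for the relaxation time and the time step:

```python
import math

# Sketch: numerical relaxation of one internal variable toward equilibrium,
# d(xi)/dt = -(xi - xi0)/tau, integrated with forward Euler.

def relax(xi: float, xi0: float, tau: float, dt: float, steps: int) -> float:
    """Integrate the relaxation equation for one internal variable."""
    for _ in range(steps):
        xi += -dt / tau * (xi - xi0)
    return xi

xi = relax(xi=1.0, xi0=0.0, tau=2.0, dt=0.01, steps=1000)
# After t = 10 = 5*tau the deviation has decayed to roughly exp(-5).
assert abs(xi - math.exp(-10 / 2.0)) < 1e-2
print(xi)
```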
The sum of the last terms in the equations represents the total energy coming into the system with the stream of particles of substances ΔN_α, which can be positive or negative; the quantity μ_α is the chemical potential of substance α. The middle terms in equations (2) and (3) depict the energy dissipation (entropy production) due to the relaxation of the internal variables ξ_j, while the Ξ_j are the corresponding thermodynamic forces.
^ Guggenheim, E.A. (1949). Statistical basis of thermodynamics, Research: A Journal of Science and its Applications, 2, Butterworths, London, pp. 450–454.
^ Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA, p. 119.
^ Marsland, R. III, Brown, H.R., Valente, G. (2015). Time and irreversibility in axiomatic thermodynamics, Am. J. Phys., 83(7): 628–634.
^ Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, ISBN 1-4020-0788-4.
^ Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London, p. 44.
^ Tisza, L. (1966), pp. 109, 112.
^ Adkins, C.J. (1968/1975), p. 4.
^ Callen, H.B. (1960/1985), pp. 15, 17.
^ Tschoegl, N.W. (2000), p. 5.
^ Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co, London, p. 66.
^ Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA, pp. 112–113.
^ Guggenheim, E.A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, (1st edition 1949) 5th edition 1967, North-Holland, Amsterdam, p. 14.
^ Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London, pp. 6–7.
^ Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081, p. 3.
^ Tschoegl, N.W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5, p. 5. ^ Silbey, R.J., Alberty, R.A., Bawendi, M.G. (1955/2005). Physical Chemistry, fourth edition, Wiley, Hoboken NJ, p. 4. ^ Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, ISBN 0-471-86256-8, p. 17. ^ ter Haar, D., Wergeland, H. (1966). Elements of Thermodynamics, Addison-Wesley Publishing, Reading MA, p. 43. ^ I.M.Kolesnikov; V.A.Vinokurov; S.I.Kolesnikov (2001). Thermodynamics of Spontaneous and Non-Spontaneous Processes. Nova science Publishers. p. 136. ISBN 978-1-56072-904-4. ^ "A System and Its Surroundings". ChemWiki. University of California - Davis. Retrieved 9 May 2012. ^ "Hyperphysics". The Department of Physics and Astronomy of Georgia State University. Retrieved 9 May 2012. ^ Bryan Sanctuary. "Open, Closed and Isolated Systems in Physical Chemistry". Foundations of Quantum Mechanics and Physical Chemistry. McGill University (Montreal). Retrieved 9 May 2012. ^ Material and Energy Balances for Engineers and Environmentalists (PDF). Imperial College Press. p. 7. Archived from the original (PDF) on 15 August 2009. Retrieved 9 May 2012. ^ Pokrovskii V.N. (2013) A derivation of the main relations of non-equilibrium thermodynamics. Hindawi Publishing Corporation: ISRN Thermodynamics, vol. 2013, article ID 906136, 9 p. https://dx.doi.org/10.1155/2013/906136. ^ Zotin, Alexei; Pokrovskii, Vladimir (2018). "The growth and development of living organisms from the thermodynamic point of view". Physica A: Statistical Mechanics and its Applications. 512: 359–366. Abbott, M.M.; van Hess, H.G. (1989). Thermodynamics with Chemical Applications (2nd ed.). McGraw Hill. Halliday, David; Resnick, Robert; Walker, Jearl (2008). Fundamentals of Physics (8th ed.). Wiley. Moran, Michael J.; Shapiro, Howard N. (2008). 
Fundamentals of Engineering Thermodynamics (6th ed.). Wiley.
Compact Heat Storage for Solar Heating Systems | J. Sol. Energy Eng. | ASME Digital Collection

Viktoria Martin, Brinellvägen 68, SE-100 44 Stockholm, Sweden; Fredrik Setterwall, e-mail: fredrik.setterwall@ecostorage.se

Martin, V., and Setterwall, F. (September 30, 2009). "Compact Heat Storage for Solar Heating Systems." ASME. J. Sol. Energy Eng. November 2009; 131(4): 041011. https://doi.org/10.1115/1.3197841

Energy- and cost-efficient solar hot water systems require some sort of integrated storage, with high energy density and high power capacity for charging and discharging being desirable properties of the storage. This paper presents the results and conclusions from the design and experimental performance evaluation of a high-capacity thermal energy storage using so-called phase change materials (PCMs) as the storage medium. A 140 l, 15 kW h storage prototype was designed, built, and experimentally evaluated. The storage tank was directly filled with the PCM, which has its phase change temperature at 58°C. A tube heat exchanger for charging and discharging with water was submerged in the PCM. Results from the experimental evaluation showed that hot water can be provided at a temperature of 40°C for more than 2 h at an average power of 3 kW. The experimental results also show that it is possible to charge the 140 l storage to close to the theoretically calculated value of 15 kW h. Hence, this is a PCM storage solution with a storage capacity of over 100 kW h/m3 and an average power capacity during discharging of over 20 kW/m3. However, it is desirable to increase the heat transfer rate within the prototype. A predesign using a finned-tube coil instead of an unfinned coil shows that with finned tubes the power capacity for discharging can be at least doubled, if not tripled.
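The volumetric figures quoted in the abstract follow from simple arithmetic; the sketch below just re-derives them from the stated 140 l volume, 15 kW h capacity, and 3 kW average discharge power:

```python
# Sketch: checking the reported storage figures for the 140 l, 15 kWh prototype.

volume_m3 = 0.140        # 140 litres = 0.140 m^3
capacity_kwh = 15.0      # charged thermal energy
discharge_kw = 3.0       # average power during discharge

energy_density = capacity_kwh / volume_m3   # kWh per m^3
power_density = discharge_kw / volume_m3    # kW per m^3

print(round(energy_density, 1))  # 107.1 -> "over 100 kW h/m3"
print(round(power_density, 1))   # 21.4  -> "over 20 kW/m3"
```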
Keywords: heat transfer, phase change materials, latent heat, thermal energy storage, solar heating, solar hot water
Jacob discovered that the x-intercepts of a certain parabola are (3, 0) and (−1, 0), but now he needs to find the vertex. Can you get him started? What do you know about the vertex? Draw a sketch of this parabola to help you. Sketch the parabola on your graph paper first. Remember that parabolas are symmetric shapes and the vertex is on the line of symmetry. Knowing this information, what part of the vertex do you know?
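The symmetry hint can be made concrete with one line of algebra (a sketch of the first step only; the y-coordinate still requires the parabola's equation): the axis of symmetry lies midway between the two intercepts, so

```latex
x_{\text{vertex}} \;=\; \frac{3 + (-1)}{2} \;=\; 1.
```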
Held Objects
PickupObject allows the agent to pick up an interactable object specified by its objectId. Compatible objects have pickupable set in their object metadata and report their current state with isPickedUp. Note that the agent's hand must be clear of obstruction or the action will fail. If the target object being in the agent's hand would cause it to clip into the environment, the action will also fail. Picked up objects can also obstruct the Agent's view of the environment since the Agent's hand is always in camera view, so know that picking up larger objects will obstruct the field of vision. Certain objects are moveable receptacles that can themselves be picked up. If a moveable receptacle is picked up while other Sim Objects are inside of it, the contained objects will be picked up with the moveable receptacle. An example call: controller.step(action="PickupObject", objectId="Apple|1|1|1", forceAction=False, manualInteract=False)
Pickup Object Parameters
objectId: The target object's objectId, found in the object's metadata.
forceAction: Set to True to allow an object to be picked up regardless of whether it is within the agent's visibility range.
manualInteract: By default, objects picked up by the agent teleport into the agent's hand at a default position in front of the agent camera. Set this to True in order to instead pick up an object at the object's location. This allows the agent to manipulate the object via object manipulation actions without the abstraction of the picked up object teleporting to the agent's hand.
PutObject attempts to put an object the agent is holding onto or into the target receptacle. Valid target receptacle objects have receptacle set in their object metadata. An example call: controller.step(action="PutObject", placeStationary=True)
Put Object Parameters
forceAction: Enable to ignore any Receptacle Restrictions when attempting to place objects. Normally objects will fail to be put on a receptacle if that receptacle is not valid for the object. This will also ignore interaction range restrictions of the agent. Note this does not guarantee an object will be placed in a receptacle, as some objects will not fit inside a receptacle regardless of the default object restrictions.
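As an engine-free sketch, the pickup call described above is just a set of keyword arguments passed to controller.step; a small helper that assembles them (the helper itself is hypothetical, but the parameter names come from the documentation above):

```python
def pickup_action(object_id, force_action=False, manual_interact=False):
    """Build the keyword arguments for an AI2-THOR PickupObject call.

    Equivalent to: controller.step(action="PickupObject", objectId=...,
    forceAction=..., manualInteract=...)
    """
    return {
        "action": "PickupObject",
        "objectId": object_id,
        "forceAction": force_action,
        "manualInteract": manual_interact,
    }

params = pickup_action("Apple|1|1|1")
print(params["action"])    # PickupObject
print(params["objectId"])  # Apple|1|1|1
```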
If placeStationary=False, a placed object will use the physics engine to resolve the final position. This means placing an object on an uneven surface may cause inconsistent results due to the object rolling around or even falling off of the target receptacle. Note that because of variances in physics resolution, this placement mode is non-deterministic! If placeStationary=True, the object will be placed in/on the valid receptacle without using physics to resolve the final position. This means that the object will be placed so that it will not roll around. For deterministic placement make sure to set placeStationary to True.
DropHandObject attempts to drop an object currently in the agent's hand and let physics resolve where it lands. The action is different from PutObject, as it does not guarantee the held object will be put into a specified receptacle. It is meant to be used in tandem with the Move/Rotate Hand functions to maneuver a held object to a target area, and then let it drop. Additionally, this drop action will fail if the held object is not clear from all collisions. Most importantly, the agent's collision will prevent drop, as dropping an object while it is "inside" the agent would lead to unintended behavior. An example call: controller.step(action="DropHandObject", forceAction=False)
Drop Object Parameters
forceAction: Set to True to forcibly drop an object even if this would cause clipping with the environment, other objects, or the agent.
ThrowObject is similar to DropHandObject. It throws the object currently in the agent's hand in the forward facing direction of the agent with a force of moveMagnitude newtons. Since objects have different mass properties, different objects require different forces to throw them the same distance. An example call: controller.step(action="ThrowObject", moveMagnitude=150.0)
Throw Object Parameters
moveMagnitude: The amount of force used to throw the object in newtons. Note that objects of different masses will have different throw distances if this magnitude is not changed.
forceAction: Set to True to forcibly throw an object even if this would cause clipping with the environment or other objects.
Move Held Object
While the agent is holding an object, it has several available actions to manipulate it. One such action is the ability to move the held object closer to or further away from the agent. Held object movement is useful if we want to drop an object on a surface that is relatively far away. There are several directions in which the agent can move the held object: MoveHeldObjectAhead, MoveHeldObjectBack, MoveHeldObjectLeft, MoveHeldObjectRight, MoveHeldObjectUp, MoveHeldObjectDown. Each direction supports the same additional parameters. After calling an agent move action (e.g., MoveAhead) or rotate action (e.g., RotateRight), the position of the held object will reset to its default position in front of the agent. An example call: controller.step(action="MoveHeldObjectAhead", moveMagnitude=0.1, forceVisible=False)
# Other supported directions
controller.step("MoveHeldObjectBack")
controller.step("MoveHeldObjectLeft")
controller.step("MoveHeldObjectRight")
controller.step("MoveHeldObjectUp")
controller.step("MoveHeldObjectDown")
Move Held Object Parameters
moveMagnitude: The distance, in meters, to move the held object in the specified direction.
forceVisible: Setting forceVisible=True results in the action failing if the object is not visible to the agent. This prevents the agent from hiding the held object inside or behind another object, or moving it too far, such that it's out of the agent's field of view.
We also provide a separate helper action, MoveHeldObject, which allows the held object to move in several directions with only a single action: controller.step(action="MoveHeldObject", ahead=0.1, up=0.12)
ahead: The distance, in meters, to move the held object forward, from the agent's current facing direction.
right: The distance, in meters, to move the held object rightwards, from the agent's current facing direction.
up: The distance, in meters, to move the held object upwards, from the agent's current facing direction.
Rotate Held Object
RotateHeldObject attempts to rotate the held object relative to its current rotation.
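The ahead/right/up distances are expressed in the agent's local frame; converting them to a world-space displacement is a small rotation computation. A sketch of that geometry (not part of the AI2-THOR API — the framework does this internally; the Unity-style convention assumed here is y-up, yaw 0 facing +z, yaw increasing clockwise):

```python
import math

def held_object_offset(agent_yaw_deg, ahead=0.0, right=0.0, up=0.0):
    """Convert agent-relative (ahead, right, up) distances in meters to a
    world-space (x, y, z) displacement, given the agent's yaw in degrees."""
    yaw = math.radians(agent_yaw_deg)
    # Forward and right unit vectors in the horizontal x-z plane.
    fwd = (math.sin(yaw), math.cos(yaw))
    rgt = (math.cos(yaw), -math.sin(yaw))
    dx = ahead * fwd[0] + right * rgt[0]
    dz = ahead * fwd[1] + right * rgt[1]
    return (dx, up, dz)

# Facing +z (yaw 0): "ahead" is pure +z and "right" is pure +x.
dx, dy, dz = held_object_offset(0, ahead=0.1, right=0.12)
assert abs(dx - 0.12) < 1e-9 and abs(dy) < 1e-9 and abs(dz - 0.1) < 1e-9
```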
After calling an agent move or rotate action, the rotation of the held object will reset to its default rotation. An example call: controller.step(action="RotateHeldObject", yaw=25)
Rotate Held Object Parameters
pitch: Increments the pitch of the object relative to the agent's current facing direction. Specified in degrees.
yaw: Increments the yaw of the object relative to the agent's current facing direction. Specified in degrees.
roll: Increments the roll of the object relative to the agent's current facing direction. Specified in degrees.
RotateHeldObject can also be used to rotate the held object to a fixed rotation, e.g., controller.step(action="RotateHeldObject", rotation=dict(x=90, y=15, z=25))
rotation: Sets the rotation of the object to the provided rotation, in degrees. The rotation is relative to the object's axes, so the set rotation is independent of the object's current rotation.
Directional Push
DirectionalPush attempts to push an object in a given direction. Since objects can have different mass properties, pushing different objects often requires differing amounts of force to move them the same distance. Only moveable or pickupable objects can be pushed. An example call: controller.step(action="DirectionalPush", objectId="Sofa|3|2|1", moveMagnitude="100", pushAngle="90")
Directional Push Parameters
moveMagnitude: The amount of force used to push the object in newtons. Following natural physics, objects of different masses may move different distances with the same amount of force.
pushAngle: The direction in which to push the object. Values in [0:360] are valid, with 0 being the current forward direction of the agent. This value rotates the push direction clockwise from the agent's forward (i.e., 90 will push the object directly right, and 180 will push the object backwards).
PushObject is equivalent to a DirectionalPush with a pushAngle of 0, i.e., directly ahead of the agent: controller.step(action="PushObject", objectId="Mug|0.25|-0.27")
Push Object Parameters
moveMagnitude: The amount of force used to push the object in newtons. Note that objects of different masses will move different distances if this magnitude is not changed.
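The pushAngle convention — measured clockwise from the agent's forward direction — maps to a world-space direction with basic trigonometry. A sketch of the geometry, not engine code (same y-up, yaw-0-faces-+z convention assumed as above):

```python
import math

def push_direction(push_angle_deg, agent_yaw_deg=0.0):
    """World-space unit vector (x, z) for a DirectionalPush.

    pushAngle is measured clockwise from the agent's forward direction;
    yaw 0 faces +z and both angles increase clockwise when viewed from above.
    """
    total = math.radians(agent_yaw_deg + push_angle_deg)
    return (math.sin(total), math.cos(total))

# Agent facing +z: pushAngle 0 pushes forward (+z), 90 pushes directly
# right (+x), 180 pushes backwards (-z) -- matching the doc's examples.
x, z = push_direction(90)
assert abs(x - 1.0) < 1e-9 and abs(z) < 1e-9
```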
PullObject is equivalent to a DirectionalPush with a pushAngle of 180, i.e., directly back toward the agent: controller.step(action="PullObject", objectId="Mug|0.25|-0.27")
Pull Object Parameters
moveMagnitude: The amount of force used to pull the object in newtons. Note that objects of different masses will move different distances if this magnitude is not changed.
Touch Then Apply Force
TouchThenApplyForce allows the agent to push an object at a certain point on the object, in a given direction. If a sim object is hit along the path of this ray, a force determined by moveMagnitude will be applied to it instantaneously. This action returns feedback in the actionReturn attribute of the event metadata. An example call:
event = controller.step(action="TouchThenApplyForce", direction={...}, moveMagnitude=80, handDistance=1.5)
Touch Then Apply Force Parameters
x: Used with y as an alternate way to target an object, via a point on the last image frame rather than an objectId. Valid values are in [0:1], corresponding to how far from the left edge of the frame the point is.
y: Used in tandem with x to target an object based on the last image frame. Valid values are in [0:1], corresponding to how far from the top edge of the frame the point is.
direction: The direction vector, relative to the agent's current forward, in which to push any object touched.
moveMagnitude: The amount of force to apply to a touched object in newtons.
handDistance: The maximum Euclidean distance, in meters, from the agent's camera at which the (x, y) point can target an object. If the point on the object is further away than handDistance, the action will fail.
The feedback returned in event.metadata["actionReturn"] looks like:
"didHandTouchSomething": True
"objectId": "Apple|+1|+1|+1"
"armsLength": 1.20
didHandTouchSomething: True if a sim object was touched within the ray's length of handDistance.
objectId: The unique string id of the object touched by the ray. If the object touched in the scene has collision but is not a sim object, this attribute will be "not a sim object, a structure was touched".
armsLength: The distance from the touched point on the object to the agent's camera. This distance will never exceed the handDistance value passed in originally unless the action finishes as a failed action (see below).
This action will return a failure (event.metadata["lastActionSuccess"] == False) in only two cases: The raycast hit an object, but the object was outside of the maximum visibility range of the agent (i.e., a handDistance of 10 is passed in, and hits an object 9 meters away, but the agent's max visibility distance is 1.5 meters, causing a failure). The feedback object generated in this case will be didHandTouchSomething=False, objectId="", armsLength=handDistance. The handDistance of the action is larger than the agent's max visibility distance and no object was hit. The feedback object generated in this case will be didHandTouchSomething=False, objectId="", armsLength=handDistance with a metadata error message of "the position the hand would have moved to is outside the agent's max interaction range". Note that this action interacts with the visibility of an object in order to determine what can be poked. The visibility of an object is defined by: An object must be within the agent camera's field of view. An object must be within the area of a cylinder defined by a radius of length visibilityDistance around the agent's vertical y axis. If that object is within the cylinder, a line must be able to be cast from the agent's camera position to a point on that object unobstructed. Because visibility is defined by the cylinder with radius visibilityDistance, the total area of objects that are touchable by this action is the intersection of the sphere of radius handDistance centered around the agent's camera, and the cylinder of radius visibilityDistance about the agent's vertical axis. Note that the agent camera is also centered around the agent's vertical axis. Place Object At Point PlaceObjectAtPoint attempts to place an object flush with the surface of a receptacle. This can only be used on objects that are pickupable. The point to place an object at can be generated by the GetSpawnCoordinatesAboveReceptacle action below.
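The touchable region described above — the intersection of the handDistance sphere about the camera with the visibilityDistance cylinder about the agent's vertical axis — can be checked point by point. A sketch of just that geometry (field-of-view and occlusion checks are omitted; this is not AI2-THOR code):

```python
import math

def is_touchable(point, camera_pos, hand_distance, visibility_distance):
    """True if `point` (x, y, z) lies inside both the sphere of radius
    hand_distance centered at camera_pos and the vertical cylinder of
    radius visibility_distance about the camera's vertical (y) axis."""
    dx = point[0] - camera_pos[0]
    dy = point[1] - camera_pos[1]
    dz = point[2] - camera_pos[2]
    in_sphere = math.sqrt(dx * dx + dy * dy + dz * dz) <= hand_distance
    in_cylinder = math.hypot(dx, dz) <= visibility_distance  # horizontal only
    return in_sphere and in_cylinder

cam = (0.0, 1.5, 0.0)
# 1 m ahead: inside both regions.  2 m ahead: within a generous handDistance
# but outside the 1.5 m visibility cylinder, so not touchable.
assert is_touchable((0.0, 1.5, 1.0), cam, hand_distance=10.0, visibility_distance=1.5)
assert not is_touchable((0.0, 1.5, 2.0), cam, hand_distance=10.0, visibility_distance=1.5)
```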
Combine these two actions in order to teleport objects onto different receptacle surfaces. An example call: controller.step(action="PlaceObjectAtPoint", objectId="Toaster|1|1|1", position={...})
Place Object At Point Parameters
position: The (x, y, z) coordinates of the point at which to try to place the object.
Get Spawn Coordinates Above Receptacle
GetSpawnCoordinatesAboveReceptacle is explicitly meant to be used in tandem with the PlaceObjectAtPoint action. It returns an array of (x, y, z) dictionaries representing valid spawn positions above a receptacle object. The array is returned in the actionReturn metadata. An example call: controller.step(action="GetSpawnCoordinatesAboveReceptacle", objectId="CounterTop|1|1|1", anywhere=False)
anywhere: If True, spawn coordinates will be returned even if the exact position of the coordinate is outside of the agent's field of view. Keep False to return only spawn coordinates that are in view of the agent.
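A typical pattern is to pick one of the returned coordinates — say, the one closest to a reference point — before issuing PlaceObjectAtPoint. A small engine-free helper (the function name and the sample data are illustrative, not part of the API; only the dict shape with "x"/"y"/"z" keys follows the actionReturn format described above):

```python
def closest_spawn_point(spawn_points, reference):
    """Pick the spawn coordinate (a dict with 'x', 'y', 'z' keys, as
    returned by GetSpawnCoordinatesAboveReceptacle) nearest to `reference`
    in the horizontal (x, z) plane."""
    def dist_sq(p):
        return (p["x"] - reference["x"]) ** 2 + (p["z"] - reference["z"]) ** 2
    return min(spawn_points, key=dist_sq)

points = [
    {"x": 0.0, "y": 1.0, "z": 0.0},
    {"x": 2.0, "y": 1.0, "z": 2.0},
]
agent = {"x": 1.8, "y": 0.9, "z": 1.9}
print(closest_spawn_point(points, agent))  # {'x': 2.0, 'y': 1.0, 'z': 2.0}
```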
Centered figurate number that represents a decagon with a dot in the center A centered decagonal number is a centered figurate number that represents a decagon with a dot in the center and all other dots surrounding the center dot in successive decagonal layers. The centered decagonal number for n is given by the formula {\displaystyle 5n^{2}+5n+1\,} Thus, the first few centered decagonal numbers are 1, 11, 31, 61, 101, 151, 211, 281, 361, 451, 551, 661, 781, 911, 1051, ... (sequence A062786 in the OEIS) Like any other centered k-gonal number, the nth centered decagonal number can be reckoned by multiplying the (n − 1)th triangular number by k, 10 in this case, then adding 1. As a consequence of performing the calculation in base 10, the centered decagonal numbers can be obtained by simply adding a 1 to the right of each triangular number. Therefore, all centered decagonal numbers are odd and in base 10 always end in 1. Another consequence of this relation to triangular numbers is the simple recurrence relation for centered decagonal numbers: {\displaystyle CD_{n+1}=CD_{n}+10n,} {\displaystyle CD_{1}=1.} [ordinary] decagonal number Deza, Elena; Deza, Michel Marie (November 20, 2011). "1.6". Figurate Numbers. WORLD SCIENTIFIC. ISBN 978-981-4355-48-3.
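The closed form, the triangular-number relation, and the recurrence above are easy to cross-check in a few lines (a quick verification sketch; note the closed form 5n² + 5n + 1 counts layers from n = 0, so the helper shifts the index to make the first centered decagonal number correspond to n = 1):

```python
def centered_decagonal(n):
    """n-th centered decagonal number, n = 1, 2, 3, ... (closed form)."""
    m = n - 1  # the formula 5k^2 + 5k + 1 indexes layers from 0
    return 5 * m * m + 5 * m + 1

def triangular(n):
    return n * (n + 1) // 2

first = [centered_decagonal(n) for n in range(1, 8)]
print(first)  # [1, 11, 31, 61, 101, 151, 211]

# 10 * T(n-1) + 1: appending the digit 1 to a triangular number in base 10.
assert all(centered_decagonal(n) == 10 * triangular(n - 1) + 1 for n in range(1, 50))

# Recurrence CD_{n+1} = CD_n + 10n with CD_1 = 1.
cd = 1
for n in range(1, 50):
    assert cd == centered_decagonal(n)
    cd += 10 * n
```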
Revision as of 15:09, 26 April 2015 by MathAdmin (talk | contribs) (Created page with "<span class="exam">Test the series for convergence or divergence. ::<span class="exam">(a) (6 points) <math>{\displaystyle \sum_{n=1}^{\infty}}\,(-1...") {\displaystyle {\displaystyle \sum _{n=1}^{\infty }}\,(-1)^{n}\sin {\frac {\pi }{n}}.} {\displaystyle {\displaystyle \sum _{n=1}^{\infty }}\,(-1)^{n}\cos {\frac {\pi }{n}}.} For {\displaystyle n\geq 2}, both sine and cosine of {\displaystyle {\frac {\pi }{n}}} are nonnegative. Thus, these series are alternating, and we can apply the Alternating Series Test: if a series {\displaystyle \sum _{k=1}^{\infty }a_{k}} is alternating in sign, {\displaystyle |a_{k}|} is decreasing, and {\displaystyle \lim _{k\rightarrow \infty }|a_{k}|=0,} then the series converges. Note that if the terms do not converge to zero, we must conclude the series diverges by the Divergence Test: If {\displaystyle {\displaystyle \lim _{k\rightarrow \infty }a_{k}\neq 0,}} then the series/sum {\displaystyle \sum _{k=0}^{\infty }a_{k}} diverges. In the case of an alternating series, such as the two listed for this problem, it suffices to show that {\displaystyle |a_{k}|} does not converge to zero. Retrieved from "https://wiki.math.ucr.edu/index.php?title=009C_Sample_Midterm_3,_Problem_4&oldid=405"
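Applied to the two series above, the decisive limits are (a sketch of the key step):

```latex
\lim_{n\to\infty}\sin\frac{\pi}{n} = \sin 0 = 0,
\qquad
\lim_{n\to\infty}\cos\frac{\pi}{n} = \cos 0 = 1 \neq 0.
```

Since sin(π/n) also decreases for n ≥ 2, series (a) converges by the Alternating Series Test, while the terms of series (b) do not tend to zero, so (b) diverges by the Divergence Test.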
Lemma 37.18.3 (07TD): Normalization and smooth morphisms—The Stacks project Lemma 37.18.3: Normalization and smooth morphisms (cite) Lemma 37.18.3 (Normalization and smooth morphisms). Let $X \to Y$ be a smooth morphism of schemes. Assume every quasi-compact open of $Y$ has finitely many irreducible components. Then the same is true for $X$ and there is a unique isomorphism $X^\nu = X \times _ Y Y^\nu $ over $X$ where $X^\nu $, $Y^\nu $ are the normalizations of $X$, $Y$. Proof. By Descent, Lemma 35.15.3 every quasi-compact open of $X$ has finitely many irreducible components. Note that $X_{red} = X \times _ Y Y_{red}$ as a scheme smooth over a reduced scheme is reduced, see Descent, Lemma 35.17.1. Hence we may assume that $X$ and $Y$ are reduced (as the normalization of a scheme is equal to the normalization of its reduction by definition). Next, note that $X' = X \times _ Y Y^\nu $ is a normal scheme by Descent, Lemma 35.17.2. The morphism $X' \to Y^\nu $ is smooth (hence flat) thus the generic points of irreducible components of $X'$ lie over generic points of irreducible components of $Y^\nu $. Since $Y^\nu \to Y$ is birational we conclude that $X' \to X$ is birational too (because $X' \to Y^\nu $ induces an isomorphism on fibres over generic points of $Y$). We conclude that there exists a factorization $X^\nu \to X' \to X$, see Morphisms, Lemma 29.54.5 which is an isomorphism as $X'$ is normal and integral over $X$. $\square$ Comment #5347 by Hao on June 24, 2020 at 10:05 It's better to mention what X^{\nu} is in Lemma 07TD.
EquivalentRate - Maple Help
Calling Sequence
EquivalentRate(rate, old, new, interval)
EquivalentRate(rate, old, new, startdate, enddate, opts)
Parameters
rate - positive constant, list or Vector; given interest rate
old, new - Annual, Bimonthly, Continuous, EveryFourthMonth, Monthly, Quarterly, Semiannual, Simple, SimpleThenAnnual, SimpleThenBimonthly, SimpleThenEveryFourthMonth, SimpleThenMonthly, SimpleThenQuarterly, or SimpleThenSemiannual; compounding types for the original and the new interest rate
interval - non-negative constant, list(non-negative), or Vector; duration of the compounding interval in years
opts - equation of the form option = value where option is daycounter; specify options for the EquivalentRate command
Description
The EquivalentRate command calculates an equivalent rate for the specified compounding interval and compounding type. The parameter rate is the original rate; it must be positive. The old and new parameters are the original and the new compounding type, respectively. The parameter interval is the duration of the compounding period; alternatively, one can specify the beginning and the end of the compounding period as dates. The interval parameter is relevant only when the conversion involves simple compounding.
Examples
> with(Finance):
> rate1 := 0.06:
> rate2 := EquivalentRate(rate1, Continuous, Monthly);
                        rate2 := 0.06015025031
> evalf(exp(rate1));
                            1.061836547
> evalf((1 + rate2/12)^12);
                            1.061836548
> intervalL := [1.2, 2.5, 4.8]:
> ratelist := EquivalentRate(0.65, Continuous, Simple, intervalL);
       ratelist := [0.984560221248501, 1.63136761487203, 4.50966242566154]
This is an example of converting from/to simple compounding.
> startdate := "Jan-05-2006";
> enddate := "Dec-31-2006";
> interval := YearFraction(startdate, enddate);
                        interval := 0.9863013699
> Settings(daycounter);
                            Historical
> EquivalentRate(rate1, Continuous, Simple, interval);
                            0.06181088722
> EquivalentRate(rate1, Continuous, Simple, "Jan-05-2006", "Jan-05-2007", daycounter = ISMA);
                            0.06183654655
Here are more conversions.
> rate3 := EquivalentRate(rate1, Continuous, Quarterly);
                        rate3 := 0.06045225846
> rate4 := EquivalentRate(rate2, Monthly, Quarterly);
                        rate4 := 0.06045225846
> rate5 := EquivalentRate(rate1, Continuous, Simple, 1.0);
                        rate5 := 0.06183654655
> EquivalentRate(rate5, Simple, Continuous, 1.0);
                            0.06000000000
> EquivalentRate(rate1, Continuous, Simple, 5.0);
                            0.06997176152
The Finance[EquivalentRate] command was introduced in Maple 15.
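Each conversion above amounts to equating growth factors over the compounding period, so the Maple outputs can be cross-checked outside Maple with a few lines of standalone arithmetic (Python here, not the Finance package):

```python
import math

r_cont = 0.06  # the continuously compounded rate used in the examples

# Continuous -> monthly: solve (1 + r_m/12)**12 = exp(r_cont).
r_monthly = 12 * (math.exp(r_cont / 12) - 1)
print(round(r_monthly, 11))  # 0.06015025031, matching rate2

# Continuous -> simple over 1 year: solve 1 + r_s = exp(r_cont).
r_simple_1y = math.exp(r_cont) - 1
print(round(r_simple_1y, 11))  # 0.06183654655, matching rate5

# Continuous -> simple over 5 years: solve 1 + 5*r_s = exp(5 * r_cont).
r_simple_5y = (math.exp(5 * r_cont) - 1) / 5
print(round(r_simple_5y, 11))  # 0.06997176152, matching the last example
```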
f(x) = x^3 − 2x^2. At what point(s) will the line tangent to f(x) be parallel to the secant line through (0, f(0)) and (2, f(2))? Calculate the slope of the secant between (0, f(0)) and (2, f(2)). We want to know where the slope of the tangent is the same as the slope of the secant. Recall that the slope of the tangent is also known as f^\prime(x); since the secant slope here works out to 0, find where f^\prime(x) = 0. The slope of the tangent = the slope of the secant at coordinate points ( ________, _________ ) and ( ________, _________ ). You must analytically compute the exact coordinates, but note that the slope of tangent lines is 0 at the local maximum and local minimum. Use the eTool below to examine the graph.
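For reference, the computation the hints lead to (worked here as a sketch; the problem asks you to carry it out yourself):

```latex
\text{secant slope} = \frac{f(2)-f(0)}{2-0} = \frac{0-0}{2} = 0,
\qquad
f'(x) = 3x^2 - 4x = x(3x - 4) = 0 \;\Rightarrow\; x = 0 \text{ or } x = \tfrac{4}{3},
```

giving the points (0, 0) and (4/3, −32/27), since f(4/3) = 64/27 − 32/9 = −32/27.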
Section 42.28 (02TG): Intersecting with an invertible sheaf and rational equivalence—The Stacks project Section 42.28: Intersecting with an invertible sheaf and rational equivalence (cite) 42.28 Intersecting with an invertible sheaf and rational equivalence Applying the key lemma we obtain the fundamental properties of intersecting with invertible sheaves. In particular, we will see that $c_1(\mathcal{L}) \cap -$ factors through rational equivalence and that these operations for different invertible sheaves commute. Lemma 42.28.1. Let $(S, \delta )$ be as in Situation 42.7.1. Let $X$ be locally of finite type over $S$. Assume $X$ integral and $\dim _\delta (X) = n$. Let $\mathcal{L}$, $\mathcal{N}$ be invertible on $X$. Choose a nonzero meromorphic section $s$ of $\mathcal{L}$ and a nonzero meromorphic section $t$ of $\mathcal{N}$. Set $\alpha = \text{div}_\mathcal {L}(s)$ and $\beta = \text{div}_\mathcal {N}(t)$. Then \[ c_1(\mathcal{N}) \cap \alpha = c_1(\mathcal{L}) \cap \beta \] Proof. Immediate from the key Lemma 42.27.1 and the discussion preceding it. $\square$ Lemma 42.28.2. Let $(S, \delta )$ be as in Situation 42.7.1. Let $X$ be locally of finite type over $S$. Let $\mathcal{L}$ be invertible on $X$. The operation $\alpha \mapsto c_1(\mathcal{L}) \cap \alpha $ factors through rational equivalence to give an operation \[ c_1(\mathcal{L}) \cap - : \mathop{\mathrm{CH}}\nolimits _{k + 1}(X) \to \mathop{\mathrm{CH}}\nolimits _ k(X) \] Proof. Let $\alpha \in Z_{k + 1}(X)$, and $\alpha \sim _{rat} 0$. We have to show that $c_1(\mathcal{L}) \cap \alpha $ as defined in Definition 42.25.1 is zero. 
By Definition 42.19.1 there exists a locally finite family $\{ W_ j\} $ of integral closed subschemes with $\dim _\delta (W_ j) = k + 2$ and rational functions $f_ j \in R(W_ j)^*$ such that \[ \alpha = \sum (i_ j)_*\text{div}_{W_ j}(f_ j) \] Note that $p : \coprod W_ j \to X$ is a proper morphism, and hence $\alpha = p_*\alpha '$ where $\alpha ' \in Z_{k + 1}(\coprod W_ j)$ is the sum of the principal divisors $\text{div}_{W_ j}(f_ j)$. By Lemma 42.26.4 we have $c_1(\mathcal{L}) \cap \alpha = p_*(c_1(p^*\mathcal{L}) \cap \alpha ')$. Hence it suffices to show that each $c_1(\mathcal{L}|_{W_ j}) \cap \text{div}_{W_ j}(f_ j)$ is zero. In other words we may assume that $X$ is integral and $\alpha = \text{div}_ X(f)$ for some $f \in R(X)^*$. Assume $X$ is integral and $\alpha = \text{div}_ X(f)$ for some $f \in R(X)^*$. We can think of $f$ as a regular meromorphic section of the invertible sheaf $\mathcal{N} = \mathcal{O}_ X$. Choose a meromorphic section $s$ of $\mathcal{L}$ and denote $\beta = \text{div}_\mathcal {L}(s)$. By Lemma 42.28.1 we conclude that \[ c_1(\mathcal{L}) \cap \alpha = c_1(\mathcal{O}_ X) \cap \beta . \] However, by Lemma 42.25.2 we see that the right hand side is zero in $\mathop{\mathrm{CH}}\nolimits _ k(X)$ as desired. $\square$ Let $(S, \delta )$ be as in Situation 42.7.1. Let $X$ be locally of finite type over $S$. Let $\mathcal{L}$ be invertible on $X$. We will denote by $c_1(\mathcal{L}) \cap -$ the induced operation on Chow groups; this makes sense by Lemma 42.28.2. We will denote $c_1(\mathcal{L})^ s \cap -$ the $s$-fold iterate of this operation for all $s \geq 0$. Lemma 42.28.3. Let $(S, \delta )$ be as in Situation 42.7.1. Let $X$ be locally of finite type over $S$. Let $\mathcal{L}$, $\mathcal{N}$ be invertible on $X$. For any $\alpha \in \mathop{\mathrm{CH}}\nolimits _{k + 2}(X)$ we have \[ c_1(\mathcal{L}) \cap c_1(\mathcal{N}) \cap \alpha = c_1(\mathcal{N}) \cap c_1(\mathcal{L}) \cap \alpha \] as elements of $\mathop{\mathrm{CH}}\nolimits _ k(X)$. Proof.
Write $\alpha = \sum m_ j[Z_ j]$ for some locally finite collection of integral closed subschemes $Z_ j \subset X$ with $\dim _\delta (Z_ j) = k + 2$. Consider the proper morphism $p : \coprod Z_ j \to X$. Set $\alpha ' = \sum m_ j[Z_ j]$ as a $(k + 2)$-cycle on $\coprod Z_ j$. By several applications of Lemma 42.26.4 we see that $c_1(\mathcal{L}) \cap c_1(\mathcal{N}) \cap \alpha = p_*(c_1(p^*\mathcal{L}) \cap c_1(p^*\mathcal{N}) \cap \alpha ')$ and $c_1(\mathcal{N}) \cap c_1(\mathcal{L}) \cap \alpha = p_*(c_1(p^*\mathcal{N}) \cap c_1(p^*\mathcal{L}) \cap \alpha ')$. Hence it suffices to prove the formula in case $X$ is integral and $\alpha = [X]$. In this case the result follows from Lemma 42.28.1 and the definitions. $\square$ Comment #6291 by Yi Shan on June 19, 2021 at 09:51 In the statement before Lemma 02TJ, why the operation c_{1}(\mathcal{L})\cap- maps the Chow group of (k+s) -cycles to that of k -cycles? Should the s here be replaced by 1
Illinois J. Math. 65 (2), (June 2021)
Global automorphic Sobolev theory and the automorphic heat kernel
Amy T. DeCelles
Illinois J. Math. 65 (2), 261-286, (June 2021) DOI: 10.1215/00192082-9082091
KEYWORDS: 11F72, 11F55, 58J35, 46E35, 47D06, 35K08
Heat kernels arise in a variety of contexts including probability, geometry, and functional analysis; the automorphic heat kernel is particularly important in number theory and string theory. The typical construction of an automorphic heat kernel as a Poincaré series presents analytic difficulties, which can be dealt with in special cases (e.g., hyperbolic spaces) but are often sidestepped in higher rank by restricting to the compact quotient case. In this paper, we present a new approach, using global automorphic Sobolev theory, a robust framework for solving automorphic PDEs that does not require any simplifying assumptions about the rank of the symmetric space or the compactness of the arithmetic quotient. We construct an automorphic heat kernel via its automorphic spectral expansion in terms of cusp forms, Eisenstein series, and residues of Eisenstein series. We then prove uniqueness of the automorphic heat kernel as an application of operator semigroup theory. Finally, we prove the smoothness of the automorphic heat kernel by proving that its automorphic spectral expansion converges in the $C^\infty$-topology.
Spectral properties of reducible conical metrics
Bin Xu, Xuwen Zhu
KEYWORDS: 34M35, 53C21
We show that the monodromy of a spherical conical metric g is reducible if and only if the metric g has a real-valued eigenfunction with eigenvalue 2 for the holomorphic extension $\Delta_g^{\mathrm{Hol}}$ of the associated Laplace–Beltrami operator. Such an eigenfunction produces a meromorphic vector field, which is then related to the developing maps of the conical metric.
We also give a lower bound for the first nonzero eigenvalue of $\Delta_g^{\mathrm{Hol}}$, together with a complete classification of the dimension of the space of real-valued 2-eigenfunctions for $\Delta_g^{\mathrm{Hol}}$ depending on the monodromy of the metric g. This paper can be seen as a new connection between the complex analysis method and the PDE approach in the study of spherical conical metrics.
Orbifolds having a Heegaard decomposition with Euler number zero
John Kalliongis, Ryo Ohashi
KEYWORDS: 57M10, 57M05, 57M12, 57M60, 57S25, 57S30
In this paper, we completely classify, up to homeomorphism, the orientable and nonorientable orbifolds which have a Heegaard decomposition consisting of orbifold handlebodies with Euler number zero. In addition, we compute their fundamental groups.
Cohen–Lenstra distributions via random matrices over complete discrete valuation rings with finite residue fields
Gilyoung Cheong, Yifeng Huang
Let $(R, \mathfrak{m})$ be a complete discrete valuation ring with finite residue field $R/\mathfrak{m} = \mathbb{F}_q$. Given a monic polynomial $P(t) \in R[t]$ whose reduction modulo $\mathfrak{m}$ gives an irreducible polynomial $\bar{P}(t) \in \mathbb{F}_q[t]$, we initiate an investigation of the distribution of $\mathrm{coker}(P(A))$, where $A \in \mathrm{Mat}_n(R)$ is randomly chosen with respect to the Haar probability measure on the additive group $\mathrm{Mat}_n(R)$ of $n \times n$ R-matrices. In particular, we provide a generalization of two results of Friedman and Washington about these random matrices. We use some concrete combinatorial connections between $\mathrm{Mat}_n(R)$ and $\mathrm{Mat}_n(\mathbb{F}_q)$ to translate our problems about a Haar-random matrix in $\mathrm{Mat}_n(R)$ into problems about a random matrix in $\mathrm{Mat}_n(\mathbb{F}_q)$ with respect to the uniform distribution.
Our results over {\mathbb{F}}_{q} are about the distribution of the \stackrel{‾}{P} -part of a random matrix \stackrel{‾}{A}\in {\mathrm{Mat}}_{n}\left({\mathbb{F}}_{q}\right) with respect to the uniform distribution, and one of them generalizes a result of Fulman. We heuristically relate our results to a celebrated conjecture of Cohen and Lenstra, which predicts that given an odd prime p, any finite abelian p-group (i.e., {\mathbb{Z}}_{p} -module) H occurs as the p-part of the class group of a random imaginary quadratic field extension of \mathbb{Q} with a probability inversely proportional to |{\mathrm{Aut}}_{\mathbb{Z}}\left(H\right)| . We review three different heuristics for the conjecture of Cohen and Lenstra, and they are all related to special cases of our main conjecture, which we prove as our main theorems. Quadratic differentials, measured foliations, and metric graphs on punctured surfaces Kealey Dias, Subhojoy Gupta, Maria Trnkova A meromorphic quadratic differential on a punctured Riemann surface induces horizontal and vertical measured foliations with pole singularities. In a neighborhood of a pole, such a foliation comprises foliated strips and half-planes, and its leaf space determines a metric graph. We introduce the notion of an asymptotic direction at each pole and show that for a punctured surface equipped with a choice of such asymptotic data, any compatible pair of measured foliations uniquely determines a complex structure and a meromorphic quadratic differential realizing that pair. This proves the analogue of a theorem of Gardiner–Masur for meromorphic quadratic differentials. We also prove an analogue of the Hubbard–Masur theorem; namely, for a fixed punctured Riemann surface there exists a meromorphic quadratic differential with any prescribed horizontal foliation, and such a differential is unique provided we prescribe the singular flat geometry at the poles. 
Weighted inequalities for q-functions Tomasz Gałązka, Adam Osękowski Let f be a martingale on an arbitrary atomic probability space equipped with a tree-like structure and let S\left(f,q\right) denote the associated q-function. The paper is devoted to weighted {L}^{p} estimates of the form {c}_{p,q,w}^{-1}{‖S\left(f,q\right)‖}_{{L}^{p}\left(w\right)}\le ‖f{‖}_{{L}^{p}\left(w\right)}\le {C}_{p,q,w}{‖S\left(f,q\right)‖}_{{L}^{p}\left(w\right)},\phantom{\rule{1em}{0ex}}1\le p<\mathrm{\infty }, for Muckenhoupt weights. Using a combination of the theory of sparse operators, extrapolation, and the Bellman function method, we identify the optimal dependence of the constants {c}_{p,q,w} and {C}_{p,q,w} on the {A}_{p} characteristics of the weights involved. An optimization problem arising in CR geometry KEYWORDS: 32H35, 32M99, 32V99, 90C05 We determine the asymptotic behavior as the degree tends to infinity of the minimal {L}^{1} norm m\left(n,d\right) of the solution of an optimization problem arising when studying polynomial sphere maps. Here n is the source dimension and d is the degree. We provide upper and lower bounds for m\left(n,d\right). We use these bounds to show that the function d\to m\left(n,d\right) is monotone increasing in d. We prove that {lim}_{d\to \mathrm{\infty }}\frac{m\left(n,d\right)}{d}=n\left(n-1\right). Let N\left(n,d\right) denote the minimum possible target dimension of a monomial sphere map of degree d. We show, in source dimension unequal to 2, that {lim}_{d\to \mathrm{\infty }}\frac{m\left(n,d\right)}{N\left(n,d\right)}=n. The limit is 4 when n=2. We discuss some complicated results obtained by coding when n=2. Hausdorff measures, dyadic approximations, and the Dobiński set Alberto Dayan, José L. Fernández, María J.
González KEYWORDS: 11K55, 28A78, 30C85 The Dobiński set \mathcal{D} is an exceptional set for a certain infinite product identity, whose points are characterized as having exceedingly good approximations by dyadic rationals. We study the Hausdorff dimension and logarithmic measure of \mathcal{D} by means of the mass transference principle and by the construction of certain appropriate Cantor-like sets, termed willow sets, contained in \mathcal{D}.
The Meaning of Relativity - Wikiquote The Meaning of Relativity: Four Lectures Delivered at Princeton University, May 1921 is a book published in 1922 by Princeton University Press in the USA and by Methuen & Company in the UK. The 1922 book is a translation of the 1921 Stafford Little Lectures at Princeton University, given in German by Albert Einstein (1879–1955). Einstein's goal in the lectures was to give an overview of the physics, mathematics, and basic thinking for both special relativity theory and general relativity theory. The Princeton physics professor Edwin Plimpton Adams (1878–1950) translated the lectures into English. There are four subsequent editions: the 2nd edition in 1945, the 3rd edition in 1950, the 4th edition in 1953, and the 5th edition in 1955. Einstein added for the 2nd edition an appendix entitled On the "Cosmological Problem" and for the 3rd edition an Appendix II entitled Relativistic Theory of the Non-symmetric Field. For the 5th edition, he completely revised Appendix II, based upon simplifying the derivations and the form of the general relativistic field equations. The simplification was done in collaboration with his assistant Bruria Kaufman (1918–2010). Quotes from The Meaning of Relativity, 5th edition Chapter. Space and Time in Pre-relativity Physics The theory of relativity is intimately connected with the theory of space and time. I shall therefore begin with a brief investigation of the origin of our ideas of space and time, although in doing so I know that I introduce a controversial subject.
The object of all science, whether natural science or psychology, is to co-ordinate our experiences and to bring them into a logical system. How are our customary ideas of space and time related to the character of our experience? Chapter. The Theory of Relativity Before the development of the theory of relativity it was known that the principle of energy and momentum could be expressed in a differential form for the electromagnetic field. The four-dimensional formulation of these principles leads to an important conception, that of the energy tensor, which is important for the further development of the theory of relativity. Chapter. The General Theory of Relativity (Inertial mass) {\displaystyle \cdot } (Acceleration) {\displaystyle =} (Intensity of the gravitational field) {\displaystyle \cdot } (Gravitational mass). Chapter. The General Theory of Relativity (continued) A material particle upon which no force acts moves, according to the principle of inertia, uniformly in a straight line. In the four-dimensional continuum of the special theory of relativity (with real time co-ordinate) this is a real straight line. The natural, that is, the simplest, generalization of the straight line which is meaningful in the system of concepts of the general (Riemannian) theory of invariants is that of the straightest, or geodesic, line. Appendix for the Second Edition Some try to explain Hubble's shift of spectral lines by means other than the Doppler effect. There is, however, no support for such a conception in the known physical facts. It is the essential achievement of the general theory of relativity that it has freed physics from the necessity of introducing the "inertial system" (or inertial systems). This concept is unsatisfactory for the following reason: without any deeper foundation it singles out certain co-ordinate systems among all conceivable ones. It is then assumed that the laws of physics hold only for such inertial systems (e.g.
the law of inertia and the law of the constancy of the velocity of light). Thereby, space as such is assigned a role in the system of physics that distinguishes it from all other elements of physical description. It plays a determining role in all processes, without in its turn being influenced by them. Though such a theory is logically possible, it is on the other hand rather unsatisfactory. Newton had been fully aware of this deficiency, but he had also clearly understood that no other path was open to physics in his time. Among the later physicists it was above all Ernst Mach who focussed attention on this point. Quotes about The Meaning of Relativity In May 1921 Albert Einstein delivered a series of lectures at Princeton University on the broad topic of relativity. The lectures form a unified survey of the basic concepts of relativity. Beginning with the pre-relativity physics of Newton (or perhaps, more correctly, the "three dimensional" relativity of Newton) Einstein lays the foundation for "four dimensional" relativity primarily from the postulational standpoint. The development of special relativity is followed by the formulation of the general theory leading up to the Schwarzschild line element and the cosmological problem. E. Richard Cohen (October 1956). "Review of The Meaning of Relativity (Fifth Edition)". Physics Today 9 (10): 30–31. DOI:10.1063/1.3059795. Einstein, Albert; Adams, Edwin Plimpton (1922). The Meaning of Relativity: Four Lectures Delivered at Princeton Univ., May, 1921 (1st ed.). London: Methuen Publishing. OCLC 637254801. Einstein, Albert (1945). The Meaning of Relativity (2nd ed.). Princeton, N.J.: Princeton University Press. OCLC 1105547540. Einstein, Albert (1950). The Meaning of Relativity (3rd ed.). Princeton: Princeton University Press. OCLC 1304366. Einstein, Albert (1953). The Meaning of Relativity: Including the Generalization of Gravitation Theory (4th ed.). Princeton, N.J.: Princeton University Press. OCLC 946162394.
Einstein, Albert (1955). The meaning of relativity: including the relativistic theory of the non-symmetric field. Princeton: Princeton University Press. ISBN 9780691080079. OCLC 177301011. The Meaning of Relativity 5th edition at Princeton University Press The Meaning of Relativity 5th edition at JSTOR The Meaning of Relativity at Springer Link
After doing well on a test, Althea’s teacher placed a gold star on her paper. When Althea examined the star closely, she realized that it was really a regular pentagon surrounded by 5 isosceles triangles, as shown in the diagram below. If the star has the angle measurements shown in the diagram, find the sum of the angles inside the shaded pentagon. Show all work. What do the tick marks on the sides of the triangles mean? How can you use these to find the measures of the other angles in the triangles? Once you have the measures of the other angles in the triangles, how can you use these measures to find the measures of the angles in the pentagon?
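Whatever the individual angle values shown in the diagram, the final answer can be checked against the interior-angle-sum formula for a polygon. A short worked check (using only the generic formula, not the specific diagram values):

```latex
\text{Sum of interior angles of an } n\text{-gon} = (n-2)\cdot 180^\circ
\quad\Rightarrow\quad
(5-2)\cdot 180^\circ = 540^\circ .
```

The base angles found from the isosceles triangles should therefore combine so that the five pentagon angles total 540°.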
attempts to open an Openable object to a specified amount, parameterized by openness. The full list of openable object types can be filtered on the Object Types page. An object can fail to open if it hits another object as it is opening. In this case the action will fail and the target object will reset to the position it was last in. action="OpenObject", objectId="Book|0.25|-0.27|0.95", openness=1, Open Object Parameters The proportion of how far the object should open. Valid values are in [0:1], where the value 0 corresponds to completely closed, 1 corresponds to completely open, and 0.5 corresponds to halfway open, for instance. The agent will not be able to interact with the object unless it is within the initialized of the object and the object appears in the agent's current frame. This prevents the agent from unnaturally interacting with objects that are too far away. Each object contains metadata in the event pertaining to the state of its openness. "openable": True, "openness": 0.75, Open Object Response Can this object be opened? The proportion that the object is open, linearly scaled to the range [0:1]. For instance, if an object is a quarter of the way open, then its openness would be 0.25; the object is completely closed when its openness is 0. An object can fail to close if it hits another object as it is closing. In this case the action will fail and the target object will reset to the position it was last in. action="CloseObject", Close Object Parameters attempts to break a breakable object. If successful, the object appears visibly different. For instance, breakable objects may shatter completely into pieces or have their screens cracked. The full list of breakable object types can be filtered on the Object Types page. Broken objects cannot be unbroken (until the scene has reset). action="BreakObject", objectId="Vase|0.25|0.27|-0.95", Break Object Parameters Each object contains metadata in the event pertaining to if it is broken.
"breakable": True, Break Object Response Can this object break? Is this object currently broken? Cook Object CookObject attempts to switch an object to its cooked state. Objects cannot not be uncooked (unless the scene is reset). The full list of cookable object types can be filtered on the Object Types page. action="CookObject", objectId="Egg|0.25|-0.27|0.95", Cook Object Parameters Each object contains metadata in the event pertaining to if it is cooked. "cookable": True, "isCooked": False, Cook Object Response Can this object be cooked? Is this object currently cooked? attempts to slice an object. If an object, such as an Apple, is successfully sliced, there may include several new AppleSliced objects in the scene and metadata, one for each slice. Other objects, like Egg, may only break after being sliced. Sliced objects cannot be unsliced (unless the scene is reset). The full list of sliceable objects can be filtered on the Object Types page. action="SliceObject", objectId="Potato|0.25|-0.27|0.95", Slice Object Parameters Each object contains metadata in the event pertaining to if it is sliced. "sliceable": True, "isSliced": False, Slice Object Response Can this object be sliced? Is this object currently sliced? ToggleObjectOn ToggleObjectOff toggles an object between on and off states, respectively. Examples include Lamps, Light Switches, Stove Knobs, and Laptops. The full list of toggleable objects can be filtered on the Object Types page. action="ToggleObjectOn", objectId="LightSwitch|0.25|-0.27|0.95", action="ToggleObjectOff", Toggle Object Parameters Each object contains metadata in the event pertaining to if it is toggled. "isToggled": True, Toggle Object Response Can this object be toggled? Is this object currently toggled on? Dirty Object DirtyObject attempts to make an object look dirty. can then be used to make the object look clean again. The full list of dirtyable object types can be filtered on the Object Types page. 
action="DirtyObject", objectId="Mug|0.25|-0.27|0.95", Clean Object action="CleanObject", Dirty Object Parameters Dirty Object Response FillObjectWithLiquid attempts to fill an object with fillLiquid . Only compatible objects that are empty can be filled. The full list of object types that can be filled with liquid can be filtered on the Object Types page. action="FillObjectWithLiquid", fillLiquid="coffee", Fill Object with Liquid Parameters The type of liquid that fills the object. Valid liquids are Each object contains metadata in the event pertaining to if it is filled with liquid. "fillLiquid": "coffee", "canFillWithLiquid": True, "isFilledWithLiquid": True, Fill Object with Liquid Response : Optional[str] Which liquid is the object filled with? If the object is not filled with liquid, it will report . Valid liquids are Can this object be filled with liquid? Is this object currently filled with liquid? Empty Liquid from Object EmptyLiquidFromObject attempts to empty the liquid from an object if it is currently filled with liquid (i.e., If an object is rotated downward far enough, gravity may cause the object to lose its liquid automatically, causing . Here, the liquid will not appear spilled on the ground like a puddle. Instead, it disappears. action="EmptyLiquidFromObject", Empty Liquid From Object Parameters Each object contains metadata in the event pertaining to if it can be emptied of liquid. "fillLiquid": None, "isFilledWithLiquid": False, Empty Liquid from Object Response Can this object be emptied with liquid? Use Up Object UseUpObject attempts to use up parts of an object. The action works with objects like ToiletPaper, TissueBox, and PaperTowelRoll, which each has an full/empty state. The full list of object types that can be used up can be filtered on the Object Types page. action="UseUpObject", objectId="ToiletPaper|0.25|-0.27|0.95", Use Up Object Parameters Each object contains metadata in the event pertaining to if it is used up. 
"canBeUsedUp": True, "isUsedUp": False, Use Up Object Response Can this object be used up? Is this object currently used up?
How to calculate PPM and percents PPM conversion: an example If percentages, per mills and parts per million still confuse you, give this PPM calculator a shot. It is a simple tool that can be used for conversion from PPM to units such as percents or parts per billion (PPB). In this article, we will provide you with a short description of each of the proportion metrics and give you a detailed explanation of how to calculate PPM and percentages. All of the proportion metrics are quite similar: they describe small values of dimensionless quantities, such as the volumetric proportion of \text{NO}_2 in the air. For example, PPM means "parts per million". If you find out that the concentration of \text{NO}_2 is equal to 1 ppm, it means that if you take a "sample" of air and divide it into a million parts, one of them will consist of \text{NO}_2 The proportion metrics are as follows: Percentage: equal to 1 per 100; Permille: equal to 1 per 1,000; PPM: equal to 1 per 1,000,000; PPB: equal to 1 per 1,000,000,000; and PPT: equal to 1 per 1,000,000,000,000. Let's take the following example: you have created a solution of salt ( \text{NaCl} ) in water. You used 0.005 grams of salt, and the final mass of the solution is equal to 1 kilogram. How many parts per million (PPM) of salt are in the solution? Start with expressing the solution concentration as a decimal. There are 0.005 grams of salt in 1 kilogram (1000 grams). 
It means that the decimal is equal to \footnotesize \quad\enspace 0.005\ \text{g}\ /\ 1,\!000\ \text{g} = 0.000005 To find the percentage, multiply this value by a hundred: \footnotesize \quad\enspace 0.000005 \times 100 \% = 0.0005\% To find the permille value, multiply the decimal by a thousand: \footnotesize \quad\enspace 0.000005 \times 1,\!000 ‰ = 0.005‰ For the PPM, multiply the decimal by a million: \footnotesize \quad\enspace \begin{split} & 0.000005 \times 1,\!000,\!000\ \text{PPM} \\ &= 5\ \text{PPM} \end{split} Finally, for the PPB, multiply the decimal by a billion: \footnotesize \quad\enspace \begin{split} & 0.000005 \times 1,\!000,\!000,\!000\ \text{PPB} \\ &= 5,\!000\ \text{PPB} \end{split} One of the applications of such PPM calculations in everyday life is to find and adjust the salinity of your swimming pool. Use our ml to kg converter if you're struggling to estimate your product's volume in ml.
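All of the conversions above follow one pattern: express the ratio as a decimal, then scale by the metric's factor. A small sketch (the function name is ours):

```python
def proportion_metrics(part: float, whole: float) -> dict:
    """Express a part/whole ratio as percent, permille, PPM and PPB."""
    frac = part / whole
    return {
        "percent": frac * 100,
        "permille": frac * 1_000,
        "ppm": frac * 1_000_000,
        "ppb": frac * 1_000_000_000,
    }

# 0.005 g of salt in 1,000 g of solution:
metrics = proportion_metrics(0.005, 1_000)
```

Running this on the salt example reproduces the 5 PPM and 5,000 PPB results worked out above.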
At the University of the Great Plains the following data about engineering majors was collected:

             Engineering major   Other major    Total
Off campus         800              7,200        8,000
On campus          120             11,880       12,000
Total              920             19,080       20,000

What is the conditional probability of living on campus, given that you know a student is an engineering major? 920 students are engineering majors. Of those, how many live on campus? Compare your answer to part (a) to the probability of living on campus. Divide the total number of students who live on campus by the total number of students at the university. Are the two events, {living on campus} and {engineering major} associated? Use the probabilities to explain why or why not. Yes. The probability of an engineering student living on campus is much smaller than the probability of a student picked at random living on campus.
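With the counts given, the two probabilities compared in this problem can be computed directly. A sketch (taking the on-campus engineering count as 120, consistent with the stated answer):

```python
# Conditional probability: P(on campus | engineering major)
on_campus_engineering = 120
engineering_total = 920
p_on_given_eng = on_campus_engineering / engineering_total  # about 0.13

# Unconditional probability: P(on campus)
on_campus_total = 12_000
students_total = 20_000
p_on = on_campus_total / students_total  # 0.60

# A large gap between the two probabilities indicates association.
associated = abs(p_on_given_eng - p_on) > 0.01
```

Since 0.13 is far below 0.60, knowing a student is an engineering major changes the chance of living on campus, so the events are associated.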
Hydraulic variable orifice created by circular tube and round insert - MATLAB Hydraulic variable orifice created by circular tube and round insert The Annular Orifice block models annular leakage in a fully developed laminar flow created by a circular tube and a round insert in an isothermal liquid network. The insert can be located off-center from the tube by an eccentricity value. The flow rate is computed using the Hagen-Poiseuille equation (see [1]): q=\frac{\pi R{\left(R-r\right)}^{3}}{6\nu \rho L}·\left(1+\frac{3}{2}{\epsilon }^{2}\right)·\Delta p where \epsilon =\frac{e}{R-r} is the eccentricity ratio, R is the orifice radius, r is the insert radius, L is the overlap length, and \Delta p={p}_{\text{A}}-{p}_{\text{B}} is the pressure differential. Use this block to simulate leakage paths in plungers, valves, and cylinders. A positive signal at the physical signal port S increases or decreases the overlap, depending on the value of the parameter Orifice orientation. S — Physical signal port that controls the insert displacement, m Physical signal port that controls the insert displacement. Orifice radius — Radius of the tube The radius of the tube. Insert radius — Radius of the insert 0.0098 m (default) | positive scalar The radius of the insert. Eccentricity — Distance between the central axes of the insert and the tube The distance between the central axes of the insert and the tube. The parameter can be a positive value, smaller than the difference between the radius of the tube and the radius of the insert, or equal to zero for a coaxial configuration. Initial length — Initial overlap between the tube and the insert Initial overlap between the tube and the insert. The parameter must be positive. The value of the initial length does not depend on the orifice orientation. Orifice orientation — Specifies the effect of the control signal on the orifice overlap Positive signal increases overlap (default) | Negative signal increases overlap Specifies the effect of the control signal on the orifice overlap. [1] Noah D.
Manring, Hydraulic Control Systems, John Wiley & Sons, 2005 Constant Area Hydraulic Orifice | Fixed Orifice | Orifice with Variable Area Round Holes | Orifice with Variable Area Slot | Variable Area Hydraulic Orifice | Variable Orifice
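The Hagen-Poiseuille relation above is straightforward to evaluate numerically. A sketch (the function name is ours; SI units assumed throughout):

```python
import math

def annular_leakage_flow(R, r, L, nu, rho, e, dp):
    """Flow rate q = pi*R*(R-r)^3 / (6*nu*rho*L) * (1 + 1.5*eps^2) * dp,
    with eccentricity ratio eps = e / (R - r)."""
    eps = e / (R - r)
    return math.pi * R * (R - r) ** 3 / (6 * nu * rho * L) * (1 + 1.5 * eps ** 2) * dp

# A fully eccentric insert (e = R - r, so eps = 1) passes 2.5x the concentric flow:
q_concentric = annular_leakage_flow(0.01, 0.0098, 0.05, 1e-5, 850.0, 0.0, 1e5)
q_eccentric = annular_leakage_flow(0.01, 0.0098, 0.05, 1e-5, 850.0, 0.0002, 1e5)
```

The 2.5x factor comes directly from the (1 + 3/2 ε²) term, which is why eccentricity matters so much for leakage estimates.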
Log unconditional probability density for discriminant analysis classifier - MATLAB The unconditional density is the sum over the K classes, P\left(x\right)=\sum _{k=1}^{K}P\left(x,k\right), where each class-conditional density is multivariate normal, P\left(x|k\right)=\frac{1}{{\left({\left(2\pi \right)}^{d}|{\Sigma }_{k}|\right)}^{1/2}}\mathrm{exp}\left(-\frac{1}{2}\left(x-{\mu }_{k}\right){\Sigma }_{k}^{-1}{\left(x-{\mu }_{k}\right)}^{T}\right), with |{\Sigma }_{k}| the determinant and {\Sigma }_{k}^{-1} the inverse of the covariance matrix {\Sigma }_{k} of class k.
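In one dimension (d = 1, scalar Σ_k) the log unconditional density can be evaluated with a numerically stable log-sum-exp. A sketch (function names are ours; class priors P(k) are passed in explicitly):

```python
import math

def log_gaussian_1d(x, mu, var):
    """log N(x; mu, var) for a scalar Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def log_unconditional(x, priors, mus, variances):
    """log P(x) = log sum_k P(k) * P(x|k), computed via log-sum-exp
    so that very small densities do not underflow."""
    logs = [math.log(p) + log_gaussian_1d(x, m, v)
            for p, m, v in zip(priors, mus, variances)]
    top = max(logs)
    return top + math.log(sum(math.exp(l - top) for l in logs))
```

Working in log space is what makes the "log unconditional" form useful: the per-class exponentials can underflow long before their logarithms do.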
Deltoidal_icositetrahedron Knowpia (rotating and 3D model) Conway notation oC or deC Symmetry group Oh, BC3, [4,3], *432 Dual polyhedron rhombicuboctahedron In geometry, a deltoidal icositetrahedron (also a trapezoidal icositetrahedron, tetragonal icosikaitetrahedron,[1] tetragonal trisoctahedron[2] and strombic icositetrahedron) is a Catalan solid. Its dual polyhedron is the rhombicuboctahedron. D. i. as artwork and die D. i. projected onto cube and octahedron in Perspectiva Corporum Regularium Dyakis dodecahedron crystal model and projection onto an octahedron Cartesian coordinates for a suitably sized deltoidal icositetrahedron centered at the origin are: (±1, 0, 0), (0, ±1, 0), (0, 0, ±1) (0, ±1/2√2, ±1/2√2), (±1/2√2, 0, ±1/2√2), (±1/2√2, ±1/2√2, 0) (±(2√2+1)/7, ±(2√2+1)/7, ±(2√2+1)/7) The long edges of this deltoidal icositetrahedron have length √(2-√2) ≈ 0.765367. The 24 faces are kites.[3] The short and long edges of each kite are in the ratio 1:(2 − 1/√2) ≈ 1:1.292893... If its smallest edges have length a, its surface area and volume are {\displaystyle {\begin{aligned}A&=6{\sqrt {29-2{\sqrt {2}}}}\,a^{2}\\V&={\sqrt {122+71{\sqrt {2}}}}\,a^{3}\end{aligned}}} The kites have three equal acute angles with value {\displaystyle \arccos({\frac {1}{2}}-{\frac {1}{4}}{\sqrt {2}})\approx 81.578\,941\,881\,85^{\circ }} and one obtuse angle (between the short edges) with value {\displaystyle \arccos(-{\frac {1}{4}}-{\frac {1}{8}}{\sqrt {2}})\approx 115.263\,174\,354\,45^{\circ }} Occurrences in nature and culture The deltoidal icositetrahedron is a crystal habit often formed by the mineral analcime and occasionally garnet. The shape is often called a trapezohedron in mineral contexts, although in solid geometry that name has another meaning. The deltoidal icositetrahedron has three symmetry positions, all centered on vertices: The solid's projection onto a cube divides its squares into quadrants. The projection onto an octahedron divides its triangles into kite faces.
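The metric data above can be checked numerically; in particular, the four kite angles must sum to 360°, as in any quadrilateral:

```python
import math

sqrt2 = math.sqrt(2)
acute = math.degrees(math.acos(0.5 - sqrt2 / 4))     # three equal acute angles
obtuse = math.degrees(math.acos(-0.25 - sqrt2 / 8))  # one obtuse angle

a = 1.0                                  # shortest edge length
area = 6 * math.sqrt(29 - 2 * sqrt2) * a**2
volume = math.sqrt(122 + 71 * sqrt2) * a**3
edge_ratio = 2 - 1 / sqrt2               # long : short edge ratio
```

Evaluating the arccos expressions reproduces the quoted 81.578941...° and 115.263174...° values, and 3 × acute + obtuse recovers 360° to machine precision.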
In Conway polyhedron notation this represents an ortho operation to a cube or octahedron. The solid (dual of the small rhombicuboctahedron) is similar to the disdyakis dodecahedron (dual of the great rhombicuboctahedron). The main difference is that the latter also has edges between the vertices on 3- and 4-fold symmetry axes (between yellow and red vertices in the images below). icositetrahedron Disdyakis dodecahedron Dyakis dodecahedron Tetartoid Dyakis dodecahedron A variant with pyritohedral symmetry is called a dyakis dodecahedron[4][5] or diploid.[6] It is common in crystallography. It can be created by enlarging 24 of the 48 faces of the disdyakis dodecahedron. The tetartoid can be created by enlarging 12 of its 24 faces. [7] The great triakis octahedron is a stellation of the deltoidal icositetrahedron. The deltoidal icositetrahedron is one of a family of duals to the uniform polyhedra related to the cube and regular octahedron. When projected onto a sphere (see right), it can be seen that the edges make up the edges of an octahedron and cube arranged in their dual positions. It can also be seen that the threefold corners and the fourfold corners can be made to have the same distance to the center. In that case the resulting icositetrahedron will no longer have a rhombicuboctahedron for a dual, since for the rhombicuboctahedron the centers of its squares and its triangles are at different distances from the center. This polyhedron is topologically related as a part of a sequence of deltoidal polyhedra with face figure (V3.4.n.4), and continues as tilings of the hyperbolic plane. These face-transitive figures have (*n32) reflectional symmetry. Tetrakis hexahedron, another 24-face Catalan solid which looks a bit like an overinflated cube. "The Haunter of the Dark", a story by H.P. Lovecraft, whose plot involves this figure ^ Conway, Symmetries of Things, p.284–286 ^ "Keyword: "forms" | ClipArt ETC". ^ "Kite". Retrieved 6 October 2019.
^ Isohedron 24k ^ The Isometric Crystal System ^ The 48 Special Crystal Forms ^ Both are indicated in the two crystal models in the top right corner of this photo. A visual demonstration can be seen here and here. Wenninger, Magnus (1983), Dual Models, Cambridge University Press, doi:10.1017/CBO9780511569371, ISBN 978-0-521-54325-5, MR 0730208 (The thirteen semiregular convex polyhedra and their duals, Page 23, Deltoidal icositetrahedron) The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, ISBN 978-1-56881-220-5 [1] (Chapter 21, Naming the Archimedean and Catalan polyhedra and tilings, page 286, tetragonal icosikaitetrahedron) Eric W. Weisstein, Deltoidal icositetrahedron (Catalan solid) at MathWorld.
Remember that the radian measure of a central angle of a circle is the ratio of the arc length to the radius (see problem 10-55). What is the radian measure for a 45-degree central angle on a circle with radius 5 cm? What is the radian measure for a 45-degree central angle on a circle with radius 1 cm? Answer in terms of π. Both answers in part a should be the same. The central angle of a circle has a radian measure of \frac { \pi } { 3 }. What is the measure of the central angle of the sector in degrees? 60º
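The first two parts reduce to the same degree-to-radian conversion; in LaTeX form:

```latex
\theta = 45^\circ \cdot \frac{\pi}{180^\circ} = \frac{\pi}{4}\ \text{radians}.
```

The radius does not appear in the conversion: the arc length s scales with r, so the ratio θ = s/r is the same for the 5 cm and 1 cm circles, which is why both answers in part (a) agree.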
4.5 Tomography artifacts and algebraic reconstruction - Biomedical Imaging [Book] Biomedical Imaging by Tim Salditt, Timo Aspelmeier, Sebastian Aeffner This brings us to the interesting question of the angular sampling required for 3dRT. Using the same arguments of Nyquist sampling as previously for 2dRT, the necessary number of projections can be estimated to [47] {N}_{p}\simeq \frac{\text{π}}{\sqrt{3}}{N}_{r}^{2},\phantom{\rule{2em}{0ex}}\left(4.151\right) where Nr is the number of resolution elements in the reconstruction volume. In contrast to 2dRT, where the number of projections scales linearly with Nr, the relation is quadratic for 3dRT. However, as the 3dRT scheme integrates over planes, more signals are summed up, and the accumulation time per projection in 3dRT can be reduced accordingly. Provided there will be future technical improvements, in particular sufficiently fast motor movement and short detector readout, the total ...
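Equation (4.151) makes the quadratic sampling cost of 3dRT concrete. A quick sketch of the estimate (the function name is ours):

```python
import math

def projections_3drt(n_r: int) -> int:
    """Estimated number of projections N_p ~ (pi / sqrt(3)) * N_r^2,
    following Eq. (4.151), rounded up to an integer."""
    return math.ceil(math.pi / math.sqrt(3) * n_r ** 2)

# Doubling the resolution roughly quadruples the required projections:
n_128 = projections_3drt(128)
n_256 = projections_3drt(256)
```

This is the quadratic scaling the text contrasts with the linear scaling of 2dRT.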
5.3.3 Radiation damage to DNA - Biomedical Imaging [Book] {\text{H}}_{2}\mathrm{O}\stackrel{\text{ion.rad.}}{\to }\text{H}\cdot +\text{OH}\cdot \phantom{\rule{2em}{0ex}}{\text{H}}_{2}\mathrm{O}\stackrel{\text{ion.rad.}}{\to }{\text{H}}_{2}{\mathrm{O}}^{+}+{\text{e}}_{\text{aq}}^{-}\phantom{\rule{2em}{0ex}}\left(5.78\right) This is immediately followed by further reactions, like {\text{H}}_{2}{\mathrm{O}}^{+}\to {\text{H}}^{+}+\text{OH}\cdot \phantom{\rule{2em}{0ex}}\text{OH}\cdot +\text{OH}\cdot \to {\text{H}}_{2}{\text{O}}_{2}\phantom{\rule{2em}{0ex}}{\text{O}}_{2}+{\text{e}}_{\text{aq}}^{-}\to {\text{O}}_{2}^{-}\phantom{\rule{2em}{0ex}}{\text{O}}_{2}^{-}+{\text{H}}_{2}\text{O}\to {\text{H}}_{2}{\mathrm{O}}_{2}+2{\text{OH}}^{-}+{\text{O}}_{2}\phantom{\rule{2em}{0ex}}\left(5.79\right) which produce so-called reactive oxygen species (ROS).
The hydroxyl radical OH· with an unpaired electron, hydrogen peroxide H2O2 and the superoxide anion {\mathrm{O}}_{2}^{-} are chemically highly reactive and can cause numerous modifications of biomolecules, including several types of DNA damage discussed below.45 In irradiation with photons and electrons, the indirect action of radiation via radicals and ROS created from water and molecular oxygen accounts for about 70% of radiation induced cell death, whereas with direct action of radiation, the immediate ...
6.2.2 The optical far field and the phase problem - Biomedical Imaging [Book]

\frac{\hbar^2}{2m}\nabla^2\Psi + V\Psi = 0, \qquad (6.40)

obtained for the variable replacement n^2k^2 \to (2m/\hbar^2)V, with the conventional symbols for the potential V, mass m and (reduced) Planck's constant \hbar.

Diffraction integrals

For use in biomedical imaging, we need formal solutions of the stationary wave equation, for example, explicit formulas to calculate how a field propagates through an object or in free space. This seems like a formidable task, and no generality can be expected. The solution depends on the precise 3D arrangement of the index of refraction n(\mathbf{r}) and the boundary conditions specifying the radiation going into and out of – say – a bounded region containing the object. In fact, explicit solutions without approximations can be expected only ...
Abstract: A new interpretation of the relativistic equation relating total, momentum, and mass energies is presented. With the aid of the familiar energy-relationship triangle, old and new interpretations are compared, and the key difference is emphasized: apparent relativity versus intrinsic relativity. Mass-to-energy conversion is then brought about by adopting a three-part strategy: 1) Make the motion relative to the universal space medium. This allows the introduction of the concept of intrinsic energy (total, kinetic, and mass energies) as counterpart to the apparent version. 2) Recognize that a particle's mass property diminishes with increase in speed. This means introducing the concept of intrinsic mass (which varies with intrinsic speed). 3) Impose a change in the particle's gravitational environment. Instead of applying an electromagnetic accelerating force or energy in order to alter the particle's total energy, there will simply be an environmental change. Thus, it is shown how to use relativity equations and relativistic motion, in a way that exploits the distinction between apparent and innate levels of reality, to explain the mass-to-energy-conversion mechanism. Moreover, the mechanism explains the 100-percent conversion of mass to energy, which, in turn, leads to an explanation of the mechanism driving astrophysical jets.

Keywords: Relativistic Mass Energy, Kinetic Energy, Momentum Energy, Total Energy, Mass-Energy Conversion, Intrinsic Mass, Terminal Neutron Star, Energy Emission Mechanism, Astrophysical Jets, DSSU Theory

The classical definition of momentum is

p = m\upsilon = m\frac{\Delta x}{\Delta t}.

Measured against the particle's proper time \Delta t_0, this becomes

p = m\frac{\Delta x}{\Delta t_0} = m\frac{\Delta x}{\Delta t}\,\frac{\Delta t}{\Delta t_0}.

The middle term (\Delta x/\Delta t) is just the particle's velocity \upsilon; and (\Delta t/\Delta t_0) is the velocity-dependent clock-time ratio 1/\sqrt{1-(\upsilon/c)^2}, or \gamma [2].
With these substitutions, the general definition for momentum, in vector form, is

p = m\upsilon\gamma. \quad \text{(momentum)} \quad (4)

The corresponding total energy is

E = E_0 + E_\text{kin} = mc^2\gamma, \quad \text{(total energy)} \quad (5)

and the energy-relationship triangle reads

E^2 = (mc^2)^2 + (pc)^2. \quad (7)

Another useful equation is one that relates total energy E and momentum p. It is derived by combining Equation (4), p = m\upsilon\gamma, and Equation (5), E = mc^2\gamma. From Equation (5) one obtains m = E/(c^2\gamma). Then, by substitution, Equation (4) becomes p = m\upsilon\gamma = \upsilon\,\dfrac{E}{c^2\gamma}\,\gamma, or simply,

p = \upsilon\frac{E}{c^2}. \quad (8)

For a particle at rest, E_\text{kin} = 0 and p = 0, so E^2 = (E_0 + E_\text{kin})^2 = (mc^2)^2 + (pc)^2 reduces to (E_0 + 0)^2 = (mc^2)^2 + 0, that is,

E_0 = mc^2. \quad (9)

Notice that this equation works for all particles and all velocities, with the understanding that anything with (or of) mass cannot travel at lightspeed. For any particular particle, the rest energy E_0 does not vary; it is an innate property. For particles with mass, m is constant and E_0 > 0. As an example, the rest energy of an electron is 0.511 million electron volts (MeV). For a particle with no mass, m is obviously zero and E_0 = 0. The photon is the patent example; it has no mass, and since it does not exist in a rest state, the photon's rest mass equals zero. Equation (9) is consistent with the reality in both cases. Note, however, what happens with the commonly misapplied expression E = mc^2 (here the symbol for total energy E is used instead of the correct one for rest energy). If the energy of a particle varies, as happens, then the mass m must, with the use of this form of the equation, also vary (the speed of light c, of course, always remains constant). With the terminology of this expression, a clear identification of a particle's rest energy is lost. Moreover, it conveys the implication that a photon possesses inertial mass!
For the massless photon, this expression predicts m = E_\gamma/c^2, where E_\gamma is the energy of the photon, energy that cannot be zero. Thus, it wrongly attributes a mass value to the photon. E_0 = mc^2 is the preferred and logically more consistent expression for rest energy.

In words, the energy-relationship triangle states

(\text{Total energy})^2 = (\text{Rest energy})^2 + (\text{Momentum energy})^2,

so that

E = E_0 + E_\text{kin} = \sqrt{(mc^2)^2 + (pc)^2} = (mc^2)\left(1 + \frac{(pc)^2}{(mc^2)^2}\right)^{1/2} = (mc^2)\left(1 + \frac{p^2}{(mc)^2}\right)^{1/2}.

For low speed, \frac{p^2}{(mc)^2} \ll 1. Then, with the application of the binomial expansion theorem,

E_0 + E_\text{kin} \approx (mc^2)\left(1 + \frac{1}{2}\frac{p^2}{(mc)^2} + \cdots\right) = mc^2 + \frac{p^2}{2m} + \cdots,

which recovers the Newtonian kinetic energy

E_\text{kin} \approx \frac{p^2}{2m} = \frac{1}{2}m\upsilon^2.

Equation (7) and Equation (8), E^2 = (mc^2)^2 + (pc)^2 and p = \upsilon E/c^2, can be used to prove that when m = 0 the velocity of the particle is c in any reference system. There is no rest frame for such "bodies". They have no rest energy; their total energy is purely kinetic [4].

"It is not good to introduce the concept of the mass M = m/\sqrt{1-\upsilon^2/c^2} of a moving body for which no clear definition can be given. It is better to introduce no other mass concept than the 'rest mass' m.
Instead of introducing M, it is better to mention the expression for the momentum and energy of a body in motion."

The intrinsic (medium-referenced) counterparts of the apparent quantities are defined analogously:

E_\text{int} = E_\text{int.mass} + E_\text{int.kin},

E_\text{int}^2 = (m_\text{int}c^2)^2 + (p_\text{int}c)^2, \qquad E_\text{int} = \sqrt{(m_\text{int}c^2)^2 + (p_\text{int}c)^2},

m_\text{int} = \frac{m_0}{\gamma_\text{int}} = \sqrt{1-(\upsilon_\text{int}/c)^2}\;m_0,

E_\text{int.kin} = \sqrt{(m_\text{int}c^2)^2 + (p_\text{int}c)^2} - m_\text{int}c^2.

For the astrophysical application, the photon emission velocity and the stellar surface inflow speed are

\upsilon_\text{photon} = c = \sqrt{\upsilon_\text{rot}^2 + \upsilon_\text{z}^2}, \qquad \upsilon_\text{surface.inflow} = \sqrt{\frac{2GM_{3.4\odot}}{R_\text{surface}}}.

Appendix (derivation of the energy-momentum relation). The definition of momentum:

p = m\upsilon\gamma. \quad (A1)

The definition of total energy:

E = E_0 + E_\text{kin} = mc^2\gamma. \quad (A2)

Squaring Equation (A1) gives

p^2 = m^2\upsilon^2\gamma^2 = m^2\upsilon^2\frac{1}{1-\upsilon^2/c^2} = m^2\upsilon^2\frac{c^2}{c^2-\upsilon^2},

so that p^2c^2 - p^2\upsilon^2 = m^2c^2\upsilon^2, i.e. \upsilon^2 m^2c^2 + \upsilon^2 p^2 = p^2c^2, and therefore

\frac{\upsilon^2}{c^2} = \frac{p^2}{(mc)^2 + p^2}.

Squaring Equation (A2) gives

E^2 = (mc^2)^2\gamma^2 = (mc)^2c^2\left(\left(1-\frac{\upsilon^2}{c^2}\right)^{-1/2}\right)^2 = (mc)^2c^2\left(1-\frac{p^2}{(mc)^2+p^2}\right)^{-1} = (mc)^2c^2\left(\frac{(mc)^2}{(mc)^2+p^2}\right)^{-1} = (mc)^2c^2\left(\frac{(mc)^2+p^2}{(mc)^2}\right),

hence

E^2 = (mc^2)^2 + (pc)^2.

Cite this paper: Ranzan, C.
(2019) Mass-to-Energy Conversion, the Astrophysical Mechanism. Journal of High Energy Physics, Gravitation and Cosmology, 5, 520-551. doi: 10.4236/jhepgc.2019.52030.
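The relations above are straightforward to check numerically. A Python sketch (not part of the paper; standard SI constants, with the electron as test particle) verifying Equations (4), (5), (7), (8) and the low-speed limit E_kin ≈ ½mυ²:

```python
import math

C = 299_792_458.0        # speed of light, m/s
M_E = 9.1093837e-31      # electron rest mass, kg
EV = 1.602176634e-19     # joules per electron volt

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def momentum(m, v):      # Eq. (4): p = m v gamma
    return m * v * gamma(v)

def total_energy(m, v):  # Eq. (5): E = m c^2 gamma
    return m * C ** 2 * gamma(v)

def kinetic_energy(m, v):  # E_kin = E - E0 = m c^2 (gamma - 1)
    return m * C ** 2 * (gamma(v) - 1.0)

# Rest energy of the electron in MeV (the 0.511 MeV quoted in the text).
e0_mev = M_E * C ** 2 / EV / 1e6

# Evaluate E and p at v = 0.6 c to test Eqs. (7) and (8).
v = 0.6 * C
E, p = total_energy(M_E, v), momentum(M_E, v)
```

At v = 0.6c the identity E² = (mc²)² + (pc)² holds to machine precision, as does p = υE/c²; at v = 0.01c the relativistic kinetic energy agrees with ½mυ² to within about 0.01%.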
Lemma 10.23.2 (00EO) — The Stacks project

Zariski-local properties of modules and algebras

Lemma 10.23.2. Let $R$ be a ring. Let $M$ be an $R$-module. Let $S$ be an $R$-algebra. Suppose that $f_1, \ldots, f_n$ is a finite list of elements of $R$ such that $\bigcup D(f_i) = \mathop{\mathrm{Spec}}(R)$; in other words, $(f_1, \ldots, f_n) = R$.

1. If each $M_{f_i} = 0$ then $M = 0$.
2. If each $M_{f_i}$ is a finite $R_{f_i}$-module, then $M$ is a finite $R$-module.
3. If each $M_{f_i}$ is a finitely presented $R_{f_i}$-module, then $M$ is a finitely presented $R$-module.
4. Let $M \to N$ be a map of $R$-modules. If $M_{f_i} \to N_{f_i}$ is an isomorphism for each $i$, then $M \to N$ is an isomorphism.
5. Let $0 \to M'' \to M \to M' \to 0$ be a complex of $R$-modules. If $0 \to M''_{f_i} \to M_{f_i} \to M'_{f_i} \to 0$ is exact for each $i$, then $0 \to M'' \to M \to M' \to 0$ is exact.
6. If each $R_{f_i}$ is Noetherian, then $R$ is Noetherian.
7. If each $S_{f_i}$ is a finite type $R$-algebra, so is $S$.
8. If each $S_{f_i}$ is of finite presentation over $R$, so is $S$.

Proof. We prove each of the parts in turn.

1. By Proposition 10.9.10 this implies $M_\mathfrak{p} = 0$ for all $\mathfrak{p} \in \mathop{\mathrm{Spec}}(R)$, so we conclude by Lemma 10.23.1.

2. For each $i$ take a finite generating set $X_i$ of $M_{f_i}$. Without loss of generality, we may assume that the elements of $X_i$ are in the image of the localization map $M \rightarrow M_{f_i}$, so we take a finite set $Y_i$ of preimages of the elements of $X_i$ in $M$. Let $Y$ be the union of these sets. This is still a finite set. Consider the obvious $R$-linear map $R^Y \rightarrow M$ sending the basis element $e_y$ to $y$. By assumption this map is surjective after localizing at an arbitrary prime ideal $\mathfrak{p}$ of $R$, so it is surjective by Lemma 10.23.1 and $M$ is finitely generated.

3. By (2) we have a short exact sequence
\[ 0 \rightarrow K \rightarrow R^n \rightarrow M \rightarrow 0. \]
Since localization is an exact functor and $M_{f_i}$ is finitely presented, we see that $K_{f_i}$ is finitely generated for all $1 \leq i \leq n$ by Lemma 10.5.3. By (2) this implies that $K$ is a finite $R$-module and therefore $M$ is finitely presented.

4. By Proposition 10.9.10 the assumption implies that the induced morphism on localizations at all prime ideals is an isomorphism, so we conclude by Lemma 10.23.1.

5. By Proposition 10.9.10 the assumption implies that the induced sequence of localizations at all prime ideals is short exact, so we conclude by Lemma 10.23.1.

6. We will show that every ideal of $R$ has a finite generating set. For this, let $I \subset R$ be an arbitrary ideal. By Proposition 10.9.12 each $I_{f_i} \subset R_{f_i}$ is an ideal. These are all finitely generated by assumption, so we conclude by (2).

7. For each $i$ take a finite generating set $X_i$ of $S_{f_i}$. Without loss of generality, we may assume that the elements of $X_i$ are in the image of the localization map $S \rightarrow S_{f_i}$, so we take a finite set $Y_i$ of preimages of the elements of $X_i$ in $S$. Let $Y$ be the union of these sets. This is still a finite set. Consider the algebra homomorphism $R[X_y]_{y \in Y} \rightarrow S$ induced by $Y$. Since it is an algebra homomorphism, the image $T$ is an $R$-submodule of the $R$-module $S$, so we can consider the quotient module $S/T$. By assumption, this is zero if we localize at the $f_i$, so it is zero by (1) and therefore $S$ is an $R$-algebra of finite type.

8. By the previous item, there exists a surjective $R$-algebra homomorphism $R[X_1, \ldots, X_n] \rightarrow S$. Let $K$ be the kernel of this map. This is an ideal in $R[X_1, \ldots, X_n]$, finitely generated in each localization at $f_i$.
Since the $f_i$ generate the unit ideal in $R$, they also generate the unit ideal in $R[X_1, \ldots, X_n]$, so an application of (2) finishes the proof. $\square$
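To make part (1) concrete, here is a toy instance (not from the Stacks project page): take $R = \mathbf{Z}$ with $f_1 = 2$ and $f_2 = 3$, so that $(f_1, f_2) = \mathbf{Z}$ and $D(2) \cup D(3) = \mathop{\mathrm{Spec}}(\mathbf{Z})$.

```latex
% Suppose M_2 = M_3 = 0. For any m in M, vanishing in both localizations
% means some powers of 2 and 3 kill m:
\exists\, a, b \geq 0 : \quad 2^a m = 0 \quad\text{and}\quad 3^b m = 0.
% Since 2^a and 3^b are coprime, write 1 = u\,2^a + v\,3^b with u, v \in \mathbf{Z}; then
m = u\,2^a m + v\,3^b m = 0,
% so M = 0, as part (1) asserts.
```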
Finite Element Modeling of the Left Atrium to Facilitate the Design of an Endoscopic Atrial Retractor | J. Biomech Eng. | ASME Digital Collection

S. R. Jernigan (NCSU Box 7910, Raleigh, NC 27695; e-mail: srjernig@ncsu.edu), G. D. Buckner, J. W. Eischen, and D. R. Cormier

Jernigan, S. R., Buckner, G. D., Eischen, J. W., and Cormier, D. R. (March 15, 2007). "Finite Element Modeling of the Left Atrium to Facilitate the Design of an Endoscopic Atrial Retractor." ASME. J Biomech Eng. December 2007; 129(6): 825–837. https://doi.org/10.1115/1.2801650

With the worldwide prevalence of cardiovascular diseases, much attention has been focused on simulating the characteristics of the human heart to better understand and treat cardiac disorders. The purpose of this study is to build a finite element model of the left atrium (LA) that incorporates detailed anatomical features and realistic material characteristics to investigate the interaction of heart tissue and surgical instruments. This model is used to facilitate the design of an endoscopically deployable atrial retractor for use in minimally invasive, robotically assisted mitral valve repair. Magnetic resonance imaging (MRI) scans of a pressurized explanted porcine heart were taken to provide a 3D solid model of the heart geometry, while uniaxial tensile tests of porcine left atrial tissue were conducted to obtain realistic material properties for noncontractile cardiac tissue. A finite element model of the LA was constructed using ANSYS™ Release 9.0 software and the MRI data. The Mooney–Rivlin hyperelastic material model was chosen to characterize the passive left atrial tissue; material constants were derived from tensile test data. Finite element analysis (FEA) models of a CardioVations Port Access™ retractor and a prototype endoscopic retractor were constructed to simulate interaction between each instrument and the LA.
These contact simulations were used to compare the quality of retraction between the two instruments and to optimize the design of the prototype retractor. Model accuracy was verified by comparing simulated cardiac wall deflections to those measured by MRI. FEA simulations revealed that peak forces of approximately 2.85 N and 2.46 N were required to retract the LA using the Port Access™ and prototype retractors, respectively. These forces varied nonlinearly with retractor blade displacement. Dilation of the atrial walls and rigid body motion of the chamber were approximately the same for both retractors. Finite element analysis is shown to be an effective tool for analyzing instrument/tissue interactions and for designing surgical instruments. The benefits of this approach to medical device design are significant when compared to the alternatives: constructing prototypes and evaluating them via animal or clinical trials.

Keywords: biological techniques, biological tissues, cardiology, endoscopes, finite element analysis, solid modelling, mitral valve repair, left atrium, retraction
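The Mooney–Rivlin model referenced above has a convenient closed form in uniaxial tension, the loading mode of the reported tensile tests. As a sketch (the standard result for an incompressible two-parameter Mooney–Rivlin solid; the constants below are illustrative placeholders, not the paper's fitted porcine values), the Cauchy stress at stretch λ is σ = 2(λ² − λ⁻¹)(C10 + C01/λ):

```python
def mr_uniaxial_stress(lam, c10, c01):
    """Cauchy stress (units of c10/c01) for an incompressible two-parameter
    Mooney-Rivlin material under uniaxial stretch lam."""
    return 2.0 * (lam ** 2 - 1.0 / lam) * (c10 + c01 / lam)

# Illustrative constants in kPa -- placeholders, NOT the paper's fitted values.
C10, C01 = 20.0, 10.0

# The undeformed state (lam = 1) is stress-free, and the response stiffens
# nonlinearly with stretch, as expected for passive soft tissue.
curve = {lam: mr_uniaxial_stress(lam, C10, C01) for lam in (1.0, 1.1, 1.2, 1.3)}
```

In a contact simulation of the kind described, constants like C10 and C01 are what the tensile-test fit supplies to the solver.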
Revision as of 22:41, 21 August 2015 by MathAdmin (talk | contribs) (→The Sum of the first n Cubes)

Summation Notation

Suppose we want to add the first 13 natural numbers,

1+2+3+4+5+6+7+8+9+10+11+12+13, \qquad \text{i.e.,} \qquad 1+2+\cdots+13.

In summation notation this sum is written

\sum_{i=1}^{13} i,

where the capital sigma (\Sigma) signals a sum, i is the index of summation, and the limits below and above the \Sigma indicate that i runs from 1 to 13. The terms of the sum are obtained by setting i=1, then i=2, then i=3, and so on up to 13:

\sum_{i=1}^{13} i = 1+2+3+4+5+6+7+8+9+10+11+12+13.

Further examples:

\sum_{i=1}^{5} i^2 = 1^2+2^2+3^2+4^2+5^2,

\sum_{i=n}^{2n} i = n+(n+1)+\cdots+(2n-1)+2n,

\sum_{i=1}^{n} i^3 = 1^3+2^3+3^3+\cdots+n^3.

Mathematical Induction

The natural numbers are

\text{The Natural Numbers} = \mathbb{N} = \{1,2,3,\ldots\} = \{1,\,1+1,\,1+1+1,\,1+1+1+1,\ldots\}.

A proof by induction has two steps: first verify the statement in a base case, usually n=1; then show that if the statement holds for n-1, it must also hold for n. Since the base case is true and each case implies the next, the statement holds for every natural number. Ordinary (weak) induction uses only the (n-1)th case to establish the nth case; in other situations, strong induction assumes that the conjecture is true for ALL cases of lower value than n, not merely the (n-1)th case.

The Sum of the First n Natural Numbers

The sum of the first n natural numbers is

\sum_{i=1}^{n} i = 1+2+\cdots+n = \frac{n(n+1)}{2}.

For the base case n=1,

\frac{n(n+1)}{2} = \frac{1(1+1)}{2} = 1,

which is correct. Now assume the formula holds for n-1, that is,

\sum_{i=1}^{n-1} i = \frac{(n-1)\left((n-1)+1\right)}{2} = \frac{(n-1)n}{2}.

Then

\sum_{i=1}^{n} i = \sum_{i=1}^{n-1} i + n = \frac{(n-1)n}{2} + n \quad \text{(by the induction assumption)} = \frac{n^2-n}{2} + \frac{2n}{2} = \frac{n^2-n+2n}{2} = \frac{n^2+n}{2} = \frac{n(n+1)}{2},

as claimed. \square

The Sum of the First n Squares

The sum of the first n squares is

\sum_{i=1}^{n} i^2 = 1^2+2^2+\cdots+n^2 = \frac{n(n+1)(2n+1)}{6}.

For the base case n=1,

\frac{n(n+1)(2n+1)}{6} = \frac{1(1+1)(2+1)}{6} = 1.

Assume the formula holds for n-1:

\sum_{i=1}^{n-1} i^2 = \frac{(n-1)\left((n-1)+1\right)\left(2(n-1)+1\right)}{6} = \frac{(n-1)n(2n-1)}{6} = \frac{2n^3-3n^2+n}{6}.

Then

\sum_{i=1}^{n} i^2 = \sum_{i=1}^{n-1} i^2 + n^2 = \frac{2n^3-3n^2+n}{6} + n^2 \quad \text{(by the induction assumption)} = \frac{2n^3-3n^2+n}{6} + \frac{6n^2}{6} = \frac{2n^3+3n^2+n}{6} = \frac{n(2n^2+3n+1)}{6} = \frac{n(n+1)(2n+1)}{6}. \square

The Sum of the First n Cubes

The sum of the first n cubes is

\sum_{i=1}^{n} i^3 = 1^3+2^3+\cdots+n^3 = \frac{n^2(n+1)^2}{4}.

For the base case n=1,

\frac{n^2(n+1)^2}{4} = \frac{1^2(1+1)^2}{4} = 1.

Assume the formula holds for n-1:

\sum_{i=1}^{n-1} i^3 = \frac{(n-1)^2\left((n-1)+1\right)^2}{4} = \frac{(n-1)^2 n^2}{4}.

Then

\sum_{i=1}^{n} i^3 = \sum_{i=1}^{n-1} i^3 + n^3 = \frac{(n-1)^2 n^2}{4} + n^3 \quad \text{(by the induction assumption)} = \frac{n^4-2n^3+n^2}{4} + \frac{4n^3}{4} = \frac{n^4+2n^3+n^2}{4} = \frac{n^2(n^2+2n+1)}{4} = \frac{n^2(n+1)^2}{4}. \square
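The three closed forms proved above are easy to confirm by brute force; a short Python check (not part of the original page):

```python
def sum_powers(n, p):
    """Direct evaluation of 1^p + 2^p + ... + n^p."""
    return sum(i ** p for i in range(1, n + 1))

def closed_forms(n):
    """The closed forms proved by induction (exact integer arithmetic)."""
    return (n * (n + 1) // 2,
            n * (n + 1) * (2 * n + 1) // 6,
            n * n * (n + 1) * (n + 1) // 4)

# Agreement for every n up to 500.
ok = all(
    (sum_powers(n, 1), sum_powers(n, 2), sum_powers(n, 3)) == closed_forms(n)
    for n in range(1, 501)
)
```

A pleasant corollary visible in the formulas: the sum of the first n cubes is the square of the sum of the first n natural numbers, since n²(n+1)²/4 = (n(n+1)/2)².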
Dart Physics – Tom Shafer

July 9, 2017 at 10 AM

We have a dart board at the office and have a good time lofting darts in nice, looping arcs. A recent project pushed me back into physics and led me to consider just how sensitive the dart-throwing motion is to small imperfections in angle; how precise do we need to be? Calculating those angular perturbations ( dy/d\theta in the coordinates I'll set up next) requires the kinematics of the problem and provides an opportunity to solve the equations numerically with R.

Our office dart board is fixed at a location \mathbf{r}_d = \langle x_d, y_d \rangle = \langle L, 0\rangle, and a thrower (darter? player?) stands with their throwing elbow at a location \mathbf{r}_t = \langle 0, -r \rangle, where r is the player's forearm length and things are arranged so a "perfect" 90^\circ release starts from a height y=0. Recall that the kinematics derive from Newton's second law. Assuming a constant gravitational acceleration g near the earth's surface, we want to solve the equations

\ddot{x} = 0, \qquad \ddot{y} = -g.

Because there are no forces acting left to right (except for the neglected air drag), the x equation has no acceleration term. With the above equations come initial conditions:

x_0 = r\cos\theta, \qquad y_0 = r(\sin\theta - 1),

with \theta the release angle – the angle of the player's forearm at the moment of the throw. The angle \theta is measured conventionally (and unintuitively here) from the forward direction, so a throw vertically upwards would have \theta = 180^\circ. The initial velocity \mathbf{v}_0 is a little trickier but possible with a bit of trigonometry and calculus; the magnitude is just v = r\,d\theta/dt \equiv r\omega, and one can show that the release direction is tangent to the forearm's arc:

v_0^{(x)} = r\omega\sin\theta, \qquad v_0^{(y)} = -r\omega\cos\theta.

To find a solution wherein we hit the bullseye, we fix x(t_f) = L and y(t_f) = 0, with t_f the time to target. These definitions specify the solution after some tedious algebra.
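Before grinding through that algebra, the boundary-value problem can be solved numerically. A Python sketch (not from the post, which uses R; it assumes the tangential release components v0x = rω sin θ and v0y = −rω cos θ implied by the geometry above, and bisects for the ω that lands the dart at y = 0):

```python
import math

G = 9.8                      # m/s^2
L = 10 * 12 * 2.54 / 100     # board distance: 10 ft in metres
R = 14 * 2.54 / 100          # forearm length: 14 in in metres

def flight(theta, omega):
    """Time to reach x = L and the dart's height there."""
    x0, y0 = R * math.cos(theta), R * (math.sin(theta) - 1.0)
    vx, vy = R * omega * math.sin(theta), -R * omega * math.cos(theta)
    tf = (L - x0) / vx                    # x(t) = x0 + vx t
    return tf, y0 + vy * tf - 0.5 * G * tf ** 2

def solve_omega(theta, lo=0.1, hi=500.0):
    """Bisect for the angular velocity that hits the bullseye (y(tf) = 0)."""
    miss = lambda w: flight(theta, w)[1]
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if miss(lo) * miss(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For a release angle of θ = 100° this gives ω ≈ 27 s⁻¹, i.e. a hand speed rω of roughly 9.5 m/s, which is a plausible dart throw.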
First, the x equation yields time as a simple function of release angle \theta and angular velocity \omega (or linear velocity v = r\omega):

t_f = \frac{L - r\cos\theta}{r\omega\sin\theta}.

Intuitively, the time of flight is the ratio of the distance traveled in the x direction to v_0^{(x)}. Substituting t_f into the y equation generates a more complicated expression for the angular velocity:

\omega = \frac{L - r\cos\theta}{r}\sqrt{\frac{g}{2\sin\theta\left[r\sin\theta(\sin\theta-1) - (L - r\cos\theta)\cos\theta\right]}}.

This one's harder to interpret, but it has the correct units (s^{-1}) and exhibits interesting divergences: \omega \to \pm\infty as \theta \to 0, \pi/2, \pi, etc. Note, too, that the \pi/2 divergence differs from its \pi counterpart in that different parts of the denominator go to zero (\sin\theta vs. the term in square brackets).

With the solution in hand we can also find the maximum height of the dart during its flight. The path is a function of \theta and \omega, and the maximum height is an extremum of y(t): 0 = \frac{dy}{dt} = v_0^{(y)} - gt. A check of the second derivative confirms this is, in fact, a maximum, and

y_\text{max} = y_0 + \frac{\left(v_0^{(y)}\right)^2}{2g}.

The maximum depends on the starting height and also varies with the x distance to be crossed by the dart.

Finally, we can work out an answer to my original question: how sensitive is the solution y(t) to perturbations in angle, \delta\theta? The answer comes from perturbing the accurate solution \langle x(t_f) = L, y(t_f) = 0 \rangle by computing the derivative dy/d\theta at fixed \omega – the angular frequency of the accurate solution. This quantity can be interpreted as the vertical distance by which the dart would miss for a small (e.g., \lesssim 1 degree) imperfection in the release angle. We could go on to ask similar questions about imperfections in velocity.

Now that we have the general solutions, let's take a look at the results numerically. Start by making a few assignments:

G_EARTH <- 9.8
L <- 10*12 * 2.54/100  # 10 feet
r <- 14 * 2.54/100     # 14 inch forearm
C <- 6*12 * 2.54/100   # 6 feet of ceiling clearance

C measures the ceiling height relative to the bullseye, r is the forearm length, and L is the horizontal distance between the player and the board.
Here I've estimated L = 10 feet, r = 14 inches, and C = 6 feet of clearance. The gravitational constant is, as always, 9.8 meters per second squared. With the constants fixed, I've implemented the kinematic equations as functions in R and computed them on a 10^{-4} radian grid spanning \theta \in [0, \pi]:

calc <- tibble(
  theta = seq(pi, 0, -pi*1e-4),
  theta_deg = theta * 180/pi,
  omega = omega(L = L, r = r, theta = theta),
  vel = r*omega,
  vel_mph = vel * 3600 * 100 / 2.54 / 12 / 5280,
  time = tt(L = L, r = r, theta = theta),
  y0 = r*(sin(theta) - 1),
  ymax = ymax(L = L, r = r, theta = theta),
  yratio = ymax/y0,
  ymax_ft = ymax * 100 / 2.54 / 12,
  hit_ceil = ymax >= C,
  dy_dtheta = dydtheta(L = L, r = r, theta = theta)
)

The results can be divided into two main categories: "standard" vs. "extreme" solutions. I've defined standard solutions as those for which (i) the dart doesn't hit the ceiling and (ii) \theta \ge 100^\circ, to separate out the slightly crazier results. We find, intuitively, that the throw velocity must increase as the release angle approaches 90 degrees. The maximum height and air travel time always decrease as the trajectory flattens, but interestingly the required throw velocity has a minimum value. (In the figures I'm plotting vs. \varphi = 180^\circ - \theta, so 0^\circ corresponds to a vertical upwards throw and 180^\circ to a vertical downwards throw.) For \varphi < \varphi_\mathrm{min} we have to throw the dart harder because much of its velocity is "wasted" traveling vertically. On the other hand, for \varphi > \varphi_\mathrm{min} we need more velocity to make it to the target before gravity can pull the dart too far. We could take a derivative to find \varphi_\mathrm{min}, but since we're already here we can just find the minimum numerically:

opt_value <- optimize(function(.x) omega(L, r, .x)^2, c(pi/2, pi))
180 - opt_value$minimum * 180/pi

The minimum is not 45^\circ, but slightly larger (a flatter trajectory).
Crazier Solutions

The first set of results not yet considered covers solutions approaching a perfectly horizontal throw. The travel time and maximum height again decrease (the maximum height decreases roughly linearly), but the throw velocity diverges: as \varphi \to 90^\circ the dart needs v \to \infty to hit the bullseye before gravity pulls it off line. Finally, there's a slice of the solutions for which we need more vertical headroom: once again we get crazy behavior with velocity, but now the time and maximum height are both extremely large – these plots are on a logarithmic scale. These solutions basically correspond to throwing a dart upwards and still managing to hit the bullseye. For some of these solutions our constant-g approximation would be in big trouble!

Margin of Error in Release Angle

Finally, what is our margin of error on dart throws? We can check by plotting \delta y = dy/d\theta\,\delta\theta with \delta\theta = 1^\circ. This plot suggests two things:

There is an angle near 46 degrees where throws are less affected by errors in angle.

The dart can miss by quite a bit for angles far from 46^\circ: on the order of meters (this seems like quite a lot).

For the setup we considered, it turns out that there is an angle near \varphi = 46^\circ for which the required velocity is a minimum. The dart throw is also most forgiving near that angle. There is also an interesting class of solutions that require fast dart throws, some of which would put the dart into orbit! This was a fun exercise – I was able to work the problem and do the computation in an evening.

To simplify the mental model, I've actually mirrored our setup; we actually throw from right to left. ↩︎

I highly recommend the tidyverse suite of R packages to simplify creating and working with data frames. ↩︎
Indium(III) chloride – Knowpia

Indium(III) chloride is the chemical compound with the formula InCl3. This salt is a white, flaky solid with applications in organic synthesis as a Lewis acid. It is also the most available soluble derivative of indium.[2]

Identifiers: SMILES Cl[In](Cl)Cl; InChI=1S/3ClH.In/h3*1H;/q;;;+3/p-3; InChIKey PSCMQHVBLHHWTO-UHFFFAOYSA-K. Solubility in water: 195 g/100 mL (dissolution is exothermic); also soluble in THF and ethanol.

Being a relatively electropositive metal, indium reacts quickly with chlorine to give the trichloride. Indium trichloride is very soluble and deliquescent.[3] A synthesis has been reported using an electrochemical cell in a mixed methanol–benzene solution.[4]

Like AlCl3 and TlCl3, InCl3 crystallizes as a layered structure consisting of a close-packed chloride arrangement containing layers of octahedrally coordinated In(III) centers,[5] a structure akin to that seen in YCl3.[6] In contrast, GaCl3 crystallizes as dimers containing Ga2Cl6.[6] Molten InCl3 conducts electricity,[5] whereas AlCl3 does not, as it converts to the molecular dimer Al2Cl6.[7]

InCl3 is a Lewis acid and forms complexes with donor ligands L: InCl3L, InCl3L2, InCl3L3. For example, with the chloride ion it forms tetrahedral InCl4−, trigonal bipyramidal InCl52−, and octahedral InCl63−.[5]

In diethyl ether solution, InCl3 reacts with lithium hydride, LiH, to form LiInH4. This unstable compound decomposes below 0 °C,[8] and is reacted in situ in organic synthesis as a reducing agent[9] and to prepare tertiary amine and phosphine complexes of InH3.[10]

Trimethylindium, InMe3, can be produced by reacting InCl3 in diethyl ether solution either with the Grignard reagent MeMgI or with methyllithium, LiMe.
Triethylindium can be prepared in a similar fashion with the Grignard reagent EtMgBr:[11]

InCl3 + 3 LiMe → Me3In·OEt2 + 3 LiCl

InCl3 + 3 MeMgI → Me3In·OEt2 + 3 MgClI

InCl3 + 3 EtMgBr → Et3In·OEt2 + 3 MgBrCl

InCl3 reacts with indium metal at high temperature to form the lower-valent indium chlorides In5Cl9, In2Cl3 and InCl.[5]

Catalyst in chemistry

Indium chloride is a Lewis acid catalyst in organic reactions such as Friedel-Crafts acylations and Diels-Alder reactions. As an example of the latter,[12] a multicomponent reaction of N,N'-dimethylbarbituric acid, benzaldehyde and ethyl vinyl ether proceeds at room temperature, with 1 mol% catalyst loading, in an acetonitrile-water solvent mixture. The first step is a Knoevenagel condensation between the barbituric acid and the aldehyde; the second step is an inverse electron-demand Diels-Alder reaction. With the catalyst, the reported chemical yield is 90% and the percentage of trans isomer is 70%; without the catalyst, the yield drops to 65% with 50% trans product.

^ a b c d "Indium(III) Chloride". American Elements. Retrieved May 15, 2019. ^ Araki, S.; Hirashita, T. "Indium trichloride" in Encyclopedia of Reagents for Organic Synthesis (Ed: L. Paquette) 2004, J. Wiley & Sons, New York. doi:10.1002/047084289X. ^ Indium Trichloride ^ Habeeb, J. J.; Tuck, D. G. "Electrochemical Synthesis of Indium(III) Complexes" Inorganic Syntheses, 1979, volume XIX, ISBN 0-471-04542-X. ^ a b c d Egon Wiberg, Arnold Frederick Holleman (2001) Inorganic Chemistry, Elsevier. ISBN 0123526515. ^ a b Wells, A.F. Structural Inorganic Chemistry, Oxford: Clarendon Press, 1984. ISBN 0-19-855370-6. ^ Anthony John Downs (1993). Chemistry of aluminium, gallium, indium, and thallium. Springer. ISBN 0-7514-0103-X. ^ Main Group Metals in Organic Synthesis, vol. 1, ed.
Hisashi Yamamoto, Koichiro Oshima, Wiley VCH, 2004, ISBN 3527305084. ^ The Group 13 Metals Aluminium, Gallium, Indium and Thallium: Chemical Patterns and Peculiarities, Simon Aldridge, Anthony J. Downs, Wiley, 2011, ISBN 978-0-470-68191-6. ^ Main Group Compounds in Inorganic Syntheses, vol. 31, by Schultz, Neumayer, Marks; ed. Alan H. Cowley, John Wiley & Sons, Inc., 1997, ISBN 0471152889. ^ Prajapati, D.; Gohain, M. "An efficient synthesis of novel pyrano[2,3-d]- and furopyrano[2,3-d]pyrimidines via indium-catalyzed multicomponent domino reaction". Beilstein Journal of Organic Chemistry 2006, 2:11. doi:10.1186/1860-5397-2-11.
Revision as of 22:41, 21 August 2015 by MathAdmin (talk | contribs) (→‎The Sum of the first n Squares)

Sums of consecutive integers arise often enough that a compact notation is useful. For example, the sum

1+2+3+4+5+6+7+8+9+10+11+12+13,

that is, 1+2+\cdots+13, can be written as

\sum_{i=1}^{13} i,

where the capital sigma (\Sigma) indicates a sum, i is the index of summation, and the numbers below and above the \Sigma say that i runs from 1 to 13. We add the terms obtained for i=1, i=2, i=3, and so on, up to 13:

\sum_{i=1}^{13} i = 1+2+3+4+5+6+7+8+9+10+11+12+13.

Further examples of the notation:

\sum_{i=1}^{5} i^2 = 1^2+2^2+3^2+4^2+5^2,

\sum_{i=n}^{2n} i = n+(n+1)+\cdots+(2n-1)+2n,

\sum_{i=1}^{n} i^3 = 1^3+2^3+3^3+\cdots+n^3.

The natural numbers \mathbb{N} are built up by repeatedly adding 1:

\text{The Natural Numbers} = \mathbb{N} = \{1,2,3,\ldots\} = \{1,\,1+1,\,1+1+1,\,1+1+1+1,\ldots\}.

Induction mirrors this construction: we verify a statement for the base case 1, and then show that whenever it holds for n-1, it also holds for n; this establishes the n^{\mathrm{th}} case from the (n-1)^{\mathrm{th}} case. In such situations, strong induction instead assumes that the conjecture is true for ALL cases of lower value than n.

The Sum of the first n Natural Numbers

The sum of the first n natural numbers is

\sum_{i=1}^{n} i = 1+2+\cdots+n = \frac{n(n+1)}{2}.

For the base case n=1, the right-hand side is \frac{n(n+1)}{2} = \frac{1(1+1)}{2} = 1, which equals the one-term sum. Now assume the formula holds for n-1:

\sum_{i=1}^{n-1} i = \frac{(n-1)\left((n-1)+1\right)}{2} = \frac{(n-1)n}{2}.

Then

\begin{array}{rcl}\sum_{i=1}^{n} i &=& \sum_{i=1}^{n-1} i + n\\ &=& \frac{(n-1)n}{2}+n \qquad\qquad \mbox{(by the induction assumption)}\\ &=& \frac{n^2-n}{2}+\frac{2n}{2}\\ &=& \frac{n^2-n+2n}{2}\\ &=& \frac{n^2+n}{2}\\ &=& \frac{n(n+1)}{2},\end{array}

which completes the induction. \square

The Sum of the first n Squares

The sum of the squares of the first n natural numbers is

\sum_{i=1}^{n} i^2 = 1^2+2^2+\cdots+n^2 = \frac{n(n+1)(2n+1)}{6}.

For the base case n=1, \frac{n(n+1)(2n+1)}{6} = \frac{1(1+1)(2+1)}{6} = 1. Assume the formula holds for n-1:

\sum_{i=1}^{n-1} i^2 = \frac{(n-1)\left((n-1)+1\right)\left(2(n-1)+1\right)}{6} = \frac{(n-1)n(2n-1)}{6} = \frac{2n^3-3n^2+n}{6}.

Then

\begin{array}{rcl}\sum_{i=1}^{n} i^2 &=& \sum_{i=1}^{n-1} i^2 + n^2\\ &=& \frac{2n^3-3n^2+n}{6}+n^2 \qquad\qquad \mbox{(by the induction assumption)}\\ &=& \frac{2n^3-3n^2+n}{6}+\frac{6n^2}{6}\\ &=& \frac{2n^3+3n^2+n}{6}\\ &=& \frac{n(2n^2+3n+1)}{6}\\ &=& \frac{n(n+1)(2n+1)}{6},\end{array}

which completes the induction. \square

The Sum of the first n Cubes

The sum of the cubes of the first n natural numbers is

\sum_{i=1}^{n} i^3 = 1^3+2^3+\cdots+n^3 = \frac{n^2(n+1)^2}{4}.

For the base case n=1, \frac{n^2(n+1)^2}{4} = \frac{1^2(1+1)^2}{4} = 1. Assume the formula holds for n-1:

\sum_{i=1}^{n-1} i^3 = \frac{(n-1)^2\left((n-1)+1\right)^2}{4} = \frac{(n-1)^2 n^2}{4}.

Then

\begin{array}{rcl}\sum_{i=1}^{n} i^3 &=& \sum_{i=1}^{n-1} i^3 + n^3\\ &=& \frac{(n-1)^2 n^2}{4}+n^3 \qquad\qquad \mbox{(by the induction assumption)}\\ &=& \frac{n^4-2n^3+n^2}{4}+\frac{4n^3}{4}\\ &=& \frac{n^4+2n^3+n^2}{4}\\ &=& \frac{n^2(n^2+2n+1)}{4}\\ &=& \frac{n^2(n+1)^2}{4},\end{array}

which completes the induction. \square
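The three closed forms proved above can also be checked numerically (a quick sketch, not part of the original page):

```python
def sum_powers(n, p):
    """Direct sum 1^p + 2^p + ... + n^p."""
    return sum(i ** p for i in range(1, n + 1))

# Compare the direct sums against the closed forms for many values of n.
# Integer division is exact here because each closed form is an integer.
for n in range(1, 50):
    assert sum_powers(n, 1) == n * (n + 1) // 2
    assert sum_powers(n, 2) == n * (n + 1) * (2 * n + 1) // 6
    assert sum_powers(n, 3) == n ** 2 * (n + 1) ** 2 // 4
```

Such a check is no substitute for the induction proofs, but it catches transcription errors in the formulas instantly.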
Total compute TOP500 supercomputers June '30 | Metaculus What will the sum of the levels of performance (in exaFLOPS) of all 500 supercomputers in the TOP500 be according to their June 2030 list? This question resolves as the sum of the performance (in exaFLOPS, i.e. 10^{18} FLOPS) of all supercomputers listed on the June 2030 TOP500 list.
Equality (mathematics) - Wikipedia @ WordDisk Relation with equivalence, congruence, and isomorphism An equation x=y asserts that the two sides denote the same object. An identity, such as (x+1)^2 = x^2+2x+1, asserts an equality that holds for every value of its variables. Two sets given in set-builder notation are equal, \{x\mid P(x)\} = \{x\mid Q(x)\}, exactly when their defining predicates are logically equivalent: P(x)\Leftrightarrow Q(x), where P(x) and Q(x) are the membership conditions of the two sets. This article uses material from the Wikipedia article Equality (mathematics), and is written by contributors. Text is available under a CC BY-SA 4.0 International License; additional terms may apply. Images, videos and audio are available under their respective licenses.
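The set-equality criterion can be illustrated concretely over a finite domain (a sketch; the predicates P and Q are made-up examples):

```python
# Two set-builder definitions over a finite domain are equal exactly when
# their defining predicates agree on every element of the domain.
domain = range(-10, 11)
P = lambda x: x * x == 4        # hypothetical predicate P(x)
Q = lambda x: x in (-2, 2)      # hypothetical predicate Q(x)

A = {x for x in domain if P(x)}
B = {x for x in domain if Q(x)}

assert A == B                                  # set equality ...
assert all(P(x) == Q(x) for x in domain)       # ... iff predicate equivalence
```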
What Is Time-Domain Correlation Analysis? - MATLAB & Simulink - MathWorks Correlation analysis assumes a linear system and does not require a specific model structure. The impulse response is the output signal that results when the input is an impulse; for a discrete model it has the following definition:

u(t) = \begin{cases} 1, & t = 0\\ 0, & t > 0 \end{cases}

The response y(t) to a general input u(t) equals the convolution of the input with the impulse response h, as follows:

y(t) = \int_{0}^{t} h(t-z)\, u(z)\, dz
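The idea behind correlation analysis can be sketched outside MATLAB as well. This is a Python/NumPy sketch, not the toolbox's implementation; the FIR system `h_true` is a made-up example. For a white-noise input with variance σ², the cross-correlation R_yu(k) = E[y(t)u(t−k)] equals σ²·h(k), so correlating the output with lagged copies of the input recovers the impulse response without assuming a model structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical FIR system: a short impulse response to recover.
h_true = np.array([0.0, 0.5, 0.25, 0.125])

n = 200_000
u = rng.standard_normal(n)        # white-noise input, unit variance
y = np.convolve(u, h_true)[:n]    # simulated output of the linear system

# Estimate h(k) as the sample mean of y(t) * u(t - k); for unit-variance
# white noise this converges to the true impulse response coefficient.
h_est = np.array([np.dot(y[k:], u[:n - k]) / (n - k)
                  for k in range(len(h_true))])
```

With 200,000 samples the estimates match `h_true` to within about one percent; shorter records give noisier estimates, which is why the toolbox prewhitens and averages.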
Abstract: This paper addresses what a fluctuation of the metric tensor leads to in Pre-Planckian physics. We pick the conditions for an equality, with a small \delta g_{tt}, to come up with constraints in line with modifications of the Friedman equation in a quantum bounce, with removal of the initial singularity of the Penrose theorem. Super-negative pressure is applied so as to understand what we can present for H = 0 (the quantum bounce) in terms of the density of the Universe. We also consider what to expect when P = w\Delta\rho \sim (-1+\epsilon^{+})\Delta\rho, i.e. a negative energy density in Pre-Planckian space-time. This leads to a causal discontinuity between the Pre-Planckian and Planckian space-time regimes due to the sign of the inflaton changing from negative to positive, for reasons brought up in this manuscript; see Equations (9)-(11) of this document, with explanations as to what is going on physically.

Keywords: Emergent Time, Metric Tensor Perturbations, HUP, Negative Energy Density

Key relations from the paper:

a(t) \sim a_{\text{starting-point}} \cdot t^{\alpha}

H^{2} = \frac{8\pi}{3 M_{\text{Planck}}^{2}} \cdot \left( \rho - \frac{\rho^{2}}{2|\sigma|} \right)

3\left(1+\frac{p}{\rho}\right)\frac{\rho^{2}}{|\sigma|} - \frac{\rho^{2}}{|\sigma|} - \left(1+\frac{3p}{\rho}\right)\rho = 3\left(1+\frac{p}{\rho}\right)\rho

With \rho \to \Delta\rho,

\frac{\Delta\rho}{\Delta t} \sim (\text{visc}) \times H_{\text{int}}^{2} \times a^{4}

\Delta\rho \sim (\text{visc}) \times H_{\text{int}}^{2} \times a^{4} \times \frac{2\hbar}{\delta g_{tt}\, k_{B} T_{\text{initial}}} \sim (\text{visc}) \times H_{\text{int}}^{2} \times a_{\text{init}}^{2} \times \frac{2\hbar}{\varphi_{\text{inf}}\, k_{B} T_{\text{initial}}}

P = w\Delta\rho \sim \left(-1+\epsilon^{+}\right)\Delta\rho

3\left(1+\left(-1+\epsilon^{+}\right)\right)\frac{\Delta\rho^{2}}{|\sigma|} - \frac{\Delta\rho^{2}}{|\sigma|} - \left(1+3\left(-1+\epsilon^{+}\right)\right)\Delta\rho = 3\left(1+\left(-1+\epsilon^{+}\right)\right)\Delta\rho

\Delta\rho \cdot \left(1-\frac{1}{3\epsilon^{+}}\right) = |\sigma| \cdot \left(1+\frac{2-3\epsilon^{+}}{3\epsilon^{+}}\right)

a \approx a_{\min} t^{\gamma} \;\Leftrightarrow\; \varphi \approx \sqrt{\frac{\gamma}{4\pi G}} \cdot \ln\left\{ \sqrt{\frac{8\pi G V_{0}}{\gamma\left(3\gamma-1\right)}} \cdot t \right\}

\Delta\rho \approx -2|\sigma| \approx (\text{visc}) \times H_{\text{int}}^{2} \times a_{\text{init}}^{2} \times \frac{2\hbar}{\varphi_{\text{inf}}\, k_{B} T_{\text{initial}}} \approx -(\text{visc}) \times H_{\text{int}}^{2} \times a_{\text{init}}^{2} \times \frac{2\hbar}{\left| \sqrt{\frac{\gamma}{4\pi G}} \ln\left\{ \sqrt{\frac{8\pi G V_{0}}{\gamma\left(3\gamma-1\right)}}\, t_{\min} \right\} \right| k_{B} T_{\text{initial}}}

\Delta\rho \approx -2|\sigma| \approx (\text{visc}) \times H_{\text{int}}^{2} \times a_{\text{init}}^{2} \times \frac{2\hbar}{\left(\varphi_{\text{inf}}+\delta^{+}\right) k_{B} T_{\text{Pre-Planck}\to\text{Planck}}} \approx (\text{visc}) \times H_{\text{int}}^{2} \times a_{\text{init}}^{2} \times \frac{2\hbar}{\left| \sqrt{\frac{\gamma}{4\pi G}} \ln\left\{ \sqrt{\frac{8\pi G V_{0}}{\gamma\left(3\gamma-1\right)}} \left(\left(t_{\min}+\epsilon^{+}\right) \le t_{\text{Planck}}\right) \right\} \right| k_{B} T_{\text{Pre-Planck}\to\text{Planck}}}

\sqrt{\frac{8\pi G V_{0}}{\gamma\left(3\gamma-1\right)}}\left(t_{\min}\right) < \sqrt{\frac{8\pi G V_{0}}{\gamma\left(3\gamma-1\right)}}\left(t_{\min}+\epsilon^{+}\right) < \sqrt{\frac{8\pi G V_{0}}{\gamma\left(3\gamma-1\right)}}\left(t_{\text{Planck}}\right), \qquad \sqrt{\frac{8\pi G V_{0}}{\gamma\left(3\gamma-1\right)}}\left(t_{\min}+\epsilon^{+}\right) \approx 1

Symbols used above: a_{\text{init}}^{2}, H_{\text{int}}^{2}, \dot{\varphi}^{2} \gg V_{\text{SUSY}}, H_{\text{Hubble}} = \dot{a}/a, \text{visc},

\Delta t_{\text{initial}} \sim \frac{\hbar}{\delta g_{tt} E_{\text{initial}}} \sim \frac{2\hbar}{\delta g_{tt}\, k_{B} T_{\text{initial}}}, \qquad \Lambda, \qquad a_{\min} \sim a_{\text{initial}} \sim 10^{-55}

\alpha_{0} = \sqrt{\frac{4\pi G}{3\mu_{0} c}} B_{0}, \qquad \hat{\lambda}(\text{defined}) = \Lambda c^{2}/3, \qquad a_{\min} = a_{0} \left[ \frac{\alpha_{0}}{2\hat{\lambda}(\text{defined})} \left( \sqrt{\alpha_{0}^{2} + 32\hat{\lambda}(\text{defined}) \cdot \mu_{0}\omega B_{0}^{2}} - \alpha_{0} \right) \right]^{1/4}

\hbar_{\text{initial}}\left[t_{\text{initial}} \le t_{\text{Planck}}\right], \qquad \varphi > 0 \;\text{iff}\; \sqrt{\frac{8\pi G V_{0}}{\gamma\left(3\gamma-1\right)}} \cdot \delta t > 1

Cite this paper: Beckwith, A. (2018) Gedankenexperiment, Assuming Nonsingular Quantum Bounce Friedman Equations Leading to a Causal Discontinuity between Pre Planckian to Planckian Physics Space-Time Regime. Journal of High Energy Physics, Gravitation and Cosmology, 4, 14-19. doi: 10.4236/jhepgc.2018.41003.
Turbulent Prandtl number The turbulent Prandtl number (Pr_t) is a non-dimensional term defined as the ratio between the momentum eddy diffusivity and the heat transfer eddy diffusivity. It is useful for solving the heat transfer problem of turbulent boundary layer flows. The simplest model for Pr_t is the Reynolds analogy, which yields a turbulent Prandtl number of 1. From experimental data, Pr_t has an average value of 0.85, but ranges from 0.7 to 0.9 depending on the Prandtl number of the fluid in question. The introduction of eddy diffusivity, and subsequently the turbulent Prandtl number, works as a way to define a simple relationship between the extra shear stress and heat flux that are present in turbulent flow. If the momentum and thermal eddy diffusivities are zero (no apparent turbulent shear stress and heat flux), then the turbulent flow equations reduce to the laminar equations. We can define the eddy diffusivities for momentum transfer \varepsilon_M and heat transfer \varepsilon_H by

-\overline{u'v'} = \varepsilon_M \frac{\partial \bar{u}}{\partial y} \qquad\text{and}\qquad -\overline{v'T'} = \varepsilon_H \frac{\partial \bar{T}}{\partial y},

where -\overline{u'v'} is the apparent turbulent shear stress and -\overline{v'T'} is the apparent turbulent heat flux. The turbulent Prandtl number is then defined as

\mathrm{Pr}_{\mathrm{t}} = \frac{\varepsilon_M}{\varepsilon_H}.

The turbulent Prandtl number has been shown to not generally equal unity (e.g. Malhotra and Kang, 1984; Kays, 1994; McEligot and Taylor, 1996; and Churchill, 2002).
It is a strong function of the molecular Prandtl number, amongst other parameters, and the Reynolds analogy is not applicable when the molecular Prandtl number differs significantly from unity, as determined by Malhotra and Kang[1] and elaborated by McEligot and Taylor[2] and Churchill.[3]

Turbulent momentum boundary layer equation:

\bar{u}\frac{\partial \bar{u}}{\partial x} + \bar{v}\frac{\partial \bar{u}}{\partial y} = -\frac{1}{\rho}\frac{d\bar{P}}{dx} + \frac{\partial}{\partial y}\left[\nu \frac{\partial \bar{u}}{\partial y} - \overline{u'v'}\right].

Turbulent thermal boundary layer equation:

\bar{u}\frac{\partial \bar{T}}{\partial x} + \bar{v}\frac{\partial \bar{T}}{\partial y} = \frac{\partial}{\partial y}\left(\alpha \frac{\partial \bar{T}}{\partial y} - \overline{v'T'}\right).

Substituting the eddy diffusivities into the momentum and thermal equations yields

\bar{u}\frac{\partial \bar{u}}{\partial x} + \bar{v}\frac{\partial \bar{u}}{\partial y} = -\frac{1}{\rho}\frac{d\bar{P}}{dx} + \frac{\partial}{\partial y}\left[\left(\nu + \varepsilon_M\right)\frac{\partial \bar{u}}{\partial y}\right]

\bar{u}\frac{\partial \bar{T}}{\partial x} + \bar{v}\frac{\partial \bar{T}}{\partial y} = \frac{\partial}{\partial y}\left[\left(\alpha + \varepsilon_H\right)\frac{\partial \bar{T}}{\partial y}\right].

Substituting into the thermal equation using the definition of the turbulent Prandtl number gives

\bar{u}\frac{\partial \bar{T}}{\partial x} + \bar{v}\frac{\partial \bar{T}}{\partial y} = \frac{\partial}{\partial y}\left[\left(\alpha + \frac{\varepsilon_M}{\mathrm{Pr}_{\mathrm{t}}}\right)\frac{\partial \bar{T}}{\partial y}\right].

In the special case where the Prandtl number and turbulent Prandtl number both equal unity (as in the Reynolds analogy), the velocity profile and
temperature profiles are identical. This greatly simplifies the solution of the heat transfer problem. If the Prandtl number and turbulent Prandtl number differ from unity, a solution is still possible provided the turbulent Prandtl number is known, so that the momentum and thermal equations can both be solved. In the general case of three-dimensional turbulence, however, the concepts of eddy viscosity and eddy diffusivity are not valid, and consequently the turbulent Prandtl number has no meaning.[4] ^ Malhotra, A. & Kang, S. S. (1984). "Turbulent Prandtl number in circular pipes". Int. J. Heat and Mass Transfer, 27, 2158-2161. ^ McEligot, D. M. & Taylor, M. F. (1996). "The turbulent Prandtl number in the near-wall region for low-Prandtl-number gas mixtures". Int. J. Heat Mass Transfer, 39, 1287-1295. ^ Churchill, S. W. (2002). "A Reinterpretation of the Turbulent Prandtl Number". Ind. Eng. Chem. Res., 41, 6393-6401. ^ Kays, W. M. (1994). "Turbulent Prandtl Number—Where Are We?". Journal of Heat Transfer, 116 (2), 284-295. doi:10.1115/1.2911398. Kays, William; Crawford, M.; Weigand, B. (2005). Convective Heat and Mass Transfer, Fourth Edition. McGraw-Hill. ISBN 978-0-07-246876-2.
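The practical use of Pr_t in the boundary-layer equations above is to convert a modeled momentum eddy diffusivity into a thermal one. A minimal numerical sketch (the diffusivity values are illustrative assumptions, not measurements):

```python
# Effective diffusivities entering the turbulent boundary-layer equations:
#   momentum: nu_eff    = nu    + eps_M
#   thermal:  alpha_eff = alpha + eps_M / Pr_t
nu = 1.5e-5      # molecular kinematic viscosity of air, m^2/s
alpha = 2.1e-5   # molecular thermal diffusivity of air, m^2/s
eps_M = 1.0e-3   # assumed momentum eddy diffusivity, m^2/s
Pr_t = 0.85      # typical experimental average quoted above

nu_eff = nu + eps_M
alpha_eff = alpha + eps_M / Pr_t

# Far from the wall the eddy terms dominate the molecular ones, so the
# ratio of effective diffusivities approaches Pr_t itself.
ratio = nu_eff / alpha_eff
```

With Pr_t < 1 the effective thermal diffusivity exceeds the effective viscosity, i.e. turbulence transports heat slightly more efficiently than momentum.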
A Sixth-Order Theory of Shear Deformable Beams With Variational Consistent Boundary Conditions | J. Appl. Mech. | ASME Digital Collection Guangyu Shi, e-mail: shi_guangyu@163.com; G. Z. Voyiadjis, e-mail: voyiadjis@eng.lsu.edu Shi, G., and Voyiadjis, G. Z. (December 20, 2010). "A Sixth-Order Theory of Shear Deformable Beams With Variational Consistent Boundary Conditions." ASME. J. Appl. Mech. March 2011; 78(2): 021019. https://doi.org/10.1115/1.4002594 This paper presents the derivation of a new beam theory with sixth-order differential equilibrium equations for the analysis of shear deformable beams. A sixth-order beam theory is desirable since the displacement constraints of some typical shear flexible beams clearly indicate that the boundary conditions corresponding to these constraints can be properly satisfied only by the boundary conditions associated with sixth-order differential equilibrium equations, as opposed to the fourth-order equilibrium equations of Timoshenko beam theory. The present beam theory is composed of three parts: the simple third-order kinematics of displacements reduced from the higher-order displacement field derived previously by the authors, a system of sixth-order differential equilibrium equations in terms of the two generalized displacements w and ϕx of beam cross sections, and three boundary conditions at each end of shear deformable beams. A technique for the analytical solution of the new beam theory is also presented. To demonstrate the advantages and accuracy of the new sixth-order beam theory for the analysis of shear flexible beams, the proposed beam theory is applied to solve analytically three classical beam bending problems for which the fourth-order beam theory of Timoshenko has raised questions about the boundary conditions.
The present solutions of these examples agree well with the elasticity solutions, and in particular they also show that the present sixth-order beam theory is capable of characterizing some boundary layer behavior near the beam ends or loading points. Keywords: beams (structures), bending, differential equations, elasticity, shear deformation, variational techniques, shear deformable beams, sixth-order beam theory, variational consistent governing equations, proper boundary conditions, boundary layer behavior. Subject terms: Boundary-value problems, Deflection, Displacement, Elasticity, Euler-Bernoulli beam theory, Shear (Mechanics), Stress, Differential equations, Equilibrium (Physics), Simply supported beams, Shear deformation, Boundary layers
Diuretic Effect of Cymbopogon jwarancusa after Single and Multiple Doses in Rats Sarah Jameel Khan, Syeda Afroz, Rafeeq Alam Khan* Diuretics are used efficaciously in the management of various clinical conditions such as hypertension, heart failure, cirrhosis, hypercalciuria, hematuria and nephrotic syndrome. Cymbopogon jwarancusa is an aromatic perennial grass used in both the traditional and Unani systems of medicine to treat diseases such as colds, seasonal fever, asthma, tuberculosis, rheumatic pain, back pain, toothache and nervous disorders. C. jwarancusa essential oils are used in perfumery, soaps, detergents, medicines and the pharmaceutical industry. Monoterpenes and sesquiterpenes constitute the largest fraction of the essential oil of C. jwarancusa. The present study was designed to compare the diuretic activity of C. jwarancusa after single and multiple doses. Furosemide (20 mg/kg) was used as the reference drug and 10% DMSO as the vehicle. Diuretic activity was assessed by measuring urine volume and calculating diuretic index and Lipschitz values. The maximum diuretic response was observed at 500 mg/kg of extract after both single and multiple-dose administration. On the basis of these results it may be concluded that C. jwarancusa may be used as a diuretic agent. Keywords: Cymbopogon jwarancusa, Diuretic, Dimethyl Sulfoxide, Lipschitz Value, Furosemide Khan, S., Afroz, S. and Khan, R. (2018) Diuretic Effect of Cymbopogon jwarancusa after Single and Multiple Doses in Rats. Pharmacology & Pharmacy, 9, 250-256. doi: 10.4236/pp.2018.97019. Maintenance of homeostasis is very important for a normal healthy life, since existence becomes difficult if any change disturbs this balance. Diuretics preserve this balance by maintaining blood volume as well as the concentration of excess ions in the body. These properties enable diuretics to treat various pathologies, e.g. hypertension, congestive heart failure, hypercalciuria, edema, nephrotic syndrome, cirrhosis and renal dysfunctions [1] [2].
Hypertension is a worldwide problem and its management gains priority each day. A survey of the adult population in 2000 showed that around 972 million people had hypertension, a number predicted to increase to 1.56 billion by 2025 [3]. Diuretics mainly alter the excretion of electrolytes and water by acting on the renal tubules. The various classes of diuretics include carbonic anhydrase inhibitors, thiazide, loop, potassium-sparing and osmotic diuretics [4]. Loop diuretics are the most effective, causing excretion of 20% - 25% of sodium and water, while thiazide diuretics are moderate in action, causing excretion of 5% - 8% of sodium, and potassium-sparing diuretics are the least effective, since they excrete only 2% - 3% of sodium. The most common undesired effects of diuretics are hypomagnesemia, hyponatremia, hyperglycemia, hypercholesterolemia, hyperuricemia and hypokalemia, while less common side effects are weakness, impotence and fatigue [5]. In the modern era herbal remedies are preferred over conventional medicines for treating numerous diseases; although slower in action, they are thought to produce fewer side effects. C. jwarancusa is an aromatic perennial grass (Rusha grass, khavi grass) belonging to the family Poaceae, used in both the traditional and Unani systems of medicine for the treatment of various ailments, e.g. vomiting, fever, inflammatory conditions, blood impurities and skin problems [6]. Monoterpenes and sesquiterpenes constitute the largest fraction of the essential oil extracted from C. jwarancusa. The species name of the plant is a combination of the two Sanskrit words jwar and khusha, meaning fever breaker [7]. In the literature C. jwarancusa has been used as an anti-pyretic [8], anti-fungal [9], antibacterial [10], anti-oxidant and cytotoxic agent [11]. However, there is a lack of documented evidence regarding the diuretic activity of C. jwarancusa. Thus the current study was designed to explore the diuretic potential of C. jwarancusa leaf extract at different doses after single and multiple administration. 2.1. Collection of Plant and Extraction Aerial parts of C. jwarancusa were collected from the University of Karachi and identified by the herbarium, Department of Botany. A voucher specimen No. 93325 was deposited in the herbarium. The plant parts were washed to remove impurities, dried, chopped and soaked in ethanol for 20 days. The filtered extract was then evaporated in a rotary evaporator, freeze dried and kept refrigerated for further examination. 2.2. Selection of Animals and Handling The study was conducted on albino Wistar rats of both sexes (140 - 200 g) obtained from the animal house of ICCBS, University of Karachi. All animals were kept at the animal house of the Department of Pharmacology, University of Karachi, in plastic cages with a 12-h light/dark cycle at 22˚C ± 2˚C and 50% - 60% humidity for one week before the start of the experiment. Animals were fed a standard diet and water regularly and were handled according to the National Institutes of Health guidelines for the care and use of animals [12]. All doses of C. jwarancusa (150, 300 and 500 mg/kg) and the standard drug, furosemide (20 mg/kg), were prepared in 10% DMSO and administered by oral intubation tube. 2.3. Design of Study Diuretic activity was examined using the Lipschitz method. Animals were randomly divided into five groups, designated negative control, positive control and three treated groups, each comprising 6 animals. Animals were deprived of food and water for 15 hours, then given normal saline by mouth at a dose of 25 ml/kg before administration of the vehicle, standard drug or herbal extract, to impose a uniform water balance and salt load. After administration of the standard and test drugs, all animals were placed separately in specially designed metabolic cages for collection of urine. Animals were given food and water ad libitum during the rest of the experiment. 2.3.1.
Single Dose Response The single-dose response of the vehicle, standard drug and herbal extract at three doses was examined in animal groups pre-treated with normal saline. The negative control group received only 10% DMSO, the positive control group was given furosemide (20 mg/kg), and the treated groups received 150, 300 and 500 mg/kg of C. jwarancusa extract. Drugs and vehicle were given in equivalent volumes to all animals once a day. Urine volume was measured every hour up to five hours and then after 24 hours. 2.3.2. Multiple-Dose Response The protocol for multiple doses was the same as for the single dose, except that the test and standard drugs were administered daily at the same dose for five days. Urine volume was measured daily for 5 days at 24-hour intervals; the readings on the 5th day represent the cumulative urine volume from day 1 to day 5. 2.3.3. Estimation of Diuretic Parameters Diuretic index and Lipschitz value were determined by the following formulas [13]: \text{Diuretic index}=UV_t/UV_c \text{Lipschitz value}=UV_t/UV_r where UV_t is the mean urine volume of the test group, UV_r is the mean urine volume of the reference group, and UV_c is the mean urine volume of the control group. All statistical calculations were performed with SPSS version 20 and values are expressed as mean ± S.E.M. For comparisons, one-way ANOVA was used, followed by a post hoc Dunnett's test. Values of p < 0.05 were considered significant and values of p < 0.001 highly significant. Graphs were prepared in Microsoft Excel. 1) Effect on urine volume Table 1 and Figure 1 show the urinary output after single and multiple doses of the test and standard drugs. The groups that received single and multiple doses of furosemide showed a highly significant increase in urine output, i.e. 15.51 ± 0.48 ml and 37.56 ± 3.36 ml, as compared to the control group, i.e. 6.40 ± 0.94 ml and 22.31 ± 1.55 ml respectively. However, animals that received 500 mg/kg of C.
jwarancusa extract displayed a significant increase in urine output compared with control after both single and multiple doses (9.51 ± 1.3 ml and 34 ± 3.14 ml). Animals receiving 300 mg/kg of extract exhibited significant urinary output only on the 3rd and 4th days as compared to the control group.

2) Effect on diuretic index and Lipschitz value

Table 2 shows the diuretic index and Lipschitz values after single and multiple doses of test and standard drugs. Diuretic index values after a single dose of furosemide and of 150, 300 and 500 mg/kg of C. jwarancusa extract were 2.42, 1, 1.36 and 1.48, respectively, whereas animals receiving multiple doses of furosemide and of 150, 300 and 500 mg/kg of extract showed diuretic index values of 1.68, 1.22, 1.35 and 1.52, respectively. Lipschitz values of C. jwarancusa extract at 150, 300 and 500 mg/kg were 41%, 56% and 61% after a single dose and 72%, 80% and 90% after multiple doses, as compared to furosemide.

Plants play a very beneficial role in human life; they provide not only nutritional benefits but also medicinal value. In recent times, medicinal plants have been targeted as sources of drugs for treating a variety of diseases. According to the WHO, almost one million people depend on herbal medicines for primary treatment of ailments, and 21,000 herbal plants from all over the world have been listed as possessing medicinal properties [14].

Table 1. Effect of C. jwarancusa, furosemide and DMSO on urine volume in rats. n = 6; values are mean ± S.E.M; significant diuretic if *p < 0.05; highly significant diuretic if **p < 0.001.

Table 2. Effect of C. jwarancusa on diuretic index and Lipschitz value.

Figure 1. Comparison of urinary output after single and multiple doses of various drugs. n = 6; values are mean ± S.E.M; significant diuretic if *p < 0.05 as compared to control; highly significant diuretic if **p < 0.001 as compared to control; CJ = Cymbopogon jwarancusa.
In the Unani medicinal system, ethnopharmacological studies of C. jwarancusa describe its use as a diuretic, but a literature review reveals that no study had been performed to evaluate its diuretic action. In this study, different strengths of an ethanol extract of C. jwarancusa were used to investigate diuretic activity, while furosemide was used as the reference drug against which the response of the test drug was compared. C. jwarancusa at 500 mg/kg showed a significant diuretic effect after single and multiple doses, while the 300 mg/kg extract displayed a significant diuretic response only on the 3rd and 4th days as compared to control. On the basis of these findings, it may be concluded that C. jwarancusa extract has a dose-dependent diuretic response.

Terpenoids present in C. jwarancusa have been reported to have diuretic activity. Terpenoids prevent the actions of aldosterone by binding to the A1 receptor, thus causing diuresis [15]. Terpenoids constitute approximately 65% - 70% of the total composition of C. jwarancusa; hence its diuretic activity may be due to this high terpenoid content. In future, studies could be carried out to isolate the active pharmacological constituents and determine their actual mechanism of action.

Other species of Cymbopogon have previously been reported to have a diuretic response. A C. citratus leaf decoction showed mild diuretic effects in rats at 10% and 20% [16]. C. schoenanthus extract also showed significant urine output values in combination with glycolic acid [17]. Patel [18] has categorized diuretics as good, moderate and poor on the basis of diuretic index; by this criterion, C. jwarancusa falls in the category of a moderate diuretic at all doses and may be used safely.

The authors are thankful to the Department of Pharmacology, University of Karachi for providing facilities to complete this piece of work.

[1] Hymes, L. and Warshaw, B.
(1987) Thiazide Diuretics for the Treatment of Children with Idiopathic Hypercalciuria and Hematuria. The Journal of Urology, 138, 1217-1219.
[2] Salvetti, A. and Ghiadoni, L. (2006) Thiazide Diuretics in the Treatment of Hypertension: An Update. Journal of the American Society of Nephrology, 17, S25-S29.
[4] Puschett, J.B. (1994) Pharmacological Classification and Renal Actions of Diuretics. Cardiology, 84, 4-13.
[5] Sica, D.A. (2004) Diuretic-Related Side Effects: Development and Treatment. The Journal of Clinical Hypertension, 6, 532-540.
[6] Kirtikar, K. and Basu, B. (1982) Indian Medicinal Plants. 2nd Edition, Vol. I & II, Dehradun.
[7] Jones, W., Hastings, W. and Chambers, W. (1796) Dissertations and Miscellaneous Pieces Relating to the History and Antiquities, the Arts, Sciences, and Literature, of Asia. Being a Continuation of Extracts from the Asiatic Researches, Vol. 2, London.
[8] Alam, M.K., Ahmed, S., Anjum, S., Akram, M., Shah, S.M.A., Wariss, H.M., Hasan, M.M. and Usmanghani, K. (2016) Evaluation of Antipyretic Activity of Some Medicinal Plants from Cholistan Desert Pakistan. Pakistan Journal of Pharmaceutical Sciences, 29, 529-533.
[9] Bhuyan, P.D., Chutia, M., Pathak, M. and Baruah, P. (2010) Effect of Essential Oils from Lippia geminata and Cymbopogon jwarancusa on in Vitro Growth and Sporulation of Two Rice Pathogens. Journal of the American Oil Chemists' Society, 87, 1333-1340.
[10] Bose, S., Ammani, K. and Ratakumari, S. (2013) Chemical Composition and Its Antibacterial Activity of Essential Oil from Cymbopogon jwarancusa. International Journal of Biopharma Research, 2, 97-100.
[11] Dar, M.Y., Shah, W.A., Rather, M.A., Qurishi, Y., Hamid, A. and Qurishi, M. (2011) Chemical Composition, in Vitro Cytotoxic and Antioxidant Activities of the Essential Oil and Major Constituents of Cymbopogon jawarancusa (Kashmir). Food Chemistry, 129, 1606-1611.
[12] National Research Council (1996) Guide for the Care and Use of Laboratory Animals. 8th Edition, National Academy Press, Washington DC.
[13] Asif, M., Atif, M., Malik, A.S.A., Dan, Z.C., Ahmad, I. and Ahmad, A. (2013) Diuretic Activity of Trianthema portulacastrum Crude Extract in Albino Rats. Tropical Journal of Pharmaceutical Research, 12, 967-997.
[14] Singh, A., Singh, K. and Saxena, A. (2010) Hypoglycemic Activity of Different Extracts of Various Herbal Plants. International Journal of Research in Ayurveda and Pharmacy, 1, 212-224.
[15] Rizvi, S.H., Shoeb, A., Kapil, R.S. and Popli, S.P. (1980) Two Diuretic Triterpenoids from Antidesma menasu. Phytochemistry, 19, 2409-2410.
[16] Carbajal, D., Casaco, A., Arruzazabala, L., Gonzalez, R. and Tolon, Z. (1989) Pharmacological Study of Cymbopogon citratus Leaves. Journal of Ethnopharmacology, 25, 103-107.
[17] Al-Ghamdi, S.S., Al-Ghamdi, A.A. and Shammah, A.A. (2007) Inhibition of Calcium Oxalate Nephrotoxicity with Cymbopogon schoenanthus (Al-Ethkher). Drug Metabolism Letters, 1, 241-244.
[18] Patel, U., Kulkarni, M., Undale, V. and Boshale, A. (2009) Evaluation of Diuretic Activity of Aqueous and Methanol Extracts of Lepidium sativum Garden Cress (Cruciferae) in Rats. Tropical Journal of Pharmaceutical Research, 8, 215-219.
Consider the sum

1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 + 11 + 12 + 13,

which we can abbreviate as 1 + 2 + ⋯ + 13. In sigma notation this sum is written

\sum_{i=1}^{13} i.

The symbol (Σ) indicates a sum, and i is the index of summation: i runs through the integers from the lower limit 1 to the upper limit 13, and the corresponding terms are added. That is, we take the term for i = 1, the term for i = 2, the term for i = 3, and so on up to i = 13:

\sum_{i=1}^{13} i = 1+2+3+4+5+6+7+8+9+10+11+12+13.

Further examples:

\sum_{i=1}^{5} i^2 = 1^2+2^2+3^2+4^2+5^2,

\sum_{i=n}^{2n} i = n+(n+1)+\cdots+(2n-1)+2n,

\sum_{i=1}^{n} i^3 = 1^3+2^3+3^3+\cdots+n^3.

The natural numbers ℕ are built from 1 by repeated addition:

\text{The Natural Numbers} = \mathbb{N} = \{1,2,3,\ldots\} = \{1,\,1+1,\,1+1+1,\,1+1+1+1,\ldots\}.

A proof by (weak) induction has two parts: a base case, in which the statement is verified for 1, and an induction step, in which the statement is assumed for one natural number and deduced for the next. The induction step can be phrased either as "assume the statement for n and prove it for n + 1" (the (n+1)th statement follows from the nth) or, equivalently, as "assume it for n − 1 and prove it for n" (the nth follows from the (n−1)th). We will use the second style for the proofs on this page.

Proposition 1. The sum of the first n natural numbers is

\sum_{i=1}^{n} i = 1+2+\cdots+n = \frac{n(n+1)}{2}.

Proof. For the base case n = 1,

\frac{n(n+1)}{2} = \frac{1(1+1)}{2} = 1,

which is indeed the sum of the first 1 natural numbers. For the induction step, assume the formula holds for n − 1, that is,

\sum_{i=1}^{n-1} i = \frac{(n-1)\left(\left(n-1\right)+1\right)}{2} = \frac{(n-1)n}{2}.

Then

\begin{array}{rcl}
\sum_{i=1}^{n} i &=& \sum_{i=1}^{n-1} i + n \\
&=& \frac{(n-1)n}{2} + n \qquad \text{(by the induction assumption)} \\
&=& \frac{n^2-n}{2} + \frac{2n}{2} \\
&=& \frac{n^2-n+2n}{2} \\
&=& \frac{n^2+n}{2} \\
&=& \frac{n(n+1)}{2},
\end{array}

which is the claimed formula for n. □

Proposition 2. For every natural number n,

\sum_{i=1}^{n} i^2 = 1^2+2^2+\cdots+n^2 = \frac{n(n+1)(2n+1)}{6}.

Proof. For the base case n = 1,

\frac{n(n+1)(2n+1)}{6} = \frac{1(1+1)(2+1)}{6} = 1,

so the formula holds for n = 1. For the induction step, assume the formula holds for n − 1:

\sum_{i=1}^{n-1} i^2 = \frac{(n-1)\left(\left(n-1\right)+1\right)\left(2\left(n-1\right)+1\right)}{6} = \frac{(n-1)n(2n-1)}{6} = \frac{2n^3-3n^2+n}{6}.

Then

\begin{array}{rcl}
\sum_{i=1}^{n} i^2 &=& \sum_{i=1}^{n-1} i^2 + n^2 \\
&=& \frac{2n^3-3n^2+n}{6} + n^2 \qquad \text{(by the induction assumption)} \\
&=& \frac{2n^3-3n^2+n}{6} + \frac{6n^2}{6} \\
&=& \frac{2n^3+3n^2+n}{6} \\
&=& \frac{n(2n^2+3n+1)}{6} \\
&=& \frac{n(n+1)(2n+1)}{6},
\end{array}

which is the claimed formula for n. □

Proposition 3. For every natural number n,

\sum_{i=1}^{n} i^3 = 1^3+2^3+\cdots+n^3 = \frac{n^2(n+1)^2}{4}.

Proof. For the base case n = 1,

\frac{n^2(n+1)^2}{4} = \frac{1^2(1+1)^2}{4} = 1.

For the induction step, assume the formula holds for n − 1:

\sum_{i=1}^{n-1} i^3 = \frac{(n-1)^2\left(\left(n-1\right)+1\right)^2}{4} = \frac{(n-1)^2 n^2}{4}.

Then

\begin{array}{rcl}
\sum_{i=1}^{n} i^3 &=& \sum_{i=1}^{n-1} i^3 + n^3 \\
&=& \frac{(n-1)^2 n^2}{4} + n^3 \qquad \text{(by the induction assumption)} \\
&=& \frac{n^4-2n^3+n^2}{4} + \frac{4n^3}{4} \\
&=& \frac{n^4+2n^3+n^2}{4} \\
&=& \frac{n^2(n^2+2n+1)}{4} \\
&=& \frac{n^2(n+1)^2}{4},
\end{array}

which is the claimed formula for n. □
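The three closed forms proved by induction above can also be spot-checked numerically; a short Python sketch:

```python
def sum_powers(n, p):
    """Directly sum i**p for i = 1..n."""
    return sum(i**p for i in range(1, n + 1))

# Check the three closed forms against the direct sums for many n
for n in range(1, 50):
    assert sum_powers(n, 1) == n * (n + 1) // 2
    assert sum_powers(n, 2) == n * (n + 1) * (2 * n + 1) // 6
    assert sum_powers(n, 3) == n**2 * (n + 1)**2 // 4
print("all checks passed")
```

Of course a finite check is no substitute for the induction proofs; it only confirms there is no algebra slip.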
Find each of the following limits if it exists. If you think the limit does not exist, provide a reason.

(a) \lim_{x\rightarrow 0} \frac{\sin(5x)}{1-\sqrt{1-x}}

(b) Given \lim_{x\rightarrow 8} \frac{x f(x)}{3} = -2, find \lim_{x\rightarrow 8} f(x).

(c) \lim_{x\rightarrow -\infty} \frac{\sqrt{9x^6-x}}{3x^3+4x}

We will use two facts. First, if \lim_{x\rightarrow a} g(x) \neq 0, then

\lim_{x\rightarrow a} \frac{f(x)}{g(x)} = \frac{\lim_{x\rightarrow a} f(x)}{\lim_{x\rightarrow a} g(x)}.

Second, \lim_{x\rightarrow 0} \frac{\sin x}{x} = 1.

(a) At x = 0 the expression \frac{\sin(5x)}{1-\sqrt{1-x}} takes the indeterminate form \frac{0}{0}, so we multiply by the conjugate of the denominator:

\begin{array}{rcl}
\lim_{x\rightarrow 0} \frac{\sin(5x)}{1-\sqrt{1-x}} &=& \lim_{x\rightarrow 0} \frac{\sin(5x)}{1-\sqrt{1-x}} \left(\frac{1+\sqrt{1-x}}{1+\sqrt{1-x}}\right) \\
&=& \lim_{x\rightarrow 0} \frac{\sin(5x)(1+\sqrt{1-x})}{x} \\
&=& \left(\lim_{x\rightarrow 0} \frac{\sin(5x)}{x}\right) \lim_{x\rightarrow 0} (1+\sqrt{1-x}) \\
&=& \left(5 \lim_{x\rightarrow 0} \frac{\sin(5x)}{5x}\right)(2) \\
&=& 5(1)(2) \\
&=& 10.
\end{array}

(b) Since \lim_{x\rightarrow 8} 3 = 3 \neq 0, the quotient rule for limits gives

-2 = \lim_{x\rightarrow 8} \frac{x f(x)}{3} = \frac{\lim_{x\rightarrow 8} x f(x)}{\lim_{x\rightarrow 8} 3} = \frac{\lim_{x\rightarrow 8} x f(x)}{3}.

Multiplying both sides by 3 gives -6 = \lim_{x\rightarrow 8} x f(x). Since the limit of a product is the product of the limits,

-6 = \left(\lim_{x\rightarrow 8} x\right)\left(\lim_{x\rightarrow 8} f(x)\right) = 8 \lim_{x\rightarrow 8} f(x).

Solving for \lim_{x\rightarrow 8} f(x) gives \lim_{x\rightarrow 8} f(x) = -\frac{3}{4}.

(c) Divide the numerator and denominator by x^3. Since x \rightarrow -\infty, we have x^3 < 0, so \frac{1}{x^3}\sqrt{9x^6-x} = -\sqrt{\frac{9x^6-x}{x^6}} = -\sqrt{9-\frac{1}{x^5}}. Hence

\lim_{x\rightarrow -\infty} \frac{\sqrt{9x^6-x}}{3x^3+4x} = \lim_{x\rightarrow -\infty} \frac{-\sqrt{9-\frac{1}{x^5}}}{3+\frac{4}{x^2}} = \frac{-\sqrt{9}}{3} = -1.

Final answers: (a) 10; (b) -\frac{3}{4}; (c) -1.
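The values in (a) and (c) can be sanity-checked numerically (a small sketch; the sample points are arbitrary). Note that in (c), as x → −∞ the square root is positive while the denominator is negative, so the ratio approaches −1:

```python
import math

def a(x):
    """The expression in part (a): sin(5x) / (1 - sqrt(1 - x))."""
    return math.sin(5 * x) / (1 - math.sqrt(1 - x))

def c(x):
    """The expression in part (c): sqrt(9x^6 - x) / (3x^3 + 4x)."""
    return math.sqrt(9 * x**6 - x) / (3 * x**3 + 4 * x)

print(a(1e-6))   # close to 10
print(c(-1e4))   # close to -1
```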
Correspondence to: † jchoi72@dau.ac.kr

Keywords: Hydrogen, Vacuum, Insulation, CFD, Rarefied gas

Fourier's law for the conductive heat flux:

q = -\kappa \frac{dT}{dx}

Kinetic-theory expression for the thermal conductivity of a gas:

\kappa = \frac{1}{3} m_g \lambda n v_{av} c_v

Free-molecular heat conduction coefficient:

\Lambda = \frac{\alpha_{eff}}{2}\left(\frac{\gamma+1}{\gamma-1}\right)\left(\frac{k}{2\pi m T}\right)^{\frac{1}{2}} = 18.189\,\alpha_{eff}\,\frac{\gamma+1}{\gamma-1}\,\frac{1}{\left(MT\right)^{0.5}}

Heat flow across a rarefied gap at pressure P between surfaces at temperatures T_1 and T_2:

Q = \Lambda A P \left(T_1 - T_2\right)

Knudsen number:

Kn = \frac{\lambda}{L}

Analysis of theoretical and numerical analysis data of heat flux
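To illustrate how the free-molecular conduction relations combine, a minimal sketch (symbol names follow the equations above; the numerical inputs are illustrative assumptions, not values from the paper, and the resulting units depend on the unit system implied by the constant 18.189):

```python
import math

def fm_conduction_coeff(alpha_eff, gamma, M, T):
    """Free-molecular heat conduction coefficient:
    Lambda = 18.189 * alpha_eff * (gamma + 1)/(gamma - 1) / sqrt(M * T)."""
    return 18.189 * alpha_eff * (gamma + 1) / (gamma - 1) / math.sqrt(M * T)

def fm_heat_flow(alpha_eff, gamma, M, T, A, P, T1, T2):
    """Q = Lambda * A * P * (T1 - T2): heat flow across a rarefied gap."""
    lam = fm_conduction_coeff(alpha_eff, gamma, M, T)
    return lam * A * P * (T1 - T2)

# Illustrative assumption: a hydrogen-like gas (M ≈ 2.016) in a vacuum gap
Q = fm_heat_flow(alpha_eff=0.5, gamma=1.4, M=2.016, T=150.0,
                 A=1.0, P=0.1, T1=300.0, T2=77.0)
print(Q)
```

Note that Q scales linearly with the residual pressure P, which is why pumping the insulation gap to a deep vacuum suppresses this heat-transfer path.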
Macro-economy - GCAM - IAMC-Documentation

{\displaystyle {\text{Equation 1}}:GDP_{r,t+1}=POP_{r,t+1}(1+GRO_{r,t})^{tStep}({\frac {GDP_{r,t}}{POP_{r,t}}})P_{r,t+1}^{\alpha }}

where:
tStep — number of years in a time step
GDPr,t — GDP in region r in period t
POPr,t — population in region r in period t
GROr,t — annual average per-capita GDP growth rate in region r in period t

See Macro-Economic System for more details.

Retrieved from "https://www.iamcdocumentation.eu/index.php?title=Macro-economy_-_GCAM&oldid=14432"
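Equation 1 can be read as: scale up last period's per-capita GDP by the growth rate compounded over the time step, multiply by next period's population, and apply a price adjustment term. A minimal sketch (function and parameter names are ours; the price term P^α is left at a neutral default of 1 since the fragment above does not define it):

```python
def next_gdp(gdp, pop, pop_next, gro, t_step, p_next=1.0, alpha=0.0):
    """Equation 1: GDP_{r,t+1} = POP_{r,t+1} * (1 + GRO_{r,t})^tStep
    * (GDP_{r,t} / POP_{r,t}) * P_{r,t+1}^alpha."""
    return pop_next * (1 + gro) ** t_step * (gdp / pop) * p_next ** alpha

# Illustrative assumption: 2% annual per-capita growth over a 5-year step,
# population growing from 50 to 52 (millions), GDP of 1000 (billions)
print(next_gdp(gdp=1000.0, pop=50.0, pop_next=52.0, gro=0.02, t_step=5))  # ≈ 1148.2
```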
A horizontal flag is shown below. The radius of the outer semicircle is 4, while that of the inner semicircle is 3. Imagine rotating the flag about its pole and describe the resulting three-dimensional figure. Draw a picture of this figure on your paper.

Imagine a hollow rubber ball. The rubber is 1 inch thick and the inner radius is 3, so the outer radius is 4.

Volume = big sphere − small sphere:

\frac{4}{3}\pi(4)^3-\frac{4}{3}\pi(3)^3=\frac{148}{3}\pi \text{ un}^{3}
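The shell-volume arithmetic can be checked directly:

```python
import math

# Shell volume = volume of the outer sphere minus the inner sphere
outer = (4 / 3) * math.pi * 4**3
inner = (4 / 3) * math.pi * 3**3
shell = outer - inner
print(shell)  # (148/3) * pi ≈ 154.98
```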
Taking signal and noise to new dimensions Geophysics October 27, 2015, Vol.80, 1ND-3ND. doi:https://doi.org/10.1190/2015-0924-TIOGEO.1 The mud-sand crossover on marine seismic data Ann E. Cook; Derek E. Sawyer Geophysics October 07, 2015, Vol.80, A109-A114. doi:https://doi.org/10.1190/geo2015-0291.1 Sjoerd A. L. de Ridder; Biondo L. Biondi Planning of urban underground infrastructure using a broadband seismic landstreamer — Tomography results and uncertainty quantifications from a case study in southwestern Sweden Alireza Malehmir; Fengjiao Zhang; Mahdieh Dehghannejad; Emil Lundberg; Christin Döse; Olof Friberg; Bojan Brodic; Joachim Place; Mats Svensson; Henrik Möller Mehrdad Bastani; Lena Persson; Suman Mehta; Alireza Malehmir Geophysics October 15, 2015, Vol.80, B193-B10. doi:https://doi.org/10.1190/geo2014-0527.1 High-resolution seismic imaging in complex environments: A comparison among common-reflection-surface stack, common-midpoint stack, and prestack depth migration at the Ilva-Bagnoli brownf... Geophysics October 30, 2015, Vol.80, C107-C122. doi:https://doi.org/10.1190/geo2015-0288.1 Jeffrey Shragge; Thomas E. Blum; Kasper van Wijk; Ludmila Adam High-resolution adaptive beamforming for borehole acoustic reflection imaging Chao Li; Wenzheng Yue Geophysics October 07, 2015, Vol.80, D565-D574. doi:https://doi.org/10.1190/geo2014-0517.1 2D multifractal analysis and porosity scaling estimation in Lower Cretaceous carbonates Sandra Vega; M. Soufiane Jouini Large-scale 3D geoelectromagnetic modeling using parallel adaptive high-order finite element method Alexander V. Grayver; Tzanio V. Kolev Geophysics August 26, 2015, Vol.80, E277-E291. doi:https://doi.org/10.1190/geo2015-0013.1 Bulk electric conductivity response to soil and rock CO2 concentration during controlled CO2 release experiments: Observations and analytic modeling Scott Jewell; Xiaobing Zhou; Martha E. Apple; Laura M. Dobeck; Lee H. Spangler; Alfred B. 
Cunningham Optimized 3D synthetic aperture for controlled-source electromagnetics Allison Knaak; Roel Snieder; Liam Ó. Súilleabháin; Yuanzhong Fan; David Ramirez-Mejia Numerical study of long-electrode electric resistivity tomography — Accuracy, sensitivity, and resolution Mathias Ronczka; Carsten Rücker; Thomas Günther The impact of off-resonance effects on water content estimates in surface nuclear magnetic resonance Denys Grombacher; Rosemary Knight Stripping very low frequency communication signals with minimum shift keying encoding from streamed time-domain electromagnetic data Monitoring of ground surface and subsurface deformations at oil sands using radar interferometry Jin Baek; Sang-Wan Kim; Jeong Woo Kim Geophysics September 23, 2015, Vol.80, EN137-EN152. doi:https://doi.org/10.1190/geo2015-0164.1 Vincenzo Di Fiore; Giuseppe Cavuoto; Michele Punzo; Daniela Tarallo; Nicola Pelosi; Laura Giordano; Ines Alberico; Ennio Marsella; Salvatore Mazzola Geophysics October 15, 2015, Vol.80, EN153-EN166. doi:https://doi.org/10.1190/geo2014-0392.1 Matthew M. Haney; Victor C. Tsai An iterative method for the accurate determination of airborne gravity horizontal components using strapdown inertial navigation system/global navigation satellite system Shaokun Cai; Kaidong Zhang; Meiping Wu; Yangming Huang; Yapeng Yang Geophysics September 22, 2015, Vol.80, G119-G129. doi:https://doi.org/10.1190/geo2014-0063.1 3D parametric hybrid inversion of time-domain airborne electromagnetic data Michael S. McMillan; Christoph Schwarzbach; Eldad Haber; Douglas W. Oldenburg Geophysics September 28, 2015, Vol.80, K25-K36. doi:https://doi.org/10.1190/geo2015-0141.1 Microseismic and seismic denoising via ensemble empirical mode decomposition and adaptive thresholding Jiajun Han; Mirko van der Baan Geophysics August 18, 2015, Vol.80, KS69-KS80. 
doi:https://doi.org/10.1190/geo2014-0423.1 Multitrace impedance inversion with lateral constraints Haitham Hamid; Adam Pidlisecky Geophysics August 26, 2015, Vol.80, M101-M111. doi:https://doi.org/10.1190/geo2014-0546.1 Leonardo Azevedo; Ruben Nunes; Amílcar Soares; Evaldo C. Mundin; Guenther Schwedersky Neto Geophysics September 01, 2015, Vol.80, M113-M128. doi:https://doi.org/10.1190/geo2015-0104.1 Double-difference waveform inversion: Feasibility and robustness study with pressure data Di Yang; Mark Meadows; Phil Inderwiesen; Jorge Landa; Alison Malcolm; Michael Fehler A theoretical and physical modeling analysis of the coupling between baseline elastic properties and time-lapse changes in determining difference amplitude variation with offset Shahin Jabbari; Joe Wong; Kristopher A. Innanen A combined Wigner-Ville and maximum entropy method for high-resolution time-frequency analysis of seismic data Ibrahim Zoukaneri; Milton J. Porsani Geophysics October 05, 2015, Vol.80, O1-O11. doi:https://doi.org/10.1190/geo2014-0464.1 Espen Birger Raknes; Børge Arntsen Geophysics August 28, 2015, Vol.80, R303-R315. doi:https://doi.org/10.1190/geo2014-0472.1 Zedong Wu; Tariq Alkhalifah Jie Hou; William W. Symes Xukai Shen; Robert G. Clapp Chuanhui Li; Xuewei Liu Geophysics November 04, 2015, Vol.80, R361-R373. doi:https://doi.org/10.1190/geo2014-0446.1 Source wavefield reconstruction using a linear combination of the boundary wavefield in reverse time migration Shaolin Liu; Xiaofan Li; Wenshuai Wang; Tong Zhu Geophysics August 28, 2015, Vol.80, S203-S212. doi:https://doi.org/10.1190/geo2015-0109.1 Prism waves in seafloor canyons and their effects on seismic imaging James Deeks; David Lumley Mandy Wong; Biondo L. Biondi; Shuki Ronen Removing false images in reverse time migration: The concept of de-primary Tong W. Fei; Yi Luo; Jiarui Yang; Hongwei Liu; Fuhao Qin Wenlong Wang; George A. McMechan Geophysics October 15, 2015, Vol.80, S245-S258. 
Does the following sequence converge or diverge? If the sequence converges, also find the limit of the sequence.

{\displaystyle a_{n}={\frac {\ln n}{n}}}

Recall L'Hôpital's rule: if {\displaystyle \lim _{x\rightarrow \infty }f(x)} and {\displaystyle \lim _{x\rightarrow \infty }g(x)} are both {\displaystyle \pm \infty ,} and {\displaystyle \lim _{x\rightarrow \infty }{\frac {f'(x)}{g'(x)}}} exists or is {\displaystyle \pm \infty ,} then {\displaystyle \lim _{x\rightarrow \infty }{\frac {f(x)}{g(x)}}\,=\,\lim _{x\rightarrow \infty }{\frac {f'(x)}{g'(x)}}.}

Since {\displaystyle \lim _{n\rightarrow \infty }\ln n=\infty } and {\displaystyle \lim _{n\rightarrow \infty }n=\infty ,} the limit has the indeterminate form {\displaystyle {\frac {\infty }{\infty }},} so we may apply L'Hôpital's rule after replacing the discrete variable {\displaystyle n} by the continuous variable {\displaystyle x:}

{\displaystyle {\begin{array}{rcl}\displaystyle {\lim _{n\rightarrow \infty }{\frac {\ln n}{n}}}&=&\displaystyle {\lim _{x\rightarrow \infty }{\frac {\ln x}{x}}}\\&&\\&{\overset {L'H}{=}}&\displaystyle {\lim _{x\rightarrow \infty }{\frac {{\big (}{\frac {1}{x}}{\big )}}{1}}}\\&&\\&=&\displaystyle {0.}\end{array}}}

Therefore the sequence converges, and its limit is {\displaystyle 0.}
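As a quick numerical sanity check (a Python sketch, not part of the original solution), evaluating {\displaystyle \ln n/n} at increasing {\displaystyle n} shows the terms shrinking toward 0:

```python
import math

def a(n):
    """n-th term of the sequence a_n = ln(n) / n."""
    return math.log(n) / n

# Evaluate the term at n = 10, 100, ..., 10^6; the values
# decrease monotonically toward the limit 0.
terms = [a(10**k) for k in range(1, 7)]
```

Each term is a positive number smaller than the last, consistent with the limit computed by L'Hôpital's rule.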
Correct state and state estimation error covariance using extended or unscented Kalman filter, or particle filter and measurements - MATLAB correct - MathWorks Switzerland

The correct command uses the current measurement $y[k]$ to update the predicted state estimate $\hat{x}[k|k-1]$ (and the corresponding state estimation error covariance) to the corrected estimate $\hat{x}[k|k]$; a subsequent predict step then propagates $\hat{x}[k|k]$ to $\hat{x}[k+1|k]$, the prediction having started from the previous corrected estimate $\hat{x}[k-1|k-1]$. For a particle filter, the corrected estimate (mu) is extracted from the particles. An example of nonlinear state-transition and measurement equations handled by the extended and unscented filters:

$x[k] = \sqrt{x[k-1] + u[k-1]} + w[k-1]$

$y[k] = x[k] + 2u[k] + v[k]^2$

where $u$ is the input, $w$ the process noise, and $v$ the measurement noise.
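To make the predicted-vs-corrected notation concrete, here is a minimal sketch of what a correct (measurement-update) step does for a scalar extended Kalman filter. This is illustrative Python, not the MathWorks implementation, and the identity measurement model h(x) = x is a simplifying assumption:

```python
def ekf_correct(x_pred, P_pred, y, R, h=lambda x: x, H=1.0):
    """Scalar EKF measurement update: turn the predicted estimate
    x[k|k-1] (covariance P[k|k-1]) into the corrected estimate
    x[k|k] using the measurement y[k]. R is the measurement-noise
    variance and H the measurement Jacobian (assumed constant here)."""
    S = H * P_pred * H + R                   # innovation covariance
    K = P_pred * H / S                       # Kalman gain
    x_corr = x_pred + K * (y - h(x_pred))    # corrected state x[k|k]
    P_corr = (1.0 - K * H) * P_pred          # corrected covariance
    return x_corr, P_corr

# One update with made-up numbers: prediction 1.0, measurement 1.2.
x_corr, P_corr = ekf_correct(x_pred=1.0, P_pred=0.5, y=1.2, R=0.5)
```

The corrected covariance is always smaller than the predicted one, reflecting the information gained from the measurement.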
The Potential of Using Dynamic Strains in Earthquake Early Warning Applications | Seismological Research Letters | GeoScienceWorld

Noha Farghal, Andrew Barbour, John Langbein; The Potential of Using Dynamic Strains in Earthquake Early Warning Applications. Seismological Research Letters 2020; 91 (5): 2817–2827. doi: https://doi.org/10.1785/0220190385 (Corresponding author: nfarghal@usgs.gov)

We investigate the potential of using borehole strainmeter data from the Network of the Americas (NOTA) and the U.S. Geological Survey networks to estimate earthquake moment magnitudes for earthquake early warning (EEW) applications. We derive an empirical equation relating peak dynamic strain, earthquake moment magnitude, and hypocentral distance, and investigate the effects of different types of instrument calibration on model misfit. We find that raw (uncalibrated) strains fit the model as accurately as calibrated strains. We test the model by estimating moment magnitudes of the largest two earthquakes in the July 2019 Ridgecrest earthquake sequence—the M 6.4 foreshock and the M 7.1 mainshock—using two strainmeters located within ∼50 km of the rupture. In both cases, the magnitude based on the dynamic strain component is within ∼0.1–0.4 magnitude units of the catalog moment magnitude. We then compare the temporal evolution of our strain‐derived magnitudes for the largest two Ridgecrest events to the real‐time performance of the ShakeAlert EEW System (SAS). The final magnitudes from NOTA borehole strainmeters are close to SAS real‐time estimates for the M 6.4 foreshock, and significantly more accurate for the M 7.1 mainshock.
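Empirical relations of the kind described in the abstract (peak dynamic strain as a function of magnitude and hypocentral distance) are typically log-linear, and magnitude estimation amounts to inverting them. The sketch below illustrates that inversion; the functional form and the coefficients A, B, C are hypothetical placeholders, not the paper's fitted values:

```python
import math

# Hypothetical log-linear attenuation model (NOT the paper's equation):
#   log10(peak_strain) = A + B * M + C * log10(R)
# where M is moment magnitude and R is hypocentral distance in km.
A, B, C = -10.0, 1.0, -1.5   # placeholder coefficients

def magnitude_from_strain(peak_strain, hypocentral_km):
    """Invert the hypothetical model above for moment magnitude M."""
    return (math.log10(peak_strain) - A - C * math.log10(hypocentral_km)) / B

# Example: a peak strain of 1e-6 observed 50 km from the hypocenter.
M = magnitude_from_strain(peak_strain=1e-6, hypocentral_km=50.0)
```

With coefficients fitted to real strainmeter data, the same one-line inversion could run on streaming peak-strain measurements, which is what makes this attractive for early warning.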
Wireless Sensor Network > Vol.11 No.2, February 2019. University of Aberdeen King’s College, Aberdeen, UK. DOI: 10.4236/wsn.2019.112002

Ademuwagun, A. (2019) RSS-Distance Rationalization Procedure for Localization in an Indoor Environment. Wireless Sensor Network, 11, 13-33. doi: 10.4236/wsn.2019.112002.

The measurement model is $S = h(U) + e$, and received signal strength falls off with distance approximately as

$RSS \propto \frac{1}{(\text{distance})^2}$

In free space the path loss is

$\text{PathLoss}(d) = -10\log_{10}\left[\frac{G_t G_r \lambda^2}{(4\pi)^2 d^2}\right]$

where $G_t$ and $G_r$ are the ratio gains of the transmitting and receiving antennas respectively, $\lambda$ is the wavelength in meters, and $d$ is the Tx-Rx separation in meters. However, the free space path loss equation provides valid results only if the receiving antenna is in the far-field, defined by the distance $d_f$ given in Equation (4):

$d_f = \frac{2D^2}{\lambda}$

where $D$ is the largest linear dimension of the antenna. For a receiver to be considered in the far-field of the transmitter, it must satisfy $d_f \gg D$ and $d_f \gg \lambda$. Hence, Equation (3) is not applicable in our situation. In reality, RSS is strongly affected by changes in environmental conditions, particularly in an indoor environment. Some of the factors that attenuate signal strength prevent RSS from correlating well with distance, and some of them change from one enclosed medium to another; providing a generalized RSS model applicable in all indoor conditions would therefore be very difficult. RSS, or received power $P_r$, is related to the distance $d$ through the free-space relation of Equation (5):

$\frac{P_r}{P_t} = G_t G_r \left(\frac{\lambda}{4\pi d}\right)^2$

where $P_r$ and $P_t$ are the received and transmitted powers, and $G_t$ and $G_r$ are the transmitter and receiver antenna gains respectively.
$\lambda$ is the wavelength of the transmitted signal and $d$ is the distance between the antennas. This equation is called the Friis equation. Thus, Equation (6) can be modified into the log-normal shadowing model

$P(d)\,[\text{dBm}] = P(d_0)\,[\text{dBm}] - 10\gamma\log_{10}\left(\frac{d}{d_0}\right) + X_\sigma$

where $P(d_0)$ represents the transmitting power of an anchor node at a reference distance $d_0$, $d$ is the distance between the anchor and the object to be localized, $\gamma$ is the path loss exponent, and $X_\sigma$ is the shadow fading, which follows a zero-mean Gaussian distribution with standard deviation $\sigma$. Shadowing is due to obstructions caused by hills and buildings and will have little effect in our case study, while multipath fading, due to the constructive and destructive interference of the transmitted signal, will be significant in an indoor environment. Hence, considering that we are interested in indoor localization, and that in the context of this paper we expect the size of a room or hallway to be between 16 m² and 25 m², with a maximum distance of no more than 20 m between transmitters and receivers, the effect of $X_\sigma$ can be neglected. Thus, Equation (7) reduces to the log-distance path loss model, which assumes that path loss varies exponentially with distance:

$P(d)\,[\text{dBm}] = P(d_0)\,[\text{dBm}] - 10\gamma\log_{10}\left(\frac{d}{d_0}\right)$

Consequently, the function of the received signal, $f(RSS, \gamma)$, can be estimated. The value of the path loss exponent $\gamma$ varies depending upon the environment; in free space it is equal to 2 [14].
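The log-distance model of Equation (8) is straightforward to evaluate directly. The sketch below predicts received power at distance d from a reference measurement at d0; the reference power of -40 dBm at 1 m is an illustrative assumption, not a value from the paper:

```python
import math

def log_distance_rss(d, p_d0=-40.0, d0=1.0, gamma=2.0):
    """Log-distance path-loss model:
    P(d)[dBm] = P(d0)[dBm] - 10 * gamma * log10(d / d0).
    p_d0: received power (dBm) at the reference distance d0 (meters),
    gamma: path-loss exponent (2 in free space)."""
    return p_d0 - 10.0 * gamma * math.log10(d / d0)

# In free space (gamma = 2), every tenfold increase in distance
# costs 20 dB: -40 dBm at 1 m becomes -60 dBm at 10 m.
rss_10m = log_distance_rss(10.0)
```

Fitting gamma to measured indoor data (rather than fixing it at 2) is exactly what the path-loss-exponent estimation references [11] [12] address.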
$RSS = 10\log\frac{P(d_0)}{P(d)}, \qquad [RSS] = \text{dBm}$

The fitted regression models were

$Y = -13.3 - 0.626X_1 - 0.53X_2 - 0.66X_3$

$Y = -11.3 - 0.0309X_1 - 0.210X_2 - 0.0319X_3$

$Y = 20\log_{10}X - 39$

with $\gamma = 2$ in free space and $\frac{d}{d_0} = X$. Based on the data collected, we can express distance as $X$ (m) and the measured RSS as $Y$ (dBm) for the test bed area. We therefore set our reference datum using Equation (12), which is the log-distance model for the propagation of the RSS data in free space. We compared the log-distance model with the simple moving average model and our proposed model. The three-point moving average is

$RSS_k = \frac{a_{k-1} + a_k + a_{k+1}}{3}, \qquad 2 \le k \le n-1$

where $n$ and $k$ are positive integers and the measured data are

$RSS_n = a_1, a_2, a_3, \cdots, a_n$

Hence, selecting the first three measured RSS data, the new RSS value $X_1$ will be

$X_1 = |\max\{a_1, a_2, a_3\}|$

$X_2 = |\max\{a_2, a_3, a_4\}|$

$X_k = |\max\{a_k, a_{k+1}, a_{k+2}\}|$

where $n$ and $k$ are positive integers and $k + 2 \le n$. We applied our proposed algorithm to the measured RSS data and applied a regression model to compare it with the original RSS data and the ideal log-distance data. This is displayed in Figure 13.

[1] Mao, G. and Fidan, B. (2009) Introduction to Wireless Sensor Network Localization. Localization Algorithms and Strategies for Wireless Sensor Networks, 1-32.
[2] Krach, B. and Robertson, P. (2008) Integration of Foot-Mounted Inertial Sensors into a Bayesian Location Estimation Framework. 2008 5th IEEE Workshop on Positioning, Navigation and Communication (WPNC 2008), 55-61.
[3] Priwgharm, R. and Chemtanomwong, P. (2011) A Comparative Study on Indoor Localization Based on RSSI Measurement in Wireless Sensor Network.
2011 Eighth IEEE International Joint Conference on Computer Science and Software Engineering (JCSSE), Nakhon Pathom, 11-13 May 2011, 1-6.
[4] Willoughby, T.R., Kupelian, P.A., Pouliot, J., Shinohara, K., Aubin, M., Roach, M., Skrumeda, L.L., Balter, J.M., Litzenberg, D.W., Hadley, S.W., et al. (2006) Target Localization and Real-Time Tracking Using the Calypso 4D Localization System in Patients with Localized Prostate Cancer. International Journal of Radiation Oncology, Biology, Physics, 65, 528-534.
[5] Merhi, Z., Nahas, M., Abdul-Nabi, S., Haj-Ali, A. and Bayoumi, M. (2013) RSSI Range Estimation for Indoor Anchor Based Localization for Wireless Sensor Networks. 2013 25th IEEE International Conference on Microelectronics (ICM), 1-4.
[6] Parameswaran, A.T., Husain, M.I., Upadhyaya, S., et al. (2009) Is RSSI a Reliable Parameter in Sensor Localization Algorithms: An Experimental Study. Field Failure Data Analysis Workshop (F2DA09), 5.
[7] Ramadurai, V. and Sichitiu, M.L. (2003) Simulation-Based Analysis of a Localization Algorithm for Wireless Ad-Hoc Sensor Networks. Proceedings of the International Conference on Wireless Networks, Las Vegas, NV.
[8] Patwari, N., Hero III, A.O., Perkins, M., Correal, N.S. and O’dea, R.J. (2003) Relative Location Estimation in Wireless Sensor Networks. IEEE Transactions on Signal Processing, 51, 2137-2148.
[9] Barralet, M., Huang, X. and Sharma, D. (2009) Effects of Antenna Polarization on RSSI Based Location Identification. 2009 11th International Conference on Advanced Communication Technology (ICACT 2009), 260-265.
[10] Chen, Y., Pan, Q., Liang, Y. and Hu, Z. (2010) AWCL: Adaptive Weighted Centroid Target Localization Algorithm Based on RSSI in WSN. 2010 3rd IEEE International Conference on Computer Science and Information Technology (ICCSIT), Chengdu, 9-11 July 2010, 331-336.
[11] Choi, J.H., Choi, J.K. and Yoo, S.J. (2012) Iterative Path-Loss Exponent Estimation-Based Positioning Scheme in WSNs.
2012 IEEE Fourth International Conference on Ubiquitous and Future Networks (ICUFN), Phuket, 4-6 July 2012, 23-26.
[12] Golestani, A., Petreska, N., Wilfert, D. and Zimmer, C. (2014) Improving the Precision of RSSI-Based Low-Energy Localization Using Path Loss Exponent Estimation. 2014 11th IEEE Workshop on Positioning, Navigation and Communication (WPNC), 1-6.
[13] Hu, L. and Evans, D. (2004) Localization for Mobile Sensor Networks. Proceedings of the 10th Annual International Conference on Mobile Computing and Networking, Philadelphia, PA, 26 September-1 October 2004, 45-57.
[14] Oguejiofor, O., Okorogu, V., Adewale, A. and Osuesu, B. (2013) Outdoor Localization System Using RSSI Measurement of Wireless Sensor Network. International Journal of Innovative Technology and Exploring Engineering, 2, 1-6.
[15] Rahman, M.S., Park, Y. and Kim, K.D. (2012) RSS-Based Indoor Localization Algorithm for Wireless Sensor Network Using Generalized Regression Neural Network. Arabian Journal for Science and Engineering, 37, 1043-1053.
[16] Vander Stoep, J. (2009) Design and Implementation of Reliable Localization Algorithms Using Received Signal Strength. PhD Thesis, University of Washington.
[17] Yun, S., Lee, J., Chung, Y. and Kim, E. (2007) Centroid Localization Method in Wireless Sensor Networks Using TSK Fuzzy Modeling. ISIS 2007 Proceedings of the 8th Symposium on Advanced Intelligent Systems, 971-974.
[18] Adewumi, O.G., Djouani, K. and Kurien, A.M. (2013) RSSI Based Indoor and Outdoor Distance Estimation for Localization in WSN. 2013 IEEE International Conference on Industrial Technology (ICIT), Cape Town, 25-28 February 2013, 1534-1539.
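The paper's proposed windowed selection ($X_k = |\max\{a_k, a_{k+1}, a_{k+2}\}|$) and the three-point moving average it is compared against are both simple sliding-window filters. A short illustrative Python sketch (not the author's code, with made-up RSS samples in dBm):

```python
def moving_average3(rss):
    """Three-point moving average: RSS_k = (a_{k-1} + a_k + a_{k+1}) / 3."""
    return [sum(rss[i:i + 3]) / 3.0 for i in range(len(rss) - 2)]

def windowed_max3(rss):
    """Proposed filter: X_k = |max{a_k, a_{k+1}, a_{k+2}}|.
    RSS samples are negative dBm values, so max picks the strongest
    reading in each window and abs() makes the result positive."""
    return [abs(max(rss[i:i + 3])) for i in range(len(rss) - 2)]

samples = [-61.0, -58.0, -65.0, -59.0, -70.0]  # made-up measurements
avg = moving_average3(samples)
strongest = windowed_max3(samples)
```

Keeping the strongest sample per window discards the deep multipath fades that drag a plain average down, which is the intuition behind preferring it indoors.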
Satisfiability

In mathematical logic, a formula is satisfiable if it is true under some assignment of values to its variables. For example, the formula {\displaystyle x+3=y} is satisfiable because it is true when {\displaystyle x=3} and {\displaystyle y=6}, while the formula {\displaystyle x+1=x} is not satisfiable over the integers. The dual concept to satisfiability is validity; a formula is valid if every assignment of values to its variables makes the formula true. For example, {\displaystyle x+3=3+x} is valid over the integers, but {\displaystyle x+3=y} is not. Formally, satisfiability is studied with respect to a fixed logic defining the syntax of allowed symbols, such as first-order logic, second-order logic or propositional logic. Rather than being syntactic, however, satisfiability is a semantic property because it relates to the meaning of the symbols, for example, the meaning of {\displaystyle +} in a formula such as {\displaystyle x+1=x}. Formally, we define an interpretation (or model) to be an assignment of values to the variables and an assignment of meaning to all other non-logical symbols, and a formula is said to be satisfiable if it is true under some interpretation.[1] While this allows non-standard interpretations of symbols such as {\displaystyle +}, one can restrict their meaning by providing additional axioms. The satisfiability modulo theories problem considers satisfiability of a formula with respect to a formal theory, which is a (finite or infinite) set of axioms. Satisfiability and validity are defined for a single formula, but can be generalized to an arbitrary theory or set of formulas: a theory is satisfiable if at least one interpretation makes every formula in the theory true, and valid if every formula is true in every interpretation. For example, theories of arithmetic such as Peano arithmetic are satisfiable because they are true in the natural numbers.
This concept is closely related to the consistency of a theory, and in fact is equivalent to consistency for first-order logic, a result known as Gödel's completeness theorem. The negation of satisfiability is unsatisfiability, and the negation of validity is invalidity. These four concepts are related to each other in a manner exactly analogous to Aristotle's square of opposition. The problem of determining whether a formula in propositional logic is satisfiable is decidable, and is known as the Boolean satisfiability problem, or SAT. In general, the problem of determining whether a sentence of first-order logic is satisfiable is not decidable. In universal algebra, equational theory, and automated theorem proving, the methods of term rewriting, congruence closure and unification are used to attempt to decide satisfiability. Whether a particular theory is decidable or not depends on whether the theory is variable-free, among other conditions.[2]

Reduction of validity to satisfiability

For classical logics with negation, it is generally possible to re-express the question of the validity of a formula as one involving satisfiability, because of the relationships between the concepts expressed in the above square of opposition. In particular φ is valid if and only if ¬φ is unsatisfiable, which is to say it is false that ¬φ is satisfiable. Put another way, φ is satisfiable if and only if ¬φ is invalid. For logics without negation, such as the positive propositional calculus, the questions of validity and satisfiability may be unrelated. In the case of the positive propositional calculus, the satisfiability problem is trivial, as every formula is satisfiable, while the validity problem is co-NP complete.

Propositional satisfiability for classical logic

In the case of classical propositional logic, satisfiability is decidable for propositional formulae.
In particular, satisfiability is an NP-complete problem, and is one of the most intensively studied problems in computational complexity theory.

Satisfiability in first-order logic

For first-order logic (FOL), satisfiability is undecidable. More specifically, it is a co-RE-complete problem and therefore not semidecidable.[3] This fact has to do with the undecidability of the validity problem for FOL. The question of the status of the validity problem was first posed by David Hilbert, as the so-called Entscheidungsproblem. The universal validity of a formula is a semi-decidable problem by Gödel's completeness theorem. If satisfiability were also a semi-decidable problem, then the problem of the existence of counter-models would be too (a formula has counter-models iff its negation is satisfiable). So the problem of logical validity would be decidable, which contradicts the Church–Turing theorem, a result stating the negative answer for the Entscheidungsproblem.

Satisfiability in model theory

In model theory, an atomic formula is satisfiable if there is a collection of elements of a structure that render the formula true.[4] If A is a structure, φ is a formula, and a is a collection of elements, taken from the structure, that satisfy φ, then it is commonly written that

A ⊧ φ [a]

If φ has no free variables, that is, if φ is an atomic sentence, and it is satisfied by A, then one writes

A ⊧ φ

In this case, one may also say that A is a model for φ, or that φ is true in A. If T is a collection of atomic sentences (a theory) satisfied by A, one writes

A ⊧ T

Finite satisfiability

A problem related to satisfiability is that of finite satisfiability, which is the question of determining whether a formula admits a finite model that makes it true. For a logic that has the finite model property, the problems of satisfiability and finite satisfiability coincide, as a formula of that logic has a model if and only if it has a finite model.
This question is important in the mathematical field of finite model theory. Finite satisfiability and satisfiability need not coincide in general. For instance, consider the first-order logic formula obtained as the conjunction of the following sentences, where {\displaystyle a_{0}} and {\displaystyle a_{1}} are constants: {\displaystyle R(a_{0},a_{0})} {\displaystyle R(a_{0},a_{1})} {\displaystyle \forall xy(R(x,y)\rightarrow \exists zR(y,z))} {\displaystyle \forall xyz(R(y,x)\wedge R(z,x)\rightarrow y=z)} The resulting formula has the infinite model {\displaystyle R(a_{0},a_{0}),R(a_{0},a_{1}),R(a_{1},a_{2}),\ldots }, but it can be shown that it has no finite model (starting at the fact {\displaystyle R(a_{0},a_{1})} and following the chain of {\displaystyle R} atoms that must exist by the third axiom, the finiteness of a model would require the existence of a loop, which would violate the fourth axiom, whether it loops back on {\displaystyle a_{0}} or on a different element). The computational complexity of deciding satisfiability for an input formula in a given logic may differ from that of deciding finite satisfiability; in fact, for some logics, only one of them is decidable. For classical first-order logic, finite satisfiability is recursively enumerable (in class RE) and undecidable by Trakhtenbrot's theorem applied to the negation of the formula.

Numerical constraints

Numerical constraints often appear in the field of mathematical optimization, where one usually wants to maximize (or minimize) an objective function subject to some constraints. However, leaving aside the objective function, the basic issue of simply deciding whether the constraints are satisfiable can be challenging or undecidable in some settings. The following table summarizes the main cases.

|            | over reals                                                  | over integers                            |
| Linear     | PTIME (see linear programming)                              | NP-complete (see integer programming)    |
| Polynomial | decidable through e.g. cylindrical algebraic decomposition  | undecidable (Hilbert's tenth problem)    |

Table source: Bockmayr and Weispfenning.[5]: 754

For linear constraints, a fuller picture is provided by the following table.

| Constraints over:   | rationals | integers    | natural numbers |
| Linear equations    | PTIME     | PTIME       | NP-complete     |
| Linear inequalities | PTIME     | NP-complete | NP-complete     |

^ See, for example, Boolos and Jeffrey, 1974, chapter 11. ^ Franz Baader; Tobias Nipkow (1998). Term Rewriting and All That. Cambridge University Press. pp. 58–92. ISBN 0-521-77920-0. ^ Baier, Christel (2012). "Chapter 1.3 Undecidability of FOL" (PDF). Lecture Notes — Advanced Logics. Technische Universität Dresden — Institute for Technical Computer Science. pp. 28–32. Retrieved 21 July 2012. ^ Wilfrid Hodges (1997). A Shorter Model Theory. Cambridge University Press. p. 12. ISBN 0-521-58713-1. ^ a b Alexander Bockmayr; Volker Weispfenning (2001). "Solving Numerical Constraints". In John Alan Robinson; Andrei Voronkov (eds.). Handbook of Automated Reasoning Volume I. Elsevier and MIT Press. ISBN 0-444-82949-0. (Elsevier) (MIT Press). Boolos and Jeffrey, 1974. Computability and Logic. Cambridge University Press. Daniel Kroening; Ofer Strichman (2008). Decision Procedures: An Algorithmic Point of View. Springer Science & Business Media. ISBN 978-3-540-74104-6. A. Biere; M. Heule; H. van Maaren; T. Walsh, eds. (2009). Handbook of Satisfiability. IOS Press. ISBN 978-1-60750-376-7.
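The reduction of validity to satisfiability described earlier (φ is valid iff ¬φ is unsatisfiable) can be checked directly for propositional formulas by enumerating truth assignments. A small illustrative Python sketch (exponential brute force, fine for a handful of variables):

```python
from itertools import product

def satisfiable(formula, variables):
    """Brute-force SAT: `formula` maps a dict of truth values to bool;
    try all 2^n assignments and report whether any makes it true."""
    return any(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

def valid(formula, variables):
    """phi is valid iff (not phi) is unsatisfiable."""
    return not satisfiable(lambda env: not formula(env), variables)

# (p or not p) is valid; (p and not p) is unsatisfiable.
tautology = valid(lambda env: env["p"] or not env["p"], ["p"])
contradiction_sat = satisfiable(lambda env: env["p"] and not env["p"], ["p"])
```

This brute-force check is exactly why propositional satisfiability is decidable; the NP-completeness of SAT concerns whether anything fundamentally faster than this enumeration exists in the worst case.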
Section 42.5 (0EAH): Tame symbols—The Stacks project

42.5 Tame symbols

Consider a Noetherian local ring $(A, \mathfrak m)$ of dimension $1$. We denote $Q(A)$ the total ring of fractions of $A$, see Algebra, Example 10.9.8. The tame symbol will be a map \[ \partial _ A(-, -) : Q(A)^* \times Q(A)^* \longrightarrow \kappa (\mathfrak m)^* \] satisfying

(1) $\partial _ A(f, gh) = \partial _ A(f, g) \partial _ A(f, h)$ for $f, g, h \in Q(A)^*$,

(2) $\partial _ A(f, g) \partial _ A(g, f) = 1$ for $f, g \in Q(A)^*$,

(3) $\partial _ A(f, 1 - f) = 1$ for $f \in Q(A)^*$ such that $1 - f \in Q(A)^*$,

(4) $\partial _ A(aa', b) = \partial _ A(a, b)\partial _ A(a', b)$ and $\partial _ A(a, bb') = \partial _ A(a, b)\partial _ A(a, b')$ for $a, a', b, b' \in A$ nonzerodivisors,

(5) $\partial _ A(b, b) = (-1)^ m$ with $m = \text{length}_ A(A/bA)$ for $b \in A$ a nonzerodivisor,

(6) $\partial _ A(u, b) = u^ m \bmod \mathfrak m$ with $m = \text{length}_ A(A/bA)$ for $u \in A$ a unit and $b \in A$ a nonzerodivisor, and

(7) $\partial _ A(a, b - a)\partial _ A(b, b) = \partial _ A(b, b - a)\partial _ A(a, b)$ for $a, b \in A$ such that $a, b, b - a$ are nonzerodivisors.

Since it is easier to work with elements of $A$ we will often think of $\partial _ A$ as a map defined on pairs of nonzerodivisors of $A$ satisfying (4), (5), (6), (7). It is an exercise to see that setting \[ \partial _ A(\frac{a}{b}, \frac{c}{d}) = \partial _ A(a, c) \partial _ A(a, d)^{-1} \partial _ A(b, c)^{-1} \partial _ A(b, d) \] we get a well defined map $Q(A)^* \times Q(A)^* \to \kappa (\mathfrak m)^*$ satisfying (1), (2), (3) as well as the other properties. We do not claim there is a unique map with these properties. Instead, we will give a recipe for constructing such a map. Namely, given $a_1, a_2 \in A$ nonzerodivisors, we choose a ring extension $A \subset B$ and local factorizations as in Lemma 42.4.4.
Then we define \begin{equation} \label{chow-equation-tame-symbol} \partial _ A(a_1, a_2) = \prod \nolimits _ j \text{Norm}_{\kappa (\mathfrak m_ j)/\kappa (\mathfrak m)} ((-1)^{e_{1, j}e_{2, j}}u_{1, j}^{e_{2, j}}u_{2, j}^{-e_{1, j}} \bmod \mathfrak m_ j)^{m_ j} \end{equation} where $m_ j = \text{length}_{B_ j}(B_ j/\pi _ j B_ j)$ and the product is taken over the maximal ideals $\mathfrak m_1, \ldots , \mathfrak m_ r$ of $B$. Lemma 42.5.1. The formula (42.5.0.1) determines a well defined element of $\kappa (\mathfrak m)^*$. In other words, the right hand side does not depend on the choice of the local factorizations or the choice of $B$. Proof. Independence of choice of factorizations. Suppose we have a Noetherian $1$-dimensional local ring $B$, elements $a_1, a_2 \in B$, and nonzerodivisors $\pi , \theta $ such that we can write \[ a_1 = u_1 \pi ^{e_1} = v_1 \theta ^{f_1},\quad a_2 = u_2 \pi ^{e_2} = v_2 \theta ^{f_2} \] with $e_ i, f_ i \geq 0$ integers and $u_ i, v_ i$ units in $B$. Observe that this implies \[ a_1^{e_2} = u_1^{e_2}u_2^{-e_1}a_2^{e_1},\quad a_1^{f_2} = v_1^{f_2}v_2^{-f_1}a_2^{f_1} \] On the other hand, setting $m = \text{length}_ B(B/\pi B)$ and $k = \text{length}_ B(B/\theta B)$ we find $e_2 m = \text{length}_ B(B/a_2 B) = f_2 k$. Expanding $a_1^{e_2m} = a_1^{f_2 k}$ using the above we find \[ (u_1^{e_2}u_2^{-e_1})^ m = (v_1^{f_2}v_2^{-f_1})^ k \] This proves the desired equality up to signs. To see the signs work out we have to show $me_1e_2$ is even if and only if $kf_1f_2$ is even. This follows as both $me_2 = kf_2$ and $me_1 = kf_1$ (same argument as above). Independence of choice of $B$. Suppose given two extensions $A \subset B$ and $A \subset B'$ as in Lemma 42.4.4. Then \[ C = (B \otimes _ A B')/(\mathfrak m\text{-power torsion}) \] will be a third one. 
Thus we may assume we have $A \subset B \subset C$ and factorizations over the local rings of $B$ and we have to show that using the same factorizations over the local rings of $C$ gives the same element of $\kappa (\mathfrak m)$. By transitivity of norms (Fields, Lemma 9.20.5) this comes down to the following problem: if $B$ is a Noetherian local ring of dimension $1$ and $\pi \in B$ is a nonzerodivisor, then \[ \lambda ^ m = \prod \text{Norm}_{\kappa _ k/\kappa }(\lambda )^{m_ k} \] Here we have used the following notation: (1) $\kappa $ is the residue field of $B$, (2) $\lambda $ is an element of $\kappa $, (3) $\mathfrak m_ k \subset C$ are the maximal ideals of $C$, (4) $\kappa _ k = \kappa (\mathfrak m_ k)$ is the residue field of $C_ k = C_{\mathfrak m_ k}$, (5) $m = \text{length}_ B(B/\pi B)$, and (6) $m_ k = \text{length}_{C_ k}(C_ k/\pi C_ k)$. The displayed equality holds because $\text{Norm}_{\kappa _ k/\kappa }(\lambda ) = \lambda ^{[\kappa _ k : \kappa ]}$ as $\lambda \in \kappa $ and because $m = \sum m_ k[\kappa _ k:\kappa ]$. First, we have $m = \text{length}_ B(B/\pi B) = \text{length}_ B(C/\pi C)$ by Lemma 42.2.5 and (42.2.2.1). Finally, we have $\text{length}_ B(C/\pi C) = \sum m_ k[\kappa _ k:\kappa ]$ by Algebra, Lemma 10.52.12. $\square$ Lemma 42.5.2. The tame symbol (42.5.0.1) satisfies (4), (5), (6), (7) and hence gives a map $\partial _ A : Q(A)^* \times Q(A)^* \to \kappa (\mathfrak m)^*$ satisfying (1), (2), (3). Proof. Let us prove (4). Let $a_1, a_2, a_3 \in A$ be nonzerodivisors. Choose $A \subset B$ as in Lemma 42.4.4 for $a_1, a_2, a_3$. Then the equality \[ \partial _ A(a_1a_2, a_3) = \partial _ A(a_1, a_3) \partial _ A(a_2, a_3) \] follows from the equality \[ (-1)^{(e_{1, j} + e_{2, j})e_{3, j}} (u_{1, j}u_{2, j})^{e_{3, j}}u_{3, j}^{-e_{1, j} - e_{2, j}} = (-1)^{e_{1, j}e_{3, j}} u_{1, j}^{e_{3, j}}u_{3, j}^{-e_{1, j}} (-1)^{e_{2, j}e_{3, j}} u_{2, j}^{e_{3, j}}u_{3, j}^{-e_{2, j}} \] in $B_ j$.
Properties (5) and (6) are equally immediate. Let us prove (7). Let $a_1, a_2, a_1 - a_2 \in A$ be nonzerodivisors and set $a_3 = a_1 - a_2$. Choose $A \subset B$ as in Lemma 42.4.4 for $a_1, a_2, a_3$. Then it suffices to show \[ (-1)^{e_{1, j}e_{2, j} + e_{1, j}e_{3, j} + e_{2, j}e_{3, j} + e_{2, j}} u_{1, j}^{e_{2, j} - e_{3, j}} u_{2, j}^{e_{3, j} - e_{1, j}} u_{3, j}^{e_{1, j} - e_{2, j}} \bmod \mathfrak m_ j = 1 \] This is clear if $e_{1, j} = e_{2, j} = e_{3, j}$. Say $e_{1, j} > e_{2, j}$. Then we see that $e_{3, j} = e_{2, j}$ because $a_3 = a_1 - a_2$ and we see that $u_{3, j}$ has the same residue class as $-u_{2, j}$. Hence the formula is true – the signs work out as well and this verification is the reason for the choice of signs in (42.5.0.1). The other cases are handled in exactly the same manner. $\square$ Lemma 42.5.3. Let $(A, \mathfrak m)$ be a Noetherian local ring of dimension $1$. Let $A \subset B$ be a finite ring extension with $B/A$ annihilated by a power of $\mathfrak m$ and $\mathfrak m$ not an associated prime of $B$. For $a, b \in A$ nonzerodivisors we have \[ \partial _ A(a, b) = \prod \text{Norm}_{\kappa (\mathfrak m_ j)/\kappa (\mathfrak m)}(\partial _{B_ j}(a, b)) \] where the product is over the maximal ideals $\mathfrak m_ j$ of $B$ and $B_ j = B_{\mathfrak m_ j}$. Proof. Choose $B_ j \subset C_ j$ as in Lemma 42.4.4 for $a, b$. By Lemma 42.4.1 we can choose a finite ring extension $B \subset C$ with $C_ j \cong C_{\mathfrak m_ j}$ for all $j$. Let $\mathfrak m_{j, k} \subset C$ be the maximal ideals of $C$ lying over $\mathfrak m_ j$. Let \[ a = u_{j, k}\pi _{j, k}^{f_{j, k}},\quad b = v_{j, k}\pi _{j, k}^{g_{j, k}} \] be the local factorizations which exist by our choice of $C_ j \cong C_{\mathfrak m_ j}$. 
By definition we have \[ \partial _ A(a, b) = \prod \nolimits _{j, k} \text{Norm}_{\kappa (\mathfrak m_{j, k})/\kappa (\mathfrak m)} ((-1)^{f_{j, k}g_{j, k}}u_{j, k}^{g_{j, k}}v_{j, k}^{-f_{j, k}} \bmod \mathfrak m_{j, k})^{m_{j, k}} \] and \[ \partial _{B_ j}(a, b) = \prod \nolimits _ k \text{Norm}_{\kappa (\mathfrak m_{j, k})/\kappa (\mathfrak m_ j)} ((-1)^{f_{j, k}g_{j, k}}u_{j, k}^{g_{j, k}}v_{j, k}^{-f_{j, k}} \bmod \mathfrak m_{j, k})^{m_{j, k}} \] The result follows by transitivity of norms for $\kappa (\mathfrak m_{j, k})/\kappa (\mathfrak m_ j)/\kappa (\mathfrak m)$, see Fields, Lemma 9.20.5. $\square$ Lemma 42.5.4. Let $(A, \mathfrak m, \kappa ) \to (A', \mathfrak m', \kappa ')$ be a local homomorphism of Noetherian local rings. Assume $A \to A'$ is flat and $\dim (A) = \dim (A') = 1$. Set $m = \text{length}_{A'}(A'/\mathfrak mA')$. For $a_1, a_2 \in A$ nonzerodivisors $\partial _ A(a_1, a_2)^ m$ maps to $\partial _{A'}(a_1, a_2)$ via $\kappa \to \kappa '$. Proof. If $a_1, a_2$ are both units, then $\partial _ A(a_1, a_2) = 1$ and $\partial _{A'}(a_1, a_2) = 1$ and the result is true. If not, then we can choose a ring extension $A \subset B$ and local factorizations as in Lemma 42.4.4. Let $\mathfrak m_1, \ldots , \mathfrak m_ m$ be the maximal ideals of $B$, with residue fields $\kappa _1, \ldots , \kappa _ m$. For each $j \in \{ 1, \ldots , m\} $ denote $\pi _ j \in B_ j = B_{\mathfrak m_ j}$ a nonzerodivisor such that we have factorizations $a_ i = u_{i, j}\pi _ j^{e_{i, j}}$ as in the lemma. By definition we have \[ \partial _ A(a_1, a_2) = \prod \nolimits _ j \text{Norm}_{\kappa _ j/\kappa } ((-1)^{e_{1, j}e_{2, j}}u_{1, j}^{e_{2, j}}u_{2, j}^{-e_{1, j}} \bmod \mathfrak m_ j)^{m_ j} \] where $m_ j = \text{length}_{B_ j}(B_ j/\pi _ j B_ j)$. Set $B' = A' \otimes _ A B$.
Since $A'$ is flat over $A$ we see that $A' \subset B'$ is a ring extension with $B'/A'$ annihilated by a power of $\mathfrak m'$. Let \[ \mathfrak m'_{j, l},\quad l = 1, \ldots , n_ j \] be the maximal ideals of $B'$ lying over $\mathfrak m_ j$. Denote $\kappa '_{j, l}$ the residue field of $\mathfrak m'_{j, l}$. Denote $B'_{j, l}$ the localization of $B'$ at $\mathfrak m'_{j, l}$. As factorizations of $a_1$ and $a_2$ in $B'_{j, l}$ we use the image of the factorizations $a_ i = u_{i, j} \pi _ j^{e_{i, j}}$ given to us in $B_ j$. By definition we have \[ \partial _{A'}(a_1, a_2) = \prod \nolimits _{j, l} \text{Norm}_{\kappa '_{j, l}/\kappa '} ((-1)^{e_{1, j}e_{2, j}}u_{1, j}^{e_{2, j}}u_{2, j}^{-e_{1, j}} \bmod \mathfrak m'_{j, l})^{m'_{j, l}} \] where $m'_{j, l} = \text{length}_{B'_{j, l}}(B'_{j, l}/\pi _ j B'_{j, l})$. Comparing the formulae above we see that it suffices to show that for each $j$ and for any unit $u \in B_ j$ we have \begin{equation} \label{chow-equation-to-prove} \left(\text{Norm}_{\kappa _ j/\kappa }(u \bmod \mathfrak m_ j)^{m_ j}\right)^ m = \prod \nolimits _ l \text{Norm}_{\kappa '_{j, l}/\kappa '}(u \bmod \mathfrak m'_{j, l})^{m'_{j, l}} \end{equation} in $\kappa '$. We are going to use the construction of determinants of endomorphisms of finite length modules in More on Algebra, Section 15.120 to prove this. Set $M = B_ j/\pi _ j B_ j$. By More on Algebra, Lemma 15.120.2 we have \[ \text{Norm}_{\kappa _ j/\kappa }(u \bmod \mathfrak m_ j)^{m_ j} = \det \nolimits _\kappa (u : M \to M) \] Thus, by More on Algebra, Lemma 15.120.3, the left hand side of (42.5.4.1) is equal to $\det _{\kappa '}(u : M \otimes _ A A' \to M \otimes _ A A')$. We have an isomorphism \[ M \otimes _ A A' = (B_ j/\pi _ j B_ j) \otimes _ A A' = \bigoplus \nolimits _ l B'_{j, l}/\pi _ j B'_{j, l} \] of $A'$-modules. 
Setting $M'_ l = B'_{j, l}/\pi _ j B'_{j, l}$ we see that $\text{Norm}_{\kappa '_{j, l}/\kappa '}(u \bmod \mathfrak m'_{j, l})^{m'_{j, l}} = \det _{\kappa '}(u : M'_ l \to M'_ l)$ by More on Algebra, Lemma 15.120.2 again. Hence (42.5.4.1) holds by multiplicativity of the determinant construction, see More on Algebra, Lemma 15.120.1. $\square$ Comment: In the statement after the definition of tame symbols, the formula for $\partial _ A(\frac{a}{b}, \frac{c}{d})$ should read $\partial _ A(a, c)\partial _ A(a, d)^{-1}\partial _ A(b, c)^{-1}\partial _ A(b, d)$.
Section 15.8 (07Z6): Fitting ideals—The Stacks project

The Fitting ideals of a finite module are the ideals determined by the construction of Lemma 15.8.2.

Lemma 15.8.1. Let $R$ be a ring. Let $A$ be an $n \times m$ matrix with coefficients in $R$. Let $I_ r(A)$ be the ideal generated by the $r \times r$-minors of $A$ with the convention that $I_0(A) = R$ and $I_ r(A) = 0$ if $r > \min (n, m)$. Then

(1) $I_0(A) \supset I_1(A) \supset I_2(A) \supset \ldots $,

(2) if $B$ is an $(n + n') \times m$ matrix, and $A$ is the first $n$ rows of $B$, then $I_{r + n'}(B) \subset I_ r(A)$,

(3) if $C$ is an $n \times n$ matrix then $I_ r(CA) \subset I_ r(A)$,

(4) if $A$ is a block matrix \[ \left( \begin{matrix} A_1 & 0 \\ 0 & A_2 \end{matrix} \right) \] then $I_ r(A) = \sum _{r_1 + r_2 = r} I_{r_1}(A_1) I_{r_2}(A_2)$.

Proof. Omitted. (Hint: Use that a determinant can be computed by expanding along a column or a row.) $\square$

Lemma 15.8.2. Let $R$ be a ring. Let $M$ be a finite $R$-module. Choose a presentation \[ \bigoplus \nolimits _{j \in J} R \longrightarrow R^{\oplus n} \longrightarrow M \longrightarrow 0. \] of $M$. Let $A = (a_{ij})_{i = 1, \ldots , n, j \in J}$ be the matrix of the map $\bigoplus _{j \in J} R \to R^{\oplus n}$. The ideal $\text{Fit}_ k(M)$ generated by the $(n - k) \times (n - k)$ minors of $A$ is independent of the choice of the presentation.

Proof. Let $K \subset R^{\oplus n}$ be the kernel of the surjection $R^{\oplus n} \to M$. Pick $z_1, \ldots , z_{n - k} \in K$ and write $z_ j = (z_{1j}, \ldots , z_{nj})$. Another description of the ideal $\text{Fit}_ k(M)$ is that it is the ideal generated by the $(n - k) \times (n - k)$ minors of all the matrices $(z_{ij})$ we obtain in this way.
Suppose we change the surjection into the surjection $R^{\oplus n + n'} \to M$ with kernel $K'$ where we use the original map on the first $n$ standard basis elements of $R^{\oplus n + n'}$ and $0$ on the last $n'$ basis vectors. Then the corresponding ideals are the same. Namely, if $z_1, \ldots , z_{n - k} \in K$ as above, let $z'_ j = (z_{1j}, \ldots , z_{nj}, 0, \ldots , 0) \in K'$ for $j = 1, \ldots , n - k$ and $z'_{n - k + j'} = (0, \ldots , 0, 1, 0, \ldots , 0) \in K'$ (with the $1$ in position $n + j'$) for $j' = 1, \ldots , n'$. Then we see that the ideal of $(n - k) \times (n - k)$ minors of $(z_{ij})$ agrees with the ideal of $(n + n' - k) \times (n + n' - k)$ minors of $(z'_{ij})$. This gives one of the inclusions. Conversely, given $z'_1, \ldots , z'_{n + n' - k}$ in $K'$ we can project these to $R^{\oplus n}$ to get $z_1, \ldots , z_{n + n' - k}$ in $K$. By Lemma 15.8.1 we see that the ideal generated by the $(n + n' - k) \times (n + n' - k)$ minors of $(z'_{ij})$ is contained in the ideal generated by the $(n - k) \times (n - k)$ minors of $(z_{ij})$. This gives the other inclusion. Let $R^{\oplus m} \to M$ be another surjection with kernel $L$. By Schanuel's lemma (Algebra, Lemma 10.109.1) and the results of the previous paragraph, we may assume $m = n$ and that there is an isomorphism $R^{\oplus n} \to R^{\oplus m}$ commuting with the surjections to $M$. Let $C = (c_{li})$ be the (invertible) matrix of this map (it is a square matrix as $n = m$). Then given $z'_1, \ldots , z'_{n - k} \in L$ as above we can find $z_1, \ldots , z_{n - k} \in K$ with $z_1' = Cz_1, \ldots , z'_{n - k} = Cz_{n - k}$. By Lemma 15.8.1 we get one of the inclusions. By symmetry we get the other. $\square$ Definition 15.8.3. Let $R$ be a ring. Let $M$ be a finite $R$-module. Let $k \geq 0$. The $k$th Fitting ideal of $M$ is the ideal $\text{Fit}_ k(M)$ constructed in Lemma 15.8.2. Set $\text{Fit}_{-1}(M) = 0$.
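As a quick sanity check on the definition (an illustration added here, not part of the Stacks project text), take $M = R/(f)$ for some $f \in R$. The presentation $R \xrightarrow {f} R \to M \to 0$ has $1 \times 1$ matrix $(f)$, so \[ \text{Fit}_0(M) = (f), \qquad \text{Fit}_ k(M) = R \text{ for } k \geq 1, \] where $\text{Fit}_1(M) = R$ comes from the convention $I_0(A) = R$ in Lemma 15.8.1.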
Since the Fitting ideals are the ideals of minors of a big matrix (numbered in reverse ordering from the ordering in Lemma 15.8.1) we see that \[ 0 = \text{Fit}_{-1}(M) \subset \text{Fit}_0(M) \subset \text{Fit}_1(M) \subset \ldots \subset \text{Fit}_ t(M) = R \] for some $t \gg 0$. Here are some basic properties of Fitting ideals. Example 15.8.5. Let $R$ be a ring. The Fitting ideals of the finite free module $M = R^{\oplus n}$ are $\text{Fit}_ k(M) = 0$ for $k < n$ and $\text{Fit}_ k(M) = R$ for $k \geq n$. Lemma 15.8.9. Let $R$ be a ring. Let $M$ be a finitely presented $R$-module. Let $k \geq 0$. Assume that $\text{Fit}_ k(M) = (f)$ for some nonzerodivisor $f \in R$ and $\text{Fit}_{k - 1}(M) = 0$. Then $M$ has projective dimension $\leq 1$, $M' = \mathop{\mathrm{Ker}}(f : M \to M)$ is the $f$-power torsion submodule of $M$, $M'$ has projective dimension $\leq 1$, $M/M'$ is finite locally free of rank $k$, and $M \cong M/M' \oplus M'$. Proof. Choose a presentation \[ R^{\oplus m} \xrightarrow {A} R^{\oplus n} \to M \to 0 \] for some matrix $A$ with coefficients in $R$. We first prove the lemma when $R$ is local. Set $M' = \{ x \in M \mid fx = 0\} $ as in the statement. By Lemma 15.8.8 we can choose $x_1, \ldots , x_ k \in M$ which generate $M/M'$. Then $x_1, \ldots , x_ k$ generate $M_ f = (M/M')_ f$. Hence, if there is a relation $\sum a_ ix_ i = 0$ in $M$, then we see that $a_1, \ldots , a_ k$ map to zero in $R_ f$ since otherwise $\text{Fit}_{k - 1}(M) R_ f = \text{Fit}_{k - 1}(M_ f)$ would be nonzero. Since $f$ is a nonzerodivisor, we conclude $a_1 = \ldots = a_ k = 0$. Thus $M \cong R^{\oplus k} \oplus M'$. After a change of basis in our presentation above, we may assume the first $n - k$ basis vectors of $R^{\oplus n}$ map into the summand $M'$ of $M$ and the last $k$ basis vectors of $R^{\oplus n}$ map to basis elements of the summand $R^{\oplus k}$ of $M$. Having done so, the last $k$ rows of the matrix $A$ vanish.
In this way we see that, replacing $M$ by $M'$, $k$ by $0$, $n$ by $n - k$, and $A$ by the submatrix where we delete the last $k$ rows, we reduce to the case discussed in the next paragraph. Assume $R$ is local, $k = 0$, and $M$ annihilated by $f$. Now the $0$th Fitting ideal of $M$ is $(f)$ and is generated by the $n \times n$ minors of the matrix $A$ of size $n \times m$. (This in particular implies $m \geq n$.) Since $R$ is local, some $n \times n$ minor of $A$ is $uf$ for a unit $u \in R$. After renumbering we may assume this minor is the first one. Moreover, we know all other $n \times n$ minors of $A$ are divisible by $f$. Write $A = (A_1 A_2)$ in block form where $A_1$ is an $n \times n$ matrix and $A_2$ is an $n \times (m - n)$ matrix. By Algebra, Lemma 10.15.6 applied to the transpose of $A$ (!) we find there exists an $n \times n$ matrix $B$ such that \[ BA = B(A_1 A_2) = f \left( \begin{matrix} u 1_{n \times n} & C \end{matrix} \right) \] for some $n \times (m - n)$ matrix $C$ with coefficients in $R$. Then we first conclude $BA_1 = fu 1_{n \times n}$. Thus \[ BA_2 = fC = u^{-1}fuC = u^{-1}BA_1C \] Since the determinant of $B$ is a nonzerodivisor we conclude that $A_2 = u^{-1}A_1C$. Therefore the image of $A$ is equal to the image of $A_1$ which is isomorphic to $R^{\oplus n}$ because the determinant of $A_1$ is a nonzerodivisor. Hence $M$ has projective dimension $\leq 1$. We return to the case of a general ring $R$. By the local case we see that $M/M'$ is a finite locally free module of rank $k$, see Algebra, Lemma 10.78.2. Hence the extension $0 \to M' \to M \to M/M' \to 0$ splits. It follows that $M'$ is a finitely presented module. Choose a short exact sequence $0 \to K \to R^{\oplus a} \to M' \to 0$. Then $K$ is a finite $R$-module, see Algebra, Lemma 10.5.3. By the local case we see that $K_\mathfrak p \cong R_\mathfrak p^{\oplus a}$ for all primes. Hence by Algebra, Lemma 10.78.2 again we see that $K$ is finite locally free of rank $a$. 
It follows that $M'$ has projective dimension $\leq 1$ and the lemma is proved. $\square$

Comment #3040 by SE user on December 16, 2017 at 07:30: Lemma 15.8.4, Proof of (6): should "killed by $n$" be "killed by $f$"?

Comment #3385 by shanbei on May 27, 2018 at 13:01: In the third line of the proof of (7) in Lemma 07ZA, perhaps you meant that the rank of the image is less than $n$ instead of $\leq n$?
FurnMove Challenge 2022 Held in conjunction with the CVPR 2022 Embodied AI Workshop Welcome to the 2022 AI2-THOR Furniture Moving (FurnMove) Challenge hosted at the CVPR 2022 Embodied AI Workshop. The goal of this challenge is to develop collaborative embodied agents. Particularly, two agents need to work together to move a piece of furniture through a living room to a goal. We are interested in the more realistic decentralized setting enabled via low-bandwidth communication. We'll be updating more details on March 1st, 2022. So, stay tuned for updates! Given only their egocentric visual observations, agents jointly hold a lifted piece of furniture in a living room scene and must collaborate to move it to a visually distinct goal location. As a piece of furniture cannot be moved without both agents agreeing on the direction, agents must explicitly coordinate at every timestep. In FurnMove, each agent at every timestep receives an egocentric observation (a 3\times 84 \times 84 RGB image) from AI2-THOR. In addition, agents are allowed to communicate with other agents at each timestep via a low-bandwidth communication channel. Based on their local observation and communication, each agent must take an action from the set A. Allowed Observations At test time each agent receives only an egocentric observation (a 3\times 84 \times 84 RGB image) from AI2-THOR. Do not exploit the metadata in test scenes: you cannot use additional depth, mask, metadata info, etc. from the simulator on test scenes. However, during training you are free to use additional info for things like auxiliary losses. If you use additional sensory information from AI2-THOR as input (e.g., depth, segmentation masks, class masks, panoramic images) during test time, your entry will not be considered. For official consideration in the CVPR 2022 challenge, agents should use only RGB input.
Each agent has an action space defined by A = A^{NAV} ∪ A^{MWO} ∪ A^{MO} ∪ A^{RO} where A^{NAV} = \lbrace \text{MoveAhead}, \text{RotateLeft}, \text{RotateRight}, \text{Pass}\rbrace is used to independently move each agent, A^{MWO} = \lbrace \text{MoveWithObject}X \mid X \in \lbrace \text{Ahead}, \text{Right}, \text{Left}, \text{Back} \rbrace\rbrace is used to move the lifted object and the agents simultaneously in the same direction, A^{MO} = \lbrace \text{MoveObject}X \mid X \in \lbrace \text{Ahead}, \text{Right}, \text{Left}, \text{Back} \rbrace\rbrace is used to move the lifted object while the agents stay in place, and A^{RO} = \lbrace \text{RotateObjectRight} \rbrace is used to rotate the lifted object clockwise. So the two agents, together, have a joint action space of 13 \times 13 = 169 actions. The coordination of this action space is defined by the following coordination matrix: We assume that all movement actions for agents and the lifted object result in a displacement of 0.25 meters and all rotation actions result in a rotation of 90 degrees counter-clockwise when viewing the agents from above. FurnMove Challenge Announced Challenge Code and Data Release To participate in the challenge, we'll put up a FurnMove Challenge repository on March 1st, 2022. Meanwhile, you can get started with the ECCV 2020 repository available on GitHub at: /allenai/cordial-sync Winners of the challenge will have the opportunity to present their work at the CVPR 2022 Embodied AI Workshop. More details will be updated on March 1st, 2022. To cite this work, please cite our papers on multi-agent furniture moving: @InProceedings{CordialSync, author = {Jain, Unnat and Weihs, Luca and Kolve, Eric and Farhadi, Ali and Lazebnik, Svetlana and Kembhavi, Aniruddha and Schwing, Alexander G.}, title = {A Cordial Sync: Going Beyond Marginal Policies For Multi-Agent Embodied Tasks}, note = {first two authors contributed equally}, @InProceedings{TwoBody, The FurnMove challenge organizers are listed below:
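The action space above can be enumerated in a few lines. This is a minimal sketch (not the official challenge code) that builds the 13 single-agent actions from the four subsets named in the text and takes their Cartesian product for the joint space:

```python
from itertools import product

# The four action subsets, with names taken from the challenge description.
NAV = ["MoveAhead", "RotateLeft", "RotateRight", "Pass"]
MWO = [f"MoveWithObject{d}" for d in ("Ahead", "Right", "Left", "Back")]
MO = [f"MoveObject{d}" for d in ("Ahead", "Right", "Left", "Back")]
RO = ["RotateObjectRight"]

ACTIONS = NAV + MWO + MO + RO  # 13 single-agent actions

# The joint action space of the two agents is the Cartesian product.
JOINT = list(product(ACTIONS, ACTIONS))

print(len(ACTIONS), len(JOINT))  # 13 169
```

This confirms the 13 × 13 = 169 joint actions stated above; a real agent would additionally mask joint actions forbidden by the coordination matrix.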
Departamento de Parasitología, Facultad de Medicina Veterinaria y Zootecnia, Universidad Nacional Autónoma de México, Ciudad de México, México.

Ibarra-Velarde, F., Vera-Montenegro, Y., Alcala-Canto, Y., Flores-Ramos, M. and Saldaña-Hernández, N. (2019) Comparative Efficacy of Three Commercial Ectoparasiticides against Fleas in Naturally Infested Dogs. Pharmacology & Pharmacy, 10, 234-243. doi: 10.4236/pp.2019.105020.

\text{Efficacy}=\frac{\text{Arithmetic mean of flea counts}\left(\text{Control}\right)-\text{Arithmetic mean of flea counts}\left(\text{Treated}\right)}{\text{Arithmetic mean of flea counts}\left(\text{Control}\right)}\times 100
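The efficacy formula above is a percent reduction of treated versus control flea counts. A small sketch applying it (the counts below are made-up illustration data, not from the paper):

```python
def efficacy(control_counts, treated_counts):
    """Percent efficacy: reduction of the treated arithmetic mean
    relative to the control arithmetic mean."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(control_counts) - mean(treated_counts)) / mean(control_counts) * 100

# Hypothetical counts: control dogs average 50 fleas, treated dogs average 2.
print(round(efficacy([40, 50, 60], [2, 4, 0]), 1))  # 96.0
```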
Lemma 15.8.4 (07ZA)—The Stacks project Lemma 15.8.4. Let $R$ be a ring. Let $M$ be a finite $R$-module. If $M$ can be generated by $n$ elements, then $\text{Fit}_ n(M) = R$. Given a second finite $R$-module $M'$ we have \[ \text{Fit}_ l(M \oplus M') = \sum \nolimits _{k + k' = l} \text{Fit}_ k(M)\text{Fit}_{k'}(M') \] If $R \to R'$ is a ring map, then $\text{Fit}_ k(M \otimes _ R R')$ is the ideal of $R'$ generated by the image of $\text{Fit}_ k(M)$. If $M$ is of finite presentation, then $\text{Fit}_ k(M)$ is a finitely generated ideal. If $M \to M'$ is a surjection, then $\text{Fit}_ k(M) \subset \text{Fit}_ k(M')$. We have $\text{Fit}_0(M) \subset \text{Ann}_ R(M)$. We have $V(\text{Fit}_0(M)) = \text{Supp}(M)$. Proof. Part (1) follows from the fact that $I_0(A) = R$ in Lemma 15.8.1. Part (2) follows from the corresponding statement in Lemma 15.8.1. Part (3) follows from the fact that $\otimes _ R R'$ is right exact, so the base change of a presentation of $M$ is a presentation of $M \otimes _ R R'$. Proof of (4). Let $R^{\oplus m} \xrightarrow {A} R^{\oplus n} \to M \to 0$ be a presentation. Then $\text{Fit}_ k(M)$ is the ideal generated by the $(n - k) \times (n - k)$ minors of the matrix $A$, and there are only finitely many such minors. Part (5) is immediate from the definition. Proof of (6). Choose a presentation of $M$ with matrix $A$ as in Lemma 15.8.2. Let $J' \subset J$ be a subset of cardinality $n$. It suffices to show that $f = \det (a_{ij})_{i = 1, \ldots , n, j \in J'}$ annihilates $M$. This is clear because the cokernel of \[ R^{\oplus n} \xrightarrow {A' = (a_{ij})_{i = 1, \ldots , n, j \in J'}} R^{\oplus n} \to M \to 0 \] is killed by $f$ as there is a matrix $B$ with $A' B = f1_{n \times n}$. Proof of (7). Choose a presentation of $M$ with matrix $A$ as in Lemma 15.8.2.
By Nakayama's lemma (Algebra, Lemma 10.20.1) we have \[ M_\mathfrak p \not= 0 \Leftrightarrow M \otimes _ R \kappa (\mathfrak p) \not= 0 \Leftrightarrow \text{rank}(\text{image }A\text{ in }\kappa (\mathfrak p)) < n \] Clearly $\text{Fit}_0(M)$ exactly cuts out the set of primes with this property. $\square$

Comment #2062 by Kestutis Cesnavicius on June 11, 2016 at 15:38: On the left hand side of the equation in (2) the subscript $k$ should be $l$.
Consider the function defined as follows: Determine the differentiability of the function at x = −2 and x = 8. Explain why the function is or is not differentiable at each of these points. When investigating whether a function is differentiable at a point, there are two things to consider: 1. Differentiability implies continuity. So you need to make sure that d(x) is continuous before you ask if it is differentiable. (Remember, there are 3 conditions of continuity that must be checked.) 2. Does the derivative exist there? In other words, does the slope from the right agree with the slope from the left, and agree with the actual slope? Also, are those slopes finite? Determine all other points at which d(x) might not be differentiable, and check the existence of the derivative at each point. For each point at which d(x) is not differentiable, explain why not. Check the boundary points x = 0 and x = 16.
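Since the definition of d(x) is not reproduced above, here is a small sketch of the left/right slope check using a stand-in function with a known corner (the absolute value at x = 0); disagreeing one-sided difference quotients signal non-differentiability:

```python
def one_sided_slopes(f, a, h=1e-6):
    """Approximate the left and right difference quotients of f at a."""
    left = (f(a) - f(a - h)) / h
    right = (f(a + h) - f(a)) / h
    return left, right

d = abs  # stand-in for d(x); it has a corner at x = 0
left, right = one_sided_slopes(d, 0.0)
print(left, right)  # -1.0 1.0 : the one-sided slopes disagree, so not differentiable
```

The same check, applied at x = −2, 0, 8, and 16 for the actual piecewise function, answers the questions above.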
Table 5. Conditional probabilities of parental mating type and triad genotypes given the sampling scheme of using the affected child as a proband

   Mating type (MT)   Child (C)   P(MT, C | D)             P(MT | D)                        Obs
1. AA × AA            AA          p^4 ψ2 / R               p^4 ψ2 / R                       n1
2. AA × Aa            AA          4p^3 q ψ2 / (2R)         4p^3 q (ψ1 + ψ2) / (2R)          n2
                      Aa          4p^3 q ψ1 / (2R)                                          n3
3. AA × aa            Aa          2p^2 q^2 ψ1 / R          2p^2 q^2 ψ1 / R                  n4
4. Aa × Aa            AA          4p^2 q^2 ψ2 / (4R)       4p^2 q^2 (ψ2 + 2ψ1 + 1) / (4R)   n5
                      Aa          4p^2 q^2 (2ψ1) / (4R)                                     n6
                      aa          4p^2 q^2 / (4R)                                           n7
5. Aa × aa            Aa          4p q^3 ψ1 / (2R)         4p q^3 (ψ1 + 1) / (2R)           n8
                      aa          4p q^3 / (2R)                                             n9
6. aa × aa            aa          q^4 / R                  q^4 / R                          n10
Total                             1                        1                                n

Abbreviation: MT = mating type, Obs = observation. R = p^2 ψ2 + 2pq ψ1 + q^2.
A technology group wants to determine if bringing a laptop on a trip that involves flying is related to people being on business trips. Data for 1000 random passengers at an airport was collected and summarized in the table below.

                             Laptop   No laptop
Traveling for business         236         274
Not traveling for business      93         397

What is the probability of traveling with a laptop if someone is traveling for business? How many total people are traveling for business? How many of those people are traveling with a laptop? Does it appear that there is an association between bringing a laptop on a trip that involves flying and traveling for business?

There are 236 + 274 = 510 business travelers, of whom 236 have a laptop, so \text{P}(\text{laptop given business trip}) = 236/510 \approx 0.46, while \text{P}(\text{laptop}) = (236 + 93)/1000 = 0.329. Since \text{P}(\text{laptop})\ne\text{P}(\text{laptop given business trip}), they are associated.
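The comparison can be checked directly from the table counts:

```python
# Counts taken from the two-way table above.
laptop_business, no_laptop_business = 236, 274
laptop_other, no_laptop_other = 93, 397

total = laptop_business + no_laptop_business + laptop_other + no_laptop_other
p_laptop = (laptop_business + laptop_other) / total
p_laptop_given_business = laptop_business / (laptop_business + no_laptop_business)

print(total, round(p_laptop, 3), round(p_laptop_given_business, 3))
# 1000 0.329 0.463
```

Because 0.329 and 0.463 differ substantially, the two events appear associated.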
Log minimal models according to Shokurov

Following Shokurov’s ideas, we give a short proof of the following klt version of his result: termination of terminal log flips in dimension d implies that any klt pair of dimension d has a log minimal model or a Mori fibre space. Thus, in particular, any klt pair of dimension 4 has a log minimal model or a Mori fibre space.

Caucher Birkar. "Log minimal models according to Shokurov." Algebra & Number Theory 3 (8), 951-958, 2009. https://doi.org/10.2140/ant.2009.3.951

Received: 12 March 2009; Revised: 7 September 2009; Accepted: 6 October 2009; Published: 2009

Keywords: minimal models, Mori fibre spaces
In mathematics, a superperfect number is a positive integer n that satisfies {\displaystyle \sigma ^{2}(n)=\sigma (\sigma (n))=2n\,,} where σ is the divisor function (the sum of the positive divisors of n). Superperfect numbers are a generalization of perfect numbers. The term was coined by D. Suryanarayana (1969).[1] The first few superperfect numbers are: 2, 4, 16, 64, 4096, 65536, 262144, 1073741824, ... (sequence A019279 in the OEIS). To illustrate: it can be seen that 16 is a superperfect number as σ(16) = 1 + 2 + 4 + 8 + 16 = 31, and σ(31) = 1 + 31 = 32, thus σ(σ(16)) = 32 = 2 × 16. If n is an even superperfect number, then n must be a power of 2, say 2^k, such that 2^(k+1) − 1 is a Mersenne prime.[1][2] It is not known whether there are any odd superperfect numbers. An odd superperfect number n would have to be a square number such that either n or σ(n) is divisible by at least three distinct primes.[2] There are no odd superperfect numbers below 7×10^24.[1] Perfect and superperfect numbers are examples of the wider class of m-superperfect numbers, which satisfy {\displaystyle \sigma ^{m}(n)=2n,} corresponding to m = 1 and 2 respectively.
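The defining property σ(σ(n)) = 2n is easy to check by brute force. A minimal sketch reproducing the start of the sequence listed above:

```python
def sigma(n):
    """Sum of the positive divisors of n (naive trial division)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

# Superperfect numbers below 100: those n with sigma(sigma(n)) == 2n.
superperfect = [n for n in range(1, 100) if sigma(sigma(n)) == 2 * n]
print(superperfect)  # [2, 4, 16, 64]
```

Each term found is indeed a power of 2 whose successor power minus one (3, 7, 31, 127) is a Mersenne prime, matching the characterization above.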
For m ≥ 3 there are no even m-superperfect numbers.[1] The m-superperfect numbers are in turn examples of (m,k)-perfect numbers which satisfy[3] {\displaystyle \sigma ^{m}(n)=kn\,.} With this notation, perfect numbers are (1,2)-perfect, multiperfect numbers are (1,k)-perfect, superperfect numbers are (2,2)-perfect and m-superperfect numbers are (m,2)-perfect.[4] Examples of classes of (m,k)-perfect numbers are:

m   k    (m,k)-perfect numbers                                                        OEIS
2   2    2, 4, 16, 64, 4096, 65536, 262144                                            A019279
2   3    8, 21, 512                                                                   A019281
2   4    15, 1023, 29127                                                              A019282
2   6    42, 84, 160, 336, 1344, 86016, 550095, 1376256, 5505024                      A019283
2   7    24, 1536, 47360, 343976                                                      A019284
2   8    60, 240, 960, 4092, 16368, 58254, 61440, 65472, 116508, 466032, 710400, 983040, 1864128, 3932160, 4190208, 67043328, 119304192, 268173312, 1908867072   A019285
2   9    168, 10752, 331520, 691200, 1556480, 1612800, 106151936                      A019286
2   10   480, 504, 13824, 32256, 32736, 1980342, 1396617984, 3258775296               A019287
2   11   4404480, 57669920, 238608384                                                 A019288
2   12   2200380, 8801520, 14913024, 35206080, 140896000, 459818240, 775898880, 2253189120   A019289
3   any  12, 14, 24, 52, 98, 156, 294, 684, 910, 1368, 1440, 4480, 4788, 5460, 5840, ...    A019292
4   any  2, 3, 4, 6, 8, 10, 12, 15, 18, 21, 24, 26, 32, 39, 42, 60, 65, 72, 84, 96, 160, 182, ...   A019293

Notes:
[1] Guy (2004) p. 99.
[2] Weisstein, Eric W. "Superperfect Number". MathWorld.
[3] Cohen & te Riele (1996)
[4] Guy (2007) p. 79.

Superperfect Number at PlanetMath.

References:
Cohen, G. L.; te Riele, H. J. J. (1996). "Iterating the sum-of-divisors function". Experimental Mathematics. 5 (2): 93-100. doi:10.1080/10586458.1996.10504580. Zbl 0866.11003.
Guy, Richard K. (2004). Unsolved Problems in Number Theory (3rd ed.). Springer-Verlag. B9. ISBN 978-0-387-20860-2. Zbl 1058.11001.
Sándor, József; Mitrinović, Dragoslav S.; Crstici, Borislav, eds. (2006). Handbook of Number Theory I. Dordrecht: Springer-Verlag. ISBN 1-4020-4215-9. Zbl 1151.11300.
Suryanarayana, D. (1969). "Super perfect numbers". Elem. Math. 24: 16-17.
Zbl 0165.36001.
Lemma 15.8.7 (07ZD)—The Stacks project Lemma 15.8.7. Let $R$ be a ring. Let $M$ be a finite $R$-module. Let $r \geq 0$. The following are equivalent: (1) $M$ is finite locally free of rank $r$ (Algebra, Definition 10.78.1), (2) $\text{Fit}_{r - 1}(M) = 0$ and $\text{Fit}_ r(M) = R$, and (3) $\text{Fit}_ k(M) = 0$ for $k < r$ and $\text{Fit}_ k(M) = R$ for $k \geq r$. Proof. It is immediate that (2) is equivalent to (3) because the Fitting ideals form an increasing sequence of ideals. Since the formation of $\text{Fit}_ k(M)$ commutes with base change (Lemma 15.8.4) we see that (1) implies (2) by Example 15.8.5 and glueing results (Algebra, Section 10.23). Conversely, assume (2). By Lemma 15.8.6 we may assume that $M$ is generated by $r$ elements. Thus we get a presentation $\bigoplus _{j \in J} R \to R^{\oplus r} \to M \to 0$. But now the assumption that $\text{Fit}_{r - 1}(M) = 0$ implies that all entries of the matrix of the map $\bigoplus _{j \in J} R \to R^{\oplus r}$ are zero. Thus $M$ is free. $\square$

Comment #1423 by Kestutis Cesnavicius on April 17, 2015 at 18:26: In (1), "locally free of rank" should be "locally free of rank $r$". Reply: Oops! Thanks, see here.
Many-valued logic

Many-valued logic (also multi- or multiple-valued logic) refers to a propositional calculus in which there are more than two truth values. Traditionally, in Aristotle's logical calculus, there were only two possible values (i.e., "true" and "false") for any proposition. Classical two-valued logic may be extended to n-valued logic for n greater than 2. Those most popular in the literature are three-valued (e.g., Łukasiewicz's and Kleene's, which accept the values "true", "false", and "unknown"), four-valued, nine-valued, the finite-valued (finitely-many valued) with more than three values, and the infinite-valued (infinitely-many-valued), such as fuzzy logic and probability logic.

Kleene (strong) K3 and Priest logic P3

Kleene's "(strong) logic of indeterminacy" K3 (sometimes {\displaystyle K_{3}^{S}} ) and Priest's "logic of paradox" P3 add a third "undefined" or "indeterminate" truth value I. The truth functions for negation (¬), conjunction (∧), disjunction (∨), implication (→K), and biconditional (↔K) are given by:[2]

Bochvar's internal three-valued logic

Another logic is Dmitry Bochvar's "internal" three-valued logic {\displaystyle B_{3}^{I}} , also called Kleene's weak three-valued logic. Except for negation and biconditional, its truth tables are all different from the above.[4]

Belnap logic (B4)

Gödel logics Gk and G∞

In 1932 Gödel defined[5] a family {\displaystyle G_{k}} of many-valued logics, with finitely many truth values {\displaystyle 0,{\tfrac {1}{k-1}},{\tfrac {2}{k-1}},\ldots ,{\tfrac {k-2}{k-1}},1} . For example, {\displaystyle G_{3}} has the truth values {\displaystyle 0,{\tfrac {1}{2}},1} and {\displaystyle G_{4}} has {\displaystyle 0,{\tfrac {1}{3}},{\tfrac {2}{3}},1} . In a similar manner he defined a logic with infinitely many truth values, {\displaystyle G_{\infty }} , in which the truth values are all the real numbers in the interval {\displaystyle [0,1]} . The designated truth value in these logics is 1.
The conjunction {\displaystyle \wedge } and the disjunction {\displaystyle \vee } are defined respectively as the minimum and maximum of the operands:

{\displaystyle {\begin{aligned}u\wedge v&:=\min\{u,v\}\\u\vee v&:=\max\{u,v\}\end{aligned}}}

The negation {\displaystyle \neg _{G}} and the implication {\displaystyle {\xrightarrow[{G}]{}}} are defined as follows:

{\displaystyle {\begin{aligned}\neg _{G}u&={\begin{cases}1,&{\text{if }}u=0\\0,&{\text{if }}u>0\end{cases}}\\[3pt]u\mathrel {\xrightarrow[{G}]{}} v&={\begin{cases}1,&{\text{if }}u\leq v\\v,&{\text{if }}u>v\end{cases}}\end{aligned}}}

Łukasiewicz logics Lv and L∞

The implication {\displaystyle {\xrightarrow[{L}]{}}} and the negation {\displaystyle {\underset {L}{\neg }}} were defined by Jan Łukasiewicz through the following functions:

{\displaystyle {\begin{aligned}{\underset {L}{\neg }}u&:=1-u\\u\mathrel {\xrightarrow[{L}]{}} v&:=\min\{1,1-u+v\}\end{aligned}}}

At first Łukasiewicz used these definitions in 1920 for his three-valued logic {\displaystyle L_{3}}, with truth values {\displaystyle 0,{\frac {1}{2}},1}. In 1922 he developed a logic with infinitely many values {\displaystyle L_{\infty }}, in which the truth values spanned the real numbers in the interval {\displaystyle [0,1]}. In both cases the designated truth value was 1.[6]

By adopting truth values defined in the same way as for Gödel logics {\displaystyle 0,{\tfrac {1}{v-1}},{\tfrac {2}{v-1}},\ldots ,{\tfrac {v-2}{v-1}},1}, it is possible to create a finitely-valued family of logics {\displaystyle L_{v}}, the abovementioned {\displaystyle L_{\infty }}, and the logic {\displaystyle L_{\aleph _{0}}}, in which the truth values are given by the rational numbers in the interval {\displaystyle [0,1]}.
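A minimal sketch comparing the Gödel and Łukasiewicz connectives defined above (the function names are mine; the formulas are transcribed directly from the definitions):

```python
# Gödel connectives on [0, 1].
def godel_neg(u):
    return 1.0 if u == 0 else 0.0          # collapses every u > 0 to 0

def godel_imp(u, v):
    return 1.0 if u <= v else v

# Łukasiewicz connectives on [0, 1].
def luk_neg(u):
    return 1 - u                           # involutive: ¬¬u = u

def luk_imp(u, v):
    return min(1, 1 - u + v)

# Both implications return 1 whenever u <= v, but differ when u > v:
assert godel_imp(1.0, 0.5) == 0.5          # Gödel returns v
assert luk_imp(1.0, 0.5) == 0.5            # Łukasiewicz returns 1 - u + v
assert godel_imp(0.25, 0.75) == 1.0 and luk_imp(0.25, 0.75) == 1
# The negations disagree on intermediate values:
assert godel_neg(0.5) == 0.0
assert luk_neg(0.5) == 0.5
```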
The set of tautologies in {\displaystyle L_{\infty }} is the same as the set of tautologies in {\displaystyle L_{\aleph _{0}}}.

Product logic Π

In product logic we have truth values in the interval {\displaystyle [0,1]}, a conjunction {\displaystyle \odot } and an implication {\displaystyle {\xrightarrow[{\Pi }]{}}}, defined as follows[7]

{\displaystyle {\begin{aligned}u\odot v&:=uv\\u\mathrel {\xrightarrow[{\Pi }]{}} v&:={\begin{cases}1,&{\text{if }}u\leq v\\{\frac {v}{u}},&{\text{if }}u>v\end{cases}}\end{aligned}}}

Additionally there is a negative designated value {\displaystyle {\overline {0}}} that denotes the concept of false. Through this value it is possible to define a negation {\displaystyle {\underset {\Pi }{\neg }}} and an additional conjunction {\displaystyle {\underset {\Pi }{\wedge }}} as follows:

{\displaystyle {\begin{aligned}{\underset {\Pi }{\neg }}u&:=u\mathrel {\xrightarrow[{\Pi }]{}} {\overline {0}}\\u\mathbin {\underset {\Pi }{\wedge }} v&:=u\odot \left(u\mathrel {\xrightarrow[{\Pi }]{}} v\right)\end{aligned}}}

Then {\displaystyle u\mathbin {\underset {\Pi }{\wedge }} v=\min\{u,v\}}.

Post logics Pm

In 1921 Post defined a family of logics {\displaystyle P_{m}} with (as in {\displaystyle L_{v}} and {\displaystyle G_{k}}) the truth values {\displaystyle 0,{\tfrac {1}{m-1}},{\tfrac {2}{m-1}},\ldots ,{\tfrac {m-2}{m-1}},1}. The negation {\displaystyle {\underset {P}{\neg }}}, conjunction {\displaystyle {\underset {P}{\wedge }}} and disjunction {\displaystyle {\underset {P}{\vee }}} are defined as follows:

{\displaystyle {\begin{aligned}{\underset {P}{\neg }}u&:={\begin{cases}1,&{\text{if }}u=0\\u-{\frac {1}{m-1}},&{\text{if }}u\not =0\end{cases}}\\u\mathbin {\underset {P}{\wedge }} v&:=\min\{u,v\}\\u\mathbin {\underset {P}{\vee }} v&:=\max\{u,v\}\end{aligned}}}

Rose logics

In 1951, Alan Rose defined another family of logics for systems whose truth-values form lattices.[8]

Relation to classical logic

Suszko's thesis

Functional completeness of many-valued logics

Functional completeness is a term used to describe a
special property of finite logics and algebras. A logic's set of connectives is said to be functionally complete or adequate if and only if its set of connectives can be used to construct a formula corresponding to every possible truth function.[9] An adequate algebra is one in which every finite mapping of variables can be expressed by some composition of its operations.[10] Classical logic CL = ({0,1}, ¬, →, ∨, ∧, ↔) is functionally complete, whereas no Łukasiewicz logic or infinitely-many-valued logic has this property.[10][11] We can define a finitely many-valued logic as being Ln = ({1, 2, ..., n}, ƒ1, ..., ƒm) where n ≥ 2 is a given natural number. Post (1921) proves that assuming a logic is able to produce a function of any mth order model, there is some corresponding combination of connectives in an adequate logic Ln that can produce a model of order m+1.[12]

Known applications of many-valued logic can be roughly classified into two groups.[13] The first group uses many-valued logic to solve binary problems more efficiently. For example, a well-known approach to represent a multiple-output Boolean function is to treat its output part as a single many-valued variable and convert it to a single-output characteristic function (specifically, the indicator function). Other applications of many-valued logic include design of programmable logic arrays (PLAs) with input decoders, optimization of finite state machines, testing, and verification. The second group targets the design of electronic circuits that employ more than two discrete levels of signals, such as many-valued memories, arithmetic circuits, and field programmable gate arrays (FPGAs). Many-valued circuits have a number of theoretical advantages over standard binary circuits. For example, the interconnect on and off chip can be reduced if signals in the circuit assume four or more levels rather than only two.
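The idea above of treating the output part of a multiple-output Boolean function as a single many-valued variable can be sketched as follows; the half adder and the particular packing are my own illustrative choices, not from the article:

```python
# A half adder is a 2-output Boolean function (sum, carry).
def half_adder(a, b):
    return a ^ b, a & b            # (sum bit, carry bit)

# Packing both outputs into one 4-valued variable over {0, 1, 2, 3}
# turns the circuit into a single-output many-valued function,
# which tools can then minimise as one object.
def half_adder_mv(a, b):
    s, c = half_adder(a, b)
    return 2 * c + s

# The single many-valued output over all input combinations:
table = [half_adder_mv(a, b) for a in (0, 1) for b in (0, 1)]
assert table == [0, 1, 1, 2]       # decodes back to (c, s) pairs
```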
In memory design, storing two instead of one bit of information per memory cell doubles the density of the memory in the same die size. Applications using arithmetic circuits often benefit from using alternatives to binary number systems. For example, residue and redundant number systems[14] can reduce or eliminate the ripple-through carries that are involved in normal binary addition or subtraction, resulting in high-speed arithmetic operations. These number systems have a natural implementation using many-valued circuits. However, the practicality of these potential advantages heavily depends on the availability of circuit realizations, which must be compatible or competitive with present-day standard technologies. In addition to aiding in the design of electronic circuits, many-valued logic is used extensively to test circuits for faults and defects. Basically all known automatic test pattern generation (ATG) algorithms used for digital circuit testing require a simulator that can resolve 5-valued logic (0, 1, x, D, D').[15] The additional values—x, D, and D'—represent (1) unknown/uninitialized, (2) a 0 instead of a 1, and (3) a 1 instead of a 0.

Research venues

^ Hurley, Patrick. A Concise Introduction to Logic, 9th edition. (2006).
^ (Gottwald 2005, p. 19)
^ Humberstone, Lloyd (2011). The Connectives. Cambridge, Massachusetts: The MIT Press. pp. 201. ISBN 978-0-262-01654-4.
^ a b (Bergmann 2008, p. 80)
^ Gödel, Kurt (1932). "Zum intuitionistischen Aussagenkalkül". Anzeiger der Akademie der Wissenschaften in Wien (69): 65f.
^ Kreiser, Lothar; Gottwald, Siegfried; Stelzner, Werner (1990). Nichtklassische Logik. Eine Einführung. Berlin: Akademie-Verlag. pp. 41ff–45ff. ISBN 978-3-05-000274-3.
^ Hajek, Petr: Fuzzy Logic. In: Edward N. Zalta: The Stanford Encyclopedia of Philosophy, Spring 2009. ([1])
^ Rose, Alan (December 1951). "Systems of logic whose truth-values form lattices". Mathematische Annalen. 123: 152–165. doi:10.1007/BF02054946. S2CID 119735870.
^ Smith, Nicholas (2012). Logic: The Laws of Truth. Princeton University Press. p. 124.
^ a b Malinowski, Grzegorz (1993). Many-Valued Logics. Clarendon Press. pp. 26–27.
^ Church, Alonzo (1996). Introduction to Mathematical Logic. Princeton University Press. ISBN 978-0-691-02906-1.
^ Dubrova, Elena (2002). Multiple-Valued Logic Synthesis and Optimization, in Hassoun S. and Sasao T., editors, Logic Synthesis and Verification, Kluwer Academic Publishers, pp. 89–114.
^ Meher, Pramod Kumar; Valls, Javier; Juang, Tso-Bing; Sridharan, K.; Maharatna, Koushik (2008-08-22). "50 Years of CORDIC: Algorithms, Architectures and Applications" (PDF). IEEE Transactions on Circuits & Systems I: Regular Papers (published 2009-09-09). 56 (9): 1893–1907. doi:10.1109/TCSI.2009.2025803. S2CID 5465045. Retrieved 2016-01-03.
^ Abramovici, Miron; Breuer, Melvin A.; Friedman, Arthur D. (1994). Digital Systems Testing and Testable Design. New York: Computer Science Press. p. 183. ISBN 978-0-7803-1062-9.
^ "IEEE International Symposium on Multiple-Valued Logic (ISMVL)". www.informatik.uni-trier.de/~ley.
Gottwald, Siegfried (2005). "Many-Valued Logics" (PDF). Archived from the original on 2016-03-03.
Miller, D. Michael; Thornton, Mitchell A. (2008). Multiple valued logic: concepts and representations. Synthesis lectures on digital circuits and systems. Vol. 12. Morgan & Claypool Publishers. ISBN 978-1-59829-190-2.
Yaroslav Shramko and Heinrich Wansing (2020). "Suszko's Thesis". Stanford Encyclopedia of Philosophy.
Flocking (behavior) - Wikipedia

Flocking is the behavior exhibited when a group of birds, called a flock, are foraging or in flight.

[Image: Two flocks of common cranes]
[Image: A swarm-like flock of starlings]

Computer simulations and mathematical models that have been developed to emulate the flocking behaviors of birds can also generally be applied to the "flocking" behavior of other species. As a result, the term "flocking" is sometimes applied, in computer science, to species other than birds. This article is about the modelling of flocking behavior. From the perspective of the mathematical modeller, "flocking" is the collective motion by a group of self-propelled entities and is a collective animal behavior exhibited by many living beings such as birds, fish, bacteria, and insects.[1] It is considered an emergent behavior arising from simple rules that are followed by individuals and does not involve any central coordination. There are parallels with the shoaling behavior of fish, the swarming behavior of insects, and herd behavior of land animals. During the winter months, starlings are known for aggregating into huge flocks of hundreds to thousands of individuals, murmurations, which, when they take flight together, render large displays of intriguing swirling patterns in the skies above observers.

Flocking behavior was simulated on a computer in 1987 by Craig Reynolds with his simulation program, Boids.[2] This program simulates simple agents (boids) that are allowed to move according to a set of basic rules. The result is akin to a flock of birds, a school of fish, or a swarm of insects.[3] Measurements of bird flocking have been made[4] using high-speed cameras, and a computer analysis has been made to test the simple rules of flocking mentioned above.
It is found that they generally hold true in the case of bird flocking, but the long range attraction rule (cohesion) applies to the nearest 5–10 neighbors of the flocking bird and is independent of the distance of these neighbors from the bird. In addition, there is an anisotropy with regard to this cohesive tendency, with more cohesion being exhibited towards neighbors to the sides of the bird, rather than in front or behind. This is likely due to the field of vision of the flying bird being directed to the sides rather than directly forward or backward. Another recent study is based on an analysis of high speed camera footage of flocks above Rome, and uses a computer model assuming minimal behavioural rules.[5][6][7][8]

Basic models of flocking behavior are controlled by three simple rules:

Separation: avoid crowding neighbours (short range repulsion)
Alignment: steer towards the average heading of neighbours
Cohesion: steer towards the average position of neighbours (long range attraction)

With these three simple rules, the flock moves in an extremely realistic way, creating complex motion and interaction that would be extremely hard to create otherwise.

Rule Variants

The basic model has been extended in several different ways since Reynolds proposed it. For instance, Delgado-Mata et al.[9] extended the basic model to incorporate the effects of fear. Olfaction was used to transmit emotion between animals, through pheromones modelled as particles in a free expansion gas. Hartman and Benes[10] introduced a complementary force to the alignment that they call the change of leadership. This steer defines the chance of the bird to become a leader and try to escape.
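A minimal, pure-Python sketch of one update step implementing the three basic rules above (separation, alignment, cohesion); the weights, radius, and data layout are arbitrary illustrative choices, not Reynolds' original parameters:

```python
import math

def step(boids, radius=5.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One synchronous update. boids: list of dicts with 'pos' and 'vel'
    as (x, y) tuples. Each boid only looks at neighbours within `radius`."""
    new = []
    for b in boids:
        near = [o for o in boids
                if o is not b and math.dist(b["pos"], o["pos"]) < radius]
        vx, vy = b["vel"]
        if near:
            n = len(near)
            cx = sum(o["pos"][0] for o in near) / n   # neighbours' centre
            cy = sum(o["pos"][1] for o in near) / n
            ax = sum(o["vel"][0] for o in near) / n   # average heading
            ay = sum(o["vel"][1] for o in near) / n
            # separation: steer away from the neighbours' centre;
            # alignment: steer toward their average heading;
            # cohesion: steer toward their average position.
            vx += w_sep * (b["pos"][0] - cx) + w_ali * (ax - vx) + w_coh * (cx - b["pos"][0])
            vy += w_sep * (b["pos"][1] - cy) + w_ali * (ay - vy) + w_coh * (cy - b["pos"][1])
        new.append({"pos": (b["pos"][0] + vx, b["pos"][1] + vy),
                    "vel": (vx, vy)})
    return new

# Two nearby boids influence each other; an isolated boid flies straight.
flock = [{"pos": (0.0, 0.0), "vel": (1.0, 0.0)},
         {"pos": (1.0, 0.0), "vel": (0.0, 1.0)}]
flock = step(flock)
```

Note the separation and cohesion terms here point in opposite directions and differ only in weight; fuller implementations usually apply separation only within a smaller inner radius.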
Hemelrijk and Hildenbrandt[11] used attraction, alignment, and avoidance, and extended this with a number of traits of real starlings: birds fly according to fixed wing aerodynamics, while rolling when turning (thus losing lift); they coordinate with a limited number of interaction neighbours of 7 (like real starlings); they try to stay above a sleeping site (like starlings do at dawn), and when they happen to move outwards from the sleeping site, they return to it by turning; and fourth, they move at a relatively fixed speed. The authors showed that the specifics of flying behaviour as well as large flock size and low number of interaction partners were essential to the creation of the variable shape of flocks of starlings.

In flocking simulations, there is no central control; each bird behaves autonomously. In other words, each bird has to decide for itself which flockmates to consider as its environment. Usually the environment is defined as a circle (2D) or sphere (3D) with a certain radius (representing reach). A basic implementation of a flocking algorithm has complexity {\displaystyle O(n^{2})} – each bird searches through all other birds to find those which fall into its environment. Possible improvements:

bin-lattice spatial subdivision. The entire area the flock can move in is divided into multiple bins. Each bin stores which birds it contains. Each time a bird moves from one bin to another, the lattice has to be updated. Example: 2D(3D) grid in a 2D(3D) flocking simulation.
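The bin-lattice subdivision just described can be sketched as a grid hash; this sketch assumes the interaction radius is at most the cell size, so scanning the 3×3 block of cells around a bird suffices (all names are mine):

```python
from collections import defaultdict
import math

def build_bins(positions, cell):
    """Hash each bird index into the grid cell containing its position."""
    bins = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        bins[(int(x // cell), int(y // cell))].append(i)
    return bins

def neighbours(i, positions, bins, cell, radius):
    """Neighbours of bird i within `radius`, scanning only the 3x3 block
    of cells around its own cell. Requires radius <= cell."""
    x, y = positions[i]
    bx, by = int(x // cell), int(y // cell)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in bins.get((bx + dx, by + dy), ()):
                if j != i and math.dist(positions[i], positions[j]) < radius:
                    out.append(j)
    return out

pts = [(0.0, 0.0), (1.0, 1.0), (9.0, 9.0)]
bins = build_bins(pts, cell=2.0)
assert neighbours(0, pts, bins, cell=2.0, radius=2.0) == [1]
assert neighbours(2, pts, bins, cell=2.0, radius=2.0) == []
```

When a bird moves, only its old and new cells need updating, so a neighbour query no longer touches all n birds.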
This reduces the complexity to {\displaystyle O(nk)}, where k is the number of surrounding bins to consider, once the bird's bin has been found in {\displaystyle O(1)}.

Lee Spector, Jon Klein, Chris Perry and Mark Feinstein studied the emergence of collective behavior in evolutionary computation systems.[12] Bernard Chazelle proved that under the assumption that each bird adjusts its velocity and position to the other birds within a fixed radius, the time it takes to converge to a steady state is an iterated exponential of height logarithmic in the number of birds. This means that if the number of birds is large enough, the convergence time will be so great that it might as well be infinite.[13] This result applies only to convergence to a steady state. For example, arrows fired into the air at the edge of a flock will cause the whole flock to react more rapidly than can be explained by interactions with neighbors, which are slowed down by the time delay in the bird's central nervous systems—bird-to-bird-to-bird.

Flock-like behavior in humans may occur when people are drawn to a common focal point or when repelled, such as a crowd fleeing from the sound of gunfire. In Cologne, Germany, two biologists from the University of Leeds demonstrated a flock-like behavior in humans. The group of people exhibited a very similar behavioral pattern to that of a flock, where if 5% of the flock changed direction the others would follow suit. When one person was designated as a predator and everyone else was to avoid him, the flock behaved very much like a school of fish.[14]

Flocking has also been considered as a means of controlling the behavior of Unmanned Air Vehicles (UAVs).[15] Flocking is a common technology in screensavers, and has found its use in animation. Flocking has been used in many films[16] to generate crowds which move more realistically. Tim Burton's Batman Returns (1992) featured flocking bats. Flocking behaviour has been used for other interesting applications.
It has been applied to automatically program Internet multi-channel radio stations.[17] It has also been used for visualizing information[18] and for optimization tasks.[19] ^ O'Loan, OJ; Evans, MR (1999). "Alternating steady state in one-dimensional flocking". Journal of Physics A: Mathematical and General. IOP Publishing. 32 (8): L99. arXiv:cond-mat/9811336. Bibcode:1999JPhA...32L..99O. doi:10.1088/0305-4470/32/8/002. S2CID 7642063. ^ Reynolds, Craig W. (1987). "Flocks, herds and schools: A distributed behavioral model.". ACM SIGGRAPH Computer Graphics. Vol. 21. pp. 25–34. ^ Feder, Toni (October 2007). "Statistical physics is for the birds". Physics Today. 60 (10): 28–30. Bibcode:2007PhT....60j..28F. doi:10.1063/1.2800090. ^ Hildenbrandt, H; Carere, C; Hemelrijk, CK (2010). "Self-organized aerial displays of thousands of starlings: a model". Behavioral Ecology. 21 (6): 1349–1359. doi:10.1093/beheco/arq149. ^ Hemelrijk, CK; Hildenbrandt, H (2011). "Some causes of the variable shape of flocks of birds". PLOS ONE. 6 (8): e22479. Bibcode:2011PLoSO...622479H. doi:10.1371/journal.pone.0022479. PMC 3150374. PMID 21829627. ^ Project Starflag ^ Swarm behaviour model by University of Groningen ^ Delgado-Mata C, Ibanez J, Bee S, et al. (2007). "On the use of Virtual Animals with Artificial Fear in Virtual Environments". New Generation Computing. 25 (2): 145–169. doi:10.1007/s00354-007-0009-5. S2CID 26078361. ^ Hartman C, Benes B (2006). "Autonomous boids". Computer Animation and Virtual Worlds. 17 (3–4): 199–206. doi:10.1002/cav.123. S2CID 15720643. ^ Hemelrijk, C. K.; Hildenbrandt, H. (2011). "Some Causes of the Variable Shape of Flocks of Birds". PLOS ONE. 6 (8): e22479. Bibcode:2011PLoSO...622479H. doi:10.1371/journal.pone.0022479. PMC 3150374. PMID 21829627. ^ Spector, L.; Klein, J.; Perry, C.; Feinstein, M. (2003). "Emergence of Collective Behavior in Evolving Populations of Flying Agents". 
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2003). Springer-Verlag. Retrieved 2007-05-01.
^ Bernard Chazelle, The Convergence of Bird Flocking, J. ACM 61 (2014).
^ "http://psychcentral.com/news/2008/02/15/herd-mentality-explained/1922.html Archived 2014-11-29 at the Wayback Machine". Retrieved on October 31st 2008.
^ Senanayake, M., Senthooran, I., Barca, J. C., Chung, H., Kamruzzaman, J., & Murshed, M. "Search and tracking algorithms for swarms of robots: A survey."
^ Gabbai, J. M. E. (2005). "Complexity and the Aerospace Industry: Understanding Emergence by Relating Structure to Performance using Multi-Agent Systems". Manchester: University of Manchester Doctoral Thesis.
^ Ibanez J, Gomez-Skarmeta AF, Blat J (2003). "DJ-boids: emergent collective behavior as multichannel radio station programming". Proceedings of the 8th international conference on Intelligent User Interfaces. pp. 248–250. doi:10.1145/604045.604089.
^ Moere A V (2004). "Time-Varying Data Visualization Using Information Flocking Boids" (PDF). Proceedings of the IEEE Symposium on Information Visualization. pp. 97–104. doi:10.1109/INFVIS.2004.65.
^ Cui Z, Shi Z (2009). "Boid particle swarm optimisation". International Journal of Innovative Computing and Applications. 2 (2): 77–85. doi:10.1504/IJICA.2009.031778.
Bouffanais, Roland (2016). Design and Control of Swarm Dynamics. SpringerBriefs in Complexity. Springer Singapore. doi:10.1007/978-981-287-751-2. ISBN 9789812877505.
Cucker, Felipe; Steve Smale (2007). "The Mathematics of Emergence" (PDF). Japanese Journal of Mathematics. 2: 197–227. doi:10.1007/s11537-007-0647-x. S2CID 2637067. Retrieved 2008-06-09.
Shen, Jackie (Jianhong) (2008). "Cucker–Smale Flocking under Hierarchical Leadership". SIAM J. Appl. Math. 68 (3): 694–719. arXiv:q-bio/0610048. doi:10.1137/060673254. S2CID 14655317. Retrieved 2008-06-09.
Fine, B.T.; D.A. Shell (2013).
"Unifying microscopic flocking motion models for virtual, robotic, and biological flock members". Auton. Robots. 35 (2–3): 195–219. doi:10.1007/s10514-013-9338-z. S2CID 14091388.
Vásárhelyi, G.; C. Virágh; G. Somorjai; T. Nepusz; A.E. Eiben; T. Vicsek (2018). "Optimized flocking of autonomous drones in confined environments". Science Robotics (published 18 July 2018). 3 (20): eaat3536. doi:10.1126/scirobotics.aat3536. PMID 33141727.
Craig Reynolds' Boids page
Iztok Lebar Bajec's fuzzy logic based flocking publications
Murmurations of starlings (BBC videos)
Drone flight captures Norfolk starlings murmuration (BBC videos)
Section 10.63: Associated primes

10.63 Associated primes

Here is the standard definition. For non-Noetherian rings and non-finite modules it may be more appropriate to use the definition in Section 10.66.

Definition 10.63.1. Let $R$ be a ring. Let $M$ be an $R$-module. A prime $\mathfrak p$ of $R$ is associated to $M$ if there exists an element $m \in M$ whose annihilator is $\mathfrak p$. The set of all such primes is denoted $\text{Ass}_ R(M)$ or $\text{Ass}(M)$.

Lemma 10.63.2. Let $R$ be a ring. Let $M$ be an $R$-module. Then $\text{Ass}(M) \subset \text{Supp}(M)$.

Proof. If $m \in M$ has annihilator $\mathfrak p$, then in particular no element of $R \setminus \mathfrak p$ annihilates $m$. Hence $m$ is a nonzero element of $M_{\mathfrak p}$, i.e., $\mathfrak p \in \text{Supp}(M)$. $\square$

Lemma 10.63.3. Let $R$ be a ring. Let $0 \to M' \to M \to M'' \to 0$ be a short exact sequence of $R$-modules. Then $\text{Ass}(M') \subset \text{Ass}(M)$ and $\text{Ass}(M) \subset \text{Ass}(M') \cup \text{Ass}(M'')$. Also $\text{Ass}(M' \oplus M'') = \text{Ass}(M') \cup \text{Ass}(M'')$.

Proof. If $m' \in M'$, then the annihilator of $m'$ viewed as an element of $M'$ is the same as the annihilator of $m'$ viewed as an element of $M$. Hence the inclusion $\text{Ass}(M') \subset \text{Ass}(M)$. Let $m \in M$ be an element whose annihilator is a prime ideal $\mathfrak p$. If there exists a $g \in R$, $g \not\in \mathfrak p$ such that $m' = gm \in M'$ then the annihilator of $m'$ is $\mathfrak p$. If there does not exist a $g \in R$, $g \not\in \mathfrak p$ such that $gm \in M'$, then the annihilator of the image $m'' \in M''$ of $m$ is $\mathfrak p$. This proves the inclusion $\text{Ass}(M) \subset \text{Ass}(M') \cup \text{Ass}(M'')$. We omit the proof of the final statement. $\square$

Lemma 10.63.5. Let $R$ be a Noetherian ring. Let $M$ be a finite $R$-module. Then $\text{Ass}(M)$ is finite.

Proof. Immediate from Lemma 10.63.4 and Lemma 10.62.1.
$\square$ Proposition 10.63.6. Let $R$ be a Noetherian ring. Let $M$ be a finite $R$-module. The following sets of primes are the same: The minimal primes in the support of $M$. The minimal primes in $\text{Ass}(M)$. For any filtration $0 = M_0 \subset M_1 \subset \ldots \subset M_{n-1} \subset M_ n = M$ with $M_ i/M_{i-1} \cong R/\mathfrak p_ i$ the minimal primes of the set $\{ \mathfrak p_ i\} $. Proof. Choose a filtration as in (3). In Lemma 10.62.5 we have seen that the sets in (1) and (3) are equal. Let $\mathfrak p$ be a minimal element of the set $\{ \mathfrak p_ i\} $. Let $i$ be minimal such that $\mathfrak p = \mathfrak p_ i$. Pick $m \in M_ i$, $m \not\in M_{i-1}$. The annihilator of $m$ is contained in $\mathfrak p_ i = \mathfrak p$ and contains $\mathfrak p_1 \mathfrak p_2 \ldots \mathfrak p_ i$. By our choice of $i$ and $\mathfrak p$ we have $\mathfrak p_ j \not\subset \mathfrak p$ for $j < i$ and hence we have $\mathfrak p_1 \mathfrak p_2 \ldots \mathfrak p_{i - 1} \not\subset \mathfrak p_ i$. Pick $f \in \mathfrak p_1 \mathfrak p_2 \ldots \mathfrak p_{i - 1}$, $f \not\in \mathfrak p$. Then $fm$ has annihilator $\mathfrak p$. In this way we see that $\mathfrak p$ is an associated prime of $M$. By Lemma 10.63.2 we have $\text{Ass}(M) \subset \text{Supp}(M)$ and hence $\mathfrak p$ is minimal in $\text{Ass}(M)$. Thus the set of primes in (1) is contained in the set of primes of (2). Let $\mathfrak p$ be a minimal element of $\text{Ass}(M)$. Since $\text{Ass}(M) \subset \text{Supp}(M)$ there is a minimal element $\mathfrak q$ of $\text{Supp}(M)$ with $\mathfrak q \subset \mathfrak p$. We have just shown that $\mathfrak q \in \text{Ass}(M)$. Hence $\mathfrak q = \mathfrak p$ by minimality of $\mathfrak p$. Thus the set of primes in (2) is contained in the set of primes of (1). $\square$ Lemma 10.63.7. Let $R$ be a Noetherian ring. Let $M$ be an $R$-module. Then \[ M = (0) \Leftrightarrow \text{Ass}(M) = \emptyset . \] Proof. 
If $M = (0)$, then $\text{Ass}(M) = \emptyset $ by definition. If $M \not= 0$, pick any nonzero finitely generated submodule $M' \subset M$, for example a submodule generated by a single nonzero element. By Lemma 10.40.2 we see that $\text{Supp}(M')$ is nonempty. By Proposition 10.63.6 this implies that $\text{Ass}(M')$ is nonempty. By Lemma 10.63.3 this implies $\text{Ass}(M) \not= \emptyset $. $\square$

Lemma 10.63.8. Let $R$ be a Noetherian ring. Let $M$ be an $R$-module. Any $\mathfrak p \in \text{Supp}(M)$ which is minimal among the elements of $\text{Supp}(M)$ is an element of $\text{Ass}(M)$.

Proof. If $M$ is a finite $R$-module, then this is a consequence of Proposition 10.63.6. In general write $M = \bigcup M_\lambda $ as the union of its finite submodules, and use that $\text{Supp}(M) = \bigcup \text{Supp}(M_\lambda )$ and $\text{Ass}(M) = \bigcup \text{Ass}(M_\lambda )$. $\square$

Lemma 10.63.9. Let $R$ be a Noetherian ring. Let $M$ be an $R$-module. The union $\bigcup _{\mathfrak q \in \text{Ass}(M)} \mathfrak q$ is the set of elements of $R$ which are zerodivisors on $M$.

Proof. Any element in any associated prime clearly is a zerodivisor on $M$. Conversely, suppose $x \in R$ is a zerodivisor on $M$. Consider the submodule $N = \{ m \in M \mid xm = 0\} $. Since $N$ is not zero it has an associated prime $\mathfrak q$ by Lemma 10.63.7. Then $x \in \mathfrak q$ and $\mathfrak q$ is an associated prime of $M$ by Lemma 10.63.3. $\square$

Lemma 10.63.10. Let $R$ be a Noetherian local ring, $M$ a finite $R$-module, and $f \in \mathfrak m$ an element of the maximal ideal of $R$. Then \[ \dim (\text{Supp}(M/fM)) \leq \dim (\text{Supp}(M)) \leq \dim (\text{Supp}(M/fM)) + 1 \] If $f$ is not in any of the minimal primes of the support of $M$ (for example if $f$ is a nonzerodivisor on $M$), then equality holds for the right inequality.

Proof. (The parenthetical statement follows from Lemma 10.63.9.)
The first inequality follows from $\text{Supp}(M/fM) \subset \text{Supp}(M)$, see Lemma 10.40.9. For the second inequality, note that $\text{Supp}(M/fM) = \text{Supp}(M) \cap V(f)$, see Lemma 10.40.9. It follows, for example by Lemma 10.62.2 and elementary properties of dimension, that it suffices to show $\dim V(\mathfrak p) \leq \dim (V(\mathfrak p) \cap V(f)) + 1$ for primes $\mathfrak p$ of $R$. This is a consequence of Lemma 10.60.13. Finally, if $f$ is not contained in any minimal prime of the support of $M$, then the chains of primes in $\text{Supp}(M/fM)$ all give rise to chains in $\text{Supp}(M)$ which are at least one step away from being maximal. $\square$ Lemma 10.63.13. Let $\varphi : R \to S$ be a ring map. Let $M$ be an $S$-module. If $S$ is Noetherian, then $\mathop{\mathrm{Spec}}(\varphi )(\text{Ass}_ S(M)) = \text{Ass}_ R(M)$. Proof. We have already seen in Lemma 10.63.11 that $\mathop{\mathrm{Spec}}(\varphi )(\text{Ass}_ S(M)) \subset \text{Ass}_ R(M)$. For the converse, choose a prime $\mathfrak p \in \text{Ass}_ R(M)$. Let $m \in M$ be an element such that the annihilator of $m$ in $R$ is $\mathfrak p$. Let $I = \{ g \in S \mid gm = 0\} $ be the annihilator of $m$ in $S$. Then $R/\mathfrak p \subset S/I$ is injective. Combining Lemmas 10.30.5 and 10.30.7 we see that there is a prime $\mathfrak q \subset S$ minimal over $I$ mapping to $\mathfrak p$. By Proposition 10.63.6 we see that $\mathfrak q$ is an associated prime of $S/I$, hence $\mathfrak q$ is an associated prime of $M$ by Lemma 10.63.3 and we win. $\square$ Lemma 10.63.15. Let $R$ be a ring. Let $M$ be an $R$-module. Let $\mathfrak p \subset R$ be a prime. If $\mathfrak p \in \text{Ass}(M)$ then $\mathfrak pR_{\mathfrak p} \in \text{Ass}(M_{\mathfrak p})$. If $\mathfrak p$ is finitely generated then the converse holds as well. Proof. If $\mathfrak p \in \text{Ass}(M)$ there exists an element $m \in M$ whose annihilator is $\mathfrak p$. 
As localization is exact (Proposition 10.9.12) we see that the annihilator of $m/1$ in $M_{\mathfrak p}$ is $\mathfrak pR_{\mathfrak p}$ hence (1) holds. Assume $\mathfrak pR_{\mathfrak p} \in \text{Ass}(M_{\mathfrak p})$ and $\mathfrak p = (f_1, \ldots , f_ n)$. Let $m/g$ be an element of $M_{\mathfrak p}$ whose annihilator is $\mathfrak pR_{\mathfrak p}$. This implies that the annihilator of $m$ is contained in $\mathfrak p$. As $f_ i m/g = 0$ in $M_{\mathfrak p}$ we see there exists a $g_ i \in R$, $g_ i \not\in \mathfrak p$ such that $g_ i f_ i m = 0$ in $M$. Combined we see the annihilator of $g_1\ldots g_ nm$ is $\mathfrak p$. Hence $\mathfrak p \in \text{Ass}(M)$. $\square$ Lemma 10.63.16. Let $R$ be a ring. Let $M$ be an $R$-module. Let $S \subset R$ be a multiplicative subset. Via the canonical injection $\mathop{\mathrm{Spec}}(S^{-1}R) \to \mathop{\mathrm{Spec}}(R)$ we have $\text{Ass}_ R(S^{-1}M) = \text{Ass}_{S^{-1}R}(S^{-1}M)$, $\text{Ass}_ R(M) \cap \mathop{\mathrm{Spec}}(S^{-1}R) \subset \text{Ass}_ R(S^{-1}M)$, and if $R$ is Noetherian this inclusion is an equality. Proof. The first equality follows, since if $m \in S^{-1}M$, then the annihilator of $m$ in $R$ is the intersection of the annihilator of $m$ in $S^{-1}R$ with $R$. The displayed inclusion and equality in the Noetherian case follows from Lemma 10.63.15 since for $\mathfrak p \in R$, $S \cap \mathfrak p = \emptyset $ we have $M_{\mathfrak p} = (S^{-1}M)_{S^{-1}\mathfrak p}$. $\square$ Lemma 10.63.17. Let $R$ be a ring. Let $M$ be an $R$-module. Let $S \subset R$ be a multiplicative subset. Assume that every $s \in S$ is a nonzerodivisor on $M$. Then \[ \text{Ass}_ R(M) = \text{Ass}_ R(S^{-1}M). \] Proof. As $M \subset S^{-1}M$ by assumption we get the inclusion $\text{Ass}(M) \subset \text{Ass}(S^{-1}M)$ from Lemma 10.63.3. Conversely, suppose that $n/s \in S^{-1}M$ is an element whose annihilator is a prime ideal $\mathfrak p$. Then the annihilator of $n \in M$ is also $\mathfrak p$. 
$\square$

Lemma 10.63.18. Let $R$ be a Noetherian local ring with maximal ideal $\mathfrak m$. Let $I \subset \mathfrak m$ be an ideal. Let $M$ be a finite $R$-module. The following are equivalent:

There exists an $x \in I$ which is not a zerodivisor on $M$.
We have $I \not\subset \mathfrak q$ for all $\mathfrak q \in \text{Ass}(M)$.

Proof. If there exists a nonzerodivisor $x$ in $I$, then $x$ clearly cannot be in any associated prime of $M$. Conversely, suppose $I \not\subset \mathfrak q$ for all $\mathfrak q \in \text{Ass}(M)$. In this case we can choose $x \in I$, $x \not\in \mathfrak q$ for all $\mathfrak q \in \text{Ass}(M)$ by Lemmas 10.63.5 and 10.15.2. By Lemma 10.63.9 the element $x$ is not a zerodivisor on $M$. $\square$

Lemma 10.63.19. Let $R$ be a ring. Let $M$ be an $R$-module. If $R$ is Noetherian the map \[ M \longrightarrow \prod \nolimits _{\mathfrak p \in \text{Ass}(M)} M_{\mathfrak p} \] is injective.

Proof. Let $x \in M$ be an element of the kernel of the map. Then if $\mathfrak p$ is an associated prime of $Rx \subset M$ we see on the one hand that $\mathfrak p \in \text{Ass}(M)$ (Lemma 10.63.3) and on the other hand that $(Rx)_{\mathfrak p} \subset M_{\mathfrak p}$ is not zero. This contradiction shows that $\text{Ass}(Rx) = \emptyset $. Hence $Rx = 0$ by Lemma 10.63.7. $\square$

I have two comments about 02CE. First, and this is just notational: when it is stated that the product $\mathfrak{p}_1\cdots\mathfrak{p}_{i-1}$ is not contained in $\mathfrak{p}_i$, I take it that what is being used is that this is equivalent to none of the $\mathfrak{p}_j$ in the product being contained in $\mathfrak{p}_i$, and this is supposed to hold because $\mathfrak{p}_i$ is minimal among primes showing up in the filtration quotients. But what if one of the $\mathfrak{p}_j$, $1\leq j\leq i-1$, equals $\mathfrak{p}_i$? I think what one needs to take is $\prod_{1\leq j\leq i-1,\ \mathfrak{p}_j\neq \mathfrak{p}_i}\mathfrak{p}_j$. Then the proof goes through.
The second comment, which is maybe more serious, is that the argument shows that a minimal element of the $\mathfrak{p}_i$ is an associated prime, and since associated primes are among the $\mathfrak{p}_i$, such a thing is necessarily a minimal associated prime. But if we start with a minimal associated prime, we know that it shows up in the filtration (and in the support), but how do we know it is minimal among these potentially larger sets? If $\mathfrak{p}_j\subseteq\mathfrak{p}_i=\text{Ann}_R(m)$ with $\mathfrak{p}_i$ minimal among associated primes, is $\mathfrak{p}_j$ necessarily also associated?

Hi, OK, I think this is fixed by this commit. Thanks!

Comment #1152 by Yu Zhao on November 14, 2014 at 04:20. I think there is a typo in the proof of Lemma 10.62.12. In the 3rd line of the proof, should "$x R_{\mathfrak p}$" be "$m R_{\mathfrak p}$"?

Comment #5074 by guoh064 on May 04, 2020 at 03:57. Maybe there is a typo in the first line of the proof of Lemma 10.62.17 (Lemma 05C0): "As $M \subset S^{-1}M$ by assumption we get the inclusion $\text{Ass}(M) = \text{Ass}(S^{-1}M)$ from ..." I think it should be $\text{Ass}(M) \subset \text{Ass}(S^{-1}M)$.

Comment #6500 by Xu Jun on August 16, 2021 at 01:54. In definition 00LA of $\text{Ass}(M)$, do we require $m\neq 0$? If $M=0$, is $\text{Ass}(M)$ empty?

@#6500. The annihilator of $0$ is the whole ring, which is not a prime ideal. So the set of associated primes of the zero module is the empty set with the definition as given now. OK?

Comment #6650 by Likun Xie on October 19, 2021 at 21:42. Lemma 10.63.13: don't we need $M$ to be finite over $S$? Proposition 10.63.6 is for finite modules.

Comment #6781 by K H on December 01, 2021 at 03:23. I think the conditions "$R$ is a local ring" and "$M$ is finite over $R$" are superfluous in Lemma 10.63.18.

All right, I made a mistake: only "$R$ is a local ring" is superfluous.

@#6650: but the proposition is only applied to $S/I$ which is finite.
@#6781, 6782, 6783: I am going to leave this alone. This lemma is used a lot and I don't want to change it. We can add another lemma, but then please state carefully what the lemma should say.
Correspondence to: † taekhyun@changwon.ac.kr

Keywords: Sodium borohydride, Hydrogen peroxide, Fuel composition, Electrochemical reaction, Decomposition reaction, Fuel cell

Electrochemical reactions:

Anode: $\mathrm{BH_4^-} + 8\,\mathrm{OH^-} \to \mathrm{BO_2^-} + 6\,\mathrm{H_2O} + 8\,\mathrm{e^-}$

Cathode (alkaline): $4\,\mathrm{HO_2^-} + 4\,\mathrm{H_2O} + 8\,\mathrm{e^-} \to 12\,\mathrm{OH^-}$

Cathode (acidic): $4\,\mathrm{H_2O_2} + 8\,\mathrm{H^+} + 8\,\mathrm{e^-} \to 8\,\mathrm{H_2O}$

Decomposition reactions:

Anode: $\mathrm{NaBH_4} + 2\,\mathrm{H_2O} \to \mathrm{NaBO_2} + 4\,\mathrm{H_2}$

Cathode: $2\,\mathrm{H_2O_2} \to 2\,\mathrm{H_2O} + \mathrm{O_2}$

Ionic mobility: $u = \dfrac{nq}{6\pi \mu r}$

Arrhenius equation: $k = A e^{-E_a/RT}$
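The Arrhenius temperature dependence quoted above is easy to illustrate numerically. The sketch below uses assumed values (the activation energy and prefactor are made up for illustration, not taken from the paper):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(A, Ea, T):
    """Rate constant from the Arrhenius equation k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Assumed illustrative values: Ea = 50 kJ/mol, A = 1e6 (arbitrary units)
Ea = 50_000.0
k1 = arrhenius(1.0e6, Ea, 298.0)  # rate constant at 25 °C
k2 = arrhenius(1.0e6, Ea, 308.0)  # rate constant at 35 °C
print(k2 / k1)  # a 10 K rise roughly doubles the rate at this Ea
```

The exponential form is why even modest heating of the fuel solution noticeably accelerates the decomposition reactions.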
Risk parameters - Risk Management

The maximum amount that can be borrowed against a specific collateral is determined by the Loan To Value (LTV) ratio. For example, if the LTV is 75%, borrowers are allowed to borrow up to 0.75 ETH worth of the corresponding currency for every 1 ETH worth of collateral. For a wallet, the maximum LTV is calculated as the average of the LTVs of its collateral assets, weighted by their value. In particular, for a wallet that deposits collateral assets worth (in dollars) C_1, \dots, C_n, with corresponding LTVs LTV_1, \dots, LTV_n, the maximum LTV is

\frac{C_1 \times LTV_1 + \cdots + C_n \times LTV_n}{C_1 + \cdots + C_n}

The liquidation threshold is the percentage at which a position is defined as undercollateralised. For example, a liquidation threshold of 80% means that if the value of the borrowed assets rises above 80% of the value of the collateral, the position is undercollateralised and could be liquidated. The difference between the Loan To Value and the liquidation threshold is a safety cushion for borrowers. For each wallet, the liquidation threshold is calculated as the average of the liquidation thresholds of the collateral assets, weighted by their value. In particular, for a wallet that deposits collateral assets worth (in dollars) C_1, \dots, C_n, with liquidation thresholds LT_1, \dots, LT_n respectively, the liquidation threshold of the wallet is

\frac{C_1 \times LT_1 + \cdots + C_n \times LT_n}{C_1 + \cdots + C_n}

For a wallet, these risk parameters enable the calculation of the health factor:

H_f = \frac{C_1 \times LT_1 + \cdots + C_n \times LT_n}{B}

where B is the total borrows (in dollars), C_1, \dots, C_n are the values (in dollars) of the collateral assets, and LT_1, \dots, LT_n are their liquidation thresholds, respectively. When H_f < 1, the loan may be liquidated to maintain solvency.
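The weighted-average and health-factor formulas above can be sketched in a few lines. This is a minimal illustration; the asset values and thresholds are hypothetical, not protocol parameters:

```python
def weighted_avg(collateral_values, params):
    """Value-weighted average of per-asset parameters (wallet LTV or liquidation threshold)."""
    total = sum(collateral_values)
    return sum(c * p for c, p in zip(collateral_values, params)) / total

def health_factor(collateral_values, liq_thresholds, total_borrows):
    """H_f = (sum of C_i * LT_i) / B; below 1.0 the position can be liquidated."""
    return sum(c * lt for c, lt in zip(collateral_values, liq_thresholds)) / total_borrows

# Hypothetical wallet: $1000 of collateral with LT 80%, $500 with LT 70%
C = [1000.0, 500.0]
LT = [0.80, 0.70]
print(weighted_avg(C, LT))           # wallet liquidation threshold: 1150/1500
print(health_factor(C, LT, 1000.0))  # H_f = (800 + 350) / 1000 = 1.15
```

With $1000 borrowed the wallet is safe (H_f = 1.15 > 1); borrowing past $1150 would push H_f below 1 and expose it to liquidation.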
Liquidation penalty is a bonus applied to the price of collateral assets purchased by liquidators as part of the liquidation of a loan that has reached the liquidation threshold. The reserve factor allocates a share of the protocol's interest to a collector contract as a reserve for the ecosystem.

BSC Lending Pool FTM Lending Pool ETH Lending Pool

The frequency of price updates is determined by the liquidation strategy. We use a margin method, which means that prices are refreshed whenever the deviation exceeds a certain threshold. For the price feed, we rely on Chainlink's decentralized oracles.

https://data.chain.link/bsc/mainnet/crypto-usd/busd-usd
https://feeds.chain.link/dai-usd
https://data.chain.link/bsc/mainnet/crypto-usd/usdc-usd
https://data.chain.link/bsc/mainnet/crypto-usd/usdt-usd
https://data.chain.link/bsc/mainnet/crypto-usd/eth-usd
https://data.chain.link/bsc/mainnet/crypto-usd/bnb-usd
https://data.chain.link/bsc/mainnet/crypto-usd/btc-usd
https://data.chain.link/bsc/mainnet/crypto-usd/aave-usd
https://feeds.chain.link/ada-usd
https://data.chain.link/bsc/mainnet/crypto-usd/cake-usd
https://data.chain.link/bsc/mainnet/crypto-usd/xrp-usd
https://data.chain.link/bsc/mainnet/crypto-usd/doge-usd
https://data.chain.link/bsc/mainnet/crypto-usd/dot-usd
https://data.chain.link/bsc/mainnet/crypto-usd/xvs-usd
https://data.chain.link/fantom/mainnet/crypto-usd/ftm-usd
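The deviation-based ("margin method") refresh described above can be sketched as follows. This is a simplified illustration, not the protocol's actual code, and the 0.5% threshold is an assumed value:

```python
DEVIATION_THRESHOLD = 0.005  # assumed: refresh when the price moves more than 0.5%

def should_refresh(last_reported, current):
    """True when the relative deviation from the last reported price exceeds the threshold."""
    return abs(current - last_reported) / last_reported > DEVIATION_THRESHOLD

print(should_refresh(100.0, 100.4))  # False: a 0.4% move stays under the threshold
print(should_refresh(100.0, 100.6))  # True: a 0.6% move triggers a refresh
```

Compared with fixed-interval polling, this strategy updates rarely in quiet markets but reacts quickly during volatility, which is when accurate prices matter most for liquidations.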
We say that $L$ is the limit of $f(x)$ as $x$ approaches $c$ if for every $\epsilon > 0$ there exists $\delta > 0$ such that $0 < |x - c| < \delta$ implies $|f(x) - L| < \epsilon$.

To unpack this, note that $|f(x) - L| < \epsilon$ is equivalent to $-\epsilon < f(x) - L < \epsilon$, that is, $L - \epsilon < f(x) < L + \epsilon$: the value $f(x)$ lies within $\epsilon$ of $L$, in the interval $(L - \epsilon, L + \epsilon)$. Similarly, $|x - c| < \delta$ means $c - \delta < x < c + \delta$, so $x$ lies within $\delta$ of $c$, in the interval $(c - \delta, c + \delta)$; the extra condition $0 < |x - c|$ excludes the point $c$ itself. The choice of $\delta$ is allowed to depend on $\epsilon$: it could be $\delta < \epsilon$, or $\delta < \epsilon/3$, or $\delta < \min\{1, \epsilon/3\}$, and so on. The general strategy is to start from the desired inequality $|f(x) - L| < \epsilon$ and manipulate $|f(x) - L|$ until it is controlled by $|x - c|$, arriving at a condition of the form $|x - c| < \textrm{something}$; that "something" tells us how to choose $\delta$.

Problem 1. Using the definition of a limit, show that $\lim_{x \to 1} 10 = 10$.

Here $f(x) = 10$, $L = 10$, and $c = 1$. Since $f(x) = 10$ for every $x$, we have $|f(x) - L| = |10 - 10| = 0 < \epsilon$ regardless of $\epsilon$ and $\delta$. Proof: fix $\epsilon > 0$ and choose $\delta = 1$. Then $|x - 1| < \delta = 1$ implies $|f(x) - L| = |10 - 10| = 0 < \epsilon$. $\square$

Problem 2. Using the definition of a limit, show that $\lim_{x \to 3} (2x - 4) = 2$.

Here $f(x) = 2x - 4$, $L = 2$, and $c = 3$. Scratch work: we want to turn $|f(x) - L| < \epsilon$ into a bound on $|x - c| = |x - 3|$:
\[ |(2x - 4) - 2| < \epsilon \;\Rightarrow\; |2x - 6| < \epsilon \;\Rightarrow\; |2(x - 3)| < \epsilon \;\Rightarrow\; 2|x - 3| < \epsilon \;\Rightarrow\; |x - 3| < \frac{\epsilon}{2}. \]
So any $\delta < \epsilon/2$ works. Proof: fix $\epsilon > 0$ and choose $\delta < \epsilon/2$. If $|x - 3| < \delta < \epsilon/2$, then reversing the steps above,
\[ |x - 3| < \frac{\epsilon}{2} \;\Rightarrow\; 2|x - 3| < \epsilon \;\Rightarrow\; |2(x - 3)| < \epsilon \;\Rightarrow\; |2x - 6| < \epsilon \;\Rightarrow\; |(2x - 4) - 2| < \epsilon \;\Rightarrow\; |f(x) - L| < \epsilon, \]
as required. $\square$

More generally, for a linear function $f(x) = mx + b$ with $m \neq 0$, the choice $\delta < \epsilon/|m|$ works.

Problem 3. Using the definition of a limit, show that $\lim_{x \to 1} x^2 = 1$.

Here $f(x) = x^2$, $L = 1$, and $c = 1$. Scratch work:
\[ |x^2 - 1| < \epsilon \;\Rightarrow\; |(x - 1)(x + 1)| < \epsilon \;\Rightarrow\; |x + 1| \cdot |x - 1| < \epsilon \;\Rightarrow\; |x - 1| < \frac{\epsilon}{|x + 1|}. \]
The bound on $|x - 1|$ now depends on $x$, not just on $\epsilon$, so we first restrict $\delta$. If $\delta = 1$ and $|x - c| = |x - 1| < \delta = 1$, then $-\delta = -1 < x - 1 < 1 = \delta$, and adding $2$ gives
\[ 1 < x + 1 < 3. \qquad (\dagger) \]
Hence $\epsilon/3 < \epsilon/|x + 1| < \epsilon$ for $x$ with $|x - 1| < 1$, so the condition $|x - 1| < \epsilon/3$ suffices. This suggests taking $\delta$ smaller than both $1$ and $\epsilon/3$; writing $\min\{a, b\}$ for the smaller of $a$ and $b$, we take $\delta = \min\{1, \epsilon/3\}$. Proof: fix $\epsilon > 0$ and choose $\delta = \min\{1, \epsilon/3\}$. If $|x - c| = |x - 1| < \delta$, then
\[ |x - 1| < \frac{\epsilon}{3} \;\Rightarrow\; 3|x - 1| < \epsilon \;\Rightarrow\; |x + 1||x - 1| < 3|x - 1| < \epsilon \ (\textrm{using } \dagger) \;\Rightarrow\; |x^2 - 1| < \epsilon \;\Rightarrow\; |f(x) - L| < \epsilon, \]
as required. $\square$

Problem 4. Using the definition of a limit, show that $\lim_{x \to 2} 3x^2 = 12$.

Here $f(x) = 3x^2$, $L = 12$, and $c = 2$. Scratch work:
\[ |3x^2 - 12| < \epsilon \;\Rightarrow\; 3|x^2 - 4| < \epsilon \;\Rightarrow\; 3|x + 2| \cdot |x - 2| < \epsilon \;\Rightarrow\; |x - 2| < \frac{\epsilon}{3|x + 2|}. \]
If $\delta = 1$ then $-\delta = -1 < x - 2 < 1 = \delta$, so
\[ 3 < x + 2 < 5, \qquad (\dagger\dagger) \]
and hence $\epsilon/5 < \epsilon/|x + 2| < \epsilon/3$. Using the smaller of these bounds to be safe, we take
\[ \delta = \min\left\{1, \frac{1}{3} \cdot \frac{\epsilon}{5}\right\} = \min\left\{1, \frac{\epsilon}{15}\right\}. \]
Proof: fix $\epsilon > 0$ and choose $\delta = \min\{1, \epsilon/15\}$. If $|x - c| = |x - 2| < \delta$, then
\[ |x - 2| < \frac{\epsilon}{15} \;\Rightarrow\; 15|x - 2| < \epsilon \;\Rightarrow\; 3|x + 2||x - 2| < 15|x - 2| < \epsilon \ (\textrm{using } \dagger\dagger) \;\Rightarrow\; |3x^2 - 12| < \epsilon \;\Rightarrow\; |f(x) - L| < \epsilon, \]
as required. $\square$

Problem 5. Using the definition of a limit, show that $\lim_{x \to 2} (x^2 + 3x + 1) = 11$.

Here $f(x) = x^2 + 3x + 1$, $L = 11$, and $c = 2$. Scratch work:
\[ |x^2 + 3x + 1 - 11| < \epsilon \;\Rightarrow\; |x^2 + 3x - 10| < \epsilon \;\Rightarrow\; |(x + 5)(x - 2)| < \epsilon \;\Rightarrow\; |x + 5| \cdot |x - 2| < \epsilon \;\Rightarrow\; |x - 2| < \frac{\epsilon}{|x + 5|}. \]
If $\delta = 1$ then
\[ -\delta = -1 < x - 2 < 1 = \delta, \qquad (\natural) \]
and adding $7$ to the inequality,
\[ 6 < x + 5 < 8. \]
Hence $\epsilon/8 < \epsilon/|x + 5| < \epsilon/6$ for $x$ with $|x - 2| < 1$. Proof: fix $\epsilon > 0$ and choose $\delta = \min\{1, \epsilon/8\}$. If $|x - c| = |x - 2| < \delta$, then
\[ |x - 2| < \frac{\epsilon}{8} \;\Rightarrow\; 8|x - 2| < \epsilon \;\Rightarrow\; |x + 5||x - 2| < 8|x - 2| < \epsilon \ (\textrm{using } \natural) \;\Rightarrow\; |x^2 + 3x - 10| < \epsilon \;\Rightarrow\; |(x^2 + 3x + 1) - 11| < \epsilon \;\Rightarrow\; |f(x) - L| < \epsilon, \]
as required. $\square$

Problem 6. Using the definition of a limit, show that $\lim_{x \to 1} \sqrt{x} = 1$.

Here $f(x) = \sqrt{x}$, $L = 1$, and $c = 1$. Scratch work: multiplying $|\sqrt{x} - 1| < \epsilon$ through by $|\sqrt{x} + 1|$,
\[ |\sqrt{x} - 1| < \epsilon \;\Rightarrow\; |\sqrt{x} + 1| \cdot |\sqrt{x} - 1| < |\sqrt{x} + 1| \cdot \epsilon \;\Rightarrow\; |x - 1| < |\sqrt{x} + 1| \cdot \epsilon. \]
If $\delta = 1$ then $-\delta = -1 < x - 1 < 1 = \delta$, so $0 < x < 2$ and therefore
\[ 1 < \sqrt{x} + 1 < \sqrt{2} + 1. \]
Hence $\epsilon < |\sqrt{x} + 1| \cdot \epsilon$, while $\frac{\epsilon}{\sqrt{2} + 1} < \frac{\epsilon}{|\sqrt{x} + 1|} < \frac{\epsilon}{1}$, for $x$ with $|x - 1| < 1 = \delta$. So the condition $|x - 1| < \epsilon$ suffices. Proof: fix $\epsilon > 0$ and choose $\delta = \min\{1, \epsilon\}$. If $|x - c| = |x - 1| < \delta$, then
\[ |x - 1| < \epsilon \;\Rightarrow\; |(\sqrt{x} + 1)(\sqrt{x} - 1)| < \epsilon \;\Rightarrow\; |\sqrt{x} + 1| \cdot |\sqrt{x} - 1| < \epsilon \;\Rightarrow\; |\sqrt{x} - 1| < \frac{\epsilon}{|\sqrt{x} + 1|} < \epsilon, \]
where the last step divides the $\epsilon$ through by $|\sqrt{x} + 1| > 1$. Hence $|f(x) - L| < \epsilon$. $\square$
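An $\epsilon$-$\delta$ choice can always be spot-checked numerically. For $\lim_{x \to 2} 3x^2 = 12$, one valid choice is $\delta = \min\{1, \epsilon/15\}$, since $3|x+2| < 15$ whenever $|x - 2| < 1$. The quick check below (an illustration, not part of any proof) samples points inside the $\delta$-window and verifies the $\epsilon$-bound:

```python
import random

def f(x):
    return 3 * x * x

# For several epsilons, sample x with 0 < |x - 2| < delta and check |f(x) - 12| < eps.
for eps in (1.0, 0.1, 0.01):
    delta = min(1.0, eps / 15)
    for _ in range(1000):
        x = 2 + random.uniform(-delta, delta)
        if x == 2:
            continue  # the definition excludes x = c itself
        assert abs(f(x) - 12) < eps
print("all checks passed")
```

A check like this cannot prove the limit (that requires the algebra above), but it catches a $\delta$ that is too generous immediately.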
Growth Theorems for a Subclass of Strongly Spirallike Functions

Yan-Yan Cui, Chao-Jun Wang, Si-Feng Zhu, "Growth Theorems for a Subclass of Strongly Spirallike Functions", Journal of Applied Mathematics, vol. 2014, Article ID 608641, 8 pages, 2014. https://doi.org/10.1155/2014/608641

Yan-Yan Cui,1 Chao-Jun Wang,1 and Si-Feng Zhu1
1College of Mathematics and Statistics, Zhoukou Normal University, Zhoukou, Henan 466001, China

In this paper we consider a subclass of strongly spirallike functions on the unit disk in the complex plane, namely, strongly almost spirallike functions of type $\beta$ and order $\alpha$. We obtain growth results for strongly almost spirallike functions of type $\beta$ and order $\alpha$ on the unit disk by using subordination principles and the geometric properties of analytic mappings. Furthermore we get the growth theorems for strongly almost starlike functions of order $\alpha$ and strongly starlike functions on the unit disk. From these growth results, deviation results for these functions follow.

Growth theorems for univalent analytic functions are an important part of the geometric function theory of one complex variable. In 1983, Duren [1] obtained the following well-known growth and deviation theorem.

Theorem 1 (see [1]). If $f$ is a normalized biholomorphic function on the unit disk $D$, then
\[ \frac{|z|}{(1+|z|)^2} \leq |f(z)| \leq \frac{|z|}{(1-|z|)^2}, \qquad z \in D. \]

Many scholars tried to extend these beautiful results to the case of several complex variables. However, Cartan [2] pointed out that the corresponding growth theorem does not hold in several complex variables. He suggested considering biholomorphic mappings with special geometrical characteristics, such as convex mappings and starlike mappings. In 1991, Barnard et al. [3] first obtained the growth theorems for starlike mappings on the unit ball in $\mathbb{C}^n$. After that, there were many follow-up studies. Gong et al. [4] extended the results and obtained the growth theorems for starlike mappings on the bounded convex Reinhardt domains $B_p$.
Graham and Varolin [5] obtained the growth and covering theorems for normalized biholomorphic convex functions on the unit disk and also obtained the growth and covering theorems for normalized biholomorphic starlike functions on the unit disk by Alexander's theorem. Liu and Ren [6] obtained the growth theorems for starlike mappings on general bounded starlike and circular domains in $\mathbb{C}^n$. Liu and Lu [7] obtained the growth theorems for starlike mappings of order $\alpha$ on bounded starlike and circular domains. Feng and Lu [8] obtained the growth theorems for almost starlike mappings of order $\alpha$ on bounded starlike and circular domains. Honda [9] obtained the growth theorems for normalized biholomorphic $k$-fold symmetric convex mappings on the unit ball in complex Banach spaces. In recent years, there have been many new results about the growth and covering theorems for subclasses of biholomorphic mappings in several complex variables [10–12]. It can be seen that a great breakthrough can be made in the growth and covering theorems for subclasses of biholomorphic mappings in several complex variables if we restrict to biholomorphic mappings with special geometrical characteristics. The mappings discussed focus on starlike mappings, convex mappings, and their subclasses. In 1974, Suffridge extended starlike mappings and convex mappings and gave the definition of spirallike mappings. Gurganus [13] gave the definition of spirallike mappings of type $\alpha$ in several complex variables. Hamada and Kohr [14] obtained the growth theorems for spirallike mappings on some domains. Later Feng [15] gave the definition of almost spirallike mappings of type $\beta$ and order $\alpha$ on the unit ball. Feng et al. [16] obtained the growth theorems for almost spirallike mappings of type $\beta$ and order $\alpha$ on the unit ball in complex Banach spaces. However, when we introduce the definition of a new subclass of starlike mappings, convex mappings, or spirallike mappings, we always discuss it in one complex variable first.
In [17], Cai and Liu gave the definition of strongly almost spirallike functions of type and order on the unit disk. They also discussed their coefficient estimates. In this paper, we mainly discuss the growth theorems for strongly almost spirallike functions of type and order on , where is the unit disk. Moreover we get the growth theorems for strongly almost starlike functions of order and strongly starlike functions on . At last, we obtain the deviation results of these functions. Definition 2 (see [17]). Suppose that is an analytic function on , , , , and Then is called a strongly almost spirallike function of type and order on . We can get the definition of strongly spirallike functions of type [18], strongly almost starlike functions of order [19], and strongly starlike functions on [19] by setting , , and , respectively, in Definition 2. In order to give the main results, we need the following lemmas. Lemma 3 (see [1]). Let be an univalent analytic function on . Then if and only if , . Lemma 4 (see [20]). represents a circle whose center is and whose radius is in , where Lemma 5 (see [20]). Let be an analytic function on and . Then and for . Theorem 6. Let be a strongly almost spirallike function of type and order on and . Then where Proof. Since is a strongly almost spirallike function of type and order on , we get Let Then so we have . Therefore we get that there exists an analytic function on which satisfies , where , . Then Immediately, we have It follows that From Lemma 3, we deduce that the image of the unit disk under the mapping is the disk whose center is and whose radius is , where So we have Then On the one hand, in view of (14), we have Observing that and for and , we get for and . Thus, in view of (15), (16), and (17), we obtain Let Then we have This means that Let Obviously, we have Observing that and , we deduce that . So is a monotone decreasing function for . Also we have from Lemma 4. 
Then On the other hand, by direct computations, we have It follows that This means that . By (14) we know that In view of (15) and (19), we have Let Then Let Immediately, we have Also, we can get for , . Moreover, it is obvious that and . So we obtain . Therefore is a monotone increasing function for . In addition, we have from Lemma 4. Hence From the above results, we obtain This completes the proof. Theorem 7. Suppose that is a strongly almost starlike function of order on and . Then Proof. Let and in Theorem 6. Then (34) holds, so we can obtain the same result; that is, where Therefore we get the conclusion. Let in Theorem 7; we can get the following result for strongly starlike functions. Corollary 8. Let be a strongly starlike function on and . Then Proof. From Theorem 6, we have Let . Since we get Thus Furthermore, It follows that Let ; we have Consequently, Observing that , we have This completes the proof. Similar to Theorem 9, by Theorem 7, we can get the following results. Theorem 10. Let be a strongly almost starlike function of order on and . Then Remark 12. Let in Theorem 11. Then we have Let in Theorem 10. Then we have Let in Theorem 11; we can get the following result. Corollary 13. Let be a strongly starlike function on and . Then Proof. According to Corollary 8, we obtain Let . Since we have Thus So we get Letting , it follows that Therefore we obtain Also, we can get the conclusion by letting in Theorem 11. This completes the proof. Theorem 14. Suppose that is a strongly starlike function on and ; then Proof. On the one hand, from Corollary 13, we obtain . On the other hand, by and in the proof of Theorem 6, we can obtain for . Let . Then we have Therefore is a monotone increasing function with respect to . Also we can know that from Lemma 4. Hence By (14) we obtain Furthermore, , so Let . Since , we have Therefore we obtain Then we have So Therefore we obtain This completes the proof. From Theorems 6 and 9, we can get the following result. 
Theorem 15. Let be a strongly almost spirallike function of type and order on and , . Then where From Theorems 7 and 11, we can get the following result. This work is supported by NSF of China (nos. 11271359 and U1204618) and Science and Technology Research Projects of Henan Provincial Education Department (nos. 14B110015 and 14B110016). P. L. Duren, Univalent Functions, Springer, Berlin, Germany, 1983. View at: MathSciNet H. Cartan, “Sur la possibilite d'entendre aux fonctions de plusieurs variables complexes la theorie des fonctions univalents,” in Lecons sur les Fonctions Univalents on Mutivalents, P. Montel, Ed., pp. 129–155, Gauthier-Villar, 1933. View at: Google Scholar R. W. Barnard, C. H. FitzGerald, and S. Gong, “The growth and 1/4-theorems for starlike mappings in {\mathbb{C}}^{n} ,” Pacific Journal of Mathematics, vol. 150, no. 1, pp. 13–22, 1991. View at: Publisher Site | Google Scholar | MathSciNet S. Gong, S. K. Wang, and Q. H. Yu, “The growth and 1/4 -theorem for starlike mappings on {B}_{p} ,” Chinese Annals of Mathematics B, vol. 11, no. 1, pp. 100–104, 1990. View at: Google Scholar | MathSciNet I. Graham and D. Varolin, “Bloch constants in one and several variables,” Pacific Journal of Mathematics, vol. 174, no. 2, pp. 347–357, 1996. View at: Google Scholar | MathSciNet T. Liu and G. Ren, “The growth theorem for starlike mappings on bounded starlike circular domains,” Chinese Annals of Mathematics B, vol. 19, no. 4, pp. 401–408, 1998. View at: Google Scholar | MathSciNet H. Liu and K. P. Lu, “Two subclasses of starlike mappings in several complex variables,” Chinese Annals of Mathematics, vol. 21, no. 5, pp. 533–546, 2000. View at: Google Scholar | MathSciNet S. X. Feng and K. P. Lu, “The growth theorem for almost starlike mappings of order \alpha on bounded starlike circular domains,” Chinese Quarterly Journal of Mathematics. Shuxue Jikan, vol. 15, no. 2, pp. 50–56, 2000. View at: Google Scholar | MathSciNet T. 
Honda, “The growth theorem for k -fold symmetric convex mappings,” The Bulletin of the London Mathematical Society, vol. 34, no. 6, pp. 717–724, 2002. View at: Publisher Site | Google Scholar | MathSciNet N. I. Mahmudov and M. Eini Keleshteri, “ q -extensions for the apostol type polynomials,” Journal of Applied Mathematics, vol. 2014, Article ID 868167, 8 pages, 2014. View at: Publisher Site | Google Scholar | MathSciNet E. Merkes and M. Salmassi, “Subclasses of uniformly starlike functions,” International Journal of Mathematics and Mathematical Sciences, vol. 15, no. 3, pp. 449–454, 1992. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet S. Singh, “A subordination theorem for spirallike functions,” International Journal of Mathematics and Mathematical Sciences, vol. 24, no. 7, pp. 433–435, 2000. View at: Publisher Site | Google Scholar | MathSciNet K. R. Gurganus, “ϕ-like holomorphic functions in {\mathbb{C}}^{n} and Banach spaces,” Transactions of the American Mathematical Society, vol. 205, pp. 389–406, 1975. View at: Google Scholar | MathSciNet H. Hamada and G. Kohr, “Subordination chains and the growth theorem of spirallike mappings,” Mathematica, vol. 42(65), no. 2, pp. 153–161 (2001), 2000. View at: Google Scholar | Zentralblatt MATH | MathSciNet S. X. Feng, Some classes of holomorphic mappings in several complex variables [Ph.D. thesis], University of Science and Technology of China, Hefei, China, 2004. S. X. Feng, T. S. Liu, and G. B. Ren, “The growth and covering theorems for several mappings on the unit ball in complex Banach spaces,” Chinese Annals of Mathematics A, vol. 28, no. 2, pp. 215–230, 2007. View at: Google Scholar | MathSciNet R. H. Cai and X. S. Liu, “The third and fourth coefficient estimations for the subclasses of strongly spirallike functions,” Journal of Zhanjiang Normal College, vol. 31, pp. 38–43, 2010. View at: Google Scholar H. Hamada and G. 
Kohr, “The growth theorem and quasiconformal extension of strongly spirallike mappings of type α,” Complex Variables, vol. 44, no. 4, pp. 281–297, 2001. View at: Google Scholar | MathSciNet M. Chuaqui, “Applications of subordination chains to starlike mappings in {\mathbb{C}}^{n} L. V. Ahlfors, Complex Analysis, McGraw-Hill, New York, NY, USA, 3rd edition, 1978. View at: MathSciNet Copyright © 2014 Yan-Yan Cui et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Fit conditional variance model to data - MATLAB estimate - MathWorks China GARCH(1,1) model with innovation process $z_t$: $$y_t = \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad \sigma_t^2 = 0.0001 + 0.5\,\sigma_{t-1}^2 + 0.2\,\varepsilon_{t-1}^2.$$ EGARCH(1,1) model with innovation process $z_t$: $$y_t = \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad \log \sigma_t^2 = 0.001 + 0.7 \log \sigma_{t-1}^2 + 0.5\left[\frac{|\varepsilon_{t-1}|}{\sigma_{t-1}} - \sqrt{\frac{2}{\pi}}\right] - 0.3\left(\frac{\varepsilon_{t-1}}{\sigma_{t-1}}\right).$$ GJR(1,1) model with innovation process $z_t$: $$y_t = \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad \sigma_t^2 = 0.001 + 0.5\,\sigma_{t-1}^2 + 0.2\,\varepsilon_{t-1}^2 + 0.2\,I[\varepsilon_{t-1} < 0]\,\varepsilon_{t-1}^2.$$ Initial estimate of the t-distribution degrees-of-freedom parameter ν, specified as the comma-separated pair consisting of 'DoF0' and a positive scalar. DoF0 must exceed 2. [1] Bollerslev, Tim. "Generalized Autoregressive Conditional Heteroskedasticity." Journal of Econometrics 31 (April 1986): 307–27. https://doi.org/10.1016/0304-4076(86)90063-1. [2] Bollerslev, Tim. "A Conditionally Heteroskedastic Time Series Model for Speculative Prices and Rates of Return." The Review of Economics and Statistics 69 (August 1987): 542–47. https://doi.org/10.2307/1925546. [5] Engle, Robert F. "Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation." Econometrica 50 (July 1982): 987–1007. https://doi.org/10.2307/1912773.
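The GARCH(1,1) variance recursion above is easy to simulate directly. Below is a minimal Python sketch, not the MATLAB estimate API; the function name, the Gaussian choice for z_t, and initializing at the unconditional variance are assumptions made for illustration:

```python
import random
import math

def simulate_garch(omega, alpha, beta, n, seed=0):
    """Simulate a GARCH(1,1) process:
        y_t = e_t,  e_t = s_t * z_t,
        s_t^2 = omega + beta * s_{t-1}^2 + alpha * e_{t-1}^2,
    with z_t drawn as standard Gaussian innovations."""
    rng = random.Random(seed)
    # Start the recursion at the unconditional variance omega / (1 - alpha - beta).
    var = omega / (1.0 - alpha - beta)
    y, variances = [], []
    e_prev = 0.0
    for _ in range(n):
        var = omega + beta * var + alpha * e_prev ** 2
        e_prev = math.sqrt(var) * rng.gauss(0.0, 1.0)
        y.append(e_prev)
        variances.append(var)
    return y, variances

# Parameters taken from the GARCH(1,1) example above.
y, v = simulate_garch(omega=0.0001, alpha=0.2, beta=0.5, n=500)
```

The long-run average of the simulated conditional variance should hover near the unconditional value 0.0001 / (1 − 0.2 − 0.5).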
Goodness of fit between test and reference data for analysis and validation of identified models - MATLAB goodnessOfFit - MathWorks China $$\mathrm{fit}_{\mathrm{compare}} = \left(1 - \mathrm{fit}_{\mathrm{gof}}\right) \times 100$$ MSE (mean squared error) cost: $$\mathrm{fit} = \frac{\|x - x_{\mathrm{ref}}\|^2}{N_s}$$ NRMSE (normalized root mean squared error) cost, per channel $i$: $$\mathrm{fit}(i) = \frac{\|x_{\mathrm{ref}}(:,i) - x(:,i)\|}{\|x_{\mathrm{ref}}(:,i) - \mathrm{mean}(x_{\mathrm{ref}}(:,i))\|}$$ NMSE (normalized mean squared error) cost, per channel $i$: $$\mathrm{fit}(i) = \frac{\|x_{\mathrm{ref}}(:,i) - x(:,i)\|^2}{\|x_{\mathrm{ref}}(:,i) - \mathrm{mean}(x_{\mathrm{ref}}(:,i))\|^2}$$
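As a concrete illustration of the NRMSE cost above, here is a minimal single-channel Python sketch; the function name is hypothetical, and goodnessOfFit itself operates on multi-channel arrays:

```python
import math

def nrmse(x, xref):
    """Normalized root-mean-square error cost, per the NRMSE formula above:
    ||xref - x|| / ||xref - mean(xref)||.  Zero means a perfect fit."""
    mean_ref = sum(xref) / len(xref)
    num = math.sqrt(sum((r - v) ** 2 for r, v in zip(xref, x)))
    den = math.sqrt(sum((r - mean_ref) ** 2 for r in xref))
    return num / den

xref = [0.0, 1.0, 2.0, 3.0]
assert nrmse(xref, xref) == 0.0          # identical signals -> zero cost
fit = nrmse([0.1, 1.1, 1.9, 3.0], xref)  # small mismatch -> small cost
fit_compare = (1.0 - fit) * 100.0        # percentage form, as above
```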
Cosmological constant - Wikiquote Sketch of the timeline of the formation of the Universe in the Lambda-CDM model. The last 5 billion years of accelerated expansion represents the dark energy dominated era, as parametrized by the variable, dimensionless, cosmic scale factor {\displaystyle a} , a key parameter of the Friedmann equations. The cosmological constant ( {\displaystyle \Lambda } , or also indicated by λ) is the value of the energy density of the vacuum of space. It was originally introduced as hypothesis by Albert Einstein in 1917, as an addition to his theory of general relativity to achieve a static universe. Einstein abandoned it in 1931. The cosmological constant is the simplest possible form of dark energy, since it is constant in both space and time. This leads to the current standard model of cosmology known as the Lambda-CDM model parametrization of the Big Bang. The cosmological constant {\displaystyle \Lambda } appears in the Einstein field equations in the form of {\displaystyle R_{\mu \nu }-{\frac {1}{2}}R\,g_{\mu \nu }+\Lambda \,g_{\mu \nu }={8\pi G \over c^{4}}T_{\mu \nu }} The theoretical view of the actual universe, if it is in correspondence to our reasoning, is the following. The curvature of space is variable in time and place, according to the distribution of matter, but we may roughly approximate it by means of a spherical space. ...this view is logically consistent, and from the standpoint of the general theory of relativity lies nearest at hand [i.e. is most obvious]; whether, from the standpoint of present astronomical knowledge, it is tenable, will not be discussed here. In order to arrive at this consistent view, we admittedly had to introduce an extension of the field equations of gravitation, which is not justified by our actual knowledge of gravitation. It is to be emphasized, however, that a positive curvature of space is given by our results, even if the supplementary term [cosmological constant] is not introduced. 
The term is necessary only for the purpose of making possible a quasi-static distribution of matter, as required by the fact of the small velocity of the stars. After putting the finishing touches on general relativity in 1915, Einstein applied his new equations for gravity to a variety of problems. ... Despite the mounting successes of general relativity, for years after he first applied his theory to the most immense of all challenges—understanding the entire universe—Einstein absolutely refused to accept the answer that emerged from the mathematics. Before the work of Friedmann and Lemaître... Einstein, too, had realized that the equations of general relativity showed that the universe could not be static; the fabric of space could stretch or it could shrink, but it could not maintain a fixed size. This suggested that the universe might have had a definite beginning, when the fabric was maximally compressed, and might even have a definite end. Einstein stubbornly balked at this... because he and everyone else "knew" that the universe was eternal and, on the largest scales, fixed and unchanging. Thus, notwithstanding the beauty and successes of general relativity, Einstein reopened his notebook and sought a modification of the equations... It didn't take him long. In 1917 he achieved the goal by introducing a new term... the cosmological constant. Brian Greene, The Fabric of the Cosmos: Space, Time, and the Texture of Reality (2004) pp. 273-274. Alan Guth, The Early Universe (2012) Lecture 1: Inflationary Cosmology: Is Our Universe Part of a Multiverse? Part I, MITOpenCourseware (OCW) course 8.286 Massachusetts Institute of Technology. In 1917 de Sitter showed that Einstein's field equations could be solved by a model that was completely empty apart from the cosmological constant—i.e. a model with no matter whatsoever, just dark energy. This was the first model of an expanding universe, although this was unclear at the time.
The whole principle of general relativity was to write equations for physics that were valid for all observers, independently of the coordinates used. But this means that the same solution can be written in various different ways... Thus de Sitter viewed his solution as static, but with a tendency for the rate of ticking clocks to depend on position. This phenomenon was already familiar in the form of gravitational time dilation... so it is understandable that the de Sitter effect was viewed in the same way. It took a while before it was proved (by Weyl, in 1923) that the prediction was of a redshifting of spectral lines that increased linearly with distance (i.e. Hubble's law). ... Michela Massimi, Philosophy and the Sciences for Everyone (2014) Even today, our picture of a world woven together by a gravitational force, an electromagnetic force, a strong force, and a weak force may be incomplete. Astronomers are gathering evidence that an additional fundamental interaction, a repulsive effect opposite to gravity, may be at work over vast distances and possibly changing with time. Michael Munowitz, Knowing: The Nature of Physical Law (2005) p. 55. In Einstein's scheme there was no end, no outside. Shoot an arrow or a light beam infinitely far in any direction and it would come back and hit you in the butt. ...But there was a problem with the curved-back universe. Such a configuration was unstable; it would fly apart or collapse. Einstein didn't know about galaxies. He thought, and was reassured as much by the best astronomers of the time, that the universe was a static cloud of stars. To explain why his curved universe didn't collapse like a struck tent, therefore, he fudged his equations with a term he called the cosmological constant, which produced a long-range repulsive force to counteract cosmic gravity. It made the equations ugly and he never really liked it.
That was in 1917, twelve years before Hubble showed that the universe was full of galaxies rushing away from each other. It's a term that Einstein recognized as allowed by his theory — he threw it in and then, in disgust, threw it out again ... It's back! Jim Peebles, Princeton news conference for Jim Peebles, winner of the 2019 Nobel Prize in Physics (October 8, 2019). (quote at 24:49 of 38:15) [Einstein's cosmological constant] is a name without any meaning. ...We have, in fact, not the slightest inkling of what its real significance is. It is put in the equations in order to give the greatest possible degree of mathematical generality. Willem de Sitter, Kosmos, A Course of Six Lectures on the Development of Our Insight Into the Structure of the Universe (1932) There is no direct observational evidence for the curvature [of space], the only directly observed data being the mean density and the expansion, which latter proves that the actual universe corresponds to the non-statical case. It is therefore clear that from the direct data of observation we can derive neither the sign nor the value of the curvature, and the question arises whether it is possible to represent the observed facts without introducing the curvature at all. Historically the term containing the 'cosmological constant λ' was introduced into the field equations in order to enable us to account theoretically for the existence of a finite mean density in a static universe. It now appears that in the dynamical case this end can be reached without the introduction of λ.
Willem de Sitter, joint memoir with Einstein (1932) as quoted by Gerald James Whitrow, The Structure of the Universe: An Introduction to Cosmology (1949) It was early 1932, when Einstein and I both were at the California Institute of Technology in Pasadena, and we just decided to look for a simple relativistic model that agreed reasonably well with the known observational data, namely, the Hubble recession rate and the mean density of matter in the universe. So we took the space curvature to be zero and also the cosmological constant and the pressure term to be zero, and then it follows straightforwardly that the density is proportional to the square of the Hubble constant. It gives a value for the density that is high, but not impossibly high. That's about all there was to it. It was not an important paper, although Einstein apparently thought that it was. He was pleased to have a simple model with no cosmological constant. That's it. Willem de Sitter, as quoted by Helge Kragh, Masters of the Universe: Conversations with Cosmologists of the Past (2014) Lee Smolin, "Loop Quantum Gravity," The New Humanists: Science at the Edge (2003) ed. John Brockman. The most far-reaching implication of general relativity... is that the universe is not static, as in the orthodox view, but is dynamic, either contracting or expanding. Einstein, as visionary as he was, balked at the idea... One reason... was that, if the universe is currently expanding, then... it must have started from a single point. All space and time would have to be bound up in that "point," an infinitely dense, infinitely small "singularity." ...this struck Einstein as absurd. He therefore tried to sidestep the logic of his equations, and modified them by adding... a "cosmological constant." The term represented a force, of unknown nature, that would counteract the gravitational attraction of the mass of the universe. That is, the two forces would cancel...
it is the kind of rabbit-out-of-the-hat idea that most scientists would label ad hoc. ...Ironically, Einstein's approach contained a foolishly simple mistake: His universe would not be stable... like a pencil balanced on its point. Our particular laws are not at all unique. ...they could change from place to place and from time to time. The Laws of Physics are much like the weather... controlled by invisible influences in space almost the same way that temperature, humidity, air pressure, and wind velocity control how rain and snow and hail form. ...The Landscape... is the space of possibilities... all the possible environments permitted by the theory. ...[T]heoretical physicists ...have always believed that the laws of nature are the unique, inevitable consequence of some elegant mathematical principle. ...the empirical evidence points much more convincingly to the opposite conclusion. The universe has more in common with a Rube Goldberg machine than with a unique consequence of mathematical symmetry. ...Two key discoveries are driving the paradigm shift—the success of inflationary cosmology and the existence of a small cosmological constant. Leonard Susskind, The Cosmic Landscape: String Theory and the Illusion of Intelligent Design (2005) pp. 12-13. At about the time of Maldacena's discovery, physicists started to become convinced (by cosmologists) that we live in a world with a nonvanishing cosmological constant [footnote: 10^-123 in Planck units... [t]he incredible smallness... had fooled almost all physicists into believing that it didn't exist.], smaller by far than any other physical constant... the main determinant of the future history of the universe... also known as dark energy... a thorn in the side of physicists for almost a century.
...If {\displaystyle \Lambda } is positive, the cosmological term creates a repulsive force that increases with distance; if it is negative, the new force is attractive; if {\displaystyle \Lambda } is zero, there is no new force and we can ignore it. The cosmological constant['s]... most important consequence: the repulsive force, acting at cosmological distances, causes space to expand exponentially. There is nothing new about the universe expanding, but without a cosmological constant, the rate of expansion would gradually slow down. Indeed, it could even reverse itself and begin to contract, eventually imploding in a giant cosmic crunch. Instead, as a consequence of the cosmological constant, the universe appears to be doubling in size about every fifteen billion years, and all indications are that it will do so indefinitely. De Sitter proposed three types of nonstatic universes: the oscillating universes and the expanding universes of the first or second kind. The main characteristic of the expanding "family" of the first kind is that the radius is continually increasing from a definite initial time when it had the value zero. The universe becomes infinitely large after an infinite time. In the second kind... the radius possesses at the initial time a definite minimum value... in the Einstein model... the cosmological constant is supposed to be equal to the reciprocal of R^2, whereas de Sitter computed for his interpretation the constant to be equal to 3/R^2. Whitrow correctly points out the significant fact that in special relativity the cosmological constant is omitted... Wolfgang Yourgrau, "On Some Cosmological Theories and Constants," Cosmology, History, and Theology (2012) Cosmological Constant: Was it Einstein's greatest mistake or another stroke of genius?
Sixty Symbols, University of Nottingham, Physics and Astronomy videos
RoboTHOR Challenge 2021 The 2021 RoboTHOR Challenge is a continuation of our 2020 RoboTHOR Challenge, held in conjunction with the Embodied AI Workshop at CVPR. The challenge focused on the problem of simulation-to-real transfer. Our goal with this challenge is to encourage researchers to work on this important problem, and to create a unified benchmark and track progress over time. Due to COVID-19, the challenge will only be done in simulation. For decades, the AI community has sought to create perceptive agents that can augment human capabilities in real-world tasks. The widespread availability of large, open computer vision and natural language datasets, massive amounts of compute, and standardized benchmarks have been critical to the fast-paced progress witnessed over the past few years. In stark contrast, the considerable costs involved in acquiring physical robots and experimental environments, compounded by the lack of standardized benchmarks, are proving to be principal hindrances to progress in real-world embodied AI. Recently, the vision community has leveraged progress in computer graphics to create a host of simulated perceptual environments with the promise of training models in simulation that can be deployed on robots in the physical world. These environments are free to use, continue to be improved, and lower the barrier of entry to research in real-world embodied AI, democratizing research in this direction. This has led to progress on a variety of tasks in simulation, including visual navigation and instruction following. But the elephant in the room remains: How well do these models trained in simulation generalize to the real world? The RoboTHOR Challenge 2021 deals with the task of Visual Semantic Navigation from ego-centric RGB-D camera input. The agent starts from a random location in an apartment and is expected to navigate towards an object that is specified by its type.
Across different episodes, different object types are used as targets. However, all of the object types that appear in the validation and testing sets also appear in the training set. The dataset is divided into the following splits:

Split | Episodes | Scenes
Train | 108000 | 60
Val | 1080 | 15

Each episode is then provided in the following format:

{
  "id": "FloorPlan_Train1_1_AlarmClock_0",
  "scene": "FloorPlan_Train1_1",
  "object_type": "AlarmClock",
  "initial_position": { ... },
  "initial_orientation": 150,
  "initial_horizon": 30,
  "shortest_path": [
    {"x": 3.75, "y": 0.0045, "z": -2.25},
    ...
    {"x": 9.25, "y": 0.0045, "z": -2.75}
  ],
  "shortest_path_length": 5.57
}

2021 RoboTHOR Challenge Announced To participate in the challenge, please refer to our GitHub page (/allenai/robothor-challenge). It includes instructions for installation, downloading the dataset, and a simple example. Winners of the challenge will have the opportunity to present their work at the virtual CVPR 2021 Embodied AI Workshop. We have built support for this challenge into the AllenAct framework; this support includes: Several CNN-to-RNN baseline model architectures, along with our best pretrained model checkpoint (trained for 300M steps) obtaining a test-set success rate of ~26%. Reinforcement/imitation learning pipelines for training with Decentralized Distributed Proximal Policy Optimization (DD-PPO) and DAgger. Utility functions for visualization and caching (to improve training speed). SPL, Success weighted by (normalized inverse) Path Length, is a quick and common navigation metric (see Anderson et al. and Batra et al.), defined as \text{SPL} = \frac{1}{N} \sum_{i=1}^N S_i \cdot \frac{\ell_i}{\max(p_i, \ell_i)}, where, for each episode i\in \lbrace 1, 2, 3,\ldots, N\rbrace, S_i is the binary indicator variable denoting if the episode was successful, \ell_i is the shortest path length (in meters) from the agent's starting position to the target, and p_i is the path length (in meters) that the agent took.
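The SPL formula above takes only a few lines to compute. A minimal Python sketch, using hypothetical episode data (the first shortest-path length mirrors the 5.57 m example episode):

```python
def spl(episodes):
    """Success weighted by Path Length, per the formula above.
    Each episode is a tuple (success: bool, shortest: float, taken: float)."""
    total = 0.0
    for success, shortest, taken in episodes:
        if success:
            total += shortest / max(taken, shortest)
    return total / len(episodes)

# Hypothetical episodes: (success, shortest-path length, path taken), in meters.
episodes = [
    (True, 5.57, 5.57),   # perfect episode: contributes 1.0
    (True, 4.0, 8.0),     # successful but twice as long: contributes 0.5
    (False, 3.0, 1.0),    # failure: contributes 0.0
]
score = spl(episodes)     # (1.0 + 0.5 + 0.0) / 3 = 0.5
```

Note that `max(taken, shortest)` caps each episode's contribution at 1, so an agent cannot exceed a perfect score by taking a shorter-than-shortest path (which can only happen through discretization artifacts).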
The metric ranges inclusively over [0, 1]. A navigation episode is considered successful if both of the following criteria are met: 1. The specified object category is within 1 meter (Euclidean distance) from the agent's camera, and the agent issues the STOP action, which indicates the termination of the episode. 2. The object is visible in the final action's frame. Our evaluation feedback also provides the SPL if criterion 2 were to be ignored, but this metric is not used for evaluation. It may be helpful to check out the SPL section of the RoboTHOR documentation, which also covers several evaluation helper methods. We are using AI2's Leaderboard to host challenge submissions. Submissions are now open! Where do I ask a question about the RoboTHOR Challenge? Please open up a discussion on our GitHub Page! We are happy to help. Can we use external data for training our models? Yes, you can use any external data (in addition to the provided dataset) for training your models. To cite this work, please cite the RoboTHOR paper: @InProceedings{RoboTHOR, author = {Matt Deitke and Winson Han and Alvaro Herrasti and Aniruddha Kembhavi and Eric Kolve and Roozbeh Mottaghi and Jordi Salvador and Dustin Schwenk and Eli VanderBilt and Matthew Wallingford and Luca Weihs and Mark Yatskar and Ali Farhadi}, title = {RoboTHOR: An Open Simulation-to-Real Embodied AI Platform}, The RoboTHOR 2021 challenge is being organized by the PRIOR team at the Allen Institute for AI. The organizers are listed below in alphabetical order.
Lemma 36.35.10 (0DJT)—The Stacks project Lemma 36.35.10. Let $f : X \to S$ be a morphism of schemes which is flat, proper, and of finite presentation. Let $E \in D(\mathcal{O}_ X)$ be $S$-perfect. Then $Rf_*E$ is a perfect object of $D(\mathcal{O}_ S)$ and its formation commutes with arbitrary base change. Proof. The statement on base change is Lemma 36.22.5. Thus it suffices to show that $Rf_*E$ is a perfect object. We will reduce to the case where $S$ is Noetherian affine by a limit argument. The question is local on $S$, hence we may assume $S$ is affine. Say $S = \mathop{\mathrm{Spec}}(R)$. We write $R = \mathop{\mathrm{colim}}\nolimits R_ i$ as a filtered colimit of Noetherian rings $R_ i$. By Limits, Lemma 32.10.1 there exists an $i$ and a scheme $X_ i$ of finite presentation over $R_ i$ whose base change to $R$ is $X$. By Limits, Lemmas 32.13.1 and 32.8.7 we may assume $X_ i$ is proper and flat over $R_ i$. By Lemma 36.35.9 we may assume there exists a $R_ i$-perfect object $E_ i$ of $D(\mathcal{O}_{X_ i})$ whose pullback to $X$ is $E$. Applying Lemma 36.27.1 to $X_ i \to \mathop{\mathrm{Spec}}(R_ i)$ and $E_ i$ and using the base change property already shown we obtain the result. $\square$ Comment #6023 by Noah Olander on March 30, 2021 at 11:05: Since $f$ is flat, it seems like "Tor-independent base change" is a better reference to get the base change than 0A1D.
Monadic predicate calculus - Knowpia A formula of monadic predicate calculus applies unary predicates to single variables, as in {\displaystyle P(x)} for a predicate {\displaystyle P} and a variable {\displaystyle x}. Relationship with term logic The syllogism "every D is an M; no M is a B; therefore no D is a B" is formalized as {\displaystyle [(\forall x\,D(x)\Rightarrow M(x))\land \neg (\exists y\,M(y)\land B(y))]\Rightarrow \neg (\exists z\,D(z)\land B(z))} with monadic predicates {\displaystyle D}, {\displaystyle M}, and {\displaystyle B}. A universally quantified clause of the form {\displaystyle \forall x\,P_{1}(x)\lor \cdots \lor P_{n}(x)\lor \neg P'_{1}(x)\lor \cdots \lor \neg P'_{m}(x)} has as its negation the existential form {\displaystyle \exists x\,\neg P_{1}(x)\land \cdots \land \neg P_{n}(x)\land P'_{1}(x)\land \cdots \land P'_{m}(x),} an example of the former being {\displaystyle (\forall x\,\neg M(x)\lor H(x)\lor C(x))}.
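Because every predicate is unary, validity of a monadic sentence can be decided by checking finitely many small models: with k predicates, only which of the 2^k predicate profiles are realized matters. A Python sketch that verifies the syllogism above by brute force over all such models (the encoding of a model as a set of realized (D, M, B) profiles is an illustrative choice):

```python
from itertools import product

def syllogism(model):
    """model: set of realized predicate profiles (d, m, b), one per element class.
    Evaluates: [(forall x: D(x) -> M(x)) and not exists y: M(y) and B(y)]
               -> not exists z: D(z) and B(z)."""
    premise1 = all(m for (d, m, b) in model if d)          # every D is an M
    premise2 = not any(m and b for (d, m, b) in model)     # no M is a B
    conclusion = not any(d and b for (d, m, b) in model)   # no D is a B
    return not (premise1 and premise2) or conclusion

# For k = 3 predicates there are 2**3 = 8 profiles; a model is any nonempty
# subset of realized profiles, so 2**8 - 1 = 255 models cover all cases.
profiles = list(product([False, True], repeat=3))
models = [
    {p for i, p in enumerate(profiles) if mask >> i & 1}
    for mask in range(1, 2 ** len(profiles))
]
valid = all(syllogism(m) for m in models)   # True: the syllogism is valid
```

The check succeeds because any element that is both D and B would, by the first premise, also be M, contradicting the second premise.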
Eliminate states from state-space models - MATLAB modred - MathWorks Nordic Order Reduction by Matched-DC-Gain and Direct-Deletion Methods Eliminate states from state-space models rsys = modred(sys,elim) rsys = modred(sys,elim,'method') rsys = modred(sys,elim) reduces the order of a continuous or discrete state-space model sys by eliminating the states found in the vector elim. The full state vector X is partitioned as X = [X1;X2], where X1 is the reduced state vector and X2 is discarded. elim can be a vector of indices or a logical vector commensurate with X where true values mark states to be discarded. This function is usually used in conjunction with balreal. Use balreal to first isolate states with negligible contribution to the I/O response. If sys has been balanced with balreal and the vector g of Hankel singular values has M small entries, you can use modred to eliminate the corresponding M states. For example: [sys,g] = balreal(sys) % Compute balanced realization elim = (g<1e-8) % Small entries of g are negligible states rsys = modred(sys,elim) % Remove negligible states rsys = modred(sys,elim,'method') also specifies the state elimination method. Choices for 'method' include 'MatchDC' (default): Enforce matching DC gains. The state-space matrices are recomputed as described in Algorithms. 'Truncate': Simply delete X2. The 'Truncate' option tends to produce a better approximation in the frequency domain, but the DC gains are not guaranteed to match. If the state-space model sys has been balanced with balreal and the gramians have m small diagonal entries, you can reduce the model order by eliminating the last m states with modred. Consider the following continuous fourth-order model: $h(s) = \frac{s^3 + 11s^2 + 36s + 26}{s^4 + 14.6s^3 + 74.96s^2 + 153.7s + 99.65}.$ To reduce its order, first compute a balanced state-space realization with balreal. [hb,g] = balreal(h); Examine the gramians.
The last three diagonal entries of the balanced gramians are relatively small. Eliminate these three least-contributing states with modred, using both the matched-DC-gain and direct-deletion methods. Both hmdc and hdel are first-order models. Compare their Bode responses against that of the original model. The reduced-order model hdel is clearly a better frequency-domain approximation of h. Now compare the step responses. stepplot(h,'-',hmdc,'-.',hdel,'--') While hdel accurately reflects the transient behavior, only hmdc gives the true steady-state response. The algorithm for the matched-DC-gain method is as follows. For continuous-time models $\dot{x} = Ax + Bu, \qquad y = Cx + Du,$ the state vector is partitioned into $x_1$, to be kept, and $x_2$, to be eliminated: $\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} u, \qquad y = \begin{bmatrix} C_1 & C_2 \end{bmatrix} x + Du.$ Next, the derivative of $x_2$ is set to zero and the resulting equation is solved for $x_2$. The reduced-order model is given by $\dot{x}_1 = \left[A_{11} - A_{12}A_{22}^{-1}A_{21}\right] x_1 + \left[B_1 - A_{12}A_{22}^{-1}B_2\right] u, \qquad y = \left[C_1 - C_2 A_{22}^{-1} A_{21}\right] x_1 + \left[D - C_2 A_{22}^{-1} B_2\right] u.$ The discrete-time case is treated similarly by setting $x_2[n+1] = x_2[n]$. balreal | minreal | balred
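The matched-DC-gain formulas above can be checked numerically. Here is a minimal pure-Python sketch for the special case of eliminating one state (so A22 is a scalar and no matrix inverse is needed); the 2-state system matrices are hypothetical, and the point is that the reduced model's DC gain equals the original's:

```python
def matched_dc_reduce(A, B, C, D, keep):
    """Matched-DC-gain reduction per the partitioned equations above, for the
    special case of eliminating the single last state (A22 is a scalar)."""
    n = len(A)
    assert keep == n - 1, "this sketch only eliminates one state"
    a22 = A[n - 1][n - 1]
    A12 = [A[i][n - 1] for i in range(keep)]   # coupling of x2 into x1
    A21 = A[n - 1][:keep]                      # coupling of x1 into x2
    b2, c2 = B[n - 1], C[n - 1]
    Ar = [[A[i][j] - A12[i] * A21[j] / a22 for j in range(keep)]
          for i in range(keep)]
    Br = [B[i] - A12[i] * b2 / a22 for i in range(keep)]
    Cr = [C[j] - c2 * A21[j] / a22 for j in range(keep)]
    Dr = D - c2 * b2 / a22
    return Ar, Br, Cr, Dr

# Hypothetical 2-state SISO system; the second state is fast (pole at -10).
A = [[-1.0, 0.5], [0.2, -10.0]]
B = [1.0, 2.0]
C = [1.0, 0.3]
D = 0.0
Ar, Br, Cr, Dr = matched_dc_reduce(A, B, C, D, keep=1)
# Reduced DC gain Dr - Cr*Br/Ar matches the full model's D - C*inv(A)*B.
```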
Multiobjective Monotonicity Analysis: Pareto Set Dependency and Trade-Offs Causality in Configuration Design | J. Mech. Des. | ASME Digital Collection Email: noksig@mek.dtu.dk Sigurdarson, N. S., Eifler, T., Ebro, M., and Papalambros, P. Y. (October 21, 2021). "Multiobjective Monotonicity Analysis: Pareto Set Dependency and Trade-Offs Causality in Configuration Design." ASME. J. Mech. Des. March 2022; 144(3): 031704. https://doi.org/10.1115/1.4052444 Multiobjective design optimization studies typically derive Pareto sets or use a scalar substitute function to capture design trade-offs, leaving it up to the designer’s intuition to use this information for design refinements and decision-making. Understanding the causality of trade-offs more deeply, beyond simple postoptimality parametric studies, would be particularly valuable in configuration design problems to guide configuration redesign. This article presents the method of multiobjective monotonicity analysis to identify root causes for the existence of trade-offs and the particular shape of Pareto sets. This analysis process involves reducing optimization models through constraint activity identification to a point where dependencies specific to the Pareto set and the constraints that cause them are revealed. The insights gained can then be used to target configuration design changes. We demonstrate the proposed approach in the preliminary design of a medical device for oral drug delivery. design evaluation, design optimization, multiobjective optimization, product development, systems design
Defining the Relationship of Magnetohydrodynamic Voltages and Magnetic Field Strength | BIOMED | ASME Digital Collection Kevin J. Wu, T. Stan Gregory, Michael C. Lastinger Wu, KJ, Gregory, TS, Lastinger, MC, Boland, B, & Tse, ZTH. "Defining the Relationship of Magnetohydrodynamic Voltages and Magnetic Field Strength." Proceedings of the 2017 Design of Medical Devices Conference. Minneapolis, Minnesota, USA. April 10–13, 2017. V001T11A009. ASME. https://doi.org/10.1115/DMD2017-3401 The magnetohydrodynamic (MHD) effect is observed in flowing electrolytic fluids and their interactions with magnetic fields. The magnetic field (B0), when perpendicular to the electrolytic fluid flow (u), causes the charged particles in the fluid to shift across the length of the vessel (L) normal to the plane of B0 and the flow, creating a voltage (VMHD) observable through voltage potential measurements across the flow (Eqn. 1) [1]. $V_{\mathrm{MHD}} = \int_0^L \vec{u} \times \vec{B_0} \cdot d\vec{L}$ (1) In the medical field, this phenomenon is commonly encountered inside of a human body inside of an MRI machine (Fig. 1). The effect appears most prominently inside the aortic arch due to orientation and size, and is a large contributing factor to noise observed in intra-MRI ECGs [2, 3]. Traditionally, this MHD-induced voltage (VMHD) was filtered out to obtain clean intra-MRI ECGs, but recent studies have shown that the VMHD induced in a vessel is related to the blood flow through it (stroke volume in the case of the aortic arch) [4]. Further proof of this relationship can be shown from the increase in VMHD measured from periphery blood vessels during periods of elevated heart rate from exercise stress, when compared to a baseline state [5]. Previously, a portable device was built to utilize induced VMHD as an indicator of flow [6].
The device was capable of showing change in blood flow, utilizing a blood flow metric obtained from VMHD; however, a quantitative relationship between VMHD and blood flow has yet to be established. This study aims to define the relationship between induced VMHD and magnetic field strength in a controlled setting. By modulating the distance between a pair of magnets around a flow channel, we hope to better characterize the relationship between magnetic field strength and induced VMHD with constant flow and electrolytic solution concentration. Magnetic fields, Magnetohydrodynamics, Flow (Dynamics), Blood flow, Magnetic resonance imaging, Aortic arch, Fluids, Vessels, Biomedicine, Blood vessels, Electrolytes, Fluid dynamics, Machinery, Magnets, Noise (Sound), Particulate matter, Stress
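For the special case of a uniform field perpendicular to a uniform flow, the integral in Eqn. 1 reduces to VMHD = u·B0·L. A minimal sketch of this reduced form; the numeric values below are purely illustrative and are not measurements from the study:

```python
# Hedged sketch: V_MHD = u * B0 * L for a uniform field perpendicular to a
# uniform flow. Illustrative values only, not data from the paper.
def v_mhd(u_m_per_s, b0_tesla, l_m):
    """Induced MHD voltage (volts) across a vessel of width l_m."""
    return u_m_per_s * b0_tesla * l_m

# e.g., 1 m/s flow in a 3 T field across a 25 mm vessel
v = v_mhd(1.0, 3.0, 0.025)
print(v)  # 0.075
```

This makes the proportionality the study investigates explicit: at fixed u and L, VMHD scales linearly with B0.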
Create Model of Receptor-Ligand Kinetics - MATLAB & Simulink Open Model Builder App In this model, ligand L and receptor R species form receptor-ligand complexes through reversible binding reactions. These reactions are defined using mass action kinetics by \frac{dC}{dt}={k}_{f}\cdot L\cdot R-{k}_{r}\cdot C , where kf and kr are forward and reverse rate constants. L, R, and C are the concentrations of the ligand, receptor, and receptor-ligand complex, respectively. Click the SimBiology Model Builder icon on the Apps tab or enter simBiologyModelBuilder at the command line. On the Home tab of the app, select Model > Create New Blank Model. Enter m1 as the name for the model. The app creates an empty compartment unnamed and displays the compartment on the Diagram tab. Drag and drop three species blocks and one reaction block into the compartment. Optionally, you can rename the species and compartment by double-clicking the default names. For instance, change unnamed to cell. To connect the species to the reaction, press and hold the Ctrl key (on Windows® and Linux®) or the Option key (on macOS), click the species block, and drag the line. Click the reaction block to see its properties in the Property Editor pane. Set the following parameters. Select Reversible > true. In the States table, update the values of L to 5 and R to 10. Set the units of the L, R, and C species to nanomole/liter. Set the value of the forward rate parameter kf to 0.05. Set the unit to liter/nanomole/hour. Set the value of the reverse rate parameter kr to 0.1 with the unit 1/hour. On the Home tab, click the Model Analyzer icon to open the SimBiology Model Analyzer app. In the Model Analyzer app, select Program > Simulate Model on the Home tab. The Program1 tab opens. In the Simulation step of the program, set the Stop Time to 20 seconds because the model reaches a saturated state after that. Click Run from the Home tab. Running the program plots the results in the Plot1 tab. 
The plot shows the simulated responses in different colors. The program stores the simulation results in the LastRun folder of the program.
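The same mass-action system can be sketched outside SimBiology. This hypothetical Python script (not MathWorks code) integrates dC/dt = kf·L·R − kr·C with the tutorial's values (kf = 0.05, kr = 0.1, L0 = 5, R0 = 10) by forward Euler, taking the time unit implied by the rate constants:

```python
# Forward-Euler sketch of the receptor-ligand binding model:
#   dC/dt = kf*L*R - kr*C, with L = L0 - C and R = R0 - C by conservation.
kf, kr = 0.05, 0.1        # forward / reverse rate constants
L0, R0 = 5.0, 10.0        # initial ligand / receptor concentrations
C, dt = 0.0, 1e-3
for _ in range(int(20 / dt)):          # integrate to t = 20, the stop time
    C += dt * (kf * (L0 - C) * (R0 - C) - kr * C)
print(round(C, 3))  # approaches the root of kf*(L0-C)*(R0-C) = kr*C
```

By t = 20 the complex concentration has saturated near its equilibrium value (the smaller root of the quadratic kf·(L0−C)·(R0−C) = kr·C), matching the plateau seen in the simulated plot.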
Flux-based permanent magnet synchronous motor - Simulink - MathWorks América Latina The Flux-Based PMSM block implements a flux-based three-phase permanent magnet synchronous motor (PMSM) with a tabular-based electromotive force. The block uses the three-phase input voltages to regulate the individual phase currents, allowing control of the motor torque or speed. Flux-based motor models take into account magnetic saturation and iron losses. To calculate the magnetic saturation and iron loss, the Flux-Based PMSM block uses the inverse of the flux linkages. To obtain the block parameters, you can use finite-element analysis (FEA) or measure phase voltages using a dynamometer. To enable power loss calculations suitable for code generation targets that limit memory, select Enable memory optimized 2D LUT. The block implements equations that are expressed in a stationary rotor reference (dq) frame. The d-axis aligns with the a-axis. All quantities in the rotor reference frame are referred to the stator. The block uses these equations.
q- and d-axis voltage \begin{array}{l}{v}_{d}=\frac{d{\psi }_{d}}{dt}+{R}_{s}{i}_{d}-{\omega }_{e}{\psi }_{q}\\ {v}_{q}=\frac{d{\psi }_{q}}{dt}+{R}_{s}{i}_{q}+{\omega }_{e}{\psi }_{d}\end{array} q- and d-axis current \begin{array}{l}{i}_{d}=f\left({\psi }_{d},{\psi }_{q}\right)\\ {i}_{q}=g\left({\psi }_{d},{\psi }_{q}\right)\end{array} Electromechanical torque {T}_{e}=1.5P\left[{\psi }_{d}{i}_{q}-{\psi }_{q}{i}_{d}\right] where θe is the dq stator electrical angle with respect to the rotor a-axis, Rs is the resistance of the stator windings, iq and id are the q- and d-axis currents, respectively, vq and vd are the q- and d-axis voltages, respectively, and Ψq, Ψd are the q- and d-axis magnet fluxes, respectively. Electrical speed and angle \begin{array}{l}{\omega }_{e}=P{\omega }_{m}\\ \frac{d{\theta }_{e}}{dt}= {\omega }_{e}\end{array} Clarke transform (abc to αβ) \begin{array}{l}{x}_{\alpha }= \frac{2}{3}{x}_{a}- \frac{1}{3}{x}_{b} -\frac{1}{3}{x}_{c}\\ {x}_{\beta }= \frac{\sqrt{3}}{2}{x}_{b}- \frac{\sqrt{3}}{2}{x}_{c}\end{array} Park transform (αβ to dq) \begin{array}{l}{x}_{d}= {x}_{\alpha }\mathrm{cos}{\theta }_{e}+ {x}_{\beta }\mathrm{sin}{\theta }_{e}\\ {x}_{q}= -{x}_{\alpha }\mathrm{sin}{\theta }_{e}+ {x}_{\beta }\mathrm{cos}{\theta }_{e}\end{array} Inverse Clarke transform (αβ to abc) \begin{array}{l}{x}_{a}= {x}_{\alpha }\\ {x}_{b}= -\frac{1}{2}{x}_{\alpha }+ \frac{\sqrt{3}}{2}{x}_{\beta }\\ {x}_{c}= -\frac{1}{2}{x}_{\alpha }- \frac{\sqrt{3}}{2}{x}_{\beta }\end{array} Inverse Park transform (dq to αβ) \begin{array}{l}{x}_{\alpha }= {x}_{d}\mathrm{cos}{\theta }_{e}- {x}_{q}\mathrm{sin}{\theta }_{e}\\ {x}_{\beta }= {x}_{d}\mathrm{sin}{\theta }_{e}+ {x}_{q}\mathrm{cos}{\theta }_{e}\end{array} The rotor angular velocity is given by: \begin{array}{c}\frac{d}{dt}{\omega }_{m}=\frac{1}{J}\left({T}_{e}-{T}_{f}-F{\omega }_{m}-{T}_{m}\right)\\ \frac{d{\theta }_{m}}{dt}={\omega }_{m}\end{array} where J is the combined inertia of rotor and load, F is the combined viscous friction of rotor and load, and Tf is the combined rotor and load friction torque. Power calculations: {P}_{mot}= -{\omega }_{m}{T}_{e} {P}_{bus}= {v}_{an}{i}_{a}+ {v}_{bn}{i}_{b}+{v}_{cn}{i}_{c} {P}_{elec}= -\frac{3}{2}\left({R}_{s}{i}_{sd}^{2}+{R}_{s}{i}_{sq}^{2}\right)
{P}_{mech}= -\left({\omega }_{m}^{2}F+ |{\omega }_{m}|{T}_{f}\right) {P}_{mech}= 0 {P}_{str}= {P}_{bus}+ {P}_{mot}+ {P}_{elec} + {P}_{mech} Stator phase a, b, and c current Stator q- and d-axis currents Stator phase a, b, and c voltage Combined motor and load viscous damping Combined motor and load friction torque The data for the Corresponding d-axis current, id and Corresponding q-axis current, iq lookup tables are functions of the d- and q-axis flux. To enable current calculations suitable for code generation targets that limit memory, select Enable memory optimized 2D LUT. The block uses linear interpolation to optimize the current lookup table values for code generation. This table summarizes the optimization implementation. d- and q-axis flux aligns with the lookup table breakpoint values. Memory-optimized current is the current lookup table value at the intersection of the flux values. d- and q-axis flux does not align with the lookup table breakpoint values, but is within range. Memory-optimized current is a linear interpolation between the corresponding flux values. d- and q-axis flux does not align with the lookup table breakpoint values, and is out of range. Cannot compute a memory-optimized current. Block uses extrapolated data. The lookup tables optimized for code generation do not support extrapolation for data that is out of range. However, you can include pre-calculated extrapolation values in the power loss lookup table by selecting Specify Extrapolation. The block uses the endpoint parameters to resize the table data. LdTrq — Rotor shaft torque Rotor shaft input torque, Tm, in N·m. To create this port, select Speed or Torque for the Port Configuration parameter. PwrStored PwrMtrStored Enable memory optimized 2D LUT — Selection Enable generation of optimized lookup tables, suitable for code generation targets that limit memory. Vector of d-axis flux, flux_d — Flux breakpoints d-axis flux, Ψd, breakpoints, in Wb.
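The dq ↔ αβ rotations in the block equations can be sanity-checked numerically. This is a hedged Python sketch (not MathWorks code) of the Park transform and its inverse as written above:

```python
import math

def park(x_alpha, x_beta, theta_e):
    """alpha-beta -> dq rotation, per the block's Park transform."""
    xd = x_alpha * math.cos(theta_e) + x_beta * math.sin(theta_e)
    xq = -x_alpha * math.sin(theta_e) + x_beta * math.cos(theta_e)
    return xd, xq

def inverse_park(xd, xq, theta_e):
    """dq -> alpha-beta rotation, per the block's inverse Park transform."""
    x_alpha = xd * math.cos(theta_e) - xq * math.sin(theta_e)
    x_beta = xd * math.sin(theta_e) + xq * math.cos(theta_e)
    return x_alpha, x_beta

# The two transforms are inverse rotations: a round trip returns the input.
xd, xq = park(1.2, -0.4, 0.7)
print(inverse_park(xd, xq, 0.7))  # approximately (1.2, -0.4)
```

The round trip confirming that the forward and inverse Park matrices are mutual inverses is exactly the property the block relies on when moving between the stator and rotor reference frames.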
Resample storage size for flux_d, n1 — Flux bit size 2 (default) | 4 | 8 | 16 | 32 | 64 | 128 | 256 Flux breakpoint storage size, n1, dimensionless. The block resamples the Corresponding d-axis current, id and Corresponding q-axis current, iq data based on the storage size. To create this parameter, select Enable memory optimized 2D LUT. Vector of q-axis flux, flux_q — Flux breakpoints q-axis flux, Ψq, breakpoints, in Wb. Resample storage size for flux_q, n2 — Flux bit size Corresponding d-axis current, id — 2D lookup table Array of values for d-axis current, id, as a function of M d-fluxes, Ψd, and N q-fluxes, Ψq, in A. Each value specifies the current for a specific combination of d- and q-axis flux. The array size must match the dimensions defined by the flux vectors. If you set Enable memory optimized 2D LUT, the block converts the data to single precision. Corresponding q-axis current, iq — 2D lookup table Array of values for q-axis current, iq, as a function of M d-fluxes, Ψd, and N q-fluxes, Ψq, in A. Each value specifies the current for a specific combination of d- and q-axis flux. The array size must match the dimensions defined by the flux vectors. flux_d max endpoint, u1max — Flux breakpoint Flux breakpoint maximum extrapolation endpoint, u1max, in Wb. To create this parameter, select Enable memory optimized 2D LUT and Specify Extrapolation. flux_d min endpoint, u1min — Flux breakpoint Flux breakpoint minimum extrapolation endpoint, u1min, in Wb. flux_q max endpoint, u2max — Flux breakpoint flux_q min endpoint, u2min — Flux breakpoint Stator phase resistance, Rs — Resistance Stator phase resistance, Rs, in ohm. Initial flux, fluxdq0 — Flux Initial d- and q-axis flux, Ψq0 and Ψd0, in Wb.
Initial mechanical position, theta_init — Angle Initial mechanical speed, omega_init — Speed Physical inertia, viscous damping, and static friction, mechanical — Inertia, damping, friction Inertia, J, in kg·m^2 Flux-Based PM Controller | Induction Motor | Interior PMSM | Mapped Motor | Surface Mount PMSM
Definition 15.26.1 (053H)—The Stacks project Definition 15.26.1. Let $R$ be a ring. Let $I \subset R$ be an ideal and $a \in I$. Let $R[\frac{I}{a}]$ be the affine blowup algebra, see Algebra, Definition 10.70.1. Let $M$ be an $R$-module. The strict transform of $M$ along $R \to R[\frac{I}{a}]$ is the $R[\frac{I}{a}]$-module \[ M' = \left(M \otimes _ R R[\textstyle {\frac{I}{a}}]\right)/a\text{-power torsion} \]
Exhaust Particulate Matter Emission Factors and Deterioration Rate for In-Use Motor Vehicles | J. Eng. Gas Turbines Power | ASME Digital Collection B. Ubanwa, Texas Natural Resource Conservation Commission, 13219 Marrero Drive, Austin, TX 78729 A. Burnette, Eastern Research Group, 5608 Parkcrest Drive, Suite 100, Austin, TX 78731 S. Kishan; S. G. Fritz, Principal Engineer, Department of Emissions Research, Southwest Research Institute, 6220 Culebra Road, San Antonio, TX 78238-5166 Contributed by the Internal Combustion Engine Division of THE AMERICAN SOCIETY OF MECHANICAL ENGINEERS for publication in the ASME JOURNAL OF ENGINEERING FOR GAS TURBINES AND POWER. Manuscript received by the ICE Division July 2000; final revision received by ASME Headquarters March 2002. Associate Editor: D. Assanis. Ubanwa, B., Burnette, A., Kishan, S., and Fritz, S. G. (April 29, 2003). "Exhaust Particulate Matter Emission Factors and Deterioration Rate for In-Use Motor Vehicles." ASME. J. Eng. Gas Turbines Power. April 2003; 125(2): 513–523. https://doi.org/10.1115/1.1559904 Recent measurements and modeling of primary exhaust particulate matter (PM) emissions from both gasoline- and diesel-powered motor vehicles suggest that many vehicles produce PM at rates substantially higher than assumed in the current EPA PM emission factor model, known as "PART5." The discrepancy between actual and modeled PM emissions is generally attributed to inadequate emissions data and outdated assumptions in the PART5 model. This paper presents a study with the objective of developing an in-house tool (a modified PART5 model) for the Texas Natural Resource Conservation Commission (TNRCC) to use for estimating motor vehicle exhaust PM emissions in Texas.
The work included chassis dynamometer emissions testing on several heavy-duty diesel vehicles at the Southwest Research Institute (SwRI), analysis of the exhaust PM emissions and other regulated pollutants (i.e., HC, CO, NOx), review of related studies and exhaust PM emission data obtained from literature on similar types of light- and heavy-duty vehicle tests, a review of the current PART5 model, and analysis of the associated emission deterioration rates. Exhaust PM emissions data obtained from the vehicle testing at SwRI and other similar studies (covering a relatively large number and wide range of vehicles) were merged and, finally, used to modify the PART5 model. The modified model, named PART5-TX1, was then used to estimate new exhaust PM emission factors for in-use motor vehicles. Modifications to the model are briefly described, along with emissions test results from the heavy-duty diesel-powered vehicles tested at SwRI. Readers interested in a detailed understanding of the techniques used to modify the PART5 model are referred to the final project report to TNRCC (Eastern Research Group 2000). road vehicles, internal combustion engines, air pollution control, air pollution measurement, testing Emissions, Exhaust systems, Vehicles, Particulate matter, Diesel, Motor vehicles Cadle, S. H., et al., 1998, "Measurement of Exhaust Particulate Matter Emissions from In-Use Light-Duty Motor Vehicles in the Denver, Colorado Area," Final Report, CRC Project E-24-1. Norbeck, J. M., et al., 1998, "Measurement of Primary Particulate Matter Emissions From Light-Duty Motor Vehicles," Final Report, CRC Project No. E-24-2. Whitney, K. A., 1998, "Measurement of Primary Exhaust Particulate Matter Emissions From Light-Duty Motor Vehicles," Final Report, CRC Project No. E-24-3. Graboski, M. S., et al., 1998, "Heavy-Duty Diesel Vehicle Testing for the Northern Front Range Air Quality Study," Final Report, Colorado State University Project. Mulawa, P.
A., et al., 1995, “Characterization of Exhaust Particulate Matter From 1986 through 1990 Model Year Light-Duty Gasoline Vehicles,” GM Research and Development Publication 8456. Darlington, T., et al., 1996, “Exhaust Particulate Emissions from Gasoline-Fueled Vehicles,” 96WCC018, presented at World Car Conference. Weaver, C., et al., 1998, “Modeling Deterioration in Heavy-Duty Diesel Particulate Emissions,” EPA report under contract No. 8C-S112-NTSX. France, C. J., et al., 1979, “Recommended Practice for Determining Exhaust Emissions From Heavy-Duty Vehicles Under Transient Conditions,” Technical Report SDSB 79-08, EPA. Urban, C. M., 1984, “Dynamometer Simulation of Truck and Bus Road Horsepower for Transient Emissions Evaluations,” SAE Paper No. 840349. Cadle, S. H., et al., 1995, “PM-10 Dynamometer Exhaust Samples Collected From In-Service Vehicles in Nevada,” GM Research and Development Publication 8464. Particulate and Speciated HC Emission Rates from In-Use Vehicles Recruited in Orange County, CA Fritz, S. G., et al., 1992, “Emissions From Heavy-Duty Trucks Converted to CNG,” ASME 92-ICE-10. Fritz, S. G., et al., 1993, “Emissions From Heavy-Duty Trucks Converted to Compressed Natural Gas,” SAE 932950. Arcadis Geraghty & Miller, Inc., 1998, “Update Heavy-Duty Engine Emission Conversion Factors for MOBILE6: Analysis of BSFC and Calculation of Heavy-Duty Engine Emission Conversion Factors,” EPA Report No. 420-P-98-015, Mountain View, CA. Special ICE Issue
I think there is a shorter proof: By assumption g is not contained in any of the minimal primes of the support of M. Thus g is not contained in any of the minimal associated primes, by Lemma 10.62.6. By Lemma 10.102.7 there are no embedded associated primes. Applying Lemma 10.62.9 we see that g is a nonzerodivisor on M. Lemma 10.102.5 finishes the proof. Of course the results in this chapter would need to be rearranged, but Lemmas 10.102.5 and 10.102.7 only use results from earlier sections anyway. I think this would also make Lemma 10.102.2 and its notation obsolete. Dear Dario, yes this lemma is left over from the attempt I made to write about Cohen-Macaulay modules with very little general theory about depth and regular sequences. The steps are in 10.103.2, 10.103.3, and 10.103.4 using the notion of a good element. I do still think it is somewhat fun that this can be done, so I am going to leave it as is for now.
Comment #1612 by Martin Bright on September 02, 2015 at 13:59 I've noticed a couple of typos in this proof. Halfway through there is a rogue h which should be a g. Also, in the statement "... we may assume that x \in Z_i for i=1, \ldots, n ", I think x should be x'. By the way, the version of this statement when S is the spectrum of a DVR seems already interesting: do you know if it's in the literature anywhere? Thanks for the corrections. I changed them here. Most of these statements are in the paper by Osserman and Payne mentioned at the beginning of the sections.
Technological change in energy - IMAGE - IAMC-Documentation Technological change in the energy model TIMER An important aspect of TIMER is the endogenous formulation of technology development, on the basis of learning by doing, which is considered to be a meaningful representation of technology change in global energy models [1, 2, 3]. The general formulation of learning by doing in a model context is that a cost measure Y tends to decline as a power function of an accumulated learning measure, where n is the learning rate, Q the cumulative capacity or output, and C is a constant: {\displaystyle Y=C*Q^{-n}} Often n is expressed by the progress ratio p, which indicates how fast the cost metric Y decreases with each doubling of Q (p = 2^{-n}). Progress ratios reported in empirical studies are mostly between 0.65 and 0.95, with a median value of 0.82 [4]. In TIMER, learning by doing influences the capital-output ratio of coal, oil and gas production, the investment cost of renewable and nuclear energy, the cost of hydrogen technologies, and the rate at which the energy conservation cost curves decline. The actual values used depend on the technologies and the scenario setting. The progress ratio for solar/wind and bioenergy has been set at a lower level than for fossil-based technologies, based on their early stage of development and observed historical trends [3]. There is evidence that, in the early stages of development, p is lower than for technologies in use over a long period of time. For instance, values for solar energy have typically been below 0.8, and for fossil-fuel production around 0.9 to 0.95. For technologies in early stages of development, other factors may also contribute to technology progress, such as relatively high investment in research and development [3]. In TIMER, the existence of a single global learning curve is postulated.
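The learning-by-doing relation Y = C·Q^(−n) and its progress ratio can be checked directly. A small sketch using the median empirical progress ratio of 0.82; the cost and capacity values are illustrative:

```python
import math

def learning_cost(C, n, Q):
    """Cost measure after cumulative capacity/output Q: Y = C * Q**-n."""
    return C * Q ** (-n)

p = 0.82                 # progress ratio (median of empirical studies)
n = -math.log2(p)        # learning exponent, since p = 2**-n

# Doubling cumulative output multiplies the cost by exactly p:
ratio = learning_cost(100.0, n, 2000.0) / learning_cost(100.0, n, 1000.0)
print(round(ratio, 2))  # 0.82
```

Because Y(2Q)/Y(Q) = 2^(−n) regardless of C and Q, the progress ratio is a property of the exponent alone, which is why it is the standard way of reporting learning rates.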
Regions are then assumed to pool knowledge and learn together or, depending on the scenario assumptions, are partly excluded from this pool. In the latter case, only the smaller cumulated production in the region drives the learning process, and costs decline at a slower rate. Technology substitution in the energy model TIMER The indicated market share (IMS) of a technology is determined using a multinomial logit model that assigns market shares to the different technologies (i) on the basis of their relative prices in a set of competing technologies (j). {\displaystyle MS_{i}={\frac {e^{-\lambda c_{i}}}{\sum _{j}e^{-\lambda c_{j}}}}} MS is the market share of different technologies and c is their costs. In this equation, λ is the so-called logit parameter, determining the sensitivity of markets to price differences. The equation takes account of direct costs and also energy and carbon taxes and premium values. The last two reflect non-price factors determining market shares, such as preferences, environmental policies, infrastructure (or the lack of infrastructure) and strategic considerations. The premium values are determined in the model calibration process in order to correctly simulate historical market shares on the basis of simulated price information. The same parameters are used in scenarios to simulate the assumption on societal preferences for clean and/or convenient fuels. [1] Christian Azar, Hadi Dowlatabadi (1999). A Review of Technical Change in Assessment of Climate Policy. Annual Review of Energy and the Environment, 24, 513-544. http://dx.doi.org/10.1146/annurev.energy.24.1.513 [2] A Grubler, N Nakicenovic, D G Victor (1999). Modeling technological change: Implications for the global environment. Annual Review of Energy and the Environment, 24, 545-569. [3] IEA (2010). Experience curves for energy technology policy. Paris, France: OECD/IEA. [4] L. Argote, D. Epple (1990). Learning Curves in Manufacturing. Science, 247, 920-924.
http://dx.doi.org/10.1126/science.247.4945.920
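The multinomial logit allocation described above can be sketched as a softmax over (negative) costs. The λ value and the costs below are illustrative, not TIMER calibration values:

```python
import math

def market_shares(costs, lam):
    """Multinomial logit: MS_i = exp(-lam*c_i) / sum_j exp(-lam*c_j)."""
    weights = [math.exp(-lam * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Three competing technologies; the cheapest gets the largest share.
shares = market_shares([4.0, 5.0, 6.0], lam=1.0)
print([round(s, 3) for s in shares])
```

Raising λ makes the allocation more winner-take-all (markets more sensitive to price differences); λ → 0 splits the market evenly regardless of cost.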
There are 120 scenes in iTHOR, evenly spread across kitchens, living rooms, bedrooms, and bathrooms. The list of scenes is defined as:
kitchens = [f"FloorPlan{i}" for i in range(1, 31)]
living_rooms = [f"FloorPlan{200 + i}" for i in range(1, 31)]
bedrooms = [f"FloorPlan{300 + i}" for i in range(1, 31)]
bathrooms = [f"FloorPlan{400 + i}" for i in range(1, 31)]
scenes = kitchens + living_rooms + bedrooms + bathrooms
["FloorPlan1", "FloorPlan2", "FloorPlan3", ..., "FloorPlan430"]
Explore the scenes in the demo! Scene Utility We provide a cached alias to the above definitions using the ithor_scenes utility:
controller.ithor_scenes(
    include_kitchens=True,
    include_living_rooms=True,
    include_bedrooms=True,
    include_bathrooms=True
)
["FloorPlan1", ..., "FloorPlan201", ...]
include_kitchens: Includes the 30 kitchens (i.e., FloorPlan[1:30]) in the returned list when True.
include_living_rooms: Includes the 30 living rooms (i.e., FloorPlan[201:230]) in the returned list when True.
include_bedrooms: Includes the 30 bedrooms (i.e., FloorPlan[301:330]) in the returned list when True.
include_bathrooms: Includes the 30 bathrooms (i.e., FloorPlan[401:430]) in the returned list when True.
It is common to train agents on a subset of scenes, in order to validate and test how well they generalize to new scenes and environments. For each room type, it is standard practice to treat the first 20 scenes as training scenes, the next 5 scenes as validation scenes, and the last 5 scenes as testing scenes.
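The train/validation/test convention in the last paragraph can be written out directly; split_scenes below is a hypothetical helper, not part of the AI2-THOR API:

```python
def split_scenes(offset):
    """Split one room type's 30 scenes into 20 train / 5 val / 5 test."""
    train = [f"FloorPlan{offset + i}" for i in range(1, 21)]
    val = [f"FloorPlan{offset + i}" for i in range(21, 26)]
    test = [f"FloorPlan{offset + i}" for i in range(26, 31)]
    return train, val, test

# Kitchens use offset 0, living rooms 200, bedrooms 300, bathrooms 400.
train, val, test = split_scenes(200)
print(train[0], val[0], test[-1])  # FloorPlan201 FloorPlan221 FloorPlan230
```

Applying the helper to all four offsets yields the standard 80/20/20-scene split of the full 120-scene set.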
Average absolute deviation - Wikipedia
Measures of dispersion
Mean absolute deviation around a central point
The mean absolute deviation of a data set around a central point {\displaystyle m(X)} is {\displaystyle {\frac {1}{n}}\sum _{i=1}^{n}|x_{i}-m(X)|,} where {\displaystyle m(X)} may be the mean, the median, or the mode. For example, for the data set {2, 2, 3, 4, 14}:
around the mean (5): {\displaystyle {\frac {|2-5|+|2-5|+|3-5|+|4-5|+|14-5|}{5}}=3.6}
around the median (3): {\displaystyle {\frac {|2-3|+|2-3|+|3-3|+|4-3|+|14-3|}{5}}=2.8}
around the mode (2): {\displaystyle {\frac {|2-2|+|2-2|+|3-2|+|4-2|+|14-2|}{5}}=3.0}
By Jensen's inequality {\displaystyle \varphi \left(\mathbb {E} [Y]\right)\leq \mathbb {E} \left[\varphi (Y)\right]} applied to {\displaystyle Y=\vert X-\mu \vert } with {\displaystyle \varphi (y)=y^{2}}, we get {\displaystyle \left(\mathbb {E} |X-\mu |\right)^{2}\leq \mathbb {E} \left(|X-\mu |^{2}\right)=\operatorname {Var} (X),} and hence {\displaystyle \mathbb {E} \left(|X-\mu |\right)\leq {\sqrt {\operatorname {Var} (X)}}.} For the normal distribution, the ratio of mean absolute deviation to standard deviation is {\textstyle {\sqrt {2/\pi }}=0.79788456\ldots }. Thus if X is a normally distributed random variable with expected value 0 then, see Geary (1935):[1] {\displaystyle w={\frac {E|X|}{\sqrt {E(X^{2})}}}={\sqrt {\frac {2}{\pi }}}.} The corresponding sample statistic satisfies {\displaystyle w_{n}\in [0,1]}, with a bias for small n.[2]
Mean absolute deviation around the mean
MAD has been proposed to be used in place of standard deviation since it corresponds better to real life.[3] Because the MAD is a simpler measure of variability than the standard deviation, it can be useful in school teaching.[4][5] This method's forecast accuracy is very closely related to the mean squared error (MSE) method, which is just the average squared error of the forecasts.
Although these methods are very closely related, MAD is more commonly used because it is both easier to compute (avoiding the need for squaring)[6] and easier to understand.[7]
Mean absolute deviation around the median
The mean absolute deviation around the median is {\displaystyle D_{\text{med}}=E|X-{\text{median}}|}. The median minimizes the mean absolute deviation around any fixed point {\displaystyle b}, so {\displaystyle D_{\text{med}}\leq D_{\text{mean}}}; for the normal distribution, {\textstyle D_{\text{mean}}=\sigma {\sqrt {2/\pi }}\approx 0.797884\sigma }. Furthermore, {\displaystyle D_{\text{med}}=E|X-{\text{median}}|=2\operatorname {Cov} (X,I_{O})} where {\displaystyle \mathbf {I} _{O}:={\begin{cases}1&{\text{if }}x>{\text{median}},\\0&{\text{otherwise}}.\end{cases}}} This representation allows for obtaining MAD median correlation coefficients.[citation needed]
Median absolute deviation around a central point
Median absolute deviation around the median
The median absolute deviation (also MAD) is the median of the absolute deviation from the median. It is a robust estimator of dispersion.
Maximum absolute deviation
Minimization
^ Geary, R. C. (1935). The ratio of the mean deviation to the standard deviation as a test of normality. Biometrika, 27(3/4), 310–332. ^ See also Geary's 1936 and 1946 papers: Geary, R. C. (1936). Moments of the ratio of the mean deviation to the standard deviation for normal samples. Biometrika, 28(3/4), 295–307 and Geary, R. C. (1947). Testing for normality. Biometrika, 34(3/4), 209–242. ^ Taleb, Nassim Nicholas (2014). "What scientific idea is ready for retirement?". Edge. Archived from the original on 2014-01-16. Retrieved 2014-01-16. ^ Kader, Gary (March 1999). "Means and MADS". Mathematics Teaching in the Middle School. 4 (6): 398–403. Archived from the original on 2013-05-18. Retrieved 20 February 2013. ^ Franklin, Christine, Gary Kader, Denise Mewborn, Jerry Moreno, Roxy Peck, Mike Perry, and Richard Scheaffer (2007).
Guidelines for Assessment and Instruction in Statistics Education (PDF). American Statistical Association. ISBN 978-0-9791747-1-1. Archived (PDF) from the original on 2013-03-07. Retrieved 2013-02-20. ^ Nahmias, Steven; Olsen, Tava Lennon (2015), Production and Operations Analysis (7th ed.), Waveland Press, p. 62, ISBN 9781478628248, MAD is often the preferred method of measuring the forecast error because it does not require squaring. ^ Stadtler, Hartmut; Kilger, Christoph; Meyr, Herbert, eds. (2014), Supply Chain Management and Advanced Planning: Concepts, Models, Software, and Case Studies, Springer Texts in Business and Economics (5th ed.), Springer, p. 143, ISBN 9783642553097, the meaning of the MAD is easier to interpret.
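The three worked values from the example data set {2, 2, 3, 4, 14} (3.6 around the mean, 2.8 around the median, 3.0 around the mode) can be reproduced in a few lines:

```python
from statistics import mean, median, mode

def avg_abs_dev(xs, center):
    """Mean absolute deviation of xs around a chosen central point."""
    return sum(abs(x - center) for x in xs) / len(xs)

data = [2, 2, 3, 4, 14]
print(avg_abs_dev(data, mean(data)))    # 3.6  (mean = 5)
print(avg_abs_dev(data, median(data)))  # 2.8  (median = 3)
print(avg_abs_dev(data, mode(data)))    # 3.0  (mode = 2)
```

As the section notes, the deviation around the median is the smallest of the three, since the median minimizes mean absolute deviation.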
Kasiski - Maple Help compute the Kasiski test on a string Kasiski( s ) The Kasiski(s) command computes the so-called Kasiski test for the string s. This is defined to be the greatest common divisor of the distances between occurrences of repeated substrings of s. In elementary cryptanalysis, it is often used in conjunction with the index of coincidence to attempt to determine the number of alphabets used to encipher a plain text with a polyalphabetic substitution cipher. \mathrm{with}⁡\left(\mathrm{StringTools}\right): \mathrm{Kasiski}⁡\left("abcde"\right) \textcolor[rgb]{0,0,1}{0} \mathrm{Kasiski}⁡\left("abcdeabcdabc"\right) \textcolor[rgb]{0,0,1}{1} StringTools[Entropy] StringTools[IndexOfCoincidence]
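A rough Python analogue of the test, assuming the gcd-of-distances reading that matches the outputs above (this is an illustrative sketch, not Maple's implementation, and the minimum substring length is an assumption):

```python
from math import gcd
from itertools import combinations

def kasiski(s, min_len=3):
    """gcd of distances between occurrences of repeated substrings of s.

    Returns 0 when s has no repeated substring of at least min_len chars.
    """
    result = 0
    for length in range(min_len, len(s)):
        positions = {}
        for i in range(len(s) - length + 1):
            positions.setdefault(s[i:i + length], []).append(i)
        for pos in positions.values():
            for a, b in combinations(pos, 2):
                result = gcd(result, b - a)  # gcd(0, d) == d seeds the fold
    return result

print(kasiski("abcde"))         # 0  (no repeated substring)
print(kasiski("abcdeabcdabc"))  # 1  ("abc" repeats at distances 5 and 4)
```

With a Vigenère-style cipher, repeated plaintext fragments enciphered under the same key alignment repeat at distances that are multiples of the key length, which is why this gcd hints at the number of alphabets.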
exposure - Maple Help Exposure has the dimension of electric charge of radiation per mass. The SI composite unit of exposure is the coulomb of radiation per kilogram. Maple knows the units of exposure listed in the following table. An asterisk ( * ) indicates the default context, an at sign (@) indicates an abbreviation, and under the prefixes column, SI indicates that the unit takes all SI prefixes, IEC indicates that the unit takes IEC prefixes, and SI+ and SI- indicate that the unit takes only positive and negative SI prefixes, respectively. Refer to a unit in the Units package by indexing the name or symbol with the context, for example, roentgen[SI] or R[SI]; or, if the context is indicated as the default, by using only the unit name or symbol, for example, roentgen or R. The unit of exposure is defined as follows. A roentgen is defined as 0.000258 coulomb of radiation per kilogram. \mathrm{convert}⁡\left('\mathrm{roentgen}','\mathrm{dimensions}','\mathrm{base}'=\mathrm{true}\right) \frac{\textcolor[rgb]{0,0,1}{\mathrm{electric_current}}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{\mathrm{radiation}}\right)\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{\mathrm{time}}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{\mathrm{radiation}}\right)}{\textcolor[rgb]{0,0,1}{\mathrm{mass}}} \mathrm{convert}⁡\left(1,'\mathrm{units}','\mathrm{roentgen}',\frac{'C⁡\left(\mathrm{radiation}\right)'}{'\mathrm{kg}'}\right) \frac{\textcolor[rgb]{0,0,1}{129}}{\textcolor[rgb]{0,0,1}{500000}} When the standard or natural modes for combining units are selected, Maple by default requires the correct annotation to the unit coulomb, as in the previous example. In the default simple mode, this is not required.
\mathrm{convert}⁡\left(1,'\mathrm{units}','\mathrm{roentgen}',\frac{'C'}{'\mathrm{kg}'}\right) \frac{\textcolor[rgb]{0,0,1}{129}}{\textcolor[rgb]{0,0,1}{500000}} \mathrm{Units}[\mathrm{UseMode}]⁡\left('\mathrm{standard}'\right) \textcolor[rgb]{0,0,1}{\mathrm{simple}} \mathrm{convert}⁡\left(1,'\mathrm{units}','\mathrm{roentgen}',\frac{'C'}{'\mathrm{kg}'}\right) Error, (in `convert/units`) unable to convert `R` to `C/kg` \mathrm{convert}⁡\left(1,'\mathrm{units}','\mathrm{roentgen}',\frac{'C⁡\left(\mathrm{radiation}\right)'}{'\mathrm{kg}'}\right) \frac{\textcolor[rgb]{0,0,1}{129}}{\textcolor[rgb]{0,0,1}{500000}} \mathrm{convert}⁡\left(1,'\mathrm{units}','\mathrm{roentgen}',\frac{'C'}{'\mathrm{kg}'},'\mathrm{symbolic}'\right) \frac{\textcolor[rgb]{0,0,1}{129}}{\textcolor[rgb]{0,0,1}{500000}}
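The exact fraction Maple returns can be checked with plain rational arithmetic (this is a cross-check in Python, not Maple code):

```python
from fractions import Fraction

# 1 roentgen = 0.000258 C(radiation)/kg = 258/1000000, reduced automatically.
roentgen_in_c_per_kg = Fraction(258, 1_000_000)
print(roentgen_in_c_per_kg)  # 129/500000
```

Reducing 258/1000000 by their common factor 2 gives 129/500000, the value shown in the Maple output above.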
Draw a flag that would generate the same volume no matter if it were rotated about the x-axis or the y-axis. Is there more than one possible shape of flag that meets this requirement? Does your flag have any special property that ensures these equal volumes of rotation? What type of symmetry will your flag need to have?
Sums such as {\displaystyle 1+2+3+4+5+6+7+8+9+10+11+12+13,} often abbreviated {\displaystyle 1+2+\cdots +13,} can be written compactly in summation notation as {\displaystyle {\displaystyle \sum _{i=1}^{13}\,i.}} The capital sigma {\displaystyle \left(\Sigma \right)} indicates a sum, and {\displaystyle i} is the index of summation. The expression following the {\displaystyle \Sigma } is evaluated for each value of {\displaystyle i} from {\displaystyle 1} to {\displaystyle 13}, that is, at {\displaystyle i=1,} {\displaystyle i=2,} {\displaystyle i=3,} and so on up to {\displaystyle 13}, and the results are added: {\displaystyle {\displaystyle \sum _{i=1}^{13}}\,i\,=\,1+2+3+4+5+6+7+8+9+10+11+12+13.} Further examples: {\displaystyle {\displaystyle \sum _{i=1}^{5}}\,i^{2}\,=\,1^{2}+2^{2}+3^{2}+4^{2}+5^{2}.} {\displaystyle {\displaystyle \sum _{i=n}^{2n}}\,i\,=\,n+(n+1)+\cdots +(2n-1)+2n,} {\displaystyle {\displaystyle \sum _{i=1}^{n}}\,i^{3}\,=\,1^{3}+2^{3}+3^{3}+\cdots +n^{3}.} The natural numbers {\displaystyle \mathbb {N} } are the set {\displaystyle {\text{The Natural Numbers}}\,=\,\mathbb {N} \,=\,\{1,2,3,\ldots \}\,=\,\{1,1+1,1+1+1,1+1+1+1,\ldots \}.} Mathematical induction proves a statement for every natural number in two steps: first verify it for {\displaystyle 1}; then show that if it holds for {\displaystyle n,} it also holds for {\displaystyle n+1}. Equivalently, one assumes the statement for {\displaystyle n-1,} and deduces it for {\displaystyle n.} In other words, the {\displaystyle n^{\mathrm {th} }} case follows from the {\displaystyle (n-1)^{\textrm {th}}} case.
In such situations, strong induction assumes that the conjecture is true for ALL cases of lower value tha{\displaystyle n} {\displaystyle n}atural numbers is {\displaystyle {\displaystyle \sum _{i=1}^{n}i\,=\,1+2+\cdots +n\,=\,{\frac {n(n+1)}{2}}.}} {\displaystyle n=1,} {\displaystyle {\displaystyle {\frac {n(n+1)}{2}}\,=\,{\frac {1(1+1)}{2}}\,=\,1.}} {\displaystyle n-1,} {\displaystyle {\displaystyle \sum _{i=1}^{n-1}i\,=\,{\frac {(n-1)\left(\left(n-1\right)+1\right)}{2}}\,=\,{\frac {(n-1)n}{2}}.}} {\displaystyle {\begin{array}{rcl}{\displaystyle \sum _{i=1}^{n}i}&=&{\displaystyle \sum _{i=1}^{n-1}i\,+\,n}\\\\&=&{\displaystyle {\frac {(n-1)n}{2}}\,+\,n\qquad \qquad {\mbox{(by the induction assumption)}}}\\\\&=&{\displaystyle {\frac {n^{2}-n}{2}}\,+\,{\frac {2n}{2}}}\\\\&=&{\displaystyle {\frac {n^{2}-n+2n}{2}}}\\\\&=&{\displaystyle {\frac {n^{2}+n}{2}}}\\\\&=&{\displaystyle {\frac {n(n+1)}{2}}},\end{array}}} {\displaystyle \square } {\displaystyle n} {\displaystyle n} {\displaystyle {\displaystyle \sum _{i=1}^{n}i^{2}\,=\,1^{2}+2^{2}+\cdots +n^{2}\,=\,{\frac {n(n+1)(2n+1)}{6}}.}} {\displaystyle n=1,} {\displaystyle {\displaystyle {\frac {n(n+1)(2n+1)}{6}}\,=\,{\frac {1(1+1)(2+1)}{6}}\,=\,1.}} {\displaystyle n-1,} {\displaystyle {\displaystyle \sum _{i=1}^{n-1}i\,=\,{\frac {(n-1)\left(\left(n-1\right)+1\right)\left(2\left(n-1\right)+1\right)}{6}}\,=\,{\frac {(n-1)n(2n-1)}{6}}\,=\,{\frac {2n^{3}-3n^{2}+n}{6}}.}} {\displaystyle {\begin{array}{rcl}{\displaystyle \sum _{i=1}^{n}i^{2}}&=&{\displaystyle \sum _{i=1}^{n-1}i^{2}+n^{2}}\\\\&=&{\displaystyle {\frac {2n^{3}-3n^{2}+n}{6}}+n^{2}\qquad \qquad {\mbox{(by the induction assumption)}}}\\\\&=&{\displaystyle {\frac {2n^{3}-3n^{2}+n}{6}}+{\frac {6n^{2}}{6}}}\\\\&=&{\displaystyle {\frac {2n^{3}+3n^{2}+n}{6}}}\\\\&=&{\displaystyle {\frac {n(2n^{2}+3n+1)}{6}}}\\\\&=&{\displaystyle {\frac {n(n+1)(2n+1)}{6}}},\end{array}}} {\displaystyle \square } {\displaystyle n} {\displaystyle n} {\displaystyle {\displaystyle \sum 
_{i=1}^{n}i^{3}\,=\,1^{3}+2^{3}+\cdots +n^{3}\,=\,{\frac {n^{2}(n+1)^{2}}{4}}.}} {\displaystyle n}atural numbers. {\displaystyle n=1,} {\displaystyle {\displaystyle {\frac {n^{2}(n+1)^{2}}{4}}\,=\,{\frac {1^{2}(1+1)^{2}}{4}}\,=\,1,}} {\displaystyle n-1,} {\displaystyle {\displaystyle \sum _{i=1}^{n-1}i^{3}\,=\,{\frac {(n-1)^{2}\left(\left(n-1\right)+1\right)^{2}}{4}}\,=\,{\frac {(n-1)^{2}n^{2}}{2}}.}} {\displaystyle {\begin{array}{rcl}{\displaystyle \sum _{i=1}^{n}i^{3}}&=&{\displaystyle \sum _{i=1}^{n-1}i^{3}+n^{3}}\\\\&=&{\displaystyle {\frac {(n-1)^{2}n^{2}}{4}}+n^{3}\qquad \qquad {\mbox{(by the induction assumption)}}}\\\\&=&{\displaystyle {\frac {n^{4}-2n^{3}+n^{2}}{4}}+{\frac {4n^{3}}{4}}}\\\\&=&{\displaystyle {\frac {n^{4}+2n^{3}+n^{2}}{4}}}\\\\&=&{\displaystyle {\frac {n^{2}(n^{2}+2n+1)}{4}}}\\\\&=&{\displaystyle {\frac {n^{2}(n+1)^{2}}{4}}},\end{array}}} {\displaystyle \square }
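The three closed forms proved above are easy to spot-check numerically. Here is a minimal sketch in Python (a checking tool only; it is not part of the induction proofs):

```python
# Check the closed forms for sums of powers against direct summation.
for n in range(1, 200):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
    assert sum(i**2 for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    assert sum(i**3 for i in range(1, n + 1)) == n**2 * (n + 1) ** 2 // 4
print("all three formulas verified for n = 1..199")
```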
\int_0^5 \left(|x-2|+3\right)\,dx
To avoid rewriting the integrand as a piecewise function, you could sketch the graph and geometrically compute the area between x = 0 and x = 5.

\int \left(\frac{4}{m^3} - 3\cos m\right)\,dm
Check your work by differentiating your answer. Make sure the coefficients are correct.

\int_1^2 x^x\,dx
In this course, you will discover strategies to differentiate (and integrate) exponential functions. But you have not discovered this process yet. Use your calculator.

\int \pi^2\,dx
Consider what the graph of \pi^2 looks like. Is its area function linear or quadratic?
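For the two definite integrals above, here is a numerical cross-check with a simple midpoint rule (plain Python; this plays the role of the calculator mentioned in the hint):

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    """Approximate the definite integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * h for i in range(n))

# Integral of |x - 2| + 3 from 0 to 5: geometrically, two triangles
# (areas 2 and 4.5) plus a 5-by-3 rectangle (area 15), totalling 21.5.
I1 = midpoint_integral(lambda x: abs(x - 2) + 3, 0, 5)
assert abs(I1 - 21.5) < 1e-3

# Integral of x^x from 1 to 2 has no elementary antiderivative; numerically it is about 2.05.
I2 = midpoint_integral(lambda x: x ** x, 1, 2)
assert abs(I2 - 2.05) < 0.01
```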
Mitigation of ELMs by Electrostatic Field in Tokamaks Zhongtian Wang1*, Xiaochang Chen1*, Yifan Yan2, Huidong Li3, Qian Liu4, Maolin Mou5, Na Wu5, Zhanhui Wang2, Rui Ke2, Lin Nie2, Ming Xu2 1 School of Sciences, Nanchang University, Nanchang, China. 2 Southwestern Institute of Physics, Chengdu, China. 3 School of Science, Xihua University, Chengdu, China. 5 College of Physical Science and Technology, Sichuan University, Chengdu, China. Abstract: Mitigation of ELMs by an electrostatic field is studied. Perpendicular heating by cyclotron waves tends to pile up the resonant particles toward the low-magnetic-field side, from which an electrostatic field may result [J. Y. Hsu, V. S. Chan, R. W. Harvey, R. Prater, and S. K. Wong, Phys. Rev. Lett. 53, 564 (1984)]. The electrostatic field can make circulating particles trapped or make trapped particles circulating, depending on the field direction. The trapped-particle population and the bootstrap current change accordingly. By modulating the bootstrap current, mitigation of type-I ELMs by the electrostatic field is possible. The electrostatic potential needed for the mitigation is quantitatively estimated. Experiments by either ECRH or biasing are being prepared to verify the theory. Keywords: Electrostatic Trapping, Bootstrap Current, Mitigation, Peeling-Ballooning Mode In present tokamaks operating in high-confinement regimes (H-modes), the steep pressure gradients at the edge are often observed to relax through frequent intermittent discharges of energy, known as ELMs. The physics of ELMs is a key issue for ITER operation. The onset of ELMs constrains the pressure at the top of the edge transport barrier (the pedestal height). ELM events transport substantial heat and particle loads to plasma-facing materials.
A predictive understanding of the onset of type-I ELMs has been gained via the development of peeling-ballooning mode theory [1], in which ELMs are triggered by instabilities driven by the large pressure gradient and bootstrap current in the edge. High pressure is important for fusion efficiency. The bootstrap current can be changed. Perpendicular heating by cyclotron waves tends to pile up the resonant particles toward the low-magnetic-field side. An electrostatic field may result [2]. Variations of the electrostatic potential at the plasma edge are observed in HL-2A [3]. Full particle simulation is performed using the Boris algorithm [4]. The electrostatic field can make circulating particles trapped or make trapped particles circulating, depending on the field direction. Under the assumption of neoclassical transport, the population of the trapped particles and the bootstrap current change accordingly. By modulating the bootstrap current through changes in the electrostatic field, mitigation of type-I ELMs is possible. The electrostatic potential needed for the mitigation is quantitatively calculated. Experiments by either ECRH or biasing [5] are being prepared to verify the theory. 2. Full Particle Orbit Simulation in Tokamaks In particle simulations of magnetized plasmas, the Boris algorithm [4] is the standard for advancing a charged particle in an electromagnetic field in accordance with the equation of motion associated with the Lorentz force, where the magnetic field and the electric field are given in terms of \Psi, the poloidal magnetic flux, and \Phi = ER_0\left(\frac{R_0}{R}-1\right), the electrostatic potential. We proceed from the Solov'ev solution {\Psi}_0 = \frac{j_\phi \mu_0 e}{2R_0\left(1+e^2\right)}, where e is the elongation and Q is related to the triangularity. We use ITER's parameters: e = 1.7, Q = 0.33, toroidal current I = 15 MA, aspect ratio A = 3.1. The tokamak magnetic field is thus well determined.
Full orbit simulations find electric trapping and de-trapping, seen in Figure 1 and Figure 2 respectively. Figure 1. (a) The electrostatic field E is zero; the particle is circulating. (b) The electrostatic field E is 35 kV/m; the particle is trapped. Ion energy is 60 keV, pitch angle is 72˚. Figure 2. (a) The electrostatic field E is zero; the particle is trapped. (b) The electrostatic field E is −35 kV/m; the particle becomes circulating, which is called de-trapping. Ion energy is 60 keV, pitch angle 151˚. Full particle orbit simulation is suitable for multi-scale problems. The Boris algorithm [4] keeps long-time simulation accurate. 3. Bootstrap Current The gyro-averaged Hamiltonian has been given in Ref. [6], where the momenta p_\phi = Rv_\phi - e\Psi are conjugate to \alpha, the gyrophase, \varphi, the toroidal angle, and x; here R and Z are the coordinates of the guiding center in a cylindrical system, \rho is the Larmor radius, and \Omega is the toroidal gyro-frequency. The particle mass is taken to be unity for simplicity. The electrostatic potential is assumed in a form like the dipole potential produced by two close point charges, where \epsilon is the inverse aspect ratio. The gyro-averaged Hamiltonian satisfies \bar{H} = H + eER_0 with \frac{1}{2}v_{\perp 0}^2 = \Omega_0 P_\alpha + eER_0. For the large-aspect-ratio approximation we have k = \sqrt{\frac{2\epsilon v_{\perp 0}^2}{v_{\varphi 0}^2}}. For the trapped particles, v_{\varphi,\mathrm{max}}^2 = 2\epsilon v_{\perp 0}^2, which sets the bounce frequency. The ions with \frac{v_{\varphi 0}^2}{2\epsilon} \le eR_0E are trapped; without the electrostatic field, however, they would be circulating. That is electrostatic trapping. A corresponding trapping condition holds for electrons. There is a minimum of \left(\Omega_0 P_\alpha\right)_{\mathrm{min}} = \frac{v_{\varphi 0}^2}{2} + eER_0 for trapping.
If the equilibrium distribution function is Maxwellian, it is easy to calculate the trapped-electron population. The fraction of trapped electrons is \text{Fraction} = \sqrt{2\epsilon}\,e^{-\frac{eER_0}{T}}. Compared with the neoclassical result [7], this changes by the factor e^{-\frac{eER_0}{T}}, and the bootstrap current changes accordingly [8]; here B_p is the poloidal magnetic field, n is the density, T is the plasma temperature, and L_n is the density scale length. The gradients in the electron profiles contribute typically 70% - 90% of the total bootstrap current [9]. 4. Peeling-Ballooning Modes The criterion for peeling-ballooning modes can be expressed by the following formula [10], where D_m is the Mercier coefficient, D_m < 1/4 is the Mercier stability criterion, finite (positive) bootstrap current j_\parallel is destabilizing, and q' is the derivative of the safety factor with respect to the poloidal magnetic flux. At the pedestal the temperature is low; therefore, from Equation (16), the bootstrap current is sensitive to the electrostatic potential. Now we use Equation (17) to calculate the criterion. For a large aspect ratio and low-\beta ordering, Equation (17) can be written [1] D_R < -\frac{Rq}{s}\left(\frac{j_\parallel}{B}\right)_{edge}, with D_R = \frac{3R}{s^2B^2}\frac{\text{d}P}{\text{d}r}e\left(\frac{r}{R} - 2\delta\right), where e is the elongation [11]. Neglecting the triangularity \delta, the criterion becomes \frac{eER_0}{T} > \ln\left(\frac{sq^2}{3e}\sqrt{\frac{2}{\epsilon^3}}\right). For s = 0.2, q = 2, e = 2, \epsilon = 0.3 we have the criterion for stability \frac{eER_0}{T} > 0.136, which can be produced in practical experiments [5]. The electrostatic field, hopefully, can realize ELM control like that in Ref. [12] and show synchronization of the ELM cycle with the added electrostatic field. The electrostatic field, hopefully, can also realize ELM-free discharges, as appear in the I-mode of Alcator C-Mod [13].
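The numerical value of the stability criterion quoted in the text can be reproduced in a few lines. A minimal sketch (the variable names mirror the symbols s, q, e, ε above):

```python
import math

# Stability criterion from the text: eER0/T > ln( (s q^2 / (3 e)) * sqrt(2 / eps^3) )
# evaluated at s = 0.2, q = 2, e (elongation) = 2, eps (inverse aspect ratio) = 0.3.
s, q, e, eps = 0.2, 2.0, 2.0, 0.3

threshold = math.log((s * q**2 / (3.0 * e)) * math.sqrt(2.0 / eps**3))
print(round(threshold, 3))  # prints 0.138, close to the 0.136 quoted in the text
assert abs(threshold - 0.136) < 0.005
```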
Full particle simulation is suitable for multi-scale problems. The Boris algorithm makes long-time simulation accurate. Perpendicular heating by cyclotron waves tends to pile up the resonant particles toward the low-magnetic-field side, from which an electrostatic field may result [2]. The electrostatic field can make circulating particles trapped or make trapped particles circulating, depending on the field direction. The trapped-particle population and bootstrap current change accordingly in the process. By modulating the bootstrap current, mitigation of type-I ELMs or ELM-free discharge is possible. Experiments by either ECRH or biasing [5] are being prepared to verify the theory in the HL-2A tokamak [14]. Helpful discussions with Prof. S. Q. Liu, Dr. Y. Liu and Dr. X. S. Yang are greatly appreciated. This work is supported by the National Natural Science Foundation for Young Scientists of China (Grant No. 11605143), the Chinese National Science Foundation (Nos. 11261140327, 11005035, 11205053, 11575055), and the National Key R&D Program of China under Grant No. 2017YFE0300405. Cite this paper: Wang, Z., Chen, X., Yan, Y., Li, H., Liu, Q., Mou, M., Wu, N., Wang, Z., Ke, R., Nie, L. and Xu, M. (2019) Mitigation of ELMs by Electrostatic Field in Tokamaks. Journal of High Energy Physics, Gravitation and Cosmology, 5, 149-155. doi: 10.4236/jhepgc.2019.51007. [1] Connor, J.W., Hastie, R.J., Wilson, H.R. and Miller, R.L. (1998) Magnetohydrodynamic Stability of Tokamak Edge Plasmas. Physics of Plasmas, 5, 2687. [2] Hsu, J.Y., Chan, V.S., Harvey, R.W., Prater, R. and Wong, S.K. (1984) Resonance Localization and Poloidal Electric Field Due to Cyclotron Wave Heating in Tokamak Plasmas. Physical Review Letters, 53, 564. [3] Cheng, J., Yan, L.W., Hong, W.Y., Zhao, K.J., Lan, T., Qian, J., Liu, A.D., Zhao, H.L., Liu, Y., Yang, Q.W., Dong, J.Q., Duan, X.R. and Liu, Y. (2010) Statistical Characterization of Blob Turbulence across the Separatrix in HL-2A Tokamak.
Plasma Physics and Controlled Fusion, 52, 055003. [4] Boris, J. (1970) Proceedings of the Fourth Conference on Numerical Simulation of Plasmas, Naval Research Laboratory, Washington DC. [5] Zhu, T.Z., Chen, Z.P., Sun, Y., Nan, J.Y., Liu, H., Zhuang, G. and Wang, Z.J. (2014) The Construction of an Electrode Biasing System for Driving Plasma Rotation in J-TEXT Tokamak. Review of Scientific Instruments, 85, 053504. [6] Wang, Z.T., Wang, L., Long, L.X., Dong, J.Q., He, Z.X., Liu, Y. and Tang, C.J. (2012) Gyrokinetics for High-Frequency Modes in Tokamaks. Physics of Plasmas, 19, 072110. [7] Rosenbluth, M.N., Hazeltine, R.D. and Hinton, F.L. (1972) Plasma Transport in Toroidal Confinement Systems. Physics of Fluids, 15, 116. [8] Wesson, J. (1997) Tokamaks. Clarendon Press, Oxford. [9] Sauter, O., Buttery, R.J., Felton, R., Hender, T.C., Howell, D.F. and Contributors to the EFDA-JET Workprogramme (2002) Marginal β-Limit for Neoclassical Tearing Modes in JET H-Mode Discharges. Plasma Physics and Controlled Fusion, 44, 1999-2019. [10] Snyder, P.B., Wilson, H.R., Ferron, J.R., Lao, L.L., Leonard, A.W., et al. (2002) Edge Localized Modes and the Pedestal: A Model Based on Coupled Peeling-Ballooning Modes. Physics of Plasmas, 9, 2037. [11] Fitzpatrick, R., Bondeson, C.A. and Hastie, R.J. (1992) On the "11/2-D" Evolution of Tokamak Plasmas in the Case of Large Aspect Ratio. Plasma Physics and Controlled Fusion, 34, 1445. [12] Martin, Y.R., Degeling, A., Lang, P.T., Lister, J.B., Sips, A.C.C., Suttrop, W., Treutterer, W. and the ASDEX Upgrade Team (2004) 31st EPS Conference on Plasma Physics, London, 28 June-2 July 2004, Vol. 28G, 4.133. [13] Whyte, D.G., Hubbard, A.E., Hughes, J.W., Lipschultz, B., Rice, J.E., Marmar, E.S., Greenwald, M., Cziegler, I., Dominguez, A., Golfinopoulos, T., Howard, N., Lin, L., McDermott, R.M., Porkolab, M., Reinke, M.L., Terry, J., Tsujii, N., Wolfe, S., Wukitch, S., Lin, Y.
and the Alcator C-Mod Team (2010) I-Mode: An H-Mode Energy Confinement Regime with L-Mode Particle Transport in Alcator C-Mod. Nuclear Fusion, 50, 105005. [14] Xu, M., Duan, X., Dong, J., Ding, X., Yan, L., et al. (2014) Overview of HL-2A Recent Experiments. Nuclear Fusion, 55, 104022.
Overwrite submatrix or subdiagonal of input The Overwrite Values block overwrites a contiguous submatrix or subdiagonal of an input matrix. You can provide the overwriting values by typing them in a block parameter, or through an additional input port, which is useful for providing overwriting values that change at each time step. The block accepts scalars, vectors and matrices. The output always has the same size as the original input signal, not necessarily the same size as the signal containing the overwriting values. The input(s) and output of this block must have the same data type. The Source of overwriting value(s) parameter determines how you must provide the overwriting values, and has the following settings. Specify via dialog — You must provide the overwriting value(s) in the Overwrite with parameter. The block uses the same overwriting values to overwrite the specified portion of the input at each time step. To learn how to specify valid overwriting values, see Valid Overwriting Values. Second input port — You must provide overwriting values through a second block input port, V. Use this setting to provide different overwriting values at each time step. The output inherits its size and rate from the input signal, not the overwriting values. The rate at which you provide the overwriting values through input port V must match the rate at which the block receives each input matrix at input port A. In other words, the input signals must have the same Simulink® sample time. The overwriting values can be a single constant, vector, or matrix, depending on the portion of the input you are overwriting, regardless of whether you provide the overwriting values through an input port or by providing them in the Overwrite with parameter.
Portion of Input to Overwrite / Valid Overwriting Values:

A single element in the input: any constant value, v.

A length-k portion of the diagonal: any length-k column or row vector, v. Example: k = 3, v = [2 4 6] or [2; 4; 6].

A length-k portion of a row: any length-k row vector, v. Example: k = 3, v = [2 4 6].

A length-k portion of a column: any length-k column vector, v. Example: k = 2, v = [4; 6].

An m-by-n submatrix: any m-by-n matrix, v. Example: m = 2, n = 3, v = [4 5 6; 7 8 9].

Only some of the following parameters are visible in the dialog box at any one time.

Overwrite
Determines whether to overwrite a specified submatrix or a specified portion of the diagonal.

Source of overwriting value(s)
Determines where you must provide the overwriting values: either through an input port, or by providing them in the Overwrite with parameter. For more information, see Specifying the Overwriting Values.

Overwrite with
The value(s) with which to overwrite the specified portion of the input matrix. Enabled only when Source of overwriting value(s) is set to Specify via dialog. To learn how to specify valid overwriting values, see Valid Overwriting Values.

Row span
The range of input rows to be overwritten. Options are All rows, One row, or Range of rows. For descriptions of these options, see Parameters.

Row/Starting row
The input row that is the first row of the submatrix that the block overwrites. For a description of the options for the Row and Starting row parameters, see Settings for Row, Column, Starting Row, and Starting Column Parameters. Row is enabled when Row span is set to One row, and Starting row when Row span is set to Range of rows.

Row index/Starting row index
Index of the input row that is the first row of the submatrix that the block overwrites.
See how to use these parameters in Settings for Row, Column, Starting Row, and Starting Column Parameters. Row index is enabled when Row is set to Index, and Starting row index when Starting row is set to Index. Row offset/Starting row offset The offset of the input row that is the first row of the submatrix that the block overwrites. See how to use these parameters in Settings for Row, Column, Starting Row, and Starting Column Parameters. Row offset is enabled when Row is set to Offset from middle or Offset from last, and Starting row offset is enabled when Starting row is set to Offset from middle or Offset from last. The input row that is the last row of the submatrix that the block overwrites. For a description of this parameter's options, see Settings for Ending Row and Ending Column Parameters. This parameter is enabled when Row span is set to Range of rows, and Starting row is set to any option but Last. Index of the input row that is the last row of the submatrix that the block overwrites. See how to use this parameter in Settings for Ending Row and Ending Column Parameters. Enabled when Ending row is set to Index. The offset of the input row that is the last row of the submatrix that the block overwrites. See how to use this parameter in Settings for Ending Row and Ending Column Parameters. Enabled when Ending row is set to Offset from middle or Offset from last. The range of input columns to be overwritten. Options are All columns, One column, or Range of columns. For descriptions of the analogous row options, see Parameters. Column/Starting column The input column that is the first column of the submatrix that the block overwrites. For a description of the options for the Column and Starting column parameters, see Settings for Row, Column, Starting Row, and Starting Column Parameters. Column is enabled when Column span is set to One column, and Starting column when Column span is set to Range of columns. 
Column index/Starting column index Index of the input column that is the first column of the submatrix that the block overwrites. See how to use these parameters in Settings for Row, Column, Starting Row, and Starting Column Parameters. Column index is enabled when Column is set to Index, and Starting column index when Starting column is set to Index. Column offset/Starting column offset The offset of the input column that is the first column of the submatrix that the block overwrites. See how to use these parameters in Settings for Row, Column, Starting Row, and Starting Column Parameters. Column offset is enabled when Column is set to Offset from middle or Offset from last, and Starting column offset is enabled when Starting column is set to Offset from middle or Offset from last. The input column that is the last column of the submatrix that the block overwrites. For a description of this parameter's options, see Settings for Ending Row and Ending Column Parameters. This parameter is enabled when Column span is set to Range of columns, and Starting column is set to any option but Last. Index of the input column that is the last column of the submatrix that the block overwrites. See how to use this parameter in Settings for Ending Row and Ending Column Parameters. This parameter is enabled when Ending column is set to Index. The offset of the input column that is the last column of the submatrix that the block overwrites. See how to use this parameter in Settings for Ending Row and Ending Column Parameters. This parameter is enabled when Ending column is set to Offset from middle or Offset from last. Diagonal span The range of diagonal elements to be overwritten. Options are All elements, One element, or Range of elements. For descriptions of these options, see Overwriting a Subdiagonal. Element/Starting element The input diagonal element that is the first element in the subdiagonal that the block overwrites. 
For a description of the options for the Element and Starting element parameters, see Element and Starting Element Parameters. Element is enabled when Diagonal span is set to One element, and Starting element when Diagonal span is set to Range of elements. Element index/Starting element index Index of the input diagonal element that is the first element of the subdiagonal that the block overwrites. See how to use these parameters in Element and Starting Element Parameters. Element index is enabled when Element is set to Index, and Starting element index when Starting element is set to Index. Element offset/Starting element offset The offset of the input diagonal element that is the first element of the subdiagonal that the block overwrites. See how to use these parameters in Element and Starting Element Parameters. Element offset is enabled when Element is set to Offset from middle or Offset from last, and Starting element offset is enabled when Starting element is set to Offset from middle or Offset from last. Ending element The input diagonal element that is the last element of the subdiagonal that the block overwrites. For a description of this parameter's options, see Ending Element Parameters. This parameter is enabled when Diagonal span is set to Range of elements, and Starting element is set to any option but Last. Ending element index Index of the input diagonal element that is the last element of the subdiagonal that the block overwrites. See how to use this parameter in Ending Element Parameters. This parameter is enabled when Ending element is set to Index. Ending element offset The offset of the input diagonal element that is the last element of the subdiagonal that the block overwrites. See how to use this parameter in Ending Element Parameters. This parameter is enabled when Ending element is set to Offset from middle or Offset from last. To overwrite a submatrix, follow these steps: Set the Overwrite parameter to Submatrix.
Specify the overwriting values as described in Specifying the Overwriting Values. Specify which rows and columns of the input matrix are contained in the submatrix that you want to overwrite by setting the Row span parameter to one of the following options and the Column span to the analogous column-related options: All rows — The submatrix contains all rows of the input matrix. One row — The submatrix contains only one row of the input matrix, which you must specify in the Row parameter, as described in the following table. Range of rows — The submatrix contains one or more rows of the input, which you must specify in the Starting Row and Ending row parameters, as described in the following tables. When you set Row span to One row or Range of rows, you need to further specify the row(s) contained in the submatrix by setting the Row or Starting row and Ending row parameters. Likewise, when you set Column span to One column or Range of columns, you must further specify the column(s) contained in the submatrix by setting the Column or Starting column and Ending column parameters. For descriptions of the settings for these parameters, see the following tables. 
Settings for Row, Column, Starting Row, and Starting Column Parameters (the first row or column of the submatrix; the only row when Row span = One row, and the only column when Column span = One column):

First: the first row of the input / the first column of the input.

Index: the input row specified in the Row index parameter / the input column specified in the Column index parameter.

Offset from last: the input row with the index M - rowOffset, where M is the number of input rows and rowOffset is the value of the Row offset or Starting row offset parameter / the input column with the index N - colOffset, where N is the number of input columns and colOffset is the value of the Column offset or Starting column offset parameter.

Last: the last row of the input / the last column of the input.

Offset from middle: the input row with the index floor(M/2 + 1 - rowOffset) / the input column with the index floor(N/2 + 1 - colOffset).

Middle: the input row with the index floor(M/2 + 1), where M is the number of input rows / the input column with the index floor(N/2 + 1), where N is the number of input columns.

Settings for Ending Row and Ending Column Parameters (the last row or column of the submatrix):

Index: the input row specified in the Ending row index parameter / the input column specified in the Ending column index parameter.

Offset from last: the input row with the index M - rowOffset, where rowOffset is the value of the Ending row offset parameter / the input column with the index N - colOffset, where colOffset is the value of the Ending column offset parameter.

Offset from middle: the input row with the index floor(M/2 + 1 - rowOffset) / the input column with the index floor(N/2 + 1 - colOffset).

For example, to overwrite the lower-right 2-by-3 submatrix of a 3-by-5 input matrix with all zeros, enter the following set of parameters:

Overwrite = Submatrix
Source of overwriting value(s) = Specify via dialog
Overwrite with = 0

The following figure shows the block with the above settings overwriting a portion of a 3-by-5 input matrix.
There are often several possible parameter combinations that select the same submatrix from the input. For example, instead of specifying Last for Ending column, you could select the same submatrix by specifying

Ending column = Index
Ending column index = 5

To overwrite a subdiagonal, follow these steps: Set the Overwrite parameter to Diagonal. Specify the subdiagonal that you want to overwrite by setting the Diagonal span parameter to one of the following options: All elements — Overwrite the entire input diagonal. One element — Overwrite one element in the diagonal, which you must specify in the Element parameter (described below). Range of elements — Overwrite a portion of the input diagonal, which you must specify in the Starting element and Ending element parameters, as described in the following table. When you set Diagonal span to One element or Range of elements, you need to further specify which diagonal element(s) to overwrite by setting the Element or Starting element and Ending element parameters. See the following tables.

Settings for Element and Starting Element Parameters (the first element in the subdiagonal; the only element when Diagonal span = One element):

First: the diagonal element in the first row of the input.

Index: the kth diagonal element, where k is the value of the Element index or Starting element index parameter.

Offset from last: the diagonal element in the row with the index M - offset, where M is the number of input rows and offset is the value of the Element offset or Starting element offset parameter.

Last: the diagonal element in the last row of the input.

Offset from middle: the diagonal element in the input row with the index floor(M/2 + 1 - offset).

Settings for Ending Element Parameter (the last element in the subdiagonal):

Index: the kth diagonal element, where k is the value of the Ending element index parameter.

Offset from last: the diagonal element in the row with the index M - offset, where M is the number of input rows and offset is the value of the Ending element offset parameter.
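Outside Simulink, the same submatrix overwrite is ordinary index assignment. The following plain-Python sketch (illustrative only, not MathWorks code; `overwrite_submatrix` is a hypothetical helper, and the 1-based indices mimic the block dialog) reproduces the example of overwriting the lower-right 2-by-3 submatrix of a 3-by-5 matrix with zeros:

```python
# Emulating the Overwrite Values block's "Submatrix" mode in plain Python.
def overwrite_submatrix(matrix, row_start, row_end, col_start, col_end, value):
    """Overwrite a contiguous submatrix (1-based, inclusive bounds) with a constant."""
    for r in range(row_start - 1, row_end):
        for c in range(col_start - 1, col_end):
            matrix[r][c] = value
    return matrix

A = [
    [1,  2,  3,  4,  5],
    [6,  7,  8,  9, 10],
    [11, 12, 13, 14, 15],
]

# Rows 2..3 and columns 3..5 overwritten with 0, mirroring the dialog example
# (Overwrite = Submatrix, Overwrite with = 0).
overwrite_submatrix(A, 2, 3, 3, 5, 0)
assert A == [
    [1,  2, 3, 4, 5],
    [6,  7, 0, 0, 0],
    [11, 12, 0, 0, 0],
]
# As with the block, the output has the same size as the original input.
```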
See Also: Reshape (Simulink) | Selector (Simulink) | Submatrix (Simulink) | Variable Selector
10–32 UNF (⌀ 4.8 mm, L = 23 cm): 0.40 Hz, 0.0003, no torsional mode excited

M2 (⌀ 2 mm, L = 14 cm): 8.00 Hz, 0.0002, stinger bending occurs

Wire (⌀ 1 mm, L = 7.62 cm): 19.0 Hz, 0.0003, stinger bending occurs

The equation of motion is

m\ddot{x} + c(X)\dot{x} + k(X)x = F_o \sin(\omega t)

and the receptance FRF is

H_k(\omega) = \frac{A_r + jB_r}{\omega_r^2 - \omega_k^2 + j\eta_r\omega_r\omega_k} = R_k + jI_k

where the subscript r indicates resonance, the subscript k a point on the FRF, \omega is the frequency, \eta is the loss factor, which is twice the damping ratio, j = \sqrt{-1}, and R and I are the real and imaginary parts of the FRF, respectively. The receptance, though, requires the outputs to be displacement; however, most dynamic tests use accelerometers to acquire the response of the system. Acceleration could be integrated to get displacement, introducing numerical error. To avoid integration error, the accelerance FRF can be used:

H_k(\omega) = \frac{-\omega_k^2(A_r + jB_r)}{\omega_r^2 - \omega_k^2 + j\eta_r\omega_r\omega_k} = R_k + jI_k

From two points (\omega_1, R_1 + jI_1) and (\omega_2, R_2 + jI_2) on the FRF, the resonance frequency and loss factor are estimated as

\omega_r^2 = \frac{\omega_1^2\omega_2^2\left[\hat{R}_{12}(R_2 - R_1) - \hat{I}_{12}(I_2 - I_1)\right]}{\hat{R}_{12}(\omega_1^2 R_2 - \omega_2^2 R_1) + \hat{I}_{12}(\omega_1^2 I_2 - \omega_2^2 I_1)}

\eta = \frac{\hat{R}_{12}(\Omega_1 I_2 - \Omega_2 I_1) - \hat{I}_{12}(\Omega_1 R_2 - \Omega_2 R_1)}{\mu\left(\hat{R}_{12}^2 + \hat{I}_{12}^2\right)}

with

\hat{R}_{12} = \omega_2 R_1 - \omega_1 R_2, \quad \hat{I}_{12} = \omega_2 I_1 - \omega_1 I_2, \quad \Omega_1 = \omega_1^2(\omega_r^2 - \omega_2^2), \quad \Omega_2 = \omega_2^2(\omega_r^2 - \omega_1^2), \quad \mu = \omega_1\omega_2\omega_r

Include all sensors in the computational models. Even sensors weighing less than 0.05% of the system's mass can cause a 1% change in the natural frequencies by modifying the moment of inertia.
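The receptance-to-accelerance relationship above amounts to multiplication by -ω_k². The sketch below evaluates both FRFs for a single assumed mode (A_r, B_r, ω_r, and η_r are made-up example values, not data from the stinger comparison):

```python
import math

# Single-mode FRF in the modal form above; all modal constants are assumed examples.
A_r, B_r = 1.0, 0.2
omega_r, eta_r = 2 * math.pi * 10.0, 0.02    # assumed 10 Hz resonance, loss factor 0.02

def receptance(omega_k):
    """Displacement/force FRF: (A_r + j B_r) / (w_r^2 - w_k^2 + j eta_r w_r w_k)."""
    return (A_r + 1j * B_r) / (omega_r**2 - omega_k**2 + 1j * eta_r * omega_r * omega_k)

def accelerance(omega_k):
    """Acceleration/force FRF: -w_k^2 times the receptance, so accelerometer data
    can be fit directly, with no numerical integration to displacement."""
    return -omega_k**2 * receptance(omega_k)

w = 2 * math.pi * 9.0                        # an off-resonance evaluation point
assert accelerance(w) == -w**2 * receptance(w)
assert abs(receptance(omega_r)) > abs(receptance(0.5 * omega_r))  # peak near resonance
```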
Reinforcement learning, line by line: Q-learning This is the third post of the blog series Reinforcement learning: line by line. The interactive sketch shows an implementation of the tabular Q-learning algorithm (Watkins, 1989) applied to a simple game, called the Pancakes Gridworld. See this post for more information about the Pancakes Gridworld as well as the notation and foundational concepts required to understand the algorithm. In case you are completely new to reinforcement learning (RL), see here for an informal introduction. The algorithm: tabular Q-learning The agent's goal is to learn an optimal action value function through interaction with the environment. The Pancakes Gridworld can be modeled as a Markov decision process (MDP) with finite state and action sets, and we can thus represent the value function as a set of Q(s, a) -values, one for each state-action pair (s, a) . The agent starts with an arbitrary estimate of each Q -value (in the example shown above, we initialize all Q(s, a) -values to 0 ). In the sketch shown above, the Q-values are represented by four arrows in each cell. In the beginning these arrows are gray, but they become blue (for negative Q-values) or red (for positive Q-values) during the learning process. During the learning process, the agent uses the current value estimates to make decisions and uses the reward signals provided by the environment to continuously improve its value estimates. 0: \text{Loop for each episode: } This is the “outer loop”. This simply indicates that the value function is usually learnt across different episodes. 1: S \leftarrow s_0 Set the agent to a starting state (in our case, this is always the same state s_0 in the bottom-left corner of the grid). 2: \text{Loop until a terminal state is reached:} This is the “inner loop”; it represents one episode (that is, it runs until one of the terminal states 🍄 or 🥞 is reached).
3: \text{Select action } A \text{ from state } S \text{ using } \epsilon\text{-greedy policy} The so-called greedy policy in a state s is to select any value-maximizing action in that state, that is, \pi(s) = \argmax_{a \in \mathcal{A}(s)} \ Q(s, a) . In other words, the agent selects an action that is assumed to be best according to its current beliefs ( Q -values). The \epsilon -greedy policy is a stochastic policy based on the greedy policy. With probability (1-\epsilon) it selects a value-maximizing, greedy action. With probability \epsilon , however, the \epsilon -greedy policy selects an action uniformly at random! Note that the \epsilon -greedy policy contains the greedy policy (for \epsilon = 0 ) and the uniformly random policy (for \epsilon = 1 ) as special cases. Occasionally selecting non-value-maximizing actions is called exploration and is required for the algorithm to converge to an optimal policy; see also the FAQs further below. 4: \text{Take action } A, \text{ observe reward } R \text{ and new state } S' The response of the environment to the action executed by the agent. 5: Q(S, A) \leftarrow (1 - \alpha) \ Q(S, A) + \alpha \ [R + \gamma \max_{a} Q(S', a)] The learning update. This is where all the magic happens! The agent updates the value estimate for Q(S, A) , where S is the current state and A is the action that was selected by the \epsilon -greedy policy. Let's start to unpack the update formula a bit: \underbrace{Q(S, A)}_{\text{new estimate}} \leftarrow \textcolor{red}{(1 - \alpha)} \ \underbrace{Q(S, A)}_{\text{old estimate}} + \textcolor{red}{\alpha} \ \underbrace{[R + \gamma \max_{a} Q(S', a)]}_{\text{Bellman estimate}}. The new estimate of Q(S, A) is a weighted average of the agent's previous estimate of Q(S, A) and the “Bellman estimate” of Q(S, A) (see below). The learning rate \textcolor{red}{\alpha} determines how much weight we give to either of these estimates. 
For example, say our learning rate is \alpha = 0.1 ; then our new value estimate consists of 1 - 0.1 = 90\% of the previous estimate of Q(S, A) and 10\% of the Bellman estimate. If the learning rate is too low (in the extreme case, \alpha = 0 ), we never actually learn anything new because the new estimate always equals the old estimate. If, on the other hand, the learning rate is too high (say, \alpha = 1 ), we “throw away” everything we've previously learned about Q(S, A) during the update. The Bellman estimate \hat{Q}(S, A) = [R + \gamma \max_{a} Q(S', a)] is based on the Bellman equations of the optimal action-value function. In fact, the deterministic Bellman equations for the optimal value function, given by q_{\ast}(s, a) = r + \gamma \ q_{\ast}(s', a_\ast) for all s \in \mathcal{S} and a \in \mathcal{A}(s) (where r is the reward and a_\ast a value-maximizing action in the next state s' ), look almost the same as the Bellman estimator! Intuitively speaking, the Bellman estimator decomposes the expected return \hat{Q}(S, A) into a) the reward that is obtained in the following time step, given by R , and b) the return that is expected after that time step (estimated by the currently best Q -value in the next state, \max_{a} Q(S', a) ). One important difference between these two components is that R is an actual reward that was just observed, whereas the value of the next state is an estimate itself (one that, at the beginning of the learning process, is still the arbitrary initial estimate and thus usually not very accurate). Especially in the beginning, most of the learning progress is thus made for state-action pairs that lead to an important reward R (for example, an action that leads the agent onto the 🥞-cell). Over time this knowledge is then propagated via the “bootstrapping” mechanism that, simply put, relates the value of a state S to the value of the following state S' . You can observe this behavior in the sketch above. The first red arrow (positive value) occurs when the agent hits the 🥞-cell for the first time. 
After that, the red arrows (that is, the knowledge that 🥞 are close) slowly but surely propagate back to the starting state. 6: S \leftarrow S' The “next state” becomes the “current state”. If the current state is now a terminal state, the episode is over and the algorithm jumps back to line 1. If the episode is not over yet, the algorithm jumps back to line 3 to select the next action. (Why) does the agent need to explore? An agent that never explores might get stuck in a sub-optimal policy. Consider the illustrative example shown in the sketch below. You can see the agent's current Q-value estimates as shown by the red arrows. The greedy policy with respect to this value function actually leads the agent to the 🥞-cell, just not in an optimal way (the expected episode return of this policy is 0 ). The exploration parameter in this sketch is set to \epsilon = 0 by default, so if you press the play_arrow button, the agent will always follow the same, inefficient path. Now increase the exploration parameter a little bit to, say, \epsilon = 0.3 (using the slider next to the bottom-right corner of the gridworld). You will see that, after some time, the agent finds a shorter route to the pancakes and updates its action-value estimates accordingly. Once an optimal policy is found (you can check this using the “Greedy policy” button below the sketch), you can dial the agent's exploration behavior back down to see that the agent now yields the optimal episode return of 4 . Why should I care? Isn't the Pancakes Gridworld way too easy? Your friends probably wouldn't be particularly impressed if you showed them that you could “solve” the Pancakes Gridworld. So why should we be impressed by the Q-learning agent? The difference is that the agent initially knows absolutely nothing about its environment. Yes, you've read that correctly; the agent finds itself in the unimaginable situation of not even knowing about the existence of pancakes, let alone their deliciousness. 
Almost worse, the agent initially doesn't even know the concept of a direction, or how the different cells of the gridworld are connected to each other. All the agent ever gets is the information that there are four actions in every state, plus a reward signal after every step. We, on the other hand, know that pancakes are good, and our eyes help us navigate safely to them. The RL agent learns all of this from scratch, which, in my opinion, is quite an impressive achievement. What is the “Bellman error”? You might have noticed that many papers and books write the Q-learning update rule as \underbrace{Q(S, A)}_{\text{new estimate}} \leftarrow \underbrace{Q(S, A)}_{\text{old estimate}} + \alpha \ \underbrace{[\textcolor{#7FB069}{R + \gamma \max_{a} Q(S', a)} - \textcolor{#FA7921}{Q(S, A)}]}_{\text{Bellman error}}. This is of course just a re-arrangement of the formula shown and discussed in the pseudocode above, but it offers another nice interpretation. The Bellman error is simply the difference between the right-hand side (RHS) and the left-hand side (LHS) of the optimal Bellman equation applied to the current value estimates: \underbrace{\textcolor{#FA7921}{Q(S, A)}}_{\text{LHS}} = \underbrace{\textcolor{#7FB069}{R + \gamma \max_{a} Q(S', a)}}_{\text{RHS}}. The Q-learning update is thus simply a way of reducing the Bellman error by adding a small fraction of it to the current estimate of Q(S, A) (the fraction that is added is determined by the learning rate \alpha ). The smaller the Bellman error across all state-action pairs, the closer our value-function estimates are to the optimal action-value function. If you liked this post, please consider following me on Twitter for updates on new blog posts. In the next post we compare Q-learning to the SARSA algorithm.
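To make the pseudocode walked through above concrete, here is a compact tabular Q-learning sketch in Python. It uses a toy one-dimensional corridor as a stand-in for the Pancakes Gridworld; the corridor, reward values, and hyperparameters are my own illustrative choices, not the ones from the interactive sketch. The comments map each step back to the numbered lines of the pseudocode.

```python
import random

def q_learning(n_states=6, actions=(-1, 1), episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.3, seed=0):
    """Tabular Q-learning on a toy corridor MDP: states 0..n_states-1,
    start at state 0, reward +1 on reaching the terminal rightmost state."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):                      # line 0: loop for each episode
        s = 0                                      # line 1: S <- s_0
        while s != n_states - 1:                   # line 2: loop until terminal
            # line 3: epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            # line 4: environment response (deterministic corridor)
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # line 5: the learning update (weighted-average form)
            target = r + gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * target
            s = s2                                 # line 6: S <- S'
    return Q

Q = q_learning()
greedy = [max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(5)]
```

After training, the greedy policy points toward the rewarding terminal state from every non-terminal cell, mirroring how the red arrows in the sketch eventually point toward the 🥞-cell.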
Correspondence to: † hkkim@uri.re.kr

Keywords: Hydrogen industry, Text mining, Semantic network, News big data, Term frequency-inverse document frequency, CONCOR analysis

TF\text{-}IDF = tf(t, d) \times \log\left(\frac{D}{df(t)}\right)
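The TF-IDF weight defined by the formula in the text is straightforward to compute. The snippet below is a minimal Python sketch using raw term counts; the function name and the three-document mini-corpus are illustrative, not from the study.

```python
import math

def tf_idf(term, doc, corpus):
    # TF-IDF = tf(t, d) * log(D / df(t)), with raw counts for tf
    tf = doc.count(term)                     # term frequency tf(t, d)
    df = sum(term in d for d in corpus)      # document frequency df(t)
    return tf * math.log(len(corpus) / df)   # D = number of documents

# Hypothetical mini-corpus of tokenized news headlines
docs = [["hydrogen", "station", "cost"],
        ["hydrogen", "safety"],
        ["solar", "cost"]]
w_safety = tf_idf("safety", docs[1], docs)      # rare term: higher weight
w_hydrogen = tf_idf("hydrogen", docs[0], docs)  # common term: lower weight
```

A term appearing in fewer documents gets a larger log factor, which is exactly why TF-IDF surfaces distinctive keywords rather than ubiquitous ones.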
Consider the sum

1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 + 11 + 12 + 13,

which we can abbreviate as 1 + 2 + \cdots + 13. In summation notation this is written

\sum_{i=1}^{13} i,

where the capital Greek letter sigma (\Sigma) indicates a sum and i is the index of summation. The numbers below and above the \Sigma tell us that i runs from 1 to 13: we add up the terms obtained for i = 1, i = 2, i = 3, and so on, up to 13. That is,

\sum_{i=1}^{13} i = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 + 11 + 12 + 13.

Some further examples:

\sum_{i=1}^{5} i^2 = 1^2 + 2^2 + 3^2 + 4^2 + 5^2,

\sum_{i=n}^{2n} i = n + (n+1) + \cdots + (2n-1) + 2n,

\sum_{i=1}^{n} i^3 = 1^3 + 2^3 + 3^3 + \cdots + n^3.

Recall the natural numbers:

\text{The Natural Numbers} = \mathbb{N} = \{1, 2, 3, \ldots\} = \{1, 1+1, 1+1+1, 1+1+1+1, \ldots\}.

To prove a statement for every natural number by induction, we first verify it for 1 (the base case), and then show that if it holds for n - 1, it must also hold for n (the inductive step). Equivalently, one can assume the statement for n and derive it for n + 1; proving the (n+1)^{\textrm{th}} case from the n^{\mathrm{th}} is the same as proving the n^{\mathrm{th}} case from the (n-1)^{\textrm{th}}.

The sum of the first n natural numbers is

\sum_{i=1}^{n} i = 1 + 2 + \cdots + n = \frac{n(n+1)}{2}.

Proof by induction. For n = 1,

\frac{n(n+1)}{2} = \frac{1(1+1)}{2} = 1,

which equals the sum of the first 1 natural numbers, so the formula holds in the base case. Now assume the formula holds for n - 1, that is,

\sum_{i=1}^{n-1} i = \frac{(n-1)\left(\left(n-1\right)+1\right)}{2} = \frac{(n-1)n}{2}.

Then

\sum_{i=1}^{n} i = \sum_{i=1}^{n-1} i + n = \frac{(n-1)n}{2} + n \qquad \mbox{(by the induction assumption)}
= \frac{n^2 - n}{2} + \frac{2n}{2} = \frac{n^2 - n + 2n}{2} = \frac{n^2 + n}{2} = \frac{n(n+1)}{2},

which is the formula for n. \square

The sum of the squares of the first n natural numbers is

\sum_{i=1}^{n} i^2 = 1^2 + 2^2 + \cdots + n^2 = \frac{n(n+1)(2n+1)}{6}.

Proof by induction. For n = 1,

\frac{n(n+1)(2n+1)}{6} = \frac{1(1+1)(2+1)}{6} = 1,

so the formula holds for n = 1. Now assume the formula holds for n - 1, that is,

\sum_{i=1}^{n-1} i^2 = \frac{(n-1)\left(\left(n-1\right)+1\right)\left(2\left(n-1\right)+1\right)}{6} = \frac{(n-1)n(2n-1)}{6} = \frac{2n^3 - 3n^2 + n}{6}.

Then

\sum_{i=1}^{n} i^2 = \sum_{i=1}^{n-1} i^2 + n^2 = \frac{2n^3 - 3n^2 + n}{6} + \frac{6n^2}{6} \qquad \mbox{(by the induction assumption)}
= \frac{2n^3 + 3n^2 + n}{6} = \frac{n(2n^2 + 3n + 1)}{6} = \frac{n(n+1)(2n+1)}{6},

which is the formula for n. \square

The sum of the cubes of the first n natural numbers is

\sum_{i=1}^{n} i^3 = 1^3 + 2^3 + \cdots + n^3 = \frac{n^2(n+1)^2}{4}.

Proof by induction. For n = 1,

\frac{n^2(n+1)^2}{4} = \frac{1^2(1+1)^2}{4} = 1,

so the formula holds for n = 1. Now assume the formula holds for n - 1, that is,

\sum_{i=1}^{n-1} i^3 = \frac{(n-1)^2\left(\left(n-1\right)+1\right)^2}{4} = \frac{(n-1)^2 n^2}{4}.

Then

\sum_{i=1}^{n} i^3 = \sum_{i=1}^{n-1} i^3 + n^3 = \frac{(n-1)^2 n^2}{4} + \frac{4n^3}{4} \qquad \mbox{(by the induction assumption)}
= \frac{n^4 - 2n^3 + n^2 + 4n^3}{4} = \frac{n^4 + 2n^3 + n^2}{4} = \frac{n^2(n^2 + 2n + 1)}{4} = \frac{n^2(n+1)^2}{4},

which is the formula for n. \square
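The three closed forms established above are easy to sanity-check numerically. The short Python check below is illustrative only; it is evidence, not a proof (induction already settles the general case).

```python
def check_power_sums(n_max=200):
    # Verify the closed forms for the sums of i, i^2, and i^3 up to n_max
    for n in range(1, n_max + 1):
        s1 = sum(i for i in range(1, n + 1))
        s2 = sum(i * i for i in range(1, n + 1))
        s3 = sum(i ** 3 for i in range(1, n + 1))
        assert s1 == n * (n + 1) // 2
        assert s2 == n * (n + 1) * (2 * n + 1) // 6
        assert s3 == n * n * (n + 1) ** 2 // 4
        # Classical observation: the cube sum is the square of the first sum
        assert s3 == s1 ** 2
    return True

check_power_sums()
```

The final assertion records the well-known identity 1^3 + 2^3 + \cdots + n^3 = (1 + 2 + \cdots + n)^2, which follows immediately from comparing the first and third closed forms.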
(Not recommended) Convert Binary to Base-P bi2de is not recommended. Instead, use the bit2int function. For more information, see Compatibility Considerations. d = bi2de(b) d = bi2de(b,flg) d = bi2de(b,p) d = bi2de(b,p,flg) d = bi2de(b) converts a binary row vector b to a decimal integer. d = bi2de(b,flg) converts a binary row vector to a decimal integer, where flg determines the position of the most significant digit. d = bi2de(b,p) converts a base-p row vector b to a decimal integer. d = bi2de(b,p,flg) converts a base-p row vector to a decimal integer, where flg determines the position of the most significant digit. This example shows how to convert binary numbers to decimal integers. It highlights the difference between right- and left-most significant digit positioning. Convert the two binary arrays b1 = [0 1 0 1 1] and b2 = [1 1 1 0] to decimal by using the bi2de function. Treat the leftmost element as the most significant digit. The output of converting b1 corresponds to 0(2^4) + 1(2^3) + 0(2^2) + 1(2^1) + 1(2^0) = 11, and b2 corresponds to 1(2^3) + 1(2^2) + 1(2^1) + 0(2^0) = 14. d1 = bi2de(b1,'left-msb') Treat the rightmost element as the most significant digit. The output of converting b1 corresponds to 0(2^0) + 1(2^1) + 0(2^2) + 1(2^3) + 1(2^4) = 26, and b2 corresponds to 1(2^0) + 1(2^1) + 1(2^2) + 0(2^3) = 7. d1 = bi2de(b1,'right-msb') b — Binary input Binary input, specified as a row vector or matrix of nonnegative integer or logical values. b must represent an integer less than or equal to 2^52. 
Data Types: double | single | logical | integer | fi flg — MSB flag 'right-msb' (default) | 'left-msb' MSB flag, specified as 'right-msb' or 'left-msb'. 'right-msb' –– Indicates the right (or last) column of the binary input, b, as the most significant bit (or highest-order digit). 'left-msb' –– Indicates the left (or first) column of the binary input, b, as the most significant bit (or highest-order digit). p — Base Base of the input b, specified as an integer greater than or equal to 2. d — Decimal output nonnegative integer | vector Decimal output, returned as a nonnegative integer or vector. If b is a matrix, each row represents a base-p number. In this case, the output d is a column vector in which each element is the decimal representation of the corresponding row of b. If the input data type is: an integer data type, and the value of d can be contained in the same integer data type as the input, the output uses the same data type as the input; otherwise, the output data type is chosen to be large enough to contain the decimal output. A double or logical data type, the output data type is double. A single data type, the output data type is single. R2021b: bi2de is not recommended. Use bit2int instead of bi2de. If converting numbers from a non-base-2 representation to decimal, use base2dec. The code in this table shows binary-to-decimal conversion for various inputs using the recommended function. 
Discouraged | Recommended

% Default (left MSB)
n = randi([1 100]); % Number of integers
bpi = 3; % Bits per integer
x = randi([0,1],n*bpi,1);
Discouraged: y = bi2de(reshape(x,bpi,[])','left-msb')
Recommended: y = bit2int(x,bpi)

% Default row vector (or matrix) input
Discouraged: bi2de(x)
Recommended: bit2int(x',length(x),0)'

% Right MSB, logical input
x = logical(randi([0,1],n*bpi,1));
Discouraged: y = bi2de(reshape(x,bpi,[])','right-msb')
Recommended: y = bit2int(x,bpi,false)

% Right MSB, signed input, single input
x = randi([0,1],n*bpi,1,'single');
Discouraged: y = bi2de(reshape(x,bpi,[])','right-msb'); N = 2^bpi; y = y - (y>=N/2)*N
Recommended: y = bit2int(x,bpi,false);

See also: bit2int | int2bit
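The left-/right-MSB distinction carries over directly to other languages. The Python sketch below is my own helper, not part of MATLAB; it reproduces the example from the text, where the bits [0 1 0 1 1] give 11 with the leftmost element as the MSB and 26 with the rightmost element as the MSB.

```python
def bits_to_int(bits, msb_first=True):
    # Interpret a sequence of base-2 digits as a nonnegative integer.
    # msb_first=True  ~ MATLAB's 'left-msb'  (first element is the MSB)
    # msb_first=False ~ MATLAB's 'right-msb' (last element is the MSB)
    if not msb_first:
        bits = list(bits)[::-1]
    value = 0
    for b in bits:
        value = 2 * value + b   # Horner's scheme, one digit at a time
    return value

b1 = [0, 1, 0, 1, 1]
left = bits_to_int(b1, msb_first=True)    # 0*16 + 1*8 + 0*4 + 1*2 + 1*1 = 11
right = bits_to_int(b1, msb_first=False)  # 0*1 + 1*2 + 0*4 + 1*8 + 1*16 = 26
```

Accumulating with `value = 2 * value + b` avoids computing explicit powers of two, the same trick that generalizes to any base p by replacing the 2.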