Improvements to the map Command
Quick Help for Maple Objects

You can use the seq command to generate numeric sequences without specifying an index variable. The one-parameter form is:

seq(1..4);
    1, 2, 3, 4

Also, you can now specify the size of the increment to use with seq.

seq(1..4, 1/2);
    1, 3/2, 2, 5/2, 3, 7/2, 4

seq(f(i), i = 1..4, 1/2);
    f(1), f(3/2), f(2), f(5/2), f(3), f(7/2), f(4)

In Maple 10, the map command supports additional arguments in mappings over procedures that have special evaluation rules.
map(eval, [ 1 - x, 2 + x^2, sqrt(x) + 3 ], 'x'=0);
    [1, 2, 3]

map(evalf, [ Pi, exp(1) ], 20);
    [3.1415926535897932385, 2.7182818284590452354]

Similar to map2, which maps over the second argument, map[n] allows mapping over the nth argument.

map[5](`+`, 1, 1, 1, 1, [ 10, 11, 12 ], 1, 1);
    [16, 17, 18]

You can perform map operations using hardware floats.

M := LinearAlgebra[RandomMatrix](10^3, outputoptions = [ datatype = float[ 8 ] ]);
map[evalhf](`-`, M);

Mapping over Matrix, Array, and Vector datatypes can be done in-place.

M := Matrix([ [ 1, 2 ], [ 3, 4 ] ]);

    M := [ 1  2 ]
         [ 3  4 ]

map[inplace](`*`, M, 2);

    [ 2  4 ]
    [ 6  8 ]

M[ 1, 1 ];
    2

The Describe command generates a brief description for many procedures, modules, and other Maple objects based on information stored in them. This is a quick alternative to reading entire help pages.
Describe(Sockets);

# package for connection oriented TCP/IP sockets
module Sockets:
    # open a client socket
    # close a socket connection
    # test for data availability on a socket
    # read text from a socket connection
    # write text to a socket connection
    # read a line of text from a socket connection
    # read binary data from a socket
    ReadBinary( )
    # write binary data to a socket
    WriteBinary( )
    # service requests, one at a time
    Serve( )
    # internet address translation
    # parse an URL into components
    ParseURL( )
    # map service names to port numbers
    LookupService( )
    # retrieve the name of the local host
    GetHostName( )
    # get the hostname of the local side of a connection
    # get the port number of the local side of a connection
    # get the hostname of the peer of a connection
    GetPeerHost( )
    # get the port number of the peer of a connection
    GetPeerPort( )
    # return the process ID
    GetProcessID( )
    # retrieve data about the local host
    HostInfo( )
    # determine the status of all open socket connections
    Status( )
    # configure a socket connection

For information on updates related to procedures and other aspects of Maple programming, see Programming Facilities Changes in Maple 10.
Characterization of the Stabilizing PID Controller Region for the Model-Free Time-Delay System

Linlin Ou, Yuan Su, Xuanguang Chen (2013)

For model-free time-delay systems, an analytical method is proposed to characterize the stabilizing PID region based on frequency response data. The characterization uses linear programming, which is computationally efficient. The characteristic parameters of the controller are first extracted from the frequency response data. Subsequently, by employing an extended Hermite-Biehler theorem on quasipolynomials, the stabilizing polygonal region with respect to the integral and derivative gains \left({k}_{i},{k}_{d}\right) is described for a given proportional gain {k}_{p} in terms of the frequency response data. Simultaneously, the allowable stabilizing range of {k}_{p} is derived, so that the complete stabilizing set of the PID controller can be obtained easily. The proposed method avoids the complexity and inaccuracy of model identification and thus provides a convenient approach for the design and tuning of PID controllers in practice. The advantages of the proposed algorithm are that the boundaries of the stabilizing region consist of several simple straight lines, that the complete stabilizing set can be obtained efficiently, and that the method can be implemented automatically on computers.

Linlin Ou, Yuan Su, Xuanguang Chen. "Characterization of the Stabilizing PID Controller Region for the Model-Free Time-Delay System." Journal of Applied Mathematics 2013(SI17), 1-9 (2013). https://doi.org/10.1155/2013/926430
Glass batch calculation

Glass batch calculation or glass batching is used to determine the correct mix of raw materials (batch) for a glass melt. The raw materials mixture for glass melting is termed "batch". The batch must be measured properly to achieve a given, desired glass formulation. This batch calculation is based on the common linear regression equation:

{\displaystyle N_{B}=(B^{T}\cdot B)^{-1}\cdot B^{T}\cdot N_{G}}

with NB and NG being the one-column molarity matrices of the batch and glass components respectively, and B being the batching matrix.[1][2][3] The symbol "T" stands for the matrix transpose operation, "−1" indicates matrix inversion, and the sign "·" means matrix multiplication. From the molarity matrices N, percentages by weight (wt%) can easily be derived using the appropriate molar masses.

An example batch calculation may be demonstrated here. The desired glass composition in wt% is: 67 SiO2, 12 Na2O, 10 CaO, 5 Al2O3, 1 K2O, 2 MgO, 3 B2O3, and the raw materials used are sand, trona, lime, albite, orthoclase, dolomite, and borax. The formulas and molar masses of the glass and batch components are listed in the following table:

Glass component | Desired concentration, wt% | Molar mass, g/mol | Batch component | Formula             | Molar mass, g/mol
SiO2            | 67                         | 60.0843           | Sand            | SiO2                | 60.0843
Na2O            | 12                         | 61.9789           | Trona           | Na3H(CO3)2·2H2O     | 226.0262
CaO             | 10                         | 56.0774           | Lime            | CaCO3               | 100.0872
Al2O3           | 5                          | 101.9613          | Albite          | Na2O·Al2O3·6SiO2    | 524.4460
K2O             | 1                          | 94.1960           | Orthoclase      | K2O·Al2O3·6SiO2     | 556.6631
MgO             | 2                          | 40.3044           | Dolomite        | MgCa(CO3)2          | 184.4014
B2O3            | 3                          | 69.6202           | Borax           | Na2B4O7·10H2O       | 381.3721

The batching matrix B indicates the relation between the molarity in the batch (columns) and in the glass (rows). For example, the batch component sand adds 1 mol SiO2 to the glass; therefore, the intersection of the first column and row shows "1". Trona adds 1.5 mol Na2O to the glass; albite adds 6 mol SiO2, 1 mol Na2O, and 1 mol Al2O3; and so on.
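The regression formula can be sketched numerically in pure Python (an illustrative sketch, not part of any batch-calculation tool: the transpose/matmul/solve helpers are ours, and B and the glass data are taken from the worked example that follows):

```python
# Sketch of N_B = (B^T·B)^(-1)·B^T·N_G, solved via the normal equations
# with a generic Gaussian elimination (illustration only).

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve(A, y):
    """Solve A x = y by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Batching matrix: rows = glass oxides (SiO2, Na2O, CaO, Al2O3, K2O, MgO, B2O3),
# columns = sand, trona, lime, albite, orthoclase, dolomite, borax.
B = [[1, 0,   0, 6, 6, 0, 0],
     [0, 1.5, 0, 1, 0, 0, 1],
     [0, 0,   1, 0, 0, 1, 0],
     [0, 0,   0, 1, 1, 0, 0],
     [0, 0,   0, 0, 1, 0, 0],
     [0, 0,   0, 0, 0, 1, 0],
     [0, 0,   0, 0, 0, 0, 2]]

wt = [67, 12, 10, 5, 1, 2, 3]                     # desired glass composition, wt%
mol = [60.0843, 61.9789, 56.0774, 101.9613,
       94.1960, 40.3044, 69.6202]                 # glass-component molar masses
N_G = [w / m for w, m in zip(wt, mol)]            # glass molarities

Bt = transpose(B)
BtN_G = [sum(a * g for a, g in zip(row, N_G)) for row in Bt]
N_B = solve(matmul(Bt, B), BtN_G)                 # batch molarities
```

Since this particular B is square and invertible, the least-squares formula reduces to solving B·N_B = N_G exactly; the normal-equations form is kept to match the equation in the text.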
For the example given above, the complete batching matrix is listed below. The molarity matrix NG of the glass is simply determined by dividing the desired wt% concentrations by the appropriate molar masses, e.g., for SiO2: 67/60.0843 = 1.1151.

{\displaystyle \mathbf {B} ={\begin{bmatrix}1&0&0&6&6&0&0\\0&1.5&0&1&0&0&1\\0&0&1&0&0&1&0\\0&0&0&1&1&0&0\\0&0&0&0&1&0&0\\0&0&0&0&0&1&0\\0&0&0&0&0&0&2\end{bmatrix}}}

{\displaystyle \mathbf {N_{G}} ={\begin{bmatrix}1.1151\\0.1936\\0.1783\\0.0490\\0.0106\\0.0496\\0.0431\end{bmatrix}}}

The resulting molarity matrix of the batch, NB, is given here. After multiplication with the appropriate molar masses of the batch ingredients one obtains the batch mass fraction matrix MB:

{\displaystyle \mathbf {N_{B}} ={\begin{bmatrix}0.82087\\0.08910\\0.12870\\0.03842\\0.01062\\0.04962\\0.02155\end{bmatrix}}}

{\displaystyle \mathbf {M_{B}} ={\begin{bmatrix}49.321\\20.138\\12.881\\20.150\\5.910\\9.150\\8.217\end{bmatrix}}}

{\displaystyle \mathbf {M_{B}} (100\%\ \mathrm{normalized})={\begin{bmatrix}39.216\\16.012\\10.242\\16.022\\4.699\\7.276\\6.533\end{bmatrix}}}

The matrix MB, normalized to sum to 100% as seen above, contains the final batch composition in wt%: 39.216 sand, 16.012 trona, 10.242 lime, 16.022 albite, 4.699 orthoclase, 7.276 dolomite, 6.533 borax. If this batch is melted to a glass, the desired composition given above is obtained.[4] During glass melting, carbon dioxide (from trona, lime, dolomite) and water (from trona, borax) evaporate. A simple glass batch calculation can be found at the website of the University of Washington.[5]

Advanced batch calculation by optimization

If the number of glass and batch components is not equal, if it is impossible to exactly obtain the desired glass composition using the selected batch ingredients, or if the matrix equation is not solvable for other reasons (e.g., the rows/columns are linearly dependent), the batch composition must be determined by optimization techniques.

^ Y. B. Peng, Xingye Lei, D. E. Day: "A computer programme for optimising batch calculations"; Glass Technology, vol. 32, 1991, no. 4, pp. 123–130.
^ M. M. Khaimovich, K. Yu. Subbotin: "Automation of Batch Formula Calculation"; Glass and Ceramics, vol. 62, no. 3-4, March 2005, pp. 109–112.
^ A. I. Priven: "Calculating batch weights with a programmable microcalculator"; Glass and Ceramics, vol. 43, no. 11, November 1986, pp. 488–491.
^ See also: Free glass batch calculator
^ "Glass Melting". Battelle PNNL MST Handbook. U.S. Department of Energy, Pacific Northwest Laboratory. Archived from the original on 2010-05-05. Retrieved 2008-01-26.
11F22 Relationship to Lie algebras and finite simple groups
11F33 Congruences for modular and p-adic modular forms
11F41 Automorphic forms on \mathrm{GL}\left(2\right)
11F50 Jacobi forms
11F66 Langlands L-functions; one variable Dirichlet series and functional equations
11F67 Special values of automorphic L-series, periods of modular forms, cohomology, modular symbols
11F68 Dirichlet series in several complex variables associated to automorphic forms; Weyl group multiple Dirichlet series
11F85 p-adic theory, local fields

A class of conjectured series representations for 1/\pi. Guillera, Jesús (2006)
A Dirichlet Series Associated to Eisenstein Series of Degree Two. Erik Lippa (1974)
Alex J. Feingold, Igor B. Frenkel (1983)
A new way to get Euler products. Piatetski-Shapiro, I., Rallis, S. (1988)
A Non-Vanishing Theorem for Zeta Functions of GL_n. H. Jacquet, J.A. Shalika (1976)
A note on fields of definition. M. Karel (1982)
A relation between automorphic representations of \mathrm{GL}\left(2\right) and \mathrm{GL}\left(3\right). Stephen Gelbart, Hervé Jacquet (1978)
Hervé Jacquet (2012): Lafforgue has proposed a new approach to the principle of functoriality in a test case, namely, the case of automorphic induction from an idele class character of a quadratic extension. For technical reasons, he considers only the case of function fields and assumes the data is unramified. In this paper, we show that his method applies without these restrictions. The ground field is a number field or a function field and the data may be ramified.
A remark on my paper "On the Saito-Kurokawa lifting". I.I. Piatetski-Shapiro (1984)
Action of Hecke operators on products of Igusa theta constants with rational characteristics. Anatoli N. Andrianov, Fedor A. Andrianov (2004)
Algebraische Eigenschaften der lokalen Ringe in den Spitzen der Hilbertschen Modulgruppen [Algebraic properties of the local rings at the cusps of the Hilbert modular groups]. Reinhardt Kiehl, Eberhard Freitag (1974)
An analogue of the Weil representation for G2. Gordan Savin (1993)
An Arithmetic of Hermitian Modular Forms of Degree Two. Hisashi Kojima (1982)
An Elementary Approach to Hecke Operators. Z. Gong, L. Grenié (2011)

Given a representation \pi of a local unitary group G and another local unitary group H, either the theta correspondence provides a representation {\theta}_{H}\left(\pi\right) of H, or we set {\theta}_{H}\left(\pi\right)=0. As H varies in a Witt tower, a natural question is: for which H is {\theta}_{H}\left(\pi\right)\ne 0? For a given dimension m there are exactly two isometry classes of unitary spaces, denoted {H}_{m}^{±}. For \epsilon \in \left\{0,1\right\}, let {m}_{\epsilon}^{±}\left(\pi\right) be the minimal m of the same parity as \epsilon such that {\theta}_{{H}_{m}^{±}}\left(\pi\right)\ne 0; then we prove that {m}_{\epsilon}^{+}\left(\pi\right)+{m}_{\epsilon}^{-}\left(\pi\right)\ge 2n+2, where n is the dimension of the unitary space attached to G and \pi is a representation of G.

Analyse spectrale des formes automorphes et séries d'Eisenstein [Spectral analysis of automorphic forms and Eisenstein series]. Gilles Lachaud (1978)
Announcement of errors in "The Eichler Commutation Relation for theta series with spherical harmonics" (Acta Arith. 63 (1993), 233-254). Lynne H. Walling (1994)
Appendix to Orloff: Critical values of certain tensor product L-functions. D. Blasius (1987)
Applications of special functions for the general linear group to number theory. Audrey Terras (1976/1977)
Entropy of never born protein sequences (SpringerPlus)

Grzegorz Szoniec & Maciej J. Ogorzalek

A Never Born Protein is a theoretical protein which does not occur in nature. Why some proteins were selected during evolution and some were not is not known. We applied information theory to find similarities and differences in information content between Never Born and natural proteins. Both block and relative entropies are similar, which means that both kinds of protein consist of strongly random sequences: an artificially generated Never Born Protein sequence is nearly as random as a natural one. The information theory approach suggests that protein selection during evolution was random/non-deterministic rather than deterministic; natural proteins show no noticeable unique features in the information theory sense.

Existing and known proteins are only a small subset of all possible sequences. Why were only some proteins selected during evolution? The reason is not known, but two possibilities are considered: deterministic and random selection. To investigate theoretical sequences of amino acids, the term Never Born Protein was introduced (Chiarabelli et al. 2006). Since 2006 only a few papers about them have been published. The most significant research has shown that 20% of them fold (i.e., reach a stable and functional 3D structure) in laboratory conditions (Chiarabelli et al. 2006), and a tool for generating sequences with no similarity to natural proteins has been developed: Random Blast (Evangelista et al. 2007). The high folding ratio was a positive surprise and undermined the opinion that existing proteins are the only stable, folding sequences: about 1 out of 5 absolutely randomly generated proteins was potentially useful for living organisms. The authors did not expect so high a percentage; furthermore, their results came with doubts about the correctness of their methodology.
Up to now this has been the most important discovery in Never Born Protein science. The question of the origin of proteins is still open. There are papers arguing that natural and synthetic (random) proteins are not different (Jacob 1969; Luisi 2003) or only slightly different from each other (Weiss et al. 2000); there are also papers arguing that these two groups of proteins are significantly different and that protein selection during evolution was a driven process (Munteanu et al. 2008; De Lucrezia et al. 2012). The results depend on methodology (which is not discussed here because of the limited scope of this report); nevertheless, another open question is which approach should be considered more correct and reliable than the others.

Information theory (Shannon 1948) is applied in almost every branch of science. The information quantities used here, Shannon entropy H (Shannon 1948) and Kullback–Leibler divergence (or relative entropy) DKL (Kullback & Leibler 1951), are defined as

H = -\sum_{i} p_{i} \log(p_{i})

D_{KL}(P||Q) = \sum_{i} p_{i} \log(p_{i}/q_{i})

where P and Q are probability densities, P = {p_i} and Q = {q_i}. Shannon entropy is a measure of uncertainty in an outcome with probability p_i, and relative entropy is a measure of similarity between two probability densities (it is not a true metric, i.e., DKL(P||Q) is not equal to DKL(Q||P) except when P = Q). In protein science, the information properties of natural proteins were comprehensively studied (Strait & Dewey 1996) and further intensively developed, e.g., (Dewey 1996). In this short report we apply some of those ideas to Never Born Proteins and we show that protein picking during evolution is closer to a random/non-deterministic process. Our approach is strictly theoretical, as in (Strait & Dewey 1996).
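For concreteness, the two definitions above can be computed directly from first principles (a minimal Python sketch, using log base 2 so the values come out in bits; the example distributions are invented):

```python
import math

def shannon_entropy(p):
    """H = -sum_i p_i log2(p_i), in bits; zero-probability terms contribute 0."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """D_KL(P||Q) = sum_i p_i log2(p_i / q_i), in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy distributions over a 4-symbol alphabet.
uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
```

For the uniform distribution the entropy is exactly 2 bits, and D_KL(P||P) = 0 for any P, matching the remark in the text that the divergence vanishes only when P = Q.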
Natural proteins were randomly picked from the UniProt database (The UniProt Consortium 2012). Never Born Proteins were generated with the Random Blast tool (in both cases the number of sequences is 1250, around 400,000 amino acids in total; sequence lengths vary from 61 to around 1700 amino acids). Shannon entropies were calculated not only for single amino acids but also for blocks (block entropy (Papadimitriou et al. 2010)) of length from 2 to 20. Block entropy was calculated identically to Shannon entropy, but the probabilities referred to amino acid subsequences (blocks) of a specific length. Probabilities were normalized over occurrences in all sequences (the data and the scripts are available at http://www.cyfronet.pl/~myszonie/ent). The results are presented in Figure 1 and Table 1.

Figure 1: Block entropies H_n (in bits) vs. block lengths n of never born (blue) and natural (purple) proteins.
Table 1: Relative entropies between never born and natural proteins.

The plot (Figure 1) shows that the values of entropy for both groups of proteins are very close. This means that the uncertainty, in other words the number of possible amino acid combinations, is almost the same, indicating that natural protein sequences are as random as Never Born Protein sequences. Moreover, the relative entropy values show that encoding a natural protein sequence using the probability density of Never Born Proteins requires only a small excess of information (and vice versa). Summing up, protein selection during evolution is, in an information theory approach, closer to a random process than a deterministic one, which is in line with (Jacob 1969; Luisi 2003; Weiss et al. 2000). There remains a doubt whether that small difference does not play a key role.

Chiarabelli C: On the folding frequency in a totally random library of de novo proteins obtained by phage display. Chem Biodivers 2006, 3: 840-859. 10.1002/cbdv.200690088
De Lucrezia D: Do natural proteins differ from random sequences polypeptides?
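The block-entropy calculation just described can be sketched as follows (an illustrative toy, not the authors' scripts, which are at the URL given above; the two short sequences are made-up placeholders):

```python
import math
from collections import Counter

def block_entropy(sequences, n):
    """Shannon entropy (in bits) of length-n blocks, with block probabilities
    normalized over occurrences pooled across all sequences."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - n + 1):
            counts[seq[i:i + n]] += 1
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy "protein" sequences over the 20-letter amino acid alphabet.
seqs = ["ACDEFGHIKLMNPQRSTVWY", "AAACCCDDD"]
h1 = block_entropy(seqs, 1)   # single amino acids
h2 = block_entropy(seqs, 2)   # dipeptide blocks
```

With n = 1 this reduces to the ordinary Shannon entropy of the amino acid frequencies; the paper computes the same quantity for n up to 20.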
Natural vs. random proteins classification using an evolutionary neural network. PLoS One 2012, 7: e36634. 10.1371/journal.pone.0036634
Dewey TG: Algorithmic complexity of a protein. Phys Rev E 1996, 54: R39-R41. 10.1103/PhysRevE.54.R39
Evangelista G: Randomblast: a tool to generate random never born protein sequences. Bio-Algorithms and Med-Systems 2007, 5: 27-31.
Jacob M: On symmetry and function of biological systems. Edited by: Engstrom A, Strondberg B. Wiley Interscience, New York; 1969.
Kullback S, Leibler RA: On information and sufficiency. Ann Math Statist 1951, 22: 79-86. 10.1214/aoms/1177729694
Luisi PL: Contingency and determinism. Phil Trans R Soc A 2003, 361: 1141-1147. 10.1098/rsta.2003.1189
Munteanu CR: Natural/random protein classification models based on star network topological indices. J Theor Biol 2008, 254: 775-783. 10.1016/j.jtbi.2008.07.018
Papadimitriou C: Entropy analysis of natural language written texts. Physica A 2010, 389: 3260-3266. 10.1016/j.physa.2010.03.038
Shannon CE: A mathematical theory of communication. Bell Syst Tech J 1948, 27: 379-423.
Strait BJ, Dewey TG: The Shannon information entropy of protein sequences. Biophys J 1996, 71: 148-155. 10.1016/S0006-3495(96)79210-X
The UniProt Consortium: Reorganizing the protein space at the Universal Protein Resource (UniProt). Nucleic Acids Res 2012, 40: D71-D75.
Weiss O: Information content of protein sequences. J Theor Biol 2000, 206: 379-386.

The author would like to acknowledge Dr. Marcin Krol (Jagiellonian University, Poland) for the discussion and Prof. Fabio Polticelli (Roma Tre University, Italy) for the Random Blast binaries.

Information Technology Department, Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, 4 Reymonta Street, Cracow, 30-059, Poland
Grzegorz Szoniec & Maciej J. Ogorzalek

Correspondence to Grzegorz Szoniec.

GS designed and carried out the research and drafted the manuscript.
MJO supervised the research. Both authors read and approved the final manuscript.

Szoniec, G., Ogorzalek, M.J. Entropy of never born protein sequences. SpringerPlus 2, 200 (2013). https://doi.org/10.1186/2193-1801-2-200

Keywords: Never born protein; Block entropy
68R01 General
68R05 Combinatorics

Jünger, Michael, Mutzel, Petra (1997)
Francesco M. Malvestuto (2012): Decomposable (probabilistic) models are log-linear models generated by acyclic hypergraphs, and a number of nice properties enjoyed by them are known. In many applications the following selection problem naturally arises: given a probability distribution p over a finite set V of n discrete variables and a positive integer k, find a decomposable model with tree-width k that best fits p. If ℋ is the generating hypergraph of a decomposable model and {p}_{ℋ} is the estimate of p under the model, we can measure...
A bound for the Steiner tree problem in graphs. Ján Plesník (1981)
A computational perspective on network coding. Guo, Qin, Luo, Mingxing, Li, Lixiang, Yang, Yixian (2010)
A fast multi-scale method for drawing large graphs. Harel, David, Koren, Yehuda (2002)
Toke M. Carlsen, Søren Eilers (2007): We present an algorithm which for any aperiodic and primitive substitution outputs a finite representation of each special word in the shift space associated to that substitution, and determines when such representations are equivalent under orbit and shift tail equivalence. The algorithm has been implemented and applied in the study of certain new invariants for flow equivalence of substitutional dynamical systems.
A graph method for Markov models solving. Jaroslav Markl (1993)
A Left-First Search Algorithm for Planar Graphs. J. Pach, H. de Fraysseix, P.O. de Mendez (1995)
A linear algorithm for bend-optimal orthogonal drawings of triconnected cubic plane graphs. Rahman, Md. Saidur, Nakano, Shin-ichi, Nishizeki, Takao (1999)
A logical model of HCP. Plotnikov, Anatoly D. (2001)
A multilevel algorithm for force-directed graph-drawing. Walshaw, Chris (2003)
Safro, Ilya, Ron, Dorit, Brandt, Achi (2006)
A new algorithm for finding minimal cycle-breaking sets of turns in a graph. Levitin, Lev, Karpovsky, Mark, Mustafa, Mehmet, Zakrevski, Lev (2006)
H. J. Olivié (1982)
A new two-variable generalization of the chromatic polynomial. Dohmen, Klaus, Poenitz, André, Tittmann, Peter (2003). Discrete Mathematics and Theoretical Computer Science. DMTCS [electronic only]
A note on complexity of algorithmic nets without cycles. Karel Čulík (1971)
Janvresse, É., de la Rue, T., Velenik, Y. (2006)
A note on outerplanarity of product graphs. P. K. Jha, G. Slutzki (1991)
A note on rectilinearity and angular resolution. Bodlaender, Hans L., Tel, Gerard (2004)
transprob — Estimate transition probabilities from credit ratings data (MATLAB)

[transMat,sampleTotals,idTotals] = transprob(data)
[transMat,sampleTotals,idTotals] = transprob(___,Name,Value)

[transMat,sampleTotals,idTotals] = transprob(data) constructs a transition matrix from historical data of credit ratings. [transMat,sampleTotals,idTotals] = transprob(___,Name,Value) adds optional name-value pair arguments.

Construct a Transition Matrix From a Table of Historical Data of Credit Ratings

Using the historical credit rating table as input data from Data_TransProb.mat, display the first ten rows and compute the transition matrix:

        ID               Date          Rating
    ____________    _______________    _______
    {'00010283'}    {'10-Nov-1984'}    {'CCC'}
    {'00010283'}    {'12-May-1986'}    {'B'  }
    {'00010283'}    {'29-Jun-1988'}    {'CCC'}
    {'00010283'}    {'12-Dec-1991'}    {'D'  }
    {'00013326'}    {'09-Feb-1985'}    {'A'  }
    {'00013326'}    {'24-Feb-1994'}    {'AA' }
    {'00013326'}    {'10-Nov-2000'}    {'BBB'}
    {'00014413'}    {'23-Dec-1982'}    {'B'  }
    {'00014413'}    {'20-Apr-1988'}    {'BB' }
    {'00014413'}    {'16-Jan-1998'}    {'B'  }

transMat = 8×8

Using the historical credit rating table input data from Data_TransProb.mat, compute the transition matrix using the cohort algorithm:

% Estimate transition probabilities with 'cohort' algorithm
transMatCoh = transprob(data,'algorithm','cohort')

transMatCoh = 8×8

Using the historical credit rating data with ratings investment grade ('IG'), speculative grade ('SG'), and default ('D') from Data_TransProb.mat, display the first ten rows and compute the transition matrix:

dataIGSG(1:10,:)

    {'00011253'}    {'04-Apr-1983'}    {'IG'}
    {'00012751'}    {'17-Feb-1985'}    {'SG'}
    {'00012751'}    {'19-May-1986'}    {'D' }
    {'00014690'}    {'17-Jan-1983'}    {'IG'}
    {'00012144'}    {'21-Nov-1984'}    {'IG'}
    {'00012144'}    {'25-Mar-1992'}    {'SG'}
    {'00012144'}    {'07-May-1994'}    {'IG'}
    {'00012144'}    {'23-Jan-2000'}    {'SG'}
    {'00012144'}    {'20-Aug-2001'}    {'IG'}
    {'00012937'}    {'07-Feb-1984'}    {'IG'}

transMatIGSG = transprob(dataIGSG,'labels',{'IG','SG','D'})

transMatIGSG = 3×3

Using the historical credit rating data with numeric ratings for investment grade (1), speculative grade (2), and default (3) from Data_TransProb.mat, display the first ten rows and compute the transition matrix:

dataIGSGnum(1:10,:)

    {'00011253'}    {'04-Apr-1983'}    1
    {'00012751'}    {'17-Feb-1985'}    2
    {'00012751'}    {'19-May-1986'}    3
    {'00014690'}    {'17-Jan-1983'}    1
    {'00012144'}    {'21-Nov-1984'}    1
    {'00012144'}    {'25-Mar-1992'}    2
    {'00012144'}    {'20-Aug-2001'}    1

transMatIGSGnum = transprob(dataIGSGnum,'labels',{1,2,3})

transMatIGSGnum = 3×3

Create a Transition Matrix Using a Cell Array for Historical Data of Credit Ratings

Using a MATLAB® table containing the historical credit rating cell array input data (dataCellFormat) from Data_TransProb.mat, estimate the transition probabilities with default settings:

transMat = transprob(dataCellFormat)

Using the historical credit rating cell array input data (dataCellFormat), compute the transition matrix using the cohort algorithm:

transMatCoh = transprob(dataCellFormat,'algorithm','cohort')

data — Credit migration data
table | cell array of character vectors | preprocessed data structure

When using transprob to estimate transition probabilities given credit ratings historical data (that is, credit migration data), the data input can be one of the following:

An nRecords-by-3 MATLAB® table containing the historical credit ratings data of the form:

    ID            Date             Rating
    __________    _____________    ______
    '00010283'    '10-Nov-1984'    'CCC'
    '00010283'    '12-May-1986'    'B'
    '00010283'    '29-Jun-1988'    'CCC'
    '00010283'    '12-Dec-1991'    'D'
    '00013326'    '09-Feb-1985'    'A'
    '00013326'    '24-Feb-1994'    'AA'
    '00013326'    '10-Nov-2000'    'BBB'
    '00014413'    '23-Dec-1982'    'B'

where each row contains an ID (column 1), a date (column 2), and a credit rating (column 3). Column 3 is the rating assigned to the corresponding ID on the corresponding date.
All information corresponding to the same ID must be stored in contiguous rows. Sorting this information by date is not required, but is recommended for efficiency. When using a MATLAB table input, the names of the columns are irrelevant, but the ID, date, and rating information are assumed to be in the first, second, and third columns, respectively. Also, when using a table input, the first and third columns can be categorical arrays, and the second can be a datetime array.

An nRecords-by-3 cell array of character vectors containing the historical credit ratings data of the same form, where each row contains an ID (column 1), a date (column 2), and a credit rating (column 3). Column 3 is the rating assigned to the corresponding ID on the corresponding date. All information corresponding to the same ID must be stored in contiguous rows. Sorting this information by date is not required, but is recommended for efficiency. IDs, dates, and ratings are stored in character vector format, but they can also be entered in numeric format.

A preprocessed data structure obtained using transprobprep. This data structure contains the fields 'idStart', 'numericDates', 'numericRatings', and 'ratingsLabels'.

Data Types: table | cell | struct

algorithm — Estimation algorithm
'duration' (default) | character vector with value 'duration' or 'cohort'

Estimation algorithm, specified as the comma-separated pair consisting of 'algorithm' and a character vector with a value of 'duration' or 'cohort'.
Example: transMat = transprob(data,'algorithm','cohort')

endDate — End date of the estimation time window
latest date in data (default) | character vector | serial date number | datetime

End date of the estimation time window, specified as the comma-separated pair consisting of 'endDate' and a date character vector, serial date number, or datetime object.
The endDate cannot be a date before the startDate.

Data Types: char | double | datetime

When the input argument data is a preprocessed data structure obtained from a previous call to transprobprep, the optional input 'labels' is unused, because the labels in the 'ratingsLabels' field of transprobprep take priority.

startDate — Start date of the estimation time window
earliest date in data (default) | character vector | serial date number | datetime

Start date of the estimation time window, specified as the comma-separated pair consisting of 'startDate' and a date character vector, serial date number, or datetime object.

excludeLabels — Label that is excluded from the transition probability computation
'' (do not exclude any label) (default) | numeric | character vector | string

Label that is excluded from the transition probability computation, specified as the comma-separated pair consisting of 'excludeLabels' and a character vector, string, or numeric rating. If multiple labels are to be excluded, 'excludeLabels' must be a cell array containing all of the labels for exclusion. The type of the labels given in 'excludeLabels' must be consistent with the data type specified in the labels input. The list of labels to exclude may or may not be specified in labels.

transMat — Matrix of transition probabilities
Matrix of transition probabilities in percent, returned as an nRatings-by-nRatings transition matrix.

sampleTotals — Sample totals, returned as a structure with the following fields:

totalsVec — A vector of size 1-by-nRatings.
totalsMat — A matrix of size nRatings-by-nRatings.

For the 'duration' algorithm, totalsMat(i,j) contains the total transitions observed out of rating i into rating j (all the diagonal elements are zero). The total time spent in rating i is stored in totalsVec(i).
For example, if there are three rating categories, Investment Grade (IG), Speculative Grade (SG), and Default (D), and the following information: idTotals — IDs totals IDs totals, returned as a struct array of size nIDs-by-1, where nIDs is the number of distinct IDs in column 1 of data when this is a table or cell array or, equivalently, equal to the length of the idStart field minus 1 when data is a preprocessed data structure from transprobprep. For each ID in the sample, idTotals contains one structure with the following fields: totalsVec — A sparse vector of size 1-by-nRatings. totalsMat — A sparse matrix of size nRatings-by-nRatings. These fields contain the same information described for the output sampleTotals, but at an ID level. For example, for 'duration', idTotals(k).totalsVec contains the total time that the k-th company spent on each rating. The algorithm first determines a sequence t0, ..., tK of snapshot dates. The elapsed time, in years, between two consecutive snapshot dates t(k-1) and tk is equal to 1/ns, where ns is the number of snapshots per year. These K + 1 dates determine K transition periods. The algorithm computes N_i^n, the number of transition periods in which obligor n starts at rating i. These are added up over all obligors to get N_i, the number of obligors in the sample that start a period at rating i. The number of periods in which obligor n starts at rating i and ends at rating j, or migrates from i to j, denoted by N_ij^n, is also computed. These are also added up to get N_ij, the total number of migrations from i to j in the sample. The estimate of the transition probability from i to j in one period, denoted by P_ij, is P_ij = N_ij / N_i. These probabilities are arranged in a one-period transition matrix P0, where the (i,j) entry of P0 is P_ij. If the number of snapshots per year ns is 4 (quarterly snapshots), the probabilities in P0 are 3-month (or 0.25-year) transition probabilities.
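The cohort counting and the one-period estimate P_ij = N_ij / N_i described above can be sketched in a few lines. This is an illustrative Python sketch on toy data (ratings as integer indices and an invented snapshot matrix), not the MathWorks implementation:

```python
import numpy as np

# Ratings are integer indices 0..nRatings-1; each row of `snapshots`
# holds one obligor's rating at the K+1 snapshot dates t0,...,tK.
snapshots = np.array([
    [0, 0, 1, 1, 1],   # obligor 1
    [0, 1, 1, 2, 2],   # obligor 2
    [1, 1, 1, 1, 2],   # obligor 3
])
n_ratings = 3

N_i = np.zeros(n_ratings)                 # periods started at rating i
N_ij = np.zeros((n_ratings, n_ratings))   # migrations i -> j
for row in snapshots:
    for start, end in zip(row[:-1], row[1:]):
        N_i[start] += 1
        N_ij[start, end] += 1

P0 = N_ij / N_i[:, None]   # one-period estimate P_ij = N_ij / N_i
# Per the conversion formula in the documentation, raising P0 to the
# integer power ns*dt (here 4, i.e., yearly probabilities from
# quarterly snapshots) gives the matrix for the requested interval.
P = np.linalg.matrix_power(P0, 4)
print(np.round(P0, 3))
```

Each row of P0 (and of P) sums to one, as a transition matrix must.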
You may, however, be interested in 1-year or 2-year transition probabilities. The latter time interval is called the transition interval, Δt, and it is used to convert P0 into the final transition matrix, P, according to the formula: P = P0^(ns·Δt) For example, if ns = 4 and Δt = 2, P contains the two-year transition probabilities estimated from quarterly snapshots. For the cohort algorithm, optional output arguments idTotals and sampleTotals from transprob contain the following information: idTotals(n).totalsVec = (N_i^n) for all i idTotals(n).totalsMat = (N_ij^n) for all i, j idTotals(n).algorithm = 'cohort' sampleTotals.totalsVec = (N_i) for all i sampleTotals.totalsMat = (N_ij) for all i, j sampleTotals.algorithm = 'cohort' For efficiency, the vectors and matrices in idTotals are stored as sparse arrays. When ratings must be excluded (see the excludeLabels name-value input argument), all transitions involving the excluded ratings are removed from the sample. For example, if the 'NR' rating must be excluded, any transitions into 'NR' and out of 'NR' are excluded from the sample. The total counts for all other ratings are adjusted accordingly. For more information, see Visualize Transitions Data for transprob. The duration algorithm computes T_i^n, the total time that obligor n spends in rating i within the estimation time window. These quantities are added up over all obligors to get T_i, the total time spent in rating i, collectively, by all obligors in the sample. The algorithm also computes T_ij^n, the number of times that obligor n migrates from rating i to rating j, with i not equal to j, within the estimation time window. These are also added up to get T_ij, the total number of migrations, by all obligors in the sample, from rating i to rating j, with i not equal to j. To estimate the transition probabilities, the duration algorithm first computes a generator matrix Λ.
Each off-diagonal entry of this matrix is an estimate of the transition rate out of rating i into rating j, given by λ_ij = T_ij / T_i, for i ≠ j. The diagonal entries are computed as λ_ii = −Σ_{j≠i} λ_ij. With the generator matrix and the transition interval Δt (e.g., Δt = 2 corresponds to two-year transition probabilities), the transition matrix is obtained as P = exp(Δt·Λ), where exp denotes the matrix exponential (expm in MATLAB). For the duration algorithm, optional output arguments idTotals and sampleTotals from transprob contain the following information: idTotals(n).totalsVec = (T_i^n) for all i idTotals(n).totalsMat = (T_ij^n) for all i, j idTotals(n).algorithm = 'duration' sampleTotals.totalsVec = (T_i) for all i sampleTotals.totalsMat = (T_ij) for all i, j sampleTotals.algorithm = 'duration' When ratings must be excluded (see the excludeLabels name-value input argument), all transitions involving the excluded ratings are removed from the sample. For example, if the 'NR' rating must be excluded, any transitions into 'NR' and out of 'NR' are excluded from the sample. The total time spent in 'NR' (or any other excluded rating) is also removed. [2] Löffler, G., and P. N. Posch. Credit Risk Modeling Using Excel and VBA. West Sussex, England: Wiley Finance, 2007. See also: transprobbytotals | transprobprep | table
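The duration estimator just described (rates λ_ij = T_ij / T_i, generator Λ, and P = expm(Δt·Λ)) can be sketched as follows. The totals are invented toy numbers, and a truncated Taylor series stands in for MATLAB's expm; this is an illustration, not the MathWorks implementation:

```python
import numpy as np

# T_i[i]  = total years all obligors spent in rating i.
# T_ij[i,j] = number of migrations i -> j observed in the sample.
T_i = np.array([40.0, 55.0, 20.0])
T_ij = np.array([[0.0, 4.0, 1.0],
                 [2.0, 0.0, 3.0],
                 [0.0, 0.0, 0.0]])   # rating 2 (default) is absorbing here

Lam = T_ij / T_i[:, None]                # off-diagonal rates lambda_ij
np.fill_diagonal(Lam, 0.0)
np.fill_diagonal(Lam, -Lam.sum(axis=1))  # lambda_ii = -sum_{j!=i} lambda_ij

def expm_taylor(A, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small A)."""
    P, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        P += term
    return P

P = expm_taylor(2.0 * Lam)   # two-year transition probabilities, P = expm(dt*Lam)
print(np.round(P, 4))
```

Because every row of the generator sums to zero, every row of P sums to one.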
Physics - An optical lattice of flux Joint Quantum Institute, National Institute of Standards and Technology, and University of Maryland, Gaithersburg, MD 20899, USA A suitable optical lattice for cold atoms could produce a large effective magnetic field in which the atoms would realize analogs to quantum Hall states. Figure 1: The accumulation of phase as a spin-1/2 particle moves through an inhomogeneous (quadrupole) magnetic field. The black vectors indicate the component of the normalized effective Zeeman field in the {e}_{x}-{e}_{y} plane, and the background color represents the {e}_{z} component. The Bloch spheres show the enclosed solid angle, which sets the particle's accumulated phase as it moves in a loop about the origin. Ultracold neutral atoms are among the simplest and most flexible of quantum many-body systems. As such, they offer the capability to realize strongly interacting systems in their most fundamental form, absent the unwanted complexities that complicate (and sometimes enrich) their solid-state brethren. The question then arises: What classes of systems can be implemented with cold atoms, and of these, which can offer insight beyond that afforded in more conventional systems? In a paper published in Physical Review Letters, Nigel Cooper at the University of Cambridge, UK, proposes an elegant technique to take ultracold atoms to the extreme, where atoms moving about in a lattice potential experience an effective magnetic field with on the order of one flux quantum per lattice site [1]. This is a realm that is inaccessible in conventional materials, and it promises new types of quantized Hall effects in which Landau-level quantization and band-structure effects are intertwined. Magnetic fields are enigmatic.
In our studies we learn the Lorentz force law: in a uniform magnetic field, the force on a moving, charged object is perpendicular to both the magnetic field and the object's velocity. This doesn't fit our usual intuitive picture of forces derived from gradients of potentials, but instead requires a new type of potential: the electromagnetic vector potential. Like the usual scalar potential, the vector potential is related to the seemingly more physical fields via derivatives: the magnetic field is the curl of the vector potential. These potentials are not just mathematical sleights-of-hand; in quantum systems, they take center stage. In Schrödinger's wave mechanics, the evolution of a particle's wave function can be partially understood in terms of its quantum mechanical phase. Usually this phase can be divided into two parts: the dynamic phase acquired in proportion to the particle's kinetic energy, and the phase from scalar potentials. Each of these accumulates at a rate proportional to the associated energy. If we consider a particle moving in a closed loop, with zero scalar potential, then the dynamic phase acquired upon traversing the loop will tend to zero as the velocity drops to zero. The phase acquired in a magnetic field is different: it depends on the geometry of the particle's path. If our loop now encloses a magnetic flux, then the particle will acquire an additional phase proportional to that flux. This interpretation is uncomfortable: somehow the particle has a nonlocal knowledge of the magnetic field everywhere inside the loop. It is more natural to think of the vector potential, in which case the acquired phase is no more than the line integral of the vector potential around the loop. This leads to the celebrated Aharonov-Bohm effect [2,3], in which a charged particle acquires a geometric phase as it moves in the completely field-free region outside an infinite solenoid.
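Since the article's inline equations were lost in extraction, it may help to restate the standard relations this paragraph describes, using the conventional symbols (vector potential A, magnetic field B, charge q, enclosed flux Φ):

```latex
\mathbf{B} = \nabla \times \mathbf{A},
\qquad
\varphi_{\mathrm{AB}}
  = \frac{q}{\hbar}\oint_{\mathcal{C}} \mathbf{A}\cdot d\boldsymbol{\ell}
  = \frac{q\,\Phi}{\hbar},
```

where C is the particle's closed path and Φ is the magnetic flux it encloses. Outside an ideal solenoid, B vanishes everywhere along the path, yet the line integral of A does not, which is the Aharonov-Bohm effect.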
Other physical situations produce geometric phases in which neutral particles can behave as if magnetic fields were present. This concept was introduced to quantum mechanics as Berry's phase [4] for particles with internal structure (like spin states) whose energy depends on parameters in the Hamiltonian, such as position or momentum. If a particle starts in an eigenstate, it can acquire a geometric phase upon traversing a closed loop in parameter space, provided the "motion" is sufficiently slow that the particle adiabatically remains in the same eigenstate. The simplest example of a Berry's phase is shown in Fig. 1, where a neutral spin-1/2 particle moves in an inhomogeneous magnetic field, giving rise to a position-dependent Zeeman shift, the difference in energy between the spin being oriented along or away from the magnetic field. The figure depicts the particle's trajectory, along with the orientation of its ground state on the Bloch sphere. (This sphere defines the allowed states of a spin-1/2 particle.) The particle accumulates a geometric phase equal to one-half the solid angle traced out on the Bloch sphere as the particle moves in space. Such a phase can be interpreted as arising from a geometric gauge field, in analogy with the electromagnetic vector potential. Credit: (Bottom) from [1] Figure 2: Configurations that give rise to Berry's phases for cold atoms in an optical lattice that generates an artificial gauge field. (Top) Existing techniques can produce an infinitely precessing Zeeman field along {e}_{x}, but not along {e}_{y}. (Bottom) Cooper's proposal remedies this problem by allowing a net positive Berry's phase in the lattice's unit cell. Geometric phases are real.
A system particularly well suited for observing and studying their effects is a trap of ultracold atoms. In these systems, researchers can first construct suitable geometries and then unambiguously measure the result. Geometric phases in neutral atoms were vividly demonstrated at MIT in 2002, when topological phases were directly imprinted into the wave function of a Bose-Einstein condensate (BEC) of sodium atoms in a magnetic field by inverting the orientation of the field. This produced multiply quantized phase windings, leading to vortices in the final wave function [5]. Going beyond this elegant demonstration, the next step is more powerful: constructing configurations where the atomic system obeys a new Hamiltonian containing steady-state gauge fields [6]. Such a system mimics that of a particle in an effective magnetic field [7]. Instead of using a magnetic field to generate a Berry's phase, these ideas require a laser to couple different internal (spin) states of atoms. Such coupling is formally equivalent to a Zeeman magnetic field, but spatially structured on the scale of the optical wavelength. In our group at NIST, we followed these theory proposals with a series of experiments demonstrating the effective mapping between the Berry's phase and the electromagnetic vector potential, leading to artificial magnetic and electric fields (see Refs. [8,9] and references therein). The top of Fig. 2 depicts how these ideas lead to large Berry's phases, and also highlights a key limitation. The illustrated Bloch vector can wind an unlimited number of times around its equator as an atom moves along {e}_{x}, but it can only tip toward the poles of the Bloch sphere when the atom moves along {e}_{y}. This implies that we can create large "artificial magnetic fields," but the maximum magnetic flux passing through the system scales as the length of the system, not its area, making it difficult to scale the artificial field to larger systems.
Cooper proposes a technique to overcome this limitation [1] by creating a specific effective Zeeman field using standing waves of light (a type of lattice). The essence of his proposal is depicted in the bottom of Fig. 2: atoms moving in the optical unit cell experience a Berry's phase as a function of both in-plane directions (see also Ref. [10]). Usually such ideas lead to staggered magnetic fields with equal and opposite sign in neighboring lattice sites, with zero average. The current work overcomes this limitation, allowing for large effective magnetic fields whose flux scales as the system's area, not its linear extent. How does this work? The effective electromagnetic vector potential created by this technique has a gauge-dependent singularity called a Dirac string that effectively concentrates magnetic field of one sign at isolated points (with no physical effect). As a result, the effective magnetic field is still staggered, but acquires a nonzero average. To understand more intuitively how this gives rise to an effective field of the same sign, consider an atom moving along each of the two closed loops depicted in Fig. 2 (bottom). For the left loop, the Zeeman field traces out a circle at the top of the Bloch sphere in a counterclockwise direction, while for the right loop it traces a circle at the bottom of the Bloch sphere in a clockwise direction. If the top loop acquires a geometric phase φ, then the bottom loop acquires the phase associated with the entire remaining portion of the Bloch sphere, traversed with the opposite sign: a phase of φ − 2π. Since the acquired phase is defined only modulo 2π, these two curves enclose the same effective field. This seemingly simple observation provides a straightforward path to realizing larger effective magnetic fields than have hitherto been possible, yet with a comparatively simple set of lasers. (The pioneering proposals for large in-lattice gauge fields require more numerous lasers [11], and lack the simplicity and elegance of this approach.)
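The modulo-2π argument can be checked numerically. This sketch assumes the standard convention that a spin-1/2 particle acquires a Berry phase equal to half the signed solid angle its Zeeman field traces on the Bloch sphere; the polar angle below is an arbitrary illustrative choice:

```python
import math

theta = math.pi / 6   # polar angle of the "top" circle (arbitrary)

# Top loop: encloses the spherical cap above it, counterclockwise.
omega_top = 2 * math.pi * (1 - math.cos(theta))
phi_top = 0.5 * omega_top

# Bottom loop: traversed the opposite way, it encloses the remainder
# of the sphere (total solid angle 4*pi) with opposite orientation.
omega_bottom = -(4 * math.pi - omega_top)
phi_bottom = 0.5 * omega_bottom

# The two phases differ by exactly 2*pi, so they are physically the
# same: the enclosed effective flux is identical modulo 2*pi.
print((phi_top - phi_bottom) / (2 * math.pi))   # 1.0
```

The difference is exactly 2π for any choice of theta, which is the point of the argument: the sign of the locally staggered field is a gauge-like choice, and only the average survives.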
To complete the story, Cooper studied the properties of the lowest Bloch bands in this optical flux lattice and computed their Chern number. (Here, the Chern number counts the number of times the wave function's phase winds by 2π along a path running from one side of the Bloch band to the other: a loop on the torus.) In some cases, the Chern number is 1, showing that these bands are topologically equivalent to the lowest Landau level (also with Chern number 1). Thus fermions completely filling the lowest band would form an integer quantum Hall state with a quantized Hall resistance. The atoms used in cold-atom experiments are typically bosons and do not have a Fermi energy, but at fillings of around one atom per lattice site (about one atom per magnetic flux quantum) they are expected to display interaction-driven bosonic fractional quantum Hall states. I would like to thank V. Gurarie and A. Lamacraft for discussions that prepared me to appreciate the current work. I also acknowledge the financial support of the NSF through the PFC at JQI, and the ARO with funds from both the Atomtronics MURI and the DARPA OLE Program. Correction (29 April 2011): References [8] and [9] were corrected. N. R. Cooper, Phys. Rev. Lett. 106, 175301 (2011) W. Ehrenberg and R. E. Siday, Proc. Phys. Soc. London Sect. B 62, 8 (1949) Y. Aharonov and D. Bohm, Phys. Rev. 115, 485 (1959) M. V. Berry, Proc. R. Soc. London A 392, 45 (1984) A. E. Leanhardt, A. Görlitz, A. P. Chikkatur, D. Kielpinski, Y. Shin, D. E. Pritchard, and W. Ketterle, Phys. Rev. Lett. 89, 190403 (2002) G. Juzeliūnas, P. Öhberg, J. Ruseckas, and A. Klein, Phys. Rev. A 71, 053614 (2005) Y.-J. Lin, R. L. Compton, K. Jiménez-García, W. D. Phillips, J. V. Porto, and I. B. Spielman, Nature 462, 628 (2009) Y.-J. Lin, R. L. Compton, K. Jiménez-García, W. D. Phillips, J. V. Porto, and I. B. Spielman, Nature Phys. (2011) A. M. Dudarev, R. B. Diener, I. Carusotto, and Q. Niu, Phys. Rev. Lett. 92, 153005 (2004) D. Jaksch and P. Zoller, New J. Phys.
5, 56 (2003) Ian Spielman is an experimentalist who received his Ph.D. in physics from the California Institute of Technology in 2004, studying quantum Hall bilayers. From there he moved to NIST in Gaithersburg, Maryland, for a two-year NRC postdoc in the Laser Cooling and Trapping group, studying the physics of the superfluid-to-insulator transition in 2D atomic Bose gases. In 2006, he assumed his current position as a NIST physicist and a fellow of the newly founded Joint Quantum Institute (JQI). His research interests focus on using ultracold atomic systems to realize Hamiltonians familiar from condensed matter physics. This includes pioneering work on creating artificial gauge fields for neutral atoms using laser-atom interactions.
bert-large-uncased-whole-word-masking-finetuned-squad · Hugging Face License: apache-2.0 BERT large model (uncased) whole word masking finetuned on SQuAD Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning. This model should be used as a question-answering model. You may use it in a question-answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the task summary of the transformers documentation. Training data and procedure Training used the Adam optimizer with β1 = 0.9 and β2 = 0.999. In order to reproduce the training, you may use the following command: python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_qa.py \ --per_device_eval_batch_size=3 \ --per_device_train_batch_size=3 \
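Whole Word Masking can be illustrated with a short sketch. This is a toy illustration of the idea, not the actual BERT pretraining code: continuation WordPiece tokens carry a "##" prefix, so the pieces of each word are grouped and masked together.

```python
import random

def whole_word_mask(tokens, mask_rate=0.3, seed=0):
    """Mask whole words in a WordPiece token list (toy sketch)."""
    rng = random.Random(seed)
    # Group token indices into words: a new word starts at any piece
    # that does not begin with "##".
    words, out = [], list(tokens)
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    for word in words:
        if rng.random() < mask_rate:
            for i in word:          # mask every piece of the chosen word
                out[i] = "[MASK]"
    return out

tokens = ["the", "phil", "##harmon", "##ic", "played", "beautifully"]
print(whole_word_mask(tokens))
# -> ['the', 'phil', '##harmon', '##ic', 'played', '[MASK]'] with seed 0
```

With the original (non-whole-word) scheme, "##harmon" could be masked while "phil" and "##ic" stay visible; here the three pieces of "philharmonic" are always masked or kept together.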
A Clarke–Ledyaev Type Inequality for Certain Non-Convex Sets Ivanov, M., Zlateva, N. (2000) We consider the question of whether the assumption of convexity of the set involved in the Clarke–Ledyaev inequality can be relaxed. In the case when the point is outside the convex hull of the set, we show that a Clarke–Ledyaev type inequality holds if and only if a certain geometrical relation between the point and the set holds. Christian Clason, Karl Kunisch (2011) Convex duality is a powerful framework for solving non-smooth optimal control problems. However, for problems set in non-reflexive Banach spaces such as L1(Ω) or BV(Ω), the dual problem is formulated in a space with a difficult measure-theoretic structure. The predual problem, on the other hand, can be formulated in a Hilbert space and entails the minimization of a smooth functional with box constraints, for which efficient numerical methods exist. In this work, elliptic control problems with... A generalized differential of real functionals. V. Novo, L. Rodríguez Marín (1994) Deville, Robert (1995) We prove that if f is a real-valued lower semicontinuous function on a Banach space X and if there exists a C^1, real-valued Lipschitz continuous function on X with bounded support which is not identically equal to zero, then f is Lipschitz continuous with constant K provided all lower subgradients of f are bounded by K. As an application, we give a regularity result for viscosity supersolutions (or subsolutions) of Hamilton-Jacobi equations in infinite dimensions which satisfy a coercive condition.... Christian Clason, Kazufumi Ito, Karl Kunisch (2012) This work is concerned with a class of minimum effort problems for partial differential equations, where the control cost is of L∞-type. Since this problem is non-differentiable, a regularized functional is introduced that can be minimized by a superlinearly convergent semi-smooth Newton method.
Uniqueness and convergence for the solutions to the regularized problem are addressed, and a continuation strategy based on a model function is proposed. Numerical examples for a convection-diffusion equation... A new approach to the constrained controllability problem Ali Boutoulout, Layla Ezzahri, Hamid Bourray (2014) We consider the problem of internal regional controllability with output constraints. It consists in steering a hyperbolic system to a final state between two prescribed functions only on a subregion of the evolution system's domain. This problem is solved by characterizing the optimal control in terms of a subdifferential associated with the minimized functional. A new method for nonsmooth convex optimization. Wei, Z., Qi, L., Birge, J.R. (1998) A new subdifferential in quasiconvex analysis. Martínez-Legaz, J.E., Sach, P.H. (1999) A note on a class of equilibrium problems with equilibrium constraints Jiří V. Outrata (2004) The paper concerns a two-level hierarchical game, where the players on each level behave noncooperatively. In this way one can model, e.g., an oligopolistic market with several large and several small firms. We derive two types of necessary conditions for a solution of this game and briefly discuss the possibilities of its computation. A Note on Coercivity of Lower Semicontinuous Functions and Nonsmooth Critical Point Theory Corvellec, J. (1996) The first motivation for this note is to obtain a general version of the following result: let E be a Banach space and f : E → R be a differentiable function, bounded below and satisfying the Palais-Smale condition; then f is coercive, i.e., f(x) goes to infinity as ||x|| goes to infinity. In recent years, many variants and extensions of this result have appeared; see [3], [5], [6], [9], [14], [18], [19] and the references therein. A general result of this type was given in [3, Theorem 5.1] for a lower... Giovanni P.
Crespi, Ivan Ginchev, Matteo Rocca (2005) The existence of solutions to a scalar Minty variational inequality of differential type is usually related to the monotonicity of the primitive function. On the other hand, solutions of the variational inequality are global minimizers for the primitive function. The present paper generalizes these results to vector variational inequalities, putting the Increasing Along Rays (IAR) property at the center of the discussion. To achieve that, infinite elements in the image space Y are introduced.... A note on singular points of convex functions in Banach spaces A note on the characterization of the global maxima of a (tangentially) convex function over a convex set. Hiriart-Urruty, J.-B., Ledyaev, Yuri S. (1996) A polynomial of degree four not satisfying Rolle's Theorem in the unit ball of {l}_{2} Jesús Ferrer (2005) We give an example of a fourth degree polynomial which does not satisfy Rolle's Theorem in the unit ball of {l}_{2}
A Beale-Kato-Majda criterion for magneto-micropolar fluid equations with partial viscosity. Wang, Yu-Zhu, Hu, Liping, Wang, Yin-Xia (2011) A blowup analysis of the mean field equation for arbitrarily signed vortices Hiroshi Ohtsuka, Takashi Suzuki (2006) We study the noncompact solution sequences to the mean field equation for arbitrarily signed vortices and observe the quantization of the mass of concentration, using the rescaling argument. A confinement result for axisymmetric fluids Carlotta Maffei, Carlo Marchioro (2001) A conservative finite difference scheme for the static diffusion equation. Arteaga-Arispe, J., Guevara-Jordan, J.M. (2008) A controllability result for the 1-D isentropic Euler equation Olivier Glass (2005) Luís Almeida, Lucio Damascelli, Yuxin Ge (2002) A free boundary stationary magnetohydrodynamic problem in connection with the electromagnetic casting process Tomasz Roliński (1995) We investigate the behaviour of the meniscus of a drop of liquid aluminium in the neighbourhood of a state of equilibrium under the influence of weak electromagnetic forces. The mathematical model comprises both Maxwell and Navier-Stokes equations in 2D. The meniscus is governed by the Young-Laplace equation, the data being the jump of the normal stress. To show the existence and uniqueness of the solution we use the classical implicit function theorem. Moreover, the differentiability of the operator... A general class of phase transition models with weighted interface energy E. Acerbi, G. Bouchitté (2008) A global existence result in Sobolev spaces for MHD system in the half-plane Emanuela Casella, Paola Trebeschi (2002) {ℝ}^{n} 0 Alberto Bressan (2003) A lower estimate of the interface of some nonlinear diffusion problems. J. Goncerzewicz, N. Okrasinski (1994) A mathematical model for core-annular flows with surfactants. Kas-Danouche, S., Papageorgiou, D., Siegel, M. (2004) Rajae Aboulaich, Soumaya Boujena, Jérôme Pousin (2001) O'Brien, S.B.G., Hayes, M. (2002)
Asymptotic Behavior of Ground State Radial Solutions for -Laplacian Problems Sonia Ben Othman, Rym Chemmam, Habib Mâagli, "Asymptotic Behavior of Ground State Radial Solutions for -Laplacian Problems", Journal of Mathematics, vol. 2013, Article ID 409329, 7 pages, 2013. https://doi.org/10.1155/2013/409329 Sonia Ben Othman,1 Rym Chemmam,1 and Habib Mâagli2 1Département de Mathématiques, Faculté des Sciences de Tunis, Campus Universitaire, 2092 Tunis, Tunisia 2King Abdulaziz University, College of Sciences and Arts, Rabigh Campus, Department of Mathematics, P.O. Box 344, Rabigh 21911, Saudi Arabia Let . We take up the existence, the uniqueness, and the asymptotic behavior of a positive continuous solution to the following nonlinear problem in : , , , where , is a positive differentiable function in , and is a positive continuous function in such that there exists satisfying for each in , , and such that . Let and . The following differential equation: has been studied with various boundary conditions, where is a continuous function in , differentiable and positive in , is a nonnegative continuous function in , and , (see [1–15]). For a function depending only on , , and , the operator is the radially symmetric -Laplacian in , and equations of type (1) also arise in the study of radial solutions of the Monge-Ampère equation (see [13]). In this paper, our main purpose is to establish the existence of a unique positive solution to the following boundary value problem: and to establish estimates on such a solution under an appropriate condition on . The study of this type of equation (1) is motivated by [5, 15]. Namely, in the special case , and , , the authors in [15] studied (1) and gave some uniqueness results. In this work, we consider a wider class of weights and we aim to extend the study of (1) in [15] to . For the case , the problem has been studied in [5]. Indeed, the authors of [5] proved the following existence result. Theorem 1.
The problem has a unique positive solution satisfying for each , where is a positive constant and is the function defined on by . In this paper we shall improve the above asymptotic behavior of the solution of problem , and we extend the study of to . The pure elliptic problem of type has been investigated by several authors with zero Dirichlet boundary value; we refer the reader to [16–24] and the references therein. More recently, applying Karamata regular variation theory, Chemmam et al. gave in [17] the asymptotic behavior of solutions of problem . In this work, we aim to extend the result established in [17] to the radial case associated with problem . To simplify our statements, we need to fix some notation and make some assumptions. Throughout this paper, we shall use to denote the set of Karamata functions defined on by , where is a positive constant and such that . It is clear that if and only if is a positive function in such that . For two nonnegative functions and on a set , we write , if there exists a constant such that for each . The letter will denote a generic positive constant which may vary from line to line. Furthermore, we point out that if is a nonnegative continuous function in , then the function defined on by is the solution of the problem . As mentioned above, our main purpose in this paper is to establish the existence and global behavior of a positive solution of problem . Let us introduce our hypotheses. Here, the function is continuous in , differentiable and positive in , such that with . The function is required to satisfy the following hypothesis. ( ) is a positive measurable function on such that with and the function such that . Remark 2. We need to verify condition in hypothesis only if (see Lemma 6 below). As a typical example of a function satisfying , we quote the following. Example 3. Put . Then for and , or and , the function satisfies . Now, we are ready to state our main result. Theorem 4. Assume .
Then problem has a unique positive continuous solution satisfying for , where is the function defined on by . The main body of the paper is organized as follows. In Section 2, we establish some estimates and recall some known results on functions belonging to . Theorem 4 is proved in Section 3. The final section is devoted to some applications. 2. Key Estimates In what follows, we give estimates on the functions and , where is a function satisfying and is the function given by (12). First, we recall some fundamental properties of functions belonging to the class , taken from [17, 22]. Lemma 5. Let and . Then one has and . Lemma 6 (Karamata's theorem). Let and be a function in . Then one has the following properties: (i) If , then converges and . (ii) If , then diverges and . Lemma 7. Let . Then there exists such that for and . Lemma 8. Let ; then one has . If further converges, one has . Now, we are able to prove the following propositions, which play a crucial role in this paper. Proposition 9. Let be a function satisfying . Then one has for , where is the function defined by . Proof. For , we have . Put . To prove the result, it is sufficient to show that for . Since the function is continuous and positive in , we have . Now, assume that ; then we have . It follows from Lemma 7 that . To reach our estimates, we consider the following cases. (i) If , then it follows from Lemma 6 that . Now, using Lemma 5 and again Lemma 6, we deduce that . (ii) If , then it follows from Lemma 6 that . So, since , we have . (iii) If , then for each , we have . Since the function is in , using the fact that and Lemma 6, we obtain that . (iv) If , we have by Lemma 6 that , which yields . Hence, we reach the result by combining (22) with the estimates stated in each case above. This completes the proof. Proposition 10. Let be a function satisfying and let be the function given by (12). Then for , one has . Proof. Let and ; we obtain by a simple calculation that for , where . So, one can see that , where .
Then, using Lemmas 5, 7, and 8, we obtain that and . Hence, it follows from Proposition 9 that where is the function defined in (19) by replacing by and by . This ends the proof. 3.1. Existence and Asymptotic Behavior Let be a function satisfying and let be the function given by (12). By Proposition 10, there exists a constant such that for each We now look at the existence of a positive solution of problem satisfying (11). For the case , the existence of a positive continuous solution to problem is due to [5]. Now, we turn to the existence result for problem when and we give the precise asymptotic behavior of such a solution for . To that end, we split the proof into two cases. Case 1 ( ). Let be a positive continuous solution of problem . In order to obtain estimates (11) on the function , we need the following comparison result. Lemma 11. Let and such that Then . Proof. Suppose that for some . Then there exists , such that and for with or . By an elementary argument, we have On the other hand, since , then , for each . This yields Using further (40), we deduce that the function is nondecreasing on with . Hence, from the monotonicity of , we obtain that the function is nondecreasing on with and . This yields a contradiction, which completes the proof. Now, we are ready to prove (11). Put and . It follows from (7) that the function satisfies According to (37), we obtain by a simple calculation that and satisfy, respectively, (38) and (39). Thus, we deduce by Lemma 11 that This implies (11) by using (37). Case 2 ( ). Put and let Obviously, the function belongs to and so is not empty. We consider the integral operator on defined by We shall prove that has a fixed point in , to construct a solution of problem . To this aim, we first verify that . Let , then we have for each This together with (37) implies that Since and , then leaves invariant the convex . Moreover, since , then the operator is nondecreasing on .
Now, let be the sequence of functions in defined by Since , we deduce from the monotonicity of that for , we have Thanks to the monotone convergence theorem, we deduce that the sequence converges to a function which satisfies We conclude that is a positive continuous solution of problem satisfying (11). Assume that satisfies . For , the uniqueness of the solution to problem follows from Lemma 11. Thus in the following, we look at the case . Let Let and be two positive solutions of problem in . Then there exists a constant such that This implies that the set is not empty. Now, put , then we aim to show that . Suppose that , then we have So, we have in , which implies that the function is nondecreasing on with . Hence, from the monotonicity of , we obtain that the function is nondecreasing on with . This implies that . On the other hand, we deduce by symmetry that . Hence . Now, since and , we have . This contradicts the fact that . Hence, and then . 4.1. First Application Let be a positive measurable function in satisfying for where the real numbers and satisfy one of the following two conditions: (i) and , (ii) and . Using Theorem 4, we deduce that problem has a positive continuous solution in satisfying the following. (i) If , then for (ii) If and , then for (iii) If and , then for (iv) If and , then for (v) If , then for (vi) If and , then for 4.2. Second Application Let be a function satisfying and let . We are interested in the following nonlinear problem: Put , then by a simple calculation, we obtain that satisfies Using Theorem 4, we deduce that problem (64) has a unique solution such that Consequently, we deduce that (63) has a unique solution satisfying
B. Acciaio and P. Pucci, “Existence of radial solutions for quasilinear elliptic equations with singular nonlinearities,” Advanced Nonlinear Studies, vol. 3, no. 4, pp. 511–539, 2003.
R. P. Agarwal, H. Lü, and D.
O'Regan, “Existence theorems for the one-dimensional singular p-Laplacian equation with sign changing nonlinearities,” Applied Mathematics and Computation, vol. 143, no. 1, pp. 15–38, 2003.
R. P. Agarwal, H. Lü, and D. O'Regan, “An upper and lower solution method for the one-dimensional singular p-Laplacian,” Georgian Academy of Sciences A, vol. 28, pp. 13–31, 2003.
R. P. Agarwal, H. Lü, and D. O'Regan, “Eigenvalues and the one-dimensional p-Laplacian,” Journal of Mathematical Analysis and Applications, vol. 266, no. 2, pp. 383–400, 2002.
I. Bachar, S. Ben Othman, and H. Mâagli, “Existence results of positive solutions for the radial p-Laplacian,” Nonlinear Studies, vol. 15, no. 2, pp. 177–189, 2008.
I. Bachar, S. Ben Othman, and H. Mâagli, “Radial solutions for the p-Laplacian equation,” Nonlinear Analysis: Theory, Methods & Applications, vol. 70, no. 6, pp. 2198–2205, 2009.
E. Calzolari, R. Filippucci, and P. Pucci, “Existence of radial solutions for the p-Laplacian elliptic equations with weights,” American Institute of Mathematical Sciences Journal, vol. 15, no. 2, pp. 447–479, 2006.
D.-P. Covei, “Existence and asymptotic behavior of positive solution to a quasilinear elliptic problem in {ℝ}^{N},” Nonlinear Analysis: Theory, Methods & Applications, vol. 69, no. 8, pp. 2615–2622, 2008.
M. Ghergu and V. D. Rădulescu, Nonlinear PDEs: Mathematical Models in Biology, Chemistry and Population Genetics, Springer Monographs in Mathematics, Springer, Heidelberg, Germany, 2012.
J. V. Goncalves and C. A. P.
Santos, “Positive solutions for a class of quasilinear singular equations,” Electronic Journal of Differential Equations, vol. 2004, no. 56, pp. 1–15, 2004.
D. D. Hai and R. Shivaji, “Existence and uniqueness for a class of quasilinear elliptic boundary value problems,” Journal of Differential Equations, vol. 193, no. 2, pp. 500–510, 2003.
“p-Laplacian boundary value problems,” Nonlinear Analysis: Theory, Methods & Applications, vol. 56, no. 7, pp. 975–984, 2004.
M. Karls and A. Mohammed, “Integrability of blow-up solutions to some non-linear differential equations,” Electronic Journal of Differential Equations, vol. 2004, pp. 1–8, 2004.
C.-G. Kim and Y.-H. Lee, “Existence of multiple positive radial solutions for p-Laplacian problems with an {L}^{1}-indefinite weight,” Taiwanese Journal of Mathematics, vol. 15, no. 2, pp. 723–736, 2011.
P. Pucci, M. García-Huidobro, R. Manásevich, and J. Serrin, “Qualitative properties of ground states for singular elliptic equations with weights,” Annali di Matematica Pura ed Applicata IV, vol. 185, pp. S205–S243, 2006.
H. Brezis and S. Kamin, “Sublinear elliptic equations in {ℝ}^{n},” Manuscripta Mathematica, vol. 74, no. 1, pp. 87–106, 1992.
R. Chemmam, A. Dhifli, and H. Mâagli, “Asymptotic behavior of ground state solutions for sublinear and singular nonlinear Dirichlet problems,” Electronic Journal of Differential Equations, vol. 2011, no. 88, pp. 1–12, 2011.
A. L.
Edelson, “Entire solutions of singular elliptic equations,” Journal of Mathematical Analysis and Applications, vol. 139, no. 2, pp. 523–532, 1989.
M. Ghergu and V. Rădulescu, “Ground state solutions for the singular Lane-Emden-Fowler equation with sublinear convection term,” Journal of Mathematical Analysis and Applications, vol. 333, no. 1, pp. 265–273, 2007.
A. V. Lair and A. W. Shaker, “Classical and weak solutions of a singular semilinear elliptic problem,” Journal of Mathematical Analysis and Applications, vol. 211, no. 2, pp. 371–385, 1997.
A. Mohammed, “Ground state solutions for singular semi-linear elliptic equations,” Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 3-4, pp. 1276–1280, 2009.
C. A. Santos, “On ground state solutions for singular and semi-linear problems including super-linear terms at infinity,” Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 12, pp. 6038–6043, 2009.
J. Trubek, “Asymptotic behavior of solutions to \mathrm{\Delta}u + K{u}^{\sigma} = 0 in {ℝ}^{n}, n \ge 3”
Copyright © 2013 Sonia Ben Othman et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
For curves given in different formats, Table 2.2.1 lists formulas for the arc-length function, which measures the length of the curve as a function of the curve's parameter. In each case, the integrand for the arc-length function is recognized as \mathrm{\rho} = ∥\mathbf{R}\prime∥. Hence, by the fundamental theorem of calculus, \frac{\mathrm{ds}}{\mathrm{dp}} = \mathrm{\rho}, and again by elementary calculus, \frac{\mathrm{dp}}{\mathrm{ds}} = 1/\mathrm{\rho}.

Curve y = f\left(x\right): \mathbf{R} = \left[\begin{array}{c}x\\ f(x)\end{array}\right], \mathbf{R}\prime = \left[\begin{array}{c}1\\ f\prime(x)\end{array}\right], s\left(x\right) = {∫}_{{x}_{0}}^{x}\sqrt{1+{\left(f\prime\left(u\right)\right)}^{2}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathit{ⅆ}u

Plane curve \left\{\begin{array}{c}x=x\left(p\right)\\ y=y\left(p\right)\end{array}\right\}: \mathbf{R} = \left[\begin{array}{c}x(p)\\ y(p)\end{array}\right], \mathbf{R}\prime = \left[\begin{array}{c}x\prime(p)\\ y\prime(p)\end{array}\right], s\left(p\right) = {∫}_{{p}_{0}}^{p}\sqrt{{\left(x\prime\left(u\right)\right)}^{2}+{\left(y\prime\left(u\right)\right)}^{2}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathit{ⅆ}u

Space curve \left\{\begin{array}{c}x=x\left(p\right)\\ y=y\left(p\right)\\ z=z\left(p\right)\end{array}\right\}: \mathbf{R} = \left[\begin{array}{c}x(p)\\ y(p)\\ z(p)\end{array}\right], \mathbf{R}\prime = \left[\begin{array}{c}x\prime(p)\\ y\prime(p)\\ z\prime(p)\end{array}\right], s\left(p\right) = {∫}_{{p}_{0}}^{p}\sqrt{{\left(x\prime\left(u\right)\right)}^{2}+{\left(y\prime\left(u\right)\right)}^{2}+{\left(z\prime\left(u\right)\right)}^{2}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathit{ⅆ}u

Table 2.2.1 The arc-length function

If the parameter for a curve is s, the curve's arc length, then (by the chain rule) \frac{d\mathbf{R}}{\mathrm{ds}} = \frac{d\mathbf{R}}{\mathrm{dp}}\frac{\mathrm{dp}}{\mathrm{ds}} = \frac{d\mathbf{R}}{\mathrm{dp}}\frac{1}{\mathrm{\rho}}, so that ∥\frac{d\mathbf{R}}{\mathrm{ds}}∥ = ∥\frac{d\mathbf{R}}{\mathrm{dp}}∥\frac{1}{\mathrm{\rho}} = \frac{\mathrm{\rho}}{\mathrm{\rho}} = 1.

Calculate the length of the helix defined in Example
2.1.4: x\left(t\right) = t\,\mathrm{cos}\left(t\right), y\left(t\right) = t\,\mathrm{sin}\left(t\right), z\left(t\right) = {t}^{3}/6, 0 \le t \le 2\mathrm{\pi}. Then obtain s\left(t\right), the arc-length function for the helix in Example 2.1.4. For the plane curve x = {p}^{2}-p/2, y = 4/3\,{p}^{3/2}, p \in \left[0,\infty\right): find s = s\left(p\right), invert to obtain p = p\left(s\right), reparametrize the curve by s, and verify that ∥\frac{d\mathbf{R}}{\mathrm{ds}}∥ = 1.
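The helix arc length can be checked numerically. For this helix, ‖R′(t)‖² = (cos t − t sin t)² + (sin t + t cos t)² + (t²/2)² = 1 + t² + t⁴/4 = (1 + t²/2)², so the closed form is s(t) = t + t³/6. The Python sketch below (function names are illustrative, not part of any Maple worksheet) compares a trapezoidal-rule integral of the speed against that closed form:

```python
import math

def speed(t):
    # R(t) = (t cos t, t sin t, t^3/6)  =>  R'(t) = (cos t - t sin t, sin t + t cos t, t^2/2)
    dx = math.cos(t) - t * math.sin(t)
    dy = math.sin(t) + t * math.cos(t)
    dz = t * t / 2
    return math.sqrt(dx * dx + dy * dy + dz * dz)

def arc_length(t_end, n=100_000):
    # composite trapezoidal rule for s(t) = integral of ||R'(u)|| du from 0 to t
    h = t_end / n
    total = 0.5 * (speed(0.0) + speed(t_end))
    for i in range(1, n):
        total += speed(i * h)
    return total * h

t = 2 * math.pi
exact = t + t**3 / 6          # closed form, since ||R'(t)|| simplifies to 1 + t^2/2
print(arc_length(t), exact)   # the two values agree to many digits
```

The agreement of the numeric integral with t + t³/6 is a good sanity check before inverting s(t) in the reparametrization exercise.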
52Axx General convexity
52A05 Convex sets without dimension restrictions
52A07 Convex sets in topological vector spaces
52A10 Convex sets in 2 dimensions (including convex curves)
52A15 Convex sets in 3 dimensions (including convex surfaces)
52A20 Convex sets in n dimensions (including convex hypersurfaces)
52A21 Finite-dimensional Banach spaces (including special norms, zonoids, etc.)
52A30 Variants of convex sets (star-shaped, (m,n)-convex, etc.)
52A37 Other problems of combinatorial convexity
52A38 Length, area, volume
52A41 Convex functions and convex programs
52A55 Spherical and hyperbolic convexity

Hans Herda (1976), A characterization of spheres among convex 3-bodies
Erich Novak (1987), A Decomposition of Measures in Euclidean Space Yielding Error Bounds for Quadrature Formulas
Michel Bonnefont (2009) — In the first part of the paper, we define an approximated Brunn-Minkowski inequality which generalizes the classical one for metric measure spaces. Our new definition, based only on properties of the distance, also allows us to deal with discrete metric measure spaces. Then we show the stability of our new inequality under convergence of metric measure spaces. This result gives as a corollary the stability of the classical Brunn-Minkowski inequality for geodesic spaces. The proof of this stability...
Rubinov, A.M., Glover, B.M., Jeyakumar, V. (1995), A general approach to dual characterizations of solvability of inequality systems with applications
Pech, Pavel (1998), A generalization of the Cauchy-Schwarz inequality
A. Figalli, F. Maggi, A. Pratelli (2014) — By elementary geometric arguments, correlation inequalities for radially symmetric probability measures are proved in the plane. Precisely, it is shown that the correlation ratio for pairs of width-decreasing sets is minimized within the class of infinite strips. Since open convex sets which are symmetric with respect to the origin turn out to be width-decreasing sets, Pitt's Gaussian correlation inequality (the two-dimensional case of the long-standing Gaussian correlation conjecture) is derived...
G. Elekes (1986), A Geometric Inequality and the Complexity of Computing Volume
Chu, Xiao-Guang, Wu, Yu-Dong (2009), A geometric inequality involving a mobile point in the place of the triangle
Meyer, Mathieu, Reisner, Shlomo (2000)
Catherine Bandle (1974), A Geometrical Isoperimetric Inequality and Applications to Problems of Mathematical Physics
Marcel Berger (1972), À l'ombre de Loewner
Groemer, H., Wallen, L.J. (2001), A measure of asymmetry for domains of constant width
P. Blanksby (1970), A metric inequality associated with valuated fields
J.M. Wills, P. Gritzmann, D. Wrase (1987), A new isometric inequality
Robert Osserman (1977), A note on Hayman's theorem on the bass note of a drum
Milan Vlach (1977), A note on separation by linear mappings
Rafał Latała, Krzysztof Oleszkiewicz (1995), A note on sums of independent uniformly distributed random variables
Bukh, Boris (2006)
What to do with MAI on Polygon - Mai Finance - Tutorials

This tutorial will present the different options that let you use your freshly minted MAI on Polygon. The goal of this tutorial is not to present in detail what you can do with your MAI stable coin, but to list the websites and DeFi applications on Polygon that let you use your MAI directly, or in combination with other stable coins. For more details about specific ways to use MAI, you can refer to other tutorials on this site, or get help on Discord or Telegram. Please note that the list is not complete, and never will be, since new dapps launch every week on the network. I can't review them all, so I will only present the main options, or the most famous / most "secure" options. If you want a particular project to be listed, please join the Qi community on Discord. I will not present Mai Finance farms: this subject deserves its own tutorial, because Qi is not like any other random farm token.

Farming safely on bluechip projects

Bluechip projects are the DeFi applications that have proved to be solid and present a lower risk. They are usually audited, and the teams behind them have been working on them for a long time. They usually don't have huge APRs (Annual Percentage Rate), but they can be trusted.

Balancer is an automated portfolio manager, liquidity provider, and price sensor. On the platform, you will be able to lend your crypto and collect fees from traders, who rebalance your portfolio by following arbitrage opportunities. If you need more details about Balancer, please go read the official doc. On the Polygon network, Balancer offers a pool composed of the 4 main stable coins: DAI, USDC, USDT and MAI (miMATIC). This stable pool currently has a pretty steady APR of ~20%.

Stable coin pool state as of August 2021

The best thing about Balancer is that you absolutely don't need to own all 4 coins to deposit into the pool.
Balancer will automatically generate a balanced combination from whatever deposit you make. This means that if you have $100 worth of MAI, you can simply deposit it into the Balancer pool and let the algorithm split it properly into a 25% ratio for each coin, depending on their respective prices at the moment of the deposit. Rewards for the pool are paid in the BAL token, distributed on a weekly basis. In addition to the BAL token, extra rewards can be granted depending on the pool you entered. You can check the different incentive programs here. In our case, participating in the stable pool will also earn you MATIC and Qi rewards. The complete flow would be something like this. If you need more details on how you can use Mai Finance to lend your crypto and borrow MAI (instead of selling your crypto to buy MAI), read the other guides on this site. You can even include AAVE in the loop to earn even more.

A little bit of click-bait here. Curve is another platform where you will be able to lend your crypto assets in pools that generate revenue, but not with MAI directly (not yet?). The pools we are interested in are:

- the AAVE pool, which will generate between 5% and 15% APR (the APR varies a lot) on a stable coin trio (DAI/USDC/USDT). The pool works exactly like Balancer in that you can enter it using a single asset, which the protocol will then put to work on AAVE.
- the atricrypto pool, which is composed of the stable coin trio and also includes wETH and wBTC to mitigate impermanent loss. This pool has an APR ranging between 25% and 30%.

The Mai Finance team is currently trying to have MAI added to this pool too, meaning that you may be able to enter it with your minted MAI directly.
While waiting for the Curve protocol to accept MAI as a valid stable coin in their pools, you can still use your favourite crypto with Curve by following these steps (example with MATIC):

1. Deposit your MATIC tokens on AAVE and collect amWMATIC.
2. Deposit your amWMATIC on Mai Finance and collect camWMATIC (the AAVE rewards will be compounded into the camWMATIC tokens).
3. Use the camWMATIC as collateral on Mai Finance and borrow MAI against it.
4. Use the swap page on Mai Finance to swap all of your MAI for USDC.
5. Enter the atricrypto pool on Curve with your USDC and get a 25% to 30% reward, or enter the AAVE pool on Curve with your USDC and get a 5% to 15% reward.

Rewards on Curve are granted in:

- auto-compounded USDC that increases your position in the pool (it will be a mix of USDC/USDT/DAI, and possibly wBTC/wETH for the atricrypto pool),
- WMATIC that you can then use to repeat the loop above and increase your loan and invested capital,
- the CRV token, which can also be used as collateral on Mai Finance to borrow more MAI and increase your invested capital.

There's a complete guide on how you can use Mai Finance to lever up your crypto on AAVE. This does not make direct use of the MAI stable coin, but we can imagine that, in the future, AAVE will also have a MAI pool where you will be able to lend your crypto.

QuickSwap is probably one of the most famous DEXes (Decentralized EXchanges) on Polygon, along with SushiSwap and 1inch. It's also an AMM (Automated Market Maker) that allows users to efficiently trade on the Polygon network using liquidity pools. Any trade on the exchange is subject to a fee that is partially redistributed to users who deposit their liquidity on the platform. The way you use MAI on QuickSwap is very similar to a regular yield farm, so if you need the exact steps to enter the MAI/USDC pool on QuickSwap, it's probably better for you to read this article.
Currently, if you enter the MAI/USDC LP (Liquidity Provider) pool on QuickSwap, you will earn the rewards shown below.

Details of the MAI/USDC pool on QuickSwap as of August 2021

Degen farms and aggregators

Adamant is an aggregator that lists all the "best" farms on Polygon and lets you enter them directly from its website. By depositing your assets (LP tokens) in a specific pool on Adamant, the algorithms will harvest the rewards granted by the pool and automatically compound part of the reward into your LP position. The rest of the reward is usually converted to WMATIC, which is then redistributed to the holders of the ADDY token (the native token of Adamant). Finally, you also get a reward in ADDY tokens that you can harvest and vest for 90 days, earning you part of the WMATIC dividends. In general, Adamant is a good place to go if you don't really care about the farm token, and if you don't want to compound your rewards manually several times a day. It also generates more revenue, since you get some ADDY rewards in addition to the reward granted by the pool. Adamant currently supports a few pools that accept the MAI/USDC LP pair:

- QuickSwap: the QUICK reward is swapped into more MAI/USDC LP and WMATIC rewards
- DinoSwap: the DINO reward is swapped into more MAI/USDC LP and WMATIC rewards
- Mai Finance: the Qi reward is swapped into more MAI/USDC LP and WMATIC rewards

QuickSwap MAI/USDC pool on Adamant

The screenshots of the QuickSwap pool on the QuickSwap website (see the paragraph above) and on Adamant were taken the same day, but show different APYs (Annual Percentage Yield). You can see that the APY on Adamant is a little bit higher than on QuickSwap directly.
The reward breakdown is as follows:

- 12.88% auto-compounded QUICK (meaning the QUICK reward is transformed into more LP tokens)
- 9.16% ADDY reward (not compounded)
- 3.40% fee-share dividend (claiming ADDY daily)

This means that, out of the 20.92% granted by QuickSwap, only 12.88% is used to increase your LP position; the rest is swapped into WMATIC dividends. You will be able to claim your ADDY reward daily (or anytime) and stake it, which will in turn generate claimable WMATIC dividends. In other words, Adamant seems like the better option because it has better APYs and compounds rewards automatically, but in reality it involves a lot of manual actions too. Using Adamant also has a strong impact on native token prices. Indeed, because Adamant is constantly selling the farm tokens to generate more LP pairs and WMATIC as dividends for its ADDY holders, the sell pressure on farm tokens is very high, which can explain why their price is consistently decaying.

Other farms accepting the MAI/USDC LP pair

With MAI getting more and more popular on Polygon, and because QuickSwap supports the MAI/USDC pair, a lot of farms now support it too. The following list will present a few projects on which you can earn yield using MAI/USDC. Other farms may also accept the MAI/USDC pool. If you want to stay informed about new farms and their launch dates, I strongly recommend taking a look at the RugDoc.io calendar for Polygon farms, and possibly at the rest of their website, which presents a very smart overview of each farm as well as its potential risks.

Impermax is a platform that lets users leverage their LP tokens for higher yields. The goal is very simple: by providing LP tokens and using them as collateral, one can borrow more of the 2 underlying assets to generate more LP tokens and repeat the loop.

Impermax loop explained

When doing so, the user is exposed to impermanent loss, and the loss is magnified by the number of times the loop is repeated.
The risk of liquidation is also multiplied when too many loops are applied. Indeed, while the APR is multiplied, the price variation of the two coins forming the pair is also amplified by the lever effect, leading to faster liquidation. With stable coins, though, the risk of liquidation is lower, because the price variation is negligible. This also means that the Collateral to Debt Ratio (CDR) can be very close to 100%, allowing a high number of loops, hence a high APR. Note that Impermax charges fees when you borrow and leverage your position. The fee corresponds to 0.1% of your final position. As an example, if I have $100 worth of MAI/USDC and I leverage 50x, my final position will be worth $5,000 and I will pay a $4.90 fee corresponding to the $4,900 that I borrowed. The effect of looping the lending/borrowing combination multiplies the final APY. With an initial APR of 20% for the MAI/USDC pair, a CDR of 110%, and the loop applied 50 times, the formula

Equivalent APR = Initial APR \times \sum_{i=0}^{n}\left(\frac{100}{CDR}\right)^i

gives a final APR of roughly 218%. There are some other elements that will affect the final APR, namely the borrowing APR (the loan interest for borrowing more LP tokens) and the supply/demand of both assets composing the LP pair (which directly drives the borrowing APR). Also, because all the rates are magnified by the number of times the loop is applied, the APR will vary drastically, and can sometimes become negative for short periods of time (your LP tokens will be used to repay the negative APR).

Leveraged position of my MAI/USDC pair

In the end, you are using the base APR on a much bigger position, which earns much bigger interest, increasing the APR of your initial position. An example of an Impermax dashboard with an initial $70.52 MAI/USDC pair: I can see very easily how much I'm using as collateral, how much I initially invested, what the leverage ratio is, and what the liquidation values due to the leverage ratio are.
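The geometric-series formula above can be checked with a small sketch. This is only an estimate under the stated assumptions (constant base APR and CDR, and it ignores the 0.1% borrow fee and borrowing interest); the function name is illustrative, not an Impermax API:

```python
def leveraged_apr(base_apr, cdr_percent, loops):
    """Estimate the equivalent APR after `loops` lever iterations.

    Each loop re-deposits a fraction 100/CDR of the previous position,
    so the total position is the geometric sum of (100/CDR)**i for i = 0..loops,
    and the equivalent APR is the base APR applied to that total.
    """
    r = 100.0 / cdr_percent
    total_position = sum(r**i for i in range(loops + 1))
    return base_apr * total_position

# 20% base APR, 110% CDR, 50 loops -> roughly 218% equivalent APR
print(round(leveraged_apr(20.0, 110.0, 50), 1))
```

Note that with a 110% CDR the sum converges: even with infinitely many loops the multiplier caps at 1 / (1 - 100/110) = 11, i.e. a 220% ceiling on the equivalent APR in this example.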
This position will give me the following ratios at the time of writing.

Earnings and spendings estimation at a given time

The APR is granted in the IMX token, which can either be swapped for more MAI/USDC (use the power of Mai Finance to borrow at 0% interest, RTFM), or used to provide liquidity in specific pools accepting IMX on Impermax.

Supplying MAI to borrowers

Indeed, on the app you can also provide liquidity to those who want to apply leveraging loops to their positions (they will need the underlying assets to generate more LP tokens). Lending assets is a great way to earn yield and let the borrowers take all the risks. Also, the more users are borrowing, the higher the supply APR will be.

Rates for supplying and borrowing MAI on Impermax at a given time

This is another great way to optimize your 0% loan on Mai Finance. Not only do you not have to pay anything to borrow MAI, you can also earn a lot of interest just by depositing it on Impermax. Everything in this tutorial is purely educational. The goal is to shine a light on projects that I think are worthy for people evolving in the crypto world on Polygon. I obviously didn't talk about Mai Finance as a farm because a dedicated tutorial will be written very soon. Finally, this guide is ABSOLUTELY NOT meant to be applied as is; it is not financial advice and you should not blindly follow what I wrote. Please read the docs of the different projects I mentioned before considering investing on their platforms.
Simplify the following expressions and then check your answers with a scientific calculator.

1. \left(16 + 2\right) · \left(−3−1\right) = 18\left(-4\right) = -72

2. −6 + 7 · 5 = 29

3. 4\left(−6\right) − \left(−11\right) + 6\left(1 − 5\right) = -24 + 11 - 24 = -37

Circling terms should make the last one easier: \enclose{circle}[mathcolor="blue"]{\color{black}{4\left(-6\right)}}-\enclose{circle}[mathcolor="blue"]{\color{black}{\left(-11\right)}}+\enclose{circle}[mathcolor="blue"]{\color{black}{6\left(1-5\right)}}
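The order of operations in these exercises (multiply inside each term first, then combine terms) can be verified with a few lines of Python, since the language follows the same precedence rules:

```python
# Order-of-operations check for the three exercises above.
a = (16 + 2) * (-3 - 1)             # 18 * (-4) = -72
b = -6 + 7 * 5                      # multiplication first: -6 + 35 = 29
c = 4 * (-6) - (-11) + 6 * (1 - 5)  # term by term: -24 + 11 - 24 = -37
print(a, b, c)                      # -72 29 -37
```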
High School Mathematics Extensions/Supplementary/Differentiation - Wikibooks, open books for an open world

High School Mathematics Extensions/Supplementary/Differentiation

Contents
1 Differentiate from first principle (otherwise known as differentialisation)
1.7 Differentiating f(z) = (1 - z)^n
1.7.1 Derivation 1
1.8 Differentiation technique
1.9 Differentiation applied to generating functions

Differentiate from first principle (otherwise known as differentialisation)[edit | edit source]

This section and the *differentiation technique* section can be skipped if you are already familiar with calculus/differentiation. In calculus, differentiation is a very important operation applied to functions of real numbers. To differentiate a function f(x), we simply evaluate the limit

{\displaystyle \lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}}

where {\displaystyle \lim _{h\to 0}} means that we let h approach 0. However, for now, we can simply think of it as putting h to 0, i.e., letting h = 0 at an appropriate time. There are various notations for the result of differentiation (called the derivative); for example,

{\displaystyle f'(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}}

and

{\displaystyle {\frac {dy}{dx}}=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}}

mean the same thing. We say f'(x) is the derivative of f(x). Differentiation is useful for many purposes, but we shall not discuss why calculus was invented, but rather how we can apply calculus to the study of generating functions. It should be clear that if

{\displaystyle g(x)=f(x)}

then

{\displaystyle g^{\prime }(x)=f^{\prime }(x)}

The above law is important. If g(x) is a closed form of f(x), then it is valid to differentiate both sides to obtain a new generating function. Also, if

{\displaystyle h(x)=g(x)+f(x)}

then

{\displaystyle h^{\prime }(x)=g^{\prime }(x)+f^{\prime }(x)}

This can be verified by looking at the properties of limits.
Differentiate from first principle f(x) where {\displaystyle f(x)=x^{2}}. Firstly, we form the difference quotient

{\displaystyle f^{\prime }(x)=\lim _{h\to 0}{\frac {(x+h)^{2}-x^{2}}{h}}}

We can't set h to 0 to evaluate the limit at this point. Can you see why? We need to expand the quadratic first:

{\displaystyle =\lim _{h\to 0}{\frac {x^{2}+2xh+h^{2}-x^{2}}{h}}}

{\displaystyle =\lim _{h\to 0}{\frac {2xh+h^{2}}{h}}}

We can now factor out the h to obtain

{\displaystyle \lim _{h\to 0}2x+h}

from where we can let h go to zero safely to obtain the derivative, 2x. So

{\displaystyle f^{\prime }(x)=2x}

i.e.

{\displaystyle (x^{2})'=2x}

Differentiate from first principles, p(x) = x^n. We start from the difference quotient:

{\displaystyle p'(x)=\lim _{h\to 0}{\frac {(x+h)^{n}-x^{n}}{h}}}

By the binomial theorem, we have:

{\displaystyle =\lim _{h\to 0}{\frac {1}{h}}(x^{n}+nx^{n-1}h+...+h^{n}-x^{n})}

The first x^n cancels with the last, to get

{\displaystyle =\lim _{h\to 0}{\frac {1}{h}}(nx^{n-1}h+...+h^{n})}

Now, we bring the constant 1/h inside the brackets

{\displaystyle =\lim _{h\to 0}nx^{n-1}+...+h^{n-1}}

and the result falls out:

{\displaystyle =nx^{n-1}}

That is, for {\displaystyle p(x)=x^{n}},

{\displaystyle p'(x)=nx^{n-1}}

As you can see, differentiating from first principles involves working out the derivative of a function through algebraic manipulation, and for that reason this section is algebraically very difficult.
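The difference quotient itself can be evaluated numerically: for small nonzero h it should sit close to the derivative we derived algebraically. A quick sketch (the helper name is illustrative):

```python
def difference_quotient(f, x, h=1e-6):
    # the quotient (f(x+h) - f(x)) / h with a small, nonzero h
    return (f(x + h) - f(x)) / h

square = lambda x: x * x
# For f(x) = x^2 the quotient is exactly 2x + h, so at x = 3 it is 6 + h,
# which approaches the derivative 2x = 6 as h shrinks.
print(difference_quotient(square, 3.0))
```

This is only a numerical illustration of the limit; the algebraic cancellation above is what actually lets us set h to 0.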
Assume that if {\displaystyle h(x)=f(x)+g(x)} then {\displaystyle h^{\prime }(x)=f^{\prime }(x)+g'(x)}. Differentiate {\displaystyle x^{2}+x^{5}}.

Solution. Let {\displaystyle h(x)=x^{2}+x^{5}}. Then {\displaystyle h'(x)=2x+5x^{4}}.

Also, if {\displaystyle g(x)=Af(x)} then {\displaystyle g^{\prime }(x)=Af^{\prime }(x)}, since

{\displaystyle {\begin{matrix}g(x)&=&Af(x)\\\\g'(x)&=&\lim _{h\to 0}{\frac {A}{h}}(f(x+h)-f(x))\\\\&=&A\lim _{h\to 0}{\frac {1}{h}}(f(x+h)-f(x))\\\\&=&Af'(x)\end{matrix}}}

Differentiate from first principle

{\displaystyle {\begin{matrix}f(x)={\frac {1}{1-x}}\end{matrix}}}

{\displaystyle {\begin{matrix}f'(x)&=&\lim _{h\to 0}{\frac {1}{h}}({\frac {1}{1-(x+h)}}-{\frac {1}{1-x}})\\\\&=&\lim _{h\to 0}{\frac {1}{h}}({\frac {1-x-(1-(x+h))}{(1-(x+h))(1-x)}})\\\\&=&\lim _{h\to 0}{\frac {1}{h}}({\frac {h}{(1-(x+h))(1-x)}})\\\\&=&\lim _{h\to 0}{\frac {1}{(1-(x+h))(1-x)}}\\\\&=&{\frac {1}{(1-x)^{2}}}\end{matrix}}}

Exercises. Differentiate from first principle:
1. {\displaystyle f(z)=z^{2}}
2. {\displaystyle f(z)=(1-z)^{2}}
3. {\displaystyle f(z)={\frac {1}{(1-z)^{2}}}}
4. {\displaystyle f(z)=(1-z)^{3}}
5. Prove the result assumed in example 3 above, i.e. if f(x)=g(x)+h(x) then f′(x)=g′(x)+h′(x). Hint: use limits.

Differentiating f(z) = (1 - z)^n[edit | edit source]

We aim to derive a vital result in this section, namely, to derive the derivative of {\displaystyle f(z)=(1-z)^{n}} where n ≥ 1 and n an integer. We will show a number of ways to arrive at the result.
Derivation 1

$$f(z) = (1-z)^n$$

Expand the right-hand side using the binomial theorem:

$$f(z) = 1 - \binom{n}{1}z + \binom{n}{2}z^2 + \dots + (-1)^n z^n$$

Differentiate term by term:

$$f'(z) = -\binom{n}{1} + \binom{n}{2}\,2z + \dots + (-1)^n n z^{n-1}$$

Now we use

$$\binom{n}{i} = \frac{n!}{i!(n-i)!}$$

to get

$$f'(z) = -\frac{n!}{1!(n-1)!} + \frac{n!}{2!(n-2)!}\,2z + \dots + (-1)^n n z^{n-1}$$

and after some cancellation

$$f'(z) = -\frac{n!}{1!(n-1)!} + \frac{n!}{1!(n-2)!}z + \dots + (-1)^n n z^{n-1}$$

Take out a common factor of -n, recalling that 1! = 0! = 1, to get

$$f'(z) = -n\left(1 - \frac{(n-1)!}{1!(n-2)!}z + \dots + (-1)^{n-1}z^{n-1}\right)$$

But the bracket is just the binomial expansion of (1 - z)^{n-1}, so

$$f'(z) = -n(1-z)^{n-1}$$

Derivation 2

Similar to Derivation 1, but we use instead the definition of the derivative:

$$f'(z) = \lim_{h\to 0}\frac{(1-(z+h))^n - (1-z)^n}{h}$$

$$f'(z) = \lim_{h\to 0}\frac{\sum_{i=0}^{n}\binom{n}{i}(-1)^i (z+h)^i - \sum_{i=0}^{n}\binom{n}{i}(-1)^i z^i}{h}$$

$$f'(z) = \lim_{h\to 0}\frac{\sum_{i=0}^{n}\binom{n}{i}(-1)^i \left((z+h)^i - z^i\right)}{h}$$

Take the limit inside the sum (recall that [Af(x)]' = Af'(x)):

$$f'(z) = \sum_{i=0}^{n}\binom{n}{i}(-1)^i \lim_{h\to 0}\frac{(z+h)^i - z^i}{h}$$

The inner limit is just the derivative of z^i:

$$f'(z) = \sum_{i=1}^{n}\binom{n}{i}(-1)^i\, i z^{i-1}$$

Exactly as in Derivation 1, this simplifies to

$$f'(z) = -n(1-z)^{n-1}$$

Example: differentiate (1 - z)^2.

Solution 1: f(z) = (1 - z)^2 = 1 - 2z + z^2, so f'(z) = -2 + 2z = -2(1 - z).

Solution 2: By the result derived above, f'(z) = -2(1 - z)^{2-1} = -2(1 - z).

Exercises. Imitate the method used above or otherwise, differentiate:

1. (1 - z)^3
2. (1 + z)^2
4.
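The result of both derivations can be spot-checked numerically for several values of n; a minimal sketch (helper names and sample point are ours):

```python
# Compare a symmetric difference quotient for f(z) = (1-z)^n against
# the derived formula f'(z) = -n * (1-z)^(n-1).

def derivative(f, z, h=1e-6):
    return (f(z + h) - f(z - h)) / (2 * h)

z = 0.3
for n in range(1, 6):
    numeric = derivative(lambda t, n=n: (1 - t) ** n, z)
    exact = -n * (1 - z) ** (n - 1)
    print(n, numeric, exact)  # the two columns agree to ~10 digits
```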
(Harder) 1/(1 - z)^3 (Hint: use the definition of the derivative.)

Differentiation technique

We will now differentiate functions of the form

$$f(z) = \frac{1}{g(z)}$$

i.e. reciprocals of functions. We proceed by the definition of differentiation:

$$\begin{aligned} f'(z) &= \lim_{h\to 0}\frac{1}{h}\left(\frac{1}{g(z+h)} - \frac{1}{g(z)}\right)\\ &= \lim_{h\to 0}\frac{1}{h}\cdot\frac{g(z) - g(z+h)}{g(z+h)g(z)}\\ &= \lim_{h\to 0}\frac{g(z+h) - g(z)}{h}\cdot\frac{-1}{g(z+h)g(z)}\\ &= g'(z)\cdot\frac{-1}{g(z)g(z)}\\ &= -\frac{g'(z)}{g(z)^2} \end{aligned}$$

For comparison, differentiating the geometric series term by term gives

$$\frac{1}{1-z} = 1 + z + z^2 + z^3 + \dots$$

$$\left(\frac{1}{1-z}\right)' = 1 + 2z + 3z^2 + \dots$$

Applying the rule

$$\left(\frac{1}{g}\right)' = -\frac{g'}{g^2}$$

where g is a function of z, with g(z) = 1 - z we get

$$\frac{1}{(1-z)^2} = 1 + 2z + 3z^2 + \dots$$

which confirms the result derived using a counting argument.

Exercises. Differentiate:

1. 1/(1 - z)^2
3. 1/(1 + z)^3
4. Show that (1/(1 - z)^n)' = n/(1 - z)^{n+1}

Differentiation applied to generating functions

Now that we are familiar with differentiation from first principles, we should consider the generating function

$$f(x) = \frac{1}{1-x^2}$$

From the geometric series,

$$\frac{1}{1-x^2} = 1 + x^2 + x^4 + x^6 + \dots$$

Differentiating both sides (the left-hand side by the reciprocal rule),

$$\left(\frac{1}{1-x^2}\right)' = 2x + 4x^3 + 6x^5 + \dots$$

$$\frac{2x}{(1-x^2)^2} = 2x\left(1 + 2x^2 + 3x^4 + \dots\right)$$

and dividing both sides by 2x,

$$\frac{1}{(1-x^2)^2} = 1 + 2x^2 + 3x^4 + \dots$$

Note that we can obtain the above result by the substitution method as well: in

$$\frac{1}{(1-z)^2} = 1 + 2z + 3z^2 + \dots$$

letting z = x^2 gives the required result. The above example demonstrates that we need not concern ourselves with difficult differentiations. Rather, to get the results the easy way, we need only differentiate the basic forms and apply the substitution method.
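The series identity for 1/(1 - x^2)^2 can be verified by multiplying truncated coefficient lists; a short sketch (the helper `mul` and the truncation length are ours):

```python
# Represent a power series by its coefficient list [c0, c1, c2, ...] and
# multiply two truncated series to confirm
# 1/(1-x^2)^2 = 1 + 2x^2 + 3x^4 + ...

def mul(a, b, terms):
    out = [0] * terms
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < terms:
                out[i + j] += ai * bj
    return out

terms = 10
geom = [1 if k % 2 == 0 else 0 for k in range(terms)]  # 1/(1-x^2) = 1 + x^2 + x^4 + ...
sq = mul(geom, geom, terms)                            # 1/(1-x^2)^2
print(sq)  # [1, 0, 2, 0, 3, 0, 4, 0, 5, 0]
```

The coefficient of x^(2m) comes out as m + 1, matching the substitution z = x^2 in 1/(1 - z)^2 = 1 + 2z + 3z^2 + ....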
By basic forms we mean generating functions of the form

$$\frac{1}{(1-x)^n}$$

Let's consider the number of solutions to

$$a_1 + a_2 + a_3 + \dots + a_n = m$$

for a_i ≥ 0, i = 1, 2, ..., n. We know that for any m, the number of solutions is the coefficient of x^m in

$$(1 + x + x^2 + \dots)^n = \frac{1}{(1-x)^n}$$

Start from the geometric series

$$\frac{1}{1-x} = 1 + x + x^2 + \dots + x^n + \dots$$

Differentiate both sides (note that 1 = 1!):

$$\frac{1!}{(1-x)^2} = 1 + 2x + 3x^2 + \dots + nx^{n-1} + \dots$$

Differentiate again:

$$\frac{2!}{(1-x)^3} = 2 + 2\times 3\,x + \dots + n(n-1)x^{n-2} + \dots$$

and so on, (n - 1) times in total:

$$\frac{(n-1)!}{(1-x)^n} = (n-1)! + \frac{n!}{1!}x + \frac{(n+1)!}{2!}x^2 + \frac{(n+2)!}{3!}x^3 + \dots$$

Divide both sides by (n - 1)!:

$$\frac{1}{(1-x)^n} = 1 + \frac{n!}{(n-1)!\,1!}x + \frac{(n+1)!}{(n-1)!\,2!}x^2 + \frac{(n+2)!}{(n-1)!\,3!}x^3 + \dots$$

so the coefficient of x^m is the binomial coefficient (n - 1 + m choose m), which confirms the result derived using a counting argument.
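The coefficient formula can be confirmed directly: multiplying a truncated series by 1/(1 - x) is just taking prefix sums of its coefficients. A sketch (the helper name and parameters are ours):

```python
from math import comb

def inv_one_minus_x_pow(n, terms):
    # Coefficients of 1/(1-x)^n, built by multiplying 1/(1-x) by itself
    # (n-1) times; each multiplication is a running prefix sum.
    coeffs = [1] * terms          # 1/(1-x) = 1 + x + x^2 + ...
    for _ in range(n - 1):
        for k in range(1, terms):
            coeffs[k] += coeffs[k - 1]
    return coeffs

n, terms = 4, 8
print(inv_one_minus_x_pow(n, terms))              # [1, 4, 10, 20, 35, 56, 84, 120]
print([comb(n - 1 + k, k) for k in range(terms)])  # the same list
```

The two printed lists agree, matching the stars-and-bars count of solutions to a_1 + ... + a_n = m.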
Physics - Changing repulsion into attraction with the quantum Hippy Hippy Shake

Department of Physics, Georgetown University, Washington, DC 20057, USA

A quick-acting ac field may change the repulsive interaction in a system of correlated electrons to an attractive one.

Figure 1: Schematic of how different fields affect the low-temperature equilibrium distribution of electrons in a Mott insulator (left). When driven by a dc field, the density of states A(ω) changes shape, but the occupancy N(ω) is similar to that of an equilibrium occupation at a positive temperature (top right). When driven by a high-frequency ac field, the interaction strength is renormalized, and the system can be at a negative temperature (bottom right).

In a paper in Physical Review Letters [1], Naoto Tsuji, Takashi Oka, and Hideo Aoki, all at the University of Tokyo, and Philipp Werner at ETH Zurich, show that a system of electrons that initially repel each other while in equilibrium at some temperature can be changed to one where the repulsive interactions are replaced by attractive ones. This change occurs when the system is driven by an electric field that is constant in space but oscillates rapidly in time. The phenomenon arises from the generic behavior of correlated quantum particles that are driven in an oscillatory way on a periodic lattice. Correlated motion occurs when the movement of one particle depends on the behavior of others. 
The concept is familiar to children who play with puzzles like the famous “fifteen” puzzle involving moving pieces in a square with one empty space. The aim is to organize the squares in a particular order, but one needs to interchange pieces without destroying the order that has already been established. A more complicated example is the Rubik’s cube; same principle but in three dimensions. Quantum particles with strong particle-particle interactions exhibit correlated motion. Much of the framework for solving these problems for a system in equilibrium at a given temperature comes from work done in the 1940s and 1950s, culminating in the classic text by Abrikosov, Gorkov, and Dzyaloshinskii [2]. In the 1960s, new developments by Baym and Kadanoff [3] and by Keldysh [4] showed how to extend these formal developments to solve the nonequilibrium many-body problem. It turned out that the resulting equations were too complex to use except in some approximate fashion. This impasse lingered for many years. In the 1990s it became possible to study experimental systems in nonequilibrium, typically in semiconductor systems. This is when Bloch oscillations were observed in superlattice structures [5] and nonequilibrium transport was measured in quantum dots [6]. More recently, in the past decade, we have seen frenetic activity in nonequilibrium experiments, ranging from pump-probe photoemission or reflectivity experiments [7], to virtually all experiments in ultracold atomic systems trapped in optical lattices [8]. There have been experiments performed in ion traps where a simple quantum state slowly evolves into a complicated correlated one with directly measurable properties [9]—Feynman’s analog quantum computer [10]. Theory has lagged behind experiment in this recent revolution, especially for quantum problems which involve fermions (such as electrons). 
This is because the most successful numerical techniques like quantum Monte Carlo simulations often develop errors due to what is called the fermion sign problem, which makes it impossible to perform calculations at low temperatures. Progress has, however, been made in one-dimensional problems, where a technique that systematically solves increasingly larger systems using matrix diagonalization methods has been employed to describe the nonequilibrium evolution of many different quantum states [11]. In 2006, dynamical mean-field theory, an extremely useful tool for solving the equilibrium many-body problem starting from the limit of large numbers of spatial dimensions, was generalized to solve nonequilibrium problems [12]. This technique has subsequently been applied to a large number of different problems. The work of Tsuji et al. is within the framework of the nonequilibrium dynamical mean-field theory, but comes with some surprising results. They find that the main effect of the driving field is to renormalize the energy scale for motion between nearest-neighbor sites on the lattice (which is called the hopping integral). When this hopping integral changes sign, they find that the system can be described by an effective negative temperature, or equivalently, by a positive temperature but with the repulsive interactions changed to attractive. What is most surprising is that a nonequilibrium system can be described by such a simple picture involving only a change of scale in the energy and possibly a change of interactions from repulsive to attractive. Most nonequilibrium problems are much more complicated to describe. Take for example the problem of Bloch oscillations [13]. A constant electric field accelerates noninteracting electrons on a periodic lattice. But an electron cannot move too far before its wave vector reaches the boundary of the Brillouin zone and Bragg reflects it to an opposite side. 
The velocity of the electron changes sign during this reflection. Changing directions with each reflection, the electron eventually moves in a periodic fashion. The frequency of this motion is called the Bloch frequency, which is proportional to the magnitude of the electric field. The quantum mechanical states are completely rearranged by this motion. Instead of a continuous band of eigenstates, the system has a density of states given by a series of weighted delta functions separated by the Bloch frequency. The occupation of these states changes periodically. If interactions are added to the system, then the delta functions broaden, but also split into Mott-Hubbard subbands, with a splitting of the peaks of the subbands given by the strength of the electron-electron interaction. This is an example of a system that cannot be described by a simple rescaling of energy scales. A very different picture arises in the case of noninteracting electrons driven by an electric field that oscillates with a high frequency but remains uniform in space. To a very good approximation, the main effect of the driving field is to multiply the hopping integral by a factor that can be either positive or negative. If the factor equals zero, then the particles cannot hop to nearest-neighbor sites and the system exhibits dynamical localization [14]. If the hopping integral changes sign, then the band inverts, and the states around zero momentum (the lowest energy states before the field is turned on) become the highest energy states. For large frequencies, electrons with definite momentum oscillate rapidly about their momentum values, but do not evolve across the Brillouin zone like they do for a constant field. Hence the highest energy states of the system are occupied; we have a negative temperature, or a population inversion. We can describe this population inversion as an equilibrium distribution at positive temperature of a system described by the negated Hamiltonian, -H. 
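For a sinusoidal drive, the sign-changing renormalization factor described here is commonly written as a Bessel function J0 of the drive amplitude-to-frequency ratio (the Dunlap-Kenkre result cited as Ref. [14]). A stdlib-only sketch, in which the drive parameter values and the bare hopping are illustrative, not taken from the paper:

```python
from math import cos, sin, pi

def bessel_j0(x, steps=10_000):
    # J0(x) = (1/pi) * integral_0^pi cos(x * sin(theta)) d(theta),
    # evaluated with a simple midpoint rule.
    h = pi / steps
    return sum(cos(x * sin((k + 0.5) * h)) for k in range(steps)) * h / pi

t_hop = 1.0  # bare hopping integral, illustrative units
for A in (0.0, 2.4048, 4.0):
    print(A, t_hop * bessel_j0(A))
# J0(0) = 1 (no renormalization); J0 vanishes near A = 2.4048,
# giving dynamical localization; J0(4.0) < 0, giving band inversion.
```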
Tsuji et al. focus on the interacting case. If the particle-particle interactions are not modified by the driving field (except by the overall renormalization of energy scales), inverting the sign of all of the energies that describe the system in an equilibrium distribution with a positive temperature changes the sign of the interaction. In other words, the driving field changes the system from a repulsive to an attractive one. Not too surprisingly, the authors find that whenever the oscillating field is turned on rapidly, this effective temperature of the system increases in magnitude. A schematic is given in Fig. 1. Using a nonequilibrium quantum Monte Carlo simulation, Tsuji et al. show that this simple picture describes their system quite accurately. They examine the system by suddenly turning on a high-frequency electric field. They compare this to what happens when they suddenly change the interaction strength. While there are often additional oscillations in the system driven by the electric field, the time-averaged quantities (averaged over the period of the electric field) appear to be very close to the results one gets by rapidly changing the value of the interaction. Tsuji et al. argue that these nonequilibrium configurations will remain stable if the system is isolated from the environment because the energy is conserved, and hence cannot relax. This point of view is clearer for the attractive model, which is described by an effective equilibrium distribution. Relaxation can occur if the system is coupled to an environment that allows for energy exchange. The authors argue that it might be easiest to see these effects in ultracold atomic systems trapped in optical lattices. One needs to understand the effect of the trap on this phenomenon, as it will switch signs from positive to negative curvature leading to additional effects. In any case, the future is bright for new insights into the nonequilibrium many-body problem. 
I acknowledge support of the ARO grant W NF with funds from the DARPA OLE Program, the AFOSR MURI program grant FA - - - , the NSF under grant number DMR- , the INDO-US Science and Technology Forum JC- - Ultracold atoms, and the McDevitt endowment trust at Georgetown University.

References

[1] N. Tsuji, T. Oka, P. Werner, and H. Aoki, Phys. Rev. Lett. 106, 236401 (2011)
[2] A. A. Abrikosov, L. P. Gorkov, and I. Y. Dzyaloshinskii, Quantum Field Theoretical Methods in Statistical Physics, International Series of Monographs in Natural Philosophy Vol. 4 (Pergamon Press, Oxford, 1965)
[3] L. P. Kadanoff and G. Baym, Quantum Statistical Mechanics (W. A. Benjamin, Inc., New York, 1962)
[4] L. V. Keldysh, J. Exptl. Theor. Phys. 47, 1515 (1964); Sov. Phys. JETP 20, 1018 (1965)
[5] C. Waschke, H. G. Roskos, R. Schwedler, K. Leo, H. Kurz, and K. Köhler, Phys. Rev. Lett. 70, 3319 (1993)
[6] D. Goldhaber-Gordon, H. Shtrikman, D. Mahalu, D. Abusch-Magder, U. Meirav, and M. A. Kastner, Nature 391, 156 (1998)
[7] L. Perfetti et al., Phys. Rev. Lett. 97, 067402 (2006); F. Schmitt et al., Science 321, 1649 (2008)
[8] See, for example, I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008)
[9] K. Kim et al., Nature 465, 590 (2010)
[10] R. Feynman, Int. J. Theor. Phys. 21, 467 (1982)
[11] See, for example, U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005)
[12] J. K. Freericks, V. M. Turkowski, and V. Zlatić, Phys. Rev. Lett. 97, 266408 (2006)
[13] F. Bloch, Z. Phys. 52, 555 (1928); C. Zener, Proc. R. Soc. London A 145, 523 (1934)
[14] D. H. Dunlap and V. M. Kenkre, Phys. Rev. B 34, 3625 (1986); M. Holthaus, Phys. Rev. Lett. 69, 351 (1992)

Jim Freericks is the Robert L. McDevitt, K.S.G., K.C.H.S. and Catherine H. McDevitt, L.C.H.S. Chair and Professor of Physics at Georgetown University, where he has been since 1994. 
He received his undergraduate degree from Princeton University, his graduate degree from the University of California, Berkeley, and did postdoctoral fellowships at the Institute for Theoretical Physics at the University of California, Santa Barbara, and at the University of California, Davis. He is a fellow of the American Physical Society and has written a graduate level text on transport in multilayered nanostructures which won the 2009 Alpha Sigma Nu award for the best science book. He is a co-developer of nonequilibrium dynamical mean-field theory. Dynamical Band Flipping in Fermionic Lattice Systems: An ac-Field-Driven Change of the Interaction from Repulsive to Attractive Naoto Tsuji, Takashi Oka, Philipp Werner, and Hideo Aoki
Force and Laws of Motion - Popular Questions (CBSE Class 9) - Meritnation

- Give 7 examples of inertia of rest with explanation.
- What are the forces acting on a moving train?
- How do you find the least count of a spring balance?
- Define acceleration and give its SI unit. When is the acceleration of a body negative? Give two examples of situations in which the acceleration of the body is negative.
- Give examples of inertia of direction with explanation.
- Which of the following situations involves Newton's second law of motion? (a) a force can stop a lighter vehicle as well as a heavier vehicle which are moving (b) a force can accelerate a lighter vehicle more easily than a heavier vehicle that is moving (c) a force exerted by a lighter vehicle on collision with a heavier vehicle results in both vehicles coming to a halt (d) the force exerted by the escaping air from a balloon in the downward direction makes the balloon go upwards.
- How many types of inertia are there? Name them.
- Explain this activity with observation and conclusion: set a five-rupee coin on a stiff card covering an empty glass tumbler standing on a table, as shown in the figure. Give the card a sharp horizontal flick with a finger. If done fast, the card shoots away, allowing the coin to fall vertically into the glass tumbler due to its inertia.
- Draw the momentum-mass graph when the velocity of the body is kept constant.
- What is the formula for the recoil velocity of a gun?
- (1) Is a marble rolling down an inclined plane moving with constant velocity? Explain. (2) What can you say about the speed of a moving object if no force is acting on it? (3) Two forces of 5 N and 22 N are acting on a body in the same direction; what will be the resultant force, and in which direction will it act? If the two forces in the above example had been acting in opposite directions, what would the resultant force be, and in which direction would it act? (4) If we push a box with a small force, the box does not move. Why? (5) What should be the force acting on an object moving with uniform velocity?
- Explain why a cricketer moves his hands backwards while catching a fast-moving cricket ball.
- A body is acted upon by a constant force directed towards a fixed point. The magnitude of the force varies inversely as the square of the distance from the fixed point. What is the nature of the path?
- Please explain Newton's third law of motion.
- (1) A car of mass 1000 kg and a bus of mass 8000 kg are moving with the same velocity of 36 km/h. Find the forces needed to stop both the car and the bus in 5 s. (2) A mechanic strikes a nail with a hammer of mass 500 g moving with a velocity of 20 m/s. The hammer comes to rest in 0.02 s after striking the nail. Calculate the force exerted by the nail on the hammer. (3) A bullet of mass 10 g travelling with a velocity of 100 m/s penetrates a wooden plank and is brought to rest in 0.01 s. Find (a) the distance through which the bullet penetrates the wooden plank and (b) the force exerted on the bullet.
- If a man jumps out from a boat, the boat moves backwards. Why?
- A force of 20 N towards east is balanced by an unknown force F. What will be the magnitude and direction of the unknown force?
- Derive the equation for the conservation of momentum.
- A box of mass 400 kg rests on the carrier of a truck that is moving at a speed of 120 km/h. The driver applies the brakes and slows to a speed of 60 km/h in 20 s. Find the constant acceleration generated upon the box.
- (1) The minute hand of a clock is 7 cm long. Calculate the distance covered and the displacement of the minute hand of the clock from 9.00 AM to 9.30 AM. (2) An athlete completes a round of a circular track of diameter 200 m in 20 s. Calculate the distance travelled by the athlete. (3) A boy is running on a straight road. He runs 500 m towards north in 2 minutes 10 seconds and then turns back and runs 200 m in 1 minute. Calculate his average speed and the magnitude of his average velocity during the whole journey. (4) Akshil drove his car at a speed of 20 km/h while going to his college. When he returned home along the same route, the speed of the car was 30 km/h. Calculate the average speed of the car during the entire journey.
- A horizontal force equal to the weight of a body moves it from rest. What is the acceleration produced in it?
- Name and define three different types of inertia and give an example of each.
- Why does a moving ship take a longer time to stop compared to a car when equal brakes are applied?
- Define force and write its SI unit.
- From the velocity-time graph, calculate: (1) the deceleration in region AB (2) the acceleration in region BC (3) the total distance travelled in region ABC (4) the average velocity between 10 s and 30 s.
- From a rifle of mass 4 kg, a bullet of mass 50 g is fired with an initial velocity of 35 m/s. Calculate the initial recoil velocity of the rifle.
- A bullet of mass 10 g moving with a velocity of 400 m/s gets embedded in a freely suspended wooden block of mass 700 g. What is the velocity acquired by the block?
- A boy weighing 30 kg is riding a bicycle weighing 50 kg. If the bicycle is moving at a speed of 9 km/h towards the west, find the linear momentum of the bicycle-boy system in SI units.
- Give examples of Newton's second law of motion.
- With the help of an example, explain controlled and uncontrolled motion.
- Why does the speed of an object change with time?
- A body of mass 5 kg is moving with a velocity of 10 m/s. A force is applied on it so that in 25 s it attains a velocity of 35 m/s. Calculate the value of the force applied.
- A car running at a speed of 72 km/h is slowed down to 18 km/h over a distance of 40 m. Calculate: (1) the retardation produced by its brakes (2) the time for which the brakes are applied.
- What are all the formulas in this chapter (Force and Laws of Motion)?
- Explain Galileo's experiment. (4 marks)
- On what factors does the inertia of a body depend?
- Explain how each of Newton's laws affects a game of tug of war.
- Two resistors of 5 kΩ and 7 kΩ are connected in series across a 15 V battery; what should the current flowing through each resistance be?
- What is the real definition of the second law of motion?
- What is the momentum of a man of mass 75 kg when he walks with a uniform velocity of 2 m/s?
- Two bodies of mass 4 kg and 6 kg are attached to the ends of a string passing over a pulley. The 4 kg mass is attached to the table top by another string. Find the tension in the string T1. (g = 9.8 m/s²)
- Can two forces acting along perpendicular directions be balanced?
- Explain how a karate player breaks a slab of ice with a single blow.
- A metal ball and a rubber ball of the same mass are dropped from the same height. After hitting the floor, the rubber ball rises higher than the metal ball. Why? (A) Momentum is not conserved when the rubber ball hits the floor. (B) Momentum is not conserved when the metallic ball hits the floor. (C) The rubber ball hits the floor with greater velocity. (D) There is a greater change in momentum in the case of the rubber ball.
- A 40 kg skater moving at 4 m/s eastwards collides head-on with a 60 kg skater travelling at 3 m/s westwards.
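Two of the numerical questions above (the 5 kg body gaining speed and the braking car) can be solved with F = ma and the kinematic relations v² = u² - 2as and v = u - at; a worked sketch:

```python
# 1) A 5 kg body accelerates from 10 m/s to 35 m/s in 25 s.
m, u, v, t = 5.0, 10.0, 35.0, 25.0
force = m * (v - u) / t
print(force)  # 5.0 N

# 2) A car slows from 72 km/h to 18 km/h over 40 m.
u_car = 72 * 1000 / 3600   # 20.0 m/s
v_car = 18 * 1000 / 3600   # 5.0 m/s
s = 40.0
a = (u_car**2 - v_car**2) / (2 * s)   # retardation, from v^2 = u^2 - 2as
t_brake = (u_car - v_car) / a          # braking time, from v = u - at
print(a, t_brake)  # 4.6875 m/s^2, 3.2 s
```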
If the two skaters remain in contact, what is their final velocity?
- Explain the recoiling of a gun with the third law of motion.
- A bead of mass m is attached to one end of a spring of natural length R and spring constant k = (√3 + 1)mg/R. The other end of the spring is attached to a point A on a vertical ring of radius R, making an angle of 30 degrees with its circumference. Find the normal reaction at the bead when the spring is allowed to move from point A.
- So action-reaction forces produce the same magnitude of acceleration? Justify.
- A stone of 1 kg is thrown with a velocity of 20 m/s across the frozen surface of a lake and comes to rest after travelling a distance of 50 m. What is the force of friction between the stone and the ice?
- A hunter has a machine gun that can fire 50 g bullets with a velocity of 150 m/s. A 60 kg tiger springs at him with a velocity of 10 m/s. How many bullets must the hunter fire per second into the tiger in order to stop him in his tracks?
- Does velocity depend upon the mass of the particle? Explain.
- Derive Newton's first law of motion from the second law.
- Why does one get hurt seriously while jumping onto a hard floor?
- Why are shock absorbers provided in automobile vehicles?
- An object weighs 840 grams in air and 680 grams when fully immersed in a liquid. What is the buoyant force on the object due to the liquid?
- A bullet of mass 20 g is fired horizontally with a velocity of 150 m/s from a pistol of mass 2 kg. What is the recoil velocity of the pistol?
- From the given displacement-time graph, calculate: (1) the velocity between 0-4 s (2) the velocity between 4-6 s (3) the velocity between 6-9 s (4) the average velocity between 0-4 s, 0-6 s, and 0-9 s.
- Find the initial velocity of a car which is stopped in 10 s by applying brakes. The retardation due to the brakes is 2.5 m/s².
- Define: inertia.
- A train is moving with acceleration along a straight line with respect to the ground. A person in the train finds that: (A) Newton's second law is false and the third law is true (B) Newton's third law is false but the second law is true (C) all laws are false (D) all laws are true.
- Do all motions require a cause?
- How much momentum will a dumbbell of mass 10 kg transfer to the floor if it falls from a height of 80 cm? Take its downward acceleration to be 10 m/s².
- A gardener waters the plants with a pipe of diameter 1 mm. The water comes out of the pipe at the rate of 10 cm³/s. What is the reactionary force exerted on the hand of the gardener?
- Can someone please give the derivations of the equations of motion without graphs?
- What is the difference between magnitude and direction?
- If the mass of an object in free fall is doubled, its acceleration: (B) increases by a factor of four (C) stays the same (D) is cut in half.
- A bullet of mass 10 g is fired from a rifle with a velocity of 800 m/s. After passing through a mud wall 180 cm thick, the velocity drops to 100 m/s. Calculate the average resistance of the wall.
- In a spring balance, the space between the 0 and 25 gwt marks is divided into 10 equal parts. Find the least count and the range of the spring balance.
- A gunman gets a jerk on firing a bullet. Why?
- A ball of mass 200 g is thrown with a speed of 20 m/s. The ball strikes a bat and rebounds along the same line with a speed of 40 m/s. The variation of the interaction force for as long as the ball remains in contact with the bat is shown in the figure. What is the maximum force exerted by the bat on the ball?
- Action and reaction forces do not balance each other. Why?
- Place a water-filled tumbler on a tray. Hold the tray and turn around as fast as you can. Why does the water not spill?
- Give a few examples of balanced and unbalanced forces.
- Two carts A and B of mass 10 kg each are placed on a horizontal track. They are joined tightly by a light but strong rope C. A man holds cart A and pulls it towards the right with a force of 70 N. The total force of friction by the track and the air on each cart is 15 N, acting towards the left. Find: (a) the acceleration of the carts (b) the force exerted by the rope on cart B.
- Why is it difficult to balance our body when we accidentally step on a banana peel? Give reasons.
- What is a stationary body?
- NCERT Class 9 Science, Chapter 9 (Force and Laws of Motion), end-of-chapter exercise, question 7, part (b): why is the engine's mass not considered when calculating the acceleration of the train?
- An object of mass 100 kg is accelerated uniformly from a velocity of 5 m/s to 8 m/s in 6 s. Calculate the initial and final momentum of the object. Also, find the magnitude of the force exerted on the object.
- A door is 1 m wide. It can be closed by an effort of 25 N when the effort is applied at a distance of 0.4 m from the hinge. What effort is needed if it is applied at its extreme end?
- How can a karate player break a pile of tiles with a single blow of his hand?
- If three coplanar forces acting at the origin are in equilibrium, then which statement is correct?
- If you shake the branches of a tree, the fruits fall down. Why?
- Define inertia. On what factor does it depend? What are the different kinds of inertia? Give one example of each.
- A ball of mass 400 g is dropped from a height of 5 m. A boy on the ground hits the ball vertically upwards with a bat with a force of 100 N so that it attains a vertical height of 20 m. For what time does the ball remain in contact with the bat? (g = 10 m/s²)
- Show that Newton's first law of motion is contained in the second law.
- A given object takes n times as much time to slide down a 45-degree rough inclined plane as it takes to slide down a perfectly smooth 45-degree inclined plane. Prove that the coefficient of kinetic friction between the object and the incline is given by 1 - (1/n²).
- The maximum acceleration that can act on a 2.5 kg mass under the action of a 16 N force and a 4 N force is _____. How?
- When a carpet is beaten with a stick, dust comes out. Why?
- State Newton's second law of motion. Derive a mathematical expression for Newton's second law.
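The recoil questions in this list (the pistol, the rifle, and the skaters) all follow from conservation of momentum: total momentum before equals total momentum after. A worked sketch for the 20 g bullet fired from a 2 kg pistol:

```python
# Conservation of momentum: 0 = m_bullet*v_bullet + m_pistol*v_recoil,
# so v_recoil = -(m_bullet * v_bullet) / m_pistol.

m_bullet = 0.020   # kg (20 g)
v_bullet = 150.0   # m/s
m_pistol = 2.0     # kg

v_recoil = -(m_bullet * v_bullet) / m_pistol
print(v_recoil)  # -1.5 m/s (opposite to the bullet's direction)
```

The same one-line balance solves the rifle problem (4 kg, 50 g at 35 m/s gives a recoil of 0.4375 m/s) and, with two nonzero initial momenta, the colliding-skaters problem.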
Comments on tag 00SY - Kerodon

Comment #820 by DnlGrgk on December 10, 2020 at 19:46

There is a typo in the last bullet point of (c): it should read \operatorname{Fun}(\mathcal{S}, \mathcal{C}) and \operatorname{Fun}(\mathcal{C}, \mathcal{S}) (or the corresponding \operatorname{LFun} variants). Please delete my first two comments, and this one too.
Physics - Invention of the Maser and Laser
Invention of the Maser and Laser
Charles Townes' pair of papers on the first maser in 1954 and 1955 laid the foundation for the laser era.
Proud father: Charles Townes and his colleagues were the first to build a "maser," which operated in the microwave frequency range. It was the precursor of the laser.
The ubiquitous laser, appearing today in supermarket checkout machines, CD players, and eye surgeons' offices, developed out of the maser, which was first described in Physical Review papers published in 1954 and 1955. The maser (the name stands for "microwave amplification by stimulated emission of radiation") in turn depended on an insight that came from Albert Einstein almost 40 years earlier. But the path from theory to application was far from straightforward, and it took ingredients from many different disciplines for these theoretically simple devices to achieve practicality. After World War II, radar scientists looking for ways to generate electromagnetic radiation at wavelengths shorter than one centimeter began collaborating with physicists who wanted to use such radiation to investigate molecular structure. When atomic bonds inside a molecule flip between slightly different forms, they often absorb or emit centimeter- or millimeter-band radiation. Vacuum tubes and related devices, used in radar, are impractical for producing sub-centimeter-wavelength radiation. But in the early 1950s, Charles Townes, then at Columbia University in New York City, had the idea that molecules themselves would make good emitters of the desired wavelengths, if only he could persuade large numbers of molecules to emit en masse. Earlier research came to Townes' aid. Back in 1916, Albert Einstein had deduced theoretically the existence of stimulated emission, the process by which electromagnetic waves of the right frequency can "stimulate" an excited atom or molecule to fall to a lower energy state and emit more waves.
In 1947 Willis Lamb and Robert Retherford, also of Columbia, used stimulated emission to amplify the radiation emitted by hydrogen atoms in order to better measure the frequency of a specific atomic transition [1]. Townes was familiar with microwave engineering techniques and saw a way to go further. If he could assemble a population of excited molecules in a cavity with the right dimensions, radiation emitted by some of the molecules would reflect back and interact with other molecules, causing further stimulated emission. The feedback loop between the cavity and molecules would dramatically amplify the signal, he reasoned. Townes and his colleagues built the first maser in 1954. They sent a beam of excited ammonia molecules into a resonant cavity. Emission became self-sustaining as radiation from molecules in the cavity stimulated further radiation from the continuously renewed supply of excited molecules. Radiating at a wavelength of a little over one centimeter, the power of this first maser was tiny, some ten nanowatts. But the energy was concentrated in a spectacularly sharp line in the emission spectrum; in other words, the radiation was exceedingly uniform, consisting of a single wavelength with little contamination from other wavelengths. Many theorists had told Townes his device couldn't possibly work. Once it did, other researchers quickly replicated it and began inventing variations on it. In 1958 Townes and Arthur Schawlow of Bell Laboratories in New Jersey proposed a system that would work at infrared and optical wavelengths [2], but it wasn't until 1960 that the first light-emitting maser, which quickly became known as the laser, was constructed [3]. Townes shared the 1964 Nobel Prize in physics for his work on masers and lasers. Laser development later attracted legal wrangling as various groups fought over patents.
But Bernard Burke of the Massachusetts Institute of Technology, who remembers seeing the original maser at Columbia, says that Townes "wasn't interested in keeping it a secret. It was a nice example of the openness of science."
[1] W. E. Lamb Jr. and R. C. Retherford, "Fine Structure of the Hydrogen Atom by a Microwave Method," Phys. Rev. 72, 241 (1947)
[2] A. L. Schawlow and C. H. Townes, "Infrared and Optical Masers," Phys. Rev. 112, 1940 (1958)
[3] T. Maiman, "Stimulated Optical Radiation in Ruby," Nature (London) 187, 493 (1960)
Further reading: J. L. Bromberg, The Laser in America: 1950–1970 (MIT Press, 1991)
Neutral-atom qubits store information in their spin states. So far, most neutral-atom experiments have used alkali metals, for which the necessary trapping and cooling techniques are highly advanced. Alkali-metal atoms have a drawback, however: the electronic spin states used to store quantum information can be corrupted by the light field used for trapping the atoms. As an alternative, physicists have experimented with alkaline-earth atoms, which can store information more robustly in their nuclear spin states. This possibility has been demonstrated in strontium-87 (⁸⁷Sr), but the multiple spin states of this isotope's large nuclear spin make it difficult to use to implement a simple two-level qubit. In their demonstrations, Thompson, Kaufman, and their respective teams used ytterbium-171 (¹⁷¹Yb) atoms. Like ⁸⁷Sr atoms, the spin states of ¹⁷¹Yb atoms are robust to perturbation by the optical trap. But unlike ⁸⁷Sr atoms, ¹⁷¹Yb atoms have a nuclear spin of 1/2, making it easier to manipulate spin-state qubits made from this isotope. Both teams show that ¹⁷¹Yb atoms can be cooled and trapped using optical tweezers and that the nuclear spins of ¹⁷¹Yb atoms can be initialized, manipulated using optical or radio-frequency fields, and measured. In addition, Kaufman's group demonstrates that a ten-by-ten atomic lattice can be loaded with ¹⁷¹Yb atoms rapidly, with few defects, and then cooled to nearly absolute zero for high-fidelity qubit manipulations. Thompson's team, meanwhile, demonstrates a two-qubit gate operation using pairs of adjacent ¹⁷¹Yb atoms. Although neutral-atom quantum computers have not yet been explored as thoroughly as other platforms, recent advances in atom-manipulation techniques mean that they are catching up. Kaufman thinks that, eventually, physicists will be able to exploit the varied energy structures of different atoms to implement quantum computers that are scalable and that can be used in diverse applications such as metrology.
Thompson says that the nuclear spin sublevels of ¹⁷¹Yb atoms have been predicted to offer an especially effective method of quantum error correction.
S. Ma et al., "Universal gate operations on nuclear spin qubits in an optical tweezer array of ¹⁷¹Yb atoms," Phys. Rev. X 12, 021028 (2022).
Oloid - Wikipedia
Three-dimensional curved geometric object
Oloid structure, showing the two 240° circular sectors and the convex hull.
The plane shape of a developed oloid surface
An oloid is a three-dimensional curved geometric object that was discovered by Paul Schatz in 1929. It is the convex hull of a skeletal frame made by placing two linked congruent circles in perpendicular planes, so that the center of each circle lies on the edge of the other circle. The distance between the circle centers equals the radius of the circles. One third of each circle's perimeter lies inside the convex hull, so the same shape may also be formed as the convex hull of the two remaining circular arcs, each spanning an angle of 4π/3. The surface area of an oloid is given by[1]

A = 4\pi r^{2},

exactly the same as the surface area of a sphere with the same radius. In closed form, the enclosed volume is[1][2]

V = \frac{2}{3}\left(2E\left(\tfrac{3}{4}\right) + K\left(\tfrac{3}{4}\right)\right)r^{3},

where K and E denote the complete elliptic integrals of the first and second kind, respectively. A numerical calculation gives

V \approx 3.0524184684\,r^{3}.

The surface of the oloid is a developable surface, meaning that patches of the surface can be flattened into a plane. While rolling, it develops its entire surface: every point of the surface of the oloid touches the plane on which it is rolling at some point during the rolling movement.[1] Unlike most axially symmetric objects (cylinder, sphere, etc.), while rolling on a flat surface its center of mass performs a meander motion rather than a linear one. In each rolling cycle, the distance between the oloid's center of mass and the rolling surface has two minima and two maxima.
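The closed-form volume above can be checked numerically. A minimal Python sketch (assuming, as the quoted decimal value confirms, that the elliptic integrals take the parameter m = 3/4 rather than the modulus; they are computed here with the arithmetic-geometric mean so that no external libraries are needed):

```python
import math

def ellip_KE(m, tol=1e-15):
    """Complete elliptic integrals K(m) and E(m) via the
    arithmetic-geometric mean (parameter convention: m = k^2)."""
    a, b, c = 1.0, math.sqrt(1.0 - m), math.sqrt(m)
    s, p = 0.5 * c * c, 1.0        # running sum of 2^(n-1) * c_n^2
    while c > tol:
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        p *= 2.0
        s += 0.5 * p * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - s)        # E = K * (1 - sum)

K, E = ellip_KE(0.75)
r = 1.0
volume = (2.0 / 3.0) * (2.0 * E + K) * r**3   # ~ 3.0524184684 * r^3
surface = 4.0 * math.pi * r**2                # same as a sphere of radius r
```

With exact arithmetic the expression reproduces the quoted value 3.0524184684 r³, which is a useful sanity check on both the formula and the parameter convention.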
The difference between the maximum and the minimum height is given by

\Delta h = r\left(\frac{\sqrt{2}}{2} - \frac{3\sqrt{3}}{8}\right) \approx 0.0576\,r,

where r is the radius of the oloid's circular arcs. Since this difference is fairly small, the oloid's rolling motion is relatively smooth. At each point during this rolling motion, the oloid touches the plane in a line segment. The length of this segment stays unchanged throughout the motion and is given by[1][3]

l = \sqrt{3}\,r.

Related shapes
Comparison of an oloid (left) and sphericon (right)
The sphericon is the convex hull of two semicircles on perpendicular planes, with centers at a single point. Its surface consists of the pieces of four cones. It resembles the oloid in shape and, like it, is a developable surface that can be developed by rolling. However, its equator is a square with four sharp corners, unlike the oloid, which does not have sharp corners. Another object, called the two circle roller, is defined from two perpendicular circles for which the distance between their centers is √2 times their radius, farther apart than in the oloid. It can either be formed (like the oloid) as the convex hull of the circles, or by using only the two disks bounded by the two circles. Unlike the oloid, its center of gravity stays at a constant distance from the floor, so it rolls more smoothly than the oloid. In 1979, modern dancer Alan Boeding designed his "Circle Walker" sculpture from two crosswise semicircles, forming a skeletal version of the sphericon, a shape with a similar rolling motion to the oloid.
He began dancing with a scaled-up version of the sculpture in 1980 as part of an MFA program in sculpture at Indiana University, and after he joined the MOMIX dance company in 1984 the piece became incorporated into the company's performances.[4][5] The company's later piece "Dream Catcher" is based around another Boeding sculpture whose linked teardrop shapes incorporate the skeleton and rolling motion of the oloid.[6]
^ Dirnböck, Hans; Stachel, Hellmuth (1997), "The development of the oloid" (PDF), Journal for Geometry and Graphics, 1 (2): 105–118, MR 1622664.
^ Kuleshov, Alexander S.; Hubbard, Mont; Peterson, Dale L.; Gede, Gilbert (2011), "Motion of the Oloid-toy", Proc. 7th European Nonlinear Dynamics Conference, 24–29 July 2011, Rome, Italy (PDF), archived from the original on 28 December 2013, retrieved 6 November 2013.
^ Green, Judith (May 2, 1991), "Hits and misses at Momix: it's not quite dance, but it's sometimes art", Dance review, San Jose Mercury News.
^ Boeding, Alan (April 27, 1988), "Circle dancing", The Christian Science Monitor.
^ Anderson, Jack (February 8, 2001), "Leaping Lizards and Odd Denizens of the Desert", Dance Review, The New York Times.
External links: rolling oloid filmed at the Swiss Science Center Technorama, Winterthur, Switzerland; a paper model of the oloid; a polygon mesh of the oloid and code to generate it.
Per-Unit System - MATLAB & Simulink - MathWorks Benelux
Per-Unit System and Motor Control Blockset
Why Use Per-Unit System Instead of Standard SI Units
Motor Control Blockset™ uses these International System of Units (SI): torque is expressed in newton-meters (N·m). The SI unit for speed is rad/s; however, most manufacturers use rpm as the unit to specify the rotational speed of motors, so Motor Control Blockset prefers rpm as the unit of rotational speed over rad/s. You can use either value based on your preference. The per-unit (PU) system is commonly used in electrical engineering to express the values of quantities like voltage, current, power, and so on. It is used for transformers and AC machines in power system analysis. Embedded systems engineers also use this system for optimized code generation and scalability, especially when working with fixed-point targets. For a given quantity (such as voltage, current, power, speed, or torque), the PU system expresses a value in terms of a base quantity:

quantity expressed in PU = (quantity expressed in SI units) / (base value)

Generally, most systems select the nominal values of the system as the base values. Sometimes, a system may instead select the maximum measurable value as the base value. After you establish the base values, all signals are represented in PU with respect to the selected base value. For example, in a motor control system, if the selected base value of the current is 10 A, then the PU representation of a 2 A current is (2/10) PU = 0.2 PU. Conversely,

quantity expressed in SI units = (quantity expressed in PU) × (base value)

For example, the SI unit representation of 0.2 PU = (0.2 × base value) = (0.2 × 10) A = 2 A. Motor Control Blockset uses these conventions to define the base values for voltage, current, speed, torque, and power.
Base voltage Vbase: This is the maximum phase voltage supplied by the inverter.
Generally, for Space Vector PWM, it is \text{PU_System}\text{.V_base = }\left(\frac{\text{inverter}\text{.V_dc}}{\sqrt{\text{3}}}\right) For Sinusoidal PWM, it is \text{PU_System}\text{.V_base = }\left(\frac{\text{inverter}\text{.V_dc}}{\text{2}}\right) Base current Ibase This is the maximum current that can be measured by the current sensing circuit of the inverter. Generally, but not necessarily, it is Imax of the inverter. \text{PU_System}\text{.I_base = inverter}\text{.I_max} Base speed Nbase This is the nominal (or rated) speed of the motor. This is also the maximum speed that the motor can achieve at the nominal voltage and nominal load without a field-weakening operation. Base torque Tbase This torque is mathematically derived from the base current. Physically, the motor may or may not be able to produce this torque. Generally, it is \text{PU_System}\text{.T_base = }\frac{\text{3}}{\text{2}}×\text{pmsm}\text{.p}×\text{pmsm}\text{.FluxPM}×\text{PU_System}\text{.I_base} Base power Pbase This is the power derived by the base voltage and base current. \text{PU_System}\text{.P_base = }\frac{\text{3}}{\text{2}}×\text{PU_System}\text{.V_base}×\text{PU_System}\text{.I_base} Vdc is the DC voltage that you provide to the inverter. Imax is the maximum current measured by the ADCs connected to the current sensors of the inverter. p is the number of pole pairs available in the PMSM. FluxPM is the permanent magnet flux linkage of the PMSM. pmsm is the MATLAB® workspace parameter structure that saves the motor variables. inverter is the MATLAB workspace parameter structure that saves the inverter variables. PU_System is the MATLAB workspace parameter structure that saves the PU system variables. For the voltage and current values, you can generally consider the peak value of the nominal sinusoidal voltage (or current) as 1PU. Therefore, the base values used for voltage and current are the RMS values multiplied by \sqrt{2} , or the peak value measured between phase-neutral. 
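As an illustration of these base-value definitions, here is a small Python sketch. Note that the inverter and motor numbers below are made-up example values, not toolbox defaults, and the dictionaries merely mimic the PU_System, inverter, and pmsm structures described above:

```python
import math

# Assumed example values (not from the toolbox): a 24 V DC-bus inverter
# driving a PMSM with 4 pole pairs and 6.2 mWb of PM flux linkage.
inverter = {"V_dc": 24.0, "I_max": 10.0}
pmsm = {"p": 4, "FluxPM": 6.2e-3}

PU_System = {}
# Base voltage for Space Vector PWM: V_dc / sqrt(3)
PU_System["V_base"] = inverter["V_dc"] / math.sqrt(3)
# Base current: maximum current measurable by the inverter's sensing circuit
PU_System["I_base"] = inverter["I_max"]
# Base torque, mathematically derived from the base current
PU_System["T_base"] = 1.5 * pmsm["p"] * pmsm["FluxPM"] * PU_System["I_base"]
# Base power, derived from base voltage and base current
PU_System["P_base"] = 1.5 * PU_System["V_base"] * PU_System["I_base"]

def to_pu(value_si, base):
    """Convert an SI-unit value to per-unit against the given base."""
    return value_si / base

# A 2 A phase current expressed in per-unit (I_base = 10 A):
i_pu = to_pu(2.0, PU_System["I_base"])   # 0.2 PU
```

Multiplying a PU value by its base recovers the SI value, which is the round trip described by the two conversion formulas above.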
You can simplify your calculations by using the PU system. Motor Control Blockset uses these base value definitions for the PU-system-related conversions performed by the algorithms used in the toolbox examples. The toolbox stores the PU-system-related variables in a structure called PU_System in the MATLAB workspace. Per-unit representation of signals has many advantages over SI units. This technique:
Improves the computational efficiency of code execution, and is therefore the preferred system for fixed-point targets.
Creates a scalable control algorithm that can be used across many systems.
Physics - The Two Structures of Hot Dense Ice
April 21, 2022 • Physics 15, s49
Experiments indicate that superionic ice can exist in two stable crystal structures.
If water exists on an icy giant planet or an exoplanet, it's most likely in a phase that doesn't naturally occur on Earth. At the high temperatures and pressures inside these planetary bodies, water molecules should break apart. Their oxygen ions should then crystallize into a lattice while their hydrogen ions float freely, creating an ion-conducting substance called superionic ice. Theorists have predicted this ice's structures, but there is disagreement about when superionic phases are stable. Now, Gunnar Weck of the University of Paris-Saclay and his colleagues have experimentally confirmed two superionic-ice lattice structures and the conditions under which each remains stable [1]. The study resolves a long-standing puzzle and provides insight into the materials and conditions that may exist in icy planetary interiors. Recent experiments indicate that superionic ice has different crystal structures under different pressures, but ambiguities exist in the exact arrangement of atoms in these crystals. To provide clarity on this issue, Weck and his colleagues examined hot dense water ice at pressures between 25 and 180 GPa and at temperatures between 500 and 2500 K. The team compressed their water ice using a diamond anvil cell and heated it with a laser using boron-doped diamond heaters. They used synchrotron x-ray diffraction to determine each sample's atomic arrangement during this process. In their experiments, Weck and his colleagues observed two superionic-ice structures. They found that the ice started transitioning between the two structures at a pressure of 57 GPa and a temperature of 1500 K and completed the change at 166 GPa and 2500 K. The team presented a phase diagram illustrating the stability conditions for each structure.
[1] G. Weck et al., "Evidence and stability field of fcc superionic water ice using static compression," Phys. Rev. Lett. 128, 165701 (2022).
Gender Roles with Text Mining and N-grams
Today is the one-year anniversary of the janeaustenr package's appearance on CRAN, its cranniversary, if you will. I think it's time for more Jane Austen here on my blog. I saw this paper by Matthew Jockers and Gabi Kirilloff a number of months ago, and the ideas in it have been knocking around in my head ever since. The authors of that paper used text mining to examine a corpus of 19th century novels and explore how gendered pronouns (he/she/him/her) are associated with different verbs. These authors used the Stanford CoreNLP library to parse dependencies in sentences and find which verbs are connected to which pronouns; I have been thinking about how to apply a different approach to this question using tidy data principles and n-grams. Let's see what we can do!
Jane Austen and n-grams
An n-gram is a contiguous series of n words from a text; for example, a bigram is a pair of words, with n = 2. If we want to find out which verbs an author is more likely to pair with the pronoun "she" than with "he", we can analyze bigrams. Let's use unnest_tokens from the tidytext package to identify all the bigrams in the 6 completed, published novels of Jane Austen and transform this to a tidy dataset.

austen_bigrams
##    book                bigram
##    <fctr>              <chr>
##  1 Sense & Sensibility sense and
##  2 Sense & Sensibility and sensibility
##  3 Sense & Sensibility sensibility by
##  4 Sense & Sensibility by jane
##  5 Sense & Sensibility jane austen
##  6 Sense & Sensibility austen 1811
##  7 Sense & Sensibility 1811 chapter
##  8 Sense & Sensibility chapter 1
##  9 Sense & Sensibility 1 the
## 10 Sense & Sensibility the family

That is all the bigrams from Jane Austen's works, but we only want the ones that start with "he" or "she". Jane Austen wrote in the third person, so this is a good example set of texts for this question.
The original paper used dependency parsing of sentences and included other pronouns like "her" and "him", but let's just look for bigrams that start with "she" and "he". We will get some adverbs and modifiers and such as the second word in the bigram, but mostly verbs, the main thing we are interested in.

pronouns <- c("he", "she")
bigram_counts <- austen_bigrams %>%
    count(bigram, sort = TRUE) %>%
    separate(bigram, c("word1", "word2"), sep = " ") %>%
    filter(word1 %in% pronouns) %>%
    count(word1, word2, wt = n, sort = TRUE) %>%
    rename(total = nn)

##    word1 word2 total
##    <chr> <chr> <int>
##  1 she   had    1472
##  2 she   was    1377
##  3 he    had    1023
##  4 he    was     889
##  5 she   could   817
##  6 he    is      399
##  7 she   would   383
##  8 she   is      330
##  9 he    could   307
## 10 he    would   264

There we go! These are the most common bigrams that start with "he" and "she" in Jane Austen's works. Notice that there are more mentions of women than men here; this makes sense as Jane Austen's novels have protagonists who are women. The most common bigrams look pretty similar between the male and female characters in Austen's works. Let's calculate a log odds ratio so we can find the words (hopefully mostly verbs) that exhibit the biggest differences between relative use for "she" and "he".

word_ratios <- bigram_counts %>%
    group_by(word2) %>%
    mutate(word_total = sum(total)) %>%
    ungroup() %>%
    filter(word_total > 10) %>%
    select(-word_total) %>%
    spread(word1, total, fill = 0) %>%
    mutate_if(is.numeric, funs((. + 1) / sum(. + 1))) %>%
    mutate(logratio = log(she / he)) %>%
    arrange(desc(logratio))

Which words have about the same likelihood of following "he" or "she" in Jane Austen's novels?
word_ratios %>%
    arrange(abs(logratio))

##    word2        he          she          logratio
##    <chr>        <dbl>       <dbl>        <dbl>
##  1 always       0.001846438 0.0018956289  0.02629233
##  2 loves        0.000923219 0.0008920607 -0.03433229
##  3 too          0.000923219 0.0008920607 -0.03433229
##  4 when         0.000923219 0.0008920607 -0.03433229
##  5 acknowledged 0.001077089 0.0011150758  0.03466058
##  6 remained     0.001077089 0.0011150758  0.03466058
##  7 had          0.157562702 0.1642506690  0.04157024
##  8 paused       0.001384828 0.0014495986  0.04571041
##  9 would        0.040775504 0.0428189117  0.04889836
## 10 turned       0.003077397 0.0032337199  0.04954919

These words, like "always" and "loves", are about as likely to come after the word "she" as the word "he". Now let's look at the words that exhibit the largest differences in appearing after "she" compared to "he".

word_ratios %>%
    mutate(abslogratio = abs(logratio)) %>%
    group_by(logratio < 0) %>%
    top_n(15, abslogratio) %>%
    ungroup() %>%
    mutate(word = reorder(word2, logratio)) %>%
    ggplot(aes(word, logratio, color = logratio < 0)) +
    geom_segment(aes(x = word, xend = word, y = 0, yend = logratio),
                 size = 1.1, alpha = 0.6) +
    geom_point(size = 3.5) +
    coord_flip() +
    labs(x = NULL,
         y = "Relative appearance after 'she' compared to 'he'",
         title = "Words paired with 'he' and 'she' in Jane Austen's novels",
         subtitle = "Women remember, read, and feel while men stop, take, and reply") +
    scale_color_discrete(name = "", labels = c("More 'she'", "More 'he'")) +
    scale_y_continuous(breaks = seq(-1, 2),
                       labels = c("0.5x", "Same", "2x", "4x"))

These words are the ones that are the most different in how Jane Austen used them with the pronouns "he" and "she". Women in Austen's novels do things like remember, read, feel, resolve, long, hear, dare, and cry. Men, on the other hand, in these novels do things like stop, take, reply, come, marry, and know. Women in Austen's world can be funny and smart and unconventional, but she plays with these ideas within a cultural context where they act out gendered roles.
George Eliot and n-grams
Let's look at another set of novels to see some similarities and differences.
Let's take some novels of George Eliot, another English writer (a woman) who lived and wrote several decades after Jane Austen. Let's take Middlemarch (MY FAVE), Silas Marner, and The Mill on the Floss.

eliot <- gutenberg_download(c(145, 550, 6688),
                            mirror = "http://mirrors.xmission.com/gutenberg/")

We now have the texts downloaded from Project Gutenberg. We can use the same approach as above and calculate the log odds ratios for each word that comes after "he" and "she" in these novels of George Eliot.

eliot_ratios <- eliot %>%
    unnest_tokens(bigram, text, token = "ngrams", n = 2) %>%
    count(bigram, sort = TRUE) %>%
    separate(bigram, c("word1", "word2"), sep = " ") %>%
    filter(word1 %in% pronouns) %>%
    count(word1, word2, wt = n, sort = TRUE) %>%
    rename(total = nn)
    ## followed by the same spread and log odds ratio steps as above

What words exhibit the largest differences in their appearance after these pronouns in George Eliot's works?

eliot_ratios %>%
    mutate(abslogratio = abs(logratio)) %>%
    group_by(logratio < 0) %>%
    top_n(15, abslogratio) %>%
    ungroup() %>%
    mutate(word = reorder(word2, logratio)) %>%
    ggplot(aes(word, logratio, color = logratio < 0)) +
    geom_segment(aes(x = word, xend = word, y = 0, yend = logratio),
                 size = 1.1, alpha = 0.6) +
    geom_point(size = 3.5) +
    coord_flip() +
    labs(x = NULL,
         y = "Relative appearance after 'she' compared to 'he'",
         title = "Words paired with 'he' and 'she' in George Eliot's novels",
         subtitle = "Women read, run, and need while men leave, mean, and tell") +
    scale_color_discrete(name = "", labels = c("More 'she'", "More 'he'")) +
    scale_y_continuous(breaks = seq(-3, 3),
                       labels = c("0.125x", "0.25x", "0.5x", "Same", "2x", "4x", "8x"))

We can see some difference in word use and style here, but overall there are quite similar ideas behind the verbs for women and men in Eliot's works as in Austen's. Women read, run, need, marry, and look, while men leave, mean, tell, know, and call. The verbs associated with women are more connected to emotion or feelings, while the verbs associated with men are more connected to action or speaking.
Jane Eyre and n-grams
Finally, let's look at one more novel. The original paper found that Jane Eyre by Charlotte Brontë had its verbs switched, in that there were lots of active, non-feelings verbs associated with feminine pronouns. That Jane Eyre!

eyre <- gutenberg_download(1260,
                           mirror = "http://mirrors.xmission.com/gutenberg/")

eyre_ratios <- eyre %>%
    unnest_tokens(bigram, text, token = "ngrams", n = 2) %>%
    count(bigram, sort = TRUE) %>%
    separate(bigram, c("word1", "word2"), sep = " ") %>%
    filter(word1 %in% pronouns) %>%
    count(word1, word2, wt = n, sort = TRUE) %>%
    rename(total = nn) %>%
    group_by(word2) %>%
    mutate(word_total = sum(total)) %>%
    ungroup() %>%
    filter(word_total > 5) %>%
    select(-word_total) %>%
    spread(word1, total, fill = 0) %>%
    mutate_if(is.numeric, funs((. + 1) / sum(. + 1))) %>%
    mutate(logratio = log(she / he))

What words exhibit the largest differences in Jane Eyre?
eyre_ratios %>%
    mutate(abslogratio = abs(logratio)) %>%
    group_by(logratio < 0) %>%
    top_n(15, abslogratio) %>%
    ungroup() %>%
    mutate(word = reorder(word2, logratio)) %>%
    ggplot(aes(word, logratio, color = logratio < 0)) +
    geom_segment(aes(x = word, xend = word, y = 0, yend = logratio),
                 size = 1.1, alpha = 0.6) +
    geom_point(size = 3.5) +
    coord_flip() +
    labs(x = NULL,
         y = "Relative appearance after 'she' compared to 'he'",
         title = "Words paired with 'he' and 'she' in Jane Eyre",
         subtitle = "Women look, tell, and open while men stop, smile, and pause") +
    scale_color_discrete(name = "", labels = c("More 'she'", "More 'he'"))

Indeed, the words that are more likely to appear after "she" are not particularly feelings-oriented; women in this novel do things like look, tell, open, and do. Men in Jane Eyre do things like stop, smile, pause, pursue, and stand. It is so interesting to me how these various authors understand and portray their characters' roles and gender, and how we can see that through analyzing word choices. The R Markdown file used to make this blog post is available here. I am very happy to hear feedback and questions!
A reliability block diagram (RBD) is a diagrammatic method for showing how component reliability contributes to the success or failure of a redundant system. An RBD is also known as a dependence diagram (DD).
An RBD is drawn as a series of blocks connected in parallel or series configuration. Parallel blocks indicate redundant subsystems or components that contribute to a lower failure rate. Each block represents a component of the system with a failure rate. RBDs indicate the type of redundancy in the parallel path.[1] For example, a group of parallel blocks could require two out of three components to succeed for the system to succeed. By contrast, any failure along a series path causes the entire series path to fail.[2][3] An RBD may be drawn using switches in place of blocks, where a closed switch represents a working component and an open switch represents a failed component. If a path may be found through the network of switches from beginning to end, the system still works. An RBD may be converted to a success tree or a fault tree depending on how the RBD is defined. A success tree may then be converted to a fault tree, or vice versa, by applying de Morgan's theorem. To evaluate an RBD, closed-form solutions are available when the blocks or components are statistically independent. When statistical independence is not satisfied, specific formalisms and solution tools such as dynamic RBDs have to be considered.[4]
Calculating an RBD
The first thing one must determine when calculating an RBD is whether to use probability or rate. Failure rates are often used in RBDs to determine system failure rates. Use probabilities or rates in an RBD, but not both.
Series probabilities are calculated by multiplying the reliabilities (probabilities) of the series components:

R_{\text{SYS}}(t) = R_{1}(t) \times R_{2}(t) \times \cdots \times R_{n}(t)

Parallel probabilities are calculated by multiplying the unreliabilities Q = 1 − R of the parallel components, when only one unit needs to function for system success:

Q_{\text{SYS}}(t) = Q_{1}(t) \times Q_{2}(t) \times \cdots \times Q_{n}(t)

For constant failure rates, series rates are calculated by superimposing the Poisson point processes of the series components:

\lambda_{\text{SYS}} = \lambda_{1} + \lambda_{2} + \cdots + \lambda_{n}

Parallel rates can be evaluated using a number of formulas, including this formula[5] for all units active with equal component failure rates, where n − q out of n redundant units are required for success and μ ≫ λ:

\lambda_{\text{SYS}} = \frac{n!\,\lambda^{q+1}}{(n-q-1)!\,\mu^{q}}

If the components in a parallel system have n different failure rates, a more general formula can be used. For the repairable model, Q = λ/μ as long as μ ≫ λ:

\lambda_{\text{SYS}} = \sum_{i=1}^{n}\left(\lambda_{i}\prod_{j=1;\,j\neq i}^{n}Q_{j}\right)

^ Electronic Design Handbook, MIL-HDBK-338B, October 1, 1998
^ Mohammad Modarres; Mark Kaminskiy; Vasiliy Krivtsov (1999). "4". Reliability Engineering and Risk Analysis: A Practical Guide. New York, NY: Marcel Dekker, Inc. p. 198. ISBN 978-0-8247-2000-1. Retrieved 2010-03-16.
^ "6.4 Reliability Modeling and Prediction". Electronic Reliability Design Handbook. B. U.S. Department of Defense. 1998. MIL-HDBK-338B. Retrieved 2010-03-16.
^ Salvatore Distefano, Antonio Puliafito. "Dependability Evaluation with Dynamic Reliability Block Diagrams and Dynamic Fault Trees." IEEE Trans. Dependable Sec. Comput.
6(1): 4–17 (2009)
External links: http://www.reliabilityeducation.com/rbd.pdf (commercial website); Institut pour la Maîtrise des Risques, method sheets, English version.
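The series and parallel probability rules above can be sketched in a few lines of Python (plain standard library; the function names are mine, not from any reliability tool):

```python
from math import prod

def series_reliability(rs):
    """Series system: every block must work, so reliabilities multiply."""
    return prod(rs)

def parallel_reliability(rs):
    """Parallel 1-out-of-n system: it fails only if every block fails,
    so the unreliabilities Q = 1 - R multiply."""
    return 1.0 - prod(1.0 - r for r in rs)

def series_failure_rate(lams):
    """Constant failure rates in series simply add."""
    return sum(lams)

# Two blocks of reliability 0.9:
r_series = series_reliability([0.9, 0.9])      # 0.81
r_parallel = parallel_reliability([0.9, 0.9])  # 0.99
```

The example shows why redundancy helps: the same two 0.9 blocks give 0.81 in series but 0.99 in parallel.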
Crystal Wafer: Crystallographic cracking behavior in silicon single crystal wafer
Crystallographic cracking behavior was studied on three-point-bending specimens of silicon single-crystal wafer having a $(1\bar{1}0)[11\bar{2}]$-oriented precrack. Crystallographic cracking occurred on alternating {111} planes after traversing about 500 μm from the crack front at the brittle-to-ductile transition temperature, and the main crack was almost parallel to the loading axis. The preferentially activated slip systems ahead of the crack tip resulted in the characteristic fracture in the specimens. The experimental results could be well explained by calculating the shear stress on all possible tetrahedral slip planes around the crack tip.
Keywords: brittle-to-ductile transition, three-point bending, cross-slip zone
The Vault - TOAD Wiki
PADSwap UI to view and interact with the Vault
The Vault is another groundbreaking idea for the whole TOAD.Network ecosystem. The Vault is a secure place that stores the backing for PAD, the native token of PADSwap. Whenever a swap takes place on PADSwap, 0.05% of the transaction goes to the Vault. The tokens accumulated in the Vault act as a backing for PAD, giving it real value in underlying tokens (such as BNB, BTC, ETH, etc.). Essentially, PAD is backed by small amounts of every token on PADSwap, and the backing amount grows with every transaction. At any time, any holder can burn their PAD, permanently removing these tokens from the circulating supply, and receive their backing. For example, burning 1% of the PAD supply will give you exactly 1% of every token stored in the Vault. If PAD's market cap ever drops below its Vault backing, it becomes profitable to burn your PAD, which in turn makes PAD more scarce and thereby more valuable, bringing the price of PAD back up. The Vault's purpose is to behave as an automated "price correction mechanism", enabling holders to exchange the PAD tokens they burn for the corresponding share of the tokens held in the Vault at that moment. It works like a water wheel on a river: the more volume, the more rewards, and those rewards entice more volume, which brings still more rewards.
Visualization of the rising PAD price floor through the Vault
Why create a vault that stores value as backing for PAD instead of distributing dividends to people staking PAD? The most important point is persistent vs. non-persistent rewards. One of the biggest problems with distributing those fees as dividends to token holders is that the value generated by those fees is not persistent. That means the value is distributed to holders, but those holders can sell the token later and keep the rewards they already earned.
Under this new vault model, those rewards are persistent: if a user sells PAD, he is also selling the right to redeem its backing, and the new user buying it receives all the historical rewards already collected. Under a dividend distribution, by contrast, the only value of the token to the new user would be the future fees it will generate. This is a good question, and the answer is not simple, as there are multiple reasons why this new vault model is superior. It creates a floor price. It helps reduce the PAD supply: if the price of PAD dumps below its backing value, there is an incentive to buy PAD, burn it, and redeem the backing. This arbitrage trade benefits not only the person doing it but also PAD, as it reduces the supply of PAD and pushes the price up through the buy. It is flexible. One of our long-term goals is to create a DAO, where the TOAD token will serve as the governance token (it is scarce, hard to farm, and has a fixed max supply) and PAD as the utility token. The Vault can be integrated with the DAO, and the community can vote on which tokens to add, giving the organization the ability to search for upcoming projects and invest in them using the trading fees. This can turn PAD into an index token, pegged to multiple tokens and managed by a decentralized organization. The community will also be able to vote on the percentage of each token to buy with the fees generated, which gives the Vault the ability to adapt its strategy during bull/bear markets. For example, the Vault can buy more USDC while the markets are up and more BTC when the markets crash. In the future, when the Vault is filled enough, if a big dump happens in the PAD price, for some brief time the Vault will offer a better exchange rate per PAD than PADSwap. Thus, there will be an opportunity for arbitrage on the PAD price: people can buy PAD on PADSwap and burn it to profit from this market condition.
When this happens, the PAD price will go up, driven by standard AMM rules: increased buying pressure from the arbitrage, and a supply decrease due to the burn. These two actions push the price above the backing price once again. In other words, we can see The Vault as a cross-chain index with a forever-rising, floating peg. In most AMMs, part of the swap fee goes to developer wallets. But not here! At PADSwap, this portion instead rewards all users: 0.05% of each swap is sent to the Vault, where PAD's backing value is held. This protects users from volatile market conditions. It rewards holders better than distributing dividends, because PAD accumulates all historical profits plus all future ones: someone buying PAD is buying not only PAD's future profits but also all historic profits. This is superior to a token that simply pays dividends directly. So, in short, The Vault is fed through fees from swaps, farms, and any additional inflows added over time. These fees are held in the Vault and also used to buy other tokens to store in The Vault, effectively forming a crypto index. If the open-market price of PAD dips below its Vault backing price, users are incentivized to burn PAD to redeem their share of the Vault holdings. Redeem Backing Let V be the set of tokens in the Vault, t_i \in V the amount of token i in the Vault, P the circulating supply of PAD, p the amount of PAD supplied by the user, and l the leverage set by the community, bounded by [1, 3] . Then we can define the redeem function R: R(p, t_i \in V) = \left( \dfrac{t_i}{P}\right) \cdot p \cdot l This function is executed for every token in the Vault V. An example of burning PAD with easy numbers: say the circulating supply of PAD is 100 PAD and you are holding 1 PAD which you want to burn; then you will get 1% (1/100) of all tokens in the Vault at that time.
If there were 100 TOAD, 1 BTC, 10 ETH, and 1000 BUSD, you would get 1 TOAD, 0.01 BTC, 0.1 ETH, and 10 BUSD (ignoring transaction fees). The supply of PAD would be forever reduced to 99, and all future fees and value in the Vault would now be shared among fewer PAD tokens.
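The redeem rule and the worked example above can be sketched in a few lines. This is a hypothetical illustration, not PADSwap contract code; the token names and amounts are just the example's numbers, and leverage l defaults to 1.

```python
from fractions import Fraction

# Sketch of the redeem function R(p, t_i) = (t_i / P) * p * l described
# above. Hypothetical illustration, not PADSwap's contract code.

def redeem(vault, total_supply, pad_burned, leverage=1):
    # Each vault token pays out pro rata to the share of PAD burned.
    share = Fraction(pad_burned, total_supply) * leverage
    return {token: amount * share for token, amount in vault.items()}

vault = {"TOAD": 100, "BTC": 1, "ETH": 10, "BUSD": 1000}
payout = redeem(vault, total_supply=100, pad_burned=1)
# 1% of each token: 1 TOAD, 0.01 BTC, 0.1 ETH, 10 BUSD
```

Exact rational arithmetic (Fraction) stands in for the fixed-point math an actual on-chain implementation would use.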
Differential geometry in Rust – ebvalaim.log The array problem once again So we want to check the compatibility of tensors with respect to coordinate systems statically (that is, at compile time), but we could expand this a bit. For example, addition only makes sense for tensors of the same rank and variance (that is, the same "composition" of indices, each of which can be either covariant or contravariant). Adding, let's say, a vector and a covector (which are both of rank 1, but one is contravariant and the other covariant) makes no sense. If the type system could also be used to detect this kind of error, that would be awesome, but is it possible? As it turns out, yes it is, but it's not that easy. Dealing with tensors Tensors are characterized basically by two properties: the dimension of the underlying space and their rank/variance. For example, as mentioned above, vectors and covectors have rank 1. Matrix tensors (like linear transformations or bilinear forms), on the other hand, have rank 2. The difference lies in the variance: linear transformations have one contravariant and one covariant index, while bilinear forms have two covariant ones. The dimension together with the rank determines the number of coordinates of the tensor: it is D^R, where D is the space dimension and R the rank of the tensor. Since both these values will be known statically, it would also be nice if we could statically determine the size of the coordinate array. And this is where the problems start. As I already mentioned in the previous post, Rust arrays can't be parametrized by their length. At first glance it would thus be necessary to program each type of tensor separately, with an array of the appropriate size. Fortunately, there is a solution which I also described in that post: the GenericArray struct. The types this struct uses to encode the array length have all their arithmetic defined in the typenum crate, so calculating values like D^R shouldn't be a problem.
Indeed, it's possible, but not really smooth. It turns out that each operation on the number types from typenum causes all the guarantees assumed for those types to be lost. For example, if I define the type representing the dimension and the type representing the rank to be usable as the length of an array, this guarantee disappears upon creation of the type that results from computing D^R. This means that I need to state it separately, which bloats the trait requirements and leads to monster snippets like this one. The author of typenum, paholg, has been of invaluable help in this matter. Thanks to him, I managed to write requirements that allowed me to compile a struct representing a tensor. After these initial struggles everything went smoothly. Further operations on tensor types were easy to code, as I already knew what the compiler expects. This way I quickly implemented tensor addition and subtraction, multiplication by scalars and by other tensors, and contraction (which means summing the tensor elements along a "diagonal"; in the case of a matrix it's just the trace). The compiler gives up Unfortunately, after I had everything coded and a few tests written, it turned out that the compiler couldn't take it anymore. The tensor multiplication was too much to bear, and it got into an infinite loop. My efforts to identify and repair the problem were in vain. It is pretty obvious that the culprit is the tensor multiplication, as commenting out its code makes the error disappear, but it's impossible to determine how the error is being caused. In an act of despair I compiled a debug version of the compiler and generated some logs, but I only got about 500 MB of text, which didn't help me much. The Reddit community advised me to open an issue on GitHub, which I did. One of the developers managed to extract the essence of the offending code, but the cause of the problem is still unknown.
Nevertheless, using an alternate syntax (<Tensor<T, U> as Mul<Tensor<T, V>>>::mul(tensor1, tensor2) instead of tensor1 * tensor2) makes the code work, and the library is functional. I released version 0.1 yesterday. Finally: an additional macro for GenericArray I've also managed to improve one particular aspect of GenericArray. Until now, creating such an array required using the from_slice method, which a) bloats the code, and b) only validates the slice length dynamically. Today I created an arr! macro, which not only makes the code shorter, but also checks everything statically: // the old version let array1 = GenericArray::<u32, U3>::from_slice(&[1, 2, 3]); // the new version let array2 = arr![u32; 1, 2, 3]; // compiles, but panics at runtime let array3 = GenericArray::<u32, U2>::from_slice(&[1, 2, 3]); // doesn't compile let array4: GenericArray<u32, U2> = arr![u32; 1, 2, 3]; The macro was uploaded to crates.io in version 0.1.1 of generic-array. So, this was my recent coding activity. In the near future, I'm planning to add some support for metric tensors and Christoffel symbols to differential-geometry, which should be enough to create a clone of the aforementioned gr-engine. When I'm done, I'll write about what became of it :)
14C30 Transcendental methods, Hodge theory, Hodge conjecture 14C35 Applications of methods of algebraic K-theory
A Construction of Surfaces with pg = 1, q = 0 and 2 ≤ K² ≤ 8. Counterexamples of the Global Torelli Theorem. Andrei N. Todorov (1981)
Sławomir Rams, Piotr Tworzewski, Tadeusz Winiarski (2005)
Eduardo Esteves, Israel Vainsencher (2006)
We give an intersection-theoretic proof of M. Soares' bounds for the Poincaré-Hopf index of an isolated singularity of a foliation of ℂℙ^n.
A Pieri-type theorem for Lagrangian and odd Orthogonal Grassmannians. Piotr Pragacz, Jan Ratajski (1996)
A triple intersection theorem for the varieties SO(n)/P_d. S. Sertöz (1993)
We study the Schubert calculus on the space of d-dimensional linear subspaces of a smooth n-dimensional quadric lying in the projective space. Following Hodge and Pedoe we develop the intersection theory of this space in a purely combinatorial manner. We prove in particular that if a triple intersection of Schubert cells on this space is nonempty then a certain combinatorial relation holds among the Schubert symbols involved, similar to the classical one. We also show when these necessary conditions...
About the adjunction process for polarized algebraic surfaces. Antonio Lanteri, Marino Palleschi (1984)
Angelo Vistoli (1989)
Algebraic Cycles on Abelian Varieties of Fermat Type. Tetsuji Shioda (1981)
Ample vector bundles with zero loci having a bielliptic curve section. Antonio Lanteri, Hidetoshi Maeda (2003)
An Euler-Poincaré Characteristic for Improper Intersections. Wolfgang Vogel, Jürgen Stückrad (1986)
Analytic deviation of ideals and intersection theory of analytic spaces. Rüdiger Achilles, Mirella Manaresi (1993)
Arakelov Chow groups of abelian schemes, arithmetic Fourier transform, and analogues of the standard conjectures of Lefschetz type. Klaus Künnemann (1994)
Arithmetic discriminants and horizontal intersections. Arithmetical graphs. Dino J. Lorenzini (1989)
Around the Gysin triangle. II.
Déglise, Frédéric (2008)
Associate forms, joins, multiplicities and an intrinsic elimination theory. Federico Gaeta (1990)
F. L. Zak (2012)
Berührungsbedingungen für p-l-s-Kegelschnitte. U. Sterz, K. Drechsler (1981)
Bivariant Chern classes for morphisms with nonsingular target varieties. Shoji Yokura (2005)
W. Fulton and R. MacPherson posed the problem of unique existence of a bivariant Chern class, that is, a Grothendieck transformation from the bivariant theory F of constructible functions to the bivariant homology theory H. J.-P. Brasselet proved the existence of a bivariant Chern class in the category of embeddable analytic varieties with cellular morphisms. In general, however, the problem of uniqueness is still unresolved. In this paper we show that for morphisms having nonsingular target varieties there...
Canonical Classes on Singular Varieties. W. Fulton, Kent Johnson (1980)
Physics - Cloud Quantum Computing Tackles Simple Nucleus Researchers perform a quantum computation of the binding energy of the deuteron using a web connection to remote quantum devices. Scientific computation has long been a matter of typing commands on a screen and then sending those instructions to a distant computer that might be down the hall or across the world. Remote access like this allows scientists to use supercomputers and other powerful machines that they couldn't manage by themselves. Now this same idea has spread to the quantum realm. So-called cloud quantum computing is now being offered by several companies, such as IBM, Google, and Rigetti, which have quantum chips linked to the internet. A certified user simply sends his or her quantum programming code to one of these quantum providers, where the operations are run and the results sent back. There is no need for the user to leave the office or even learn any of the complicated details about the quantum "hardware." Taking advantage of this trend, Eugene Dumitrescu from Oak Ridge National Laboratory in Tennessee and collaborators have performed a computation of the deuteron binding energy using quantum processors accessed via cloud servers [1]. The solution to this problem was already known, but this is the first time this calculation has been done with quantum computers. The work highlights the opportunities for scientists as quantum machines become more and more widespread. Figure 1: Both classical bits and quantum bits are characterized by two distinct states. The difference is that classical bits can only be in one state or the other, whereas a qubit can be in a combination, or superposition, of the two. Although the idea of quantum computers has been around for decades [2], the technical realization of such machines became possible only in the last few years.
Quantum computers rely on the manipulation of quantum bits, called qubits, which can be in an arbitrary superposition of the bit states, zero and one (Fig. 1). Being simultaneously in two states implies that qubits carry more information than classical bits. If you have N classical bits, then they will be in one state out of 2^N possible states, whereas N qubits could represent all those 2^N possible states at the same time. The power of quantum computers comes from their ability to create large superposition states, entanglement, and interference—all properties that do not exist in classical computation. This makes a dramatic difference in speed, as certain problems that scale exponentially in the number of operations on a classical computer are expected to scale polynomially on a quantum computer. There now exist several realizations of quantum computers that combine classical bits with a few dozen qubits [3]. The qubits come in a variety of physical implementations, with some represented by the spin up or down of atoms and others by two excited states in a superconducting circuit, for example. Certain quantum machines are now available to outside users. For example, the IBM Q Experience is a cloud-based platform that allows researchers to run their own experiments on one of the superconductor-based quantum computers that are housed in different IBM research labs. In their work, Dumitrescu et al. obtained access to two cloud-based quantum computing systems: an IBM QX5 quantum chip and a Rigetti 19Q quantum chip. In order to utilize these machines, the researchers had to become fluent in the "language" of quantum computers, which is different from that of classical computers.
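The counting argument above (N classical bits select one of 2^N states, while N qubits carry an amplitude for every one of them) can be illustrated with a tiny state-vector simulation. This is a plain-numpy sketch, unrelated to any of the cloud platforms mentioned here.

```python
import numpy as np

# Illustration of the counting argument: N classical bits select a single
# one of 2**N states, while an N-qubit register stores an amplitude for
# every one of the 2**N basis states.

N = 3
classical_state = 0b101                      # one index out of 2**N
state = np.zeros(2 ** N, dtype=complex)
state[classical_state] = 1.0                 # the matching basis state

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
# A Hadamard gate on each qubit spreads the state over all 2**N basis states:
for target in range(N):
    op = np.eye(1, dtype=complex)
    for q in range(N):
        op = np.kron(op, H if q == target else np.eye(2, dtype=complex))
    state = op @ state
# Now every one of the 2**N amplitudes is nonzero: a superposition that no
# single classical bit string can represent.
```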
In general, problem solving using quantum computers involves several steps [4–6], which can be split into three main blocks: (i) formulate the problem to be solved in terms of unitary matrices, (ii) rewrite those matrices in terms of gates that can be realized on a given quantum computer, and (iii) implement and try to improve the efficiency of (ii), reducing the number of gates as much as possible, given that a very small set of gates is enough to implement almost any unitary matrix [4]. A gate in a quantum computer is an operation on (or manipulation of) qubits, and it is always represented by a unitary operator. If we think of the qubit state as a spin, then a unitary operator would be a rotation of that spin. To take a simple example, suppose that we want to find the energy of a particular state |ψ⟩. To construct this state, we would devise a unitary operator U that operates on one or more qubits in their ground state: U|0⟩ = |ψ⟩. Let's assume that the Hamiltonian can be calculated from another unitary operator W. An easy way to calculate the mean energy is to assemble the qubits representing |ψ⟩ and manipulate them with W while also manipulating an extra, or ancilla, qubit (Fig. 2). At the end of these operations, the ancilla qubit is measured, returning either zero or one. This measurement, however, samples just one possibility out of many, so it is necessary to repeat the measurement many times and take the average. In this case, the final output will be related to the expectation value ⟨ψ|W|ψ⟩, which can be converted to the mean energy. Figure 2: In cloud-based quantum computing, a user formulates a problem—such as finding the binding energy of a nucleus—in terms of unitary matrices: U, W, etc. Those matrices are converted into gate operations, and these commands are sent through the internet to a computing facility equipped with a quantum chip (shown on right).
An example of a quantum computation is shown in the green boxes: First, the U operator acts on a set of qubits, |0⟩, to produce the desired wave function |ψ⟩. That wave function is then manipulated by the W operator while another qubit, called the ancilla qubit, is manipulated by other operators (designated by H, the Hadamard gate). Finally, the ancilla qubit is measured, and the result is sent back to the user. Dumitrescu et al. chose as their computational target the binding energy of the deuteron [1]. The Hamiltonian in this case is very simple, and the solution can be found analytically. But formulating the problem for quantum computers is a useful exercise, which should help in developing procedures for tackling much harder problems. In terms of the three main blocks of quantum computing, the authors gave a very clear and pedagogical description of points (i) and (ii), whereas point (iii) is more technical and beyond the scope of the calculation. The team's strategy was based on the so-called variational quantum eigensolver method [7]. They first represented an ansatz of the ground-state wave function in terms of a set of functions called the coupled-cluster basis [8]. This representation has one or two parameters, so they calculated the energy for different sets of parameters and chose the set that gave the lowest energy. The researchers initially performed a two-qubit computation, which involved just two coupled-cluster basis states. They found matching results from the IBM and Rigetti chips. They also performed a three-qubit computation with just the IBM chip.
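The ancilla-based measurement scheme described above can be sketched as a small state-vector simulation. This is an illustrative Hadamard test in plain numpy, not the authors' code: it prepares |ψ⟩ = U|0⟩, sandwiches a controlled-W between Hadamards on the ancilla, and recovers Re⟨ψ|W|ψ⟩ from the ancilla statistics, computed exactly here rather than by repeated sampling as on real hardware.

```python
import numpy as np

# Illustrative Hadamard-test sketch: read Re<psi|W|psi> from an ancilla
# qubit, for single-qubit unitaries U and W.

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

def hadamard_test(U, W):
    # System qubit prepared as |psi> = U|0>; ancilla starts in |0>.
    psi = U @ np.array([1.0, 0.0], dtype=complex)
    state = np.kron(np.array([1.0, 0.0], dtype=complex), psi)  # ancilla first
    eye = np.eye(2, dtype=complex)
    state = np.kron(H, eye) @ state          # Hadamard on the ancilla
    cW = np.block([[eye, np.zeros((2, 2))],  # controlled-W: apply W only
                   [np.zeros((2, 2)), W]])   # when the ancilla is |1>
    state = cW @ state
    state = np.kron(H, eye) @ state          # second Hadamard on the ancilla
    p0 = np.linalg.norm(state[:2]) ** 2      # probability of measuring 0
    return 2 * p0 - 1                        # P(0) - P(1) = Re<psi|W|psi>
```

For example, with W the Pauli-Z gate and |ψ⟩ = |0⟩ the routine returns 1, the expectation value ⟨0|Z|0⟩.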
When the results were extrapolated to the infinite-basis limit (a calculation that could be done analytically), the computed binding energy was in excellent agreement with exact calculations. Nowadays, quantum computers are quite limited in terms of the number of qubits and available gates. In addition, it has to be said that manipulating qubits is not easy: the spin of an atom representing the state of a qubit, for example, is affected by the environment, and this means that the qubit manipulations suffer from noise that increases with the number of gates applied to the qubits. However, even given such limitations, interest in quantum computing has exploded. The amount of available quantum hardware has grown substantially as well, and this should multiply opportunities to explore new ways to solve quantum many-body problems in physics and chemistry. Researchers have already begun looking at how quantum computing could resolve problems in, for example, scattering dynamics [9] and ground-state determinations [10].
E. F. Dumitrescu, A. J. McCaskey, G. Hagen, G. R. Jansen, T. D. Morris, T. Papenbrock, R. C. Pooser, D. J. Dean, and P. Lougovski, "Cloud Quantum Computing of an Atomic Nucleus," Phys. Rev. Lett. 120, 210501 (2018).
R. P. Feynman, "Simulating Physics with Computers," Int. J. Theor. Phys. 21, 467 (1982); "Quantum Mechanical Computers," Found. Phys. 16, 507 (1986); S. Lloyd, "Universal Quantum Simulators," Science 273, 1073 (1996).
J. S. Otterbach et al., "Unsupervised Machine Learning on a Hybrid Quantum Computer," arXiv:1712.05771.
E. Ovrum and M. Hjorth-Jensen, "Quantum Computation Algorithm for Many-Body Studies," arXiv:0705.1928.
P. J. Coles et al., "Quantum Algorithm Implementations for Beginners," arXiv:1804.03719.
J. Preskill, "Quantum Computing in the NISQ Era and Beyond," arXiv:1801.00862.
A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L.
O'Brien, "A Variational Eigenvalue Solver on a Photonic Quantum Processor," Nat. Commun. 5, 4213 (2014).
Y. Shen, X. Zhang, S. Zhang, J.-N. Zhang, M.-H. Yung, and K. Kim, "Quantum Implementation of the Unitary Coupled Cluster for Simulating Molecular Electronic Structure," Phys. Rev. A 95, 020501 (2017).
A. Roggero and J. Carlson, "Linear Response on a Quantum Computer," arXiv:1804.01505.
D. B. Kaplan, N. Klco, and A. Roggero, "Ground States via Spectral Combing on a Quantum Computer," arXiv:1709.08250.
Stefano Gandolfi is a staff scientist working in the theoretical division of Los Alamos National Laboratory. In 2007, he obtained his Ph.D. in physics from the University of Trento, Italy, where he received an award for the best Ph.D. thesis of the year. He then became a postdoctoral fellow at the International School for Advanced Studies (SISSA) in Trieste, Italy, before moving to Los Alamos in 2009. In 2013, he received the Young Scientist Prize from the International Union of Pure and Applied Physics (IUPAP). His research focuses on nuclear interactions, nuclear structure, electroweak interactions in nuclei and dense matter, the physics of neutron stars, and strongly correlated ultracold Fermi gases.
Quantum Information, Nuclear Physics
Physics - Thierry Giamarchi Thierry Giamarchi holds a Ph.D. in physics from Paris XI University. A permanent member of France's CNRS since 1986, he was a postdoc/visiting fellow at Bell Laboratories between 1990 and 1992, and in 2002 he became full professor in the Condensed Matter Department of the University of Geneva. His research focuses on the effects of interactions in low-dimensional quantum systems, such as Tomonaga-Luttinger liquids, and on the effects of disorder in classical and quantum systems. He is a fellow of the American Physical Society and, since 2013, a member of the French Academy of Sciences. In 2010 he was recognized as an Outstanding Referee by the American Physical Society. For more information visit: https://dqmp.unige.ch/giamarchi/thierry-giamarchi/ Deconstructing the electron An angle-resolved photoemission spectroscopy study of electron transport along quasi-one-dimensional Mo-O chains of Li_{0.9}Mo_{6}O_{17} reveals puzzling behavior that does not fit within the available one-dimensional theory frameworks and likely points to undiscovered physics. Theory for 1D Quantum Materials Tested with Cold Atoms and Superconductors The Tomonaga-Luttinger theory describing one-dimensional materials has been tested with cold atoms and arrays of Josephson junctions.
76D05 Navier-Stokes equations 76D03 Existence, uniqueness, and regularity theory 76D45 Capillarity (surface tension)
Olivier Bernard, Anne-Céline Boulanger, Marie-Odile Bristeau, Jacques Sainte-Marie (2013)
Cultivating oleaginous microalgae in specific culturing devices such as raceways is seen as a future way to produce biofuel. The complexity of this process, coupling nonlinear biological activity to hydrodynamics, makes the optimization problem very delicate. The large number of parameters to be taken into account paves the way for useful mathematical modeling. Due to the heterogeneity of raceways along the depth dimension regarding temperature, light intensity or nutrient availability, we adopt...
A Combined Finite Element and Marker and Cell Method for Solving Navier-Stokes Equations. V. Girault (1976)
A continuity property for the inverse of Mañé's projection. Zdeněk Skalák (1998)
Let X be a compact subset of a separable Hilbert space H with finite fractal dimension d_F(X), and let P_0 be an orthogonal projection in H of rank greater than or equal to 2 d_F(X) + 1. Then for every δ > 0 there exists an orthogonal projection P in H of the same rank as P_0 which is injective when restricted to X and satisfies ‖P − P_0‖ < δ. This result follows from Mañé's paper. Thus the inverse (P|_X)^{−1} of the restricted mapping P|_X : X → PX is well defined. It is natural to ask whether there exists a universal modulus of continuity for the inverse of Mañé's...
A counterexample to the smoothness of the solution to an equation arising in fluid mechanics. Stephen Montgomery-Smith, Milan Pokorný (2002)
We analyze the equation coming from the Eulerian-Lagrangian description of fluids. We discuss a couple of ways to extend this notion to viscous fluids. The main focus of this paper is to discuss the first way, due to Constantin.
We show that this description can only work for short times, after which the "back to coordinates map" may have no smooth inverse. Then we briefly discuss a second way that uses Brownian motion. We use this to provide a plausibility argument for the global regularity for...
A direct proof of the Caffarelli-Kohn-Nirenberg theorem. Jörg Wolf (2008)
In the present paper we give a new proof of the Caffarelli-Kohn-Nirenberg theorem based on a direct approach. Given a pair (u, p) of suitable weak solutions to the Navier-Stokes equations in ℝ³ × ]0,∞[, the velocity field u satisfies the following property of partial regularity: the velocity u is Lipschitz continuous in a neighbourhood of a point (x₀, t₀) ∈ Ω × ]0,∞[ if limsup_{R→0⁺} (1/R) ∫_{Q_R(x₀,t₀)} |curl u × u/|u||² dx dt ≤ ε_* for a sufficiently small ε_* > 0.
Maria Francesca Carfora, Roberto Natalini (2008)
In this paper we introduce a new class of numerical schemes for the incompressible Navier-Stokes equations, which are inspired by the theory of discrete kinetic schemes for compressible fluids. For these approximations it is possible to give a stability condition, based on a discrete-velocities version of the Boltzmann H-theorem. Numerical tests are performed to investigate their convergence and accuracy.
A Discrete Solenoidal Finite Difference Scheme for the Numerical Approximation of Incompressible Flows. Manfred Dobrowolski (1989)
Hyam Abboud, Toni Sayah (2008)
We study a two-grid scheme, fully discrete in time and space, for solving the Navier-Stokes system. In the first step, the fully nonlinear problem is discretized in space on a coarse grid with mesh-size H and time step k. In the second step, the problem is discretized in space on a fine grid with mesh-size h and the same time step, and linearized around the velocity uH computed in the first step. The two-grid strategy is motivated by the fact that under suitable assumptions, the contribution of uH...
A functional-analytic approach to turbulent convection. B. Szafirski (1970)
A generalization of a theorem by Kato on Navier-Stokes equations. Marco Cannone (1997)
We generalize a classical result of T. Kato on the existence of global solutions to the Navier-Stokes system in C([0,∞); L³(ℝ³)). More precisely, we show that if the initial data are sufficiently oscillating, in a suitable Besov space, then Kato's solution exists globally. As a corollary to this result, we obtain a theory of existence of self-similar solutions for the Navier-Stokes equations.
A geometric improvement of the velocity-pressure local regularity criterion for a suitable weak solution to the Navier-Stokes equations
We deal with a suitable weak solution (v, p) to the Navier-Stokes equations in a domain Ω ⊂ ℝ³. We refine the criterion for the local regularity of this solution at the point (x₀, t₀), which uses the L³ norm of v and the L^{3/2} norm of p in a shrinking backward parabolic neighbourhood of (x₀, t₀). The refinement consists in the fact that only the values of v and p in the exterior of a space-time paraboloid with vertex at (x₀, t₀), respectively in a "small" subset of this exterior, are considered. The consequence is that...
A Global a posteriori Error Estimate for Quasilinear Elliptic Problems. M.S. Mock (1975)
David J. Knezevic, Endre Süli (2009)
We examine a heterogeneous alternating-direction method for the approximate solution of the FENE Fokker–Planck equation from polymer fluid dynamics, and we use this method to solve a coupled (macro-micro) Navier–Stokes–Fokker–Planck system for dilute polymeric fluids. In this context the Fokker–Planck equation is posed on a high-dimensional domain and is therefore challenging from a computational point of view. The heterogeneous alternating-direction scheme combines a spectral Galerkin method for...
A hybrid multigrid method for the steady-state incompressible Navier-Stokes equations. Pernice, Michael (2000)
A logarithmic regularity criterion for 3D Navier-Stokes system in a bounded domain. Jishan Fan, Xuanji Jia, Yong Zhou (2019)
This paper proves a logarithmic regularity criterion for the 3D Navier-Stokes system in a bounded domain with the Navier-type boundary condition.
A Mixed Finite Element Approximation of the Navier-Stokes Equations. P. LeTallec (1980)
A Mixed Finite Element Method for Solving the Nonstationary Stokes Equation. M. Dobrowolski (1980/1981)
C. Johnson (1978)
Precession - Wikipedia Periodic change in the direction of a rotation axis In astronomy, precession refers to any of several slow changes in an astronomical body's rotational or orbital parameters. An important example is the steady change in the orientation of the axis of rotation of the Earth, known as the precession of the equinoxes. Torque-free Torque-free precession implies that no external moment (torque) is applied to the body. In torque-free precession, the angular momentum is constant, but the angular velocity vector changes orientation with time. What makes this possible is a time-varying moment of inertia, or more precisely, a time-varying inertia matrix. The inertia matrix is composed of the moments of inertia of a body calculated with respect to separate coordinate axes (e.g. x, y, z). If an object is asymmetric about its principal axis of rotation, the moment of inertia with respect to each coordinate direction will change with time, while preserving angular momentum. The result is that the component of the angular velocity of the body about each axis will vary inversely with that axis' moment of inertia.
The torque-free precession rate of an object with an axis of symmetry, such as a disk, spinning about an axis not aligned with that axis of symmetry can be calculated as follows:[1] {\displaystyle \omega _{\mathrm {p} }={\frac {I_{\mathrm {s} }\omega _{\mathrm {s} }}{I_{\mathrm {p} }\cos \alpha }}} where ωp is the precession rate, ωs is the spin rate about the axis of symmetry, Is is the moment of inertia about the axis of symmetry, Ip is the moment of inertia about either of the other two equal perpendicular principal axes, and α is the angle between the moment of inertia direction and the symmetry axis.[2] For a generic solid object without any axis of symmetry, the evolution of the object's orientation, represented (for example) by a rotation matrix R that transforms internal to external coordinates, may be numerically simulated. Given the object's fixed internal moment of inertia tensor I0 and fixed external angular momentum L, the instantaneous angular velocity is {\displaystyle {\boldsymbol {\omega }}\left({\boldsymbol {R}}\right)={\boldsymbol {R}}{\boldsymbol {I}}_{0}^{-1}{\boldsymbol {R}}^{T}{\boldsymbol {L}}} Precession occurs by repeatedly recalculating ω and applying a small rotation vector ω dt for the short time dt; e.g.: {\displaystyle {\boldsymbol {R}}_{\text{new}}=\exp \left(\left[{\boldsymbol {\omega }}\left({\boldsymbol {R}}_{\text{old}}\right)\right]_{\times }dt\right){\boldsymbol {R}}_{\text{old}}} for the skew-symmetric matrix [ω]×.
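The update rule above can be sketched numerically. This is an illustrative numpy integrator, not from the article; Rodrigues' rotation formula stands in for the matrix exponential of the skew-symmetric matrix [ω]× dt.

```python
import numpy as np

# Sketch of torque-free precession: L is fixed in external coordinates,
# omega(R) = R I0^{-1} R^T L, and R is advanced by the small rotation
# exp([omega]_x dt), built here with Rodrigues' formula.

def skew(v):
    # Skew-symmetric matrix [v]_x, so that skew(v) @ u == np.cross(v, u).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def small_rotation(omega, dt):
    # Rotation matrix exp([omega]_x * dt) via Rodrigues' formula.
    theta = np.linalg.norm(omega) * dt
    if theta == 0.0:
        return np.eye(3)
    k = skew(omega / np.linalg.norm(omega))
    return np.eye(3) + np.sin(theta) * k + (1.0 - np.cos(theta)) * (k @ k)

def precess(R, I0, L, dt, steps):
    # I0: body-frame inertia tensor; L: constant external angular momentum.
    I0_inv = np.linalg.inv(I0)
    for _ in range(steps):
        omega = R @ I0_inv @ R.T @ L       # instantaneous angular velocity
        R = small_rotation(omega, dt) @ R  # R_new = exp([omega]_x dt) R_old
    return R
```

Each step multiplies by an exact rotation matrix, so R remains orthogonal throughout the simulation.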
The errors induced by finite time steps tend to increase the rotational kinetic energy

E(\mathbf{R}) = \boldsymbol{\omega}(\mathbf{R}) \cdot \frac{\mathbf{L}}{2}

This unphysical tendency can be counteracted by repeatedly applying a small rotation vector v perpendicular to both \omega and L, noting that

E\left(\exp\left([\mathbf{v}]_{\times}\right)\mathbf{R}\right) \approx E(\mathbf{R}) + \left(\boldsymbol{\omega}(\mathbf{R}) \times \mathbf{L}\right) \cdot \mathbf{v}

Torque-induced

Torque-induced precession (gyroscopic precession) is the phenomenon in which the axis of a spinning object (e.g., a gyroscope) describes a cone in space when an external torque is applied to it. The phenomenon is commonly seen in a spinning toy top, but all rotating objects can undergo precession. If the speed of the rotation and the magnitude of the external torque are constant, the spin axis will move at right angles to the direction that would intuitively result from the external torque. In the case of a toy top, its weight acts downward from its center of mass and the normal force (reaction) of the ground pushes up on it at the point of contact with the support. These two opposing forces produce a torque which causes the top to precess.

The gimbal-mounted device depicted in the accompanying figure has, from inside to outside, three axes of rotation: the hub of the wheel, the gimbal axis, and the vertical pivot. To distinguish between the two horizontal axes, rotation around the wheel hub will be called spinning, and rotation around the gimbal axis will be called pitching. Rotation around the vertical pivot axis is called rotation. In the figure, a section of the wheel has been labeled dm1.
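The update rule described above can be sketched numerically. The following is a minimal illustration (not taken from the article) that integrates torque-free precession of a rigid body with assumed principal moments of inertia, using SciPy's matrix exponential for exp([ω]× dt):

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """Cross-product matrix [w]x, so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

# Assumed principal moments of inertia of an asymmetric body and an
# arbitrary fixed external angular momentum (illustrative values only).
I0 = np.diag([1.0, 2.0, 3.0])
I0_inv = np.linalg.inv(I0)
L = np.array([0.5, 1.0, 2.0])

R = np.eye(3)      # orientation: internal -> external coordinates
dt = 1e-3

for _ in range(5000):
    w = R @ I0_inv @ R.T @ L        # instantaneous angular velocity
    R = expm(skew(w) * dt) @ R      # apply the small rotation w*dt
```

Because each step applies an exact rotation, R stays orthogonal and the external angular momentum reconstructed from the current orientation, R I0 Rᵀ ω, remains equal to L; the energy ω·L/2, however, drifts slowly with the step size, which is the unphysical tendency the correction term above counteracts.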
At the depicted moment in time, section dm1 is at the perimeter of the rotating motion around the (vertical) pivot axis. Section dm1 therefore has a large angular velocity with respect to the rotation around the pivot axis, and as dm1 is forced closer to that axis (by the wheel spinning further), the Coriolis effect, with respect to the vertical pivot axis, tends to move it in the direction of the top-left arrow in the diagram (shown at 45°), in the direction of rotation around the pivot axis.[3] Section dm2 of the wheel is moving away from the pivot axis, so a force (again, a Coriolis force) acts in the same direction as in the case of dm1. Note that both arrows point in the same direction.

In the discussion above, the setup was kept unchanging by preventing pitching around the gimbal axis. In the case of a spinning toy top, when the top starts tilting, gravity exerts a torque. However, instead of rolling over, the top just pitches a little. This pitching motion reorients the top with respect to the torque that is being exerted. The result is that the torque exerted by gravity – via the pitching motion – elicits gyroscopic precession (which in turn yields a counter-torque against the gravity torque) rather than causing the top to fall on its side.

Classical (Newtonian)

The torque caused by the normal force –Fg and the weight of the top causes a change in the angular momentum L in the direction of that torque. This causes the top to precess.

Precession is the change of angular velocity and angular momentum produced by a torque. The general equation relating the torque to the rate of change of angular momentum is

\boldsymbol{\tau} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}

where \boldsymbol{\tau} and \mathbf{L} are the torque and angular momentum vectors, respectively.
Due to the way the torque vector is defined, it is perpendicular to the plane of the forces that create it. Thus the angular momentum vector changes perpendicular to those forces. Depending on how the forces are created, they will often rotate with the angular momentum vector, and then circular precession is created. Under these circumstances the angular velocity of precession is given by:[4]

\omega_{\mathrm{p}} = \frac{mgr}{I_{\mathrm{s}}\,\omega_{\mathrm{s}}} = \frac{\tau}{I_{\mathrm{s}}\,\omega_{\mathrm{s}}\sin\theta}

where I_{\mathrm{s}} is the moment of inertia, \omega_{\mathrm{s}} is the angular velocity of spin about the spin axis, m is the mass, g is the acceleration due to gravity, \theta is the angle between the spin axis and the axis of precession, and r is the distance between the center of mass and the pivot. The torque vector originates at the center of mass. Using \omega = 2\pi/T, we find that the period of precession is given by:[5]

T_{\mathrm{p}} = \frac{4\pi^{2} I_{\mathrm{s}}}{mgr\,T_{\mathrm{s}}} = \frac{4\pi^{2} I_{\mathrm{s}}\sin\theta}{\tau\,T_{\mathrm{s}}}

where I_{\mathrm{s}} is the moment of inertia, T_{\mathrm{s}} is the period of spin about the spin axis, and \tau is the torque. In general, the problem is more complicated than this, however.

Relativistic (Einsteinian)

The special and general theories of relativity give three types of corrections to the Newtonian precession of a gyroscope near a large mass such as Earth, described above. They are:

Thomas precession, a special-relativistic correction accounting for an object (such as a gyroscope) being accelerated along a curved path.

de Sitter precession, a general-relativistic correction accounting for the Schwarzschild metric of curved space near a large non-rotating mass.
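As a quick illustration of these formulas, the following sketch (with toy-top parameter values assumed for illustration, not taken from the article) checks that the two expressions ω_p = mgr/(I_s ω_s) and T_p = 4π²I_s/(mgrT_s) describe the same precession, i.e. that T_p = 2π/ω_p:

```python
import math

# Assumed parameters for a small toy top (illustrative values only)
m = 0.1                      # mass, kg
g = 9.81                     # gravitational acceleration, m/s^2
r = 0.02                     # pivot-to-center-of-mass distance, m
I_s = 5e-5                   # moment of inertia about the spin axis, kg m^2
omega_s = 2 * math.pi * 20   # spin rate: 20 revolutions per second, in rad/s

omega_p = m * g * r / (I_s * omega_s)            # precession rate, rad/s
T_s = 2 * math.pi / omega_s                      # spin period, s
T_p = 4 * math.pi**2 * I_s / (m * g * r * T_s)   # precession period, s
```

Substituting T_s = 2π/ω_s into the period formula recovers T_p = 2πI_sω_s/(mgr) = 2π/ω_p, so the two results must agree exactly.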
Lense–Thirring precession, a general-relativistic correction accounting for the frame dragging by the Kerr metric of curved space near a large rotating mass.

In astronomy, precession refers to any of several gravity-induced, slow and continuous changes in an astronomical body's rotational axis or orbital path. Precession of the equinoxes, perihelion precession, changes in the tilt of Earth's axis to its orbit, and the eccentricity of its orbit over tens of thousands of years are all important parts of the astronomical theory of ice ages. (See Milankovitch cycles.)

Axial precession (precession of the equinoxes)

Axial precession is the movement of the rotational axis of an astronomical body, whereby the axis slowly traces out a cone. In the case of Earth, this type of precession is also known as the precession of the equinoxes, lunisolar precession, or precession of the equator. Earth goes through one such complete precessional cycle in a period of approximately 26,000 years, or 1° every 72 years, during which the positions of stars slowly change in both equatorial coordinates and ecliptic longitude. Over this cycle, Earth's north axial pole moves from where it is now, within 1° of Polaris, in a circle around the ecliptic pole, with an angular radius of about 23.5°.

The ancient Greek astronomer Hipparchus (c. 190–120 BC) is generally accepted to be the earliest known astronomer to recognize and assess the precession of the equinoxes, at about 1° per century (not far from the actual value for antiquity, 1.38°),[6] although there is some minor dispute about this attribution.[7] In ancient China, the Jin-dynasty scholar-official Yu Xi (fl. 307–345 AD) made a similar discovery centuries later, noting that the position of the Sun during the winter solstice had drifted roughly one degree over the course of fifty years relative to the position of the stars.[8] The precession of Earth's axis was later explained by Newtonian physics.
Being an oblate spheroid, Earth has a non-spherical shape, bulging outward at the equator. The gravitational tidal forces of the Moon and Sun apply torque to the equator, attempting to pull the equatorial bulge into the plane of the ecliptic, but instead causing it to precess. The torque exerted by the planets, particularly Jupiter, also plays a role.[9]

Precessional movement of the axis (left), precession of the equinox in relation to the distant stars (middle), and the path of the north celestial pole among the stars due to precession; Vega is the bright star near the bottom (right).

Apsidal precession

Apsidal precession – the orbit rotates gradually over time.

The orbits of planets around the Sun do not really follow an identical ellipse each time, but actually trace out a flower-petal shape, because the major axis of each planet's elliptical orbit also precesses within its orbital plane, partly in response to perturbations in the form of the changing gravitational forces exerted by other planets. This is called perihelion precession or apsidal precession.

In the adjacent image, Earth's apsidal precession is illustrated. As the Earth travels around the Sun, its elliptical orbit rotates gradually over time. The eccentricity of its ellipse and the precession rate of its orbit are exaggerated for visualization. Most orbits in the Solar System have a much smaller eccentricity and precess at a much slower rate, making them nearly circular and nearly stationary.
Discrepancies between the observed perihelion precession rate of the planet Mercury and that predicted by classical mechanics were prominent among the forms of experimental evidence leading to the acceptance of Einstein's theory of relativity (in particular, his general theory of relativity), which accurately predicted the anomalies.[10][11] Deviating from Newton's law, Einstein's theory of gravitation predicts an extra term of A/r^4, which accurately gives the observed excess turning rate of 43″ every 100 years.

Nodal precession

Main article: Nodal precession

Orbital nodes also precess over time. For the precession of the Moon's orbit, see lunar precession.

References

^ Schaub, Hanspeter (2003). Analytical Mechanics of Space Systems. AIAA. pp. 149–150. ISBN 9781600860270.
^ Boal, David (2001). "Lecture 26 – Torque-free rotation – body-fixed axes" (PDF). Retrieved 2008-09-17.
^ Teodorescu, Petre P. (2002). Mechanical Systems, Classical Models: Volume II: Mechanics of Discrete and Continuous Systems. Springer Science & Business Media. p. 420. ISBN 978-1-4020-8988-6.
^ Moebs, William; Ling, Samuel J.; Sanny, Jeff (2016). "11.4 Precession of a Gyroscope". University Physics Volume 1. Houston, Texas: OpenStax. Retrieved 23 October 2020.
^ Barbieri, Cesare (2007). Fundamentals of Astronomy. New York: Taylor and Francis Group. p. 71. ISBN 978-0-7503-0886-1.
^ Swerdlow, Noel (1991). "On the cosmical mysteries of Mithras". Classical Philology, 86, 48–63, p. 59.
^ Sun, Kwok (2017). Our Place in the Universe: Understanding Fundamental Astronomy from Ancient Discoveries, second edition. Cham, Switzerland: Springer. ISBN 978-3-319-54171-6, p. 120; see also Needham, Joseph; Wang, Ling (1995) [1959]. Science and Civilization in China: Mathematics and the Sciences of the Heavens and the Earth, vol. 3, reprint edition. Cambridge: Cambridge University Press. ISBN 0-521-05801-5, p. 220.
^ Bradt, Hale (2007). Astronomy Methods.
Cambridge University Press. p. 66. ISBN 978-0-521-53551-9.
^ "An even larger value for a precession has been found, for a black hole in orbit around a much more massive black hole, amounting to 39 degrees each orbit".
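The 43″-per-century figure for Mercury quoted above can be checked numerically from the standard general-relativistic formula for the perihelion advance per orbit, Δφ = 6πGM/(c²a(1−e²)); the formula and the orbital constants below are standard values assumed here, not taken from the article:

```python
import math

# Physical and orbital constants (standard values, assumed for this check)
GM_sun = 1.32712440018e20   # gravitational parameter of the Sun, m^3/s^2
c = 2.99792458e8            # speed of light, m/s
a = 5.7909e10               # semi-major axis of Mercury's orbit, m
e = 0.2056                  # orbital eccentricity of Mercury
T_orbit_days = 87.969       # Mercury's orbital period, days

# Relativistic perihelion advance per orbit, in radians
dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))

# Accumulate over a century and convert to arcseconds
orbits_per_century = 100 * 365.25 / T_orbit_days
arcsec_per_century = math.degrees(dphi) * 3600 * orbits_per_century
# arcsec_per_century comes out close to the observed 43 arcseconds
```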
Series and parallel connections — lesson. Science State Board, Class 10.

Series connection of parallel resistors:

A series-parallel circuit is formed by connecting a set of parallel resistors in series. Connect \(R_1\) and \(R_2\) in parallel to obtain an effective resistance of \(R_{P1}\). Similarly, connect \(R_3\) and \(R_4\) in parallel to get an effective resistance of \(R_{P2}\). These parallel segments of resistors are then joined in series.

Series-parallel combination of resistors

The formula for the effective resistance of the parallel combination of resistors is

\frac{1}{R_P} = \frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_3} + \dots

For two resistors in the circuit, the effective resistance is given as

R_{P1} = \frac{R_1 R_2}{R_1 + R_2}, \qquad R_{P2} = \frac{R_3 R_4}{R_3 + R_4}

Using the effective resistance of the series circuit, R_S = R_1 + R_2 + R_3, the net effective resistance of the series-parallel combination of resistors is

R = R_{P1} + R_{P2} = \frac{R_1 R_2}{R_1 + R_2} + \frac{R_3 R_4}{R_3 + R_4}

Parallel connection of series resistors:

A parallel-series circuit is formed by connecting a set of series resistors in parallel. Connect \(R_1\) and \(R_2\) in series to get an effective resistance of \(R_{S1}\). Similarly, connect \(R_3\) and \(R_4\) in series to get an effective resistance of \(R_{S2}\). These series segments of resistors are then joined in parallel.

Parallel-series combination of resistors

Using the effective resistance of the series circuit, R_S = R_1 + R_2 + R_3, we have \(R_{S1} = R_1 + R_2\) and \(R_{S2} = R_3 + R_4\). Using the formula for the effective resistance of the parallel combination of resistors, the net effective resistance of the parallel-series combination of resistors is

\frac{1}{R} = \frac{1}{R_{S1}} + \frac{1}{R_{S2}}, \qquad R = \frac{(R_1 + R_2)(R_3 + R_4)}{R_1 + R_2 + R_3 + R_4}
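Both combinations above can be computed with two small helper functions. This is a sketch to accompany the lesson (the resistor values and helper names are illustrative, not part of the lesson):

```python
def series(*rs):
    """Effective resistance of resistors in series: R = R1 + R2 + ..."""
    return sum(rs)

def parallel(*rs):
    """Effective resistance of resistors in parallel: 1/R = 1/R1 + 1/R2 + ..."""
    return 1 / sum(1 / r for r in rs)

# Illustrative values, in ohms
R1, R2, R3, R4 = 2.0, 2.0, 3.0, 6.0

# Series connection of parallel pairs: (R1 || R2) + (R3 || R4)
series_of_parallel = series(parallel(R1, R2), parallel(R3, R4))

# Parallel connection of series pairs: (R1 + R2) || (R3 + R4)
parallel_of_series = parallel(series(R1, R2), series(R3, R4))
```

With these values, the series-of-parallel circuit gives 1 Ω + 2 Ω = 3 Ω, and the parallel-of-series circuit gives (4 × 9)/(4 + 9) = 36/13 Ω, matching the closed-form expressions above.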
Canonical State-Space Realizations - MATLAB & Simulink - MathWorks Deutschland

For example, a system with eigenvalues \left(\lambda_1, \sigma \pm j\omega, \lambda_2\right) has the modal-form block-diagonal matrix

A = \begin{bmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \sigma & \omega & 0 \\ 0 & -\omega & \sigma & 0 \\ 0 & 0 & 0 & \lambda_2 \end{bmatrix}

In the companion realization, the characteristic polynomial of the system appears explicitly in the rightmost column of the A matrix. You can obtain the companion canonical form of your system by using the canon (System Identification Toolbox) command. For a system with characteristic polynomial

P(s) = s^{n} + \alpha_1 s^{n-1} + \dots + \alpha_{n-1} s + \alpha_n

the companion A matrix has ones on the first subdiagonal and the negated polynomial coefficients in the last column:

A = \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & -\alpha_n \\ 1 & 0 & 0 & \cdots & 0 & -\alpha_{n-1} \\ 0 & 1 & 0 & \cdots & 0 & -\alpha_{n-2} \\ 0 & 0 & 1 & \cdots & 0 & -\alpha_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & -\alpha_1 \end{bmatrix}

For a transfer function

\frac{Q(s)}{P(s)} = \frac{b_0 s^{n} + b_1 s^{n-1} + \dots + b_{n-1} s + b_n}{s^{n} + \alpha_1 s^{n-1} + \dots + \alpha_{n-1} s + \alpha_n}

the observable companion realization uses A_o equal to the companion matrix above, together with

B_o = \begin{bmatrix} b_n - \alpha_n b_0 \\ b_{n-1} - \alpha_{n-1} b_0 \\ b_{n-2} - \alpha_{n-2} b_0 \\ \vdots \\ b_1 - \alpha_1 b_0 \end{bmatrix}, \qquad C_o = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}, \qquad D_o = b_0

The controllable companion realization of the same transfer function is

A_c = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -\alpha_n & -\alpha_{n-1} & -\alpha_{n-2} & \cdots & -\alpha_1 \end{bmatrix}, \qquad B_c = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}

C_c = \begin{bmatrix} b_n - \alpha_n b_0 & b_{n-1} - \alpha_{n-1} b_0 & b_{n-2} - \alpha_{n-2} b_0 & \cdots & b_1 - \alpha_1 b_0 \end{bmatrix}, \qquad D_c = b_0

The two realizations are duals (transposes) of each other:

A_c = A_o^{T}, \quad B_c = C_o^{T}, \quad C_c = B_o^{T}, \quad D_c = D_o

See also: canon | ss
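As a numerical check of the controllable companion matrices defined above, the following sketch (using NumPy rather than MATLAB, with an illustrative third-order transfer function) builds A_c, B_c, C_c, D_c and verifies that C(sI − A)⁻¹B + D reproduces Q(s)/P(s) at a test point:

```python
import numpy as np

# Illustrative cubic transfer function Q(s)/P(s):
# P(s) = s^3 + a1*s^2 + a2*s + a3, Q(s) = b0*s^3 + b1*s^2 + b2*s + b3
a = [2.0, 3.0, 4.0]          # alpha_1..alpha_n
b = [1.0, 0.5, -1.0, 2.0]    # b_0..b_n
n = len(a)
b0, btail = b[0], b[1:]

# Controllable companion form: ones on the superdiagonal,
# negated alpha coefficients (reversed) in the bottom row.
A = np.zeros((n, n))
A[:-1, 1:] = np.eye(n - 1)
A[-1, :] = [-a[n - 1 - j] for j in range(n)]           # [-a_n, ..., -a_1]
B = np.zeros((n, 1))
B[-1, 0] = 1.0
C = np.array([[btail[n - 1 - j] - a[n - 1 - j] * b0    # [b_n - a_n*b0, ...]
               for j in range(n)]])
D = np.array([[b0]])

# Evaluate the realization's transfer function at a test point s
s = 1.7
H_ss = (C @ np.linalg.solve(s * np.eye(n) - A, B) + D)[0, 0]
H_tf = np.polyval(b, s) / np.polyval([1.0] + a, s)
# H_ss and H_tf agree to machine precision
```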
u_t - \Delta u = u^{p} \text{ with } 0 < p < 1. J. Aguirre, M. Escobedo (1986/1987)

A Commuting Vectorfields Approach to Strichartz type Inequalities and Applications to Quasilinear Wave Equations. Sergiu Klainerman (1999/2000)

[Fragmentary entry on S^1-valued fields m'(x') with the energy E_\epsilon(m') = \epsilon \int |\nabla' \cdot m'|^2 \, dx' + \frac{1}{2} \int \left| |\nabla'|^{-1/2} \nabla' \cdot m' \right|^2 \, dx', \epsilon > 0, studied in the limit \epsilon \to 0, with the divergence constraint \nabla' \cdot m' = 0.]

A counterexample in regularity theory for parabolic systems

A counterexample to Schauder estimates for elliptic operators with unbounded coefficients. Enrico Priola (2001). We consider a homogeneous elliptic Dirichlet problem involving an Ornstein-Uhlenbeck operator in a half space \mathbb{R}^2_+ of \mathbb{R}^2. We show that for a particular initial datum, which is Lipschitz continuous and bounded on \mathbb{R}^2_+, the second derivative of the classical solution is not uniformly continuous on \mathbb{R}^2_+. In particular this implies that the well-known maximal Hölder-regularity results fail in general for Dirichlet problems in unbounded domains involving unbounded coefficients.

A criterion of Petrowsky's kind for a degenerate quasilinear parabolic equation. Peter Lindqvist (1995). The celebrated criterion of Petrowsky for the regularity of the latest boundary point, originally formulated for the heat equation, is extended to the so-called p-parabolic equation. A barrier is constructed by the aid of the Barenblatt solution.

A degenerate parabolic equation in noncylindrical domains. M. Bertsch, R. Dal Passo, B. Franchi (1992)

A distributional solution to a hyperbolic problem arising in population dynamics.
Kmit, Irina (2007)

[Fragmentary entry: a local regularity criterion for pairs (\mathbf{v}, p) on a domain \Omega \subset \mathbb{R}^3, involving L^3 integrability of \mathbf{v} and L^{3/2} integrability of p near a point (\mathbf{x}_0, t_0).]

A global differentiability result for solutions of nonlinear elliptic problems with controlled growths. Luisa Fattorusso (2008). Let \Omega be a bounded open subset of \mathbb{R}^n, n > 2. On \Omega we deduce the global differentiability result u \in H^2(\Omega, \mathbb{R}^N) for the solutions u \in H^1(\Omega, \mathbb{R}^N) of the Dirichlet problem u - g \in H^1_0(\Omega, \mathbb{R}^N), -\sum_i D_i a^i(x, u, Du) = B_0(x, u, Du) with controlled growth and nonlinearity q = 2. The result was obtained by first extending the interior differentiability result near the boundary and then proving the global differentiability result making use of a covering procedure.

A Harnack inequality approach to the regularity of free boundaries. Part I: Lipschitz free boundaries are C^{1,\alpha}. This is the first in a series of papers where we intend to show, in several steps, the existence of classical (or as classical as possible) solutions to a general two-phase free-boundary system. We plan to do so by: (a) constructing rather weak generalized solutions of the free-boundary problems; (b) showing that the free boundaries of such solutions have nice measure-theoretical properties (i.e., finite (n-1)-dimensional Hausdorff measure and the associated differentiability properties); (c) showing...

A maximal regularity result with applications to parabolic problems with nonhomogeneous boundary conditions. Davide Guidetti (1990)

A new partial regularity proof for solutions of nonlinear elliptic systems. Christoph Hamburger (1998)

A new proof of Harnack's inequality for elliptic partial differential equations in divergence form.
Crescimbeni, Raquel; Forzani, Liliana; Perini, Alejandra (2007)

A Note on Div-Curl Lemma. Gala, Sadek (2007). 2000 Mathematics Subject Classification: 42B30, 46E35, 35B65. We prove two results concerning the div-curl lemma without assuming any sort of exact cancellation, namely the divergence and curl need not be zero, and \operatorname{div}(\vec{u}\,\vec{v}) \in H^1(\mathbb{R}^d), which include as a particular case the result of [3].

A note on local smoothing effects for the unitary group associated with the KdV equation. Carvajal, Xavier (2008)
Physics - Controlling Single Photons with Rydberg Superatoms

Wenchao Xu and Vladan Vuletić

Figure 1: Sketch of the superatom-cavity scheme for photon manipulation. (Top) If the superatom is in its ground state, photons pass through the cavity. (Bottom) If the superatom is in its excited state, photons are reflected at the cavity’s entrance port, and their optical phase is shifted by 𝜋. The phase shift can be controlled by creating a Rydberg excitation (red dot) in the atomic cloud.

The past decade has witnessed swift progress in the development and application of quantum technologies. Many promising directions involve using photons, the smallest energy packets of light, as carriers of quantum information [1]. Photons at optical wavelengths can be quickly transported through optical fibers over long distances and with negligible noise, even at room temperature. Unfortunately, one drawback is that photons do not normally interact with each other, which makes it challenging to manipulate a photon with another photon. Optical photons also couple weakly with other quantum systems, such as superconducting qubits, which makes it hard to interface these platforms with photons. Now, two research groups, one led by Alexei Ourjoumtsev at PSL University, France, and the other by Stephan Dürr and Gerhard Rempe at the Max Planck Institute for Quantum Optics, Germany, demonstrate all-optical schemes for realizing operations on photons [2, 3]. The schemes hold potential for building components for both photonic quantum computers and quantum networks.
To achieve the desired controllable interactions between individual photons, a physical platform is required that exhibits an extremely large optical nonlinearity, such that its optical response is different for one and two photons. Individual atoms, in principle, exhibit a large nonlinearity due to an effect called saturation (one atom can only absorb one photon at a time). However, the coupling between a single atom and a single photon is weak, meaning that a deterministic manipulation of the photons is impossible—a logic device would have a less-than-certain probability of performing its function. Recent attempts to enhance this coupling involve the use of a high-quality optical cavity, where a photon bouncing between the mirrors has a chance to interact multiple times with an atom placed inside the cavity [4]. However, sizable coupling enhancements require extremely high-quality mirrors and cavity stabilization schemes, which are technically challenging to realize. These challenges have so far limited the coupling strength achievable with individual atoms. A second approach is to generate an artificial, two-level atom comprising many atoms—a superatom. One such scheme involves Rydberg atoms, which have one of their outermost electrons excited to a state with a large principal quantum number, and which can interact strongly with each other over micrometer scales [5]. This strong interaction means that, in an ensemble of atoms, the excitation of one atom to a Rydberg state can block the Rydberg excitation of a second atom, permitting the absorption of only one photon at a time. Under these conditions, the whole ensemble acts like a superatom, in which the transition between ground and excited states has a giant cross section for the absorption of a photon. However, if too many atoms are packed into a small volume, atomic collisions will cause the dephasing of the quantum coherence between the two superatom states. 
Such dephasing limits the maximum density of the atoms and thereby the achievable atom-photon coupling strength [6]. To overcome these limitations, the teams combine the two approaches, placing a Rydberg superatom inside an optical cavity. In both studies, the researchers use light beams to coherently manipulate the state of the superatom inside the cavity. They then show that, depending on the state of the superatom, the optical cavity manifests different optical responses (Fig. 1). If the superatom remains in its ground state, photons can pass through the cavity, which displays high transmission. But if the superatom is in its excited Rydberg state, photons cannot enter the cavity and will be reflected at its entrance port. Upon reflection, the optical phase of each photon will be shifted by 𝜋. Therefore, this 𝜋-phase shift can be controlled by switching the superatom’s state. The two studies show that the superatom-cavity system displays features that enable reliable and efficient control over photons. First, the superatom’s state inside the optical cavity can be determined nondestructively by monitoring photon transmission through the cavity. Ourjoumtsev’s group demonstrates that such nondestructive detection can be obtained in a single shot with a 95% fidelity [2]. A fast and nondestructive detection would be crucial for implementing quantum error correction. Second, the 𝜋-phase shift that is conditional on the superatom state can be harnessed to realize an important logical component for quantum operations: a controllable two-qubit gate. Dürr and Rempe’s group experimentally demonstrates one such gate—a CNOT gate that switches one qubit’s state if and only if the other qubit is in its “1” state [3]. This demonstration was a true tour de force that eliminated many imperfections that can lead to the loss of the control or target photons.
As a result, the demonstrated gate features a record-high efficiency of over 40% (efficiency is defined as the probability that the gate performs its operation), a value more than 3 times larger than the previous record of 11%. Gate efficiency is a key bottleneck for the development of photonic quantum computation. Low efficiency means that additional hardware and operations are needed to ensure the operation is accurately executed. The breakthrough in efficiency demonstrated here would thus significantly reduce the overhead needed for reliable quantum computation. Together, the results by the two groups encourage the exploration of similar concepts that could be used in optical quantum-computing schemes. Beyond its computation potential, the efficient and precise control over photons achieved with this Rydberg superatom-cavity platform constitutes an important step toward realizing high-speed quantum-communication networks based on photons exchanged through conventional optical fibers. In such networks, the cavity-superatom system could serve as a quantum memory or as an optical switch that preserves quantum coherence. Finally, the new system may be used as a transducer that connects different quantum-information platforms. Such transduction could be realized by connecting the transition between ground and Rydberg states, which typically lies at optical frequencies, with transitions between the many Rydberg states for an atom, which lie in the microwave range from a few GHz to 100 GHz. As such, Rydberg superatoms could be employed to coherently convert microwave photons into optical photons and vice versa. This functionality opens pathways toward new hybrid quantum technologies that transduce quantum information between optical and microwave photons, which can couple, respectively, to atomic qubits and to superconducting qubits. T. E. Northup and R. Blatt, “Quantum information transfer using photons,” Nat. Photonics 8, 356 (2014). J. 
Vaneecloo et al., “Intracavity Rydberg superatom for optical quantum engineering: Coherent control, single-shot detection, and optical 𝜋 phase shift,” Phys. Rev. X 12, 021034 (2022). T. Stolz et al., “Quantum-logic gate between two optical photons with an average efficiency above 40%,” Phys. Rev. X 12, 021035 (2022). A. Reiserer and G. Rempe, “Cavity-based quantum networks with single atoms and optical photons,” Rev. Mod. Phys. 87, 1379 (2015). M. D. Lukin et al., “Dipole blockade and quantum information processing in mesoscopic atomic ensembles,” Phys. Rev. Lett. 87, 037901 (2001). A. Gaj et al., “From molecular spectra to a density shift in dense Rydberg gases,” Nat. Commun. 5, 4546 (2014).

Vladan Vuletic received his Ph.D. from the University of Munich and has previously worked at the Max Planck Institute for Quantum Optics in Garching, Germany, and at Stanford University. He is now a professor at the Massachusetts Institute of Technology, where his research group studies many-body entanglement, quantum measurements, cavity quantum electrodynamics, and strong photon-photon interactions.
Physics - Dipolar Gas Chilled to Near Zero

The cooling of strongly dipolar molecules to their absolute ground state has opened the possibility of creating new forms of matter. Jee Woo Park and Sebastian Will/MIT Cooling dipolar molecules to near absolute zero could produce new quantum states of matter, as the dipoles exert strong long-range forces on each other that are not found in nature. Researchers have now chilled a gas of sodium-potassium (NaK) molecules in their absolute ground state to microkelvin temperatures. The NaK molecules have a stronger dipole interaction and are more stable than previous superchilled molecules, which makes them ideal for exploring the impact of dipolar interactions in the quantum regime. Molecules differ from atoms in that they rotate and vibrate. These extra degrees of freedom complicate efforts to cool molecules with traditional laser techniques. To circumvent these challenges, researchers have recently been able to form certain ultracold molecules—the fermionic KRb and the bosonic RbCs—directly out of a gas of ultracold atoms, and to subsequently transfer them into their rovibrational ground state. Martin Zwierlein and his colleagues at MIT have added the fermionic NaK to this small class of ultracold molecules. The team started with cold sodium and potassium atoms and then used a magnetic field to access a so-called Feshbach resonance that combines the atoms into weakly bound molecules. Subsequently, a pair of lasers couples this Feshbach state to the rovibrational ground state, allowing a smooth transfer to the lowest energy state without adding kinetic energy to the gas. Because NaK doesn’t easily dissociate, the team found it has a relatively long lifetime (greater than 2.5 seconds), which benefits future experiments.
If the temperature can be lowered by another order of magnitude, the NaK gas will enter the quantum degenerate regime where exotic forms of matter are predicted, such as a topological superfluid that contains Majorana fermions or a dipolar quantum crystal that could be solid and superfluid at the same time.

Ultracold Dipolar Gas of Fermionic ²³Na⁴⁰K Molecules in Their Absolute Ground State
Thoughts about light bending – ebvalaim.log

I had a sudden moment of clarity recently when thinking about how to remove a graphical artifact from the Black Hole Simulator. The current simulator has one ugly aspect of the rendered image: exactly 90 degrees from the direction to the black hole, a graphical artifact appears - a strip of smudged, incorrectly calculated pixels (see the picture to the right). The reason is hidden deep in the rendering mechanism. In short, it looks like this: you can't efficiently calculate the color of every pixel by raytracing. Taking advantage of the symmetry of the Schwarzschild black hole, I created a table of angles of light deflection. Thanks to the symmetry, I can describe each ray of light by only one parameter - simply put, the minimal distance from the black hole along its path (actually, the impact parameter). To calculate the deflection, I need one more thing, and that is the distance from the black hole at which I send/receive the ray - the longer the part of the path the ray has to travel, the greater the deflection. Such a table was sent to the graphics card. Then, during rendering, a direction of the ray was calculated for each pixel, and this was converted into the impact parameter. The distance was known independently. The appropriate deflection was read from the table and used to calculate the color of the pixel. In theory everything is fine, but one problem appeared: light rays sent in directions close to 90 degrees from the black hole have very similar impact parameters. This gives nearly identical deflection angles, which makes many pixels the same color. An ugly strip appears. The whole problem arises because the impact parameter is used to describe the light ray. This makes it easier to generate the deflection table, but as you can see, it causes big problems. What if we abandoned this idea?
Let's try to describe each ray with the distance from the black hole and the angle between its direction and the direction to the black hole. First we need a way of generating the four-velocity of a ray, given an angle. It is done most conveniently using two unit, orthogonal spatial vectors (with which we generate a circle) and one timelike vector, added with an appropriate coefficient such that we get a null vector (null vectors describe light rays). In order to be able to describe the interior of the black hole as well as its exterior, the simulator uses the Eddington-Finkelstein coordinates (u, r, \theta, \phi) (I think the literature denotes what I call u by v instead - I'll leave it as u , because I use this notation everywhere and a change would cause mistakes). In these coordinates \partial_u corresponds to \partial_t in Schwarzschild coordinates, and \partial_r is a null vector directed to the past. \partial_r being directed to the past is actually a good thing - the observer doesn't send light rays, but receives them. The calculations are started at the observer, though, so we need to follow the paths of the rays into the past; the initial direction should thus point to the past. The \partial_r vector is then a great candidate for one of our parametrized rays - I'll want to get it as the result for the angle \alpha=\pi ( \alpha will be the angle from the direction towards the black hole). Let's now find a good ray for the direction \alpha=0 . For now we will only consider the radial direction and time, which means that we don't care about \theta and \phi . This leaves us with v = a\partial_r + b\partial_u . It would be nice for this vector to be "normalized": g(v, \partial_r) = 1 . And, of course, it should be a null vector: g(v, v) = 0 . In E-F coordinates we have g_{uu} = \left(1-\frac{2M}{r}\right) , g_{ur} = -1 , g_{rr} = 0 .
This yields: 1 = -b and 0 = -2ab + b^2\left(1-\frac{2M}{r}\right) , so b = -1 and a = \frac{M}{r} - \frac{1}{2} . From this we get v = \left(\frac{M}{r} - \frac{1}{2}\right) \partial_r - \partial_u . Having two null vectors, one pointing towards and one away from the black hole, it is easy to find a timelike vector - we just add them together, v+\partial_r , and normalize the result. This yields: T = \frac{1}{\sqrt{2}}\left[ \left( \frac{M}{r} + \frac{1}{2} \right) \partial_r - \partial_u \right] . Now, one of the spatial vectors will be a combination of \partial_r and T : A = \sqrt{2}\partial_r - T . The second one will simply be in the direction of \partial_\phi : B = \frac{1}{r}\partial_\phi . We now have a complete orthonormal basis for the rays. A ray sent at an angle \alpha to the direction towards the black hole will have its four-velocity equal to: U(\alpha) = T - A\cos\alpha + B\sin\alpha ( A is negated so that \alpha=0 means the direction to the black hole, not away from it). Because our spatial vectors are orthogonal to T , -T is the four-velocity of the observer relative to whom we generate the circle. As it turns out, this observer moves relative to the black hole, which means that \alpha can't be directly converted into a pixel of the background, but fortunately that's not a problem: \alpha will only be needed as a parametrization of the table, while the final point reached by the ray is defined by the \phi coordinate of the ray's position, which is uniformly distributed in the resting frame. Small problems may arise when picking appropriate \alpha angles for the entries in the table, but I think I'll solve this in an entirely different way. The pixel calculation algorithm will therefore look like this: calculate the direction of the ray for each pixel; calculate \alpha from the direction (by \cos\alpha = -g(U, A) ); read the deflection angle for the given \alpha and r ; find the proper pixel.
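To double-check the algebra, here is a small numerical sketch (plain NumPy; the metric components and sign conventions are the ones stated above, and all variable and function names are mine) verifying that v is null and normalized against \partial_r , and that T, A, B form an orthonormal triad making U(\alpha) null for every \alpha :

```python
import numpy as np

M, r = 1.0, 10.0  # sample values; any r > 2M works

# Eddington-Finkelstein metric in (u, r, phi) components, with the
# conventions used above: g_uu = 1 - 2M/r, g_ur = -1, g_rr = 0,
# and g_phiphi = -r^2 (signature +,-,-)
G = np.array([
    [1.0 - 2.0 * M / r, -1.0, 0.0],
    [-1.0, 0.0, 0.0],
    [0.0, 0.0, -r**2],
])

def g(x, y):
    return x @ G @ y

d_u = np.array([1.0, 0.0, 0.0])
d_r = np.array([0.0, 1.0, 0.0])
d_phi = np.array([0.0, 0.0, 1.0])

# Ingoing null ray v = (M/r - 1/2) d_r - d_u
v = (M / r - 0.5) * d_r - d_u

# Observer four-velocity T and the two spatial unit vectors A, B
T = ((M / r + 0.5) * d_r - d_u) / np.sqrt(2.0)
A = np.sqrt(2.0) * d_r - T
B = d_phi / r

def U(alpha):
    # Ray four-velocity at angle alpha from the direction to the hole
    return T - np.cos(alpha) * A + np.sin(alpha) * B
```

Evaluating g(v, v), g(v, d_r), g(T, T), g(A, A), g(B, B) and g(U(alpha), U(alpha)) reproduces 0, 1, 1, -1, -1 and 0 respectively, matching the construction.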
What is left to decide is what the values in the table should actually be. If the simulator is to render the black hole also from great distances, we will need something that extrapolates nicely or can be calculated approximately when the observer is far from the black hole; since possible distances are infinite, we won't be able to generate table values for all of them. A nice candidate is the difference from the final angle that we would get in flat space. The final \phi in flat space is easily found: we take U(\alpha) with M=0 , find its components in the directions \partial_t and \partial_R (remembering that \partial_u = \partial_t and \partial_r = \partial_R - \partial_t ) in ordinary spherical coordinates, and from that calculate \cos \alpha' . We get: \cos\alpha' = \frac{3\cos\alpha - 1}{3-\cos\alpha} . Assuming that the observer is at \phi=\pi and \alpha=0 is in the direction of the black hole, \alpha' will be exactly the final \phi of the ray with no deflection. We can subtract that from the raytraced coordinate and thus find the difference from the flat case, which should be approximately \frac{4M}{r_{min}} for large distances ( r_{min} being the minimal distance from the black hole along the path of the ray, related to the impact parameter by \frac{1}{b^2} = \left(1-\frac{2M}{r_{min}}\right)\frac{1}{r_{min}^2} ). For further improvement, we can try to approximate the deflections by an analytic function of the angle for each distance. It is something I intend to explore a bit more. If it can be done, what will be left is putting the values of the coefficients defining the function in the table and then interpolating them between predefined distances. This would probably be the best way. One important thing to remember: for a range of the angles there won't be any values of the deflection. That's because the rays sent in some directions will hit the black hole.
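As a quick spot-check of the flat-space formula (the function name here is illustrative only), the limiting cases behave as expected: a ray sent straight at the black hole ( \alpha=0 ) ends at \alpha'=0 , one sent directly away ( \alpha=\pi ) stays at \alpha'=\pi , and \alpha' increases monotonically with \alpha in between:

```python
import math

def flat_final_cos(alpha):
    # cos(alpha') of the undeflected, flat-space (M = 0) ray,
    # using the formula derived above
    c = math.cos(alpha)
    return (3.0 * c - 1.0) / (3.0 - c)
```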
The critical impact parameter will be given by r_{min} = 3M (the radius of the photon sphere), which gives b = 3\sqrt{3}M . Calculating the critical \alpha from that will be relatively simple. My plan for the next days is therefore this: try to generate the deflection angles for different values of r , draw charts and check if some reasonable function can be chosen to approximate them. Of course, I will describe the progress here :) 'Till the next time!
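The critical value follows directly from the impact-parameter relation quoted above; a throwaway helper (geometric units, my own naming) confirms that substituting the photon-sphere radius r_{min} = 3M does give b = 3\sqrt{3}M :

```python
import math

def impact_parameter(r_min, M):
    # invert 1/b^2 = (1 - 2M/r_min) / r_min^2
    return r_min / math.sqrt(1.0 - 2.0 * M / r_min)
```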
Estimate uncertainty of market impact cost - MATLAB timingRisk - MathWorks Australia

timingRisk - Estimate timing risk for stocks, i.e., the uncertainty of the market impact cost estimate.

tr = timingRisk(k,trade) returns the uncertainty of the market impact cost estimate, or timing risk. timingRisk uses the Kissell Research Group (KRG) transaction cost analysis object k and trade data trade.

Estimate timing risk tr for each stock using the Kissell Research Group transaction cost analysis object k. Display the first three timing risk values: tr(1:3)

Timing risk trading costs are displayed in basis points. The trading cost varies with the trade strategy. timingRisk determines the trade strategy using these variables, in this order: if you specify size in the trade data, timingRisk uses the Size variable; otherwise, timingRisk uses the variables ADV and Shares to determine the size.

tr: Timing risk, returned as a vector. The vector values correspond to the timing risk in basis points for each stock in trade.

Timing risk (TR) estimates the uncertainty surrounding the estimated transaction cost. Price volatility and liquidity risk create this uncertainty. Price volatility causes the price to be either higher or lower than expected due to factors independent of the order. Liquidity risk causes the market impact cost to be either higher or lower than estimated due to market volumes. TR depends on volumes, intraday trading patterns, and market impact resulting from other market participants. The TR model is \text{TR}=\sigma \cdot \sqrt{\frac{1}{3}\cdot \frac{1}{250}\cdot \frac{Shares}{ADV}\cdot \left(\frac{1-POV}{POV}\right)}\cdot {10}^{4}, where \sigma is price volatility, 250 is the number of trading days in the year, Shares is the number of shares to trade, ADV is the average daily volume of the stock, and POV is the percentage of market volume, or participation fraction, of the order. krg | iStar | marketImpact | priceAppreciation | liquidityFactor
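The TR model is straightforward to evaluate for a single stock. The following standalone sketch implements the published formula only; the function name and argument conventions are mine, not the MathWorks API, and sigma is taken as annualized volatility expressed as a fraction:

```python
import math

def timing_risk_bps(sigma, shares, adv, pov, trading_days=250):
    # TR = sigma * sqrt(1/3 * 1/250 * Shares/ADV * (1-POV)/POV) * 1e4
    return sigma * math.sqrt(
        (1.0 / 3.0) * (1.0 / trading_days)
        * (shares / adv) * (1.0 - pov) / pov
    ) * 1e4
```

For example, a stock with 25% annualized volatility, an order of 10% of ADV, and 10% participation comes out near 87 basis points.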
Physics - Laughlin’s Charge Pump Realized Using atomic spin to represent a synthetic dimension, researchers have experimentally verified the predictions of a long-unrealized thought experiment. A. Fabre et al. [2]; adapted by APS/Alan Stonebraker Figure 1: Lasers cyclically couple three electronic spins (purple bars) of dysprosium atoms to form a synthetic Hall cylinder. The cylindrical surface is penetrated by a finite synthetic magnetic flux (yellow arrows). When the synthetic axial magnetic flux (black arrows) changes by one quantum {\Phi }_{0} adiabatically, the atomic cloud moves a distance of {l}_{\text{mag}} on the cylinder. Periodic boundary conditions are an indispensable tool in theoretical studies, allowing researchers not only to simplify calculations but also to explore profound concepts. Imagining how to implement these boundary conditions in physical systems can be especially fruitful. A prototypical example is a 1981 thought experiment devised by the Nobel-prize-winning physicist Robert Laughlin to explain the quantization of Hall conductance [1]. This thought experiment introduced the idea of Laughlin’s charge pump, which has so far remained unrealized. Now, Aurélien Fabre, from the Kastler Brossel Laboratory, France, and his colleagues have created a “synthetic Hall cylinder” for dysprosium atoms that realizes Laughlin’s charge pump, allowing the team to experimentally observe features of this system [2]. In his thought experiment, Laughlin considered quantum Hall states on a cylinder whose surface and two ends are penetrated by finite magnetic fluxes.
Laughlin argued that changing the axial magnetic flux would result in a current along the axial direction such that charges would be pumped along the cylinder’s surface. The most important characteristic of Laughlin’s charge pump is that it is topological: If the rate of change of the axial flux is slow enough for the system to be adiabatic, the details of its time dependence don’t matter—the number of pumped electrons always increases by an integer number when the axial flux changes by one quantum. Soon after Laughlin’s proposal, physicist David Thouless developed a closely related idea to describe topological charge pumping in generic systems, in which some parameter changes slowly and periodically [3]. Whereas the Thouless pump has been demonstrated in laboratories using ultracold atoms in optical lattices [4, 5], realizing Laughlin’s scheme has proved more difficult because of the need for periodic boundary conditions. Like most physical systems available in laboratories, the Hall bars that physicists use to study quantum Hall states have open boundaries. And while there exist systems with periodic boundary conditions—such as carbon nanotubes—it is challenging to thread the necessary finite magnetic fluxes through their cylindrical surfaces. Recent developments in atomic, molecular, and optical physics have provided physicists with an alternative route to study topological charge pumps and many other topological quantum phenomena. In such schemes, instead of driving the cyclotron motion of electrons using magnetic fields, as in Laughlin’s thought experiment, researchers use atom-laser interactions to generate “synthetic” magnetic fields that give rise to cyclotron motions of charge-neutral atoms [6]. 
Furthermore, atoms’ internal degrees of freedom, such as their hyperfine spins or electronic spins, can be used to represent a spatial dimension, with this or other synthetic dimensions allowing experimentalists to explore novel phenomena unattainable in conventional apparatuses (see Viewpoint: Photons Get Slippery) [7]. For instance, cyclically coupled hyperfine spins or electronic spins result in a periodic boundary condition in the synthetic dimension that mimics a periodic boundary condition in the spatial dimension. In 2018, two groups demonstrated such synthetic Hall cylinders. One group realized a synthetic Hall cylinder for fermionic ytterbium atoms and observed different band structures under boundary conditions in the synthetic dimension [8]. The other group created a synthetic Hall cylinder for bosonic rubidium atoms and discovered symmetry-protected band crossings, with the atoms requiring two periods to return to their original states in a Bloch oscillation [9]. In their new experiment, Fabre and colleagues create a synthetic dimension by coupling three electronic spin states of dysprosium atoms using two counterpropagating laser beams. One of the lasers has two frequency components, each of which prompts a Raman transition in combination with the other laser. One Raman transition flips the electronic spin by one quantum, while the other Raman transition flips the electronic spin by two quanta. These three states thus form a cyclically coupled loop, representing a one-dimensional lattice with a periodic boundary condition. To demonstrate Laughlin’s charge pump, this setup requires an extra ingredient that previous experiments have not yet realized: the axial magnetic flux needs to be accurately controlled to reach the adiabatic limit.
If the axial flux changes too fast compared with the energy difference between the atoms’ ground and excited states, the pumped charge no longer changes by an integer when the axial flux changes by one quantum. Fabre and colleagues achieve this condition by using laser beams to impose two phases—one position-dependent, one position-independent—to the intersite tunnelings in the synthetic dimension. The position-dependent phase gives rise to a net synthetic magnetic flux penetrating the cylindrical surface. The position-independent phase determines the axial magnetic flux. The researchers change the axial magnetic flux by tuning the laser phase and find that such a change leads to a displacement of the atomic cloud in the real dimension. They also vary the ramping rate of the laser phase and find that, when the ramping rate is slow enough, the displacement becomes linearly dependent on the change of the axial magnetic flux. The slope perfectly agrees with Laughlin’s prediction, confirming that the atomic cloud is pumped by one magnetic length once the axial magnetic flux changes by one quantum adiabatically. This first experimental realization of Laughlin’s charge pump raises the possibility of many exciting developments. Although the three synthetic-dimension sites demonstrated fulfill the minimum requirement of a periodic boundary condition, enlarging the cylinder—for example, by including all 17 electronic states available to dysprosium atoms—could reduce finite-size effects. Furthermore, it would be interesting to generalize Laughlin’s charge pump to the strong atom-atom-interaction regime, where charge fractionalization is expected. Fabre and colleagues found that the system’s lowest few energy bands are flat, with bandwidths much narrower than the gaps between them. Interatom interactions become dominant in such flat bands, which could lead to the rich physics of fractional quantum Hall states or of fractional Chern insulators. 
In addition, it should be possible to explore systems with more complex synthetic topologies, such as a torus [10]. Putting quantum Hall states on a toroidal topology could result in stable ground-state degeneracy, which, being immune to local perturbations, would be useful for fault-tolerant quantum computing.

[1] R. B. Laughlin, “Quantized Hall conductivity in two dimensions,” Phys. Rev. B 23, 5632 (1981).
[2] A. Fabre et al., “Laughlin’s topological charge pump in an atomic Hall cylinder,” Phys. Rev. Lett. 128, 173202 (2022).
[3] D. J. Thouless, “Quantization of particle transport,” Phys. Rev. B 27, 6083 (1983).
[4] M. Lohse et al., “A Thouless quantum pump with ultracold bosonic atoms in an optical superlattice,” Nat. Phys. 12, 350 (2016).
[5] S. Nakajima et al., “Topological Thouless pumping of ultracold fermions,” Nat. Phys. 12, 296 (2016).
[6] Y.-J. Lin et al., “Synthetic magnetic fields for ultracold neutral atoms,” Nature 462, 628 (2009).
[7] A. Celi et al., “Synthetic gauge fields in synthetic dimensions,” Phys. Rev. Lett. 112, 043001 (2014).
[8] J. H. Han et al., “Band gap closing in a synthetic Hall tube of neutral fermions,” Phys. Rev. Lett. 122, 065303 (2019).
[9] C.-H. Li et al., “Bose-Einstein condensate on a synthetic topological Hall cylinder,” PRX Quantum 3, 010316 (2022).
[10] Y. Yan et al., “Emergent periodic and quasiperiodic lattices on surfaces of synthetic Hall tori and synthetic Hall cylinders,” Phys. Rev. Lett. 123, 260405 (2019).

Qi Zhou is a professor in the Department of Physics and Astronomy at Purdue University. His research interests include synthetic gauge fields for ultracold atoms, strongly interacting bosons and fermions, quantum nonequilibrium dynamics, and connections between few-body and many-body physics. Qi Zhou received his Ph.D. degree from The Ohio State University and his B.S. degree from Tsinghua University, China.
Laughlin’s Topological Charge Pump in an Atomic Hall Cylinder Aurélien Fabre, Jean-Baptiste Bouhiron, Tanish Satoor, Raphael Lopes, and Sylvain Nascimbene
A fully connected network, complete topology, or full mesh topology is a network topology in which there is a direct link between all pairs of nodes. In a fully connected network with n nodes, there are {\displaystyle c={\frac {n(n-1)}{2}}} direct links. Networks designed with this topology are usually very expensive to set up, but provide a high degree of reliability due to the multiple paths for data that are provided by the large number of redundant links between nodes. This topology is mostly seen in military applications.
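The link-count formula is trivially expressible in code (a hypothetical one-line helper):

```python
def full_mesh_links(n):
    # direct links in a fully connected network of n nodes: n(n-1)/2
    return n * (n - 1) // 2
```

The quadratic growth is what makes large full meshes expensive: doubling the node count roughly quadruples the number of links.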
Straight Lines, Popular Questions: CBSE Class 11 - Meritnation

Q. The point P is the foot of the perpendicular from A(0, t) to the line y = tx. Determine (a) the equation of line AP, (b) the coordinates of P, (c) the area of triangle OAP, where O is the origin.
Q. Find the equation of the perpendicular bisector of the line segment joining (1, 1) and (2, 3).
Q. In this solution, how does the coefficient of y become zero? Why is it that if a line is parallel to the y-axis, then the coefficient of y becomes zero?
Q. Prove that the lines 3x - 4y + 5 = 0, 7x - 8y + 5 = 0 and 4x + 5y = 45 are concurrent; also find the point of concurrency.
Q. Let C1: x^2 + y^2 = 25 and C2: x^2 + y^2 - 2x - 4y - 7 = 0 be two circles intersecting each other at A and B. What is the point of intersection of the tangents to C1 at A and B?
Q. A straight line is such that the portion of it intercepted between the axes is bisected at the point (x1, y1). Prove that its equation is x/x1 + y/y1 = 2.
Q. Reduce the equation 3x + 4y - 12 = 0 into intercept form. (i) Find the distance of the above line from the origin. (ii) Find the distance of the above line from the line 6x + 8y - 18 = 0.
Q. Find the equation of the line passing through the point of intersection of the lines 4x + 7y = 3 and 2x - 3y + 9 = 0 that has equal intercepts on the axes.
Q. If the angle between two lines is pi/4 and the slope of one of the lines is 1/2, find the slope of the other line.
Q. A line passes through (x1, y1) and (h, k).
If the slope of the line is m, show that k - y1 = m(h - x1).
Q. Find the coordinates of the point which is at a distance of 1/sqrt(2) units from (1, 1) in the direction of the line x + y - 3 = 0. (Answer: (1/2, 3/2).)
Q. Find the equation of the straight line passing through the point of intersection of 2x + y - 1 = 0 and x + 3y - 2 = 0 and making with the coordinate axes a triangle of area 3/8 sq. units.
Q. If three points (h, 0), (a, b) and (0, k) lie on the same line, show that a/h + b/k = 1.
Q. The slope of one line is double the slope of another line. If the tangent of the angle between them is 1/3, find the slopes of the lines.
Q. Given three sticks of lengths 10 cm, 5 cm and 3 cm, the area of the triangle formed by using these sticks will be 0 cm^2. True or false?
Q. Find the equation of the lines through the point of intersection of the lines x - 3y + 1 = 0 and 2x + 5y - 9 = 0 whose distance from the origin is sqrt(5).
Q. Find the equation of the line which is parallel to the line 2x - 3y = 5 and is such that the sum of its intercepts on the axes is 6.
Q. Find the equation of the straight line which passes through the point (3, -2) and cuts off positive intercepts on the x-axis and y-axis which are in the ratio 4 : 3.
Q. Represent the straight line y = x + 2 in parametric form.
Q. If A(3, 3), B(4, 3) and C(3, 4) are the vertices of a triangle, then the distance between its orthocentre and circumcentre is (a) sqrt(2), (b) 1/sqrt(2), (c) 2 sqrt(2), (d) none of these.
Q. If Ax + By = C and x cos(alpha) + y sin(alpha) = p represent the same line, find p in terms of A, B and C.
Q. The points A, B and C are (4, 0), (2, 2) and (0, 6) respectively. AB produced cuts the y-axis at P, and CB produced cuts the x-axis at Q. Find the coordinates of the points P and Q. Find the equation of the straight line joining the midpoints of AC and OB (where O is the origin), and verify that this line passes through the midpoint of PQ.
Q. Give the diagram of question 10 of exercise 10.2.
Q. If the origin is shifted to the point (2, -1), the axes remaining parallel, obtain the new equation of the locus whose old equation is 2x^2 - 3xy - 9y^2 - 5x - 24y - 7 = 0.
Q. Find the points on the y-axis whose perpendicular distance from the line 4x - 3y - 12 = 0 is 3.
Q. The absolute value of the slope of a line that is a common tangent to both the curves y = x^2 and x^2 + y + 1 = 0 will be -
Q. If y = mx + c and x cos(theta) + y sin(theta) = p represent the same straight line, then show that c = p sqrt(1 + m^2).
Q. A stick of length 10 units rests against the floor and a wall of a room. If the stick begins to slide on the floor, then the locus of its middle point is: (A) x^2 + y^2 = 2.5 (B) x^2 + y^2 = 25 (C) x^2 + y^2 = 100
Q. If the line 2x - by + 1 = 0 intersects the curve 2{x}^{2}-b{y}^{2}+\left(2b-1\right)xy-x-by=0 at points A and B, and AB subtends a right angle at the origin, then find the value of b+{b}^{2} .
Q. Find the slope of a line whose inclination is 150°.
Q. The coordinates of points A, B, C are (6, 3), (-3, 5), (4, -2) respectively, and the coordinates of P are (x, y). Prove that the ratio of the areas of triangles PBC and ABC is |x + y - 2|/7.
A categorical concept of completion of objects. Guillaume C. L. Brümmer, Eraldo Giuli (1992). We introduce the concept of firm classes of morphisms as a basis for the axiomatic study of completions of objects in arbitrary categories. Results on objects injective with respect to given morphism classes are included. In a finitely well-complete category, firm classes are precisely the coessential first factors of morphism factorization structures.

A characterization of point semiuniformities. Montgomery, Jennifer P. (1996)

A-weight of Alexandroff spaces. A. Caterino, G. Dimov, M. C. Vipera (2002). The paper is devoted to the study of the ordered set A\mathcal{K}\left(X,\alpha \right) of all, up to equivalence, A-compactifications of an Alexandroff space \left(X,\alpha \right) . The notion of A-weight (denoted by aw\left(X,\alpha \right) ) of an Alexandroff space \left(X,\alpha \right) is introduced and investigated. Using results in [7] and [5], lattice properties of A\mathcal{K}\left(X,\alpha \right) and A{\mathcal{K}}_{aw}\left(X,\alpha \right) are studied, where A{\mathcal{K}}_{aw}\left(X,\alpha \right) is the set of all, up to equivalence, A-compactifications Y of \left(X,\alpha \right) with w\left(Y\right)=aw\left(X,\alpha \right) . A characterization of the families of bounded functions generating an A-compactification of \left(X,\alpha \right) is obtained. The notion...

A Completion for a Transitive Quasi-Uniform Space. E. Papatriantafillou (1974)

A continuous dependence of fixed points of \phi -contractive mappings in uniform spaces. Vasil G. Angelov (1992). The main purpose of the present paper is to establish conditions for a continuous dependence of fixed points of \phi -contractive mappings in uniform spaces. An application to nonlinear functional differential equations of neutral type has been made.

A counterexample in semimetric spaces. Jesús M.
Fernández Castillo, Francisco Montalvo (1990)

A generalisation of almost-compactness, with an associated generalisation of completeness. A. J. Ward (1975)

A new class of quasi-uniform spaces. Romaguera, Salvador (2000)

A non-completely regular quiet quasi-metric. Deák, J. (1990)

A nontransitive space based on combinatorics. Hans-Peter A. Künzi, Stephen Watson (1999). We construct a nontransitive space analogous to the Kofner plane. While the arguments used in the construction of the Kofner plane rest on geometric considerations, our proofs are based on combinatorial ideas.

A non-zero dimensional atom in the lattice of uniformities. Reiterman, J., Rödl, V. (1977)

A note on the non-emptiness of the limit of approximate systems. Michael G. Charalambous (1996). Short proofs are given of the fact that the limit space of a non-gauged approximate system of non-empty compact uniform spaces is non-empty, and of two related results.

A Note on Totally Bounded Quasi-Uniformities. Fletcher, P., Hunsaker, W. (1998). We present the original proof, based on the Doitchinov completion, that a totally bounded quiet quasi-uniformity is a uniformity. The proof was obtained about ten years ago, but never published. In the meantime several stronger results have been obtained by more direct arguments [8, 9, 10]. In particular it follows from Künzi’s [8] proofs that each totally bounded locally quiet quasi-uniform space is uniform, and recently Deák [10] observed that even each totally bounded Cauchy quasi-uniformity...

A partial generalization of a theorem of Hursch. Charles J. Mozzochi (1968)

A Regular Space Without a Uniformly Regular Quasi-Uniformity. Hans-Peter A. Künzi (1990)

A Sequentially Compact Non-compact Quasi-pseudometric Space. J. Ferrer, V. Gregori (1983)

A sufficient condition of full normality. Tomáš Kaiser (1996). We present a direct constructive proof of full normality for a class of spaces (locales) that includes, among others, all metrizable ones.
Theory of Anisotropic Thin-Walled Beams | J. Appl. Mech. | ASME Digital Collection

Theory of Anisotropic Thin-Walled Beams

V. V. Volovoi, Post Doctoral Fellow, School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0150; D. H. Hodges, Professor

Contributed by the Applied Mechanics Division of THE AMERICAN SOCIETY OF MECHANICAL ENGINEERS for publication in the ASME JOURNAL OF APPLIED MECHANICS. Manuscript received by the ASME Applied Mechanics Division, Sept. 18, 1998; final revision, Mar. 7, 2000. Associate Technical Editor: W. K. Liu. Discussion on the paper should be addressed to the Technical Editor, Professor Lewis T. Wheeler, Department of Mechanical Engineering, University of Houston, Houston, TX 77204-4792, and will be accepted until four months after final publication of the paper itself in the ASME JOURNAL OF APPLIED MECHANICS. Volovoi, V. V., and Hodges, D. H. (March 7, 2000). "Theory of Anisotropic Thin-Walled Beams." ASME. J. Appl. Mech. September 2000; 67(3): 453–459. https://doi.org/10.1115/1.1312806

Asymptotically correct, linear theory is presented for thin-walled prismatic beams made of generally anisotropic materials. Consistent use of small parameters that are intrinsic to the problem permits a natural description of all thin-walled beams within a common framework, regardless of whether cross-sectional geometry is open, closed, or strip-like. Four “classical” one-dimensional variables associated with extension, twist, and bending in two orthogonal directions are employed. Analytical formulas are obtained for the resulting 4×4 cross-sectional stiffness matrix (which, in general, is fully populated and includes all elastic couplings) as well as for the strain field. Prior to this work no analytical theories for beams with closed cross sections were able to consistently include shell bending strain measures.
Corrections stemming from those measures are shown to be important for certain cases. Contrary to widespread belief, it is demonstrated that for such “classical” theories, a cross section is not rigid in its own plane. Vlasov’s correction is shown to be unimportant for closed sections, while for open cross sections asymptotically correct formulas for this effect are provided. The latter result is an extension to a general contour of a result for I-beams previously published by the authors. [S0021-8936(00)03003-8] elasticity, bending, torsion Anisotropy, Approximation, Cross section (Physics), Shells, Stiffness, Strips, Displacement, Geometry, Phantoms, Elasticity On the Energy of an Elastic Rod J. of Appl. Math. Mech. VABS: A New Concept for Composite Rotor Blade Cross-Sectional Modeling Theory of Anisotropic Thin-Walled Closed-Section Beams Asymptotic Theory for Static Behavior of Elastic Anisotropic I-Beams Koiter, W. T., 1959, “A Consistent First Approximation in the General Theory of Thin Elastic Shells,” Proceedings of the IUTAM Symposium: Symposium on the Theory of Thin Elastic Shells, North-Holland, Amsterdam, pp. 12–33. Sanders, J. L., 1959, “An Improved First Order Approximation Theory for Thin Shells,” Technical Report 24, NASA. Berdichevsky, V. L., 1983, Variational Principles of Continuum Mechanics, Nauka, Moscow. Formulation and Evaluation of an Analytical Model for Composite Box-Beams Rand, O., 1997, “Generalization of Analytical Solutions for Solid and Thin-Walled Composite Beams,” Proceedings of the American Helicopter Society 53rd Annual Forum, Virginia Beach, VA, American Helicopter Society, New York, pp. 159–173. Thick-Walled Composite Beam Theory Including 3-D Elastic Effects and Torsional Warping Effect of Accuracy Loss in Classical Shell Theory Torsion of Noncylindrical Shafts of Circular Cross Section
Paul Walker, Thomas Radke, Erik Schnetter

Thorn PUGHInterp implements the Cactus interpolation API CCTK_InterpGridArrays() for the interpolation of CCTK grid arrays at arbitrary points. Thorn PUGHInterp provides an implementation of the Cactus interpolation API specification for the interpolation of CCTK grid arrays at arbitrary points, CCTK_InterpGridArrays(). This function interpolates a list of CCTK grid arrays (in a multiprocessor run these are generally distributed over processors) at a list of interpolation points. The grid topology and coordinates are implicitly specified via a Cactus coordinate system. The interpolation points may be anywhere in the global Cactus grid. In a multiprocessor run they may vary from processor to processor; each processor will get whatever interpolated data it asks for. The routine CCTK_InterpGridArrays() does not do the actual interpolation itself but rather takes care of whatever interprocessor communication may be necessary, and, for each processor’s local patch of the domain-decomposed grid arrays, calls CCTK_InterpLocalUniform() to invoke an external local interpolation operator (as identified by an interpolation handle). It is advantageous to interpolate a list of grid arrays at once (for the same list of interpolation points) rather than calling CCTK_InterpGridArrays() several times with a single grid array. This way, not only can PUGHInterp’s implementation of CCTK_InterpGridArrays() aggregate communications for multiple grid arrays into one (resulting in less communication overhead), but CCTK_InterpLocalUniform() may also compute interpolation coefficients once and reuse them for all grid arrays. Please refer to the Cactus UsersGuide for a complete function description of CCTK_InterpGridArrays() and CCTK_InterpLocalUniform().
2 PUGHInterp's Implementation of CCTK_InterpGridArrays() If thorn PUGHInterp is activated in the ActiveThorns list of a parameter file for a Cactus run, it overloads at startup the flesh-provided dummy function for CCTK_InterpGridArrays() with its own routine. This routine is then invoked in subsequent calls to CCTK_InterpGridArrays(). PUGHInterp's routine for the interpolation of grid arrays provides exactly the same semantics as CCTK_InterpGridArrays(), which is thoroughly described in the Function Reference chapter of the Cactus UsersGuide. In the following, only user-relevant details about its implementation, such as specific error codes and the evaluation of parameter options table entries, are explained. First, CCTK_InterpGridArrays() checks its function arguments for invalid values passed by the caller. In case of an error, the routine issues an error message and returns an error code of either UTIL_ERROR_BAD_HANDLE for an invalid coordinate system and/or parameter options table, or UTIL_ERROR_BAD_INPUT otherwise. Currently there is the restriction that only CCTK_VARIABLE_REAL is accepted as the CCTK data type for the interpolation point coordinates. Then the parameter options table is parsed and evaluated for additional information about the interpolation call (see section 2.2 for details). In the single-processor case, CCTK_InterpGridArrays() now invokes the local interpolation operator (as specified by its handle) by a call to CCTK_InterpLocalUniform() to perform the actual interpolation; the return code from this call is then also passed back to the user. In the multi-processor case, PUGHInterp first makes a query call to the local interpolator to find out whether it can deal with the number of interprocessor ghostzones available.
For that purpose it sets up an array of two interpolation points which denote the extremes of the physical coordinates on a processor: the lower-left and upper-right points of the processor-local grid's bounding box. The query is passed the same user-supplied function arguments as the real interpolation call, apart from the interpolation point coordinates (which now describe a processor's physical bounding box coordinates) and the output array pointers (which are all set to NULL in order to indicate that this is a query call only). A return code of CCTK_ERROR_INTERP_POINT_OUTSIDE from CCTK_InterpLocalUniform() for this query call (meaning the local interpolator potentially requires values from grid points which are outside of the available processor-local patch of the global grid) causes CCTK_InterpGridArrays() to return immediately with a CCTK_ERROR_INTERP_GHOST_SIZE_TOO_SMALL error code on all processors. Otherwise the CCTK_InterpGridArrays() routine continues and maps the user-supplied interpolation points onto the processors which own these points. In a subsequent global communication all processors receive "their" interpolation point coordinates and call CCTK_InterpLocalUniform() with those. The interpolation results are then sent back to the processors which originally requested the interpolation points. Like the PUGH driver thorn, PUGHInterp uses MPI for the necessary interprocessor communication. Note that the MPI_Alltoall()/MPI_Alltoallv() calls for the distribution of interpolation point coordinates to their owning processors and the transfer of the interpolation results back to the requesting processors are collective communication operations. So in the multi-processor case you must call CCTK_InterpGridArrays() in parallel on each processor (even if a processor has no points to interpolate), otherwise the program will run into a deadlock.
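The owner-finding step described above can be sketched in serial form (the struct and function names are inventions of this illustration; the real implementation of course works with PUGH's domain decomposition and MPI):

```c
#include <assert.h>

/* One processor's bounding box: [lo, hi] in each of `dim` dimensions. */
typedef struct { double lo[3], hi[3]; } BBox;

/* Return the rank of the first processor whose bounding box contains
   the point, or -1 if the point lies outside the global grid (the
   situation that leads to CCTK_ERROR_INTERP_POINT_OUTSIDE). */
static int owner_of_point(const double pt[], int dim,
                          const BBox boxes[], int nprocs) {
    for (int p = 0; p < nprocs; p++) {
        int inside = 1;
        for (int d = 0; d < dim; d++)
            if (pt[d] < boxes[p].lo[d] || pt[d] > boxes[p].hi[d])
                inside = 0;
        if (inside) return p;
    }
    return -1;
}
```

Once every point has an owner, the coordinates can be grouped by owner and exchanged in one collective operation, which is what the MPI_Alltoallv() calls above do.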
2.2 Passing Additional Information via the Parameter Table One of the function arguments to CCTK_InterpGridArrays() is an integer handle which refers to a key/value options table. Such a table can be used to pass additional information (such as the interpolation order) to the interpolation routines, i.e. to both CCTK_InterpGridArrays() and the local interpolator as invoked via CCTK_InterpLocalUniform(). The table may also be modified by these routines, e.g. to exchange internal information between the local and global interpolator, and/or to pass back arbitrary information to the user. The only table option currently evaluated by PUGHInterp's implementation of CCTK_InterpGridArrays() is:   CCTK_INT input_array_time_levels[N_input_arrays]; which lets you choose the time levels for the individual grid arrays to interpolate (in the range \left[0,N\text{_}time\text{_}levels\text{_}of\text{_}var\text{_}i-1\right] ). If no such table option is given, then the current time level (0) is taken as the default. The following table options are meant for the user to specify how the local interpolator should deal with interpolation points near grid boundaries:   CCTK_INT  N_boundary_points_to_omit[2 * N_dims];   CCTK_REAL boundary_off_centering_tolerance[2 * N_dims];   CCTK_REAL boundary_extrapolation_tolerance[2 * N_dims]; In the multi-processor case, CCTK_InterpGridArrays() will modify these arrays in a user-supplied options table in order to specify the handling of interpolation points near interprocessor boundaries (ghostzones) for the local interpolator: the corresponding elements in the options arrays are set to zero for all ghostzone faces, i.e. no points are omitted, and no off-centering or extrapolation is allowed at those boundaries. Array elements for physical grid boundaries are left unchanged by CCTK_InterpGridArrays().
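The ghostzone adjustment just described amounts to simple per-face bookkeeping. A minimal sketch (the `is_ghost_face` flag array is an assumption of this illustration; the three option arrays are zeroed exactly as the text states, and physical faces are left alone):

```c
#include <assert.h>

/* For each of the 2*dim faces, disable point omission, off-centering,
   and extrapolation on interprocessor (ghostzone) faces.  Entries for
   physical grid boundaries are left unchanged, mirroring what
   CCTK_InterpGridArrays() does to a user-supplied options table. */
static void zero_ghost_faces(int dim, const int is_ghost_face[],
                             int n_omit[], double off_tol[], double ext_tol[]) {
    for (int f = 0; f < 2 * dim; f++) {
        if (is_ghost_face[f]) {
            n_omit[f]  = 0;
            off_tol[f] = 0.0;
            ext_tol[f] = 0.0;
        }
    }
}
```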
If any of the above three boundary handling table options is missing in the user-supplied table, CCTK_InterpGridArrays() creates and adds it to the table with appropriate defaults. For the default values, as well as a comprehensive discussion of grid boundary handling options, please refer to the documentation of the thorn(s) providing local interpolator(s) (e.g. thorn LocalInterp in the Cactus ThornGuide). At present, the table option boundary_extrapolation_tolerance is not implemented. Instead, if any point cannot be mapped onto a processor (i.e. the point is outside the global grid), a level-1 warning is printed to stdout by default, and the error code CCTK_ERROR_INTERP_POINT_OUTSIDE is returned. The warning is not printed if the parameter table contains an entry (of any type) with the key "suppress_warnings". The local interpolation status is stored in the user-supplied parameter table (if given) as an integer scalar value with the option key "local_interpolator_status" (see section 2.3 for details). The table entry   CCTK_INT error_point_status; is used internally by CCTK_InterpGridArrays() to pass information about per-point status codes between the global and the local interpolator (again see section 2.3 for details). 2.3 CCTK_InterpGridArrays() Return Codes The return code from CCTK_InterpGridArrays() is determined as follows: If any of the arguments are invalid (e.g. N_dims < 0), the return code is UTIL_ERROR_BAD_INPUT. If any errors are encountered when processing the parameter table, the return code is the appropriate UTIL_ERROR_TABLE_* error code. If the query call determines that the number of ghost zones in the grid is too small for the local interpolator, the return code is CCTK_ERROR_INTERP_GHOST_SIZE_TOO_SMALL. Otherwise, the return code from CCTK_InterpGridArrays() is the minimum over all processors of the return code from the local interpolation on that processor.
If the local interpolator supports per-point status returns and the user supplies an interpolator parameter table, then in addition to this global interpolation return code, CCTK_InterpGridArrays() also returns a "local" status code which describes the outcome of the local interpolation for all the interpolation points which originated on this processor:   CCTK_INT local_interpolator_status; This is the minimum, over all the interpolation points originating on this processor, of the CCTK_InterpLocalUniform() return codes for those points. (It doesn't matter on which processor(s) the points were actually interpolated; CCTK_InterpGridArrays() takes care of gathering all the status information back to the originating processors.) For more information on how to invoke interpolation operators please refer to the flesh documentation. This section lists all the variables which are assigned storage by thorn CactusPUGH/PUGHInterp. Storage can either last for the duration of the run (Always means that if this thorn is activated storage will be assigned, Conditional means that if this thorn is activated storage will be assigned for the duration of the run if some condition is met), or can be turned on for the duration of a schedule function. pughinterp_startup: PUGHInterp startup routine
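Both reductions described above (the global return code over processors, and the local status over the points originating on a processor) are plain minimum reductions. As a tiny illustration (the value -1001 below is a stand-in negative error code, not a real CCTK constant):

```c
#include <assert.h>

/* Cactus-style status codes: 0 means success, errors are negative.
   The minimum over a set of per-processor (or per-point) codes is
   therefore the most severe one, which is what gets reported. */
static int min_status(const int codes[], int n) {
    int m = codes[0];
    for (int i = 1; i < n; i++)
        if (codes[i] < m) m = codes[i];
    return m;
}
```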
30C65 Quasiconformal mappings in {𝐑}^{n}
30C10 Polynomials
30C30 Numerical methods in conformal mapping theory
30C40 Kernel functions and applications
30C45 Special classes of univalent and multivalent functions (starlike, convex, bounded rotation, etc.)
30C50 Coefficient problems for univalent and multivalent functions
30C55 General theory of univalent and multivalent functions
30C62 Quasiconformal mappings in the plane
30C80 Maximum principle; Schwarz's lemma, Lindelöf principle, analogues and generalizations; subordination
A converse defect relation for quasimeromorphic mappings. Sastry, Swati (1995)
A new characterization for isometries by triangles. Li, Baokui, Wang, Yuefei (2009)
A new inequality for weakly \left({K}_{1},{K}_{2}\right) -quasiregular mappings. Tong, Yuxia, Gu, Jiantao, Li, Ying (2007)
A note on Gehring's lemma. Milman, Mario (1996)
A Picard type theorem for quasiregular mappings of {𝐑}^{n} into n-manifolds with many ends. Ilkka Holopainen, Seppo Rickman (1992)
{A}_{r}\left(\lambda \right) -harmonic tensors. Liu, Bing (2002)
{A}_{r}\left(\lambda \right) -harmonic tensors. Erratum.
A theorem of Semmes and the boundary absolute continuity in all dimensions. Juha Heinonen (1996) We use a recent theorem of Semmes to resolve some questions about the boundary absolute continuity of quasiconformal maps in space.
Action of Möbius Transformations on Homeomorphisms: Stability and Rigidity. N.V. Ivanov (1996)
Ahlfors theorems for differential forms. Martio, O., Miklyukov, V.M., Vuorinen, M. (2010)
An ideal boundary for domains in
An inverse Sobolev lemma. Pekka Koskela (1994) We establish an inverse Sobolev lemma for quasiconformal mappings and extend a weaker version of the Sobolev lemma for quasiconformal mappings of the unit ball of {𝐑}^{n} to the full range 0 < p < n. As an application we obtain sharp integrability theorems for the derivative of a quasiconformal mapping of the unit ball of {𝐑}^{n} in terms of the growth of the mapping.
Analytic aspects of quasiconformality. Astala, Kari (1998)
Apollonian isometries of regular domains are Möbius mappings. Hästö, Peter, Ibragimov, Zair (2007)
Area and coarea formulas for the mappings of Sobolev classes with values in a metric space. Karmanova, M.B. (2007)
Behavior of quasiregular semigroups near attracting fixed points. Mayer, Volker (2000)
Bi-Lipschitz Concordance Implies Bi-Lipschitz Isotopy. Jouni Luukkainen (1991)
Solving Nonlinear Elliptic Equations: Calling PETSc from Cactus The Cactus thorn TATPETSc provides a simple interface to the SNES component of PETSc, a set of efficient parallel nonlinear solvers for sparse systems, i.e. for discretised problems with small stencils. The main task of TATPETSc is handling the mismatch between the different parallelisation models. Contents: Nonlinear Elliptic Equations; Solving a Nonlinear Elliptic Equation; The Solution, Or Not; Common PETSc options; Installing PETSc. PETSc ("The Portable, Extensible Toolkit for Scientific Computation") [1] is a library that, among other things, solves nonlinear elliptic equations. It is highly configurable, and using it is a nontrivial exercise. It is therefore convenient to create wrappers for this library that handle the more common cases. Although this introduces a layer that is a black box to the PETSc novice, it at the same time enables this PETSc novice to use the library, which would otherwise not be possible. At the moment there exist three wrapper routines. Two of them handle initialising and shutting down the library at programme startup and shutdown time. The third solves a nonlinear elliptic equation. Nonlinear elliptic equations can be written in the form F\left(x\right)=0 , where x is an n-dimensional vector and F is a function with an n-dimensional vector as result. The vector x is the unknown to solve for. The function F has to be given together with initial data {x}_{0} for the unknown x . In the case of discretised field equations, the vector x contains all grid points, and the function F includes the boundary conditions. Consider as an example the Laplace equation \mathrm{\Delta }\varphi =0 in three dimensions on a grid with 1{0}^{3} points. In that case, the unknown x is \varphi , the dimension is n=1000 , and F is given by F\left(x\right)=\mathrm{\Delta }x .
Remember that, once a problem is discretised, derivative operators are algebraic expressions. That means that calculating \mathrm{\Delta }x does not involve taking analytic ("real") derivatives; rather, it can be written as a linear function (a matrix multiplication). The wrapper routine that solves a nonlinear elliptic equation is

void TATPETSc_solve (const cGH *cctkGH,
                     const int *var,
                     const int *val,
                     int nvars,
                     int options_table,
                     void (*fun) (cGH *cctkGH, int options_table, void *userdata),
                     void (*bnd) (cGH *cctkGH, int options_table, void *userdata),
                     void *userdata);

There is currently no Fortran wrapper for this routine. This routine takes the following arguments:
cctkGH: a pointer to the Cactus grid hierarchy
var: the list of indices of the grid variables x
val: the list of indices of the grid variables that contain the function value F\left(x\right)
nvars: the number of variables to solve for, which is the same as the number of function values
options_table: a table with additional options (see below)
fun: the routine that calculates the function values from the variables
bnd: the routine that applies the boundary conditions to the variables
userdata: data that are passed through unchanged to the callback routines

The options table can have the following elements:
CCTK_INT periodic[dim]: dim flags indicating whether the grid is periodic in the corresponding direction. (dim is the number of dimensions of the grid variables.) The default values are 0.
CCTK_INT solvebnds[2*dim]: 2\cdot dim flags indicating whether the grid points on the corresponding outer boundaries should be solved for (usually not). The default values are 0.
CCTK_INT stencil_width: the maximum stencil size used while calculating the function values from the variables (should be as small as possible). The default value is 1.
CCTK_FN_POINTER jacobian: The real type of jacobian must be void (*jacobian)(cGH *cctkGH, void *data). This is either a routine that calculates the Jacobian directly, or 0 (zero). The default is 0.
CCTK_FN_POINTER get_coloring: The real type of get_coloring must be void (*get_coloring)(DA da, ISColoring *iscoloring, void *userdata). This is either a routine that calculates the colouring for calculating the Jacobian, or 0 (zero). You need to pass a nonzero value only if you have very special boundary conditions. The default is 0. In order to be able to call this routine directly, it is necessary to inherit from TATPETSc in the thorn where this routine is called. Increasing stencil_width from 1 to 2 increases the run time by about a factor of 5; in general you want to be able to set stencil_width to 1. Note that calculating F does not normally require upwind derivatives with their larger stencil size. stencil_width does not have to be equal to the number of ghost zones. One sure way to speed up solving nonlinear elliptic equations is to provide an explicit function that calculates the Jacobian. While such a function is in principle straightforward to write, it is very (very!) tedious to do so. If you pass 0 for this function, the Jacobian is evaluated numerically. This is about one order of magnitude slower, but is also a lot less work. A linear elliptic equation does not need initial data. However, a nonlinear elliptic equation does: it may have several solutions, and the initial data select between these solutions. The initial data {x}_{0} have to be put into the variables x before the solver is called; the function values F\left(x\right) can remain undefined. There are two functions that you have to apply boundary conditions to: the right hand side, i.e. F , and the solution, i.e. x . The routine bnd has to apply boundary conditions to x at exactly those boundaries where solvebnds is false. (The boundaries where solvebnds is true have already been determined by the solver.) This includes imposing the symmetry boundary conditions.
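To make concrete the earlier remark that a discretised derivative operator is purely algebraic, here is a self-contained 1-D analogue of the Laplace example (not TATPETSc code; a naive Jacobi iteration stands in for SNES, and the grid spacing is absorbed into the residual):

```c
#include <assert.h>
#include <math.h>

#define N 11   /* grid points, including the two boundary points */

/* Residual of the discretised Laplace equation at the interior points:
   F(x)_i = x_{i-1} - 2 x_i + x_{i+1}.  No analytic derivatives appear;
   the operator is just a stencil sum. */
static void residual(const double x[N], double F[N]) {
    F[0] = F[N - 1] = 0.0;               /* Dirichlet boundary points */
    for (int i = 1; i < N - 1; i++)
        F[i] = x[i - 1] - 2.0 * x[i] + x[i + 1];
}

/* Drive F(x) to zero by Jacobi iteration, keeping x_0 and x_{N-1} fixed. */
static void solve(double x[N], int iters) {
    double y[N];
    for (int it = 0; it < iters; it++) {
        y[0] = x[0];
        y[N - 1] = x[N - 1];
        for (int i = 1; i < N - 1; i++)
            y[i] = 0.5 * (x[i - 1] + x[i + 1]);
        for (int i = 0; i < N; i++) x[i] = y[i];
    }
}
```

With the ends fixed to 0 and 1 (the "initial data" select the solution), the converged result is the linear profile and the residual vanishes at every interior point.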
Not applying a boundary condition on the outer boundaries is equivalent to a Dirichlet boundary condition, with the boundary values given in the initial data {x}_{0} . You need to apply the necessary boundary conditions to F as well; you have to do this in the routine fun that calculates F . When the solver returns, the result is available in x , and F\left(x\right) contains the corresponding function value, which should be close to zero. It is possible that the solver doesn't converge; in this case, F will not be close to zero. It is possible (by way of a programming error) to make certain grid points of F independent of x , or to ignore certain grid points of x while calculating F . This leads to a singular matrix when the nonlinear solver calls a linear solver as a substep; such a system is ill-posed and cannot be solved. An excision boundary leads to a certain number of grid points that should not be solved for. In order to avoid a singular matrix, it is still necessary to impose a condition on these grid points. Assuming that you want a Dirichlet-like condition for these grid points, I suggest F\left(x\right)=x-{x}_{0} , where {x}_{0} are the initial data that you have to save someplace. Note that you have to impose this (or a different) condition not only on the boundary points, but on all excised points, i.e. all points that are not solved for. (The above condition satisfies F\left(x\right)=0 for x={x}_{0} , which will then be the solution at these grid points. Setting F\left(x\right)=0 does not work because it leads to a singular matrix, as outlined above.)
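The excision recipe F(x) = x − x0 at points that are not solved for can be sketched in 1-D (the mask array and function name are inventions of this illustration; boundary points are treated like excised points here):

```c
#include <assert.h>
#include <math.h>

/* Residual with excision: at excised (and boundary) points impose the
   Dirichlet-like condition F(x) = x - x0; elsewhere the usual stencil.
   Every point then contributes a row to the Jacobian, so the matrix
   stays nonsingular, and x = x0 solves F = 0 at the excised points. */
static void residual_excised(int n, const double x[], const double x0[],
                             const int excised[], double F[]) {
    for (int i = 0; i < n; i++) {
        if (excised[i] || i == 0 || i == n - 1)
            F[i] = x[i] - x0[i];
        else
            F[i] = x[i - 1] - 2.0 * x[i] + x[i + 1];
    }
}
```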
Options Database Keys (from the PETSc documentation):
-snes_type <type>: ls, tr, umls, umtr, test
-snes_stol <stol>: convergence tolerance in terms of the norm of the change in the solution between steps
-snes_atol <atol>: absolute tolerance of residual norm
-snes_rtol <rtol>: relative decrease in tolerance norm from initial
-snes_max_it <max_it>: maximum number of iterations
-snes_max_funcs <max_funcs>: maximum number of function evaluations
-snes_trtol <trtol>: trust region tolerance
-snes_no_convergence_test: skip convergence test in nonlinear or minimization solver; hence iterations will continue until max_it or some other criterion is reached. Saves expense of convergence test
-snes_monitor: prints residual norm at each iteration
-snes_vecmonitor: plots solution at each iteration
-snes_vecmonitor_update: plots update to solution at each iteration
-snes_xmonitor: plots residual norm at each iteration
-snes_fd: use finite differences to compute Jacobian; very slow, only for testing
-snes_mf_ksp_monitor: if using matrix-free multiply then print h at each KSP iteration

Before you can use TATPETSc, you have to install PETSc; PETSc comes with extensive documentation for that. In order to be able to use PETSc with Cactus, you have to give certain options when you configure your Cactus application. To do so, create an options file containing the following options:

SYS_INC_DIRS /usr/include/petsc
LIBDIRS /usr/X11R6/lib
LIBS crypt petscfortran petscts petscsnes petscsles petscdm petscmat petscvec petsc lapack blas mpe mpich X11 g2c z

Replace /usr/include/petsc with the corresponding directory on your machine. If you have other packages that need other options, then you have to combine these options manually. Even if the other options would normally be selected automatically, selecting the PETSc options manually overrides the other options. Do not forget to activate MPI: PETSc needs MPI to run.
[1] PETSc: http://www-fp.mcs.anl.gov/petsc/
Description: Command line options for PETSc
Description: Produce log output while running
Description: Produce much log output while running
This section lists all the variables which are assigned storage by thorn CactusElliptic/TATPETSc. Storage can either last for the duration of the run (Always means that if this thorn is activated storage will be assigned, Conditional means that if this thorn is activated storage will be assigned for the duration of the run if some condition is met), or can be turned on for the duration of a schedule function.
tatpetsc_initialize: initialise PETSc
tatpetsc_finalize: finalise PETSc
Profit margin is a percentage measurement of profit that expresses the amount a company earns per dollar of sales; if a company makes more money per sale, it has a higher profit margin. Gross profit margin and net profit margin are two separate profitability ratios used to assess a company's financial stability and overall health. The gross profit margin is the percentage of revenue that exceeds the cost of goods sold (COGS); a high gross profit margin indicates that a company is successfully producing profit over and above its costs. The net profit margin is the ratio of net profits to revenues for a company; it reflects how much of each dollar of revenue becomes profit. Understanding Gross Profit Margin and Net Profit Margin While gross profit and gross margin are two measurements of profitability, net profit margin, which includes a company's total expenses, is a far more definitive profitability metric, and the one most closely scrutinized by analysts and investors. Here's a more in-depth look at gross profit margin and net profit margin. Gross profit margin is a measure of profitability that shows the percentage of revenue that exceeds the cost of goods sold (COGS). The gross profit margin reflects how successful a company's executive management team is in generating revenue, considering the costs involved in producing its products and services. In short, the higher the number, the more efficient management is in generating profit for every dollar of cost involved. The gross profit margin is calculated by taking total revenue minus the COGS and dividing the difference by total revenue. The result is typically multiplied by 100 to show the figure as a percentage. The COGS is the amount it costs a company to produce the goods or services that it sells.
\begin{aligned} &\text{Gross Profit Margin} = \frac{\left(\text{Revenue} - \text{COGS}\right)}{\text{Revenue}}\times100\\ &\textbf{where:}\\ &\text{COGS}=\text{Cost of goods sold} \end{aligned}

For the fiscal year ending September 30, 2017, Apple reported total sales or revenue of $229 billion and COGS of $141 billion, as shown in the company's consolidated 10-K statement below. Apple's gross profit margin for 2017 was 38%. Using the formula above, it would be calculated as follows:

\frac{\left(\text{\$229B} - \text{\$141B}\right)}{\text{\$229B}}\times100 = 38\%

This means that for every dollar Apple generated in sales, the company generated 38 cents in gross profit before other business expenses were paid. A higher ratio is usually preferred, as this would indicate that the company is selling inventory for a higher profit. Gross profit margin provides a general indication of a company's profitability, but it is not a precise measurement. Gross Profit Margin vs. Gross Profit It is important to note the difference between gross profit margin and gross profit. Gross profit margin is shown as a percentage, while gross profit is an absolute dollar amount. The gross profit is the absolute dollar amount of revenue that a company generates beyond its direct production costs. Thus, an alternate rendering of the gross margin equation becomes gross profit divided by total revenues. As shown in the statement above, Apple's gross profit figure was $88 billion (or $229 billion minus $141 billion). In short, gross profit is the total dollar amount left after subtracting COGS from revenue, or $88 billion in the case of Apple, while the gross margin is the percentage of profit Apple generated per the cost of producing its goods, or 38%.
The gross profit figure is of little analytical value because it is a number in isolation rather than a figure calculated in relation to both costs and revenue. Therefore, the gross profit margin (or gross margin) is more significant for market analysts and investors. To illustrate the difference, consider a company showing a gross profit of $1 million. At first glance, the profit figure may appear impressive, but if the gross margin for the company is only 1%, then a mere 2% increase in production costs is sufficient to make the company lose money. The net profit margin is the ratio of net profits to revenues for a company or business segment. Expressed as a percentage, the net profit margin shows how much of each dollar collected by a company as revenue translates to profit. Net profitability is an important distinction since increases in revenue do not necessarily translate into increased profitability. Net profit is the gross profit (revenue minus COGS) minus operating expenses and all other expenses, such as taxes and interest paid on debt. Although it may appear more complicated, net profit is calculated for us and provided on the income statement as net income.

\begin{aligned} &\text{Net Profit Margin}=\frac{\text{NI}\times100}{\text{Revenue}}\\ &\textbf{where:}\\ &\text{NI}=\text{Net income}=\text{R}\ -\ \text{COGS}\ -\ \text{OE}\ -\ \text{O}\ -\ \text{I}\ -\ \text{T}\\ &\text{R}=\text{Revenue}\\ &\text{OE}=\text{Operating expenses}\\ &\text{O}=\text{Other expenses}\\ &\text{I}=\text{Interest}\\ &\text{T}=\text{Taxes} \end{aligned}

Apple reported net income of roughly $48 billion (highlighted in blue) for the fiscal year ending September 30, 2017, as shown in its consolidated 10-K statement below. As we saw earlier, Apple's total sales or revenue was $229 billion for the same period.
Apple's net profit margin for 2017 was 21%. Using the formula above, we can calculate it as:

\frac{\text{\$48B}}{\text{\$229B}}\times100 = 21\%

A 21% net profit margin indicates that for every dollar generated by Apple in sales, the company kept $0.21 as profit. A higher profit margin is always desirable since it means the company generates more profit from its sales. However, profit margins can vary by industry: growth companies might have a higher profit margin than retail companies, but retailers make up for their lower profit margins with higher sales volumes. It is possible for a company to have a negative net profit margin, which occurs when a company has a loss for the quarter or year. That loss, however, may just be a temporary issue for the company; reasons for losses could be increases in the cost of labor and raw materials, recessionary periods, and the introduction of disruptive technological tools that could affect the company's bottom line. Investors and analysts typically use both gross profit margin and net profit margin to gauge how efficient a company's management is in earning profits relative to the costs involved in producing their goods and services. It is wise to compare the margins of companies within the same industry and over multiple periods to get a sense of any trends. The gross margin represents the amount of total sales revenue that the company retains after incurring the direct costs (COGS) associated with producing the goods and services sold by the company.
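The two Apple calculations above can be checked in a few lines of code (C is used purely for illustration; the inputs are the rounded figures from the text, so the results land near, not exactly at, the quoted whole percentages):

```c
#include <assert.h>
#include <math.h>

/* Gross profit margin = (revenue - COGS) / revenue * 100. */
static double gross_margin(double revenue, double cogs) {
    return (revenue - cogs) / revenue * 100.0;
}

/* Net profit margin = net income / revenue * 100.  Net income is taken
   directly from the income statement, as noted above. */
static double net_margin(double net_income, double revenue) {
    return net_income / revenue * 100.0;
}
```

With revenue of $229B, COGS of $141B, and net income of $48B, gross_margin returns about 38.4 and net_margin about 21.0, matching the 38% and 21% quoted above.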
Partially Averaged Navier-Stokes Method for Turbulence: Fixed Point Analysis and Comparison With Unsteady Partially Averaged Navier-Stokes | J. Appl. Mech. | ASME Digital Collection Sharath S. Girimaji, Eunhwan Jeong (45 Eoeun-Dong, Yuseong-gu, Daejeon 305-333, Korea), and R. Srinivasan. Girimaji, S. S., Jeong, E., and Srinivasan, R. (November 8, 2005). "Partially Averaged Navier-Stokes Method for Turbulence: Fixed Point Analysis and Comparison With Unsteady Partially Averaged Navier-Stokes." ASME. J. Appl. Mech. May 2006; 73(3): 422–429. https://doi.org/10.1115/1.2173677 Hybrid/bridging models that combine the advantages of the Reynolds-averaged Navier-Stokes (RANS) method and large-eddy simulations are being increasingly used for simulating turbulent flows with large-scale unsteadiness. The objective is to obtain accurate estimates of important large-scale fluctuations at a reasonable cost. In order to be effective, these bridging methods must possess the correct "energetics": that is, the right balance between production (P) and dissipation (ε). If the model production-to-dissipation ratio (P/ε) is inconsistent with turbulence physics at that cutoff, the computations will be unsuccessful. In this paper, we perform fixed-point analyses of two bridging models, partially-averaged Navier-Stokes (PANS) and unsteady RANS (URANS), to examine the behavior of the production-to-dissipation ratio. It is shown that the URANS (P/ε) ratio is too high, rendering it incapable of resolving much of the fluctuations. On the other hand, the PANS (P/ε) ratio allows the model to vary smoothly from RANS to DNS depending upon the values of its resolution control parameters.
Counting lattices in simple Lie groups: The positive characteristic case Duke Math. J. 161(3): 431-481 (February 2012). DOI: 10.1215/00127094-1507421 In this article, we prove the following conjecture by Lubotzky. Let G={\mathbb{G}}_{0}\left(K\right) , where K is a local field of characteristic p\ge 5 and {\mathbb{G}}_{0} is a simply connected, absolutely almost simple K -group of K -rank at least 2. We give the rate of growth of {\rho }_{x}\left(G\right):=|\left\{\Gamma \subseteq G\mid \Gamma \text{ a lattice in }G,\mathrm{vol}\left(G/\Gamma \right)\le x\right\}/\sim |, where {\Gamma }_{1}\sim {\Gamma }_{2} if and only if there is an abstract automorphism \theta of G such that {\Gamma }_{2}=\theta \left({\Gamma }_{1}\right) . We also study the rate of subgroup growth {s}_{x}\left(\Gamma \right) of any lattice \Gamma in G . As a result, we show that these two functions have the same rate of growth, which proves Lubotzky's conjecture. Along the way, we also study the rate of growth of the number of equivalence classes of maximal lattices in G with covolume at most x . Alireza Salehi Golsefidy. "Counting lattices in simple Lie groups: The positive characteristic case." Duke Math. J. 161(3), 431-481, February 2012. https://doi.org/10.1215/00127094-1507421
Metapopulation
A metapopulation consists of a group of spatially separated populations of the same species which interact at some level. The term metapopulation was coined by Richard Levins in 1969 to describe a model of population dynamics of insect pests in agricultural fields, but the idea has been most broadly applied to species in naturally or artificially fragmented habitats. In Levins' own words, it consists of "a population of populations".[1] Metapopulations are important in fisheries. The local population (1.) serves as a source for hybridization with surrounding subspecies populations (1.a, 1.b, and 1.c). The populations are normally spatially separated and independent, but spatial overlap during breeding times allows for gene flow between them. A metapopulation is generally considered to consist of several distinct populations together with areas of suitable habitat which are currently unoccupied. In classical metapopulation theory, each population cycles in relative independence of the other populations and eventually goes extinct as a consequence of demographic stochasticity (fluctuations in population size due to random demographic events); the smaller the population, the more prone it is to inbreeding depression and extinction. Although individual populations have finite life-spans, the metapopulation as a whole is often stable because immigrants from one population (which may, for example, be experiencing a population boom) are likely to re-colonize habitat which has been left open by the extinction of another population. They may also emigrate to a small population and rescue that population from extinction (called the rescue effect). Such a rescue effect may occur because declining populations leave niche opportunities open to the "rescuers". The development of metapopulation theory, in conjunction with the development of source–sink dynamics, emphasised the importance of connectivity between seemingly isolated populations.
Although no single population may be able to guarantee the long-term survival of a given species, the combined effect of many populations may be able to do this. Metapopulation theory was first developed for terrestrial ecosystems, and subsequently applied to the marine realm.[2] In fisheries science, the term "sub-population" is equivalent to the metapopulation science term "local population". Most marine examples are provided by relatively sedentary species occupying discrete patches of habitat, with both local recruitment and recruitment from other local populations in the larger metapopulation. Kritzer & Sale have argued against strict application of the metapopulation definitional criteria that extinction risks to local populations must be non-negligible.[2]: 32  Finnish biologist Ilkka Hanski of the University of Helsinki was an important contributor to metapopulation theory.
Predation and oscillations
The first experiments with predation and spatial heterogeneity were conducted by G. F. Gause in the 1930s, based on the Lotka–Volterra equation, which was formulated in the mid-1920s, but no further application had been conducted.[3] The Lotka–Volterra equation suggested that the relationship between predators and their prey would result in population oscillations over time based on the initial densities of predator and prey. Gause's early experiments to prove the predicted oscillations of this theory failed because the predator–prey interactions were not influenced by immigration. However, once immigration was introduced, the population cycles accurately depicted the oscillations predicted by the Lotka–Volterra equation, with the peaks in prey abundance shifted slightly to the left of the peaks of the predator densities. Huffaker's experiments expanded on those of Gause by examining how both the factors of migration and spatial heterogeneity lead to predator–prey oscillations.
Huffaker's experiments on predator–prey interactions (1958)
In order to study predation and population oscillations, Huffaker used mite species, one being the predator and the other being the prey.[4] He set up a controlled experiment using oranges, which the prey fed on, as the spatially structured habitat in which the predator and prey would interact.[5] At first, Huffaker experienced difficulties similar to those of Gause in creating a stable predator–prey interaction. By using oranges only, the prey species quickly became extinct, followed in turn by predator extinction. However, he discovered that by modifying the spatial structure of the habitat, he could manipulate the population dynamics and allow the overall survival rate for both species to increase. He did this by altering the distance between the prey and oranges (their food), establishing barriers to predator movement, and creating corridors for the prey to disperse.[3] These changes resulted in increased habitat patches and in turn provided more areas for the prey to seek temporary protection. When the prey would become extinct locally at one habitat patch, they were able to reestablish by migrating to new patches before being attacked by predators. This habitat spatial structure of patches allowed for coexistence between the predator and prey species and promoted a stable population oscillation model.[6] Although the term metapopulation had not yet been coined, the environmental factors of spatial heterogeneity and habitat patchiness would later describe the conditions of a metapopulation relating to how groups of spatially separated populations of species interact with one another.
Huffaker's experiment is significant because it showed how metapopulations can directly affect the predator–prey interactions and in turn influence population dynamics.[7]
The Levins model
Levins' original model applied to a metapopulation distributed over many patches of suitable habitat with significantly less interaction between patches than within a patch. Population dynamics within a patch were simplified to the point where only presence and absence were considered. Each patch in his model is either populated or not. Let N be the fraction of patches occupied at a given time. During a time dt, each occupied patch can become unoccupied with an extinction probability e\,dt. Additionally, 1 − N of the patches are unoccupied. Assuming a constant rate c of propagule generation from each of the N occupied patches, during a time dt, each unoccupied patch can become occupied with a colonization probability cN\,dt. Accordingly, the time rate of change of occupied patches, dN/dt, is

\frac{dN}{dt}=cN(1-N)-eN.

This equation is mathematically equivalent to the logistic model, with a carrying capacity K given by

K=1-\frac{e}{c}

and growth rate

r=c-e.

At equilibrium, therefore, some fraction of the species's habitat will always be unoccupied.
Stochasticity and metapopulations
Huffaker's[4] studies of spatial structure and species interactions are an example of early experimentation in metapopulation dynamics. Since the experiments of Huffaker[4] and Levins,[1] models have been created which integrate stochastic factors. These models have shown that the combination of environmental variability (stochasticity) and relatively small migration rates cause indefinite or unpredictable persistence. However, Huffaker's experiment almost guaranteed infinite persistence because of the controlled immigration variable.
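As a quick numerical sanity check on the Levins model (illustrative only, not part of the original article), a short Python sketch integrates dN/dt = cN(1 − N) − eN with Euler steps and confirms that the occupied fraction settles at the equilibrium K = 1 − e/c:

```python
def levins(c, e, n0=0.1, dt=0.01, steps=200_000):
    """Euler integration of the Levins model dN/dt = c*N*(1-N) - e*N.

    c  : colonization rate per occupied patch
    e  : extinction rate per occupied patch
    n0 : initial fraction of occupied patches
    """
    n = n0
    for _ in range(steps):
        n += dt * (c * n * (1.0 - n) - e * n)
    return n

c, e = 0.5, 0.2
n_eq = levins(c, e)
# n_eq is close to the analytic equilibrium K = 1 - e/c = 0.6
```

With c = 0.5 and e = 0.2 the trajectory converges to 0.6, matching the carrying-capacity formula above; note that a positive equilibrium exists only when c > e.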
Stochastic patch occupancy models (SPOMs)
One major drawback of the Levins model is that it is deterministic, whereas the fundamental metapopulation processes are stochastic. Metapopulations are particularly useful when discussing species in disturbed habitats, and the viability of their populations, i.e., how likely they are to become extinct in a given time interval. The Levins model cannot address this issue. A simple way to extend the Levins model to incorporate space and stochastic considerations is by using the contact process. Simple modifications to this model can also incorporate patch dynamics. At a given percolation threshold, habitat fragmentation effects take place in these configurations, predicting more drastic extinction thresholds.[8] For conservation biology purposes, metapopulation models must include (a) the finite nature of metapopulations (how many patches are suitable for habitat), and (b) the probabilistic nature of extinction and colonisation. Also, note that in order to apply these models, the extinctions and colonisations of the patches must be asynchronous.
Microhabitat patches (MHPs) and bacterial metapopulations
E. coli metapopulation on a chip. Combining nanotechnology with landscape ecology, synthetic habitat landscapes have been fabricated on a chip by building a collection of bacterial mini-habitats with nano-scale channels providing them with nutrients for habitat renewal, and connecting them by corridors in different topological arrangements, generating a spatial mosaic of patches of opportunity distributed in time. This can be used for landscape experiments by studying the bacterial metapopulations on the chip, for example their evolutionary ecology.[9]
Life history evolution
Metapopulation models have been used to explain life-history evolution, such as the ecological stability of amphibian metamorphosis in small vernal ponds. Alternative ecological strategies have evolved.
For example, some salamanders forgo metamorphosis and sexually mature as aquatic neotenes. The seasonal duration of wetlands and the migratory range of the species determine which ponds are connected and whether they form a metapopulation. The duration of the life history stages of amphibians relative to the duration of the vernal pool before it dries up regulates the ecological development of metapopulations connecting aquatic patches to terrestrial patches.[10]
^ a b Levins, R. (1969), "Some demographic and genetic consequences of environmental heterogeneity for biological control", Bulletin of the Entomological Society of America, 15 (3): 237–240, doi:10.1093/besa/15.3.237
^ a b Kritzer, J. P. & Sale, P. F. (eds) (2006), Marine Metapopulations, Academic Press, New York.
^ a b Real, Leslie A. & Brown, James H. (1991), Foundations of Ecology: Classic Papers with Commentaries, The University of Chicago Press, Chicago.
^ a b c Huffaker, C. B. (1958), "Experimental studies on predation: Dispersion factors and predator–prey oscillations", Hilgardia, 27 (14): 343–383, doi:10.3733/hilg.v27n14p343
^ Legendre, P.; Fortin, M. J. (1989), "Spatial pattern and ecological analysis", Plant Ecology, 80 (2): 107, doi:10.1007/BF00048036
^ Kareiva, P. (1987), "Habitat fragmentation and the stability of predator–prey interactions", Nature, 326 (6111): 388–390, doi:10.1038/326388a0
^ Janssen, A. et al. (1997), "Metapopulation dynamics of a persisting predator–prey system".
^ Keymer, J. E.; Marquet, P. A.; Velasco-Hernández, J. X.; Levin, S. A. (November 2000), "Extinction thresholds and metapopulation persistence in dynamic landscapes", The American Naturalist, 156 (5): 478–494, doi:10.1086/303407, PMID 29587508.
^ Keymer, J. E.; Galajda, P.; Muldoon, C.; Austin, R. (November 2006), "Bacterial metapopulations in nanofabricated landscapes", PNAS, 103 (46): 17290–17295.
doi:10.1073/pnas.0607971103, PMC 1635019, PMID 17090676.
^ Petranka, J. W. (2007), "Evolution of complex life cycles of amphibians: bridging the gap between metapopulation dynamics and life history evolution", Evolutionary Ecology, 21 (6): 751–764, doi:10.1007/s10682-006-9149-1.
Bascompte, J.; Solé, R. V. (1996), "Habitat fragmentation and extinction thresholds in spatially explicit models", Journal of Animal Ecology, 65 (4): 465–473, doi:10.2307/5781, JSTOR 5781.
Hanski, I. (1999), Metapopulation Ecology, Oxford University Press. ISBN 0-19-854065-5
Fahrig, L. (2003), "Effects of habitat fragmentation on biodiversity", Annual Review of Ecology, Evolution, and Systematics, 34 (1): 487–515.
Levin, S. A. (1974), "Dispersion and population interactions", The American Naturalist, 108 (960): 207, doi:10.1086/282900.
Nobuyuki Tose (1987) We study the propagation of microlocal analytic singularities for the microdifferential equations with conical refraction studied by R. Melrose and G. Uhlmann. We transform the equations to a simple canonical form 2-microlocally through quantized bicanonical transformations by Y. Laurent. A general class of Gevrey-type pseudodifferential operators Otto Liess, Luigi Rodino (1983) A Lie Group Structure for Fourier Integral Operators. Malcolm Adams, Rudolf Schmid, Tudor Ratiu (1986) A Lie Group Structure for Pseudodifferential Operators. Malcolm Adams, Tudor Ratiu, Rudolf Schmid (1985/1986) A microlocal F. and M. Riesz theorem with applications. Raymondus G. M. Brummelhuis (1989) Consider, by way of example, the following F. and M. Riesz theorem for R^n: Let μ be a finite measure on R^n whose Fourier transform μ* is supported in a closed convex cone which is proper, that is, which contains no entire line. Then μ is absolutely continuous (cf. Stein and Weiss [SW]). Here, as in the sequel, absolutely continuous means with respect to Lebesgue measure. In this theorem one can replace the condition on the support of μ* by a similar condition on the wave front set WF(μ) of μ, while... A Remark on Invariant Pseudo-Differential Operators. Anders Melin (1972) A Wiener algebra for the Fefferman-Phong inequality Nicolas Lerner, Yoshinori Morimoto (2005/2006) Algebras of pseudodifferential operators on complete manifolds. Ammann, Bernd; Lauter, Robert; Nistor, Victor (2003) An accuracy improvement in Egorov's theorem. Jorge Drumond Silva (2007) We prove that the theorem of Egorov, on the canonical transformation of symbols of pseudodifferential operators conjugated by Fourier integral operators, can be sharpened.
The main result is that the statement of Egorov's theorem remains true if, instead of just considering the principal symbols in S^m/S^{m-1} for the pseudodifferential operators, one uses refined principal symbols in S^m/S^{m-2}, which for classical operators correspond simply to the principal plus the subprincipal symbol, and can generally... An Example on the Heisenberg Group Related to the Levy Operator. E.M. Stein (1982) An integral operator representation of classical periodic pseudodifferential operators. Vainikko, G. (1999) Analyse micro-locale du noyau de Bergman M. Kashiwara (1976/1977) Analyse semi-classique de l'opérateur de Schrödinger sur la sphère A. Grigis (1990/1991) Analytic index formulas for elliptic corner operators Boris Fedosov, Bert-Wolfgang Schulze, Nikolai Tarkhanov (2002) Spaces with corner singularities, locally modelled by cones with base spaces having conical singularities, belong to the hierarchy of (pseudo-) manifolds with piecewise smooth geometry. We consider a typical case of a manifold with corners, the so-called "edged spindle", and a natural algebra of pseudodifferential operators on it with special degeneracy in the symbols, the "corner algebra". There are three levels of principal symbols in the corner algebra, namely the interior,... Biinvariant Operators on Nilpotent Lie Groups. David Wigner (1977) Marius Mitrea, Victor Nistor (2007) We study the method of layer potentials for manifolds with boundary and cylindrical ends. The fact that the boundary is non-compact prevents us from using the standard characterization of Fredholm or compact pseudo-differential operators between Sobolev spaces, as, for example, in the works of Fabes-Jodeit-Lewis and Kral-Wendland. We first study the layer potentials depending on a parameter on compact manifolds. This then yields the invertibility of the relevant boundary integral operators in the...
Let P be a pseudodifferential (or microdifferential) operator such that exp P is also a pseudodifferential operator. Then the symbol of exp P can be written exp q, with q a symbol. Conversely, if Q is an operator with symbol exp q, there exists an operator P such that Q = exp P. All these results rest on the theory developed in Note I of this series. As an application, one obtains a sufficient condition for the invertibility of pseudodifferential operators of infinite order.
Use an iterated triple integral to obtain the volume of R, the first-octant region that is bounded by the coordinate planes and the additional planes x=1 and x+y+z=2. Figure 8.1.29(a) shows the solid whose volume is obtained by iterating a triple integral in Cartesian coordinates in the order \mathrm{dz} \mathrm{dy} \mathrm{dx}: {∫}_{0}^{1}{∫}_{0}^{2-x}{∫}_{0}^{2-x-y}1 \mathrm{dz} \mathrm{dy} \mathrm{dx} = \frac{7}{6} Figure 8.1.29(a) The region R To solve interactively: access the MultiInt command via the Context Panel, type the integrand, 1, and fill in the fields of the two dialogs shown below. 1 \stackrel{\text{MultiInt}}{\to } {\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}}{\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}}\textcolor[rgb]{0,0,1}{1}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{z}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{y}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{x} = \frac{\textcolor[rgb]{0,0,1}{7}}{\textcolor[rgb]{0,0,1}{6}} Table 8.1.29(a) provides a solution by a task template that integrates in Cartesian coordinates and draws the region of integration.
{∭}_{R}\mathrm{Ψ}\left(x,y,z\right) \mathrm{dv}, where R is the region of integration and \mathrm{dv} the order of integration.
Select dv: dz dy dx, dz dx dy, dx dy dz, dx dz dy, dy dx dz, dy dz dx
Fields: \mathrm{Ψ}= F= G= b= f= g= a=
Table 8.1.29(a) Task template integrating in Cartesian coordinates
Table 8.1.29(b) provides a solution from first principles. Iterated triple-integral template {∫}_{0}^{1}{∫}_{0}^{2-x}{∫}_{0}^{2-x-y}1 \mathit{ⅆ}z \mathit{ⅆ}y \mathit{ⅆ}x = \frac{\textcolor[rgb]{0,0,1}{7}}{\textcolor[rgb]{0,0,1}{6}} Table 8.1.29(b) Integration via first principles Table 8.1.29(c) obtains a solution via the MultiInt command in the Student MultivariateCalculus package. \mathrm{with}\left(\mathrm{Student}:-\mathrm{MultivariateCalculus}\right): \mathrm{MultiInt}\left(1,z=0..2-x-y,y=0..2-x,x=0..1,\mathrm{output}=\mathrm{integral}\right) {\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}}{\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}}\textcolor[rgb]{0,0,1}{1}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{z}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{y}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{x} \mathrm{MultiInt}\left(1,z=0..2-x-y,y=0..2-x,x=0..1\right) \frac{\textcolor[rgb]{0,0,1}{7}}{\textcolor[rgb]{0,0,1}{6}} Table 8.1.29(c) MultiInt command iterating in Cartesian coordinates in the order \mathrm{dz} \mathrm{dy} \mathrm{dx} Table 8.1.29(d)
implements the iterated integration via the top-level Int and int commands. \mathrm{Int}\left(1,\left[z=0..2-x-y,y=0..2-x,x=0..1\right]\right)=\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{int}\left(1,\left[z=0..2-x-y,y=0..2-x,x=0..1\right]\right) {\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}}{\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}}\textcolor[rgb]{0,0,1}{1}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{z}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{y}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{=}\frac{\textcolor[rgb]{0,0,1}{7}}{\textcolor[rgb]{0,0,1}{6}} Table 8.1.29(d) Top-level Int and int commands
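As an independent cross-check of the value 7/6 (done outside Maple; the helper function below is illustrative), a midpoint Riemann sum over the region 0 ≤ x ≤ 1, 0 ≤ y ≤ 2−x, with the inner z-integral evaluated exactly as 2−x−y, reproduces the same volume numerically:

```python
def volume_estimate(n=200):
    """Midpoint-rule estimate of the volume of the first-octant region
    bounded by the coordinate planes, x = 1, and x + y + z = 2.
    The innermost z-integral is done exactly: int_0^{2-x-y} 1 dz = 2-x-y."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h          # midpoint in x on [0, 1]
        hy = (2.0 - x) / n
        for j in range(n):
            y = (j + 0.5) * hy     # midpoint in y on [0, 2-x]
            total += h * hy * (2.0 - x - y)
    return total

v = volume_estimate()
# v is close to 7/6 ≈ 1.16667
```

Because the integrand of the remaining double integral is linear in y, the only discretization error comes from the x-direction, so even a modest grid agrees with 7/6 to several digits.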
32T25 Finite type domains 32T40 Peak functions A characterization of totally real generic submanifolds of strictly pseudoconvex boundaries in Cn admitting a local foliation by interpolation submanifolds. Andrei Iordan (1990) A Construction of Normal Forms for Weakly Pseudoconvex CR Manifolds in ...2. Philip P. Wong (1982) A Criterion for the Neumann Type Problem over a Differential Complex on a Strongly Pseudo Convex Domain. Takao Akahori (1983) Gregor Fels (1995) Let G={K}^{ℂ} be a complex reductive group. We give a description both of domains \Omega \subset G and plurisubharmonic functions which are invariant by the compact group K, acting on G by (right) translation. This is done in terms of curvature of the associated Riemannian symmetric space M:=G/K. Such an invariant domain \Omega with a smooth boundary is Stein if and only if the corresponding domain {\Omega }_{M}\subset M is geodesically convex and the sectional curvature of its boundary S:=\partial {\Omega }_{M} fulfills the condition {K}^{S}\left(E\right)\ge {K}^{M}\left(E\right)+k\left(E,n\right), where k\left(E,n\right) is explicitly computable... A Generalisation of a Theorem of Fornaess-Sibony. Mechthild Behrens (1985/1986) A Levi Problem on Two-Dimensional Complex Manifolds. Klas Diederich, Takeo Ohsawa (1982) A link between {C}^{\infty } and analytic solvability for P.D.E. with constant coefficients Giuseppe Zampieri (1980) A Note on the kernel of the ...-Neumann operator on strongly pseudoconvex domains. Der-Chen E. Chang (1988) A precise result on the boundary regularity of biholomorphic mappings (Erratum). A Pseudo-convex Domain not Admitting a Holomorphic Support Function. J.J. Kohn, L. Nirenberg (1973) A pseudoconvex domain with bounded solutions for ..., but not admitting ...-estimates.
Joachim Michel (1993) A pseudoconvex-pseudoconcave generalization of Grauert's direct image theorem Yum-Tong Siu (1972) A remark on semiglobal existence for \overline{\partial } Alberto Scalari (1997) A remark on the uniform extendability of the Bergman kernel function. So-Chin Chen (1991)
This specifies the origin of the hyperslab. It must be given as an array of integer values with N elements. Each value specifies the offset in grid points in this dimension into the N-dimensional volume of the grid variable. The direction vectors specify both the directions in which the hyperslab should be spanned (each vector defines one direction of the hyperslab) and its dimensionality (the total number of direction vectors). The direction vectors must be given as a concatenated array of integer values. The direction vectors must be linearly independent, and none of them may be a null vector. If the direction vectors for a hyperslab are not given, the hyperslab dimensionality defaults to N, with directions parallel to the underlying grid. This specifies the extent of the hyperslab in each of its dimensions as a number of grid points. It must be given as an array of integer values with M elements (M being the number of hyperslab dimensions). To select only every so many grid points from the hyperslab you can set the downsample option. It must also be given as an array of integer values with M elements. The global attributes are set to "unchunked" = "yes", nprocs = 1, and ioproc_every =
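For the default case where the hyperslab directions are parallel to the grid axes, the origin/count/downsample options reduce to one strided slice per dimension. The following NumPy sketch is purely illustrative (the function name and signature are not part of the format being described) but shows how the three arrays combine:

```python
import numpy as np

def select_hyperslab(grid, origin, count, downsample=None):
    """Illustrative axis-parallel hyperslab selection.

    origin     : offset in grid points per dimension (N elements)
    count      : extent of the hyperslab in grid points per dimension
    downsample : take every d-th grid point per dimension (defaults to 1)
    """
    if downsample is None:
        downsample = [1] * len(origin)
    slices = tuple(
        slice(o, o + c * d, d)
        for o, c, d in zip(origin, count, downsample)
    )
    return grid[slices]

grid = np.arange(100).reshape(10, 10)
# offset (2, 1), 3x2 points, taking every 2nd row and every 3rd column
sub = select_hyperslab(grid, origin=[2, 1], count=[3, 2], downsample=[2, 3])
# sub has shape (3, 2): rows 2, 4, 6 and columns 1, 4 of the grid
```

Non-axis-parallel direction vectors would require a general index transform rather than plain slicing, which is why the format stores them as explicit integer vectors.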
OHM forks on Polygon: The case of KLIMA - Mai Finance - Tutorials This guide presents a deliberately simplified overview of OHM forks and related projects, and of how you can benefit from them on Polygon, with the special case of Klima DAO. If you've been following crypto news for the past few months, you've certainly noticed a lot of (3,3) references and heard about OHM-related projects. I will quickly present what these projects are, what core principles they are built on, and how you can actually use them as part of your investment strategies. For this last part, we'll focus on Klima DAO, one of the most successful OHM forks on Polygon, which also has a very interesting story and goal. What are OHM forks? What is OHM and what is a fork? Everything started on the Ethereum mainnet with Olympus DAO. Its goal is to create a new reserve currency to compete with the dollar, except that unlike stablecoins, this new currency has a floating value. The native token (OHM) needs to be fully backed by a basket of different assets, but the tokenomics of the project let the market define OHM's value. Olympus DAO launched in March 2021 and is still a very successful project on mainnet: the TVL is denominated in dozens of millions of dollars, and the OHM price has remained very high. Because of this success, the project has been forked (copied), and multiple Olympus DAO clones have popped up on many chains. Overview of the tokenomics This section is a little more technical than our other guides, but to understand the success of Olympus and other OHM forks, it's important to understand how they work. The base idea of the Olympus protocol is to increase the treasury as much as possible by selling the native token at a discount, while keeping the circulating supply as low as possible to maintain a high price.
This is done by providing very high rewards to stakers, and by having almost full control of the liquidity. Bonding: the protocol offers native tokens at a discounted price. The price is paid using specific assets that are used to back the native token. In the example of Olympus DAO, the OHM token is 100% backed by a few tokens, mostly DAI, so bonds can be purchased using DAI directly, or using DAI-OHM LP tokens (and lately additional tokens including FRAX). When people buy the native token using the backing assets or LP tokens, the payment goes directly into the treasury, allowing the protocol to mint more tokens and hence run for a longer period of time. The catch is that the discounted tokens are released over a vesting period, meaning the user who bought the native token through bonding cannot use all of it right away. Staking: after bonding, users collect the native tokens and have the choice between selling them or staking them. To make sure the latter option is chosen, the protocol offers insanely high rewards to stakers (we're talking about 1.2% daily gains!). The goal behind these high APRs is to get a staking ratio as close as possible to 100%. If there aren't a lot of tokens circulating, the price is driven up, and coupled with high rewards, this makes it even more interesting to stake. As a side note, a price that goes up also helps keep reward rates high. Increasing the treasury and controlling the liquidity: the treasury is increased through bonding, and through the fact that native tokens can be bonded with LP tokens that are almost completely controlled by the protocol. These LP tokens are used to collect swap fees from users who prefer buying the native token on the market at full price over bonding (see next chapter for details). Buy back and burn: most OHM-like projects include a mechanism that buys back the native token and burns it on very specific occasions.
Problems occur when users sell the native token, driving the price down. If people sell their tokens, the APY goes up, since the number of minted tokens remains the same for fewer staked tokens. But even with a higher APY, if nobody buys and stakes the sold tokens, the protocol can buy them back from the market in order to apply buying pressure, drive the price up, and keep the circulating supply low. Tokens that are bought back are simply destroyed. Indeed, since part of the treasury has been used to acquire these tokens, keeping them in the treasury or distributing them would actually dilute the treasury, which would either reduce the reward rate or shorten the period of time during which the protocol can run. You can find additional resources about the concept of Olympus DAO and its tokenomics here: ​DeFi 2.0 - A new Narrative? Olympus DAO, Tokemak Explained​ ​WTF is Olympus DAO​ Bonding VS Staking Why would someone pay full price for a token when there's a discounted version available through a bond? This is a legitimate question, and the answer depends on the discount offered by the bond. Since we'll be working with Klima DAO, let's compare buying + staking VS bonding: Klima staking reward as of November 2021 Klima bonding ROI as of November 2021 If one buys KLIMA directly from the market and stakes it for 5 days (the actual vesting period for bonding), the ROI (Return On Investment) will be 8.51%. If one buys a bond instead, the maximal ROI would be 5.47%, by providing BCT/KLIMA. This means that, with the equivalent of $100, after 5 days you would get $108.51 with the 1st option and $105.47 with the 2nd option. However, it's important to understand that bonded KLIMA is released over the vesting period. Hence you can harvest the vested KLIMA and stake it in order to profit from rebases (reward distributions).
Since you will only get rewards on whatever you staked during the vesting period, and since there are 15 rebases during the 5 days it takes for the bond to be fully released, we can assume that you can harvest 6.67% of the bond before each of the 15 rebases. Assuming you harvest and stake at the beginning of each rebase, at the end of the vesting period the 5.47% ROI is respected, but staking rewards also add an extra 4.65% (not compounded, for simplicity), resulting in a 10.12% ROI. This means that bonding is actually more interesting than staking directly, even if the bonding ROI seems lower than the staking ROI. The total reward you will get by staking N times over the vesting period (with N = 15 at most) is: Reward_{total} = \sum_{i=1}^{N}{\frac{Investment \cdot i \cdot (1 + APR_{Vesting})}{N} \cdot APR_{staking}} You can then run your own simulations in order to verify whether it's better to buy and stake, or to bond. For our example, with a staking ROI of 8.51% over 5 days, a bonding discount of 3.95% with 15 rebases would be better (giving an equivalent ROI of 8.52%). You can run the same simulation with harvest + stake only once a day instead of 3 times a day before each rebase. For the same APY as above, you would then need a bonding discount of 6.76% to get a better ROI than staking. You can find a simulator for your bonding VS staking calculation in the form of a Google spreadsheet that you can copy and edit at will. Note that this page is NOT maintained nor provided by the QiDAO community. The special case of Klima DAO The specificity that makes Klima DAO different from other OHM forks is the main asset backing the KLIMA token: the BCT token, provided by the Toucan Protocol. The BCT (Base Carbon Tonne) actually represents investments in the real world to decarbonize the earth, turning real-world carbon offsets into tokens. You can read a lot more about how it works in the official documentation of Toucan.
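The bond-versus-stake comparison above can be sketched in a few lines of Python. This is a simplified model under the article's assumptions (linear vesting over 15 rebases, each vested slice staked immediately, staking gains not compounded); the function name and the derivation of the per-rebase rate from the 5-day staking ROI are mine, not the protocol's:

```python
def bond_roi_with_restaking(bond_roi, staking_roi, n_rebases=15):
    """Total ROI of a bond vesting linearly over n_rebases, where each
    vested slice is staked immediately and staking gains are not
    compounded (the summation formula from the article)."""
    # per-rebase staking rate implied by the full-period staking ROI
    per_rebase = (1 + staking_roi) ** (1 / n_rebases) - 1
    vested_total = 1 + bond_roi          # total tokens received per unit invested
    staking_gain = sum(
        (vested_total * i / n_rebases) * per_rebase
        for i in range(1, n_rebases + 1)
    )
    return bond_roi + staking_gain

# Article's numbers: 8.51% staking ROI over 5 days, 5.47% bond ROI
total = bond_roi_with_restaking(0.0547, 0.0851)
```

With these inputs the model lands at roughly 10.1% total ROI, in line with the ~10.12% quoted above (the small gap comes from rounding the per-rebase rate), and it confirms that this bond beats the 8.51% obtained by buying and staking directly.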
BCT is then used by the Klima DAO app to mint KLIMA tokens, the same way DAI is used by Olympus DAO to mint OHM. In other words, Klima acts like a carbon sink, providing real-life funds to fight climate change. More info can be found on the Klima website and documentation, and you can come and discuss ways to make crypto greener on the QiDAO Discord server. One of the main differences between Olympus and Klima is that BCT doesn't have a stable price. This presents a higher risk than for forks using stablecoins to build their treasury; however, the assumption is that environmental problems will become more and more pressing, and that more and more projects will try to extract carbon from the atmosphere, which would in turn increase the overall value of BCT. Strategy 1: sKLIMA leverage, or full (9,9) Without going deep into the (3,3) game theory, (9,9) represents a situation where one leverages a staked position. This is possible because Klima DAO provides the sKLIMA token as a proof of deposit, which some platforms accept as collateral for a loan. Let's see the details. The leverage loop using Market XYZ and Klima Leverage your Klima position The idea is to get an initial amount of KLIMA tokens that you can deposit on Klima DAO. This will allow you to earn a very high APY (as of writing, the APY is 38,873.08%, i.e., 601% APR or a daily gain of 1.68%), and by depositing your KLIMA tokens, you will get sKLIMA as a proof of deposit. This sKLIMA token can be used on Market XYZ in the Green Leverage Locker, which will allow you to take a loan against this deposit. As a side note, Mai Finance partnered with Market XYZ and seeded the green locker pool with 1M MAI to guarantee low interest rates when you borrow MAI against your sKLIMA.
Green locker on Market XYZ as of November 2021
You are not obligated to borrow MAI; you can borrow whichever token has the lowest interest rate. Keep in mind that you will pay fees on your loan, and the faster you repay it, the less you will pay. With your loan, you will be able to buy more KLIMA tokens and repeat the loop. You will notice that the APY on sKLIMA largely compensates for the interest on your loan. There's a minimum amount to borrow on Market.xyz; please check the limit before applying this strategy. Market XYZ also has liquidation levels, meaning that if your collateral value drops below the liquidation level, you risk losing your collateral. In order to lower the risk of liquidation, the following simulation assumes that you keep a C/D (collateral-to-debt) ratio of 250%, and that you invest an initial $1,000 of KLIMA tokens at 38,873% APY to borrow MAI at 20.49% interest. (The simulation table, with columns sKLIMA ($), MAI loan ($), eq. APY (%), and interests ($), is not reproduced here.) Of course, it's possibly enough to stop after 3 loops, since the equivalent APY won't grow much past that. As a side note, with an initial investment of $1,000, the value you would get at the end of 1 year would be $646,820.00, assuming everything stays the same. In other words, if you invest $1,000, you will need to repay $665.57 plus an additional $136.38 of interest (an accumulated debt of $801.95), but you will also earn $646,820. You can also see that the value of your sKLIMA position grows very quickly (around 8% every 5 days), which means you can also increase your debt at that point and leverage even more for additional gains. Capturing benefits value and repaying your loan One of the main issues with Ohm-fork projects is the assumption that everybody stakes and nobody sells. But if nobody sells, nobody gets any benefits, and in most cases the first to sell gets the cake. For any investment strategy, it's important to capture the value of your gains.
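The leverage loop described above can be sketched in a few lines. This is a simplified model, assuming a constant 250% C/D ratio, constant prices, seven loops (which reproduces the quoted debt figures), and one year of simple interest at the 20.49% borrow rate:

```python
def leverage_loop(initial, cd_ratio=2.5, loops=7, borrow_apr=0.2049):
    """Fold an initial KLIMA position back on itself via sKLIMA-backed loans."""
    staked, debt, buy = 0.0, 0.0, initial
    for _ in range(loops):
        staked += buy              # stake KLIMA, receive sKLIMA as proof of deposit
        borrowed = buy / cd_ratio  # borrow MAI against the newly deposited sKLIMA
        debt += borrowed
        buy = borrowed             # swap the MAI for more KLIMA and loop again
    interest = debt * borrow_apr   # one year of interest on the outstanding debt
    return staked, debt, interest
```

Starting from $1,000 this yields a total staked position of about $1,664, an outstanding loan of $665.57, and $136.38 of yearly interest (an accumulated debt of $801.95), matching the figures quoted above.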
You can do this by withdrawing part of your sKLIMA position on Market XYZ and getting back KLIMA that you can sell. If you invest $100 and run the 7 loops as above, your investment in KLIMA would have generated $67.11 after 31 days, which means you can fully repay your loan with interest in a single month. If you do so, you will start again with $166 the next month and no outstanding debt. Just keep an eye on the borrowing APR, which can get pretty high on Market XYZ. Strategy 2: Continuous Investment, or full (4,4) Once again, (4,4) is related to game theory and reserve currencies, and indicates people who bond their tokens and then stake them. In this strategy, we will see how we can use Klima and Augury to purchase bonds regularly and stake continuously. The investment loop using Augury and Mai Finance Continuous investment using Augury Finance and Mai Finance We are still using Klima, but this time with an infusion from Augury Finance in order to automate the extraction of Klima's value. By depositing your KLIMA tokens in the infusion, the algorithm in charge of the infusion performs the following actions after each rebase:
- 50% of the harvested KLIMA is restaked to increase your sKLIMA position
- 50% of the harvested KLIMA is sold for USDC, added to the NFTM pool on Augury, and distributed to you as NFTM tokens
Augury infusion with 0% performance fee and 0% deposit fee
NFTM can then either be held while it increases in value, or redeemed for its USDC value. In other words, it doesn't matter if the KLIMA token loses value after a rebase, since its value has already been captured and stored as NFTM. After redeeming the USDC value of your NFTM rewards, you can buy the token of your choice and store it in a vault on Mai Finance. The example above uses a camWETH vault, but you can use any vault you like. The idea is to use the vaults on Mai Finance to borrow MAI and buy new bonds on Klima DAO to repeat the loop.
Then you can harvest the KLIMA tokens and inject them into Augury. Keep in mind that bonds vest a little at a time, so it's entirely possible to harvest regularly and stake on Augury before your bond is totally vested. Assume you invest $100 as in the previous example and place it directly in the Augury infusion, that the APR of the Klima infusion is 552.94% (current value as of writing), that you keep a C/D ratio of 240% on the camWETH vault, and that the camWETH vault yields a 2.19% APY. (The year-long results table, including a wETH ($) column, is not reproduced here.) Once again, assuming that all rates and prices stay the same, at the end of the year you would have:
- $4,684.775 worth of KLIMA tokens
- $3,284.424 worth of wETH
- an outstanding debt of $1,368.510
This is an equivalent APY of 6,866.46%. It is far from the 38,705.13% advertised by KLIMA, but still pretty impressive for a $100 investment. Also, a good portion of your gains has been converted into wETH in a vault on Mai Finance, and your loan on the application will earn you some additional Qi tokens. While this strategy has a much lower APY than pure (9,9), it's also relatively affordable, since you can enter the loop with as much KLIMA as you want. Everything presented in this document is pure theory and is proposed for educational purposes. The biggest issue with projects like Olympus and Klima is that, once again, the first user to sell profits from the high price. If the first sell is massive (because gains are massive), it can snowball quickly into a panic that can very well kill the price of the KLIMA token. However, in this case the APY would skyrocket, meaning that users who don't sell will benefit from very high rewards, so that when the APY attracts new users, the ones who held will be big winners. It's also good to note that the project can only continue to print tokens as long as additional funds are injected into the treasury.
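For readers who want to reproduce the order of magnitude of the (4,4) numbers above, here is a deliberately simplified model. It holds all prices and rates constant, ignores borrowing interest and bond discounts, and compounds daily rather than per rebase, so it will not match the quoted figures exactly:

```python
def infusion_year(initial, infusion_apr=5.5294, vault_apy=0.0219,
                  cd_ratio=2.4, days=365):
    """Simplified daily model of the (4,4) loop: half of each day's KLIMA
    harvest is restaked by the infusion, half is extracted (NFTM -> USDC ->
    wETH in a Mai vault), and MAI borrowed against the wETH at the C/D
    ratio buys new KLIMA bonds."""
    daily = infusion_apr / 365          # simple (non-compounded) daily rate
    klima, weth, debt = initial, 0.0, 0.0
    for _ in range(days):
        harvest = klima * daily
        klima += harvest / 2            # 50% restaked by the infusion
        weth += harvest / 2             # 50% captured as wETH collateral
        borrowed = (harvest / 2) / cd_ratio
        debt += borrowed
        klima += borrowed               # borrowed MAI buys new KLIMA bonds
        weth *= 1 + vault_apy / 365     # vault yield on the wETH position
    return klima, weth, debt
```

With $100 this lands in the same ballpark as the table (a few thousand dollars of KLIMA, roughly $3,400 of wETH, and about $1,400 of debt), confirming the general shape of the strategy rather than the exact figures.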
So the risk can be very high if you don't extract some profits from time to time. As a final note, be aware that Ohm-forks are the new trend, but most projects may fail, and a lot of these forks are not solid projects. Because of their nature, they are not verified by RugDoc yet, and it may be very complicated to tell real applications from pure scams.
The trace=n option specifies that a number of previous frames of the animation be kept visible. When n is a positive integer, the previous n frames remain visible, so that n+1 frames are displayed at once (the example below uses n=5). When n is a list of integers, the frames in those positions are the frames that remain visible. \mathrm{with}⁡\left(\mathrm{plots}\right): \mathrm{animate}⁡\left(\mathrm{plot},[A⁢{x}^{2},x=-4..4],A=-3..3\right) \mathrm{animate}⁡\left(\mathrm{plot},[A⁢{x}^{2},x=-4..4],A=-3..3,\mathrm{trace}=5,\mathrm{frames}=50\right) \mathrm{animate}⁡\left(\mathrm{plot},[A⁢{x}^{2},x=-4..4],A=-3..3,\mathrm{trace}=[30,35,40,45,50],\mathrm{frames}=50\right) \mathrm{animate}⁡\left(\mathrm{plot3d},[A⁢\left({x}^{2}+{y}^{2}\right),x=-3..3,y=-3..3],A=-2..2,\mathrm{style}=\mathrm{patchcontour}\right) \mathrm{animate}⁡\left(\mathrm{implicitplot},[{x}^{2}+{y}^{2}={r}^{2},x=-3..3,y=-3..3],r=1..3,\mathrm{scaling}=\mathrm{constrained}\right) \mathrm{animate}⁡\left(\mathrm{implicitplot},[{x}^{2}+A⁢x⁢y-{y}^{2}=1,x=-2..2,y=-3..3],A=-2..2,\mathrm{scaling}=\mathrm{constrained}\right) \mathrm{animate}⁡\left(\mathrm{plot},[[\mathrm{sin}⁡\left(t\right),\mathrm{sin}⁡\left(t\right)⁢\mathrm{exp}⁡\left(-\frac{t}{5}\right)],t=0..x],x=0..6⁢\mathrm{\pi },\mathrm{frames}=50\right) \mathrm{animate}⁡\left(\mathrm{plot},[[\mathrm{cos}⁡\left(t\right),\mathrm{sin}⁡\left(t\right),t=0..A]],A=0..2⁢\mathrm{\pi },\mathrm{scaling}=\mathrm{constrained},\mathrm{frames}=50\right) \mathrm{animate}⁡\left(\mathrm{plot},[[\frac{1-{t}^{2}}{1+{t}^{2}},\frac{2⁢t}{1+{t}^{2}},t=-10..A]],A=-10..10,\mathrm{scaling}=\mathrm{constrained},\mathrm{frames}=50,\mathrm{view}=[-1..1,-1..1]\right) \mathrm{opts}≔\mathrm{thickness}=5,\mathrm{numpoints}=100,\mathrm{color}=\mathrm{black}: \mathrm{animate}⁡\left(\mathrm{spacecurve},[[\mathrm{cos}⁡\left(t\right),\mathrm{sin}⁡\left(t\right),\left(2+\mathrm{sin}⁡\left(A\right)\right)⁢t],t=0..20,\mathrm{opts}],A=0..2⁢\mathrm{\pi }\right)
B≔\mathrm{plot3d}⁡\left(1-{x}^{2}-{y}^{2},x=-1..1,y=-1..1,\mathrm{style}=\mathrm{patchcontour}\right): \mathrm{opts}≔\mathrm{thickness}=5,\mathrm{color}=\mathrm{black}: \mathrm{animate}⁡\left(\mathrm{spacecurve},[[t,t,1-2⁢{t}^{2}],t=-1..A,\mathrm{opts}],A=-1..1,\mathrm{frames}=11,\mathrm{background}=B\right) \mathrm{animate}⁡\left(\mathrm{ball},[0,\mathrm{sin}⁡\left(t\right)],t=0..4⁢\mathrm{\pi },\mathrm{scaling}=\mathrm{constrained},\mathrm{frames}=100\right) \mathrm{sinewave}≔\mathrm{plot}⁡\left(\mathrm{sin}⁡\left(x\right),x=0..4⁢\mathrm{\pi }\right): \mathrm{animate}⁡\left(\mathrm{ball},[t,\mathrm{sin}⁡\left(t\right)],t=0..4⁢\mathrm{\pi },\mathrm{frames}=50,\mathrm{background}=\mathrm{sinewave},\mathrm{scaling}=\mathrm{constrained}\right) \mathrm{animate}⁡\left(\mathrm{ball},[t,\mathrm{sin}⁡\left(t\right)],t=0..4⁢\mathrm{\pi },\mathrm{frames}=50,\mathrm{trace}=10,\mathrm{scaling}=\mathrm{constrained}\right) \mathrm{animate}⁡\left(F,[\mathrm{\theta }],\mathrm{\theta }=0..2⁢\mathrm{\pi },\mathrm{background}=\mathrm{plot}⁡\left([\mathrm{cos}⁡\left(t\right)-2,\mathrm{sin}⁡\left(t\right),t=0..2⁢\mathrm{\pi }]\right),\mathrm{scaling}=\mathrm{constrained},\mathrm{axes}=\mathrm{none}\right)
Research:Standard metrics - Meta
This page is about a 2013/14 project for metrics standardization. For overall edit statistics across Wikimedia projects, see Statistics.
Metrics standardization, Wikimedia Research & Data Showcase, March 2014
Researchers, analysts, and product managers use a wide variety of metrics (from "monthly active editors" to "user's giving proportion in the dictator game"[1]) to track and evaluate phenomena related to the Wikimedia projects. This page collects metrics which are suitable for wide use, which will make it faster to develop new research projects and easier to compare existing ones. These metrics are mostly quantitative, but qualitative metrics are worth standardizing too. For example, researchers sometimes survey Wikimedia users and contributors about their subjective satisfaction with software. It would be sensible to devise a standard, well-considered way of asking such questions.
A high-level overview of the design of Rolling Monthly Active Editors, June 2014
Contents:
2.1 Newly registered user
2.3 Productive new editor
2.4 Surviving new editor
3.1 The editor model
3.1.1 Rolling active editor
3.1.2 Rolling new active editor
3.1.3 Rolling surviving new active editor
3.1.4 Rolling recurring old active editor
3.1.5 Rolling re-activated editor
3.2 Other community metrics
3.2.1 Daily unique registered editors
3.2.2 Daily unique anonymous editors
3.2.3 Daily unique bot editors
3.2.4 Daily unique page creators
3.2.5 Daily unique media creators
4.1 Daily edits
4.2 Daily edits by registered users
4.3 Daily edits by anonymous users
4.4 Daily edits by bot users
4.5 Daily pages created
4.6 Daily media created
6.2 Unique devices
Analysis example. An example of sensitivity analysis for the new editor definition: monthly count of newly registered users on the German Wikipedia performing at least one edit in their first day/week in the article namespace or across all namespaces.
One way to group standard metrics is into 5 categories:
- metrics that provide indicators on the acquisition, activation, and productivity of users joining Wikipedia or other Wikimedia projects for the first time;
- metrics that measure the overall composition, growth, and volume of activity of existing communities, including both human and automated activity by bots;
- metrics that measure the growth and dynamics of content creation, including edits, new articles, and uploads;
- metrics that measure the quantity and quality of curation and moderation activities, such as reverts, deletions, and blocks;
- metrics that measure traffic and readership of Wikimedia projects.
Each metric and user class definition comes with supportive analysis whose goal is to understand how sensitive its definition is to specific parameter choices and whether the metric captures the same phenomenon in different projects. We strive to run sensitivity analyses across projects in different languages and of varying levels of maturity, but we welcome feedback to improve these definitions and to identify edge cases, particularly for smaller projects or projects with uncommon policies, where the proposed definition may not accurately capture the quantity it attempts to represent. We also expect the use of these metrics in the first iterations of the design of Editor Engagement Vital Signs to reveal anomalies and interesting facts that are hard to anticipate until series for each metric are automatically generated for each Wikimedia project.
Newly registered user[edit]
A newly registered user is a previously unregistered user creating a username for the first time on a Wikimedia project.
New editor[edit]
A new editor(n, t) is a newly registered user who completes at least n edits within time t since registration (at registration time T). Standardized definition: n = 1 edit.
Productive new editor[edit]
A productive new editor(n, t) is a new editor who completes at least n productive edit(s) within time t since registration (at time T). Standardized definition: n = 1 productive edit.
Surviving new editor[edit]
A surviving new editor(n, m, t1, t2, t3) is a new editor who completes at least n edits within time t1 since registration (at time T) and also completes m edits in the survival period [T + t2, T + t2 + t3]. Standardized definition: n = 1 edit, t2 = 30 days (~ one month).
The editor model[edit]
The editor model is a suite of metrics which includes subclasses of, and funnel rates for, monthly active editors.
Rolling active editor[edit]
A rolling active editor(T, u, n) is a registered user who completed n edits to pages in any namespace of a Wikimedia project between T − u and T. Also known as: Active editor (rolling). Standardized definition: n = 5 edits.
Rolling new active editor[edit]
A rolling new active editor(T, u, n) is a newly registered user who both registered and completed n edits between T − u and T. Also known as: New active editor (rolling). Standardized definition: n and u as for Rolling active editor.
Rolling surviving new active editor[edit]
A rolling surviving new active editor(T, u, n) is a user who completed n edits between T − 2u and T − u and continued to complete n edits between T − u and T. Also known as: Surviving new active editor (rolling). Standardized definition: n and u as for Rolling new active editor.
Rolling recurring old active editor[edit]
A rolling recurring old active editor(T, u, n) is a user registered before T − 2u who completed n edits between T − 2u and T − u and n edits between T − u and T. Also known as: Recurring old active editor (rolling). Standardized definition: n and u as above.
Rolling re-activated editor[edit]
A rolling re-activated editor(T, u, n) is a user who completed less than n edits between T − 2u and T − u, and completed n edits between T − u and T (but was not a newly registered user). Also known as: Reactivated editor (rolling). Standardized definition: n and u as above.
Other community metrics[edit]
The following metrics do not form part of the Editor Model and are computed daily.
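Before moving on, the rolling-window classes defined above can be written as predicates over a user's edit history. This is only a sketch with a made-up data model (times as plain numbers, e.g. days since some epoch); the standardized window u is left as a parameter since the page treats it as such:

```python
def edits_in(edit_times, start, end):
    """Count edits in the half-open window (start, end]."""
    return sum(start < t <= end for t in edit_times)

def rolling_active(edit_times, T, u, n=5):
    """Rolling active editor: at least n edits between T-u and T."""
    return edits_in(edit_times, T - u, T) >= n

def rolling_new_active(edit_times, registered_at, T, u, n=5):
    """Registered AND completed n edits, both between T-u and T."""
    return T - u < registered_at <= T and rolling_active(edit_times, T, u, n)

def rolling_surviving_new_active(edit_times, registered_at, T, u, n=5):
    """Was a rolling new active editor in the previous window, still active now."""
    return (rolling_new_active(edit_times, registered_at, T - u, u, n)
            and rolling_active(edit_times, T, u, n))
```

A user who registers on day 1 and makes six edits in the first week counts as a rolling new active editor at T = 30 with u = 30, but falls out of the rolling active class by T = 90.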
These metrics will be delivered in stage 3 (2015-Q1).
Daily unique registered editors[edit]
A daily unique registered editor(D, n) is a registered user who is not a flagged bot and completed at least n edits on date D.
Daily unique anonymous editors[edit]
A daily unique anonymous editor(D, n) is an unregistered user who completed at least n edits on date D via the same IP address.
Daily unique bot editors[edit]
A daily unique bot editor(D, n) is a user who is a flagged bot and completed at least n edits on date D.
Daily unique page creators[edit]
A daily unique page creator(D, n) is a user who completed at least n page creations across all namespaces on date D. Standardized definition: n = 1 page creation.
Daily unique media creators[edit]
A daily unique media creator(D, n) is a user who completed at least n media creations on date D. Standardized definition: n = 1 media creation.
These metrics will be delivered in stage 3 (2015-Q1).
Daily edits[edit]
daily edits(D) is a count of the number of edits saved by any users on date D.
Daily edits by registered users[edit]
daily edits by registered users(D) is a count of the number of edits saved by non-bot-flagged registered users on date D.
Daily edits by anonymous users[edit]
daily edits by anonymous users(D) is a count of the number of edits saved by anonymous editors on date D.
Daily edits by bot users[edit]
daily edits by bot users(D) is a count of the number of edits by flagged bot users on date D.
Daily pages created[edit]
daily pages created(D) is a count of the number of page creations across all namespaces on date D.
Daily media created[edit]
daily media created(D) is a count of media creations on date D.
See Research:Page view.
Unique devices[edit]
See Research:Unique devices.
Supplementary resources[edit]
Preliminary drafts and background analysis for other metrics can be found in this category.
Presentation at October 2014 metrics showcase
[1] Yann Algan et al. (2014), "Cooperation in a peer production economy: experimental evidence from Wikipedia."
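As a closing illustration, the daily edit-count metrics defined earlier can be aggregated from an edit log. The record schema here is made up for the example (the real data lives in MediaWiki databases):

```python
from collections import Counter

def daily_edit_metrics(edits):
    """Aggregate daily edit counts from a list of edit records.

    Each record is a dict with keys: 'date', 'user' (None for anonymous
    edits, which are identified by IP instead), and 'is_bot' (flagged-bot
    status). Returns (total, by registered, by anonymous, by bot) counters.
    """
    totals = Counter()
    by_registered = Counter()
    by_anonymous = Counter()
    by_bot = Counter()
    for e in edits:
        d = e["date"]
        totals[d] += 1
        if e["is_bot"]:
            by_bot[d] += 1            # flagged bots counted separately
        elif e["user"] is None:
            by_anonymous[d] += 1      # anonymous (IP) editors
        else:
            by_registered[d] += 1     # non-bot-flagged registered users
    return totals, by_registered, by_anonymous, by_bot
```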
If k is negative, this is understood as counting from the end of A (unless the lowerbound of A is different from 1). That is, -1 represents the last entry of A, -2 represents the next-to-last entry, and so on. \mathrm{with}⁡\left(\mathrm{ArrayTools}\right): v≔[5,2,5,2,3,4,4,5,3,1,5,2,3,2,2,4,3,3,1,2] \textcolor[rgb]{0,0,1}{v}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}] To find the third-smallest entry of v , we can use Partition. 
w≔\mathrm{Partition}⁡\left(v,3\right) \textcolor[rgb]{0,0,1}{w}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}] w[3] \textcolor[rgb]{0,0,1}{2} {w}_{3}=2 , so the third-smallest entry of v is 2. We can find the fourth-largest entry of v by having Partition partition around the fourth entry from the end of v : w≔\mathrm{Partition}⁡\left(v,-4\right)
\textcolor[rgb]{0,0,1}{w}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}] w[-4] \textcolor[rgb]{0,0,1}{5} Alternatively, we can partition around the fourth entry of v in descending order. 
w≔\mathrm{Partition}⁡\left(v,4,\mathrm{`>`}\right) \textcolor[rgb]{0,0,1}{w}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}] w[4] \textcolor[rgb]{0,0,1}{5} If we want to know both the third-smallest and the fourth-largest entries of v , we can call Partition with a list value for the k parameter. 
w≔\mathrm{Partition}⁡\left(v,[3,-4]\right) \textcolor[rgb]{0,0,1}{w}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}] w[3] \textcolor[rgb]{0,0,1}{2} w[-4] \textcolor[rgb]{0,0,1}{5} Alternatively, we can first fix the third-smallest entry and then use the bounds option to partition the rest of v w≔\mathrm{Partition}⁡\left(v,3\right) 
\textcolor[rgb]{0,0,1}{w}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}] w[3] \textcolor[rgb]{0,0,1}{2} u≔\mathrm{Partition}⁡\left(w,-4,\mathrm{bounds}=4..\right) \textcolor[rgb]{0,0,1}{u}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}] 
u[-4] \textcolor[rgb]{0,0,1}{5} As another alternative, if v is an rtable, we can do both of these manipulations in-place. v≔\mathrm{Vector}⁡\left(v\right) \textcolor[rgb]{0,0,1}{v}\textcolor[rgb]{0,0,1}{≔}\begin{array}{c}[\begin{array}{c}\textcolor[rgb]{0,0,1}{5}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{5}\\ \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{4}\\ \textcolor[rgb]{0,0,1}{5}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{⋮}\end{array}]\\ \hfill \textcolor[rgb]{0,0,1}{\text{20 element Vector[column]}}\end{array} \mathrm{Partition}⁡\left(v,3,\mathrm{inplace}\right): \mathrm{Partition}⁡\left(v,-4,\mathrm{bounds}=4..,\mathrm{inplace}\right): \mathrm{convert}⁡\left(v,\mathrm{list}\right) [\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}] v[3] \textcolor[rgb]{0,0,1}{2} v[-4] \textcolor[rgb]{0,0,1}{5}
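For readers who want to experiment outside Maple, the selection behaviour illustrated above can be mimicked in Python. This is only a rough analogue: it fully sorts the data instead of performing the quickselect-style partial rearrangement that Partition does, but the 1-based and negative indexing convention matches the examples:

```python
def kth_entry(v, k):
    """Return the k-th smallest entry of v for k > 0 (1-based), or the
    k-th entry counting from the end for k < 0 (so k = -1 is the largest),
    following the indexing convention of Maple's ArrayTools:-Partition."""
    s = sorted(v)
    return s[k - 1] if k > 0 else s[k]
```

With the list from the examples, kth_entry(v, 3) returns 2 (the third-smallest entry) and kth_entry(v, -4) returns 5 (the fourth-largest).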
Cecelia wants to measure the area of her bedroom floor. Should she use square inches or square feet? Complete parts (a) through (c) below as you explore this question.
(a) Write a sentence to explain which units you think Cecelia should use. Square inches are good for measuring the area of things such as your desk or a piece of paper. Do you think they would also be good for measuring the area of your room? Make sure to explain your reasoning.
(b) If Cecelia's bedroom is 12 feet by 15.5 feet, what is the area of the bedroom floor? Show how you got your answer. To get you started, here is a generic rectangle representing the area of Cecelia's room, with the 15.5-foot side split into 15 feet and 0.5 feet, and the 12-foot side split into 10 feet and 2 feet.
(c) Find the perimeter of Cecelia's bedroom floor. Show how you got your answer. The perimeter is the distance you would travel if you walked the length of each wall in Cecelia's room. Thus, the following expression represents the perimeter of the room: 12 + 15.5 + 12 + 15.5 = 55 feet.
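For part (b), the generic rectangle splits each side length into easy pieces; the original leaves the computation to the reader, but it works out as:

```latex
\begin{aligned}
A &= 12 \times 15.5 = (10 + 2)(15 + 0.5)\\
  &= 10 \times 15 + 10 \times 0.5 + 2 \times 15 + 2 \times 0.5\\
  &= 150 + 5 + 30 + 1 = 186\ \text{square feet.}
\end{aligned}
```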
A new shipment of nails is due any day at Hannah's Hardware Haven, and you have been asked to help label the shelves so that the nails are organized by length from least to greatest. She is expecting nails of the following sizes: 1\frac{3}{8} inch, 1\frac{7}{8} inch, 2\frac{1}{4} inch, \frac{7}{8} inch, and 1\frac{1}{2} inch. Use the ruler below to help Hannah order the labels on the shelves from least to greatest. Dots have been placed on the ruler representing the lengths. Determine which dot represents which length and you will be able to put the lengths in order from least to greatest.
Log unconditional probability density for discriminant analysis classifier - MATLAB - MathWorks Switzerland The unconditional density is P\left(x\right)=\sum _{k=1}^{K}P\left(x,k\right), where each class-conditional density is multivariate Gaussian: P\left(x|k\right)=\frac{1}{{\left({\left(2\pi \right)}^{d}|{\Sigma }_{k}|\right)}^{1/2}}\mathrm{exp}\left(-\frac{1}{2}\left(x-{\mu }_{k}\right){\Sigma }_{k}^{-1}{\left(x-{\mu }_{k}\right)}^{T}\right), where d is the number of predictors, {\mu }_{k} is the class mean, |{\Sigma }_{k}| is the determinant of the class covariance matrix, and {\Sigma }_{k}^{-1} is its inverse.
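A pure-Python sketch of the same computation — not the MATLAB implementation — restricted to one predictor per observation for brevity; the parameter names (`mus`, `sigmas`, `priors`) are ours, not part of the MATLAB API:

```python
import math

def log_unconditional_density(x, mus, sigmas, priors):
    """log P(x) = log sum_k P(k) * N(x; mu_k, sigma_k^2), computed
    with the log-sum-exp trick for numerical stability."""
    log_terms = []
    for mu, sigma, prior in zip(mus, sigmas, priors):
        # log of the 1-D Gaussian density N(x; mu, sigma^2)
        log_pdf = (-0.5 * math.log(2 * math.pi * sigma**2)
                   - (x - mu)**2 / (2 * sigma**2))
        log_terms.append(math.log(prior) + log_pdf)
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms))

# Two equally likely classes with unit variance
print(log_unconditional_density(0.0, [0.0, 4.0], [1.0, 1.0], [0.5, 0.5]))
```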
Physics - The antineutrino vanishes differently William C. Louis Researchers report a possible difference between muon neutrino and muon antineutrino disappearance, which, if confirmed, will have serious implications for our current theoretical understanding. Figure 1: The MINOS experiment consists of two similar detectors located at distances of 1.04\phantom{\rule{0.333em}{0ex}}\text{km} [Near Detector (ND)] and 735\phantom{\rule{0.333em}{0ex}}\text{km} [Far Detector (FD)] from the neutrino production target. The ND is located at Fermilab, and the FD is located in the Soudan Underground Laboratory in northern Minnesota. CPT symmetry, the combination of charge conjugation, parity inversion, and time reversal, is a fundamental symmetry of particle and nuclear physics and is considered sacred. It is conserved in field theories that explain the strong, weak, and electromagnetic interactions. In the lepton sector, CPT symmetry requires that muon neutrino disappearance oscillations be identical to muon antineutrino disappearance oscillations in vacuum. A test of CPT symmetry was recently performed by the MINOS experiment at Fermilab, which, due to its magnetic field, is the first experiment to distinguish {\mu }^{-} and {\mu }^{+} tracks and separately measure the disappearance of muon neutrinos and muon antineutrinos [1]. (Previous experiments have measured a mixture of neutrino and antineutrino oscillations.) Remarkably, as reported in Physical Review Letters, MINOS appears to observe a difference between muon neutrino and muon antineutrino disappearance [1]. 
The “atmospheric neutrino problem,” a deficit of atmospheric muon neutrinos relative to electron neutrinos, was initially observed by the IMB and Kamioka experiments and was then shown to be due to oscillations by the SuperKamiokande experiment in 1998. Neutrino oscillations occur if there is mixing between neutrino flavors and if individual neutrino flavors consist of a linear combination of different neutrino mass eigenstates. In the case of two-flavor mixing, e.g., mixing between {\nu }_{\mu } and {\nu }_{\tau }, the probability that a {\nu }_{\mu } will oscillate into a {\nu }_{\tau } is given by P\left({\nu }_{\mu }\to {\nu }_{\tau }\right)={\mathrm{sin}}^{2}\left(2\theta \right)\phantom{\rule{0.2em}{0ex}}{\mathrm{sin}}^{2}\left(\frac{1.27\phantom{\rule{0.2em}{0ex}}\Delta {m}^{2}L}{E}\right), where \theta  is the mixing angle, \Delta {m}^{2} is the difference in squared masses of the two mass eigenstates in {\text{eV}}^{2}, L is the distance travelled by the neutrino in \text{km}, and E is the neutrino energy in \text{GeV}. In addition to the IMB, Kamioka, and SuperKamiokande atmospheric neutrino experiments, the K2K, MINOS, and OPERA accelerator neutrino experiments have confirmed the oscillation resolution of the “atmospheric neutrino problem.” The most precise measurement of {\nu }_{\mu } oscillations comes from the MINOS experiment, which consists of two similar detectors [2] located at distances of 1.04\phantom{\rule{0.333em}{0ex}}\text{km} [Near Detector (ND)] and 735\phantom{\rule{0.333em}{0ex}}\text{km} [Far Detector (FD)] from the particle production target. Neutrinos are produced by protons from the Fermilab Main Injector interacting on a graphite target, followed by magnetic horns that focus either positive pions and kaons to produce a dominantly {\nu }_{\mu } beam, or negative pions and kaons to produce a {\overline{\nu }}_{\mu }-enhanced beam. The ND, located at Fermilab, and the FD, located in the Soudan Underground Laboratory in northern Minnesota (see Fig. 1), are tracking calorimeters consisting of planes of magnetized steel interspersed with planes of plastic scintillator. Neutrino interactions in the steel produce muons whose energy is measured by either the range of the contained muon track or by the curvature of the muon track in the magnetic field. This curvature also determines the charge of the muon and whether the incident neutrino is a {\nu }_{\mu } or a {\overline{\nu }}_{\mu }. 
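The standard two-flavor vacuum oscillation formula described above can be sketched numerically; the parameter values in the example call are illustrative, roughly MINOS-like, and not quoted from the paper:

```python
import math

def oscillation_probability(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavor appearance probability in vacuum:
    P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]).
    The survival (disappearance) probability is 1 - P."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative parameters: maximal mixing, dm2 ~ 2.3e-3 eV^2,
# a 735 km baseline, and a 3 GeV neutrino
print(oscillation_probability(1.0, 2.3e-3, 735.0, 3.0))
```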
The hadronic energy is determined from the total amount of light produced in the scintillator. The total neutrino energy is the sum of the muon energy and the associated hadronic energy. MINOS is designed to make a precision measurement of {\nu }_{\mu } and {\overline{\nu }}_{\mu } disappearance by comparing the neutrino energy distribution in the FD (after neutrinos have oscillated) to the neutrino energy distribution in the ND (before neutrinos have oscillated). MINOS has made the world’s best measurement of {\nu }_{\mu } disappearance oscillations [3], based on a large sample of protons on target (POT). Antineutrino experiments are difficult, due to their low event rate compared to neutrino experiments. Nevertheless, with a smaller POT sample, MINOS has also reported the first direct observation of {\overline{\nu }}_{\mu } disappearance oscillations [1] and has measured the corresponding oscillation parameters. The no-oscillation hypothesis in antineutrino mode is disfavored at several standard deviations; more significantly, the {\nu }_{\mu } and {\overline{\nu }}_{\mu } disappearance parameters appear to be different, and the paper reports a low probability that the underlying {\nu }_{\mu } and {\overline{\nu }}_{\mu } parameters are identical. What could explain this apparent difference between muon neutrino and muon antineutrino disappearance? First, it is possible that the difference is just due to a statistical fluctuation. This possibility will be tested by additional MINOS data to be taken over the next few years. If the difference is not a statistical fluctuation, then it is possible that it is due to nuclear effects [4], which can cause a difference in the energy reconstruction of neutrino events compared to antineutrino events. A large energy difference is unlikely but could arise if the hadronic energy is misreconstructed. 
Neutrino events have a higher fraction of hadronic energy than antineutrino events, and as the neutrino energy is needed for the determination of \Delta {m}^{2}, a mismeasurement of the neutrino energy then results in an incorrect measurement of \Delta {m}^{2}. If the apparent difference between muon neutrino and muon antineutrino disappearance is not due to a statistical fluctuation or to nuclear effects, then we would have to consider new physics beyond the standard model. Indeed, global fits to the world neutrino and antineutrino oscillation data [5] encounter tension between the neutrino and antineutrino data sets and favor different neutrino and antineutrino oscillation parameters. One possible beyond-the-standard-model solution involves nonstandard interactions [6], which would affect neutrinos and antineutrinos passing through matter (as is the case for MINOS) differently. A more extreme possibility is that Lorentz symmetry is violated [7] or CPT symmetry is violated [8], and that neutrino oscillation parameters are different from antineutrino parameters. If this were the case, then the impact on nuclear and particle physics would be profound. Fortunately, there are several experiments that are either taking data or being constructed that will be able to test this possible difference between muon neutrino and muon antineutrino disappearance. The SciBooNE and MiniBooNE experiments at Fermilab, located at two different distances from the neutrino source, took data at the same time in both neutrino mode and antineutrino mode and are performing a joint analysis of their disappearance data. Also, the T2K experiment in Japan has near and far detectors and is now taking data with neutrinos. T2K has the capability of switching to antineutrinos in a few years. In addition, the NO\nu A experiment at Fermilab is under construction and should begin taking data in a couple of years with near and far detectors. 
Finally, the IceCube experiment at the South Pole is measuring high-energy atmospheric neutrinos and antineutrinos and will be sensitive to disappearance over a wide range of baselines. Will neutrino experiments continue to surprise us? Is CPT symmetry conserved in the lepton sector? Stay tuned. D. G. Michael et al., Nucl. Instrum. Meth. A 596, 190 (2008). P. Adamson et al., arXiv:1103.0340. G. T. Garvey (private communication). G. Karagiorgi et al., Phys. Rev. D 80, 073001 (2009); Carlo Giunti and Marco Laveder, Phys. Rev. D 83, 053006 (2011); 82, 093016 (2010); 82, 113009 (2010); Joachim Kopp, Michele Maltoni, and Thomas Schwetz, arXiv:1103.4570. A. Friedland, C. Lunardini, and M. Maltoni, Phys. Rev. D 70, 111301 (2004); W. A. Mann et al., Phys. Rev. D 82, 113010 (2010); J. Kopp et al., Phys. Rev. D 82, 113002 (2010); Netta Engelhardt, Ann E. Nelson, and Jonathan R. Walsh, arXiv:1002.4452. Jorge S. Diaz and V. Alan Kostelecky, arXiv:1012.5985. Gabriela Barenboim and Joseph D. Lykken, arXiv:0908.2993. William Louis received his Ph.D. in 1978 from the University of Michigan. After appointments as a research associate at Rutherford Laboratory and an assistant professor at Princeton University, he became a staff member at Los Alamos National Laboratory in 1987. He is a fellow of the American Physical Society and of Los Alamos National Laboratory, and he works on both short- and long-baseline neutrino oscillation and neutrino cross-section experiments. First Direct Observation of Muon Antineutrino Disappearance, P. Adamson et al. (MINOS Collaboration)
DefiniteSum - Maple Help Home : Support : Online Help : Mathematics : Discrete Mathematics : Summation and Difference Equations : SumTools : Hypergeometric Subpackage : DefiniteSum compute the definite sum DefiniteSum(T, n, k, l..u) For a specified hypergeometric term T of n and k, the DefiniteSum(T, n, k, l..u) command computes, if it exists, a closed form for the definite sum f⁡\left(n\right)=\sum _{k=l}^{u}⁡T Let r, s, u, v be integers. The DefiniteSum command computes closed forms for four types of definite sums. They are \sum _{k=r⁢n+s}^{u⁢n+v}⁡T⁡\left(n,k\right) \sum _{k=r⁢n+s}^{\mathrm{\infty }}⁡T⁡\left(n,k\right) \sum _{k=-\mathrm{\infty }}^{u⁢n+v}⁡T⁡\left(n,k\right) \sum _{k=-\mathrm{\infty }}^{\mathrm{\infty }}⁡T⁡\left(n,k\right) A closed form is defined as one that can be represented as a sum of hypergeometric terms or as a d'Alembertian term. If the input T is a definite sum of a hypergeometric term, and if the environment variable _EnvDoubleSum is set to true, then DefiniteSum tries to find a closed form for the specified definite sum of T. Note that this operation can be very expensive. For more information on the construction of the minimal Z-pair for T, see ExtendedZeilberger. Note: If you set infolevel[DefiniteSum] to 3, Maple prints diagnostics. 
\mathrm{with}⁡\left(\mathrm{SumTools}[\mathrm{Hypergeometric}]\right): T≔\frac{{\left(-1\right)}^{k}⁢\mathrm{binomial}⁡\left(2⁢n,k\right)⁢{\mathrm{binomial}⁡\left(2⁢n-k,n\right)}^{2}⁢\left(2⁢n+1\right)}{2⁢n+1+k}: \mathrm{Sum}⁡\left(T,k=0..n\right)=\mathrm{DefiniteSum}⁡\left(T,n,k,0..n\right) \textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\sum }_{\textcolor[rgb]{0,0,1}{k}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{n}}\textcolor[rgb]{0,0,1}{⁡}\frac{{\left(\textcolor[rgb]{0,0,1}{-1}\right)}^{\textcolor[rgb]{0,0,1}{k}}\textcolor[rgb]{0,0,1}{⁢}\left(\genfrac{}{}{0}{}{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{n}}{\textcolor[rgb]{0,0,1}{k}}\right)\textcolor[rgb]{0,0,1}{⁢}{\left(\genfrac{}{}{0}{}{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{k}}{\textcolor[rgb]{0,0,1}{n}}\right)}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{⁢}\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)}{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{k}}\textcolor[rgb]{0,0,1}{=}\frac{\left(\genfrac{}{}{0}{}{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{n}}{\textcolor[rgb]{0,0,1}{n}}\right)\textcolor[rgb]{0,0,1}{⁢}\left(\genfrac{}{}{0}{}{\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{n}}\right)}{\left(\genfrac{}{}{0}{}{\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{n}}\right)} T≔\frac{{\left(-1\right)}^{k}}{\left(k+1\right)⁢\mathrm{binomial}⁡\left(2⁢n,k\right)}: 
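The closed form returned in the first example above can be spot-checked outside Maple; this sketch verifies the identity with exact rational arithmetic for small n:

```python
from fractions import Fraction
from math import comb

def lhs(n):
    """Sum_{k=0}^{n} (-1)^k C(2n,k) C(2n-k,n)^2 (2n+1) / (2n+1+k)."""
    return sum(Fraction((-1)**k * comb(2*n, k) * comb(2*n - k, n)**2
                        * (2*n + 1), 2*n + 1 + k)
               for k in range(n + 1))

def rhs(n):
    """Closed form from DefiniteSum: C(2n,n) C(4n+1,n) / C(3n+1,n)."""
    return Fraction(comb(2*n, n) * comb(4*n + 1, n), comb(3*n + 1, n))

for n in range(1, 6):
    assert lhs(n) == rhs(n)
print("identity holds for n = 1..5")
```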
\mathrm{infolevel}[\mathrm{DefiniteSum}]≔3: \mathrm{Sum}⁡\left(T,k=0..2⁢n-1\right)=\mathrm{DefiniteSum}⁡\left(T,n,k,0..2⁢n-1\right) DefiniteSum: "try algorithms for definite sum" Definite: "Construct the Zeilberger recurrence" Definite: "Solve the recurrence equation ..." Definite: "Find hypergeometric solutions" Definite: "Solve the homogeneous linear recurrence equation" Definite: "Find a particular hypergeometric solution" Definite: "Find a particular d'Alembertian solution" Definite: "Construction of the general solution successful" Definite: "Solve the initial-condition problem" \textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\sum }_{\textcolor[rgb]{0,0,1}{k}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}}\textcolor[rgb]{0,0,1}{⁡}\frac{{\left(\textcolor[rgb]{0,0,1}{-1}\right)}^{\textcolor[rgb]{0,0,1}{k}}}{\left(\textcolor[rgb]{0,0,1}{k}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{⁢}\left(\genfrac{}{}{0}{}{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{n}}{\textcolor[rgb]{0,0,1}{k}}\right)}\textcolor[rgb]{0,0,1}{=}\frac{\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{⁢}\left(\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{\mathrm{\Psi }}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{\Psi }}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{2}}\right)\right)}{\textcolor[rgb]{0,0,1}{4}} 
\mathrm{infolevel}[\mathrm{DefiniteSum}]≔0: T≔\frac{{\left(-1\right)}^{k}⁢\mathrm{binomial}⁡\left(n,k\right)\cdot 1}{\mathrm{binomial}⁡\left(x+k,k\right)}: T≔\mathrm{Sum}⁡\left(\mathrm{eval}⁡\left(T,n=m\right),m=0..n\right) \textcolor[rgb]{0,0,1}{T}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\sum }_{\textcolor[rgb]{0,0,1}{m}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{n}}\textcolor[rgb]{0,0,1}{⁡}\frac{{\left(\textcolor[rgb]{0,0,1}{-1}\right)}^{\textcolor[rgb]{0,0,1}{k}}\textcolor[rgb]{0,0,1}{⁢}\left(\genfrac{}{}{0}{}{\textcolor[rgb]{0,0,1}{m}}{\textcolor[rgb]{0,0,1}{k}}\right)}{\left(\genfrac{}{}{0}{}{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{k}}{\textcolor[rgb]{0,0,1}{k}}\right)} \mathrm{_EnvDoubleSum}≔\mathrm{true} \textcolor[rgb]{0,0,1}{\mathrm{_EnvDoubleSum}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{true}} \mathrm{Sum}⁡\left(T,k=0..n\right)=\mathrm{DefiniteSum}⁡\left(T,n,k,0..n\right) \textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\sum }_{\textcolor[rgb]{0,0,1}{k}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{n}}\textcolor[rgb]{0,0,1}{⁡}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\sum }_{\textcolor[rgb]{0,0,1}{m}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{n}}\textcolor[rgb]{0,0,1}{⁡}\frac{{\left(\textcolor[rgb]{0,0,1}{-1}\right)}^{\textcolor[rgb]{0,0,1}{k}}\textcolor[rgb]{0,0,1}{⁢}\left(\genfrac{}{}{0}{}{\textcolor[rgb]{0,0,1}{m}}{\textcolor[rgb]{0,0,1}{k}}\right)}{\left(\genfrac{}{}{0}{}{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{k}}{\textcolor[rgb]{0,0,1}{k}}\right)}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{\mathrm{\Psi 
}}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{⁢}\textcolor[rgb]{0,0,1}{\mathrm{\Psi }}\textcolor[rgb]{0,0,1}{⁡}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right) Abramov, S.A., and Zima, E.V. "D'Alembertian Solutions of Inhomogeneous Linear Equations (differential, difference, and some other)." Proceedings ISSAC'96, pp. 232-240. 1996. Petkovsek, M. "Hypergeometric Solutions of Linear Recurrences with Polynomial Coefficients." Journal of Symbolic Computing. Vol. 14. (1992): 243-264. SumTools[Hypergeometric][IndefiniteSum]
Atmosphere | Free Full-Text | The Development of Volcanic Ash Cloud Layers over Hours to Days Due to Atmospheric Turbulence Layering Marcus Bursik, Qingyuan Yang, Adele Bear-Crozier, Michael Pavolonis, Andrew Tupper Center for Geohazards Studies, University at Buffalo, Buffalo, NY 14260, USA; Earth Observatory of Singapore and Asian School of the Environment, Nanyang Technological University, Singapore 639798, Singapore; NOAA Cooperative Institute for Meteorological Satellite Studies, University of Wisconsin, Madison, WI 53706, USA; Natural Hazards Consulting, Montmorency, VIC 3094, Australia Academic Editor: Masato Iguchi Volcanic ash clouds often become multilayered and thin with distance from the vent. We explore one mechanism for the development of this layered structure. We review data on the characteristics of turbulence layering in the free atmosphere, as well as examples of observations of layered clouds both near-vent and distally. We then explore dispersion models that explicitly use the observed layered structure of atmospheric turbulence. The results suggest that the alternation of turbulent and quiescent atmospheric layers provides one mechanism for the development of multilayered ash clouds by modulating vertical particle motion. The largest particles, generally >100 \mathsf{\mu }m, are little affected by turbulence. 
For particles in which both settling and turbulent diffusion are important to vertical motion, mostly in the range of 10–100 \mathsf{\mu }m, the greater turbulence intensity and more rapid turbulent diffusion in some layers cause these particles to spend more time in the more turbulent layers, leading to a layering of concentration. The results may have important implications for ash cloud forecasting and aviation safety. Keywords: turbulence; eddy diffusivity; ash layer; volcanic cloud; ash cloud; Pinatubo; aviation safety Bursik, M.; Yang, Q.; Bear-Crozier, A.; Pavolonis, M.; Tupper, A. The Development of Volcanic Ash Cloud Layers over Hours to Days Due to Atmospheric Turbulence Layering. Atmosphere 2021, 12, 285. https://doi.org/10.3390/atmos12020285
A bilinear version of Holsztyński's theorem on isometries of C(X)-spaces
Antonio Moreno Galindo, Ángel Rodríguez Palacios (2005)
We prove that, for a compact metric space X not reduced to a point, the existence of a bilinear mapping ⋄: C(X) × C(X) → C(X) satisfying ||f⋄g|| = ||f|| ||g|| for all f,g ∈ C(X) is equivalent to the uncountability of X. This is derived from a bilinear version of Holsztyński's theorem [3] on isometries of C(X)-spaces, which is also proved in the paper.
A characterization of regular averaging operators and its consequences
Spiros A. Argyros, Alexander D. Arvanitakis (2002)
We present a characterization of continuous surjections, between compact metric spaces, admitting a regular averaging operator. Among its consequences, concrete continuous surjections from the Cantor set 𝓒 to [0,1] admitting regular averaging operators are exhibited. Moreover we show that the set of this type of continuous surjections from 𝓒 to [0,1] is dense in the supremum norm in the set of all continuous surjections. The non-metrizable case is also investigated. As a consequence, we obtain... 
A class of {l}^{1}-preduals which are isomorphic to quotients of C\left({\omega }^{\omega }\right)
Ioannis Gasparis (1999)
For every countable ordinal α, we construct an {l}_{1}-predual {X}_{\alpha } which is isometric to a subspace of C\left({\omega }^{{\omega }^{{\omega }^{\alpha }+2}}\right) and isomorphic to a quotient of C\left({\omega }^{\omega }\right), yet {X}_{\alpha } is not isomorphic to a subspace of C\left({\omega }^{{\omega }^{\alpha }}\right).
A formula for the Bloch norm of a {C}^{1}-function on the unit ball of {ℂ}^{n}
For a {C}^{1}-function f on the unit ball 𝔹\subset {ℂ}^{n} we define the Bloch norm by {\parallel f\parallel }_{𝔅}=sup\parallel \stackrel{˜}{d}f\parallel , where \stackrel{˜}{d}f is the invariant derivative of f, and then show that {\parallel f\parallel }_{𝔅}=\underset{\genfrac{}{}{0pt}{}{z,w\in 𝔹}{z\ne w}}{sup}{\left(1-{|z|}^{2}\right)}^{1/2}{\left(1-{|w|}^{2}\right)}^{1/2}\frac{|f\left(z\right)-f\left(w\right)|}{|w-{P}_{w}z-{s}_{w}{Q}_{w}z|}.
G. Capriz, P. Podio-Guidugli (1982)
A generalization of the Banach-Stone theorem
Krzysztof Jarosz (1982)
A Holsztyński theorem for spaces of continuous vector-valued functions
Michael Cambern (1978)
A Kronecker theorem in functional analysis
R. Kaufman (1973)
A local Bernstein inequality on real algebraic varieties.
Charles Fefferman, Raghavan Narasimhan (1996)
A necessary and sufficient condition for existence of extremal functions of a linear functional on {H}_{1}
Ryabykh, V.G. (2007)
A Norm Preserving Complex Choquet Theorem.
Otte Hustad (1971)
A Normed Linear Space Containing the Schlicht Functions.
J.A. Pfaltzgraff, J.A. Cima (1971)
A note on nonfragmentability of Banach spaces.
Mirmostafaee, S.Alireza Kamel (2001)
Invariant (mathematics) - Wikipedia
(Redirected from Invariant (computer science))
Property that is not changed by mathematical transformations
In mathematics, an invariant is a property of a mathematical object which remains unchanged after operations or transformations of a certain type are applied to the object. For example, the total curvature integral {\textstyle \int _{M}K\,d\mu } of the Gauss curvature {\displaystyle K} of a closed surface {\displaystyle (M,g)} is unchanged when the Riemannian metric {\displaystyle g} is perturbed.
A subset S of the domain of a mapping T is an invariant set when {\displaystyle x\in S\implies T(x)\in S.}
Contents: MU puzzle; Invariant set; Unchanged under group action; Independent of presentation; Unchanged under perturbation; Invariants in computer science; Automatic invariant detection in imperative programs.
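As an illustration of the computer-science usage of the term, here is a small sketch (our own example, not from the article) of a loop invariant checked with assertions — the invariant being that, at the top of each iteration, `total` equals the sum of the elements processed so far:

```python
def checked_sum(xs):
    """Sum a list while asserting a loop invariant:
    at the top of each iteration, total == sum(xs[:i])."""
    total = 0
    for i, x in enumerate(xs):
        assert total == sum(xs[:i])  # loop invariant
        total += x
    assert total == sum(xs)  # invariant at loop exit gives the result
    return total

print(checked_sum([3, 1, 4, 1, 5]))  # 14
```

Automatic invariant detection tools infer assertions of exactly this kind from program traces.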
Cancellation criterion | Technically Exists
The cancellation criterion requires that every vote be perfectly cancelled out by some other vote. Specifically, for every valid ballot there must exist some other ballot such that adding both ballots to an election will never change the result. Formally, a voting method m passes the cancellation criterion if for every ballot b , there exists some (not necessarily distinct) ballot b' such that for any list of ballots b_1, b_2, \dots, b_n , m(b_1, b_2, \dots, b_n, b, b') = m(\alpha(b_1, b_2, \dots, b_n)) for every permutation \alpha of b_1, b_2, \dots, b_n . Note that restrictions on the domain of m are left implied. This criterion is intended to be a formalization of the equal vote criterion, which is sometimes called Frohnmayer balance in reference to its creator, Mark Frohnmayer. Because the informal equal vote criterion can be interpreted in multiple ways, it is possible that the cancellation criterion does not quite capture the intent behind it. An alternative formalization is the opposite cancellation criterion. The cancellation criterion implies the anonymity criterion and therefore the identical input options criterion, and it is implied by the opposite cancellation criterion.
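As a toy illustration of our own (not from the source, and demonstrating one case rather than proving the criterion): under plurality with two candidates, a ballot for A is cancelled by a ballot for B, so adding the pair leaves the result unchanged.

```python
from collections import Counter

def plurality(ballots):
    """Return the set of winners (ties possible) under plurality."""
    counts = Counter(ballots)
    top = max(counts.values())
    return {c for c, n in counts.items() if n == top}

def cancelling_ballot(b):
    """With two candidates, the opposite vote cancels b."""
    return "B" if b == "A" else "A"

base = ["A", "A", "B"]
b = "A"
# Adding b together with its cancelling ballot leaves the result unchanged
assert plurality(base + [b, cancelling_ballot(b)]) == plurality(base)
print("cancellation holds on this example")
```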
Robert Mercaş, Aleksi Saarela (2014)
A k-abelian cube is a word uvw, where the factors u, v, and w are either pairwise equal, or have the same multiplicities for every one of their factors of length at most k. Previously it has been shown that k-abelian cubes are avoidable over a binary alphabet for k ≥ 8. Here it is proved that this holds for k ≥ 5.
A characterization of balanced episturmian sequences.
Paquin, Geneviève, Vuillon, Laurent (2007)
A classification of periodic turtle sequences.
Holdener, J., Wagaman, A. (2003)
A combinatorial theorem on p-power-free words and an application to semigroups
Aldo de Luca, Stefano Varricchio (1990)
A criterion for non-automaticity of sequences.
Schlage-Puchta, Jan-Christoph (2003)
Pandelis Dodos, Vassilis Kanellopoulos, Konstantinos Tyros (2014)
We prove a density version of the Carlson–Simpson Theorem. Specifically we show the following. For every integer k\ge 2 and every set A of words over a k-letter alphabet satisfying \mathrm{lim}\phantom{\rule{4pt}{0ex}}{\mathrm{sup}}_{n\to \infty }|A\cap {\left[k\right]}^{n}|/{k}^{n}>0, there exist a word c over \left[k\right] and a sequence \left({w}_{n}\right) of left variable words over \left[k\right] such that the set c\cup \left\{{c}^{}{w}_{0}{\left({a}_{0}\right)}^{}..{.}^{}{w}_{n}\left({a}_{n}\right):n\in ℕ\phantom{\rule{4.0pt}{0ex}}\text{and}\phantom{\rule{4.0pt}{0ex}}{a}_{0},...,{a}_{n}\in \left[k\right]\right\} is contained in A. While the result is infinite-dimensional its proof is based on an appropriate finite and quantitative version, also obtained in the paper.
A finite word poset.
Erdős, Péter L., Sziklai, Péter, Torney, David C. 
(2001)
A general upper bound in extremal theory of sequences
Martin Klazar (1992)
We investigate the extremal function f\left(u,n\right) which, for a given finite sequence u over k symbols, is defined as the maximum length m of a sequence v={a}_{1}{a}_{2}...{a}_{m} of integers such that 1) 1\le {a}_{i}\le n, 2) {a}_{i}={a}_{j},i\ne j implies |i-j|\ge k, and 3) v contains no subsequence of the type u. We prove that f\left(u,n\right) is very near to be linear in n for any fixed u of length greater than 4, namely that f\left(u,n\right)=O\left(n{2}^{O\left(\alpha {\left(n\right)}^{|u|-4}\right)}\right), where |u| is the length of u and \alpha \left(n\right) is the inverse to the Ackermann function and goes to infinity very slowly. This result extends the estimates in [S] and [ASS] which...
A generalization of Sturmian sequences: Combinatorial structure and transcendence
Rebecca Risley, Luca Zamboni (2000)
Sébastien Ferenczi (2014)
We generalize to all interval exchanges the induction algorithm defined by Ferenczi and Zamboni for a particular class. Each interval exchange corresponds to an infinite path in a graph whose vertices are certain unions of trees we call castle forests. We use it to describe those words obtained by coding trajectories and give an explicit representation of the system by Rokhlin towers. As an application, we build the first known example of a weakly mixing interval exchange outside the hyperelliptic...
Pascal Ochem (2006)
We present an algorithm which produces, in some cases, infinite words avoiding both large fractional repetitions and a given set of finite words. We use this method to show that all the ternary patterns whose avoidability index was left open in Cassaigne's thesis are 2-avoidable. We also prove that there exist exponentially many {\frac{7}{4}}^{+}-free ternary words and {\frac{7}{5}}^{+}-free 4-ary words. Finally we give small morphisms for binary words containing only the squares 0², 1², and (01)², and for binary words avoiding... 
A hierarchy for circular codes
Giuseppe Pirillo (2008)
We first prove an extremal property of the infinite Fibonacci word f: the family of the palindromic prefixes {hn | n ≥ 6} of f is not only a circular code but “almost” a comma-free one (see Prop. 12 in Sect. 4). We also extend to a more general situation the notion of a necklace introduced for the study of trinucleotides codes on the genetic alphabet, and we present a hierarchy relating two important classes of codes, the comma-free codes and the circular ones.
Vince Bárány (2008)
We investigate automatic presentations of ω-words. Starting points of our study are the works of Rigo and Maes, Caucal, and Carton and Thomas concerning lexicographic presentation, MSO-interpretability in algebraic trees, and the decidability of the MSO theory of morphic words. Refining their techniques we observe that the lexicographic presentation of a (morphic) word is in a certain sense canonical. We then generalize our techniques to a hierarchy of classes of ω-words enjoying the above...
A length bound for binary equality words
Jana Hadravová (2011)
Let w be an equality word of two binary non-periodic morphisms g,h:{\left\{a,b\right\}}^{*}\to {\Delta }^{*} with unique overflows. It is known that if w contains at least 25 occurrences of each of the letters a and b, then it has to have one of the following special forms: up to the exchange of the letters a and b, either w={\left(ab\right)}^{i}a or w={a}^{i}{b}^{j} with gcd\left(i,j\right)=1. We will generalize the result, justify this bound and prove that it can be lowered to nine occurrences of each of the letters a and b.
Isabelle Fagnot (2006)
Among Sturmian words, some of them are morphic, i.e. fixed point of a non-identical morphism on words. Berstel and Séébold (1993) have shown that if a characteristic Sturmian word is morphic, then it can be extended by the left with one or two letters in such a way that it remains morphic and Sturmian. 
Yasutomi (1997) has proved that these were the sole possible additions and that, if we cut the first letters of such a word, it didn't remain morphic. In this paper, we give an elementary and combinatorial... A lower bound for the arithmetical complexity of Sturmian words. Frid, A.È. (2005) A morphic approach to combinatorial games : the Tribonacci case Eric Duchêne, Michel Rigo (2008) We propose a variation of Wythoff’s game on three piles of tokens, in the sense that the losing positions can be derived from the Tribonacci word instead of the Fibonacci word for the two piles game. Thanks to the corresponding exotic numeration system built on the Tribonacci sequence, deciding whether a game position is losing or not can be computed in polynomial time. A new characteristic property of the palindrome prefixes of a standard Sturmian word. Pirillo, Giuseppe (1999)
camDAI beginner strategy - Mai Finance - Tutorials
The Unofficial Guide to Mai Finance
DeFi doesn't need to be complicated. This article presents how you can enter DeFi using Mai Finance with a low-risk strategy and still earn reasonable interest. Most people are scared when they think about DeFi. There's always a risk factor to take into account when using cryptocurrencies, the volatility of this market can make one lose a lot of money, and there are so many possibilities that finding the right strategy can be quite complex. However, when you're using the correct tools, some easy and low-risk strategies can get good results, and can probably compete with more complex and risky options. In this guide, we will present an investment strategy based on leveraged stable coins, with a touch of risk for higher interest. Understanding the concept of leverage Story of an unluQi gold miner We are in the far west, during the great gold rush. Banks want to buy gold to be able to lend money to people and earn interest on these loans, and miners want to get rich by selling their gold to banks. You're a miner, but not a very lucky one. You have only found a single nugget. However, you're super clever, and instead of mining, you have another plan! You go to a bank and explain that you have gold. 
You can deposit the gold at the bank as collateral, meaning that you let the bank use that gold for people who want to use it, and the bank will give you some interest on your deposit. Also, because you lent some gold, the bank agrees to let you borrow money from it; in case you cannot repay your loan with interest, the bank will pay itself using the gold you deposited. Cool, now you are earning interest on the gold you have at the bank, and they gave you some cash. With that, you decide to go see a fellow miner and buy his gold with your cash. This lets him focus on mining, and he gets cash for the gold he found. Everybody is happy. You go back to the bank and deposit the gold you bought. This means more interest, and now the bank lets you borrow more cash against the extra gold you deposited. You have more gold exposed to the bank's interest, and some more cash. Time to go back to see if your friend has found more gold, then repeat again and again. This is what is called leverage. Now imagine that you can find a bank that lets you borrow cash at 0% interest: you have a solid money-printing machine built only from the interest you're getting. AAVE is a lending and borrowing platform where you can deposit your assets. By lending on AAVE, your deposited tokens will earn yield. For our strategy, we'll be lending DAI, a stable coin (pegged to the US dollar). On AAVE, $100 worth of DAI will potentially generate between 4% and 10% rate of return over the span of 1 year. AAVE markets on Polygon as of October 2021 When you deposit your assets on AAVE, you will receive a proof of deposit. In our example, since we are depositing DAI, we will get amDAI tokens in our wallet (Aave market DAI). You absolutely need to keep this receipt because you will need it to remove your DAI from AAVE. This is the bank that will accept your gold in our far west comparison.
Mai Finance is a lending platform that will let you deposit some assets in a vault and borrow against the value of this deposit. If we go back to the bank analogy, it would be a bank that lets you take a loan, but the loan doesn't come from what other people are lending. Instead, the bank prints money corresponding to your personal deposit, so you only borrow against yourself. Mai Finance will accept the amDAI on its yield instrument. The yield instrument is just an intermediate tool between AAVE and the vault on Mai Finance. As you can see in the AAVE screenshot, lending DAI will make you earn 8.75% in DAI (compounded), but also a 2.01% reward in MATIC. The yield instrument on Mai Finance will harvest this MATIC reward and swap it for more DAI that will be added to your DAI deposit. The APY (Annual Percentage Yield) on the Mai Finance site hence shows the aggregated interest from AAVE. Yield instrument on Mai Finance Once you have deposited your amDAI in the yield instrument, you will get some camDAI in your wallet (compounding amDAI). This is a receipt that indicates your share of the amDAI pool in the yield instrument. As a side note, because camDAI is a representation of your share of the amDAI pool, the ratio between amDAI and camDAI isn't 1:1. See this article for more details. You can now deposit your camDAI tokens in a vault on Mai Finance, and you will then be able to borrow some MAI (a stable coin pegged to $1) against your collateral. In our far west comparison, this is a second bank that will let you take a cash loan based on the amount of gold you deposited in the first bank. This second bank accepts the receipt from the first bank as a guarantee in case you cannot repay your loan. Zapper is a Swiss army knife of DeFi on Polygon.
This platform will let you farm yields in liquidity pools, lend your assets on AAVE directly from its interface, see a dashboard of your different investments, and swap some currencies for others. It is this last feature that we will use to exchange the MAI stable coin we just borrowed for more DAI. Swapping MAI for DAI In our far west example, Zapper is the gold miner that will accept your cash and sell you gold. As you can see in the screenshot above, Zapper is using Balancer as the protocol to operate the swap. Balancer is an automated portfolio manager, liquidity provider, and price sensor where you will be able to provide liquidity (and get fees from this) or swap currencies using the liquidity pools. For our guide, we will use Balancer to expose our investments to a little more volatility and get better interest. This is 100% optional though. Even though we explained what AAVE is, our strategy will use a feature from Mai Finance to automate the DAI deposit on AAVE, the amDAI deposit in the yield instrument, and the camDAI deposit in the camDAI vault. The "Zap in using DAI" button opens a popup that lets you deposit your DAI in the vault and operates the AAVE deposit under the hood. This saves a lot of time, and some gas. This will be our first step. Assuming we have $100 worth of DAI, we will deposit it on Mai Finance in a camDAI vault. This will allow us to borrow MAI against this initial deposit. The minimal CDR (Collateral to Debt Ratio) for camDAI is 110%. This means that the ratio between your collateral (the $100 worth of DAI) and the loan we're about to take needs to remain above 110%. If this CDR reaches the minimal value of 110%, it means that your collateral is losing value and your debt may become bigger than the value of your collateral. At this point, your vault can be liquidated: someone can repay a part of your debt and get a part of your collateral as compensation.
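The vault-health rule just described can be sketched in a few lines. This is an illustrative model only, not QiDAO's actual contract logic; the 110% constant comes from the camDAI vault parameters mentioned above.

```rust
// Sketch of the vault-health check described above: a vault whose
// collateral-to-debt ratio falls below the 110% minimum CDR becomes
// liquidatable. Illustrative only; not Mai Finance's contract code.

const MIN_CDR: f64 = 1.10; // 110% minimum CDR for the camDAI vault

/// Collateral-to-debt ratio; `None` when there is no debt yet.
fn cdr(collateral: f64, debt: f64) -> Option<f64> {
    if debt == 0.0 { None } else { Some(collateral / debt) }
}

fn liquidatable(collateral: f64, debt: f64) -> bool {
    matches!(cdr(collateral, debt), Some(r) if r < MIN_CDR)
}

fn main() {
    // $100 of collateral against an $86.95 MAI loan: CDR is about 115%.
    println!("liquidatable: {}", liquidatable(100.0, 86.95)); // false
    // If the collateral's value slid to $95, the CDR (~109%) would
    // drop below the 110% floor.
    println!("liquidatable: {}", liquidatable(95.0, 86.95)); // true
}
```

A vault with no debt has no meaningful CDR, hence the `Option` return: that case can never be liquidated.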
However, since both DAI and MAI are stable coins pegged to the US dollar, the risk of a big difference appearing between the 2 assets is very low, which makes this strategy fairly safe. In order to keep the liquidation risk low, we will try to stick to a CDR of 115%. In order to know how much MAI we can borrow to stay at a 115% CDR, we will use this formula: MAI_{available} = \frac{Collateral_{value} - Debt_{value} \cdot Target_{CDR}}{Target_{CDR}} With a collateral value of $100, no debt yet, and a target CDR of 115%, here's how much we can borrow: MAI_{available}=\frac{100 - 0 \cdot 1.15}{1.15}=86.95 You can then swap the MAI you borrowed for DAI and repeat. (The original article includes a table listing the collateral, debt, equivalent APY, and DAI liquidation price at each loop.) We're stopping at 17 loops but you can do more if you want to. At the end of the 17 loops, you'd get $695.423 of collateral and $595.423 of debt. This corresponds to a CDR of 116.79%, which should be safe enough to prevent liquidation. If we consider the 10.42% APY granted by the yield instrument, this would generate Interests = Collateral_{value} \cdot APY = 695.423 \cdot 10.42\% = \$72.463 If we consider that the initial investment was only $100, that's an equivalent APY of 72.463% from single-staking a stable coin! In order to get a little exposure to high-volatility assets, you can use the same loop as above but only leverage 90% of the borrowed MAI, and use the remaining 10% to buy something else. In this example, we will use the 10% to buy Qi (the native token of Mai Finance) and use the Qi-BAL pool on Balancer that currently has an APR (Annual Percentage Rate) of 107.12%. Qi-BAL pool state as of October 2021 Since we're re-injecting less DAI into the camDAI vault, we will also do fewer loops.
The setup will look like this: At the end of the 10 loops, you'd get $420.354 of DAI as collateral, $355.948 of debt, and $35.595 of Qi. The same math as in the previous case gives the following results: a final CDR of 118.09%, which should be considered safe enough to prevent liquidation; $43.800 of interest on DAI from the 10.42% APY granted by the yield instrument; $68.139 of interest on your Qi from the Balancer pool, if you assume you will be compounding the Qi and BAL rewards in the Qi-BAL pool; a total APY of 111.94%. This strategy presents more risks in the sense that the return from the Qi-BAL pool isn't guaranteed. However, you will get a little bit of exposure to Qi, which will let you participate in the QiDAO protocol. If you use the BAL reward on Mai Finance as collateral and borrow against it, you will also be able to re-invest in the camDAI vault or in the Qi-BAL pool. If you do so, you will also be entitled to borrowing rewards paid in Qi every week. With some minimal investment and low maintenance, you can get some pretty solid results simply by leveraging your DAI. Since DAI is a stable coin that has a lot of liquidity across multiple chains, the risk is relatively low for DAI to go off peg and for your vault to be liquidated. It's the kind of "set and forget" setup that can easily be a very good starting point for any DeFi beginner, and chances are this strategy will perform similarly in a bull market or in a bear market. Finally, we also explained how you can use the same strategy to take a portion of your loan and test out the many possibilities that DeFi has on Polygon. Everything presented in this tutorial is educational content made to illustrate the leverage option proposed by Mai Finance. We didn't talk about debt repayment because there are articles dedicated to it on this site, but you need to keep in mind that Mai Finance charges a 0.5% repayment fee on the borrowed amount.
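The borrow-swap-redeposit loop used in both setups can be sketched numerically. This is an idealized model (MAI swapped 1:1 for DAI, no fees or slippage), so its totals differ slightly from the worked figures quoted above:

```rust
// Idealized camDAI leverage loop: borrow MAI at the target CDR,
// swap it 1:1 for DAI, redeposit, repeat. Fees and swap slippage
// are ignored, so totals differ slightly from the article's table.

/// MAI borrowable while keeping collateral/debt at `target_cdr`.
fn mai_available(collateral: f64, debt: f64, target_cdr: f64) -> f64 {
    (collateral - debt * target_cdr) / target_cdr
}

/// Run `loops` borrow-swap-redeposit rounds; returns (collateral, debt).
fn leverage_loop(initial: f64, target_cdr: f64, loops: u32) -> (f64, f64) {
    let (mut collateral, mut debt) = (initial, 0.0);
    for _ in 0..loops {
        let borrow = mai_available(collateral, debt, target_cdr);
        debt += borrow;
        collateral += borrow; // borrowed MAI swapped for DAI, redeposited
    }
    (collateral, debt)
}

fn main() {
    let (c, d) = leverage_loop(100.0, 1.15, 17);
    println!("collateral = {c:.2}, debt = {d:.2}, CDR = {:.2}%", 100.0 * c / d);
    // The loop converges toward initial * CDR / (CDR - 1) of collateral,
    // i.e. at most $766.67 from $100 at a 115% target.
}
```

Note that with a 1:1 swap the equity (collateral minus debt) stays at the initial $100 forever; the extra yield comes entirely from the larger collateral earning the vault's APY.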
Don't hesitate to ask questions on the Discord server of the DAO community. Keep in mind that a strategy that works well at a given time may perform poorly (or make you lose money) at another time. Please stay informed, monitor the markets, keep an eye on your investments, and as always, do your own research.
Extension:Math/newFeatures - MediaWiki This page is a preview of the new Math rendering that will go live on Wikipedia soon. Registered users will be able to choose between {\displaystyle E=mc^{2}} (currently disabled via config), {\displaystyle E=mc^{2}} (currently active), and {\displaystyle E=mc^{2}} (no Mathoid server can be accessed from the Wikimedia production cluster). In a first step, MathML and SVG will be available to registered users only. If you want to test, please register an account. You don't have to enter an email address or any private information; do not use a password that you use elsewhere. Change your Math rendering settings to MathML, then go to a random page or one of the test pages listed below. I found a bug If you find any bugs, please report them at Bugzilla or write a mail to math_bugs (at) ckurs (dot) de. See also: Extension:Math/Unique Ids
76N10 Existence, uniqueness, and regularity theory 76N15 Gas dynamics, general 76N17 Viscous-inviscid interaction 76N20 Boundary-layer theory A conservative spectral element method for the approximation of compressible fluid flow Kelly Black (1999) A method to approximate the Euler equations is presented. The method is a multi-domain approximation, and a variational form of the Euler equations is found by making use of the divergence theorem. The method is similar to the Discontinuous Galerkin method of Cockburn and Shu, but the implementation is constructed through a spectral, multi-domain approach. The method is introduced and is shown to be a conservative scheme. A numerical example is given for the expanding flow around a point... Raphaël Danchin (2014) Here we investigate the Cauchy problem for the barotropic Navier-Stokes equations in {ℝ}^{n}, in the critical Besov space setting. We improve recent results as regards the uniqueness condition: initial velocities in critical Besov spaces with (not too) negative indices generate a unique local solution. Apart from (critical) regularity, the initial density just has to be bounded away from 0 and tend to some positive constant at infinity. Density-dependent viscosity coefficients may be considered. Using... A note on critical times of 2×2 quasilinear hyperbolic systems Ivan Straškraba (1984) In this paper the exact formula for the critical time of generating a discontinuity (shock wave) in a solution of a 2×2 quasilinear hyperbolic system is derived. The applicability of the formula in engineering practice is shown on the one-dimensional equations of isentropic non-viscous compressible fluid flow. A note on the relaxation-time limit of the isothermal Euler equations.
Xu, Jiang, Fang, Daoyuan (2007) A numerical method for unsteady flows Nicola Botta, Rolf Jeltsch (1995) A high resolution finite volume method for the computation of unsteady solutions of the Euler equations in two space dimensions is presented and validated. The scheme is of Godunov type. The first order part of the flux function uses the approximate Riemann problem solver of Pandolfi, and here a new derivation of this solver is presented. This construction paves the way to understanding the conditions under which the scheme satisfies an entropy condition. The extension to higher order is done by applying... A regularity result for a solid-fluid system associated to the compressible Navier-Stokes equations M. Boulakia, S. Guerrero (2009) A remark on the smoothness of bounded regions filled with a steady compressible and isentropic fluid Sébastien Novo, Antonín Novotný (2005) For convenient adiabatic constants, existence of weak solutions to the steady compressible Navier-Stokes equations in the isentropic regime in smooth bounded domains is well known. Here we present a way to prove the same result when the bounded domains considered are Lipschitz. A uniqueness result for a model for mixtures in the absence of external forces and interaction momentum Jens Frehse, Sonja Goj, Josef Málek (2005) We consider a continuum model describing steady flows of a miscible mixture of two fluids. The densities \rho_i of the fluids and their velocity fields u^{(i)} are prescribed at infinity: \rho_i|_{\infty} = \rho_{i\infty} > 0, u^{(i)}|_{\infty} = 0. Neglecting the convective terms, we have proved earlier that weak solutions to such a reduced system exist. Here we establish a uniqueness type result: in the absence of the external forces and interaction terms, there is only one such solution, namely \rho_i \equiv \rho_{i\infty}, u^{(i)} \equiv 0, i = 1, 2. P. Chévrier, H.
Galley (1993) About steady transport equation I – L^p-approach in domains with smooth boundaries Antonín Novotný (1996) We investigate the steady transport equation \lambda z + w \cdot \nabla z + a z = f, \quad \lambda > 0, in various domains (bounded or unbounded) with smooth noncompact boundaries. The functions w, a are supposed to be small in appropriate norms. The solution is studied in spaces of Sobolev type (classical Sobolev spaces, Sobolev spaces with weights, homogeneous Sobolev spaces, dual spaces to Sobolev spaces). Particular stress is put on the problem of extending the results to vector fields w, a that are as little regular as possible (conserving the requirement of... About steady transport equation. II: Schauder estimates in domains with smooth boundaries. Novotny, Antonin (1997) About the Resolvent of an Operator from Fluid Dynamics. Gerhard Ströhmer (1987) Abrupt and smooth separation of free boundaries in flow problems Hans Wilhelm Alt, Luis A. Caffarelli, Avner Friedman (1985) Acoustic-gravity waves in a viscous and thermally conducting isothermal atmosphere. III: For arbitrary Prandtl number. Alkahby, Hadi Yahya (1997) Acoustic-gravity waves in a viscous and thermally conducting isothermal atmosphere. I: For large Prandtl number. Alkahby, H.Y. (1995) Acoustic-gravity waves in a viscous and thermally conducting isothermal atmosphere. II: For small Prandtl number. Alkahby, Hadi Y. (1995) Didier Bresch, Marguerite Gisclon, Chi-Kun Lin (2005) The purpose of this work is to study an example of low Mach (Froude) number limit of compressible flows when the initial density (height) is almost equal to a function depending on x. This allows us to connect the viscous shallow water equations and the viscous lake equations. More precisely, we study this asymptotic with well-prepared data in a periodic domain, looking at the influence of the variability of the depth.
The result concerns weak solutions. In a second part, we discuss the general low... A. Fettah, T. Gallouët, H. Lakehal (2014) In this paper, we prove the existence of a solution for a quite general stationary compressible Stokes problem including, in particular, gravity effects. The equation of state gives the pressure as an increasing superlinear function of the density. This existence result is obtained by passing to the limit on the solution of a viscous approximation of the continuity equation. Laura Gastaldo, Raphaèle Herbin, Jean-Claude Latché (2010) We present in this paper a pressure correction scheme for the drift-flux model combining finite element and finite volume discretizations, which is shown to enjoy essential stability features of the continuous problem: the scheme is conservative, the unknowns are kept within their physical bounds and, in the homogeneous case (i.e. when the drift velocity vanishes), the discrete entropy of the system decreases; in addition, when using for the drift velocity a closure law which takes the form of...
52C17 Packing and covering in n dimensions 52C05 Lattices and convex bodies in 2 dimensions 52C07 Lattices and convex bodies in n dimensions 52C10 Erdős problems and related topics of discrete geometry 52C15 Packing and covering in 2 dimensions 52C17 Packing and covering in n dimensions 52C20 Tilings in 2 dimensions 52C22 Tilings in n dimensions 52C26 Circle packings and discrete conformal geometry 52C35 Arrangements of points, flats, hyperplanes 52C40 Oriented matroids 52C45 Combinatorial complexity of geometric structures A Criterion for the Affine Equivalence of Cell Complexes in R^d and Convex Polyhedra in R^{d+1}. F. Aurenhammer (1987) A lower bound on packing density. J.A. Rush (1989) A New Bound on the Local Density of Sphere Packings. D.J. Muder (1993) A new sphere packing in 20 dimensions. Alexander Vardy (1995) A note on the realization of distances within sets in euclidean space. D.G. Larman (1978) A note on the ten-neighbour packings of equal balls K. Bezdek, A. Bezdek (1988) A Reduction of Lattice Tiling by Translates of a Cubical Cluster. S. Szabó (1987) A remark about the directed line-elements in the plane KÁROLY BEZDEK (1986) A Remark to the Paper of H. Hadwiger "Überdeckung des Raumes durch translationsgleiche Punktmengen und Nachbarnzahl". B. Uhrin (1987) A Stability Property of the Densest Circle Packing. Imre Bárány, N.P. Dolbilin (1988) Milica Stojanović (2005) An alternative proof of Petty's theorem on equilateral sets Tomasz Kobos (2013) The main goal of this paper is to provide an alternative proof of the following theorem of Petty: in a normed space of dimension at least three, every 3-element equilateral set can be extended to a 4-element equilateral set. Our approach is based on the result of Kramer and Németh about inscribing a simplex into a convex body. To prove the theorem of Petty, we shall also establish that for any three points in a normed plane, forming an equilateral triangle of side p, there exists a fourth point,... An On-Line Potato-Sack Theorem. M. Lassak, J.
Zhang (1991) Asymptotic volume formulae and hyperbolic ball packing. Marshall, T.H. (1999) Ausbohrung von Rhomben B. WEISSBACH (1977) Blichfeldt's density bound revisited. G. Fejes Tóth, W. Kuperberg (1993) Circle Packings and Polyhedral Surfaces. R.J. Gardner, M. Kallay (1992) Coating by cubes. Bezdek, K., Hausel, T. (1994) Compact packings in the Euclidean space K. Bezdek (1987)
Differential geometry in Rust During the last few weeks I've been working on a library that would let the user do some differential-geometric calculations in Rust. By differential geometry I mean mostly the tensor calculus in curved spaces or space-times. I've already created something like that in C++, but I wanted to try and use some of the Rust features to create an improved version. What could Rust do better? The most convenient representation of tensors for doing calculations is in the form of arrays of numbers. The problem is that representing a tensor numerically requires choosing a coordinate system. Various operations, like for example addition of two tensors, only make sense when the tensors involved are expressed in the same coordinate system. The only possibility of enforcing this rule in C++ was to encode the coordinate system as a property of the tensor object and check for compatibility in the operator code. This way any errors will be detected at runtime. OK, so the errors were detectable, so what could be done better? Well, for example, the tensors expressed in different coordinate systems could not only have a different value of some property, but be objects of different types. This way the error can be detected at compile time, before the program is even run. It wouldn't be very practical in C++, but the Rust type system allows us to do it quite elegantly. EDIT: It has been brought to my attention that C++'s templates also allow for this kind of thing. Nevertheless, doing it in Rust was a fun experiment :) Generic arrays in Rust Recently, I decided to try and "translate" the black hole simulator into Rust as an exercise. Initially I wanted to create a library implementing differential geometry: tensor calculus, the metric, etc. I quickly encountered a problem. Rust and arrays Tensors are objects with some constant number of coordinates, depending on the dimension of the space they function in.
For example, in an n-dimensional space, vectors have n coordinates, rank-2 tensors have n^2, etc. Arrays are perfect for representing such objects. Rust handles arrays without problems. An array of N elements of type T is written in Rust as [T; N]. Everything would be fine, except for one tiny detail - I would like my code not to depend on the dimension of the space. It is possible to define so-called generic types in Rust. They are datatypes that have internal details parametrized by some other type. For example, we could easily create a generic 3-dimensional vector:

struct Vector3<T> {
    coords: [T; 3]
}

What if we wanted to create an n-dimensional vector, though?

struct Vector<T, N> {
    coords: [T; N] // won't work
}

Nope. You can't express an integer type parameter in Rust. It looked hopeless, but I accidentally stumbled upon some code that suggested the existence of a solution.
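For readers on a newer toolchain: since Rust 1.51, const generics express exactly this kind of integer parameter, and a phantom marker type can additionally encode the coordinate system so that mixing charts fails to compile. A sketch of that combination (the `Cartesian`/`Spherical` markers are illustrative names, not from any particular library):

```rust
// Const generics put the dimension N into the type, and a phantom
// `Chart` marker puts the coordinate system there too: vectors in
// different charts (or dimensions) are different types, so adding
// them is a compile-time error. Marker names are illustrative.
use std::marker::PhantomData;
use std::ops::Add;

struct Cartesian;
#[allow(dead_code)]
struct Spherical;

struct Vector<T, Chart, const N: usize> {
    coords: [T; N],
    _chart: PhantomData<Chart>,
}

impl<T: Copy + Default + Add<Output = T>, C, const N: usize> Add for Vector<T, C, N> {
    type Output = Self;
    fn add(self, rhs: Self) -> Self {
        let mut coords = [T::default(); N];
        for i in 0..N {
            coords[i] = self.coords[i] + rhs.coords[i];
        }
        Self { coords, _chart: PhantomData }
    }
}

fn main() {
    let a = Vector::<f64, Cartesian, 3> { coords: [1.0, 2.0, 3.0], _chart: PhantomData };
    let b = Vector::<f64, Cartesian, 3> { coords: [4.0, 5.0, 6.0], _chart: PhantomData };
    let c = a + b; // same chart, same dimension: type-checks
    println!("{:?}", c.coords); // [5.0, 7.0, 9.0]
    // Adding a Vector<f64, Spherical, 3> or a Vector<f64, Cartesian, 2>
    // to `c` would not compile.
}
```

The `PhantomData<Chart>` field costs nothing at runtime; it exists purely so the type checker can tell the charts apart.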
Decagon In geometry, a decagon (from the Greek δέκα déka and γωνία gonía, "ten angles") is a ten-sided polygon or 10-gon.[1] The total sum of the interior angles of a simple decagon is 1440°. A self-intersecting regular decagon is known as a decagram. Regular decagon {\displaystyle A={\frac {5}{2}}a^{2}\cot \left({\frac {\pi }{10}}\right)={\frac {5}{2}}a^{2}{\sqrt {5+2{\sqrt {5}}}}\simeq 7.694208843\,a^{2}} {\displaystyle A=10\tan \left({\frac {\pi }{10}}\right)r^{2}=2r^{2}{\sqrt {5\left(5-2{\sqrt {5}}\right)}}\simeq 3.249196962\,r^{2}} {\displaystyle A=5\sin \left({\frac {\pi }{5}}\right)R^{2}={\frac {5}{2}}R^{2}{\sqrt {\frac {5-{\sqrt {5}}}{2}}}\simeq 2.938926261\,R^{2}} {\displaystyle A=2.5da} where d is the distance between parallel sides, or the height when the decagon stands on one side as base, or the diameter of the decagon's inscribed circle. By simple trigonometry, {\displaystyle d=2a\left(\cos {\tfrac {3\pi }{10}}+\cos {\tfrac {\pi }{10}}\right),} {\displaystyle d=a{\sqrt {5+2{\sqrt {5}}}}.} Sides A regular decagon has 10 sides and is equilateral. It has 35 diagonals. Nonconvex regular decagon This tiling by golden triangles, together with a regular pentagon, contains a stellation of the regular decagon, whose Schläfli symbol is {10/3}. The length ratio of the two unequal edges of a golden triangle is the golden ratio, denoted by Φ, or its multiplicative inverse: {\displaystyle \Phi -1={\frac {1}{\Phi }}=2\,\cos 72\,^{\circ }={\frac {1}{\,2\,\cos 36\,^{\circ }}}={\frac {\,{\sqrt {5}}-1\,}{2}}{\text{.}}} So we can get the properties of a regular decagonal star through a tiling by golden triangles that fills this star polygon.
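The three area formulas above (in terms of the side a, the inradius r, and the circumradius R) can be cross-checked numerically using the standard relations r = a/(2 tan(π/10)) and R = a/(2 sin(π/10)); a standalone sketch:

```rust
// Numerical check of the regular decagon area formulas quoted above,
// for side length a, inradius r, and circumradius R.
fn area_from_side(a: f64) -> f64 {
    2.5 * a * a * (5.0 + 2.0 * 5.0_f64.sqrt()).sqrt()
}
fn area_from_inradius(r: f64) -> f64 {
    2.0 * r * r * (5.0 * (5.0 - 2.0 * 5.0_f64.sqrt())).sqrt()
}
fn area_from_circumradius(cr: f64) -> f64 {
    2.5 * cr * cr * ((5.0 - 5.0_f64.sqrt()) / 2.0).sqrt()
}

fn main() {
    let a = 1.0;
    let pi = std::f64::consts::PI;
    // Relate r and R to the side: r = a / (2 tan(pi/10)), R = a / (2 sin(pi/10)).
    let r = a / (2.0 * (pi / 10.0).tan());
    let cr = a / (2.0 * (pi / 10.0).sin());
    println!("{:.9}", area_from_side(a)); // 7.694208843, matching the constant above
    println!("{:.9}", area_from_inradius(r));
    println!("{:.9}", area_from_circumradius(cr));
}
```

All three prints agree, confirming that the closed-form constants 7.694…, 3.249…, and 2.938… describe the same decagon.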
The golden ratio in the decagon {\displaystyle {\frac {\overline {AM}}{\overline {MH}}}={\frac {\overline {AH}}{\overline {AM}}}={\frac {1+{\sqrt {5}}}{2}}=\Phi \approx 1.618{\text{.}}} {\displaystyle {\frac {\overline {E_{1}E_{10}}}{\overline {E_{1}F}}}={\frac {\overline {E_{10}F}}{\overline {E_{1}E_{10}}}}={\frac {R}{a}}={\frac {1+{\sqrt {5}}}{2}}=\Phi \approx 1.618{\text{.}}} Symmetries of a regular decagon. Vertices are colored by their symmetry positions. Blue mirrors are drawn through vertices, and purple mirrors are drawn through edges. Gyration orders are given in the center. The highest-symmetry irregular decagons are d10, an isogonal decagon constructed by five mirrors which can alternate long and short edges, and p10, an isotoxal decagon constructed with equal edge lengths but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular decagon. 10-cube projection Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m-1)/2 parallelograms.[8] In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the regular decagon, m=5, and it can be divided into 10 rhombs, with examples shown below. This decomposition can be seen as 10 of 80 faces in a Petrie polygon projection plane of the 5-cube. A dissection is based on 10 of 30 faces of the rhombic triacontahedron. The list OEIS: A006245 defines the number of solutions as 62, with 2 orientations for the first symmetric form, and 10 orientations for the other 6. Skew decagon ^ a b Sidebotham, Thomas H. (2003), The A to Z of Mathematics: A Basic Guide, John Wiley & Sons, p. 146, ISBN 9780471461630. ^ The Elements of Plane and Spherical Trigonometry, Society for Promoting Christian Knowledge, 1850, p. 59.
Note that this source uses a as the edge length and gives the argument of the cotangent as an angle in degrees rather than in radians. ^ Ludlow, Henry H. (1904), Geometric Construction of the Regular Decagon and Pentagon Inscribed in a Circle, The Open Court Publishing Co. ^ a b Green, Henry (1861), Euclid's Plane Geometry, Books III–VI, Practically Applied, or Gradations in Euclid, Part II, London: Simpkin, Marshall, & Co., p. 116. Retrieved 10 February 2016. ^ a b Köller, Jürgen (2005), Regelmäßiges Zehneck, → 3. Section "Formeln, Ist die Seite a gegeben ..." (in German). Retrieved 10 February 2016. ^ Coxeter, Regular Polytopes, 12.4 Petrie polygon, pp. 223–226.
Physics - Weak Lensing Becomes a High-Precision Survey Science Physics Department, Brookhaven National Laboratory, Upton, NY 11973, USA August 27, 2018 • Physics 11, 85 Analyzing its first year of data, the Dark Energy Survey has demonstrated that weak lensing can probe cosmological parameters with a precision comparable to cosmic microwave background observations. Over the last decades, scientists have built a paradigm cosmological model, based on the premises of general relativity, known as the ΛCDM model. This model has successfully explained many aspects of the Universe’s evolution from a homogeneous primeval soup to the inhomogeneous Universe of planets, stars, and galaxies that we see today. The ΛCDM model is, however, at odds with the minimal standard model of particle physics, which cannot explain the two main ingredients of ΛCDM cosmology: the cold dark matter (CDM) that represents approximately 85% of all matter in the Universe and the cosmological constant (Λ), or dark energy, that drives the Universe’s accelerated expansion. R. Hahn/Fermilab Figure 1: The CCD imager of the Dark Energy Camera (DECam) used by the Dark Energy Survey. DECam is mounted on the Victor M. Blanco 4-m-aperture telescope in the Chilean Andes. One potential way to sort out the nature of dark matter and dark energy exploits an effect called weak gravitational lensing—a subtle bending of light induced by the presence of matter. Measurements of this effect, however, have proven challenging and so far have delivered less information than many physicists had hoped for. In a series of articles [1], the Dark Energy Survey (DES) now reports remarkable progress in the field. Analyzing data from its first year of operation, the DES has combined weak lensing and galaxy clustering observations to derive new constraints on cosmological parameters.
The results suggest that we have reached an era in which weak gravitational lensing has become a systematic, high-precision technique for probing the Universe, on par with other well-established techniques, such as those based on observations of the cosmic microwave background (CMB) and on measurements of baryonic acoustic oscillations (BAO). T. M. C. Abbott et al., Phys. Rev. D (2018) Figure 2: Constraints on cosmological parameters as determined by the DES (blue), Planck (green), and by the combination of DES and Planck (red). Within the measurements’ accuracy, the Planck and DES constraints are consistent with each other ( {\Omega }_{m} is the matter density divided by the total energy density, and {S}_{8} is a parameter related to the amplitude of density fluctuations). For each color, the contour plots represent 68% and 95% confidence levels. Gravitational lensing is a consequence of the curvature of spacetime induced by mass. As light travels toward Earth from distant galaxies, it passes through clumps of matter that distort the light’s path. If lensing is strong, this distortion can dramatically stretch the images of the galaxies into long arcs. But in most situations, lensing is weak and causes subtler deformations—think of the distortions of images printed on a T-shirt that’s slightly stretched. Galaxies in the same part of the sky, whose light travels a similar path to us, are subjected to similar stretching, making them appear “aligned”—an effect known as cosmic shear. By quantifying the alignment of “background” galaxies, weak-lensing measurements derive information on the “foreground” mass that causes the distortions.
Since dark matter constitutes the majority of matter, weak gravitational lensing largely probes dark matter. The potential of the technique has been known for decades [2]. Initially, however, researchers didn’t realize how difficult it would be to measure the tiny signal due to weak lensing and to isolate it from myriad other effects that cause similar distortions. Most importantly, for ground-based observations, the light reaching the telescope goes through Earth’s atmosphere. Atmospheric conditions, optical imperfections of the telescope, or simply inadequate data reduction techniques can blur or distort the images of individual objects. If such effects are coherent across the telescope’s field of view, they can lead to subtle alignments that can be misinterpreted as consequences of weak lensing. Moreover, most galaxies are elliptical to start with, and these ellipticities can be aligned for astrophysical reasons unrelated to weak lensing. Despite these difficulties, several pioneering efforts established the feasibility of weak gravitational lensing. In 2000, several groups reported the first detections of cosmic shear [3]. These were followed by 15 years of important advances, such as those obtained using data from the Sloan Digital Sky Survey [4], the Kilo-Degree Survey [5], and the Hyper Suprime-Cam Subaru Strategic Survey [6]. However, the new DES results mark an important milestone in terms of accuracy and breadth of analysis. Two main factors enabled these results. The first was the use of the Dark Energy Camera (DECam), a sensitive detector, custom-designed for weak-lensing measurements (Fig. 1), which was mounted on the 4-m-aperture Victor M. Blanco telescope in Chile, where DES has a generous allocation of observing time. The second factor was the size of the collaboration—more on the scale of a particle-physics collaboration than an astrophysics one. This resource allowed DES to dedicate unprecedented attention to data analysis. 
For example, two independent weak-lensing “pipelines” performed an important cross check of the results [7]. As reported in the latest crop of DES papers, the collaboration mapped out the dark matter in a patch of sky spanning 1321 square degrees, or about 3% of the full sky. They performed this mapping using two independent approaches. The first provided a direct probe of dark matter by measuring the cosmic shear caused by foreground dark matter on 26 million background galaxies. The second approach entailed measuring the correlation between galaxy positions and cosmic shear and the auto-correlation of galaxy positions. Comparing these correlations allowed the underlying dark matter distribution to be inferred. The two approaches led to the same results, providing a compelling consistency check on the weak-lensing dark matter map. The collaboration used the weak-lensing result to derive constraints on a number of cosmological parameters. In particular, they combined their data with data from other cosmological probes (such as CMB, BAO, and Type Ia supernovae) to derive the tightest constraints to date on the dark energy equation-of-state parameter (w), defined as the ratio of the pressure of the dark energy to its density. This parameter is related to the rate at which the density of dark energy evolves. The data indicate that w is equal to −1, within an experimental accuracy of a few percentage points. Such a value supports a picture in which dark energy is unchanging and equal to the inert energy of the vacuum—Einstein’s cosmological constant—rather than a more dynamical component, which many theorists had hoped for. One of the most important aspects of the DES reports is the comparison with the most recent CMB measurements from the Planck satellite mission [8]. The CMB is the radiation that was left over when light decoupled from matter around 380,000 years after the big bang, so Planck probes the Universe at high redshift (z ≈ 1100).
The DES data, on the other hand, concern much more recent times, at redshifts between 0.2 and 1.3. To check whether Planck and DES are consistent, the CMB-constrained parameters need to be extrapolated across cosmic history (from z ≈ 1100 to z ≈ 0) using the standard cosmological model. Within the experimental uncertainties, this extrapolation shows good agreement (Fig. 2), thus confirming the standard cosmological model’s predictive power across cosmic ages. While this success has to be cherished, everyone also silently hopes that experimenters will eventually find some breaches in the ΛCDM model, which could provide fresh hints as to what dark matter and dark energy are. The next few years will certainly be exciting for the field. DES already has five years of data in the bag and will soon release the analysis of their three-year results. Ultimately, DES will map 5000 square degrees, or one eighth of the full sky. The DES results are also very encouraging in view of the Large Synoptic Survey Telescope (LSST)—a telescope derived from the early concept of a “dark matter telescope” proposed in 1996. LSST should become operational in 2022, and it will survey almost the entire southern sky. Within this context, we can be hopeful that weak-lensing measurements will provide important insights into the most pressing open questions of cosmology. T. M. C. Abbott et al., “Dark Energy Survey year 1 results: Cosmological constraints from galaxy clustering and weak lensing,” Phys. Rev. D 98, 043526 (2018); J. Elvin-Poole et al., “Dark Energy Survey year 1 results: Galaxy clustering for combined probes,” Phys. Rev. D 98, 042006 (2018); J. Prat et al., “Dark Energy Survey year 1 results: Galaxy-galaxy lensing,” Phys. Rev. D 98, 042005 (2018); M. A. Troxel et al., “Dark Energy Survey Year 1 results: Cosmological constraints from cosmic shear,” Phys. Rev. D 98, 043528 (2018). A. Albrecht et al., “Report of the Dark Energy Task Force,” arXiv:astro-ph/0609591. D. M.
Wittman et al., “Detection of weak gravitational lensing distortions of distant galaxies by cosmic dark matter at large scales,” Nature 405, 143 (2000); D. J. Bacon et al., “Detection of weak gravitational lensing by large-scale structure,” Mon. Not. R. Astron. Soc. 318, 625 (2000); N. Kaiser, G. Wilson, and G. A. Luppino, “Large-Scale Cosmic Shear Measurements,” arXiv:astro-ph/0003338; L. Van Waerbeke et al., “Detection of correlated galaxy ellipticities from CFHT data: First evidence for gravitational lensing by large-scale structures,” Astron. Astrophys. 358, 30 (2000). H. Lin et al., “The SDSS Co-add: Cosmic shear measurement,” Astrophys. J. 761, 15 (2012). F. Köhlinger et al., “KiDS-450: the tomographic weak lensing power spectrum and constraints on cosmological parameters,” Mon. Not. R. Astron. Soc. 471, 4412 (2017). R. Mandelbaum et al., “The first-year shear catalog of the Subaru Hyper Suprime-Cam Subaru Strategic Program Survey,” Publ. Astron. Soc. Jpn. 70, S25 (2017). It’s worth mentioning that the data analysis used “blinding,” a protocol in which the people carrying out the analysis cannot see the final results, so as to eliminate possible biases towards specific results. N. Aghanim et al. (Planck Collaboration), “Planck 2018 results. VI. Cosmological parameters,” arXiv:1807.06209. Anže Slosar is a tenured scientist and a group leader for cosmology and astrophysics at the Brookhaven National Laboratory (BNL). He received his Ph.D. in 2003 from the University of Cambridge, followed by postdoctoral stints at the Ljubljana University, the University of Oxford, and at the Berkeley Center for Cosmological Physics. He joined BNL in 2009. In 2011 he received a Department of Energy Early Career Award to carry out research on the Lyman-alpha forest.
Well-posedness of distribution dependent SDEs with singular drifts May 2021 Michael Röckner,1,2 Xicheng Zhang3 1Fakultät für Mathematik, Universität Bielefeld, 33615 Bielefeld, Germany 2Academy of Mathematics and Systems Science, Chinese Academy of Sciences (CAS), Beijing 100190, P.R. China 3School of Mathematics and Statistics, Wuhan University, Wuhan, Hubei 430072, P.R. China Consider the following distribution dependent SDE: \mathrm{d}X_t = \sigma_t(X_t, \mu_{X_t})\,\mathrm{d}W_t + b_t(X_t, \mu_{X_t})\,\mathrm{d}t, where \mu_{X_t} stands for the distribution of X_t. In this paper, for non-degenerate \sigma, we show the strong well-posedness of the above SDE under some integrability assumptions on b and \sigma in the spatial variable together with Lipschitz continuity in \mu. In particular, we extend the results of Krylov–Röckner (Probab. Theory Related Fields 131 (2005) 154–196) to the distribution dependent case. Michael Röckner and Xicheng Zhang, “Well-posedness of distribution dependent SDEs with singular drifts,” Bernoulli 27(2), 1131–1158, May 2021. https://doi.org/10.3150/20-BEJ1268 Received: 1 October 2019; Revised: 1 April 2020; Published: May 2021. Keywords: distribution dependent SDEs, McKean–Vlasov system, singular drifts, superposition principle, Zvonkin’s transformation
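The drift’s dependence on the law \mu_{X_t} can be made concrete through the standard interacting-particle approximation, in which the empirical measure of N particles stands in for \mu. The following Euler–Maruyama sketch is purely illustrative (it is not the paper’s method): the coefficients \sigma = 0.5 (constant, non-degenerate) and b(x, \mu) = -(x - mean(\mu)) are toy choices, with the drift depending on \mu only through its mean, a Lipschitz dependence.

```python
import math
import random

def simulate_mckean_vlasov(n_particles=200, n_steps=100, dt=0.01, seed=0):
    # Toy coefficients (assumptions for illustration, not the paper's):
    # sigma_t(x, mu) = 0.5 and b_t(x, mu) = -(x - mean(mu)).
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    sqdt = math.sqrt(dt)
    for _ in range(n_steps):
        mean = sum(xs) / n_particles  # empirical stand-in for mu_{X_t}
        xs = [x - (x - mean) * dt + 0.5 * sqdt * rng.gauss(0.0, 1.0)
              for x in xs]
    return xs
```

As the number of particles grows, the empirical measure converges to the law of the limiting McKean–Vlasov dynamics (propagation of chaos), which is what makes such particle schemes a useful companion to well-posedness results.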
Hydro_RNSID Nik Stergioulas, Roberto De Pietri, Frank Löffler Hydro_RNSID - rotating relativistic neutron stars. This thorn generates neutron star initial data for the GRHydro code. As with the Einstein Toolkit code itself, please feel free to add, alter, or extend any part of this code. However, please keep the documentation up to date (even, or especially, if it’s just to say what doesn’t work). This thorn effectively takes the public domain code RNSID written by Nik Stergioulas and interpolates the output onto a Cartesian grid. This porting is based on an initial porting to Whisky by Luca Baiotti and Ian Hawke and has been adapted to GRHydro and the Einstein Toolkit. 2 RNSID RNSID, or rotating neutron star initial data, is a code based on the Komatsu-Eriguchi-Hachisu (KEH) method for constructing models of rotating neutron stars. It allows for polytropic or tabulated equations of state. For more details of how the code works, see [3], [4] (appendix A is particularly helpful) or especially [5], which is the most up to date and lists other possible methods of constructing rotating neutron star initial data. In short, Hydro_RNSID is a thorn that generates initial models for rotating isolated stars described by a zero-temperature tabulated equation of state or an isentropic polytropic EOS. (The thorns “Hydro_Base” and “GRHydro” are the two prerequisites for activating the thorn to generate initial data.) Models are generated by specifying the central baryonic density (rho_central), the oblateness of the star (axes_ratio), and the rotational profile (rotation_type).
Currently two kinds of rotational profiles are implemented: “uniform” for uniformly rotating stars and “diff” for differentially rotating stars, described by the j-law profile (parametrized by the parameter A_diff = \hat{A}): \Omega_c - \Omega = \frac{1}{\hat{A}^2 r_e^2}\left[\frac{(\Omega-\omega)\, r^2 \sin^2\theta\, e^{-2\nu}}{1-(\Omega-\omega)^2\, r^2 \sin^2\theta\, e^{-2\nu}}\right], where r_e is the equatorial radius of the star, \Omega = u^{\varphi}/u^{0} is the rotational angular velocity, and \Omega_c is \Omega at the center of the star. 3 Parameters of Thorn Here one can find definitions of the main parameters that determine the behaviour of the thorn. The activation of the RNSID initial data is achieved by the following line: ActiveThorns="Hydro_Base GRHydro Hydro_RNSID" ##### Setting for activating the ID ADMBase::initial_data  = "hydro_rnsid" ADMBase::initial_lapse = "hydro_rnsid" ADMBase::initial_shift = "hydro_rnsid" The corresponding section of the parameter file is: ##### Basic Setting Hydro_rnsid::rho_central   = 1.28e-3  # central baryon density (G=c=1) Hydro_rnsid::axes_ratio    = 1        # radial/equatorial axes ratio Hydro_rnsid::rotation_type = diff     # uniform = uniform rotation Hydro_rnsid::A_diff        = 1        # Parameter of the diff rot-law. Hydro_rnsid::accuracy      = 1e-10    # accuracy goal for convergence Then a section for setting the Equation of State (EOS) should be added. If this section is missing, a “poly” EOS will be used with default parameters.
The two possibilities are: Isentropic Polytrope: In this case the base settings for the initial data are specified by giving the following parameters: ##### Setting for polytrope Hydro_rnsid::eos_type  = "poly" Hydro_rnsid::RNS_Gamma = 2.0 Hydro_rnsid::RNS_K     = 165 They correspond to the following implementation of the EOS, which is consistent with the 1st law of thermodynamics: p = K\,\rho^{\Gamma} (2), \quad \epsilon = \frac{K}{\Gamma-1}\,\rho^{\Gamma-1} (3), with the above choice of parameters corresponding to K = 165 (in units where G = c = M_{\odot} = 1) and \Gamma = 2. Tabulated EOS: In this case the (cold) EOS used to generate the initial data is read from a file: ##### Setting for tabulated EOS Hydro_rnsid::eos_type  = "tab" Hydro_rnsid::eos_file  = "full_path_name_of_the tabulated_EOS_file" The syntax of the tabulated file is the same as for the original RNSID program and assumes that all quantities are expressed in the cgs system of units. The first line contains the number of tabulated values (N), while the next N lines contain the values: the energy density e = \rho(1+\epsilon), the pressure p, the enthalpy \log h = c^2 \log_e\left((e+p)/\rho\right), and the baryon density \rho. An additional section allows one to start initial data from a previously generated binary file or to save the data generated at this time. Usually the best way to proceed is to specify where the initial data file should be located. ##### Setting for recover and saving of 2d models Hydro_rnsid::save_2Dmodel    = "yes"   # other possibility is no (default) Hydro_rnsid::recover_2Dmodel = "yes"   # other possibility is no (default) Hydro_rnsid::model2D_file    = "full_file_name" For examples of initial data generated using RNSID and their evolutions, see [1, 2]. In the par directory, examples are provided as a perl file that produces the corresponding Cactus par files. These examples correspond to the evolutions described in [1, 2].
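Equations (2)–(3) are straightforward to evaluate directly. The short sketch below is not part of the thorn; it simply computes the polytropic pressure and specific internal energy using the documented example values K = 165 and Γ = 2 (in G = c = M_⊙ = 1 units) at the example central density:

```python
def polytrope(rho, K=165.0, Gamma=2.0):
    """Isentropic polytrope: p = K*rho^Gamma, eps = K/(Gamma-1)*rho^(Gamma-1)."""
    p = K * rho ** Gamma
    eps = K / (Gamma - 1.0) * rho ** (Gamma - 1.0)
    return p, eps

# Central density taken from the example parameter file (rho_central).
p_c, eps_c = polytrope(1.28e-3)
```

For Γ = 2 the specific internal energy reduces to eps = K·rho, which is a convenient consistency check when validating a tabulated EOS against its polytropic limit.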
4 Utility program Together with the thorn, we distribute a self-executable version of the initial data routine RNSID that accepts the same parameters as the thorn and is able to create a binary file of the 2d initial data that can be directly imported into the evolution code. Moreover, the program RNS_readID is provided, which reads a 2d initial data file and produces an hdf5 version of the data interpolated onto a 3d grid. [1] J. A. Font, N. Stergioulas and K. D. Kokkotas. Nonlinear hydrodynamical evolution of rotating relativistic stars: Numerical methods and code tests. Mon. Not. Roy. Astron. Soc., 313, 678, 2000. [2] F. Löffler, R. De Pietri, A. Feo, F. Maione and L. Franci. Stiffness effects on the dynamics of the bar-mode instability of neutron stars in full general relativity. Phys. Rev. D 91, 064057, 2015 (arXiv:1411.1963). [3] N. Stergioulas and J. L. Friedman. Comparing models of rapidly rotating relativistic stars constructed by two numerical methods. ApJ, 444, 306, 1995. [4] N. Stergioulas. The structure and stability of rotating relativistic stars. PhD thesis, University of Wisconsin-Milwaukee, 1996. [5] N. Stergioulas. Rotating Stars in Relativity. Living Rev. Relativity, 1, 1998. [Article in online journal], cited on 18/3/02, http://www.livingreviews.org/Articles/Volume1/1998-8stergio/index.html.
Parameter reference:
A_diff — constant A in differential rotation law.
accuracy — rnsid accuracy in convergence.
axes_ratio — rnsid axes ratio.
Convergence factor.
eos_file — Equation of state table (EOS table file).
eos_type — Specify type of equation of state. Default: poly (Polytropic EOS).
model2d_file — Name of 2D model file. Default: model2D.dat (default 2D model file).
recover_2dmodel — Recover 2D model?
(yes = recover 2D model; no = don’t recover 2D model.)
rho_central — central baryon density. Default: 1.24e-3.
rns_atmo_tolerance — A point is set to atmosphere if rho < (1+RNS_atmo_tolerance)*RNS_rho_min.
rns_gamma — If we’re using a different EoS at run time, this is the RNS Gamma. Will be ignored if negative.
rns_k — If we’re using a different EoS at run time, this is the RNS K.
rns_lmax — max. term in Legendre poly. Any positive, non-zero number.
rns_rho_min — A minimum rho below which evolution is turned off (atmosphere). Atmosphere detection for RNSID.
rotation_type — Specify type of rotation law. Default: uniform; alternative: KEH differential rotation law.
save_2dmodel — Save 2D model? (yes = save 2D model; no = don’t save 2D model.)
Set shift to zero? (yes = set shift to zero; no = don’t set shift to zero.)
Scheduled functions (construct stationary initial data with rnsid): hydro_rnsid_checkparameters; hydro_rnsid_init — create rotating neutron star initial data. This section lists all the variables which are assigned storage by thorn EinsteinInitialData/Hydro_RNSID. Storage can either last for the duration of the run (Always means that if this thorn is activated storage will be assigned; Conditional means that if this thorn is activated storage will be assigned for the duration of the run if some condition is met), or can be turned on for the duration of a schedule function. Storage: hydrobase::rho, hydrobase::press, hydrobase::eps, hydrobase::vel.
UrquhartGraph - Maple Help construct Urquhart graph UrquhartGraph( P, opts ) The UrquhartGraph(P, opts) command returns an Urquhart graph for the point set P, where P is a set of points. The Urquhart graph is the undirected graph whose vertices correspond to points in P, with an edge between two points p and q whenever p and q form an edge of a triangle in a Delaunay triangulation of P but not the longest edge of any triangle in this triangulation. Intuitively, the Urquhart graph is obtained from a Delaunay triangulation by simply removing the longest edge from each triangle. It was proposed by R. B. Urquhart as an efficient method for approximating the relative neighborhood graph. The Urquhart graph is closely related to several other proximity graphs. Generate a set of random two-dimensional points and draw the Urquhart graph. \mathrm{with}⁡\left(\mathrm{GraphTheory}\right): \mathrm{with}⁡\left(\mathrm{GeometricGraphs}\right): \mathrm{points}≔\mathrm{LinearAlgebra}:-\mathrm{RandomMatrix}⁡\left(60,2,\mathrm{generator}=0..100.,\mathrm{datatype}=\mathrm{float}[8]\right) \textcolor[rgb]{0,0,1}{\mathrm{points}}\textcolor[rgb]{0,0,1}{≔}\begin{array}{c}[\begin{array}{cc}\textcolor[rgb]{0,0,1}{9.85017697341803}& \textcolor[rgb]{0,0,1}{82.9750304386195}\\ \textcolor[rgb]{0,0,1}{86.0670183749663}& \textcolor[rgb]{0,0,1}{83.3188659363996}\\ \textcolor[rgb]{0,0,1}{64.3746795546741}& \textcolor[rgb]{0,0,1}{73.8671607639673}\\ \textcolor[rgb]{0,0,1}{57.3670557294666}& \textcolor[rgb]{0,0,1}{2.34399775883031}\\ \textcolor[rgb]{0,0,1}{23.6234264844933}& \textcolor[rgb]{0,0,1}{52.6873367387328}\\ \textcolor[rgb]{0,0,1}{47.0027547350003}& \textcolor[rgb]{0,0,1}{22.2459488367552}\\ \textcolor[rgb]{0,0,1}{74.9213491558963}& \textcolor[rgb]{0,0,1}{62.0471820220718}\\ \textcolor[rgb]{0,0,1}{92.1513434709073}& \textcolor[rgb]{0,0,1}{96.3107262637080}\\ \textcolor[rgb]{0,0,1}{48.2319624355944}& 
\textcolor[rgb]{0,0,1}{63.7563267144141}\\ \textcolor[rgb]{0,0,1}{90.9441877431805}& \textcolor[rgb]{0,0,1}{33.8527464913022}\\ \textcolor[rgb]{0,0,1}{⋮}& \textcolor[rgb]{0,0,1}{⋮}\end{array}]\\ \hfill \textcolor[rgb]{0,0,1}{\text{60 × 2 Matrix}}\end{array} \mathrm{UG}≔\mathrm{UrquhartGraph}⁡\left(\mathrm{points}\right) \textcolor[rgb]{0,0,1}{\mathrm{UG}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{Graph 1: an undirected unweighted graph with 60 vertices and 65 edge\left(s\right)}} \mathrm{DrawGraph}⁡\left(\mathrm{UG}\right) Urquhart, R. B. (1980), "Algorithms for computation of relative neighborhood graph", Electronics Letters, 16 (14): 556–557. doi:10.1049/el:19800386 The GraphTheory[GeometricGraphs][UrquhartGraph] command was introduced in Maple 2020.
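The “remove the longest edge of each triangle” rule is easy to express in code. The sketch below is illustrative, not Maple’s implementation: it assumes the Delaunay triangulation is already available as index triples and returns the surviving edges as sorted index pairs.

```python
import math
from itertools import combinations

def urquhart_edges(points, triangles):
    # points: list of (x, y); triangles: a Delaunay triangulation of the
    # points given as index triples.  Keep every Delaunay edge that is
    # not the longest edge of any triangle containing it.
    def length(edge):
        (x1, y1), (x2, y2) = points[edge[0]], points[edge[1]]
        return math.hypot(x1 - x2, y1 - y2)

    edges, longest = set(), set()
    for tri in triangles:
        tri_edges = [tuple(sorted(e)) for e in combinations(tri, 2)]
        edges.update(tri_edges)
        longest.add(max(tri_edges, key=length))
    return edges - longest

# Unit square triangulated along the diagonal (1, 2): that diagonal is the
# longest edge of both triangles, so only the four sides survive.
square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
tris = [(0, 1, 2), (1, 2, 3)]
```

Note that an edge is dropped if it is the longest edge of *any* incident triangle, which is why the shared diagonal above disappears even though both triangles vote to remove the same edge.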
A categorification of the square root of -1 Yin Tian (2016) We give a graphical calculus for a monoidal DG category ℐ whose Grothendieck group is isomorphic to the ring ℤ[√(-1)]. We construct a categorical action of ℐ which lifts the action of ℤ[√(-1)] on ℤ². A characterization of discrete linearly compact rings by means of a duality A. Orsatti, V. Roselli (1981) A characterization of representation-finite algebras Andrzej Skowroński, M. Wenderlich (1991) Let A be a finite-dimensional, basic, connected algebra over an algebraically closed field. Denote by Γ(A) the Auslander-Reiten quiver of A. We show that A is representation-finite if and only if Γ(A) has at most finitely many vertices lying on oriented cycles and finitely many orbits with respect to the action of the Auslander-Reiten translation. A characterization of representation-finite biserial algebras over a perfect field [Book] Zygmunt Pogorzały (1989) A class of quasitilted rings that are not tilted Riccardo Colpi, Kent R. Fuller, Enrico Gregorio (2006) Based on the work of D. Happel, I. Reiten and S. Smalø on quasitilted artin algebras, the first two authors recently introduced the notion of quasitilted rings. Various authors have presented examples of quasitilted artin algebras that are not tilted. Here we present a class of right quasitilted rings that are not right tilted, and we show that they satisfy a condition that would force a quasitilted artin algebra to be tilted. A classification of symmetric algebras of strictly canonical type Marta Kwiecień, Andrzej Skowroński (2009) In continuation of our article in Colloq. Math. 116.1, we give a complete description of the symmetric algebras of strictly canonical type by quivers and relations, using Brauer quivers. A classification of two-peak sincere posets of finite prinjective type and their sincere prinjective representations Justyna Kosakowska (2001) Assume that K is an arbitrary field.
Let (I, ⪯) be a two-peak poset of finite prinjective type and let KI be the incidence algebra of I. We study sincere posets I and sincere prinjective modules over KI. The complete set of all sincere two-peak posets of finite prinjective type is given in Theorem 3.1. Moreover, for each such poset I, a complete set of representatives of isomorphism classes of sincere indecomposable prinjective modules over KI is presented in Tables 8.1. A cluster expansion formula ( {A}_{n} case). Schiffler, Ralf (2008) A computation of positive one-peak posets that are Tits-sincere Marcin Gąsiorek, Daniel Simson (2012) A complete list of positive Tits-sincere one-peak posets is provided by applying combinatorial algorithms and computer calculations using Maple and Python. The problem whether any square integer matrix A \in M_n(ℤ) is ℤ-congruent to its transpose {A}^{tr} is also discussed. An affirmative answer is given for the incidence matrices {C}_{I} and the Tits matrices \hat{C}_{I} of positive one-peak posets I. A construction for quasi-hereditary algebras Vlastimil Dlab, Claus Michael Ringel (1989) A construction of complex syzygy periodic modules over symmetric algebras Andrzej Skowroński (2005) We construct arbitrarily complicated indecomposable finite-dimensional modules with periodic syzygies over symmetric algebras. A Continuity Criterion for Functors. Thomas S. Shores (1975) A Criterion for Finite Representation Type. Klaus Bongartz (1984) A criterion for quasi-heredity and the characteristic module. Pu Zhang (1997) A density theorem for algebra representations on the space (s) We show that an arbitrary irreducible representation T of a real or complex algebra on the F-space (s), or, more generally, on an arbitrary infinite (topological) product of the field of scalars, is totally irreducible, provided its commutant is trivial. This provides an affirmative solution to a problem of Fell and Doran for representations on these spaces.
A density theorem for F-spaces A family of noetherian rings with their finite length modules under control Markus Schmidmeier (2002) We investigate the category \text{mod}\,\Lambda of finite length modules over the ring \Lambda = A \otimes_k \Sigma, where \Sigma is a V-ring, i.e. a ring for which every simple module is injective, k a subfield of its centre, and A an elementary k-algebra. Each simple module {E}_{j} gives rise to a quasiprogenerator {P}_{j} = A \otimes {E}_{j}. By a result of K. Fuller, {P}_{j} induces a category equivalence from which we deduce that \text{mod}\,\Lambda \simeq \coprod_j \text{mod}\,{P}_{j}. As a consequence we can (1) construct for each elementary k-algebra A a nonartinian noetherian ring \Lambda such that \text{mod}\,A \simeq \text{mod}\,\Lambda, (2) find twisted... William Crawley-Boevey, Otto Kerner (1994)
Lcm inert lcm function multivariate polynomials over the rationals The Lcm function is a placeholder for representing the least common multiple of an arbitrary number of polynomials with rational coefficients. Lcm is used in conjunction with mod or modp1, as described below, which define the coefficient domain. The call Lcm(a, b, ...) mod p computes the least common multiple of the polynomials modulo p, a prime integer. The inputs must be polynomials over the rationals or over a finite field specified by RootOf expressions. The call modp1(Lcm(a, b, ...), p) does likewise for polynomials in the modp1 representation. \mathrm{Lcm}⁡\left({x}^{2}+1,{x}^{2}+x\right)\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{mod}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}2 {\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{x}
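The identity behind the modular lcm is lcm(a, b) = a·b / gcd(a, b), carried out with polynomial arithmetic over GF(p). A self-contained sketch follows (illustrative only, not Maple’s implementation): polynomials are coefficient lists in low-to-high degree order, and p is assumed prime so that leading coefficients can be inverted with Fermat’s little theorem.

```python
def _trim(a):
    # Drop trailing zero coefficients so the last entry is the leading one.
    while a and a[-1] == 0:
        a = a[:-1]
    return a

def polymul(a, b, p):
    out = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return _trim(out)

def polydivmod(a, b, p):
    # Long division over GF(p); p prime, so inverse via Fermat.
    a, b = _trim(list(a)), _trim(list(b))
    inv = pow(b[-1], p - 2, p)
    q = [0] * max(len(a) - len(b) + 1, 0)
    while len(a) >= len(b):
        c = (a[-1] * inv) % p
        k = len(a) - len(b)
        q[k] = c
        for i, bi in enumerate(b):
            a[k + i] = (a[k + i] - c * bi) % p
        a = _trim(a)
    return _trim(q), a

def monic(a, p):
    inv = pow(a[-1], p - 2, p)
    return [(c * inv) % p for c in a]

def polygcd(a, b, p):
    while b:
        a, b = b, polydivmod(a, b, p)[1]
    return monic(a, p)

def polylcm(a, b, p):
    # lcm(a, b) = a*b / gcd(a, b), normalised monic.
    q, _ = polydivmod(polymul(a, b, p), polygcd(a, b, p), p)
    return monic(q, p)
```

For the help-page example, polylcm([1, 0, 1], [0, 1, 1], 2) encodes Lcm(x²+1, x²+x) mod 2 and returns [0, 1, 0, 1], i.e. x³+x, matching the displayed result (mod 2, x²+1 = (x+1)² and x²+x = x(x+1), so the lcm is x(x+1)²).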
30D35 Distribution of values, Nevanlinna theory 30D05 Functional equations in the complex domain, iteration and composition of analytic functions 30D10 Representations of entire functions by series and integrals 30D15 Special classes of entire functions and growth estimates 30D30 Meromorphic functions, general theory 30D60 Quasi-analytic and other classes of functions A class of gap series with small growth in the unit disc. Sons, L.R., Ye, Zhuan (2002) A normality criterion for meromorphic functions having multiple zeros Shanpeng Zeng, Indrajit Lahiri (2014) We prove a normality criterion for a family of meromorphic functions having multiple zeros which involves sharing of a non-zero value by the product of functions and their linear differential polynomials. A note on a result of Singh and Kulkarni. Fang, Mingliang (2000) A note on algebraic differential equations whose coefficients are entire functions of finite order A note on Mues' conjecture. Lahiri, Indrajit (2001) Sujoy Majumder, Somnath Saha (2018) The purpose of the paper is to study the uniqueness problems of linear differential polynomials of entire functions sharing a small function and obtain some results which improve and generalize the related results due to J. T. Li and P. Li (2015). Basically we pay our attention to the condition \lambda \left(f\right)\ne 1 in Theorems 1.3, 1.4 from J. T. Li and P. Li (2015). Some examples have been exhibited to show that conditions used in the paper are sharp. A note on some results of Schwick. Xu, Yan, Fang, Mingliang (2004) A note on the growth of entire functions. Chung-Chung Yang (1975) A note on the oscillation theory of certain second order differential equations. Wang, Shupei (1995) A note on the separated maximum modulus points of meromorphic functions Ewa Ciechanowicz, Ivan I. 
Marchenko (2014) We give an upper estimate of Petrenko's deviation for a meromorphic function of finite lower order in terms of Valiron's defect and the number p(∞,f) of separated maximum modulus points of the function. We also present examples showing that this estimate is sharp. A property of entire transcendental functions Alexander Abian (1978) A question of Gross and weighted sharing of a finite set by meromorphic functions. A sharp form of Nevanlinna's second fundamental theorem. A. Hinkkanen (1992) A sharp form of Nevanlinna's second main theorem of several complex variables. Zhuan Ye (1996) A sharp result concerning cercles de remplissage. Rossi, John (1995) A theorem of differential mappings of Riemann surfaces. Hu, Peichu, Yang, Mingze (1994) A unicity theorem for meromorphic functions. Qiu, Huiling, Fang, Mingliang (2002) A uniqueness result related to meromorphic functions sharing two sets. Banerjee, Abhijit, Majumder, Sujoy (2011) L log L M. Essen, D. F. Shea, C. S. Stanton (1985) We give a necessary and sufficient condition for an analytic function in {H}^{1} to have real part in class L log L. This condition contains the classical one of Zygmund; other variants are also given.
Train Network in Parallel with Custom Training Loop - MATLAB & Simulink - MathWorks {s}_{c}^{2}=\frac{1}{M}\sum_{j=1}^{N} m_j \left[{s}_{j}^{2}+{\left(\bar{x}_{j}-\bar{x}_{c}\right)}^{2}\right], where N is the number of partitions (e.g. workers), M is the total number of observations, m_j is the number of observations in partition j, \bar{x}_{j} and {s}_{j}^{2} are the mean and variance computed on partition j, and \bar{x}_{c} is the combined mean over all observations.
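The combined-variance formula is easy to check in plain code: split a data set into chunks, combine the per-chunk statistics, and compare against the variance of the full set. A sketch (not MathWorks code; the per-chunk variances s_j² are population variances, i.e. normalised by m_j, which makes the identity exact):

```python
def combined_variance(chunks):
    # chunks: partitions of one data set, e.g. one list per worker.
    means = [sum(c) / len(c) for c in chunks]
    # Per-chunk population variances (normalised by m_j).
    local_vars = [sum((x - m) ** 2 for x in c) / len(c)
                  for c, m in zip(chunks, means)]
    M = sum(len(c) for c in chunks)
    # Combined mean x_bar_c, weighted by chunk sizes m_j.
    xc = sum(len(c) * m for c, m in zip(chunks, means)) / M
    # s_c^2 = (1/M) * sum_j m_j * (s_j^2 + (x_bar_j - x_bar_c)^2)
    return sum(len(c) * (v + (m - xc) ** 2)
               for c, m, v in zip(chunks, means, local_vars)) / M
```

Because each term m_j·s_j² is a chunk’s within-sum-of-squares and m_j·(x̄_j − x̄_c)² its between-chunk contribution, the result agrees with the single-pass variance of the concatenated data up to floating-point rounding, regardless of how the data is partitioned.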
IsTangent - Maple Help test if a plane is tangent to a sphere IsTangent(p, s) The routine returns true if the plane p is tangent to the sphere s; false if they are not; and FAIL if it is unable to reach a conclusion. In case of FAIL, if the third optional argument is given, the condition that makes p tangent to s is assigned to this argument. The command with(geom3d,IsTangent) allows the use of the abbreviated form of this command. \mathrm{with}⁡\left(\mathrm{geom3d}\right): \mathrm{sphere}⁡\left(s,[\mathrm{point}⁡\left(o,0,0,1\right),1]\right) \textcolor[rgb]{0,0,1}{s} Find the condition that makes the plane a⁢x+b⁢y+c⁢z+d=0 tangent to the sphere s \mathrm{assume}⁡\left(a\ne 0\right) \mathrm{plane}⁡\left(p,a⁢x+b⁢y+c⁢z+d=0,[x,y,z]\right) \textcolor[rgb]{0,0,1}{p} \mathrm{IsTangent}⁡\left(p,s,'\mathrm{condition}'\right) IsTangent: "unable to determine if 1-abs(c+d)/(a^2+b^2+c^2)^(1/2) is zero" \textcolor[rgb]{0,0,1}{\mathrm{FAIL}} Hence, the condition is: \mathrm{condition} \textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{-}\frac{|\textcolor[rgb]{0,0,1}{c}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{d}|}{\sqrt{{\textcolor[rgb]{0,0,1}{\mathrm{a~}}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{b}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{c}}^{\textcolor[rgb]{0,0,1}{2}}}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0} geom3d[TangentPlane]
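The condition Maple reports is simply the statement that the distance from the sphere’s center to the plane equals the radius. A numerical sketch of the same test (illustrative, not the geom3d implementation):

```python
import math

def is_tangent(plane, center, radius, tol=1e-9):
    # plane: coefficients (a, b, c, d) of a*x + b*y + c*z + d = 0.
    a, b, c, d = plane
    x0, y0, z0 = center
    # Point-plane distance; tangency means it equals the radius.
    dist = abs(a * x0 + b * y0 + c * z0 + d) / math.sqrt(a * a + b * b + c * c)
    return math.isclose(dist, radius, abs_tol=tol)
```

With the sphere from the example (center (0, 0, 1), radius 1), the plane z = 0 is tangent while z = 3 is not; for a = b = 0, c = 1 this is exactly the displayed condition 1 − |c + d|/√(a² + b² + c²) = 0.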
Dimensions Array, list, Matrix, Vector The Dimensions(M) function returns the equivalent of [rtable_dims(M)], but works on lists as well. When M is a scalar the result will always be [ 1..1 ]. This function is part of the ArrayTools package, so it can be used in the short form Dimensions(..) only after executing the command with(ArrayTools). However, it can always be accessed through the long form of the command by using ArrayTools[Dimensions](..). \mathrm{with}⁡\left(\mathrm{ArrayTools}\right): \mathrm{Dimensions}⁡\left([[1,2],[3,4]]\right) [\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{..}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{..}\textcolor[rgb]{0,0,1}{2}]
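For intuition, the behaviour on nested lists and scalars can be mimicked in a few lines. This sketch is hypothetical (not Maple code): it returns each dimension as a (low, high) pair, mirroring the 1..2, 1..2 ranges in the output above, and assumes rectangular nesting so only the first element needs inspecting.

```python
def dimensions(m):
    # Scalars behave like 1 x 1 objects, as ArrayTools[Dimensions] does.
    if not isinstance(m, list):
        return [(1, 1)]
    dims = [(1, len(m))]
    # Descend into the first element for rectangular nested lists.
    if m and isinstance(m[0], list):
        dims += dimensions(m[0])
    return dims
```

So dimensions([[1, 2], [3, 4]]) gives [(1, 2), (1, 2)], the analogue of [1..2, 1..2].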
hyad.es | Equipotential Chess After a few weeks' worth of playtesting (spanning several months of nonoverlapping schedules), I'm proud to release a new board game into the wild, based on some really nice physics. For the uninitiated, equipotentials are surfaces of constant potential energy, and field lines (prominent examples of which have probably surfaced in your childhood education vis-a-vis magnets) show the direction of the net force from a particular field. Properly constructed, equipotentials and field lines should in principle be orthogonal everywhere they intersect — just like lines on a chessboard. In fact, exactly like lines on a chessboard… A beginner's beginners' guide to 象棋 象棋, or Chinese chess, is a quasi-popular pastime among Singaporeans1. The most obvious difference between it and international chess is that pieces are played on the intersections of grid lines, instead of inside cells defined by grid lines. For an ordinary square board, this difference is trivially superficial (just like the difference between contravariance and covariance is unimportant in Euclidean space). However, for alternative topologies, things like the two-colouring scheme imposed by a flat orthogonal grid cannot necessarily hold, and things might get interesting — that is to say, too interesting for things like Bishops to work properly. On the other hand, if we can guarantee orthogonality of grid lines at least locally, then we can still play a game with Chinese chess-like rules irrespective of the global topological structure. Rather than insult the reader's ability to read Wikipedia, we shall merely summarise. 
In Chinese chess, all moves consist of a single uninterrupted line segment of arbitrary length along grid lines, which may or may not terminate with a capture, with some exceptions:

- Pawns (兵/卒) may only take steps of unit length; moreover they cannot move backwards at any point, and may move sideways only after crossing the central river;
- Horses (马) may only make a compound move consisting of first a unit step, and then movement across a diagonal such that the overall path bounds two cells (this is similar to the Knight movement mechanic in international chess); a piece blocking the first unit step precludes the second part of the move (顶马脚);
- Elephants (相/象) may only make two successive diagonal movements in the same direction along the same diagonal; once again, an intervening piece renders a move invalid. Also they can't cross the river because your generals are not as batshit as Hannibal.
- Palace Guards (士/仕) may only make unit steps along certain designated diagonals that are marked out on the board;
- Cannons (炮/砲) may only capture a piece by jumping over exactly one intervening piece;
- The General (将/帅) may only take unit steps inside of the Palace. Additionally, line-of-sight between Generals counts as check for endgame purposes (飞将).

In retrospect, that's a lot of exceptions.

We generate our board by considering the net field and effective equipotentials produced by two co-orbiting bodies of equal mass m, resulting from both the gravitational force and centrifugal effects.
In the rotating barycentric reference frame, the effective potential is given by

U = -\frac{a}{\left|\mathbf{r}-\mathbf{r}_1\right|}-\frac{a}{\left|\mathbf{r}-\mathbf{r}_2\right|} - b \left|\hat{\mathbf{n}}\times\mathbf{r}\times\hat{\mathbf{n}}\right|^2,

where \hat{\mathbf{n}} is a unit vector denoting the axis of rotation, a is proportional to the gravitational term Gm, and b gives us the effective centrifugal potential \frac{1}{2}\omega^2. This gives us a playing field that is both rotationally symmetric around the axis connecting the two masses, and reflectively symmetric about the plane at the midpoint of the system bisecting this axis. Projecting onto two dimensions gives us something that looks like this:

This is actually half the board; the actual board possesses mirror symmetry around the central plane (represented by the line on the right). There are a few immediately interesting features:

- All equipotentials (the circles and roundish curves) intersect the field lines (the radial curves) at right angles everywhere.
- There are vertices where field lines intersect; for gameplay purposes, we treat them as ordinary vertices, with straight line segments being interpreted geometrically. The most obvious one is where all the field lines come together in the middle. These are source/sink singularities, and they correspond to where the point masses (planets?) would be. As such, Generals start there.
- We've marked out the first equipotential (corresponding to a LEO-like positioning) and the sixth with double lines. We'll get back to this later.
- On the right side of the board, there is another vertex where field lines intersect. However, we see that the vertical field lines converge there, but the horizontal ones diverge! As it turns out, this is a (gravitational) saddle point, just like all the Lagrange points (except L4/5) are; in particular, this is the L1 Lagrange point for these two bodies.
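As a sanity check on the geometry, a short Python sketch can evaluate the effective potential along the axis joining the masses and confirm that the midpoint behaves as a saddle: a local maximum along the axis, but a local minimum transverse to it. The values a = 1 and b = 0.125 are the ones used for the printed boards; the unit mass separation is an illustrative assumption.

```python
import math

# Effective potential of two equal co-orbiting masses at (-d, 0) and (+d, 0),
# in the rotating frame; the rotation axis passes through the barycentre,
# perpendicular to the orbital plane.  a = 1, b = 0.125 as for the boards;
# the separation d = 1 is an arbitrary illustrative choice.
A, B, D = 1.0, 0.125, 1.0

def u(x, y):
    r1 = math.hypot(x + D, y)
    r2 = math.hypot(x - D, y)
    return -A / r1 - A / r2 - B * (x * x + y * y)

h = 1e-3
# Along the inter-mass axis the midpoint is a local maximum...
assert u(0, 0) > u(h, 0) and u(0, 0) > u(-h, 0)
# ...but transverse to it, a local minimum: a saddle (the L1 point).
assert u(0, 0) < u(0, h) and u(0, 0) < u(0, -h)
print("midpoint is a saddle of the effective potential")
```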
The equipotentials here represent Lissajous orbits that are stable around the axis, but unstable along the axis. Finally, we see that there exist both cells with three sides (adjoining the point mass) and cells with five (near the saddle point). This forces us to redefine “diagonal” for gameplay purposes. After quite a bit of playtesting and rebalancing, we settled on the following layout for pieces:

The topology of this setup differs pretty significantly from the flat board. Consequently, there must exist some deviations from standard gameplay rules:

- Instead of advancing forward by unit steps, pawns advance along field lines either out of your own gravitational well, or into your opponent's gravitational well. Once in your opponent's gravitational well, a pawn can also take unit steps along equipotentials.
- The sixth equipotential (marked with a double line) is special; pawns may move freely along it, as long as their path is not interrupted. However, they still may not capture nonadjacent pieces (except other pawns).
- Pawns and Elephants may not advance past the sixth equipotential.
- As there are no Palace diagonals to speak of, Palace Guards may move freely along the first equipotential (marked with a double line), so long as their path is not interrupted. They may also take a unit step down to the planet, or back up to low-earth orbit.
- We define diagonal movement as movement along two consecutive edges, both bounding the same cell. Thus, if I were to begin on one vertex of a triangle, a diagonal movement entitles me to end up at either of the other two vertices; conversely, if I were to begin at one vertex of a pentagonal cell, the two vertices adjacent to me are inaccessible by diagonal movement. In particular, moves for Horses and Elephants will use this definition of diagonality.

Rules aside, the new topology leads to some pretty interesting situations.
For example, a single Chariot can threaten an enemy piece on an otherwise empty equipotential from two different directions. We leave the discovery of more interesting tactical and strategic consequences as an exercise for the reader. Obviously, this also means that we can turn arbitrary physical configurations into playable chessboards: Space Yugoslavia The possibilities are technically endless, right?

During playtesting, I received a lot of feedback that the game would have been easier to pick up had it been computerised. Prima facie, this seems like a pretty good idea; however, the presence of singularities where field lines intersect (i.e. the field sinks and saddle points) makes parameterisation difficult. If anyone has any ideas as to how to implement this properly (or really any feedback on the game in general), feel free to leave a comment.

Update 3rd August 2016: I've created a computerised version of this game! Right now it's a preliminary sketch that doesn't have support for some important features (game save/load, game reset, AI, network play… actually that's a lot of features). The computerised version can be found at its GitLab repo.

These boards were generated in Mathematica with a=1, b=0.125, and are designed to be played with smallish pieces when printed at size A2.

- Half Board (useful for printing boards larger than your printer allows; just print two copies and join them together)
- Half Board Layout
- Full Board Layout

Many thanks are owed to Kho Zhe Wei, Ernest Tan, and everyone else who played this game and/or offered feedback on it of some sort. Thanks also to YCX and KLKM, who tested the computerised version.

Especially retirees, and smart kids who need a CCA in schools that don't have a Bridge club ↩
As usual, I release these under CC BY-NC-SA. ↩
Show how finding a Least Common Multiple can help you to simplify each of the following expressions. Then simplify them.

(a) \frac{2}{3} - \frac{5}{12}

What is the Least Common Multiple of 3 and 12? How can this help you with this calculation?

Step 1: Convert two-thirds to a fraction with a denominator of 12: \left(\frac{2}{3}\right)\left(\frac{4}{4}\right) = \frac{8}{12}

Step 2: Subtract five-twelfths from your answer to Step 1: \frac{8}{12} - \frac{5}{12} = \frac{3}{12} = \frac{1}{4}

The answer is \frac{1}{4}.

(b) \frac{4}{5} + \frac{11}{12}

See the steps in part (a) for help. In this problem, you might need to rewrite both fractions so that they have a common denominator.
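A quick computational check of both parts, using Python's fractions module (the LCM-based common denominators are 12 and 60 respectively):

```python
from fractions import Fraction
from math import lcm

# Part (a): LCM(3, 12) = 12, so rewrite 2/3 as 8/12 before subtracting.
print(lcm(3, 12))                          # 12
print(Fraction(2, 3) - Fraction(5, 12))    # 1/4

# Part (b): LCM(5, 12) = 60, so rewrite both fractions over 60.
print(lcm(5, 12))                          # 60
print(Fraction(4, 5) + Fraction(11, 12))   # 103/60
```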
Physics - Holey Rubber Slab Has Controllable Stiffness

Squeezing a holey rubber slab changes its stiffness over a wide range in the direction perpendicular to the squeeze.

B. Florijn/Univ. of Leiden. Controllable Swiss cheese: the stiffness in the vertical direction of this silicone slab depends on the squeezing force applied in the horizontal direction. (See video below.)

A new material has a tunable springiness. Simply by squeezing the rubbery material to different degrees, the researchers who developed it can predictably alter its stiffness over a wide range in a direction perpendicular to the squeeze. The material is a slab containing holes of two different sizes. Such a “programmable” material might find uses in robotics, medical prostheses, or ordinary footwear.

“Smart materials” have properties that can change in response to changes in their environment. For example, researchers have developed tunable vibration dampers for vehicles and for protecting buildings from earthquakes. These devices work by changing their stiffness, and thus the frequency at which they absorb vibrations. But they usually use piezoelectric materials, which require an electric field to change their properties. A purely mechanical technology that doesn’t rely on electric power could be cheaper and more robust.

Bastiaan Florijn and coworkers at the University of Leiden in the Netherlands have developed and tested such a structure. It is an example of a so-called metamaterial—a material whose properties derive from its macroscopic structure rather than from its composition. Previously, mechanical metamaterials were used to make acoustic dampers that could be switched on and off by a small amount of compression [1], or for “unfeelability cloaks” that could deform to mask the presence of an object concealed beneath them [2]. But the new material has a variety of mechanical properties, rather than simply being switchable between two states.

B.
Florijn et al., Phys. Rev. Lett. (2014)

Video caption: The slab is gradually squeezed in the vertical direction by a maximum of about 12% of its total height, while the horizontal squeezing (confinement) is fixed at 15% of the (unconfined) width. At about half of the maximum squeezing (0:03 in the video), the stiffness suddenly drops, when the orientation of the squished holes switches. The equivalent event during the reverse process occurs at slightly less squeezing (0:11), demonstrating that this degree of horizontal confinement leads to hysteresis. (The video is sped up by 20 times.)

Florijn and colleagues made a slab of silicone rubber perforated with rows of holes alternating between two diameters, so that each large hole was surrounded by four small holes, and vice versa. In their experiments, the team controlled the stiffness in one direction within the sheet by squeezing, or “confining,” the sheet by a carefully measured amount in the perpendicular direction. For a fixed confinement, they measured the relationship between force (stress) and deformation (strain) in the first direction. They showed that the shape of this curve was strongly affected by the degree of confinement.

In the team’s experiment, as the force slowly squeezed the sheet, the large holes flattened along one of the two in-plane directions, and the interspersed small holes flattened along the perpendicular direction. As the squeezing increased, the orientation of this flattening could switch, sometimes abruptly, from one direction to the other, which altered the stiffness. Following standard practice, after reaching the most squeezed state, the team ran the process in reverse, slowly releasing the pressure, to see if the stress-strain curve would look the same in both directions.
The researchers found four kinds of behavior. With little or no confinement, the amount of deformation increased smoothly with the applied force. For greater confinement, a kink developed in the stress-strain curve, making it nonlinear. With still more confinement, the curve showed “hysteresis,” an even more pronounced nonlinear behavior where the forward and reverse curves don’t match. In this case, the material could absorb and dissipate energy (by generating a bit of heat), rather than simply storing it like a spring. Finally, for still more confinement, the curve became smooth again. Florijn and colleagues saw the same behavior in numerical simulations of a two-dimensional elastic sheet with the same pattern of holes.

Florijn says one could imagine using the material to make the feet of a robot “change from more bouncy to more dissipative, depending on the terrain.” Or one could build a car bumper that “can dissipate a lot of energy during impact but afterwards can easily be brought back to its original shape and strength.”

Martin Wegener of the Karlsruhe Institute of Technology in Germany, who has developed other metamaterial structures, calls the work a “step forward in controlling mechanical metamaterials.” Previous work, he says, has focused mainly on linear properties, as in a simple spring. Here, in contrast, the material can behave nonlinearly—making possible applications such as shock absorption. “I expect that the paper will stimulate further work” on similar structures, says Wegener.

[1] P. Wang, F. Casadei, S. Shan, J. C. Weaver, and K. Bertoldi, “Harnessing Buckling to Design Tunable Locally Resonant Acoustic Metamaterials,” Phys. Rev. Lett. 113, 014301 (2014).
[2] T. Bückmann, M. Thiel, M. Kadic, R. Schittny, and M. Wegener, “An Elasto-mechanical Unfeelability Cloak Made of Pentamode Metamaterials,” Nature Commun. 5 (2014).

Paper authors: Bastiaan Florijn, Corentin Coulais, and Martin van Hecke.
How slippage is calculated

The intuition behind slippage is the following: if you try to buy big quantities in a market that has no liquidity, then you will have to pay more than the expected price (see this article for more details). Here we focus on the slippage occurring when swapping tokens on decentralised exchanges, that is, when using liquidity pools.

Suppose you want to swap a quantity q_{input} of BUSD tokens for WBNB tokens. For this you will use a BUSD-WBNB pool containing quantities x^{\mathrm{BUSD}} and x^{\mathrm{WBNB}} of the two tokens before the swap. Recall from the swap formula that the real quantity q_{output}^{real} of WBNB tokens you get from the pool is given by

q_{output}^{real}=\frac{(1-r)\times q_{input} \times x^{\mathrm{WBNB}} }{x^{\mathrm{BUSD}}+(1-r)\times q_{input}},

where 0\leq r \leq 1 is the fee rate of the pool, paid to liquidity providers (typically it equals 0.002). However, for a very small input q_{input} of BUSD tokens compared to the size of the pool, we obtain an output of

q_{output}^{ideal}=(1-r)\times q_{input}\times x^{\mathrm{WBNB}}/x^{\mathrm{BUSD}} \text{ WBNB tokens.}

The slippage of the transaction is the amount of WBNB tokens that you lose compared to buying all the WBNB tokens at the ideal price:

q^{real}_{output}-q_{output}^{ideal}=-\frac{(1-r)^2\times q_{input}^2\times x^{\mathrm{WBNB}}}{x^{\mathrm{BUSD}}\times \left[x^{\mathrm{BUSD}}+(1-r)\times q_{input}\right]}<0.

The result is quite intuitive: we see for instance that the bigger the quantity q_{input} of BUSD you swap, the higher the slippage. Besides, the bigger the quantity x^{\mathrm{BUSD}} of BUSD already in the pool, the less significant your contribution to this pool and the smaller the impact of your swap, and so the lower the slippage.
Similarly, the bigger the quantity x^{\mathrm{WBNB}} of WBNB already in the pool (or rather, the bigger the pool ratio x^{\mathrm{WBNB}}/x^{\mathrm{BUSD}}, the price of a BUSD token expressed in WBNB), the more WBNB tokens are withdrawn from the pool by your swap, and so the higher the slippage.
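The formulas above translate directly into code. A small Python sketch comparing the real output with the ideal one (the pool sizes below are made-up example values):

```python
def real_output(q_in, x_busd, x_wbnb, r=0.002):
    """Constant-product swap output after the fee r is taken."""
    return (1 - r) * q_in * x_wbnb / (x_busd + (1 - r) * q_in)

def ideal_output(q_in, x_busd, x_wbnb, r=0.002):
    """Output at the current pool price, ignoring the swap's own impact."""
    return (1 - r) * q_in * x_wbnb / x_busd

# Hypothetical pool: 1,000,000 BUSD and 2,500 WBNB (i.e. 1 WBNB = 400 BUSD).
x_busd, x_wbnb = 1_000_000.0, 2_500.0

for q in (100.0, 10_000.0, 100_000.0):
    slip = real_output(q, x_busd, x_wbnb) - ideal_output(q, x_busd, x_wbnb)
    print(f"swap {q:>9.0f} BUSD -> slippage {slip:.6f} WBNB")
```

The printed slippage is always negative and grows in magnitude with the swapped quantity, matching the closed-form expression.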
Vedic mathematics/Why does it work?

Contents
2 Addition and Subtraction Techniques
2.1 Subtracting from a power of 10
3 Multiplication Techniques
3.1 Multiplying by 11
3.3 Multiplying numbers close to a power of 10
4 Division techniques

== Introduction ==

It should be understood that there is no magic in the many techniques described in the other chapters; indeed, there is no magic in mathematics in general. It can be argued that mathematics is the purest of all sciences, as there is no opinion and mathematics needs no experiments or interpretation of results; things are either true (i.e. they are proven to be true), or they are not. That being the case, there must be sound reasons why all the previously described techniques work.

The reason some of the techniques work is simply that they perform a well understood algorithm (e.g. long multiplication) in a more efficient way (often due to particular problem properties, e.g. the technique of multiplying any number by 11), even if it is difficult to see this at first. Other techniques work by making use of less widely understood mathematical laws (e.g. algebra, quadratic equations, modular or 'clock' arithmetic, etc.). In either case, it is not necessary to know why a technique works to be able to use it (much like you don't need to know how a car works to be able to drive one). It is for this reason, as well as to make the previous chapters more immediately usable, that the description of why each technique works has been omitted.

However, for those that are curious and want to investigate further, this chapter describes why many of the Vedic mathematics techniques work. Remember that some of the descriptions below will require knowledge of areas of mathematics that you may not be familiar with. Hopefully this will give you the impetus to investigate these areas and expand your mathematical knowledge (this is a very rewarding way to discover new aspects of a subject). However, even if this is not the case, you can (and should) still use the techniques and be happy in the knowledge that even if you don't know how the techniques work, they will still improve your numerical and arithmetic skills. Think of this section as an appendix, useful for further study, but not essential to the understanding of the main theme of the book.

== Addition and Subtraction Techniques ==

=== Subtracting from a power of 10 ===

== Multiplication Techniques ==

=== Multiplying by 11 ===

When multiplying by 11 using long multiplication, a pattern to the working out can be discovered, e.g.

    46       876       4386        432672
   11x       11x        11x           11x
    --       ---       ----       -------
  460+     8760+     43860+      4326720+
   ---      ----      -----      --------
   506      9636      48246       4759392

You can see that in the addition section of each long multiplication above, each column apart from the first and last is the sum of the original digit in the column and the next one (to the right). Once you know this you can just write down the result of multiplying any number by 11:

1. Write the rightmost digit down.
2. Add each pair of digits and write the result down, right to left (carrying digits where necessary).
3. Finally, write down the leftmost digit.
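The digit-pair procedure above is easy to mechanise. A short Python sketch (a hypothetical helper, not from the book) that multiplies any number by 11 using only digit additions and carries:

```python
def times_eleven(n: int) -> int:
    """Multiply n by 11 with the digit-pair trick: working right to left,
    each result digit is the sum of a pair of adjacent digits of n (plus
    any carry); the outermost digits pair with an implicit zero."""
    digits = [int(d) for d in str(n)]
    padded = [0] + digits + [0]  # implicit zeros at both ends
    out, carry = [], 0
    for i in range(len(padded) - 1, 0, -1):
        s = padded[i] + padded[i - 1] + carry
        out.append(s % 10)
        carry = s // 10
    if carry:
        out.append(carry)
    return int("".join(str(d) for d in reversed(out)))

print(times_eleven(712))   # 7832
print(times_eleven(8738))  # 96118
```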
Multiply 712x11 {\displaystyle {\begin{matrix}&7&&1&&2\\&\swarrow \searrow &+&\swarrow \searrow &+&\swarrow \searrow \\7&&8&&3&&2\end{matrix}}} The reason for working from right to left instead of the more usual left to right is so any carries can be added in as you go along. e.g. Multiply 8738x11 {\displaystyle {\begin{matrix}&8&&7&&3&&8\\&\swarrow \searrow &+&\swarrow \searrow &+&\swarrow \searrow &+&\swarrow \searrow \\9&\leftarrow _{1}&6&\leftarrow _{1}&1&\leftarrow _{1}&1&&8\end{matrix}}} Multiplying numbers close to a power of 10Rediger In the techniques section it is shown that the Vertically and Crosswise sutra can be used to easily multiply numbers that are close to 100. It is then shown that the same technique can be used to multiply any numbers near a power of 10, and that in fact the general technique will work for any numbers near any base, the key factor being that the technique is useful if the initial subtractions result in numbers that are easier to multiply. To understand why this technique works, you need a basic understanding of algebra, and quadratic equations. Consider two numbers A and B that are to be multiplied together and a third number X that is close to both numbers (we will call X the 'base' ). We assume that the numbers A and B are difficult to multiply and so we are looking for an easier alternative that only involves addition, subtraction and the multiplication of easier (e.g. smaller or simpler) numbers. The key is to realise that since X is close to both numbers we can generate smaller numbers (that are hopefully easier to work with) related to A and B by subtracting each from X (we will call these smaller numbers a and b). i.e. {\displaystyle {\begin{aligned}&a=X-A\\&b=X-B\\Thus:\\&A=X-a\quad (1)\\&B=X-b\quad (2)\end{aligned}}} We can multiply A and B by substituting for them using equations (1) and (2) above, i.e. 
{\displaystyle {\begin{aligned}AB&=(X-a)(X-b)\\&=X^{2}-aX-bX+ab\\&=X(X-a-b)+ab\end{aligned}}} Now we have something we can work with! You can see from the equation above that we can replace the multiplication of A and B with some subtractions of small numbers (X-a-b) a multiplication of the result of this subtraction by the 'base' number X and then the addition of a small multiplication ab. (Remember a and b are small because X is close to A and B and a=X-A, b=X-B). The only multiplication that might be difficult is the multiplication of X by the result of the initial subtraction (X-a-b), however if we choose X carefully (e.g. by making X a power of 10) we can make sure that this multiplication is simple too. With this knowledge we can now make sense of the Vertically and Crosswise multiplication technique. i.e. {\displaystyle {\begin{matrix}A&\longrightarrow &(X-A)\\\\B&\longrightarrow &(X-B)\\\hline \quad \end{matrix}}\quad \Rightarrow \quad {\begin{matrix}A&\longrightarrow &a\\\\B&\longrightarrow &b\\\hline \quad \end{matrix}}\quad \Rightarrow \quad {\begin{matrix}A&&a\\&&\downarrow \\B&&b\\\hline &&ab\end{matrix}}\quad \Rightarrow \quad {\begin{matrix}\quad \quad A&&a\\&\nwarrow &\\\quad \quad B&&b\\\hline (A-b)&&ab\end{matrix}}\quad \Rightarrow \quad {\begin{matrix}\quad \quad A&&a\\&&\\\quad \quad B&&b\\\hline (X-a-b)&&ab\end{matrix}}} Perhaps the cleverest bit is that if the base number X is an appropriate power of 10, both the multiplication of (X-a-b) by X and the subsequent addition of ab is handled automatically by the positional shift of the digits caused by appending the ab digits to the end of the (X-a-b) digits. The only potential problem left is if the product ab is equal to or larger than the base X. In this case the positional shift of the (X-a-b) digits will be one too many, so instead the leading digit of the product ab must be 'carried' and then added to the (X-a-b) value. 
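The identity AB = X(X-a-b) + ab can be turned into a small Python sketch of the base-multiplication technique (the helper below is illustrative, not from the book). With X a power of 10, appending the digits of ab to X-a-b performs the final multiplication and addition automatically, which is what the written technique exploits:

```python
def base_multiply(A: int, B: int, X: int) -> int:
    """Multiply A and B via the identity AB = X*(X - a - b) + a*b,
    where a = X - A, b = X - B, and X is a nearby base such as a
    power of 10."""
    a, b = X - A, X - B
    left = X - a - b   # equivalently A - b, or B - a
    right = a * b      # small product; its leading digit may 'carry'
    return X * left + right

# 97 x 96 with base X = 100: a = 3, b = 4, left = 93, right = 12 -> 9312
print(base_multiply(97, 96, 100))     # 9312
# 998 x 988 with base X = 1000: a = 2, b = 12, left = 986, right = 24
print(base_multiply(998, 988, 1000))  # 986024
```

Note that the identity also holds when A or B exceeds the base (a or b is then negative), e.g. 104 x 103 with X = 100 gives left = 107, right = 12.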
== Division techniques ==
Thin beds, tuning, and AVO

This tutorial originally appeared as a featured article in The Leading Edge in December 2014 — see issue.

In this tutorial, we will explore two topics that are particularly relevant to quantitative seismic interpretation — thin-bed tuning and AVO analysis. Specifically, we will examine the impact of thin beds on prestack seismic amplitudes and subsequent effects on AVO attribute values. The code used to generate results and figures presented in this tutorial can be found in two Python scripts at http://github.com/seg. Each script is self-contained and allows the user to investigate the impact of layer and wavelet properties on poststack and prestack seismic amplitudes.

Tuning refers to the modulation of seismic amplitudes because of constructive and destructive interference from overlapping seismic reflections. This phenomenon commonly occurs when a downgoing wave is reflected from multiple closely spaced interfaces. If the resultant upgoing reflections overlap, the reflected seismic energy will interfere and alter the amplitude response of the true geology. Let's examine this phenomenon using a zero-offset synthetic wedge model created using the script tuning_wedge.py (Figure 1). This model is generated using a 30-Hz Ricker wavelet and varying the thickness of layer 2. For thicknesses greater than 40 m, we see that the amplitude response of the wedge is a constant value. This indicates that there are discrete reflections from the top and base of the wedge with no interference.

Figure 1. (a) A three-layer wedge model. (b) Zero-offset synthetic seismogram displayed in normal polarity. (c) Amplitude of the synthetic extracted along the top of layer 2.

Below a thickness of 40 m, the effects of constructively interfering wavelet side lobes become apparent (i.e., amplitude increase resulting from tuning).
Below a thickness of approximately 17 m, we start to see destructive interference from overlap of the central wavelet lobes. Interpreting the geologic meaning of these tuned seismic amplitudes is clearly more complex than the case of nonoverlapping seismic reflections. The wedge model is a standard tool in the interpreter's arsenal. It is used routinely to gain insight into the geologic meaning of seismic amplitudes below the tuning thickness of a particular reservoir.

The same tuning phenomenon that impacts zero-offset seismic data also affects prestack seismic amplitudes and prestack analysis techniques such as AVO. Let's reconsider our initial wedge model. Instead of examining only the zero-offset case, we now investigate a synthetic angle gather to assess the impact of thin-bed tuning on angle-dependent reflectivity. Figure 2 is created using the script tuning_prestack.py. This figure shows a synthetic angle gather and associated amplitude-versus-angle-of-incidence curves corresponding to the 17-m-thick trace from our wedge model. Notice in this figure that there are two amplitude curves for the upper-interface reflectivity, one corresponding to the convolved amplitude and the other corresponding to the exact Zoeppritz P-to-P reflectivity. Explicitly, one is what we expect to record in the field (i.e., convolved amplitudes), and the other is what we theoretically anticipate for a given V_P, V_S, and density model (i.e., Zoeppritz reflectivities).

Figure 2. (a) Input properties for synthetic model. (b) Synthetic angle gather for the three-layer model, displayed in normal polarity. (c) Amplitude extracted along the upper interface. (d) Amplitude extracted along the lower interface.

Quite clearly, there are differences in the reflectivities computed using the Zoeppritz equations and the convolved synthetic. As previously discussed for the zero-offset case, a model 17 m thick will result in constructive interference along the upper interface.
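The attribute extraction discussed next fits Shuey's two-term equation, R(θ) = R0 + G sin²θ, to amplitude-versus-angle values. A minimal least-squares version of that fit can be sketched in pure Python; the sample amplitudes below are synthetic stand-ins, not the tutorial's actual data:

```python
import math

def fit_shuey(angles_deg, amplitudes):
    """Least-squares fit of R(theta) = R0 + G*sin^2(theta).
    The model is linear in (R0, G), so the 2x2 normal equations
    solve it directly."""
    s2 = [math.sin(math.radians(a)) ** 2 for a in angles_deg]
    n = len(s2)
    sx, sxx = sum(s2), sum(v * v for v in s2)
    sy = sum(amplitudes)
    sxy = sum(v * r for v, r in zip(s2, amplitudes))
    det = n * sxx - sx * sx
    r0 = (sy * sxx - sx * sxy) / det
    g = (n * sxy - sx * sy) / det
    return r0, g

# Synthetic check: generate amplitudes from known attributes, then recover them.
angles = list(range(0, 41, 5))
r0_true, g_true = 0.0317, -0.0567
amps = [r0_true + g_true * math.sin(math.radians(a)) ** 2 for a in angles]
r0, g = fit_shuey(angles, amps)
print(round(r0, 4), round(g, 4))  # 0.0317 -0.0567
```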
As expected, the convolved amplitudes are larger than the exact Zoeppritz reflectivities, but only for angles of incidence less than 27°. For angles of incidence larger than 27°, the convolved amplitudes become smaller than the exact Zoeppritz reflectivities (i.e., destructive interference). This indicates that tuning resulting from thin beds is also dependent on incidence angle.

Let us now consider the impact of thin-bed tuning on the AVO attributes, normal-incidence reflectivity (R_0) and gradient (G), calculated for the top of our wedge. We calculate the R_0 and G attributes by fitting Shuey's equation, R(\theta) = R_0 + G \sin^2\theta, to the amplitude values for the upper interface. Table 1 summarizes those attribute values.

Table 1. AVO inversion of convolved and exact Zoeppritz reflectivities from the wedge-model upper interface produces significantly different AVO attribute values.

Reflectivity curve    R_0        G
Zoeppritz             0.03168    −0.05671
Convolved             0.03797    −0.08555

For our 17-m-thick wedge, there is a significant difference between the R_0 and G values computed from the convolved synthetic and exact Zoeppritz amplitudes. Because AVO is an amplitude-based analysis technique, tuning caused by thin beds will manifest similar errors when we invert for other AVO attributes.

In summary, thin-bed tuning affects poststack and prestack seismic amplitudes. Simple synthetic-modeling tools such as those presented in this tutorial allow you to gauge the impact of thin-bed tuning on seismic-amplitude interpretation and analysis techniques.

References

Aki, K., and P. G. Richards, 2002, Quantitative seismology, 2nd ed.: University Science Books.
Chung, H.-M., and D. C. Lawton, 1999, A quantitative study of the effects of tuning on AVO effects for thin beds: Canadian Journal of Exploration Geophysics, 35, nos. 1–2, 36–42.
Mavko, G., T. Mukerji, and J.
Dvorkin, 2009, The rock physics handbook: Tools for seismic analysis of porous media, 2nd ed.: Cambridge University Press.
Shuey, R. T., 1985, A simplification of the Zoeppritz equations: Geophysics, 50, no. 4, 609–614, http://dx.doi.org/10.1190/1.1441936.
Widess, M., 1973, How thin is a thin bed?: Geophysics, 38, no. 6, 1176–1180, http://dx.doi.org/10.1190/1.1440403.

Corresponding author: Wes Hamlyn, whamlyn ikonscience.com
Generating the structure of the universe

When creating a universe, you have to begin somewhere. A good starting point is to define its shape. The most convenient and probably the most obvious shape is a cube. Each of the three coordinates is then a number from the same range and it is very easy to make it so that there are no boundaries - it is enough to add a condition that leaving the cube on one side is equivalent to entering it on the other side. It is similar to the well-known Snake game, where the snake leaving the screen on the right returned from the left - just in three dimensions. So we have a cubic space made of points described by three numbers: x, y, z \in (a,b).

A question arises: what data type will be best for representing the coordinates of a point? The answer requires us to check the size of the numbers involved first. We would like a universe of a realistic size - which gives the side of the cube on the order of 10^{11} light years. A year is about 30 000 000 (3 \times 10^7) seconds, a light second is about 300 000 000 (3 \times 10^8) meters, which gives the size on the order of 10^{27} meters. It would be nice if the elementary parts of the universe were smaller than a meter (let's say, a millimeter) - which means that we have to handle numbers with 30 significant digits. That's a lot.

For such large numbers floating-point data types are often used, but they will not be right in this case. Because such numbers store a constant number of significant digits and the position of the decimal point, their accuracy depends on their size. For example, a 64-bit floating-point number can store 15 significant digits - which means that for numbers on the order of a billion (10^9) they have an accuracy of about one millionth (10^{-6}), but for numbers around 10^{15} the result is only accurate up to a unit. It is clear that some library for multiple precision numbers will be needed.
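The precision cliff described above is easy to demonstrate (a Python illustration; the post's actual implementation uses GMP integers in C):

```python
import math

# 64-bit floats carry ~15-16 significant digits, so absolute accuracy
# degrades as magnitudes grow: near 1e9 adjacent floats are ~1e-7 apart,
# but near 1e15 the spacing between representable values is 0.125.
print(math.ulp(1e9))   # spacing between adjacent floats near 1e9
print(math.ulp(1e15))  # spacing near 1e15

x = 1e15
assert x + 0.1 != x          # the nudge is not silently dropped...
assert (x + 0.1) - x != 0.1  # ...but it cannot be stored exactly either
print((x + 0.1) - x)         # 0.125
```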
I decided to use GMP and describe the coordinates with integers (which, with an appropriately placed decimal point, can be interpreted as fixed-point real numbers). Why not floating-point? Even with precision appropriate for the largest numbers, the accuracy would vary across space. I prefer to have constant accuracy and not risk discrepancies between different places, even if it means having lower accuracy near 0. Of course in this case the coordinates won't mean the number of meters from the origin, but rather the number of some arbitrary units. I defined the meter to be 65536 units, which allows me to treat the last 16 bits of a number as digits after the decimal point.

Generating the galaxies

Having defined the space, we can start to consider the galaxies. The module generating the galaxies will have to be able to answer two main questions:

1. What are the positions of galaxies in a given fragment of space?
2. Are the given coordinates of a galaxy correct?

The second question will be mostly for validating some external data, but it will also be useful. The solution to the first question looks simple enough - it should suffice to take the given range of coordinates, generate some pseudorandom numbers from this range, starting from some predetermined seed, and that's it. But what if we take two intersecting blocks of space? How do we make sure that we will get the same galaxies in their common part? What will help is octrees.
The algorithm will look like this:
1. Take the cube corresponding to the whole space and generate the number of galaxies contained in it (how is a separate question - it might, for example, be a pseudorandom number generated from the seed for the universe).
2. Generate a seed for the current cube (for example, by hashing its coordinates).
3. Divide the cube into 8 parts (dividing each side in two).
4. Assign to each part a probability proportional to some galaxy density function (for a uniform distribution - just 1/8).
5. For each smaller cube, generate the number of galaxies in it:
   - Calculate the success probability p by dividing the probability for this cube by the sum of probabilities for it and the remaining cubes.
   - Generate a number from the binomial distribution with parameters N = the number of galaxies not yet assigned and p = the probability from the previous step.
   - Subtract the resulting number of galaxies from the total and repeat for the other cubes.
6. If any of the smaller cubes has sides of length 1 and contains only 1 galaxy, return it as the position of a galaxy.
7. Repeat from step 2 for the cubes having a nonzero number of galaxies and a nonempty intersection with the given fragment of space.
Since in this algorithm we always divide by 2, it will be convenient to have a power of 2 as the length of the side of the universe. The size of the universe will thus be described by the number n , meaning a side of 2^{n-16} meters (reminder: 1 meter is 65536 = 2^{16} units). Because we always start from the whole universe and progress to smaller and smaller parts, we will always get the same galaxies in the same blocks, no matter what the given fragment was. The only important thing is that the content of a given block be determined only by its own properties - but this can easily be achieved by using a value connected to it for a seed (for example, a hash of the coordinates). 
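The subdivision steps above can be sketched as follows. This is a minimal, self-contained illustration, not the post's actual code: it assumes a uniform density (each octant weighted 1/8), uses `u64` coordinates instead of GMP integers, a splitmix64-style hash as the per-cube seed, a naive Bernoulli-trial binomial sampler, and it always recurses over the whole cube (pruning to a query fragment is omitted).

```rust
// Illustrative octree galaxy generator: hash-seeded, deterministic,
// uniform density. All names and the RNG are assumptions for the sketch.

/// Derive a cube-local seed from its coordinates (splitmix64-style mix).
fn hash_seed(x: u64, y: u64, z: u64, side: u64, universe_seed: u64) -> u64 {
    let mut h = universe_seed
        ^ x.wrapping_mul(0x9E3779B97F4A7C15)
        ^ y.wrapping_mul(0xBF58476D1CE4E5B9)
        ^ z.wrapping_mul(0x94D049BB133111EB)
        ^ side;
    h ^= h >> 30; h = h.wrapping_mul(0xBF58476D1CE4E5B9);
    h ^= h >> 27; h = h.wrapping_mul(0x94D049BB133111EB);
    h ^ (h >> 31)
}

/// Draw from Binomial(n, num/den) by n Bernoulli trials (fine for small n).
fn binomial(rng: &mut u64, n: u64, num: u64, den: u64) -> u64 {
    let mut k = 0;
    for _ in 0..n {
        *rng = rng.wrapping_mul(6364136223846793005).wrapping_add(1);
        if (*rng >> 33) % den < num { k += 1; }
    }
    k
}

/// Recursively distribute `count` galaxies over the cube at (x, y, z).
fn galaxies(x: u64, y: u64, z: u64, side: u64, count: u64, seed: u64,
            out: &mut Vec<(u64, u64, u64)>) {
    if count == 0 { return; }
    if side == 1 {
        // Unit cube: emit its coordinates (duplicates possible if count > 1).
        for _ in 0..count { out.push((x, y, z)); }
        return;
    }
    let mut rng = hash_seed(x, y, z, side, seed);
    let half = side / 2;
    let mut remaining = count;
    let mut cubes_left = 8u64;
    for dx in 0..2u64 { for dy in 0..2u64 { for dz in 0..2u64 {
        // Success probability = this cube's weight over the weight of all
        // cubes not yet processed; uniform density => 1 / cubes_left.
        let n = if cubes_left == 1 { remaining }
                else { binomial(&mut rng, remaining, 1, cubes_left) };
        remaining -= n;
        cubes_left -= 1;
        galaxies(x + dx * half, y + dy * half, z + dz * half,
                 half, n, seed, out);
    }}}
}

fn main() {
    let mut out = Vec::new();
    galaxies(0, 0, 0, 8, 5, 42, &mut out);
    // Same universe seed => same galaxy positions, every time.
    println!("{:?}", out);
    assert_eq!(out.len(), 5);
}
```

Because every cube's content depends only on its own hashed seed, two overlapping queries would regenerate identical galaxies in the shared region, which is exactly the property the post asks for.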
The algorithm may seem slow, because it always requires dividing the universe in 2 all the way down to the lowest level, but even for realistic universes this means about 100 divisions until the blocks have a size of 1 unit. The only problem might appear when the given fragment is too big and the number of galaxies in it is very large, but this can be avoided by not generating galaxies in large blocks of space. It is worth noting that the same algorithm can be used for other objects, like stars - we will only need a proper density function. That is a thing for the future, though. For now the algorithm described above is implemented in C. I've discovered an interesting new language, though, namely Rust. Since it doesn't allow memory leaks and generates code with performance similar to C/C++, I decided to rewrite the algorithm in it. I'm currently working on a library of Rust bindings to MPFR (it's needed for the binomial distribution). The next post will appear when the galaxy generation in Rust is ready :)
hyad.es | Copyrighted Stuff I claim ownership of everything original published on this site, modulo attributions and other qualifications. Please cite academic publications with reference to journals, as appropriate. I wrote this CMS myself because I was sick of Wordpress (for not letting me do interesting things), LiveJournal (for being characteristically obsolescent) and Drupal (for being incredibly buggy). This site makes use of the following: Markdown (specifically, a modified PHPMEM) for formatting, MathJax for \LaTeX typesetting, Prettify for dynamic syntax highlighting, and Disqus for comments. I release all content in my blog, as well as any code hosted here (unless stated otherwise), under the Creative Commons Attribution Non-Commercial Share-Alike 3.0 License, with the additional stipulation that a link to the original page be furnished when attributions are made.
Advection Knowpia In the field of physics, engineering, and earth sciences, advection is the transport of a substance or quantity by bulk motion of a fluid. The properties of that substance are carried with it. Generally the majority of the advected substance is a fluid. The properties that are carried with the advected substance are conserved properties such as energy. An example of advection is the transport of pollutants or silt in a river by bulk water flow downstream. Another commonly advected quantity is energy or enthalpy. Here the fluid may be any material that contains thermal energy, such as water or air. In general, any substance or conserved, extensive quantity can be advected by a fluid that can hold or contain the quantity or substance. During advection, a fluid transports some conserved quantity or material via bulk motion. The fluid's motion is described mathematically as a vector field, and the transported material is described by a scalar field showing its distribution over space. Advection requires currents in the fluid, and so cannot happen in rigid solids. It does not include transport of substances by molecular diffusion. Advection is sometimes confused with the more encompassing process of convection, which is the combination of advective transport and diffusive transport. In meteorology and physical oceanography, advection often refers to the transport of some property of the atmosphere or ocean, such as heat, humidity (see moisture) or salinity. Advection is important for the formation of orographic clouds and the precipitation of water from clouds, as part of the hydrological cycle. Distinction between advection and convection The term advection often serves as a synonym for convection, and this correspondence of terms is used in the literature. 
More technically, convection applies to the movement of a fluid (often due to density gradients created by thermal gradients), whereas advection is the movement of some material by the velocity of the fluid. Thus, although it might seem confusing, it is technically correct to think of momentum being advected by the velocity field in the Navier-Stokes equations, although the resulting motion would be considered to be convection. Because of the specific use of the term convection to indicate transport in association with thermal gradients, it is probably safer to use the term advection if one is uncertain about which terminology best describes their particular system. In meteorology and physical oceanography, advection often refers to the horizontal transport of some property of the atmosphere or ocean, such as heat, humidity or salinity, and convection generally refers to vertical transport (vertical advection). Advection is important for the formation of orographic clouds (terrain-forced convection) and the precipitation of water from clouds, as part of the hydrological cycle. Other quantities The advection equation also applies if the quantity being advected is represented by a probability density function at each point, although accounting for diffusion is more difficult.[citation needed] Mathematics of advection The advection equation is the partial differential equation that governs the motion of a conserved scalar field as it is advected by a known velocity vector field. It is derived using the scalar field's conservation law, together with Gauss's theorem, and taking the infinitesimal limit. One easily visualized example of advection is the transport of ink dumped into a river. As the river flows, ink will move downstream in a "pulse" via advection, as the water's movement itself transports the ink. 
If added to a lake without significant bulk water flow, the ink would simply disperse outwards from its source in a diffusive manner, which is not advection. Note that as it moves downstream, the "pulse" of ink will also spread via diffusion. The sum of these processes is called convection. The advection equation In Cartesian coordinates the advection operator is {\displaystyle \mathbf {u} \cdot \nabla =u_{x}{\frac {\partial }{\partial x}}+u_{y}{\frac {\partial }{\partial y}}+u_{z}{\frac {\partial }{\partial z}}.} where {\displaystyle \mathbf {u} =(u_{x},u_{y},u_{z})} is the velocity field and {\displaystyle \nabla } is the del operator (note that Cartesian coordinates are used here). The advection equation for a conserved quantity described by a scalar field {\displaystyle \psi } is expressed mathematically by a continuity equation: {\displaystyle {\frac {\partial \psi }{\partial t}}+\nabla \cdot \left(\psi {\mathbf {u} }\right)=0} where {\displaystyle \nabla \cdot } is the divergence operator and again {\displaystyle \mathbf {u} } is the velocity vector field. Frequently, it is assumed that the flow is incompressible, that is, the velocity field satisfies {\displaystyle \nabla \cdot {\mathbf {u} }=0.} in which case {\displaystyle \mathbf {u} } is said to be solenoidal. If this is so, the above equation can be rewritten as {\displaystyle {\frac {\partial \psi }{\partial t}}+{\mathbf {u} }\cdot \nabla \psi =0} In particular, if the flow is steady, then {\displaystyle {\mathbf {u} }\cdot \nabla \psi =0} which means that {\displaystyle \psi } is constant along a streamline. 
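The step from the conservation (flux) form to the advective form deserves spelling out; it is just the product rule applied to the flux term:

```latex
% Product rule on the flux term:
\nabla \cdot (\psi \mathbf{u})
  = \mathbf{u} \cdot \nabla \psi + \psi \, (\nabla \cdot \mathbf{u})
% Substituting into the continuity equation:
\frac{\partial \psi}{\partial t}
  + \mathbf{u} \cdot \nabla \psi
  + \psi \, (\nabla \cdot \mathbf{u}) = 0
% Incompressibility, \nabla \cdot \mathbf{u} = 0, removes the last term:
\frac{\partial \psi}{\partial t} + \mathbf{u} \cdot \nabla \psi = 0
```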
If a vector quantity {\displaystyle \mathbf {a} } (such as a magnetic field) is being advected by the solenoidal velocity field {\displaystyle \mathbf {u} } , the advection equation above becomes: {\displaystyle {\frac {\partial {\mathbf {a} }}{\partial t}}+\left({\mathbf {u} }\cdot \nabla \right){\mathbf {a} }=0.} Here {\displaystyle \mathbf {a} } is a vector field instead of the scalar field {\displaystyle \psi } . Solving the equation (Figure: a simulation of the advection equation where u = (sin t, cos t) is solenoidal.) The advection equation is not simple to solve numerically: the system is a hyperbolic partial differential equation, and interest typically centers on discontinuous "shock" solutions (which are notoriously difficult for numerical schemes to handle). Even with one space dimension and a constant velocity field, the system remains difficult to simulate. In that case the equation becomes {\displaystyle {\frac {\partial \psi }{\partial t}}+u_{x}{\frac {\partial \psi }{\partial x}}=0} where {\displaystyle \psi =\psi (x,t)} is the scalar field being advected and {\displaystyle u_{x}} is the {\displaystyle x} component of the vector {\displaystyle \mathbf {u} =(u_{x},0,0)} . Treatment of the advection operator in the incompressible Navier–Stokes equations According to Zang,[1] numerical simulation can be aided by considering the skew-symmetric form of the advection operator, {\displaystyle {\frac {1}{2}}{\mathbf {u} }\cdot \nabla {\mathbf {u} }+{\frac {1}{2}}\nabla ({\mathbf {u} }{\mathbf {u} })} where {\displaystyle \nabla ({\mathbf {u} }{\mathbf {u} })=[\nabla ({\mathbf {u} }u_{x}),\nabla ({\mathbf {u} }u_{y}),\nabla ({\mathbf {u} }u_{z})]} and {\displaystyle \mathbf {u} } is the flow velocity. Since skew symmetry implies only imaginary eigenvalues, this form reduces the "blow up" and "spectral blocking" often experienced in numerical solutions with sharp discontinuities (see Boyd[2]). 
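For the one-dimensional constant-velocity case above, the classic entry point is a first-order upwind discretization. The sketch below (in Rust, not from the article) advects a unit "pulse" on a periodic grid; the function name and grid setup are assumptions for the example. First-order upwind is stable for CFL = u·dt/dx ≤ 1, and at CFL exactly 1 it transports the profile exactly, one cell per step.

```rust
// First-order upwind scheme for psi_t + u * psi_x = 0, constant u > 0,
// on a periodic grid. At CFL < 1 the scheme smears sharp profiles
// (numerical diffusion); at CFL = 1 it reduces to psi_new[i] = psi[i-1].

fn upwind_step(psi: &[f64], cfl: f64) -> Vec<f64> {
    let n = psi.len();
    (0..n).map(|i| {
        let left = psi[(i + n - 1) % n]; // periodic left neighbor
        psi[i] - cfl * (psi[i] - left)
    }).collect()
}

fn main() {
    let mut psi = vec![0.0; 8];
    psi[2] = 1.0; // a unit "pulse" of ink at cell 2
    for _ in 0..3 {
        psi = upwind_step(&psi, 1.0);
    }
    // With CFL = 1 the pulse has translated exactly 3 cells, to index 5.
    println!("{:?}", psi);
    assert!((psi[5] - 1.0).abs() < 1e-12);
}
```

Note that the scheme conserves the total of psi on a periodic grid for any CFL, which mirrors the conservation-law origin of the equation.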
Using vector calculus identities, these operators can also be expressed in other ways, available in more software packages for more coordinate systems. {\displaystyle \mathbf {u} \cdot \nabla \mathbf {u} =\nabla \left({\frac {\|\mathbf {u} \|^{2}}{2}}\right)+\left(\nabla \times \mathbf {u} \right)\times \mathbf {u} } {\displaystyle {\frac {1}{2}}\mathbf {u} \cdot \nabla \mathbf {u} +{\frac {1}{2}}\nabla (\mathbf {u} \mathbf {u} )=\nabla \left({\frac {\|\mathbf {u} \|^{2}}{2}}\right)+\left(\nabla \times \mathbf {u} \right)\times \mathbf {u} +{\frac {1}{2}}\mathbf {u} (\nabla \cdot \mathbf {u} )} This form also makes visible that the skew-symmetric operator introduces error when the velocity field diverges. Solving the advection equation by numerical methods is very challenging, and there is a large scientific literature about this. ^ Zang, Thomas (1991). "On the rotation and skew-symmetric forms for incompressible flow simulations". Applied Numerical Mathematics. 7: 27–40. Bibcode:1991ApNM....7...27Z. doi:10.1016/0168-9274(91)90102-6. ^ Boyd, John P. (2000). Chebyshev and Fourier Spectral Methods (2nd ed.). Dover. p. 213.
A better proof of the Goldman-Parker conjecture. Schwartz, Richard Evan (2005)
A combinatorial formula for Kazhdan-Lusztig polynomials. Brenti, Francesco (1994)
A Combinatorial Proof of the Existence of the Generic Hecke Algebra and R-Polynomials. Eriksson, Kimmo (1994)
A finiteness property and an automatic structure for Coxeter groups. Brink, Brigitte, Howlett, Robert B. (1993)
A Garside presentation for Artin-Tits groups of type {\stackrel{˜}{C}}_{n} . Digne, F. (2012) We prove that an Artin-Tits group of type \stackrel{˜}{C} is the group of fractions of a Garside monoid, analogous to the known dual monoids associated with Artin-Tits groups of spherical type and obtained by the "generated group" method. This answers, in this particular case, a general question on Artin-Tits groups, gives a new presentation of an Artin-Tits group of type \stackrel{˜}{C} , and has consequences for the word problem, the computation of some centralizers or the triviality of the center. A key point of the proof...
A limit relation for Dunkl-Bessel functions of type A and B. Rösler, Margit, Voit, Michael (2008)
Panyushev, Dmitri I. (2014) Let 𝔤 be a simple Lie algebra and {\mathrm{𝔄𝔟}}^{o} the poset of non-trivial abelian ideals of a fixed Borel subalgebra of 𝔤 . In [8], we constructed a partition {\mathrm{𝔄𝔟}}^{o}={\bigsqcup }_{\mu }{\mathrm{𝔄𝔟}}_{\mu } parameterised by the long positive roots of 𝔤 and studied the subposets {\mathrm{𝔄𝔟}}_{\mu } . In this note, we show that this partition is compatible with intersections, relate it to the Kostant-Peterson parameterisation and to the centralisers of abelian ideals. We also prove that the poset of positive roots of 𝔤 is a join-semilattice.
Affine permutations and inversion multigraphs. Papi, Paolo (1997)
Affine permutations of type A. Björner, Anders, Brenti, Francesco (1996)
Affine Weyl groups as infinite permutations. Eriksson, Henrik, Eriksson, Kimmo (1998)
An improved tableau criterion for Bruhat order. 
Automorphism groups of right-angled buildings: simplicity and local splittings Pierre-Emmanuel Caprace (2014) We show that the group of type-preserving automorphisms of any irreducible semiregular thick right-angled building is abstractly simple. When the building is locally finite, this gives a large family of compactly generated abstractly simple locally compact groups. Specialising to appropriate cases, we obtain examples of such simple groups that are locally indecomposable, but have locally normal subgroups decomposing non-trivially as direct products, all of whose factors are locally normal. Automorphisms and abstract commensurators of 2-dimensional Artin groups. Crisp, John (2005) Automorphisms of Coxeter groups of type {K}_{n} Ryan, Jeffrey A. (2007) Automorphisms of nearly finite Coxeter groups. Franzsen, W.N., Howlett, R.B. (2003) Automorphisms of right-angled Coxeter groups. Gutierrez, Mauricio, Kaul, Anton (2008)
How to Calculate a Batting Average: 7 Steps (with Pictures) 1 Calculating Batting Average 2 Calculating Other Offensive Stats Batting average has been one of baseball's "big three" statistics for decades, along with runs batted in (RBI) and home runs. Fans of the more recent "sabermetrics" approach to baseball statistics criticize batting average for its failure to account for walks.[1] Nevertheless, for the average fan, batting average is a convenient and popular method for comparing offensive skill. Find the player's hits. Hits (also called base hits) are simply the sum of singles, doubles, triples, and home runs.[2] This statistic is easy to find online for professional players. You can use the statistics for a season, a whole career, or any other period of time you're interested in. Just make sure all your statistics come from the same time frame. 
Find the player's at-bats. This is the number of times the player has made an attempt at a hit. This does not include walks, hits by pitch, or sacrifices, since these do not reflect the batter's offensive skill.[3] Divide the number of hits by the number of at-bats. The answer tells you the batting average, or the fraction of the time that a batter turned an at-bat into a successful hit. For example, if a player had 70 hits and 200 at-bats, his batting average is 70 ÷ 200 = 0.350. You can read a batting average of 0.350 as "this player would expect to get 350 hits in 1000 at-bats." Round to the third decimal place. 
Batting averages are almost always rounded this way. When a baseball fan mentions a batting average of "three hundred," she means 0.300. You can calculate batting averages to four or more decimal places, but this doesn't have much use beyond breaking ties. Calculating Other Offensive Stats Find the on-base percentage. A player's OBP tells you the fraction of the time that player makes it to a base, including walks and hits by pitch. Some fans consider this a better metric than batting average, since it takes into account all the ways of reaching base. To find OBP, calculate {\displaystyle {\frac {Hits+Walks+HitsByPitch}{PlateAppearances}}} This formula is good enough for most purposes, but it does count some uncommon plays that do not reflect on the batter's skill, such as sacrifice bunts and catcher's interference. If you need to be precise, replace plate appearances with "at-bats + walks + hits by pitch + sacrifice flies." 
Understand runs batted in. Runs batted in, or RBI, tells you how many runs the team scored as a result of this batter's at-bats. For example, if there are two teammates on base and Sarah hits a home run, Sarah gets 3 RBI (two teammates + herself). This gives you a straightforward account of exactly how many runs a batter has driven in. However, since it depends on other batters loading the bases, this statistic isn't a great way to compare players from different teams. Do not add to RBI if the at-bat led to a double play (two outs), or if a run only occurred due to an error.[5] Find the slugging percentage. Slugging percentage is similar to a batting average, but counts the number of bases earned instead of just the number of hits. This rewards more powerful hitters who hit more doubles, triples, and home runs. Slugging percentage equals {\displaystyle {\frac {Hits+Doubles+(2*Triples)+(3*HomeRuns)}{AtBats}}} You will get the same result if you count the bases in a more intuitive way: Singles + (2*Doubles) + (3*Triples) + (4*Home Runs). The formula above is usually easier to use since most baseball statistics websites do not list singles. Why are there no ties in baseball? It's simply a tradition since the early days of baseball. It wouldn't have to be that way; it just is. Does a sacrifice lower a hitter's batting average? No. 
It does not count as an official time at bat and has no effect on a batting average. If my son was batting .222 and last night went 3 for 4, what's his new average? Add 3 to your son's total hits, add 4 to your son's total at-bats, and divide the new totals to find the new batting average. It is not possible to calculate the change without knowing the total hits and at-bats. What about a batter reaching base on an error? If the batter reaches first base because of an error by a fielder, it counts as an at-bat but not a hit. It lowers the batter's batting average just as if he'd been thrown out. How many points do you get for a single hit? Hits do not directly score points (or "runs"). Points are scored only when a runner crosses home plate, which can occur even without a hit. If the player hits the ball and gets thrown out, does it count as a hit? Not if he's thrown out at first. If he makes it safely to first, he's credited with a hit even if he continues to run and is thrown out at second, third, or home. How do I score extra-base hits in batting averages? An extra-base hit has the same effect on a batting average as a single does. Each is considered one hit. If the batter gets to first on a fielder's choice, does that count as a hit? A fielder's choice counts as an at-bat but not a hit. This means a fielder's choice will lower the batter's batting average. Is it better to get a home run or a base hit to improve my batting average? It doesn't matter, since both are counted as a hit. Singles, doubles, triples, and home runs all count the same in figuring batting averages. I'm confused about what to include in a batting average. If a player had 3 AB with 1 hit and 1 hit-by-pitch, is the AB count reduced by one to remove the HBP from the calculation, or does it remain? The HBP (like a walk) is not counted as an official time at bat. 
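The three statistics discussed above (batting average, on-base percentage, and slugging percentage) can be sketched as small functions. The struct and field names below are illustrative, and the OBP function uses the article's simple plate-appearance denominator:

```rust
// Illustrative batting statistics; counts come straight from a box score.
struct BattingLine {
    hits: u32, walks: u32, hit_by_pitch: u32, plate_appearances: u32,
    doubles: u32, triples: u32, home_runs: u32, at_bats: u32,
}

/// Batting average = hits / at-bats, rounded to three decimal places.
fn batting_average(hits: u32, at_bats: u32) -> f64 {
    let avg = hits as f64 / at_bats as f64;
    (avg * 1000.0).round() / 1000.0
}

/// Simple OBP; for precision, replace plate_appearances with
/// at_bats + walks + hit_by_pitch + sacrifice_flies.
fn obp(b: &BattingLine) -> f64 {
    (b.hits + b.walks + b.hit_by_pitch) as f64 / b.plate_appearances as f64
}

/// Slugging = total bases / at-bats. Hits already counts each double,
/// triple, and home run once, so only the extra bases are added on top.
fn slugging(b: &BattingLine) -> f64 {
    (b.hits + b.doubles + 2 * b.triples + 3 * b.home_runs) as f64
        / b.at_bats as f64
}

fn main() {
    // Extends the article's example: 70 hits in 200 at-bats.
    let b = BattingLine {
        hits: 70, walks: 20, hit_by_pitch: 5, plate_appearances: 250,
        doubles: 15, triples: 2, home_runs: 10, at_bats: 200,
    };
    println!("BA  = {:.3}", batting_average(b.hits, b.at_bats)); // 0.350
    println!("OBP = {:.3}", obp(&b));      // (70+20+5)/250 = 0.380
    println!("SLG = {:.3}", slugging(&b)); // (70+15+4+30)/200 = 0.595
}
```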
If you're keeping track of a player's batting average throughout a season, keep a running tally of the hits and at-bats so you can easily recalculate after each game. In Major League Baseball, .300 in a season is the traditional cutoff for a great batting average. Due to increased pitcher skill among other factors, only about twenty players tend to reach this in a given year.[7] No one has hit .400 since the 1940s. To calculate a batting average for a professional player, look up their hits, or the sum of their singles, doubles, triples, and home runs, online. Be sure to use a specific time frame unless you're calculating a career batting average! Next, look up the player's at-bats, then divide the number of hits by the number of at-bats and round to the third decimal place to calculate their batting average! For tips on calculating on-base and slugging percentages, read on!
Extension:Math/MathJax testing - MediaWiki Contents: 1 Examples and tests 1.1 Quadratic polynomial 1.2 Quadratic polynomial (force PNG rendering) 1.4 Tall parentheses and fractions 1.5 Integrals 1.6 Summation 1.7 Differential equation 1.10 Integral equation 1.11 Example 1.12 Continuation and cases 1.13 Prefixed subscript 1.14 Fraction and small fraction 1.15 Area of a quadrilateral 1.16 Volume of a sphere-stand 1.17 Multiple equations 1.18 task T38059 1.19 Equation references 1.20 CJK 1.21 Textstyles When you click a MathJax formula, you get an instantly zoomed version of it for improved readability. MathJax formulas can also be copied and pasted, and scaled without loss of quality for printing, higher-resolution displays, or better readability. MathJax is a JavaScript display engine for mathematics. It's an alternative to PNG rendering for Wikimedia sites. MathJax is slower to render, but more scalable (infinite zoom) and manipulable than PNG. MathJax is currently enabled on this wiki (mediawiki.org), but you have to explicitly enable it in your user preferences (under Appearance -> Math). Otherwise you'll still see the old-school PNG images. Enable MathJax Put some formulas below or in your sandbox Report issues on the talk page or report a bug against the "Math" extension (known issues) Examples and tests Most of the following examples were copied from the English Wikipedia math help page. 
{\displaystyle ax^{2}+bx+c=0} Quadratic polynomial (force PNG rendering) {\displaystyle ax^{2}+bx+c=0\,\!} {\displaystyle x={-b\pm {\sqrt {b^{2}-4ac}} \over 2a}} {\displaystyle 2=\left({\frac {\left(3-x\right)\times 2}{3-x}}\right)} {\displaystyle S_{\text{new}}=S_{\text{old}}-{\frac {\left(5-T\right)^{2}}{2}}} {\displaystyle {\text{full month's benefits}}\times {\frac {({\text{number of days in month}}+1-{\text{date of application}})}{\text{number of days in month}}}={\text{allotment}}} {\displaystyle \int _{a}^{x}\!\!\!\int _{a}^{s}f(y)\,dy\,ds=\int _{a}^{x}f(y)(x-y)\,dy} {\displaystyle \sum _{i=0}^{n-1}i} {\displaystyle \sum _{m=1}^{\infty }\sum _{n=1}^{\infty }{\frac {m^{2}\,n}{3^{m}\left(m\,3^{n}+n\,3^{m}\right)}}} {\displaystyle u''+p(x)u'+q(x)u=f(x),\quad x>a} {\displaystyle |{\bar {z}}|=|z|,|({\bar {z}})^{n}|=|z|^{n},\arg(z^{n})=n\arg(z)} {\displaystyle \lim _{z\rightarrow z_{0}}f(z)=f(z_{0})} {\displaystyle \phi _{n}(\kappa )={\frac {1}{4\pi ^{2}\kappa ^{2}}}\int _{0}^{\infty }{\frac {\sin(\kappa R)}{\kappa R}}{\frac {\partial }{\partial R}}\left[R^{2}{\frac {\partial D_{n}(R)}{\partial R}}\right]\,dR} {\displaystyle \phi _{n}(\kappa )=0.033C_{n}^{2}\kappa ^{-11/3},\quad {\frac {1}{L_{0}}}\ll \kappa \ll {\frac {1}{l_{0}}}} {\displaystyle f(x)={\begin{cases}1&-1\leq x<0\\{\frac {1}{2}}&x=0\\1-x^{2}&{\text{otherwise}}\end{cases}}} {\displaystyle {}_{p}F_{q}(a_{1},\dots ,a_{p};c_{1},\dots ,c_{q};z)=\sum _{n=0}^{\infty }{\frac {(a_{1})_{n}\cdots (a_{p})_{n}}{(c_{1})_{n}\cdots (c_{q})_{n}}}{\frac {z^{n}}{n!}}} {\displaystyle {\frac {a}{b}}\ {\tfrac {a}{b}}} {\displaystyle S=dD\,\sin \alpha \!} {\displaystyle V={\tfrac {1}{6}}\pi h\left[3\left(r_{1}^{2}+r_{2}^{2}\right)+h^{2}\right]} Multiple equations {\displaystyle {\begin{aligned}u&={\tfrac {1}{\sqrt {2}}}(x+y)\qquad &x&={\tfrac {1}{\sqrt {2}}}(u+v)\\v&={\tfrac {1}{\sqrt {2}}}(x-y)\qquad &y&={\tfrac {1}{\sqrt {2}}}(u-v)\end{aligned}}} <math>{\begin{aligned}q_{1}&=\cos \left({\frac {\phi -\psi 
}{2}}\right)\sin \left({\frac {\theta }{2}}\right)\\q_{2}&=\sin \left({\frac {\phi -\psi }{2}}\right)\sin \left({\frac {\theta }{2}}\right)\\q_{3}&=\sin \left({\frac {\phi +\psi }{2}}\right)\cos \left({\frac {\theta }{2}}\right)\\q_{4}&=\cos \left({\frac {\phi +\psi }{2}}\right)\cos \left({\frac {\theta }{2}}\right)\end{aligned}}</math> task T38059 {\displaystyle 1<2\&3>4} {\displaystyle {\begin{aligned}1<2&3>4\end{aligned}}} Failed to parse (unknown function "\upgamma"): {\displaystyle {{\upgamma}_{\text{S}}}={{\uprho}_{\text{S}}}\cdot \text{g}\quad[\text{kN}/\text{m}^\text{3}} Equation references Failed to parse (unknown function "\begin{equation}"): {\displaystyle \begin{equation}\label{eq1} x = \sin(y) \end{equation}} The above has label Failed to parse (unknown function "\ref"): {\displaystyle \ref{eq1}} <math>\begin{equation}\label{eq1} x = \sin(y) The above has label <math>\ref{eq1}</math> Failed to parse (syntax error): {\displaystyle 中文} {\displaystyle {\text{中文}}} (<math>\text{中文}</math> displays nothing) Failed to parse (syntax error): {\displaystyle 中\text{文}} Textstyles {\displaystyle {\textbf {boldtext}}} {\displaystyle {\textit {itallictext}}} {\displaystyle {\textrm {romantext}}} {\displaystyle {\texttt {teletypetext}}} should render as Failed to parse (MathML with SVG or PNG fallback (recommended for modern browsers and accessibility tools): Invalid response ("Math extension cannot connect to Restbase.") from server "/mathoid/local/v1/":): {\displaystyle {\tt teletype\ text}} {\displaystyle {\textsf {sanseriftext}}} should render as Failed to parse (unknown function "\sf"): {\displaystyle {\sf sanserif\ text}} Failed to parse (Conversion error. Server ("https://wikimedia.org/api/rest_") reported: "Cannot get mml. 
TeX parse error: Undefined control sequence \emph"): {\displaystyle {\emph {emphasizedtext}}} Also in Latex 2e Failed to parse (unknown function "\textmd"): {\displaystyle \textmd{midsize text}} should render as Failed to parse (unknown function "\md"): {\displaystyle {\md midsize\ text}} Failed to parse (unknown function "\textup"): {\displaystyle \textup{upper case text}} should render as Failed to parse (unknown function "\up"): {\displaystyle {\up upper\ case\ text}} Failed to parse (unknown function "\textsl"): {\displaystyle \textsl{slanted text}} should render as Failed to parse (unknown function "\sl"): {\displaystyle {\sl slanted\ text}} Failed to parse (unknown function "\textsc"): {\displaystyle \textsc{small cap text}} should render as Failed to parse (unknown function "\sc"): {\displaystyle {\sc small\ cap\ text}} {\displaystyle {\textit {\textbf {Nestedboldanditallic}}}} {\displaystyle {\textbf {\textit {Nesteditallicandbold}}}} Failed to parse (unknown function "\upgamma"): {\displaystyle \textbf{\upgamma{Mathroman}}} Math font styles {\displaystyle \mathbf {bold\ text} } {\displaystyle {\mathit {itallic\ text}}} {\displaystyle \mathrm {roman\ text} } {\displaystyle {\mathtt {teletype\ text}}} {\displaystyle {\mathsf {sanserif\ text}}} Failed to parse (MathML with SVG or PNG fallback (recommended for modern browsers and accessibility tools): Invalid response ("Math extension cannot connect to Restbase.") from server "/mathoid/local/v1/":): {\displaystyle \mathnormal{normal\ text}} {\displaystyle {\mathcal {CAL\ LETTERS}}} {\displaystyle \mathrm {\gamma } } Failed to parse (unknown function "\tiny"): {\displaystyle \tiny tiny} Failed to parse (unknown function "\scriptsize"): {\displaystyle \scriptsize scriptsize} Failed to parse (unknown function "\footnotesize"): {\displaystyle \footnotesize footnotesize} Failed to parse (unknown function "\small"): {\displaystyle \small small} Failed to parse (unknown function "\normalsize"): {\displaystyle 
\normalsize normalsize} Failed to parse (unknown function "\large"): {\displaystyle \large large} Failed to parse (unknown function "\huge"): {\displaystyle \huge huge}