Demography

Science that deals with populations and their structures, statistically and theoretically.

[Figure: The demography of the world population from 1950 to 2100. Data source: United Nations, World Population Prospects 2017.]

Demography (from Ancient Greek δῆμος (dêmos) 'people, society', and -γραφία (-graphía) 'writing, drawing, description')[1] is the statistical study of populations, especially of human beings. Demographic analysis can cover whole societies or groups defined by criteria such as education, nationality, religion, and ethnicity. Educational institutions[2] usually treat demography as a field of sociology, though there are a number of independent demography departments.[3]

Patient demographics form the core of the data for any medical institution, covering items such as patient and emergency contact information and patient medical record data. They allow a patient to be identified and assigned to categories for the purpose of statistical analysis.
Patient demographics include: date of birth, gender, date of death, postal code, ethnicity, blood type, emergency contact information, family doctor, insurance provider data, allergies, major diagnoses, and major medical history.[4]

Formal demography limits its object of study to the measurement of population processes, while the broader field of social demography or population studies also analyses the relationships between economic, social, cultural, and biological processes influencing a population.[5]

Demographic thought can be traced back to antiquity and was present in many civilisations and cultures, including Ancient Greece, Ancient Rome, China, and India.[6] Made up of the prefix demo- and the suffix -graphy, the term demography refers to the overall study of population. In ancient Greece, such thought can be found in the writings of Herodotus, Thucydides, Hippocrates, Epicurus, Protagoras, Polus, Plato, and Aristotle.[6] In Rome, writers and philosophers like Cicero, Seneca, Pliny the Elder, Marcus Aurelius, Epictetus, Cato, and Columella also expressed important ideas on this ground.[6] In the Middle Ages, Christian thinkers devoted much time to refuting the Classical ideas on demography. Important contributors to the field were William of Conches,[7] Bartholomew of Lucca,[7] William of Auvergne,[7] William of Pagula,[7] and Muslim sociologists like Ibn Khaldun.[8]

One of the earliest demographic studies in the modern period was Natural and Political Observations Made upon the Bills of Mortality (1662) by John Graunt, which contains a primitive form of life table. Among the study's findings was that one-third of the children in London died before their sixteenth birthday. Mathematicians, such as Edmond Halley, developed the life table as the basis for life insurance mathematics.
Richard Price was credited with the first textbook on life contingencies, published in 1771,[9] followed later by Augustus de Morgan, On the Application of Probabilities to Life Contingencies (1838).[10]

In 1755, Benjamin Franklin published his essay Observations Concerning the Increase of Mankind, Peopling of Countries, etc., projecting exponential growth in British colonies.[11] His work influenced Thomas Robert Malthus,[12] who, writing at the end of the 18th century, feared that, if unchecked, population growth would tend to outstrip growth in food production, leading to ever-increasing famine and poverty (see Malthusian catastrophe). Malthus is seen as the intellectual father of ideas of overpopulation and the limits to growth. Later, more sophisticated and realistic models were presented by Benjamin Gompertz and Verhulst.

In 1855, the Belgian scholar Achille Guillard defined demography as the natural and social history of the human species, or the mathematical knowledge of populations, of their general changes, and of their physical, civil, intellectual, and moral condition.[13]

The period 1860–1910 can be characterized as a period of transition wherein demography emerged from statistics as a separate field of interest. During this period, a panoply of international "great demographers", such as Adolphe Quételet (1796–1874), William Farr (1807–1883), Louis-Adolphe Bertillon (1821–1883) and his son Jacques (1851–1922), Joseph Körösi (1844–1906), Anders Nicolai Kiær (1838–1919), Richard Böckh (1824–1907), Émile Durkheim (1858–1917), Wilhelm Lexis (1837–1914), and Luigi Bodio (1840–1920), contributed to the development of demography and to the toolkit of methods and techniques of demographic analysis.[14]

There are two types of data collection, direct and indirect, with several different methods of each type.
Direct methods

A census is another common direct method of collecting demographic data. A census is usually conducted by a national government and attempts to enumerate every person in a country. In contrast to vital statistics data, which are typically collected continuously and summarized on an annual basis, censuses typically occur only every 10 years or so, and thus are not usually the best source of data on births and deaths. Analyses are conducted after a census to estimate how much overcounting or undercounting took place. These compare the sex ratios from the census data to those estimated from natural values and mortality data.

[Figure: Rate of human population growth, with projections for later this century.]

Indirect methods

There are a variety of demographic methods for modelling population processes. They include models of mortality (including the life table, Gompertz models, hazards models, Cox proportional hazards models, multiple decrement life tables, Brass relational logits), fertility (Hermes model, Coale-Trussell models, parity progression ratios), marriage (singulate mean age at marriage, Page model), disability (Sullivan's method, multistate life tables), population projections (Lee-Carter model, the Leslie matrix), and population momentum (Keyfitz).

Common rates and ratios

The age-specific fertility rates: the annual number of live births per 1,000 women in particular age groups (usually ages 15–19, 20–24, etc.).

The expectation of life (or life expectancy): the number of years that an individual at a given age could expect to live at present mortality levels.

The replacement level fertility: the average number of children women must have in order to replace the population for the next generation. For example, the replacement level fertility in the US is 2.11.[18]

Note that the crude death rate as defined above and applied to a whole population can give a misleading impression.
For example, the number of deaths per 1,000 people can be higher in developed nations than in less-developed countries, despite standards of health being better in developed countries. This is because developed countries have proportionally more older people, who are more likely to die in a given year, so the overall mortality rate can be higher even if the mortality rate at any given age is lower. A more complete picture of mortality is given by a life table, which summarizes mortality separately at each age. A life table is necessary to give a good estimate of life expectancy.

Basic equation regarding development of a population

Population(t+1) = Population(t) + NaturalIncrease(t) + NetMigration(t)

NaturalIncrease(t) = Births(t) − Deaths(t)

NetMigration(t) = Immigration(t) − Emigration(t)

These basic equations can also be applied to subpopulations. For example, the population size of ethnic groups or nationalities within a given society or country is subject to the same sources of change. When dealing with ethnic groups, however, "net migration" might have to be subdivided into physical migration and ethnic reidentification (assimilation). Individuals who change their ethnic self-labels, or whose ethnic classification in government statistics changes over time, may be thought of as migrating or moving from one population subcategory to another.[19]

Science of population

Populations can change through three processes: fertility, mortality, and migration. Fertility involves the number of children that women have and is to be contrasted with fecundity (a woman's childbearing potential).[21] Mortality is the study of the causes, consequences, and measurement of processes affecting death among members of the population.
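The balancing equations above translate directly into code. A minimal Python sketch, with made-up illustrative numbers (the function name and figures are not from the text):

```python
def project_population(pop, births, deaths, immigration, emigration):
    """One step of the basic demographic balancing equation."""
    natural_increase = births - deaths      # Births(t) - Deaths(t)
    net_migration = immigration - emigration  # Immigration(t) - Emigration(t)
    return pop + natural_increase + net_migration

# Hypothetical one-year step for a population of 1,000,000:
next_pop = project_population(1_000_000, births=14_000, deaths=9_000,
                              immigration=5_000, emigration=3_000)
print(next_pop)  # 1,000,000 + 5,000 + 2,000 = 1007000
```

As the text notes, the same update can be run separately for each subpopulation, with reidentification treated as an extra migration-like flow between subcategories.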
Demographers most commonly study mortality using the life table, a statistical device that provides information about the mortality conditions (most notably the life expectancy) in the population.[22]

Migration refers to the movement of persons from a locality of origin to a destination place across some predefined political boundary. Migration researchers do not designate movements as migrations unless they are somewhat permanent; thus demographers do not consider tourists and travellers to be migrating. While demographers who study migration typically do so through census data on place of residence, indirect sources of data including tax forms and labour force surveys are also important.[23]

See also
List of demographics articles

Social surveys
Socio-Economic Panel (SOEP, Germany)
Global Social Change Research Project (United States)
Population Council (United States)
Vienna Institute of Demography (VID) (Austria)
Wittgenstein Centre for Demography and Global Human Capital (Austria)

References
^ "demography". Merriam-Webster Dictionary.
^ "The Science of Population". demographicpartitions.org. Archived from the original on 14 August 2015. Retrieved 4 August 2015.
^ "What Are Patient Demographics?". 21 December 2011.
^ a b c Srivastava, S.C. (December 2005). Studies in Demography. pp. 39–41. ISBN 9788126119929.
^ a b c d Peter Biller, The Measure of Multitude: Population in Medieval Thought.
^ See, e.g., Andrey Korotayev, Artemy Malkov, & Daria Khaltourina (2006). Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS. ISBN 5-484-00414-4.
^ "Our Yesterdays: the History of the Actuarial Profession in North America, 1809–1979", by E. J. (Jack) Moorhead, FSA, published by the Society of Actuaries as part of the profession's centennial celebration in 1989.
^ The History of Insurance, 8 volume set, edited by David Jenkins and Takau Yoneyama (ISBN 1 85196 527 0). Tokyo: Kinokuniya, 2000.
^ von Valtier, William F. (June 2011). ""An Extravagant Assumption": The Demographic Numbers behind Benjamin Franklin's Twenty-Five-Year Doubling Period" (PDF). Proceedings of the American Philosophical Society. 155 (2): 158–188. Archived from the original (PDF) on 5 March 2016. Retrieved 19 September 2018.
^ Zirkle, Conway (25 April 1941). "Natural Selection before the 'Origin of Species'". Proceedings of the American Philosophical Society. 84 (1): 71–123. ISSN 0003-049X. JSTOR 984852.
^ de Gans, Henk and Frans van Poppel (2000). Contributions from the margins: Dutch statisticians, actuaries and medical doctors and the methods of demography in the time of Wilhelm Lexis. Workshop on "Lexis in Context: German and Eastern & Northern European Contributions to Demography 1860–1910", Max Planck Institute for Demographic Research, Rostock, 28–29 August 2000.
^ Power C and Elliott J (2006). "Cohort profile: 1958 British Cohort Study". International Journal of Epidemiology. 35 (1): 34–41. doi:10.1093/ije/dyi183. PMID 16155052.
^ The last three are run by the Centre for Longitudinal Studies.
^ a b c d Masters and Ela (2008). Introduction to Environmental Engineering and Science. Pearson Education, chapter 3.
^ See, for example, Barbara A. Anderson and Brian D. Silver, "Estimating Russification of Ethnic Identity Among Non-Russians in the USSR", Demography, Vol. 20, No. 4 (Nov. 1983): 461–489.
^ Lutz, Wolfgang; Sanderson, Warren; Scherbov, Sergei (19 June 1997). "Doubling of world population unlikely" (PDF). Nature. 387 (6635): 803–805. Bibcode:1997Natur.387..803L. doi:10.1038/42935. PMID 9194559. S2CID 4306159. Archived from the original (PDF) on 16 December 2008. Retrieved 13 November 2008.
^ John Bongaarts.
The Fertility-Inhibiting Effects of the Intermediate Fertility Variables. Studies in Family Planning, Vol. 13, No. 6/7 (Jun.–Jul. 1982), pp. 179–189.
^ "NCHS – Life Tables".
^ Donald T. Rowland, Demographic Methods and Concepts, Ch. 11. ISBN 0-19-875263-6.
^ "International Union for the Scientific Study of Population".
^ "Population Association of America".
^ Canadian Population Society. Archived 26 June 2011 at the Wayback Machine.

Further reading
Gavrilova N.S., Gavrilov L.A. (2011). Ageing and Longevity: Mortality Laws and Mortality Forecasts for Ageing Populations [in Czech]. Demografie, 53(2): 109–128.
Gavrilov L.A., Gavrilova N.S. (2010). Demographic Consequences of Defeating Aging. Rejuvenation Research, 13(2–3): 329–334.
Uhlenberg P. (ed.) (2009). International Handbook of the Demography of Aging. New York: Springer-Verlag, pp. 113–131.
On covering and quasi-unsplit families of curves (EMS Press)

Given a covering family V of effective 1-cycles on a complex projective variety X, we find conditions allowing us to construct a geometric quotient q: X → Y, with q regular on the whole of X, such that every fiber of q is an equivalence class for the equivalence relation naturally defined by V. Among other results, we show that on a normal and Q-factorial projective variety X with dim(X) ≤ 4, every covering and quasi-unsplit family V of rational curves generates a geometric extremal ray of the Mori cone NE(X) of classes of effective 1-cycles, and that the associated Mori contraction yields a geometric quotient for V if X has canonical singularities.

Laurent Bonavero, Stéphane Druel, Cinzia Casagrande, On covering and quasi-unsplit families of curves. J. Eur. Math. Soc. 9 (2007), no. 1, pp. 45–57.
Fit isolation forest for anomaly detection (MATLAB iforest)

Syntax

forest = iforest(Tbl)
forest = iforest(X)
forest = iforest(___,Name=Value)
[forest,tf] = iforest(___)
[forest,tf,scores] = iforest(___)

Use the iforest function to fit an isolation forest model for outlier detection and novelty detection.

Outlier detection (detecting anomalies in training data): use the output argument tf of iforest to identify anomalies in training data.

Novelty detection (detecting anomalies in new data with uncontaminated training data): create an IsolationForest object by passing uncontaminated training data (data with no outliers) to iforest. Detect anomalies in new data by passing the object and the new data to the object function isanomaly.

forest = iforest(Tbl) returns an IsolationForest object for predictor data in the table Tbl.

forest = iforest(X) uses predictor data in the matrix X.

forest = iforest(___,Name=Value) specifies options using one or more name-value arguments in addition to any of the input argument combinations in the previous syntaxes. For example, ContaminationFraction=0.1 instructs the function to process 10% of the training data as anomalies.

[forest,tf] = iforest(___) also returns the logical array tf, whose elements are true when an anomaly is detected in the corresponding row of Tbl or X.

[forest,tf,scores] = iforest(___) also returns an anomaly score in the range [0,1] for each observation in Tbl or X. A score value close to 0 indicates a normal observation, and a value close to 1 indicates an anomaly.

To use a subset of the variables in Tbl, specify the variables by using the PredictorNames name-value argument.

Example: NumLearners=50,NumObservationsPerLearner=100 specifies to train an isolation forest using 50 isolation trees and 100 observations for each isolation tree.
If iforest uses a subset of input variables as predictors, then the function indexes the predictors using only the subset. The CategoricalPredictors values do not count any variables that the function does not use.

By default, if the predictor data is in a table (Tbl), iforest assumes that a variable is categorical if it is a logical vector, unordered categorical vector, character array, string array, or cell array of character vectors. If the predictor data is a matrix (X), iforest assumes that all predictors are continuous. To identify any other predictors as categorical predictors, specify them by using the CategoricalPredictors name-value argument. For a categorical variable with more than 64 categories, the iforest function uses an approximate splitting method that can reduce the accuracy of the isolation forest model.

Example: CategoricalPredictors='all'

ContaminationFraction: 0 (default) | numeric scalar in the range [0,1]

If the ContaminationFraction value is 0 (default), then iforest treats all training observations as normal observations, and sets the score threshold (ScoreThreshold property value of forest) to the maximum value of scores. If the ContaminationFraction value is in the range (0,1], then iforest determines the threshold value so that the function detects the specified fraction of training observations as anomalies.

Example: ContaminationFraction=0.1

NumLearners: number of isolation trees. The average path lengths used by the isolation forest algorithm to compute anomaly scores usually converge well before growing 100 isolation trees, for both normal points and anomalies [1].

Example: NumLearners=50

NumObservationsPerLearner: min(N,256) (default) | positive integer scalar greater than or equal to 3

Number of observations to draw from the training data without replacement for each isolation tree, where N is the number of training observations.
The isolation forest algorithm performs well with a small NumObservationsPerLearner value, because using a small sample size helps to detect dense anomalies and anomalies close to normal points. However, you need to experiment with the sample size if N is small. For an example, see Examine NumObservationsPerLearner for Small Data.

Example: NumObservationsPerLearner=100

PredictorNames: predictor variable names, specified as a string array of unique names or cell array of unique character vectors. The functionality of PredictorNames depends on the way you supply the predictor data. If you supply Tbl, then you can use PredictorNames to choose which predictor variables to use; that is, iforest uses only the predictor variables in PredictorNames. PredictorNames must be a subset of Tbl.Properties.VariableNames. By default, PredictorNames contains the names of all predictor variables in Tbl. If you supply X, then you can use PredictorNames to assign names to the predictor variables in X.

Example: PredictorNames=["SepalLength" "SepalWidth" "PetalLength" "PetalWidth"]

UseParallel: flag to run in parallel, specified as true or false. If you specify UseParallel=true, the iforest function executes for-loop iterations in parallel by using parfor. This option requires Parallel Computing Toolbox™.

Output arguments

forest: trained isolation forest model, returned as an IsolationForest object. You can use the object function isanomaly to find anomalies in new data.

iforest identifies observations with scores above the threshold (ScoreThreshold property value of forest) as anomalies. The function determines the threshold value to detect the specified fraction (ContaminationFraction name-value argument) of training observations as anomalies. The anomaly score of an observation x is

s(x) = 2^(−E[h(x)]/c(n)),

where E[h(x)] is the average path length of x over the isolation trees and c(n) is the average path length of an unsuccessful search in a binary search tree with n observations.

iforest considers NaN, '' (empty character vector), "" (empty string), <missing>, and <undefined> values in Tbl, and NaN values in X, to be missing values.
iforest does not use observations with all missing values, and assigns an anomaly score of 1 to such observations. iforest uses observations with some missing values to find splits on variables for which these observations have valid values.

See also: IsolationForest | isanomaly | fitcsvm | robustcov
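The mechanics this reference describes (random subsampling per tree, random splits, average path length, and the score s(x) = 2^(−E[h(x)]/c(n))) can be illustrated with a minimal one-dimensional pure-Python sketch. This is a toy for intuition only, not MathWorks code; the function names and parameters are invented here, with sample_size loosely mirroring NumObservationsPerLearner and n_trees mirroring NumLearners:

```python
import math
import random

def c(n):
    # c(n): average path length of an unsuccessful search in a binary
    # search tree with n nodes; normalizes path lengths in the score.
    if n <= 1:
        return 0.0
    euler_gamma = 0.5772156649
    return 2.0 * (math.log(n - 1) + euler_gamma) - 2.0 * (n - 1) / n

def path_length(x, sample, depth, max_depth):
    # Isolate x with random splits over a 1-D sample; shorter paths
    # mean x is easier to isolate, i.e. more anomalous.
    if depth >= max_depth or len(sample) <= 1 or min(sample) == max(sample):
        return depth + c(len(sample))
    split = random.uniform(min(sample), max(sample))
    if x < split:
        side = [v for v in sample if v < split]
    else:
        side = [v for v in sample if v >= split]
    return path_length(x, side, depth + 1, max_depth)

def anomaly_score(x, data, n_trees=100, sample_size=64):
    # s(x) = 2 ** (-E[h(x)] / c(sample_size)): near 1 for anomalies,
    # near 0.5 or below for ordinary points. Requires len(data) >= sample_size.
    max_depth = math.ceil(math.log2(sample_size))
    paths = [path_length(x, random.sample(data, sample_size), 0, max_depth)
             for _ in range(n_trees)]
    return 2.0 ** (-(sum(paths) / n_trees) / c(sample_size))

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]
print(anomaly_score(10.0, data) > anomaly_score(0.0, data))  # True
```

A point far outside the cluster is isolated after only a few random splits, so its average path length is short and its score approaches 1, which is exactly the scoring behaviour the documentation describes for iforest.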
Orthogonal frequency-division multiplexing (OFDM)

The wired version of OFDM is mostly known as discrete multitone transmission (DMT).

Characteristics and principles of operation

Orthogonality requires that the subcarrier spacing be Δf = k/T_U hertz, where T_U seconds is the useful symbol duration (the receiver-side window size) and k is a positive integer, typically equal to 1. This stipulates that each carrier frequency undergoes k more complete cycles per symbol period than the previous carrier. Therefore, with N subcarriers, the total passband bandwidth will be B ≈ N·Δf (Hz).

A simple example: a useful symbol duration T_U = 1 ms would require a subcarrier spacing of Δf = 1/(1 ms) = 1 kHz (or an integer multiple of that) for orthogonality. N = 1,000 subcarriers would result in a total passband bandwidth of N·Δf = 1 MHz. For this symbol time, the required bandwidth in theory according to Nyquist is BW = R/2 = (N/T_U)/2 = 0.5 MHz (half of the bandwidth used by our scheme), where R is the bit rate and N = 1,000 samples per symbol by FFT. If a guard interval is applied (see below), the Nyquist bandwidth requirement would be even lower. The FFT would result in N = 1,000 samples per symbol. If no guard interval were applied, this would result in a baseband complex-valued signal with a sample rate of 1 MHz, which would require a baseband bandwidth of 0.5 MHz according to Nyquist.
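The spacing and bandwidth figures in this example can be checked with a few lines of Python (the values are those of the worked example above, nothing more):

```python
# Sanity-check the worked example: T_U = 1 ms, k = 1, N = 1000 subcarriers.
T_U = 1e-3              # useful symbol duration (seconds)
k = 1                   # orthogonality integer
N = 1000                # number of subcarriers

delta_f = k / T_U       # subcarrier spacing, ~1 kHz
B = N * delta_f         # total passband bandwidth, ~1 MHz
R = N / T_U             # N samples per symbol at 1 ms per symbol
nyquist_bw = R / 2      # theoretical Nyquist baseband bandwidth, ~0.5 MHz

print(delta_f, B, nyquist_bw)
```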
However, the passband RF signal is produced by multiplying the baseband signal with a carrier waveform (i.e., double-sideband quadrature amplitude modulation), resulting in a passband bandwidth of 1 MHz. A single-sideband (SSB) or vestigial sideband (VSB) modulation scheme would achieve almost half that bandwidth for the same symbol rate (i.e., twice as high spectral efficiency for the same symbol alphabet length). It is, however, more sensitive to multipath interference.

Implementation using the FFT algorithm

MIPS = (computational complexity / T_symbol) × 1.3 × 10^−6
     = (147456 × 2 / (896 × 10^−6)) × 1.3 × 10^−6
     = 428

Simplified equalization

Our example: the OFDM equalization in the above numerical example would require one complex-valued multiplication per subcarrier and symbol (i.e., N = 1,000 complex multiplications per OFDM symbol, or one million multiplications per second, at the receiver). The FFT algorithm requires N log2 N = 10,000 complex-valued multiplications per OFDM symbol (i.e., 10 million multiplications per second), at both the receiver and transmitter side. (This count is imprecise: over half of these complex multiplications are trivial, i.e. equal to 1, and are not implemented in software or hardware.) This should be compared with the corresponding one million symbols/second single-carrier modulation case mentioned in the example, where the equalization of 125 microseconds of time-spreading using a FIR filter would require, in a naive implementation, 125 multiplications per symbol (i.e., 125 million multiplications per second).
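The multiplication counts in the comparison above are easy to reproduce; a small Python sketch using only the figures quoted in the text:

```python
import math

N = 1000                # subcarriers (FFT size)
symbol_rate = 1000      # OFDM symbols per second (1 ms symbols)

# Per-subcarrier equalization: one complex multiplication per subcarrier per symbol.
eq_mults_per_sec = N * symbol_rate                  # one million per second

# FFT cost, as the text counts it: N log2 N per OFDM symbol.
fft_mults_per_symbol = N * math.log2(N)             # ~9966, quoted as 10,000
fft_mults_per_sec = fft_mults_per_symbol * symbol_rate

# Naive single-carrier FIR equalizer: 125 taps at 1e6 symbols per second.
fir_mults_per_sec = 125 * 1_000_000

print(eq_mults_per_sec, round(fft_mults_per_symbol), fir_mults_per_sec)
```

Note that N log2 1000 ≈ 9,966, which the text rounds to 10,000.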
FFT techniques can be used to reduce the number of multiplications for an FIR filter-based time-domain equalizer to a number comparable with OFDM, at the cost of a delay between reception and decoding, which also becomes comparable with OFDM.

Linear transmitter power amplifier

CF = 10 log10(n) + CF_c

Efficiency comparison between single carrier and multicarrier

η = 2 R_s / B_OFDM,

where R_s is the symbol rate in giga-symbols per second (Gsps), B_OFDM is the bandwidth of the OFDM signal, and the factor of 2 is due to the two polarization states in the fiber.

Idealized system model

s[n] is a serial stream of binary digits. By inverse multiplexing, these are first demultiplexed into N parallel streams, and each one is mapped to a (possibly complex) symbol stream using some modulation constellation (QAM, PSK, etc.). Note that the constellations may be different, so some streams may carry a higher bit-rate than others.

An inverse FFT is computed on each set of symbols, giving a set of complex time-domain samples. These samples are then quadrature-mixed to passband in the standard way. The real and imaginary components are first converted to the analogue domain using digital-to-analogue converters (DACs); the analogue signals are then used to modulate cosine and sine waves at the carrier frequency, f_c, respectively. These signals are then summed to give the transmission signal, s(t).

The receiver picks up the signal r(t), which is then quadrature-mixed down to baseband using cosine and sine waves at the carrier frequency. This also creates signals centered on 2f_c, so low-pass filters are used to reject these.
The baseband signals are then sampled and digitised using analog-to-digital converters (ADCs), and a forward FFT is used to convert back to the frequency domain. This returns N parallel streams, each of which is converted to a binary stream using an appropriate symbol detector. These streams are then re-combined into a serial stream, ŝ[n], which is an estimate of the original binary stream at the transmitter.

If N subcarriers are used, and each subcarrier is modulated using M alternative symbols, the OFDM symbol alphabet consists of M^N combined symbols. The low-pass equivalent OFDM signal is expressed as

ν(t) = Σ_{k=0}^{N−1} X_k e^{j2πkt/T},  0 ≤ t < T,

where {X_k} are the data symbols, N is the number of subcarriers, and T is the OFDM symbol time. The subcarrier spacing of 1/T makes them orthogonal over each symbol period; this property is expressed as

(1/T) ∫_0^T (e^{j2πk₁t/T})* (e^{j2πk₂t/T}) dt = (1/T) ∫_0^T e^{j2π(k₂−k₁)t/T} dt = δ_{k₁k₂},

where (·)* denotes the complex conjugate operator and δ is the Kronecker delta.

To avoid intersymbol interference in multipath fading channels, a guard interval of length T_g is inserted prior to the OFDM block. During this interval, a cyclic prefix is transmitted such that the signal in the interval −T_g ≤ t < 0 equals the signal in the interval (T − T_g) ≤ t < T. The OFDM signal with cyclic prefix is thus:

ν(t) = Σ_{k=0}^{N−1} X_k e^{j2πkt/T},  −T_g ≤ t < T.

The low-pass signal above can be either real or complex-valued.
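The IFFT/FFT pipeline and the role of the cyclic prefix can be demonstrated end-to-end. The following pure-Python sketch uses toy parameters of my own choosing (N = 16 subcarriers, a hypothetical 3-tap multipath channel; none of these values come from the text) to show that, with a cyclic prefix longer than the channel memory, the channel reduces to one complex scaling per subcarrier:

```python
import cmath

N, CP, L = 16, 4, 3   # subcarriers, cyclic-prefix length, channel taps

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

# QPSK-like data symbols X_k.
X = [complex((-1) ** k, (-1) ** (k // 2)) for k in range(N)]

# Transmitter: IDFT, then prepend the cyclic prefix (last CP time samples).
x = idft(X)
tx = x[-CP:] + x

# Toy multipath channel with L = 3 taps (memory shorter than the prefix).
h = [1.0, 0.4, 0.2]
rx = [sum(h[m] * tx[t - m] for m in range(L) if t - m >= 0)
      for t in range(len(tx))]

# Receiver: drop the prefix, DFT, one-tap equalization Y_k / H_k.
Y = dft(rx[CP:CP + N])
H = dft(h + [0.0] * (N - L))
X_hat = [Y[k] / H[k] for k in range(N)]

ok = all(abs(X_hat[k] - X[k]) < 1e-9 for k in range(N))
print(ok)  # True: each subcarrier sees only a complex gain H_k
```

Because the cyclic prefix makes the linear channel act circularly over each symbol, equalization is a single complex division per subcarrier, which is the simplified-equalization advantage discussed earlier.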
Real-valued low-pass equivalent signals are typically transmitted at baseband; wireline applications such as DSL use this approach. For wireless applications, the low-pass signal is typically complex-valued, in which case the transmitted signal is up-converted to a carrier frequency f_c. In general, the transmitted signal can be represented as:

s(t) = Re{ ν(t) e^{j2πf_c t} } = Σ_{k=0}^{N−1} |X_k| cos(2π[f_c + k/T]t + arg[X_k]).

OFDM system comparison table

Δf = 1/T_U ≈ B/N

Vector OFDM (VOFDM)

VOFDM was proposed by Xiang-Gen Xia in 2000 (Proceedings of ICC 2000, New Orleans, and IEEE Trans. on Communications, Aug. 2001) for single-transmit-antenna systems. VOFDM replaces each scalar value in conventional OFDM by a vector value, and is a bridge between OFDM and the single-carrier frequency-domain equalizer (SC-FDE). When the vector size is 1, it is OFDM; when the vector size is at least the channel length and the FFT size is 1, it is SC-FDE.

In VOFDM, let M be the vector size, and replace each scalar-valued signal X_n in OFDM by a vector-valued signal X_n of vector size M, 0 ≤ n ≤ N−1. One takes the N-point IFFT of X_n, 0 ≤ n ≤ N−1, component-wise, and gets another vector sequence of the same vector size M: x_k, 0 ≤ k ≤ N−1.
Then one adds a vector cyclic prefix (CP) of length Γ to this vector sequence:

x_0, x_1, ..., x_{N−1}, x_0, x_1, ..., x_{Γ−1}.

This vector sequence is converted to a scalar sequence by serializing all the vectors of size M, which is transmitted at a transmit antenna sequentially. At the receiver, the received scalar sequence is first converted back to a vector sequence of vector size M. When the CP length satisfies Γ ≥ ⌈L/M⌉, then, after the vector CP is removed and the N-point FFT is implemented component-wise on the vector sequence of length N, one obtains

Y_n = H_n X_n + W_n,  0 ≤ n ≤ N−1,   (1)

where W_n are additive white noise vectors and H_n = H(z)|_{z = exp(2πjn/N)}, with H(z) the following M × M polyphase matrix of the ISI channel H(z) = Σ_{k=0}^{L} h_k z^{−k}:

H(z) = [ H_0(z)       z^{−1} H_{M−1}(z)   ···   z^{−1} H_1(z)
         H_1(z)       H_0(z)              ···   z^{−1} H_2(z)
         ⋮            ⋮                          ⋮
         H_{M−1}(z)   H_{M−2}(z)          ···   H_0(z) ],

where H_m(z) = Σ_l h_{Ml+m} z^{−l} is the m-th polyphase component of the channel H(z), 0 ≤ m ≤ M−1.

From (1), one can see that the original ISI channel is converted to N vector subchannels of vector size M. There is no ISI across these vector subchannels, but there is ISI inside each vector subchannel: in each vector subchannel, at most M symbols interfere with each other.
Clearly, when the vector size {\displaystyle M=1} , the above VOFDM reduces to OFDM, and when {\displaystyle M>L} and {\displaystyle N=1} , it becomes SC-FDE. The vector size {\displaystyle M} is a parameter that one can choose freely in practice, and it controls the ISI level. There may be a trade-off between the vector size {\displaystyle M} , the demodulation complexity at the receiver, and the FFT size, for a given channel bandwidth. Note that the CP in the sequential form does not have to be an integer multiple of the vector size, {\displaystyle \Gamma M} : one can truncate the above vectorized CP to a sequential CP of length not less than the ISI channel length, which does not affect the above demodulation. It has been shown (Yabo Li et al., IEEE Trans. on Signal Processing, Oct. 2012) that applying the MMSE linear receiver to each vector subchannel in (1) achieves multipath diversity and/or signal-space diversity. This is because the vectorized channel matrices in (1) are pseudo-circulant and can be diagonalized by the {\displaystyle M} -point DFT/IDFT matrix together with some diagonal phase-shift matrices. The right-hand-side DFT/IDFT matrix and the {\displaystyle k} th diagonal phase-shift matrix in the diagonalization can then be thought of as a precoding of the input information symbol vector {\displaystyle {\bf {X}}_{k}} in the {\displaystyle k} th vector subchannel, and all the vectorized subchannels become diagonal channels of {\displaystyle M} discrete frequency components from the {\displaystyle MN} -point DFT of the original ISI channel. This collects multipath diversity and/or signal-space diversity, similar to the precoding used to collect signal-space diversity in single-antenna systems to combat wireless fading, or to diagonal space-time block coding used to collect spatial diversity in multiple-antenna systems. For details, see the IEEE TCOM and IEEE TSP papers mentioned above. 
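The vector-blocking, vector-CP, and serialization steps described above can be sketched as follows; the sizes M, N, and channel length L are toy assumptions for illustration.

```python
import math

# Sketch of the VOFDM framing step (assumed toy sizes): group scalars into
# N vectors of size M, append a vector CP of length Gamma as in the text
# (x_0 .. x_{N-1}, x_0 .. x_{Gamma-1}), then serialize for transmission.
M, N, L = 2, 4, 3                  # vector size, FFT size, ISI channel memory
Gamma = math.ceil(L / M)           # CP condition: Gamma >= ceil(L / M)
x = [[n * M + m for m in range(M)] for n in range(N)]  # vector sequence

framed = x + x[:Gamma]             # vector CP appended as described
serial = [s for vec in framed for s in vec]            # sequentialize

assert len(serial) == (N + Gamma) * M
assert serial[N * M:(N + 1) * M] == serial[:M]  # CP repeats x_0
assert Gamma * M >= L              # serialized CP covers the channel length
```

As the text notes, the serialized CP of length Gamma*M may then be truncated to any length not less than L without affecting demodulation.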
Wavelet-OFDM Instead of using an IDFT to create the sender signal, wavelet OFDM uses a synthesis bank consisting of an {\displaystyle N} -band transmultiplexer followed by the transform function {\displaystyle F_{n}(z)=\sum _{k=0}^{L-1}f_{n}(k)z^{-k},\quad 0\leq n<N} while at the receiver the analysis transform {\displaystyle G_{n}(z)=\sum _{k=0}^{L-1}g_{n}(k)z^{-k},\quad 0\leq n<N} is followed by another {\displaystyle N} -band transmultiplexer. The relationship between the two transform functions is {\displaystyle {\begin{aligned}f_{n}(k)&=g_{n}(L-1-k)\\F_{n}(z)&=z^{-(L-1)}G_{n}\left(z^{-1}\right)\end{aligned}}} An example of W-OFDM uses the Perfect Reconstruction Cosine Modulated Filter Bank (PR-CMFB), and the Extended Lapped Transform (ELT) is used for the wavelet transform. Thus, {\displaystyle \textstyle f_{n}(k)} and {\displaystyle \textstyle g_{n}(k)} are given as {\displaystyle {\begin{aligned}f_{n}(k)&=2p_{0}(k)\cos \left[{\frac {\pi }{N}}\left(n+{\frac {1}{2}}\right)\left(k-{\frac {L-1}{2}}\right)-(-1)^{n}{\frac {\pi }{4}}\right]\\g_{n}(k)&=2p_{0}(k)\cos \left[{\frac {\pi }{N}}\left(n+{\frac {1}{2}}\right)\left(k-{\frac {L-1}{2}}\right)+(-1)^{n}{\frac {\pi }{4}}\right]\\P_{0}(z)&=\sum _{k=0}^{N-1}z^{-k}Y_{k}\left(z^{2N}\right)\end{aligned}}} These two functions are their respective inverses and can be used to modulate and demodulate a given input sequence. Just as in the case of the DFT, the wavelet transform creates orthogonal waves {\displaystyle \textstyle f_{0}} , {\displaystyle \textstyle f_{1}} , ..., {\displaystyle \textstyle f_{N-1}} . The orthogonality ensures that they do not interfere with each other and can be sent simultaneously. At the receiver, {\displaystyle \textstyle g_{0}} , {\displaystyle \textstyle g_{1}} , ..., {\displaystyle \textstyle g_{N-1}} are used to reconstruct the data sequence. ^ Shieh, William; Djordjevic, Ivan (2010). OFDM for Optical Communications. San Diego: Academic Press. 
The VMOS and DMOS developed into what has become known as VDMOS (vertical DMOS).[10] John Moll's research team at HP Labs fabricated DMOS prototypes in 1977 and demonstrated advantages over the VMOS, including lower on-resistance and higher breakdown voltage.[7] The same year, Hitachi introduced the LDMOS (lateral DMOS), a planar type of DMOS. Hitachi was the only LDMOS manufacturer between 1977 and 1983, during which time LDMOS was used in audio power amplifiers from manufacturers such as HH Electronics (V-series) and Ashly Audio, which were used for music and public address systems.[10] With the introduction of the 2G digital mobile network in 1995, LDMOS became the most widely used RF power amplifier technology in mobile networks such as 2G, 3G,[11] and 4G.[12] Body diode It can be seen in figure 1 that the source metallization connects both the N+ and P+ implantations, although the operating principle of the MOSFET only requires the source to be connected to the N+ zone. If the source were connected only to the N+ zone, however, there would be a floating P zone between the N-doped source and drain, which is equivalent to an NPN transistor with a non-connected base. Under certain conditions (high drain current, with an on-state drain-to-source voltage on the order of some volts), this parasitic NPN transistor would be triggered, making the MOSFET uncontrollable. The connection of the P implantation to the source metallization shorts the base of the parasitic transistor to its emitter (the source of the MOSFET) and thus prevents spurious latching. This solution, however, creates a diode between the drain (cathode) and the source (anode) of the MOSFET, making it able to block current in only one direction. 
Capacitances The datasheet capacitances are defined as {\displaystyle {\begin{matrix}C_{iss}&=&C_{GS}+C_{GD}\\C_{oss}&=&C_{GD}+C_{DS}\\C_{rss}&=&C_{GD}\end{matrix}}} Gate to drain capacitance The gate-to-drain capacitance is the series combination of the oxide capacitance of the gate-drain overlap region and the depletion capacitance beneath it: {\displaystyle C_{GD}={\frac {C_{oxD}\times C_{GDj}\left(V_{GD}\right)}{C_{oxD}+C_{GDj}\left(V_{GD}\right)}}} The depletion width is {\displaystyle w_{GDj}={\sqrt {\frac {2\epsilon _{Si}V_{GD}}{qN}}}} where {\displaystyle \epsilon _{Si}} is the permittivity of silicon, so that {\displaystyle C_{GDj}=A_{GD}{\frac {\epsilon _{Si}}{w_{GDj}}}} and therefore {\displaystyle C_{GDj}\left(V_{GD}\right)=A_{GD}{\sqrt {\frac {q\epsilon _{Si}N}{2V_{GD}}}}} ^ Murray, Anthony F. J.; McDonald, Tim; Davis, Harold; Cao, Joe; Spring, Kyle. "Extremely Rugged MOSFET Technology with Ultra-low RDS(on) Specified for A Broad Range of EAR Conditions" (PDF). International Rectifier. Retrieved 26 April 2022. 
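The gate-to-drain capacitance formulas above can be illustrated numerically. All device parameters in this sketch (doping, overlap area, oxide capacitance) are illustrative assumptions, not values for any real part.

```python
import math

# Numeric sketch: C_GD is the series combination of the oxide capacitance
# C_oxD and the bias-dependent depletion capacitance C_GDj(V_GD).
q = 1.602e-19               # elementary charge, C
eps_si = 11.7 * 8.854e-12   # permittivity of silicon, F/m
N = 1e22                    # drift-region doping, m^-3 (assumed)
A_gd = 1e-7                 # gate-drain overlap area, m^2 (assumed)
C_oxd = 1e-10               # oxide capacitance, F (assumed)

def c_gdj(v_gd):
    # depletion capacitance, C_GDj = A_GD * sqrt(q * eps_Si * N / (2 V_GD))
    return A_gd * math.sqrt(q * eps_si * N / (2.0 * v_gd))

def c_gd(v_gd):
    # series combination of oxide and depletion capacitances
    return C_oxd * c_gdj(v_gd) / (C_oxd + c_gdj(v_gd))

# C_GD falls as the drain bias grows, and never exceeds either series element.
assert c_gd(40.0) < c_gd(10.0) < C_oxd
assert c_gd(10.0) < c_gdj(10.0)
```

This bias dependence is what makes the gate-drain ("Miller") capacitance dominate switching behaviour at low drain voltage.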
Loop spaces and representations David Ben-Zvi, David Nadler. Duke Math. J. 162 (9): 1587–1619, 15 June 2013. https://doi.org/10.1215/00127094-2266130 We introduce loop spaces (in the sense of derived algebraic geometry) into the representation theory of reductive groups. In particular, we apply our previously developed theory to flag varieties, and we obtain new insights into fundamental categories in representation theory. First, we show that one can recover finite Hecke categories (realized by \mathcal{D} -modules on flag varieties) from affine Hecke categories (realized by coherent sheaves on Steinberg varieties) via S^{1} -equivariant localization. Similarly, one can recover \mathcal{D} -modules on the nilpotent cone from coherent sheaves on the commuting variety. We also show that the categorical Langlands parameters for real groups studied by Adams, Barbasch, and Vogan and by Soergel arise naturally from the study of loop spaces of flag varieties and their Jordan decomposition (or, in an alternative formulation, from the study of local systems on a Möbius strip). This provides a unifying framework that overcomes a discomforting aspect of the traditional approach to the Langlands parameters, namely their evidently strange behavior with respect to changes in infinitesimal character.
Dolichyl-phosphatase In enzymology, a dolichyl-phosphatase (EC 3.1.3.51) is an enzyme that catalyzes the chemical reaction dolichyl phosphate + H2O {\displaystyle \rightleftharpoons } dolichol + phosphate Thus, the two substrates of this enzyme are dolichyl phosphate and H2O, whereas its two products are dolichol and phosphate. This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name of this enzyme class is dolichyl-phosphate phosphohydrolase. Other names in common use include dolichol phosphate phosphatase, dolichol phosphatase, dolichol monophosphatase, dolichyl monophosphate phosphatase, dolichyl phosphate phosphatase, polyisoprenyl phosphate phosphatase, polyprenylphosphate phosphatase, and Dol-P phosphatase. This enzyme participates in N-glycan biosynthesis.
Dix conversion The horizon-consistent stacking velocity profiles (Figure 9.1-3) at each of the layer boundaries are used to perform Dix conversion to derive the interval velocity profiles for each of the layers. Dix conversion is based on the formula {\displaystyle v_{n}={\sqrt {\frac {V_{n}^{2}\tau _{n}-V_{n-1}^{2}\tau _{n-1}}{\tau _{n}-\tau _{n-1}}}},} (1) where vn is the isotropic interval velocity within the layer bounded by the (n − 1)st layer boundary above and the nth layer boundary below, τn and τn−1 are the corresponding two-way zero-offset times, and Vn and Vn−1 are the corresponding rms velocities. Derivation of equation (1) is provided in Section J.4. Equation (1) is based on the assumptions that the layer boundaries are flat and that the offset range used in estimating the rms velocities Vn and Vn−1 corresponds to a small spread. The procedure for estimating the layer velocities and reflector depths using Dix conversion of stacking velocities includes the following steps: For each of the layers in the model, pick the horizon time on the unmigrated CMP-stacked data that corresponds to the base-layer boundary (Figure 9.1-2a). These times are used in lieu of the two-way zero-offset times in equation (1). Extract the rms velocities at the horizon times (Figure 9.1-3). Use equation (1) to compute the interval velocities for each of the layers from the known quantities — the rms velocities and times at the top- and base-layer boundaries. Use the interval velocities and times at the layer boundaries to compute depths at the layer boundaries. If the input times are from an unmigrated stacked section as in Figure 9.1-2a, use normal-incidence rays for depth conversion. If the input times are from a migrated stacked section, use image rays for depth conversion. Interval velocity profiles derived from Dix conversion are shown in Figure 9.1-4a. The earth model can be constructed by combining the estimated interval velocity profiles and depth horizons (Figure 9.1-4b). 
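The Dix conversion step can be sketched in a few lines. The two-layer example values below are assumptions chosen so the correct answer is known in advance.

```python
import math

# Minimal sketch of Dix conversion (equation 1): rms velocities V_n and
# two-way zero-offset times tau_n at successive layer boundaries give the
# interval velocity of the layer between them.
def dix_interval_velocity(v_rms, tau):
    """Return interval velocities for layers between successive boundaries.

    v_rms : rms velocities V_n at boundaries (m/s)
    tau   : two-way zero-offset times tau_n at boundaries (s)
    """
    v_int = []
    for n in range(1, len(tau)):
        num = v_rms[n] ** 2 * tau[n] - v_rms[n - 1] ** 2 * tau[n - 1]
        v_int.append(math.sqrt(num / (tau[n] - tau[n - 1])))
    return v_int

# Two flat layers: 2000 m/s down to 1.0 s, then 3000 m/s down to 2.0 s.
# The rms velocity at the second boundary follows from its definition.
v_rms = [2000.0, math.sqrt((2000.0 ** 2 * 1.0 + 3000.0 ** 2 * 1.0) / 2.0)]
tau = [1.0, 2.0]

v2 = dix_interval_velocity(v_rms, tau)[0]
assert abs(v2 - 3000.0) < 1e-6   # recovers the second layer's velocity
```

Note that this recovers the layer velocity exactly only under the flat-layer, small-spread assumptions stated above; with real stacking velocities the oscillation problem discussed next applies.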
Comparison with the true model shown in Figure 9.1-1b clearly demonstrates that interval velocity estimation based on Dix conversion is not completely accurate. The interval velocity profiles derived from Dix conversion (Figure 9.1-4a) exhibit the sinusoidal oscillations caused by the swings in the stacking velocity profiles themselves (Figure 9.1-3). The fundamental problem is that stacking velocity estimation is based on fitting a hyperbola to CMP traveltimes associated with a laterally homogeneous earth model. If there are lateral velocity variations in the layers above the layer under consideration, and if these variations occur within a cable length, then the stacking velocities oscillate in a physically implausible manner.[1][2][3] As a consequence, the resulting interval velocity estimation based on Dix conversion is adversely affected. In the present case, Dix conversion has produced fairly accurate estimates for the interval velocities of the top three layers — H1, H2, and H3 — as shown in Figure 9.1-4a. But the interval velocity estimates for layers H4 and H5 have been adversely affected by the laterally varying velocities within the layer above, H3. Figure 9.1-1 An earth model that comprises six flat layers: (a) the interval velocity profiles for the six horizons H1-H6; (b) the true velocity-depth model created from the profiles in (a). Figure 9.1-2 (a) The CMP-stacked section derived from the modeled common-shot gathers using the earth model shown in Figure 9.1-1b; (b) the stacking velocity section. Figure 9.1-3 Horizon-consistent stacking velocity semblance spectra computed from the CMP gathers of the synthetic data as in Figure 9.1-2a along the time horizons H1-H6. Figure 9.1-4 (a) The interval velocity profiles derived from Dix conversion of the horizon-consistent stacking velocity profiles picked from the semblance spectra shown in Figure 9.1-3; (b) the estimated velocity-depth model. Compare with the true velocity-depth model shown in Figure 9.1-1b. 
Figure 9.1-5 (a) The interval velocity profiles as in Figure 9.1-4a displayed by the thick curves and the smoothed interval velocity profiles displayed by the thin curves; (b) the estimated velocity-depth model using the smoothed interval velocity profiles in (a). Compare with the true velocity-depth model shown in Figure 9.1-1b and the model derived from the unsmoothed interval velocities shown in Figure 9.1-4b. Figure 9.1-6 Central portions of (a) the true velocity-depth model shown in Figure 9.1-1b, (b) the velocity-depth model shown in Figure 9.1-4b estimated using the unsmoothed interval velocity profiles shown in Figure 9.1-4a, and (c) the velocity-depth model shown in Figure 9.1-5b estimated using the smoothed interval velocity profiles shown in Figure 9.1-5a. The pragmatic approach is to smooth out the oscillations in the stacking velocities before Dix conversion and to smooth out the oscillations in the velocity profiles after Dix conversion (Figure 9.1-5a). The resulting earth model is then expected to be free of the adverse effects of stacking velocity anomalies (Figure 9.1-5b). A closer look at the central portions of the estimated models using Dix conversion is shown in Figure 9.1-6. Note that the model derived from the smoothed interval velocities is closer to the true model. In model updating, we shall attempt to update this result using tomography. ↑ Lynn, W. S. and Claerbout, J. F., 1982, Velocity estimation in laterally varying media: Geophysics, 47, 884–897. ↑ Loinger, E., 1983, A linear model for velocity anomalies: Geophys. Prosp., 31, 98–118. ↑ Rocca, F. and Toldi, J., 1983, Lateral velocity anomalies: 53rd Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 572–574. 
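The pragmatic smoothing step described above can be sketched with a simple centered moving average; the oscillatory velocity profile below is synthetic, standing in for the kind of lateral swings that stacking velocities exhibit.

```python
import math

# Sketch: smooth a laterally oscillating velocity profile with a centered
# moving average before/after Dix conversion, as suggested in the text.
def smooth(values, half_width=2):
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half_width), min(len(values), i + half_width + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# Synthetic profile: a 2500 m/s trend with implausible lateral swings.
profile = [2500.0 + 200.0 * math.sin(0.8 * i) for i in range(40)]
smoothed = smooth(profile, half_width=4)

def rms_dev(v):
    return math.sqrt(sum((x - 2500.0) ** 2 for x in v) / len(v))

# The smoothed profile deviates less from the underlying trend.
assert rms_dev(smoothed) < rms_dev(profile)
```

The half-width is a free parameter: too small leaves the oscillations in, too large smears genuine lateral velocity variation, which is exactly the trade-off the text's "pragmatic approach" implies.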
EuDML | On the Γ-factors of motives. II. Deninger, Christopher. "On the Γ-factors of motives. II." Documenta Mathematica 6 (2001): 67–95. <http://eudml.org/doc/50393>. Keywords: product of Γ-factors; Archimedean Γ-factor; local Euler factor; motive; de Rham complex. Classification: Arithmetic varieties and schemes; Arakelov theory; heights.
A phase field model for the electromigration of intergranular voids We propose a degenerate Allen–Cahn/Cahn–Hilliard system coupled to a quasi-static diffusion equation to model the motion of intergranular voids. The system can be viewed as a phase field system with an interfacial parameter \gamma . As \gamma \to 0 , the phase field system models the evolution of voids by surface diffusion and electromigration in an electrically conducting solid with a grain boundary. We introduce a finite element approximation for the proposed system, show stability bounds, prove convergence, and hence existence of a weak solution to this nonlinear degenerate parabolic system in two space dimensions. An iterative scheme for solving the resulting nonlinear discrete system at each time level is introduced and analysed, and some numerical experiments are presented. In the Appendix we discuss the sharp interface limit of the above degenerate system as the interfacial parameter \gamma tends to zero. Harald Garcke, Robert Nürnberg, John W. Barrett, A phase field model for the electromigration of intergranular voids. Interfaces Free Bound. 9 (2007), no. 2, pp. 171–210.
From E. A. Darwin [before 8 June 1858]1 To make sure that I now understand look at my diagram.2 The two ∠s A, F in the hexagon, the section of the Dodec. Call CB radius of sphere (r) & CF radius of ⊙ in which the hexagon is inscribed (r′), then r′ : r :: .8165 : 1. In the bee's hexagon r′ = 0.125 inch or ⅛ inch, ∴ r = r′/.8165 = .125/.8165 = 0.153 inch. r − r′ = 0.028 inch, the distance sought. The distance from centre to obtuse ∠ = r × .866 = 0.153 × .866 = 0.1325 inch. Distance to obtuse ∠ of Dodec = 0.1325; r′ = distance to ∠ of Hex = 0.1250; Difference = 0.0075, or ¾ of 1/100. Make no scruple of asking any number of explanations, as it is very little trouble to me, & saves you bother.3 Yours E D [Enclosure:] CB = R = 1; CC′ = R√2 = 1.414; EF = ½ FG = half the side of the hexagon; EF = EC × tan 30° = EC × 1/√3; EC = √2/2; 2EF = FG = √(2/3) = .8165 [DIAGRAMS HERE] crossed pencil; 'Unimportant' added pencil. Bottom of first page: '0.028 0.056 0.25 0.306 = diameter of sphere'; 'sphere rather more than 1/20 inch larger than hexagon' pencil. Enclosure: 'or one side of hexagon = radius'; 'One side of Hexagon in circle equals radius of circle.' pencil \note5 Erasmus says Dodecahedron diameter is radius × √2. In Wasp's nest, hexagon = radius of sphere (on hexagonal theory) & saucer up to lower part of festoon will equal half radius.— from angle to angle of hexagon of course = twice radius. ∴ saucer ¼ of diameter of hexagon.— CD was eager to revise his calculations relating to the construction of bees' cells (see letter to W. E. Darwin, [26 May 1858]). Erasmus Alvey Darwin had helped him in the past with geometrical problems (see letter from E. A. Darwin, [May–June 1858], and Correspondence vol. 3, letter from E. A. Darwin, [May 1844 – 1 October 1846]). The enclosure is in DAR 48 (ser. 2): 46 in association with CD's notes on bees' cells. The note is in DAR 162: 48/2. Gives calculations on the structure of bees' cells.
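Erasmus's figures can be checked mechanically. The sketch below assumes his stated ratio r′ : r = √(2/3) : 1 and his r′ = ⅛ inch, and verifies each number in the letter to the precision he quotes.

```python
import math

# Checking Erasmus Darwin's bee-cell arithmetic (a verification sketch,
# not a new derivation): r'/r = sqrt(2/3) ~ 0.8165, with r' = 1/8 inch.
ratio = math.sqrt(2.0 / 3.0)
assert abs(ratio - 0.8165) < 1e-4

r_prime = 0.125                 # radius of circle circumscribing the hexagon
r = r_prime / ratio             # radius of the sphere
assert abs(r - 0.153) < 5e-4
assert abs((r - r_prime) - 0.028) < 5e-4     # the "distance sought"

d_obtuse = r * math.sqrt(3.0) / 2.0          # r x 0.866
assert abs(d_obtuse - 0.1325) < 5e-4
assert abs((d_obtuse - r_prime) - 0.0075) < 5e-4  # "3/4 of 1/100"

# Pencil note: sphere diameter 2r vs hexagon diameter 2r' = 0.25 inch.
assert abs(2 * r - 0.306) < 1e-3
```

All five quoted values (0.153, 0.028, 0.1325, 0.0075, 0.306) come out consistent with the single ratio √(2/3).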
Process of arranging, controlling and optimizing work and workloads It is an important tool for manufacturing and engineering, where it can have a major impact on the productivity of a process. In manufacturing, the purpose of scheduling is to meet customer due dates while minimizing production time and costs, by telling a production facility when to make each product, with which staff, and on which equipment. Production scheduling aims to maximize the efficiency of the operation and reduce costs. In some situations, scheduling can involve random attributes, such as random processing times, random due dates, random weights, and stochastic machine breakdowns; such scheduling problems are referred to as "stochastic scheduling." 2 Key concepts in scheduling 3 Scheduling algorithms 4 Batch production scheduling 4.2 Scheduling in the batch processing environment 4.3 Cycle-time analysis 4.5 Algorithmic methods Inventory reduction, levelling Labour load levelling Production scheduling tools greatly outperform older manual scheduling methods. They provide the production scheduler with powerful graphical interfaces that can be used to visually optimize real-time workloads at various stages of production, and pattern recognition allows the software to automatically create scheduling opportunities that might not otherwise be apparent. For example, an airline might wish to minimize the number of airport gates required for its aircraft in order to reduce costs, and scheduling software can allow the planners to see how this can be done by analysing timetables, aircraft usage, or the flow of passengers. Key concepts in scheduling[edit] Inputs : Inputs are plant, labour, materials, tooling, energy and a clean environment. 
Scheduling algorithms[edit] Main article: Job shop scheduling See also: Genetic algorithm scheduling Batch production scheduling[edit] Scheduling in the batch processing environment[edit] Cycle-time analysis[edit] With M stages of duration τj, the minimum cycle time is {\displaystyle CT_{\min }=\max _{j=1,\ldots ,M}\lbrace \tau _{j}\rbrace } and if stage j is duplicated across Nj parallel units, this relaxes to {\displaystyle CT_{\min }=\max _{j=1,\ldots ,M}\lbrace \tau _{j}/N_{j}\rbrace } A wide variety of algorithms and approaches have been applied to batch process scheduling. Early methods, which were implemented in some MRP systems, assumed infinite capacity and depended only on the batch time. Such methods did not account for any resources and would produce infeasible schedules.[13] Agent-based modeling describes the batch process and constructs a feasible schedule under various constraints.[17] By combining it with mixed-integer programming or simulation-based optimization methods, this approach can achieve a good balance between solution efficiency and schedule performance.[18] A new development and framework addresses how to exploit the aggregation of several digital twins, representing different physical assets and their autonomous decision-making, together with a global digital twin, in order to perform production scheduling optimization.[19] Resource-Task Network ^ Marcus V. Magalhaes and Nilay Shah, "Crude Oil Scheduling," Foundations of Computer-Aided Operations (FOCAPO) 2003, pp 323–325. ^ Zhenya Jia and Marianthi Ierapetritou, "Efficient Short-Term Scheduling of Refinery Operation Based on a Continuous Time Formulation," Foundations of Computer-Aided Operations (FOCAPO) 2003, pp 327–330. ^ Toumi, A., Jurgens, C., Jungo, C., Maier, B. A., Papavasileiou, V., and Petrides, D., "Design and Optimization of a Large Scale Biopharmaceutical Facility using Process Simulation and Scheduling Tools," Pharmaceutical Engineering (the ISPE magazine) 2010, vol 30, no 2, pp 1–9. 
^ Papavasileiou, V., Koulouris, A., Siletti, C., and Petrides, D., "Optimize Manufacturing of Pharmaceutical Products with Process Simulation and Production Scheduling Tools," Chemical Engineering Research and Design (IChemE publication) 2007, vol 87, pp 1086–1097. ^ Michael Pinedo, Scheduling: Theory, Algorithms, and Systems, Prentice Hall, 2002, pp 1–6. ^ T. F. Edgar, C. L. Smith, F. G. Shinskey, G. W. Gassman, P. J. Schafbuch, T. J. McAvoy, D. E. Seborg, "Process control," in Perry's Chemical Engineers' Handbook, R. Perry and D. W. Green eds., McGraw Hill, 1997, p 8-41. ^ Charlotta Johnsson, S88 for Beginners, World Batch Forum, 2004. ^ L. T. Biegler, I. E. Grossman and A. W. Westerberg, Systematic Methods of Chemical Process Design, Prentice Hall, 1999, p 181. ^ M. Pinedo, 2002, pp 14–22. ^ Biegler et al., 1999, p 187. ^ M. Pinedo, 2002, p 430. ^ M. Pinedo, 2002, p 28. ^ G. Plenert and G. Kirchmier, 2000, pp 38–41. ^ C. Mendez, J. Cerda, I. Grossman, I. Harjunkoski, M. Fahl, "State of the art Review of Optimization Methods for Short Term Scheduling of Batch Processes," Computers and Chemical Engineering, 30 (2006), pp 913–946. ^ I. Lustig, "Progress in Linear and Integer Programming and Emergence of Constraint Programming," Foundations of Computer-Aided Operations (FOCAPO) 2003, pp 133–151. ^ L. Zeballos and G. P. Henning, "A Constraint Programming Approach to the Multi-Stage Batch Scheduling Problem," Foundations of Computer-Aided Operations (FOCAPO) 2003, pp 343–346. ^ Chu, Yunfei; You, Fengqi; Wassick, John M. (2014). "Hybrid method integrating agent-based modeling and heuristic tree search for scheduling of complex batch processes". Computers & Chemical Engineering. 60: 277–296. doi:10.1016/j.compchemeng.2013.09.004. ^ Chu, Yunfei; Wassick, John M.; You, Fengqi (2013). "Efficient scheduling method of complex batch processes with general network structure via agent-based modeling". AIChE Journal. 59 (8): 2884–2906. doi:10.1002/aic.14101. 
^ Villalonga, A.; Negri, E.; Biscardo, G.; Castano, F.; Haber, R.E.; Fumagalli, L.; Macchi, M. (January 2021). "A decision-making framework for dynamic scheduling of cyber-physical production systems based on digital twins". Annual Reviews in Control. 51: 357–373. doi:10.1016/j.arcontrol.2021.04.008. Brucker P. Scheduling Algorithms. Heidelberg, Springer. Fifth ed. ISBN 978-3-540-24804-0
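The two cycle-time expressions in the cycle-time analysis section above can be illustrated with assumed stage times and unit counts.

```python
# Sketch of batch cycle-time bounds (all numbers assumed): with M stages
# of duration tau_j, the minimum cycle time is max_j tau_j; duplicating
# stage j with N_j parallel units relaxes it to max_j (tau_j / N_j).
tau = [4.0, 9.0, 6.0]      # hours per batch at each stage (assumed)
n_units = [1, 3, 2]        # parallel units per stage (assumed)

ct_single = max(tau)
ct_parallel = max(t / n for t, n in zip(tau, n_units))

assert ct_single == 9.0    # the 9-hour stage is the bottleneck
assert ct_parallel == 4.0  # with 3 units it drops to 3 h; stage 1 now binds
```

This is the simplest case of the analysis: it ignores transfer times and resource conflicts, which is exactly why the infinite-capacity methods criticized above could produce infeasible schedules.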
(Redirected from Maximum entropy classifier) {\displaystyle \operatorname {score} (\mathbf {X} _{i},k)={\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i},} {\displaystyle f(k,i)} {\displaystyle f(k,i)=\beta _{0,k}+\beta _{1,k}x_{1,i}+\beta _{2,k}x_{2,i}+\cdots +\beta _{M,k}x_{M,i},} {\displaystyle \beta _{m,k}} {\displaystyle f(k,i)={\boldsymbol {\beta }}_{k}\cdot \mathbf {x} _{i},} {\displaystyle {\boldsymbol {\beta }}_{k}} {\displaystyle \mathbf {x} _{i}} {\displaystyle {\begin{aligned}\ln {\frac {\Pr(Y_{i}=1)}{\Pr(Y_{i}=K)}}&={\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}\\\ln {\frac {\Pr(Y_{i}=2)}{\Pr(Y_{i}=K)}}&={\boldsymbol {\beta }}_{2}\cdot \mathbf {X} _{i}\\\cdots &\cdots \\\ln {\frac {\Pr(Y_{i}=K-1)}{\Pr(Y_{i}=K)}}&={\boldsymbol {\beta }}_{K-1}\cdot \mathbf {X} _{i}\\\end{aligned}}} {\displaystyle {\begin{aligned}\Pr(Y_{i}=1)&={\Pr(Y_{i}=K)}e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}\\\Pr(Y_{i}=2)&={\Pr(Y_{i}=K)}e^{{\boldsymbol {\beta }}_{2}\cdot \mathbf {X} _{i}}\\\cdots &\cdots \\\Pr(Y_{i}=K-1)&={\Pr(Y_{i}=K)}e^{{\boldsymbol {\beta }}_{K-1}\cdot \mathbf {X} _{i}}\\\end{aligned}}} {\displaystyle \Pr(Y_{i}=K)=1-\sum _{k=1}^{K-1}\Pr(Y_{i}=k)=1-\sum _{k=1}^{K-1}{\Pr(Y_{i}=K)}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}\Rightarrow \Pr(Y_{i}=K)={\frac {1}{1+\sum _{k=1}^{K-1}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}}}} {\displaystyle {\begin{aligned}\Pr(Y_{i}=1)&={\frac {e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}{1+\sum _{k=1}^{K-1}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}}}\\\\\Pr(Y_{i}=2)&={\frac {e^{{\boldsymbol {\beta }}_{2}\cdot \mathbf {X} _{i}}}{1+\sum _{k=1}^{K-1}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}}}\\\cdots &\cdots \\\Pr(Y_{i}=K-1)&={\frac {e^{{\boldsymbol {\beta }}_{K-1}\cdot \mathbf {X} _{i}}}{1+\sum _{k=1}^{K-1}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}}}\\\end{aligned}}} {\displaystyle {\begin{aligned}\ln \Pr(Y_{i}=1)&={\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}-\ln 
As a set of log-linear models, one regression for each of the K possible outcomes, with a shared normalizing term ln Z:

\begin{aligned}
\ln \Pr(Y_{i}=1) &= {\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}-\ln Z\\
\ln \Pr(Y_{i}=2) &= {\boldsymbol {\beta }}_{2}\cdot \mathbf {X} _{i}-\ln Z\\
&\cdots \\
\ln \Pr(Y_{i}=K) &= {\boldsymbol {\beta }}_{K}\cdot \mathbf {X} _{i}-\ln Z
\end{aligned}

The term {-\ln Z} enforces {\sum _{k=1}^{K}\Pr(Y_{i}=k)=1}. Exponentiating both sides,

\Pr(Y_{i}=k) = \frac{1}{Z}\,e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}, \qquad k = 1, \ldots, K.

Summing over all K outcomes determines Z:

1 = \sum _{k=1}^{K}\Pr(Y_{i}=k) = \frac{1}{Z}\sum _{k=1}^{K}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}
\quad\Longrightarrow\quad
Z = \sum _{k=1}^{K}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}},

so the class probabilities take the form

\Pr(Y_{i}=c) = \frac{e^{{\boldsymbol {\beta }}_{c}\cdot \mathbf {X} _{i}}}{\sum _{k=1}^{K}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}}.

This is the softmax function

\operatorname {softmax}(k, x_{1},\ldots ,x_{n}) = \frac{e^{x_{k}}}{\sum _{i=1}^{n}e^{x_{i}}},

a smooth approximation of the indicator of the maximum,

f(k) = \begin{cases}1 & \text{if } k = \operatorname {arg\,max}(x_{1},\ldots ,x_{n}),\\ 0 & \text{otherwise},\end{cases}

so that

\Pr(Y_{i}=c) = \operatorname {softmax}(c, {\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i},\ldots ,{\boldsymbol {\beta }}_{K}\cdot \mathbf {X} _{i}).

The coefficient vectors {\beta _{k}} are not identified: adding any constant vector C to every {\beta _{k}} leaves the probabilities unchanged, since

\begin{aligned}
\frac{e^{({\boldsymbol {\beta }}_{c}+C)\cdot \mathbf {X} _{i}}}{\sum _{k=1}^{K}e^{({\boldsymbol {\beta }}_{k}+C)\cdot \mathbf {X} _{i}}}
&= \frac{e^{{\boldsymbol {\beta }}_{c}\cdot \mathbf {X} _{i}}\,e^{C\cdot \mathbf {X} _{i}}}{\sum _{k=1}^{K}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}\,e^{C\cdot \mathbf {X} _{i}}}\\
&= \frac{e^{C\cdot \mathbf {X} _{i}}\,e^{{\boldsymbol {\beta }}_{c}\cdot \mathbf {X} _{i}}}{e^{C\cdot \mathbf {X} _{i}}\sum _{k=1}^{K}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}}\\
&= \frac{e^{{\boldsymbol {\beta }}_{c}\cdot \mathbf {X} _{i}}}{\sum _{k=1}^{K}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}}.
\end{aligned}

Choosing C = -{\boldsymbol {\beta }}_{K} removes this ambiguity, leaving K − 1 free coefficient vectors:

\begin{aligned}
{\boldsymbol {\beta }}'_{1} &= {\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{K}\\
&\cdots \\
{\boldsymbol {\beta }}'_{K-1} &= {\boldsymbol {\beta }}_{K-1}-{\boldsymbol {\beta }}_{K}\\
{\boldsymbol {\beta }}'_{K} &= 0,
\end{aligned}

and the probabilities become

\begin{aligned}
\Pr(Y_{i}=1) &= \frac{e^{{\boldsymbol {\beta }}'_{1}\cdot \mathbf {X} _{i}}}{1+\sum _{k=1}^{K-1}e^{{\boldsymbol {\beta }}'_{k}\cdot \mathbf {X} _{i}}}\\
&\cdots \\
\Pr(Y_{i}=K-1) &= \frac{e^{{\boldsymbol {\beta }}'_{K-1}\cdot \mathbf {X} _{i}}}{1+\sum _{k=1}^{K-1}e^{{\boldsymbol {\beta }}'_{k}\cdot \mathbf {X} _{i}}}\\
\Pr(Y_{i}=K) &= \frac{1}{1+\sum _{k=1}^{K-1}e^{{\boldsymbol {\beta }}'_{k}\cdot \mathbf {X} _{i}}}.
\end{aligned}

The same model arises from a latent-variable formulation. Introduce K latent utilities

\begin{aligned}
Y_{i,1}^{\ast } &= {\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}+\varepsilon _{1}\\
Y_{i,2}^{\ast } &= {\boldsymbol {\beta }}_{2}\cdot \mathbf {X} _{i}+\varepsilon _{2}\\
&\cdots \\
Y_{i,K}^{\ast } &= {\boldsymbol {\beta }}_{K}\cdot \mathbf {X} _{i}+\varepsilon _{K},
\end{aligned}

where the errors {\varepsilon _{k}\sim \operatorname {EV} _{1}(0,1)} are independent standard type-1 extreme value (Gumbel) variables, and let the observed outcome {Y_{i}} be the index of the largest latent utility {Y_{i,k}^{\ast }}:

\Pr(Y_{i}=k) = \Pr\bigl(\max(Y_{i,1}^{\ast },Y_{i,2}^{\ast },\ldots ,Y_{i,K}^{\ast }) = Y_{i,k}^{\ast }\bigr), \qquad k = 1, \ldots, K.

For the first outcome, for example,

\begin{aligned}
\Pr(Y_{i}=1) &= \Pr(Y_{i,1}^{\ast }>Y_{i,k}^{\ast }\ \forall \ k=2,\ldots ,K)\\
&= \Pr(Y_{i,1}^{\ast }-Y_{i,k}^{\ast }>0\ \forall \ k=2,\ldots ,K)\\
&= \Pr({\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}+\varepsilon _{1}-({\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}+\varepsilon _{k})>0\ \forall \ k=2,\ldots ,K)\\
&= \Pr(({\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{k})\cdot \mathbf {X} _{i}>\varepsilon _{k}-\varepsilon _{1}\ \forall \ k=2,\ldots ,K).
\end{aligned}

The difference of the errors has a logistic distribution: if {X\sim \operatorname {EV} _{1}(a,b)} and {Y\sim \operatorname {EV} _{1}(a,b)} independently, then {X-Y\sim \operatorname {Logistic} (0,b)}; and if {X\sim \operatorname {Logistic} (0,1)}, then {bX\sim \operatorname {Logistic} (0,b)}.
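The softmax form of the class probabilities can be computed directly. Below is a small self-contained Python sketch (`softmax` and `predict_proba` are illustrative helper names, not from any particular library), using the standard max-subtraction trick for numerical stability:

```python
import math

def softmax(scores):
    """Numerically stable softmax: subtract the max before exponentiating,
    which leaves the result unchanged (the shift-invariance shown above)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def predict_proba(betas, x):
    """Class probabilities Pr(Y = k | x) for a multinomial logit model.
    betas: list of K coefficient vectors beta_k; x: feature vector X_i."""
    scores = [sum(b_j * x_j for b_j, x_j in zip(b, x)) for b in betas]
    return softmax(scores)

# Toy example with K = 3 classes and 2 features (coefficients are made up).
p = predict_proba([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]], [2.0, 1.0])
```

The probabilities always sum to 1 regardless of the scores, reflecting the normalization by Z in the derivation.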
Model Gain-Scheduled Control Systems in Simulink - MATLAB & Simulink 1-D Lookup Table, 2-D Lookup Table, n-D Lookup Table — For a scalar gain that depends on one, two, or more scheduling variables. Matrix Interpolation — For a matrix-valued gain that depends on one, two, or three scheduling variables. (This block is in the Simulink Extras library.) MATLAB Function block — When you have a functional expression relating the gains to the scheduling variables, use a MATLAB Function block. If the expression is a smooth function, using a MATLAB function can result in smoother gain variations than a lookup table. Also, if you use a code-generation product such as Simulink Coder™ to implement the controller in hardware, a MATLAB function can result in a more memory-efficient implementation than a lookup table. You can use systune to tune gain schedules implemented as either lookup tables or MATLAB functions. See Tune Gain Schedules in Simulink. As an example, the model rct_CSTR includes a PI controller and a lead compensator in which the controller gains are implemented as lookup tables using 1-D Lookup Table blocks. Open that model and examine the controllers. For matrix-valued gains, consider a gain-scheduled observer-based state-feedback controller of the form \begin{array}{c}{\dot{x}}_{e}=A{x}_{e}+Bu+L\left(y-C{x}_{e}-Du\right)\\ u=-K{x}_{e},\end{array} MATLAB Function block — Specify a MATLAB function that takes scheduling variables and returns matrix values. Matrix Interpolation block — Specify a lookup table to associate a matrix value with each scheduling-variable breakpoint. Between breakpoints, the block interpolates the matrix elements. (This block is in the Simulink Extras library.) You can tune matrix-valued gain schedules implemented as either MATLAB Function blocks or as Matrix Interpolation blocks. However, to tune a Matrix Interpolation block, you must set Simulate using to Interpreted execution. See the Matrix Interpolation block reference page for information about simulation modes.
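Conceptually, a 1-D Lookup Table gain block maps a measured scheduling variable to a gain by interpolating between breakpoints. A rough Python sketch of that behavior follows; the breakpoint and gain values are illustrative assumptions, not taken from rct_CSTR, and endpoint clamping is used for simplicity (Simulink offers several extrapolation options):

```python
def lookup_gain(breakpoints, gains, x):
    """1-D lookup with linear interpolation between breakpoints and
    clamping at the endpoints, as a sketch of a scheduled scalar gain."""
    if x <= breakpoints[0]:
        return gains[0]
    if x >= breakpoints[-1]:
        return gains[-1]
    for i in range(len(breakpoints) - 1):
        x0, x1 = breakpoints[i], breakpoints[i + 1]
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)          # interpolation fraction
            return gains[i] + t * (gains[i + 1] - gains[i])

# Hypothetical schedule: a proportional gain as a function of one
# scheduling variable (e.g. an operating-point measurement).
bp = [0.0, 2.0, 4.0, 6.0, 8.0]
kp = [1.0, 1.5, 2.5, 4.0, 6.0]
```

A smooth functional expression for the gain, evaluated in a MATLAB Function block instead, would avoid the piecewise-linear kinks this table produces at each breakpoint.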
15 February 2011 Quiver flag varieties and multigraded linear series Alastair Craw (School of Mathematics and Statistics, University of Glasgow) This paper introduces a class of smooth projective varieties that generalize, and share many properties with, partial flag varieties of type A. The quiver flag variety {M}_{\vartheta }\left(Q,\underline{r}\right) of a finite acyclic quiver Q (with a unique source) and a dimension vector \underline{r} is a fine moduli space of stable representations of Q . Quiver flag varieties are Mori dream spaces, they are obtained via a tower of Grassmann bundles, and their bounded derived category of coherent sheaves is generated by a tilting bundle. We define the multigraded linear series of a weakly exceptional sequence of locally free sheaves \underline{E}=\left({O}_{X},{E}_{1},\dots ,{E}_{\rho }\right) on a projective scheme X to be the quiver flag variety |\underline{E}|:={M}_{\vartheta }\left(Q,\underline{r}\right) of a pair \left(Q,\underline{r}\right) encoded by \underline{E} . When each {E}_{i} is globally generated, we obtain a morphism {\varphi }_{|\underline{E}|}:X\to |\underline{E}| , realizing each {E}_{i} as the pullback of a tautological bundle. As an application, we introduce the multigraded Plücker embedding of a quiver flag variety. Alastair Craw. "Quiver flag varieties and multigraded linear series." Duke Math. J. 156 (3) 469 - 500, 15 February 2011. https://doi.org/10.1215/00127094-2010-217 Primary: 14D22, 16G20, 18E30
input: (minibatch, in_channels, iH, iW)
weight (filters): (out_channels, in_channels / groups, kH, kW)
bias: (out_channels)
padding — implicit paddings on both sides of the input. Can be a string {'valid', 'same'}, a single number, or a tuple (padH, padW). Default: 0. padding='valid' is the same as no padding. padding='same' pads the input so the output has the same shape as the input. However, this mode doesn't support any stride values other than 1.
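The padding modes can be related to the usual convolution output-size formula. The helpers below are an illustrative Python sketch, not part of PyTorch; `same_padding` assumes stride 1 and an odd kernel size, matching the documented restriction on padding='same':

```python
def conv2d_out_size(in_size, kernel, stride=1, padding=0):
    """Spatial output size of a 2-D convolution along one dimension:
    floor((in + 2*pad - kernel) / stride) + 1."""
    return (in_size + 2 * padding - kernel) // stride + 1

def same_padding(kernel):
    """Per-side padding that keeps the output size equal to the input
    size, valid when stride == 1 and the kernel size is odd."""
    return (kernel - 1) // 2
```

For example, a 3x3 kernel with stride 1 needs one pixel of padding per side to preserve a 32x32 input, while the same kernel with stride 2 halves the spatial size.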
EuDML | Reductive compactifications of semitopological semigroups.
Fattahi, Abdolmajid; Pourabdollah, Mohamad Ali; Sahleh, Abbas. "Reductive compactifications of semitopological semigroups." International Journal of Mathematics and Mathematical Sciences 2003.51 (2003): 3277-3280. <http://eudml.org/doc/50470>.
Keywords: semigroup compactification; semitopological semigroup; continuous complex-valued function.
Related: Abdolmajid Fattahi, H. R. Ebrahimi Vishki, "Characterization of Eℱ-subcompactification".
Market capitalization The New York Stock Exchange on Wall Street, the world's largest stock exchange in terms of total market capitalization of its listed companies[1] Market capitalization is equal to the share price multiplied by the number of shares outstanding.[2][3] Since outstanding stock is bought and sold in public markets, capitalization can be used as an indicator of public opinion of a company's net worth and is a determining factor in some forms of stock valuation. Market cap reflects only the equity value of a company. A firm's choice of capital structure has a significant impact on how the total value of a company is allocated between equity and debt. A more comprehensive measure is enterprise value (EV), which gives effect to outstanding debt, preferred stock, and other factors. For insurance firms, a value called the embedded value (EV) has been used. The total capitalization of stock markets or economic regions may be compared with other economic indicators (e.g. the Buffett indicator). The total market capitalization of all publicly traded companies in 2020 was approximately US$93 trillion.[4] Historical estimates of world market cap Total market capitalization of all publicly traded companies in the world from 1975 to 2020:[4]

Year    Market cap (in mil. US$)    % of GDP    Listed companies
1975    1,149,245                   27.2        14,577
1991    11,340,785                  56.8        24,666
1999    33,181,159                  115.1       38,414

Market cap is given by the formula MC = N × P, where MC is the market capitalization, N is the number of shares outstanding, and P is the market price per share. For example, if a company has 4 million shares outstanding and the closing price per share is $20, its market capitalization is then $80 million. If the closing price per share rises to $21, the market cap becomes $84 million. If it drops to $19 per share, the market cap falls to $76 million. This is in contrast to mercantile pricing, where purchase price, average price and sale price may differ due to transaction costs.
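The formula MC = N × P and the worked example translate directly to code; this is a minimal illustration, with `market_cap` a hypothetical helper name:

```python
def market_cap(shares_outstanding, price):
    """Market capitalization: MC = N * P."""
    return shares_outstanding * price

# The worked example: 4 million shares at $20, $21, and $19 per share.
N = 4_000_000
caps = [market_cap(N, p) for p in (20, 21, 19)]
```

Unlike mercantile pricing, a single price P is applied to all N shares, so the cap moves proportionally with the quoted share price.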
Not all of the outstanding shares trade on the open market. The number of shares trading on the open market is called the float. It is equal to or less than N because N includes shares that are restricted from trading. The free-float market cap uses just the floating number of shares in the calculation, generally resulting in a smaller number. Market cap terms Traditionally, companies were divided into large-cap, mid-cap, and small-cap.[citation needed][2] The terms mega-cap and micro-cap have also since come into common use,[5][6] and nano-cap is sometimes heard. Different numbers are used by different indexes;[7] there is no official definition of, or full consensus agreement about, the exact cutoff values. The cutoffs may be defined as percentiles rather than in nominal dollars. The definitions expressed in nominal dollars need to be adjusted over decades due to inflation, population change, and overall market valuation (for example, $1 billion was a large market cap in 1950, but it is not very large now), and market caps are likely to be different country to country. Cryptocurrencies The term market capitalization has also been applied to cryptocurrencies in recent years.[8][9] This is in contrast to the more traditional term, money supply, used to describe the total volume of a fiat currency. List of countries by stock market capitalization ^ "Market highlights for first half-year 2010" (PDF). World Federation of Exchanges. Archived from the original (PDF) on July 22, 2013. Retrieved May 29, 2013. ^ a b "Market Capitalization Definition". Retrieved April 2, 2013. ^ "Financial Times Lexicon". Archived from the original on September 25, 2016. Retrieved February 19, 2013. ^ a b "Market capitalization of listed domestic companies (current US$) | Data". data.worldbank.org. Retrieved September 20, 2021. ^ "Mega Cap Definition". Retrieved April 2, 2013. ^ "Micro Cap Definition". Retrieved April 2, 2013. ^ "Definition of Market Capitalization". 
Archived from the original on October 1, 2020. Retrieved August 3, 2008. ^ Popper, Nathaniel (September 12, 2018). "When Cryptocurrencies Fluctuate, He Uses These Tech Tools to Keep Track". The New York Times. Retrieved August 13, 2021. ^ Vigna, Paul (January 23, 2018). "The Programmer at the Center of a $100 Billion Crypto Storm". wsj.com. Wall Street Journal. Retrieved August 13, 2021. How to Value Assets – from the Washington State (U.S.) government web site Year-end market capitalization by country – World Bank, 1988–2018
ForwardSubstitute - Maple Help

LinearAlgebra[Modular][ForwardSubstitute] - apply in-place forward substitution from a lower triangular mod m Matrix to a mod m Matrix or Vector

Calling Sequence
ForwardSubstitute(m, A, B, diagflag)

Parameters
m - modulus
A - mod m lower triangular Matrix
B - mod m Matrix or Vector to which to apply forward substitution
diagflag - boolean; indicates whether to assume diagonal entries are 1

Description
The ForwardSubstitute function applies the forward substitution described by the lower triangular part of the square mod m Matrix A to the mod m Matrix or Vector B. Note: It is assumed that A is in lower triangular form, or that only the lower triangular part is relevant; the upper triangular part of A is completely ignored. Forward substitution generally requires that m be prime. For composite m it can be computed in some cases; if it cannot (for example, when a diagonal entry is not invertible mod m), an error message is returned. The diagflag parameter is a boolean that indicates whether the diagonal of the lower triangular Matrix is considered to be the identity (true) or is used in the forward substitution (false). This option is most useful when applying forward substitution from a compact LU decomposition (see LUDecomposition), where the diagonal of the lower triangular factor is the identity and is not explicitly stored. The ForwardSubstitute function is used as one of the steps in the LUApply function. This command is part of the LinearAlgebra[Modular] package, so it can be used in the form ForwardSubstitute(..) only after executing the command with(LinearAlgebra[Modular]). However, it can always be used in the form LinearAlgebra[Modular][ForwardSubstitute](..). Construct and solve a lower triangular system.
> with(LinearAlgebra[Modular]):
> p := 97;
                                p := 97
> A := Mod(p, Matrix(4, 4, (i,j) -> if j <= i then rand() else 0 end if), integer[]):
> B := Mod(p, Matrix(4, 2, (i,j) -> rand()), integer[]):
> A, B;

    [77   0   0   0]       [60  39]
    [96  10   0   0]       [43  12]
    [86  58  36   0]   ,   [55   2]
    [80  22  44  39]       [24  71]

> X := Copy(p, B):
> ForwardSubstitute(p, A, X, false):
> X;

    [94  32]
    [ 4  82]
    [75  34]
    [94  58]

> Multiply(p, A, X) - B;

    [0  0]
    [0  0]
    [0  0]
    [0  0]

Lower triangular system with assumed diagonal of 1.

> p := 97;
                                p := 97
> A := Mod(p, Matrix(4, 4, (i,j) -> if j < i then rand() else 0 end if), float[8]):
> B := Mod(p, Vector[column](4, i -> rand()), float[8]):
> A, B;

    [ 0.   0.   0.  0.]       [57.]
    [45.   0.   0.  0.]       [65.]
    [29.  21.   0.  0.]   ,   [16.]
    [48.   7.  33.  0.]       [93.]

> X := Copy(p, B):
> ForwardSubstitute(p, A, X, true):
> X;

    [57.]
    [22.]
    [35.]
    [25.]

> AddMultiple(p, Multiply(p, A, X), X) - B;

    [0.]
    [0.]
    [0.]
    [0.]
Create lagged time series data - MATLAB lagmatrix YLag = lagmatrix(Y,lags) [YLag,TLag] = lagmatrix(Y,lags) LagTbl = lagmatrix(Tbl,lags) [___] = lagmatrix(___,Name=Value) YLag = lagmatrix(Y,lags) shifts the input regular series Y in time by the lags (positive) or leads (negative) in lags, and returns the matrix of shifted series YLag. [YLag,TLag] = lagmatrix(Y,lags) also returns a vector TLag representing the common time base for the shifted series relative to the original time base of 1, 2, 3, …, numObs. LagTbl = lagmatrix(Tbl,lags) shifts all variables in the input table or timetable Tbl, which represent regular time series, and returns the table or timetable of shifted series LagTbl. To select different variables in Tbl to shift, use the DataVariables name-value argument. [___] = lagmatrix(___,Name=Value) uses additional options specified by one or more name-value arguments, using any input-argument combination in the previous syntaxes. lagmatrix returns the output-argument combination for the corresponding input arguments. For example, lagmatrix(Tbl,1,Y0=zeros(1,5),DataVariables=1:5) lags, by one period, the first five variables in the input table Tbl and sets the presample of each series to 0. Create a bivariate time series matrix Y with five observations per series. Y = [1 -1; 2 -2 ;3 -3 ;4 -4 ;5 -5] Create a shifted matrix composed of the original Y and its first two lags. lags = [0 1 2]; XLag = lagmatrix(Y,lags) XLag = 5×6

     1    -1   NaN   NaN   NaN   NaN
     2    -2     1    -1   NaN   NaN
     3    -3     2    -2     1    -1
     4    -4     3    -3     2    -2
     5    -5     4    -4     3    -3

XLag is a 5-by-6 matrix: The first two columns contain the original data (lag 0). Columns 3 and 4 contain the data lagged by one unit. Columns 5 and 6 contain the data lagged by two units. 
By default, lagmatrix returns only values corresponding to the time base of the original data, and the function fills unknown presample values using NaNs. Create a shifted matrix composed of the original Y and its first two lags. Return the time base of the shifted series. [XLag,TLag] = lagmatrix(Y,lags); By default, lagmatrix returns the time base of the input data. Shift multiple time series, which are variables in tables, using the default options of lagmatrix. Load Data_Canada.mat, which contains yearly Canadian inflation and interest rate data in five series of the table DataTable. Create a timetable from the table of data. tail(TT) Time INF_C INF_G INT_S INT_M INT_L ___________ _______ _______ ______ ______ ______ 31-Dec-1987 4.2723 4.608 8.1692 9.4158 9.9267 31-Dec-1988 3.9439 4.5256 9.4158 9.7717 10.227 31-Dec-1989 4.8743 4.7258 12.016 10.203 9.9217 31-Dec-1990 4.6547 3.1015 12.805 11.193 10.812 31-Dec-1991 5.4633 2.8614 8.8301 9.1625 9.8067 31-Dec-1994 0.18511 0.60929 5.4168 7.7867 8.58 Create a timetable containing all series lagged by one year, the series themselves, and the series led by a year. 
lags = [1 0 -1]; LagTT = lagmatrix(TT,lags); head(LagTT) Time Lag1INF_C Lag1INF_G Lag1INT_S Lag1INT_M Lag1INT_L Lag0INF_C Lag0INF_G Lag0INT_S Lag0INT_M Lag0INT_L Lead1INF_C Lead1INF_G Lead1INT_S Lead1INT_M Lead1INT_L ___________ _________ _________ _________ _________ _________ _________ _________ _________ _________ _________ __________ __________ __________ __________ __________ 31-Dec-1954 NaN NaN NaN NaN NaN 0.6606 1.4468 1.4658 2.6683 3.255 0.077402 0.76162 1.5533 2.7908 3.1892 31-Dec-1955 0.6606 1.4468 1.4658 2.6683 3.255 0.077402 0.76162 1.5533 2.7908 3.1892 1.4218 3.0433 2.9025 3.7575 3.6058 31-Dec-1956 0.077402 0.76162 1.5533 2.7908 3.1892 1.4218 3.0433 2.9025 3.7575 3.6058 3.1546 2.3148 3.7775 4.565 4.125 31-Dec-1957 1.4218 3.0433 2.9025 3.7575 3.6058 3.1546 2.3148 3.7775 4.565 4.125 2.4828 1.3636 2.2925 3.4692 4.115 31-Dec-1958 3.1546 2.3148 3.7775 4.565 4.125 2.4828 1.3636 2.2925 3.4692 4.115 1.183 2.0722 4.805 4.9383 5.0492 31-Dec-1959 2.4828 1.3636 2.2925 3.4692 4.115 1.183 2.0722 4.805 4.9383 5.0492 1.2396 1.2139 3.3242 4.5192 5.1892 31-Dec-1960 1.183 2.0722 4.805 4.9383 5.0492 1.2396 1.2139 3.3242 4.5192 5.1892 1.0156 0.46074 2.8342 4.375 5.0583 31-Dec-1961 1.2396 1.2139 3.3242 4.5192 5.1892 1.0156 0.46074 2.8342 4.375 5.0583 1.1088 1.3737 4.0125 4.6 5.1008 LagTT is a timetable containing the shifted series. lagmatrix prefixes each variable name of the input timetable with Lagj or Leadj, depending on whether the series is a lag or a lead, with j indicating the number of shifting units. By default, lagmatrix shifts all variables in the input table. You can choose a subset of variables to shift by using the DataVariables name-value argument. For example, shift only the inflation rate series. 
LagTTINF = lagmatrix(TT,lags,DataVariables=["INF_C" "INF_G"]); head(LagTTINF) Time Lag1INF_C Lag1INF_G Lag0INF_C Lag0INF_G Lead1INF_C Lead1INF_G ___________ _________ _________ _________ _________ __________ __________ 31-Dec-1954 NaN NaN 0.6606 1.4468 0.077402 0.76162 31-Dec-1955 0.6606 1.4468 0.077402 0.76162 1.4218 3.0433 31-Dec-1956 0.077402 0.76162 1.4218 3.0433 3.1546 2.3148 31-Dec-1957 1.4218 3.0433 3.1546 2.3148 2.4828 1.3636 31-Dec-1958 3.1546 2.3148 2.4828 1.3636 1.183 2.0722 31-Dec-1959 2.4828 1.3636 1.183 2.0722 1.2396 1.2139 31-Dec-1960 1.183 2.0722 1.2396 1.2139 1.0156 0.46074 31-Dec-1961 1.2396 1.2139 1.0156 0.46074 1.1088 1.3737 Create a vector of univariate time series data. y = [0.1 0.4 -0.2 0.1 0.2]'; Create vectors representing presample and postsample data. y0 = [0.50; 0.75]*y(1) yF = [0.75; 0.50]*y(end) yF = 2×1 Shift the series by two units in both directions. Specify the presample and postsample data, and return a matrix containing shifted series for the entire time base. [YLag,TLag] = lagmatrix(y,lags,Y0=y0,YF=yF) YLag = 5×3 TLag = 5×1 Because the presample and postsample have enough observations to cover the time base of the input data, the shifted series YLag is completely specified (it does not contain NaN entries). Shift the series in the same way, but return a matrix containing shifted series for the entire time base by specifying "full" for the Shape name-value argument. [YLagFull,TLagFull] = lagmatrix(y,lags,Y0=y0,YF=yF,Shape="full") YLagFull = 9×3 TLagFull = 9×1 Because the presample and postsample do not contain enough observations to cover the full time base, which includes presample through postsample times, lagmatrix fills unknown sample units using NaN values. Y — Time series data Time series data, specified as a numObs-by-numVars numeric matrix. Each column of Y corresponds to a variable, and each row corresponds to an observation. 
lags — Data shifts integer | integer-valued vector Data shifts, specified as an integer or integer-valued vector of length numShifts. Lags are positive integers, which shift the input series forward over the time base. Leads are negative integers, which shift the input series backward over the time base. lagmatrix applies each specified shift in lags, in order, to each input series. Shifts of regular time series have units of one time step. If Tbl is a timetable, it must represent a sample with a regular datetime time step (see isregular). Specify numVars variables to shift by using the DataVariables argument. The selected variables must be numeric. Example: lagmatrix(Tbl,1,Y0=zeros(1,5),DataVariables=1:5) lags, by one period, the first five variables in the input table Tbl and sets the presample of each series to 0. Y0 — Presample data NaN (default) | numeric matrix | table | timetable Presample data to backward fill lagged series, specified as a matrix with numVars columns, or a table or timetable. For a table or timetable, the DataVariables name-value argument selects the variables in Y0 to shift. Y0 must have the same data type as the input data. Timetables must have regular sample times preceding times in Tbl. lagmatrix fills required presample values from the end of Y0. Example: Y0=zeros(size(Y,2),2) YF — Postsample data to front fill led series Postsample data to frontward fill led series, specified as a matrix with numVars columns, or a table or timetable. For a table or timetable, the DataVariables name-value argument selects the variables in YF to shift. The default for postsample data is NaN. YF must have the same data type as the input data. Timetables must have regular sample times following times in Tbl. lagmatrix fills required postsample values from the beginning of YF. 
Example: YF=ones(size(Y,2),3) DataVariables — Variables in Tbl, Y0, and YF Variables in Tbl, Y0, and YF, from which lagmatrix creates shifted time series data, specified as a string vector or cell vector of character vectors containing variable names in Tbl.Properties.VariableNames, or an integer or logical vector representing the indices of names. The selected variables must be numeric. Shape — Part of shifted series to appear in outputs "same" (default) | "full" | "valid" | character vector Part of the shifted series to appear in the outputs, specified as one of these values:

"full" — Outputs contain all values in the input time series data and all specified presample Y0 or postsample YF values on an expanded time base.
"same" — Outputs contain only values on the original time base.
"valid" — Outputs contain values for times at which all series have specified (non-NaN) values.

To illustrate the shape of the output shifted time series for each value of Shape, suppose the input time series data is a 2-D series with numObs = T observations \left[\begin{array}{cc}{y}_{1,t}& {y}_{2,t}\end{array}\right], and lags is [1 0 -1]. The output shifted series is a 6-column matrix: (T+2)-by-6 for "full", T-by-6 for "same", and (T−2)-by-6 for "valid". Example: Shape="full" YLag — Shifted time series variables Shifted time series variables in Y, returned as a numeric matrix. lagmatrix returns YLag when you supply the input Y. Columns are, in order, all series in Y shifted by lags(1), all series in Y shifted by lags(2), …, all series in Y shifted by lags(end). Rows depend on the value of the Shape name-value argument. For example, suppose Y is the 2-D time series of numObs = T observations \left[\begin{array}{cc}{y}_{1,t}& {y}_{2,t}\end{array}\right], lags is [1 0 -1], and Shape is "full". 
YLag is the (T+2)-by-6 matrix \left[\begin{array}{cccccc}NaN& NaN& NaN& NaN& {y}_{1,1}& {y}_{2,1}\\ NaN& NaN& {y}_{1,1}& {y}_{2,1}& {y}_{1,2}& {y}_{2,2}\\ {y}_{1,1}& {y}_{2,1}& {y}_{1,2}& {y}_{2,2}& {y}_{1,3}& {y}_{2,3}\\ ⋮& ⋮& ⋮& ⋮& ⋮& ⋮\\ {y}_{1,T-2}& {y}_{2,T-2}& {y}_{1,T-1}& {y}_{2,T-1}& {y}_{1,T}& {y}_{2,T}\\ {y}_{1,T-1}& {y}_{2,T-1}& {y}_{1,T}& {y}_{2,T}& NaN& NaN\\ {y}_{1,T}& {y}_{2,T}& NaN& NaN& NaN& NaN\end{array}\right]. TLag — Common time base for the shifted series Common time base for the shifted series relative to the original time base of 1, 2, 3, …, numObs, returned as a vector of length equal to the number of observations in YLag. lagmatrix returns TLag when you supply the input Y. Series with lags (lags > 0) have higher indices; series with leads (lags < 0) have lower indices. For example, the value of TLag for the example in the YLag output description is the column vector with entries 0:(T+1). LagTbl — Shifted time series variables and common time base Shifted time series variables and common time base, returned as a table or timetable, the same data type as Tbl. lagmatrix returns LagTbl when you supply the input Tbl. LagTbl contains the outputs YLag and TLag. The following conditions apply: Each lagged variable of LagTbl has a label Lagjvarname, where varname is the corresponding variable name in DataVariables and j is lag j in lags. Each lead variable has a label Leadjvarname, where j is lead j in lags. If LagTbl is a table, the variable labeled TLag contains TLag. If LagTbl is a timetable, the Time variable contains TLag.
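The core shifting rule (the default Shape="same" with NaN fill, columns ordered by the entries of lags) can be sketched in a few lines of Python; `lagmatrix` here is a pure-Python stand-in for illustration, not the MATLAB function:

```python
NAN = float("nan")

def lagmatrix(Y, lags):
    """Shift each column of Y by each shift in lags (positive = lag,
    negative = lead), filling unknown values with NaN.
    Y: list of rows (numObs x numVars).
    Returns a numObs x (numVars * len(lags)) list of rows, with columns
    grouped by shift: all series at lags[0], then all series at lags[1], ..."""
    T = len(Y)
    out = []
    for t in range(T):
        row = []
        for lag in lags:
            s = t - lag                       # source time index for this shift
            for v in range(len(Y[0])):
                row.append(Y[s][v] if 0 <= s < T else NAN)
        out.append(row)
    return out

# The bivariate example from above: original series plus its first two lags.
Y = [[1, -1], [2, -2], [3, -3], [4, -4], [5, -5]]
XLag = lagmatrix(Y, [0, 1, 2])
```

Supporting Shape="full" would amount to iterating t over an expanded range, and presample/postsample fill (Y0, YF) would replace the NaN branch with lookups into those arrays.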
Bit numbering Convention to identify bit positions In computing, bit numbering is the convention used to identify the bit positions in a binary number. Bit significance and indexing The binary representation of decimal 149, with the LSB highlighted. The LSB represents a value of 1. The unsigned binary representation of decimal 149, with the MSB highlighted. The MSB represents a value of 128. In computing, the least significant bit (LSB) is the bit position in a binary integer representing the binary 1s place of the integer. Similarly, the most significant bit (MSB) represents the highest-order place of the binary integer. The LSB is sometimes referred to as the low-order bit or right-most bit, due to the convention in positional notation of writing less significant digits further to the right. The MSB is similarly referred to as the high-order bit or left-most bit. In both cases, the LSB and MSB correlate directly to the least significant digit and most significant digit of a decimal integer. Bit indexing correlates to the positional notation of the value in base 2. For this reason, bit index is not affected by how the value is stored on the device, such as the value's byte order. Rather, it is a property of the numeric value in binary itself. This is often utilized in programming via bit shifting: A value of 1 << n corresponds to the nth bit of a binary integer (with a value of 2^n). Least significant bit in digital steganography In digital steganography, sensitive messages may be concealed by manipulating and storing information in the least significant bits of an image or a sound file. The user may later recover this information by extracting the least significant bits of the manipulated pixels to recover the original message. 
This allows the storage or transfer of digital information to remain concealed. Unsigned integer example This table illustrates the decimal value 149 and the location of its LSB. In this example, the unit-value position (n = 0) is bit position 0. MSB stands for most significant bit, and LSB for least significant bit.

Binary (decimal: 149):    1     0     0     1     0     1     0     1
Bit weight (2^n):       128    64    32    16     8     4     2     1
Bit position (n):         7     6     5     4     3     2     1     0
                        MSB                                       LSB

Most- vs least-significant bit first The expressions most significant bit first and least significant bit first indicate the ordering of the sequence of bits in the bytes sent over a wire in a serial transmission protocol or in a stream (e.g. an audio stream). LSB 0 bit numbering LSB 0: A container for an 8-bit binary number with the highlighted least significant bit assigned the bit number 0 When the bit numbering starts at zero for the least significant bit (LSB), the numbering scheme is called LSB 0.[1] This bit numbering method has the advantage that for any unsigned number the value of the number can be calculated by using exponentiation with the bit number and a base of 2.[2] The value of an unsigned binary integer is therefore {\displaystyle \sum _{i=0}^{N-1}b_{i}\cdot 2^{i}} where bi denotes the value of the bit with number i, and N denotes the number of bits in total. MSB 0 bit numbering MSB 0: A container for an 8-bit binary number with the highlighted most significant bit assigned the bit number 0 When the bit numbering starts at zero for the most significant bit (MSB), the numbering scheme is called MSB 0. The value of an unsigned binary integer is therefore {\displaystyle \sum _{i=0}^{N-1}b_{i}\cdot 2^{N-1-i}} ALGOL 68's elem operator is effectively "MSB 1 bit numbering", as the bits are numbered from left to right, with the first bit (bits elem 1) being the "most significant bit" and the expression (bits elem bits width) giving the "least significant bit". 
Similarly, when bits are coerced (typecast) to an array of Boolean ([ ]bool bits), the first element of this array (bits[lwb bits]) is again the "most significant bit". For MSB 1 numbering, the value of an unsigned binary integer is

{\displaystyle \sum _{i=1}^{N}b_{i}\cdot 2^{N-i}}

PL/I numbers BIT strings starting with 1 for the leftmost bit. The Fortran BTEST function uses LSB 0 numbering.

See also

Unit in the last place (ULP)
MAC address: Bit-reversed notation

References

^ Langdon, Glen G. (1982). Computer Design. Computeach Press Inc. p. 52. ISBN 0-9607864-0-6.
^ "Bit Numbers". Retrieved 2021-03-30.
Layer 2: The Blockchain Network - Lightbook

Blockchain networks are rapidly becoming ubiquitous, and have been applied to a vast array of problems and a varied cross-section of industries. The very first blockchain was used in the creation of Bitcoin, and continues to power the leading cryptocurrency to this day. This inaugural blockchain employs the Proof-of-Work (PoW) consensus protocol, an invention that solved a long-standing open problem in computer science and ensures continued security through ongoing resource allocation. Since the early successes of Bitcoin's blockchain, attempts to find better consensus protocols have spurred a renaissance of innovation. While such innovation is obviously vital to the growth and development of the industry, we feel that attempts to create new consensus protocols for the purpose of leaving PoW behind are somewhat misguided. Instead of re-inventing the wheel (namely, Proof-of-Work consensus), what is needed is a way to use the wheel in a more effective manner -- to harness it to build a machine that changes the world, and that connects the world.

Proof-of-Work Consensus, Re-Imagined

At Zenotta we believe that PoW is, and always will be, the fundamental consensus philosophy that gives a blockchain network its power. But just as in a responsible, functioning, fair society we do not allow runaway greed & monopolistic endeavours to dominate entirely, so too should the PoW consensus mechanism be tempered, to reduce monopolistic tendencies, prevent mining re-centralization, and avert adverse environmental effects (more on this in the next section below). However, the first question that must be addressed is that of scalability. In dealing with data, rather than tokens, a blockchain network must be able to scale to many orders of magnitude higher in transactions per second than current Proof-of-Work setups.
Data transactions would have a far higher velocity than monetary transactions, and traditional Proof-of-Work consensus approaches were designed to be deliberately slow, in order to give miners time to act in the event of a malicious party attempting to subvert the blockchain, and to provide a relatively equal opportunity for people to mine on standard-spec CPUs. In order to achieve this high scaling, we begin with the important insight that of the three roles carried out by a blockchain -- namely, block create (package transactions), block write (store on the ledger), and block verify (mine) -- it is only the last that needs to be distributed in order to solve the Byzantine Generals' Problem and achieve the required trustless security. It is therefore possible to separate out these three roles in a way that vastly simplifies and speeds up the mining throughput. Transactions per second can conservatively reach into the tens of thousands, while the Byzantine security of the ledger is preserved. Specialised nodes fulfilling each of the roles can perform their tasks in a dedicated fashion and communicate in parallel where possible, achieving optimal load-balancing and efficient time utilization. The Zenotta blockchain employs such an approach, as outlined below.

The Zenotta network protocol consists of three sub-networks built up from specialised nodes:

Compute Nodes that handle block creation, through packaging transactions
Mining Nodes that handle block mining, through next-generation PoW consensus
Storage Nodes that handle block writing, adding new blocks to the historical ledger

The compute nodes package transactions into blocks and send the blocks out to the miners. When the miners have completed their mining consensus along with all verification checks, the compute nodes add the mined block to the blockchain by sending it to the storage nodes.
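As a toy illustration of this create/mine/store role separation (not the Zenotta protocol itself; the function names, transaction strings, and the trivially low difficulty are all invented for the sketch):

```python
# Toy pipeline separating the three blockchain roles described above:
# compute (create block), mine (PoW verify), store (append to ledger).
# Illustrative only; not the Zenotta protocol.
import hashlib

def create_block(transactions, prev_hash):
    """Compute-node role: package transactions into a candidate block."""
    return {"txs": transactions, "prev": prev_hash, "nonce": 0}

def mine_block(block, difficulty=2):
    """Mining-node role: PoW search for a hash with leading zeros."""
    while True:
        payload = f'{block["txs"]}{block["prev"]}{block["nonce"]}'.encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith("0" * difficulty):
            return digest
        block["nonce"] += 1

ledger = []  # storage-node role: an append-only chain

def store_block(block, digest):
    ledger.append((block, digest))

prev = "0" * 64
for txs in (["alice->bob:5"], ["bob->carol:2"]):
    block = create_block(txs, prev)
    prev = mine_block(block)
    store_block(block, prev)

assert len(ledger) == 2
# Each block's "prev" field links back to the previous block's hash.
assert ledger[1][0]["prev"] == ledger[0][1]
```

In a real deployment each role runs on its own sub-network of nodes and the stages communicate in parallel; here they simply run in sequence.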
The compute nodes and storage nodes interact with each other via a RAFT consensus mechanism in order to maintain a single chain. At every stage, any decisions made by the compute nodes (e.g., selecting transactions to package) are fully and verifiably fair, employing uncontestable randomness that can be checked by any and all nodes in the mining network. This specialisation allows transaction block handling, validation and processing to be optimised, taking PoW from a few transactions per second to thousands. The use of sub-networks introduces a novel mechanism to enable compliance without introducing control. Introducing data privacy, trade compliance and service levels as programmable elements through a sub-network with fair, transparent governance properties has the potential to save enterprises significant time and cost while dramatically reducing risk.

A Next Generation Proof-of-Work Protocol

Proof-of-Work is not without its problems. One of those is the knock-on effect of greed and the technological arms race in developing faster mining equipment, leading to a highly unequal and increasingly centralized network in terms of mining power. The table below demonstrates the sheer scale of the inequality in the distribution of hashrates. Single machines can reach 8 orders of magnitude greater hashrates than a standard CPU. When thousands of such machines are combined in parallel (so-called ASIC 'farms'), the ability to exert control over the blockchain becomes a real threat to the secure operation of the ledger; moreover, the political power wielded by such large centralized bodies over the blockchain protocol and its development can be substantial.

The vast range of hashrates among mining technology

To address the problem of extreme inequality in mining, we borrow from the natural world and the laws of thermodynamics for a system of interacting particles.
In the figure below, red represents a particle with a high temperature, while green represents a particle with a low temperature. In such a system of particles in thermal contact, the rate of heat transfer proceeds as a function of the difference in temperature between each particle pair.

In nature, heat flows between particles at a rate proportional to the temperature difference

In a blockchain network, decentralization is arguably the property that gives a blockchain its power and its utility. Inequality in hashrate is a threat to decentralization, and therefore should be minimized. To this end, one can imagine a blockchain network as consisting of 'particles' (in this analogy, mining nodes) of different temperatures corresponding to the hashrates of the nodes, and apply a similar approach as shown above to bring the (effective) hashrates closer to 'thermal balance', i.e., closer to a homogeneous distribution. This can be done peer-to-peer by determining the appropriate 'heat' transfer and then employing an individual mining difficulty to modify the effective hashrate of each miner. The full technical description of this balancing protocol is the subject of a separate paper, but briefly, the block processing power of the network (the 'node temperature') is moved towards a homogeneous distribution by decreasing the effective hashrate of the 'hot' nodes (e.g., the ASICs) and increasing it for the 'cold' nodes (e.g., the CPUs). This is done smoothly, allowing our node temperature quantity η to change between nodes pairwise using a simple differential equation, which for miner A takes the form:

\frac{d\eta_A}{dt} = -\alpha \Delta \eta

where Δη is the temperature difference for the node pair and α sets the rate of balancing. The Zenotta blockchain network functions in exactly this way. A secure, peer-to-peer algorithm adjusts the iterative difficulty of the hash function in order to balance the effective hashrate between a pair of miners, and this adjustment is further checked and verified by other miners.
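A minimal numerical sketch of this pairwise balancing rule, assuming Δη = η_A − η_B and a symmetric update for miner B (the exact protocol is specified in the separate paper referenced above):

```python
# Euler-step simulation of the pairwise balancing rule dη_A/dt = -α Δη,
# assuming Δη = η_A - η_B, consistent with the heat-flow analogy.
# Illustrative sketch only; not the actual Zenotta protocol.

def balance_pair(eta_a, eta_b, alpha=0.1, dt=1.0, steps=200):
    for _ in range(steps):
        delta = eta_a - eta_b
        eta_a += -alpha * delta * dt
        eta_b += alpha * delta * dt   # miner B gains what miner A sheds
    return eta_a, eta_b

# An ASIC-like 'hot' node and a CPU-like 'cold' node.
hot, cold = 1e8, 1.0
a, b = balance_pair(hot, cold)
# The pair converges toward the common mean; the total is conserved,
# mirroring heat exchange between two particles in thermal contact.
assert abs(a - b) < 1e-3 * (hot + cold)
assert abs((a + b) - (hot + cold)) < 1e-3
```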
The degree of balancing for the network is a tunable parameter, allowing the optimum to be found that ensures high efficiency, security, and in particular, inclusivity for those miners with less powerful machines who find themselves left behind in the standard PoW 'race to the top' that occurs in mining chip manufacturing.

The spectrum lying between the two extremes of the homogeneous and proportional models

In this way, decentralization, the primary goal of any public blockchain, is optimized. The network is no longer dominated by monopolistic forces that threaten the security of the consensus algorithm. While a fully balanced distribution of hashrates is very much the opposite extreme to the current high level of inequality in mining power (and would cause its own problems), a sensible middle ground somewhere on the spectrum between the standard 'proportional model' of Bitcoin and the 'homogeneous model' that is the theoretical end state of the balancing algorithm would be a win-win scenario for a PoW network.

A Green Mining Solution

"If the bitcoin price goes up by 10x, you would expect the energy consumption of the network to also go up by 10x." (Christopher Bendiksen, CoinShares report on BTC mining, 2019)

The problem of greed in Bitcoin mining is considerable. One of the knock-on effects of the scramble to out-perform other miners and secure a larger piece of the payout has been a huge increase in the energy usage and carbon footprint of the Bitcoin blockchain. The figure below shows the rise of the energy consumption of the Bitcoin network over a 4.5-year period.

Bitcoin historical energy consumption in annualized TWh. Figure credit: LongHash.com

We wish to stress that the public perception and understanding of this energy consumption trend and its extrapolation into the future is often based on faulty reasoning and misinformation -- for a more balanced discussion we recommend this medium post.
However, no one can predict for certain how much more the fierce competition for mining dominance will fuel the carbon footprint of Bitcoin (and other PoW blockchains), and of course, finding greener solutions to resource-intensive industries is vital in the fight against climate change. To this end, Zenotta's next-generation Proof-of-Work consensus protocol uses the balancing of effective hashrate to create a green mining network. Although this may seem counterintuitive, given that Proof-of-Work was designed to be energy-intensive, reducing a miner's ability to be greedy and take a far larger slice of the total network hashrate has the effect of keeping the energy consumption to a minimum (several orders of magnitude lower than traditional Proof-of-Work consensus). At the same time, miners are not disincentivised from making use of dedicated, energy-efficient chips such as ASICs, but instead are limited in the fraction of the block time that their highly effective machines can crunch the hash. This allows the mining network to grow ever more efficient, as better and more powerful mining rigs are developed, while preventing this arms race from affecting the energy consumption of the network.

A Fairer Distribution of Wealth

The balanced nature of the mining network has implications that go beyond environmental benefits and establish a fairer and more inclusive economy for Smart Data. The on-ramp to PoW mining is incredibly steep, and all but a few with the most resources are precluded from joining. The Zenotta consensus algorithm allows the arms race that develops faster and more efficient processors to happen, but prevents it from muscling out those with average CPUs. With a network where even the least powerful miners can mine profitably, the responsibility for securing the blockchain can be shared by all. Through mining and helping to secure the network, individuals gain further ability to create Smart Data and participate in the Smart Data economy.
They also gain voting rights on aspects of the network protocol, which improves the democratic strength of the ecosystem.

A Stronger Data Economy

The mining network is the backbone & the facilitator of the transactions made on a blockchain. A strong and fair data economy needs a strong backbone and a fair means of operation. The long-established, tried & tested consensus mechanism -- Proof-of-Work -- provides the required strength, while a protocol designed to balance mining power and reduce inequality, inspired by the laws of thermodynamics, provides the required fairness.
ImagePeriodogram - Maple Help

ImagePeriodogram(img)

img - Image; input image

scaling = identical(sqr, "sqr", log, "log", sqrt, "sqrt", none, "none")
Specifies the scaling method used for the discrete Fourier transform of img. The default is log.

center = identical(none, "none", horizontal, "horizontal", vertical, "vertical", both, "both")
Specifies the position of the zero-frequency component. "none" and none mean that no shifting is applied. "horizontal" and horizontal mean that img is shifted along the first dimension. "vertical" and vertical mean that img is shifted along the second dimension. "both" and both mean that FFTShift with its default option is applied to img. The default is both.

The ImagePeriodogram(img) command returns the periodogram of img by applying the discrete Fourier transform and FFTShift.

with(ImageTools):
img := Read(cat(kernelopts(datadir), "/images/tree.jpg")):
p1 := ImagePeriodogram(img)
Embed(p1)
p2 := ImagePeriodogram(img, 'center' = 'horizontal'):
Embed(p2)
p3 := ImagePeriodogram(img, 'scaling' = 'sqrt'):
Embed(p3)

The ImageTools[ImagePeriodogram] command was introduced in Maple 2019.
Troubleshooting Frequency Response Estimation - MATLAB & Simulink - MathWorks América Latina

Time Response Not at Steady State
FFT Contains Large Harmonics at Frequencies Other than the Input Signal Frequency
Time Response Grows Without Bound
Time Response Is Discontinuous or Zero
Time Response Is Noisy
Time Response Shows Harmonics That Do Not Change Smoothly

If, after analyzing your frequency response estimation, the frequency response plot does not match the expected behavior of your system, you can use the time response and FFT plots to help you improve the results. If your estimation is slow or you run out of memory during estimation, see Managing Estimation Speed and Memory.

This time response has not reached steady state. This plot shows a steady-state time response.

Because frequency response estimation requires steady-state input and output signals, transients produce inaccurate estimation results. For sinestream input signals, transients sometimes interfere with the estimation either directly or indirectly through spectral leakage. For chirp input signals, transients interfere with estimation.

Cause: Model cannot initialize to steady state.
Actions: Increase the number of periods for frequencies that do not reach steady state by changing the NumPeriods and SettlingPeriods properties. See Modify Estimation Input Signals. Disable all time-varying source blocks in your model and repeat the estimation. See Effects of Time-Varying Source Blocks on Frequency Response Estimation.

Cause: (Sinestream input) Not enough periods for the output to reach steady state.
Actions: Increase the number of periods for frequencies that do not reach steady state by changing NumPeriods and SettlingPeriods. See Modify Estimation Input Signals. Check that filtering is enabled during estimation. You enable filtering by setting the ApplyFilteringInFRESTIMATE option to on. For information about how estimation uses filtering, see the frestimate reference page.
Cause: (Chirp input) Signal sweeps through the frequency range too quickly.
Action: Increase the simulation time by increasing NumSamples. See Modify Estimation Input Signals.

After you try the suggested actions, recompute the estimation. To recompute the estimation in a particular frequency range (only for sinestream input signals):

1. Determine the frequencies for which you want to recompute the estimation results. Then, extract a portion of the sinestream input signal at these frequencies using fselect. For example, these commands extract a sinestream input signal between 10 and 20 rad/s from the input signal input:

input2 = fselect(input,10,20);

2. Modify the properties of the extracted sinestream input signal input2, as described in Modify Estimation Input Signals.

3. Estimate the frequency response sysest2 with the modified input signal using frestimate.

4. Merge the original estimated frequency response sysest and the recomputed estimated frequency response sysest2. Remove data from sysest at the frequencies in sysest2 using fdel:

sysest = fdel(sysest,input2.Frequency)

Then concatenate the original and recomputed responses using fcat:

sys_combined = fcat(sysest2,sysest)

5. Analyze the recomputed frequency response, as described in Analyze Estimated Frequency Response.

For an example of frequency response estimation with time-varying source blocks, see Effects of Time-Varying Source Blocks on Frequency Response Estimation.

When the FFT plot shows large amplitudes at frequencies other than the input signal, your model is operating outside the linear range. This condition can cause problems when you want to analyze the response of your linear system to small perturbations. For models operating in the linear range, the input amplitude A1 in y(t) must be larger than the amplitudes of the other harmonics, A2 and A3.
u(t) = A_1 \sin(\omega_1 t + \varphi_1)
y(t) = A_1 \sin(\omega_1 t + \varphi_1) + A_2 \sin(\omega_2 t + \varphi_2) + A_3 \sin(\omega_3 t + \varphi_3) + \dots

Adjust the amplitude of your input signal to decrease the impact of other harmonics, and repeat the estimation. Typically, you should decrease the input amplitude level to keep the model operating in the linear range. For more information about modifying signal amplitudes, see one of the following:

When the time response grows without bound, frequency response estimation results are inaccurate. Frequency response estimation is only accurate close to the operating point. Try the suggested actions listed in the table and repeat the estimation.

Cause: Model is unstable.
Action: You cannot estimate the frequency response using frestimate. Instead, use exact linearization to get a linear representation of your model. See Linearize Simulink Model at Model Operating Point or the linearize reference page.

Cause: Stable model is not at steady state.
Action: Disable all source blocks in your model, and repeat the estimation using a steady-state operating point. See Compute Steady-State Operating Points.

Cause: Stable model captures a growing transient.
Action: If the model captures a growing transient, increase the number of periods in the input signal by changing NumPeriods. Repeat the estimation using a steady-state operating point.

Discontinuities or noise in the time response indicate that the amplitude of your input signal is too small to overcome the effects of the discontinuous blocks in your model. Examples of discontinuous blocks include Quantizer, Backlash, and Dead Zone. If you used a sinestream input signal and estimated with filtering, turn filtering off in the Simulation Results Viewer to see the unfiltered time response.
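The harmonic check described above can be illustrated with a generic numerical sketch (Python/NumPy, not part of the MATLAB workflow): generate a response containing a small third harmonic and confirm that the fundamental amplitude A1 dominates.

```python
# Checking harmonic content with an FFT: a response dominated by A1 at the
# input frequency indicates near-linear operation, while large A2, A3, ...
# indicate nonlinearity. All signal parameters here are illustrative.
import numpy as np

fs, f1 = 1000.0, 10.0                  # sample rate and input frequency, Hz
t = np.arange(0, 1.0, 1 / fs)
# A mildly nonlinear response: fundamental plus a small 3rd harmonic.
y = 1.0 * np.sin(2 * np.pi * f1 * t) + 0.05 * np.sin(2 * np.pi * 3 * f1 * t)

spectrum = np.abs(np.fft.rfft(y)) * 2 / len(y)   # single-sided amplitudes
freqs = np.fft.rfftfreq(len(y), 1 / fs)

a1 = spectrum[np.argmin(np.abs(freqs - f1))]
a3 = spectrum[np.argmin(np.abs(freqs - 3 * f1))]
assert a1 > 10 * a3          # fundamental dominates: near-linear operation
assert abs(a1 - 1.0) < 0.01  # recovered amplitude matches A1
```

Because the signal contains an integer number of periods, the amplitudes fall exactly on FFT bins and no windowing is needed.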
The following model with a Quantizer block shows an example of the impact of an input signal that is too small. When you estimate this model, the unfiltered simulation output includes discontinuities. Increase the amplitude of your input signal, and repeat the estimation. With a larger amplitude, the unfiltered simulated output of the model with a Quantizer block is smooth.

When the time response is noisy, frequency response estimation results may be biased. frestimate does not support estimating the frequency response of Simulink® models with blocks that model noise. Locate such blocks with frest.findSources and disable them using the BlocksToHoldConstant option of frestimate. If you need to estimate a model with noise, use frestimate to simulate an output signal from your Simulink model for estimation—without modifying your model. Then, use the Signal Processing Toolbox™ or System Identification Toolbox™ software to estimate a model.

To simulate the output of your model in response to a specified input signal:

Create a random input signal. For example:

You can also specify your own custom signal as a timeseries object. For example:

in_ts = timeseries(y,t);

Simulate the model to obtain the output signal. For example:

[sysest,simout] = frestimate(model,op,io,in_ts)

The second output argument of frestimate, simout, is a Simulink.Timeseries object that stores the simulated output. in_ts is the corresponding input data. Generate timeseries objects before using them with other MathWorks® products:

input = generateTimeseries(in_ts);
output = simout{1}.Data;

You can use data from timeseries objects directly in Signal Processing Toolbox software, or convert these objects to System Identification Toolbox data format. For examples, see Estimate Frequency Response Models with Noise Using Signal Processing Toolbox and Estimate Frequency Response Models with Noise Using System Identification Toolbox.
For a related example, see Disable Noise Sources During Frequency Response Estimation. The estimated frequency response result does not match the linear system Bode plot, possibly only over a certain frequency range. When the time responses show magnitudes that do not change smoothly, additional frequency components are affecting the response. These additional frequency components come from the defined input signal. When the input signal is created using frest.Sinestream, the default value of SamplesPerPeriod is 40. This default setting produces a coarse input signal, which causes the mismatch in the Bode plot. To create a smoother input signal, increase the value of the SamplesPerPeriod setting. For more information about setting SamplesPerPeriod, see the following:
Dictionary:Helmholtz equation - SEG Wiki

{\displaystyle \left(\nabla ^{2}+\kappa ^{2}\right)\psi =0}

where {\displaystyle \kappa =\omega /V}, {\displaystyle \omega } = angular frequency, and V = velocity.

Derivation of the Helmholtz equation

Given the homogeneous form of the scalar wave equation

{\displaystyle \left[\nabla ^{2}-{\frac {1}{V^{2}({\boldsymbol {x}})}}{\frac {\partial ^{2}}{\partial t^{2}}}\right]\Psi ({\boldsymbol {x}},t)=0}

where {\displaystyle {\boldsymbol {x}}\equiv (x_{1},x_{2},x_{3})} is position, {\displaystyle t} is time, {\displaystyle V({\boldsymbol {x}})} is the wavespeed, and {\displaystyle \Psi ({\boldsymbol {x}},t)} is the wave field, replace {\displaystyle \Psi ({\boldsymbol {x}},t)} by its Fourier transform representation

{\displaystyle \Psi ({\boldsymbol {x}},t)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\psi ({\boldsymbol {x}},\omega )e^{-i\omega t}\;d\omega }

noting that the second derivative of {\displaystyle \Psi ({\boldsymbol {x}},t)} is

{\displaystyle {\frac {\partial ^{2}}{\partial t^{2}}}\Psi ({\boldsymbol {x}},t)\equiv {\frac {1}{2\pi }}\int _{-\infty }^{\infty }(-i\omega )^{2}\psi ({\boldsymbol {x}},\omega )e^{-i\omega t}\;d\omega }

to obtain the following Fourier integral form

{\displaystyle {\frac {1}{2\pi }}\int _{-\infty }^{\infty }\left[\nabla ^{2}+{\frac {\omega ^{2}}{V^{2}({\boldsymbol {x}})}}\right]\psi ({\boldsymbol {x}},\omega )e^{-i\omega t}\;d\omega =0}

Because the only way for this Fourier integral representation to vanish is if its integrand vanishes, the Helmholtz equation appears

{\displaystyle \left[\nabla ^{2}+{\frac {\omega ^{2}}{V^{2}({\boldsymbol {x}})}}\right]\psi ({\boldsymbol {x}},\omega )=0}
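As a quick numerical sanity check (illustrative Python, not part of the dictionary entry), a 1-D plane wave e^{iκx} satisfies the corresponding 1-D Helmholtz equation ψ'' + κ²ψ = 0:

```python
# Numeric check that the plane wave psi(x) = e^{i kappa x} satisfies the
# 1-D Helmholtz equation psi'' + kappa^2 psi = 0, using a central finite
# difference for the second derivative. Values of kappa, x0, h are arbitrary.
import cmath

kappa = 3.0
h = 1e-4          # finite-difference step

def psi(x):
    return cmath.exp(1j * kappa * x)

x0 = 0.7
second_deriv = (psi(x0 + h) - 2 * psi(x0) + psi(x0 - h)) / h**2
residual = second_deriv + kappa**2 * psi(x0)
assert abs(residual) < 1e-5   # zero up to discretization and rounding error
```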
General FAQs - MonoX

Liquidity Pools are simply pools of tokens locked in a smart contract. The smart contract logic determines how the tokens function and how this capital is utilized by the liquidity pool. Users deposit tokens into the pool and thereby provide liquidity. AMMs are liquidity pool smart contracts that use an Automated Market Maker algorithm in order to use the liquidity in the pool for trading/exchanging. Until now, at a minimum, liquidity pools have always consisted of two tokens, Token A and Token B. These two tokens create a new market in which the pair can be traded. For example, a liquidity pool with an ETH/USDT pair means you can buy ETH in exchange for USDT and vice versa. Pricing is calculated using the constant product formula popularized by Uniswap:

x · y = k

where x is the reserve of Token A, y is the reserve of Token B, and k is the invariant. The constant product market maker algorithm makes sure that the product of the two token reserves in the liquidity pool always remains the same. As a result, the ratio of the tokens in the pool dictates the price, and the amount of liquidity in the pool affects slippage. Stablecoins are a type of cryptocurrency designed to minimize volatility. They are often pegged to a stable asset or basket of assets. For MonoX, the vUNIT stablecoin is pegged 1:1 with USD.

How do Single Token Liquidity Pools work?

Single Token Liquidity Pools function by grouping the deposited token into a virtual pair with our virtual unit stablecoin (vUNIT); instead of having the liquidity provider deposit both tokens of a pool pair, they only have to deposit one. In essence, liquidity providers only need to deposit "Token A" to the pool reserve, and each token is paired with the vUNIT stablecoin. There is no pool weighting, only an amount of Token A reserve in the pool based upon how much liquidity has been provided to the pool.
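A minimal sketch of constant product pricing (illustrative Python; the reserve sizes are hypothetical and no trading fees are modeled):

```python
# Constant-product pricing sketch: x * y = k. The 'vUNIT' pairing and
# reserve sizes are illustrative; real AMMs also charge a swap fee.

def swap_out(x_reserve, y_reserve, dx):
    """Amount of token Y received for depositing dx of token X."""
    k = x_reserve * y_reserve
    new_x = x_reserve + dx
    new_y = k / new_x          # the invariant k is preserved
    return y_reserve - new_y

# A pool of 1000 TOKEN virtually paired with 1000 vUNIT.
out_small = swap_out(1000.0, 1000.0, 1.0)     # tiny trade, little slippage
out_large = swap_out(1000.0, 1000.0, 500.0)   # large trade, heavy slippage

# Spot price is ~1 vUNIT per token, so a 1-token trade returns ~1 vUNIT...
assert 0.99 < out_small < 1.0
# ...while a 500-token trade returns far less than 500 due to slippage.
assert out_large < 400
```

This makes the FAQ's point concrete: the reserve ratio sets the price, and the depth of the pool determines how much a given trade moves it.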
Keplerite, Ca9(Ca0.5☐0.5)Mg(PO4)7, a new meteoritic and terrestrial phosphate isomorphous with merrillite, Ca9NaMg(PO4)7 | American Mineralogist | GeoScienceWorld

Institute of Earth Sciences, St. Petersburg State University, Universitetskaya Nab. 7/9, 199034 St. Petersburg, Russia
Faculty of Natural Sciences, Institute of Earth Sciences, University of Silesia, Bedzińska 60, 41-200 Sosnowiec, Poland
Natalia S. Vlasenko; Centre for Geo-Environmental Research and Modelling, St. Petersburg State University, Ulyanovskaya str. 1, 198504 St. Petersburg, Russia
Vladimir N. Bocharov; Edita V. Obolonskaya
The Mining Museum, St. Petersburg Mining University, 2, 21st Line, 199106 St. Petersburg, Russia

Sergey N. Britvin, Irina O. Galuskina, Natalia S. Vlasenko, Oleg S. Vereshchagin, Vladimir N. Bocharov, Maria G. Krzhizhanovskaya, Vladimir V. Shilovskikh, Evgeny V. Galuskin, Yevgeny Vapnik, Edita V. Obolonskaya; Keplerite, Ca9(Ca0.5☐0.5)Mg(PO4)7, a new meteoritic and terrestrial phosphate isomorphous with merrillite, Ca9NaMg(PO4)7. American Mineralogist 2021; 106 (12): 1917–1927. doi: https://doi.org/10.2138/am-2021-7834

Keplerite is a new mineral, the Ca-dominant counterpart of merrillite, the most abundant meteoritic phosphate. The isomorphous series merrillite-keplerite, Ca9NaMg(PO4)7-Ca9(Ca0.5☐0.5)Mg(PO4)7, represents the main reservoir of phosphate phosphorus in the solar system. The two minerals are related by a heterovalent substitution at the B-site of the crystal structure: 2Na+ (merrillite) → Ca2+ + ☐ (keplerite). Near-end-member keplerite of meteoritic origin occurs in the main-group pallasites and angrites. The detailed description of the mineral is based on the Na-free type material from the Marjalahti meteorite (a main-group pallasite).
Terrestrial keplerite was discovered in the pyrometamorphic rocks of the Hatrurim Basin in the northern part of the Negev desert, Israel. Keplerite grains in Marjalahti have an ovoidal to cloudy shape and reach 50 μm in size. The mineral is colorless, transparent with a vitreous luster. Cleavage was not observed. In transmitted light, keplerite is colorless and non-pleochroic. Uniaxial (−), ω = 1.622(1), ε = 1.619(1). Chemical composition (electron microprobe, wt%): CaO 48.84; MgO 3.90; FeO 1.33; P2O5 46.34, total 100.34. The empirical formula (O = 28 apfu) is Ca9.00(Ca0.33Fe2+0.20☐0.47)Σ1.00Mg1.04P6.97O28. The ideal formula is Ca9(Ca0.5☐0.5)Mg(PO4)7. Keplerite is trigonal, space group R3c; unit-cell parameters refined from single-crystal data are: a = 10.3330(4), c = 37.0668(24) Å, V = 3427.4(3) Å3, Z = 6. The calculated density is 3.122 g/cm3. The crystal structure has been solved and refined to R1 = 0.039 based on 1577 unique observed reflections [I > 2σ(I)]. A characteristic structural feature of keplerite is the partial (half-vacant) occupancy of the sixfold-coordinated B-site (denoted CaIIA in earlier works). The disorder caused by this cation vacancy is the most likely reason for the visually resolved splitting of the ν1 (symmetric stretching) (PO4) vibration mode in the Raman spectrum of keplerite. The mineral is an indicator of high-temperature environments characterized by extreme depletion of Na. The association of keplerite with "REE-merrillite" and stanfieldite provides evidence for the similarity of the temperature conditions that occurred in the Mottled Zone to those expected during the formation of pallasite meteorites and lunar rocks. Because of the cosmochemical significance of the merrillite-keplerite series, and by analogy to plagioclases, the Na-number measure, 100×Na/(Na+Ca) (apfu), is herein proposed for the characterization of solid solutions between merrillite and keplerite.
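The empirical formula can be reproduced from the reported microprobe analysis by normalizing cations to 28 oxygen atoms per formula unit. This Python sketch (standard molar masses; small rounding differences relative to the published values are expected) recovers the cation numbers:

```python
# Recompute cations per formula unit (apfu) from the reported oxide wt%,
# normalized to O = 28. Molar masses are standard values; minor rounding
# differences from the published formula are expected.

oxides = {  # wt%, (molar mass g/mol, cations per oxide, oxygens per oxide)
    "CaO":  (48.84,  56.077, 1, 1),
    "MgO":  ( 3.90,  40.304, 1, 1),
    "FeO":  ( 1.33,  71.844, 1, 1),
    "P2O5": (46.34, 141.945, 2, 5),
}

moles = {ox: wt / mm for ox, (wt, mm, _, _) in oxides.items()}
total_o = sum(moles[ox] * n_o for ox, (_, _, _, n_o) in oxides.items())
scale = 28 / total_o   # normalize so the formula unit carries 28 oxygens

apfu = {ox: moles[ox] * n_cat * scale
        for ox, (_, _, n_cat, _) in oxides.items()}

# Compare with the published Ca9.00(Ca0.33 Fe0.20 ☐0.47) Mg1.04 P6.97 O28:
assert abs(apfu["CaO"]  - 9.33) < 0.06   # total Ca = 9.00 + 0.33
assert abs(apfu["MgO"]  - 1.04) < 0.02
assert abs(apfu["FeO"]  - 0.20) < 0.01
assert abs(apfu["P2O5"] - 6.97) < 0.03
```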
The merrillite end-member, Ca9NaMg(PO4)7, has the Na-number = 10, whereas keplerite, Ca9(Ca0.5☐0.5)Mg(PO4)7, has Na-number = 0. Keplerite (IMA 2019-108) is named in honor of Johannes Kepler (1571–1630), a prominent German naturalist, for his contributions to astronomy and crystallography.
Time Series Regression IV: Spurious Regression - MATLAB & Simulink Example - MathWorks France

This example considers trending variables, spurious regression, and methods of accommodation in multiple linear regression models. It is the fourth in a series of examples on time series regression, following the presentation in previous examples.

Predictors that trend over time are sometimes viewed with suspicion in multiple linear regression (MLR) models. Individually, however, they need not affect ordinary least squares (OLS) estimation. In particular, there is no need to linearize and detrend each predictor. If response values are well-described by a linear combination of the predictors, an MLR model is still applicable, and classical linear model (CLM) assumptions are not violated. If, however, a trending predictor is paired with a trending response, there is the possibility of spurious regression, where t-statistics and overall measures of fit become misleadingly "significant." That is, the statistical significance of relationships in the model does not accurately reflect the causal significance of relationships in the data-generating process (DGP). To investigate, we begin by loading relevant data from the previous example Time Series Regression III: Influential Observations, and continue the analysis of the credit default model presented there:

One way that mutual trends arise in a predictor and a response is when both variables are correlated with a causally prior confounding variable outside of the model. The omitted variable (OV) becomes a part of the innovations process, and the model becomes implicitly restricted, expressing a false relationship that would not exist if the OV were included in the specification. Correlation between the OV and model predictors violates the CLM assumption of strict exogeneity.
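The spurious regression effect is easy to reproduce by simulation. This Python/NumPy sketch (illustrative, separate from the MATLAB example) regresses independent random walks on each other and counts how often the slope's t-statistic is misleadingly "significant":

```python
# Monte Carlo illustration of spurious regression: regressing one random
# walk on another, independent random walk produces |t| > 1.96 far more
# often than the nominal 5% false-positive rate.
import numpy as np

def ols_tstat(x, y):
    """t-statistic for the slope in y = b0 + b1*x + e."""
    n = len(x)
    xc, yc = x - x.mean(), y - y.mean()
    b1 = (xc @ yc) / (xc @ xc)            # OLS slope
    resid = yc - b1 * xc
    s2 = (resid @ resid) / (n - 2)        # residual variance
    return b1 / np.sqrt(s2 / (xc @ xc))   # slope / standard error

rng = np.random.default_rng(0)
n, trials = 100, 500
hits = 0
for _ in range(trials):
    x = np.cumsum(rng.standard_normal(n))   # two independent random walks
    y = np.cumsum(rng.standard_normal(n))
    if abs(ols_tstat(x, y)) > 1.96:
        hits += 1

rate = hits / trials
# Far above the 5% rate that i.i.d. data would give.
assert rate > 0.4
```

The two series share no causal link, yet the trends alone make most regressions appear significant, which is exactly the danger described above.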
When a model fails to account for a confounding variable, the result is omitted variable bias, where coefficients of specified predictors over-account for the variation in the response, shifting estimated values away from those in the DGP. Estimates are also inconsistent, since the source of the bias does not disappear with increasing sample size. Violations of strict exogeneity help model predictors track correlated changes in the innovations, producing overoptimistically small confidence intervals on the coefficients and a false sense of goodness of fit. To avoid underspecification, it is tempting to pad out an explanatory model with control variables representing a multitude of economic factors with only tenuous connections to the response. By this method, the likelihood of OV bias would seem to be reduced. However, if irrelevant predictors are included in the model, the variance of coefficient estimates increases, and so does the chance of false inferences about predictor significance. Even if relevant predictors are included, if they do not account for all of the OVs, then the bias and inefficiency of coefficient estimates may increase or decrease, depending, among other things, on correlations between included and excluded variables [1]. This last point is usually lost in textbook treatments of OV bias, which typically compare an underspecified model to a practically unachievable fully-specified model. Without experimental designs for acquiring data, and the ability to use random sampling to minimize the effects of misspecification, econometricians must be very careful about choosing model predictors. The certainty of underspecification and the uncertain logic of control variables makes the role of relevant theory especially important in model specification. Examples in this series Time Series Regression V: Predictor Selection and Time Series Regression VI: Residual Diagnostics describe the process in terms of cycles of diagnostics and respecification. 
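The mechanics of omitted variable bias are easy to reproduce in a short simulation. The following is a pure-Python sketch, independent of the MATLAB example, with a made-up DGP: the included predictor over-accounts for the variation that the omitted, correlated confounder would have explained, and the bias does not shrink with sample size.

```python
import random

random.seed(1)

# Hypothetical DGP: y = 1.0*x1 + 1.0*x2 + e, where the confounder x2
# is correlated with the included predictor x1 (x2 = 0.8*x1 + u).
n = 10000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.8 * a + random.gauss(0, 1) for a in x1]
y = [a + b + random.gauss(0, 1) for a, b in zip(x1, x2)]

# Underspecified OLS of y on x1 alone: slope = cov(x1, y) / var(x1).
mx = sum(x1) / n
my = sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x1, y)) / n
var = sum((a - mx) ** 2 for a in x1) / n
slope = cov / var

# The estimate converges to 1.0 + 0.8*1.0 = 1.8, not the DGP value 1.0:
# x1 absorbs the effect of the omitted x2.
```

Increasing `n` tightens the estimate around 1.8 rather than 1.0, illustrating the inconsistency described above.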
The goal is to converge to an acceptable set of coefficient estimates, paired with a series of residuals from which all relevant specification information has been distilled.

In the case of the credit default model introduced in the example Time Series Regression I: Linear Models, confounding variables are certainly possible. The candidate predictors are somewhat ad hoc, rather than the result of any fundamental accounting of the causes of credit default. Moreover, the predictors are proxies, dependent on other series outside of the model. Without further analysis of potentially relevant economic factors, evidence of confounding must be found in an analysis of model residuals.

Detrending is a common preprocessing step in econometrics, with different possible goals. Often, economic series are detrended in an attempt to isolate a stationary component amenable to ARMA analysis or spectral techniques. Just as often, series are detrended so that they can be compared on a common scale, as with per capita normalizations to remove the effect of population growth. In regression settings, detrending may be used to minimize spurious correlations.

A plot of the credit default data (see the example Time Series Regression I: Linear Models) shows that the predictor BBB and the response IGD are both trending. It might be hoped that trends could be removed by deleting a few atypical observations from the data. For example, the trend in the response seems mostly due to the single influential observation in 2001:

plot(dates,y0-detrend(y0),'m.-')
hold on
plot(datesd1,yd1-detrend(yd1),'g*-')
legend(respName0,'Trend','Trend with 2001 deleted','Location','NW')
title('{\bf Response}')

Deleting the point reduces the trend, but does not eliminate it. Alternatively, variable transformations are used to remove trends. This may improve the statistical properties of a regression model, but it complicates analysis and interpretation.
Any transformation alters the economic meaning of a variable, favoring the predictive power of a model over explanatory simplicity. The manner of trend-removal depends on the type of trend. One type of trend is produced by a trend-stationary (TS) process, which is the sum of a deterministic trend and a stationary process. TS variables, once identified, are often linearized with a power or log transformation, then detrended by regressing on time. The detrend function, used above, removes the least-squares line from the data. This transformation often has the side effect of regularizing influential observations.

Not all trends are TS, however. Difference-stationary (DS) processes, also known as integrated or unit root processes, may exhibit stochastic trends, without a TS decomposition. When a DS predictor is paired with a DS response, problems of spurious regression appear [2]. This is true even if the series are generated independently from one another, without any confounding. The problem is complicated by the fact that not all DS series are trending. Consider the following regressions between DS random walks with various degrees of drift. The coefficient of determination (R²) is computed in repeated realizations, and the distribution displayed.
For comparison, the distribution for regressions between random vectors (without an autoregressive dependence) is also displayed:

numSims = 1000;
T = 21; % number of observations, as in the credit default data
drifts = [0 0.1 0.2 0.3];
numModels = length(drifts);
Steps = randn(T,2,numSims);

% Regression between two random walks:
ResRW = zeros(numSims,T,numModels);
RSqRW = zeros(numSims,numModels);
for d = 1:numModels
    for s = 1:numSims
        Y = zeros(T,2);
        for t = 2:T
            Y(t,:) = drifts(d) + Y(t-1,:) + Steps(t,:,s);
        end
        % The compact regression formulation:
        %   MRW = fitlm(Y(:,1),Y(:,2));
        %   ResRW(s,:,d) = MRW.Residuals.Raw';
        %   RSqRW(s,d) = MRW.Rsquared.Ordinary;
        % is replaced by the following for
        % efficiency in repeated simulation:
        X = [ones(size(Y(:,1))),Y(:,1)];
        y = Y(:,2);
        Coeff = X\y;
        yHat = X*Coeff;
        res = y-yHat;
        yBar = mean(y);
        regRes = yHat-yBar;
        SSR = regRes'*regRes;
        SSE = res'*res;
        SST = SSR+SSE;
        RSq = 1-SSE/SST;
        ResRW(s,:,d) = res';
        RSqRW(s,d) = RSq;
    end
end

% Plot R-squared distributions:
figure
ax = gca;
[v(1,:),edges] = histcounts(RSqRW(:,1));
for i = 2:size(RSqRW,2)
    v(i,:) = histcounts(RSqRW(:,i),edges);
end
numBins = size(v,2);
ticklocs = edges(1:end-1)+diff(edges)/2;
names = cell(1,numBins);
for i = 1:numBins
    names{i} = sprintf('%0.5g-%0.5g',edges(i),edges(i+1));
end
bar(ax,ticklocs,v.');
set(ax,'XTick',ticklocs,'XTickLabel',names,'XTickLabelRotation',30);
fig = gcf;
CMap = fig.Colormap;
Colors = CMap(linspace(1,64,numModels),:);
legend(strcat({'Drift = '},num2str(drifts','%-2.1f')),'Location','North')
xlabel('{\it R}^2')
ylabel('Number of Simulations')
title('{\bf Regression Between Two Independent Random Walks}')
clear RSqRW

% Regression between two random vectors:
RSqR = zeros(numSims,1);
for s = 1:numSims
    % The compact regression formulation:
    %   MR = fitlm(Steps(:,1,s),Steps(:,2,s));
    %   RSqR(s) = MR.Rsquared.Ordinary;
    % is replaced by the following:
    X = [ones(size(Steps(:,1,s))),Steps(:,1,s)];
    y = Steps(:,2,s);
    Coeff = X\y;
    yHat = X*Coeff;
    res = y-yHat;
    yBar = mean(y);
    regRes = yHat-yBar;
    SSR = regRes'*regRes;
    SSE = res'*res;
    SST = SSR+SSE;
    RSq = 1-SSE/SST;
    RSqR(s) = RSq;
end

% Plot R-squared distribution:
figure
histogram(RSqR)
ax = gca;
ax.Children.FaceColor = [.8 .8 1];
xlabel('{\it R}^2')
ylabel('Number of Simulations')
title('{\bf Regression Between Two Independent Random Vectors}')
clear RSqR

R² for the random-walk regressions becomes more significant as the drift coefficient
increases. Even with zero drift, random-walk regressions are more significant than regressions between random vectors, where R² values fall almost exclusively below 0.1.

Spurious regressions are often accompanied by signs of autocorrelation in the residuals, which can serve as a diagnostic clue. The following shows the distribution of autocorrelation functions (ACF) for the residual series in each of the random-walk regressions above:

numLags = 20;
ACFResRW = zeros(numSims,numLags+1,numModels);
for d = 1:numModels
    for s = 1:numSims
        ACFResRW(s,:,d) = autocorr(ResRW(s,:,d));
    end
end
clear ResRW

% Plot ACF distributions:
figure
hold on
boxplot(ACFResRW(:,:,1),'PlotStyle','compact','BoxStyle','outline','LabelOrientation','horizontal','Color',Colors(1,:))
ax = gca;
ax.XTickLabel = {''};
boxplot(ACFResRW(:,:,2),'PlotStyle','compact','BoxStyle','outline','LabelOrientation','horizontal','Widths',0.4,'Color',Colors(2,:))
boxplot(ACFResRW(:,:,3),'PlotStyle','compact','BoxStyle','outline','LabelOrientation','horizontal','Widths',0.3,'Color',Colors(3,:))
boxplot(ACFResRW(:,:,4),'PlotStyle','compact','BoxStyle','outline','LabelOrientation','horizontal','Widths',0.2,'Color',Colors(4,:),'Labels',0:20)
line([0,21],[0,0],'Color','k')
line([0,21],[2/sqrt(T),2/sqrt(T)],'Color','b')
line([0,21],[-2/sqrt(T),-2/sqrt(T)],'Color','b')
ylabel('Sample Autocorrelation')
title('{\bf Residual ACF Distributions}')
clear ACFResRW

Colors correspond to drift values in the bar plot above. The plot shows extended, significant residual autocorrelation for the majority of simulations. Diagnostics related to residual autocorrelation are discussed further in the example Time Series Regression VI: Residual Diagnostics.

The simulations above lead to the conclusion that, trending or not, all regression variables should be tested for integration. It is then usually advised that DS variables be detrended by differencing, rather than regressing on time, to achieve a stationary mean. The distinction between TS and DS series has been widely studied (for example, in [3]), particularly the effects of underdifferencing (treating DS series as TS) and overdifferencing (treating TS series as DS).
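The same experiment can be reproduced outside MATLAB. The following pure-Python sketch (with its own, smaller simulation sizes, not the example's) regresses independent driftless random walks on each other and compares the resulting R² values with those from regressions between independent random vectors:

```python
import random

def r_squared(x, y):
    """R^2 from a univariate OLS of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

random.seed(3)
T, num_sims = 100, 200
rsq_walks, rsq_vectors = [], []
for _ in range(num_sims):
    e1 = [random.gauss(0, 1) for _ in range(T)]
    e2 = [random.gauss(0, 1) for _ in range(T)]
    w1, w2 = [0.0], [0.0]
    for t in range(1, T):
        w1.append(w1[-1] + e1[t])  # two independent random walks
        w2.append(w2[-1] + e2[t])
    rsq_walks.append(r_squared(w1, w2))    # spurious fit
    rsq_vectors.append(r_squared(e1, e2))  # honest (near-zero) fit

mean_walks = sum(rsq_walks) / num_sims
mean_vectors = sum(rsq_vectors) / num_sims
```

Even though the walks share no common cause, their average R² is an order of magnitude larger than that of the independent random vectors, matching the behavior of the MATLAB simulation.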
If one trend type is treated as the other, with inappropriate preprocessing to achieve stationarity, regression results become unreliable, and the resulting models generally have poor forecasting ability, regardless of the in-sample fit.

Econometrics Toolbox™ has several tests for the presence or absence of integration: adftest, pptest, kpsstest, and lmctest. For example, the augmented Dickey-Fuller test, adftest, looks for statistical evidence against a null of integration. With default settings, tests on both IGD and BBB fail to reject the null in favor of a trend-stationary alternative:

IGD = y0;
BBB = X0(:,2);

[h1IGD,pValue1IGD] = adftest(IGD,'model','TS')

h1IGD = logical
   0

pValue1IGD = 0.1401

[h1BBB,pValue1BBB] = adftest(BBB,'model','TS')

h1BBB = logical
   0

pValue1BBB = 0.6976

Other tests, like the KPSS test, kpsstest, look for statistical evidence against a null of trend-stationarity. The results are mixed:

s = warning('off'); % Turn off large/small statistics warnings
[h0IGD,pValue0IGD] = kpsstest(IGD,'trend',true)

h0IGD = logical
   0

pValue0IGD = 0.1000

[h0BBB,pValue0BBB] = kpsstest(BBB,'trend',true)

h0BBB = logical
   1

pValue0BBB = 0.0100

The p-values of 0.1 and 0.01 are, respectively, the largest and smallest in the table of critical values used by the right-tailed kpsstest. They are reported when the test statistics are, respectively, very small or very large. Thus the evidence against trend-stationarity is especially weak in the first test, and especially strong in the second test. The IGD results are ambiguous, failing to reject trend-stationarity even after the Dickey-Fuller test failed to reject integration. The results for BBB are more consistent, suggesting the predictor is integrated. What is needed for preprocessing is a systematic application of these tests to all of the variables in a regression, and their differences. The utility function i10test automates the required series of tests.
The following performs paired ADF/KPSS tests on all of the model variables and their first differences:

I.names = {'model'};
I.vals = {'TS'};
S.names = {'trend'};
S.vals = {true};
i10test(DataTable,'numDiffs',1,...
        'itest','adf','iparams',I,...
        'stest','kpss','sparams',S);

        I(1)  I(0)
AGE      1     0
D1AGE    1     0
BBB      0     1
D1BBB    1     0
CPF      0     0
D1CPF    1     0
SPR      0     1
D1SPR    1     0
IGD      0     0
D1IGD    1     0

Columns show test results against nulls of integration, I(1), and stationarity, I(0). At the given parameter settings, the tests suggest that AGE is stationary (integrated of order 0), and BBB and SPR are integrated but brought to stationarity by a single difference (integrated of order 1). The results are ambiguous for CPF and IGD, but both appear to be stationary after a single difference. For comparison with the original regression in the example Time Series Regression I: Linear Models, we replace BBB, SPR, CPF, and IGD with their first differences, D1BBB, D1SPR, D1CPF, and D1IGD. We leave AGE undifferenced:

D1X0 = diff(X0);
D1X0(:,1) = X0(2:end,1); % Use undifferenced AGE
D1y0 = diff(y0);
predNamesD1 = {'AGE','D1BBB','D1CPF','D1SPR'};
respNameD1 = {'D1IGD'};

Original regression with undifferenced data:

Regression with differenced data:

MD1 = fitlm(D1X0,D1y0,'VarNames',[predNamesD1,respNameD1])

The differenced data increases the standard errors on all coefficient estimates, as well as the overall RMSE. This may be the price of correcting a spurious regression. The sign and the size of the coefficient estimate for the undifferenced predictor, AGE, show little change. Even after differencing, CPF has pronounced significance among the predictors. Accepting the revised model depends on practical considerations like explanatory simplicity and forecast performance, evaluated in the example Time Series Regression VII: Forecasting.
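The contrast between the two trend-removal strategies discussed above can be illustrated in a small simulation (a pure-Python sketch, independent of the MATLAB example; the series length and slope are made up): regressing a trend-stationary series on time recovers its deterministic slope, while first-differencing a driftless random walk recovers its stationary increments.

```python
import random

random.seed(2)
T = 200

# Trend-stationary (TS) series: deterministic line plus stationary noise.
ts = [0.5 * t + random.gauss(0, 1) for t in range(T)]

# Detrend by regressing on time (removing the least-squares line,
# as MATLAB's detrend does):
tbar = (T - 1) / 2
ybar = sum(ts) / T
beta = (sum((t - tbar) * (v - ybar) for t, v in enumerate(ts))
        / sum((t - tbar) ** 2 for t in range(T)))
alpha = ybar - beta * tbar
resid = [v - (alpha + beta * t) for t, v in enumerate(ts)]

# Difference-stationary (DS) series: a driftless random walk.
steps = [random.gauss(0, 1) for _ in range(T)]
walk = [0.0] * T
for t in range(1, T):
    walk[t] = walk[t - 1] + steps[t]

# Differencing the walk recovers the stationary steps.
diffs = [walk[t] - walk[t - 1] for t in range(1, T)]
```

The estimated `beta` is close to the true slope 0.5, and `diffs` reproduces the generating increments, which is why differencing (not regressing on time) is the appropriate transform for DS variables.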
Because of the possibility of spurious regression, it is usually advised that variables in time series regressions be detrended, as necessary, to achieve stationarity before estimation. There are trade-offs, however, between working with variables that retain their original economic meaning and transformed variables that improve the statistical characteristics of OLS estimation. The trade-off may be difficult to evaluate, since the degree of "spuriousness" in the original regression cannot be measured directly. The methods discussed in this example will likely improve the forecasting abilities of resulting models, but may do so at the expense of explanatory simplicity.

[1] Clarke, K. A. "The Phantom Menace: Omitted Variable Bias in Econometric Research." Conflict Management and Peace Science. Vol. 22, 2005, pp. 341–352.

[2] Granger, C. W. J., and P. Newbold. "Spurious Regressions in Econometrics." Journal of Econometrics. Vol. 2, 1974, pp. 111–120.

[3] Nelson, C., and C. Plosser. "Trends and Random Walks in Macroeconomic Time Series: Some Evidence and Implications." Journal of Monetary Economics. Vol. 10, 1982, pp. 139–162.
Hedging Agents - Angle

Insuring the Core module against collateral volatility.

Hedging Agents (HAs) open perpetual futures from the Core module: in a single transaction, they get leveraged exposure to the price of a collateral, with a multiplier of their choice. They are there to insure the Core module against the volatility of the collateral brought by users. With enough demand from HAs, the Angle Core module could resist collateral price drops of up to 99%. HAs can make significant gains in case of price increases but also substantial losses when the collateral price decreases. They pay small transaction fees (potentially around 0.3%) when they open their position and when they close it. Contrary to centralized exchanges, they do not have to pay funding rates for holding their positions.

🗺 Principle

The Angle Core module is by essence highly dependent on collateral volatility. Let's say one stable seeker brings 1 wETH against 2000 agEUR and the price of wETH then decreases by 50% (from 2000€ to 1000€). The Core module then needs to find an additional 1 wETH to ensure the redeemability of the 2000€ of stablecoins and maintain their stability. We say that the Core module needs to insure itself against the volatility of the collateral. While surges in collateral prices are beneficial, drops, as in the example above, are less desirable. For this reason, the Core module transfers this volatility to other actors looking to get leverage on the collateral: Hedging Agents (HAs). They are the agents insuring the Core module against drops in collateral prices, making sure that it always has enough reserves to reimburse users.

🔮 Perpetual Futures

Hedging Agents take perpetual futures from the Core module. When they come in to open a position, they bring a certain amount of collateral (their margin), and choose an amount of the same collateral from the Core module they want to hedge (or cover/back). The contract then stores the oracle value and timestamp at which they opened the position.
Hedging Agents are independent from one another: the actions of one Hedging Agent have no impact on the position of another. Precisely speaking, if a HA enters with an amount x of collateral (x is the margin) and decides to take on the volatility of an amount y of the same collateral (y is the amount committed, or the position size) that was brought by users minting stablecoins, then the contract stores x, y, the oracle value and the timestamp at which this HA came in. At any given point in time, the HA is entitled to get from the Core module:

\texttt{cash out amount} = x+y\cdot (1-\frac{\texttt{initial oracle price}}{\texttt{current oracle price}})

This formula means that the HA will get back their input x, plus or minus the capital gains or losses on the amount y they decided to back. The PnL of the HA on this position is therefore:

\texttt{PnL} = y\cdot (1-\frac{\texttt{initial oracle price}}{\texttt{current oracle price}})

Since HAs bring collateral to the protocol, we define their leverage as:

\texttt{leverage} = \frac{x+y}{x} = \frac{\texttt{margin + amount committed}}{\texttt{margin}}

📈 Price Increase Scenario

When the collateral price increases (with respect to the asset stablecoins are pegged to), besides their margin (the amount brought initially), HAs are entitled to the capital gains they would have made if they had owned the collateral they hedged. If an HA brought 1 wETH and decided to back 1 wETH at a wETH price of 2000€, then:

x = 1, \space y=1

\texttt{initial oracle price} = 2000

If the price of wETH increases to 4000€, then according to the formula above, the HA can get from the Core module:

\texttt{cash out amount} = 1.5 \space \texttt{wETH}

The HA made 6000€ from their initial 2000€. If they had just stayed long without leverage, they would have only 4000€.
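The cash-out, PnL and leverage formulas above can be checked with a few lines of Python (a sketch; the function names are ours, not the Angle contracts'):

```python
def cash_out(x, y, initial_price, current_price):
    """Amount (in collateral) the HA can claim: margin x plus the
    PnL on the committed amount y, per the formula above."""
    return x + y * (1 - initial_price / current_price)

def pnl(y, initial_price, current_price):
    """Gain or loss on the committed amount y."""
    return y * (1 - initial_price / current_price)

def leverage(x, y):
    """(margin + amount committed) / margin."""
    return (x + y) / x

# The wETH example from the text: 1 wETH margin, 1 wETH committed,
# opened at 2000 EUR, price rises to 4000 EUR.
amount = cash_out(1, 1, 2000, 4000)  # 1.5 wETH
value_eur = amount * 4000            # 6000 EUR from an initial 2000 EUR
```

At a price of 4000€, the position pays out 1.5 wETH (6000€), matching the worked example, with a leverage of 2x.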
📉 Price Decrease Scenario

When the collateral price decreases (with respect to the asset stablecoins are pegged to), HAs incur losses on their margin as if they had owned the collateral they covered. Back to the previous example, if the price of wETH decreases to 1000€, then the cash out amount of the HA becomes:

\texttt{cash out amount} = 1 + 1 \cdot (1-2) = 0

At this point, the HA is liquidated and their collateral goes to the protocol. They cannot claim anything. In general, the cash out amount of a HA goes to zero if the price drops to:

\texttt{current price} = \frac{y}{x+y}\cdot \texttt{initial price}

💧 HAs Liquidations

In practice, and as in most centralized perpetual swap exchanges, there is a maintenance margin: if the theoretical cash out amount gets too small compared with the amount committed by a HA, then this HA's position can get liquidated. HAs can hence get liquidated even with a non-null cash out amount. Mathematically speaking, we define the margin ratio of a HA as:

\texttt{margin ratio} = \frac{\texttt{margin + PnL}}{\texttt{amount committed}}

Or, with the above notations:

\texttt{margin ratio} = \frac{x}{y} + (1-\frac{\texttt{initial oracle price}}{\texttt{current oracle price}})

A HA can get liquidated if:

\texttt{margin ratio} \leq \texttt{maintenance margin}

🛏️ HAs Hedged Amounts

When HAs enter the Angle Core module, they specify a position size denominated in collateral, representing the amount of the collateral reserves they are hedging. Yet from a contracts perspective, when HAs come in, they insure a fixed amount of stablecoins. This quantity remains constant and only depends on variables fixed upon the HA's entry. So while HAs only see that they back an amount of collateral brought by users, from a smart contract perspective, each HA insures the Core module for a fixed amount of stablecoins. This is what the accounting of the Core module keeps track of when determining whether to let new HAs come in.
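The margin-ratio and liquidation conditions above can be sketched numerically (Python; the function names and the 6.25% maintenance margin are our illustrative choices, not Angle's actual parameters):

```python
def margin_ratio(x, y, initial_price, current_price):
    """margin ratio = x/y + (1 - initial/current), as defined above."""
    return x / y + (1 - initial_price / current_price)

def zero_cash_out_price(x, y, initial_price):
    """Price at which the cash-out amount reaches zero: y/(x+y) * initial."""
    return y / (x + y) * initial_price

# Example from the text: x = 1 wETH margin, y = 1 wETH committed,
# opened at 2000 EUR.
p = zero_cash_out_price(1, 1, 2000)  # 1000 EUR: position fully wiped out
mr = margin_ratio(1, 1, 2000, 1000)  # 0 at that price

# With a hypothetical maintenance margin of 6.25%, liquidation is
# already possible before the cash-out amount reaches zero:
MAINTENANCE = 0.0625
liquidatable = margin_ratio(1, 1, 2000, 1030) <= MAINTENANCE
```

This shows the point made in the text: at 1030€ the cash-out amount is still positive, yet the margin ratio has already fallen below the (hypothetical) maintenance threshold, so the position can be liquidated.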
The total amount hedged by HAs for a given collateral/stablecoin pair is hence the sum of the products of the amounts committed by HAs and their entry prices: it is a measure of how much of the stablecoins issued are backed and insured. This quantity is compared to the amount of collateral, in stablecoin value, needed by the Core module to pay back users in case they all want to burn their stablecoins. For example, if some users bring 1 wETH to mint 2000 agEUR, and others burn 1000 agEUR, the amount to hedge is 1000 EUR of wETH. HAs can hedge a fraction of this quantity (close to 100%): this is called the target hedge amount. The hedge ratio of the Angle Core module for a given stablecoin/collateral pair is hence defined as:

\texttt{Hedge Ratio} = \frac{\texttt{Total amount hedged by HAs in stablecoin}}{\texttt{Total value of stablecoins issued}}

🏢 Insurance of the Core module Against Collateral Volatility

Here we explain in a more illustrative way how the Core module can always have enough collateral to pay back users burning stablecoins in case of price changes of the collateral. Suppose, for instance, that HAs have a 6x leverage and back all the collateral in the Core module that was used to issue stablecoins.

In the Angle Core module, Hedging Agents have to pay small transaction fees when they open and close positions. These transaction fees are computed on the amount committed by the HA (the position size). Entry and exit fees for HAs depend on hedging curves, which define transaction fees for HAs based on the hedging ratio of the Core module. Note that on the Angle Core module, there is no funding rate to be paid by perpetual futures holders, as opposed to most perps exchanges. This allows traders to hold their positions longer at a much lower cost. The exact values of the transaction fees for HAs depend on the hedge ratio (sometimes referred to as coverage ratio) of the specific agToken/collateral pair.
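The hedge-ratio definition can be sketched as follows (Python; the single-HA position is a made-up illustration, not data from the protocol):

```python
def hedge_ratio(positions, stablecoins_outstanding):
    """positions: list of (committed_amount, entry_price) pairs.
    Total amount hedged by HAs (in stablecoin value), divided by the
    value of stablecoins issued, per the formula above."""
    hedged = sum(y * entry_price for y, entry_price in positions)
    return hedged / stablecoins_outstanding

# Example from the text: users minted 2000 agEUR against 1 wETH and
# later burned 1000 agEUR, leaving 1000 EUR of wETH to hedge.
# A single hypothetical HA committing 0.5 wETH at an entry price of
# 2000 EUR hedges 1000 EUR, i.e. the full outstanding amount.
ratio = hedge_ratio([(0.5, 2000)], 1000)  # 1.0
```

A ratio of 1.0 means the outstanding stablecoins are fully hedged; the protocol compares this against the target hedge amount when admitting new HAs.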
You can see the current fee situation on the analytics page for the collateral/stablecoin pool in question.

Entry Transaction Fees

The entry transaction fees for HAs are an upfront cost paid when HAs open a position. The higher the hedging ratio, the more expensive it gets to be an HA. Conversely, HAs should be incentivized to enter positions to help hedge the Core module when the hedging ratio is low: transaction fees are lower in this case. Let's say a HA comes to the Core module with 1 wETH and opens a 2 wETH position (hedging the Core module against the changes in price of these 2 wETH). If the transaction fees are 0.3%, then the contracts consider that the HA has a margin of (1 - (0.003 × 2)) = 0.994 wETH for a position of 2 wETH.

Exit Transaction Fees

Exit fees are paid by HAs when they close their perpetuals. The more collateral is hedged by HAs, the less expensive it is to exit the Core module. When the hedging ratio is low, HAs should be discouraged from exiting by higher transaction fees. If a HA had an initial margin of 1 wETH and a position size of 2 wETH, then with 0.3% transaction fees, they will get in wETH the current value of their perpetual according to the cash out formula above, minus 0.3% of 2 wETH (the amount hedged at the opening).

Fees To Add or Remove Margin

Once HAs have opened a perpetual, they can add to or remove from their margin, thus decreasing or increasing their leverage. As entry and exit fees depend only on the position size (or committed amount) of HAs, and these add/remove operations do not modify it, no fees are paid for such operations.
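The fee accounting above can be verified in a short Python sketch (names are ours; 0.3% is just the illustrative rate used in the text, since the actual fees follow the hedging curves):

```python
ENTRY_FEE = 0.003  # 0.3%, the illustrative rate from the text
EXIT_FEE = 0.003

def net_margin_after_entry(margin, position_size, fee=ENTRY_FEE):
    """Entry fees are charged on the committed amount (position size),
    not on the margin itself."""
    return margin - fee * position_size

def exit_payout(cash_out_amount, position_size, fee=EXIT_FEE):
    """Exit fees are likewise charged on the amount hedged at opening."""
    return cash_out_amount - fee * position_size

# The example from the text: 1 wETH margin, 2 wETH position, 0.3% fees.
m = net_margin_after_entry(1, 2)  # 0.994 wETH
```

This reproduces the 0.994 wETH effective margin from the worked example, and makes clear why adding or removing margin, which leaves the position size unchanged, incurs no fee.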
Sterol O-acyltransferase - Wikipedia

Not to be confused with Acetyl-Coenzyme A acetyltransferase.

Sterol O-acyltransferase (also called Acyl-CoA cholesterol acyltransferase, Acyl-CoA cholesterin acyltransferase[citation needed] or simply ACAT) is an intracellular protein located in the endoplasmic reticulum that forms cholesteryl esters from cholesterol. Sterol O-acyltransferase catalyzes the chemical reaction:

acyl-CoA + cholesterol ⇌ CoA + cholesteryl ester

Thus, the two substrates of this enzyme are acyl-CoA and cholesterol, whereas its two products are CoA and cholesteryl ester. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups, the membrane-bound O-acyltransferases. This enzyme participates in bile acid biosynthesis.

Class and structure[edit]

Acyl-CoA cholesterol acyl transferase (EC 2.3.1.26), more simply referred to as ACAT, also known as sterol O-acyltransferase (SOAT), belongs to the class of enzymes known as acyltransferases. The role of this enzyme is to transfer fatty acyl groups from one molecule to another. ACAT is an important enzyme in bile acid biosynthesis. In nearly all mammalian cells, ACAT catalyzes the intracellular esterification of cholesterol and formation of cholesteryl esters. The esterification of cholesterol mediated by ACAT is functionally significant for several reasons. ACAT-mediated esterification of cholesterol limits its solubility in the cell membrane lipids and thus promotes accumulation of cholesteryl esters in the fat droplets within the cytoplasm; this process is important because it prevents the toxic accumulation of free cholesterol in various cell membrane fractions. Most of the cholesterol absorbed during intestinal transport undergoes ACAT-mediated esterification before incorporation into chylomicrons.
In the liver, ACAT-mediated esterification of cholesterol is involved in the production and release of apoB-containing lipoproteins. ACAT also plays an important role in foam cell formation and atherosclerosis by participating in the accumulation of cholesteryl esters in macrophages and vascular tissue. The rate-controlling enzyme in cholesterol catabolism, hepatic cholesterol 7α-hydroxylase, is believed to be regulated partly by ACAT.[1] The mechanism scheme is as follows:

Acyl-CoA + Cholesterol ⇌ CoA + Cholesteryl ester[2]

There are two isoforms of SOAT (also sometimes referred to as ACAT) that have been reported to date: SOAT1 and SOAT2. SOAT1 is characterized by its ubiquitous presence in tissues with the exception of the intestine, where SOAT2 is prevalent. The different isoforms are also associated with different pathologies arising from abnormalities in lipid metabolism.[3]

SOAT1 (ACAT1)[edit]

Previous studies have shown that SOAT modulates proteolytic processing in cell-based and animal models of Alzheimer's disease. A follow-up study reports that SOAT1 RNAi reduced cellular SOAT1 protein and cholesteryl ester levels while causing a slight increase in the free cholesterol content of endoplasmic reticulum membranes. The data also showed that a modest decrease in SOAT activity led to suppressive effects on Abeta generation.[3] In a recent study, it was shown that SOAT2 activity is upregulated as a result of chronic renal failure. This study was specific to hepatic SOAT, which plays a major role in hepatic production and release of very low density lipoprotein (VLDL), release of cholesterol, foam cell formation, and atherogenesis.[3] In another study, non-human primates revealed a positive correlation between liver cholesteryl ester secretion rate and the development of coronary artery atherosclerosis.
The results of the experiment indicate that under all of the conditions of cellular cholesterol availability tested, the relative level of SOAT2 expression affects the cholesteryl ester content, and therefore the atherogenicity, of nascent apoB-containing lipoproteins.[4] In yeast, acyl-CoA:sterol acyltransferase (ASAT) is functionally equivalent to ACAT. Although studies in vitro and in yeast suggest that the acyl-CoA binding protein (ACBP) may modulate long-chain fatty acyl-CoA (LCFA-CoA) distribution, its physiological function in mammals is unresolved. Recent research suggests that ACBP expression may play a role in LCFA-CoA metabolism in a physiological context.[5] In S. cerevisiae, the accumulation of ergosteryl esters accompanies entry into the stationary phase and sporulation. Researchers have identified two genes in yeast, ARE2 and ARE1, that encode the different isozymes of ASAT. In yeast, Are2 is the major catalytic isoform. Mitotic cell growth and spore germination are not compromised when these genes are deleted, but diploids that are homozygous for an ARE2 null mutation exhibit a decrease in sporulation efficiency.[6]

Plant Synthesis of Steryl Esters[edit]

In plants, cellular sterol ester synthesis is performed by an enzyme different from mammalian ACAT and yeast ASAT: phospholipid:sterol acyltransferase (PSAT). A recent study shows that PSAT is involved in the regulation of the pool of free sterols and the amount of free sterol intermediates in the membranes. It is also described as the only intracellular enzyme discovered that catalyzes an acyl-CoA-independent sterol ester formation. PSAT is therefore considered to have a similar physiological function in plant cells as ACAT in animal cells.[7]

See also: Lecithin–cholesterol acyltransferase (LCAT)

^ Katsuren K, Tamura T, Arashiro R, Takata K, Matsuura T, Niikawa N, Ohta T (April 2001).
"Structure of the human acyl-CoA:cholesterol acyltransferase-2 (ACAT-2) gene and its relation to dyslipidemia". Biochimica et Biophysica Acta (BBA) - Molecular and Cell Biology of Lipids. 1531 (3): 230–40. doi:10.1016/S1388-1981(01)00106-8. PMID 11325614. ^ "KEGG Reaction: R01461". Kyoto Encyclopedia of Genes and Genomes. Kanehisa Laboratories. Retrieved 2009-05-06. ^ a b c Temel RE, Hou L, Rudel LL, Shelness GS (July 2007). "ACAT2 stimulates cholesteryl ester secretion in apoB-containing lipoproteins". Journal of Lipid Research. 48 (7): 1618–27. doi:10.1194/jlr.M700109-JLR200. PMID 17438337. ^ Huttunen HJ, Greco C, Kovacs DM (April 2007). "Knockdown of ACAT-1 Reduces Amyloidogenic Processing of APP". FEBS Letters. 581 (8): 1688–92. doi:10.1016/j.febslet.2007.03.056. PMC 1896096. PMID 17412327. ^ Huang H, Atshaves BP, Frolov A, Kier AB, Schroeder F (August 2005). "Acyl-coenzyme A binding protein expression alters liver fatty acyl-coenzyme A metabolism". Biochemistry. 44 (30): 10282–97. doi:10.1021/bi0477891. PMID 16042405. ^ Yu C, Kennedy NJ, Chang CC, Rothblatt JA (September 1996). "Molecular cloning and characterization of two isoforms of Saccharomyces cerevisiae acyl-CoA:sterol acyltransferase". The Journal of Biological Chemistry. 271 (39): 24157–63. doi:10.1074/jbc.271.39.24157. PMID 8798656. ^ Banas A, Carlsson AS, Huang B, Lenman M, Banas W, Lee M, Noiriel A, Benveniste P, Schaller H, Bouvier-Navé P, Stymne S (October 2005). "Cellular sterol ester synthesis in plants is performed by an enzyme (phospholipid:sterol acyltransferase) different from the yeast and mammalian acyl-CoA:sterol acyltransferases". The Journal of Biological Chemistry. 280 (41): 34626–34. doi:10.1074/jbc.M504459200. PMID 16020547. Figure 2 of the esterification reaction with one molecule of free cholesterol, oleic acid, catalyzed by acyl-CoA: cholesterol acyltransferase. It gives the cholesterol ester cholesterol oleate. from Sigrid Hahn; Hans-Ulrich Klör (2001). Knoll Lexikon Adipositas. 
Aesopus-Verlag. ISBN 978-3-7773-1774-8. Retrieved 22 June 2013 (in German; an encyclopedia of obesity).

Sgoutas DS (1970). "Effect of geometry and position of ethylenic bond upon acyl coenzyme A--cholesterol-O-acyltransferase". Biochemistry. 9 (8): 1826–33. doi:10.1021/bi00810a024. PMID 5439042.

Spector AA, Mathur SN, Kaduce TL (1979). "Role of acylcoenzyme A: cholesterol o-acyltransferase in cholesterol metabolism". Prog. Lipid Res. 18 (1): 31–53. doi:10.1016/0163-7827(79)90003-1. PMID 42927.

Taketani S, Nishino T, Katsuki H (1979). "Characterization of sterol-ester synthetase in Saccharomyces cerevisiae". Biochim. Biophys. Acta. 575 (1): 148–55. doi:10.1016/0005-2760(79)90140-1. PMID 389289.
From W. E. Darwin 22 April [1863]1 I sent off this morning a bit of Corydal to you.2 I examined \frac{1}{2} a dozen or more this morning, and I think the pistil certainly does spring forward, though very little in young flowers; and I think the pistil looks to spring foward more than it does as it is pulled back by the cap just at first. in one or two flowers which were old and I suppose had not been visited it seemed to spring forward with quite a jerk exactly into the guiding valley to nectary, and it seemed to fill it so completely that after the flower had gone off and the pistil was in the furrow I should not think the nectary could be visited again except sideways or inside the pistil. Thanks for your Linum paper3 I have not had time to read it yet; I am going on Sunday to Cowes to look for Anchusa when I will look at the stamens4 Thank Etty for her letter.5 When is George expected home6 Your affect son | W E Darwin. The year is established by the relationship between this letter and the letter from W. E. Darwin, 1 May [1863] (this volume, Supplement). CD’s interest in Corydalis (the genus of fumeworts) may have related to his investigation of pelorism in some species of Corydalis, and whether it was adaptive or a case of reversion (see Correspondence vol. 11, letter to M. T. Masters, 6 April [1863]). CD had completed a draft of the chapter in Variation in which he discussed Corydalis in relation to reversion (Variation 2: 58–9) on 1 April 1863 (see Correspondence vol. 11, Appendix II). William’s observational notes and sketches relating to Corydalis claviculata and C. lutea are in his botanical notebook (DAR 117: 61–3) and his botanical sketchbook (DAR 186: 43, pp. 48–9). ‘Two forms in species of Linum’ was read before the Linnean Society on 5 February and published in the society’s journal on 13 May 1863 (General index to the Journal of the Linnean Society, p. vi). 
William’s name appears on CD’s presentation list for ‘Two forms in species of Linum’ (Correspondence vol. 11, Appendix IV). William did not go to Cowes on the Isle of Wight until Sunday 3 May 1863, when he collected fifty-two plants of what he took to be Anchusa officinalis (alkanet or bugloss; see Correspondence vol. 11, letter from W. E. Darwin, 4 May [1863]). Henrietta Emma Darwin’s letter has not been found. In a letter to William of [17 March 1863], Emma Darwin had reported that George Howard Darwin was staying on to ‘grind’ at Clapham Grammar School (DAR 219.1: 71); according to Emma’s diary (DAR 242), George returned from school on 23 April 1863. Sent off Corydalis. Observations on Corydalis pistils.
GIphi - Maple Help

GIphi — number of Gaussian integers in a reduced system modulo n

Calling Sequence: GIphi(n)

The function GIphi returns the number of Gaussian integers in a reduced system modulo n. A system of Gaussian integers that are pairwise incongruent with respect to a given modulus n is called a reduced system of incongruent numbers modulo n.

with(GaussInt):
GIphi(-201-43*I);

        15600

See Also: GaussInt[GIorder]
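As a rough cross-check of what GIphi computes, a brute-force count can be written in Python; the function names here are ours (not Maple's), and this sketch is only feasible for moduli of small norm. It counts residues coprime to n, using the fact that the box [0, N) × [0, N) with N = N(n) covers every residue class modulo n exactly N times, since N = n·conj(n) lies in the ideal (n).

```python
# Brute-force sketch of a Gaussian-integer totient (names are ours, not Maple's).

def g_div(a, b):
    """Nearest-Gaussian-integer quotient of a/b (Euclidean division in Z[i])."""
    q = a / b
    return complex(round(q.real), round(q.imag))

def g_gcd(a, b):
    """Euclidean algorithm in the Gaussian integers."""
    while b != 0:
        a, b = b, a - g_div(a, b) * b
    return a

def norm(z):
    """Field norm N(z) = |z|^2 as an exact integer."""
    return round(z.real) ** 2 + round(z.imag) ** 2

def gi_phi(n):
    """Number of Gaussian integers in a reduced residue system mod n."""
    N = norm(n)
    # Each residue class mod n appears exactly N times in this N-by-N box.
    coprime = sum(1 for x in range(N) for y in range(N)
                  if norm(g_gcd(complex(x, y), n)) == 1)
    return coprime // N

# gi_phi(3) == 8: 3 is a Gaussian prime of norm 9, so phi = 9 - 1.
```

For example, gi_phi(1+1j) returns 1 and gi_phi(2) returns 2, matching the multiplicative formula N(n)·∏(1 − 1/N(p)) over Gaussian prime divisors p of n.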
EuDML | P-embeddings, AR and ANR spaces.

Stramaccia, L. "P-embeddings, AR and ANR spaces." Homology, Homotopy and Applications 5.1 (2003): 213–218. <http://eudml.org/doc/50549>.

@article{Stramaccia2003,
  author = {Stramaccia, L.},
  title = {P-embeddings, AR and ANR spaces.},
  keywords = {continuous pseudometric; metric expansion; proreflection; P-embedding},
}

Keywords: continuous pseudometric, metric expansion, proreflection, P-embedding. Subject: extension of maps.
I keep getting mixed up on whether the diagonalized form of A is T^(-1)AT or TAT^(-1). Is there an easy way to remember the correct form? - Murray Wiki

This is the way I use to remember which form to use. Note that the matrix $T$ consists of all the (right) eigenvectors $v_i$ of $A$ stacked as columns: $T = (v_1 \,|\, v_2 \,|\, \cdots \,|\, v_n)$. Using the definition of right eigenvectors, $A v_i = \lambda_i v_i$, we get

$AT = A(v_1 | v_2 | \cdots | v_n) = (Av_1 | Av_2 | \cdots | Av_n) = (\lambda_1 v_1 | \lambda_2 v_2 | \cdots | \lambda_n v_n) = (v_1 | v_2 | \cdots | v_n) \begin{pmatrix} \lambda_1 \\ & \lambda_2 \\ & & \ddots \\ & & & \lambda_n \end{pmatrix} = T\Lambda.$

Right-multiplying each side by $T^{-1}$ gives $A = T \Lambda T^{-1}$, which is the correct form; equivalently, the diagonalized form is $\Lambda = T^{-1} A T$.
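The derivation above is easy to check numerically with NumPy; the 2×2 matrix here is an arbitrary example of ours, not from the wiki page.

```python
import numpy as np

# Numerical check of the identity derived above: with the eigenvectors of A
# stacked as the columns of T, T^{-1} A T is diagonal (equivalently,
# A = T Lambda T^{-1}).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
eigvals, T = np.linalg.eig(A)        # columns of T are right eigenvectors
Lambda = np.linalg.inv(T) @ A @ T    # diagonalized form T^{-1} A T
# Lambda agrees with diag(eigvals) up to floating-point round-off.
```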
Permanence of a Discrete Model of Mutualism with Infinite Deviating Arguments Xuepeng Li, Wensheng Yang, "Permanence of a Discrete Model of Mutualism with Infinite Deviating Arguments", Discrete Dynamics in Nature and Society, vol. 2010, Article ID 931798, 7 pages, 2010. https://doi.org/10.1155/2010/931798 Xuepeng Li1 and Wensheng Yang1 We propose a discrete model of mutualism with infinite deviating arguments, that is . By some lemmas, sufficient conditions are obtained for the permanence of the system. Chen and You [1] studied the following two-species integro-differential model of mutualism: where , and are continuous functions bounded above and below by positive constants: and Using the differential inequality theory, they obtained a set of sufficient conditions to ensure the permanence of system (1.1). For more background and biological justification of system (1.1), one may refer to [1–4] and the references cited therein. However, many authors [5–12] have argued that discrete-time models governed by difference equations are more appropriate than continuous ones when the populations have nonoverlapping generations. Also, since discrete-time models can provide efficient computational analogues of continuous models for numerical simulation, it is reasonable to study discrete-time models governed by difference equations. Moreover, permanence is one of the most important topics in the study of population dynamics: one of the most interesting questions in mathematical biology concerns the survival of species in ecological models, so it is reasonable to ask for conditions under which the system is permanent. Motivated by the above question, we consider the permanence of the following discrete model of mutualism with infinite deviating arguments: where is the density of the mutualism species at the th generation.
For , and are bounded nonnegative sequences such that Here, for any bounded sequence , Let we consider (1.2) together with the following initial condition: It is not difficult to see that solutions of (1.2) and (1.4) are well defined for all and satisfy The aim of this paper is, by applying the comparison theorem for difference equations and some lemmas, to obtain a set of sufficient conditions which guarantee the permanence of system (1.2). In this section, we establish permanence results for system (1.2). The following comparison theorem for difference equations is Theorem 2.6 of [13]. Lemma. Let . For any fixed , is a non-decreasing function with respect to , and for , the following inequalities hold: If , then for all . Now let us consider the following single-species discrete model: where and are strictly positive sequences of real numbers defined for and . Similar to the proofs of the corresponding propositions in [6], we can obtain the following. Lemma. Any solution of system (2.1) with initial condition satisfies where Lemma 2.3 (see [14]). Let and be nonnegative sequences defined on , and let be a constant. If then Lemma 2.4 (see [2]). Let be a nonnegative bounded sequence, and let be a nonnegative sequence such that . Then Proposition. Let be any positive solution of system (1.2); then where Proof. Let be any positive solution of system (1.2); then from the first equation of system (1.2) we have Let , then where When the sequence is nonnegative, by applying Lemma 2.3, (2.12) immediately follows; when it is negative, (2.12) also holds. From (2.12), we have By using the second equation of system (1.2), similar to the above analysis, we can obtain This completes the proof of Proposition 2.5. Now we are in a position to state the permanence of system (1.2). Theorem.
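The explicit formulas of the single-species model (2.1) are lost in this extraction. A standard recursion of this type is x(k+1) = x(k)·exp{a(k) − b(k)x(k)} with a, b positive sequences; assuming that form purely for illustration, a short simulation shows the positivity and eventual boundedness behind Lemma 2.2-style estimates:

```python
import math

# Hedged illustration only: the paper's model (2.1) is elided here, so a
# standard single-species recursion is assumed:
#     x(k+1) = x(k) * exp(a(k) - b(k) * x(k)),  a, b positive sequences.
def simulate(x0, a, b, steps):
    xs = [x0]
    for k in range(steps):
        xs.append(xs[-1] * math.exp(a(k) - b(k) * xs[-1]))
    return xs

xs = simulate(0.5, lambda k: 0.8, lambda k: 1.0, 200)
# Orbits stay positive and remain below the classical bound exp(a - 1) / b.
```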
Under the assumption (1.3), system (1.2) is permanent; that is, there exist positive constants which are independent of the solutions of system (1.2) such that, for any positive solution of system (1.2) with initial condition (1.4), one has Proof. By applying Proposition 2.5, we see that to end the proof of Theorem 2.6 it is enough to show that under the conditions of Theorem 2.6 From Proposition 2.5, For all , there exists a For all , According to Lemma 2.4, from (2.13) and (2.14) we have For the above , according to (2.18), there exists a positive integer , such that, for all , Thus, for all , from the first equation of system (1.2), it follows that It follows that, for , Hence In other words, From the first equation of system (1.2) and (2.23), for all , it follows that By applying Lemmas 2.1 and 2.2 to (2.24), it immediately follows that Setting , it follows that Similar to the above analysis, from the second equation of system (1.2), we have that This completes the proof of Theorem 2.6. F. D. Chen and M. S. You, “Permanence for an integrodifferential model of mutualism,” Applied Mathematics and Computation, vol. 186, no. 1, pp. 30–34, 2007. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet F. D. Chen, “Permanence in a discrete Lotka-Volterra competition model with deviating arguments,” Nonlinear Analysis: Real World Applications, vol. 9, no. 5, pp. 2150–2155, 2008. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet Y. K. Li and G. T. Xu, “Positive periodic solutions for an integrodifferential model of mutualism,” Applied Mathematics Letters, vol. 14, no. 5, pp. 525–530, 2001. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet R. P. Agarwal, Difference Equations and Inequalities: Theory, Method and Applications, vol. 228 of Monographs and Textbooks in Pure and Applied Mathematics, Marcel Dekker, New York, NY, USA, 2nd edition, 2000. View at: MathSciNet F. D.
Chen, “Permanence and global attractivity of a discrete multispecies Lotka-Volterra competition predator-prey systems,” Applied Mathematics and Computation, vol. 182, no. 1, pp. 3–12, 2006. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet X. Chen and F. D. Chen, “Stable periodic solution of a discrete periodic Lotka-Volterra competition system with a feedback control,” Applied Mathematics and Computation, vol. 181, no. 2, pp. 1446–1454, 2006. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet Y. K. Li and L. H. Lu, “Positive periodic solutions of discrete n -species food-chain systems,” Applied Mathematics and Computation, vol. 167, no. 1, pp. 324–344, 2005. View at: Publisher Site | Google Scholar | MathSciNet Y. Muroya, “Persistence and global stability in Lotka-Volterra delay differential systems,” Applied Mathematics Letters, vol. 17, no. 7, pp. 795–800, 2004. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet Y. Muroya, “Partial survival and extinction of species in discrete nonautonomous Lotka-Volterra systems,” Tokyo Journal of Mathematics, vol. 28, no. 1, pp. 189–200, 2005. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet X. T. Yang, “Uniform persistence and periodic solutions for a discrete predator-prey system with delays,” Journal of Mathematical Analysis and Applications, vol. 316, no. 1, pp. 161–177, 2006. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet F. D. Chen, “Permanence for the discrete mutualism model with time delays,” Mathematical and Computer Modelling, vol. 47, no. 3-4, pp. 431–435, 2008. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet L. Wang and M. Q. Wang, Ordinary Difference Equation, Xinjiang University Press, Xinjiang, China, 1991. Y. Takeuchi, Global Dynamical Properties of Lotka-Volterra Systems, World Scientific, River Edge, NJ, USA, 1996. 
View at: MathSciNet Copyright © 2010 Xuepeng Li and Wensheng Yang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Convert angle of attack and sideslip angle to direction cosine matrix - Simulink

Direction Cosine Matrix Body to Wind (DCMwb): Convert angle of attack and sideslip angle to direction cosine matrix

The Direction Cosine Matrix Body to Wind block converts angle of attack and sideslip angles into a 3-by-3 direction cosine matrix (DCM). This direction cosine matrix is helpful for transforming vectors from body axes to wind axes. To transform the coordinates of a vector in body axes (ox0, oy0, oz0) to a vector in wind axes (ox2, oy2, oz2), multiply the block output direction cosine matrix by the vector in body axes. For information on the axis rotations for this transformation, see Algorithms.

α, β — Angle of attack and sideslip angle
Angle of attack and sideslip angle, specified as a 2-by-1 vector, in radians.

DCMwb — Direction cosine matrix
Direction cosine matrix, returned as a 3-by-3 direction cosine matrix.

The order of the axis rotations required to bring this transformation about is:

1. A rotation about oy0 through the angle of attack (α) to axes (ox1, oy1, oz1)
2. A rotation about oz1 through the sideslip angle (β) to axes (ox2, oy2, oz2)

$\begin{bmatrix} ox_2 \\ oy_2 \\ oz_2 \end{bmatrix} = DCM_{wb} \begin{bmatrix} ox_0 \\ oy_0 \\ oz_0 \end{bmatrix} = \begin{bmatrix} \cos\beta & \sin\beta & 0 \\ -\sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix} \begin{bmatrix} ox_0 \\ oy_0 \\ oz_0 \end{bmatrix}$

$DCM_{wb} = \begin{bmatrix} \cos\alpha\cos\beta & \sin\beta & \sin\alpha\cos\beta \\ -\cos\alpha\sin\beta & \cos\beta & -\sin\alpha\sin\beta \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix}$

[1] Stevens, B. L., and F. L. Lewis. Aircraft Control and Simulation. Hoboken, NJ: John Wiley & Sons, 1992.

See Also: Direction Cosine Matrix Body to Wind to Alpha and Beta | Direction Cosine Matrix to Rotation Angles | Direction Cosine Matrix to Wind Angles | Rotation Angles to Direction Cosine Matrix | Wind Angles to Direction Cosine Matrix
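The two-rotation composition described above can be sketched in Python with NumPy; the function name is ours, not part of the Simulink block.

```python
import numpy as np

# Sketch of the body-to-wind DCM described above: rotate about the body
# y-axis by alpha (angle of attack), then about the intermediate z-axis
# by beta (sideslip angle). Equals Rz(beta) @ Ry(alpha).
def dcm_wb(alpha, beta):
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    return np.array([
        [ ca * cb,  sb,  sa * cb],
        [-ca * sb,  cb, -sa * sb],
        [-sa,      0.0,  ca     ],
    ])
```

Since it is a product of two rotations, the result is orthogonal with determinant 1, which is a convenient sanity check.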
Paired integration and stationarity tests - MATLAB i10test

i10test — Paired integration and stationarity tests

[H,PValue] = i10test(X)
DecisionTbl = i10test(Tbl)
[___] = i10test(___,Name=Value)

[H,PValue] = i10test(X) displays, at the command window, the results of paired integration and stationarity tests on the variables in the matrix of time series data X. Row labels in the display table are variable names and their differences. Column labels are I(1) and I(0), respectively, to indicate the null hypothesis of the test. The function also returns the matrix of test rejection decisions H and associated p-values for the test statistics PValue.

DecisionTbl = i10test(Tbl) displays the results of paired integration and stationarity tests on all the variables of the table or timetable Tbl. The function also returns the table DecisionTbl containing variables for the test rejection decisions and associated p-values for the test statistics.

[___] = i10test(___,Name=Value) uses additional options specified by one or more name-value arguments, using any input-argument combination in the previous syntaxes. i10test returns the output-argument combination for the corresponding input arguments. For example, i10test(Tbl,NumDiffs=1,DataVariables=1:5) tests the first 5 variables in the input table Tbl, and tests their first difference.

Conduct paired integration and stationarity tests on multiple time series using the default tests and settings. Input the time series data as a numeric matrix.

load Data_Canada.mat

Conduct the default integration (adftest) and stationarity (kpsstest) tests on all time series in the data.
Return the test decisions and p-values.

[H,PValue] = i10test(Data)

var1 0 1

For all series, the tests fail to reject a unit root (H = 0 for I(1)), and they reject stationarity (H = 1 for I(0)). The p-values are large for adftest and very small, outside the Monte Carlo simulated tables, for kpsstest. Conduct paired integration and stationarity tests on two time series, which are variables in a table, using default options. Return a table of results. Conduct the default integration and stationarity tests on all the variables. Return the table of test decisions and p-values.

DecisionTbl = i10test(TT)

INF_C 0 1
INF_G 0 1
INT_S 0 1
INT_M 0 1
INT_L 0 1

DecisionTbl=5×4 table

             I1    I0      P1       P0
             __    __    _______    ____

    INF_C    0     1     0.32546    0.01
    INF_G    0     1     0.26503    0.01
    INT_S    0     1      0.4097    0.01
    INT_M    0     1       0.621    0.01
    INT_L    0     1      0.7358    0.01

DecisionTbl is a table of test results. The rows correspond to variables in the input timetable TT, and the columns correspond to rejection decisions and corresponding p-values. By default, i10test conducts paired integration and stationarity tests on all variables in the input table. To select a subset of variables from an input table, set the DataVariables option. Conduct paired integration and stationarity tests on two time series and their differences. Specify integration and stationarity test options. Load the Nelson-Plosser data, which contains data in the table DataTable. Consider conducting augmented Dickey-Fuller tests to assess integration and KPSS stationarity tests to assess stationarity. Create scalar structures that specify integration and stationarity test options in cell vectors.
IParams.names = {'Lags' 'Model'}; % Names of augmented Dickey-Fuller test options
IParams.vals = {1 'ts'};          % Values of augmented Dickey-Fuller test options
SParams.names = {'Trend'};        % Names of KPSS test options
SParams.vals = {true};            % Values of KPSS test options

Conduct the integration and stationarity tests on the real gross national product (GNPR) and the consumer price index (CPI) series, and their first differences. Specify the integration and stationarity test options and turn off the command-window display table.

DecisionTbl = i10test(DataTable,DataVariables=["GNPR" "CPI"], ...
    NumDiffs=1,Display="off",ITest="adf",IParams=IParams, ...
    STest="kpss",SParams=SParams)

              I1    I0       P1         P0
              __    __    _________    ________

    GNPR      0     1       0.87598        0.01
    D1GNPR    1     0     0.0054215         0.1
    CPI       0     1       0.97987        0.01
    D1CPI     1     0         0.001    0.056788

DecisionTbl is a table of test results. Rows correspond to variables in the input table DataTable. The variable I1 contains the decisions for testing the null hypothesis that the series contains a unit root. A value of 0 fails to reject the null hypothesis (GNPR and CPI) and a value of 1 rejects the null hypothesis in favor of a stationary series (D1GNPR and D1CPI). The variable P1 contains the p-values for those tests. The variable I0 contains the decisions for testing the null hypothesis that the series is stationary. A value of 1 rejects stationarity of the raw series (GNPR and CPI), and a value of 0 fails to reject stationarity of the differenced series (D1GNPR and D1CPI). The variable P0 contains the corresponding p-values. At the specified settings, the test results suggest that both series have one degree of integration. For each test, i10test removes missing observations, represented by NaN values, from the series being tested. Example: i10test(Tbl,NumDiffs=1,DataVariables=1:5) tests the first 5 variables in the input table Tbl, and tests their first difference.
Variable names used in the display, specified as a string vector or cell vector of strings of length numVars. VarNames(j) specifies the name to use for variable X(:,j) or DataVariables(j). If the input time series data is the matrix X, the default is {'var1','var2',...}. If the input time series data is the table or timetable Tbl, the default is Tbl.Properties.VariableNames. NumDiffs — Number of differences of each input variable to test Number of differences of each input variable to test, specified as a nonnegative integer. To each input variable, i10test applies differences of order 0 through NumDiffs, and conducts paired integration and stationarity tests on each resulting series (a total of 2*numVars*(NumDiffs + 1) tests). Example: NumDiffs=2 ITest — Integration test "adf" (default) | "pp" | character vector Integration test to conduct, specified as a value in this table. "adf" Augmented Dickey-Fuller test, as conducted by adftest "pp" Phillips-Perron test, as conducted by pptest Example: ITest="pp" IParams — Integration test parameters Integration test parameters, specified as a scalar structure. IParams has fields names and vals with the following values: IParams.names is a cell vector of valid name-value argument names for the integration test specified by ITest. i10test ignores variable selection arguments of the integration test; use the DataVariables name-value argument instead. IParams.vals is a cell vector with the same length as IParams.names containing corresponding values for the names in IParams.names. Values must specify one test. i10test uses default values for unspecified integration-test options. The default value for IParams is an empty structure, which means i10test uses test defaults.
Example: i10test(Tbl,ITest="pp",IParams=struct('names',{{'Model' 'Test' 'Alpha'}},'vals',{{'ts' 't2' 0.01}})) conducts the default stationarity test and the Phillips-Perron test for integration with a drift term in both hypotheses and a deterministic time trend in the alternative model, uses the modified unstudentized test statistic, and sets the significance level for each test to 0.01. STest — Stationarity test "kpss" (default) | "lmc" | character vector Stationarity test to conduct, specified as a value in this table. "kpss" KPSS test, as conducted by kpsstest "lmc" Leybourne-McCabe test, as conducted by lmctest Example: STest="lmc" SParams — Stationarity test parameters Stationarity test parameters, specified as a scalar structure. SParams has fields names and vals with the following values: SParams.names is a cell vector of valid name-value argument names for the stationarity test specified by STest. i10test ignores variable selection arguments of the stationarity test; use the DataVariables name-value argument instead. SParams.vals is a cell vector with the same length as SParams.names containing corresponding values for the names in SParams.names. Values must specify one test. i10test uses default values for unspecified stationarity-test options. The default value for SParams is an empty structure, which means i10test uses defaults. Example: i10test(Tbl,STest="lmc",SParams=struct('names',{{'Lags' 'Trend' 'Alpha'}},'vals',{{1 false 0.01}})) conducts the default integration test and the Leybourne-McCabe stationarity test, at the 0.01 level of significance, including one lagged response in the structural model and excluding a deterministic time trend term. "on" i10test displays all outputs in tabular form to the command window. Row labels are input variable names and their differences. Column labels indicate the null hypothesis of the tests: I(1) for the integration tests and I(0) for the stationarity tests.
"off" i10test does not display the results to the command window. The value of Display applies to all tests. Variables in Tbl for which i10test conducts the tests, specified as a string vector or cell vector of character vectors containing variable names in Tbl.Properties.VariableNames, or an integer or logical vector representing the indices of names. The selected variables must be numeric. H — Test decisions Test decisions, returned as a numVars*(numDiffs + 1)-by-2 logical matrix. i10test returns H when you supply the input X. Values of 1 indicate rejection of the null hypothesis in favor of the alternative. Null and alternative hypotheses depend on the test and options; see the appropriate reference page for more details. Rows of H correspond, in order, to $x_1, \Delta x_1, \Delta^2 x_1, \dots, \Delta^D x_1, x_2, \Delta x_2, \Delta^2 x_2, \dots, \Delta^D x_2, \dots$, where Δ is the differencing operator and D is the specified number of differences. Columns of H correspond to the null hypothesis of integration, I(1), and the null hypothesis of stationarity, I(0), respectively. PValue — p-values Test statistic p-values, returned as a numVars*(numDiffs + 1)-by-2 matrix with the same size and arrangement as H. i10test returns PValue when you supply the input X. DecisionTbl — Test summary Test summary, returned as a table with variables for outputs H (I1 and I0) and PValue (P1 and P0). Rows correspond to variables specified by DataVariables and labeled by VarNames, and their corresponding differences, where Djname is the label for variable name with an order-j difference. i10test returns DecisionTbl when you supply the input Tbl. Kwiatkowski, Phillips, Schmidt, and Shin [1], and other references, suggest paired integration and stationarity tests as a method for mutual confirmation of individual test results.
However, different integration test results can disagree on the same set of data, different stationarity test results can disagree, and stationarity tests can fail to confirm integration tests. Still, Amano and van Norden [2], Burke [3], and other references perform Monte Carlo studies suggesting that paired testing is generally more reliable than using either type of test alone. [3] Burke, S. P. "Confirmatory Data Analysis: The Joint Application of Stationarity and Unit Root Tests." University of Reading, UK. Discussion paper 20, 1994. R2022a: i10test returns a results table when you supply a table of data If you supply a table of time series data Tbl, i10test returns a table containing variables for the test rejection decisions H and corresponding p-values PValue, with rows corresponding to variables and their differences. Before R2022a, i10test returned H and PValue in separate positions of the output when you supplied a table of input data. Starting in R2022a, if you supply a table of input data, update your code to return all outputs in the first output position. DecisionTbl = i10test(Tbl,Name=Value) i10test issues an error if you request more outputs. See Also: adftest | pptest | kpsstest | lmctest
Three dice D1, D2, D3 are thrown. Find the probability that the numbers on exactly two dice match - Maths - Probability - Meritnation.com Surabhi Grover answered this: When the three dice are thrown, the total number of outcomes is 6×6×6 = 216. We have to find the probability that the numbers on exactly two dice match. The number of cases in which all three dice show the same number: (1,1,1), (2,2,2), (3,3,3), (4,4,4), (5,5,5), (6,6,6) — 6 cases. The number of cases in which all dice show different numbers: 6×5×4 = 120. (This is because the first die can show any of 6 numbers; the second die then has only 5 possibilities, since it cannot show the same number as the first; and similarly the third die has 4 possibilities.) So the cases in which exactly two dice match = total cases − cases in which all dice show the same number − cases in which all dice show different numbers = 216 − 120 − 6 = 90. So the required probability = 90/216 = 5/12.
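The counting argument above is easy to verify by exhaustive enumeration:

```python
from itertools import product
from fractions import Fraction

# Exhaustive check of the derivation above: among the 6^3 = 216 equally
# likely outcomes of three dice, count those where exactly two of the
# three values match (i.e. exactly two distinct values appear).
matches = sum(1 for roll in product(range(1, 7), repeat=3)
              if len(set(roll)) == 2)
probability = Fraction(matches, 6 ** 3)
# matches == 90 and probability == Fraction(5, 12), as derived above.
```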
So what does this mean exactly? Referring to the previous example, maximizing the likelihood of the training data is equivalent to finding the symbol pair whose probability, divided by the probabilities of its first symbol followed by its second symbol, is the greatest among all symbol pairs. E.g. "u", followed by "g", would only have been merged if the probability of "ug" divided by that of "u", "g" had been greater than for any other symbol pair. Intuitively, WordPiece is slightly different from BPE in that it evaluates what it loses by merging two symbols, to ensure it's worth it. Formally, for a unigram-style model trained on words $x_1, \dots, x_N$, where $S(x_i)$ denotes the set of all possible tokenizations of the word $x_i$, the training loss is defined as

$\mathcal{L} = -\sum_{i=1}^{N} \log\Big(\sum_{x \in S(x_i)} p(x)\Big)$
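The WordPiece merge criterion described above can be illustrated with a toy score computation; the corpus here is hypothetical, and symbol frequencies stand in for probabilities (they differ only by constant normalizers, which do not change the ranking).

```python
from collections import Counter

# Toy illustration of the WordPiece merge score described above:
#     score(a, b) = freq(a, b) / (freq(a) * freq(b)),
# on a tiny hypothetical pre-tokenized corpus (symbols split by spaces).
corpus = ["h u g", "h u g", "h u g", "p i g"]

pair_freq, sym_freq = Counter(), Counter()
for word in corpus:
    syms = word.split()
    sym_freq.update(syms)
    pair_freq.update(zip(syms, syms[1:]))

scores = {p: pair_freq[p] / (sym_freq[p[0]] * sym_freq[p[1]])
          for p in pair_freq}
best = max(scores, key=scores.get)
# ("p", "i") wins despite its low raw count, because WordPiece
# normalizes by the frequencies of the individual symbols.
```

This is exactly the "what it loses by merging" intuition: a pair whose symbols rarely occur apart scores higher than a frequent pair of very common symbols.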
Linear classifier - Wikipedia Statistical classification in machine learning In the field of machine learning, the goal of statistical classification is to use an object's characteristics to identify which class (or group) it belongs to. A linear classifier achieves this by making a classification decision based on the value of a linear combination of the characteristics. An object's characteristics are also known as feature values and are typically presented to the machine in a vector called a feature vector. Such classifiers work well for practical problems such as document classification, and more generally for problems with many variables (features), reaching accuracy levels comparable to non-linear classifiers while taking less time to train and use.[1] In this case, the solid and empty dots can be correctly classified by any number of linear classifiers. H1 (blue) classifies them correctly, as does H2 (red). H2 could be considered "better" in the sense that it is also furthest from both groups. H3 (green) fails to correctly classify the dots. If the input feature vector to the classifier is a real vector $\vec{x}$, then the output score is

$y = f(\vec{w} \cdot \vec{x}) = f\Big(\sum_{j} w_j x_j\Big),$

where $\vec{w}$ is a real vector of weights and $f$ is a function that converts the dot product of the two vectors into the desired output. (In other words, $\vec{w}$ is a one-form or linear functional mapping $\vec{x}$ onto R.) The weight vector $\vec{w}$ is learned from a set of labeled training samples.
Often f is a threshold function, which maps all values of $\vec{w} \cdot \vec{x}$ above a certain threshold to the first class and all other values to the second class; e.g.,

$f(\mathbf{x}) = \begin{cases} 1 & \text{if } \mathbf{w}^{T}\mathbf{x} > \theta, \\ 0 & \text{otherwise} \end{cases}$

The superscript T indicates the transpose and $\theta$ is a scalar threshold. A more complex f might give the probability that an item belongs to a certain class. For a two-class classification problem, one can visualize the operation of a linear classifier as splitting a high-dimensional input space with a hyperplane: all points on one side of the hyperplane are classified as "yes", while the others are classified as "no". A linear classifier is often used in situations where the speed of classification is an issue, since it is often the fastest classifier, especially when $\vec{x}$ is sparse. Also, linear classifiers often work very well when the number of dimensions in $\vec{x}$ is large, as in document classification, where each element in $\vec{x}$ is typically the number of occurrences of a word in a document (see document-term matrix). In such cases, the classifier should be well-regularized. Generative models vs. discriminative models[edit] There are two broad classes of methods for determining the parameters of a linear classifier $\vec{w}$: generative and discriminative models.[2][3] Methods of the former model the joint probability distribution, typically via class-conditional density functions $P(\vec{x}\,|\,{\rm class})$, whereas methods of the latter model the conditional probability of the class given the input directly. Examples of generative algorithms include: Linear Discriminant Analysis (LDA)—assumes Gaussian conditional density models Naive Bayes classifier with multinomial or multivariate Bernoulli event models.
The second set of methods includes discriminative models, which attempt to maximize the quality of the output on a training set. Additional terms in the training cost function can easily perform regularization of the final model. Examples of discriminative training of linear classifiers include: Logistic regression—maximum likelihood estimation of $\vec{w}$ assuming that the observed training set was generated by a binomial model that depends on the output of the classifier. Perceptron—an algorithm that attempts to fix all errors encountered in the training set. Fisher's Linear Discriminant Analysis—an algorithm (different from "LDA") that maximizes the ratio of between-class scatter to within-class scatter, without any other assumptions. It is in essence a method of dimensionality reduction for binary classification.[4] Support vector machine—an algorithm that maximizes the margin between the decision hyperplane and the examples in the training set. Note: Despite its name, LDA does not belong to the class of discriminative models in this taxonomy. However, its name makes sense when we compare LDA to the other main linear dimensionality reduction algorithm: principal components analysis (PCA). LDA is a supervised learning algorithm that utilizes the labels of the data, while PCA is an unsupervised learning algorithm that ignores the labels. To summarize, the name is a historical artifact.[5]: 117  Discriminative training often yields higher accuracy than modeling the conditional density functions[citation needed]. However, handling missing data is often easier with conditional density models[citation needed]. All of the linear classifier algorithms listed above can be converted into non-linear algorithms operating on a different input space $\varphi(\vec{x})$, using the kernel trick.
Discriminative training[edit] Discriminative training of linear classifiers usually proceeds in a supervised way, by means of an optimization algorithm that is given a training set with desired outputs and a loss function that measures the discrepancy between the classifier's outputs and the desired outputs. Thus, the learning algorithm solves an optimization problem of the form[1] {\displaystyle {\underset {\mathbf {w} }{\arg \min }}\;R(\mathbf {w} )+C\sum _{i=1}^{N}L(y_{i},\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i})} where w is a vector of classifier parameters, L(yi, wTxi) is a loss function that measures the discrepancy between the classifier's prediction and the true output yi for the i'th training example, R(w) is a regularization function that prevents the parameters from getting too large (causing overfitting), and C is a scalar constant (set by the user of the learning algorithm) that controls the balance between the regularization and the loss function. Popular loss functions include the hinge loss (for linear SVMs) and the log loss (for linear logistic regression). If the regularization function R is convex, then the above is a convex problem.[1] Many algorithms exist for solving such problems; popular ones for linear classification include (stochastic) gradient descent, L-BFGS, coordinate descent and Newton methods. ^ a b c Guo-Xun Yuan; Chia-Hua Ho; Chih-Jen Lin (2012). "Recent Advances of Large-Scale Linear Classification" (PDF). Proc. IEEE. 100 (9). ^ T. Mitchell, Generative and Discriminative Classifiers: Naive Bayes and Logistic Regression. Draft version, 2005. ^ A. Y. Ng and M. I. Jordan, On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes. In NIPS 14, 2002. ^ R. O. Duda, P. E. Hart, D. G. Stork, Pattern Classification, Wiley, 2001. ISBN 0-471-05669-3. Y. Yang, X. Liu, "A re-examination of text categorization", Proc. ACM SIGIR Conference, pp. 42–49, 1999. R. Herbrich, Learning Kernel Classifiers: Theory and Algorithms, MIT Press, 2001. ISBN 0-262-08306-X. Retrieved from "https://en.wikipedia.org/w/index.php?title=Linear_classifier&oldid=1052852317"
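The regularized training objective described in the section above, with the hinge loss and stochastic (sub)gradient descent, can be sketched as follows. This is an illustrative simplification (the per-step handling of the regularizer is not how production solvers schedule it), and the data and hyperparameters are made up:

```python
import numpy as np

def sgd_linear_svm(X, y, C=1.0, lr=0.01, epochs=200, seed=0):
    """SGD on the objective R(w) + C * sum_i hinge(y_i, w . x_i),
    with R(w) = 0.5 * ||w||^2. Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # bias folded into w
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            margin = y[i] * (w @ Xb[i])
            grad = w.copy()                     # gradient of the L2 regularizer
            if margin < 1:                      # hinge loss is active
                grad -= C * y[i] * Xb[i]        # subgradient of the hinge term
            w -= lr * grad
    return w

# Made-up separable data
X = np.array([[2.0, 2.0], [1.0, 3.0], [-2.0, -1.0], [-1.0, -3.0]])
y = np.array([1, 1, -1, -1])
w = sgd_linear_svm(X, y)
preds = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
```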
Minor (linear algebra) - Wikipedia Determinant of a subsection of a square matrix This article is about a concept in linear algebra. For the concept of "minor" in graph theory, see Graph minor. In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices. Definition and illustration[edit] First minors[edit] If A is a square matrix, then the minor of the entry in the i th row and j th column (also called the (i, j) minor, or a first minor[1]) is the determinant of the submatrix formed by deleting the i th row and j th column. This number is often denoted Mi,j. The (i, j) cofactor is obtained by multiplying the minor by {\displaystyle (-1)^{i+j}} To illustrate these definitions, consider the following 3 × 3 matrix, {\displaystyle {\begin{bmatrix}1&4&7\\3&0&5\\-1&9&11\\\end{bmatrix}}} To compute the minor M2,3 and the cofactor C2,3, we find the determinant of the above matrix with row 2 and column 3 removed. {\displaystyle M_{2,3}=\det {\begin{bmatrix}1&4&\Box \\\Box &\Box &\Box \\-1&9&\Box \\\end{bmatrix}}=\det {\begin{bmatrix}1&4\\-1&9\\\end{bmatrix}}=9-(-4)=13} So the cofactor of the (2,3) entry is {\displaystyle \ C_{2,3}=(-1)^{2+3}(M_{2,3})=-13.} Let A be an m × n matrix and k an integer with 0 < k ≤ m, and k ≤ n.
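The M2,3 and C2,3 computation above can be reproduced in a few lines; a NumPy sketch (rows and columns are indexed from zero here, so the (2,3) entry is at index (1,2), and the sign (-1)^(i+j) has the same parity in either convention):

```python
import numpy as np

def first_minor(A, i, j):
    """M_ij: determinant of A with row i and column j removed (0-based)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """C_ij = (-1)^(i+j) * M_ij."""
    return (-1) ** (i + j) * first_minor(A, i, j)

A = np.array([[1.0, 4.0, 7.0],
              [3.0, 0.0, 5.0],
              [-1.0, 9.0, 11.0]])
M23 = first_minor(A, 1, 2)   # det [[1, 4], [-1, 9]] = 13
C23 = cofactor(A, 1, 2)      # -13
```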
A k × k minor of A, also called a minor determinant of order k of A or, if m = n, the (n−k)th minor determinant of A (the word "determinant" is often omitted, and the word "degree" is sometimes used instead of "order"), is the determinant of a k × k matrix obtained from A by deleting m−k rows and n−k columns. Sometimes the term is used to refer to the k × k matrix obtained from A as above (by deleting m−k rows and n−k columns), but this matrix should be referred to as a (square) submatrix of A, leaving the term "minor" to refer to the determinant of this matrix. For a matrix A as above, there are a total of {\textstyle {m \choose k}\cdot {n \choose k}} minors of size k × k. The minor of order zero is often defined to be 1. For a square matrix, the zeroth minor is just the determinant of the matrix.[2][3] Let {\displaystyle 1\leq i_{1}<i_{2}<\cdots <i_{k}\leq m} and {\displaystyle 1\leq j_{1}<j_{2}<\cdots <j_{k}\leq n} be ordered sequences (in natural order, as is always assumed when talking about minors unless otherwise stated) of indexes, call them I and J, respectively. The minor {\textstyle \det \left((A_{i_{p},j_{q}})_{p,q=1,\ldots ,k}\right)} corresponding to these choices of indexes is denoted {\displaystyle \det _{I,J}A}, {\displaystyle \det A_{I,J}}, {\displaystyle [A]_{I,J}}, {\displaystyle M_{I,J}}, {\displaystyle M_{i_{1},i_{2},\ldots ,i_{k},j_{1},j_{2},\ldots ,j_{k}}}, or {\displaystyle M_{(i),(j)}} (where {\displaystyle (i)} denotes the sequence of indexes I, etc.), depending on the source.
Also, there are two conventions in use in the literature: by the minor associated to ordered sequences of indexes I and J, some authors[4] mean the determinant of the matrix that is formed as above, by taking the elements of the original matrix from the rows whose indexes are in I and columns whose indexes are in J, whereas some other authors mean by a minor associated to I and J the determinant of the matrix formed from the original matrix by deleting the rows in I and columns in J.[2] Which convention is used should always be checked from the source in question. In this article, we use the inclusive definition of choosing the elements from rows of I and columns of J. The exceptional case is the case of the first minor or the (i, j)-minor described above; in that case, the exclusive meaning {\textstyle M_{i,j}=\det \left(\left(A_{p,q}\right)_{p\neq i,q\neq j}\right)} is standard everywhere in the literature and is used in this article also. The complement, Bijk...,pqr..., of a minor, Mijk...,pqr..., of a square matrix, A, is formed by the determinant of the matrix A from which all the rows (ijk...) and columns (pqr...) associated with Mijk...,pqr... have been removed. The complement of the first minor of an element aij is merely that element.[5] Applications of minors and cofactors[edit] Cofactor expansion of the determinant[edit] Main article: Laplace expansion The cofactors feature prominently in Laplace's formula for the expansion of determinants, which is a method of computing larger determinants in terms of smaller ones. Given an n × n matrix {\displaystyle A=(a_{ij})} , the determinant of A, denoted det(A), can be written as the sum of the cofactors of any row or column of the matrix multiplied by the entries that generated them.
In other words, defining {\displaystyle C_{ij}=(-1)^{i+j}M_{ij}} then the cofactor expansion along the j th column gives: {\displaystyle \ \det(\mathbf {A} )=a_{1j}C_{1j}+a_{2j}C_{2j}+a_{3j}C_{3j}+\cdots +a_{nj}C_{nj}=\sum _{i=1}^{n}a_{ij}C_{ij}=\sum _{i=1}^{n}a_{ij}(-1)^{i+j}M_{ij}} The cofactor expansion along the i th row gives: {\displaystyle \ \det(\mathbf {A} )=a_{i1}C_{i1}+a_{i2}C_{i2}+a_{i3}C_{i3}+\cdots +a_{in}C_{in}=\sum _{j=1}^{n}a_{ij}C_{ij}=\sum _{j=1}^{n}a_{ij}(-1)^{i+j}M_{ij}} Inverse of a matrix[edit] One can write down the inverse of an invertible matrix by computing its cofactors by using Cramer's rule, as follows. The matrix formed by all of the cofactors of a square matrix A is called the cofactor matrix (also called the matrix of cofactors or, sometimes, comatrix): {\displaystyle \mathbf {C} ={\begin{bmatrix}C_{11}&C_{12}&\cdots &C_{1n}\\C_{21}&C_{22}&\cdots &C_{2n}\\\vdots &\vdots &\ddots &\vdots \\C_{n1}&C_{n2}&\cdots &C_{nn}\end{bmatrix}}} Then the inverse of A is the transpose of the cofactor matrix times the reciprocal of the determinant of A: {\displaystyle \mathbf {A} ^{-1}={\frac {1}{\operatorname {det} (\mathbf {A} )}}\mathbf {C} ^{\mathsf {T}}.} The transpose of the cofactor matrix is called the adjugate matrix (also called the classical adjoint) of A. The above formula can be generalized as follows: Let {\displaystyle 1\leq i_{1}<i_{2}<\ldots <i_{k}\leq n} {\displaystyle 1\leq j_{1}<j_{2}<\ldots <j_{k}\leq n} be ordered sequences (in natural order) of indexes (here A is an n × n matrix). 
Then[6] {\displaystyle [\mathbf {A} ^{-1}]_{I,J}=\pm {\frac {[\mathbf {A} ]_{J',I'}}{\det \mathbf {A} }},} where I′, J′ denote the ordered sequences of indices (the indices are in natural order of magnitude, as above) complementary to I, J, so that every index 1, ..., n appears exactly once in either I or I′, but not in both (similarly for the J and J′) and {\displaystyle [\mathbf {A} ]_{I,J}} denotes the determinant of the submatrix of A formed by choosing the rows of the index set I and columns of index set J. Also, {\displaystyle [\mathbf {A} ]_{I,J}=\det \left((A_{i_{p},j_{q}})_{p,q=1,\ldots ,k}\right)} . A simple proof can be given using the wedge product. Indeed, {\displaystyle [\mathbf {A} ^{-1}]_{I,J}(e_{1}\wedge \ldots \wedge e_{n})=\pm (\mathbf {A} ^{-1}e_{j_{1}})\wedge \ldots \wedge (\mathbf {A} ^{-1}e_{j_{k}})\wedge e_{i'_{1}}\wedge \ldots \wedge e_{i'_{n-k}},} where {\displaystyle e_{1},\ldots ,e_{n}} are the basis vectors. Acting by A on both sides, one gets {\displaystyle [\mathbf {A} ^{-1}]_{I,J}\det \mathbf {A} (e_{1}\wedge \ldots \wedge e_{n})=\pm (e_{j_{1}})\wedge \ldots \wedge (e_{j_{k}})\wedge (\mathbf {A} e_{i'_{1}})\wedge \ldots \wedge (\mathbf {A} e_{i'_{n-k}})=\pm [\mathbf {A} ]_{J',I'}(e_{1}\wedge \ldots \wedge e_{n}).} The sign can be worked out to be {\displaystyle (-1)^{\sum _{s=1}^{k}i_{s}-\sum _{s=1}^{k}j_{s}}} , so the sign is determined by the sums of the elements in I and J. If an m × n matrix with real entries (or entries from any other field) has rank r, then there exists at least one non-zero r × r minor, while all larger minors are zero. We will use the following notation for minors: if A is an m × n matrix, I is a subset of {1,...,m} with k elements, and J is a subset of {1,...,n} with k elements, then we write [A]I,J for the k × k minor of A that corresponds to the rows with index in I and the columns with index in J. If I = J, then [A]I,J is called a principal minor.
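As a numerical sanity check of the cofactor expansion and the adjugate formula for the inverse given earlier, here is a short NumPy sketch using the 3 × 3 example matrix from the beginning of the article:

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors C_ij = (-1)^(i+j) * det(A with row i, column j deleted)."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C

def det_by_expansion(A, row=0):
    """Laplace (cofactor) expansion of det(A) along one row."""
    return float(A[row] @ cofactor_matrix(A)[row])

def inverse_by_adjugate(A):
    """A^{-1} = C^T / det(A), where C^T is the adjugate of A."""
    return cofactor_matrix(A).T / np.linalg.det(A)

A = np.array([[1.0, 4.0, 7.0],
              [3.0, 0.0, 5.0],
              [-1.0, 9.0, 11.0]])
d = det_by_expansion(A)          # agrees with det(A) = -8
Ainv = inverse_by_adjugate(A)    # Ainv @ A is the identity
```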
If the matrix that corresponds to a principal minor is a square upper-left submatrix of the larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to k, also known as a leading principal submatrix), then the principal minor is called a leading principal minor (of order k) or corner (principal) minor (of order k).[3] For an n × n square matrix, there are n leading principal minors. A basic minor of a matrix is the determinant of a square submatrix that is of maximal size with nonzero determinant.[3] For Hermitian matrices, the leading principal minors can be used to test for positive definiteness and the principal minors can be used to test for positive semidefiniteness. See Sylvester's criterion for more details. Both the formula for ordinary matrix multiplication and the Cauchy–Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an m × n matrix, B is an n × p matrix, I is a subset of {1,...,m} with k elements and J is a subset of {1,...,p} with k elements. Then {\displaystyle [\mathbf {AB} ]_{I,J}=\sum _{K}[\mathbf {A} ]_{I,K}[\mathbf {B} ]_{K,J}\,} where the sum extends over all subsets K of {1,...,n} with k elements. This formula is a straightforward extension of the Cauchy–Binet formula. Multilinear algebra approach[edit] A more systematic, algebraic treatment of minors is given in multilinear algebra, using the wedge product: the k-minors of a matrix are the entries in the kth exterior power map. If the columns of a matrix are wedged together k at a time, the k × k minors appear as the components of the resulting k-vectors. For example, the 2 × 2 minors of the matrix {\displaystyle {\begin{pmatrix}1&4\\3&\!\!-1\\2&1\\\end{pmatrix}}} are −13 (from the first two rows), −7 (from the first and last row), and 5 (from the last two rows). 
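Both of the last two claims are easy to verify numerically: the three 2 × 2 minors of the example matrix, and the product formula [AB]I,J = ΣK [A]I,K [B]K,J. A sketch (the matrices A and B in the second part are arbitrary random integers, not from the text):

```python
import numpy as np
from itertools import combinations

def minor(M, rows, cols):
    """[M]_{I,J}: determinant of the submatrix picked by row set I and column set J."""
    return np.linalg.det(M[np.ix_(list(rows), list(cols))])

# 2 x 2 minors of the example matrix: -13, -7, 5
M = np.array([[1.0, 4.0], [3.0, -1.0], [2.0, 1.0]])
minors = [round(minor(M, I, (0, 1))) for I in combinations(range(3), 2)]

# Minors of a product: [AB]_{I,J} = sum over K of [A]_{I,K} [B]_{K,J}
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 4)).astype(float)
B = rng.integers(-3, 4, size=(4, 3)).astype(float)
I, J = (0, 2), (1, 2)
lhs = minor(A @ B, I, J)
rhs = sum(minor(A, I, K) * minor(B, K, J) for K in combinations(range(4), 2))
```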
Now consider the wedge product {\displaystyle (\mathbf {e} _{1}+3\mathbf {e} _{2}+2\mathbf {e} _{3})\wedge (4\mathbf {e} _{1}-\mathbf {e} _{2}+\mathbf {e} _{3})} where the two expressions correspond to the two columns of our matrix. Using the properties of the wedge product, namely that it is bilinear and alternating, {\displaystyle \mathbf {e} _{i}\wedge \mathbf {e} _{i}=0,} and antisymmetric, {\displaystyle \mathbf {e} _{i}\wedge \mathbf {e} _{j}=-\mathbf {e} _{j}\wedge \mathbf {e} _{i},} we can simplify this expression to {\displaystyle -13\mathbf {e} _{1}\wedge \mathbf {e} _{2}-7\mathbf {e} _{1}\wedge \mathbf {e} _{3}+5\mathbf {e} _{2}\wedge \mathbf {e} _{3}} where the coefficients agree with the minors computed earlier. A remark about different notation[edit] In some books, the term adjunct is used instead of cofactor.[7] Moreover, it is denoted as Aij and defined in the same way as the cofactor: {\displaystyle \mathbf {A} _{ij}=(-1)^{i+j}\mathbf {M} _{ij}} Using this notation the inverse matrix is written this way: {\displaystyle \mathbf {M} ^{-1}={\frac {1}{\det(M)}}{\begin{bmatrix}A_{11}&A_{21}&\cdots &A_{n1}\\A_{12}&A_{22}&\cdots &A_{n2}\\\vdots &\vdots &\ddots &\vdots \\A_{1n}&A_{2n}&\cdots &A_{nn}\end{bmatrix}}} Keep in mind that adjunct is not adjugate or adjoint. In modern terminology, the "adjoint" of a matrix most often refers to the corresponding adjoint operator. ^ Burnside, William Snow & Panton, Arthur William (1886) Theory of Equations: with an Introduction to the Theory of Binary Algebraic Form. ^ a b Elementary Matrix Algebra (Third edition), Franz E. Hohn, The Macmillan Company, 1973, ISBN 978-0-02-355950-1 ^ a b c "Minor". Encyclopedia of Mathematics. ^ Linear Algebra and Geometry, Igor R. Shafarevich, Alexey O. Remizov, Springer-Verlag Berlin Heidelberg, 2013, ISBN 978-3-642-30993-9 ^ Bertha Jeffreys, Methods of Mathematical Physics, p. 135, Cambridge University Press, 1999, ISBN 0-521-66402-0. ^ Viktor Vasil'evich Prasolov (13 June 1994).
Problems and Theorems in Linear Algebra. American Mathematical Soc. pp. 15–. ISBN 978-0-8218-0236-6. ^ Felix Gantmacher, Theory of Matrices (1st ed., original language is Russian), Moscow: State Publishing House of Technical and Theoretical Literature, 1953, p. 491. PlanetMath entry on Cofactors. Retrieved from "https://en.wikipedia.org/w/index.php?title=Minor_(linear_algebra)&oldid=1086029281"
Pantetheine-phosphate adenylyltransferase - Wikipedia Phosphopantetheine adenylyltransferase from Thermotoga maritima. 4'-Phosphopantetheine shown as spheres. PDB 1vlh In enzymology, a pantetheine-phosphate adenylyltransferase (EC 2.7.7.3) is an enzyme that catalyzes the chemical reaction ATP + 4'-phosphopantetheine {\displaystyle \rightleftharpoons } diphosphate + 3'-dephospho-CoA Thus, the two substrates of this enzyme are ATP and 4'-phosphopantetheine, whereas its two products are diphosphate and 3'-dephospho-CoA. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is ATP:pantetheine-4'-phosphate adenylyltransferase. Other names in common use include dephospho-CoA pyrophosphorylase, pantetheine phosphate adenylyltransferase, dephospho-coenzyme A pyrophosphorylase, and 3'-dephospho-CoA pyrophosphorylase. This enzyme participates in pantothenate and CoA biosynthesis. As of late 2007, 8 structures have been solved for this class of enzymes, with PDB accession codes 1B6T, 1GN8, 1H1T, 1O6B, 1OD6, 1QJC, 1TFU, and 1VLH. Novelli GD (1953). "Enzymatic synthesis and structure of CoA". Fed. Proc. 12 (3): 675–81. PMID 13107738. Martin DP, Drueckhammer DG (1993). "Separate enzymes catalyze the final two steps of coenzyme A biosynthesis in Brevibacterium ammoniagenes: purification of pantetheine phosphate adenylyltransferase". Biochem. Biophys. Res. Commun. 192 (3): 1155–61. doi:10.1006/bbrc.1993.1537. PMID 8389542. Geerlof A, Lewendon A, Shaw WV (1999). "Purification and characterization of phosphopantetheine adenylyltransferase from Escherichia coli". J. Biol. Chem. 274 (38): 27105–11. doi:10.1074/jbc.274.38.27105. PMID 10480925. Izard T, Geerlof A, Lewendon A, Barker JJ (1999). "Cubic crystals of phosphopantetheine adenylyltransferase from Escherichia coli". Acta Crystallogr. D. 55 (Pt 6): 1226–8. doi:10.1107/S0907444999004394.
PMID 10329792. Retrieved from "https://en.wikipedia.org/w/index.php?title=Pantetheine-phosphate_adenylyltransferase&oldid=989144293"
15 October 2016 The Frobenius properad is Koszul Ricardo Campos, Sergei Merkulov, Thomas Willwacher Duke Math. J. 165(15): 2921-2989 (15 October 2016). DOI: 10.1215/00127094-3645116 We show the Koszulness of the properad governing involutive Lie bialgebras and also of the properads governing nonunital and unital-counital Frobenius algebras, solving a long-standing problem. This gives us minimal models for their deformation complexes, and for deformation complexes of their algebras, which are discussed in detail. Using an operad of graph complexes we prove, with the help of an earlier result of one of the authors, that there is a highly nontrivial action of the Grothendieck–Teichmüller group GRT_1 on (completed versions of) the minimal models of the properads governing Lie bialgebras and involutive Lie bialgebras by automorphisms. As a corollary, one obtains a large class of universal deformations of (involutive) Lie bialgebras and Frobenius algebras, parameterized by elements of the Grothendieck–Teichmüller Lie algebra. We also prove that for any given homotopy involutive Lie bialgebra structure on a vector space, there is an associated homotopy Batalin–Vilkovisky algebra structure on the associated Chevalley–Eilenberg complex. Received: 20 October 2014; Revised: 29 October 2015; Published: 15 October 2016 Keywords: Frobenius algebras, Grothendieck–Teichmüller group, involutive Lie bialgebras, operads, string topology
GeometricRandomVariable - Maple Help GeometricRandomVariable(p) The geometric random variable is a discrete probability random variable with probability function given by: f(t) = 0 for t < 0, and f(t) = p(1−p)^t otherwise, where 0 < p ≤ 1. The geometric random variable has the lack-of-memory property: given that the first success has not yet occurred, the distribution of the number of additional failures before it occurs is the same as the original distribution, independent of how many failures have already been observed. (This is the discrete analogue of the memoryless property of the exponential random variable.) Note that the distribution above is for the number of failures before the first success. The other common convention is for the number of trials, the last being the first success; that convention has p(1−p)^(t−1) in the probability function. with(Student[Statistics]): X := GeometricRandomVariable(p): ProbabilityFunction(X, u) returns 0 for u < 0 and p(1−p)^u otherwise. ProbabilityFunction(X, 2) returns p(1−p)^2. Mean(X) returns (1−p)/p. Variance(X) returns (1−p)/p^2.
Y := GeometricRandomVariable(1/4): ProbabilityFunction(Y, x, output=plot) CDF(Y, x) returns 0 for x < 0 and 1 − (3/4)^(⌊x⌋+1) otherwise. CDF(Y, 5, output=plot) The Student[Statistics][GeometricRandomVariable] command was introduced in Maple 18. See also: Statistics[Distributions][Geometric]
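As a cross-check of the values returned above, here is a short Python sketch of the same distribution (failures-before-first-success convention), not Maple code:

```python
from math import floor

def geom_pmf(p, t):
    """P(X = t): probability of exactly t failures before the first success."""
    return p * (1 - p) ** t if t >= 0 else 0.0

def geom_cdf(p, x):
    """P(X <= x) = 1 - (1-p)^(floor(x)+1) for x >= 0."""
    return 0.0 if x < 0 else 1.0 - (1 - p) ** (floor(x) + 1)

p = 1 / 4
mean = (1 - p) / p             # 3.0
variance = (1 - p) / p ** 2    # 12.0
pmf2 = geom_pmf(p, 2)          # p(1-p)^2 = 9/64
```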
VectorField constructor: constructing a VectorField object. Calling sequences: VectorField( components = compList, space = varList) VectorField( DExpr, space = varList) VectorField( listOfPairs, space = varList ) VectorField( 0, space = varList) Parameters: compList is a list of scalar expressions [xi1, xi2, ..., xin], the components of the vector field; DExpr is an expression of the form xi1*D[x1] + xi2*D[x2] + ... + xin*D[xn]; listOfPairs is a list of ordered pairs [[xi1,x1], [xi2,x2], ..., [xin,xn]] of component values and corresponding space coordinates. The command VectorField(...) is a constructor method for creating a VectorField object. Once a valid VectorField object is created, it has access to various methods which allow it to be manipulated and its contents queried. See Overview of VectorField object for more detail. A vector field X on the space (x1, x2, ..., xn) has the form X = Σ_{i=1}^{n} ξ^i(x1, x2, ..., xn) ∂/∂x_i, where the components ξ^i are scalar expressions in the coordinates x1, x2, ..., xn. The VectorField command first validates the user input arguments and then constructs a VectorField object. A valid VectorField object consists of two data attributes: the components ξ^1, ξ^2, ..., ξ^n and the space coordinates x1, x2, ..., xn. The second calling sequence is a textual representation of the usual appearance of a vector field. The space = varList argument is optional; if present, its specification of the space overrides the space [x1, x2, ..., xn] implied by DExpr. The fourth calling sequence is a special constructor for the zero vector field on the specified space; the space = varList argument is required. This command is part of the LieAlgebrasOfVectorFields package.
For more detail, see Overview of the LieAlgebrasOfVectorFields package. This command can be used in the form VectorField(...) only after executing the command with(LieAlgebrasOfVectorFields), but can always be used by executing LieAlgebrasOfVectorFields:-VectorField(...). with(LieAlgebrasOfVectorFields): X := VectorField(components = [x^2, x*y], space = [x, y]) X := x^2 ∂/∂x + x*y ∂/∂y X := VectorField(x^2*D[x] + x*y*D[y]) X := x^2 ∂/∂x + x*y ∂/∂y Third calling sequence, vector field specified by ordered pairs: X := VectorField([[x^2, x], [x*y, y]])
X := x^2 ∂/∂x + x*y ∂/∂y Z := VectorField(0, space = [x, y]) Z := 0 Tx := VectorField(D[x], space = [x, y, z, t]) Tx := ∂/∂x Although the coordinates y, z, t are not visible in the printed form of this vector field, they are present in the VectorField object: GetComponents(Tx), GetSpace(Tx) [1, 0, 0, 0], [x, y, z, t]
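For readers without Maple, the object described in this help page can be mimicked in a few lines of Python. This is only an illustrative sketch (components stored as callables, X(f) evaluated numerically by finite differences), not the Maple implementation, and all names in it are invented:

```python
class VectorField:
    """Sketch of X = sum_i xi_i(x) d/dx_i; components are callables of the coordinates."""

    def __init__(self, components, space):
        assert len(components) == len(space), "one component per coordinate"
        self.components = components
        self.space = space

    def apply(self, f, point, h=1e-6):
        """Approximate X(f) = sum_i xi_i(p) * df/dx_i at the point p."""
        total = 0.0
        for i, xi in enumerate(self.components):
            shifted = list(point)
            shifted[i] += h
            deriv = (f(*shifted) - f(*point)) / h   # forward difference
            total += xi(*point) * deriv
        return total

# X = x^2 d/dx + x*y d/dy, matching the example above
X = VectorField([lambda x, y: x ** 2, lambda x, y: x * y], space=["x", "y"])
val = X.apply(lambda x, y: x * y, point=(2.0, 3.0))   # exact value: 2*x^2*y = 24
```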
Log unconditional probability density for discriminant analysis classifier - MATLAB - MathWorks India Compute Log Unconditional Probability Density of an Observation lp = logp(obj,Xnew) returns the log of the unconditional probability density of each row of Xnew, computed using the discriminant analysis model obj. Xnew is a matrix where each row represents an observation, and each column represents a predictor; the number of columns in Xnew must equal the number of predictors in obj. lp is a column vector with the same number of rows as Xnew. Each entry is the logarithm of the unconditional probability density of the corresponding row of Xnew. Construct a discriminant analysis classifier for Fisher's iris data, and examine its prediction for an average measurement. Load Fisher's iris data and construct a default discriminant analysis classifier. Find the log probability of the discriminant model applied to an average iris. logpAverage = logp(Mdl,mean(meas)) logpAverage = -1.7254 The unconditional probability density of a point x of a discriminant analysis model is P\left(x\right)=\sum _{k=1}^{K}P\left(x,k\right), where P(x,k) is the conditional density of the model at x for class k, when the total number of classes is K. The conditional density P(x,k) is P(x,k) = P(k)P(x|k), where P(k) is the prior probability of class k, and P(x|k) is the conditional density of x given class k. The conditional density function of the multivariate normal with 1-by-d mean μk and d-by-d covariance Σk at a 1-by-d point x is P\left(x|k\right)=\frac{1}{{\left({\left(2\pi \right)}^{d}|{\Sigma }_{k}|\right)}^{1/2}}\mathrm{exp}\left(-\frac{1}{2}\left(x-{\mu }_{k}\right){\Sigma }_{k}^{-1}{\left(x-{\mu }_{k}\right)}^{T}\right), where |{\Sigma }_{k}| is the determinant of {\Sigma }_{k} and {\Sigma }_{k}^{-1} is its inverse matrix. See also: CompactClassificationDiscriminant | fitcdiscr | mahal
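The density formulas above translate directly to code. The sketch below is a generic Python reimplementation (not MathWorks code) with made-up two-class parameters; it computes log P(x) via a log-sum-exp for numerical stability:

```python
import numpy as np

def gaussian_logpdf(x, mu, Sigma):
    """Log of the d-dimensional normal density at the point x."""
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    quad = diff @ np.linalg.solve(Sigma, diff)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + quad)

def log_unconditional_density(x, priors, means, covs):
    """log P(x) = log sum_k P(k) P(x|k), computed stably via log-sum-exp."""
    logs = np.array([np.log(pk) + gaussian_logpdf(x, mu, S)
                     for pk, mu, S in zip(priors, means, covs)])
    m = logs.max()
    return m + np.log(np.exp(logs - m).sum())

# Two hypothetical classes in 2-D with equal priors and identity covariances
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), np.eye(2)]
lp = log_unconditional_density(np.array([0.0, 0.0]), [0.5, 0.5], means, covs)
# By hand: P(x) = (1 + e^{-9}) / (4*pi), so lp = log of that quantity
```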
Mini-Workshop: Heterotic Strings, Derived Categories, and Stacks | EMS Press Harald A. Posch The Pennsylvania State University, University Park, United States The mini-workshop "Heterotic strings, derived categories, and stacks", organised by Björn Andreas (Berlin), Emanuel Scheidegger (Vienna) and Eric Sharpe (Utah), was held November 13-19, 2005. This meeting was well attended, with 14 participants with broad geographic representation. The workshop was a nice blend of researchers with various backgrounds in both mathematics and physics. The three topics represent areas of mathematics and physics with significant technical overlap. Heterotic strings are types of string theories whose compactifications involve complex Kähler manifolds with holomorphic vector bundles, and most of the complications revolve around those vector bundles. Derived categories (of coherent sheaves) have an obvious mathematical link with holomorphic vector bundles, and appear physically in studies of D-brane/antibrane systems. Details of the physical model in which derived categories enter physics are also closely related to the details of the physical model in which stacks enter physics: in each case, only a distinguished subclass of presentations can be realized physically, and the nonuniqueness of presentations in that subclass is conjectured to be washed out by a physical process called renormalization group flow. These topics also form elements of conjectured generalizations of "mirror symmetry." Mirror symmetry is a symmetry exchanging pairs of complex Kähler manifolds with trivial canonical bundle. It has been of interest to algebraic geometers because it provides a new approach to enumerative geometry: (usually difficult) curve-counting questions are mapped to comparatively trivial questions about the mirror manifold. Mirror symmetry was originally developed for spaces, but recently has been extended to stacks.
One of the conjectured generalizations of mirror symmetry, known as "(0,2) mirror symmetry," exchanges pairs consisting of complex Kähler manifolds with holomorphic vector bundles, and is an analogue of ordinary mirror symmetry for heterotic strings. Another generalization, known as "homological mirror symmetry," exchanges derived categories of coherent sheaves on one of the mirrors with a derived Fukaya category of the other. As the topics of this mini-workshop show up in these new areas of mirror symmetry, this mini-workshop could have instead been titled "New developments in mirror symmetry." Since understanding these topics involves an interplay between mathematics and physics, for this mini-workshop we brought together a collection of both mathematicians and physicists. B. Andreas, V. Braun, and E. Scheidegger spoke specifically on mathematical aspects of heterotic strings, and E. Sharpe gave an overview of a few current problems in heterotic strings. A. Tomasiello spoke on mirror symmetry in flux backgrounds, using ideas recently developed by Hitchin to extend mirror symmetry for type II strings. (The same ideas can also, it is thought, be used to solve certain technical problems in understanding heterotic strings in flux backgrounds, as discussed in E. Sharpe's talk.) D. Ploog spoke on general aspects of derived categories and Fourier-Mukai transforms, then U. Bruzzo and D. Hernandez Ruiperez gave a collection of talks on Fourier-Mukai transforms, relevant to both derived categories (encoding automorphisms thereof) and heterotic strings (encoding T-dualities). E. Macri spoke on Π-stability, a physical aspect of derived categories. K.-G. Schlesinger and C. Lazaroiu spoke on A_∞ and L_∞ algebras, as relevant to open and closed string field theory, and which play a role in the physical understanding of derived categories. Finally, E. Sharpe and P. Horja gave a collection of talks on physical aspects of stacks. Bjorn Andreas, Harald A.
Posch, Eric Sharpe, Ping Xu, Mini-Workshop: Heterotic Strings, Derived Categories, and Stacks. Oberwolfach Rep. 2 (2005), no. 4, pp. 3019–3060
3-methyl-2-oxobutanoate hydroxymethyltransferase - Wikipedia In enzymology, a 3-methyl-2-oxobutanoate hydroxymethyltransferase (EC 2.1.2.11) is an enzyme that catalyzes the chemical reaction
5,10-methylenetetrahydrofolate + 3-methyl-2-oxobutanoate + H2O ⇌ tetrahydrofolate + 2-dehydropantoate
The three substrates of this enzyme are 5,10-methylenetetrahydrofolate, 3-methyl-2-oxobutanoate, and H2O, whereas its two products are tetrahydrofolate and 2-dehydropantoate. This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the hydroxymethyl-, formyl-, and related transferases. The systematic name of this enzyme class is 5,10-methylenetetrahydrofolate:3-methyl-2-oxobutanoate hydroxymethyltransferase. Other names in common use include alpha-ketoisovalerate hydroxymethyltransferase, dehydropantoate hydroxymethyltransferase, ketopantoate hydroxymethyltransferase, oxopantoate hydroxymethyltransferase, and 5,10-methylenetetrahydrofolate:alpha-ketoisovalerate hydroxymethyltransferase. This enzyme participates in pantothenate and CoA biosynthesis. As of late 2007, four structures have been solved for this class of enzymes, with PDB accession codes 1M3U, 1O66, 1O68, and 1OY0. Powers SG, Snell EE (1976). "Ketopantoate hydroxymethyltransferase. II. Physical, catalytic, and regulatory properties". J. Biol. Chem. 251 (12): 3786–93. PMID 6463. Teller JH, Powers SG, Snell EE (1976). "Ketopantoate hydroxymethyltransferase. I. Purification and role in pantothenate biosynthesis". J. Biol. Chem. 251 (12): 3780–5. PMID 776976.
To William Thompson [1 March 1849] My dear Thompson I shd. have answered your note by return of Post, but I was in bed all the day before yesterday & was incapable of doing anything yesterday morning. The enclosed diagram will show difference between Chthamalus & Balanus.—1 You will with great difficulty recognize the genus externally in the British species: internally (when cleaned) there is a conspicuous difference.— I send some good British specimens & a N. American species, in Blotting Paper which shows the characteristic difference much better— Look with lens at inside of the cleaned Brit. spec. & compare with a cleaned Balanus & look at diagram & you will understand difference.— It is curious that the British B. punctatus & Chthamalus are so alike externally, as they belong to different sub-families.— The British Chthamalus has never been named as a Chthamalus; I suspect it is the B. punctatus of Montague; but the nomenclature of the sessile Cirripedes is all guess-work.— As I thought it wd be satisfactory to you, I have looked through your whole collection & cannot see any Chthamalus; I suspect cause is that you have collected generally those on shells & wood, crabs &c. & not so often on rock.— (NB The walls in Chthamalus are not cellular or tubular as is always case with Balanus)— I have been delighted at going over your collection again to see what an admirable one it is: it will be invaluably useful to me.—2 I have gone through 4/5 of your Book3 & found much to interest me— The accounts of the Hawks & Corvidæ,—in short wherever you treat much of habits, will, interest, I shd. think, everyone.— The discussions on the rarer species being deservedly Irish, will tend to make the Book heavy for the general reader; but I know that you intended your work not as one of amusement, but of instruction.— Why do you & others separate English & Latin Index.— does the English hurt the Latin?
Many a time in such cases has something very like an oath come very near my lips, when I have found that I have been looking in a hurry in the wrong index.— Have you seen Rev. E. S. Dixon (not Dickson) work on Poultry;4 it is very good & amusing. I quite forget whether I told you, that I have quite lately made up my mind to go before the end of this month to Malvern for two months to see if there be any truth in the water cure.— It will lose me two months but if I can partly regain my health, it will indeed answer in every respect, for I am sure I have not worked more than 1 out of the 3 last months. I fear we shall not meet this Spring, without you remain till June in London.— I am tired so no more— If you are in doubt about any Irish Chthamalus, pray send it me. What labour your Book must have cost you! [DIAGRAMS HERE: horizontal sections through the upper part of the shell. Chthamalus: post. end or carina, 2d latera, 1st latera, anterior end of Rostrum. Balanus: post. end or carina, anterior end of Rostrum.] you will see that carina & second latera are the same in Balanus & Chthamalus— whereas Rostrum & 1st latera materially differ.— The Rostrum & carina have a similar structure in Chthamalus.— The diagram is at the end of the letter, and is on the verso of what appears to be the cover of Thompson’s letter to CD, postmarked 27 February. A similar diagram appears in Living Cirripedia (1854): 39 and depicts these and three other genera in the Balanidae. Thompson sent CD ‘his very large collection’ of Balanus balanoides and also provided information concerning the range, habits, and growth rate of this species (see Living Cirripedia (1854): 272–3, and letter from William Thompson, 29 September 1848). CD is referring to the first volume of W. Thompson 1849–56. CD’s annotated copies of the first three volumes of this work are in the Darwin Library–CUL. CD recorded having read them on 5 March 1849 (DAR 119; Correspondence vol. 4, Appendix IV). E. S. Dixon 1848.
CD’s annotated copy is in the Darwin Library–CUL. See letter to J. D. Hooker, 6 October [1848], for CD’s opinion of Edmund Saul Dixon. Dixon, Edmund Saul. 1848. Ornamental and domestic poultry: their history and management. London: Office of the “Gardeners’ Chronicle”. Thompson, William. 1849–56. The natural history of Ireland. 4 vols. (Vol. 4 edited by Robert Patterson.) London. Encloses diagram illustrating difference between Chthamalus and Balanus. Specimens sent. Finds no Chthamalus in WT’s collection. Has read with much interest WT’s book [The natural history of Ireland, vol. 1 (1849)]. Recommends E. S. Dixon’s book [Ornamental and domestic poultry; their history and management (1848)]. Trinity College Library, Cambridge (Add.L.b.1: 24) ALS 5pp & CC 1p inc Thompson, William (a)
Design and Analyze Compact UWB Low Pass Filter Using pcbComponent - MATLAB & Simulink - MathWorks
Design and Analyze UWB Low Pass Filter
This example shows how to design and analyze a compact ultra-wide band (UWB) low pass filter based on a U-shaped complementary split ring resonator using the pcbComponent object. The filter is designed to have very low insertion loss over a wide band of frequencies, from 0.1 GHz to 10.8 GHz. The design of the filter is taken from the reference [1]. This compact filter design employs a U-shaped complementary split ring resonator (U-CSRR). The U-CSRR is a uniplanar configuration of the complementary split ring resonator (CSRR), as described in [1]. This structure has the advantage of simpler fabrication, as it is formed on the top metal layer. The U-CSRR is formed using two concentric split square rings of outer length L1 and inner length L2, as shown in Figure (a) below. This U-CSRR element is created on the top metal layer of the hosting transmission line. The equivalent circuit of the single U-CSRR cell shown in Figure (b) is represented by a parallel resonant circuit with inductance Lc and capacitance Cc. LR and CR represent the inductance and capacitance of the host transmission line. The U-CSRR particle is electrically coupled to the host transmission line. The equivalent circuit of the U-CSRR particle suggests that at low frequency the impedance of the parallel tank circuit is small and the circuit has passband characteristics. Figure (a) shows the schematic diagram of such a filter employing two U-CSRR cells in microstrip [1], representing the various feature dimensions. As in [1], two U-CSRR particles are chosen in the filter design to obtain a compact size and ensure high attenuation in the stop band. Use the traceRectangular object to create the feeding transmission line ZA and the rectangular unit cell Cell_A. Perform the Boolean add operation on the microstrip shapes ZA and Cell_A to create LeftSection.
% Set variables for ground plane
gndW = 7e-3;
% Set variables for feeding transmission line
ZA_Width = 4e-3;
% Define unit cell length
Cell_Length = 5e-3;
% Create feeding microstrip line
ZA = traceRectangular("Length",ZA_Length,"Width",ZA_Width,...
    "Center",[-ZA_Length/2-Cell_Length 0]);
% Create rectangular unit cell
Cell_A = traceRectangular("Length",Cell_Length,"Width",Cell_Length,...
    "Center",[-Cell_Length/2 0]);
% Join feeding line and rectangular unit cell
LeftSection = ZA + Cell_A;
Use the traceLine object to create shapes s1, s2, s3, and s4, and the traceRectangular object to create shape s5. Subtract shapes s1, s2, s3, s4, and s5 from LeftSection; this operation creates the various slots seen on the U-CSRR particle. Visualize LeftSection using the show function.
% Create shapes for various slots
s1 = traceLine('StartPoint',[-Cell_Length/2-0.2e-3 -1.9e-3],...
    'Angle',[-180 -270 0],'Length',[1.75e-3 3.8e-3 1.75e-3],'Width',0.2e-3);
s2 = traceLine('StartPoint',[-Cell_Length/2+0.2e-3 -1.9e-3],...
    'Angle',[0 90 180],'Length',[1.75e-3 3.8e-3 1.75e-3],'Width',0.2e-3);
    'Angle',[-90 0 90],'Length',[0.8e-3 2.4e-3 0.8e-3],'Width',0.2e-3);
s4 = traceLine('StartPoint',[-Cell_Length/2-1.2e-3 0.2e-3],...
    'Angle',[90 0 -90],'Length',[0.8e-3 2.4e-3 0.8e-3],'Width',0.2e-3);
s5 = traceRectangular("Length",0.2e-3,"Width",1.8e-3,...
% Create slots of U-CSRR on the hosted microstrip line
LeftSection = LeftSection - s1 - s2 - s3 - s4 - s5;
Use the copy, rotateZ, and rotateX methods on the LeftSection object to create RightSection, the right portion of the filter with another U-CSRR hosted transmission line. Visualize RightSection using the show function. Perform the Boolean add operation on the shapes LeftSection and RightSection to create the filter, then visualize the filter.
filter = LeftSection + RightSection;
Define the substrate parameters and create a dielectric to use in the pcbComponent of the designed filter. Create a ground plane using the traceRectangular shape. Use pcbComponent to create the filter PCB.
Assign the dielectric and ground plane to the Layers property of pcbComponent. Assign the FeedLocations to the edges of the feed ports. Set the BoardThickness to 1.52 mm on the pcbComponent and visualize the filter. The code below performs these operations and creates the filter PCB.
% Define substrate and its thickness
substrate = dielectric("RO4730JXR");
substrate.Thickness = 1.52e-3;
% Define bottom ground plane
ground = traceRectangular("Length",gndL,"Width",gndW,...
    "Center",[0,0]);
Use pcbComponent to create the filter PCB. Use the mesh function to obtain a fine mesh, setting MaxEdgeLength to 1 mm.
mesh(pcb,'MaxEdgeLength',1e-3);
Use the sparameters function to calculate the S-parameters of the low pass filter and plot them using the rfplot function. Analyze the values of S21 and S11 to understand the behavior of the low pass filter.
rfplot(spar,[1 2],1);
The result shows that the filter has S21 values close to 0 dB and S11 values less than -15 dB over the wide band between f1 = 0.1 GHz and f2 = 10.0 GHz. The designed filter therefore has an ultra-wide passband response. For frequencies greater than 10.8 GHz, S21 values are less than -10 dB, indicating a stopband response. Use the charge function to visualize the charge distribution on the metal surface and dielectric of the low pass filter.
charge(pcb,5e9);
charge(pcb,5e9,'dielectric');
Use the current function to visualize the current distribution on the metal surface and the volume polarization currents in the dielectric of the low pass filter.
current(pcb,5e9);
current(pcb,5e9,'dielectric');
[1] Abdalla, M. A., G. Arafa, and M. Saad, "Compact UWB LPF based on uni-planar metamaterial complementary split ring resonator," Proceedings of the 10th International Congress on Advanced Electromagnetic Materials in Microwaves and Optics (METAMATERIALS), 10–12, Chania, Greece, Sep. 2016
Polynomial Operations and Theorems - Vocabulary - Course Hero College Algebra/Polynomial Operations and Theorems/Vocabulary
binomial: polynomial expression consisting of two terms
complex zero: zero of the form a + bi, where a and b are real numbers and i = \sqrt{-1}
degree of a polynomial: greatest degree of a term in a polynomial
degree of a term: exponent of the variable of a term in a polynomial in one variable
leading coefficient: coefficient of the first term of a polynomial written in standard form, which places the term with the highest degree first
like terms: terms that have the same variable (or variables) with the same exponents
monomial: polynomial expression consisting of one term
polynomial function: function whose rule is a polynomial in one variable, which is a sum or difference of terms of the form ax^n, where a is a real number and n is a nonnegative integer
polynomial: sum or difference of terms of the form ax^n
synthetic division: process of long division of polynomials where only the coefficients and constants are recorded; the divisor must be a linear factor with a coefficient of 1
trinomial: polynomial expression consisting of three terms
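The synthetic-division entry above can be made concrete with a short sketch (Python, since the vocabulary page itself has no code; the function name is illustrative). The divisor is the linear factor x - r with leading coefficient 1, as the definition requires:

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial by (x - r) using synthetic division.

    coeffs: coefficients in standard form, highest degree first.
    Returns (quotient_coeffs, remainder).
    """
    row = [coeffs[0]]  # bring down the leading coefficient
    for c in coeffs[1:]:
        # multiply the previous result by r and add the next coefficient
        row.append(c + r * row[-1])
    # The last value is the remainder; the rest are quotient coefficients.
    return row[:-1], row[-1]

# (x^3 - 6x^2 + 11x - 6) / (x - 1) = x^2 - 5x + 6, remainder 0
print(synthetic_division([1, -6, 11, -6], 1))  # ([1, -5, 6], 0)
```

Note that the remainder also equals the polynomial evaluated at r (the remainder theorem), which makes this a quick zero-checking tool.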
FAQ - Session Recordings - Browsee Find everything regarding session recordings and replays in this section.
I do not want to record all my sessions. Can I prioritize my recordings based on URL, UTM source, or country?
Yes, Browsee gives you a lot of flexibility around the sessions you want to record, which means you get more valuable recordings at the same price. Go to Recordings under the Settings tab and add your preferences. You can prioritize your recordings based on:
Landing URLs like "contains /app" or "contains /product/".
UTM source like "cpc", "ads", etc.
Country like "USA", "India", "France", etc.
I do not want to record sessions for a URL, device, or IP. Will these ignored sessions be counted against my quota?
You get complete flexibility around what not to record. Go to Recordings under the Settings tab and add your preferences. You can ignore/deprioritize your recordings based on:
Landing URLs like "contains /blog" or "contains /article/".
Devices like "Mobile", "Desktop", etc.
IP addresses for your team or company, like "192.168.32.12".
Total session time < "2 secs".
These ignored session recordings will not be counted against your quota, so you get more valuable recordings in the same pricing plan.
What is the sampling rate, and why is Browsee not recording all my sessions?
Based on your pricing plan, Browsee decides how many sessions will be recorded per project (the session quota per project).
sampling = (Session Quota/Total Sessions) * 100
For a $100 plan, about 1000 sessions will be recorded for one project every day. If you have 2 projects under the same account, about 500 sessions will be recorded for each project. Read more about sampling-related questions here.
Why has the number of my recordings dropped compared to yesterday?
Browsee computes the sampling rate based on the previous day's sessions.
So, if you run a heavy campaign for a single day, you may notice a reduced number of recordings the next day. It will normalize over time, and we will record sessions as per your monthly quota.
Can I download my session recordings as videos?
No, currently you cannot download your recordings as videos. However, you can email us at [email protected] or ping us on chat with a feature request and we may take it from there.
Why can I not see images and CSS in my session recordings?
Sometimes you may not see the images and CSS as they appear on your website. The reason might be:
Your website is non-SSL, and Browsee cannot serve non-SSL images and CSS.
Sometimes web servers are configured to not let images be downloaded under a different domain.
These are pretty rare cases; please mail us at [email protected] or ping us on chat in case you face any such issue.
Why do I see a jump in the recording while replaying a session?
This may happen if you have set an option in Browsee to prioritize or deprioritize recordings for some of your URLs.
Where can I find my saved sessions? How can I find my sessions with rage or frustration clicks?
Browsee tags your sessions based on session data when there is a user-experience issue. You can find your rage-click sessions in two ways:
You can filter your session list for rage clicks and then watch the sessions. Refer to this document for more details on how to filter sessions.
You can go directly to the rage-click tab on the user-experience screen, where all your recent (last 7 days) rage-click sessions are listed.
In case there are any irrelevant rage clicks, like clicks on image sliders, you can mark them resolved and we will not mark them in the future.
What are AI tags, and what is the advantage of these AI-based tags on session recordings?
Browsee tags session recordings based on user behavior and emotions like rage, confusion, and expectation mismatch. Take advantage of these tags to minimize the human effort of watching sessions: you can watch only the tagged sessions to get an approximate idea of the problems faced by your users. Read more about AI-based tags here.
Can I send additional information in my session recordings, like identify calls or tags?
Yes, you can send any user-related information like name, email, or database ID, or tags like "paid", "unpaid", "in-trial", etc., with an API call. Kindly refer to this document to understand how you can send this information to Browsee.
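The sampling formula quoted in the FAQ above can be sketched in a few lines of Python; the helper names and the even per-project split are illustrative assumptions, not Browsee's actual implementation:

```python
def sampling_rate(session_quota, total_sessions):
    """Percentage of sessions recorded: (quota / total sessions) * 100."""
    if total_sessions == 0:
        return 100.0  # nothing to sample; record everything
    return min(100.0, session_quota / total_sessions * 100)

def per_project_quota(plan_quota, num_projects):
    """Assumed even split of a plan's daily session quota across projects."""
    return plan_quota // num_projects

# e.g. a 1000-session daily quota split evenly across 2 projects
print(per_project_quota(1000, 2))  # 500
print(sampling_rate(500, 2000))    # 25.0
```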
Managing Parallel, Part 1: Queueing, Work, Oh My! Parallel execution is a great thing. It lets you employ multiple workers to get a job done faster. If all the workers perform exactly the same function, then the problem can be divided up evenly amongst the workers. This is the simplest form of parallelism. Picture a room full of typists. <img class="center-image responsive-image" height="auto" width=100% src="https://upload.wikimedia.org/wikipedia/commons/6/68/People_at_work_in_Wartime,_Britain,_1940_D1031.jpg" /> Each one can type. It doesn't matter that each is typing something slightly different; they're all doing part of the larger job (typing all the letters that need to be typed). There are a few things to consider with this model: Typing speed: sure, they all do the same job, but maybe one typist had a lot of coffee or perhaps has a slightly better-oiled machine. They'll go faster than the others. What happens if one typist gets a letter with a single sentence, but others get long letters? If uniformly distributed, this wouldn't do too much to our overall throughput, but what happens if we're unlucky and that one typist gets a batch of only short letters? The short-letter typist would likely finish early and go home, unless we can do a better job of load balancing. Non-homogeneous jobs are common, and must be dealt with. The stacks of papers next to each typist's machine take time to move, and coordination to allocate. Communication takes time and energy. The most optimal model is a combination of multiple similar workers (like the typists), arranged alongside workers that perform other functions. Henry Ford demonstrated an assembly-line approach that gave each worker basically one job. <img class="center-image responsive-image" height="auto" width=100% src="https://upload.wikimedia.org/wikipedia/commons/2/29/Ford_assembly_line_-_1913.jpg"/> He put these workers in a line where the cars were passed from one worker to the next as they were built up.
This assembly-line or job-shop model is what we want to achieve with our parallel programs. It's relatively easy to visualize blocks of "typists" (the same type of worker) existing within the job-shop model: take one worker's station on the line and duplicate it; in the example below we could duplicate B multiple times, leaving the connectivity the same. This model is one of the best for getting things done quickly; however, it has problems of its own. Picture a three-worker system: <img class="center-image responsive-image" height="auto" width=100% src="/img/threeWorkersExample.png"/> Assume for the moment that each worker only has the option of passing jobs to the worker following their station. What would happen if worker C is the slowest? Workers A and B would have to go at the rate of C, which isn't good for overall productivity. How about if worker B is the fastest overall? Worker B couldn't go that fast, because B can only operate at the rate at which A can hand jobs to B and the rate at which C can accept them. This is a problem. Even if we measure each worker and align them to be perfectly rate matched, that match will likely never actually occur again outside of the one time we measured it. Workers, as is the case with a thread/process/context running on a computer, are complex. What happens if B is really motivated after breakfast to go fast? B is still limited by A and C. How about if A and B are really on the ball, but C just isn't motivated for the moment? A and B could do a lot of work, if only they could pile up items for C to work on. From now on, this will be referred to as "bursty behavior." It is temporary or transient (i.e., it only lasts for a short time, but could re-appear at any time; who knows what will motivate our workers to go faster). Worker behavior that causes jobs to pile up in-between workers consistently is caused by rate mismatch (e.g., if A is consistently faster than B, then A is limited by B).
The terms "bursty" and "rate mismatch" will be used from now on as they are more technically appropriate. So far we've assumed that each worker is simply handing jobs directly to the next worker. This is far from the case in many modern assembly lines (and indeed many of our programs). The most efficient way for many assembly lines is to buffer output between workers. <img class="center-image responsive-image" height="auto" width=100% src="https://upload.wikimedia.org/wikipedia/commons/a/a2/Modern_warehouse_with_pallet_rack_storage_system.jpg" /> There are many reasons to buffer (using the A, B, C example above): Maybe worker B needs five of each job from A. Accommodate bursty behavior of B when transitioning to C. But there are also some wrong reasons to buffer. What would happen if an assembly line attempted to correct rate mismatch with a warehouse (buffer) between worker stations? There are two scenarios (again, with A and B). If A goes significantly faster than B, it is fairly intuitive that B would fall behind and items would pile up between A and B. What would happen over the course of a day? A year? Let's jump right to infinity: an infinite number of jobs would pile up. This is bad, very bad. I don't have infinite space, nor does anyone else. Using a queueing model, in this case a simple M/M/1 system (see the Wikipedia page; the formula for mean queue size is λ²/(μ(μ−λ))), it is easy to show that as the arrival rate (the rate at which A sends jobs to B) approaches B's service rate, the number of jobs piled up between A and B (shown on the y-axis) grows without bound. In the image below I've plotted the number of items that would pile up as the arrival rate is increased from almost zero to almost the rate at which B can service jobs (note: I've not plotted the case where they are equal, since infinity just blows the whole scale of the chart).
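The M/M/1 mean-queue-size formula quoted above is easy to evaluate directly. A minimal Python sketch (the post itself gives no code, so the function name is mine):

```python
def mm1_mean_queue_length(arrival_rate, service_rate):
    """Mean number of jobs waiting between two workers in an M/M/1 queue:
    Lq = lambda^2 / (mu * (mu - lambda)). Valid only for lambda < mu."""
    lam, mu = arrival_rate, service_rate
    if lam >= mu:
        raise ValueError("queue grows without bound when lambda >= mu")
    return lam ** 2 / (mu * (mu - lam))

# As A's arrival rate approaches B's service rate (mu = 1.0),
# the pile-up between the workers explodes:
for lam in (0.5, 0.9, 0.99):
    print(lam, mm1_mean_queue_length(lam, 1.0))
# 0.5 -> 0.5 jobs, 0.9 -> 8.1 jobs, 0.99 -> ~98 jobs
```

This reproduces the shape of the plot below: near-zero backlog at low utilization, then a steep blow-up as the rates equalize.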
<img class="center-image responsive-image" src="/img/queuePlot.jpg" height="auto" width=75% /> How about if B is far faster than A? A would find the buffer space between A and B almost always empty. There is a non-zero probability (that can be calculated) for bursty behavior to occur and for items to briefly pile up in the buffer. That probability is sometimes useful, but for most purposes it happens so infrequently that it won't be a problem (this is a case of bursty behavior, just not the one most would jump to at first). The only word of caution I have is that infrequent events can still happen given high enough rates or long enough running times. One case not yet covered explicitly, because it rarely happens in software processes, is where worker A directly hands items to worker B. This requires both workers to work (or service jobs, in queueing terminology) at the exact same rate in a deterministic manner. Before you can understand exactly why this works, the term deterministic must be understood: it means a system where no randomness is exhibited or involved in the determination of all future states. For an assembly-line system this means that everything is done in lock-step, the exact same way, at the same rate, every time. This case usually exists only in the hardware world (and even then, physics often intervenes to add some unaccounted-for behavior). For software, a deterministic queue hardly ever exists (even perfectly engineered systems exhibit some variation; see this paper on best-case execution variation link). For software systems, these assembly-line models suggest a few things: Fork-join parallelism is limiting; to get the most out of dividing a problem into tasks we need both fork-join and pipeline parallelism (coincidentally, both combined can be termed data-flow). This way all workers can execute as soon as data is available for them to execute on. This is not the way most programs are written today.
Load balancing is critical. Most current systems start by chunking, and then use work stealing to load-level each worker thread/process. But they tend to do it on a fork-join model vs. a pipeline model. It is not easy to get a balanced, high-performing parallel system. There are three general strategies for handling bursty behavior in parallel systems: Statically allocate huge, over-provisioned buffers between workers. Tie thread scheduling into the buffering system so that threads are scheduled only when there is room for their work to be communicated. Dynamically re-allocate buffers based on dynamic conditions. Most systems take the first approach; a few take the second. Statically allocating giant buffers has several drawbacks: first and foremost, huge queues mean long latency and wait. Tying thread schedules to the size of buffers is a great approach; however, it is complicated, and often negates the possibility of letting individual threads go slightly faster given a custom-sized buffer. Lastly, and even more complicated to implement in practice, are dynamically resizing buffers. It is fairly easy to visualize a naive implementation of a dynamically resizing buffer; building one that is high performance without locks is quite difficult. Getting rid of locks/atomic variables is critical: estimate around 45-50 cycles for a pthread_mutex, potentially more depending on the platform. This post was mostly about workers, arrangements, and buffering. The next article (part 2) will be on the choice of parallelism modality and determining the amount of work to put in each thread. There are many modalities to choose from, ranging from SIMD through heavyweight multi-node MPI. The choice of what to put in each context will also be covered, as each threading modality has a different (and non-zero) overhead of use. Thanks for reading; as always, feel free to leave comments, tweet, or share!
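The second strategy above, scheduling work only when buffer room exists, is essentially what a bounded blocking queue provides. A minimal Python sketch using only the standard library; the two workers and their rates are illustrative stand-ins for A and B:

```python
import queue
import threading
import time

buf = queue.Queue(maxsize=4)  # bounded buffer between workers A and B

def worker_a():
    for job in range(8):
        buf.put(job)   # blocks when the buffer is full: backpressure on A
    buf.put(None)      # sentinel: no more jobs

def worker_b(results):
    while True:
        job = buf.get()
        if job is None:
            break
        time.sleep(0.01)  # B is the slow worker; A must wait for space
        results.append(job)

results = []
t_a = threading.Thread(target=worker_a)
t_b = threading.Thread(target=worker_b, args=(results,))
t_a.start(); t_b.start()
t_a.join(); t_b.join()
print(results)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The bounded `maxsize` is the key: A absorbs B's bursty slowness up to four items, then the queue itself throttles A instead of letting jobs pile up toward infinity.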
Basic Concepts in Chemistry, Popular Questions: JEE Online Course CHEMISTRY, Chemistry - Meritnation
Find the total no. of moles of electrons in 9 g of H2O. Mehul Aswar asked a question
Explain how to find the n-factor of Na2SO4. Henil asked a question
The atomic mass of chlorine is 35.5. It has two isotopes of atomic mass 35 and 37. What is the percentage of the heavier isotope in the sample?
Calculate the molecular weight of the compound if 0.168 g of it displaces 49.4 ml of air measured over water at 20 C and 740 mm pressure (aq. tension at 20 C = 18 mm).
The percentage of element M is 53 in its oxide of molecular formula M2O3. Its atomic mass is about
1.9 g of CHxBry has the same no. of atoms as 0.6 g of H2O. The value of x + y is?
What is the structure of 5-(2,2-dimethylpropyl)nonane? Amartya Mitra asked a question
Does soda ash decompose at all? Jayesh Bangad asked a question
How many moles of NH3 are there in 250 cm3 of a 30% solution, the specific gravity of which is 0.90?
Find the percentage composition of iron and magnesium in a 5.0 g sample which, when dissolved in acid, gave 2.81 litres of H2 at NTP.
16 ml of a gaseous aliphatic compound CnH3nOm was mixed with 60 ml of O2 and sparked. The gaseous mixture on cooling occupied 44 ml. After treatment with KOH solution, the volume of gas remaining was 12 ml. Deduce the formula of the compound. All measurements are made at constant pressure and room temperature; on cooling, H2O liquefies and KOH absorbs CO2. Calculate n and m.
20 ml of a 0.1 M solution of Na2CO3 is titrated against 0.05 M HCl; x ml of HCl is used when phenolphthalein is the indicator and y ml of HCl when methyl orange is the indicator, in two separate titrations. Hence (y - x) is
12 g of an alkaline earth metal gives 14.8 g of its nitride. The atomic weight of that metal is?
The percentage of Se in peroxidase anhydrous enzyme is 0.5% by mass (atomic mass = 78). The minimum molecular mass of peroxidase anhydrous enzyme is?
An iron sample contains 18% Fe3O4.
What is the amount that is precipitated as Fe2O3, which weighs 0.40 g?
The molar mass of an acid = 40 g mol-1. 50 ml of one normal Ca(OH)2 is neutralized by 0.5 g of the acid. The basicity of the acid is
The equivalent weight of KCl.MgCl2.6H2O having molecular mass M? 1) M/5
Krithika Elancheran asked a question: If 10 g of dihydrogen reacts with 10 g of dioxygen to produce water, which is the limiting reagent?
What amount of 25% H2SO4 will be required to react completely with 100 g of (20% impure) CaCO3?
On reduction with hydrogen, 3.6 g of an oxide of a metal left 3.2 g of the metal. If the vapour density of the metal is 32, the simplest formula of the oxide would be
A sample of heme, a constituent of haemoglobin, weighs 35.2 mg and contains 3.19 mg of iron; what is its molecular mass? Rbhu Gandhi asked a question
20 g of a monobasic acid furnishes 0.5 mole of H3O+ ions in its aq. solution. The value of 1 g eq. of the acid
Air contains nearly 20 percent of oxygen by volume. The volume of air needed for complete combustion of 100 ml of acetylene will be
The oxide of a metal has 32% oxygen. Its equivalent weight is?
Ques: 24.9 g of an acid (molar mass = 98 g) was dissolved in one litre of solution. Now 1000 ml of this solution was completely neutralised by 90 ml of NaOH solution. The strength of the NaOH solution was 40 g/litre. The basicity of the acid is
What is correct for 10 g of CaCO3? i) It contains 1 g atom of carbon ii) It contains 0.3 g atoms of oxygen iii) It contains 12 g of calcium iv) It refers to 0.1 g equivalent of CaCO3. Please explain each case.
V1 ml of a M HCl and V2 ml of b M HCl are mixed to produce M1 HCl; again, V2 ml of M HCl are mixed to produce M2 HCl. If V1+V2 > 1 and V1+V2 = a+b and M1+M2 = 5+4, then find V1+V2.
How many litres of O2 will be required for complete combustion of 5 litres of C3H8 under a) identical conditions b) identical conditions of temperature 25 degrees Celsius and pressure 1 atmosphere?
If 0.01 mole of Na2CO3 is required, the amount of Na2CO3.10H2O to be taken is?
CaCO3 = CaO + CO2. What weight of CaO will be produced by heating: a) 100 kg of pure CaCO3 b) 100 g of limestone with 95% purity?
How to find the oxidation number of a carbon atom in organic compounds
The vapour density of the pure gaseous product of a solid element burnt in oxygen without any change in volume is 32. Its equivalent mass is?
What percentage of carbon is contained in 0.2 gram of a hydrocarbon which gives 0.506 gram of carbon dioxide on complete combustion?
Is the acidic and reducing character trend the same? If not, how to predict the reducing character trend?
What will change if Avogadro's number is changed?
(A) Mutarotation is specific rotation of an anomer of a carbohydrate (B) Starch can show mutarotation but does not give Tollens' test (C) All carbohydrates are reducing sugars (D) The mixture from acidic hydrolysis of sucrose is known as invert sugar
49) Total number of coordination isomers shown by the complex [Co(NH3)6][Cr(Cl)6]
If the density of a 3 M aq. solution of NaCl is 1.25 g/ml, what will be its molality?
The number of iron atoms per molecule of haemoglobin, if haemoglobin contains 0.25% iron by mass and its molecular mass is 89600, is
What is the structure of 1,1-diethyl-3-methylcyclohexane?
Find the number of protons present in 16 g of methane
Q) The chemical that undergoes self-oxidation and self-reduction in the same reaction is (a) benzyl alcohol (b) acetone (c) formaldehyde (d) acetic acid
Polyethylene can be produced from CaC2 according to the following sequence: CaC2 + H2O → CaO + HC≡CH; n(HC≡CH) + nH2 → (CH2-CH2)n. The mass of polyethylene that can be produced from 20 kg of pure CaC2 is?
What is C in this equation?
How to find the number of valence electrons in an atom
Oxalate ions are oxidised by Cr2O7^2- ions to form a gas, and that is 1) O2 2) CO2 3) CO 4) H2
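The first question in the list above (total moles of electrons in 9 g of H2O) reduces to simple mole arithmetic: 9 g is half a mole of water, and each H2O molecule carries 10 electrons. A small Python sketch of that reasoning:

```python
# Moles of electrons in 9 g of water.
molar_mass_h2o = 18.0    # g/mol (2*1 for H, plus 16 for O)
electrons_per_h2o = 10   # 2 electrons from the H atoms + 8 from O

moles_h2o = 9.0 / molar_mass_h2o           # 0.5 mol of H2O
moles_electrons = moles_h2o * electrons_per_h2o
print(moles_electrons)  # 5.0
```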
In crystallography, a vacancy is a type of point defect in a crystal where an atom is missing from one of the lattice sites.[2] Crystals inherently possess imperfections, sometimes referred to as crystalline defects.
(Figure caption: Electron microscopy of sulfur vacancies in a monolayer of molybdenum disulfide. The right circle points to a divacancy, i.e., sulfur atoms are missing both above and below the Mo layer. The other circles are single vacancies, i.e., sulfur atoms are missing only above or below the Mo layer. Scale bar: 1 nm.[1])
Vacancies occur naturally in all crystalline materials. At any given temperature, up to the melting point of the material, there is an equilibrium concentration (ratio of vacant lattice sites to those containing atoms).[2] At the melting point of some metals the ratio can be approximately 1:1000.[3] This temperature dependence can be modelled by {\displaystyle N_{\rm {v}}=N\exp(-Q_{\rm {v}}/k_{\rm {B}}T)} where Nv is the vacancy concentration, Qv is the energy required for vacancy formation, kB is the Boltzmann constant, T is the absolute temperature, and N is the concentration of atomic sites, i.e. {\displaystyle N=mN_{\rm {A}}/M} where m is the mass, NA the Avogadro constant, and M the molar mass. The vacancy is the simplest point defect: an atom is missing from its regular atomic site. Vacancies are formed during solidification due to vibration of atoms, local rearrangement of atoms, plastic deformation, and ion bombardment. The creation of a vacancy can be simply modeled by considering the energy required to break the bonds between an atom inside the crystal and its nearest neighbor atoms. Once that atom is removed from the lattice site, it is put back on the surface of the crystal and some energy is retrieved, because new bonds are established with other atoms on the surface. However, there is a net input of energy because there are fewer bonds between surface atoms than between atoms in the interior of the crystal.
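The Arrhenius-type formula above is easy to evaluate numerically. The sketch below uses illustrative copper-like numbers (Qv ≈ 0.9 eV, melting point ≈ 1357 K) that are assumptions for the example, not data from the article.

```python
import math

# Equilibrium vacancy fraction Nv/N = exp(-Qv / (kB * T)).
KB_EV = 8.617333e-5          # Boltzmann constant in eV/K

def vacancy_fraction(qv_ev: float, t_kelvin: float) -> float:
    """Ratio of vacant lattice sites to occupied sites at temperature T."""
    return math.exp(-qv_ev / (KB_EV * t_kelvin))

# Copper-like values (assumed): Qv ~ 0.9 eV, T near the melting point 1357 K.
frac = vacancy_fraction(0.9, 1357.0)
print(f"Nv/N near melting: {frac:.2e}")   # on the order of 1e-4 to 1e-3
```

The result, a few parts in ten thousand, is consistent with the order of magnitude quoted in the text for metals near their melting points, and the exponential makes clear why the fraction collapses at room temperature.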
Material physics
In most applications vacancy defects are irrelevant to the intended purpose of a material, as they are either too few or spaced throughout a multi-dimensional space in such a way that force or charge can move around the vacancy. In the case of more constrained structures like carbon nanotubes, however, vacancies and other crystalline defects can significantly weaken the material.[4]
^ a b Ehrhart, P. (1991) "Properties and interactions of atomic defects in metals and alloys", chapter 2, p. 88 in Landolt-Börnstein, New Series III, Vol. 25, Springer, Berlin
^ Siegel, R. W. (1978). "Vacancy concentrations in metals". Journal of Nuclear Materials. 69–70: 117–146. Bibcode:1978JNuM...69..117S. doi:10.1016/0022-3115(78)90240-4.
^ "Defects And Disorder In Carbon Nanotubes" (PDF). Philip G. Collins. Retrieved 8 April 2020.
Crystalline Defects in Silicon
Gamma function - MATLAB gamma - MathWorks Benelux
Evaluate Gamma Function
Y = gamma(X) returns the gamma function evaluated at the elements of X.
Evaluate the gamma function with a scalar: y = gamma(0.5) returns 1.7725, matching \Gamma \left(0.5\right) = \sqrt{\pi }.
Evaluate several values of the gamma function between [-3.5 3.5]:
x = -3.5:3.5;
y = gamma(x)
0.2701 -0.9453 2.3633 -3.5449 1.7725 0.8862 1.3293 3.3234
Plot the gamma function and its reciprocal using fplot. The gamma function increases quickly for positive arguments and has simple poles at all negative integer arguments (as well as 0). The function does not have any zeros. Conversely, the reciprocal gamma function has zeros at all negative integer arguments (as well as 0).
fplot(@gamma)
fplot(@(x) 1./gamma(x))
legend('\Gamma(x)','1/\Gamma(x)')
Input array, specified as a scalar, vector, matrix, or multidimensional array. The elements of X must be real. For double and single data types, the gamma function returns Inf whenever the result would exceed realmax or realmax('single'), respectively. The saturation thresholds for positive integer inputs are gamma(172) and gamma(single(36)), where the evaluated gamma functions exceed the maximum representable values.
The gamma function is defined for real x > 0 by the integral:
\Gamma \left(x\right)={\int }_{0}^{\infty }{e}^{-t}{t}^{x-1}dt
The gamma function interpolates the factorial function. For integer n:
gamma(n+1) = factorial(n) = prod(1:n)
The domain of the gamma function extends to negative real numbers by analytic continuation, with simple poles at the negative integers. This extension arises from repeated application of the recursion relation
\Gamma \left(n-1\right)=\frac{\Gamma \left(n\right)}{n-1}.
The computation of gamma is based on algorithms outlined in [1].
[1] Cody, J., An Overview of Software Development for Special Functions, Lecture Notes in Mathematics, 506, Numerical Analysis Dundee, G. A.
Watson (ed.), Springer Verlag, Berlin, 1976. [2] Abramowitz, M. and I.A. Stegun, Handbook of Mathematical Functions, National Bureau of Standards, Applied Math. Series #55, Dover Publications, 1965, sec. 6.5. gammainc | gammaincinv | gammaln | psi | factorial
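The same properties can be checked outside MATLAB; Python's standard-library math.gamma follows the same definition, so it reproduces the factorial interpolation, the value at 0.5, and the half-integer samples tabulated above. This is an illustrative cross-check, not MathWorks code.

```python
import math

# Gamma interpolates the factorial: gamma(n+1) == n!
assert math.gamma(6) == 120                      # 5! = 120

# gamma(0.5) equals sqrt(pi)
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12

# Reproduce the sampled values for x = -3.5:3.5 quoted in the text.
# The half-integer grid avoids the poles at 0, -1, -2, -3.
xs = [x + 0.5 for x in range(-4, 4)]             # -3.5, -2.5, ..., 3.5
ys = [round(math.gamma(x), 4) for x in xs]
print(ys)   # [0.2701, -0.9453, 2.3633, -3.5449, 1.7725, 0.8862, 1.3293, 3.3234]
```

Successive negative-argument values are related by the recursion Γ(n−1) = Γ(n)/(n−1) stated above, which is how the analytic continuation produces the alternating signs.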
ratvaluep - Maple Help
ratvaluep - compute the sum of the first n terms of a p-adic number
Calling sequence: ratvaluep(ex, n)
Parameters: ex - p-adic number or rational number; n - number of terms
The function ratvaluep returns a rational number which is the sum of the first n terms of the p-adic number ex. If ex is a rational number then it returns ex itself. The command with(padic,ratvaluep) allows the use of the abbreviated form of this command.
with(padic):
f := 75*x^3 + 3*x^2 + 8*x + 3
Digitsp := 13
a := rootp(f, 5)
a := 4*5^(-2) + 4*5^(-1) + 2*5 + 3*5^2 + 3*5^4 + 3*5^5 + 5^6 + 3*5^7 + 4*5^8 + 2*5^9 + O(5^10)
ratvaluep(a, 10)
6533399/25
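The effect of ratvaluep can be reproduced outside Maple by summing digit·p^k over the first n digit positions, starting at the valuation of the number. The helper below is a hypothetical re-implementation for illustration, not Maple's code; the digit list encodes the root shown above, with zero digits written explicitly.

```python
from fractions import Fraction

def ratvaluep(digits, p, valuation, n):
    """Sum the first n digit positions of a p-adic expansion
    starting at p**valuation, as an exact rational number."""
    total = Fraction(0)
    for k, d in enumerate(digits[:n]):
        total += d * Fraction(p) ** (valuation + k)
    return total

# Digits of the root computed above, positions 5^-2, 5^-1, 5^0, ...:
# 4*5^-2 + 4*5^-1 + 2*5 + 3*5^2 + 3*5^4 + 3*5^5 + 5^6 + 3*5^7 + ...
digits = [4, 4, 0, 2, 3, 0, 3, 3, 1, 3]
print(ratvaluep(digits, 5, -2, 10))    # 6533399/25, matching Maple's answer
```

Counting ten digit positions from 5^(-2) up through 5^7 reproduces exactly the rational value 6533399/25 returned by Maple.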
EuDML | Symmetric cube L-functions for GL_2 are entire.
Kim, Henry H.; Shahidi, Freydoon
Kim, Henry H., and Shahidi, Freydoon. "Symmetric cube L-functions for GL_2 are entire." Annals of Mathematics. Second Series 150.2 (1999): 645-662. <http://eudml.org/doc/120843>.
author = {Kim, Henry H., Shahidi, Freydoon},
keywords = {holomorphy; symmetric cube L-function; non-monomial cusp forms},
title = {Symmetric cube L-functions for GL_2 are entire.},
AU - Kim, Henry H.
AU - Shahidi, Freydoon
TI - Symmetric cube L-functions for GL_2 are entire.
KW - holomorphy; symmetric cube L-function; non-monomial cusp forms
Guangshi Lü, Mean values connected with the Dedekind zeta-function of a non-normal cubic field
Articles by Shahidi
Four-tris - TetrisWiki
four-tris logo
Initial release: July 21, 2020[note 1]
Latest version: 1.5.2 / July 1st, 2021[1]
Playfield: 10 × 24 (default, modifiable)
four-tris is a downloadable Windows client made as a training tool for block-stacking games, meant to allow players to quickly explore different situations and scenarios; it was built to be similar to a chess "analysis board", but for Tetris. Features include drawing out pieces or an entire stack onto a testing playfield, undo/redo commands, multiple training modes, game stats, and customizable skins, controls, tuning, and sound effects. four-tris is developed by Fio.
four-tris launches with a regular modern stacker playfield with gravity disabled. With the mouse, players are able to draw out minos onto the playfield, automatically coloring valid tetrominoes when possible. With the camera function, the player can draw a box, similar to Windows 10's Snipping Tool and Snip & Sketch, over an existing playfield in any stacker game, and the program will automatically copy the captured board state to its own playfield. By clicking on various boxes, the player can change the oncoming queue, the current Hold piece, mirror the playfield, change game modes, and undo/redo. A convenient feature is the ability to undo/redo any change, done by clicking the onscreen left/right arrows or by pressing Ctrl+Z and Ctrl+Y respectively.
Training mode: the default no-gravity mode when the program is launched.
Cheese mode: automatically generates a 10-line continuous stream of garbage, similar to Jstris Cheese Race, in that garbage is inserted once a combo ends, or after the current garbage rows become low.
Four mode: automatically generates an infinite center 4-wide playfield. Has a loss condition when any combo is broken.
PC mode: a specialized mode for perfect clear training. The environment is identical to Training mode, but you may modify the 7-bag state to replicate those of bags after the nth PC.
A constant high but non-20G gravity is applied to simulate high-gravity situations such as TETR.IO's Blitz mode.
↑ At least, this is where the earliest messages on this project's Discord server are.
↑ "1.5.2's Release". GitHub. July 1, 2021. Retrieved .
four-tris GitHub page
four-tris Discord server
Retrieved from "https://tetris.wiki/index.php?title=Four-tris&oldid=23921"
Prepare Data for Linear Mixed-Effects Models - MATLAB & Simulink
Tables and Dataset Arrays
Relation of Matrix Form to Tables and Dataset Arrays
To fit a linear mixed-effects model, you must store your data in a table or dataset array. In your table or dataset array, you must have a column for each variable, including the response variable. More specifically, the table or dataset array, say tbl, must contain the following:
A response variable y
Predictor variables Xj, which can be continuous or grouping variables
Grouping variables g1, g2, ..., gR, where the grouping variables in Xj and gr can be categorical, logical, a character array, a string array, or a cell array of character vectors, r = 1, 2, ..., R.
You must organize your data so that each row represents an observation, and each row contains the values of the variables and the levels of the grouping variables corresponding to that observation. For example, if you have data from an experiment with four treatment options on five different types of individuals chosen randomly from a population of individuals (blocks), the table or dataset array must contain rows such as block 1, treatment 1, response y11.
Now, consider a split-plot experiment, where the effect of four different types of fertilizers on the yield of tomato plants is studied. The soil where the tomato plants are planted is divided into three blocks based on the soil type: sandy, silty, and loamy. Each block is divided into five plots, where five types of tomato plants (cherry, heirloom, grape, vine, and plum) are randomly assigned to these plots. Then, the tomato plants in the plots are divided into subplots, where each subplot is treated by one of the four fertilizers. The data from this experiment looks like:
'Sandy' 'Plum' 1 104
'Sandy' 'Cherry' 1 57
'Sandy' 'Vine' 3 99
'Sandy' 'Vine' 4 117
'Silty' 'Plum' 1 120
'Loamy' 'Vine' 3 111
You must specify the model you want to fit using the formula input argument to fitlme.
In general, a formula for model specification is a character vector or string scalar of the form 'y ~ terms'. For linear mixed-effects models, this formula is in the form 'y ~ fixed + (random1|grouping1) + ... + (randomR|groupingR)', where fixed contains the fixed-effects terms and random1, ..., randomR contain the random-effects terms. For example, for the previous fertilizer experiment, consider the following mixed-effects model {y}_{imjk}={\beta }_{0}+\sum _{m=2}^{4}{\beta }_{1m}I{\left[F\right]}_{im}+\sum _{j=2}^{5}{\beta }_{2j}I{\left[T\right]}_{ij}+{b}_{0k}{S}_{k}+{b}_{0jk}{\left(S*T\right)}_{jk}+{\epsilon }_{imjk}, where i = 1, 2, ..., 60, the index m corresponds to the fertilizer types, j corresponds to the tomato types, and k = 1, 2, 3 corresponds to the blocks (soil). Sk represents the kth soil type, and I[F]im is the dummy variable representing level m of the fertilizer. Similarly, I[T]ij is the dummy variable representing the level j of the tomato type. You can fit this model using the formula 'Yield ~ 1 + Fertilizer + Tomato + (1|Soil)+(1|Soil:Tomato)'. For detailed information on how to specify your model using formula, see Relationship Between Formula and Design Matrices. If you cannot easily describe your model using a formula, you can create design matrices to define the fixed and random effects, and fit the model using fitlmematrix(X,y,Z,G). You must create your design matrices as follows. Fixed-effects and random-effects design matrices X and Z: Enter a column of 1s for the intercept using ones(n,1), where n is the total number of observations. If X1 is a continuous variable, then enter X1 as it is in a separate column. If X1 is a categorical variable with m levels, then there must be m – 1 dummy variables for m – 1 levels of X1 in X. For example, consider an experiment where you want to study the impact of quality of raw materials from four different providers on the productivity of a production line. 
If you fit a linear mixed-effects model with intercept and provider as the fixed-effects terms, intercept is the random-effects term, and you use reference contrasts coding, then you must construct your fixed- and random-effects design matrices as follows. D = dummyvar(provider); % Create dummy variables X = [ones(n,1) D(:,2) D(:,3) D(:,4)]; Z = [ones(n,1)]; Because reference contrast coding uses the first provider as the reference, and the model has an intercept, you must use the dummy variables for only the last three providers. If there is an interaction term of predictor variables X1 and X2, then you must enter a column that you form by elementwise product of the vectors X1 and X2. For example, if you want to fit a model, where there is an intercept, a continuous treatment factor, a continuous time factor, and their interaction as the fixed-effects in a longitudinal study, and time is the random-effects term, then your fixed- and random-effects design matrices should look like X = [ones(n,1),treatment,time,treatment.*time]; y = response; Z = [time]; Grouping variables G: There is one column for each grouping variable and a column of elementwise product of the grouping variables in case of a nesting. For example, if you want to group plots (plot) within blocks (block), then you must add a column of elementwise product of plot by block. More specifically, if you want to fit a model where there is intercept and a continuous treatment factor as the fixed-effects in a split-block experiment, and the intercept and treatment are grouped by the plots nested within blocks, then the design matrices should look like this. X = [ones(n,1),treatment]; Z = [ones(n,1),treatment]; G = [block.*plot]; Suppose in the earlier quality of raw materials example, the raw materials arrive in bulks, and the bulks are nested within providers. 
If you want to fit a linear mixed-effects model, where intercept is grouped by the bulks within providers, then your design matrices should look like this. D = dummyvar(provider); Z = ones(n,1); G = [provider.*bulks]; In the earlier longitudinal study example, if you want to add random effects for intercept and time grouped by subjects that participated in the study, then your design matrices should look like X = [ones(n,1),treatment,time, treatment.*time]; Z = [ones(n,1),time]; G = subject; fitlme(tbl,formula) and fitlmematrix(X,y,Z,G) are equivalent in functionality, such that y is the n-by-1 response vector. X is an n-by-p fixed-effects design matrix. fitlme constructs this from the expression fixed in formula. Z is an R-by-1 cell array with Z{r} being an n-by-q(r) random-effects design matrix constructed from the rth expression in random in formula, r = 1, 2, ..., R. G is an R-by-1 cell array with G{r} being an n-by-1 grouping variable, gr, in formula with M(r) levels or groups. For example, if tbl is a table or dataset array containing the response variable y, the continuous variables X1 and X2, and the grouping variable g, then to fit a linear mixed-effects model that corresponds to the formula expression 'y ~ X1+ X2+ (X1*X2|g)' using fitlmematrix(X,y,Z,G) the input arguments must correspond to the following: y = tbl.y X = [ones(n,1), tbl.X1, tbl.X2] Z = [ones(n,1), tbl.X1, tbl.X2, tbl.X1.*tbl.X2] G = tbl.g
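The design-matrix rules described above (a column of ones for the intercept, m−1 reference-coded dummy columns for an m-level factor, and an elementwise product for an interaction) are easy to mirror in any language. The pure-Python sketch below uses made-up data and is only an illustration of the construction, not the fitlmematrix implementation.

```python
# Build fixed-effects design columns the way the text describes.
provider = [1, 2, 3, 4, 2, 1]            # categorical predictor, 4 levels
treatment = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]   # continuous predictor
time = [1, 2, 3, 1, 2, 3]                # continuous predictor
n = len(provider)

intercept = [1.0] * n
# Reference coding: level 1 is the reference, so dummies only for levels 2..4.
dummies = [[1.0 if p == level else 0.0 for p in provider]
           for level in (2, 3, 4)]
# Interaction term: elementwise product of the two continuous predictors.
interaction = [t * u for t, u in zip(treatment, time)]

# X for "intercept + provider": n rows, 1 + (4 - 1) columns.
X = [[intercept[i]] + [d[i] for d in dummies] for i in range(n)]
print(X[1])   # [1.0, 1.0, 0.0, 0.0] -> second observation, provider level 2
```

An observation from the reference level (provider 1) gets all-zero dummy columns, so its fitted value comes from the intercept alone, exactly as in the reference-contrast coding the text describes.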
Free product of associative algebras
In algebra, the free product (coproduct) of a family of associative algebras {\displaystyle A_{i},i\in I} over a commutative ring R is the associative algebra over R that is, roughly, defined by the generators and the relations of the {\displaystyle A_{i}} 's. The free product of two algebras A, B is denoted by A ∗ B. The notion is a ring-theoretic analog of a free product of groups. In the category of commutative R-algebras, the free product of two algebras (in that category) is their tensor product.
We first define the free product of two algebras. Let A, B be two algebras over a commutative ring R. Consider their tensor algebra, the direct sum of all possible finite tensor products of A, B; explicitly, {\displaystyle T=\bigoplus _{n=0}^{\infty }T_{n}} where {\displaystyle T_{0}=R,\,T_{1}=A\oplus B,\,T_{2}=(A\otimes A)\oplus (A\otimes B)\oplus (B\otimes A)\oplus (B\otimes B),\,T_{3}=\cdots ,\dots } Then the free product is the quotient {\displaystyle A*B=T/I} where I is the two-sided ideal generated by elements of the form {\displaystyle a\otimes a'-aa',\,b\otimes b'-bb',\,1_{A}-1_{B}.} One then verifies that the universal property of the coproduct holds for this construction.
K. I. Beidar, W. S. Martindale and A. V. Mikhalev, Rings with generalized identities, Section 1.4. This reference was mentioned in "Coproduct in the category of (noncommutative) associative algebras". Stack Exchange. May 9, 2012.
"How to construct the coproduct of two (non-commutative) rings". Stack Exchange. January 3, 2014.
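The universal property alluded to above is the standard coproduct property; it can be stated explicitly as follows, writing i_A and i_B for the canonical inclusions of A and B into A ∗ B:

```latex
% Universal property of the free product A * B:
% for every R-algebra C and every pair of R-algebra homomorphisms
% f : A -> C and g : B -> C, there is a unique homomorphism
% h : A * B -> C compatible with the inclusions.
\[
\forall\, f\colon A \to C,\; g\colon B \to C
\quad \exists!\; h\colon A * B \to C
\quad \text{such that} \quad
h \circ i_A = f, \qquad h \circ i_B = g.
\]
```

Concretely, h is defined on the tensor algebra T by multiplying out images of the factors, and the ideal I is exactly what must be killed for h to be a well-defined algebra homomorphism.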
Mr Ransome1 has in the most magnificent manner, & owing, he says, to all that he has heard you say of me, presented me with a complete series of the Ipswich likenesses.2 In consequence I have you in duplicate3 & wish therefore to return one copy to you.— Will you tell me where I can have it left for you in London.— I have some copies of my own likeness, which you no doubt have in the series, otherwise I shd of course have been proud to have sent you one.— My wife says she never saw me with the smile, as engraved, but that otherwise that it is very like.— My said wife has been occupied these two days past in producing a fourth boy Darwin & seventh child!4 He is to be called Leonard,—a name I hold in affection from Cambridge & other associations.—5 I was so bold during my wifes confinement which are always rapid, as to administer Chloroform, before the Dr. came & I kept her in a state of insensibility of 1 & \frac{1}{2} hours & she knew nothing from first pain till she heard that the child was born.— It is the grandest & most blessed of discoveries. I hope Mrs & Miss Henslow are well I am at work again & believe I have succeeded in persuading our Clodhoppers to be enrolled in a Club.—6 George Ransome, secretary of the Ipswich Museum. The collection of sixty lithograph portraits by Thomas Herbert Maguire, made for the Ipswich Museum. See letter to George Ransome, 25 October [1849], n. 1. See letter to J. S. Henslow, [7 October 1849]. Leonard was the eighth child born to the Darwins, but Mary Eleanor, their third child, born in September 1842, had lived for only three weeks. CD refers to Leonard Jenyns, Henslow’s brother-in-law, with whom he made frequent entomological expeditions during his Cambridge undergraduate days, and to Henslow’s oldest child, also called Leonard. This refers to the establishment of the Down Friendly Society, a mutual insurance and benefit club, of which CD served as treasurer for thirty years (LL 1: 142).
if - select a result given a condition
if(condition, trueresult, falseresult)
The if command tests the given condition and returns trueresult if the condition is true or nonzero. Otherwise, falseresult is returned. Note: because if is a recognized keyword in Maple, it must be enclosed in left single quotes (`if`).
MapleTA:-Builtin:-`if`(1 < 2, "Red", "Blue")
"Red"
The MapleTA[Builtin][if] command was introduced in Maple 18.
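The behaviour described above (a true or nonzero condition selects the first result, anything else selects the second) mirrors a conditional expression in most languages; for instance, a minimal Python analog:

```python
def select(condition, trueresult, falseresult):
    """Mimic the MapleTA Builtin `if`: a true or nonzero condition
    returns trueresult; anything else returns falseresult."""
    return trueresult if condition else falseresult

print(select(1 < 2, "Red", "Blue"))   # Red
print(select(0, "Red", "Blue"))       # Blue  (zero counts as false)
```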
Unobstructedness of filling secants and the Gruson–Peskine general projection theorem
15 March 2015
We prove an unobstructedness result for deformations of subvarieties constrained by intersections with another fixed subvariety. We deduce smoothness and expected-dimension results for multiple-point loci of generic projections, mainly from a point or a line, or for fibers of embedding dimension 2 or less.
Ziv Ran. "Unobstructedness of filling secants and the Gruson–Peskine general projection theorem." Duke Math. J. 164 (4) 697 - 722, 15 March 2015. https://doi.org/10.1215/00127094-2881483
Keywords: generic projections, multiple points, multisecant lines, rational curves
HoughLine - Maple Help
HoughLine - detect lines using the Hough transform
HoughLine(img, d_rho, d_theta, threshold)
rhorange : list, contains minimum rho and maximum rho
thetarange : list, contains minimum theta and maximum theta
The HoughLine command detects the lines in img, and returns the detected lines in an m × 2 Array, lines. A line given by the equation \mathrm{\rho }=x⁢\mathrm{cos}⁡\left(\mathrm{\theta }\right)+y⁢\mathrm{sin}⁡\left(\mathrm{\theta }\right) is represented by the pair of values \left(\mathrm{\rho },\mathrm{\theta }\right). This pair then forms a row of lines. The equation takes the origin of the coordinate system to be at the top left corner of the image, contrary to e.g. the ImageTools[Draw] subpackage, which takes the origin at the lower left corner. The value \mathrm{\rho } (in the first column) can be understood as the signed distance from the origin to the closest point on the line, the sign being negative if the line passes above the origin and positive otherwise. The value \mathrm{\theta } (in the second column) can be understood as the counterclockwise angle in radians of a normal to the line, starting from horizontal. By default, the values of \mathrm{\theta } are in the range 0 to \mathrm{\pi }. The implementation attempts to find such lines by varying \mathrm{\rho } and \mathrm{\theta } over a grid of values. The required arguments d_rho and d_theta specify the step size for \mathrm{\rho } and \mathrm{\theta }, respectively, in this grid. For each line, the implementation counts the number of nonblack pixels. If this is greater than threshold, the line is included in the result. img should be a binary image, so calling EdgeDetect and Threshold is usually necessary before calling HoughLine. The option rhorange specifies the range of \mathrm{\rho }, so only the lines with \mathrm{\rho } values in this range are returned.
The default is from -\mathrm{upperbound}⁡\left(\mathrm{img},1\right)-\mathrm{upperbound}⁡\left(\mathrm{img},2\right) to \mathrm{upperbound}⁡\left(\mathrm{img},1\right)+\mathrm{upperbound}⁡\left(\mathrm{img},2\right). The option thetarange specifies the range of \mathrm{\theta }, so only the lines with \mathrm{\theta } values in this range are returned. The default is 0 to \mathrm{\pi }.
with(ImageTools):
img := Read(cat(kernelopts(datadir), "/images/Maplesoft.png"))
edge := Threshold(EdgeDetect(img), 1.5)
Embed(edge)
line := HoughLine(edge, 1, Pi/180, 250)
local i, nRows, nCols, rho, theta, pixel, ctheta, stheta, xMid, yMid, x1, y1, x2, y2;
pixel := evalf('sqrt'(nRows^2 + nCols^2));
ctheta := cos(theta); stheta := sin(theta);
xMid := rho * ctheta; yMid := rho * stheta;
x1 := xMid + pixel * (-stheta); y1 := yMid + pixel * ctheta;
x2 := xMid - pixel * (-stheta); y2 := yMid - pixel * ctheta;
Draw:-Line(img, x1, nRows - y1, x2, nRows - y2, [255, 0, 0]):
DrawLine(img, line)
Embed(img)
The ImageTools[HoughLine] command was introduced in Maple 2020.
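The grid-accumulation procedure described above (quantize ρ and θ, count edge pixels voting for each line, keep counts at or above the threshold) can be sketched outside Maple. This Python toy illustrates the voting idea on a tiny binary image; it is not the ImageTools implementation.

```python
import math

def hough_lines(img, d_rho, d_theta, threshold):
    """Toy Hough transform. img is a 2-D list where nonzero = edge pixel.
    Returns [(rho, theta), ...] for lines whose vote count reaches the
    threshold, with rho = x*cos(theta) + y*sin(theta)."""
    votes = {}
    n_theta = int(round(math.pi / d_theta))      # theta grid covers [0, pi)
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if not v:
                continue
            for k in range(n_theta):             # each edge pixel votes once
                theta = k * d_theta              # per theta bin
                rho = x * math.cos(theta) + y * math.sin(theta)
                bin_ = (round(rho / d_rho), k)   # quantize rho
                votes[bin_] = votes.get(bin_, 0) + 1
    return sorted((r * d_rho, k * d_theta)
                  for (r, k), c in votes.items() if c >= threshold)

# A vertical line of 5 edge pixels at x = 2:
img = [[1 if x == 2 else 0 for x in range(5)] for _ in range(5)]
lines = hough_lines(img, 1.0, math.pi / 4, 5)
print(lines)   # [(2.0, 0.0)] -- rho = 2, theta = 0: the vertical line x = 2
```

All five pixels of the vertical line vote for the same (ρ, θ) bin, while every other bin collects at most two votes, so only the true line survives the threshold.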
{\displaystyle {\begin{bmatrix}1&9&-13\\20&5&-6\end{bmatrix}}}
{\displaystyle \mathbf {A} ={\begin{bmatrix}-1.3&0.6\\20.4&5.5\\9.7&-6.2\end{bmatrix}}.}
{\displaystyle {\begin{bmatrix}3&7&2\end{bmatrix}}} A matrix with one row, sometimes used to represent a vector
{\displaystyle {\begin{bmatrix}4\\1\\8\end{bmatrix}}} A matrix with one column, sometimes used to represent a vector
{\displaystyle {\begin{bmatrix}9&13&5\\1&11&7\\2&6&3\end{bmatrix}}} A matrix with the same number of rows and columns, sometimes used to represent a linear transformation from a vector space to itself, such as reflection, rotation, or shearing.
{\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}}={\begin{pmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{pmatrix}}=\left(a_{ij}\right)\in \mathbb {R} ^{m\times n}.}
The specifics of symbolic matrix notation vary widely, with some prevailing trends.
Matrices are usually symbolized using upper-case letters (such as A in the examples above), while the corresponding lower-case letters, with two subscript indices (e.g., a11, or a1,1), represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface upright (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style (as in the case of {\displaystyle {\underline {\underline {A}}}} ).
{\displaystyle \mathbf {A} ={\begin{bmatrix}4&-7&\color {red}{5}&0\\-2&0&11&8\\19&1&-3&12\end{bmatrix}}}
{\displaystyle \mathbf {A} ={\begin{bmatrix}0&-1&-2&-3\\1&0&-1&-2\\2&1&0&-1\end{bmatrix}}}
The set of all m-by-n real matrices is often denoted {\displaystyle {\mathcal {M}}(m,n),} or {\displaystyle {\mathcal {M}}_{m\times n}\mathbb {R} .} The set of all m-by-n matrices over another field or over a ring R is similarly denoted {\displaystyle {\mathcal {M}}(m,n,R),} or {\displaystyle {\mathcal {M}}_{m\times n}(R).} If m = n, that is, in the case of square matrices, one does not repeat the dimension: {\displaystyle {\mathcal {M}}(n,R),} or {\displaystyle {\mathcal {M}}_{n}(R).} [7] Often, {\displaystyle M} is used in place of {\displaystyle {\mathcal {M}}.}
Addition, scalar multiplication, and transposition
{\displaystyle {\begin{bmatrix}1&3&1\\1&0&0\end{bmatrix}}+{\begin{bmatrix}0&0&5\\7&5&0\end{bmatrix}}={\begin{bmatrix}1+0&3+0&1+5\\1+7&0+5&0+0\end{bmatrix}}={\begin{bmatrix}1&3&6\\8&5&0\end{bmatrix}}}
{\displaystyle 2\cdot {\begin{bmatrix}1&8&-3\\4&-2&5\end{bmatrix}}={\begin{bmatrix}2\cdot 1&2\cdot 8&2\cdot -3\\2\cdot 4&2\cdot -2&2\cdot 5\end{bmatrix}}={\begin{bmatrix}2&16&-6\\8&-4&10\end{bmatrix}}}
{\displaystyle {\begin{bmatrix}1&2&3\\0&-6&7\end{bmatrix}}^{\mathrm {T} }={\begin{bmatrix}1&0\\2&-6\\3&7\end{bmatrix}}}
{\displaystyle [\mathbf {AB} 
]_{i,j}=a_{i,1}b_{1,j}+a_{i,2}b_{2,j}+\cdots +a_{i,n}b_{n,j}=\sum _{r=1}^{n}a_{i,r}b_{r,j},}
{\displaystyle {\begin{aligned}{\begin{bmatrix}{\underline {2}}&{\underline {3}}&{\underline {4}}\\1&0&0\\\end{bmatrix}}{\begin{bmatrix}0&{\underline {1000}}\\1&{\underline {100}}\\0&{\underline {10}}\\\end{bmatrix}}&={\begin{bmatrix}3&{\underline {2340}}\\0&1000\\\end{bmatrix}}.\end{aligned}}}
{\displaystyle {\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}{\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}={\begin{bmatrix}0&1\\0&3\\\end{bmatrix}},}
{\displaystyle {\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}{\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}={\begin{bmatrix}3&4\\0&0\\\end{bmatrix}}.}
Submatrix
{\displaystyle \mathbf {A} ={\begin{bmatrix}1&\color {red}{2}&3&4\\5&\color {red}{6}&7&8\\\color {red}{9}&\color {red}{10}&\color {red}{11}&\color {red}{12}\end{bmatrix}}\rightarrow {\begin{bmatrix}1&3&4\\5&7&8\end{bmatrix}}.}
Linear equations
{\displaystyle \mathbf {Ax} =\mathbf {b} }
{\displaystyle {\begin{aligned}a_{1,1}x_{1}+a_{1,2}x_{2}+&\cdots +a_{1,n}x_{n}=b_{1}\\&\ \ \vdots \\a_{m,1}x_{1}+a_{m,2}x_{2}+&\cdots +a_{m,n}x_{n}=b_{m}\end{aligned}}}
{\displaystyle \mathbf {x} =\mathbf {A} ^{-1}\mathbf {b} }
{\displaystyle \mathbf {A} ={\begin{bmatrix}a&c\\b&d\end{bmatrix}}}
{\displaystyle {\begin{bmatrix}0\\0\end{bmatrix}},{\begin{bmatrix}1\\0\end{bmatrix}},{\begin{bmatrix}1\\1\end{bmatrix}}}
{\displaystyle {\begin{bmatrix}0\\1\end{bmatrix}}}
by π/6 = 30°
{\displaystyle {\begin{bmatrix}1&1.25\\0&1\end{bmatrix}}}
{\displaystyle {\begin{bmatrix}-1&0\\0&1\end{bmatrix}}}
{\displaystyle {\begin{bmatrix}{\frac {3}{2}}&0\\0&{\frac {2}{3}}\end{bmatrix}}}
{\displaystyle {\begin{bmatrix}{\frac {3}{2}}&0\\0&{\frac {3}{2}}\end{bmatrix}}}
{\displaystyle {\begin{bmatrix}\cos \left({\frac {\pi }{6}}\right)&-\sin \left({\frac {\pi }{6}}\right)\\\sin \left({\frac {\pi }{6}}\right)&\cos \left({\frac {\pi }{6}}\right)\end{bmatrix}}}
Square matrix
{\displaystyle 
Square matrix

Diagonal and triangular matrix
Examples of a diagonal, a lower triangular, and an upper triangular matrix:
$$\begin{bmatrix}a_{11}&0&0\\0&a_{22}&0\\0&0&a_{33}\end{bmatrix},\qquad\begin{bmatrix}a_{11}&0&0\\a_{21}&a_{22}&0\\a_{31}&a_{32}&a_{33}\end{bmatrix},\qquad\begin{bmatrix}a_{11}&a_{12}&a_{13}\\0&a_{22}&a_{23}\\0&0&a_{33}\end{bmatrix}$$

Identity matrix
$$\mathbf{I}_{1}=\begin{bmatrix}1\end{bmatrix},\ \mathbf{I}_{2}=\begin{bmatrix}1&0\\0&1\end{bmatrix},\ \ldots,\ \mathbf{I}_{n}=\begin{bmatrix}1&0&\cdots&0\\0&1&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&1\end{bmatrix}$$

Symmetric or skew-symmetric matrix

Invertible matrix and its inverse

Definite matrix
The matrix $\begin{bmatrix}\frac{1}{4}&0\\0&1\end{bmatrix}$ is positive definite, with quadratic form $Q(x,y)=\frac{1}{4}x^{2}+y^{2}$; the matrix $\begin{bmatrix}\frac{1}{4}&0\\0&-\frac{1}{4}\end{bmatrix}$ is indefinite, with $Q(x,y)=\frac{1}{4}x^{2}-\frac{1}{4}y^{2}$.

Orthogonal matrix
An orthogonal matrix satisfies $\mathbf{A}^{\mathrm{T}}=\mathbf{A}^{-1}$, equivalently $\mathbf{A}^{\mathrm{T}}\mathbf{A}=\mathbf{A}\mathbf{A}^{\mathrm{T}}=\mathbf{I}_{n}.$

Main operations
The trace satisfies $\operatorname{tr}(\mathbf{AB})=\sum_{i=1}^{m}\sum_{j=1}^{n}a_{ij}b_{ji}=\operatorname{tr}(\mathbf{BA}),$ and the 2-by-2 determinant is $\det\begin{bmatrix}a&b\\c&d\end{bmatrix}=ad-bc.$

Eigenvalues and eigenvectors
An eigenvector v with eigenvalue λ satisfies $Av=\lambda v$; the eigenvalues are the roots of the characteristic equation $\det(\mathbf{A}-\lambda\mathbf{I})=0.$

Computational aspects
Main articles: Matrix decomposition, Matrix diagonalization, Gaussian elimination, and Bareiss algorithm

Abstract algebraic aspects and generalizations

Matrices with more general entries

Relationship to linear maps
A linear map f is represented by the matrix $(a_{i,j})$ with $f(\mathbf{v}_{j})=\sum_{i=1}^{m}a_{i,j}\mathbf{w}_{i}$ for $j=1,\ldots,n.$ These properties can be restated more naturally: the category of all matrices with entries in a field $k$ with multiplication as composition is equivalent to the category of finite-dimensional vector spaces and linear maps over this field.

Matrix groups

Infinite matrices
Column-finite and row-finite matrices over a ring R, indexed by $I\times I$, are denoted $\mathrm{CFM}_{I}(R)$ and $\mathrm{RFM}_{I}(R)$; they act on the free module $M=\bigoplus_{i\in I}R.$

Empty matrix

A complex number corresponds to a real 2-by-2 matrix via $a+ib\leftrightarrow\begin{bmatrix}a&-b\\b&a\end{bmatrix},$ and the adjacency matrix of a graph has the form, for example, $\begin{bmatrix}1&1&0\\1&0&1\\0&1&0\end{bmatrix}.$

Analysis and geometry
The Hessian matrix is $H(f)=\left[\frac{\partial^{2}f}{\partial x_{i}\,\partial x_{j}}\right]$ (for example $\begin{bmatrix}2&0\\0&-2\end{bmatrix}$ at a saddle point). It encodes information about the local growth behaviour of the function: given a critical point x = (x₁, ..., xₙ), that is, a point where the first partial derivatives $\partial f/\partial x_{i}$ vanish. The Jacobian matrix is $J(f)=\left[\frac{\partial f_{i}}{\partial x_{j}}\right]_{1\leq i\leq m,\,1\leq j\leq n}.$ Stochastic matrices, such as $\begin{bmatrix}0.7&0\\0.3&1\end{bmatrix}$ and $\begin{bmatrix}0.7&0.2\\0.3&0.8\end{bmatrix},$ have columns summing to 1.

Symmetries and transformations in physics

Linear combinations of quantum states

Geometrical optics
The determinant $a_{1}a_{2}\cdots a_{n}\prod_{i<j}(a_{j}-a_{i})$

Other historical usages of the word "matrix" in mathematics

See also
Gram–Schmidt process – Orthonormalization of a set of vectors
Tensor – A generalization of matrices with any number of indices

References
^ a b Weisstein, Eric W. "Matrix". MathWorld. Retrieved 2020-08-19.
^ "How to organize, add and multiply matrices – Bill Shillito". TED-Ed. Retrieved April 6, 2013.
^ a b "How to Multiply Matrices". mathsisfun.com. Retrieved 2020-08-19.
^ "Matrix | mathematics". Encyclopædia Britannica. Retrieved 2020-08-19.
^ Grcar, Joseph F. (2011). "John von Neumann's Analysis of Gaussian Elimination and the Origins of Modern Numerical Analysis". SIAM Review. 53 (4): 607–682. doi:10.1137/080734716. ISSN 0036-1445.
^ Šolin 2005, Ch. 2.5. See also stiffness method.
^ Merriam-Webster dictionary, Merriam-Webster, retrieved April 20, 2009.
Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1992), "LU Decomposition and Its Applications", Numerical Recipes in FORTRAN: The Art of Scientific Computing (2nd ed.), Cambridge University Press, pp. 34–42.
Cayley, Arthur (1889), The collected mathematical papers of Arthur Cayley, vol. I (1841–1853), Cambridge University Press, pp. 123–126.
Knobloch, Eberhard (1994), "From Gauss to Weierstrass: determinant theory and its historical evaluations", The Intersection of History and Mathematics, Science Networks Historical Studies, vol. 15, Basel: Birkhäuser, pp. 51–66. MR 1308079.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Matrix_(mathematics)&oldid=1084756822"
Phosphopantothenoylcysteine decarboxylase - Wikipedia
In enzymology, a phosphopantothenoylcysteine decarboxylase (EC 4.1.1.36) is an enzyme that catalyzes the chemical reaction

N-[(R)-4'-phosphopantothenoyl]-L-cysteine ⇌ pantotheine 4'-phosphate + CO2

Hence, this enzyme has one substrate, N-[(R)-4'-phosphopantothenoyl]-L-cysteine, and two products, pantotheine 4'-phosphate and CO2. This enzyme belongs to the family of lyases, specifically the carboxy-lyases, which cleave carbon–carbon bonds. The systematic name of this enzyme class is N-[(R)-4'-phosphopantothenoyl]-L-cysteine carboxy-lyase (pantotheine-4'-phosphate-forming). Other names in common use include 4-phosphopantotheoylcysteine decarboxylase, 4-phosphopantothenoyl-L-cysteine decarboxylase, PPC-decarboxylase, and N-[(R)-4'-phosphopantothenoyl]-L-cysteine carboxy-lyase. This enzyme participates in the biosynthesis of coenzyme A (CoA) from pantothenic acid. As of late 2007, three structures had been solved for this class of enzymes, with PDB accession codes 1MVL, 1MVN, and 1QZU.
Brown GM (1958). "Requirement of cytidine triphosphate for the biosynthesis of phosphopantetheine". J. Am. Chem. Soc. 80 (12): 3161. doi:10.1021/ja01545a062.
Brown GM (February 1959). "The metabolism of pantothenic acid". The Journal of Biological Chemistry. 234 (2): 370–8. PMID 13630913.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Phosphopantothenoylcysteine_decarboxylase&oldid=989195950"
tensor(deprecated)/raise - Maple Help
raise a covariant index
lower a contravariant index
raise(contravariant_metric_tensor, A, i1, i2, ...)
lower(covariant_metric_tensor, A, i1, i2, ...)
contravariant_metric_tensor - metric tensor used to raise the indices
covariant_metric_tensor - metric tensor used to lower the indices
A - tensor whose indices are to be raised/lowered
i1, ... - non-empty sequence of indices of A to raise/lower
Important: The tensor package has been deprecated. Use the superseding command DifferentialGeometry[Tensor][RaiseLowerIndices] instead.
The function call raise(con_met, A, 2, 3) computes the tensor A with indices 2 and 3 raised using the contravariant metric con_met. The function call lower(cov_met, A, 1, 4) computes the tensor A with indices 1 and 4 lowered using the covariant metric cov_met. Each index in the call to raise must be a valid covariant index of A. Each index in the call to lower must be a valid contravariant index of A. There must be at least one index given, and the number of indices cannot exceed the rank of A.
Simplification: These routines use the `tensor/prod/simp` routine for simplification purposes. The simplification routine is applied to each component of the result after it is computed. By default, `tensor/prod/simp` is initialized to the `tensor/simp` routine. It is recommended that the `tensor/prod/simp` routine be customized to suit the needs of the particular problem.
These functions are part of the tensor package, and so can be used in the form raise(..) / lower(..) only after performing the command with(tensor), or with(tensor, raise) / with(tensor, lower). These functions can always be accessed in the long form tensor[raise](..) / tensor[lower](..).
with(tensor):

covariant Euclidean 3-space metric in spherical-polar coordinates:

a := create([-1,-1], array([[1,0,0],[0,r^2,0],[0,0,r^2*sin(theta)^2]]))

    a := table([compts = [[1, 0, 0], [0, r^2, 0], [0, 0, r^2*sin(theta)^2]], index_char = [-1, -1]])

contravariant Euclidean 3-space metric in spherical-polar coordinates:

A := create([1,1], array([[1,0,0],[0,1/r^2,0],[0,0,1/(r^2*sin(theta)^2)]]))

    A := table([compts = [[1, 0, 0], [0, 1/r^2, 0], [0, 0, 1/(r^2*sin(theta)^2)]], index_char = [1, 1]])

create a mixed 2-tensor, raise one index, then lower the other:

T := create([1,-1], array([[w,x,0],[y,z,0],[0,y^2,x*y*w]]))

    T := table([compts = [[w, x, 0], [y, z, 0], [0, y^2, x*y*w]], index_char = [1, -1]])

raise(A, T, 2)

    table([compts = [[w, x/r^2, 0], [y, z/r^2, 0], [0, y^2/r^2, x*y*w/(r^2*sin(theta)^2)]], index_char = [1, 1]])

lower(a, T, 1)

    table([compts = [[w, x, 0], [r^2*y, r^2*z, 0], [0, r^2*sin(theta)^2*y^2, r^2*sin(theta)^2*x*y*w]], index_char = [-1, -1]])
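Raising and lowering indices are plain contractions with the metric. As a cross-check of the Maple example, here is a sketch of the same computation in NumPy (not part of the tensor package; the variable names and the numeric sample point r = 2, θ = 0.5 are my own):

```python
import numpy as np

r, theta = 2.0, 0.5
w, x, y, z = 1.0, 2.0, 3.0, 4.0

# Covariant Euclidean metric in spherical-polar coordinates, and its inverse
# (the contravariant metric).
g = np.diag([1.0, r**2, (r * np.sin(theta))**2])
g_inv = np.linalg.inv(g)

# The mixed 2-tensor T^i_j from the Maple example.
T = np.array([[w,   x,    0.0],
              [y,   z,    0.0],
              [0.0, y**2, x * y * w]])

# raise(A, T, 2): contract the covariant (second) index with g^{jk}.
T_raised = np.einsum('ij,jk->ik', T, g_inv)

# lower(a, T, 1): contract the contravariant (first) index with g_{ij}.
T_lowered = np.einsum('ij,jk->ik', g, T)

# Lowering the raised index recovers the original mixed tensor.
assert np.allclose(np.einsum('ij,jk->ik', T_raised, g), T)
```

At this sample point the entries match the symbolic Maple output: for instance `T_raised[2, 2]` equals x·y·w/(r² sin²θ) and `T_lowered[1, 1]` equals r²·z.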
Basic chronology in the biblical period
Past methods of dividing years
Past methods of numbering years
Leap months
Determining the new month in the Mishnaic period
The fixing of the calendar
Names of weekdays
Days of week of holidays
Justification for leap months
Characteristics of leap months
Anno Mundi
New year
Rosh Hashanah postponement rules
Deficient, regular, and complete years
Four gates
Other calendars
Karaite calendar
Samaritan calendar
The Qumran calendar
Other calendars used by ancient Jews
Astronomical calculations
Synodic month – the molad interval, 765433/25920 days
Seasonal drift
Implications for Jewish ritual
Worked example
Rectifying the Hebrew calendar
Calendar observance in Auschwitz
Usage in contemporary Israel
^ a b Avodah Zarah 9a, Soncino edition, footnote 4: "The Eras in use among Jews in Talmudic Times are: (a) ERA OF CONTRACTS [H] dating from the year 380 before the Destruction of the Second Temple (312–1 BCE) when, at the Battle of Gaza, Seleucus Nicator, one of the followers of Alexander the Great, gained dominion over Palestine. It is also termed Seleucid or Greek Era [H]. Its designation as Alexandrian Era connecting it with Alexander the Great (Maim. Yad, Gerushin 1, 27) is an anachronism, since Alexander died in 323 BCE—eleven years before this Era began (v. E. Mahler, Handbuch der judischen Chronologie, p. 145). This Era, which is first mentioned in Mac. I, 10, and was used by notaries or scribes for dating all civil contracts, was generally in vogue in eastern countries till the 16th cent., and was employed even in the 19th cent. among the Jews of Yemen, in South Arabia (Eben Saphir, Lyck, 1866, p. 62b). (b) THE ERA OF THE DESTRUCTION (of the Second Temple) [H], the year 1 of which corresponds to 381 of the Seleucid Era, and 69–70 of the Christian Era. This Era was mainly employed by the Rabbis and was in use in Palestine for several centuries, and even in the later Middle Ages documents were dated by it. One of the recently discovered Genizah documents bears the date 13 Tammuz 987 after the Destruction of the Temple—i.e., 917 C.E. (Op. cit. p. 152, also Marmorstein ZDMG, Vol. VI, p. 640). The difference between the two Eras as far as the tens and units are concerned is thus 20. If therefore a Tanna, say in the year 156 Era of Dest. (225 CE), while remembering, naturally, the century, is uncertain about the tens and units, he should ask the notary what year it is according to his—Seleucid—era. He will get the answer 536 (156 + 380), on adding 20 to which he would get 556, the last two figures giving him the year [1]56 of the Era of Destruction."
^ Mishneh Torah, Sanctification of the New Moon 1:2; quoted in Sanctification of the New Moon. Archived 2010-06-21 at the Wayback Machine. Translated from the Hebrew by Solomon Gandz; supplemented, introduced, and edited by Julian Obermann; with an astronomical commentary by Otto Neugebauer. Yale Judaica Series, Volume 11, New Haven: Yale University Press, 1956.
William Moses Feldman. Rabbinical Mathematics and Astronomy, 3rd edition, Sepher-Hermon Press, New York, 1978.
Edward M. Reingold and Nachum Dershowitz. Calendrical Calculations: The Millennium Edition. Cambridge University Press; 2nd edition (2001). ISBN 0-521-77752-6, pp. 723–730.
Date converters
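The molad interval named in the outline above, 765433/25920 days, is exactly 29 days, 12 hours, and 793 "parts" (with 1080 parts to the hour). A small sketch verifying the arithmetic:

```python
from fractions import Fraction

# The molad interval: 29 days, 12 hours, 793 parts,
# where 1080 parts = 1 hour, so a day has 24 * 1080 = 25920 parts.
PARTS_PER_HOUR = 1080
molad = 29 + Fraction(12 * PARTS_PER_HOUR + 793, 24 * PARTS_PER_HOUR)

print(molad)         # 765433/25920
print(float(molad))  # about 29.530594 days
```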
Wave Mechanics; Prince Louis de Broglie
Prince Louis de Broglie
In 1923, while still a graduate student at the University of Paris, Louis de Broglie published a brief note in the journal Comptes rendus containing an idea that was to revolutionize our understanding of the physical world at the most fundamental level. He had been troubled by a curious "contradiction" arising from Einstein's special theory of relativity. First, he assumed that there is always associated with a particle of mass m a periodic internal phenomenon of frequency ν. For a particle at rest, he equated the rest mass energy mc² to the energy of the quantum of the electromagnetic field hν:

    mc² = hν,

where h is Planck's constant and c is the speed of light. De Broglie noted that relativity theory predicts that, when such a particle is set in motion, its total relativistic energy will increase, tending to infinity as the speed of light is approached. Likewise, the period of the internal phenomenon assumed to be associated with the particle will also increase (due to time dilation). Since period and frequency are inversely related, a period increase is equivalent to a decrease of frequency and, hence, of the energy given by the quantum relation hν. It was this apparent incompatibility between the tendency of the relativistic energy to increase and the quantum energy to decrease that troubled de Broglie.¹ The manner in which de Broglie resolved this apparent contradiction is the subject of the famous 1923 Comptes rendus note [Comptes rendus de l'Académie des Sciences, vol. 177, pp. 507-510 (1923)]. The original note in French and an English translation are available here by kind permission of the Académie des Sciences and the Fondation Louis de Broglie. The assistance of Professor Sophie Papaefthymiou of the University of Paris in obtaining permission to post these materials is gratefully acknowledged.
1923 Comptes rendus Note De Broglie's 1923 Comptes rendus note is available here as follows: English translation as a web page, English translation in Adobe PDF format, and Facsimile of the original French in Adobe PDF format. The Adobe PDF files require that you have Adobe Acrobat Reader in order to view and print them. A free copy of Acrobat Reader can be downloaded from the Adobe web site. Phase Wave Animation For an illustration of the relationship between the phase wave proposed by de Broglie and the instantaneous state of the internal periodic phenomenon de Broglie assumed to be associated with a particle, see the accompanying animated graphic . De Broglie presented a more detailed exposition of the ideas contained in his 1923 note in the first chapter of his doctoral thesis Recherches sur la théorie des Quanta (University of Paris, 1924). An English translation can be found in "Phase Waves of Louis deBroglie", Am. J. Phys. vol. 40 no. 9, pp. 1315-1320, September, 1972. The ideas are also revisited in the first chapter of de Broglie's book Non-linear Wave Mechanics: A Causal Interpretation, Elsevier Publishing Company, 1960. 1 A consequence of de Broglie's reasoning is that a phase wave, often referred to as the "pilot" wave, appears to accompany the particle. This is made evident in de Broglie's 1923 Comptes rendus note. Yet, modern introductions to quantum mechanics often fail to emphasize that this phase wave arises as an inevitable consequence of de Broglie's assumption of the internal periodic phenomenon of the particle and the transformation laws of the special theory of relativity. A notable exception is "Introduction to Quantum Mechanics," A. P. French and Edwin F. Taylor, W. W. Norton & Co., pp. 55-62 (1978). Given de Broglie's assumptions, quantum mechanics, which is to say the study of the behavior and interpretation of the phase wave, is the study of an inherently relativistic phenomenon. 
In this sense, if it were not for relativistic effects, quantum (wave) mechanics would not exist! Yet, the phrase "non-relativistic quantum mechanics" is a commonplace. Clearly, this phrase should be understood to refer to the quantum mechanical description of particles moving at speeds very much less than the speed of light, but not to imply that relativistic effects are ever of no consequence in quantum mechanics. An example is provided by the orbits of the electron in the hydrogen atom. Even for the innermost orbits, the speed of the electron is very much less than the speed of light. While the motion of the electron is, therefore, "non-relativistic" in the sense that the relativistic corrections to the classically (non-quantum mechanical) predicted behavior of the electron would be inconsequential, relativistic effects in fact dominate. That is, the phase wave associated with the electron, which leads directly to the quantization of the allowed orbits, is, according to de Broglie's model, relativistic in origin. If not for the relativistic transformations that give rise to the phase wave, quantization of the orbits and, hence, the spectrum of the hydrogen atom would not occur. Clearly then, relativistic effects cannot be considered to be inconsequential in determining the electronic properties of the hydrogen atom, even though the speed of the orbiting electron is very much less than the speed of light. [return from footnote]
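As a rough numerical illustration of the rest-frequency relation mc² = hν discussed above (a sketch using rounded physical constants, not taken from the original page):

```python
# Evaluating de Broglie's relation m c^2 = h nu for an electron at rest,
# using rounded CODATA values for the constants.
m_e = 9.109e-31   # electron rest mass, kg
c = 2.998e8       # speed of light, m/s
h = 6.626e-34     # Planck constant, J*s

nu = m_e * c**2 / h   # internal frequency associated with the electron, Hz
print(f"nu = {nu:.3e} Hz")
```

The result is on the order of 10²⁰ Hz, an enormously rapid internal oscillation compared with any directly observable electronic frequency.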
Superposition explained by QuTech Academy
A typical example visualizing superposition is the double-slit experiment. This experiment is explained in the following video:
The double-slit experiment explained by QuTech Academy
Qubits can be in a superposition of both the basis states |0⟩ and |1⟩. When a qubit is measured (to be more precise: only observables can be measured), the qubit will collapse to one of its eigenstates and the measured value will reflect that state. For example, when a qubit is in a superposition state of equal weights, a measurement will make it collapse to one of its two basis states |0⟩ and |1⟩ with an equal probability of 50%. |0⟩ is the state that, when measured and therefore collapsed, will always give the result 0. Similarly, |1⟩ will always give the result 1. Quantum superposition is fundamentally different from superposing classical waves. A quantum computer consisting of n qubits can exist in a superposition of 2^n states: from |000...0⟩ to |111...1⟩. In contrast, playing n musical sounds with all different frequencies can only give a superposition of n frequencies. Adding classical waves scales linearly, whereas the superposition of quantum states is exponential. One of the other counter-intuitive phenomena in quantum physics is entanglement. A pair or group of particles is entangled when the quantum state of each particle cannot be described independently of the quantum state of the other particle(s). The quantum state of the system as a whole can be described; it is in a definite state, although the parts of the system are not.
Entanglement explained by QuTech Academy
When two qubits are entangled there exists a special connection between them. The entanglement will become clear from the results of measurements.
The outcome of the measurements on the individual qubits could be 0 or 1. However, the outcome of the measurement on one qubit will always be correlated to the measurement on the other qubit. This is always the case, even if the particles are separated from each other by a large distance. Examples of such states are the Bell states. For example, two particles are created in such a way that the total spin of the system is zero. If the spin of one of the particles is measured on a certain axis and found to be counterclockwise, then it is guaranteed that a measurement of the spin of the other particle (along the same axis) will show the spin to be clockwise. This seems strange, because it appears that one of the entangled particles "feels" that a measurement is performed on the other entangled particle and "knows" what the outcome should be, but this is not the case. This happens without any information exchange between the entangled particles. They could even be billions of miles away from each other and this entanglement would still be present.
"Einstein was confused, not the quantum theory." - Stephen Hawking
A common misunderstanding is that entanglement could be used to instantaneously send information from one point to another. This is not possible because, although it is possible to know the state of the other particle when measuring one, the measurement results of the individual particles are random. There is no way to predetermine the individual result, so it is not possible to send a message in this way. The fact that qubits can be entangled makes a quantum computer more powerful than a classical computer. With the information stored in superposition, some problems can be solved exponentially faster.
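The measurement correlations described above can be simulated classically for the Bell state (|00⟩ + |11⟩)/√2. A minimal sketch (not QuTech material) that samples joint measurement outcomes with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bell state (|00> + |11>)/sqrt(2), written as amplitudes over the
# computational basis states 00, 01, 10, 11.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(bell) ** 2   # Born rule: outcome probabilities

# Sample 10,000 joint measurements of both qubits.
outcomes = rng.choice(4, size=10_000, p=probs)
first_bits = (outcomes >> 1) & 1
second_bits = outcomes & 1

# Each bit on its own is uniformly random (mean near 0.5),
# yet the two bits always agree -- the correlation described above.
print(first_bits.mean())
assert (first_bits == second_bits).all()
```

Note that each party's own results are an unbiased coin flip; the correlation is only visible when the two records are compared, which is why no message can be sent this way.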
On the family of pentagonal curves of genus 6 and associated modular forms on the ball
January 2003
In this article we study the inverse of the period map for the family F of complex algebraic curves of genus 6 equipped with an automorphism of order 5 with 5 fixed points. This is a family with 2 parameters, and is fibred over a Del Pezzo surface. Our period map is essentially the same as the Schwarz map for the Appell hypergeometric differential equation F₁(3/5, 3/5, 2/5, 6/5). This differential equation and the family F were studied by G. Shimura (1964), T. Terada (1983, 1985), P. Deligne and G. D. Mostow (1986), and T. Yamazaki and M. Yoshida (1984). Based on their results we give a representation of the inverse of the period map in terms of Riemann theta constants. This is the first variant of the work of H. Shiga (1981) and K. Matsumoto (1989, 2000) in the co-compact case.
Keywords: algebraic curve, configuration space, theta function
Kenji KOIKE. "On the family of pentagonal curves of genus 6 and associated modular forms on the ball." J. Math. Soc. Japan 55 (1): 165-196, January 2003. https://doi.org/10.2969/jmsj/1196890848
Compute divergence of vector field - MATLAB divergence - MathWorks América Latina
Divergence of Vector Volume Data as Slice Planes
Divergence of 2-D Vector Field
Numerical Divergence
Compute divergence of vector field
div = divergence(X,Y,Z,Fx,Fy,Fz)
div = divergence(Fx,Fy,Fz)
div = divergence(X,Y,Fx,Fy)
div = divergence(Fx,Fy)
div = divergence(X,Y,Z,Fx,Fy,Fz) computes the numerical divergence of a 3-D vector field with vector components Fx, Fy, and Fz. The arrays X, Y, and Z, which define the coordinates for the vector components Fx, Fy, and Fz, must be monotonic, but do not need to be uniformly spaced. X, Y, and Z must be 3-D arrays of the same size, which can be produced by meshgrid.
div = divergence(Fx,Fy,Fz) assumes a default grid of sample points. The default grid points X, Y, and Z are determined by the expression [X,Y,Z] = meshgrid(1:n,1:m,1:p), where [m,n,p] = size(Fx). Use this syntax when you want to conserve memory and are not concerned about the absolute distances between points.
div = divergence(X,Y,Fx,Fy) computes the numerical divergence of a 2-D vector field with vector components Fx and Fy. The matrices X and Y, which define the coordinates for Fx and Fy, must be monotonic, but do not need to be uniformly spaced. X and Y must be 2-D matrices of the same size, which can be produced by meshgrid.
div = divergence(Fx,Fy) assumes a default grid of sample points. The default grid points X and Y are determined by the expression [X,Y] = meshgrid(1:n,1:m), where [m,n] = size(Fx). Use this syntax when you want to conserve memory and are not concerned about the absolute distances between points.
Load a 3-D vector field data set that represents a wind flow. The data set contains arrays of size 35-by-41-by-15. Compute the numerical divergence of the vector field. Display the divergence of vector volume data as slice planes. Show the divergence at the yz-planes with x = 90 and x = 134, at the xz-plane with y = 59, and at the xy-plane with z = 0. Use color to indicate divergence.
h = slice(x,y,z,div,[90 134],59,0);
set([h(1),h(2)],'ambientstrength',0.6);
Specify 2-D coordinates and a vector field.
[x,y] = meshgrid(-8:2:8,-8:2:8);
Fx = 200 - (x.^2 + y.^2);
Fy = 200 - (x.^2 + y.^2);
Plot the vector field components Fx and Fy.
quiver(x,y,Fx,Fy)
Find the numerical divergence of the 2-D vector field. Plot the contour of the divergence.
D = divergence(x,y,Fx,Fy);
contour(x,y,D,'ShowText','on')
X, Y, Z — Input coordinates, specified as matrices or 3-D arrays. For 2-D vector fields, X and Y must be 2-D matrices of the same size, and that size can be no smaller than 2-by-2. For 3-D vector fields, X, Y, and Z must be 3-D arrays of the same size, and that size can be no smaller than 2-by-2-by-2.
Fx, Fy, Fz — Vector field components at the input coordinates, specified as matrices or 3-D arrays. Fx, Fy, and Fz must be the same size as X, Y, and Z.
The numerical divergence of a vector field is a way to estimate the values of the divergence using the known values of the vector field at certain points. For a 3-D vector field of three variables $F(x,y,z)=F_x(x,y,z)\,\hat{e}_x+F_y(x,y,z)\,\hat{e}_y+F_z(x,y,z)\,\hat{e}_z$, the definition of the divergence of F is
$$\operatorname{div}F=\nabla\cdot F=\frac{\partial F_x}{\partial x}+\frac{\partial F_y}{\partial y}+\frac{\partial F_z}{\partial z}.$$
For a 2-D vector field of two variables $F(x,y)=F_x(x,y)\,\hat{e}_x+F_y(x,y)\,\hat{e}_y$, the divergence is
$$\operatorname{div}F=\nabla\cdot F=\frac{\partial F_x}{\partial x}+\frac{\partial F_y}{\partial y}.$$
divergence computes the partial derivatives in its definition by using finite differences.
For interior data points, the partial derivatives are calculated using central difference. For data points along the edges, the partial derivatives are calculated using single-sided (forward) difference. For example, consider a 2-D vector field F that is represented by the matrices Fx and Fy at locations X and Y with size m-by-n. The locations are 2-D grids created by [X,Y] = meshgrid(x,y), where x is a vector of length n and y is a vector of length m. divergence then computes the partial derivatives ∂Fx / ∂x and ∂Fy / ∂y as dFx(:,i) = (Fx(:,i+1) - Fx(:,i-1))/(x(i+1) - x(i-1)) and dFy(j,:) = (Fy(j+1,:) - Fy(j-1,:))/(y(j+1) - y(j-1)) for interior data points. dFx(:,1) = (Fx(:,2) - Fx(:,1))/(x(2) - x(1)) and dFx(:,n) = (Fx(:,n) - Fx(:,n-1))/(x(n) - x(n-1)) for data points at the left and right edges. dFy(1,:) = (Fy(2,:) - Fy(1,:))/(y(2) - y(1)) and dFy(m,:) = (Fy(m,:) - Fy(m-1,:))/(y(m) - y(m-1)) for data points at the top and bottom edges. The numerical divergence of the vector field is equal to div = dFx + dFy. streamtube | gradient | curl | isosurface
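The finite-difference scheme just described (central differences in the interior, one-sided differences at the edges) is straightforward to reimplement. A sketch in Python with NumPy (not the MATLAB function itself; the function name is my own), checked against a field whose divergence central differences reproduce exactly:

```python
import numpy as np

def divergence2d(X, Y, Fx, Fy):
    """Numerical divergence dFx/dx + dFy/dy on meshgrid coordinates,
    using central differences for interior points and one-sided
    differences at the edges, as described above."""
    dFx = np.empty_like(Fx)
    dFy = np.empty_like(Fy)

    # Central differences for interior points (x varies along axis 1,
    # y along axis 0, matching meshgrid's default layout).
    dFx[:, 1:-1] = (Fx[:, 2:] - Fx[:, :-2]) / (X[:, 2:] - X[:, :-2])
    dFy[1:-1, :] = (Fy[2:, :] - Fy[:-2, :]) / (Y[2:, :] - Y[:-2, :])

    # One-sided differences at the edges.
    dFx[:, 0] = (Fx[:, 1] - Fx[:, 0]) / (X[:, 1] - X[:, 0])
    dFx[:, -1] = (Fx[:, -1] - Fx[:, -2]) / (X[:, -1] - X[:, -2])
    dFy[0, :] = (Fy[1, :] - Fy[0, :]) / (Y[1, :] - Y[0, :])
    dFy[-1, :] = (Fy[-1, :] - Fy[-2, :]) / (Y[-1, :] - Y[-2, :])

    return dFx + dFy

X, Y = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
Fx, Fy = X**2, Y**2   # div F = 2x + 2y; exact under central differences
div = divergence2d(X, Y, Fx, Fy)
assert np.allclose(div[1:-1, 1:-1], (2 * X + 2 * Y)[1:-1, 1:-1])
```

For the quadratic test field, the central-difference quotient of x² is exactly 2x, so interior values match the analytic divergence to machine precision; only the one-sided edge values carry a discretization error.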
FirstChild - Maple Help
FirstChild — extract the first child node of an XML tree
SecondChild — extract the second child node of an XML tree
ThirdChild — extract the third child node of an XML tree
LastChild — extract the last child node of an XML tree
FirstChild(xmlTree)
SecondChild(xmlTree)
ThirdChild(xmlTree)
LastChild(xmlTree)
Each of these routines accesses a particular child of the given XML element xmlTree. The returned expression is either of type string (in the case of a plain text child node) or an XML tree data structure (when the child node has a tree structure of its own). Each of these procedures is a specialization of the GetChild routine for common special cases. For instance, SecondChild retrieves the second content element of xmlTree if there are at least two such children. Otherwise, NULL is returned.
with(XMLTools):
x := XMLElement("a", ["colour" = "red"], XMLElement("b", [], "foo"), XMLElement("c", [], "bar"), XMLElement("d", [], "baz")):
Print(x)
Print(FirstChild(x))
Print(SecondChild(x))
Print(ThirdChild(x))
Print(LastChild(x))
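For comparison only (this is not Maple code), the same child-access pattern can be shown with Python's standard xml.etree module, where the children of an element are simply indexable; unlike Maple's routines, an out-of-range index raises IndexError rather than returning NULL.

```python
import xml.etree.ElementTree as ET

# The same tree as the Maple example above: <a colour="red"> with
# three element children <b>, <c>, <d>.
x = ET.fromstring('<a colour="red"><b>foo</b><c>bar</c><d>baz</d></a>')

first_child = x[0]    # analogous to FirstChild(x)
second_child = x[1]   # analogous to SecondChild(x)
last_child = x[-1]    # analogous to LastChild(x)
```

Each access returns an Element object whose tag and text correspond to the child node's name and plain-text content.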
The rotation operators are defined as:

R_x(\theta) = \begin{pmatrix} \cos\left(\frac{\theta}{2}\right) & -i\sin\left(\frac{\theta}{2}\right) \\ -i\sin\left(\frac{\theta}{2}\right) & \cos\left(\frac{\theta}{2}\right) \end{pmatrix}

R_y(\theta) = \begin{pmatrix} \cos\left(\frac{\theta}{2}\right) & -\sin\left(\frac{\theta}{2}\right) \\ \sin\left(\frac{\theta}{2}\right) & \cos\left(\frac{\theta}{2}\right) \end{pmatrix}

R_z(\theta) = \begin{pmatrix} e^{-i\frac{\theta}{2}} & 0 \\ 0 & e^{i\frac{\theta}{2}} \end{pmatrix}

The rotation operators are generated by exponentiation of the Pauli matrices according to

\exp(iAx) = \cos(x)\,I + i\sin(x)\,A

where A is one of the three Pauli matrices. Note that the R_z rotation operator can also be expressed as

\begin{pmatrix} e^{i\frac{\theta}{2}} & 0 \\ 0 & e^{i\frac{\theta}{2}} \end{pmatrix} \begin{pmatrix} e^{-i\frac{\theta}{2}} & 0 \\ 0 & e^{i\frac{\theta}{2}} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\theta} \end{pmatrix}

which differs from the definition above by a global phase only.
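The exponentiation identity above can be checked numerically: with x = -θ/2, R(θ) = exp(-iθA/2) = cos(θ/2)I - i sin(θ/2)A. The sketch below builds the 2×2 rotation matrices from the Pauli matrices in pure Python (an illustration; the function name is ours).

```python
import math

# Identity and the three Pauli matrices as nested lists of complex numbers.
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def rotation(A, theta):
    """R(theta) = exp(-i*theta*A/2) = cos(theta/2) I - i sin(theta/2) A,
    using the exponentiation identity above with x = -theta/2."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c * I2[r][k] - 1j * s * A[r][k] for k in range(2)]
            for r in range(2)]

# R_z(pi) should equal diag(e^{-i pi/2}, e^{i pi/2}) = diag(-i, i).
Rz = rotation(Z, math.pi)
# R_x(pi) should equal [[0, -i], [-i, 0]].
Rx = rotation(X, math.pi)
```

Both results agree entry-by-entry with the matrix definitions given above, confirming that the exponential generates the stated forms.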
Pure states, nonnegative polynomials and sums of squares | EMS Press
In recent years, much work has been devoted to a systematic study of polynomial identities certifying strict or non-strict positivity of a polynomial f on a basic closed set K\subset\mathbb{R}^n. The interest in such identities originates not least from their importance in polynomial optimization. The majority of the important results requires the archimedean condition, which implies that K has to be compact. This paper introduces the technique of pure states into commutative algebra. We show that this technique allows an approach to most of the recent archimedean Stellensätze that is considerably easier and more conceptual than the previous proofs. In particular, we reprove and strengthen some of the most important results from the last years. In addition, we establish several such results which are entirely new. They are the first that allow f to have arbitrary, not necessarily discrete, zeros in K.
Sabine Burgdorf, Claus Scheiderer, Markus Schweighofer, Pure states, nonnegative polynomials and sums of squares. Comment. Math. Helv. 87 (2012), no. 1, pp. 113–140
Early versions of Minecraft received critical acclaim, with critics praising the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay.[238][239][240] Critics have praised Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay.[225] Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable".[18] Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building.[225] The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends".[18] Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences".[232] It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands.
[241] In September 2019, The Guardian classified Minecraft as the best video game of (the first two decades of) the 21st century,[305] and in November 2019 Polygon called the game the "most important game of the decade" in its 2010s "decade in review".[306] In December 2019, Forbes gave Minecraft a special mention in a list of the best video games of the 2010s, stating that the game is "without a doubt one of the most important games of the last ten years."[307] In June 2020, Minecraft was inducted into the World Video Game Hall of Fame.[308] In September 2014, the British Museum in London announced plans to recreate its building along with all exhibits in Minecraft in conjunction with members of the public.[348] Microsoft and non-profit Code.org had teamed up to offer Minecraft-based games, puzzles, and tutorials aimed to help teach children how to program; by March 2018, Microsoft and Code.org reported that more than 85 million children have used their tutorials.[349] After the release of Minecraft, other video games were released with various similarities to Minecraft, and some were described as being "clones". 
Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, and Total Miner.[350] David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans.[351] A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system.[352] In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms to not officially receive Minecraft at the time.[353] These clone titles include UCraft (Nexis Games),[354] Cube Life: Island Survival (Cypronia),[355] Discovery (Noowanda),[356] Battleminer (Wobbly Tooth Games),[357] Cube Creator 3D (Big John Games),[358] and Stone Shire (Finger Gun Games).[359] In the end, fans' fears proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed.[360][361][12]
Logistic regression - Simple English Wikipedia, the free encyclopedia
Figure 1: Example of a logistic curve. The values of y cannot be less than 0 or greater than 1.
Logistic regression, also known as logit regression or logit model, is a mathematical model used in statistics to estimate (guess) the probability of an event occurring having been given some previous data. Logistic regression works with binary data, where either the event happens (1) or the event does not happen (0). So given some feature x it tries to find out whether some event y happens or not. So y can either be 0 or 1. In the case where the event happens, y is given the value 1. If the event does not happen, then y is given the value of 0. For example, if y represents whether a sports team wins a match, then y will be 1 if they win the match or y will be 0 if they do not. This is known as binomial logistic regression. There is also another form of logistic regression which uses multiple values for the variable y. This form of logistic regression is known as multinomial logistic regression.
Logistic regression uses the logistic function to find a model that fits with the data points. The function gives an 'S' shaped curve to model the data. The curve is restricted between 0 and 1, so it is easy to apply when y is binary. Logistic regression can then model events better than linear regression, as it shows the probability for y being 1 for a given x value. Logistic regression is used in statistics and machine learning to predict values of an input from previous test data.
Basics
Logistic regression is an alternative method to use other than the simpler linear regression. Linear regression tries to predict the data by finding a linear – straight line – equation to model or predict future data points. Logistic regression does not look at the relationship between the two variables as a straight line.
Instead, logistic regression uses the natural logarithm function to find the relationship between the variables and uses test data to find the coefficients. The function can then predict the future results using these coefficients in the logistic equation. Logistic regression uses the concept of odds ratios to calculate the probability. This is defined as the ratio of the odds of an event happening to its not happening. For example, the probability that a sports team wins a certain match might be 0.75. The probability for that team to lose would be 1 – 0.75 = 0.25. The odds for that team winning would be 0.75/0.25 = 3; that is, the odds of the team winning are 3 to 1.[1] The odds can be defined as:
{\displaystyle Odds={P(y=1|x) \over 1-P(y=1|x)}}
The natural logarithm of the odds ratio is then taken in order to create the logistic equation. The new equation is known as the logit:
{\displaystyle Logit(P(x))=\ln \left({P(y=1|x) \over 1-P(y=1|x)}\right)}
In logistic regression the logit of the probability is said to be linear with respect to x, so the logit becomes:
{\displaystyle Logit(P(x))=a+bx}
Using the two equations together then gives the following:
{\displaystyle {P(y=1|x) \over 1-P(y=1|x)}=e^{a+bx}}
This then leads to the probability:
{\displaystyle P(y=1|x)={e^{a+bx} \over 1+e^{a+bx}}={1 \over 1+e^{-(a+bx)}}}
This final equation is the logistic curve for logistic regression. It models the non-linear relationship between x and y with an 'S'-like curve for the probability that y = 1 – that is, that the event y occurs. Here a and b are the coefficients (intercept and gradient) of the logistic function, just as in linear regression. The logit equation can then be expanded to handle multiple gradients. This gives more freedom with how the logistic curve matches the data.
The multiplication of two vectors can then be used to model more gradient values and give the following equation:
{\displaystyle Logit(P(x))=w_{0}x^{0}+w_{1}x^{1}+w_{2}x^{2}+...+w_{n}x^{n}=w^{T}x}
In this equation w = [ w0 , w1 , w2 , ... , wn ] represents the n gradients for the equation. The powers of x are given by the vector x = [ 1 , x , x2 , ... , xn ]. These two vectors give the new logit equation with multiple gradients. The logistic equation can then be changed to show this:
{\displaystyle P(y=1|x)={1 \over 1+e^{-(w^{T}x)}}}
This is then a more general logistic equation allowing for more gradient values.
↑ "Logistic Regression". faculty.cas.usf.edu. http://faculty.cas.usf.edu/mbrannick/regression/Logistic.html
https://www.strath.ac.uk/aer/materials/5furtherquantitativeresearchdesignandanalysis/unit6/whatislogisticregression/ Archived 2015-05-08 at the Wayback Machine
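The equations above are easy to evaluate directly. The sketch below (an illustration in plain Python; the helper name is ours) computes P(y=1|x) = 1/(1 + e^{-(w^T x)}) and reproduces the odds example from the text.

```python
import math

def logit_probability(w, x):
    """P(y=1|x) = 1 / (1 + e^{-(w^T x)}) for the generalized logit above.
    w and x are plain lists of coefficients and feature values."""
    wTx = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-wTx))

# Odds example from the text: P = 0.75 gives odds 0.75 / 0.25 = 3.
odds = 0.75 / (1 - 0.75)

# With w = [a, b] and x = [1, x1], w^T x reduces to a + b*x1.
# Setting a = b = 0 gives the symmetric midpoint P = 0.5.
p_mid = logit_probability([0.0, 0.0], [1.0, 5.0])
```

As the text describes, increasing w^T x pushes the probability toward 1 along the 'S'-shaped curve, while decreasing it pushes the probability toward 0.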
Von Neumann Algebras and Ergodic Theory of Group Actions | EMS Press The workshop \emph{Von Neumann Algebras and Ergodic Theory of Group Actions} was organized by Dietmar Bisch (Vanderbilt University, Nashville), Damien Gaboriau (ENS Lyon), Vaughan Jones (UC Berkeley) and Sorin Popa (UC Los Angeles). It was held in Oberwolfach from October 26 to November 1, 2008. This workshop was the first Oberwolfach meeting on von Neumann algebras and orbit equivalence ergodic theory. The organizers took special care to invite many young mathematicians and more than half of the 28 talks were given by them. The meeting was very well attended by over 40 participants, leading senior researchers and junior mathematicians in the field alike. Participants came from about a dozen different countries including Belgium, Canada, Denmark, France, Germany, Great Britain, Japan, Poland, Switzerland and the USA. The first day of the workshop featured beautiful introductory talks on orbit equivalence and von Neumann algebras (Gaboriau), Popa's deformation/rigidity techniques and applications to rigidity in II_1 factors (Vaes), subfactors and planar algebras (Bisch), random matrices, free probability and subfactors (Shlyakhtenko), subfactor lattices and conformal field theory (Xu) and an open problem session (Popa). There were many excellent lectures during the subsequent days of the conference and many new results were presented, some for the first time during this meeting.
A few of the highlights of the workshop were Vaes' report on a new cocycle superrigidity result for non-singular actions of lattices in SL(n, \mathbb{R}) on {\mathbb{R}}^n and on other homogeneous spaces (joint with Popa), Ioana's result showing that every sub-equivalence relation of the equivalence relation arising from the standard SL(2, \mathbb{Z})-action on the 2-torus {\mathbb{T}}^2 is either hyperfinite, or has relative property (T), and Epstein's report on her result that every countable, non-amenable group admits continuum many non-orbit equivalent, free, measure preserving, ergodic actions on a standard probability space. Other talks discussed new results on fundamental groups of II_1 factors, L^2-rigidity in von Neumann algebras, II_1 factors with at most one Cartan subalgebra, subfactors from Hadamard matrices, a new construction of subfactors from a planar algebra and new results on topological rigidity and the Atiyah conjecture. Many interactions and stimulating discussions took place at this workshop, which is of course exactly what the organizers had intended. The organizers would like to thank the Mathematisches Forschungsinstitut Oberwolfach for providing the splendid environment for holding this conference. Special thanks go to the very helpful and competent staff of the institute.
Damien Gaboriau, Vaughan F. R. Jones, Sorin Popa, Dietmar Bisch, Von Neumann Algebras and Ergodic Theory of Group Actions. Oberwolfach Rep. 5 (2008), no. 4, pp. 2763–2814
Nonlinear *-Jordan-type derivations on *-algebras
April 2021
Let \mathcal{A} be a *-algebra with unit I such that \mathcal{A} contains a nontrivial projection P for which X\mathcal{A}P = 0 implies X = 0 and X\mathcal{A}(I-P) = 0 implies X = 0. In this paper, it is shown that if \Phi is a nonlinear *-Jordan-type derivation on \mathcal{A}, then \Phi is an additive *-derivation. As applications, the nonlinear *-Jordan-type derivations on prime *-algebras, von Neumann algebras with no central summands of type I_1, factor von Neumann algebras and standard operator algebras are characterized.
Changjing Li, Yuanyuan Zhao, Fangfang Zhao. "Nonlinear *-Jordan-type derivations on *-algebras." Rocky Mountain J. Math. 51 (2), 601–612, April 2021. https://doi.org/10.1216/rmj.2021.51.601
Received: 22 June 2020; Revised: 21 September 2020; Accepted: 16 October 2020; Published: April 2021
Keywords: *-derivations, *-Jordan-type derivations, von Neumann algebras
Vicsek fractal - Wikipedia
Vicsek fractal (5th iteration of cross form)
In mathematics, the Vicsek fractal, also known as the Vicsek snowflake or box fractal,[1][2] is a fractal arising from a construction similar to that of the Sierpinski carpet, proposed by Tamás Vicsek. It has applications including as compact antennas, particularly in cellular phones.
6 steps of a Sierpinski carpet
Self-affine fractal built from a 3 × 2 grid
Box fractal also refers to various iterated fractals created by a square or rectangular grid with various boxes removed or absent and, at each iteration, those present and/or those absent have the previous image scaled down and drawn within them. The Sierpinski triangle may be approximated by a 2 × 2 box fractal with one corner removed. The Sierpinski carpet is a 3 × 3 box fractal with the middle square removed.
The basic square is decomposed into nine smaller squares in the 3-by-3 grid. The four squares at the corners and the middle square are left, the other squares being removed. The process is repeated recursively for each of the five remaining subsquares. The Vicsek fractal is the set obtained at the limit of this procedure. The Hausdorff dimension of this fractal is {\displaystyle \textstyle {\frac {\log(5)}{\log(3)}}} ≈ 1.46497.
An alternative construction (shown below in the left image) is to remove the four corner squares and leave the middle square and the squares above, below, left and right of it. The two constructions produce identical limiting curves, but one is rotated by 45 degrees with respect to the other.
Self-similarities I — removing corner squares.
Self-similarities II — keeping corner squares.
Four iterations of the saltire form of the fractal (top) and the cross form of the fractal (bottom).
Anticross-stitch curve, iterations 0–4
Cross-stitch island
Approximation by the chaos game, where the jump = 2/3 randomly towards either the center or one of the vertices of a square
The Vicsek fractal has the surprising property that it has zero area yet an infinite perimeter, due to its non-integer dimension. At each iteration, four squares are removed for every five retained, meaning that at iteration n the area is {\displaystyle \textstyle {({\frac {5}{9}})^{n}}} (assuming an initial square of side length 1). As n approaches infinity, the area approaches zero. The perimeter, however, is {\displaystyle \textstyle {4({\frac {5}{3}})^{n}}}, because each side is divided into three parts and the center one is replaced with three sides, so the total length grows by a factor of 5/3 at each iteration. The perimeter approaches infinity as n increases. The boundary of the Vicsek fractal is the Type 1 quadratic Koch curve.
Animation of the 3D analogue of the Vicsek fractal (third iteration)
Flight to and around a 3D Vicsek fractal
There is a three-dimensional analogue of the Vicsek fractal. It is constructed by subdividing each cube into 27 smaller ones, and removing all but the "center cross", the central cube and the six cubes touching the center of each face. Its Hausdorff dimension is {\displaystyle \textstyle {\frac {\log(7)}{\log(3)}}}.
Similarly to the two-dimensional Vicsek fractal, this figure has zero volume. Each iteration retains 7 cubes for every 27, resulting in a volume of {\displaystyle \textstyle {({\frac {7}{27}})^{n}}} at iteration n, which approaches zero as n approaches infinity. There exist infinitely many cross sections which yield the two-dimensional Vicsek fractal.
ISBN 9780780384019.
^ Weisstein, Eric W. "Box Fractal". MathWorld.
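The area and perimeter formulas for the Vicsek fractal are simple enough to verify numerically. The sketch below (illustrative helper names, pure Python) evaluates both, along with the Hausdorff dimension log 5 / log 3.

```python
import math

def vicsek_area(n):
    """Area after n iterations of a unit square: (5/9)**n,
    since 5 of every 9 subsquares are retained."""
    return (5 / 9) ** n

def vicsek_perimeter(n):
    """Perimeter after n iterations: 4 * (5/3)**n,
    since the total boundary length grows by 5/3 per iteration."""
    return 4 * (5 / 3) ** n

# Hausdorff dimension: 5 self-similar copies at scale 1/3.
hausdorff_dim = math.log(5) / math.log(3)
```

As n grows, vicsek_area(n) tends to 0 while vicsek_perimeter(n) diverges, which is exactly the "zero area, infinite perimeter" property described above.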
Short-time Fourier transform - MATLAB stft - MathWorks Korea
Compute and plot the STFT of the signal. Use a Kaiser window of length 256 and shape parameter \beta = 5. Specify the length of overlap as 220 samples and DFT length as 512 points. Plot the STFT with default colormap and view.
Generate a quadratic chirp sampled at 1 kHz for 2 seconds. The instantaneous frequency is 100 Hz at t = 0 and t = 1.
Compute the one-sided, two-sided, and centered short-time Fourier transforms of the signal. In all cases, use a 202-sample Kaiser window with shape factor \beta = 10 to window the signal segments. Display the frequency range used to compute each transform.
Given a signal x(n) of length N_x, a window g(n) of length M, and an overlap of L samples between adjoining segments, the signal is divided into
k = \lfloor (N_x - L) / (M - L) \rfloor
segments, and the STFT matrix is
X(f) = \left[\begin{array}{ccccc} X_1(f) & X_2(f) & X_3(f) & \cdots & X_k(f) \end{array}\right],
where the m-th column is
X_m(f) = \sum_{n=-\infty}^{\infty} x(n)\, g(n - mR)\, e^{-j2\pi fn},
the DFT of the data windowed about sample mR, with hop length R = M - L between adjoining segments. Equivalently, in MATLAB terms,
k = (length(x) - noverlap) / (length(window) - noverlap).
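The segment-count formula above is easy to check directly. The sketch below (plain Python, not MathWorks code; the 2000-sample signal length is an assumption matching "1 kHz for 2 seconds") computes k = ⌊(Nx − L)/(M − L)⌋.

```python
def stft_columns(nx, m, l):
    """Number of STFT columns k = floor((Nx - L) / (M - L)) for a signal
    of length Nx, window length M, and overlap L; the hop length between
    adjoining segments is R = M - L."""
    r = m - l                  # hop length
    return (nx - l) // r

# Assuming 2000 samples (1 kHz sampling for 2 seconds) with the
# 256-sample window and 220-sample overlap from the first example:
k = stft_columns(2000, 256, 220)
```

With a 36-sample hop, the 2000-sample signal yields 49 full segments, so the STFT matrix has 49 columns.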
Composition ring - Knowpia
In mathematics, a composition ring, introduced in (Adler 1962), is a commutative ring (R, 0, +, −, ·), possibly without an identity 1 (see non-unital ring), together with an operation {\displaystyle \circ :R\times R\rightarrow R} such that, for any three elements {\displaystyle f,g,h\in R}:
{\displaystyle (f+g)\circ h=(f\circ h)+(g\circ h)}
{\displaystyle (f\cdot g)\circ h=(f\circ h)\cdot (g\circ h)}
{\displaystyle (f\circ g)\circ h=f\circ (g\circ h).}
It is not generally the case that {\displaystyle f\circ g=g\circ f}, nor is it generally the case that {\displaystyle f\circ (g+h)} (or {\displaystyle f\circ (g\cdot h)}) has any algebraic relationship to {\displaystyle f\circ g} and {\displaystyle f\circ h}.
There are a few ways to make a commutative ring R into a composition ring without introducing anything new. Composition may be defined by {\displaystyle f\circ g=0} for all f,g. The resulting composition ring is a rather uninteresting one. Composition may also be defined by {\displaystyle f\circ g=f} for all f,g. This is the composition rule for constant functions. If R is a boolean ring, then multiplication may double as composition: {\displaystyle f\circ g=fg} for all f,g.
More interesting examples can be formed by defining a composition on another ring constructed from R. The polynomial ring R[X] is a composition ring where {\displaystyle (f\circ g)(x)=f(g(x))} for {\displaystyle f,g\in R[X]}. The formal power series ring R[[X]] also has a substitution operation, but it is only defined if the series g being substituted has zero constant term (if not, the constant term of the result would be given by an infinite series with arbitrary coefficients). Therefore, the subset of R[[X]] formed by power series with zero constant coefficient can be made into a composition ring with composition given by the same substitution rule as for polynomials. Since nonzero constant series are absent, this composition ring does not have a multiplicative unit.
If R is an integral domain, the field R(X) of rational functions also has a substitution operation derived from that of polynomials: substituting a fraction g1/g2 for X into a polynomial of degree n gives a rational function with denominator {\displaystyle g_{2}^{n}}, and substituting into a fraction is given by {\displaystyle {\frac {f_{1}}{f_{2}}}\circ g={\frac {f_{1}\circ g}{f_{2}\circ g}}.} However, as for formal power series, the composition cannot always be defined when the right operand g is a constant: in the formula given the denominator {\displaystyle f_{2}\circ g} should not be identically zero. One must therefore restrict to a subring of R(X) to have a well-defined composition operation; a suitable subring is given by the rational functions of which the numerator has zero constant term, but the denominator has nonzero constant term. Again this composition ring has no multiplicative unit; if R is a field, it is in fact a subring of the formal power series example. The set of all functions from R to R under pointwise addition and multiplication, and with {\displaystyle \circ } given by composition of functions, is a composition ring. There are numerous variations of this idea, such as the ring of continuous, smooth, holomorphic, or polynomial functions from a ring to itself, when these concepts make sense. For a concrete example, take the ring {\displaystyle {\mathbb {Z} }[x]}, considered as the ring of polynomial maps from the integers to itself. A ring endomorphism {\displaystyle F:{\mathbb {Z} }[x]\rightarrow {\mathbb {Z} }[x]} of {\displaystyle {\mathbb {Z} }[x]} is determined by the image under {\displaystyle F} of {\displaystyle x}, say {\displaystyle f=F(x)}, and this image {\displaystyle f} can be any element of {\displaystyle {\mathbb {Z} }[x]}.
Therefore, one may consider the elements {\displaystyle f\in {\mathbb {Z} }[x]} as endomorphisms and assign {\displaystyle \circ :{\mathbb {Z} }[x]\times {\mathbb {Z} }[x]\rightarrow {\mathbb {Z} }[x]} , accordingly. One easily verifies that {\displaystyle {\mathbb {Z} }[x]} satisfies the above axioms. For example, one has {\displaystyle (x^{2}+3x+5)\circ (x-2)=(x-2)^{2}+3(x-2)+5=x^{2}-x+3.} This example is isomorphic to the given example for R[X] with R equal to {\displaystyle \mathbb {Z} } , and also to the subring of all functions {\displaystyle \mathbb {Z} \to \mathbb {Z} } formed by the polynomial functions. Carleman matrix Adler, Irving (1962), "Composition rings", Duke Mathematical Journal, 29 (4): 607–623, doi:10.1215/S0012-7094-62-02961-7, ISSN 0012-7094, MR 0142573
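The polynomial-composition example above can be sketched in a few lines of Python (helper names are ours; coefficients are listed lowest degree first), including a check of the worked example (x² + 3x + 5) ∘ (x − 2) = x² − x + 3 and of the associativity axiom.

```python
def poly_add(p, q):
    """Add two coefficient lists, padding the shorter one with zeros."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Multiply two coefficient lists (convolution of coefficients)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_compose(f, g):
    """(f o g)(x) = f(g(x)): substitute g into f by Horner-style
    accumulation of coeff * g**k terms."""
    result, power = [0], [1]     # power holds g**k, starting at g**0 = 1
    for coeff in f:
        result = poly_add(result, [coeff * c for c in power])
        power = poly_mul(power, g)
    return result

# Worked example from the text: f = x^2 + 3x + 5, g = x - 2.
composed = poly_compose([5, 3, 1], [-2, 1])   # expect x^2 - x + 3
```

Spot-checking (f ∘ g) ∘ h against f ∘ (g ∘ h) for small polynomials confirms the associativity axiom of a composition ring for this substitution rule.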
EuDML | Complete Spaces of Vector-Valued Holomorphic Germs
J. Bonet; P. Domanski; J. Mujica
Bonet, J., Domanski, P., and Mujica, J. "Complete Spaces of Vector-Valued Holomorphic Germs." Mathematica Scandinavica 75.1 (1994): 150–160. <http://eudml.org/doc/167311>.
Keywords: non-empty compact subset of a Fréchet space; LB-space; germs of holomorphic functions; quasinormable; Fréchet–Montel space; {ℒ}_{\infty}-space in the sense of Lindenstrauss and Pełczyński
torch.nn.functional.grid_sample — PyTorch 1.11.0 documentation
torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=None)
In the spatial (4-D) case, for input with shape (N, C, H_\text{in}, W_\text{in}) and grid with shape (N, H_\text{out}, W_\text{out}, 2), the output will have shape (N, C, H_\text{out}, W_\text{out}).
When using the CUDA backend, this operation may induce nondeterministic behaviour in its backward pass that is not easily switched off. Please see the notes on Reproducibility for background.
input (Tensor) – input of shape (N, C, H_\text{in}, W_\text{in}) (4-D case) or (N, C, D_\text{in}, H_\text{in}, W_\text{in}) (5-D case)
grid (Tensor) – flow-field of shape (N, H_\text{out}, W_\text{out}, 2) (4-D case) or (N, D_\text{out}, H_\text{out}, W_\text{out}, 3) (5-D case)
align_corners (bool, optional) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic. This option parallels the align_corners option in interpolate(), and so whichever option is used here should also be used there to resize the input image before grid sampling. Default: False
When align_corners = True, the grid positions depend on the pixel size relative to the input image size, and so the locations sampled by grid_sample() will differ for the same input given at different resolutions (that is, after being upsampled or downsampled). The default behavior up to version 1.2.0 was align_corners = True. Since then, the default behavior has been changed to align_corners = False, in order to bring it in line with the default for interpolate().
mode='bicubic' is implemented using the cubic convolution algorithm with \alpha = -0.75. The value of \alpha may differ from package to package; for example, PIL and OpenCV use -0.5 and -0.75 respectively. This algorithm may “overshoot” the range of values it’s interpolating. For example, it may produce negative values or values greater than 255 when interpolating input in [0, 255]. Clamp the results with torch.clamp() to ensure they are within the valid range.
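The two align_corners conventions described above amount to two different mappings from a normalized grid coordinate g ∈ [-1, 1] to a pixel index. The sketch below (plain Python, not PyTorch source; the helper name is ours) shows the mapping each convention implies.

```python
def unnormalize(g, size, align_corners):
    """Map a normalized grid coordinate g in [-1, 1] to a pixel index
    along an axis of the given size.

    align_corners=True:  -1 and 1 refer to the *centers* of the corner
    pixels, so the mapping spans indices 0 .. size-1.
    align_corners=False: -1 and 1 refer to the *outer edges* of the
    corner pixels, so the mapping spans -0.5 .. size-0.5.
    """
    if align_corners:
        return (g + 1) / 2 * (size - 1)
    return ((g + 1) * size - 1) / 2
```

For a width-4 axis, g = ±1 lands on indices 0 and 3 with align_corners=True, but on -0.5 and 3.5 (the pixel edges) with align_corners=False, which is why the False setting is described as more resolution agnostic.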
When does the associated graded Lie algebra of an arrangement group decompose? | EMS Press
Let \mathcal{A} be a complex hyperplane arrangement, with fundamental group G and holonomy Lie algebra \mathfrak{H}. Suppose that \mathfrak{H}_3 is a free abelian group of minimum possible rank, given the values the Möbius function \mu\colon \mathcal{L}_2\to \mathbb{Z} takes on the rank 2 flats of \mathcal{A}. Then the associated graded Lie algebra of G decomposes (in degrees \ge 2) as a direct product of free Lie algebras. In particular, the ranks of the lower central series quotients of the group are given by \phi_r(G)=\sum_{X\in \mathcal{L}_2} \phi_r(F_{\mu(X)}) for r\ge 2. We illustrate this new Lower Central Series formula with several families of examples.
Alexander I. Suciu, Stefan Papadima, When does the associated graded Lie algebra of an arrangement group decompose?. Comment. Math. Helv. 81 (2006), no. 4, pp. 859–875
Publications - A Little Ink
Jim Fowler, Bart Snapp, Carolyn Johns, Darry Andrews, Dan Boros, Herb Clemens, Vic Ferdinand, Brad Findell, Bill Husen, John H. Johnson, Nela Lakos, Elizabeth Miller, Bobby Ramsey, Jenny Sheldon, Jim Talamo, and Tim Carlson (2021). An Open-Source Calculus Textbook on the Ximera Platform. PRIMUS, 31(1) 925–939. DOI: 10.1080/10511970.2020.1781720.
Ranthony A.C. Edmonds and John H. Johnson Jr. (Feb. 2021). Intersections of Mathematics and Society. Notices of the American Mathematical Society 68(2), 207–210. DOI: 10.1090/noti2219.
E. Miller, J. Fowler, C. Johns, J. Johnson Jr., B. Ramsey, and B. Snapp (2021). Increasing Active Learning in Large, Tightly Coordinated Calculus Courses. PRIMUS 31(3-5), 371–392. DOI: 10.1080/10511970.2020.1772923.
John H. Johnson Jr. and Florian Karl Richter (Dec. 2017). Revisiting the nilpotent polynomial Hales–Jewett theorem. Adv. Math. 321(1), 269–286. DOI: 10.1016/j.aim.2017.09.033 MR: 3715711.
Vitaly Bergelson, John H. Johnson Jr., and Joel Moreira (Apr. 2017). New polynomial and multi-dimensional extensions of classical partition results. J. Combin. Theory Ser. A 147, 119–154. DOI: 10.1016/j.jcta.2016.11.010 MR: 3589892.
John H. Johnson Jr. (July 2015). A new and simpler noncommutative central sets theorem. Topology Appl. 189(1), 10–24. DOI: 10.1016/j.topol.2015.03.006 MR: 3342569.
Neil Hindman and John H. Johnson (2012/13). Images of C sets and related large sets under nonhomogeneous spectra. INTEGERS 12B (Proceedings of the Integers Conference 2011), Paper No. A2, 25 pp. URL: http://math.colgate.edu/~integers/a2intproc11/a2intproc11.pdf MR: 3055676.
Cory Christopherson and John H. Johnson Jr. (July 2021). Algebraic characterizations of some relative notions of size. (Accepted to Semigroup Forum). arXiv: 2105.09723 [math.GN].
John H. Johnson (Dec. 2011). A dynamical characterization of C sets. arXiv: 1112.0715 [math.DS].
I hope your visit wont break down altogether especially as I shall like to have a little talk which does better than writing.2 George told me I think that you had directed all your real estate to be sold so as to convert it into personalty and in that case I will do the same.3 This could then be added to the personalty which will be a good deal more than the realty & divided in some proportion among the children. Have you estimated what the proportion is in which you have left your own. I should be rather inclined to leave \frac{5}{6} to the boys & \frac{1}{6} to the girls & I think they i.e. girls will have as much as is good for them. Yours E. D. Review in the Times today 4 \frac{1}{2} Columns. Have you got it?4 Verso of letter: ‘12,000 to each child’ pencil The year is established by the reference to the review of Expression in The Times (see n. 4, below). Illness prevented CD from travelling to London on 10 December as he had intended (see letter from E. A. Darwin to Emma Darwin, 9 December [1872]), but he did visit Erasmus from 17 to 22 or 23 December (see ‘Journal’ (Appendix II)). George Howard Darwin, who had just completed his training in law, had been advising Erasmus on writing his will (see letter from E. A. Darwin, 11 December [1872] and n. 2). The Times, 13 December 1872, p. 4, carried a review of Expression, a copy of which is in CD’s collection of reviews (DAR 226.2: 142–4). Hopes to have a visit to discuss proportions to be left to the children under their wills; thinks 5/6 to the boys, 1/6 to the girls who "will have as much as is good for them".
Managing Parallelism Part 2
Managing Parallelism, Part 2: To thread or not to thread
This post is all about the types of parallelism available in general on a modern machine. There are many. It'll focus on open operating systems like Linux, and at times we'll talk about others like OS X or earlier operating systems where applicable. Instead of talking about the hardware and software separately, we'll go back and forth when needed. First we'll start off with an overview of the types of parallelism that we could have. Let's start off with some terminology and a really high-level overview before diving in.
Software Managed Parallelism
In general there are three levels of parallelism managed by the operating system, and also three that are generally exploited by the hardware. Starting with the software managed (and by managed, I mean scheduled, or "when and where to run") we have the process (green at the bottom, figure below) as the most heavyweight. For a long time, everything was a process. There were no threads to be had. Before widespread OS threading support was available (or needed), user-space threading was (and still often is) used to context swap between multiple units of work. In modern systems, the next smallest unit of parallelism is the kernel-managed thread. Each process has what is generally called a "main" thread, which it uses upon start-up and which is initially scheduled by the operating system (this applies to systems that use software-managed threads; there are also systems with hardware scheduling, which we'll cover in a bit). In general the order from largest and most monolithic to smallest and finest-grained is: Process, Kernel Thread, User-space Thread. Each process can have many OS-managed threads. OS-managed threads in turn, including the main thread, can have many user-space managed threads.
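A minimal sketch of the hierarchy described above, using Python's threading module for brevity (on Linux each threading.Thread maps to a kernel-managed thread; the post itself is not tied to any language):

```python
import threading

results = []
lock = threading.Lock()

def worker(thread_id):
    # Runs on a kernel-managed thread; the OS scheduler decides when
    # and on which core it executes.
    with lock:
        results.append(thread_id)

# The process begins with a single "main" thread and asks the kernel
# for three more.
threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for each kernel thread to finish

print(sorted(results))  # → [0, 1, 2]
```

User-space threads, by contrast, would multiplex many such workers onto one kernel thread, with the scheduler written in the program itself.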
User-space threads are managed not by the operating system but typically by a run-time or a manual (user-defined) scheduling system (again, details will be discussed later in this post). Processes, threads, and user-space threads can also leverage parallelism mediated by the hardware, such as SIMD, out-of-order execution, and pipeline parallelism. This, of course, is outside the control of the software that runs on the hardware, but it is there nonetheless.
<img class="center-image responsive-image" src="/img/CPUManagedContext.png" height="auto" width=75% />
Hardware Managed Parallelism
Unlike software, hardware is capable of being completely parallel. On every clock tick, signals can be processed. This can be as wide as timing constraints allow (a topic for another post). I want to go over the three main sources of parallelism in modern processors: SIMD, pipeline, and out-of-order execution. Let's start with SIMD, which stands for single instruction, multiple data. It is one of the most naive forms of parallelism available to hardware but also one of the most effective for many workloads. Using the previous post's assembly-line terminology, SIMD is like adding N workers of the same type at one point in the assembly line. Looking at it in the abstract:
<img class="center-image responsive-image" src="/img/vectorAdd.png" height="auto" width=75% />
SIMD parallelism ostensibly provides (at least the illusion of) a compute element that can operate on multiple elements of fixed width at the exact same time. I say illusion because even though the instruction set architecture specifies a vector of width N, you could end up doing sequential operations or smaller vector ops on that width of data. At a high level it looks something like the picture below.
<img class="center-image responsive-image" src="/img/SIMD_Hierarchy.png" height="auto" width=75% />
The processor has instructions that specify a vector operation, in this case an add instruction.
The processor also has registers that are theoretically architected to match the width of the actual vector unit. I say theoretically because these can be virtualized; however, even the virtualized widths should preserve the illusion that the registers have the same width as the vector instruction. I must emphasize that there is no guarantee that this will be the case (unless guaranteed by the ISA). The register only has to be wide enough to feed the actual physical vector unit. Over some number of cycles, the vector processor takes the two registers and performs the operation on them. The result could be written back to a third register, but in our example it will write back to register A. The goal of a vector op is to perform multiple operations with, ideally, a single wide load from memory to the registers. The load from memory is interesting, and I want to take a second to talk about it specifically, since in many cases it, not the compute, is the dominant factor. Most cache lines are 64 bytes in length. In total that means each cache line has 512 bits. Some cache lines are larger, e.g., 128 bytes; regardless of the exact size, the concept of minimum granularity is what counts. No matter how little you, the programmer, ask for, you will always get that minimum granularity. DDR (your main memory, RAM) is accessed in bursts. For every cache line that is filled, your core accesses DRAM multiple times (the burst length, where N=8 for most systems), so one cache line fill takes 8 DRAM accesses. Every trip to DRAM costs energy; I'll leave energy to a future blog post in this series. Okay, back to memory and SIMD vectors. Making vectors wider than the unit of coherence can strain the memory system, so it becomes critical to get as much cache-line re-use as possible so that a full burst isn't needed for every vector operation; otherwise much of the performance gained from the SIMD unit could be lost to memory latency.
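The cache-line and burst arithmetic above can be checked in a few lines (Python for brevity; the 64-byte line and burst length of 8 are the typical values the text cites):

```python
from array import array

CACHE_LINE_BYTES = 64   # typical line size (512 bits), per the text
BURST_LENGTH = 8        # DRAM beats per cache line fill on most systems

# Elements per line: the minimum granularity you get on any load.
f32_per_line = CACHE_LINE_BYTES // array('f', [0.0]).itemsize  # 4-byte floats
f64_per_line = CACHE_LINE_BYTES // array('d', [0.0]).itemsize  # 8-byte doubles
bytes_per_beat = CACHE_LINE_BYTES // BURST_LENGTH

print(f32_per_line, f64_per_line, bytes_per_beat)  # → 16 8 8
```

A 128-bit vector add over float32, for example, consumes only a quarter of one line, which is why re-use within a line matters so much.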
Naively implementing everything as SIMD without profiling and hardware tuning is unwise, as your performance will likely suffer. Another classic form of hardware parallelism is a pipeline. I'm going to explain how and why it works for hardware in two different ways, then go for the software corollary. First, let's get back to the original worker and assembly-line allusion from part 1. If we have an assembly line and there is a task to perform, the rate at which items come off that assembly line is limited by the speed at which each worker can perform their tasks. Let's assume that worker A has a task with three main steps that each take a single unit of time, so the total time to complete a job for worker A is three units of time. Now let's assume that we can divide that job into three pieces and get two new workers who can each perform part of the main job. Now each worker takes a single unit of time to perform their job. In this new "pipelined" arrangement, one job is completed every single unit of time. This is in contrast to the single worker doing the job with three tasks, where only one item was completed for every three units of time. This is huge: we've now parallelized something temporally (in time), something that wasn't obviously parallelizable. Now take the picture below:
<img class="center-image responsive-image" src="/img/PipelinedHardwareParallelism.png" height="auto" width=75% />
A single instruction has the property that completing it involves three discernible sub-tasks or stages. Each sub-task now takes \frac{1}{3} of the time compared to the single monolithic operation. So what's the downside? Well, there is very little, unless the cost of communicating a job from one worker to the next is high. If you have a single instruction, and no follow-on work, then there is really no benefit to pipeline decomposition of the operation.
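The timing argument above can be sketched numerically (a toy model in Python; "units of time" are abstract, as in the worker analogy):

```python
def unpipelined_cycles(jobs, stages=3, stage_time=1):
    # One worker performs all three steps of a job before starting the next.
    return jobs * stages * stage_time

def pipelined_cycles(jobs, stages=3, stage_time=1):
    # One worker per step: after the pipeline fills (stages steps),
    # one job completes every stage_time units.
    return (stages + (jobs - 1)) * stage_time

# A single job has the same 3-unit latency either way:
print(unpipelined_cycles(1), pipelined_cycles(1))      # → 3 3
# With many jobs, throughput approaches one job per unit of time:
print(unpipelined_cycles(100), pipelined_cycles(100))  # → 300 102
```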
You'll still have the same 3-units-of-time latency (using our previous example) as the non-pipelined instruction. The advantage is that if you have lots of instructions then you'll get one completed operation every unit of time, whereas before the core completed one only every three units, limited by the monolithic operation we broke up. Another fortuitous side-effect in hardware is that each shorter task is now simpler and requires less logic. For hardware this typically means that you can run it at a higher clock rate. In general, more pipeline stages and less actual work per stage mean higher clock rates and more throughput (until you run into the limits of your latches and clock skew). Where this largely falls apart is branching (perhaps a subject for another blog post; in the interim see: Pentium 4 Pipeline Problems). In addition to pipeline parallelism and SIMD parallelism, the way almost all high-performance cores get performance is through out-of-order execution. Out-of-order execution relies on independence of instructions within the instruction window. Looking at the example below:
<img class="center-image responsive-image" src="/img/OoO.png" height="50%" width=75% />
we have a simple ROB (re-order buffer) with instruction A at the head and D at the tail. The only dependent instruction within the ROB is C, which depends on the result of A. All the other instructions can be issued in parallel, except for instruction C, which must wait for instruction A to complete. These instructions are committed (seen as complete by the memory system and effectively by the program) in order, from the head of the ROB to the tail. This parallelism is limited by the amount of independence in the instruction window, the size of the instruction window, and the number of functional units in the core to issue instructions to. The performance of hardware parallelism is often limited by the memory fabric feeding the functional compute units. We'll look at those issues in a future post.
So we've talked about many means of extracting parallelism, but what about parallelism the programmer can control? (Most of the hardware mechanisms are largely outside of what you can influence, though there are exceptions where instruction scheduling can make a huge difference.) In the next post we'll take a deep dive into software-managed parallelism.
Overview of the Distribution Object
The Distribution object provides a general toolkit for dealing with distributions. A user can query properties of a Distribution object, compute associated quantities, and combine distributions in various ways. Some existing Maple utility functions, such as indets, type, and has, are overloaded for use with the Distribution object. A distribution in the differential-geometric sense is a specification of a subspace of the tangent space at each point of a manifold M. A Distribution object can be constructed via the Distribution constructor. To construct a Distribution object, see LieAlgebrasOfVectorFields[Distribution]. The Distribution object is an exported item in the LieAlgebrasOfVectorFields package. To construct and access a Distribution object, the LieAlgebrasOfVectorFields package must be loaded (i.e. with(LieAlgebrasOfVectorFields);). For more information, see Overview of the LieAlgebrasOfVectorFields package. Once a Distribution object S has been constructed, each method in the Distribution object S can be accessed by either the short form command(S, otherArguments) or the long form S:-command(S, otherArguments). A Distribution object is displayed (via its ModulePrint method) as a set of VectorField objects which form a basis for it at each point.
with(LieAlgebrasOfVectorFields):
We first build the vector fields associated with a 3-d cylinder (2-dim x-y rotation, z translation, and uniform scaling in z):
X1 := VectorField(-y*D[x] + x*D[y], space = [x, y, z])
    X1 := -y ∂/∂x + x ∂/∂y
X2 := VectorField(D[z], space = [x, y, z])
    X2 := ∂/∂z
X3 := VectorField(z*D[z], space = [x, y, z])
    X3 := z ∂/∂z
Sigma := Distribution(X1, X2, X3)
    Sigma := { -(y/x) ∂/∂x + ∂/∂y, ∂/∂z }
Omega := Distribution(VectorField(D[x], space = [x, y, z]))
    Omega := { ∂/∂x }
We can request the dimension of this distribution:
Dimension(Sigma)
    2
IsInvolutive(Sigma)
    true
IsIntegrable(Sigma)
    true
We can check whether x-translation is a subspace of Sigma:
IsSubspace(Omega, Sigma)
    false
The sum of these two distributions covers all of (x, y, z) space:
VectorSpaceSum(Sigma, Omega)
    { ∂/∂x, ∂/∂y, ∂/∂z }
These two distributions don't intersect:
Intersection(Sigma, Omega)
    ∅
The invariant of Sigma:
Integrals(Sigma)
    [x^2 + y^2]
Finding other distributions:
CauchyDistribution(Sigma)
    { -(y/x) ∂/∂x + ∂/∂y, ∂/∂z }
DerivedDistribution(Sigma)
    { -(y/x) ∂/∂x + ∂/∂y, ∂/∂z }
type(Sigma, 'Distribution')
    true
Regularize - Maple Help
make a polynomial regular or null with respect to a regular chain
Regularize(p, rc, R)
Regularize(p, rc, R, 'normalized'='yes')
Regularize(p, rc, R, 'normalized'='strongly')
The command Regularize(p, rc, R) returns a list made of two lists. The first one consists of regular chains reg_i such that p is regular modulo the saturated ideal of reg_i. The second one consists of regular chains sing_i such that p is null modulo the saturated ideal of sing_i. In addition, the union of the regular chains of these two lists is a decomposition of rc in the sense of Kalkbrener. If 'normalized'='yes' is passed, all the returned regular chains are normalized. If 'normalized'='strongly' is passed, all the returned regular chains are strongly normalized. If 'normalized'='yes' is present, rc must be normalized. If 'normalized'='strongly' is present, rc must be strongly normalized. The command RegularizeDim0 implements another algorithm with the same purpose as the command Regularize; however, it is specialized to zero-dimensional regular chains in prime characteristic. When both algorithms apply, the latter usually outperforms the former. This command is part of the RegularChains[ChainTools] package, so it can be used in the form Regularize(..) only after executing the command with(RegularChains[ChainTools]). However, it can always be accessed through the long form of the command by using RegularChains[ChainTools][Regularize](..).
with(RegularChains):
with(ChainTools):
R := PolynomialRing([x, y, z])
    R := polynomial_ring
rc := Empty(R)
    rc := regular_chain
rc := Chain([z*(z-1), y*(y-2)], rc, R);
Equations(rc, R)
    rc := regular_chain
    [y^2 - 2*y, z^2 - z]
p := z*x + y
    p := z*x + y
reg, sing := op(Regularize(p, rc, R))
    reg, sing := [regular_chain, regular_chain, regular_chain], [regular_chain]
map(Equations, reg, R)
    [[y - 2, z], [y, z - 1], [y - 2, z - 1]]
map(Equations, sing, R)
    [[y, z]]
[seq(SparsePseudoRemainder(p, reg[i], R), i = 1 .. nops(reg))]
    [2, x, x + 2]
seq(SparsePseudoRemainder(p, sing[i], R), i = 1 .. nops(sing))
    0
Bile acid-CoA:amino acid N-acyltransferase - Wikipedia
In enzymology, a bile acid-CoA:amino acid N-acyltransferase (EC 2.3.1.65) is an enzyme that catalyzes the chemical reaction
choloyl-CoA + glycine ⇌ CoA + glycocholate
Thus, the two substrates of this enzyme are choloyl-CoA and glycine, whereas its two products are CoA and glycocholate. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is choloyl-CoA:glycine N-choloyltransferase. Other names in common use include glycine-taurine N-acyltransferase, amino acid N-choloyltransferase, BAT, glycine N-choloyltransferase, BACAT, cholyl-CoA glycine-taurine N-acyltransferase, and cholyl-CoA:taurine N-acyltransferase. This enzyme participates in bile acid biosynthesis and taurine and hypotaurine metabolism. Czuba B, Vessey DA (1980). "Kinetic characterization of cholyl-CoA glycine-taurine N-acyltransferase from bovine liver". J. Biol. Chem. 255 (11): 5296–9. PMID 7372637. Jordan TW, Lee R, Lim WC (1980). "Isoelectric focussing of soluble and particulate benzoyl-CoA and cholyl-CoA:amino acid N-acyltransferases from rat liver". Biochem. Int. 1: 325–330. Vessey DA (1979). "The co-purification and common identity of cholyl CoA:glycine- and cholyl CoA:taurine-N-acyltransferase activities from bovine liver". J. Biol. Chem. 254 (6): 2059–63. PMID 422567. Johnson MR, Barnes S, Kwakye JB, Diasio RB (1991). "Purification and characterization of bile acid-CoA:amino acid N-acyltransferase from human liver". J. Biol. Chem. 266 (16): 10227–33. PMID 2037576. Falany CN, Xie X, Wheeler JB, Wang J, Smith M, He D, Barnes S (2002). "Molecular cloning and expression of rat liver bile acid CoA ligase". J. Lipid Res. 43 (12): 2062–71. doi:10.1194/jlr.M200260-JLR200. PMID 12454267. He D, Barnes S, Falany CN (2003).
"Rat liver bile acid CoA:amino acid N-acyltransferase: expression, characterization, and peroxisomal localization". J. Lipid Res. 44 (12): 2242–9. doi:10.1194/jlr.M300128-JLR200. PMID 12951368. O'Byrne J, Hunt MC, Rai DK, Saeki M, Alexson SE (2003). "The human bile acid-CoA:amino acid N-acyltransferase functions in the conjugation of fatty acids to glycine". J. Biol. Chem. 278 (36): 34237–44. doi:10.1074/jbc.M300987200. PMID 12810727. Retrieved from "https://en.wikipedia.org/w/index.php?title=Bile_acid-CoA:amino_acid_N-acyltransferase&oldid=984493719"
Quadratic residuosity problem - Wikipedia
The quadratic residuosity problem (QRP[1]) in computational number theory is to decide, given integers a and N, whether a is a quadratic residue modulo N or not. Here N = p_1 p_2 for two unknown primes p_1 and p_2, and a is among the numbers which are not obviously quadratic non-residues (see below). The problem was first described by Gauss in his Disquisitiones Arithmeticae in 1801. This problem is believed to be computationally difficult. Several cryptographic methods rely on its hardness, see § Applications. An efficient algorithm for the quadratic residuosity problem immediately implies efficient algorithms for other number theoretic problems, such as deciding whether a composite N of unknown factorization is the product of 2 or 3 primes.[2]
Precise formulation[edit]
Given integers a and T, a is said to be a quadratic residue modulo T if there exists b such that a \equiv b^2 \pmod{T}. Otherwise we say it is a quadratic non-residue. When T = p is a prime, it is customary to use the Legendre symbol:
\left(\frac{a}{p}\right)={\begin{cases}1&{\text{ if }}a{\text{ is a quadratic residue modulo }}p{\text{ and }}a\not \equiv 0{\pmod {p}},\\-1&{\text{ if }}a{\text{ is a quadratic non-residue modulo }}p,\\0&{\text{ if }}a\equiv 0{\pmod {p}}.\end{cases}}
This is a multiplicative character, which means \big(\tfrac{a}{p}\big)=1 for exactly (p-1)/2 of the values 1,\ldots,p-1, and -1 for the remaining. It is easy to compute using the law of quadratic reciprocity in a manner akin to the Euclidean algorithm; see Legendre symbol.
Consider now some given N = p_1 p_2 where p_1 and p_2 are two different, unknown primes. A given a is a quadratic residue modulo N if and only if a is a quadratic residue modulo both p_1 and p_2. Since we don't know p_1 or p_2, we cannot compute \big(\tfrac{a}{p_1}\big) and \big(\tfrac{a}{p_2}\big). However, it is easy to compute their product. This is known as the Jacobi symbol:
\left(\frac{a}{N}\right)=\left(\frac{a}{p_1}\right)\left(\frac{a}{p_2}\right)
This can also be efficiently computed using the law of quadratic reciprocity for Jacobi symbols. However, \big(\tfrac{a}{N}\big) can not in all cases tell us whether a is a quadratic residue modulo N or not! More precisely, if \big(\tfrac{a}{N}\big)=-1 then a is necessarily a quadratic non-residue modulo either p_1 or p_2, in which case we are done. But if \big(\tfrac{a}{N}\big)=1 then a is either a quadratic residue modulo both p_1 and p_2, or a quadratic non-residue modulo both p_1 and p_2.
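The "Euclidean-algorithm-like" computation mentioned above can be sketched as a standard binary Jacobi-symbol routine (Python for brevity):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):   # (2/n) = -1 iff n ≡ 3 or 5 (mod 8)
                result = -result
        a, n = n, a               # reciprocity: swap, and flip sign
        if a % 4 == 3 and n % 4 == 3:
            result = -result      # ... iff both are ≡ 3 (mod 4)
        a %= n
    return result if n == 1 else 0  # gcd(a, n) > 1 gives 0

print(jacobi(1001, 9907))  # → -1
```

For prime n the result agrees with the Legendre symbol computed via Euler's criterion, which is a convenient sanity check.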
We cannot distinguish these cases from knowing just that \big(\tfrac{a}{N}\big)=1. This leads to the precise formulation of the quadratic residuosity problem:
Problem: Given integers a and N = p_1 p_2, where p_1 and p_2 are unknown, different primes, and where \big(\tfrac{a}{N}\big)=1, determine whether a is a quadratic residue modulo N.
Distribution of residues[edit]
If a is drawn uniformly at random from the integers 0,\ldots,N-1 with \big(\tfrac{a}{N}\big)=1, is a more often a quadratic residue or a quadratic non-residue modulo N? As mentioned earlier, for exactly half of the choices of a \in \{1,\ldots,p_1-1\} we have \big(\tfrac{a}{p_1}\big)=1, and for the rest we have \big(\tfrac{a}{p_1}\big)=-1. By extension, this also holds for half the choices of a \in \{1,\ldots,N-1\} \setminus p_1\mathbb{Z}. Similarly for p_2. From basic algebra, it follows that this partitions (\mathbb{Z}/N\mathbb{Z})^{\times} into 4 parts of equal size, depending on the signs of \big(\tfrac{a}{p_1}\big) and \big(\tfrac{a}{p_2}\big). The allowed a in the quadratic residuosity problem given as above constitute exactly the two parts corresponding to the cases \big(\tfrac{a}{p_1}\big)=\big(\tfrac{a}{p_2}\big)=1 and \big(\tfrac{a}{p_1}\big)=\big(\tfrac{a}{p_2}\big)=-1. Consequently, exactly half of the possible a are quadratic residues and the remaining are not. The intractability of the quadratic residuosity problem is the basis for the security of the Blum Blum Shub pseudorandom number generator.
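The four-way partition is easy to verify by brute force on a toy modulus (N = 21 = 3 · 7 here; the primes are known only because this is an illustration, not an instance of the hard problem):

```python
from math import gcd

p1, p2 = 3, 7
N = p1 * p2

def legendre(a, p):
    # Euler's criterion; fine for tiny primes.
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

units = [a for a in range(1, N) if gcd(a, N) == 1]
quarters = {}
for a in units:
    quarters.setdefault((legendre(a, p1), legendre(a, p2)), []).append(a)

# Four parts of equal size:
assert sorted(len(v) for v in quarters.values()) == [3, 3, 3, 3]

jacobi_one = sorted(quarters[(1, 1)] + quarters[(-1, -1)])
residues = sorted({a * a % N for a in units})
print(jacobi_one)  # → [1, 4, 5, 16, 17, 20]
print(residues)    # → [1, 4, 16]  (exactly half of jacobi_one)
```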
It is also the basis of the public-key Goldwasser–Micali cryptosystem,[3][4] as well as the identity-based Cocks scheme.
Higher residuosity problem
^ Kaliski, Burt (2011). "Quadratic Residuosity Problem". Encyclopedia of Cryptography and Security: 1003. doi:10.1007/978-1-4419-5906-5_429.
^ Adleman, L. (1980). "On Distinguishing Prime Numbers from Composite Numbers". Proceedings of the 21st IEEE Symposium on the Foundations of Computer Science (FOCS), Syracuse, N.Y. pp. 387–408. doi:10.1109/SFCS.1980.28. ISSN 0272-5428.
^ S. Goldwasser, S. Micali (1982). "Probabilistic encryption and how to play mental poker keeping secret all partial information". Proc. 14th Symposium on Theory of Computing: 365–377. doi:10.1145/800070.802212.
^ S. Goldwasser, S. Micali (1984). "Probabilistic encryption". Journal of Computer and System Sciences. 28 (2): 270–299. doi:10.1016/0022-0000(84)90070-9.
Litepaper | Pendle Documentation

Pendle enables the tokenizing and trading of future yield by leveraging the base lending layers created by prominent DeFi protocols such as Aave and Compound, which have shown incredible growth and community acceptance. With Pendle, future yield can be separated from its underlying asset and traded independently. Pendle is powered by an AMM specifically designed to support tokens with depreciating time value, creating a new type of DeFi derivative. Pendle focuses on developing this layer of yield derivatives: expanding the supported token pairs, creating market depth, and growing the ecosystem.

What does this allow?

This allows for freely tradable on-chain fixed and floating yields of altcoins, creating forward yield curves across tokens. This gives the lending market greater visibility and more maturity. Having on-chain information tradable across multiple time horizons creates a new avenue for yield strategies, such as Harvest vaults, to maximize or protect returns. It allows lenders to lock in their yields and traders to speculate on and gain exposure to changes in yield. The tokenization of future yield also allows for the creation of products with future yield as collateral. Various new trading derivatives, such as rate-swap products, the selling and buying of yield protection, and spread trading, will become feasible. Besides creating a vibrant rates-trading layer across the most relevant lending token pairs, Pendle can also participate in the creation of yield products, providing the ecosystem with a greater selection of strategies to easily express its view of the market.

To allow owners to give up rights to their yield for a fixed period of time, users will deposit their yield token (aLINK for the purposes of this paper) into a smart contract. Two tokens will be issued: YT and OT.

Future Yield Token (YT)

Each YT represents ownership of the future yield of the locked aLINK for a preset number of blocks.
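The deposit-and-split mechanics described above (deposit aLINK, receive OT plus YT, and later need both to withdraw) can be modeled in a few lines. This is an illustrative sketch: `YieldSplitter` and its methods are hypothetical names, not Pendle's actual contract interface.

```python
# Illustrative model of the deposit-and-split flow above. `YieldSplitter`
# and its methods are hypothetical names, not Pendle's actual contracts.

class YieldSplitter:
    def __init__(self):
        self.ot = {}         # ownership-token balances
        self.yt = {}         # future-yield-token balances
        self.deposited = {}  # underlying (e.g. aLINK) locked per user

    def mint(self, user, amount):
        """Deposit the yield token; OT and YT are issued 1:1 against it."""
        self.deposited[user] = self.deposited.get(user, 0) + amount
        self.ot[user] = self.ot.get(user, 0) + amount
        self.yt[user] = self.yt.get(user, 0) + amount

    def redeem(self, user, amount):
        """Before expiry, redeeming the underlying needs both OT and YT."""
        if self.ot.get(user, 0) < amount or self.yt.get(user, 0) < amount:
            raise ValueError("need equal amounts of OT and YT to redeem")
        self.ot[user] -= amount
        self.yt[user] -= amount
        self.deposited[user] -= amount
        return amount

s = YieldSplitter()
s.mint("alice", 100)
s.yt["alice"] -= 40           # alice sells 40 YT on the AMM
try:
    s.redeem("alice", 100)    # fails: only 60 YT remain
except ValueError as err:
    print("blocked:", err)
print(s.redeem("alice", 60))  # 60
```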
YT can be traded in the AMM, and the holder of the token receives aLINK yield as distributed by the base lending platform. At expiry, YT have a value of zero; after expiry, only the OT is needed to redeem the underlying asset. YT differ from each other according to underlying asset and expiry date; tokens with the same underlying asset and expiry are fungible.

Ownership Token (OT)

OT represent the underlying staked asset and are transferable. Only wallets holding OT and its corresponding YT can withdraw the underlying asset deposited.

Automated Market Maker (AMM) for Tokens with Time-Decay

While YT can be traded on existing Uniswap-type AMMs, the constant product invariant formula x · y = k is not ideal for YT, where time is an additional factor. Using a formula that is a pure function of reserves would cause pools to suffer predictable losses to arbitrage as YT maturity approaches. Taking inspiration from the constant product invariant and incorporating a constantly decaying time factor, we have developed a family of AMMs that can be utilized for tokens with time value. A series of sample graphs at different timestamps are shown below. When YT is issued at the start of the contract period, the curve follows that of a standard constant product AMM curve, x · y = k. As time passes, the curve eventually eases into a horizontal line, reflecting how the price of YT decreases as the remaining future interest yield approaches zero.

Chaining Liquidity Pools

A set of new pools per token pair will be created after each expiry, chaining new expiries as old expiries become irrelevant. The setup will look like this: essentially, the overlapping of liquidity pools enables a constant yield curve for the underlying. Liquidity incentives will be utilized to create this chain, and popular pairs can be accorded longer time frames. For example, ETH or WBTC pairs may have demand for a 360-day expiry.
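The time-decay behaviour described in the AMM section above can be illustrated with a toy invariant. The formula x · y**t = k used here is an assumption for this sketch (the litepaper does not publish Pendle's exact invariant); it reduces to the constant-product curve at t = 1 and flattens into the horizontal line x = k as t approaches 0, with the marginal YT price t·x/y decaying to zero.

```python
# Toy time-decaying invariant x * y**t = k (an assumed form for this sketch;
# the litepaper does not give Pendle's exact formula). t is the fraction of
# time remaining to expiry: t = 1 at issuance, t -> 0 at maturity.

def curve_x(y, k, t):
    """Reserve x implied by YT reserve y under x * y**t = k."""
    return k / y ** t

k = 100.0
# As t -> 0 the curve x(y) flattens toward the horizontal line x = k:
for t in (1.0, 0.5, 0.01):
    print(t, [round(curve_x(y, k, t), 1) for y in (1.0, 10.0, 100.0)])

# With reserves held fixed, the marginal YT price -dx/dy = t * x / y
# decays to zero as expiry approaches, mirroring the curve easing flat:
x, y = 50.0, 100.0
print([round(t * x / y, 3) for t in (1.0, 0.5, 0.1)])  # [0.5, 0.25, 0.05]
```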
How Pendle Works

Minting and Trading

A user mints YT and OT through Pendle by depositing aToken; OT (ownership token) and YT (future yield token) are minted. OT represents ownership of the underlying aToken, and YT represents the future yield of the underlying aToken. The minter can sell the YT or add it to the YT liquidity pool in exchange for LP tokens to earn liquidity incentives. YT can be purchased or sold, and after the change of ownership has occurred, the entitlement to subsequent interest revenue tied to the underlying aToken passes to the new YT owner. YT can be traded until its expiry and has no value upon expiry. The OT holder can choose to roll forward to a new expiry and repeat the process, or redeem the underlying asset.

Redeeming the Underlying Asset

Redeeming aToken before contract expiry requires possession of both OT and YT. The OT holder can obtain YT by either purchasing YT from the market or withdrawing YT from the liquidity pool. With both OT and its corresponding YT in the wallet, the minter can redeem the underlying aToken from Pendle.

Increased Exposure

If a trader holds the view that lending rates on LINK will continue to increase, he can simply buy YT on Pendle. The value of YT increases if interest rates rise, and he can choose to sell at any time or hold YT until expiry. He gains exposure in a more capital-efficient manner compared to buying and depositing the underlying asset. Assuming 1 LINK is $1 and its annual yield is 24%, two months of yield are worth roughly 4 cents. Disregarding time value of money, a trader can purchase exposure to two months of aLINK yield for 4 cents per token, instead of purchasing and locking in the actual token at $1, which is 25x more capital efficient.

Yield Lock

An upcoming staking event increases demand for LINK, causing lending rates to increase sharply.
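The capital-efficiency figures in the Increased Exposure example above can be checked with a few lines of arithmetic:

```python
# Worked check of the Increased Exposure figures above: 1 LINK at $1 with a
# 24% annual yield makes two months of yield worth ~4 cents, a 25x multiple.

token_price = 1.00      # $ per LINK (from the example)
annual_yield = 0.24     # 24% p.a.
months = 2

yt_cost = token_price * annual_yield * months / 12
multiple = token_price / yt_cost

print(f"two months of yield ~ ${yt_cost:.2f}")  # $0.04
print(f"capital efficiency: {multiple:.0f}x")   # 25x
```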
Lender A, holding Aave's aLINK, anticipates that demand will begin to fall after the event, leading to a decrease in rates in two weeks' time. He can act on this view and lock in his yields with Pendle. To do this, he deposits his aLINK and, in turn, receives both OT and YT. Next, he sells his YT (Uniswap-style) on Pendle and receives USDT in return for giving up ownership of his future aLINK yield. Lender A can now repeat the process to lock in more yield or deploy his funds elsewhere.

Interest Rate Oracle

Using the price of YT traded and the time left to maturity, we can derive the implied yield the market is attributing to the underlying asset. This implied yield may be utilized in various forms, such as on-chain settlement of interest rate derivatives, input as part of an oracle service, or an indication of the expected path of interest rates.

Spread Trading

As the forward yield curve is created, traders can express their views on the market by trading YT with different maturities or different underlying assets. Pendle can be involved by creating products that provide 1-click exposure to such structures. In future development, fees could be charged on such products and funnelled to the treasury to be managed by PENDLE holders (see below).

On-Chain Yield Strategies

Given that prices of tokens are on-chain, a variety of strategies can be deployed for arbitrage and automated buying and selling of yield, as we see in the Yearn vaults.

PENDLE Token

Pendle's native token will be utilized for governance. The landscape is changing rapidly and we see many innovations in value accrual. The team is committed to, but not limited to, enabling the following token governance: creation of new market pairs. As protocols like Aave continue to innovate in the space, we can continue to utilize these building blocks and grow the pie. Trading on credit will be possible in the near future. DeFi ecosystems are also growing on emerging chains such as Polkadot and Cosmos.
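The implied-yield derivation sketched in the Interest Rate Oracle section above can be written down directly. The annualization below is a simple back-of-envelope formula assumed for illustration, not Pendle's published oracle math:

```python
# Back-of-envelope implied yield (an illustrative formula assumed here, not
# Pendle's published oracle math): a YT entitling its holder to the yield on
# one unit of underlying for `years_left` years, trading at `yt_price`.

def implied_annual_yield(yt_price, underlying_price, years_left):
    """Simple (non-compounded) annualized yield implied by the YT price."""
    return yt_price / (underlying_price * years_left)

# A YT covering 2 months of aLINK yield trading at $0.04, underlying at $1,
# implies a 24% annualized rate, matching the earlier example:
rate = implied_annual_yield(0.04, 1.00, 2 / 12)
print(f"{rate:.0%}")  # 24%
```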
There will be opportunities to tokenize the yield of the most popular tokens and allow for cross-chain yield exposure and trading. We are closely following the progress on cross-chain possibilities and believe that as the DeFi ecosystem expands, the emerging technologies will be able to support a vibrant interest rate trading layer across multiple chains.
Malonate-semialdehyde dehydrogenase (acetylating) - Wikipedia

In enzymology, a malonate-semialdehyde dehydrogenase (acetylating) (EC 1.2.1.18) is an enzyme that catalyzes the chemical reaction

3-oxopropanoate + CoA + NAD(P)+ ⇌ acetyl-CoA + CO2 + NAD(P)H

The 4 substrates of this enzyme are 3-oxopropanoate, CoA, NAD+, and NADP+, whereas its 4 products are acetyl-CoA, CO2, NADH, and NADPH. This enzyme belongs to the family of oxidoreductases, specifically those acting on the aldehyde or oxo group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3-oxopropanoate:NAD(P)+ oxidoreductase (decarboxylating, CoA-acetylating). This enzyme is also called malonic semialdehyde oxidative decarboxylase. This enzyme participates in 4 metabolic pathways: inositol metabolism, alanine and aspartate metabolism, beta-alanine metabolism, and propanoate metabolism.
Noetherian ring - Citizendium

In algebra, a Noetherian ring is a ring with a condition on the lattice of ideals.

Let A be a ring. The following conditions are equivalent:

A satisfies the ascending chain condition on the set of its ideals: that is, there is no infinite strictly ascending chain of ideals I_0 ⊊ I_1 ⊊ I_2 ⊊ … in A.
Every nonempty set of ideals of A has a maximal element when considered as a partially ordered set with respect to inclusion.

When the above conditions are satisfied, A is said to be Noetherian. Alternatively, the ring A is Noetherian if it is a Noetherian module when regarded as a module over itself. A Noetherian domain is a Noetherian ring which is also an integral domain.

A field is Noetherian, since its only ideals are (0) and (1).
A principal ideal domain is Noetherian, since every ideal is generated by a single element; in particular, the ring of integers Z and the polynomial ring over a field are Noetherian.
The ring of continuous functions from R to R is not Noetherian: the ideals I_n = {f : f(x) = 0 for all x ≥ n} form an infinite strictly ascending chain I_1 ⊊ I_2 ⊊ I_3 ⊊ ⋯.

Useful Criteria

If A is a Noetherian ring, then we have the following useful results:

A/I is Noetherian for any ideal I.
The localization of A by a multiplicative subset S is again Noetherian.
Hilbert's Basis Theorem: the polynomial ring A[X] is Noetherian (hence so is A[X_1, …, X_n]).
Creating Discriminant Analysis Model - MATLAB & Simulink - MathWorks India

Each class (Y) generates data (X) using a multivariate normal distribution. In other words, the model assumes X has a Gaussian mixture distribution (gmdistribution). For linear discriminant analysis, the model has the same covariance matrix for each class; only the means vary. Under this modeling assumption, fitcdiscr infers the mean and covariance parameters of each class. For linear discriminant analysis, it computes the sample mean of each class. Then it computes the sample covariance by first subtracting the sample mean of each class from the observations of that class, and taking the empirical covariance matrix of the result. For quadratic discriminant analysis, it computes the sample mean of each class. Then it computes the sample covariances by first subtracting the sample mean of each class from the observations of that class, and taking the empirical covariance matrix of each class.

The fit method does not use prior probabilities or costs for fitting. fitcdiscr constructs weighted classifiers using the following scheme. Suppose M is an N-by-K class membership matrix:

Mnk = 1 if observation n is from class k
Mnk = 0 otherwise.

The estimate of the class mean for unweighted data is

\hat{\mu}_k = \frac{\sum_{n=1}^{N} M_{nk} x_n}{\sum_{n=1}^{N} M_{nk}}.

For weighted data with positive weights wn, the natural generalization is

\hat{\mu}_k = \frac{\sum_{n=1}^{N} M_{nk} w_n x_n}{\sum_{n=1}^{N} M_{nk} w_n}.

The unbiased estimate of the pooled-in covariance matrix for unweighted data is

\hat{\Sigma} = \frac{\sum_{n=1}^{N} \sum_{k=1}^{K} M_{nk} (x_n - \hat{\mu}_k)(x_n - \hat{\mu}_k)^{T}}{N - K}.

For quadratic discriminant analysis, fitcdiscr uses K = 1.
For weighted data, assuming the weights sum to 1, the unbiased estimate of the pooled-in covariance matrix is

\hat{\Sigma} = \frac{\sum_{n=1}^{N} \sum_{k=1}^{K} M_{nk} w_n (x_n - \hat{\mu}_k)(x_n - \hat{\mu}_k)^{T}}{1 - \sum_{k=1}^{K} \frac{W_k^{(2)}}{W_k}},

where W_k = \sum_{n=1}^{N} M_{nk} w_n is the sum of the weights for class k and W_k^{(2)} = \sum_{n=1}^{N} M_{nk} w_n^2 is the sum of squared weights per class.

See Also: ClassificationDiscriminant | gmdistribution
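The weighted estimators above translate directly into code. This NumPy sketch follows the printed formulas, not MATLAB's internal implementation; `weighted_lda_fit` is a name chosen for this example:

```python
import numpy as np

# Sketch of the weighted estimators above, following the printed formulas
# (not MATLAB's internal implementation). M is the N-by-K membership matrix,
# w the positive observation weights, normalized here to sum to 1.

def weighted_lda_fit(X, M, w):
    w = w / w.sum()
    Wk = M.T @ w                  # per-class weight sums W_k
    Wk2 = M.T @ (w ** 2)          # per-class squared-weight sums W_k^(2)
    mu = (M * w[:, None]).T @ X / Wk[:, None]   # weighted class means
    S = np.zeros((X.shape[1], X.shape[1]))
    for k in range(M.shape[1]):   # weighted pooled scatter
        d = (X - mu[k]) * np.sqrt(M[:, k] * w)[:, None]
        S += d.T @ d
    # Bias correction 1 - sum_k W_k^(2) / W_k from the weighted formula:
    return mu, S / (1.0 - (Wk2 / Wk).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 2))
M = np.zeros((12, 2))
M[:6, 0] = 1
M[6:, 1] = 1
w = np.full(12, 1 / 12)
mu, Sigma = weighted_lda_fit(X, M, w)
print(mu.shape, Sigma.shape)  # (2, 2) (2, 2)
```

With equal weights w_n = 1/N, the correction term equals K/N, so the estimate reduces to the familiar unweighted divisor N − K.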
Yield vs. Return: What's the Difference?

Yield and return are two different ways of measuring the profitability of an investment over a set period of time, often annually. The yield is the income the investment returns over time, typically expressed as a percentage, while the return is the amount that was gained or lost on an investment over time, usually expressed as a dollar value.

Yield and return both measure an investment's financial value over a set period of time, but do so using different metrics. Yield is the amount an investment earns during a time period, usually reflected as a percentage. Return is how much an investment earns or loses over time, reflected as the difference in the holding's dollar value. The yield is forward-looking and the return is backward-looking.

Yield is the income returned on an investment, such as the interest received from holding a security. The yield is usually expressed as an annual percentage rate based on the investment's cost, current market value, or face value. Yield may be considered known or anticipated depending on the security in question, as certain securities may experience fluctuations in value. Yield is forward-looking. Furthermore, it measures the income, such as interest and dividends, that an investment earns and ignores capital gains. This income is taken in the context of a specific period and is then annualized with the assumption that the interest or dividends will continue to be received at the same rate.

A bond can have multiple yield options depending on the exact nature of the investment. The coupon is the bond interest rate fixed at issuance, and the coupon rate is the yield paid by a fixed-income security. The coupon rate is the annual coupon payment paid by the issuer relative to the bond's face or par value. The current yield is the bond interest rate as a percentage of the current price of the bond.
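The coupon rate and current yield just defined differ only in the denominator. A short sketch with a hypothetical bond (the numbers are invented for illustration):

```python
# Hypothetical bond (numbers invented for illustration) showing the two
# yield measures just defined: coupon rate vs. current yield.

face_value = 1000.0     # par value
market_price = 950.0    # current price of the bond
annual_coupon = 60.0    # fixed coupon payment set at issuance

coupon_rate = annual_coupon / face_value      # relative to face value
current_yield = annual_coupon / market_price  # relative to current price

print(f"coupon rate:   {coupon_rate:.2%}")    # 6.00%
print(f"current yield: {current_yield:.2%}")  # 6.32%
```

A bond trading below par, as here, has a current yield above its coupon rate; above par, the reverse.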
The yield to maturity is an estimate of what an investor will receive if the bond is held to its maturity date. Return is the financial gain or loss on an investment and is typically expressed as the change in the dollar value of an investment over time. Return is also referred to as total return and expresses what an investor earned from an investment during a certain period. Total return includes interest, dividends, and capital gain, such as an increase in the share price. In other words, a return is retrospective or backward-looking. For example, if an investor bought a stock for $50 and sold it for $60, the return would be $10. If the company paid a dividend of $1 during the time the stock was held, the total return would be $11, including the capital gain and dividend. A positive return is a profit on an investment, and a negative return is a loss on an investment. Risk is an important component of the yield paid on an investment. The higher the risk, the higher the associated yield potential. Some investments are less risky than others. For example, U.S. Treasuries carry less risk than stocks. Since stocks are considered to carry a higher risk than bonds, stocks typically have a higher yield potential to compensate investors for the added risk. The rate of return is a metric that can be used to measure a variety of financial instruments, while yield refers to a narrower group of investments—namely, those that produce interest or dividends. Rate of return and yield both describe the performance of investments over a set period (typically one year), but they have subtle and sometimes important differences. The rate of return is a specific way of expressing the total return on an investment that shows the percentage increase over the initial investment cost. Yield shows how much income has been returned from an investment based on initial cost, but it does not include capital gains in its calculation. 
Rate of return can be applied to nearly any investment, while yield is somewhat more limited because not all investments produce interest or dividends. Mutual funds, stocks, and bonds are three common types of securities that have both rates of return and yields. The formula for rate of return is:

\frac{\text{Current Price} - \text{Original Price}}{\text{Original Price}} \times 100

In our earlier example, if a stock is bought for $50 and sold for $60, your return would be $10 on the investment. Adding the dividend of $1 during the time the stock was held, the total return is $11, including the capital gain and dividend. The rate of return is:

\frac{\$60\ (\text{Current Price}) + \$1\ (\text{Dividend}) - \$50\ (\text{Original Price})}{\$50} \times 100 = 22\%\ \text{Rate of Return}

Consider a mutual fund, for example. Its rate of return can be calculated by taking the total interest and dividends paid and combining them with the current share price, then dividing that figure by the initial investment cost. The yield would refer to the interest and dividend income earned on the fund but not the increase—or decrease—in the share price. There are several different types of yield for each bond: coupon rate, current yield, and yield to maturity. Yield can also be less precise than the rate of return since it is often forward-looking, whereas the rate of return is backward-looking. Many types of annual yields are based on future assumptions that current income will continue to be earned at the same rate.
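The worked example above can be reproduced in a few lines:

```python
# The worked example above in code: bought at $50, sold at $60, $1 dividend.

original_price = 50.0
current_price = 60.0
dividend = 1.0

total_return = (current_price - original_price) + dividend
rate_of_return = (current_price + dividend - original_price) / original_price * 100

print(f"total return: ${total_return:.0f}")      # $11
print(f"rate of return: {rate_of_return:.0f}%")  # 22%
```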
The vapour density of completely dissociated NH4Cl would be:

A. Determined by the amount of solid NH4Cl used in the experiment
B. Triple that of NH4Cl
C. Half that of NH4Cl
D. Slightly less than half that of NH4Cl
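A worked check (standard chemistry, not given in the excerpt): NH4Cl (molar mass ≈ 53.5 g/mol) dissociates into NH3 + HCl, doubling the number of gas moles while keeping the total mass, so the vapour density of the fully dissociated vapour is exactly half that of undissociated NH4Cl, consistent with option C:

```python
# Worked check: complete dissociation NH4Cl -> NH3 + HCl doubles the number
# of gas moles at the same total mass, so the average molar mass (and hence
# the vapour density, M / 2 relative to H2) is exactly halved.

M_NH4Cl = 14.0 + 4 * 1.0 + 35.5      # 53.5 g/mol
vd_undissociated = M_NH4Cl / 2       # 26.75

avg_molar_mass = M_NH4Cl / 2         # 1 mol solid -> 2 mol gas
vd_dissociated = avg_molar_mass / 2  # 13.375

print(vd_undissociated, vd_dissociated, vd_undissociated / vd_dissociated)
# 26.75 13.375 2.0
```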
Human-to-human transmission of SARS‑CoV‑2 was confirmed on 20 January 2020 during the COVID-19 pandemic.[15][38][39][40] Transmission was initially assumed to occur primarily via respiratory droplets from coughs and sneezes within a range of about 1.8 metres (6 ft).[41][42] Laser light scattering experiments suggest that speaking is an additional mode of transmission[43][44] and a far-reaching[45] one, indoors, with little air flow.[46][47] Other studies have suggested that the virus may be airborne as well, with aerosols potentially being able to transmit the virus.[48][49][50] During human-to-human transmission, between 200 and 800 infectious SARS‑CoV‑2 virions are thought to initiate a new infection.[51][52][53] If confirmed, aerosol transmission has biosafety implications because a major concern associated with the risk of working with emerging viruses in the laboratory is the generation of aerosols from various laboratory activities which are not immediately recognizable and may affect other scientific personnel.[54] Indirect contact via contaminated surfaces is another possible cause of infection.[55] Preliminary research indicates that the virus may remain viable on plastic (polypropylene) and stainless steel (AISI 304) for up to three days, but it does not survive on cardboard for more than one day or on copper for more than four hours.[10] The virus is inactivated by soap, which destabilizes its lipid bilayer.[56][57] Viral RNA has also been found in stool samples and semen from infected individuals.[58][59]

However, an epidemiological model of the beginning of the outbreak in China suggested that "pre-symptomatic shedding may be typical among documented infections" and that subclinical infections may have been the source of a majority of infections.[72] That may explain how out of 217 on board a cruise liner that docked at Montevideo, only 24 of 128 who tested positive for viral RNA showed symptoms.[73]
Similarly, a study of ninety-four patients hospitalized in January and February 2020 estimated patients began shedding virus two to three days before symptoms appear and that "a substantial proportion of transmission probably occurred before first symptoms in the index case".[74] The authors later published a correction showing that shedding began earlier than first estimated, four to five days before symptoms appear.[75]

Research into the natural reservoir of the virus that caused the 2002–2004 SARS outbreak has resulted in the discovery of many SARS-like bat coronaviruses, most originating in horseshoe bats. The closest matches by far, published in Nature in February 2022, were the viruses BANAL-52 (96.8% resemblance to SARS‑CoV‑2), BANAL-103 and BANAL-236, collected from three different species of bats in Feuang, Laos.[90][91][92] An earlier source published in February 2020 identified the virus RaTG13, collected from bats in Mojiang, Yunnan, China, as the closest to SARS‑CoV‑2, with 96.1% resemblance.[17][93] None of the above are its direct ancestor.[94]

Although the role of pangolins as an intermediate host was initially posited (a study published in July 2020 suggested that pangolins are an intermediate host of SARS‑CoV‑2-like coronaviruses[98][99]), subsequent studies have not substantiated their contribution to the spillover.[84] Evidence against this hypothesis includes the fact that pangolin virus samples are too distant from SARS-CoV-2: isolates obtained from pangolins seized in Guangdong were only 92% identical in sequence to the SARS‑CoV‑2 genome (matches above 90 percent may sound high, but in genomic terms it is a wide evolutionary gap[100]).
In addition, despite similarities in a few critical amino acids,[101] pangolin virus samples exhibit poor binding to the human ACE2 receptor.[102] Viral genetic sequence data can provide critical information about whether viruses separated by time and space are likely to be epidemiologically linked.[119] With a sufficient number of sequenced genomes, it is possible to reconstruct a phylogenetic tree of the mutation history of a family of viruses. By 12 January 2020, five genomes of SARS‑CoV‑2 had been isolated from Wuhan and reported by the Chinese Center for Disease Control and Prevention (CCDC) and other institutions;[109][120] the number of genomes increased to 42 by 30 January 2020.[121] A phylogenetic analysis of those samples showed they were "highly related with at most seven mutations relative to a common ancestor", implying that the first human infection occurred in November or December 2019.[121] Examination of the topology of the phylogenetic tree at the start of the pandemic also found high similarities between human isolates.[122] As of 21 August 2021,[update] 3,422 SARS‑CoV‑2 genomes, belonging to 19 strains, sampled on all continents except Antarctica were publicly available.[123] Pangolin SARSr-CoV-GX, 85.3% to SARS-CoV-2, Manis javanica, smuggled from Southeast Asia[134] Pangolin SARSr-CoV-GD, 90.1% to SARS-CoV-2, Manis javanica, smuggled from Southeast Asia[135] (Bat) RaTG13, 96.1% to SARS-CoV-2, Rhinolophus affinis, Mojiang, Yunnan[138] (Bat) BANAL-52, 96.8% to SARS-CoV-2, Rhinolophus malayanus, Vientiane, Laos[139] Each SARS-CoV-2 virion is 60–140 nanometres (2.4×10−6–5.5×10−6 in) in diameter;[104][81] its mass within the global human populace has been estimated as being between 0.1 and 10 kilograms.[144] Like other coronaviruses, SARS-CoV-2 has four structural proteins, known as the S (spike), E (envelope), M (membrane), and N (nucleocapsid) proteins; the N protein holds the RNA genome, and the S, E, and M proteins together create the viral 
envelope.[145] Coronavirus S proteins are glycoproteins and also type I membrane proteins (membranes containing a single transmembrane domain oriented on the extracellular side).[112] They are divided into two functional parts (S1 and S2).[103] In SARS-CoV-2, the spike protein, which has been imaged at the atomic level using cryogenic electron microscopy,[146][147] is the protein responsible for allowing the virus to attach to and fuse with the membrane of a host cell;[145] specifically, its S1 subunit catalyzes attachment, the S2 subunit fusion.[148] As of early 2022, about 7 million SARS-CoV-2 genomes had been sequenced and deposited into public databases, and another 800,000 or so were added each month.[149]

Very few drugs are known to effectively inhibit SARS‑CoV‑2. Masitinib is a clinically safe drug that was recently found to inhibit the virus's main protease, 3CLpro, and showed a >200-fold reduction in viral titers in the lungs and noses of mice. However, it is not approved for the treatment of COVID-19 in humans as of August 2021.[165][needs update] In December 2021, the United States granted emergency use authorization to nirmatrelvir/ritonavir for the treatment of the virus;[166] the European Union, United Kingdom, and Canada followed suit with full authorization soon after.[167][168][169] One study found that nirmatrelvir/ritonavir reduced the risk of hospitalization and death by 88%.[170]

A meta-analysis from November 2020 estimated the basic reproduction number (R0) of the virus to be between 2.39 and 3.44.[20] This means each infection from the virus is expected to result in 2.39 to 3.44 new infections when no members of the community are immune and no preventive measures are taken. The reproduction number may be higher in densely populated conditions such as those found on cruise ships.[172] Human behavior affects the R0 value, and hence estimates of R0 differ between different countries, cultures, and social norms.
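To give a feel for the R0 range above, here is a naive branching-process sketch (fully susceptible population, no interventions; a simplifying assumption for illustration, not an epidemiological model):

```python
# Naive branching-process illustration of the meta-analysis range above
# (R0 = 2.39 to 3.44): expected new infections per generation in a fully
# susceptible population with no interventions (a simplifying assumption).

for r0 in (2.39, 3.44):
    generations = [round(r0 ** g) for g in range(1, 6)]
    print(f"R0 = {r0}: expected new cases, generations 1-5: {generations}")
```

Even the gap between the low and high ends of the estimated range compounds quickly over a handful of generations, which is why small differences in R0 matter so much.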
For instance, one study found relatively low R0 (~3.5) in Sweden, Belgium and the Netherlands, while Spain and the US had significantly higher R0 values (5.9 to 6.4, respectively).[173] There have been about 96,000 confirmed cases of infection in mainland China.[177] While the proportion of infections that result in confirmed cases or progress to diagnosable disease remains unclear,[178] one mathematical model estimated that 75,815 people were infected on 25 January 2020 in Wuhan alone, at a time when the number of confirmed cases worldwide was only 2,015.[179] Before 24 February 2020, over 95% of all deaths from COVID-19 worldwide had occurred in Hubei province, where Wuhan is located.[180][181] As of 22 May 2022, the percentage had decreased to 0.051%.[177] As of 22 May 2022, there have been 527,303,437 total confirmed cases of SARS‑CoV‑2 infection in the ongoing pandemic.[177] The total number of deaths attributed to the virus is 6,288,954.[177] ^ "How Coronavirus Spreads Archived 3 April 2020 at the Wayback Machine", Centers for Disease Control and Prevention, Retrieved 14 May 2021. ^ "Coronavirus disease (COVID-19): How is it transmitted? Archived 15 October 2020 at the Wayback Machine", World Health Organization ^ He, Xi; Lau, Eric H. Y.; Wu, Peng; Deng, Xilong; Wang, Jian; Hao, Xinxin; Lau, Yiu Chung; Wong, Jessica Y.; Guan, Yujuan; Tan, Xinghua; Mo, Xiaoneng; Chen, Yanqing; Liao, Baolin; Chen, Weilie; Hu, Fengyu; Zhang, Qing; Zhong, Mingqiu; Wu, Yanrong; Zhao, Lingzhai; Zhang, Fuchun; Cowling, Benjamin J.; Li, Fang; Leung, Gabriel M. (September 2020). "Author Correction: Temporal dynamics in viral shedding and transmissibility of COVID-19". Nature Medicine. 26 (9): 1491–1493. doi:10.1038/s41591-020-1016-z. PMC 7413015. PMID 32770170. ^ a b c d e Report of the WHO-China Joint Mission on Coronavirus Disease 2019 (COVID-19) (PDF) (Report). World Health Organization (WHO). 24 February 2020. Archived (PDF) from the original on 29 February 2020. 
Retrieved 5 March 2020. ^ Temmam, Sarah; Vongphayloth, Khamsing; Salazar, Eduard Baquero; Munier, Sandie; Bonomi, Max; Régnault, Béatrice; Douangboubpha, Bounsavane; Karami, Yasaman; Chretien, Delphine; Sanamxay, Daosavanh; Xayaphet, Vilakhan (February 2022). "Bat coronaviruses related to SARS-CoV-2 and infectious for human cells". Nature. ^ Mallapaty, Smriti (24 September 2021). "Closest known relatives of virus behind COVID-19 found in Laos". Nature. 597 (7878): 603. doi:10.1038/d41586-021-02596-2. ^ "Newly Discovered Bat Viruses Give Hints to Covid's Origins". New York Times. 14 October 2021. ^ Sokhansanj, Bahrad A.; Rosen, Gail L. (26 April 2022). Gaglia, Marta M. (ed.). "Mapping Data to Deep Understanding: Making the Most of the Deluge of SARS-CoV-2 Genome Sequences". mSystems. 7 (2): e00035–22. doi:10.1128/msystems.00035-22. ISSN 2379-5077. ^ Fact sheet for healthcare providers: Emergency Use Authorization for Paxlovid (PDF) (Technical report). Pfizer. 22 December 2021. LAB-1492-0.8. Archived from the original on 23 December 2021. ^ "Paxlovid EPAR". European Medicines Agency (EMA). 24 January 2022. Retrieved 3 February 2022. Text was copied from this source which is copyright European Medicines Agency. Reproduction is authorized provided the source is acknowledged. ^ "Oral COVID-19 antiviral, Paxlovid, approved by UK regulator" (Press release). Medicines and Healthcare products Regulatory Agency. 31 December 2021. ^ "Health Canada authorizes Paxlovid for patients with mild to moderate COVID-19 at high risk of developing serious disease". Health Canada (Press release). 17 January 2022. Retrieved 24 April 2022. ^ "FDA Authorizes First Oral Antiviral for Treatment of COVID-19". U.S. Food and Drug Administration (FDA) (Press release). 22 December 2021. Retrieved 22 December 2021. This article incorporates text from this source, which is in the public domain.
^ a b c d "COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU)". ArcGIS. Johns Hopkins University. Retrieved 22 May 2022.