Delete-1 Statistics - MATLAB & Simulink - MathWorks France

Determine Influential Observations Using CovRatio
Determine Observations Influential on Coefficients Using Dfbetas
Determine Observations Influential on Fitted Response Using Dffits
Compute and Examine Delete-1 Variance Values

The delete-1 change in covariance (CovRatio) identifies observations that are influential in the regression fit. An influential observation is one whose exclusion from the model might significantly alter the regression function. Values of CovRatio larger than 1 + 3*p/n or smaller than 1 - 3*p/n indicate influential points, where p is the number of regression coefficients and n is the number of observations. The CovRatio statistic is the ratio of the determinant of the coefficient covariance matrix with observation i deleted to the determinant of the covariance matrix for the full model:

\text{CovRatio}=\frac{\det\left\{MSE_{(i)}\left[X^{\prime}(i)X(i)\right]^{-1}\right\}}{\det\left[MSE\left(X^{\prime}X\right)^{-1}\right]}.

CovRatio is an n-by-1 vector in the Diagnostics table of the fitted LinearModel object. Each element is the ratio of the generalized variance of the estimated coefficients when the corresponding observation is deleted to the generalized variance of the coefficients using all the data. Display the CovRatio values by indexing into the property using dot notation:

mdl.Diagnostics.CovRatio

Plot the delete-1 change in covariance using

plotDiagnostics(mdl,'CovRatio')

For details, see the plotDiagnostics method of the LinearModel class.

This example shows how to use the CovRatio statistics to determine the influential points in data. Load the sample data and define the response and predictor variables. Plot the CovRatio statistics. For this example, the threshold limits are 1 + 3*5/100 = 1.15 and 1 - 3*5/100 = 0.85. There are a few points beyond the limits, which might be influential points.
Find the observations that are beyond the limits.

find((mdl.Diagnostics.CovRatio)>1.15|(mdl.Diagnostics.CovRatio)<0.85)

The sign of a delete-1 scaled difference in coefficient estimates (Dfbetas) for coefficient j and observation i indicates whether that observation causes an increase or decrease in the estimate of the regression coefficient. The absolute value of a Dfbetas value indicates the magnitude of the difference relative to the estimated standard deviation of the regression coefficient. A Dfbetas value larger than 3/sqrt(n) in absolute value indicates that the observation has a large influence on the corresponding coefficient. Dfbetas for coefficient j and observation i is the ratio of two quantities: the difference between the estimate of coefficient j using all observations and the estimate obtained by removing observation i, and the standard error of the coefficient estimate obtained by removing observation i:

\text{Dfbetas}_{ij}=\frac{b_{j}-b_{j(i)}}{\sqrt{MSE_{(i)}}\left(1-h_{ii}\right)},

where b_j is the estimate for coefficient j, b_{j(i)} is the estimate for coefficient j with observation i removed, MSE_{(i)} is the mean squared error of the regression fit with observation i removed, and h_{ii} is the leverage value for observation i. Dfbetas is an n-by-p matrix in the Diagnostics table of the fitted LinearModel object. Each element of Dfbetas is the Dfbetas value for the corresponding coefficient obtained by removing the corresponding observation. After obtaining a fitted model, say mdl, using fitlm or stepwiselm, you can obtain the Dfbetas values as an n-by-p matrix by indexing into the property using dot notation:

mdl.Diagnostics.Dfbetas

This example shows how to determine the observations that have a large influence on coefficients using Dfbetas. Load the sample data and define the response and independent variables. Find the Dfbetas values that are high in absolute value.
[row,col] = find(abs(mdl.Diagnostics.Dfbetas)>3/sqrt(100));
disp([row col])

The delete-1 scaled change in fitted values (Dffits) shows the influence of each observation on the fitted response values. Dffits values with an absolute value larger than 2*sqrt(p/n) might be influential. Dffits for observation i is

\text{Dffits}_{i}=sr_{i}\sqrt{\frac{h_{ii}}{1-h_{ii}}},

where sr_i is the studentized residual, and h_ii is the leverage value of the fitted LinearModel object. Dffits is an n-by-1 column vector in the Diagnostics table of the fitted LinearModel object. Each element in Dffits is the change in the fitted value caused by deleting the corresponding observation, scaled by the standard error. Display the Dffits values by indexing into the property using dot notation:

mdl.Diagnostics.Dffits

Plot the delete-1 scaled changes in fitted values using

plotDiagnostics(mdl,'Dffits')

For details, see the plotDiagnostics method of the LinearModel class.

This example shows how to determine the observations that are influential on the fitted response values using Dffits values. Load the sample data and define the response and independent variables. Plot the Dffits values. The influence threshold for the absolute value of Dffits in this example is 2*sqrt(5/100) ≈ 0.45. Again, there are some observations with Dffits values beyond the recommended limits. Find the Dffits values that are large in absolute value.

find(abs(mdl.Diagnostics.Dffits)>2*sqrt(5/100))

The delete-1 variance (S2_i) shows how the mean squared error changes when an observation is removed from the data set. You can compare the S2_i values with the value of the mean squared error. S2_i is a set of residual variance estimates obtained by deleting each observation in turn. The S2_i value for observation i is

S2\_i = MSE_{(i)}=\frac{\sum_{j\ne i}^{n}\left[y_{j}-\hat{y}_{j(i)}\right]^{2}}{n-p-1},

where y_j is the jth observed response value.
S2_i is an n-by-1 vector in the Diagnostics table of the fitted LinearModel object. Each element in S2_i is the mean squared error of the regression obtained by deleting the corresponding observation. Display the S2_i vector by indexing into the property using dot notation:

mdl.Diagnostics.S2_i

Plot the delete-1 variance values using

plotDiagnostics(mdl,'S2_i')

This example shows how to compute and plot S2_i values to examine the change in the mean squared error when an observation is removed from the data. Load the sample data and define the response and independent variables. Display the MSE value for the model. Plot the S2_i values. This plot makes it easy to compare the S2_i values with the MSE value of 23.114, indicated by the horizontal dashed line. You can see how deleting one observation changes the error variance.

See also: LinearModel | fitlm | stepwiselm | plotDiagnostics | plotResiduals
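The MATLAB examples above rely on a fitted mdl object. As a language-agnostic check of the same delete-1 definitions, here is a hedged NumPy sketch that literally refits the regression with each observation removed. The toy data, function name, and outlier position are invented for illustration; this is not the fitlm implementation.

```python
import numpy as np

def delete1_diagnostics(X, y):
    """Leave-one-out diagnostics following the delete-1 definitions above.

    Returns S2_i (delete-1 variance), Dffits, CovRatio, and Dfbetas,
    each computed by refitting with one observation removed at a time.
    """
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)    # leverages h_ii
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta                               # full-model residuals
    mse = e @ e / (n - p)
    denom = np.linalg.det(mse * XtX_inv)           # full-model generalized variance
    s2_i = np.empty(n)
    dffits = np.empty(n)
    covratio = np.empty(n)
    dfbetas = np.empty((n, p))
    for i in range(n):
        Xi = np.delete(X, i, axis=0)
        yi = np.delete(y, i)
        bi = np.linalg.lstsq(Xi, yi, rcond=None)[0]
        ri = yi - Xi @ bi
        s2_i[i] = ri @ ri / (n - 1 - p)            # MSE with observation i removed
        sr = e[i] / np.sqrt(s2_i[i] * (1 - h[i]))  # studentized residual
        dffits[i] = sr * np.sqrt(h[i] / (1 - h[i]))
        covratio[i] = np.linalg.det(s2_i[i] * np.linalg.inv(Xi.T @ Xi)) / denom
        dfbetas[i] = (beta - bi) / (np.sqrt(s2_i[i]) * (1 - h[i]))  # scaling as in the text
    return s2_i, dffits, covratio, dfbetas

# Invented toy data: a clean linear trend plus one gross outlier at index 10.
t = np.arange(21.0)
X = np.column_stack([np.ones_like(t), t])          # intercept + one predictor
y = 1.0 + 2.0 * t + 0.5 * np.sin(t)
y[10] += 30.0
s2_i, dffits, covratio, dfbetas = delete1_diagnostics(X, y)

n, p = X.shape
influential = (covratio > 1 + 3 * p / n) | (covratio < 1 - 3 * p / n)
```

On this data the injected outlier is flagged by all four diagnostics: its CovRatio falls below 1 - 3*p/n, its |Dffits| is the largest in the sample, and its S2_i is far below the full-model MSE. A production implementation would typically use closed-form leverage identities rather than n separate refits.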
Ordinal number: 2nd. Factorization (Gaussian integers): (1+i)(1-i). Greek numeral: β′. Roman numeral (Unicode): Ⅱ, ⅱ. Ge'ez: ፪. Chinese numerals: 二, 弍, 贰, 貳. Hebrew: ב (Bet). Korean: 이. Prefixes: duo-, bi- (from Latin), twi- (Old English).

2 (two) is a number, numeral, and glyph that represents the number. It is the natural number[1] that follows 1 and precedes 3. It is an integer and a cardinal number, that is, a number that is used for counting.[2] In addition, it is classified as a real number,[3] distinguishing it from imaginary numbers.

The number two also represents a pair or duality. In biology, sexual reproduction requires the mating of two organisms, male and female, or the uniting of materials provided by male and female organisms. Eastern philosophy points to such dual characteristics as masculinity and femininity (in the animate world), or positivity and negativity (in the inanimate world), and categorizes them as Yang and Yin (in Chinese). Although some consider good and evil an example of Yang-Yin characteristics, there is a fundamental difference: evil (such as hatred and violence) flourishes by destroying goodness (such as love and kindness), whereas male and female flourish and multiply their kind by uniting in harmony and love.

The glyph currently used in the Western world to represent the number 2 traces its roots back to the Brahmin Indians, who wrote 2 as two horizontal lines. (It is still written that way in modern Chinese and Japanese.) The Gupta script rotated the two lines 45 degrees, making them diagonal, and sometimes also made the top line shorter, curving its bottom end toward the center of the bottom line. Apparently for speed, the Nagari started making the top line more like a curve connecting to the bottom line. The Ghubar Arabs made the bottom line completely vertical, so that the glyph looked like a dotless closing question mark.
Restoring the bottom line to its original horizontal position, while keeping the top line as a curve that connects to the bottom line, leads to our modern glyph.[4] In fonts with text figures, 2 usually has the same height as a lowercase x.

Two has many properties in mathematics.[5] An integer is called even if it is divisible by 2. For integers written in a numeral system based on an even number, such as decimal and hexadecimal, divisibility by 2 is easily tested by merely looking at the ones-place digit: if it is even, then the whole number is even. In particular, when written in the decimal system, all multiples of 2 end in 0, 2, 4, 6, or 8.

Two is the smallest and first prime number, and the only even one. (For this reason, it is sometimes humorously called "the oddest prime.") The next prime number is three; two and three are the only consecutive prime numbers. 2 is the first Sophie Germain prime, the first factorial prime, the first Lucas prime, and the first Smarandache-Wellin prime. It is an Eisenstein prime with no imaginary part and real part of the form 3n - 1. It is also a Stern prime, a Pell number, and a Markov number, appearing in infinitely many solutions to the Markov Diophantine equation involving odd-indexed Pell numbers. It is the third Fibonacci number, and the third and fifth Perrin numbers.

Despite being a prime, two is also a highly composite number, because it has more divisors than the number one. The next highly composite number is four. Vulgar fractions with only 2 or 5 in the denominator do not yield infinite decimal expansions, unlike those with most other primes, because 2 and 5 are factors of ten, the decimal base.

Two is the base of the simplest numeral system in which natural numbers can be written concisely: the length of a number is logarithmic in its value (whereas in base 1 the length of a number equals its value). The binary system is used in computers.
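The last-digit test for evenness described above can be checked mechanically; a tiny Python illustration (the helper name is ours, not from the text):

```python
# Evenness of a decimal integer is visible in its last digit alone.
def even_by_last_digit(n: int) -> bool:
    return str(abs(n))[-1] in "02468"

# Every multiple of 2 passes, and the test agrees with n % 2 everywhere.
assert all(even_by_last_digit(2 * k) for k in range(1000))
assert all(even_by_last_digit(n) == (n % 2 == 0) for n in range(2000))
```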
Doubling an operation lifts it one level: x + x = 2·x (addition to multiplication), and x·x = x² (multiplication to exponentiation). Two also has the unique property that 2 + 2 = 2·2 = 2² = 2↑↑2 = 2↑↑↑2, and so on, no matter how high the operation is.

Two is the only number x such that the sum of the reciprocals of the powers of x equals itself. In symbols:

\sum_{k=0}^{\infty}\frac{1}{2^{k}}=1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}+\cdots=2.

This is the n = 2 case of the general identity

\sum_{k=0}^{\infty}\frac{1}{n^{k}}=1+\frac{1}{n-1}\quad\text{for all}\quad n\in\mathbb{R},\ n>1.

In the set-theoretical construction of the natural numbers, 2 is identified with the set \{\{\emptyset\},\emptyset\}. This latter set is important in category theory: it is a subobject classifier in the category of sets.

Powers of two also satisfy

\sum_{k=0}^{n-1}2^{k}=2^{n}-1,\qquad \sum_{k=a}^{n-1}2^{k}=2^{n}-\sum_{k=0}^{a-1}2^{k}-1.

Two has a connection to triangular numbers:

\prod_{k=0}^{n}2^{k}=2^{tri_{2}(n)},

where

tri_{d}(n)=\frac{1}{d!}\prod_{k=0}^{d-1}(n+k)\quad\text{if}\quad d\geq 2

gives the nth d-dimensional simplex number. When d = 2,

tri_{2}(n)=\frac{n^{2}+n}{2}.

The number of domino tilings of a 2×2 checkerboard is 2.

[Table: decimal values of 2×x, 2÷x, x÷2, 2^x, and x² originally appeared here.]

In chemistry, 2 is the atomic number of helium. Group 2 in the periodic table of elements consists of the alkaline earth metals, which commonly have a valence of +2.
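The power-of-two identities earlier in this section can be verified mechanically; a short Python check:

```python
import math
from fractions import Fraction

# Partial sums of the geometric series sum 1/2^k approach 2 exactly:
# after 50 terms the gap to 2 is exactly 1/2^49.
partial = sum(Fraction(1, 2 ** k) for k in range(50))
assert 2 - partial == Fraction(1, 2 ** 49)

# sum_{k=0}^{n-1} 2^k = 2^n - 1
assert all(sum(2 ** k for k in range(n)) == 2 ** n - 1 for n in range(1, 30))

# prod_{k=0}^{n} 2^k = 2^{tri_2(n)} with tri_2(n) = (n^2 + n) / 2
for n in range(12):
    assert math.prod(2 ** k for k in range(n + 1)) == 2 ** ((n * n + n) // 2)

# 2 + 2 = 2 * 2 = 2 ** 2
assert 2 + 2 == 2 * 2 == 2 ** 2 == 4
```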
Also, period 2 in the periodic table consists of the eight elements lithium through neon. In the structures of many living organisms (especially mammals), various features occur in pairs. Examples include: eyes, ears, nostrils, jaws, forelegs (or arms), hind legs, lungs, kidneys, and so forth. Sexual reproduction requires the mating of two organisms, male and female, or the uniting of materials provided by male and female organisms. Studies in molecular biology have shown that each molecule of DNA usually consists of two polynucleotide strands that form a double helix. There are two opposite and complementary electrical charges: positive and negative. Both are needed and work together in the formation of atoms and molecules. There are two opposite and complementary magnetic poles: north and south. Both play a part in setting up a magnetic field. In the electronic configuration of an atom, a maximum of two electrons (of opposite spin) may occupy a given orbital at any instant. In nuclear physics, 2 is the first magic number. A magic number (in this case) is the number of nucleons (protons and neutrons) in the atomic nucleus, such that they are arranged into complete shells in the nucleus.[6] A binary star is a stellar system consisting of two stars orbiting around their center of mass. The Roman numeral II is usually applied to the second-discovered satellite of a planet or minor planet (such as Pluto II or (87) Sylvia II Remus). The Roman numeral II also stands for bright giant in the Yerkes spectral classification scheme. Two is the Saros number of the solar eclipse series that began on May 4, 2861 B.C.E. and ended on June 21, 1563 B.C.E. The duration of Saros series 2 was 1298.17 years, and it contained 73 solar eclipses.[7] Two is the Saros number of the lunar eclipse series that began on March 3, 2523 B.C.E. and ended on April 22, 1225 B.C.E. 
The duration of Saros series 2 was 1298.17 years, and it contained 73 lunar eclipses.[8] 2 is the resin identification code used in recycling to identify high-density polyethylene.

In religion, philosophy, and culture

In Eastern philosophy, a well-known dualism is that of Yang and Yin (in Chinese), or Yang and Eum (in Korean). They stand for pairs of characteristics such as masculinity and femininity, or positivity and negativity. Some consider good and evil an example of Yang-Yin characteristics, but there is a fundamental difference: evil (such as hatred and violence) flourishes by destroying goodness (such as love and kindness), whereas male and female flourish by uniting in harmony.

Marxism versus religious teachings

In Marxist ideology, a pair known as "thesis" and "antithesis" must clash for progress to occur. This philosophy, however, became a justification for class warfare, in which the "lower class" was encouraged to destroy the "upper class." By contrast, various major religions place value on all people, regardless of class, encouraging harmony among people guided by God's love.

The Book of Genesis notes that the first humans were a couple, Adam and Eve. One may recognize this as being necessary for reproduction and multiplication of the human race. In the story of Noah, the animals boarded Noah's Ark two by two. One may infer that they were male and female pairs, allowing those species to survive through sexual reproduction. The Ten Commandments were given in the form of two tablets (Shnei Luchot HaBrit). Two candles are traditionally kindled to usher in the Shabbat, recalling the two ways Shabbat is referred to in the two places the Ten Commandments are recorded in the Torah. These two expressions are known in Hebrew as שמור וזכור ("guard" and "remember"), as in "Guard the Shabbat day to sanctify it" (Deut. 5:12) and "Remember the Shabbat day to sanctify it" (Ex. 20:8).
Two challahs (lechem mishnah) are placed on the table for each Shabbat meal, and a blessing is made over them, to commemorate the double portion of manna that fell in the desert every Friday to cover that day's meals and the Shabbat meals. In Jewish law, the testimonies of two witnesses are required to verify and validate events, such as marriage, divorce, and a crime that warrants capital punishment. Rosh Hashana, the first day of the Jewish year, is a two-day holiday. "Second-Day Yom Tov" (Yom Tov Sheini Shebegaliyot) is a rabbinical enactment that mandates a two-day celebration for each of the one-day Jewish festivals (i.e., the first and seventh days of Passover, the day of Shavuot, the first day of Sukkot, and the day of Shemini Atzeret) outside the land of Israel.

The twos of all four suits in playing cards. Two (二, èr) is a good number in Chinese culture. There is a Chinese saying, "good things come in pairs." It is common to use double symbols in product brand names, such as double happiness, double coin, and double elephants. Cantonese speakers like the number two because it sounds the same as the word "easy" (易) in Cantonese.

↑ D. Wells, The Penguin Dictionary of Curious and Interesting Numbers (London: Penguin Group, 1987), 41-44.
↑ Xavier Borg, "Magic Numbers Derived from a Variable Phase Nuclear Model," The General Science Journal, January 2, 2006. Retrieved October 9, 2017.
↑ Saros Series 2, Saros Series Catalog of Solar Eclipses, NASA Eclipse website. Retrieved October 9, 2017.
↑ Saros Series 2, Catalog of Lunar Eclipse Saros Series, NASA Eclipse website. Retrieved October 9, 2017.
Wells, D.G. The Penguin Dictionary of Curious and Interesting Numbers. Rev. ed. London, UK: Penguin Books, 1998. ISBN 0140261494.
Number "2" and the human body. teachers.net.
Show that the electric field due to a line charge at any plane is the same in magnitude and directed radially outward. (from Physics: Electric Charges and Fields, Class 12 CBSE)

A thin, long, straight line L of charge having uniform linear charge density λ is shown in the figure. By symmetry, the electric field due to the line charge at distance r is the same in magnitude in any plane and is directed radially outward.

A thin conducting spherical shell of radius R has charge +q spread uniformly over its surface. Using Gauss's law, derive an expression for the electric field at a point outside the shell. Draw a graph of electric field E(r) with distance r from the centre of the shell for 0 ≤ r ≤ ∞.

Electric field intensity at any point outside a uniformly charged spherical shell: Assume a thin spherical shell of radius R with centre O, with charge +q uniformly distributed over its surface. Let P be any point on the Gaussian sphere S1 with centre O and radius r (r > R), and apply Gauss's law. Graph: Since the charge on a conducting shell resides on its outer surface, there is no charge inside the shell, so by Gauss's law the electric field inside is zero. The variation of the electric field intensity E(r) with distance r from the centre of the shell for 0 ≤ r < ∞ is shown below.

What is meant by the statement that the electric field of a point charge has spherical symmetry, whereas that of an electric dipole is cylindrically symmetrical? Consider a charge q at the centre of a sphere of radius r. The electric field at all points on the surface of the sphere has the same magnitude, E = q/(4πε₀r²), so the electric field due to a point charge is spherically symmetric. In the case of an electric dipole, the electric field at a distance r from the mid-point of the dipole, on the equatorial line, is given by E = p/(4πε₀r³) for r much larger than the dipole's size. Now, imagine a cylinder of radius r drawn with the electric dipole as its axis.
The electric field due to the dipole at all points on the surface of the cylinder will be the same, so the electric field due to a dipole has cylindrical symmetry.

State Gauss's law in electrostatics. Use this law to derive an expression for the electric field due to an infinitely long straight wire of linear charge density λ C m⁻¹.

Gauss's law in electrostatics states that the total electric flux over a closed surface S in vacuum is 1/ε₀ times the total charge q contained inside S:

\oint_S \vec{E}\cdot\text{d}\vec{S}=\frac{q}{\epsilon_0}.

Electric field due to an infinitely long straight wire: Consider an infinitely long line charge with linear charge density λ. Take a cylindrical Gaussian surface of radius r and length l, coaxial with the line charge, to determine the field at distance r. By symmetry, the electric field E has the same magnitude at each point of the curved surface S1 and is directed radially outward, so the angle between \vec{E} and \text{d}\vec{S} is zero on the curved surface and 90° on the two flat end faces. Total flux through the cylindrical surface:

\Phi=\oint \vec{E}\cdot\text{d}\vec{S}=E\times 2\pi r l.

Since λ is the charge per unit length and l is the length of wire enclosed, the charge enclosed is q = λl. According to Gauss's law,

E\times 2\pi r l=\frac{\lambda l}{\epsilon_0}\quad\Rightarrow\quad E=\frac{\lambda}{2\pi\epsilon_0 r}.

The electrostatic force of repulsion between two positively charged ions carrying equal charge is 3.7 × 10⁻⁹ N when they are separated by a distance of 5 Å. How many electrons are missing from each ion?

Electrostatic force of repulsion: F = 3.7 × 10⁻⁹ N. Let the charges be q₁ = q₂ = q, and the distance between the two charges r = 5 Å = 5 × 10⁻¹⁰ m. To find: the number of missing electrons, n.
Using Coulomb's law,

F=\frac{1}{4\pi\epsilon_0}\frac{q_1 q_2}{r^2}

3.7 × 10⁻⁹ = 9 × 10⁹ × q² / (5 × 10⁻¹⁰)²
q² = (3.7 × 10⁻⁹ × 25 × 10⁻²⁰) / (9 × 10⁹) = 10.28 × 10⁻³⁸
q = 3.2 × 10⁻¹⁹ coulomb

Since q = ne,

n = q/e = (3.2 × 10⁻¹⁹)/(1.6 × 10⁻¹⁹) = 2

So each ion is missing 2 electrons.

A charge q is placed at the centre of the line joining two equal charges Q. Show that the system of three charges will be in equilibrium if q = -Q/4. Let two equal charges Q each be held at A and B, where AB = 2x, and let C be the centre of AB, where charge q is held (see figure). The net force on q is zero, so q is already in equilibrium. For the three charges to be in equilibrium, the net force on each charge must be zero. The total force on Q at B is

F=\frac{1}{4\pi\epsilon_0}\left[\frac{Q^2}{(2x)^2}+\frac{qQ}{x^2}\right],

which vanishes when q = -Q/4.
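The Coulomb's-law arithmetic in this answer can be replayed numerically; a small Python check using the constants from the problem (k ≈ 9 × 10⁹ N·m²/C², e = 1.6 × 10⁻¹⁹ C):

```python
import math

F = 3.7e-9   # electrostatic force, N
r = 5e-10    # separation, m (5 angstrom)
k = 9e9      # Coulomb constant used in the worked solution
e = 1.6e-19  # elementary charge, C

# F = k q^2 / r^2  =>  q = sqrt(F r^2 / k)
q = math.sqrt(F * r**2 / k)
n = q / e    # number of missing electrons per ion
```

Evaluating gives q ≈ 3.2 × 10⁻¹⁹ C and n ≈ 2, matching the worked solution.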
Harmonic Investigation of Compact Fluorescent Lamps: Low-Energy-Consumption Lamps of the Cameroonian Market

S. Perabi Ngoffe1*, S. Ndjakomo Essiane1, Florence Offole2, Ghislain Mengata Mengounou1, Adolphe Moukengue Imano1, Onambele1
1Laboratory of Technology and Applied Sciences, University of Douala, Douala, Cameroun
2Engineering Sciences Laboratory, University of Douala, Douala, Cameroun

This article presents a study of compact fluorescent lamps (CFLs), the low-energy-consumption lamps found in the Cameroonian market. The current obtained in the experimental setup was analyzed in Matlab Simulink. The results show that the THD of the different lamps does not respect the IEC 61000-3-2 standard, and that these values increase with lamp power. Spectral analysis of these lamps shows that the probable cause of their premature degradation is the effect of harmonics on their capacitors. This degradation occurs all the earlier as the harmonic rank and the corresponding THDi are large, which is why 75 W lamps are more sensitive than the others.

Keywords: Harmonic, Pollution, Compact Fluorescent Lamp, THD, Harmonic Spectrum

The deficit of electric energy in the world, and mainly in Africa, has led states to adopt energy-efficiency policies. These policies concern both production and consumption. Electric lighting is estimated to account for 20% of world electricity consumption. To reduce this, Cameroonians have turned to low-energy lamps, mainly LED lamps and compact fluorescent lamps (CFLs) with electronic ballast. The lifespan of a CFL is estimated at between 6,000 h and 15,000 h, but in practice we observe premature deterioration. Analysis of the power supplies of faulty lamps shows that they are of the RCD (resistor, capacitor, and diode) type [1], and that the most critical element is the capacitor. These failures can have several causes; harmonic pollution is one of them.
Harmonic currents originate from the absorption of non-sinusoidal currents by non-linear loads [2]. CFLs, by the structure of their power supplies, are non-linear loads [3]. The effects of harmonics are numerous, ranging from abnormal heating of conductors to the destruction of electric components [4] [5] [6]. These effects concern the components connected to the same node of an electrical network and vary according to the harmonic rank and its rate in the signal. Electromagnetic-compatibility standards limit this harmonic pollution depending on the type and power of the load [7] [8] [9]. Several solutions to this phenomenon exist, among them sizing, compensation, and filtering [10]. Knowledge of the harmonic spectrum makes it possible not only to diagnose the causes of failure of electric components, but also to design suitable filters. The aim of this article is to investigate the harmonic pollution of CFLs. It is subdivided into three parts: the first is a review of the literature on harmonics and analysis methods, the second presents the methods and tools used to carry out our investigation, and the last presents the results. Harmonics are sinusoidal voltages or currents whose frequency is an integer multiple of the network frequency (50 Hz in Cameroon), called the fundamental.
A polluted signal is a superposition of these different harmonics, which can be modeled by the following equation:

f\left(t\right)={a}_{0}+\sum _{n=1}^{\infty }\left({a}_{n}\mathrm{cos}\left(n\omega t\right)+{b}_{n}\mathrm{sin}\left(n\omega t\right)\right)

where n is the harmonic rank and \omega =\frac{2\pi }{T} the electric pulsation. {a}_{0}, {a}_{n}, {b}_{n} are real constants: {a}_{0} is the average value of the electrical signal, given by

{a}_{0}=\frac{1}{T}\underset{0}{\overset{T}{\int }}f\left(t\right)\text{d}t,

{a}_{n} is the real part of the signal amplitude,

{a}_{n}=\frac{2}{T}\underset{0}{\overset{T}{\int }}f\left(t\right)\mathrm{cos}\left(n\omega t\right)\text{d}t,

and {b}_{n} is the imaginary part of the signal amplitude,

{b}_{n}=\frac{2}{T}\underset{0}{\overset{T}{\int }}f\left(t\right)\mathrm{sin}\left(n\omega t\right)\text{d}t.

For n = 1 we have the fundamental.

2.1. Harmonic Sources

The main causes of harmonic perturbation are non-linear loads. They are generated by equipment (electronic components) supplied by DC current; computers, variable-speed drives, and CFLs are examples. This load type is characterized by the generation of a deformed current which remains periodic and creates harmonic sequences such as: ranks 7 and 13, which are positive-sequence; ranks 5 and 11, which are negative-sequence; and ranks 3 and 9, which are zero-sequence [11].

2.2.
Characteristics of Harmonic Currents

The electric characteristics of a deformed periodic signal are:

• The total rms value. If {I}_{k} is the rms value of the harmonic current at rank k, the total rms value is given by Equation (2):

{I}_{rms}=\sqrt{\sum _{k=1}^{n}{I}_{k}^{2}}

• The individual harmonic distortion rate. It gives the relation between the rms value of a harmonic and the fundamental:

TH{D}_{i}\%=\frac{{I}_{k}}{{I}_{1}}\times 100

• The total harmonic distortion. It characterizes the distortion rate of the signal; its expression is given by Equation (4):

THD\%=\frac{\sqrt{{\displaystyle \sum _{k=2}^{n}{I}_{k}^{2}}}}{{I}_{1}}\times 100

• The harmonic spectrum. It consists of determining the magnitude or THDi (of voltage or current) at the different harmonic ranks of the signal; the spectral density gives similar information. Figure 1 shows an example of a spectrum.

Figure 1. Harmonic spectrum.

• The power factor. It is the ratio between the active power (P) and the apparent power (S). In the presence of harmonics, it does not reduce to the phase difference between the load voltage and current. Equation (5) gives its expression:

\lambda =\frac{P}{S}

• The distortion factor. It is the ratio between the power factor and the cosine of the phase difference between current and voltage at the fundamental. It characterizes the deformation of the signal relative to a pure phase shift: if it is equal to one, the signal has no harmonics. It is given by Equation (6):

\nu =\frac{\lambda }{\mathrm{cos}{\phi }_{1}}

where {\phi }_{1} is the phase shift between voltage and current at the fundamental.

• The distortion power. The presence of harmonics in a signal creates a distortion power given by the following relation [12]:

{D}^{2}={S}^{2}-{P}_{1}^{2}-{Q}_{1}^{2}

where {P}_{1} and {Q}_{1} are the active and reactive powers at the fundamental.

2.3. The Effects of Harmonics

Harmonics have several effects on electric-network equipment, including [1] [4] [5] [6]:

• Heating of cables by the Joule effect, given by the following equation:

\text{Losses}=r\sum _{k=1}^{n}{I}_{k}^{2}

with r the cable resistance.
This heating also concerns the protective conductor if the spectrum is rich in rank-3 harmonics.

• Destruction of capacitors. The capacitor current is given by the following relation:

I=2\pi kfCU

where k is the harmonic rank and f the fundamental frequency. This current increases with the harmonic rank. If the capacitor is connected in parallel with a transformer or an inductor, it can enter resonance at the corresponding electric pulsation given by Equation (9):

{\omega }_{k}^{2}=\frac{1}{LC},\qquad {\omega }_{k}=2\pi kf

Apart from these consequences, we can cite nuisance tripping of circuit breakers, disturbance of sources and of sensitive electronic equipment, and others.

2.4. Standards and Harmonics

To limit harmonic pollution, several standards and directives on electromagnetic compatibility are imposed. These are presented in Table 1. Like any electric device, CFL lamps must comply with the IEC 61000-3-2 standard, which states that the harmonic emission limits for lamps are subdivided according to their active power (Table 2) [7].

Table 1. Standards for harmonic emission limits [7] [8] [9].
Table 2. Maximum harmonic current allowed.

The investigation we carried out focused on 3 lamps whose characteristics are given in Table 3.

3.1. Presentation of the Experimental Setup

Measurement of the current absorbed by the lamps was carried out using the experimental setup shown in Figure 2. The setup consists of the following elements:
• a connection box to connect the lamps and the different elements of the bench;
• a switch for switching the lamps;
• a computer (output interface) for viewing the electrical signal;
• a VOLCRAFT digital oscilloscope interface connected to the computer;
• a measuring probe;
• a shunt for current measurement through its voltage image.

3.2. Processing and Analysis Platform in Matlab Simulink

The signal coming from the oscilloscope (computer) being noisy, Figure 3 presents the Simulink processing and analysis platform. The voltages at the terminals of our different shunts are presented in Figure 4.

Table 3. Characteristics of the investigated lamps.
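The formulas of Sections 2.2 and 2.3 can be exercised on a synthetic waveform. The Python sketch below is illustrative only: the harmonic amplitudes, capacitor, and inductance values are invented, not measured CFL data. It computes the Fourier coefficients b_n by the rectangle rule (Equation (1) family), the THDi and THD of Equations (3)-(4), the capacitor current of Equation (8), and the resonance rank implied by Equation (9).

```python
import numpy as np

f = 50.0                          # fundamental frequency, Hz
T = 1.0 / f
N = 20000
t = (np.arange(N) + 0.5) * T / N  # midpoint samples over one period

# Synthetic distorted current: fundamental + rank-3 + rank-5 harmonics
# (amplitudes invented for illustration).
i_t = (1.00 * np.sin(2 * np.pi * f * t)
       + 0.30 * np.sin(2 * np.pi * 3 * f * t)   # rank 3, zero-sequence
       + 0.10 * np.sin(2 * np.pi * 5 * f * t))  # rank 5, negative-sequence

# Fourier sine coefficient b_n by the rectangle rule.
def b(n):
    return (2 / T) * np.sum(i_t * np.sin(2 * np.pi * n * f * t)) * (T / N)

# rms value of each harmonic: I_k = |b_k| / sqrt(2), ranks 1..25.
I = np.array([abs(b(n)) / np.sqrt(2) for n in range(1, 26)])

thd_i3 = I[2] / I[0] * 100                     # Equation (3) for rank 3, percent
thd = np.sqrt(np.sum(I[1:] ** 2)) / I[0] * 100  # Equation (4), percent

# Equation (8): capacitor current grows linearly with harmonic rank.
C, U = 4.7e-6, 230.0           # hypothetical capacitor and harmonic voltage
def cap_current(k):
    return 2 * np.pi * k * f * C * U

# Equation (9): resonance rank for a hypothetical upstream inductance L.
L = 0.12
k_res = 1.0 / (2 * np.pi * f * np.sqrt(L * C))
```

For these invented amplitudes the computed THDi at rank 3 is 30% and the THD is sqrt(0.3² + 0.1²)/1 ≈ 31.6%, well above the Class C limits discussed above; the rank-9 capacitor current is nine times the fundamental one, illustrating why high ranks stress the capacitor most.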
Figure 2. Harmonic-measurement experimental setup.

We note that the obtained measurements are quite noisy, hence the need to filter them for better exploitation. After measurement, the signals obtained are processed on the Matlab Simulink platform. The results are presented in Figure 5. The currents obtained are less noisy. Their different THD values are presented in Table 4. It is found that the THD increases with lamp power, and the values obtained are much higher than the 3% predicted by the standard for Class C equipment. Analysis of the spectral density gave the results in Figure 6.

Figure 3. Signal processing and analysis platform.
Figure 4. Shunt voltage. (a) 20 W lamp; (b) 40 W lamp; (c) 75 W lamp.
Figure 5. Simulink platform results. (a) Experimental currents obtained; (b) Filtered currents; (c) THD per lamp.
Figure 6. Spectral density of the currents absorbed by the studied lamps. (a) 20 W lamp; (b) 40 W lamp; (c) 75 W lamp.
Table 4. Obtained currents with their THD.

Table 5 shows the THDi values computed from this spectral density. We note that at each harmonic rank, the THDi increases with lamp power. According to the IEC 61000-3-2 standard, only the 20 W lamp is in adequacy with the standard (Table 2). In addition, Equations (8) and (9) predict a risk of overcurrent in the capacitors, and of resonance between the capacitors, transformer, and inductance of the CFL supply. According to the results of Table 5, the 75 W lamp is the most likely to degrade prematurely.

Table 5. THDi of the investigated lamps.

This article investigated the harmonic pollution of CFL lamps in order to diagnose the causes of their premature degradation. For this purpose, an experimental setup was built to carry out the various tests. The noisy signal obtained was filtered and analyzed in the Matlab Simulink platform. The results show that the harmonic spectrum of the investigated lamps mainly comprises zero-sequence harmonics. The THD obtained increases with power, as does the THDi.
Comparison of the different results with the IEC 61000-3-2 standard shows that only the 20 W lamp complies at harmonic ranks 3 and 5. The high level of these THDs suggests that the cause of the rapid deterioration of this type of lamp is the effect of harmonics on the capacitors (Equations (8) and (9)).

The authors wish to express their sincere thanks to the research team in Electrical Energy Systems of the University of Douala.

Perabi Ngoffe, S., Ndjakomo Essiane, S., Offole, F., Mengata Mengounou, G., Moukengue Imano, A. and Onambele (2019) Harmonic Investigation of Compact Fluorescent Lamps Low Energy Consumption Lamps of Cameroonian Market. Open Access Library Journal, 6: e5446. https://doi.org/10.4236/oalib.1105446

1. Schneider Electric (2015) Elimination des harmoniques dans les installations. Edition 09/2015, 1-21.
2. Nohra, M.A.H. (2017) Commande de Filtres Actifs Parallèles sur un Réseau Fortement Perturbé. Ph.D. Thesis, Toulouse University, Toulouse.
3. Hanna Nohra, A.F., Kanaan, H.Y. and Al-Haddad, K. (2012) A Study on the Impact of a Massive Integration of Compact Fluorescent Lamps on Power Quality in Distribution Power Systems. International Conference on Renewable Energies for Developing Countries (REDEC), Beirut, 28-29 November 2012, 1-6. https://doi.org/10.1109/redec.2012.6416700
4. Collombet, C., Lupin, J.M. and Schonek, J. (1999) Perturbations harmoniques dans les réseaux pollués et leur traitement. Schneider Electric, Cahier Technique No. 152.
5. Abbaspour, M. and Jahanikia, A.H. (2009) Power Quality Consideration in the Widespread Use of Compact Fluorescent Lamps. 10th International Conference on Electrical Power Quality and Utilisation, Lodz, 15-17 September 2009, 1-6.
6. Richard, M.K. and Sen, P.K. (2010) Compact Fluorescent Lamps and Their Effect on Power Quality and Application Guidelines. IEEE Industry Applications Society Annual Meeting, Houston, 3-7 October 2010, 1-7.
7.
CEI 61000-3-2 Compatibilité électromagnétique (CEM) (2009). Partie 3-2: Limites. Limites pour les émissions de courant harmonique (courant appelé par les appareils 16 A par phase). Édition 3.2, 2009.
8. CEI 61000-2-2 Compatibilité électromagnétique (CEM). Partie 2-2 (2002): Environnement. Niveaux de compatibilité pour les perturbations conduites à basse fréquence et la transmission des signaux sur les réseaux publics d'alimentation basse tension. Deuxième édition, 2002-03.
9. IEC 61000-2-4 Compatibilité électromagnétique (CEM). Partie 2-4 (2002): Environnement. Niveaux de compatibilité dans les installations industrielles pour les perturbations conduites à basse fréquence. Deuxième édition, 2002-06.
10. Hanna Nohra, A.F., Kanaan, H.Y. and Fadel, M. (2016) Comparative Evaluation of Harmonic Compensation Methods Based on Power Calculation and Current Harmonic Detection for Single-Phase Applications. IECON 2016, 42nd Annual Conference of the IEEE Industrial Electronics Society, Florence, 23-26 October 2016, 3685-3690. https://doi.org/10.1109/iecon.2016.7793331
11. Schonek, J. (2000) Les singularités de l'harmonique 3. Schneider Electric, Cahier Technique No. 202.
12. Czarnecki, L.S. (1987) What Is Wrong with the Budeanu Concept of Reactive and Distortion Power and Why It Should Be Abandoned. IEEE Transactions on Instrumentation and Measurement, IM-36, 834-837.
Vertex - Simple English Wikipedia, the free encyclopedia

A vertex is a special kind of point that describes the corners or intersections of geometric shapes. A vertex is a corner; the plural of vertex is vertices. A vertex is a point where two or more lines, curves, rays, or sides meet, and is often labeled with letters such as P, Q, R, and S.

(Figure: in the pictured polygon, B is an "ear" because the line between its neighbouring vertices lies completely inside the shape, while C is a "mouth" because the corresponding line lies completely outside the shape.)

For example, the vertex of an angle is the point where the two edges of the angle intersect, and a vertex of a cube is simply one of its corners, of which there are 8.[2] The concept of a vertex applies to both two-dimensional and three-dimensional geometric objects. For example, a tetrahedron has 4 vertices, and a pentagon has 5 vertices.[3]

↑ "What Are Vertices in Math?". Sciencing. Retrieved 2020-08-16. ↑ "Vertices, Edges and Faces". www.mathsisfun.com. Retrieved 2020-08-16.
Dynamic Modulus of Elasticity of Some Mortars Prepared from Selected Jordanian Masonry Cements. Civil Engineering Department, College of Engineering, Tafila Technical University, Tafila, Jordan.

The dynamic modulus of elasticity is obtained from the density ρ, the ultrasonic pulse velocity V, and Poisson's ratio μ as

E_d = ρ V² (1 + μ)(1 - 2μ) / (1 - μ).

For the Thabet mortar, for example:

E_d(Thabet) = {(1970 kg/m³) × (s²/9.81 m) × 1.17 × 0.66 × ((0.16/44.3) × 10^6)² (m²/s²) × (9.81 N/kg)} / (0.83 × 10^6) = 22773 MPa,

where the factors 1.17, 0.66, and 0.83 are (1 + μ), (1 - 2μ), and (1 - μ) for μ = 0.17, and the division by 10^6 converts Pa to MPa.

y = 46.381x² - 5342.2x + 169275

Compressive strength = (2/10¹²) × (dynamic modulus of elasticity)^2.9618 (r = 0.9816)

Al-Baijat, H. (2019) Dynamic Modulus of Elasticity of Some Mortars Prepared from Selected Jordanian Masonry Cements. Open Journal of Composite Materials, 9, 199-206. https://doi.org/10.4236/ojcm.2019.92011

1. Lanas, J., Bernal, J., Bello, M. and Galindo, J. (2004) Mechanical Properties of Natural Lime-Based Mortars. Cement and Concrete Research, 34, 2191-2201. https://doi.org/10.1016/j.cemconres.2004.02.005
2. Fortes-Revilla, C., Martinez, S. and Blanco-Varela, M. (2006) Modelling of Slaked Lime-Metakaolin Mortar Engineering Characteristics in Terms of Process Variables. Cement and Concrete Composites, 28, 458-467. https://doi.org/10.1016/j.cemconcomp.2005.12.006
3. Lanas, J., Sirera, R. and Alvarez, J. (2006) Study of the Mechanical Behavior of Masonry Lime-Based Mortars Cured and Exposed under Different Conditions. Cement and Concrete Research, 36, 961-970. https://doi.org/10.1016/j.cemconres.2005.12.003
4. Lanas, J. and Alvarez, J.
(2003) Masonry Repair Lime-Based Mortars: Factors Affecting the Mechanical Behavior. Cement and Concrete Research, 33, 1867-1876. https://doi.org/10.1016/S0008-8846(03)00210-2
5. Gleize, P., Müller, A. and Roman, R. (2003) Microstructural Investigation of a Silica Fume-Cement-Lime Mortar. Cement and Concrete Composites, 25, 171-175. https://doi.org/10.1016/S0958-9465(02)00006-9
6. Tchamdjou, W.H.J., Cherradi, T., Abidi, M.L. and Pereira-de-Oliveira, L.A. (2017) The Use of Volcanic Scoria from "Djoungo" (Cameroon) as Cement Replacement and Fine Aggregate by Sand Substitution in Mortar for Masonry. European Journal of Environmental and Civil Engineering, 1-19. https://doi.org/10.1080/19648189.2017.1364298
7. el Mahdi Safhi, A., et al. (2019) Development of Self-Compacting Mortars Based on Treated Marine Sediments. Journal of Building Engineering, 22, 252-261. https://doi.org/10.1016/j.jobe.2018.12.024
8. Haddad, R. and Shannag, M. (2008) Performance of Jordanian Masonry Cement for Construction Purposes. Jordan Journal of Civil Engineering, 2, 19-31.
9. Al-Beijat, H., Bignozzi, M. and Moh'd, B. (2013) Compressive Strength of Jordanian Cement Mortars. Open Journal of Civil Engineering, 3, 6 p.
10. BE EN-196-1 (2005) Methods of Testing Cement, European Standards. European Committee for Standardization (CEN), Brussels.
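The dynamic-modulus relation used in the paper (E_d computed from density, pulse velocity, and Poisson's ratio) can be sketched numerically. A minimal Python sketch; the inputs below are an assumed reading of the worked example (a 0.16 m path and a 44.3 μs transit time), not the paper's full data set:

```python
def dynamic_modulus(rho, v, mu):
    """E_d = rho * v**2 * (1 + mu) * (1 - 2*mu) / (1 - mu), in Pa.

    rho: density in kg/m^3; v: ultrasonic pulse velocity in m/s;
    mu: Poisson's ratio (dimensionless).
    """
    return rho * v**2 * (1 + mu) * (1 - 2 * mu) / (1 - mu)

# Assumed inputs in the spirit of the Thabet worked example:
rho = 1970.0                     # density, kg/m^3
path, transit = 0.16, 44.3e-6    # pulse path length (m) and transit time (s)
v = path / transit               # pulse velocity, ~3612 m/s
mu = 0.17                        # gives the 1.17 / 0.66 / 0.83 factors

E_d_MPa = dynamic_modulus(rho, v, mu) / 1e6
print(round(E_d_MPa))
```

On these assumed inputs the result lands near 2.4e4 MPa, the same order as the paper's 22773 MPa; the small gap suggests a slightly different transit time in the original data.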
The Real Burst QR Decomposition block uses QR decomposition to compute R and C = Q'B, where QR = A, and A and B are real-valued matrices. The least-squares solution to Ax = B is x = R\C. R is an upper-triangular matrix and Q is an orthogonal matrix. To obtain C = Q', set B to the identity matrix.

When the Regularization parameter λ is nonzero, the Real Burst QR Decomposition block instead factors the augmented matrix formed by stacking λI_n on top of A. It computes

R = Q' [λI_n; A] and C = Q' [0_{n,p}; B],

where Q is the orthogonal factor in the QR decomposition of [λI_n; A].

C(i,:) — Rows of matrix C = Q'B

Whether output data is valid, returned as a Boolean scalar. This control signal indicates when the data at output ports R(i,:) and C(i,:) is valid. When this value is 1 (true), the block has successfully computed the R and C matrices. When this value is 0 (false), the output data is not valid.

Use fixed.getQRDecompositionModel(A,B) to generate a template model containing a Real Burst QR Decomposition block for real-valued input matrices A and B.

Complex Burst QR Decomposition | Real Burst Q-less QR Decomposition | Real Partial-Systolic QR Decomposition
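The augmented-matrix computation described above can be mirrored in floating point. A minimal numpy sketch of the mathematics (an illustration of the algorithm, not of the fixed-point hardware block):

```python
import numpy as np

def qr_least_squares(A, B, lam=0.0):
    """Solve min_x ||Ax - B||^2 + lam^2 ||x||^2 via QR of the augmented matrix.

    Mirrors the description above: factor [lam*I; A], then
    R = Q'[lam*I; A], C = Q'[0; B], and x = R \\ C.
    """
    m, n = A.shape
    p = B.shape[1]
    A_aug = np.vstack([lam * np.eye(n), A])
    B_aug = np.vstack([np.zeros((n, p)), B])
    Q, R = np.linalg.qr(A_aug)        # economy QR: Q is (m+n) x n, R is n x n
    C = Q.T @ B_aug
    return np.linalg.solve(R, C)      # back-substitution, since R is triangular

# Usage: with lam = 0 this reproduces the ordinary least-squares solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
B = rng.standard_normal((6, 1))
x = qr_least_squares(A, B)
x_ref, *_ = np.linalg.lstsq(A, B, rcond=None)
print(np.allclose(x, x_ref))
```

Setting lam > 0 shrinks the solution norm, which is the usual reason for the regularization parameter: it keeps R well conditioned when A is nearly rank deficient.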
On the Interpretation of Sagnac's Experiment in the General Theory of Relativity - Wikisource, the free online library

On the Interpretation of Sagnac's Experiment in the General Theory of Relativity (1922) by Rudolf Ortvay, translated from German by Wikisource. In German: Über die Deutung des Sagnacschen Versuches in der allgemeinen Relativitätstheorie, Physikalische Zeitschrift, 23, 176-178.

On the Interpretation of Sagnac's Experiment in the General Theory of Relativity[1] By Rudolf Ortvay.

In the well-known experiment of Sagnac[2], two light rays traversing in opposite directions the periphery of a rotating circle (more exactly, a polygon) are brought to interfere, and a displacement of the interference fringes with respect to the stationary state was observed, by an amount corresponding to a phase difference of 2lv/c², where l is the length of the periphery, v = ωr₀ is the velocity of a point of the periphery, and c is the speed of light. The theory of the experiment for the resting system of the earth was given by M. v. Laue before the experiment was even conducted.[3] Since the experiment proves that in the rotating system the propagation of light does not correspond to that in a uniformly moving system, a discussion from the standpoint of general relativity is desirable. The method used by H. Thirring in his paper[4] on the effect of distant rotating masses provides the means for this investigation, and we will treat the problem with the same approximation with which the centrifugal and Coriolis forces were treated by Thirring. We assume with Thirring that a spherical shell of mass M and radius r is uniformly rotating around the origin of our coordinate system with angular velocity ω about the z-axis.
Let r be large compared with the radius of the circle. At infinity we take for the coefficients g_{ik} of the line element their values in the pseudo-Euclidean case, and apply Einstein's approximate solution with retarded potentials. Neglecting the terms containing ω², we obtain for the energy-momentum tensor[4]

T_{ik} = \varrho \left(\frac{dx_4}{ds}\right)^2 \begin{pmatrix} 0 & 0 & 0 & -r\omega\sin\vartheta\sin\varphi \\ 0 & 0 & 0 & r\omega\sin\vartheta\cos\varphi \\ 0 & 0 & 0 & 0 \\ -r\omega\sin\vartheta\sin\varphi & r\omega\sin\vartheta\cos\varphi & 0 & 1 \end{pmatrix}

where x₄ = ct, c = 1 is the speed of light, and ϱ is the mass density. Again neglecting the terms containing ω², we obtain for the covariant components of the line element[4]

g_{ik} = \begin{pmatrix} -(1+\alpha) & 0 & 0 & +\frac{2}{3}\alpha\omega y \\ 0 & -(1+\alpha) & 0 & -\frac{2}{3}\alpha\omega x \\ 0 & 0 & -(1+\alpha) & 0 \\ +\frac{2}{3}\alpha\omega y & -\frac{2}{3}\alpha\omega x & 0 & 1-\alpha \end{pmatrix}

with α = κM/(4πr), where κ is the gravitational constant. For the line element we obtain

ds² = -(1+α)(dx² + dy²) + (4/3)αω(x dy - y dx) dt + (1-α) dt²,

and we determine the speed of light from the equation ds² = 0. Along the circumference,

x dy - y dx = r² dφ = r² (dφ/dt) dt = r·c·dt,

where c is the speed of light in the direction of the circumference.
From the equation ds² = 0 we obtain, after substituting that value and dividing by dt²,

-(1+α)[(dx/dt)² + (dy/dt)²] + (4/3)αωrc + 1 - α = 0.

Writing (dx/dt)² + (dy/dt)² = c², we further obtain

c² - (4/3)(α/(1+α))ωrc - (1-α)/(1+α) = 0,

and from that, for c,

c = +(2/3)(α/(1+α))ωr ± \sqrt{(4/9)(α/(1+α))² + (1-α)/(1+α)},

and, restricting to terms of first order in α,

c₁ = 1 - α + (2/3)αωr = (1 - κM/(4πr)) + (κ/(6π))(M/r)v,
c₂ = -(1 - α) + (2/3)αωr = -(1 - κM/(4πr)) + (κ/(6π))(M/r)v.

Here c₀ = 1 - κM/(4πr) denotes the speed of light when the shell is at rest; it differs from unity only by a small extent, by as much as is determined by the potential of the shell. The second term shows that the velocity in the two directions is not equal. The speed of light is not c = c₀ ± v, as in Sagnac's experiment, because the second term carries the dragging coefficient (κ/(6π))(M/r), which is very small. No other result was to be expected, since the distant masses that determine the normal values of g_{ik} do not share the rotation.

In the first section I have tried to interpret Sagnac's experiment by a direct calculation of the effect of the distant rotating masses; however, the line element could be calculated only as long as the velocity of the distant masses is small.
However, Sagnac's experiment can be interpreted exactly in general relativity when we, as in the treatment of the centrifugal forces[5], refer the line element to a coordinate system that rotates with respect to the system of fixed stars. If the line element in a system stationary with respect to the fixed stars is, in spatial cylindrical coordinates[6] (with the unit of time chosen so that the speed of light is unity),

ds² = dz² + dr² + r² dφ'² - dt'²,

then we can transform it to a system rotating around the z-axis with constant angular velocity ω by the substitution φ' = φ + ωt, t' = t:

ds² = dz² + dr² + r² dφ² - 2r²ω dφ dt - dt²(1 - ω²r²).

We determine the speed of light from the equation ds² = 0. Considering that for a light ray propagating in a plane perpendicular to the rotation axis dz = 0 and dr = 0, and writing r(dφ/dt) = c for the speed of light along the circumference and ω·r = v for the velocity of a point at rest at distance r from the origin, we obtain (after division by dt²) a quadratic equation for the speed of light,

c² - 2vc - (1 - v²) = 0,

whence

c = v ± 1,

in agreement with Sagnac's experiment. If we had assumed a spatially finite world according to Einstein or de Sitter, we would have had to start with a line element that (in the proximity of the origin) differs from the assumed expression only by magnitudes of second order, and the result would be the same up to magnitudes of the same order.

1. It was shown that the gravitational forces emanating from the rotating distant masses drag the light in the direction of motion. 2.
In a coordinate system that rotates with respect to the system of fixed stars, light moves in agreement with the experiment of Sagnac.

↑ Partially according to a paper of the author, submitted to the Kgl. Ung. Akademie der Wissensch. in the meeting of February 13, 1922. ↑ Sagnac, Paris, C.R. 157, 708 and 1410, 1913. ↑ M. v. Laue, Münch. Ber. 1911, p. 404; Das Relativitätsprinzip, 3. Aufl. 1919; Ann. d. Phys. 62, 448, 1920. ↑ 4.0 4.1 4.2 H. Thirring, this journal 19, 33, 1918; 22, 29, 1921. ↑ H. Thirring, this journal 19, 33, 1918 and 22, 29, 1921; A. Kopff, this journal 22, 24 and 179, 1921. ↑ H. Weyl, Raum, Zeit, Materie, IV. Aufl., S. 203.
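For scale, the fringe displacement quoted at the beginning of the article can be evaluated numerically. A minimal Python sketch; the radius, rotation rate, and wavelength below are hypothetical illustration values, not Sagnac's actual apparatus figures:

```python
import math

# The counter-propagating beams acquire a time difference dt = 2*l*v/c**2;
# expressed in optical periods, this is the fringe displacement.
c = 2.998e8                  # speed of light, m/s
r0 = 0.25                    # radius of the light circuit, m (assumed)
omega = 2 * math.pi * 2.0    # rotation rate: 2 revolutions per second (assumed)
wavelength = 436e-9          # optical wavelength, m (assumed)

l = 2 * math.pi * r0         # length of the periphery
v = omega * r0               # velocity of a point of the periphery
dt = 2 * l * v / c**2        # beam-to-beam time difference, s
fringe_shift = dt * c / wavelength   # displacement in units of one fringe
print(dt, fringe_shift)
```

Even at two revolutions per second, the shift is a small fraction of one fringe, which is why Sagnac's interferometric detection of the effect was delicate.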
offset = lteULFrameOffsetPUCCH3(ue,chs,waveform) [offset,corr] = lteULFrameOffsetPUCCH3(ue,chs,waveform)

offset = lteULFrameOffsetPUCCH3(ue,chs,waveform) performs synchronization using PUCCH format 3 demodulation reference signals (DM-RS) for the time-domain waveform, waveform, given UE-specific settings, ue, and PUCCH format 3 configuration, chs. The returned value, offset, indicates the number of samples from the start of the waveform, waveform, to the position in that waveform where the first subframe containing the DM-RS begins. offset provides subframe timing; frame timing can be achieved by using offset with the subframe number, ue.NSubframe. This behavior is consistent with real-world operation because the base station knows when, or in which subframe, to expect uplink transmissions.

[offset,corr] = lteULFrameOffsetPUCCH3(ue,chs,waveform) also returns a complex-valued matrix corr, which is the signal used to extract the timing offset.

Synchronize and demodulate a transmission that has been delayed by seven samples, using the PUCCH format 3 demodulation reference signal (DM-RS) symbols. Initialize the configuration structures (ue and pucch3). On the transmit side, populate reGrid, generate the waveform, and insert a delay of seven samples.

waveform = lteSCFDMAModulate(ue,reGrid);
tx = [zeros(7,1); waveform];

On the receive side, perform synchronization using the PUCCH format 3 DM-RS symbols for the time-domain waveform, and demodulate after adjusting for the frame timing estimate. Show the estimated frame timing offset.

fOffset = lteULFrameOffsetPUCCH3(ue,pucch3,tx)

fOffset = 7

rxGrid = lteSCFDMADemodulate(ue,tx(1+fOffset:end));
[~,corr] = lteULFrameOffsetPUCCH3(ue,pucch3,tx);
[offset,corrDelayed] = lteULFrameOffsetPUCCH3(ue,pucch3,txDelayed);

The input structures specify, among other settings, the uplink bandwidth N_RB^UL and the PUCCH format 3 resource index n_PUCCH^(3).

Number of samples from the start of the waveform to the position in that waveform where the first subframe begins, returned as a scalar integer.
offset is computed by extracting the timing of the peak of the correlation between waveform and internally generated reference waveforms containing DM-RS signals. The correlation is performed separately for each antenna and the antenna with the strongest correlation is used to compute offset. Signal used to extract the timing offset, returned as a numeric matrix. corr has the same dimensions as waveform.
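The offset estimation described above (take the peak of the correlation between the received waveform and a known reference) can be illustrated generically. A minimal numpy sketch that is not tied to the LTE Toolbox API, with a made-up reference sequence standing in for the DM-RS:

```python
import numpy as np

def frame_offset(rx, ref):
    """Estimate the delay of `ref` inside `rx` by peak cross-correlation.

    Mirrors the doc text: correlate the waveform against an internally
    generated reference and take the timing of the correlation peak.
    """
    corr = np.correlate(rx, ref, mode="valid")   # corr[k]: match at delay k
    return int(np.argmax(np.abs(corr))), corr

# Usage: bury a known reference 7 samples into a longer vector, as in the
# MATLAB example above (tx = [zeros(7,1); waveform]).
rng = np.random.default_rng(1)
ref = rng.standard_normal(64)
rx = np.concatenate([np.zeros(7), ref, np.zeros(16)])
offset, _ = frame_offset(rx, ref)
print(offset)   # 7
```

A real receiver correlates per antenna and, as the documentation notes, uses the antenna with the strongest correlation; the sketch keeps a single antenna for brevity.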
Manning formula - Wikipedia

The Manning formula is an empirical formula estimating the average velocity of a liquid flowing in a conduit that does not completely enclose the liquid, i.e., open channel flow. However, this equation is also used for calculation of flow variables in the case of flow in partially full conduits, as they also possess a free surface like that of open channel flow. All flow in so-called open channels is driven by gravity. It was first presented by the French engineer Philippe Gauckler in 1867,[1] and later re-developed by the Irish engineer Robert Manning in 1890.[2] The Manning formula is also known as the Gauckler–Manning formula, or the Gauckler–Manning–Strickler formula in Europe. In the United States, in practice, it is very frequently called simply Manning's equation. The Gauckler–Manning formula states:

{\displaystyle V={\frac {k}{n}}{R_{h}}^{2/3}\,S^{1/2}}

where:
V is the cross-sectional average velocity (L/T; ft/s, m/s);
n is the Gauckler–Manning coefficient. Units of n are often omitted; however, n is not dimensionless, having units of T/[L^{1/3}] (s/[ft^{1/3}]; s/[m^{1/3}]);
Rh is the hydraulic radius (L; ft, m);
S is the slope of the hydraulic grade line or the linear hydraulic head loss (L/L), which is the same as the channel bed slope when the water depth is constant (S = hf/L);
k is a conversion factor between SI and English units. It can be omitted, as long as the units of the n term are noted and corrected accordingly; if n is kept in the traditional SI units, k is simply the dimensional conversion to English units: k = 1 for SI units, and k = 1.49 for English units. (Note: (1 m)^{1/3}/s = (3.2808399 ft)^{1/3}/s = 1.4859 ft^{1/3}/s.)

Note: K_Strickler = 1/n_Manning. The Strickler coefficient varies from 20 m^{1/3}/s (rough stone and rough surface) to 80 m^{1/3}/s (smooth concrete and cast iron).

The discharge formula, Q = AV, can be used to manipulate the Gauckler–Manning equation by substitution for V.
Solving for Q then allows an estimate of the volumetric flow rate (discharge) without knowing the limiting or actual flow velocity. The Gauckler–Manning formula is used to estimate the average velocity of water flowing in an open channel in locations where it is not practical to construct a weir or flume to measure flow with greater accuracy. The friction coefficients across weirs and orifices are less subjective than n along a natural (earthen, stone, or vegetated) channel reach. The cross-sectional area, as well as n, will likely vary along a natural channel. Accordingly, more error is expected in estimating the average velocity by assuming a Manning's n than by direct sampling (i.e., with a current flowmeter) or by measuring it across weirs, flumes, or orifices. Manning's equation is also commonly used as part of a numerical step method, such as the standard step method, for delineating the free-surface profile of water flowing in an open channel.[3] The formula can be obtained by use of dimensional analysis. In the 2000s the formula was derived theoretically using the phenomenological theory of turbulence.[4][5]

Hydraulic radius

The hydraulic radius is one of the properties of a channel that controls water discharge. It also determines how much work the channel can do, for example, in moving sediment. All else equal, a river with a larger hydraulic radius will have a higher flow velocity, and also a larger cross-sectional area through which that faster water can travel. This means the greater the hydraulic radius, the larger the volume of water the channel can carry.
Based on the 'constant shear stress at the boundary' assumption,[6] the hydraulic radius is defined as the ratio of the channel's cross-sectional area of flow to its wetted perimeter (the portion of the cross-section's perimeter that is "wet"):

{\displaystyle R_{h}={\frac {A}{P}}}

where:
Rh is the hydraulic radius (L);
A is the cross-sectional area of flow (L²);
P is the wetted perimeter (L).

For channels of a given width, the hydraulic radius is greater for deeper channels. In wide rectangular channels, the hydraulic radius is approximated by the flow depth. The hydraulic radius is not half the hydraulic diameter, as the name may suggest, but one quarter of it in the case of a full pipe. It is a function of the shape of the pipe, channel, or river in which the water is flowing. The hydraulic radius is also important in determining a channel's efficiency (its ability to move water and sediment), and is one of the properties used by water engineers to assess the channel's capacity.

Gauckler–Manning coefficient

The Gauckler–Manning coefficient, often denoted n, is an empirically derived coefficient that depends on many factors, including surface roughness and sinuosity. When field inspection is not possible, the best method to determine n is to use photographs of river channels where n has been determined using the Gauckler–Manning formula. In natural streams, n values vary greatly along a reach, and will even vary within a given reach at different stages of flow. Most research shows that n decreases with stage, at least up to bank-full. Overbank n values for a given reach vary greatly depending on the time of year and the velocity of flow. Summer vegetation typically has a significantly higher n value due to leaves and seasonal vegetation.
Research has shown, however, that n values are lower for individual shrubs with leaves than for shrubs without leaves.[7] This is due to the ability of the plant's leaves to streamline and flex as the flow passes them, thus lowering the resistance to flow. High-velocity flows will cause some vegetation (such as grasses and forbs) to lie flat, where a lower velocity of flow through the same vegetation will not.[8]

In open channels, the Darcy–Weisbach equation is valid using the hydraulic diameter as the equivalent pipe diameter, and it is the most rigorous method for estimating the energy loss in man-made open channels. For various (mainly historical) reasons, empirical resistance coefficients (e.g. Chézy, Gauckler–Manning–Strickler) were and still are used. The Chézy coefficient was introduced in 1768, while the Gauckler–Manning coefficient was first developed in 1865, well before the classical pipe-flow resistance experiments of the 1920s-1930s. Historically, both the Chézy and the Gauckler–Manning coefficients were expected to be constants and functions of the roughness only. But it is now well recognised that these coefficients are constant only for a range of flow rates. Most friction coefficients (except perhaps the Darcy–Weisbach friction factor) are estimated purely empirically, and they apply only to fully rough turbulent water flows under steady flow conditions.

One of the most important applications of the Manning equation is its use in sewer design. Sewers are often constructed as circular pipes. It has long been accepted that the value of n varies with the flow depth in partially filled circular pipes.[9] A complete set of explicit equations is available for calculating the depth of flow and other unknown variables when applying the Manning equation to circular pipes.[10] These equations account for the variation of n with the depth of flow, in accordance with the curves presented by Camp.
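The velocity and discharge relations above combine into a short calculation. A minimal Python sketch in SI units (k = 1); the channel dimensions are made up for illustration, and the n value is merely typical of a natural channel:

```python
import math

def manning_velocity(n, Rh, S, k=1.0):
    """Gauckler-Manning formula: V = (k/n) * Rh**(2/3) * S**(1/2)."""
    return (k / n) * Rh ** (2.0 / 3.0) * math.sqrt(S)

def rectangular_channel(width, depth):
    """Return (flow area, hydraulic radius) for a rectangular cross-section."""
    A = width * depth
    P = width + 2 * depth        # wetted perimeter: bed plus the two walls
    return A, A / P

A, Rh = rectangular_channel(width=3.0, depth=1.0)    # m
V = manning_velocity(n=0.035, Rh=Rh, S=0.002)        # assumed n and bed slope
Q = A * V                                            # discharge, m^3/s
print(round(V, 3), round(Q, 3))
```

With these inputs, Rh = 3/5 = 0.6 m and the velocity comes out near 0.91 m/s, a plausible magnitude for a gently sloping earthen channel.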
Authors of flow formulas

Albert Brahms (1692–1758), Antoine de Chézy (1718–1798), Henry Darcy (1803–1858), Julius Ludwig Weisbach (1806–1871), Robert Manning (1816–1897), Wilhelm Rudolf Kutter (1818–1888), Henri Bazin (1843–1917), Paul Richard Heinrich Blasius (1883–1970), Albert Strickler (1887–1963), Cyril Frank Colebrook (1910–1997).

^ Gauckler, Ph. (1867), Etudes Théoriques et Pratiques sur l'Ecoulement et le Mouvement des Eaux, vol. Tome 64, Paris, France: Comptes Rendus de l'Académie des Sciences, pp. 818–822. ^ Manning, R. (1891). "On the flow of water in open channels and pipes". Transactions of the Institution of Civil Engineers of Ireland. 20: 161–207. ^ Chow (1959) pp. 262–267. ^ Gioia, G.; Bombardelli, F. A. (2001). "Scaling and Similarity in Rough Channel Flows". Physical Review Letters. 88 (1): 014501. Bibcode:2002PhRvL..88a4501G. doi:10.1103/PhysRevLett.88.014501. ISSN 0031-9007. PMID 11800954. ^ Gioia, G.; Chakraborty, Pinaki (2006). "Turbulent Friction in Rough Pipes and the Energy Spectrum of the Phenomenological Theory" (PDF). Physical Review Letters. 96 (4): 044502. arXiv:physics/0507066. Bibcode:2006PhRvL..96d4502G. doi:10.1103/PhysRevLett.96.044502. hdl:2142/984. ISSN 0031-9007. PMID 16486828. S2CID 7439208. ^ Le Mehaute, Bernard (2013). An Introduction to Hydrodynamics and Water Waves. Springer. p. 84. ISBN 978-3-642-85567-2. ^ Freeman, Gary E.; Copeland, Ronald R.; Rahmeyer, William; Derrick, David L. (1998). Field Determination of Manning's n Value for Shrubs and Woody Vegetation. Engineering Approaches to Ecosystem Restoration. pp. 48–53. doi:10.1061/40382(1998)7. ISBN 978-0-7844-0382-2. ^ Hardy, Thomas; Panja, Palavi; Mathias, Dean (2005), WinXSPRO, A Channel Cross Section Analyzer, User's Manual, Version 3.0. Gen. Tech. Rep. RMRS-GTR-147 (PDF), Fort Collins, CO: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station, p. 94. ^ Camp, T. R. (1946). "Design of Sewers to Facilitate Flow". Sewage Works Journal. 18 (1): 3–16.
JSTOR 25030187. PMID 21011592. ^ Akgiray, Ömer (2005). "Explicit solutions of the Manning equation for partially filled circular pipes". Canadian Journal of Civil Engineering. 32 (3): 490–499. doi:10.1139/l05-001. ISSN 0315-1468.

Chanson, Hubert (2004). The Hydraulics of Open Channel Flow. Elsevier Butterworth Heinemann. ISBN 978-0-7506-5978-9. Chow, Ven Te (2009). Open-Channel Hydraulics. Blackburn Press. ISBN 978-1-932846-18-8. Grant, Douglas M. (1989). Diane K. Walkowiak (ed.). Isco Open Channel Flow Measurement Handbook. Teledyne Isco. ISBN 978-0-9622757-3-9. Keulegan, Garbis Hovannes (1938). Laws of Turbulent Flow in Open Channels. Vol. 21. US: National Bureau of Standards.
Tax rate — Wikipedia Republished // WIKI 2

Aspect of tax law. For a type of taxation system in the United Kingdom and elsewhere, see Rates (tax).

In a tax system, the tax rate is the ratio (usually expressed as a percentage) at which a business or person is taxed. There are several methods used to present a tax rate: statutory, average, marginal, and effective. These rates can also be presented using different definitions applied to a tax base: inclusive and exclusive.

A statutory tax rate is the legally imposed rate. An income tax could have multiple statutory rates for different income levels, whereas a sales tax may have a flat statutory rate.[1] The statutory tax rate is expressed as a percentage and will always be higher than the effective tax rate.[2]

An average tax rate is the ratio of the total amount of taxes paid to the total tax base (taxable income or spending), expressed as a percentage.[1] Let t be the total tax liability and i be the total tax base; then

average tax rate = t/i.

In a proportional tax, the tax rate is fixed and the average tax rate equals this tax rate. In the case of tax brackets, commonly used for progressive taxes, the average tax rate increases as taxable income increases through the tax brackets, asymptoting to the top tax rate. For example, consider a system with three tax brackets, 10%, 20%, and 30%, where the 10% rate applies to income from $1 to $10,000, the 20% rate applies to income from $10,001 to $20,000, and the 30% rate applies to all income above $20,000. Under this system, someone earning $25,000 would pay $1,000 for the first $10,000 of income (10%); $2,000 for the second $10,000 of income (20%); and $1,500 for the last $5,000 of income (30%).
In total, they would pay $4,500, or an 18% average tax rate. A marginal tax rate is the tax rate on income set at a higher rate for incomes above a designated higher bracket, which in 2016 in the United States was $415,050. For annual income above that cut-off point, the marginal tax rate in 2016 was 39.6%; for income below it, the tax rate was 35% or less.[3][4] The marginal rate can be written as Δt/Δi, where t is the total tax liability, i is total income, and Δ refers to a numerical change. In accounting practice, the tax numerator in this ratio usually includes taxes at federal, state, provincial, and municipal levels. Marginal tax rates are applied to income in countries with progressive taxation schemes, with incremental increases in income taxed in progressively higher tax brackets, so that the tax burden is distributed amongst those who can most easily afford it. Marginal taxes are valuable as they allow governments to generate revenue to fund social services in a way that only affects those who will be the least negatively affected. With a flat tax, by comparison, all income is taxed at the same percentage, regardless of amount. An example is a sales tax where all purchases are taxed equally. A poll tax is a flat tax of a set dollar amount per person. The marginal tax in these scenarios would be constant (in the case of a poll tax, zero); however, both are forms of regressive taxation that place a higher tax burden on those least able to cope with it, and they often result in an underfunded government and increased deficits. The effective tax rate is the percent of their income that an individual or a corporation pays in taxes.[5] The term is used in financial reporting to measure the total tax paid as a percentage of the company's accounting income, instead of as a percentage of the taxable income.
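The three-bracket computation above can be sketched in Python. The bracket boundaries and rates are the ones from the worked example, not any real tax schedule:

```python
def tax_owed(income, brackets):
    """Progressive tax: brackets is a list of (upper_limit, rate) pairs,
    with float('inf') as the last upper limit."""
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            # Tax only the slice of income that falls inside this bracket.
            owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

brackets = [(10_000, 0.10), (20_000, 0.20), (float("inf"), 0.30)]
total = tax_owed(25_000, brackets)   # 1000 + 2000 + 1500 = 4500
average_rate = total / 25_000        # 0.18, i.e. an 18% average rate
```

Note that the average rate (18%) is below the top marginal rate (30%) that applies to the last dollar earned, matching the discussion above.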
International Accounting Standard 12[6] defines it as income tax expense or benefit for accounting purposes divided by accounting profit. In Generally Accepted Accounting Principles (United States), the term is used in official guidance only with respect to determining income tax expense for interim (e.g. quarterly) periods by multiplying accounting income by an "estimated annual effective tax rate", the definition of which varies depending on the reporting entity's circumstances.[7] Australian evidence indicates that a high (low) effective tax rate is associated with more (less) readable voluntary tax reports and with voluntary tax reports that have a more positive (negative) tone.[8] In U.S. income tax law, the term can be used to determine whether a foreign income tax on specific types of income exceeds a certain percentage of the U.S. tax that would apply to such income if U.S. tax had been applicable.[9] Mathematically, a 25% income tax on $100 of income yields the same as a 33% sales tax on a $75 purchase. Some tax systems include the taxes owed in the tax base (tax-inclusive, Before Tax), while other tax systems do not include taxes owed as part of the base (tax-exclusive, After Tax).[10] In the United States, sales taxes are usually quoted exclusively and income taxes are quoted inclusively. The majority of European value added tax (VAT) countries include the tax amount when quoting merchandise prices, as do Goods and Services Tax (GST) countries such as Australia and New Zealand. However, those countries still define their tax rates on a tax-exclusive basis. Let i be the inclusive tax rate (like an income tax); for a 20% rate, i = 0.20. Let e be the exclusive rate (like a sales tax), and let p be the total price of the good (including the tax).
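These definitions give the conversion e = i/(1 − i) between the inclusive and exclusive rates. A quick Python check, using the 25%-income-tax vs. 33%-sales-tax example above:

```python
def inclusive_to_exclusive(i):
    """Convert an inclusive (income-tax style) rate to an exclusive (sales-tax style) rate."""
    return i / (1 - i)

def exclusive_to_inclusive(e):
    """Inverse conversion: exclusive rate back to inclusive rate."""
    return e / (1 + e)

# 25% income tax on $100 leaves $75; a sales tax of 1/3 on a $75 purchase
# collects the same $25, so inclusive 0.25 corresponds to exclusive 1/3.
assert abs(inclusive_to_exclusive(0.25) - 1/3) < 1e-12
```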
The money going to the government is p\times i, and the money the seller nets is p-(p\times i). To convert the inclusive rate to the exclusive rate, divide the money going to the government by the money the company nets:

e=\frac{p\times i}{p-(p\times i)}=\frac{p\times i}{p\times (1-i)}=\frac{i}{1-i}.

Therefore, to convert any inclusive tax rate to an exclusive tax rate, divide the inclusive rate by 1 minus that rate.

See also: Capital flight, Progressive tax, Proportional tax, Regressive tax, Tax rates of Europe.

^ a b "What is the difference between statutory, average, marginal, and effective tax rates?" (PDF). Americans For Fair Taxation. Archived from the original (PDF) on 2007-06-14. Retrieved 2007-04-23. ^ "Statutory vs. Effective Tax Rate". DeaneBarker.net. 2011-12-31. Retrieved 2016-12-28. ^ "2016 Federal Tax Schedules". irs.gov. Retrieved 2017-04-27. ^ Piper, Mike (Sep 12, 2014). Taxes Made Simple: Income Taxes Explained in 100 Pages or Less. Simple Subjects, LLC. ISBN 978-0981454214. ^ Kagan, Julia. "Effective Tax Rate". Investopedia. Retrieved 2020-12-10. ^ IAS 12, paragraph 86. ^ ASC 740-270-30-6 through -9. ^ Morton, Elizabeth; Bicudo de Castro, Vincent; Hinchliffe, Sarah. "The Association of Mandatory Tax Disclosures in the Readability and Tone of Voluntary Tax Reports" (PDF). eJournal of Tax Research. 19 (2): 232–272. ISSN 1448-2398. ^ See, e.g., 26 CFR 1.904-4(c). ^ a b Bachman, Paul; Haughton, Jonathan; Kotlikoff, Laurence J.; Sanchez-Penalver, Alfonso; Tuerck, David G. (November 2006). "Taxing Sales under the FairTax – What Rate Works?" (PDF). Beacon Hill Institute. Tax Analysts. Archived from the original (PDF) on 2007-06-14. Retrieved 2007-04-24.
SAT Algebra Word Problems | Brilliant Math & Science Wiki

To solve SAT algebra word problems, you need to know how to work with ratios, proportions, and percents.

SAT Tips for Algebra Word Problems

A fruit basket contains the same number of apples and pears. If Eric eats 5 apples and 1 pear, there will be twice as many pears as apples. How many pears remain in the basket?

(A) 4   (B) 8   (C) 9   (D) 10   (E) 11

Let x be the number of apples and also the number of pears in the basket. Eric eats 5 apples, and therefore there are x-5 apples remaining. He also eats 1 pear, and therefore there are x-1 pears remaining. We're told that after Eric eats there are twice as many pears as apples in the basket, so we set up an equation:

\begin{array}{l c l l} 2(x-5) &=& x-1 &\quad \text{create equation} &(1)\\ 2x-10 &=& x-1 &\quad \text{use distributive property} &(2)\\ x-10 &=& -1 &\quad \text{subtract}\ x\ \text{from both sides} &(3)\\ x &=& 9 &\quad \text{add}\ 10\ \text{to both sides} &(4)\\ \end{array}

We're looking for the number of pears remaining in the basket, but x=9 is the original number of pears. Since he ate 1 pear, there are 8 pears remaining.

Alternatively, for each answer choice we can work backwards: from how many pears are left, we compute how many pears and apples there were to start with. Remember that there were the same number of apples and pears before Eric ate.

(A) If 4 pears are left, then there were 4+1=5 pears to start with. Since there are twice as many pears remaining as apples, there are \frac{4}{2}=2 apples left, and there were 2+5=7 apples to start with. But 5\neq 7, so this is the wrong choice.

(B) If 8 pears are left, then there were 8+1=9 pears to start with. There are \frac{8}{2}=4 apples left, and there were 4+5=9 apples to start with. Since 9=9, this is the correct answer.

(C) If 9 pears are left, then there were 9+1=10 pears to start with. There are \frac{9}{2}=4.5 apples left, and there were 4.5+5=9.5 apples to start with. But 10 \neq 9.5.
(D) If 10 pears are left, then there were 10+1=11 pears to start with. There are \frac{10}{2}=5 apples left, and there were 5+5=10 apples to start with. But 10\neq 11, so this is the wrong choice.

(E) If 11 pears are left, then there were 11+1=12 pears to start with. There are \frac{11}{2}=5.5 apples left, and there were 5.5+5=10.5 apples to start with. But 10.5\neq 12.

If you got this problem wrong, you should review SAT Translating Word Problems. One wrong choice is the number of apples that remain in the basket. If you solve for the original number of pears, you will get another wrong answer. When accounting for the pear Eric ate, if instead of subtracting 1 from the original number of pears you add 1, you will also get a wrong answer.

(E) When finding how many apples remain in the basket after Eric consumed 5 of them, if you add 5 to x instead of subtracting 5 from it, you will get

\begin{array}{l c l l} 2(x\fbox{+}5) &=& x-1 &\quad \text{mistake: added}\ 5\ \text{to}\ x &(1)\\ \end{array}

Or, you may get this wrong answer if in step (3) of the solution you subtract 10 from both sides and ignore the negative signs.

Katie and Beth had the same number of marbles on January 1st. By the end of the year, Katie's collection of marbles increased by 25\% and Beth's decreased by 75\%. On December 31st, the number of marbles Beth owned was what percent of the number of marbles Katie owned?

(A) 18.75\%   (B) 20\%   (C) 33.33\%   (D) 75\%   (E) 100\%

At 2:00 P.M. train A leaves a station traveling at 100 mi/hr. At 4:30 P.M. train B leaves the station on a parallel route traveling at 150 mi/hr. How far away from the station will the second train overtake the first train?

(A) 250   (B) 500   (C) 750   (D) 800   (E) 1500

Tip: Distance = Rate \times Time.

Between 2:00 P.M. and 4:30 P.M. there are 2.5 hours. This means that train A travels 2.5 hours more than train B. If t denotes the hours train B travels, then train A travels (t+2.5) hours.
Let the distance each train travels be denoted by d. Using d=rt, we create an equation for the distance each train traveled:

\begin{array}{l l c l l} d&=&100(t+2.5) &\quad \text{train A} &(1)\\ d&=&150t &\quad \text{train B} &(2)\\ \end{array}

Since the distance the two trains traveled is the same, we set (1) equal to (2):

\begin{array}{r c l} 150t&=&100(t+2.5) &\quad (1) = (2) \\ 150t &=& 100t + 250 &\quad \text{use distributive property}\\ 50t &=& 250 &\quad \text{subtract}\ 100t\ \text{from both sides}\\ t&=&5\ \text{hrs} &\quad \text{divide both sides by}\ 50\\ \end{array}

So train B travels 5 hours before it catches up with train A. But we are looking for the distance the trains have traveled when train B overtakes train A. Plugging t=5 into (2) gives 5 \times 150 = 750.

One wrong choice is just the sum of the speeds of the two trains. If you find the distance train A travels from 4:30 P.M. till train B overtakes it, you will get another wrong answer. If you think the difference between 2:00 P.M. and 4:30 P.M. is 2 hours, you will get a wrong answer. If you find the combined distance the trains traveled, you will also get a wrong answer.

See also: Fractions-Word Problems, Decimals-Word Problems, Percents-Word Problems, Systems of Linear Equations Word Problems-Easy, Convert Percentages, Fractions, and Decimals, SAT Direct and Inverse Variation.

Cite as: SAT Algebra Word Problems. Brilliant.org. Retrieved from https://brilliant.org/wiki/sat-word-problems/
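The catch-up algebra above can be double-checked numerically:

```python
# Train A leaves at 2:00 P.M. at 100 mi/hr; train B leaves at 4:30 P.M. at 150 mi/hr.
head_start = 2.5               # hours train A travels before B departs
rate_a, rate_b = 100, 150

# B catches A when rate_b * t == rate_a * (t + head_start), so
# t = rate_a * head_start / (rate_b - rate_a).
t = rate_a * head_start / (rate_b - rate_a)   # 5.0 hours
distance = rate_b * t                         # 750.0 miles
```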
numtheory(deprecated)/issqrfree - Maple Help

issqrfree - test if an integer is square free

issqrfree(n)

Important: The numtheory package has been deprecated. Use the superseding command NumberTheory[IsSquareFree] instead.

The function issqrfree(n) returns true if n is square free, and false otherwise. The integer n is square free if it is not divisible by any perfect square greater than 1. The command with(numtheory,issqrfree) allows the use of the abbreviated form of this command.

> with(numtheory):
> ifactor(20);
        (2)^2 (5)
> issqrfree(20);
        false
> ifactor(21);
        (3) (7)
> issqrfree(21);
        true
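For comparison, the same test can be sketched in Python by trial division. This is an independent illustration of the definition, not Maple's implementation:

```python
def is_squarefree(n):
    """True if no perfect square greater than 1 divides n."""
    n = abs(n)
    if n == 0:
        return False
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:     # d^2 divides n: not square free
            return False
        if n % d == 0:           # remove the single factor of d
            n //= d
        d += 1
    return True

# Mirrors the Maple session: 20 = 2^2 * 5 is not square free, 21 = 3 * 7 is.
```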
Find the equation of the circle which passes through 2 points on the x-axis which are at a distance of 4 units from the origin and whose radius is 5 units.

The general equation of the circle is

x^2 + y^2 + 2gx + 2fy + c = 0.   ... (1)

The two points on the x-axis are at a distance of 4 units from the origin, so the points are (4, 0) and (-4, 0), and (1) passes through both:

(4, 0): 4^2 + 0^2 + 2g(4) + 2f(0) + c = 0 ⇒ 16 + 8g + c = 0   ... (2)
(-4, 0): (-4)^2 + 0^2 + 2g(-4) + 2f(0) + c = 0 ⇒ 16 - 8g + c = 0   ... (3)

Adding (2) and (3) gives 32 + 2c = 0, so c = -16. From (2), 16 + 8g - 16 = 0, so g = 0.

Given that the radius is 5,

\sqrt{g^2 + f^2 - c} = 5 ⇒ \sqrt{0 + f^2 + 16} = 5 ⇒ f^2 + 16 = 25 ⇒ f^2 = 9 ⇒ f = ±3.

Substituting the values of g, f, and c into (1),

x^2 + y^2 ± 6y - 16 = 0.
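A quick numerical check of the result (taking the f = +3 solution; the f = -3 circle is its mirror image across the x-axis):

```python
import math

# Circle x^2 + y^2 + 6y - 16 = 0, i.e. g = 0, f = 3, c = -16.
g, f, c = 0, 3, -16
radius = math.sqrt(g**2 + f**2 - c)   # sqrt(0 + 9 + 16) = 5.0

# Both x-axis points at distance 4 from the origin lie on the circle.
for (x, y) in [(4, 0), (-4, 0)]:
    assert abs(x**2 + y**2 + 2*g*x + 2*f*y + c) < 1e-9
```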
Convert baseband signal to RF signal - Simulink - MathWorks Italia

The IQ Modulator block converts a baseband signal to an RF signal and models an IQ modulator with impairments. I stands for the in-phase component of the signal and Q stands for the quadrature-phase component. You can use the IQ Modulator to design direct-conversion transmitters. The IQ Modulator block mask icons are dynamic and indicate the current state of the applied noise parameter. For more information, see IQ Modulator Icons.

Available power gain — Relates the ratio of the power of a single sideband (SSB) of the output to the input power at the I branch. This assumes no gain mismatch and that the input at the Q branch is Q_in = -j*I_in.

Open circuit voltage gain — Value of the open circuit voltage gain parameter as the linear voltage gain term of the polynomial voltage-controlled voltage source (VCVS).

Available power gain — Ratio of the power of the SSB at the output to the input power at the I branch, specified as a scalar in dB or a unitless ratio. For a unitless ratio, select None.

Open circuit voltage gain of the IQ modulator, specified as a scalar in dB or a unitless ratio. For a unitless ratio, select None.

Input impedance (Ohm) — Input impedance of the IQ modulator, specified as a scalar in Ohms.

Output impedance (Ohm) — Output impedance of the IQ modulator, specified as a scalar in Ohms.
Add Image Reject filters — Image reject (IR) filter parameters
Add Channel Select filter — Channel select (CS) filter parameters
Ground and hide negative terminals — Ground and hide terminals
Edit System — Break IQ modulator block links and replace internal variables with appropriate values

Gain difference between the I and Q branches, specified as a scalar in dB or a unitless ratio. Gain mismatch is assumed to be forward-going; that is, the mismatch does not affect leakage from LO to RF. (Available power gain + I/Q gain mismatch) relates the ratio of the power of the single sideband (SSB) at the Q input branch to the output power.

Phase difference between the I and Q branches, specified as a scalar in degrees or radians. This mismatch affects the LO to input RF leakage.

Ratio of the magnitude of the LO voltage to the leaked RF voltage, specified as a scalar in dB or a unitless ratio. For a unitless ratio, select None.

Noise floor (dBm/Hz) — Single-sided noise power spectral density. -inf (default) | scalar in dBm/Hz. Single-sided noise power spectral density, specified as a scalar in dBm/Hz. This block assumes -174 dBm/Hz noise input at both I and Q branches.

Select this parameter to add phase noise to your IQ modulator system. Phase noise level is specified as a scalar, vector, or matrix with elements in dBc/Hz. 1e-10 s (default) | scalar.

Even and odd order: The IQ Modulator can produce second-order and third-order intermodulation frequencies, in addition to a linear term. Odd order: The IQ Modulator generates only odd-order intermodulation frequencies. The linear gain determines the linear a1 term. The block calculates the remaining terms from the values specified in IP3, 1-dB gain compression power, Output saturation power, and Gain compression at saturation. The number of constraints you specify determines the order of the model.
The figure shows the graphical definition of the nonlinear IQ modulator parameters. To enable this parameter, first select Odd order in the Nonlinear polynomial type tab, then change the default value of Output saturation power.

Select Add Image Reject filters in the Main tab to see the IR Filter parameters tab:

\frac{R_{\text{load}}}{R_{\text{source}}}>R_{\text{ratio}} \quad \text{or} \quad \frac{R_{\text{load}}}{R_{\text{source}}}<\frac{1}{R_{\text{ratio}}},

where

R_{\text{ratio}}=\frac{\sqrt{1+\epsilon^2}+\epsilon}{\sqrt{1+\epsilon^2}-\epsilon}, \qquad \epsilon=\sqrt{10^{0.1R_{\text{p}}}-1}.

Select Add Channel Select filter in the Main tab to see the CS Filter parameters; the same resistance-ratio conditions and definitions of R_ratio and ε apply.

R2021b: IQ Modulator block icon updated. Starting in R2021b, the IQ Modulator block icon has been updated. The block icons are now dynamic and show the current state of the noise parameter. When you open a model created before R2021b containing an IQ Modulator block, the software replaces the block icon with the R2021b version.

See also: Modulator | IQ Demodulator | Mixer
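The I/Q gain and phase mismatch impairments described above can be illustrated with a simple baseband-to-RF sketch. This is a generic textbook model, not the Simulink block's internals; the function and signal names are invented for the illustration:

```python
import numpy as np

def iq_modulate(i_sig, q_sig, f_lo, t, gain_mismatch_db=0.0, phase_mismatch_deg=0.0):
    """Upconvert baseband I/Q to RF, with gain/phase imbalance applied to the Q branch."""
    g = 10 ** (gain_mismatch_db / 20)            # dB -> linear voltage ratio
    phi = np.deg2rad(phase_mismatch_deg)
    lo_i = np.cos(2 * np.pi * f_lo * t)
    lo_q = -np.sin(2 * np.pi * f_lo * t + phi)   # quadrature LO with phase error
    return i_sig * lo_i + g * q_sig * lo_q

# 1 kHz complex baseband tone upconverted to 100 kHz, with 1 dB / 5 degree mismatch.
t = np.arange(0, 1e-3, 1e-6)
rf = iq_modulate(np.cos(2e3 * np.pi * t), np.sin(2e3 * np.pi * t), 100e3, t,
                 gain_mismatch_db=1.0, phase_mismatch_deg=5.0)
```

With zero mismatch this produces a clean single sideband; the mismatch terms leak power into the image sideband, which is what the IR filter is there to suppress.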
Continuous-time or discrete-time two-degree-of-freedom PID controller - Simulink - MathWorks Switzerland

In discrete time, the derivative filter term is

D\left[\frac{N}{1+N\alpha \left(z\right)}\right],

where the integrator formula \alpha(z) depends on the chosen method: Forward Euler, \alpha \left(z\right)=\frac{{T}_{s}}{z-1}; Backward Euler, \alpha \left(z\right)=\frac{{T}_{s}z}{z-1}; Trapezoidal, \alpha \left(z\right)=\frac{{T}_{s}}{2}\frac{z+1}{z-1}.

Parallel form, continuous time:
u=P\left(br-y\right)+I\frac{1}{s}\left(r-y\right)+D\frac{N}{1+N\frac{1}{s}}\left(cr-y\right).

Parallel form, discrete time:
u=P\left(br-y\right)+I\alpha \left(z\right)\left(r-y\right)+D\frac{N}{1+N\beta \left(z\right)}\left(cr-y\right).

Ideal form, continuous time:
u=P\left[\left(br-y\right)+I\frac{1}{s}\left(r-y\right)+D\frac{N}{1+N\frac{1}{s}}\left(cr-y\right)\right].

Ideal form, discrete time:
u=P\left[\left(br-y\right)+I\alpha \left(z\right)\left(r-y\right)+D\frac{N}{1+N\beta \left(z\right)}\left(cr-y\right)\right].

The integrator output is {u}_{i}=\int \left(r-y\right)I\,dt. The discrete derivative term can also be written as D\frac{z-1}{z{T}_{s}}\left(cr-y\right).

The pole of the discrete derivative filter is {z}_{pole}=1-N{T}_{s} (Forward Euler), {z}_{pole}=\frac{1}{1+N{T}_{s}} (Backward Euler), or {z}_{pole}=\frac{1-N{T}_{s}/2}{1+N{T}_{s}/2} (Trapezoidal).

For the parallel form, the equivalent prefilter and compensator are

\begin{array}{l}{F}_{par}\left(s\right)=\frac{\left(bP+cDN\right){s}^{2}+\left(bPN+I\right)s+IN}{\left(P+DN\right){s}^{2}+\left(PN+I\right)s+IN},\\ {C}_{par}\left(s\right)=\frac{\left(P+DN\right){s}^{2}+\left(PN+I\right)s+IN}{s\left(s+N\right)},\end{array}

and for the ideal form,

\begin{array}{l}{F}_{id}\left(s\right)=\frac{\left(b+cDN\right){s}^{2}+\left(bN+I\right)s+IN}{\left(1+DN\right){s}^{2}+\left(N+I\right)s+IN},\\ {C}_{id}\left(s\right)=P\frac{\left(1+DN\right){s}^{2}+\left(N+I\right)s+IN}{s\left(s+N\right)}.\end{array}

The corresponding feedforward components are

{Q}_{par}\left(s\right)=\frac{\left(\left(b-1\right)P+\left(c-1\right)DN\right)s+\left(b-1\right)PN}{s+N}, \qquad {Q}_{id}\left(s\right)=P\frac{\left(\left(b-1\right)+\left(c-1\right)DN\right)s+\left(b-1\right)N}{s+N}.
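A minimal Python sketch of the parallel-form discrete controller with Forward Euler integrator and derivative-filter states. This is an illustration of the equations above, not the Simulink block's implementation:

```python
class TwoDofPID:
    """Discrete 2-DOF PID, parallel form, Forward Euler states:
    u = P(b*r - y) + integ + D*N*((c*r - y) - filt)."""
    def __init__(self, P, I, D, N, b, c, Ts):
        self.P, self.I, self.D, self.N = P, I, D, N
        self.b, self.c, self.Ts = b, c, Ts
        self.integ = 0.0   # integrator state, realizes I*Ts/(z-1)
        self.filt = 0.0    # derivative-filter state, realizes N*Ts/(z-1+N*Ts)

    def update(self, r, y):
        e_d = self.c * r - y                       # derivative-weighted error
        u = (self.P * (self.b * r - y)
             + self.integ
             + self.D * self.N * (e_d - self.filt))
        # Forward Euler state updates (applied after computing the output)
        self.integ += self.I * self.Ts * (r - y)
        self.filt += self.N * self.Ts * (e_d - self.filt)
        return u

pid = TwoDofPID(P=1.0, I=0.5, D=0.1, N=10.0, b=0.8, c=0.5, Ts=0.01)
u0 = pid.update(r=1.0, y=0.0)
```

The setpoint weights b and c act only on r, so a setpoint step is softened while the response to y (feedback) uses the full gains, which is the point of the two-degree-of-freedom structure.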
Interquartile Range (IQR) | Brilliant Math & Science Wiki

Yash Singhal and Mahindra Jain contributed

Before studying the interquartile range, we should first study quartiles, for they act as a base for the interquartile range. Quartiles are the values which divide a series into 4 equal parts. Before calculating the quartiles, we first have to arrange all the individual observations in ascending order. Let us study the components of quartiles.

First quartile: This is the first value that divides the series into 4 equal parts. It is also known as the lower quartile. It divides the series in such a manner that one-fourth of the observations are below it and the remaining three-quarters are above it. It is represented by q_{1}.

Second quartile: This is the second value that divides the series into 4 equal parts. It is also called the median. It divides the series equally: half of the observations are below it and the other half are above it. It is represented by q_{2} or M. For more information on the median, see Median.

Third quartile: This is the third value that divides the series into 4 equal parts. It is also called the upper quartile. It divides the series in such a way that three-fourths of the observations are below it and the remaining one-fourth are above it. It is represented by q_{3}.

How do we calculate the quartiles? The following are some formulas:

Individual series: q_{1} is the \left(\frac{N+1}{4}\right)^{\text{th}} observation and q_{3} is the \left(3\times\frac{N+1}{4}\right)^{\text{th}} observation, where N is the number of observations.

Discrete series: For calculating the quartiles, first calculate the cumulative frequency (cf).
Then q_{1} is the value corresponding to the \left(\frac{N+1}{4}\right)^{\text{th}} observation and q_{3} is the value corresponding to the \left(3\times\frac{N+1}{4}\right)^{\text{th}} observation, where N is the total number of observations.

Continuous series: q_{1}=l+\dfrac{\dfrac{N}{4}-cf}{f}\times h and q_{3}=l+\dfrac{\dfrac{3N}{4}-cf}{f}\times h, where l is the lower limit of the quartile class interval, N is the number of observations, cf is the cumulative frequency of the class interval preceding the quartile class interval, f is the frequency of the quartile class interval, and h is the width of the quartile class interval.

NOTE: The formulas for calculating q_{2}, the median, are given in the wiki page Median. While calculating quartiles, if \frac{N+1}{4} comes out to a decimal, separate the integer and fractional parts, then take the integer-numbered observation plus the fractional part times the difference between that observation and the next one. For example, if \frac{N+1}{4}=5.5, then q_{1}=5^{\text{th}} \text{ observation } + 0.5\times \left(6^{\text{th}} \text{ observation} - 5^{\text{th}} \text{ observation}\right).

Now comes the turn of the interquartile range. It is defined as the difference between the upper quartile and the lower quartile. Let's see some worked examples.

Find q_{1}, q_{3}, and the interquartile range of the following distribution: 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60.

The values are already arranged in ascending order, so we can calculate q_{1} using the formula. As there are 11 observations, we substitute N=11 into \left(\frac{N+1}{4}\right)^{\text{th}} and find that q_{1} is the 3^{\text{rd}} observation, which is 20. Similarly, q_{3} is the 9^{\text{th}} observation, which is 50. So the interquartile range is 50-20=30. _\square

Find q_{1}, q_{3}, and the interquartile range of 28, 18, 20, 24, 30, 15, 47, 27.

Arranging the values in ascending order, we get 15, 18, 20, 24, 27, 28, 30, 47. Calculating q_{1}, we get the 2.25^{\text{th}} observation as our q_{1}.
To calculate the 2.25^{\text{th}} observation, we use the technique given in the note above:

q_{1}=2^{\text{nd}} \text{ observation } +0.25\times \left(3^{\text{rd}} \text{ observation }-2^{\text{nd}}\text{ observation }\right),

from which we get q_{1}=18.5. Doing the same for q_{3}, we get q_{3}=29.5. Hence, the interquartile range is 29.5-18.5=11. \ _\square

Cite as: Interquartile Range (IQR). Brilliant.org. Retrieved from https://brilliant.org/wiki/data-interquartile-range/
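The (N+1)/4 rule with its interpolation step can be sketched in Python. Note that library defaults (for example NumPy's quantile) use different interpolation conventions, so their results may differ slightly from this rule:

```python
def quartile(data, k):
    """k-th quartile (k = 1 or 3) using the (N+1)/4 rule with linear interpolation."""
    xs = sorted(data)
    pos = k * (len(xs) + 1) / 4            # 1-based position in the sorted data
    idx, frac = int(pos), pos - int(pos)   # integer and fractional parts
    lower = xs[idx - 1]
    if frac == 0:
        return lower
    return lower + frac * (xs[idx] - lower)

data = [28, 18, 20, 24, 30, 15, 47, 27]
q1, q3 = quartile(data, 1), quartile(data, 3)
iqr = q3 - q1    # 18.5, 29.5, and IQR = 11.0, matching the worked example
```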
Logic Puzzles - Advanced | Brilliant Math & Science Wiki

Congrats on making it to the advanced page! Be prepared for a challenge that will test your deductive reasoning prowess. You will need a magnifying glass to hunt for clues, a bloodhound's nose to follow the leads, and a thinking cap to figure it all out. Watch out for any unnecessary information that might mislead you, and keep your eyes focused on the goal. The hunt is on!

Read through the passage carefully and make note of the important information. Work from the information that you are given and think about its implications. If you're stuck going forward, try working backward!

One day a mad scientist lined up Andy, Brandy, Candy, and Dandy in a row, so that each of them could see the ones in front of them but not those behind. Andy was able to see everyone else, while Dandy couldn't see anyone. Then the mad scientist declared,

Starting from the back (Andy first), he asked each of them in turn what the color of their hat was. To his surprise, they all were able to correctly deduce the color of their hat based on the responses that they heard.

Let's work from the information that we are given and record the implications:

If Andy had seen hats of 3 different colors, then he would not have been able to deduce his own hat color. Thus, he saw 2 hats of the same color and 1 hat of a different color.

If Brandy had seen 2 hats of different colors, then he would not have been able to determine his own hat color. Thus, he must have seen 2 hats of the same color, and then called out the remaining color. Thus, Candy and Dandy had hats of the same color. _\square

Back to Quiz: Logic Puzzles - Advanced

Order theory: Given how certain terms compare to each other, we have to find the largest or smallest term. Given various comparisons, we have to decide which term is the largest or the smallest. Drawing a flowchart can be helpful, as it offers a visual way for us to get organized.
Elimination grids: Setting up the information in a grid offers an easy way of displaying the interactions within the information. Cross out scenarios that cannot be true, and "when you have eliminated the impossible, whatever remains, however improbable, must be the truth."

Information compression: The information has been compressed, which allows people to communicate extremely effectively, and which can then stump an outsider. We have to tease apart the process in order to determine the best way to package the information.

K-level thinking: When events occur sequentially, we can make use of the information gleaned in the prior step to restrict the possibilities of the next step. By listing out all possible scenarios, we can easily work through the implications and figure out the true scenario.

Cite as: Logic Puzzles - Advanced. Brilliant.org. Retrieved from https://brilliant.org/wiki/logic-puzzles-advanced/
ANALYTIC GEOMETRY - Encyclopedia Information

Analytic geometry. This article is about co-ordinate geometry. For the study of analytic varieties, see Algebraic geometry § Analytic geometry.

The Greek mathematician Menaechmus solved problems and proved theorems by using a method that had a strong resemblance to the use of coordinates, and it has sometimes been maintained that he had introduced analytic geometry.[1] Apollonius of Perga, in On Determinate Section, dealt with problems in a manner that may be called an analytic geometry of one dimension: the question of finding points on a line that were in a given ratio to the others.[2] Apollonius in the Conics further developed a method that is so similar to analytic geometry that his work is sometimes thought to have anticipated the work of Descartes by some 1800 years. His application of reference lines, a diameter, and a tangent is essentially no different from our modern use of a coordinate frame, where the distances measured along the diameter from the point of tangency are the abscissas, and the segments parallel to the tangent and intercepted between the axis and the curve are the ordinates. He further developed relations between the abscissas and the corresponding ordinates that are equivalent to rhetorical equations of curves. However, although Apollonius came close to developing analytic geometry, he did not manage to do so, since he did not take into account negative magnitudes, and in every case the coordinate system was superimposed upon a given curve a posteriori instead of a priori.
That is, equations were determined by curves, but curves were not determined by equations. Coordinates, variables, and equations were subsidiary notions applied to a specific geometric situation.[3] The 11th-century Persian mathematician Omar Khayyam saw a strong relationship between geometry and algebra and was moving in the right direction when he helped close the gap between numerical and geometric algebra[4] with his geometric solution of the general cubic equations,[5] but the decisive step came later with Descartes.[4] Omar Khayyam is credited with identifying the foundations of algebraic geometry, and his book Treatise on Demonstrations of Problems of Algebra (1070), which laid down the principles of analytic geometry, is part of the body of Persian mathematics that was eventually transmitted to Europe.[6] Because of his thoroughgoing geometrical approach to algebraic equations, Khayyam can be considered a precursor to Descartes in the invention of analytic geometry.[7]: 248  Analytic geometry was independently invented by René Descartes and Pierre de Fermat,[8][9] although Descartes is sometimes given sole credit.[10][11] Cartesian geometry, the alternative term used for analytic geometry, is named after Descartes. Descartes made significant progress with the methods in an essay titled La Géométrie (Geometry), one of the three accompanying essays (appendices) published in 1637 together with his Discourse on the Method for Rightly Directing One's Reason and Searching for Truth in the Sciences, commonly referred to as Discourse on Method. La Géométrie, written in his native French tongue, together with its philosophical principles, provided a foundation for calculus in Europe. Initially the work was not well received, due in part to the many gaps in arguments and complicated equations.
Only after the translation into Latin and the addition of commentary by van Schooten in 1649 (and further work thereafter) did Descartes's masterpiece receive due recognition.[12] Pierre de Fermat also pioneered the development of analytic geometry. Although not published in his lifetime, a manuscript form of Ad locos planos et solidos isagoge (Introduction to Plane and Solid Loci) was circulating in Paris in 1637, just prior to the publication of Descartes's Discourse.[13][14][15] Clearly written and well received, the Introduction also laid the groundwork for analytical geometry. The key difference between Fermat's and Descartes's treatments is a matter of viewpoint: Fermat always started with an algebraic equation and then described the geometric curve that satisfied it, whereas Descartes started with geometric curves and produced their equations as one of several properties of the curves.[12] As a consequence of this approach, Descartes had to deal with more complicated equations, and he had to develop methods to work with polynomial equations of higher degree. It was Leonhard Euler who first applied the coordinate method in a systematic study of space curves and surfaces.

In analytic geometry, the plane is given a coordinate system, by which every point has a pair of real number coordinates. Similarly, Euclidean space is given coordinates where every point has three coordinates. The value of the coordinates depends on the choice of the initial point of origin. There are a variety of coordinate systems used, but the most common are the following:[16]

In polar coordinates, every point of the plane is represented by its distance r from the origin and its angle θ, with x=r\,\cos \theta ,\,y=r\,\sin \theta ;\,r={\sqrt {x^{2}+y^{2}}},\,\theta =\arctan(y/x).

In spherical coordinates, every point in space is represented by its distance ρ from the origin, the angle θ its projection on the xy-plane makes with respect to the horizontal axis, and the angle φ that it makes with respect to the z-axis. The names of the angles are often reversed in physics.
[16] In analytic geometry, any equation involving the coordinates specifies a subset of the plane, namely the solution set for the equation, or locus. For example, the equation y = x corresponds to the set of all the points on the plane whose x-coordinate and y-coordinate are equal. These points form a line, and y = x is said to be the equation for this line. In general, linear equations involving x and y specify lines, quadratic equations specify conic sections, and more complicated equations describe more complicated figures. [17] Usually, a single equation corresponds to a curve on the plane. This is not always the case: the trivial equation x = x specifies the entire plane, and the equation x2 + y2 = 0 specifies only the single point (0, 0). In three dimensions, a single equation usually gives a surface, and a curve must be specified as the intersection of two surfaces (see below), or as a system of parametric equations. [18] The equation x2 + y2 = r2 is the equation for any circle centered at the origin (0, 0) with a radius of r. In the plane, a line with slope m and y-intercept b has the equation {\displaystyle y=mx+b}. In three dimensions, a plane can be described from a point and a normal vector: let {\displaystyle \mathbf {r} _{0}} be the position vector of some point {\displaystyle P_{0}=(x_{0},y_{0},z_{0})}, and let {\displaystyle \mathbf {n} =(a,b,c)} be a nonzero vector. The plane determined by this point and vector consists of those points {\displaystyle P}, with position vector {\displaystyle \mathbf {r} }, such that the vector drawn from {\displaystyle P_{0}} to {\displaystyle P} is perpendicular to {\displaystyle \mathbf {n} }.
Recalling that two vectors are perpendicular if and only if their dot product is zero, it follows that the desired plane can be described as the set of all points {\displaystyle \mathbf {r} } such that {\displaystyle \mathbf {n} \cdot (\mathbf {r} -\mathbf {r} _{0})=0.} Expanded, this becomes {\displaystyle a(x-x_{0})+b(y-y_{0})+c(z-z_{0})=0,} which is the point-normal form of the equation of a plane.[ citation needed] This is just a linear equation: {\displaystyle ax+by+cz+d=0,{\text{ where }}d=-(ax_{0}+by_{0}+cz_{0}).} Conversely, the locus of any equation {\displaystyle ax+by+cz+d=0} is a plane having the vector {\displaystyle \mathbf {n} =(a,b,c)} as a normal.[ citation needed] This familiar equation for a plane is called the general form of the equation of the plane. [19] A line through a point (x0, y0, z0) with direction vector (a, b, c) can be written parametrically: {\displaystyle x=x_{0}+at} {\displaystyle y=y_{0}+bt} {\displaystyle z=z_{0}+ct} A conic section is the locus of {\displaystyle Ax^{2}+Bxy+Cy^{2}+Dx+Ey+F=0{\text{ with }}A,B,C{\text{ not all zero.}}} As scaling all six constants yields the same locus of zeros, one can consider conics as points in the five-dimensional projective space {\displaystyle \mathbf {P} ^{5}.} The conic sections described by this equation can be classified using the discriminant [20] {\displaystyle B^{2}-4AC.} If {\displaystyle B^{2}-4AC<0}, the equation represents an ellipse; if additionally {\displaystyle A=C} and {\displaystyle B=0}, the equation represents a circle, which is a special case of an ellipse. If {\displaystyle B^{2}-4AC=0}, the equation represents a parabola. If {\displaystyle B^{2}-4AC>0}, the equation represents a hyperbola; if additionally {\displaystyle A+C=0}, it represents a rectangular hyperbola. A quadric, or quadric surface, is a 2-dimensional surface in 3-dimensional space defined as the locus of zeros of a quadratic polynomial. In coordinates x1, x2, x3, the general quadric is defined by the algebraic equation [21] {\displaystyle \sum _{i,j=1}^{3}x_{i}Q_{ij}x_{j}+\sum _{i=1}^{3}P_{i}x_{i}+R=0.} The distance between two points (x1, y1) and (x2, y2) in the plane is {\displaystyle d={\sqrt {(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}},} and the inclination angle of a line with slope m is {\displaystyle \theta =\arctan(m).} In three dimensions, the distance is {\displaystyle d={\sqrt {(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}+(z_{2}-z_{1})^{2}}},} while the angle between two vectors is given by the dot product.
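The discriminant classification of conics can be written as a short function. This is a minimal sketch (the function name is illustrative) that assumes the conic is non-degenerate:

```python
def classify_conic(A, B, C):
    """Classify the non-degenerate conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
    by its discriminant B^2 - 4AC."""
    disc = B * B - 4 * A * C
    if disc < 0:
        # negative discriminant: ellipse; circle when A == C and B == 0
        return "circle" if A == C and B == 0 else "ellipse"
    if disc == 0:
        return "parabola"
    # positive discriminant: hyperbola; rectangular when A + C == 0
    return "rectangular hyperbola" if A + C == 0 else "hyperbola"
```

For example, `classify_conic(1, 0, 1)` (the circle x² + y² = r²) returns `"circle"`, while `classify_conic(1, 0, -1)` (x² − y² = 1) returns `"rectangular hyperbola"`.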
The dot product of two Euclidean vectors A and B is defined by [22] {\displaystyle \mathbf {A} \cdot \mathbf {B} {\stackrel {\mathrm {def} }{=}}\left\|\mathbf {A} \right\|\left\|\mathbf {B} \right\|\cos \theta ,} where θ is the angle between the two vectors. The graph of a relation {\displaystyle R(x,y)} is changed by standard transformations as follows: Changing {\displaystyle x} to {\displaystyle x-h} moves the graph to the right by {\displaystyle h} units. Changing {\displaystyle y} to {\displaystyle y-k} moves the graph up by {\displaystyle k} units. Changing {\displaystyle x} to {\displaystyle x/b} stretches the graph horizontally by a factor of {\displaystyle b} (think of the {\displaystyle x} as being dilated). Changing {\displaystyle y} to {\displaystyle y/a} stretches the graph vertically by a factor of {\displaystyle a}. Changing {\displaystyle x} to {\displaystyle x\cos A+y\sin A} and {\displaystyle y} to {\displaystyle -x\sin A+y\cos A} rotates the graph by an angle {\displaystyle A}. For example, the parent function {\displaystyle y=1/x} has a horizontal and a vertical asymptote and occupies the first and third quadrants, and all of its transformed forms have one horizontal and one vertical asymptote and occupy either the 1st and 3rd or the 2nd and 4th quadrants. In general, if {\displaystyle y=f(x)}, then it can be transformed into {\displaystyle y=af(b(x-k))+h}. In the new transformed function, {\displaystyle a} is the factor that vertically stretches the function if it is greater than 1 or vertically compresses the function if it is less than 1; for negative {\displaystyle a} values, the function is reflected in the {\displaystyle x}-axis. The {\displaystyle b} value compresses the graph of the function horizontally if greater than 1 and stretches the function horizontally if less than 1, and, like {\displaystyle a}, reflects the function in the {\displaystyle y}-axis when it is negative. The {\displaystyle k} and {\displaystyle h} values introduce translations: {\displaystyle h} vertical, and {\displaystyle k} horizontal.
Positive {\displaystyle h} and {\displaystyle k} values mean the function is translated towards the positive end of its axis, and negative values mean translation towards the negative end. Suppose {\displaystyle R(x,y)} is a relation in the {\displaystyle xy} plane; for example, {\displaystyle x^{2}+y^{2}-1=0} is the relation describing the unit circle. For two geometric objects P and Q represented by the relations {\displaystyle P(x,y)} and {\displaystyle Q(x,y)}, the intersection is the collection of all points {\displaystyle (x,y)} which are in both relations. [23] For example, {\displaystyle P} might be the circle with radius 1 and center {\displaystyle (0,0)}: {\displaystyle P=\{(x,y)|x^{2}+y^{2}=1\}}, and {\displaystyle Q} might be the circle with radius 1 and center {\displaystyle (1,0):Q=\{(x,y)|(x-1)^{2}+y^{2}=1\}}. The intersection of these two circles is the collection of points which make both equations true. Does the point {\displaystyle (0,0)} make both equations true? Using {\displaystyle (0,0)} for {\displaystyle (x,y)}, the equation for {\displaystyle Q} becomes {\displaystyle (0-1)^{2}+0^{2}=1}, or {\displaystyle (-1)^{2}=1}, which is true, so {\displaystyle (0,0)} is in the relation {\displaystyle Q}. On the other hand, still using {\displaystyle (0,0)} for {\displaystyle (x,y)}, the equation for {\displaystyle P} becomes {\displaystyle 0^{2}+0^{2}=1}, or {\displaystyle 0=1}, which is false. {\displaystyle (0,0)} is not in {\displaystyle P}, so it is not in the intersection.
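The general transformation y = a·f(b(x − k)) + h described above can be sketched as a higher-order function (a minimal sketch; the name `transform` is illustrative):

```python
def transform(f, a=1.0, b=1.0, h=0.0, k=0.0):
    """Return the transformed function y = a*f(b*(x - k)) + h:
    a stretches/reflects vertically, b compresses/reflects horizontally,
    k translates horizontally, h translates vertically."""
    return lambda x: a * f(b * (x - k)) + h

# Transform the parent function y = 1/x into y = 2/(x - 3) + 1:
# vertical stretch by 2, shift right 3, shift up 1.
g = transform(lambda x: 1.0 / x, a=2.0, k=3.0, h=1.0)
```

As the article notes, the transformed function keeps one horizontal and one vertical asymptote: here they move to y = 1 and x = 3.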
The intersection of {\displaystyle P} and {\displaystyle Q} can be found by solving the simultaneous equations: {\displaystyle x^{2}+y^{2}=1} {\displaystyle (x-1)^{2}+y^{2}=1.} Substitution: Solve the first equation for {\displaystyle y^{2}} in terms of {\displaystyle x} and then substitute that expression into the second equation: {\displaystyle x^{2}+y^{2}=1} gives {\displaystyle y^{2}=1-x^{2}.} We then substitute this value for {\displaystyle y^{2}} into the other equation and proceed to solve for {\displaystyle x}: {\displaystyle (x-1)^{2}+(1-x^{2})=1} {\displaystyle x^{2}-2x+1+1-x^{2}=1} {\displaystyle -2x=-1} {\displaystyle x=1/2.} Next, we place this value of {\displaystyle x} in either of the original equations and solve for {\displaystyle y}: {\displaystyle (1/2)^{2}+y^{2}=1} {\displaystyle y^{2}=3/4} {\displaystyle y={\frac {\pm {\sqrt {3}}}{2}}.} The intersection is therefore the two points {\displaystyle \left(1/2,{\frac {+{\sqrt {3}}}{2}}\right)\;\;{\text{and}}\;\;\left(1/2,{\frac {-{\sqrt {3}}}{2}}\right).} Elimination: Add (or subtract) a multiple of one equation to the other equation so that one of the variables is eliminated. For our current example, if we subtract the first equation from the second we get {\displaystyle (x-1)^{2}-x^{2}=0:} the {\displaystyle y^{2}} in the first equation is subtracted from the {\displaystyle y^{2}} in the second equation, leaving no {\displaystyle y} term. The variable {\displaystyle y} has been eliminated.
We then solve the remaining equation for {\displaystyle x}, in the same way as in the substitution method: {\displaystyle x^{2}-2x+1+1-x^{2}=1} {\displaystyle -2x=-1} {\displaystyle x=1/2.} We then place this value of {\displaystyle x} in either of the original equations and solve for {\displaystyle y}: {\displaystyle (1/2)^{2}+y^{2}=1} {\displaystyle y^{2}=3/4} {\displaystyle y={\frac {\pm {\sqrt {3}}}{2}}.} The intersection is again the two points {\displaystyle \left(1/2,{\frac {+{\sqrt {3}}}{2}}\right)\;\;{\text{and}}\;\;\left(1/2,{\frac {-{\sqrt {3}}}{2}}\right).} One type of intersection which is widely studied is the intersection of a geometric object with the {\displaystyle x} and {\displaystyle y} coordinate axes. The intersection of a geometric object and the {\displaystyle y}-axis is called the {\displaystyle y}-intercept of the object. The intersection of a geometric object and the {\displaystyle x}-axis is called the {\displaystyle x}-intercept of the object. For the line {\displaystyle y=mx+b}, the parameter {\displaystyle b} specifies the point where the line crosses the {\displaystyle y} axis. Depending on the context, either {\displaystyle b} or the point {\displaystyle (0,b)} is called the {\displaystyle y}-intercept. Bissell, Christopher C. (1987), "Cartesian geometry: The Dutch contribution", The Mathematical Intelligencer, 9: 38–44, doi: 10.1007/BF03023730 Boyer, Carl B. (1944), "Analytic Geometry: The Discovery of Fermat and Descartes", Mathematics Teacher, 37 (3): 99–105, doi: 10.5951/MT.37.3.0099 Boyer, Carl B. (1965), "Johann Hudde and space coordinates", Mathematics Teacher, 58 (1): 33–36, doi: 10.5951/MT.58.1.0033 Coolidge, J. L. (1948), "The Beginnings of Analytic Geometry in Three Dimensions", American Mathematical Monthly, 55 (2): 76–86, doi: 10.2307/2305740, JSTOR 2305740
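The worked circle-intersection example can be verified numerically (a minimal sketch; the function name is illustrative):

```python
import math

def unit_circle_intersection():
    """Intersect x^2 + y^2 = 1 with (x-1)^2 + y^2 = 1 by elimination:
    subtracting the equations gives (x-1)^2 - x^2 = 0, i.e. -2x + 1 = 0,
    so x = 1/2; back-substitution gives y^2 = 3/4."""
    x = 0.5
    y = math.sqrt(1.0 - x * x)
    return [(x, y), (x, -y)]

points = unit_circle_intersection()  # (1/2, +sqrt(3)/2) and (1/2, -sqrt(3)/2)
```

Plugging each returned point back into both circle equations confirms the result of the substitution and elimination methods.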
Box Plots Practice Problems Online | Brilliant Which of the following is closest to the largest value in the data set represented by the box plot above? Choices: cannot tell, 18, 10, 17, 12. Consider the box plot above, which was generated from a set of 9 values. If the largest possible mean of the numbers (i.e. their arithmetic average) is m, which is closest to 9m? Hint: think of what values the box plot tells us must be in the data set. What is the mean of the numbers in the data set represented by the box plot above? (Note: the mean of some numbers is the arithmetic average of them.) Choices: 9.5, 9.8, 10, 11, cannot tell. Let the box plot above represent a very big data set. Which is closest to the percentage of the data points in the set of data that are less than or equal to 7?
Estimate three-phase sinusoidal characteristics using a phase-locked loop - Simulink - MathWorks y\left(t\right)=A\left(t\right)\mathrm{sin}\left({\mathrm{ϕ}}_{0}+∫2\mathrm{π}f\left(t\right)dt\right), where ϕ0 is the initial phase angle of the input signal. Because the input signal is assumed to be balanced, the block calculates the amplitude directly from the instantaneous amplitude of the three phases. The estimated phase angle ϕ is the angle of this generated sinusoid: \mathrm{ϕ}\left(t\right)={\mathrm{ϕ}}_{0}+∫2\mathrm{π}f\left(t\right)dt, where f is the frequency of the sinusoid, and ϕ0 is the initial phase angle. The phase detector produces an error signal relative to the phase difference eϕ between the input sinusoid u and the synthesized sinusoid y. It also outputs the amplitude A. The loop filter provides an estimate of the input angular frequency ω by filtering out the high-frequency components of the phase difference. The block also outputs the converted frequency f in Hz. The voltage-controlled oscillator integrates the angular speed to produce the phase estimate ϕ, which it sends to the Phase Detector for comparison.
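The statement that a balanced three-phase input lets the block compute amplitude and angle directly can be illustrated with the Clarke (alpha–beta) transform, a common building block of three-phase PLLs. This is an assumption-laden sketch, not the MathWorks implementation: it uses a cosine phase reference and the amplitude-invariant Clarke form, and the function name is illustrative.

```python
import math

def three_phase_angle(va, vb, vc):
    """Instantaneous amplitude and angle of a balanced three-phase set
    va = A*cos(theta), vb = A*cos(theta - 2pi/3), vc = A*cos(theta + 2pi/3),
    via the amplitude-invariant Clarke transform."""
    alpha = (2.0 * va - vb - vc) / 3.0      # equals A*cos(theta)
    beta = (vb - vc) / math.sqrt(3.0)       # equals A*sin(theta)
    return math.hypot(alpha, beta), math.atan2(beta, alpha)

A, theta = 2.0, 0.7
amp, phase = three_phase_angle(
    A * math.cos(theta),
    A * math.cos(theta - 2.0 * math.pi / 3.0),
    A * math.cos(theta + 2.0 * math.pi / 3.0),
)
```

In the block described above, this raw angle would then be refined by the phase detector, loop filter, and voltage-controlled oscillator rather than used directly.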
Power (Physics) | James's Knowledge Graph In physics, power (P) is the rate at which work is done (or energy is transferred) over time (t). Recall that work is a change in energy; therefore power is the change in energy (E) over the change in time, or P=\Delta{E}/\Delta{t}. Power can also be expressed as the product of force (F) and velocity (V): P=FV, because force is energy transferred to an object (a change in energy) and velocity is the change in position over time (a change in time). Power is measured in watts. One watt is equal to 1 joule per second, or W=J/s. Video: Power - Physics 101 / AP Physics Review Deeper Knowledge on Power (Physics): energy that results from charged particles; Watt's Law (Power Law), a formula to define the relationship between power, voltage, and current (P=IV). Broader Topics Related to Power (Physics): the fundamental nature and properties of matter, energy, and motion.
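The two definitions of power above can be sketched directly (the function names are illustrative):

```python
def power_from_energy(delta_e, delta_t):
    """P = dE/dt: average power in watts from an energy change (J)
    over a time interval (s)."""
    return delta_e / delta_t

def power_from_force(force, velocity):
    """P = F*V: power in watts delivered by a force (N) acting on an
    object moving at velocity (m/s)."""
    return force * velocity

# 600 J transferred over 30 s, or a 50 N force at 2 m/s:
p1 = power_from_energy(600.0, 30.0)   # 20 W
p2 = power_from_force(50.0, 2.0)      # 100 W
```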
Archimedean Property | Brilliant Math & Science Wiki This page has been proposed for an upcoming wiki collaboration. It is currently under construction, and you can help by adding examples of what you think should be on this page. Let ( S, \circ ) be a closed algebraic structure with a norm (e.g. real numbers with absolute value as a norm). Then the norm n satisfies the Archimedean property on S if \forall a, b \in S, n(a) < n(b) \Rightarrow \exists m \in N \text{ such that } n ( m \cdot a) > n (b). Corollary: For all real numbers r, there exists a natural number n with n > r. Note: The field of rational functions of x does not satisfy the Archimedean property. Forms of Completeness Frequently, Dedekind cuts are used axiomatically to construct the reals. The reals have Dedekind completeness; the rationals do not. The real number system has the property that every non-empty subset of R which is bounded above has a least upper bound. This property is called the Least Upper Bound Property. Cauchy completeness is the statement that every Cauchy sequence of real numbers converges. A Cauchy sequence is a sequence whose elements become arbitrarily close to each other as the sequence progresses: for a given \epsilon>0 there exists N such that |x_m-x_n|<\epsilon\ \forall\ m,n>N. For each n, let I_n = [a_n, b_n] be a (non-empty) bounded interval of real numbers such that I_1 \supset\ I_2 \supset I_3 \supset \cdots \supset I_n \supset I_{n+1} \supset \cdots and \displaystyle\lim_{n\rightarrow \infty} (b_n-a_n)=0. Then \displaystyle\bigcap_{n=1}^{\infty} I_n contains only one point. The nested interval theorem states that the intersection of all of the intervals I_n is non-empty. (Source: Springer's "Real Analysis and Applications".) The monotone convergence theorem states that every nondecreasing, bounded sequence of real numbers converges. The Bolzano–Weierstrass theorem states that every bounded sequence of real numbers has a convergent subsequence.
Prove that the real numbers satisfy the Archimedean property. Prove that the hyperreal numbers do not satisfy the Archimedean property. Existence of a rational/irrational number between two real numbers. The uncountability of the reals. Properties of inequalities in the reals: for every x\in R there is a unique integer m such that m\le x < m+1, and a unique integer l such that x<l\le x+1. For reals \alpha<\beta there exist n_1,n_2 \in N such that \alpha<\alpha+\frac{1}{n_1}<\beta and \alpha<\beta-\frac{1}{n_2}<\beta. Cite as: Archimedean Property. Brilliant.org. Retrieved from https://brilliant.org/wiki/archimedean-property/
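The corollary above — for every real r there is a natural number n > r — is constructive for floats, and the witness can be computed (a small illustrative sketch; the function name is an assumption):

```python
import math

def smallest_n_exceeding(r):
    """Archimedean property witness: return the smallest natural number n
    (taking naturals to start at 1) with n > r."""
    return max(1, math.floor(r) + 1)

# floor(r) + 1 always exceeds r; clamp to 1 for negative r.
n1 = smallest_n_exceeding(3.2)    # 4
n2 = smallest_n_exceeding(5.0)    # 6, since n must be strictly greater
n3 = smallest_n_exceeding(-2.7)   # 1
```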
Test_statistic Knowpia Two widely used test statistics are the t-statistic and the F-test. Suppose the task is to test whether a coin is fair (i.e. has equal probabilities of producing a head or a tail). If the coin is flipped 100 times and the results are recorded, the raw data can be represented as a sequence of 100 heads and tails. If there is interest in the marginal probability of obtaining a tail, only the number T out of the 100 flips that produced a tail needs to be recorded. But T can also be used as a test statistic in one of two ways: the value of T can be compared with its expected value under the null hypothesis of 50, and since the sample size is large, a normal distribution can be used as an approximation to the sampling distribution either for T or for the revised test statistic T−50. Common test statisticsEdit One-sample tests are appropriate when a sample is being compared to the population from a hypothesis. The population characteristics are known from theory or are calculated from the population. Two-sample tests are appropriate for comparing two samples, typically experimental and control samples from a scientifically controlled experiment. Paired tests are appropriate for comparing two samples where it is impossible to control important variables. Rather than comparing two sets, members are paired between samples so the difference between the members becomes the sample. Typically the mean of the differences is then compared to zero. The common example scenario for when a paired difference test is appropriate is when a single set of test subjects has something applied to them and the test is intended to check for an effect. Z-tests are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation. A t-test is appropriate for comparing means under relaxed conditions (less is assumed). Tests of proportions are analogous to tests of means (the 50% proportion). 
Chi-squared tests use the same calculations and the same probability distribution for different applications: Chi-squared tests for variance are used to determine whether a normal population has a specified variance. The null hypothesis is that it does. Chi-squared tests of independence are used for deciding whether two variables are associated or are independent. The variables are categorical rather than numeric. It can be used to decide whether left-handedness is correlated with height (or not). The null hypothesis is that the variables are independent. The numbers used in the calculation are the observed and expected frequencies of occurrence (from contingency tables). Chi-squared goodness of fit tests are used to determine the adequacy of curves fit to data. The null hypothesis is that the curve fit is adequate. It is common to determine curve shapes to minimize the mean square error, so it is appropriate that the goodness-of-fit calculation sums the squared errors. F-tests (analysis of variance, ANOVA) are commonly used when deciding whether groupings of data by category are meaningful. If the variance of test scores of the left-handed in a class is much smaller than the variance of the whole class, then it may be useful to study lefties as a group. The null hypothesis is that two variances are the same – so the proposed grouping is not meaningful. In the table below, the symbols used are defined at the bottom of the table. Many other tests can be found in other articles. Proofs exist that the test statistics are appropriate.[2] Assumptions or notes One-sample z-test {\displaystyle z={\frac {{\overline {x}}-\mu _{0}}{({\sigma }/{\sqrt {n}})}}} (Normal population or n large) and σ known. (z is the distance from the mean in relation to the standard deviation of the mean). For non-normal distributions it is possible to calculate a minimum proportion of a population that falls within k standard deviations for any k (see: Chebyshev's inequality). 
Two-sample z-test: {\displaystyle z={\frac {({\overline {x}}_{1}-{\overline {x}}_{2})-d_{0}}{\sqrt {{\frac {\sigma _{1}^{2}}{n_{1}}}+{\frac {\sigma _{2}^{2}}{n_{2}}}}}}} — Normal populations, independent observations, and σ1 and σ2 known, where {\displaystyle d_{0}} is the hypothesized value of {\displaystyle \mu _{1}-\mu _{2}}.
One-sample t-test: {\displaystyle t={\frac {{\overline {x}}-\mu _{0}}{(s/{\sqrt {n}})}},} {\displaystyle df=n-1\ } — (Normal population or n large) and {\displaystyle \sigma } unknown.
Paired t-test: {\displaystyle t={\frac {{\overline {d}}-d_{0}}{(s_{d}/{\sqrt {n}})}},} {\displaystyle df=n-1\ } — (Normal population of differences or n large) and {\displaystyle \sigma } unknown.
Two-sample pooled t-test, equal variances: {\displaystyle t={\frac {({\overline {x}}_{1}-{\overline {x}}_{2})-d_{0}}{s_{p}{\sqrt {{\frac {1}{n_{1}}}+{\frac {1}{n_{2}}}}}}},} {\displaystyle s_{p}^{2}={\frac {(n_{1}-1)s_{1}^{2}+(n_{2}-1)s_{2}^{2}}{n_{1}+n_{2}-2}},} {\displaystyle df=n_{1}+n_{2}-2\ } — (Normal populations or n1 + n2 > 40), independent observations, and σ1 = σ2 unknown.
Two-sample unpooled t-test, unequal variances (Welch's t-test): {\displaystyle t={\frac {({\overline {x}}_{1}-{\overline {x}}_{2})-d_{0}}{\sqrt {{\frac {s_{1}^{2}}{n_{1}}}+{\frac {s_{2}^{2}}{n_{2}}}}}},} {\displaystyle df={\frac {\left({\frac {s_{1}^{2}}{n_{1}}}+{\frac {s_{2}^{2}}{n_{2}}}\right)^{2}}{{\frac {\left({\frac {s_{1}^{2}}{n_{1}}}\right)^{2}}{n_{1}-1}}+{\frac {\left({\frac {s_{2}^{2}}{n_{2}}}\right)^{2}}{n_{2}-1}}}}} — (Normal populations or n1 + n2 > 40), independent observations, and σ1 ≠ σ2 both unknown.
One-proportion z-test: {\displaystyle z={\frac {{\hat {p}}-p_{0}}{\sqrt {p_{0}(1-p_{0})}}}{\sqrt {n}}} — n·p0 > 10 and n(1 − p0) > 10, and it is a SRS (simple random sample), see notes.
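Welch's unpooled t-test from the table above can be sketched directly from its two formulas, the statistic and the Welch–Satterthwaite degrees of freedom (a minimal illustration with d0 = 0; the function name is an assumption):

```python
import math

def welch_t(x1, x2):
    """Welch's two-sample t statistic and approximate degrees of freedom
    for independent samples with unequal, unknown variances (d0 = 0)."""
    n1, n2 = len(x1), len(x2)
    m1 = sum(x1) / n1
    m2 = sum(x2) / n2
    v1 = sum((x - m1) ** 2 for x in x1) / (n1 - 1)   # sample variance s1^2
    v2 = sum((x - m2) ** 2 for x in x2) / (n2 - 1)   # sample variance s2^2
    se2 = v1 / n1 + v2 / n2                           # squared standard error
    t = (m1 - m2) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for df
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

t, df = welch_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
```

For these two equal-variance samples the approximation recovers df = n1 + n2 − 2 = 8, matching the pooled test.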
Two-proportion z-test, pooled for {\displaystyle H_{0}\colon p_{1}=p_{2}}: {\displaystyle z={\frac {({\hat {p}}_{1}-{\hat {p}}_{2})}{\sqrt {{\hat {p}}(1-{\hat {p}})({\frac {1}{n_{1}}}+{\frac {1}{n_{2}}})}}}} with {\displaystyle {\hat {p}}={\frac {x_{1}+x_{2}}{n_{1}+n_{2}}}} — n1 p1 > 5, n1(1 − p1) > 5, n2 p2 > 5, n2(1 − p2) > 5, and independent observations, see notes.
Two-proportion z-test, unpooled for {\displaystyle |d_{0}|>0}: {\displaystyle z={\frac {({\hat {p}}_{1}-{\hat {p}}_{2})-d_{0}}{\sqrt {{\frac {{\hat {p}}_{1}(1-{\hat {p}}_{1})}{n_{1}}}+{\frac {{\hat {p}}_{2}(1-{\hat {p}}_{2})}{n_{2}}}}}}}
Chi-squared test for variance: {\displaystyle \chi ^{2}=(n-1){\frac {s^{2}}{\sigma _{0}^{2}}}} — normal population.
Chi-squared test for goodness of fit: {\displaystyle \chi ^{2}=\sum ^{k}{\frac {({\text{observed}}-{\text{expected}})^{2}}{\text{expected}}}} — df = k − 1 − # parameters estimated, and one of these must hold: all expected counts are at least 5,[4] or all expected counts are > 1 and no more than 20% of expected counts are less than 5.[5]
Two-sample F test for equality of variances: {\displaystyle F={\frac {s_{1}^{2}}{s_{2}^{2}}}} — normal populations; arrange so {\displaystyle s_{1}^{2}\geq s_{2}^{2}} and reject H0 for {\displaystyle F>F(\alpha /2,n_{1}-1,n_{2}-1)}.
Regression t-test of {\displaystyle H_{0}\colon R^{2}=0}: {\displaystyle t={\sqrt {\frac {R^{2}(n-k-1^{*})}{1-R^{2}}}}} — reject H0 for {\displaystyle t>t(\alpha /2,n-k-1^{*})}; *subtract 1 for intercept; k terms contain independent variables.
In general, the subscript 0 indicates a value taken from the null hypothesis, H0, which should be used as much as possible in constructing its test statistic.
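The chi-squared goodness-of-fit statistic above is a one-line sum over categories (a minimal sketch; the function name is illustrative):

```python
def chi_squared_gof(observed, expected):
    """Chi-squared goodness-of-fit statistic:
    sum over the k categories of (observed - expected)^2 / expected."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 45 heads and 55 tails against a fair-coin expectation of 50/50:
chi2 = chi_squared_gof([45, 55], [50.0, 50.0])   # 25/50 + 25/50 = 1.0
```

The result is then compared against the chi-squared distribution with df = k − 1 − (number of estimated parameters), subject to the expected-count conditions listed above.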
Definitions of other symbols: {\displaystyle \alpha } = the probability of Type I error (rejecting a null hypothesis when it is in fact true); {\displaystyle n} = sample size; {\displaystyle n_{1}} = sample 1 size; {\displaystyle n_{2}} = sample 2 size; {\displaystyle {\overline {x}}} = sample mean; {\displaystyle \mu _{0}} = hypothesized population mean; {\displaystyle \mu _{1}} = population 1 mean; {\displaystyle \mu _{2}} = population 2 mean; {\displaystyle \sigma } = population standard deviation; {\displaystyle \sigma ^{2}} = population variance; {\displaystyle s} = sample standard deviation; {\displaystyle \sum ^{k}} = sum (of k numbers); {\displaystyle s^{2}} = sample variance; {\displaystyle s_{1}} = sample 1 standard deviation; {\displaystyle s_{2}} = sample 2 standard deviation; {\displaystyle t} = t statistic; {\displaystyle df} = degrees of freedom; {\displaystyle {\overline {d}}} = sample mean of differences; {\displaystyle d_{0}} = hypothesized population mean difference; {\displaystyle s_{d}} = standard deviation of differences; {\displaystyle \chi ^{2}} = Chi-squared statistic; {\displaystyle {\hat {p}}} = x/n = sample proportion, unless specified otherwise; {\displaystyle p_{0}} = hypothesized population proportion; {\displaystyle p_{1}} = proportion 1; {\displaystyle p_{2}} = proportion 2; {\displaystyle d_{p}} = hypothesized difference in proportion; {\displaystyle \min\{n_{1},n_{2}\}} = the smaller of n1 and n2; {\displaystyle x_{1}=n_{1}p_{1}} and {\displaystyle x_{2}=n_{2}p_{2}}; {\displaystyle F} = F statistic; {\displaystyle R^{2}} = coefficient of determination. ^ Berger, R. L.; Casella, G. (2001). Statistical Inference, Duxbury Press, Second Edition (p. 374) ^ Loveland, Jennifer L. (2011). Mathematical Justification of Introductory Hypothesis Tests and Development of Reference Materials (M.Sc. (Mathematics)). Utah State University. Retrieved April 30, 2013. Abstract: "The focus was on the Neyman–Pearson approach to hypothesis testing. A brief historical development of the Neyman–Pearson approach is followed by mathematical proofs of each of the hypothesis tests covered in the reference material."
The proofs do not reference the concepts introduced by Neyman and Pearson; instead, they show that traditional test statistics have the probability distributions ascribed to them, so that significance calculations assuming those distributions are correct. The thesis information is also posted at mathnstats.com as of April 2013. ^ Steel, R. G. D., and Torrie, J. H., Principles and Procedures of Statistics with Special Reference to the Biological Sciences, McGraw Hill, 1960, page 350. ^ Weiss, Neil A. (1999). Introductory Statistics (5th ed.). p. 802. ISBN 0-201-59877-9. ^ Steel, R. G. D., and Torrie, J. H., Principles and Procedures of Statistics with Special Reference to the Biological Sciences, McGraw Hill, 1960, page 288.
1) Control, 0% black garlic extracts; BG7.5, 7.5% black garlic extracts; BG15.0, 15.0% black garlic extracts; BG22.5, 22.5% black garlic extracts; BG30.0, 30.0% black garlic extracts. DPPH radical scavenging activity (%) = [1 − (absorbance of sample / absorbance of control without sample)] × 100. 3) Means with different letters in the same column are significantly different by Duncan's multiple range test (p<0.05). 2) GAE, gallic acid equivalent; QE, quercetin equivalent. 1) TPC, total polyphenol contents; FC, flavonoid contents; DPPH, DPPH radical scavenging activity.
Resistor including optional tolerance, operational limits, fault behavior, and noise - MATLAB - MathWorks The resistor equation is i=v/R. With tolerance applied, the effective resistance is drawn as follows — uniform distribution: R · (1 – tol + 2 · tol · rand); Gaussian distribution: R · (1 + tol · randn / nSigma); upper tolerance bound: R · (1 + tol); lower tolerance bound: R · (1 – tol). With thermal noise enabled, i=v/R+{i}_{N}, where {i}_{N}=\sqrt{2kT/R}\frac{N\left(0,1\right)}{\sqrt{h}}. Operational temperature range: [-50 150] °C (default). The coefficient α appears in the equation that describes resistance as a function of temperature, RT = R (1+α(T–T0)). The default value is for copper.
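The linear temperature-dependence equation RT = R(1 + α(T − T0)) can be sketched as follows; the function name and the copper coefficient used in the example are illustrative assumptions, not values taken from the block reference:

```python
def resistance_at_temperature(r0, alpha, t, t0=25.0):
    """R_T = R*(1 + alpha*(T - T0)): linear model of resistance vs.
    temperature, with r0 the nominal resistance at reference T0 (deg C)."""
    return r0 * (1.0 + alpha * (t - t0))

# A 100-ohm resistor with a copper-like coefficient alpha ~ 3.93e-3 1/K,
# heated from 25 degC to 125 degC:
r_hot = resistance_at_temperature(100.0, 3.93e-3, 125.0)  # ~139.3 ohms
```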
find the equation of the circle concentric with the circle x^2 + y^2 - 4x + 6y - 3 = 0 and of double its (1) circumference (2) area - Maths - Conic Sections - 6863337 | Meritnation.com The equation of the circle is given as x2 + y2 - 4x + 6y - 3 = 0. Rewriting it in the standard form (x - h)2 + (y - k)2 = a2, where (h, k) is the centre of the circle and a is the radius: adding 4 and 9 on both sides we get x2 - 4x + 4 + y2 + 6y + 9 = 3 + 4 + 9, so (x - 2)2 + (y + 3)2 = 42 ... (i). So the centre is (2, -3) and the radius is 4. 1) The circumference is doubled, hence the radius is doubled, but the centre remains the same as the circles are concentric. So the radius will be 8. Changing the radius in equation (i) to 8, we get (x - 2)2 + (y + 3)2 = 82. Expanding, x2 + y2 - 4x + 6y + 4 + 9 = 64, so x2 + y2 - 4x + 6y - 51 = 0 is the equation of the circle. 2) The area is doubled. Initially the area was πr2 = π(4)2 = 16π. As the area is doubled, the new area is 32π. For an area of 32π we need πr2 = 32π, or r = \sqrt{32}. So the equation is (x - 2)2 + (y + 3)2 = \left(\sqrt{32}\right)^{2}. Hence x2 + y2 - 4x + 6y + 4 + 9 - 32 = 0, that is, x2 + y2 - 4x + 6y - 19 = 0.
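The completing-the-square step used above can be checked with a small helper (an illustrative sketch for circles written in the general form x² + y² + dx + ey + f = 0; the function name is an assumption):

```python
def circle_from_general(d, e, f):
    """Centre and radius of x^2 + y^2 + d*x + e*y + f = 0,
    obtained by completing the square."""
    cx, cy = -d / 2.0, -e / 2.0
    r_squared = cx * cx + cy * cy - f
    return (cx, cy), r_squared ** 0.5

centre, r = circle_from_general(-4.0, 6.0, -3.0)    # original circle
_, r_circ = circle_from_general(-4.0, 6.0, -51.0)   # doubled circumference
_, r_area = circle_from_general(-4.0, 6.0, -19.0)   # doubled area
```

The helper confirms the worked answers: centre (2, −3) with radii 4, 8, and √32 respectively.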
Projects - Curvenote Docs Content in Curvenote lives within projects, which are collections of content (articles, notebooks, blocks) that belong to either you personally or a team. Using projects allows you to organize your content, include collaborators, and share and publish your work. From your profile or team page, select the New Project ➕ icon in the lower right corner. Choose a project template and click NEXT. Enter a project title and click NEXT. Select a Project Visibility setting. Your project is now being created; when ready, click GO TO PROJECT. The project will now be added to your personal or team profile. #Project Settings After creating a project you can update the settings at any time. The project settings are accessible via the Project Actions menu \mathbf{\vdots} when in your personal or team profile, or at the bottom of the project navigation panel to the left of the editor when you are in the project. Only owners or team admins can access the project settings. In the project settings you can update or add the title, description, URL, and visibility. Learn more ➡️ Project Visibility. #Thumbnail Image You can add or update a thumbnail image for your project. To do this: Browse your computer or drag and drop a new image. Use the image editor to position your image. #Delete Project You have the option to delete a project. To do this: Click Delete Project. Follow the prompt to enter the project URL. This will permanently delete the project and all of its content! Any links to the project content will break. This action cannot be undone! #What else do projects offer? Projects offer more than just a place to put your content. You can add other Curvenote members and collaborate within your projects. Learn more ➡️ Project Collaborators You can organize and control how to navigate the content in your project. Learn more ➡️ Project Organization & Navigation You can share some or all of the content in your project publicly. Learn more ➡️ Project Visibility
Calculus MCQs 01 - PAKMATH Calculus MCQs 01 contains 21 multiple-choice questions. Attempt these questions and check your progress against the correct answers. This page contains MCQs about calculus for test preparation.
INFORMATION THEORY - Encyclopedia Information Information theory Information Information theory is the scientific study of the quantification, storage, and communication of digital information. [1] The field was fundamentally established by the works of Harry Nyquist and Ralph Hartley, in the 1920s, and Claude Shannon in the 1940s. [2]: vii The field is at the intersection of probability theory, statistics, computer science, statistical mechanics, information engineering, and electrical engineering. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (with two equally likely outcomes) provides less information (lower entropy) than specifying the outcome from a roll of a die (with six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security. Applications of fundamental topics of information theory include source coding/ data compression (e.g. for ZIP files), and channel coding/ error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones and the development of the Internet. 
The theory has also found applications in other areas, including statistical inference, [3] cryptography, neurobiology, [4] perception, [5] linguistics, the evolution [6] and function [7] of molecular codes ( bioinformatics), thermal physics, [8] molecular dynamics, [9] quantum computing, black holes, information retrieval, intelligence gathering, plagiarism detection, [10] pattern recognition, anomaly detection [11] and even art creation. Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication, in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent. [4] In the formulas that follow, the convention {\displaystyle \lim _{p\rightarrow 0+}p\log p=0} is adopted, so that outcomes of zero probability contribute nothing to the entropy {\displaystyle H=-\sum _{i}p_{i}\log _{2}(p_{i})}. The entropy of a Bernoulli trial as a function of success probability, often called the binary entropy function, Hb(p), is maximized at 1 bit per trial when the two possible outcomes are equally probable, as in an unbiased coin toss.
If X is a random variable over a message space \mathbb{X} with probability mass function p(x) for x \in \mathbb{X}, then the entropy, H, of X is defined: [12]

H(X) = \mathbb{E}_{X}[I(x)] = -\sum_{x \in \mathbb{X}} p(x) \log p(x).

(Here, I(x) is the self-information of the message x, and \mathbb{E}_{X} is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable, p(x) = 1/n; i.e., most unpredictable, in which case H(X) = log n.

The special case of a binary variable with success probability p gives the binary entropy function:

H_{b}(p) = -p \log_{2} p - (1-p) \log_{2}(1-p).

The joint entropy of two random variables X and Y is

H(X,Y) = \mathbb{E}_{X,Y}[-\log p(x,y)] = -\sum_{x,y} p(x,y) \log p(x,y).

The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y) is the average conditional entropy over Y: [13]

H(X|Y) = \mathbb{E}_{Y}[H(X|y)] = -\sum_{y \in Y} p(y) \sum_{x \in X} p(x|y) \log p(x|y) = -\sum_{x,y} p(x,y) \log p(x|y),

which satisfies

H(X|Y) = H(X,Y) - H(Y).

The mutual information of X and Y is

I(X;Y) = \mathbb{E}_{X,Y}[SI(x,y)] = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)},

with the equivalent forms

I(X;Y) = H(X) - H(X|Y),
I(X;Y) = I(Y;X) = H(X) + H(Y) - H(X,Y),
I(X;Y) = \mathbb{E}_{p(y)}[D_{KL}(p(X|Y=y) \| p(X))],
I(X;Y) = D_{KL}(p(X,Y) \| p(X)p(Y)).

The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution p(X) and an arbitrary distribution q(X). It measures the penalty for coding with q(X) when the data follow p(X):

D_{KL}(p(X) \| q(X)) = \sum_{x \in X} -p(x) \log q(x) \, - \, \sum_{x \in X} -p(x) \log p(x) = \sum_{x \in X} p(x) \log \frac{p(x)}{q(x)}.

Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
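The identities relating entropy, joint entropy, and mutual information can be checked numerically on a small joint distribution. A minimal sketch (the example distribution is illustrative, not from the article):

```python
import math

def entropy(probs):
    """Shannon entropy in bits, with the convention 0*log(0) = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# An example joint distribution p(x, y) over X = {0, 1}, Y = {0, 1}.
pxy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

# Marginal distributions p(x) and p(y).
px = {x: sum(p for (a, _), p in pxy.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in pxy.items() if b == y) for y in (0, 1)}

H_X = entropy(px.values())
H_Y = entropy(py.values())
H_XY = entropy(pxy.values())

# Mutual information computed from its definition...
I = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

# ...agrees with the identity I(X;Y) = H(X) + H(Y) - H(X,Y).
assert abs(I - (H_X + H_Y - H_XY)) < 1e-12
```

For a fair coin, `entropy([0.5, 0.5])` is exactly 1 bit, matching the peak of the binary entropy function described above.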
This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems, that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal.

The entropy rate of a source can be written as

r = \lim_{n \to \infty} H(X_{n} | X_{n-1}, X_{n-2}, X_{n-3}, \ldots);

or, alternatively, as

r = \lim_{n \to \infty} \frac{1}{n} H(X_{1}, X_{2}, \dots, X_{n});

that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result. [14] The information rate across a channel is defined analogously:

r = \lim_{n \to \infty} \frac{1}{n} I(X_{1}, X_{2}, \dots, X_{n}; Y_{1}, Y_{2}, \dots, Y_{n}).

A communication system can be pictured as:

Message W → Encoder f_n → Encoded sequence X^n → Channel p(y|x) → Received sequence Y^n → Decoder g_n → Estimated message Ŵ

The channel capacity is

C = \max_{f} I(X;Y).

A continuous-time analog communications channel subject to Gaussian noise is treated by the Shannon–Hartley theorem. A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of 1 − Hb(p) bits per channel use, where Hb is the binary entropy function to the base-2 logarithm. A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel.
The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is 1 − p bits per channel use.

For a channel with memory, the channel law at time i is

P(y_{i} | x_{i}, x_{i-1}, x_{i-2}, \ldots, x_{1}, y_{i-1}, y_{i-2}, \ldots, y_{1});

writing x^{i} = (x_{i}, x_{i-1}, x_{i-2}, \ldots, x_{1}), this is P(y_{i} | x^{i}, y^{i-1}). In such a case the capacity is given by the mutual information rate when there is no feedback available, and by the directed information rate when feedback is available [15] [16] (if there is no feedback, the directed information equals the mutual information).

One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods. [17]

Semioticians Doede Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics. [18]: 171 [19]: 137 Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing." [18]: 91 Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones. [20]

List of unsolved problems in information theory

^ "Claude Shannon, pioneered digital information theory". FierceTelecom. Retrieved 2021-04-30. ^ Shannon, Claude Elwood (1998). The mathematical theory of communication. Warren Weaver.
Urbana: University of Illinois Press. ISBN 0-252-72546-8. OCLC 40716662. ^ Delgado-Bonal, Alfonso; Martín-Torres, Javier (2016-11-03). "Human vision is determined based on information theory". Scientific Reports. 6 (1): 36038. Bibcode: 2016NatSR...636038D. doi: 10.1038/srep36038. ISSN 2045-2322. PMC 5093619. PMID 27808236. ^ cf; Huelsenbeck, J. P.; Ronquist, F.; Nielsen, R.; Bollback, J. P. (2001). "Bayesian inference of phylogeny and its impact on evolutionary biology". Science. 294 (5550): 2310–2314. Bibcode: 2001Sci...294.2310H. doi: 10.1126/science.1065889. PMID 11743192. S2CID 2138288. ^ Allikmets, Rando; Wasserman, Wyeth W.; Hutchinson, Amy; Smallwood, Philip; Nathans, Jeremy; Rogan, Peter K. (1998). "Thomas D. Schneider], Michael Dean (1998) Organization of the ABCR gene: analysis of promoter and splice junction sequences". Gene. 215 (1): 111–122. doi: 10.1016/s0378-1119(98)00269-8. PMID 9666097. ^ Jaynes, E. T. (1957). "Information Theory and Statistical Mechanics". Phys. Rev. 106 (4): 620. Bibcode: 1957PhRv..106..620J. doi: 10.1103/physrev.106.620. ^ Talaat, Khaled; Cowen, Benjamin; Anderoglu, Osman (2020-10-05). "Method of information entropy for convergence assessment of molecular dynamics simulations". Journal of Applied Physics. 128 (13): 135102. Bibcode: 2020JAP...128m5102T. doi: 10.1063/5.0019078. OSTI 1691442. S2CID 225010720. ^ Bennett, Charles H.; Li, Ming; Ma, Bin (2003). "Chain Letters and Evolutionary Histories". Scientific American. 288 (6): 76–81. Bibcode: 2003SciAm.288f..76B. doi: 10.1038/scientificamerican0603-76. PMID 12764940. Archived from the original on 2007-10-07. Retrieved 2008-03-11. ^ Fazlollah M. Reza (1994) [1961]. An Introduction to Information Theory. Dover Publications, Inc., New York. ISBN 0-486-68210-2. ^ Robert B. Ash (1990) [1965]. Information Theory. Dover Publications, Inc. ISBN 0-486-66521-6. ^ Jerry D. Gibson (1998). Digital Compression for Multimedia: Principles and Standards. Morgan Kaufmann. ISBN 1-55860-369-7. 
^ Massey, James L. (1990). "Causality, Feedback And Directed Information". CiteSeerX 10.1.1.36.5688. ^ Permuter, Haim Henry; Weissman, Tsachy; Goldsmith, Andrea J. (February 2009). "Finite State Channels With Time-Invariant Deterministic Feedback". IEEE Transactions on Information Theory. 55 (2): 644–662. arXiv: cs/0608070. doi: 10.1109/TIT.2008.2009849. S2CID 13178. ^ Haggerty, Patrick E. (1981). "The corporation and innovation". Strategic Management Journal. 2 (2): 97–118. doi: 10.1002/smj.4250020202. ^ Nöth, Winfried (January 2012). "Charles S. Peirce's theory of information: a theory of the growth of symbols and of knowledge". Cybernetics and Human Knowing. 19 (1–2): 137–161. ^ Nöth, Winfried (1981). "Semiotics of ideology". Semiotica, Issue 148. Shannon, C.E. (1948), "A Mathematical Theory of Communication", Bell System Technical Journal, 27, pp. 379–423 & 623–656, July & October, 1948. PDF. Andrey Kolmogorov (1968), "Three approaches to the quantitative definition of information" in International Journal of Computer Mathematics. Landauer, R. (1961). "Irreversibility and Heat Generation in the Computing Process" (PDF). IBM J. Res. Dev. 5 (3): 183–191. doi: 10.1147/rd.53.0183. Timme, Nicholas; Alford, Wesley; Flecker, Benjamin; Beggs, John M. (2012). "Multivariate information measures: an experimentalist's perspective". arXiv: 1111.6857 [cs.IT]. Cover, Thomas; Thomas, Joy A. (2006). Elements of information theory (2nd ed.). New York: Wiley-Interscience. ISBN 0-471-24195-4. McEliece, R. The Theory of Information and Coding. Cambridge, 2002. ISBN 978-0521831857. Shannon, Claude; Weaver, Warren (1949). The Mathematical Theory of Communication (PDF). Urbana, Illinois: University of Illinois Press. ISBN 0-252-72548-4. LCCN 49-11922. Raymond W. Yeung, "Information Theory" (The Chinese University of Hong Kong). Wikiquote has quotations related to Information theory.
"Information", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Lambert F. L. (1999), " Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms - Examples of Entropy Increase? Nonsense!", Journal of Chemical Education Retrieved from " https://en.wikipedia.org/?title=Information_theory&oldid=1088848741" Information Theory Videos Information Theory Websites Information Theory Encyclopedia Articles
MaplePortal/EngineeringOptimization - Maple Help

Maple lets you minimize or maximize objective functions with respect to constraints.

\mathrm{Optimization}:-\mathrm{Minimize}\left({x}^{2}+\mathrm{sin}\left(x+y\right),\left\{x+2 y=9\right\}\right) [\textcolor[rgb]{0,0,1}{-0.980004457937920792}\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0.0469482872699630}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{4.47652585636502}]]

The objective function can be a sum-of-squares error for parameter estimation, the weight of a mechanical device, or the energy required for a process. The constraints can be limits on the dimensions of a mechanical device, the allowable stresses, minimum and maximum process temperatures, or the amount of base materials. Units can be employed in the objective function or the constraints. You can use Maple's built-in linear, nonlinear, and quadratic optimizers, or the optional Global Optimization Toolbox.

Example - Fuel Pod Design Optimization

You are designing a fuel pod with a hemispherical cap, cylindrical mid-section, and conical cap. What are the values of L, H, and R that minimize the surface area while maintaining the volume V at 3 m3?
\mathrm{restart}:\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{with}\left(\mathrm{Optimization}\right):\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}

Objective function - surface area of pod

\mathrm{obj}≔\frac{1}{2}\cdot 4\cdot \mathrm{\pi }\cdot {\mathrm{R}}^{2}+2\cdot \mathrm{\pi }\cdot \mathrm{R}\cdot \mathrm{L}+\mathrm{\pi }\cdot \mathrm{R}\cdot \sqrt{{\mathrm{H}}^{2}+{\mathrm{R}}^{2}}:\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}

Constraint on the volume of the pod

\mathrm{cons1}≔\frac{1}{2}\frac{4}{3}\mathrm{\pi }\cdot {\mathrm{R}}^{3}+\mathrm{\pi } {\mathrm{R}}^{2}\cdot \mathrm{L}+\frac{1}{3}\cdot \mathrm{\pi }\cdot {\mathrm{R}}^{2}\cdot \mathrm{H}=3⟦{\mathrm{m}}^{3}⟧:

All dimensions must be greater than 0

\mathrm{cons2}≔0≤\mathrm{R},0≤\mathrm{L},0≤\mathrm{H}:

Hence the optimized dimensions are

\mathrm{dimensions}≔\mathrm{Minimize}\left(\mathrm{obj},\left\{\mathrm{cons1},\mathrm{cons2}\right\},\mathrm{initialpoint}=\left\{\mathrm{H}=1⟦\mathrm{m}⟧,\mathrm{L}=1⟦\mathrm{m}⟧,\mathrm{R}=1⟦\mathrm{m}⟧\right\}\right)

\left[\textcolor[rgb]{0,0,1}{10.2533536615869920}\textcolor[rgb]{0,0,1}{⁢}⟦{\textcolor[rgb]{0,0,1}{\mathrm{m}}}^{\textcolor[rgb]{0,0,1}{2}}⟧\textcolor[rgb]{0,0,1}{,}\left[\textcolor[rgb]{0,0,1}{\mathrm{H}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0.785093823049978}\textcolor[rgb]{0,0,1}{⁢}⟦\textcolor[rgb]{0,0,1}{\mathrm{m}}⟧\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{L}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0.392546902492684}\textcolor[rgb]{0,0,1}{⁢}⟦\textcolor[rgb]{0,0,1}{\mathrm{m}}⟧\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{R}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0.877761593519080}\textcolor[rgb]{0,0,1}{⁢}⟦\textcolor[rgb]{0,0,1}{\mathrm{m}}⟧\right]\right]

Check that the constraint on the pod volume is satisfied

\mathrm{eval}\left(\left[\mathrm{cons1}\right],\mathrm{dimensions}\left[2\right]\right)
\left[\textcolor[rgb]{0,0,1}{3.00000000039170}\textcolor[rgb]{0,0,1}{⁢}{⟦\textcolor[rgb]{0,0,1}{\mathrm{m}}⟧}^{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{⁢}⟦{\textcolor[rgb]{0,0,1}{\mathrm{m}}}^{\textcolor[rgb]{0,0,1}{3}}⟧\right]
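The same fuel-pod optimum can be cross-checked outside Maple. The rough pure-Python sketch below (a coarse grid search with refinement, not Maple's solver; structure and names are illustrative) eliminates L via the volume constraint and searches over R and H:

```python
import math

V = 3.0  # required volume, m^3

def area_and_L(R, H):
    """Surface area of the pod, with L chosen so the volume equals V.
    Returns (area, L), or None if the constraint forces L < 0."""
    L = (V - (2/3) * math.pi * R**3 - (1/3) * math.pi * R**2 * H) / (math.pi * R**2)
    if L < 0:
        return None
    area = (2 * math.pi * R**2            # hemispherical cap
            + 2 * math.pi * R * L          # cylindrical mid-section
            + math.pi * R * math.sqrt(H**2 + R**2))  # conical cap
    return area, L

# Grid search with progressive refinement around the best point found.
best = (float("inf"), None, None)
R_lo, R_hi, H_lo, H_hi = 0.3, 1.3, 0.1, 1.5
for _ in range(4):
    for i in range(61):
        for j in range(61):
            R = R_lo + (R_hi - R_lo) * i / 60
            H = H_lo + (H_hi - H_lo) * j / 60
            res = area_and_L(R, H)
            if res and res[0] < best[0]:
                best = (res[0], R, H)
    # shrink the search window around the current best point
    _, Rb, Hb = best
    dR, dH = (R_hi - R_lo) / 10, (H_hi - H_lo) / 10
    R_lo, R_hi = max(0.05, Rb - dR), Rb + dR
    H_lo, H_hi = max(0.0, Hb - dH), Hb + dH

area, R, H = best
# Lands near Maple's optimum: area ~ 10.2534 m^2, R ~ 0.878 m, H ~ 0.785 m
```

This is only a sanity check on the worksheet's answer; for serious work a constrained solver (as in the Maple session above) is the right tool.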
Standard-compliant loudness measurements - Simulink

The Loudness Meter block measures the loudness and true-peak of an audio signal based on EBU R 128 and ITU-R BS.1770-4 standards.

Matrix input –– Each column of the input is treated as an independent channel. If you use the default Channel weights, specify the input channels in order: [Left, Right, Center, Left surround, Right surround].

M — Momentary loudness measurement

The block outputs a column vector with the same data type and number of rows as the input signal.

S — Short-term loudness measurement

TP — True-peak value

The block outputs a real scalar with the same data type as the input signal. To enable this port, select the Output true-peak value parameter.

Channel weights — Linear weighting applied to each input channel

The number of elements of the row vector must be equal to or greater than the number of input channels. Excess values in the vector are ignored. The default channel weights follow the ITU-R BS.1770-4 standard. To use the default channel weights, specify the input to the Loudness Meter block as a matrix whose columns correspond to channels in this order: [Left, Right, Center, Left surround, Right surround]. It is a best practice to specify the channel weights in order: [Left, Right, Center, Left surround, Right surround].

Use relative scale for loudness measurements — Specify block to output loudness measurements relative to target level

On — The loudness measurements are relative to the value specified by Target loudness level (LUFS). The output of the block is returned in loudness units (LU).

Off — The loudness measurements are absolute, and returned in loudness units full scale (LUFS).
Target loudness level (LUFS) — Reference level for relative loudness measurements For example, if the Target loudness level (LUFS) is –23, then a loudness value of –24 LUFS is reported as –1 LU. To enable this parameter, select the Use relative scale for loudness measurements parameter. Output true-peak value — Add output port for true-peak value When you select this parameter, an additional output port, TP, is added to the block. The TP port outputs the true-peak value of the input frame. Code generation –– Simulate model using generated C code. The first time you run a simulation, Simulink® generates C code for the block. The C code is reused for subsequent simulations, as long as the model does not change. This option requires additional startup time but the speed of the subsequent simulations is comparable to Interpreted execution. The Loudness Meter block calculates the momentary loudness, short-term loudness, and true-peak value of an audio signal. You can specify any number of channels and nondefault channel weights used for loudness measurements. The block algorithm is described for the general case of n channels and default channel weights. The input channels, x, pass through a K-weighted filter implemented using the algorithm of the Weighting Filter block. The K-weighted filter shapes the frequency spectrum to reflect perceived loudness. The K-weighted channels, y, are divided into 0.4-second segments with 0.3-second overlap. If the required number of samples have not been collected yet, the Loudness Meter block returns the last computed value for momentary loudness. If enough samples have been collected, then the power (mean square) of each segment of the K-weighted channels is calculated: m{P}_{i}=\frac{1}{w}\sum _{k=1}^{w}{y}_{i}^{2}\left[k\right] m{L}_{i}=-0.691+10{\mathrm{log}}_{10}\left(\sum _{c=1}^{n}{G}_{c}×m{P}_{\left(i,c\right)}\right)\text{ }LUFS mL is the momentary loudness returned by your Loudness Meter block. 
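The segmentation and power computation described above can be sketched in code. This toy version takes an already K-weighted multichannel signal (the K-weighting filter itself is omitted) and applies the 0.4 s windows with 0.3 s overlap, then the −0.691 + 10·log10(Σ G_c·P_c) formula; function and variable names are illustrative, not the block's API:

```python
import math

def momentary_loudness(channels, fs, weights=None):
    """Momentary loudness values (LUFS) of an already K-weighted signal.

    channels : list of per-channel sample lists (equal length)
    fs       : sample rate in Hz
    weights  : per-channel weights G_c (defaults to 1.0 each)
    """
    if weights is None:
        weights = [1.0] * len(channels)
    win = int(0.4 * fs)   # 0.4-second segments
    hop = int(0.1 * fs)   # 0.3-second overlap means a 0.1-second hop
    n = len(channels[0])
    out = []
    for start in range(0, n - win + 1, hop):
        total = 0.0
        for G, ch in zip(weights, channels):
            seg = ch[start:start + win]
            power = sum(s * s for s in seg) / win   # mean square of the segment
            total += G * power
        out.append(-0.691 + 10 * math.log10(total))
    return out

# A full-scale sine has mean-square power 0.5, so each block measures
# about -0.691 + 10*log10(0.5), roughly -3.70 LUFS.
fs = 8000
x = [math.sin(2 * math.pi * 997 * k / fs) for k in range(fs)]
vals = momentary_loudness([x], fs)
```

A real implementation must also apply the K-weighting filter and the gating described in the standards; this sketch covers only the windowed power-to-loudness step.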
The K-weighted channels, y, are divided into 3-second segments with 2.9-second overlap. If the required number of samples have not been collected yet, the Loudness Meter block returns the last computed values for short-term loudness and loudness range. If enough samples have been collected, then the power (mean square) of each K-weighted channel is calculated:

s{P}_{i}=\frac{1}{w}\sum _{k=1}^{w}{y}_{i}^{2}\left[k\right]

s{L}_{i}=-0.691+10\text{\hspace{0.17em}}{\mathrm{log}}_{10}\left(\sum _{c=1}^{n}{G}_{c}×s{P}_{\left(i,c\right)}\right)\text{ }LUFS

sL is the short-term loudness returned by your Loudness Meter block.

For the true-peak measurement, the signal is oversampled by a factor that depends on the input sample rate:

Sample rate range (kHz)    Oversampling factor
[0.75, 1.5)                256
[1.5, 3)                   128
[192, ∞)                   not required

The true-peak value in dB is computed from the peak oversampled amplitude b as

c = 20 × log10(|b|)

integratedLoudness | loudnessMeter
Transitive relation - Wikipedia

A homogeneous relation R on the set X is a transitive relation if,[1] for all a, b, c ∈ X, if a R b and b R c, then a R c. Or in terms of first-order logic:

\forall a,b,c\in X:(aRb\wedge bRc)\Rightarrow aRc,

where a R b is the infix notation for (a, b) ∈ R. For example, "greater than", "greater than or equal to", and "equality" on numbers are all transitive:

whenever x > y and y > z, then also x > z
whenever x ≥ y and y ≥ z, then also x ≥ z
whenever x = y and y = z, then also x = z.
More examples of transitive relations:

"is a subset of" (set inclusion, a relation on sets)
"divides" (divisibility, a relation on natural numbers)
"implies" (implication, symbolized by "⇒", a relation on propositions)

Examples of non-transitive relations:

"is the successor of" (a relation on natural numbers)
"is a member of the set" (symbolized as "∈")[2]
"is perpendicular to" (a relation on lines in Euclidean geometry)

The empty relation on any set X is transitive[3][4] because there are no elements a, b, c ∈ X such that aRb and bRc, and hence the transitivity condition is vacuously true. A relation R containing only one ordered pair is also transitive: if the ordered pair is of the form (x, x) for some x ∈ X, then the only such elements are a = b = c = x, and indeed in this case aRc; while if the ordered pair is not of the form (x, x), then there are no such elements a, b, c ∈ X, and hence R is vacuously transitive.

The converse (inverse) of a transitive relation is always transitive. For instance, knowing that "is a subset of" is transitive and "is a superset of" is its converse, one can conclude that the latter is transitive as well. The intersection of two transitive relations is always transitive. For instance, knowing that "was born before" and "has the same first name as" are transitive, one can conclude that "was born before and also has the same first name as" is also transitive. The union of two transitive relations need not be transitive. For instance, "was born before or has the same first name as" is not a transitive relation, since e.g. Herbert Hoover is related to Franklin D. Roosevelt, who is in turn related to Franklin Pierce, while Hoover is not related to Franklin Pierce. The complement of a transitive relation need not be transitive.
For instance, while "equal to" is transitive, "not equal to" is only transitive on sets with at most one element. A transitive relation need not be reflexive. When it is, it is called a preorder. For example, on set X = {1,2,3}: R = { (1,1), (2,2), (3,3), (1,3), (3,2) } is reflexive, but not transitive, as the pair (1,2) is absent, R = { (1,1), (2,2), (3,3), (1,3) } is reflexive as well as transitive, so it is a preorder, R = { (1,1), (2,2), (3,3) } is reflexive as well as transitive, another preorder. Transitive extensions and transitive closure[edit] Main article: Transitive closure The transitive closure of a relation is a transitive relation.[7] Relation properties that require transitivity[edit] Preorder – a reflexive and transitive relation Partial order – an antisymmetric preorder Total preorder – a connected (formerly called total) preorder Equivalence relation – a symmetric preorder Strict weak ordering – a strict partial order in which incomparability is an equivalence relation Total ordering – a connected (total), antisymmetric, and transitive relation Counting transitive relations[edit] No general formula that counts the number of transitive relations on a finite set (sequence A006905 in the OEIS) is known.[8] However, there is a formula for finding the number of relations that are simultaneously reflexive, symmetric, and transitive – in other words, equivalence relations – (sequence A000110 in the OEIS), those that are symmetric and transitive, those that are symmetric, transitive, and antisymmetric, and those that are total, transitive, and antisymmetric. Pfeiffer[9] has made some progress in this direction, expressing relations with combinations of these properties in terms of each other, but still calculating any one is difficult. 
See also Brinkmann and McKay (2005).[10] Mala showed that no polynomial with integer coefficients can represent a formula for the number of transitive relations on a set,[11] and found certain recursive relations that provide lower bounds for that number. He also showed that that number is a polynomial of degree two if the relation contains exactly two ordered pairs.[12]

Counts of n-element binary relations of various types:

n = 3: all 512; transitive 171; reflexive 64; symmetric 64; preorder 29; partial order 19; total preorder 13; total order 6; equivalence relation 5
n = 4: all 65,536; transitive 3,994; reflexive 4,096; symmetric 1,024; preorder 355; partial order 219; total preorder 75; total order 24; equivalence relation 15
general n: all 2^(n^2); reflexive 2^(n^2−n); symmetric 2^(n(n+1)/2); total preorder \sum_{k=0}^{n} k!\,S(n,k); equivalence relation \sum_{k=0}^{n} S(n,k)

Note that S(n, k) refers to Stirling numbers of the second kind.

The Rock–paper–scissors game is based on an intransitive and antitransitive relation "x beats y".

Proposition: If R is univalent, then R;R^T is transitive.
Proof: Suppose x R;R^T y R;R^T z. Then there are a and b such that x R a R^T y R b R^T z. Since R is univalent, yRb and aR^Ty imply a = b. Therefore x R a R^T z, so that x R;R^T z and R;R^T is transitive.

Corollary: If R is univalent, then R;R^T is an equivalence relation on the domain of R.
Proof: R;R^T is symmetric and reflexive on its domain. With univalence of R, the transitive requirement for equivalence is fulfilled.

Hypothetical syllogism — transitivity of the material conditional

^ Smith, Eggen & St. Andre 2006, p. 145 ^ However, the class of von Neumann ordinals is constructed in a way such that ∈ is transitive when restricted to that class. ^ https://courses.engr.illinois.edu/cs173/sp2011/Lectures/relations.pdf ^ Flaška, V.; Ježek, J.; Kepka, T.; Kortelainen, J. (2007). Transitive Closures of Binary Relations I (PDF). Prague: School of Mathematics - Physics Charles University. p. 1. Archived from the original (PDF) on 2013-11-02. Lemma 1.1 (iv). Note that this source refers to asymmetric relations as "strictly antisymmetric". ^ a b Liu 1985, p. 112 ^ Gunnar Brinkmann and Brendan D. McKay, "Counting unlabelled topologies and transitive relations" ^ Mala, Firdous Ahmad (2021-06-14).
"On the number of transitive relations on a set". Indian Journal of Pure and Applied Mathematics. doi:10.1007/s13226-021-00100-0. ISSN 0975-7465. ^ Mala, Firdous Ahmad (2021-10-13). "Counting Transitive Relations with Two Ordered Pairs". Journal of Applied Mathematics and Computation. 5 (4): 247–251. doi:10.26855/jamc.2021.12.002. ISSN 2576-0645. ^ since e.g. 3R4 and 4R5, but not 3R5 ^ since e.g. 2R3 and 3R4 and 2R4 ^ since xRy and yRz can never happen ^ since, more generally, xRy and yRz implies x=y+1=z+2≠z+1, i.e. not xRz, for all x, y, z ^ Drum, Kevin (November 2018). "Preferences are not transitive". Mother Jones. Retrieved 2018-11-29. ^ Oliveira, I.F.D.; Zehavi, S.; Davidov, O. (August 2018). "Stochastic transitivity: Axioms and models". Journal of Mathematical Psychology. 85: 25–35. doi:10.1016/j.jmp.2018.06.002. ISSN 0022-2496. ^ Sen, A. (1969). "Quasi-transitivity, rational choice and collective decisions". Rev. Econ. Stud. 36 (3): 381–393. doi:10.2307/2296434. JSTOR 2296434. Zbl 0181.47302. Grimaldi, Ralph P. (1994), Discrete and Combinatorial Mathematics (3rd ed.), Addison-Wesley, ISBN 0-201-19912-2 Liu, C.L. (1985), Elements of Discrete Mathematics, McGraw-Hill, ISBN 0-07-038133-X Smith, Douglas; Eggen, Maurice; St.Andre, Richard (2006), A Transition to Advanced Mathematics (6th ed.), Brooks/Cole, ISBN 978-0-534-39900-9 "Transitivity", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Retrieved from "https://en.wikipedia.org/w/index.php?title=Transitive_relation&oldid=1088548364"
Integral equation

For equations of integer unknowns, see Diophantine equation.

In mathematics, integral equations are equations in which an unknown function appears under an integral sign. There is a close connection between differential and integral equations, and some problems may be formulated either way. See, for example, Green's function, Fredholm theory, and Maxwell's equations.

The most basic type of integral equation is called a Fredholm equation of the first kind,

f(x)=\int _{a}^{b}K(x,t)\,\varphi (t)\,dt.

The notation follows Arfken. Here φ is an unknown function, f is a known function, and K is another known function of two variables, often called the kernel function. Note that the limits of integration are constant: this is what characterizes a Fredholm equation. If the unknown function occurs both inside and outside of the integral, the equation is known as a Fredholm equation of the second kind,

\varphi (x)=f(x)+\lambda \int _{a}^{b}K(x,t)\,\varphi (t)\,dt.

The parameter λ is an unknown factor, which plays the same role as the eigenvalue in linear algebra. If one limit of integration is a variable, the equation is called a Volterra equation. The following are called Volterra equations of the first and second kinds, respectively,

f(x)=\int _{a}^{x}K(x,t)\,\varphi (t)\,dt

\varphi (x)=f(x)+\lambda \int _{a}^{x}K(x,t)\,\varphi (t)\,dt.

In all of the above, if the known function f is identically zero, the equation is called a homogeneous integral equation. If f is nonzero, it is called an inhomogeneous integral equation. It is worth noting that integral equations often do not have an analytical solution, and must be solved numerically.
An example of this is evaluating the Electric-Field Integral Equation (EFIE) or Magnetic-Field Integral Equation (MFIE) over an arbitrarily shaped object in an electromagnetic scattering problem. One method to solve numerically requires discretizing the variables and replacing the integral by a quadrature rule:

\sum _{j=1}^{n}w_{j}K\left(s_{i},t_{j}\right)u(t_{j})=f(s_{i}),\qquad i=1,\dots ,n.

Then we have a system with n equations and n variables. By solving it we get the value of the n variables u(t_{1}),\dots ,u(t_{n}).

Integral equations are classified according to three different dichotomies, creating eight different kinds:

Limits of integration: both fixed (Fredholm equation) or one variable (Volterra equation)
Placement of unknown function: only inside the integral (first kind) or both inside and outside the integral (second kind)
Nature of known function f: identically zero (homogeneous) or not identically zero (inhomogeneous)

Integral equations are important in many applications. Problems in which integral equations are encountered include radiative transfer, and the oscillation of a string, membrane, or axle. Oscillation problems may also be solved as differential equations. Both Fredholm and Volterra equations are linear integral equations, due to the linear behaviour of φ(x) under the integral. A nonlinear Volterra integral equation has the general form:

\varphi (x)=f(x)+\lambda \int _{a}^{x}K(x,t)\,F(x,t,\varphi (t))\,dt,

where F is a known function.

Wiener–Hopf integral equations

Main article: Wiener–Hopf method

y(t)=\lambda x(t)+\int _{0}^{\infty }k(t-s)\,x(s)\,ds,\qquad 0\leq t<\infty .

Originally, such equations were studied in connection with problems in radiative transfer, and more recently, they have been related to the solution of boundary integral equations for planar problems in which the boundary is only piecewise smooth.
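The quadrature discretization described above can be tried on a concrete Fredholm equation of the second kind. The sketch below is an illustrative toy problem (not from the article): K(x,t) = x·t on [0,1] with λ = 1 and f(x) = 2x/3, whose exact solution is φ(x) = x. Instead of assembling and solving the linear system, it iterates the equation directly (successive approximation, which converges here because the kernel is a contraction):

```python
# Trapezoid rule nodes and weights on [0, 1].
n = 41
h = 1.0 / (n - 1)
xs = [i * h for i in range(n)]
w = [h] * n
w[0] = w[-1] = h / 2

K = lambda x, t: x * t      # kernel
f = lambda x: 2 * x / 3     # chosen so the exact solution is phi(x) = x
lam = 1.0

# Successive approximation:
# phi_{k+1}(x_i) = f(x_i) + lam * sum_j w_j K(x_i, t_j) phi_k(t_j)
phi = [f(x) for x in xs]
for _ in range(100):
    phi = [f(x) + lam * sum(wj * K(x, tj) * pj
                            for wj, tj, pj in zip(w, xs, phi))
           for x in xs]

# Only the O(h^2) quadrature error separates phi from the exact solution.
err = max(abs(p - x) for p, x in zip(phi, xs))
```

For a stiffer kernel (or λ near an eigenvalue) the iteration fails to converge, and one solves the n-by-n linear system (I − λWK)u = f directly instead; the discretization step is identical.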
Power series solution for integral equations

In many cases, if the kernel of the integral equation is of the form K(xt) and the Mellin transform of K(t) exists, we can find the solution of the integral equation

g(s)=s\int _{0}^{\infty }K(st)\,f(t)\,dt

in the form of a power series

f(t)=\sum _{n=0}^{\infty }{\frac {a_{n}}{M(n+1)}}t^{n},

where

g(s)=\sum _{n=0}^{\infty }a_{n}s^{-n}

is the Z-transform of the coefficient sequence a_n, and

M(n+1)=\int _{0}^{\infty }K(t)\,t^{n}\,dt

is the Mellin transform of the kernel. See also: Liouville–Neumann series

Integral equations as a generalization of eigenvalue equations

Certain homogeneous linear integral equations can be viewed as the continuum limit of eigenvalue equations. Using index notation, an eigenvalue equation can be written as

\sum _{j}M_{i,j}v_{j}=\lambda v_{i}

where M = [Mi,j] is a matrix, v is one of its eigenvectors, and λ is the associated eigenvalue. Taking the continuum limit, i.e., replacing the discrete indices i and j with continuous variables x and y, yields

\int K(x,y)\,\varphi (y)\,dy=\lambda \,\varphi (x),

where the sum over j has been replaced by an integral over y and the matrix M and the vector v have been replaced by the kernel K(x, y) and the eigenfunction φ(y). (The limits on the integral are fixed, analogously to the limits on the sum over j.) This gives a linear homogeneous Fredholm equation of the second kind. In general, K(x, y) can be a distribution, rather than a function in the strict sense. If the distribution K has support only at the point x = y, then the integral equation reduces to a differential eigenfunction equation. In general, Volterra and Fredholm integral equations can arise from a single differential equation, depending on which sort of conditions are applied at the boundary of the domain of its solution.
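As a concrete instance of the power-series scheme above (an illustrative choice, not from the article): take K(u) = e^{-u}, so M(n+1) = n!. If g(s) = s/(s−1) = Σ s^{-n} for s > 1 (so a_n = 1 for all n), the series gives f(t) = Σ t^n/n! = e^t, which can be verified numerically:

```python
import math

K = lambda u: math.exp(-u)   # kernel; its Mellin transform is M(n+1) = n!
f = lambda t: math.exp(t)    # candidate solution from the power series
g = lambda s: s / (s - 1)    # target transform, valid for s > 1

def g_numeric(s, T=30.0, steps=100_000):
    """Trapezoid estimate of s * integral_0^T K(s*t) f(t) dt."""
    h = T / steps
    total = 0.5 * (K(0.0) * f(0.0) + K(s * T) * f(T))
    for i in range(1, steps):
        t = i * h
        total += K(s * t) * f(t)
    return s * total * h

# For s = 3 the integrand is e^{-2t}, so the integral is 1/2 and
# g_numeric(3.0) should match g(3.0) = 1.5.
val = g_numeric(3.0)
```

The truncation at T = 30 is harmless here because the integrand decays like e^{-(s-1)t}; for slowly decaying kernels the upper limit would need care.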
Further information: Fredholm theory
Integral equations arise in many further applications, for example:
Actuarial science (ruin theory [1])
The Marchenko equation (inverse scattering transform)
Options pricing under jump-diffusion [2]
^ "Lecture Notes on Risk Theory" (PDF). 2010.
^ Sachs, E. W.; Strauss, A. K. (2008-11-01). "Efficient solution of a partial integro-differential equation in finance". Applied Numerical Mathematics. 58 (11): 1687–1703. doi: 10.1016/j.apnum.2007.11.002. ISSN 0168-9274.
Find the standard equation of the ellipse whose focus is (1,0), whose directrix is x+y+1=0, and whose eccentricity is 1/\sqrt{2}. (Maths - Conic Sections | Meritnation.com)
For an ellipse, we are given the focus \mathrm{S}=\left(1,0\right), the equation of the directrix x+y+1=0, and the eccentricity e=\frac{1}{\sqrt{2}}. Let \mathrm{P}=\left(x,y\right) be any point on the ellipse, and let M be the foot of the perpendicular from P to the directrix. By the focus–directrix definition of an ellipse,
\mathrm{SP}=e\cdot \mathrm{PM}
⇒\sqrt{{\left(x-1\right)}^{2}+{\left(y-0\right)}^{2}}=\frac{1}{\sqrt{2}}\cdot \frac{x+y+1}{\sqrt{{1}^{2}+{1}^{2}}}
⇒\sqrt{{\left(x-1\right)}^{2}+{y}^{2}}=\frac{1}{2}\left(x+y+1\right)
Squaring on both sides, we 
have
{\left(x-1\right)}^{2}+{y}^{2}=\frac{1}{4}{\left(x+y+1\right)}^{2}
⇒4\left({\left(x-1\right)}^{2}+{y}^{2}\right)={\left(x+y+1\right)}^{2}
⇒4\left({x}^{2}+1-2x+{y}^{2}\right)={x}^{2}+{y}^{2}+1+2xy+2y+2x
⇒4{x}^{2}+4-8x+4{y}^{2}={x}^{2}+{y}^{2}+1+2xy+2y+2x
⇒\mathbf{3}{\mathbf{x}}^{\mathbf{2}}\mathbf{+}\mathbf{3}{\mathbf{y}}^{\mathbf{2}}\mathbf{-}\mathbf{2}\mathbf{xy}\mathbf{-}\mathbf{10}\mathbf{x}\mathbf{-}\mathbf{2}\mathbf{y}\mathbf{+}\mathbf{3}\mathbf{=}\mathbf{0}
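As a quick numeric sanity check (added here, not part of the original answer), the derived conic should equal 4(SP² − e²·PM²) identically:

```python
import math
import random

def sp_squared(xv, yv):
    """Squared distance from (x, y) to the focus S = (1, 0)."""
    return (xv - 1) ** 2 + yv ** 2

def e2_pm_squared(xv, yv):
    """e^2 times the squared distance from (x, y) to the line x + y + 1 = 0."""
    e = 1 / math.sqrt(2)
    return e ** 2 * (xv + yv + 1) ** 2 / 2   # the line normal has length sqrt(2)

def conic(xv, yv):
    """The derived equation 3x^2 + 3y^2 - 2xy - 10x - 2y + 3."""
    return 3 * xv**2 + 3 * yv**2 - 2 * xv * yv - 10 * xv - 2 * yv + 3

random.seed(0)
pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(100)]
```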
Predicting System Behavior | Brilliant Math & Science Wiki Predicting system behavior is the goal of analyzing signals and systems. We want to be able to predict how a system will behave in the long term with any input. Specifically, analyzing discrete-time, linear time-invariant systems (LTI systems) is an important application of this method due to its usefulness in areas such as circuitry and modern system design. So, in this wiki, we will focus on LTI systems and predicting their behavior. Systems can be described for a finite input in simple terms. The long-term behavior of the output can either be increasing or decreasing, and it can have a constant sign or an alternating sign. For example, a system whose output is decreasing over time with an alternating sign would have a graphical representation that looks like this: A system whose output decreases with alternating sign over time This graph could represent a spring on the floor, anchored at some point. If you pull out the spring to the right of where it is anchored (the positive direction) and let go, it will respond by moving back to the left of its anchor (the negative direction). However, it will lose some of its momentum, and as it continues to oscillate around its anchor, it will travel less distance each time. Eventually, it will come to a halt at the anchor. Recall from the wiki on linear time-invariant systems that all systems can be described by a system function that is a polynomial in the delay operator \mathcal{R} . For example, a system that adds the input to the output from the previous time step would have the equation Y = X + \mathcal{R}Y . This is the same as the difference equation y[n] = x[n] + y[n-1] . In both cases, a precise meaning of delay is needed to have any hope of predicting what the systems will do. In this course of study, it is generally assumed that all systems start at rest, or that all input and output values of the system before time 0 are equal to 0. 
This is how this system behaves going forward, given the unit impulse function as the input signal: \begin{aligned} y[0] = x[0] + y[-1] = 1 + 0 &= 1 \\ y[1] = x[1] + y[0] = 0 + 1 &= 1 \\ y[2] = x[2] + y[1] = 0 + 1 &= 1. \end{aligned} This trend continues, and the value of this system is always 1 from now until the end of time. That's a bit surprising given that only a small jump start was given to this system. This system is persistent because there is an infinite number of non-zero samples in the output. A transient system would have a finite number of non-zero samples in the output. The operator equation, Y = X + \mathcal{R}Y , can be rewritten as Y = \frac{1}{1 - \mathcal{R}}X . This is a very important equation because it will come up all the time when dealing with delays. A better way of describing this is needed, and comparing it to a feedforward system helps. Consider the following two geometric series: \begin{aligned} S &= 1 + x+ x^2 + x^3 + \cdots \\ Sx &= x + x^2 + x^3 + x^4 + \cdots. \end{aligned} If we subtract the second equation from the first, we get S - Sx = S(1 - x) = 1 , so S = \frac{1}{1 - x} = 1 + x + x^2 + x^3 +\cdots . This same method can be applied to a system O with O = 1 + \mathcal{R} + \mathcal{R}^2 + \cdots , giving \frac{1}{1 - \mathcal{R}} = 1 + \mathcal{R} + \mathcal{R}^2 + \cdots. Try to imagine the block diagram that describes system O . Try drawing it on a piece of paper before looking at the answer. O is an interesting system that has an infinite number of feedforward paths. Each subsequent feedforward path has one additional delay (with the first path having no delay). There is an adder right before the output for all paths. It looks something like this: O , an infinitely large feedforward loop First-order systems are simple systems that provide an easier introduction to predicting system behavior. A first-order system is a system whose system function is first-order in \mathcal{R} . 
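The accumulator's persistence is easy to check by simulating the difference equation directly (an illustrative Python sketch, assuming the rest condition above):

```python
def simulate(x, n_steps):
    """Simulate y[n] = x[n] + y[n-1] with the system at rest before n = 0."""
    y, prev = [], 0.0
    for n in range(n_steps):
        prev = x(n) + prev
        y.append(prev)
    return y

unit_impulse = lambda n: 1.0 if n == 0 else 0.0
out = simulate(unit_impulse, 10)   # persistent: every output sample stays at 1
```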
That is, it only involves \mathcal{R} terms, not \mathcal{R}^2 or higher terms. You may be wondering about zero-order systems. What's another name for zero-order systems? A zero-order system might also be called a feedforward system because there is zero delay. These systems are simple to understand because for any signal with a finite number of non-zero samples, it will produce an output with a finite number of non-zero samples. To get an idea of how we might predict the first-order system, let's look at a simple block diagram. Look at the block diagram below. Which of these is the corresponding system function: c\mathcal{R} , 1 + c\mathcal{R} , \frac{1}{1 + c\mathcal{R}} , or \frac{1}{1 - c\mathcal{R}} ? Once you find the corresponding system function for this block diagram, we can use the transformation we found above to expand the system function: \frac{Y}{X} = 1 + c\mathcal{R} + c^2\mathcal{R}^2 + \cdots. So, the output signal is a sum of the scaled and delayed input signal. Say c equals 1.5. Then \frac{Y}{X} = 1 + 1.5\mathcal{R} + 2.25\mathcal{R}^2 + \cdots. In other words, the system grows more and more as time goes on ( 1.5^n grows with n ). In fact, such systems can all be described by their poles, the base of the geometric sequence. In that example, the pole was 1.5. The value of the pole determines how the system behaves over time:
Value of pole, p_0\hspace{10mm} Behavior of system (mode)
p_0 \lt -1 : The output magnitude increases to \infty and the sign alternates.
-1 \lt p_0 \lt 0 : The output magnitude decreases towards 0 and the sign alternates.
0 \lt p_0 \lt 1 : The output magnitude decreases towards 0 monotonically.
p_0 \gt 1 : The output increases to \infty monotonically.
These behaviors are called modes. For a fixed pole, there is only one mode for a first-order system. However, more complex systems can display more than one mode. Second-order systems have system functions whose denominator is second-order ( meaning it has both \mathcal{R} and \mathcal{R}^2 terms). 
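Before moving on to second-order systems, the mode table can be spot-checked with a short simulation of y[n] = x[n] + p·y[n−1], whose impulse response is p^n (illustrative Python, not from the wiki):

```python
def first_order_impulse(p, n_steps=8):
    """Impulse response of y[n] = x[n] + p*y[n-1], i.e. h[n] = p**n."""
    h, prev = [], 0.0
    for n in range(n_steps):
        prev = (1.0 if n == 0 else 0.0) + p * prev
        h.append(prev)
    return h

growing = first_order_impulse(1.5)     # pole > 1: grows to infinity, monotonic
decaying = first_order_impulse(-0.5)   # -1 < pole < 0: decays, sign alternates
```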
A useful tool for understanding second-order systems is decomposing them into collections of first-order systems. Remember that we can do that because LTI systems have the important property that combinations of LTI systems are themselves LTI systems. Take the following second-order LTI system: This system can be represented by the following system function: \frac{Y}{X} = H = \frac{1}{1 -0.2\mathcal{R} -0.24\mathcal{R}^2} . There are a few ways we can break this system down into first-order systems. Un-cascade it: We can think about this system as a cascade of two first-order systems. The cascade of two systems is putting one system after the other. The output from the first system becomes the input to the next system. The goal here is to find H_1 and H_2 such that H_1H_2 = H . So we need to factor the denominator of our previous system function: H = \frac{1}{(1 - 0.6\mathcal{R})(1 + 0.4\mathcal{R})} . So, now we have H_1 = \frac{1}{(1 -0.6\mathcal{R})} and H_2 = \frac{1}{(1 + 0.4\mathcal{R})} . In other words, it's two systems, one with pole 0.6 and one with pole -0.4. The new system diagram looks like this: A cascade of two first-order systems We can also decompose the second-order system into the sum of two first-order systems. Note that summing two systems is not the same as cascading them. The goal here is to find H_1 and H_2 such that H_1 + H_2 = H . This process uses partial fraction decomposition. So, we have our factored system function H = \frac{1}{(1 - 0.6\mathcal{R})(1 + 0.4\mathcal{R})} and, to perform the decomposition, we write H = \frac{A}{(1 - 0.6\mathcal{R})} + \frac{B}{(1 + 0.4\mathcal{R})} . We need to solve for A and B: \begin{aligned} \frac{1}{(1 - 0.6\mathcal{R})(1 + 0.4\mathcal{R})} &= \frac{A}{(1 - 0.6\mathcal{R})} + \frac{B}{(1 + 0.4\mathcal{R})} \\ 1 &= A(1 + 0.4\mathcal{R}) + B(1 - 0.6\mathcal{R}) \\ &= (A + B) + (0.4A - 0.6B)\mathcal{R}. 
\end{aligned} Now we need to equate terms on each side of the equation that have equal powers of \mathcal{R} : \begin{aligned} 1 &= A + B \\ 0 &= 0.4A - 0.6B. \end{aligned} Solving gives B = 0.4,\qquad A = 0.6, which means we now have H = H_1 + H_2, where H_1 = \frac{0.6}{(1 - 0.6\mathcal{R})},\qquad H_2 = \frac{0.4}{(1 + 0.4\mathcal{R})}. This block diagram is a little trickier to figure out. But it's a feedforward system with two feedback subsystems. The terms in the numerators of the subsystems H_1 and H_2 appear as gains after the feedback. Block diagram after additive decomposition This system, unlike first-order systems, has two poles. So, predicting its behavior is not as straightforward. On the one hand, the top system has 0.6 as its pole, and the bottom system has -0.4 as its pole. They both cause the system to decrease its output magnitude over time, but should the sign alternate or stay monotonic? The dominant pole dictates what the system does over time. For an n^\text{th} -order system, the dominant pole is the pole with the largest magnitude. So, for our system, the dominant pole is 0.6. This system will monotonically decrease its output magnitude. In the previous section, we evaluated a system that could be factored into real numbers. What happens when that's not possible? Imagine a system function that looks like this: H = \frac{2 + 2\mathcal{R}}{1 + 2\mathcal{R} + 4\mathcal{R}^2} . Factoring the denominator gives us H = \frac{2 + 2\mathcal{R}}{\big(1 - (-1 + \sqrt{-3})\mathcal{R}\big)\big(1 - (-1 - \sqrt{-3})\mathcal{R}\big)}. We have complex poles. The poles are -1 + 1.732j and -1 - 1.732j . This is not an uncommon occurrence. Difference equations can describe systems in the real world without having any complex terms in them. However, they can still have complex poles. For example, the difference equation that describes this system is y[n] = 2x[n] + 2x[n-1] - 2y[n-1] - 4y[n-2] . To work with these complex numbers, we transform them into polar coordinates. 
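Before going on, the additive decomposition above can be verified numerically: the impulse response of the original recursion should equal the sum of the two first-order modes, 0.6·(0.6)^n + 0.4·(−0.4)^n (illustrative Python, not from the wiki):

```python
def h_direct(n_steps):
    """Impulse response of y[n] = x[n] + 0.2*y[n-1] + 0.24*y[n-2]."""
    h = []
    for n in range(n_steps):
        x = 1.0 if n == 0 else 0.0
        y1 = h[n - 1] if n >= 1 else 0.0
        y2 = h[n - 2] if n >= 2 else 0.0
        h.append(x + 0.2 * y1 + 0.24 * y2)
    return h

def h_modes(n_steps):
    """Sum of the decomposed modes: A*(0.6)**n + B*(-0.4)**n, A=0.6, B=0.4."""
    return [0.6 * 0.6 ** n + 0.4 * (-0.4) ** n for n in range(n_steps)]
```

The dominant 0.6 mode takes over quickly, which is why the overall response decays monotonically after the first few samples.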
So, a + bj = re^{j\Omega} , where \begin{aligned} a &= r\cos\Omega \\ b &= r\sin\Omega . \end{aligned} The radius is r = \sqrt{a^2 + b^2} , and the angle is \Omega = \operatorname{atan2}(b, a) . Complex poles produce complex modes. So, while a real pole might give us \frac{1}{1 - p\mathcal{R}} = 1 + p\mathcal{R} + p^2\mathcal{R}^2 +\cdots , a complex pole will give us \frac{1}{1 - re^{j\Omega}\mathcal{R}} = 1 + re^{j\Omega}\mathcal{R} + r^2e^{j2\Omega}\mathcal{R}^2 + \cdots . What's going on here as n tends towards infinity? If you think of p^n as a point in the complex plane, it's easier to visualize. As time goes on, the radius of that point will change due to r^n increasing or decreasing. The angle \Omega won't affect the radius magnitude; it will just rotate the point around the origin of the plane, so each successive p^n is rotated by \Omega relative to the previous one. We call the period of this signal \frac{2\pi}{\Omega} because that's the number of samples it takes to complete one full circle. Example of complex mode[1] Because complex numbers always come in conjugate pairs, their imaginary parts always cancel each other out. This allows us to figure out a difference equation that has real numbers, so we can understand the system. Consider a system with poles re^{j\Omega} and re^{-j\Omega} : H = \frac{1}{\big(1 - re^{j\Omega}\mathcal{R}\big)\big(1 - re^{-j\Omega}\mathcal{R}\big)} . Multiplying out the denominator, we get 1 - r\big(e^{j\Omega} + e^{-j\Omega}\big)\mathcal{R} + r^2e^{j\Omega - j\Omega}\mathcal{R}^2 . Using Euler's formula, e^{jx} = \cos(x) + j\sin(x) , this equals 1 - r\big(\cos\Omega + j\sin\Omega + \cos(-\Omega) + j\sin(-\Omega)\big)\mathcal{R} + r^2\mathcal{R}^2 , and since \sin(-x) = -\sin(x) and \cos(-x) = \cos(x) , this in turn equals 1 - 2r\cos\Omega\mathcal{R} + r^2\mathcal{R}^2 . The complex portions canceled each other out because \sin(-x) = -\sin(x) . This is great because now we can write a difference equation that helps us understand how this system behaves: y[n] = x[n] + 2r\cos\Omega y[n-1] - r^2y[n-2] . But how can we understand how our system will behave? 
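One way is to simulate the real difference equation and compare it with the standard closed-form impulse response for a conjugate pole pair, h[n] = rⁿ·sin((n+1)Ω)/sin Ω (a known identity, not stated in the wiki; the values of r and Ω below are arbitrary test choices):

```python
import math

r, omega = 0.9, math.pi / 5   # test pole pair 0.9 * e^{±j*pi/5}

def h_recursion(n_steps):
    """Impulse response of y[n] = x[n] + 2r*cos(w)*y[n-1] - r^2*y[n-2]."""
    h = []
    for n in range(n_steps):
        x = 1.0 if n == 0 else 0.0
        y1 = h[n - 1] if n >= 1 else 0.0
        y2 = h[n - 2] if n >= 2 else 0.0
        h.append(x + 2 * r * math.cos(omega) * y1 - r ** 2 * y2)
    return h

def h_closed(n_steps):
    """Closed form for a conjugate pole pair: r^n * sin((n+1)w) / sin(w)."""
    return [r ** n * math.sin((n + 1) * omega) / math.sin(omega)
            for n in range(n_steps)]
```

The closed form makes the narrative explicit: r^n sets the decaying envelope while Ω sets the oscillation, with period 2π/Ω = 10 samples for these test values.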
For an output of the form y[n] = r^n(\cos n\Omega + \alpha\sin n\Omega) , the oscillating factor cannot be smaller than -\sqrt{1 + \alpha^2} or larger than \sqrt{1 + \alpha^2} . These give the bounds of our output signal. r provides an important clue as well: r dictates how fast the output envelope will decrease. Finally, \Omega dictates the oscillation frequency between the bounds. In the following graphic, the red and green lines are the bounds of the system. r dictates how fast the system is decaying, and \Omega dictates the period of oscillation \frac{2\pi}{\Omega} . Output of system with complex poles[1] [1] Bj, R. 6.01 Spring 2016. Retrieved June 21, 2016, from http://sicp-s4.mit.edu/6.01/spring16/reference/notes/sigsys
General topology (branch of topology) https://en.wikipedia.org/wiki/General_topology
Let X be a set and let τ be a family of subsets of X. Then τ is called a topology on X if: [1] [2] (i) both the empty set and X are elements of τ, (ii) any union of elements of τ is an element of τ, and (iii) any intersection of finitely many elements of τ is an element of τ. The members of τ are called open sets in X. A subset of X is said to be closed if its complement is in τ (i.e., its complement is open). A subset of X may be open, closed, both ( clopen set), or neither. The empty set and X itself are always both closed and open. Main article: Basis (topology) A base (or basis) B for a topological space X with topology T is a collection of open sets in T such that every open set in T can be written as a union of elements of B. [3] [4] We say that the base generates the topology T. Bases are useful because many properties of topologies can be reduced to statements about a base that generates that topology, and because many topologies are most easily defined in terms of a base that generates them. Main article: Continuous function A function {\displaystyle f\colon X\rightarrow Y} between topological spaces can be defined to be continuous in several equivalent ways. In several contexts, the topology of a space is conveniently specified in terms of limit points. 
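On a finite set, the "unions of base elements" construction can be carried out exhaustively and the topology axioms checked directly (illustrative Python; the base below is an arbitrary example, not from the article):

```python
from itertools import combinations

X = frozenset({0, 1, 2})
base = [frozenset({0}), frozenset({1}), frozenset({0, 1, 2})]

# Generate the topology: every union of a subfamily of the base.
# The empty subfamily contributes the empty set.
tau = set()
for k in range(len(base) + 1):
    for combo in combinations(base, k):
        tau.add(frozenset().union(*combo))

def is_topology(tau, X):
    """Check the topology axioms; on a finite set, closure under pairwise
    unions and intersections implies closure under arbitrary ones."""
    if frozenset() not in tau or X not in tau:
        return False
    return all(a | b in tau and a & b in tau for a in tau for b in tau)
```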
In many instances, this is accomplished by specifying when a point is the limit of a sequence, but for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. [5] A function is continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition. In detail, a function f: X → Y is sequentially continuous if whenever a sequence (xn) in X converges to a limit x, the sequence (f(xn)) converges to f(x). [6] Thus sequentially continuous functions "preserve sequential limits". Every continuous function is sequentially continuous. If X is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X is a metric space, sequential continuity and continuity are equivalent. For non first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve limits of nets, and in fact this property characterizes continuous functions. 
In terms of the closure operator, a function {\displaystyle f\colon (X,\mathrm {cl} )\to (X',\mathrm {cl} ')\,} between closure spaces is continuous if for all subsets A of X, {\displaystyle f(\mathrm {cl} (A))\subseteq \mathrm {cl} '(f(A)),} or equivalently, if for all subsets A' of X', {\displaystyle f^{-1}(\mathrm {cl} '(A'))\supseteq \mathrm {cl} (f^{-1}(A')).} Similarly, in terms of the interior operator, a function {\displaystyle f\colon (X,\mathrm {int} )\to (X',\mathrm {int} ')\,} is continuous if for all subsets A of X', {\displaystyle f^{-1}(\mathrm {int} '(A))\subseteq \mathrm {int} (f^{-1}(A)).} A homeomorphism is a continuous bijection {\displaystyle (X,\tau _{X})\rightarrow (Y,\tau _{Y})} whose inverse is also continuous. Topologies can also be defined via continuous functions: given a function {\displaystyle f\colon X\rightarrow S,\,} from a topological space X to a set S, the final topology on S declares a set open exactly when its preimage under f is open in X; dually, a function {\displaystyle S\rightarrow X} into a topological space induces the initial topology on S, the coarsest topology making the map {\displaystyle X\rightarrow S} (respectively, the given map) continuous. Main article: Compact (mathematics) A collection {\displaystyle \{U_{\alpha }\}_{\alpha \in A}} of open sets is an open cover of X if {\displaystyle X=\bigcup _{\alpha \in A}U_{\alpha }.} A space X is compact if every open cover has a finite subcover, i.e., a finite subset J of the index set with {\displaystyle X=\bigcup _{i\in J}U_{i}.} A space X is connected if the only subsets of X that are both open and closed ( clopen sets) are X and the empty set. Main article: Product topology The product {\displaystyle X:=\prod _{i\in I}X_{i}} of a family of topological spaces carries the product topology, whose basic open sets are the products {\displaystyle \prod _{i\in I}U_{i}} in which each U_i is open in X_i and U_i = X_i for all but finitely many {\displaystyle i\in I.} Main article: Separation axiom Main article: axiom of countability A metric space [7] is an ordered pair {\displaystyle (M,d)} where {\displaystyle M} is a set and {\displaystyle d} is a metric on {\displaystyle M}, i.e., a function {\displaystyle d\colon M\times M\rightarrow \mathbb {R} } such that for any {\displaystyle x,y,z\in M}, the following holds: {\displaystyle d(x,y)\geq 0} (non-negativity), {\displaystyle d(x,y)=0\,} if and only if {\displaystyle x=y\,} ( identity of indiscernibles), {\displaystyle d(x,y)=d(y,x)\,} (symmetry), and {\displaystyle d(x,z)\leq d(x,y)+d(y,z)} ( triangle inequality). The function {\displaystyle d} is also called a distance function, and every metric on {\displaystyle M} induces the metric topology. The Baire category theorem says: If X is a complete metric space or a locally compact Hausdorff space, then the interior of every union of countably many nowhere dense sets is empty. [8] Main article: Continuum (topology) Main article: Topological dynamics Topological dynamics concerns the behavior of a space and its subspaces over time when subjected to continuous change. 
Many examples with applications to physics and other areas of math include fluid dynamics, billiards and flows on manifolds. The topological characteristics of fractals in fractal geometry, of Julia sets and the Mandelbrot set arising in complex dynamics, and of attractors in differential equations are often critical to understanding these systems.[ citation needed] Main article: Pointless topology Pointless topology (also called point-free or pointfree topology) is an approach to topology that avoids mentioning points. The name 'pointless topology' is due to John von Neumann. [9] The ideas of pointless topology are closely related to mereotopologies, in which regions (sets) are treated as foundational without explicit reference to underlying point sets. Main article: Dimension theory Main article: Topological algebra A topological algebra A over a topological field carries a multiplication {\displaystyle \cdot :A\times A\longrightarrow A} , {\displaystyle (a,b)\longmapsto a\cdot b} , that is continuous. Main article: Metrization theorem A topological space {\displaystyle (X,\tau )} is metrizable if there is a metric {\displaystyle d\colon X\times X\to [0,\infty )} that induces the topology {\displaystyle \tau } . ^ Moore, E. H.; Smith, H. L. (1922). "A General Theory of Limits". American Journal of Mathematics. 44 (2): 102–121. doi: 10.2307/2370388. JSTOR 2370388. Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology ( Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446
Convert VAR model to VEC model - MATLAB var2vec - MathWorks Deutschland Convert VAR Model to VEC Model Using Cell Arrays Convert Structural VAR Model to VEC Model Using Lag Operator Polynomials Convert VARMA Model to VEC Model Convert VAR model to VEC model: [VEC,C] = var2vec(VAR) If any of the time series in a vector autoregression (VAR) model are cointegrated, then the VAR model is nonstationary. You can determine the error-correction coefficient by converting the VAR model to a vector error-correction (VEC) model. The error-correction coefficient matrix determines, on average, how the time series react to deviations from their long-run averages. The rank of the error-correction coefficient matrix determines how many cointegrating relations exist in the model. Because estimate is suitable for estimating VAR models in reduced form, you can convert an estimated VAR model to its VEC model equivalent using var2vec. [VEC,C] = var2vec(VAR) returns the coefficient matrices (VEC) and the error-correction coefficient matrix (C) of the vector error-correction model equivalent to the vector autoregressive model with coefficient matrices (VAR). If the number of lags in the input vector autoregressive model is p, then the number of lags in the output vector error-correction model is q = p – 1. Consider converting the following VAR(3) model to a VEC(2) model. {y}_{t}=\left[\begin{array}{c}0.5\\ 1\\ -2\end{array}\right]+\left[\begin{array}{ccc}0.54& 0.86& -0.43\\ 1.83& 0.32& 0.34\\ -2.26& -1.31& 3.58\end{array}\right]{y}_{t-1}+\left[\begin{array}{ccc}0.14& -0.12& 0.05\\ 0.14& 0.07& 0.10\\ 0.07& 0.16& 0.07\end{array}\right]{y}_{t-3}+{\epsilon }_{t}. Specify the coefficient matrices ( {A}_{1} , {A}_{2} , and {A}_{3} ) of the VAR(3) model terms {y}_{t-1} , {y}_{t-2} , and {y}_{t-3} . A1 = [0.54 0.86 -0.43; 1.83 0.32 0.34; -2.26 -1.31 3.58]; A2 = zeros(3); A3 = [0.14 -0.12 0.05; 0.14 0.07 0.10; 0.07 0.16 0.07]; Pack the matrices into separate cells of a cell vector with three cells. 
Put A1 into the first cell, A2 into the second cell, and A3 into the third cell. VAR = {A1 A2 A3}; Compute the coefficient matrices of \Delta {y}_{t-1} and \Delta {y}_{t-2} , and the error-correction coefficient matrix, of the equivalent VEC(2) model. [VEC,C] = var2vec(VAR); The specification of a cell array of matrices for the input argument indicates that the VAR(3) model is a reduced-form model expressed as a difference equation. VAR{1} is the coefficient of {y}_{t-1} , and subsequent elements correspond to subsequent lags. VEC is a 1-by-2 cell vector of 3-by-3 coefficient matrices for the VEC(2) equivalent of the VAR(3) model. Because the VAR(3) model is in reduced form, the equivalent VEC model is also. That is, VEC{1} is the coefficient of \Delta {y}_{t-1} , and subsequent elements correspond to subsequent lags. The orientation of VEC corresponds to the orientation of VAR. Display the VEC(2) model coefficients. B1 = VEC{1} Since the constant offsets between the models are equivalent, the resulting VEC(2) model is \begin{array}{rcl}\Delta {y}_{t}& =& \left[\begin{array}{c}0.5\\ 1\\ -2\end{array}\right]+\left[\begin{array}{ccc}-0.14& 0.12& -0.05\\ -0.14& -0.07& -0.10\\ -0.07& -0.16& -0.07\end{array}\right]\Delta {y}_{t-1}+\left[\begin{array}{ccc}-0.14& 0.12& -0.05\\ -0.14& -0.07& -0.10\\ -0.07& -0.16& -0.07\end{array}\right]\Delta {y}_{t-2}\\ & +& \left[\begin{array}{ccc}-0.32& 0.74& -0.38\\ 1.97& -0.61& 0.44\\ -2.19& -1.15& 2.65\end{array}\right]{y}_{t-1}+{\epsilon }_{t}\end{array}. Consider converting the following structural VAR(2) model to a structural VEC(1) model. \left[\begin{array}{cc}0.54& -2.26\\ 1.83& 0.86\end{array}\right]{y}_{t}=\left[\begin{array}{cc}0.32& -0.43\\ -1.31& 0.34\end{array}\right]{y}_{t-1}+\left[\begin{array}{cc}0.07& 0.07\\ -0.01& -0.02\end{array}\right]{y}_{t-2}+{\epsilon }_{t}. 
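Returning to the reduced-form VAR(3)-to-VEC(2) example above: the conversion can be reproduced outside MATLAB with the standard identities B_j = −(A_{j+1} + … + A_p) and C = A_1 + … + A_p − I (a NumPy sketch written for this page, not MathWorks code):

```python
import numpy as np

A1 = np.array([[0.54, 0.86, -0.43], [1.83, 0.32, 0.34], [-2.26, -1.31, 3.58]])
A2 = np.zeros((3, 3))
A3 = np.array([[0.14, -0.12, 0.05], [0.14, 0.07, 0.10], [0.07, 0.16, 0.07]])

def var_to_vec(A):
    """Reduced-form VAR(p) -> VEC(p-1): B_j = -sum_{i>j} A_i, C = sum(A) - I."""
    n = A[0].shape[0]
    B = [-sum(A[j + 1:], start=np.zeros((n, n))) for j in range(len(A) - 1)]
    C = sum(A) - np.eye(n)
    return B, C

B, C = var_to_vec([A1, A2, A3])   # mirrors [VEC,C] = var2vec({A1 A2 A3})
```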
Specify the autoregressive coefficient matrices {A}_{0} , {A}_{1} , and {A}_{2} . A0 = [0.54 -2.26; 1.83 0.86]; A1 = [0.32 -0.43; -1.31 0.34]; A2 = [0.07 0.07; -0.01 -0.02]; Pack the matrices into separate cells of a cell vector with three cells. Put A0 into the first cell, A1 into the second cell, and A2 into the third cell. Negate the coefficients corresponding to all nonzero lag terms. VARCoeff = {A0; -A1; -A2}; Create a lag operator polynomial that encompasses the autoregressive terms in the VAR(2) model. VAR = LagOp(VARCoeff) VAR is a LagOp lag operator polynomial. VAR specifies the VAR(2) model in lag operator notation, as in the equation \left({A}_{0}-{A}_{1}L-{A}_{2}{L}^{2}\right){y}_{t}={\epsilon }_{t}, where L is the lag operator. Compute the coefficient matrices of \Delta {y}_{t} and \Delta {y}_{t-1} , and the error-correction coefficient matrix, of the equivalent VEC(1) model. VAR.Coefficients{0} is {A}_{0} , the coefficient of {y}_{t} . Subsequent elements in VAR.Coefficients correspond to subsequent lags in VAR.Lags. VEC is the VEC(1) equivalent of the VAR(2) model. Because the VAR(2) model is structural, the equivalent VEC(1) model is as well. That is, VEC.Coefficients{0} is the coefficient of \Delta {y}_{t} , and subsequent elements correspond to subsequent lags in VEC.Lags. Display the VEC model coefficients in difference-equation notation. B0 = VEC.Coefficients{0} B1 = -VEC.Coefficients{1} The resulting VEC(1) model is \left[\begin{array}{cc}0.54& -2.26\\ 1.83& 0.86\end{array}\right]\Delta {y}_{t}=\left[\begin{array}{cc}-0.07& -0.07\\ 0.01& 0.02\end{array}\right]\Delta {y}_{t-1}+\left[\begin{array}{cc}-0.15& 1.9\\ -3.15& -0.54\end{array}\right]{y}_{t-1}+{\epsilon }_{t}. Alternatively, reflect the lag operator polynomial VEC around lag 0 to obtain the difference-equation notation coefficients. 
DiffEqnCoeffs = reflect(VEC); B = toCellArray(DiffEqnCoeffs); B{1} == B0 Approximate the coefficients of the VEC model that represents this stationary and invertible VARMA(8,4) model that is in lag operator form \begin{array}{l}\left\{\left[\begin{array}{ccc}1& 0.2& -0.1\\ 0.03& 1& -0.15\\ 0.9& -0.25& 1\end{array}\right]+\left[\begin{array}{ccc}0.5& -0.2& -0.1\\ -0.3& -0.1& 0.1\\ 0.4& -0.2& -0.05\end{array}\right]{L}^{4}+\left[\begin{array}{ccc}0.05& -0.02& -0.01\\ -0.1& -0.01& -0.001\\ 0.04& -0.02& -0.005\end{array}\right]{L}^{8}\right\}{y}_{t}=\\ \left\{\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]+\left[\begin{array}{ccc}-0.02& 0.03& 0.3\\ 0.003& 0.001& 0.01\\ 0.3& 0.01& 0.01\end{array}\right]{L}^{4}\right\}{\epsilon }_{t},\end{array} where {y}_{t}={\left[{y}_{1t}\;{y}_{2t}\;{y}_{3t}\right]}^{\prime } and {\epsilon }_{t}={\left[{\epsilon }_{1t}\;{\epsilon }_{2t}\;{\epsilon }_{3t}\right]}^{\prime } . Create a cell vector containing the VAR coefficient matrices, starting with the coefficient of {y}_{t} . Create a cell vector containing the VMA coefficient matrices, starting with the coefficient of {\epsilon }_{t} . arma2ma requires LagOp lag operator polynomials for input arguments that comprise structural VAR or VMA models. Construct separate LagOp polynomials that describe the VAR(8) and VMA(4) components of the VARMA(8,4) model. VARLag and VMALag are LagOp lag operator polynomials that describe the VAR and VMA components of the VARMA model. 
Convert the VARMA(8,4) model to a VAR(p) model by obtaining the coefficients of the truncated approximation of the infinite-lag polynomial. Set numLags to return at most 12 lagged terms. numLags = 12; VAR = arma2ar(VARLag,VMALag,numLags) VAR is a LagOp lag operator polynomial. All coefficients except those corresponding to lags 0, 4, 8, and 12 are 3-by-3 matrices of zeros. The coefficients in VAR.Coefficients comprise a structural VAR(12) model approximation of the original VARMA(8,4) model. Compute the coefficients of the VEC(11) model equivalent to the resulting VAR(12) model. Lags: [0 1 2 3 4 5 6 7 8 9 10 11] VEC is a LagOp lag operator polynomial containing the coefficient matrices of the resulting VEC(11) model in VEC.Coefficients. VEC.Coefficients{0} is the coefficient of \Delta {y}_{t} , and VEC.Coefficients{1} is the coefficient of \Delta {y}_{t-1} . Display the nonzero coefficients of the resulting VEC model. lag2Idx = VEC.Lags + 1; % Lags start at 0. Add 1 to convert to indices. VecCoeff = toCellArray(VEC); fprintf('%8.3f %8.3f %8.3f \n',VecCoeff{lag2Idx(j)}) -0.100 -0.150 1.000 VAR(p) model coefficients, specified as a numeric vector, a cell vector of n-by-n numeric matrices, or a LagOp lag operator polynomial object. If the VAR(p) model is a univariate time series, VAR must be a numeric vector of length p. VAR(j) contains the scalar Aj, the coefficient of the lagged response yt–j. The coefficient of yt (A0) is 1. If VAR is a cell vector, it must have length p, and each cell contains an n-by-n numeric matrix (n > 1). VAR{j} must contain Aj, the coefficient matrix of the lag term yt–j. var2vec assumes that the coefficient of yt (A0) is the n-by-n identity. If VAR is a LagOp lag operator polynomial, VAR.Degree must be p. VAR.Coefficients{0} is A0, the coefficient of yt. All other elements correspond to the coefficients of the subsequent lag terms. For example, VAR.Coefficients{j} is the coefficient matrix of yt–j. VAR.Lags stores all nonzero lags. To construct a model in reduced form, set VAR.Coefficients{0} to eye(VAR.Dimension). 
Convert the VAR(2) model \left[\begin{array}{cc}1& 0\\ 0& 1\end{array}\right]{y}_{t}=\left[\begin{array}{cc}0.1& 0.2\\ 1& 0.1\end{array}\right]{y}_{t-1}+\left[\begin{array}{cc}-0.1& 0.01\\ 0.2& -0.3\end{array}\right]{y}_{t-2}+{\epsilon }_{t} to a VEC(1) model. The model is in difference-equation notation. You can convert the model by entering VEC = var2vec({[0.1 0.2; 1 0.1], [-0.1 0.01; 0.2 -0.3]}); The VAR(2) model in lag operator notation is \left(\left[\begin{array}{cc}1& 0\\ 0& 1\end{array}\right]-\left[\begin{array}{cc}0.1& 0.2\\ 1& 0.1\end{array}\right]L-\left[\begin{array}{cc}-0.1& 0.01\\ 0.2& -0.3\end{array}\right]{L}^{2}\right){y}_{t}={\epsilon }_{t}. The coefficient matrices of the lagged responses appear negated compared to the corresponding coefficients in difference-equation notation. To obtain the same result using LagOp lag operator polynomials, enter VAR = LagOp({eye(2), -[0.1 0.2; 1 0.1], -[-0.1 0.01; 0.2 -0.3]}); VEC = var2vec(VAR); VEC(q) model coefficients of differenced responses, returned as a numeric vector, a cell vector of n-by-n numeric matrices, or a LagOp lag operator polynomial object. n is the number of time series in the VAR(p) model. VAR and VEC share the same data type and orientation. var2vec converts VAR(p) models to VEC(p – 1) models. That is: If VAR is a cell or numeric vector, then numel(VEC) is numel(VAR) - 1. If VAR is a LagOp lag operator polynomial, then VEC.Degree is VAR.Degree - 1. Error-correction coefficient, returned as an n-by-n numeric matrix. n is the number of time series in the VAR model. {A}_{0}{y}_{t}=a+{A}_{1}{y}_{t-1}+{A}_{2}{y}_{t-2}+...+{A}_{p}{y}_{t-p}+{\epsilon }_{t}. {B}_{0}\Delta {y}_{t}=b+{B}_{1}\Delta {y}_{t-1}+{B}_{2}\Delta {y}_{t-2}+...+{B}_{q}\Delta {y}_{t-q}+C{y}_{t-1}+{\epsilon }_{t}. 
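The algebra behind this conversion follows from substituting yt = yt–1 + Δyt into the difference equation, which gives C = A1 + … + Ap – A0 and Bj = –(Aj+1 + … + Ap). A minimal Python/NumPy sketch of that algebra (var_to_vec is a hypothetical helper, not a toolbox function), checked against the VAR(2) example above:

```python
import numpy as np

def var_to_vec(A):
    """Given difference-equation VAR coefficients [A0, A1, ..., Ap] in
    A0 y_t = a + A1 y_{t-1} + ... + Ap y_{t-p} + eps_t, return the VEC
    coefficients ([B1, ..., B_{p-1}], C) of
    A0 dy_t = b + B1 dy_{t-1} + ... + B_{p-1} dy_{t-p+1} + C y_{t-1} + eps_t.
    Substituting y_t = y_{t-1} + dy_t yields C = A1 + ... + Ap - A0 and
    B_j = -(A_{j+1} + ... + Ap)."""
    p = len(A) - 1
    zero = np.zeros_like(A[0])
    C = sum(A[1:], start=zero) - A[0]
    B = [-sum(A[j + 1:], start=zero) for j in range(1, p)]
    return B, C

# The VAR(2) example above: A0 = I, plus the two lag coefficient matrices.
A0 = np.eye(2)
A1 = np.array([[0.1, 0.2], [1.0, 0.1]])
A2 = np.array([[-0.1, 0.01], [0.2, -0.3]])
B, C = var_to_vec([A0, A1, A2])
```

For this example C = A1 + A2 – I (the error-correction coefficient) and the single lagged-difference coefficient is B1 = –A2.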
In lag operator notation, the VAR(p) model is A\left(L\right){y}_{t}=a+{\epsilon }_{t}, where A\left(L\right)={A}_{0}-{A}_{1}L-{A}_{2}{L}^{2}-...-{A}_{p}{L}^{p} and {L}^{j}{y}_{t}={y}_{t-j}. The corresponding VEC(q) model is B\left(L\right)\Delta {y}_{t}=b+C{y}_{t-1}+{\epsilon }_{t}, where B\left(L\right)={B}_{0}-{B}_{1}L-{B}_{2}{L}^{2}-...-{B}_{q}{L}^{q}. In difference-equation notation, the VAR(p) model is {A}_{0}{y}_{t}=a+{A}_{1}{y}_{t-1}+{A}_{2}{y}_{t-2}+...+{A}_{p}{y}_{t-p}+{\epsilon }_{t}, and the VEC(q) model is {B}_{0}\Delta {y}_{t}=b+{B}_{1}\Delta {y}_{t-1}+{B}_{2}\Delta {y}_{t-2}+...+{B}_{q}\Delta {y}_{t-q}+C{y}_{t-1}+{\epsilon }_{t}. To accommodate structural VAR models, specify the input argument VAR as a LagOp lag operator polynomial. To access the cell vector of the lag operator polynomial coefficients of the output argument VEC, enter toCellArray(VEC). VECDEN = toCellArray(reflect(VEC)); VECDEN is a cell vector containing p coefficients corresponding to the differenced response terms in VEC.Lags in difference-equation notation. The first element is the coefficient of Δyt, the second element is the coefficient of Δyt–1, and so on. Consider converting a VAR(p) model to a VEC(q) model. If the error-correction coefficient matrix (C) has: Rank zero, then the converted VEC model is a stable VAR(p – 1) model in terms of Δyt. Full rank, then the VAR(p) model is stable (i.e., has no unit roots) [2]. Rank r, such that 0 < r < n, then the stable VEC model has r cointegrating relations. The constant offset of the converted VEC model is the same as the constant offset of the VAR model. var2vec does not impose stability requirements on the coefficients. To check for stability, use isStable. isStable requires a LagOp lag operator polynomial as an input argument. For example, to check whether VAR, the cell array of n-by-n numeric matrices, composes a stable time series, enter arma2ar | arma2ma | isStable | estimate | toCellArray | vec2var
Wavelet coherence and cross-spectrum - MATLAB wcoherence - MathWorks Nordic Wavelet Coherence of Two Sine Waves Effect of Sampling Interval on Wavelet Coherence Effect of Sampling Frequency on Wavelet Coherence Effect of Number of Smoothed Scales on Wavelet Coherence Effect of Phase Display Threshold on Wavelet Coherence of Weather Data NumScalesToSmooth PhaseDisplayThreshold Wavelet Cross Spectrum 'NumOctaves' name-value pair will be removed Wavelet coherence and cross-spectrum wcoh = wcoherence(x,y) [wcoh,wcs] = wcoherence(x,y) [wcoh,wcs,period] = wcoherence(x,y,ts) [wcoh,wcs,f] = wcoherence(x,y,fs) [wcoh,wcs,f,coi] = wcoherence(___) [wcoh,wcs,period,coi] = wcoherence(___,ts) [___,coi,wtx,wty] = wcoherence(___) [___] = wcoherence(___,Name,Value) wcoherence(___) wcoh = wcoherence(x,y) returns the magnitude-squared wavelet coherence, which is a measure of the correlation between signals x and y in the time-frequency plane. Wavelet coherence is useful for analyzing nonstationary signals. The inputs x and y must be equal length, 1-D, real-valued signals. The coherence is computed using the analytic Morlet wavelet. [wcoh,wcs] = wcoherence(x,y) returns the wavelet cross-spectrum of x and y. You can use the phase of the wavelet cross-spectrum values to identify the relative lag between the input signals. [wcoh,wcs,period] = wcoherence(x,y,ts) uses the positive duration ts as the sampling interval. The duration ts is used to compute the scale-to-period conversion, period. The duration array period has the same format as specified in ts. [wcoh,wcs,f] = wcoherence(x,y,fs) uses the positive sampling frequency, fs, to compute the scale-to-frequency conversion, f. The sampling frequency fs is in Hz. [wcoh,wcs,f,coi] = wcoherence(___) returns the cone of influence, coi, for the wavelet coherence in cycles per sample. If you specify the sampling frequency, fs, the cone of influence is in Hz. 
[wcoh,wcs,period,coi] = wcoherence(___,ts) returns the cone of influence, coi, in cycles per unit time. [___,coi,wtx,wty] = wcoherence(___) returns the continuous wavelet transforms (CWT) of x and y in wtx and wty, respectively. wtx and wty are used in the formation of the wavelet cross-spectrum and coherence estimates. [___] = wcoherence(___,Name,Value) specifies additional options using one or more name-value pair arguments. This syntax can be used with any of the previous syntaxes. wcoherence(___) with no output arguments plots the wavelet coherence and cone of influence in the current figure. Due to the inverse relationship between frequency and period, a plot that uses the sampling interval is the inverse of a plot that uses the sampling frequency. For areas where the coherence exceeds 0.5, plots that use the sampling frequency display arrows to show the phase lag of y with respect to x. The arrows are spaced in time and scale. The direction of the arrows corresponds to the phase lag on the unit circle. For example, a vertical arrow indicates a π/2 or quarter-cycle phase lag. The corresponding lag in time depends on the duration of the cycle. Use default wcoherence settings to obtain the wavelet coherence between a sine wave with random noise and a frequency-modulated signal with decreasing frequency over time. t = linspace(0,1,1024); x = -sin(8*pi*t) + 0.4*randn(1,1024); x = x/max(abs(x)); y = wnoise('doppler',10); wcoh = wcoherence(x,y); The default coherence computation uses the analytic Morlet wavelet, 12 voices per octave, and smooths 12 scales. The default number of octaves is equal to floor(log2(numel(x)))-1, which in this case is 9. Obtain the wavelet coherence data for two signals, specifying a sampling interval of 0.001 seconds. Both signals consist of two sine waves (10 Hz and 50 Hz) in white noise. The sine waves have different time supports. Set the random number generator to its default settings for reproducibility. Then create the two signals. 
title('X') Obtain the coherence of the two signals. [wcoh,~,period,coi] = wcoherence(x,y,seconds(0.001)); Use the pcolor command to plot the coherence and cone of influence. period = seconds(period); coi = seconds(coi); h = pcolor(t,log2(period),wcoh); ytick=round(pow2(ax.YTick),3); ax.YTickLabel=ytick; ax.XLabel.String='Time'; ax.YLabel.String='Period'; ax.Title.String = 'Wavelet Coherence'; hcol = colorbar; hcol.Label.String = 'Magnitude-Squared Coherence'; plot(ax,t,log2(coi),'w--','linewidth',2) Use wcoherence(x,y,seconds(0.001)) without any output arguments. This plot includes the phase arrows and the cone of influence. wcoherence(x,y,seconds(0.001)); Obtain the wavelet coherence for two signals, specifying a sampling frequency of 1000 Hz. Both signals consist of two sine waves (10 Hz and 50 Hz) in white noise. The sine waves have different time supports. Set the random number generator to its default settings for reproducibility and create the two signals. x = cos(2*pi*10*t).*(t>=0.5 & t<1.1)+... Obtain the wavelet coherence. The coherence plot is flipped with respect to the plot in the previous example, which specifies a sampling interval instead of a sampling frequency. wcoherence(x,y,1000) Obtain the scale-to-frequency conversion output in f. [wcoh,wcs,f] = wcoherence(x,y,1000); Obtain the wavelet coherence for two signals. Both signals consist of two sine waves (10 Hz and 50 Hz) in white noise. Use the default number of scales to smooth. This value is equivalent to the number of voices per octave. Both values default to 12. Set the random number generator to its default settings for reproducibility. Then, create the two signals and obtain the coherence. wcoherence(x,y) Set the number of scales to smooth to 18. The increased smoothing causes reduced low frequency resolution. wcoherence(x,y,'NumScalesToSmooth',18) Compare the effects of using different phase display thresholds on the wavelet coherence. 
Plot the wavelet coherence between the El Nino time series and the All India Average Rainfall Index. The data are sampled monthly. Specify the sampling interval as 1/12 of a year to display the periods in years. Use the default phase display threshold of 0.5, which shows phase arrows only where the coherence is greater than or equal to 0.5. load ninoairdata; wcoherence(nino,air,years(1/12)); Set the phase display threshold to 0.7. The number of phase arrows decreases. vector of real values Input signal, specified as a vector of real values. x must be a 1-D, real-valued signal. The two input signals, x and y, must be the same length and must have at least four samples. Input signal, specified as vector of real values. y must be a 1-D, real-valued signal. The two input signals, x and y, must be the same length and must have at least four samples. ts — Sampling interval duration with positive scalar input Sampling interval, also known as the sampling period, specified as a duration with positive scalar input. Valid durations are years, days, hours, seconds, and minutes. You can also use the duration function to specify ts. You cannot use calendar durations (caldays, calweeks, calmonths, calquarters, or calyears). You cannot specify both a sampling frequency fs and a sampling period ts. positive scalar | [] If you specify fs as empty, wcoherence uses normalized frequency in cycles/sample. The Nyquist frequency is ½. Example: 'PhaseDisplayThreshold',0.7; specifies the threshold for displaying phase vectors. Frequency limits to use in wcoherence, specified as a two-element vector with positive strictly increasing elements. The first element specifies the lowest peak passband frequency and must be greater than or equal to the product of the wavelet peak frequency in hertz and two time standard deviations divided by the signal length. The second element specifies the highest peak passband frequency and must be less than or equal to the Nyquist frequency. 
The base 2 logarithm of the ratio of the maximum frequency to the minimum frequency must be greater than or equal to 1/NV where NV is the number of voices per octave. If you specify frequency limits outside the permissible range, wcoherence truncates the limits to the minimum and maximum valid values. Use cwtfreqbounds with the wavelet set to 'amor' to determine frequency limits for different parameterizations of the wavelet coherence. Example: 'FrequencyLimits',[0.1 0.3] Period limits to use in wcoherence, specified as a two-element duration array with strictly increasing positive elements. The first element must be greater than or equal to 2×ts where ts is the sampling period. The base 2 logarithm of the ratio of the minimum period to the maximum period must be less than or equal to -1/NV where NV is the number of voices per octave. The maximum period cannot exceed the signal length divided by the product of two time standard deviations of the wavelet and the wavelet peak frequency. If you specify period limits outside the permissible range, wcoherence truncates the limits to the minimum and maximum valid values. Use cwtfreqbounds with the wavelet set to 'amor' to determine period limits for different parameterizations of the wavelet coherence. Example: 'PeriodLimits',[seconds(0.2) seconds(1)] 12 (default) | even integer from 10 to 32 Number of voices per octave to use in the wavelet coherence, specified as an even integer from 10 to 32. NumScalesToSmooth — Number of scales to smooth Number of scales to smooth in time and scale, specified as a positive integer less than or equal to one half N, where N is the number of scales in the wavelet transform. If unspecified, NumScalesToSmooth defaults to the minimum of floor(N/2) and VoicesPerOctave. The function uses a moving average filter to smooth across scale. If your coherence is noisy, you can specify a larger NumScalesToSmooth value to smooth the coherence more. 
Number of octaves to use in the wavelet coherence, specified as a positive integer between 1 and floor(log2(numel(x)))-1. If you do not need to examine lower frequency values, use a smaller NumOctaves value. The 'NumOctaves' name-value pair is not recommended and will be removed in a future release. The recommended way to modify the frequency or period range of wavelet coherence is with the 'FrequencyLimits' or 'PeriodLimits' name-value pairs. You cannot specify both the 'NumOctaves' and 'FrequencyLimits' or 'PeriodLimits' name-value pairs. See cwtfreqbounds. PhaseDisplayThreshold — Threshold for displaying phase vectors Threshold for displaying phase vectors, specified as a real scalar between 0 and 1. This function displays phase vectors for regions with coherence greater than or equal to the specified threshold value. Lowering the threshold value displays more phase vectors. If you use wcoherence with any output arguments, the PhaseDisplayThreshold value is ignored. wcoh — Wavelet coherence Wavelet coherence, returned as a matrix. The coherence is computed using the analytic Morlet wavelet over logarithmic scales, with a default value of 12 voices per octave. The default number of octaves is equal to floor(log2(numel(x)))-1. If you do not specify a sampling interval, sampling frequency is assumed. wcs — Wavelet cross spectrum matrix of complex values Wavelet cross-spectrum, returned as a matrix of complex values. You can use the phase of the wavelet cross-spectrum values to identify the relative lag between the input signals. period — Scale-to-period conversion array of durations Scale-to-period conversion, returned as an array of durations. The conversion values are computed from the sampling period specified in ts. Each period element has the same format as ts. f — Scale-to-frequency conversion Scale-to-frequency conversion, returned as a vector. The vector contains the peak frequency values for the wavelets used to compute the coherence. 
If you output f but do not specify the sampling frequency input, fs, the returned frequencies are in cycles per sample. array of doubles | array of durations Cone of influence for the wavelet coherence, returned as either an array of doubles or an array of durations. The cone of influence indicates where edge effects occur in the coherence data. If you specify a sampling frequency, fs, the cone of influence is in Hz. If you specify a sampling interval or period, ts, the cone of influence is in periods. Due to the edge effects, give less credence to areas of apparent high coherence that are outside or overlap the cone of influence. The cone of influence is indicated by a dashed line. For additional information, see Boundary Effects and the Cone of Influence. wtx — Continuous wavelet transform of x Continuous wavelet transform of x, returned as a matrix. wty — Continuous wavelet transform of y Continuous wavelet transform of y, returned as a matrix. The wavelet cross-spectrum is a measure of the distribution of power of two signals. The wavelet cross-spectrum of two time series, x and y, is: {C}_{xy}\left(a,b\right)=S\left({C}_{x}^{*}\left(a,b\right){C}_{y}\left(a,b\right)\right) Cx(a,b) and Cy(a,b) denote the continuous wavelet transforms of x and y at scales a and positions b. The superscript * is the complex conjugate, and S is a smoothing operator in time and scale. For real-valued time series, the wavelet cross-spectrum is real-valued if you use a real-valued analyzing wavelet, and complex-valued if you use a complex-valued analyzing wavelet. Wavelet coherence is a measure of the correlation between two signals. The wavelet coherence of two time series x and y is: \frac{{|S\left({C}_{x}^{*}\left(a,b\right){C}_{y}\left(a,b\right)\right)|}^{2}}{S\left(|{C}_{x}\left(a,b\right){|}^{2}\right)·S\left(|{C}_{y}\left(a,b\right){|}^{2}\right)} Cx(a,b) and Cy(a,b) denote the continuous wavelet transforms of x and y at scales a and positions b. 
The superscript * is the complex conjugate, and S is a smoothing operator in time and scale. For real-valued time series, the wavelet coherence is real-valued if you use a real-valued analyzing wavelet, and complex-valued if you use a complex-valued analyzing wavelet. [1] Grinsted, A., J. C. Moore, and S. Jevrejeva. "Application of the cross wavelet transform and wavelet coherence to geophysical time series." Nonlinear Processes in Geophysics. Vol. 11, Issue 5/6, 2004, pp. 561–566. [2] Maraun, D., J. Kurths, and M. Holschneider. "Nonstationary Gaussian processes in wavelet domain: Synthesis, estimation and significance testing." Physical Review E. Vol. 75, 2007, pp. 016707-1–016707-14. [3] Torrence, C., and P. Webster. "Interdecadal Changes in the ENSO-Monsoon System." Journal of Climate. Vol. 12, 1999, pp. 2679–2690. The following input arguments are not supported: ts (sampling interval), PeriodLimits name-value pair, and PhaseDisplayThreshold name-value pair. The duration data type is not supported. R2020a: 'NumOctaves' name-value pair will be removed The 'NumOctaves' name-value pair argument will be removed in a future release. Use either: Name-value pair argument 'FrequencyLimits' to modify the frequency range of wavelet coherence. Name-value pair argument 'PeriodLimits' to modify the period range of wavelet coherence. cwt | cwtfreqbounds | cwtfilterbank
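The coherence and cross-spectrum formulas above can be sketched numerically. The following is a Python/NumPy illustration, not toolbox code: it takes two precomputed CWT matrices (stand-ins for the wtx and wty outputs) and uses a plain moving-average smoother for S; the smoother size is an arbitrary illustrative choice, whereas wcoherence smooths with wavelet-dependent windows.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence_from_cwt(wtx, wty, smooth=(3, 7)):
    """Magnitude-squared wavelet coherence and cross-spectrum from two
    CWT matrices (rows = scales, columns = time), following
    |S(Cx* Cy)|^2 / (S(|Cx|^2) S(|Cy|^2))."""
    def S(m):  # moving-average smoother across scale and time
        return uniform_filter(m, size=smooth, mode="nearest")
    cross = np.conj(wtx) * wty
    wcs = S(cross.real) + 1j * S(cross.imag)   # smoothed cross-spectrum
    wcoh = np.abs(wcs) ** 2 / (S(np.abs(wtx) ** 2) * S(np.abs(wty) ** 2))
    return wcoh, wcs

# Identical inputs give coherence 1 everywhere, as the formula implies.
rng = np.random.default_rng(0)
cwt = rng.standard_normal((16, 64)) + 1j * rng.standard_normal((16, 64))
wcoh, wcs = coherence_from_cwt(cwt, cwt)
```

The smoothing is what makes the statistic informative: without S, the ratio is identically 1 for any pair of signals.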
Carmichael Lambda Function - Maple Help
CarmichaelLambda(n)
lambda(n)
\mathrm{\lambda }⁡\left(n\right)
CarmichaelLambda(n) gives the size of the largest cyclic group that the powers {g}^{i}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{mod}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}n of a single residue g coprime to n can generate. Equivalently, CarmichaelLambda(n) is the smallest positive integer i such that {g}^{i}\equiv 1\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\left(\mathbf{mod}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}n\right) for every g coprime to n. lambda is an alias for CarmichaelLambda. You can enter the command lambda using either the 1-D or 2-D calling sequence. For example, lambda(8) is equivalent to \mathrm{\lambda }⁡\left(8\right).
with(NumberTheory):
seq(Totient(i), i = 1..7)
1, 1, 2, 2, 4, 2, 6
seq(CarmichaelLambda(i), i = 1..7)
1, 1, 2, 2, 4, 2, 6
CarmichaelLambda(8)
2
Totient(8)
4
lambda(21)
6
Totient(21)
12
CarmichaelLambda(k)
CarmichaelLambda(k)
Carmichael's theorem states that {g}^{\mathrm{\lambda }⁡\left(n\right)}\equiv 1\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\left(\mathbf{mod}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}n\right) for every g coprime to n. 
d := CarmichaelLambda(112)
d := 12
{seq(`if`(igcd(g, 112) = 1, g^d mod 112, NULL), g = 1..111)}
{1}
Every residue g coprime to 112 satisfies g^d ≡ 1 (mod 112), so the set of computed values collapses to {1}, as Carmichael's theorem predicts.
The NumberTheory[CarmichaelLambda] command was introduced in Maple 2016.
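For readers without Maple, λ(n) can be brute-forced directly from its definition. This Python sketch (an illustrative helper, far slower than Maple's factorization-based routine) reproduces the values shown above:

```python
from math import gcd

def carmichael_lambda(n):
    """Smallest positive i such that pow(g, i, n) == 1 for every g
    coprime to n, found by brute force (fine for small n)."""
    if n == 1:
        return 1
    units = [g for g in range(1, n) if gcd(g, n) == 1]
    i = 1
    while any(pow(g, i, n) != 1 for g in units):
        i += 1
    return i
```

For example, carmichael_lambda(8) is 2, carmichael_lambda(21) is 6, and carmichael_lambda(112) is 12, matching the Maple output.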
Crossed-dipole antenna element - MATLAB - MathWorks 한국 The phased.CrossedDipoleAntennaElement System object™ models a crossed-dipole antenna element, which is used to generate circularly polarized fields. A crossed-dipole antenna is formed from two orthogonal short-dipole antennas. By default, one dipole lies along the y-axis and the other along the z-axis in the antenna local coordinate system. You can rotate the antenna in the yz-plane using the RotationAngle property. This antenna object generates right-hand or left-hand circularly polarized fields, or linearly polarized fields, as controlled by the Polarization property. These polarizations are pure only along the x-axis (the direction defined by 0° azimuth and 0° elevation angles). 0 (default) | scalar between -45° and +45° Crossed-dipole rotation angle, specified as a scalar between -45° and +45°. The rotation angle specifies the angle of rotation of the two dipoles around the x-axis. The rotation angle is measured counterclockwise around the x-axis looking toward the origin. The default value of 0° corresponds to the case where one dipole is along the z-axis and the other dipole is along the y-axis. Units are in degrees. 'RHCP' – right-hand circularly polarized field. The horizontal field has a 90° phase advance relative to the vertical field. 'LHCP' – left-hand circularly polarized field. The horizontal field has a 90° phase delay relative to the vertical field. Azimuth and elevation angles of the response directions, specified as a real-valued 1-by-M row vector or a real-valued 2-by-M matrix, where M is the number of angular directions. Angle units are in degrees. The azimuth angle must lie in the range –180° to 180°, inclusive. The elevation angle must lie in the range –90° to 90°, inclusive. Find the response of a crossed-dipole antenna at boresight, 0° azimuth and 0° elevation, and off-boresight at 30° azimuth and 0° elevation. The antenna operates at 250 MHz. 
Actor-Critic Agents - MATLAB & Simulink - MathWorks 한국 {S}_{ts},{A}_{ts},{R}_{ts+1},{S}_{ts+1},…,{S}_{ts+N−1},{A}_{ts+N−1},{R}_{ts+N},{S}_{ts+N} For each episode step t = ts+1, ts+2, …, ts+N, compute the return Gt, which is the sum of the reward for that step and the discounted future reward. If Sts+N is not a terminal state, the discounted future reward includes the discounted state value function, computed using the critic network V. {G}_{t}=\underset{k=t}{\overset{ts+N}{∑}}\left({\mathrm{γ}}^{k−t}{R}_{k}\right)+b{\mathrm{γ}}^{N−t+1}V\left({S}_{ts+N};\mathrm{ϕ}\right) To specify the discount factor γ, use the DiscountFactor option. {D}_{t}={G}_{t}−V\left({S}_{t};\mathrm{ϕ}\right) d\mathrm{θ}=\underset{t=1}{\overset{N}{∑}}{∇}_{{\mathrm{θ}}_{\mathrm{μ}}}\mathrm{ln}\mathrm{π}\left(A|{S}_{t};\mathrm{θ}\right)⋅{D}_{t} Accumulate the gradients for the critic network by minimizing the mean squared error loss between the estimated value function V(St;ϕ) and the computed target return Gt across all N experiences. If the EntropyLossWeight option is greater than zero, then additional gradients are accumulated to minimize the entropy loss function. d\mathrm{ϕ}=\underset{t=1}{\overset{N}{∑}}{∇}_{\mathrm{ϕ}}{\left({G}_{t}−V\left({S}_{t};\mathrm{ϕ}\right)\right)}^{2} \mathrm{θ}=\mathrm{θ}+\mathrm{α}d\mathrm{θ} Here, α is the learning rate of the actor. Specify the learning rate when you create the actor by setting the LearnRate option in the rlActorOptimizerOptions property within the agent options object. \mathrm{ϕ}=\mathrm{ϕ}+\mathrm{β}d\mathrm{ϕ} Here, β is the learning rate of the critic. Specify the learning rate when you create the critic by setting the LearnRate option in the rlCriticOptimizerOptions property within the agent options object.
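In practice the N-step targets Gt are usually computed with a backward recursion rather than the explicit sum. A framework-free Python sketch (n_step_returns is an illustrative name; bootstrap_value plays the role of the b·V(Sts+N;ϕ) term and is 0 when the segment ends in a terminal state):

```python
def n_step_returns(rewards, bootstrap_value, gamma):
    """Return [G_1, ..., G_N] for one N-step segment via the recursion
    G_t = R_t + gamma * G_{t+1}, seeded with the critic's value of the
    final state (bootstrap_value = 0 when that state is terminal)."""
    G = bootstrap_value
    out = []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return out[::-1]

# Three unit rewards, a bootstrap value of 10, and gamma = 0.5:
# G_1 = 1 + 0.5 + 0.25 + 0.125 * 10 = 3.
returns = n_step_returns([1.0, 1.0, 1.0], 10.0, 0.5)
```

The recursion and the explicit discounted sum are algebraically identical; the backward form just avoids recomputing powers of γ.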
Predict or estimate states of dynamic systems - Simulink - MathWorks España Filtering/Adaptive Filters Use the Kalman Filter block to predict or estimate the state of a dynamic system from a series of incomplete and/or noisy measurements. Suppose you have a noisy linear system that is defined by the following equations: \begin{array}{l}{x}_{k}=A{x}_{k-1}+{w}_{k-1}\\ {z}_{k}=H{x}_{k}+{v}_{k}\end{array} This block can use the previously estimated state, {\stackrel{^}{x}}_{k-1} , to predict the current state at time k, {x}_{k}^{-} , as shown by the following equation: \begin{array}{l}{x}_{k}^{-}=A{\stackrel{^}{x}}_{k-1}\\ {P}_{k}^{-}=A{\stackrel{^}{P}}_{k-1}{A}^{T}+Q\end{array} The block can also use the current measurement, {z}_{k} , and the predicted state, {x}_{k}^{-} , to estimate the current state value at time k, {\stackrel{^}{x}}_{k} , so that it is a more accurate approximation: \begin{array}{l}{K}_{k}={P}_{k}^{-}{H}^{T}{\left(H{P}_{k}^{-}{H}^{T}+R\right)}^{-1}\\ {\stackrel{^}{x}}_{k}={x}_{k}^{-}+{K}_{k}\left({z}_{k}-H{x}_{k}^{-}\right)\\ {\stackrel{^}{P}}_{k}=\left(I-{K}_{k}H\right){P}_{k}^{-}\end{array} The variables in the previous equations are defined in the following table. 
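The predict and update equations above amount to a few lines of linear algebra per time step. A single-filter Python/NumPy sketch (the block itself runs N such filters in parallel; kalman_step is an illustrative helper, not block code):

```python
import numpy as np

def kalman_step(x_est, P_est, z, A, H, Q, R):
    """One predict/update cycle of the equations above."""
    # Predict the state and error covariance forward one step.
    x_pred = A @ x_est
    P_pred = A @ P_est @ A.T + Q
    # Fold in the measurement z via the Kalman gain K.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new

# Scalar example: a random-walk state observed directly in unit noise.
# With P = Q + R balanced as below, the gain is 0.5 and the estimate
# lands halfway between the prediction (0) and the measurement (2).
x, P = np.array([0.0]), np.array([[1.0]])
x, P = kalman_step(x, P, np.array([2.0]),
                   A=np.array([[1.0]]), H=np.array([[1.0]]),
                   Q=np.array([[0.0]]), R=np.array([[1.0]]))
```

The scalar case makes the gain's role visible: K = P⁻/(P⁻ + R) weights the measurement against the prediction, and the updated covariance shrinks accordingly.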
Variable, description, and default value or initial condition:
x: state; no default.
\stackrel{^}{x} : estimated state; zeros([6, 1]).
{x}^{-} : predicted state; no default.
A: state transition matrix; \left[\begin{array}{cccccc}1& 0& 1& 0& 0& 0\\ 0& 1& 0& 1& 0& 0\\ 0& 0& 1& 0& 0& 0\\ 0& 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 1& 0\\ 0& 0& 0& 0& 0& 1\end{array}\right]
w: process noise; no default.
z: measurement; no default.
H: measurement matrix; \left[\begin{array}{cccccc}1& 0& 0& 0& 0& 0\\ 0& 1& 0& 0& 0& 0\\ 0& 0& 0& 0& 1& 0\\ 0& 0& 0& 0& 0& 1\end{array}\right]
v: measurement noise; no default.
\stackrel{^}{P} : estimated error covariance; 10*eye(6).
{P}^{-} : predicted error covariance; no default.
Q: process noise covariance; 0.05*eye(6).
K: Kalman gain; no default.
R: measurement noise covariance; eye(4).
I: identity matrix; no default.
In the previous equations, z is a vector of measurement values. Most of the time, the block processes Z, an M-by-N matrix, where M is the number of measurement values and N is the number of filters. Use the Number of filters parameter to specify the number of filters to use to predict or estimate the current value. Use the Enable filters parameter to specify which filters are enabled or disabled at each time step. If you select Always, the filters are always enabled. If you choose Specify via input port <Enable>, the Enable port appears on the block. The input to this port must be a row vector of 1s and 0s whose length is equal to the number of filters. For example, if there are 3 filters and the input to the Enable port is [1 0 1], only the first and third filter are enabled at this time step. If you select the Reset the estimated state and estimated error covariance when filters are disabled check box, the estimated and predicted states as well as the estimated error covariance that correspond to the disabled filters are reset to their initial values. All filters have the same state transition matrix, measurement matrix, initial conditions, and noise covariance, but their state, measurement, enable, and MSE signals are unique. 
Within the state, measurement, enable, and MSE signals, each column corresponds to a filter. Use the Measurement matrix source parameter to specify how to enter the measurement matrix values. If you select Specify via dialog, the Measurement matrix parameter appears in the dialog box. If you select Input port <H>, the H port appears on the block. Use this port to specify your measurement matrix. Specify the number of filters to use to predict or estimate the current value. Specify which filters are enabled or disabled at each time step. If you select Always, the filters are always enabled. If you choose Specify via input port <Enable>, the Enable port appears on the block. Reset the estimated state and estimated error covariance when filters are disabled If you select this check box, the estimated and predicted states as well as the estimated error covariance that correspond to the disabled filters are reset to their initial values. This parameter is visible if, for the Enable filters parameter, you select Specify via input port <Enable>. Initial condition for estimated state Enter the initial condition for the estimated state. Initial condition for estimated error covariance Enter the initial condition for the estimated error covariance. Enter the state transition matrix. Process noise covariance Enter the process noise covariance. Measurement matrix source Specify how to enter the measurement matrix values. If you select Specify via dialog, the Measurement matrix parameter appears in the dialog box. If you select Input port <H>, the H port appears on the block. Enter the measurement matrix values. This parameter is visible if you select Specify via dialog for the Measurement matrix source parameter. Measurement noise covariance Enter the measurement noise covariance. Output estimated measurement <Z_est> Select this check box if you want the block to output the estimated measurement. 
Output estimated state <X_est> Select this check box if you want the block to output the estimated state. Output MSE of estimated state <MSE_est> Select this check box if you want the block to output the mean-squared error of the estimated state. Output predicted measurement <Z_prd> Select this check box if you want the block to output the predicted measurement. Output predicted state <X_prd> Select this check box if you want the block to output the predicted state. Output MSE of predicted state <MSE_prd> Select this check box if you want the block to output the mean-squared error of the predicted state. [1] Haykin, Simon. Adaptive Filter Theory. Upper Saddle River, NJ: Prentice Hall, 1996. [2] Welch, Greg, and Gary Bishop. "An Introduction to the Kalman Filter." TR 95-041, Department of Computer Science, University of North Carolina. M-by-N measurement where M is the length of the measurement vector and N is the number of filters. 1-by-N vector of 1s and 0s where N is the number of filters. M-by-P measurement matrix where M is the length of the measurement vector and P is the length of the filter state vectors. Same as Z port M-by-N estimated measurement matrix where M is the length of the measurement vector and N is the number of filters. X_est P-by-N estimated state matrix where P is the length of the filter state vectors and N is the number of filters. MSE_est 1-by-N vector that represents the mean-squared error of the estimated state. N is the number of filters. Z_prd M-by-N predicted measurement matrix where M is the length of the measurement vector and N is the number of filters. X_prd P-by-N predicted state matrix where P is the length of the filter state vectors and N is the number of filters. MSE_prd 1-by-N vector that represents the mean-squared error of the predicted state. N is the number of filters. LDL Solver
In the case where both players know there are exactly N rounds, the rational solution can be found by backward induction. Consider the N^{th} round. Player 1 reasons that since there are no more rounds in the future, there is no reason to cooperate, so she defects. Similarly, player 2 reasons that he should defect. But on the (N-1)^{th} round the reasoning is the same. They both know that they will both defect on the N^{th} round anyway, so there's no reason not to defect on the (N-1)^{th} round. By induction, they both always defect. As before, the rational solution is strictly worse for both players than always cooperating.
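The payoff gap is easy to make concrete. A small Python sketch using the conventional payoff values T=5, R=3, P=1, S=0 (the specific numbers are illustrative; any payoffs with T > R > P > S give the same ordering):

```python
# Per-round payoffs: (player 1's, player 2's) for each pair of moves,
# using the conventional values T=5, R=3, P=1, S=0.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def total_payoff(strategy_a, strategy_b, n_rounds):
    """Total payoffs when each player plays a fixed per-round strategy
    (a function from round index to 'C' or 'D')."""
    a_total = b_total = 0
    for t in range(n_rounds):
        pa, pb = PAYOFFS[(strategy_a(t), strategy_b(t))]
        a_total += pa
        b_total += pb
    return a_total, b_total

always_defect = lambda t: 'D'
always_cooperate = lambda t: 'C'
```

Over 10 rounds, mutual defection (the backward-induction outcome) yields (10, 10), while mutual cooperation yields (30, 30), which is the sense in which the "rational" solution is strictly worse for both players.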
Escape velocity - New World Encyclopedia Isaac Newton's analysis of escape velocity: projectiles A and B fall back to Earth, projectile C achieves a circular orbit, D an elliptical one, and projectile E escapes. In physics, escape velocity is the speed of an object at which its kinetic energy is equal to the magnitude of its gravitational potential energy, as calculated by the equation {\displaystyle U_{g}=-Gm_{1}m_{2}/r} . It is commonly described as the speed needed to "break free" from a gravitational field (without any additional impulse). The term escape velocity actually refers to a speed rather than a velocity; that is, it specifies how fast the object must move, but the direction of movement is irrelevant. In more technical terms, escape velocity is a scalar (not a vector) quantity. The phenomenon of escape velocity is a consequence of conservation of energy. For an object with a given total energy, moving subject to conservative forces (such as a static gravity field), the object can reach only combinations of places and speeds which have that total energy; places which have a higher potential energy than this cannot be reached at all. For a given gravitational potential energy at a given position, the escape velocity is the minimum speed an object without propulsion needs in order to have sufficient energy to "escape" from the gravity, that is, so that gravity will never manage to pull it back.
For the sake of simplicity, unless stated otherwise, this article will assume that the scenario one is dealing with is that an object is attempting to escape from a uniform spherical planet by moving straight up (along a radial line away from the center of the planet), and that the only significant force acting on the moving object is the planet's gravity. Escape velocity is actually a speed (not a velocity) because it does not specify a direction: no matter what the direction of travel is, the object can escape the gravitational field. The simplest way of deriving the formula for escape velocity is to use conservation of energy. Imagine that a spaceship of mass m is at a distance r from the center of mass of the planet, whose mass is M. Its initial speed is equal to its escape velocity, {\displaystyle v_{e}} . At its final state, it will be an infinite distance away from the planet, and its speed will be negligibly small and assumed to be 0. Kinetic energy K and gravitational potential energy Ug are the only types of energy that we will deal with, so by the conservation of energy, {\displaystyle (K+U_{g})_{i}=(K+U_{g})_{f}.\,} Kf = 0 because final velocity is zero, and Ugf = 0 because its final distance is infinity, so {\displaystyle {\frac {1}{2}}mv_{e}^{2}+{\frac {-GMm}{r}}=0+0} {\displaystyle v_{e}={\sqrt {\frac {2GM}{r}}}} Defined a little more formally, "escape velocity" is the initial speed required to go from an initial point in a gravitational potential field to infinity with a residual velocity of zero, with all speeds and velocities measured with respect to the field. Additionally, the escape velocity at a point in space is equal to the speed that an object would have if it started at rest from an infinite distance and was pulled by gravity to that point. In common usage, the initial point is on the surface of a planet or moon. 
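The result can be evaluated numerically; the physical constants below are standard values, supplied here as assumptions:

```python
import math

# Escape velocity from the conservation-of-energy result v_e = sqrt(2GM/r).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2 (assumed value)
M_earth = 5.972e24  # mass of the Earth, kg (assumed value)
r_earth = 6.371e6   # mean radius of the Earth, m (assumed value)

def escape_velocity(M, r):
    """Minimum speed, in m/s, needed to coast from radius r to infinity."""
    return math.sqrt(2 * G * M / r)

print(escape_velocity(M_earth, r_earth))  # ≈ 1.12e4 m/s, i.e. about 11.2 km/s
```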
On the surface of the Earth, the escape velocity is about 11.2 kilometers per second (~6.96 mi/s), which is approximately 34 times the speed of sound (Mach 34) and at least 10 times the speed of a rifle bullet. At 9,000 km altitude in "space," however, it is slightly less than 7.1 km/s. The escape velocity relative to the surface of a rotating body depends on the direction in which the escaping body travels. For example, as the Earth's rotational velocity is 465 m/s at the equator, a rocket launched tangentially from the Earth's equator to the east requires an initial velocity of about 10.735 km/s relative to Earth to escape, whereas a rocket launched tangentially from the Earth's equator to the west requires an initial velocity of about 11.665 km/s relative to Earth. The surface velocity decreases with the cosine of the geographic latitude, so space launch facilities are often located as close to the equator as feasible, for example, the American Cape Canaveral (latitude 28°28' N) and the French Guiana Space Centre (latitude 5°14' N). Escape velocity is independent of the mass of the escaping object: it does not matter whether the mass is 1 kg or 1,000 kg, the escape velocity from the same point in the same gravitational field is always the same. What differs is the amount of energy needed to accelerate the mass to escape velocity: the energy needed for an object of mass m to escape the Earth's gravitational field is GMm / r, a function of the object's mass (where r is the radius of the Earth, G is the gravitational constant, and M is the mass of the Earth). More massive objects require more energy to reach escape velocity. All of this, of course, assumes that air resistance is neglected.
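The quoted launch figures follow from simple arithmetic on the values above (taken here as given: a surface escape speed of 11.2 km/s and an equatorial rotation speed of 0.465 km/s):

```python
import math

v_esc = 11.2   # surface escape speed, km/s (from the text)
v_rot = 0.465  # Earth's equatorial rotation speed, km/s (from the text)

# Launching east, the rotation speed is "free"; launching west it must be
# overcome, which reproduces the figures quoted above.
east = v_esc - v_rot  # 10.735 km/s
west = v_esc + v_rot  # 11.665 km/s

# The usable surface speed falls off with the cosine of the latitude:
for name, lat_deg in [("Cape Canaveral", 28 + 28/60), ("Guiana Space Centre", 5 + 14/60)]:
    print(name, round(v_rot * math.cos(math.radians(lat_deg)), 3), "km/s")
```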
Planetary or lunar escape velocity is sometimes misunderstood to be the speed a powered vehicle (such as a rocket) must reach to leave orbit; however, this is not the case, as the quoted number is typically the surface escape velocity, and vehicles never achieve that speed direct from the surface. This surface escape velocity is the speed required for an object to leave the planet if the object is simply projected from the surface of the planet and then left without any more kinetic energy input: In practice the vehicle's propulsion system will continue to provide energy after it has left the surface. In fact a vehicle can leave the Earth's gravity at any speed. At higher altitude, the local escape velocity is lower. But at the instant the propulsion stops, the vehicle can only escape if its speed is greater than or equal to the local escape velocity at that position. At sufficiently high altitude this speed can approach 0. If an object attains escape velocity, but is not directed straight away from the planet, then it will follow a curved path. Even though this path will not form a closed shape, it is still considered an orbit. Assuming that gravity is the only significant force in the system, this object's speed at any point in the orbit will be equal to the escape velocity at that point (due to the conservation of energy, its total energy must always be 0, which implies that it always has escape velocity; see the derivation above). The shape of the orbit will be a parabola whose focus is located at the center of mass of the planet. An actual escape requires of course that the orbit not intersect the planet, since this would cause the object to crash. When moving away from the source, this path is called an escape orbit; when moving closer to the source, a capture orbit. Both are known as C3 = 0 orbits (where C3 = - μ/a, and a is the semi-major axis). 
Remember that in reality there are many gravitating bodies in space, so that, for instance, a rocket that travels at escape velocity from Earth will not escape to an infinite distance away, because it needs an even higher speed to escape the Sun's gravity. In other words, near the Earth the rocket's orbit will appear parabolic, but eventually its orbit will become an ellipse around the Sun. To leave planet Earth, an escape velocity of 11.2 km/s is required; however, a speed of 42.1 km/s is required to escape the Sun's gravity (and exit the solar system) from the same position.
Escape velocities Ve[1] at various locations:
on the Sun, from the Sun's gravity: 617.5 km/s
on Mercury, from Mercury's gravity: 4.4 km/s
at Mercury, from the Sun's gravity: 67.7 km/s
on Venus, from Venus' gravity: 10.4 km/s
at Venus, from the Sun's gravity: 49.5 km/s
on Earth, from the Earth's gravity: 11.2 km/s
at the Earth/Moon, from the Sun's gravity: 42.1 km/s
on the Moon, from the Moon's gravity: 2.4 km/s
at the Moon, from the Earth's gravity: 1.4 km/s
on Mars, from Mars' gravity: 5.0 km/s
at Mars, from the Sun's gravity: 34.1 km/s
on Jupiter, from Jupiter's gravity: 59.5 km/s
at Jupiter, from the Sun's gravity: 18.5 km/s
on Saturn, from Saturn's gravity: 35.5 km/s
at Saturn, from the Sun's gravity: 13.6 km/s
on Uranus, from Uranus' gravity: 21.3 km/s
at Uranus, from the Sun's gravity: 9.6 km/s
on Neptune, from Neptune's gravity: 23.5 km/s
at Neptune, from the Sun's gravity: 7.7 km/s
in the solar system, from the Milky Way's gravity: ~1,000 km/s
Because of the atmosphere, it is not useful and hardly possible to give an object near the surface of the Earth a speed of 11.2 km/s, as these speeds are too far into the hypersonic regime for most practical propulsion systems and would cause most objects to burn up due to atmospheric friction. For an actual escape orbit, a spacecraft is first placed in low Earth orbit and then accelerated to the escape velocity at that altitude, which is a little less, about 10.9 km/s.
The required acceleration, however, is generally even less, because from that sort of orbit the spacecraft already has a speed of 8 km/s. To expand upon the derivation given in the Overview, {\displaystyle v_{e}={\sqrt {\frac {2GM}{r}}}={\sqrt {\frac {2\mu }{r}}}={\sqrt {2gr\,}},} where {\displaystyle v_{e}} is the escape velocity, G is the gravitational constant, M is the mass of the body being escaped from, m is the mass of the escaping body, r is the distance between the center of the body and the point at which escape velocity is being calculated, g is the gravitational acceleration at that distance, and μ is the standard gravitational parameter.[2] The escape velocity is {\displaystyle {\sqrt {2}}} times the speed in a circular orbit at the same height (compare this with equation (14) in circular motion). This corresponds to the fact that the potential energy with respect to infinity of an object in such an orbit is minus two times its kinetic energy, while to escape the sum of potential and kinetic energy needs to be at least zero. The escape velocity {\displaystyle v_{e}} from the surface (in m/s) is approximately 2.364×10−5 m^{3/2}·kg^{−1/2}·s^{−1} times the radius r (in meters) times the square root of the average density ρ (in kg/m³), or: {\displaystyle v_{e}\approx 2.364\times 10^{-5}r{\sqrt {\rho }}.\,} Deriving escape velocity using calculus These derivations use calculus, Newton's laws of motion and Newton's law of universal gravitation. Derivation using only g and r The Earth's escape speed can be derived from "g," the acceleration due to gravity at the Earth's surface. It is not necessary to know the gravitational constant G or the mass M of the Earth. Let r = the Earth's radius, and g = the acceleration of gravity at the Earth's surface. Above the Earth's surface, the acceleration of gravity is governed by Newton's inverse-square law of universal gravitation. Accordingly, the acceleration of gravity at height s above the center of the Earth (where s > r) is {\displaystyle g(r/s)^{2}} .
The weight of an object of mass m at the surface is gm, and its weight at height s above the center of the Earth is gm(r/s)². Consequently, the energy needed to lift an object of mass m from height s above the Earth's center to height s + ds (where ds is an infinitesimal increment of s) is gm(r/s)² ds. Since this decreases sufficiently fast as s increases, the total energy needed to lift the object to infinite height does not diverge to infinity, but converges to a finite amount. That amount is the integral of the expression above: {\displaystyle \int _{r}^{\infty }gm(r/s)^{2}\,ds=gmr^{2}\int _{r}^{\infty }s^{-2}\,ds=gmr^{2}\left[-s^{-1}\right]_{s:=r}^{s:=\infty }} {\displaystyle =gmr^{2}\left(0-(-r^{-1})\right)=gmr.} That is how much kinetic energy the object of mass m needs in order to escape. The kinetic energy of an object of mass m moving at speed v is (1/2)mv². Thus we need {\displaystyle {\begin{matrix}{\frac {1}{2}}\end{matrix}}mv^{2}=gmr.} The factor m cancels out, and solving for v we get {\displaystyle v={\sqrt {2gr\,}}.} If we take the radius of the Earth to be r = 6400 kilometers and the acceleration of gravity at the surface to be g = 9.8 m/s², we get {\displaystyle v\cong {\sqrt {2\left(9.8\ {\mathrm {m} /\mathrm {s} ^{2}}\right)(6.4\times 10^{6}\ \mathrm {m} )}}=11\,200\ \mathrm {m} /\mathrm {s} .} This is just a bit over 11 kilometers per second, or a bit under 7 miles per second, as Isaac Newton calculated. Derivation using G and M Let G be the gravitational constant and let M be the mass of the earth or other body to be escaped.
{\displaystyle ma=m{\frac {dv}{dt}}=-{\frac {GMm}{r^{2}}}\,} {\displaystyle a={\frac {dv}{dt}}=-{\frac {GM}{r^{2}}}\,} By applying the chain rule, one gets: {\displaystyle {\frac {dv}{dt}}={\frac {dv}{dr}}\cdot {\frac {dr}{dt}}=-{\frac {GM}{r^{2}}}\,} Because {\displaystyle v={\frac {dr}{dt}}}, this gives {\displaystyle {\frac {dv}{dr}}\cdot v=-{\frac {GM}{r^{2}}}\,} {\displaystyle v\cdot dv=-{\frac {GM}{r^{2}}}\,dr\,} {\displaystyle \int _{v_{0}}^{v(t)}v\,dv=-\int _{r_{0}}^{r(t)}{\frac {GM}{r^{2}}}\,dr\,} {\displaystyle {\frac {v(t)^{2}}{2}}-{\frac {v_{0}^{2}}{2}}={\frac {GM}{r(t)}}-{\frac {GM}{r_{0}}}\,} Since we want the escape velocity, we let {\displaystyle t\rightarrow \infty }, so that {\displaystyle r(t)\rightarrow \infty } and {\displaystyle v(t)\rightarrow 0}, which gives {\displaystyle -{\frac {v_{0}^{2}}{2}}=-{\frac {GM}{r_{0}}}\,} {\displaystyle v_{0}={\sqrt {\frac {2GM}{r_{0}}}}\,} where v0 is the escape velocity and r0 is the radius of the planet. Note that the above derivation relies on the equivalence of inertial mass and gravitational mass. The derivations are consistent The gravitational acceleration can be obtained from the gravitational constant G and the mass of Earth M: {\displaystyle g={\frac {GM}{r^{2}}},} where r is the radius of Earth. Thus {\displaystyle v={\sqrt {2gr\,}}={\sqrt {{\frac {2GMr}{r^{2}}}\,}}={\sqrt {{\frac {2GM}{r}}\,}},} so the two derivations given above are consistent. The escape velocity from a position in a field with multiple sources is derived from the total potential energy per kg at that position, relative to infinity. The potential energies for all sources can simply be added. For the escape velocity this results in the square root of the sum of the squares of the escape velocities of all sources separately. For example, at the Earth's surface the escape velocity for the combination of the Earth and the Sun is {\displaystyle \scriptstyle {\sqrt {11.2^{2}\ +\ 42.1^{2}}}\ =\ 43.56\ \mathrm {km} /\mathrm {s} } .
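A quick numerical sketch confirms both the consistency of the two derivations and the quadrature rule for multiple sources (the constants are standard assumed values):

```python
import math

# Check that sqrt(2gr) and sqrt(2GM/r) agree, using g = GM/r^2 implicitly.
G, M, r, g = 6.674e-11, 5.972e24, 6.371e6, 9.81  # assumed standard values

v_from_GM = math.sqrt(2 * G * M / r)
v_from_g = math.sqrt(2 * g * r)
print(v_from_GM, v_from_g)  # both ≈ 1.12e4 m/s

# Escape velocities from multiple sources combine in quadrature:
v_earth, v_sun_at_earth = 11.2, 42.1  # km/s, from the text
v_combined = math.sqrt(v_earth**2 + v_sun_at_earth**2)
print(round(v_combined, 2))  # → 43.56 km/s
```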
As a result, to leave the solar system requires a speed of 13.6 km/s relative to Earth in the direction of the Earth's orbital motion, since that speed is then added to the 30 km/s speed of the orbital motion. In the hypothetical case of uniform density, the velocity that an object would achieve when dropped in a hypothetical vacuum hole from the surface of the Earth to the center of the Earth is the escape velocity divided by {\displaystyle \scriptstyle {\sqrt {2}}} , that is, the speed in a circular orbit at a low height. Correspondingly, the escape velocity from the center of the Earth would be {\displaystyle \scriptstyle {\sqrt {1.5}}} times that from the surface. A refined calculation would take into account the fact that the Earth's mass is not uniformly distributed as the center is approached; this gives higher speeds.
↑ 1.0 1.1 Georgia State University, Data of Planets. Retrieved October 16, 2008.
↑ Bate, Mueller, and White, 1971, 35.
Bate, Roger R., Donald D. Mueller, and Jerry E. White. Fundamentals of Astrodynamics. New York: Dover Publications, 1971. ISBN 0486600610.
Schutz, Bernard F. Gravity from the Ground Up. Cambridge: Cambridge University Press, 2003. ISBN 0521455065.
Vallado, David Anthony, and Wayne D. McClain. Fundamentals of Astrodynamics and Applications. Space Technology Library, 12. Microcosm, Inc., 2001. ISBN 1881883124.
Retrieved from https://www.newworldencyclopedia.org/p/index.php?title=Escape_velocity&oldid=1065164
FUNCTIONAL ANALYSIS - Encyclopedia Information https://en.wikipedia.org/wiki/Functional_analysis The usage of the word functional as a noun goes back to the calculus of variations, implying a function whose argument is a function. The term was first used in Hadamard's 1910 book on that subject. However, the general concept of a functional had previously been introduced in 1887 by the Italian mathematician and physicist Vito Volterra. [1] [2] The theory of nonlinear functionals was continued by students of Hadamard, in particular Fréchet and Lévy. Hadamard also founded the modern school of linear functional analysis further developed by Riesz and the group of Polish mathematicians around Stefan Banach. In modern introductory texts on functional analysis, the subject is seen as the study of vector spaces endowed with a topology, in particular infinite-dimensional spaces. [3] [4] In contrast, linear algebra deals mostly with finite-dimensional spaces, and does not use topology. An important part of functional analysis is the extension of the theory of measure, integration, and probability to infinite dimensional spaces, also known as infinite dimensional analysis. Hilbert spaces can be completely classified: there is a unique Hilbert space up to isomorphism for every cardinality of the orthonormal basis. [5] Finite-dimensional Hilbert spaces are fully understood in linear algebra, and infinite-dimensional separable Hilbert spaces are isomorphic to {\displaystyle \ell ^{\,2}(\aleph _{0})\,} . Separability being important for applications, functional analysis of Hilbert spaces consequently mostly deals with this space.
One of the open problems in functional analysis is to prove that every bounded linear operator on a Hilbert space has a proper invariant subspace. Many special cases of this invariant subspace problem have already been proven. Examples of Banach spaces are {\displaystyle L^{p}} -spaces for any real number {\displaystyle p\geq 1} . Given a measure {\displaystyle \mu } on a set {\displaystyle X} , the space {\displaystyle L^{p}(X)} , sometimes also denoted {\displaystyle L^{p}(X,\mu )} or {\displaystyle L^{p}(\mu )} , has as its vectors equivalence classes {\displaystyle [\,f\,]} of measurable functions whose absolute value's {\displaystyle p} -th power has finite integral; that is, functions {\displaystyle f} for which one has {\displaystyle \int _{X}\left|f(x)\right|^{p}\,d\mu (x)<+\infty .} If {\displaystyle \mu } is the counting measure, then the integral may be replaced by a sum. That is, we require {\displaystyle \sum _{x\in X}\left|f(x)\right|^{p}<+\infty .} Then it is not necessary to deal with equivalence classes, and the space is denoted {\displaystyle \ell ^{p}(X)} , written more simply {\displaystyle \ell ^{p}} when {\displaystyle X} is the set of non-negative integers. The uniform boundedness principle (Banach–Steinhaus theorem) states that if F is a family of continuous linear operators from a Banach space X to a normed space Y such that for every x in X {\displaystyle \sup \nolimits _{T\in F}\|T(x)\|_{Y}<\infty ,} then {\displaystyle \sup \nolimits _{T\in F}\|T\|_{B(X,Y)}<\infty .} Theorem: [6] Let A be a bounded self-adjoint operator on a Hilbert space H. Then there is a measure space (X, Σ, μ), a real-valued essentially bounded measurable function f on X, and a unitary operator U : H → L2μ(X) such that {\displaystyle U^{*}TU=A\;} where T is the multiplication operator {\displaystyle [T\varphi ](x)=f(x)\varphi (x)\;} and {\displaystyle \|T\|=\|f\|_{\infty }} . There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that now {\displaystyle f} may be complex-valued.
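As a quick numerical illustration of the finiteness condition defining {\displaystyle \ell ^{p}} (a sketch, using finite truncations of the harmonic sequence):

```python
def lp_norm(xs, p):
    """(sum |x|^p)^(1/p) for a finite truncation of a sequence."""
    return sum(abs(x) ** p for x in xs) ** (1.0 / p)

# The harmonic sequence (1/n) lies in l^2 but not in l^1: its 2-norm
# stabilizes as more terms are taken, while its 1-norm keeps growing.
norms = {}
for N in (10**2, 10**4):
    seq = [1.0 / n for n in range(1, N + 1)]
    norms[N] = (lp_norm(seq, 1), lp_norm(seq, 2))
    print(N, round(norms[N][0], 3), round(norms[N][1], 5))
```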
Hahn–Banach theorem: [7] If p : V → R is a sublinear function, and φ : U → R is a linear functional on a linear subspace U ⊆ V which is dominated by p on U; that is, {\displaystyle \varphi (x)\leq p(x)\qquad \forall x\in U,} then there exists a linear extension ψ : V → R of φ to the whole space V which is dominated by p on V; that is, there exists a linear functional ψ such that {\displaystyle \psi (x)=\varphi (x)\qquad \forall x\in U,} {\displaystyle \psi (x)\leq p(x)\qquad \forall x\in V.} The open mapping theorem, also known as the Banach–Schauder theorem (named after Stefan Banach and Juliusz Schauder), is a fundamental result which states that if a continuous linear operator between Banach spaces is surjective, then it is an open map. [7] The closed graph theorem states the following: If X is a topological space and Y is a compact Hausdorff space, then the graph of a linear map T from X to Y is closed if and only if T is continuous. [8] Functional analysis in its present form includes the following tendencies:
^ Lawvere, F. William. "Volterra's functionals and covariant cohesion of space" (PDF). acsu.buffalo.edu. Proceedings of the May 1997 Meeting in Perugia.
^ Saraiva, Luís (October 2004). History of Mathematical Sciences. World Scientific. p. 195. doi:10.1142/5685. ISBN 978-93-86279-16-3.
Aliprantis, C.D., and K.C. Border. Infinite Dimensional Analysis: A Hitchhiker's Guide, 3rd ed. Springer, 2007. ISBN 978-3-540-32696-0. doi:10.1007/3-540-29587-9 (by subscription).
Maths - Conic Sections - 6838567 | Meritnation.com
1) If the parabola y² = px passes through the point (2, -3), find the length of the latus rectum. 2) Find the value of p so that the equation x² + y² - 2px + 4y - 12 = 0 may represent a circle of radius 5 units. 3) One end of the diameter of the circle x² + y² - 6x + 5y - 7 = 0 is (7, -8); find the coordinates of the other end.
In the given question, we are given a parabola {y}^{2}=px which passes through the point (2, -3). Substituting x = 2 and y = -3 in the given equation, we get {\left(-3\right)}^{2}=p\left(2\right), so 9 = 2p and p=\frac{9}{2}. Substituting p=\frac{9}{2} in the given parabola, we get {y}^{2}=\frac{9}{2}x ... (1). Next, comparing equation (1) with the equation of the parabola {y}^{2}=4ax gives 4a=\frac{9}{2} ... (2). Now we know that the length of the latus rectum is 4a. So, using equation (2), the length of the latus rectum = 4a = \frac{9}{2}. Therefore, the length of the latus rectum is \frac{9}{2}.
"Due to paucity of time it would not be possible for us to solve all your queries. We are providing the solution to one of your queries. Try solving the rest of the questions yourself and if you face any difficulty then do get back to us."
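For part 1, the substitution above can be checked mechanically:

```python
# Substitute (2, -3) into y^2 = p*x, then read off the latus rectum
# by comparing with the standard form y^2 = 4*a*x.
x, y = 2, -3
p = y**2 / x          # 9 = 2p, so p = 9/2
a = p / 4             # from 4a = p
latus_rectum = 4 * a  # equals p
print(p, latus_rectum)  # → 4.5 4.5
```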
Estimate power spectrum or power density spectrum - MATLAB - MathWorks
SE = dsp.SpectrumEstimator returns a System object, SE, that computes the frequency power spectrum or the power density spectrum of real or complex signals. This System object uses Welch's averaged modified periodogram method or the filter-bank-based spectral estimation method. Specify the number of filter coefficients, or taps, for each frequency band. This value corresponds to the number of filter coefficients per polyphase branch. The total number of filter coefficients is given by NumTapsPerBand × FFTLength. The power in the i-th frequency band is obtained by averaging the squared magnitude of the band output {y}_{i} over L samples: {Z}_{i}=\frac{1}{L}\underset{m=0}{\overset{L−1}{∑}}{|{y}_{i}\left[m\right]|}^{2} and the band estimates are collected into the vector Z=\left[{Z}_{0},\text{ }{Z}_{1},\text{ }{Z}_{2},\cdots ,{Z}_{M−1}\right] With a forgetting factor λ, the running average is updated as \begin{array}{l}{w}_{N}=\mathrm{λ}{w}_{N−1}+1\\ {\stackrel{¯}{z}}_{N}=\left(1−\frac{1}{{w}_{N}}\right){\stackrel{¯}{z}}_{N−1}+\left(\frac{1}{{w}_{N}}\right){z}_{N}\end{array} where {w}_{N} is the current weight, {z}_{N} is the current power estimate, {\stackrel{¯}{z}}_{N−1} is the previous running average, \left(1−\frac{1}{{w}_{N}}\right){\stackrel{¯}{z}}_{N−1} is its weighted contribution, and {\stackrel{¯}{z}}_{N} is the updated running average.
[4] Welch, P. D. "The use of fast Fourier transforms for the estimation of power spectra: A method based on time averaging over short modified periodograms," IEEE Transactions on Audio and Electroacoustics, Vol. 15, 1967, pp. 70–73.
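The Welch procedure the object implements can be sketched in plain Python; the naive DFT, Hann window, and 50% overlap below are illustrative assumptions, not the System object's actual defaults:

```python
import cmath
import math

def welch_psd(x, seg_len, overlap=0.5):
    """Welch's averaged modified periodogram (naive DFT; illustration only).

    Hann-windowed, overlapping segments are periodogrammed and averaged.
    """
    hop = max(1, int(seg_len * (1 - overlap)))
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / (seg_len - 1))
           for n in range(seg_len)]
    norm = sum(w * w for w in win)  # window power normalization
    psd, count = [0.0] * seg_len, 0
    for start in range(0, len(x) - seg_len + 1, hop):
        seg = [x[start + n] * win[n] for n in range(seg_len)]
        for k in range(seg_len):  # DFT bin k of this segment
            X = sum(s * cmath.exp(-2j * math.pi * k * n / seg_len)
                    for n, s in enumerate(seg))
            psd[k] += abs(X) ** 2 / norm
        count += 1
    return [v / count for v in psd]  # average over segments

# A sine at 8 cycles per 64 samples shows up as a peak in bin 8:
x = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
spectrum = welch_psd(x, seg_len=64)
print(max(range(33), key=lambda k: spectrum[k]))  # → 8
```

A production implementation would use an FFT instead of the quadratic-time DFT loop; the structure of the averaging is the point here.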
MATHEMATICAL MODELING - Encyclopedia Information Mathematical model (Redirected from Mathematical modeling) https://en.wikipedia.org/wiki/Mathematical_modeling A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in the natural sciences (such as physics, biology, earth science, chemistry) and engineering disciplines (such as computer science, electrical engineering), as well as in non-physical systems such as the social sciences (such as economics, psychology, sociology, political science). The use of mathematical models to solve problems in business or military operations is a large part of the field of operations research. Mathematical models are also used in music, [1] linguistics, [2] and philosophy (for example, intensively in analytic philosophy). Elements of a mathematical model Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments.
Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, a traditional mathematical model contains elements such as supplementary sub-models and classical constraints and kinematic equations. Mathematical models are of different types: Linear vs. nonlinear: Linear structure implies that a problem can be decomposed into simpler parts that can be treated independently and/or analyzed at a different scale, and the results obtained will remain valid for the initial problem when recomposed and rescaled. Static vs. dynamic: A dynamic model accounts for time-dependent changes in the state of the system, while a static (or steady-state) model calculates the system in equilibrium, and thus is time-invariant. Dynamic models typically are represented by differential equations or difference equations. Explicit vs. implicit: If all of the input parameters of the overall model are known, and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters which are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton's method or Broyden's method; in such a case the model is said to be implicit. For example, a jet engine's physical properties such as turbine and nozzle throat areas can be explicitly calculated given a design thermodynamic cycle (air and fuel flow rates, pressures, and temperatures) at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties. Deterministic vs.
probabilistic (stochastic): A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; therefore, a deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model—usually called a "statistical model"—randomness is present, and variable states are not described by unique values, but rather by probability distributions. Deductive, inductive, or floating: A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models. [3] Application of catastrophe theory in science has been characterized as a floating model. [4] Strategic vs. non-strategic: Models used in game theory differ in the sense that they model agents with incompatible incentives, such as competing species or bidders in an auction. Strategic models assume that players are autonomous decision makers who rationally choose actions that maximize their objective function. A key challenge of using strategic models is defining and computing solution concepts such as Nash equilibrium. An interesting property of strategic models is that they separate reasoning about rules of the game from reasoning about behavior of the players. [5] To analyze something with a typical "black box" approach, only the behavior of the stimulus/response is accounted for, in order to infer the (unknown) contents of the box. The usual representation of this black box system is a data flow diagram centered in the box. In general, model complexity involves a trade-off between simplicity and accuracy of the model.
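The explicit vs. implicit distinction described above can be sketched in code; the model function x³ + 2x and the Newton-iteration details below are hypothetical illustrations, not taken from the text:

```python
# Explicit vs. implicit use of a model (a sketch). The model function is a
# hypothetical stand-in: output = x**3 + 2*x, which is strictly increasing.
def model(x):
    return x**3 + 2*x  # explicit direction: input in, output out

def solve_input(y, x0=1.0, tol=1e-12):
    """Implicit direction: given an observed output y, recover the input
    by Newton's method, since there is no closed-form inverse at hand."""
    x = x0
    for _ in range(100):
        f = model(x) - y
        if abs(f) < tol:
            break
        x -= f / (3 * x**2 + 2)  # derivative of the model function
    return x

x = solve_input(10.0)
print(round(model(x), 6))  # → 10.0
```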
Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification. [7] Note that better accuracy does not necessarily mean a better model: statistical models are prone to overfitting, which means that the model has been fitted to the data too closely and has lost its ability to generalize to new events that were not observed before. Any model which is not pure white-box contains some parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning method, the optimization of parameters is called training, while the optimization of model hyperparameters is called tuning and often uses cross-validation. [8] In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting. Usually, the easiest part of model evaluation is checking whether a model fits experimental measurements or other empirical data. In models with parameters, a common approach to test this fit is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters. An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to as cross-validation in statistics. Defining a metric to measure distances between observed and predicted data is a useful tool for assessing model fit.
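The train/verification split and a distance metric can be sketched as follows; the split fraction, seed, and constant-mean "model" are arbitrary illustrative choices:

```python
import math
import random

def holdout_split(pairs, train_frac=0.8, seed=0):
    """Split data into disjoint training and verification subsets."""
    shuffled = pairs[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]  # training data, verification data

def rmse(observed, predicted):
    """Root-mean-square distance between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))

# Fit the simplest possible "model" (a constant: the training mean) and
# score it on the held-out verification data it never saw.
data = [(x, 2.0 * x) for x in range(50)]
train, verify = holdout_split(data)
mean_y = sum(y for _, y in train) / len(train)
print(rmse([y for _, y in verify], [mean_y] * len(verify)))
```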
In statistics, decision theory, and some economic models, a loss function plays a similar role. While it is rather straightforward to test the appropriateness of parameters, it can be more difficult to test the validity of the general mathematical form of a model. In general, more mathematical tools have been developed to test the fit of statistical models than models involving differential equations. Tools from nonparametric statistics can sometimes be used to evaluate how well the data fit a known distribution or to come up with a general model that makes only minimal assumptions about the model's mathematical form. An example of such criticism is the argument that the mathematical models of optimal foraging theory do not offer insight that goes beyond the common-sense conclusions of evolution and other basic principles of ecology. [9] Throughout history, more and more accurate mathematical models have been developed. Newton's laws accurately describe many everyday phenomena, but at certain limits the theory of relativity and quantum mechanics must be used. It is common to use idealized models in physics to simplify things. Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton's laws, Maxwell's equations and the Schrödinger equation. These laws are a basis for making mathematical models of real situations. Many real situations are very complex and are thus modeled approximately on a computer: a model that is computationally feasible to compute is made from the basic laws or from approximate models made from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis.
A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types; real or integer numbers, Boolean values or strings, for example. The variables represent some properties of the system, for example, the measured system outputs, often in the form of signals, timing data, counters, and event occurrences. The actual model is the set of functions that describe the relations between the different variables. One of the popular examples in computer science is the mathematical model of various machines; an example is the deterministic finite automaton (DFA), which is defined as an abstract mathematical concept, but due to the deterministic nature of a DFA, it is implementable in hardware and software for solving various specific problems. For example, the following is a DFA M with a binary alphabet, which requires that the input contains an even number of 0s: Many everyday activities carried out without a thought are uses of mathematical models. A geographical map projection of a region of the earth onto a small, plane surface is a model which can be used for many purposes such as planning travel. [10] Another simple activity is predicting the position of a vehicle from its initial position, direction and speed of travel, using the equation that distance traveled is the product of time and speed. This is known as dead reckoning when used more formally. Mathematical modeling in this way does not necessarily require formal mathematics; animals have been shown to use dead reckoning. [11] [12] Model of a particle in a potential field. In this model we consider a particle as being a point mass which describes a trajectory in space which is modeled by a function giving its coordinates in space as a function of time.
The potential field is given by a function {\displaystyle V\!:\mathbb {R} ^{3}\!\rightarrow \mathbb {R} } and the trajectory, that is a function {\displaystyle \mathbf {r} \!:\mathbb {R} \rightarrow \mathbb {R} ^{3}} , is the solution of the differential equation: {\displaystyle -{\frac {\mathrm {d} ^{2}\mathbf {r} (t)}{\mathrm {d} t^{2}}}m={\frac {\partial V[\mathbf {r} (t)]}{\partial x}}\mathbf {\hat {x}} +{\frac {\partial V[\mathbf {r} (t)]}{\partial y}}\mathbf {\hat {y}} +{\frac {\partial V[\mathbf {r} (t)]}{\partial z}}\mathbf {\hat {z}} ,} which can be written more compactly as {\displaystyle m{\frac {\mathrm {d} ^{2}\mathbf {r} (t)}{\mathrm {d} t^{2}}}=-\nabla V[\mathbf {r} (t)].} Model of rational behavior for a consumer. In this model we assume a consumer faces a choice of n commodities labeled 1,2,...,n each with a market price p1, p2,..., pn. The consumer is assumed to have an ordinal utility function U (ordinal in the sense that only the sign of the differences between two utilities, and not the level of each utility, is meaningful), depending on the amounts of commodities x1, x2,..., xn consumed. The model further assumes that the consumer has a budget M which is used to purchase a vector x1, x2,..., xn in such a way as to maximize U(x1, x2,..., xn). The problem of rational behavior in this model then becomes a mathematical optimization problem: {\displaystyle \max U(x_{1},x_{2},\ldots ,x_{n})} subject to {\displaystyle \sum _{i=1}^{n}p_{i}x_{i}\leq M} and {\displaystyle x_{i}\geq 0\;\;\;\forall i\in \{1,2,\ldots ,n\}.} This model has been used in a wide variety of economic contexts, such as in general equilibrium theory to show existence and Pareto efficiency of economic equilibria. The neighbour-sensing model explains the formation of mushrooms from an initially chaotic fungal network. In computer science, mathematical models may be used to simulate computer networks. In mechanics, mathematical models may be used to analyze the movement of a rocket model.
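The consumer optimization problem above can be made concrete with a small numerical sketch. The Cobb-Douglas utility, the prices, and the budget below are assumed for illustration (they are not part of the original model), and the grid search is only a transparent way to exhibit the constrained maximization:

```python
# Hypothetical instance: two commodities with prices p and budget M,
# and an assumed Cobb-Douglas utility U(x1, x2) = x1**0.3 * x2**0.7.
p = (2.0, 5.0)
M = 100.0

def U(x1, x2):
    return x1 ** 0.3 * x2 ** 0.7

# Grid search over bundles on the budget line p1*x1 + p2*x2 = M:
# for each candidate x1, spend the remaining budget on commodity 2.
best = max(
    ((x1, (M - p[0] * x1) / p[1])
     for x1 in [i * 0.01 for i in range(int(M / p[0] * 100) + 1)]),
    key=lambda b: U(b[0], max(b[1], 0.0)),
)
```

Cobb-Douglas theory predicts spending the exponent share of the budget on each good, so the maximizer should be near x1 = 0.3*M/p1 = 15 and x2 = 0.7*M/p2 = 14.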
TK Solver - Rule-based modeling ^ D. Tymoczko, A Geometry of Music: Harmony and Counterpoint in the Extended Common Practice (Oxford Studies in Music Theory), Oxford University Press; Illustrated Edition (March 21, 2011), ISBN 978-0195336672 ^ Andras Kornai, Mathematical Linguistics (Advanced Information and Knowledge Processing), Springer, ISBN 978-1849966948 ^ Truesdell, Clifford (1984). An Idiot's Fugitive Essays on Science. Springer. pp. 121–7. ISBN 3-540-90703-3. ^ Li, C., Xing, Y., He, F., & Cheng, D. (2018). A Strategic Learning Algorithm for State-based Games. ArXiv. ^ Billings, S. A. (2013). Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains. Wiley. ^ "Thomas Kuhn". Stanford Encyclopedia of Philosophy. 13 August 2004. Retrieved 15 January 2019. ^ Thornton, Chris. "Machine Learning Lecture". Retrieved 2019-02-06. ^ Pyke, G. H. (1984). "Optimal Foraging Theory: A Critical Review". Annual Review of Ecology and Systematics. 15: 523–575. doi:10.1146/annurev.es.15.110184.002515. ^ "GIS Definitions of Terminology M-P". LAND INFO Worldwide Mapping. Retrieved January 27, 2020. ^ Gallistel (1990). The Organization of Learning. Cambridge: The MIT Press. ISBN 0-262-07113-4. ^ Whishaw, I. Q.; Hines, D. J.; Wallace, D. G. (2001). "Dead reckoning (path integration) requires the hippocampal formation: Evidence from spontaneous exploration and spatial learning tasks in light (allothetic) and dark (idiothetic) tests". Behavioural Brain Research. 127 (1–2): 49–69. doi:10.1016/S0166-4328(01)00359-X. PMID 11718884. S2CID 7897256. Chartrand, Gary (1977). Graphs as Mathematical Models. Prindle, Weber & Schmidt. ISBN 0871502364. Dubois, G. (2018). Modeling and Simulation. Taylor & Francis, CRC Press. Papadimitriou, Fivos (2010). "Mathematical Modelling of Spatial-Ecological Complex Systems: an Evaluation". Geography, Environment, Sustainability 1(3), 67-80. doi:10.24057/2071-9388-2010-3-1-67-80. Peierls, R. (1980). "Model-making in physics". Contemporary Physics. 21: 3–17. Bibcode:1980ConPh..21....3P. doi:10.1080/00107518008210938.
Void ratio The void ratio of a mixture is the ratio of the volume of voids to the volume of solids. It is a dimensionless quantity in materials science, and is closely related to porosity as follows: {\displaystyle e={\frac {V_{V}}{V_{S}}}={\frac {V_{V}}{V_{T}-V_{V}}}={\frac {\phi }{1-\phi }}} {\displaystyle \phi ={\frac {V_{V}}{V_{T}}}={\frac {V_{V}}{V_{S}+V_{V}}}={\frac {e}{1+e}}} where {\displaystyle e} is the void ratio, {\displaystyle \phi } is the porosity, VV is the volume of void-space (such as fluids), VS is the volume of solids, and VT is the total or bulk volume. This figure is relevant in composites, in mining (particularly with regard to the properties of tailings), and in soil science. In geotechnical engineering, it is considered one of the state variables of soils and represented by the symbol e.[1][2] Note that in geotechnical engineering, the symbol {\displaystyle \phi } usually represents the angle of shearing resistance, a shear strength (soil) parameter. Because of this, the equation is usually rewritten using {\displaystyle n} for porosity: {\displaystyle e={\frac {V_{V}}{V_{S}}}={\frac {V_{V}}{V_{T}-V_{V}}}={\frac {n}{1-n}}} {\displaystyle n={\frac {V_{V}}{V_{T}}}={\frac {V_{V}}{V_{S}+V_{V}}}={\frac {e}{1+e}}} where {\displaystyle n} is the porosity, VV is the volume of void-space (air and water), VS is the volume of solids, and VT is the total or bulk volume.[3] Engineering applications Volume change tendency control. If the void ratio is high (loose soils), the voids in the soil skeleton tend to shrink under loading - adjacent particles contract. The opposite situation, i.e. when the void ratio is relatively small (dense soils), indicates that the volume of the soil tends to increase under loading - the particles dilate. Fluid conductivity control (ability of water movement through the soil). Loose soils show high conductivity, while dense soils are not so permeable. Particle movement.
In a loose soil particles can move quite easily, whereas in a dense one finer particles cannot pass through the voids, which leads to clogging. Relation between void ratio and porosity ^ Lambe, T. William & Robert V. Whitman. Soil Mechanics. Wiley, 1991; p. 29. ISBN 978-0-471-51192-2 ^ Santamarina, J. Carlos, Katherine A. Klein, & Moheb A. Fam. Soils and Waves: Particulate Materials Behavior, Characterization and Process Monitoring. Wiley, 2001; pp. 35-36 & 51-53. ISBN 978-0-471-49058-6 ^ Craig, R. F. Craig's Soil Mechanics. London: Spon, 2004, p.18. ISBN 0-203-49410-5.
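The two conversion formulas relating void ratio and porosity translate directly into code; a minimal sketch:

```python
def void_ratio(porosity):
    """e = n / (1 - n), valid for 0 <= n < 1."""
    return porosity / (1.0 - porosity)

def porosity(void_ratio):
    """n = e / (1 + e)."""
    return void_ratio / (1.0 + void_ratio)

# Round trip: a soil with 40% porosity has void ratio 2/3.
e = void_ratio(0.4)
n = porosity(e)
```

The two functions are exact inverses of each other, reflecting the algebraic identities in the text.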
Frictional brake with flexible band wrapped around rotating drum with triggered faults - MATLAB - MathWorks Nordic The Band Brake block represents a frictional brake with a flexible band that wraps around the periphery of a rotating drum to produce a braking action. A positive actuating force causes the band to tighten around the rotating drum, placing the friction surfaces in contact. Viscous and contact friction between the surfaces of the drum and the flexible band causes the rotating drum to decelerate. You can model the effects of heat flow and temperature change for the block by using port H, an optional thermal conserving port. Band brakes provide high braking torque at the cost of reduced braking precision in applications that include winch drums, chainsaws, go-karts, and mini-bikes. The model employs a simple parameterization with readily accessible brake geometry and friction parameters. The braking torque as a function of the external brake actuation force that tightens the belt is T=\left({F}_{TB}-{F}_{A}\right)\cdot {r}_{D}+{\mu }_{visc}\cdot \omega where T is the braking torque, FTB is the force acting on the tense branch of the band, FA is the external brake actuation force, rD is the radius of the drum contact surface, μvisc is the viscous friction coefficient, ω is the relative angular velocity of the drum, μ is the contact friction coefficient, and ϕ is the wrap angle. Forces FTB and FA satisfy the relationship \frac{{F}_{TB}}{{F}_{A}}={e}^{\mu \varphi } Substituting this relationship into the braking torque formula eliminates the force FTB, such that T={F}_{A}\left({e}^{\mu \varphi }-1\right)\cdot {r}_{D}+{\mu }_{visc}\cdot \omega To avoid a discontinuity at zero relative velocity, the model defines the actuation force, FA, as a hyperbolic function {F}_{A}={F}_{in}\cdot \mathrm{tanh}\left(\frac{4\omega }{{\omega }_{threshold}}\right) where Fin is the force input signal and ωthreshold is the angular velocity threshold.
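The braking-torque equations above can be sketched as a single function. The argument names are illustrative, and units follow the block description (force in N, angles in rad, radius in m, torque in N*m):

```python
import math

def band_brake_torque(F_in, omega, mu, wrap_angle, r_drum,
                      mu_visc, omega_threshold):
    """Braking torque T = F_A*(e^(mu*phi) - 1)*r_D + mu_visc*omega."""
    # Smoothed actuation force avoids the discontinuity at omega = 0.
    F_A = F_in * math.tanh(4.0 * omega / omega_threshold)
    return F_A * (math.exp(mu * wrap_angle) - 1.0) * r_drum + mu_visc * omega

# At zero relative velocity the smoothed model produces zero torque.
T0 = band_brake_torque(100.0, 0.0, 0.3, math.pi, 0.1, 0.01, 0.1)
```

Because tanh(0) = 0, the actuation force vanishes at zero relative velocity, which is exactly the discontinuity the hyperbolic smoothing is meant to remove.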
You can model the effects of heat flow and temperature change by exposing the optional thermal port. To expose the port, in the Friction settings, set the Thermal Port parameter to Model. Exposing the thermal port also exposes these related settings: F — Belt tensioning force. Physical signal inport associated with the external tensioning force that is applied to the belt. H — Heat flow, in W. Drum radius — Radius of the drum contact surface. The parameter must be greater than zero. Wrap angle — Contact angle between the flexible belt and the rotating drum. The parameter must be greater than zero. Viscous friction coefficient — 0.01 N*m/(rad/s) (default) | nonnegative scalar. Viscous friction coefficient at the belt-drum contact surface. The parameter must be greater than or equal to zero. When the Thermal Port parameter is set to Model, the thermal port and related parameters and variables are visible. Contact friction coefficient — Coulomb friction coefficient at the belt-drum contact surface. The value must be greater than zero. This parameter is only visible when the Thermal Port parameter is set to Omit. Contact friction coefficient vector — Coulomb friction, [0.1, 0.05, 0.03] (default) | increasing vector of positive values. Coulomb friction coefficient at the belt-drum contact surface, such that: Double-Shoe Brake | Loaded-Contact Rotational Friction | Rotational Detent
Aliquot sum - Wikipedia In number theory, the aliquot sum s(n) of a positive integer n is the sum of all proper divisors of n, that is, all divisors of n other than n itself. That is, {\displaystyle s(n)=\sum \nolimits _{d|n,\ d\neq n}d.} It can be used to characterize the prime numbers, perfect numbers, deficient numbers, abundant numbers, and untouchable numbers, and to define the aliquot sequence of a number. For example, the proper divisors of 12 (that is, the positive divisors of 12 that are not equal to 12) are 1, 2, 3, 4, and 6, so the aliquot sum of 12 is 16, that is, 1 + 2 + 3 + 4 + 6. The values of s(n) for n = 1, 2, 3, ... are: 0, 1, 1, 3, 1, 6, 1, 7, 4, 8, 1, 16, 1, 10, 9, 15, 1, 21, 1, 22, 11, 14, 1, 36, 6, 16, 13, 28, 1, 42, 1, 31, 15, 20, 13, 55, 1, 22, 17, 50, 1, 54, 1, 40, 33, 26, 1, 76, 8, 43, ... (sequence A001065 in the OEIS) Characterization of classes of numbers The aliquot sum function can be used to characterize several notable classes of numbers: 1 is the only number whose aliquot sum is 0. A number is prime if and only if its aliquot sum is 1.[1] The aliquot sums of perfect, deficient, and abundant numbers are equal to, less than, and greater than the number itself respectively.[1] The quasiperfect numbers (if such numbers exist) are the numbers n whose aliquot sums equal n + 1. The almost perfect numbers (which include the powers of 2, being the only known such numbers so far) are the numbers n whose aliquot sums equal n − 1. The untouchable numbers are the numbers that are not the aliquot sum of any other number.
Their study goes back at least to Abu Mansur al-Baghdadi (circa 1000 AD), who observed that both 2 and 5 are untouchable.[1][2] Paul Erdős proved that their number is infinite.[3] The conjecture that 5 is the only odd untouchable number remains unproven, but would follow from a form of Goldbach's conjecture together with the observation that, for a semiprime number pq, the aliquot sum is p + q + 1.[1] The mathematicians Pollack & Pomerance (2016) noted that one of Erdős' "favorite subjects of investigation" was the aliquot sum function. Iteration Main article: Aliquot sequence Iterating the aliquot sum function produces the aliquot sequence n, s(n), s(s(n)), ... of a nonnegative integer n (in this sequence, we define s(0) = 0). It remains unknown whether these sequences always converge (the limit of the sequence must be 0 or a perfect number), or whether they can diverge (i.e. the limit of the sequence does not exist).[1] See also: Divisor function, the sum of the (xth powers of the) positive divisors of a number; William of Auberive, medieval numerologist interested in aliquot sums. ^ a b c d e Pollack, Paul; Pomerance, Carl (2016), "Some problems of Erdős on the sum-of-divisors function", Transactions of the American Mathematical Society, Series B, 3: 1–26, doi:10.1090/btran/10, MR 3481968 ^ Sesiano, J. (1991), "Two problems of number theory in Islamic times", Archive for History of Exact Sciences, 41 (3): 235–238, doi:10.1007/BF00348408, JSTOR 41133889, MR 1107382 ^ Erdős, P. (1973), "Über die Zahlen der Form {\displaystyle \sigma (n)-n} und {\displaystyle n-\phi (n)}" (PDF), Elemente der Mathematik, 28: 83–86, MR 0337733 Weisstein, Eric W. "Restricted Divisor Function". MathWorld.
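The aliquot sum s(n) is straightforward to compute by trial division; a minimal sketch reproducing values quoted above:

```python
def aliquot_sum(n):
    """Sum of the proper divisors of n (all divisors of n other than n)."""
    # Proper divisors of n > 1 are at most n // 2.
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

# s(12) = 1 + 2 + 3 + 4 + 6 = 16, as computed in the text.
s12 = aliquot_sum(12)
```

The characterizations follow immediately: aliquot_sum(p) == 1 for a prime p, and aliquot_sum(n) == n for a perfect number such as 6.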
The distribution of polynomials over finite fields Stephen Cohen — 1970 Galois groups of trinomials Value sets of functions over finite fields The distribution of polynomials over finite fields, II Kloosterman sums and primitive elements in Galois fields Windmill Polynomials Over Fields of Characteristic Two. Stephen D. Cohen — 1989 Primitive roots and powers among values of polynomials over finite fields. The Galois group of generic symmetric matrices Cohen, Stephen D. — 1990 The strong primitive normal basis theorem Stephen D. Cohen; Sophie Huczynska — 2010 Primitive free quartics with specified norm and trace On Grosswald's conjecture on primitive roots Stephen D. Cohen; Tomás Oliveira e Silva; Tim Trudgian — 2016 Grosswald's conjecture is that g(p), the least primitive root modulo p, satisfies g(p) ≤ √p − 2 for all p > 409. We make progress towards this conjecture by proving that g(p) ≤ √p − 2 for all 409 < p < 2.5×10^15 and for all p > 3.38×10^71.
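Grosswald's bound can be checked for small primes with a brute-force computation of g(p). The helper below is an illustrative sketch, not the method used in the paper:

```python
import math

def least_primitive_root(p):
    """Least primitive root g(p) modulo an odd prime p (brute force)."""
    # Factor p - 1 into its distinct prime factors.
    n, factors = p - 1, set()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    # g is a primitive root iff g^((p-1)/q) != 1 (mod p) for each factor q.
    for g in range(2, p):
        if all(pow(g, (p - 1) // q, p) != 1 for q in factors):
            return g

# The bound g(p) <= sqrt(p) - 2 holds for a few primes just above 409,
# consistent with the proven range 409 < p < 2.5e15.
checks = [least_primitive_root(p) <= math.isqrt(p) - 2
          for p in (419, 421, 431, 1009)]
```

Since g(p) is an integer, g(p) ≤ √p − 2 is equivalent to g(p) ≤ ⌊√p⌋ − 2, which is what the check uses.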
A number of efficiency improvements have been applied to make Maple 10 faster and use less memory. GNU Multiple Precision (GMP) Library The internal interface between Maple and GMP has been improved. Maple now uses GMP integers to represent all non-immediate integers. This allows significant optimization within Maple. Operations on large integers are much faster than in previous releases of Maple. This speedup has also affected floating-point operations, which are much faster at high Digits settings. For commutative polynomials, the Groebner[Basis] command implements new algorithms for Groebner basis computation that are faster, in most cases, than the existing algorithms. You can control which algorithm is used with the method option. By default, the command makes a reasonable guess at the most efficient method available. For more information, see Groebner[Basis] and Groebner/Basis_details. The option method="skew" selects an algorithm from Maple 9.5. Cyclic5 := [ x+y+z+t+u, x*y+y*z+z*t+t*u+u*x, x*y*z+y*z*t+z*t*u+t*u*x+u*x*y, x*y*z*t+y*z*t*u+z*t*u*x+t*u*x*y+u*x*y*z, x*y*z*t*u-1 ]: time(Basis(Cyclic5, plex(x,y,z,t,u), method="skew")); 24.879 Without specifying a method, Maple uses the new engine for commutative polynomials. (The variable x is renamed to override the caching mechanism.) time(Basis(eval(Cyclic5,x=X), plex(X,y,z,t,u))); 1.170 Matrix Subselection Subselection of regular shaped rtables is now faster in Maple. The code below now requires 0.279s, compared to 1.939s in Maple 9.5 on the same machine. repmat := proc(a,m,n) a[[seq(1..-1,i=1..m)],[seq(1..-1,i=1..n)]]; end proc: A := Matrix([[1,2,3],[4,5,6],[7,8,9]],datatype=integer[4]): time(repmat(A,1000,1000)); 0.154 Computation of the Matrix exponential for real and complex floating-point data is significantly faster. In Maple 9.5.1, the example below requires 14.3 seconds and allocates 4652204 bytes more memory, run under Linux on a 2.4GHz Celeron(R).
In Maple 10, this call to MatrixExponential requires 0.06 seconds and allocates only 458668 more bytes of memory. M := Matrix(50,50,(i,j)->sin(i/30.)*cos(j/20.),datatype=float): st := time(): sol := LinearAlgebra[MatrixExponential](M): timeinc := time()-st; timeinc := 0.076 Special code has been added to RootFinding[Analytic] to detect multiple roots, leading to a significant speedup in the presence of multiple roots. RootFinding:-Analytic(sinh(z-1)*(z-1)*sin(z-1),z,-3-I..3+I*2,'digits'=32); 1.0000000000000000000000000000000, 1.0000000000000000000000000000000, 1.0000000000000000000000000000000, -2.1415926535897932384626433832791 On restart, the Maple kernel releases the memory it has allocated. After a restart, the amount of memory used by the kernel should be similar to the amount of memory used when the kernel first starts.
Decimate signal using cascaded integrator-comb filter - Simulink - MathWorks 한국 The block supports real and complex fixed-point inputs. In its normal mode of operation, the CIC Decimation block allows the adder's numeric values to overflow and wrap around [1] [3]. The Fixed-Point infrastructure then causes overflow warnings to appear on the command line. This overflow is of no consequence. CIC decimated output, returned as a vector or a matrix. The data type of the output is determined by the settings in the block dialog. The complexity of the output matches that of the input. The number of output rows is (1/R)×Num, where R is the decimation factor and Num is the number of input rows. The output word length is \text{WL}=\mathrm{ceil}\left(N\times {\mathrm{log}}_{2}\left(M\times R\right)\right)+I where N is the number of sections, M is the differential delay, R is the decimation factor, and I is the input word length. The transfer function of the filter is H\left(z\right)={\left[\sum _{k=0}^{RM-1}{z}^{-k}\right]}^{N}=\frac{{\left(1-{z}^{-RM}\right)}^{N}}{{\left(1-{z}^{-1}\right)}^{N}}=\frac{1}{{\left(1-{z}^{-1}\right)}^{N}}\cdot {\left(1-{z}^{-RM}\right)}^{N}={H}_{\text{I}}^{N}\left(z\right)\cdot {H}_{\text{C}}^{N}\left(z\right) Working at the decimated rate, the differential delay appears as M rather than RM: H\left(z\right)=\frac{{\left(1-{z}^{-M}\right)}^{N}}{{\left(1-{z}^{-1}\right)}^{N}}. [1] Hogenauer, E.B. "An Economical Class of Digital Filters for Decimation and Interpolation." IEEE Transactions on Acoustics, Speech and Signal Processing. Vol. 29, Number 2, 1981, pp. 155–162.
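The cascade of N integrators, a rate reduction by R, and N combs described by the transfer function above can be sketched in plain integer arithmetic. This models the filter structure only; the actual block also manages fixed-point word lengths, which are omitted here:

```python
def cic_decimate(x, R, N=2, M=1):
    """Decimate x by R with an N-stage CIC filter, differential delay M."""
    # N cascaded integrators running at the input rate.
    for _ in range(N):
        acc, out = 0, []
        for v in x:
            acc += v
            out.append(acc)
        x = out
    # Rate reduction by R.
    x = x[::R]
    # N cascaded comb stages running at the low rate (delay M).
    for _ in range(N):
        x = [v - (x[i - M] if i >= M else 0) for i, v in enumerate(x)]
    return x

# The DC gain of the cascade is (R*M)**N, so a constant input of ones
# settles at 16 for R=4, N=2, M=1.
y = cic_decimate([1] * 64, R=4, N=2, M=1)
```

Using integers throughout mirrors the block's use of wraparound fixed-point adders: only additions and subtractions are needed, with no multipliers.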
Abstract Algebra/Hypercomplex numbers - Wikibooks, open books for an open world The terms group theory and ring theory are refinements of algebraic understanding that developed in the era of electronics and aircraft, the 20th century. The term hypercomplex number harkens back to the age of steam. For the most part, the hypercomplex systems have been assimilated through the resolution of vision provided by groups, rings, and fields, and the term has been retired from use other than historic reference. Similarly, the field of complex numbers {\displaystyle C=\{z=x+iy,\ x,y\in R\},\ i^{2}=-1,} has an insufficiently descriptive name, and might be better described as division binarions C according to composition algebra theory. W.R. Hamilton (1805−1865) studied quaternions and biquaternions Hypercomplex numbers grew out of William Rowan Hamilton's construction of quaternions in the 1840s. The legacy of his vision continues in spatial vector algebra: for vectors {\displaystyle v=ai+bj+ck} {\displaystyle w=di+ej+fk,} the well-known products are Dot: {\displaystyle v\cdot w=ad+be+cf\in R} Cross: {\displaystyle v\times w=(bf-ec)i-(af-dc)j+(ae-db)k\in R^{3}.} These products are the severed remnants of Hamilton's quaternion product: {\displaystyle \ \ vw=-v\cdot w+v\times w\in R^{4}.} In 1845 John T. Graves and Arthur Cayley described an eight-dimensional hypercomplex system now referred to as octonions or Cayley numbers. They extend quaternions but associativity of multiplication is lost. James Cockle challenged the presumption of quaternions in four dimensions by presenting associative hypercomplex systems tessarines (1848) and coquaternions (1849). Hamilton had his own eight-dimensional system (biquaternions) that was explored in his Lectures on Quaternions (1853), but virtually ignored in Elements of Quaternions (completed by his son in 1865) and in the version edited by Charles Jasper Jolly in 1899.
Quaternions feature the property of anti-commutativity of the basis vectors i, j, k: {\displaystyle ij=-ji=k,\quad jk=-kj=i,\quad ki=-ik=j} (in coquaternions, by contrast, {\displaystyle jk=-kj=-i}). Due to anti-commutativity, squaring a vector leaves many cancelled terms: {\displaystyle (ai+bj+ck)^{2}=-a^{2}-b^{2}-c^{2},} thus for {\displaystyle r=ai+bj+ck,} {\displaystyle (a^{2}+b^{2}+c^{2}=1)\ \equiv \ (r^{2}=-1).} For any such r, the plane {x + y r : x, y in R} is a complex number plane, and by Euler's formula the mapping {\displaystyle ar\mapsto \cos a+r\sin a} takes the ray through r to a wrapping of the unit circle in that plane. The unit sphere in quaternions is composed of these circles, as r varies. According to Hamilton, a unit quaternion is a versor; evidently every versor can be known by its parameters a and r. W.K. Clifford (1845−1879) studied split-biquaternions When the anti-commutativity axiom is changed to commutativity, two square roots of minus one, say h and i, have a product hi whose square is {\displaystyle (hi)^{2}=hihi=h^{2}i^{2}=(-1)(-1)=+1.} James Cockle's tessarines are based on such an imaginary unit, now with plus one for its square. Cockle initiated the use of j, j2 = +1, to represent this new imaginary unit that is not a square root of minus one. The tessarines are t = w + z j where w, z are in C. The real tessarines {\displaystyle D=\{a+bj:a,b\in R\}} feature a unit hyperbola, contrasting with the unit circle {\displaystyle \{a+bi:a^{2}+b^{2}=1\}\subset C.} Whereas the circle surrounds the origin, a hyperbola has radii in only half of the directions of the plane and requires a conjugate hyperbola to cover the other half, and even then the shared asymptotes provide still more directions in the plane.
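Hamilton's product and its dot/cross decomposition for pure quaternions can be verified numerically. A minimal sketch; the component ordering (w, x, y, z) is an assumption for illustration:

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# Pure quaternions v = 1i+2j+3k and w = 4i+5j+6k: the product should split
# into scalar part -(v.w) and vector part v x w, as in Hamilton's formula.
v, w = (0, 1, 2, 3), (0, 4, 5, 6)
dot = 1*4 + 2*5 + 3*6
cross = (2*6 - 3*5, 3*4 - 1*6, 1*5 - 2*4)
product = qmul(v, w)
```

The same function exhibits anti-commutativity of the basis vectors, e.g. qmul(i, j) = k while qmul(j, i) = -k.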
In 1873 William Kingdon Clifford exploited the real tessarines to modify Hamilton's biquaternions: where Hamilton had used elements of C (division binarions) for coefficients of a biquaternion q = w + x i + y j + z k, Clifford used real tessarines (now called split binarions D). Clifford's construction illustrated a process of generating new algebras from given ones, a procedure called the tensor product: Hamilton's biquaternions are {\displaystyle C\otimes H} , and the split biquaternions of Clifford are {\displaystyle D\otimes H.} Clifford was precocious, particularly in his anticipation of a geometric model of gravitation as hills and valleys in a temporal plenum. But he lived before set theory, modern logical and mathematical symbology, and before abstract algebra with its firmament of groups, rings and fields. One of the realities of light is its finite speed: a foot per nanosecond, an astronomical unit in 500 seconds, or a light year in a year. When a diagram uses any of these pairs of units as axes, the diagonals through the origin represent the locus of light, one for the left beam, one for the right. The diagonals are asymptotes to hyperbolas, such as {\displaystyle aj\mapsto \cosh a+j\sinh a,} a real tessarine. Eventually, over decades of deliberation, physicists realized that this hyperbola was the answer to a linear-velocity problem: how can two velocities be combined when their naive sum v + w may run over the speed of light? The hyperbola lies between the asymptotes and will not run over the speed of light. In the real tessarine system the points of the hyperbola are {\displaystyle e^{aj}} and {\displaystyle e^{bj},} representing two velocities in the group {\displaystyle \{e^{xj}:x\in R\},} a hyperbola. The sum of two velocities is found by their product {\displaystyle e^{aj}e^{bj}=e^{(a+b)j},} another element of the hyperbola. After 1911 the parameter a was termed rapidity. Evidently this aspect of special relativity was born of real tessarines.
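The composition law e^{aj} e^{bj} = e^{(a+b)j} is exactly the relativistic velocity-addition rule. A minimal numerical sketch, with velocities expressed as fractions of the speed of light:

```python
import math

def add_velocities(v1, v2):
    """Combine velocities through their rapidities: v = tanh(a), a = atanh(v)."""
    a, b = math.atanh(v1), math.atanh(v2)
    return math.tanh(a + b)

# Adding rapidities reproduces the closed form (v1 + v2) / (1 + v1*v2),
# and the result stays below the speed of light.
v = add_velocities(0.8, 0.9)
```

Naively summing 0.8 and 0.9 would exceed 1, but the hyperbola keeps the combined velocity strictly below the speed of light.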
The electromagnetic work of Clerk Maxwell and Heinrich Hertz demanded a fitting context for theorizing with the temporal variable included. Maxwell had used Hamilton's del operator {\displaystyle \nabla =i{\frac {\partial }{\partial x}}+j{\frac {\partial }{\partial y}}+k{\frac {\partial }{\partial z}}} in A Treatise on Electricity and Magnetism, but the quaternion algebra is unsuitable: it is implicitly a Euclidean 4-space since {\displaystyle qq^{*}=w^{2}+x^{2}+y^{2}+z^{2},} the square of the Euclidean norm. Alex Macfarlane (1851−1913) studied hyperbolic quaternions In the 1890s Alexander Macfarlane advocated Space Analysis with a hypercomplex system that exchanged Hamilton's sphere of imaginary units for a sphere of Cockle's imaginary units, which square to +1. He retained the anti-commutative property of quaternions so that {\displaystyle qq^{*}=w^{2}-x^{2}-y^{2}-z^{2}.} Then in this system of hyperbolic quaternions, for any r on the sphere, {\displaystyle \{x+yr:x,y\in R\}} is a plane of split binarions, including a unit hyperbola suitable to represent motion at any rapidity in direction r. The hyperbolic quaternions looked like an elegant model for electromechanics until it was found wanting. The problem was that the simple property of associative multiplication broke down in hyperbolic quaternions, and though it was a hypercomplex system with a useful model, the loss of this property put it outside the purview of group theory, for instance. Once the axioms of a vector space were established, hypercomplex systems were included. The axioms require a commutative group of vectors, a scalar field, and rules of operations. Putting the axioms of a vector space together with those for a ring establishes the meaning of an algebra in the study of abstract algebra. For associative hypercomplex systems, Joseph Wedderburn removed all the mystery in 1907 when he showed that any such system could be represented with matrix rings over a field.
For instance, 2 × 2 real matrices form an algebra M(2,R) isomorphic to the coquaternions, and 2 × 2 complex matrices form an algebra M(2,C) isomorphic to the biquaternions. These algebras, along with R, C and the tessarines, form the associative composition algebras, which are noted for the property {\displaystyle (pq)(pq)^{*}=(pp^{*})(qq^{*}).} About 1897 four cooperative efforts changed mathematics for the better. Giuseppe Peano began to assemble his Formulario Mathematico, Felix Klein spearheaded the mathematical encyclopedia project, the quadrennial series of International Congresses of Mathematics was begun, and the International Association for Promoting the Study of Quaternions and Allied Systems of Mathematics published a bibliography and annual review. Peano's effort gave mathematicians the symbolic language to compress concepts and proofs using set theory. Klein's encyclopedia upheld German as the primary medium, and the Congresses drew together all nations. The Quaternion Society was the primary arena addressing hypercomplex numbers, and was dissolved after 1913 upon the death of its president, Alexander Macfarlane.
Computability theory Study of computable functions and Turing degrees For the concept of computability, see Computability. The busy beaver function Σ(n) grows faster than any computable function; hence, it is not computable, [1] and only a few of its values are known. Basic questions addressed by computability theory include: What does it mean for a function on the natural numbers to be computable? How can noncomputable functions be classified into a hierarchy based on their level of noncomputability? Computable and uncomputable sets Computability theory originated in the 1930s, with work of Kurt Gödel, Alonzo Church, Rózsa Péter, Alan Turing, Stephen Kleene, and Emil Post. [2] [3] "Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general recursiveness (or Turing's computability). It seems to me that this importance is largely due to the fact that with this concept one has for the first time succeeded in giving an absolute notion to an interesting epistemological notion, i.e., one not depending on the formalism chosen." (Gödel 1946 in Davis 1965:84). [4] Relative computability and the Turing degrees Main articles: Turing reduction and Turing degree They are computably enumerable, and each can be translated into any other via a many-one reduction.
That is, given such sets A and B, there is a total computable function f such that A = {x : f(x) ∈ B}. These sets are said to be many-one equivalent (or m-equivalent). Other reducibilities Main article: Reduction (recursion theory) The strong reducibilities include: One-one reducibility: A is one-one reducible (or 1-reducible) to B if there is a total computable injective function f such that each n is in A if and only if f(n) is in B. Many-one reducibility: This is essentially one-one reducibility without the constraint that f be injective. A is many-one reducible (or m-reducible) to B if there is a total computable function f such that each n is in A if and only if f(n) is in B. Truth-table reducibility: A is truth-table reducible to B if A is Turing reducible to B via an oracle Turing machine that computes a total function regardless of the oracle it is given. Because of the compactness of Cantor space, this is equivalent to saying that the reduction presents a single list of questions (depending only on the input) to the oracle simultaneously, and then, having seen their answers, is able to produce an output without asking additional questions regardless of the oracle's answers to the initial queries. Many variants of truth-table reducibility have also been studied. Rice's theorem and the arithmetical hierarchy Rice showed that for every nontrivial class C (which contains some but not all c.e. sets) the index set E = {e : the eth c.e. set We is in C} has the property that either the halting problem or its complement is many-one reducible to E, that is, can be mapped using a many-one reduction to E (see Rice's theorem for more detail). But many of these index sets are even more complicated than the halting problem. These types of sets can be classified using the arithmetical hierarchy.
For example, the index set FIN of the class of all finite sets is on the level Σ2, the index set REC of the class of all recursive sets is on the level Σ3, the index set COFIN of all cofinite sets is also on the level Σ3, and the index set COMP of the class of all Turing-complete sets is on the level Σ4. These hierarchy levels are defined inductively: Σn+1 contains just all sets which are computably enumerable relative to Σn; Σ1 contains the computably enumerable sets. The index sets given here are even complete for their levels, that is, all the sets in these levels can be many-one reduced to the given index sets. Further information: Turing degree § Post's problem and the priority method The lattice of computably enumerable sets Automorphism problems The field of Kolmogorov complexity and algorithmic randomness was developed during the 1960s and 1970s by Chaitin, Kolmogorov, Levin, Martin-Löf and Solomonoff (the names are given here in alphabetical order; much of the research was independent, and the unity of the concept of randomness was not understood at the time). The main idea is to consider a universal Turing machine U and to measure the complexity of a number (or string) x as the length of the shortest input p such that U(p) outputs x. This approach revolutionized earlier ways to determine when an infinite sequence (equivalently, characteristic function of a subset of the natural numbers) is random or not by invoking a notion of randomness for finite objects. Kolmogorov complexity became not only a subject of independent study but is also applied to other subjects as a tool for obtaining proofs. There are still many open problems in this area. For that reason, a research conference in this area was held in January 2007 [5] and a list of open problems [6] is maintained by Joseph Miller and Andre Nies.
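Kolmogorov complexity K(x) as defined above is not computable, but any compressor gives a computable upper bound on it: a compressed string plus the decompressor is one particular description of x. A minimal sketch, using Python's zlib purely as a stand-in compressor (an assumption of this example, not the universal machine U of the definition):

```python
import zlib

def complexity_upper_bound(x: bytes) -> int:
    """A computable UPPER BOUND on the Kolmogorov complexity of x,
    obtained from the zlib-compressed length. K(x) itself is
    uncomputable; compression can only bound it from above."""
    return len(zlib.compress(x, 9))

# A string with a very short description ("4096 copies of 'a'") ...
highly_regular = b"a" * 4096
# ... versus one whose shortest description is plausibly longer.
less_regular = bytes(range(256)) * 16

assert complexity_upper_bound(highly_regular) < complexity_upper_bound(less_regular)
```

The inequality only shows that zlib found a shorter description for the regular string; a failure to compress never proves that a string is random, which is one way to see why K is so hard to pin down.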
Generalizations of Turing computability Computability theory includes the study of generalized notions of this field such as arithmetic reducibility, hyperarithmetical reducibility and α-recursion theory, as described by Sacks (1990). These generalized notions include reducibilities that cannot be executed by Turing machines but are nevertheless natural generalizations of Turing reducibility. These studies include approaches to investigate the analytical hierarchy, which differs from the arithmetical hierarchy by permitting quantification over sets of natural numbers in addition to quantification over individual numbers. These areas are linked to the theories of well-orderings and trees; for example, the set of all indices of computable (nonbinary) trees without infinite branches is complete for level Π¹₁ of the analytical hierarchy. Both Turing reducibility and hyperarithmetical reducibility are important in the field of effective descriptive set theory. The even more general notion of degrees of constructibility is studied in set theory. Relationships between definability, proof and computability The field of mathematical logic dealing with computability and its generalizations has been called "recursion theory" since its early days. Robert I. Soare, a prominent researcher in the field, has proposed (Soare 1996) that the field should be called "computability theory" instead. He argues that Turing's terminology using the word "computable" is more natural and more widely understood than the terminology using the word "recursive" introduced by Kleene. Many contemporary researchers have begun to use this alternate terminology. [7] These researchers also use terminology such as partial computable function and computably enumerable (c.e.) set instead of partial recursive function and recursively enumerable (r.e.) set. Not all researchers have been convinced, however, as explained by Fortnow [8] and Simpson.
[9] Some commentators argue that both the names recursion theory and computability theory fail to convey the fact that most of the objects studied in computability theory are not computable. [10] ^ Tibor Radó (May 1962). "On non-computable functions". Bell System Technical Journal. 41 (3): 877–884. doi: 10.1002/j.1538-7305.1962.tb00480.x. ^ Many of these foundational papers are collected in The Undecidable (1965) edited by Martin Davis. ^ Soare, Robert Irving (22 December 2011). "Computability Theory and Applications: The Art of Classical Computability" (PDF). Department of Mathematics. University of Chicago. Retrieved 23 August 2017. ^ The full paper can also be found at pages 150ff (with commentary by Charles Parsons at 144ff) in Feferman et al. editors 1990 Kurt Gödel Volume II Publications 1938-1974, Oxford University Press, New York, ISBN 978-0-19-514721-6. Both reprintings have the following footnote * added to the Davis volume by Gödel in 1965: "To be more precise: a function of integers is computable in any formal system containing arithmetic if and only if it is computable in arithmetic, where a function f is called computable in S if there is in S a computable term representing f (p. 150). ^ Conference on Logic, Computability and Randomness Archived 2007-12-26 at the Wayback Machine, January 10–13, 2007. ^ The homepage of Andre Nies has a list of open problems in Kolmogorov complexity ^ MathSciNet searches for the titles like "computably enumerable" and "c.e." show that many papers have been published with this terminology as well as with the other one. ^ Lance Fortnow, " Is it Recursive, Computable or Decidable?," 2004-2-15, accessed 2018-3-22. ^ Stephen G. Simpson, " What is computability theory?," FOM email list, 1998-8-24, accessed 2006-1-9. ^ Harvey Friedman, " Renaming recursion theory," FOM email list, 1998-8-28, accessed 2006-1-9. Undergraduate level texts Cooper, S.B. (2004). Computability Theory. Chapman & Hall/CRC. ISBN 1-58488-237-9. 
Cutland, N. (1980). Computability, An introduction to recursive function theory. Cambridge University Press. ISBN 0-521-29465-7. Matiyasevich, Y. (1993). Hilbert's Tenth Problem. MIT Press. ISBN 0-262-13295-8. Jain, S.; Osherson, D.; Royer, J.; Sharma, A. (1999). Systems that learn, an introduction to learning theory (2nd ed.). Bradford Book. ISBN 0-262-10077-0. Kleene, S. (1952). Introduction to Metamathematics. North-Holland. ISBN 0-7204-2103-9. Lerman, M. (1983). Degrees of unsolvability. Perspectives in Mathematical Logic. Springer-Verlag. ISBN 3-540-12155-2. Nies, Andre (2009). Computability and Randomness. Oxford University Press. ISBN 978-0-19-923076-1. Odifreddi, P. (1989). Classical Recursion Theory. North-Holland. ISBN 0-444-87295-7. Odifreddi, P. (1999). Classical Recursion Theory. Vol. II. Elsevier. ISBN 0-444-50205-X. Rogers, Jr., H. (1987). The Theory of Recursive Functions and Effective Computability (2nd ed.). MIT Press. ISBN 0-262-68052-1. Sacks, G. (1990). Higher Recursion Theory. Springer-Verlag. ISBN 3-540-19305-7. Simpson, S.G. (1999). Subsystems of Second Order Arithmetic. Springer-Verlag. ISBN 3-540-64882-8. Soare, R.I. (1987). Recursively Enumerable Sets and Degrees. Perspectives in Mathematical Logic. Springer-Verlag. ISBN 0-387-15299-7. Survey papers and collections Ambos-Spies, K.; Fejer, P. (2006). "Degrees of Unsolvability" (PDF). Archived from the original (PDF) on 2013-04-20. Retrieved 2006-10-27. Unpublished preprint. Enderton, H. (1977). "Elements of Recursion Theory". In Barwise, J. (ed.). Handbook of Mathematical Logic. North-Holland. pp. 527–566. ISBN 0-7204-2285-X. Ershov, Y.L.; Goncharov, S.S.; Nerode, A.; Remmel, J.B. (1998). Handbook of Recursive Mathematics. North-Holland. ISBN 0-7204-2285-X. Fairtlough, M.; Wainer, S.S. (1998). "Hierarchies of Provably Recursive Functions". In Buss, S.R. (ed.). Handbook of Proof Theory. Elsevier. pp. 149–208. ISBN 978-0-08-053318-6. Soare, R.I. (1996). "Computability and recursion" (PDF). 
Bulletin of Symbolic Logic. 2 (3): 284–321. doi: 10.2307/420992. JSTOR 420992. S2CID 5894394. Research papers and collections Burgin, M.; Klinger, A. (2004). "Experience, Generations, and Limits in Machine Learning". Theoretical Computer Science. 317 (1–3): 71–91. doi: 10.1016/j.tcs.2003.12.005. Church, A. (1936). "An unsolvable problem of elementary number theory". American Journal of Mathematics. 58 (2): 345–363. doi: 10.2307/2371045. JSTOR 2371045. Reprinted in Davis 1965. Church, A. (1936). "A note on the Entscheidungsproblem". Journal of Symbolic Logic. 1 (1): 40–41. doi: 10.2307/2269326. JSTOR 2269326. Reprinted in Davis 1965. Davis, Martin, ed. (2004) [1965]. The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions. Courier. ISBN 978-0-486-43228-1. Friedberg, R.M. (1958). "Three theorems on recursive enumeration: I. Decomposition, II. Maximal Set, III. Enumeration without repetition". The Journal of Symbolic Logic. 23 (3): 309–316. doi: 10.2307/2964290. JSTOR 2964290. Gold, E. Mark (1967). "Language Identification in the Limit" (PDF). Information and Control. 10 (5): 447–474. doi: 10.1016/s0019-9958(67)91165-5. [1] Harrington, L.; Soare, R.I. (1991). "Post's Program and incomplete recursively enumerable sets". Proc. Natl. Acad. Sci. U.S.A. 88 (22): 10242–6. Bibcode: 1991PNAS...8810242H. doi: 10.1073/pnas.88.22.10242. PMC 52904. PMID 11607241. Jockusch jr, C.G. (1968). "Semirecursive sets and positive reducibility". Trans. Amer. Math. Soc. 137 (2): 420–436. doi: 10.1090/S0002-9947-1968-0220595-7. JSTOR 1994957. Kleene, S.C.; Post, E.L. (1954). "The upper semi-lattice of degrees of recursive unsolvability". Annals of Mathematics. Second. 59 (3): 379–407. doi: 10.2307/1969708. JSTOR 1969708. Moore, C. (1996). "Recursion theory on the reals and continuous-time computation". Theoretical Computer Science. 162 (1): 23–44. CiteSeerX 10.1.1.6.5519. doi: 10.1016/0304-3975(95)00248-0. Myhill, J. (1956). 
"The lattice of recursively enumerable sets". The Journal of Symbolic Logic. 21: 215–220. doi: 10.1017/S002248120008525X. Orponen, P. (1997). "A survey of continuous-time computation theory". Advances in Algorithms, Languages, and Complexity: 209–224. CiteSeerX 10.1.1.53.1991. doi: 10.1007/978-1-4613-3394-4_11. ISBN 978-1-4613-3396-8. Post, E. (1944). "Recursively enumerable sets of positive integers and their decision problems". Bulletin of the American Mathematical Society. 50 (5): 284–316. doi: 10.1090/S0002-9904-1944-08111-1. MR 0010514. Post, E. (1947). "Recursive unsolvability of a problem of Thue". Journal of Symbolic Logic. 12 (1): 1–11. doi: 10.2307/2267170. JSTOR 2267170. Reprinted in Davis 1965. Shore, Richard A.; Slaman, Theodore A. (1999). "Defining the Turing jump" (PDF). Mathematical Research Letters. 6 (6): 711–722. doi: 10.4310/mrl.1999.v6.n6.a10. MR 1739227. Slaman, T.; Woodin, W.H. (1986). "Definability in the Turing degrees". Illinois J. Math. 30 (2): 320–334. doi: 10.1215/ijm/1256044641. MR 0840131. Soare, R.I. (1974). "Automorphisms of the lattice of recursively enumerable sets, Part I: Maximal sets". Annals of Mathematics. 100 (1): 80–120. doi: 10.2307/1970842. JSTOR 1970842. Turing, A. (1937). "On computable numbers, with an application to the Entscheidungsproblem". Proceedings of the London Mathematical Society. s2-42 (1): 230–265. doi: 10.1112/plms/s2-42.1.230. Turing, A.M. (1938). "On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction". Proceedings of the London Mathematical Society. s2-43 (1): 544–6. doi: 10.1112/plms/s2-43.6.544. Reprinted in Davis 1965. PDF from comlab.ox.ac.uk Turing, A.M. (1939). "Systems of logic based on ordinals". Proceedings of the London Mathematical Society. s2-45 (1): 161–228. doi: 10.1112/plms/s2-45.1.161. hdl: 21.11116/0000-0001-91CE-3. Reprinted in Davis 1965. Wikimedia Commons has media related to Computability theory. 
Computability in Europe homepage Webpage on Recursion Theory Course at Graduate Level with approximately 100 pages of lecture notes German language lecture notes on inductive inference
Number Theory Overview | Brilliant Math & Science Wiki Number Theory Overview Zandra Vinegar contributed Number theory is a rich and abstract branch of mathematics which explores the fundamental properties of our number system. Whether you're looking for quick practice problems that strengthen your abstract reasoning skills for Olympiad competition topics, or for advanced, open-ended challenges, we have something here for you. The collection of Brilliant problems and articles is large and growing. The Best of Number Theory on Brilliant Number Theory Topic Area Map Theorems and Proofs in Number Theory Within number theory, you can learn about wide-ranging topics such as: Divisibility Rules Chinese Remainder Theorem Solving Diophantine Equations Extended Euclidean Algorithm Common Misconceptions Divisibility Prime Numbers Greatest Common Divisor & Lowest Common Multiple Number Bases Factorials Integer Sequences Rational Numbers Modular Arithmetic Quadratic Residues Linear Diophantine Equations The most popular introductory Number Theory topics on Brilliant are: Prime Numbers Prime Factorization Fibonacci Sequence Factorials Irrational Numbers Our goal is to be able to challenge any student in the world with math that stretches their skills. Learning definitions and introductory skills is the first part of mastery in number theory, followed by learning and understanding the theorems and proofs central to this topic. The most popular pages on Brilliant that prove famous theorems in number theory are: Chinese Remainder Theorem Postage Stamp Problem Factors Wilson's Theorem Bezout's Identity Cite as: Number Theory Overview. Brilliant.org. Retrieved from https://brilliant.org/wiki/learn-and-practice-number-theory-on-brilliant/
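As a small taste of two of the topics listed above (the Extended Euclidean Algorithm and Bezout's Identity), here is a minimal sketch; the input numbers are arbitrary examples, not taken from any Brilliant problem:

```python
def extended_gcd(a: int, b: int):
    """Extended Euclidean Algorithm.
    Returns (g, x, y) with g = gcd(a, b) and a*x + b*y == g,
    which is exactly Bezout's Identity."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        # Standard update: each step keeps the invariant
        # a*old_x + b*old_y == old_r.
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(240, 46)
assert g == 2 and 240 * x + 46 * y == g  # Bezout's Identity checked
```

The same routine is the workhorse behind modular inverses and the constructive proof of the Chinese Remainder Theorem, two more entries on the list above.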
Costs - Vocabulary - Course Hero Microeconomics/Costs/Vocabulary
average fixed cost (AFC): fixed cost divided by the quantity of output; \text{AFC}=\frac{\text{FC}}{\text{Q}}
average total cost (ATC): total cost divided by the quantity of output; the sum of average fixed and average variable costs; \text{ATC}=\frac{\text{TC}}{\text{Q}}, \text{ATC}=\text{AFC}+\text{AVC}
average variable cost (AVC): variable cost divided by the quantity of output; \text{AVC}=\frac{\text{VC}}{\text{Q}}
constant returns to scale: occurs when long-run average total cost remains the same as output increases
diseconomies of scale: occurs when long-run average total cost rises as output rises
economic cost: the sum of explicit and implicit (opportunity) costs
economies of scale: occurs when long-run average total cost falls as output rises
explicit cost: a cost involving monetary payment
fixed cost (FC): the cost of fixed inputs; does not change as output changes
implicit cost: a cost that does not require the buyer to pay cash, or that cannot easily be assigned a monetary value
law of diminishing marginal returns: the observation in the short run that each additional unit of a production input, holding all other inputs fixed, will yield progressively smaller increases in output
long-run average total cost (LATC): the minimum per-unit cost of producing any level of output when all inputs are variable
long-run total cost: the minimum cost of producing any level of output when all inputs are variable
marginal cost (MC): the additional cost that a firm incurs by producing an additional unit of output, which must be covered in order to remain operational in the short run; \text{MC} = \frac{\Delta \text{TC}}{\Delta \text{Q}}
short run: the period in which at least one productive input is fixed
total cost (TC): the sum of a firm's fixed and variable costs
variable cost (VC): the cost of variable inputs; changes as output changes
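The average-cost identities in this vocabulary can be checked with concrete numbers. The fixed cost of 100 and the variable-cost schedule below are hypothetical figures chosen only for illustration:

```python
# Numerical check of the cost identities: ATC = TC/Q = AFC + AVC,
# and MC = change in TC per one-unit change in Q.
# FC and the VC schedule are hypothetical illustration values.

FC = 100.0
VC = {1: 30.0, 2: 50.0, 3: 90.0}  # variable cost at each output level Q

def afc(q): return FC / q                 # AFC = FC / Q
def avc(q): return VC[q] / q              # AVC = VC / Q
def atc(q): return (FC + VC[q]) / q       # ATC = TC / Q

def mc(q):
    """MC = delta TC / delta Q, here with delta Q = 1 (VC(0) = 0)."""
    return (FC + VC[q]) - (FC + VC.get(q - 1, 0.0))

for q in VC:
    # ATC = AFC + AVC holds at every output level.
    assert abs(atc(q) - (afc(q) + avc(q))) < 1e-9

assert mc(2) == 20.0  # TC rises from 130 to 150 as output goes from 1 to 2
```

Note how FC cancels out of marginal cost: only the change in variable cost matters, which is why MC is unaffected by fixed cost.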
A weak molecule condition for certain Triebel-Lizorkin spaces Steve Hofmann — 1992 A weak molecule condition is given for the Triebel-Lizorkin spaces Ḟ_p^{α,q}, with 0 < α < 1 and 1 < p, q < ∞. As an easy corollary, one may deduce, by atomic-molecular methods, a Triebel-Lizorkin space "T1" Theorem of Han and Sawyer, and Han, Jawerth, Taibleson and Weiss, for Calderón-Zygmund kernels K(x,y) which are not assumed to satisfy any regularity condition in the y variable. On certain nonstandard Calderón-Zygmund operators We formulate a version of the T1 theorem which enables us to treat singular integrals whose kernels need not satisfy the usual smoothness conditions. We also prove a weighted version. As an application of the general theory, we consider a class of multilinear singular integrals in ℝⁿ related to the first Calderón commutator, but with a kernel which is far less regular. On singular integrals of Calderón-type in ℝⁿ, and BMO. We prove Lᵖ (and weighted Lᵖ) bounds for singular integrals of the form p.v. ∫_{ℝⁿ} E((A(x) − A(y))/|x − y|) (Ω(x − y)/|x − y|ⁿ) f(y) dy, where E(t) = cos t if Ω is odd, and E(t) = sin t if Ω is even, and where ∇A ∈ BMO. Even in the case that Ω is smooth, the theory of singular integrals with rough kernels plays a key role in the proof. By... The solution of the Kato problem in two dimensions. Steve Hofmann; Alan McIntosh — 2002 We solve, in two dimensions, the "square root problem of Kato". That is, for L ≡ −div(A(x)∇), where A(x) is a 2 × 2 accretive matrix of bounded measurable complex coefficients, we prove that L^{1/2}: L²₁(ℝ²) → L²(ℝ²). [Proceedings of the 6th International Conference on Harmonic Analysis and Partial Differential Equations, El Escorial (Madrid),... Gaussian estimates for fundamental solutions to certain parabolic systems.
Steve Hofmann; Seick Kim — 2004 Auscher proved Gaussian upper bound estimates for the fundamental solutions to parabolic equations with complex coefficients in the case when coefficients are time-independent and a small perturbation of real coefficients. We prove the equivalence between the local boundedness property of solutions to a parabolic system and a Gaussian upper bound for its fundamental matrix. As a consequence, we extend Auscher's result to the time-dependent case. {L}^{p} Neumann problem for the heat equation in non-cylindrical domains Steve Hofmann; John L. Lewis — 1998 I shall discuss joint work with John L. Lewis on the solvability of boundary value problems for the heat equation in non-cylindrical (i.e., time-varying) domains, whose boundaries are in some sense minimally smooth in both space and time. The emphasis will be on the Neumann problem with data in {L}^{p}. A somewhat surprising feature of our results is that, in contrast to the cylindrical case, the optimal results hold when p = 2, with the situation getting progressively worse as p → 1. In particular,... Second order elliptic operators with complex bounded measurable coefficients in {L}^{p} Steve Hofmann; Svitlana Mayboroda; Alan McIntosh — 2011 Let L be a second order divergence form elliptic operator with complex bounded measurable coefficients. The operators arising in connection with L, such as the heat semigroup and Riesz transform, are not, in general, of Calderón-Zygmund type and exhibit behavior different from their counterparts built upon the Laplacian. The current paper aims at a thorough description of the properties of such operators in {L}^{p}, Sobolev, and some new Hardy spaces naturally associated to L. First, we show that the... Square functions of Calderón type and applications. We establish L² and Lᵖ bounds for a class of square functions which arises in the study of singular integrals and boundary value problems in non-smooth domains.
As an application, we present a simplified treatment of a class of parabolic smoothing operators which includes the caloric single layer potential on the boundary of certain minimally smooth, non-cylindrical domains. Lp bounds for Riesz transforms and square roots associated to second order elliptic operators. Steve Hofmann; José María Martell — 2003 Riesz transforms associated with the Hodge Laplacian in Lipschitz subdomains of Riemannian manifolds Steve Hofmann; Marius Mitrea; Sylvie Monniaux — 2011 We prove {L}^{p}-bounds for the Riesz transforms associated to the Hodge-Laplacian equipped with absolute and relative boundary conditions in a Lipschitz subdomain of a (smooth) Riemannian manifold, for p in a certain interval depending on the Lipschitz character of the domain. Riesz transform on manifolds and heat kernel regularity Pascal Auscher; Thierry Coulhon; Xuan Thinh Duong; Steve Hofmann — 2004 Carleson measures, trees, extrapolation, and T(b) theorems. Pascal Auscher; Steve Hofmann; Camil Muscalu; Terence Tao; Christoph Thiele — 2002 The theory of Carleson measures, stopping time arguments, and atomic decompositions has been well-established in harmonic analysis. More recent is the theory of phase space analysis from the point of view of wave packets on tiles, tree selection algorithms, and tree size estimates. The purpose of this paper is to demonstrate that the two theories are in fact closely related, by taking existing results and reproving them in a unified setting. In particular we give a dyadic version of extrapolation... Existence of big pieces of graphs for parabolic problems. Hofmann, Steve; Lewis, John L.; Nyström, Kaj — 2003
Stress (mechanics) - Simple English Wikipedia, the free encyclopedia Figure 1.1 Stress in a loaded deformable material body assumed as a continuum. Figure 1.2 Axial stress in a prismatic bar axially loaded. Figure 1.3 Normal stress in a prismatic (straight member of uniform cross-sectional area) bar. The stress or force distribution in the cross section of the bar is not necessarily uniform. However, an average normal stress {\displaystyle \sigma _{\mathrm {avg} }\,\!} can be used. Figure 1.4 Shear stress in a prismatic bar. The stress or force distribution in the cross section of the bar is not necessarily uniform. Nevertheless, an average shear stress {\displaystyle \tau _{\mathrm {avg} }\,\!} is a reasonable approximation.[1] Stress is the force per unit area on a body that tends to cause it to change shape.[2] Stress is a measure of the internal forces in a body between its particles.[2] These internal forces are a reaction to the external forces applied on the body that cause it to separate, compress or slide.[2] External forces are either surface forces or body forces. Stress is the average force per unit area that a particle of a body exerts on an adjacent particle, across an imaginary surface that separates them. The formula for uniaxial normal stress is: {\displaystyle {\sigma }={\frac {F}{A}}} where σ is the stress, F is the force and A is the surface area. In SI units, force is measured in newtons and area in square metres. This means stress is newtons per square meter, or N/m2. However, stress has its own SI unit, called the pascal. 1 pascal (symbol Pa) is equal to 1 N/m2. In Imperial units, stress is measured in pound-force per square inch, which is often shortened to "psi". The dimension of stress is the same as that of pressure. In continuum mechanics, the loaded deformable body behaves as a continuum. So, these internal forces are distributed continually within the volume of the material body.
(This means that the stress distribution in the body is expressed as a piecewise continuous function of space and time.) The forces cause deformation of the body's shape. The deformation can lead to a permanent shape change or structural failure if the material is not strong enough. Some models of continuum mechanics treat force as something that can change. Other models look at the deformation of matter and solid bodies, because the characteristics of matter and solids are three dimensional. Each approach can give different results. Classical models of continuum mechanics assume an average force and do not properly include "geometrical factors". (The geometry of the body can be important to how stress is shared out and how energy builds up during the application of the external force.) Shear stress Further information: Shear stress Simple stresses In some situations, the stress within an object can be described by a single number, or by a single vector (a number and a direction). Three such simple stress situations are the uniaxial normal stress, the simple shear stress, and the isotropic normal stress.[3] Uniaxial normal stress Tensile stress (or tension) is the stress state leading to expansion; that is, the length of a material tends to increase in the tensile direction. The volume of the material stays constant. When equal and opposite forces are applied on a body, then the stress due to this force is called tensile stress. Therefore in a uniaxial material the length increases in the tensile stress direction and the other two directions will decrease in size. In the uniaxial manner of tension, tensile stress is induced by pulling forces. Tensile stress is the opposite of compressive stress. Structural members in direct tension are ropes, soil anchors and nails, bolts, etc.
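The uniaxial normal stress formula σ = F/A lends itself to a quick numerical check, including the SI-to-psi conversion mentioned above. The force and cross-section below are hypothetical values chosen only for illustration:

```python
# Numerical illustration of uniaxial normal stress, sigma = F / A.
# Hypothetical example: a 10 kN axial force on a bar with a
# 25 mm x 25 mm square cross section.

F = 10_000.0          # force in newtons
A = 0.025 * 0.025     # cross-sectional area in square metres (625 mm^2)

sigma_pa = F / A      # stress in pascals (1 Pa = 1 N/m^2)
sigma_mpa = sigma_pa / 1e6

PA_PER_PSI = 6894.757            # 1 psi = 6894.757 Pa
sigma_psi = sigma_pa / PA_PER_PSI

assert abs(sigma_pa - 16e6) < 1.0   # 16 MPa, up to float rounding
```

Working in consistent SI units first (newtons and square metres) and converting at the end avoids the most common unit mistakes with this formula.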
Beams subjected to bending moments may include tensile stress as well as compressive stress and/or shear stress. Tensile stress may increase until it reaches the tensile strength, namely the limit state of stress. Stress in one-dimensional bodies All real objects occupy three-dimensional space. However, if two dimensions are very large or very small compared to the others, the object may be modelled as one-dimensional. This simplifies the mathematical modelling of the object. One-dimensional objects include a piece of wire loaded at the ends and viewed from the side, and a metal sheet loaded on the face and viewed up close and through the cross section. ↑ Walter D. Pilkey, Orrin H. Pilkey (1974). Mechanics of solids. p. 292. ↑ 2.0 2.1 2.2 Daintith, John, ed. (2005). A Dictionary of Physics (Fifth ed.). Oxford University Press. p. 509. ISBN 978-0-19-280628-4. Ameen, Mohammed (2005). Computational elasticity: theory of elasticity and finite and boundary element methods. Alpha Science Int'l Ltd. pp. 33–66. ISBN 184265201X. Atanackovic, Teodor M.; Guran, Ardéshir (2000). Theory of elasticity for scientists and engineers. Springer. pp. 1–46. ISBN 081764072X. Chadwick, Peter (1999). Continuum mechanics: concise theory and problems. Dover books on physics (2 ed.). Dover Publications. pp. 90–106. ISBN 0486401804. Chakrabarty, J. (2006). Theory of plasticity (3 ed.). Butterworth-Heinemann. pp. 17–32. ISBN 0750666382. Chatterjee, Rabindranath (1999). Mathematical Theory of Continuum Mechanics. Alpha Science Int'l Ltd. pp. 111–157. ISBN 8173192448. Chen, Wai-Fah; Han, Da-Jian (2007). Plasticity for structural engineers. J. Ross Publishing. pp. 46–71. ISBN 978-1932159752. Fung, Yuan-cheng; Tong, Pin (2001). Classical and computational solid mechanics. Volume 1 of Advanced series in engineering science. World Scientific. pp. 66–96. ISBN 9810241240. Hamrock, Bernard (2005). Fundamentals of Machine Elements. McGraw-Hill. pp. 58–59. ISBN 0072976829.
Hjelmstad, Keith D. (2005). Fundamentals of structural mechanics. Prentice-Hall international series in civil engineering and engineering mechanics (2 ed.). Springer. pp. 103–130. ISBN 038723330X. Irgens, Fridtjov (2008). Continuum mechanics. Springer. pp. 42–81. ISBN 978-3540742975. Jaeger, John Conrad; Cook, N.G.W.; Zimmerman, R.W. (2007). Fundamentals of rock mechanics (Fourth ed.). Wiley-Blackwell. pp. 9–41. ISBN 978-0632057597. Lubliner, Jacob (2008). Plasticity Theory (PDF) (Revised ed.). Dover Publications. ISBN 978-0486462905. Archived from the original (PDF) on 2010-03-31. Retrieved 2011-07-24. Mase, George E. (1970). Continuum Mechanics. McGraw-Hill. pp. 44–76. ISBN 0070406634. Mase, G. Thomas; George E. Mase (1999). Continuum Mechanics for Engineers (Second ed.). CRC Press. pp. 47–102. ISBN 0-8493-1855-6. Prager, William (2004). Introduction to mechanics of continua. Dover Publications. pp. 43–61. ISBN 0486438090. Smith, Donald Ray; Truesdell, Clifford (1993). An introduction to continuum mechanics - after Truesdell and Noll. Springer. ISBN 0792324544. Wu, Han-Chin (2005). Continuum mechanics and plasticity. CRC Press. pp. 45–78. ISBN 1584883634. Stress analysis, Wolfram Research Archived 2006-09-03 at the Wayback Machine ESDU Stress Analysis Methods True stress and true strain Archived 2008-04-30 at the Wayback Machine Stress-Strain Curve for Ductile Material Archived 2008-05-01 at the Wayback Machine
Calculus of Variations/CHAPTER XIV - Wikibooks, open books for an open world Calculus of Variations/CHAPTER XIV CHAPTER XIV: THE ISOPERIMETRICAL PROBLEM. 190 Statement of the problem. 191 A simpler form of the integral that appears. 192 The function {\displaystyle F_{1}} for this problem. 193 Integration of the differential equation that occurs. 194 An immediate consequence is the theorem of Steiner: Those portions of curve that are free to vary, are the arcs of equal circles. 195 If there exists a curve, which with a given periphery incloses the greatest surface area, that curve is a circle. 196 The admissibility that this property belongs to the circle. The isoperimetrical problem may be briefly stated as follows: Determine the curve of given length which maximizes or minimizes a certain definite integral. For example, it may be asked: Among all curves of a given length joining two points, what is the form of the one which produces a minimum surface of revolution about a definite axis; or, along what arc of given length joining two fixed points does a particle under the influence of gravity descend in the shortest time? We shall consider here the Problem V of Chapter I, which may be again stated as follows: Suppose that any portion of the plane is bounded in such a way that one can go from any point in it to any other point without crossing the boundaries. In this portion of plane a line returning into itself is to be so constructed that having a given length it incloses the greatest possible surface area. Let {\displaystyle x} and {\displaystyle y} be such functions of {\displaystyle t} that for two definite values {\displaystyle t_{0}} and {\displaystyle t_{1}} the corresponding points fall together, and that while {\displaystyle t} goes from the smaller value {\displaystyle t_{0}} to the greater value {\displaystyle t_{1}} the point {\displaystyle x,y} traverses in a positive direction the whole curve from the initial point to the end-point.
The surface area, inclosed by the curve, is expressed by the integral {\displaystyle 1)\qquad I^{(0)}={\frac {1}{2}}\int _{t_{0}}^{t_{1}}(xy'-yx'){\text{d}}t} and its perimeter by {\displaystyle 2)\qquad I^{(1)}=\int _{t_{0}}^{t_{1}}{\sqrt {x'^{2}+y'^{2}}}{\text{d}}t} The problem proposed consists in expressing {\displaystyle x} and {\displaystyle y} as functions of {\displaystyle t} in such a manner that the first integral shall have the greatest possible value, while at the same time the second integral retains a given value. It makes no difference where the origin of coordinates has been chosen; for by a transformation of the origin the second integral remains unchanged while the first integral is changed only by a constant. This does not alter the maximum property of the integral. One may also add other conditions; for example: That the curve go through a certain number of fixed points in a given order, or that it is to include certain portions of curve in a given order, etc. The curve will then contain portions along which the variation is not free. The function {\displaystyle F} is here {\displaystyle F={\frac {1}{2}}(xy'-yx')-\lambda {\sqrt {x'^{2}+y'^{2}}}} Instead of this function we may substitute another, since {\displaystyle {\frac {{\text{d}}(xy)}{{\text{d}}t}}=xy'+yx'} and consequently {\displaystyle {\frac {1}{2}}(xy'-yx')={\frac {1}{2}}{\frac {\text{d}}{{\text{d}}t}}(xy)-yx'} Now, if we integrate between the limits {\displaystyle t_{0}\ldots t_{1}}, the first term of the right-hand side of the above equation vanishes, since the endpoint and the initial-point of the curve coincide.
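Formula 1) for the inclosed surface area can be verified numerically: for the unit circle x = cos t, y = sin t, traversed once in the positive direction as t runs from 0 to 2π, the integral should give the area π. A sketch using a simple Riemann sum:

```python
import math

# Numerical check of formula 1): for a closed curve (x(t), y(t)),
# the inclosed area is (1/2) * integral of (x y' - y x') dt.
# For the unit circle the integrand is identically
# (1/2)(cos^2 t + sin^2 t) = 1/2, so the integral is pi.

N = 100_000
dt = 2 * math.pi / N
area = 0.0
for k in range(N):
    t = k * dt
    x, y = math.cos(t), math.sin(t)
    xp, yp = -math.sin(t), math.cos(t)   # x'(t), y'(t)
    area += 0.5 * (x * yp - y * xp) * dt

assert abs(area - math.pi) < 1e-6
```

The same sum applied to formula 2) would give the perimeter 2π; together they are the two functionals whose ratio the isoperimetrical problem extremizes.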
It follows, then, that {\displaystyle {\frac {1}{2}}\int _{t_{0}}^{t_{1}}(xy'-yx'){\text{d}}t=-\int _{t_{0}}^{t_{1}}yx'{\text{d}}t} We may consequently give the function {\displaystyle F} the form {\displaystyle 3)\qquad F=-x'y-\lambda {\sqrt {x'^{2}+y'^{2}}}} so that {\displaystyle {\frac {\partial F}{\partial x'}}=-y-{\frac {\lambda x'}{\sqrt {x'^{2}+y'^{2}}}}\qquad {\frac {\partial F}{\partial y'}}=-{\frac {\lambda y'}{\sqrt {x'^{2}+y'^{2}}}}} But since (Art. 187) {\displaystyle {\frac {\partial F}{\partial x'}}} and {\displaystyle {\frac {\partial F}{\partial y'}}} vary in a continuous manner along the portions of curve that vary freely, since also {\displaystyle \lambda } has the same constant value for the whole curve (Art. 185), and since the quantities that are multiplied by {\displaystyle \lambda } are nothing other than the direction-cosines of the tangent to the curve, it follows that the curve at every point, where the variation is free, changes its direction in a continuous manner. The function {\displaystyle F_{1}} has the value {\displaystyle 4)\qquad F_{1}={\frac {-\lambda }{({\sqrt {x'^{2}+y'^{2}}})^{3}}}} Since {\displaystyle F_{1}} does not change sign, and since a maximum is to enter and consequently {\displaystyle F_{1}} is to be continuously negative, it follows that {\displaystyle \lambda } must be a positive constant. In order to find the curve itself, we have to integrate the differential equation {\displaystyle G^{(0)}-\lambda G^{(1)}=0} . This equation is equivalent (Art. 79) to the two equations {\displaystyle {\frac {\text{d}}{{\text{d}}t}}{\frac {\partial F}{\partial x'}}-{\frac {\partial F}{\partial x}}=0\qquad {\frac {\text{d}}{{\text{d}}t}}{\frac {\partial F}{\partial y'}}-{\frac {\partial F}{\partial y}}=0} Since {\displaystyle F} does not contain {\displaystyle x} explicitly, the first of these equations gives {\displaystyle 5)\qquad {\frac {\partial F}{\partial x'}}=~{\text{constant}}~~~{\text{or}}~~~y+{\frac {\lambda x'}{\sqrt {x'^{2}+y'^{2}}}}=b} where {\displaystyle b} is an arbitrary constant.
Since {\displaystyle {\frac {\partial F}{\partial x'}}} varies in a continuous manner for a portion of curve where there is free variation, it follows that the constant {\displaystyle b} retains the same value throughout such a portion of curve. The curve may, however, consist of separate portions which are free to vary, and for these the constant {\displaystyle b} may have different values. If we take as the independent variable the arc of curve measured from the origin, we have from 5), {\displaystyle 6)\qquad {\frac {{\text{d}}x}{{\text{d}}s}}=-{\frac {1}{\lambda }}(y-b)} and consequently, since {\displaystyle \left({\frac {{\text{d}}x}{{\text{d}}s}}\right)^{2}+\left({\frac {{\text{d}}y}{{\text{d}}s}}\right)^{2}=1} we have {\displaystyle \left({\frac {{\text{d}}y}{{\text{d}}s}}\right)^{2}=1-{\frac {1}{\lambda ^{2}}}(y-b)^{2}} and {\displaystyle {\frac {{\text{d}}^{2}y}{{\text{d}}s^{2}}}=-{\frac {1}{\lambda ^{2}}}(y-b)={\frac {1}{\lambda }}{\frac {{\text{d}}x}{{\text{d}}s}}} It is seen at once, if we integrate the last equation, that {\displaystyle 7)\qquad {\frac {{\text{d}}y}{{\text{d}}s}}={\frac {1}{\lambda }}(x-a)} where {\displaystyle a} is an arbitrary constant; and consequently the equation of the curve is {\displaystyle 8)\qquad (x-a)^{2}+(y-b)^{2}=\lambda ^{2}} From the nature of the curve it is evident that {\displaystyle \lambda } is the radius of the circle. An immediate consequence is the theorem of Steiner, that those portions of the curve, which are free to vary, must be the arcs of equal circles. These circles may have different centers, since {\displaystyle a} and {\displaystyle b} are not determined. Each such arc of the circle may, however, lie on either side of the chord joining its two endpoints; we have, therefore, to ascertain which of the two arcs is the one required.
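The derivation above can be verified symbolically. The sketch below (an addition, not part of the original) checks that the circle parametrization x = a + λ cos((s−s₀)/λ), y = b + λ sin((s−s₀)/λ) satisfies equations 6), 7), and 8):

```python
import sympy as sp

s, s0, a, b = sp.symbols('s s_0 a b', real=True)
lam = sp.symbols('lambda', positive=True)

# Arc-length parametrization of the circle of equation 8).
x = a + lam * sp.cos((s - s0) / lam)
y = b + lam * sp.sin((s - s0) / lam)

# Equation 6):  dx/ds = -(1/lambda)*(y - b); the residual should vanish.
eq6 = sp.simplify(sp.diff(x, s) + (y - b) / lam)

# Equation 7):  dy/ds = (1/lambda)*(x - a).
eq7 = sp.simplify(sp.diff(y, s) - (x - a) / lam)

# Equation 8):  (x - a)**2 + (y - b)**2 = lambda**2.
eq8 = sp.simplify((x - a)**2 + (y - b)**2 - lam**2)

print(eq6, eq7, eq8)  # all three reduce to 0
```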
The solutions of the differential equation are {\displaystyle x-a=\lambda \cos {\frac {s-s_{0}}{\lambda }}=\lambda \cos t\qquad y-b=\lambda \sin {\frac {s-s_{0}}{\lambda }}=\lambda \sin t} as is seen from equations 6) and 7), when differentiated. Since {\displaystyle \lambda } is positive, {\displaystyle s} and {\displaystyle t} increase together, and since with increasing {\displaystyle t} the curve is traversed in the positive direction, we must take that arc for which this is also true. Let {\displaystyle C} be the center of the circle, {\displaystyle A_{1}} the initial-point, and {\displaystyle A_{2}} the end-point of the arc. That arc will be the right one which lies on the positive side of {\displaystyle CA_{1}} , that is, on the side of the increasing {\displaystyle t} 's. For if {\displaystyle t_{1}} is the angle which the radius {\displaystyle CA_{1}} makes with the {\displaystyle X} -axis, and {\displaystyle x_{1},y_{1}} are the coordinates of the point {\displaystyle A_{1}} , then {\displaystyle \cos t_{1}={\frac {1}{\lambda }}(x_{1}-a)\qquad \sin t_{1}={\frac {1}{\lambda }}(y_{1}-b)} and further the angle, which the tangent {\displaystyle A_{1}B_{1}} drawn to the arc at the point {\displaystyle A_{1}} includes with the {\displaystyle X} -axis, is {\displaystyle t_{1}+{\frac {\pi }{2}}} . Consequently we have {\displaystyle \cos \left(t_{1}+{\frac {\pi }{2}}\right)=-\sin t_{1}=-{\frac {1}{\lambda }}(y_{1}-b)=\left({\frac {{\text{d}}x}{{\text{d}}s}}\right)_{1}} {\displaystyle \sin \left(t_{1}+{\frac {\pi }{2}}\right)=\cos t_{1}={\frac {1}{\lambda }}(x_{1}-a)=\left({\frac {{\text{d}}y}{{\text{d}}s}}\right)_{1}} formulae which have the right signs. This would not be true if we took the other arc and also the tangent which is drawn in the other direction. Hence that arc is always to be taken which, looking out from the center, is traversed in the positive direction.
If no conditions are imposed upon the curve and it is required to find among all isoperimetrical lines that one which offers the greatest surface area, then the question is not of an absolute maximum, since the curve may be shoved anywhere in the plane without an alteration in its shape. The problem may be stated more accurately by saying that the integral which represents the surface area is not to admit of a positive increment, when all possible variations are introduced. The problem thus formulated leads to exactly the same necessary conditions as before, namely, that the first variation is to vanish, and consequently we have the same differential equation to solve. We have also the same condition for {\displaystyle \lambda } . Since the second variation can never be positive, and consequently {\displaystyle F_{1}} can not change its sign, we conclude as above that {\displaystyle \lambda } is positive. Since the whole curve is free to vary and since {\displaystyle {\frac {\partial F}{\partial x'}}} and {\displaystyle {\frac {\partial F}{\partial y'}}} are continuous functions for the whole trace, the constants {\displaystyle a} and {\displaystyle b} are the same for the whole curve; however, they remain undetermined. We have, consequently, the following result: If there exists a closed curve which with a given periphery includes the greatest surface area, this curve is a circle. However, it has not as yet been proved that this property belongs to the circle. The treatment of the second variation is not sufficient, since only such variations have been employed where the distance between two corresponding points, and also the difference in direction at these points, do not exceed certain limits. The further proof has to be made that every other curve forms the boundary of a smaller surface area.
The proof that the circle has this maximum property, ( a proof which is omitted in all previous solutions of the problem), has been considered so difficult that its solution has been denied to be in the province of the Calculus of Variations. We shall, however, in the next Chapter show that in the theorems already treated a means of overcoming this difficulty is offered. It will be seen that without the use of the second variation the desired result is reached in all cases where the function {\displaystyle F_{1}} does not change sign, not only at any point of the curve but also for any direction at any point. Retrieved from "https://en.wikibooks.org/w/index.php?title=Calculus_of_Variations/CHAPTER_XIV&oldid=3460712"
Multivariate t Distribution - MATLAB & Simulink - MathWorks Benelux Plot PDF and CDF of Multivariate t-Distribution The probability density function of the d-dimensional multivariate Student's t distribution is given by f\left(x,\Sigma ,\nu \right)=\frac{1}{{|\Sigma |}^{1/2}}\frac{1}{\sqrt{{\left(\nu \pi \right)}^{d}}}\frac{\Gamma \left(\left(\nu +d\right)/2\right)}{\Gamma \left(\nu /2\right)}{\left(1+\frac{{x}^{\prime }\text{ }{\Sigma }^{-1}x}{\nu }\right)}^{-\left(\nu +d\right)/2}. where x is a 1-by-d vector, Σ is a d-by-d symmetric, positive definite matrix, and ν is a positive scalar. While it is possible to define the multivariate Student's t for singular Σ, the density cannot be written as above. For the singular case, only random number generation is supported. Note that while most textbooks define the multivariate Student's t with x oriented as a column vector, for the purposes of data analysis software, it is more convenient to orient x as a row vector, and Statistics and Machine Learning Toolbox™ software uses that orientation. The multivariate Student's t distribution is a generalization of the univariate Student's t to two or more variables. It is a distribution for random vectors of correlated variables, each element of which has a univariate Student's t distribution. In the same way as the univariate Student's t distribution can be constructed by dividing a standard univariate normal random variable by the square root of a univariate chi-square random variable, the multivariate Student's t distribution can be constructed by dividing a multivariate normal random vector having zero mean and unit variances by the square root of a univariate chi-square random variable. The multivariate Student's t distribution is parameterized with a correlation matrix, Σ, and a positive scalar degrees of freedom parameter, ν. ν is analogous to the degrees of freedom parameter of a univariate Student's t distribution.
The off-diagonal elements of Σ contain the correlations between variables. Note that when Σ is the identity matrix, variables are uncorrelated; however, they are not independent. The multivariate Student's t distribution is often used as a substitute for the multivariate normal distribution in situations where it is known that the marginal distributions of the individual variables have fatter tails than the normal. Plot the pdf of a bivariate Student's t distribution. You can use this distribution for a higher number of dimensions as well, although visualization is not easy.
Rho = [1 .6; .6 1];  % example correlation matrix
nu = 5;              % example degrees of freedom
x1 = -3:.2:3; x2 = -3:.2:3;
[X1,X2] = meshgrid(x1,x2);
F = mvtpdf([X1(:) X2(:)],Rho,nu);
F = reshape(F,length(x2),length(x1));
surf(x1,x2,F);
caxis([min(F(:))-.5*range(F(:)),max(F(:))]);
axis([-3 3 -3 3 0 .2])
xlabel('x1'); ylabel('x2'); zlabel('Probability Density');
Plot the cdf of a bivariate Student's t distribution.
F = mvtcdf([X1(:) X2(:)],Rho,nu);
F = reshape(F,length(x2),length(x1));
surf(x1,x2,F);
axis([-3 3 -3 3 0 1])
xlabel('x1'); ylabel('x2'); zlabel('Cumulative Probability');
Since the bivariate Student's t distribution is defined on the plane, you can also compute cumulative probabilities over rectangular regions. For example, this contour plot illustrates the computation that follows, of the probability contained within the unit square shown in the figure.
contour(x1,x2,F,[.0001 .001 .01 .05:.1:.95 .99 .999 .9999]);
line([0 0 1 1 0],[1 0 0 1 1],'linestyle','--','color','k');
Compute the value of the probability contained within the unit square.
F = mvtcdf([0 0],[1 1],Rho,nu)
Computing a multivariate cumulative probability requires significantly more work than computing a univariate probability. By default, the mvtcdf function computes values to less than full machine precision and returns an estimate of the error, as an optional second output.
[F,err] = mvtcdf([0 0],[1 1],Rho,nu)
mvtcdf | mvtpdf | mvtrnd
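For readers outside MATLAB, the density formula above is straightforward to transcribe. The sketch below implements it directly with NumPy; the helper name mvt_pdf is ours, and for production use SciPy's scipy.stats.multivariate_t is the natural choice:

```python
import math
import numpy as np

def mvt_pdf(x, Sigma, nu):
    """Density of the d-dimensional multivariate Student's t at row vector x.

    Direct transcription of the formula above; Sigma must be symmetric
    positive definite. Illustrative sketch only.
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))
    Sigma = np.atleast_2d(np.asarray(Sigma, dtype=float))
    d = x.size
    quad = x @ np.linalg.solve(Sigma, x)          # x' * inv(Sigma) * x
    norm = (math.gamma((nu + d) / 2)
            / (math.gamma(nu / 2)
               * math.sqrt(np.linalg.det(Sigma))
               * math.sqrt((nu * math.pi) ** d)))
    return norm * (1 + quad / nu) ** (-(nu + d) / 2)

# With d = 1 and Sigma = 1 this reduces to the univariate Student's t pdf.
print(round(mvt_pdf([0.0], [[1.0]], 5), 4))  # 0.3796
```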
===== <span id="para" style="color:red">para-</span>: Parameterized Complexity =====
The prototypical example (as well as the violation of the naming convention) is para-P, which is almost always known as [[Complexity Zoo:FPT#fpt|FPT]], and which is equal to DTIME(f(k)n^c) for some computable function f and constant c. Space-parameterized examples include [[#para-L|para-L]] and [[#para-NL|para-NL]], which are equal to DSPACE(f(k)+log(n)) and NDSPACE(f(k)+log(n)), respectively. Compare with the slicewise complexity classes [[Complexity Zoo:X#x|X-]], such as [[Complexity Zoo:X#xp|XP]].
J. Flum and M. Grohe. Describing parameterized complexity classes. Information and Computation.
M. Elberfeld, C. Stockhusen, and T. Tantau (2012). On the Space Complexity of Parameterized Problems.
===== <span id="paral" style="color:red">para-L</span>: Parameterized Logspace =====
[[#para|para-]] version of [[Complexity Zoo:L#l|L]]. Equivalent to DSPACE(f(k)+log(n)) for some computable function f. Compare with slicewise parameterized logspace, [[Complexity Zoo:X#xl|XL]].
===== <span id="paranl" style="color:red">para-NL</span>: Parameterized Nondeterministic Logspace =====
[[#para|para-]] version of [[Complexity Zoo:N#nl|NL]]. Equivalent to NDSPACE(f(k)+log(n)) for some computable function f. Compare with slicewise parameterized nondeterministic logspace, [[Complexity Zoo:X#xnl|XNL]]. It seems open whether there are natural complete problems for para-NL. However, the related class [[#paranlflog|para-NL[f log]]] has many natural complete problems.
===== <span id="paranlflog" style="color:red">para-NL[f log]</span>: Parameterized Nondeterministic Logspace with Bounded Nondeterminism =====
Like [[#paranl|para-NL]], but where the number of nondeterministic branches is bounded by O(f(k) log(n)).
para-NL[f log] is contained within [[Complexity Zoo:X#xl|XL]], slicewise logspace.
===== <span id="parap" style="color:red">para-P</span>: Parameterized Polynomial Time =====
para-P is a less common name for [[Complexity Zoo:F#fpt|FPT]], but one in line with the naming convention of the other [[#para|para-]] classes. Its slicewise counterpart is still called [[Complexity Zoo:X#xp|XP]].
Newman–Shanks–Williams prime - Wikipedia Not to be confused with Williams prime. In mathematics, a Newman–Shanks–Williams prime (NSW prime) is a prime number p which can be written in the form {\displaystyle S_{2m+1}={\frac {\left(1+{\sqrt {2}}\right)^{2m+1}+\left(1-{\sqrt {2}}\right)^{2m+1}}{2}}.} NSW primes were first described by Morris Newman, Daniel Shanks and Hugh C. Williams in 1981 during the study of finite simple groups with square order. The first few NSW primes are 7, 41, 239, 9369319, 63018038201, … (sequence A088165 in the OEIS), corresponding to the indices 3, 5, 7, 19, 29, … (sequence A005850 in the OEIS). The sequence S alluded to in the formula can be described by the following recurrence relation: {\displaystyle S_{0}=1\,} {\displaystyle S_{1}=1\,} {\displaystyle S_{n}=2S_{n-1}+S_{n-2}\qquad {\text{for all }}n\geq 2.} The first few terms of the sequence are 1, 1, 3, 7, 17, 41, 99, … (sequence A001333 in the OEIS). Each term in this sequence is half the corresponding term in the sequence of companion Pell numbers. These numbers also appear in the continued fraction convergents to √2. Newman, M.; Shanks, D. & Williams, H. C. (1980). "Simple groups of square order and an interesting sequence of primes". Acta Arithmetica. 38 (2): 129–140. doi:10.4064/aa-38-2-129-140. The Prime Glossary: NSW number Retrieved from "https://en.wikipedia.org/w/index.php?title=Newman–Shanks–Williams_prime&oldid=1037219861"
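The recurrence makes the search easy to script. The following sketch (ours, not from the article) generates S_n and reports the NSW primes; the trial-division primality test is adequate only for the small terms checked here:

```python
def is_prime(n):
    """Simple trial division; adequate for the small terms below."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def nsw_primes(limit_index):
    """Yield (index, S_index) for NSW primes up to the given index, using
    S_0 = S_1 = 1 and S_n = 2*S_{n-1} + S_{n-2}; an NSW prime is a prime
    of the form S_{2m+1}."""
    s_prev, s_curr = 1, 1      # S_0, S_1
    for n in range(2, limit_index + 1):
        s_prev, s_curr = s_curr, 2 * s_curr + s_prev
        if n % 2 == 1 and is_prime(s_curr):
            yield n, s_curr

print(list(nsw_primes(29)))
# [(3, 7), (5, 41), (7, 239), (19, 9369319), (29, 63018038201)]
```

The indices 3, 5, 7, 19, 29 and the primes 7, 41, 239, 9369319, 63018038201 match the two OEIS sequences cited above.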
Nash Equilibrium | Brilliant Math & Science Wiki
Alexander Katz, Christopher Williams, and Adam Strandberg contributed
A Nash Equilibrium is a set of strategies that players act out, with the property that no player benefits from changing their strategy. Intuitively, this means that if any given player were told the strategies of all their opponents, they still would choose to retain their original strategy. For example, in the game of trying to guess 2/3 of the average guesses, the unique Nash equilibrium is (counterintuitively) for all players to choose 0. Nash equilibria are useful in analyzing the result of competitive scenarios, especially as applied to conflict such as war. For similar reasons, they are often used in analyzing economic factors, such as markets, currencies, and auctions. They are also used to enforce cooperation through self-interest, by contriving the relative rewards of actions in such a way that each actor would independently choose the desired action; a famous example is the prisoner's dilemma problem. Let S_i denote the set of strategies for the i th player, and let S=S_1 \times S_2 \times \ldots \times S_n denote the set of strategy profiles. This means that the elements of S are all possible combinations of individual strategies. Let f_i(s) denote the payoff to player i when evaluated at strategy profile s \in S ; note that the payoff to an individual player depends on the strategies of the other players as well. An individual mixed strategy is a probability distribution on the set of available strategies. For example, selecting one of "rock", "paper", or "scissors" uniformly at random is an example of a mixed strategy. There can also be a choice of weighting so that strategies are picked with different probabilities. A pure strategy is one that does not involve randomization at all, instead choosing a particular strategy all of the time.
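A mixed strategy and its expected payoff can be made concrete with the rock-paper-scissors example just mentioned. The sketch below (an illustration of ours, not from the article) represents mixed strategies as probability dictionaries and shows that uniform mixing earns expected payoff 0 against any opposing strategy:

```python
from itertools import product

# Payoff to player 1 in rock-paper-scissors: 1 = win, -1 = loss, 0 = draw.
MOVES = ["rock", "paper", "scissors"]
BEATS = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}

def payoff(a, b):
    if a == b:
        return 0
    return 1 if (a, b) in BEATS else -1

def expected_payoff(p, q):
    """Expected payoff to player 1 when the players use mixed strategies
    p and q (dicts mapping each move to its probability)."""
    return sum(p[a] * q[b] * payoff(a, b) for a, b in product(MOVES, MOVES))

uniform = {m: 1 / 3 for m in MOVES}
pure_rock = {"rock": 1.0, "paper": 0.0, "scissors": 0.0}

# Uniform mixing leaves the opponent indifferent: expected payoff 0
# against a pure strategy and against itself.
print(expected_payoff(uniform, pure_rock))
print(expected_payoff(uniform, uniform))
```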
Note that, importantly, a pure strategy is simply a special case of a mixed strategy, in which one strategy is chosen 100% of the time. This means that the same methods used to calculate mixed strategies are equally useful in detecting pure strategies. A Nash equilibrium is a strategy profile s=(s_1, s_2, \ldots, s_n) such that f_i(s) \geq f_i((s_1, s_2, \ldots, s_i', \ldots, s_n)) for every player i , where s_i' \in S_i denotes a strategy other than s_i available to player i . In the event that this inequality is strict for every i and every s_i' , i.e. f_i(s) > f_i((s_1, s_2, \ldots, s_i', \ldots, s_n)) , the profile s is called a strict Nash equilibrium. Otherwise, s is called a weak Nash equilibrium. Nash's existence theorem guarantees that as long as S_i is finite for all i and there are a finite number of players, at least one Nash equilibrium exists (possibly involving mixed strategies). Amusingly, this result was dismissed by John von Neumann with the response "That's trivial, you know. That's just a fixed point theorem", a year before Nash's publication and 40 years before Nash won the Nobel prize for (in part) this result. Indeed, Nash's existence theorem is a consequence of the Brouwer fixed point theorem (or equivalently Kakutani's fixed point theorem), which is a more general statement in algebraic topology. The simplest example of Nash equilibrium is the coordination game, in which both players benefit from coordinating but may also hold individual preferences. For instance, suppose two friends wish to plan an evening around either partying or watching a movie. Both friends prefer to engage in the same activity, but one prefers partying to movies by a factor of 2, and the other prefers movies to partying by the same ratio. This can be modeled by the following payoff matrix:
        Party   Movie
Party    2,1     0,0
Movie    0,0     1,2
where the payoff vector is listed under the appropriate strategy profile (the first player's strategies are listed on the left).
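The coordination matrix above can be checked mechanically: a profile is a pure Nash equilibrium exactly when neither player has a profitable unilateral deviation. The helper below is our illustration (the prisoner's-dilemma payoffs in the second example are invented for the sketch):

```python
def pure_nash_equilibria(payoffs):
    """Return all pure-strategy Nash equilibria of a two-player game.

    payoffs[i][j] = (payoff to player 1, payoff to player 2) when player 1
    plays row i and player 2 plays column j.
    """
    rows, cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            u1, u2 = payoffs[i][j]
            # Player 1 deviates by changing the row, player 2 the column.
            if (all(payoffs[k][j][0] <= u1 for k in range(rows)) and
                    all(payoffs[i][k][1] <= u2 for k in range(cols))):
                equilibria.append((i, j))
    return equilibria

# Coordination game above: index 0 = Party, 1 = Movie.
coordination = [[(2, 1), (0, 0)],
                [(0, 0), (1, 2)]]
print(pure_nash_equilibria(coordination))  # [(0, 0), (1, 1)]

# Prisoner's dilemma (illustrative payoffs, higher = less jail time):
# index 0 = Cooperate, 1 = Defect; only mutual defection survives.
dilemma = [[(2, 2), (0, 3)],
           [(3, 0), (1, 1)]]
print(pure_nash_equilibria(dilemma))  # [(1, 1)]
```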
In this case, both {Party, Party} and {Movie, Movie} are Nash equilibria, as neither side would choose to deviate when informed of the other's choice. The most famous example of Nash equilibrium, however, is the Prisoner's dilemma problem, in which each of two prisoners have the choice of "cooperating" with the other prisoner by keeping quiet, or "defecting" by confessing. If both prisoners cooperate, they will face little jail time, but if exactly one of them defects, the defector will immediately go free and the cooperator will face lots of jail time. The catch is that if both prisoners choose to defect, they will both face a moderate amount of jail time. This can be modeled by a payoff matrix in which lower jail sentences correspond to higher payoffs. In this scenario, there is exactly one Nash equilibrium: both players choose to defect -- in any other case, a cooperating prisoner would choose instead to defect. This is despite the fact that both prisoners would improve their situation by both cooperating, meaning that the Nash equilibrium is globally inferior to the "both cooperate" strategy. The practical application of this is clear: by contriving the values appropriately, authorities can make it optimal for a suspect to confess rather than cooperate with their accessories. Generally, finding a "pure" Nash equilibrium (in which no randomization occurs) is fairly easy, as verifying one only requires comparing a small number of potential payoffs. For instance, consider a game with the following payoff matrix: In this game, each player has three strategies to choose from, and the first player earns the value in the corresponding cell. His goal is to maximize his score, while the second player's goal is to minimize it. A Nash equilibrium of this game occurs when neither player has any incentive to change their strategy, even if they know their opponent's strategy.
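For a zero-sum game of this shape, a pure equilibrium is exactly a saddle point: a cell that is the minimum of its row (so the minimizing column player will not deviate) and the maximum of its column (so the maximizing row player will not deviate). Since the article's matrix is not reproduced here, the sketch below uses a hypothetical 3x3 matrix of our own with the same feature, a unique equilibrium of value 1 at row 2, column 3:

```python
def saddle_points(A):
    """Pure Nash equilibria of a zero-sum matrix game in which the row
    player (maximizing) earns A[i][j] and the column player minimizes it:
    the cell must be the minimum of its row and the maximum of its column."""
    points = []
    for i, row in enumerate(A):
        for j, v in enumerate(row):
            if v == min(row) and v == max(A[k][j] for k in range(len(A))):
                points.append((i, j))
    return points

# Hypothetical payoff matrix (not the article's) with a unique saddle point.
A = [[3,  2,  0],
     [7,  5,  1],
     [4, -1, -2]]
print(saddle_points(A))  # [(1, 2)] -- row 2, column 3 in 1-indexed terms
```

At a saddle point the maximin and minimax values coincide, which is why neither player can improve by deviating.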
This means that for a cell to represent a (pure) Nash equilibrium, it must be the minimum of its row and the maximum of its column, as this is the only way neither player would choose to change their strategy. In the above game, the unique pure equilibrium is player 1 choosing strategy 2 and player 2 choosing strategy 3, as neither player wishes to deviate from the resulting payoff of 1. Of course, a "pure" Nash equilibrium is a special case of a mixed strategy (where one strategy is chosen with probability 1), so the more general approach below is universally valid. In the case of mixed strategies, the situation becomes slightly more complex, and often involves optimization strategies such as the rearrangement inequality. For instance, consider the following game: each player can show either one or two fingers, and: If an odd number of fingers are showing, the first player scores the number of shown fingers. If an even number of fingers are showing, the second player scores the number of shown fingers. This corresponds to the payoff matrix (from player 1's perspective)
              P2 shows 1   P2 shows 2
P1 shows 1        -2            3
P1 shows 2         3           -4
It is immediately evident that this game has no pure equilibrium (as either player would choose to switch their move if losing), so an analysis of mixed strategies is necessary. Surprisingly, the Nash equilibrium in this game favors the first player, despite the apparent symmetry of the problem. To find the (or a) Nash equilibrium of the game, assume that the Nash equilibrium consists of the first player choosing 1 with probability p (and 2 with probability 1-p ), and the second player choosing 1 with probability q . Note that Nash's theorem guarantees that at least one Nash equilibrium exists, so this step is valid. Now, player 1's expected payoff is (0-2) \cdot p \cdot q + (3-0) \cdot p \cdot (1-q) + (3-0) \cdot (1-p) \cdot q + (0-4) \cdot (1-p) \cdot (1-q) = -12pq+7p+7q-4 Since this is a Nash equilibrium, player 1 would not choose to adjust p .
But the payoff can be written as p(7-12q)+7q-4 . If q>\frac{7}{12} , player 1 would wish to minimize p (set p=0 ), and if q<\frac{7}{12} , player 1 would wish to maximize p (set p=1 ). This implies that at the Nash equilibrium point, q=\frac{7}{12} . Analogously, player 2's expected payoff is (2-0) \cdot p \cdot q + (0-3) \cdot p \cdot (1-q) + (0-3) \cdot (1-p) \cdot q + (4-0) \cdot (1-p) \cdot (1-q) = 12pq-7p-7q+4 which is precisely as expected, considering the sum of the expected payoffs should be zero (this is a zero-sum game). Thus, by analogous reasoning, p=\frac{7}{12} at the Nash equilibrium point. Thus, at the Nash equilibrium point, player 1's expected utility is positive, namely \frac{1}{12} . This means that the game is inherently unfair; by choosing 1 with probability \frac{7}{12} , player 1 guarantees an expected payoff of at least \frac{1}{12} (player 2 chooses the same strategy in order to minimize player 1's expected payoff). Generally speaking, both players adopted the same general strategy: calculate the expected payoff of the other player as a function of the probability distributions, then adjust theirs to "cancel out" the other's. Another way of viewing Nash's theorem is to note that since the expected payoff is linear in each variable, this process results in a system of linear equations that always has at least one solution. Alice and Bob are playing 900 games of Rock-Paper-Scissors, but Alice is not allowed to choose scissors in any of the games. If both players choose their strategies optimally (i.e. Nash equilibrium is reached), what is the expected number of games Bob will win? "Optimally" means that both players want to maximize the difference between the number of games they win and the number of games the opponent wins. Nash equilibrium requires several conditions to hold in order to apply: All players are interested only in maximizing their own expected payoff, and will act accordingly. All players execute their strategies perfectly.
All players are infinitely intelligent, or at least intelligent enough to determine the solution. Every player knows (or can deduce) the planned equilibrium strategy of all other players, and know that changing their own strategy will not result in other players changing their strategy. All of this is common knowledge, meaning that every player knows every other player satisfies the above four conditions. In practice, it is rare for all these conditions to be directly satisfied. For instance, A prisoner in the prisoner's dilemma may face other considerations; for example, a prisoner expecting retribution for defecting would be facing far less of a dilemma. A player may accidentally (or intentionally) execute a strategy imperfectly, which could potentially lead to a loss, but could also potentially lead to a win due to invalidation of the common knowledge criterion. A player may not be sufficiently intelligent to work out the solution; for instance, a young child playing tic-tac-toe would not necessarily be able to deduce the optimal play. Players may believe, rightly or wrongly, that their fellow players will not be perfectly rational. This is a major concern, for example, in arms races -- particularly, as in recent times, nuclear ones. For this reason, most practical situations are not modeled particularly well by Nash equilibrium. The concept is mostly useful for explaining trends in economics and evolutionary biology, in which strategies that do not maximize utility (e.g. money in economics, or survival in biology) are rejected through natural competition. Indeed, research in these fields tends to support the theory that the system tends to its Nash equilibrium. Cite as: Nash Equilibrium. Brilliant.org. Retrieved from https://brilliant.org/wiki/nash-equilibrium/
SearchFrobeniusGroups - Maple Help Home : Support : Online Help : Mathematics : Group Theory : SearchFrobeniusGroups search for Frobenius groups satisfying specified properties SearchFrobeniusGroups(spec, formopt) The SearchFrobeniusGroups( spec ) command searches Maple's database of Frobenius groups for groups satisfying properties specified in a sequence spec of search parameters. The valid search parameters may be grouped into several classes, as described in the following sections. Use the form = X option to control the form of the output from this command. By default, an expression sequence of IDs for the FrobeniusGroups database is returned. This is the same as specifying form = "id". To have an expression sequence of groups, either permutation groups, or finitely presented groups, use either the form = "permgroup" or form = "fpgroup" options, respectively. Finally, the form = "count" option causes SearchFrobeniusGroups to return just the number of groups in the database satisfying the constraints implied by the search parameters. Note that the IDs returned in the default case are the IDs of the groups within the FrobeniusGroups database. These may differ from the IDs for the same group if it happens to be present in another database, such as the SmallGroups database, which has its own set of group IDs. Note further that IDs returned by SearchFrobeniusGroups are limited to those actually present in the database. In particular, they are limited by the maximum group order and by the order exclusions documented in FrobeniusGroup. Boolean search parameters p, such as supersoluble, can be specified in one of the forms p = true, p = false, or just p (which is equivalent to p = true). If the boolean search parameter p is true, then only groups satisfying the corresponding predicate are returned. If the boolean search parameter p is false, then only groups that do not satisfy the predicate are returned. 
Leaving a boolean search parameter unspecified causes the SearchFrobeniusGroups command to return groups that do, and do not, satisfy the corresponding predicate. abeliancomplement describes groups with an Abelian Frobenius complement abeliankernel describes groups with an Abelian Frobenius kernel cycliccomplement describes groups with a cyclic Frobenius complement cyclickernel describes groups with a cyclic Frobenius kernel elementarykernel describes groups with an elementary abelian Frobenius kernel homocyclickernel describes groups with a homocyclic Frobenius kernel nilpotentcomplement describes groups with a nilpotent Frobenius complement describes the class of groups with an ordered Sylow tower describes the class of groups with a Sylow tower (of any complexion) Maple supports search parameters that describe numeric invariants of finite groups. All have positive integral values. A numeric search parameter p may be given in the form p = n, for some specific value n, or by indicating a range, as in p = a .. b. In the former case, only groups for which the numeric parameter has the value n will be returned. In the case in which a range is specified, groups for which the numeric invariant lies within the indicated range (inclusive of its end-points) are returned. In addition, inequalities of the form p < n (p > n) or p <= n (p >= n) are supported. indicates the largest order of an element of the group indicates the length of the Frattini series of the group kernel_nilpclass indicates the nilpotency class of the Frobenius kernel indicates the permutation group rank (number of sub-orbits) of the group nsylow[ p ] indicates the number of Sylow p-subgroups of the group Subgroup and Quotient Search Parameters A subgroup of a Frobenius group is typically not a Frobenius group. (It may be in some cases, of course.) Therefore, subgroups of Frobenius groups are indicated by using their ID from the database of small groups. 
In some cases, only the order of the subgroup is stored, since the subgroup is larger than any group in the SmallGroups database. Several subgroup search parameters are supported. These describe the isomorphism type of various subgroups of a group by specifying the Small Group ID (as returned by the IdentifySmallGroup command), or just the order of the group if it is too large to have a SmallGroups database ID. specifies the SmallGroup ID (or order) of the Frobenius complement specifies the SmallGroup ID (or order) of the derived subgroup specifies the SmallGroup ID (or order) of the derived quotient specifies the SmallGroup ID (or order) of the Frobenius kernel sylow[ p ] specifies the SmallGroup ID (or order) of the Sylow p-subgroup It is important to understand that the option values for subgroups are the IDs within the small groups database, while the IDs returned by the SearchFrobeniusGroups command are the IDs of groups within the FrobeniusGroups database. \mathrm{with}⁡\left(\mathrm{GroupTheory}\right): The following command places no restrictions on the groups being queried, so it just returns the total number of Frobenius groups in the database because of the form = "count" option. \mathrm{SearchFrobeniusGroups}⁡\left('\mathrm{form}'="count"\right) \textcolor[rgb]{0,0,1}{9034} What are the Frobenius groups of order 100? \mathrm{SearchFrobeniusGroups}⁡\left('\mathrm{order}'=100\right) [\textcolor[rgb]{0,0,1}{100}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{100}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{100}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}] Let's check that each of these has an abelian Frobenius kernel.
IsAbelian( FrobeniusKernel( FrobeniusGroup( 100, 1 ) ) );
        true

IsAbelian( FrobeniusKernel( FrobeniusGroup( 100, 2 ) ) );
        true

IsAbelian( FrobeniusKernel( FrobeniusGroup( 100, 3 ) ) );
        true

However, not all have a cyclic Frobenius kernel.

IsCyclic( FrobeniusKernel( FrobeniusGroup( 100, 1 ) ) );
        true

IsCyclic( FrobeniusKernel( FrobeniusGroup( 100, 2 ) ) );
        false

IsCyclic( FrobeniusKernel( FrobeniusGroup( 100, 3 ) ) );
        false

Let's see what other Frobenius groups have a cyclic kernel of order 25.

SearchFrobeniusGroups( kernel = 25, 'cyclickernel' );
        [50, 1], [100, 1]

We get the same result if we further specify that the Frobenius complement be nilpotent.
SearchFrobeniusGroups( kernel = 25, 'cyclickernel', 'nilpotentcomplement' );
        [50, 1], [100, 1]

The following command counts the number of Frobenius groups of order at most 1000 with a Frobenius complement of order 4.

SearchFrobeniusGroups( 'order' <= 1000, 'complement' = 4, 'form' = "count" );
        59

Find the doubly transitive Frobenius groups in the database with a homocyclic Frobenius kernel and order greater than 10000.

SearchFrobeniusGroups( 10000 < 'order', 1 < 'transitivity', 'homocyclickernel' );
        [10100, 1], [10506, 1], [11342, 1], [11772, 1], [12656, 1], [14520, 1], [14520, 2], [14520, 3], [14520, 4]

Count the Frobenius groups with rank equal to 7.

SearchFrobeniusGroups( 'rank' = 7, 'form' = "count" );
        39

The GroupTheory[SearchFrobeniusGroups] command was introduced in Maple 2019.
The GroupTheory[SearchFrobeniusGroups] command was updated in Maple 2021.

See also: GroupTheory[FrobeniusKernel]
On the Dynamics of the Electron (June) - Wikisource, the free online library

In French: Sur la dynamique de l’électron, Comptes Rendus de l’Académie des Sciences, t. 140, p. 1504–1508

Session of June 5, 1905.

ELECTRICITY. —— On the dynamics of the electron. Note of M. H. POINCARÉ.

It seems at first sight that the aberration of light and the associated optical phenomena would provide a means of determining the absolute motion of the Earth, or rather its motion, not in relation to the other stars, but in relation to the ether. This is not the case: the experiments in which only the first order of the aberration is taken into account were at first unsuccessful, and an explanation was easily found; but Michelson, who devised an experiment in which terms depending on the square of the aberration could be measured, had no better luck. It seems that this inability to demonstrate absolute motion is a general law of nature.

An explanation was proposed by Lorentz, who introduced the hypothesis of a contraction of all bodies in the direction of the motion of the Earth; this contraction would account for the Michelson-Morley experiment and for all those that have been conducted to date, but it leaves room for other experiments, even more delicate and more easily conceived than executed, which might demonstrate the absolute motion of the Earth. But if the impossibility of such a finding is considered highly probable, one may predict that these experiments, should they ever be conducted, will give a negative result. Lorentz has sought to supplement and amend his hypothesis so as to bring it into accord with the postulate of the complete impossibility of determining absolute motion. This he managed to do in his article entitled Electromagnetic phenomena in a system moving with any velocity smaller than that of Light (Proceedings de l’Académie d’Amsterdam, May 27, 1904).
The importance of this question made me determined to return to it; the results I obtained are in agreement on all important points with those of Lorentz; I was only led to modify and complete them in a few points of detail.

The essential point, established by Lorentz, is that the equations of the electromagnetic field are not altered by a certain transformation (which I shall call by the name of Lorentz), which has the following form:

x' = kl(x + \epsilon t), \quad y' = ly, \quad z' = lz, \quad t' = kl(t + \epsilon x),

where x, y, z are the coordinates and t the time before the transformation, and x', y', z' and t' the same quantities after the transformation. Moreover, \epsilon is a constant which defines the transformation,

k = \frac{1}{\sqrt{1-\epsilon^{2}}},

and l is an arbitrary function of \epsilon. One sees that in this transformation the x-axis plays a particular role, but one can obviously construct a transformation in which this role is played by any straight line through the origin. The set of all these transformations, together with the set of all rotations of space, must form a group; but for this to be so, we must have l = 1; one is thus forced to suppose l = 1, and this is a consequence that Lorentz obtained by another route.
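The closure argument in the last paragraph can be checked numerically. The sketch below is not part of Poincaré's note: the function names are mine, units are chosen so that the speed of light is 1, and l = 1 is fixed throughout. Composing two transformations of the stated form then yields another transformation of the same form, with parameter (e1 + e2)/(1 + e1 e2):

```python
from math import sqrt

def lorentz(eps):
    """2x2 matrix of x' = k(x + eps*t), t' = k(t + eps*x),
    with l = 1 and k = 1/sqrt(1 - eps**2), acting on the column vector (x, t)."""
    k = 1.0 / sqrt(1.0 - eps * eps)
    return [[k, k * eps],
            [k * eps, k]]

def compose(a, b):
    """Product of two 2x2 matrices (apply b, then a)."""
    return [[sum(a[i][m] * b[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

# Closure under composition: the product of the transformations with
# parameters e1 and e2 equals the single transformation with parameter
# (e1 + e2) / (1 + e1*e2).  With a constant l other than 1, the prefactor
# would come out as l**2 instead of l, and the set would not close.
e1, e2 = 0.3, 0.5
product = compose(lorentz(e1), lorentz(e2))
single = lorentz((e1 + e2) / (1 + e1 * e2))
```

With l held at a constant value other than 1, the composed prefactor squares instead of reproducing itself, which is precisely why the group requirement forces l = 1.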
Let \rho be the electric density of the electron and \xi, \eta, \zeta its velocity before the transformation; for the same quantities \rho', \xi', \eta', \zeta' after the transformation, we obtain

\rho^{\prime }={\frac {k}{l^{3}}}\rho (1+\epsilon \xi ),\quad \rho ^{\prime }\xi ^{\prime }={\frac {k}{l^{3}}}\rho (\xi +\epsilon ),\quad \rho ^{\prime }\eta ^{\prime }={\frac {\rho \eta }{l^{3}}},\quad \rho ^{\prime }\zeta ^{\prime }={\frac {\rho \zeta }{l^{3}}}.

These formulas differ somewhat from those which had been found by Lorentz.

Let now X, Y, Z and X', Y', Z' be the three components of the force, referred to unit volume, before and after the transformation; I found

X^{\prime }={\frac {k}{l^{3}}}(X+\epsilon \Sigma X\xi ),\quad Y^{\prime }={\frac {Y}{l^{3}}},\quad Z^{\prime }={\frac {Z}{l^{3}}}.

These formulas are slightly different from those of Lorentz; the additional term in \Sigma X\xi recalls a result previously obtained by Liénard. If we now denote by X_{1}, Y_{1}, Z_{1} and X'_{1}, Y'_{1}, Z'_{1} the components of the force referred not to unit volume but to unit mass of the electron, we obtain

X_{1}^{\prime }={\frac {k}{l^{3}}}{\frac {\rho }{\rho ^{\prime }}}(X_{1}+\epsilon \Sigma X_{1}\xi ),\quad Y_{1}^{\prime }={\frac {\rho }{\rho ^{\prime }}}{\frac {Y_{1}}{l^{3}}},\quad Z_{1}^{\prime }={\frac {\rho }{\rho ^{\prime }}}{\frac {Z_{1}}{l^{3}}}.

Lorentz was also led to assume that the electron in motion takes the form of an oblate spheroid; this is also the hypothesis made by Langevin; however, while Lorentz assumed that two axes of the ellipsoid remain constant, which is consistent with the hypothesis l = 1, Langevin assumed that its volume remains constant.
Both authors have shown that these two hypotheses are consistent with the experiments of Kaufmann, as is the original hypothesis of Abraham (the spherical electron). The hypothesis of Langevin would have the advantage of being self-sufficient, since it suffices to regard the electron as deformable and incompressible in order to explain that it takes an ellipsoidal shape when it moves. But I show, in agreement with Lorentz, that it cannot be reconciled with the impossibility of an experiment showing absolute motion. As I have said, this is because l = 1 is the only case in which all the Lorentz transformations form a group.

With the hypothesis of Lorentz, however, the agreement between the formulas does not come about by itself; it is obtained, together with a possible explanation of the contraction of the electron, by assuming that the electron, deformable and compressible, is subjected to a constant external pressure whose work is proportional to the changes in volume. I show, by applying the principle of least action, that under these conditions the compensation is complete, provided that inertia is an exclusively electromagnetic phenomenon, as is generally admitted since Kaufmann's experiments, and that, apart from the constant pressure just mentioned which acts on the electron, all the forces are of electromagnetic origin. We thus have the explanation of the impossibility of demonstrating absolute motion and of the contraction of all bodies in the direction of the terrestrial motion.

If x, y, z are the projections on the three axes of the vector joining the two positions, if the velocity of the attracted body is \xi, \eta, \zeta, and that of the attracting body \xi_{1}, \eta_{1}, \zeta_{1}, then the three components of the attraction (which I may still call X'_{1}, Y'_{1}, Z'_{1}) are functions of x, y, z, \xi, \eta, \zeta, \xi_{1}, \eta_{1}, \zeta_{1}.
I asked myself whether it was possible to determine these functions in such a way that they are affected by the Lorentz transformation according to equations (4), and that the ordinary law of gravitation is recovered whenever the velocities \xi, \eta, \zeta, \xi_{1}, \eta_{1}, \zeta_{1} are small enough that their squares may be neglected in comparison with the square of the speed of light. The answer must be affirmative. It is found that the corrected attraction consists of two forces, one parallel to the vector x, y, z, the other to the velocity \xi, \eta, \zeta.

The discrepancy with the ordinary law of gravitation is, as I have said, of the order of \xi^{2}; if one assumed only, as Laplace did, that the speed of propagation is that of light, the discrepancy would be of the order of \xi, that is to say, 10,000 times larger. At first sight, therefore, it is not absurd to suppose that astronomical observations are not precise enough to detect so small a difference. But only a thorough discussion can decide the point.
Plot contours - MATLAB fcontour - MathWorks Benelux

Plot Contours of Function
Specify Plotting Interval and Plot Piecewise Contour Plot
Change Line Style and Width
Plot Multiple Contour Plots
Modify Contour Plot After Creation
Fill Area Between Contours
Specify Levels for Contour Lines
Control Resolution of Contour Lines

Plot contours

fcontour(f)
fcontour(f,xyinterval)
fcontour(___,LineSpec)
fcontour(___,Name,Value)
fcontour(ax,___)
fc = fcontour(___)

fcontour(f) plots the contour lines of the function z = f(x,y) for constant levels of z over the default interval [-5 5] for x and y.

fcontour(f,xyinterval) plots over the specified interval. To use the same interval for both x and y, specify xyinterval as a two-element vector of the form [min max]. To use different intervals, specify a four-element vector of the form [xmin xmax ymin ymax].

fcontour(___,LineSpec) sets the line style and color for the contour lines. For example, '-r' specifies red lines. Use this option after any of the previous input argument combinations.

fcontour(___,Name,Value) specifies line properties using one or more name-value pair arguments.

fcontour(ax,___) plots into the axes specified by ax instead of the current axes.

fc = fcontour(___) returns a FunctionContour object. Use fc to query and modify properties of a specific FunctionContour object. For a list of properties, see FunctionContour Properties.

Plot the contours of f(x,y) = sin(x) + cos(y) over the default interval -5 < x < 5 and -5 < y < 5.

f = @(x,y) sin(x) + cos(y);
fcontour(f)

Specify the plotting interval as the second argument of fcontour. When you plot multiple inputs over different intervals in the same axes, the axis limits adjust to display all the data. This behavior lets you plot piecewise inputs.
Plot the piecewise input

    erf(x) + cos(y),   -5 < x < 0
    sin(x) + cos(y),    0 < x < 5

over -5 < y < 5.

fcontour(@(x,y) erf(x) + cos(y),[-5 0 -5 5])
hold on
fcontour(@(x,y) sin(x) + cos(y),[0 5 -5 5])
hold off

Plot the contours of x^2 - y^2 as dashed lines with a line width of 2.

f = @(x,y) x.^2 - y.^2;
fcontour(f,'--','LineWidth',2)

Plot the contours of sin(x) + cos(y) and x - y on the same axes by using hold on.

fcontour(@(x,y) sin(x)+cos(y))
hold on
fcontour(@(x,y) x-y)
hold off

Plot the contours of exp(-(x/3)^2-(y/3)^2) + exp(-(x+2)^2-(y+2)^2). Assign the function contour object to a variable.

f = @(x,y) exp(-(x/3).^2-(y/3).^2) + exp(-(x+2).^2-(y+2).^2);
fc = fcontour(f)

FunctionContour with properties:
     Function: @(x,y)exp(-(x/3).^2-(y/3).^2)+exp(-(x+2).^2-(y+2).^2)
    LineColor: 'flat'
         Fill: off
    LevelList: [0.2000 0.4000 0.6000 0.8000 1 1.2000 1.4000]

Change the line width to 1 and the line style to a dashed line by using dot notation to set properties of the function contour object. Show contours close to 0 and 1 by setting the LevelList property. Add a colorbar.

fc.LineWidth = 1;
fc.LineStyle = '--';
fc.LevelList = [1 0.9 0.8 0.2 0.1];
colorbar

Create a plot that looks like a sunset by filling the area between the contours of erf((y+2)^3) - exp(-0.65*((x-2)^2+(y-2)^2)).

f = @(x,y) erf((y+2).^3) - exp(-0.65*((x-2).^2+(y-2).^2));
fcontour(f,'Fill','on');

If you want interpolated shading instead, use the fsurf function and set its 'EdgeColor' option to 'none', followed by the command view(0,90).

Set the values at which fcontour draws contours by using the 'LevelList' option.

fcontour(f,'LevelList',[-1 0 1])

Control the resolution of contour lines by using the 'MeshDensity' option. Increasing 'MeshDensity' can make smoother, more accurate plots, while decreasing it can increase plotting speed. Create two plots in a 2-by-1 tiled chart layout.
In the first plot, display the contours of sin(x)*sin(y) using the default mesh density. The corners of the squares do not meet. To fix this issue, increase 'MeshDensity' to 200 in the second plot. The corners now meet, showing that by increasing 'MeshDensity' you increase the resolution.

f = @(x,y) sin(x).*sin(y);
tiledlayout(2,1)
nexttile
fcontour(f)
title('Default Mesh Density (71)')
nexttile
fcontour(f,'MeshDensity',200)
title('Custom Mesh Density (200)')

Plot x*sin(y) - y*cos(x) over the interval -2*pi < x < 2*pi. Display the grid lines, add a title, and add axis labels.

fcontour(@(x,y) x.*sin(y) - y.*cos(x), [-2*pi 2*pi], 'LineWidth', 2);
grid on
title({'xsin(y) - ycos(x)','-2\pi < x < 2\pi and -2\pi < y < 2\pi'})

Set the x-axis tick values and associated labels by setting the XTickLabel and XTick properties of the axes object. Access the axes object using gca. Similarly, set the y-axis tick values and associated labels.

ax = gca;
ax.XTick = ax.XLim(1):pi/2:ax.XLim(2);
ax.YTick = ax.YLim(1):pi/2:ax.YLim(2);

f — Function to plot, specified as a function handle to a named or anonymous function.

xyinterval — Plotting interval: [-5 5 -5 5] (default) | vector of form [min max] | vector of form [xmin xmax ymin ymax]

ax — Axes object. If you do not specify an axes object, then fcontour uses the current axes.

LineSpec — Line style and color, specified as a character vector or string containing a line style specifier, a color specifier, or both. Example: '--r' specifies red dashed lines.

Name-value pair arguments. Example: 'MeshDensity',30. The properties listed here are only a subset. For a full list, see FunctionContour Properties.

fc — One or more FunctionContour objects, returned as a scalar or a vector. You can use these objects to query and modify the properties of a specific contour plot. For a list of properties, see FunctionContour Properties.

See Also: fmesh | fplot | fplot3 | fsurf | title
COMMUTATIVE ALGEBRA - Encyclopedia Information

Commutative algebra
https://en.wikipedia.org/wiki/Commutative_algebra
Branch of algebra that studies commutative rings

Commutative algebra is the branch of algebra that studies commutative rings, their ideals, and modules over such rings. Both algebraic geometry and algebraic number theory build on commutative algebra. Prominent examples of commutative rings include polynomial rings; rings of algebraic integers, including the ordinary integers \mathbb{Z}; and p-adic integers.[1]

Main tools and results

Noetherian rings. A ring is Noetherian if every ascending chain of ideals

I_{1} \subseteq \cdots \subseteq I_{k-1} \subseteq I_{k} \subseteq I_{k+1} \subseteq \cdots

stabilizes; that is, there exists an n such that

I_{n} = I_{n+1} = \cdots.

Hilbert's basis theorem
Main article: Hilbert's basis theorem

Hilbert's basis theorem has some immediate corollaries. If R is Noetherian, then by induction we see that R[X_{0}, \dotsc, X_{n-1}] will also be Noetherian. Since any affine variety in R^{n} (i.e. the locus-set of a collection of polynomials) may be written as the locus of an ideal \mathfrak{a} \subset R[X_{0}, \dotsc, X_{n-1}], and further as the locus of its generators, it follows that every affine variety is the locus of finitely many polynomials — i.e. the intersection of finitely many hypersurfaces.
If A is a finitely generated R-algebra, then we know that A \simeq R[X_{0}, \dotsc, X_{n-1}]/\mathfrak{a}, where \mathfrak{a} is an ideal. The basis theorem implies that \mathfrak{a} must be finitely generated, say \mathfrak{a} = (p_{0}, \dotsc, p_{N-1}), so A is finitely presented.

Primary decomposition
Main article: Primary decomposition

The Lasker–Noether theorem states that every ideal I of a Noetherian ring admits a primary decomposition I = \bigcap_{i=1}^{t} Q_{i}, and that this decomposition is essentially unique: if I = \bigcap_{i=1}^{k} P_{i} is another decomposition of I with Rad(P_{i}) ≠ Rad(P_{j}) for i ≠ j, and both decompositions of I are irredundant (meaning that no proper subset of either {Q_{1}, ..., Q_{t}} or {P_{1}, ..., P_{k}} yields an intersection equal to I), then t = k and (after possibly renumbering the Q_{i}) Rad(Q_{i}) = Rad(P_{i}) for all i. Thus, for any primary decomposition of I, the set of all radicals {Rad(Q_{1}), ..., Rad(Q_{t})} remains the same. In fact, it turns out that (for a Noetherian ring) this set is precisely the assassinator of the module R/I; that is, the set of all prime ideals that are annihilators of elements of R/I (viewed as a module over R).

Localization
Main article: Localization (algebra)
The localization of a ring inverts a chosen multiplicative set of elements; its elements are formal fractions \frac{m}{s} whose denominators s are taken from that set.

Completion
Main article: Completion (ring theory)

Zariski topology on prime ideals
Main article: Zariski topology

The Zariski topology defines a topology on the spectrum of a ring (the set of prime ideals).[2] In this formulation, the Zariski-closed sets are taken to be the sets

V(I) = \{P \in \operatorname{Spec}(A) \mid I \subseteq P\},

where A is the ring and I is an ideal.

The fundamental example in commutative algebra is the ring of integers \mathbb{Z}. The existence of primes and the unique factorization theorem laid the foundations for concepts such as Noetherian rings and primary decomposition. Other important examples are:

- Polynomial rings R[x_{1}, ..., x_{n}]
- The p-adic integers
- Rings of algebraic integers.
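As a concrete illustration of the Lasker–Noether theorem (a standard elementary example, not taken from the article itself), consider the ideal (60) in \mathbb{Z}:

```latex
(60) = (4) \cap (3) \cap (5), \qquad
\operatorname{Rad}\bigl((4)\bigr) = (2), \quad
\operatorname{Rad}\bigl((3)\bigr) = (3), \quad
\operatorname{Rad}\bigl((5)\bigr) = (5).
```

The primary components mirror the factorization 60 = 2^2 \cdot 3 \cdot 5, and the radicals (2), (3), (5) are exactly the associated primes of \mathbb{Z}/(60); by the uniqueness statement, any other irredundant primary decomposition of (60) must produce this same set of radicals.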
Connections with algebraic geometry

See also: List of commutative algebra topics; Glossary of commutative algebra

References
1. Atiyah, M. F.; Macdonald, I. G. (1969). Introduction to Commutative Algebra. Chapter 1.
2. Dummit, D. S.; Foote, R. (2004). Abstract Algebra (3rd ed.). Wiley. pp. 71–72. ISBN 9780471433347.
Great-circle distance

The great-circle distance, orthodromic distance, or spherical distance is the distance along a great circle.

[Figure: A diagram illustrating great-circle distance (drawn in red) between two points on a sphere, P and Q. Two antipodal points, u and v, are also shown.]

It is the shortest distance between two points on the surface of a sphere, measured along the surface of the sphere (as opposed to a straight line through the sphere's interior). The distance between two points in Euclidean space is the length of a straight line between them, but on the sphere there are no straight lines. In spaces with curvature, straight lines are replaced by geodesics. Geodesics on the sphere are circles on the sphere whose centers coincide with the center of the sphere, and are called "great circles". The determination of the great-circle distance is part of the more general problem of great-circle navigation, which also computes the azimuths at the end points and intermediate way-points.

Through any two points on a sphere that are not antipodal points (directly opposite each other), there is a unique great circle. The two points separate the great circle into two arcs. The length of the shorter arc is the great-circle distance between the points. A great circle endowed with such a distance is called a Riemannian circle in Riemannian geometry.

Between antipodal points, there are infinitely many great circles, and all great circle arcs between antipodal points have a length of half the circumference of the circle, or \pi r, where r is the radius of the sphere.

The Earth is nearly spherical, so great-circle distance formulas give the distance between points on the surface of the Earth correct to within about 0.5%.[1]

The vertex is the highest-latitude point on a great circle.

[Figure: An illustration of the central angle, \Delta\sigma, between two points, P and Q.]
(In the figure, \lambda and \phi are the longitude and latitude of P, respectively.)

Let \lambda_{1}, \phi_{1} and \lambda_{2}, \phi_{2} be the geographical longitude and latitude of two points 1 and 2, and \Delta\lambda, \Delta\phi their absolute differences; then \Delta\sigma, the central angle between them, is given by the spherical law of cosines if one of the poles is used as an auxiliary third point on the sphere:[2]

\Delta \sigma =\arccos {\bigl (}\sin \phi _{1}\sin \phi _{2}+\cos \phi _{1}\cos \phi _{2}\cos(\Delta \lambda ){\bigr )}.

The problem is normally expressed in terms of finding the central angle \Delta\sigma. Given this angle in radians, the actual arc length d on a sphere of radius r can be trivially computed as

d = r\,\Delta \sigma .

Computational formulas

On computer systems with low floating-point precision, the spherical law of cosines formula can have large rounding errors if the distance is small (if the two points are a kilometer apart on the surface of the Earth, the cosine of the central angle is near 0.99999999).
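The two formulas above translate directly into code. The following is a sketch in plain Python; the function names are mine, and the default radius is the mean Earth radius discussed later in this article (6371.009 km):

```python
from math import acos, cos, sin, pi

def central_angle(phi1, lam1, phi2, lam2):
    """Central angle (radians) between two points given as latitude/longitude
    pairs in radians, via the spherical law of cosines."""
    return acos(sin(phi1) * sin(phi2)
                + cos(phi1) * cos(phi2) * cos(lam2 - lam1))

def great_circle_distance(phi1, lam1, phi2, lam2, r=6371.009):
    """Arc length d = r * delta_sigma; with r in km the result is in km."""
    return r * central_angle(phi1, lam1, phi2, lam2)

# North pole to a point on the equator: a quarter of a great circle (pi/2).
quarter = central_angle(pi / 2, 0.0, 0.0, 0.0)
# Two antipodal points on the equator: half the circumference (pi).
half = central_angle(0.0, 0.0, 0.0, pi)
```

The rounding-error caveat above applies to this sketch as written; the haversine and Vincenty forms given next are the usual remedies.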
For modern 64-bit floating-point numbers, the spherical law of cosines formula, given above, does not have serious rounding errors for distances larger than a few meters on the surface of the Earth.[3] The haversine formula is numerically better-conditioned for small distances:[4]

{\begin{aligned}\Delta \sigma &=\operatorname {archav} \left(\operatorname {hav} \left(\Delta \phi \right)+\left(1-\operatorname {hav} (\Delta \phi )-\operatorname {hav} (\phi _{1}+\phi _{2})\right)\cdot \operatorname {hav} \left(\Delta \lambda \right)\right)\\&=2\arcsin {\sqrt {\sin ^{2}\left({\frac {\Delta \phi }{2}}\right)+\left(1-\sin ^{2}\left({\frac {\Delta \phi }{2}}\right)-\sin ^{2}\left({\frac {\phi _{1}+\phi _{2}}{2}}\right)\right)\cdot \sin ^{2}\left({\frac {\Delta \lambda }{2}}\right)}}.\end{aligned}}

Historically, the use of this formula was simplified by the availability of tables for the haversine function: hav(\theta) = \sin^{2}(\theta/2). Although this formula is accurate for most distances on a sphere, it too suffers from rounding errors for the special (and somewhat unusual) case of antipodal points.
A formula that is accurate for all distances is the following special case of the Vincenty formula for an ellipsoid with equal major and minor axes:[5]

\Delta \sigma =\arctan {\frac {\sqrt {\left(\cos \phi _{2}\sin(\Delta \lambda )\right)^{2}+\left(\cos \phi _{1}\sin \phi _{2}-\sin \phi _{1}\cos \phi _{2}\cos(\Delta \lambda )\right)^{2}}}{\sin \phi _{1}\sin \phi _{2}+\cos \phi _{1}\cos \phi _{2}\cos(\Delta \lambda )}}.

Vector version

Another representation of similar formulas, but using normal vectors instead of latitude and longitude to describe the positions, is found by means of 3D vector algebra, using the dot product, cross product, or a combination:[6]

{\begin{aligned}\Delta \sigma &=\arccos \left(\mathbf {n} _{1}\cdot \mathbf {n} _{2}\right)\\&=\arcsin \left|\mathbf {n} _{1}\times \mathbf {n} _{2}\right|\\&=\arctan {\frac {\left|\mathbf {n} _{1}\times \mathbf {n} _{2}\right|}{\mathbf {n} _{1}\cdot \mathbf {n} _{2}}}\\\end{aligned}}

where \mathbf{n}_{1} and \mathbf{n}_{2} are the normals to the ellipsoid at the two positions 1 and 2. Similarly to the equations above based on latitude and longitude, the expression based on arctan is the only one that is well-conditioned for all angles; it requires the magnitude of the cross product over the dot product.

From chord length

A line through three-dimensional space between points of interest on a spherical Earth is the chord of the great circle between the points. The central angle between the two points can be determined from the chord length. The great-circle distance is proportional to the central angle.
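The haversine and Vincenty forms of the central angle can be sketched in plain Python (the function names are mine; latitudes and longitudes are in radians). Using atan2 for the Vincenty form keeps it well-conditioned at all angles, including the antipodal case where the haversine form loses accuracy:

```python
from math import asin, atan2, cos, sin, sqrt, pi

def hav(theta):
    """Haversine: hav(theta) = sin^2(theta / 2)."""
    return sin(theta / 2) ** 2

def central_angle_haversine(phi1, lam1, phi2, lam2):
    """Central angle via the haversine formula (well-conditioned for small distances)."""
    h = hav(phi2 - phi1) + (1 - hav(phi2 - phi1) - hav(phi1 + phi2)) * hav(lam2 - lam1)
    return 2 * asin(sqrt(h))

def central_angle_vincenty(phi1, lam1, phi2, lam2):
    """Central angle via the equal-axes special case of the Vincenty formula
    (accurate for all distances)."""
    dlam = lam2 - lam1
    num = sqrt((cos(phi2) * sin(dlam)) ** 2
               + (cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlam)) ** 2)
    den = sin(phi1) * sin(phi2) + cos(phi1) * cos(phi2) * cos(dlam)
    return atan2(num, den)
```

For nearby points the two forms agree to machine precision, and for antipodal points on the equator, such as (0, 0) and (0, \pi), the Vincenty form returns \pi as expected.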
The great-circle chord length, C, may be calculated as follows for the corresponding unit sphere, by means of Cartesian subtraction:

{\begin{aligned}\Delta {X}&=\cos \phi _{2}\cos \lambda _{2}-\cos \phi _{1}\cos \lambda _{1};\\\Delta {Y}&=\cos \phi _{2}\sin \lambda _{2}-\cos \phi _{1}\sin \lambda _{1};\\\Delta {Z}&=\sin \phi _{2}-\sin \phi _{1};\\C&={\sqrt {(\Delta {X})^{2}+(\Delta {Y})^{2}+(\Delta {Z})^{2}}}\end{aligned}}

The central angle is:

\Delta \sigma =2\arcsin {\frac {C}{2}}.

Radius for spherical Earth

[Figure: Equatorial (a), polar (b) and mean Earth radii as defined in the 1984 World Geodetic System revision. (Not to scale.)]

The shape of the Earth closely resembles a flattened sphere (a spheroid) with equatorial radius a of 6378.137 km; the distance b from the center of the spheroid to each pole is 6356.7523142 km. When calculating the length of a short north-south line at the equator, the circle that best approximates that line has a radius of \frac{b^{2}}{a} (which equals the meridian's semi-latus rectum), or 6335.439 km, while the spheroid at the poles is best approximated by a sphere of radius \frac{a^{2}}{b}, or 6399.594 km, a 1% difference. So long as a spherical Earth is assumed, any single formula for distance on the Earth is only guaranteed correct within 0.5% (though better accuracy is possible if the formula is only intended to apply to a limited area). Using the mean Earth radius, R_{1}={\frac {1}{3}}(2a+b)\approx 6371.009{\text{ km}} (for the WGS84 ellipsoid), means that in the limit of small flattening, the mean square relative error in the estimates for distance is minimized.[7]

References

^ Admiralty Manual of Navigation, Volume 1, The Stationery Office, 1987, p.
10, ISBN 9780117728806, The errors introduced by assuming a spherical Earth based on the international nautical mile are not more than 0.5% for latitude, 0.2% for longitude. ^ Kells, Lyman M.; Kern, Willis F.; Bland, James R. (1940). Plane And Spherical Trigonometry. McGraw Hill Book Company, Inc. pp. 323-326. Retrieved July 13, 2018. ^ "Calculate distance, bearing and more between Latitude/Longitude points". Retrieved 10 Aug 2013. ^ Sinnott, Roger W. (August 1984). "Virtues of the Haversine". Sky and Telescope. 68 (2): 159. ^ Vincenty, Thaddeus (1975-04-01). "Direct and Inverse Solutions of Geodesics on the Ellipsoid with Application of Nested Equations" (PDF). Survey Review. Kingston Road, Tolworth, Surrey: Directorate of Overseas Surveys. 23 (176): 88–93. doi:10.1179/sre.1975.23.176.88. Retrieved 2008-07-21. ^ Gade, Kenneth (2010). "A non-singular horizontal position representation" (PDF). The Journal of Navigation. Cambridge University Press. 63 (3): 395–417. doi:10.1017/S0373463309990415. GreatCircle at MathWorld
Accord between different laws of Nature that seemed incompatible - Wikisource, the free online library

Accord between different laws of Nature that seemed incompatible (1744) by Pierre Louis Moreau de Maupertuis, translated from French by Wikisource

Early article setting out the principle of least action.

To broaden our knowledge of the world, we use different methods that rarely lead us to the same truths. Nevertheless, it would be remarkable if certain fundamental truths of Philosophy could be illustrated by geometric arguments or algebraic proofs. One such remarkable example can be found in one of the most important subjects of physics. The most beautiful discoveries since the Renaissance, indeed since the beginnings of all science, are the laws governing light, whether moving through a uniform medium, or being reflected from an opaque surface, or changing direction upon entering another transparent medium. These laws are fundamental to the science of optics and colors. I will not presume to treat the whole of such a vast subject, but will limit myself to a few well-known truths. As I have said, these laws are the basis of an admirable science, one that allows an old man with weakened eyes to see as well as he did in his youth, or even better; a science that extends our vision into the furthest reaches of space and into the smallest parts of matter, allowing us to discover many things that had been concealed. Here are the laws that govern light. The first law states that light moves in a straight line in a uniform medium. The second law states that when light encounters a medium it cannot penetrate, it is reflected and the angle of reflection equals the angle of incidence. In other words, upon reflection, light makes an angle with the surface that is equal to the angle at which the light encountered the surface.
The third law states that when light passes from one medium to another, it is bent and the sine of the angle of refraction always has the same ratio to the sine of the angle of incidence. For example, if a ray of light passes from air to water, it is bent so that the sine of the angle of refraction is three-quarters of the sine of the angle of incidence, regardless of what the angle of incidence is. But the third law still requires a plausible explanation. The passage of light from one medium to another exhibits behavior that is totally different from a ball moving through different media. Every explanation of refraction has some problems that have not yet been overcome. I will not cite all the great men who have worked on this problem. Their names would form a long list, a useless ornament in this article, and a review of their various systems would be an immense undertaking. However, I will group their explanations of the reflection and refraction of light into three classes. The first class consists of explanations that seek to derive the behavior of light from purely mechanical laws, i.e., the basic laws governing the motion of material objects. The second class consists of explanations that augment the mechanical laws with an attraction between light and matter, or something that produces an equivalent effect. Finally, the third class consists of explanations derived from purely metaphysical principles, i.e., from laws to which Nature herself seems subjugated by a superior Intelligence that always produces an effect in the simplest possible manner. Descartes and those who followed him belong to the first class. They modeled the reflection of light as the motion of a ball that, encountering an impenetrable surface, rebounds to the same side from which it came. Similarly, they modeled the refraction of light as a ball that, encountering a penetrable surface, continues to progress, albeit with a changed direction. 
Although the arguments used by this great Philosopher to explain refraction are imperfect, Descartes still deserves credit for trying to derive the laws of light from the simplest mechanical laws. Several mathematicians have noted gaps in Descartes' logic and have sought an improved explanation. Newton despaired of deriving the law of refraction from those governing the motion of a ball when it passes between two media of different resistance. Therefore, he proposed an attraction between light and matter that increases proportionally to the amount of matter present, which was able to account for refraction exactly and rigorously. Mr. Clairaut has written an excellent article on this topic. He lays out the failings of the Cartesian theory, and subscribes to the attraction theory between light and matter, although he proposes that an "atmosphere" surrounding the matter causes the apparent force of attraction. He derives the law of refraction with the clarity that is typical of all the subjects he has researched. Fermat was the first to recognize the failings of Descartes' explanation, and also seemed to despair of explaining the refraction law from purely mechanical laws governing balls encountering obstacles or passing through a resistant medium. However, he also did not resort to "atmospheres" surrounding material bodies or other forms of attraction between light and matter, although he was certainly aware of the attraction theory and found it tolerable. Rather, Fermat sought to explain refraction using a totally different and purely metaphysical principle. Everyone knows that, when light or another body travels from one point to another in a straight line, it follows the path of shortest distance and time. One also knows (or at least it is easy to show) that, when light is reflected, it likewise travels along the path of shortest distance and briefest time. 
The equality of the angles of incidence and reflection results from requiring a body to travel from one point to another along the path of shortest distance and briefest time that involves a reflection from a given plane. For if the angles are equal, the sum of the two paths by which the ball travels and returns is shorter in distance and briefer in time than any other sum of paths making unequal angles. Hence, both the direct and reflected motion of light seem to depend on a metaphysical law stating that Nature always acts in the simplest possible manner to produce its effects. Whether a material body travels from one point to another without encountering an obstacle, or encounters an impenetrable obstacle, Nature always leads it along the path of shortest distance and briefest time. To apply this principle to refraction, let us consider two transparent media separated by the plane of their common surface. Suppose that the light's point of departure is within one medium, whereas its point of arrival is in the other medium, but that the line joining the two points is not perpendicular to the surface separating the media. Let us also assume that the light travels with different speeds in the different media. Hence, the straight line joining the two points is still the path of shortest distance, but is no longer the path of briefest time. Since the travel time depends on the speeds of light in the two media, the path of briefest time should be longer in the medium where the light moves more quickly, and shorter in the medium where light moves more slowly. This seems to occur when light passes from air into water. The ray is bent such that a longer path is taken in air and a shorter path in water. If we assume, as seems reasonable, that light moves more quickly in air than it does in water, then the bent path taken by the light causes it to move from its departure point to its arrival point in the briefest possible time.
This path-of-least-time principle of Fermat seems reasonable, and accounts for the refraction of light as well as its direct propagation and reflection. Fermat himself had no difficulty in believing that light traveled more easily and more quickly in less dense media than in media of higher density. At first glance, who would assume that light moves more easily and more quickly in glass and water than it does in air or vacuum? Several celebrated mathematicians also agreed with Fermat's principle, particularly Leibniz, who gave the problem an elegant mathematical analysis. He was charmed by the metaphysical principle as an example of the final causes to which he was so attached, and considered it beyond doubt that light moves more quickly in air than in water or glass. Nevertheless, Descartes believed exactly the opposite, that light moves more quickly in denser media and, although his reasoning was perhaps inadequate, that failing does not stem from his assumption about the speed of light. Every theory of refraction makes some assumption about the relative speed of light in different media, some agreeing with Descartes and some agreeing with Fermat and Leibniz. If it is assumed that light moves more quickly in denser media, the entire theory of Fermat and Leibniz is destroyed. For, in that case, the bent path of light upon refraction would correspond neither to the path of shortest distance nor to the path of briefest time. A path that traveled longer in a medium of slower speed would definitely not arrive in the shortest possible time. The article of Mr. de Mairan on reflection and refraction describes the history of the dispute between Fermat and Descartes, as well as the confusion and inability to harmonize the law of refraction with the metaphysical principle. Now I have to define what I mean by "action". When a material body is transported from one point to another, it involves an action that depends on the speed of the body and on the distance it travels. 
However, the action is neither the speed nor the distance taken separately; rather, it is proportional to the sum of the distances travelled, each multiplied by the speed at which it was travelled. Hence, the action increases linearly with the speed of the body and with the distance travelled. This action is the true expense of Nature, which she manages to make as small as possible in the motion of light. Let there be two media, separated by a common surface (represented by the line CD), such that the speed of light in the upper medium is V and in the lower medium is W. Let there be a ray of light AR that leaves from a given point A and arrives at a given point B. To find the point R at which the light is bent, I seek the point that minimizes the action, i.e., V\cdot AR+W\cdot RB should be minimized or, equivalently, V\cdot \sqrt{AC^{2}+CR^{2}}+W\cdot \sqrt{BD^{2}+CD^{2}-2CD\cdot CR+CR^{2}}. Since AC, BD and CD are constants, minimization with respect to CR yields the equation \frac{V\cdot CR\,\mathrm{d}CR}{\sqrt{AC^{2}+CR^{2}}}-\frac{W\cdot \left(CD-CR\right)\,\mathrm{d}CR}{\sqrt{BD^{2}+DR^{2}}}=0, that is, \frac{V\cdot CR}{AR}=\frac{W\cdot DR}{BR}, or \frac{CR}{AR}:\frac{DR}{BR}::W:V. In other words, the ratio of the sine of the angle of incidence to the sine of the angle of refraction equals the inverse ratio of the speeds at which light moves in each medium. Thus, the refraction of light agrees with the grand principle that Nature always uses the simplest means to accomplish its effects. From this principle it can be derived that, whenever light passes from one medium to another, the ratio of the sine of the angle of incidence to the sine of the angle of refraction equals the inverse ratio of the speeds at which light moves in each medium. In both cases (direct propagation and reflection), the speed of light remains constant.
Hence, the path of least action is the same as the path of shortest distance and the path of briefest time. However, those latter two paths are merely a consequence of the path of least action, a consequence that Fermat and Leibniz took as the fundamental principle. One cannot doubt that everything is governed by a supreme Being who has imposed forces on material objects, forces that show his power, just as he has fated those objects to execute actions that demonstrate his wisdom. The harmony between these two attributes is so perfect that undoubtedly all the effects of Nature could be derived from each one taken separately. A blind and deterministic mechanics follows the plans of a perfectly clear and free Intellect. If our spirits were sufficiently vast, we would also see the causes of all physical effects, either by studying the properties of material bodies or by studying what it would be most suitable for them to do. The first type of study is more within our power, but does not take us far. The second type may lead us astray, since we do not know enough of the goals of Nature and we can be mistaken about the quantity that is truly the expense of Nature in producing its effects. To unify the certainty of our research with its breadth, it is necessary to use both types of study. Let us calculate the motion of bodies, but also consult the plans of the Intelligence that makes them move.
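Maupertuis's minimization can be checked numerically. The sketch below (all coordinates and speeds are illustrative choices, not from the essay) places A and B on opposite sides of the interface and scans for the bending point R that minimizes the action V·AR + W·RB; the resulting ratio of sines comes out equal to W/V, as the essay's derivation predicts.

```python
import math

# Numerical check of Maupertuis's least-action refraction law.
# A = (0, 1) lies in the upper medium (speed V); B = (d, -1) in the lower (speed W).
# The bending point is R = (x, 0) on the interface; all numbers are illustrative.
V, W, d = 1.0, 0.75, 2.0

def action(x):
    AR = math.hypot(x, 1.0)        # path length in the upper medium
    RB = math.hypot(d - x, 1.0)    # path length in the lower medium
    return V * AR + W * RB         # Maupertuis's "action": speed times distance

# Fine grid search for the minimizing bending point.
xs = [i * d / 20000 for i in range(20001)]
x = min(xs, key=action)

sin_i = x / math.hypot(x, 1.0)            # sine of the angle of incidence
sin_r = (d - x) / math.hypot(d - x, 1.0)  # sine of the angle of refraction
print(round(sin_i / sin_r, 3))  # ~0.75, i.e. W/V
```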
Slew Rate of Triangular Waveform - MATLAB & Simulink This example shows how to use the slew rate as an estimate of the rising and falling slopes of a triangular waveform. Create three triangular waveforms. One waveform has rising-falling slopes of ±2 , one waveform has rising-falling slopes of ±\frac{1}{2} , and one waveform has a rising slope of +2 and a falling slope of -\frac{1}{2} . Use slewrate to find the slopes of the waveforms. Use tripuls to create a triangular waveform with rising-falling slopes of ±2 . Set the sampling interval to 0.01 seconds, which corresponds to a sample rate of 100 hertz. dt = 0.01; t = -3:dt:3; x = tripuls(t); Compute and plot the slew rate for the triangular waveform. Input the sample rate (100 Hz) to obtain the correct positive and negative slope values. slewrate(x,1/dt) Change the width of the triangular waveform so it has slopes of ±\frac{1}{2} . Compute and plot the slew rate. x = tripuls(t,4); slewrate(x,1/dt) Create a triangular waveform with a rising slope of +2 and a falling slope of -\frac{1}{2} . Compute the slew rate. x = tripuls(t,5/2,-3/5); s = slewrate(x,1/dt) The first element of s is the rising slope and the second element is the falling slope. Plot the result. slewrate(x,1/dt); See Also: slewrate | tripuls
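The underlying idea can be approximated outside MATLAB with a plain finite-difference estimate (this is only a rough sketch, not the algorithm used by slewrate). The pulse below mirrors the third waveform, with a rising slope of +2 and a falling slope of −1/2, sampled at 100 Hz:

```python
# Rough Python sketch of slope estimation for a triangular pulse:
# scale sample-to-sample differences by the sample rate and take the extremes.
dt = 0.01            # sampling interval, seconds
fs = 1 / dt          # sample rate: 100 Hz

def tri(t):
    # Triangular pulse: rises at +2 on [0, 0.5], falls at -1/2 on (0.5, 2.5].
    if 0.0 <= t <= 0.5:
        return 2.0 * t
    if 0.5 < t <= 2.5:
        return 1.0 - 0.5 * (t - 0.5)
    return 0.0

x = [tri(-1.0 + i * dt) for i in range(401)]            # samples on [-1, 3]
slopes = [(b - a) * fs for a, b in zip(x, x[1:])]       # finite differences
print(max(slopes), min(slopes))  # rising slope ~ +2, falling slope ~ -0.5
```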
Euclidean Algorithm | Brilliant Math & Science Wiki The Euclidean algorithm is an efficient method for computing the greatest common divisor of two integers, without explicitly factoring the two integers. It is used in countless applications, including computing the explicit expression in Bezout's identity, constructing continued fractions, reduction of fractions to their simplest forms, and attacking the RSA cryptosystem. Furthermore, it can be extended to other rings that have a division algorithm, such as the ring {\mathbb Q}[x] of polynomials with rational coefficients. Contents: Solving for Bezout's identity; the Euclidean algorithm in other rings. The Euclidean algorithm solves the following problem: given two integers a, b, compute d=\text{gcd}(a,b). If the prime factorizations of a and b are known, finding the greatest common divisor is straightforward. Let a = p_1 ^{q_1} p_2 ^ {q_2} \cdots p_n^{q_n} and b= p_1 ^{r_1} p_2 ^ {r_2} \cdots p_n^{r_n}, where the p_i are distinct positive primes and q_i, r_i\geq 0; then \gcd{(a,b)} = p_1 ^ { \min(q_1, r_1)} p_2 ^ {\min (q_2, r_2)} \cdots p_n^{\min (q_n, r_n) }. However, in general, factoring integers is a difficult problem from a computational perspective. The Euclidean algorithm provides a fast way to determine d without knowing the prime factors of a and b. Here is an outline of the steps: (1) Set x = a, y=b. (2) Given x,y, use the division algorithm to write x=yq+r with 0\le r < |y|. (3) If r=0, stop and output y; this is the gcd of a and b. (4) If r\ne 0, replace (x,y) by (y,r) and go to step (2). What is \gcd( 16457, 1638 )? Apply the Euclidean algorithm: \begin{array}{rrl} 16457 =& 1638 \times 10 & + 77 \\ 1638 =& 77 \times 21 & + 21\\ 77 =& 21 \times 3 &+ 14 \\ 21 =& 14 \times 1 & + 7\\ 14 =& 7 \times 2 &+ 0. \end{array} The process stops since we reached 0, and 7 = \gcd (7, 14) = \gcd(14, 21) = \gcd (21, 77) = \gcd (77, 1638) = \gcd( 1638, 16457) . \ _\square What is the highest common factor of 2442 and 17171? Note: Try not to use a calculator.
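The four-step outline above translates directly into code; a minimal sketch:

```python
# Euclidean algorithm: repeatedly replace (x, y) by (y, x mod y) until y == 0.
def gcd(x, y):
    while y != 0:
        x, y = y, x % y   # division algorithm step: x = y*q + r
    return abs(x)

print(gcd(16457, 1638))  # 7, matching the worked example above
```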
The last line of the above example suggests a proof that the Euclidean algorithm computes the gcd. It is enough to show that the gcd of each pair of numbers in the algorithm is the same, because for the last pair (x,y) we have y|x, so \text{gcd}(x,y)=y, which is what the Euclidean algorithm outputs for \text{gcd}(a,b). Here is a proof of this statement: suppose a=bq+r. If d|a and d|b, then d|r, so \text{gcd}(a,b) \le \text{gcd}(b,r). Conversely, if e|b and e|r, then e|a, so \text{gcd}(b,r) \le \text{gcd}(a,b). So the GCDs are equal. _\square \gcd(2015! + 1, 2016! +1) = \, ? Here \gcd(A,B) denotes the greatest common divisor of the two numbers A and B. (IMO '59) Prove that \dfrac {21n+4} {14n+3} is irreducible for every positive integer n. The fraction \dfrac {21n+4} {14n+3} is irreducible if and only if the numerator and denominator have no common factor, i.e. their greatest common divisor is 1. Applying the Euclidean algorithm, \begin{array}{rll} 21n+4 =& 1 \times (14n+3) &+7n+1 \\ 14n+3 =& 2 \times (7n+1) &+1\\ 7n+1 =& (7n+1) \times 1 & +0. \end{array} Hence \gcd(21n+4, 14n+3) =1 , which shows that the fraction is irreducible. _\square Bezout's identity says that the equation ax+by=\text{gcd}(a,b) has solutions in integers x,y. The Euclidean algorithm gives a method for finding one pair of solutions. Find integers x,y such that 16457x+1638y = 7. First run the Euclidean algorithm: \begin{aligned} 16457 &= 1638 \times 10 + 77 \\ 1638 &= 77 \times 21 + 21\\ 77 &= 21 \times 3 + 14 \\ 21 &= 14 \times 1 + 7\\ 14 &= 7 \times 2 + 0. \end{aligned} Then express 7 as a linear combination of the numbers, moving up the lines: \begin{aligned} 7&= 21- 1\cdot 14 \\ &= 21-1(77-21\cdot 3)= 21\cdot 4-77 \cdot 1 \\ &= (1638-77\cdot 21)4 - 77 \cdot 1 = 1638 \cdot 4- 77 \cdot 85 \\ &= 1638 \cdot 4 - (16457-1638\cdot 10)85 =16457(-85)+1638\cdot 854. \end{aligned} This gives x=-85, y=854. _\square See the Extended Euclidean Algorithm wiki for more details. The Euclidean algorithm can be used in any ring with a division algorithm.
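The back-substitution shown above can be automated with the iterative extended Euclidean algorithm; a sketch that reproduces the solution of the worked Bezout example:

```python
# Extended Euclidean algorithm: track coefficients alongside the remainders,
# so that old_s * a + old_t * b == old_r holds at every step.
def extended_gcd(a, b):
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t  # gcd, x, y with a*x + b*y == gcd

g, x, y = extended_gcd(16457, 1638)
print(g, x, y)  # 7 -85 854, matching the back-substitution above
```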
Here are two examples: Find the GCD of the polynomials x^5 + x^4 + 2x^3 + 2x^2 + 2x + 1 and x^5 + x^4 + x^3 - x^2 - x - 1 over \mathbb Q. Use the Euclidean algorithm: \begin{aligned} x^5+x^4+2x^3+2x^2+2x+1 &= (x^5+x^4+x^3-x^2-x-1)1 + (x^3+3x^2+3x+2) \\ x^5+x^4+x^3-x^2-x-1 &= (x^3+3x^2+3x+2)(x^2-2x+4)+(-9x^2-9x-9) \\ x^3+3x^2+3x+2 &= (-9x^2-9x-9)\left( -\frac19x-\frac29\right) + 0. \end{aligned} So "the" GCD is -9x^2-9x-9. See below for a comment on uniqueness. _\square In a ring with a division algorithm (sometimes called a Euclidean ring), the GCD is defined up to multiplication by a unit, i.e. an element of the ring with a multiplicative inverse. The units in the ring \mathbb Z of integers are \pm 1, so this ambiguity is resolved by stipulating that the GCD is positive. In F[x], where F is a field, the units are the nonzero constant polynomials. For instance, in the above example, -9x^2-9x-9 is divisible by every common divisor of the two polynomials, but so is 9x^2+9x+9, \frac1{100}x^2+\frac1{100}x+\frac1{100}, or any other nonzero constant multiple of -9x^2-9x-9. This ambiguity can be resolved here by stipulating that the GCD should always be a monic polynomial—in this case x^2+x+1. Find \text{gcd}(4+17i,7+6i) in the Gaussian integers {\mathbb Z}[i]. Use the Euclidean algorithm: \begin{aligned} 4+17i &= (7+6i)(2+i)+(-4-2i) \\ 7+6i &= (-4-2i)(-2)+(-1+2i) \\ -4-2i &= (-1+2i)(2i)+0. \end{aligned} So the GCD is -1+2i. Note that the units in {\mathbb Z}[i] are \pm 1, \pm i, so there are four equally valid answers: -1+2i, 1-2i, -2-i, 2+i. _\square Note that the division algorithm quotients above are obtained by dividing and rounding to the nearest Gaussian integer, e.g. \frac{4+17i}{7+6i} = \frac{(4+17i)(7-6i)}{(7+6i)(7-6i)} = \frac{130+95i}{85}, which is closest to 2+i. If a = 100 + 85i, b = 208 + 39i and c = 188 + 22i are Gaussian integers, and d is the greatest common divisor (GCD) of a, b and c, what is |d|^{2}?
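The Gaussian-integer computation above can be replayed with Python's built-in complex numbers, rounding each quotient to the nearest Gaussian integer. A sketch (remember the result is only defined up to the units ±1, ±i):

```python
# Euclidean algorithm in the Gaussian integers Z[i].
def gauss_div(a, b):
    # Divide and round to the nearest Gaussian integer.
    q = a / b
    return complex(round(q.real), round(q.imag))

def gauss_gcd(a, b):
    while b != 0:
        a, b = b, a - gauss_div(a, b) * b   # remainder step
    return a  # an associate of the GCD (defined up to units +-1, +-i)

g = gauss_gcd(4 + 17j, 7 + 6j)
print(g)  # (-1+2j), matching the worked example
```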
Electronically Controllable Quadrature Sinusoidal Oscillator Using VD-DIBAs Department of Electronics and Communication Engineering, Maharaja Agrasen Institute of Technology, New Delhi, India A new voltage-mode quadrature sinusoidal oscillator (QSO) using two voltage differencing-differential input buffered amplifiers (VD-DIBAs) and only three passive components (two capacitors and a resistor) is presented. The proposed QSO circuit offers the advantages of independent electronic control of both the oscillation frequency and the condition of oscillation, availability of two quadrature voltage outputs, and low active and passive sensitivities. SPICE simulation results using 0.35 µm MIETEC technology have been included to confirm the validity of the proposed QSO. Keywords: Voltage Differencing-Differential Input Buffered Amplifier, Voltage-Mode, Quadrature Sinusoidal Oscillator. Quadrature sinusoidal oscillators (QSOs) are important blocks in the synthesis of modern transceivers. A QSO provides two sinusoids with a 90˚ phase difference. QSOs are useful in telecommunications for quadrature mixers and single sideband generators [1] , in direct-conversion receivers, and for measurement purposes in vector generators and selective voltmeters [2] . Because of these applications, a number of QSOs employing different active building blocks have been reported in the open literature [3] - [8] . The VD-DIBA is one of the active building blocks introduced in reference [9] ; it is emerging as a very flexible and versatile building block for analog signal processing/signal generation and has been used earlier for realizing a number of functions, including single-resistance-controlled oscillators, simulation of inductors, and realization of active filters [10] - [17] .
Recently, a VD-DIBA has also been used in the realization of a QSO in which independent electronic control of the condition of oscillation (CO) and the frequency of oscillation (FO) is not available [18] . Therefore, the purpose of this paper is to propose a new QSO having electronic control of both CO and FO through separate transconductances of the VD-DIBAs. This property is very attractive for realizing current-controlled oscillators, as the FO can be controlled independently without disturbing the CO, whereas the flexibility of being able to adjust the CO independently is useful in amplitude stabilization. The proposed configuration also offers low active and passive sensitivities. The validity of the proposed structure has been confirmed by SPICE simulation with 0.35 µm MIETEC technology. 2. The Proposed New Oscillator Configuration The symbolic notation and the equivalent circuit model of the VD-DIBA are shown in Figure 1(a) and Figure 1(b) respectively. The circuit model includes two controlled sources: the voltage source controlled by the differential voltage \left({V}_{z}-{V}_{v}\right) with unity voltage gain, and the current source controlled by the differential voltage \left({V}_{+}-{V}_{-}\right) with transconductance {g}_{m} . The corresponding voltage-current relationships of the input-output terminals of the VD-DIBA can be expressed by the following matrix: \left(\begin{array}{c}{I}_{+}\\ {I}_{-}\\ {I}_{z}\\ {I}_{v}\\ {V}_{w}\end{array}\right)=\left(\begin{array}{ccccc}0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0\\ {g}_{m}& -{g}_{m}& 0& 0& 0\\ 0& 0& 0& 0& 0\\ 0& 0& 1& -1& 0\end{array}\right)\left(\begin{array}{c}{V}_{+}\\ {V}_{-}\\ {V}_{z}\\ {V}_{v}\\ {I}_{w}\end{array}\right) A straightforward circuit analysis of the circuit of Figure 2 yields the following characteristic equation (CE): {s}^{2}{C}_{1}{C}_{2}+s{C}_{1}\left(\frac{1}{{R}_{0}}-{g}_{{m}_{2}}\right)+\frac{{g}_{{m}_{1}}}{{R}_{0}}=0 From Equation (2), the CO (Equation (3)) is \left(\frac{1}{{R}_{0}}-{g}_{{m}_{2}}\right)\le 0 and the FO (Equation (4)) is given below. Figure 1.
(a) Symbolic notation of the VD-DIBA; (b) equivalent circuit model of the VD-DIBA. Figure 2. Proposed electronically controllable quadrature sinusoidal oscillator. {\omega }_{0}=\sqrt{\frac{{g}_{{m}_{1}}}{{R}_{0}{C}_{1}{C}_{2}}} Thus, from Equations (3) and (4), it is clear that the CO is electronically controllable by the transconductance gm2, whereas the FO is electronically controllable through the transconductance gm1. Therefore, both CO and FO are independently controllable by the two separate transconductances of the VD-DIBAs. Let {R}_{Z} and {C}_{Z} denote the parasitic resistance and parasitic capacitance, respectively, of the Z-terminal of the VD-DIBA. Taking the non-idealities into account, namely the W-terminal voltage {V}_{W}=\left({\beta }^{+}{V}_{Z}-{\beta }^{-}{V}_{V}\right) , where β+ = 1 − εp (|εp| ≪ 1) and β− = 1 − εn (|εn| ≪ 1) denote the voltage tracking errors of the Z-terminal and V-terminal of the VD-DIBA respectively, the expressions for the CE, CO and FO become: \begin{array}{l}{s}^{2}\left({C}_{1}+{C}_{z}\right)\left({C}_{2}+{C}_{z}\right)+s\left\{\left({C}_{1}+{C}_{z}\right)\left(\frac{1}{{R}_{0}}+\frac{1}{{R}_{z}}-{g}_{{m}_{2}}{\beta }^{+}\right)+\frac{1}{{R}_{z}}\left({C}_{2}+{C}_{z}\right)\right\}\\ +\frac{1}{{R}_{z}}\left(\frac{1}{{R}_{0}}+\frac{1}{{R}_{z}}-{g}_{{m}_{2}}{\beta }^{+}\right)+\frac{{\beta }^{+}{g}_{{m}_{1}}}{{R}_{0}}=0\end{array} \left\{\left({C}_{1}+{C}_{z}\right)\left(\frac{1}{{R}_{0}}+\frac{1}{{R}_{z}}-{g}_{{m}_{2}}{\beta }^{+}\right)+\frac{1}{{R}_{z}}\left({C}_{2}+{C}_{z}\right)\right\}\le 0 {\omega }_{0}=\sqrt{\frac{{R}_{0}+{R}_{z}-{R}_{0}{R}_{z}{g}_{{m}_{2}}{\beta }^{+}+{R}_{z}^{2}{\beta }^{+}{g}_{{m}_{1}}}{{R}_{0}{R}_{z}^{2}\left({C}_{1}+{C}_{z}\right)\left({C}_{2}+{C}_{z}\right)}} The passive and active sensitivities can be expressed as: {S}_{{C}_{1}}^{{\omega }_{0}}=-\frac{1}{2}\frac{{C}_{1}}{{C}_{1}+{C}_{z}} , {S}_{{C}_{2}}^{{\omega }_{0}}=-\frac{1}{2}\frac{{C}_{2}}{{C}_{2}+{C}_{z}} , {S}_{{C}_{z}}^{{\omega }_{0}}=-\frac{1}{2}\left(\frac{1}{{C}_{1}+{C}_{z}}+\frac{1}{{C}_{2}+{C}_{z}}\right){C}_{z} , {S}_{{\beta }^{+}}^{{\omega }_{0}}=-\frac{1}{2}\frac{{\beta }^{+}{R}_{z}\left({R}_{0}{g}_{{m}_{2}}-{R}_{z}{g}_{{m}_{1}}\right)}{{R}_{0}+{R}_{z}-{R}_{0}{R}_{z}{g}_{{m}_{2}}{\beta }^{+}+{R}_{z}^{2}{\beta }^{+}{g}_{{m}_{1}}} , {S}_{{g}_{{m}_{1}}}^{{\omega }_{0}}=\frac{1}{2}\frac{{R}_{z}^{2}{\beta }^{+}{g}_{{m}_{1}}}{{R}_{0}+{R}_{z}-{R}_{0}{R}_{z}{g}_{{m}_{2}}{\beta }^{+}+{R}_{z}^{2}{\beta }^{+}{g}_{{m}_{1}}} , {S}_{{g}_{{m}_{2}}}^{{\omega }_{0}}=-\frac{1}{2}\frac{{R}_{0}{R}_{z}{g}_{{m}_{2}}{\beta }^{+}}{{R}_{0}+{R}_{z}-{R}_{0}{R}_{z}{g}_{{m}_{2}}{\beta }^{+}+{R}_{z}^{2}{\beta }^{+}{g}_{{m}_{1}}} , {S}_{{R}_{0}}^{{\omega }_{0}}=-\frac{1}{2}\frac{{R}_{z}\left(1+{R}_{z}{\beta }^{+}{g}_{{m}_{1}}\right)}{{R}_{0}+{R}_{z}-{R}_{0}{R}_{z}{g}_{{m}_{2}}{\beta }^{+}+{R}_{z}^{2}{\beta }^{+}{g}_{{m}_{1}}} , {S}_{{R}_{z}}^{{\omega }_{0}}=-\frac{1}{2}\left(1+\frac{2{R}_{0}+{R}_{z}}{{R}_{z}+{R}_{0}-{R}_{0}{R}_{z}{\beta }^{+}{g}_{{m}_{2}}+{R}_{z}^{2}{\beta }^{+}{g}_{{m}_{1}}}\right) In the ideal case, the various sensitivities of ω0 with respect to C1, C2, R0, Cz, Rz, gm1, gm2 and β+ are found to be {S}_{{C}_{1}}^{{\omega }_{0}}={S}_{{C}_{2}}^{{\omega }_{0}}={S}_{{R}_{0}}^{{\omega }_{0}}={S}_{{R}_{z}}^{{\omega }_{0}}=-\frac{1}{2}, \quad {S}_{{g}_{{m}_{1}}}^{{\omega }_{0}}={S}_{{\beta }^{+}}^{{\omega }_{0}}=\frac{1}{2}, \quad {S}_{{C}_{z}}^{{\omega }_{0}}={S}_{{g}_{{m}_{2}}}^{{\omega }_{0}}=0. Considering the typical values of the various parasitics, e.g.
Cz = 0.81 pF, Rz = 53 kΩ, β+ = β− = 1, along with gm1 = 310.477 µƱ, gm2 = 291.186 µƱ, C1 = C2 = 10 nF, and R0 = 4 kΩ, the various sensitivities are found to be {S}_{{C}_{1}}^{{\omega }_{0}}=-0.006 , {S}_{{C}_{2}}^{{\omega }_{0}}=-0.006 , {S}_{{C}_{Z}}^{{\omega }_{0}}=-0.987 , {S}_{{R}_{0}}^{{\omega }_{0}}=-0.533 , {S}_{{R}_{Z}}^{{\omega }_{0}}=-0.535 , {S}_{{g}_{{m}_{1}}}^{{\omega }_{0}}=0.502 , {S}_{{g}_{{m}_{2}}}^{{\omega }_{0}}=-0.0355 , {S}_{{\beta }^{+}}^{{\omega }_{0}}=0.466 . Frequency stability is an important figure of merit of an oscillator. The frequency stability factor is defined as {S}^{F}=\text{d}\phi \left(u\right)/\text{d}u , where u=\omega /{\omega }_{0} is the normalized frequency and \phi \left(u\right) represents the phase function of the open-loop transfer function of the oscillator circuit. With C1 = C2 = C, R0 = 1/gm2 = 1/g and gm1 = ng, the SF for the proposed QSO is found to be: {S}^{F}=2\sqrt{n} Thus, the proposed configuration offers a very high frequency stability factor for larger values of n. The proposed QSO was simulated using the CMOS VD-DIBA (shown in Figure 3) to verify the theoretical analysis. The passive elements were selected as R0 = 4 kΩ and C1 = C2 = 10 nF. The transconductances of the VD-DIBAs were controlled by the bias voltages VB1 and VB2 respectively. The simulated output waveforms for the transient response and steady-state response are shown in Figure 4 and Figure 5 respectively. These results confirm the validity of the proposed structure. Figure 6 shows the simulation results of the output spectrum, where the total harmonic distortion (THD) is found to be about 1.9% for both outputs Vo1 and Vo2. The quadrature relationship between the generated waveforms has been confirmed by the Lissajous pattern shown in Figure 7. Figure 3. A CMOS transistor implementation of VD-DIBA, VB2 = VB3 = −0.22 V and VB4 = −0.9 V, VDD = −VSS = 2 V [16] . Figure 4. Transient response of proposed QSO. Figure 5. Steady state response of proposed QSO. The CMOS VD-DIBA is
implemented using 0.35 µm MIETEC technology. The transistor model parameters used for the CMOS VD-DIBA are listed in Table 1, and the aspect ratios (W/L ratios) of the MOSFETs used in Figure 3 are shown in Table 2. A comparison with previously known quadrature sinusoidal oscillators is given in Table 3. Figure 6. Frequency response of proposed QSO. Figure 7. Lissajous pattern of proposed QSO. Table 1. Transistor process parameters in SPICE simulations. Table 2. Aspect ratios of CMOS transistors used in Figure 3. Table 3. Comparison of previously known quadrature sinusoidal oscillators. In this communication, an electronically tunable voltage-mode quadrature sinusoidal oscillator enabling independent electronic control of the frequency of oscillation and the condition of oscillation is presented. The proposed QSO circuit employs only two VD-DIBAs, two grounded capacitors and a resistor. The proposed QSO is capable of simultaneously providing two explicit quadrature voltage outputs. The condition of oscillation and the frequency of oscillation of the proposed circuit are controllable electronically through separate transconductances of the VD-DIBAs. The workability of the proposed structure has been demonstrated by PSPICE simulations using 0.35 µm MIETEC technology. Pushkar, K.L. (2018) Electronically Controllable Quadrature Sinusoidal Oscillator Using VD-DIBAs. Circuits and Systems, 9, 41-48. https://doi.org/10.4236/cs.2018.93004 1. Horng, J.W., Hou, C.L., Chang, C.M., Chung, W.Y., Tang, H.W. and Wen, Y.H. (2005) Quadrature Oscillators Using CCIIs. International Journal of Electronics, 92, 21-31. https://doi.org/10.1080/00207210412331332899 2. Gibson, J.D. (1997) The Communication Handbook. CRC Press, Boca Raton. 3. Tangsrirat, W. and Surakampontorn, W. (2009) Single-Resistance Controlled Quadrature Oscillator and Universal Biquad Filter Using CFOAs. AEU-International Journal of Electronics and Communications, 63, 1080-1086. https://doi.org/10.1016/j.aeue.2008.08.006 4. Horng, J.W.
Rectangular Grid Walk - Problem Solving Practice Problems Online | Brilliant
An ant in the coordinate plane needs to travel from the origin to (4, 4) by moving in 1-unit increments up or to the right. However, the ant cannot travel through (3, 1), (2, 2), or (1, 3). How many possible paths can the ant take?
Wonder Woman needs to rescue a man from a burning building located at (12, 8). If she is flying from the origin in diagonal increments of (+1, +1) or (+1, -1), in how many ways can she reach the burning building?
The Three Musketeers are located at the origin and trying to reach d'Artagnan, who is located at (7, 7). Each step, they may only move up or right 1 unit, and on every third step (that is, whenever the sum of their coordinates is divisible by 3) they cannot move to the right. How many paths can they take to d'Artagnan?
In Lemuria, the marketplace is laid out in a four-by-five grid. Vendors at each of the twenty locations in the grid are followed by the vendor immediately north of them and the vendor immediately west of them (if either exists). Whenever a vendor receives a sample, that vendor sends its own (different) samples to each vendor (if any) following it. If the southeasternmost vendor sends a sample to each of the vendors following it, then how many samples will be passed around in total?
A fly is moving through three-dimensional space, from the origin to (2, 1, 4). However, the fly moves erratically, so it can only move one unit at a time in the positive x-, y-, or z-direction. In how many ways could it reach its destination?
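Counting problems like the first one are standard lattice-path counts, and the forbidden-point variant is easy to handle with dynamic programming. A short Python sketch (the function name is my own; the forbidden points are those of the ant problem):

```python
def count_paths(target, forbidden=()):
    """Count monotone (up/right) lattice paths from (0, 0) to target
    that avoid every point in `forbidden`."""
    tx, ty = target
    banned = set(forbidden)
    ways = [[0] * (ty + 1) for _ in range(tx + 1)]
    for x in range(tx + 1):
        for y in range(ty + 1):
            if (x, y) in banned:
                continue  # a forbidden point contributes no paths
            if x == 0 and y == 0:
                ways[x][y] = 1
                continue
            if x > 0:
                ways[x][y] += ways[x - 1][y]  # arrive with a rightward step
            if y > 0:
                ways[x][y] += ways[x][y - 1]  # arrive with an upward step
    return ways[tx][ty]

# The ant problem: (0, 0) -> (4, 4) avoiding (3, 1), (2, 2), (1, 3).
print(count_paths((4, 4), [(3, 1), (2, 2), (1, 3)]))  # 2
# Sanity check: with nothing forbidden this is C(8, 4) = 70.
print(count_paths((4, 4)))  # 70
```

The same table-filling idea extends directly to the three-dimensional fly problem by adding a z index.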
Set (game) | Brilliant Math & Science Wiki
Set is a game played with cards that each contain four attributes, where each attribute takes on one of three possible values. The goal of the game is to find sets (hence the game's name) of three cards, such that for each of the four attributes, either all three cards have different values or all three cards have the same value.
The cards have different numbers, colors, shapes, and textures, forming a set
Mathematics of Set
Each card contains four attributes, each of which takes on three values:
Number: each card contains either 1, 2, or 3 shapes.
Color: the shapes on each card are either red, green, or purple.
Shape: the shapes on each card are either ovals, diamonds, or squiggles.
Texture: each shape is either hollow, shaded, or filled.
Each combination of attributes corresponds to exactly one card, for a total of 81 cards in the deck. At any time, 12 cards are revealed, and the fastest player to find a set within these 12 cards wins the set (depending on the players, the penalty for a false set claim can range from being unable to claim a set again that round to losing an already won set). The three cards in the set are then removed, and another three cards are revealed, with the game continuing until the deck is exhausted. Additionally, if no player is able to find a set within the group of 12 cards after some time, another three cards are revealed (with the agreement of all players). The next set claimed will not be replaced, since there will already be at least 12 cards showing. As shown below, it is indeed possible (though somewhat unlikely) that a group of 12 cards contains no set; in fact, it is possible for a group of up to 20 cards to contain no set.
Though the game of Set is largely based on pattern recognition, there are a number of approaches used to speed up the searching process.
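The defining rule has a compact arithmetic form: if each attribute value is encoded as 0, 1, or 2 (the encoding is my assumption, not from the article), then "all same or all different" is exactly the condition that each attribute's three values sum to 0 mod 3. A Python sketch:

```python
def is_set(a, b, c):
    """Cards are 4-tuples over {0, 1, 2}. Per attribute, 'all same' or
    'all different' is equivalent to the three values summing to 0 mod 3."""
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

print(is_set((0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2)))  # True
print(is_set((0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 1)))  # False
```

This is the check players perform attribute by attribute, and it is also the algebraic fact behind the geometric interpretation discussed later in the article.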
Analyzing a board: Firstly, chances are very high that some attribute will be under-represented, in the sense that it is quite likely that (for example) there will be only one or two shaded cards in play. This means that sets involving those cards can be checked quite quickly, and if (as is most likely) they are not involved in any sets, any three cards that form a set necessarily have the same value for that attribute: in the previous example, any set would have three cards with filled shapes or three cards with hollow shapes. Similarly, the chances are high that some attribute will be over-represented, in the sense that it is quite likely that (for example) there will be many cards with ovals in play. This means that it is often worth examining only possible sets containing these cards, as this guarantees one attribute will be satisfied (and there will be many more possibilities for the remaining three to be satisfied).
Transitioning between boards: The most important part of the game is the period when the three new cards are revealed, as the players have already performed a significant amount of analysis on the 9 cards already showing. Firstly, it is not unusual for the remaining 9 cards to themselves contain a set, so it is certainly worth analyzing the remaining cards for this reason alone. However, even more can be learned by anticipating possible useful cards: for instance, an over-represented attribute (e.g. a lot of ones) within the 9 cards means that an additional card with the same attribute is very likely to complete a set. Finally, when the new cards are revealed, they are likely to be the ones involved in the set (should one exist), since the players will have already analyzed the 9 showing cards and removed any sets from this pool of cards.
The mechanic of adding three additional cards when no set is possible leads to two natural questions: What is the probability that it will be necessary to add three cards?
What is the largest number of cards that can be revealed such that there is no set among them? Both of these questions are answered by interpreting the game of Set geometrically: each card can be viewed as an element of \mathbb{F}_3^4, which basically means a 4-D point of the form (a_1, a_2, a_3, a_4), where a_i = 1, 2, or 3 for each i. For instance, the point (1, 2, 3, 1) could correspond to "one green hollow squiggle". The geometric interpretation of a set is then quite nice: three cards form a set if their associated points form a line.
Significant work is still necessary to solve the case of 4 attributes, but the situation is fairly easy to analyze when there are only 2 attributes (making the space \mathbb{F}_3^2). In that situation, the question of the largest number of cards containing no set is answered by the 2-dimensional geometric interpretation: what is the maximum number of integer points (x, y), with 0 \leq x \leq 2 and 0 \leq y \leq 2, such that no three form a "line", where a line can "loop around"?
All three colors represent lines in \mathbb{F}_3^2
It is not too difficult to show that the maximal number of points (squares in the above picture) that do not form any line is 4:
Two examples of four points with no lines
Suppose it were possible to find 5 points, no three of which are collinear. Then each horizontal line contains at most 2 points, and so some horizontal line contains exactly one point; call it P. There are four lines going through P: H, the horizontal line through the point; V, the vertical line through the point; D_-, the down-right diagonal line through the point; and D_+, the up-right diagonal line through the point. Since H contains no points other than P, and each point other than P is on exactly one of the remaining three lines, at least one of V, D_+, D_- contains two points other than P (by the pigeonhole principle), and thus contains three collinear points -- a contradiction.
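The pigeonhole argument can also be verified exhaustively: in \mathbb{F}_3^2, three distinct points are collinear exactly when their coordinates sum to zero mod 3 componentwise, so a brute-force search over all subsets of the nine points settles the question. A Python sketch (a check under that collinearity characterization, not a replacement for the proof):

```python
from itertools import combinations, product

points = list(product(range(3), repeat=2))  # the 9 points of F_3^2

def has_line(subset):
    """In F_3^2, three distinct points are collinear exactly when they
    sum to (0, 0) componentwise mod 3 (lines may 'loop around')."""
    return any(all(sum(t) % 3 == 0 for t in zip(p, q, r))
               for p, q, r in combinations(subset, 3))

# The largest subset size admitting a line-free ("cap") configuration:
best = max(k for k in range(1, 10)
           if any(not has_line(s) for s in combinations(points, k)))
print(best)  # 4
```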
Hence there is no collection of 5 points in \mathbb{F}_3^2 with no three collinear, making the maximum 4. This type of reasoning gets more difficult to work with in higher dimensions, but the general strategy remains the same. In n dimensions, the maximum number of cards with no set is:

n | # of cards
1 | 2
2 | 4
3 | 9
4 | 20
5 | 45
6 | between 112 and 114
≥ 7 | unknown

which, as applied to the traditional Set game, means that it is possible to find 20 cards with no set[1].
20 cards that contain no set
Even higher dimensions are more difficult to work with and require additional tools from the field of Ramsey theory. A recent (as of 2016) breakthrough shows that a cap set (a collection of cards with no set) in n dimensions has size at most on the order of 2.756^n -- equivalently, at most a fraction (2.756/3)^n of the 3^n cards -- a result with applications as far-reaching as quicker matrix multiplication algorithms.
The other question, regarding the probability of needing extra cards, is best answered by computer program. Knuth provides the following numbers[2]:

# of cards | # of card-sets with no set | probability of a set
5 | 22441536 | 12.41%
6 | 247615056 | 23.70%
7 | 2144076480 | 38.34%
8 | 14587567020 | 54.65%
10 | 318294370368 | 83.05%
12 | 2284535476080 | 96.77%
16 | 1141342138404 | 99.9996%

with the probability of a set for still larger deals rising through 1 - 10^{-5}, 1 - 10^{-8}, 1 - 10^{-11}, and 1 - 10^{-13}. In the typical 12-card case, this gives a probability of slightly less than 1/30 that no set is present.
Davis, B. and Maclagan, D. "The Card Game Set". Retrieved April 2, 2016, from https://web.archive.org/web/20130605073741/http://www.math.rutgers.edu/~maclagan/papers/set.pdf
joriki. "In the card game Set, what's the probability of a Set existing in n cards?". Retrieved April 2, 2016, from http://math.stackexchange.com/q/203146
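The line interpretation also recovers some familiar counts: every pair of cards determines exactly one third card completing a set, so there are C(81,2)/3 = 1080 sets, and three random cards form a set with probability 1080/C(81,3) = 1/79. A quick enumeration check (the 0/1/2 attribute encoding is my assumption):

```python
from itertools import combinations, product
from math import comb

cards = list(product(range(3), repeat=4))  # all 81 cards

def is_set(a, b, c):
    # Per attribute: all same or all different <=> sum divisible by 3.
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

n_sets = sum(1 for triple in combinations(cards, 3) if is_set(*triple))
print(n_sets)                 # 1080
print(comb(81, 3) // n_sets)  # 79: a random triple is a set 1 time in 79
```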
Garbage Collection is the process of freeing heap space for allocation of new objects.
Marking identifies live (reachable) objects and marks them as still used, leaving the others to be considered garbage.
Sweeping traverses the heap to find the free space between live objects, recording it in a free list for future allocation.
Compaction defragments the heap to increase the amount of contiguous free space, speeding up future allocations.
Generations are separate memory pools holding objects of different ages. The weak generational hypothesis states that most objects only survive for a short period of time. Thus the JVM splits heap space allocations into a number of generations:
Eden is where most new objects are allocated.
2x Survivor spaces, one of which is empty at any given time, serving as the destination for any live objects in eden and the other survivor space during GC. After a GC, eden and the source survivor space are both empty. Objects are copied between the survivor spaces a number of times (a process known as ageing) until they are promoted (tenured) into the old generation, or earlier if there isn't sufficient survivor space to keep them.
Old stores aged or long-lived objects.
New objects are allocated in the young generation. GC occurs in each generation when the generation fills up. GCs in the young generation are considered "minor", and most objects are collected there. As surviving objects age, they're moved to the old generation. GCs in the old generation ("major" collections) result in the entire heap being collected, and typically take much longer.
The JVM supports a number of different GC implementations. Generally it is wise to allow the JVM to choose the best implementation for the application, and to attempt to adjust heap and generation sizes before switching collectors. Use a concurrent collector to reduce pause time, and a parallel collector to increase throughput on multi-core systems.
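The mark and sweep phases described above can be illustrated with a toy tracing collector over an explicit object graph -- a Python sketch of the idea only, not how the JVM implements it:

```python
def mark(heap, roots):
    """Mark phase: collect every object id reachable from the roots."""
    live, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj not in live:
            live.add(obj)
            stack.extend(heap[obj])  # follow outgoing references
    return live

def sweep(heap, live):
    """Sweep phase: reclaim every object that was not marked."""
    garbage = set(heap) - live
    for obj in garbage:
        del heap[obj]
    return garbage

# A heap as an object graph: 3 and 4 reference each other,
# but neither is reachable from the root object 1.
heap = {1: [2], 2: [], 3: [4], 4: [3]}
freed = sweep(heap, mark(heap, roots=[1]))
print(sorted(freed))  # [3, 4]
```

Note that the unreachable cycle between objects 3 and 4 is still reclaimed, which is exactly what reachability-based marking buys over naive reference counting.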
-XX:+PrintFlagsRanges prints the range of all the flags.
-XX:+PrintCommandLineFlags prints "ergonomically selected" default JVM flags.
-XX:+PrintFlagsFinal prints the final/resolved set of flags the JVM will run with, including defaults.
-XX:+PrintAdaptiveSizePolicy prints information about adaptive generation sizing.
-verbose:gc or -Xlog:gc* enables logging of GC events.
-Xlog:gc+heap=info prints G1 regions consumed by humongous objects.
-Xlog:codecache=trace prints code heap status.
-XX:+HeapDumpOnOutOfMemoryError dumps heap to a file on OOM.
-XX:HeapDumpPath=path specifies the path to the dump file.
-XX:+DisableExplicitGC disables explicit GC via System.gc().
If more than 98% of the total run-time is spent in GC and less than 2% of the heap is recovered, the collector throws an OutOfMemoryError to prevent applications from running for extended periods of time while making little or no progress because of an undersized heap. This behaviour can be disabled with -XX:-UseGCOverheadLimit.
The JVM attempts to address the following goals, in descending order of priority:
Maximum pause-time goal, set with -XX:MaxGCPauseMillis=n.
Throughput goal, set with -XX:GCTimeRatio=n as a ratio of time spent in GC to time spent in application code (1 / (n + 1); e.g. the default value of 99 sets a goal of 1 / 100, or 1%).
Implicitly, minimising the size of the heap.
Understanding GC events: [Full GC (<trigger>) <start size>-><finish size>(<total heap size>), <real time>]
-Xnoclassgc disables garbage collection of class objects.
-XX:+AggressiveHeap enables aggressive heap optimisation, for long-running applications with intensive memory allocations.
-XX:+ShrinkHeapInSteps toggles incremental (stepwise) reduction of the heap toward the target size over multiple GC cycles.
-XX:InitialSurvivorRatio configures the initial eden:survivor space ratio used by the collector.
-XX:+UseAdaptiveSizePolicy toggles adaptive sizing of the survivor space ratio.
-XX:SurvivorRatio configures the constant eden:survivor space ratio used by the collector when -XX:-UseAdaptiveSizePolicy is specified. -XX:TargetSurvivorRatio sets the desired percentage of survivor space used after a young GC (defaults to 50%). -XX:MaxTenuringThreshold sets the maximum tenuring (ageing) threshold for use in adaptive GC sizing. HotSpot VM GC Tuning Guide
This article is about an obsolete version of the CIELUV color space. For the video color model, see YUV.
The Planckian locus on the MacAdam (u, v) chromaticity diagram. The normals are lines of equal correlated color temperature.
The CIE 1960 color space ("CIE 1960 UCS", variously expanded as Uniform Color Space, Uniform Color Scale, Uniform Chromaticity Scale, or Uniform Chromaticity Space) is another name for the (u, v) chromaticity space devised by David MacAdam.[1] The CIE 1960 UCS does not define a luminance or lightness component, but the Y tristimulus value of the XYZ color space or a lightness index similar to W* of the CIE 1964 color space is sometimes used.[2] Today, the CIE 1960 UCS is mostly used to calculate correlated color temperature, where the isothermal lines are perpendicular to the Planckian locus. As a uniform chromaticity space, it has been superseded by the CIE 1976 UCS.
Judd determined that a more uniform color space could be found by a simple projective transformation of the CIE XYZ tristimulus values:[3]
\begin{pmatrix}R\\G\\B\end{pmatrix} = \begin{pmatrix}3.1956&2.4478&-0.1434\\-2.5455&7.0492&0.9963\\0.0000&0.0000&1.0000\end{pmatrix} \begin{pmatrix}X\\Y\\Z\end{pmatrix}
(Note: what we have called "G" and "B" here are not the G and B of the CIE 1931 color space, and in fact are "colors" that do not exist at all.) Judd was the first to employ this type of transformation, and many others were to follow. Converting this RGB space to chromaticities, one finds:[4]
Judd's UCS, with the Planckian locus and the isotherms from 1,000 K to 10,000 K, perpendicular to the locus. Judd then translated these isotherms back into the CIE XYZ color space. (The colors used in this illustration are illustrative only and do not correspond to the true colors represented by the respective points.)
u_Judd = (0.4661x + 0.1593y) / (y - 0.15735x + 0.2424) = (5.5932x + 1.9116y) / (12y - 1.8882x + 2.9088)
v_Judd = 0.6581y / (y - 0.15735x + 0.2424) = 7.8972y / (12y - 1.8882x + 2.9088)
MacAdam simplified Judd's UCS for computational purposes:
u = 4x / (12y - 2x + 3)
v = 6y / (12y - 2x + 3)
The Colorimetry committee of the CIE considered MacAdam's proposal at its 14th Session in Brussels for use in situations where more perceptual uniformity was desired than the (x, y) chromaticity space offers,[5] and officially adopted it as the standard UCS the next year.[6]
Relation to CIE XYZ
The CIE 1960 UCS, also known as the MacAdam (u, v) chromaticity diagram. Colors outside the colored triangle cannot be represented on most computer screens.
U, V, and W can be found from X, Y, and Z using:
U = (2/3)X
V = Y
W = (1/2)(-X + 3Y + Z)
and conversely:
X = (3/2)U
Y = V
Z = (3/2)U - 3V + 2W
We then find the chromaticity variables as:
u = U / (U + V + W) = 4X / (X + 15Y + 3Z)
v = V / (U + V + W) = 6Y / (X + 15Y + 3Z)
We can also convert from u and v to x and y:
x = 3u / (2u - 8v + 4)
y = 2v / (2u - 8v + 4)
Relation to CIE 1976 UCS
Main article: CIELUV
u' = u
v' = (3/2)v
^ MacAdam, David Lewis (August 1937). "Projective transformations of I.C.I. color specifications". JOSA. 27 (8): 294–299. doi:10.1364/JOSA.27.000294.
^ Arun N. Netravali, Barry G. Haskell (1986). Digital Pictures: Representation, Compression, and Standards (2nd ed.). Springer. p. 288. ISBN 0-306-42195-X.
^ Judd, Deane B. (January 1935). "A Maxwell Triangle Yielding Uniform Chromaticity Scales". JOSA. 25 (1): 24–35. doi:10.1364/JOSA.25.000024.
An important application of this coordinate system is its use in finding from any series of colors the one most resembling a neighboring color of the same brilliance, for example, the finding of the nearest color temperature for a neighboring non-Planckian stimulus. The method is to draw the shortest line from the point representing the non-Planckian stimulus to the Planckian locus.
^ OSA Committee on Colorimetry (November 1944). "Quantitative data and methods for colorimetry". JOSA. 34 (11): 633–688.
^ CIE (January 1960). "Brussels Session of the International Commission on Illumination". JOSA. 50 (1): 89–90. The use of the following chromaticity diagram is provisionally recommended whenever a diagram yielding color spacing perceptually more nearly uniform than the (xy) diagram is desired. The chromaticity diagram is produced by plotting 4X/(X + 15Y + 3Z) as abscissa and 6Y/(X + 15Y + 3Z) as ordinate, in which X, Y, and Z are the tristimulus values corresponding to the 1931 CIE Standard Observer and Coordinate System.
^ "Official Recommendations". Publication No. 004: Proceedings of the CIE Session 1959 in Bruxelles. 14th Session. Vol. A. Brussels: International Commission on Illumination. 1960. p. 36.
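The conversion formulas in this article are easy to sanity-check numerically: the (u, v) values computed from X, Y, Z must agree with those computed from the (x, y) chromaticities, and the inverse must round-trip. A Python sketch (the test point is arbitrary, roughly a D65 white):

```python
def xyz_to_uv(X, Y, Z):
    """CIE 1960 (u, v) from XYZ tristimulus values."""
    d = X + 15 * Y + 3 * Z
    return 4 * X / d, 6 * Y / d

def xy_to_uv(x, y):
    """The same (u, v), computed from (x, y) chromaticities."""
    d = 12 * y - 2 * x + 3
    return 4 * x / d, 6 * y / d

def uv_to_xy(u, v):
    """Inverse: (x, y) chromaticities from (u, v)."""
    d = 2 * u - 8 * v + 4
    return 3 * u / d, 2 * v / d

X, Y, Z = 95.047, 100.0, 108.883  # arbitrary test point
s = X + Y + Z
u, v = xyz_to_uv(X, Y, Z)

# The XYZ route and the xy route agree, and uv_to_xy inverts them:
assert all(abs(a - b) < 1e-12 for a, b in zip((u, v), xy_to_uv(X / s, Y / s)))
assert all(abs(a - b) < 1e-12 for a, b in zip(uv_to_xy(u, v), (X / s, Y / s)))
```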
Pi - Simple English Wikipedia, the free encyclopedia
This article is about the number. For the Greek letter, see Pi (letter).
The number π (/paɪ/) is a mathematical constant that is the ratio of a circle's circumference to its diameter. This produces a number, and that number is always the same. However, the number is rather strange. The number starts as 3.141592653589793... and continues without end. Numbers like this are called irrational numbers.[1][2][3]
Pi is an endless string of numbers
The diameter is the largest chord which can be fitted inside a circle. It passes through the center of the circle. The distance around a circle is known as the circumference. Even though the diameter and circumference are different for different circles, the number pi remains constant: its value never changes. This is because the relationship between the circumference and diameter is always the same.[4]
π is commonly defined as the ratio of a circle's circumference C to its diameter d:[5]
π = C/d.
Approximate value
A diagram showing how π can be found by using a circle with a diameter of one. The circumference of this circle is π.
Pi is often written using the Greek letter π as a shortcut. Pi is also an irrational number, meaning it cannot be written as a fraction (a/b), where 'a' and 'b' are integers (whole numbers).[2][3] This basically means that the digits of pi to the right of the decimal point go on forever without repeating in a pattern, and that it is impossible to write the exact value of pi as a number. Pi can only be approximated, or measured to a value that is close enough for practical purposes.[6] A value close to pi is 3.141592653589793238462643...[7] A common fraction approximation of pi is 22/7, which yields approximately 3.14285714. This approximation is 0.04% away from the true value of pi.
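The 0.04% figure is easy to verify with a few lines of Python (355/113, a classical closer approximation, is included for comparison):

```python
import math

# Compare common fraction approximations of pi with the true value.
for name, a, b in [("22/7", 22, 7), ("355/113", 355, 113)]:
    rel_err = abs(a / b - math.pi) / math.pi
    print(f"{name} = {a / b:.8f}, relative error {rel_err:.6%}")

# 22/7 is about 0.04% away from pi; 355/113 is closer by several orders of magnitude.
assert abs(22 / 7 - math.pi) / math.pi < 5e-4
assert abs(355 / 113 - math.pi) / math.pi < 1e-6
```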
While this approximation is accepted for most of its use in real life, the fraction 355/113 is more accurate (giving about 3.14159292), and can be used when a value closer to pi is needed.[8] Computers can be used to get better approximations of pi. In March 2019, Emma Haruka Iwao calculated the value of pi to 31.4 trillion digits.[9][10]
Mathematicians have known about pi for thousands of years, because they have been working with circles for the same amount of time. Civilizations as old as the Babylonians were able to approximate pi to many digits, with fractions such as 25/8 and 256/81. Most historians believe that the ancient Egyptians had no concept of π, and that the apparent appearance of pi in the proportions of their monuments is a coincidence.[11] The first written reference to pi dates to 1900 BC.[12] Around 1650 BC, the Egyptian Ahmes gave a value in the Rhind Papyrus. The Babylonians were able to find that the value of pi was slightly greater than 3 by simply making a big circle, sticking a piece of rope onto the circumference and the diameter, taking note of their lengths, and then dividing the circumference by the diameter. Knowledge of the number pi passed back into Europe and into the hands of the Hebrews, who made the number important in a section of the Bible called the Old Testament. After this, the most common way of trying to find pi was to draw a shape of many sides inside a circle, and use the area of the shape to find pi. The Greek philosopher Archimedes, for example, used a polygon with 96 sides to find the value of pi, while the Chinese in 500 CE were able to use a polygon with 16,384 sides. The Greeks, like Anaxagoras of Clazomenae, were also busy with other properties of the circle, such as how to square the circle.
Since then, many people have been trying to find out more and more exact values of pi.[13]
Claudius Ptolemy (around 150 CE): 3.1416
Zu Chongzhi (430–501 CE): 3.1415929203
al-Khwarizmi (around 800 CE): 3.1416
al-Kashi (around 1430): 3.14159265358979
Viète (1540–1603): 3.141592654
Roomen (1561–1615): 3.14159265358979323
Van Ceulen (around 1600): 3.14159265358979323846264338327950288
In the 16th century, better and better ways of finding pi became available, such as the complicated formula that the French lawyer François Viète developed. The first use of the Greek symbol "π" was in an essay written in 1706 by William Jones. A mathematician named Lambert also showed in 1761 that the number pi was irrational; that is, it cannot be written as a fraction. Another mathematician, named Lindemann, showed in 1882 that pi belongs to the group of numbers known as transcendentals, which are numbers that cannot be the solution to a polynomial equation with integer coefficients.[3][14]
Pi can also be used for figuring out many other things besides circles.[11] The properties of pi have allowed it to be used in many areas of math besides geometry, the study of shapes. Some of these areas are complex analysis, trigonometry, and series.
Pi in real life
There are different ways to calculate many digits of π. This is of limited use, though. Pi can be used to work out the area or the circumference of any circle. To find the circumference of a circle, use the formula C (circumference) = π × diameter. To find the area of a circle, use the formula A (area) = π × radius². This formula is sometimes written as A = πr², where r is the variable for the radius of any circle and A is the variable for the area of that circle. To calculate the circumference of a circle with an error of 1 mm:
4 digits are needed for a radius of 30 meters
10 digits for a radius equal to that of the earth
15 digits for a radius equal to the distance from the earth to the sun.
20 digits for a radius equal to the distance from the earth to Polaris.
People generally celebrate March 14 as Pi Day, because March 14 is also written as 3/14, which represents the first three digits, 3.14, in the approximation of pi.[6] Pi Day was first celebrated in 1988.
↑ 2.0 2.1 "Pi". www.mathsisfun.com. Retrieved 2020-08-10.
↑ 3.0 3.1 3.2 Weisstein, Eric W. "Pi". mathworld.wolfram.com. Retrieved 2020-08-10.
↑ Arndt, Jörg; Haenel, Christoph (2006). Pi Unleashed. Springer-Verlag. ISBN 978-3-540-66572-4. English translation by Catriona and David Lischka.
↑ 6.0 6.1 "About Pi". 1994–2010. Retrieved 2010-06-05.
↑ "How Many Decimals of Pi Do We Really Need?". jpl.nasa.gov. Retrieved February 19, 2018.
↑ "Pi to 4 Million Decimals". Archived from the original (php) on 2008-03-09. Retrieved 2010-06-05.
↑ Cajori, Florian (2007). A History of Mathematical Notations: Vol. II. Cosimo, Inc. pp. 8–13. ISBN 978-1-60206-714-1. the ratio of the length of a circle to its diameter was represented in the fractional form by the use of two letters ... J.A. Segner ... in 1767, he represented 3.14159... by δ:π, as did Oughtred more than a century earlier
↑ Shaban, Hamza (2019). "Pi Day news: Google employee breaks record, calculates 31.4 trillion digits of Pi". chicagotribune.com. Chicago Tribune. Retrieved 2019-03-14.
↑ 11.0 11.1 Arndt, Jörg; Haenel, Christoph (2001). Pi - Unleashed. Springer Science & Business Media. ISBN 978-3-540-66572-4.
↑ Beckmann, Petr (1971). A History of Pi. St. Martins Press, London.
↑ J.J. O'Connor; E.F. Robertson (August 2001). "Pi History". Retrieved 2010-06-05.
↑ "PI". 2000–2005. Retrieved 2010-06-06.
Linear Regression with Interaction Effects - MATLAB & Simulink - MathWorks América Latina
Perform stepwise linear regression. Plot prediction slice plots. Plot main effects. Plot interaction effects. Plot prediction effects.
To retain only the first column of blood pressure, store data in a table. For the initial model, use the full model with all terms and their pairwise interactions.
The final model in formula form is BloodPressure ~ 1 + Age + Smoker + Sex*Weight. This model includes all four main effects (Age, Smoker, Sex, Weight) and the two-way interaction between Sex and Weight. This model corresponds to
BP = \beta_0 + \beta_A X_A + \beta_{Sm} I_{Sm} + \beta_S I_S + \beta_W X_W + \beta_{SW} X_W I_S + \epsilon,
where:
BP is the blood pressure
\beta_i are the coefficients
I_{Sm} is the indicator variable for smoking; I_{Sm} = 1 indicates a smoking patient, whereas I_{Sm} = 0 indicates a nonsmoking patient
I_S is the indicator variable for sex; I_S = 1 indicates a male patient, whereas I_S = 0 indicates a female patient
X_A is the Age variable
X_W is the Weight variable
\epsilon is the error term
The following table shows the fitted linear model for each gender and smoking combination.
Smoker (I_{Sm} = 1), Male (I_S = 1):
BP = (\beta_0 + \beta_{Sm} + \beta_S) + \beta_A X_A + (\beta_W + \beta_{SW}) X_W
\hat{BP} = 107.5617 + 0.11584 X_A + 0.11826 X_W
Smoker (I_{Sm} = 1), Female (I_S = 0):
BP = (\beta_0 + \beta_{Sm}) + \beta_A X_A + \beta_W X_W
\hat{BP} = 143.0007 + 0.11584 X_A - 0.1393 X_W
Nonsmoker (I_{Sm} = 0), Male (I_S = 1):
BP = (\beta_0 + \beta_S) + \beta_A X_A + (\beta_W + \beta_{SW}) X_W
\hat{BP} = 97.901 + 0.11584 X_A + 0.11826 X_W
Nonsmoker (I_{Sm} = 0), Female (I_S = 0):
BP = \beta_0 + \beta_A X_A + \beta_W X_W
\hat{BP} = 133.17 + 0.11584 X_A - 0.1393 X_W
As seen from these models, \beta_{Sm} and \beta_S show how much the intercept of the response function changes when the corresponding indicator variable takes the value 1 compared to when it takes the value 0. \beta_{SW}, however, shows the effect of the Weight variable on the response variable when the indicator variable for sex takes the value 1 compared to when it takes the value 0. You can explore the main and interaction effects in the final model using the methods of the LinearModel class as follows.
This plot shows the main effects for all predictor variables. The green line in each panel shows the change in the response variable as a function of the predictor variable when all other predictor variables are held constant. For example, for a smoking male patient aged 37.5, the expected blood pressure increases as the weight of the patient increases, given all else the same.
The dashed red curves in each panel show the 95% confidence bounds for the predicted response values. The horizontal dashed line in each panel shows the predicted response for the specific value of the predictor variable corresponding to the vertical dashed line. You can drag these lines to get the predicted response values at other predictor values, as shown next. For example, the predicted value of the response variable is 118.3497 when a patient is female, nonsmoking, age 40.3788, and weighs 139.9545 pounds. The values in the square brackets, [114.621, 122.079], show the lower and upper limits of a 95% confidence interval for the estimated response. Note that, for a nonsmoking female patient, the expected blood pressure decreases as the weight increases, given all else is held constant. This plot displays the main effects. The circles show the magnitude of the effect and the blue lines show the upper and lower confidence limits for the main effect. For example, being a smoker increases the expected blood pressure by 10 units, compared to being a nonsmoker, given all else is held constant. Expected blood pressure increases about two units for males compared to females, again, given other predictors held constant. An increase in age from 25 to 50 causes an expected increase of 4 units, whereas a change in weight from 111 to 202 causes about a 4-unit decrease in the expected blood pressure, given all else held constant. This plot displays the impact of a change in one factor given the other factor is fixed at a value. Be cautious while interpreting the interaction effects. When there is not enough data on all factor combinations or the data is highly correlated, it might be difficult to determine the interaction effect of changing one factor while keeping the other fixed. In such cases, the estimated interaction effect is an extrapolation from the data. The blue circles show the main effect of a specific term, as in the main effects plot. 
The red circles show the impact of a change in one term for fixed values of the other term. For example, in the bottom half of this plot, the red circles show the impact of a weight change in female and male patients, separately. You can see that an increase in a female's weight from 111 to 202 pounds causes about a 14-unit decrease in the expected blood pressure, while an increase of the same amount in the weight of a male patient causes about a 5-unit increase in the expected blood pressure, again given other predictors are held constant. This plot shows the effect of changing one variable as the other predictor variable is held constant. In this example, the last figure shows the response variable, blood pressure, as a function of weight, when the variable sex is fixed at males and females. The lines for males and females cross, which indicates a strong interaction between weight and sex. You can see that the expected blood pressure increases as the weight of a male patient increases, but decreases as the weight of a female patient increases.
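The grouped equations in the table above can be wrapped into a single prediction function. A Python sketch (the document's example uses MATLAB; the function name is my own, and the coefficients are the fitted values printed in the table):

```python
# Per-group fitted equations taken from the table
# ((smoker, male) -> (intercept, age slope, weight slope)).
COEFFS = {
    (True, True): (107.5617, 0.11584, 0.11826),
    (True, False): (143.0007, 0.11584, -0.1393),
    (False, True): (97.901, 0.11584, 0.11826),
    (False, False): (133.17, 0.11584, -0.1393),
}

def predict_bp(age, weight, smoker, male):
    """Predicted blood pressure for one patient under the interaction model."""
    b0, b_age, b_weight = COEFFS[(smoker, male)]
    return b0 + b_age * age + b_weight * weight

# The document's example: nonsmoking female, age 40.3788, weight 139.9545 lb.
print(round(predict_bp(40.3788, 139.9545, smoker=False, male=False), 4))
# -> 118.3518 (the document reports 118.3497; the small gap is coefficient rounding)
```

Note how the weight slope flips sign between the male and female groups: that sign flip is exactly the Sex*Weight interaction the crossing lines illustrate.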
Direct shear test - Knowpia

A direct shear test is a laboratory or field test used by geotechnical engineers to measure the shear strength properties of soil[1][2] or rock[2] material, or of discontinuities in soil or rock masses.[2][3] The U.S. standards defining how the test should be performed are ASTM D 3080 and AASHTO T236; the corresponding U.K. standard is BS 1377-7:1990. For rock the test is generally restricted to rock with (very) low shear strength. The test is, however, standard practice to establish the shear strength properties of discontinuities in rock. The test is performed on three or four specimens from a relatively undisturbed soil sample. A specimen is placed in a shear box which has two stacked rings to hold the sample; the contact between the two rings is at approximately the mid-height of the sample. A confining stress is applied vertically to the specimen, and the upper ring is pulled laterally until the sample fails, or through a specified strain. The load applied and the strain induced is recorded at frequent intervals to determine a stress–strain curve for each confining stress. Several specimens are tested at varying confining stresses to determine the shear strength parameters, the soil cohesion (c) and the angle of internal friction, commonly known as friction angle ( {\displaystyle \phi } ). The results of the tests on each specimen are plotted on a graph with the peak (or residual) stress on the y-axis and the confining stress on the x-axis. The y-intercept of the curve which fits the test results is the cohesion, and the slope of the line or curve is the friction angle. Direct shear tests can be performed under several conditions. The sample is normally saturated before the test is run, but can be run at the in-situ moisture content. The rate of strain can be varied to create a test of undrained or drained conditions, depending on whether the strain is applied slowly enough for water in the sample to prevent pore-water pressure buildup.
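The fitting procedure described above (peak stress plotted against confining stress, with the cohesion as the y-intercept and the friction angle taken from the slope) can be sketched numerically. This is a minimal illustration, not a testing standard; the stress values below are hypothetical:

```python
import math

# Hypothetical peak shear stresses (kPa) measured at three confining
# (normal) stresses (kPa); illustrative numbers only.
normal_stress = [50.0, 100.0, 150.0]
shear_stress = [54.0, 83.0, 112.0]

# Ordinary least-squares line: shear = cohesion + tan(phi) * normal
n = len(normal_stress)
mean_x = sum(normal_stress) / n
mean_y = sum(shear_stress) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(normal_stress, shear_stress)) \
        / sum((x - mean_x) ** 2 for x in normal_stress)
cohesion = mean_y - slope * mean_x               # y-intercept, c (kPa)
friction_angle = math.degrees(math.atan(slope))  # phi (degrees)
```

For these made-up points the fit gives c = 25 kPa and a friction angle of about 30 degrees; with real data the three or four points rarely lie exactly on a line, which is why the test uses several specimens.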
A direct shear test machine is required to perform the test. The test using the direct shear machine determines the consolidated drained shear strength of a soil material in direct shear.[4] The advantages of the direct shear test[5] over other shear tests are the simplicity of setup and equipment used, and the ability to test under differing saturation, drainage, and consolidation conditions. These advantages have to be weighed against the difficulty of measuring pore-water pressure when testing in undrained conditions, and possible spuriously high results from forcing the failure plane to occur in a specific location. The test equipment and procedures are slightly different for tests on discontinuities.[6] Price, D.G. (2009). De Freitas, M.H. (ed.). Engineering Geology: Principles and Practice. Springer. p. 450. ISBN 978-3-540-29249-4. "Direct shear test machine". www.cooper.co.uk. Cooper Research Technology. Retrieved 8 September 2014. "Direct Shear Test; To Determine Shear Strength of Soil. - CivilPie". CivilPie. 2018-05-31. Retrieved 2018-06-06. Hencher, S. R.; Richards, L. R. (1989). "Laboratory direct shear testing of rock discontinuities". Ground Engineering. 22 (2): 24–31.
Conal Elliott » category

In the post Overloading lambda, I gave a translation from a typed lambda calculus into the vocabulary of cartesian closed categories (CCCs). This simple translation leads to unnecessarily complex expressions. For instance, the simple lambda term, “λ ds → (λ (a,b) → (b,a)) ds”, translated to a rather complicated CCC term: apply ∘ (curry (apply ∘ (apply ∘ (const (,) △ (id ∘ exr) ∘ exr) △ (id ∘ exl) ∘ exr)) △ id) (Recall from the previous post that (∘) binds more tightly than (△) and (▽).) However, we can do much better, translating to exr △ exl which says to pair the right and left halves of the argument pair, i.e., swap. This post applies some equational properties to greatly simplify/optimize the result of translation to CCC form, including the example above. First I’ll show the equational reasoning and then how it’s automated in the lambda-ccc library. Continue reading ‘Optimizing CCCs’ » Tags: category, CCC, overloading | 4 Comments

Haskell’s type class facility is a powerful abstraction mechanism. Using it, we can overload multiple interpretations onto a single vocabulary, with each interpretation corresponding to a different type. The class laws constrain these interpretations and allow reasoning that is valid over all (law-abiding) instances—even ones not yet defined. As Haskell is a higher-order functional language in the heritage of Church’s (typed) lambda calculus, it also supports “lambda abstraction”. Sadly, however, these two forms of abstraction don’t go together. When we use the vocabulary of lambda abstraction (“λ x → ⋯”) and application (“u v”), our expressions can only be interpreted as one type (constructor), namely functions.
(Note that I am not talking about parametric polymorphism, which is available with both lambda abstraction and type-class-style overloading.) Is it possible to overload lambda and application using type classes, or perhaps in the same spirit? The answer is yes, and there are some wonderful benefits of doing so. I’ll explain the how in this post and hint at the why, to be elaborated in future posts. Continue reading ‘Overloading lambda’ »

Since fall of last year, I’ve been working at Tabula, a Silicon Valley start-up developing an innovative programmable hardware architecture called “Spacetime”, somewhat similar to an FPGA, but much more flexible and efficient. I met the founder, Steve Teig, at a Bay Area Haskell Hackathon in February of 2011. He described his Spacetime architecture, which is based on the geometry of the same name, developed by Hermann Minkowski to elegantly capture Einstein’s theory of special relativity. Within the first 30 seconds or so of hearing what Steve was up to, I knew I wanted to help. The vision Steve shared with me included not only a better alternative for hardware designers (programmed in hardware languages like Verilog and VHDL), but also a platform for massively parallel execution of software written in a purely functional language. Lately, I’ve been working mainly on this latter aspect, and specifically on the problem of how to compile Haskell. Our plan is to develop the Haskell compiler openly and encourage collaboration. If anything you see in this blog series interests you, and especially if you have advice or you’d like to collaborate on the project, please let me know.
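As a toy illustration of the CCC vocabulary used in these posts (the posts work in Haskell; this is a Python sketch, with `fork` standing in for (△) and `compose` for (∘)), the simplified translation exr △ exl really is the swap function:

```python
# Python sketch of a few CCC combinators on ordinary pairs and functions,
# just enough to check the simplified translation of λ(a,b) → (b,a).

def exl(p):
    # left projection
    return p[0]

def exr(p):
    # right projection
    return p[1]

def fork(f, g):
    # (△): feed the same input to f and g, pair the results
    return lambda x: (f(x), g(x))

def compose(g, f):
    # (∘): ordinary function composition
    return lambda x: g(f(x))

# The simplified CCC term exr △ exl:
swap = fork(exr, exl)
```

Applying `swap` to a pair indeed exchanges its halves; the point of the post is that the large `apply`/`curry` term produced by the naive translation denotes this same function.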
In my next series of blog posts, I’ll describe some of the technical ideas I’ve been working with for compiling Haskell for massively parallel execution. For now, I want to introduce a central idea I’m using to approach the problem. Continue reading ‘From Haskell to hardware via cartesian closed categories’ » Tags: category, CCC, compilation | 6 Comments

The function of the imagination is not to make strange things settled, so much as to make settled things strange. (G.K. Chesterton)

Why is matrix multiplication defined so very differently from matrix addition? If we didn’t know these procedures, could we derive them from first principles? What might those principles be? This post gives a simple semantic model for matrices and then uses it to systematically derive the implementations that we call matrix addition and multiplication. The development illustrates what I call “denotational design”, particularly with type class morphisms. On the way, I give a somewhat unusual formulation of matrices and accompanying definition of matrix “multiplication”. For more details, see the linear-map-gadt source code. 2012–12–17: Replaced lost B entries in description of matrix addition. Thanks to Travis Cardwell. 2012–12–18: Added note about math/browser compatibility. Note: I’m using MathML for the math below, which appears to work well on Firefox but on neither Safari nor Chrome. I use Pandoc to generate the HTML+MathML from markdown+lhs+LaTeX. There’s probably a workaround using different Pandoc settings and requiring some tweaks to my WordPress installation. If anyone knows how (especially the WordPress end), I’d appreciate some pointers.
Continue reading ‘Reimagining matrices’ » Tags: category, denotational design, linear algebra, type class morphism | 10 Comments
Compression-ignition controller that includes air mass flow, torque, and EGR estimation - Simulink - MathWorks Italia

The CI Controller block implements a compression-ignition (CI) controller with air mass flow, torque, exhaust gas recirculation (EGR) flow, exhaust back-pressure, and exhaust gas temperature estimation. You can use the CI Controller block in engine control design or performance, fuel economy, and emission tradeoff studies. The core engine block requires the commands that are output from the CI Controller block. The block uses the commanded torque and measured engine speed to determine these open-loop actuator commands: EGR valve area percent, VGT rack position, fuel injection timing, and injector pulse-width.

The CI Controller block has two subsystems:

- The Controller subsystem — Determines the commands based on tables that are functions of commanded torque and measured engine speed.
- The Estimator subsystem — Determines estimates based on these engine attributes: cycle average intake manifold pressure and temperature, absolute ambient pressure, VGT speed and rack position, EGR valve area percent (EGRap), exhaust gas back-pressure, and EGR valve gas mass flow.

The controller governs the combustion process by commanding VGT rack position, EGR valve area percent, fuel injection timing (start of injection timing for the main fuel injection pulse, MAINSOI), and injector pulse-width. Feedforward lookup tables, which are functions of measured engine speed and commanded torque, determine the control commands. The controller commands the EGR valve area percent and VGT rack position. Changing the VGT rack position modifies the turbine flow characteristics.
At low requested torques, the rack position can reduce the exhaust back-pressure, resulting in a low turbocharger speed and boost pressure. When the commanded fuel requires additional air mass flow, the rack position is set to close the turbocharger vanes, increasing the turbocharger speed and intake manifold boost pressure.

R{P}_{cmd}={f}_{RPcmd}\left(Tr{q}_{cmd},N\right)

EG{R}_{cmd}={f}_{EGRcmd}\left(Tr{q}_{cmd},N\right)

To initiate combustion, a CI engine injects fuel directly into the combustion chamber. After the injection, the fuel spontaneously ignites, increasing cylinder pressure. The total mass of the injected fuel and the main injection timing determine the torque production. Assuming constant fuel rail pressure, the CI controller commands the injector pulse-width based on the total requested fuel mass:

P{w}_{inj}=\frac{{F}_{cmd,tot}}{{S}_{inj}}

where:
- P{w}_{inj} — commanded injector pulse-width
- {S}_{inj} — fuel injector slope
- {F}_{cmd,tot} — commanded total fuel mass per injection
- MAINSOI — main start-of-injection timing

Lookup tables determine the total fuel mass command (as a function of commanded torque and engine speed) and the main start-of-injection timing (as a function of total fuel mass and engine speed):

{F}_{cmd,tot}={f}_{Fcmd,tot}\left(Tr{q}_{cmd},N\right)

MAINSOI=f\left({F}_{cmd,tot},N\right)

A discrete PI controller maintains the idle speed:

{C}_{idle}\left(z\right)={K}_{p,idle}+{K}_{i,idle}\frac{{t}_{s}}{z-1}

Using the CI Core Engine block, the CI Controller block estimates the air mass flow rate, EGR valve mass flow, exhaust back-pressure, engine torque, AFR, and exhaust temperature from sensor feedback. The Info port provides the estimated values, but the block does not use them to determine the open-loop engine actuator commands.

EGR Valve Mass Flow

To calculate the estimated exhaust gas recirculation (EGR) valve mass flow, the block calculates the EGR flow that would occur at standard temperature and pressure conditions, and then corrects the flow to actual temperature and pressure conditions. The block EGR calculation uses estimated exhaust back-pressure, estimated exhaust temperature, standard temperature, and standard pressure.
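The pulse-width relation above is a single division of commanded fuel mass by the injector slope. A sketch with hypothetical injector values (the 15 mg/ms slope and 30 mg fuel command are made-up numbers, not block defaults):

```python
def injector_pulse_width(fuel_mass_cmd_mg, injector_slope_mg_per_ms):
    """Pw_inj = F_cmd_tot / S_inj, assuming constant fuel rail pressure.

    fuel_mass_cmd_mg        -- total commanded fuel mass per injection (mg)
    injector_slope_mg_per_ms -- injector slope (mg of fuel per ms of pulse)
    Returns the pulse width in ms.
    """
    return fuel_mass_cmd_mg / injector_slope_mg_per_ms

pw = injector_pulse_width(30.0, 15.0)  # hypothetical: 30 mg at 15 mg/ms -> 2 ms
```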
{\stackrel{˙}{m}}_{egr,est}={\stackrel{˙}{m}}_{egr,std}\frac{{P}_{exh,est}}{{P}_{std}}\sqrt{\frac{{T}_{std}}{{T}_{exh,est}}}

{\stackrel{˙}{m}}_{egr,std}=f\left(\frac{MAP}{{P}_{exh,est}},EGRap\right)

where:
- {\stackrel{˙}{m}}_{egr,est} — estimated EGR valve mass flow
- {\stackrel{˙}{m}}_{egr,std} — standard EGR valve mass flow
- {P}_{std}, {T}_{std} — standard pressure and temperature
- MAP — measured cycle average intake manifold absolute pressure
- {P}_{exh,est} — estimated exhaust back-pressure
- {P}_{Amb} — absolute ambient pressure
- EGRap — measured EGR valve area percent

To estimate the EGR valve mass flow, the block requires an estimate of the exhaust back-pressure. To estimate the exhaust back-pressure, the block uses the ambient pressure and the turbocharger pressure ratio.

{P}_{exh,est}={P}_{Amb}P{r}_{turbo}

For the turbocharger pressure ratio calculation, the block uses two lookup tables. The first lookup table determines the approximate turbocharger pressure ratio as a function of turbocharger mass flow and corrected turbocharger speed. Using a second lookup table, the block corrects the approximate turbocharger pressure ratio for VGT rack position.
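The standard-to-actual EGR flow correction above can be sketched directly. The standard conditions and test values here are assumptions for illustration, not values taken from the block:

```python
import math

def egr_flow_est(m_egr_std, p_exh_est, t_exh_est,
                 p_std=101325.0, t_std=293.15):
    """Correct a standard-condition EGR mass flow to actual conditions:

        m_est = m_std * (P_exh_est / P_std) * sqrt(T_std / T_exh_est)

    Pressures in Pa, temperatures in K, flows in g/s (any consistent units).
    The default standard conditions are assumed, not the block's settings.
    """
    return m_egr_std * (p_exh_est / p_std) * math.sqrt(t_std / t_exh_est)

# Hypothetical: doubling the back-pressure at standard temperature
# doubles the estimated flow.
flow = egr_flow_est(10.0, 2 * 101325.0, 293.15)
```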
\begin{array}{l}P{r}_{turbo}=f\left({\stackrel{˙}{m}}_{airstd},{N}_{vgtcorr}\right)f\left(VG{T}_{pos}\right)\\ \text{where:}\\ {N}_{vgtcorr}=\frac{{N}_{vgt}}{\sqrt{{T}_{exh,est}}}\end{array}

where:
- {\stackrel{˙}{m}}_{port,est} — estimated intake port mass flow rate
- {\stackrel{˙}{m}}_{airstd} — standard air mass flow
- {\stackrel{˙}{m}}_{egr,est} — estimated EGR valve mass flow; {\stackrel{˙}{m}}_{egr,std} — standard EGR valve mass flow
- EGRap — measured EGR valve area percent
- MAP — measured cycle average intake manifold absolute pressure
- MAT — measured cycle average intake manifold gas absolute temperature
- {P}_{std}, {T}_{std} — standard pressure and temperature
- Prvgtcorr — turbocharger pressure ratio correction for VGT rack position
- {P}_{Amb} — absolute ambient pressure
- Nvgtcorr — corrected turbocharger speed
- VGTpos — measured VGT rack position

The exhaust back-pressure calculation uses these lookup tables:

P{r}_{turbo}=f\left({\stackrel{˙}{m}}_{airstd},{N}_{vgtcorr}\right)

To calculate the standard air mass flow through the turbocharger, the block uses conservation of mass and the estimated intake port and EGR mass flows (from the last estimation calculation). The calculation assumes negligible exhaust manifold filling dynamics.
{\stackrel{˙}{m}}_{airstd}=\left({\stackrel{˙}{m}}_{port,est}-{\stackrel{˙}{m}}_{egr,est}\right)\frac{{P}_{std}}{MAP}\sqrt{\frac{MAT}{{T}_{std}}}

{T}_{exh}={f}_{Texh}\left(F,N\right)

\begin{array}{l}{T}_{exhnom}=SO{I}_{exhteff}MA{P}_{exhteff}MA{T}_{exhteff}O2{p}_{exhteff}FUEL{P}_{exhteff}Tex{h}_{opt}\\ {T}_{exh}={T}_{exhnom}+\Delta {T}_{post}\\ \\ SO{I}_{exhteff}={f}_{SO{I}_{exhteff}}\left(\Delta SOI,N\right)\\ MA{P}_{exhteff}={f}_{MA{P}_{exhteff}}\left(MA{P}_{ratio},\lambda \right)\\ MA{T}_{exhteff}={f}_{MA{T}_{exhteff}}\left(\Delta MAT,N\right)\\ O2{p}_{exhteff}={f}_{O2{p}_{exhteff}}\left(\Delta O2p,N\right)\\ Tex{h}_{opt}={f}_{Texh}\left(F,N\right)\end{array}

The measured engine speed and fuel injector pulse-width determine the commanded fuel mass flow rate:

{\stackrel{˙}{m}}_{fuel,cmd}=\frac{N{S}_{inj}P{w}_{inj}{N}_{cyl}}{Cps\left(\frac{60s}{min}\right)\left(\frac{1000mg}{g}\right)}

The commanded total fuel mass flow and estimated port mass flow rates determine the estimated AFR:

AF{R}_{est}=\frac{{\stackrel{˙}{m}}_{port,est}}{{\stackrel{˙}{m}}_{fuel,cmd}}

where:
- AF{R}_{est} — estimated air-fuel ratio
- {\stackrel{˙}{m}}_{fuel,cmd} — commanded fuel mass flow rate
- {S}_{inj} — fuel injector slope
- P{w}_{inj} — commanded injector pulse-width
- {\stackrel{˙}{m}}_{port,est} — total estimated engine air mass flow at intake ports

Ports:
- Measured intake manifold absolute pressure, MAP, in Pa.
- Absolute ambient pressure, {P}_{Amb}.
- Measured EGR valve area percent, EGRap, in %.
- VgtPos — VGT rack position: Measured VGT rack position, VGTpos.
- VgtSpd — VGT speed: Measured VGT speed, Nvgt, in rpm.
- FuelMainSoi — Fuel main injection timing: Main start-of-injection timing, MAINSOI, in degrees crank angle after top dead center (degATDC).
- TurbRackPosCmd — Rack position: VGT rack position command, RPcmd.
- EgrVlvAreaPctCmd — EGR valve area percent command, EGRcmd, in %.

The Info bus signals include the estimated exhaust back-pressure (EstExhPrs), estimated EGR flow (EstEGRFlow), estimated AFR (EstAfr), and the estimated port mass flow rate, {\stackrel{˙}{m}}_{port,est}.
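The commanded fuel mass flow and AFR estimates above reduce to a few arithmetic steps. A sketch with hypothetical engine values (the speed, injector slope, pulse-width, cylinder count, and Cps below are illustrative, not block defaults):

```python
def fuel_mass_flow_cmd(n_rpm, s_inj_mg_per_ms, pw_inj_ms, n_cyl, cps):
    """Commanded fuel mass flow rate in g/s:

        mdot_fuel = N * S_inj * Pw_inj * N_cyl / (Cps * 60 s/min * 1000 mg/g)

    n_rpm  -- measured engine speed (rpm)
    cps    -- crankshaft revolutions per power stroke
    """
    return n_rpm * s_inj_mg_per_ms * pw_inj_ms * n_cyl / (cps * 60.0 * 1000.0)

def afr_est(m_port_est_g_per_s, m_fuel_cmd_g_per_s):
    """Estimated AFR: estimated port air mass flow over commanded fuel flow."""
    return m_port_est_g_per_s / m_fuel_cmd_g_per_s

# Hypothetical 4-cylinder four-stroke (Cps = 2) at 3000 rpm:
m_fuel = fuel_mass_flow_cmd(3000.0, 15.0, 2.0, 4, 2)  # g/s
afr = afr_est(60.0, m_fuel)
```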
EGR valve area percent, f_egrcmd — Lookup table
EG{R}_{cmd}={f}_{EGRcmd}\left(Tr{q}_{cmd},N\right)

Commanded torque breakpoints, f_egr_tq_bpt — Breakpoints
[10 26.43 42.86 59.29 75.71 92.14 108.6 125 141.4 157.9 174.3 190.7 207.1 223.6 240] (default) | vector
Commanded torque breakpoints, in N·m.

Speed breakpoints, f_egr_n_bpt — Breakpoints
[1000 1411 1821 2232 2643 3054 3464 3875 4286 4696 5107 5518 5929 6339 6750] (default) | vector

VGT rack position table, f_rpcmd — Lookup table
R{P}_{cmd}={f}_{RPcmd}\left(Tr{q}_{cmd},N\right)

Commanded torque breakpoints, f_rp_tq_bpt — Breakpoints

Speed breakpoints, f_rp_n_bpt — Breakpoints

{S}_{inj}
Fuel lower heating value, fuel_lhv — Heat

Fuel mass per injection table, f_fcmd_tot — Lookup table
{F}_{cmd,tot}={f}_{Fcmd,tot}\left(Tr{q}_{cmd},N\right)

Fuel main injection timing table, f_main_soi — Lookup table
MAINSOI=f\left({F}_{cmd,tot},N\right)

Fuel main injection timing fuel breakpoints, f_main_soi_f_bpt — Breakpoints
Fuel main injection timing fuel breakpoints, in mg per injection.

Fuel main injection timing speed breakpoints, f_main_soi_n_bpt — Breakpoints
[1000,1410.71428571429,1821.42857142857,2232.14285714286,2642.85714285714,3053.57142857143,3464.28571428571,3875,4285.71428571429,4696.42857142857,5107.14285714286,5517.85714285714,5928.57142857143,6339.28571428572,6750] (default) | vector
Fuel main injection timing speed breakpoints, in rpm.

Commanded torque breakpoints, f_f_tot_tq_bpt — Breakpoints
[0 10 26.43 42.86 59.29 75.71 92.14 108.6 125 141.4 157.9 174.3 190.7 207.1 223.6 240] (default) | vector

Speed breakpoints, f_f_tot_n_bpt — Breakpoints

Base idle speed, N_idle — Speed
Base idle speed, Nidle, in rpm.
{N}_{cyl} Cps {V}_{d} {R}_{air} {P}_{std} {T}_{std}

Speed density volumetric efficiency, f_nv — Lookup table
{\eta }_{v}={f}_{{\eta }_{v}}\left(MAP,N\right)

Speed density intake manifold pressure breakpoints, f_nv_prs_bpt — Breakpoints
[95 100.3 105.7 111 116.4 121.7 127.1 132.4 137.8 143.1 148.4 153.8 159.1 164.5 169.8 175.2 180.5 185.9 191.2 196.6 201.9 207.2 212.6 217.9 223.3 228.6 234 239.3 244.7 250] (default) | vector

Speed density engine speed breakpoints, f_nv_n_bpt — Breakpoints
[750 956.9 1164 1371 1578 1784 1991 2198 2405 2612 2819 3026 3233 3440 3647 3853 4060 4267 4474 4681 4888 5095 5302 5509 5716 5922 6129 6336 6543 6750] (default) | vector

EGR valve standard flow calibration, f_egr_stdflow — Lookup table
{\stackrel{˙}{m}}_{egr,std}=f\left(\frac{MAP}{{P}_{exh,est}},EGRap\right)
EGR valve standard flow pressure ratio breakpoints, dimensionless.
EGR valve standard flow area percent breakpoints, in percent.

Turbocharger pressure ratio, f_turbo_pr — Lookup table
P{r}_{turbo}=f\left({\stackrel{˙}{m}}_{airstd},{N}_{vgtcorr}\right)

Turbocharger pressure ratio standard flow breakpoints, f_turbo_pr_stdflow_bpt — Breakpoints
Turbocharger pressure ratio standard flow breakpoints, in g/s.

Turbocharger pressure ratio corrected speed breakpoints, f_turbo_pr_corrspd_bpt — Breakpoints
Turbocharger pressure ratio corrected speed breakpoints, in rpm/K^(1/2).

Turbocharger pressure ratio VGT position correction, f_turbo_pr_vgtposcorr — Lookup table
Turbocharger pressure ratio VGT position correction breakpoints, f_turbo_pr_vgtposcorr_bpt — Breakpoints
Turbocharger pressure ratio VGT position correction breakpoints, dimensionless.

{T}_{brake}={f}_{Tnf}\left(F,N\right)
C{p}_{exh}
{T}_{exh}={f}_{Texh}\left(F,N\right)
Rectangular Grid Walk - Minimal Restrictions Practice Problems Online | Brilliant

Perry the Platypus is on a secret mission. He needs to move in the coordinate plane from (0, \, 0) to (4, \, 2) without passing through (3, \, 1) . If Perry only moves 1 unit at a time to the right or up, in how many ways could he complete his mission?

The Andromedan Trade Goods Association requires the following of its member worlds, whose locations can be plotted as a 100 \times 100 grid: Each time a world receives a trade good, it must send that trade good to one of the worlds immediately to the right of or above it. Another trade good must be sent to the other world. Unfortunately, the world Aberdeen, located 2 worlds to the right and 2 worlds above Bellerophon, has stopped complying with Andromedan regulation and does not pass on any trade goods, either new or received. All other worlds still comply with Andromedan regulation. If Bellerophon, located in the bottom-left corner, sends out a trade good to each of the two worlds next to it, how many trade goods will the world 6 to the right and 3 above the initial world receive?

An ant in the coordinate plane is located at (-1, \, -2) , and it can move repeatedly one unit to the right or up. If it wishes to travel to (3, \, 3) without passing through the origin or (1, \, -1) , then how many possible paths could it take?

A particle is moving from the origin to (6, \, 4) . If the particle moves one unit at a time to the right or up and cannot pass through (2, \, 1) or (4, \, 3) , how many possible paths could the particle take?

Micro Man is trapped in an incomplete sudoku grid! If he starts at the 9 and wants to travel to the 2 without hitting the 3 , how many paths could he take, provided he wants to get there as quickly as possible, with each step moving to a square sharing an edge with his current square?
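The first problem above (right/up paths from (0, 0) to (4, 2) avoiding (3, 1)) can be counted by subtracting the paths forced through the forbidden point from the total; a short sketch:

```python
from math import comb

def lattice_paths(a, b):
    # Monotone right/up paths from (0, 0) to (a, b): choose which of the
    # a + b steps are "right" moves.
    return comb(a + b, a)

def paths_avoiding(ex, ey, px, py):
    # Paths from (0, 0) to (ex, ey) that never visit (px, py):
    # every path through (px, py) splits into a path to it and a path onward.
    through = lattice_paths(px, py) * lattice_paths(ex - px, ey - py)
    return lattice_paths(ex, ey) - through

print(paths_avoiding(4, 2, 3, 1))  # → 7
```

With two forbidden points, as in the ant and particle problems, the same idea extends by inclusion-exclusion (subtract paths through each point, add back paths through both).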
Compute filtered output, filter error, and filter weights for given input and desired signal using RLS adaptive filter algorithm - Simulink - MathWorks España

The RLS Filter block recursively computes the recursive least squares (RLS) estimate of the FIR filter weights. The block estimates the filter weights, or coefficients, needed to convert the input signal into the desired signal. Connect the signal you want to filter to the Input port. The input signal can be a scalar or a column vector. Connect the signal you want to model to the Desired port. The desired signal must have the same data type, complexity, and dimensions as the input signal. The Output port outputs the filtered input signal. The Error port outputs the result of subtracting the output signal from the desired signal. The corresponding RLS filter is expressed in matrix form as

\begin{array}{l}k\left(n\right)=\frac{{\lambda }^{-1}P\left(n-1\right)u\left(n\right)}{1+{\lambda }^{-1}{u}^{H}\left(n\right)P\left(n-1\right)u\left(n\right)}\hfill \\ y\left(n\right)=w\left(n-1\right)u\left(n\right)\hfill \\ e\left(n\right)=d\left(n\right)-y\left(n\right)\hfill \\ w\left(n\right)=w\left(n-1\right)+{k}^{H}\left(n\right)e\left(n\right)\hfill \\ P\left(n\right)={\lambda }^{-1}P\left(n-1\right)-{\lambda }^{-1}k\left(n\right){u}^{H}\left(n\right)P\left(n-1\right)\hfill \end{array}

where λ⁻¹ denotes the reciprocal of the exponential weighting factor. The variables are as follows:
- P(n) — the inverse covariance matrix at step n
- k(n) — the gain vector at step n
- w(n) — the vector of filter weights at step n
- λ — the forgetting factor

The implementation of the algorithm in the block is optimized by exploiting the symmetry of the inverse covariance matrix P(n). This decreases the total number of computations by a factor of two. The Forgetting factor (0 to 1) parameter corresponds to λ in the equations.
It specifies how quickly the filter “forgets” past sample information. Setting λ = 1 specifies an infinite memory. Typically, 1-\frac{1}{2L}<\lambda <1 , where L is the filter length. You can specify a forgetting factor using the input port, Lambda, or enter a value in the Forgetting factor (0 to 1) parameter in the Block Parameters: RLS Filter dialog box. Specify the initial value of the filter weights, \stackrel{^}{w}\left(0\right) , as a vector or a scalar for the Initial value of filter weights parameter. When you enter a scalar, the block uses the scalar value to create a vector of filter weights. This vector has length equal to the filter length and all of its values are equal to the scalar value. The initial value of P(n) is \frac{1}{{\sigma }^{2}}I where you specify {\sigma }^{2} in the Initial input variance estimate parameter. The reset options are:

- Rising edge — Triggers a reset operation when the Reset input rises from zero to a positive value, where the rise is not a continuation of a rise from a negative value to zero
- Falling edge — Triggers a reset operation when the Reset input falls from zero to a negative value, where the fall is not a continuation of a fall from a positive value to zero
- Either edge — Triggers a reset operation when the Reset input is a Rising edge or Falling edge, as described above

The rlsdemo example illustrates a noise cancellation system built around the RLS Filter block. Specify forgetting factor via: select Dialog to enter a value for the forgetting factor in the Block parameters: RLS Filter dialog box, or select Input port to specify the forgetting factor using the Lambda input port. Forgetting factor (0 to 1): enter the exponential weighting factor in the range 0 ≤ λ ≤ 1. A value of 1 specifies an infinite memory. Tunable (Simulink). Initial input variance estimate: the initial value of 1/P(n).
NUMERICAL ANALYSIS - Encyclopedia Information
https://en.wikipedia.org/wiki/Numerical_analysis

Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) with annotations. The approximation of the square root of 2 is four sexagesimal figures, which is about six decimal figures. 1 + 24/60 + 51/60² + 10/60³ = 1.41421296... [1]

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, [2] [3] [4] and stochastic differential equations and Markov chains for simulating living cells in medicine and biology. Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid 20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms. [5] The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection ( YBC 7289), gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square. The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago.
Many great mathematicians of the past were preoccupied by numerical analysis, [5] as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, [5] since now longer and more complicated calculations could be done. In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems. [6] [7] [8] [9] Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called ' discretization'. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a continuum. Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution.
Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. In the example above to compute the solution of {\displaystyle 3x^{3}+4=28} , after ten iterations, the calculated root is roughly 1.99. Therefore, the truncation error is roughly 0.01. Once an error is generated, it propagates through the calculation. For example, the operation + on a computer is inexact. A calculation of the type {\displaystyle a+b+c+d+e} is even more inexact. Numerical stability is a notion in numerical analysis. An algorithm is called 'numerically stable' if an error, whatever its cause, does not grow to be much larger during the calculation. [10] This happens if the problem is ' well-conditioned', meaning that the solution changes by only a small amount if the problem data are changed by a small amount. [10] To the contrary, if a problem is 'ill-conditioned', then any small error in the data will grow to be a large error. [10] So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. For instance, computing the square root of 2 (which is roughly 1.41421) is a well-posed problem. Many algorithms solve this problem by starting with an initial approximation x0 to {\displaystyle {\sqrt {2}}} , for instance x0 = 1.4, and then computing improved guesses x1, x2, etc. One such method is the famous Babylonian method, which is given by xk+1 = xk/2 + 1/xk. Another method, called 'method X', is given by xk+1 = (xk² − 2)² + xk. [note 1] A few iterations of each scheme are calculated in table form below, with initial guesses x0 = 1.4 and x0 = 1.42.
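The two iterations above can be compared directly; a short sketch:

```python
import math

def babylonian(x, steps):
    # x_{k+1} = x_k / 2 + 1 / x_k  (converges quadratically to sqrt(2))
    for _ in range(steps):
        x = x / 2 + 1 / x
    return x

def method_x(x, steps):
    # x_{k+1} = (x_k^2 - 2)^2 + x_k  (squared via multiplication so that
    # divergence overflows to inf instead of raising an exception)
    for _ in range(steps):
        d = x * x - 2
        x = d * d + x
    return x
```

From x0 = 1.4, the Babylonian method reaches machine precision in a handful of steps, while method X only creeps toward √2; from x0 = 1.42, method X runs away from the root entirely, illustrating numerical instability of the scheme rather than ill-conditioning of the problem.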
As an example of loss of significance, consider the two mathematically equivalent functions

{\displaystyle f(x)=x\left({\sqrt {x+1}}-{\sqrt {x}}\right)}

{\displaystyle g(x)={\frac {x}{{\sqrt {x+1}}+{\sqrt {x}}}}.}

Comparing the results of

{\displaystyle f(500)=500\left({\sqrt {501}}-{\sqrt {500}}\right)=500\left(22.38-22.36\right)=500(0.02)=10}

and

{\displaystyle {\begin{alignedat}{3}g(500)&={\frac {500}{{\sqrt {501}}+{\sqrt {500}}}}\\&={\frac {500}{22.38+22.36}}\\&={\frac {500}{44.74}}=11.17\end{alignedat}}}

it is clear that loss of significance (caused here by catastrophic cancellation from subtracting approximations to the nearby numbers {\displaystyle {\sqrt {501}}} and {\displaystyle {\sqrt {500}}} , despite the subtraction being computed exactly) has a huge effect on the results, even though both functions are equivalent, as shown below

{\displaystyle {\begin{alignedat}{4}f(x)&=x\left({\sqrt {x+1}}-{\sqrt {x}}\right)\\&=x\left({\sqrt {x+1}}-{\sqrt {x}}\right){\frac {{\sqrt {x+1}}+{\sqrt {x}}}{{\sqrt {x+1}}+{\sqrt {x}}}}\\&=x{\frac {({\sqrt {x+1}})^{2}-({\sqrt {x}})^{2}}{{\sqrt {x+1}}+{\sqrt {x}}}}\\&=x{\frac {x+1-x}{{\sqrt {x+1}}+{\sqrt {x}}}}\\&=x{\frac {1}{{\sqrt {x+1}}+{\sqrt {x}}}}\\&={\frac {x}{{\sqrt {x+1}}+{\sqrt {x}}}}\\&=g(x)\end{alignedat}}}

Interpolation: Observing that the temperature varies from 20 degrees Celsius at 1:00 to 14 degrees at 3:00, a linear interpolation of this data would conclude that it was 17 degrees at 2:00 and 18.5 degrees at 1:30pm. Extrapolation: If the gross domestic product of a country has been growing an average of 5% per year and was 100 billion last year, it might be extrapolated that it will be 105 billion this year. Regression: In linear regression, given n points, a line is computed that passes as close as possible to those n points. Optimization: Suppose lemonade is sold at a lemonade stand, at $1.00 per glass, that 197 glasses of lemonade can be sold per day, and that for each increase of $0.01, one less glass of lemonade will be sold per day.
If $1.485 could be charged, profit would be maximized, but due to the constraint of having to charge a whole-cent amount, charging $1.48 or $1.49 per glass will both yield the maximum income of $220.52 per day.

Differential equation: If 100 fans are set up to blow air from one end of the room to the other and then a feather is dropped into the wind, what happens? The feather will follow the air currents, which may be very complex. One approximation is to measure the speed at which the air is blowing near the feather every second, and advance the simulated feather as if it were moving in a straight line at that same speed for one second, before measuring the wind speed again. This is called the Euler method for solving an ordinary differential equation.

Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found. [11]

Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation {\displaystyle 2x+5=3} is linear while {\displaystyle 2x^{2}+5=3} is not.

Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and the conjugate gradient method [12] are usually preferred for large systems. General iterative methods can be developed using a matrix splitting.

Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero).
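The feather simulation described above is the Euler method in miniature; a minimal sketch (the test equation y' = −y and the step count are our own illustrative choices):

```python
import math

def euler(f, y0, t0, t1, steps):
    """Advance y' = f(t, y) from t0 to t1 in fixed straight-line steps."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)  # move as if the slope stayed constant for one step
        t += h
    return y

# y' = -y with y(0) = 1 has exact solution e^(-t)
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
```

With 1000 steps the Euler estimate at t = 1 lands within about 2×10⁻⁴ of e⁻¹, and halving the step size roughly halves the error, as expected for a first-order method.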
If the function is differentiable and the derivative is known, then Newton's method is a popular choice. [13] [14] Linearization is another technique for solving nonlinear equations. Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm [15] is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis. Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. [16] Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. [17] These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration [18]), or, in modestly large dimensions, the method of sparse grids. Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations. [19] Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. [20] This can be done by a finite element method, [21] [22] [23] a finite difference method, [24] or (particularly in engineering) a finite volume method. [25] The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation. 
Over the years the Royal Statistical Society published numerous algorithms in its Applied Statistics (code for these "AS" functions is here); ACM similarly, in its Transactions on Mathematical Software ("TOMS" code is here). The Naval Surface Warfare Center several times published its Library of Mathematics Subroutines (code here). There are several popular numerical computing applications such as MATLAB, [26] [27] [28] TK Solver, S-PLUS, and IDL [29] as well as free and open-source alternatives such as FreeMat, Scilab, [30] [31] GNU Octave (similar to Matlab), and IT++ (a C++ library). There are also programming languages such as R [32] (similar to S-PLUS), Julia, [33] and Python with libraries such as NumPy, SciPy [34] [35] [36] and SymPy. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude. [37] [38] Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary-precision arithmetic, which can provide more accurate results. [39] [40] [41] [42]

Local linearization method

^ This is a fixed-point iteration for the equation {\displaystyle x=(x^{2}-2)^{2}+x=f(x)} , whose solutions include {\displaystyle {\sqrt {2}}} . The iterates always move to the right since {\displaystyle f(x)\geq x} . Hence {\displaystyle x_{1}=1.4<{\sqrt {2}}} converges and {\displaystyle x_{1}=1.42>{\sqrt {2}}} diverges.

^ "Photograph, illustration, and description of the root(2) tablet from the Yale Babylonian Collection". Archived from the original on 13 August 2012. Retrieved 2 October 2006.

^ Hestenes, Magnus R.; Stiefel, Eduard (December 1952). "Methods of Conjugate Gradients for Solving Linear Systems". Journal of Research of the National Bureau of Standards. 49 (6): 409.

^ Ezquerro Fernández, J. A., & Hernández Verón, M. Á. (2017). Newton's Method: An Updated Approach of Kantorovich's Theory. Birkhäuser.
^ The Singular Value Decomposition and Its Applications in Image Compression Archived 4 October 2006 at the Wayback Machine ^ Iserles, A. (2009). A first course in the numerical analysis of differential equations. Cambridge University Press. ^ Ames, W. F. (2014). Numerical methods for partial differential equations. Academic Press. ^ Strang, G., & Fix, G. J. (1973). An analysis of the finite element method. Englewood Cliffs, NJ: Prentice-hall. ^ Bezanson, Jeff; Edelman, Alan; Karpinski, Stefan; Shah, Viral B. (1 January 2017). "Julia: A Fresh Approach to Numerical Computing". SIAM Review. 59 (1): 65–98. doi: 10.1137/141000671. hdl: 1721.1/110125. ISSN 0036-1445. ^ Speed comparison of various number crunching packages Archived 5 October 2006 at the Wayback Machine ^ Comparison of mathematical programs for data analysis Archived 18 May 2016 at the Portuguese Web Archive Stefan Steinhaus, ScientificWeb.com ^ Maeder, R. E. (1991). Programming in mathematica. Addison-Wesley Longman Publishing Co., Inc. Golub, Gene H.; Charles F. Van Loan (1986). Matrix Computations (3rd ed.). Johns Hopkins University Press. ISBN 0-8018-5413-X. Higham, Nicholas J. (1996). Accuracy and Stability of Numerical Algorithms. Society for Industrial and Applied Mathematics. ISBN 0-89871-355-2. Hildebrand, F. B. (1974). Introduction to Numerical Analysis (2nd ed.). McGraw-Hill. ISBN 0-07-028761-9. Kahan, W. (1972). A survey of error-analysis. Proc. IFIP Congress 71 in Ljubljana. Info. Processing 71. Vol. 2. Amsterdam: North-Holland Publishing. pp. 1214–39. (examples of the importance of accurate arithmetic). Numerical analysisat Wikipedia's sister projects gdz.sub.uni-goettingen, Numerische Mathematik, volumes 1-66, Springer, 1959-1994 (searchable; pages are images). 
(in English and German) Numerische Mathematik, volumes 1–112, Springer, 1959–2009 "Numerical analysis", Encyclopedia of Mathematics, EMS Press, 2001 [1994] First Steps in Numerical Analysis (archived), R.J. Hosking, S. Joe, D.C. Joyce, and J.C. Turner CSEP (Computational Science Education Project), U.S. Department of Energy (archived 2017-08-01) Numerical Methods, ch 3. in the Digital Library of Mathematical Functions Numerical Interpolation, Differentiation and Integration, ch 25. in the Handbook of Mathematical Functions (Abramowitz and Stegun) Numerical Methods (Archived 28 July 2009 at the Wayback Machine), Stuart Dalziel, University of Cambridge Lectures in Numerical Analysis (archived), R. Radok, Mahidol University Introduction to Numerical Analysis, Doron Levy, University of Maryland Numerical Analysis - Numerical Methods (archived), John H. Mathews, California State University Fullerton
Conal Elliott » functor

Composable parallel scanning

The post Deriving list scans gave a simple specification of the list-scanning functions scanl and scanr, and then transformed those specifications into the standard optimized implementations. Next, the post Deriving parallel tree scans adapted the specifications and derivations to a type of binary trees. The resulting implementations are parallel-friendly, but not work-efficient, in that they perform n log n work vs linear work as in the best-known sequential algorithm. Besides the work-inefficiency, I don't know how to extend the critical initTs and tailTs functions (analogs of inits and tails on lists) to depth-typed, perfectly balanced trees, of the sort I played with in A trie for length-typed vectors and From tries to trees. The difficulty I encounter is that the functions initTs and tailTs make unbalanced trees out of balanced ones, so I don't know how to adapt the specifications when types prevent the existence of unbalanced trees. This new post explores an approach to generalized scanning via type classes. After defining the classes and giving a simple example, I'll give a simple & general framework based on composing functor combinators.

2011-03-02: Fixed typo. "constant functor is easiest" (instead of "identity functor"). Thanks, frguybob.
2011-03-05: Removed final unfinished sentence.
2011-07-28: Replace "assocL" with "assocR" in prefixScan derivation for g ∘ f.

Continue reading 'Composable parallel scanning' »
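The posts discussed here are written in Haskell; as a quick language-neutral illustration, the scanl specification ("fold over every prefix") and its linear-work optimization might be transliterated to Python like this (a sketch, not Conal's code):

```python
from functools import reduce

def inits(xs):
    """All prefixes of xs, shortest first (analog of Haskell's inits)."""
    return [xs[:i] for i in range(len(xs) + 1)]

def scanl_spec(f, a, xs):
    """Specification: fold f over every prefix of xs (quadratic work)."""
    return [reduce(f, prefix, a) for prefix in inits(xs)]

def scanl_fast(f, a, xs):
    """Optimized linear-work scan: carry the running accumulator."""
    out = [a]
    for x in xs:
        a = f(a, x)
        out.append(a)
    return out
```

The derivation in the posts shows these two definitions agree for every f, a, and xs; the second simply avoids refolding each prefix from scratch.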
Tags: functor, scan | 5 Comments

Another angle on zippers

The zipper is an efficient and elegant data structure for purely functional editing of tree-like data structures, first published by Gérard Huet. Zippers maintain a location of focus in a tree and support navigation operations (up, down, left, right) and editing (replace current focus). The original zipper type and operations are customized for a single type, but it's not hard to see how to adapt to other tree-like types, and hence to regular data types. There have been many follow-up papers to The Zipper, including a polytypic version in the paper Type-indexed data types. All of the zipper adaptations and generalizations I've seen so far maintain the original navigation interface. In this post, I propose an alternative interface that appears to significantly simplify matters. There are only two navigation functions instead of four, and each of the two is specified and implemented via a fairly simple one-liner. I haven't used this new zipper formulation in an application yet, so I do not know whether some usefulness has been lost in simplifying the interface. The code in this blog post is taken from the Haskell library functor-combo and completes the Holey type class introduced in Differentiation of higher-order types.

2010-07-29: Removed some stray Just applications in up definitions. (Thanks, illissius.)
2010-07-29: Augmented my complicated definition of tweak2 with a much simpler version from Sjoerd Visscher.
2010-07-29: Replaced fmap (first (:ds')) with (fmap.first) (:ds') in down definitions. (Thanks, Sjoerd.)

Continue reading 'Another angle on zippers' »

Tags: derivative, functor, zipper | 16 Comments

Differentiation of higher-order types

A "one-hole context" is a data structure with one piece missing.
Conor McBride pointed out that the derivative of a regular type is its type of one-hole contexts. When a data structure is assembled out of common functor combinators, a corresponding type of one-hole contexts can be derived mechanically by rules that mirror the standard derivative rules learned in beginning differential calculus. I've been playing with functor combinators lately. I was delighted to find that the data-structure derivatives can be expressed directly using the standard functor combinators and type families. The code in this blog post is taken from the Haskell library functor-combo. See also the Haskell Wikibooks page on zippers, especially the section called "Differentiation of data types". I mean this post not as new research, but rather as a tidy, concrete presentation of some of Conor's delightful insight.

Continue reading 'Differentiation of higher-order types' »

Tags: derivative, functor, zipper | 5 Comments
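As a concrete (if far less typeful) illustration of one-hole contexts than the Haskell development described here: for plain lists, a one-hole context is just the pair of elements before and after the hole. A hypothetical Python sketch:

```python
def holes(xs):
    """All one-hole contexts of a list: (elements before, elements after)."""
    return [(xs[:i], xs[i + 1:]) for i in range(len(xs))]

def plug(context, x):
    """Fill a one-hole context with x, recovering a whole list."""
    before, after = context
    return before + [x] + after
```

Plugging the removed element back into each context reconstructs the original list, mirroring the "derivative of a regular type is its type of one-hole contexts" slogan for the list functor.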
q.1 find the locus of the feet of the perpendicular drawn from the point (b,0) on tangents to the circle x^2 + y^2 = a^2.

The equation of the tangent to the circle

{x}^{2}+{y}^{2}={a}^{2} \quad (1)

is

y=mx\pm a\sqrt{1+{m}^{2}} \quad (2)

Let P(h,k) be the foot of the perpendicular drawn from the point A(b,0) to a tangent. Then PA is perpendicular to the tangent, so the slope of PA is -1/m:

\frac{k-0}{h-b}=-\frac{1}{m}\quad \Rightarrow \quad m=\frac{b-h}{k} \quad (3)

Also, the point (h,k) must lie on the tangent (2), therefore

k=mh\pm a\sqrt{1+{m}^{2}}

Substituting from equation (3):

k=\frac{\left(b-h\right)h}{k}\pm a\sqrt{1+{\left(\frac{b-h}{k}\right)}^{2}}

k-\frac{\left(b-h\right)h}{k}=\pm a\sqrt{1+{\left(\frac{b-h}{k}\right)}^{2}}

\frac{{k}^{2}+{h}^{2}-bh}{k}=\pm a\sqrt{\frac{{k}^{2}+{\left(b-h\right)}^{2}}{{k}^{2}}}

On squaring both sides, we have:

\frac{{\left({k}^{2}+{h}^{2}-bh\right)}^{2}}{{k}^{2}}={a}^{2}\cdot \frac{{k}^{2}+{\left(b-h\right)}^{2}}{{k}^{2}}

\Rightarrow {\left({k}^{2}+{h}^{2}-bh\right)}^{2}={a}^{2}\left[{k}^{2}+{\left(b-h\right)}^{2}\right]

Now for the locus, replace h by x and k by y:

{\left({x}^{2}+{y}^{2}-bx\right)}^{2}={a}^{2}\left[{y}^{2}+{\left(b-x\right)}^{2}\right]
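A quick numerical sanity check of the locus just derived (the helper names and the sample values a = 1, b = 2 are ours): compute the foot of the perpendicular from (b, 0) to a tangent and confirm it satisfies (x² + y² − bx)² = a²[y² + (b − x)²].

```python
import math

def foot_of_perpendicular(a, b, m):
    """Foot of the perpendicular from (b, 0) to the tangent y = m*x + a*sqrt(1+m^2)."""
    c = a * math.sqrt(1 + m * m)       # tangent written as m*x - y + c = 0
    d = (m * b + c) / (m * m + 1)
    return b - m * d, d                 # (h, k)

def locus_residual(a, b, h, k):
    """Should be ~0 when (h, k) lies on (x^2 + y^2 - b*x)^2 = a^2 [y^2 + (b - x)^2]."""
    return (h * h + k * k - b * h) ** 2 - a * a * (k * k + (b - h) ** 2)
```

Running this for several slopes m gives residuals at roundoff level, supporting the algebra above.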
Adaptive Tracking of Maneuvering Targets with Managed Radar - MATLAB & Simulink - MathWorks 한국 Multifunction radars can search for targets, confirm new tracks, and revisit tracks to update the state. To perform these functions, a multifunction radar is often managed by a resource manager that creates radar tasks for search, confirmation, and tracking. These tasks are scheduled according to priority and time so that, at each time step, the multifunction radar can point its beam in a desired direction. The Search and Track Scheduling for Multifunction Phased Array Radar example shows a multifunction phased-array radar managed with a resource manager. In this example, we extend the Search and Track Scheduling for Multifunction Phased Array Radar example to the case of multiple maneuvering targets. There are two conflicting requirements for a radar used to track maneuvering targets: You define a scenario and a radar with an update rate of 20 Hz, which means that the radar has 20 beams per second allocated for either search, confirmation, or tracking. You load the benchmark trajectories used in the Benchmark Trajectories for Multi-Object Tracking (Sensor Fusion and Tracking Toolbox) example. There are six benchmark trajectories and you define a trajectory for each one. The six platforms in the figure follow non-maneuvering legs interspersed with maneuvering legs. You can view the trajectories in the figure. In this example, you use a tracker that associates the detections to the tracks using a global nearest neighbor (GNN) algorithm. To track the maneuvering targets, you define a FilterInitializationFcn function that initializes an IMM filter. The initMPARIMM function uses two motion models: a constant-velocity model and a constant-turn rate model. The trackingIMM (Sensor Fusion and Tracking Toolbox) filter is responsible for estimating the probability of each model, which you can access from its ModelProbabilities property. 
In this example, you classify a target as maneuvering when the probability of the constant-turn rate model is higher than 0.6. This section only briefly outlines the radar resource management. For more details, see the Adaptive Tracking of Maneuvering Targets with Managed Radar example. Unlike search tasks, track tasks cannot be planned in advance. Instead, the resource manager creates confirmation and tracking tasks based on the changing scenario. The main difference in this example from the Adaptive Tracking of Maneuvering Targets with Managed Radar example is that the JobType for each track task can be either "TrackNonManeuvering" or "TrackManeuvering". The distinction between the two types of tracking tasks enables you to schedule tasks for each type of track at different revisit rates, making it an adaptive tracking algorithm. Similar to search tasks, tracking tasks are also managed in a job queue.

[1] Charlish, Alexander, Folker Hoffmann, Christoph Degen, and Isabel Schlangen. "The Development From Adaptive to Cognitive Radar Resource Management." IEEE Aerospace and Electronic Systems Magazine 35, no. 6 (June 1, 2020): 8–19. https://doi.org/10.1109/MAES.2019.2957847.
Portfolio Set for Optimization Using PortfolioMAD Object - MATLAB & Simulink - MathWorks 한국

X⊂{R}^{n}

{A}_{I}x≤{b}_{I}

{A}_{E}x={b}_{E}

{l}_{B}≤x≤{u}_{B}

{l}_{i}{v}_{i}≤{x}_{i}≤{u}_{i}{v}_{i}

{l}_{S}≤{1}^{T}x≤{u}_{S}

{l}_{G}≤Gx≤{u}_{G}

{l}_{Ri}{\left({G}_{B}x\right)}_{i}≤{\left({G}_{A}x\right)}_{i}≤{u}_{Ri}{\left({G}_{B}x\right)}_{i}

\frac{1}{2}{1}^{T}|x−{x}_{0}|≤\mathrm{τ}

{1}^{T}×\mathrm{max}\left\{0,x−{x}_{0}\right\}≤{\mathrm{τ}}_{B}

{1}^{T}×\mathrm{max}\left\{0,{x}_{0}−x\right\}≤{\mathrm{τ}}_{S}

The average turnover constraint (see Working with Average Turnover Constraints Using PortfolioMAD Object) with τ is not a combination of the one-way turnover constraints with τ = τB = τS.

MinNumAssets≤\underset{i=1}{\overset{NumAssets}{∑}}{v}_{i}≤MaxNumAssets
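The three turnover constraints above are straightforward to evaluate for a candidate portfolio; a sketch (function names are ours, not part of the toolbox API):

```python
def average_turnover(x, x0):
    """(1/2) * 1' * |x - x0|: the average turnover of the rebalance."""
    return 0.5 * sum(abs(a - b) for a, b in zip(x, x0))

def purchase_turnover(x, x0):
    """1' * max(0, x - x0): one-way (buy-side) turnover."""
    return sum(max(0.0, a - b) for a, b in zip(x, x0))

def sale_turnover(x, x0):
    """1' * max(0, x0 - x): one-way (sell-side) turnover."""
    return sum(max(0.0, b - a) for a, b in zip(x, x0))
```

When the weights sum to the same total before and after rebalancing, buys equal sells, so the average turnover equals either one-way value; imposing τ = τB = τS on the one-way constraints is still a different (tighter) feasible set than the single average-turnover constraint, as the text notes.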
Scientific Notation | James's Knowledge Graph

Scientific Notation is a way to express very small and very large numbers using exponents, and is useful for brevity and for comparing orders of magnitude between numbers. To convert a number to scientific notation, it is rewritten as a number whose absolute value is greater than or equal to one and less than ten, multiplied by 10 raised to some exponent.

99,000 expressed in scientific notation is 9.9 \cdot 10^4

0.00099 expressed in scientific notation is 9.9 \cdot 10^{-4}

SI number prefixes are commonly used in lieu of scientific or engineering notation. For example, rather than writing " 1.23 \cdot 10^3g " (grams), we can write " 1.23kg " (kilograms).

Common large numbers expressed in scientific notation:

Avogadro's number is 6.02214076 \cdot 10^{23}

The charge of an electron is 1.6021766208 \cdot 10^{-19} coulombs.

A googol, a one followed by 100 zeros, is 1 \cdot 10^{100}

The speed of light is 2.99792458 \cdot 10^8 meters per second.

Video: Introduction to scientific notation

Deeper Knowledge on Scientific Notation

Broader Topics Related to Scientific Notation

Scientific Notation Knowledge Graph
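The conversion rule above — a mantissa in [1, 10) times a power of ten — can be sketched in a few lines (the helper name is ours):

```python
import math

def to_scientific(x):
    """Return (mantissa, exponent) with 1 <= |mantissa| < 10 and x = mantissa * 10**exponent."""
    if x == 0:
        return 0.0, 0
    exponent = math.floor(math.log10(abs(x)))
    return x / 10 ** exponent, exponent
```

Python's format mini-language produces the same form directly, e.g. `f"{99000:.1e}"` gives `"9.9e+04"`.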
Buybacks & Recollateralization - XUSD.Money

XUSD: Partial-Collateralized Stablecoin Protocol

The protocol at times will have excess collateral value or require adding collateral to reach the collateral ratio. To quickly redistribute value back to XUS holders or increase system collateral, two special swap functions are built into the protocol: buyback and recollateralize. Anyone can call the recollateralize function, which then checks if the total collateral value in USD across the system is below the current collateral ratio. If it is, then the system allows the caller to add up to the amount needed to reach the target collateral ratio in exchange for newly minted XUS at a bonus rate. The bonus rate is set to 0.75% to quickly incentivize arbitragers to close the gap and recollateralize the protocol to the target ratio. The bonus rate can be adjusted or changed to a dynamic PID-controller-adjusted variable through governance.

XUS_{received} = \frac{(Y∗P_y)(1+B_r)}{P_z}

Y is the units of collateral needed to reach the collateral ratio

P_y is the price in USD of the collateral

B_r is the bonus rate for XUS emitted when recollateralizing

P_z is the price in USD of XUS

Example A: There is 100,000,000 XUSD in circulation at a 50% collateral ratio. The total value of collateral across the DAI and WETH pools is 50m USD and the system is balanced. The price of XUSD drops to $.99 and the protocol increases the collateral ratio to 50.25%. There is now $250,000 worth of collateral needed to reach the target ratio. Anyone can call the recollateralize function and place up to $250,000 of collateral into pools to receive an equal value of XUS plus a bonus rate of 0.75%.
Placing 250,000 DAI at a price of $1.00/DAI and a market price of $3.80/XUS is as follows:

XUS_{received}=\frac{(250000∗1.00)(1+.0075)}{3.80} \\ XUS_{received}=66282.89

The opposite scenario occurs when there is more collateral in the system than required to hold the target collateral ratio. This can happen a number of ways:

The protocol has been lowering the collateral ratio, successfully keeping the price of XUSD stable

Interest-bearing collateral is accepted into the protocol and its value accrues

Minting and redemption fees are creating revenue

In such a scenario, any XUS holder can call the buyback function to exchange the amount of excess collateral value in the system for XUS, which is then burned by the protocol. This effectively redistributes any excess value back to the XUS distribution, and holders don't need to actively participate in buybacks to gain value since there is no bonus rate for the buyback function. It effectively models a share buyback to the governance token distribution.

Collateral_{received}=\frac{Z∗P_z}{P_y}

Z is the units of XUS deposited to be burned

P_y is the price in USD of the collateral

P_z is the price in USD of XUS

Example B: There is 150,000,000 XUSD in circulation at a 50% collateral ratio. The total value of collateral across the DAI and WETH pools is 76m USD. There is $1m worth of excess collateral available for XUS buybacks. Anyone can call the buyback function and burn up to $1,000,000 worth of XUS to receive excess collateral. Burning 238,095.238 XUS at a price of $4.20/XUS to receive DAI at a price of $.99/DAI is as follows:

DAI_{received} = \frac{238095.238∗4.20}{.99} \\ DAI_{received}=1010101.01
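Both swap formulas can be checked against Examples A and B with a small sketch (function names are ours, not the protocol's contract interface):

```python
def xus_received(collateral_units, collateral_price, bonus_rate, xus_price):
    """Recollateralize: XUS minted for supplying needed collateral, plus the bonus."""
    return collateral_units * collateral_price * (1 + bonus_rate) / xus_price

def collateral_received(xus_units, xus_price, collateral_price):
    """Buyback: collateral released in exchange for XUS that is then burned."""
    return xus_units * xus_price / collateral_price
```

Plugging in the worked numbers reproduces the two example results to the cent.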
Geometric topology

For other uses, see Geometric topology (disambiguation).

Geometric topology as an area distinct from algebraic topology may be said to have originated in the 1935 classification of lens spaces by Reidemeister torsion, which required distinguishing spaces that are homotopy equivalent but not homeomorphic. This was the origin of simple homotopy theory. The use of the term geometric topology to describe these seems to have originated rather recently. [1]

A handle decomposition of an m- manifold M is a union

{\displaystyle \emptyset =M_{-1}\subset M_{0}\subset M_{1}\subset M_{2}\subset \dots \subset M_{m-1}\subset M_{m}=M}

where each {\displaystyle M_{i}} is obtained from {\displaystyle M_{i-1}} by the attaching of {\displaystyle i} -handles. A handle decomposition is to a manifold what a CW-decomposition is to a topological space—in many regards the purpose of a handle decomposition is to have a language analogous to CW-complexes, but adapted to the world of smooth manifolds. Thus an i-handle is the smooth analogue of an i-cell. Handle decompositions of manifolds arise naturally via Morse theory. The modification of handle structures is closely linked to Cerf theory. Suppose a d-dimensional manifold N is embedded into an n-dimensional manifold M (where d < n).
If {\displaystyle x\in N,} we say N is locally flat at x if there is a neighborhood {\displaystyle U\subset M} of x such that the topological pair {\displaystyle (U,U\cap N)} is homeomorphic to the pair {\displaystyle (\mathbb {R} ^{n},\mathbb {R} ^{d})} , with the standard inclusion of {\displaystyle \mathbb {R} ^{d}} in {\displaystyle \mathbb {R} ^{n}} . That is, there exists a homeomorphism {\displaystyle U\to \mathbb {R} ^{n}} carrying {\displaystyle U\cap N} onto {\displaystyle \mathbb {R} ^{d}} .

The generalized Schoenflies theorem states that, if an (n − 1)-dimensional sphere S is embedded into the n-dimensional sphere Sn in a locally flat way (that is, the embedding extends to that of a thickened sphere), then the pair (Sn, S) is homeomorphic to the pair (Sn, Sn−1), where Sn−1 is the equator of the n-sphere. Brown and Mazur received the Veblen Prize for their independent proofs [2] [3] of this theorem.

Surgery theory is a collection of techniques used to produce one manifold from another in a 'controlled' way, introduced by Milnor (1961). Surgery refers to cutting out parts of the manifold and replacing them with a part of another manifold, matching up along the cut or boundary. This is closely related to, but not identical with, handlebody decompositions. It is a major tool in the study and classification of manifolds of dimension greater than 3. The classification of exotic spheres by Kervaire and Milnor (1963) led to the emergence of surgery theory as a major tool in high-dimensional topology.

^ Brown, Morton (1960), A proof of the generalized Schoenflies theorem. Bull. Amer. Math. Soc., vol. 66, pp. 74–76. MR 0117695

^ Mazur, Barry, On embeddings of spheres. Bull. Amer. Math. Soc. 65 (1959), 59–65. MR 0117693
 Biomechanical Effects of Implant Materials on Posterior Lumbar Interbody Fusion: Comparison of Polyetheretherketone and Titanium Spacers Using Finite Element Analysis and Considering Bone Density 1Department of Orthopaedic Surgery, Juntendo University, Tokyo, Japan; 2Renewable Energy Center, Research Institute for Applied Mechanics, Kyushu University, Kasuga, Japan Correspondence to: Ikuho Yonezawa, Keywords: Posterior Lumbar Interbody Fusion, Biomechanics, Finite Element Analysis, Cage, Polyetheretherketone, Titanium, Osteoporosis Few biomechanical data exist regarding whether the polyetheretherketone (PEEK) spacer or titanium spacer is better for posterior lumbar interbody fusion (PLIF). This study evaluated the biomechanical influence that these types of spacers with different levels of hardness exert on the vertebra by using finite element analysis including bone strength distribution. To evaluate the risk of spacer subsidence for PLIF, we built a finite element model of the lumbar spine using computed tomography data of osteoporosis patients. Then, we simulated PLIF in L3/4 and built models with the hardness of the interbody spacer set as PEEK and titanium. Bones around the spacer were subjected to different load conditions. Then, fracture elements and some stress states of the two modalities were compared. In both models of PLIF simulation, fracture elements and stress were concentrated in the bones around the spacer. Fracture elements and stress values of the model simulating the PEEK spacer were significantly smaller compared to those of the titanium simulation model. For PLIF of osteoporotic vertebrae, this suggested that the PEEK spacer is in a mechanical environment less susceptible to subsidence caused by microfractures of bone tissue and bone remodeling-related fusion aspects. Therefore, PEEK spacers are biomechanically more useful. 
\rho =\begin{cases}0.0 & \left(HU\le -1\right)\\ \left(HU+1.4246\right)\times \dfrac{0.001}{1.0580} & \left(HU>-1\right)\end{cases}

E=\begin{cases}0.001 & \left(\rho =0\right)\\ 33900{\rho }^{2.20} & \left(0<\rho \le 0.27\right)\\ 5307\rho +469 & \left(0.27<\rho <0.6\right)\\ 10200{\rho }^{2.01} & \left(0.6\le \rho \right)\end{cases}

{\sigma }_{r}=\begin{cases}1.0\times {10}^{20} & \left(\rho \le 0.2\right)\\ 137{\rho }^{1.88} & \left(0.2<\rho <0.317\right)\\ 114{\rho }^{1.72} & \left(0.317\le \rho \right)\end{cases}

1. Cloward, R.B. (1953) The Treatment of Ruptured Lumbar Intervertebral Discs by Vertebral Body Fusion. I. Indications, Operative Technique, after Care. Journal of Neurosurgery, 10, 154-168. 2. Brantigan, J.W., McAfee, P.C., Cunningham, B.W., Wang, H. and Orbegoso, C.M. (1994) Interbody Lumbar Fusion Using a Carbon Fiber Cage Implant versus Allograft Bone: An Investigational Study in the Spanish Goat. Spine, 19, 1436-1444. 3. Brantigan, J.W. and Steffee, A.D. (1993) A Carbon Fiber Implant to Aid Interbody Lumbar Fusion: Two-Year Clinical Results in the First 26 Patients. Spine, 18, 2106-2117. 4. Matge, G. (2002) Cervical Cages Fusion with 5 Different Implants: 250 Cases. Acta Neurochirurgica, 144, 539-549. 5. Ray, C.D. (1997) Threaded Titanium Cages for Lumbar Interbody Fusions. Spine, 22, 667-679. 6.
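The HU-to-density and density-to-modulus mappings above can be sketched as plain piecewise functions (a hedged illustration; the mid-density modulus term follows Keyak's published relation, since the source rendering of that segment is garbled):

```python
def density_from_hu(hu):
    """Bone density (g/cm^3) from a CT Hounsfield value, per the calibration above."""
    if hu <= -1:
        return 0.0
    return (hu + 1.4246) * 0.001 / 1.0580

def youngs_modulus(rho):
    """Young's modulus (MPa) from density, piecewise as in the model."""
    if rho == 0:
        return 0.001
    if rho <= 0.27:
        return 33900 * rho ** 2.20
    if rho < 0.6:
        return 5307 * rho + 469  # mid-density segment (Keyak's relation; assumed)
    return 10200 * rho ** 2.01
```

The element-wise assignment in the finite element model applies exactly this chain: each element's mean HU value gives a density, which gives a modulus.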
Eck, K.R., Bridwell, K.H., Ungacta, F.F., Lapp, M.A., Lenke, L.G. and Riew, K.D. (2000) Analysis of Titanium Mesh Cages in Adults with Minimum Two-Year Follow-Up. Spine, 25, 2407-2415. 7. Kuslich, S.D., Danielson, G., Dowdle, J.D., Sherman, J., Fredrickson, B., Yuan, H. and Griffith, S.L. (2000) Four-Year Follow-Up Results of Lumbar Spine Arthrodesis Using the Bagby and Kuslich Lumbar Fusion Spacer. Spine, 25, 2656-2662. 8. McAfee, P.C. (1999) Interbody Fusion Spacers in Reconstructive Operations on the Spine. The Journal of Bone and Joint Surgery—American Volume, 81, 859-880. 9. Nemoto, O., Asazuma, T., Yato, Y., Imabayashi, H., Yasuoka, H. and Fujikawa, A. (2014) Comparison of Fusion Rates Following Transforaminal Lumbar Interbody Fusion Using Polyetheretherketone Cages or Titanium Cages with Transpedicular Instrumentation. European Spine Journal, 23, 2150-2155. 10. Wang, Z., Fu, S., Wu, Z.X., Zhang, Y. and Lei, W. (2013) Ti2448 Pedicle Screw System Augmentation for Posterior Lumbar Interbody Fusion. Spine, 38, 2008-2015. 11. Couvertier, M., Germaneau, A., Saget, M., Dupré, J.C., Doumalin, P., Brémand, F., Hesser, F., Brèque, C., Roulaud, M., Monlezun, O., Vendeuvre, T. and Rigoard, P. (2017) Biomechanical Analysis of the Thoracolumbar Spine under Physiological Loadings: Experimental Motion Data Corridors for Validation of Finite Element Models. Proceedings of the Institution of Mechanical Engineers. Part H, 231, 975-981. 12. Lee, C.H., Landham, P.R., Eastell, R., Adams, M.A., Dolan, P. and Yang, L. (2017) Development and Validation of a Subject-Specific Finite Element Model of the Functional Spinal Unit to Predict Vertebral Strength. Proceedings of the Institution of Mechanical Engineers. Part H, 231, 821-830. 13. Vadapalli, S., Sairyo, K., Goel, V.K., Robon, M., Biyani, A., Khandha, A. and Ebraheim, N.A. (2006) Biomechanical Rationale for Using Polyetheretherketone (PEEK) Spacers for Lumbar Interbody Fusion—A Finite Element Study. Spine, 31, E992-998. 14. 
Xiao, Z., Wang, L., Gong, H. and Zhu, D. (2012) Biomechanical Evaluation of Three Surgical Scenarios of Posterior Lumbar Interbody Fusion by Finite Element Analysis. BioMedical Engineering OnLine, 11, 31. 15. Lu, Y., Rosenau, E., Paetzold, H., Klein, A., Püschel, K., Morlock, M.M. and Huber, G. (2013) Strain Changes on the Cortical Shell of Vertebral Bodies due to Spine Ageing: A Parametric Study Using a Finite Element Model Evaluated by Strain Measurements. Proceedings of the Institution of Mechanical Engineers. Part H, 227, 1265-1274. 16. Imai, K., Ohnishi, I., Bessho, M. and Nakamura, K. (2006) Nonlinear Finite Element Model Predicts Vertebral Bone Strength and Fracture Site. Spine, 31, 1789-1794. 17. Keyak, J.H., Meagher, J.M., Skinner H.B. and Mote Jr., C.D. (1990) Automated Three-Dimensional Finite Element Modelling of Bone: A New Method. Journal of Biomedical Engineering, 12, 389-397. 18. Matsuura, Y., Giambini, H., Ogawa, Y., Fang, Z., Thoreson, A.R., Yaszemski, M.J., Lu, L. and An, K.N. (2014) Specimen-Specific Nonlinear Finite Element Modeling to Predict Vertebrae Fracture Loads after Vertebroplasty. Spine, 39, E1291-1296. 19. Keyak, J.H., Rossi, S.A., Jones, K.A. and Skinner, H.B. (1998) Prediction of Femoral Fracture Load Using Automated Finite Element Modeling. Journal of Biomechanics, 31, 125-133. 20. Tsuang, Y.H., Chiang, Y.F., Hung, C.Y., Wei, H.W., Huang, C.H. and Cheng, C.K. (2009) Comparison of Cage Application Modality in Posterior Lumbar Interbody Fusion with Posterior Instrumentation—A Finite Element Study. Medical Engineering & Physics, 31, 565-570. 21. Bessho, M., Ohnishi, I., Matsuyama, J., Matsumoto, T., Imai, K. and Nakamura, K. (2007) Prediction of Strength and Strain of the Proximal Femur by a CT-Based Finite Element Method. Journal of Biomechanics, 40, 1745-1753. 22. Kurtz, S.M. and Devine, J.N. (2007) PEEK Biomaterials in Trauma, Orthopedic, and Spinal Implants. Biomaterials, 28, 4845-4869. 23. 
Pelletier, M.H., Cordaro, N., Punjabi, V.M., Waites, M., Lau, A. and Walsh, W.R. (2016) PEEK versus Ti Interbody Fusion Devices: Resultant Fusion, Bone Apposition, Initial and 26-Week Biomechanics. Clinical Spine Surgery, 29, E208-E214. 24. Oxland, T.R., Lund, T., Jost, B., Cripton, P., Lippuner, K., Jaeger, P. and Nolte, L.P. (1996) The Relative Importance of Vertebral Bone Density and Disc Degeneration in Spinal Flexibility and Interbody Implant Performance. Spine, 21, 2558-2569. 25. Boden, S. and Sumner, D. (1995) Biologic Factors Affecting Spinal Fusion and Bone Regeneration. Spine, 20, S102-S112. 26. Sethi, A., Lee, S. and Vaidya, R. (2009) Transforaminal Lumbar Interbody Fusion using Unilateral Pedicle Screws and a Translaminar Screw. European Spine Journal, 18, 430-434. 27. Oh, K.W., Lee, J.H., Lee, D.Y. and Shim, H.J. (2017) The Correlation between Cage Subsidence, Bone Mineral Density, and Clinical Results in Posterior Lumbar Interbody Fusion. Clinical Spine Surgery, 30, E683-E689. 28. Lee, J.H., Jeon, D.W., Lee, S.J., Chang, B.S. and Lee, C.K. (2010) Fusion Rates and Subsidence of Morselized Local Bone Grafted in Titanium Cages in Posterior Lumbar Interbody Fusion using Quantitative Three-Dimensional Computed Tomography Scans. Spine, 35, 1460-1465. 29. Herrera, A., Panisello, J.J., Ibarz, E., Cegonino, J., Puérto-las, J.A. and Gracia, L. (2009) Comparison between DEXA and Finite Element Studies in the Long-Term Bone Remodeling of an Anatomical Femoral Stem. Journal of Biomechanical Engineering, 131, Article ID: 041013. 30. Huiskes, R., Weinans, H., Grootenboer, H.J., Dalstra, M., Fudala, B. and Slooff, T.J. (1987) Adaptive Bone-Remodeling Theory Applied to Prosthetic-Design Analysis. Journal of Biomechanics, 20, 1135-1150. 31. Lee, J.H., Lee, J.H., Park, J.W. and Lee, H.S. 
(2011) Fusion Rates of a Morselized Local Bone Graft in Polyetheretherketone Spacers in Posterior Lumbar Interbody Fusion by Quantitative Analysis using Consecutive Three-Dimensional Computed Tomography Scans. The Spine Journal, 11, 647-653. 32. Schimmel, J.J., Poeschmann, M.S., Horsting, P.P., Schonfeld, D.H., van Limbeek, J. and Pavlov, P.W. (2016) Comparison between DEXA and Finite Element Studies in the Long-Term Bone Remodeling of an Anatomical Femoral Stem. Clinical Spine Surgery, 29, E252-E258. 33. Seaman, S., Kerezoudis, P., Bydon, M., Torner, J.C. and Hitchon, P.W. (2017) Titanium vs. Polyetheretherketone (PEEK) Interbody Fusion: Meta-Analysis and Review of the Literature. Journal of Clinical Neuroscience, 44, 23-29. 34. Tawara, D., Sakamoto, J., Murakami, H., Kawahara, N., Oda, J. and Tomita, K. (2010) Mechanical Therapeutic Effects in Osteoporotic L1-Vertebrae Evaluated by Nonlinear Patient-Specific Finite Element Analysis. Journal of Biomechanical Science and Engineering, 5, 499-514.
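The piecewise HU-to-density and density-to-modulus relations above can be sketched in code. This is an illustrative implementation, not the authors' code; the units (g/cm³ for ρ, MPa for E) are assumptions, and the ρ in the linear modulus segment is restored from context, since the two neighboring power-law segments are nearly continuous with 5370ρ + 469 at ρ = 0.27 and ρ = 0.6.

```python
def density_from_hu(hu: float) -> float:
    """CT number (HU) to apparent density rho (g/cm^3 assumed)."""
    if hu <= -1:
        return 0.0
    return (hu + 1.4246) * 0.001 / 1.0580

def youngs_modulus(rho: float) -> float:
    """Apparent density to Young's modulus E (MPa assumed), per the piecewise rule."""
    if rho == 0:
        return 0.001                 # near-zero stiffness for empty voxels
    if rho <= 0.27:
        return 33900 * rho ** 2.20
    if rho < 0.6:
        return 5370 * rho + 469      # linear segment; rho restored from context
    return 10200 * rho ** 2.01

rho = density_from_hu(500.0)
print(rho, youngs_modulus(rho))      # ≈ 0.474, ≈ 3014
```

The near-continuity at the segment boundaries (E differs by under 2% at ρ = 0.27) is what motivates reading the middle segment as linear in ρ rather than as a constant.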
Logical negation - Simple English Wikipedia, the free encyclopedia

Logical negation (also known as not) is a logic operation. For a proposition {\displaystyle P}, its negation is written as {\displaystyle \neg P}.[1] It takes one input and flips its value: if the input is true, it returns false; if the input is false, it returns true.[2][3]

References:
1. Weisstein, Eric W. "Negation". mathworld.wolfram.com. Retrieved 2020-09-02.
2. "Logic and Mathematical Statements - Worked Examples". www.math.toronto.edu. Retrieved 2020-09-02.
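The one-input, value-flipping behaviour described above is a one-liner in most languages; a minimal Python sketch:

```python
def negate(p: bool) -> bool:
    """Logical negation: returns the opposite truth value of its single input."""
    return not p

# Truth table of negation
for p in (True, False):
    print(p, "->", negate(p))   # True -> False, False -> True
```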
Boussinesq approximation (buoyancy) - Wikipedia

This article is about the Boussinesq approximation in buoyancy-driven flows. For other uses, see Boussinesq approximation (disambiguation).

In fluid dynamics, the Boussinesq approximation (pronounced [businɛsk], named for Joseph Valentin Boussinesq) is used in the field of buoyancy-driven flow (also known as natural convection). It ignores density differences except where they appear in terms multiplied by g, the acceleration due to gravity. The essence of the Boussinesq approximation is that the difference in inertia is negligible but gravity is sufficiently strong to make the specific weight appreciably different between the two fluids. Sound waves are neglected when the Boussinesq approximation is used, since sound waves propagate via density variations. Boussinesq flows are common in nature (such as atmospheric fronts, oceanic circulation, katabatic winds), industry (dense gas dispersion, fume cupboard ventilation), and the built environment (natural ventilation, central heating). The approximation is extremely accurate for many such flows, and makes the mathematics and physics simpler.

The approximation

The Boussinesq approximation is applied to problems where the fluid varies in temperature from one place to another, driving a flow of fluid and heat transfer. The fluid satisfies conservation of mass, conservation of momentum and conservation of energy.
In the Boussinesq approximation, variations in fluid properties other than density ρ are ignored, and density only appears when it is multiplied by g, the gravitational acceleration.[1]: 127–128  If u is the local velocity of a parcel of fluid, the continuity equation for conservation of mass is[1]: 52 

{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \left(\rho \mathbf {u} \right)=0.}

If density variations are ignored, this reduces to[1]: 128 

{\displaystyle \nabla \cdot \mathbf {u} =0.}

The general expression for conservation of momentum of an incompressible, Newtonian fluid (the Navier–Stokes equations) is

{\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} =-{\frac {1}{\rho }}\nabla p+\nu \nabla ^{2}\mathbf {u} +{\frac {1}{\rho }}\mathbf {F} ,}

where ν (nu) is the kinematic viscosity and F is the sum of any body forces such as gravity.[1]: 59  In this equation, the density is assumed to have a fixed part and a part with a linear dependence on temperature:

{\displaystyle \rho =\rho _{0}-\alpha \rho _{0}(T-T_{0}),}

where α is the coefficient of thermal expansion.[1]: 128–129  The Boussinesq approximation states that the density variation is only important in the buoyancy term. If {\displaystyle \mathbf {F} =\rho \mathbf {g} } is the gravitational body force, the resulting conservation equation is[1]: 129 

{\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} =-{\frac {1}{\rho _{0}}}\nabla (p-\rho _{0}\mathbf {g} \cdot \mathbf {z} )+\nu \nabla ^{2}\mathbf {u} -\mathbf {g} \alpha (T-T_{0}).}

In the equation for heat flow in a temperature gradient, the heat capacity per unit volume, {\displaystyle \rho C_{p}}, is assumed constant and the dissipation term is ignored.
The resulting equation is

{\displaystyle {\frac {\partial T}{\partial t}}+\mathbf {u} \cdot \nabla T={\frac {k}{\rho C_{p}}}\nabla ^{2}T+{\frac {J}{\rho C_{p}}},}

where J is the rate per unit volume of internal heat production and {\displaystyle k} is the thermal conductivity.[1]: 129 

The continuity, momentum and heat equations above are the basic convection equations in the Boussinesq approximation. The advantage of the approximation arises because when considering a flow of, say, warm and cold water of density ρ1 and ρ2, one needs only to consider a single density ρ: the difference Δρ = ρ1 − ρ2 is negligible. Dimensional analysis shows that, under these circumstances, the only sensible way that acceleration due to gravity g should enter into the equations of motion is in the reduced gravity g′, where

{\displaystyle g'=g{\frac {\rho _{1}-\rho _{2}}{\rho }}.}

(The denominator may be either density without affecting the result, because the change would be of order g(Δρ/ρ)².) The most commonly used dimensionless numbers are the Richardson number and the Rayleigh number. The mathematics of the flow is therefore simpler because the density ratio ρ1/ρ2, a dimensionless number, does not affect the flow; the Boussinesq approximation states that it may be assumed to be exactly one.

Inversions

One feature of Boussinesq flows is that they look the same when viewed upside-down, provided that the identities of the fluids are reversed. The Boussinesq approximation is inaccurate when the dimensionless density difference Δρ/ρ is of order unity. For example, consider an open window in a warm room. The warm air inside is less dense than the cold air outside, which flows into the room and down towards the floor. Now imagine the opposite: a cold room exposed to warm outside air. Here the air flowing in moves up toward the ceiling.
If the flow is Boussinesq (and the room is otherwise symmetrical), then viewing the cold room upside down is exactly the same as viewing the warm room right-way-round. This is because the only way density enters the problem is via the reduced gravity g′, which undergoes only a sign change when changing from the warm room flow to the cold room flow. An example of a non-Boussinesq flow is bubbles rising in water. The behaviour of air bubbles rising in water is very different from the behaviour of water falling in air: in the former case rising bubbles tend to form hemispherical shells, while water falling in air splits into raindrops (at small length scales surface tension enters the problem and confuses the issue).

References

1. Tritton, D. J. (1977). Physical Fluid Dynamics. New York: Van Nostrand Reinhold Co. ISBN 9789400999923.

Further reading

- Boussinesq, Joseph (1897). Théorie de l'écoulement tourbillonnant et tumultueux des liquides dans les lits rectilignes a grande section. Vol. 1. Gauthier-Villars.
- Tritton, D.J. (1988). Physical Fluid Dynamics (Second ed.). Oxford University Press. ISBN 978-0-19-854493-7.
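As a small numerical illustration of the reduced gravity g′ defined above, consider warm and cold water; the density values here are illustrative assumptions, not taken from the article:

```python
g = 9.81           # m/s^2
rho_cold = 998.2   # kg/m^3, water at ~20 °C (illustrative value)
rho_warm = 992.2   # kg/m^3, water at ~40 °C (illustrative value)

# g' = g * (rho_1 - rho_2) / rho; either density may serve as the
# denominator, since the difference is of order g * (drho/rho)^2.
g_prime = g * (rho_cold - rho_warm) / rho_cold
print(f"g' = {g_prime:.4f} m/s^2")
```

The result is roughly three orders of magnitude smaller than g, which is why buoyancy-driven flows of nearly equal-density fluids are so gentle.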
Characteristics of Circular Orbits Practice Problems Online | Brilliant

1. A satellite with mass m = 3.00 \times 10^4 \text{ kg} goes around a planet with mass M = 6.00 \times 10^{17} \text{ kg} in a circular orbit of radius R = 4.00 \times 10^6 \text{ m}. Find the approximate orbital speed v for the satellite. G = 6.67 \times 10^{-11} \text{ N}\cdot\text{m}^2\text{/kg}^2.

2. A satellite with mass m = 4.00 \times 10^4 \text{ kg} orbits around a planet of mass M = 6.00 \times 10^{17} \text{ kg}. If the satellite's gravitational potential energy is U = -4.00 \times 10^5 \text{ J}, what is the approximate kinetic energy of the satellite? G = 6.67 \times 10^{-11} \text{ N}\cdot\text{m}^2\text{/kg}^2. Choices: 2.00 \times 10^6 \text{ J}, 4.00 \times 10^4 \text{ J}, 2.00 \times 10^5 \text{ J}, 3.35 \times 10^6 \text{ J}.

3. A satellite with mass m = 2.00 \times 10^4 \text{ kg} orbits a planet of mass M = 8.00 \times 10^{17} \text{ kg} in a circular orbit of radius R. If the speed of the satellite is v = 5.17 \text{ m/s}, what is the approximate angular momentum of the satellite (in \text{kg}\cdot\text{m}^2\text{/s})? G = 6.67 \times 10^{-11} \text{ N}\cdot\text{m}^2\text{/kg}^2.

4. A satellite with mass m = 4.00 \times 10^4 \text{ kg} orbits around a planet while maintaining a height of h = 500 \text{ km} from its surface. If the planet has mass M = 9.00 \times 10^{17} \text{ kg} and radius R = 1.50 \times 10^6 \text{ m}, find the approximate … G = 6.67 \times 10^{-11} \text{ N}\cdot\text{m}^2\text{/kg}^2.

5. A satellite is moving around the earth in a stable circular orbit. Which of the following statements is WRONG about this satellite? (a) It is moving at a constant speed. (b) It is acted upon by a force directed away from the center of the earth which counter-balances the gravitational pull of the earth. (c) Its angular momentum remains constant. (d) It behaves as if it were a free-falling body.
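The first two problems can be checked with a short script. This is a sketch, not Brilliant's official solution; it uses only the circular-orbit relations v = √(GM/R) and K = −U/2:

```python
import math

G = 6.67e-11  # N*m^2/kg^2

# Problem 1: gravity supplies the centripetal force,
# G*M*m/R**2 = m*v**2/R, so v = sqrt(G*M/R).
M, R = 6.00e17, 4.00e6
v = math.sqrt(G * M / R)
print(f"v ≈ {v:.2f} m/s")    # ≈ 3.16 m/s

# Problem 2: for a circular orbit K = G*M*m/(2R) = -U/2.
U = -4.00e5
K = -U / 2
print(f"K = {K:.2e} J")      # 2.00e5 J, matching the choice 2.00 x 10^5 J
```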
Moebius - Maple Help

Moebius(n) Möbius(n) mu(n) \mathrm{\mu }⁡\left(n\right)

The Moebius(n) command computes the Moebius function of the positive integer n. If n is divisible by the square of a prime number, then Moebius(n) is equal to 0. Otherwise, Moebius(n) is equal to 1 if n has an even number of prime factors, and is equal to -1 if n has an odd number of prime factors. \mathrm{Möbius} and mu are aliases of Moebius. You can enter the command mu using either the 1-D or 2-D calling sequence. For example, mu(8) is equivalent to \mathrm{\mu }⁡\left(8\right)

\mathrm{with}⁡\left(\mathrm{NumberTheory}\right): \mathrm{Moebius}⁡\left(1\right) \textcolor[rgb]{0,0,1}{1} \mathrm{Moebius}⁡\left({3}^{3}\cdot 5\right) \textcolor[rgb]{0,0,1}{0} \mathrm{Moebius}⁡\left(3\cdot 5\cdot 7\right) \textcolor[rgb]{0,0,1}{-1} \mathrm{Moebius}⁡\left(23\cdot 11\right) \textcolor[rgb]{0,0,1}{1}

The Möbius function is multiplicative as an arithmetic function. That is, if n and m are coprime then Moebius(n*m) = Moebius(n)*Moebius(m). \mathrm{igcd}⁡\left(5657,31945103\right) \textcolor[rgb]{0,0,1}{1} \mathrm{\mu }⁡\left(5657\cdot 31945103\right) \textcolor[rgb]{0,0,1}{-1} \mathrm{\mu }⁡\left(5657\right)⁢\mathrm{\mu }⁡\left(31945103\right) \textcolor[rgb]{0,0,1}{-1}

The first 50 values for the Moebius function are plotted below: \mathrm{plots}:-\mathrm{pointplot}⁡\left([\mathrm{seq}⁡\left([n,\mathrm{Moebius}⁡\left(n\right)],n=1..50\right)],\mathrm{labels}=["n",\mu ⁡\left(n\right)],\mathrm{symbol}=\mathrm{soliddiamond},\mathrm{symbolsize}=15,\mathrm{color}="OrangeRed",\mathrm{size}=[600,400],\mathrm{tickmarks}=[\mathrm{default},[-1,0,1]]\right)

The NumberTheory[Moebius] command was introduced in Maple 2016.
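The same function is easy to reproduce outside Maple. A minimal Python sketch by trial division (not Maple's implementation), mirroring the examples above:

```python
def moebius(n: int) -> int:
    """Möbius function: 0 if n is divisible by the square of a prime,
    otherwise (-1)**k where k is the number of distinct prime factors."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    k = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:   # d^2 divides the original n
                return 0
            k += 1
        d += 1
    if n > 1:                # leftover prime factor
        k += 1
    return -1 if k % 2 else 1

# Mirrors Moebius(1), Moebius(3^3*5), Moebius(3*5*7), Moebius(23*11)
print([moebius(n) for n in (1, 135, 105, 253)])  # [1, 0, -1, 1]
```

Trial division keeps the sketch short; for bulk evaluation up to some bound, a sieve is the usual choice.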
ALGEBRAIC GEOMETRY - Encyclopedia Information

Zeros of simultaneous polynomials

A single polynomial equation such as {\displaystyle x^{2}+y^{2}+z^{2}-1=0\,} defines a surface (here the unit sphere), while a pair of simultaneous equations such as {\displaystyle x^{2}+y^{2}+z^{2}-1=0,\,} {\displaystyle x+y+z=0\,} defines a curve (here a circle). In general, the set of common zeros of a set S of polynomials is {\displaystyle V(S)=\{(t_{1},\dots ,t_{n})\mid p(t_{1},\dots ,t_{n})=0{\text{ for all }}p\in S\}.\,}

Morphism of affine varieties

Rational function and birational equivalence

The unit circle {\displaystyle x^{2}+y^{2}-1=0} admits the rational parametrization {\displaystyle x={\frac {2\,t}{1+t^{2}}},} {\displaystyle y={\frac {1-t^{2}}{1+t^{2}}}\,.} Other examples include the circle {\displaystyle x^{2}+y^{2}-a=0}, which has real points when {\displaystyle a>0} but none when {\displaystyle a<0}, and the hyperbola {\displaystyle xy-1=0}, whose branch with {\displaystyle x>0} is exactly the branch with {\displaystyle x+y>0}.

Asymptotic complexity vs. practical efficiency

History

Some of the roots of algebraic geometry date back to the work of the Hellenistic Greeks from the 5th century BC. The Delian problem, for instance, was to construct a length x so that the cube of side x contained the same volume as the rectangular box a²b for given sides a and b. Menaechmus (circa 350 BC) considered the problem geometrically by intersecting the pair of plane conics ay = x² and xy = ab.[1] In the 3rd century BC, Archimedes and Apollonius systematically studied additional problems on conic sections using coordinates.[1][2] Medieval Muslim mathematicians, including Ibn al-Haytham in the 10th century AD,[3] solved certain cubic equations by purely algebraic means and then interpreted the results geometrically. The Persian mathematician Omar Khayyám (born 1048 AD) discovered a method for solving cubic equations by intersecting a parabola with a circle[4] and seems to have been the first to conceive a general theory of cubic equations.
[5] A few years after Omar Khayyám, Sharaf al-Din al-Tusi's Treatise on equations was described by Roshdi Rashed as "inaugurating the beginning of algebraic geometry".[6] This was criticized by Jeffrey Oaks, who claims that the study of curves by means of equations originated with Descartes in the seventeenth century.[7] Such techniques of applying geometrical constructions to algebraic problems were also adopted by a number of Renaissance mathematicians such as Gerolamo Cardano and Niccolò Fontana "Tartaglia" in their studies of the cubic equation. The geometrical approach to construction problems, rather than the algebraic one, was favored by most 16th and 17th century mathematicians, notably Blaise Pascal, who argued against the use of algebraic and analytical methods in geometry.[8] The French mathematicians Franciscus Vieta and later René Descartes and Pierre de Fermat revolutionized the conventional way of thinking about construction problems through the introduction of coordinate geometry. They were interested primarily in the properties of algebraic curves, such as those defined by Diophantine equations (in the case of Fermat), and the algebraic reformulation of the classical Greek works on conics and cubics (in the case of Descartes). In the same period, the algebraization of algebraic geometry through commutative algebra began. The prominent results in this direction are Hilbert's basis theorem and Hilbert's Nullstellensatz, which are the basis of the connection between algebraic geometry and commutative algebra, and Macaulay's multivariate resultant, which is the basis of elimination theory. Probably because of the size of the computations implied by multivariate resultants, elimination theory was forgotten during the middle of the 20th century until it was renewed by singularity theory and computational algebraic geometry.
[a] Algebraic geometry now finds applications in statistics,[9] control theory,[10][11] robotics,[12] error-correcting codes,[13] phylogenetics[14] and geometric modelling.[15] There are also connections to string theory,[16] game theory,[17] graph matchings,[18] solitons[19] and integer programming.[20]

Notes

a. A witness of this oblivion is the fact that Van der Waerden removed the chapter on elimination theory from the third edition (and all the subsequent ones) of his treatise Moderne Algebra (in German).

References

- Dieudonné, Jean (October 1972). "The Historical Development of Algebraic Geometry". The American Mathematical Monthly, 79(8), 827–866. doi:10.2307/2317664. ISSN 0002-9890. JSTOR 2317664.
- Blume, L. E.; Zame, W. R. (1994). "The algebraic geometry of perfect and sequential equilibrium". Econometrica, 62(4), 783–794. doi:10.2307/2951732. JSTOR 2951732.
- Kenyon, Richard; Okounkov, Andrei; Sheffield, Scott (2003). "Dimers and Amoebae". arXiv:math-ph/0311005.
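As a quick check of the rational parametrization of the unit circle quoted earlier, exact rational arithmetic confirms that x² + y² = 1 for every rational parameter value:

```python
from fractions import Fraction

def circle_point(t: Fraction):
    """Rational parametrization of x^2 + y^2 = 1: x = 2t/(1+t^2), y = (1-t^2)/(1+t^2)."""
    denom = 1 + t * t
    return 2 * t / denom, (1 - t * t) / denom

# Exact verification at a few rational parameter values
for t in (Fraction(0), Fraction(1), Fraction(-3), Fraction(7, 5)):
    x, y = circle_point(t)
    assert x * x + y * y == 1   # exact equality, no floating-point error
print("parametrization verified")
```

Using Fraction rather than floats makes the check an identity rather than an approximation, which matches the algebraic (as opposed to numerical) character of the statement.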