Prime ideal - Wikipedia
Ideal in a ring which has properties similar to prime elements
This article is about ideals in ring theory. For prime ideals in order theory, see ideal (order theory) § Prime ideals.
A Hasse diagram of a portion of the lattice of ideals of the integers ℤ. The purple nodes indicate prime ideals. The purple and green nodes are semiprime ideals, and the purple and blue nodes are primary ideals.
Prime ideals for commutative rings
A positive integer n is a prime number if and only if nℤ is a prime ideal in ℤ.
A simple example: in the ring R = ℤ, the subset of even numbers is a prime ideal.
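A quick numeric illustration (a sketch, not from the article): nℤ is a prime ideal of ℤ exactly when n is prime, and for composite n a brute-force search finds witnesses a, b with ab ∈ nℤ but a, b ∉ nℤ.

```python
def counterexample(n, bound=50):
    """Search for a, b showing that n*Z is not a prime ideal:
    a*b is a multiple of n although neither a nor b is."""
    for a in range(1, bound):
        for b in range(1, bound):
            if a % n and b % n and (a * b) % n == 0:
                return a, b
    return None

print(counterexample(6))   # (2, 3): 2*3 = 6 lies in 6Z, but 2 and 3 do not
print(counterexample(7))   # None: no witnesses, consistent with 7Z being prime
```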
Given an integral domain R, any prime element p ∈ R generates a principal prime ideal (p). Eisenstein's criterion for integral domains (hence UFDs) is an effective tool for determining whether or not an element in a polynomial ring is irreducible. For example, take an irreducible polynomial f(x₁, …, xₙ) in a polynomial ring F[x₁, …, xₙ] over some field F.
If R denotes the ring ℂ[X, Y] of polynomials in two variables with complex coefficients, then the ideal generated by the polynomial Y² − X³ − X − 1 is a prime ideal (see elliptic curve).
In the ring ℤ[X] of all polynomials with integer coefficients, the ideal generated by 2 and X is a prime ideal. It consists of all those polynomials whose constant coefficient is even.
In any ring R, a maximal ideal is an ideal M that is maximal in the set of all proper ideals of R, i.e. M is contained in exactly two ideals of R, namely M itself and the whole ring R. Every maximal ideal is in fact prime. In a principal ideal domain every nonzero prime ideal is maximal, but this is not true in general. For the UFD ℂ[x₁, …, xₙ], Hilbert's Nullstellensatz states that every maximal ideal is of the form (x₁ − α₁, …, xₙ − αₙ).
The quotient maps ℂ[x, y] → ℂ[x, y]/(x² + y² − 1) → ℂ[x, y]/(x² + y² − 1, x) and the isomorphism ℂ[x, y]/(x² + y² − 1, x) ≅ ℂ[y]/(y² − 1) ≅ ℂ × ℂ show that the ideal (x² + y² − 1, x) ⊂ ℂ[x, y] is not prime. (See the first property listed below.)
Another non-example is the ideal (2, x² + 5) ⊂ ℤ[x]: we have x² + 5 − 2·3 = (x − 1)(x + 1) ∈ (2, x² + 5), but neither x − 1 nor x + 1 is an element of the ideal.
The sum of two prime ideals is not necessarily prime. For an example, consider the ring ℂ[x, y] with prime ideals P = (x² + y² − 1) and Q = (x) (the ideals generated by x² + y² − 1 and x respectively). Their sum P + Q = (x² + y² − 1, x) = (y² − 1, x), however, is not prime: y² − 1 = (y − 1)(y + 1) ∈ P + Q but its two factors are not. Alternatively, the quotient ring has zero divisors, so it is not an integral domain and thus P + Q cannot be prime.
Not every ideal which cannot be factored into two ideals is a prime ideal; e.g. (x, y²) ⊂ ℝ[x, y] cannot be factored but is not prime.
Prime ideals for noncommutative rings
This is close to the historical point of view of ideals as ideal numbers: for the ring ℤ, "A is contained in P" is another way of saying "P divides A", and the unit ideal R represents unity.
Important facts
If S is any m-system in R, then a lemma essentially due to Krull shows that there exists an ideal I of R maximal with respect to being disjoint from S, and moreover the ideal I must be prime. (The primality of I can be proved as follows: if a, b ∉ I, then by the maximality of I there exist elements s, t ∈ S with s ∈ I + (a) and t ∈ I + (b). If (a)(b) ⊂ I, we can take r ∈ R with srt ∈ S, and then srt ∈ (I + (a)) r (I + (b)) ⊂ I + (a)(b) ⊂ I, which is a contradiction.)[4] In the case S = {1}, we have Krull's theorem, and this recovers the maximal ideals of R. Another prototypical m-system is the set {x, x², x³, x⁴, ...} of all positive powers of a non-nilpotent element.
Connection to maximality
Retrieved from "https://en.wikipedia.org/w/index.php?title=Prime_ideal&oldid=1062765173"
|
Find Extremum of Multivariate Function and Its Approximation - MATLAB & Simulink Example - MathWorks Italia
Create Function and Find Its Derivative
Convert symmatrix Objects to sym Objects
Substitute Numeric Values and Find the Minimum
Approximate Function Near Its Minimum
This example shows how to find the extremum of a multivariate function and its approximation near the extremum point. This example uses symbolic matrix variables to represent the multivariate function and its derivatives. Symbolic matrix variables are available starting in R2021a.
Consider the multivariate function f(x) = sin(xᵀA x), where x is a 2-by-1 vector and A is a 2-by-2 matrix. To find a local extremum of this function, compute a root of the derivative of f(x). In other words, find the solution of the derivative equation ∇f(x₀) = 0.
Create the vector x and the matrix A as symbolic matrix variables. Define the function f(x) = sin(xᵀA x).

syms x [2 1] matrix
syms A [2 2] matrix
f = sin(x.'*A*x)

f = sin(xᵀ A x)
Compute the derivative D of the function f(x) with respect to x. The derivative D is displayed in compact matrix notation in terms of x and A.

D = diff(f,x)

D = cos(xᵀ A x) (xᵀ A + xᵀ Aᵀ)
The symbolic matrix variables x, A, f, and D are symmatrix objects. These objects represent matrices, vectors, and scalars in compact matrix notation. To show the components of these variables, convert the symmatrix objects to sym objects using symmatrix2sym.
xsym = symmatrix2sym(x)

xsym = [x1; x2]

Asym = symmatrix2sym(A)

Asym = [A1_1 A1_2; A2_1 A2_2]
fsym = symmatrix2sym(f)

fsym = sin(x1*(A1_1*x1 + A1_2*x2) + x2*(A2_1*x1 + A2_2*x2))
Dsym = symmatrix2sym(D)

Dsym = [cos(x1*(A1_1*x1 + A1_2*x2) + x2*(A2_1*x1 + A2_2*x2))*(2*A1_1*x1 + A1_2*x2 + A2_1*x2), cos(x1*(A1_1*x1 + A1_2*x2) + x2*(A2_1*x1 + A2_2*x2))*(A1_2*x1 + A2_1*x1 + 2*A2_2*x2)]
Suppose you are interested in the case where the value of A is [2 -1; 0 3]. Substitute this value into the function fsym.

fsym = subs(fsym,Asym,[2 -1; 0 3])

fsym = sin(3*x2^2 + x1*(2*x1 - x2))

Substitute the value of A into the derivative Dsym.

Dsym = subs(Dsym,Asym,[2 -1; 0 3])

Dsym = [cos(3*x2^2 + x1*(2*x1 - x2))*(4*x1 - x2), -cos(3*x2^2 + x1*(2*x1 - x2))*(x1 - 6*x2)]
Then, apply the symbolic function solve to get a root of the derivative.

[xmin,ymin] = solve(Dsym,xsym,'PrincipalValue',true);
x0 = [xmin; ymin]

x0 = [0; 0]
Plot the function f(x) together with the extremum solution x₀. Set the plot interval to −1 < x1 < 1 and −1 < x2 < 1 as the second argument of fsurf. Use fplot3 to plot the coordinates of the extremum solution.
fsurf(fsym,[-1 1 -1 1])
fplot3(xmin,ymin,subs(fsym,xsym,x0),'ro')
You can approximate a multivariate function around a point x₀ with a multinomial using the Taylor expansion

f(x) ≈ f(x₀) + ∇f(x₀)·(x − x₀) + ½ (x − x₀)ᵀ H(f(x₀)) (x − x₀),

where ∇f(x₀) is the gradient vector and H(f(x₀)) is the Hessian matrix of the multivariate function f(x) calculated at x₀.
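The gradient and Hessian in this example can be cross-checked independently; here is a sketch in Python with SymPy rather than MATLAB, using the same f(x) = sin(xᵀAx) and the example's value A = [2 -1; 0 3].

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
A = sp.Matrix([[2, -1], [0, 3]])
x = sp.Matrix([x1, x2])
f = sp.sin((x.T * A * x)[0])          # f(x) = sin(x' A x)

grad = sp.Matrix([f.diff(v) for v in (x1, x2)])
H = sp.hessian(f, (x1, x2))

x0 = {x1: 0, x2: 0}
print(grad.subs(x0))                  # zero vector: x0 = (0,0) is a critical point
print(H.subs(x0))                     # Matrix([[4, -1], [-1, 6]]), matching H0
```

The Hessian at the origin agrees with the H0 = [4 -1; -1 6] computed in the MATLAB example, since at x₀ = 0 it reduces to cos(0)·(Aᵀ + A).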
Find the Hessian matrix and return the result as a symbolic matrix variable.

H = diff(D,x)

H = -sin(xᵀ A x) (Aᵀ x + A x)(xᵀ A + xᵀ Aᵀ) + cos(xᵀ A x) (Aᵀ + A)
Convert the Hessian matrix H(f(x₀)) to the sym data type, which represents the matrix in its component form. Use subs to evaluate the Hessian matrix for A = [2 -1; 0 3] at the minimum point x₀.
Hsym = symmatrix2sym(H);
Hsym = subs(Hsym,Asym,[2 -1; 0 3]);
H0 = subs(Hsym,xsym,x0)
H0 = [4 -1; -1 6]
Evaluate the gradient vector ∇f(x₀) at the minimum point x₀.

D0 = subs(Dsym,xsym,x0)

D0 = [0 0]
Compute the Taylor approximation to the function near its minimum.
fapprox = subs(fsym,xsym,x0) + D0*(xsym-x0) + 1/2*(xsym-x0).'*H0*(xsym-x0)
fapprox = x1*(2*x1 - x2/2) - x2*(x1/2 - 3*x2)
Plot the function approximation on the same graph that shows f(x) and x₀.
fsurf(fapprox,[-1 1 -1 1])
|
Mathematical Tools Physics NEET Practice Questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, and PDF solved with answers
Forces of 1 N and 2 N act along the lines x = 0 and y = 0, respectively. The equation of the line along which the resultant lies is given by:
1. y - 2x = 0
2. 2y - x = 0
3. y + x = 0
4. y - x = 0
Subtopic: Resultant of Vectors |
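A quick check of this question (a sketch; the coordinates follow the problem statement): the 1 N force acts along x = 0 (the y-axis) and the 2 N force along y = 0 (the x-axis), so the resultant vector is (2, 1).

```python
import numpy as np

F1 = np.array([0.0, 1.0])   # 1 N along the line x = 0 (the y-axis)
F2 = np.array([2.0, 0.0])   # 2 N along the line y = 0 (the x-axis)
R = F1 + F2                 # resultant (2, 1)

# Every point (x, y) = t*(Rx, Ry) on the resultant's line satisfies 2y - x = 0:
print(2 * R[1] - R[0])      # 0.0, consistent with option 2
```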
The component of 3î + 4ĵ perpendicular to î + ĵ and in the same plane as that of 3î + 4ĵ is:
1. (1/2)(ĵ − î)
2. (3/2)(ĵ − î)
3. (5/2)(ĵ − î)
4. (7/2)(ĵ − î)
Subtopic: Scalar Product |
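The question above appears truncated in this extract; assuming it asks for the component of 3î + 4ĵ perpendicular to î + ĵ (which the surviving fragments suggest), a quick NumPy check:

```python
import numpy as np

v = np.array([3.0, 4.0])                 # 3i + 4j
u = np.array([1.0, 1.0])                 # i + j
perp = v - (v @ u) / (u @ u) * u         # v minus its projection along u
print(perp)                              # [-0.5  0.5], i.e. (1/2)(j - i)
print(abs(perp @ u) < 1e-12)             # True: perpendicular to i + j
```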
If C⃗ = A⃗ + B⃗, and C⃗ makes an angle α with A⃗ and an angle β with B⃗, which of the following options is correct?
1. α cannot be less than β
2. α < β, if A < B
3. α < β, if A > B
4. α < β, if A = B
If the angle between the vectors A⃗ and B⃗ is θ, the value of the product (B⃗ × A⃗)·A⃗ is:
1. BA² cos θ
2. BA² sin θ
3. BA² sin θ cos θ
Subtopic: Vector Product |
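The product in the question above vanishes identically, since B⃗ × A⃗ is perpendicular to A⃗ (a scalar triple product with a repeated vector); a quick numeric check with random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal(3)
B = rng.standard_normal(3)

val = np.dot(np.cross(B, A), A)   # (B x A) . A, a scalar triple product
print(abs(val) < 1e-10)           # True: B x A is perpendicular to A
```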
Two forces A and B have a resultant R₁. If B is doubled, the new resultant R₂ is perpendicular to A. Then:
1. R₁ = A
2. R₁ = B
3. R₂ = A
4. R₂ = B
1. If both assertion and reason are true and the reason is the correct explanation of the assertion.
2. If both assertion and reason are true but the reason is not the correct explanation of the assertion.
3. If the assertion is true but the reason is false.
4. If the assertion and reason both are false.
Assertion: The graph between P and Q is a straight line when P/Q is constant.
Reason: A straight-line graph means that P is proportional to Q, i.e. P equals a constant multiplied by Q.
Subtopic: Co-ordinate geometry |
Two forces of magnitude F have a resultant of the same magnitude F. The angle between the two forces is
Two forces with equal magnitudes F act on a body and the magnitude of the resultant force is F/3. The angle between the two forces is:
1. cos⁻¹(−17/18)
2. cos⁻¹(−1/3)
3. cos⁻¹(2/3)
4. cos⁻¹(8/9)
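A worked check for this question (a sketch): by the parallelogram law, R² = F² + F² + 2F² cos θ; setting R = F/3 and F = 1 gives cos θ exactly.

```python
from fractions import Fraction

# R^2 = 2 F^2 (1 + cos(theta)); with F = 1 and R = 1/3:
R2 = Fraction(1, 9)                # R^2 = (1/3)^2
cos_theta = (R2 - 2) / 2           # cos(theta) = (R^2 - 2F^2) / (2F^2)
print(cos_theta)                   # -17/18, matching option 1
```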
Two forces are such that the sum of their magnitudes is 18 N, their resultant is perpendicular to the smaller force, and the magnitude of the resultant is 12 N. The magnitudes of the forces are:
1. 12 N, 6 N
2. 13 N, 5 N
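A worked check (a sketch): when the resultant R = 12 N is perpendicular to the smaller force F1, the force triangle gives F2² = F1² + R², and the problem also gives F1 + F2 = 18.

```python
import sympy as sp

F1, F2 = sp.symbols('F1 F2', positive=True)
# F1 + F2 = 18 and F2^2 = F1^2 + 12^2 (resultant perpendicular to F1)
sol = sp.solve([F1 + F2 - 18, F2**2 - F1**2 - 144], [F1, F2], dict=True)
print(sol)   # F1 = 5 N, F2 = 13 N
```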
If two forces of 5 N each act along the X and Y axes, then the magnitude and direction of the resultant are:
1. 5√2, π/3
2. 5√2, π/4
3. −5√2, π/3
4. −5√2, π/4
|
States Of Matter, Popular Questions: CBSE Class 11-science ENGLISH, English Grammar - Meritnation
sakshisingh929... asked a question
Using Dalton's law of partial pressures, derive pA = ptotal · xA.
Draw a graph showing the relationship between volume and temperature of an ideal gas at constant pressure.
what is P-V isotherm?
Simon George asked a question
Please give two applications of Dalton's law of partial pressures. Please reply fast, I have an exam coming soon.
Particles of soil at the bottom of a river remain separated, but they stick together when taken out. Name the property behind this.
Among the following gaseous elements with the given atomic numbers, which will have the greater rate of diffusion?
b) Z = 18
d) Z = 17
Madhusmita Nayak asked a question
(1) At 273 °C and 1 atm pressure, the volume of a given mass of gas will be twice its volume at 0 °C and 1 atm pressure.
(2) At −136.5 °C and 1 atm pressure, the volume of a given mass of gas will be half its volume at 0 °C and 1 atm pressure.
(3) The mass ratio of equal volumes of NH3 and H2S under similar conditions of temperature and pressure is 1:2.
(4) The molar ratio of equal masses of CH4 and SO2 is 1:4.
A 5L flask containing N2 at 1bar and 25°C is connected to a 4L flask containing N2 at 2 bar and 0°C. After the gases are allowed to mix keeping both flasks at their original temperature, what will be the pressure and amount of N2 in the 5L flask assuming ideal gas behaviour? Answer is 1.42 atm and 0.294 moles
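A sketch of the calculation for the two-flask problem above (units in bar and litres; R = 0.08314 L·bar/(K·mol) assumed, and each flask stays at its own temperature while sharing one equilibrium pressure). The result comes out near the quoted answer.

```python
R = 0.08314                    # L*bar/(K*mol)

n1 = 1 * 5 / (R * 298)         # moles initially in the 5 L flask (1 bar, 25 C)
n2 = 2 * 4 / (R * 273)         # moles initially in the 4 L flask (2 bar, 0 C)

# At equilibrium both flasks share one pressure P but keep their temperatures:
#   P*5/(R*298) + P*4/(R*273) = n1 + n2
P = (n1 + n2) * R / (5 / 298 + 4 / 273)   # equilibrium pressure in bar
n5 = P * 5 / (R * 298)                    # moles left in the 5 L flask

print(round(P, 2), round(n5, 3))          # about 1.47 bar (~1.45 atm) and 0.296 mol
```

These values (≈1.45 atm and ≈0.296 mol) are close to the answer quoted in the question (1.42 atm and 0.294 mol); the small gap likely comes from rounding in the original solution.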
The density of a gas was found to be 5.5 g/L at 2 bar pressure and 25 °C. Calculate its molar mass. (R = 0.083 L·bar/(K·mol))
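A one-line check of this exercise, using the ideal-gas relation M = dRT/P:

```python
# Ideal gas: PV = nRT  =>  M = d*R*T/P
d, P, T, R = 5.5, 2.0, 298.0, 0.083   # g/L, bar, K, L*bar/(K*mol)
M = d * R * T / P
print(round(M, 1))                    # 68.0 g/mol
```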
Vyomika Jha asked a question
[Q] Four gases A, B, C and D have compressibility factors 1.6, 0.8, 0.6 and 1.8, respectively. Arrange them in:
1) Increasing order of their ideal behaviour.
2) Increasing order of their attractive force.
What is the difference between yellow phosphorus and red phosphorus?
Abdurrahim Zain asked a question
What is a rotating polar molecule?
MgCl2
Explain, giving examples: (i) dispersion forces, (ii) dipole–dipole forces, (iii) dipole–induced dipole forces, (iv) hydrogen bond.
Which of the following is correct? Kindly explain the answer also.
a) For H2 and He, Z > 1, and molar volume at STP is less than 22.4 L.
b) For H2 and He, Z < 1, and molar volume at STP is greater than 22.4 L.
c) For H2 and He, Z > 1, and molar volume at STP is less than 22.4 L.
d) For H2 and He, Z > 1, and molar volume at STP is greater than 22.4 L.
What is the difference between intermolecular forces and intramolecular forces?
Sreejith C H asked a question
Can anyone explain the isotherm of carbon dioxide?
Derive the expression for the average kinetic energy of a gas. Please show the full calculation and derivation; I have exams tomorrow.
Trisha Sengupta & 1 other asked a question
Oil spreads over the surface of water, whereas water does not spread over the surface of oil. Why?
The simplicity of gases is due to the fact that the forces of interaction between their molecules are negligible. Explain.
What are polar and non- polar molecules?
Atul Dahiya asked a question
A dry gas occupies 127 mL at STP. If the same mass of gas were collected over water at 23 °C and a total pressure of 745 mm, what volume would it occupy? The vapour pressure of water at 23 °C is 21 mm.
|
EuDML | Generalized strongly set-valued nonlinear complementarity problems.
Generalized strongly set-valued nonlinear complementarity problems.
Huang, Nan-Jing; Cho, Yeol Je
Huang, Nan-Jing, and Cho, Yeol Je. "Generalized strongly set-valued nonlinear complementarity problems.." International Journal of Mathematics and Mathematical Sciences 22.3 (1999): 597-604. <http://eudml.org/doc/49023>.
author = {Huang, Nan-Jing, Cho, Yeol Je},
keywords = {complementarity problem; strongly monotone Lipschitz continuous mappings; strongly monotone mapping and H-Lipschitz continuous set-valued mappings; algorithm},
title = {Generalized strongly set-valued nonlinear complementarity problems.},
AU - Huang, Nan-Jing
TI - Generalized strongly set-valued nonlinear complementarity problems.
KW - complementarity problem; strongly monotone Lipschitz continuous mappings; strongly monotone mapping and H-Lipschitz continuous set-valued mappings; algorithm
complementarity problem, strongly monotone Lipschitz continuous mappings, strongly monotone mapping and H-Lipschitz continuous set-valued mappings, algorithm
|
Gordon Kindlmann - Knowpia
Gordon L. Kindlmann is an American computer scientist who works on information visualization and image analysis.[1] He is recognized for his contributions in developing tools for tensor data visualization.
Teem software library
Diderot DSL
Computer science, information visualization
Visualization and Analysis of Diffusion Tensor Fields (2004)
Gordon Kindlmann graduated from Cornell University with a BA in mathematics in 1995 and an MS in computer graphics in 1998. He then attended the University of Utah for his PhD, where he worked at the Scientific Computing and Imaging Institute under Christopher R. Johnson and graduated in 2004. While at Utah, he developed a set of methods for visualizing volumetric data interactively using multidimensional transfer functions, each of which has been cited over 500 times.[2][3][4]
Following his PhD, he was a post-doctoral research fellow in the Laboratory of Mathematics in Imaging at Brigham and Women's Hospital, affiliated with Harvard Medical School, where he developed the tensor glyph, a scientific visualization tool for visualizing the degrees of freedom of a 3×3 matrix.[5] His work in diffusion tensor MRI visualization was included in a chapter of The Visualization Handbook.[6] He joined the computer science faculty at the University of Chicago as an assistant professor in 2009.
In 2013, Kindlmann appeared in Computer Chess, an independent comedy-drama film written and directed by Andrew Bujalski about a group of software engineers in 1980 who write programs to compete in computer chess.[7] The film premiered at the 2013 Sundance Film Festival, where it won the Alfred P. Sloan Prize, and subsequently screened at SXSW and the Maryland Film Festival.
^ "Gordon Kindlmann". The Huffington Post. Retrieved 5 April 2017.
^ Kniss, Joe; Kindlmann, Gordon; Hansen, Charles (1 January 2002). "Multidimensional transfer functions for interactive volume rendering". IEEE Transactions on Visualization and Computer Graphics. pp. 270–285. Retrieved 5 April 2017.
^ Kindlmann, Gordon; Durkin, James W. (1 January 1998). "Semi-automatic generation of transfer functions for direct volume rendering". Proceedings of the 1998 IEEE symposium on Volume visualization. ACM. pp. 79–86. Retrieved 5 April 2017.
^ Kniss, Joe; Kindlmann, Gordon; Hansen, Charles (1 January 2001). "Interactive volume rendering using multi-dimensional transfer functions and direct manipulation widgets". Proceedings of the Conference on Visualization'01. IEEE Computer Society. pp. 255–262. Retrieved 5 April 2017.
^ "Gordon Kindlmann". Laboratory of Mathematics in Imaging. Harvard Medical School. Retrieved 5 April 2017.
^ Kniss, J; Kindlmann, G; Hansen, C (2005). "9". In Johnson, C; Hansen, C (eds.). The Visualization Handbook. Elsevier. pp. 189–209.
^ "Computer Chess". IMDb. 7 November 2013. Retrieved 5 April 2017.
|
Generate octave spectrum - MATLAB poctave - MathWorks 한국
Octave Spectra of White and Pink Noise
Octave Smoothing of White and Pink Noise
Octave Spectrogram of Audio Signal
Octave Spectrum Weighting
BandsPerOctave
Using Octave Filters
Using Octave Smoothing
Generate octave spectrum
p = poctave(x,fs)
p = poctave(xt)
p = poctave(pxx,fs,f)
p = poctave(___,type)
p = poctave(___,Name,Value)
[p,cf] = poctave(___)
[p,cf,t] = poctave(___)
poctave(___)
p = poctave(x,fs) returns the octave spectrum of a signal x sampled at a rate fs. The octave spectrum is the average power over octave bands as defined by the ANSI S1.11 standard [2]. If x is a matrix, then the function estimates the octave spectrum independently for each column and returns the result in the corresponding column of p.
p = poctave(xt) returns the octave spectrum of a signal stored in the MATLAB® timetable xt.
p = poctave(pxx,fs,f) performs octave smoothing by converting a power spectral density, pxx, to a 1/b octave power spectrum, where b is the number of subbands in the octave band. The frequencies in f correspond to the PSD estimates in pxx.
p = poctave(___,type) specifies the kind of spectral analysis performed by the function. Specify type as 'power' or 'spectrogram'.
p = poctave(___,Name,Value) specifies additional options for any of the previous syntaxes using name-value arguments.
[p,cf] = poctave(___) also returns the center frequencies of the octave bands over which the octave spectrum is computed.
[p,cf,t] = poctave(___) additionally returns a time vector, t, corresponding to the center times of the segments used to compute the power spectrum estimates when type is 'spectrogram'.
poctave(___) with no output arguments plots the octave spectrum or spectrogram in the current figure. If type is specified as 'spectrogram', then this function is supported only for single-channel input.
Generate 10⁵ samples of white Gaussian noise. Create a signal of pseudopink noise by filtering the white noise with a filter whose zeros and poles are all on the positive x-axis. Visualize the zeros and poles.

N = 1e5;
wn = randn(N,1);
z = [0.982231570015379 0.832656605953720 0.107980893771348]';
p = [0.995168968915815 0.943841773712820 0.555945259371364]';
[b,a] = zp2tf(z,p,1);
pn = filter(b,a,wn);
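An equivalent sketch of the noise generation in Python with NumPy/SciPy (same zeros, poles, and unit gain as the MATLAB code; the random generator is seeded for reproducibility):

```python
import numpy as np
from scipy.signal import zpk2tf, lfilter

rng = np.random.default_rng(1)
wn = rng.standard_normal(10**5)      # white Gaussian noise

# Zeros and poles on the positive real axis, as in the MATLAB example
z = [0.982231570015379, 0.832656605953720, 0.107980893771348]
p = [0.995168968915815, 0.943841773712820, 0.555945259371364]
b, a = zpk2tf(z, p, 1)               # transfer-function coefficients

pn = lfilter(b, a, wn)               # pseudopink noise
print(pn.shape)
```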
Create a two-channel signal consisting of white and pink noise. Compute the octave spectrum. Assume a sample rate of 44.1 kHz. Set the frequency band from 30 Hz to the Nyquist frequency.
fs = 44.1e3;
sg = [wn pn];
poctave(sg,fs,'FrequencyLimits',[30 fs/2])
legend('White noise','Pink noise','Location','SouthEast')
The white noise has an octave spectrum that increases with frequency. The octave spectrum of the pink noise is approximately constant throughout the frequency range. The octave spectrum of a signal illustrates how the human ear perceives the signal.
Generate 10⁵ samples of white Gaussian noise sampled at 44.1 kHz. Create a signal of pink noise by filtering the white noise with a filter whose zeros and poles are all on the positive x-axis.
Compute the Welch estimate of the power spectral density for both signals. Divide the signals into 2048-sample segments, specify 50% overlap between adjoining segments, window each segment with a Hamming window, and use 4096 DFT points.
[pxx,f] = pwelch([wn pn],hamming(2048),1024,4096,fs);
Display the spectral densities over a frequency band ranging from 200 Hz to the Nyquist frequency. Use a logarithmic scale for the frequency axis.
pwelch([wn pn],hamming(2048),1024,4096,fs)
xlim([200 fs/2]/1000)
legend('White','Pink')
Compute and display the octave spectra of the signals. Use the same frequency range as in the previous plot. Specify six bands per octave and compute the spectra using 8th-order filters.
poctave(pxx,fs,f,'BandsPerOctave',6,'FilterOrder',8,'FrequencyLimits',[200 fs/2],'psd')
Read an audio recording of an electronic toothbrush into MATLAB®. The toothbrush turns on at about 1.75 seconds and stays on for approximately 2 seconds.
[y,fs] = audioread('toothbrush.m4a');
Compute the octave spectrogram of the audio signal. Specify 48 bands per octave and 82% overlap. Restrict the total frequency range from 100 Hz to fs/2 Hz and use C-weighting.
poctave(y,fs,'spectrogram','BandsPerOctave',48,'OverlapPercent',82,'FrequencyLimits',[100 fs/2],'Weighting','C')
Generate 10⁵ samples of pink noise sampled at 44.1 kHz. Compute the octave spectrum of the signal. Specify three bands per octave and restrict the total frequency range from 200 Hz to 20 kHz. Store the name-value pairs in a cell array for later use. Display the spectrum.
flims = [200 20e3];
bpo = 3;
opts = {'FrequencyLimits',flims,'BandsPerOctave',bpo};
poctave(pn,fs,opts{:});
Compute the octave spectrum of the signal with the same settings, but use C-weighting. The C-weighted spectrum falls off at frequencies above 6 kHz.
poctave(pn,fs,opts{:},'Weighting','C')
Compute the octave spectrum again, but now use A-weighting. The A-weighted spectrum peaks at about 3 kHz and falls off above 6 kHz and at the lower end of the frequency band.
poctave(pn,fs,opts{:},'Weighting','A')
legend('Pink noise','C-weighted','A-weighted','Location','SouthWest')
Input signal, specified as a vector or matrix. If x is a vector, then poctave treats it as a single channel. If x is a matrix, then poctave computes the octave spectrum or spectrogram independently for each column and returns the result in the corresponding column of p. If type is set to 'spectrogram', the function concatenates the spectrograms along the third dimension of p.
Example: sin(2*pi*(0:127)/16)+randn(1,128)/100 specifies a noisy sinusoid.
Sample rate, specified as a positive scalar expressed in hertz. The sample rate cannot be lower than 7 Hz.
Input timetable. xt must contain increasing, finite, uniformly spaced row times. If xt represents a multichannel signal, then it must have either a single variable containing a matrix or multiple variables consisting of vectors.
Example: timetable(seconds(0:4)',randn(5,1)) specifies a random process sampled at 1 Hz for 4 seconds.
Power spectral density (PSD), specified as a vector or matrix with real nonnegative elements. The power spectral density must be expressed in linear units, not decibels. Use db2pow to convert decibel values to power values. If type is 'spectrogram', then each column in pxx is considered to be the PSD for a particular time window or sample.
Example: [pxx,f] = periodogram(cos(pi./[4;2]*(0:159))'+randn(160,2)) specifies the periodogram PSD estimate of a noisy two-channel sinusoid sampled at 2π Hz and the frequencies at which it is computed.
f — PSD frequencies
PSD frequencies, specified as a vector. f must be finite, strictly increasing, and uniformly spaced in the linear scale.
'power' (default) | 'spectrogram'
Type of spectrum to compute, specified as 'power' or 'spectrogram'.
'power' — Compute the octave power spectrum of the input.
'spectrogram' — Compute the octave spectrogram of the input. The function divides the input into segments and returns the short-time octave power spectrum of each segment.
Example: 'Weighting','A','FilterOrder',8 computes the octave spectrum using A-weighting and 8th-order filters.
BandsPerOctave — Number of subbands in octave band
1 (default) | 3/2 | 2 | 3 | 6 | 12 | 24 | 48 | 96
Number of subbands in the octave band, specified as 1, 3/2, 2, 3, 6, 12, 24, 48, or 96. This parameter dictates the width of a fractional-octave band. In such a frequency band, the upper edge frequency is the lower edge frequency times 2^(1/b), where b is the number of subbands.
FilterOrder — Order of bandpass filters
Order of bandpass filters, specified as a positive even integer.
FrequencyLimits — Frequency band
[max(3,3*fs/48e3) fs/2] (default) | two-element vector
Frequency band, specified as an increasing two-element vector expressed in hertz. The lower value of the vector must be at least 3 Hz. The upper value of the vector must be smaller than or equal to the Nyquist frequency. If the vector does not contain an octave center, poctave may return a center frequency outside the specified limits. To ensure a stable filter design, the actual minimum achievable frequency limit increases to 3*fs/48e3 if the sample rate exceeds 48 kHz. If this argument is not specified, poctave uses the interval [max(3,3*fs/48e3) fs/2].
Weighting — Frequency weighting
'none' (default) | 'A' | 'C' | vector | matrix | 1-by-2 cell array | digitalFilter object
Frequency weighting, specified as one of these:
'none' — poctave does not perform any frequency weighting on the input.
'A' — poctave performs A-weighting on the input. The ANSI S1.42 standard defines the A-weighting curve. The IEC 61672-1 standard defines the minimum and maximum attenuation limits for an A-weighting filter. The ANSI S1.42.2001 standard defines the weighting curve by specifying analog poles and zeros.
'C' — poctave performs C-weighting on the input. The ANSI S1.42 standard defines the C-weighting curve. The IEC 61672-1 standard defines the minimum and maximum attenuation limits for a C-weighting filter. The ANSI S1.42.2001 standard defines the weighting curve by specifying analog poles and zeros.
Vector — poctave treats the input as a vector of coefficients that specify a finite impulse response (FIR) filter.
Matrix — poctave treats the input as a matrix of second-order section coefficients that specify an infinite impulse response (IIR) filter. The matrix must have at least two rows and exactly six columns.
1-by-2 cell array — poctave treats the input as the numerator and denominator coefficients, in that order, that specify the transfer function of an IIR filter.
digitalFilter object — poctave treats the input as a filter that was designed using designfilt.
This argument is supported only when the input is a signal. Octave smoothing does not support frequency weighting.
Example: 'Weighting',fir1(30,0.5) specifies a 30th-order FIR filter with a normalized cutoff frequency of 0.5π rad/sample.
Example: 'Weighting',[2 4 2 6 0 2;3 3 0 6 0 0] specifies a third-order Butterworth filter with a normalized 3-dB frequency of 0.5π rad/sample.
Example: 'Weighting',{[1 3 3 1]/6 [3 0 1]/3} specifies a third-order Butterworth filter with a normalized 3-dB frequency of 0.5π rad/sample.
Example: 'Weighting',designfilt('lowpassiir','FilterOrder',3,'HalfPowerFrequency',0.5) specifies a third-order Butterworth filter with a normalized 3-dB frequency of 0.5π rad/sample.
Lower bound for nonzero values, specified as a real scalar. The function sets those elements of p such that 10 log10(p) ≤ 'MinThreshold' to zero. Specify 'MinThreshold' in decibels.
WindowLength — Length of data segments
Length of data segments, specified as a nonnegative integer. 'WindowLength' must be less than or equal to the length of the input signal. If not specified, the length of data segments is calculated based on the size of the input signal. This input is valid only when type is 'spectrogram'.
OverlapPercent — Overlap percent between adjoining segments
Overlap percent between adjoining segments, specified as a real scalar in the interval [0, 100). If not specified, 'OverlapPercent' is zero. This input is valid only when type is 'spectrogram'.
p — Octave spectrum or spectrogram
Octave spectrum or spectrogram, returned as a vector, matrix, or 3-D array. The third dimension, if present, corresponds to the input channels.
cf — Center frequencies
Center frequencies, returned as a vector. cf contains a list of center frequencies of the octave bands over which poctave estimated the octave spectrum. cf has units of hertz.
t — Center times
Center times, returned as a vector. If the input is a PSD, then t represents the sample indices corresponding to the columns of pxx. This argument applies only when type is 'spectrogram'.
Octave analysis is used to identify sound or vibration levels across a broad frequency range in a process that resembles how a human ear perceives sound. The signal spectrum is split into octave or fractional-octave bands. The upper frequency limit of each band is twice the lower frequency limit, so the bandwidth increases at higher frequencies.
To perform octave analysis, the poctave function creates a filter bank of parallel bandpass filters. Each digital bandpass filter is mapped to an equivalent Butterworth lowpass analog filter [3]. The analog filter is mapped back to a digital bandpass filter using a bandpass version of the bilinear transformation, and the result is returned as a cascade of fourth-order sections.
The lower and upper edge frequencies of each octave band are given by
{f}_{l}=cf\cdot \left({G}^{-1/2b}\right)
{f}_{u}=cf\cdot \left({G}^{1/2b}\right)
where cf is the center frequency of each band as defined by the ANSI S1.11-2004 standard [2], G is a conversion constant (
{10}^{3/10}
), and b is the number of bands per octave.
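The band-edge formulas above can be checked with a short Python sketch (illustrative only, not the MathWorks implementation; the function name octave_band_edges is hypothetical):

```python
# Illustrative sketch: lower and upper edge frequencies of a 1/b-octave band
# centered at cf, using the base-10 convention G = 10^(3/10) (ANSI S1.11-2004).
def octave_band_edges(cf, b=1):
    """Return (f_l, f_u) in the same units as cf."""
    G = 10 ** (3 / 10)                 # conversion constant, about 1.9953
    f_l = cf * G ** (-1 / (2 * b))
    f_u = cf * G ** (1 / (2 * b))
    return f_l, f_u

fl, fu = octave_band_edges(1000)       # full-octave band centered at 1 kHz
print(round(fl, 1), round(fu, 1))      # 707.9 1412.5
```

Note that the ratio f_u / f_l equals G for a full-octave band (b = 1), which is close to, but not exactly, a factor of 2.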
For more information on the design and implementation of octave filters, see Digital Filter Design (Audio Toolbox).
The poctave function calculates the average power over each octave band by integrating the power spectral density (PSD) of the signal within the band using the rectangle method. The average power of an octave band represents the signal level at the band center frequency.
When a band edge falls within a bin, the function assigns to the band only the fraction of power corresponding to the percentage of the frequency bin that the band occupies. For example, this diagram shows an octave band whose edges fall within two different frequency bins, represented by orange and blue dashed rectangles. The power within the shaded regions is computed for the given octave band.
When a band edge falls at 0 or at the Nyquist frequency, fNyquist, the function assigns to the band two times the fraction of power corresponding to the percentage of the frequency bin that the band occupies. This duplication accounts for the half bin power that is present in the range [–w/2, 0] and [fNyquist, fNyquist + w/2], where w is the bin width. For example, this diagram shows an octave band whose right edge falls at the Nyquist frequency. The power within the shaded region is computed for the given octave band.
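The fractional-bin accounting described above can be sketched as follows. This is an illustrative sketch under simplifying assumptions (bin k is centered at k·w; the half-bin doubling at 0 and Nyquist described above is omitted), not the MathWorks implementation.

```python
# Illustrative sketch: integrate a PSD over one octave band with the rectangle
# method, assigning each bin only the fraction of power inside the band edges.
def band_power(psd, bin_width, f_lo, f_hi):
    """psd[k] is density over [k*w - w/2, k*w + w/2]; return power in band."""
    w = bin_width
    total = 0.0
    for k, p in enumerate(psd):
        lo, hi = k * w - w / 2, k * w + w / 2
        overlap = max(0.0, min(hi, f_hi) - max(lo, f_lo))  # Hz inside the band
        total += p * overlap                                # rectangle area
    return total

# Uniform PSD of 1.0 over 5 bins of width 1 Hz; band edges fall mid-bin.
print(band_power([1.0] * 5, 1.0, 0.25, 2.75))  # 2.5
```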
[1] Smith, Julius Orion, III. "Example: Synthesis of 1/F Noise (Pink Noise)." In Spectral Audio Signal Processing. https://ccrma.stanford.edu/~jos/sasp/.
[2] Specification for Octave-Band and Fractional-Octave-Band Analog and Digital Filters. ANSI Standard S1.11-2004. Melville, NY: Acoustical Society of America, 2004.
If input weighting is specified as a variable-sized matrix during code generation, then it must not reduce to a vector at runtime.
A coherent model for the Universe: The Big Bang Theory | Astronomy 801: Planets, Stars, Galaxies, and the Universe
Hubble's Law implies the Universe is expanding;
We observe the galaxies in the Universe to lie in a filament / void structure, but overall to be distributed on the largest scales in a homogeneous and isotropic way;
General Relativity describes the behavior of space-time in the Universe;
Astronomers have adopted a model known as the Hot Big Bang model to describe the Universe that incorporates the information above. You can state the model pretty concisely—we propose that the entire visible Universe and all of its contents were contained in a tiny region that was originally the size of a pinpoint. At one instant in time, that very hot, very dense point began expanding, and the Universe is now much larger and much cooler than it was at the beginning.
Using the mathematical formulation of General Relativity, you can show that for a homogeneous, isotropic Universe, the geometry of the Universe must be either flat (that is, it obeys the laws of geometry that we all learned in math class), positively curved, or negatively curved. These possibilities are illustrated (to the best that they can be approximated in a flat illustration) below.
Figure 10.11: Schematic diagram of the potential geometries of the Universe
Credit: NASA / WMAP
Recall that in the flat geometry that you learned in school, you were taught that the sum of the interior angles of a triangle equals 180 degrees. In the bottom part of the illustration, you see a flat plane with a normal triangle drawn on the plane. The top part of the illustration represents a closed, positively curved or spherical geometry. In the spherical geometry, the sum of the interior angles of a triangle add up to more than 180 degrees. The middle of the illustration represents an infinite, negatively curved or hyperbolic geometry. In this geometry, the sum of the interior angles of a triangle add up to less than 180 degrees. In principle, if there was a large enough triangular object in the universe and you could measure its interior angles, you could determine the local geometry of the Universe. There are other tests you could also conceive. For example, in our familiar flat geometry, two parallel lines remain parallel for their entire length. In a spherical geometry, parallel lines converge, and in a hyperbolic geometry parallel lines diverge. Note that if the Universe is spherical, if you could travel in one direction long enough, you would return to your original position. In either of the other two cases, you would never return to your original position.
You should keep in mind that these different geometric effects are observable in our universe. If we can detect them directly, we can determine the details of the shape of our universe. One more realistic example is to consider using a standard ruler to measure how distance and angular size are related. Consider an analogy on Earth: if you look out your window and see a nearby tree and a more distant tree, even if they are the same real size, the nearby tree will appear larger to you. We know that in our familiar flat geometry, as an object gets farther away from us, it looks smaller. However, in the positively curved geometry, an object with a given size will look smaller and smaller as it gets progressively farther away, but at some critical distance, it will start to look larger again! So, if you can identify an object with a known size (say a galaxy), and you see that the sizes of galaxies shrink as they get farther away from us, but then their apparent sizes start to grow again for all galaxies past a certain redshift, we must live in a positively curved universe. In practice, this test is very hard to do, because we do not know of a good object that can be seen at large distances and is always the same size (that is, the diameter of galaxies is not constant enough to do this test well).
Another way to determine the geometry of the Universe is to try to measure the total quantity of matter it contains, since the amount of matter determines the geometry of the Universe. We designate a critical density and call that value
{\rho }_{\text{crit}}
If the measured density of the universe satisfies
{\rho }_{\text{ave}}>{\rho }_{\text{crit}}
then the geometry of the Universe will be spherical, and the Universe will be closed.
{\rho }_{\text{ave}}<{\rho }_{\text{crit}}
then the geometry of the Universe will be hyperbolic, and the Universe will be open and infinite.
{\rho }_{\text{ave}}={\rho }_{\text{crit}}
then the geometry of the Universe will be flat, and the Universe will be infinite.
In the illustration above, the geometries are labeled with, for example,
{\Omega }_{0}=1
. Here,
{\Omega }_{0}={\rho }_{\text{ave}}/{\rho }_{\text{crit}}
, so if
{\Omega }_{0}=1
, this means the same thing as
{\rho }_{\text{ave}}={\rho }_{\text{crit}}
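For a sense of scale, the critical density can be evaluated numerically. This sketch uses the standard General Relativity result ρ_crit = 3H₀²/(8πG), which is not derived in the text above, and assumes a round Hubble constant of 70 km/s/Mpc.

```python
import math

# Illustrative sketch: evaluate the critical density rho_crit = 3 H0^2 / (8 pi G).
G_newton = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22      # assumed 70 km/s/Mpc, converted to 1/s

rho_crit = 3 * H0 ** 2 / (8 * math.pi * G_newton)
print(f"{rho_crit:.2e} kg/m^3")  # about 9.2e-27 kg/m^3, a few hydrogen atoms per cubic meter
```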
Finally, we should address some limitations of the Big Bang model and some misconceptions to watch out for when discussing the Big Bang model. These are:
Because the Big Bang occurred approximately 13.7 billion years ago, we are only able to observe the Universe within about 13.7 billion light years of Earth. This boundary is referred to as our horizon. The Big Bang model only describes the Universe within our horizon. We expect that the Universe extends far beyond our horizon, and the Universe beyond our horizon may not have the same properties as the observable Universe within our horizon.
The Big Bang model does not suppose that an explosion occurred at one point that is the center of the Universe. Instead, the entire Universe was once a point, and it has now expanded in size to its current extent. So every point in our Universe now was once in the same location in the past. So, the proper way to consider the location of the "center" is that every single location in the Universe now can be considered to be the center of the Universe equally.
A very common question is to ask "What is beyond the edge of the Universe or what is the Universe expanding into?" The (unsatisfying) answer to this question is that the Universe encompasses everything and it as a whole is expanding, so there is no edge.
The Big Bang model does not claim to explain the cause of the initial expansion.
Mechanical Properties of Solids Physics NEET Practice Questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, and PDF solved with answers
Physics - Mechanical Properties of Solids
A gas undergoes a process in which its pressure P and volume V are related as
\mathrm{V}{\mathrm{P}}^{\mathrm{n}}=\mathrm{constant}
. The bulk modulus for the gas in the process is:
[This question includes concepts from Kinetic Theory chapter]
\frac{P}{n}
{P}^{1/n}
{P}^{n}
Subtopic: Shear and bulk modulus |
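As a check on the answer choices, here is a short worked derivation (not part of the original question bank), using the definition of bulk modulus B = −V dP/dV:

```latex
% Derivation sketch: bulk modulus for a process with V P^n = constant.
V P^{n} = C
\quad\Rightarrow\quad
V = C P^{-n}
\quad\Rightarrow\quad
\frac{dV}{dP} = -\,n C P^{-n-1}
\\[4pt]
B = -V\,\frac{dP}{dV}
  = -\,\bigl(C P^{-n}\bigr)\cdot\frac{1}{-\,n C P^{-n-1}}
  = \frac{P}{n}
```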
The Young's modulus of a wire of length 'L' and radius 'r' is 'Y'. If length is reduced to L/2 and radius r/2, then Young's modulus will be
1. Y/2
Subtopic: Young's modulus |
Three wires A, B, C made of the same material and radius have different lengths. The graphs in the figure show the elongation-load variation. The longest wire is
Subtopic: Stress - Strain Curve |
The breaking stress of a wire depends upon:
1. material of the wire.
3. radius of the wire.
4. shape of the cross-section.
Subtopic: Stress - Strain |
The bulk modulus of a spherical object is B. If it is subjected to uniform pressure P, the fractional decrease in radius is
\frac{P}{B}
\frac{B}{3P}
\frac{3P}{B}
\frac{P}{3B}
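A short worked derivation for this question (not part of the original question bank), using B = P/(ΔV/V) for uniform compression and V ∝ R³ for a sphere:

```latex
% For a sphere, V \propto R^3, so volumetric and radial strains are related by
\frac{\Delta V}{V} = 3\,\frac{\Delta R}{R}
\qquad
B = \frac{P}{\Delta V / V}
\;\Rightarrow\;
\frac{\Delta R}{R} = \frac{1}{3}\,\frac{\Delta V}{V} = \frac{P}{3B}
```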
The Young's modulus of steel is twice that of brass. Two wires of the same length and of the same area of cross-section, one of steel and another of brass are suspended from the same roof. If we want the lower ends of the wires to be at the same level, then the weight added to the steel and brass wires must be in the ratio of:
The following four wires are made of the same material. Which of them will have the largest extension when the same tension is applied?
(1) Length=50 cm, diameter=0.5 mm
(2) Length=100 cm, diameter=1 mm
Subtopic: Hooke's Law |
When a certain weight is suspended from a long uniform wire, its length increases by one cm. If the same weight is suspended from another wire of the same material and length but having a diameter half of the first one, then the increase in length will be -
The material which practically does not show elastic after effect is
Subtopic: Elasticity |
A force F is needed to break a copper wire having radius R. The force needed to break a copper wire of radius 2R will be:
EuDML | A sequence of binomial coefficients related to Lucas and Fibonacci numbers.
A sequence of binomial coefficients related to Lucas and Fibonacci numbers.
Benoumhani, Moussa. "A sequence of binomial coefficients related to Lucas and Fibonacci numbers." Journal of Integer Sequences [electronic only] 6.2 (2003): Art. 03.2.1, 10 p., electronic only. <http://eudml.org/doc/54391>.
@article{Benoumhani2003,
  author = {Benoumhani, Moussa},
  title = {A sequence of binomial coefficients related to Lucas and Fibonacci numbers.},
  journal = {Journal of Integer Sequences},
  volume = {6},
  number = {2},
  year = {2003},
  pages = {Art. 03.2.1, 10 p.},
  keywords = {Fibonacci number; log-concave sequence; limit theorems; Lucas number; polynomial with real zeros; unimodal sequence},
}
AU - Benoumhani, Moussa
TI - A sequence of binomial coefficients related to Lucas and Fibonacci numbers.
KW - Fibonacci number; log-concave sequence; limit theorems; Lucas number; polynomial with real zeros; unimodal sequence
The Wulff shape minimizes an anisotropic Willmore functional | EMS Press
The Wulff shape minimizes an anisotropic Willmore functional
Ulrich Clarenz
The aim of this paper is to find a fourth order energy having Wulff shapes as minimizers. This question is motivated by surface restoration problems. In surface restoration usually a damaged region of a surface has to be replaced by a surface patch which restores the region in a suitable way. In particular one aims for
C^1
-continuity at the patch boundary. A fourth order energy is considered to measure fairness and to allow appropriate boundary conditions ensuring continuity of the normal field. Here, anisotropy comes into play if edges and corners of a surface are destroyed. In the present paper we define a generalization of the classical Willmore functional and prove that Wulff-shapes are the only minimizers.
Ulrich Clarenz, The Wulff shape minimizes an anisotropic Willmore functional. Interfaces Free Bound. 6 (2004), no. 3, pp. 351–359
On the minimum norm of representatives of residue classes in number fields
1 June 2007 On the minimum norm of representatives of residue classes in number fields
Jean Bourgain,1 Mei-Chu Chang2
1School of Mathematics, Institute for Advanced Study
2Department of Mathematics, University of California Riverside
In this article, we consider the problem of finding upper bounds on the minimum norm of representatives in residue classes in quotients
O/I
, where
I
is an integral ideal in the maximal order
O
of a number field
K
. In particular, we answer affirmatively a question of Konyagin and Shparlinski [KS], stating that an upper bound
o\left(N\left(I\right)\right)
holds for most ideals
I
, where
N\left(I\right)
denotes the norm of
I
. More precise statements are obtained, especially when
I
is prime. We use the method of exponential sums over multiplicative groups, essentially exploiting some new bounds obtained by the authors
Jean Bourgain. Mei-Chu Chang. "On the minimum norm of representatives of residue classes in number fields." Duke Math. J. 138 (2) 263 - 280, 1 June 2007. https://doi.org/10.1215/S0012-7094-07-13824-9
Primary: 11L05, 11R27
'SRES' Scenarios and 'RCP' Pathways | METEO 469: From Meteorology to Mitigation: Understanding Global Warming
Scientists attempt to create scenarios of future human activity that represent plausible future greenhouse emissions pathways. Ideally, these scenarios span the range of possible future emissions pathways, so that they can be used as a basis for exploring a realistic set of future projections of climate change.
In the early IPCC assessments, the most widely used and referred-to family of emissions scenarios were the so-called SRES scenarios (for Special Report on Emissions Scenarios) that helped form the basis for the IPCC Fourth Assessment Report. These scenarios made varying assumptions ('storylines') regarding future global population growth, technological development, globalization, and societal values. One (the A1 'one global family' storyline chosen by Michael Mann and Lee Kump in version 1 of Dire Predictions) assumed a future of globalization and rapid economic and technological growth, including fossil fuel intensive (A1FI), non-fossil fuel intensive (A1T), and balanced (A1B) versions. Another (A2, 'a divided world') assumed a greater emphasis on national identities. The B1 and B2 scenarios assumed more sustainable practices ('utopia'), with more global-focus and regional-focus, respectively.
Let us now directly compare the various SRES scenarios both in terms of their annual rates of carbon emissions, measured in gigatons (Gt) of carbon (1 Gt = 10^12 kg), and the resulting trajectories of atmospheric
{\text{CO}}_{\text{2}}
concentrations. Getting the concentrations actually requires an intermediate step involving the use of a simple model of ocean carbon uptake, to account for the effect of oceanic absorption of atmospheric
{\text{CO}}_{\text{2}}
Figure 6.1: Estimated
{\text{CO}}_{\text{2}}
concentrations (top) and Annual Carbon Emissions (bottom) for the Various IPCC SRES Scenarios.
We can see from the above comparison how various trajectories of our future carbon emissions translate to atmospheric
{\text{CO}}_{\text{2}}
concentration trajectories. From the point of view of controlling future
{\text{CO}}_{\text{2}}
concentrations, these graphics can be quite daunting. Depending on the path chosen by society, we could plausibly approach
{\text{CO}}_{\text{2}}
concentrations that are quadruple pre-industrial levels by 2100. Even in the best case of the SRES scenarios, B1, we will likely reach twice pre-industrial levels (i.e., around 550 ppm) by 2100. And to keep
{\text{CO}}_{\text{2}}
concentrations below this level, we can see that we have to bring emissions to a peak by 2040, and ramp them down to less than half current levels by 2100.
You might wonder, what scenario do we actually appear to be following? Over the first ten years of these scenarios, observed emissions actually were close to the most carbon intensive of the SRES scenarios—A1FI. This gives you an idea of how challenging the problem of stabilizing carbon emissions at levels lower than twice pre-industrial actually is.
Figure 6.2: Observed Historic Emissions Compared with the Various IPCC SRES Scenarios.
One problem with the SRES scenarios—indeed, a fair criticism of them—is that they do not explicitly incorporate carbon emissions controls. While some of the scenarios involve storylines that embrace generic notions of sustainability and environmental protection, the scenarios do not envision explicit attempts to stabilize
{\text{CO}}_{\text{2}}
concentrations at any particular level. For the Fifth Assessment Report, a new set of scenarios, called Representative Concentration Pathways (RCPs), was developed. They are referred to as pathways to emphasize that they are not definitive, but are instead internally consistent time-dependent forcing projections that could potentially be realized with multiple socioeconomic scenarios. In particular, they can take into account climate change mitigation policies to limit emissions. The scenarios are named after the approximate radiative forcing relative to the pre-industrial period achieved either in the year 2100, or at stabilization after 2100. They were created with 'integrated assessment models' that include climate, economic, land use, demographic, and energy-usage effects, whose greenhouse gas concentrations were then converted to an emissions trajectory using carbon cycle models.
The RCP2.6 scenario peaks at 3.0 W / m2 before declining to 2.6 W / m2 in 2100, and requires strong mitigation of greenhouse gas concentrations in the 21st century. The RCP4.5 and RCP6.0 scenarios stabilize after 2100 at 4.2 W / m2 and 6.0 W / m2, respectively. The RCP4.5 and SRES B1 scenarios are comparable; RCP6.0 lies between the SRES B1 and A1B scenarios. The RCP8.5 scenario is the closest to a ‘business as usual’ scenario of fossil fuel use, and has comparable forcing to SRES A2 by 2100.
In all RCPs global population levels off or starts to decline by 2100, with a peak value of 12 billion in RCP8.5. Gross domestic product (GDP) increases in all cases; of note, the RCP2.6 pathway has the highest GDP, though it has the least dependence on fossil fuel sources. Carbon dioxide emissions for all RCPs except the RCP8.5 scenario peak by 2100.
Even the RCPs have encountered a fair bit of criticism. For the recently released Sixth IPCC Assessment Report, scientists and modelers are using Shared Socioeconomic Pathways (SSPs), which link specific policy decisions with projected emissions outcomes. The readings this week include a Commentary from the journal Nature about the issue of RCPs and the path forward with SSPs.
Figure 6.3a: RCP Global Population Scenarios
Text description of Figure 6.3a:
In all pathways, global population levels off or starts to decline by 2100; the highest world population (12 billion) is achieved by 2100 in RCP 8.5
Gross Domestic product (GDP) increases in all cases, and interestingly, the highest GDP is realized in the RCP 2.6 scenario. Energy consumption increases in all scenarios, with non-fossil-carbon-based energy sources most important in RCP 2.6; RCP 8.5 relies heavily on coal
Future emissions differ quite dramatically among the scenarios. The largest growth and cumulative release of CO2 is associated with the RCP 8.5 fossil-fuel-intensive scenario, while the smallest is associated with the RCP 2.6 scenario.
Figure 6.3b: RCP Global Population Scenarios
Figure 6.4: RCP Gross Domestic Product Scenarios.
Figure 6.5: RCP Carbon Dioxide Emission Scenarios.
With all of these scenarios, stabilizing CO2 concentrations requires not just preventing the increase of emissions, but reducing emissions. This leads naturally to our next topic—the topic of stabilization scenarios.
Generate frequency bands around the characteristic fault frequencies of ball or roller bearings for spectral feature extraction - MATLAB bearingFaultBands - MathWorks Nordic
bearingFaultBands
Frequency Bands Using Bearing Specifications
Frequency Bands for Roller Bearing
Visualize Frequency Bands Around Characteristic Bearing Frequencies
Frequency Bands and Spectral Metrics of Ball Bearing
Generate frequency bands around the characteristic fault frequencies of ball or roller bearings for spectral feature extraction
FB = bearingFaultBands(FR,NB,DB,DP,beta)
FB = bearingFaultBands(___,Name,Value)
[FB,info] = bearingFaultBands(___)
bearingFaultBands(___)
FB = bearingFaultBands(FR,NB,DB,DP,beta) generates characteristic fault frequency bands FB of a roller or ball bearing using its physical parameters. FR is the rotational speed of the shaft or inner race, NB is the number of balls or rollers, DB is the ball or roller diameter, DP is the pitch diameter, and beta is the contact angle in degrees. The values in FB have the same implicit units as FR.
FB = bearingFaultBands(___,Name,Value) allows you to specify additional parameters using one or more name-value pair arguments.
[FB,info] = bearingFaultBands(___) also returns the structure info containing information about the generated fault frequency bands FB.
bearingFaultBands(___) with no output arguments plots a bar chart of the generated fault frequency bands FB.
For this example, consider a bearing with a pitch diameter of 12 cm with eight rolling elements. Each rolling element has a diameter of 2 cm. The outer race remains stationary as the inner race is driven at 25 Hz. The contact angle of the rolling element is 15 degrees.
With the above physical dimensions of the bearing, construct the frequency bands using bearingFaultBands.
FB is returned as a 4x2 array with the default frequency band width of 10 percent of FR, which is 2.5 Hz. The first column in FB contains the values of
F-\frac{W}{2}
and the second column contains the values of
F+\frac{W}{2}
for each characteristic defect frequency.
For this example, consider a micro roller bearing with 11 rollers where each roller is 7.5 mm. The pitch diameter is 34 mm and the contact angle is 0 degrees. Assuming a shaft speed of 1800 rpm, construct frequency bands for the roller bearing. Specify 'Domain' as 'frequency' to obtain the frequency bands FB in the same units as FR.
FR = 1800;
[FB1,info1] = bearingFaultBands(FR,NB,DB,DP,beta,'Domain','frequency')
FB1 = 4×2
Centers: [7.7162e+03 1.2084e+04 3.8815e+03 701.4706]
Labels: ["1Fo" "1Fi" "1Fb" "1Fc"]
FaultGroups: [1 2 3 4]
Now, include the sidebands for the inner race and rolling element defect frequencies using the 'Sidebands' name-value pair.
[FB2,info2] = bearingFaultBands(FR,NB,DB,DP,beta,'Domain','order','Sidebands',0:1)
Centers: [4.2868 5.7132 6.7132 7.7132 1.7667 2.1564 2.5461 0.3897]
Labels: ["1Fo" "1Fi-1Fr" "1Fi" "1Fi+1Fr" ... ]
FaultGroups: [1 2 2 2 3 3 3 4]
You can use the generated fault bands FB to extract spectral metrics using the faultBandMetrics command.
For this example, consider a damaged bearing with a pitch diameter of 12 cm with eight rolling elements. Each rolling element has a diameter of 2 cm. The outer race remains stationary as the inner race is driven at 25 Hz. The contact angle of the rolling element is 15 degrees.
With the above physical dimensions of the bearing, visualize the fault frequency bands using bearingFaultBands.
bearingFaultBands(FR,NB,DB,DP,beta)
From the plot, observe the following bearing specific vibration frequencies:
Cage defect frequency, Fc at 10.5 Hz.
Ball defect frequency, Fb at 73 Hz.
Outer race defect frequency, Fo at 83.9 Hz.
Inner race defect frequency, Fi at 116.1 Hz.
For this example, consider a ball bearing with a pitch diameter of 12 cm with 10 rolling elements. Each rolling element has a diameter of 0.5 cm. The outer race remains stationary as the inner race is driven at 25 Hz. The contact angle of the ball is 0 degrees. The dataset bearingData.mat contains power spectral density (PSD) and its respective frequency data for the bearing vibration signal in a table.
First, construct the bearing frequency bands including the first 3 sidebands using the physical characteristics of the ball bearing.
FB = bearingFaultBands(FR,NB,DB,DP,beta,'Sidebands',1:3)
FB is a 14x2 array which includes the primary frequencies and their sidebands.
Load the PSD data. bearingData.mat contains a table X where PSD is contained in the first column and the frequency grid is in the second column, as cell arrays respectively.
load('bearingData.mat','X')
Var1 Var2
{12001x1 double} {12001x1 double}
Compute the spectral metrics using the PSD data in table X and the frequency bands in FB.
spectralMetrics = faultBandMetrics(X,FB)
PeakAmplitude1 PeakFrequency1 BandPower1 PeakAmplitude2 PeakFrequency2 BandPower2 PeakAmplitude3 PeakFrequency3 BandPower3 PeakAmplitude4 PeakFrequency4 BandPower4 PeakAmplitude5 PeakFrequency5 BandPower5 PeakAmplitude6 PeakFrequency6 BandPower6 PeakAmplitude7 PeakFrequency7 BandPower7 PeakAmplitude8 PeakFrequency8 BandPower8 PeakAmplitude9 PeakFrequency9 BandPower9 PeakAmplitude10 PeakFrequency10 BandPower10 PeakAmplitude11 PeakFrequency11 BandPower11 PeakAmplitude12 PeakFrequency12 BandPower12 PeakAmplitude13 PeakFrequency13 BandPower13 PeakAmplitude14 PeakFrequency14 BandPower14 TotalBandPower
______________ ______________ __________ ______________ ______________ __________ ______________ ______________ __________ ______________ ______________ __________ ______________ ______________ __________ ______________ ______________ __________ ______________ ______________ __________ ______________ ______________ __________ ______________ ______________ __________ _______________ _______________ ___________ _______________ _______________ ___________ _______________ _______________ ___________ _______________ _______________ ___________ _______________ _______________ ___________ ______________
121 121 314.43 56.438 56.438 144.95 81.438 81.438 210.57 106.44 106.44 276.2 156.44 156.44 407.45 181.44 181.44 473.07 206.44 206.44 538.7 264.75 264.75 691.77 276.75 276.75 723.27 288.69 288.69 754.61 312.69 312.69 817.61 324.62 324.62 848.94 336.62 336.62 880.44 13.188 13.188 31.418 7113.4
spectralMetrics is a 1x43 table with peak amplitude, peak frequency and band power calculated for each frequency range in FB. The last column in spectralMetrics is the total band power, computed across all 14 frequencies in FB.
FR — Rotational speed of the shaft or inner race
Rotational speed of the shaft or inner race, specified as a positive scalar. FR is the fundamental frequency around which bearingFaultBands generates the fault frequency bands. Specify FR either in Hertz or revolutions per minute.
NB — Number of balls or rollers
Number of balls or rollers in the bearing, specified as a positive integer.
DB — Diameter of the ball or roller
Diameter of the ball or roller, specified as a positive scalar.
DP — Pitch diameter
Pitch diameter of the bearing, specified as a positive scalar. DP is the diameter of the circle that the center of the ball or roller travels during the bearing rotation.
beta — Contact angle
Contact angle in degrees between a plane perpendicular to the ball or roller axis and the line joining the two raceways, specified as a nonnegative scalar.
10 percent of the fundamental frequency (default) | positive scalar
Width of the frequency bands centered at the nominal fault frequencies, specified as the comma-separated pair consisting of 'Width' and a positive scalar. The default value is 10 percent of the fundamental frequency. Avoid specifying 'Width' with a large value so that the fault bands do not overlap.
Domain — Units of the fault band frequencies
'frequency' (default) | 'order'
Units of the fault band frequencies, specified as the comma-separated pair consisting of 'Domain' and either 'frequency' or 'order'. Select:
'frequency' if you want FB to be returned in the same units as FR.
'order' if you want FB to be returned as number of rotations relative to the inner race rotation, FR.
Fault frequency bands, returned as an Nx2 array, where N is the number of fault frequencies. FB is returned in the same units as FR, in either hertz or orders depending on the value of 'Domain'. Use the generated fault frequency bands to extract spectral metrics using faultBandMetrics. The generated fault bands,
\left[\text{max}\left(0,\text{ }|F|-\frac{W}{2}\right),\text{ }|F|+\frac{W}{2}\right]
, are centered at:
Outer race defect frequency, Fo and its harmonics
Inner race defect frequency, Fi, its harmonics and sidebands at FR
Rolling element (ball) defect frequency, Fb, its harmonics and sidebands at Fc
Cage (train) defect frequency, Fc and its harmonics
The value W is the width of the frequency bands, which you can specify using the 'Width' name-value pair. For more information on bearing frequencies, see Algorithms.
FaultGroups — Fault group numbers identifying related fault frequencies
bearingFaultBands computes the different characteristic bearing frequencies as follows:
Outer race defect frequency,
{F}_{o}=\frac{NB}{2}FR\left(1-\frac{DB}{DP}\text{cos}\left(\beta \right)\right)
Inner race defect frequency,
{F}_{i}=\frac{NB}{2}FR\left(1+\frac{DB}{DP}\text{cos}\left(\beta \right)\right)
Rolling element (ball) defect frequency,
{F}_{b}=\frac{DP}{2DB}FR\left(1-{\left[\frac{DB}{DP}\text{cos}\left(\beta \right)\right]}^{2}\right)
Cage (train) defect frequency,
{F}_{c}=\frac{FR}{2}\left(1-\frac{DB}{DP}\text{cos}\left(\beta \right)\right)
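As a cross-check of the formulas above, here is a short Python sketch (an illustration, not the MATLAB implementation; the function names and the sample geometry are made up for the example):

```python
from math import cos, radians

def bearing_fault_freqs(fr, nb, db, dp, beta_deg=0.0):
    """Characteristic bearing defect frequencies from the formulas above.

    fr: shaft speed FR, nb: number of rolling elements NB,
    db: ball diameter DB, dp: pitch diameter DP, beta_deg: contact angle."""
    r = (db / dp) * cos(radians(beta_deg))
    fo = nb / 2 * fr * (1 - r)              # outer race defect frequency
    fi = nb / 2 * fr * (1 + r)              # inner race defect frequency
    fb = dp / (2 * db) * fr * (1 - r ** 2)  # rolling element defect frequency
    fc = fr / 2 * (1 - r)                   # cage defect frequency
    return fo, fi, fb, fc

def fault_band(f, w):
    """Band of width w centered at |f|, clipped at zero."""
    return (max(0.0, abs(f) - w / 2), abs(f) + w / 2)

fo, fi, fb, fc = bearing_fault_freqs(fr=25.0, nb=8, db=0.1, dp=0.3)
print(round(fo, 2), round(fi, 2))                      # 66.67 133.33
print(tuple(round(x, 3) for x in fault_band(fc, 2.5)))  # (7.083, 9.583)
```

Note that Fo = NB·Fc, so the outer race frequency and the cage frequency always share the same geometric factor.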
[1] Chandravanshi, M & Poddar, Surojit. "Ball Bearing Fault Detection Using Vibration Parameters." International Journal of Engineering Research & Technology. 2. 2013.
[2] Singh, Sukhjeet & Kumar, Amit & Kumar, Navin. "Motor Current Signature Analysis for Bearing Fault Detection in Mechanical Systems." Procedia Materials Science. 6. 171–177. 10.1016/j.mspro.2014.07.021. 2014.
[3] Roque, Antonio & Silva, Tiago & Calado, João & Dias, J. "An approach to fault diagnosis of rolling bearings." WSEAS Transactions on Systems and Control. 4. 2009.
faultBandMetrics | faultBands | gearMeshFaultBands
PUCCH format 1 DM-RS uplink subframe timing estimate - MATLAB lteULFrameOffsetPUCCH1 - MathWorks Australia
[offset,corr] = lteULFrameOffsetPUCCH1(ue,chs,waveform)
offset provides subframe timing. Frame timing can be achieved by using offset with the subframe number, ue.NSubframe. This behavior is consistent with real-world operation because the base station knows when, or in which subframe, to expect uplink transmissions.
[offset,corr] = lteULFrameOffsetPUCCH1(ue,chs,waveform) also returns a complex matrix corr, which is the signal used to extract the timing offset.
Synchronize and demodulate a transmission that has been delayed by four samples using the PUCCH format 1 demodulation reference signal (DM-RS) symbols.
On the transmit side, populate reGrid, generate waveform, and insert a delay of four samples.
View the correlation peak for a delayed transmit waveform. The transmission contains PUCCH format 1 demodulation reference signal (DM-RS) symbols available for estimating the waveform timing.
Configure UE-specific settings and channel transmission parameters.
pucch1 = struct('ResourceIdx',0,'CyclicShifts',0, ...
'DeltaShift',1,'ResourceSize',0);
Generate Transmit Waveform
On the receive side, calculate timing offset using the PUCCH format 1 DM-RS symbols for the time-domain waveform. Estimate the correlations for the transmit waveform and for a delayed version of the transmit waveform.
Generate the waveform by SC-FDMA modulation of a resource matrix using the lteSCFDMAModulate function, or by using one of the channel model functions (lteFadingChannel, lteHSTChannel, or lteMovingChannel).
Number of samples from the start of the waveform to the position in that waveform where the first subframe containing the DM-RS begins, returned as a scalar integer. offset is computed by extracting the timing of the peak of the correlation between waveform and internally generated reference waveforms containing DM-RS signals. The correlation is performed separately for each antenna and the antenna with the strongest correlation is used to compute offset.
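The peak-picking idea described here can be sketched in a few lines of Python (purely illustrative; the actual function correlates the waveform against internally generated DM-RS reference waveforms, separately per antenna):

```python
def timing_offset(waveform, reference):
    """Return the lag at which `reference` best matches `waveform`,
    taken from the peak magnitude of the sliding cross-correlation."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(waveform) - len(reference) + 1):
        val = sum(w * r for w, r in zip(waveform[lag:], reference))
        if abs(val) > best_val:
            best_lag, best_val = lag, abs(val)
    return best_lag

ref = [1.0, -1.0, 1.0, 1.0]       # stand-in for a known reference sequence
rx = [0.0] * 4 + ref + [0.0] * 3  # received waveform, delayed by 4 samples
print(timing_offset(rx, ref))     # 4
```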
Wideband minimum-variance distortionless-response beamformer - MATLAB - MathWorks Deutschland
SubbandsOutputPort
Subband MVDR Beamforming of ULA
Subband MVDR Beamforming of Array with Interference
Wideband minimum-variance distortionless-response beamformer
The phased.SubbandMVDRBeamformer System object™ implements a wideband minimum variance distortionless response beamformer (MVDR) based on the subband processing technique. This type of beamformer is also called a Capon beamformer.
To beamform signals arriving at an array:
Create the phased.SubbandMVDRBeamformer object and set its properties.
beamformer = phased.SubbandMVDRBeamformer
beamformer = phased.SubbandMVDRBeamformer(Name,Value)
beamformer = phased.SubbandMVDRBeamformer creates a subband MVDR beamformer System object, beamformer. The object performs subband MVDR beamforming on the received signal.
beamformer = phased.SubbandMVDRBeamformer(Name,Value) creates a subband MVDR beamformer System object, beamformer, with each specified property Name set to the specified Value. You can specify additional name-value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
Example: beamformer = phased.SubbandMVDRBeamformer('SensorArray',phased.URA('Size',[5 5]),'OperatingFrequency',500e6) sets the sensor array to a 5-by-5 uniform rectangular array (URA) with all other default URA property values. The beamformer has an operating frequency of 500 MHz.
SubbandsOutputPort — Option to enable output of subband center frequencies
Option to enable output of subband center frequencies, specified as either true or false. To obtain the subband center frequencies, set this property to true and use the corresponding output argument FREQS when calling the object.
[Y,FREQS] = beamformer(___)
[Y,W,FREQS] = beamformer(X,XT,ANG)
Y = beamformer(X) performs wideband MVDR beamforming on the input, X, and returns the beamformed output in Y. This syntax uses X for training samples to calculate the beamforming weights. Use the Direction property to specify the beamforming direction.
Y = beamformer(X,XT) uses XT for training samples to calculate the beamforming weights.
Y = beamformer(X,ANG) uses ANG as the beamforming direction. This syntax applies when you set the DirectionSource property to 'Input port'.
[Y,W] = beamformer(___) returns the beamforming weights, W. This syntax applies when you set the WeightsOutputPort property to true.
[Y,FREQS] = beamformer(___) returns the center frequencies of the subbands, FREQS. This syntax applies when you set the SubbandsOutputPort property to true.
You can combine optional input arguments when you set their enabling properties. Optional input arguments must be listed in the same order as their enabling properties. For example, [Y,W,FREQS] = beamformer(X,XT,ANG) is valid when you specify TrainingInputPort as true and set DirectionSource to 'Input port'.
Wideband input signal, specified as an M-by-N matrix, where N is the number of array elements. M is the number of samples in the data. If the sensor array consists of subarrays, N is then the number of subarrays.
If you set TrainingInputPort to false, then the object uses X as training data. In this case, the dimension M must be greater than N×NB, where NB is the number of subbands specified in the NumSubbands property.
If you set TrainingInputPort to true, use the XT argument to supply training data. In this case, the dimension M can be any positive integer.
XT — Wideband training samples
P-by-N complex-valued matrix
Wideband training samples, specified as a P-by-N matrix where N is the number of elements. If the sensor array consists of subarrays, then N represents the number of subarrays.
This argument applies when you set TrainingInputPort to true. The dimension P is the number of samples in the training data. P must be larger than N×NB, where NB is the number of subbands specified in the NumSubbands property.
Example: XT = [1,1;j,1;0.5,0]
Beamforming direction, specified as a 2-by-L real-valued matrix, where L is the number of beamforming directions. This argument applies only when you set the DirectionSource property to 'Input port'. Each column takes the form of [AzimuthAngle;ElevationAngle]. Angle units are in degrees. The azimuth angle must lie between –180° and 180°. The elevation angle must lie between –90° and 90°. Angles are defined with respect to the local coordinate system of the array.
Beamformed output, returned as an M-by-L complex-valued matrix. The quantity M is the number of signal samples and L is the number of beamforming directions specified in the ANG argument.
N-by-K-by-L complex-valued matrix
Beamforming weights, returned as an N-by-K-by-L complex-valued matrix. The quantity N is the number of sensor elements or subarrays and K is the number of subbands specified by the NumSubbands property. The quantity L is the number of beamforming directions. Each column of W contains the narrowband beamforming weights used in the corresponding subband for the corresponding directions.
To return this output, set the WeightsOutputPort property to true.
FREQS — Center frequencies of subbands
Center frequencies of subbands, returned as a K-by-1 real-valued column vector. The quantity K is the number of subbands specified by the NumSubbands property.
To return this output, set the SubbandsOutputPort property to true.
Apply subband MVDR beamforming to an underwater acoustic 11-element ULA. The incident angle of the signal is 10° azimuth and 30° elevation. The signal is an FM chirp having a bandwidth of 1 kHz. The speed of sound is 1500 m/s.
carrierFreq = 2000;
sig = chirp(t,0,2,fs/2);
collector = phased.WidebandCollector('Sensor',array,'PropagationSpeed',c,...
'SampleRate',fs,'ModulatedInput',true,...
'CarrierFrequency',carrierFreq);
sig1 = collector(sig,incidentAngle);
noise = 0.3*(randn(size(sig1)) + 1j*randn(size(sig1)));
rx = sig1 + noise;
Apply MVDR beamforming
beamformer = phased.SubbandMVDRBeamformer('SensorArray',array,...
'Direction',incidentAngle,'OperatingFrequency',carrierFreq,...
'PropagationSpeed',c,'SampleRate',fs,'TrainingInputPort',true, ...
'SubbandsOutputPort',true,'WeightsOutputPort',true);
[y,w,subbandfreq] = beamformer(rx, noise);
Plot the signal that is input to the middle sensor (channel 6) vs the beamformer output.
plot(t(1:300),real(rx(1:300,6)),'r:',t(1:300),real(y(1:300)))
legend('Original','Beamformed');
Plot the response pattern for five bands
pattern(array,subbandfreq(1:5).',-180:180,0,...
'PropagationSpeed',c,'Weights',w(:,1:5));
Apply subband MVDR beamforming to an underwater acoustic 11-element ULA. Beamform the arriving signals to optimize the gain of a linear FM chirp signal arriving from 0 degrees azimuth and 0 degrees elevation. The signal has a bandwidth of 2.0 kHz. In addition, there is a unit-amplitude 2.250 kHz interfering sine wave arriving from 28 degrees azimuth and 0 degrees elevation. Show how the MVDR beamformer nulls the interfering signal. Display the array pattern for several frequencies in the neighborhood of 2.250 kHz. The speed of sound is 1500 m/s.
Simulate Arriving Signal and Noise
incidentAngle = [0;0];
Simulate Interfering Signal
Combine both the desired and interfering signals.
fint = 250;
sigint = sin(2*pi*fint*t);
interfangle = [28;0];
sigint1 = collector(sigint,interfangle);
rx = sig1 + sigint1 + noise;
Use the combined noise and interfering signal as training data.
'PropagationSpeed',c,'SampleRate',fs,'TrainingInputPort',true,...
'NumSubbands',64,...
[y,w,subbandfreq] = beamformer(rx,sigint1 + noise);
tidx = [1:300];
plot(t(tidx),real(rx(tidx,6)),'r:',t(tidx),real(y(tidx)))
Plot Array Response Showing Beampattern Null
Plot the response pattern for five bands near 2.250 kHz.
fdx = [5,7,9,11,13];
pattern(array,subbandfreq(fdx).',-50:50,0,...
'PropagationSpeed',c,'Weights',w(:,fdx),...
'CoordinateSystem','rectangular');
The beamformer places a null at 28 degrees for the subband containing 2.250 kHz.
Diagonal loading is a technique to improve beamformer robustness when stability issues arise from steering vector errors or finite sample size effects. This technique adds a positive real-valued multiple of the identity matrix to the correlation matrix of the received array data vector. You can apply diagonal loading using the DiagonalLoadingFactor property.
{f}_{m}=\left\{\begin{array}{c}{f}_{c}-\frac{{f}_{s}}{2}+\left(m-1\right)\Delta f\text{, }{N}_{B}\text{ even}\\ {f}_{c}-\frac{\left({N}_{B}-1\right){f}_{s}}{2{N}_{B}}+\left(m-1\right)\Delta f\text{, }{N}_{B}\text{ odd}\end{array},\text{ }m=1,\dots ,{N}_{B}
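The subband center frequencies above can be computed with a small Python sketch (an illustration of the formula, not the toolbox code; it assumes the subband spacing is Δf = fs/NB):

```python
def subband_centers(fc, fs, nb):
    """Center frequencies f_m, m = 1..NB, per the formula above."""
    df = fs / nb  # assumed subband spacing delta-f
    if nb % 2 == 0:
        f1 = fc - fs / 2
    else:
        f1 = fc - (nb - 1) * fs / (2 * nb)
    return [f1 + (m - 1) * df for m in range(1, nb + 1)]

print(subband_centers(fc=2000.0, fs=1000.0, nb=4))
# [1500.0, 1750.0, 2000.0, 2250.0]
```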
Usage notes and limitations:
See System Objects in MATLAB Code Generation (MATLAB Coder)
phased.MVDRBeamformer | phased.FrostBeamformer | phased.PhaseShiftBeamformer | phased.SubbandPhaseShiftBeamformer | phased.LCMVBeamformer | phased.WidebandCollector
States of Matter Chemistry NEET Practice Questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, and PDF solved with answers
The mean free path (
\mathrm{\lambda }
) of a gas sample is given by:
1. \lambda =\sqrt{2}\pi {\sigma }^{2}N
2. \lambda =1/\sqrt{2}\pi {\sigma }^{2}N
3. \lambda =\sqrt{2}\pi \mu {\sigma }^{2}N
Subtopic: Kinetic Theory of Gas |
Which is lighter than dry air?
Subtopic: Introduction to States of Matter |
The ratio a/b (the terms used in van der Waals' equation) has the unit-
1. atm litre mol-1
2. atm dm3 mol-1
3. dyne cm mol-1
Subtopic: Vanderwaal Correction |
At relatively high pressure, van der Waals' equation reduces to-
1. PV = RT
2. PV = RT + a/v
3. PV = RT + Pb
4. PV = RT - a/V2
The maximum deviation from ideal gas behaviour takes place -
1. At high temperature and low pressure
2. At low temperature and high pressure
3. At high temperature and high pressure
4. At low temperature and low pressure
The Joule-Thomson coefficient {\left(\partial T/\partial P\right)}_{H} for an ideal gas is:
2. +ve
3. -ve
Subtopic: Liquefaction of Gases & Liquid |
All the three states of H2O, i.e., the triple point for H2O where the equilibrium Ice ⇌ Water ⇌ Vapour holds, exist at:
1. 3.85 mm and 0.0981°
3. 760 mm and 0°
The unit of van der Waals' constant 'a' is:
1. atm litre2 mol-2
2. dyne cm4 mol-2
3. newton m4 mol-2
The unit of van der Waals' constant 'b' is :
1. cm3 mol-1
2. litre mol-1
3. m3 mol-1
At STP, 0.50-mole H2 gas and 1.0 mole He gas
1. have equal average kinetic energies
2. have equal molecular speeds
3. occupy equal volumes
4. have equal effusion rates
The Useless Idiot's Guide to Girlfriends - Uncyclopedia, the content-free encyclopedia
The Useless Idiot's Guide to Girlfriends
Remember, maintaining eye contact is essential.
The Useless Idiot's Guide to Girlfriends. We, as in me, would like to help all those utterly useless idiots out there by coming up with a guide to help them better understand complicated issues such as bananas or small specks of dirt. We have found that there are many idiots in the world. There are the bigot idiots, who are just so gung-ho about being a completely useless waste of skin that they annoy the rest of us. There are also the ashamed idiots to counter, who never want to admit their idocracy. These are the ones you can catch agreeing with you on some deep (or even unbelievably simple) matter, but once you ask them if they really know what you're talking about, they just smile and say 'no' real cutely, as if that will mitigate the knowledge that they are idiots. These idiots should not be ashamed; they should proudly announce to the world that "THEY IS IDIOT!" It is nothing to be ashamed of, the world needs ditchdiggers too. There are also the indescribably idiotic idiots, who are, obviously, indescribable. And lastly there are the blondes, enough said. Since you are most likely the idiot we think you are, we have made a simple outline of what we are all about:
{\displaystyle You=idiot}
{\displaystyle We=smart}
You read(if you can, if not seek the expert advice of a higher intellectual who has advanced himself or herself in the abstract aspects of putting symbols on paper together to construct an abstract idea, or used to define a physical object in a materialistic reality. Oh, and bring 'We' some tea, numskull).
You head hurt.
Transverse Property of Equality: You=smart, We=smart, You=We! You are one of us, thank you for selling your soul to dierbergs(actually, sold to 'We'), have a nice day!
2 Weapons of Defense
4 Rules for a Girlfriend
The average girlfriend...you've been warned.
A girlfriend, defined by Uncyclopedia, is "something that you will never have, unless you are a girl yourself, in which case you can be your own girlfriend, or you can be a lesbian. If you are a girl, and want a hetero-partner, then a Boyfriend is for you (it should be noted that these are far easier to find than girlfriends)." The official definition is "crazy, psycho bitch who'll steal your soul". So far to date, there have been 3459237237 soul stealings. Most girlfriends are evil and use their powers to lure men into their trap by using superior weapons called boobs and vaginas. Most of these women come with add-ons that make their weapons even more deadly.
In the beginning stages, girlfriends are actually born as girls. For about the first 14 years of their lives they stay as girls, that is, until they search for a master that can take care of them. By taking care of, I mean giving them presents, food, puppies, sex, etc. Like Pokemon, boys search the world trying to capture these creatures, thinking that it would be great to have one (how little they know). When a boy finds a girl he would like to capture, he throws his balls at her until she is captured, but once captured, girls then evolve into the most ferocious creature ever created by God: the girlfriend. These fearsome beasts are even more powerful than Gazebos. Once one is captured, the boy never wants another one again, in fear of being destroyed by the jealousy of the first girlfriend. So far, boys, like they usually are, do not listen to their elders, but instead keep going for another girlfriend. Some say it's "the thrill of the hunt, but just remember to catch and release!"
Weapons of Defense[edit]
There are a number of weapons to defend against girlfriends.
baseball and/or football
any sport in general
Getting another girlfriend
That fat guy sitting next to you
A burnt wing man
If you are one of the unlucky people to have a girlfriend, it is of the UTMOST IMPORTANCE that these defensive strategies be carried out.
If you cannot rid yourself of your girlfriend because she is just too attached, or she is a "crazy, psycho bitch who'll steal your soul", you can always try to tame her and keep her as a pet. The best way to do this is to use money and buy her a lot of gifts. This will keep her occupied for most of the day.
It is best just not to get one in general. They are a menace to society and need to be stopped. Organizations have been created to rid the world of this evil.
Rules for a Girlfriend[edit]
8. Don't do this.
You may not at any time watch a movie with her; instead, watch her.
Do not ask to do anything with a group if she is with you.
During outings, you may not be distracted by other things besides her.
Do not fart, belch, burp, crap, fart, yawn, sneeze, blink, or breathe while with her.
Wear a protective cup just in case she gets mad.
Do not get one.
Do not try to be friendly to one.
Retrieved from "http://en.uncyclopedia.co/w/index.php?title=The_Useless_Idiot%27s_Guide_to_Girlfriends&oldid=6050823"
Power of lens — lesson. Science CBSE, Class 10.
21. Power of lens
You already know that a lens's ability to converge or diverge light rays is determined by its focal length. A convex lens with a short focal length bends light rays through large angles by bringing them closer to the optical centre. A concave lens with a very short focal length, on the other hand, causes more divergence than a lens with a longer focal length. The power of a lens is the degree of convergence or divergence of light rays it achieves. A lens's power is equal to the reciprocal of its focal length. It is denoted by the letter \(P\). The power \(P\) of a lens with a focal length of \(f\) is calculated as follows:
P=\frac{1}{f}
The '\(dioptre\)' is the SI unit of lens power. The letter \(D\) stands for it. When \(f\) is measured in metres, power is measured in dioptres. A lens with a focal length of one metre has a power of one dioptre. The power of a convex lens is \(positive\), while the power of a concave lens is \(negative\). Corrective lenses are prescribed by optometrists, who specify their strengths.
Let us say the lens prescribed by the optometrist has power equal to \(+2.0\) \(D\). This implies the lens prescribed is convex. The focal length of the lens is \(+ 0.50\) \(m\). Similarly, a lens of power \(–2.5\) \(D\) has a focal length of \(–0.40\) \(m\). The prescribed lens is concave in shape.
Many optical instruments consist of multiple lenses. They are combined to increase the magnification and sharpness of the image.
The total power (\(P\)) of the lenses placed in contact is given by the algebraic sum of the powers of individual lenses.
P={P}_{1}+{P}_{2}+{P}_{3}....
For opticians, using powers instead of focal lengths for lenses is much more convenient. An optician places several different combinations of corrective lenses of known power in contact inside the testing spectacles' frame during eye testing. The required lens power is calculated by the optician using simple algebraic addition.
A combination of two lenses with powers of \(+2.0\) \(D\) and \(+0.25\) \(D\) is equivalent to a single lens with a power of \(+2.25\) \(D\).
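The lesson's arithmetic can be checked with a few lines of Python (an illustrative sketch; the helper name `power` is ours):

```python
def power(focal_length_m):
    """Power in dioptres for a focal length in metres (P = 1/f)."""
    return 1.0 / focal_length_m

# The optometrist's examples from the lesson:
print(1.0 / 2.0)   # focal length of a +2.0 D lens: 0.5 m (convex)
print(1.0 / -2.5)  # focal length of a -2.5 D lens: -0.4 m (concave)

# Lenses in contact add their powers algebraically:
print(power(0.5) + power(4.0))  # 2.0 D + 0.25 D = 2.25
```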
Lens systems can be designed to minimise certain defects in images produced by a single lens using the simple additive property of lens powers.
Lenses for cameras, microscopes, and telescopes frequently use such a lens system, which consists of several lenses in contact.
https://pixabay.com/illustrations/telescope-astronomy-science-5805728/
Y combinator - jaredgorski.org
\lambda f.(\lambda x.f(x~x))~(\lambda x.f(x~x))
The y combinator is a way to implement recursion in lambda calculus
Most basic recursion representation:
loop = loop
Consider representing “loop” in lambda calculus (the “omega combinator”):
(\lambda x.x~x)~(\lambda x.x~x)
\lambda x.x~x
takes an input
x
and returns
x
as output
we apply this function to itself
this concept of “self application” is the key to recursion
(\lambda x.x~x)~(\lambda x.x~x)
\rarr ~~(x~x)[x:=\lambda x.x~x]
\rarr ~~(\lambda x.x~x)~(\lambda x.x~x)
rec~f = f(rec~f)\\ = f(f(f( ...
a definition of “rec” in lambda calculus (y combinator):
\lambda f.(\lambda x.f(x~x))~(\lambda x.f(x~x))
note that the y combinator includes the same sort of “self application” or “omega combinator” included in “loop”, though the y combinator is not itself recursive because it relies on the
f
input, which is the function that should be recursed
as the “omega combinator” is applied to itself, expanded, and evaluated, it naturally calls the
function input each time it’s evaluated, allowing for recursion
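In an eagerly evaluated language the Y combinator as written loops forever, so this Python sketch uses its eta-expanded variant (often called the Z combinator), which delays the self application until the recursive call is actually made:

```python
# Z combinator: the strict-evaluation form of λf.(λx.f(x x))(λx.f(x x)).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# `fact_step` is the non-recursive step function f; Z ties the knot.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)
factorial = Z(fact_step)

print(factorial(5))  # 120
```

Note that `fact_step` never refers to itself; all the self application lives inside `Z`, mirroring the point made above about `rec f = f(rec f)`.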
Area type inequalities and integral means of harmonic functions on the unit ball
April, 2007 Area type inequalities and integral means of harmonic functions on the unit ball
In this paper we investigate the relationship among the following integrals
{\int }_{B}|u\left(x\right){|}^{p-i}|\nabla u\left(x\right){|}^{i}\left(1-|x|{\right)}^{\alpha }dV\left(x\right),
i\in \left\{0,1,2\right\}
1<p<\mathrm{\infty }
\alpha >0
u
is an arbitrary harmonic function on the unit ball
B\subset {\mathbf{R}}^{n}
. Growth of the integral means of harmonic functions is also compared to the integral means of their gradient.
Stevo STEVIĆ. "Area type inequalities and integral means of harmonic functions on the unit ball." J. Math. Soc. Japan 59 (2) 583 - 601, April, 2007. https://doi.org/10.2969/jmsj/05920583
Keywords: area inequality , Harmonic functions , integral mean , weighted integrals
Electrical network — Wikipedia Republished // WIKI 2
Assemblage of connected electrical elements
For electrical power transmission grids and distribution networks, see Electrical grid.
{\displaystyle v=iR}
, according to Ohm's law.
An electrical network is an interconnection of electrical components (e.g., batteries, resistors, inductors, capacitors, switches, transistors) or a model of such an interconnection, consisting of electrical elements (e.g., voltage sources, current sources, resistances, inductances, capacitances). An electrical circuit is a network consisting of a closed loop, giving a return path for the current. Linear electrical networks, a special type consisting only of sources (voltage or current), linear lumped elements (resistors, capacitors, inductors), and linear distributed elements (transmission lines), have the property that signals are linearly superimposable. They are thus more easily analyzed, using powerful frequency domain methods such as Laplace transforms, to determine DC response, AC response, and transient response.
A resistive circuit is a circuit containing only resistors and ideal current and voltage sources. Analysis of resistive circuits is less complicated than analysis of circuits containing capacitors and inductors. If the sources are constant (DC) sources, the result is a DC circuit. The effective resistance and current distribution properties of arbitrary resistor networks can be modeled in terms of their graph measures and geometrical properties.[1]
A network that contains active electronic components is known as an electronic circuit. Such networks are generally nonlinear and require more complex design and analysis tools.
1.1 By passivity
1.2 By linearity
1.3 By lumpiness
2 Classification of sources
2.2 Dependent
3 Applying electrical laws
4 Design methods
5 Network simulation software
5.1 Linearization around operating point
5.2 Piecewise-linear approximation
6.1 Representation
6.2 Design and analysis methodologies
6.3 Measurement
6.4 Analogies
6.5 Specific topologies
An active network contains at least one voltage source or current source that can supply energy to the network indefinitely. A passive network does not contain an active source.
An active network contains one or more sources of electromotive force. Practical examples of such sources include a battery or a generator. Active elements can inject power to the circuit, provide power gain, and control the current flow within the circuit.
Discrete passive components (resistors, capacitors and inductors) are called lumped elements because all of their, respectively, resistance, capacitance and inductance is assumed to be located ("lumped") at one place. This design philosophy is called the lumped-element model and networks so designed are called lumped-element circuits. This is the conventional approach to circuit design. At high enough frequencies, or for long enough circuits (such as power transmission lines), the lumped assumption no longer holds because there is a significant fraction of a wavelength across the component dimensions. A new design model is needed for such cases called the distributed-element model. Networks designed to this model are called distributed-element circuits.
A distributed-element circuit that includes some lumped components is called a semi-lumped design. An example of a semi-lumped circuit is the combline filter.
A dependent source delivers a power, voltage, or current whose value depends on another voltage or current elsewhere in the circuit; which quantity controls it depends on the type of source.
Kirchhoff's current law: The sum of all currents entering a node is equal to the sum of all currents leaving the node.
Kirchhoff's voltage law: The directed sum of the electrical potential differences around a loop must be zero.
Ohm's law: The voltage across a resistor is equal to the product of the resistance and the current flowing through it.
Norton's theorem: Any network of voltage or current sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor.
Thévenin's theorem: Any network of voltage or current sources and resistors is electrically equivalent to a single voltage source in series with a single resistor.
Superposition theorem: In a linear network with several independent sources, the response in a particular branch when all the sources are acting simultaneously is equal to the linear sum of individual responses calculated by taking one independent source at a time.
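The first three laws are enough to solve simple networks by hand. Here is a worked example in Python for a two-resistor voltage divider (the component values are illustrative):

```python
# A 9 V source driving two series resistors.
V = 9.0                  # source voltage, volts
R1, R2 = 1000.0, 2000.0  # resistances, ohms

# KVL around the single loop gives V = I*R1 + I*R2, so by Ohm's law:
I = V / (R1 + R2)        # loop current, amperes
V_R1 = I * R1            # voltage across R1
V_R2 = I * R2            # voltage across R2

print(I)                      # 0.003
print(round(V_R1 + V_R2, 6))  # 9.0  (KVL: the drops sum to the source voltage)
```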
Linear network analysis
Series and parallel circuits
Impedance transforms
Two-port parameters
See also: Network analysis (electrical circuits)
To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Simple linear circuits can be analyzed by hand using complex number theory. In more complex cases the circuit may be analyzed with specialized computer programs or estimation techniques such as the piecewise-linear model.
When faced with a new circuit, the software first tries to find a steady state solution, that is, one where all nodes conform to Kirchhoff's current law and the voltages across and through each element of the circuit conform to the voltage/current equations governing that element.
Once the steady state solution is found, the operating points of each element in the circuit are known. For a small signal analysis, every non-linear element can be linearized around its operation point to obtain the small-signal estimate of the voltages and currents. This is an application of Ohm's Law. The resulting linear circuit matrix can be solved with Gaussian elimination.
Software such as the PLECS interface to Simulink uses piecewise-linear approximation of the equations governing the elements of a circuit. The circuit is treated as a completely linear network of ideal diodes. Every time a diode switches from on to off or vice versa, the configuration of the linear network changes. Adding more detail to the approximation of equations increases the accuracy of the simulation, but also increases its running time.
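The switching idea can be sketched for the smallest possible case: an ideal diode in series with a voltage source and a resistor. Guess the diode's state, solve the resulting linear network, and keep the guess only if it is consistent (this is an illustration of the concept, not PLECS code):

```python
def solve_diode_loop(v_src, r):
    """Ideal diode in series with source v_src and resistor r."""
    i = v_src / r                       # guess: diode ON (a short circuit)
    if i >= 0.0:                        # consistent: forward current flows
        return {"state": "on", "current": i}
    return {"state": "off", "current": 0.0}  # otherwise: diode OFF (open)

print(solve_diode_loop(5.0, 1000.0))   # {'state': 'on', 'current': 0.005}
print(solve_diode_loop(-5.0, 1000.0))  # {'state': 'off', 'current': 0.0}
```

Each time the diode's state flips, the network the solver sees is a different linear circuit, which is exactly why the simulation cost grows with the number of switching events.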
Wikimedia Commons has media related to Electrical circuits.
Look up electrical circuit in Wiktionary, the free dictionary.
Digital circuit
Ground (electricity)
Open-circuit voltage
Short circuit
Voltage drop
Circuit diagram
Design and analysis methodologies
Network analysis (electrical circuits)
Mathematical methods in electronics
Superposition theorem
Topology (electronics)
Mesh analysis
Prototype filter
Network analyzer (electrical)
Network analyzer (AC power)
Continuity test
Hydraulic analogy
Mechanical–electrical analogies
Impedance analogy (Maxwell analogy)
Mobility analogy (Firestone analogy)
Through and across analogy (Trent analogy)
Specific topologies
Bridge circuit
LC circuit
RC circuit
RL circuit
RLC circuit
Potential divider
A frequentist approach to probability
A Gentleman and a Scala
One thing that always confused me in my intro stats classes was the concept of a random variable. A random variable is not a variable like I’m used to thinking about, like a thing that has one value at a time. A random variable is instead an object that you can sample values from, and the values you get will be distributed according to some underlying probability distribution.
In that way it sort of acts like a container, where the only operation is to sample a value from the container. In Scala it might look something like:
trait Distribution[A] {
  def get: A
}
The idea is that get returns a different value (of type A) from the distribution every time you call it.
I’m going to add a sample method that lets me draw a sample of any size I want from the distribution.
def sample(n: Int): List[A] = {
  List.fill(n)(this.get)
}
Now to create a simple distribution. Here’s one whose samples are uniformly distributed between 0 and 1.
val uniform = new Distribution[Double] {
  private val rand = new java.util.Random()
  override def get = rand.nextDouble()
}
And sampling it gives
scala> uniform.sample(10).foreach(println)
Every good container should have a map method. map will transform values produced by the distribution according to some function you pass it.
def map[B](f: A => B): Distribution[B] = new Distribution[B] {
  override def get = f(self.get)
}
(Quick technical note: I added a self-type annotation that makes self an alias for this so that it’s easier to refer to in anonymous inner classes.)
Now I can map * 2 over the uniform distribution, giving a uniform distribution between 0 and 2:
scala> uniform.map(_ * 2).sample(10).foreach(println)
map also lets you create distributions of different types:
scala> val tf = uniform.map(_ < 0.5)
tf: Distribution[Boolean] = <distribution>
scala> tf.sample(10)
res2: List[Boolean] = List(true, true, true, true, false, false, false, false, true, false)
tf is a Distribution[Boolean] that should give true and false with equal probability. Actually, it would be a bit more useful to be able to create distributions giving true and false with arbitrary probabilities.
def tf(p: Double): Distribution[Boolean] = {
  uniform.map(_ < p)
}
scala> tf(0.8).sample(10)
res0: List[Boolean] = List(true, false, true, true, true, true, true, true, true, true)
A very closely related distribution is the Bernoulli distribution, which gives 0 or 1 with some probability instead of true and false. This can be achieved with a simple map:
def bernoulli(p: Double): Distribution[Int] = {
  tf(p).map(b => if (b) 1 else 0)
}
Cool. Now I want to measure the probability that a random variable will take on certain values. This is easy to do empirically by pulling 10,000 sample values and counting how many of the values satisfy the given predicate.
private val N = 10000

def pr(predicate: A => Boolean): Double = {
  this.sample(N).count(predicate).toDouble / N
}
scala> uniform.pr(_ < 0.4)
res2: Double = 0.4015
It works! It’s not exact, but it’s close enough.
Now I need two ways to transform a distribution.
def given(predicate: A => Boolean): Distribution[A] = new Distribution[A] {
  override def get = {
    val a = self.get
    if (predicate(a)) a else this.get
  }
}

def repeat(n: Int): Distribution[List[A]] = new Distribution[List[A]] {
  override def get = {
    List.fill(n)(self.get)
  }
}
given creates a new distribution by sampling from the original distribution and discarding values that don’t match the given predicate. repeat creates a Distribution[List[A]] from a Distribution[A] by producing samples that are lists of n samples from the original distribution.
OK, now one more distribution:
def discreteUniform[A](values: Iterable[A]): Distribution[A] = {
  val vec = values.toVector
  uniform.map(x => vec((x * vec.length).toInt))
}
scala> val die = discreteUniform(1 to 6)
die: Distribution[Int] = <distribution>
scala> die.sample(10)
scala> die.pr(_ == 4)
scala> die.given(_ % 2 == 0).pr(_ == 4)
scala> val dice = die.repeat(2).map(_.sum)
dice: Distribution[Int] = <distribution>
scala> dice.pr(_ == 7)
scala> dice.pr(_ == 11)
scala> dice.pr(_ < 4)
Neat! This is getting useful.
OK I’m tired of looking at individual probabilities. What I really want is a way to visualize the entire distribution.
scala> dice.hist
2 2.67% ##
3 5.21% #####
4 8.48% ########
5 11.52% ###########
6 13.78% #############
7 16.61% ################
10 8.66% ########
11 5.64% #####
12 2.79% ##
That’s better. hist pulls 10,000 samples from the distribution, buckets them, counts the size of the buckets, and finds a good way to display it. (The code is tedious so I’m not going to reproduce it here.)
Don’t tell anyone it’s a monad
Another way to represent two die rolls is to sample from die twice and add the samples.
scala> val dice = die.map(d1 => die.map(d2 => d1 + d2))
dice: Distribution[Distribution[Int]] = <distribution>
But wait, that gives me a Distribution[Distribution[Int]], which is nonsense. Fortunately there’s an easy fix.
def flatMap[B](f: A => Distribution[B]): Distribution[B] = new Distribution[B] {
  override def get = f(self.get).get
}
scala> val dice = die.flatMap(d1 => die.map(d2 => d1 + d2))
6 14.04% ##############
9 10.97% ##########
The definition of dice can be re-written using Scala’s for-comprehension syntax:
val dice = for {
  d1 <- die
  d2 <- die
} yield d1 + d2
This is really nice. The <- notation can be read as sampling a value from a distribution. d1 and d2 are samples from die and both have type Int. d1 + d2 is a sample from dice, the distribution I’m creating.
In other words, I’m creating a new distribution by writing code that constructs a single sample of the distribution from individual samples of other distributions. This is pretty handy! Lots of common distributions can be constructed this way. (More on that soon!)
I think it would be fun to model the Monty Hall problem.
val montyHall: Distribution[(Int, Int)] = {
  val doors = (1 to 3).toSet
  for {
    prize <- discreteUniform(doors) // The prize is placed randomly
    choice <- discreteUniform(doors) // You choose randomly
    opened <- discreteUniform(doors - prize - choice) // Monty opens one of the other doors
    switch <- discreteUniform(doors - choice - opened) // You switch to the unopened door
  } yield (prize, switch)
}
This code constructs a distribution of pairs representing the door the prize is behind and the door you switched to. Let’s see how often those are the same door:
scala> montyHall.pr{ case (prize, switch) => prize == switch }
Just as expected. Lots of people have a hard time believing the explanation behind why this is correct, but there’s no arguing with just trying it 10,000 times!
HTH vs HTT
Another fun problem: if you flip a coin repeatedly, which pattern do you expect to see earlier, heads-tails-heads or heads-tails-tails?
First I need the following method:
def until(pred: List[A] => Boolean): Distribution[List[A]] = new Distribution[List[A]] {
  override def get = {
    def helper(sofar: List[A]): List[A] = {
      if (pred(sofar)) sofar
      else helper(self.get :: sofar)
    }
    helper(Nil)
  }
}
until samples from the distribution, adding the samples to the front of the list until the list satisfies some predicate. A single sample from the resulting distribution is a list that satisfies the predicate.
val hth = tf(0.5).until(_.take(3) == List(true, false, true)).map(_.length)
val htt = tf(0.5).until(_.take(3) == List(false, false, true)).map(_.length)
Looking at the distributions:
scala> hth.hist
4 12.43% ############
5 9.50% #########
6 7.82% #######
8 6.51% ######
10 4.57% ####
12 3.78% ###
17 1.76% #
scala> htt.hist
Eyeballing it, it appears that HTT is likely to occur earlier than HTH. (How can this be? Exercise for the reader!) But I’d like to get a more concrete answer than that. What I want to know is how many flips you expect to see before seeing either pattern. So let me add a method to compute the expected value of a distribution:
def ev: Double = {
  Stream.fill(N)(self.get).sum / N
}
Hm, that .sum is not going to work for all As. I mean, A could certainly be Boolean, as in the case of the tf distribution (what is the expected value of a coin flip?). So I need to constrain A to Double for the purposes of this method.
def ev(implicit toDouble: A <:< Double): Double = {
  Stream.fill(N)(toDouble(self.get)).sum / N
}
scala> hth.ev
<console>:15: error: Cannot prove that Int <:< Double.
hth.ev
Perfect. You know, it really bothered me when I first learned that the expected value of a die roll is 3.5. Requiring an explicit conversion to Double before computing the expected value of any distribution makes that fact a lot more palatable.
scala> hth.map(_.toDouble).ev
scala> htt.map(_.toDouble).ev
There we go, empirical confirmation that HTT is expected to appear after 8 flips and HTH after 10 flips.
I’m curious. Suppose you and I played a game where we each flipped a coin until I got HTH and you got HTT. Then whoever took more flips pays the other person the difference. What is the expected value of this game? Is it 2? It doesn’t have to be 2, does it? Maybe the distributions are funky in some way that makes the difference in expected value 2 but the expected difference something else.
Well, easy enough to try it.
val diff = for {
me <- hth
you <- htt
} yield me - you
scala> diff.map(_.toDouble).ev
Actually, it does have to be 2. Expectation is linear!
Unbiased rounding
At Foursquare we have some code that computes how much our customers owe us, and charges them for it. Our payments provider, Stripe, only allows us to charge in whole cents, but for complicated business reasons sometimes a customer owes us fractional cents. (No, this is not an Office Space or Superman III reference.) So we just round to the nearest whole cent (actually we use unbiased rounding, or banker’s rounding, which rounds 0.5 cents up half the time and down half the time).
Because we’re paranoid and also curious, we want to know how much money we are losing or gaining due to rounding. Let’s say that during some period of time we saw that we rounded 125 times, and the sum of all the roundings totaled +8.5 cents. That kinda seems like a lot, but it could happen by chance. If fractional cents are uniformly distributed, what is the probability that you would see a difference that big after 125 roundings?
scala> val d = uniform.map(x => if (x < 0.5) -x else 1.0-x).repeat(125).map(_.sum)
d: Distribution[Double] = <distribution>
scala> d.hist
-10.0 0.02%
-9.0 0.20%
-7.0 1.32% #
-6.0 2.15% ##
-5.0 3.75% ###
-4.0 5.12% #####
-3.0 7.83% #######
-2.0 10.58% ##########
-1.0 11.44% ###########
0.0 12.98% ############
1.0 11.57% ###########
2.0 10.68% ##########
3.0 7.73% #######
4.0 5.70% #####
5.0 3.88% ###
6.0 2.32% ##
7.0 1.21% #
There’s the distribution. Each instance is either a loss of x if x < 0.5 or a gain of 1.0-x. Repeat 125 times and sum it all up to get the total gain or loss from rounding.
Now what’s the probability that we’d see a total greater than 8.5 cents? (Or less than -8.5 cents — a loss of 8.5 cents would be equally surprising.)
scala> d.pr(x => math.abs(x) > 8.5)
Pretty unlikely, about 1%! So the distribution of fractional cents is probably not uniform. We should maybe look into that.
One last example. It turns out the normal distribution can be approximated pretty well by summing 12 uniformly distributed random variables and subtracting 6. In code:
val normal: Distribution[Double] = {
  uniform.repeat(12).map(_.sum - 6)
}
scala> normal.hist
-3.50 0.04%
-2.00 2.54% ##
-1.50 6.62% ######
-1.00 12.09% ############
-0.50 17.02% #################
0.00 20.12% ####################
0.50 17.47% #################
1.00 12.63% ############
1.50 6.85% ######
2.00 2.61% ##
scala> normal.pr(x => math.abs(x) < 1)
I believe it! One more check though.
def variance(implicit toDouble: A <:< Double): Double = {
  val mean = this.ev
  this.map(x => {
    math.pow(toDouble(x) - mean, 2)
  }).ev
}

def stdev(implicit toDouble: A <:< Double): Double = {
  math.sqrt(this.variance)
}
The variance \sigma^2 of a random variable X with mean \mu is defined as E[(X-\mu)^2], and the standard deviation \sigma is just the square root of the variance.
scala> normal.stdev
This is a great approximation and all, but java.util.Random actually provides a nextGaussian method, so for the sake of performance I’m just going to use that.
val normal: Distribution[Double] = new Distribution[Double] {
  private val rand = new java.util.Random()
  override def get = rand.nextGaussian()
}
The frequentist approach lines up really well with my intuitions about probability. And Scala’s for-comprehensions provide a suggestive syntax for constructing new random variables from existing ones. So I’m going to continue to explore various concepts in probability and statistics using these tools.
In later posts I’ll try to model Bayesian inference, Markov chains, the Central Limit Theorem, probabilistic graphical models, and a bunch of related distributions.
All of the code for this is on github.
© 2013 Jason Liszka with help from Jekyll Bootstrap and Twitter Bootstrap
|
This conference was the fifth in the series of topology conferences organized by Gordon, Lück, and Oliver, and the last in which Lück will be an organizer. This meeting, which currently takes place every second year, is one of the few regularly occurring conferences anywhere which allows researchers from a wide range of areas of topology to meet.
There were about 50 participants in this meeting, including researchers in many different areas of algebraic and geometric topology. This conference was partly funded by the European Commission, which made it possible to invite and support many more young participants --- thesis students as well as recent postdocs --- than is usually the case.
There were a total of 19 talks at the conference, covering areas such as 3-manifolds and knot theory, geometric group theory, algebraic $K$- and $L$-theory, and homotopy theory. Hence it is difficult to separate out themes which covered more than two or three talks. The following is a brief summary of some of the highlights.
Marc Lackenby's talk was about the ``folk'' conjecture in knot theory that crossing number is additive under connected sum. Clearly $c(K_1\#K_2) \le c(K_1) + c(K_2)$; what one wants is an inequality in the other direction. Applying normal surface theory to a suitable handle decomposition of the complement of $K_1\#K_2$ derived from a minimal crossing diagram, Lackenby shows that for some explicit universal positive constant $A$, $c(K_1\#K_2) \ge A{\cdot}(c(K_1)+c(K_2))$.
Nathalie Wahl talked about her ongoing joint work with Allen Hatcher on the stability of the homology of the mapping class group of certain $3$-manifolds. Namely, she looked at those with $n$ summands of type~$S^2\times{}S^1$ and $s$ punctures, stabilizing with respect to increasing $n$. The result is that the $i$-th homology stabilizes when $n \ge 2i+2$, which improves considerably the previous stability range.
Also on the subject of $3$-manifolds was the talk by Walter Neumann, about his joint work with Jason Behrstock in which they give the quasi-isometry classification of the fundamental groups of graph-manifolds. The result is that for closed non-geometric graph-manifolds there is only one quasi-isometry class, whereas in the bounded case the classification corresponds to the classification of the dual graphs up to so-called bisimilarity (a concept which, interestingly, arises in computer science).
The talks by Thomas Schick and Bernhard Hanke dealt with manifolds with positive scalar curvature. Hanke described conditions under which a closed manifold with fixed point free $S^1$-action can be shown to have a Riemannian metric with positive scalar curvature. Schick discussed some connections between the nonequivariant problem (existence of positive scalar curvature metrics without a group action) and the Novikov conjecture.
In a different direction, Jesper Grodal and Carles Broto described recent progress on $p$-completed classifying spaces and related topics. Grodal described work which shows that, for a finite group $G$ and a prime $p$ dividing $|G|$, the fundamental group of the geometric realization of the linking category $\mathcal{L}_p^c(G)$ is in many interesting cases isomorphic to $G$ again. The point here is that the category~$\mathcal{L}_p^c(G)$ depends only on the $p$-completed classifying space $BG_p^\wedge$, and thus that the group $G$ can be ``recovered'' from this $p$-completed space in certain favorable cases. Broto described a new class of spaces, classifying spaces of ``$p$-local compact groups,'' which includes $p$-completed classifying spaces of compact Lie groups and $p$-compact groups, for which the spaces have many of the nice homotopy theoretic properties of $p$-completed classifying spaces of compact Lie groups. Also, in a talk on a related topic, Natalia Castellana described recent work on connected covers of finite $H$-spaces.
Geometric group theory was represented by the talks of Karen Vogtmann and Mike Davis. Vogtmann talked about joint work with Jim Conant about certain classes defined by Morita in the unstable rational homology of the outer automorphism group of a free group. In particular Conant and Vogtmann reinterpret and generalize these Morita classes, associating a class with every odd-valent graph. Davis talked about his joint work with Dymara, Januszkiewicz and Okun, giving a description of the cohomology module $H^*(W;\Z{}W)$ of a Coxeter group $W$ with coefficients in the group ring $\Z{}W$.
Arthur Bartels, in his talk, described the recent proof of the $K$-theoretic Farrell-Jones conjecture with arbitrary coefficients for subgroups of finite products of word-hyperbolic groups. (Notice that such groups can be very wild.) This has many consequences. It implies the Bass Conjecture, the Kaplansky Conjecture and Moody's induction conjecture for such groups. If $G$ is such a group and torsionfree, this says that the Whitehead group of $G$ and the projective class group of ${\Z}G$ both vanish. Another consequence is that the $K$-theoretic Farrell-Jones Conjecture with coefficients is true for the examples of groups for which its non-commutative companion, the Baum-Connes Conjecture with coefficients, is known to be false by a result of Higson, Lafforgue, and Skandalis.
|
Complex analysis - Citizendium
This article is about Complex analysis. For other uses of the term Analysis, please see Analysis (disambiguation).
Complex analysis is, broadly speaking, the study of functions that take as input complex numbers and output complex numbers, and behave well with respect to a notion of complex differentiation, as discussed below. It's crucial to note that just having complex-valued functions does not qualify something for being called complex analysis; it is really the new definitions of differentiation and integration with respect to complex variables, and using the field structure of complex numbers, that makes the subject different.
Complex analysis is closely related to real analysis, the study of functions over the reals. However, a number of beautiful results that hold in complex analysis fail to have analogues in real analysis. The core reason behind this is that the complex numbers, which form a plane, are far better connected than the real line (which can be disconnected by removing just one point), allowing for fascinating arguments with geometric content.
Let us now turn to the question: Is it possible to extend the methods of calculus to functions of a complex variable, and why might we want to do so? We recall the definition of one of the two fundamental operations of calculus, differentiation. Given a function
{\displaystyle y=f(x)}
, we say f is differentiable at
{\displaystyle x_{0}} if
{\displaystyle \lim _{h\to 0}{\frac {f(x_{0}+h)-f(x_{0})}{h}}}
exists, and we call the limiting value the derivative of f at
{\displaystyle x_{0}}
, and the function that assigns to each point x the derivative of f at x is called the derivative of f, and is written
{\displaystyle f'(x)}
{\displaystyle df/dx}
. Now, does this definition work for functions of a complex variable? The answer is yes, and to see why, we fix x and unravel the definition of limit. If the limit exists, say
{\displaystyle c=f'(x)}
, then for every (real) number
{\displaystyle \varepsilon >0}
, there is a (real) number
{\displaystyle \delta } such that whenever
{\displaystyle |h|<\delta } we have
{\displaystyle \left|{\frac {f(x+h)-f(x)}{h}}-c\right|<\varepsilon }
This makes perfect sense for functions of a complex variable, but we need to keep in mind that
{\displaystyle |\cdot |}
represents the modulus of a complex number, not the real absolute value.
This seemingly innocuous difference actually has far reaching implications. Recall that the complex plane has two real dimensions, so there are many ways that h can approach 0: successive values of h may be points on the x-axis, points on the y-axis, some other line through the origin, it may spiral in, or take any of a number of paths, but the definition requires that the limit be the same number in every case. This is a very strong requirement! Fortunately, it turns out to be sufficient to consider just two of the possible "approach paths": a sequence of values along the x-axis and a sequence of values along the y-axis. If we call the real and imaginary parts (respectively) of
{\displaystyle w=f(z)}
u and v, (i.e.,
{\displaystyle w=f(z)=u+iv}
), this requirement can be expressed in terms of the partial derivatives of u and v with respect to x and y:
{\displaystyle {\frac {\partial u}{\partial x}}={\frac {\partial v}{\partial y}}}
{\displaystyle {\frac {\partial v}{\partial x}}=-{\frac {\partial u}{\partial y}}}
These equations are known as the Cauchy-Riemann equations.
Note: These equations are frequently written in the more compact form,
{\displaystyle u_{x}=v_{y}}
{\displaystyle v_{x}=-u_{y}}
They may be obtained by noting that if the approach path is on the x-axis,
{\displaystyle \partial f/\partial y=0}
{\displaystyle {\frac {df}{dz}}={\frac {1}{2}}\left({\frac {\partial u}{\partial x}}+i{\frac {\partial v}{\partial x}}\right)}
and that on the y-axis,
{\displaystyle \partial f/\partial x=0}
{\displaystyle {\frac {df}{dz}}={\frac {1}{2}}\left(-i{\frac {\partial u}{\partial y}}+{\frac {\partial v}{\partial y}}\right)}
These equations have far-reaching implications. To get some idea of why this is so, consider that we can take second derivatives to obtain
{\displaystyle u_{xx}+u_{yy}=0}
{\displaystyle v_{xx}+v_{yy}=0}
In other words, u and v satisfy Laplace's equation in 2 dimensions. These functions arise in mathematical physics as scalar potentials in, for example, fluid dynamics. Laplace's equation is also basic to the study of partial differential equations. This is but one indication of the reason for the ubiquity of complex functions in physics.
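These relationships can be checked numerically. The following Python sketch (an illustration, not part of the original article) verifies both the Cauchy-Riemann and Laplace equations for the differentiable function f(z) = z², whose real and imaginary parts are u = x² − y² and v = 2xy, using central finite differences at an arbitrarily chosen point:

```python
# Finite-difference check of the Cauchy-Riemann and Laplace equations for
# f(z) = z^2, i.e. u = x^2 - y^2 and v = 2xy, at an arbitrary point.
def u(x, y): return x * x - y * y
def v(x, y): return 2 * x * y

h = 1e-4
x0, y0 = 0.7, -1.3

def d_dx(g, x, y): return (g(x + h, y) - g(x - h, y)) / (2 * h)
def d_dy(g, x, y): return (g(x, y + h) - g(x, y - h)) / (2 * h)
def d2_dx2(g, x, y): return (g(x + h, y) - 2 * g(x, y) + g(x - h, y)) / h ** 2
def d2_dy2(g, x, y): return (g(x, y + h) - 2 * g(x, y) + g(x, y - h)) / h ** 2

# Cauchy-Riemann: u_x = v_y and v_x = -u_y
assert abs(d_dx(u, x0, y0) - d_dy(v, x0, y0)) < 1e-6
assert abs(d_dx(v, x0, y0) + d_dy(u, x0, y0)) < 1e-6

# Laplace: u and v are harmonic, u_xx + u_yy = 0 and v_xx + v_yy = 0
assert abs(d2_dx2(u, x0, y0) + d2_dy2(u, x0, y0)) < 1e-4
assert abs(d2_dx2(v, x0, y0) + d2_dy2(v, x0, y0)) < 1e-4
print("Cauchy-Riemann and Laplace equations hold for z^2")
```

The same check fails for a non-differentiable function such as f(z) = conjugate(z), for which u = x and v = −y give u_x = 1 but v_y = −1.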
By contrast, the definition of integration in complex analysis involves no surprises. Path integrals and integrals over regions are defined just as they are in the calculus of functions of two real variables. What is different is that the Cauchy-Riemann equations imply that integrals of complex functions have some very special properties. In particular, if a function f is differentiable (in the sense explained above) in a simply connected domain (intuitively, a domain having no "holes" in it), then for any closed curve
{\displaystyle \gamma }
defined in that domain
{\displaystyle \int _{\gamma }\nolimits f\,dz=0.}
It is essential that the domain of definition be simply connected. For example, let
{\displaystyle D=\{z\mid \textstyle {\frac {1}{2}}<|z|<{\frac {3}{2}}\}} and
{\displaystyle f(z)=1/z}
. Then if we define
{\displaystyle \gamma (t)=e^{it}}
where t ranges from 0 to
{\displaystyle 2\pi }
(i.e., we take
{\displaystyle \gamma }
to be the unit circle), then the integral will not be 0.
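This can be seen numerically. The Python sketch below (illustrative, not part of the article) approximates the contour integral over the unit circle as a Riemann sum of f(γ(t)) γ′(t) Δt; for 1/z the result is approximately 2πi rather than 0, while for the everywhere-differentiable function z² it vanishes:

```python
import cmath

# Approximate the contour integral of f over the unit circle
# gamma(t) = e^{it}, 0 <= t <= 2*pi, as a Riemann sum of f(gamma(t)) * gamma'(t) dt.
def contour_integral(f, n=20_000):
    total = 0j
    dt = 2 * cmath.pi / n
    for k in range(n):
        z = cmath.exp(1j * k * dt)   # gamma(t)
        dz = 1j * z                  # gamma'(t)
        total += f(z) * dz * dt
    return total

print(contour_integral(lambda z: 1 / z))   # ~ 2*pi*i: the punctured plane is not simply connected
print(contour_integral(lambda z: z ** 2))  # ~ 0: z^2 is differentiable on the whole plane
```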
If
{\displaystyle \gamma _{1}}
and
{\displaystyle \gamma _{2}}
are two homotopic paths joining a pair of points
{\displaystyle P,Q\in D}
(intuitively, one can be deformed into the other), then
{\displaystyle \int _{\gamma _{1}}\nolimits f\,dz=\int _{\gamma _{2}}\nolimits f\,dz.}
This is commonly expressed by saying that the integrals are path independent, and this is just the condition for the existence of a scalar potential!
Finally, we note that integrals in domains containing singularities (such as 1/z in the above example) can be computed using Cauchy's integral formula
{\displaystyle f(z)={\frac {1}{2\pi i}}\int _{\gamma }\nolimits {\frac {f(\zeta )\,d\zeta }{\zeta -z}}.}
This result lies at the heart of many applications of complex analysis to disciplines ranging from number theory to physics. Its importance would be difficult to overestimate.
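As a concrete illustration (a Python sketch, not part of the article), the formula recovers values of f(z) = e^z at points inside the unit circle from its values on the boundary alone:

```python
import cmath

# Evaluate f(z) for z inside the unit circle using Cauchy's integral formula,
# discretizing the contour integral over gamma(t) = e^{it}.
def cauchy_value(f, z, n=20_000):
    total = 0j
    dt = 2 * cmath.pi / n
    for k in range(n):
        zeta = cmath.exp(1j * k * dt)     # point on the unit circle
        total += f(zeta) / (zeta - z) * (1j * zeta) * dt
    return total / (2j * cmath.pi)

print(cauchy_value(cmath.exp, 0))     # ~ 1, i.e. e^0
print(cauchy_value(cmath.exp, 0.5j))  # ~ e^{i/2}
```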
|
Endwall Film Cooling Effects on Secondary Flows in a Contoured Endwall Nozzle Vane | J. Turbomach. | ASME Digital Collection
Barigozzi, G., Franchini, G., Perdichizzi, A., and Quattrore, M. (April 27, 2010). "Endwall Film Cooling Effects on Secondary Flows in a Contoured Endwall Nozzle Vane." ASME. J. Turbomach. October 2010; 132(4): 041005. https://doi.org/10.1115/1.3192147
The present paper investigates the effects of endwall injection of cooling flow on the aerodynamic performance of a nozzle vane cascade with endwall contouring. Tests have been performed on a seven vane cascade with a geometry typical of a real gas turbine nozzle vane. The cooling scheme consists of four rows of cylindrical holes. Tests have been carried out at low speed
(Ma2is=0.2)
with a low inlet turbulence intensity level (1.0%) and with a coolant to mainstream mass flow ratio varied in the range from 0% (solid endwall) to 2.5%. Energy loss coefficient, secondary vorticity, and outlet angle distributions were computed from five-hole probe measured data. Contoured endwall results, with and without film cooling, were compared with planar endwall data. Endwall contouring was responsible for a significant overall loss decrease, as a result of the reduction in both profile and planar side secondary flow losses; a loss increase on the contoured side was instead observed. As for the planar endwall, coolant injection modifies secondary flows even for the contoured endwall, reducing their intensity, but the changes are less pronounced. Nevertheless, for all the tested injection conditions, secondary losses on the contoured side are always higher than in the planar case, while contoured cascade overall losses are lower. A unique minimum overall loss injection condition was found for both tested geometries, which corresponds to an injected mass flow rate of about 1.0%.
aerodynamics, blades, coolants, cooling, gas turbines, liquid films, nozzles, turbulence, vortices
Cascades (Fluid dynamics), Coolants, Flow (Dynamics), Nozzles, Cooling, Film cooling, Geometry
|
Differential Privacy Guide
Differential Privacy in Immuta
Differential Privacy Policy Guide
Audience: Data Owners, Governors, and Users
Content Summary: This page outlines the theory of differential privacy and provides conceptual information helpful in configuring Differential Privacy Policies in Immuta. It further explains the effects of Immuta's Differential Privacy Policies on query results.
For instructions on how to programmatically create and apply Differential Privacy Policies, see the Differential Privacy Rule Type section of the Policy Handler HTTP API documentation.
Differential privacy is a mathematical framework enabling approximate evaluation of a function over a private database, while limiting the ability of an outside party to make inferences about individual input records. This page will outline the motivation and mathematical underpinnings of differential privacy, as well as provide insights on how to get the most out of Differential Privacy Policies.
Consider a game in which an attacker is provided a pair of databases D_1 and D_2 that differ only by insertion (or deletion) of a single record, and the evaluation of some aggregate, \mathcal{A}, over either D_1 or D_2. This evaluation is denoted \mathcal{A}(D), where D stands for the unknown input database — either D_1 or D_2, as appropriate. The attacker is allowed to examine D_1, D_2, and is given a full specification of \mathcal{A}, so that he or she can independently evaluate \mathcal{A} over D_1, D_2, or any input of their choosing. The goal of the attacker is to guess whether D was D_1 or D_2.

This task is considered difficult if the attacker is forced to guess and, regardless of attack methodology, is unlikely to be right significantly more than 50\% of the time. Intuitively, if the attacker is unable to determine which input database was used to produce the observed result after being given the complete contents of D_1, D_2, and being permitted examination of and experimentation with \mathcal{A}, it must be that the result carries little information about the row in which the databases differ. Further, \mathcal{A} is considered private if its construction ensures that this task is difficult over any pair of adjacent databases.
Difficulty is ensured by requiring that the privacy mechanism respects a certain distributional condition with respect to its output.
Formally, a privacy mechanism, \mathcal{A}, is called (\varepsilon, \delta)-differentially-private if for any pair of databases D_1, D_2 which differ from each other by insertion (or deletion) of a single record, and any S \subseteq \textrm{Range}({\mathcal{A}}),

\Pr[\mathcal{A}(D_1)\in S] \leq e^\varepsilon \cdot \Pr[\mathcal{A}(D_2) \in S] + \delta.
Roughly, when \varepsilon is small and \delta = 0, this condition ensures that there exists no set of outputs of \mathcal{A} that provides a significant advantage in determining whether the privacy mechanism was evaluated over D_1 or D_2. We can quantify the advantage conferred to an adversary hoping to discriminate D_1 from D_2. Let S \subseteq \textrm{Range}(\mathcal{A}). We think of observing the event S whenever \mathcal{A} returns a value contained in S.

More formally, the significance (privacy-loss) of observing the event S under the privacy mechanism \mathcal{A} is

\mathcal{L}^{\mathcal{A}}_{D_1, D_2}(S) := \ln\left(\frac{\Pr[\mathcal{A}(D_1) \in S]}{\Pr[\mathcal{A}(D_2) \in S]}\right).
Thus when \mathcal{A} is (\varepsilon, \delta)-differentially-private, it holds that for any pair of databases D_1, D_2 differing by a single record, the privacy loss \mathcal{L}^{\mathcal{A}}_{D_1, D_2}(S) is no larger than \varepsilon with probability at least 1-\delta.
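This bound can be made concrete for the Laplace mechanism described later on this page. The Python sketch below (illustrative, not Immuta's implementation) computes the privacy loss for a noisy count with sensitivity 1 and noise scale b = 1/\varepsilon over two neighboring databases, and shows that it never exceeds \varepsilon in absolute value:

```python
import math

# Privacy loss of the Laplace mechanism on a count with sensitivity 1:
# the log-ratio of output densities on neighboring databases is bounded
# in absolute value by epsilon = 1/b.
def lap_pdf(x, b):
    return math.exp(-abs(x) / b) / (2 * b)

epsilon = 0.5
b = 1.0 / epsilon
count1, count2 = 100, 101   # true counts on D_1 and D_2 (one extra record)

worst = max(
    abs(math.log(lap_pdf(x - count1, b) / lap_pdf(x - count2, b)))
    for x in [80 + k * 0.25 for k in range(200)]   # grid of possible outputs
)
print(worst)  # ~ 0.5 = epsilon, attained for outputs outside [count1, count2]
```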
The goal of differential privacy is to protect how much an adversary can learn about the underlying data from analysis results. This leads to the following concern: if an analysis can be heavily influenced by the presence of certain items, it may be possible to draw conclusions about the participation of those items by observing the output.
As an example, consider the query SELECT MAX(salary) FROM employees. This query is thought of as sensitive, as it is entirely determined by a single row in the input.
On the other hand, the query SELECT 3 FROM employees GROUP BY true is almost entirely insensitive to data in the employees table. Assuming the employees table must always be non-empty, the result (3) is entirely independent of the data, and revealing it leaks no information about the contents of the database.
It is important to note that the above queries behave differently when the table is empty, returning an empty result-set. This is an important distinction that cannot be ignored in a more generic setting where the employees table is permitted to be empty. In such a setting, the empty result-set would allow an outsider to infer that the true count of the table is 0. Immuta's Differential Privacy Engine ensures that results are returned even when the implied result-set would otherwise be empty.
Sensitivity is a numerical measure which bounds how much an analysis can be influenced by a single row.
We now introduce some notation which helps in being mathematically precise. Let \mathcal{D} denote the set of all databases. For ease of reading, conceptualize \mathcal{D} as the set of all possible instantiations of some table. With this in mind, we can consider each (numerical) query or, more generally, any subsequent quantitative analysis, to be a mathematical function f which assigns each possible table k numerical values. To this end, let f:\mathcal{D}\rightarrow{\mathbb{R}}^k stand for a query.
The \ell_1-sensitivity of f, denoted \Delta(f), is a numerical measure of how much f can change when comparing its value over any pair of databases that differ by the presence of a single record. To this end, we use the notation D_1 \sim D_2 to indicate that databases D_1 and D_2 differ by a single record.
\begin{align}\Delta(f) := & \max_{\substack{D_1, D_2 \in \mathcal{D} \\ D_1 \sim D_2}} \|f(D_1) - f(D_2) \|_1 \\ = & \max_{\substack{D_1, D_2 \in \mathcal{D} \\ D_1 \sim D_2}} \sum_{i=1}^k|f_i(D_1)-f_i(D_2)| \end{align}
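As a concrete illustration, the definition above can be checked by brute force on a toy query over a tiny record universe. This is a sketch for intuition only; a real engine derives sensitivity analytically rather than by enumeration.

```python
import itertools

def l1_sensitivity(f, universe, max_size=3):
    """Brute-force the l1-sensitivity of f over all pairs of neighboring
    databases built from a tiny universe of records (illustration only)."""
    best = 0.0
    # Enumerate all databases with up to max_size records.
    dbs = []
    for n in range(max_size + 1):
        dbs.extend(itertools.combinations(universe, n))
    for db in dbs:
        for record in universe:
            if record not in db:
                neighbor = db + (record,)  # differs by exactly one record
                a, b = f(db), f(neighbor)
                dist = sum(abs(x - y) for x, y in zip(a, b))
                best = max(best, dist)
    return best

# COUNT has sensitivity 1: adding or removing one record moves it by 1.
count_query = lambda db: (len(db),)
print(l1_sensitivity(count_query, universe=(1, 2, 3, 4)))  # -> 1
```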
There are numerous algorithmic mechanisms which attain differential privacy. Internally, Immuta's Differential Privacy Engine employs one of two mechanisms, depending upon the choice of SQL aggregate. This section outlines those mechanisms.
The Laplace Mechanism protects function input by adding noise directly to the analysis results. At first glance it may not be clear under what circumstances adding noise to the output is sufficient to protect the input, if at all. It turns out that doing so is sufficient precisely when the \ell_1-sensitivity of the analysis is bounded.
Let \textrm{Lap}(b) denote the 0-centered Laplace distribution with variance 2b^2. The probability density function for this distribution is f(x;b)=\frac{1}{2b}e^{-\frac{|x|}{b}}.
Theorem [Dwork 2006]
Let \varepsilon > 0, and let f:\mathcal{D} \rightarrow \mathbb{R}^k have finite sensitivity. Then f(x) + (\eta_1, \eta_2, \ldots, \eta_k) is \varepsilon-differentially-private, provided that \eta_1, \eta_2, \ldots, \eta_k are independently sampled from \textrm{Lap}(\Delta(f)/\varepsilon).
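A minimal sketch of the Laplace Mechanism, assuming the sensitivity bound is supplied by the caller. This is an illustration of the theorem above, not Immuta's implementation.

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Release f(D) + Lap(sensitivity/epsilon) noise per coordinate.
    By the theorem above this is epsilon-differentially private
    whenever `sensitivity` bounds the l1-sensitivity of the query."""
    rng = np.random.default_rng() if rng is None else rng
    answer = np.atleast_1d(np.asarray(true_answer, dtype=float))
    scale = sensitivity / epsilon          # b = Delta(f) / epsilon
    noise = rng.laplace(loc=0.0, scale=scale, size=answer.shape)
    return answer + noise

# A COUNT query (sensitivity 1) released with epsilon = 0.5:
noisy = laplace_mechanism(true_answer=1042, sensitivity=1.0, epsilon=0.5)
```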
Sample and Aggregate
Sample and Aggregate provides a method for differentially-private evaluation of sensitive functions. The idea is to randomly partition the data and then evaluate f
over each partition. The evaluations are aggregated together into the final analysis via a differentially-private aggregation. Provided that f
is stable under subsampling, this strategy provides a differentially-private estimate for the evaluation f
over the database, even when f
is sensitive.
Within Immuta, Sample and Aggregate is carried out using the method of Nissim et al. [NSS07]
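The overall structure of Sample and Aggregate can be sketched as follows. Note that the final aggregation below uses a plain median purely as a stand-in to show the shape of the algorithm; a real implementation (e.g. that of Nissim et al.) uses a differentially-private smooth-sensitivity median, and the sketch as written is not itself differentially private.

```python
import numpy as np

def sample_and_aggregate(rows, f, num_partitions, rng=None):
    """Evaluate a sensitive function f on random disjoint partitions of
    the data and combine the per-partition answers with a robust
    aggregator. np.median stands in for the DP median used in practice."""
    rng = np.random.default_rng() if rng is None else rng
    rows = np.asarray(rows, dtype=float)
    shuffled = rng.permutation(rows)                 # random partition
    parts = np.array_split(shuffled, num_partitions)
    per_part = [f(p) for p in parts if len(p) > 0]   # evaluate per block
    return float(np.median(per_part))                # aggregate

est = sample_and_aggregate(range(1000), f=np.mean, num_partitions=10)
```

Provided f is stable under subsampling, each partition's answer is close to the full-data answer, so the (noisy) median of the partition answers is a good estimate even though f itself may be highly sensitive.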
It is natural to wonder what level of privacy protection is afforded to individual rows in the database when multiple independent evaluations of differentially-private mechanisms are available to an outsider. Effectively, neither the net privacy loss nor the combined tolerance of failure can exceed the sum of the respective individual values across all releases. This statement is made precise below:
Theorem [DR14]
Let \mathcal{A}_1, \ldots, \mathcal{A}_k denote a sequence of k privacy mechanisms where the i-th privacy mechanism \mathcal{A}_i satisfies \left(\varepsilon_i, \delta_i\right)-differential privacy. Then, given any database D, the release of \left\{(i, \mathcal{A}_i(D)) : 1 \leq i \leq k \right\} satisfies \left(\sum_{i=1}^k \varepsilon_i, \sum_{i=1}^k \delta_i\right)-differential privacy.
As a special case, when the database is partitioned into k
disjoint subsets, sequential evaluation over separate partitions composes in a parallel manner. In other words, neither privacy-loss nor failure tolerance accumulate. The precise statement follows:
Let D_1, D_2, \ldots, D_k be a partition of a database D. Let \mathcal{A}_1, \ldots, \mathcal{A}_k denote a sequence of k privacy mechanisms where the i-th privacy mechanism \mathcal{A}_i satisfies \left(\varepsilon_i, \delta_i\right)-differential privacy. Then the release of \left\{(i, \mathcal{A}_i(D_i)) : 1 \leq i \leq k \right\} satisfies \left(\max_{i=1}^k \varepsilon_i, \max_{i=1}^k \delta_i\right)-differential privacy.
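Both composition rules amount to simple bookkeeping over the (\varepsilon, \delta) pairs of the individual releases; a sketch:

```python
def compose_sequential(releases):
    """Releases over the SAME data: (eps, delta) pairs simply add
    (basic sequential composition)."""
    eps = sum(e for e, _ in releases)
    delta = sum(d for _, d in releases)
    return eps, delta

def compose_parallel(releases):
    """Releases over DISJOINT partitions of the data: take the max of
    each parameter instead (parallel composition)."""
    return max(e for e, _ in releases), max(d for _, d in releases)

# Three (0.1, 1e-6)-DP queries against the same table cost ~(0.3, 3e-6);
# the same three queries against disjoint partitions cost only (0.1, 1e-6).
sequential = compose_sequential([(0.1, 1e-6)] * 3)
parallel = compose_parallel([(0.1, 1e-6)] * 3)
```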
Internally, Immuta applies differential privacy at the aggregate level where the choice of privacy mechanism is a function of the aggregate.
In this setting, a query result can be thought of as a sequential composition of differentially-private outputs. Since the \varepsilon value on the policy is treated as a privacy-loss tolerance for the entire query result, the effective \varepsilon of each column is divided by k, where k is the number of columns in the result-set. It should be noted that the \delta parameter is not scaled, resulting in an (\varepsilon, k\cdot\delta)-differentially private release.
Queries subject to a Differential Privacy Policy have the following restrictions:
Only aggregates can be queried
Only WHERE clauses can be used to filter (no GROUP BY)
When a Differential Privacy policy is in effect, the AVG SQL aggregate returns a (k^{-1}\varepsilon, \delta)-differentially-private average.
Evaluation occurs via the Sample and Aggregate method, as follows:
First, the result-set is randomly partitioned into subsets (samples).
Next, an average (the arithmetic mean) is computed for each random partition element.
Finally, a differentially-private estimate of the average is obtained via a (k^{-1}\varepsilon, \delta)-smoothed differentially-private median of the partition-wise averages.
When a Differential Privacy policy is in effect, the COUNT SQL aggregate returns a (k^{-1}\varepsilon, 0)-differentially-private count.
Evaluation occurs via the Laplace Mechanism method.
Note that the \ell_1-sensitivity of the COUNT aggregate is 1 since, in the worst case, adding or removing a single record changes the count of the filtered result-set by at most one. Thus, employing the Laplace Mechanism with a sensitivity of 1 provides (\varepsilon, 0)-differential-privacy.
Since this method involves adding noise sampled from \textrm{Lap}(k/\varepsilon) to the true count, the expected error is roughly the standard deviation of the noise distribution, \pm \sqrt{2}k/\varepsilon.
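A sketch of this noisy count, with the per-column budget split described earlier. This is illustrative only, not Immuta's implementation.

```python
import numpy as np

def dp_count(true_count, epsilon, num_columns, rng=None):
    """COUNT under a Differential Privacy policy, per the text:
    COUNT has l1-sensitivity 1, and the per-column budget for a
    k-column result-set is epsilon/k, so the noise scale is k/epsilon."""
    rng = np.random.default_rng() if rng is None else rng
    scale = num_columns / epsilon          # b = k / epsilon
    return true_count + rng.laplace(0.0, scale)

# Expected error is roughly the noise standard deviation, sqrt(2)*k/eps.
k, eps = 4, 1.0
expected_error = np.sqrt(2) * k / eps
noisy = dp_count(true_count=1042, epsilon=eps, num_columns=k)
```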
When a Differential Privacy policy is in effect, the MAX SQL aggregate returns a (k^{-1}\varepsilon, \delta)-differentially-private maximum.
Evaluation occurs via the Sample and Aggregate method, as follows:
First, the result-set is randomly partitioned into subsets (samples).
Next, the maximum is computed for each random partition element.
Finally, a differentially-private estimate of the maximum is obtained via a (k^{-1}\varepsilon, \delta)-smoothed differentially-private median of the partition-wise maximums.
When a Differential Privacy policy is in effect, the MIN SQL aggregate returns a (k^{-1}\varepsilon, \delta)-differentially-private minimum.
Evaluation occurs via the Sample and Aggregate method, as follows:
First, the result-set is randomly partitioned into subsets (samples).
Next, the minimum is computed for each random partition element.
Finally, a differentially-private estimate of the minimum is obtained via a (k^{-1}\varepsilon, \delta)-smoothed differentially-private median of the partition-wise minimums.
When a Differential Privacy policy is in effect, the SUM SQL aggregate returns a (k^{-1}\varepsilon, \delta)-differentially-private sum.
This aggregate assumes that the column takes values in a bounded interval: \left[ R_\min, R_\max \right].
Evaluation occurs via the Laplace Mechanism method with a sensitivity of \max(|R_\min|, |R_\max|).
Since this method involves adding noise sampled from \textrm{Lap}(k\max(|R_\min|, |R_\max|)/\varepsilon) to the true sum, the expected error is roughly the standard deviation, \pm \sqrt{2}k\max(|R_\min|, |R_\max|)/\varepsilon.
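A sketch of the bounded-interval sum under these assumptions. The clipping step enforces the interval assumption on the inputs; illustrative only, not Immuta's implementation.

```python
import numpy as np

def dp_sum(values, r_min, r_max, epsilon, num_columns=1, rng=None):
    """SUM under a Differential Privacy policy, per the text: values are
    assumed to lie in [r_min, r_max], so one record changes the sum by
    at most max(|r_min|, |r_max|); noise scale is k * sensitivity / eps."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = np.clip(values, r_min, r_max)   # enforce the bounded interval
    sensitivity = max(abs(r_min), abs(r_max))
    scale = num_columns * sensitivity / epsilon
    return float(np.sum(clipped) + rng.laplace(0.0, scale))

noisy_total = dp_sum([12.0, 48.5, 7.25], r_min=0.0, r_max=100.0, epsilon=1.0)
```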
[DKM06]: C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, M. Naor. "Our Data, Ourselves: Privacy Via Distributed Noise Generation." Advances in Cryptology - EUROCRYPT 2006
[NSS07]: K. Nissim, S. Raskhodnikova, A. D. Smith. “Smooth sensitivity and sampling in private data analysis.” Proceedings of the 39th Annual ACM Symposium on Theory of Computing.
[DR14]: C. Dwork, A. Roth. “The Algorithmic Foundations of Differential Privacy.” Foundations and Trends in Theoretical Computer Science. Vol 9.
Previous Custom WHERE Clause Functions
Next External Masking Interface
|
What Is the Information Coefficient (IC)?
The information coefficient (IC) is a measure used to evaluate the skill of an investment analyst or an active portfolio manager. The information coefficient shows how closely the analyst's financial forecasts match actual financial results. The IC can range from +1.0 to -1.0, with -1 indicating that the analyst's forecasts are always wrong, 0 indicating no relation to the actual results, and +1 indicating that the analyst's forecasts perfectly matched actual results.
The information coefficient (IC) is a measure used to evaluate the skill of an investment analyst or active portfolio manager.
An IC of +1.0 indicates a perfect prediction of actual returns, while an IC of 0.0 indicates no linear relationship. An IC of -1.0 indicates that the analyst always fails at making a correct prediction.
The IC is not to be confused with the Information Ratio (IR). The IR is a measure of an investment manager's skill, comparing a manager's excess returns to the amount of risk taken.
The Formula for the IC Is
\begin{aligned} &\text{IC} = (2 \times \text{Proportion Correct}) - 1 \\ &\textbf{where:} \\ &\text{Proportion Correct} = \text{Proportion of predictions made} \\ &\text{correctly by the analyst} \\ \end{aligned}
Explaining the Information Coefficient
The information coefficient describes the correlation between predicted and actual stock returns, sometimes used to measure the contribution of a financial analyst. An IC of +1.0 indicates a perfect linear relationship between predicted and actual returns, while an IC of 0.0 indicates no linear relationship. An IC of -1.0 indicates that the analyst always fails at making a correct prediction.
An information coefficient (IC) score near +1.0 indicates that the analyst has great skill in forecasting. But, in reality, if the definition of "correct" is that the analyst's prediction matched the direction (up or down) of actual results, then the odds of getting the forecast right are 50/50. So even an analyst with no skill whatsoever could be expected to have an IC of around 0, meaning that half of the forecasts were right and half were wrong. A score close to 0 reveals that the analyst's forecasting skills are no better than results that could be achieved by chance, suggesting that ICs approaching -1 are rare.
The IC and the IR are both components of the Fundamental Law of Active Management, which states that a manager's performance (IR) depends on skill level (IC) and its breadth, or how often it is used.
Example of the Information Coefficient
As a hypothetical example, if an investment analyst made two predictions and got two right, the information coefficient would be:
\begin{aligned} &\text{IC} = (2 \times 1.0) - 1 = +1.0 \\ \end{aligned}
If an analyst's predictions were only half of the time right, then:
\begin{aligned} &\text{IC} = (2 \times 0.5) - 1 = 0.0 \\ \end{aligned}
If, however, none of the predictions were right, then:
\begin{aligned} &\text{IC} = (2 \times 0.0) - 1 = -1.0 \\ \end{aligned}
Limitations of the Information Coefficient
The IC is only meaningful for an analyst who makes a large number of predictions, because with only a small number of predictions, random chance may explain a great deal of the results. So if there are only two predictions made and both are right, the information coefficient is +1.0. If, however, the IC is still at or close to +1.0 after several dozen predictions have been made, then it is far more attributable to skill than to chance.
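The formula above is straightforward to compute; a small sketch, treating "correct" as matching the sign (direction) of the actual return, which is one common convention:

```python
def information_coefficient(predictions, actuals):
    """IC = 2 * (proportion of directionally correct forecasts) - 1.
    `predictions` and `actuals` are sequences of signed returns; a
    forecast counts as correct when it matches the sign of the outcome."""
    if len(predictions) != len(actuals):
        raise ValueError("series must be the same length")
    correct = sum(
        1 for p, a in zip(predictions, actuals) if (p >= 0) == (a >= 0)
    )
    return 2 * correct / len(predictions) - 1

# Two forecasts, both directionally right -> IC = +1.0
print(information_coefficient([0.02, -0.01], [0.05, -0.03]))  # -> 1.0
```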
|
A partly hanging uniform chain of length l is resting on a rough surface
As shown in the figure, a block of 2 kg at one end and another of 3 kg at the other end of a light string are connected. If the system remains stationary, find the magnitude and direction of the frictional force. (g = 10 m/s^2)
f\left(x\right)=\left\{\begin{array}{l}\left[x\right] \text{ if }-3<x\le -1\\ |x| \text{ if }-1<x<1\\ |\left[-x\right]| \text{ if }1\le x\le 3\end{array}\right. \text{ then }\left\{x:f\left(x\right)\ge 0\right\}=
|
Plateaued Functions - Boolean Functions
A Boolean function {\displaystyle f:\mathbb {F} _{2^{n}}\rightarrow \mathbb {F} _{2}} is said to be plateaued if its Walsh transform takes at most three distinct values, viz. {\displaystyle 0} and {\displaystyle \pm \mu } for some positive integer {\displaystyle \mu } called the amplitude of {\displaystyle f}.
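The definition is easy to test exhaustively for small n; a small Python sketch, where n-bit integers encode the elements of {\displaystyle \mathbb {F} _{2}^{n}}:

```python
def walsh_spectrum(f, n):
    """Walsh transform W_f(w) = sum_x (-1)^(f(x) + w.x) over F_2^n,
    with x and w represented as n-bit integers."""
    def dot(u, v):
        return bin(u & v).count("1") % 2  # inner product mod 2
    return [
        sum((-1) ** (f(x) ^ dot(w, x)) for x in range(2 ** n))
        for w in range(2 ** n)
    ]

def is_plateaued(f, n):
    """f is plateaued iff its Walsh values lie in {0, +mu, -mu}."""
    values = {abs(v) for v in walsh_spectrum(f, n)} - {0}
    return len(values) <= 1

# x1*x2 is bent on F_2^2, hence plateaued with amplitude 2:
f = lambda x: (x & 1) & ((x >> 1) & 1)
print(is_plateaued(f, 2))  # -> True
```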
This notion can be naturally extended to vectorial Boolean functions by applying it to each component. More precisely, if {\displaystyle F} is an {\displaystyle (n,m)}-function, we say that {\displaystyle F} is plateaued if all its component functions {\displaystyle u\cdot F}, {\displaystyle u\neq 0}, are plateaued. If all of the component functions are plateaued and have the same amplitude, we say that {\displaystyle F} is plateaued with single amplitude.
The characterization by means of the derivatives below suggests the following definition: a v.B.f. {\displaystyle F} is said to be strongly-plateaued if, for every {\displaystyle a} and every {\displaystyle v}, the size of the set {\displaystyle \{b\in \mathbb {F} _{2}^{n}:D_{a}D_{b}F(x)=v\}} does not depend on {\displaystyle x}, or, equivalently, the size of the set {\displaystyle \{b\in \mathbb {F} _{2}^{n}:D_{a}F(b)=D_{a}F(x)+v\}} does not depend on {\displaystyle x}.
Consider functions {\displaystyle f_{\phi ,h}} of the form {\displaystyle f_{\phi ,h}(x,y)=x\cdot \phi (y)+h(y)}, where {\displaystyle x\in \mathbb {F} _{2}^{r},y\in \mathbb {F} _{2}^{s}}, {\displaystyle r} and {\displaystyle s} are any positive integers, {\displaystyle n=r+s}, {\displaystyle \phi :\mathbb {F} _{2}^{s}\rightarrow \mathbb {F} _{2}^{r}} and {\displaystyle h:\mathbb {F} _{2}^{s}\rightarrow \mathbb {F} _{2}}. The Walsh transform of {\displaystyle f_{\phi ,h}} is {\displaystyle W_{f_{\phi ,h}}(a,b)=2^{r}\sum _{y\in \phi ^{-1}(a)}(-1)^{b\cdot y+h(y)}} for every {\displaystyle (a,b)}. If {\displaystyle \phi } is injective, resp. takes each value in its image set two times, then {\displaystyle f_{\phi ,h}} is plateaued of amplitude {\displaystyle 2^{r}}, resp. {\displaystyle 2^{r+1}}.
Using the fact that a Boolean function {\displaystyle f} is plateaued if and only if the expression {\displaystyle \sum _{a,b\in \mathbb {F} _{2}^{n}}(-1)^{D_{a}D_{b}f(x)}} does not depend on {\displaystyle x\in \mathbb {F} _{2}^{n}}, one obtains the following characterization. Let {\displaystyle F} be an {\displaystyle (n,m)}-function. Then:
F is plateaued if and only if, for every {\displaystyle v\in \mathbb {F} _{2}^{m}}, the size of the set {\displaystyle \{(a,b)\in (\mathbb {F} _{2}^{n})^{2}:D_{a}D_{b}F(x)=v\}} does not depend on {\displaystyle x}.
F is plateaued with single amplitude if and only if the size of the set depends neither on {\displaystyle x} nor on {\displaystyle v\in \mathbb {F} _{2}^{m}} for {\displaystyle v\neq 0}.
For every plateaued {\displaystyle F}, the value distribution of {\displaystyle D_{a}D_{b}F(x)} equals that of {\displaystyle D_{a}F(b)+D_{a}F(x)} when {\displaystyle (a,b)} ranges over {\displaystyle (\mathbb {F} _{2}^{n})^{2}}.
if two plateaued functions
{\displaystyle F,G}
have the same distribution, then all of their component functions
{\displaystyle u\cdot F,u\cdot G}
For a power function {\displaystyle F(x)=x^{d}} and any {\displaystyle \lambda \neq 0}, we have {\displaystyle |\{(a,b)\in \mathbb {F} _{2^{n}}^{2}:D_{a}F(b)+D_{a}F(x)=v\}|=|\{(a,b)\in \mathbb {F} _{2^{n}}^{2}:D_{a}F(b)+D_{a}F(x/\lambda )=v/\lambda ^{d}\}|.} Consequently, {\displaystyle F} is plateaued if and only if, for every {\displaystyle v\in \mathbb {F} _{2^{n}}}, {\displaystyle |\{(a,b)\in \mathbb {F} _{2^{n}}^{2}:D_{a}F(b)+D_{a}F(1)=v\}|=|\{(a,b)\in \mathbb {F} _{2^{n}}^{2}:D_{a}F(b)+D_{a}F(0)=v\}|;} and {\displaystyle F} is plateaued with single amplitude if and only if the size above does not, in addition, depend on {\displaystyle v\neq 0}.
Let {\displaystyle F} be an {\displaystyle (n,m)}-function. Then {\displaystyle F} is plateaued with all components unbalanced if and only if, for every {\displaystyle v,x\in \mathbb {F} _{2}^{n}}, {\displaystyle |\{(a,b)\in (\mathbb {F} _{2}^{n})^{2}:D_{a}D_{b}F(x)=v\}|=|\{(a,b)\in (\mathbb {F} _{2}^{n})^{2}:F(a)+F(b)=v\}|.}
{\displaystyle F} is plateaued with single amplitude if and only if this value does not, in addition, depend on {\displaystyle v} for {\displaystyle v\neq 0}.
{\displaystyle {\rm {Im}}(D_{a}F)}
{\displaystyle F}
For a Boolean function {\displaystyle f}, let {\displaystyle {\Delta _{f}}(a)=\sum _{x\in \mathbb {F} _{2}^{n}}(-1)^{f(x)+f(x+a)}} denote its autocorrelation function. An {\displaystyle n}-variable Boolean function {\displaystyle f} is plateaued if and only if, for every {\displaystyle x\in \mathbb {F} _{2}^{n}}, {\displaystyle 2^{n}\sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{f}(a)\Delta _{f}(a+x)=\left(\sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{f}^{2}(a)\right)\Delta _{f}(x).}
{\displaystyle (n,m)}
{\displaystyle F}
{\displaystyle x\in \mathbb {F} _{2}^{n},u\in \mathbb {F} _{2}^{m}}
{\displaystyle 2^{n}\sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{u\cdot F}(a)\Delta _{u\cdot F}(a+x)=\left(\sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{u\cdot F}^{2}(a)\right)\Delta _{u\cdot F}(x).}
{\displaystyle F}
is plateaued with single amplitude if and only if, for every
{\displaystyle x\in \mathbb {F} _{2}^{n},u\in \mathbb {F} _{2}^{m}}
{\displaystyle \sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{u\cdot F}(a)\Delta _{u\cdot F}(a+x)=\mu ^{2}\Delta _{u\cdot F}(x).}
{\displaystyle F}
{\displaystyle x,v\in \mathbb {F} _{2}^{n}}
{\displaystyle 2^{n}|\{(a,b,c)\in (\mathbb {F} _{2}^{n})^{3}:F(a)+F(b)+F(c)+F(a+b+c+x)=v\}|=|\{(a,b,c,d)\in (\mathbb {F} _{2}^{n})^{4}:F(a)+F(b)+F(c)+F(a+b+c)+F(d)+F(d+x)=v\}|.}
A Boolean function {\displaystyle f:\mathbb {F} _{2^{n}}\rightarrow \mathbb {F} _{2}} is plateaued if and only if, for every {\displaystyle 0\neq \alpha \in \mathbb {F} _{2}^{n}}, {\displaystyle \sum _{w\in \mathbb {F} _{2}^{n}}W_{f}(w+\alpha )W_{f}^{3}(w)=0.}
An {\displaystyle (n,m)}-function {\displaystyle F} is plateaued if and only if, for every {\displaystyle u\in \mathbb {F} _{2}^{m}} and every {\displaystyle 0\neq \alpha \in \mathbb {F} _{2}^{n}}, {\displaystyle \sum _{w\in \mathbb {F} _{2}^{n}}W_{F}(w+\alpha ,u)W_{F}^{3}(w,u)=0.}
{\displaystyle F} is plateaued with single amplitude if and only if, in addition, the sum {\displaystyle \sum _{w\in \mathbb {F} _{2}^{n}}W_{F}^{4}(w,u)} does not depend on {\displaystyle u} for {\displaystyle u\neq 0}.
A Boolean function {\displaystyle f:\mathbb {F} _{2^{n}}\rightarrow \mathbb {F} _{2}} is plateaued if and only if, for every {\displaystyle b\in \mathbb {F} _{2}^{n}}, {\displaystyle \sum _{a\in \mathbb {F} _{2}^{n}}W_{f}^{4}(a)=2^{n}(-1)^{f(b)}\sum _{a\in \mathbb {F} _{2}^{n}}(-1)^{a\cdot b}W_{f}^{3}(a).}
An {\displaystyle (n,m)}-function {\displaystyle F} is plateaued if and only if, for every {\displaystyle b\in \mathbb {F} _{2}^{n}} and {\displaystyle u\in \mathbb {F} _{2}^{m}}, {\displaystyle \sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{4}(a,u)=2^{n}(-1)^{u\cdot F(b)}\sum _{a\in \mathbb {F} _{2}^{n}}(-1)^{a\cdot b}W_{F}^{3}(a,u).}
{\displaystyle F} is plateaued with single amplitude if and only if the two sums above do not depend on {\displaystyle u} for {\displaystyle u\neq 0}.
Any Boolean function {\displaystyle f} in {\displaystyle n} variables satisfies {\displaystyle \left(\sum _{a\in \mathbb {F} _{2}^{n}}W_{f}^{4}(a)\right)^{2}\leq 2^{2n}\left(\sum _{a\in \mathbb {F} _{2}^{n}}W_{f}^{6}(a)\right),} with equality if and only if {\displaystyle f} is plateaued.
Every {\displaystyle (n,m)}-function {\displaystyle F} satisfies {\displaystyle \sum _{u\in \mathbb {F} _{2}^{m}}\left(\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{4}(a,u)\right)^{2}\leq 2^{2n}\sum _{u\in \mathbb {F} _{2}^{m}}\left(\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{6}(a,u)\right),} with equality if and only if {\displaystyle F} is plateaued. In addition, every {\displaystyle (n,m)}-function satisfies {\displaystyle \sum _{u\in \mathbb {F} _{2}^{m}}\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{4}(a,u)\leq 2^{n}\sum _{u\in \mathbb {F} _{2}^{m}}{\sqrt {\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{6}(a,u)}},} with equality if and only if {\displaystyle F} is plateaued.
|
Isospectral commuting variety, the Harish-Chandra \mathbf{D}-module, and principal nilpotent pairs
15 August 2012
Let \mathfrak{g} be a complex reductive Lie algebra with Cartan algebra \mathfrak{t}. Hotta and Kashiwara defined a holonomic \mathcal{D}-module \mathcal{M} on \mathfrak{g}×\mathfrak{t}, called the Harish-Chandra module.
We relate gr\mathcal{M}, an associated graded module with respect to a canonical Hodge filtration on \mathcal{M}, to the isospectral commuting variety, a subvariety of \mathfrak{g}×\mathfrak{g}×\mathfrak{t}×\mathfrak{t} which is a ramified cover of the variety of pairs of commuting elements of \mathfrak{g}. Our main result establishes an isomorphism of gr\mathcal{M} with the structure sheaf of {\mathfrak{X}}_{norm}, the normalization of the isospectral commuting variety. We deduce, using Saito’s theory of Hodge \mathcal{D}-modules, that the scheme {\mathfrak{X}}_{norm} is Cohen–Macaulay and Gorenstein. This confirms a conjecture of M. Haiman.
Associated with any principal nilpotent pair in \mathfrak{g} there is a finite subscheme of {\mathfrak{X}}_{norm}. The corresponding coordinate ring is a bigraded finite-dimensional Gorenstein algebra that affords the regular representation of the Weyl group. The socle of that algebra is a 1-dimensional space generated by a remarkable W-harmonic polynomial on \mathfrak{t}×\mathfrak{t}. In the special case where \mathfrak{g}=\mathfrak{g}{\mathfrak{l}}_{n} the above algebras are closely related to the n!-theorem of Haiman, and our W-harmonic polynomial reduces to the Garsia–Haiman polynomial. Furthermore, in the \mathfrak{g}{\mathfrak{l}}_{n}-case, the sheaf gr\mathcal{M} gives rise to a vector bundle on the Hilbert scheme of n points in {\mathbb{C}}^{2} that turns out to be isomorphic to the Procesi bundle. Our results were used by I. Gordon to obtain a new proof of positivity of the Kostka–Macdonald polynomials established earlier by Haiman.
Victor Ginzburg. “Isospectral commuting variety, the Harish-Chandra \mathbf{D}-module, and principal nilpotent pairs.” Duke Math. J. 161 (11), 2023–2111, 15 August 2012. https://doi.org/10.1215/00127094-1699392
|
Novel Features of Classical Electrodynamics and Their Connection to the Elementary Charge, Energy Density of Vacuum and Heisenberg’s Uncertainty Principle—Review and Consolidation
1Department of Engineering Sciences, Uppsala University, Uppsala, Sweden
The paper provides a review and consolidation of the results pertinent to the energy and action associated with electromagnetic radiation obtained using classical electrodynamics and published in several journal papers. The results presented in those papers are based on three systems that generate electromagnetic radiation, namely, frequency domain antennas, time domain antennas and decelerating (or accelerating) charged elementary particles. In the case of radiation generated by a frequency domain antenna, the energy dissipated as radiation within half a period, U, satisfies the order of magnitude inequality
U\ge h\nu \to q\ge e
where q is the magnitude of the oscillating charge in the antenna, e is the elementary charge,
\nu
is the frequency and h is the Planck constant. In the case of transient radiation fields generated by time domain antennas or the radiation emitted by decelerating (or accelerating) charged elementary particles, the energy dissipated by the system as radiation satisfies the order of magnitude inequality
U{\tau }_{r}\ge h/4\text{π}\to q\ge e
where U is the energy dissipated as radiation by the system,
{\tau }_{r}
is the duration of the energy emission and q is either the charge in the current pulse in the case of the time domain antenna or the charge of the elementary particle giving rise to the radiation. These results are derived while adhering strictly to the principles of classical electrodynamics alone. These results were interpreted in different papers in different ways using different assumptions. In this paper, we provide a unified interpretation of the results; combining them with two simple quantum mechanical concepts, expressions for the elementary charge as a function of other natural constants and for the energy density of vacuum are derived. The expressions predict the elementary charge to within about 1%.
The main quantitative results, consolidated from the three systems, are as follows.
Frequency domain antenna. The energy dissipated as radiation within half a period is
{U}_{med}=\frac{{q}_{0}^{2}\pi \nu }{4{\epsilon }_{0}c}\left\{\gamma +\mathrm{ln}\left(4\pi L/\lambda \right)\right\},
where \gamma is Euler's constant, {q}_{0} is the peak oscillating charge, \lambda the wavelength and \nu the frequency. Replacing the lengths by their extreme physical bounds, the radius of the observable universe {R}_{\infty } and the Bohr radius {a}_{0}, gives the maximum
{U}_{\mathrm{max}}=\frac{{q}_{0}^{2}\pi \nu }{4{\epsilon }_{0}c}\left\{\gamma +\mathrm{ln}\left(4\pi {R}_{\infty }/{a}_{0}\right)\right\}.
Setting {U}_{\mathrm{max}} equal to the photon energy h\nu and solving for the charge yields
{q}_{0}=\pm \sqrt{\frac{4{\epsilon }_{0}hc}{\pi \left\{\gamma +\mathrm{ln}\left(4\pi {R}_{\infty }/{a}_{0}\right)\right\}}}.
Estimating {R}_{\infty }={c}^{2}\sqrt{3/\left(8\pi G{\rho }_{\Lambda }\right)}, where {\rho }_{\Lambda }\approx 6\times {10}^{-10} J/m^3 is the energy density of vacuum, gives {R}_{\infty }\approx 1.55\times {10}^{26} m and
{q}_{0}=\pm \sqrt{\frac{4{\epsilon }_{0}hc}{\pi \left\{\gamma +\mathrm{ln}\left[\frac{4\pi {c}^{2}}{{a}_{0}}\sqrt{\frac{3}{8\pi G{\rho }_{\Lambda }}}\right]\right\}}}=\pm 1.603\times {10}^{-19} C,
in close agreement with the elementary charge. This is the quantitative content of U\ge h\nu \to q\ge e.
Time domain antenna. For a Gaussian current pulse proportional to {\text{e}}^{-{t}^{2}/2{\sigma }^{2}}, with \eta =\tau /\left(L/c\right), the radiated energy is
U=\frac{{q}^{2}}{8{\pi }^{3/2}\sigma {\epsilon }_{0}c}\mathrm{ln}\left(\frac{2}{\eta }\right).
The radiated power varies as {\text{e}}^{-{t}^{2}/2{\sigma }_{r}^{2}} with {\sigma }_{r}=\sigma /\sqrt{2}, so the duration of emission is {\tau }_{r}\approx 4{\sigma }_{r}={2}^{3/2}\sigma and the action associated with the emission is
A={\tau }_{r}U=\frac{{2}^{3/2}{q}^{2}}{8{\pi }^{3/2}{\epsilon }_{0}c}\mathrm{ln}\left(\frac{2L}{\tau c}\right).
With \tau c\approx a, where a is a length scale of the radiating system bounded below by {a}_{0}, and with L bounded above by {R}_{\infty }, the maximum action is
{A}_{m}=\frac{{2}^{3/2}{q}^{2}}{8{\pi }^{3/2}{\epsilon }_{0}c}\mathrm{ln}\left(\frac{2{R}_{\infty }}{{a}_{0}}\right).
Setting {A}_{m}=h/4\pi gives
{q}_{0}=\pm \sqrt{\frac{h{\pi }^{1/2}{\epsilon }_{0}c}{{2}^{1/2}}\left[1/\mathrm{ln}\left(\frac{2{R}_{\infty }}{{a}_{0}}\right)\right]}=\pm 1.61\times {10}^{-19}\text{ C},
i.e. A\ge h/4\pi \to q\ge e.
Decelerating charged particle. For a charged particle brought to rest from speed v=\beta c, the radiated energy is
U=\frac{{q}^{2}{\beta }^{2}}{16{\pi }^{3/2}{\epsilon }_{0}c\sigma }\left[\frac{1+{\beta }^{2}}{{\beta }^{3}}\mathrm{ln}\left(\frac{1+\beta }{1-\beta }\right)-\frac{2}{{\beta }^{2}}\right],
and, with {\tau }_{r}={2}^{3/2}\sigma, the action is
A={\tau }_{r}U=\frac{{q}^{2}{\beta }^{2}{2}^{3/2}}{16{\pi }^{3/2}{\epsilon }_{0}c}\left[\frac{\left(1+{\beta }^{2}\right)}{{\beta }^{3}}\mathrm{ln}\left(\frac{1+\beta }{1-\beta }\right)-\frac{2}{{\beta }^{2}}\right].
For relativistic speeds, \left(1-\beta \right)\ll 1, this reduces to
A=\frac{{q}^{2}{2}^{3/2}}{8{\pi }^{3/2}{\epsilon }_{0}c}\left[\mathrm{ln}\left(\frac{2}{1-\beta }\right)-1\right].
Bounding the speed using {\left(1-\beta \right)}_{\mathrm{min}}=\frac{3\sqrt{3}}{8\pi }\frac{h}{m{R}_{\infty }c} gives the maximum action
{A}_{m}=\frac{{q}^{2}{2}^{3/2}}{8{\pi }^{3/2}{\epsilon }_{0}c}\left[\mathrm{ln}\left(\frac{16\pi m{R}_{\infty }c}{3\sqrt{3}h}\right)-1\right],
and setting {A}_{m}=h/4\pi , with m={m}_{e}, yields {q}_{0}\approx \pm 1.58\times {10}^{-19}\text{ C}.
Unified interpretation. In all three cases the order-of-magnitude results U\ge h\nu \to q\ge e and A\ge h/4\pi \to q\ge e hold, with the equality A=h/4\pi corresponding to q\approx e. Taking {q}_{0}=e in the frequency-domain relation gives the elementary charge in terms of other natural constants,
e=\sqrt{\frac{4{\epsilon }_{0}hc}{\pi \left\{\gamma +\mathrm{ln}\left[\frac{4\pi {c}^{2}}{{a}_{0}}\sqrt{\frac{3}{8\pi G{\rho }_{\Lambda }}}\right]\right\}}},
the equivalent expression for the fine structure constant,
\alpha =\frac{2}{\pi \left\{\gamma +\mathrm{ln}\left[\frac{4\pi {R}_{\infty }}{{a}_{0}}\right]\right\}},
and, solved instead for the energy density of vacuum,
{\rho }_{\Lambda }=\frac{3}{8\pi G{\left\{\frac{{a}_{0}}{4\pi {c}^{2}}\mathrm{exp}\left[\frac{4{\epsilon }_{0}hc}{{e}^{2}\pi }-\gamma \right]\right\}}^{2}}\approx 4\times {10}^{-10}\text{ J/m}^{3},
to be compared with the observed value of about 6\times {10}^{-10}\text{ J/m}^{3}.
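The frequency-domain expression for the elementary charge can be checked numerically with standard constant values; a sketch, where the values of a_0 and \rho_\Lambda are those quoted in the paper:

```python
import math

# Physical constants (SI units).
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
h = 6.62607015e-34        # Planck constant, J s
c = 2.99792458e8          # speed of light, m/s
G = 6.67430e-11           # gravitational constant, m^3/(kg s^2)
gamma = 0.5772156649      # Euler's constant
a0 = 5.29177e-11          # Bohr radius, m
rho_lambda = 6e-10        # vacuum energy density, J/m^3 (paper's value)

# R_inf = c^2 * sqrt(3 / (8 pi G rho_Lambda)) ~ 1.55e26 m
R_inf = c**2 * math.sqrt(3 / (8 * math.pi * G * rho_lambda))

# q0 = sqrt(4 eps0 h c / (pi {gamma + ln(4 pi R_inf / a0)})) ~ 1.603e-19 C
q0 = math.sqrt(
    4 * eps0 * h * c
    / (math.pi * (gamma + math.log(4 * math.pi * R_inf / a0)))
)
```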
Cooray, V. and Cooray, G. (2019) Novel Features of Classical Electrodynamics and Their Connection to the Elementary Charge, Energy Density of Vacuum and Heisenberg’s Uncertainty Principle―Review and Consolidation. Journal of Modern Physics, 10, 74-90. https://doi.org/10.4236/jmp.2019.101007
1. Cooray, V. and Cooray, G. (2016) Atmosphere, 7, 64. https://doi.org/10.3390/atmos7050064
2. Cooray, V. and Cooray, G. (2018) Journal of Electromagnetic Analysis and Application, 10, 77-87. https://doi.org/10.4236/jemaa.2018.105006
3. Cooray, V. and Cooray, G. (2017) Atmosphere, 8, 46. https://doi.org/10.3390/atmos8030046
4. Cooray, V. and Cooray, G. (2017) Journal of Electromagnetic Analysis and Application, 9, 167-182. https://doi.org/10.4236/jemaa.2017.911015
5. Cooray, V. and Cooray, G. (2016) Atmosphere, 7, 151. https://doi.org/10.3390/atmos7110151
6. Cooray, V. and Cooray, G. (2017) Natural Science, 9, 219-230. https://doi.org/10.4236/ns.2017.97022
7. Komatsu, E., et al. (2011) Astrophysical Journal, 192, 18. https://doi.org/10.1088/0067-0049/192/2/18
8. Perivolaropoulos, L. (2017) Physical Review D, 95, Article ID: 103523.
9. Weinberg, S. (2008) Cosmology. Oxford University Press, Oxford.
10. Cooray, V., Cooray, G. and Rachidi, F. (2017) Journal of Modern Physics, 8, 1979-1987. https://doi.org/10.4236/jmp.2017.812119
|
(i) In Figure 1 below, a charge Q is fixed. Another charge q is moved along a circular arc MN of radius r around it, from the point M to the point N such that the length of the arc MN = l. The work done in this process is:
\frac{1}{4\pi {\in }_{0}}·\frac{Qq}{{r}^{2}}l
\frac{Qq}{2{\in }_{0}{r}^{2}}l
\frac{Qq}{2\pi {\in }_{0}{r}^{2}}
(ii) A carbon resistor has coloured bands as shown in Figure 2 below. The resistance of the resistor is:
(a) 26Ω ± 10%
(b) 26Ω ± 5%
(c) 260Ω ± 5%
(d) 260Ω ± 10%
(iii) A solenoid L and a resistor R are connected in series to a battery, through a switch. When the switch is put on, current I flowing through it varies with time t as shown in which of the graphs given below:
(iv) Two thin lenses having optical powers of –10D and +6D are placed in contact with each other. The focal length of the combination is:
(a) +0·25 cm
(b) –0·25 cm
(c) +0·25 m
(d) –0·25 m
(v) Total energy of an electron in the ground state of hydrogen atom is –13·6 eV. Its total energy, when hydrogen atom is in the first excited state, is:
(a) +13·6 eV
(b) +3·4 eV
(c) –3·4 eV
(d) –54·4 eV
(i) A charged oil drop weighing 1·6 × 10⁻¹⁵ N is found to remain suspended in a uniform electric field of intensity 2 × 10³ N C⁻¹. Find the charge on the drop.
(ii) For a metallic conductor, what is the relation between current density (J), conductivity (σ) and electric field intensity (E)?
(iii) In Figure 3 given below, find the value of resistance x for which points A and B are at the same potential:
(iv) Write the expression for the Lorentz force F in vector form.
(v) A coil has a self-inductance of 0·05 henry. Find magnitude of the emf induced in it when the current flowing through it is changing at the rate 100 A s⁻¹.
(vi) To which regions of the electromagnetic spectrum do the following wavelengths belong:
(vii) What is the difference between polarised light and unpolarised light?
(viii) Name the principle on the basis of which optical fibres work.
(ix) Calculate dispersive power of a transparent material given:
n_v = 1·56, n_r = 1·54, n_y = 1·55.
(x) What is meant by short-sightedness?
(xi) Two metals A and B have work functions 4eV and 6eV respectively. Which metal has lower threshold wavelength for photoelectric effect?
(xii) Calculate angular momentum of an electron in the third Bohr orbit of hydrogen atom.
(xiii) In a nuclear reactor, what is the function of a moderator?
(xiv) In our Nature, where is the nuclear fusion reaction taking place continuously?
(xv) What is the use of a Zener diode?
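Several of the short-answer items above are one-line computations. As a quick sanity check, here is a Python sketch using standard rounded constants (h = 6·626 × 10⁻³⁴ J s); this is an illustration, not the marking scheme:

```python
import math

# (i) Suspended oil drop: the electric force qE balances the weight, so q = F/E.
F, E = 1.6e-15, 2e3           # weight (N) and field intensity (N/C)
q = F / E                     # -> 8e-19 C, i.e. five electronic charges

# (v) Induced emf: |emf| = L * dI/dt.
L, dI_dt = 0.05, 100          # H, A/s
emf = L * dI_dt               # -> 5 V

# (ix) Dispersive power: omega = (n_v - n_r) / (n_y - 1).
n_v, n_r, n_y = 1.56, 1.54, 1.55
omega = (n_v - n_r) / (n_y - 1)   # ~0.036

# (xii) Bohr orbit: L_n = n h / (2 pi), with n = 3.
h = 6.626e-34
L3 = 3 * h / (2 * math.pi)    # ~3.16e-34 J s

print(q, emf, omega, L3)
```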
(a) Two point charges Q1 = 400μC and Q2 = 100μC are kept fixed, 60 cm apart in vacuum. Find intensity of the electric field at midpoint of the line joining Q1 and Q2.
(b) (i) State Gauss' Law.
(ii) In an electric dipole, at which point is the electric potential zero?
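For part (a), the midpoint is 0·30 m from each charge and the two fields there point in opposite directions, so the magnitudes subtract. A hedged numerical sketch (k ≈ 9 × 10⁹ N m² C⁻² is an assumed rounded constant):

```python
k = 9e9                        # Coulomb constant, N m^2 C^-2 (rounded)
Q1, Q2 = 400e-6, 100e-6        # charges, C
r = 0.30                       # distance from each charge to the midpoint, m
E_net = k * (Q1 - Q2) / r**2   # net field, directed from Q1 toward Q2
print(E_net)                   # ~3.0e7 N/C
```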
(a) Obtain an expression for equivalent capacitance when three capacitors C1, C2 and C3 are connected in series.
(b) A metallic wire has a resistance of 3·0Ω at 0°C and 4·8Ω at 150°C. Find the temperature coefficient of resistance of its material.
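Part (b) follows directly from R_t = R_0(1 + αt). A minimal check in Python (a sketch, not the marking scheme):

```python
R0, Rt, t = 3.0, 4.8, 150.0    # ohm, ohm, deg C
alpha = (Rt - R0) / (R0 * t)   # temperature coefficient of resistance
print(alpha)                   # ~0.004 per deg C
```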
(a) In the circuit shown in Figure 4 below, E1 and E2 are two cells having emfs 2V and 3V respectively, and negligible internal resistances. Applying Kirchhoff's laws of electrical networks, find the values of currents I1 and I2.
(b) State how a moving coil galvanometer can be converted into an ammeter.
(a) Draw a labelled circuit diagram of a potentiometer to measure internal resistance of a cell. Write the working formula. (Derivation not required).
(b) (i) Define Curie temperature.
(ii) If magnetic susceptibility of a certain magnetic material is 0·0001, find its relative permeability.
(a) (i) Two infinitely long current carrying conductors X and Y are kept parallel to each other, 24 cm apart in vacuum. They carry currents of 5A and 7A respectively, in the same direction, as shown in Figure 5 below. Find the position of a neutral point, i.e. a point where resultant magnetic flux density is zero. (Ignore earth’s magnetic field).
(ii) If current through the conductor Y is reversed in direction, will neutral point lie between X and Y, to the left of X or to the right of Y?
(b) (i) Define Ampere in terms of force between two current carrying conductors.
(ii) What is an ideal transformer?
(a) A coil having self-inductance of 0·7H and resistance of 165Ω is connected to an a.c. source of 275V, 50Hz. If \pi = \frac{22}{7}, find:
(i) Reactance of the coil
(ii) Impedance of the coil
(iii) Current flowing through the coil
(b) Draw a labelled graph showing variation of impedance of a series LCR circuit with frequency of the a.c. supply.
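For part (a), with π taken as 22/7 the numbers come out exact. A quick numerical check (a sketch, not the marking scheme):

```python
L, R = 0.7, 165.0        # inductance (H) and resistance (ohm)
V, f = 275.0, 50.0       # supply rms voltage (V) and frequency (Hz)
pi = 22 / 7              # as instructed in the question

XL = 2 * pi * f * L              # (i) reactance of the coil: 220 ohm
Z = (R**2 + XL**2) ** 0.5        # (ii) impedance of the coil: 275 ohm
I = V / Z                        # (iii) rms current through the coil: 1 A
print(XL, Z, I)
```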
(a) Derive Snell's law of refraction using Huygen's wave theory.
(b) Monochromatic light of wavelength 650 nm falls normally on a slit of width 1·3 × 10⁻⁴ cm and the resulting Fraunhofer diffraction is obtained on a screen. Find the angular width of the central maxima.
(a) In Young's double slit experiment, show that the fringe width \beta = \frac{\lambda D}{d}.
(b) A ray of ordinary light is travelling in air. It is incident on air glass pair at a polarising angle of 56°. Find the angle of refraction in glass.
(a) Find the angle of incidence at which a ray of monochromatic light should be incident on the first surface AB of a regular glass prism ABC so that the emergent ray grazes the adjacent surface AC. (Refractive Index of glass = 1·56).
(b) State how focal length of a glass lens (Refractive Index 1·5) changes when it is completely immersed in:
(i) Water (Refractive Index 1·33)
(ii) A liquid (Refractive Index 1·65)
(a) A convex lens of a focal length 5 cm is used as a simple microscope. Where should an object be placed so that the image formed by it lies at the least distance of distinct vision (D = 25 cm)?
(b) Draw a labelled ray diagram showing the formation of an image by a refracting telescope when the final image lies at infinity.
(a) Monochromatic light of wavelength 198 nm is incident on the surface of a metallic cathode whose work function is 2·5 eV. How much potential difference must be applied between the cathode and the anode of a photocell to just stop the photo current from flowing?
(b) (i) What is de Broglie hypothesis?
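For part (a), the stopping potential is the photon energy (in eV) minus the work function. A hedged check using standard rounded constants (h = 6·626 × 10⁻³⁴ J s, c = 3 × 10⁸ m/s, e = 1·6 × 10⁻¹⁹ C), a sketch rather than the marking scheme:

```python
h, c, e = 6.626e-34, 3.0e8, 1.6e-19   # approximate SI constants
lam = 198e-9                          # wavelength, m
E_photon = h * c / (lam * e)          # photon energy in eV (~6.27 eV)
W = 2.5                               # work function, eV
V_stop = E_photon - W                 # stopping potential (~3.77 V)
print(V_stop)
```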
(a) (i) How are various lines of Lyman series formed? Explain on the basis of Bohr's theory.
(ii) Calculate the shortest wavelength of electromagnetic radiation present in Balmer series of hydrogen spectrum.
(b) State the effect of the following changes on the X-rays emitted by Coolidge X-ray tube:
(i) High voltage between cathode and anode is increased.
(ii) Filament temperature is increased.
(a) Half life of a certain radioactive material is 8 hours.
(i) Find disintegration constant of this material.
(ii) If one starts with 600g of this substance, how much of it will disintegrate in one day?
(b) Sketch a graph showing the variation of binding energy per nucleon of a nucleus with its mass number.
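For part (a), one day is exactly three half-lives, so one eighth of the material is left. A minimal numerical check (a sketch, not the marking scheme):

```python
import math

T_half = 8.0                       # hours
lam = math.log(2) / T_half         # (i) disintegration constant, per hour
N0 = 600.0                         # grams
N_left = N0 * math.exp(-lam * 24)  # after one day = 3 half-lives: 75 g
disintegrated = N0 - N_left        # (ii) amount disintegrated: 525 g
print(lam, N_left, disintegrated)
```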
(a) Draw a circuit diagram for the common emitter transistor amplifier. What is meant by phase reversal?
(b) Write the truth table of the following circuit. Name the gate represented by this circuit.
|
EuDML | The connection between May's axioms for a triangulated tensor product and Happel's description of the derived category of the quiver D_4
The connection between May's axioms for a triangulated tensor product and Happel's description of the derived category of the quiver D_4
Keller, Bernhard; Neeman, Amnon
Keller, Bernhard, and Neeman, Amnon. "The connection between May's axioms for a triangulated tensor product and Happel's description of the derived category of the quiver D_4." Documenta Mathematica 7 (2002): 535-560. <http://eudml.org/doc/50318>.
Keywords: derived category, tensor product, quiver, triangulated category, category of representations of the quiver D_4
|
Bistability - Wikipedia
In a dynamical system, bistability means the system has two stable equilibrium states.[1] Something that is bistable can be resting in either of two states. An example of a mechanical device which is bistable is a light switch. The switch lever is designed to rest in the "on" or "off" position, but not between the two. Bistable behavior can occur in mechanical linkages, electronic circuits, nonlinear optical systems, chemical reactions, and physiological and biological systems.
In a conservative force field, bistability stems from the fact that the potential energy has two local minima, which are the stable equilibrium points.[2] These rest states need not have equal potential energy. By mathematical arguments, a local maximum, an unstable equilibrium point, must lie between the two minima. At rest, a particle will be in one of the minimum equilibrium positions, because that corresponds to the state of lowest energy. The maximum can be visualized as a barrier between them.
In the mathematical language of dynamic systems analysis, one of the simplest bistable systems is
{\displaystyle {\frac {dy}{dt}}=y(1-y^{2}).}
This system describes a ball rolling down a curve with shape y^4/4 - y^2/2, and has three equilibrium points: y = 1, y = 0, and y = -1. The middle point y = 0 is unstable, while the other two points are stable. The direction of change of y(t) over time depends on the initial condition y(0). If the initial condition is positive (y(0) > 0), then the solution y(t) approaches 1 over time, but if the initial condition is negative (y(0) < 0), then y(t) approaches -1 over time. Thus, the dynamics are "bistable". The final state of the system can be either y = 1 or y = -1, depending on the initial conditions.[3]
The appearance of a bistable region can be understood for the model system
{\displaystyle {\frac {dy}{dt}}=y(r-y^{2})}
which undergoes a supercritical pitchfork bifurcation with bifurcation parameter r.
In biological and chemical systems[edit]
Three-dimensional invariant measure for cellular differentiation featuring a two-stable mode. The axes denote cell counts for three types of cells: progenitor (z), osteoblast (y), and chondrocyte (x). Pro-osteoblast stimulus promotes P→O transition.[4]
Bistability is key for understanding basic phenomena of cellular functioning, such as decision-making processes in cell cycle progression, cellular differentiation,[5] and apoptosis. It is also involved in loss of cellular homeostasis associated with early events in cancer onset and in prion diseases as well as in the origin of new species (speciation).[6]
Bistability can be generated by a positive feedback loop with an ultrasensitive regulatory step. Positive feedback loops, such as the simple X activates Y and Y activates X motif, essentially links output signals to their input signals and have been noted to be an important regulatory motif in cellular signal transduction because positive feedback loops can create switches with an all-or-nothing decision.[7] Studies have shown that numerous biological systems, such as Xenopus oocyte maturation,[8] mammalian calcium signal transduction, and polarity in budding yeast, incorporate temporal (slow and fast) positive feedback loops, or more than one feedback loop that occurs at different times.[7] Having two different temporal positive feedback loops or "dual-time switches" allows for (a) increased regulation: two switches that have independent changeable activation and deactivation times; and (b) linked feedback loops on multiple timescales can filter noise.[7]
Bistability can also arise in a biochemical system only for a particular range of parameter values, where the parameter can often be interpreted as the strength of the feedback. In several typical examples, the system has only one stable fixed point at low values of the parameter. A saddle-node bifurcation gives rise to a pair of new fixed points emerging, one stable and the other unstable, at a critical value of the parameter. The unstable solution can then form another saddle-node bifurcation with the initial stable solution at a higher value of the parameter, leaving only the higher fixed solution. Thus, at values of the parameter between the two critical values, the system has two stable solutions. An example of a dynamical system that demonstrates similar features is
{\displaystyle {\frac {\mathrm {d} x}{\mathrm {d} t}}=r+{\frac {x^{5}}{1+x^{5}}}-x}
where x is the output, and r is the parameter, acting as the input.[9]
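The saddle-node structure can be checked numerically by counting sign changes of the right-hand side on a grid. This is a sketch; the parameter values below were chosen by inspection for illustration and are not from the cited source:

```python
def fixed_points(r, xmax=5.0, n=50000):
    """Count roots of r + x^5/(1+x^5) - x on [0, xmax] via sign changes."""
    f = lambda x: r + x**5 / (1 + x**5) - x
    count = 0
    prev = f(0.0)
    for i in range(1, n + 1):
        cur = f(xmax * i / n)
        if prev == 0 or prev * cur < 0:
            count += 1
        prev = cur
    return count

# One fixed point at low and at high r; three (two stable, one unstable)
# in the intermediate, bistable window.
print(fixed_points(0.3), fixed_points(0.52), fixed_points(0.7))
```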
Bistability can be modified to be more robust and to tolerate significant changes in concentrations of reactants, while still maintaining its "switch-like" character. Feedback on both the activator of a system and inhibitor make the system able to tolerate a wide range of concentrations. An example of this in cell biology is that activated CDK1 (Cyclin Dependent Kinase 1) activates its activator Cdc25 while at the same time inactivating its inactivator, Wee1, thus allowing for progression of a cell into mitosis. Without this double feedback, the system would still be bistable, but would not be able to tolerate such a wide range of concentrations.[10]
Bistability has also been described in the embryonic development of Drosophila melanogaster (the fruit fly). Examples are anterior-posterior[11] and dorso-ventral[12][13] axis formation and eye development.[14]
A prime example of bistability in biological systems is that of Sonic hedgehog (Shh), a secreted signaling molecule, which plays a critical role in development. Shh functions in diverse processes in development, including patterning limb bud tissue differentiation. The Shh signaling network behaves as a bistable switch, allowing the cell to abruptly switch states at precise Shh concentrations. gli1 and gli2 transcription is activated by Shh, and their gene products act as transcriptional activators for their own expression and for targets downstream of Shh signaling.[15] Simultaneously, the Shh signaling network is controlled by a negative feedback loop wherein the Gli transcription factors activate the enhanced transcription of a repressor (Ptc). This signaling network illustrates the simultaneous positive and negative feedback loops whose exquisite sensitivity helps create a bistable switch.
Bistability can only arise in biological and chemical systems if three necessary conditions are fulfilled: positive feedback, a mechanism to filter out small stimuli and a mechanism to prevent increase without bound.[6]
Bistable chemical systems have been studied extensively to analyze relaxation kinetics, non-equilibrium thermodynamics, stochastic resonance, as well as climate change.[6] In bistable spatially extended systems the onset of local correlations and propagation of traveling waves have been analyzed.[16][17]
Bistability is often accompanied by hysteresis. On a population level, if many realisations of a bistable system are considered (e.g. many bistable cells (speciation)[18]), one typically observes bimodal distributions. In an ensemble average over the population, the result may simply look like a smooth transition, thus showing the value of single-cell resolution.
A specific type of instability is known as mode hopping, which is bistability in the frequency space. Here trajectories can shoot between two stable limit cycles, and thus show similar characteristics as normal bistability when measured inside a Poincaré section.
In mechanical systems[edit]
Bistability as applied in the design of mechanical systems is more commonly said to be "over centre"—that is, work is done on the system to move it just past the peak, at which point the mechanism goes "over centre" to its secondary stable position. The result is a toggle-type action: work applied to the system below a threshold sufficient to send it "over centre" results in no change to the mechanism's state.
Springs are a common method of achieving an "over centre" action. A spring attached to a simple two position ratchet-type mechanism can create a button or plunger that is clicked or toggled between two mechanical states. Many ballpoint and rollerball retractable pens employ this type of bistable mechanism.
An even more common example of an over-center device is an ordinary electric wall switch. These switches are often designed to snap firmly into the "on" or "off" position once the toggle handle has been moved a certain distance past the center-point.
A ratchet-and-pawl is an elaboration—a multi-stable "over center" system used to create irreversible motion. The pawl goes over center as it is turned in the forward direction. In this case, "over center" refers to the ratchet being stable and "locked" in a given position until clicked forward again; it has nothing to do with the ratchet being unable to turn in the reverse direction.
A ratchet in action. Each tooth in the ratchet together with the regions to either side of it constitutes a simple bistable mechanism.
ferroelectric, ferromagnetic, hysteresis, bistable perception
astable multivibrator, monostable multivibrator.
Multistable perception describes the spontaneous or exogenous alternation of different percepts in face of the same physical stimulus.
Interferometric modulator display, a bistable reflective display technology found in mirasol displays by Qualcomm
^ Morris, Christopher G. (1992). Academic Press Dictionary of Science and Technology. Gulf Professional publishing. p. 267. ISBN 978-0122004001.
^ Nazarov, Yuli V.; Danon, Jeroen (2013). Advanced Quantum Mechanics: A Practical Guide. Cambridge University Press. p. 291. ISBN 978-1139619028.
^ Ket Hing Chong; Sandhya Samarasinghe; Don Kulasiri & Jie Zheng (2015). "Computational techniques in mathematical modelling of biological switches". Modsim2015: 578–584. For detailed techniques of mathematical modelling of bistability, see the tutorial by Chong et al. (2015) http://www.mssanz.org.au/modsim2015/C2/chong.pdf The tutorial provides a simple example illustration of bistability using a synthetic toggle switch proposed in Collins, James J.; Gardner, Timothy S.; Cantor, Charles R. (2000). "Construction of a genetic toggle switch in Escherichia coli". Nature. 403 (6767): 339–42. Bibcode:2000Natur.403..339G. doi:10.1038/35002131. PMID 10659857. S2CID 345059. . The tutorial also uses the dynamical system software XPPAUT http://www.math.pitt.edu/~bard/xpp/xpp.html to show practically how to see bistability captured by a saddle-node bifurcation diagram and the hysteresis behaviours when the bifurcation parameter is increased or decreased slowly over the tipping points and a protein gets turned 'On' or turned 'Off'.
^ Kryven, I.; Röblitz, S.; Schütte, Ch. (2015). "Solution of the chemical master equation by radial basis functions approximation with interface tracking". BMC Systems Biology. 9 (1): 67. doi:10.1186/s12918-015-0210-y. PMC 4599742. PMID 26449665.
^ Ghaffarizadeh A, Flann NS, Podgorski GJ (2014). "Multistable switches and their role in cellular differentiation networks". BMC Bioinformatics. 15: S7+. doi:10.1186/1471-2105-15-s7-s7. PMC 4110729. PMID 25078021.
^ a b c Wilhelm, T (2009). "The smallest chemical reaction system with bistability". BMC Systems Biology. 3: 90. doi:10.1186/1752-0509-3-90. PMC 2749052. PMID 19737387.
^ a b c O. Brandman, J. E. Ferrell Jr., R. Li, T. Meyer, Science 310, 496 (2005)
^ Ferrell JE Jr.; Machleder EM (1998). "The biochemical basis of an all-or-none cell fate switch in Xenopus oocytes". Science. 280 (5365): 895–8. Bibcode:1998Sci...280..895F. doi:10.1126/science.280.5365.895. PMID 9572732. S2CID 34863795.
^ Angeli, David; Ferrell, JE; Sontag, Eduardo D (2003). "Detection of multistability, bifurcations, and hysteresis in a large class of biological positive-feedback systems". PNAS. 101 (7): 1822–7. Bibcode:2004PNAS..101.1822A. doi:10.1073/pnas.0308265100. PMC 357011. PMID 14766974.
^ Ferrell JE Jr. (2008). "Feedback regulation of opposing enzymes generates robust, all-or-none bistable responses". Current Biology. 18 (6): R244–R245. doi:10.1016/j.cub.2008.02.035. PMC 2832910. PMID 18364225.
^ Lopes, Francisco J. P.; Vieira, Fernando M. C.; Holloway, David M.; Bisch, Paulo M.; Spirov, Alexander V.; Ohler, Uwe (26 September 2008). "Spatial Bistability Generates hunchback Expression Sharpness in the Drosophila Embryo". PLOS Computational Biology. 4 (9): e1000184. Bibcode:2008PLSCB...4E0184L. doi:10.1371/journal.pcbi.1000184. PMC 2527687. PMID 18818726.
^ Wang, Yu-Chiun; Ferguson, Edwin L. (10 March 2005). "Spatial bistability of Dpp–receptor interactions during Drosophila dorsal–ventral patterning". Nature. 434 (7030): 229–234. Bibcode:2005Natur.434..229W. doi:10.1038/nature03318. PMID 15759004. S2CID 4415152.
^ Umulis, D. M.; Mihaela Serpe; Michael B. O’Connor; Hans G. Othmer (1 August 2006). "Robust, bistable patterning of the dorsal surface of the Drosophila embryo". Proceedings of the National Academy of Sciences. 103 (31): 11613–11618. Bibcode:2006PNAS..10311613U. doi:10.1073/pnas.0510398103. PMC 1544218. PMID 16864795.
^ Graham, T. G. W.; Tabei, S. M. A.; Dinner, A. R.; Rebay, I. (22 June 2010). "Modeling bistable cell-fate choices in the Drosophila eye: qualitative and quantitative perspectives". Development. 137 (14): 2265–2278. doi:10.1242/dev.044826. PMC 2889600. PMID 20570936.
^ Lai, K., M.J. Robertson, and D.V. Schaffer, The sonic hedgehog signaling system as a bistable genetic switch. Biophys J, 2004. 86(5): pp. 2748–57.
^ Elf, J.; Ehrenberg, M. (2004). "Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases". Systems Biology. 1 (2): 230–236. doi:10.1049/sb:20045021. PMID 17051695. S2CID 17770042.
^ Kochanczyk, M.; Jaruszewicz, J.; Lipniacki, T. (July 2013). "Stochastic transitions in a bistable reaction system on the membrane". Journal of the Royal Society Interface. 10 (84): 20130151. doi:10.1098/rsif.2013.0151. PMC 3673150. PMID 23635492.
^ Nielsen; Dolganov, Nadia A.; Rasmussen, Thomas; Otto, Glen; Miller, Michael C.; Felt, Stephen A.; Torreilles, Stéphanie; Schoolnik, Gary K.; et al. (2010). Isberg, Ralph R. (ed.). "A Bistable Switch and Anatomical Site Control Vibrio cholerae Virulence Gene Expression in the Intestine". PLOS Pathogens. 6 (9): 1. doi:10.1371/journal.ppat.1001102. PMC 2940755. PMID 20862321.
|
Computing genus-zero twisted Gromov-Witten invariants
15 April 2009
Tom Coates,¹ Alessio Corti,¹ Hiroshi Iritani,² Hsian-Hua Tseng³
² Faculty of Mathematics, Kyushu University 6-10-1; Department of Mathematics, Imperial College London
³ Department of Mathematics, University of British Columbia; Department of Mathematics, University of Wisconsin–Madison
Twisted Gromov-Witten invariants are intersection numbers in moduli spaces of stable maps to a manifold or orbifold X which depend in addition on a vector bundle over X and an invertible multiplicative characteristic class. Special cases are closely related to local Gromov-Witten invariants of the bundle and to genus-zero one-point invariants of complete intersections in X. We develop tools for computing genus-zero twisted Gromov-Witten invariants of orbifolds and apply them to several examples. We prove a "quantum Lefschetz theorem" that expresses genus-zero one-point Gromov-Witten invariants of a complete intersection in terms of those of the ambient orbifold X. We determine the genus-zero Gromov-Witten potential of the type A surface singularity \left[{\mathbb{C}}^{2}/{\mathbb{Z}}_{n}\right]. We also compute some genus-zero invariants of \left[{\mathbb{C}}^{3}/{\mathbb{Z}}_{3}\right], verifying predictions of Aganagic, Bouchard, and Klemm [3]. In a self-contained appendix, we determine the relationship between the quantum cohomology of the {A}_{n} surface singularity and that of its crepant resolution, thereby proving the Crepant Resolution Conjectures of Ruan and of Bryan and Graber [12] in this case.
Tom Coates. Alessio Corti. Hiroshi Iritani. Hsian-Hua Tseng. "Computing genus-zero twisted Gromov-Witten invariants." Duke Math. J. 147 (3) 377 - 438, 15 April 2009. https://doi.org/10.1215/00127094-2009-015
|
Stabilizing CO2 Concentrations | METEO 469: From Meteorology to Mitigation: Understanding Global Warming
Before we proceed, it is useful to cover a few more important details. You may recall from an earlier lesson that the radiative forcing due to a given increase in atmospheric CO2 concentration, \Delta F_{CO_2}, can be approximated as:

\Delta F_{CO_2} = 5.35\,\mathrm{ln}\left(\frac{[CO_2]}{{[CO_2]}_0}\right)

where {[CO_2]}_0 is the initial concentration and [CO_2] is the final concentration. This gives a forcing for doubling of CO2 from pre-industrial values (i.e., {[CO_2]}_0 = 280 ppm and [CO_2] = 560 ppm) of just under 4 W m^{-2}. Given the typical estimate of climate sensitivity we discussed during the past two lessons, we know that this forcing translates to about 3°C warming. That means we get about 0.75°C warming for each W m^{-2} of radiative forcing.
Thus far, CO2 has increased from pre-industrial levels of 280 ppm to current levels of around 410 ppm. Based on the relationships above, what radiative forcing and global mean temperature increase would you expect in response to our behavior so far?
Using the formula above, we get a radiative forcing of ΔF = 2.04 W/m².
Given that we get roughly 0.75°C warming for each W/m² of forcing, this gives slightly more than 1.5°C warming.
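The arithmetic behind this answer can be reproduced in a couple of lines (a sketch of the check, using the forcing formula and the 0.75°C-per-W/m² figure from above):

```python
import math

dF = 5.35 * math.log(410 / 280)   # radiative forcing, W/m^2 (~2.04)
warming = 0.75 * dF               # implied equilibrium warming, deg C (~1.53)
print(dF, warming)
```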
If you successfully answered the question above, you know that the CO2 increases so far should have given rise to about 1.5°C warming of the globe. Yet we have only seen about 1.0°C warming. Are the theoretical formulas wrong? Did we make a mistake? Actually, it is neither. First of all, we know that it takes decades for the climate system to equilibrate to a rise in atmospheric CO2, so we have not yet realized the expected equilibrium warming indicated by the equilibrium climate sensitivity. Models indicate that there is as much as another 0.5°C of warming still in the pipeline, due to the CO2 increases that have taken place already. That alone would largely explain the roughly 0.5°C discrepancy between the warming we expect and the lesser warming we have observed.
However, we have forgotten two other things that—as it happens—roughly cancel out! First of all, CO2 is not the only greenhouse gas whose concentrations we have been increasing through industrial and other human activities. There are other greenhouse gases—methane, nitrous oxide, and others—whose concentrations we have increased, and whose concentrations are projected to continue to rise in the various SRES and RCP scenarios we have examined.
Figure 6.6: Greenhouse Gas Levels Resulting from Various Emissions Scenarios.
We need to account for the effect of all of these other greenhouse gases. We can do this using the concept of CO2 equivalent (CO2-eq). CO2-eq is the concentration of CO2 that would be equivalent, in terms of the total radiative forcing, to a combination of all the other greenhouse gases. If we take into account the rises in methane and other anthropogenic greenhouse gases, then the net radiative forcing is equivalent to having increased CO2 to a substantially higher level, roughly 485 ppm! In other words, the current value of CO2-eq is 485 ppm. This fact has caused quite a bit of confusion, leading some commentators (see this RealClimate article) to incorrectly sound the alarm that it is already too late to stabilize CO2 concentrations at 450 ppm and, hence, to avoid breaching the targets that have been set by some as constituting dangerous anthropogenic interference with the climate (see this article by Michael Mann for a discussion of these considerations).
Nonetheless, if CO2-eq has reached 485 ppm, does that mean that we are committed to the net warming that can be expected from a concentration of 485 ppm CO2? Well, yes and no. The other thing we have left out is that greenhouse gases are not the only significant anthropogenic impact on the climate. We know that the production of sulphate and other aerosols has played an important role, cooling substantial regions of the Northern Hemisphere continents, in particular, during the past century. The best estimate of the impact of this anthropogenic forcing, while quite uncertain, is roughly -0.8 W/m² of forcing, which is equivalent—in this context—to the contribution of negative 60 ppm of CO2. If we add -60 ppm to 485 ppm we get 425 ppm—which is closer to the current actual CO2 concentration of 408 ppm. So, in other words, if we take into account not only the effect of all other greenhouse gases, but also the offsetting cooling effect of anthropogenic aerosols, we end up roughly where we started off, considering only the effect of increasing atmospheric CO2 concentration through fossil fuel burning.
It is, therefore, a useful simplification to simply look at atmospheric CO2 alone as a proxy for the total anthropogenic forcing of the climate, but there are some important caveats to keep in mind:
(1) the various scenarios assume that the sulphate aerosol burden remains unchanged. If we instead choose to clean up the atmosphere to the point of scrubbing all current sulphate aerosols from industrial emissions, we are left with the Faustian bargain of experiencing the additional climate change impacts of a sudden effective increase of atmospheric CO2 of 60 ppm;
(2) not all greenhouse gases are created the same—some, such as methane, have far shorter residence times in the atmosphere (timescale of years) than does CO2, which persists for centuries. That means that there is a far greater future climate change commitment embodied in a scenario of pure CO2 emissions than the same CO2-equivalent emissions consisting largely of methane. This has implications for the abatement strategies we will discuss later in the course.
These limitations notwithstanding, let us now consider the impact of various pure CO₂ scenarios. Let us focus specifically on scenarios that will stabilize atmospheric CO₂ at some particular level, i.e., so-called stabilization scenarios. Invariably, these scenarios involve bringing annual emissions to a peak at some point during the 21st century and decreasing them subsequently. Obviously, the higher we allow the concentrations to increase and the later the peak, the higher the ultimate CO₂ concentration is going to be. The various possible such scenarios are shown below in increments of 50 ppm. If we are to stabilize CO₂ concentrations at 550 ppm, we can see that CO₂ emissions must be brought to a peak of no more than 8.7 gigatons of carbon per year by around 2050, and reduced below 1990 levels (i.e., 6 gigatons of carbon per year) by 2100. For comparison, as we saw earlier, current emissions are roughly 8.5 gigatons per year and rising at the rate of the carbon-intensive A1FI SRES emissions scenario, so we are already "behind the curve," so to speak, even for 550 ppm stabilization.
For 450 ppm stabilization, the challenge is far greater. According to the figure below, we would have had to bring emissions to a peak before 2010 at roughly 7.5 gigatons per year, and lower them to roughly 4 gigatons per year (i.e., 33% below 1990 levels) by 2050. Obviously, that train has already left the station. Alternatively, the RCP2.6 pathway is an example of a 450 ppm stabilization scenario consistent with where we are now; it involves bringing emissions to a peak within the next decade, below 10 gigatons per year, and reducing them far more dramatically, to near zero by 2100, through various mitigation policies. With every year we continue with business-as-usual carbon emissions, achieving a 450 ppm stabilization target becomes that much more difficult and requires far greater reductions of emissions in future decades. It is for this reason that the problem of greenhouse gas stabilization has been referred to by some scientists as a problem with a very large procrastination penalty.
Figure 6.7: Annual CO₂ Emissions and Resulting CO₂ Concentrations for Various Stabilization Scenarios.
Credit: Robert A. Rohde / Global Warming Art
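The arithmetic linking an emissions pathway to a concentration outcome can be sketched numerically. This is only an illustration, not a carbon-cycle model: it assumes a constant airborne fraction of about 0.45 and the standard conversion of roughly 2.13 gigatons of carbon per ppm of atmospheric CO₂; both numbers are assumptions introduced here, not values from the lesson.

```python
# Illustrative only: constant airborne fraction, no carbon-cycle feedbacks.
AIRBORNE_FRACTION = 0.45   # assumed fraction of emitted carbon staying airborne
GTC_PER_PPM = 2.13         # approximate conversion: gigatons C per ppm of CO2

def concentration_path(c0_ppm, annual_emissions_gtc):
    """Accumulate atmospheric CO2 (ppm) given yearly emissions (GtC/yr)."""
    c = c0_ppm
    path = []
    for e in annual_emissions_gtc:
        c += e * AIRBORNE_FRACTION / GTC_PER_PPM
        path.append(c)
    return path

# Ten years of flat 8.5 GtC/yr emissions starting from 408 ppm:
path = concentration_path(408.0, [8.5] * 10)
print(round(path[-1], 1))  # roughly 426 ppm
```

Even this toy calculation shows why delay matters: every decade of business-as-usual emissions uses up nearly 20 ppm of the remaining headroom to any stabilization target.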
|
numfmt - Maple Help
format a number according to a template
numfmt(format, number)
The numfmt command uses a format template to specify, among other things, how many digits to display after the decimal point. The given number is matched against the template and a formatted string is returned.
This command follows the same specification as the Java class java.text.DecimalFormat.
\mathrm{MapleTA}:-\mathrm{Builtin}:-\mathrm{numfmt}\left("#.00",20.9\right)
\textcolor[rgb]{0,0,1}{"20.90"}
\mathrm{MapleTA}:-\mathrm{Builtin}:-\mathrm{numfmt}\left("#.#",12.34\right)
\textcolor[rgb]{0,0,1}{"12.3"}
\mathrm{MapleTA}:-\mathrm{Builtin}:-\mathrm{numfmt}\left(".#",12.34\right)
\textcolor[rgb]{0,0,1}{"12.3"}
\mathrm{MapleTA}:-\mathrm{Builtin}:-\mathrm{numfmt}\left("#.0000",12.3456789\right)
\textcolor[rgb]{0,0,1}{"12.3457"}
\mathrm{MapleTA}:-\mathrm{Builtin}:-\mathrm{numfmt}\left("#.010",12.3456\right)
\textcolor[rgb]{0,0,1}{"12.351"}
\mathrm{MapleTA}:-\mathrm{Builtin}:-\mathrm{numfmt}\left("#",12.34\right)
\textcolor[rgb]{0,0,1}{"12"}
\mathrm{MapleTA}:-\mathrm{Builtin}:-\mathrm{numfmt}\left("#.#",12.3456\right)
\textcolor[rgb]{0,0,1}{"12.3"}
\mathrm{MapleTA}:-\mathrm{Builtin}:-\mathrm{numfmt}\left("#.###",12.3456\right)
\textcolor[rgb]{0,0,1}{"12.346"}
\mathrm{MapleTA}:-\mathrm{Builtin}:-\mathrm{numfmt}\left("000.#",12.3456\right)
\textcolor[rgb]{0,0,1}{"012.3"}
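For readers more familiar with Python than Maple, several of the templates above map directly onto Python format specifications. This is an analogy only, not part of the numfmt specification, and edge cases such as the "#.010" template have no direct Python counterpart:

```python
# Rough Python analogues of some numfmt templates shown above.
assert f"{20.9:.2f}" == "20.90"          # template "#.00"
assert f"{12.34:.1f}" == "12.3"          # template "#.#"
assert f"{12.3456789:.4f}" == "12.3457"  # template "#.0000"
assert f"{12.34:.0f}" == "12"            # template "#"
assert f"{12.3456:.3f}" == "12.346"      # template "#.###"
assert f"{12.3456:05.1f}" == "012.3"     # template "000.#" (zero-padded width)
print("all templates matched")
```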
The MapleTA[Builtin][numfmt] command was introduced in Maple 18.
|
How To Debug · USACO Guide
General tips for identifying errors within your solution.
things to try in an ICPC contest
Asking for help FAQ
This module is based on the resources above. I've included the content that is most relevant to USACO.
Your code should be readable (to yourself at the very least).
Following style tips from the Adding Solutions module may help with this.
Wrong Answer (or Runtime Error)
Is your output format correct?
Did you remove debug output before submitting?
Do you handle all corner cases (such as
N=1
) / special cases?
For problems with multiple independent test cases (such as this one), are you clearing all data structures between test cases?
Keep in mind that your solution might only behave incorrectly when a test case is followed by a smaller test case.
Have you understood the problem correctly? Read the full problem statement again.
Confusing
N
M
i
j
, etc.?
Shadowed or unused or uninitialized variables?
In C++, compiling with warning options (-Wall -Wshadow) should detect these.
Any undefined behavior? It can result in different outputs locally vs online (ex. maybe you are passing the sample case locally but not when you submit to the USACO judge). Try running your code in multiple places (ex. USACO Guide IDE, Codeforces Custom Test) and see if you always get the same result. Common examples of undefined behavior include:
(C++) Uninitialized variables
(C++) Not returning anything from non-void functions
(C++) Array out of bounds
Consider using ::at as mentioned here.
(C++ / Java) Signed integer overflow
USACO problems usually contain a note of the following form if the output format requires 64-bit rather than 32-bit integers, but it's easy to miss:
"Note that the large size of integers involved in this problem may require the use of 64-bit integer data types (e.g., a long long in C/C++)."
In C++, compiling with additional options can help catch these.
Add assertions and resubmit.
Any NaNs (ex. taking the square root of a negative number)?
Try using a type with more precision (ex. long double instead of double in C++).
Are you printing the output to the correct amount of precision?
Are you sure your algorithm works?
Go through the algorithm for a simple case / write some testcases to run your algorithm on.
Write a test case generator and compare the outputs of your solution against that of a (simpler) slow solution, or a model solution if available.
See stress testing for more information.
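A minimal stress-testing loop looks like the sketch below. The generator and the two solutions are stand-ins (a toy maximum pair sum problem invented for this example, not from the module); in a contest you would plug in your actual solution and a brute force.

```python
import random

def slow_max_pair_sum(a):
    # Brute force: check every pair. Obviously correct, used as the reference.
    return max(a[i] + a[j] for i in range(len(a)) for j in range(i + 1, len(a)))

def fast_max_pair_sum(a):
    # Candidate solution: sum of the two largest elements.
    b = sorted(a)
    return b[-1] + b[-2]

random.seed(0)
for test in range(1000):
    # Small random cases: small inputs make failing cases easy to debug by hand.
    n = random.randint(2, 8)
    a = [random.randint(-10, 10) for _ in range(n)]
    expected, got = slow_max_pair_sum(a), fast_max_pair_sum(a)
    if expected != got:
        print("mismatch on", a, "expected", expected, "got", got)
        break
else:
    print("all tests passed")
```

Keep the generated cases tiny: a failing case with 5 elements is far easier to trace by hand than one with 100,000.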
Runtime Error
Any undefined behavior? (see above)
Any assertions that might fail?
Any possible division by 0? (mod 0 for example)
Any possible infinite recursion?
Invalidated pointers or iterators?
Are you using too much memory?
Time Limit Exceeded
Do you have any possible infinite loops?
Did you remove debug output before submitting (ex. are you printing a lot of information to stderr)?
Unnecessary copying of data? C++ - Consider passing variables by reference.
C++ - Try substituting arrays in place of vectors.
Last Resort
Rewrite your solution from the start.
Be sure to save your original solution. It's always possible that you might introduce more bugs.
Before Posting on the USACO Guide Forum
If you have found a small test case on which your program fails and you know why the expected output is correct, you should be able to figure out why your program is incorrect on your own.
Add print statements to your code and compare their outputs to what you get when you simulate your program by hand.
Check for undefined behavior as described above.
If you haven't found a small test case on which your solution fails,
Try downloading the official test data and seeing if your solution fails on any small test cases.
If that doesn't work, then try generating a small test case on which your solution fails as described above.
|
GetExpression - Maple Help
GetExpression(obj)
The GetExpression command returns the expression that obj represents, if one exists, followed by any variables used in the expression. Otherwise, nothing is returned.
If obj was not created with user-supplied variables, then automatically generated variables may appear in the returned result.
\mathrm{with}\left(\mathrm{Grading}\right):
L≔\mathrm{LinearFunction}\left([0,5],[3,-2]\right):
\mathrm{GetExpression}\left(L\right)
\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{7}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{v}}{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{v}
Q≔\mathrm{QuadraticFunction}\left({w}^{2}-2w+5\right)
\textcolor[rgb]{0,0,1}{Q}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{<< QuadraticFunction: w^2-2*w+5>>}}
\mathrm{GetExpression}\left(Q\right)
{\textcolor[rgb]{0,0,1}{w}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{w}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{w}
The Grading:-GetExpression command was introduced in Maple 18.
|
Rs Aggarwal 2018 for Class 8 Math Chapter 21 - Data Handling
RS Aggarwal 2018 Solutions for Class 8 Math Chapter 21 (Data Handling) are provided here with simple step-by-step explanations. These solutions are popular among Class 8 students, as they come in handy for quickly completing homework and preparing for exams. All questions and answers from the RS Aggarwal 2018 book for Class 8 Math Chapter 21 are provided here for free, prepared by experts.
The number of members in 20 families are given below:
Prepare a frequency distribution of the data.
First, we will arrange the data in increasing order.
Frequency table of the above data:
Number of members Tally marks Number of families (frequency)
||||
||||
\overline{)||||}
||
|||
||
Arranging the outcomes in increasing order, we get:
1 was thrown 6 times
Frequency distribution table of the above data:
Number Tally mark frequency
\overline{)||||} |
\overline{)||||} |
||||
\overline{)||||}
||||
\overline{)||||}
The following data gives the number of children in 40 families:
Arranging the data in ascending order, we get:
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5, 6, 6, 6
4 families with no children
7 families with 1 child
12 families with 2 children
5 families with 3 children
The frequency distribution table for the above data can be generated as below:
Number of children Tally mark Number of families (frequency)
||||
\overline{)||||} ||
\overline{)||||} \overline{)||||} ||
\overline{)||||}
\overline{)||||}
|
|||
|||
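The counting step in these problems is mechanical. As an illustration (using the 40-family data listed in the solution above), a few lines of Python reproduce the frequencies:

```python
from collections import Counter

# Number of children in each of the 40 families, in ascending order.
children = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1,
            2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
            3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4,
            5, 5, 5, 6, 6, 6]

freq = Counter(children)
for value in sorted(freq):
    print(value, freq[value])
# prints: 0 4, 1 7, 2 12, 3 5, 4 6, 5 3, 6 3 (one pair per line)
```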
The marks obtained by 40 students of a class in an examination are given below:
8, 47, 22, 31, 17, 13, 38, 26, 3, 34, 29, 11, 22, 7, 15, 24, 38, 31, 21, 35, 42, 24, 45, 23, 21, 27, 29, 49, 25, 48, 21, 15, 18, 27, 19, 45, 14, 34, 37, 34.
Prepare a frequency distribution table with equal class intervals, starting from 0−10 (where 10 is not included).
Arranging the given observations in ascending order, we get:
Thus, the frequency distribution may be represented as shown below:
0-10 ||| 3
The electricity bills (in rupees) of 25 houses of a certain locality for a month are given below:
Arrange the above data in increasing order and form a frequency table using equal class intervals, starting from 300−400, where 400 is not included.
Arranging the given observations in increasing order, we get:
500-600 |||| | 6
The weekly wages (in rupees) of 28 workers of a factory are given below:
Construct a frequency table with equal class intervals, taking the first of the class intervals as 610−630, where 630 is not included.
The weekly pocket expenses (in rupees) of 30 students of a class are given below:
62, 80, 110, 75, 84, 73, 60, 62, 100, 87, 78, 94, 117, 86, 65, 68, 90, 80, 118, 72, 95, 72, 103, 96, 64, 94, 87, 85, 105, 115.
Construct a frequency table with class intervals 60−70 (where 70 is not included), 70−80, 80−90, etc.
Thus, the frequency distribution may be represented as given below:
Weekly pocket expenses
90-100 |||| 5
The daily earnings (in rupees) of 24 stores in a market was recorded as under:
Prepare a frequency table taking equal class sizes. One such class is 500−550, where 550 is not included.
The heights (in cm) of 22 students were recorded as under:
Prepare a frequency distribution table, taking equal class intervals and starting from 125−130, where 130 is not included.
Arranging the given observations in increasing order, we get 125, 125, 126, 126, 128, 130, 130, 130, 132, 132, 133, 134, 135, 135, 135, 136, 136, 138, 142, 142, 143 and 144.
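The same grouping can be checked in code. The sketch below bins the 22 heights listed above into the class intervals 125−130, 130−135, 135−140, and 140−145 (lower bound included, upper bound excluded):

```python
heights = [125, 125, 126, 126, 128, 130, 130, 130, 132, 132, 133,
           134, 135, 135, 135, 136, 136, 138, 142, 142, 143, 144]

# Class intervals of width 5, lower bound included, upper bound excluded.
bins = [(lo, lo + 5) for lo in range(125, 145, 5)]
freq = [sum(lo <= h < hi for h in heights) for lo, hi in bins]

for (lo, hi), f in zip(bins, freq):
    print(f"{lo}-{hi}: {f}")
# 125-130: 5, 130-135: 7, 135-140: 6, 140-145: 4
```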
The top speeds of 30 different land animals have been organised into a frequency table given below:
Maximum speed (in km/h) 10–20 20–30 30–40 40–50 50–60 60–70
Number of animals 3 5 10 8 0 2
The ages (in years) of 360 patients treated in a hospital on a particular day are given below:
Age (in years) 10–20 20–30 30–40 40–50 50–60 60–70
Hint. Take 10 small divisions = 10 patients.
Draw a histogram for the frequency distribution of following data:
Class interval 8–13 13–18 18–23 23–28 28–33 33–38 38–43
Hint. Take 1 small division = 10.
Draw a histogram for the frequency distribution of the following data:
Class interval 20–25 25–30 30–35 35–40 40–45 45–50
Class interval 600–640 640–680 680–720 720–760 760–800 800–840
The following table shows the number of illiterate persons in the age group (10–58 years) in a town. Represent the given data by means of a histogram.
Age group (in years) 10–18 18–26 26–34 34–42 42–50 50–58
Number of illiterate persons 175 325 100 150 250 525
Hint. Take 1 small division = 5 persons.
The marks obtained (out of 20) by 30 students of a class in a test are given below:
7, 10, 8, 16, 13, 14, 15, 11, 18, 11, 15, 10, 7, 14, 20, 19, 15, 16, 14, 20, 10, 11, 14, 17, 13, 12, 15, 14, 16, 17.
Prepare a frequency distribution table for the above data using class intervals of equal width, in which one class interval is 3–8 (including 3 and excluding 8). From the frequency distribution table so obtained, draw a histogram.
||
\overline{)||||}
|||
\overline{)||||}
\overline{)||||}
\overline{)||||}
|
||||
The weights (in kg) of 30 students of a class are
Prepare a frequency distribution table using one class interval as 30–35, in which 30 is included and 35 excluded. Using the above data, draw a histogram.
Frequency distribution table
Weight(in kg) Tally Marks Number of students
||
\overline{)||||}
||
\overline{)||||}
\overline{)||||}
|
\overline{)||||}
||
|||
Look at the histogram given below and answer the question that follow:
(i) How many students have height more than or equal to 135 cm but less than 150 cm?
(ii) Which class interval has the least number of students?
(iv) How many students have height less than 140 cm?
(i) Number of students =
14+18+10=42
(ii) Class interval with least number of students = 150−155
(iii) Class size = 130 − 125 = 5
(iv) Number of students with height less than 140 =
14+8+6=28
(ii) How many teachers are of age less than 45 years?
(iii) How many teachers are of age 40 years or more but less than 55 years?
(i) The given histogram tells about the number of teachers in a school from the age group 25-30 to 55-60.
(ii) Number of teachers with age less than 45 years =
3+5+7+6=21
(iii) Number of teachers with age more than 40 years but less than 55 years =
6+4+3=13
The histogram given below shows the number of literate females in the age group of 10 to 40 years. Study the histogram carefully and answer the question that follow:
(i) Write the classes, assuming that all the classes are of equal width.
Hint. Number of class intervals of equal width = 6 (given), class width =
\frac{\left(40-10\right)}{6}=5.
(i) Age group = 10 to 40
Given that all the classes are of equal width:
Number of class interval of equal width = 6
Class width =
\frac{40-10}{6}=5
Classes are 10-15,15-20,20-25,25-30,30-35,35-40
(ii) Class width =
\frac{40-10}{6}=5
(iii)Age group with least literate females = 10-15
(iv)Age group with highest literate females = 15-20
|
Isabelle/HOL and Proof General Reference [Isabelle/HOL Support Wiki]
Isabelle/HOL and Proof General Reference
Theorem Modification
Assorted gimmicks
This site is intended to help getting started with using Isabelle/HOL and the Isabelle jEdit editor. This page in particular is the quick cheat sheet and can be used as a reference.
Please use the FAQ - Ask Questions page to post and view questions and the Exam Questions page to collect possible questions for the exams. The Goals of the Exercises page summarizes what you should have learned by doing them.
For best practices in Isabelle/HOL, see the corresponding page.
Please also check out and add stuff concerning the Editor, Isabelle/HOL Syntax and useful External References. Last but not least, you can post and check out Example Code.
Help us to improve the Wiki by checking and complementing existing content, as well as creating wanted pages!
You can download and install Isabelle from http://isabelle.in.tum.de/. To start it, run `isabelle jedit -m brackets`.
Isabelle is already installed on the SCI machines tux1, tux2, tux3 and tux9. On those machines you can start it by simply typing `isabelle` into a terminal.
Every Isabelle/HOL theory looks like this:
theory MyTheory imports Main
begin
  (* definitions, lemmas and proofs go here *)
end
It is important that the name of the theory is equal to the name of the file.
Isabelle uses many symbols which are not available on a normal keyboard. You can either use the “Symbols” tab in jedit or type the following shortcuts and use autocomplete with “TAB” to get the correct symbol.
∧ : /\ or &
∨ : \/ or |
¬ : \not or ~
⟶ : -->
⟷ : <->
∀ : \forall or !
∃ : \exists or ?
⋀ : !!
⟹ : ==>
≡ : ==
⟦ : [|
⟧ : |]
= : =
≠ : ~=
< : <
> : >
≤ : <=
≥ : >=
∈ : \in or :
∉ : \notin or ~:
⊂ : \subset
⊆ : \subseteq
∩ : \inter
∪ : \union
If you know LaTeX, try using commands you know from there.
Heuristic: try ML! Major differences:
Function definition with primrec, fun and function
List constructor is #, not ::.
Types and expressions have to be enclosed in double quotes
definition $ident where “$ident == $expression” – assigns the value of $expression to identifier $ident
lemma ($name :)? “$formula” – states $formula as lemma, assigns it a name (if given) and creates a proof goal
theorem ($name :)? “$formula” – states $formula as theorem, assigns it a name (if given) and creates a proof goal
primrec – used for primitive recursive functions (nothing to prove, but heavy constraints)
fun – used for more general functions, but have to prove termination (e.g. via well-founded order)
function – used for arbitrary function definitions; have to prove all kinds of stuff
(Co)Inductive Stuff
Apply method $method with parameters $params by entering apply ($method $params). If you have no parameters, you can just write apply $method. For example, a (very simple) proof can look like this:
lemma "[| A; A --> B |] ==> B"
apply (frule mp)
apply assumption+
done
assumption — x
cases an expression x x case_tac
drule a rule x drule(k), drule_tac
erule a rule x erule(k), erule_tac
fold a definition (or equation) x x
frule a rule x x frule(k), frule_tac
induct a variable x induct_tac
insert a theorem x x cut_tac
rename_tac list of identifiers
rotate_tac an integer x
rule a rule x rule(k), rule_tac
split a splitting rule ? x
subgoal_tac a formula x
subst a definition (or equation) x x subst (asm)
thin_tac a formula
unfold a definition (or equation) x x ?
arith x x linear arithmetics exponentially slow for many operators
auto x x x x
blast x x x logics, sets; fast
clarify x x
clarsimp x x x
force x x x x
metis x x x logics no sets
safe x x x x
simp x x x
simp_all x x x x
Note that safety of automated proof methods can be sabotaged by adding unsafe rules to rule sets used.
There are many more methods. You can print them by issuing the command print_methods, key combination C-c C-a h m or via [Isabelle → Show Me → Methods].
+ Applies the method as often as applicable, but at least once. assumption+
(_,_) Applies all the methods in sequence and fails if any one is not applicable. (rule mp, simp)
[n] Applies the method only to the first n subgoals. auto[5]
The sequencing of methods has the additional effect that backtracking is used to make the whole sequence work. As many methods could be applied in different ways, e.g. by matching the premise with a different assumption, failure of one step of the sequence just leads to trying another possibility for one of the steps before.
Attributes (also: directives) can be used to obtain new, more specific theorems from already proven, more general ones. In other words, they allow you to adapt theorems to your current needs. There are two major uses:
$thm [$attr1, $attr2, …] – can be used in any place where you would put a theorem or rule. Instead of $thm itself, the result of applying the given attributes from left to right is used.
lemmas $name = $thm [$attr1, $attr2, …] – assigns a new name to the modified theorem, enabling later (re)use.
(lemma|theorem) $name [$attr1, $attr2, …] : $formula – applies the given attributes from left to right after the proof is finished, assigning the result to the given name.
of $t1 $t2 … Replaces variables with the given terms in order. Use _ for keeping the variable. (Example)
where $v1=$t1 and $v2=$t2 … Replaces the specified variables in a theorem with the given terms. (Example)
THEN $rule Applies the given rule to a theorem and returns the conclusion. (Example)
OF $thm1 $thm2 Generates a new instance of a rule using the given theorems. (Example)
simplified Applies simp to a theorem and returns the result. (Example)
rotated $k Rotates the given theorem's assumptions by $k to the left. If no value is given, $k=1 is assumed. (Example)
symmetric Equivalent to THEN sym. (Example)
Other attributes perform some action with a theorem. They probably only make sense in lemma/theorem definitions or together with lemmas (see above):
iff Enables both simplifier and classical reasoner to use this theorem. Only use with equivalences whose right-hand side is “simpler” than left-hand side.
rule_format Lifts a top-level implication into Pure logic, i.e. enables reasoners to use the theorem as rule.
simp Allows the simplifier to use this theorem.
There are many more attributes. You can print them by issuing the command print_attributes, key combination C-c C-a h a or via [Isabelle → Show Me → Attributes].
done finishes the proof if no more subgoals left
by $method tries to finish the rest of the proof with the given method followed by assumption+
sorry forces an unfinished proof to be considered successful (i.e. lemma/theorem is usable!)
oops aborts the proof and drops the lemma/theorem
conjI, conjE, conjunct1, conjunct2
disjI1, disjI2, disjE
impI, impE, mp
iffI, iffE
allI, spec, bspec
exI, exE, bexI
notI, notE
contrapos_(pp|np|pn|nn)
Basically any theorem!
thm $name – command that shows theorem with name $name
value $expr – command that evaluates the specified expression and prints the result
prefer $k – command that moves subgoal number $k to the top of the list
defer – command that moves the current subgoal to the bottom of the list
quickcheck – command that tries to find a counterexample to the current subgoal. Useful to check whether an unsafe rule did harm. Note: it might not find a counterexample even if the goal cannot be proven!
refute and nitpick – similar to quickcheck but try to find counterexample models, not only variable assignments. Can handle more constructs.
sledgehammer – command that invokes fully automated theorem provers both locally and on remote clusters. Tries to find a (minimal) set of theorems needed to solve the current goal.
lfp $function – a function that yields the least fixpoint of the given function
undefined – a distinguished value for any type
f(x := y) – the function update: the result of this expression is the function f updated such that it now returns y for parameter x; the other values do not change.
{x. P x} – the set of values fulfilling predicate P. For instance, {x::nat. x dvd 125} is the set of (natural) divisors of 125.
{E x | x. P x} – the set of values created by expression E, for all values x fulfilling predicate P. For example, {x + y | x y. x < 10 /\ y < 10} is the set of sums of all pairs of natural numbers with a single digit.
start.txt · Last modified: 2014/04/30 15:28 by peter
|
Compute positive-, negative-, and zero-sequence components of three-phase signal - Simulink - MathWorks Nordic
Compute positive-, negative-, and zero-sequence components of three-phase signal
The Sequence Analyzer block outputs the magnitude and phase of the positive-, negative-, and zero-sequence components of a set of three balanced or unbalanced signals. Index 1 denotes the positive sequence, index 2 denotes the negative sequence, and index 0 denotes the zero sequence. The signals can optionally contain harmonics. The three sequence components of a three-phase signal (voltages V1 V2 V0 or currents I1 I2 I0) are computed as follows:
\begin{array}{l}{V}_{1}=\frac{1}{3}\left({V}_{a}+a\cdot {V}_{b}+{a}^{2}\cdot {V}_{c}\right)\\ {V}_{2}=\frac{1}{3}\left({V}_{a}+{a}^{2}\cdot {V}_{b}+a\cdot {V}_{c}\right)\\ {V}_{0}=\frac{1}{3}\left({V}_{a}+{V}_{b}+{V}_{c}\right)\\ {V}_{a},{V}_{b},{V}_{c}=\text{three voltage phasors at the specified frequency}\\ a={e}^{j2\pi /3}=1\angle {120}^{\circ }\text{ (complex operator)}\end{array}
A Fourier analysis over a sliding window of one cycle of the specified frequency is first applied to the three input signals. It evaluates the phasor values Va, Vb, and Vc at the specified fundamental or harmonic frequency. Then the transformation is applied to obtain the positive sequence, negative sequence, and zero sequence.
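Once the phasors are known, the transformation itself is a few lines of complex arithmetic. The sketch below is plain Python, not the Simulink block; it applies the formulas above to three phasors given as complex numbers:

```python
import cmath

a = cmath.exp(2j * cmath.pi / 3)  # complex operator: magnitude 1, angle 120 degrees

def sequence_components(va, vb, vc):
    """Return (positive, negative, zero) sequence phasors."""
    v1 = (va + a * vb + a**2 * vc) / 3
    v2 = (va + a**2 * vb + a * vc) / 3
    v0 = (va + vb + vc) / 3
    return v1, v2, v0

# A balanced three-phase set: only the positive sequence survives.
va = 1
vb = cmath.rect(1, -2 * cmath.pi / 3)   # 1 at -120 degrees
vc = cmath.rect(1, 2 * cmath.pi / 3)    # 1 at +120 degrees
v1, v2, v0 = sequence_components(va, vb, vc)
print(abs(v1), abs(v2), abs(v0))  # approximately 1, 0, 0
```

For an unbalanced set, the negative- and zero-sequence magnitudes become nonzero, which is exactly what the block reports at indices 2 and 0.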
As the block uses a running average window to perform the Fourier analysis, one cycle of simulation must complete before the outputs give the correct magnitude and angle. For example, the block response to a step change of V1 is a one-cycle ramp. For the first cycle of simulation, the output is held constant using the values specified by the initial input parameters.
Specify the fundamental frequency, in hertz, of the three-phase input signal. Default is 60.
Harmonic n (1=fundamental)
Specify the harmonic component to evaluate the sequences. Set to 1 to compute the sequences at the fundamental frequency or to the number corresponding to the desired harmonic. Default is 1.
Specify the sequence component the block outputs. The options include Positive, Negative, Zero, and Positive Negative Zero (default). Select Positive Negative Zero to calculate all the sequences.
Specify the initial magnitude and phase, in degrees, of the positive-sequence component of the input signal. Default is [1, 0].
Connect the vectorized signal containing the three [a b c] sinusoidal signals to this input.
|u| Magnitude
Outputs the magnitude (peak value) of the specified sequence component(s), in the same units as the abc input signals.
∠u Phase
Outputs the phase, in degrees, of the specified components.
The power_SequenceAnalyzer model shows the use of the Sequence Analyzer block to compute the three sequence components of a three-phase sinusoidal voltage. The model sample time
|
Thyristor - MATLAB - MathWorks Italia
Thyristor (Piecewise Linear)
Model Gate Port and Thermal Effects
On-state behaviour and switching losses
On-state voltage, Vak(Tj,Iak)
Temperature vector, Tj
Anode-cathode current vector, Iak
Switch-on loss
Natural commutation rectification loss
Off-state voltage for switching loss data
On-state current for switching loss data
Switch-on loss, Eon(Tj,Iak)
Temperature vector for switching losses, Tj
Anode-cathode current vector for switching losses, Iak
Wait time before switch-on current measurement
Integral Diode
Update on switching losses and thermal modelling options
Removal of energy dissipation time constant parameter
The Thyristor (Piecewise Linear) block models a thyristor. The I-V characteristic for a thyristor is such that the thyristor turns on if the gate-cathode voltage exceeds the specified gate trigger voltage. The device turns off if the load current falls below the specified holding-current value.
To define the I-V characteristic of the thyristor, set the On-state behaviour and switching losses parameter to either Specify constant values or Tabulate with temperature and current. The Tabulate with temperature and current option is available only if you expose the thermal port of the block.
In the on state, the anode-cathode path behaves like a linear diode with forward-voltage drop, Vf, and on-resistance, Ron. However, if you expose the thermal port of the block and parameterize the device using tabulated I-V data, the tabulated resistance is a function of the temperature and current.
In the off state, the anode-cathode path behaves like a linear resistor with a low off-state conductance, Goff.
The defining Simscape™ equations for the block are:
if (v > Vf)&&((G>Vgt)||(i>Ih))
    i == (v - Vf*(1-Ron*Goff))/Ron;
else
    i == v*Goff;
end
v is the anode-cathode voltage.
Vf is the forward voltage.
G is the gate voltage.
Vgt is the gate trigger voltage.
i is the anode-cathode current.
Ih is the holding current.
Ron is the on-state resistance.
Goff is the off-state conductance.
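The equations above amount to a simple piecewise-linear function of v, G, and the conduction state. The Python sketch below uses hypothetical parameter values for illustration and solves the on-state branch explicitly, whereas the real block solves these equations implicitly within Simscape:

```python
# Hypothetical parameter values, chosen only for illustration.
Vf, Ron, Goff = 0.8, 0.01, 1e-6   # forward voltage, on-resistance, off-conductance
Vgt, Ih = 1.0, 0.05               # gate trigger voltage, holding current

def thyristor_current(v, g, i_prev):
    """Piecewise-linear thyristor: turns on via the gate, stays on while the
    previous current exceeds the holding current (latching behaviour)."""
    if v > Vf and (g > Vgt or i_prev > Ih):
        return (v - Vf * (1 - Ron * Goff)) / Ron   # on-state branch
    return v * Goff                                # off-state branch

i_off = thyristor_current(10.0, 0.0, 0.0)   # gate low, not yet conducting
i_on = thyristor_current(10.0, 1.5, 0.0)    # gate triggered above Vgt
print(i_off, i_on)
```

Note how the i_prev argument stands in for the latching condition i > Ih: once conducting, the device stays on even after the gate signal is removed, until the load current falls below the holding current.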
Using the Integral Diode tab of the block dialog box, you can include an integral cathode-anode diode. An integral diode protects the semiconductor device by providing a conduction path for reverse current. An inductive load can produce a high reverse-voltage spike when the semiconductor device suddenly switches off the voltage supply to the load.
Prioritize simulation speed. Protection diode with no dynamics The block includes an integral copy of the Diode block. To parameterize the internal Diode block, use the Protection parameters.
Precisely specify reverse-mode charge dynamics. Protection diode with charge dynamics The block includes an integral copy of the dynamic model of the Diode block. To parameterize the internal Diode block, use the Protection parameters.
You can choose between physical or electrical ports to control the gate terminal and expose the thermal port to model the heat that switching events and conduction losses generate. To choose the gate control port and expose the thermal port, set the Modeling option parameter to either:
PS control port — Contains a physical signal port that is associated with the gate terminal.
Electrical control port — Contains an electrical conserving port that is associated with the gate terminal.
PS control port | Thermal port — Contains a thermal port and a physical signal port that is associated with the gate terminal.
Electrical control port | Thermal port — Contains a thermal port and an electrical conserving port that is associated with the gate terminal.
For more information about using thermal ports, see Simulating Thermal Effects in Semiconductors.
Switching losses are one of the main sources of thermal loss in semiconductors. During each on switching transition, the thyristor parasitics store and then dissipate energy.
Switching losses depend on the off-state voltage and the on-state current. When the switching device is turned on, the power losses depend on the initial off-state voltage across the device and the final on-state current once the device is fully in its on state. When the switching device is turned off, the power loss is defined by the Natural commutation rectification loss parameter value. This is the rectification loss applied at the point that the device switches off due to the current falling below the holding current. This loss is a fixed value and it is not scaled by the off-state voltage or the on-state current.
In this block, switching losses are applied by stepping up the junction temperature with a value equal to the switching loss divided by the total thermal mass at the junction. The Switch-on loss, Eon(Tj,Iak) parameter value sets the sizes of the switching losses and they are either fixed or dependent on junction temperature and drain-source current. In both cases, losses are scaled by the off-state voltage prior to the latest device turn-on event.
As the final current after a switching event is not known during the simulation, the block records the on-state current once the current has exceeded the holding current for longer than the value specified in the Wait time before switch-on current measurement parameter. Similarly, the block records the off-state voltage at the point that the device is commanded on. For this reason, the simlog does not report the switching losses to the thermal network until one switching cycle later.
For all ideal switching devices, the switching losses are reported in the simlog as lastTurnOffLoss and lastTurnOnLoss (for the thyristor, this is the Natural commutation rectification loss) and recorded as a pulse with amplitude equal to the energy loss. If you use a script to sum the total losses over a defined simulation period, you must sum the pulse values at each pulse rising edge. Alternatively, you can use the ee_getPowerLossSummary and ee_getPowerLossTimeSeries functions to extract conduction and switching losses from logged data.
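If you sum the losses with your own script, the rising-edge bookkeeping might look like this minimal sketch (the function name and sample data are hypothetical; each pulse amplitude stands in for one event's energy in joules):

```python
import numpy as np

# Loss signals such as lastTurnOnLoss are pulses whose amplitude equals the
# energy of each event. Summing every sample would overcount, so we sum the
# signal value only at each pulse rising edge.
def total_switching_loss(loss_signal):
    loss = np.asarray(loss_signal, dtype=float)
    rising = np.flatnonzero(np.diff(loss, prepend=0.0) > 0)  # rising edges
    return loss[rising].sum()

# Two events: a 0.5 J pulse and a 0.3 J pulse.
pulses = [0, 0, 0.5, 0.5, 0, 0, 0.3, 0.3, 0.3, 0]
total = total_switching_loss(pulses)  # 0.8
```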
The figure shows the block port names.
Port associated with the gate terminal. You can set the port to either a physical signal or electrical port.
A — Anode terminal
Electrical conserving port associated with the anode terminal.
K — Cathode terminal
Electrical conserving port associated with the cathode terminal.
To enable this port, set Modeling option to either PS control port | Thermal port or Electrical control port | Thermal port.
Modeling option — Gate control and thermal ports visibility
PS control port (default) | Electrical control port | PS control port | Thermal port | Electrical control port | Thermal port
Whether to specify physical or electrical control ports for the switching device gate and enable the thermal port.
This table shows how the visibility of Main parameters depends on how you configure the Modeling option and On-state behavior and switching losses parameters. To learn how to read this table, see Parameter Dependencies.
PS control port or Electrical control port — visible Main parameters:
- Gate trigger voltage, Vgt
- Holding current
- Forward voltage, Vf
- On-state resistance
- Off-state conductance

PS control port | Thermal port or Electrical control port | Thermal port — visible Main parameters:
- Gate trigger voltage, Vgt
- Holding current
- On-state behavior and switching losses
- If On-state behavior and switching losses is Specify constant values: Forward voltage, Vf; On-state resistance; Off-state conductance
- If On-state behavior and switching losses is Tabulate with temperature and current: On-state voltage, Vak(Tj,Iak); Temperature vector, Tj; Anode-cathode current vector, Iak
On-state behavior and switching losses — Parameterization method
Specify constant values (default) | Tabulate with temperature and current
Select a parameterization method. The option that you select determines which other parameters are enabled. Options are:
Specify constant values — Use scalar values to specify the output current, switch-on loss, switch-off loss, and on-state voltage data. This is the default parameterization method.
Tabulate with temperature and current — Use vectors to specify the output current, switch-on loss, switch-off loss, and temperature data.
See the Main Parameter Dependencies table.
On-state voltage, Vak(Tj,Iak) — On-state voltage
[0, .1, .6, .8, 1, 1.3, 1.6, 2, 2.4; 0, .1, .7, 1, 1.2, 1.5, 1.9, 2.4, 2.8] V (default)
Voltage drop across the device while it is in a triggered conductive state. This parameter is defined as a function of temperature and final on-state output current. Specify this parameter using a matrix quantity, with one row per temperature.
Temperature vector, Tj — Temperature vector
[298.15, 398.15] K (default)
Temperature values at which the on-state voltage is specified. Specify this parameter using a vector quantity.
Anode-cathode current vector, Iak — Anode-cathode current vector
[0, .1, 1, 5, 10, 20, 40, 70, 100] A (default)
Anode-cathode currents for which the on-state voltage is defined. The first element must be zero. Specify this parameter using a vector quantity.
To enable these parameters, set Modeling option to PS control port | Thermal port or Electrical control port | Thermal port.
Switch-on loss — Switch-on loss
22.86e-3 J (default)
Energy dissipated during a single switch-on event. Specify this parameter using a scalar quantity.
To enable this parameter, set On-state behavior and switching losses to Specify constant values.
Natural commutation rectification loss — Natural commutation rectification loss
10e-3 J (default)
Rectification loss applied at the point that the block switches off when the current falls below the Holding current. Specify this parameter using a scalar quantity.
Off-state voltage for switching loss data — Off-state voltage for losses data
The output voltage of the device during the off state. This is the blocking voltage at which the switch-on loss and switch-off loss data are defined.
On-state current for switching loss data — Output current
600 A (default)
Output current for which the switch-on loss, switch-off loss, and on-state voltage are defined. Specify this parameter using a scalar quantity.
This parameter is measured at the point that the gate voltage falls below the Gate trigger voltage, Vgt. The turn-on pulse is longer than the time it takes the current to reach its maximum value.
Switch-on loss, Eon(Tj,Iak) — Switch-on loss
[0, .0024, .024, .12, .2, .48, 1.04, 2.16, 3.24; 0, .003, .03, .15, .25, .6, 1.3, 2.7, 4.05] * 1e-3 J (default)
Energy dissipated during a single switch-on event. This parameter is defined as a function of temperature and final on-state output current. Specify this parameter using a matrix quantity.
To enable this parameter, set On-state behavior and switching losses to Tabulate with temperature and current.
Temperature vector for switching losses, Tj — Temperature vector for switching losses
Temperature values at which the switch-on loss and switch-off loss are specified. Specify this parameter using a vector quantity.
Anode-cathode current vector for switching losses, Iak — Anode-cathode current vector for switching losses
Anode-cathode currents for which the switch-on loss and switch-off loss are defined. The first element must be zero. Specify this parameter using a vector quantity.
Wait time before switch-on current measurement — Time before on-state current is measured
Time to wait before recording the on-state current
None (default) | Protection diode with no dynamics | Protection diode with charge dynamics
Block integral protection diode. The default value is None.
Protection diode with no dynamics
Protection diode with charge dynamics
Piecewise Linear (default) | Tabulated I-V curve
Tabulated I-V curve — Use tabulated forward bias I-V data plus fixed reverse bias off conductance.
This parameter is visible only when the thermal port is exposed and the Integral protection diode parameter is set to Protection diode with no dynamics or Protection diode with charge dynamics.
This parameter is visible only when the thermal port is exposed and the Integral protection diode parameter is set to Protection diode with no dynamics or Protection diode with charge dynamics and Diode model is set to Tabulated I-V curve.
If the thermal port is hidden, set Integral protection diode to Protection diode with no dynamics or Protection diode with charge dynamics.
If the thermal port is exposed, set Integral protection diode to Protection diode with no dynamics or Protection diode with charge dynamics and Diode model to Piecewise linear.
To enable this parameter, expose the thermal port and set Diode model to Tabulated I-V curve and Table type to Table in If(Tj,Vf) form.
Vector of junction temperatures. This parameter must be a vector of at least two elements.
To enable this parameter, expose the thermal port and set Diode model to Tabulated I-V curve.
To enable this parameter, expose the thermal port and set Diode model to Tabulated I-V curve and Table type to Table in Vf(Tj,If) form.
This parameter is visible only when the Integral protection diode parameter is set to Protection diode with no dynamics or Protection diode with charge dynamics.
This parameter is visible only when the Integral protection diode parameter is set to Protection diode with charge dynamics.
This parameter is visible only when the Integral protection diode parameter is set to Protection diode with charge dynamics and the Reverse recovery time parameterization parameter is set to Specify reverse recovery time directly.
This parameter is visible only when the Integral protection diode parameter is set to Protection diode with charge dynamics and the Reverse recovery time parameterization parameter is set to Specify stretch factor.
-\frac{i_{RM}^{2}}{2a},
This parameter is visible only when the Integral protection diode parameter is set to Protection diode with charge dynamics and the Reverse recovery time parameterization parameter is set to Specify reverse recovery charge.
This parameter is visible only when the Integral protection diode parameter is set to Protection diode with charge dynamics and the Reverse recovery time parameterization parameter is set to Specify reverse recovery energy.
R2021a: Removal of energy dissipation time constant parameter
From R2021a forward, the Energy dissipation time constant parameter of the Thyristor (Piecewise Linear) block is no longer used. A step in junction temperature now reflects the switching losses. If your model contains a thermal mass directly connected to this block's thermal port, remove it and model the thermal mass inside the component itself.
R2020b: Update on switching losses and thermal modeling options
From R2020b forward, the Thyristor (Piecewise Linear) block has improved losses and thermal modeling options.
Electrical and thermal on-state losses are now always identical. The Thermal loss dependent on parameter and its options, Voltage and current and Voltage, current, and temperature, have been renamed to On-state behavior and switching losses, Specify constant values, and Tabulate with temperature and current:
If you selected Voltage and current for Thermal loss dependent on, then the electrical on-state losses are unchanged and their values are determined using the on-state resistance. However, the thermal on-state losses are now also determined by the on-state resistance. Prior to R2020b, the thermal on-state losses were defined by the product of the On-state voltage and Output current, Iout parameters.
If you selected Voltage, current, and temperature for Thermal loss dependent on, then the thermal on-state losses are unchanged and the On-state voltage, Vak(Tj,Iak) parameter sets their values. However, the electrical on-state losses are now equal to the thermal on-state losses. Prior to R2020b, the electrical on-state losses were defined by the value of the on-state resistance.
The On-state voltage and the switch-off parameters are no longer used.
Diode | GTO | IGBT (Ideal, Switching) | Ideal Semiconductor Switch | MOSFET (Ideal, Switching)
Volume 8, No. 3 (2014) Special issue on the occasion of Pierre de la Harpe's 70th birthday | Groups, Geometry, and Dynamics | EMS Press
Imbeddings into groups of intermediate growth
Laurent Bartholdi, Anna Erschler
Residual properties of groups defined by basic commutators
Gilbert Baumslag, Roman Mikhailov
Mustafa G. Benli, Rostislav Grigorchuk, Yaroslav Vorobets
Geometry of locally compact groups of polynomial growth and shape of large balls
The isomorphism problem for profinite completions of finitely presented, residually finite groups
Relative amenability
Pierre-Emmanuel Caprace, Nicolas Monod
Subgroups approximatively of finite index and wreath products
A normal subgroup theorem for commensurators of lattices
Darren Creutz, Yehuda Shalom
Deformation theory and finite simple quotients of triangle groups II
Michael Larsen, Alexander Lubotzky, Claude Marion
On transitivity and (non)amenability of Aut(F_n) actions on group presentations
Aglaia Myropolska, Tatiana Nagnibeda
Groups, orders, and laws
Hyperbolic groupoids: metric and measure
C*-simple groups without free subgroups
Alexander Yu. Olshanskii, Denis Osin
Countable degree-1 saturation of certain C*-algebras which are coronas of Banach algebras
Convert state-space filter parameters to zero-pole-gain form - MATLAB ss2zp - MathWorks India
Zeros, Poles, and Gain of a Discrete-Time System
Convert state-space filter parameters to zero-pole-gain form
[z,p,k] = ss2zp(A,B,C,D)
[z,p,k] = ss2zp(A,B,C,D,ni)
[z,p,k] = ss2zp(A,B,C,D) converts a state-space representation
\begin{array}{l}\dot{x}=Ax+Bu\\ y=Cx+Du\end{array}
of a given continuous-time or discrete-time system to an equivalent zero-pole-gain representation
H\left(s\right)=\frac{Z\left(s\right)}{P\left(s\right)}=k\frac{\left(s-{z}_{1}\right)\left(s-{z}_{2}\right)\cdots \left(s-{z}_{n}\right)}{\left(s-{p}_{1}\right)\left(s-{p}_{2}\right)\cdots \left(s-{p}_{n}\right)}
whose zeros, poles, and gains represent the transfer function in factored form.
[z,p,k] = ss2zp(A,B,C,D,ni) indicates that the system has multiple inputs and that the nith input has been excited by a unit impulse.
Consider a discrete-time system defined by the transfer function
H\left(z\right)=\frac{2+3{z}^{-1}}{1+0.4{z}^{-1}+{z}^{-2}}.
Determine its zeros, poles, and gain directly from the transfer function. Pad the numerator with zeros so it has the same length as the denominator.
Express the system in state-space form and determine the zeros, poles, and gain using ss2zp.
[A,B,C,D] = tf2ss(b,a);
[z,p,k] = ss2zp(A,B,C,D,1)
State matrix. If the system has r inputs and q outputs and is described by n state variables, then A is n-by-n.
Input-to-state matrix. If the system has r inputs and q outputs and is described by n state variables, then B is n-by-r.
Output matrix. If the system has r inputs and q outputs and is described by n state variables, then C is q-by-n.
Feedthrough matrix. If the system has r inputs and q outputs and is described by n state variables, then D is q-by-r.
ni — Input index
Input index, specified as an integer scalar. If the system has r inputs, use ss2zp with a trailing argument ni = 1, …, r to compute the response to a unit impulse applied to the nith input. Specifying this argument causes ss2zp to use the nith columns of B and D.
Zeros of the system, returned as a matrix. z contains the numerator zeros in its columns. z has as many columns as there are outputs (rows in C).
Poles of the system, returned as a column vector. p contains the pole locations of the denominator coefficients of the transfer function.
ss2zp finds the poles from the eigenvalues of the A array. The zeros are the finite solutions to a generalized eigenvalue problem:
z = eig([A B;C D],diag([ones(1,n) 0]));
In many situations, this algorithm produces spurious large, but finite, zeros. ss2zp interprets these large zeros as infinite.
ss2zp finds the gains by solving for the first nonzero Markov parameters.
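The pole, zero, and gain computation described above can be sketched in NumPy for the discrete-time example H(z) = (2 + 3z⁻¹)/(1 + 0.4z⁻¹ + z⁻²). This is an illustrative analogue, not the ss2zp source: for a SISO system with nonzero feedthrough D, the generalized eigenvalue problem for the zeros reduces to the ordinary eigenvalue problem eig(A − BC/D), and the first nonzero Markov parameter (the gain) is D itself, because this transfer function is biproper.

```python
import numpy as np

# Controllable canonical form of H(z), matching what tf2ss would return.
b = np.array([2.0, 3.0, 0.0])   # numerator, padded to the denominator length
a = np.array([1.0, 0.4, 1.0])   # denominator
A = np.array([[-a[1], -a[2]],
              [ 1.0,   0.0]])
B = np.array([[1.0], [0.0]])
C = (b[1:] - b[0] * a[1:]).reshape(1, -1)
D = np.array([[b[0]]])

p = np.linalg.eigvals(A)                    # poles = eigenvalues of A
z = np.linalg.eigvals(A - B @ C / D[0, 0])  # zeros (valid because D != 0)
k = D[0, 0]                                 # gain = first nonzero Markov parameter
```

The zeros come out as 0 and -1.5, the roots of 2z² + 3z, and the poles are the complex-conjugate roots of z² + 0.4z + 1, in agreement with factoring the transfer function directly.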
[1] Laub, A. J., and B. C. Moore. "Calculation of Transmission Zeros Using QZ Techniques." Automatica. Vol. 14, 1978, p. 557.
Outputs z and p are always complex.
The order of outputs, z and p, might be different in MATLAB® and the generated code.
sos2zp | ss2sos | ss2tf | tf2zp | tf2zpk | zp2ss
|
Prompt criticality - Wikipedia
(Redirected from Prompt critical)
Sustained nuclear fission achieved solely by prompt neutron emission
In nuclear engineering, prompt criticality describes a nuclear fission event in which criticality (the threshold for an exponentially growing nuclear fission chain reaction) is achieved with prompt neutrons alone (neutrons that are released immediately in a fission reaction) and does not rely on delayed neutrons (neutrons released in the subsequent decay of fission fragments). As a result, prompt supercriticality causes a much more rapid growth in the rate of energy release than other forms of criticality. Nuclear weapons are based on prompt criticality, while most nuclear reactors rely on delayed neutrons to achieve criticality.
2 Critical versus prompt-critical
3.1 Prompt critical accidents
4 List of accidental prompt critical excursions
An assembly is critical if each fission event causes, on average, exactly one additional such event in a continual chain. Such a chain is a self-sustaining fission chain reaction. When a uranium-235 (U-235) atom undergoes nuclear fission, it typically releases between one and seven neutrons (with an average of 2.4). In this situation, an assembly is critical if every released neutron has a 1/2.4 ≈ 0.42 (42%) probability of causing another fission event, as opposed to either being absorbed by a non-fission capture event or escaping from the fissile core.
The average number of neutrons that cause new fission events is called the effective neutron multiplication factor, usually denoted by the symbols k-effective, k-eff or k. When k-effective is equal to 1, the assembly is called critical, if k-effective is less than 1 the assembly is said to be subcritical, and if k-effective is greater than 1 the assembly is called supercritical.
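The criticality condition follows from one line of arithmetic using the U-235 numbers above:

```python
# k-effective = (average neutrons released per fission)
#             x (probability each neutron induces another fission).
# Numbers are the article's U-235 example.
nu = 2.4                 # average neutrons per fission
p_fission = 1 / 2.4      # probability a neutron causes another fission
k_eff = nu * p_fission   # exactly 1.0: the assembly is critical
```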
Critical versus prompt-critical[edit]
In a supercritical assembly the number of fissions per unit time, N, along with the power production, increases exponentially with time. How fast it grows depends on the average time it takes, T, for the neutrons released in a fission event to cause another fission. The growth rate of the reaction is given by:
{\displaystyle N(t)=N_{0}k^{\frac {t}{T}}\,}
Most of the neutrons released by a fission event are released in the fission itself. These are called prompt neutrons, and strike other nuclei and cause additional fissions within nanoseconds (an average time interval used by scientists in the Manhattan Project was one shake, or 10 ns). A small additional source of neutrons is the fission products. Some of the nuclei resulting from the fission are radioactive isotopes with short half-lives, and their decay releases additional neutrons after a delay of up to several minutes after the initial fission event. These neutrons, which on average account for less than one percent of the total neutrons released by fission, are called delayed neutrons. The relatively slow timescale on which delayed neutrons appear is an important aspect for the design of nuclear reactors, as it allows the reactor power level to be controlled via the gradual, mechanical movement of control rods. Typically, control rods contain neutron poisons (substances, for example boron or hafnium, that easily capture neutrons without producing any additional ones) as a means of altering k-effective. With the exception of experimental pulsed reactors, nuclear reactors are designed to operate in a delayed-critical mode and are provided with safety systems to prevent them from ever achieving prompt criticality.
In a delayed-critical assembly, the delayed neutrons are needed to make k-effective greater than one. Thus the time between successive generations of the reaction, T, is dominated by the time it takes for the delayed neutrons to be released, of the order of seconds or minutes. Therefore, the reaction will increase slowly, with a long time constant. This is slow enough to allow the reaction to be controlled with electromechanical control systems such as control rods, and accordingly all nuclear reactors are designed to operate in the delayed-criticality regime.
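The practical difference between the two regimes can be made concrete with the growth law N(t) = N₀·k^(t/T). The generation times below are illustrative round numbers, roughly 10 ns for prompt neutrons (one shake) versus a delayed-neutron-dominated generation time on the order of 0.1 s:

```python
import math

# Time for the fission rate to grow by a factor of e:
# N(t) = N0 * k**(t/T)  =>  t_e = T / ln(k).
def e_folding_time(k, T):
    return T / math.log(k)

k = 1.001                              # slightly supercritical
t_prompt = e_folding_time(k, 10e-9)    # ~1e-5 s: far too fast to control
t_delayed = e_folding_time(k, 0.1)     # ~100 s: controllable with rods
```

The seven-orders-of-magnitude gap between the two time constants is why reactors are operated so that delayed neutrons, not prompt neutrons alone, close the criticality gap.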
In contrast, a critical assembly is said to be prompt-critical if it is critical (k = 1) without any contribution from delayed neutrons and prompt-supercritical if it is supercritical (the fission rate growing exponentially, k > 1) without any contribution from delayed neutrons. In this case the time between successive generations of the reaction, T, is limited only by the fission rate from the prompt neutrons, and the increase in the reaction will be extremely rapid, causing a rapid release of energy within a few milliseconds. Prompt-critical assemblies are created by design in nuclear weapons and some specially designed research experiments.
The difference between a prompt neutron and a delayed neutron has to do with the source from which the neutron has been released into the reactor. The neutrons, once released, have no difference except the energy or speed that have been imparted to them. A nuclear weapon relies heavily on prompt-supercriticality (to produce a high peak power in a fraction of a second), whereas nuclear power reactors use delayed-criticality to produce controllable power levels for months or years.
In order to start up a controllable fission reaction, the assembly must be delayed-critical. In other words, k must be greater than 1 (supercritical) without crossing the prompt-critical threshold. In nuclear reactors this is possible due to delayed neutrons. Because it takes some time before these neutrons are emitted following a fission event, it is possible to control the nuclear reaction using control rods.
A steady-state (constant power) reactor is operated so that it is critical due to the delayed neutrons, but would not be so without their contribution. During a gradual and deliberate increase in reactor power level, the reactor is delayed-supercritical. The exponential increase of reactor activity is slow enough to make it possible to control the criticality factor, k, by inserting or withdrawing rods of neutron absorbing material. Using careful control rod movements, it is thus possible to achieve a supercritical reactor core without reaching an unsafe prompt-critical state.
Once a reactor plant is operating at its target or design power level, it can be operated to maintain its critical condition for long periods of time.
Prompt critical accidents[edit]
Main article: Criticality accident
Nuclear reactors can be susceptible to prompt-criticality accidents if a large increase in reactivity (or k-effective) occurs, e.g., following failure of their control and safety systems. The rapid uncontrollable increase in reactor power in prompt-critical conditions is likely to irreparably damage the reactor and in extreme cases, may breach the containment of the reactor. Nuclear reactors' safety systems are designed to prevent prompt criticality and, for defense in depth, reactor structures also provide multiple layers of containment as a precaution against any accidental releases of radioactive fission products.
With the exception of research and experimental reactors, only a small number of reactor accidents are thought to have achieved prompt criticality, for example Chernobyl #4, the U.S. Army's SL-1, and Soviet submarine K-431. In all these examples the uncontrolled surge in power was sufficient to cause an explosion that destroyed each reactor and released radioactive fission products into the atmosphere.
At Chernobyl in 1986, a poorly understood positive scram effect resulted in an overheated reactor core. This led to the rupturing of the fuel elements and water pipes, vaporization of water, a steam explosion, and a meltdown. Estimated power levels prior to the incident suggest that it operated in excess of 30 GW, ten times its 3 GW maximum thermal output. The reactor chamber's 2000-ton lid was lifted by the steam explosion. Since the reactor was not designed with a containment building capable of containing this catastrophic explosion, the accident released large amounts of radioactive material into the environment.
In the other two incidents, the reactor plants failed during a maintenance shutdown due to the rapid and uncontrolled removal of at least one control rod. The SL-1 was a prototype reactor intended for use by the US Army in remote polar locations. At the SL-1 plant in 1961, the reactor was brought from shutdown to a prompt-critical state by manually extracting the central control rod too far. As the water in the core quickly converted to steam and expanded (in just a few milliseconds), the 26,000-pound (12,000 kg) reactor vessel jumped 9 feet 1 inch (2.77 m), leaving impressions in the ceiling above.[1][2] All three men performing the maintenance procedure died from their injuries. 1,100 curies of fission products were released as parts of the core were expelled. It took 2 years to investigate the accident and clean up the site. The excess prompt reactivity of the SL-1 core was calculated in a 1962 report:[3]
The delayed neutron fraction of the SL-1 is 0.70%… Conclusive evidence revealed that the SL-1 excursion was caused by the partial withdrawal of the central control rod. The reactivity associated with the 20-inch withdrawal of this one rod has been estimated to be 2.4% δk/k, which was sufficient to induce prompt criticality and place the reactor on a 4 millisecond period.
In the K-431 reactor accident, 10 were killed during a refueling operation. The K-431 explosion destroyed the adjacent machinery rooms and ruptured the submarine's hull. In these two catastrophes, the reactor plants went from complete shutdown to extremely high power levels in a fraction of a second, damaging the reactor plants beyond repair.
List of accidental prompt critical excursions[edit]
A number of research reactors and tests have purposely examined the operation of a prompt critical reactor plant. CRAC, KEWB, SPERT-I, Godiva device, and BORAX experiments contributed to this research. Many accidents have also occurred, however, primarily during research and processing of nuclear fuel. SL-1 is the notable exception.
The following list of prompt critical power excursions is adapted from a report submitted in 2000 by a team of American and Russian nuclear scientists who studied criticality accidents, published by the Los Alamos Scientific Laboratory, the location of many of the excursions.[4] A typical power excursion is about 1 × 10^17 fissions.
Los Alamos Scientific Laboratory, 21 August 1945
Los Alamos Scientific Laboratory, 21 May 1946
Los Alamos Scientific Laboratory, December 1949, 3 or 4 × 10^16 fissions
Los Alamos Scientific Laboratory, 1 February 1951
Los Alamos Scientific Laboratory, 18 April 1952
Argonne National Laboratory, 2 June 1952
Oak Ridge National Laboratory, 26 May 1954
Oak Ridge National Laboratory, 1 February 1956
Los Alamos Scientific Laboratory, 3 July 1956
Los Alamos Scientific Laboratory, 12 February 1957
Mayak Production Association, 2 January 1958
Oak Ridge Y-12 Plant, 16 June 1958 (possible)
Los Alamos Scientific Laboratory, Cecil Kelley criticality accident, 30 December 1958
SL-1, 3 January 1961, 4 × 10^18 fissions or 130 megajoules (36 kWh)
Idaho Chemical Processing Plant, 25 January 1961
Los Alamos Scientific Laboratory, 11 December 1962
Sarov (Arzamas-16), 11 March 1963
White Sands Missile Range, 28 May 1965
Oak Ridge National Laboratory, 30 January 1968
Chelyabinsk-70, 5 April 1968
Aberdeen Proving Ground, 6 September 1968
Mayak Production Association, 10 December 1968 (2 prompt critical excursions)
Kurchatov Institute, 15 February 1971
Idaho Chemical Processing Plant, 17 October 1978 (very nearly prompt critical)
Soviet submarine K-431, 10 August 1985
Chernobyl disaster, 26 April 1986
Sarov (Arzamas-16), 17 June 1997
JCO Fuel Fabrication Plant, 30 September 1999
In the design of nuclear weapons, in contrast, achieving prompt criticality is essential. Indeed, one of the design problems to overcome in constructing a bomb is to compress the fissile materials enough to achieve prompt criticality before the chain reaction has a chance to produce enough energy to cause the core to expand too much. A good bomb design must therefore win the race to a dense, prompt critical core before a less-powerful chain reaction disassembles the core without allowing a significant amount of fuel to fission (known as a fizzle). This generally means that nuclear bombs need special attention paid to the way the core is assembled, such as the implosion method invented by Richard C. Tolman, Robert Serber, and other scientists at the University of California, Berkeley in 1942.
^ Tucker, Todd (2009). Atomic America: How a Deadly Explosion and a Feared Admiral Changed the Course of Nuclear History. New York: Free Press. ISBN 978-1-4165-4433-3. See summary: [1] Archived 21 July 2011 at the Wayback Machine
^ Stacy, Susan M. (2000). "Chapter 15: The SL-1 Incident" (PDF). Proving the Principle: A History of The Idaho National Engineering and Environmental Laboratory, 1949–1999. U.S. Department of Energy, Idaho Operations Office. pp. 138–149. ISBN 978-0-16-059185-3. Archived (PDF) from the original on 29 December 2016. Retrieved 8 September 2015.
^ IDO-19313 Archived 27 September 2011 at the Wayback Machine Additional Analysis of the SL-1 Excursion, Final Report of Progress July through October 1962, November 1962.
^ A Review of Criticality Accidents, Los Alamos National Laboratory, LA-13638, May 2000. Thomas P. McLaughlin, Shean P. Monahan, Norman L. Pruvost, Vladimir V. Frolov, Boris G. Ryazanov, and Victor I. Sviridov.
"Nuclear Energy: Principles", Physics Department, Faculty of Science, Mansoura University, Mansoura, Egypt; apparently excerpted from notes from the University of Washington Department of Mechanical Engineering; themselves apparently summarized from Bodansky, D. (1996), Nuclear Energy: Principles, Practices, and Prospects, AIP
Residual signal of a time-synchronous averaged signal - MATLAB tsaresidual - MathWorks Nordic
tsaresidual
Find and Visualize the Residual Signal of a Compound TSA Signal
Compute Residual Signal and Amplitude Spectrum of a TSA Signal
Visualize the Residual and Amplitude Spectrum of a TSA Signal
Residual signal of a time-synchronous averaged signal
Y = tsaresidual(X,fs,rpm,orderList)
Y = tsaresidual(X,t,rpm,orderList)
Y = tsaresidual(XT,rpm,orderList)
[Y,S] = tsaresidual(___)
___ = tsaresidual(___,Name,Value)
tsaresidual(___)
Y = tsaresidual(X,fs,rpm,orderList) computes the residual signal Y of the time-synchronous averaged (TSA) signal vector X using sampling rate fs, the rotational speed rpm, and the orders to be filtered orderList. The residual signal is computed by removing the components in orderList and their harmonics from X. You can use Y to further extract condition indicators of rotating machinery for predictive maintenance. For example, extracting the root-mean-squared value of the residual signal is useful in identifying changes over time which indicate potential machine faults.
Y = tsaresidual(X,t,rpm,orderList) computes the residual signal Y of the TSA signal vector X with corresponding time values in vector t.
Y = tsaresidual(XT,rpm,orderList) computes the residual signal Y of the TSA signal stored in the timetable XT. XT must contain a single numeric column variable.
[Y,S] = tsaresidual(___) returns the amplitude spectrum S of the residual signal Y. S is the amplitude spectrum computed using the normalized fast Fourier transform (FFT) of Y.
___ = tsaresidual(___,Name,Value) allows you to specify additional parameters using one or more name-value pair arguments. You can use this syntax with any of the previous input and output arguments.
tsaresidual(___) with no output arguments plots the time-domain and frequency-domain plots of the raw and residual TSA signals.
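The order-removal idea behind the residual signal can be sketched in Python. This is a simplified illustration, not the toolbox algorithm (which also removes a bandwidth equal to the shaft speed around each order): for a TSA record covering exactly one shaft rotation, FFT bin m corresponds to order m, so removing an order and its harmonics amounts to zeroing those bins.

```python
import numpy as np

# Illustrative residual computation in the spirit of tsaresidual, assuming
# the TSA record spans exactly one shaft rotation so FFT bin m = order m.
def tsa_residual(x, order_list):
    n = len(x)
    X = np.fft.rfft(x)
    X[1] = 0.0                  # order 1 (shaft rotation) is always removed
    for order in order_list:
        X[order::order] = 0.0   # the order and all of its harmonics
    return np.fft.irfft(X, n)

n = 1024
t = np.arange(n) / n            # one rotation
x = np.sin(2*np.pi*17*t) + 0.5*np.sin(2*np.pi*3*t)
resid = tsa_residual(x, [17])   # leaves only the order-3 component
```

Removing order 17 (and its harmonics 34, 51, ...) strips the gear-mesh tone and leaves the order-3 component intact, which is the residual you would then feed to condition indicators such as RMS.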
Consider a drivetrain with six gears driven by a motor that is fitted with a vibration sensor, as depicted in the figure below. Gear 1 on the motor shaft meshes with gear 2 with a gear ratio of 17:1. The final gear ratio, that is, the ratio between gears 1 and 2 and gears 3 and 4, is 51:1. Gear 5, also on the motor shaft, meshes with gear 6 with a gear ratio of 10:1. The motor is spinning at 180 RPM, and the sampling rate of the vibration sensor is 50 kHz. To obtain the signal containing just the meshing components for gears 5 and 6, filter out the signal components due to gears 1 and 2 and gears 3 and 4 by specifying their gear ratios of 17 and 51 in orderList. The signal components corresponding to the shaft rotation (order = 1) are always implicitly included in the computation.
Compute the residual of the TSA signal using the sample time, rpm, and the mesh orders to be filtered out.
Y = tsaresidual(X,t,rpm,orderList);
Visualize the residual signal, the raw TSA signal, and their amplitude spectrum on a plot.
tsaresidual(X,fs,rpm,orderList)
In this example, sineWavePhaseMod.mat contains the data of a phase-modulated sine wave. XT is a timetable with the sine wave data, and the rotational speed is 60 RPM. The sine wave has a frequency of 32 Hz; to filter out the unmodulated sine wave, use 32 as the orderList.
Compute the residual signal and its amplitude spectrum. Set the value of 'Domain' to 'frequency' since the orders are in Hz.
[Y,S] = tsaresidual(XT,rpm,orders,'Domain','frequency')
Time            Residual
0 sec           2.552e-15
0.00097656 sec  0.051822
0.0019531 sec   0.10116
0.0078125 sec   0.2336
0.010742 sec    0.1559
0.011719 sec    0.11215
0.012695 sec    0.062503
0.013672 sec    0.0092782
0.014648 sec    -0.045032
The output Y is a timetable that contains the residual signal, that is, the phase modulation signal, while S is a vector that contains the amplitude spectrum of the residual signal Y.
Load the data, and plot the residual signal of the amplitude modulated TSA signal X. To obtain the residual signal, filter out the unmodulated sine wave by specifying the frequency of 32 Hz in orderList. Set the value of 'Domain' to 'frequency'.
tsaresidual(X,t,rpm,orderList,'Domain','frequency');
From the plot, observe the waveforms and amplitude spectra of the residual and raw signals.
Rotational speed of the shaft, specified as a positive scalar. tsaresidual uses a bandwidth equal to the shaft speed around the frequencies of interest to filter out the undesired frequency components from the TSA signal. The signal components corresponding to this frequency, that is, order = 1, are always filtered out.
Specify rpm in revolutions per minute.
Orders to be filtered out of the TSA signal, specified as a vector of positive integers. Select the orders and harmonics to be filtered out of the TSA signal by observing them on the amplitude spectrum plot. For instance, specify orderList as the known mesh orders in a gear train to filter out the known components and their harmonics. For more information, see Find and Visualize the Residual Signal of a Compound TSA Signal. Specify the units of orderList by selecting the appropriate value for 'Domain'.
Example: …,'NumRotations',5
Y — Residual signal of the TSA signal
Residual signal of the TSA signal, returned as:
A vector, when the TSA signal is specified as a vector X
A timetable, when the TSA signal is specified as a timetable XT
The residual signal is computed by removing the components in orderList and the shaft signal along with their respective harmonics from X. You can use Y to further extract condition indicators of rotating machinery for predictive maintenance. For example, extracting the root-mean-squared value of the residual signal is useful in identifying changes over time, which indicate potential machine faults. For more information on how Y is computed, see Algorithms.
S — Amplitude spectrum of the residual signal
Amplitude spectrum of the residual signal, returned as a vector. S is the normalized fast Fourier transform of the signal Y. S has the same length as the input TSA signal X. For more information on how S is computed, see Algorithms.
The residual signal is computed from the TSA signal by removing the following from the signal spectrum:
The frequencies are removed by computing the discrete Fourier transform (DFT) and setting the spectrum values to zero at the specified frequencies. tsaresidual uses a bandwidth equal to the shaft speed around the frequencies of interest to filter out the undesired frequency components, as mentioned in [4].
The amplitude spectrum of the residual signal is computed as

S = \frac{\operatorname{fft}(Y)}{2\,\operatorname{length}(Y)}

where Y is the residual signal.
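The residual computation described above can be sketched in a few lines of Python (an illustrative sketch, not the MATLAB implementation; the function name and the one-rotation assumption are ours). For a TSA signal spanning exactly one shaft rotation, FFT bin k corresponds to order k, so removing an order and its harmonics amounts to zeroing those bins:

```python
import numpy as np

def tsa_residual(x, order_list):
    """Residual of a TSA signal spanning one shaft rotation.

    Zeroes the shaft component (order 1) and each listed order together
    with its harmonics, then inverts the FFT.
    """
    n = len(x)
    spec = np.fft.rfft(x)
    to_remove = {1}                       # the shaft order is always removed
    for order in order_list:
        k = order
        while k < len(spec):              # the order and all its harmonics
            to_remove.add(k)
            k += order
    for k in to_remove:
        spec[k] = 0
    y = np.fft.irfft(spec, n)
    s = np.abs(np.fft.rfft(y)) / (2 * n)  # amplitude spectrum, as in the text
    return y, s
```

For a signal with components at orders 1, 5, and 17, calling `tsa_residual(x, [17])` leaves only the order-5 component in the residual.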
tsadifference | tsaregular
|
Rs Aggarwal for Class 6 Math Chapter 16 - Triangles
RS Aggarwal Solutions for Class 6 Math Chapter 16 (Triangles) are provided here with simple step-by-step explanations. All questions and answers from the RS Aggarwal book for Class 6 Math Chapter 16 are included.
Take three noncollinear points A, B and C on a page of your notebook. Join AB, BC and CA. What figure do you get?
Name: (i) the side opposite to ∠C
(ii) the angle opposite to the side BC
(iii) the vertex opposite to the side CA
(iv) the side opposite to the vertex B
We get a triangle by joining the three non-collinear points A, B, and C.
(i) The side opposite to ∠C is AB.
(ii) The angle opposite to the side BC is ∠A.
(iii) The vertex opposite to the side CA is B.
The measures of two angles of a triangle are 72° and 58°.
Let the third angle be x.
Now, the sum of the measures of all the angles of a triangle is 180°.
∴ x + 72° + 58° = 180°
⇒ x + 130° = 180°
⇒ x = 180° - 130° = 50°
The measure of the third angle of the triangle is 50°.
The angles of a triangle are in the ratio 1 : 3 : 5. Find the measure of each of the angles.
The angles of a triangle are in the ratio 1:3:5.
Let the measures of the angles of the triangle be (1x)°, (3x)° and (5x)°.
Sum of the measures of the angles of the triangle = 180°
∴ 1x + 3x + 5x = 180°
⇒ 9x = 180°
⇒ x = 20°
∴ 1x = 20°, 3x = 60°, 5x = 100°
The measures of the angles are 20°, 60° and 100°.
One of the acute angles of a right triangle is 50°. Find the other acute angle.
In a right-angled triangle, one of the angles is 90°.
It is given that one of the acute angles of the right-angled triangle is 50°.
We know that the sum of the measures of all the angles of a triangle is 180°.
Now, let the third angle be x.
Therefore, we have:
90° + 50° + x = 180°
⇒ 140° + x = 180°
⇒ x = 180° - 140° = 40°
The other acute angle is 40°.
One of the angles of a triangle is 110° and the other two angles are equal. What is the measure of each of these equal angles?
∠A = 110o and ∠B = ∠C
Now, the sum of the measures of all the angles of a triangle is 180°.
∴ ∠A + ∠B + ∠C = 180°
⇒ 110° + ∠B + ∠B = 180°
⇒ 110° + 2∠B = 180°
⇒ 2∠B = 180° - 110° = 70°
⇒ ∠B = 70° / 2 = 35°
∴ ∠C = 35°
The measures of the three angles:
∠A = 110°, ∠B = 35°, ∠C = 35°
If one angle of a triangle is equal to the sum of other two, show that the triangle is a right triangle.
Let the angles of the triangle be ∠A, ∠B and ∠C, with ∠A = ∠B + ∠C.
We know:
∠A + ∠B + ∠C = 180°
⇒ (∠B + ∠C) + ∠B + ∠C = 180°
⇒ 2∠B + 2∠C = 180°
⇒ 2(∠B + ∠C) = 180°
⇒ ∠B + ∠C = 180°/2 = 90°
∴ ∠A = ∠B + ∠C = 90°
This shows that the triangle is a right-angled triangle.
Let 3∠A = 4 ∠B = 6 ∠C = x
∴ ∠A = x/3, ∠B = x/4, ∠C = x/6
But ∠A + ∠B + ∠C = 180°
∴ x/3 + x/4 + x/6 = 180°
⇒ (4x + 3x + 2x)/12 = 180°
⇒ 9x = 180° × 12 = 2160°
⇒ x = 240°
∴ ∠A = 240°/3 = 80°, ∠B = 240°/4 = 60°, ∠C = 240°/6 = 40°
Look at the figures given below. State for each triangle whether it is acute, right or obtuse.
(i) It is an obtuse-angled triangle, as one angle is 130°, which is greater than 90°.
(ii) It is an acute-angled triangle, as all the angles in it are less than 90°.
(iii) It is a right-angled triangle, as one angle is 90°.
(iv) It is an obtuse-angled triangle, as one angle is 92°, which is greater than 90°.
In the given figure some triangles have been given. State for each triangle whether it is scalene, isosceles or equilateral.
Equilateral Triangle: A triangle whose all three sides are equal in length and each of the three angles measures 60o.
Isosceles Triangle: A triangle whose two sides are equal in length and the angles opposite them are equal to each other.
Scalene Triangle: A triangle whose all three sides and angles are unequal in measure.
(i) Isosceles
DE = EF = 2.4 cm
All the sides are unequal.
(iv) Equilateral
XY = YZ = ZX = 3 cm
(v) Equilateral
All three angles are 60o.
(vi) Isosceles
Two angles are equal in measure.
(vii) Scalene
All the angles are unequal.
Draw a ∆ABC. Take a point D on BC. Join AD. How many triangles do you get? Name them.
In ∆ABC, if we take a point D on BC, then we get three triangles, namely ∆ADB, ∆ADC and ∆ABC.
(iv) each angle more than 60°?
(v) each angle less than 60°?
(vi) each angle equal to 60°?
If the two angles are 90o each, then the sum of two angles of a triangle will be 180o, which is not possible.
For example, let the two angles be 120o and 150o. Then, their sum will be 270o, which cannot form a triangle.
For example, let the two angles be 50° and 60°, whose sum is 110°. They can easily form a triangle whose third angle is 180° - 110° = 70°.
For example, let the two angles be 70° and 80°, whose sum is 150°. The third angle would be 180° - 150° = 30°, which is less than 60°, so a triangle cannot have each angle more than 60°.
For example, let the two angles be 50° and 40°, whose sum is 90°. The third angle would be 180° - 90° = 90°, which is greater than 60°, so a triangle cannot have each angle less than 60°.
Sum of all angles = 60o + 60o + 60o = 180o
(i) A triangle has ...... sides, ...... angles and ...... vertices.
(ii) The sum of the angles of a triangle is ...... .
(iii) The sides of a scalene triangle are of ....... lengths.
(v) The angles opposite to equal sides of an isosceles triangle are ....... .
(vi) The sum of the lengths of the sides of a triangle is called its .......... .
(i) A triangle has 3 sides 3 angles and 3 vertices.
(ii) The sum of the angles of a triangle is 180o.
(iii) The sides of a scalene triangle are of different lengths.
(iv) Each angle of an equilateral triangle measures 60o.
(v) The angles opposite to equal sides of an isosceles triangle are equal.
(vi) The sum of the lengths of the sides of a triangle is called its perimeter.
How many parts does a triangle have?
A triangle has 6 parts: three sides and three angles.
With the angles given below, in which case the construction of triangle is possible?
(a) Sum = 30° + 60° + 70° = 160o
This is not equal to the sum of all the angles of a triangle.
(b) Sum = 50° + 70° + 60° = 180o
Hence, it is possible to construct a triangle with these angles.
(c) Sum = 40° + 80° + 65° = 185°. This is not equal to 180°, so it is not possible to construct a triangle with these angles.
(d) Sum = 72° + 28° + 90° = 190°. This is not equal to 180°, so it is not possible to construct a triangle with these angles.
The angles of a triangle are in the ratio 2 : 3 : 4. The largest angle is
Let the measures of the given angles be (2x)o, (3x)o and (4x)o.
∴ (2x)° + (3x)° + (4x)° = 180°
⇒ (9x)o = 180o
⇒ x = 180 / 9 = 20
∴ 2x = 40°, 3x = 60°, 4x = 80°
Hence, the measures of the angles of the triangle are 40o, 60o, 80o.
Thus, the largest angle is 80o.
The two angles of a triangle are complementary. The third angle is
Two angles are complementary if their sum is 90°.
Let the two angles be x and y, such that x + y = 90°.
Let the third angle be z.
Now, we know that the sum of all the angles of a triangle is 180°.
x + y + z = 180°
⇒ 90° + z = 180°
⇒ z = 180° - 90° = 90°
The third angle is 90°.
One of the base angles of an isosceles triangle is 70°. The vertical angle is
Let ∠A = 70o
We know that the angles opposite to the equal sides of an isosceles triangle are equal.
∴ ∠B = 70°
We need to find the vertical angle ∠C.
Now, sum of all the angles of a triangle is 180o.
⇒ 70o + 70o + ∠C = 180o
⇒ 140o + ∠C = 180o
⇒ ∠C = 180° - 140° = 40°
Hence, the vertical angle is 40°.
A triangle having sides of different lengths is called
A triangle having sides of different lengths is called a scalene triangle.
In an isosceles ∆ABC, the bisectors of ∠B and ∠C meet at a point O. If ∠A = 40°, then ∠BOC =
In the isosceles ABC, the bisectors of ∠B and ∠C meet at point O.
Since the triangle is isosceles, the angles opposite to the equal sides are equal.
∴ ∠B = ∠C
Now, ∠A + ∠B + ∠C = 180°
⇒ 40° + 2∠B = 180°
⇒ 2∠B = 140°
⇒ ∠B = ∠C = 70°
Bisectors of an angle divide the angle into two equal angles.
So, in ∆BOC:
∠OBC = 35o and ∠OCB = 35o
∠BOC + ∠OBC + ∠OCB = 180o
⇒ ∠BOC + 35o + 35o = 180o
⇒ ∠BOC = 180o - 70o
⇒ ∠BOC = 110o
The sides of a triangle are in the ratio 3 : 2 : 5 and its perimeter is 30 cm. The length of the longest side is
The sides of a triangle are in the ratio 3:2:5.
Let the lengths of the sides of the triangle be (3x), (2x), (5x).
Sum of the lengths of the sides of a triangle = Perimeter
(3x) + (2x) + (5x) = 30
⇒ 10x = 30
⇒ x = 3
First side = 3x = 9 cm
Second side = 2x = 6 cm
Third side = 5x = 15 cm
The length of the longest side is 15 cm.
Two angles of a triangle measure 30° and 25° respectively. The measure of the third angle is
Two angles of a triangle measure 30° and 25°, respectively.
Let the third angle be x. Then,
x + 30° + 25° = 180°
⇒ x = 180° - 55° = 125°
The measure of the third angle is 125°.
Each angle of an equilateral triangle measures
Each angle of an equilateral triangle measures 60o.
In the adjoining figure, the point P lies
(a) in the interior of ∆ABC
(b) in the exterior of ∆ABC
(c) on ∆ABC
(d) outside ∆ABC
Point P lies on ∆ABC.
|
Disjoint Set Union · USACO Guide
HomeGoldDisjoint Set Union
ResourcesImplementationSolution - Focus ProblemProblemsStandardHarder
Authors: Benjamin Qi, Andrew Wang, Nathan Gong
Contributor: Michael Cao
The Disjoint Set Union (DSU) data structure allows you to add edges to an initially empty graph and test whether two vertices of the graph are connected.
Silver - Depth First Search (DFS)
YS - Easy
10.6 - Disjoint-Set Data Structure
path compression, diagrams
Optional: DSU Complexity Proofs (the log* n and α(m, n) bounds)
As the implementation is quite simple, you may prefer to use this in place of DFS for computing connected components.
Check PAPS for the explanation. e[x] contains the negation of the size of x's component if x is the representative of its component, and the parent of x otherwise.
DSU(int N) { e = vector<int>(N, -1); }
// get representative of component (uses path compression)
int get(int x) { return e[x] < 0 ? x : e[x] = get(e[x]); }
public class DisjointSets {
	int[] parents;  // 0-indexed
	int[] sizes;

	public DisjointSets(int size) {
		sizes = new int[size];
		parents = new int[size];
		// each node starts as its own root with size 1 (assumed convention)
		for (int i = 0; i < size; i++) { parents[i] = i; sizes[i] = 1; }
	}
}
class DisjointSets:
    def __init__(self, size: int) -> None:
        self.parents = [-1 for _ in range(size)]
        self.sizes = [1 for _ in range(size)]

    # finds the "representative" node in x's component
    def find(self, x: int) -> int:
        if self.parents[x] == -1:
            return x
        # path compression: point x directly at its representative
        self.parents[x] = self.find(self.parents[x])
        return self.parents[x]
Solution - Focus Problem
Without union-find, we would have to represent the graph with an adjacency list and use flood fill to calculate connected components. This approach takes $\mathcal{O}(NQ)$ time, which is too slow, motivating us to use union-find and its $\mathcal{O}(Q \log^* N)$ total running time.
By representing the graph with the union-find data structure, we can use its methods to both unite vertices and check whether two vertices $u_i$ and $v_i$ are in the same connected component in $\mathcal{O}(\log^* N)$ time each. This reduces the overall time complexity to $\mathcal{O}(Q \log^* N)$, which is a substantial improvement and allows us to pass all test cases.
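Putting the pieces together, here is a compact, self-contained Python union-find in the same negative-size `e[]` style as the snippet above, answering connectivity queries (a sketch; the class and method names are ours):

```python
class DSU:
    def __init__(self, n: int):
        # e[x] < 0: x is a root storing -(component size); else e[x] is x's parent
        self.e = [-1] * n

    def find(self, x: int) -> int:
        # walk to the root, halving the path as we go
        while self.e[x] >= 0:
            if self.e[self.e[x]] >= 0:
                self.e[x] = self.e[self.e[x]]
            x = self.e[x]
        return x

    def union(self, a: int, b: int) -> bool:
        a, b = self.find(a), self.find(b)
        if a == b:
            return False                  # already in the same component
        if self.e[a] > self.e[b]:         # keep a as the larger component
            a, b = b, a
        self.e[a] += self.e[b]            # merge sizes
        self.e[b] = a
        return True

    def connected(self, a: int, b: int) -> bool:
        return self.find(a) == self.find(b)
```

Interleaving `union` and `connected` calls like this is exactly the edge-addition-plus-query pattern the focus problem asks for.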
private static class DisjointSets {
You should already be familiar with the DFS / Binary Search solutions to "Wormhole Sort" and "Moocast."
Easy Show Tags DSU
Normal Show Tags DSU
Don't worry about solving these if this is the first time you've encountered DSU.
New Roads Queries
Hard Show Tags DSU, Merging
Ski Course Rating
Hard Show Tags DSU
Very Hard Show Tags DSU
|
{\displaystyle {\text{Money lost}}={\text{Level}}\times {\text{Base payout}}}
12000 (if the Pokémon with the highest level in the player's party is level 100 and the player has 8 Badges or the Island Challenge Completion stamp; 100×120=12000).
16 × the level of the player's highest-level Pokémon. The player will respawn in a spot in the current location; for example, the player respawns at the entrance to the grounds if the player whites out at Kaminko's House, while the player respawns in Acri's house if the player whites out in Gateon Port.
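The payout formula above can be sketched directly (illustrative code; the base payout of 120 is the 8-badge value used in the example):

```python
def money_lost(level: int, base_payout: int) -> int:
    # Money lost = Level x Base payout, where Level is the
    # highest level in the player's party
    return level * base_payout

print(money_lost(100, 120))  # → 12000, matching the example above
```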
<player> panicked and lost XX,XXX... Player lost against XXX XXXX!
<player> paid XX,XXX as the prize money...
<player> dropped XX,XXX in panic! Player lost against XXX XXXX!
<player> paid out XX,XXX to the winner.
XX,XXX in panic... Player lost against XXX XXXX!
XX,XXX to the winner...
<player> panicked and dropped XX,XXX... You lost against XXX XXXX!
<player> gave
You panicked and dropped XX,XXX... You lost to XXX XXXX!
|
Invariance of Milnor numbers and topology of complex polynomials | EMS Press
Invariance of Milnor numbers and topology of complex polynomials
Institut d'Estudis Catalans, Bellaterra, Spain
We give a global version of the Lê–Ramanujam μ-constant theorem for polynomials. Let (f_t), t ∈ [0,1], be a family of polynomials of n complex variables with isolated singularities, whose coefficients are polynomials in t. We consider the case where some numerical invariants are constant: the affine Milnor number μ(t), the Milnor number at infinity λ(t), the number of critical values, the number of affine critical values, and the number of critical values at infinity. For n = 2, if we also suppose that the degree of the f_t is constant, then the polynomials f_0 and f_1 are topologically equivalent. For n ≥ 3, if we suppose that the critical values at infinity depend continuously on t, then we prove that the geometric monodromy representations of the f_t are all equivalent.
Arnaud Bodin, Invariance of Milnor numbers and topology of complex polynomials. Comment. Math. Helv. 78 (2003), no. 1, pp. 134–152
|
Flux Protocol - Flux Protocol
Introduction to Flux Protocol
ZeroOne(https://01.finance)
Byebye $FLUX, welcome $ZO
Flux Whitepaper
Liquidity Mining Distribution
Multichain Overview
FLUX token cross-chain transaction guide
Liquidity Mining Information
Multi-Chain Liquidity Mining Tutorials
Important Metrics on FLUX's Lending-Borrowing and Liquidation
This section of the whitepaper describes the mathematics behind the platform
Flux is a completely decentralized lending protocol; anyone can adapt it and develop a variety of lending applications based on this protocol.
Assets and Cryptocurrencies
Flux Protocol supports various digital assets. Besides supporting Conflux Network’s native token - CFX, the protocol supports assets following the ERC-777 standard.
Flux Protocol aggregates the assets of all users forming an aggregated digital asset market - money market. Similar to the Compound protocol, once a user supplies the protocol with their assets, these assets become fungible. Liquidity providers (Lenders) can withdraw their supplied assets at any given time and do not need to wait for their specific loan to mature.
The asset balance of money market accrues interest based on the specific supply rate of the asset. Users are free to view their balance at any time (accrued interest - interest receivable and interest payable); when users update their balances through a transaction (supply, transfer or withdrawal of assets) the accrued interest is transferred to the user.
Individual users that hold their assets as a long-term investment (#HODL), such as CFX or other available digital assets, can supply these assets to the money market as an additional source of return of their investment. For example, users owning CFX can supply their tokens to the money market on Flux Protocol, and earn interest (denominated in CFX) without having to manage their assets, fulfill loan requests or take speculative risks.
Flux allows users to borrow assets by depositing trusted collateral. Users have all rights to the borrowed assets and can freely use them anywhere. Users do not need to negotiate maturity dates, interest rates, etc.; they only need to specify the asset they want to borrow. In addition, cross-chain assets can be moved into other blockchain protocols, such as Ethereum or Bitcoin, through ShuttleFlow, a cross-chain asset protocol on the Conflux Network.
Similar to the types of asset supply, every money market has a floating interest rate, determined by the market forces, which sets the borrowing cost of each asset (Borrowing Interest Rate).
In order to mitigate market risks, Flux sets the collateral factor of various asset types according to their Asset Risk Level, as displayed in Table 1. Assets cannot be over-collateralized. By combining asset price, balance, and collateral factor, the borrowing capacity is dynamically calculated for each account; each account can only borrow assets within its borrowing capacity. Collateral cannot be withdrawn, as long as the borrow is ongoing; however, even if user assets are used as collateral, the users will receive deposit interest.
At the same time, an account's borrowing outstanding cannot exceed its borrowing capacity. The ratio between the value of an account's supplied (lent) assets and the value of its borrowed assets is called the collateral rate. When a user's collateral rate drops below a certain threshold, the liquidation collateral rate, liquidation occurs; to raise the collateral rate and avoid liquidation, users can repay their borrowed assets partially or completely, and/or supply more assets into the money market.
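To make the bookkeeping concrete, a hedged sketch of the account checks described above (all asset names, prices, and the 0.75 collateral factor are made-up examples; the 110% liquidation collateral rate is the value stated later in this whitepaper):

```python
def borrowing_capacity(supplies: dict, prices: dict, factors: dict) -> float:
    # Each supplied asset contributes price * balance * collateral factor
    return sum(prices[a] * bal * factors[a] for a, bal in supplies.items())

def collateral_rate(supply_value: float, borrow_value: float) -> float:
    # Ratio of supplied value to borrowed value
    return supply_value / borrow_value

LIQUIDATION_RATE = 1.10  # liquidation collateral rate from the whitepaper

def is_liquidatable(supply_value: float, borrow_value: float) -> bool:
    return collateral_rate(supply_value, borrow_value) < LIQUIDATION_RATE
```

For example, 1000 CFX supplied at a hypothetical price of 0.5 with a hypothetical collateral factor of 0.75 gives a borrowing capacity of 375, and an account whose collateral rate is 105% would be open to liquidation.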
Asset Risk Level
USDT, USDC, DAI, etc.
Tokens with high market caps, high amounts of tokens distribution with exceptional market acceptance, trading depth & volume Market Cap > $50mn
BTC, ETH and OKB, HT, BNB platform tokens
XRP, BCH, BSV, LTC, EOS, AD, etc.
Tokens with Top 10 Market Caps
Tokens other than listed above
Table 1- Collateral Factor of Asset Types
Risk & Liquidation
To reduce Flux Protocol’s risks and protect the liquidity suppliers (lenders), when the collateral rate of an account becomes negative, the account’s supplied (lending) assets will be liquidated by other users of the protocol in order to repay the borrowing outstanding to maintain the protocols requirement of over-collateralization. When the liquidation process occurs, the liquidation account’s collateral becomes an exchangeable asset at the current market price. Anybody (liquidator) can repay the borrowing outstanding of the borrower account in exchange for the borrow account’s collateral as liquidation reward.
Flux protocol expresses the collateral need as liquidation collateral rate, which is a fixed value and currently set at 110%. When an account’s borrow collateral rate is lower than the liquidation collateral rate, the liquidity value of an account becomes negative. In this case the value of an account’s borrowing outstanding exceeds their borrowing capacity.
Any Conflux Network address can participate in the liquidation process without any pressure and reliance on a centralized system. On a first-in-time basis, the liquidator can invoke the liquidation account’s outstanding borrowing.
When an account's borrow collateral rate falls far below the liquidation collateral rate, dropping under 102%, the account is placed into a margin call, and Flux Protocol's designated liquidator processes the liquidation for that account. Because liquidations due to a margin call result in losses, liquidators (users participating in liquidations) must pay a 6% fee into a liquidation smart contract, the Flux Liquidation Reserve, which is used to cover margin calls.
Figure 1- Collateral Rates
To prevent liquidation risks caused by borrowers withdrawing their assets, Flux Protocol requires the collateral rate to remain above 120% after a withdrawal, unless the borrower pays off the outstanding borrowings (loans).
In the money market of each individual asset, Flux Protocol balances the interest rate automatically based on the assets supply relationship (funding utilization rate). The suppliers and borrowers do not need to negotiate terms and interest rates individually since the Flux Protocol has an interest rate model implemented. Each money market has independent interest rates. The utilization rate U for each money market a unifies the asset supply and demand relationship into one variable:
U_a = \frac{Borrows_a}{Borrows_a + Cash_a}
Borrows_a represents the outstanding loan balance in the money market a; Cash_a represents the balance of assets supplied to the money market a. Subjecting to economic principles, when borrowing demand is low, interest rates should be low; on the contrary, when borrowing demand is high, interest rates should rise. The borrowing demand is expressed by the utilization rate function. The current set borrowing interest rate (BorrowInterestRate_a) is obtained from the utilization rate and a base interest rate of 0.005:
BorrowInterestRate_a = ( \frac{e^{20*U_a}-1}{ e^{20}-1}*0.995+ 0.2*U_a+ 0.005)
Within one money market a, there is the following interest rate equilibrium:
Borrows_a * Borrowing Interest Rate_a = Supplies_a * Supply Interest Rate_a
The figure below shows the interest rate change influenced by the utilization rate, where the fund utilization rate is 0, the borrowing rate is 0.005 and the deposit rate is 0:
Figure 2- Interest Rate Change
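The rate model above can be sketched as follows (illustrative Python with our own function names; the constants come from the formula in the text, the utilization is the borrowed fraction of the market, and the supply rate follows from the equilibrium equation):

```python
import math

def utilization(borrows: float, cash: float) -> float:
    # Fraction of the money market's assets currently lent out
    return borrows / (borrows + cash)

def borrow_rate(u: float) -> float:
    # BorrowInterestRate_a from the whitepaper's formula:
    # (e^(20U) - 1) / (e^20 - 1) * 0.995 + 0.2 * U + 0.005
    return (math.exp(20 * u) - 1) / (math.exp(20) - 1) * 0.995 + 0.2 * u + 0.005

def supply_rate(u: float) -> float:
    # From Borrows * BorrowRate = Supplies * SupplyRate with
    # Supplies = Borrows + Cash, the supply rate is U * BorrowRate
    return u * borrow_rate(u)
```

At zero utilization this reproduces the values described in the text: a borrowing rate of 0.005 and a deposit (supply) rate of 0.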
Flux Protocol requires that any modification to the protocol must be voted through the DAO (Decentralized Autonomous Organization) in order to take effect. This decentralizes the Flux protocol. In the early stages of Flux Protocol, the Flux Development Team will manage the DAO internally. After the Flux Protocol is complete and stable, the team will transfer the management authority of Flux Protocol to the DAO community.
|
Study the pattern below. Sketch and label the fourth and fifth figures. Then predict how many dots will be in the
100^{\text{th}}
figure. Write an expression you can use to determine the number of dots in any figure.
\Large\begin{array}{c c c} \bullet & \bullet & \bullet \end{array}\\ \; \; \text{Figure 1}
\Large\begin{array}{c c c} \bullet & \bullet & \bullet\\ \bullet & \bullet & \bullet \end{array}\\ \; \, \text{Figure 2}
\Large\begin{array}{c c c} \bullet & \bullet & \bullet\\ \bullet & \bullet & \bullet\\ \bullet & \bullet & \bullet \end{array}\\ \; \, \text{Figure 3}
In this pattern, you begin with 3 dots and add 3 more with each figure.
Use this description to help you with this problem.
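That description leads to an expression: figure n has n rows of 3 dots, so 3n dots in total. A quick sketch to check the prediction for the 100th figure (illustrative code, not part of the exercise):

```python
def dots(n: int) -> int:
    """Number of dots in figure n: n rows of 3 dots each."""
    return 3 * n

print(dots(100))  # → 300 dots in the 100th figure
```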
|
Modeling of CO2-Hydrate Formation in Geological Reservoirs by Injection of CO2 Gas | J. Energy Resour. Technol. | ASME Digital Collection
Modeling of CO2-Hydrate Formation in Geological Reservoirs by Injection of CO2 Gas
, 250 Karl Clark Road, Edmonton, AB, T6N 1E4, Canada
e-mail: uddin@arc.ab.ca
Mafiz Uddin (Research Engineer, Alberta Research Council) received his B.Sc. in Civil Engineering (Engineering University, Bangladesh, 1984), M.Sc. in Hydrology (University College Galway, Ireland, 1992), and Ph.D. in Hydrogeology (University of New Brunswick, Canada, 1989). He was a Postdoctoral Research Fellow (University of Alberta, Canada), Physical Scientist (CFS, Natural Resources Canada), and Hydrogeologist (Alberta Environment, Canada). He joined the Alberta Research Council in 2001 where he has worked to date.
D. Coombe,
D. Coombe
, 3512-33 Street, NW, Calgary, AB, T2L 2A6, Canada
e-mail: dennis.coombe@cmgl.ca
Dennis Coombe (Senior Staff Scientist, CMGL) received his B.Sc. in Honours Chemistry (University of Calgary, 1970) and his Ph.D. in Physical Chemistry (University of British Columbia, 1976). After a NATO Postdoctoral Fellowship (University of Leiden, the Netherlands), he joined the Computer Modelling Group in 1980 where he has worked to date. A member of the STARS development team, he is concerned with the maintenance, support, and enhancement of CMG’s thermal and chemical advanced processes’ simulator. He is also an Advisor of enhanced oil recovery field development pilots and university thesis projects worldwide.
, Terrain Sciences Division, Box 6000, 9860 West Saanich Road, Sidney, BC, V8L 4B2, Canada
e-mail: fwright@nrcan.gc.ca
Uddin, M., Coombe, D., and Wright, F. (August 8, 2008). "Modeling of CO2-Hydrate Formation in Geological Reservoirs by Injection of CO2 Gas." ASME. J. Energy Resour. Technol. September 2008; 130(3): 032502. https://doi.org/10.1115/1.2956979
Continuing concern about the impacts of atmospheric carbon dioxide (CO2) on the global climate system provides an impetus for the development of methods for long-term disposal of CO2 produced by industrial and other activities. Investigations of the CO2-hydrate properties indicate the feasibility of geologic sequestration of CO2 as gas hydrate and the possibility of coincident CO2 sequestration/CH4 production from natural gas hydrate reservoirs. Numerical studies can provide an integrated understanding of the process mechanisms in predicting the potential and economic viability of CO2 gas sequestration, especially when utilizing realistic geological reservoir characteristics in the models. This study numerically investigates possible sequestration of CO2 as a stable gas hydrate in various reservoir geological formations. As such, this paper extends the applicability of a previously developed model to more realistic and relevant reservoir scenarios. A unified gas hydrate model coupled with a thermal reservoir simulator (CMG STARS) was applied to simulate CO2-hydrate formation in four reservoir geological formations. These reservoirs can be described as follows. The first reservoir (Reservoir I) is similar to a tight gas reservoir with mean porosity 0.25 and mean absolute permeability 10 mD. The second reservoir (Reservoir II) is similar to a conventional sandstone reservoir with mean porosity 0.25 and mean permeability 20 mD. The third reservoir (Reservoir III) is similar to hydrate-free Mallik silt with mean porosity 0.30 and mean permeability 100 mD. The fourth reservoir (Reservoir IV) is similar to hydrate-free Mallik sand with mean porosity 0.35 and mean permeability 1000 mD. The Mallik gas hydrate bearing formation itself can be described as several layers of variable thickness with permeability variations from 1 mD to 1000 mD, and is addressed as a separate part of this study. This paper describes the numerical methodology, model input data selection, and reservoir simulation results, including an enhancement to model the effects of ice formation and decay. The numerical investigation shows that the gas hydrate model effectively captures the spatial and temporal dynamics of CO2-hydrate formation during injection of CO2 gas. Practical limitations to CO2-hydrate formation by gas injection are identified and potential improvements to the process are suggested.
Hydrates/Coal Bed Methane/Heavy Oil/Oil Sands/Tight Gas
carbon compounds, climate mitigation, geology, numerical analysis, CO2 sequestration, CO2 hydrate, CH4 hydrate, hydrate formation, hydrate decomposition, geological reservoir, numerical simulation
Methane hydrate, Pressure, Reservoirs, Temperature, Water, Simulation, Permeability, Porosity, Modeling
Overview of Pressure-Drawdown Production-Test Results for the JAPEXJNOC/GSC et al. Mallik 5L-38 Gas Hydrate Production Research Well
Schrőtter
Temperature Field of the Mallik Gas Hydrate Occurrence-Implications on Phase Changes and Thermal Properties
The Characteristic of Hydrate Exploitation by Depressurization
|
Multiplicity and stability of closed geodesics on Finsler 2-spheres | EMS Press
A survey of recent progress on the multiplicity and stability problems of closed geodesics on Finsler 2-spheres is given.
Yiming Long, Multiplicity and stability of closed geodesics on Finsler 2-spheres. J. Eur. Math. Soc. 8 (2006), no. 2, pp. 341–353
|
Both plugins make it possible to run APBS from within PyMOL, and then display the results as a color-coded electrostatic surface (units
{\displaystyle K_{b}T/e_{c}}
) in the molecular display window (as with the image to the right).
A caveat (this might be outdated information): if the B-factor of any atom is
{\displaystyle \geq 100,}
then APBS doesn't properly read in the PDB file and thus outputs garbage (or dies). To fix this, set all B-factors to be less than 100.
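As a workaround, the offending B-factor column can be clamped before running APBS. A minimal Python sketch (the function name and the 99.99 cap are illustrative, not part of either plugin; it assumes standard fixed-column PDB ATOM/HETATM records with the B-factor in columns 61-66, 1-indexed):

```python
def clamp_bfactors(in_path, out_path, cap=99.99):
    # Rewrite a PDB file so every B-factor (columns 61-66 of ATOM/HETATM
    # records, i.e. line[60:66]) stays below 100, which APBS reportedly
    # fails to parse.
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if line.startswith(("ATOM", "HETATM")) and len(line.rstrip("\n")) >= 66:
                if float(line[60:66]) >= 100:
                    line = line[:60] + f"{cap:6.2f}" + line[66:]
            dst.write(line)
```

Non-coordinate records (REMARK, HETATM headers, etc.) pass through unchanged.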
|
If f is a polynomial in x and y of degree
d
, and f can be written as
\sum _{i,j}{c}_{i,j}{x}^{i}{y}^{j}
with constants
{c}_{i,j}
, then the output of this command is
\sum _{i,j}{c}_{i,j}{x}^{i}{y}^{j}{z}^{d-i-j}
with(algcurves):
f := y^2 - x^3
                    f := -x^3 + y^2
homogeneous(f, x, y, z)
                    -x^3 + y^2 z
subs(z = 1, %)
                    -x^3 + y^2
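The same operation is easy to sketch outside Maple. A hypothetical Python version on a sparse exponent-dictionary representation of f (this is just the formula above, not Maple's implementation):

```python
def homogenize(coeffs, d):
    # coeffs maps (i, j) -> c for f = sum c * x^i * y^j; the result maps
    # (i, j, k) -> c with k = d - i - j, matching sum c * x^i * y^j * z^(d-i-j).
    assert all(i + j <= d for i, j in coeffs)
    return {(i, j, d - i - j): c for (i, j), c in coeffs.items()}

# f = y^2 - x^3 has degree 3 and homogenizes to y^2*z - x^3:
f = {(0, 2): 1, (3, 0): -1}
```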
|
Minimization with Bound Constraints and Banded Preconditioner - MATLAB & Simulink - MathWorks Deutschland
Objective Function with Gradient
Hessian Pattern
Improve Solution
This example shows how to solve a nonlinear problem with bounds using the fmincon trust-region-reflective algorithm. This algorithm provides additional efficiency when the problem is sparse, and has both an analytic gradient and a known structure, such as its Hessian pattern.
For a given
n
that is a positive multiple of 4, the objective function is
f\left(x\right)=1+\sum _{i=1}^{n}{|\left(3-2{x}_{i}\right){x}_{i}-{x}_{i-1}-{x}_{i+1}+1|}^{p}+\sum _{i=1}^{n/2}{|{x}_{i}+{x}_{i+n/2}|}^{p},
where
p=7/3
,
{x}_{0}=0
, and
{x}_{n+1}=0
. The tbroyfg helper function at the end of this example implements the objective function, including its gradient.
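For reference, the objective value (without the gradient) can be sketched in Python; this is a hypothetical port of the formula above, not MathWorks' tbroyfg code:

```python
import numpy as np

def tbroy_value(x, p=7/3):
    # f(x) = 1 + sum_i |(3 - 2*x_i)*x_i - x_{i-1} - x_{i+1} + 1|^p
    #          + sum_{i<=n/2} |x_i + x_{i+n/2}|^p,  with x_0 = x_{n+1} = 0.
    n = x.size
    xe = np.concatenate(([0.0], x, [0.0]))  # pad with the boundary zeros
    inner = (3 - 2 * xe[1:n + 1]) * xe[1:n + 1] - xe[0:n] - xe[2:n + 2] + 1
    half = np.abs(x[: n // 2] + x[n // 2:]) ** p
    return 1 + np.sum(np.abs(inner) ** p) + np.sum(half)
```

At x = 0 every inner term equals 1 and the second sum vanishes, so f(0) = 1 + n.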
The problem has the bounds
-10\le {x}_{i}\le 10
for all
i=1,\dots ,n
.
The sparsity pattern of the Hessian matrix is predetermined and stored in the file tbroyhstr.mat. The sparsity structure for the Hessian of this problem is banded, as you can see in the following spy plot.
In this plot, the center stripe is itself a five-banded matrix. The following plot shows the matrix more clearly.
Set options to use the trust-region-reflective algorithm. This algorithm requires you to set the SpecifyObjectiveGradient option to true.
Also, use optimoptions to set the HessPattern option to Hstr. If you do not set this option for such a large problem with an obvious sparsity structure, the problem uses a great amount of memory and computation because fmincon attempts to use finite differencing on a full Hessian matrix of 640,000 nonzero entries.
Set the initial point to –1 for odd indices and +1 for even indices.
The problem has no linear or nonlinear constraints, so set those parameters to [].
Examine the exit flag, objective function value, first-order optimality measure, and number of solver iterations.
fmincon does not take many iterations to reach a solution. However, the solution has a relatively high first-order optimality measure, which is why the exit flag does not have the preferred value of 1.
Try using a five-banded preconditioner instead of the default diagonal preconditioner. Using optimoptions, set the PrecondBandWidth option to 2 and solve the problem again. (The bandwidth is the number of upper or lower diagonals, not counting the main diagonal.)
The exit flag and objective function value do not appear to change. However, the number of iterations increases, and the first-order optimality measure decreases considerably. Compute the difference in objective function value.
The objective function value decreases by a tiny amount. The solution mainly improves the first-order optimality measure, not the objective function.
This code creates the tbroyfg helper function.
|
The sum of the reciprocals of the primes increases without bound. The x axis is in log scale, showing that the divergence is very slow. The red function is a lower bound that also diverges.
The sum of the reciprocals of all prime numbers diverges; that is:
{\displaystyle \sum _{p{\text{ prime}}}{\frac {1}{p}}={\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{5}}+{\frac {1}{7}}+{\frac {1}{11}}+{\frac {1}{13}}+{\frac {1}{17}}+\cdots =\infty }
This was proved by Leonhard Euler in 1737,[1] and strengthens (i.e. it gives more information than) Euclid's 3rd-century-BC result that there are infinitely many prime numbers.
There are a variety of proofs of Euler's result, including a lower bound for the partial sums stating that
{\displaystyle \sum _{\scriptstyle p{\text{ prime}} \atop \scriptstyle p\leq n}{\frac {1}{p}}\geq \log \log(n+1)-\log {\frac {\pi ^{2}}{6}}}
for all natural numbers n. The double natural logarithm (log log) indicates that the divergence might be very slow, which is indeed the case. See Meissel–Mertens constant.
The harmonic series[edit]
First, we describe how Euler originally discovered the result. He was considering the harmonic series
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n}}=1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+\cdots =\infty }
He had already used the following "product formula" to show the existence of infinitely many primes.
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n}}=\prod _{p}\left(1+{\frac {1}{p}}+{\frac {1}{p^{2}}}+\cdots \right)=\prod _{p}{\frac {1}{1-p^{-1}}}}
Here the product is taken over the set of all primes.
Such infinite products are today called Euler products. The product above is a reflection of the fundamental theorem of arithmetic. Euler noted that if there were only a finite number of primes, then the product on the right would clearly converge, contradicting the divergence of the harmonic series.
Euler considered the above product formula and proceeded to make a sequence of audacious leaps of logic. First, he took the natural logarithm of each side, then he used the Taylor series expansion for log x as well as the sum of a converging series:
{\displaystyle {\begin{aligned}\log \left(\sum _{n=1}^{\infty }{\frac {1}{n}}\right)&{}=\log \left(\prod _{p}{\frac {1}{1-p^{-1}}}\right)=-\sum _{p}\log \left(1-{\frac {1}{p}}\right)\\[5pt]&=\sum _{p}\left({\frac {1}{p}}+{\frac {1}{2p^{2}}}+{\frac {1}{3p^{3}}}+\cdots \right)\\[5pt]&=\sum _{p}{\frac {1}{p}}+{\frac {1}{2}}\sum _{p}{\frac {1}{p^{2}}}+{\frac {1}{3}}\sum _{p}{\frac {1}{p^{3}}}+{\frac {1}{4}}\sum _{p}{\frac {1}{p^{4}}}+\cdots \\[5pt]&=A+{\frac {1}{2}}B+{\frac {1}{3}}C+{\frac {1}{4}}D+\cdots \\[5pt]&=A+K\end{aligned}}}
for a fixed constant K < 1. Then he invoked the relation
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n}}=\log \infty ,}
which he explained, for instance in a later 1748 work,[2] by setting x = 1 in the Taylor series expansion
{\displaystyle \log \left({\frac {1}{1-x}}\right)=\sum _{n=1}^{\infty }{\frac {x^{n}}{n}}.}
This allowed him to conclude that
{\displaystyle A={\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{5}}+{\frac {1}{7}}+{\frac {1}{11}}+\cdots =\log \log \infty .}
It is almost certain that Euler meant that the sum of the reciprocals of the primes less than n is asymptotic to log log n as n approaches infinity. It turns out this is indeed the case, and a more precise version of this fact was rigorously proved by Franz Mertens in 1874.[3] Thus Euler obtained a correct result by questionable means.
Erdős's proof by upper and lower estimates[edit]
The following proof by contradiction is due to Paul Erdős.
Let pi denote the ith prime number. Assume that the sum of the reciprocals of the primes converges.
Then there exists a smallest positive integer k such that
{\displaystyle \sum _{i=k+1}^{\infty }{\frac {1}{p_{i}}}<{\frac {1}{2}}\qquad (1)}
For a positive integer x, let Mx denote the set of those n in {1, 2, ..., x} which are not divisible by any prime greater than pk (or equivalently all n ≤ x which are a product of powers of primes pi ≤ pk). We will now derive an upper and a lower estimate for |Mx|, the number of elements in Mx. For large x, these bounds will turn out to be contradictory.
Every n in Mx can be written as n = m2r with positive integers m and r, where r is square-free. Since only the k primes p1, ..., pk can show up (with exponent 1) in the prime factorization of r, there are at most 2k different possibilities for r. Furthermore, there are at most √x possible values for m. This gives us the upper estimate
{\displaystyle |M_{x}|\leq 2^{k}{\sqrt {x}}\qquad (2)}
The remaining x − |Mx| numbers in the set difference {1, 2, ..., x} \ Mx are all divisible by a prime greater than pk. Let Ni,x denote the set of those n in {1, 2, ..., x} which are divisible by the ith prime pi. Then
{\displaystyle \{1,2,\ldots ,x\}\smallsetminus M_{x}=\bigcup _{i=k+1}^{\infty }N_{i,x}}
Since the number of integers in Ni,x is at most x/pi (actually zero for pi > x), we get
{\displaystyle x-|M_{x}|\leq \sum _{i=k+1}^{\infty }|N_{i,x}|<\sum _{i=k+1}^{\infty }{\frac {x}{p_{i}}}}
Using (1), this implies
{\displaystyle {\frac {x}{2}}<|M_{x}|\qquad (3)}
This produces a contradiction: when x ≥ 2^{2k+2}, the estimates (2) and (3) cannot both hold, because x/2 ≥ 2^k√x.
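The upper estimate (2) is easy to check numerically. A small brute-force sketch (illustrative only): count the pk-smooth numbers up to x and compare with 2^k √x.

```python
def count_smooth(x, pk):
    # |M_x|: how many n in {1, ..., x} have no prime factor greater than pk.
    def largest_prime_factor(n):
        f, d = 1, 2
        while d * d <= n:
            while n % d == 0:
                f, n = d, n // d
            d += 1
        return max(f, n) if n > 1 else f
    return sum(1 for n in range(1, x + 1) if largest_prime_factor(n) <= pk)
```

For example, with pk = 5 (so k = 3 primes: 2, 3, 5) and x = 1000, the count stays well below 2^3 · √1000 ≈ 253, as (2) requires.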
Proof that the series exhibits log-log growth[edit]
Here is another proof that actually gives a lower estimate for the partial sums; in particular, it shows that these sums grow at least as fast as log log n. The proof is due to Ivan Niven,[4] adapted from the product expansion idea of Euler. In the following, a sum or product taken over p always represents a sum or product taken over a specified set of primes.
The proof rests upon the following four inequalities:
Every positive integer i can be uniquely expressed as the product of a square-free integer and a square as a consequence of the fundamental theorem of arithmetic. Start with
{\displaystyle i=q_{1}^{2{\alpha }_{1}+{\beta }_{1}}\cdot q_{2}^{2{\alpha }_{2}+{\beta }_{2}}\cdots q_{r}^{2{\alpha }_{r}+{\beta }_{r}},}
where the βs are 0 (the corresponding power of prime q is even) or 1 (the corresponding power of prime q is odd). Factor out one copy of all the primes whose β is 1, leaving a product of primes to even powers, itself a square. Relabeling:
{\displaystyle i=(p_{1}p_{2}\cdots p_{s})\cdot b^{2},}
where the first factor, a product of primes to the first power, is square-free. Inverting all the i's gives the inequality
{\displaystyle \sum _{i=1}^{n}{\frac {1}{i}}\leq \left(\prod _{p\leq n}\left(1+{\frac {1}{p}}\right)\right)\cdot \left(\sum _{k=1}^{n}{\frac {1}{k^{2}}}\right)=A\cdot B.}
To see this, note that
{\displaystyle {\frac {1}{i}}={\frac {1}{p_{1}p_{2}\cdots p_{s}}}\cdot {\frac {1}{b^{2}}},}
and that expanding the product
{\displaystyle {\begin{aligned}\left(1+{\frac {1}{p_{1}}}\right)\left(1+{\frac {1}{p_{2}}}\right)\cdots \left(1+{\frac {1}{p_{s}}}\right)&=\left({\frac {1}{p_{1}}}\right)\left({\frac {1}{p_{2}}}\right)\cdots \left({\frac {1}{p_{s}}}\right)+\ldots \\&={\frac {1}{p_{1}p_{2}\cdots p_{s}}}+\ldots \end{aligned}}}
shows that
{\displaystyle 1/(p_{1}p_{2}\cdots p_{s})}
is one of the summands in the expanded product A. And since
{\displaystyle 1/b^{2}}
is one of the summands of B, every summand
{\displaystyle 1/i}
is represented in one of the terms of AB when multiplied out. The inequality follows.
The upper estimate for the natural logarithm
{\displaystyle {\begin{aligned}\log(n+1)&=\int _{1}^{n+1}{\frac {dx}{x}}\\&=\sum _{i=1}^{n}\underbrace {\int _{i}^{i+1}{\frac {dx}{x}}} _{{}\,<\,{\frac {1}{i}}}\\&<\sum _{i=1}^{n}{\frac {1}{i}}\end{aligned}}}
The lower estimate 1 + x < exp(x) for the exponential function, which holds for all x > 0.
Let n ≥ 2. The upper bound (using a telescoping sum) for the partial sums (convergence is all we really need)
{\displaystyle {\begin{aligned}\sum _{k=1}^{n}{\frac {1}{k^{2}}}&<1+\sum _{k=2}^{n}\underbrace {\left({\frac {1}{k-{\frac {1}{2}}}}-{\frac {1}{k+{\frac {1}{2}}}}\right)} _{=\,{\frac {1}{k^{2}-{\frac {1}{4}}}}\,>\,{\frac {1}{k^{2}}}}\\&=1+{\frac {2}{3}}-{\frac {1}{n+{\frac {1}{2}}}}<{\frac {5}{3}}\end{aligned}}}
Combining all these inequalities, we see that
{\displaystyle {\begin{aligned}\log(n+1)&<\sum _{i=1}^{n}{\frac {1}{i}}\\&\leq \prod _{p\leq n}\left(1+{\frac {1}{p}}\right)\sum _{k=1}^{n}{\frac {1}{k^{2}}}\\&<{\frac {5}{3}}\prod _{p\leq n}\exp \left({\frac {1}{p}}\right)\\&={\frac {5}{3}}\exp \left(\sum _{p\leq n}{\frac {1}{p}}\right)\end{aligned}}}
Dividing through by 5/3 and taking the natural logarithm of both sides gives
{\displaystyle \log \log(n+1)-\log {\frac {5}{3}}<\sum _{p\leq n}{\frac {1}{p}}}
as desired. Q.E.D.
Since
{\displaystyle \sum _{k=1}^{\infty }{\frac {1}{k^{2}}}={\frac {\pi ^{2}}{6}}}
(see the Basel problem), the above constant log 5/3 = 0.51082... can be improved to log π2/6 = 0.4977...; in fact it turns out that
{\displaystyle \lim _{n\to \infty }\left(\sum _{p\leq n}{\frac {1}{p}}-\log \log n\right)=M}
where M = 0.261497... is the Meissel–Mertens constant (somewhat analogous to the much more famous Euler–Mascheroni constant).
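The log log growth and the Meissel–Mertens constant can be observed numerically. A short sketch (the constant 0.261497 is M, quoted above):

```python
import math

def sum_reciprocal_primes(n):
    # Sieve of Eratosthenes up to n, then accumulate 1/p over primes p <= n.
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(n) + 1):
        if is_prime[i]:
            is_prime[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return sum(1 / p for p in range(2, n + 1) if is_prime[p])

# As n grows, sum_{p <= n} 1/p - log log n approaches M = 0.261497...
```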
Proof from Dusart's inequality[edit]
From Dusart's inequality, we get
{\displaystyle p_{n}<n\log n+n\log \log n\quad {\mbox{for }}n\geq 6}
{\displaystyle {\begin{aligned}\sum _{n=1}^{\infty }{\frac {1}{p_{n}}}&\geq \sum _{n=6}^{\infty }{\frac {1}{p_{n}}}\\&\geq \sum _{n=6}^{\infty }{\frac {1}{n\log n+n\log \log n}}\\&\geq \sum _{n=6}^{\infty }{\frac {1}{2n\log n}}=\infty \end{aligned}}}
by the integral test for convergence. This shows that the series on the left diverges.
Geometric and harmonic-series proof[edit]
Suppose for contradiction the sum converged. Then there exists
{\displaystyle n}
such that
{\displaystyle \sum _{i\geq n+1}{\frac {1}{p_{i}}}<1}
. Call this sum
{\displaystyle x}
.
Now consider the convergent geometric series
{\displaystyle x+x^{2}+x^{3}+\cdots }
This geometric series contains the sum of reciprocals of all numbers whose prime factorization contains only primes in the set
{\displaystyle \{p_{n+1},p_{n+2},\cdots \}}
.
Consider the subseries
{\displaystyle \sum _{i\geq 1}{\frac {1}{1+i(p_{1}p_{2}\cdots p_{n})}}}
. This is a subseries because
{\displaystyle 1+i(p_{1}p_{2}\cdots p_{n})}
is not divisible by any
{\displaystyle p_{j},j\leq n}
However, by the Limit comparison test, this subseries diverges by comparing it to the harmonic series. Indeed,
{\textstyle \lim _{i\to \infty }{\frac {1+i(p_{1}p_{2}\cdots p_{n})}{i}}=p_{1}p_{2}\cdots p_{n}}
Thus, we have found a divergent subseries of the original convergent series, and since all terms are positive, this gives the contradiction. We may conclude
{\textstyle \sum _{i\geq 1}{\frac {1}{p_{i}}}}
diverges. Q.E.D.
While the partial sums of the reciprocals of the primes eventually exceed any integer value, they never equal an integer.
One proof[5] is by induction: The first partial sum is 1/2, which has the form odd/even. If the nth partial sum (for n ≥ 1) has the form odd/even, then the (n + 1)st sum is
{\displaystyle {\frac {\text{odd}}{\text{even}}}+{\frac {1}{p_{n+1}}}={\frac {{\text{odd}}\cdot p_{n+1}+{\text{even}}}{{\text{even}}\cdot p_{n+1}}}={\frac {{\text{odd}}+{\text{even}}}{\text{even}}}={\frac {\text{odd}}{\text{even}}}}
as the (n + 1)st prime pn + 1 is odd; since this sum also has an odd/even form, this partial sum cannot be an integer (because 2 divides the denominator but not the numerator), and the induction continues.
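The odd/even invariant is easy to confirm with exact rational arithmetic; a small sketch over the first few primes:

```python
from fractions import Fraction

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
partial = Fraction(0)
for p in primes:
    partial += Fraction(1, p)
    # In lowest terms each partial sum has an odd numerator and an even
    # denominator, hence is never an integer.
    assert partial.numerator % 2 == 1 and partial.denominator % 2 == 0
```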
Another proof rewrites the expression for the sum of the first n reciprocals of primes (or indeed the sum of the reciprocals of any set of primes) in terms of the least common denominator, which is the product of all these primes. Then each of these primes divides all but one of the numerator terms and hence does not divide the numerator itself; but each prime does divide the denominator. Thus the expression is irreducible and is non-integer.
Euclid's theorem that there are infinitely many primes
Small set (combinatorics)
Brun's theorem, on the convergent sum of reciprocals of the twin primes
^ Euler, Leonhard (1737). "Variae observationes circa series infinitas" [Various observations concerning infinite series]. Commentarii Academiae Scientiarum Petropolitanae. 9: 160–188.
^ Euler, Leonhard (1748). Introductio in analysin infinitorum. Tomus Primus [Introduction to Infinite Analysis. Volume I]. Lausanne: Bousquet. p. 228, ex. 1.
^ Mertens, F. (1874). "Ein Beitrag zur analytischen Zahlentheorie". J. Reine Angew. Math. 78: 46–62.
^ Niven, Ivan, "A Proof of the Divergence of Σ 1/p", The American Mathematical Monthly, Vol. 78, No. 3 (Mar. 1971), pp. 272-273. The half-page proof is expanded by William Dunham in Euler: The Master of Us All, pp. 74-76.
^ Lord, Nick (2015). "Quick proofs that certain sums of fractions are not integers". The Mathematical Gazette. 99: 128–130. doi:10.1017/mag.2014.16. S2CID 123890989.
Dunham, William (1999). Euler: The Master of Us All. MAA. pp. 61–79. ISBN 0-88385-328-0.
Caldwell, Chris K. "There are infinitely many primes, but, how big of an infinity?".
|
2013 A Note on the Asymptotic Behavior of Parabolic Monge-Ampère Equations on Riemannian Manifolds
We study the asymptotic behavior of the parabolic Monge-Ampère equation
\partial \phi \left(x,t\right)/\partial t=\mathrm{log}\left(\mathrm{det}\left(g\left(x\right)+\mathrm{Hess}\phi \left(x,t\right)\right)/\mathrm{det}g\left(x\right)\right)-\lambda \phi \left(x,t\right)
on
𝕄×\left(0,\infty \right)
with initial condition
\phi \left(x,0\right)={\phi }_{0}\left(x\right)
on
𝕄
, where
𝕄
is a compact complete Riemannian manifold, λ is a positive real parameter, and
{\phi }_{0}\left(x\right):𝕄\to ℝ
is a smooth function. We show a meaningful asymptotic result which is more general than those in Huisken, 1997.
Qiang Ru. "A Note on the Asymptotic Behavior of Parabolic Monge-Ampère Equations on Riemannian Manifolds." J. Appl. Math. 2013 1 - 4, 2013. https://doi.org/10.1155/2013/304864
|
The L2-solutions of linear differential equations of second order
June 1947
Philip Hartman, "The {L}^{2}-solutions of linear differential equations of second order," Duke Mathematical Journal, Duke Math. J. 14(2), 323-326, (June 1947)
|
15 September 2008 Linear manifolds in the moduli space of one-forms
Martin Möller1
We study closures of
{\mathrm{GL}}_{2}^{+}\left(\mathbb{R}\right)
-orbits in the total space
\Omega {M}_{g}
of the Hodge bundle over the moduli space of curves under the assumption that they are algebraic manifolds. We show that in the generic stratum, such manifolds are the whole stratum, the hyperelliptic locus, or parameterize curves whose Jacobian has additional endomorphisms. This follows from a cohomological description of the tangent bundle to
\Omega {M}_{g}
. For nongeneric strata, similar results can be shown by a case-by-case inspection. We also propose to study a notion of linear manifold that comprises Teichmüller curves, Hilbert modular surfaces, and the ball quotients of Deligne and Mostow [DM]. Moreover, we give an explanation for the difference between Hilbert modular surfaces and Hilbert modular threefolds with respect to this notion of linearity.
Martin Möller. "Linear manifolds in the moduli space of one-forms." Duke Math. J. 144 (3) 447 - 487, 15 September 2008. https://doi.org/10.1215/00127094-2008-041
|
Modular Arithmetic · USACO Guide
Authors: Darren Yao, Michael Cao, Andi Qu, Benjamin Qi, Andrew Wang
Working with remainders from division.
Gold - Divisibility
AryanshS
The Art of Modular Arithmetic
introduces modular arithmetic through numerous math-contest-level examples and problems
13.3 - Modular Arithmetic
very brief, module is based off this
plenty of examples from math contests
Spheniscine - Modular Arithmetic for Beginners
some practice probs
2, 5 - Modular Arithmetic
In modular arithmetic, instead of working with integers themselves, we work with their remainders when divided by
m
. We call this taking modulo
m
. For example, if
m = 23
, then instead of working with
x = 247
, we use
x \bmod 23 = 17
. Usually,
m
will be a large prime, given in the problem; the two most common values are
10^9 + 7
and
998\,244\,353=119\cdot 2^{23}+1
. Modular arithmetic is used to avoid dealing with numbers that overflow built-in data types, because we can take remainders, according to the following formulas:
(a+b) \bmod m = (a \bmod m + b \bmod m) \bmod m
(a-b) \bmod m = (a \bmod m - b \bmod m) \bmod m
(a \cdot b) \pmod{m} = ((a \bmod m) \cdot (b \bmod m)) \bmod m
a^b \bmod {m} = (a \bmod m)^b \bmod m
Binary exponentiation can be used to efficiently compute
x ^ n \bmod m
. To do this, let's break down
x ^ n
into binary components. For example,
5 ^ {10}
=
5 ^ {1010_2}
=
5 ^ 8 \cdot 5 ^ 2
. Then, if we know
x ^ y
for all
y
which are powers of two (
x ^ 1
,
x ^ 2
,
x ^ 4
,
\dots
,
x ^ {2^{\lfloor{\log_2n} \rfloor}}
), we can compute
x ^ n
in
\mathcal{O}(\log n)
.
To deal with the modulus
m
, observe that taking remainders doesn't affect multiplications, so we can directly implement the above "binary exponentiation" algorithm while adding a line to take results
\pmod m
.
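As a sketch, the whole routine fits in a few lines of Python (the name binpow matches the snippets used later in this module):

```python
def binpow(x, n, m):
    # Computes x^n mod m in O(log n) multiplications via binary exponentiation.
    res = 1
    x %= m
    while n > 0:
        if n & 1:             # this bit of n contributes a factor x^(2^k)
            res = res * x % m
        x = x * x % m         # square: x^(2^k) -> x^(2^(k+1))
        n >>= 1
    return res
```

For example, binpow(2, MOD - 2, MOD) with MOD = 10^9 + 7 gives the modular inverse of 2, as used below.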
Solution - Exponentiation
The modular inverse is the equivalent of the reciprocal in real-number arithmetic; to divide
a
by
b
, we multiply
a
by the modular inverse of
b
. We'll only consider prime moduli
p
here. For example, the modular inverse of
2
modulo
p=10^9+7
is
i=\frac{p+1}{2}=5\cdot 10^8+4
. This means that for any integer
x
,
(2x)\cdot i\equiv x\cdot (2i)\equiv x\pmod{10^9+7}.
For example,
10i\equiv 5\pmod{10^9+7}
.
There are various ways to take a modular inverse; we'll only discuss exponentiation here.
With Exponentiation
Fermat's Little Theorem (not to be confused with Fermat's Last Theorem) states that all integers
a
not divisible by
p
satisfy
a^{p - 1} \equiv 1 \pmod{p}
. Consequently,
a^{p-2} \cdot a \equiv 1 \pmod{p}
, so
a^{p - 2}
is a modular inverse of
a
modulo
p
.
ll x = binpow(2, MOD - 2, MOD);
cout << x << "\n"; // 500000004
assert(2 * x % MOD == 1);
public static final int MOD = (int) Math.pow(10, 9) + 7;
long x = binpow(2, MOD - 2, MOD);
System.out.println(x); // 500000004
x = pow(2, MOD - 2, MOD);
print(x) # 500000004
Because it takes
\mathcal{O}(\log p)
time to compute a modular inverse modulo
p
, frequent use of division inside a loop can significantly increase the running time of a program. If the modular inverse of the same number is used many times, it is a good idea to precalculate it.
Also, one must always ensure that they do not attempt to divide by 0. Be aware that after applying modulo, a nonzero number can become zero, so be very careful when dividing by non-constant values.
Optional: Another Way to Compute Modular Inverses
We can also use the extended Euclidean algorithm. See the module in the Advanced section.
ModIntShort
feasible to type up during a contest
Using BenQ's template, both of these do the same thing:
Code Snippet: ModInt (Click to expand)
int a = 1e8, b = 1e8, c = 1e8;
Exponentiation II
Easy Show Tags Modular Arithmetic
Divisor Analysis
Normal Show Tags Modular Arithmetic
Hard Show Tags Modular Arithmetic
|
Bitmask DP · USACO Guide
Authors: Michael Cao, Siyong Huang
Contributors: Andrew Wang, Neo Wang
Gold - Intro to Bitwise Operators
You can often use this to solve subtasks.
Hamiltonian Flights
10.5 - DP on Bits, 19.2 - Hamiltonian Paths
Elevator Rides, SOS, Hamiltonian
9.4 - Subset DP
example - similar to Hamiltonian
Hamiltonian walks
DP and Bit Masking
Easy Show Tags Bitmasks, MinCostFlow
Guard Mark
Uddered but not Herd
Moovie Mooving
Max Indep Set
Normal Show Tags Bitmasks, DP, Meet in Middle
Normal Show Tags Binary Search, Bitmasks, DP, Geometry
2017 - Longest beautiful sequence
Hard Show Tags Bitmasks, DP
Hard Show Tags Bitmasks, DP, Game Theory, Sqrt
Very Hard Show Tags Bitmasks, DP
Very Hard Show Tags Bitmasks, DFS, DP, Tree
Application - Bitmask over Primes
In some number theory problems, it helps to represent each number by the bitmask of its prime divisors. For example, the set
\{6, 10, 15 \}
can be represented by
\{0b011, 0b101, 0b110 \}
, where the bits correspond to divisibility by
[2, 3, 5]
.
Then, here are some equivalent operations between masks and these integers:
Bitwise AND is GCD
Bitwise OR is LCM
Iterating over bits is iterating over prime divisors
Iterating over submasks is iterating over divisors
Choosing a set with GCD
1
is equivalent to choosing a set of bitmasks that AND to
0
. For example, we can see that
\{6, 10 \}
doesn't have GCD
1
because
0b011 \& 0b101 = 0b001 \neq 0
, while
\{6, 10, 15 \}
has GCD
1
because
0b011 \& 0b101 \& 0b110 = 0b000 = 0
.
Maybe this is just standard NT, but I've always thought about it as a bitmask. Also, any tutorials or more problems of this type?
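A tiny sketch of the correspondence, using the same set {6, 10, 15} (the helper name is illustrative):

```python
PRIMES = [2, 3, 5]

def prime_mask(n):
    # Bit i is set iff PRIMES[i] divides n (n assumed squarefree here),
    # so 6 -> 0b011, 10 -> 0b101, 15 -> 0b110.
    return sum(1 << i for i, p in enumerate(PRIMES) if n % p == 0)

# {6, 10} share the prime 2, so their masks AND to a nonzero value, while
# the masks of {6, 10, 15} AND to 0, i.e. the whole set has GCD 1.
```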
Hard Show Tags Combinatorics, DP
Very Hard Show Tags Bitmasks, DP, NT
Insane Show Tags Binary Search, Bitmasks, NT
Insane Show Tags Bitmasks, Combinatorics, DP
|
We show that a differentiable vector field admits an integrable polynomial normal form in a neighborhood of a limit cycle of multiplicity two. This form depends on three parameters: the formal monodromy invariant, the period of the cycle, and a third invariant measuring the asymmetry of the periods of the cycles that appear in the generic bifurcation of this double cycle.
We establish a polynomial normal form for a vector field having a limit cycle of multiplicity 2. The smooth classification problem for such fields is closely related to the problem of classification of germs
\Delta :\left({ℝ}^{1},0\right)\to \left({ℝ}^{1},0\right)
,
\Delta \left(x\right)=x+c{x}^{2}+\cdots
, solved by F. Takens in 1973. Such germs appear as the germs of Poincaré return maps for semistable cycles, and a smooth conjugacy between any two such germs may be extended to a smooth orbital equivalence between the original fields.
If one deals with smooth conjugacy of flows rather than with the orbital equivalence of the corresponding fields, then two additional real parameters appear. One of them is the period of the cycle, while the second parameter keeps track of the asymmetry of the angular velocity, resulting in a difference between periods of two hyperbolic cycles appearing after perturbation of the given field.
author = {Yakovenko, Sergey Yu.},
title = {Smooth normalization of a vector field near a semistable limit cycle},
AU - Yakovenko, Sergey Yu.
TI - Smooth normalization of a vector field near a semistable limit cycle
Yakovenko, Sergey Yu. Smooth normalization of a vector field near a semistable limit cycle. Annales de l'Institut Fourier, Tome 43 (1993) no. 3, pp. 893-903. doi : 10.5802/aif.1360. http://archive.numdam.org/articles/10.5802/aif.1360/
[AVG] V.I. Arnold, A.N. Varchenko and S.M. Gussein-Zade, Singularities of differentiable maps. I. Classification of critical points, caustics and wavefronts, Monographs in Mathematics, vol 82, Birkhäuser Boston Inc., Boston MA, 1985. | Zbl 0554.58001
[B] G. Belitskiĭ, Equivalence and normal forms of germs of smooth mappings, Russian Mathematical Surveys, 33-1 (1978). | MR 80k:58017 | Zbl 0398.58009
[IY1] Yu.S. Ilyashenko, S.Yu. Yakovenko, Finite-differentiable normal forms for local families of diffeomorphisms and vector fields, Russian Math. Surveys, 46-1 (1991), 1-43. | Zbl 0744.58006
[IY2] Yu.S. Ilyashenko, S.Yu. Yakovenko, Nonlinear Stokes phenomena in smooth classification problems, Nonlinear Stokes phenomena (Yu. S. Ilyashenko, ed.), Advances in Soviet Mathematics, AMS Publ., Providence RI, 14 (1993), 235-287. | Zbl 0804.32012
[T] F. Takens, Normal forms for certain singularities of vector fields, Ann. Inst. Fourier, Grenoble, 23-2 (1973), 163-195. | Numdam | MR 51 #1872 | Zbl 0266.34046
[M] J.N. Mather, Stability of C∞-mappings, III, Publ. Math. IHES, 35 (1968), 279-308. | Numdam | Zbl 0159.25001
|
Return loss - Wikipedia
In telecommunications, return loss is a measure in relative terms of the power of the signal reflected by a discontinuity in a transmission line or optical fiber. This discontinuity can be caused by a mismatch between the termination or load connected to the line and the characteristic impedance of the line. It is usually expressed as a ratio in decibels (dB);
{\displaystyle RL(\mathrm {dB} )=10\log _{10}{P_{\mathrm {i} } \over P_{\mathrm {r} }}}
where RL(dB) is the return loss in dB, Pi is the incident power and Pr is the reflected power.
Return loss is related to both standing wave ratio (SWR) and reflection coefficient (Γ). Increasing return loss corresponds to lower SWR. Return loss is a measure of how well devices or lines are matched. A match is good if the return loss is high. A high return loss is desirable and results in a lower insertion loss.
From a certain perspective 'Return Loss' is a misnomer. The usual function of a transmission line is to convey power from a source to a load with minimal loss. If a transmission line is correctly matched to a load, the reflected power will be zero, no power will be lost due to reflection, and 'Return Loss' will be infinite. Conversely if the line is terminated in an open circuit, the reflected power will be equal to the incident power; all of the incident power will be lost in the sense that none of it will be transferred to a load, and RL will be zero. Thus the numerical values of RL tend in the opposite sense to that expected of a 'loss'.
As defined above, RL will always be positive, since Pr can never exceed Pi . However, return loss has historically been expressed as a negative number, and this convention is still widely found in the literature.[1] Strictly speaking, if a negative sign is ascribed to RL, the ratio of reflected to incident power is implied;
{\displaystyle RL'(\mathrm {dB} )=10\log _{10}{P_{\mathrm {r} } \over P_{\mathrm {i} }}}
where RL'(dB) is the negative of RL(dB).
In practice, the sign ascribed to RL is largely immaterial. If a transmission line includes several discontinuities along its length, the total return loss will be the sum of the RLs caused by each discontinuity, and provided all RLs are given the same sign, no error or ambiguity will result. Whichever convention is used, it will always be understood that Pr can never exceed Pi .
In metallic conductor systems, reflections of a signal traveling down a conductor can occur at a discontinuity or impedance mismatch. The ratio of the amplitude of the reflected wave Vr to the amplitude of the incident wave Vi is known as the reflection coefficient
{\displaystyle \Gamma }
{\displaystyle {\mathit {\Gamma }}={V_{\mathrm {r} } \over V_{\mathrm {i} }}}
Return loss is the negative of the magnitude of the reflection coefficient in dB. Since power is proportional to the square of the voltage, return loss is given by,
{\displaystyle RL(\mathrm {dB} )=-20\log _{10}\left|{\mathit {\Gamma }}\right|}
where the vertical bars indicate magnitude. Thus, a large positive return loss indicates the reflected power is small relative to the incident power, which indicates good impedance match between transmission line and load.
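For a purely resistive load, the reflection coefficient can be computed from the load and line impedances using the standard mismatch formula Γ = (Z_L − Z_0)/(Z_L + Z_0) (not derived in this article, but widely tabulated). A hedged Python sketch with illustrative impedance values:

```python
import math

def reflection_coefficient(z_load: float, z0: float) -> float:
    # Standard result for a resistive load on a line of characteristic
    # impedance z0 (assumed here; not derived in this article).
    return (z_load - z0) / (z_load + z0)

def return_loss_db(gamma: float) -> float:
    return -20 * math.log10(abs(gamma))

gamma = reflection_coefficient(75.0, 50.0)  # a 75-ohm load on a 50-ohm line
print(round(gamma, 3), round(return_loss_db(gamma), 1))  # ≈ 0.2 and ≈ 14.0 dB
```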
If the incident power and the reflected power are expressed in 'absolute' decibel units, (e.g., dBm), then the return loss in dB can be calculated as the difference between the incident power Pi (in absolute decibel units) and the reflected power Pr (also in absolute decibel units),
{\displaystyle RL(\mathrm {dB} )=P_{\mathrm {i} }(\mathrm {dB} )-P_{\mathrm {r} }(\mathrm {dB} )\,}
In optics (particularly in fiber optics), return loss is the loss that takes place at discontinuities of refractive index, especially at an air-glass interface such as a fiber endface. At those interfaces, a fraction of the optical signal is reflected back toward the source. This reflection phenomenon is also called "Fresnel reflection loss," or simply "Fresnel loss."
Fiber optic transmission systems use lasers to transmit signals over optical fiber, and a high optical return loss (ORL) can cause the laser to stop transmitting correctly. The measurement of ORL is becoming more important in the characterization of optical networks as the use of wavelength-division multiplexing increases. These systems use lasers that have a lower tolerance for ORL, and introduce elements into the network that are located in close proximity to the laser.
{\displaystyle {\text{ORL}}(\mathrm {dB} )=10\log _{10}{P_{\mathrm {i} } \over P_{\mathrm {r} }}}
where
{\displaystyle \scriptstyle P_{\mathrm {r} }}
is the reflected power and
{\displaystyle \scriptstyle P_{\mathrm {i} }}
is the incident, or input, power.
^ Trevor S. Bird, "Definition and Misuse of Return Loss", IEEE Antennas & Propagation Magazine, vol.51, iss.2, pp. 166–167, April 2009.
Federal Standard 1037C and from MIL-STD-188
Optical Return Loss Testing—Ensuring High-Quality Transmission EXFO Application note #044
Retrieved from "https://en.wikipedia.org/w/index.php?title=Return_loss&oldid=1088185680"
|
Understanding the Bayesian approach to false discovery rates (using baseball statistics) | R-bloggers
Understanding the Bayesian approach to false discovery rates (using baseball statistics)
Understanding the beta distribution (using baseball statistics)
Understanding credible intervals (using baseball statistics)
In my last few posts, I’ve been exploring how to perform estimation of batting averages, as a way to demonstrate empirical Bayesian methods. We’ve been able to construct both point estimates and credible intervals based on each player’s batting performance, while taking into account that we have more information about some players than others.
But sometimes, rather than estimating a value, we’re looking to answer a yes or no question about each hypothesis, and thus classify them into two groups. For example, suppose we were constructing a Hall of Fame, where we wanted to include all players that have a batting probability (chance of getting a hit) greater than .300. We want to include as many players as we can, but we need to be sure that each belongs.
In the case of baseball, this is just for illustration; in real life, there are a lot of other, better metrics to judge a player by! But the problem of hypothesis testing appears whenever we’re trying to identify candidates for future study. We need a principled approach to decide which players are worth including, one that also handles multiple testing problems. (Are we sure that any players actually have a batting probability above .300? Or did a few players just get lucky?) To solve this, we’re going to apply a Bayesian approach to a method usually associated with frequentist statistics, namely false discovery rate control.
This approach is very useful outside of baseball, and even outside of beta/binomial problems. We could be asking which genes in an organism are related to a disease, which answers to a survey have changed over time, or which counties have an unusually high incidence of a disease. Knowing how to work with posterior predictions for many individuals, and come up with a set of candidates for further study, is an essential skill in data science.
As I did in my last post, I’ll start with some code you can use to catch up if you want to follow along in R. (Once again, all the code in this post can be found here).
anti_join(Pitching, by = "playerID") %>%
beta1 = AB - H + beta0)
Posterior Error Probabilities
Consider the legendary player Hank Aaron. His career batting average is 0.3050, but we’re basing our hall on his “true probability” of hitting. Should he be permitted in our >.300 Hall of Fame?
When Aaron’s batting average is shrunken by empirical Bayes, we get an estimate of 0.3039. We thus suspect that his true probability of hitting is higher than .300, but we’re not necessarily certain (recall the credible intervals). Let’s take a look at his posterior beta distribution:
We can see that there is a nonzero probability (shaded) that his true probability of hitting is less than .3. We can calculate this with the cumulative distribution function (CDF) of the beta distribution, which in R is computed by the pbeta function:
career_eb %>% filter(name == "Hank Aaron")
## playerID name H AB average eb_estimate alpha1 beta1
## (chr) (chr) (int) (int) (dbl) (dbl) (dbl) (dbl)
## 1 aaronha01 Hank Aaron 3771 12364 0.305 0.304 3850 8818
pbeta(.3, 3850, 8818)
This probability that he doesn’t belong in the Hall of Fame is called the Posterior Error Probability, or PEP. We could easily have calculated the probability Aaron does belong, which we would call the Posterior Inclusion Probability, or PIP. (Note that
\text{PIP} = 1 - \text{PEP}
) The reason we chose to measure the PEP rather than the PIP will become clear in the next section.
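The pbeta call above has a direct Python analogue (scipy.stats.beta.cdf(0.3, 3850, 8818) would compute it exactly). As a standard-library-only sketch, we can instead estimate the PEP by sampling from the posterior:

```python
import random

# Monte Carlo estimate of Aaron's PEP: draw from his posterior
# Beta(3850, 8818) and count how often the true probability of hitting
# falls below .300. Sampling is for illustration; a beta CDF gives the
# exact value, which is about 0.17.
random.seed(42)
draws = [random.betavariate(3850, 8818) for _ in range(20000)]
pep_estimate = sum(d < 0.300 for d in draws) / len(draws)
print(pep_estimate)  # close to 0.17
```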
It’s equally straightforward to calculate the PEP for every player, just like we calculated the credible intervals for each player in the last post:
career_eb <- career_eb %>%
mutate(PEP = pbeta(.3, alpha1, beta1))
What does the distribution of the PEP look like across players?
Unsurprisingly, for most players, it’s almost certain that they don’t belong in the hall of fame: we know that their batting averages are below .300. If they were included, it is almost certain that they would be an error. In the middle are the borderline players: the ones where we’re not sure. And down there close to 0 are the rare but proud players who we’re (almost) certain belong in the hall of fame.
The PEP is closely related to the estimated batting average:
Notice that crossover point: to have a PEP less than 50%, you need to have a shrunken batting average greater than .3. That’s because the shrunken estimate is the center of our posterior beta distribution (the “over/under” point). If a player’s shrunken estimate is above .3, it’s more likely than not that their true average is as well. And the players we’re not sure about (PEP ≈ .5) have batting averages very close to .300.
Notice also the relationship between the number of at-bats (the amount of evidence) and the PEP. If a player’s shrunken batting average is .28, but he hasn’t batted many times, it is still possible his true batting average is above .3: the credible interval is wide. However, if the player with .28 has a high AB (light blue), the credible interval becomes thinner, we become confident that the true probability of hitting is under .3, and the PEP goes up to 1.
Now we want to set some threshold for inclusion in our Hall of Fame. This criterion is up to us: what kind of goal do we want to set? There are many options, but let me propose one: let’s try to include as many players as possible, while ensuring that no more than 5% of the Hall of Fame was mistakenly included. Put another way, we want to ensure that if you’re in the Hall of Fame, the probability you belong there is at least 95%.
This criterion is called false discovery rate control. It’s particularly relevant in scientific studies, where we might want to come up with a set of candidates (e.g. genes, countries, individuals) for future study. There’s nothing special about 5%: if we wanted to be more strict, we could choose the same policy, but change our desired FDR to 1% or .1%. Similarly, if we wanted a broader set of candidates to study, we could set an FDR of 10% or 20%.
Let’s start with the easy cases. Who are the players with the lowest posterior error probability?
rank name H AB eb_estimate PEP
1 Rogers Hornsby 2930 8173 0.355 0
2 Ed Delahanty 2596 7505 0.343 0
3 Shoeless Joe Jackson 1772 4981 0.350 0
4 Willie Keeler 2932 8591 0.338 0
5 Nap Lajoie 3242 9589 0.336 0
6 Tony Gwynn 3141 9288 0.336 0
7 Harry Heilmann 2660 7787 0.339 0
8 Lou Gehrig 2721 8001 0.337 0
9 Billy Hamilton 2158 6268 0.340 0
10 Eddie Collins 3315 9949 0.331 0
These players are a no-brainer for our Hall of Fame: there’s basically no risk in including them. But suppose we instead tried to include the top 100. What do the 90th-100th players look like?
90 Stuffy McInnis 2405 7822 0.306 0.134
91 Bob Meusel 1693 5475 0.307 0.138
92 Rip Radcliff 1267 4074 0.307 0.144
93 Mike Piazza 2127 6911 0.306 0.146
94 Denny Lyons 1333 4294 0.307 0.150
95 Robinson Cano 1649 5336 0.306 0.150
96 Don Mattingly 2153 7003 0.305 0.157
97 Taffy Wright 1115 3583 0.307 0.168
98 Hank Aaron 3771 12364 0.304 0.170
99 John Stone 1391 4494 0.306 0.171
100 Ed Morgan 879 2810 0.308 0.180
OK, so these players are borderline. We would guess that their true batting average is greater than .300, but we aren’t as certain.
So let’s say we chose to take the top 100 players for our Hall of Fame (thus, cut it off at Ed Morgan). What would we predict the false discovery rate to be? That is, what fraction of these 100 players would be falsely included?
top_players <- career_eb %>%
top_n(100, -PEP)    # the 100 players with the lowest PEP
Well, we know the PEP of each of these 100 players, which is the probability that that individual player is a false positive. And by the wonderful property of linearity of expected value, we can just add up these probabilities to get the expected value (the average) of the total number of false positives.
sum(top_players$PEP)
This means that of these 100 players, we expect that about four and a half of them are false discoveries. (If it’s not clear why you can add up the probabilities like that, check out this explanation of linearity of expected value). Now, we don’t know which four or five players we are mistaken about! (If we did, we could just kick them out of the hall). But we can make predictions about the players in aggregate. Here, we can see that taking the top 100 players would get pretty close to our goal of FDR = 5%.
Note that we’re calculating the FDR as
4.43 / 100 = 4.43%
. Thus, we’re really computing the mean PEP: the average Posterior Error Probability.
mean(top_players$PEP)
We could have asked the same thing about the first 50 players, or the first 200:
sorted_PEP <- career_eb %>%
arrange(PEP)
mean(head(sorted_PEP$PEP, 50))
mean(head(sorted_PEP$PEP, 200))
We can experiment with many thresholds to get our desired FDR, but it’s even easier just to compute them all at once, by computing the cumulative mean of all the (sorted) posterior error probabilities. We can use the cummean function from dplyr:
arrange(PEP) %>%
mutate(qvalue = cummean(PEP))
Notice that I called the cumulative mean of the FDR a qvalue. The term q-value was first defined by John Storey as an analogue to the p-value for controlling FDRs in multiple testing. The q-value is convenient because we can say “to control the FDR at X%, collect only hypotheses where
q < X.”
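The q-value computation itself needs nothing beyond a sort and a running mean. A Python sketch with hypothetical PEP values (the numbers are made up for illustration):

```python
# Sketch of the q-value computation: sort hypotheses by PEP, take the
# cumulative mean, and keep everything with q below the desired FDR.
peps = [0.001, 0.01, 0.04, 0.08, 0.15, 0.30, 0.60]  # hypothetical players

peps_sorted = sorted(peps)
qvalues = []
running = 0.0
for i, pep in enumerate(peps_sorted, start=1):
    running += pep
    qvalues.append(running / i)  # cummean(PEP), as in the dplyr code

selected = [q for q in qvalues if q < 0.05]
print(len(selected))  # → 4 players pass at FDR 5%
```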
hall_of_fame <- career_eb %>%
  filter(qvalue < .05)
This ends up with 103 players in the Hall of Fame. If we wanted to be more careful about letting players in, we’d simply set a stricter q-value threshold:
strict_hall_of_fame <- career_eb %>%
  filter(qvalue < .01)    # a stricter threshold
At which point we’d include only 68 players. It’s useful to look at how many players would be included at various thresholds:
This shows that you could include 200 players in the Hall of Fame, but at that point you’d expect that about 25% of them would be incorrectly included. On the other side, you could create a hall of 50 players and be very confident that all of them have a true batting probability above .300.
It’s worth emphasizing the difference between measuring an individual’s posterior error probability and the q-value, which is the false discovery rate of a group including that player. Hank Aaron has a PEP of 17%, but he can be included in the Hall of Fame while keeping the FDR below 5%. If this is surprising, imagine that you were instead trying to keep the average height above 6’0”. You would start by including all players taller than 6’0”, but could also include some players who were 5’10” or 5’11” while preserving your average. Similarly, we simply need to keep the average PEP of the players below 5%. (For this reason, the PEP is sometimes called the local false discovery rate, which emphasizes both the connection and the distinction).
Frequentists and Bayesians; meeting in the middle
In my previous three posts, I’ve been taking a Bayesian approach to our estimation and interpretation of batting averages. We haven’t really used any frequentist statistics: in particular, we haven’t seen a single p-value or null hypothesis. Now we’ve used our posterior distributions to compute q-values, and used them to control the false discovery rate.
But note that the q-value was originally defined in terms of null hypothesis significance testing, particularly as a transformation of p-values under multiple testing. By calculating, and then averaging, the posterior error probability, we’ve found another way to control FDR. This connection is explored in two great papers from my former advisor, found here and here.
There are some notable differences between our approach here and typical FDR control. In particular, we aren’t defining a null hypothesis (we aren’t assuming any players have a batting average equal to .300), but are instead trying to avoid what Andrew Gelman calls “Type S errors”. Still, this is another great example of the sometimes underappreciated technique of examining the frequentist properties of Bayesian approaches, and, conversely, understanding the Bayesian interpretations of frequentist goals.
What’s Next: A/B testing of batters
We’ve been comparing each player to a fixed threshold, .300. What if we want to compare two players to each other? For instance, catcher Mike Piazza has a higher career batting average (2127 / 6911 = 0.308) than Hank Aaron (3771 / 12364 = 0.305). Can we say with confidence that his true batting average is higher?
This is the common problem of comparing two proportions, which often occurs in A/B testing (e.g. comparing two versions of a login form to see which gets a higher signup rate). We’ll apply some of what we learned here about the Bayesian approach to hypothesis testing, and see how sharing information across batters with empirical Bayes can once again give us an advantage.
|
Ordinary differential equation - Wikiversity
1 Ordinary differential equation
2 Sturm-Liouville theory
3 Solving ordinary differential equations
3.1 First-order ordinary differential equations
3.2 Second-order ordinary differential equations
Ordinary differential equation
In mathematics, an ordinary differential equation (or ODE) is a relation that contains functions of only one independent variable, and one or more of its derivatives with respect to that variable.
A simple example is Newton's second law of motion, which leads to the differential equation
{\displaystyle m{\frac {d^{2}x(t)}{dt^{2}}}=F(x(t)),\,}
for the motion of a particle of mass m. In general, the force F depends upon the position of the particle x(t) at time t, and thus the unknown function x(t) appears on both sides of the differential equation, as is indicated in the notation F(x(t)).
Ordinary differential equations are distinguished from partial differential equations, which involve partial derivatives of several variables.
Ordinary differential equations arise in many different contexts including geometry, mechanics, astronomy and population modelling. Many famous mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert and Euler.
Much study has been devoted to the solution of ordinary differential equations. In the case where the equation is linear, it can be solved by analytical methods. Unfortunately, most of the interesting differential equations are non-linear and, with a few exceptions, cannot be solved exactly. Approximate solutions are arrived at using computer approximations (see numerical ordinary differential equations). The trajectory of a projectile launched from a cannon follows a curve determined by an ordinary differential equation that is derived from Newton's second law.
Ordinary differential equation
Let y be an unknown function
{\displaystyle y:\mathbb {R} \to \mathbb {R} }
in x with y(n) the nth derivative of y, then an equation of the form
{\displaystyle F(x,y,y',\ \dots ,\ y^{(n-1)})=y^{(n)}}
is called an ordinary differential equation (ODE) of order n; for vector valued functions,
{\displaystyle y:\mathbb {R} \to \mathbb {R} ^{m}}
it is called a system of ordinary differential equations of dimension m.
When a differential equation of order n has the form
{\displaystyle F\left(x,y,y',y'',\ \dots ,\ y^{(n)}\right)=0}
it is called an implicit differential equation whereas the form
{\displaystyle F\left(x,y,y',y'',\ \dots ,\ y^{(n-1)}\right)=y^{(n)}}
is called an explicit differential equation.
A differential equation is said to be linear if F can be written as a linear combination of the derivatives of y
{\displaystyle y^{(n)}=\sum _{i=0}^{n-1}a_{i}(x)y^{(i)}+r(x)}
with ai(x) and r(x) continuous functions in x. The function r(x) is called the source term; if r(x)=0 then the linear differential equation is called homogeneous, otherwise it is called non-homogeneous or inhomogeneous.
Given a differential equation
{\displaystyle F(x,y,y',\ \dots ,\ y^{(n)})=0}
a function
{\displaystyle u:I\subset \mathbb {R} \to \mathbb {R} }
is called a solution or integral curve for F, if u is n-times differentiable on I, F is defined for all
{\displaystyle (x,u,u',\ \dots ,\ u^{(n)})\quad x\in I}
and
{\displaystyle F(x,u,u',\ \dots ,\ u^{(n)})=0\quad x\in I.}
Given two solutions
{\displaystyle u:J\subset \mathbb {R} \to \mathbb {R} }
{\displaystyle v:I\subset \mathbb {R} \to \mathbb {R} }
u is called an extension of v if I ⊂ J and
{\displaystyle u(x)=v(x)\quad x\in I.\,}
A solution which has no extension is called a global solution.
A general solution of an n-th order equation is a solution containing n arbitrary constants, corresponding to n constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill set initial conditions or boundary conditions. A singular solution is a solution that can't be derived from the general solution.
Reduction to a first order system
Any differential equation of order n can be written as a system of n first-order differential equations. Given an explicit ordinary differential equation of order n and dimension 1,
{\displaystyle F\left(x,y,y',y'',\ \dots ,\ y^{(n-1)}\right)=y^{(n)}}
we define a new family of unknown functions
{\displaystyle y_{i}:=y^{(i-1)},\quad i=1,\ldots ,n.}
We can then rewrite the original differential equation as a system of differential equations with order 1 and dimension n.
{\displaystyle y_{1}^{'}=y_{2}}
{\displaystyle \vdots }
{\displaystyle y_{n}^{'}=F(x,y_{1},\dots ,y_{n}).}
which can be written concisely in vector notation as
{\displaystyle \mathbf {y} ^{'}=\mathbf {F} (x,\mathbf {y} )}
with
{\displaystyle \mathbf {y} :=(y_{1},\ldots ,y_{n}).}
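Reduction to a first-order system is exactly the form numerical integrators consume. As a concrete sketch (Python, written here for illustration), y'' = −y reduces to the system y₁' = y₂, y₂' = −y₁; integrating it with the classical Runge-Kutta method recovers cos x:

```python
import math

def rk4_step(f, x, y, h):
    """One classical Runge-Kutta (RK4) step for the system y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
    k3 = f(x + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
    k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h/6 * (a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# y'' = -y rewritten as the first-order system y1' = y2, y2' = -y1.
def f(x, y):
    return [y[1], -y[0]]

y, h = [1.0, 0.0], 0.01        # y(0) = 1, y'(0) = 0, so y1(x) = cos x
for i in range(100):           # integrate from x = 0 to x = 1
    y = rk4_step(f, i * h, y, h)
print(abs(y[0] - math.cos(1.0)))  # tiny global error
```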
Linear ordinary differential equations
A well understood particular class of differential equations is linear differential equations. We can always reduce an explicit linear differential equation of any order to a system of differential equation of order 1
{\displaystyle y_{i}'(x)=\sum _{j=1}^{n}a_{i,j}(x)y_{j}+b_{i}(x)\,\mathrm {,} \quad i=1,\ldots ,n}
which we can write concisely using matrix and vector notation as
{\displaystyle \mathbf {y} ^{'}(x)=\mathbf {A} (x)\mathbf {y} (x)+\mathbf {b} (x)}
{\displaystyle \mathbf {y} (x):=(y_{1}(x),\ldots ,y_{n}(x)),\quad \mathbf {b} (x):=(b_{1}(x),\ldots ,b_{n}(x)),\quad \mathbf {A} (x):=(a_{i,j}(x))\,\mathrm {,} \quad i,j=1,\ldots ,n.}
Homogeneous equations
The set of solutions for a system of homogeneous linear differential equations of order 1 and dimension n
{\displaystyle \mathbf {y} ^{'}(x)=\mathbf {A} (x)\mathbf {y} (x)}
forms an n-dimensional vector space. Given a basis for this vector space
{\displaystyle \mathbf {z} _{1}(x),\ldots ,\mathbf {z} _{n}(x),}
which is called a fundamental system, every solution
{\displaystyle \mathbf {s} (x)}
can be written as
{\displaystyle \mathbf {s} (x)=\sum _{i=1}^{n}c_{i}\mathbf {z} _{i}(x).}
The n × n matrix
{\displaystyle \mathbf {Z} (x):=(\mathbf {z} _{1}(x),\ldots ,\mathbf {z} _{n}(x))}
is called a fundamental matrix. In general there is no method to explicitly construct a fundamental system, but if one solution is known d'Alembert reduction can be used to reduce the dimension of the differential equation by one.
Nonhomogeneous equations
The set of solutions for a system of inhomogeneous linear differential equations of order 1 and dimension n
{\displaystyle \mathbf {y} ^{'}(x)=\mathbf {A} (x)\mathbf {y} (x)+\mathbf {b} (x)}
can be constructed by finding the fundamental system
{\displaystyle \mathbf {z} _{1}(x),\ldots ,\mathbf {z} _{n}(x)}
to the corresponding homogeneous equation and one particular solution
{\displaystyle \mathbf {p} (x)}
to the inhomogeneous equation. Every solution
{\displaystyle \mathbf {s} (x)}
to the nonhomogeneous equation can then be written as
{\displaystyle \mathbf {s} (x)=\sum _{i=1}^{n}c_{i}\mathbf {z} _{i}(x)+\mathbf {p} (x).}
A particular solution to the nonhomogeneous equation can be found by the method of undetermined coefficients or the method of variation of parameters.
Fundamental systems for homogeneous equations with constant coefficients
If a system of homogeneous linear differential equations has constant coefficients
{\displaystyle \mathbf {y} ^{'}(x)=\mathbf {A} \mathbf {y} (x)}
then we can explicitly construct a fundamental system. The fundamental system can be written as a matrix differential equation
{\displaystyle \mathbf {Y} ^{'}=\mathbf {A} \mathbf {Y} }
with solution as a matrix exponential
{\displaystyle e^{x\mathbf {A} }}
which is a fundamental matrix for the original differential equation. To explicitly calculate this expression we first transform A into Jordan normal form
{\displaystyle e^{x\mathbf {A} }=e^{x\mathbf {C} ^{-1}\mathbf {J} \mathbf {C} }=\mathbf {C} ^{-1}e^{x\mathbf {J} }\mathbf {C} }
and then evaluate the Jordan blocks
{\displaystyle J_{i}={\begin{bmatrix}\lambda _{i}&1&&\\&\ddots &\ddots &\\&&\ddots &1\\&&&\lambda _{i}\end{bmatrix}}}
of J separately as
{\displaystyle e^{xJ_{i}}=e^{\lambda _{i}x}{\begin{bmatrix}1&x&{\frac {x^{2}}{2}}&\dots &{\frac {x^{n-1}}{(n-1)!}}\\&\ddots &\ddots &\ddots &\vdots \\&&\ddots &\ddots &{\frac {x^{2}}{2}}\\&&&\ddots &x\\&&&&1\end{bmatrix}}.}
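The Jordan-form route above is the structural statement. As a purely numerical sketch (pure Python, using a truncated Taylor series instead of Jordan reduction), one can check the fundamental matrix for a 2×2 example where the matrix exponential is known in closed form:

```python
import math

def mat_mul(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(a, terms=30):
    """e^A for a 2x2 matrix A via the truncated Taylor series sum A^k / k!."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # k = 0 term: the identity
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_mul(term, a)                        # build A^k ...
        term = [[t / k for t in row] for row in term]  # ... / k!
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

# For A = [[0, 1], [-1, 0]], e^{xA} is the rotation matrix
# [[cos x, sin x], [-sin x, cos x]] -- an explicit fundamental matrix.
x = 1.0
ex = expm2([[0.0, x], [-x, 0.0]])
print(ex[0][0] - math.cos(x), ex[0][1] - math.sin(x))  # both ≈ 0
```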
Theories of ODEs
Singular solutions
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century did it receive special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (starting in 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field which was worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
Reduction to quadratures
The primitive attempt in dealing with differential equations had in view a reduction to quadratures. As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the nth degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that the differential equation meets its limitations very soon unless complex numbers are introduced. Hence analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter the real question was to be, not whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and if so, what are the characteristic properties of this function.
Fuchsian theory
Two memoirs by Fuchs (Crelle, 1866, 1868), inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869, although his method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those followed in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve which remains unchanged under a rational transformation, so Clebsch proposed to classify the transcendent functions defined by the differential equations according to the invariant properties of the corresponding surfaces f = 0 under rational one-to-one transformations.
Lie's theory
From 1870 Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source; and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact.
Sturm-Liouville theory
Sturm-Liouville theory is a general method for resolution of second order linear equations with variable coefficients.
Solving ordinary differential equations
First-order ordinary differential equations
A first-order ordinary differential equation has the general form
{\displaystyle A{\frac {d}{dt}}f(t)+Bf(t)=0}
Rearranging the equation above gives
{\displaystyle {\frac {d}{dt}}f(t)=-sf(t)}
with
{\displaystyle s={\frac {B}{A}}}
The solution is
{\displaystyle f(t)=Ce^{-st}=Ce^{-{\frac {B}{A}}t}}
where the constant C = f(0) is fixed by the initial condition.
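To see this exponential decay emerge numerically, here is a forward-Euler sketch (Python; the coefficient values are arbitrary illustrations):

```python
import math

# Forward-Euler check that the decaying exponential solves
# A f'(t) + B f(t) = 0. Coefficients are arbitrary illustrative values.
A, B, f0 = 2.0, 6.0, 1.5
s = B / A                      # decay rate s = B/A

h, f = 1e-4, f0
for _ in range(10000):         # integrate from t = 0 to t = 1
    f += h * (-s * f)          # Euler step on f' = -(B/A) f
exact = f0 * math.exp(-s * 1.0)
print(abs(f - exact))          # small discretization error
```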
Second-order ordinary differential equations
{\displaystyle A{\frac {d^{2}}{dx^{2}}}f(x)+B{\frac {d}{dx}}f(x)+Cf(x)=0}
{\displaystyle {\frac {d^{2}}{dx^{2}}}f(x)+{\frac {B}{A}}{\frac {d}{dx}}f(x)+{\frac {C}{A}}f(x)=0}
{\displaystyle {\frac {d^{2}}{dx^{2}}}f(x)=-{\frac {B}{A}}{\frac {d}{dx}}f(x)-{\frac {C}{A}}f(x)}
{\displaystyle {\frac {d^{2}}{dx^{2}}}f(x)=-2\alpha {\frac {d}{dx}}f(x)-\beta f(x)}
{\displaystyle \beta ={\frac {C}{A}}}
{\displaystyle \alpha =\beta \gamma ={\frac {C}{A}}\gamma ={\frac {B}{2A}}}
{\displaystyle \gamma ={\frac {B}{2C}}}
Roots of the characteristic equation
{\displaystyle r^{2}+2\alpha r+\beta =0,\quad r=-\alpha \pm {\sqrt {\alpha ^{2}-\beta }}}
One repeated real root, when
{\displaystyle \alpha ^{2}=\beta }
{\displaystyle f(x)=(C_{1}+C_{2}x)e^{-\alpha x}}
Two real roots, when
{\displaystyle \alpha ^{2}>\beta }
{\displaystyle f(x)=C_{1}e^{(-\alpha +{\sqrt {\alpha ^{2}-\beta }})x}+C_{2}e^{(-\alpha -{\sqrt {\alpha ^{2}-\beta }})x}}
Two complex roots, when
{\displaystyle \alpha ^{2}<\beta }
{\displaystyle f(x)=e^{-\alpha x}(C_{1}\cos \omega x+C_{2}\sin \omega x)}
with
{\displaystyle \omega ={\sqrt {\beta -\alpha ^{2}}}}
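A tiny Python helper (illustrative, following the α, β notation above) classifies the roots:

```python
import cmath

def characteristic_roots(A, B, C):
    """Roots of A r^2 + B r + C = 0, written as r = -alpha +/- sqrt(alpha^2 - beta)
    with alpha = B/(2A) and beta = C/A, matching the notation above."""
    alpha, beta = B / (2 * A), C / A
    d = cmath.sqrt(alpha * alpha - beta)
    return -alpha + d, -alpha - d

# A = 1, B = 2, C = 5 gives alpha = 1, beta = 5: complex roots -1 +/- 2i,
# i.e. a damped oscillation with omega = sqrt(beta - alpha^2) = 2.
r1, r2 = characteristic_roots(1.0, 2.0, 5.0)
print(r1, r2)  # → (-1+2j) (-1-2j)
```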
|
(Optional) More on Unordered Sets & Maps · USACO Guide
HomeGold(Optional) More on Unordered Sets & Maps
HashingCustom HashingHackingAnother Hash Function
(Optional) More on Unordered Sets & Maps
Authors: Darren Yao, Benjamin Qi
Contributors: Neo Wang, Nathan Gong
Maintaining collections of distinct elements with hashing.
Bronze - Introduction to Sets & Maps
Gold - String Hashing
You can (almost always) use ordered sets and maps instead, but it's good to know that these exist.
IUSACO
4.4 - Sets & Maps
module is based off this
Hashing refers to assigning a code to every variable/object, which allows insertions, deletions, and searches in
\mathcal{O}(1)
expected time, albeit with a high constant factor, as hashing requires a large constant number of operations. However, as the name implies, elements are not ordered in any meaningful way, so traversals of an unordered set will return elements in some arbitrary order.
There is no built-in method for hashing pairs or vectors; for example, unordered_set<vector<int>> does not compile. In this case, we can use an ordered map (which supports all of the functions used in the code above) or declare our own hash function.
Hash Functions for C++ Unordered Containers
How to create user-defined hash function for unordered_map.
The link provides an example of hashing pairs of strings, as well as other data structures.
struct hashPi {
	size_t operator()(const pi& p) const { return p.f ^ p.s; }
};
unordered_map<pi, int, hashPi> um;
// or specialize std::hash directly, so no extra template argument is needed:
template <> struct hash<pi> { /* same operator() as above */ };
Java has its own hash functions for pre-defined objects like ArrayLists. However, a custom hash function is still needed for user-defined objects. In order to create one, we can implement the hashCode method.
Additionally, in order for HashSets and HashMaps to work with a custom class, we must also implement the equals method.
// uses custom hash function in class Pair
Set<Pair> set = new HashSet<>();
However, this hash function is quite bad; if we insert
(0,0), (1,1), (2,2) \ldots
then they will all be mapped to the same bucket (so it would easily be hacked).
A better (and easier) way to hash custom objects is to use Java's built in Objects.hash() method. This method takes in multiple objects and uses them to create a hash code.
You don't need to know this for USACO, but you will need this to pass some of the problems in this module.
In USACO contests, unordered sets and maps are generally fine, but the built-in hashing algorithm for C++ is vulnerable to pathological data sets causing abnormally slow runtimes. Apparently Java is not vulnerable to this, however.
Blowing up Unordered Map
Explanation of this problem and how to fix it.
Essentially, use unordered_map<int, int, custom_hash> defined in the blog above in place of unordered_map<int, int>.
Benq (from KACTL)
struct chash { /// use most bits rather than just the lowest ones
	const uint64_t C = ll(2e18 * PI) + 71;  // large odd number
	const int RANDOM = rng();               // random 32-bit number
	ll operator()(ll x) const { return __builtin_bswap64((x ^ RANDOM) * C); }
};
template <class K, class V> using um = unordered_map<K, V, chash>;
(explain assumptions that are required for this to work)
|
String Hashing · USACO Guide
HomeGoldString Hashing
TutorialImplementation - Single BaseImplementation - Multiple BasesSolution - Searching For StringsOne HashTwo HashesProblems
Authors: Benjamin Qi, Andi Qu
Contributors: Andrew Wang, Kevin Sheng
Quickly test equality of substrings with a small probability of failure.
26.3 - String Hashing
14.3 - Hashing
If "small" isn't a satisfying-enough answer for "what's the probability of collision?", then you should check out rng-58's blog post talking about hashing. This blog post talks about the Schwartz–Zippel lemma and how it can be used to calculate the probability of a collision.
It also explains how to hash rooted trees - an uncommon technique, but still useful to know!
Implementation - Single Base
As mentioned in the articles above, there is no need to calculate modular inverses.
class HashedString {
// change M and P if you want
static const long long M = 1e9 + 9;
static const long long P = 9973;
public class HashedString {
public static final long M = (long) 1e9 + 9;
public static final long P = 9973;
// pow[i] contains P^i % M
private static ArrayList<Long> pow = new ArrayList<>();
class HashedString:
# Change M and P if you want
M = int(1e9) + 9
# pow[i] contains P^i % M
_pow = [1]
while len(self._pow) < len(s):
This implementation calculates
\texttt{hsh}[i + 1] = \left(\sum_{x = 0}^i P^{i - x} \cdot S[x]\right) \bmod M
The hash of any particular substring
S[a : b]
is then calculated as
\left(\sum_{x = a}^b P^{b - x} \cdot S[x] \right) \bmod M = (\texttt{hsh}[b + 1] - \texttt{hsh}[a] \cdot P^{b - a + 1}) \bmod M
using prefix sums. This is nice because the highest power of
P
in that polynomial will always be
P^{b - a}
Since
10^9 + 9
is prime, the probability of collision when using this hash is at most
\frac{N}{10^9 + 9} < 10^{-4}
by the Schwartz–Zippel lemma. This means that if you select any two different strings of length at most
N
and a random base modulo
10^9 + 9
(
9973
in the code), the probability that they hash to the same value is at most
10^{-4}
.
Implementation - Multiple Bases
dacin21 - Anti-Hash Tests
regarding CF educational rounds in particular
HashRange
It's generally a good idea to use two randomized bases rather than just one to decrease the probability that two random strings hash to the same value.
CCC - Easy
Solution - Searching For Strings
Time Complexity:
\mathcal O((|N| + |H|) \cdot \Sigma)
, where
\Sigma
is the size of the alphabet.
We'll use a sliding window over
H
to find the "matches" with
N
Since we don't care about relative order when comparing two substrings, we can store frequency tables of the characters in the current window and in
N
. When we slide the window, at most two values in that table change. To compare two substrings, we simply compare the 26 values in each table.
If we only needed to count the number of matches, then the above alone would suffice (in fact, IOI 2006 Writing is just that). However, we need to count the distinct permutations of
N
in
H
, so we need to be a bit more clever.
One way to solve this is by storing the polynomial hashes of each match in a hashset, since we expect different permutations to have different polynomial hashes. The answer would simply be the size of that hashset at the end.
Since the test data for this particular problem is very strong, we will probably get hash collisions with only one hash. To remedy this, we use two hashes for each match - this significantly decreases the probability of collisions.
Using the base
9973
with the two modulos
10^9 + 9
and
10^9 + 7
works for this problem. (Note that using two bases with the same modulo works too.)
const ll P = 9973, M1 = 1e9 + 9, M2 = 1e9 + 7;
int freq_target[26], freq_curr[26];
string n, h;
Two Hashes
Time Complexity:
\mathcal O((|N| + |H|) \log M)
An alternative solution without frequency tables would be to hash the substrings that we're trying to match. Since order doesn't matter, we need to modify our hash function slightly.
In particular, instead of computing the polynomial hash of the substrings, compute the product
(P + s_1)(P + s_2) \cdots (P + s_k) \bmod M
as the hash (again, using two modulos). This hash is nice because the relative order of the letters doesn't matter, as multiplication is commutative.
Since this hash requires the modular inverse, there's an extra
\log M
factor in the time complexity.
Alternative hashes (e.g. computing the sum
(P + s_1)^2 + (P + s_2)^2 + \dots + (P + s_k)^2 \bmod M
) also work for other hashing problems, but the test cases are too strong for that to pass here.
ll inv(ll base, ll MOD) {
	ll ans = 1, expo = MOD - 2;
	while (expo) {
		if (expo & 1) ans = ans * base % MOD;
		base = base * base % MOD, expo >>= 1;
	}
	return ans;
}
Easy Show Tags Hashing
2017 - Palindromic Partitions
Easy Show Tags Greedy, Hashing
Easy Show Tags DP, Hashing
Normal Show Tags Binary Search, Hashing
Normal Show Tags Hashing, Simulation
2017 - Hangman 2
Normal Show Tags Hashing
2017 - Osmosmjerka
Normal Show Tags Hashing, Probability
2021 - Sateliti
Hard Show Tags Binary Search, Hashing
Hard Show Tags DP, Hashing
Hard Show Tags Hashing
2016 - Zamjene
Very Hard Show Tags DSU, Hashing
2016 - Palinilap
Very Hard Show Tags Binary Search, Hashing
|
Ordered pair/Related Articles - Citizendium
Ordered pair/Related Articles
< Ordered pair
A list of Citizendium articles, and planned articles, about Ordered pair.
See also changes related to Ordered pair, or pages that link to Ordered pair or to this page or whose text contains "Ordered pair".
Auto-populated based on Special:WhatLinksHere/Ordered pair. Needs checking by a human.
Affine space [r]: Collection of points, none of which is special; an n-dimensional vector belongs to any pair of points. [e]
Cartesian product [r]: The set of ordered pairs whose elements come from two given sets. [e]
Graph (disambiguation) [r]: Add brief definition or description
Retrieved from "https://citizendium.org/wiki/index.php?title=Ordered_pair/Related_Articles&oldid=651889"
|
List of Torpedoes - Azur Lane Wiki
For Submarine torpedoes, see List of Submarine Torpedoes.
The formula used to compute the DPS values of the torpedoes below (against Light, Medium and Heavy armour) is as follows:
{\displaystyle {\begin{aligned}\mathrm {Damage\ per\ Second} &={\dfrac {\mathrm {Damage\ per\ Torpedo(Dmg)} \times \mathrm {Number\ of\ Torpedoes(Rnd)} \times \mathrm {Armor\ Modifier} }{\mathrm {Reload} }}\end{aligned}}}
1. The armour modifiers used for the torpedoes below are as follows.
Normal 80% 100% 130% 30
Normal (Royal Navy) 35
Normal (Sakura Empire) 40
Acoustic Homing 20
Guided Missile 130% 110% 80% 80 (Vanguard)
140 (Main Fleet)
Speeds are given in knots, the same units that ship and aircraft speeds are given in. Multiply these values by 0.6 to get the speed in game units per second.
2. All torpedoes have a Coefficient of 100% at Level 0. The constants for Damage Modifiers are as follows:
Damage Modifier / Coefficient
3. All the values used do not factor in the Reload or Torpedo stats.
On ships with lower base Torpedo, torpedoes with higher Torpedo stat but lower DPS may perform better than those with higher DPS but lower Torpedo stat.
4. Torpedoes have a small splash range (3 units), but these values in the table assume only one target is hit - therefore, they may perform better in practice than on paper.
5. The "Pre-Loaded Torpedo +1" trait grants ships the ability to launch the first set of torpedoes immediately regardless of torpedo reload time, which may give any ships possessing this ability a greater incentive to use torpedoes with greater Damage but slower Reload values. The resultant average DPS by the 60 second mark (half the required time to the S-rank threshold) is given as Preload DPS for each armor class as well.
6. Trajectories:
Acoustic homing torpedoes can home in on targets within 14 units. However, they have 33% less speed than standard ones.
Royal Navy torpedoes have roughly 17% more speed than standard ones, while Sakura Empire torpedoes have 33% more.
7. DDs with reduced torpedo spread at max Limit Break reduce spreads as follows:
Twin 30° 16°
Triple 40° 32°
Quadruple 48° 36°
Quintuple 60° 40°
Sextuple 60°
Triple (Royal Navy) 6° random 5° random
Quadruple (Royal Navy) 8° random 6° random
Quintuple (Royal Navy) 10° random 8° random
Preload DPS
12 4 63 35.51 5.68 7.10 9.23 9.04 11.30 14.69 52 48° 60° Acoustic Homing
25 4 66 33.79 6.25 7.81 10.16 9.77 12.21 15.88 52 48° 60° Acoustic Homing
45 4 70 32.12 6.97 8.72 11.33 10.71 13.38 17.40 52 48° 60° Acoustic Homing
5 4 46 31.96 4.61 5.76 7.48 7.06 8.82 11.47 50 48° 60° Normal
12 4 50 30.44 5.26 6.57 8.54 7.92 9.90 12.87 50 48° 60° Normal
25 4 56 28.92 6.20 7.75 10.07 9.18 11.48 14.92 50 48° 60° Normal
533mm Quadruple Torpedo Mount Mk 17
533mm Quadruple Torpedo Mount Mk IX
25 4 62 30.48 6.51 8.14 10.58 9.82 12.27 15.95 50 8° 10° Normal
12 5 46 36.53 5.04 6.30 8.19 8.10 10.13 13.17 50 60° 60° Normal
45 5 56 33.05 6.78 8.47 11.01 10.51 13.14 17.08 50 60° 60° Normal
533mm Quintuple Torpedo Mount Mk 17
533mm Quintuple Torpedo Mount Mk IX
5 3 63 30.46 4.96 6.20 8.07 7.48 9.35 12.16 52 40° 60° Acoustic Homing
0 3 46 27.40 4.03 5.04 6.55 5.87 7.34 9.54 50 40° 60° Normal
533mm Triple Torpedo Mount Mk 17
533mm Triple Torpedo Mount Mk IX
12 3 62 26.36 5.64 7.06 9.17 8.12 10.16 13.20 50 6° 60° Normal
12 2 52 22.61 3.68 4.60 5.98 5.07 6.33 8.23 50 30° 60° Normal
610mm Quadruple Torpedo Mount Kai
70 5 70 35.73 7.84 9.80 12.73 12.50 15.63 20.32 60° 72° Normal
610mm Triple Torpedo Mount Kai
SY-1 Missile
45 4 100 32.26 16.12 13.64 9.92 24.79 20.97 15.25 200 0° 100° Guided
12 4 99 31.13 10.18 12.72 16.54 15.46 19.32 25.12 52 48° 60° Acoustic Homing
25 4 133 26.80 15.88 19.85 25.81 22.97 28.72 37.33 52 48° 60° Acoustic Homing
5 4 70 30.05 7.45 9.32 12.11 11.19 13.98 18.18 50 48° 60° Normal
12 4 104 26.79 12.42 15.53 20.19 17.97 22.46 29.20 50 48° 60° Normal
25 4 172 24.70 22.28 27.85 36.21 31.46 39.32 51.12 50 8° 10° Normal
12 5 94 32.15 11.70 14.62 19.00 17.96 22.45 29.19 50 60° 60° Normal
5 3 81 28.86 6.74 8.42 10.95 9.98 12.47 16.21 52 40° 60° Acoustic Homing
12 3 106 25.61 9.93 12.42 16.14 14.17 17.72 23.03 52 40° 60° Acoustic Homing
0 3 70 25.75 6.52 8.16 10.60 9.32 11.66 15.15 50 40° 60° Normal
12 2 108 19.42 8.90 11.12 14.46 11.78 14.72 19.14 50 30° 60° Normal
70 5 190 28.6 26.57 33.22 43.18 39.24 49.05 63.77 60° 72° Normal
45 4 264 26.27 52.26 44.22 32.16 75.14 63.58 46.24 200 0° 100° Guided
Retrieved from ‘https://azurlane.koumakan.jp/w/index.php?title=List_of_Torpedoes&oldid=218831’
|
GreedyColor - Maple Help
Home : Support : Online Help : Mathematics : Discrete Mathematics : Graph Theory : GraphTheory Package : GreedyColor
GreedyColor(G, perm)
The GreedyColor command colors the vertices of the graph in the order given by perm, one at a time, assigning to each vertex the smallest available color. If the permutation perm is not specified, a heuristic is used, the permutation being given by the following rule: a vertex with lowest degree is removed from the graph, the heuristic permutation for the remaining graph is determined recursively, and the removed vertex is put at the end of this permutation. This yields a coloring with at most
d+1
colors, where
d
is such that every subgraph of G contains a vertex of degree at most
d
. The smallest such
d
is also known as the degeneracy of G.
The output consists of the number of colors used, followed by a list that specifies the coloring of the vertices.
\mathrm{with}\left(\mathrm{GraphTheory}\right):
\mathrm{C6}≔\mathrm{CycleGraph}\left(6\right)
\textcolor[rgb]{0,0,1}{\mathrm{C6}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{Graph 1: an undirected unweighted graph with 6 vertices and 6 edge\left(s\right)}}
\mathrm{GreedyColor}\left(\mathrm{C6}\right)
\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{0}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{0}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{0}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}]
\mathrm{GreedyColor}\left(\mathrm{C6},[1,4,2,5,3,6]\right)
\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{0}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{0}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}]
|
Families of strong KT structures in six dimensions | EMS Press
Families of strong KT structures in six dimensions
Università di Chieti-Pescara, Italy
This paper classifies Hermitian structures on 6-dimensional nilmanifolds
M=\Gamma \backslash G
for which the fundamental 2-form is
\partial {\bar {\partial }}
-closed, a condition that is shown to depend only on the underlying complex structure J of M. The space of such J is described when G is the complex Heisenberg group, and explicit solutions are obtained from a limaçon-shaped curve in the complex plane. Related theory is used to provide examples of various types of Ricci-flat structures.
Anna Fino, Maurizio Parton, Simon Salamon, Families of strong KT structures in six dimensions. Comment. Math. Helv. 79 (2004), no. 2, pp. 317–340
|
Difference between revisions of "2013:Audio Chord Estimation" - MIREX Wiki
Difference between revisions of "2013:Audio Chord Estimation"
(Created page with "The Utrecht Agreement on Chord Evaluation ===Evaluation of Chord Transcriptions=== Before the final description of the chord evaluation goes live here, please see the discu...")
[[The Utrecht Agreement on Chord Evaluation]]
===Evaluation of Chord Transcriptions===
This task requires participants to extract or transcribe a sequence of chords from an audio music recording. For many applications in music information retrieval, extracting the harmonic structure of an audio track is very desirable, for example for segmenting pieces into characteristic segments, for finding similar pieces, or for semantic analysis of music. The extraction of the harmonic structure requires the estimation of a sequence of chords that is as precise as possible. This includes the full characterisation of chords – root, quality, and bass note – as well as their chronological order, including specific onset times and durations. Audio chord estimation has a long history in MIREX, and readers interested in this history, especially with respect to evaluation methodology, should review the work of Christopher Harte (2010), Pauwels and Peeters (2013), and the [https://www.music-ir.org/mirex/wiki/The_Utrecht_Agreement_on_Chord_Evaluation “Utrecht Agreement”] on evaluation metrics.
Before the final description of the chord evaluation goes live here, please see the discussion based on the [[The Utrecht Agreement on Chord Evaluation]].
; Isophonics
: The collected Beatles, Queen, and Zweieck datasets from the Centre for Digital Music at Queen Mary, University of London (http://www.isophonics.net/), as used for Audio Chord Estimation in MIREX for many years. Available from http://www.isophonics.net/. See also Matthias Mauch’s dissertation (2010) and Harte et al.’s introductory paper (2005).
; Billboard
: An abridged version of the ''Billboard'' dataset from McGill University, including a representative sample of American popular music from the 1950s through the 1990s. Available from http://billboard.music.mcgill.ca. See also Ashley Burgoyne’s dissertation (2012) and Burgoyne et al.’s introductory paper (2011). Parsing tools for the data are available from http://hackage.haskell.org/package/billboard-parser/ and documented by De Haas and Burgoyne (2012).
== Training and Testing ==
The training and testing divisions differ for the two data sets. The Isophonics has been available publicly for so long that it no longer makes sense to offer a separate training phase; as such, the entire data set will be used for testing, as in previous years. In contrast, in order to support MIREX, a portion of the ''Billboard'' ground truth has been withheld from the public. Submissions may train on all of the songs that have been publicly released so far: the MIREX servers have access to the ground-truth annotations and the original audio. Whether trained or not, all submissions will be tested against a fresh set of 200 songs that have never been released publicly.
The ground-truth files contain one line per unique chord, in the form <code>{start_time end_time chord}</code>, e.g.,
=== Beatles dataset ===
To evaluate the quality of an automatic transcription, a transcription is compared to ground truth created by one or more human annotators. MIREX typically uses ''chord symbol recall'' (CSR) to estimate how well the predicted chords match the ground truth:
=== Queen and Zweieck dataset ===
<math>\textrm{CSR} = \frac{\textrm{total duration of segments where annotation equals estimation}} {\textrm{total duration of annotated segments}}</math>
=== Billboard dataset (abridged) ===
In previous years, MIREX has used an approximate CSR calculated by sampling both the ground-truth and the automatic annotations every 10 ms and dividing the number of correctly annotated samples by the total number of samples. Following Christopher Harte (2010, §8.1.2), however, we can view the ground-truth and estimated annotations as continuous segmentations of the audio and calculate the CSR by considering the cumulative length of the correctly overlapping segments. This way of calculating the CSR is more precise, as the precision of the frame-based method is limited by the frame length, and computationally more efficient, as it reduces the number of segment comparisons. Because pieces of music come in a wide variety of lengths, we will weight the CSR by the length of the song when computing an average for a given corpus. This final number is referred to as the ''weighted chord symbol recall'' (WCSR).
===Example ground-truth file ===
== Chord Vocabularies ==
# Chord root note only;
# Major and minor: {<code>N, maj, min</code>};
# Seventh chords: {<code>N, maj, min, maj7, min7, 7</code>};
# Major and minor with inversions: {<code>N, maj, min, maj/3, min/b3, maj/5, min/5</code>}; or
# Seventh chords with inversions: {<code>N, maj, min, maj7, min7, 7, maj/3, min/b3, maj7/3, min7/b3, 7/3, maj/5, min/5, maj7/5, min7/5, 7/5, maj7/7, min7/b7, 7/b7</code>}.
With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. For instance, in the major and minor case, <code>G:7(#9)</code> is mapped to <code>G:maj</code> because the interval set of <code>G:maj</code>, {<code>1,3,5</code>}, is a subset of the interval set of the <code>G:7(#9)</code>, {<code>1,3,5,b7,#9</code>}. In the seventh-chord case, <code>G:7(#9)</code> is mapped to <code>G:7</code> instead because the interval set of <code>G:7</code> {<code>1, 3, 5, b7</code>} is also a subset of <code>G:7(#9)</code> but is larger than <code>G:maj</code>. If a chord cannot be represented by a certain class, e.g., mapping a <code>D:aug</code> or <code>F:sus4(9)</code> to {<code>maj, min</code>}, the chord is excluded from the evaluation if it occurs in the ground-truth, and it is considered a mismatch if it occurs in an estimated annotation.
=== Segmentation Score ===
|+ Most frequent chord qualities in the McGill ''Billboard'' corpus.
! Freq. (%)
! Cum. Freq (%)
=== Frame-based recall ===
X:maj<br>
X:min<br>
X:aug<br>
X:dim<br>
X:sus2<br>
N <br>
X:maj <br>
X:maj7<br>
X:7<br>
X:maj(9)<br>
X:aug(7) <br>
X:min(7)<br>
X:min7<br>
X:dim(7)<br>
X:hdim7 <br>
X:sus4(7)<br>
X:sus4(b7)<br>
X:dim7<br>
5) check annotation label:<br>
IF symbol is 'X' (i.e. non-dictionary) <br>
THEN ignore frame (record number of ignored frames)<br>
ELSE compare annotated/estimated chords for the predefined number of intervals <br>
increment hit count if chords match<br>
7) go back to 4 until final chord frame
--[[User:Chrish|Chrish]] 17:05, 9 September 2009 (UTC)
=== Audio Format ===
=== I/O Format ===
!INTERVALS
!SHORTHAND
|-*Triads:
|X:(1,3,5)
|X or X:maj
|X:(1,b3,5)
|X:min
|min7
|diminished
|maj7
|X:(1,b3,b5)
|X:dim
|augmented
|X:(1,3,#5)
|X:aug
|maj(9)
|suspended4
|X:sus4
|sus4
|possible 6th triad:
|7(#9)
|*Quads:
|major-major7
|X:(1,3,5,7)
|X:maj7
|major-minor7
|X:(1,3,5,b7)
|X:7
|major-add9
|X:maj(9)
|major-major7-#5
|X:(1,3,#5,7)
|X:aug(7)
|minor-major7
|X:(1,b3,5,7)
|X:min(7)
|minor-minor7
|X:(1,b3,5,b7)
|X:min7
|minor-add9
|minor 7/b5 (ambiguous - could be either of the following)
|minor-major7-b5
|X:(1,b3,b5,7)
|X:dim(7)
|minor-minor7-b5 (a half diminished-7th)
|X:(1,b3,b5,b7)
|X:hdim7
|sus4-major7
|X:sus4(7)
|sus4-minor7
|X:sus4(b7)
|omitted from list on wiki:
|diminished7
|X:(1,b3,b5,bb7)
|X:dim7
|No Chord
Our recommendations are motivated by the frequencies of chord qualities in the ''Billboard'' corpus (see table above), which is a balanced sample of American popular music from the 1950s through the 1990s (J.A. Burgoyne, Wild, and Fujinaga 2011). Pure major and minor chords alone account for 65 percent of all chords encountered, whereas augmented and diminished triads account for 0.2 percent or less of the corpus each. Our arguments for our particular seventh-chord vocabulary as opposed to the set of all tetrads follows similar reasoning; our proposed vocabulary accounts for 86 percent of all chords, whereas no other standard type of seventh chord accounts for more than 0.2 percent of the corpus. In future years, the table suggests that we might consider introducing vocabularies including power chords, and possibly suspended chords or added sixths and ninths as well.
Please note that two things have changed in the syntax since it was originally described in [6]. The first change is that the root is no longer implied as a voiced element of a chord so a C major chord (notes C, E and G) should be written C:(1,3,5) instead of just C:(3,5) if using the interval list representation. As before, the labels C and C:maj are equivalent to C:(1,3,5). The second change is that the shorthand label "sus2" (intervals 1,2,5) has been added to the available shorthand list.--[[User:Chrish|Chrish]] 17:05, 9 September 2009 (UTC)
== Chord Segmentation ==
We still accept participants who would only like to be evaluated on major/minor chords and want to use the number format, which is an integer chord id in the range 0-24: values 0-11 denote C major, C# major, ..., B major; 12-23 denote C minor, C# minor, ..., B minor; and 24 denotes silence or no-chord segments. '''Please note that the format is still the same'''
<math>Q = 1 - \frac{\textrm{maximum of directional Hamming distances in either direction}} {\textrm{total duration of song}}</math>
= Submission Format =
=== Command line calling format ===
''extractFeaturesAndTrain "/path/to/trainFileList.txt" "/path/to/scratch/dir" ''
== I/O Format ==
Where fileList.txt has the paths to each wav file. The features extracted on this stage can be stored under "/path/to/scratch/dir"
The ground truth files for the supervised learning will be in the same path with a ".txt" extension at the end. For example for "/path/to/trainFile1.wav", there will be a corresponding ground truth file called "/path/to/trainFile1.wav.txt" .
<pre>start_time end_time chord_label</pre>
''doChordID.sh "/path/to/testFileList.txt" "/path/to/scratch/dir" "/path/to/results/dir" ''
== Command line calling format ==
<pre>extractFeaturesAndTrain "/path/to/trainFileList.txt" "/path/to/scratch/dir" </pre>
where <code>fileList.txt</code> has the paths to each WAV file. The features extracted on this stage can be stored under <code>/path/to/scratch/dir</code>. The ground truth files for the supervised learning will be in the same path with a <code>.txt</code> extension at the end. For example for <code>/path/to/trainFile1.wav</code>, there will be a corresponding ground truth file called <code>/path/to/trainFile1.wav.txt</code>. For testing:
<pre>doChordID.sh "/path/to/testFileList.txt" "/path/to/scratch/dir" "/path/to/results/dir" </pre>
If there is no training, you can ignore the second argument here. In the results directory, there should be one file for each testfile with same name as the test file + <code>.txt</code>. Programs can use their working directory if they need to keep temporary cache files or internal debugging info. Standard output and standard error will be logged.
=== Packaging submissions ===
== Packaging submissions ==
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed). All submissions should include a <code>README</code> file including the following information:
* Command line calling format for all executables and an example formatted set of commands
* Expected memory footprint
* Expected runtime
* Any required environments (and versions), e.g. python, java, bash, matlab.
= Time and Hardware limits =
== Time and hardware limits ==
Somewhere in the email discussion on the MIREX list, there was a mention that the recent systems run on the Beatles/Queen/Zweieck dataset might have over-learnt the properties of this dataset. I just wondered whether, during or post-MIREX, there was any way to formally/experimentally demonstrate this? I mean, beyond making the observation that there is a "drop" in performance from an open dataset to a closed one. The issue would seem particularly pertinent with regard to this dataset since it's been public for sometime.
(Matthew Davies, 9th August)
== Potential Participants ==
1. Harte, C.A. and Sandler, M.B. (2005). '''Automatic chord identification using a quantised chromagram.''' Proceedings of 118th Audio Engineering Society's Convention.
2. Sailer, C. and Rosenbauer K. (2006). '''A bottom-up approach to chord detection.''' Proceedings of International Computer Music Conference 2006.
Abdallah, Samer A., Katy Noland, Mark B. Sandler, Michael Casey, and Christophe Rhodes. 2005. “Theory and Evaluation of a Bayesian Music Structure Extractor.” In ''Proceedings of the International Society for Music Information Retrieval Conference'', 420–425.
3. Shenoy, A. and Wang, Y. (2005). '''Key, chord, and rhythm tracking of popular music recordings.''' Computer Music Journal 29(3), 75-86.
Burgoyne, J. A., J. Wild, and I. Fujinaga. 2011. “An expert ground truth set for audio chord recognition and music analysis.” In ''Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR)'', 633–638.
4. Sheh, A. and Ellis, D.P.W. (2003). '''Chord segmentation and recognition using em-trained hidden markov models.''' Proceedings of 4th International Conference on Music Information Retrieval.
5. Yoshioka, T. et al. (2004). '''Automatic Chord Transcription with concurrent recognition of chord symbols and boundaries.''' Proceedings of 5th International Conference on Music Information Retrieval.
Haas, W. B. de, and John~Ashley Burgoyne. 2012. ''Parsing the Billboard Chord Transcriptions''. Technical report UU-CS- 2012-018, Department of Information and Computing Sciences, Utrecht University.
6. Harte, C. et al. (2005). '''Symbolic representation of musical chords: a proposed syntax for text annotations.''' Proceedings of 6th International Conference on Music Information Retrieval.
Harte, C., M. Sandler, S. Abdallah, and E. Gómez. 2005. “Symbolic representation of musical chords: A proposed syntax for text annotations.” In ''Proceedings of the 6th International Society for Music Information Retrieval Conference (ISMIR)'', 66–71.
7. Papadopoulos, H. and Peeters, G. (2007). '''Large-scale study of chord estimation algorithms based on chroma representation and HMM.''' Proceedings of 5th International Conference on Content-Based Multimedia Indexing.
8. Abdallah, S. et al. (2005). '''Theory and Evaluation of a Bayesian Music Structure Extractor''' (pp. 420-425) Proc. 6th International Conference on Music Information Retrieval, ISMIR 2005.
9. John Ashley Burgoyne et al. (2011). '''An expert ground-truth set for audio chord recognition and music analysis''' (pp. 633–638) Proc. 12th International Society for Music Information Retrieval Conference, ISMIR 2011. [http://ismir2011.ismir.net/papers/OS8-1.pdf (PDF)]
Pauwels, Johan, and Geoffroy Peeters. 2013. “Evaluating automatically estimated chord sequences.” In ''Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)''. Vancouver, British Columbia, Canada.
|
Freda Function has another quadratic function for you to investigate! Graph the equation
y=x^2+3
and then answer the questions from problem 1-23.
Look at the equation. Which way do you think the parabola will open?
Below are the questions from problem 1-23 for you to answer.
Look carefully at your parabola when you are answering them.
How would you describe the shape of your parabola? For example, would you describe your parabola as opening up or down? Do the sides of the parabola ever go straight up or down (vertically)? Why or why not? Is there anything else special about its shape?
Does your parabola have any lines of symmetry? That is, can you fold the graph of your parabola so that each side of the fold exactly matches the other? If so, where would the fold be? Do you think this works for all parabolas? Why or why not? For more information on lines of symmetry, see the Math Notes box at the end of this lesson.
Are there any special points on your parabola? Which points do you think are important to know?
Are there x- or y-intercepts? What are they? Are there any intercepts that you expected but do not exist for your parabola?
Is there a highest (maximum) or lowest (minimum) point on the graph of your parabola? If so, where is it? This point is called a vertex.
Use the eTool below to help you graph the equation.
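As a quick way to check your answers afterwards (this is not part of the original problem), the key features of y = x² + 3 can be computed directly; a minimal Python sketch:

```python
# Key features of y = x^2 + 3, computed directly (an illustrative check,
# not a substitute for graphing the parabola yourself).
def f(x):
    return x**2 + 3

# y-intercept: the value of the function at x = 0.
y_intercept = f(0)          # the point (0, 3)

# Vertex: for y = x^2 + c the minimum occurs at x = 0, so the vertex is (0, c).
vertex = (0, f(0))

# x-intercepts would require x^2 + 3 = 0, i.e. x^2 = -3, which has no real
# solution; a scan over sample x-values confirms the graph never reaches y = 0.
has_x_intercepts = any(f(x) == 0 for x in range(-10, 11))

print(y_intercept, vertex, has_x_intercepts)
```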
|
List of ASW Equipment - Azur Lane Wiki
List of ASW Equipment
The formula used to compute the DPS values of the Depth Charges below is as follows:
{\displaystyle {\begin{aligned}{\text{Damage per Second}}&={\frac {{\text{Damage per shot (Dmg)}}\times {\text{Number of Shots (Rnd)}}}{\text{Reload (Rld)}}}\end{aligned}}}
All of the equipment below are equipped in the Auxiliary slot.
None of the values shown factor in the ship's Reload or ASW stats.
Each ship may only equip up to one Sonar and one Depth Charge. Only ASW-capable escort ships (Light Cruisers and Destroyers) may equip Sonars and Depth Charges.
ASW-capable escort ships launch a 28 base damage Depth Charge if not equipped with any actual Depth Charge equipment.
When multiple Sonars are equipped in the fleet, each Sonar will increase the Scan Range.
ASW aircraft are equipped in the Auxiliary slot of Light Carriers and Warspite Retrofit, but are upgraded using Aircraft plates.
The Consolidated PBY-5A Catalina is equipped in the Auxiliary slot of Destroyers, Light Cruisers, Heavy Cruisers, and Large Cruisers and does not use Aircraft plates. Keep in mind that Heavy and Large Cruisers will not get the ASW benefits of this item.
ASW helicopters are equipped in the Auxiliary slot of Köln Retrofit.
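The DPS values in the table below can be reproduced from the formula above. A minimal Python sketch; note that the per-salvo shot counts used here (3 for standard depth charges, 1 for the Hedgehog, consistent with its note about a single real explosion) are inferred from the listed numbers rather than stated explicitly on this page:

```python
def dps(dmg, rnd, rld):
    """Damage per second = damage per shot x number of shots / reload time."""
    return dmg * rnd / rld

# Basic Depth Charge (single star): 16 damage, assumed 3 shots, 4.52 s reload.
print(round(dps(16, 3, 4.52), 2))   # matches the listed 10.62

# Hedgehog: only one explosion deals actual damage; the rest are cosmetic.
print(round(dps(128, 1, 4.32), 2))  # matches the listed 29.63
```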
Rld(s)
Basic Depth Charge ★ 16 4.52 10.62 25 5 Depth Charge
Basic Depth Charge ★★ 5 19 4.39 12.98 25 5 Depth Charge
Basic Depth Charge ★★★ 12 22 4.26 15.49 25 5 Depth Charge
Basic Sonar ★★ 1 1 5 45 Sonar Spots submerged enemies within specified range. DDs / CLs will launch a depth charge when in range, launching a stock charge if one is not equipped.
Basic Sonar ★★★ 3 2 4.8 45 Sonar Spots submerged enemies within specified range. DDs / CLs will launch a depth charge when in range, launching a stock charge if one is not equipped.
Basic Sonar ★★★★ 5 4 4.5 45 Sonar Spots submerged enemies within specified range. DDs / CLs will launch a depth charge when in range, launching a stock charge if one is not equipped. Decreases the Accuracy of detected enemy submarines by 3.0%. Effect does not stack.
Consolidated PBY-5A Catalina ★★★★ 10 14 Auxiliary Provides aerial reconnaissance, decreasing Ambush chance by 10.0% and increasing Ambush evasion chance by 10.0%. Effect does not stack.
Corn Lantern ★★★★★ 9 Auxiliary When equipped by a Hololive ship: decreases their damage taken by 4.0%
Fairey Swordfish Mk II-ASV (ASW) ★★ 5 37 4.52 8.19 5 ASW Bomber
Fairey Swordfish Mk II-ASV (ASW) ★★★ 12 45 4.32 10.42 5 ASW Bomber
Fairey Swordfish Mk II-ASV (ASW) ★★★★ 25 56 4.12 13.59 5 ASW Bomber
Flettner Fl 282 Kolibri ★★★★ 9 ASW Helicopter
General Motors TBM-3 Avenger (ASW) ★★ 5 33 4.12 8.01 6 ASW Bomber
General Motors TBM-3 Avenger (ASW) ★★★ 12 40 3.99 10.03 6 ASW Bomber
General Motors TBM-3 Avenger (ASW) ★★★★ 25 48 3.86 12.44 6 ASW Bomber
Hedgehog ★★★★★ 45 128 4.32 29.63 50 12 Depth Charge The average damage is shown; actual damage ranges from 2/3 to 4/3 this value.
Only creates one explosion that does actual damage. The individual projectiles are merely cosmetic.
Improved Depth Charge ★★ 5 20 4.32 13.89 30 6 Depth Charge
Improved Depth Charge ★★★ 12 24 4.16 17.31 30 6 Depth Charge
Improved Depth Charge ★★★★ 25 27 3.99 20.3 30 6 Depth Charge
Improved Sonar ★★★ 4 3 5 Sonar Increases Scan Range by 5.
Improved Sonar ★★★★ 9 5 5 Sonar Increases Scan Range by 5.
Improved Sonar ★★★★★ 14 7 8 Sonar Increases Scan Range by 8. Decreases the TRP of detected enemy submarines by 5.0%. Effect does not stack.
Basic Sonar ★★★★ 18 9 4.5 45 Sonar Spots submerged enemies within specified range. DDs / CLs will launch a depth charge when in range, launching a stock charge if one is not equipped. Decreases the Accuracy of detected enemy submarines by 3.0%. Effect does not stack.
Corn Lantern ★★★★★ 24 Auxiliary When equipped by a Hololive ship: decreases their damage taken by 4.0%
Fairey Swordfish Mk II-ASV (ASW) ★★ 5 47 4.28 10.98 5 ASW Bomber
Fairey Swordfish Mk II-ASV (ASW) ★★★★ 25 112 3.06 36.6 5 ASW Bomber
Flettner Fl 282 Kolibri ★★★★ 24 ASW Helicopter
General Motors TBM-3 Avenger (ASW) ★★ 5 41 3.92 10.46 6 ASW Bomber
Improved Sonar ★★★ 10 6 5 Sonar Increases Scan Range by 5.
Improved Sonar ★★★★ 24 10 5 Sonar Increases Scan Range by 5.
Improved Sonar ★★★★★ 35 12 8 Sonar Increases Scan Range by 8. Decreases the TRP of detected enemy submarines by 5.0%. Effect does not stack.
Basic Sonar ★★★★ 20 10 4.5 45 Sonar Spots submerged enemies within specified range. DDs / CLs will launch a depth charge when in range, launching a stock charge if one is not equipped. Decreases the Accuracy of detected enemy submarines by 3.0%. Effect does not stack.
Retrieved from ‘https://azurlane.koumakan.jp/w/index.php?title=List_of_ASW_Equipment&oldid=218834’
|
Complex analysis/Related Articles - Citizendium
Complex analysis/Related Articles
A list of Citizendium articles, and planned articles, about Complex analysis.
See also changes related to Complex analysis, or pages that link to Complex analysis or to this page or whose text contains "Complex analysis".
2.1 Disciplines within complex analysis
Mathematical analysis [r]: Add brief definition or description
Disciplines within complex analysis
Harmonic analysis [r]: Add brief definition or description
Several complex variables [r]: Field of mathematics, precisely of complex analysis, that studies those properties which characterize functions of more than one complex variable. [e]
Analytic function [r]: Add brief definition or description
Taylor series [r]: Representation of a function as an infinite sum of terms calculated from the values of its derivatives at a single point. [e]
Proof that holomorphic functions are analytic [r]: Add brief definition or description
Meromorphic function [r]: Add brief definition or description
Pole [r]: Add brief definition or description
Residue [r]: Complex number which describes the behavior of line integrals of a meromorphic function around a singularity. [e]
Cauchy's integral formula [r]: Add brief definition or description
Laurent series [r]: Add brief definition or description
Cauchy–Riemann equations [r]: Add brief definition or description
Riemann surface [r]: Add brief definition or description
{\displaystyle i^{2}=-1}
Real analysis [r]: Add brief definition or description
Augustin-Louis Cauchy [r]: (1789 – 1857) prominent French mathematician, one of the pioneers of rigor in mathematics and complex analysis. [e]
Bernhard Riemann [r]: Add brief definition or description
Karl Weierstrass [r]: Add brief definition or description
Retrieved from "https://citizendium.org/wiki/index.php?title=Complex_analysis/Related_Articles&oldid=517805"
|
USACO FAQs · USACO Guide
Authors: USACO Finalists, Darren Yao
Q: What is the USACO?
USACO stands for the USA Computing Olympiad.
Q: How Do USACO Contests Work?
more about contest format
USACO contests are scored out of 1000 points. Recent contests each have three equally weighted problems; that is, each problem is worth 1000/3 = 333.333… points. Each problem has 10–25 test cases weighted equally. For example, if a contest has three problems, and problem 1 has 10 test cases, then you will get 333.333…/10 = 33.333… points for each test case that you get correct for problem 1.
At the end of the contest, a "cutoff score" for each division is determined based on the difficulty of the contest. If your score is above the "cutoff score" for your division, then you get promoted to the next division. Historically, the cutoff score has always been a multiple of 50 points in the range 600–850, most commonly 750.
If you get a perfect score during the contest (i.e., you solve all three problems correctly), then you get an in-contest promotion, where you immediately get promoted to the next division. You can start the next division's contest whenever you want during the contest window; your four-hour timer resets when you start the contest.
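The scoring rule above can be sketched in a few lines of Python (an illustration only, not official USACO code):

```python
# Each of the (assumed three) problems is worth 1000/3 points, split evenly
# across its test cases.
def problem_score(cases_total, cases_passed, num_problems=3):
    per_problem = 1000 / num_problems
    return per_problem * cases_passed / cases_total

print(round(problem_score(10, 10), 3))  # full solve of a 10-case problem
print(round(problem_score(10, 1), 3))   # one correct test case
```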
Q: What language should I use for USACO?
The most popular languages that USACO supports are C++, Java, and Python. In general, we recommend the following:
We cover choosing a language in more detail in our "Choosing a Language" module.
Q: How do I prepare for USACO?
Learn algorithms, do practice problems, and reflect on why you're missing problems. Make sure you learn from every problem you do, and you'll improve over time. If you're looking for a guided roadmap to improve at USACO, check out the USACO Guide (that's this site!)
If you want to get better at USACO, the key thing is to do more practice!
Q: What's the best resource to get better at USACO?
We made the USACO Guide specifically to provide high-quality resources to help people get better at USACO; we encourage you to give it a shot! We also list additional resources that you may find helpful. Additionally, USACO has its own resources page.
Q: I'm stuck. Where can I get help for USACO?
We recommend you go to the (unofficial) USACO Forum to get help when you're stuck. Alternatively, you can join the (unofficial) USACO Discord Server. They have channels called #cp-discussion and #cp-help dedicated to questions about competitive programming.
Q: When should I read the USACO analyses?
It really comes down to personal preference; there's no right or wrong answer -- do what works for you! With that being said, we've asked numerous top USACO competitors what they think about this question. This module lists their thoughts on how to effectively practice for USACO.
Q: Should I implement every problem that I solve?
Usually, yes (unless the problem is significantly too easy for you). Solving competitive programming problems consists of two parts: coming up with the algorithm, and implementing the algorithm. You should implement so that you practice both parts.
Q: What topics do I need to know for each of the USACO divisions?
While there is no official USACO syllabus, we've compiled topics for each division from historical contests:
USACO Bronze Topics
USACO Silver Topics
USACO Gold Topics
USACO Platinum Topics
Q: Where can I find more practice problems?
The USACO website has problems from 2011 onwards.
We provide a list of recent USACO problems (2015 onwards) here.
Older USACO problems may be easier than recent USACO contest problems due to increases in difficulty.
CodeForces -- you can search by tag, difficulty level, etc.
CSAcademy Archive
For additional sources of problems, check the contests page.
Q: Should I also use the USACO Training Pages?
You might find them useful. Keep in mind that they:
Are not beginner friendly, as noted by Rob Kolstad himself here.
Don't allow you to view the analysis for a problem until you solve it.
Don't allow you to move past a section until you've solved all the problems in it.
Though some people consider this a plus (as mentioned here).
Don't cover many of the topics that appear frequently in recent contests (ex. segment trees), as noted here.
Q: What CodeForces rating corresponds to each of the USACO divisions?
CodeForces rating and USACO divisions shouldn't be compared directly, since CF emphasizes solving problems quickly (5–8 problems under a 2-hour time constraint), while USACO has harder problems and more time (3 problems in 4–5 hours). However, here are some very rough estimates:
USACO Bronze competitors are probably <1300 rated on CF, and Bronze problems correspond to 900-1500 rated CF problems.
USACO Silver competitors are probably 1200-1500 rated on CF, and Silver problems correspond to 1200-1900 rated CF problems.
USACO Gold competitors are probably 1500-1800 rated on CF, and Gold problems correspond to 1500-2200 rated CF problems.
USACO Platinum competitors are probably 1650+ rated on CF, and Platinum problems correspond to 1900+ rated CF problems. (Note that at the Platinum level there is a lot of variation in CF ratings.)
Again, CF problems and contests are significantly different from USACO!
We hope you've found this FAQ useful! If you have any additional questions, please feel free to ask on the USACO Forum and we'll do our best to answer them.
Best of luck on your competitive programming journey!
|
Theory of relativity/Special relativity/E = mc² - Wikiversity
Theory of relativity/Special relativity/E = mc²
This article presumes that the reader has read Special relativity/energy.
This article will deduce, on theoretical grounds, that the mass (rest mass, that is) of an object changes when it emits or absorbs energy, in accordance with the famous formula E=mc2.
As in the previous articles, we will perform a "gedanken experiment" on an object releasing kinetic energy. An object of mass M, consisting of two parts of mass M/2 each, bursts into two separate objects. Each resulting object has mass m/2, and they fly away from the original object, in opposite directions, each with speed
{\displaystyle v\,}
We could imagine that the two halves of the original object had a spring between them that was released, or that electric or magnetic repulsion was involved, or that an explosive charge was used, or that a nuclear disintegration took place.
Under classical mechanics, of course we expect that m = M, due to conservation of mass.
The original object, of mass M, splits into two pieces, each of mass m/2 and moving away with speed v.
The same experiment, viewed from a frame of reference moving to the left with speed v. After the event, one of the resulting objects is at rest and the other has speed 2v/(1+v^2/c^2).
In the original rest frame, there is no kinetic energy before the event. After the event, each piece has kinetic energy
{\displaystyle {\frac {m}{2}}\ c^{2}\left({\frac {1}{\sqrt {1-v^{2}/c^{2}}}}-1\right)}
so the total energy released is
{\displaystyle E=mc^{2}\left({\frac {1}{\sqrt {1-v^{2}/c^{2}}}}-1\right)}
(equation 1)
(See Special relativity/energy for the derivation of this.)
Now examine the experiment from a frame of reference moving to the left with speed
{\displaystyle v\,}
. Before the event, the object of mass M was moving to the right with speed
{\displaystyle v\,}
, so its kinetic energy was
{\displaystyle Mc^{2}\left({\frac {1}{\sqrt {1-v^{2}/c^{2}}}}-1\right)}
After the event, one of the pieces is at rest, and the other is moving to the right with speed
{\displaystyle {\frac {2v}{1+v^{2}/c^{2}}}}
due to the addition law. (See Special relativity/space, time, and the Lorentz transform for the derivation of this.)
The kinetic energy after the event, in this frame, is
{\displaystyle {\frac {m}{2}}\ c^{2}\left({\frac {1}{\sqrt {1-4v^{2}/((1+v^{2}/c^{2})^{2}\ c^{2})}}}-1\right)}
{\displaystyle {\frac {m}{2}}\ c^{2}\left({\frac {1+v^{2}/c^{2}}{1-v^{2}/c^{2}}}-1\right)}
{\displaystyle {\frac {mv^{2}}{1-v^{2}/c^{2}}}}
The energy released, as measured in this frame, is the final energy minus the initial energy, or
{\displaystyle E={\frac {mv^{2}}{1-v^{2}/c^{2}}}-Mc^{2}\left({\frac {1}{\sqrt {1-v^{2}/c^{2}}}}-1\right)}
Now all observers, in all frames of reference, must agree on how much energy was released, so, using equation 1:
{\displaystyle E=mc^{2}\left({\frac {1}{\sqrt {1-v^{2}/c^{2}}}}-1\right)={\frac {mv^{2}}{1-v^{2}/c^{2}}}-Mc^{2}\left({\frac {1}{\sqrt {1-v^{2}/c^{2}}}}-1\right)}
{\displaystyle m\left(1+{\frac {v^{2}/c^{2}}{1-v^{2}/c^{2}}}-{\frac {1}{\sqrt {1-v^{2}/c^{2}}}}\right)=M\left({\frac {1}{\sqrt {1-v^{2}/c^{2}}}}-1\right)}
{\displaystyle m\left({\frac {1}{1-v^{2}/c^{2}}}-{\frac {1}{\sqrt {1-v^{2}/c^{2}}}}\right)=M\left({\frac {1}{\sqrt {1-v^{2}/c^{2}}}}-1\right)}
{\displaystyle {\frac {m}{\sqrt {1-v^{2}/c^{2}}}}=M}
Now, using equation 1 again:
{\displaystyle E=mc^{2}\left({\frac {1}{\sqrt {1-v^{2}/c^{2}}}}-1\right)=c^{2}\left({\frac {m}{\sqrt {1-v^{2}/c^{2}}}}-m\right)=c^{2}\ (M-m)}
The loss in mass, times the square of the speed of light, is equal to the energy released.
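The algebra above can be spot-checked numerically. A small Python sketch, in units where c = 1 (illustrative only):

```python
from math import sqrt

c = 1.0   # work in units where c = 1
v = 0.6   # speed of each fragment in the original rest frame
M = 1.0   # mass of the original object

gamma = 1 / sqrt(1 - v**2 / c**2)
m = M / gamma  # total fragment mass, from m / sqrt(1 - v^2/c^2) = M

# Energy released, measured in the original rest frame (equation 1).
E_rest = m * c**2 * (gamma - 1)

# Energy released, measured in the frame moving to the left with speed v.
E_moving = m * v**2 / (1 - v**2 / c**2) - M * c**2 * (gamma - 1)

# Both frames agree, and both equal c^2 (M - m).
print(E_rest, E_moving, c**2 * (M - m))
```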
Since the laws of physics work in reverse, an object's mass will increase if it absorbs energy.
This principle is often interpreted as meaning that the two classical principles of conservation of mass and conservation of energy are to be replaced with a single more general principle of conservation of "mass-energy", since mass and energy can be converted into each other. But the principle of conservation of energy was always about conservation of potential energy plus kinetic energy, and that principle still holds. Potential energy and kinetic energy can be interchanged the same way they could under classical physics. What is new is that potential energy is embodied in mass. Any object that contains potential energy in any form has extra mass because of it. In the experiment above, the initial body of mass M contained potential energy in the amount of
{\displaystyle c^{2}\ (M-m)}
, which was released when it split into two parts.
One can (and particle physicists often do) think of mass as being equivalent to potential energy, so that, for example, a uranium atom actually "contains" about 220 GeV of energy. But this potential energy is not readily convertible into kinetic energy (though about 200 MeV of it can be extracted through fission). In some cases, the notion that an object's mass is effectively all potential energy is useful. Electrons and positrons each have an intrinsic energy of 511 keV, due to their mass of 0.00055 amu. When they collide and annihilate each other, that energy is totally converted into 1.022 MeV of massless photons.
The next article in this series is Special relativity/spacetime diagrams and vectors.
Retrieved from "https://en.wikiversity.org/w/index.php?title=Theory_of_relativity/Special_relativity/E_%3D_mc²&oldid=1834838"
|
Dictionary:Laplace transform - SEG Wiki
The linear transform pair
{\displaystyle F(s)=\int f(t)e^{-st}dt}
{\displaystyle f(t)={\frac {1}{2\pi i}}\int F(s)e^{st}ds}
s is a complex number and t is a real one. When the limits of integration are
{\displaystyle \pm \infty }
, the transform is two-sided. The two-sided Laplace transform becomes identical with the Fourier transform when s is purely imaginary. More often the one-sided transform is used, especially in the study of transient waveforms. In this case, where f(t) is causal, the integral is
{\displaystyle F(s)=\lim _{\varepsilon \to 0}\int _{-\varepsilon }^{\infty }f(t)e^{-st}\,dt}
{\displaystyle f(t)={\frac {1}{2\pi i}}\int F(s)e^{+st}\,ds}
The one-sided transform is often written with limits 0 to
{\displaystyle \infty }
, the limit being implied. Laplace transforms may not exist for all values of s and hence many Laplace transforms are limited to strips of convergence, the ranges of values for the real part of s for which the above intearals are finite. The Laplace transform domain is often called the s-plane. See Sheriff and Geldart (1995, 545–546).
Retrieved from "https://wiki.seg.org/index.php?title=Dictionary:Laplace_transform&oldid=77326"
|
2013:Audio Chord Estimation - MIREX Wiki
Revision as of 08:28, 9 September 2013 by J. Ashley Burgoyne (talk | contribs)
3.1 Chord Vocabularies
3.2 Chord Segmentation
4.3 Command line calling format
This task requires participants to extract or transcribe a sequence of chords from an audio music recording. For many applications in music information retrieval, extracting the harmonic structure of an audio track is very desirable, for example for segmenting pieces into characteristic segments, for finding similar pieces, or for semantic analysis of music. The extraction of the harmonic structure requires the estimation of a sequence of chords that is as precise as possible. This includes the full characterisation of chords – root, quality, and bass note – as well as their chronological order, including specific onset times and durations. Audio chord estimation has a long history in MIREX, and readers interested in this history, especially with respect to evaluation methodology, should review the work of Christopher Harte (2010), Pauwels and Peeters (2013), and the “Utrecht Agreement” on evaluation metrics.
Two datasets are used to evaluate chord transcription accuracy.
The collected Beatles, Queen, and Zweieck datasets from the Centre for Digital Music at Queen Mary, University of London (http://www.isophonics.net/), as used for Audio Chord Estimation in MIREX for many years. Available from http://www.isophonics.net/. See also Matthias Mauch’s dissertation (2010) and Harte et al.’s introductory paper (2005).
An abridged version of the Billboard dataset from McGill University, including a representative sample of American popular music from the 1950s through the 1990s. Available from http://billboard.music.mcgill.ca. See also Ashley Burgoyne’s dissertation (2012) and Burgoyne et al.’s introductory paper (2011). Parsing tools for the data are available from http://hackage.haskell.org/package/billboard-parser/ and documented by De Haas and Burgoyne (2012).
The training and testing divisions differ for the two data sets. The Isophonics set has been available publicly for so long that it no longer makes sense to offer a separate training phase; as such, the entire data set will be used for testing, as in previous years. In contrast, in order to support MIREX, a portion of the Billboard ground truth has been withheld from the public. Submissions may train on all of the songs that have been publicly released so far: the MIREX servers have access to the ground-truth annotations and the original audio. Whether trained or not, all submissions will be tested against a fresh set of 200 songs that have never been released publicly.
The ground-truth files contain one line per unique chord, in the form {start_time end_time chord}, e.g.,
Start and end times are in seconds from the start of the file. Chord labels follow the syntax proposed by C. Harte et al. (2005). Please note that the syntax has changed slightly since it was originally described; in particular, the root is no longer implied as a voiced element of a chord, so a C major chord (notes C, E, and G) should be written C:(1,3,5) instead of just C:(3,5) if using the interval-list representation. As before, the labels C and C:maj are equivalent to C:(1,3,5).
To evaluate the quality of an automatic transcription, a transcription is compared to ground truth created by one or more human annotators. MIREX typically uses chord symbol recall (CSR) to estimate how well the predicted chords match the ground truth:
{\displaystyle {\textrm {CSR}}={\frac {\textrm {total duration of segments where annotation equals estimation}}{\textrm {total duration of annotated segments}}}}
In previous years, MIREX has used an approximate CSR calculated by sampling both the ground-truth and the automatic annotations every 10 ms and dividing the number of correctly annotated samples by the total number of samples. Following Christopher Harte (2010, §8.1.2), however, we can view the ground-truth and estimated annotations as continuous segmentations of the audio and calculate the CSR by considering the cumulative length of the correctly overlapping segments. This way of calculating the CSR is more precise, as the precision of the frame-based method is limited by the frame length, and computationally more efficient, as it reduces the number of segment comparisons. Because pieces of music come in a wide variety of lengths, we will weight the CSR by the length of the song when computing an average for a given corpus. This final number is referred to as the weighted chord symbol recall (WCSR).
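The segment-based CSR calculation described above can be sketched as follows. This is an illustration, not the official MIREX evaluation code; annotations are assumed to be lists of (start, end, label) triples:

```python
def label_at(segments, t):
    """Label of the segment containing time t, or None if unannotated."""
    for start, end, label in segments:
        if start <= t < end:
            return label
    return None

def chord_symbol_recall(ground_truth, estimate):
    """Duration-weighted fraction of the annotated timeline where the
    estimated label equals the ground-truth label."""
    bounds = sorted({b for s, e, _ in ground_truth + estimate for b in (s, e)})
    correct = annotated = 0.0
    for a, b in zip(bounds, bounds[1:]):
        mid = (a + b) / 2  # any interior point of the elementary interval
        true_label = label_at(ground_truth, mid)
        if true_label is None:
            continue  # skip spans outside the ground-truth annotation
        annotated += b - a
        if true_label == label_at(estimate, mid):
            correct += b - a
    return correct / annotated

gt  = [(0.0, 2.0, "C:maj"), (2.0, 4.0, "G:maj")]
est = [(0.0, 1.0, "C:maj"), (1.0, 4.0, "G:maj")]
print(chord_symbol_recall(gt, est))  # 3 of 4 annotated seconds match: 0.75
```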
Chord Vocabularies
[chord-eval]
We propose a set of single chord evaluation measures for MIREX that extends the previous iterations of MIREX and combines it with evaluation measures proposed in the literature, providing a more complete assessment of the transcription quality. Following Pauwels and Peeters (2013), we suggest using the CSR with five different chord vocabulary mappings.
In each of these calculations, the full chord descriptions of either the estimated or the ground-truth transcriptions, which might contain complex chord annotations, would be mapped to the following classes:
Major and minor: {N, maj, min};
Seventh chords: {N, maj, min, maj7, min7, 7};
Major and minor with inversions: {N, maj, min, maj/3, min/b3, maj/5, min/5}; or
Seventh chords with inversions: {N, maj, min, maj7, min7, 7, maj/3, min/b3, maj7/3, min7/b3, 7/3, maj/5, min/5, maj7/5, min7/5, 7/5, maj7/7, min7/b7, 7/b7}.
With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels. A mapping exists if both the root notes and bass notes match, and the structure of the output label is the largest possible subset of the input label given the vocabulary. For instance, in the major and minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1,3,5}, is a subset of the interval set of the G:7(#9), {1,3,5,b7,#9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead because the interval set of G:7 {1, 3, 5, b7} is also a subset of G:7(#9) but is larger than G:maj. If a chord cannot be represented by a certain class, e.g., mapping a D:aug or F:sus4(9) to {maj, min}, the chord is excluded from the evaluation if it occurs in the ground-truth, and it is considered a mismatch if it occurs in an estimated annotation.
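The quality part of this mapping can be sketched as follows (an illustration only; a full implementation would also compare root and bass notes). Interval sets follow the Harte et al. syntax:

```python
# Two of the vocabularies described above, as interval sets.
MAJMIN = {"maj": {"1", "3", "5"}, "min": {"1", "b3", "5"}}
SEVENTHS = {**MAJMIN,
            "maj7": {"1", "3", "5", "7"},
            "min7": {"1", "b3", "5", "b7"},
            "7":    {"1", "3", "5", "b7"}}

def map_quality(intervals, vocab):
    """Return the vocabulary quality whose interval set is the largest subset
    of `intervals`, or None if no quality applies (e.g. an augmented triad)."""
    fits = [(len(iv), name) for name, iv in vocab.items() if iv <= intervals]
    return max(fits)[1] if fits else None

g7sharp9 = {"1", "3", "5", "b7", "#9"}        # interval set of G:7(#9)
print(map_quality(g7sharp9, MAJMIN))          # maps to maj
print(map_quality(g7sharp9, SEVENTHS))        # maps to 7, the larger subset
print(map_quality({"1", "3", "#5"}, MAJMIN))  # augmented triad: no mapping
```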
Most frequent chord qualities in the McGill Billboard corpus.
Quality	Freq. (%)	Cum. Freq. (%)
maj(9)	1	91
sus4	1	93
7(#9)	1	95
Our recommendations are motivated by the frequencies of chord qualities in the Billboard corpus (see table above), which is a balanced sample of American popular music from the 1950s through the 1990s (J.A. Burgoyne, Wild, and Fujinaga 2011). Pure major and minor chords alone account for 65 percent of all chords encountered, whereas augmented and diminished triads account for 0.2 percent or less of the corpus each. Our arguments for our particular seventh-chord vocabulary as opposed to the set of all tetrads follows similar reasoning; our proposed vocabulary accounts for 86 percent of all chords, whereas no other standard type of seventh chord accounts for more than 0.2 percent of the corpus. In future years, the table suggests that we might consider introducing vocabularies including power chords, and possibly suspended chords or added sixths and ninths as well.
Besides CSR, the chord transcription literature includes several other metrics for evaluating chord transcriptions, which mainly focus on the segmentation of the automatic transcription. We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding, for each annotated segment, the maximally overlapping segment in the other annotation, and then summing the differences (S. A. Abdallah et al. 2005; Mauch 2010, §2.3.3). Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. Both directions can be combined to yield an overall quality metric (Christopher Harte 2010, §8.3.2):
{\displaystyle Q=1-{\frac {\textrm {maximum of directional Hamming distances in either direction}}{\textrm {total duration of song}}}}
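This segmentation measure can be sketched as follows (an illustration, not the official evaluation code); segments are (start, end) pairs, since labels play no role in segmentation:

```python
def overlap(a, b):
    """Length of the overlap between segments a and b."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def directional_hamming(segs_a, segs_b):
    """For each segment of A, the part NOT covered by its maximally
    overlapping segment in B, summed over all segments of A."""
    return sum((e - s) - max(overlap((s, e), t) for t in segs_b)
               for s, e in segs_a)

def quality(gt, est, song_duration):
    """Q = 1 - (max directional Hamming distance) / (song duration)."""
    d = max(directional_hamming(gt, est), directional_hamming(est, gt))
    return 1.0 - d / song_duration

gt  = [(0.0, 2.0), (2.0, 4.0)]
est = [(0.0, 3.0), (3.0, 4.0)]
print(quality(gt, est, 4.0))  # 0.75
```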
Audio tracks in the training directory will be encoded as 44.1 kHz 16bit mono WAV files.
The algorithms should output text files with a similar format to that used in the ground truth transcriptions. That is to say, they should be flat text files with chord segment labels and times arranged thus:
with elements separated by white spaces, times given in seconds, chord labels corresponding to the syntax described by C. Harte et al. (2005), and one chord segment per line. As in all benchmarks after 2008, end times are a mandatory component of the output. For the evaluation process we will assume enharmonic equivalence for chord roots. We will no longer accept participants who would only like to be evaluated on major/minor chords and want to use the number format.
where fileList.txt has the paths to each WAV file. The features extracted on this stage can be stored under /path/to/scratch/dir. The ground truth files for the supervised learning will be in the same path with a .txt extension at the end. For example for /path/to/trainFile1.wav, there will be a corresponding ground truth file called /path/to/trainFile1.wav.txt. For testing:
doChordID.sh "/path/to/testFileList.txt" "/path/to/scratch/dir" "/path/to/results/dir"
If there is no training, you can ignore the second argument here. In the results directory, there should be one file for each testfile with same name as the test file + .txt. Programs can use their working directory if they need to keep temporary cache files or internal debugging info. Standard output and standard error will be logged.
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed). All submissions should include a README file including the following information:
Abdallah, Samer A., Katy Noland, Mark B. Sandler, Michael Casey, and Christophe Rhodes. 2005. “Theory and Evaluation of a Bayesian Music Structure Extractor.” In Proceedings of the International Society for Music Information Retrieval Conference, 420–425.
Burgoyne, J. A., J. Wild, and I. Fujinaga. 2011. “An expert ground truth set for audio chord recognition and music analysis.” In Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR), 633–638.
Burgoyne, John Ashley. 2012. “Stochastic Processes and Database-Driven Musicology.” Ph.D. diss. Montréal, Québec, Canada: McGill University. http://digitool.Library.McGill.CA:80/R/-?func=dbin-jump-full&object_id=107704&silo_library=GEN01.
Haas, W. B. de, and John~Ashley Burgoyne. 2012. Parsing the Billboard Chord Transcriptions. Technical report UU-CS- 2012-018, Department of Information and Computing Sciences, Utrecht University.
Harte, C., M. Sandler, S. Abdallah, and E. Gómez. 2005. “Symbolic representation of musical chords: A proposed syntax for text annotations.” In Proceedings of the 6th International Society for Music Information Retrieval Conference (ISMIR), 66–71.
Harte, Christopher. 2010. “Towards automatic extraction of harmony information from music signals.” Ph.D. diss. Queen Mary, University of London.
Mauch, Matthias. 2010. “Automatic Chord Transcription from Audio Using Computational Models of Musical Context.” Ph.D. diss. Queen Mary University of London.
Pauwels, Johan, and Geoffroy Peeters. 2013. “Evaluating automatically estimated chord sequences.” In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Vancouver, British Columbia, Canada.
Retrieved from "https://www.music-ir.org/mirex/w/index.php?title=2013:Audio_Chord_Estimation&oldid=9556"
|
Waste heat - zxc.wiki
Waste heat is heat that is generated by living beings or technical devices and given off to the environment.
From a thermodynamic point of view, most real processes are irreversible. As a result of the dissipation of energy, heat is inevitably generated during these processes.
Snowdrops penetrate the snow cover using waste heat
Metabolic processes inevitably produce heat (thermogenesis). Poikilothermic (cold-blooded) organisms release all of the heat they generate as waste heat to their environment. Because the temperature of such organisms deviates little from the ambient temperature, this low level of waste heat is hardly noticeable. Nevertheless, the melting of snow in the immediate vicinity of early-blooming plants can be attributed to the waste heat they produce: snowdrops even actively warm themselves above the ambient temperature, so that their waste heat melts the snow cover above them and light can reach them. Birds and mammals, as homeothermic (warm-blooded) animals, constantly give off enough energy as waste heat to keep their body temperature almost constant; their waste heat is clearly noticeable.
In both cases, organisms have developed mechanisms for thermoregulation. If overheating threatens due to strong sunlight or high ambient temperatures, plants increase their heat output by increasing their transpiration, thereby dissipating additional heat of evaporation. In higher organisms this function is performed by sweating or panting.
See also: thermoregulation
Technical devices and systems cannot be operated without generating waste heat. This heat usually has to be removed in order to avoid malfunctions due to overheating, or to restore the initial state of the working medium in cyclic processes.
In energy converters, waste heat always represents a loss: it is energy that leaves the system and is no longer available for the system's purpose. The efficiency of the energy conversion can therefore be defined in terms of the waste heat.
According to Ohm's law, every current-carrying conductor, i.e. almost every electrical component, generates resistive heat loss. This means that waste heat must be removed from every electrical device and every electrical engineering system. In an electric motor, for example, in addition to the mechanical friction of the moving parts, the internal resistance of the current-carrying windings reduces the mechanical power delivered via the shaft. An efficiency can therefore be defined for an electric motor in which, in addition to the frictional losses, the power loss in the internal resistance of the coils is counted negatively.
Denoting the electrical input power by
{\displaystyle P_{\mathrm {el} }}
, the mechanical output power by
{\displaystyle P_{\mathrm {m} }}
, the friction loss by
{\displaystyle P_{\mu }}
and the power loss in the winding resistance
{\displaystyle R}
by
{\displaystyle P_{R}}
, the efficiency is
{\displaystyle \eta _{\mathrm {m} }={\frac {P_{\mathrm {m} }}{P_{\mathrm {el} }}}={\frac {P_{\mathrm {el} }-P_{\mu }-P_{R}}{P_{\mathrm {el} }}}}
The friction losses are likewise converted into waste heat (dissipation), so that the mechanical power is reduced by the total waste-heat output
{\displaystyle {\dot {Q}}_{\mathrm {ab} }=P_{\mu }+P_{R}}
giving
{\displaystyle \eta _{\mathrm {m} }={\frac {P_{\mathrm {el} }-{\dot {Q}}_{\mathrm {ab} }}{P_{\mathrm {el} }}}=1-{\frac {{\dot {Q}}_{\mathrm {ab} }}{P_{\mathrm {el} }}}}
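A quick numeric sanity check of the motor-efficiency relation above (the wattage values are illustrative, not from the article):

```python
def motor_efficiency(p_el, p_friction, p_resistive):
    """Efficiency of an electric motor: electrical input power minus the
    total waste-heat output (friction + resistive losses), divided by input."""
    q_ab = p_friction + p_resistive      # total waste-heat power Q_ab
    return (p_el - q_ab) / p_el          # eta_m = 1 - Q_ab / P_el

# Example: 1000 W electrical input, 30 W friction loss, 70 W resistive loss
eta = motor_efficiency(1000.0, 30.0, 70.0)   # -> 0.9
```

Reducing either loss term directly raises the efficiency, which is why both friction and winding resistance are targets of motor design.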
In the same way, losses in other electrotechnical energy converters can be accounted for as waste-heat output. Information-technology devices (such as CPUs, routers, etc.), on the other hand, are rarely energy converters: apart from the small fraction converted into sound or electromagnetic radiation (light, radio waves) for signal transmission, they emit practically all of their electrical power as waste heat. Their performance therefore cannot be expressed as an efficiency; it is instead characterized by their demand for electrical energy.
In the case of incandescent lamps, the generation of waste heat far outweighs the generation of light; their luminous efficiency is therefore only a few percent.
CPUs convert practically all of the electrical power they consume into waste heat. Since free convection does not remove it fast enough, powerful processor coolers are used.
In thermal engineering systems, waste heat is often the result of imperfect insulation, which can only be realized to a limited extent in practice, or of real-world losses in heat transfer. Waste heat here represents a loss of heating energy.
The heating coil in a kettle heats not only the water it contains, but also the kettle material and the surrounding air.
An internal combustion engine generates waste heat that is given off via the radiator in addition to the exhaust system, since combustion heat is also transferred to the pistons, valves, cylinder walls, etc.
Heat engines such as thermal power plants, internal combustion engines, etc. cannot be operated without releasing waste heat into the environment. According to the second law of thermodynamics, heat engines cannot perform work without a temperature difference; such systems must therefore be able to give off waste heat at a low temperature level in order to function. Since heat flows spontaneously (i.e. by itself, without any effort) only down a temperature gradient, and since a body of water or the Earth's atmosphere (via cooling towers) is the only natural heat sink available when a power plant is operated, waste heat is always released at temperatures above the ambient temperature.
According to the Carnot efficiency, the efficiency of a heat engine increases with decreasing temperature on the cold, heat-rejecting side: the lower the temperature level of the waste heat, the higher the efficiency. In thermal power plants, and in all variants of using (waste) heat by means of cycle processes, efforts are therefore made to bring the temperature of the rejected heat as close as possible to the ambient temperature.
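A quick numeric illustration of the Carnot limit (the temperatures are illustrative, not from the article): lowering the heat-rejection temperature, as happens in winter, raises the attainable efficiency.

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on heat-engine efficiency: eta_C = 1 - T_cold / T_hot,
    with both temperatures in kelvin."""
    return 1.0 - t_cold_k / t_hot_k

# Heat supplied at 800 K, rejected at 300 K (summer) vs 280 K (winter):
summer = carnot_efficiency(800.0, 300.0)   # 0.625
winter = carnot_efficiency(800.0, 280.0)   # 0.65
```

This is the same effect noted below for power plants, whose efficiency is higher in winter because waste heat can be rejected at lower temperatures.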
An internal combustion engine emits waste heat mainly through the exhaust system, since the combustion heat cannot be fully used during expansion.
If the exhaust-gas heat of the internal combustion engine in a combined heat and power plant is used to heat water, the waste heat can be reduced and the fuel utilization improved. The Carnot efficiency of the overall process (the degree of utilization) increases as the temperature at which the overall system rejects heat to the environment drops.
In midsummer, the output of thermal power plants that release their waste heat into rivers or lakes must be throttled so that the water temperatures do not rise too sharply. Warmer water holds less dissolved oxygen, which would endanger fish and other organisms.
In winter, the efficiency of most thermal power plants is higher because of the lower ambient temperatures, since waste heat can be given off at lower temperatures than in summer.
Heat recovery from waste water in the sewer system, owing to its uniform temperature, can contribute to the profitable heating operation of a heat pump.
In most technical processes, efforts are made to reduce the waste-heat output to a technically and economically sensible level through measures such as insulation, heat recovery or the use of suitable materials (e.g. of low ohmic resistance). Reducing the waste heat of a system therefore usually means investing more effort in such measures, and thus increasing the system's efficiency.
The most common way of making the heat given off by a technical system usable, and thus reducing its heat rejection to the environment, is heat recovery: in power plants (recuperation), in air-conditioning systems, in the blast-furnace process (hot-blast stoves) and in many other cyclic processes. Heat exchangers are used that transfer heat from hot exhaust gases or exhaust air to ambient air or to other process media that are available at lower temperatures (e.g. fresh air in winter for air conditioning).
Although technical systems emit waste heat at sometimes considerable rates, for example from the cooling towers of power plants, the temperature level of these heat sources is in most cases too low for economic use. Technical options for converting waste heat at a low temperature level into electrical energy include cycle processes such as the Organic Rankine Cycle, in which hydrocarbons with a particularly low boiling point serve as the working medium, and thermoelectric generators, which obtain electrical energy directly from an existing temperature difference between two bodies.
The temperature level of technical waste heat sources can be used in agriculture, for example, for heating greenhouses , for growing asparagus or for fish farming .
The use of industrial waste heat, both directly and via heat pumps where the waste-heat temperature is not sufficient, is considered an important heat source for climate-friendly local and district heating networks. Heat recovery has great potential for increasing energy efficiency: the waste heat generated in Europe could cover the entire demand of the heating sector. To tap this potential, the district heating supply would have to be expanded in order to transport waste heat to households. In Germany, around 700 to 800 TWh of industrial waste heat is generated every year, of which more than 300 TWh could be used for heating purposes with heat pumps.
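Why a heat pump can economically upgrade low-temperature waste heat follows from its coefficient of performance, which grows as the temperature lift shrinks. A minimal sketch under illustrative assumptions (the temperatures are invented, and the ideal Carnot limit is used; real systems reach only a fraction of it):

```python
def carnot_cop_heating(t_supply_k, t_source_k):
    """Ideal (Carnot) coefficient of performance of a heat pump in heating
    mode: COP = T_supply / (T_supply - T_source), temperatures in kelvin."""
    return t_supply_k / (t_supply_k - t_source_k)

# Low-grade waste heat at 40 deg C (313.15 K) upgraded to 70 deg C (343.15 K)
# district-heating supply: a small temperature lift gives a large ideal COP.
cop = carnot_cop_heating(343.15, 313.15)
```

The higher the waste-heat source temperature, the smaller the lift and the less electrical energy is needed per unit of delivered heat, which is the argument for pairing industrial waste heat with district heating.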
↑ University of Halle: Thermal theory and thermal conduction (PDF). In: Thermal theory (thermodynamics). Retrieved February 21, 2018.
↑ Andrei David et al.: Heat Roadmap Europe: Large-Scale Electric Heat Pumps in District Heating Systems. In: Energies. Vol. 10, No. 4, 2017, pp. 578 ff., doi:10.3390/en10040578.
↑ The heat transition can only be achieved with heat pumps. In: Renewable Energies. The magazine. June 27, 2019. Retrieved June 27, 2019.
This page is based on the copyrighted Wikipedia article "Abw%C3%A4rme" (Authors); it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License. You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.
|
= Background and Definition =
A Boolean function <math>f : \mathbb{F}_{2^n} \rightarrow \mathbb{F}_2</math> is said to be ''plateaued'' if its Walsh transform takes at most three values: 0 and <math>\pm\mu</math> for some positive integer <math>\mu</math>.
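To make the definition concrete, the following sketch (not part of the original page) computes the Walsh spectrum of a small Boolean function, given as a lookup table on bit-tuples, and tests the plateaued condition. Bent functions are the special case whose spectrum takes only the two values <math>\pm 2^{n/2}</math>.

```python
from itertools import product

def walsh_spectrum(f, n):
    """Walsh transform W_f(a) = sum over x in F_2^n of (-1)^(f(x) + a.x),
    for a Boolean function f given as a dict from bit-tuples to 0/1."""
    pts = list(product((0, 1), repeat=n))
    spec = {}
    for a in pts:
        spec[a] = sum(
            (-1) ** (f[x] ^ (sum(ai * xi for ai, xi in zip(a, x)) % 2))
            for x in pts
        )
    return spec

def is_plateaued(spec):
    """Plateaued: the Walsh transform takes at most the values 0 and +/- mu."""
    magnitudes = {abs(v) for v in spec.values()} - {0}
    return len(magnitudes) <= 1

# Example: the bent function f(x1, x2) = x1 * x2 on F_2^2,
# whose Walsh spectrum consists of the values +/- 2 only.
f = {x: x[0] & x[1] for x in product((0, 1), repeat=2)}
spec = walsh_spectrum(f, 2)
```

This brute-force enumeration is only practical for small n, but it suffices to check the three-valued spectrum condition on toy examples.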
== Primary constructions ==
=== Maiorana-MacFarland Functions ===
=== Generalization of the Maiorana-MacFarland Functions <ref name="camion1992">Camion P, Carlet C, Charpin P, Sendrier N. On Correlation-immune functions. InAdvances in Cryptology—CRYPTO’91 1992 (pp. 86-100). Springer Berlin/Heidelberg.</ref> ===
The Maiorana-MacFarland class of bent functions can be generalized into the class of functions <math>f_{\phi,h}</math> of the form
{\displaystyle f:\mathbb {F} _{2^{n}}\rightarrow \mathbb {F} _{2}}
{\displaystyle \{b\in \mathbb {F} _{2}^{n}:D_{a}D_{b}F(x)=v\}}
{\displaystyle \{b\in \mathbb {F} _{2}^{n}:D_{a}F(b)=D_{a}F(x)+v\}}
{\displaystyle f_{\phi ,h}}
{\displaystyle f_{\phi ,h}(x,y)=x\cdot \phi (y)+h(y)}
{\displaystyle x\in \mathbb {F} _{2}^{r},y\in \mathbb {F} _{2}^{s}}
{\displaystyle \phi :\mathbb {F} _{2}^{s}\rightarrow \mathbb {F} _{2}^{r}}
{\displaystyle h:\mathbb {F} _{2}^{s}\rightarrow \mathbb {F} _{2}}
{\displaystyle f_{\phi ,h}}
{\displaystyle W_{f_{\phi ,h}}(a,b)=2^{r}\sum _{y\in \phi ^{-1}(a)}(-1)^{b\cdot y+h(y)}}
{\displaystyle f_{\phi ,h}}
{\displaystyle \sum _{a,b\in \mathbb {F} _{2}^{n}}(-1)^{D_{a}D_{b}f(x)}}
{\displaystyle x\in \mathbb {F} _{2}^{n}}
{\displaystyle v\in \mathbb {F} _{2}^{m}}
{\displaystyle \{(a,b)\in (\mathbb {F} _{2}^{n})^{2}:D_{a}D_{b}F(x)=v\}}
{\displaystyle x}
{\displaystyle x}
{\displaystyle v\in \mathbb {F} _{2}^{m}}
{\displaystyle v\neq 0}
{\displaystyle D_{a}D_{b}F(x)}
{\displaystyle D_{a}F(b)+D_{a}F(x)}
{\displaystyle (a,b)}
{\displaystyle (\mathbb {F} _{2}^{n})^{2}}
{\displaystyle u\cdot F,u\cdot G}
{\displaystyle F(x)=x^{d}}
{\displaystyle \lambda \neq 0}
{\displaystyle |\{(a,b)\in \mathbb {F} _{2^{n}}^{2}:D_{a}F(b)+D_{a}F(x)=v\}|=|\{(a,b)\in \mathbb {F} _{2^{n}}^{2}:D_{a}F(b)+D_{a}F(x/\lambda )=v/\lambda ^{d}\}|.}
{\displaystyle v\in \mathbb {F} _{2^{n}}}
{\displaystyle |\{(a,b)\in \mathbb {F} _{2^{n}}^{2}:D_{a}F(b)+D_{a}F(1)=v\}|=|\{(a,b)\in \mathbb {F} _{2^{n}}^{2}:D_{a}F(b)+D_{a}F(0)=v\}|;}
{\displaystyle v\neq 0}
{\displaystyle v,x\in \mathbb {F} _{2}^{n}}
{\displaystyle |\{(a,b)\in (\mathbb {F} _{2}^{n})^{2}:D_{a}D_{b}F(x)=v\}|=|\{(a,b)\in (\mathbb {F} _{2}^{n})^{2}:F(a)+F(b)=v\}|.}
{\displaystyle v}
{\displaystyle v\neq 0}
{\displaystyle {\rm {Im}}(D_{a}F)}
{\displaystyle F}
{\displaystyle f}
{\displaystyle {\Delta _{f}}(a)=\sum _{x\in \mathbb {F} _{2}^{n}}(-1)^{f(x)+f(x+a)}}
{\displaystyle x\in \mathbb {F} _{2}^{n}}
{\displaystyle 2^{n}\sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{f}(a)\Delta _{f}(a+x)=\left(\sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{f}^{2}(a)\right)\Delta _{f}(x).}
{\displaystyle x\in \mathbb {F} _{2}^{n},u\in \mathbb {F} _{2}^{m}}
{\displaystyle 2^{n}\sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{u\cdot F}(a)\Delta _{u\cdot F}(a+x)=\left(\sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{u\cdot F}^{2}(a)\right)\Delta _{u\cdot F}(x).}
{\displaystyle x\in \mathbb {F} _{2}^{n},u\in \mathbb {F} _{2}^{m}}
{\displaystyle \sum _{a\in \mathbb {F} _{2}^{n}}\Delta _{u\cdot F}(a)\Delta _{u\cdot F}(a+x)=\mu ^{2}\Delta _{u\cdot F}(x).}
{\displaystyle x,v\in \mathbb {F} _{2}^{n}}
{\displaystyle 2^{n}|\{(a,b,c)\in (\mathbb {F} _{2}^{n})^{3}:F(a)+F(b)+F(c)+F(a+b+c+x)=v\}|=|\{(a,b,c,d)\in (\mathbb {F} _{2}^{n})^{4}:F(a)+F(b)+F(c)+F(a+b+c)+F(d)+F(d+x)=v\}|.}
{\displaystyle f:\mathbb {F} _{2^{n}}\rightarrow \mathbb {F} _{2}}
{\displaystyle 0\neq \alpha \in \mathbb {F} _{2}^{n}}
{\displaystyle \sum _{w\in \mathbb {F} _{2}^{n}}W_{f}(w+\alpha )W_{f}^{3}(w)=0.}
{\displaystyle u\in \mathbb {F} _{2}^{m}}
{\displaystyle 0\neq \alpha \in \mathbb {F} _{2}^{n}}
{\displaystyle \sum _{w\in \mathbb {F} _{2}^{n}}W_{F}(w+\alpha ,u)W_{F}^{3}(w,u)=0.}
{\displaystyle \sum _{w\in \mathbb {F} _{2}^{n}}W_{F}^{4}(w,u)}
{\displaystyle u}
{\displaystyle u\neq 0}
{\displaystyle f:\mathbb {F} _{2^{n}}\rightarrow \mathbb {F} _{2}}
{\displaystyle b\in \mathbb {F} _{2}^{n}}
{\displaystyle \sum _{a\in \mathbb {F} _{2}^{n}}W_{f}^{4}(a)=2^{n}(-1)^{f(b)}\sum _{a\in \mathbb {F} _{2}^{n}}(-1)^{a\cdot b}W_{f}^{3}(a).}
{\displaystyle b\in \mathbb {F} _{2}^{n}}
{\displaystyle u\in \mathbb {F} _{2}^{m}}
{\displaystyle \sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{4}(a,u)=2^{n}(-1)^{u\cdot F(b)}\sum _{a\in \mathbb {F} _{2}^{n}}(-1)^{a\cdot b}W_{F}^{3}(a,u).}
{\displaystyle u\neq 0}
Any Boolean function {\displaystyle f} in {\displaystyle n} variables satisfies
{\displaystyle \left(\sum _{a\in \mathbb {F} _{2}^{n}}W_{f}^{4}(a)\right)^{2}\leq 2^{2n}\left(\sum _{a\in \mathbb {F} _{2}^{n}}W_{f}^{6}(a)\right),}
{\displaystyle \sum _{u\in \mathbb {F} _{2}^{m}}\left(\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{4}(a,u)\right)^{2}\leq 2^{2n}\sum _{u\in \mathbb {F} _{2}^{m}}\left(\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{6}(a,u)\right),}
{\displaystyle \sum _{u\in \mathbb {F} _{2}^{m}}\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{4}(a,u)\leq 2^{n}\sum _{u\in \mathbb {F} _{2}^{m}}{\sqrt {\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{6}(a,u)}},}
{\displaystyle D_{a}F(x)=D_{a}F(0)}
{\displaystyle F(x)+F(x+a)=F(0)+F(a)}
{\displaystyle 0\neq a\in \mathbb {F} _{2}^{n}}
{\displaystyle F(0)=0}
{\displaystyle |\{(x,b)\in \mathbb {F} _{2^{n}}^{2}:F(x)+F(x+b)+F(b)=0\}|=3\cdot 2^{n}-2,}
{\displaystyle \sum _{a\in \mathbb {F} _{2^{n}},u\in \mathbb {F} _{2^{n}}^{*}}W_{F}^{3}(a,u)=2^{2n+1}(2^{n}-1).}
{\displaystyle 3\cdot 2^{3n}-2^{2n+1}\leq \sum _{u\in \mathbb {F} _{2}^{n}}{\sqrt {\sum _{a\in \mathbb {F} _{2}^{n}}W_{F}^{6}(a,u)}},}
{\displaystyle 2^{\lambda _{u}}}
{\displaystyle u\cdot F}
{\displaystyle F}
{\displaystyle \sum _{0\neq u\in \mathbb {F} _{2}^{n}}2^{2\lambda _{u}}\leq 2^{n+1}(2^{n}-1).}
{\displaystyle |\{(a,b)\in (\mathbb {F} _{2}^{n})^{2}:a\neq b,F(a)=F(b)\}|\geq 2\cdot (2^{n}-1),}
|
Exam Questions [Isabelle/HOL Support Wiki]
Core HOL
The idea of this page is to collect questions which might be asked in the exams. Whenever you review the slides or work on exercises, just put anything here which might be asked.
I guess it will be easiest for the moderating group to also put questions here about the topic they present.
Please try to maintain categories of useful size, so each gets its own edit button.
If there are any answers on this page, they are given by students; feel free to discuss the answers and/or add additional ones!
Please give a short overview of the lecture.
Introduction: What is Verification and Specification? What is a Logic? –> Propositional logic
Functional programming and specification (ML & Isabelle/HOL)
HOL: language and semantic aspects (e.g. types, terms, theory, cons. ext., …)
Proofs in HOL
Specific things in HOL: sets, functions, relations, and fixpoints
Specification of PL semantics (operational, denotational & axiomatic semantics)
–> program verification and program logic (Hoare Logic)
argument for building the system right (NOT building the right system –> validation)
defining all possible behaviours of the specified system
What is the difference between model- and property-oriented specifications?
Model-oriented is based on well-defined mathematical objects like sets and functions, which are used to construct a representation of the system state, as well as the operations on these states. Thus, such specifications reason about a transition system.
Property-oriented is purely declarative. It uses some logic language to express the properties of the functions in the system.
What's the most important thing that needs to be proven when defining types in a HOL theory?
T = (\chi, \Sigma, A)
Prove that the newly introduced set
S
of type
r \Rightarrow bool
, which carves out the new type, has at least one element. Formally:
T \vdash \exists x.\; S\; x
Why is it important that the new types defined in HOL are non-empty?
(see FAQ for hints)
How are the elements True and False introduced in HOL?
Since there must always be a distinguished infinite set in U whose elements and subsets are also fully contained in U, there is a guarantee that a set with two elements exists (say the subset {0,1} of the infinite set), and that set is then mapped to the values True and False 1)
Why must all functions be total in Isabelle/HOL?
How does the automatic proof of termination work for definition, primrec, and fun?
What is the general procedure for proving termination manually in function?
Why do we need calculi?
Basically, calculi are the tools we use to derive (and prove) theorems. Given a set of valid formulas and a sound calculus, new valid formulas (tautologies) can be derived. A calculus works purely on syntax, without any notion of semantic "truth"; for a sound and complete calculus, provability corresponds to semantic validity.
Please explain the following properties: Soundness (Correctness), Completeness, …
A deductive system is sound if all provable formulas are valid. It is complete if all valid formulas are provable.
Validity is a semantic property (
A \models B
), while provability a syntactic one (
A \vdash B
). So, in other words we have:
Soundness: A \vdash B \longrightarrow A \models B
Completeness: A \models B \longrightarrow A \vdash B
What are the advantages/disadvantages of the Hilbert calculus compared to the Gentzen calculus?
How can we prove formulas without a calculus?
Calculi are based purely on syntax. We can always rely on semantic proofs, e.g. truth table or tableaux methods.
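The truth-table method mentioned above can be sketched in a few lines (illustrative; `is_tautology` and the encoding of implication are made up here, not from the course material): a propositional formula is valid iff it evaluates to true under every assignment of its variables.

```python
from itertools import product

def is_tautology(formula, n_vars):
    """Semantic proof by truth table: evaluate a propositional formula
    (given as a function of n boolean arguments) under every assignment."""
    return all(formula(*vals) for vals in product((False, True), repeat=n_vars))

# Implication p -> q encoded as (not p) or q:
implies = lambda p, q: (not p) or q

# Peirce's law ((A -> B) -> A) -> A, a classical tautology:
peirce = lambda a, b: implies(implies(implies(a, b), a), a)
```

Unlike a calculus, this check is exhaustive and purely semantic, which is why it only scales to formulas with few variables.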
exam.txt · Last modified: 2011/08/02 15:29 by 188.107.127.174
|
NCERT Solutions for Class 12 Science Chemistry Chapter 6 - General Principles And Processes Of Isolation Of Elements
NCERT Solutions for Class 12 Science Chemistry Chapter 6, General Principles and Processes of Isolation of Elements, are provided here with simple step-by-step explanations. These solutions are extremely popular among Class 12 Science students and come in handy for quickly completing homework and preparing for exams. All questions and answers from the NCERT Book of Class 12 Science Chemistry Chapter 6 are provided here for you for free. You will also love the ad-free experience on Meritnation's NCERT Solutions. All NCERT Solutions for Class 12 Science Chemistry are prepared by experts and are 100% accurate.
is thermodynamically feasible as is apparent from the Gibbs energy value.
Why is the extraction of copper from pyrites more difficult than that from its
oxide ore through reduction?
The Gibbs free energy of formation (ΔfG) of Cu2S is less than that of CS2 and H2S. Therefore, H2 and C cannot reduce Cu2S to Cu.
(i) Zone refining:
(ii) Column chromatography:
At 673 K, the value of ΔG for the oxidation of CO to CO2 is less (more negative) than that for the oxidation of C to CO. Therefore, CO can be oxidised more easily to CO2 than C to CO. Hence, CO is a better reducing agent than C at 673 K.
Name the common elements present in the anode mud in electrolytic refining
of copper. Why are they so present ?
Write down the reactions taking place in different zones in the blast furnace
during the extraction of iron.
3Fe2O3 + CO → 2Fe3O4 + CO2
Fe3O4 + 4CO → 3Fe + 4CO2
Fe2O3 + CO → 2FeO + CO2
(i) Concentration of ore
(ii) Conversion to oxide (Roasting)
(iii) Extraction of zinc from zinc oxide (Reduction)
(iv) Electrolytic Refining
What criterion is followed for the selection of the stationary phase in
How can you separate alumina from silica in bauxite ore associated with
silica? Give equations, if any.
1. To decrease the melting point of the mixture from 2323 K to 1140 K.
2. To increase the electrical conductivity of Al2O3.
The standard Gibbs free energy of formation of ZnO from Zn
is lower than that of CO2 from CO. Therefore, CO cannot reduce ZnO to Zn. Hence, Zn is not extracted from ZnO through reduction using CO.
The value of ΔfG for the formation of Cr2O3 is −540 kJ mol−1 and that of Al2O3 is −827 kJ mol−1. Is the reduction of Cr2O3 possible with Al?
The value of ΔfG for the formation of Cr2O3 from Cr (−540 kJ mol−1) is higher than that of Al2O3 from Al (−827 kJ mol−1). Therefore, Al can reduce Cr2O3 to Cr. Hence, the reduction of Cr2O3 with Al is possible.
The choice of a reducing agent in a particular case depends on
thermodynamic factor. How far do you agree with this statement? Support
your opinion with two examples.
Name the processes from which chlorine is obtained as a by-product. What
will happen if an aqueous solution of NaCl is subjected to electrolysis?
In the electrometallurgy of aluminium, a fused mixture of purified alumina (Al2O3), cryolite (Na3AlF6) and fluorspar (CaF2) is electrolysed. In this electrolysis, graphite is used as the anode and graphite-lined iron is used as the cathode. During the electrolysis, Al is liberated at the cathode, while CO and CO2 are liberated at the anode, according to the following equation.
(ii) Electrolytic refining;
|
Longman Panorama Geography Solutions for Class 7 Social Science Chapter 13 - Life in Temperate Regions
Longman Panorama Geography Solutions for Class 7 Social Science Chapter 13, Life in Temperate Regions, are provided here with simple step-by-step explanations. These solutions are extremely popular among Class 7 students and come in handy for quickly completing homework and preparing for exams. All questions and answers from the Longman Panorama Geography Book of Class 7 Social Science Chapter 13 are provided here for you for free. You will also love the ad-free experience on Meritnation's solutions. All solutions for Class 7 Social Science are prepared by experts and are 100% accurate.
What is the location of the Prairies?
The Prairies are the grasslands that are located in the temperate region of North America spreading over Canada and the USA. They lie between the Rocky Mountains in the west and the Great Lakes and the Appalachian Highlands in the east. Canadian Prairies are located in the states of Saskatchewan, Manitoba, Alberta and Ontario, whereas the Prairies in the USA are located in states of Minnesota, North Dakota, South Dakota, Iowa and Wisconsin.
Name the rivers that drain the Prairies.
The major rivers that drain the Prairies are as follows:
(a) River Mississippi (USA)
(b) River Missouri (USA)
(c) River Ohio (USA)
(d) River Dakota (USA)
(e) River Saskatchewan (Canada)
Which factors favour agriculture in the Prairies?
The following factors favour agriculture in the Prairies:
Dark brown fertile soil, which is rich in organic matter, supports agriculture.
Rolling terrain and gentle slopes provide proper drainage and are suitable for mechanised agriculture.
Rains during spring and summer are good for crops. Melting snow also provides soil with moisture.
A combine harvester is a machine used in agriculture for commercial purposes. It performs the following functions:
It reaps the crops.
It threshes the grains.
It packs the grains in a sack.
Describe the wildlife of the Veld.
The wildlife in the Veld can be described by the following characteristic features:
Prominent animals that are found in the Veld are lions, leopards, cheetahs, giraffes, springboks, antelopes and oryxes.
Birds found in this region include ostriches, quails and partridges.
Hunting and poaching have greatly reduced the diversity of wildlife, and most species are facing the threat of extinction.
Which minerals are found in the Veld region?
The following minerals are found in the Veld region:
(e) Uranium
Climate of the Prairies and of the Veld
The following distinctions can be made about the climate in the Prairies and the Veld:
Prairies: extreme climate due to their location in the interior of the continent; both diurnal and annual ranges of temperature are high. Veld: moderate climate owing to higher altitude and the proximity of the ocean on three sides.
Prairies: rainfall occurs mostly during spring and summer. Veld: summers are short and rainfall is low, with frequent droughts.
Prairies: snowfall occurs during winters; the local chinook wind from the Rockies tends to raise temperatures. Veld: winters are long, cool and dry, with occasional frost.
Animal rearing in the Prairies and in the Veld
Animal rearing in the Prairies and the Veld can be distinguished in the following ways:
Prairies: dairy cattle are raised for milk products, and their farms are located near towns. Veld: animal rearing includes both cattle and sheep rearing.
Prairies: beef cattle are reared on isolated farms called ranches, situated near major highways and railway lines. Veld: cattle rearing takes place in the warmer and wetter eastern part, while sheep rearing takes place in the cooler western part.
Prairies: ranches are managed like factories, with departments and cowboys working in them. Veld: both beef and sheep wool are exported, along with leather.
Why are winds blowing in the Prairies very strong and gusty?
The following are the reasons for strong and gusty winds in the Prairies:
(a) The Prairies face the leeward side of the Rocky Mountains.
(b) The descending air from the Mountains is dry and light.
(c) The winds do not face any obstacles in the open grasslands and hence, the winds get violent and strong.
The local winds of this nature are known as chinook.
Why are most of the settlements in the Prairies located along roads and railway lines?
Most of the settlements in the Prairies are located along roads and railway lines due to the following reasons:
(a) The levelled land facilitates the construction of roads and railway lines.
(b) In the late nineteenth century, the construction of transcontinental railways like the Canadian Pacific Railway and the Canadian National Railway attracted millions of people who settled near railway lines and roads in search of jobs.
(c) Economy is dependent on transport and communication, thereby, inducing people to settle near transport routes.
South Africa is a major producer and exporter of wool in the world.
South Africa is a major producer and exporter of wool in the world due to the following major reasons:
(a) The cooler and drier western part of the Veld grassland in South Africa is suitable for sheep rearing.
(b) The Merino sheep of the Veld produces the finest quality of wool. Mohair is obtained from Angora goats.
(c) Sheep rearing is an important occupation of people in these areas. Hence, the production of wool is high and this makes South Africa a major exporter of wool in the world.
Agriculture in the Veld is not as important as in the Prairies.
Agriculture in the Veld is not as important as the Prairies due to the following factors in Velds:
(a) Prevalence of low rainfall and consequent drought conditions.
(b) Soil is not highly fertile, which further restricts agriculture.
(c) Availability of minerals reduces the dependency on agriculture.
(d) Animal rearing and export of animal products like meat and wool are major occupations of people in Velds.
Describe a cattle ranch in the Prairies.
Cattle ranches in the Prairies can be described in the following manner:
(b) They function commercially as large factories.
(c) Ranches include all aspects of cattle rearing including fodder supply; thus, they are self-sufficient. Various activities that are carried out in the ranches include fodder cultivation, operating machines, providing water supply and medical facilities, beef production, transportation, etc.
(d) The activities that are performed by cowboys who live in the ranches include herding the cattle on horseback, driving them for pastures, etc. Animals are well recognised by their branding.
(e) The fattened cattle are taken to meat-packing centres. The largest of these centres is located in Chicago. Ranches are located near highways and railway lines for easier transportation.
Give an account of wheat farming in the Prairies.
The following are features of wheat farming in the Prairies:
(a) Light rains in spring and summer are suitable for wheat cultivation. The rolling topography and organic soil also helps in wheat farming.
(b) The melting snow provides moisture to the soil.
(c) Wheat is cultivated during spring in Canada and during winters in the USA.
(d) Most of the cropping is done for commercial purpose. It is aided by large farmlands and open spaces provided by grasslands. It is also suitable for mechanisation of agriculture and application of scientific methods of farming.
(e) Thus the Prairies produce a huge quantity of wheat which is exported to Europe and Asia.
The region is known as the 'granary of the world' owing to the huge production of wheat.
Describe the physical environment of the Veld.
The physical environment of the Veld can be described by its following features:
(a) It is the temperate grassland of South Africa and is located on its Eastern plateau.
(b) The plateau on which the Veld is located descends in a series of steps.
(c) The height of the Highveld is 1,200–1,800 metres, with a ridge called the Witwatersrand running through it.
(d) The height of the Middleveld is 600–1,200 metres, while the Lowveld is located at a height of 150–600 metres.
(e) The major rivers that drain the Veld are Limpopo, Sabi and Orange.
(f) The climate is moderate due to the presence of the sea on three sides. Grasses and bushes comprise the natural vegetation of the Veld.
The temperate grasslands in South America are called
The correct answer is option (d) Pampas.
Explanation: Grasslands are known by different local names. In South America, they are called pampas and are mainly found in Argentina.
A local warm wind in the Prairies is
The correct answer is option (b) Chinook.
Explanation: The local warm winds in the Prairies are known as Chinook. They blow from the Eastern slopes of Rocky Mountains and raise the temperature during winters.
An animal which has been recklessly hunted in the Prairies is
The correct answer is option (b) bison.
Explanation: The American bison was an important animal of the Prairies at one point of time but its reckless hunting has widely reduced its number.
The world's biggest meat-packing centre is
The correct answer is option (a) Chicago.
Explanation: Chicago in USA is the biggest meat-packing centre of the world. Most of the meat is sourced from animals reared on ranches in the prairies.
The sheep which yields the finest quality wool is
d. Blackface
The correct answer is option (c) Merino.
Explanation: The wool obtained from Merino sheep, found in the Veld, is of the finest quality. South Africa has maintained top position in wool production and its export.
Which of the following cities in South Africa is famous for its diamond industry?
The correct answer is option (d) Kimberley.
Explanation: Kimberley, in South Africa, is famous for its diamond mines.
1. On an outline map of North America mark and label: Rocky Mountains, Great Lakes, Appalachian Highlands, Prairies, River Mississippi, Chicago and Winnipeg.
2. On an outline map of Africa mark and label: Drakensberg Mountains, Kalahari Desert, Veld, River Orange, River Limpopo, Johannesburg, Pretoria, Kimberley.
1) Map of North America
2) Map of Africa
|
SWR meter - Wikipedia
Measurement device for radio equipment
An SWR meter for CB radio equipment
The standing wave ratio meter, SWR meter, ISWR meter (current "I" SWR), or VSWR meter (voltage SWR) measures the standing wave ratio (SWR) in a transmission line.[a] The meter indirectly measures the degree of mismatch between a transmission line and its load (usually an antenna). Electronics technicians use it to adjust radio transmitters and their antennas and feedlines to be impedance matched so they work together properly, and evaluate the effectiveness of other impedance matching efforts.
1 Directional SWR meter
2 Radio operators' SWR meters
3 SWR bridge
Directional SWR meter[edit]
A directional SWR meter measures the magnitude of the forward and reflected waves by sensing each one individually, with directional couplers. A calculation then produces the SWR.
A simple directional SWR meter
Referring to the above diagram, the transmitter (TX) and antenna (ANT) terminals connect via an internal transmission line. This main line is electromagnetically coupled to two smaller sense lines (directional couplers). These are terminated with resistors at one end and diode rectifiers at the other. Some meters use a printed circuit board with three parallel traces to make the transmission line and two sensing lines. The resistors match the characteristic impedance of the sense lines. The diodes convert the magnitudes of the forward and reverse waves to the terminals FWD and REV, respectively, as DC voltages, which are smoothed by the capacitors.[1]: 27‑21 The meter or amplifier (not shown) connected to the FWD and REV terminals acts as the required drain resistor, and determines the dwell-time of the meter reading.
Interior view of an SWR meter. The three parallel coupled lines are visible. Diodes, capacitors and termination resistors are mounted at the ends of the sense lines.
To calculate the SWR, first calculate the reflection coefficient:
{\displaystyle \Gamma ={\frac {V_{rev}}{\;V_{fwd}\;}}}
(the voltages should include a relative phase factor).
Then calculate the SWR:
{\displaystyle {\mathsf {SWR}}={\Biggl \|}\,{\frac {1+\Gamma }{\;1-\Gamma \;}}\,{\Biggr \|}~.}
In a passive meter, this is usually indicated on a non-linear scale.
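As a concrete sketch (the sample readings are invented; a real meter works from the detected voltage magnitudes at FWD and REV), the two-step calculation above can be written in Python:

```python
def swr_from_voltages(v_fwd: float, v_rev: float) -> float:
    """SWR from the detected forward and reverse voltage magnitudes."""
    gamma = v_rev / v_fwd            # reflection coefficient magnitude
    return (1 + gamma) / (1 - gamma)

# Example: 10 V forward, 2 V reflected -> gamma = 0.2, SWR = 1.5
print(round(swr_from_voltages(10.0, 2.0), 6))  # 1.5
```

A reflection coefficient of zero (no reflected voltage) gives the ideal SWR of 1:1; as the coefficient approaches 1, the SWR grows without bound, which is why passive meters use a non-linear scale.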
Radio operators' SWR meters[edit]
For decades [2][3] radio operators have built and used SWR meters as a simple tuning and diagnostic tool. With shielding compromised, a pair of coax or twinline transmission lines, placed close enough, suffer crosstalk. A wave moving in the driven line induces waves in the measurement line. Placed in parallel (straight or loosely coiled) a driven wave reinforces or cancels an induced wave in the same or opposite direction. If the cable pair exceeds half wavelength, cancellation is complete, and power dissipated in matched termination is approximately proportional to the forward and reflected power.
CircuitMod-2.7 simulation of SWR meter with mismatched load
Resistors represent meter movements.
Reflected graph is voltage on left resistor. Forward graph is voltage on right resistor.
The approximation improves as crosstalk weakens and harmonic number increases. Over time, nonlinear high gain amplifiers have replaced nonlinear electro-mechanical movements – which replaced incandescent bulbs – to require less cross-talk and improve linear frequency range.
Because all frequencies above the minimum contribute, the measured ratio is a single figure of merit that increases with unintended harmonics and spurious emissions, as well as with actual SWR. By analogy, the measurement cable is a crystal radio (a non-discriminating receiver) representing all the radio receivers that might suffer interference from dirty emissions. Though the instrument is called an SWR meter, a low measured ratio indicates not only a good match, but also clean A3, F3, or G3 emission without excessive harmonic or spurious (out-of-channel) power.
SWR bridge[edit]
SWR can also be measured using an impedance bridge. The bridge is balanced (0 Volts across the detector) only when the test impedance exactly matches the reference impedance. When a transmission line is mismatched (SWR > 1:1), its input impedance deviates from its characteristic impedance; thus, a bridge can be used to determine the presence or absence of a low SWR.
To test for a match, the reference impedance of the bridge is set to the expected load impedance (for example, 50 Ohms), and the transmission line connected as the unknown impedance. RF power is applied to the circuit. The voltage at the line input represents the vector sum of the forward wave, and the wave reflected from the load. If we know the characteristic impedance of the line is 50 Ohms, we know the magnitude and phase of the forward wave. It's the same wave present on the other side of the detector. Subtracting this known wave from the wave at the line input yields the reflected wave. Properly designed, a bridge circuit can not only indicate a match, but the degree of mismatch – making it possible to calculate the SWR. This usually involves alternately connecting the reference wave and the reflected wave to a power meter, and comparing the magnitudes of the resulting deflections.[1]: 27‑03
An SWR meter does not measure the actual impedance of a load (the resistance and reactance), but only the mismatch ratio. To measure the actual impedance requires an antenna analyzer or other similar RF measuring device. For accurate readings, the SWR meter itself must also match the line's impedance (typically 50 or 75 Ohms). To accommodate multiple impedances, some SWR meters have switches that select the impedance appropriate for the sense lines.
An SWR meter should connect to the line as close as possible to the antenna: All practical transmission lines have a certain amount of loss, which attenuates the reflected wave as it travels back along the line. Thus, the SWR is highest closest to the load, and only improves as the distance from the load increases, creating the false impression of a matched system.[1]: 28‑07
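The masking effect described above can be quantified: the reflected wave is attenuated by the line's matched loss twice (on the way to the load and back), so the reflection coefficient seen at the transmitter end is smaller than at the load. A rough Python sketch (assuming the loss figure is the line's one-way matched loss in dB, and ignoring impedance transformation along the line):

```python
def input_swr(load_swr: float, line_loss_db: float) -> float:
    """Apparent SWR at the line input, given the SWR at the load
    and the one-way matched line loss in dB."""
    gamma_load = (load_swr - 1) / (load_swr + 1)
    # reflected wave passes through the loss twice (voltage ratio)
    gamma_in = gamma_load * 10 ** (-2 * line_loss_db / 20)
    return (1 + gamma_in) / (1 - gamma_in)

# An antenna with SWR 3:1 behind 1 dB of line loss reads only about 2.3:1
print(round(input_swr(3.0, 1.0), 2))  # 2.32
```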
^ The terms ISWR (current standing wave ratio) and VSWR (voltage standing wave ratio) are sometimes used to emphasize the method by which the measurement is made; however, in the absence of measurement errors, the two numbers are identical. The circumspect term SWR is preferred to avoid false precision.[1]
^ a b c d The ARRL Antenna Book (21st ed.). The American Radio Relay League, Inc. 2007. ISBN 0-87259-987-6.
^ Grebenkemper, John, KI6WX (1997). "The Tandem match – An accurate directional wattmeter". Handbook for Amateur Radio (PDF). The American Radio Relay League, Inc. Chapter 22: Station setup and accessory projects, pages 22.36–22.42. [full citation needed]
^ Kaune, Bill, W7IEQ (2012). "A modern directional power/SWR meter". Handbook for Amateur Radio (PDF). The American Radio Relay League, Inc.
Wikimedia Commons has media related to SWR meters.
|
Flood Fill · USACO Guide
Author: Darren Yao
Contributor: Kevin Sheng
Finding connected components in a graph represented by a grid.
10.5 - Flood Fill
4.2.4 - Flood Fill
Flood fill is an algorithm that identifies and labels the connected component that a particular cell belongs to in a multidimensional array.
For example, suppose that we want to split the following grid into components of connected cells with the same number.
Let's start the flood fill from the top-left cell. The color scheme will be red for the node currently being processed, blue for nodes already visited, and uncolored for nodes not yet visited.
As opposed to an explicit graph where the edges are given, a grid is an implicit graph. This means that the neighbors are just the nodes directly adjacent in the four cardinal directions.
Usually, grids given in problems will be N × M, so the first line of the input contains the numbers N and M. In this example, we will use a two-dimensional integer array to store the grid, but depending on the problem, a two-dimensional character array or a two-dimensional boolean array may be more appropriate. Then, there are N rows, each with M numbers containing the contents of each square in the grid. Example input might look like the following (varies between problems):
And we’ll want to input the grid as follows:
C++:
int grid[MAX_N][MAX_N];
int row_num, col_num;

Java:
private static int rowNum;
private static int colNum;
StringTokenizer dims = new StringTokenizer(read.readLine());

Python:
row_num, col_num = [int(i) for i in input().split()]
grid = []
for _ in range(row_num):
    grid.append([int(i) for i in input().split()])
When doing flood fill, we will maintain an N × M array of booleans to keep track of which squares have been visited, and a global variable to maintain the size of the current component we are visiting. Make sure to store the grid, the visited array, dimensions, and the current size variable globally.
This means that we want to recursively call the search function for the squares above, below, and to the left and right of our current square. Due to its recursive nature, floodfill can be thought of as a modified version of DFS. The algorithm to find the size of a connected component in a grid using flood fill is as follows (we’ll also maintain a 2D visited array).
The code below shows the global/static variables we need to maintain while doing flood fill and the flood fill algorithm itself:
C++:
int grid[MAX_N][MAX_N]; // the grid itself
bool visited[MAX_N][MAX_N]; // keeps track of which nodes have been visited
int row_num, col_num; // grid dimensions, rows and columns
int curr_size = 0; // reset to 0 each time we start a new component
void floodfill(int r, int c, int color) { /* ... */ }

Java:
private static int[][] grid; // the grid itself
private static int rowNum, colNum; // grid dimensions, rows and columns
private static boolean[][] visited; // keeps track of which nodes have been visited
private static int currSize = 0; // reset to 0 each time we start a new component
// input code and other problem-specific stuff here

Python:
import sys
sys.setrecursionlimit(2 ** 30)  # pretty much disable the recursion limit
row_num = MAX_N
col_num = MAX_N
grid = [[0 for _ in range(col_num)] for _ in range(row_num)]
visited = [[False for _ in range(col_num)] for _ in range(row_num)]
curr_size = 0
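Assembling the fragments above into one runnable sketch (the names and the demo grid are illustrative, not from any particular problem), a recursive flood fill in Python might look like:

```python
import sys

sys.setrecursionlimit(10 ** 6)  # recursion depth can reach the number of cells

def component_size(grid, start_r, start_c):
    """Size of the connected component of equal values containing (start_r, start_c)."""
    rows, cols = len(grid), len(grid[0])
    visited = [[False] * cols for _ in range(rows)]
    color = grid[start_r][start_c]

    def floodfill(r, c):
        if r < 0 or r >= rows or c < 0 or c >= cols:  # outside the grid
            return 0
        if visited[r][c] or grid[r][c] != color:      # seen already, or wrong color
            return 0
        visited[r][c] = True
        # current cell plus the four cardinal neighbors
        return (1 + floodfill(r + 1, c) + floodfill(r - 1, c)
                  + floodfill(r, c + 1) + floodfill(r, c - 1))

    return floodfill(start_r, start_c)

demo = [
    [1, 1, 2],
    [2, 1, 2],
    [2, 2, 2],
]
print(component_size(demo, 0, 0))  # component of 1s -> 3
print(component_size(demo, 2, 0))  # component of 2s -> 6
```

Each cell is visited at most once, so the running time is O(NM).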
Example - Counting Rooms
Recursive implementations of flood fill sometimes lead to:
Stack overflow if you didn't include the appropriate compiler options for adjusting the stack size
Memory limit exceeded if you run the recursive implementation on a really big grid (i.e., running the above code on a 4000 × 4000 grid may exceed 256 MB)
Non-recursive implementations generally use less memory than recursive ones.
A non-recursive implementation of flood fill adds adjacent nodes to a stack or queue, similar to BFS, and is usually implemented as follows:
C++:
const int R_CHANGE[]{0, 1, 0, -1};
const int C_CHANGE[]{1, 0, -1, 0};

Java:
public class RoomCount {
    public static final int R_CHANGE[] = {0, 1, 0, -1};
    public static final int C_CHANGE[] = {1, 0, -1, 0};
    private static boolean[][] visited;
    private static char[][] building;

Python:
grid = [input() for _ in range(row_num)]
def floodfill(r: int, c: int):
Note: you can also use a queue and pop from the front, which makes the traversal BFS instead of DFS.
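A complete sketch of the Counting Rooms solution in Python, using the iterative (BFS) approach; it assumes the usual convention of '.' for floor and '#' for wall:

```python
from collections import deque

def count_rooms(building):
    """Count connected components of floor cells ('.') in a character grid."""
    rows, cols = len(building), len(building[0])
    visited = [[False] * cols for _ in range(rows)]
    rooms = 0
    for sr in range(rows):
        for sc in range(cols):
            if building[sr][sc] != '.' or visited[sr][sc]:
                continue
            rooms += 1                      # found a new, unvisited room
            visited[sr][sc] = True
            queue = deque([(sr, sc)])
            while queue:                    # BFS over this room's cells
                r, c = queue.popleft()
                for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and building[nr][nc] == '.' and not visited[nr][nc]):
                        visited[nr][nc] = True
                        queue.append((nr, nc))
    return rooms

print(count_rooms(["..#.",
                   "###.",
                   ".#.."]))  # 3 separate rooms
```

Because the queue replaces the call stack, this version avoids the recursion-depth problems mentioned above on large grids.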
Icy Perimeter
Easy Show Tags Flood Fill
Easy Show Tags Binary Search, Flood Fill
Normal Show Tags Flood Fill
Where's Bessie?
Why Did the Cow Cross the Road III
Mooyo Mooyo
Hard Show Tags Flood Fill
Maze Tac Toe
Multiplayer Moo
|
tensor(deprecated)/tensorsGR - Maple Help
Home : Support : Online Help : tensor(deprecated)/tensorsGR
compute General Relativity curvature tensors in a coordinate basis
tensorsGR(coord, cov_metric, 'contra_metric', 'det_met', 'C1', 'C2', 'Rm', 'Rc', 'R', 'G', 'C', print_flag)
list of coordinate variable names, for example, [t,x,y,z]
rank-2 symmetric tensor_type of the covariant metric
(optional) print directive to print results after computation
contra_metric
rank-2 symmetric tensor_type of contravariant metric
determinant of the covariant metric component matrix
covariant Riemann tensor
covariant Ricci tensor
covariant Einstein tensor
covariant Weyl tensor
The function tensorsGR(coord, cov_metric, 'contra_metric', 'det_met', 'C1', 'C2', 'Rm', 'Rc', 'R', 'G', 'C') calculates the following objects given the coordinates, coord, and covariant metric tensor, cov_metric:
contravariant metric tensor, returned through contra_metric
determinant of the metric tensor components, returned through det_met
Christoffel symbols of the first kind, returned through C1
Christoffel symbols of the second kind, returned through C2
covariant Riemann tensor, returned through Rm
covariant Ricci tensor, returned through Rc
Ricci scalar, returned through R
covariant Einstein tensor, returned through G
covariant Weyl tensor, returned through C
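For reference, the quantities in the list above are related to the metric by the standard coordinate-basis formulas (the sign conventions shown are a common choice and may differ from Maple's internal ones by an overall sign in the curvature tensors):

```latex
\Gamma_{cab} = \tfrac{1}{2}\left(\partial_a g_{cb} + \partial_b g_{ca} - \partial_c g_{ab}\right),
\qquad
\Gamma^{c}{}_{ab} = g^{cd}\,\Gamma_{dab},
\\
R^{a}{}_{bcd} = \partial_c \Gamma^{a}{}_{db} - \partial_d \Gamma^{a}{}_{cb}
  + \Gamma^{a}{}_{ce}\,\Gamma^{e}{}_{db} - \Gamma^{a}{}_{de}\,\Gamma^{e}{}_{cb},
\\
R_{ab} = R^{c}{}_{acb}, \qquad R = g^{ab} R_{ab}, \qquad
G_{ab} = R_{ab} - \tfrac{1}{2}\,R\,g_{ab}.
```

The Weyl tensor is the trace-free part of the Riemann tensor; for a vacuum solution the Ricci tensor vanishes, so all remaining curvature is carried by the Weyl tensor.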
The calculated quantities are returned via the third through eleventh parameters. Since these are output parameters, they must be passed as unassigned names. The return value is NULL.
The last parameter, print_flag, is an optional directive to display the calculated results (using tensor[display_allGR]) after they have been calculated. If used, it must be passed with the value print. Any other value results in an error.
The calculations are made simply by making the appropriate calls to the following procedures: tensor[invert], tensor[partial_diff], tensor[Christoffel1], tensor[Christoffel2], tensor[Riemann], tensor[Ricci], tensor[Ricciscalar], tensor[Einstein], and tensor[Weyl].
Note that this procedure is not strictly necessary. However, it provides a convenient way to calculate all of the important general relativity curvature quantities in the natural basis. The print_flag option provides the further convenience of displaying the results automatically once they are computed.
Simplification: Since this routine computes all of the quantities by calling the appropriate routines from the package, simplification is done according to the simplification methods of each individual routine.
This function is part of the tensor package, and so can be used in the form tensorsGR(..) only after performing the command with(tensor), or with(tensor, tensorsGR). This function can always be accessed in the long form tensor[tensorsGR](..).
with(tensor):
coords := [t, r, th, ph]:
g := array(symmetric, sparse, 1..4, 1..4):
g[1,1] := 1 - 2*m/r:
g[2,2] := -1/g[1,1]:
g[3,3] := -r^2:
g[4,4] := -r^2*sin(th)^2:
metric := create([-1,-1], eval(g))

metric := table([compts = [[1-2*m/r, 0, 0, 0], [0, -1/(1-2*m/r), 0, 0], [0, 0, -r^2, 0], [0, 0, 0, -r^2*sin(th)^2]], index_char = [-1, -1]])

tensorsGR(coords, metric, 'contra_metric', 'det_met', 'C1', 'C2', 'Rm', 'Rc', 'R', 'G', 'C')
Show it is a vacuum solution of the Einstein field equations
displayGR(Einstein, G)

The Einstein Tensor
non-zero components:
None
character: [-1, -1]
Show that it is not flat.
displayGR(Weyl, C)

The Weyl Tensor
non-zero components:
C1212 = 2*m/r^3
C1313 = (-r+2*m)*m/r^2
C1414 = (-r+2*m)*m*sin(th)^2/r^2
C2323 = -m/(-r+2*m)
C2424 = -m*sin(th)^2/(-r+2*m)
C3434 = -2*r*m*sin(th)^2
character: [-1, -1, -1, -1]
tensor(deprecated)/display_allGR
tensor(deprecated)[Einstein]
tensor(deprecated)[Weyl]
|
Power transmission element with frictional belt wrapped around pulley circumference - MATLAB - MathWorks Nordic
V-belt sheave angle
Belt mass per unit length
Belt maximum tension
Pulley translation
Pulley inertia
Pulley initial rotational velocity
Pulley mass
Pulley initial translational velocity
Force threshold
Power transmission element with frictional belt wrapped around pulley circumference
The Belt Pulley block represents a pulley wrapped in a flexible ideal, flat, or V-shaped belt. When you set Belt type to Ideal - No slip, the belt does not slip relative to the pulley surfaces. You can optionally enable pulley linear translation.
The block accounts for friction between the flexible belt and the pulley surface. The belt slips when the load exceeds the friction force. The block accounts for centrifugal loading in the flexible belt, pulley inertia, and bearing friction.
The belt ends can either move in the same direction or in the opposite direction.
The block equations relate power transmission between the belt branches or to or from the pulley. The driving and driven branches use the same calculation. Without sufficient tension, the frictional force is not enough to transmit power between the pulley and belt.
Your model is valid when both ends of the belt are in tension. You can choose to display a warning in the Simulink® Diagnostic Viewer when the leading belt end loses tension. When assembling a model, ensure that tension is maintained throughout the simulation by adding mass to at least one of the belt ends or by adding a tensioner like the Rope block. Use the Variable Viewer to ensure that any springs attached to the belt are in tension. For more details on building a tensioner, see Best Practices for Modeling Pulley Networks.
You can set Friction model to Modal to use a modal parameterization for the pulley. Choose the modal parameterization for greater numerical robustness. To enable this parameter, set Pulley translation to Off.
The Belt Pulley block uses a composite implementation of the Fundamental Friction Clutch block to produce the conditions for the modal parameterization.
For either setting of the Friction model parameter, the block equations refer to these quantities:
β is the belt direction sign. When you set Belt direction to Ends move in same direction, β = 1. Otherwise, β = -1.
Vrel is the relative velocity between the belt and pulley periphery. Vrel = 0 when Belt type is Ideal - No slip.
VA is the linear velocity of branch A.
VB is the linear velocity of branch B.
VC is the linear velocity of the pulley at its center. If Pulley translation is Off, the block constrains this to 0.
ωS is the angular velocity of the pulley contact surface.
R is the radius of the pulley.
Fcentrifugal is centrifugal force of the belt.
FC is the force acting through the pulley centroid. When you specify a value for Inertia, FC includes force due to the pulley mass acceleration.
ρ is the belt linear density.
Ffr is the friction force between the pulley and the belt.
FA is the force acting along branch A.
FB is the force acting along branch B.
θ is the contact wrap angle.
τS is the pulley torque.
The kinematic constraints between the pulley and belt are:
-\beta {V}_{A}={V}_{C}-R{\omega }_{S}-\beta {V}_{rel}
{V}_{B}={V}_{C}+R{\omega }_{S}+\beta {V}_{rel}
When you set Belt type to either V-belt or Flat belt and set Centrifugal force to Model centrifugal force, the centrifugal force is:
{F}_{centrifugal}=\rho {\left({V}_{B}-{V}_{C}\right)}^{2}.
When you set Pulley translation to On, the force balancing equation is:
{F}_{C}=\left(\beta {F}_{A}-{F}_{B}-{F}_{centrifugal,smooth,sat}\right)\cdot \mathrm{sin}\left(\frac{\theta }{2}\right).
To calculate Fcentrifugal,smooth,sat, the block
Smooths Fcentrifugal using Fthr. You can increase smoothing by raising Fthr, and you can decrease smoothing by lowering Fthr.
Saturates values above βFA-FB.
The sign convention is such that, when Belt direction is Ends move in opposite direction, a positive rotation in port S results in a negative translation for port A and a positive translation for port B.
To enable the Friction model parameter, set Belt type to Flat belt or V-belt and Pulley translation to Off.
When you set Friction model to Continuous, the block equations refer to these quantities:
μ is the Contact friction coefficient parameter.
μsmoothed is the instantaneous value of the friction coefficient.
Vthr is the Velocity threshold parameter.
b is the viscous damping of the bearing.
Fthr is the Force threshold parameter.
The instantaneous friction coefficient is a function of the relative velocity such that
{\mu }_{smoothed}={\mu }_{sheave}\mathrm{tanh}\left(4\frac{{V}_{rel}}{{V}_{thr}}\right),
where the hyperbolic tangent function maintains numerical robustness by ensuring a smooth and continuous output for Vrel zero-crossings.
For a V-belt, the block derives the contact friction value using the sheave angle:
{\mu }_{sheave}=\frac{\mu }{\mathrm{sin}\left(\frac{\varphi }{2}\right)},
μsheave is the effective friction coefficient.
Φ is the sheave angle.
For a flat belt, μsheave = μ.
The friction velocity threshold controls the width of the region within which the friction coefficient changes its value from zero to a steady-state maximum. The friction velocity threshold specifies the velocity at which the hyperbolic tangent equals 0.999. The smaller the value, the steeper the change of μ.
The block determines the effect of friction on the force at the belt ends as:
-\beta {F}_{A}-{F}_{centrifugal}=\left({F}_{B}-{F}_{centrifugal}\right){e}^{-{\mu }_{smoothed}\theta },
which follows the form of the capstan equation, also known as the Euler-Eytelwein equation. The torque acting on the pulley is
{\tau }_{S}=\left(-\beta {F}_{A}-{F}_{B}\right)R\sigma +{\omega }_{S}b,
where σ = 1 when you set Belt type to Ideal - No slip. Otherwise,
\sigma =\mathrm{tanh}\left(4\frac{{V}_{rel}}{{V}_{thr}}\right)\mathrm{tanh}\left(\frac{{F}_{B}}{{F}_{thr}}\right).
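To make these relations concrete, this minimal sketch (arbitrary numbers, in Python rather than Simulink) evaluates the smoothed friction coefficient and the capstan tension ratio implied by the equations above:

```python
import math

def mu_smoothed(mu_sheave: float, v_rel: float, v_thr: float) -> float:
    """Friction coefficient smoothed across V_rel zero-crossings."""
    return mu_sheave * math.tanh(4 * v_rel / v_thr)

def capstan_ratio(mu: float, theta: float) -> float:
    """Tension ratio across the wrap (Euler-Eytelwein / capstan equation)."""
    return math.exp(mu * theta)

# At V_rel = V_thr the tanh factor is ~0.999, as stated in the text
print(round(math.tanh(4), 4))                 # 0.9993
# Flat belt, mu = 0.3, wrapped half a turn (theta = pi):
print(round(capstan_ratio(0.3, math.pi), 3))  # 2.566
```

The exponential dependence on the wrap angle is why even a modest friction coefficient can transmit a large tension ratio over a full wrap.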
Modal Friction
For a V-belt, the block derives the static and kinetic friction values by using the sheave angle:
\begin{array}{l}{\mu }_{Static,sheave}=\frac{{\mu }_{Static}}{\mathrm{sin}\left(\frac{\varphi }{2}\right)}\\ {\mu }_{Kinetic,sheave}=\frac{{\mu }_{Kinetic}}{\mathrm{sin}\left(\frac{\varphi }{2}\right)}\end{array}
μStatic is the Static friction coefficient parameter.
μStatic,sheave is the effective static friction coefficient.
μKinetic is the Kinetic friction coefficient parameter.
μKinetic,sheave is the effective kinetic friction coefficient.
For a flat belt, μStatic,sheave = μStatic, and μKinetic,sheave = μKinetic.
When you set Friction model to Modal, the block calculates the maximum static friction force before slipping as
{F}_{Friction,Static,Max}=\left({F}_{Drive}-{F}_{Centrifugal}\right)\cdot \left(1-{e}^{-{\mu }_{Static,sheave}\theta }\right),
where FFriction,Static,Max is the maximum force magnitude due to static friction, and FDrive is the force due to tension felt at the end of the pulley with the greatest force magnitude. As with Fcentrifugal, the block uses Fthr to smooth FDrive. To increase or decrease the smoothing on FDrive, raise or lower the value of the Force threshold parameter.
The block smoothly saturates FFriction,Static,Max to be greater than or equal to 0 as
{F}_{Friction,Static,Max,Smooth}=0.5\left({F}_{Friction,Static,Max}+\sqrt{{F}_{Friction,Static,Max}^{2}+{\left(R{F}_{thr}\right)}^{2}}\right).
During slip,
{F}_{Friction,Slip}=\frac{{\mu }_{Kinetic,sheave}}{{\mu }_{Static,sheave}}{F}_{Friction,Static,Max},
where μKinetic,sheave is the kinetic friction coefficient. The block resolves the torque balance using
{\tau }_{S}={\omega }_{S}b+R{F}_{Friction},
R\beta {F}_{A}+R{F}_{B}=-R{F}_{Friction},
where Simscape™ logs RFFriction as fundamental_clutch.torque.
The block assumes noncompliance along the length of the belt.
The block assumes both belt ends maintain adequate tension throughout the simulation.
The block treats the translation of the pulley center as planar where the pulley travels along the bisect of the pulley wrap angle. The center velocity VC and force FC only account for the component along this line of motion.
The Eytelwein equation for belt friction neglects the effect of pulley translation on friction.
S — Pulley shaft angular velocity
Mechanical rotational conserving port associated with the angular velocity of the pulley shaft.
A — Belt end A linear velocity
Mechanical translational conserving port associated with the linear velocity of belt end A.
B — Belt end B linear velocity
Mechanical translational conserving port associated with the linear velocity of belt end B.
C — Pulley center linear velocity
Mechanical translational conserving port associated with pulley translation. The pulley moves in the plane and along the bisect of the pulley wrap angle. When the relative velocity is positive, the pulley center moves.
To enable this port, set Pulley translation to On.
Belt type — Belt parameterization
Ideal - No slip (default) | Flat belt | V-belt
Belt type selection. The belt type affects slip conditions.
Ideal - No slip — Parameterize an ideal belt that does not slip relative to the pulley.
Flat belt — Parameterize a belt with a rectangular cross-section.
V-belt— Parameterize a belt with a V-shaped cross-section.
V-belt sheave angle — Sheave angle
Sheave angle of the V-belt.
To enable this parameter, set Belt type to V-belt.
Number of V-belts — Number of belts
Number of V-belts.
The block rounds noninteger values to the nearest integer. Increasing the number of belts increases the effective mass per unit length and maximum allowable tension.
Centrifugal force — Centrifugal force option
Do not model centrifugal force - Suitable for HIL simulation (default) | Model centrifugal force
Option to include the effects of centrifugal force. If you set this parameter to Model centrifugal force, centrifugal force saturates to approximately 90% of the value of the force on each belt end.
To enable this parameter, set Belt type to Flat belt or V-belt.
Belt mass per unit length — Mass per unit length
0.6 kg/m (default) | positive scalar
Centrifugal force contribution in terms of linear density expressed as mass per unit length.
To enable this parameter, set Belt type to Flat belt or V-belt and Centrifugal force to Model centrifugal force.
Belt direction — Initial relative motion direction of the belt end
Ends move in opposite direction (default) | Ends move in same direction
Relative direction of translational motion of one belt end with respect to the other.
Maximum tension — Tension threshold option
Option to specify a maximum tension. If you select Specify maximum tension and the belt tension on either end of the belt meets or exceeds the value that you specify for the Belt maximum tension parameter, the simulation stops and generates an assertion error.
Belt maximum tension — Maximal tension threshold
Maximum allowable tension for each belt. When the tension on either end of the belt meets or exceeds this value, the simulation stops and generates an assertion error.
Tension warning — Slack threshold reporting
Do not check tension (default) | Warn if leading end loses tension
Whether the block generates a warning when the tension at either end of the belt falls below zero.
When combining Belt Pulley blocks with Rope blocks to form loops, set the Belt Pulley block Tension warning parameter to Do not check tension and set the Rope block Tension warning parameter to Warn if rope loses tension.
Pulley translation — Option to allow linear motion
Option to allow the pulley to translate. Setting this parameter to On enables port C.
Pulley radius — Pulley radius
Radius of the pulley.
Bearing viscous friction coefficient — Bearing viscous friction
0.001 N*m/(rad/s) (default) | scalar
Viscous friction associated with the bearings that hold the axis of the pulley.
Inertia — Rotational inertia
Option to parameterize the rotational inertia with an initial velocity.
Pulley inertia — Pulley inertia
Rotational inertia of the pulley.
Pulley initial rotational velocity — Initial pulley rotational velocity
Initial rotational velocity of the pulley.
Pulley mass — Pulley mass
0.01 kg (default) | positive scalar
Pulley mass for the inertia calculation.
To enable these parameters, set Pulley translation to On and Inertia to Specify inertia and initial velocity.
Pulley initial translational velocity — Initial pulley translational velocity
Initial translational velocity of the pulley.
To enable these settings, set Belt type to Flat belt or V-belt.
Friction model — Friction method selection
Modal (default) | Continuous
Option to parameterize continuous or modal friction.
To enable this parameter, set Pulley translation to Off.
Initial state — Initial state of pulley
Option to initialize the simulation with the pulley locked or unlocked.
To enable this parameter, set Pulley translation to Off and Friction model to Modal.
Belt friction while maintaining static contact.
To enable this parameter, set Friction model to Modal.
Belt friction while slipping.
Contact friction coefficient — Contact friction coefficient
Coulomb friction coefficient between the belt and the pulley surface.
To enable this parameter, set Friction model to Continuous.
Wrap angle — Belt-to-pulley contact angle
Radial contact angle between the belt and the pulley.
Velocity threshold — Velocity threshold
Relative velocity required for peak kinetic friction in the contact. The friction velocity threshold improves the numerical stability of the simulation by ensuring that the force is continuous when the direction of the velocity changes.
Force threshold — Force threshold
0.01 N (default) | positive scalar
Relative force required for peak kinetic friction in the contact.
For optimal simulation performance, set the Belt > Centrifugal force parameter to Do not model centrifugal force - Suitable for HIL simulation.
[1] Johnson, Kenneth L. Contact Mechanics. Cambridge: Cambridge Univ. Press, 2003.
Rope | Rope Drum | Belt Drive | Wheel and Axle
|
Model coaxial transmission line - Simulink - MathWorks France
Model coaxial transmission line
The Coaxial Transmission Line block uses S-parameters to model its frequency behavior. A cross-section of the Coaxial Transmission Line is shown in the following figure. Its physical characteristics include the radius of the inner conductor a and the radius of the outer conductor b.
Outer radius (m) — Radius of outer conductor
Radius of the outer conductor of the coaxial transmission line, specified as a scalar in meters.
Inner radius (m) — Radius of inner conductor
Radius of the inner conductor of the coaxial transmission line, specified as a scalar in meters.
Relative permeability constant — Relative permeability of dielectric
Relative permeability of the dielectric, expressed as the ratio of the permeability of the dielectric to permeability in free space μ0, specified as a real scalar.
Relative permittivity constant — Relative permittivity of dielectric
Relative permittivity of the dielectric, expressed as the ratio of the permittivity of the dielectric to permittivity in free space ε0, specified as a real scalar.
Loss tangent of the dielectric, specified as a nonnegative scalar.
Conductivity of the conductor, specified as a positive scalar in siemens per meter.
Zin is the input impedance of the shunt circuit and it is calculated as follows:
for a short circuited transmission line
{\text{Z}}_{\text{in}}=j{Z}_{0}\mathrm{tan}\gamma l
and for the open circuited transmission line
{\text{Z}}_{\text{in}}=-j{Z}_{0}\mathrm{cot}\gamma l
Z0 is the characteristic impedance and
\gamma
is the propagation constant
The ABCD-parameters for the shunt stub are calculated as
\begin{array}{c}A=1\\ B=0\\ C=1/{Z}_{in}\\ D=1\end{array}
and for the series stub as
\begin{array}{c}A=1\\ B={Z}_{in}\\ C=0\\ D=1\end{array}
\begin{array}{l}A=\frac{{e}^{kd}+{e}^{-kd}}{2}\\ B=\frac{{Z}_{0}*\left({e}^{kd}-{e}^{-kd}\right)}{2}\\ C=\frac{{e}^{kd}-{e}^{-kd}}{2*{Z}_{0}}\\ D=\frac{{e}^{kd}+{e}^{-kd}}{2}\end{array}
\begin{array}{c}{Z}_{0}=\sqrt{\frac{R+j\omega L}{G+j\omega C}}\\ k={k}_{r}+j{k}_{i}=\sqrt{\left(R+j\omega L\right)\left(G+j\omega C\right)}\end{array}
\begin{array}{l}R=\frac{1}{2\pi {\sigma }_{cond}{\delta }_{cond}}\left(\frac{1}{a}+\frac{1}{b}\right)\\ L=\frac{\mu }{2\pi }\mathrm{ln}\left(\frac{b}{a}\right)\\ G=\frac{2\pi \omega {\epsilon }^{″}}{\mathrm{ln}\left(\frac{b}{a}\right)}\\ C=\frac{2\pi {\epsilon }^{\prime }}{\mathrm{ln}\left(\frac{b}{a}\right)}\end{array}
where the skin depth of the conductor is {\delta }_{cond}=1/\sqrt{\pi f\mu {\sigma }_{cond}}.
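Based on the R, L, G, C equations above, the per-unit-length parameters and the (lossless) characteristic impedance can be sketched numerically. The geometry and materials below (a = 1 mm, b = 3.35 mm, copper conductor, PTFE-like dielectric) are illustrative assumptions, not block defaults:

```python
import math

mu0 = 4e-7 * math.pi
eps0 = 8.854187817e-12

a, b = 1e-3, 3.35e-3           # inner/outer radius, m (assumed)
f = 1e9                        # frequency, Hz
sigma_cond = 5.8e7             # copper conductivity, S/m
eps_r, tan_delta = 2.1, 2e-4   # PTFE-like dielectric (assumed)
mu = mu0                       # mu_r = 1

omega = 2 * math.pi * f
eps_p = eps_r * eps0           # eps' (real part of permittivity)
eps_pp = eps_p * tan_delta     # eps'' (loss part)

delta = 1 / math.sqrt(math.pi * f * mu * sigma_cond)   # skin depth
R = (1 / (2 * math.pi * sigma_cond * delta)) * (1 / a + 1 / b)
L = mu / (2 * math.pi) * math.log(b / a)
G = 2 * math.pi * omega * eps_pp / math.log(b / a)
C = 2 * math.pi * eps_p / math.log(b / a)

Z0 = math.sqrt(L / C)          # high-frequency lossless approximation
print(f"Z0 = {Z0:.1f} ohm")    # close to 50 ohm for this geometry
```

With these dimensions the ratio b/a is chosen so that Z0 comes out near the common 50-ohm value.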
Coplanar Waveguide Transmission Line | General Passive Network | Transmission Line | Microstrip Transmission Line | Parallel-Plate Transmission Line | Two-Wire Transmission Line
|
Force in a magnetic circuit
HOME / Applications / Force in a magnetic circuit
A magnetic circuit is a physical system whose components produce and contain magnetic flux. Flux is generally produced by permanent magnets or coils, while the flux paths consist of some ferromagnetic material of high permeability, such as iron. The example of magnetic circuit [1] in Figure 1 contains two iron parts of cross-section area
\mathrm{S} = 4\ \mathrm{cm}^{2}
. One part is stationary and the other is moving; they are separated by two air gaps of length L. An N-turn coil wrapped around the stationary part carries the current I (N = 300, I = 1 A) and is responsible for the flux in the circuit.
The force on the moving part can be determined by calculating the change in the magnetic energy that would be produced by moving the part over a small distance. The force magnitude is obtained as:
This simple method is based upon the virtual displacement principle and it is often used to calculate forces in magnetic devices.
Since the permeability of iron is much greater than the permeability of air, the magnetic field in the iron, as well as the leakage flux, are neglected. Ampere's law for the contour in Figure 1 can be written as:
where {H}_{a} represents the magnetic field in the air gap.
With no leakage and no field in the iron, the entire magnetic energy of the system is stored in the volume V = SL of the air gap and can be computed as:
Assuming a uniform field distribution in the air gap allows us to calculate W by simply multiplying the energy density and the volume V. The circuit contains two air gaps; therefore, after doubling the energy in eq. 3 and substituting eq. 2 in eq. 3:
For two different air gap lengths, {L}_{1}=1.5\ \mathrm{mm} and {L}_{2}=2.5\ \mathrm{mm}, the change in magnetic energy can be computed as:
Equation 5, combined with eq. 1, yields the following expression for force magnitude:
Figure 1 - Magnetic circuit example
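As a cross-check, the virtual-work calculation above can be reproduced numerically. This is a sketch using the example's values (N = 300, I = 1 A, S = 4 cm², gaps of 1.5 mm and 2.5 mm):

```python
import math

mu0 = 4e-7 * math.pi   # permeability of free space
N, I, S = 300, 1.0, 4e-4

def gap_energy(L):
    """Total stored magnetic energy for air-gap length L (meters).
    Ampere's law over the two gaps gives Ha = N*I/(2*L); the energy
    density mu0*Ha**2/2 fills the volume S*L in each of the two gaps."""
    Ha = N * I / (2 * L)
    return 2 * (0.5 * mu0 * Ha**2) * S * L   # = mu0*(N*I)**2*S/(4*L)

L1, L2 = 1.5e-3, 2.5e-3
F = (gap_energy(L1) - gap_energy(L2)) / (L2 - L1)
print(round(F, 3))   # ~3.016 N, matching the ~3.02 N analytical value
```

The result agrees with the analytical force of about 3.02 N quoted in the results table below.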
To accurately represent the situation in the analytical example, where the force has been computed using the energies for 1.5 mm and 2.5 mm air gaps, a SOLIDWORKS model of the circuit has been created with two air gaps of different lengths: {L}_{1}=1.5\ \mathrm{mm} and {L}_{2}=2.5\ \mathrm{mm} (Figure 2).
Figure 2 - Model of one half of the magnetic circuit is created in SOLIDWORKS
Due to the symmetry of the problem, it is enough to simulate only one half of the space (half of the magnetic circuit). The simulation uses an EMS Magnetostatic study with a symmetry factor of 0.5.
A current-driven Wound Coil with 300 turns and 1 A/turn is added to the coil region. Its Entry and Exit Ports are faces in the plane of symmetry.
The following instructions show you how to create a new material library, define a custom material, and add the material to an element of your model:
In the EMS Manager tree, right-click the Stationary element icon in the Materials folder and select Apply Material to All Bodies. The Material Task Pane opens.
Select “New Library” from the “Material Task Pane”.
Browse to the location where to save the new library file (the file extension will be .emsmtr).
Type MyLib for the name. An empty material library named MyLib is added to the Material Task Pane.
Click on the Add Material button.
Type Mur1200 for the material's name.
Type 1200 for the Relative Permeability and take the default for the rest of the fields.
Copper is assigned to the coil region, and air to the surrounding volume.
Figure 3 - Tangential Flux boundary condition
The normal component of the flux density at the plane of symmetry (the xy plane in Figure 2) is zero (all flux lines are parallel to this plane). Therefore, the Tangential Flux boundary condition should be applied on all surfaces that belong to the plane, including the air and coil cross-sections. To do so:
In the study, right-click the Load/Restraint folder and select Tangential Flux. The Tangential Flux Property page appears.
Click inside the Faces for Tangential Flux box then select all the symmetry faces.
To add a coil to a Magnetostatic study:
In the EMS Manager tree, right-click on the Coils icon and select Wound Coil.
Keep default Coil Type as a Current driven coil.
Click inside the Components or Bodies for Coils box .
Click on the (+) sign in the upper left corner of the graphics area to open the components tree.
Click on the Coil-1 icon. It will appear in the Components and Solid Bodies list.
Click inside the Faces for Entry Port box then select the entry port face.
Click inside the Faces for Exit Port box then select the exit port face.
Click on General properties tab.
Type 300 in the Turns box .
Keep the default value of 1 in the Current per Turn field. The units are ampere-turns.
Figure 4 - Entry and exit ports of the coil
EMS automatically computes the nodal force distribution without any user input. However, for a rigid-body force calculation, the part on which the force or torque is to be calculated must be defined before running the simulation.
In the EMS manager tree, right-click the Force/Torques folder and select Virtual Work .
The Forces/Torques Property Manager appears.
Click inside the Components and Bodies for Forces/Torques box .
Click on Moveable part icon. It will appear in the Components and Solid Bodies list.
The quality of the mesh in the air gap region is of critical importance for an accurate force calculation. EMS allows the user to take full control over the mesh resolution.
In the EMS manager tree, right-click the Mesh icon and select Apply Control . The Mesh Control Property Manager Page appears.
Click inside the Components and Solid Bodies box .
Click on the (+) sign in the upper left corner of the graphical area to open the components tree.
Click on the Air Gap 1, and Air Gap 2 icons. They will appear in the Components and Solid Bodies list.
Under Control Parameters click inside the Element Size box and type 2.5 mm.
To mesh the model:
In the EMS Manager tree, right-click on the Mesh icon and select Create Mesh.
Type 35 in the box for average number of mesh elements per diagonal for each solid body .
Right-click the Study icon and select Run to execute the simulation. Once the computation is done, the program creates five folders in the EMS Manager tree: Report, Magnetic Flux Density, Magnetic Field Intensity, Current Density, and Force Distribution.
It is a good habit to first view the magnetic flux density in the model, including the outer air. This action gives an indication whether the outer air boundary is far enough.
In the EMS Manager tree, right-click on the Magnetic Flux Density folder and select 3D Fringe Plot > No Clipping. The Magnetic Flux Density Property Manager Page appears.
In the Display box:
a. Select Br from the magnetic flux density component . Directions are based on the global coordinate system.
b. Set Units to Tesla.
c. Select Continuous from Fringe Options .
3. Select OK .
Figure 5 - Flux density in the magnetic circuit
Examining Figure 5 makes clear that the magnetic flux density is very small on the outer air boundary, so the air box is large enough. Had it been otherwise, the air box surrounding the magnetic circuit would have to be larger.
In the EMS Manager tree, double-click Result Table to display the force result.
Figure 6 - EMS Results Table
Remember that, because of the symmetry about the xy plane, only half of the problem has been modeled. Thus, the \mathrm{Fx} and \mathrm{Fy} components must be multiplied by a factor of 2, while the \mathrm{Fz} component cancels out. Since \mathrm{Fy} is negligible compared to \mathrm{Fx}, the resultant force is virtually in the X direction, with a magnitude of \mathrm{FxTotal}=2\times 1.507=3.014\ \mathrm{N}. The analytical solution compares very well with the EMS result.
Analytical Virtual Work Solution EMS Result
Force [N] 3.02 3.014
[1] Electromagnetics and calculation of fields, by Nathan Ida and Joao P. A. Bastos, 2nd Edition, page 183-184. Publisher: Springer-Verlag;
Download Application Model
|
On the Basin of Attraction of Limit Cycles in Periodic Differential Equations | EMS Press
On the Basin of Attraction of Limit Cycles in Periodic Differential Equations
We consider a general system of ordinary differential equations \begin{eqnarray*}\dot{x}=f(t,x)\mbox{,}\end{eqnarray*} where x\in\mathbb R^n and f(t+T,x)=f(t,x) for all (t,x)\in\mathbb R\times \mathbb R^n, i.e. f is a periodic function. We give a sufficient and necessary condition for the existence and uniqueness of an exponentially asymptotically stable periodic orbit. Moreover, this condition is sufficient and necessary to prove that a subset belongs to the basin of attraction of the periodic orbit. The condition uses a Riemannian metric, and we present methods to construct such a metric explicitly.
Peter Giesl, On the Basin of Attraction of Limit Cycles in Periodic Differential Equations. Z. Anal. Anwend. 23 (2004), no. 3, pp. 547–576
|
Breadth First Search (BFS) · USACO Guide
Authors: Benjamin Qi, Andi Qu, Neo Wang
Contributor: Qi Wang
Traversing a graph in a way such that vertices closer to the starting vertex are processed first.
Silver - Flood Fill
Queues & Deques
4.5 - Queues, Deques
3.2, 6.3 - Queues
A queue is a First In First Out (FIFO) data structure that supports three operations, all in
\mathcal{O}(1)
push: inserts at the back of the queue
pop: deletes from the front of the queue
front: retrieves the element at the front without removing it.
q.push(1); // [1]
q.push(3); // [3, 1]
q.push(4); // [4, 3, 1]
q.pop(); // [4, 3]
cout << q.front() << endl; // 3
add: insertion at the back of the queue
poll: deletion from the front of the queue
peek: which retrieves the element at the front without removing it
Java doesn't actually have a Queue class; it's only an interface. The most commonly used implementation is the LinkedList, declared as follows:
q.add(1); // [1]
q.add(3); // [3, 1]
q.add(4); // [4, 3, 1]
q.poll(); // [4, 3]
System.out.println(q.peek()); // 3
Python has a queue built-in module.
Queue.put(n): Inserts element to the back of the queue.
Queue.get(): Gets and removes the front element. If the queue is empty, this will wait forever, creating a TLE error.
Queue.queue[n]: Gets the nth element without removing it. Set n to 0 for the first element.
q = Queue() # []
q.put(1) # [1]
q.put(2) # [1, 2]
v = q.queue[0] # v = 1, q = [1, 2]
v = q.get() # v = 1, q = [2]
v = q.get() # v = 2, q = []
v = q.get() # Code waits forever, creating TLE error.
Python's queue.Queue() uses Locks to maintain a threadsafe synchronization, so it's quite slow. To avoid TLE, use collections.deque() instead for a faster version of a queue.
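A minimal sketch of using collections.deque as a FIFO queue in place of queue.Queue:

```python
from collections import deque

# deque gives the same FIFO behavior as queue.Queue
# without the locking overhead.
q = deque()
q.append(1)           # push to the back: [1]
q.append(2)           # [1, 2]
front = q[0]          # peek at the front without removing: 1
first = q.popleft()   # pop from the front: 1
second = q.popleft()  # 2
# Unlike queue.Queue.get(), popleft() on an empty deque raises
# IndexError immediately instead of blocking forever.
print(front, first, second)
```

This is why competitive programmers reach for deque rather than the thread-safe Queue class.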
A deque (usually pronounced "deck") stands for double ended queue and is a combination of a stack and a queue, in that it supports
\mathcal{O}(1)
insertions and deletions from both the front and the back of the deque. Not very common in Bronze / Silver.
The four methods for adding and removing are push_back, pop_back, push_front, and pop_front.
d.push_front(3); // [3]
d.push_front(4); // [4, 3]
d.push_back(7); // [4, 3, 7]
d.pop_front(); // [3, 7]
d.push_front(1); // [1, 3, 7]
d.pop_back(); // [1, 3]
You can also access a deque like an array in constant time with the [] operator. For example, to access the element at index
\texttt{i}
of some deque
\texttt{dq}
, use
\texttt{dq[i]}
In Java, the deque class is called ArrayDeque. The four methods for adding and removing are addFirst , removeFirst, addLast, and removeLast.
deque.addFirst(3); // [3]
deque.addFirst(4); // [4, 3]
deque.addLast(7); // [4, 3, 7]
deque.removeFirst(); // [3, 7]
deque.addFirst(1); // [1, 3, 7]
deque.removeLast(); // [1, 3]
In Python, collections.deque() is used for a deque data structure. The four methods for adding and removing are appendleft, popleft, append, and pop.
d.appendleft(3) # [3]
d.appendleft(4) # [4, 3]
d.append(7) # [4, 3, 7]
d.popleft() # [3, 7]
d.appendleft(1) # [1, 3, 7]
d.pop() # [1, 3]
Message Route
interactive, implementation
12.1 - BFS
grid, 8-puzzle examples
BFS and its uses
If you prefer a video format
Solution - Message Route
\mathcal{O}(V+E)
We can observe that there are many possible shortest paths to output. Fortunately, the problem states that we can print any valid solution. Notice that, like every other BFS problem, the distance of each node increases by 1 when we travel to the next level of unvisited nodes. However, the problem requires that we add additional information - in this case, the path. When we traverse from node a to node b, we can set the parent of b to a. After the BFS is complete, this allows us to backtrack through the parents, which ultimately leads us to our starting node. We know to terminate at node 1 because it's the starting node. If there is no path to our end node, then its distance will remain at INT_MAX.
For the test input, we start with the following parent array.
Parent 0 0 0 0 0
Distance 0 INT_MAX INT_MAX INT_MAX INT_MAX
After visiting the children of node 1:
Distance 0 1 1 1 INT_MAX
After visiting node 5 from node 4, the parent of node 5 is set to 4 and its distance becomes 2.
To determine the path, we can backtrack from node n \rightarrow 1 (here, 5 \rightarrow 1), pushing each value that we backtrack into a vector. The path we take is 5 \rightarrow \texttt{parent}[5]=4 \rightarrow \texttt{parent}[4]=1, which corresponds to the vector [5, 4, 1]. We break at node 1 because it was the initial starting node. Finally, we reverse the vector and print out its length (in this case, 3).
vi dist(N+1,INT_MAX), parent(N+1);
vector<vi> adj(N+1);
private static Map<Integer, LinkedList<Integer>> adj = new HashMap<>();
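The parent-tracking idea can be sketched in Python. The edge list below (1-2, 1-3, 1-4, 4-5) is an assumption consistent with the distance table in the walkthrough, not the actual test input:

```python
from collections import deque

INF = float("inf")

def shortest_route(n, edges, start=1, end=None):
    """BFS that records each node's parent so the shortest path can be
    reconstructed by backtracking from the end node to the start."""
    end = n if end is None else end
    adj = [[] for _ in range(n + 1)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    dist = [INF] * (n + 1)
    parent = [0] * (n + 1)
    dist[start] = 0
    q = deque([start])
    while q:
        cur = q.popleft()
        for nxt in adj[cur]:
            if dist[nxt] == INF:          # not yet visited
                dist[nxt] = dist[cur] + 1
                parent[nxt] = cur
                q.append(nxt)
    if dist[end] == INF:                  # end node unreachable
        return None
    path = [end]
    while path[-1] != start:              # backtrack through parents
        path.append(parent[path[-1]])
    return path[::-1]

print(shortest_route(5, [(1, 2), (1, 3), (1, 4), (4, 5)]))  # [1, 4, 5]
```

The reversed parent chain 5 → 4 → 1 matches the [5, 4, 1] vector described above.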
In the gold division, the problem statement will almost never directly be, "Given an unweighted graph, find the shortest path between nodes u and v." Instead, the difficulty in many BFS problems is converting the problem into a graph on which we can run BFS and get the answer.
A 0/1 BFS finds the shortest path in a graph where the weights on the edges can only be 0 or 1, and runs in
\mathcal{O}(V + E)
using a deque. Read the resource below for an explanation of how the algorithm works.
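A minimal 0/1 BFS sketch (the small example graph is made up for illustration):

```python
from collections import deque

def zero_one_bfs(adj, src):
    """Shortest distances when every edge weight is 0 or 1.
    0-weight edges push to the front of the deque and 1-weight edges
    to the back, so nodes are still popped in nondecreasing distance
    order, as with an ordinary BFS queue."""
    INF = float("inf")
    dist = [INF] * len(adj)
    dist[src] = 0
    dq = deque([src])
    while dq:
        u = dq.popleft()
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                if w == 0:
                    dq.appendleft(v)   # same level: process first
                else:
                    dq.append(v)       # next level: process later
    return dist

# Triangle graph: 0-1 costs 0, while 0-2 and 1-2 cost 1.
adj = [[(1, 0), (2, 1)], [(0, 0), (2, 1)], [(0, 1), (1, 1)]]
print(zero_one_bfs(adj, 0))  # [0, 0, 1]
```

Each node can re-enter the deque, but the distance check keeps total work at O(V + E).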
2013 - Tracks in the Snow
Baltic OI - Easy
Complexity:
\mathcal O(NM)
We can use the following greedy strategy to find our answer:
Run flood fill to find each connected component with the same tracks.
Construct a graph where the nodes are the connected components and there are edges between adjacent connected components.
The answer is the maximum distance from the node containing
(1, 1)
to another node. We can use BFS to find this distance.
For a detailed proof of why this works, see the official editorial.
Although this gives us an
\mathcal O(NM)
solution, there is a simpler solution using 0/1 BFS!
Consider the graph with an edge between each pair of adjacent cells with tracks, where the weight is 0 if the tracks are the same and 1 otherwise. The answer is simply the longest shortest-path from the top left cell. This is because going from one track to another same one is like not leaving a node (hence the cost is 0), while going from one track to a different one is like traversing the edge between two nodes (hence the cost is 1).
Since the weight of each edge is either 0 or 1 and we want the shortest paths from the top left cell to each other cell, we can apply 0/1 BFS. The time complexity of this solution is
\mathcal O(NM)
int dx[4]{1, -1, 0, 0}, dy[4]{0, 0, 1, -1};
int n, m, depth[4000][4000], ans = 1;
string snow[4000];
return (x > -1 && x < n && y > -1 && y < m && snow[x][y] != '.');
public class tracks {
static final int[] dx = {0, 0, -1, 1};
static final int[] dy = {-1, 1, 0, 0};
static int N = 1, H, W;
static int[][] grid, count;
FastIO io = new FastIO();
Due to oj.uz's grading constraints for Java, this solution will TLE on the judge.
Easy Show Tags BFS
Graph Girth
Normal Show Tags Cycle
Normal Show Tags BFS, DFS
Normal Show Tags BFS
What's Up With Gravity?
Normal Show Tags BFS, Bitmasks
2009 - Mecho
Normal Show Tags BFS, Binary Search
Cow Navigation
Hard Show Tags BFS
A Pie for a Pie
|
Gravity Scaling Parameter for Pool Boiling Heat Transfer | J. Heat Transfer | ASME Digital Collection
e-mail: rraj@umd.edu
e-mail: john.b.mcquillen@nasa.gov
Raj, R., Kim, J., and McQuillen, J. (July 6, 2010). "Gravity Scaling Parameter for Pool Boiling Heat Transfer." ASME. J. Heat Transfer. September 2010; 132(9): 091502. https://doi.org/10.1115/1.4001632
Although the effects of microgravity, earth gravity, and hypergravity
(>1.5 g)
on pool boiling heat flux have been studied previously, pool boiling heat flux data over a continuous range of gravity levels (0–1.7 g) was unavailable until recently. The current work uses the results of a variable gravity, subcooled pool boiling experiment to develop a gravity scaling parameter for n-perfluorohexane/FC-72 in the buoyancy-dominated boiling regime
(Lh/Lc>2.1)
. The heat flux prediction was then validated using heat flux data at different subcoolings and dissolved gas concentrations. The scaling parameter can be used as a tool to predict boiling heat flux at any gravity level in the buoyancy dominated regime if the data under similar experimental conditions are available at any other gravity level.
scaling parameter, variable gravity, pool boiling, microgravity, boiling, dissolving, gravity waves, heat transfer, organic compounds, two-phase flow, zero gravity experiments
Boiling, Gravity (Force), Heat transfer, Pool boiling, Heat flux
|
The function of cristae in a mitochondrion is
A. electron transport and ATP synthesis
B. carbon assimilation
C. intake of
{\mathrm{O}}_{2}
D. elimination of
{\mathrm{CO}}_{2}
|
Convex foliated projective structures and the Hitchin component for PSL4(R)
15 September 2008 Convex foliated projective structures and the Hitchin component for
{\mathrm{PSL}}_{4}\left(R\right)
Olivier Guichard, Anna Wienhard
Olivier Guichard,1 Anna Wienhard2
1Centre National de la Recherche Scientifique, Laboratoire de Mathématiques d'Orsay; Département de Mathématiques, Université Paris-Sud
In this article, we give a geometric interpretation of the Hitchin component
{T}^{4}\left(\Sigma \right)\subset \mathrm{Rep}\left({\pi }_{1}\left(\Sigma \right),{\mathrm{PSL}}_{4}\left(R\right)\right)
of a closed oriented surface of genus
g\ge 2
. We show that representations in
{T}^{4}\left(\Sigma \right)
are precisely the holonomy representations of properly convex foliated projective structures on the unit tangent bundle of
\Sigma
. From this, we also deduce a geometric description of the Hitchin component
T\left(\Sigma ,{\mathrm{Sp}}_{4}\left(R\right)\right)
of representations into the symplectic group
Olivier Guichard. Anna Wienhard. "Convex foliated projective structures and the Hitchin component for
{\mathrm{PSL}}_{4}\left(R\right)
." Duke Math. J. 144 (3) 381 - 445, 15 September 2008. https://doi.org/10.1215/00127094-2008-040
|
Symbolic Summation - MATLAB & Simulink - MathWorks Italia
Comparing symsum and sum
Computational Speed of symsum versus sum
Output Format Differences Between symsum and sum
Symbolic Math Toolbox™ provides two functions for calculating sums:
sum finds the sum of elements of symbolic vectors and matrices. Unlike the MATLAB® sum function, the symbolic sum function does not work on multidimensional arrays. For details, see the MATLAB sum page.
symsum finds the sum of a symbolic series.
You can find definite sums by using both sum and symsum. The sum function sums the input over a dimension, while the symsum function sums the input over an index.
Consider the definite sum
S=\sum _{k=1}^{10}\frac{1}{{k}^{2}}.
First, find the terms of the definite sum by substituting the index values for k in the expression. Then, sum the resulting vector using sum.
f = 1/k^2;
V = subs(f, k, 1:10)
S_sum = sum(V)
[ 1, 1/4, 1/9, 1/16, 1/25, 1/36, 1/49, 1/64, 1/81, 1/100]
S_sum =
1968329/1270080
Find the same sum by using symsum by specifying the index and the summation limits. sum and symsum return identical results.
S_symsum = symsum(f, k, 1, 10)
S_symsum =
1968329/1270080
For summing definite series, symsum can be faster than sum. For summing an indefinite series, you can only use symsum.
You can demonstrate that symsum can be faster than sum by summing a large definite series such as
S=\sum _{k=1}^{100000}{k}^{2}.
To compare runtimes on your computer, use the following commands.
sum(sym(1:100000).^2);
symsum(k^2, k, 1, 100000);
symsum can provide a more elegant representation of sums than sum provides. Demonstrate this difference by comparing the function outputs for the definite series
S=\sum _{k=1}^{10}{x}^{k}.
To simplify the solution, assume x > 1.
S_sum = sum(x.^(1:10))
S_symsum = symsum(x^k, k, 1, 10)
x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x
x^11/(x - 1) - x/(x - 1)
Show that the outputs are equal by using isAlways. The isAlways function returns logical 1 (true), meaning that the outputs are equal.
isAlways(S_sum == S_symsum)
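The same identity can be checked outside MATLAB as well; here is a sketch in Python that compares the expanded sum with the closed form symsum returns, using exact rational arithmetic so no floating-point tolerance is needed:

```python
from fractions import Fraction

# Verify that x + x^2 + ... + x^10 equals x^11/(x-1) - x/(x-1)
# for a few sample values of x (exact rationals, x != 1).
for x in (Fraction(2), Fraction(3), Fraction(7, 2)):
    expanded = sum(x**k for k in range(1, 11))
    closed = x**11 / (x - 1) - x / (x - 1)
    assert expanded == closed, x
print("closed form matches the expanded sum")
```

Checking a handful of rational points is of course weaker than the symbolic proof isAlways performs, but it is a quick sanity check on the algebra.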
|