Concentration Terminology | Brilliant Math & Science Wiki
Omkar Kulkarni, Pranjal Jain, Pranshu Gaba, and Vishal Ch
Concentration of solution is defined as the amount of solute dissolved in a specific (fixed) amount of solvent.
Relation between the Concentration Terms
Dependence on Volume/Temperature
Often, the concentration of a solution is described qualitatively, through the use of such words as dilute and concentrated. These are relative terms. For example, suppose 3 solutions A, B, and C contain 20\text{ ml}, 30\text{ ml}, and 50\text{ ml} of alcohol in 100\text{ ml} of water each, respectively. In this case, B is more dilute than C, while it is more concentrated than A. A solution can be diluted by adding solvent or removing solute. On the other hand, a solution can be concentrated by adding solute or removing solvent.
Molarity is defined as the number of moles of solute per litre of solution:
(\text{Molarity})=\frac{(\text{Number of moles of solute})}{(\text{Number of litres of solution})}.
It is generally represented by \text{M}. \ _\square
3.7\text{ g} of \ce{Ca(OH)2} is dissolved in 250\text{ ml} of solution. What is its molarity?
The molar mass of \ce{Ca(OH)2} is 40+32+2=74\text{ g}. So the number of moles is obtained as follows:
(\text{Number of moles})=\frac{(\text{Given weight})}{(\text{Molar mass})}=\frac{3.7}{74}=0.05.
Also, 250\text{ ml}=0.25\text{ L}. Hence the molarity is
\frac{0.05}{0.25}=0.2\text{ M}.\ _\square
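As a quick numerical check of the example above, here is a short Python sketch (atomic masses rounded as in the text):

```python
# Molarity = moles of solute / litres of solution
molar_mass = 40 + 2 * 16 + 2 * 1   # Ca(OH)2: Ca + 2 O + 2 H = 74 g/mol
moles = 3.7 / molar_mass           # 0.05 mol
volume_l = 250 / 1000              # 250 ml = 0.25 L
molarity = moles / volume_l
print(round(molarity, 2))          # 0.2
```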
We have mixed two glucose solutions: 500\text{ ml} of 0.4\text{ M} and 800\text{ ml} of 0.25\text{ M}. What is the molarity of the final solution formed after mixing these two solutions?
We use
(\text{Number of moles of solute}) = (\text{Molarity}) \times (\text{Volume of solution}).
In the first solution, we have \frac{500}{1000} \times 0.4 = 0.2\text{ mol}. In the second solution, we have \frac{800}{1000} \times 0.25 = 0.2\text{ mol}. When the two solutions are mixed, they will have a total of 0.2 + 0.2 = 0.4\text{ mol} of glucose, and the final volume is 1.3\text{ L}. So the final molarity is
\frac{0.4}{1.3} = \frac{4}{13} \text{ M}.\ _\square
A 0.750\text{ M} solution of sulfuric acid \ce{H2SO4} has a density of 1.046 grams per milliliter at 20^\circ\text{C}. What is the molality of the solution?
To simplify the problem, assume you have exactly 1\text{ L} of solution. Then you have
\begin{aligned} (\text{Moles of }\ce{H2SO4}) &=(\text{Molarity})\times(\text{Volume})\\ &=0.75\times 1=0.75\\\\ (\text{Mass of solution}) &=(\text{Density})\times(\text{Volume})\\ &=1.046\times 1000=1046\text{ (g)}\\\\ (\text{Mass of solute}) &=(\text{Molar Mass})\times(\text{Number of moles})\\ &=98\times 0.75=73.5\text{ (g)}\\\\ (\text{Mass of solvent}) &=(\text{Mass of solution})-(\text{Mass of solute})\\ &=1046-73.5=972.5\text{ (g)}=0.9725\text{ (kg)}\\\\ (\text{Molality}) &=\dfrac{(\text{Moles of solute})}{(\text{Mass of solvent})}\\ &=\dfrac{0.75}{0.9725}=0.771\text{ (m)}.\ _\square \end{aligned}
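The same chain of conversions can be written as a short Python sketch (molar mass of \ce{H2SO4} taken as 98 g/mol, as above):

```python
molarity = 0.750        # mol per litre
density = 1.046         # g/ml
molar_mass = 98         # g/mol for H2SO4

moles = molarity * 1.0            # moles in exactly 1 L of solution
mass_solution = density * 1000    # 1046 g of solution in 1 L
mass_solute = molar_mass * moles  # 73.5 g
mass_solvent_kg = (mass_solution - mass_solute) / 1000
molality = moles / mass_solvent_kg
print(round(molality, 3))         # 0.771
```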
Find the molality of a 4 molar aqueous pure nitric acid solution. For the sake of this question, take the density of the solution to be 1\text{ gm/ml}.
Normality is defined as the number of gram equivalents per liter of solution:
(\text{Normality})=\frac{(\text{Number of gram equivalents})}{(\text{Number of litres of solution})}.
It is generally represented by \text{N}. Molarity can be converted to normality by multiplying by the valency factor:
(\text{Normality})=(\text{Valency Factor})\times(\text{Molarity}).\ _\square
What is the normality of a 0.5\text{ M} \; \ce{H2SO4} solution?
The n-factor of \ce{H2SO4} is 2 since sulphuric acid has two replaceable \ce{H+} ions. Since
(\text{Normality}) = (n\text{-factor}) \times (\text{Molarity}),
the normality is 0.5 \times 2 = 1\text{ N}. \ _\square
In a beaker containing 2 liters of 3 normal solution of pure sodium hydroxide, how much hydrochloric acid of the same volume must be added to neutralize all of sodium hydroxide contained in it?
Submit your answer in grams (g).
Mole fraction for a component is defined as the number of moles of that component per mole of mixture. In a mixture of p components, the mole fraction of the k^\text{th} component is given by
x_k=\frac{n_k}{n_1+n_2+n_3+\dots+n_p},
where n_a represents the number of moles of component a. \ _\square
An aqueous solution weighing 100 grams has 60 grams of pure chalk. Find the mole fraction of the chalk. If the answer can be expressed as \frac{a}{b}, where a and b are coprime positive integers, then submit the value of a + b.
Formality is defined as the number of moles of formula units in one liter of solution. It is used for ionic compounds only:
(\text{Formality})=\frac{W\times1000}{(\text{GFM})\times[\text{Volume of solution (ml)}]},
where GFM stands for gram formula mass. \ _\square
Percentage weight by weight is defined as
\frac{(\text{Mass of solute})}{(\text{Mass of solution})}\times100. \ _\square
Percentage weight by volume is defined as
\frac{[\text{Mass of solute (g)}]}{[\text{Volume of solution (ml)}]}\times100.\ _\square
Percentage volume by volume is defined as
\frac{(\text{Volume of solute})}{(\text{Volume of solution})}\times 100.\ _\square
Find the molarity of an aqueous solution of pure hydrochloric acid which is 15% volume-by-volume, given
\text{Density of solution} = 2\text{ gm/ml},
\text{Density of solute} = 1.5\text{ gm/ml}.
Parts per million (PPM) is defined as
\frac{(\text{Mass of solute})}{(\text{Mass of solution})}\times10^6.
As this concentration unit is generally used for solutions in which the mass of solute is very small as compared to the mass of solution, we can say that this concentration unit is the same as
\frac{(\text{Mass of solute})}{(\text{Mass of solvent})}\times 10^6.\ _\square
We will take the mass of solute to be m_b, the mass of solvent m_a, the density of solution d, the molar mass of solute M_b, and the molar mass of solvent M_a. Then m_b g of solute is present in m_a g of solvent, so \dfrac{m_b}{m_a} g of solute is present in 1 g of solvent, and \dfrac{1000m_b}{m_a} g of solute in 1000 g of solvent. Hence
\boxed{\text{Molality}\ (m)= \dfrac{\dfrac{1000m_b}{m_a}}{M_b} = \dfrac{1000m_b}{m_a \times M_b}}.
Now, m_b g of solute is present in m_a+m_b g of solution, i.e., m_b g of solute in \dfrac{m_a+m_b}{d} ml of solution. This gives \dfrac{1000 m_b \times d}{m_a+m_b} g of solute in 1 L of solution, which implies
\boxed{\text{Molarity}\ (M)= \dfrac{\dfrac{1000 m_b \times d}{m_a+m_b}}{M_b}}.
Dividing the two results gives
\boxed{\dfrac{\text{Molality}}{\text{Molarity}} = \dfrac{m_a+m_b}{d\times m_a}}.
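As a sanity check, the two formulas and their ratio can be verified numerically; the sample numbers below are illustrative (they reuse the earlier sulfuric-acid figures: solute mass m_b, solvent mass m_a, solution density d, solute molar mass M_b):

```python
m_a, m_b, M_b, d = 972.5, 73.5, 98.0, 1.046  # illustrative values

molality = 1000 * m_b / (m_a * M_b)
molarity = 1000 * m_b * d / ((m_a + m_b) * M_b)
ratio = molality / molarity
print(ratio)   # equals (m_a + m_b) / (d * m_a)
```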
This is how such relations between concentration terms are derived. Similarly, the relation between normality (N) and molarity (M) is
\boxed{\text{Molarity} \times n\text{-factor}= \text{Normality}}.\ _\square
Some of the concentration terms depend on the volume of solution/solvent like molarity and normality. The volume changes as the temperature is varied. Thus, such concentration terms are temperature dependent. On the other hand, terms like molality and mole fraction do not depend on volume, thus being temperature independent. The density is often required to convert temperature independent terms into temperature dependent terms or vice versa.
Cite as: Concentration Terminology. Brilliant.org. Retrieved from https://brilliant.org/wiki/concentration-terminology/
|
Wireless Recharging of Implanted Batteries via Ultrasound | J. Med. Devices | ASME Digital Collection
L. Radziemski, Tucson, AZ USA
A. Denison
Radziemski, L., Denison, A., and Dunn, F. (July 7, 2009). "Wireless Recharging of Implanted Batteries via Ultrasound." ASME. J. Med. Devices. June 2009; 3(2): 027517. https://doi.org/10.1115/1.3136169
In order to expand the applications for implanted rechargeable batteries, and to reduce the frequency of battery replacement procedures, we are investigating a recharging technique complementary to and improving on the current RF recharging technique. Although the first applications deal with batteries that could be implanted in human bodies to power neurostimulators, sensors, and drug pumps, non-medical applications may exist. Using a transmitter-receiver arrangement, we have recharged batteries wirelessly using ultrasound at several frequencies between 0.75 and 3.0 MHz. Rechargeable implantable batteries of 35, 200 and 600 mA-hr were charged at rates of up to 0.75 C, where C is the charging rate (charging current/maximum battery charging current). Typically the intervening medium was one centimeter of a tissue mimicking liquid (TML); however, some in vitro experiments have also been performed. Charging was accomplished at distances of up to 20 centimeters in water, and even through millimeters of plastic and centimeters of aluminum. Temperature measurements were made on both transmitting and receiving transducers, and in the TML. As expected, there were significant increases in temperature at the higher charging currents. Experimentally we determined that the "overall efficiency" of the charging process, viz. E = (I_batt × V_batt)/(net electrical power input), was closely correlated with the observed heating. That is, the lower the efficiency, the higher the input electrical power required, and the more transducer heat was produced and conducted into and through the medium. The critical issues were the coupling of the transmitter and receiver to the medium, and the efficiency of conversion of the receiver output to charging power by the charging circuitry. These depend on the mechanical and electrical impedances, and we improved the efficiency considerably by appropriate impedance matching. Active and passive methods of cooling the transducers and intervening medium have been constructed and successfully tested. With our system, recharging times will be limited not by heating considerations, but only by the optimum rate at which a given battery can accept charge.
biological tissues, biomedical transducers, biomedical ultrasonics, electric charge, secondary cells
Biological tissues, Biomedicine, Implantable batteries, Secondary cells, Transducers, Ultrasound, Batteries, Electricity (Physics), Heating, Aluminum, Biomedical ultrasonics, Cooling, Currents, Drugs, Electric charge, Heat, Pumps, Sensors, Temperature, Temperature measurement, Water
A Bi-Frequency Co-Linear Array Transducer for Biomedical Ultrasound Imaging
Design and Manufacture of a Solar-Electric Snow Melting Device for Making Potable Water on Mt. Rainier
|
EuDML | {T}_{\text{Ω}}-sequences in abelian groups.
Ledet, Robert; Clark, Bradd
Ledet, Robert, and Clark, Bradd. "{T}_{\text{Ω}}-sequences in abelian groups." International Journal of Mathematics and Mathematical Sciences 24.3 (2000): 145-148. <http://eudml.org/doc/48652>.
@article{Ledet2000,
author = {Ledet, Robert, Clark, Bradd},
keywords = {abelian group; Hausdorff group topology; {T}_{\text{Ω}}-sequence},
title = {{T}_{\text{Ω}}-sequences in abelian groups.},
AU - Ledet, Robert
AU - Clark, Bradd
TI - {T}_{\text{Ω}}-sequences in abelian groups.
KW - abelian group; Hausdorff group topology; {T}_{\text{Ω}}-sequence
abelian group, Hausdorff group topology, {T}_{\text{Ω}}-sequence
|
Applying Direct Variation - Course Hero
College Algebra/Application of Lines/Applying Direct Variation
Many real-world situations can be modeled by direct variation. In geometry, the perimeter P of a square is proportional to the side length s of the square by the formula P=4s. The formula d=rt describes how distance d is proportional to time t by the constant of proportionality r, which represents a constant rate of travel. When shopping for several same-price items, the total cost C is proportional to the number of items n by the price p of each item, or C = pn.
Sales tax is another example of direct variation. The amount of sales tax on an item is proportional to the price. If the sales tax rate is r, then the amount of sales tax on an item with price x is rx. The total cost y of the item with tax is:
\begin{aligned}y&=x + rx\\y&=(1+r)x\end{aligned}
Another application of direct variation is converting units, such as distance, volume, or currency. The constant of variation is the conversion factor.
Applying Direct Variation to Sales Tax
Graph and then determine the exact total cost of an item with a price of $5 after a sales tax of 10%.
The total cost y of an item with price x after a sales tax rate of r is represented by the sales tax equation:
y=(1 + r)x
For a 10% sales tax, the value of r is 0.10, which can also be written as 0.1. Substitute 0.1 for r:
\begin{aligned}y &= (1 + 0.1)x\\y &= 1.1x\end{aligned}
Determine two points on the line of the equation: (0, 0) and (1, k), where k is the constant of proportionality. In this situation, k is 1.1:
\begin{aligned} y&=kx\\y&=1.1x\end{aligned}
So, the line of the equation goes through the points (0, 0) and (1, 1.1). Plot the points and determine the slope, or k, of the equation. The graph of the equation goes through the origin with a slope of 1.1.
From the graph, when the price of the item x is $5, the total cost y after tax is between $5 and $6. Substitute 5 for x in the equation to identify the exact total cost.
\begin{aligned}y &= 1.1x\\ &=1.1(5)\\&=5.5\end{aligned}
The exact total cost is $5.50.
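The computation above is a one-liner in Python (a sketch, using the same numbers):

```python
r = 0.10                   # 10% sales tax rate
price = 5                  # item price in dollars
total = (1 + r) * price    # y = (1 + r)x
print(round(total, 2))     # 5.5
```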
Applying Direct Variation to Currency
Currency in U.S. dollars varies directly with the value in other currencies. On a given day, 8 euros are exchanged for 9.25 U.S. dollars. How many euros would be exchanged for 50 U.S. dollars?
Write the direct variation equation. Let y represent the number of euros and x represent the number of U.S. dollars:
y=kx
Substitute the given values of x and y into the equation, and solve for k:
\begin{aligned}8&=k(9.25)\\ \frac{8}{9.25}&=k\\ 0.865&=k\end{aligned}
This means that the exchange rate is about 0.865 euro per U.S. dollar. Substitute the value of k into the direct variation equation:
y=0.865x
The number of U.S. dollars is given, which is represented by x. Substitute 50 for the value of x:
\begin{aligned}y&=0.865(50)\\y&=43.25\end{aligned}
At the given exchange rate, 43.25 euros would be exchanged for 50 U.S. dollars.
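The same conversion, sketched in Python with the rounded rate used above:

```python
k = round(8 / 9.25, 3)   # euros per U.S. dollar, rounded as in the text: 0.865
euros = k * 50
print(round(euros, 2))   # 43.25
```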
|
\overline{x}\pm \text{SD}
Use of official tools for student support and assessment of learning progress (/ 5)
Use feedback to help students become aware of their strengths, weaknesses and attitudes (/ 5)
Assessment of students' learning for judgment on the level of proficiency (/ 5)
Supporting students' personal initiative on approaches, learning strategies and skills through the use of a self-assessment grid and the logbook (/ 5)
Guiding students in explaining their successes or difficulties and the reasons that depend on them alone (/ 5)
Average score of the skill 4 (/ 25)
|
02.09 Structured Data NumPy
While often our data can be well represented by a homogeneous array of values, sometimes this is not the case. This section demonstrates the use of NumPy's structured arrays and record arrays, which provide efficient storage for compound, heterogeneous data. While the patterns shown here are useful for simple operations, scenarios like this often lend themselves to the use of Pandas Dataframes, which we'll explore in Chapter 3.
Imagine that we have several categories of data on a number of people (say, name, age, and weight), and we'd like to store these values for use in a Python program. It would be possible to store these in three separate arrays:
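For instance, using the sample records that appear later on this page (the values themselves are illustrative):

```python
import numpy as np

name = ['Alice', 'Bob', 'Cathy', 'Doug']
age = [25, 45, 37, 19]
weight = [55.0, 85.5, 68.0, 61.5]

# Three parallel arrays -- nothing ties the i-th entries together.
name_arr = np.array(name)
age_arr = np.array(age)
weight_arr = np.array(weight)
```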
But this is a bit clumsy. There's nothing here that tells us that the three arrays are related; it would be more natural if we could use a single structure to store all of this data. NumPy can handle this through structured arrays, which are arrays with compound data types.
Recall that previously we created a simple array using an expression like this:
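Such an expression might look like this (a minimal stand-in; the size and dtype are illustrative):

```python
import numpy as np

x = np.zeros(4, dtype=int)   # a homogeneous array of four integers
```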
We can similarly create a structured array using a compound data type specification:
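A sketch of such a specification, using the dictionary form (field names and widths chosen to match the dtype printed below):

```python
import numpy as np

# Compound dtype: one record holds a name, an age, and a weight.
data = np.zeros(4, dtype={'names': ('name', 'age', 'weight'),
                          'formats': ('U10', 'i4', 'f8')})
print(data.dtype)
```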
[('name', '<U10'), ('age', '<i4'), ('weight', '<f8')]
Here 'U10' translates to "Unicode string of maximum length 10," 'i4' translates to "4-byte (i.e., 32 bit) integer," and 'f8' translates to "8-byte (i.e., 64 bit) float." We'll discuss other options for these type codes in the following section.
Now that we've created an empty container array, we can fill the array with our lists of values:
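Filling the fields might look like this (a self-contained sketch; the printed result matches the record list shown below):

```python
import numpy as np

data = np.zeros(4, dtype={'names': ('name', 'age', 'weight'),
                          'formats': ('U10', 'i4', 'f8')})
data['name'] = ['Alice', 'Bob', 'Cathy', 'Doug']
data['age'] = [25, 45, 37, 19]
data['weight'] = [55.0, 85.5, 68.0, 61.5]
print(data)
```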
[('Alice', 25, 55.0) ('Bob', 45, 85.5) ('Cathy', 37, 68.0) ('Doug', 19, 61.5)]
As we had hoped, the data is now arranged together in one convenient block of memory.
The handy thing with structured arrays is that you can now refer to values either by index or by name:
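For example (a sketch continuing with the same records):

```python
import numpy as np

data = np.zeros(4, dtype={'names': ('name', 'age', 'weight'),
                          'formats': ('U10', 'i4', 'f8')})
data['name'] = ['Alice', 'Bob', 'Cathy', 'Doug']
data['age'] = [25, 45, 37, 19]
data['weight'] = [55.0, 85.5, 68.0, 61.5]

print(data['name'])   # a whole column, by field name
print(data[0])        # a whole row, by index
```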
array(['Alice', 'Bob', 'Cathy', 'Doug'], dtype='<U10')
('Alice', 25, 55.0)
Using Boolean masking, this even allows you to do some more sophisticated operations such as filtering on age:
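A sketch of such a filter, again on the same records:

```python
import numpy as np

data = np.zeros(4, dtype={'names': ('name', 'age', 'weight'),
                          'formats': ('U10', 'i4', 'f8')})
data['name'] = ['Alice', 'Bob', 'Cathy', 'Doug']
data['age'] = [25, 45, 37, 19]
data['weight'] = [55.0, 85.5, 68.0, 61.5]

young = data[data['age'] < 30]['name']   # names of people under 30
print(young)
```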
array(['Alice', 'Doug'], dtype='<U10')
Note that if you'd like to do any operations that are any more complicated than these, you should probably consider the Pandas package, covered in the next chapter. As we'll see, Pandas provides a DataFrame object, which is a structure built on NumPy arrays that offers a variety of useful data manipulation functionality similar to what we've shown here, as well as much, much more.
Structured array data types can be specified in a number of ways. Earlier, we saw the dictionary method:
dtype([('name', '<U10'), ('age', '<i4'), ('weight', '<f8')])
For clarity, numerical types can be specified using Python types or NumPy dtypes instead:
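For example (a sketch; the numeric widths then follow NumPy's defaults for the given Python and NumPy types):

```python
import numpy as np

dt = np.dtype({'names': ('name', 'age', 'weight'),
               'formats': ((np.str_, 10), int, np.float32)})
print(dt)
```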
A compound type can also be specified as a list of tuples:
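A sketch of the tuple form, matching the dtype printed below:

```python
import numpy as np

dt = np.dtype([('name', 'S10'), ('age', 'i4'), ('weight', 'f8')])
print(dt)
```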
dtype([('name', 'S10'), ('age', '<i4'), ('weight', '<f8')])
If the names of the types do not matter to you, you can specify the types alone in a comma-separated string:
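For example (NumPy then auto-names the fields f0, f1, f2, as printed below):

```python
import numpy as np

dt = np.dtype('S10,i4,f8')   # types only; field names are generated
print(dt)
```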
dtype([('f0', 'S10'), ('f1', '<i4'), ('f2', '<f8')])
The shortened string format codes may seem confusing, but they are built on simple principles. The first (optional) character is < or >, which means "little endian" or "big endian," respectively, and specifies the ordering convention for significant bits. The next character specifies the type of data: characters, bytes, ints, floating points, and so on (see the table below). The last character or characters represents the size of the object in bytes.
'b' Byte np.dtype('b')
'i' Signed integer np.dtype('i4') == np.int32
'u' Unsigned integer np.dtype('u1') == np.uint8
'f' Floating point np.dtype('f8') == np.float64
'c' Complex floating point np.dtype('c16') == np.complex128
'S', 'a' String np.dtype('S5')
'U' Unicode string np.dtype('U') == np.str_
'V' Raw data (void) np.dtype('V') == np.void
It is possible to define even more advanced compound types. For example, you can create a type where each element contains an array or matrix of values. Here, we'll create a data type with a mat component consisting of a
3\times 3
floating-point matrix:
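A sketch of such a dtype and a zero-filled element (the printed record matches the output shown below):

```python
import numpy as np

tp = np.dtype([('id', 'i8'), ('mat', 'f8', (3, 3))])
X = np.zeros(1, dtype=tp)
print(X[0])          # one record: an id plus a 3x3 matrix
print(X['mat'][0])   # just the matrix component
```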
(0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]) [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]]
Now each element in the X array consists of an id and a
3\times 3
matrix. Why would you use this rather than a simple multidimensional array, or perhaps a Python dictionary? The reason is that this NumPy dtype directly maps onto a C structure definition, so the buffer containing the array content can be accessed directly within an appropriately written C program. If you find yourself writing a Python interface to a legacy C or Fortran library that manipulates structured data, you'll probably find structured arrays quite useful!
NumPy also provides the np.recarray class, which is almost identical to the structured arrays just described, but with one additional feature: fields can be accessed as attributes rather than as dictionary keys. Recall that we previously accessed the ages by writing:
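That is, with the structured array from before:

```python
import numpy as np

data = np.zeros(4, dtype={'names': ('name', 'age', 'weight'),
                          'formats': ('U10', 'i4', 'f8')})
data['age'] = [25, 45, 37, 19]

print(data['age'])   # dictionary-key style access
```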
If we view our data as a record array instead, we can access this with slightly fewer keystrokes:
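A sketch of the record-array view:

```python
import numpy as np

data = np.zeros(4, dtype={'names': ('name', 'age', 'weight'),
                          'formats': ('U10', 'i4', 'f8')})
data['age'] = [25, 45, 37, 19]

data_rec = data.view(np.recarray)   # same memory, new interface
print(data_rec.age)                 # fields accessible as attributes
```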
The downside is that for record arrays, there is some extra overhead involved in accessing the fields, even when using the same syntax. We can see this here:
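A rough way to see the overhead is to time the three access styles (a sketch; absolute numbers depend on the machine, but attribute access on a recarray typically comes out slowest):

```python
import timeit
import numpy as np

data = np.zeros(4, dtype={'names': ('name', 'age', 'weight'),
                          'formats': ('U10', 'i4', 'f8')})
data_rec = data.view(np.recarray)

t_struct = timeit.timeit(lambda: data['age'], number=100_000)       # structured array, key access
t_rec_key = timeit.timeit(lambda: data_rec['age'], number=100_000)  # recarray, key access
t_rec_attr = timeit.timeit(lambda: data_rec.age, number=100_000)    # recarray, attribute access
print(t_struct, t_rec_key, t_rec_attr)
```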
|
Exponential Functions - Problem Solving | Brilliant Math & Science Wiki
Cheolho Han, Pi Han Goh, Mahindra Jain, and
An exponential function is a function of the form
f(x)=a \cdot b^x,
where a and b are real numbers and b is positive. Exponential functions are used to model relationships with exponential growth or decay. Exponential growth occurs when a function's rate of change is proportional to the function's current value. Whenever an exponential function is decreasing, this is often referred to as exponential decay.
Suppose that the population of rabbits increases by 1.5 times a month. When the initial population is 100, what is the approximate integer population after a year?
The population after n months is given by 100 \times 1.5^n. Therefore, the approximate population after a year is
100 \times 1.5^{12} \approx 100 \times 129.75 = 12975. \ _\square
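Checking the arithmetic with a short Python sketch:

```python
population = 100 * 1.5 ** 12   # 100 rabbits, 12 months of 1.5x growth
print(round(population))       # 12975
```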
Suppose that the population of rabbits increases by 1.5 times a month. At the end of a month, 10 rabbits immigrate in. When the initial population is 100, what is the approximate integer population after a year?
Let p(n) be the population after n months. Then
p(n+2) = 1.5 p(n+1) + 10 \quad \text{and} \quad p(n+1) = 1.5 p(n) + 10,
which implies
p(n+2) - p(n+1) = 1.5 \big(p(n+1) - p(n)\big).
Then the population after n months is
p(0) + \big(p(1) - p(0)\big) \frac{1.5^{n} - 1}{1.5 - 1} .
Therefore, the population after a year is given by
\begin{aligned} 100 + (160 - 100) \frac{1.5^{12} - 1}{1.5 - 1} &\approx 100 + 60 \times 257.493 \\ &\approx 15550. \ _\square \end{aligned}
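The recurrence itself can also be iterated directly, which gives the same answer without the closed form (a sketch):

```python
p = 100               # initial population
for _ in range(12):   # one year of months
    p = 1.5 * p + 10  # 1.5x growth plus 10 immigrants each month
print(round(p))       # 15550
```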
Suppose that the annual interest rate is 3%. When the initial balance is 1,000 dollars, how many years would it take to have 10,000 dollars?
The balance after n years is 1000 \times 1.03^n. To have the balance 10,000 dollars, we need
\begin{aligned} 1000 \times 1.03^n &\ge 10000 \\ 1.03^n &\ge 10\\ n \log_{10}{1.03} &\ge 1 \\ n &\ge 77.898\dots. \end{aligned}
Therefore, it would take 78 years. \ _\square
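The threshold can be checked numerically (a sketch):

```python
import math

# Smallest n with 1.03^n >= 10, i.e. n >= 1 / log10(1.03)
n = 1 / math.log10(1.03)
print(math.ceil(n))   # 78
```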
The half-life of carbon-14 is approximately 5730 years. Humans began agriculture approximately ten thousand years ago. If we had 1 kg of carbon-14 at that moment, how much carbon-14 in grams would we have now?
The weight of carbon-14 after n years is
1000 \times \left( \frac{1}{2} \right)^{\frac{n}{5730}}
in grams. Therefore, the weight after 10000 years is given by
1000 \times \left( \frac{1}{2} \right)^{\frac{10000}{5730}} \approx 1000 \times 0.298 = 298.
Therefore, we would have approximately 298 g. \ _\square
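The decay calculation, sketched in Python:

```python
remaining = 1000 * 0.5 ** (10000 / 5730)   # grams left after 10000 years
print(round(remaining))                    # 298
```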
Given three numbers such that 0 < a < b < c < 1, define
A = a^{a}b^{b}c^{c}, \quad B = a^{a}b^{c}c^{b} , \quad C = a^{b}b^{c}c^{a}.
How do the values of A, B, C compare to each other?
C < A < B \quad B < C < A \quad C < B < A \quad A < C < B
\large 2^{x} = 3^{y} = 12^{z}
If the equation above is fulfilled for non-zero values of x,y,z, find the value of \frac { z(x+2y) }{ xy }.
\frac17 \quad 7 \quad 1 \quad 3
If 5^x = 6^y = 30^7, find the value of \frac{ xy}{x+y}.
If 27^{x} = 64^{y} = 125^{z} = 60, find the value of
\large\frac{2013xyz}{xy+yz+xz}.
\frac{26}{23} \quad \frac{25}{23} \quad \frac{27}{23} \quad \frac{24}{23}
\large f(x)=\frac{e^x+e^{-x}}{e^x-e^{-x}}
Suppose we define the function f(x) as above. If f(a)=\frac{5}{3} and f(b)=\frac{7}{5}, what is f(a+b)?
\large \left(1+\frac{1}{x}\right)^{x+1}=\left(1+\frac{1}{2000}\right)^{2000}
If x is an integer that satisfies the equation above, find the value of x.
\Large a^{(a-1)^{(a-2)}} = a^{a^2-3a+2}
Solve the equation above for a.
\large (x^2+5x+5)^{x^2-10x+21}=1 .
\large |x|^{(x^2-x-2)} < 1
If the solution to the inequality above is x\in (A,B), find A+B.
Cite as: Exponential Functions - Problem Solving. Brilliant.org. Retrieved from https://brilliant.org/wiki/exponential-functions/
|
Imaging Depth-Resolved Tissue Birefringence With a Single Detector | J. Med. Devices | ASME Digital Collection
Muhammad K. Al-Kaisi
Al-Kaisi, M. K., and Akkin, T. (June 25, 2008). "Imaging Depth-Resolved Tissue Birefringence With a Single Detector." ASME. J. Med. Devices. June 2008; 2(2): 027559. https://doi.org/10.1115/1.2936214
We report a novel frequency multiplexed optical coherence tomography (FM-OCT) system that is capable of measuring depth-resolved tissue birefringence from a single record of a single detector. The FM-OCT system utilizes polarization-maintaining-fiber based components. The orthogonal channels of the polarization-maintaining-fiber and the cross-terms are frequency multiplexed. After recording the interference signal, a set of digital band-pass filters extract the polarization information. A rapid scanning optical delay line in the reference arm compensates for dispersion and allows a real-time display of tissue birefringence. The axial resolution provided by a superluminescent diode working at 855 nm with 28 nm bandwidth is 12 μm. The accuracy and sensitivity measurements suggest that the FM-OCT system is suitable for depth-resolved birefringence characterization of biological tissue.
band-pass filters, biological tissues, birefringence, light polarisation, medical image processing, multiplexing, optical tomography
Biological tissues, Double refraction, Filters, Imaging, Polarization (Light), Sensors
The Separation of Membrane and Bending Shears in Shell With Two Birefringent Coatings
Time Dependent Polarization and Strain Evolution Around a Circular Hole in Ferroelectrics
|
FlipDimension - Maple Help
FlipDimension(A, dim)
dim - integer; dimension to flip
The FlipDimension command reverses the order of elements in dimension, dim, of Array, A.
This function is part of the ArrayTools package, so it can be used in the short form FlipDimension(..) only after executing the command with(ArrayTools). However, it can always be accessed through the long form of the command by using ArrayTools[FlipDimension](..).
\mathrm{with}\left(\mathrm{ArrayTools}\right):
\mathrm{FlipDimension}\left(\mathrm{Array}\left([[a,b,c],[d,e,f],[g,h,i]]\right),1\right)
[\begin{array}{ccc}\textcolor[rgb]{0,0,1}{g}& \textcolor[rgb]{0,0,1}{h}& \textcolor[rgb]{0,0,1}{i}\\ \textcolor[rgb]{0,0,1}{d}& \textcolor[rgb]{0,0,1}{e}& \textcolor[rgb]{0,0,1}{f}\\ \textcolor[rgb]{0,0,1}{a}& \textcolor[rgb]{0,0,1}{b}& \textcolor[rgb]{0,0,1}{c}\end{array}]
\mathrm{FlipDimension}\left(\mathrm{Array}\left([[a,b,c],[d,e,f],[g,h,i]]\right),2\right)
[\begin{array}{ccc}\textcolor[rgb]{0,0,1}{c}& \textcolor[rgb]{0,0,1}{b}& \textcolor[rgb]{0,0,1}{a}\\ \textcolor[rgb]{0,0,1}{f}& \textcolor[rgb]{0,0,1}{e}& \textcolor[rgb]{0,0,1}{d}\\ \textcolor[rgb]{0,0,1}{i}& \textcolor[rgb]{0,0,1}{h}& \textcolor[rgb]{0,0,1}{g}\end{array}]
|
Global Constraint Catalog: Cstretch_circuit
[Pesant01]
\mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚌𝚒𝚛𝚌𝚞𝚒𝚝}\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂},\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}\right)
\mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛}-\mathrm{𝚍𝚟𝚊𝚛}\right)
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚕}-\mathrm{𝚒𝚗𝚝},\mathrm{𝚕𝚖𝚒𝚗}-\mathrm{𝚒𝚗𝚝},\mathrm{𝚕𝚖𝚊𝚡}-\mathrm{𝚒𝚗𝚝}\right)
|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|>0
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂},\mathrm{𝚟𝚊𝚛}\right)
|\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}|>0
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂},\left[\mathrm{𝚟𝚊𝚕},\mathrm{𝚕𝚖𝚒𝚗},\mathrm{𝚕𝚖𝚊𝚡}\right]\right)
\mathrm{𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\left(\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂},\mathrm{𝚟𝚊𝚕}\right)
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚕𝚖𝚒𝚗}\le \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚕𝚖𝚊𝚡}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚕𝚖𝚒𝚗}\le |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|
\mathrm{𝚜𝚞𝚖}
\left(\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚕𝚖𝚒𝚗}\right)\le |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|
In order to define the meaning of the \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚌𝚒𝚛𝚌𝚞𝚒𝚝} constraint, we first introduce the notions of stretch and span. Let n be the number of variables of the collection \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}, and let i and j \left(0\le i<n,0\le j<n\right) be two positions within the collection of variables \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂} such that the following conditions apply:
If i\le j, then all variables {X}_{i},\cdots ,{X}_{j} take a same value from the set of values of the \mathrm{𝚟𝚊𝚕} attribute.
If i>j, then all variables {X}_{i},\cdots ,{X}_{n-1},{X}_{0},\cdots ,{X}_{j} take a same value from the set of values of the \mathrm{𝚟𝚊𝚕} attribute.
{X}_{\left(i-1\right)\bmod n} is different from {X}_{i}.
{X}_{\left(j+1\right)\bmod n} is different from {X}_{j}.
We call such a set of variables a stretch. The span of the stretch is equal to 1+\left(j-i\right)\bmod n, while the value of the stretch is the value of {X}_{i}. We now define the condition enforced by the \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚌𝚒𝚛𝚌𝚞𝚒𝚝} constraint:
Each item \left(\mathrm{𝚟𝚊𝚕}-v,\mathrm{𝚕𝚖𝚒𝚗}-s,\mathrm{𝚕𝚖𝚊𝚡}-t\right) of the \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂} collection enforces the minimum value s as well as the maximum value t for the span of a stretch of value v.
Having an item \left(\mathrm{𝚟𝚊𝚕}-v,\mathrm{𝚕𝚖𝚒𝚗}-s,\mathrm{𝚕𝚖𝚊𝚡}-t\right) with s strictly greater than 0 does not mean that value v should be assigned to one of the variables of collection \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}. It rather means that, when value v is used, all stretches of value v must have a span that belongs to interval \left[s,t\right].
A variable of the collection \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂} may be assigned a value that is not defined in the \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂} collection.
\left(\begin{array}{c}〈6,6,3,1,1,1,6,6〉,\hfill \\ 〈\begin{array}{ccc}\mathrm{𝚟𝚊𝚕}-1\hfill & \mathrm{𝚕𝚖𝚒𝚗}-2\hfill & \mathrm{𝚕𝚖𝚊𝚡}-4,\hfill \\ \mathrm{𝚟𝚊𝚕}-2\hfill & \mathrm{𝚕𝚖𝚒𝚗}-2\hfill & \mathrm{𝚕𝚖𝚊𝚡}-3,\hfill \\ \mathrm{𝚟𝚊𝚕}-3\hfill & \mathrm{𝚕𝚖𝚒𝚗}-1\hfill & \mathrm{𝚕𝚖𝚊𝚡}-6,\hfill \\ \mathrm{𝚟𝚊𝚕}-6\hfill & \mathrm{𝚕𝚖𝚒𝚗}-2\hfill & \mathrm{𝚕𝚖𝚊𝚡}-4\hfill \end{array}〉\hfill \end{array}\right)
The \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚌𝚒𝚛𝚌𝚞𝚒𝚝} constraint holds since the sequence 66311166 contains three stretches 6666, 3, and 111, respectively verifying the following conditions:
The span of the first stretch 6666 is located within interval \left[2,4\right] (i.e., the limit associated with value 6).
The span of the second stretch 3 is located within interval \left[1,6\right] (i.e., the limit associated with value 3).
The span of the third stretch 111 is located within interval \left[2,4\right] (i.e., the limit associated with value 1).
|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|>1
\mathrm{𝚛𝚊𝚗𝚐𝚎}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}.\mathrm{𝚟𝚊𝚛}\right)>1
|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|>|\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}|
|\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}|>1
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚕𝚖𝚊𝚡}\le |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}.\mathrm{𝚟𝚊𝚛}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚟𝚊𝚕}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}.\mathrm{𝚟𝚊𝚛}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚟𝚊𝚕}
The article [Pesant01], which originally introduced the \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑} constraint, quotes rostering problems as typical examples of use of this constraint.
We split the original \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑} constraint into the \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚌𝚒𝚛𝚌𝚞𝚒𝚝} and \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚙𝚊𝚝𝚑} constraints that respectively use the \mathrm{𝐶𝐼𝑅𝐶𝑈𝐼𝑇} \mathrm{𝐿𝑂𝑂𝑃} and the \mathrm{𝑃𝐴𝑇𝐻} \mathrm{𝐿𝑂𝑂𝑃} arc generators. We also reorganise the parameters: the \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂} collection describes the attributes of each value that can be assigned to the variables of the \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚌𝚒𝚛𝚌𝚞𝚒𝚝} constraint. Finally we skipped the pattern constraint that tells what values can follow a given value.
A first filtering algorithm was described in the original article of G. Pesant [Pesant01]. An algorithm that also generates explanations is given in [RochartJussien03]. The first filtering algorithm achieving arc-consistency is depicted in [Hellsten04], [HellstenPesantBeek04]. This algorithm is based on dynamic programming and handles the fact that some values can be followed by only a given subset of values.
The \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚌𝚒𝚛𝚌𝚞𝚒𝚝} constraint can be reformulated in terms of a \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚙𝚊𝚝𝚑} constraint. Let \mathrm{𝐿𝑀𝐴𝑋} denote the maximum value taken by the \mathrm{𝚕𝚖𝚊𝚡} attribute within the items of the collection \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}, let n be the number of variables of the collection \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}, and let \delta =\min\left(\mathrm{𝐿𝑀𝐴𝑋},n\right). The first and second arguments of the \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚙𝚊𝚝𝚑} constraint are created in the following way:
We pass to the \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚙𝚊𝚝𝚑} constraint the variables of the collection \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}, to which we add the \delta first variables of the collection \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}.
We pass to the \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚙𝚊𝚝𝚑} constraint the values of the collection \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂} with the following modification: for each value v for which the corresponding \mathrm{𝚕𝚖𝚊𝚡} attribute is greater than or equal to n, we reset its value to n+\delta.
Even though \mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚙𝚊𝚝𝚑} can achieve arc-consistency, this reformulation may not achieve arc-consistency since it duplicates variables.
Using this reformulation, the example
\mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚌𝚒𝚛𝚌𝚞𝚒𝚝}
\left(〈6,6,3,1,1,1,6,6〉,
〈\mathrm{𝚟𝚊𝚕}-1\mathrm{𝚕𝚖𝚒𝚗}-2\mathrm{𝚕𝚖𝚊𝚡}-4,\mathrm{𝚟𝚊𝚕}-2\mathrm{𝚕𝚖𝚒𝚗}-2\mathrm{𝚕𝚖𝚊𝚡}-3,
\mathrm{𝚟𝚊𝚕}-3\mathrm{𝚕𝚖𝚒𝚗}-1\mathrm{𝚕𝚖𝚊𝚡}-6,\mathrm{𝚟𝚊𝚕}-6\mathrm{𝚕𝚖𝚒𝚗}-2\mathrm{𝚕𝚖𝚊𝚡}-4〉\right)
of the Example slot is reformulated as:
\mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚙𝚊𝚝𝚑}
\left(〈6,6,3,1,1,1,6,6,6,6,3,1,1,1〉,
〈\mathrm{𝚟𝚊𝚕}-1\ \mathrm{𝚕𝚖𝚒𝚗}-2\ \mathrm{𝚕𝚖𝚊𝚡}-4,\ \mathrm{𝚟𝚊𝚕}-2\ \mathrm{𝚕𝚖𝚒𝚗}-2\ \mathrm{𝚕𝚖𝚊𝚡}-3,
\mathrm{𝚟𝚊𝚕}-3\ \mathrm{𝚕𝚖𝚒𝚗}-1\ \mathrm{𝚕𝚖𝚊𝚡}-6,\ \mathrm{𝚟𝚊𝚕}-6\ \mathrm{𝚕𝚖𝚒𝚗}-2\ \mathrm{𝚕𝚖𝚊𝚡}-4〉\right)
In the reformulation
\delta
was equal to 6, and the
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}
collection was left unchanged since no
\mathrm{𝚕𝚖𝚊𝚡}
attribute was equal to the number of variables of the
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
collection (i.e., 8).
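The variable-duplication scheme just described can be sketched in a few lines of Python. The names `Value` and `reformulate` are illustrative helpers, not part of the catalogue; the sketch reproduces the Example slot's reformulation.

```python
# Sketch of the stretch_circuit -> stretch_path reformulation described above:
# append the delta = min(LMAX, n) first variables, and relax any lmax >= n.
from dataclasses import dataclass

@dataclass
class Value:
    val: int
    lmin: int
    lmax: int

def reformulate(variables, values):
    """Build the first two arguments of the equivalent stretch_path constraint."""
    n = len(variables)
    lmax_max = max(v.lmax for v in values)        # LMAX
    delta = min(lmax_max, n)
    new_vars = variables + variables[:delta]      # duplicate the delta first variables
    new_vals = [Value(v.val, v.lmin, n + delta if v.lmax >= n else v.lmax)
                for v in values]                  # any lmax >= n is reset to n + delta
    return new_vars, new_vals

vars_ = [6, 6, 3, 1, 1, 1, 6, 6]
vals = [Value(1, 2, 4), Value(2, 2, 3), Value(3, 1, 6), Value(6, 2, 4)]
new_vars, new_vals = reformulate(vars_, vals)
print(new_vars)  # [6, 6, 3, 1, 1, 1, 6, 6, 6, 6, 3, 1, 1, 1]
```

Here δ = 6 and no lmax reaches 8, so the values collection is left unchanged, matching the text.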
\mathrm{𝚐𝚛𝚘𝚞𝚙}
(timetabling constraint),
\mathrm{𝚙𝚊𝚝𝚝𝚎𝚛𝚗}
(sliding sequence constraint, timetabling constraint),
\mathrm{𝚜𝚕𝚒𝚍𝚒𝚗𝚐}_\mathrm{𝚍𝚒𝚜𝚝𝚛𝚒𝚋𝚞𝚝𝚒𝚘𝚗}
(sliding sequence constraint),
\mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚙𝚊𝚝𝚑}
(sliding sequence constraint, timetabling constraint).
\mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚙𝚊𝚝𝚑}
characteristic of a constraint: cyclic.
constraint type: timetabling constraint, sliding sequence constraint.
filtering: dynamic programming, arc-consistency, duplicated variables.
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝐶𝐼𝑅𝐶𝑈𝐼𝑇}
↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{1},\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{2}\right)
\mathrm{𝐿𝑂𝑂𝑃}
↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{1},\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{2}\right)
•\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{1}.\mathrm{𝚟𝚊𝚛}=\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚟𝚊𝚕}
•\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{2}.\mathrm{𝚟𝚊𝚛}=\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚟𝚊𝚕}
•
\mathrm{𝚗𝚘𝚝}_\mathrm{𝚒𝚗}
\left(
\mathrm{𝐌𝐈𝐍}_\mathrm{𝐍𝐂𝐂}
,1,\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚕𝚖𝚒𝚗}-1\right)
•
\mathrm{𝐌𝐀𝐗}_\mathrm{𝐍𝐂𝐂}
\le \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚕𝚖𝚊𝚡}
Part (A) of Figure 5.375.1 shows the initial graphs associated with values 1, 2, 3 and 6 of the Example slot. Part (B) of Figure 5.375.1 shows the corresponding final graphs associated with values 1, 3 and 6. Since value 2 is not assigned to any variable of the
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
collection the final graph associated with value 2 is empty. The
\mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚌𝚒𝚛𝚌𝚞𝚒𝚝}
constraint holds since:
For value 1 we have one connected component for which the number of vertices is greater than or equal to 2 and less than or equal to 4,
For value 2 we do not have any connected component,
For value 6 we have one connected component for which the number of vertices is greater than or equal to 2 and less than or equal to 4.
\mathrm{𝚜𝚝𝚛𝚎𝚝𝚌𝚑}_\mathrm{𝚌𝚒𝚛𝚌𝚞𝚒𝚝}
Global Constraint Catalog: subgraph_isomorphism
[Gregor79]
\mathrm{𝚜𝚞𝚋𝚐𝚛𝚊𝚙𝚑}_\mathrm{𝚒𝚜𝚘𝚖𝚘𝚛𝚙𝚑𝚒𝚜𝚖}\left(\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽},\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃},\mathrm{𝙵𝚄𝙽𝙲𝚃𝙸𝙾𝙽}\right)
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚒𝚗𝚍𝚎𝚡}-\mathrm{𝚒𝚗𝚝},\mathrm{𝚜𝚞𝚌𝚌}-\mathrm{𝚜𝚒𝚗𝚝}\right)
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚒𝚗𝚍𝚎𝚡}-\mathrm{𝚒𝚗𝚝},\mathrm{𝚜𝚞𝚌𝚌}-\mathrm{𝚜𝚟𝚊𝚛}\right)
\mathrm{𝙵𝚄𝙽𝙲𝚃𝙸𝙾𝙽}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚒𝚖𝚊𝚐𝚎}-\mathrm{𝚍𝚟𝚊𝚛}\right)
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽},\left[\mathrm{𝚒𝚗𝚍𝚎𝚡},\mathrm{𝚜𝚞𝚌𝚌}\right]\right)
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}.\mathrm{𝚒𝚗𝚍𝚎𝚡}\ge 1
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}.\mathrm{𝚒𝚗𝚍𝚎𝚡}\le |\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}|
\mathrm{𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\left(\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽},\mathrm{𝚒𝚗𝚍𝚎𝚡}\right)
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}.\mathrm{𝚜𝚞𝚌𝚌}\ge 1
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}.\mathrm{𝚜𝚞𝚌𝚌}\le |\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}|
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃},\left[\mathrm{𝚒𝚗𝚍𝚎𝚡},\mathrm{𝚜𝚞𝚌𝚌}\right]\right)
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}.\mathrm{𝚒𝚗𝚍𝚎𝚡}\ge 1
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}.\mathrm{𝚒𝚗𝚍𝚎𝚡}\le |\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}|
\mathrm{𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\left(\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃},\mathrm{𝚒𝚗𝚍𝚎𝚡}\right)
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}.\mathrm{𝚜𝚞𝚌𝚌}\ge 1
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}.\mathrm{𝚜𝚞𝚌𝚌}\le |\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}|
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝙵𝚄𝙽𝙲𝚃𝙸𝙾𝙽},\left[\mathrm{𝚒𝚖𝚊𝚐𝚎}\right]\right)
\mathrm{𝙵𝚄𝙽𝙲𝚃𝙸𝙾𝙽}.\mathrm{𝚒𝚖𝚊𝚐𝚎}\ge 1
\mathrm{𝙵𝚄𝙽𝙲𝚃𝙸𝙾𝙽}.\mathrm{𝚒𝚖𝚊𝚐𝚎}\le |\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}|
\mathrm{𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\left(\mathrm{𝙵𝚄𝙽𝙲𝚃𝙸𝙾𝙽},\mathrm{𝚒𝚖𝚊𝚐𝚎}\right)
|\mathrm{𝙵𝚄𝙽𝙲𝚃𝙸𝙾𝙽}|=|\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}|
Given two directed graphs
\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}
\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}
enforce a one-to-one correspondence, defined by the function
\mathrm{𝙵𝚄𝙽𝙲𝚃𝙸𝙾𝙽}
, between the vertices of the graph
\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}
and the vertices of an induced subgraph of
\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}
so that, if there is an arc from
u
to v
in the graph
\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}
, then there is also an arc from the image of
u
to the image of v
in the induced subgraph of
\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}
. The vertices of both graphs are respectively defined by the two collections of vertices
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}
. Within collection
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}
the set of successors of each node is fixed, while this is not the case for the collection
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}
. This stems from the fact that the
\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}
graph is not fixed (i.e., the lower and upper bounds of the target graph are specified when we post the
\mathrm{𝚜𝚞𝚋𝚐𝚛𝚊𝚙𝚑}_\mathrm{𝚒𝚜𝚘𝚖𝚘𝚛𝚙𝚑𝚒𝚜𝚖}
constraint, while the induced subgraph of a solution to the
\mathrm{𝚜𝚞𝚋𝚐𝚛𝚊𝚙𝚑}_\mathrm{𝚒𝚜𝚘𝚖𝚘𝚛𝚙𝚑𝚒𝚜𝚖}
constraint corresponds to a graph for which the upper and lower bounds are identical).
\left(\begin{array}{c}〈\begin{array}{cc}\mathrm{𝚒𝚗𝚍𝚎𝚡}-1\hfill & \mathrm{𝚜𝚞𝚌𝚌}-\left\{2,4\right\},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-2\hfill & \mathrm{𝚜𝚞𝚌𝚌}-\left\{1,3,4\right\},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-3\hfill & \mathrm{𝚜𝚞𝚌𝚌}-\varnothing ,\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-4\hfill & \mathrm{𝚜𝚞𝚌𝚌}-\varnothing \hfill \end{array}〉,\hfill \\ 〈\begin{array}{cc}\mathrm{𝚒𝚗𝚍𝚎𝚡}-1\hfill & \mathrm{𝚜𝚞𝚌𝚌}-\varnothing ,\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-2\hfill & \mathrm{𝚜𝚞𝚌𝚌}-\left\{3,4,5\right\},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-3\hfill & \mathrm{𝚜𝚞𝚌𝚌}-\varnothing ,\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-4\hfill & \mathrm{𝚜𝚞𝚌𝚌}-\left\{2,5\right\},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-5\hfill & \mathrm{𝚜𝚞𝚌𝚌}-\varnothing \hfill \end{array}〉,\hfill \\ 〈4,2,3,5〉\hfill \end{array}\right)
Figure 5.382.1 gives the pattern (see Part (A)) and target graph (see Part (B)) of the Example slot as well as the one-to-one correspondence (see Part (C)) between the pattern graph and the induced subgraph of the target graph. The
\mathrm{𝚜𝚞𝚋𝚐𝚛𝚊𝚙𝚑}_\mathrm{𝚒𝚜𝚘𝚖𝚘𝚛𝚙𝚑𝚒𝚜𝚖}
constraint holds since:
To the arc from vertex 1 to vertex 4 in the pattern graph corresponds the arc from vertex 4 to 5 in the induced subgraph of the target graph.
Figure 5.382.1. Illustration of the Example slot: (A) The pattern graph, (B) a possible initial target graph – plain arcs must belong to the induced subgraph, while dotted arcs may or may not belong to the induced subgraph – and (C) the correspondence, denoted by thick dashed arcs, between the vertices of the pattern graph and the vertices of the induced subgraph of the target graph. Within a set variable a bold value (respectively a plain value) represents a value that surely belongs (respectively may belong) to the set.
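For ground instances the condition can be checked by brute force. The sketch below (helper names are illustrative) encodes both graphs as successor dictionaries and verifies the FUNCTION of the Example slot.

```python
# Check the subgraph_isomorphism condition: f must be injective and every
# pattern arc (u, v) must map to a target arc (f(u), f(v)).
def check_mapping(pattern_succ, target_succ, f):
    if len(set(f.values())) != len(f):   # FUNCTION must be a one-to-one mapping
        return False
    return all(f[v] in target_succ[f[u]]
               for u, succs in pattern_succ.items() for v in succs)

pattern = {1: {2, 4}, 2: {1, 3, 4}, 3: set(), 4: set()}
target = {1: set(), 2: {3, 4, 5}, 3: set(), 4: {2, 5}, 5: set()}
f = {1: 4, 2: 2, 3: 3, 4: 5}              # the FUNCTION collection <4,2,3,5>
print(check_mapping(pattern, target, f))  # True
```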
|\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}|>1
|\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}|>1
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝙿𝙰𝚃𝚃𝙴𝚁𝙽}
\mathrm{𝙽𝙾𝙳𝙴𝚂}_\mathrm{𝚃𝙰𝚁𝙶𝙴𝚃}
Within the context of constraint programming the constraint was used for finding symmetries [Puget03], [Puget05a], [Puget05b].
[Ullmann76], [Regin95], [LarrosaValiente02], [ZampelliDevilleSolnonSorlinDupont07].
\mathrm{𝚐𝚛𝚊𝚙𝚑}_\mathrm{𝚒𝚜𝚘𝚖𝚘𝚛𝚙𝚑𝚒𝚜𝚖}
symmetry: symmetry.
|
Global Constraint Catalog: nvector
Introduced by G. Chabert as a generalisation of
\mathrm{𝚗𝚟𝚊𝚕𝚞𝚎}
\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛}\left(\mathrm{𝙽𝚅𝙴𝙲},\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}\right)
\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛𝚜}
\mathrm{𝚗𝚙𝚘𝚒𝚗𝚝}
\mathrm{𝚗𝚙𝚘𝚒𝚗𝚝𝚜}
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛}-\mathrm{𝚍𝚟𝚊𝚛}\right)
\mathrm{𝙽𝚅𝙴𝙲}
\mathrm{𝚍𝚟𝚊𝚛}
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚎𝚌}-\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁}\right)
|\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁}|\ge 1
\mathrm{𝙽𝚅𝙴𝙲}\ge \mathrm{𝚖𝚒𝚗}\left(1,|\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}|\right)
\mathrm{𝙽𝚅𝙴𝙲}\le |\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}|
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂},\mathrm{𝚟𝚎𝚌}\right)
\mathrm{𝚜𝚊𝚖𝚎}_\mathrm{𝚜𝚒𝚣𝚎}
\left(\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂},\mathrm{𝚟𝚎𝚌}\right)
\mathrm{𝙽𝚅𝙴𝙲}
is the number of distinct tuples of values taken by the vectors of the collection
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}
. Two tuples of values
〈{A}_{1},{A}_{2},\cdots ,{A}_{m}〉
〈{B}_{1},{B}_{2},\cdots ,{B}_{m}〉
distinct
if and only if there exists an integer
i\in \left[1,m\right]
{A}_{i}\ne {B}_{i}
\left(\begin{array}{c}2,〈\begin{array}{c}\mathrm{𝚟𝚎𝚌}-〈5,6〉,\hfill \\ \mathrm{𝚟𝚎𝚌}-〈5,6〉,\hfill \\ \mathrm{𝚟𝚎𝚌}-〈9,3〉,\hfill \\ \mathrm{𝚟𝚎𝚌}-〈5,6〉,\hfill \\ \mathrm{𝚟𝚎𝚌}-〈9,3〉\hfill \end{array}〉\hfill \end{array}\right)
\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛}
\mathrm{𝙽𝚅𝙴𝙲}=2
is set to the number of distinct tuples of values (i.e., tuples
〈5,6〉
〈9,3〉
) occurring within the collection
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}
. Figure 5.290.1 depicts with a thick rectangle a possible initial domain for each of the five vectors and with a grey circle each tuple of values of the corresponding solution.
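On ground vectors NVEC is just the number of distinct tuples, so a set of tuples counts it directly; a minimal sketch with the Example slot's data:

```python
# Count the distinct tuples of values among the vectors (the value of NVEC).
def nvec(vectors):
    return len({tuple(v) for v in vectors})

vectors = [(5, 6), (5, 6), (9, 3), (5, 6), (9, 3)]
print(nvec(vectors))  # 2
```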
Figure 5.290.1. Possible initial domains (
{C}_{11}\in \left[1,6\right]
{C}_{12}\in \left[2,6\right]
{C}_{21}\in \left[3,5\right]
{C}_{22}\in \left[6,9\right]
{C}_{31}\in \left[4,10\right]
{C}_{32}\in \left[1,4\right]
{C}_{41}\in \left[5,9\right]
{C}_{42}\in \left[3,7\right]
{C}_{51}\in \left[9,11\right]
{C}_{52}\in \left[0,5\right]
) and solution corresponding to the Example slot: we have two distinct vectors (
\mathrm{𝙽𝚅𝙴𝙲}=2
|\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁}|>1
\mathrm{𝙽𝚅𝙴𝙲}>1
\mathrm{𝙽𝚅𝙴𝙲}<|\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}|
|\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}|>1
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}.\mathrm{𝚟𝚎𝚌}
All occurrences of two distinct tuples of values of
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}.\mathrm{𝚟𝚎𝚌}
can be swapped; all occurrences of a tuple of values of
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}.\mathrm{𝚟𝚎𝚌}
\mathrm{𝙽𝚅𝙴𝙲}
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}
\mathrm{𝙽𝚅𝙴𝙲}=1
|\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}|>0
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}
\mathrm{𝙽𝚅𝙴𝙲}=|\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}|
It was shown in [ChabertLorca09], [ChabertJaulinLorca09] that finding out whether an
\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛}
constraint has a solution or not is NP-hard (i.e., the restriction to the rectangle case and to the atmost side of the
\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛}
were considered for this purpose). This was achieved by reduction from the rectangle clique partition problem.
Assume the collection
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}
is not empty (otherwise
\mathrm{𝙽𝚅𝙴𝙲}=0
). In this context, let
n
m
respectively denote the number of vectors of the collection
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}
and the number of components of each vector. Furthermore, let
{\alpha }_{i}=min\left(\underline{{C}_{1i}},\underline{{C}_{2i}},\cdots ,\underline{{C}_{ni}}\right)
{\beta }_{i}=max\left(\overline{{C}_{1i}},\overline{{C}_{2i}},\cdots ,\overline{{C}_{ni}}\right)
{\gamma }_{i}={\beta }_{i}-{\alpha }_{i}+1
\left(i\in \left[1,m\right]\right)
. By associating to each vector
〈{C}_{k1},{C}_{k2},\cdots ,{C}_{km}〉,\left(k\in \left[1,n\right]\right)
{D}_{k}=\sum _{1\le i\le m}\left(\left(\prod _{i<j\le m}{\gamma }_{j}\right)·\left({C}_{ki}-{\alpha }_{i}\right)\right),
\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛}
\left(\mathrm{𝙽𝚅𝙴𝙲},
〈\mathrm{𝚟𝚎𝚌}-〈{C}_{11},{C}_{12},\cdots ,{C}_{1m}〉,
\mathrm{𝚟𝚎𝚌}-〈{C}_{21},{C}_{22},\cdots ,{C}_{2m}〉,
\cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots
\mathrm{𝚟𝚎𝚌}-〈{C}_{n1},{C}_{n2},\cdots ,{C}_{nm}〉〉\right)
can be expressed in terms of the constraint
\mathrm{𝚗𝚟𝚊𝚕𝚞𝚎}
\left(\mathrm{𝙽𝚅𝙴𝙲},〈{D}_{1},{D}_{2},\cdots ,{D}_{n}〉\right)
Note that the previous reformulation does not work anymore if the variables have a continuous domain, or if an overflow occurs while propagating the equality constraint
{D}_{k}={\sum }_{1\le i\le m}\left(\left({\prod }_{i<j\le m}{\gamma }_{j}\right)·\left({C}_{ki}-{\alpha }_{i}\right)\right)
(i.e., the number of components
m
is too big).
When using this reformulation with respect to the Example slot we first introduce
{D}_{1}=1\cdot \left(6-3\right)+4\cdot \left(5-5\right)=3
{D}_{2}=1\cdot \left(6-3\right)+4\cdot \left(5-5\right)=3
{D}_{3}=1\cdot \left(3-3\right)+4\cdot \left(9-5\right)=16
{D}_{4}=1\cdot \left(6-3\right)+4\cdot \left(5-5\right)=3
{D}_{5}=1\cdot \left(3-3\right)+4\cdot \left(9-5\right)=16
and then get the constraint
\mathrm{𝚗𝚟𝚊𝚕𝚞𝚎}
\left(2,〈3,3,16,3,16〉\right)
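For ground vectors the reformulation can be reproduced in a few lines; `encode` is an illustrative helper that reads the bounds α_i, β_i off the vectors themselves and computes the codes D_k of the worked example.

```python
# Mixed-radix encoding of each vector into a single integer,
# following D_k = sum_i (prod_{i<j<=m} gamma_j) * (C_ki - alpha_i).
def encode(vectors):
    m = len(vectors[0])
    alpha = [min(v[i] for v in vectors) for i in range(m)]
    beta = [max(v[i] for v in vectors) for i in range(m)]
    gamma = [b - a + 1 for a, b in zip(alpha, beta)]
    codes = []
    for v in vectors:
        d, weight = 0, 1
        for i in reversed(range(m)):   # weight accumulates prod of gamma_j for j > i
            d += weight * (v[i] - alpha[i])
            weight *= gamma[i]
        codes.append(d)
    return codes

print(encode([(5, 6), (5, 6), (9, 3), (5, 6), (9, 3)]))  # [3, 3, 16, 3, 16]
```

Counting the distinct codes, as nvalue does, then yields NVEC = 2.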
\mathrm{𝚕𝚎𝚡}_\mathrm{𝚎𝚚𝚞𝚊𝚕}
\mathrm{𝚘𝚛𝚍𝚎𝚛𝚎𝚍}_\mathrm{𝚊𝚝𝚕𝚎𝚊𝚜𝚝}_\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛}
\mathrm{𝚘𝚛𝚍𝚎𝚛𝚎𝚍}_\mathrm{𝚊𝚝𝚖𝚘𝚜𝚝}_\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛}
\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛𝚜}
(replace an equality with the number of distinct vectors by a comparison with the number of distinct nvectors).
\mathrm{𝚘𝚛𝚍𝚎𝚛𝚎𝚍}_\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛}
\mathrm{𝚊𝚝𝚕𝚎𝚊𝚜𝚝}_\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛}
=\mathrm{𝙽𝚅𝙴𝙲}
\ge \mathrm{𝙽𝚅𝙴𝙲}
\mathrm{𝚊𝚝𝚖𝚘𝚜𝚝}_\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛}
=\mathrm{𝙽𝚅𝙴𝙲}
\le \mathrm{𝙽𝚅𝙴𝙲}
\mathrm{𝚗𝚟𝚊𝚕𝚞𝚎}
(vector replaced by
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎}).
application area: SLAM problem.
complexity: rectangle clique partition.
problems: domination.
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}
\mathrm{𝐶𝐿𝐼𝑄𝑈𝐸}
↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚎𝚌𝚝𝚘𝚛𝚜}\mathtt{1},\mathrm{𝚟𝚎𝚌𝚝𝚘𝚛𝚜}\mathtt{2}\right)
\mathrm{𝚕𝚎𝚡}_\mathrm{𝚎𝚚𝚞𝚊𝚕}
\left(\mathrm{𝚟𝚎𝚌𝚝𝚘𝚛𝚜}\mathtt{1}.\mathrm{𝚟𝚎𝚌},\mathrm{𝚟𝚎𝚌𝚝𝚘𝚛𝚜}\mathtt{2}.\mathrm{𝚟𝚎𝚌}\right)
\mathrm{𝐍𝐒𝐂𝐂}
=\mathrm{𝙽𝚅𝙴𝙲}
\mathrm{𝙴𝚀𝚄𝙸𝚅𝙰𝙻𝙴𝙽𝙲𝙴}
\mathrm{𝐍𝐒𝐂𝐂}
graph property, we show the different strongly connected components of the final graph. Each strongly connected component corresponds to a tuple of values that is assigned to some vectors of the
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}
collection. The following two tuples of values
〈5,6〉
〈9,3〉
are used by the vectors of the
\mathrm{𝚅𝙴𝙲𝚃𝙾𝚁𝚂}
\mathrm{𝚗𝚟𝚎𝚌𝚝𝚘𝚛}
Jensen's Inequality | Brilliant Math & Science Wiki
Jubayer Nirjhor, A Former Brilliant Member, Kishlaya Jaiswal, and
Mehdi Elkouhlani
Jensen's inequality is an inequality involving convexity of a function. We first make the following definitions:
A function is convex on an interval
I
if the segment between any two points taken on its graph over
I
lies above the graph. An example of a convex function is
f(x)=x^2
A function is concave on an interval
I
if the segment between any two points on its graph over
I
lies below the graph. An example of a concave function is
f(x)=-x^2
Statement of Jensen's Inequality
Special Cases of Jensen's Inequality
Applications of Jensen's Inequality
Now, Jensen's inequality is the following:
Let a real-valued function
f
be convex on the interval
I
x_1,...,x_n\in I
\omega_1,...,\omega_n\ge 0
(not all zero); then
\dfrac{\omega_1 f\left(x_1\right)+\omega_2 f\left(x_2\right)+\cdots+\omega_n f\left(x_n\right)}{\omega_1+\omega_2+\cdots+\omega_n} \ge f\left(\dfrac{\omega_1x_1+\omega_2x_2+\cdots+\omega_nx_n}{\omega_1+\omega_2+\cdots+\omega_n}\right).
If
is concave, the direction of inequality is flipped.
In particular if we take weights
\omega_1=\omega_2=\cdots=\omega_n=1
\dfrac{f\left(x_1\right)+f\left(x_2\right)+\cdots+f\left(x_n\right)}{n} \ge f\left(\dfrac{x_1+x_2+\cdots+x_n}{n}\right). \ _\square
This proof is a little long. As it doesn't help much in the problem-solving aspect, you may skip it.
Note that we need only to prove the statement for convex functions. This is due to the following result:
g
is a concave function, then
-g
We first prove the following statement:
P \iff Q,
P
Q
P
\forall x_1,...,x_n\in I
\forall \omega_1,...,\omega_n\ge 0
the following holds:
\dfrac{\omega_1 f\left(x_1\right)+\omega_2 f\left(x_2\right)+\cdots+\omega_n f\left(x_n\right)}{\omega_1+\omega_2+\cdots+\omega_n} \ge f\left(\dfrac{\omega_1x_1+\omega_2x_2+\cdots+\omega_nx_n}{\omega_1+\omega_2+\cdots+\omega_n}\right).
Q
\forall x_1,...,x_n\in I
\forall \lambda_1, \dots, \lambda_n \geq 0 \text{ and } \displaystyle \sum_{i=1}^n \lambda_i = 1
\sum_{i=1}^n \lambda_i f(x_i) \geq f \left(\sum_{i=1}^n \lambda_i x_i \right).
\lambda_i = \dfrac{\omega_i}{\displaystyle \sum_{r=1}^n \omega_r},
i = 1,2,\dots,n.
This means that we've reduced the problem to just showing that
Q
is true. To do this, we use induction.
n = 1
is trivially true.
n = 2
is the definition of a convex function (the proof of this is left to the interested reader).
n \geq 3
, we shall assume that
\lambda_n \in (0,1)
(
\lambda_n = 1,
the assertion trivially holds, and if
\lambda_n = 0,
we appeal to the induction hypothesis).
Now, we proceed assuming that the cases
k = 2
k = n-1
are true:
\begin{aligned} f \left( \sum_{i = 1}^n \lambda_i x_i \right) &= f \left( (1-\lambda_n) \left(\sum_{i = 1}^{n-1} \mu_i x_i \right) + \lambda_n x_n \right)\qquad \left(\text{where }\ \mu_i = \frac{\lambda_i}{1-\lambda_n}, \ i = 1,2,\dots,n-1,\ \sum_{i=1}^{n-1} \mu_i = 1\right) \\ & \leq (1-\lambda_n)f \left(\sum_{i = 1}^{n-1} \mu_i x_i \right) + \lambda_n f(x_n)\\ & \leq \left( \sum_{i=1}^{n-1} (1-\lambda_n)\mu_i f(x_i) \right) + \lambda_n f(x_n)\\ & = \sum_{i = 1}^n \lambda_if(x_i). \end{aligned}
So, the proof is complete via induction.
_\square
The last form is the form that is most frequently encountered and used in various Olympiad level problems.
How do we check whether a function is convex or concave? We can't always plot the graph and check. The best (and often quickest) way is to use calculus. A function
f
is convex on the interval
I
f''(x) \geq 0
x\in I
and concave if
f''(x) \leq 0
x\in I
. The precise statement is given below:
f: I \to \mathbb{R}
be a twice-differentiable function.
f
I
f''(x) \geq 0 \ \forall x \in I;
f
I
f''(x) \leq 0 \ \forall x \in I.
The proof of this can be found here.
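When a symbolic second derivative is inconvenient, the numeric second difference f(x−h) − 2f(x) + f(x+h) ≥ 0 on a sample grid gives a quick, non-rigorous stand-in for the test above; a sketch under that caveat:

```python
# Sample the discrete analogue of f''(x): for a convex f the second
# difference is nonnegative at every grid point.
def looks_convex(f, xs, h=1e-4):
    return all(f(x - h) - 2 * f(x) + f(x + h) >= 0 for x in xs)

grid = [i / 10 for i in range(-50, 51)]
print(looks_convex(lambda x: x * x, grid))    # True
print(looks_convex(lambda x: -x * x, grid))   # False
```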
f(x)=x^2
f'(x)=2x,
f''(x)=2>0
x\in\mathbb{R}
. So the function is convex everywhere. Similarly for
f(x)=-x^2
f''(x)=-2<0
x\in\mathbb{R}
. So the function is concave everywhere.
Since we've been talking so much about
f(x)=x^2
, let's apply Jensen's on it. We've already shown that it's convex everywhere. We choose reals
x_1=1, x_2=2,...,x_n=n
. Applying Jensen's we get
\begin{aligned} \dfrac{f\left(x_1\right)+f\left(x_2\right)+\cdots+f\left(x_n\right)}{n} &\ge f\left(\dfrac{x_1+x_2+\cdots+x_n}{n}\right) \\ \dfrac{f\left(1\right)+f\left(2\right)+\cdots+f\left(n\right)}{n} &\ge f\left(\dfrac{1+2+\cdots+n}{n}\right) \\ \dfrac{1^2+2^2+\cdots+n^2}{n} &\ge f\left(\frac{\frac{n(n+1)}{2}}{n}\right) \\ \frac{\frac{n(n+1)(2n+1)}{6}}{n} &\ge f\left(\dfrac{n+1}{2}\right) \\ \dfrac{(n+1)(2n+1)}{6}&\ge \left(\dfrac{n+1}{2}\right)^2 \\ n &\ge 1 . \end{aligned}
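The chain above is easy to sanity-check numerically: for f(x) = x² and x_i = 1, …, 10 the left side exceeds the right by exactly 8.25. A minimal sketch:

```python
# Difference between the two sides of the unweighted Jensen inequality;
# nonnegative whenever f is convex on the data.
def jensen_gap(f, xs):
    n = len(xs)
    return sum(f(x) for x in xs) / n - f(sum(xs) / n)

f = lambda x: x * x
print(jensen_gap(f, list(range(1, 11))))  # 8.25
```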
AM-GM inequality (arithmetic mean-geometric mean inequality) is one of the special cases of Jensen's inequality:
\frac{\sum_{i=1}^n a_i}{n} \geq \sqrt[n]{\prod_{i=1}^n a_i}.
f(x) = \log x \ \forall x > 0.
f^{\prime\prime}(x) = \frac{-1}{x^2} < 0
f(x)
turns out to be a concave function.
(
Also notice that we can conclude
f(x)
is concave using the graph of
\log x.)
Thus, by Jensen's inequality we have
\begin{aligned} f\left(\frac{ \sum_{i=1}^n a_i}{n}\right) & \geq \frac{\sum_{i=1}^n f\left(a_i\right)}{n} \\ \Rightarrow \log \left(\frac{\sum_{i=1}^n a_i}{n}\right) & \geq \frac{\sum_{i=1}^n \log a_i }{n}. \end{aligned}
By the property of logarithms,
\log x + \log y = \log xy
b\log a = \log a^b
. Therefore, we can simplify the terms on the RHS as
\begin{aligned} \log \left(\dfrac{\sum_{i=1}^n a_i}{n}\right) & \geq \dfrac{\sum_{i=1}^n \log a_i}{n} \\ & = \dfrac{\log a_1+\log a_2 + \cdots + \log a_n }{n} \\ & = \dfrac{\log \left(a_1 a_2 a_3 \cdots a_n\right)}{n} \\ & = \log \left(\prod_{i=1}^n a_i\right)^{\frac{1}{n}}. \end{aligned}
Taking antilogarithm, we get our desired result.
_\square
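The result is also easy to check numerically (a sanity check, not a proof): the arithmetic mean dominates the geometric mean, with equality when all terms coincide.

```python
import math

def am(xs):
    return sum(xs) / len(xs)               # arithmetic mean

def gm(xs):
    return math.prod(xs) ** (1 / len(xs))  # geometric mean

xs = [1.0, 4.0, 9.0, 16.0]
print(am(xs) >= gm(xs))              # True
print(am([3, 3, 3]), gm([3, 3, 3]))  # equal (up to rounding) when all terms agree
```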
Prove that for all
n\in\mathbb{N},
\sqrt{1^2+1}+\sqrt{2^2+1}+\cdots +\sqrt{n^2+1}\ge \dfrac{n}{2}\sqrt{n^2+2n+5}.
Notice that on the left side, call it
L,
we have a sum of the same expression evaluated at different arguments, so we should think of applying Jensen's inequality with the common form as the function. Define the function
f(x)=\sqrt{x^2+1}.
We could compute the second derivative
f''(x)=\left(x^2+1\right)^{-3/2}>0,
but that's not needed if we observe that for
x\to \infty
f(x)=\sqrt{x^2+1}\to |x|
. So the graph should look like that of
y=|x|
, V-shaped with a little curve at the bottom and thus convex.
Now we are ready to apply Jensen's with reals
x_1=1,x_2=2,...,x_n=n
\begin{aligned} \dfrac{f\left(x_1\right)+f\left(x_2\right)+\cdots+f\left(x_n\right)}{n} &\ge f\left(\dfrac{x_1+x_2+\cdots+x_n}{n}\right) \\ \dfrac{f\left(1\right)+f\left(2\right)+\cdots+f\left(n\right)}{n} &\ge f\left(\dfrac{1+2+\cdots+n}{n}\right) \\ \dfrac{\sqrt{1^2+1}+\sqrt{2^2+1}+\cdots+\sqrt{n^2+1}}{n} &\ge f\left(\dfrac{n(n+1)/2}{n}\right) \\ &= f\left(\dfrac{n+1}{2}\right) \\ &= \sqrt{\left(\dfrac{n+1}{2}\right)^2+1} \\ &= \dfrac{1}{2}\sqrt{n^2+2n+5}. \end{aligned}
L\ge \dfrac{n}{2}\sqrt{n^2+2n+5},
completing the proof.
_\square
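A quick numeric check of the inequality just proved, for the first hundred values of n (illustrative only):

```python
import math

def lhs(n):
    return sum(math.sqrt(k * k + 1) for k in range(1, n + 1))

def rhs(n):
    return n / 2 * math.sqrt(n * n + 2 * n + 5)

print(all(lhs(n) >= rhs(n) for n in range(1, 101)))  # True (equality at n = 1)
```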
\dfrac{1}{a}+\dfrac{1}{b}+\dfrac{1}{c} = a+b+c
Let
a,b,c
be positive real numbers such that the above equation is satisfied. If the maximum value of the expression below is in the form of
\frac {m}{n} ,
where
m,n
are coprime positive integers, find
m+n.
\dfrac{1}{\left(2a+b+c\right)^2}+\dfrac{1}{\left(2b+c+a\right)^2}+\dfrac{1}{\left(2c+b+a\right)^2}
Cite as: Jensen's Inequality. Brilliant.org. Retrieved from https://brilliant.org/wiki/jensens-inequality/
Bio-Organism Damage under the Influence of Microwave Heating
9006 Greenwood Ave, San Gabriel, CA, USA.
In this paper, we set up a simple model to understand the damage to a bio-organism under the influence of microwave heating, based on the bio-heat transfer equation and the Arrhenius equation. Detailed information on the temperature distribution and the fraction of damage is presented. Results show that microwave heating is effective in damaging the bio-organism.
Bio-Heat, Microwave, Bio-Damage
Xie, Z. (2019) Bio-Organism Damage under the Influence of Microwave Heating. Journal of Biosciences and Medicines, 7, 41-45. doi: 10.4236/jbm.2019.74005.
Radio frequency (RF)/microwave radiation has been applied in monitoring public health and in medicine, including as a therapy method [1] [2] [3] . Modern electronic science and technology have seen rapid technological advances and enormous influence on our everyday lives [4] . They have laid the foundation for an unprecedented drive toward the improvement of existing medical devices and for the development of new ones. In particular, the advances in RF/microwave technology, among others, paved the way for novel therapeutic and diagnostic methods. Microwaves with frequencies from hundreds of MHz to several GHz are a primary research field being investigated for medical and diagnostic applications in various areas such as cancer therapy, organ imaging, cardiology, surgery, etc. [5] [6] [7] [8] [9] . On the other hand, high-power RF/microwave radiation has raised safety concerns about its biological effects. Much research has been performed on radio stations, television, and digital mobile-phone systems [10] .
In this manuscript, we study the bio-heating effect on a bio-organism under the illumination of microwaves. We set up the microwave source, apply the bio-heat equation, and place the bio-organism inside the cavity to simulate its heating. The temperature increase and the fraction of damage are both investigated in detail. Results show that after 60 s of microwave illumination, the bio-organism will be 95% damaged.
The heat transferred from the microwave to the bio-organism and the heat dissipation inside the bio-organism is governed by the following heat transfer equation [11] .
\rho C\cdot \partial T/\partial t+\rho Cu\cdot \nabla T+\nabla \cdot q-Q=0
where T is the temperature in units of K, ρ is the density in units of kg/m3, C is the heat capacity in units of J/(kg·K), u is the fluid velocity in units of m/s, q is the heat flux in units of W/m2 and Q is the volumetric heat generation rate in units of W/m3.
Here, we also compute the fraction of tissue damage. This parameter gives a quantified evaluation about the degree of tissue injury (β) during the process. The calculation is based on the Arrhenius equation [12] .
\frac{\text{d}\beta }{\text{d}t}=A\mathrm{exp}\left(-\frac{\Delta E}{RT}\right)
where ΔE is the activation energy for the irreversible damage reaction (in units of J/mol), and A is the frequency factor (in units of Hz). These two parameters depend on the type of tissue under study. The fraction of damaged tissue (η) is given by: η = 1 − exp(−β).
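Given a temperature history, the damage integral β can be accumulated with a simple forward-Euler step. The parameter values A and ΔE below are illustrative placeholders, not the values used in this paper.

```python
import math

R = 8.314      # gas constant, J/(mol K)
A = 3.1e98     # frequency factor, 1/s (assumed, illustrative)
dE = 6.28e5    # activation energy, J/mol (assumed, illustrative)

def damage_fraction(temps, dt):
    """Integrate d(beta)/dt = A exp(-dE/(R T)) over temperature samples temps
    spaced dt seconds apart; return eta = 1 - exp(-beta)."""
    beta = sum(A * math.exp(-dE / (R * T)) * dt for T in temps)
    return 1.0 - math.exp(-beta)

# tissue held at 330 K for 60 s, sampled every 0.1 s
print(damage_fraction([330.0] * 600, 0.1))
```

Because the rate depends exponentially on temperature, even a few degrees of extra heating increases the damage fraction sharply.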
The setup of the simulation under study is shown in Figure 1. The isolated box is 40 cm long, 40 cm wide and 20 cm high, and confines the microwave inside. The box is made of copper, a low-loss material in the microwave range under study. The bio-organism is represented by an ellipsoid with three semi-axes of 8, 5 and 3 cm. Specifically, we choose three points in the central plane for measuring detailed information inside the bio-organism, denoted A, B and C in the inset of Figure 1. The microwave is fed from the port on the left side to excite the simulation.
Figure 2 shows the temperature distribution of the bio-organism at different times: (a) 10 s, (b) 30 s, (c) 45 s, and (d) 60 s. At 10 s, the bio-organism stays mostly at its initial temperature since not enough energy has been absorbed. The temperature increases gradually as the absorbed energy increases. At 60 s, the highest temperature reaches 305 K, 5 degrees above its initial value.
This study simulates bio-organism damage under bio-heating; in everyday life, skin scalds and cooking are examples of such damage.
Figure 3 gives the temperature as a function of time at the designated points. The temperature increases at a much higher rate at Point B because a microwave hot spot forms around it, providing a much higher energy density.
Finally, we study the fraction of damaged bio-organism according to the Arrhenius equation discussed above. Figure 4 shows the fraction of damage for the
Figure 1. The bio-organism is located inside a metal box under microwave illumination. Three points in the central plane of the bio-organism (shown in the lower-right inset) are chosen to monitor the temperature and fraction of damage. (Point C is the center of the upper surface, point B is the center of the lower surface, and point A is the center of the whole bio-organism.)
Figure 2. Temperature distribution of the bio-organism at different times: (a) 10 s, (b) 30 s, (c) 45 s, (d) 60 s.
Figure 3. Temperature as a function of time at designated points.
Figure 4. Fraction of damage for the bio-organism at (a) 5 s, (b) 10 s, (c) 30 s, and (d) 60 s.
Figure 5. Fraction of damage for the designated points as a function of time.
bio-organism at different times for the two cross-sectional views along the central line. The corresponding values are 15%, 35%, 75% and 95%. Note that the damage is relatively uniform on the planes displayed. The fraction of damage at the designated points as a function of time is shown in Figure 5. The value increases quickly in the first 30 seconds, slows down thereafter, and gradually saturates towards 100%.
In conclusion, we have established a model for studying the bio-heating effect under microwave illumination, based on the heat transfer equation and the Arrhenius equation. Simulation results reveal that the bio-organism's temperature rises by 5 degrees and that 95% of it is damaged by the microwave radiation.
[1] Chandra, R., Zhou, H., Balasingham, I. and Narayanan, R.M. (2015) On the Opportunities and Challenges in Microwave Medical Sensing and Imaging. IEEE Transactions on Biomedical Engineering, 62, 1667-1682.
[2] Cheng, G.G., Zhu, Y. and Grzesik, J. (2013) Microwave Medical Imaging Techniques. 2013 7th European Conference on Antennas and Propagation (EuCAP), Gothenburg, 8-12 April 2013, 2669-2673.
[3] Livraghi, T., Meloni, F., Solbiati, L., Zanus, G. and Collaborative Italian Group Using AMICA System (2012) Complications of Microwave Ablation for Liver Tumors: Results of a Multicenter Study. Cardiovascular and Interventional Radiology, 35, 868-874.
[4] Einspruch, N.G., Ed. (2014) VLSI Electronics: Microstructure Science (Vol. 3). Academic Press, New York.
[5] Rosen, A. (1990) Microwave Application in Cancer Therapy, Cardiology and Measurement Techniques: A Short Overview. IEEE MTT Newsletter, 17-20.
[6] Finger, P.T. (1997) Microwave Thermoradiotherapy for Uveal Melanoma: Results of a 10-Year Study. Ophthalmology, 104, 1794-1803.
[7] Strohbehn, J.W., Bowers, E.D., Walsh, J.E. and Douple, E.B. (1979) An Invasive Microwave Antenna for Locally-Induced Hyperthermia for Cancer Therapy. The Journal of Microwave Power, 14, 339-350.
[8] Tabuse, K. (1998) Basic Knowledge of a Microwave Tissue Coagulator and Its Clinical Applications. Journal of Hepato-Biliary-Pancreatic Surgery, 5, 165-172.
[9] Nikolova, N.K. (2011) Microwave Imaging for Breast Cancer. IEEE Microwave Magazine, 12, 78-94.
[10] Hossmann, K.A. and Hermann, D.M. (2003) Effects of Electromagnetic Radiation of Mobile Phones on the Central Nervous System. Bioelectromagnetics, 24, 49-62.
[11] Bravo, M.E., Sánchez, P.D.J., Aguilar, R.V. and Chávez, A.E. (2015) Heat Transfer in Biological Tissues. In: Selected Topics of Computational and Experimental Fluid Mechanics, Springer, Cham, 313-320.
[12] Vander Vorst, A., Rosen, A. and Kotsuka, Y. (2006) RF/Microwave Interaction with Biological Tissues (Vol. 181). John Wiley & Sons, New York.
LMIs in Control/Matrix and LMI Properties and Tools/Variable Reduction Lemma - Wikibooks, open books for an open world
The variable reduction lemma allows the solution of algebraic Riccati inequalities that involve a matrix of unknown dimension. This often arises when finding the controller that minimizes the H∞ norm.
In order to find the unknown matrix
{\displaystyle M}
we need matrices
{\displaystyle A}
{\displaystyle P}
{\displaystyle Q}
Given a symmetric matrix
{\displaystyle A\in \mathbb {R} ^{n\times n}}
and two matrices
{\displaystyle P}
{\displaystyle Q}
of column dimension n, consider the problem of finding matrix
{\displaystyle M}
of compatible dimensions such that
{\displaystyle {\begin{aligned}\ A+P^{T}M^{T}Q+Q^{T}MP<0\\\end{aligned}}}
The above inequality is solvable for some
{\displaystyle M}
if and only if the following two conditions hold
{\displaystyle {\begin{aligned}\ W_{P}^{T}AW_{P}<0\\\ W_{Q}^{T}AW_{Q}<0\\\end{aligned}}}
{\displaystyle W_{P}}
{\displaystyle W_{Q}}
are matrices whose columns are bases for the null spaces of
{\displaystyle P}
{\displaystyle Q}
This can be implemented with an LMI modelling tool such as YALMIP together with a suitable semidefinite programming solver.
Using this technique we can get the value of unknown matrix
{\displaystyle M}
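For a tiny concrete instance the two conditions can be evaluated by hand. Take P = [1 0] and Q = [0 1], so null-space bases are W_P = [0 1]ᵀ and W_Q = [1 0]ᵀ, and the two tests reduce to the diagonal entries of A being negative. The sketch below (illustrative, not a solver) encodes exactly that check.

```python
# Solvability test of A + P^T M^T Q + Q^T M P < 0 for the special case
# P = [1 0], Q = [0 1]: W_P^T A W_P = A[1][1] and W_Q^T A W_Q = A[0][0].
def solvable(A):
    return A[1][1] < 0 and A[0][0] < 0

A = [[-1.0, 3.0], [3.0, -2.0]]
print(solvable(A))  # True: e.g. the scalar M = -3 zeroes the off-diagonal term
```

With a scalar M, the combined matrix is [[A11, A12 + M], [A12 + M, A22]], so choosing M = −A12 makes it diagonal and negative definite whenever both diagonal tests pass.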
https://web.mit.edu/braatzgroup/33 A tutorial on linear and bilinear matrix inequalities.pdf - A journal paper on the said LMI
https://onlinelibrary.wiley.com/doi/abs/10.1002/rnc.4590040403 - Research paper on the said LMI and its proof
Retrieved from "https://en.wikibooks.org/w/index.php?title=LMIs_in_Control/Matrix_and_LMI_Properties_and_Tools/Variable_Reduction_Lemma&oldid=4013761"
David K. Zhang - Group Theory
Group theory is the first subject in a branch of mathematics known as abstract algebra (or modern algebra). Abstract algebra is one of the main branches of modern pure mathematics, and it is a standard requirement in the university mathematics curriculum. Every professional mathematician and theoretically-oriented physical scientist (i.e., physicists and chemists) should know a thing or two about group theory.
The goal of abstract algebra is to study the general properties of algebraic systems and the interrelations between them. That might not mean a lot if you haven’t studied abstract algebra before, so let me contextualize this statement.
In grade school, you learned about a handful of different number systems, including the integers
\Z
, the rational numbers
\Q
, and the real numbers
\R
, and possibly (if you were an advanced student) the complex numbers
\C
. You also learned that there are several algebraic properties that these systems satisfy, such as the commutative property of addition,
x + y = y + x
, and the associative property of multiplication,
x \cdot (y \cdot z) = (x \cdot y) \cdot z
In abstract algebra, we ask what other number systems support a notion of addition, negation, multiplication, or some other operation that satisfies the commutative, associative, distributive, etc. properties.
Each of the number systems mentioned above supports the four basic arithmetic operations of addition, subtraction, multiplication, and (with the exception of the integers) division. Of course, subtraction and division are merely the inverses of addition and multiplication, so in each case, there are really only two fundamental arithmetic operations. [TODO: Finish writing introduction.]
A group is an algebraic structure \alg{G; 1, {}^{-1}, \cdot} consisting of a set G together with a distinguished element 1 \in G, called the identity; a map {}^{-1}: G \to G, x \mapsto x^{-1}, called inversion; and a map \cdot : G \times G \to G, (x, y) \mapsto x \cdot y, called the group operation or group product; satisfying the following properties:
Associative property: (x \cdot y) \cdot z = x \cdot (y \cdot z) for all x, y, z \in G.
Identity property: 1 \cdot x = x \cdot 1 = x for all x \in G.
Inverse property: x \cdot x^{-1} = x^{-1} \cdot x = 1 for all x \in G.
Definition: Abelian Group
An abelian group is a group \alg{G; 1, {}^{-1}, \cdot} that additionally satisfies the
Commutative property: x \cdot y = y \cdot x for all x, y \in G.
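To make the definition concrete, here is a quick brute-force verification (a Python sketch, not part of the original notes) that the integers modulo 5 under addition satisfy all four properties; the choice n = 5 is arbitrary.

```python
# Sanity check of the group axioms for (Z/5Z, +) in plain Python.
n = 5
G = range(n)

def op(a, b):          # the group operation: addition mod n
    return (a + b) % n

def inv(a):            # inversion: additive inverse mod n
    return (-a) % n

e = 0                  # identity element

# Associative property: (a . b) . c == a . (b . c)
assert all(op(op(a, b), c) == op(a, op(b, c)) for a in G for b in G for c in G)
# Identity property: e . a == a . e == a
assert all(op(e, a) == a == op(a, e) for a in G)
# Inverse property: a . a^{-1} == a^{-1} . a == e
assert all(op(a, inv(a)) == e == op(inv(a), a) for a in G)
# Commutative property, so the group is abelian
assert all(op(a, b) == op(b, a) for a in G for b in G)
```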
|
Serine C-palmitoyltransferase - Wikipedia
Crystallographic structure of serine palmitoyltransferase from S. paucimobilis. The cofactor PLP is visible in the center.[1]
C20orf38, SPTLC2L
In enzymology, a serine C-palmitoyltransferase (EC 2.3.1.50) is an enzyme that catalyzes the chemical reaction:[2][3]
palmitoyl-CoA + L-serine ⇌ CoA + 3-dehydro-D-sphinganine + CO2
Thus, the two substrates of this enzyme are palmitoyl-CoA and L-serine, whereas its 3 products are CoA, 3-dehydro-D-sphinganine, and CO2.[4][5] This reaction is a key step in the biosynthesis of sphingosine which is a precursor of many other sphingolipids.[3]
This enzyme participates in sphingolipid metabolism. It employs one cofactor, pyridoxal phosphate.
This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is palmitoyl-CoA:L-serine C-palmitoyltransferase (decarboxylating). Other names in common use include:
serine palmitoyltransferase,
SPT, 3-oxosphinganine synthetase, and
acyl-CoA:serine C-2 acyltransferase decarboxylating.
Serine C-palmitoyltransferase is a member of the AOS (a-oxoamine synthase) family of PLP-dependent enzymes, which catalyse the condensation of amino acids and acyl-CoA thioester substrates.[6] The human enzyme is a heterodimer consisting of two monomeric subunits known as long chain base 1 and 2 (LCB1/2) encoded by separate genes.[1] The active site of LCB2 contains lysine and other key catalytic residues that are not present in LCB1, which does not participate in catalysis but is nevertheless required for the synthesis and stability of the enzyme.[7]
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 2JG2 and 2JGT.[1]
Key active site residues of serine C-palmitoyltransferase that interact with PLP. Generated from 2JG2.
The PLP (pyridoxal 5′-phosphate)-dependent serine C-palmitoyltransferase carries out the first enzymatic step of de novo sphingolipid biosynthesis. The enzyme catalyses a Claisen-like condensation between L-serine and an acyl-CoA thioester (CoASH) substrate (typically C16-palmitoyl) or an acyl-ACP (acyl-carrier protein) thioester substrate, to form 3-ketodihydrosphingosine. Initially PLP cofactor is bound to the active-site lysine via a Schiff base to form the holo-form or internal aldimine of the enzyme. The amine group of L-serine then attacks and displaces the lysine bound to PLP, forming the external aldimine intermediate. Subsequently, deprotonation occurs at the Cα of serine, forming the quinonoid intermediate that attacks the incoming thioester substrate. Following decarboxylation and lysine attack, the product 3-keto-dihydrosphingosine is released and catalytically active PLP is reformed. This condensation reaction forms the sphingoid base or long-chain base found in all subsequent intermediate sphingolipids and complex sphingolipids in the organism.[3]
A variety of different serine C-palmitoyltransferase isoforms exist across different species. Unlike in eukaryotes, where the enzyme is heterodimeric and membrane bound, bacterial enzymes are homodimers and cytoplasmic. Studies of the isoform of the enzyme found in the Gram-negative bacterium S. paucimobilis were the first to elucidate the structure of the enzyme, revealing that PLP cofactor is held in place by several active site residues including Lys265 and His159.[8] Specifically, the S. paucimobilis isoform features an active-site arginine residue (Arg378) that plays a key role in stabilizing the carboxy moiety of the PLP-L-serine external aldimine intermediate. Similar arginine residues in enzyme homologues (Arg370, Arg390) play analogous roles.[9] Other homologues, such as in Sphingobacterium multivorum, feature the carboxy moiety bound to serine and methionine residues via water in place of arginine.[10] Certain enzyme homologues, such as in S. multivorum as well as B. stolpii, are found to be associated with the inner cell membrane, thus resembling the eukaryotic enzymes.[11] The B. stolpii homologue also features substrate inhibition by palmitoyl-CoA, a feature shared by the yeast and mammalian homologues.[12][13][14]
HSAN1 (hereditary sensory and autonomic neuropathy type 1) is a genetic disorder caused by mutations in either one of SPTLC1 or SPTLC2, genes encoding the two heterodimeric subunits of the eukaryotic serine C-palmitoyltransferase enzyme.[15][16][17] These mutations have been shown to alter active site specificity, specifically by enhancing the ability of the enzyme to condense L-alanine with the palmitoyl-CoA substrate.[18] This is consistent with elevated levels of deoxysphingoid bases formed by the condensation of alanine with palmitoyl-CoA observed in HSAN1 patients.[19]
Serine C-palmitoyltransferase is expressed in a large number of species from bacteria to humans. The bacterial enzyme is a water-soluble homodimer[2] whereas in eukaryotes the enzyme is a heterodimer which is anchored to the endoplasmic reticulum.[3] Humans and other mammals express three paralogous subunits SPTLC1, SPTLC2, and SPTLC3. It was originally proposed that the functional human enzyme is a heterodimer between a SPTLC1 subunit and a second subunit which is either SPTLC2 or SPTLC3.[20] However more recent data suggest that the enzyme may exist as a larger complex, possibly an octamer, comprising all three subunits.[21]
^ a b c Yard BA, Carter LG, Johnson KA, Overton IM, Dorward M, Liu H, McMahon SA, Oke M, Puech D, Barton GJ, Naismith JH, Campopiano DJ (July 2007). "The structure of serine palmitoyltransferase; gateway to sphingolipid biosynthesis". Journal of Molecular Biology. 370 (5): 870–86. doi:10.1016/j.jmb.2007.04.086. PMID 17559874.
^ a b Ikushiro H, Hayashi H, Kagamiyama H (April 2003). "Bacterial serine palmitoyltransferase: a water-soluble homodimeric prototype of the eukaryotic enzyme". Biochimica et Biophysica Acta (BBA) - Proteins and Proteomics. 1647 (1–2): 116–20. doi:10.1016/S1570-9639(03)00074-8. PMID 12686119.
^ a b c d Hanada K (June 2003). "Serine palmitoyltransferase, a key enzyme of sphingolipid metabolism". Biochimica et Biophysica Acta (BBA) - Molecular and Cell Biology of Lipids. 1632 (1–3): 16–30. doi:10.1016/S1388-1981(03)00059-3. PMID 12782147.
^ Brady RN, Di Mari SJ, Snell EE (January 1969). "Biosynthesis of sphingolipid bases. 3. Isolation and characterization of ketonic intermediates in the synthesis of sphingosine and dihydrosphingosine by cell-free extracts of Hansenula ciferri". The Journal of Biological Chemistry. 244 (2): 491–6. doi:10.1016/S0021-9258(18)94455-8. PMID 4388074.
^ Stoffel W, LeKim D, Sticht G (May 1968). "Biosynthesis of dihydrosphingosine in vitro". Hoppe-Seyler's Zeitschrift für Physiologische Chemie. 349 (5): 664–70. doi:10.1515/bchm2.1968.349.1.664. PMID 4386961.
^ Eliot AC, Kirsch JF (2004). "Pyridoxal phosphate enzymes: mechanistic, structural, and evolutionary considerations". Annual Review of Biochemistry. 73: 383–415. doi:10.1146/annurev.biochem.73.011303.074021. PMID 15189147.
^ Han G, Gable K, Yan L, Natarajan M, Krishnamurthy J, Gupta SD, Borovitskaya A, Harmon JM, Dunn TM (December 2004). "The topology of the Lcb1p subunit of yeast serine palmitoyltransferase". The Journal of Biological Chemistry. 279 (51): 53707–16. doi:10.1074/jbc.M410014200. PMID 15485854.
^ Shiraiwa Y, Ikushiro H, Hayashi H (June 2009). "Multifunctional role of His159 in the catalytic reaction of serine palmitoyltransferase". The Journal of Biological Chemistry. 284 (23): 15487–95. doi:10.1074/jbc.M808916200. PMC 2786316. PMID 19346561.
^ Lowther J, Charmier G, Raman MC, Ikushiro H, Hayashi H, Campopiano DJ (June 2011). "Role of a conserved arginine residue during catalysis in serine palmitoyltransferase" (PDF). FEBS Letters. 585 (12): 1729–34. doi:10.1016/j.febslet.2011.04.013. PMID 21514297. S2CID 25828713.
^ Ikushiro H, Islam MM, Okamoto A, Hoseki J, Murakawa T, Fujii S, Miyahara I, Hayashi H (October 2009). "Structural insights into the enzymatic mechanism of serine palmitoyltransferase from Sphingobacterium multivorum". Journal of Biochemistry. 146 (4): 549–62. doi:10.1093/jb/mvp100. PMID 19564159.
^ Ikushiro H, Islam MM, Tojo H, Hayashi H (August 2007). "Molecular characterization of membrane-associated soluble serine palmitoyltransferases from Sphingobacterium multivorum and Bdellovibrio stolpii". Journal of Bacteriology. 189 (15): 5749–61. doi:10.1128/JB.00194-07. PMC 1951810. PMID 17557831.
^ Gable K, Slife H, Bacikova D, Monaghan E, Dunn TM (March 2000). "Tsc3p is an 80-amino acid protein associated with serine palmitoyltransferase and required for optimal enzyme activity". The Journal of Biological Chemistry. 275 (11): 7597–603. doi:10.1074/jbc.275.11.7597. PMID 10713067.
^ Hanada K, Hara T, Nishijima M (March 2000). "Purification of the serine palmitoyltransferase complex responsible for sphingoid base synthesis by using affinity peptide chromatography techniques". The Journal of Biological Chemistry. 275 (12): 8409–15. doi:10.1074/jbc.275.12.8409. PMID 10722674.
^ Lara PN, Moon J, Redman MW, Semrad TJ, Kelly K, Allen JW, Gitlitz BJ, Mack PC, Gandara DR (January 2015). "Relevance of platinum-sensitivity status in relapsed/refractory extensive-stage small-cell lung cancer in the modern era: a patient-level analysis of southwest oncology group trials". Journal of Thoracic Oncology. 10 (1): 110–5. doi:10.1097/JTO.0000000000000385. PMC 4320001. PMID 25490004.
^ Bejaoui K, Wu C, Scheffler MD, Haan G, Ashby P, Wu L, de Jong P, Brown RH (March 2001). "SPTLC1 is mutated in hereditary sensory neuropathy, type 1". Nature Genetics. 27 (3): 261–2. doi:10.1038/85817. PMID 11242106. S2CID 34442339.
^ Gable K, Han G, Monaghan E, Bacikova D, Natarajan M, Williams R, Dunn TM (March 2002). "Mutations in the yeast LCB1 and LCB2 genes, including those corresponding to the hereditary sensory neuropathy type I mutations, dominantly inactivate serine palmitoyltransferase". The Journal of Biological Chemistry. 277 (12): 10194–200. doi:10.1074/jbc.M107873200. PMID 11781309.
^ Rotthier A, Auer-Grumbach M, Janssens K, Baets J, Penno A, Almeida-Souza L, Van Hoof K, Jacobs A, De Vriendt E, Schlotter-Weigel B, Löscher W, Vondráček P, Seeman P, De Jonghe P, Van Dijck P, Jordanova A, Hornemann T, Timmerman V (October 2010). "Mutations in the SPTLC2 subunit of serine palmitoyltransferase cause hereditary sensory and autonomic neuropathy type I". American Journal of Human Genetics. 87 (4): 513–22. doi:10.1016/j.ajhg.2010.09.010. PMC 2948807. PMID 20920666.
^ Gable K, Gupta SD, Han G, Niranjanakumari S, Harmon JM, Dunn TM (July 2010). "A disease-causing mutation in the active site of serine palmitoyltransferase causes catalytic promiscuity". The Journal of Biological Chemistry. 285 (30): 22846–52. doi:10.1074/jbc.M110.122259. PMC 2906276. PMID 20504773.
^ Penno A, Reilly MM, Houlden H, Laurá M, Rentsch K, Niederkofler V, Stoeckli ET, Nicholson G, Eichler F, Brown RH, von Eckardstein A, Hornemann T (April 2010). "Hereditary sensory neuropathy type 1 is caused by the accumulation of two neurotoxic sphingolipids". The Journal of Biological Chemistry. 285 (15): 11178–87. doi:10.1074/jbc.M109.092973. PMC 2856995. PMID 20097765.
^ Hornemann T, Richard S, Rütti MF, Wei Y, von Eckardstein A (December 2006). "Cloning and initial characterization of a new subunit for mammalian serine-palmitoyltransferase". The Journal of Biological Chemistry. 281 (49): 37275–81. doi:10.1074/jbc.M608066200. PMID 17023427.
^ Hornemann T, Wei Y, von Eckardstein A (July 2007). "Is the mammalian serine palmitoyltransferase a high-molecular-mass complex?". The Biochemical Journal. 405 (1): 157–64. doi:10.1042/BJ20070025. PMC 1925250. PMID 17331073.
|
Comparative Study of Two Low CO2 Emission Power Generation System Options With Natural Gas Reforming | J. Eng. Gas Turbines Power | ASME Digital Collection
Comparative Study of Two Low CO2 Emission Power Generation System Options With Natural Gas Reforming
, Chinese Academy of Sciences, Beijing 100080, P.R.C.
e-mail: zhangna@mail.etp.ac.cn
Zhang, N., and Lior, N. (June 13, 2008). "Comparative Study of Two Low CO2 Emission Power Generation System Options With Natural Gas Reforming." ASME. J. Eng. Gas Turbines Power. September 2008; 130(5): 051701. https://doi.org/10.1115/1.2904895
Two power plant schemes that reduce CO2 emission and employ natural gas reforming were analyzed and discussed. The first one integrates natural gas reforming technology for efficiency improvement with an oxy-fuel combined power system (OXYF-REF), with water as the main work fluid. The reforming heat is obtained from the available turbine exhaust heat, and the produced syngas is used as fuel with oxygen as the oxidizer. The turbine working fluid can expand down to a vacuum, producing a high-pressure ratio and thus more net work. The second system integrates natural gas reforming in a precombustion decarbonization scheme using chemical absorption technology for the CO2 removal (PCD-REF). The gas turbine is the conventional air-based one with compressor intercooling. Supplementary combustion is employed to elevate the turbine exhaust temperature and thus achieve a much higher methane conversion rate (96.9%). Both systems involve internal heat recuperation from gas turbine exhausts, and particular attention has been paid to the integration of the heat recovery chain to reduce the related exergy destruction. The systems are simulated and their thermal efficiency, overall and component exergy losses, and CO2 removal capacity are compared. The OXYF-REF system has a higher energy efficiency, of 51.4%, and higher CO2 removal, but the product CO2 has lower purity, of 84%. The PCD-REF system has a thermal efficiency of 46%, the captured CO2 is 99% pure, and the CO2 specific emission is 58.5 g/kWh.
air pollution, carbon, combined cycle power stations, combustion, gas turbines, heat recovery, water
Combustion, Compressors, Emissions, Energy generation, Exergy, Exhaust systems, Fuels, Heat, Heat recovery, Natural gas, Pressure, Steam, Temperature, Turbines, Water, Methane, Absorption, Compression, Gas turbines, Combustion chambers, Fluids, Syngas, Cycles, Oxygen, Energy efficiency, Energy / power systems
Natural Gas Decarbonization to Reduce CO2 Emission From Combined Cycles—Part I: Partial Oxidation
Analysis of Gas-Steam Combined Cycles With Natural Gas Reforming and CO2 Capture
Performance Modeling of a Carbon Dioxide Removal System for Power Plants
Sensitivity Analysis of the MATIANT Cycle
Yantovski
Zvagolsky
The COOPERATE—Demo Power Cycle
Stack Downward Zero Emission Fuel-Fired Power Plants Concept
Further Research Zero CO2 Emission Power Production: The “COOLENERG” Process
Comparison of Two CO2 Removal Options in Combined Cycle Power Plants
Power Generation With 100% Carbon Capture and Sequestration
Second Annual Conference on Carbon Sequestration
High Efficiency, Zero Emission Power Generation Based on a High-Temperature Steam Cycle
28th International Technical Conference on Coal Utilization and Fuel Systems
A Novel Near-Zero CO2 Emission Thermal Cycle With LNG Cryogenic Exergy Utilization
Proposal and Analysis of a Novel Zero CO2 Emission Cycle With Liquid Natural Gas Cryogenic Exergy Utilization
CO2 Recovery in a Novel Power Plant System With Chemical-Looping Combustion
A New Advanced Power-Generation System Using Chemical-Looping Combustion
Off-Design Evaluation of a Natural Gas Fired Chemical Looping Combustion Combined Cycle With CO2 Capture
Proceedings of ECOS2005
Advanced Zero Emissions Gas Turbine Power Plant
AZEP Gas Turbine Combined Cycle Power Plants—Thermo-Economic Analysis
A Quantitative Comparison of Gas Turbine Cycles With CO2 Capture
The Recuperative Auto Thermal Reforming and Recuperative Reforming Gas Turbine Power Cycles With CO2 Removal—Part I: The Recuperative-Auto Thermal Reforming Cycle
The Recuperative Auto Thermal Reforming and Recuperative Reforming Gas Turbine Power Cycles With CO2 Removal—Part II: The Recuperative Reforming Cycle
A Methane-Steam Reformer for a Basic Chemically Recuperated Gas Turbine
Exergetic Optimization of Intercooled Reheat Chemically Recuperated Gas Turbine
Analysis of a Basic Chemically Recuperated Gas Turbine Power Plant
ASPEN PLUS®, Aspen Technology, Inc., Version 11.1, http://www.aspentech.com/, Aspen Physical Property System, Physical Property Methods and Models 11.1, Sept. 2001, Aspen Technology, Inc., Cambridge, MA.
Configuration Analysis of Oxy-Fuel Cycles With Natural Gas Reforming and CO2 Capture
Energy and Exergy Analysis of a Chemical Process System With Distributed Parameters Based on the Energy-Direction Factor Diagram
Sources of Combustion Irreversibility
Performance and Cost of Power Plants With Capture and Storage of CO2
Horazak
An Overview of Turbine and Combustor Development for Coal-Based Oxy-Syngas System
A Comparison of Electricity and Hydrogen Production Systems With CO2 Capture and Storage. Part A: Review and Selection of Promising Conversion and Capture Technologies
Utilization of Alloy 693 in Metal Dusting Environments
NACE International Corrosion 2007 Conference and Expo
, Mar. 11–15, Paper No. 07430.
The R-ATR and the R-REF Gas Turbine Power Cycles With CO2 Removal: Part 2 — The R-REF Cycle
|
Dual Nature of Radiation and Matter Physics NEET Practice Questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, and PDF solved with answers
If V is the accelerating voltage, then the maximum frequency of continuous X-rays is given by:
1. \frac{eh}{V}
2. \frac{hV}{e}
3. \frac{eV}{h}
4. \frac{h}{eV}
Subtopic: Photoelectric Effect: Experiment |
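For reference, the cut-off follows from the limiting case in which an electron's entire kinetic energy eV is converted into a single photon:
h\nu_{\max} = eV \implies \nu_{\max} = \frac{eV}{h}.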
The maximum kinetic energy of a photoelectron emitted from a surface of work function \phi due to incident light of frequency \nu is E. If the frequency of the incident light is doubled, then the maximum kinetic energy of the emitted photoelectrons will be:
2. 2E - \phi
3. 2E + \phi
4. 2E + 2\phi
Subtopic: Einstein's Photoelectric Equation |
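A one-line check with Einstein's photoelectric equation E = h\nu - \phi: doubling the frequency gives
E' = 2h\nu - \phi = 2(E + \phi) - \phi = 2E + \phi,
i.e., option 3.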
The de-Broglie wavelength associated with an electron accelerated through a voltage of 900 V is:
1. 0.31 Å
2. 0.41 Å
3. 0.5 Å
4. 0.16 Å
Subtopic: De-broglie Wavelength |
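A quick numeric check of the question above (an illustrative pure-Python computation using standard constants):

```python
import math

h = 6.626e-34      # Planck constant, J s
m = 9.109e-31      # electron mass, kg
e = 1.602e-19      # elementary charge, C
V = 900            # accelerating voltage, V

p = math.sqrt(2 * m * e * V)     # momentum gained across the potential difference
lam = h / p                      # de Broglie wavelength, metres
lam_angstrom = lam * 1e10        # ~0.41 Å, matching option 2
```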
The de-Broglie wavelength of a neutron of mass m in thermal equilibrium with heavy water at a temperature T (in kelvin) is:
1. \frac{h}{\sqrt{mkT}}
2. \frac{h}{\sqrt{3mkT}}
3. \frac{2h}{\sqrt{3mkT}}
4. \frac{2h}{\sqrt{mkT}}
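The formula follows from equipartition: a thermalized neutron has average kinetic energy \frac{3}{2}kT, so
\frac{p^2}{2m} = \frac{3}{2}kT \implies p = \sqrt{3mkT} \implies \lambda = \frac{h}{p} = \frac{h}{\sqrt{3mkT}}.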
Electrons of mass m with de-Broglie wavelength \lambda fall on the target in an X-ray tube. The cut-off wavelength \left({\lambda}_{0}\right) of the emitted X-rays is:
1. {\lambda }_{0}=\frac{2mc{\lambda }^{2}}{h}
2. {\lambda }_{0}=\frac{2h}{mc}
3. {\lambda }_{0}=\frac{2{m}^{2}{c}^{2}{\lambda }^{3}}{{h}^{2}}
4. {\lambda }_{0}=\lambda
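Worked reasoning: the electron's kinetic energy in terms of its de Broglie wavelength is E = \frac{p^2}{2m} = \frac{h^2}{2m\lambda^2}, and the cut-off X-ray photon carries all of it:
\frac{hc}{\lambda_0} = \frac{h^2}{2m\lambda^2} \implies \lambda_0 = \frac{2mc\lambda^2}{h},
i.e., option 1.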
Photons with energy 5 eV are incident on a cathode C in a photoelectric cell. The maximum energy of emitted photoelectrons is 2 eV. When photons of energy 6 eV are incident on C, no photoelectron will reach the anode A, if the stopping potential of A relative to C is
(1) +3 V
(3) -1 V
(4) -3 V
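Worked reasoning: 5 eV photons eject electrons with at most 2 eV, so the work function is \phi = 5 - 2 = 3 eV. Photons of 6 eV then give K_{\max} = 6 - 3 = 3 eV, and no photoelectron reaches A once the anode is at -3 V relative to C, i.e., option (4).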
When a metallic surface is illuminated with radiation of wavelength \lambda, the stopping potential is V. If the same surface is illuminated with radiation of wavelength 2\lambda, the stopping potential is \frac{V}{4}. The threshold wavelength for the metallic surface is:
\lambda
\frac{5}{2}\lambda
\lambda
\lambda
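Worked solution: applying the photoelectric equation at both wavelengths,
eV = \frac{hc}{\lambda} - \frac{hc}{\lambda_0}, \qquad \frac{eV}{4} = \frac{hc}{2\lambda} - \frac{hc}{\lambda_0}.
Multiplying the second equation by 4 and subtracting the first gives \frac{hc}{\lambda} = \frac{3hc}{\lambda_0}, hence \lambda_0 = 3\lambda.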
An electron of mass m and a photon have the same energy E. Find the ratio of de-Broglie wavelength associated with the electron to that associated with the photon. (c is the velocity of light)
1. \left(\frac{E}{2m}\right)^{1/2}
2. c\left(2mE\right)^{1/2}
3. \frac{1}{c}\left(\frac{2m}{E}\right)^{1/2}
4. \frac{1}{c}\left(\frac{E}{2m}\right)^{1/2}
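Derivation: \lambda_e = \frac{h}{\sqrt{2mE}} for the electron and \lambda_{ph} = \frac{hc}{E} for the photon, so
\frac{\lambda_e}{\lambda_{ph}} = \frac{h}{\sqrt{2mE}} \cdot \frac{E}{hc} = \frac{1}{c}\left(\frac{E}{2m}\right)^{1/2},
i.e., option 4.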
A radiation of energy 'E' falls normally on a perfectly reflecting surface. The momentum transferred to the surface is (c=velocity of light)
1. E/c
2. 2E/c
3. 2E/c^2
4. E/c^2
Subtopic: Particle Nature of Light |
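Worked reasoning: an incident photon of energy E carries momentum E/c; on perfect reflection its momentum reverses, so the surface absorbs
\Delta p = \frac{E}{c} - \left(-\frac{E}{c}\right) = \frac{2E}{c},
i.e., option 2.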
A certain metallic surface is illuminated with monochromatic light of wavelength λ. The stopping potential for the photoelectric current for this light is 3Vo. If the same surface is illuminated with light of wavelength 2λ, the stopping potential is Vo. The threshold wavelength for this surface for the photoelectric effect is:
1. 6λ
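Worked solution: applying the photoelectric equation at both wavelengths,
3eV_0 = \frac{hc}{\lambda} - \frac{hc}{\lambda_0}, \qquad eV_0 = \frac{hc}{2\lambda} - \frac{hc}{\lambda_0}.
Subtracting three times the second equation from the first gives \frac{2hc}{\lambda_0} = \frac{hc}{2\lambda}, hence \lambda_0 = 4\lambda.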
|
Can you use the fact that
\tanh ix = i\tan x
@PrometheusPi
I am trying to simplify a general Taylor series of function f:
x, x_0, h = sym.symbols("x x_0 h", real=True)
ts = sym.series(f(x),x, x_0, 4)
# but substituting x-x_0 by h
ts.subs(x-x_0,h)
# just returns O(1; x -> x_0)
Why is the (x-x0) in the Taylor series not substituted correctly?
Parseval Gramalot
@PGramalot_gitlab
I thought people answered questions here. Seems like a ghost town. Good day.
@erentar2002:matrix.org
yeah, happens
good luck for finding an answer
@PrometheusPi subs will replace a sum like x - x_0 (= x + (-1)*x_0) in an expression only if that is also a sum that contains the same arguments:
In [19]: (x - x_0 + 1).subs(x - x_0, h)
Out[19]: h + 1
Otherwise, one must substitute x alone:
In [24]: print(ts.subs(x, x_0 + h))
f(x_0) + h*Subs(Derivative(f(_xi_1), _xi_1), _xi_1, x_0) + h**2*Subs(Derivative(f(_xi_1), (_xi_1, 2)), _xi_1, x_0)/2 + h**3*Subs(Derivative(f(_xi_1), (_xi_1, 3)), _xi_1, x_0)/6 + O(h**4)
@jksuom Thanks a lot for your fast answer. That helped tremendously. (I was not aware of the fact, that subs requires the substitution to be part of a sum. And I did not think of just replacing a single variable instead.)
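For later readers, the exchange above condenses to this runnable sketch (assuming SymPy is installed; sin is used here as a stand-in for the undefined f):

```python
import sympy as sym

x, x0, h = sym.symbols("x x_0 h", real=True)

# subs replaces x - x_0 only where it occurs as part of a sum:
assert (x - x0 + 1).subs(x - x0, h) == h + 1

# For a Taylor series, substitute the lone variable x instead:
ts = sym.series(sym.sin(x), x, x0, 3)
ts_h = ts.subs(x, x0 + h)   # now a series in powers of h
```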
@PGramalot_gitlab your question is a bit lengthy. You may be better off asking it on the mailing list.
How can I modify the latex printer the print the standard basis vectors as \mathbf{i}, \mathbf{j}, and \mathbf{k}? Currently the vectors get printed as follows
>>> latex(i)
'\\mathbf{\\hat{i}_{C}}'
@meganly The formatting of base vectors is hard-coded and cannot be given as a parameter. I would implement a subclass redefining the formatting. Something like this (untested):
class MyCoordSys3D(CoordSys3D):
self.latex_vects = [r'\mathbf{%s}' % n for n in self.vector_names]
There's a parameter in the CoordSys3D that lets you set these
Also if you use a custom printer that should take precedence over the defaults.
I don't like how the vector module defines the printers on the objects. SymPy library classes should define printing on the printer classes. I also don't like how it precomputes the latex form at construction time.
s, Kc, tau_i = sm.symbols("s K_c tau_i")
Gc = Kc*(1+1/(tau_i*s))
Gp =1/(s**2 + 2*s + 2)
eq = 1 +Gc*Gv*Gp*Gm
How to get the eq's numerator with leading coefficient as 1?
Thanks for the help @jksuom and @asmeurer . I tried
C= CoordSys3D('C', vector_names = ["\mathbf{i}", "\mathbf{j}", "\mathbf{k}"])
and now the latex for the base vector i is '\mathbf{\hat{\mathbf{i}}_{C}}'.
However, implementing a subclass to do custom printing should work.
thorek1
@thorek1
under which conditions does the follwoing simplify to y? simplify((y^(1/(1-alpha)))^(1 - alpha))
Look at the docstring of powdenest.
You need assumptions on either y or alpha.
@vars alpha positive = true
@vars y real = true
it seems i would need sympy to understand that 0 < alpha < 1
powdenest((y^(1/(alpha)))^(alpha),force=true) works fine
I don't think that is sufficient. Consider y = -1 and alpha = 1/2
true, in fact y > 0 and 0<alpha<1
docstring of powdenest doesnt help much in this case it seems
SymPy doesn't currently have a way to tell powdenest that alpha is less than 1. Using force=True is the best way, after manually verifying that the simplification is mathematically correct.
ok, what is the best workaround then?
doesnt get me there
@vars alpha y positive = true
powdenest((y^(1/(1-alpha)))^(1 - alpha))
only other option is to do a change of variables that lets you use the assumptions that sympy supports, like positive=True
hmm, manually catch all int +/- var cases and replace them
that's a bug I guess
This seems related sympy/sympy#19627
I guess you can do something with replace with a pattern.
>>> a, b, c = Wild('a'), Wild('b'), Wild('c')
>>> expr.replace((a**b)**c, a**(b*c))
yeah thats probably a good way to go
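In runnable form, the pattern-based workaround (SymPy assumed installed; force-combining the exponents is only valid here because y > 0 and 0 < alpha < 1, which must be verified by hand):

```python
import sympy as sym

y, alpha = sym.symbols("y alpha", positive=True)
a, b, c = sym.Wild("a"), sym.Wild("b"), sym.Wild("c")

expr = (y**(1 / (1 - alpha)))**(1 - alpha)
# Rewrite (a**b)**c -> a**(b*c); the exponents then cancel, leaving y.
simplified = expr.replace((a**b)**c, a**(b * c))
```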
I get the solution assuming alpha is an integer but that doesnt seem very safe
@laolux:privacytools.io
I have some issue with integrating the Heaviside function. Usually I would expect integrate(Heaviside(x, 0), (x, -1, 1)) = 1, but instead sympy returns Integral(Heaviside(x, 0), (x, -1, 1)). It works fine if I omit the second argument of Heaviside, but I think it should work in any case, as the value at x = 0 is irrelevant for the integration (a set of measure zero).
Is there something I am missing, or should I report it as a bug?
@laolux:privacytools.io Many integrators like meijerint work by looking up the results in a table. The table may contain Heaviside(x) but not Heaviside(x, 0) which is a different object.
>>> Heaviside(x) == Heaviside(x, 0)
It may be possible to extend the matching code to handle this but that may not be easy to implement. Another entry should probably be added.
@jksuom Thanks for the explanation. I read in the documentation that “Heaviside(x)==Heaviside(x, None)”, but for integration that should not matter. Now I understand that sympy takes different paths to evaluate these functions. Anyway, “meijerint” seems to have some more issues with Heaviside, as I commented on github:
https://github.com/sympy/sympy/issues/21426#issuecomment-832445998
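For comparison, the one-argument form is matched by the integration tables and does evaluate (SymPy assumed installed):

```python
import sympy as sym

x = sym.symbols("x", real=True)

# Heaviside(x) is in the lookup tables, so the definite integral evaluates:
result = sym.integrate(sym.Heaviside(x), (x, -1, 1))
```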
@asmeurer can you take a look at #21423, please?
I ran into the following issue/error while trying to get the extrema of a simple 4th order polynomial:
x_sym = sym.symbols("x", real=True) # single variable
pot_sym = x_sym**4 - x_sym**2 + x_sym * 1/10 # function f
pot_prime_sym = sym.diff(pot_sym, x_sym) # first derivative of function df/dx
extrema = sym.solve(pot_prime_sym) # get extrema via df/dx == 0
for extremum in extrema:
print(sym.N(extremum), " == ", sym.N(sym.simplify(extremum))) # print values
0.050253826762553 - 0.e-23*I == 0.050253826762553 + 3.70576914423756e-22*I
0.680639276423668 + 0.e-23*I == 0.680639276423668
-0.730893103186221 + 0.e-23*I == -0.730893103186221
The last two results are equal, but the first entry seems to cause an error if I simplify the result before returning its numeric value. There should be no complex contribution 3.7e-22*I, so something goes wrong here.
Is this a known issue, did I do something wrong, or is this a not-yet-known issue that I should open on github?
I am using sympy 1.7.1.
It is a known issue that is hard to avoid when working with complex floating point numbers. (There is an imaginary part because solve does not know that the result should be real.) For real floating point roots, it is often better to use nroots.
@jksuom Thanks for the fast reply. nroots worked like a charm :+1:. Since it is a known issue, I will refrain from opening another issue on GitHub.
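The suggested workaround in runnable form (SymPy assumed installed):

```python
import sympy as sym

x = sym.symbols("x", real=True)
f = x**4 - x**2 + x / 10            # the potential from the question above
fprime = sym.diff(f, x)             # 4*x**3 - 2*x + 1/10

# nroots finds the numeric roots directly, without spurious imaginary parts:
extrema = sym.Poly(fprime, x).nroots()
```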
|
Quaternion mean rotation - MATLAB meanrot - MathWorks Switzerland
meanrot
Quaternion Mean Rotation
Average Out Rotational Noise
The meanrot Algorithm and Limitations
quatAverage
quatAverage = meanrot(quat)
quatAverage = meanrot(quat,dim)
quatAverage = meanrot(___,nanflag)
quatAverage = meanrot(quat) returns the average rotation of the elements of quat along the first array dimension whose size does not equal 1.
If quat is a vector, meanrot(quat) returns the average rotation of the elements.
If quat is a matrix, meanrot(quat) returns a row vector containing the average rotation of each column.
If quat is a multidimensional array, then meanrot(quat) operates along the first array dimension whose size does not equal 1, treating the elements as vectors. This dimension becomes 1 while the sizes of all other dimensions remain the same.
The meanrot function normalizes the input quaternions, quat, before calculating the mean.
quatAverage = meanrot(quat,dim) returns the average rotation along dimension dim. For example, if quat is a matrix, then meanrot(quat,2) is a column vector containing the mean of each row.
quatAverage = meanrot(___,nanflag) specifies whether to include or omit NaN values from the calculation for any of the previous syntaxes. meanrot(quat,'includenan') includes all NaN values in the calculation, while meanrot(quat,'omitnan') ignores them.
Create a matrix of quaternions corresponding to three sets of Euler angles.
eulerAngles = [40 20 10; ...
quat = quaternion(eulerAngles,'eulerd','ZYX','frame');
Determine the average rotation represented by the quaternions. Convert the average rotation to Euler angles in degrees for readability.
quatAverage = quaternion
0.88863 - 0.062598i + 0.27822j + 0.35918k
eulerAverage = eulerd(quatAverage,'ZYX','frame')
eulerAverage = 1×3
Use meanrot over a sequence of quaternions to average out additive noise.
Create a vector of 1e6 quaternions whose distance, as defined by the dist function, from quaternion(1,0,0,0) is normally distributed. Plot the Euler angles corresponding to the noisy quaternion vector.
nrows = 1e6;
ax = 2*rand(nrows,3) - 1;
ax = ax./sqrt(sum(ax.^2,2));
ang = 0.5*randn(size(ax,1),1);
q = quaternion(ax.*ang ,'rotvec');
noisyEulerAngles = eulerd(q,'ZYX','frame');
plot(noisyEulerAngles(:,1))
Use meanrot to determine the average quaternion given the vector of quaternions. Convert to Euler angles and plot the results.
qAverage = meanrot(q);
qAverageInEulerAngles = eulerd(qAverage,'ZYX','frame');
plot(ones(nrows,1)*qAverageInEulerAngles(:,1))
The meanrot Algorithm
The meanrot function outputs a quaternion that minimizes the squared Frobenius norm of the difference between rotation matrices. Consider two quaternions:
q0 represents no rotation.
q90 represents a 90 degree rotation about the x-axis.
q0 = quaternion([0 0 0],'eulerd','ZYX','frame');
q90 = quaternion([0 0 90],'eulerd','ZYX','frame');
Create a quaternion sweep, qSweep, that represents rotations from 0 to 180 degrees about the x-axis.
eulerSweep = (0:1:180)';
qSweep = quaternion([zeros(numel(eulerSweep),2),eulerSweep], ...
Convert q0, q90, and qSweep to rotation matrices. In a loop, calculate the metric to minimize for each member of the quaternion sweep. Plot the results and return the value of the Euler sweep that corresponds to the minimum of the metric.
r0 = rotmat(q0,'frame');
r90 = rotmat(q90,'frame');
rSweep = rotmat(qSweep,'frame');
metricToMinimize = zeros(size(rSweep,3),1);
for i = 1:numel(qSweep)
metricToMinimize(i) = norm((rSweep(:,:,i) - r0),'fro').^2 + ...
norm((rSweep(:,:,i) - r90),'fro').^2;
end
plot(eulerSweep,metricToMinimize)
xlabel('Euler Sweep (degrees)')
ylabel('Metric to Minimize')
[~,eulerIndex] = min(metricToMinimize);
eulerSweep(eulerIndex)
The minimum of the metric corresponds to the Euler angle sweep at 45 degrees. That is, meanrot defines the average between quaternion([0 0 0],'eulerd','ZYX','frame') and quaternion([0 0 90],'eulerd','ZYX','frame') as quaternion([0 0 45],'eulerd','ZYX','frame'). Call meanrot with q0 and q90 to verify the same result.
eulerd(meanrot([q0,q90]),'ZYX','frame')
The metric that meanrot uses to determine the mean rotation is not unique for quaternions significantly far apart. Repeat the experiment above for quaternions that are separated by 180 degrees.
q180 = quaternion([0 0 180],'eulerd','ZYX','frame');
r180 = rotmat(q180,'frame');
metricToMinimize = zeros(size(rSweep,3),1);
for i = 1:numel(qSweep)
    metricToMinimize(i) = norm((rSweep(:,:,i) - r0),'fro').^2 + ...
        norm((rSweep(:,:,i) - r180),'fro').^2;
end
Quaternion means are usually calculated for rotations that are close to each other, which makes the edge case shown in this example unlikely in real-world applications. To average two quaternions that are significantly far apart, use the slerp function. Repeat the experiment using slerp and verify that the quaternion mean returned is more intuitive for large distances.
qMean = slerp(q0,q180,0.5);
q0_q180 = eulerd(qMean,'ZYX','frame')
q0_q180 = 1×3

     0     0    90
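For reference, spherical linear interpolation itself is straightforward to sketch. The following Python function is a hypothetical standalone implementation (quaternions as plain (w, x, y, z) arrays rather than MATLAB quaternion objects); interpolating halfway between the identity and a 180-degree x-axis rotation lands at 90 degrees, matching the result above:

```python
import numpy as np

def slerp(q1, q2, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    dot = float(np.dot(q1, q2))
    if dot < 0.0:                 # take the shorter arc
        q2, dot = -q2, -dot
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    if theta < 1e-9:              # nearly identical rotations
        return q1
    return (np.sin((1 - t) * theta) * q1 + np.sin(t * theta) * q2) / np.sin(theta)

q0 = np.array([1.0, 0.0, 0.0, 0.0])    # no rotation
q180 = np.array([0.0, 1.0, 0.0, 0.0])  # 180 degrees about x
qmid = slerp(q0, q180, 0.5)            # 90 degrees about x
```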
Quaternion for which to calculate the mean, specified as a scalar, vector, matrix, or multidimensional array of quaternions.
Dimension dim indicates the dimension whose length reduces to 1. The size(quatAverage,dim) is 1, while the sizes of all other dimensions remain the same.
'includenan' — Include NaN values when computing the mean rotation, resulting in NaN.
quatAverage — Quaternion average rotation
Quaternion average rotation, returned as a scalar, vector, matrix, or multidimensional array.
meanrot determines a quaternion mean, \overline{q}, according to [1]. \overline{q} is the quaternion that minimizes the squared Frobenius norm of the difference between rotation matrices:

\overline{q}=\underset{q\in \mathbb{S}^{3}}{\operatorname{arg\,min}}\sum_{i=1}^{n}{\left\| A\left(q\right)-A\left({q}_{i}\right)\right\| }_{F}^{2}
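The minimization can be sketched numerically. The following Python snippet (an illustrative NumPy re-implementation, not part of the MATLAB toolbox) evaluates the metric for the q0/q90 example over a sweep of x-axis rotations and recovers the 45-degree minimum:

```python
import numpy as np

def rotx(deg):
    """Frame rotation matrix for a rotation of deg degrees about the x-axis."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,   s],
                     [0.0,  -s,   c]])

# Sum of squared Frobenius distances to the 0- and 90-degree rotations
sweep = np.arange(0, 181)
r0, r90 = rotx(0), rotx(90)
metric = [np.linalg.norm(rotx(d) - r0, 'fro')**2 +
          np.linalg.norm(rotx(d) - r90, 'fro')**2 for d in sweep]
best = int(sweep[int(np.argmin(metric))])
print(best)  # 45
```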
[1] Markley, F. Landis, Yang Chen, John Lucas Crassidis, and Yaakov Oshman. "Average Quaternions." Journal of Guidance, Control, and Dynamics. Vol. 30, Issue 4, 2007, pp. 1193-1197.
dist | slerp
Decompose signals into time-aligned components - MATLAB - MathWorks Italia
Open the Signal Multiresolution Analyzer App
Visualize Time-Aligned MODWTMRA Decomposition
Compare MODWTMRA and EMD Decompositions
Duplicate Decomposition and Generate Script
The Signal Multiresolution Analyzer app is an interactive tool for visualizing multilevel wavelet- and data adaptive-based decompositions of real-valued 1-D signals and comparing results. The app supports single- and double-precision data. With the app, you can:
Access all the real-valued 1-D signals in your MATLAB® workspace.
Generate decompositions using fixed-bandwidth and data-adaptive multiresolution analysis (MRA) methods:
Fixed-bandwidth: Maximal overlap discrete wavelet transform (MODWT) (default), and empirical mode decomposition (EMD)
Data-adaptive: Empirical wavelet transform (EWT), tunable Q-factor wavelet transform (TQWT), and variational mode decomposition (VMD)
Adjust default parameters, and visualize and compare multiple decompositions.
Choose decomposition levels to include in the signal reconstruction.
Obtain frequency ranges of the decomposition levels.
Determine the relative energy of the signal across levels.
Export reconstructed signals and decompositions to your workspace.
Recreate decompositions in your workspace by generating MATLAB scripts.
MATLAB command prompt: Enter signalMultiresolutionAnalyzer.
Load in the Kobe earthquake data. The data are seismograph measurements (vertical acceleration in \text{nm}/\text{sec}^{2}) recorded at Tasmania University, Hobart, Australia, on 16 January 1995, beginning at 20:56:51 (GMT) and continuing for 51 minutes at one-second intervals.
Open Signal Multiresolution Analyzer and click Import. A window appears listing all the workspace variables the app can process.
Select the Kobe data from the dialog box and click Import. By default, a four-level MODWTMRA decomposition of the signal appears in the MODWT tab. The decomposition is obtained using the modwt and modwtmra functions with default settings. The plots in the Decomposition pane are the projections of the wavelet decompositions of the signal at each scale on the original signal subspace. The decomposed signal is named kobe1 in the Decomposed Signals pane. The method MODWT identifies the decomposition. The original signal, kobe, and the reconstruction, kobe1, are plotted in the Reconstructions pane.
By default, plots are with respect to sample index and frequencies are in cycles per sample. To plot with respect to time and display frequencies in hertz, select the Sample Rate radio button on the Signal Multiresolution Analyzer tab. The default sample rate is 1 hertz. The plots and frequencies update to use the sample rate.
The Level Selection pane shows the relative energies of the signal across scales, as well as the frequency bands.
A check box in the Show column controls whether or not that level is displayed in the Decomposition pane. A check box in the Include column controls whether or not to include that level of the wavelet decomposition in the reconstruction. Clicking a plot in the Decomposition pane is another way to include or exclude that level in the signal reconstruction.
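The relative energies shown in Level Selection follow from the rows of the MRA matrix. A minimal Python sketch of per-level percent energy (illustrative only; the app computes this internally from the MODWT coefficients, and the matrix below is a made-up toy example):

```python
import numpy as np

# Toy MRA matrix: each row is one decomposition level of a signal
mra = np.array([[1.0, 2.0, 0.0, 1.0],
                [0.0, 1.0, 1.0, 0.0],
                [3.0, 0.0, 0.0, 1.0]])

level_energy = np.sum(mra**2, axis=1)                # energy per level
relative_energy = 100.0 * level_energy / level_energy.sum()
print(relative_energy.round(1))  # [33.3 11.1 55.6]
```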
To generate a new decomposition, change one of the wavelet parameters in the toolstrip on the MODWT tab and click Decompose.
Wavelet - Wavelet family
Number - Wavelet filter number
Level - Wavelet decomposition level
Changing any parameter in the toolstrip enables the Decompose button.
Load the noisy Doppler signal. The signal is a noisy version of the Doppler test signal of Donoho and Johnstone [1].
Open Signal Multiresolution Analyzer and import the signal into the app. By default, the app creates a four-level MODWTMRA decomposition of the signal in the MODWT tab. In the Decomposed Signals pane, the wavelet decomposition is named noisdopp1. The Reconstructions pane shows the original and reconstructed signals plotted in two different colors.
To add the EMD decomposition, first switch to the Signal Multiresolution Analyzer tab, then click Add ▼ and select EMD.
After a few moments the EMD decomposition noisdopp2 appears in the EMD tab. The decomposition is obtained using the emd function with default settings. The residual is now the thickest plot in the Reconstructions pane. You can change the parameters in the toolstrip and click Decompose to obtain a different EMD decomposition. To learn more about the parameters and the EMD algorithm, see emd.
To more easily see the differences between the two reconstructions, click noisdopp in the plot legend. The text fades, and the plot of the original signal is hidden. You can use the legend to hide any plot in the Reconstructions pane.
This example shows how to duplicate a decomposition for modification. The example also shows how to generate a script to recreate the decomposition in your workspace.
Load the Kobe earthquake data into your workspace. The data are seismograph measurements (vertical acceleration in \text{nm}/\text{sec}^{2}).
Open Signal Multiresolution Analyzer and import the earthquake data into the app. By default, the app creates a four-level MODWTMRA decomposition of the signal called kobe1 using the modwt and modwtmra functions with default settings. To show plots with respect to time and express frequencies in Hz, click the Sample Rate radio button in the Signal Multiresolution Analyzer tab.
Duplicate Decomposition
Create a new six-level decomposition using the order 4 Coiflet. In the Signal Multiresolution Analyzer tab, click Duplicate in the toolstrip. Since kobe1 is the currently selected item in Decomposed Signals, a duplicate of the first decomposition is created. The duplicate is called kobe1Copy. The plots in Reconstructions are updated to include the new decomposition. Except for the color, the duplicate is identical with the first decomposition. You can change the name of the duplicate by right-clicking on the name in Decomposed Signals.
In the MODWT tab, change the settings in the toolstrip to the following values and then click Decompose.
Wavelet: coif
Number: 4
Level: 6
In Level Selection, note which components of the decomposition are included in the reconstruction: the approximation and the level 5 and level 6 details.
Level 4 has approximately 60% of the total energy. Remove levels 5 and 6 from the reconstruction, and include level 4. Show only the approximation and level 4 details in the Decomposition pane. To approximately align the decomposition with the reconstruction, drag the Decomposition pane beneath the Reconstructions pane.
Generate MODWT Script
You have three export options. You can export the reconstruction or the entire decomposition of the selected decomposed signal to your workspace, or you can export a MATLAB® script to recreate the decomposition in your workspace. To generate a script, in the Signal Multiresolution Analyzer tab click Export > Generate MATLAB Script.
An untitled script opens in your editor with the following executable code. The true-false values in levelForReconstruction correspond to which Include boxes are checked in Level Selection. You can save the script as is, or modify it to apply the same decomposition settings to other signals. Run the code.
% Logical array for selecting reconstruction elements
levelForReconstruction = [false,false,false,true,false,false,true];
% Perform the decomposition using modwt
wt = modwt(kobe,'coif4',6);
% Construct MRA matrix using modwtmra
mra = modwtmra(wt,'coif4');
% Sum down the rows of the selected multiresolution signals
kobe1Copy = sum(mra(levelForReconstruction,:),1);
Plot the original signal and reconstruction. Except for possibly the colors, the plot will match the kobe1Copy reconstruction shown in the app.
plot(t,kobe1Copy,LineWidth=2)
title("Reconstruction")
legend("Original","Reconstruction",Location="northwest")
Generate EMD Script
Add the EMD decomposition of the Kobe data by clicking Add ▼ and selecting EMD in the Signal Multiresolution Analyzer tab. The name of the decomposed signal in the Decomposed Signals pane is kobe3. By default, the reconstruction consists only of the residual. The decomposition is obtained by using the emd function with default settings.
Generate a script that creates the EMD decomposition by clicking Export > Generate MATLAB Script. An untitled script opens in your editor with the following executable code. Run the code.
levelForReconstruction = [false,false,false,false,false,true];
% Perform the decomposition using EMD
[imf,residual,info] = emd(kobe, ...
SiftRelativeTolerance=0.2, ...
SiftMaxIterations=100, ...
MaxNumIMF=5, ...
MaxNumExtrema=1, ...
MaxEnergyRatio=20, ...
Interpolation='spline');
% Construct MRA matrix by appending IMFs and residual
mra = [imf residual].';
kobe3 = sum(mra(levelForReconstruction,:),1);
Compare the reconstruction kobe3 with the original signal. In this case, the reconstruction only consists of the residual.
plot(t,kobe3,LineWidth=2)
Wavelet — Orthogonal wavelet family
sym (default) | coif | db | fk
Orthogonal wavelet family to use to generate the multiresolution analysis, specified as one of the following:
The Wavelet parameter is applicable only for generating a multiresolution analysis.
For more information about the wavelets, use the waveinfo function. For example, to learn more about Daubechies wavelets, enter waveinfo('db').
Interpolation — Interpolation method
spline (default) | pchip
Interpolation method to use for envelope construction in empirical mode decomposition, specified as one of the following:
spline — Cubic spline interpolation
pchip — Piecewise cubic Hermite interpolating polynomial method
The Interpolation parameter is applicable only for generating an empirical mode decomposition. You can change other options with the app when creating empirical mode decompositions. For more information, see emd.
signalMultiresolutionAnalyzer
signalMultiresolutionAnalyzer opens the Signal Multiresolution Analyzer app. Once the app initializes, import a signal for analysis by clicking Import.
signalMultiresolutionAnalyzer(sig)
signalMultiresolutionAnalyzer(sig) opens the Signal Multiresolution Analyzer app and imports, decomposes, and plots the multiresolution analysis of sig using modwtmra and modwt with the sym4 wavelet and default settings.
sig is a variable in the workspace. sig can be:
A 1-by-N or N-by-1 real-valued vector.
Single or double precision.
By default, the app plots the decomposition levels as functions of sample index. To plot with respect to time, you can set a sample rate or sample period using the app.
To decompose more than one signal simultaneously, you can run multiple instances of the Signal Multiresolution Analyzer app.
For the MODWT and TQWT decomposition methods, the script generated by the Signal Multiresolution Analyzer app supports gpuArray (Parallel Computing Toolbox) inputs.
To generate the decompositions, Signal Multiresolution Analyzer uses these functions:
EMD — emd
EWT — ewt
MODWT — modwt and modwtmra
TQWT — tqwt and tqwtmra
VMD — vmd
For the fixed-bandwidth methods, EMD and MODWT, Signal Multiresolution Analyzer reports the theoretical frequency ranges of the decomposition levels. For the data-adaptive methods, EWT, TQWT, and VMD, the app reports the measured bandwidth.
Wavelet Signal Denoiser | Wavelet Time-Frequency Analyzer
emd | ewt | modwt | modwtmra | tqwt | tqwtmra | vmd
Global Constraint Catalog: cond_lex_greater
Inspired by [WallaceWilson06].
cond_lex_greater(VECTOR1, VECTOR2, PREFERENCE_TABLE)

Type:
TUPLE_OF_VALS : collection(val-int)

Arguments:
VECTOR1 : collection(var-dvar)
VECTOR2 : collection(var-dvar)
PREFERENCE_TABLE : collection(tuple-TUPLE_OF_VALS)
Restrictions:
|TUPLE_OF_VALS| >= 1
required(TUPLE_OF_VALS, val)
required(VECTOR1, var)
required(VECTOR2, var)
|VECTOR1| = |VECTOR2|
|VECTOR1| = |TUPLE_OF_VALS|
required(PREFERENCE_TABLE, tuple)
same_size(PREFERENCE_TABLE, tuple)
distinct(PREFERENCE_TABLE, [])
in_relation(VECTOR1, PREFERENCE_TABLE)
in_relation(VECTOR2, PREFERENCE_TABLE)
Purpose: VECTOR1 and VECTOR2 are both assigned to the I-th and J-th items of the collection PREFERENCE_TABLE such that I > J.

Example:
cond_lex_greater(
    <0, 0>,
    <1, 0>,
    <tuple-<1, 0>, tuple-<0, 1>, tuple-<0, 0>, tuple-<1, 1>>
)
The cond_lex_greater constraint holds since VECTOR1 and VECTOR2 are respectively assigned to the third and first items of the collection PREFERENCE_TABLE.
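The holding condition can be sketched as a small checker. This Python function is a hypothetical illustration (not part of the catalog): it looks up the 1-based positions I and J of the two vectors in the preference table and tests I > J:

```python
def cond_lex_greater(vec1, vec2, table):
    """Holds iff vec1 and vec2 both occur in the preference table
    and vec1 appears strictly later than vec2 (I > J, 1-based)."""
    i = table.index(tuple(vec1)) + 1
    j = table.index(tuple(vec2)) + 1
    return i > j

# Example from the catalog entry: VECTOR1 = <0,0> (third item),
# VECTOR2 = <1,0> (first item), so the constraint holds.
table = [(1, 0), (0, 1), (0, 0), (1, 1)]
print(cond_lex_greater((0, 0), (1, 0), table))  # True
```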
Typical:
|TUPLE_OF_VALS| > 1
|VECTOR1| > 1
|VECTOR2| > 1
|PREFERENCE_TABLE| > 1
Symmetry: all occurrences of two distinct tuples of values of VECTOR1, VECTOR2 and PREFERENCE_TABLE.tuple can be swapped; all occurrences of a tuple of values of VECTOR1, VECTOR2 and PREFERENCE_TABLE.tuple can be renamed to any unused tuple of values.
See also: cond_lex_cost, cond_lex_greatereq, cond_lex_less, cond_lex_lesseq (preferences), lex_greater.
characteristic of a constraint: vector, automaton.
modelling: preferences.
symmetry: lexicographic order.
Figure 5.81.1 depicts the automaton associated with the preference table of the cond_lex_greater constraint given in the example. Let VAR1_k and VAR2_k respectively be the var attributes of the k-th items of the VECTOR1 and VECTOR2 collections. Figure 5.81.2 depicts the reformulation of the cond_lex_greater constraint. This reformulation uses:
Two occurrences of the automaton depicted by Figure 5.81.1 for computing the positions I and J within the preference table corresponding to VECTOR1 and VECTOR2.
The binary constraint I > J.
Figure 5.81.1. Automaton associated with the preference table of the cond_lex_greater constraint given in the Example slot
Figure 5.81.2. Hypergraph of the reformulation corresponding to the cond_lex_greater constraint: it uses two occurrences of the automaton of Figure 5.81.1 and the constraint I > J
WCSEDIT
Modifies or examines image coordinate system information
This task performs one of a set of modifications to the WCS (World Coordinate System) components of a list of images. According to the value of the MODE parameter it will:
Set the Current coordinate system
Add a new coordinate system
Remove a coordinate system
Set an attribute for a coordinate system
Show the coordinate systems which currently exist
The routine does not fail if some of the requested edits cannot be performed, but a file whose name is given by the NAMELIST parameter records which images were successfully accessed.
WCSEDIT in mode frame
COEFFS( * ) = _DOUBLE (Read)
If MODE is ADD, this parameter is a list of the coefficients used for the mapping from the target frame to the new frame. Its meaning and the number of values required depend on the value of MAPTYPE:
UNIT – No values are required:
\begin{aligned} x' &= x \\ y' &= y \end{aligned}
LINEAR – Six values C_{1} \dots C_{6} are required:
\begin{aligned} x' &= C_{1}+C_{2}x+C_{3}y \\ y' &= C_{4}+C_{5}x+C_{6}y \end{aligned}
PINCUSHION – Three values C_{1} \dots C_{3} are required:
\begin{aligned} x' &= x\left(1+C_{1}\left[(x-C_{2})^{2}+(y-C_{3})^{2}\right]\right) \\ y' &= y\left(1+C_{1}\left[(x-C_{2})^{2}+(y-C_{3})^{2}\right]\right) \end{aligned}
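The PINCUSHION mapping can be sketched directly from its formula. This Python helper is an illustration of the transformation only (not part of WCSEDIT); it applies the radial distortion of strength C1 about the centre (C2, C3):

```python
def pincushion(x, y, c1, c2, c3):
    """Apply the WCSEDIT PINCUSHION mapping: a radial distortion of
    strength c1 centred on (c2, c3)."""
    r2 = (x - c2)**2 + (y - c3)**2
    factor = 1.0 + c1 * r2
    return x * factor, y * factor

# With c1 = 0 the mapping reduces to the identity
print(pincushion(3.0, 4.0, 0.0, 0.0, 0.0))  # (3.0, 4.0)
```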
DOMAIN = LITERAL (Read)
If MODE is ADD this gives the Domain (name) to be used for the new frame. Spaces in the name are ignored and letters are folded to upper case. If the new frame is successfully added and any frame with the same domain name already exists, the old one will be removed, and a message will be printed to that effect. [CCD_WCSEDIT]
EPOCH = _DOUBLE (Read)
If a "Sky Co-ordinate System" specification is supplied (using parameter FRAME) for a celestial co-ordinate system, then an epoch value is needed to qualify it. This is the epoch at which the supplied sky positions were determined. It should be given as a decimal-years value, with or without decimal places ("1996.8" for example). Such values are interpreted as a Besselian epoch if less than 1984.0 and as a Julian epoch otherwise.
FOREXP( * ) = LITERAL (Read)
If MODE=ADD and MAPTYPE=MATH, this gives the expressions to be used for the forward transformation to be added. There must be at least two expressions (for the two coordinates) but there may be more if intermediate expressions are to be used. Expression syntax is Fortran-like; see the AST_MATHMAP documentation in SUN/210 for details.
FRAME = LITERAL (Read)
This parameter specifies the "target frame", which has the following meaning according to the value of the MODE parameter:
MODE = CURRENT – The frame to be made Current
MODE = REMOVE – The frame to remove; if it is a domain name (see below) then all frames with that domain will be removed.
MODE = ADD – The new frame will be a copy of the target frame (though Domain and Title will be changed), and will be mapped from it using the mapping given.
MODE = SET – The frame whose attributes are to be set
MODE = SHOW – This parameter is ignored
The Null (!) value; in this case the Current frame is used.
A domain name, or !, is usually the most suitable choice.
IN = LITERAL (Read)
A list specifying the names of the images whose WCS components are to be modified or examined. The image names should be separated by commas and may include wildcards.
INVERT = _LOGICAL (Read)
If set TRUE the mapping defined by COEFFS will be applied in the reverse direction. [FALSE]
INVEXP( * ) = LITERAL (Read)
If MODE=ADD and MAPTYPE=MATH, this gives the expressions to be used for the inverse transformation to be added. There must be at least two expressions (for the two coordinates) but there may be more if intermediate expressions are to be used. Expression syntax is Fortran-like; see the AST_MATHMAP documentation in SUN/210 for details.
MAPTYPE = LITERAL (Read)
This parameter is required when MODE is ADD, and specifies the type of transformation which maps from the target frame to the new frame. It may take one of the following values:
UNIT – A Unit mapping
LINEAR – A linear mapping
PINCUSHION – A pincushion distortion
MATH – A general algebraic mapping
MODE = LITERAL (Read)
The action to be performed. It may take one of the following values:
ADD – Add a new frame (which becomes Current)
CURRENT – Set the Current frame
REMOVE – Remove a frame (Current frame is not changed unless the Current one is removed)
SET – Set frame attributes (Current frame is not changed)
SHOW – Display a list of the frames which exist
NAMELIST = LITERAL (Read)
The name of an output file in which to write the names of all the NDFs which were successfully accessed. In particular, if MODE is CURRENT, this list will include all the NDFs which contained the specified frame, but exclude any which did not. [WCSEDIT.LIS]
SET = LITERAL (Read)
If MODE is SET, then this gives a string of the form "attribute=value" which is to be applied to the frame. The string is passed straight to the AST_SET routine (see SUN/210).
SIMPFI = _LOGICAL (Read)
If MODE=ADD and MAPTYPE=MATH, this gives the value of the mapping's SimpFI attribute (whether it is legitimate to simplify the forward followed by the inverse transformation to a unit transformation). [TRUE]
SIMPIF = _LOGICAL (Read)
If MODE=ADD and MAPTYPE=MATH, this gives the value of the mapping's SimpIF attribute (whether it is legitimate to simplify the inverse followed by the forward transformation to a unit transformation). [TRUE]
wcsedit * current ccd_reg
This sets the Current coordinate system of all the images in the current directory to ’CCD_REG’. The names of all the images which had this coordinate system are written to the file WCSEDIT.LIS. Any which do not appear in this file were not modified by the program.
wcsedit data* remove frame=4
The fourth coordinate frame in the WCS component of each image 'data*.sdf' is removed.
wcsedit "first,second" mode=add frame=GRID maptype=pincushion coeffs=[-6.8e-8,0,0] domain=NEW
A new coordinate system, called 'NEW', is added to the images first and second. It is connected to the previously existing GRID domain by a pincushion distortion mapping centred at the origin with a distortion coefficient of -6.8e-8. If any frames with domain NEW already exist in those images they are removed.
wcsedit image1 set ! set="domain=NEW,title=New frame"
This changes the value of the Domain attribute of the Current coordinate frame in the WCS component of image1 to the name "NEW" and sets the Title attribute of the frame to "New frame".
wcsedit image1 show
This displays all the coordinate frames in image1 with their Domains and titles, and indicates which one is Current.
wcsedit frm mode=add frame=pixel maptype=math simpif simpfi forexp=["r=sqrt(x*x+y*y)","theta=atan2(y,x)"] invexp=[x=r*cos(theta),y=r*sin(theta)]
Adds a frame giving a polar coordinate view of the PIXEL frame.
“Handling coordinate systems directly”.
This routine provides similar functionality to that provided by KAPPA applications WCSADD, WCSREMOVE and WCSFRAME, but allows use of CCDPACK-style NDF lists.
Perform mod operation with a constant denominator and generate optimized HDL code - Simulink - MathWorks Italia
Modulo by Constant HDL Optimized
Perform mod operation with a constant denominator and generate optimized HDL code
The Modulo by Constant HDL Optimized block performs the modulo operation (remainder after division) with a constant denominator using an HDL-optimized architecture with cycle-true latency.
The Modulo by Constant block uses an algorithm that is functionally similar to a Barrett Reduction. The division operation is computed via a multiplication by inverse, which generally results in better performance on embedded systems.
Dividend, specified as a real scalar.
If X is a fixed-point data type, it must use binary-point scaling. Slope-bias representation is not supported for fixed-point data types.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | fixed point
Y — Result of modulus operation
Result of modulus operation, returned as a scalar.
Denominator — Divisor
Divisor to use for the modulus operation, specified as a positive, real-valued, finite scalar.
Values: MATLAB® expression that evaluates to a positive, real-valued, finite fixed point or numeric value
The modulo operation

Y = X \bmod D = X-\left\lfloor \frac{X}{D}\right\rfloor \times D

is an important building block for many mathematical algorithms. However, this formula for X mod D is computationally inefficient for fixed-point and integer inputs. Many embedded processors lack instructions for integer division. Those that do have them require many clock cycles to compute the answer. Division is also inefficient in commercially available FPGAs, whose arithmetic circuits are designed for efficient multiplication, addition, and subtraction. Finally, for fixed-point modulo operations, it is difficult to optimize the word length of internal data types used for the calculation because the division operation is unbounded, even for small-wordlength inputs.
Instead of dividing directly, the block computes the quotient through multiplication by a precomputed constant: \frac{X}{D}=X\times \frac{1}{D}.
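For integer inputs, the multiply-by-inverse idea can be modeled with a Barrett-style reduction, where 1/D is precomputed as a scaled integer. This Python snippet is an illustrative sketch of the technique, not the block's exact HDL architecture:

```python
def mod_by_const(x, d, k=32):
    """Compute x mod d for 0 <= x < 2**k using only multiply,
    shift, and subtract (Barrett-style reduction)."""
    m = (1 << (2 * k)) // d        # scaled reciprocal, precomputed once per d
    q = (x * m) >> (2 * k)         # approximate floor(x / d)
    r = x - q * d
    while r >= d:                  # the estimate of q is off by at most 2
        r -= d
    return r

print(mod_by_const(1234567, 97))  # 48, i.e. 1234567 % 97
```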
Modulo by Constant
2014 On Generalized Fractional Integral Operators and the Generalized Gauss Hypergeometric Functions
Dumitru Baleanu, Praveen Agarwal
A remarkably large number of fractional integral formulas involving a number of special functions have been investigated by many authors. Very recently, Agarwal (National Academy Science Letters) gave some integral transform and fractional integral formulas involving the {F}_{p}^{\left(\alpha ,\beta \right)}\left(\cdot\right). In this sequel, we aim to establish some image formulas by applying generalized operators of fractional integration involving Appell's function {F}_{3}\left(\cdot\right) due to Marichev-Saigo-Maeda. Some interesting special cases of our main results are also considered.
Dumitru Baleanu. Praveen Agarwal. "On Generalized Fractional Integral Operators and the Generalized Gauss Hypergeometric Functions." Abstr. Appl. Anal. 2014 (SI58) 1 - 5, 2014. https://doi.org/10.1155/2014/630840
Generate downlink RMC waveform - MATLAB lteRMCDLTool - MathWorks Switzerland
lteRMCDLTool
Generate LTE DL RMC R.31-4
Generate RMC R.3 with SIB
Generate LTE DL RMC R.12 With 16QAM Modulation
Generate downlink RMC waveform
[waveform,grid,rmccfgout] = lteRMCDLTool(rmccfg,trdata)
[waveform,grid,rmccfgout] = lteRMCDLTool(rc,trdata,duplexmode,totsubframes)
lteRMCDLTool starts the LTE Waveform Generator app configured for parameterization and generation of a reference measurement channel (RMC) waveform. The Reference Channel menu lists the available RMCs with their default top-level settings.
[waveform,grid,rmccfgout] = lteRMCDLTool(rmccfg,trdata) where rmccfg specifies a user-defined reference channel structure. The reference configuration structure with default parameters can easily be created using lteRMCDL then modified if desired.
SIB1 messages and the associated PDSCH and PDCCH can be added to the output waveform by adding the substructure rmccfg.SIB.
[waveform,grid,rmccfgout] = lteRMCDLTool(rc,trdata,duplexmode,totsubframes) specifies the default reference measurement channel, rc, and information bits trdata. duplexmode and totsubframes are optional input arguments that define the duplex mode of the generated waveform and the total number of subframes that make up the grid.
Generate a time-domain signal and a 3-dimensional array of the resource elements for R.31-4 FDD as specified in TS 36.101 Annex A.3.9.1-1. R.31-4 FDD is 20 MHz, 64QAM, with a variable code rate, and has user data scheduled in subframe 5.
[txWaveform,txGrid,rmcCfgOut] = lteRMCDLTool('R.31-4',{[1;0] [1;0]});
This example shows use of lteRMCDLTool to generate a tx waveform with SIB transmission enabled using DCIFormat1A and localized allocation.
Specify desired RMC, initialize configuration structure and define txData. Generate txGrid and plot it.
rc = 'R.3';
rmc = lteRMCDL(rc);
txData = [1;0;0;1];
[~,txGrid,~] = lteRMCDLTool(rmc, txData);
mesh(abs(txGrid))
To insert a SIB1 message into the output waveform, initialize the SIB substructure, enable SIB transmission, adjust other defaults, and regenerate txGrid. Plot txGrid to illustrate the presence of the SIB1 message in subframe 5.
[txWaveform,txGrid,rmcCfgOut] = lteRMCDLTool(rmc, txData);
Generate a time domain waveform, and a 3D array of the resource elements for RMC R.12 as specified in TS 36.101. Modify the standard R.12 RMC to use 16QAM modulation scheme instead of the default QPSK.
Create an RMC setting structure specifying R.12 for RC and 16QAM for Modulation.
rmc = lteRMCDL('R.12');
rmc.PDSCH.Modulation = '16QAM';
Generate the tx waveform, RE grid and also output the RMC configuration structure.
[txWaveform, txGrid, rmcCfgOut] = lteRMCDLTool(rmc, txData);
Review the rmcCfgOut structure and PDSCH substructure.
SerialCat: 1
rmcCfgOut.PDSCH
ActualCodeRate: [0 0.3846 0.3846 0.3846 0.3846 0 0.3846 0.3846 ... ]
TrBlkSizes: [0 936 936 936 936 0 936 936 936 936]
CodedTrBlkSizes: [0 2496 2496 2496 2496 0 2496 2496 2496 2496]
CSIMode: 'PUCCH 1-1'
HARQProcessSequence: [0 1 2 3 4 0 5 6 7 8]
vector | cell array containing one or two vectors
Information bits, specified as a vector or cell array containing one or two vectors of bit values. Each vector contains the information bit stream to be coded across the duration of the generation, which represents multiple concatenated transport blocks. If the number of bits required across all subframes of the generation exceeds the length of the vectors provided, the trdata vector is looped internally. This feature allows you to enter a short pattern, such as [1;0;0;1], which is repeated as the input to the transport coding. In each subframe of generation, the number of data bits taken from this stream comes from the elements of the rmccfgout.PDSCH.TrBlkSizes matrix.
When the trdata input contains empty vectors, there is no transport data. The transmission of PDSCH and its corresponding PDCCH are skipped in the waveform when the trdata contains empty vectors. The other physical channels and signals are transmitted as normal in generated waveform.
Example: [1;0;0;1]
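The internal looping of a short bit pattern can be modeled in a few lines. This Python sketch (illustrative only, not toolbox code) shows how a pattern such as [1;0;0;1] would be cyclically extended to supply the bits a subframe needs:

```python
from itertools import cycle, islice

def take_bits(pattern, n):
    """Loop a short bit pattern to supply n information bits."""
    return list(islice(cycle(pattern), n))

print(take_bits([1, 0, 0, 1], 10))  # [1, 0, 0, 1, 1, 0, 0, 1, 1, 0]
```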
Total number of subframes, specified as a positive integer. This argument specifies the total number of subframes that form the resource grid.
Reference channel configuration, specified as a structure. Create a reference configuration structure with default parameters by using the lteRMCDL function. The reference configuration structures you generate with the lteRMCDL function comply with those defined in Annex A.3 of [1].
To generate the waveform output in alignment with your simulation requirements, modify the output of the lteRMCDL function. To add SIB1 messages and the associated PDSCH and PDCCH to the output waveform, specify the rmccfg.SIB substructure. You can specify this input to include fields contained in the rmccfgout output structure.
Generated RMC time-domain waveform, returned as a NS-by-NT numeric matrix. NS is the number of time-domain samples and NT is the number of transmit antennas.
grid represents the populated resource grid for all the physical channels specified in TS 36.101 [1], Annex A.3.
RMC configuration, returned as a structure. This output contains information about the OFDM-modulated waveform and RMC-specific configuration parameters. Field definitions and settings align with rmccfg.
For more information about the OFDM modulated waveform, see lteOFDMInfo. For more information about the RMC-specific configuration parameters, see lteRMCDL.
NFrame Nonnegative integer Frame number
These fields are only present and applicable for 'Port7-14' transmission scheme (TxScheme) and only required in rmccfg if CSIRSPeriod is not set to 'Off'.
CSIRefP 1, 2, 4, 8 Array of number of CSI-RS antenna ports
Include a SIB message by adding the SIB substructure to the lteRMCDL function configuration output structure, rmccfgout, after it is generated and before using the rmccfgout structure as input to lteRMCDLTool.
Enable PDCCH OFDMA channel noise generator (OCNG). See footnote.
The following fields are only present and applicable for 'TDD' duplex mode (DuplexMode).
Carrier sampling rate in Hz, (NSC/NSYM) × 3.84e6, where NSC is the number of subcarriers and NSYM is the number of OFDM symbols in a subframe.
PDCCH - 1 for NDLRB < 10
PDCCH for NDLRB >= 10
2 symbols for 20 MHz, 15 MHz, and 10 MHz
3 symbols for 5 MHz and 3 MHz
4 symbols for 1.4 MHz
In TDD mode, only two OFDM symbols are allocated to the PDCCH in subframes 1 and 6, irrespective of the channel bandwidth. Therefore, the CFI value varies per subframe for the 5 MHz, 3 MHz, and 1.4 MHz channel bandwidths, that is, for bandwidths where the PDCCH symbol allocation is not two in the other subframes.
The PDCCH OCNG fills the unused PDCCH resource elements with QPSK symbols, using either a single port or transmit diversity depending on the number of cell RS ports.
Numeric scalar, or a numeric matrix with one or two rows
PDCCHPower Numeric scalar
The following field is only present for 'SpatialMux' transmission scheme (TxScheme).
The following field is only present for 'Port7-8', 'Port8', or 'Port7-14' transmission schemes (TxScheme).
The following fields are only present for UE-specific beamforming ('Port5', 'Port7-8', 'Port8', or 'Port7-14').
W Numeric matrix
One-based HARQ process indices for the internal HARQ scheduling sequence. The sequence, of length LHARQ_Seq, is optimized according to the transport block sizes, the number of HARQ processes, the duplex mode, and, in TDD mode, the UL/DL configuration.
The HARQ process sequence table is calculated according to the procedure detailed in 3GPP Tdoc R5-095777 ("Scheduling of retransmissions and number of active HARQ processes for DL performance RMC-s").
For the case when NHARQProcesses = 1, the HARQProcessSequence is [1 0 0 0 0 0 0 0 0 0]. Using this HARQ process sequence, only the TrBlkSize corresponding to subframe 0 gets transmitted. There is no transmission in other subframes, even if the transport block sizes in other subframes are nonzero.
SIB Substructure
If the substructure SIB has been added to rmccfg, SIB1 messages and the associated PDSCH and PDCCH can be generated. The SIB substructure includes these fields:
(0,1), bit array
SIB1 transport block information bits
VRBStart
variable, see rules in TS 36.213 Section 7.1.6.3
Virtual RB allocation starting resource block, RBstart.
VRBLength
Length in terms of virtual contiguously allocated resource blocks, LCRBs.
Enable/Disable SIB generation
'Format1A' (default) or 'Format1C'
0 (default) or 1, single bit flag
Localized (0) or distributed (1) allocation of virtual resource blocks for Resource allocation type 2
The following parameter is only applicable when DCIFormat = 'Format1A'.
N1APRB
Transport block set selection parameter, N_{PRB}^{1A}.
Indicates the column in TS 36.213, Table 7.1.7.2.1-1 for transport block size selection. The default is the smallest transport block size, in either column 2 or 3, that is bigger than or equal to the length of the Data field. Also see TS 36.212 Section 5.3.3.1.3 and TS 36.213 Section 7.1.7.
The following parameter is only applicable when using distributed allocation (AllocationType = 1).
Distributed allocation gap, ‘0’ for Ngap,1 or ‘1’ for Ngap,2
The set of valid transport block sizes is specified in TS 36.213 [4], Table 7.1.7.2.1-1. Only columns 2 and 3 apply to the SIB DL-SCH. The Data field is padded with zeros to the closest valid size from this table.
Per TS 36.321 [5], Section 6.1.1, the lowest order information bit of the SIB.Data field is mapped to the most significant bit of the SIB1 transport block.
For subframe 5, per TS 36.101 [1], Annex A.3, reference PDSCH transmissions are not scheduled in subframe 5 except for the SIB1 associated PDSCH.
Setting the OCNG parameter field 'On' fills all unused, unscheduled PDSCH resource elements with QPSK modulated random data.
The values for CFI and PRBSet can vary per subframe. If these parameters are arrays, then the function cyclically steps through the elements of the array starting with the index given by mod(NSubframe,length(parameter)). When parameter is PRBSet, the parameter must be a cell array of column vectors or slot-wise matrices.
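The cyclic indexing rule can be sketched as follows (Python, with 0-based indexing standing in for MATLAB's 1-based mod(NSubframe,length(parameter)) rule; the CFI pattern is a made-up example):

```python
def per_subframe_value(param, nsubframe):
    """Select the element of a per-subframe parameter array for subframe
    nsubframe, cycling through the array when it is shorter than the
    number of subframes."""
    return param[nsubframe % len(param)]

cfi = [2, 2, 3]  # hypothetical per-subframe CFI pattern
values = [per_subframe_value(cfi, n) for n in range(7)]
print(values)  # [2, 2, 3, 2, 2, 3, 2]
```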
The PHICH symbols carry a single ACK on the first PHICH instance in each PHICH group.
The substructure, OCNGPDSCH, defines the OCNG patterns in associated RMCs and tests according to TS 36.101 [1], Section A.5. OCNGPDSCH contains these fields which can also be customized with the full range of PDSCH-specific values.
In previous releases, the input-free syntaxes of this function opened the LTE Downlink RMC Generator app. Starting in R2019b, input-free calls to this function open the LTE Waveform Generator app for a downlink RMC waveform.
lteRMCDL | lteRMCULTool | lteTestModelTool
|
User:SMP - Wikipedia
This user comes from Barcelona.
LaTeX-4: This user is an expert LaTeX user.
en-3: This user has an advanced understanding of American English.
I contribute to the Catalan-language Wikipedia.
I support Catalan language Wikipedia with my contributions
Feel free to correct me anywhere.
e^{i\pi}
This user is a mathematician.
This user contributes using Kubuntu.
This user contributes using Lubuntu.
I grant non-exclusive permission for the Wikimedia Foundation Inc. to relicense my text contributions under any copyleft license that it chooses, provided it maintains the free and open spirit of the GFDL. This permission acknowledges that future licensing needs of the Wikimedia projects may need adapting in unforeseen fashions to facilitate other uses, formats, and locations. It is given for as long as this banner remains.
Retrieved from "https://en.wikipedia.org/w/index.php?title=User:SMP&oldid=387586039"
|
EuDML | Saddle point theorems on generalized convex spaces.
Saddle point theorems on generalized convex spaces.
Kim, In-Sook; Park, Sehie
Kim, In-Sook, and Park, Sehie. "Saddle point theorems on generalized convex spaces." Journal of Inequalities and Applications [electronic only] 5.4 (2000): 397-405. <http://eudml.org/doc/121562>.
author = {Kim, In-Sook and Park, Sehie},
keywords = {minimax theorem; coincidence theorem; generalized convex space; -space; -space},
title = {Saddle point theorems on generalized convex spaces.},
AU - Kim, In-Sook
TI - Saddle point theorems on generalized convex spaces.
KW - minimax theorem; coincidence theorem; generalized convex space; -space; -space
minimax theorem, coincidence theorem, generalized convex space,
G
G
|
EuDML | Multipliers on some weighted L^p-spaces.
Multipliers on some weighted L^p-spaces.
Öztop, S.
Öztop, S. "Multipliers on some weighted L^p-spaces." International Journal of Mathematics and Mathematical Sciences 23.9 (2000): 651-656. <http://eudml.org/doc/48720>.
@article{Öztop2000,
author = {Öztop, S.},
keywords = {multipliers; locally compact abelian group},
title = {Multipliers on some weighted L^p-spaces.},
AU - Öztop, S.
TI - Multipliers on some weighted L^p-spaces.
KW - multipliers; locally compact abelian group
multipliers, locally compact abelian group
Articles by Öztop
|
bert-base-cased · Hugging Face
Language: en · arXiv: 1810.04805 · License: apache-2.0 · Tags: bert, exbert · AutoTrain Compatible · Infinity Compatible
BERT base model (cased)
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team.
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This allows the model to learn a bidirectional representation of the sentence.
Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-cased')
[{'sequence': "[CLS] Hello I'm a fashion model. [SEP]",
{'sequence': "[CLS] Hello I'm a new model. [SEP]",
{'sequence': "[CLS] Hello I'm a male model. [SEP]",
'token_str': 'male'},
{'sequence': "[CLS] Hello I'm a professional model. [SEP]",
'token_str': 'professional'},
{'sequence': "[CLS] Hello I'm a super model. [SEP]",
'token_str': 'super'}]
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = TFBertModel.from_pretrained("bert-base-cased")
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions:
[{'sequence': '[CLS] The man worked as a lawyer. [SEP]',
'token_str': 'lawyer'},
{'sequence': '[CLS] The man worked as a waiter. [SEP]',
{'sequence': '[CLS] The man worked as a cop. [SEP]',
'token_str': 'cop'},
{'sequence': '[CLS] The man worked as a detective. [SEP]',
'token_str': 'detective'},
{'sequence': '[CLS] The man worked as a doctor. [SEP]',
'token_str': 'doctor'}]
[{'sequence': '[CLS] The woman worked as a nurse. [SEP]',
{'sequence': '[CLS] The woman worked as a waitress. [SEP]',
{'sequence': '[CLS] The woman worked as a maid. [SEP]',
{'sequence': '[CLS] The woman worked as a housekeeper. [SEP]',
'token_str': 'housekeeper'},
{'sequence': '[CLS] The woman worked as a cook. [SEP]',
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).
The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

[CLS] Sentence A [SEP] Sentence B [SEP]
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4,
\beta_{1} = 0.9
\beta_{2} = 0.999
Task:  MNLI-(m/mm) | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  | Average
Score: 84.6/83.4   | 71.2 | 90.5 | 93.5  | 52.1 | 85.8  | 88.9 | 66.4 | 79.6
author = {Jacob Devlin and
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding},
Datasets used to train bert-base-cased
Spaces using bert-base-cased
Salesforce/BLIP
flax-community/multilingual-image-captioning
darkproger/propaganda
paulbricman/cybersalience
ajitrajasekharan/NER-Biomedical-PHI-Ensemble
docs-demos/bert-base-uncased
flax-community/indonesian-image-captioning
Fastem/Article_application
Darkhan/shad_ml2
vladyur/whats_wrong_with_your_docker
Fastem/Article_application_2
Vrk/NLP-Project
|
LMIs in Control/Matrix and LMI Properties and Tools/D-Stability Rise Time Poles - Wikibooks, open books for an open world
LMIs in Control/Matrix and LMI Properties and Tools/D-Stability Rise Time Poles
LMI for Rise Time Poles
The following LMI allows for the verification that poles of a system will fall within a rise time constraint. This can also be used to place poles for rise time when the system matrix includes a controller, such as in the form A+BK.
\dot{x}(t) = Ax,
or the matrix A \in \mathbb{R}^{n \times n}, which is the state matrix.
The data required is the matrix A and the rise time t_r you wish to verify.
To begin, the constraint on the pole locations is as follows:
z^{*}z - \frac{1.8^{2}}{t_{r}^{2}} \leq 0,
where z is a complex pole of A. We define r^{2} \geq z^{*}z. The goal of the optimization is to find a valid P > 0 such that the following LMI is satisfied.
The LMI: LMI for Rise Time Poles
The LMI problem is to find a matrix P > 0 satisfying:
\begin{bmatrix} -rP & AP \\ (AP)^{T} & -rP \end{bmatrix} < 0
If the LMI is found to be feasible, then the pole locations of A, represented as z, will meet the rise time specification z^{*}z - \frac{1.8^{2}}{t_{r}^{2}} \leq 0, and the poles of A satisfy the previously defined constraint.
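As a quick numerical sanity check (a sketch under the constraint stated above, not part of the linked MATLAB code), the pole-location condition can be tested directly on a set of known poles, since it reduces to |z| <= 1.8 / t_r:

```python
def meets_rise_time(poles, t_r):
    """Check the constraint z* z - (1.8 / t_r)**2 <= 0 for every
    complex pole z, i.e. |z| <= 1.8 / t_r."""
    bound = (1.8 / t_r) ** 2
    return all((z.conjugate() * z).real <= bound for z in poles)

# Hypothetical A with eigenvalues -1 +/- 2j, so |z|^2 = 5 for each pole.
print(meets_rise_time([-1 + 2j, -1 - 2j], t_r=0.5))  # True:  (1.8/0.5)^2 = 12.96 >= 5
print(meets_rise_time([-1 + 2j, -1 - 2j], t_r=1.0))  # False: (1.8/1.0)^2 = 3.24 < 5
```

In practice the poles would come from the eigenvalues of A (e.g. via numpy.linalg.eigvals); the check above is independent of how they are obtained.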
https://github.com/maxwellpeterson99/MAE509Code
[1] - D-stabilization
[2] - D-stability Controller
[3] - D-stability Observer
[5] - A course on LMIs in Control by Matthew Peet
[6] -Matrix and LMI Properties and Tools
Retrieved from "https://en.wikibooks.org/w/index.php?title=LMIs_in_Control/Matrix_and_LMI_Properties_and_Tools/D-Stability_Rise_Time_Poles&oldid=4011183"
|
Centrum Wiskunde & Informatica: Approximate span programs
T. Ito (Tsuyoshi) and S. Jeffery (Stacey)
Span programs are a model of computation that have been used to design quantum algorithms, mainly in the query model. It is known that for any decision problem, there exists a span program that leads to an algorithm with optimal quantum query complexity, however finding such an algorithm is generally challenging. We consider new ways of designing quantum algorithms using span programs. We show how any span program that decides a function f can also be used to decide “threshold” versions of the function f, or more generally, approximate a quantity called the span program witness size, which is some property of the input related to f. We achieve these results by relaxing the requirement that 1-inputs hit some target exactly in the span program, which could potentially make design of span programs significantly easier. In addition, we give an exposition of span program structure, which increases the general understanding of this important model. One implication of this is alternative algorithms for estimating the witness size when the phase gap of a certain unitary can be lower bounded. We show how to lower bound this phase gap in certain cases. As an application, we give the first upper bounds in the adjacency query model on the quantum time complexity of estimating the effective resistance between s and t,
R_{s,t}(G). For this problem we obtain an \tilde{O}\left(\frac{1}{\epsilon^{3/2}} n \sqrt{R_{s,t}(G)}\right)-time algorithm that uses O(\log n) space. In addition, when \mu is a lower bound on \lambda_2(G), by our phase gap lower bound, we can obtain an upper bound of \tilde{O}\left(\frac{1}{\epsilon} n \sqrt{R_{s,t}(G)/\mu}\right) for estimating effective resistance, also using O(\log n) space.
Keywords Effective resistance, Quantum algorithms, Quantum query complexity, Span programs
Ito, T, & Jeffery, S. (2018). Approximate span programs. Algorithmica. doi:10.1007/s00453-018-0527-1
|
Global Constraint Catalog: Cdisjoint_sboxes
disjoint_sboxes(K, DIMS, OBJECTS, SBOXES)
\mathrm{𝚍𝚒𝚜𝚓𝚘𝚒𝚗𝚝}
Types:
    VARIABLES : collection(v-dvar)
    INTEGERS  : collection(v-int)
    POSITIVES : collection(v-int)
Arguments:
    K       : int
    DIMS    : sint
    OBJECTS : collection(oid-int, sid-dvar, x-VARIABLES)
    SBOXES  : collection(sid-int, t-INTEGERS, l-POSITIVES)
Restrictions:
    |VARIABLES| >= 1
    |INTEGERS| >= 1
    |POSITIVES| >= 1
    required(VARIABLES, v)
    |VARIABLES| = K
    required(INTEGERS, v)
    |INTEGERS| = K
    required(POSITIVES, v)
    |POSITIVES| = K
    POSITIVES.v > 0
    K > 0
    DIMS >= 0
    DIMS < K
    increasing_seq(OBJECTS, [oid])
    required(OBJECTS, [oid, sid, x])
    OBJECTS.oid >= 1
    OBJECTS.oid <= |OBJECTS|
    OBJECTS.sid >= 1
    OBJECTS.sid <= |SBOXES|
    |SBOXES| >= 1
    required(SBOXES, [sid, t, l])
    SBOXES.sid >= 1
    SBOXES.sid <= |SBOXES|
    do_not_overlap(SBOXES)
Purpose: holds if, for each pair of objects (O_i, O_j), i ≠ j, O_i and O_j are disjoint with respect to a set of dimensions depicted by DIMS. O_i and O_j are K-dimensional objects; each shifted box of SBOXES is described by its shape id sid, shift offset t, and sizes l, and each object of OBJECTS by its object id oid, shape id sid, and origin x. O_i and O_j are disjoint with respect to the dimensions in DIMS if and only if, for all shifted boxes s_i of O_i and s_j of O_j, there exists at least one dimension d ∈ DIMS such that (1) the origin of s_i in dimension d is strictly greater than the end of s_j in dimension d, or (2) the origin of s_j in dimension d is strictly greater than the end of s_i in dimension d.
disjoint_sboxes(
    2, {0, 1},
    ⟨oid-1 sid-1 x-⟨1,1⟩, oid-2 sid-2 x-⟨4,1⟩, oid-3 sid-4 x-⟨2,4⟩⟩,
    ⟨sid-1 t-⟨0,0⟩ l-⟨1,2⟩, sid-2 t-⟨0,0⟩ l-⟨1,1⟩, sid-2 t-⟨1,0⟩ l-⟨1,3⟩,
     sid-2 t-⟨0,2⟩ l-⟨1,1⟩, sid-3 t-⟨0,0⟩ l-⟨3,1⟩, sid-3 t-⟨0,1⟩ l-⟨1,1⟩,
     sid-3 t-⟨2,1⟩ l-⟨1,1⟩, sid-4 t-⟨0,0⟩ l-⟨1,1⟩⟩
)
Figure 5.124.1 shows the objects of the example. Since these objects are pairwise disjoint, the disjoint_sboxes constraint holds.
Figure 5.124.1. (D) The three mutually disjoint objects O_1, O_2, and O_3, built from the shifted boxes S_1, S_2, S_3, and S_4.
Typical: |OBJECTS| > 1; SBOXES.l.v ≥ 1 (conditions on the OBJECTS and SBOXES collections).
One of the eight relations of the Region Connection Calculus [RandellCuiCohn92]. Unlike the non_overlap_sboxes constraint, which just prevents objects from overlapping, the disjoint_sboxes constraint in addition enforces that borders and corners of objects are not directly in contact.
See also: contains_sboxes, coveredby_sboxes, covers_sboxes, equal_sboxes, inside_sboxes, meet_sboxes, non_overlap_sboxes, overlap_sboxes.
• origin(O1, S1, D) is defined as O1.x(D) + S1.t(D)
• end(O1, S1, D) is defined as O1.x(D) + S1.t(D) + S1.l(D)
• disjoint_sboxes(Dims, O1, S1, O2, S2) holds if and only if there exists D ∈ Dims such that origin(O1, S1, D) > end(O2, S2, D) or origin(O2, S2, D) > end(O1, S1, D)
• disjoint_objects(Dims, O1, O2) holds if and only if, for all S1 ∈ sboxes([O1.sid]) and all S2 ∈ sboxes([O2.sid]), disjoint_sboxes(Dims, O1, S1, O2, S2) holds
• all_disjoint(Dims, OIDS) holds if and only if, for all O1, O2 ∈ objects(OIDS) with O1.oid < O2.oid, disjoint_objects(Dims, O1, O2) holds
• the constraint corresponds to all_disjoint(DIMENSIONS, OIDS)
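The definitions above translate directly into a small checker. In this Python sketch (function and variable names are chosen here, not taken from the catalog), each object origin x, shift t, and size l is a tuple indexed by dimension:

```python
def origin(obj_x, sbox_t, d):
    # origin(O, S, D) = O.x(D) + S.t(D)
    return obj_x[d] + sbox_t[d]

def end(obj_x, sbox_t, sbox_l, d):
    # end(O, S, D) = O.x(D) + S.t(D) + S.l(D)
    return obj_x[d] + sbox_t[d] + sbox_l[d]

def disjoint_sboxes(dims, x1, t1, l1, x2, t2, l2):
    """True iff there is a dimension d in dims where one shifted box
    starts strictly after the other ends (so touching borders do not
    count as disjoint)."""
    return any(
        origin(x1, t1, d) > end(x2, t2, l2, d)
        or origin(x2, t2, d) > end(x1, t1, l1, d)
        for d in dims
    )

# First sboxes of objects 1 and 2 from the example: x=(1,1), t=(0,0),
# l=(1,2) versus x=(4,1), t=(0,0), l=(1,1): separated in dimension 0.
print(disjoint_sboxes({0, 1}, (1, 1), (0, 0), (1, 2), (4, 1), (0, 0), (1, 1)))  # True
```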
|
selection from a DataFrame
removal from a DataFrame
DataFrame/selectremove
selection and removal from a DataFrame
select(f, DF, key, b1, ..., bn)
select[inplace](f, DF, key, b1, ..., bn)
remove(f, DF, key, b1, ..., bn)
remove[inplace](f, DF, key, b1, ..., bn)
selectremove(f, DF, key, b1, ..., bn)
selectremove[inplace](f, DF, key, b1, ..., bn)
specifies the key column of DF
When called on a DataFrame object, the select command returns a DataFrame object consisting of those rows where the key column entry satisfies the given criterion.
When called on a DataFrame object, the remove command returns a DataFrame object consisting of those rows where the key column entry does not satisfy the given criterion.
When called on a DataFrame object, the selectremove command returns a sequence of two DataFrame objects, the first consisting of those rows where the key column entry satisfies the given criterion, and the second consisting of the other rows.
The criterion used for deciding whether a row of the DataFrame is included in the result is to call f(x, b1, ..., bn), where x is the entry in that row and in the key column. This should return true or false (or FAIL, which is interpreted in the same way as false). If you call select, then the returned DataFrame will contain this row if and only if the value returned is true. If you call remove, then the returned DataFrame will contain this row if and only if the value returned is false (or FAIL).
The value key can be a positive or negative integer to indicate the position of the key column, or the label of the key column. This is interpreted in the same way as for indexing a DataFrame, so the first test is if key is a valid column position, and if not, the second test is whether it is a valid column label. You cannot specify a range, list, rtable, or Boolean DataSeries or DataFrame.
Any row that is included in the result will have the same label that it has in DF. All columns have the same labels as in DF.
If you call select[inplace] or remove[inplace], the command will modify the DataFrame object DF and return it. The calling sequences with no index on the command name will return a new DataFrame object and leave DF unchanged. If you call selectremove[inplace], then the command will modify DF to be the first DataFrame returned, and the second DataFrame will be a new DataFrame object.
You can do similar things with DataFrame indexing using a Boolean DataSeries. See the example below. Indexing also allows for criteria involving multiple columns, which is not supported with the select, remove, and selectremove commands.
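The select/remove semantics can be mirrored outside Maple. This Python sketch (plain dicts and hypothetical names, not Maple's implementation) splits rows on a predicate applied to a key column, treating any value other than true as removal, the way FAIL is treated:

```python
def selectremove_rows(pred, rows, key):
    """Split rows into (selected, removed): a row is selected iff
    pred(row[key]) is True; False (or any non-True value, analogous
    to Maple's FAIL) sends it to the removed part."""
    selected = [r for r in rows if pred(r[key]) is True]
    removed = [r for r in rows if pred(r[key]) is not True]
    return selected, removed

def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))

rows = [{"A": 1, "B": 0}, {"A": 3, "B": 2}, {"A": 5, "B": 4}, {"A": 7, "B": 6}]
kept, dropped = selectremove_rows(is_prime, rows, "A")
print([r["A"] for r in kept])     # [3, 5, 7]
print([r["A"] for r in dropped])  # [1]
```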
We will split the DataFrame df, below, into rows with prime and non-prime entries in various ways, using the isprime command as the testing criterion.
\mathrm{df}≔\mathrm{DataFrame}\left(\mathrm{Matrix}\left(4,5,\left(i,j\right)↦2\cdot i-j\right),\mathrm{rows}=[a,b,c,d],\mathrm{columns}=[A,B,C,\mathrm{D},E]\right)
\textcolor[rgb]{0,0,1}{\mathrm{df}}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{}& \textcolor[rgb]{0,0,1}{A}& \textcolor[rgb]{0,0,1}{B}& \textcolor[rgb]{0,0,1}{C}& \textcolor[rgb]{0,0,1}{\mathrm{D}}& \textcolor[rgb]{0,0,1}{E}\\ \textcolor[rgb]{0,0,1}{a}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}& \textcolor[rgb]{0,0,1}{-2}& \textcolor[rgb]{0,0,1}{-3}\\ \textcolor[rgb]{0,0,1}{b}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{c}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{d}& \textcolor[rgb]{0,0,1}{7}& \textcolor[rgb]{0,0,1}{6}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}\end{array}]
The rows for which the first column is a prime.
\mathrm{select}\left(\mathrm{isprime},\mathrm{df},1\right)
[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{}& \textcolor[rgb]{0,0,1}{A}& \textcolor[rgb]{0,0,1}{B}& \textcolor[rgb]{0,0,1}{C}& \textcolor[rgb]{0,0,1}{\mathrm{D}}& \textcolor[rgb]{0,0,1}{E}\\ \textcolor[rgb]{0,0,1}{b}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{c}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{d}& \textcolor[rgb]{0,0,1}{7}& \textcolor[rgb]{0,0,1}{6}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}\end{array}]
We can get the same result if we specify the first column with its label,
A
\mathrm{select}\left(\mathrm{isprime},\mathrm{df},A\right)
[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{}& \textcolor[rgb]{0,0,1}{A}& \textcolor[rgb]{0,0,1}{B}& \textcolor[rgb]{0,0,1}{C}& \textcolor[rgb]{0,0,1}{\mathrm{D}}& \textcolor[rgb]{0,0,1}{E}\\ \textcolor[rgb]{0,0,1}{b}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{c}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{d}& \textcolor[rgb]{0,0,1}{7}& \textcolor[rgb]{0,0,1}{6}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}\end{array}]
A third way to get the same result is by indexing with a Boolean DataSeries.
\mathrm{df}[\mathrm{`~`}[\mathrm{isprime}]\left(\mathrm{df}[A]\right)]
[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{}& \textcolor[rgb]{0,0,1}{A}& \textcolor[rgb]{0,0,1}{B}& \textcolor[rgb]{0,0,1}{C}& \textcolor[rgb]{0,0,1}{\mathrm{D}}& \textcolor[rgb]{0,0,1}{E}\\ \textcolor[rgb]{0,0,1}{b}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{c}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{d}& \textcolor[rgb]{0,0,1}{7}& \textcolor[rgb]{0,0,1}{6}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}\end{array}]
We can obtain the rows where the entries in column
\mathrm{D}
are non-prime using remove.
\mathrm{remove}\left(\mathrm{isprime},\mathrm{df},\mathrm{D}\right)
[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{}& \textcolor[rgb]{0,0,1}{A}& \textcolor[rgb]{0,0,1}{B}& \textcolor[rgb]{0,0,1}{C}& \textcolor[rgb]{0,0,1}{\mathrm{D}}& \textcolor[rgb]{0,0,1}{E}\\ \textcolor[rgb]{0,0,1}{a}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}& \textcolor[rgb]{0,0,1}{-2}& \textcolor[rgb]{0,0,1}{-3}\\ \textcolor[rgb]{0,0,1}{b}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{d}& \textcolor[rgb]{0,0,1}{7}& \textcolor[rgb]{0,0,1}{6}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}\end{array}]
If we want to obtain both the rows where the entries are prime, and where they are nonprime, we can use selectremove. Below, we show this for column
C
\mathrm{selectremove}\left(\mathrm{isprime},\mathrm{df},C\right)
[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{}& \textcolor[rgb]{0,0,1}{A}& \textcolor[rgb]{0,0,1}{B}& \textcolor[rgb]{0,0,1}{C}& \textcolor[rgb]{0,0,1}{\mathrm{D}}& \textcolor[rgb]{0,0,1}{E}\\ \textcolor[rgb]{0,0,1}{c}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{d}& \textcolor[rgb]{0,0,1}{7}& \textcolor[rgb]{0,0,1}{6}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{}& \textcolor[rgb]{0,0,1}{A}& \textcolor[rgb]{0,0,1}{B}& \textcolor[rgb]{0,0,1}{C}& \textcolor[rgb]{0,0,1}{\mathrm{D}}& \textcolor[rgb]{0,0,1}{E}\\ \textcolor[rgb]{0,0,1}{a}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}& \textcolor[rgb]{0,0,1}{-2}& \textcolor[rgb]{0,0,1}{-3}\\ \textcolor[rgb]{0,0,1}{b}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}\end{array}]
If we want to select the entries of a particular type, then we can use the optional fourth argument to select.
\mathrm{select}\left(\mathrm{type},\mathrm{df},C,\mathrm{positive}\right)
[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{}& \textcolor[rgb]{0,0,1}{A}& \textcolor[rgb]{0,0,1}{B}& \textcolor[rgb]{0,0,1}{C}& \textcolor[rgb]{0,0,1}{\mathrm{D}}& \textcolor[rgb]{0,0,1}{E}\\ \textcolor[rgb]{0,0,1}{b}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{c}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{d}& \textcolor[rgb]{0,0,1}{7}& \textcolor[rgb]{0,0,1}{6}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}\end{array}]
The original DataFrame, df, is left unchanged:
\mathrm{df}
[\begin{array}{cccccc}\textcolor[rgb]{0,0,1}{}& \textcolor[rgb]{0,0,1}{A}& \textcolor[rgb]{0,0,1}{B}& \textcolor[rgb]{0,0,1}{C}& \textcolor[rgb]{0,0,1}{\mathrm{D}}& \textcolor[rgb]{0,0,1}{E}\\ \textcolor[rgb]{0,0,1}{a}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}& \textcolor[rgb]{0,0,1}{-2}& \textcolor[rgb]{0,0,1}{-3}\\ \textcolor[rgb]{0,0,1}{b}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{-1}\\ \textcolor[rgb]{0,0,1}{c}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{d}& \textcolor[rgb]{0,0,1}{7}& \textcolor[rgb]{0,0,1}{6}& \textcolor[rgb]{0,0,1}{5}& \textcolor[rgb]{0,0,1}{4}& \textcolor[rgb]{0,0,1}{3}\end{array}]
\mathrm{df}
select[inplace](type, df, C, positive)
     A  B  C  D   E
b    3  2  1  0  -1
c    5  4  3  2   1
d    7  6  5  4   3
\mathrm{df}
     A  B  C  D   E
b    3  2  1  0  -1
c    5  4  3  2   1
d    7  6  5  4   3
The DataFrame/select, DataFrame/remove and DataFrame/selectremove commands were introduced in Maple 2019.
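For readers coming from Python, the same row filtering can be sketched with pandas (an illustrative analogue, not part of Maple; assumes pandas is installed, and the labels mirror the df used above):

```python
import pandas as pd

# The same DataFrame as in the Maple examples.
df = pd.DataFrame(
    {"A": [1, 3, 5, 7], "B": [0, 2, 4, 6], "C": [-1, 1, 3, 5],
     "D": [-2, 0, 2, 4], "E": [-3, -1, 1, 3]},
    index=["a", "b", "c", "d"],
)

# Analogue of select(type, df, C, positive): keep rows where column C is positive.
selected = df[df["C"] > 0]

# Analogue of remove: the complementary rows.
removed = df[~(df["C"] > 0)]
```

Unlike Maple's `select[inplace]`, pandas filtering always returns a new DataFrame; to mimic the in-place form you would rebind the name, e.g. `df = df[df["C"] > 0]`.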
|
EuDML | On π-regular rings with no infinite trivial subring.
Hirano, Yasuyuki. "On π-regular rings with no infinite trivial subring." Mathematica Scandinavica 63.2 (1988): 212-214. <http://eudml.org/doc/167051>.
@article{Hirano1988,
  author = {Hirano, Yasuyuki},
  title = {On $\pi$-regular rings with no infinite trivial subring},
  keywords = {$\pi$-regular ring; strongly regular ideal},
}
|
A simple model for magnetism in a crystal is a grid of dipoles, each of which can point either up or down. The most energy-stable configuration is of course a checkerboard pattern. However, imagine a scenario in which each cell can only feel the cells above and below it and to its left and right; how can you reach this stable configuration from a random starting configuration?[1]
One way we might solve this problem is to flip a cell if three or more of its neighbours point the same way as it does. This way, we flip when flipping reaches a lower energy, and we leave the cell alone when flipping would not.
However, we see that in practice, for many initial configurations, this method doesn't yield the optimal checkerboard pattern. Why?
The problem is that our algorithm only flips if it results in a lower-energy configuration: It is a greedy algorithm, and tends to end up in local minima. Sometimes, you have to make a short-term sacrifice in order to achieve a long-term goal. But how can we make these short-term sacrifices in pursuit of our long-term goal?
Meet simulated annealing. In physics, temperature is essentially small random vibrations: the higher the temperature, the stronger the vibrations. Introducing a temperature parameter gives us a mechanism for making these short-term sacrifices. The temperature starts high and eventually goes to zero, at which point the procedure reduces to our initial greedy algorithm.
When the temperature is above zero, there is an exponentially decaying probability of flipping a cell even when the flip leads to a higher-energy state. Moreover, the probability also depends on how bad the flip is, with dramatically worse flips being correspondingly more unlikely. The code might look something like:
import numpy as np

# Linear cooling schedule: temperature falls from 1 to 0 over the run.
temperature = lambda time_fraction: 1 - time_fraction
# T: temperature
# E: fraction of neighbours which are dissimilar.
accept_prob = lambda T, E: np.exp(-E / T)
We can plot the probability of flipping a cell with only 1 similar neighbour over time.
We see that, with a bit of tuning, this algorithm is able to find the globally optimal configuration, since it is less likely to get stuck in a local minimum.
Simulated annealing is a powerful optimization technique that is useful well beyond this very simple example. It is particularly useful when an approximate global optimum[2] is better than the precise local optimum found by methods such as gradient descent. You can play around with the code for this experiment on Colab.
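Putting the pieces together, here is a minimal, self-contained sketch of the whole procedure (my own illustrative implementation with an assumed linear cooling schedule and acceptance rule of the shape above, not the exact Colab notebook code):

```python
import numpy as np

def dissimilar_fraction(grid):
    """Fraction of neighbouring pairs (with wraparound) that disagree.

    A perfect checkerboard scores 1.0; a uniform grid scores 0.0."""
    diff = (grid != np.roll(grid, 1, axis=0)).sum()
    diff += (grid != np.roll(grid, 1, axis=1)).sum()
    return diff / (2 * grid.size)

def anneal(grid, sweeps=40, rng=None):
    """Scan the grid repeatedly: greedy flips, plus occasional uphill
    flips whose probability decays as the temperature falls."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, m = grid.shape
    for sweep in range(sweeps):
        T = max(1.0 - sweep / sweeps, 1e-9)  # linear cooling schedule
        for i in range(n):
            for j in range(m):
                s = grid[i, j]
                same = sum(grid[(i + di) % n, (j + dj) % m] == s
                           for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                if same >= 3:
                    grid[i, j] = 1 - s   # greedy flip: lowers the energy
                elif rng.random() < np.exp(-(4 - same) / (4 * T)):
                    grid[i, j] = 1 - s   # uphill flip: rarer when worse or cooler
    return grid

rng = np.random.default_rng(42)
g = rng.integers(0, 2, size=(10, 10))
print(dissimilar_fraction(g), "->", dissimilar_fraction(anneal(g.copy(), rng=rng)))
```

Since a perfect checkerboard scores 1.0 on `dissimilar_fraction`, watching that number rise over the run is a convenient progress measure.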
We assume that each cell is sequentially updated in a scanning motion, and that the cells at the edges feel the cell at the opposite edge. ↩︎
Sometimes, even the annealing method doesn't yield the globally optimal configuration, but it is usually closer than the simple non-annealing method. (See demonstration) ↩︎
|
N-Way ANOVA - MATLAB & Simulink - MathWorks Nordic
Introduction to N-Way ANOVA
Prepare Data for N-Way ANOVA
You can use the function anovan to perform N-way ANOVA. Use N-way ANOVA to determine if the means in a set of data differ with respect to groups (levels) of multiple factors. By default, anovan treats all grouping variables as fixed effects. For an example of ANOVA with random effects, see ANOVA with Random Effects. For repeated measures, see fitrm and ranova.
N-way ANOVA is a generalization of two-way ANOVA. For three factors, for example, the model can be written as
{y}_{ijkr}=\mu +{\alpha }_{i}+{\beta }_{j}+{\gamma }_{k}+{\left(\alpha \beta \right)}_{ij}+{\left(\alpha \gamma \right)}_{ik}+{\left(\beta \gamma \right)}_{jk}+{\left(\alpha \beta \gamma \right)}_{ijk}+{\epsilon }_{ijkr},
yijkr is an observation of the response variable. i represents group i of factor A, i = 1, 2, ..., I; j represents group j of factor B, j = 1, 2, ..., J; k represents group k of factor C, k = 1, 2, ..., K; and r represents the replication number, r = 1, 2, ..., R. For constant R, there are a total of N = I*J*K*R observations, but the number of observations does not have to be the same for each combination of groups of factors.
μ is the overall mean.
αi are the deviations of groups of factor A from the overall mean μ due to factor A. The values of αi sum to 0.
{\sum }_{i=1}^{I}{\alpha }_{i}=0.
βj are the deviations of groups in factor B from the overall mean μ due to factor B. The values of βj sum to 0.
{\sum }_{j=1}^{J}{\beta }_{j}=0.
γk are the deviations of groups in factor C from the overall mean μ due to factor C. The values of γk sum to 0.
{\sum }_{k=1}^{K}{\gamma }_{k}=0.
(αβ)ij is the interaction term between factors A and B. The values of (αβ)ij sum to 0 over either index.
{\sum }_{i=1}^{I}{\left(\alpha \beta \right)}_{ij}={\sum }_{j=1}^{J}{\left(\alpha \beta \right)}_{ij}=0.
(αγ)ik is the interaction term between factors A and C. The values of (αγ)ik sum to 0 over either index.
{\sum }_{i=1}^{I}{\left(\alpha \gamma \right)}_{ik}={\sum }_{k=1}^{K}{\left(\alpha \gamma \right)}_{ik}=0.
(βγ)jk is the interaction term between factors B and C. The values of (βγ)jk sum to 0 over either index.
{\sum }_{j=1}^{J}{\left(\beta \gamma \right)}_{jk}={\sum }_{k=1}^{K}{\left(\beta \gamma \right)}_{jk}=0.
(αβγ)ijk is the three-way interaction term between factors A, B, and C. The values of (αβγ)ijk sum to 0 over any index.
{\sum }_{i=1}^{I}{\left(\alpha \beta \gamma \right)}_{ijk}={\sum }_{j=1}^{J}{\left(\alpha \beta \gamma \right)}_{ijk}={\sum }_{k=1}^{K}{\left(\alpha \beta \gamma \right)}_{ijk}=0.
εijkr are the random disturbances. They are assumed to be independent, normally distributed, and have constant variance.
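As an aside (not part of anovan itself), in a balanced layout the sum-to-zero constraints above hold exactly when the effects are estimated from cell means; a minimal numpy sketch with made-up data:

```python
import numpy as np

# Balanced three-factor data: I x J x K cells with R replicates each.
rng = np.random.default_rng(0)
I, J, K, R = 2, 3, 2, 4
y = rng.normal(size=(I, J, K, R))

mu = y.mean()                          # overall mean
alpha = y.mean(axis=(1, 2, 3)) - mu    # factor A deviations, sum to 0
beta = y.mean(axis=(0, 2, 3)) - mu     # factor B deviations, sum to 0
gamma = y.mean(axis=(0, 1, 3)) - mu    # factor C deviations, sum to 0

# A x B interaction: two-factor cell means minus the additive part.
ab = y.mean(axis=(2, 3)) - mu - alpha[:, None] - beta[None, :]
```

Each constraint, e.g. that the αi sum to 0 or that (αβ)ij sums to 0 over either index, holds exactly by construction in the balanced case; anovan handles the general unbalanced case.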
Three-way ANOVA tests hypotheses about the effects of factors A, B, C, and their interactions on the response variable y. The hypotheses about the equality of the mean responses for groups of factor A are
\begin{array}{l}{H}_{0}:{\alpha }_{1}={\alpha }_{2}\cdots ={\alpha }_{I}\\ {H}_{1}:\text{ at least one }{\alpha }_{i}\text{ is different},\text{ }i=1,\text{ }2,\text{ }...,\text{ }I.\end{array}
The hypotheses about the equality of the mean response for groups of factor B are
\begin{array}{l}{H}_{0}:{\beta }_{1}={\beta }_{2}=\cdots ={\beta }_{J}\\ {H}_{1}:\text{ at least one }{\beta }_{j}\text{ is different, }j=1,\text{ }2,\text{ }...,\text{ }J.\end{array}
The hypotheses about the equality of the mean response for groups of factor C are
\begin{array}{l}{H}_{0}:{\gamma }_{1}={\gamma }_{2}=\cdots ={\gamma }_{K}\\ {H}_{1}:\text{ at least one }{\gamma }_{k}\text{ is different},\text{ }k=1,\text{ }2,\text{ }...,\text{ }K.\end{array}
The hypotheses about the interaction of the factors are
\begin{array}{l}{H}_{0}:{\left(\alpha \beta \right)}_{ij}=0\\ {H}_{1}:\text{at least one }{\left(\alpha \beta \right)}_{ij}\ne 0\end{array}
\begin{array}{l}{H}_{0}:{\left(\alpha \gamma \right)}_{ik}=0\\ {H}_{1}:\text{at least one }{\left(\alpha \gamma \right)}_{ik}\ne 0\\ \\ {H}_{0}:{\left(\beta \gamma \right)}_{jk}=0\\ {H}_{1}:\text{at least one }{\left(\beta \gamma \right)}_{jk}\ne 0\\ \\ {H}_{0}:{\left(\alpha \beta \gamma \right)}_{ijk}=0\\ {H}_{1}:\text{at least one }{\left(\alpha \beta \gamma \right)}_{ijk}\ne 0\end{array}
In this notation parameters with two subscripts, such as (αβ)ij, represent the interaction effect of two factors. The parameter (αβγ)ijk represents the three-way interaction. An ANOVA model can have the full set of parameters or any subset, but conventionally it does not include complex interaction terms unless it also includes all simpler terms for those factors. For example, one would generally not include the three-way interaction without also including all two-way interactions.
Unlike anova1 and anova2, anovan does not expect data in a tabular form. Instead, it expects a vector of response measurements and a separate vector (or text array) containing the values corresponding to each factor. This input data format is more convenient than matrices when there are more than two factors or when the number of measurements per factor combination is not constant.
y  = [ y1,   y2,    y3,    y4,   y5,   ...,  yN ]'
       ↑     ↑      ↑      ↑     ↑           ↑
g1 = {'A',  'A',   'C',   'B',  'B',  ...,  'D'}
g2 = [ 1,    2,     1,     3,    1,   ...,   2 ]
g3 = {'hi', 'mid', 'low', 'mid', 'hi', ..., 'low'}
This example shows how to perform N-way ANOVA on car data with mileage and other information on 406 cars made between 1970 and 1982.
The example focuses on four variables. MPG is the number of miles per gallon for each of 406 cars (though some have missing values coded as NaN). The other three variables are factors: cyl4 (four-cylinder car or not), org (car originated in Europe, Japan, or the USA), and when (car was built early in the period, in the middle of the period, or late in the period).
Fit the full model, requesting up to three-way interactions and Type 3 sums-of-squares.
varnames = {'Origin';'4Cyl';'MfgDate'};
anovan(MPG,{org cyl4 when},3,3,varnames);
Note that many terms are marked by a # symbol as not having full rank, and one of them has zero degrees of freedom and is missing a p-value. This can happen when there are missing factor combinations and the model has higher-order terms. In this case, the cross-tabulation below shows that there are no cars made in Europe during the early part of the period with other than four cylinders, as indicated by the 0 in tbl(2,1,1).
[tbl,chi2,p,factorvals] = crosstab(org,when,cyl4)
tbl(:,:,1) =
chi2 = 207.7689
factorvals=3×3 cell array
{'USA' } {'Early'} {'Other' }
{'Europe'} {'Mid' } {'Four' }
{'Japan' } {'Late' } {0x0 double}
Consequently it is impossible to estimate the three-way interaction effects, and including the three-way interaction term in the model makes the fit singular.
Using even the limited information available in the ANOVA table, you can see that the three-way interaction has a p-value of 0.699, so it is not significant.
Examine only two-way interactions.
[p,tbl2,stats,terms] = anovan(MPG,{org cyl4 when},2,3,varnames);
terms = 6×3
Now all terms are estimable. The p-values for interaction term 4 (Origin*4Cyl) and interaction term 6 (4Cyl*MfgDate) are much larger than a typical cutoff value of 0.05, indicating these terms are not significant. You could choose to omit these terms and pool their effects into the error term. The output terms variable returns a matrix of codes, each of which is a bit pattern representing a term.
Omit terms from the model by deleting their entries from terms.
terms([4 6],:) = []
Run anovan again, this time supplying the resulting vector as the model argument. Also return the statistics required for multiple comparisons of factors.
[~,~,stats] = anovan(MPG,{org cyl4 when},terms,3,varnames)
source: 'anovan'
resid: [3.1235 0.1235 3.1235 1.1235 2.1235 0.1235 -0.8765 ... ]
coeffs: [18x1 double]
Rtr: [10x10 double]
rowbasis: [10x18 double]
nullproject: [18x10 double]
terms: [4x3 double]
nlevels: [3x1 double]
continuous: [0 0 0]
vmeans: [3x1 double]
termcols: [5x1 double]
coeffnames: {18x1 cell}
vars: [18x3 double]
grpnames: {3x1 cell}
vnested: []
ems: []
denom: []
dfdenom: []
msdenom: []
varest: []
varci: []
txtdenom: []
txtems: []
rtnames: []
Now you have a more parsimonious model indicating that the mileage of these cars seems to be related to all three factors, and that the effect of the manufacturing date depends on where the car was made.
Perform multiple comparisons for Origin and Cylinder.
[results,~,~,gnames] = multcompare(stats,'Dimension',[1,2]);
Group A Group B Lower Limit A-B Upper Limit P-value
____________________________ ____________________________ ___________ _______ ___________ __________
{'Origin=USA,4Cyl=Other' } {'Origin=Japan,4Cyl=Other' } -5.4891 -3.8412 -2.1932 4.2334e-10
{'Origin=USA,4Cyl=Other' } {'Origin=Europe,4Cyl=Other'} -4.4146 -2.7251 -1.0356 6.2974e-05
{'Origin=USA,4Cyl=Other' } {'Origin=USA,4Cyl=Four' } -9.9992 -8.5828 -7.1664 0
{'Origin=USA,4Cyl=Other' } {'Origin=Japan,4Cyl=Four' } -14.024 -12.424 -10.824 0
{'Origin=USA,4Cyl=Other' } {'Origin=Europe,4Cyl=Four' } -12.898 -11.308 -9.718 0
{'Origin=Japan,4Cyl=Other' } {'Origin=Europe,4Cyl=Other'} -0.71714 1.116 2.9492 0.5085
{'Origin=Japan,4Cyl=Other' } {'Origin=USA,4Cyl=Four' } -7.3655 -4.7417 -2.1179 3.8678e-06
{'Origin=Japan,4Cyl=Other' } {'Origin=Japan,4Cyl=Four' } -9.9992 -8.5828 -7.1664 0
{'Origin=Japan,4Cyl=Other' } {'Origin=Europe,4Cyl=Four' } -9.7464 -7.4668 -5.1872 1.4557e-20
{'Origin=Europe,4Cyl=Other'} {'Origin=USA,4Cyl=Four' } -8.5396 -5.8577 -3.1757 6.9888e-09
{'Origin=Europe,4Cyl=Other'} {'Origin=Japan,4Cyl=Four' } -12.052 -9.6988 -7.3459 0
{'Origin=Europe,4Cyl=Other'} {'Origin=Europe,4Cyl=Four' } -9.9992 -8.5828 -7.1664 0
{'Origin=USA,4Cyl=Four' } {'Origin=Japan,4Cyl=Four' } -5.4891 -3.8412 -2.1932 4.2334e-10
{'Origin=USA,4Cyl=Four' } {'Origin=Europe,4Cyl=Four' } -4.4146 -2.7251 -1.0356 6.2974e-05
{'Origin=Japan,4Cyl=Four' } {'Origin=Europe,4Cyl=Four' } -0.71714 1.116 2.9492 0.5085
See Also: anova1 | anovan | multcompare | kruskalwallis
|
Square pyramidal number - zxc.wiki
The square pyramidal numbers belong to the figurate numbers, more precisely to the pyramidal numbers. They count the number of spheres that can be used to build a pyramid with a square base. As the chart below shows for the example of the fourth square pyramidal number, 30, they are the sums of the first square numbers.
In the following, {\displaystyle \operatorname{Pyr}_{4}(n)} denotes the {\displaystyle n}-th square pyramidal number:

{\displaystyle \operatorname{Pyr}_{4}(n)=\sum_{i=1}^{n}i^{2}=1^{2}+2^{2}+3^{2}+4^{2}+\ldots+n^{2}={\frac{n(n+1)(2n+1)}{6}}={\frac{2n^{3}+3n^{2}+n}{6}}}
The first square pyramidal numbers are
0, 1, 5, 14, 30, 55, 91, 140, 204, 285, 385, ... (sequence A000330 in OEIS )
Some authors do not count zero as a square pyramidal number; for them the sequence begins with 1.
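The closed formula and the sequence values are easy to check numerically; a short Python sketch (illustrative, not from the original article):

```python
import math

def pyr4(n):
    """Square pyramidal number via the closed formula n(n+1)(2n+1)/6."""
    return n * (n + 1) * (2 * n + 1) // 6

# The closed formula agrees with the direct sum of squares.
assert all(pyr4(n) == sum(i * i for i in range(1, n + 1)) for n in range(100))

# First values, matching OEIS A000330.
print([pyr4(n) for n in range(11)])
# -> [0, 1, 5, 14, 30, 55, 91, 140, 204, 285, 385]

# Watson (1918): 4900 is the only nontrivial value that is also a perfect square.
assert pyr4(24) == 4900 == 70 ** 2

# Partial sums of the reciprocals approach 18 - 24 ln 2 = 1.3644676665...
partial = sum(1 / pyr4(n) for n in range(1, 10_000))
assert abs(partial - (18 - 24 * math.log(2))) < 1e-4
```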
The generating function of the square pyramidal numbers is

{\displaystyle {\frac{x(x+1)}{(x-1)^{4}}}=\sum_{n=0}^{\infty}\operatorname{Pyr}_{4}(n)x^{n}=\mathbf{1}x+\mathbf{5}x^{2}+\mathbf{14}x^{3}+\mathbf{30}x^{4}+\mathbf{55}x^{5}+\ldots}
Relationships to other figurate numbers, further representations
{\displaystyle \operatorname{Pyr}_{4}(n)={\binom{n+2}{3}}+{\binom{n+1}{3}}}

with the binomial coefficients, and

{\displaystyle \operatorname{Pyr}_{4}(n)={\frac{1}{4}}\operatorname{Pyr}_{3}(2n)}

with the tetrahedral numbers {\displaystyle \operatorname{Pyr}_{3}(n)}.
In addition, with {\displaystyle \Delta_{n}} the {\displaystyle n}-th triangular number:

{\displaystyle \operatorname{Pyr}_{4}(n)=\Delta_{n}+2\operatorname{Pyr}_{3}(n-1)}
Related figurate numbers
The other pyramidal numbers, e.g. the tetrahedral numbers.
The sum of two consecutive square pyramidal numbers is an octahedral number.
Apart from the trivial case 1, 4900 is the only number that is simultaneously a perfect square and a square pyramidal number: {\displaystyle \operatorname{Pyr}_{4}(24)=4900=70^{2}}. This was proven by G. N. Watson in 1918.
The sum of the reciprocals of all square pyramidal numbers is

{\displaystyle \sum_{n=1}^{\infty}\operatorname{Pyr}_{4}(n)^{-1}=\sum_{n=1}^{\infty}{\frac{6}{n(n+1)(2n+1)}}=18-24\ln(2)=1.3644676665\ldots}
(sequence A159354 in OEIS)
Derivation of the sum formula
The difference between two consecutive square numbers is always an odd number; more precisely, the difference between the {\displaystyle k}-th and the {\displaystyle (k-1)}-th square number is {\displaystyle k^{2}-(k-1)^{2}=2k-1}. This gives the following scheme:

{\displaystyle {\begin{array}{ccccccccccccccc}0&&1&&4&&9&&16&&25&\ldots&(n-1)^{2}&&n^{2}\\&1&&3&&5&&7&&9&&\ldots&&2n-1&\end{array}}}
A square number can thus be represented as a sum of odd numbers, i.e. {\displaystyle n^{2}=\sum_{i=1}^{n}(2i-1)}. This sum representation is now used to write the sum of the first {\displaystyle n} square numbers as a triangle of odd numbers; the sum of all the odd numbers in the triangle is exactly the sum of the first {\displaystyle n} square numbers:

{\displaystyle {\begin{array}{r|cccccccc}1^{2}=&1&&&&&&&\\2^{2}=&1&3&&&&&&\\3^{2}=&1&3&5&&&&&\\4^{2}=&1&3&5&7&&&&\\5^{2}=&1&3&5&7&9&&&\\\vdots\;&\vdots&&&&&\ddots&&\\(n-1)^{2}=&1&\cdots&&&&\cdots&2n-3&\\n^{2}=&1&\cdots&&&&\cdots&2n-3&2n-1\end{array}}}
Now arrange the same odd numbers in two other ways to form two congruent triangles:

{\displaystyle {\begin{array}{cccccccc}2n-1&&&&&&&\\2n-3&2n-3&&&&&&\\\vdots&&\ddots&&&&&\\9&\cdots&\cdots&9&&&&\\7&\cdots&\cdots&7&7&&&\\5&\cdots&\cdots&5&5&5&&\\3&\cdots&\cdots&3&3&3&3&\\1&\cdots&\cdots&1&1&1&1&1\\\hline =n^{2}&=(n-1)^{2}&\cdots&=5^{2}&=4^{2}&=3^{2}&=2^{2}&=1^{2}\end{array}}}

{\displaystyle {\begin{array}{cccccccc}1&&&&&&&\\3&1&&&&&&\\5&3&1&&&&&\\7&5&3&1&&&&\\9&7&5&3&1&&&\\\vdots&&&&&\ddots&&\\2n-1&2n-3&&&&\cdots&3&1\\\hline =n^{2}&=(n-1)^{2}&\cdots&=5^{2}&=4^{2}&=3^{2}&=2^{2}&=1^{2}\end{array}}}
If you stack these three triangles on top of each other, the sum of each column of three numbers is always the constant {\displaystyle 2n+1}, and there are {\displaystyle 1+2+\ldots+n={\tfrac{n(n+1)}{2}}} such columns. So the sum of all the odd numbers in the three triangles, {\displaystyle {\tfrac{n(n+1)(2n+1)}{2}}}, is exactly three times the sum of the first {\displaystyle n} square numbers, and it follows that

{\displaystyle \operatorname{Pyr}_{4}(n)={\frac{n(n+1)(2n+1)}{6}}}
Faulhaber formula
John H. Conway, Richard Guy: The Book of Numbers . Springer, 1996, ISBN 9780387979939 , pp. 47–50 ( excerpt (Google) )
Eric W. Weisstein : Square Pyramidal Number . In: MathWorld (English).
This page is based on the copyrighted Wikipedia article "Quadratische_Pyramidalzahl" (Authors); it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License. You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.
|
Dihydropyrimidine dehydrogenase (NADP+) - Wikipedia
Dihydropyrimidine dehydrogenase dimer, Sus scrofa
DPYD, DHP, DHPDHASE, DPD, dihydropyrimidine dehydrogenase, DYPD
OMIM: 612779 MGI: 2139667 HomoloGene: 85 GeneCards: DPYD
dihydropyrimidine dehydrogenase (NAD+) activity
Chr 1: 97.08 – 98 Mb Chr 3: 118.36 – 119.23 Mb
In enzymology, a dihydropyrimidine dehydrogenase (NADP+) (EC 1.3.1.2) is an enzyme that catalyzes the chemical reaction
5,6-dihydrouracil + NADP+ {\displaystyle \rightleftharpoons } uracil + NADPH + H+
Thus, the two substrates of this enzyme are 5,6-dihydrouracil and NADP+, whereas its 3 products are uracil, NADPH, and H+.
In humans the enzyme is encoded by the DPYD gene.[5][6] It catalyzes the initial and rate-limiting step in pyrimidine catabolism,[citation needed] the reduction of uracil and thymine.[7] It is also involved in the degradation of the chemotherapeutic drugs 5-fluorouracil and tegafur,[8] and it participates in beta-alanine metabolism and in pantothenate and CoA biosynthesis.
The systematic name of this enzyme class is 5,6-dihydrouracil:NADP+ 5-oxidoreductase.
Other names in common use include:
dihydrouracil dehydrogenase (NADP+)
4,5-dihydrothymine: oxidoreductase
DHPDH
dehydrogenase, dihydrouracil (nicotinamide adenine dinucleotide, phosphate)
DHU dehydrogenase
hydropyrimidine dehydrogenase
As of late 2007, 5 structures have been solved for this class of enzymes, with PDB accession codes 1GT8, 1GTE, 1GTH, 1H7W, and 1H7X.
The protein is a pyrimidine catabolic enzyme and the initial and rate-limiting factor in the pathway of uracil and thymidine catabolism. Genetic deficiency of this enzyme results in an error in pyrimidine metabolism associated with thymine-uraciluria and an increased risk of toxicity in cancer patients receiving 5-fluorouracil chemotherapy.[6]
Fluorouracil (5-FU) Activity
^ The interactive pathway map can be edited at WikiPathways: "FluoropyrimidineActivity_WP1601".
Dihydropyrimidine dehydrogenase deficiency, a genetic disorder
^ Takai S, Fernandez-Salguero P, Kimura S, Gonzalez FJ, Yamada K (December 1994). "Assignment of the human dihydropyrimidine dehydrogenase gene (DPYD) to chromosome region 1p22 by fluorescence in situ hybridization". Genomics. 24 (3): 613–4. doi:10.1006/geno.1994.1680. PMID 7713523.
^ a b "Entrez Gene: DPYD dihydropyrimidine dehydrogenase".
^ Chung T, Na J, Kim YI, Chang DY, Kim YI, Kim H, Moon HE, Kang KW, Lee DS, Chung JK, Kim SS, Suh-Kim H, Paek SH, Youn H (2016). "Dihydropyrimidine Dehydrogenase Is a Prognostic Marker for Mesenchymal Stem Cell-Mediated Cytosine Deaminase Gene and 5-Fluorocytosine Prodrug Therapy for the Treatment of Recurrent Gliomas". Theranostics. 6 (10): 1477–90. doi:10.7150/thno.14158. PMC 4955049. PMID 27446484.
^ Caudle KE, Thorn CF, Klein TE, Swen JJ, McLeod HL, Diasio RB, Schwab M (December 2013). "Clinical Pharmacogenetics Implementation Consortium guidelines for dihydropyrimidine dehydrogenase genotype and fluoropyrimidine dosing". Clinical Pharmacology and Therapeutics. 94 (6): 640–5. doi:10.1038/clpt.2013.172. PMC 3831181. PMID 23988873.
Hoff PM, Royce M, Medgyesy D, Brito R, Pazdur R (December 1999). "Oral fluoropoyrimidines". Seminars in Oncology. 26 (6): 640–6. PMID 10606257.
Schneider HB, Becker H (May 2003). "Impact of dihydropyrimidine dehydrogenase on 5-fluorouracil treatment in cancer patients". European Journal of Medical Research. 8 (5): 226–8. PMID 12844478.
Omura K (June 2003). "Clinical implications of dihydropyrimidine dehydrogenase (DPD) activity in 5-FU-based chemotherapy: mutations in the DPD gene, and DPD inhibitory fluoropyrimidines". International Journal of Clinical Oncology. 8 (3): 132–8. doi:10.1007/s10147-003-0330-z. PMID 12851836. S2CID 22593228.
Lee W, Lockhart AC, Kim RB, Rothenberg ML (February 2005). "Cancer pharmacogenomics: powerful tools in cancer chemotherapy and drug development". The Oncologist. 10 (2): 104–11. doi:10.1634/theoncologist.10-2-104. PMID 15709212.
Lu ZH, Zhang R, Diasio RB (August 1992). "Purification and characterization of dihydropyrimidine dehydrogenase from human liver". The Journal of Biological Chemistry. 267 (24): 17102–9. doi:10.1016/S0021-9258(18)41899-6. PMID 1512248.
Porter DJ, Chestnut WG, Merrill BM, Spector T (March 1992). "Mechanism-based inactivation of dihydropyrimidine dehydrogenase by 5-ethynyluracil". The Journal of Biological Chemistry. 267 (8): 5236–42. doi:10.1016/S0021-9258(18)42757-3. PMID 1544906.
Dupuis A, Skehel JM, Walker JE (March 1991). "A homologue of a nuclear-coded iron-sulfur protein subunit of bovine mitochondrial complex I is encoded in chloroplast genomes". Biochemistry. 30 (11): 2954–60. doi:10.1021/bi00225a032. PMID 1901022.
Eggink G, Engel H, Vriend G, Terpstra P, Witholt B (March 1990). "Rubredoxin reductase of Pseudomonas oleovorans. Structural relationship to other flavoprotein oxidoreductases based on one NAD and two FAD fingerprints". Journal of Molecular Biology. 212 (1): 135–42. doi:10.1016/0022-2836(90)90310-I. PMID 2319593.
Tuchman M, Roemeling RV, Hrushesky WA, Sothern RB, O'Dea RF (1989). "Dihydropyrimidine dehydrogenase activity in human blood mononuclear cells". Enzyme. 42 (1): 15–24. doi:10.1159/000469002. PMID 2528450.
Diasio RB, Beavers TL, Carpenter JT (January 1988). "Familial deficiency of dihydropyrimidine dehydrogenase. Biochemical basis for familial pyrimidinemia and severe 5-fluorouracil-induced toxicity". The Journal of Clinical Investigation. 81 (1): 47–51. doi:10.1172/JCI113308. PMC 442471. PMID 3335642.
Yokota H, Fernandez-Salguero P, Furuya H, Lin K, McBride OW, Podschun B, Schnackerz KD, Gonzalez FJ (September 1994). "cDNA cloning and chromosome mapping of human dihydropyrimidine dehydrogenase, an enzyme associated with 5-fluorouracil toxicity and congenital thymine uraciluria". The Journal of Biological Chemistry. 269 (37): 23192–6. doi:10.1016/S0021-9258(17)31638-1. PMID 8083224.
Lu Z, Zhang R, Diasio RB (November 1993). "Dihydropyrimidine dehydrogenase activity in human peripheral blood mononuclear cells and liver: population characteristics, newly identified deficient patients, and clinical implication in 5-fluorouracil chemotherapy". Cancer Research. 53 (22): 5433–8. PMID 8221682.
Vreken P, Van Kuilenburg AB, Meinsma R, Smit GP, Bakker HD, De Abreu RA, van Gennip AH (1997). "A point mutation in an invariant splice donor site leads to exon skipping in two unrelated Dutch patients with dihydropyrimidine dehydrogenase deficiency". Journal of Inherited Metabolic Disease. 19 (5): 645–54. doi:10.1007/BF01799841. PMID 8892022. S2CID 10835574.
Johnson MR, Wang K, Tillmanns S, Albin N, Diasio RB (May 1997). "Structural organization of the human dihydropyrimidine dehydrogenase gene". Cancer Research. 57 (9): 1660–3. PMID 9135003.
Fernandez-Salguero PM, Sapone A, Wei X, Holt JR, Jones S, Idle JR, Gonzalez FJ (April 1997). "Lack of correlation between phenotype and genotype for the polymorphically expressed dihydropyrimidine dehydrogenase in a family of Pakistani origin". Pharmacogenetics. 7 (2): 161–3. doi:10.1097/00008571-199704000-00012. PMID 9170156.
Vreken P, Van Kuilenburg AB, Meinsma R, van Gennip AH (July 1997). "Identification of novel point mutations in the dihydropyrimidine dehydrogenase gene" (PDF). Journal of Inherited Metabolic Disease. 20 (3): 335–8. doi:10.1023/A:1005357307122. PMID 9266349. S2CID 3060557.
Vreken P, Van Kuilenburg AB, Meinsma R, van Gennip AH (December 1997). "Dihydropyrimidine dehydrogenase (DPD) deficiency: identification and expression of missense mutations C29R, R886H and R235W". Human Genetics. 101 (3): 333–8. doi:10.1007/s004390050637. PMID 9439663. S2CID 24362516.
Ogura K, Nishiyama T, Takubo H, Kato A, Okuda H, Arakawa K, Fukushima M, Nagayama S, Kawaguchi Y, Watabe T (January 1998). "Suicidal inactivation of human dihydropyrimidine dehydrogenase by (E)-5-(2-bromovinyl)uracil derived from the antiviral, sorivudine". Cancer Letters. 122 (1–2): 107–13. doi:10.1016/S0304-3835(97)00377-7. PMID 9464498.
Fritzson P (1960). "Properties and assay of dihydrouracil dehydrogenase of rat liver". J. Biol. Chem. 235 (3): 719–25. doi:10.1016/S0021-9258(19)67929-9. PMID 13825299.
Shiotani T, Weber G (1981). "Purification and properties of dihydrothymine dehydrogenase from rat liver". J. Biol. Chem. 256 (1): 219–24. doi:10.1016/S0021-9258(19)70122-7. PMID 7451435.
1gt8: DIHYDROPYRIMIDINE DEHYDROGENASE (DPD) FROM PIG, TERNARY COMPLEX WITH NADPH AND URACIL-4-ACETIC ACID
1gte: DIHYDROPYRIMIDINE DEHYDROGENASE (DPD) FROM PIG, BINARY COMPLEX WITH 5-IODOURACIL
1gth: DIHYDROPYRIMIDINE DEHYDROGENASE (DPD) FROM PIG, TERNARY COMPLEX WITH NADPH AND 5-IODOURACIL
1h7w: DIHYDROPYRIMIDINE DEHYDROGENASE (DPD) FROM PIG
1h7x: DIHYDROPYRIMIDINE DEHYDROGENASE (DPD) FROM PIG, TERNARY COMPLEX OF A MUTANT ENZYME (C671A), NADPH AND 5-FLUOROURACIL
Retrieved from "https://en.wikipedia.org/w/index.php?title=Dihydropyrimidine_dehydrogenase_(NADP%2B)&oldid=1081536621"
|
Sharp inequalities for the Beurling–Ahlfors transform on radial functions
1 February 2013
Rodrigo Bañuelos, Adam Osȩkowski
For 1\le p\le 2, we prove sharp weak-type \left(p,p\right)-estimates for the Beurling–Ahlfors operator acting on the radial function subspaces of {L}^{p}\left(\mathbb{C}\right). A similar sharp {L}^{p}-result is proved for 1<p\le 2. The results are derived from martingale inequalities which are of independent interest.
Rodrigo Bañuelos. Adam Osȩkowski. "Sharp inequalities for the Beurling–Ahlfors transform on radial functions." Duke Math. J. 162 (2) 417 - 434, 1 February 2013. https://doi.org/10.1215/00127094-1962649
Rodrigo Bañuelos, Adam Osȩkowski "Sharp inequalities for the Beurling–Ahlfors transform on radial functions," Duke Mathematical Journal, Duke Math. J. 162(2), 417-434, (1 February 2013)
|
Work, Energy and Power Physics NEET Practice Questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, and PDF solved with answers
A spring 40 mm long is stretched by the application of a force. If a force of 10 N is required to stretch the spring through 1 mm, then the work done in stretching the spring through 40 mm is
Subtopic: Elastic Potential Energy |
Two springs with spring constants
{k}_{1}
{k}_{2}
= 3000 N/m are stretched by the same force. The ratio of potential energy stored in the springs will be
A block of mass 2 kg moving with velocity of 10 m/s on a smooth surface hits a spring of force constant
80×{10}^{3}
N/m as shown. The maximum compression in the spring is
A body starts moving from rest in a straight line under a constant power source. Its displacement in time t is proportional to
{t}^{1/2}
{t}^{3/2}
{t}^{2}
Subtopic: Power |
A block of mass m = 25 kg on a smooth horizontal surface with a velocity
\stackrel{\to }{v}
m{s}^{-1}
meets the spring of spring constant k = 100 N/m fixed at one end as shown in the figure. The maximum compression of the spring and velocity of the block as it returns to the original position respectively are:
1. 1.5 m, -3
m{s}^{-1}
2. 1.5 m, 0
m{s}^{-1}
m{s}^{-1}
m{s}^{-1}
The velocity given to the block of mass (m) is
\sqrt{\frac{7}{2}gl}
to rotate it in a circle of radius l. Calculate the height (h) at which the block leaves the circle.
\frac{3l}{2}
\frac{4l}{3}
\frac{5l}{4}
Subtopic: Work Energy Theorem |
If the length of the string is
l=\frac{10}{3}\text{ m},\text{ }\frac{{T}_{\mathrm{max}}}{{T}_{\mathrm{min}}}=4,
where {T}_{\mathrm{max}} is the maximum tension in the string and {T}_{\mathrm{min}} is the minimum tension in the string, then the velocity at the highest point is:
\frac{10}{\sqrt{2}}\mathrm{m}/\mathrm{s}
10\sqrt{3} \mathrm{m}/\mathrm{s}
If the relation between velocity (v) and time (t) is
v\propto \sqrt{t}
, then which one of the following quantities is constant?
A particle is moving on a circular path of radius R with centripetal acceleration
{a}_{c}={k}^{2}R{t}^{2}
. Then the correct relation between the power (P) delivered by the net force and time (t) is
A steel wire can withstand a load up to 2940 N. A load of 150 kg is suspended from a rigid support. The maximum angle through which the wire can be displaced from the mean position, so that the wire does not break when the load passes through the position of equilibrium, is (2008 E)
|
Prediction Using Discriminant Analysis Models - MATLAB & Simulink - MathWorks India
True Misclassification Cost per Class
Expected Misclassification Cost per Observation
predict uses three quantities to classify observations: posterior probability, prior probability, and cost.
\stackrel{^}{y}=\underset{y=1,...,K}{\mathrm{arg}\mathrm{min}}\sum _{k=1}^{K}\stackrel{^}{P}\left(k|x\right)C\left(y|k\right),
\stackrel{^}{y}
\stackrel{^}{P}\left(k|x\right)
C\left(y|k\right)
The space of X values divides into regions where a classification Y is a particular value. The regions are separated by straight lines for linear discriminant analysis, and by conic sections (ellipses, hyperbolas, or parabolas) for quadratic discriminant analysis. For a visualization of these regions, see Create and Visualize Discriminant Analysis Classifier.
P\left(x|k\right)=\frac{1}{{\left({\left(2\pi \right)}^{d}|{\Sigma }_{k}|\right)}^{1/2}}\mathrm{exp}\left(-\frac{1}{2}\left(x-{\mu }_{k}\right){\Sigma }_{k}^{-1}{\left(x-{\mu }_{k}\right)}^{T}\right),
|{\Sigma }_{k}|
{\Sigma }_{k}^{-1}
\stackrel{^}{P}\left(k|x\right)=\frac{P\left(x|k\right)P\left(k\right)}{P\left(x\right)},
where P(x) is a normalization constant, namely, the sum over k of P(x|k)P(k).
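Bayes' rule as stated can be sketched numerically (a Python/NumPy illustration rather than the toolbox's implementation; the means, covariances, and priors below are made up):

```python
import numpy as np

def gaussian_density(x, mu, sigma):
    """Multivariate normal density P(x | k) for class k."""
    d = len(mu)
    diff = x - mu
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma))
    return np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff) / norm

def posteriors(x, mus, sigmas, priors):
    """P(k | x) = P(x | k) P(k) / P(x), with P(x) the normalizing sum over k."""
    likes = np.array([gaussian_density(x, m, s) * p
                      for m, s, p in zip(mus, sigmas, priors)])
    return likes / likes.sum()

# Two classes with unit covariance and equal priors
mus = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
sigmas = [np.eye(2), np.eye(2)]
priors = [0.5, 0.5]
post = posteriors(np.array([0.1, -0.2]), mus, sigmas, priors)
```

Because the normalization constant cancels in the arg-min, the classification rule only needs the posteriors up to a common factor.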
'uniform' — The prior probability of class k is 1 over the total number of classes.
A numeric vector — The prior probability of class k is the kth element of the Prior vector. See fitcdiscr.
After creating a classifier obj, you can set the prior using dot notation:
obj.Prior = v;
There are two costs associated with discriminant analysis classification: the true misclassification cost per class, and the expected misclassification cost per observation.
Cost(i,j) is the cost of classifying an observation into class j if its true class is i. By default, Cost(i,j)=1 if i~=j, and Cost(i,j)=0 if i=j. In other words, the cost is 0 for correct classification, and 1 for incorrect classification.
You can set any cost matrix you like when creating a classifier. Pass the cost matrix in the Cost name-value pair in fitcdiscr.
After you create a classifier obj, you can set a custom cost using dot notation:
obj.Cost = B;
B is a square matrix of size K-by-K when there are K classes. You do not need to retrain the classifier when you set a new cost.
Suppose you have Nobs observations that you want to classify with a trained discriminant analysis classifier obj. Suppose you have K classes. You place the observations into a matrix Xnew with one observation per row. The command
[label,score,cost] = predict(obj,Xnew)
returns, among other outputs, a cost matrix of size Nobs-by-K. Each row of the cost matrix contains the expected (average) cost of classifying the observation into each of the K classes. cost(n,k) is
\sum _{i=1}^{K}\stackrel{^}{P}\left(i|X\left(n\right)\right)C\left(k|i\right),
\stackrel{^}{P}\left(i|X\left(n\right)\right)
C\left(k|i\right)
is the cost of classifying an observation as k when its true class is i.
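The minimum-expected-cost rule above can be sketched in a few lines (Python/NumPy for illustration; the posterior values are made up):

```python
import numpy as np

def expected_costs(posteriors, cost):
    """Expected misclassification cost per observation.

    posteriors: (Nobs, K) array of posterior probabilities P(i | x_n)
    cost:       (K, K) array where cost[i, j] is the cost of classifying
                a class-i observation as class j.
    Returns an (Nobs, K) array whose (n, k) entry is
    sum_i P(i | x_n) * cost[i, k].
    """
    return posteriors @ cost

# Default 0/1 cost matrix for K = 2 classes: 0 on the diagonal, 1 elsewhere
K = 2
cost = np.ones((K, K)) - np.eye(K)

posteriors = np.array([[0.7, 0.3],
                       [0.2, 0.8]])
c = expected_costs(posteriors, cost)
labels = c.argmin(axis=1)  # classify into the minimum expected-cost class
```

With the default 0/1 cost matrix this reduces to picking the class with the highest posterior, which is why the two views of classification agree in the default case.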
fitcdiscr | predict
ClassificationDiscriminant | CompactClassificationDiscriminant
|
Option price by Heston model using finite differences - MATLAB optByHestonFD - MathWorks 日本
Grid containing prices calculated by the finite difference method, returned as a three-dimensional grid with size AssetGridSize ⨉ VarianceGridSize ⨉ TimeGridSize. The depth is not necessarily equal to the TimeGridSize, because exercise and ex-dividend dates are added to the time grid. PriceGrid(:, :, end) contains the price for t = 0.
\mathrm{max}\left({S}_{t}-K,0\right)
\mathrm{max}\left(K-{S}_{t},0\right)
\begin{array}{l}d{S}_{t}=\left(r-q\right){S}_{t}dt+\sqrt{{v}_{t}}{S}_{t}d{W}_{t}\\ d{v}_{t}=\kappa \left(\theta -{v}_{t}\right)dt+{\sigma }_{v}\sqrt{{v}_{t}}d{W}_{t}^{v}\\ \text{E}\left[d{W}_{t}d{W}_{t}^{v}\right]=pdt\end{array}
κ is the mean-reversion speed of the variance (κ > 0).
σv is the volatility of the variance (σv > 0).
p is the correlation between the Wiener processes Wt and Wvt (-1 ≤ p ≤ 1).
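As an illustration of the dynamics above (a Python/NumPy Monte Carlo sketch, not the toolbox's finite-difference method; all parameter values below are made up), a full-truncation Euler scheme can simulate the two coupled SDEs and price the call payoff max(S_T − K, 0):

```python
import numpy as np

def simulate_heston(s0, v0, r, q, kappa, theta, sigma_v, rho,
                    T, n_steps, n_paths, seed=0):
    """Euler-Maruyama simulation of the Heston dynamics
    dS = (r - q) S dt + sqrt(v) S dW,
    dv = kappa (theta - v) dt + sigma_v sqrt(v) dW^v,
    with corr(dW, dW^v) = rho; v is floored at 0 (full truncation)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, float(s0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)
        s *= np.exp((r - q - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + sigma_v * np.sqrt(vp * dt) * z2
    return s, v

s, v = simulate_heston(100, 0.04, 0.03, 0.0, 1.5, 0.04, 0.3, -0.7,
                       T=1.0, n_steps=200, n_paths=20000)
call = np.exp(-0.03 * 1.0) * np.maximum(s - 100, 0).mean()  # max(S_T - K, 0)
```

The finite-difference pricer in the documentation solves the corresponding PDE on the asset/variance grid instead; the simulation is only meant to make the roles of κ, θ, σv, and p concrete.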
[1] Heston, S. L. "A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options." The Review of Financial Studies. Vol 6, Number 2, 1993.
|
EUDML | Correction to the article `Deformation of plates of small condensers and Belinskij's problem'.
Correction to the article `Deformation of plates of small condensers and Belinskij's problem'.
Aseev, V.V.
Aseev, V.V.. "Correction to the article `Deformation of plates of small condensers and Belinskij's problem'.." Sibirskij Matematicheskij Zhurnal 44.1 (2003): 232-235 (2003); translation in Sib. Math. J. 44. <http://eudml.org/doc/50506>.
|
Specify Custom Output Layer Backward Loss Function - MATLAB & Simulink - MathWorks France
Create Backward Loss Function
The example Define Custom Classification Output Layer shows how to define and create a custom classification output layer with sum of squares error (SSE) loss and goes through the following steps:
Creating a backward loss function is optional. If the forward loss function only uses functions that support dlarray objects, then software determines the derivatives automatically using automatic differentiation. For a list of functions that support dlarray objects, see List of Functions with dlarray Support. If you want to use functions that do not support dlarray objects, or want to use a specific algorithm for the backward loss function, then you can define a custom backward function using this example as a guide.
The example Define Custom Classification Output Layer shows how to create a SSE classification layer.
A classification SSE layer computes the sum of squares error loss for classification problems. SSE is an error measure between two continuous random variables. For predictions Y and training targets T, the SSE loss between Y and T is given by
L=\frac{1}{N}\sum _{n=1}^{N}\text{}\sum _{i=1}^{K}\text{}{\left({Y}_{ni}-{T}_{ni}\right)}^{2},
View the layer created in the example Define Custom Classification Output Layer. This layer does not have a backwardLoss function.
classdef sseClassificationLayer < nnet.layer.ClassificationLayer ...
% Example custom classification layer with sum of squares error loss.
function layer = sseClassificationLayer(name)
% layer = sseClassificationLayer(name) creates a sum of squares
% error classification layer and specifies the layer name.
layer.Description = 'Sum of squares error';
% loss = forwardLoss(layer, Y, T) returns the SSE loss between
% Calculate sum of squares.
sumSquares = sum((Y-T).^2);
loss = sum(sumSquares)/N;
Implement the backwardLoss function that returns the derivatives of the loss with respect to the input data and the learnable parameters.
The dimensions of Y and T are the same as the inputs in forwardLoss.
The derivative of the SSE loss with respect to the predictions Y is given by
\frac{\delta L}{\delta {Y}_{i}}=\frac{2}{N}\left({Y}_{i}-{T}_{i}\right),
where N is the number of observations in the input.
Create the backward loss function that returns these derivatives.
% dLdY = backwardLoss(layer, Y, T) returns the derivatives of
% the SSE loss with respect to the predictions Y.
dLdY = 2*(Y-T)/N;
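The analytic derivative can be checked against a finite difference (a Python/NumPy sketch, not part of the MATLAB example; here Y has N observations as rows):

```python
import numpy as np

def sse_loss(Y, T):
    """SSE loss L = (1/N) * sum over observations and classes of (Y - T)^2."""
    N = Y.shape[0]
    return np.sum((Y - T) ** 2) / N

def sse_backward(Y, T):
    """Analytic derivative: dL/dY = 2 (Y - T) / N."""
    return 2 * (Y - T) / Y.shape[0]

rng = np.random.default_rng(0)
Y = rng.random((4, 3))
T = rng.random((4, 3))

# Finite-difference check of one entry of the gradient
eps = 1e-6
Yp = Y.copy()
Yp[1, 2] += eps
numeric = (sse_loss(Yp, T) - sse_loss(Y, T)) / eps
analytic = sse_backward(Y, T)[1, 2]
```

A gradient check like this is a quick way to catch sign or scaling mistakes before wiring a custom backwardLoss into a network.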
classdef sseClassificationLayer < nnet.layer.ClassificationLayer
|
Atoms Physics NEET Practice Questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, and PDF solved with answers
The diagram shows the energy levels for an electron in a certain atom. Which transition shown represents the emission of a photon with the most energy?
Subtopic: Spectral Series |
The kinetic energy of the electron in an orbit of radius r in the hydrogen atom is (e = electronic charge)
\frac{k{e}^{2}}{{r}^{2}}
\frac{k{e}^{2}}{2r}
\frac{k{e}^{2}}{r}
\frac{k{e}^{2}}{2{r}^{2}}
Subtopic: Bohr's Model of Atom |
Whenever a hydrogen atom emits a photon in the Balmer series:
1. it may emit another photon in the Balmer series.
2. it must emit another photon in the Lyman series.
3. it may emit another photon in the Paschen series.
4. it need not emit any more photon.
The ionisation potential of hydrogen atom is
1. 13.60 volt
4. 14.24 volt
1. Electric fibre
2. Neon street Sign
3. Red traffic light
Ionisation potential of hydrogen atom is 13.6 eV. Hydrogen atoms in the ground state are excited by monochromatic radiation of photon energy 12.1 eV. According to Bohr's theory, the spectral lines emitted by hydrogen will be:
The frequency of the series limit of the Balmer series of the hydrogen atom, in terms of the Rydberg constant R and the velocity of light C, is:
\frac{RC}{4}
\frac{4}{RC}
4. 4RC
The orbital acceleration of the electron is
\frac{{n}^{2}{h}^{2}}{4{\mathrm{\pi }}^{2}{\mathrm{m}}^{2}{\mathrm{r}}^{3}}
\frac{{n}^{2}{h}^{2}}{4{n}^{2}{\mathrm{r}}^{3}}
\frac{4{n}^{2}{h}^{2}}{{\mathrm{\pi }}^{2}{\mathrm{m}}^{2}{\mathrm{r}}^{3}}
\frac{4{n}^{2}{h}^{2}}{4{\mathrm{\pi }}^{2}{\mathrm{m}}^{2}{\mathrm{r}}^{3}}
Consider the spectral line resulting from the transition n = 2 to n = 1 in the atoms and ions given below. The shortest wavelength is given by
3. Singly ionised helium
4. Doubly ionised lithium
An
\alpha -
particle of energy 5 MeV is scattered through
180°
by a fixed uranium nucleus. The distance of closest approach is of the order of
{10}^{-10} m
{10}^{-13} m
{10}^{-14} m
{10}^{-16} m
Subtopic: Various Atomic Models |
|
Predicting Diabetes using Logistic Regression with TensorFlow.js | Deep Learning for JavaScript Hackers (Part I) | Curiousily - Hacker's Guide to Machine Learning
Predicting Diabetes using Logistic Regression with TensorFlow.js | Deep Learning for JavaScript Hackers (Part I)
05.07.2019 — Logistic Regression, TensorFlow, Machine Learning, JavaScript — 8 min read
TL;DR Build a Logistic Regression model in TensorFlow.js using the high-level layers API, and predict whether or not a patient has Diabetes. Learn how to visualize the data, create a Dataset, train and evaluate multiple models.
You’ve been living in this forgotten city for the past 8+ months. You never felt comfortable anywhere but home. However, this place sets a new standard. The constant changes between dry and humid heat are killing you, fast.
The Internet connection is spotty at best, and you haven't heard from your loved ones for more than two weeks. You have no idea how your partner and your kids are doing. You sometimes question your love for your country.
This morning you feel even worse: constantly hungry and thirsty. You have urinated four times already, and your vision is somewhat blurry. And it's not just today; you've been feeling like this for at least a week.
You went to the doctor, and she said you might have Diabetes. Both your mother and father suffer from it, so it seems likely to you. She wasn't sure, though, and ordered a glucose test. Unfortunately, you're being called away and have to go before the results are in.
You’re going away for two weeks. Only a couple of guys and your laptop! You have a couple of minutes and download a Diabetes patient dataset. You have TensorFlow.js already installed and a copy of the whole API. Can you build a model to predict whether or not you have Diabetes?
As of 2017, an estimated 425 million people had diabetes worldwide (around 5.5%)
Our data comes from Kaggle but was first introduced in the paper: Using the ADAP Learning Algorithm to Forecast the Onset of Diabetes Mellitus
The population for this study was the Pima Indian population near Phoenix, Arizona. That population has been under continuous study since 1965 by the National Institute of Diabetes and Digestive and Kidney Diseases because of its high incidence rate of diabetes. Each community resident over 5 years of age was asked to undergo a standardized examination every two years, which included an oral glucose tolerance test. Diabetes was diagnosed according to World Health Organization Criteria; that is, if the 2 hour post-load plasma glucose was at least 200 mg/dl (11.1 mmol/l) at any survey examination or if the Indian Health Service Hospital serving the community found a glucose concentration of at least 200 mg/dl during the course of routine medical care.
Here is a summary of the data:
Pregnancies - Number of times pregnant
Glucose - Plasma glucose concentration at 2 hours in an oral glucose tolerance test
BloodPressure - Diastolic blood pressure (mm Hg)
SkinThickness - Triceps skin fold thickness (mm)
BMI - Body mass index (
\frac{weight}{height^2}
in kg/m²)
DiabetesPedigreeFunction - Diabetes Pedigree Function (DPF)
Age - Age (years)
Outcome - Class variable (0 - healthy or 1 - diabetic)
According to Estimating Probabilities of Diabetes Mellitus Using Neural Networks paper, the DPF provides:
A synthesis of the diabetes mellitus history in relatives and the genetic relationship of those relatives to the subject. The DPF uses information from parents, grandparents, siblings, aunts and uncles, and first cousins. It provides a measure of the expected genetic influence of affected and unaffected relatives on the subject’s eventual diabetes risk.
Who are Pima Indians?
The Pima (or Akimel Oʼodham, also spelled Akimel Oʼotham, “River People”, formerly known as Pima) are a group of Native Americans living in an area consisting of what is now central and southern Arizona. The majority population of the surviving two bands of the Akimel Oʼodham are based in two reservations: the Keli Akimel Oʼotham on the Gila River Indian Community (GRIC) and the On’k Akimel Oʼodham on the Salt River Pima-Maricopa Indian Community (SRPMIC).
We’ll use the Papa Parse library to read the csv file. Unfortunately, Papa Parse doesn’t work well with await/async. Let’s change that:
import * as Papa from "papaparse"

Papa.parsePromise = function (file) {
  return new Promise(function (complete, error) {
    Papa.parse(file, {
      header: true,
      download: true,
      dynamicTyping: true,
      complete,
      error,
    })
  })
}
We use the dynamicTyping parameter to instruct Papa Parse to convert the numbers in the dataset from strings. Let’s define a function that loads the data:
const loadData = async () => {
  const csv = await Papa.parsePromise(
    "https://raw.githubusercontent.com/curiousily/Logistic-Regression-with-TensorFlow-js/master/src/data/diabetes.csv"
  )
  return csv.data
}

const data = await loadData()
Good job! We have the data; let's get familiar with it!
While tfjs-vis is nice and well integrated with TensorFlow.js, it lacks (at the time of this writing) a ton of features you might need - overlay plots, color changes, scale customization, etc. That’s why we’ll use Plotly’s Javascript library to make some beautiful plots for our data exploration.
Let’s have a look at the distribution of healthy vs diabetic people:
A little above 65% of the patients in our dataset are healthy. That means that our model should be accurate more than 65% of the time to be any good. Next up - the insulin levels:
Note that there is a big overlap between the two distributions. Also, we have a lot of 0s in the dataset. It seems we have a lot of missing values, where NaNs were replaced with 0s.
Another important one is the glucose levels after the test:
While there is some overlap, this test seems like it separates the healthy from diabetic patients pretty well.
Let’s have a look at the age:
Generally speaking, it seems like older people are more likely to have diabetes.
Maybe we should take a look at the relationship between age and glucose levels:
The combination of those two seems to separate healthy and diabetic patients very well. That might do wonders for our model.
Another combination you might want to try is the skin thickness vs BMI:
Yep, this one is horrible and doesn’t tell us much :)
Currently, our data sits in an array of objects. Unfortunately, TensorFlow doesn't work well with those. Luckily, there is the tfjs-data package. We're going to create a Dataset from our CSV file and use it to train our model with the createDataSets() function:
const createDataSets = (data, features, testSize, batchSize) => {
The features parameter specifies which columns are in the dataset. testSize is the fraction of the data that is going to be used for testing. batchSize controls the number of data points when the dataset is split into chunks (batches).
Let’s start by extracting the features from the data:
const X = data.map(r =>
  features.map(f => {
    const val = r[f]
    return val === undefined ? 0 : val
  })
)
We're replacing missing values in our features with 0s. You might try to train your model without this step and see what happens.
Let’s prepare the labels:
const y = data.map(r => {
  const outcome = r.Outcome === undefined ? 0 : r.Outcome
  return oneHot(outcome)
})
Here’s the definition of oneHot:
const oneHot = outcome => Array.from(tf.oneHot(outcome, 2).dataSync())
One-hot encoding turns categorical variables (healthy - 0 and diabetic - 1) into an array where 1 corresponds to the position of the category and all other variables are 0. Here are some examples:
1 // diabetic =>
;[0, 1]

0 // healthy =>
;[1, 0]
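For reference, the same one-hot idea as a tiny sketch (plain Python rather than TensorFlow.js; the helper name is made up):

```python
def one_hot(outcome, num_classes=2):
    """Index -> one-hot list: 1 at the category's position, 0 elsewhere."""
    v = [0] * num_classes
    v[outcome] = 1
    return v

diabetic = one_hot(1)  # [0, 1]
healthy = one_hot(0)   # [1, 0]
```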
Let’s create a Dataset from our data:
const ds = tf.data
  .zip({ xs: tf.data.array(X), ys: tf.data.array(y) })
  .shuffle(data.length, 42)
Note that we also shuffle the data with a seed of 42 :)
Finally, let’s split the data into training and validation datasets:
const splitIdx = parseInt((1 - testSize) * data.length, 10)

return [
  ds.take(splitIdx).batch(batchSize),
  ds.skip(splitIdx + 1).batch(batchSize),
  tf.tensor(X.slice(splitIdx)),
  tf.tensor(y.slice(splitIdx)),
]
We use take to create the training dataset, skip to omit the training examples for the validation dataset and finally, split the data into chunks using batch.
Additionally, we return data for testing our model (more on this later).
Logistic Regression (contrary to its name) allows you to get binary (yes/no) answers from your data. Moreover, it gives you the probability for each answer. Questions like:
Should I ask my boss for a higher salary?
Is this person a real friend?
Does my partner cheat on me?
Do I cheat on my partner?
Do you get where I am getting at?
are answerable using Logistic Regression, provided sufficient data is available (and you believe there are answers to all of these).
But I digress, let’s have a look at the mathematical formulation of the Logistic Regression. First, let’s start with the Linear Model:
y = b_1x + b_0,

where x is the input, b_1 is the weight (slope), b_0 is the bias (intercept), and y is the prediction.
We're going to use the sigmoid (logistic) function to get probabilities out of the Linear Model; its generalization, the softmax function, allows us to create models with more than 2 output classes. For the binary case:

p = \frac{1}{1+e^{-(b_1x + b_0)}}
where b_1 defines the steepness of the curve and b_0 moves the curve left and right.
We want to use our data X and some training magic to learn the parameters b_1 and b_0. Let's use TensorFlow.js for that!
Note that this model will give us a probabilistic answer instead of just a binary response. You might decide to ignore a prediction if the model is not sure about it - e.g. below 80%.
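As a sketch of the formula above (plain Python, with illustrative values for b_1 and b_0):

```python
import math

def logistic(x, b1, b0):
    """p = 1 / (1 + exp(-(b1 * x + b0)))"""
    return 1.0 / (1.0 + math.exp(-(b1 * x + b0)))

# b1 controls the steepness of the curve, b0 shifts it left/right
p_mid = logistic(0.0, 1.0, 0.0)  # 0.5 at the midpoint
p_hi = logistic(4.0, 2.0, 0.0)   # close to 1 for large b1 * x
```

Thresholding p (say, at 0.8 as suggested above) is how the probabilistic output turns into a yes/no decision.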
Let’s put the theory into practice by building a model into TensorFlow.js and predict the outcome for a patient.
Remember that the key to building a Logistic Regression model was the Linear Model and applying a softmax function to it:
const model = tf.sequential()
model.add(
  tf.layers.dense({
    units: 2,
    activation: "softmax",
    inputShape: [featureCount],
  })
)
Note that we have 2 outputs because of the one-hot encoding, and a dynamic input count based on the features we've chosen to train the model. Yes, it is that easy to build a Logistic Regression model in TensorFlow.js.
The next step is to compile the model:
const optimizer = tf.train.adam(0.001)

model.compile({
  optimizer: optimizer,
  loss: "binaryCrossentropy",
  metrics: ["accuracy"],
})
The training process of our model consists of minimizing the loss function. This gets done by the Adam optimizer we’re providing. Note that we’re providing a learning rate of 0.001.
The learning rate is known as a hyperparameter since it is a parameter you provide for your model to use. It controls how much each new update should “override” what your model already knows. Choosing the “correct” learning rate is somewhat of voodoo magic.
We're using Cross-Entropy loss (also known as log loss) to evaluate how well our model is doing. It (harshly) penalizes wrong answers given by classification models, based on the probabilities they give for each class. Here is the definition:
\text{Cross-Entropy}=-\sum_{c=1}^Cy_{o,c}\log(p_{o,c})
where C is the number of classes, y is a binary indicator of whether class label c is the correct classification for observation o, and p is the predicted probability that observation o is of class c.
Note that we ask TensorFlow to record the accuracy metric.
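The definition can be sketched directly (plain Python, with made-up probabilities), which also shows the "harsh penalty" in action:

```python
import math

def cross_entropy(y_true, p_pred, eps=1e-12):
    """-sum_c y_c * log(p_c) for a single one-hot-encoded observation."""
    return -sum(y * math.log(max(p, eps)) for y, p in zip(y_true, p_pred))

# True class is "diabetic" ([0, 1])
confident_right = cross_entropy([0, 1], [0.1, 0.9])  # small loss
confident_wrong = cross_entropy([0, 1], [0.9, 0.1])  # much larger loss
```

A confident wrong prediction costs far more than a confident right one, which is exactly the pressure that drives the probabilities toward the correct class during training.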
Let’s use fitDataset to train our model using the training and validation datasets we’ve prepared:
await model.fitDataset(trainDs, {
  epochs: 100,
  validationData: validDs,
  callbacks: {
    onEpochEnd: async (epoch, logs) => {
      trainLogs.push(logs)
      tfvis.show.history(lossContainer, trainLogs, ["loss", "val_loss"])
      tfvis.show.history(accContainer, trainLogs, ["acc", "val_acc"])
    },
  },
})
We train our model for 100 epochs (number of times the whole training set is shown to the model) and record the training logs for visualization using the onEpochEnd callback.
We’re going to wrap all of this into a function called trainLogisticRegression which is defined as:
const trainLogisticRegression = async (
  featureCount,
  trainDs,
  validDs
) => {
Let’s use everything we’ve built so far to evaluate how well our model is doing:
const features = ["Glucose"]

const [trainDs, validDs, xTest, yTest] = createDataSets(data, features, 0.1, 16)

trainLogisticRegression(features.length, trainDs, validDs)
Note that we only use the glucose levels for training our model. Here are the results:
Not good at all. Our model performs worse than a dummy classifier that always predicts "healthy" (correct 65% of the time). Also, the loss never really starts dropping. Let's try with more data:
const features = ["Glucose", "Age", "Insulin", "BloodPressure"]

const model = await trainLogisticRegression(features.length, trainDs, validDs)
Much better, the loss value is reduced significantly during training, and we obtain about 79% accuracy on the validation set. Let’s take a closer look at the classification performance with a confusion matrix:
The confusion matrix can be obtained using the model predictions and test set:
const preds = model.predict(xTest).argMax(-1)
const labels = yTest.argMax(-1)

const confusionMatrix = await tfvis.metrics.confusionMatrix(labels, preds)

const container = document.getElementById("confusion-matrix")

tfvis.render.confusionMatrix(container, {
  values: confusionMatrix,
  tickLabels: ["Healthy", "Diabetic"],
})
Even though our model might’ve obtained better accuracy, the results are still horrible. Being healthy is vastly overpredicted compared to having diabetes. What if we try with a more complex model:
const model = tf.sequential()
model.add(
  tf.layers.dense({
    units: 12,
    activation: "relu",
    inputShape: [featureCount],
  })
)
model.add(
  tf.layers.dense({
    units: 2,
    activation: "softmax",
  })
)
Here is the confusion matrix for this model:
We’ll not look into this model for now, but note that we obtain much better results by increasing the complexity of the model.
Congratulations! You built and trained not one, but a couple of models, including Logistic Regression, that predict whether or not a patient has Diabetes. You've also met the real world: processing data and training and building models are hard things to do. Moreover, not everything is predictable, no matter how many data points you have.
That said, there are ways to improve the process of building and training models. We know that using some techniques is better than others, in a certain context. Well, Machine Learning is nuanced :)
Logistic Regression by Dr. Saed Sayad
|
Design Kalman filter for state estimation - MATLAB kalman - MathWorks India
You construct the model sys with known inputs u and white process noise inputs w, such that w consists of the last Nw inputs to sys. The "true" plant output yt consists of all outputs of sys. You also provide the noise covariance data Q, R, and N. The returned Kalman filter kalmf is a state-space model that takes the known inputs u and the noisy measurements y and produces an estimate
\stackrel{^}{y}
of the true plant output and an estimate
\stackrel{^}{x}
of the plant states. kalman also returns the Kalman gains L and the steady-state error covariance matrix P.
type = 'current' — Compute output estimates
\stackrel{^}{y}\left[n|n\right]
and state estimates
\stackrel{^}{x}\left[n|n\right]
using all available measurements up to
y\left[n\right]
type = 'delayed' — Compute output estimates
\stackrel{^}{y}\left[n|n-1\right]
\stackrel{^}{x}\left[n|n-1\right]
using measurements only up to
y\left[n-1\right]
. The delayed estimator is easier to implement inside control loops.
The Kalman filter kalmf is a state-space model having two inputs and four outputs. kalmf takes as inputs the plant input signal u and the noisy plant output
y={y}_{t}+v
. The first output is the estimated true plant output
\stackrel{^}{y}
. The remaining three outputs are the state estimates
\stackrel{^}{x}
. Examine the input and output names of kalmf to see how kalman labels them accordingly.
A,\left[B\text{\hspace{0.17em}}G\right],C,\left[D\text{\hspace{0.17em}}H\right]
\begin{array}{l}\stackrel{˙}{x}=Ax+Bu+Gw\text{ }\text{ }\\ y=Cx+Du+Hw+v\end{array}
\begin{array}{l}x\left[n+1\right]=Ax\left[n\right]+Bu\left[n\right]+Gw\left[n\right]\\ y\left[n\right]=Cx\left[n\right]+Du\left[n\right]+Hw\left[n\right]+v\left[n\right]\end{array}
'current' — Compute output estimates
\stackrel{^}{y}\left[n|n\right]
\stackrel{^}{x}\left[n|n\right]
y\left[n\right]
'delayed' — Compute output estimates
\stackrel{^}{y}\left[n|n-1\right]
\stackrel{^}{x}\left[n|n-1\right]
y\left[n-1\right]
Kalman estimator or Kalman filter, returned as a state-space (ss) model. The resulting estimator has inputs
\left[u;y\right]
\left[\stackrel{^}{y};\stackrel{^}{x}\right]
. In other words, kalmf takes as inputs the plant input u and the noisy plant output y, and produces as outputs the estimated noise-free plant output
\stackrel{^}{y}
and the estimated state values
\stackrel{^}{x}
\stackrel{˙}{\stackrel{^}{x}}=A\stackrel{^}{x}+Bu+L\left(y-C\stackrel{^}{x}-Du\right).
\stackrel{^}{x}\left[n+1|n\right]=A\stackrel{^}{x}\left[n|n-1\right]+Bu\left[n\right]+L\left(y\left[n\right]-C\stackrel{^}{x}\left[n|n-1\right]-Du\left[n\right]\right).
P=\underset{t\to \infty }{\mathrm{lim}}E\left(\left\{x-\stackrel{^}{x}\right\}{\left\{x-\stackrel{^}{x}\right\}}^{T}\right).
\begin{array}{l}P=\underset{n\to \infty }{\mathrm{lim}}E\left(\left\{x\left[n\right]-\stackrel{^}{x}\left[n|n-1\right]\right\}{\left\{x\left[n\right]-\stackrel{^}{x}\left[n|n-1\right]\right\}}^{T}\right),\\ Z=\underset{n\to \infty }{\mathrm{lim}}E\left(\left\{x\left[n\right]-\stackrel{^}{x}\left[n|n\right]\right\}{\left\{x\left[n\right]-\stackrel{^}{x}\left[n|n\right]\right\}}^{T}\right).\end{array}
\stackrel{^}{x}\left[n|n\right]=\stackrel{^}{x}\left[n|n-1\right]+{M}_{x}\left(y\left[n\right]-C\stackrel{^}{x}\left[n|n-1\right]-Du\left[n\right]\right)
\stackrel{^}{y}\left[n|n\right]=C\stackrel{^}{x}\left[n|n-1\right]+Du\left[n\right]+{M}_{y}\left(y\left[n\right]-C\stackrel{^}{x}\left[n|n-1\right]-Du\left[n\right]\right)
When there is no direct feedthrough from the noise input w to the plant output y (that is, when H = 0, see Discrete-Time Estimation), then
{M}_{y}=C{M}_{x}
, and the output estimate simplifies to
\stackrel{^}{y}\left[n|n\right]=C\stackrel{^}{x}\left[n|n\right]+Du\left[n\right]
\overline{R}>0
\left[\begin{array}{cc}\overline{Q}& \overline{N};\text{\hspace{0.17em}}\text{\hspace{0.17em}}\begin{array}{cc}{\overline{N}}^{\prime }& \overline{R}\end{array}\end{array}\right]\ge 0
\left[\begin{array}{cc}\overline{Q}& \overline{N}\\ {\overline{N}}^{\prime }& \overline{R}\end{array}\right]=\left[\begin{array}{cc}G& 0\\ H& I\end{array}\right]\left[\begin{array}{cc}Q& N\\ {N}^{\prime }& R\end{array}\right]{\left[\begin{array}{cc}G& 0\\ H& I\end{array}\right]}^{\prime }.
\left(A-\overline{N}{\overline{R}}^{-1}C,\overline{Q}-\overline{N}{\overline{R}}^{-1}{\overline{N}}^{T}\right)
has no uncontrollable mode on the imaginary axis in continuous time, or on the unit circle in discrete time.
\begin{array}{l}\stackrel{˙}{x}=Ax+Bu+Gw\text{ }\text{ }\\ y=Cx+Du+Hw+v\end{array}
E\left(w\right)=E\left(v\right)=0,\text{ }E\left(w{w}^{T}\right)=Q,\text{ }E\left(v{v}^{T}\right)=R,\text{ }E\left(w{v}^{T}\right)=N
The Kalman filter, or Kalman estimator, computes a state estimate
\stackrel{^}{x}\left(t\right)
that minimizes the steady-state error covariance:
P=\underset{t\to \infty }{\mathrm{lim}}E\left(\left\{x-\stackrel{^}{x}\right\}{\left\{x-\stackrel{^}{x}\right\}}^{T}\right).
\begin{array}{l}\frac{d\stackrel{^}{x}}{dt}=A\stackrel{^}{x}+Bu+L\left(y-C\stackrel{^}{x}-Du\right)\\ \left[\begin{array}{c}\stackrel{^}{y}\\ \stackrel{^}{x}\end{array}\right]=\left[\begin{array}{c}C\\ I\end{array}\right]\stackrel{^}{x}+\left[\begin{array}{c}D\\ 0\end{array}\right]u\end{array}
L=\left(P{C}^{T}+\overline{N}\right){\overline{R}}^{-1}
\begin{array}{l}\overline{R}=R+HN+{N}^{T}{H}^{T}+HQ{H}^{T}\\ \overline{N}=G\left(Q{H}^{T}+N\right)\end{array}
The estimator uses the known inputs u and the measurements y to generate the output and state estimates
\stackrel{^}{y}
\stackrel{^}{x}
\begin{array}{l}x\left[n+1\right]=Ax\left[n\right]+Bu\left[n\right]+Gw\left[n\right]\\ y\left[n\right]=Cx\left[n\right]+Du\left[n\right]+Hw\left[n\right]+v\left[n\right]\end{array}
E\left(w\left[n\right]w{\left[n\right]}^{T}\right)=Q,\text{ }E\left(v\left[n\right]v{\left[n\right]}^{T}\right)=R,\text{ }E\left(w\left[n\right]v{\left[n\right]}^{T}\right)=N
\stackrel{^}{x}\left[n+1|n\right]=A\stackrel{^}{x}\left[n|n-1\right]+Bu\left[n\right]+L\left(y\left[n\right]-C\stackrel{^}{x}\left[n|n-1\right]-Du\left[n\right]\right).
L=\left(AP{C}^{T}+\overline{N}\right){\left(CP{C}^{T}+\overline{R}\right)}^{-1}
\begin{array}{l}\overline{R}=R+HN+{N}^{T}{H}^{T}+HQ{H}^{T}\\ \overline{N}=G\left(Q{H}^{T}+N\right)\end{array}
Current estimator — Generates output estimates
\stackrel{^}{y}\left[n|n\right]
\stackrel{^}{x}\left[n|n\right]
y\left[n\right]
. This estimator has the output equation
\left[\begin{array}{c}\stackrel{^}{y}\left[n|n\right]\\ \stackrel{^}{x}\left[n|n\right]\end{array}\right]=\left[\begin{array}{c}\left(I-{M}_{y}\right)C\\ I-{M}_{x}C\end{array}\right]\stackrel{^}{x}\left[n|n-1\right]+\left[\begin{array}{cc}\left(I-{M}_{y}\right)D& {M}_{y}\\ -{M}_{x}D& {M}_{x}\end{array}\right]\left[\begin{array}{c}u\left[n\right]\\ y\left[n\right]\end{array}\right].
\begin{array}{c}{M}_{x}=P{C}^{T}{\left(CP{C}^{T}+\overline{R}\right)}^{-1},\\ {M}_{y}=\left(CP{C}^{T}+HQ{H}^{T}+HN\right){\left(CP{C}^{T}+\overline{R}\right)}^{-1}.\end{array}
Thus, Mx updates the state estimate
\stackrel{^}{x}\left[n|n-1\right]
using the new measurement
y\left[n\right]
\stackrel{^}{x}\left[n|n\right]=\stackrel{^}{x}\left[n|n-1\right]+{M}_{x}\left(y\left[n\right]-C\stackrel{^}{x}\left[n|n-1\right]-Du\left[n\right]\right)
\stackrel{^}{y}\left[n|n\right]=C\stackrel{^}{x}\left[n|n-1\right]+Du\left[n\right]+{M}_{y}\left(y\left[n\right]-C\stackrel{^}{x}\left[n|n-1\right]-Du\left[n\right]\right)
When H = 0, then
{M}_{y}=C{M}_{x}
\stackrel{^}{y}\left[n|n\right]=C\stackrel{^}{x}\left[n|n\right]+Du\left[n\right]
Delayed estimator — Generates output estimates
\stackrel{^}{y}\left[n|n-1\right]
\stackrel{^}{x}\left[n|n-1\right]
using measurements only up to y[n–1]. This estimator has the output equation:
\left[\begin{array}{c}\stackrel{^}{y}\left[n|n-1\right]\\ \stackrel{^}{x}\left[n|n-1\right]\end{array}\right]=\left[\begin{array}{c}C\\ I\end{array}\right]\stackrel{^}{x}\left[n|n-1\right]+\left[\begin{array}{cc}D& 0\\ 0& 0\end{array}\right]\left[\begin{array}{c}u\left[n\right]\\ y\left[n\right]\end{array}\right]
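The discrete-time update equations above can be sketched numerically. The following is a minimal illustration for the simplified case G = I, H = 0, N = 0 (so that R̄ = R, N̄ = 0, and L = A·Mx); all matrices here are illustrative placeholders, not taken from any particular model:

```python
import numpy as np

# Illustrative system matrices (assumptions: G = I, H = 0, N = 0)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)        # process noise covariance
R = np.array([[0.1]])       # measurement noise covariance

# Steady-state error covariance P by iterating the Riccati recursion
P = np.eye(2)
for _ in range(1000):
    S = C @ P @ C.T + R
    P = A @ P @ A.T + Q - (A @ P @ C.T) @ np.linalg.solve(S, C @ P @ A.T)

Mx = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)  # innovation gain M_x
L = A @ Mx                                     # predictor gain; L = A*M_x since N = 0

def update(x_pred, y):
    """One step of the 'current' estimator: x[n|n], then x[n+1|n] (u = 0)."""
    x_filt = x_pred + Mx @ (y - C @ x_pred)    # measurement update
    x_next = A @ x_filt                        # time update
    return x_filt, x_next
```

Here the steady-state covariance is obtained by naive fixed-point iteration of the Riccati recursion; a toolbox routine would solve the Riccati equation directly.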
|
Factors affecting velocity of sound — lesson. Science State Board, Class 10.
Factors affecting the velocity of sound:
When a sound wave travels through a solid medium, its velocity depends on the elastic properties and the density of the solid; the elastic properties of a solid are characterised by its elastic moduli. The velocity of sound is directly proportional to the square root of the elastic modulus and inversely proportional to the square root of the density. Consequently, the velocity of sound decreases as the density of the solid increases, and increases as the elasticity of the material increases.
The velocity of sound waves in gases is affected by the following factors.
In a gas medium, the velocity of sound is inversely proportional to the square root of the density. As a result, as the density of the gas increases, the velocity decreases.
In a gas medium, the velocity of sound is proportional to the square root of the absolute temperature; the velocity of sound in a gas therefore increases as the temperature increases.
Near ordinary temperatures, this dependence is well approximated by the following linear equation, which gives the velocity at temperature \(T\) (in °C):
{v}_{t}={v}_{0}+0.61\,T
{v}_{0}
is the velocity of sound in the gas medium at \(0° C\).
For air,
{v}_{0}
\(= 331\) m/s.
Hence, the velocity of sound changes by \(0.61\) m/s when the temperature changes by one degree Celsius.
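The linear relation above can be sketched as a one-line helper (a minimal sketch; the function name is illustrative):

```python
# Linear temperature dependence from the lesson: v_t = v_0 + 0.61*T
def speed_of_sound_air(t_celsius, v0=331.0):
    """Velocity of sound in air (m/s) at temperature t_celsius (in deg C)."""
    return v0 + 0.61 * t_celsius
```

At \(20\) °C this gives about \(343\) m/s, a commonly quoted value for the speed of sound in air.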
Effect of relative humidity:
The speed of sound increases as humidity rises. That is why, during rainy seasons, you can clearly hear sounds from a long distance.
|
Chemical Bonding and Molecular Structure Chemistry NEET Practice Questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, and PDF solved with answers
An element X is placed in group IA of the periodic table because it forms:
1. An oxide which is acidic
2. A volatile chloride having formula XCl2
3. An ionic chloride having formula XCl
4. An insoluble chloride XCl4
Subtopic: Ionic Bond |
Which of the following would have a permanent dipole moment?
Subtopic: Polarity |
In the case of alkali metals, the covalent character decreases in the order
1. MCl > MI > MBr > MF
2. MF > MCl > MBr > MI
3. MF > MCl > MI > MBr
4. MI > MBr > MCl > MF
Which two of the following are isostructural?
1. XeF2, and IF2-
2. NH3, and BF3
3. CO2, and SiO2
4. PCl5, and IF5
Subtopic: Types of Chemical Bond |
Which one of the following species does not exist?
{\mathrm{Be}}_{2}^{+}
Subtopic: M.O.T |
Which one of the following is not paramagnetic?
{\mathrm{N}}_{2}^{+}
{\mathrm{O}}_{2}^{-}
In a regular octahedral molecule MX6, the number of X–M–X bonds at 180° is:
Subtopic: Hybridisation |
Which of the following is not isostructural with SiCl4?
In which of the following species does the central atom have a type of hybridisation that is not the same as that present in the other three?
{\mathrm{I}}_{3}^{-}
{\mathrm{SbCl}}_{5}^{2-}
Which of the following has
\mathrm{p\pi } -\mathrm{d\pi }
{\mathrm{NO}}_{3}^{-}
{\mathrm{SO}}_{3}^{2-}
{\mathrm{BO}}_{3}^{3-}
{\mathrm{CO}}_{3}^{2-}
|
Calculate Sensitivities Using sbiosimulate - MATLAB & Simulink - MathWorks 한국
Load and Configure the Model for Sensitivity Analysis
Extract and Plot Sensitivity Data
This table lists the reactions used to model the G protein cycle and the corresponding rate parameters (rate constants) for each mass action reaction. For reversible reactions, the forward rate parameter is listed first.
Assume that you are calculating the sensitivity of species Ga with respect to every parameter in the model. Thus, you want to calculate the time-dependent derivatives
\frac{\partial \left(Ga\right)}{\partial \left(kRLm\right)},\frac{\partial \left(Ga\right)}{\partial \left(kRL\right)},\frac{\partial \left(Ga\right)}{\partial \left(kG1\right)},\frac{\partial \left(Ga\right)}{\partial \left(kGa\right)},\dots
The gprotein_norules.sbproj project contains a model that represents the wild-type strain (stored in variable m1).
The options for sensitivity analysis are in the configuration set object. Get the configuration set object from the model.
csObj = getconfigset(m1);
Use the sbioselect function, which lets you query by type, to retrieve the Ga species from the model.
Ga = sbioselect(m1,'Type','species','Where','Name','==','Ga');
Set the Outputs property of the SensitivityAnalysisOptions object to the Ga species.
csObj.SensitivityAnalysisOptions.Outputs = Ga;
Use the sbioselect function, which lets you query by type, to retrieve all the parameters from the model and store the vector in a variable, pif.
pif = sbioselect(m1,'Type','parameter');
Set the Inputs property of the SensitivityAnalysisOptions object to the pif variable containing the parameters.
csObj.SensitivityAnalysisOptions.Inputs = pif;
Enable sensitivity analysis in the configuration set object (csObj) by setting the SensitivityAnalysis option to true.
csObj.SolverOptions.SensitivityAnalysis = true;
Set the Normalization property of the SensitivityAnalysisOptions object to perform 'Full' normalization.
csObj.SensitivityAnalysisOptions.Normalization = 'Full';
Simulate the model and return the data to a SimData object:
You can extract sensitivity results using the getsensmatrix method of a SimData object. In this example, R is the sensitivity of the species Ga with respect to eight parameters. This example shows how to compare the variation of sensitivity of Ga with respect to various parameters, and find the parameters that affect Ga the most.
Extract sensitivity data in output variables T (time), R (sensitivity data for species Ga), snames (names of the states specified for sensitivity analysis), and ifacs (names of the input factors used for sensitivity analysis):
[T, R, snames, ifacs] = getsensmatrix(simDataObj);
Because R is a 3-D array with dimensions corresponding to times, output factors, and input factors, reshape R into columns of input factors to facilitate visualization and plotting:
R2 = squeeze(R);
After extracting the data and reshaping the matrix, plot the data:
plot(T,R2);
title('Normalized Sensitivity of Ga With Respect To Various Parameters');
ylabel('Normalized Sensitivity of Ga');
leg = legend(ifacs, 'Location', 'NorthEastOutside');
set(leg, 'Interpreter', 'none');
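Outside MATLAB, the same squeeze-and-plot idea can be sketched with NumPy; the array shape (times × 1 output × 8 input factors) mirrors the SimData dimensions described above, and the data here are random placeholders:

```python
import numpy as np

# Placeholder sensitivity array: 100 time points, 1 output (Ga),
# 8 input factors, mirroring the 3-D structure returned by getsensmatrix
rng = np.random.default_rng(0)
R = rng.normal(size=(100, 1, 8))

# Drop the singleton output dimension so each column is one input factor,
# analogous to MATLAB's squeeze(R)
R2 = np.squeeze(R)
```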
From the previous plot you can see that Ga is most sensitive to the parameters kGd, kRs, kRD1, and kGa. This suggests that the amount of active G protein in the cell depends on the rates of:
Degradation of the receptor-ligand complex
G protein activation
G protein inactivation
|
Circuit satisfiability problem - Wikipedia
The circuit on the left is satisfiable but the circuit on the right is not.
In theoretical computer science, the circuit satisfiability problem (also known as CIRCUIT-SAT, CircuitSAT, CSAT, etc.) is the decision problem of determining whether a given Boolean circuit has an assignment of its inputs that makes the output true.[1] In other words, it asks whether the inputs to a given Boolean circuit can be consistently set to 1 or 0 such that the circuit outputs 1. If that is the case, the circuit is called satisfiable. Otherwise, the circuit is called unsatisfiable. In the figure to the right, the left circuit can be satisfied by setting both inputs to be 1, but the right circuit is unsatisfiable.
CircuitSAT is closely related to the Boolean satisfiability problem (SAT) and, likewise, has been proven to be NP-complete.[2] It is a prototypical NP-complete problem; the Cook–Levin theorem is sometimes proved for CircuitSAT instead of for SAT, and CircuitSAT is then reduced to the other satisfiability problems to prove their NP-completeness.[1][3] The satisfiability of a circuit containing
{\displaystyle m}
arbitrary binary gates can be decided in time
{\displaystyle O(2^{0.4058m})}
Proof of NP-Completeness
Given a circuit and a satisfying set of inputs, one can compute the output of each gate in constant time. Hence, the output of the circuit is verifiable in polynomial time. Thus Circuit SAT belongs to complexity class NP. To show NP-hardness, it is possible to construct a reduction from 3SAT to Circuit SAT.
Suppose the original 3SAT formula has variables
{\displaystyle x_{1},x_{2},\dots ,x_{n}}
, and operators (AND, OR, NOT)
{\displaystyle y_{1},y_{2},\dots ,y_{k}}
. Design a circuit such that it has an input corresponding to every variable and a gate corresponding to every operator. Connect the gates according to the 3SAT formula. For instance, if the 3SAT formula is
{\displaystyle (\lnot x_{1}\land x_{2})\lor x_{3},}
the circuit will have 3 inputs, one AND, one OR, and one NOT gate. The input corresponding to
{\displaystyle x_{1}}
will be inverted before sending to an AND gate with
{\displaystyle x_{2},}
and the output of the AND gate will be sent to an OR gate with
{\displaystyle x_{3}.}
Notice that the 3SAT formula is equivalent to the circuit designed above; their outputs agree for the same inputs. Hence, if the 3SAT formula has a satisfying assignment, then the corresponding circuit will output 1, and vice versa. So this is a valid reduction, and Circuit SAT is NP-hard.
This completes the proof that Circuit SAT is NP-Complete.
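The example circuit can be checked by brute force over all input assignments; this sketch is just an illustration of satisfiability, not part of the proof:

```python
from itertools import product

# The example circuit (not x1 and x2) or x3, built from the
# 3SAT formula above
def circuit(x1, x2, x3):
    return ((not x1) and x2) or x3

# Enumerate all 2^3 input assignments and keep the satisfying ones
satisfying = [xs for xs in product([False, True], repeat=3) if circuit(*xs)]
```

Any non-empty result means the circuit is satisfiable; in general, of course, this enumeration takes exponential time in the number of inputs.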
Restricted Variants and Related Problems
Planar Circuit SAT
Assume that we are given a planar Boolean circuit (i.e. a Boolean circuit whose underlying graph is planar) containing only NAND gates with exactly two inputs. Planar Circuit SAT is the decision problem of determining whether this circuit has an assignment of its inputs that makes the output true. This problem is NP-complete.[5] In fact, if the restrictions are changed so that any gate in the circuit is a NOR gate, the resulting problem remains NP-complete.[5]
Circuit UNSAT
Circuit UNSAT is the decision problem of determining whether a given Boolean circuit outputs false for all possible assignments of its inputs. This is the complement of the Circuit SAT problem, and is therefore Co-NP-complete.
Reduction from CircuitSAT
Reduction from CircuitSAT or its variants can be used to show NP-hardness of certain problems, and provides us with an alternative to dual-rail and binary logic reductions. The gadgets that such a reduction needs to construct are:
A wire gadget. This gadget simulates the wires in the circuit.
A split gadget. This gadget guarantees that all the output wires have the same value as the input wire.
Gadgets simulating the gates of the circuit.
A True terminator gadget. This gadget is used to force the output of the entire circuit to be True.
A turn gadget. This gadget allows us to redirect wires in the right direction as needed.
A crossover gadget. This gadget allows us to have two wires cross each other without interacting.
Minesweeper Inference Problem
This problem asks whether it is possible to locate all the bombs given a Minesweeper board. It has been proven to be coNP-complete via a reduction from the Circuit UNSAT problem.[6] The gadgets constructed for this reduction are: wire, split, AND and NOT gates and terminator.[7] There are three crucial observations regarding these gadgets. First, the split gadget can also be used as the NOT gadget and the turn gadget. Second, constructing AND and NOT gadgets is sufficient, because together they can simulate the universal NAND gate. Finally, since we can simulate XOR with three NANDs, and since XOR is enough to build a crossover, this gives us the needed crossover gadget.
The Tseytin transformation
Main article: Tseytin transformation
The Tseytin transformation is a straightforward reduction from Circuit-SAT to SAT. The transformation is easy to describe if the circuit is wholly constructed out of 2-input NAND gates (a functionally-complete set of Boolean operators): assign every net in the circuit a variable, then for each NAND gate, construct the conjunctive normal form clauses (v1 ∨ v3) ∧ (v2 ∨ v3) ∧ (¬v1 ∨ ¬v2 ∨ ¬v3), where v1 and v2 are the inputs to the NAND gate and v3 is the output. These clauses completely describe the relationship between the three variables. Conjoining the clauses from all the gates with an additional clause constraining the circuit's output variable to be true completes the reduction; an assignment of the variables satisfying all of the constraints exists if and only if the original circuit is satisfiable, and any solution is a solution to the original problem of finding inputs that make the circuit output 1.[1][8] The converse—that SAT is reducible to Circuit-SAT—follows trivially by rewriting the Boolean formula as a circuit and solving it.
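The per-gate clause construction can be sketched as follows; the variable numbering is illustrative, and literals are encoded DIMACS-style as signed integers:

```python
# Tseytin clauses for one NAND gate: inputs v1, v2, output v3.
# A positive integer is the variable itself, a negative one its negation.
def nand_clauses(v1, v2, v3):
    """CNF clauses asserting v3 == NAND(v1, v2)."""
    return [(v1, v3), (v2, v3), (-v1, -v2, -v3)]

def satisfies(clauses, assignment):
    """assignment maps variable number -> bool; every clause needs a true literal."""
    value = lambda lit: assignment[abs(lit)] if lit > 0 else not assignment[abs(lit)]
    return all(any(value(lit) for lit in clause) for clause in clauses)
```

Conjoining such clause sets for every gate, plus a unit clause forcing the output variable true, yields the SAT instance described above.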
Structured Circuit Satisfiability
^ a b c David Mix Barrington and Alexis Maciel (July 5, 2000). "Lecture 7: NP-Complete Problems" (PDF).
^ Luca Trevisan (November 29, 2001). "Notes for Lecture 23: NP-completeness of Circuit-SAT" (PDF). Archived from the original (PDF) on December 26, 2011. Retrieved February 4, 2012.
^ See also, for example, the informal proof given in Scott Aaronson's lecture notes from his course Quantum Computing Since Democritus.
^ Sergey Nurk (December 1, 2009). "An O(2^{0.4058m}) upper bound for Circuit SAT".
^ a b "Algorithmic Lower Bounds: Fun With Hardness Proofs at MIT" (PDF).
^ Scott, Allan; Stege, Ulrike; van Rooij, Iris (2011-12-01). "Minesweeper May Not Be NP-Complete but Is Hard Nonetheless". The Mathematical Intelligencer. 33 (4): 5–17. doi:10.1007/s00283-011-9256-x. ISSN 1866-7414.
^ Kaye, Richard. Minesweeper is NP-complete (PDF).
^ Marques-Silva, João P. and Luís Guerra e Silva (1999). "Algorithms for Satisfiability in Combinational Circuits Based on Backtrack Search and Recursive Learning" (PDF).
Retrieved from "https://en.wikipedia.org/w/index.php?title=Circuit_satisfiability_problem&oldid=1033716800"
|
The p-Block Elements (XII) Chemistry NEET Practice Questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, and PDF solved with answers
Chemistry - The p-Block Elements (XII)
The chloride of an element A gives a neutral solution in water. In the periodic table, element A belongs to:
3. Fifth group
4. Transitional metal
Subtopic: Group 17 - Preparation,Properties & Uses |
The correct relationship between the pH of isomolar solutions of sodium oxide(pH1), sodium sulphide (pH2), sodium selenide (pH3) and sodium telluride (pH4) is -
{\mathrm{pH}}_{1} > {\mathrm{pH}}_{2} \approx {\mathrm{pH}}_{3} > {\mathrm{pH}}_{4}
{\mathrm{pH}}_{1} < {\mathrm{pH}}_{2} < {\mathrm{pH}}_{3} < {\mathrm{pH}}_{4}
{\mathrm{pH}}_{1} < {\mathrm{pH}}_{2} < {\mathrm{pH}}_{3} \approx {\mathrm{pH}}_{4}
{\mathrm{pH}}_{1} > {\mathrm{pH}}_{2} > {\mathrm{pH}}_{3 }> {\mathrm{pH}}_{4}
Subtopic: Group 16- Preparation,Properties & Uses |
NO2 can be produced by heating -
Subtopic: Group 15 -Preparation,Properties & Uses |
The formation of
{\mathrm{PH}}_{4}^{+}
is difficult compared to that of
{\mathrm{NH}}_{4}^{+}
because:
1. Lone pair of phosphorus is optically inert.
2. Lone pair of phosphorus resides in an almost pure p-orbital.
3. Lone pair of phosphorus resides in an sp3 orbital.
4. Lone pair of phosphorus resides in an almost pure s-orbital.
Concentrated nitric acid reacts with iodine to give:
Partial hydrolysis of one mole of peroxodisulphuric acid produces:
1. Two moles of sulphuric acid
2. Two moles of peroxomonosulphuric acid
3. One mole of sulphuric acid and one mole of peroxomonosulphuric acid
4. One mole of sulphuric acid, one mole of peroxomonosulphuric acid, and one mole of hydrogen peroxide
1. F2 has higher dissociation energy than Cl2
2. F has higher electron affinity than Cl
3. HF is a stronger acid than HCl
4. Boiling point increases down the group in halogens
The incorrectly matched characteristic is:
1. S8: covalent lattice
2. P4: tetrahedron
3. S52-: zigzag
4. SiO2: Covalent lattice
N2 cannot be produced by heating -
2. NH4Cl+NaNO2
3. NH4Cl+CaO
4. Ba(N3)2
An oxyacid among the following that contains both P-H and P-P bond simultaneously is -
|
The conversion process can be scripted to avoid repeating the same steps for each band separately. In bash, such a script might look like the following example. '''Note,''' however, that in this example script the constants, band parameters, and acquisition-related metadata are hard-coded!
==== Example Instructions ====
=== Vegetation Indices ===
The spectral radiance at the sensor is expressed in
{\displaystyle {\frac {W}{m^{2}*sr*nm}}}
Digital numbers are first converted to spectral radiance:
{\displaystyle L\lambda ={\frac {10^{4}*DN\lambda }{CalCoef\lambda *Bandwidth\lambda }}}
and the radiance is then converted to top-of-atmosphere (planetary) reflectance:
{\displaystyle \rho _{p}={\frac {\pi *L\lambda *d^{2}}{ESUN\lambda *cos(\Theta _{S})}}}
where:
* {\displaystyle \rho _{p}} is the unitless planetary reflectance,
* {\displaystyle \pi } is the mathematical constant (approximately 3.14159),
* {\displaystyle L\lambda } is the spectral radiance at the sensor's aperture,
* {\displaystyle d} is the Earth-Sun distance in astronomical units,
* {\displaystyle ESUN\lambda } is the mean solar exoatmospheric irradiance, expressed in {\displaystyle {\frac {W}{m^{2}*\mu m}}},
* {\displaystyle cos(\Theta _{S})} is the cosine of the solar zenith angle.
Depending on the sensor, the digital numbers {\displaystyle DN\lambda } are quantized to the range {\displaystyle [0,255]} (8-bit) or {\displaystyle [0,2047]} (11-bit).
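The two conversion formulas can be sketched as small helpers; all calibration and geometry values passed in are hypothetical and sensor-specific:

```python
import math

# Hypothetical single-band DN -> radiance -> reflectance conversion,
# following the calibration and reflectance formulas above
def dn_to_radiance(dn, cal_coef, bandwidth):
    """Spectral radiance: L = 1e4 * DN / (CalCoef * Bandwidth)."""
    return 1e4 * dn / (cal_coef * bandwidth)

def radiance_to_reflectance(radiance, d_au, esun, sun_zenith_deg):
    """Planetary reflectance: rho_p = pi * L * d^2 / (ESUN * cos(theta_S))."""
    return (math.pi * radiance * d_au ** 2
            / (esun * math.cos(math.radians(sun_zenith_deg))))
```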
|
Hourly output - zxc.wiki
The hourly output is the power delivered by a worker or a machine during one hour.
The term hourly output is used mainly in business administration and in electrical engineering. What the two disciplines have in common is that the output is measured over one hour.
In business administration, the hourly output is a business key figure that reflects output (for example, sales revenue) per hour worked. As early as 1941, Heinrich von Stackelberg defined the hourly output of a worker as "the amount of work product achieved during the individual hour". It is one of many metrics for measuring productivity:
{\displaystyle {\text{Hourly output}}={\frac {\text{Sales volume}}{\text{Working hours}}}}
The hourly output, and thus productivity, improves if a higher sales volume can be produced in constant working hours.
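As a worked example of the key figure above (the numbers are illustrative):

```python
# Hourly output as defined above: sales volume per hour worked
def hourly_output(sales_volume, working_hours):
    return sales_volume / working_hours

# e.g. 400 units produced in an 8-hour shift
rate = hourly_output(400, 8)
```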
If the work intensity of a worker is governed by machines (e.g. assembly-line work), the hourly output is determined exclusively by the running speed of the machines and, at a given speed, is constant over time. If, however, the work performance is largely determined by the worker's individual work curve, work strain, work motivation, disposition, ability, or will, then performance starts at a low level, rises through habituation to a maximum, and then gradually declines again through fatigue. The actual hourly output, however, is not bimodal like the work curve, because it can be assumed that physiological performance troughs are bridged by work breaks. Hourly output and operating time are positively correlated, because the duration of the operating time influences the achievable hourly output.
In electrical engineering, the hourly output is the output that an electrical machine (for example, an electric motor or a transformer) can deliver for one hour, starting from a cold state, without overheating. The continuous output, by contrast, can be delivered indefinitely without the machine becoming too hot. Stating the hourly output as the nominal output used to be common for machines that are typically exposed to changing loads, such as electric locomotives. Today, continuous operation is assumed as the rated operation.
In addition to the hourly output, the short-term output is also of interest for electric locomotives. It is limited to three to at most ten minutes, depending on the locomotive type and power level, and can be called up in situations that demand particularly high power (accelerating heavy trains, climbing steep ramps). Several components of the locomotive, in particular the transformer with its oil and the traction motors, heat up so much that they then need a longer cooling phase, during which at most the continuous output can be drawn. Once the temperatures have dropped sufficiently, the short-term output can be called up again without overheating causing permanent damage to the components.
Rolf Hüpen, Working Hours, Operating Hours and Employment , 1994
Th. Buchhold, F. Trawnik: The electrical equipment of the direct current railways including the overhead lines . Published by Julius Springer, Berlin 1931.
^ Heinrich von Stackelberg, Hourly Output and Daily Output, in: Archive for Mathematical Economic and Social Research, Volume 7, 1941, p. 3
↑ Rolf Hüpen, Working Hours, Operating Hours and Employment, 1994, p. 50
↑ Peter Rahm, Dynamic Working Time, 1972, p. 75
↑ Rolf Hüpen, Working Hours, Operating Hours and Employment, 1994, pp. 73 ff.
This page is based on the copyrighted Wikipedia article "Stundenleistung" (Authors); it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License. You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.
|
Computation model defining an abstract machine
For other uses, see Turing machine (disambiguation).
A physical Turing machine model. A true Turing machine would have unlimited tape on both sides; however, physical models can only have a finite amount of tape.
Formally, a (one-tape) Turing machine can be defined as a 7-tuple
{\displaystyle M=\langle Q,\Gamma ,b,\Sigma ,\delta ,q_{0},F\rangle }
where
{\displaystyle \Gamma } is a finite, non-empty set of tape alphabet symbols;
{\displaystyle b\in \Gamma } is the blank symbol (the only symbol allowed to occur on the tape infinitely often);
{\displaystyle \Sigma \subseteq \Gamma \setminus \{b\}} is the set of input symbols, that is, the set of symbols allowed to appear in the initial tape contents;
{\displaystyle Q} is a finite, non-empty set of states;
{\displaystyle q_{0}\in Q} is the initial state;
{\displaystyle F\subseteq Q} is the set of final states or accepting states; the initial tape contents is said to be accepted by {\displaystyle M} if it eventually halts in a state from {\displaystyle F};
{\displaystyle \delta :(Q\setminus F)\times \Gamma \not \to Q\times \Gamma \times \{L,R\}} is a partial function called the transition function, where L is left shift and R is right shift. If {\displaystyle \delta } is not defined on the current state and the current tape symbol, then the machine halts.
For example, a 3-state busy beaver has
{\displaystyle Q=\{{\mbox{A}},{\mbox{B}},{\mbox{C}},{\mbox{HALT}}\}}
{\displaystyle \Gamma =\{0,1\}}
{\displaystyle b=0} (the blank symbol)
{\displaystyle \Sigma =\{1\}}
{\displaystyle q_{0}={\mbox{A}}} (the initial state)
{\displaystyle F=\{{\mbox{HALT}}\}}
{\displaystyle \delta =} the transition table: in state A, on 0 write 1, move right, go to B; on 1 write 1, move left, go to C. In state B, on 0 write 1, move left, go to A; on 1 write 1, move right, stay in B. In state C, on 0 write 1, move left, go to B; on 1 write 1, move right, go to HALT.
Initially, every tape cell contains {\displaystyle 0}.
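The 3-state busy beaver can be simulated directly; this sketch encodes the standard transition table with head moves +1 (right) and −1 (left):

```python
# Transition table of the standard 3-state busy beaver:
# (state, scanned symbol) -> (symbol to write, head move, next state)
delta = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "C"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "B"), ("C", 1): (1, +1, "HALT"),
}

def run(delta, start="A", halt="HALT", max_steps=10_000):
    """Simulate on an initially all-0 tape (stored sparsely as a dict)."""
    tape, pos, state, steps = {}, 0, start, 0
    while state != halt and steps < max_steps:
        write, move, state = delta[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return tape, steps

tape, steps = run(delta)   # halts with six 1s on the tape
```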
Additional details required to visualize or implement Turing machines
In some variants the head is additionally allowed to stay in place, so the set of head movements {\displaystyle \{L,R\}} is extended to {\displaystyle \{L,R,N\}}, where N means "no shift".
The "state"
"State" diagrams
Equivalent models
Choice c-machines, oracle o-machines
Universal Turing machines
Comparison with real machines
Computational complexity theory
Historical background: computational machinery
The Entscheidungsproblem (the "decision problem"): Hilbert's tenth question of 1900
Alan Turing's a-machine
1937–1970: The "digital computer", the birth of "computer science"
1970–present: as a model of computation
Primary literature, reprints, and compilations
Church's thesis
Small Turing machines
Retrieved from "https://en.wikipedia.org/w/index.php?title=Turing_machine&oldid=1089266535"
|
Successive Matrix Squaring Algorithm for Computing the Generalized Inverse
Xiaoji Liu, Yonghui Qin, "Successive Matrix Squaring Algorithm for Computing the Generalized Inverse ", Journal of Applied Mathematics, vol. 2012, Article ID 262034, 12 pages, 2012. https://doi.org/10.1155/2012/262034
Xiaoji Liu1,2 and Yonghui Qin1
1College of Science, Guangxi University for Nationalities, Nanning 530006, China
2Guangxi Key Laboratory of Hybrid Computational and IC Design Analysis, Nanning 530006, China
Academic Editor: J. Biazar
We investigate successive matrix squaring (SMS) algorithms for computing the generalized inverse of a given matrix .
Throughout this paper, the symbol denotes a set of all complex matrices. Let , and the symbols , and stand for the range, the null space, the spectrum of matrix , and the matrix norm, respectively.
A matrix is called a -inverse of matrix if holds. The symbols , , and denote, respectively, the Moore-Penrose inverse, the index, and the Drazin inverse of , and, obviously, (see [1] for details). Let , , and and , and suppose there exists a unique matrix such that ; then is called the -inverse of with the prescribed range and null space, denoted by .
It is well known [1] that the generalized inverse of a given matrix with prescribed range and null space is very important in many branches of mathematics, with applications such as the stable approximation of ill-posed problems, linear and nonlinear problems involving rank-deficient generalized inverses, and applications in statistics [2]. In particular, the generalized inverse plays an important role in iterative methods for solving nonlinear equations [1, 2].
In recent years, successive matrix squaring algorithms have been investigated for computing the generalized inverse of a given matrix [3–7]. In [3], the authors exhibit a deterministic iterative algorithm for linear system solution and matrix inversion based on a repeated matrix squaring scheme. Wei derives a successive matrix squaring (SMS) algorithm to approximate the Drazin inverse in [4]. Wei et al. in [5] derive an SMS algorithm to approximate the weighted generalized inverse , which can be expressed in the form of successive squaring of a composite matrix . Stanimirović and Cvetković-Ilić derive an SMS algorithm to approximate an outer generalized inverse with prescribed range and null space of a given matrix in [6]. In [7], the authors introduce a new algorithm based on the SMS method; this algorithm uses the strategy of -displacement rank in order to find various outer inverses with prescribed ranges and null spaces of a square Toeplitz matrix.
In this paper, based on [3–5], we investigate successive matrix squaring algorithms for computing the generalized inverse of a matrix in Section 2 and also give a numerical example for illustrating our results in Section 3.
The following lemma shows that the generalized inverse is unique.
Lemma 1.1 (see [1, Theorem 2.14]). Let with rank , let be a subspace of of dimension , and let be a subspace of of dimension . Then, has a -inverse such that and if and only if in which case is unique.
The following notions are stated for Banach spaces, but they remain true in finite-dimensional spaces. Throughout this paper, let denote the Banach space and let stand for the set of all bounded linear operators from to ; in particular, .
In the following, we state two lemmas which are given for Banach spaces but can also be used in the finite-dimensional setting.
Lemma 1.2 (see [8, Section 4]). Let and and , respectively, closed subspaces of and . Then the following statements are equivalent:(i)has a -inverse such that and ,(ii) is a complemented subspace of , is invertible and .
Lemma 1.3 (see [9, Section 3]). Suppose that the conditions of Lemma 1.2 are satisfied. If we take , then holds and has the following matrix form: where is invertible. Moreover, has the matrix following form:
From (1.5), we obtain the following projections (see [9]):
In this section, we consider successive matrix squaring (SMS) algorithms for computing the generalized inverse .
Let and the sequence in , and define the iterative form as follows ([10, Theorem 2.2] for computing the generalized inverse in the infinite-dimensional case): In [10], the authors proved that the iteration (2.1) converges to the generalized inverse if and only if , where and (for the proof, see [11] and [10, Theorem 2.1] when ).
In the following, we give the algorithm for computing the generalized inverse of a matrix .
Let and . It is not difficult to see that the above fact can be written as follows: From (2.2) and letting , we have
By (2.3), the iteration (2.1) equals the upper-right block of the matrix . We therefore define the new iterative form as follows:
From the new iterative form (2.4), we arrive at
Assume that , and by (2.5), we have
By (2.4)–(2.6), we have Algorithm 1.
Input: the initial value matrices and the accuracy ;
Output: the algorithm returns the matrix, that is, ;
Begin: assign the initial value matrix to the matrix , that is, ;
Assign to the matrix , that is, ;
Compute the matrix , that is, ;
Compute the error between and , that is, ;
Check whether is smaller than ,
that is, while , do ;
the loop body is: ;
End the loop.
Multiply the matrix by and assign the result to , that is, ;
End the algorithm.
SMS algorithm for computing the generalized inverse .
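For the special case of the Moore-Penrose inverse, the SMS idea of embedding a linear iteration in a block matrix and squaring it repeatedly can be sketched with NumPy as follows; the choices of the scaling factor beta and the squaring count m are illustrative, and the sketch follows the general SMS construction rather than the exact algorithm above:

```python
import numpy as np

def sms_pinv(A, m=40):
    """Successive matrix squaring sketch for the Moore-Penrose inverse.

    Embed the linear iteration Y_{k+1} = P Y_k + Q, with P = I - beta*A^T A
    and Q = beta*A^T, in the block matrix T = [[P, Q], [0, I]]; then
    T^(2^m), obtained by squaring T m times, carries Y_{2^m} in its
    upper-right block, which converges to pinv(A).
    """
    A = np.asarray(A, dtype=float)
    nrows, ncols = A.shape
    beta = 1.0 / np.trace(A.T @ A)        # 0 < beta <= 1/sigma_max^2
    P = np.eye(ncols) - beta * (A.T @ A)
    Q = beta * A.T
    T = np.block([[P, Q],
                  [np.zeros((nrows, ncols)), np.eye(nrows)]])
    for _ in range(m):
        T = T @ T                         # repeated squaring
    return T[:ncols, ncols:]              # upper-right block approximates pinv(A)
```

Squaring m times reaches iterate 2^m of the underlying linear recursion at the cost of only m matrix multiplications, which is the computational appeal of the SMS scheme.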
From (2.4)–(2.6) and Algorithm 1, we obtain the following result.
Theorem 2.1. Let . Then the sequence converges to the generalized inverse if and only if . In this case, where and
Proof. From the proof in [11] and [10, Theorem 2.1] when , and according to (2.4), (2.5), and (2.6), we easily obtain the first part of the theorem. In the following, we prove only the last part, that is, that the inequality (2.7) holds.
By applying (2.5) and (2.6), we obtain
By the iteration (2.4) and (2.9), we arrive at
The following corollary gives the same result as [6, Theorem 2.3]. It also presents an explicit representation of the generalized inverse, and the sequence (2.4) converges to a -inverse of a given matrix via its full-rank decomposition.
Corollary 2.2. Let be a full-rank decomposition. Then the sequence converges to the -inverse if and only if . In this case, where and
Proof. From Theorem 2.5 and by [6, Theorem 2.3], we have the result.
In the following, we consider the improvement of the iterative form (2.1) (see [11] for computing the Moore-Penrose inverse and the Drazin inverse in the matrix case and [10, Theorem 2.2] for computing the generalized inverse in the infinite-dimensional case): Let be a block matrix; then
By induction if has the following form: then
Similarly to the iterative form (2.4), we also define the new iterative scheme Note that from (2.18)
Let , and by (2.18), and (2.19), we arrive at
From (2.14)–(2.20), we see that to compute the generalized inverse one only needs to compute the block () of the matrix . Similarly to Algorithm 1, we also obtain Algorithm 2.
Input: the matrices and the accuracy tolerance ;
Output: the matrix computed by the algorithm: ;
Compute the product of and and assign its value to , that is, ;
Similarly, repeat the computation for the products and , as above, where .
Compute the product of the matrices and and assign its value to in the same way, that is, ;
Assign the matrix the sum of the matrices , where and , that is, ;
Take the norm of and assign its value to , that is, ;
while do;
Require that the iteration not exceed 500 steps, that is, ; (in fact )
Repeat the following computation for at most 500 steps,
that is, For 1 :
Compute the product of the given matrix and the iteration matrix , and assign its value to the new matrix , that is, ;
From the iteration , obtain the new matrix , add its value to , and assign the sum of and to the matrix , that is, ; after this, return to step .
End the For loop, that is, end.
End the While loop, that is, end.
Analogously to Theorem 2.5, by Algorithm 2 and the sequence (2.18), we also have the following theorem.
Proof. Similarly to the proof of [10, Theorem 2.1], we can prove the first part of this theorem. Analogously to the proof of Theorem 2.5, we complete the proof.
In the following, we extend the sequence (2.4) to By (2.26) and by induction, we have
Assuming that , we easily have
Similarly, from (2.23) and (2.25), we obtain the following result.
Proof. From (2.25), and using instead of in Theorem 2.1, we easily obtain that converges to the generalized inverse if and only if . Similarly to formula (2.29), we obtain that where , , and are the same as in Theorem 2.5.
In the following, we consider the dually iterative form.
Let and the sequence in , and we can define the iterative form as follows (see [11] and [10, Theorem 2.3]):
Let and . It is not difficult to see that the above fact can be written as follows:
From iterative forms (2.26) and (2.29), we have the following theorem.
Similarly to Corollary 2.2, we have the result as follows.
Corollary 2.6. Let , a full-rank decomposition. Then the sequence converges to the -inverse if and only if . In this case where and
In the following, we consider the improvement of the iterative form (2.29) (see [11] for computing the Moore-Penrose inverse and the Drazin inverse of the matrix case and [10, Theorem 2.3] for computing the generalized inverse in the infinite space case):
It is similar to (2.14), and we have
Analogously to Theorem 2.5, by Algorithm 2 and (2.36), we obtain the following theorem.
Dually, we give the SMS algorithm for computing the generalized inverse , analogous to the iterative form (2.23), and omit the proofs:
Similarly to Theorem 2.4, from (2.35) and (2.39), we obtain the following result.
Here is an example to verify the effectiveness of the SMS method.
Example 3.1. Let . Let ; .
Take By (2.2), we have From [10, 12], we easily have the generalized inverse in
Then, from Algorithm 1, we obtain
But by the iteration (2.1), we get
From the data in (3.5) and (3.6), we obtain Table 1.
Method Iteration (2.1) Algorithm 1
From (3.5), (3.6), and Table 1, we see that Algorithm 1 needs only two steps, whereas the iterative form (2.1) needs five.
X. Liu is supported by the National Natural Science Foundation of China (11061005), College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning, China, and Y. Qin is supported by the Innovation Project of Guangxi University for Nationalities (gxun-chx2011075), College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning, China.
A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, vol. 15 of CMS Books in Mathematics, Springer, New York, NY, USA, 2nd edition, 2003.
A. J. Getson and F. C. Hsuan, {2}-Inverses and Their Statistical Application, vol. 47 of Lecture Notes in Statistics, Springer, New York, NY, USA, 1988.
B. Codenotti, M. Leoncini, and G. Resta, “Repeated matrix squaring for the parallel solution of linear systems,” in PARLE '92 Parallel Architectures and Languages Europe, vol. 605 of Lecture Notes in Computer Science, pp. 725–732, Springer, Berlin, Germany, 1992.
Y. Wei, “Successive matrix squaring algorithm for computing the Drazin inverse,” Applied Mathematics and Computation, vol. 108, no. 2-3, pp. 67–75, 2000.
Y. Wei, H. Wu, and J. Wei, “Successive matrix squaring algorithm for parallel computing the weighted generalized inverse {A}_{M,N}^{+},” Applied Mathematics and Computation, vol. 116, no. 3, pp. 289–296, 2000.
P. S. Stanimirović and D. S. Cvetković-Ilić, “Successive matrix squaring algorithm for computing outer inverses,” Applied Mathematics and Computation, vol. 203, no. 1, pp. 19–29, 2008.
M. Miladinović, S. Miljković, and P. Stanimirović, “Modified SMS method for computing outer inverses of Toeplitz matrices,” Applied Mathematics and Computation, vol. 218, no. 7, pp. 3131–3143, 2011.
D. S. Djordjević and P. S. Stanimirović, “On the generalized Drazin inverse and generalized resolvent,” Czechoslovak Mathematical Journal, vol. 51(126), no. 3, pp. 617–634, 2001.
D. S. Djordjević and P. S. Stanimirović, “Splittings of operators and generalized inverses,” Publicationes Mathematicae Debrecen, vol. 59, no. 1-2, pp. 147–159, 2001.
X. Liu, Y. Yu, and C. Hu, “The iterative methods for computing the generalized inverse {A}_{T,S}^{\left(2\right)} of the bounded linear operator between Banach spaces,” Applied Mathematics and Computation, vol. 214, no. 2, pp. 391–410, 2009.
X.-Z. Chen and R. E. Hartwig, “The hyperpower iteration revisited,” Linear Algebra and Its Applications, vol. 233, pp. 207–229, 1996.
B. Zheng and G. Wang, “Representation and approximation for generalized inverse {A}_{T,S}^{\left(2\right)}: revisited,” Journal of Applied Mathematics & Computing, vol. 22, no. 3, pp. 225–240, 2006.
Copyright © 2012 Xiaoji Liu and Yonghui Qin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
EUDML | Heegaard splittings of graph manifolds.
Heegaard splittings of graph manifolds.
Schultens, Jennifer. "Heegaard splittings of graph manifolds.." Geometry & Topology 8 (2004): 831-876. <http://eudml.org/doc/124194>.
|
EUDML | On the universal {C}^{*}-algebra generated by a partial isometry.
On the universal {C}^{*}-algebra generated by a partial isometry.
Kandelaki, T. "On the universal {C}^{*}-algebra generated by a partial isometry." Georgian Mathematical Journal 5.4 (1998): 333-340. <http://eudml.org/doc/48238>.
Keywords: involutive algebra, maximal {C}^{*}-norm, Cuntz algebras, homotopical interpretation of KK-groups.
|
Cylinder - Simple English Wikipedia, the free encyclopedia
surface formed by the points at a fixed distance from a given straight line called the axis of the cylinder; one of the most basic curvilinear geometric shapes
A cylinder is one of the most basic curved three-dimensional geometric shapes, with the surface formed by the points at a fixed distance from a given line segment, known as the axis of the cylinder. The shape can be thought of as a circular prism. Both the surface and the solid shape created inside can be called a cylinder. The surface area and the volume of a cylinder have been known since ancient times.
In differential geometry, a cylinder is defined more broadly as a ruled surface which is spanned by a one-parameter family of parallel lines. A cylinder whose cross section is an ellipse, parabola, or hyperbola is called an elliptic cylinder, parabolic cylinder, or hyperbolic cylinder respectively.
In common use a cylinder is taken to mean a finite section of a right circular cylinder, i.e., the cylinder with the generating lines perpendicular to the bases, with its ends closed to form two circular surfaces, as in the figure (right). If the cylinder has a radius r and length (height) h, then its volume is given by:
V = πr²h
and its surface area is:
the area of the top (πr²) +
the area of the bottom (πr²) +
the area of the side (2πrh).
Therefore, without the top or bottom (lateral area), the surface area is:
A = 2πrh.
With the top and bottom, the surface area is:
A = 2πr² + 2πrh = 2πr(r + h).
For a given volume, the cylinder with the smallest surface area has h = 2r. For a given surface area, the cylinder with the largest volume has h = 2r, i.e. the cylinder fits in a cube (height = diameter).
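These closed-form expressions translate directly into code; a minimal Python sketch (the function names are illustrative):

```python
import math

def cylinder_volume(r, h):
    """Volume of a right circular cylinder: V = pi * r**2 * h."""
    return math.pi * r ** 2 * h

def cylinder_surface_area(r, h, closed=True):
    """Lateral area 2*pi*r*h, plus the two circular ends when closed."""
    lateral = 2 * math.pi * r * h
    return lateral + 2 * math.pi * r ** 2 if closed else lateral
```

For a fixed volume, scanning over h with these functions confirms the statement above that the surface area is smallest when h = 2r (height equal to diameter).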
Consider a right circular cylinder with a height of h units and a base of radius r units, with the coordinate axes chosen so that the origin is at the center of one base and the height is measured along the positive x-axis. A plane section at a distance of x units from the origin has an area of A(x) square units, where
{\displaystyle A(x)=\pi r^{2}}
An element of volume is a right cylinder of base area A(wᵢ) square units and a thickness of Δᵢx units. Thus if V cubic units is the volume of the right circular cylinder, by Riemann sums,
{\displaystyle \mathrm {Volume\;of\;cylinder} =\lim _{\|\Delta \|\to 0}\sum _{i=1}^{n}A(w_{i})\Delta _{i}x}
{\displaystyle =\int _{0}^{h}A(x)\,dx}
{\displaystyle =\int _{0}^{h}\pi r^{2}\,dx}
{\displaystyle =\pi \,r^{2}\,h\,}
Using cylindrical coordinates, the volume can be calculated by integration over the cylinder:
{\displaystyle =\int _{0}^{h}\int _{0}^{2\pi }\int _{0}^{r}s\,\,ds\,d\phi \,dz}
{\displaystyle =\pi \,r^{2}\,h\,}
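The Riemann-sum argument can also be checked numerically: since every cross-section of the cylinder has the same area πr², summing the slab volumes A(wᵢ)Δᵢx recovers πr²h. A small sketch (function name is illustrative):

```python
import math

def cylinder_volume_riemann(r, h, n=1000):
    """Sum n slabs of thickness h/n, each with cross-sectional area pi*r**2."""
    dx = h / n
    area = math.pi * r ** 2      # A(x) is constant along the axis
    return sum(area * dx for _ in range(n))
```

Because A(x) is constant, the sum equals πr²h up to floating-point rounding for any n.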
Cylindric sectionEdit
Cylindric sections are the intersections of cylinders with planes. For a right circular cylinder, there are four possibilities. A plane tangent to the cylinder, meets the cylinder in a single straight line. Moved while parallel to itself, the plane either does not intersect the cylinder or intersects it in two parallel lines. All other planes intersect the cylinder in an ellipse or, when they are perpendicular to the axis of the cylinder, in a circle.[1]
Other types of cylindersEdit
An elliptic cylinder
An elliptic cylinder, or cylindroid, is a quadric surface, with the following equation in Cartesian coordinates:
{\displaystyle \left({\frac {x}{a}}\right)^{2}+\left({\frac {y}{b}}\right)^{2}=1.}
This equation is for an elliptic cylinder, a generalization of the ordinary, circular cylinder (a = b). Even more general is the generalized cylinder: the cross-section can be any curve.
The cylinder is a degenerate quadric because at least one of the coordinates (in this case z) does not appear in the equation.
An oblique cylinder has the top and bottom surfaces displaced from one another.
There are other more unusual types of cylinders. These are the imaginary elliptic cylinders:
{\displaystyle \left({\frac {x}{a}}\right)^{2}+\left({\frac {y}{b}}\right)^{2}=-1}
the hyperbolic cylinder:
{\displaystyle \left({\frac {x}{a}}\right)^{2}-\left({\frac {y}{b}}\right)^{2}=1}
and the parabolic cylinder:
{\displaystyle x^{2}+2ay=0.\,}
Wikisource has the text of the 1911 Encyclopædia Britannica article Cylinder.
Spinning Cylinder at Math Is Fun
Volume of a cylinder Interactive animation at Math Open Reference
Cut a Cylinder Interactive demonstrations of the intersection of a plane and a cylinder
Cylinder Calculator 3D Specify only two cylinder values and all others are immediately calculated, plus interactive 3D visualization in fullscreen.
|
Sothic cycle – Knowpia
The Sothic cycle or Canicular period is a period of 1,461 Egyptian civil years of 365 days each or 1,460 Julian years averaging 365¼ days each. During a Sothic cycle, the 365-day year loses enough time that the start of its year once again coincides with the heliacal rising of the star Sirius (Ancient Egyptian: Spdt or Sopdet, 'Triangle'; Greek: Σῶθις, Sō̂this) on 19 July in the Julian calendar.[1][a] It is an important aspect of Egyptology, particularly with regard to reconstructions of the Egyptian calendar and its history. Astronomical records of this displacement may have been responsible for the later establishment of the more accurate Julian and Alexandrian calendars.
Sirius (bottom) and Orion (right). The Winter Triangle is formed from the three brightest stars in the northern winter sky: Sirius, Betelgeuse (top right), and Procyon (top left).
Sirius as the brightest star in the constellation Canis Major as observed from the Earth (lines added for clarity).
The ancient Egyptian civil year, its holidays, and religious records reflect its apparent establishment at a point when the return of the bright star Sirius to the night sky was considered to herald the annual flooding of the Nile.[2] However, because the civil calendar was exactly 365 days long and did not incorporate leap years until 22 BCE, its months "wandered" backwards through the solar year at the rate of about one day in every four years. This almost exactly corresponded to its displacement against the Sothic year as well. (The Sothic year is about a minute longer than a Julian year.)[2] The sidereal year of 365.25636 days is only valid for stars on the ecliptic (the apparent path of the Sun across the sky) and having no proper motion, whereas Sirius's displacement ~40° below the ecliptic, its proper motion, and the wobbling of the celestial equator cause the period between its heliacal risings to be almost exactly 365.25 days long instead. This steady loss of one relative day every four years over the course of the 365-day calendar meant that the "wandering" day would return to its original place relative to the solar and Sothic year after precisely 1461 Egyptian civil years or 1460 Julian years.[1]
This calendar cycle was well known in antiquity. Censorinus described it in his book De Die Natale, in CE 238, and stated that the cycle had renewed 100 years earlier on the 12th of August. In the ninth century, Syncellus epitomized the Sothic Cycle in the "Old Egyptian Chronicle." Isaac Cullimore, an early Egyptologist and member of the Royal Society, published a discourse on it in 1833 in which he was the first to suggest that Censorinus had fudged the terminus date, and that it was more likely to fall in CE 136. He also computed the likely date of its invention as being around 1600 BCE.
In 1904, seven decades after Cullimore, Eduard Meyer carefully combed known Egyptian inscriptions and written materials to find any mention of the calendar dates when Sirius rose at dawn. He found six of them, on which the dates of much of conventional Egyptian chronology are based. A heliacal rise of Sirius was recorded by Censorinus as having happened on the Egyptian New Year's Day between 139 CE and 142 CE.[3]
The record itself actually refers to 21 July 140 CE, but astronomical calculation definitely dates the helical rising at 20 July 139 CE, Julian. This correlates the Egyptian calendar to the Julian calendar. A Julian leap day occurs in 140 CE, and so the new year on 1 Thoth is 20 July in 139 CE but it is 19 July for 140–142 CE. Thus Meyer was able to compare the Egyptian civil calendar date on which Sirius was observed rising helically to the Julian calendar date on which Sirius ought to have risen, count the number of intercalary days needed, and determine how many years were between the beginning of a cycle and the observation.
To calculate a date astronomically, one also needs to know the place of observation, since the latitude of the observation changes the day when the heliacal rising of Sirius can be seen, and mislocating an observation can potentially throw off the resulting chronology by several decades.[3] Official observations are known to have been made at Heliopolis (or Memphis, near Cairo), Thebes, and Elephantine (near Aswan),[4] with the rising of Sirius observed at Cairo about 8 days after it is seen at Aswan.[4]
Meyer concluded that the Egyptian civil calendar was created in 4241 BCE.[5][6] Recent scholarship, however, has discredited that claim. Most scholars either move the observation upon which he based this forward by one cycle of Sirius, to 19 July 2781 BCE, or reject the assumption that the document on which Meyer relied indicates a rise of Sirius at all.[7]: 52
Chronological interpretationEdit
Three specific observations of the heliacal rise of Sirius are extremely important for Egyptian chronology. The first is the aforementioned ivory tablet from the reign of Djer which supposedly indicates the beginning of a Sothic cycle, the rising of Sirius on the same day as the new year. If this does indicate the beginning of a Sothic cycle, it must date to about 17 July 2773 BCE.[7]: 51 However, this date is too late for Djer's reign, so many scholars believe that it indicates a correlation between the rising of Sirius and the Egyptian lunar calendar, instead of the solar Egyptian civil calendar, which would render the tablet essentially devoid of chronological value.[7]: 52
Gautschy et al. (2017) claimed that a newly discovered Sothis date from the Old Kingdom and a subsequent astronomic study confirms the Sothic cycle model.[8]
The second observation is clearly a reference to a heliacal rising, and is believed to date to the seventh year of Senusret III. This observation was almost certainly made at Itj-Tawy, the Twelfth Dynasty capital, which would date the Twelfth Dynasty from 1963 to 1786 BCE.[3] The Ramses or Turin Papyrus Canon says 213 years (1991–1778 BCE), Parker reduces it to 206 years (1991–1785 BCE), based on 17 July 1872 BCE as the Sothic date (120th year of 12th dynasty, a drift of 30 leap days). Prior to Parker's investigation of lunar dates, the 12th dynasty was placed as 213 years of 2007–1794 BCE interpreting the date 21 July 1888 BCE as the 120th year, and then for 2003–1790 BCE interpreting the date 20 July 1884 BCE as the 120th year.
The third observation was in the reign of Amenhotep I, and, assuming it was made in Thebes, dates his reign between 1525 and 1504 BCE. If made in Memphis, Heliopolis, or some other Delta site instead, as a minority of scholars still argue, the entire chronology of the 18th Dynasty needs to be extended some 20 years.[7]: 202
Observational procedure and precessionEdit
The Sothic cycle is a specific example of two cycles of differing length interacting to cycle together, here called a tertiary cycle. This is mathematically defined by the formula
{\displaystyle {\frac {1}{a}}-{\frac {1}{b}}={\frac {1}{t}},}
the beat period of the two cycles. In the case of the Sothic cycle the two cycles are the Egyptian civil year and the Sothic year.
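The length of the Sothic cycle follows from the difference of the two cycles' frequencies, and the arithmetic is easy to check directly (plain arithmetic, no external data):

```python
civil = 365.0     # Egyptian civil year, in days
sothic = 365.25   # Sothic year, in days

# Beat period: how long until the two cycles realign.
t_days = 1.0 / (1.0 / civil - 1.0 / sothic)

print(t_days)           # about 533,265 days
print(t_days / civil)   # about 1461 civil years
print(t_days / sothic)  # about 1460 years of 365.25 days
```

This reproduces the figures in the opening paragraph: 1,461 Egyptian civil years, or 1,460 Julian years.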
The Sothic year is the length of time for the star Sirius to visually return to the same position in relation to the sun. Star years measured in this way vary due to axial precession,[9] the movement of the Earth's axis in relation to the sun.
The length of time for a star to make a yearly path can be marked when it rises to a defined altitude above a local horizon at the time of sunrise. This altitude does not have to be the altitude of first possible visibility, nor the exact position observed. Throughout the year the star will rise to whatever altitude was chosen near the horizon approximately four minutes earlier each successive sunrise. Eventually the star will return to the same relative location at sunrise, regardless of the altitude chosen. This length of time can be called an observational year. Stars that reside close to the ecliptic or the ecliptic meridian will – on average – exhibit observational years close to the sidereal year of 365.2564 days. The ecliptic and the meridian cut the sky into four quadrants. The axis of the earth wobbles around slowly moving the observer and changing the observation of the event. If the axis swings the observer closer to the event its observational year will be shortened. Likewise, the observational year can be lengthened when the axis swings away from the observer. This depends upon which quadrant of the sky the phenomenon is observed.
The Sothic year is remarkable because its average duration happened to have been nearly exactly 365.25 days, in the early 4th millennium BCE[10] before the unification of Egypt. The slow rate of change from this value is also of note. If observations and records could have been maintained during predynastic times the Sothic rise would optimally return to the same calendar day after 1461 calendar years. This value would drop to about 1456 calendar years by the Middle Kingdom. The value 1461 could also be maintained if the date of the Sothic rise were artificially maintained by moving the feast in celebration of this event one day every fourth year instead of rarely adjusting it according to observation.
It has been noticed, and the Sothic cycle confirms, that Sirius does not shift with respect to the equinoxes in the way other stars do, a shift otherwise produced by the precession of the equinoxes:
Sirius remains about the same distance from the equinoxes – and so from the solstices – throughout these many centuries, despite precession. — J.Z. Buchwald (2003)[11]
For the same reason, the heliacal rising or zenith of Sirius does not slip through the calendar at the precession rate of about one day per 71.6 years as other stars do, but much slower.[12] This remarkable stability within the solar year may be one reason that the Egyptians used it as a basis for their calendar. The coincidence of a heliacal rising of Sirius and the New Year reported by Censorinus occurred about 20 July, that is a month after the summer solstice.
Problems and criticismsEdit
Determining the date of a heliacal rise of Sirius has been shown to be difficult, especially considering the need to know the exact latitude of the observation.[3] Another problem is that because the Egyptian calendar loses one day every four years, a heliacal rise will take place on the same day for four years in a row, and any observation of that rise can date to any of those four years, making the observation imprecise.[3]
A number of criticisms have been levelled against the reliability of dating by the Sothic cycle. Some are serious enough to be considered problematic. Firstly, none of the astronomical observations have dates that mention the specific pharaoh in whose reign they were observed, forcing Egyptologists to supply that information on the basis of a certain amount of informed speculation. Secondly, there is no information regarding the nature of the civil calendar throughout the course of Egyptian history, forcing Egyptologists to assume that it existed unchanged for thousands of years; the Egyptians would only have needed to carry out one calendar reform in a few thousand years for these calculations to be worthless. Other criticisms are not considered as problematic, e.g. there is no extant mention of the Sothic cycle in ancient Egyptian writing, which may simply be a result of it either being so obvious to Egyptians that it didn't merit mention, or to relevant texts being destroyed over time or still awaiting discovery.
Marc Van de Mieroop, in his discussion of chronology and dating, does not mention the Sothic cycle at all, and asserts that the bulk of historians nowadays would consider that it is not possible to put forward exact dates earlier than the 8th century BCE.[13]
Some have recently claimed that the Theran eruption marks the beginning of the Eighteenth Dynasty, due to Theran ash and pumice discovered in the ruins of Avaris, in layers that mark the end of the Hyksos era.[citation needed] Because the evidence of dendrochronologists indicates the eruption took place in 1626 BCE, this has been taken to indicate that dating by the Sothic cycle is off by 50–80 years at the outset of the 18th Dynasty.[citation needed] Claims that the Thera eruption is described on the Tempest Stele of Ahmose I[14] have been disputed by writers such as Peter James.[15]
^ The date slowly varies within the Gregorian calendar, moving about three days later every four centuries. It presently occurs around the beginning of August.
^ a b "Ancient Egyptian Civil Calendar", La Via, retrieved 8 February 2017 .
^ a b c d e Kitchen, K.A. (October 1991). "The chronology of Ancient Egypt". World Archaeology. 23 (2): 205.
^ a b Tetley, M. Christine (2014). The Reconstructed Chronology of the Egyptian Kings. Vol. I. p. 43. Archived from the original on 2017-02-11.
^ Meyer, Eduard, Aegyptische Chronologie, Berlin, Verlag der Königliche Akadamie der Wissenschaften (1904), Ch. I, p.41
^ Breasted, James Henry, A History of the Ancient Egyptians, New York, Charles Scribner's Sons (1908), Ch. II, p.15 ("[T]he calendar year of 365 days was introduced in 4241 B.C., the earliest fixed date in the history of the world as known to us."), citing Meyer.
^ a b c d Grimal, Nicolas (1988). A History of Ancient Egypt. Librairie Arthéme Fayard.
^ Gautschy, Rita; Habicht, Michael E.; Galassi, Francesco M.; Rutica, Daniela; Rühli, Frank J.; Hannig, Rainer (17 Nov 2017). "A new astronomically-based chronological model for the Egyptian Old Kingdom". Journal of Egyptian History. 10 (2): 69–108. doi:10.1163/18741665-12340035. ISSN 1874-1657. Retrieved 2021-01-24.
^ Ingham, M.F. (1969). "The length of the Sothic cycle". The Journal of Egyptian Archaeology. 55: 36–40.
^ SkyCharts III[full citation needed]
^ Buchwald, Jed Z. (2003). "Egyptian stars under Paris skies" (PDF). Engineering and Science. Caltech. Vol. 66, no. 4. Pasadena, CA: California Institute of Technology. pp. 20–31.
^ One day per 120 years, see Winlock, H. (1940). "Origin of the Ancient Egyptian calendar". Proceedings of the American Philosophical Society. 83: 447–464.
^ van de Mieroop, Marc (2015). A History of the Ancient Near East, ca. 3000–323 BC. Oxford, UK: Wiley-Blackwell. ISBN 978-1118718162.
^ Ritner, Robert K.; Moeller, Nadine (2014). "The Ahmose 'Tempest Stela', Thera, and comparative chronology". Journal of Near Eastern Studies. 73 (1): 1–19. doi:10.1086/675069. JSTOR 10.1086/675069.
^ James, Peter (1991). Centuries of Darkness. London, UK.
"Sothic Period" . Encyclopædia Britannica (11th ed.). 1911.
|
Structure of Atom Chemistry NEET Practice Questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, and PDF solved with answers
Which of the following sets of quantum numbers (n, l, m, s) represents an impossible arrangement?
1. n = 3, l = 2, m = −2, s = +1/2
2. n = 4, l = 0, m = 0, s = −1/2
Subtopic: Quantum Numbers & Schrodinger Wave Equation |
How many unpaired electrons are present in the Ni2+ cation (atomic number = 28)?
Subtopic: AUFBAU Principle |
The number of unpaired electrons in 1s² 2s² 2p³ is:
Subtopic: Pauli's Exclusion Principle & Hund's Rule |
The quantum numbers (n, l, m, s) of four electrons are given below.
(1) Electron 1: n = 3, l = 0, m = 0, s = −1/2
(2) Electron 2: n = 4, l = 0, m = 0, s = +1/2
The correct order of decreasing energy of these electrons is -
1. Electron 3 > Electron 1 > Electron 4> Electron 2
2. Electron 4> Electron 2 > Electron 3 > Electron 1
3. Electron 3 > Electron 2 > Electron 4 > Electron 1
Which type of radiation is not emitted by the electronic structure of atoms:
Subtopic: EMT Radiation |
The absorption line of lowest frequency in the hydrogen spectrum corresponds to the transition:
1. n = 1 to n = 2
Subtopic: Hydrogen Spectra |
If the uncertainty in the position of an electron is zero, the uncertainty in its momentum will be:
(1) < h/4π
(2) > h/4π
Subtopic: Heisenberg Uncertainity Principle |
The maximum number of atomic orbitals associated with a principal quantum number 5 is-
Subtopic: Quantum Numbers & Schrodinger Wave Equation | Shell/Subshell |
The de Broglie wavelength of the electron of a He+ ion is 3.329 Å. If the photon emitted upon de-excitation of this He+ ion is made to hit an H atom in its ground state so as to liberate the electron from it, what will be the de Broglie wavelength of the photoelectron?
1. 0.1518 Å
Subtopic: De Broglie Equation |
The velocity of a helium atom at 300 K is 2.40 × 10² metres per second. What is its wavelength? (mass number of helium is 4)
3. 803 Å
4. 8000 Å
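For questions of this kind, the de Broglie relation λ = h/(mv) can be evaluated directly. The rounded constants below (Planck constant and atomic mass unit) are standard values chosen for illustration, independent of the partially garbled answer options above:

```python
h = 6.626e-34   # Planck constant, J*s
u = 1.661e-27   # atomic mass unit, kg
m = 4 * u       # helium atom, mass number 4
v = 2.40e2      # speed, m/s

lam = h / (m * v)   # de Broglie wavelength, metres
print(lam * 1e10)   # wavelength in angstroms, about 4.16
```

The same three-line calculation, with a different mass and speed, settles any de Broglie wavelength question in this set.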
|
A Mechanistic Force Model for Contour Turning | J. Manuf. Sci. Eng. | ASME Digital Collection
Rohit G. Reddy
Contributed by the Manufacturing Engineering Division for publication in the JOURNAL OF MANUFACTURING SCIENCE AND ENGINEERING. Manuscript received December 1997; revised October 1999. Associate Technical Editor: M. Elbestawi.
Reddy , R. G., Kapoor , S. G., and DeVor, R. E. (October 1, 1999). "A Mechanistic Force Model for Contour Turning ." ASME. J. Manuf. Sci. Eng. August 2000; 122(3): 398–405. https://doi.org/10.1115/1.1285900
In this paper a mechanistic force model for contour turning is presented. Analytical solutions are developed for evaluation of the mechanistic parameters (chip load, chip thickness, chip width, effective lead angle), as a function of the process parameters (tool geometry, workpiece geometry, and the tool path). The effect of these parameter variations on the cutting forces is analyzed. Simple straight turning tests are employed for model calibration. A workpiece with convex and concave contours is employed for model validation. Model simulations are found to match well with the experimental results. The analytical model is utilized to investigate the effect of process variables with a 2⁶ full factorial design. [S1087-1357(00)01602-6]
machining, cutting, machine tools, calibration
Calibration, Cutting, Stress, Machining, Turning, Model validation, Simulation
Basic Mechanics of the Metal-Cutting Process
J. Appl. Mech. Trans. ASME
The Mechanics of Three-Dimensional Cutting Operation
Klamecki, B. E., 1973, Incipient Chip Formation in Metal Cutting—A Three Dimensional Finite Element Analysis, Ph.D. thesis, University of Illinois at Urbana-Champaign.
Lajczok, M. R., 1980, A Study of Some Aspects of Metal Machining Using the Finite Element Method, Ph.D. thesis, North Carolina State University.
Sabberwal, A. J. P., 1960, “Chip Section and Cutting Force During the Milling Operation,” in Ann. CIRP, pp. 197–203.
The Prediction of Cutting Forces in End Milling With Application to Cornering Cuts
Subramani, G., Suvada, R., Kapoor, S. G., DeVor, R. E., and Meingast, W., 1987, “A Model for the Prediction of Force System for Cylinder Boring Process,” in Proceedings of NAMRC 15, pp. 439–446.
Kuhl, M. J., 1987, “The Prediction of Cutting Forces and Surface Accuracy for the Turning Process,” Master’s thesis, University of Illinois at Urbana-Champaign.
A Dynamic Model of the Cutting Force System in the Turning Process
Mont. Control Manuf. Proc. ASME PED
A Dual-Mechanism Approach to the Prediction of Machining Forces, Part 2: Calibration and Validation
“The Importance of Considering Size Effect Along the Cutting Edge in Predicting the Effective Lead Angle for Turning,” in Trans. NAMRI SME
“Turning With a Round-Nosed Form Tool,” in Proc. NAMRC
Generic Simulation Approach for Multi-Axis Machining, Part 2: Model Calibration and Feed Rate Scheduling
A Stability Solution for the Axial Contour-Turning Process
Modeling and Simulation of Process and Structure Interactions Considering Turning Operations
|
Revision as of 20:54, 12 May 2020 by John Rumm (→Worked Example)
Pipework in the home is used to deliver gas from the gas meter to each of your appliances. The manufacturer of each appliance will specify its maximum rate of gas consumption. The pipework must be large enough to supply this rate of consumption to all the appliances at the same time, without excess pressure drop, for the appliances to work correctly and safely.
Your gas meter and the main gas rate governor should be set to deliver gas at a dynamic pressure of 21 mbar (note that the dynamic pressure is the pressure measured while gas is flowing; the static pressure may be a few mbar higher).
Bore of the pipe (usually specified in mm diameter)
The flow rate of the gas in cubic metres per hour (m³/h)
The flow resistance of a pipe will be higher at any point where it changes direction. The easiest way to assess the higher pressure drops caused by direction changes is to adjust the effective length of the pipe. The effective length is the actual length of the pipe, with an additional 0.5 m added for each 90 degree elbow, and 0.3 m added for each bend or swept elbow.
If we take the following example layout:
For each section of pipe, we need to add up the total gas rate that may need to be supplied at that point. It's often easier to work back from the end points of the installation. We can include an educated guess of the pipe size likely to be required at this stage.
In this application we can assume that the pressure drop in a pipe is proportional to its length, all other things being equal. So if we take pipe section G and look at Table 1, we can see that a 3 m pipe can discharge 2.9 m³/h using the full 1 mbar pressure drop available. We can therefore work out the actual pressure drop by scaling that 1 mbar by the actual gas rate and pipe length, so we get:
{\displaystyle {\frac {1\,{\text{mbar}}}{3\,{\text{m}}\times 2.9\,{\text{m}}^{3}/{\text{h}}}}\times 2\,{\text{m}}\times 0.6\,{\text{m}}^{3}/{\text{h}}\approx 0.14\,{\text{mbar}}}
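The scaling in the worked example can be written as a short helper. This is a minimal sketch that follows the article's simplification of a pressure drop linear in both effective length and gas rate; the table values (3 m, 2.9 m³/h, 1 mbar) are taken from the example above.

```python
def pressure_drop_mbar(table_length_m, table_flow_m3h, table_drop_mbar,
                       actual_length_m, actual_flow_m3h):
    """Scale the tabulated pressure drop linearly in effective pipe
    length and gas rate, as the worked example does."""
    drop_per_m_per_flow = table_drop_mbar / table_length_m / table_flow_m3h
    return drop_per_m_per_flow * actual_length_m * actual_flow_m3h

# Section G: Table 1 says a 3 m pipe of this bore can discharge
# 2.9 m3/h for the full 1 mbar drop; section G is 2 m long and
# carries 0.6 m3/h.
drop = pressure_drop_mbar(3.0, 2.9, 1.0, 2.0, 0.6)
print(round(drop, 2))  # ~0.14 mbar
```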
|
EuDML | On the stability of subgroup actions on certain quasihomogeneous G-varieties.
On the stability of subgroup actions on certain quasihomogeneous G-varieties.
Arzhantsev, Ivan. "On the stability of subgroup actions on certain quasihomogeneous G-varieties." Journal of Lie Theory 10.2 (2000): 345-357. <http://eudml.org/doc/121094>.
@article{Arzhantsev2000,
author = {Arzhantsev, Ivan},
keywords = {quasi-homogeneous variety; stable actions},
title = {On the stability of subgroup actions on certain quasihomogeneous G-varieties.},
AU - Arzhantsev, Ivan
TI - On the stability of subgroup actions on certain quasihomogeneous G-varieties.
KW - quasi-homogeneous variety; stable actions
quasi-homogeneous variety, stable actions
Articles by Arzhantsev
|
EuDML | On θ-generalized closed sets.
On θ-generalized closed sets.
Dontchev, Julian, and Maki, Haruo. "On θ-generalized closed sets." International Journal of Mathematics and Mathematical Sciences 22.2 (1999): 239-249. <http://eudml.org/doc/48306>.
author = {Dontchev, Julian, Maki, Haruo},
keywords = {θ-closure; θ-generalized closed sets; TGO-connected; θ-closure; θ-generalized closed sets},
title = {On θ-generalized closed sets.},
AU - Maki, Haruo
TI - On θ-generalized closed sets.
KW - θ-closure; θ-generalized closed sets; TGO-connected; θ-closure; θ-generalized closed sets
θ-closure, θ-generalized closed sets, TGO-connected, θ-closure, θ-generalized closed sets
{T}_{0}, {T}_{3}
|
Option price and sensitivities by local volatility model, using finite differences - MATLAB optSensByLocalVolFD - MathWorks 日本
Grid containing prices calculated by the finite difference method, returned as a two-dimensional grid of size AssetGridSize-by-TimeGridSize. The number of columns does not have to equal TimeGridSize, because ExerciseDates and ExDividendDates are added to the time grid. PriceGrid(:, end) contains the price for t = 0.
\mathrm{max}\left({S}_{t}-K,0\right)
\mathrm{max}\left(K-{S}_{t},0\right)
\begin{array}{l}{\sigma }_{loc}^{2}\left(K,T\right)=\frac{{\sigma }_{imp}^{2}+2T{\sigma }_{imp}\frac{\partial {\sigma }_{imp}}{\partial T}+2\left(r-d\right)KT{\sigma }_{imp}\frac{\partial {\sigma }_{imp}}{\partial K}}{{\left(1+K{d}_{1}\sqrt{T}\frac{\partial {\sigma }_{imp}}{\partial K}\right)}^{2}+{K}^{2}T{\sigma }_{imp}\left(\frac{{\partial }^{2}{\sigma }_{imp}}{\partial {K}^{2}}-{d}_{1}\sqrt{T}{\left(\frac{\partial {\sigma }_{imp}}{\partial K}\right)}^{2}\right)}\\ {d}_{1}=\frac{\mathrm{ln}\left({S}_{0}/K\right)+\left(\left(r-d\right)+{\sigma }_{imp}^{2}/2\right)T}{{\sigma }_{imp}\sqrt{T}}\end{array}
|
Kernel smoothing function estimate for univariate and bivariate data - MATLAB ksdensity - MathWorks Benelux
Estimate Density
Estimate Density with Boundary Correction
Estimate Cumulative Distribution Function at Specified Values
Plot Estimated Cumulative Density Function for Given Number of Points
Estimate Survivor and Cumulative Hazard for Censored Failure Data
Estimate Inverse Cumulative Distribution Function for Specified Probability Values
Return Bandwidth of Smoothing Window
Plot Kernel Density Estimate of Bivariate Data
Kernel smoothing function estimate for univariate and bivariate data
[f,xi] = ksdensity(x) returns a probability density estimate, f, for the sample data in the vector or two-column matrix x. The estimate is based on a normal kernel function, and is evaluated at equally-spaced points, xi, that cover the range of the data in x. ksdensity estimates the density at 100 points for univariate data, or 900 points for bivariate data.
ksdensity works best with continuously distributed samples.
[f,xi] = ksdensity(x,pts) specifies points (pts) to evaluate f. Here, xi and pts contain identical values.
[f,xi] = ksdensity(___,Name,Value) uses additional options specified by one or more name-value pair arguments in addition to any of the input arguments in the previous syntaxes. For example, you can define the function type ksdensity evaluates, such as probability density, cumulative probability, survivor function, and so on. Or you can specify the bandwidth of the smoothing window.
[f,xi,bw] = ksdensity(___) also returns the bandwidth of the kernel smoothing window, bw. The default bandwidth is the optimal for normal densities.
ksdensity(___) plots the kernel smoothing function estimate.
ksdensity(ax,___) plots the results using axes with the handle, ax, instead of the current axes returned by gca.
Generate a sample data set from a mixture of two normal distributions.
Plot the estimated density.
The density estimate shows the bimodality of the sample.
Generate a nonnegative sample data set from the half-normal distribution.
Estimate pdfs with two different boundary correction methods, log transformation and reflection, by using the 'BoundaryCorrection' name-value pair argument.
Plot the two estimated pdfs.
ksdensity uses a boundary correction method when you specify either positive or bounded support. The default boundary correction method is log transformation. When ksdensity transforms the support back, it introduces the 1/x term in the kernel density estimator. Therefore, the estimate has a peak near x = 0. On the other hand, the reflection method does not cause undesirable peaks near the boundary.
Compute and plot the estimated cdf evaluated at a specified set of values.
ksdensity seems to smooth the cumulative distribution function estimate too much. An estimate with a smaller bandwidth might be closer to the empirical cumulative distribution function.
Return the bandwidth of the smoothing window.
Plot the cumulative distribution function estimate using a smaller bandwidth.
The ksdensity estimate with a smaller bandwidth matches the empirical cumulative distribution function better.
Plot the estimated cdf evaluated at 50 equally spaced points.
Generate sample data from an exponential distribution with mean 3.
Create a logical vector that indicates censoring. Here, observations with lifetimes longer than 10 are censored.
Compute and plot the estimated density function.
Compute and plot the survivor function.
Compute and plot the cumulative hazard function.
Generate a mixture of two normal distributions, and plot the estimated inverse cumulative distribution function at a specified set of probability values.
Generate a mixture of two normal distributions.
Return the bandwidth of the smoothing window for the probability density estimate.
The default bandwidth is optimal for normal densities.
Plot the density using an increased bandwidth value.
A higher bandwidth further smooths the density estimate, which might mask some characteristics of the distribution.
Now, plot the density using a decreased bandwidth value.
A smaller bandwidth smooths the density estimate less, which exaggerates some characteristics of the sample.
Create a two-column vector of points at which to evaluate the density.
Generate a 30-by-2 matrix containing random numbers from a mixture of bivariate normal distributions.
Plot the estimated density of the sample data.
Sample data for which ksdensity returns f values, specified as a column vector or two-column matrix. Use a column vector for univariate data, and a two-column matrix for bivariate data.
Example: [f,xi] = ksdensity(x)
Points at which to evaluate f, specified as a vector or two-column matrix. For univariate data, pts can be a row or column vector. The length of the returned output f is equal to the number of points in pts.
Example: pts = (0:1:25); ksdensity(x,pts);
Axes handle for the figure ksdensity plots to, specified as a handle.
For example, if h is a handle for a figure, then ksdensity can plot to that figure as follows.
Example: ksdensity(h,x)
Example: 'Censoring',cens,'Kernel','triangle','NumPoints',20,'Function','cdf' specifies that ksdensity estimates the cdf by evaluating at 20 equally spaced points that cover the range of the data, using the triangle kernel smoothing function and accounting for the censored-data information in vector cens.
Bandwidth — Bandwidth of the kernel smoothing window
optimal value for normal densities (default) | scalar value | two-element vector
The bandwidth of the kernel-smoothing window, which is a function of the number of points in x, specified as the comma-separated pair consisting of 'Bandwidth' and a scalar value. If the sample data is bivariate, Bandwidth can also be a two-element vector. The default is optimal for estimating normal densities [1], but you might want to choose a larger or smaller value to smooth more or less.
If you specify 'BoundaryCorrection' as 'log' (default) and 'Support' as either 'positive' or a vector [L U], ksdensity converts bounded data to be unbounded by using the log transformation. The value of 'Bandwidth' is on the scale of the transformed values.
Boundary correction method, specified as the comma-separated pair consisting of 'BoundaryCorrection' and 'log' or 'reflection'.
ksdensity converts bounded data x to be unbounded by one of the following transformations. Then, it transforms back to the original bounded scale after density estimation.
For univariate data, if you specify 'Support','positive', then ksdensity applies log(x).
For univariate data, if you specify 'Support',[L U], where L and U are numeric scalars and L < U, then ksdensity applies log((x-L)/(U-x)).
For bivariate data, ksdensity transforms each column of x in the same way as with univariate data.
The value of 'Bandwidth' and the bw output are on the scale of the transformed values.
ksdensity augments bounded data by adding reflected data near the boundaries, then it returns estimates corresponding to the original support. For details, see Reflection Method.
ksdensity applies boundary correction only when you specify 'Support' as a value other than 'unbounded'.
Censoring — Logical vector
vector of 0s (default) | vector of 0s and 1s
Logical vector indicating which entries are censored, specified as the comma-separated pair consisting of 'Censoring' and a vector of binary values. A value of 0 indicates no censoring; a value of 1 indicates that the observation is censored. The default is no censoring. This name-value pair is only valid for univariate data.
Example: 'Censoring',censdata
'pdf' (default) | 'cdf' | 'icdf' | 'survivor' | 'cumhazard'
'pdf' Probability density function.
'cdf' Cumulative distribution function.
Inverse cumulative distribution function. ksdensity computes the estimated inverse cdf of the values in x, and evaluates it at the probability values specified in pi.
This value is valid only for univariate data.
'survivor' Survivor function.
Example: 'Function','icdf'
A kernel function that is a custom or built-in function. Specify the function as a function handle (for example, @myfunction or @normpdf) or as a character vector or string scalar (for example, 'myfunction' or 'normpdf'). The software calls the specified function with one argument that is an array of distances between data values and locations where the density is evaluated. The function must return an array of the same size containing corresponding values of the kernel function.
When 'Function' is 'pdf', the kernel function returns density values. Otherwise, it returns cumulative probability values.
Specifying a custom kernel when 'Function' is 'icdf' returns an error.
For bivariate data, ksdensity applies the same kernel to each dimension.
NumPoints — Number of equally spaced points
100 (default) | scalar value
Number of equally spaced points in xi, specified as the comma-separated pair consisting of 'NumPoints' and a scalar value. This name-value pair is only valid for univariate data.
For example, for a kernel smooth estimate of a specified function at 80 equally spaced points within the range of sample data, input:
'unbounded' (default) | 'positive' | two-element vector, [L U] | two-by-two matrix, [L1 L2; U1 U2]
'unbounded' Default. Allow the density to extend over the whole real line.
'positive' Restrict the density to positive values.
Two-element vector, [L U] Give the finite lower and upper bounds for the support of the density. This option is only valid for univariate sample data.
Two-by-two matrix, [L1 L2; U1 U2] Give the finite lower and upper bounds for the support of the density. The first row contains the lower limits and the second row contains the upper limits. This option is only valid for bivariate sample data.
For bivariate data, 'Support' can be a combination of positive, unbounded, or bounded variables specified as [0 -Inf; Inf Inf] or [0 L; Inf U].
Example: 'Support',[0 10]
PlotFcn — Function used to create kernel density plot
'surf' (default) | 'contour' | 'plot3' | 'surfc'
Function used to create kernel density plot, specified as the comma-separated pair consisting of 'PlotFcn' and one of the following.
'surf' 3-D shaded surface plot, created using surf
'contour' Contour plot, created using contour
'plot3' 3-D line plot, created using plot3
'surfc' Contour plot under a 3-D shaded surface plot, created using surfc
This name-value pair is only valid for bivariate sample data.
Example: 'PlotFcn','contour'
Estimated function values, returned as a vector whose length is equal to the number of points in xi or pts.
xi — Evaluation points
100 equally spaced points | 900 equally spaced points | vector | two-column matrix
Evaluation points at which ksdensity calculates f, returned as a vector or a two-column matrix. For univariate data, the default is 100 equally-spaced points that cover the range of data in x. For bivariate data, the default is 900 equally-spaced points created using meshgrid from 30 equally-spaced points in each dimension.
bw — Bandwidth of smoothing window
Bandwidth of smoothing window, returned as a scalar value.
If you specify 'BoundaryCorrection' as 'log' (default) and 'Support' as either 'positive' or a vector [L U], ksdensity converts bounded data to be unbounded by using the log transformation. The value of bw is on the scale of the transformed values.
{\stackrel{^}{f}}_{h}\left(x\right)=\frac{1}{nh}\sum _{i=1}^{n}K\left(\frac{x-{x}_{i}}{h}\right)\text{\hspace{0.17em}},
K\left(·\right)
The kernel estimator for the cumulative distribution function (cdf), for any real values of x, is given by
{\stackrel{^}{F}}_{h}\left(x\right)={\int }_{-\infty }^{x}{\stackrel{^}{f}}_{h}\left(t\right)dt=\frac{1}{n}\sum _{i=1}^{n}G\left(\frac{x-{x}_{i}}{h}\right)\text{\hspace{0.17em}},
G\left(x\right)={\int }_{-\infty }^{x}K\left(t\right)dt
For more details, see Kernel Distribution.
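The kernel density estimator above can be written out directly. The following is a minimal Python sketch (not MATLAB's implementation) using a standard normal kernel; the bandwidth uses Silverman's rule of thumb, which is an assumption standing in for ksdensity's "optimal for normal densities" default.

```python
import numpy as np

def kde(x, xi, h=None):
    """Kernel density estimate f_h(xi) = (1/(n*h)) * sum_i K((xi - x_i)/h)
    with a standard normal kernel K."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if h is None:
        # Silverman's rule of thumb (assumed default, optimal for
        # normal densities): h = sigma_hat * (4 / (3n))^(1/5)
        h = x.std(ddof=1) * (4.0 / (3.0 * n)) ** 0.2
    u = (xi[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return K.sum(axis=1) / (n * h)

# Mixture of two normals, evaluated at 100 equally spaced points,
# mirroring ksdensity's univariate default.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
xi = np.linspace(x.min(), x.max(), 100)
f = kde(x, xi)
```

The estimate `f` is everywhere nonnegative and integrates to approximately one over the data range, and it shows the bimodality of the sample, as in the first example above.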
The reflection method is a boundary correction method that accurately finds kernel density estimators when a random variable has bounded support. If you specify 'BoundaryCorrection','reflection', ksdensity uses the reflection method. This method augments bounded data by adding reflected data near the boundaries, and estimates the pdf. Then, ksdensity returns the estimated pdf corresponding to the original support with proper normalization, so that the estimated pdf's integral over the original support is equal to one.
If you additionally specify 'Support',[L U], then ksdensity finds the kernel estimator as follows.
{\stackrel{^}{f}}_{h}\left(x\right)=\frac{1}{nh}\sum _{i=1}^{n}\left[K\left(\frac{x-{x}_{i}^{-}}{h}\right)+K\left(\frac{x-{x}_{i}}{h}\right)+K\left(\frac{x-{x}_{i}^{+}}{h}\right)\right]
for L ≤ x ≤ U,
{x}_{i}^{-}=2L-{x}_{i}
{x}_{i}^{+}=2U-{x}_{i}
, and xi is the ith sample data.
{\stackrel{^}{F}}_{h}\left(x\right)=\frac{1}{n}\sum _{i=1}^{n}\left[G\left(\frac{x-{x}_{i}^{-}}{h}\right)+G\left(\frac{x-{x}_{i}}{h}\right)+G\left(\frac{x-{x}_{i}^{+}}{h}\right)\right]-\frac{1}{n}\sum _{i=1}^{n}\left[G\left(\frac{L-{x}_{i}^{-}}{h}\right)+G\left(\frac{L-{x}_{i}}{h}\right)+G\left(\frac{L-{x}_{i}^{+}}{h}\right)\right]
for L ≤ x ≤ U.
To obtain a kernel estimator for an inverse cdf, a survivor function, or a cumulative hazard function (when 'Function' is 'icdf', 'survivor', or 'cumhazard'), ksdensity uses both
{\stackrel{^}{f}}_{h}\left(x\right)
{\stackrel{^}{F}}_{h}\left(x\right)
If you additionally specify 'Support' as 'positive' or [0 inf], then ksdensity finds the kernel estimator by replacing [L U] with [0 inf] in the above equations.
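The reflected-pdf formula above can be sketched directly in Python (again a sketch of the technique, not MATLAB's code): each sample contributes its kernel plus the kernels of its two reflections, and the estimate is kept only on the original support.

```python
import numpy as np

def kde_reflect(x, xi, L, U, h):
    """KDE with the reflection boundary correction on [L, U]: each
    sample x_i is augmented with its reflections 2L - x_i and
    2U - x_i, per the formula for f_h(x) above."""
    x = np.asarray(x, dtype=float)
    n = x.size
    aug = np.concatenate([2.0 * L - x, x, 2.0 * U - x])
    u = (xi[:, None] - aug[None, :]) / h
    f = (np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)).sum(axis=1) / (n * h)
    return np.where((xi >= L) & (xi <= U), f, 0.0)

# Uniform samples on [0, 1]: the true density is 1 everywhere.
rng = np.random.default_rng(1)
x = rng.random(500)
xi = np.linspace(0.0, 1.0, 201)
f = kde_reflect(x, xi, L=0.0, U=1.0, h=0.1)
# Near the boundaries the corrected estimate stays close to 1,
# whereas an uncorrected KDE would drop toward 0.5 there.
```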
Some options that require extra passes or sorting of the input data are not supported:
'Support' (support is always unbounded).
Uses standard deviation (instead of median absolute deviation) to compute the bandwidth.
|
Gear mechanism that allows driven shafts to spin at different speeds - MATLAB - MathWorks India
Crown gear located
Carrier (C) to driveshaft (D) teeth ratio (NC/ND)
Sun-sun and carrier-driveshaft ordinary efficiencies
Sun-sun efficiency
Carrier-driveshaft efficiency
Sun-carrier and driveshaft-casing power thresholds
Sun-carrier and driveshaft-casing viscous friction coefficients
Carrier inertia
Gear mechanism that allows driven shafts to spin at different speeds
The Differential block represents a gear mechanism that allows the driven shafts to spin at different speeds. Differentials are common in automobiles, where they enable the various wheels to spin at different speeds while cornering. Ports D, S1, and S2 represent the longitudinal driveshaft and the sun gear shafts of the differential, respectively. Any one of the shafts can drive the other two.
The block models the differential mechanism as a structural component based on the Simple Gear and Sun-Planet Bevel Simscape™ Driveline™ blocks. The figure demonstrates the equivalent block diagram for the Differential block.
To increase the fidelity of the gear model, specify properties such as gear inertia, meshing losses, and viscous losses. By default, gear inertia and viscous losses are assumed to be negligible. The block enables you to specify the inertias of the gear carrier and internal planet gears. To model the inertias of the outer gears, connect Simscape Inertia blocks to ports D, S1, and S2.
The differential imposes one kinematic constraint on the three connected axes:
{\omega }_{D}=-\frac{{g}_{D}}{2}\left({\omega }_{S1}+{\omega }_{S2}\right),
where
ωS1 is the velocity of sun gear shaft 1.
ωS2 is the velocity of sun gear shaft 2.
ωD is the velocity of the longitudinal driveshaft.
Negative values imply that the differential is left of centerline. The three degrees of freedom reduce to two independent degrees of freedom. The gear pairs are (1,2) = (S, S) and (C, D). C is the carrier.
The sum of the lateral motions is the transformed longitudinal motion. The difference of side motions,
{\omega }_{S1}-{\omega }_{S2}
, is independent of the longitudinal motion. The general motion of the lateral shafts is a superposition of these two independent degrees of freedom, which have this physical significance:
The longitudinal degree of freedom is equivalent to the two lateral shafts rotating at the same angular velocity,
{\omega }_{S1}={\omega }_{S2}
, and at a fixed ratio with respect to the longitudinal shaft.
The differential degree of freedom is equivalent to keeping the longitudinal driving shaft locked,
{\omega }_{D}=0
, where ωD is the velocity of the driving shaft, while the lateral shafts rotate with respect to each other in opposite directions,
{\omega }_{S1}=-{\omega }_{S2}
The lateral axis torques are constrained by the longitudinal axis torque such that the net power flow sums to zero:
{\omega }_{S1}{\tau }_{S1}+{\omega }_{S2}{\tau }_{S2}+{\omega }_{D}{\tau }_{D}-{P}_{loss}=0,
τS1 and τS2 are the torques along the lateral axes.
τD is the longitudinal torque.
Ploss is the power loss.
When the kinematic and power constraints are combined, the ideal case yields
{g}_{D}{\tau }_{D}=2\frac{\left({\omega }_{S1}{\tau }_{S1}+{\omega }_{S2}{\tau }_{S2}\right)}{{\omega }_{S1}+{\omega }_{S2}},
where gD is the gear ratio for the longitudinal driveshaft.
Ideal Fundamental Constraints
The effective Differential block constraint is composed of two sun-planet bevel gear subconstraints.
The first subconstraint is due to the coupling of the two sun-planet bevel gears to the carrier:
\frac{{\omega }_{S1}-{\omega }_{C}}{{\omega }_{S2}-{\omega }_{C}}=-\frac{{g}_{SP2}}{{g}_{SP1}},
where gSP1 and gSP2 are the gear ratios for the sun-planet gears.
The second subconstraint is due to the coupling of the carrier to the longitudinal driveshaft:
{\omega }_{D}=-{g}_{D}{\omega }_{C}.
The sun-planet gear ratios of the underlying sun-planet bevel gears, in terms of the radii, r, of the sun-planet gears are:
{g}_{SP1}=\frac{{r}_{S1}}{{r}_{P1}}
{g}_{SP2}=\frac{{r}_{S2}}{{r}_{P2}}
The Differential block is implemented with
{g}_{SP1}={g}_{SP2}=1
, leaving gD free to adjust.
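The ideal constraints above can be checked numerically. With g_SP1 = g_SP2 = 1, the first subconstraint gives ω_C = (ω_S1 + ω_S2)/2, the second gives ω_D = -g_D ω_C, and the torque relation follows from the zero-power-loss balance. This is an illustrative sketch, not the Simscape implementation:

```python
def differential_ideal(w_s1, w_s2, g_d):
    """Ideal differential kinematics with g_SP1 = g_SP2 = 1."""
    w_c = 0.5 * (w_s1 + w_s2)   # from (w_s1 - w_c)/(w_s2 - w_c) = -1
    w_d = -g_d * w_c            # carrier-to-driveshaft coupling
    return w_c, w_d

def driveshaft_torque(w_s1, w_s2, t_s1, t_s2, g_d):
    """Ideal torque relation: g_d * tau_D = 2 (w1 t1 + w2 t2) / (w1 + w2)."""
    return 2.0 * (w_s1 * t_s1 + w_s2 * t_s2) / (g_d * (w_s1 + w_s2))

# Cornering: the sun (wheel) shafts spin at 90 and 110 rad/s, but the
# driveshaft speed depends only on their mean.
w_c, w_d = differential_ideal(90.0, 110.0, g_d=3.0)
t_d = driveshaft_torque(90.0, 110.0, 10.0, 10.0, g_d=3.0)
print(w_c, w_d)  # 100.0 -300.0
# Ideal power balance: w_s1*t_s1 + w_s2*t_s2 + w_d*t_d = 0
```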
The gears are assumed to be rigid.
D — Driveshaft
Rotational mechanical conserving port associated with the longitudinal driveshaft.
S1 — Sun gear 1
Rotational conserving port associated with sun gear 1.
Crown gear located — Crown gear location
To the right of centerline (default) | To the left of centerline
Location of the bevel crown gear relative to the centerline of the gear assembly.
Carrier (C) to driveshaft (D) teeth ratio (NC/ND) — Carrier to driveshaft gear rotation ratio
Fixed ratio, gD, of the carrier gear to the longitudinal driveshaft gear rotations as defined by the number of carrier gear teeth divided by the number of driveshaft gear teeth. This gear ratio must be strictly greater than 0.
Sun-sun and carrier-driveshaft ordinary efficiencies — Torque transfer efficiencies
Vector of torque transfer efficiencies, [ηSS, ηCD], from driving to driven sun gear and from the carrier to the longitudinal driveshaft, respectively. The vector elements must be in the range (0,1].
Sun-sun efficiency — Driving sun gear to driven sun gear torque transfer efficiency
Vector of output-to-input power ratios that describe the power flow from the driving sun gear to the driven sun gear, ηSS. The block uses the values to construct a 1-D temperature-efficiency lookup table.
Carrier-driveshaft efficiency — Carrier to driveshaft gear torque transfer efficiency
Vector of output-to-input power ratios that describe the power flow from the carrier to the driveshaft, ηCD. The block uses the values to construct a 1-D temperature-efficiency lookup table.
Sun-carrier and driveshaft-casing power thresholds — Minimum efficiency power threshold for the sun-carrier and driveshaft-casing gear couplings
[0.001, 0.001] W (default) | vector
Vector of power thresholds, pth, for sun-carrier and longitudinal driveshaft-casing [pS, pD], respectively. The full efficiency loss applies above these values. Below these values, a hyperbolic tangent function smooths the efficiency factor.
Sun-carrier and driveshaft-casing viscous friction coefficients — Viscous friction coefficients
Vector of viscous friction coefficients, [μS μD ], for the sun gear-to-carrier gears and longitudinal driveshaft-to-casing gear motions, respectively.
Carrier inertia — Carrier inertia
Moment of inertia of the planet gear carrier. This value must be positive.
Disk Friction Clutch | Simple Gear | Sun-Planet Bevel
|
finished - Maple Help
Home : Support : Online Help : Mathematics : Discrete Mathematics : Combinatorics : Combinatorial Structures : finished
create an iterator for a specified structure
produce the next structure
return true when the last structure is produced
iterstructs(struct(args), size=n)
nextstruct(it)
finished(it)
(optional) non-negative integer specifying the size of the object or the string 'allsizes' when the object should be selected from all possible sizes
iterator returned by a call to iterstructs
The iterstructs function returns an iterator in the form of a table. The nextstruct and finished functions are used to manipulate this table. The table entry nextvalue is a procedure created to traverse the elements defined by struct(args) of size n. The finished entry is a flag that is set to true when the last structure is produced.
The nextstruct command returns the next structure from the iterator defined by the call to iterstructs.
The finished function returns true when the last structure is produced.
If the size is not specified, the default size for that structure is used.
To list all the structures at once, see combstruct[allstructs].
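The same iterate-until-finished pattern maps onto a Python generator; this analogous sketch uses itertools.permutations (note that itertools yields permutations in lexicographic order of positions, which differs from the order combstruct produces in the example below).

```python
from itertools import permutations

# Build an iterator over all permutations of [a, b, c] and consume it.
# Exhaustion of the iterator plays the role of finished(it), and each
# step of the loop plays the role of nextstruct(it).
it = permutations(['a', 'b', 'c'])
results = [list(p) for p in it]
for p in results:
    print(p)
```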
\mathrm{with}\left(\mathrm{combstruct}\right):
\mathrm{allp}≔\mathrm{iterstructs}\left(\mathrm{Permutation}\left([a,b,c]\right)\right):
\mathbf{while}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{not}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathrm{finished}\left(\mathrm{allp}\right)\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{do}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{2.0em}{0.0ex}}\mathrm{nextstruct}\left(\mathrm{allp}\right)\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathbf{end}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{do}
[\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{c}]
[\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{c}]
[\textcolor[rgb]{0,0,1}{c}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{b}]
[\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{c}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{b}]
[\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{c}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{a}]
[\textcolor[rgb]{0,0,1}{c}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{a}]
combstruct[allstructs]
|
Integral p-adic étale cohomology of Drinfeld symmetric spaces
15 February 2021
Pierre Colmez,1 Gabriel Dospinescu,2 Wiesława Nizioł1
1Centre National de la Recherche Scientifique, Institut de Mathématiques de Jussieu-Paris Rive Gauche, Sorbonne Université, Paris, France
2Centre National de la Recherche Scientifique, Unité de Mathématiques Pures et Appliquées, École Normale Supérieure de Lyon, Lyon, France
We compute the integral p-adic étale cohomology of Drinfeld symmetric spaces of any dimension. This refines the computation of the rational p-adic étale cohomology from our recent work on Stein spaces. The main tools are: the computation of the integral de Rham cohomology from that work and, as a new tool, the integral p-adic comparison theorems of Bhatt–Morrow–Scholze and Česnavičius–Koshikawa, which replace the quasi-integral comparison theorem of Tsuji. Along the way, we compute the {A}_{\mathrm{inf}}-cohomology of Drinfeld symmetric spaces.
The third author would like to thank the Mathematical Sciences Research Institute at Berkeley for their hospitality during the 2019 spring semester when parts of this paper were written. We would like to thank Bhargav Bhatt for suggesting that derived completions could simplify our original proof (which they did!). We thank Kęstutis Česnavičius and Matthew Morrow for helpful discussions related to the subject of this paper. Last but not least, we thank the referees for a very careful reading of the paper and many corrections/suggestions that we have incorporated into the final exposition.
This work was partially supported by Agence Nationale de la Recherche project ANR-14-CE25 and by National Science Foundation grant DMS-1440140.
Pierre Colmez. Gabriel Dospinescu. Wiesława Nizioł. "Integral p-adic étale cohomology of Drinfeld symmetric spaces." Duke Math. J. 170 (3) 575 - 613, 15 February 2021. https://doi.org/10.1215/00127094-2020-0084
Received: 15 September 2019; Revised: 28 June 2020; Published: 15 February 2021
Keywords: Drinfeld symmetric spaces , p-adic cohomology
|
Row echelon form - Knowpia
In linear algebra, a matrix is in echelon form if it has the shape resulting from a Gaussian elimination.
A matrix being in row echelon form means that Gaussian elimination has operated on the rows, and column echelon form means that Gaussian elimination has operated on the columns. In other words, a matrix is in column echelon form if its transpose is in row echelon form. Therefore, only row echelon forms are considered in the remainder of this article. The similar properties of column echelon form are easily deduced by transposing all the matrices. Specifically, a matrix is in row echelon form if
All rows consisting of only zeroes are at the bottom.
The leading coefficient (also called the pivot) of a nonzero row is always strictly to the right of the leading coefficient of the row above it.
Some texts add the condition that the leading coefficient must be 1.[1]
These two conditions imply that all entries in a column below a leading coefficient are zeros.[2]
The following is an example of a 3×5 matrix in row echelon form, which is not in reduced row echelon form (see below):
{\displaystyle \left[{\begin{array}{ccccc}1&a_{0}&a_{1}&a_{2}&a_{3}\\0&0&2&a_{4}&a_{5}\\0&0&0&1&a_{6}\end{array}}\right]}
Many properties of matrices may be easily deduced from their row echelon form, such as the rank and the kernel.
Reduced row echelon form
A matrix is in reduced row echelon form (also called row canonical form) if it satisfies the following conditions:[3]
The leading entry in each nonzero row is a 1 (called a leading 1).
Each column containing a leading 1 has zeros in all its other entries.
The reduced row echelon form of a matrix may be computed by Gauss–Jordan elimination. Unlike the row echelon form, the reduced row echelon form of a matrix is unique and does not depend on the algorithm used to compute it.[4] For a given matrix, although the row echelon form is not unique, all row echelon forms and the reduced row echelon form have the same number of zero rows, and the pivots are located at the same positions.[4]
This is an example of a matrix in reduced row echelon form, which shows that the left part of the matrix is not always an identity matrix:
\left[\begin{array}{ccccc} 1 & 0 & a_1 & 0 & b_1 \\ 0 & 1 & a_2 & 0 & b_2 \\ 0 & 0 & 0 & 1 & b_3 \end{array}\right]
Transformation to row echelon form
By means of a finite sequence of elementary row operations, called Gaussian elimination, any matrix can be transformed to row echelon form. Since elementary row operations preserve the row space of the matrix, the row space of the row echelon form is the same as that of the original matrix.
The resulting echelon form is not unique; any matrix that is in echelon form can be put in an (equivalent) echelon form by adding a scalar multiple of a row to one of the above rows, for example:
\begin{bmatrix} 1 & 3 & -1 \\ 0 & 1 & 7 \end{bmatrix} \xrightarrow{\text{add row 2 to row 1}} \begin{bmatrix} 1 & 4 & 6 \\ 0 & 1 & 7 \end{bmatrix}.
However, every matrix has a unique reduced row echelon form. In the above example, the reduced row echelon form can be found as
\begin{bmatrix} 1 & 3 & -1 \\ 0 & 1 & 7 \end{bmatrix} \xrightarrow{\text{subtract } 3 \times \text{(row 2) from row 1}} \begin{bmatrix} 1 & 0 & -22 \\ 0 & 1 & 7 \end{bmatrix}.
This means that the nonzero rows of the reduced row echelon form are the unique reduced row echelon generating set for the row space of the original matrix.
A system of linear equations is said to be in row echelon form if its augmented matrix is in row echelon form. Similarly, a system of linear equations is said to be in reduced row echelon form or in canonical form if its augmented matrix is in reduced row echelon form.
The canonical form may be viewed as an explicit solution of the linear system. In fact, the system is inconsistent if and only if one of the equations of the canonical form is reduced to 0 = 1.[5] Otherwise, regrouping in the right hand side all the terms of the equations but the leading ones, expresses the variables corresponding to the pivots as constants or linear functions of the other variables, if any.
Pseudocode for reduced row echelon form
The following pseudocode converts a matrix into a reduced row echelon form:
function ToReducedRowEchelonForm(Matrix M) is
    lead := 0
    rowCount := the number of rows in M
    columnCount := the number of columns in M
    for 0 ≤ r < rowCount do
        if columnCount ≤ lead then
            stop function
        i := r
        while M[i, lead] = 0 do
            i := i + 1
            if rowCount = i then
                i := r
                lead := lead + 1
                if columnCount = lead then
                    stop function
        if i ≠ r then Swap rows i and r
        Divide row r by M[r, lead]
        for 0 ≤ i < rowCount do
            if i ≠ r do
                Subtract M[i, lead] multiplied by row r from row i
        lead := lead + 1
The following pseudocode converts the matrix to a row echelon form (not abbreviated):
function ToRowEchelonForm(Matrix M) is
    nr := number of rows in M
    nc := number of columns in M
    for 0 ≤ r < nr do
        allZeros := true
        for 0 ≤ c < nc do
            if M[r, c] != 0 then
                allZeros := false
                exit for
        if allZeros = true then
            In M, swap row r with row nr
            nr := nr - 1
    p := 0
    while p < nr and p < nc do
        label nextPivot:
        r := 1
        while M[p, p] = 0 do
            if (p + r) <= nr then
                p := p + 1
                goto nextPivot
            In M, swap row p with row (p + r)
            r := r + 1
        for 1 ≤ r < (nr - p) do
            if M[p + r, p] != 0 then
                x := -M[p + r, p] / M[p, p]
                for p ≤ c < nc do
                    M[p + r, c] := M[p, c] * x + M[p + r, c]
        p := p + 1
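As a concrete counterpart to the pseudocode above, here is a minimal Python sketch of Gauss–Jordan elimination; the function name and tolerance are our own choices, not from any particular library:

```python
def rref(matrix, tol=1e-12):
    """Return the reduced row echelon form (Gauss-Jordan elimination).

    Follows the pseudocode above: find a pivot in the current lead column,
    swap it up, scale the pivot row so its leading entry is 1, then clear
    the rest of the column.
    """
    A = [list(map(float, row)) for row in matrix]
    rows, cols = len(A), len(A[0])
    lead = 0
    for r in range(rows):
        if lead >= cols:
            break
        i = r
        while abs(A[i][lead]) < tol:   # search this column for a pivot
            i += 1
            if i == rows:              # no pivot here: move to next column
                i = r
                lead += 1
                if lead == cols:
                    return A
        A[i], A[r] = A[r], A[i]        # swap the pivot row into place
        pivot = A[r][lead]
        A[r] = [x / pivot for x in A[r]]             # leading entry -> 1
        for j in range(rows):                         # clear the column
            if j != r:
                factor = A[j][lead]
                A[j] = [a - factor * b for a, b in zip(A[j], A[r])]
        lead += 1
    return A

# The worked example from the text:
print(rref([[1, 3, -1], [0, 1, 7]]))   # → [[1.0, 0.0, -22.0], [0.0, 1.0, 7.0]]
```

Running it on the 2×3 example from the previous section reproduces the unique reduced row echelon form computed there by hand.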
^ See, for instance, Leon (2009, p. 13)
^ Meyer 2000, p. 44
^ a b Anton, Howard; Rorres, Chris (2013-10-23). Elementary Linear Algebra: Applications Version, 11th Edition. Wiley Global Education. p. 21. ISBN 9781118879160.
^ Cheney, Ward; Kincaid, David R. (2010-12-29). Linear Algebra: Theory and Applications. Jones & Bartlett Publishers. pp. 47–50. ISBN 9781449613525.
Leon, Steve (2009), Linear Algebra with Applications (8th ed.), Pearson, ISBN 978-0136009290 .
Meyer, Carl D. (2000), Matrix Analysis and Applied Linear Algebra, SIAM, ISBN 978-0-89871-454-8 .
The Wikibook Linear Algebra has a page on the topic of: Row Reduction and Echelon Forms
Interactive Row Echelon Form with rational output
|
IRT is an adaptive testing framework that can give graded scores on tests and reduce the time it takes to administer tests.
At its base, IRT defines a model p(\theta, b_i) that associates a student's latent ability \theta with the probability p of answering problem i correctly, where the intrinsic parameters of the problem are collected in the vector b_i.

We often want to study a problem, through its intrinsic parameters b_i, as a function of latent ability \theta. We call this the characteristic equation of a problem, which is just a plot of (\theta, p(\theta, b_i)).
In standard models, \theta is unbounded, that is, \theta \in ]-\infty,\infty[, and a sigmoid function \sigma(x) = \frac{1}{1+e^{-x}} is usually used as a base for the characteristic equation. This is convenient since the sigmoid is defined \forall x \in ]-\infty,\infty[ and has a range of ]0,1[, which makes it a natural probability. Although \theta is unbounded, in practice \theta typically lies in the range [-3,3].
There are some formal assumptions for how p(\theta, b_i) should behave. They're mostly common sense, but it's worth noting them regardless.

Monotonicity: The probability of answering correctly, p(\theta, b_i), should monotonically increase with \theta.
Unidimensionality: Basic models assume that there is only one latent trait, \theta, that encodes student ability, but this assumption can be relaxed in more advanced models.
Local Independence: Responses to different problems are independent of each other.
Invariance: The intrinsic parameters of the problems are stable across different students.
1 Parameter (Rasch model)

The simplest model uses only one intrinsic parameter, the difficulty of the problem. We write it as

p(\theta, d_i) = \frac{1}{1+e^{-(\theta-d_i)}}

where d_i is the difficulty parameter. It is analogous to \theta in its range. The neutral value is 0.
2 Parameter model

We can extend this model by introducing a discrimination parameter that varies the slope of the characteristic equation:

p(\theta, d_i, a_i) = \frac{1}{1+e^{-a_i(\theta-d_i)}}

where a_i is the discrimination parameter. a_i should be in the range [0,\infty[. A value of 0 means that skill has no effect: everyone has the same probability of answering correctly, given by \frac{1}{2} (and, in the 4 parameter model, by the average of b_i and c_i). If a_i tends to \infty, the characteristic equation becomes a perfect step function. The neutral value is 1.
3 Parameter model

The 3 parameter model accounts for the probability of guessing correctly by raising the lower asymptote:

p(\theta, d_i, a_i, c_i) = c_i + (1-c_i)\frac{1}{1+e^{-a_i(\theta-d_i)}}

where c_i is the guessing probability parameter, with range [0,1]. A value of 0 means that it is impossible to guess the solution, and 1 means that it is impossible to fail the question. The neutral value is 0.
4 Parameter model

The 4 parameter model is used less often, but is still worthwhile to discuss. It introduces a slip factor: the probability that someone who knows the concept makes a mistake.

p(\theta, d_i, a_i, c_i, b_i) = c_i + (b_i-c_i)\frac{1}{1+e^{-a_i(\theta-d_i)}}

where b_i is the upper bound probability parameter. b_i should be larger than c_i and lies in the range [0,1]. It encodes the probability that someone who knows the answer actually answers correctly. The neutral value is 1.
We fit the models using maximum likelihood estimation. This is the same whether or not you know the intrinsic problem parameters. However, if you already know the problem difficulties, then you can choose to iteratively estimate student abilities using only a subset of the problem dataset.

By choosing problems based on which one gives the maximum information, the maximum of I = p(\theta, b_i)(1-p(\theta, b_i)), you can quickly estimate the student's ability to a high degree of confidence. The highest-information problem is the one where you guess the student has the closest to a 50\% chance of answering correctly.
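The 2 parameter characteristic equation and the information criterion above can be sketched in a few lines of Python; the variable names and the example difficulty values are ours, not from any particular IRT library:

```python
import math

def p_2pl(theta, difficulty, discrimination=1.0):
    """2-parameter logistic model: probability of a correct answer."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

def information(theta, difficulty, discrimination=1.0):
    """Item information I = p * (1 - p), as defined in the text."""
    p = p_2pl(theta, difficulty, discrimination)
    return p * (1.0 - p)

# Neutral case: average ability on a problem of matching difficulty.
print(p_2pl(0.0, 0.0))         # → 0.5
print(information(0.0, 0.0))   # → 0.25, the maximum possible information

# Adaptive step: pick the item closest to a 50% chance for this student.
difficulties = [-2.0, -0.5, 0.4, 1.5]    # hypothetical item difficulties
theta_hat = 0.3                          # current ability estimate
best = max(difficulties, key=lambda d: information(theta_hat, d))
print(best)                              # → 0.4
```

Note that the selected item (difficulty 0.4) is the one nearest the ability estimate, exactly the "closest to 50%" rule described above.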
|
Energy Critical nonlinear Schrödinger equations in the presence of periodic geodesics
This is a report on recent progress concerning the global well-posedness problem for energy-critical nonlinear Schrödinger equations posed on specific Riemannian manifolds M with small initial data in H^1(M). The results include small data GWP for the quintic NLS in the case of the 3d flat rational torus M = 𝕋^3, and small data GWP for the corresponding cubic NLS in the cases M = ℝ^2×𝕋^2 and M = ℝ^3×𝕋. The main ingredients are bi-linear and tri-linear refinements of Strichartz estimates which obey the critical scaling, as well as critical function space theory. All results mentioned above have been obtained in collaboration with D. Tataru and N. Tzvetkov.
Keywords: energy critical nonlinear Schrödinger equations, global well-posedness, critical function spaces, Strichartz estimates
Herr, Sebastian. Energy Critical nonlinear Schrödinger equations in the presence of periodic geodesics. Journées équations aux dérivées partielles (2010), article no. 10, 10 p. doi : 10.5802/jedp.67. http://www.numdam.org/articles/10.5802/jedp.67/
|
3'-nucleotidase - Wikipedia
In enzymology, a 3'-nucleotidase (EC 3.1.3.6) is an enzyme that catalyzes the chemical reaction
a 3'-ribonucleotide + H2O ⇌ a ribonucleoside + phosphate
Thus, the two substrates of this enzyme are 3'-ribonucleotide and H2O, whereas its two products are ribonucleoside and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name of this enzyme class is 3'-ribonucleotide phosphohydrolase. Other names in common use include 3'-mononucleotidase, 3'-phosphatase, and 3'-ribonucleotidase. This enzyme participates in purine metabolism and pyrimidine metabolism.
Shuster L, Kaplan NO (1953). "A specific b nucleotidase". J. Biol. Chem. 201 (2): 535–46. PMID 13061389.
|
Biomolecules Chemistry NEET Practice Questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, and PDF solved with answers
During the process of digestion, the proteins present in food materials are hydrolyzed to amino acids. The two enzymes involved in the process

Proteins \stackrel{\mathrm{Enzyme}\,(\mathrm{A})}{\to} Polypeptides \stackrel{\mathrm{Enzyme}\,(\mathrm{B})}{\to} Amino acids

are, respectively:
1. amylase and maltase
2. diastase and lipase
3. pepsin and trypsin
4. invertase and zymase
Subtopic: Vitamins, Hormones & Enzymes |
Phospholipids are esters of glycerol with
1. one carboxylic acid residue and two phosphate groups
2. three phosphate groups
3. three carboxylic acid residues
4. two carboxylic acid residues and one phosphate group
Subtopic: α - Amino Acids, Peptide Bond, Proteins & Structure |
The compound that gives a positive Fehling's solution test is:
Subtopic: Carbohydrates - Classification & D-L configuration |
α-D-(+)-glucose and β-D-(+)-glucose are
1. Anomers
2. Epimers
3. Enantiomers
4. Geometrical isomers
The coupling between base units of DNA is through:
2. Electrostatic bonding
4. van der Waals forces
Subtopic: DNA ,RNA & Metal Ions |
The segment of DNA which acts as the instruction manual for the synthesis of the protein is
Which of the following structures represents the peptide chain ?
On hydrolysis of starch, we finally get
The correct option about the presence of H-bond in nucleotides is:
1. A-T, G-C
2. A-G, T-C
3. G-T, A-C
4. A-A, T-T
|
EuDML | Refinement of Tate's discriminant bound and non-existence theorems for mod p Galois representations
Refinement of Tate's discriminant bound and non-existence theorems for mod p Galois representations.
Moon, Hyunsuk, and Taguchi, Yuichiro. "Refinement of Tate's discriminant bound and non-existence theorems for mod p Galois representations." Documenta Mathematica Extra Vol. (2003): 641-654. <http://eudml.org/doc/128787>.
Keywords: mod p Galois representation, discriminant, mod p Langlands-Weil conjectures, nonabelian class field theory
|
Predicting a Company's Share Price
The more demand for a stock, the higher it drives the price and vice versa. So while in theory, a stock's initial public offering (IPO) is at a price equal to the value of its expected future dividend payments, the stock's price fluctuates based on supply and demand. Many market forces contribute to supply and demand, and thus to a company's stock price.
Stock prices are meant to reflect the present value of a corporation's future cash flows and profits.
As such, it is possible to price a company's shares at some sort of fair value, even when prices at any moment are driven by supply and demand in the market.
Several stock valuation models exist, such as the dividend discount model or Gordon growth model.
While a stock price is conceptually determined by its expected future dividends, many companies do not distribute dividends.
Stocks that don't pay dividends can be valued based on a multiples approach or relative basis.
Understanding the law of supply and demand is easy; understanding demand can be hard. The price movement of a stock indicates what investors feel a company is worth—but how do they determine what it's worth? One factor, certainly, is its current earnings: how much profit it makes.
But investors often look beyond the numbers. That is to say, the price of a stock doesn't only reflect a company's current value—it also reflects the prospects for a company, the growth that investors expect of it in the future.
There are quantitative techniques and formulas used to predict the price of a company's shares. Called dividend discount models (DDMs), they are based on the concept that a stock's current price equals the sum total of all its future dividend payments when discounted back to their present value. By valuing a company's shares as the sum total of its expected future dividends, dividend discount models use the theory of the time value of money (TVM).
In addition to dividends, other valuation methods rely on factors such as the P/E (price-to-earnings) or P/S (price-to-sales) multiples on a relative basis. If one automaker has a P/E multiple of 20x and the industry average is 30x among all automakers, it may be undervalued. Discounted cash flow (DCF) analysis is another approach that considers the future cash flows of a business.
Several different types of dividend discount models exist. One of the most popular, due to its straightforwardness, is the Gordon growth model. Developed in the 1960s by U.S. economist Myron Gordon, the equation for the Gordon growth model is represented by the following:
Present value of stock = (dividend per share) / (discount rate - growth rate)
\begin{aligned} &P = \dfrac{D_1}{r-g}\\ &\textbf{where:}\\ &P = \text{\small Current Stock Price}\\ &g = \text{\small Constant growth rate in perpetuity }\\ &\text{\small expected for the dividends}\\ &r = \text{\small Constant cost of equity capital for that }\\ &\text{\small company (or rate of return)}\\ &D_1 = \text{\small Value of the next year's dividends }\\ \end{aligned}
Example of Share Price Valuation With the Gordon Growth Model
For example, say Widget Inc. stock is trading at $100 per share. This company requires a 5% minimum rate of return (r) and currently pays a $2 dividend per share (D1), which is expected to increase by 3% annually (g).
The intrinsic value (p) of the stock is calculated as:
$2 / (0.05 - 0.03) = $100.
According to the Gordon Growth Model, the shares are correctly valued at their intrinsic level. If they were trading at, say $125 per share, they'd be overvalued by 25%; if they were trading at $90, they'd be undervalued by $10 (and a buying opportunity to value investors who seek out such stocks).
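The Widget Inc. arithmetic can be checked directly with a small script; this is a minimal sketch, and the function name is our own:

```python
def gordon_growth_price(dividend_next, required_return, growth_rate):
    """Gordon growth model: P = D1 / (r - g), valid only when r > g."""
    if required_return <= growth_rate:
        raise ValueError("the model requires r > g")
    return dividend_next / (required_return - growth_rate)

# Widget Inc. from the example: D1 = $2.00, r = 5%, g = 3%
intrinsic = gordon_growth_price(2.0, 0.05, 0.03)
print(round(intrinsic, 2))   # → 100.0, matching the $100 market price

# At $125 the stock would trade 25% above intrinsic value:
print(round((125.0 - intrinsic) / intrinsic * 100))   # → 25
```

The guard clause mirrors the drawback discussed below: when the required rate of return is not strictly greater than the dividend growth rate, the formula has no meaningful value.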
Drawbacks of the Gordon Growth Model
While useful in theory, there are some drawbacks of dividend discount models like the Gordon Growth Model. First, the model assumes a constant rate of growth in dividends per share paid by a company. In reality, many companies vary their dividend rates based on the business cycle, the state of the economy, and in response to unexpected financial difficulties or successes.
Another problem is estimating the appropriate discount rate (minimum rate of return). If the required rate of return turns out to be lower than the dividend growth rate, the result would be negative (i.e., meaningless). Similarly, if the required rate of return is equal to the dividend growth rate, you would have to divide by zero (which is impossible).
Finally, as mentioned above, these models are only useful for valuing dividend-paying stocks. Many companies, especially growth companies or those in the technology sector, do not pay dividends.
Shares are priced based on expectations of future growth and profitability for a company. If those expectations are dashed, share prices can fall. One way to estimate this growth is by looking at the dividends a company pays to its shareholders, which represent profitability. Other factors to look at will include a company's future cash flows, its level of debt, and the amount of liquidity it has on hand. These are examined to see if a company can meet both its long-term and short-term obligations.
Share price refers to the value of a company's stock. The total value of a publicly-traded company is called its market capitalization ("market cap"), which is arrived at by adding up the value of all of the stock outstanding. The more shares that a company has outstanding, the lower each share will be given the same overall value of the corporation.
A share price reflects the value of a company. A highly-priced share may represent a valuable company, but if there are not many shares outstanding, it may not always be the case. Sometimes, the share price rises high enough that a firm's management decides to undergo a stock split, reducing the price of the shares by increasing the number of shares outstanding. A very low share price can signal that a company is struggling. So-called penny stocks can be highly volatile and risky for investors.
The Gordon Growth Model equation above treats a stock's present value similarly to a perpetuity, which refers to a constant stream of identical cash flows for an infinite amount of time with no end date. Of course, in real life, companies may not maintain the same growth rate year after year, and their stock dividends may not increase at a constant rate.
Stern School of Business, New York University. "Dividend Discount Models," Page 2.
|
How Does a Setting of the Vacuum Energy Density, as Given Today, Lead to an Initial Hubble Radius for the Early Universe, i.e. How Does the Early Universe Partly Mimic a Black Hole?
First we review what was done by Klauber in his quantum field theory calculation of the vacuum energy density. Instead of the Planck mass (10^19 GeV) as a cut-off, which leads to an answer 10^122 times too large, we use a cut-off value equal to a number N of gravitons times the graviton mass (assumed to be about 10^-43 GeV), which gives a count N of about 10^31 if the vacuum energy is to avoid an overshoot of 10^122 and instead equal 10^-47 GeV^4. Afterwards, we use the results of Mueller and Lousto to compare the number N = 10^31, interpreted as entropy using Ng's infinite quantum statistics, to the ratio of the square of the Hubble (observational) radius over a calculated grid size a, where a ~ a minimum time step \Delta t times the speed of light. We then use a root-finder procedure to obtain \Delta t, where we use an inflaton value due to the use of a scale factor, if we furthermore use \delta g_{tt} as the variation of the time component of the metric tensor in Pre-Planckian space-time up to the Planckian space-time initial values.
Beckwith, A. (2018) How Does a Setting of the Vacuum Energy Density, as Given Today, Lead to an Initial Hubble Radius for the Early Universe, i.e. How Does the Early Universe Partly Mimic a Black Hole?. Journal of High Energy Physics, Gravitation and Cosmology, 4, 354-360. doi: 10.4236/jhepgc.2018.42022.
\begin{array}{l}\Delta t\cdot |\left(\sqrt{\frac{8\text{π}G{V}_{0}}{\gamma \cdot \left(3\gamma -1\right)}}\cdot \Delta t-1\right)-\frac{{\left(\sqrt{\frac{8\text{π}G{V}_{0}}{\gamma \cdot \left(3\gamma -1\right)}}\cdot \Delta t-1\right)}^{2}}{2}+\frac{{\left(\sqrt{\frac{8\text{π}G{V}_{0}}{\gamma \cdot \left(3\gamma -1\right)}}\cdot \Delta t-1\right)}^{3}}{3}-\cdots |\\ \approx {\left(\sqrt{\frac{\gamma }{\text{π}G}}\right)}^{-1}\frac{48\text{π}\hslash }{{a}_{\mathrm{min}}^{2}\cdot \Lambda }\end{array}
{{S}_{\Lambda }|}_{\text{Arrow-of-time}}=\text{π}\cdot {\left(\frac{{{R}_{c}|}_{\text{initial}}~c\cdot \Delta t}{{l}_{\text{Planck}}}\right)}^{2}\ne 0
\left(\frac{{{R}_{c}|}_{\text{initial}}~c\cdot \Delta t}{{l}_{\text{Planck}}}\right)~\vartheta \left(1\right)
\begin{array}{l}{\frac{\Delta E\Delta t}{\text{Volume}}~\left[\hslash /\text{Volume}\cdot \left(\delta {g}_{tt}~{a}_{\mathrm{min}}^{2}\cdot {\varphi }_{\text{initial}}\right)\right]|}_{\text{Pre-Planckian}}\\ \underset{\left(\text{Pre-Planckian}\right)\to \left(\text{Planckian}\right)}{\to }{\Delta E\Delta t~\hslash |}_{\text{Planckian}}\end{array}
\begin{array}{l}{g}_{tt}~\delta {g}_{tt}\approx {a}_{\mathrm{min}}^{2}{\varphi }_{\text{initial}}\ll 1\\ \underset{\text{Pre-Planck}\to \text{Planck}}{\to }\delta {g}_{tt}\approx {a}_{\mathrm{min}}^{2}{\varphi }_{\text{Planck}}~1\\ ⇔{\left(\frac{{{R}_{c}|}_{\text{initial}}~c\cdot \Delta t}{{l}_{\text{Planck}}}\right)~\vartheta \left(1\right)|}_{\text{Planck}}\end{array}
a~c\cdot \Delta t
{r}_{H}
\begin{array}{c}S\left(\text{Lousto}\right)~0.31\cdot {\left({r}_{H}/a\right)}^{2}\\ ~0.31\cdot {\left({r}_{H}/c\cdot \Delta t\right)}^{2}\\ ~0.31\cdot {\left({r}_{H}/{l}_{P}\right)}^{2}\end{array}
\begin{array}{l}\rho \left(\text{vacuum-energy-density}\right)\\ =\frac{1}{\text{2π}}{\displaystyle \underset{0}{\overset{\Delta }{\int }}{k}^{3}\text{d}k}=\frac{{\Delta }^{4}}{8\text{π}}\\ \underset{\Delta =\text{planck-mass}}{\to }2.80\times {10}^{74}{\text{GeV}}^{4}\end{array}
\begin{array}{l}\Delta \underset{\text{early-universe}}{\to }N\cdot {m}_{g}\\ ⇒\rho \left(\text{vacuum-energy-density}\right)\\ =\frac{1}{\text{2π}}\underset{0}{\overset{\Delta }{\int }}{k}^{3}\text{d}k=\frac{{\left(N\cdot {m}_{g}\right)}^{4}}{8\text{π}}\\ \underset{\Delta =\text{planck-mass}}{\to }{10}^{-47}{\text{GeV}}^{4}\\ ⇔N={10}^{31}\\ &\text{\hspace{0.17em}}{m}_{g}={10}^{-43}\text{GeV}\end{array}
\begin{array}{l}N={10}^{31}~0.31\cdot {\left({r}_{H}/{l}_{P}\right)}^{2}\\ \Rightarrow {r}_{H}~{10}^{15}\times {l}_{P}~{10}^{-20}\text{meters}\end{array}
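The N ≈ 10^31 graviton count in the equation above can be sanity-checked numerically; this is our own back-of-the-envelope script on the paper's stated numbers, not part of the paper:

```python
import math

rho = 1e-47   # target vacuum energy density, GeV^4
m_g = 1e-43   # assumed graviton mass, GeV

# Invert rho = (N * m_g)**4 / (8 * pi) for the graviton count N:
N = (8 * math.pi * rho) ** 0.25 / m_g
print(f"N ~ {N:.1e}")   # → N ~ 4.0e+31, i.e. of order 10^31 as claimed
```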
{{R}_{c}|}_{\text{initial}}~c\cdot \Delta t
{R}_{\text{initial}}\cdot {10}^{27}={R}_{\text{final}}
{r}_{H}~{10}^{15}\times {l}_{P}~{10}^{-20}\text{meters}
|
Set - Simple English Wikipedia, the free encyclopedia
For the Egyptian god, see Seth.
Georg Cantor, in 1894. Cantor was the first mathematician to talk about sets
Cantor's original definition of a set
A set is an idea from mathematics. A set has members (also called elements). A set is defined by its members, so any two sets with the same members are the same (e.g., if sets X and Y have the same members, then X = Y).
Example of a set of polygons
A set cannot have the same member more than once. Membership is the only thing that matters. For example, there is no order or other difference among the members. Anything can be a member of a set, including sets themselves (though if a set is a member of itself, paradoxes such as Russell's paradox can happen).
What to do with sets
Imagine the set is a bag.
Element of
Various things can be put into a bag. Later on, a good question would be if a certain thing is in the bag. Mathematicians call this element of. Something is an element of a set if that thing can be found in the respective bag. The symbol used for this is ∈. If a ∈ A, then a is in the bag A; that is, a is an element of A.
Unlike a bag, a set can contain at most one item of a given type. So for a set of fruits, it would make no difference if there is one orange, or if there are 10 oranges.
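This behaviour is mirrored exactly by Python's built-in `set` type, which keeps at most one copy of each member and ignores order; the fruit names here are just illustrative:

```python
# A set, like the "bag" described above, keeps at most one copy of each member.
fruits = {"orange", "apple", "orange", "orange"}  # the repeated oranges collapse

print(len(fruits))                     # 2: only "orange" and "apple" remain
print("apple" in fruits)               # True: "apple" is an element of the set
print(fruits == {"apple", "orange"})   # True: order never matters
```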
Empty set
Like a bag, a set can also be empty. The empty set is like an empty bag: it has nothing in it. The "empty set" is also called the null set and is represented by the symbol $\varnothing$.
Universe
If we consider, say, some sets of American cars, e.g. a set of all Fords and a set of all Dodges, we may also wish to consider the whole set of American cars. In this case, the set of all American cars would be called a universe.
In other words, a universe is a collection of all the elements one wishes to consider in a given problem. The universe is usually named $U$.
Comparing sets
Two sets can be compared. This is like looking into two different bags: if they contain the same things, they are equal, no matter in which order those things are.
If $A = \{\text{Stanford}, \text{Stanley}\}$ and $B = \{\text{Stanley}, \text{Stanford}\}$, the sets are the same.
Cardinality of a set
When mathematicians talk about a set, they sometimes want to know how big a set is (or what is the cardinality of the set). They do this by counting how many elements are in the set (how many items are in the bag). For finite sets the cardinality is a simple number. The empty set has a cardinality of 0. The set $\{apple, orange\}$ has a cardinality of 2.
Two sets have the same cardinality if we can pair up their elements, joining two elements, one from each set. The sets $\{apple, orange\}$ and $\{sun, moon\}$ have the same cardinality: for example, we could pair apple with sun, and orange with moon. The order does not matter; it is possible to pair all the elements, and none is left out. But the sets $\{dog, cat, bird\}$ and $\{5, 6\}$ have different cardinality: if we try to pair them up, we always leave out one animal.
Infinite cardinality
At times cardinality is not a number. Sometimes a set has infinite cardinality. The set of all integers is a set with infinite cardinality. Some sets with infinite cardinality are bigger (have a bigger cardinality) than others. There are more real numbers than there are natural numbers, for example, which means we cannot pair up the set of integers and the set of real numbers, even if we worked forever.
Countability
If you can count the elements of a set, it is called a countable set. Countable sets include all sets with a finite number of members. Countable sets also include some infinite sets, such as the natural numbers. You can count the natural numbers with $1, 2, 3, \ldots$. The natural numbers are nicknamed "the counting numbers", since they are what we usually use to count things.
An uncountable set is an infinite set that is impossible to count. If we try to count the elements, we will always skip some. It does not matter what step we take. The set of real numbers is an uncountable set. There are many other uncountable sets, even an interval as small as $[0,1]$.
Subsets
A subset of regular polygons
If you look at the sets $A=\{a,b\}$ and $B=\{a,b,c,d\}$, you can see that all elements in the first set are also in the second set.
We say: $\{a,b\}$ is a subset of $\{a,b,c,d\}$, written $\{a,b\}\subseteq \{a,b,c,d\}$.
In general, when all elements of set $A$ are also elements of set $B$, $A$ is called a subset of $B$, written $A\subseteq B$. It is usually read "$A$ is contained in $B$".
Example: Every Chevrolet is an American car. So the set of all Chevrolets is contained in the set of all American cars.
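Python's `set` type has the subset test built in, mirroring the $\{a,b\}\subseteq\{a,b,c,d\}$ example above:

```python
A = {"a", "b"}
B = {"a", "b", "c", "d"}

print(A.issubset(B))   # True: every element of A is also in B
print(A <= B)          # the <= operator is shorthand for issubset
print(B <= A)          # False: B has elements that A lacks
```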
Set operations
Intersection of two sets of polygons
There are different ways to combine sets.
Intersections
The intersection $A\cap B$ of two sets $A$ and $B$ is a set that contains all the elements that are both in set $A$ and in set $B$ at the same time.
Example: When $A$ is the set of all cheap cars, and $B$ is the set of all American cars, then $A\cap B$ is the set of all cheap American cars.
Unions
Union of two sets of polygons
The union $A\cup B$ of two sets $A$ and $B$ is a set that contains all the elements that are in set $A$ or in set $B$. This "or" is the inclusive disjunction, so the union also contains the elements that are in both sets. By the way, this means that the intersection is a subset of the union: $(A\cap B)\subseteq (A\cup B)$.
Example: When $A$ is the set of all cheap cars, and $B$ is the set of all American cars, then $A\cup B$ is the set of all cars that are cheap or American; in other words, the set of all cars without the expensive cars that are not from America.
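In Python the `&` and `|` operators compute intersection and union; the car names below are hypothetical stand-ins for the cheap/American example:

```python
cheap = {"Ford Fiesta", "Dodge Neon", "Fiat Panda"}       # hypothetical cheap cars
american = {"Ford Fiesta", "Dodge Neon", "Cadillac CT5"}  # hypothetical American cars

both = cheap & american     # intersection: cheap American cars
either = cheap | american   # union: cars that are cheap or American (or both)

print(sorted(both))    # ['Dodge Neon', 'Ford Fiesta']
print(both <= either)  # True: the intersection is a subset of the union
```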
Complement can mean two different things:
The complement of $A$ is the universe $U$ without all the elements of $A$: $A^{\rm C}=U\setminus A$. Here $U$ is the set of all things you speak about.
Example: If $U$ is the set of all cars, and $A$ is the set of all cheap cars, then $A^{\rm C}$ is the set of all expensive cars.
Differences of two sets of polygons
The set difference of $B$ and $A$ is the set of all elements of $B$ that are not in $A$, written $B\setminus A$. It is also called the relative complement of $A$ in $B$.
Example: If $A$ is the set of all cheap cars, and $B$ is the set of all American cars, then $B\setminus A$ is the set of all expensive American cars.
If you exchange the sets in the set difference, the result is different: in the example with the cars, the difference $A\setminus B$ is the set of all cheap cars that are not made in America.
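Set difference and complement both use the `-` operator in Python; the small universe of cars below is invented for illustration:

```python
U = {"Ford Fiesta", "Dodge Neon", "Cadillac CT5", "Fiat Panda"}  # a tiny universe
cheap = {"Ford Fiesta", "Dodge Neon", "Fiat Panda"}
american = {"Ford Fiesta", "Dodge Neon", "Cadillac CT5"}

print(sorted(american - cheap))  # B \ A: expensive American cars
print(sorted(cheap - american))  # A \ B: cheap cars not made in America
print(sorted(U - cheap))         # complement of "cheap" within the universe U
```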
Most mathematicians use uppercase italic (usually Roman) letters to write about sets (such as $A$, $B$, $C$). The things that are seen as elements of sets are usually written with lowercase Roman letters.[1][2]
One way of showing a set is by a list of its members, separated by commas, included in braces. For example, $X=\{1,2,3\}$ is a set which has members 1, 2, and 3.
Another way, called the set-builder notation,[3] is by a statement of what is true of the members of the set, like this:
{x | x is a natural number & x < 4}.
In spoken English, this reads: "the set of all x such that x is a natural number and x is less than four". The symbol "|" means "such that" or "so that".
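Set-builder notation has a direct analogue in Python's set comprehensions; the bound of 100 below is an arbitrary search range, not part of the notation:

```python
# The set {x | x is a natural number & x < 4}, as a set comprehension:
X = {x for x in range(1, 100) if x < 4}

print(X == {1, 2, 3})  # True
```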
The empty set is written in a special way: $\emptyset$, $\varnothing$, or $\{\}$.
When object a is a member of set $A$, it is written as $a\in A$. In spoken English, this reads: "a is a member of $A$".
Venn diagrams
{\displaystyle A\cap B}
To illustrate operations on sets, mathematicians use Venn diagrams. Venn diagrams use circles to show individual sets. The universe is depicted with a rectangle. Results of operations are shown as colored areas. In the illustration of the intersection operation, the left circle shows set $A$ and the right circle shows set $B$.
Special sets
Some sets are very important to mathematics. They are used very often. One of these is the empty set. Many of these special sets are written using blackboard bold typeface, and these include:[1][4]
$\mathbb{P}$, denoting the set of all prime numbers;
$\mathbb{N}$, denoting the set of all natural numbers;
$\mathbb{Z}$, denoting the set of all integers;
$\mathbb{Q}$, denoting the set of all rational numbers (that is, the set of all proper and improper fractions). So,
{\displaystyle \mathbb {Q} =\left\{{\begin{matrix}{\frac {a}{b}}\end{matrix}}|a,b\in \mathbb {Z} ,b\neq 0\right\}}
, meaning all fractions $\frac{a}{b}$ where a and b are in the set of all integers and b is not equal to 0. For example, $\frac{1}{4}\in \mathbb{Q}$ and $\frac{11}{6}\in \mathbb{Q}$.
All integers are in this set, since every integer $a$ can be written as the fraction $\frac{a}{1}$;
$\mathbb{R}$, denoting the set of all real numbers, including irrational numbers such as $\pi$ and $e$;
$\mathbb{C}$, denoting the set of all complex numbers.
Each of these sets of numbers has an infinite number of elements, and
{\displaystyle \mathbb {P} \subset \mathbb {N} \subset \mathbb {Z} \subset \mathbb {Q} \subset \mathbb {R} \subset \mathbb {C} }
Paradoxes about sets
The mathematician Bertrand Russell found that there are problems with the informal definition of sets. He stated this in a paradox called Russell's paradox. An easier-to-understand version, closer to real life, is called the barber paradox.
The barber paradox
There is a small town somewhere. In that town, there is a barber. All the men in the town do not like beards, so they either shave themselves, or they go to the barber shop to be shaved by the barber.
We can therefore make a statement about the barber himself: The barber shaves all men that do not shave themselves. He only shaves those men (since the others shave themselves and do not need a barber to give them a shave).
This of course raises the question: What does the barber do each morning to look clean-shaven? This is the paradox.
If the barber shaves himself, he cannot be a barber, since a barber does not shave himself. If he does not shave himself, he falls in the category of those who do not shave themselves, and so, cannot be a barber.
[1] "Comprehensive List of Set Theory Symbols". Math Vault. 2020-04-11. Retrieved 2020-08-19.
[2] Weisstein, Eric W. "Set". mathworld.wolfram.com. Retrieved 2020-08-19.
[3] "Set Symbols". www.mathsisfun.com. Retrieved 2020-08-19.
The following books explore sets in more detail:
|
Wald test of model specification - MATLAB waldtest - MathWorks 日本
{y}_{t}={\mathrm{β}}_{0}+{\mathrm{β}}_{1}{y}_{t-1}+{\mathrm{β}}_{2}{y}_{t-2}+{\mathrm{ε}}_{t}.
{\mathrm{β}}_{2}=0
r={\mathrm{β}}_{2}
R=\left[\begin{array}{ccc}0& 0& 1\end{array}\right]
{\mathrm{β}}_{2}=0
{y}_{t}=0.9{y}_{t-1}+{\mathrm{ε}}_{t},
{\mathrm{ε}}_{t}={w}_{t}\sqrt{{h}_{t}}
{h}_{t}=1+0.5{\mathrm{ε}}_{t-1}^{2}
{w}_{t}
y_{t}=c+\phi_{1}y_{t-1}+\varepsilon_{t},
h_{t}=\alpha_{0}+\alpha_{1}\varepsilon_{t-1}^{2}
{\mathrm{α}}_{1}=0
r\left(\theta\right)=\alpha_{1}
\theta=\left[c,\phi_{1},\alpha_{0},\alpha_{1}\right]^{\prime}
r=\hat{\alpha}_{1}
R=\left[\begin{array}{cccc}0& 0& 0& 1\end{array}\right]
{\mathrm{α}}_{1}=0
{y}_{t}=3+0.9{y}_{t-1}-0.5{y}_{t-2}+{\mathrm{ε}}_{t}+0.7{\mathrm{ε}}_{t-1},
{\mathrm{ε}}_{t}
\left[c,\phi_{1},\phi_{2},\theta_{1},\theta_{2},\sigma^{2}\right]
{\mathrm{γ}}_{1}{\mathrm{α}}_{1}=1
{\mathrm{γ}}_{1}+{\mathrm{α}}_{1}=1
{\mathrm{γ}}_{1}
{\mathrm{α}}_{1}
r\left(\mathrm{θ}\right)=0
Suppose r1,...,rq are the q restriction functions, and the unrestricted model parameters are θ1,...,θp. Then, the restriction function Jacobian is
R=\left(\begin{array}{ccc}\frac{\partial r_{1}}{\partial \theta_{1}}& \cdots & \frac{\partial r_{1}}{\partial \theta_{p}}\\ \vdots & \ddots & \vdots \\ \frac{\partial r_{q}}{\partial \theta_{1}}& \cdots & \frac{\partial r_{q}}{\partial \theta_{p}}\end{array}\right).
W=r^{\prime}\left(R\Sigma_{\hat{\theta}}R^{\prime}\right)^{-1}r,
r is the restriction function that specifies restrictions of the form r(θ) = 0 on parameters θ in the unrestricted model, evaluated at the unrestricted model parameter estimates. In other words, r maps the p-dimensional parameter space to the q-dimensional restriction space.
$r=\hat{\theta}-\theta_{0}$, where $\hat{\theta}$ is the unrestricted model parameter estimate for the restricted parameters and $\theta_{0}$ holds the values of the restricted model parameters under the null hypothesis.
$\hat{\Sigma}_{\hat{\theta}}$ is the estimated covariance matrix of the unrestricted model parameter estimates.
When W exceeds a critical value in its asymptotic distribution, the test rejects the null, restricted hypothesis in favor of the alternative, unrestricted hypothesis. The nominal significance level (α) determines the critical value.
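As a sketch of the statistic itself (not the MATLAB implementation), W can be assembled with NumPy for the ARCH restriction $\alpha_1 = 0$ described above; every number below is made up for illustration:

```python
import numpy as np

# Hypothetical unrestricted estimates for theta = [c, phi1, alpha0, alpha1]
theta_hat = np.array([0.5, 0.9, 0.3, 0.25])
R = np.array([[0.0, 0.0, 0.0, 1.0]])        # Jacobian of r(theta) = alpha1
r = np.array([theta_hat[3]])                # restriction evaluated at theta_hat
Sigma = np.diag([0.04, 0.02, 0.03, 0.01])   # made-up covariance of theta_hat

W = float(r @ np.linalg.inv(R @ Sigma @ R.T) @ r)   # W = r'(R Sigma R')^(-1) r
crit = 3.841   # 5% critical value of the chi-square distribution with q = 1 df

print(W)         # 6.25
print(W > crit)  # True: reject the restricted model (alpha1 = 0)
```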
If EstCov is the same for all tests, but r varies, then waldtest "tests down" against multiple restricted models.
If EstCov varies among tests, but r does not, then waldtest "tests up" against multiple unrestricted models.
|
Nuclei Physics NEET Practice Questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, and PDF solved with answers
Atoms having different atomic numbers as well as different mass numbers, but the same number of neutrons, are called:
3. isotones
4. isodiaphers
In a radioactive substance at t = 0, the number of atoms is $8\times 10^{4}$ and its half-life period is 3 yr. The number of atoms equal to $1\times 10^{4}$ will remain after an interval of:
[UP CPMT 2010]
Subtopic: Radioactivity |
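A quick way to check such half-life questions: the atom count falls by a factor $N_0/N$, which corresponds to $\log_2(N_0/N)$ half-lives:

```python
import math

N0, N = 8e4, 1e4     # initial and remaining atom counts (from the question)
T_half = 3.0         # half-life in years

n_halvings = math.log2(N0 / N)   # 8e4 -> 1e4 is a factor of 8 = 2^3 halvings
t = n_halvings * T_half

print(t)  # 9.0 years
```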
What are the respective numbers of $\alpha$- and $\beta$-particles emitted in the following radioactive decay?
{}^{200}_{90}X \to {}^{168}_{80}Y
Subtopic: Types of Decay |
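Decay-chain questions like this reduce to two conservation rules: each $\alpha$-particle lowers the mass number by 4 and the atomic number by 2, and each $\beta^-$-particle raises the atomic number by 1. For the $^{200}_{90}X \to {}^{168}_{80}Y$ decay above:

```python
A_parent, Z_parent = 200, 90
A_daughter, Z_daughter = 168, 80

n_alpha = (A_parent - A_daughter) // 4           # only alphas change the mass number
n_beta = 2 * n_alpha - (Z_parent - Z_daughter)   # betas restore the charge balance

print(n_alpha, n_beta)  # 8 6
```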
The half-life of radium is 1622 years. How long will it take for seven-eighths of a given amount of radium to decay?
The mass of a proton is 1.0073 u and that of the neutron is 1.0087 u (u = atomic mass unit). The binding energy of ${}^{4}_{2}\mathrm{He}$ (mass of helium nucleus = 4.0015 u) is:
2. 0.061 u
3. 0.0305 J
4. 0.0305 erg
Subtopic: Mass-Energy Equivalent |
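The binding-energy question above is a mass-defect calculation; a sketch using the figures from the question and the usual 1 u ≈ 931 MeV conversion:

```python
m_p, m_n = 1.0073, 1.0087   # proton and neutron masses in u (from the question)
m_He = 4.0015               # helium-4 nuclear mass in u

dm = 2 * m_p + 2 * m_n - m_He   # mass defect of He-4 (2 protons + 2 neutrons)
E = dm * 931                    # binding energy in MeV, using 1 u ~ 931 MeV

print(round(dm, 4))  # 0.0305 u
print(round(E, 1))   # 28.4 MeV
```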
The binding energies of the nuclei A and B are $E_a$ and $E_b$, respectively. Three atoms of element B fuse to give one atom of element A, and an energy Q is released. Then $E_a$, $E_b$, and Q are related as:
1. $E_a - 3E_b = Q$
2. $3E_b - E_a = Q$
3. $E_a + 3E_b = Q$
4. $E_b + 3E_a = Q$
Subtopic: Nuclear Binding Energy |
A free neutron decays into a proton, an electron and:
1. a beta particle.
2. an alpha particle.
3. an antineutrino.
4. a neutrino.
In a radioactive sample, the fraction of the initial number of radioactive nuclei which remains undecayed after n mean lives is:
1. $\frac{1}{e^{n}}$
2. $e^{n}$
3. $1-\frac{1}{e^{n}}$
4. $\left(\frac{1}{e-1}\right)^{n}$
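The surviving fraction follows from the exponential decay law $N = N_0 e^{-t/\tau}$: after $t = n\tau$ (n mean lives), $N/N_0 = e^{-n}$. A numeric check for a hypothetical n = 2:

```python
import math

n = 2                      # number of mean lives elapsed (arbitrary example)
fraction = math.exp(-n)    # N / N0 = e^(-n)

print(round(fraction, 4))  # 0.1353
```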
The activity of a radioactive sample is measured as 9750 counts/min at t = 0 and as 975 counts/min at t = 5 min. The decay constant is approximately:
1. 0.922/min
The energy equivalent of one atomic mass unit is:
1. $1.6\times 10^{-19}$ J
2. $6.02\times 10^{23}$ J
3. 931 MeV
|
Semiconductor Electronics Physics NEET Practice Questions, MCQs, Past Year Questions (PYQs), NCERT Questions, Question Bank, Class 11 and Class 12 Questions, and PDF solved with answers
Physics - Semiconductor Electronics
The following configuration of the logic gate is equivalent to
Subtopic: Logic gates |
Three ideal diodes are connected to the battery as shown in the circuit. The current supplied by the battery is
Subtopic: PN junction |
In a common emitter transistor amplifier, the audio signal voltage across the load resistance of 1 k$\Omega$ is 2 V. If the base resistance is 200 $\Omega$ and the current amplification factor is 50, the input signal voltage will be
(1) 4 mV
(3) 16 mV
Subtopic: Transistor |
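The amplifier question above follows from Ohm's law and the definition of the current gain; a sketch with the question's values, assuming the standard textbook treatment of a common-emitter amplifier:

```python
V_out, R_load = 2.0, 1000.0   # signal voltage across the 1 kOhm load
R_base, beta = 200.0, 50.0    # base resistance and current amplification factor

I_c = V_out / R_load   # collector (output) signal current: 2 mA
I_b = I_c / beta       # base (input) signal current: I_c / beta
V_in = I_b * R_base    # input signal voltage

print(round(V_in * 1000, 3))  # 8.0 mV
```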
If a small amount of aluminium is added to the silicon crystal:
1. its resistance is decreased.
2. it becomes a p-type semiconductor.
3. there will be fewer free electrons than holes in the semiconductor.
4. All of these are correct.
Subtopic: Types of Semiconductors |
A transistor is operated in common emitter configuration at $V_{C} = 10$ V. When the base current is changed from 10 mA to 30 mA, it produces a change in the emitter current from 2 A to 4 A. The current amplification factor is
Subtopic: Applications of Transistor |
When a p-n junction is forward biased, then:
1. the depletion region becomes thick.
2. the p-side is at a higher potential than n side.
3. the current flowing is zero.
4. the effective resistance is of the order of \(10^6 \Omega\).
When a transistor is used as a switch it is in:
2. Cut off state
3. Saturation state
4. Both cut off state and saturation state are possible
Which of the following is correct for n-type semiconductor?
1. Electron is majority carriers and trivalent atoms are dopants.
2. Electrons are majority carriers and pentavalent atoms are dopants.
3. Holes are majority carriers and pentavalent atoms are dopants.
4. Holes are majority carriers and trivalent atoms are dopants.
If $\alpha$ is the current gain of a transistor in common base mode and $\beta$ is the current gain in common emitter mode, then:
$\alpha < 1$
$\beta > 1$
$\alpha = \frac{\beta}{1 + \beta}$
In semiconductors, which of the following gives the law of mass action (where the symbols have their usual meanings)?
1. $n_{i} = n_{e} = n_{h}$
2. $n_{i}^{2} = n_{e}n_{h}$
3. $n_{h} \gg n_{e}$
4. $n_{h} \ll n_{e}$
|
EuDML | A remark on the uniqueness of fundamental solutions to the p-Laplacian equation, p > 2.
A remark on the uniqueness of fundamental solutions to the $p$-Laplacian equation, $p > 2$
Laurençot, Ph.. "A remark on the uniqueness of fundamental solutions to the p-Laplacian equation, p > 2." Portugaliae Mathematica 55.4 (1998): 385-389. <http://eudml.org/doc/48262>.
@article{Laurençot1998,
author = {Laurençot, Ph.},
keywords = {parabolic p-Laplacian; bounded measures},
title = {A remark on the uniqueness of fundamental solutions to the p-Laplacian equation, p > 2.},
AU - Laurençot, Ph.
TI - A remark on the uniqueness of fundamental solutions to the p-Laplacian equation, p > 2.
KW - parabolic p-Laplacian; bounded measures
p-Laplacian, bounded measures, parabolic p-Laplacian
Articles by Laurençot
|
Comments on tag 01MY—Kerodon
Comments on Proposition 6.3.1.20
Go back to the page of Proposition 6.3.1.20.
Comment #477 by Haoqing on July 09, 2020 at 09:23
I guess the target in "$\theta : \operatorname{Fun}(\operatorname{\mathcal{D}}, \operatorname{\mathcal{E}})^{\simeq } \rightarrow \operatorname{Fun}( \operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}})$" should be $\operatorname{Fun}( \operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}})^{\simeq}$, which is again equivalent to $\operatorname{Fun}( \operatorname{\mathcal{C}}, \operatorname{\mathcal{E}^{\simeq}})$.
Comment #480 by Kerodon on July 13, 2020 at 14:59
|
author = {Gelfand, I. M. and Zelevinsky, A. V.},
title = {Representation models for classical groups and their higher symmetries},
AU - Gelfand, I. M.
AU - Zelevinsky, A. V.
TI - Representation models for classical groups and their higher symmetries
Gelfand, I. M.; Zelevinsky, A. V. Representation models for classical groups and their higher symmetries, dans Élie Cartan et les mathématiques d'aujourd'hui - Lyon, 25-29 juin 1984, Astérisque, no. S131 (1985), 12 p. http://archive.numdam.org/item/AST_1985__S131__117_0/
[1] Gelfand (I. M.) and Zelevinsky (A. V.). - Representation models for classical groups and their hidden symmetries, Funct. anal, and appl., t. 18, no. 3, 1984, p. 14-31 (in Russian). | MR 757246
[2] Biedenharn (L. C.) and Flath (D.). - Beyond the enveloping algebra of sl3. Preprint, Duke University, 1982. | Zbl 0564.17002
[3] Gelfand (I. M.), Bernstein (I. N.) and Gelfand (S. I.). - Representation models for compact Lie groups, Tr. Sem. Petrovski, t. 2, 1976, p. 3-21 (in Russian); | MR 453927 | Zbl 0339.22009
Gelfand (I. M.), Bernstein (I. N.) and Gelfand (S. I.). - Representation models for compact Lie groups, Selecta Math. Sov., t. 1, no. 2, 1981, p. 121-142. | Zbl 0499.22004
[4] Krämer (M.). - Some remarks suggesting an interesting theory of harmonic functions on $SU(2n+1)/Sp(n)$ and $SO(2n+1)/U(n)$, Arch. Math., t. 33, no. 1, 1979, p. 76-79. | Article | MR 553457 | Zbl 0404.43006
[5] Leites (D. A.). - Theory of supermanifolds. Petrozavodsk, 1983 (in Russian). | Zbl 0599.58001
[6] Serganova (V. V.). - Classification of simple real Lie superalgebras and symmetric superspaces, Funct. anal, and appl., t. 17, no. 3, 1983, p. 46-54 (in Russian). | MR 714220 | Zbl 0545.17001
[7] Proctor (R.). - Prepublication, 1984.
[8] Helgason (S.). - Differential geometry, Lie groups and symmetric spaces. - New York, Academic Press, 1978. | MR 514561 | Zbl 0451.53038
[9] Gelfand (I. M.) and Zelevinsky (A. V.). - Polytopes in the space of schemes and canonical basis in the irreducible representations of $\mathfrak{gl}_{3}$, Funct. anal. and appl., t. 19, no. 2, 1985, p. 72-75. | MR 800925
|
What is Rate of Change (ROC)
The rate of change (ROC) is the speed at which a variable changes over a specific period of time. ROC is often used when speaking about momentum, and it can generally be expressed as a ratio between a change in one variable relative to a corresponding change in another; graphically, the rate of change is represented by the slope of a line. The ROC is often illustrated by the Greek letter delta.
Rate of change is used to mathematically describe the percentage change in value over a defined period of time, and it represents the momentum of a variable. The calculation for ROC is simple in that it takes the current value of a stock or index and divides it by the value from an earlier period. Subtract one and multiply the resulting number by 100 to give it a percentage representation.
\text{ROC} = \left(\frac{\text{current value}}{\text{previous value}} - 1\right) \times 100
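The formula translates directly into a short helper; the prices below are invented for illustration:

```python
def rate_of_change(current, previous):
    """Percentage rate of change: (current / previous - 1) * 100."""
    return (current / previous - 1) * 100

# A stock moving from a hypothetical $100 to $110 over the period:
print(round(rate_of_change(110.0, 100.0), 2))  # 10.0 (percent)
print(round(rate_of_change(90.0, 100.0), 2))   # -10.0: negative ROC, falling momentum
```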
The Importance of Measuring Rate of Change
Rate of change is an extremely important financial concept because it allows investors to spot security momentum and other trends. For example, a security with high momentum, or one that has a positive ROC, normally outperforms the market in the short term. Conversely, a security that has a ROC that falls below its moving average, or one that has a low or negative ROC is likely to decline in value and can be seen as a sell signal to investors.
Rate of change is also a good indicator of market bubbles. Even though momentum is good and traders look for securities with a positive ROC, if a broad-market ETF, index, or mutual fund has a sharp increase in its ROC in the short term, it may be a sign that the market is unsustainable. If the ROC of an index or other broad-market security is over 50%, investors should be wary of a bubble.
Rate of Change and Its Relationship With Price
The rate of change is most often used to measure the change in a security's price over time. This is also known as the price rate of change (ROC). The price rate of change can be derived by taking the price of a security at time B minus the price of the same security at time A and dividing that result by the price at time A.
\begin{aligned} &\text{Price ROC} = \frac{B - A}{A} \times 100 \\ &\textbf{where:}\\ &B=\text{price at current time}\\ &A=\text{price at previous time}\\ \end{aligned}
This is important because many traders pay close attention to the speed at which one price changes relative to another. For example, option traders study the relationship between the rate of change in the price of an option relative to a small change in the price of the underlying asset, known as an options delta.
|
The Oxygen Cycle — lesson. Science CBSE, Class 9.
Atmospheric air contains 21% oxygen ($O_2$). The biogeochemical process which explains the circulation of oxygen is known as the oxygen cycle. Oxygen ($O_2$) is the second most abundant gas in the atmosphere. Aquatic organisms breathe oxygen dissolved in water.
We know that the oxygen and carbon cycles are connected, for example through photosynthesis. The following are the processes involved in the oxygen cycle:
In the respiration process, atmospheric oxygen is used.
During photosynthesis, $O_2$ is released as a byproduct back to the atmosphere.
This process repeats again and again. Oxygen is an essential component of the carbohydrates, proteins, fats, etc., present in the human body.
Fact: Oxygen makes up about 65% of the mass of the human body.
The main stores of oxygen are the atmosphere, biosphere, lithosphere, and hydrosphere. The lithosphere is the largest oxygen reservoir on Earth; for example, silica ($SiO_2$) contains a large amount of oxygen, as do other minerals. The oxides of metals found on the Earth also contain oxygen.
The natural process that produces oxygen is by photosynthesis and the reaction of sunlight with water vapour present in the atmosphere (photolysis). There is also some chemical weathering process that releases oxygen, for example, rusting.
We also know that the shells of marine organisms are made of calcium carbonate ($CaCO_3$), so when they die, they settle on the seafloor and form limestone sedimentary rock. The weathering of these rocks and the decomposition of marine organisms release oxygen back to the lithosphere.
The atmospheric oxygen is also essential for combustion, respiration, decomposition and rusting.
|
LMIs in Control/Matrix and LMI Properties and Tools/Concatenation Of Matrices - Wikibooks, open books for an open world
LMIs in Control/Matrix and LMI Properties and Tools/Concatenation Of Matrices
2 Concatenation of LMIs
Matrix concatenation is the process of joining one or more matrices to make a new matrix. This process is similar for concatenating the LMIs as well.
Concatenation of LMIs
A useful property of LMIs is that multiple LMIs can be concatenated together to form a single LMI. For example, satisfying the LMIs $A<0$ and $B<0$ is equivalent to satisfying the concatenated LMI:
{\displaystyle {\begin{bmatrix}A&0\\0&B\end{bmatrix}}<0}
More generally, satisfying the LMIs $A_i<0$ for $i=1,\ldots,n$ is equivalent to satisfying the single block-diagonal LMI:
{\displaystyle {\begin{bmatrix}A_{1}&0&\cdots &0\\0&A_{2}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &A_{n}\end{bmatrix}}<0}
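A numerical sketch of this property using NumPy (the matrices are arbitrary examples): $A_i<0$ means each matrix is negative definite, i.e. all its eigenvalues are negative, and this holds for the block-diagonal concatenation exactly when it holds for every block.

```python
import numpy as np

A = np.array([[-2.0, 0.5],
              [0.5, -1.0]])   # an arbitrary symmetric negative definite matrix
B = np.array([[-3.0]])        # a 1x1 block, trivially negative definite

# Block-diagonal concatenation diag(A, B)
M = np.block([[A, np.zeros((2, 1))],
              [np.zeros((1, 2)), B]])

def is_neg_def(X):
    """A symmetric matrix satisfies X < 0 iff all eigenvalues are negative."""
    return bool(np.all(np.linalg.eigvalsh(X) < 0))

print(is_neg_def(A), is_neg_def(B))  # True True
print(is_neg_def(M))                 # True: the concatenated LMI also holds
```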
|
Comments on tag 01Z5—Kerodon
Subsection 5.5.2: Digression: Slicing and the Homotopy Coherent Nerve (cite)
Comment #808 by Antonio Ruiz on November 18, 2020 at 23:03
There is an extra parenthesis at the end of 5.5.2.4.
In 5.5.2.8, $\textnormal{Ob}(C^{\triangleleft})$ should be $\textnormal{Ob}(C)\cup\{X_0\}$.
Comment #810 by Kerodon on November 19, 2020 at 14:23
|
Hubble's Law | Astronomy 801: Planets, Stars, Galaxies, and the Universe
Observations and some implications: Universe is expanding
During his work studying galaxies, Hubble used Cepheid variable stars to measure the distances to a sample of galaxies. Even before the Shapley/Curtis debate and the discovery that spiral nebulae are external galaxies, observations had shown that the vast majority of galaxies had spectral lines redshifted from the laboratory values. If you recall from our work on the Doppler effect, a redshift in a spectral line indicates that the object is moving away from us. In a publication by Hubble in 1929, he showed that if you plot the distance to a galaxy (measured from Cepheid variables) and the velocity of the galaxy (measured by the shift in the spectral lines), the two quantities are directly correlated! See the reproduction of his plot below.
Figure 10.4: Edwin Hubble's plot of the Velocity-Distance relationship for galaxies
Credit: Edwin Hubble, Proceedings of the National Academy of Sciences, vol. 15 no. 3, pp.168-173
Read Hubble's original articles! The astronomical community maintains an excellent resource meant primarily for practicing astronomers. It is the SAO/NASA Astrophysics Data System, and you can use it to search the astronomical literature. For example, if you want to find information on every article I have ever published, you can go there and stick my name in the author field. However, for a much more interesting search, you can go there and put in "Hubble, E" in the author field, and it will bring up the listing of Hubble's published works, including the paper in which he published the plot above.
Interpreting these diagrams
On the y-axis, you plot the velocity of the galaxy obtained from the spectrum. On the x-axis, you plot the distance to that galaxy, in this case obtained from Cepheids. If these two quantities (distance and velocity) had nothing to do with each other, then the diagram would look like what we call a "scatter plot." That is, it would appear as a bunch of points randomly placed in different locations. However, it is somewhat apparent in this case that you can draw a straight line through the points. What this means is that as the distance gets bigger, so does the velocity. In algebra class, you learned that the equation for a line that passes through the point (0,0) is:
y=mx
where y = the quantity plotted on the y-axis (velocity), x = the quantity plotted on the x-axis (distance), and m is the slope of the line. For the specific case of this relationship, we usually write the equation this way:
v={H}_{0}d
H0 is called the Hubble constant. It is the slope of the line that relates the distance of a galaxy to its velocity. If you know H0 and if you can calculate the velocity, v, from the spectrum, then you can use this equation to calculate the distance, d, to that galaxy. Let's quickly review how we measure velocities for objects that are receding from us. The equation that you saw in Lesson 4 for the Doppler shift was:
\Delta \lambda /{\lambda }_{0}={v}_{r}/c
Where Δλ is the difference between the measured wavelength for a line in the spectrum of an object and the wavelength for that same line observed in the spectrum of an object at rest. The other term on the left hand side, λ0, is the wavelength of that line in the spectrum of an object at rest. For objects at large distances from Earth where the distance is determined using Hubble's Law, we do not often refer to their recession velocities (e.g., "that galaxy has a velocity of 14,000 km/sec away from us") or their distances in Mpc (e.g., "that galaxy is 247 Mpc from us"), instead, we simply refer to the object's redshift, z. The definition of z is that it is the left hand side of the Doppler shift equation:
z=\Delta \lambda /{\lambda }_{0}
For example, if you observe a galaxy with an H-alpha line at 680 nm, and you know the rest wavelength for that line is 656.3 nm, then its redshift is:
z=\left(680nm-656.3nm\right)/656.3nm=0.036
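The redshift calculation and Hubble's Law can be combined into a short script. This is an illustrative Python sketch; the Hubble constant value of 70 km/s/Mpc and the small-z approximation v ≈ cz are assumptions for illustration, not values given in the lesson:

```python
# Illustrative only: H0 = 70 km/s/Mpc and the small-z approximation
# v ~ c*z are assumptions, not values given in the lesson.
C_KM_S = 299792.458   # speed of light in km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc

def redshift(observed_nm, rest_nm):
    """z = (measured wavelength - rest wavelength) / rest wavelength."""
    return (observed_nm - rest_nm) / rest_nm

def hubble_distance_mpc(z):
    """Distance from Hubble's Law v = H0 * d, with v approximated by c*z."""
    v = C_KM_S * z    # recession velocity in km/s
    return v / H0     # distance in Mpc

z = redshift(680.0, 656.3)   # H-alpha observed at 680 nm, rest 656.3 nm
print(round(z, 3))           # 0.036, as in the worked example
print(round(hubble_distance_mpc(z)))   # roughly 150 Mpc under these assumptions
```

With a different assumed H0 the distance scales inversely, which is why pinning down the Hubble constant matters so much for the distance ladder.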
Hubble's law, which says simply that a galaxy's velocity (or as is sometimes plotted, its redshift) is directly proportional to its distance, also tells us something important about the state of the universe. If the universe is static and unchanging, there should be no correlation between distance and velocity. However, if the universe is expanding, we expect a correlation between distance and velocity. The usual analogy used here is that of an explosion – the fragments of shrapnel produced are moving with a range of velocities, and the most distant objects from the source of the explosion have the largest velocities. Astronomers believe that Hubble's law is a direct consequence of the ongoing expansion of the universe and that the evidence suggests that the universe began in an explosion, which we call the Big Bang.
There are a couple of important caveats that apply to Hubble's Law. They are:
Hubble's Law only works for distant galaxies. For nearby galaxies (in the Local Group), stars inside the Milky Way, and for objects in our Solar System, the relationship between distance and velocity does not hold. The reason for the discrepancy for nearby galaxies is the "peculiar velocity" of the galaxy, that is, its real velocity through space that is unrelated to the expansion. For distant galaxies, their peculiar velocities are small enough that they still lie on or near the line for Hubble's Law. For nearby galaxies, though, their peculiar velocity is larger than their velocity from the expansion, so their peculiar velocity dominates their total velocity, causing them to lie far from the line relating velocity to distance. For example, the galaxy M31 does not even show a redshift; it is blueshifted, showing that its peculiar velocity is pointed towards us, rather than away from us.
Recall the concept of the "lookback time" for an object. For objects at very large distances from us, it is very common to see their distances referred to not in units like parsecs or light years, but in units of time. For example, astronomers will say, "The light from this galaxy was emitted when the universe was 10% of its present age, over 12 billion years ago." We base these descriptions on the redshift of the galaxy and the lookback time.
You can consider Hubble's Law to be the final rung in the distance ladder. If you know Hubble's constant accurately, then you can calculate the distance to any galaxy in the Universe simply by measuring its velocity (which is reasonably easy to do for any galaxy for which you can observe its spectrum). To calibrate Hubble's constant, though, you need to be able to plot the distances for a number of galaxies as obtained using other methods. While that may seem like an easy statement to make, it was an incredibly difficult task to accomplish. For decades, astronomers have argued over the precise value of Hubble's constant. This measurement was, in fact, one of the major reasons for building and launching the Hubble Space Telescope. It spent years observing Cepheid variables in distant galaxies in order to measure Hubble's constant as precisely as possible. The results were reported in 1999. See "Hubble Completes Eight-Year Effort to Measure Expanding Universe."
Note that in Hubble's diagram, above, he has data on galaxies out to 2 Mpc (that is, 2,000,000 parsecs). The diagram produced by the Hubble Key Project team used data on galaxies out to about 23 Mpc.
|
Sentence stopper thingy - Uncyclopedia, the content-free encyclopedia
Example of a Sentence stopper thingy being used. Courtesy of Mel Gibson enterprises.
“OMG! The Jews killed Jesus ?.....”
~ Oscar Wilde on Sentence stopper thingy
The Sentence stopper thingy (SST), written with the "." syntax, is often wrongly referred to as a period, which is the known act of a woman internally masturbating for five days as a Satanic libation.
.-. .-. .--.
| OO| | OO| / _.-'
| | | | \ '-. . . . . . . . .. . . . . . . . <-SST Genocide
'^^^' '^^^' ' ==
Pre-1942 example of sentence termination on a crude piece of work paper. Courtesy of Harvard University. Circa 1928
The Sentence stopper thingy was discovered in an ancient Aztec treasure chest along with schematics within the ruins of Mexico City in 1942 by Christopher Walken. It is believed that aliens from outer space derived the functional mechanics of the Sentence stopper thingy in order to expedite sentence completion as well as terminating sentences themselves. Up until this event, everyone on Earth comfortably used the methodical "go on to next sentence" axiom, preceded by a semi-colon, notifying the reader to advance to the next sentence without permanently discontinuing or prematurely advancing.
Einstein and Fermi hopelessly studying a diagram of the SST
When people were informed that their sentences could be concluded with a simple tap of the wrist, they immediately removed all the "go on to next sentence" buttons on their typewriters or laptops and replaced them with a button that said "Sentence stopper thingy." This was a grievous error which resulted in thousands of debilitated laptops and typewriters all over the world, especially in uninhabited Siberia. Apparently, when the Catholic priests informed the planet of the Sentence stopper thingy's capabilities, they forgot to issue instructions mentioning its utter simplicity or even how to use it. Instead, the priests started scaring everyone, telling them that the Sentence stopper thingy was a diabolic form of God's Wrath, or something. Even the most nobly minded individuals of the time, such as Enrico Fermi and Einstein, who also listened to the priests, could not understand its concept.
Einstein and Fermi's failure to properly grasp the simple mystery of the Sentence stopper thingy led to an indefinite delay of the Atom Bomb's development and consequently made Hitler dance. They spent weeks staring at pictorial renditions of the Sentence stopper thingy with little or no hope of solving its mysteries. Fermi conducted over two thousand pH and universal-indicator tests on the subject, while Einstein sat in a chair and tried to come up with a theory memorable enough to impress everyone for about another century and ensure his name in the history books.
Einstein's Patented SST Formula:
{\displaystyle .={\frac {1}{2}}God^{2}\ +m_{C}Hammer\;{\frac {Mr.\ T}{\sqrt {Candy-{\frac {v^{2}}{c^{2}}}}}}\ +\ 3tbs\ of\ pure\ Jesus}
Adolph Hitler rejoicing to the calamitous effects of the SST Special Forces on allied troops.
In the meantime, Hitler was making modifications to the Nazi army in order to acclimate them to the newfound technology. He employed special units for Allied syntax counter-operations to be carried out against the World's Funniest Joke, a formidable secret Britisher weapon created by the Monty Python wartime alliance which was eventually disabled by superior Nazi intelligence.
SST troopers circa 1941
The SST Special Forces consisted of 1 medic, 2 philanthropists, 41 philosophy professors, 83 dominatrix chauvinists, and 9,587 robot misanthropists code-named Syntax-Storm-Troopers in order to appropriately correlate with the acronym (SST). The SST wore useless white plastic armor with an interior radio and carried high-power laser blasters that shoot shiny red things believed to be the Devil. The opposing force's resistance proved ineffective against the SST syntax resistance power and, as a result, the allies were pushed further and further back down the front until they were forced to withdraw into occupied France. AXIS-occupied France. Whoops! The retreating allies got fucked big time and were reduced to only a few defenseless soldiers unable to recite the World's Funniest Joke effectively without a functioning Sentence stopper thingy. The remaining soldiers were taken prisoner and put into either POW or concentration camps depending upon Nazi 1337 factors at the time.
Domestic use[edit]
Stupid housewives have been known to use a Sentence stopper in order to plug bathtubs, sinks and toilets for cleansing purposes. They were eventually huffed for breaking the 15th Commandment, Thou shalt not commit wedlock to thy spouse as a mere proprietary object or trophy for monetary assurance, and were condemned to the kitchen for life to care for brats, bake pie and serve as the village milf.
There have been some reports that indicate teenagers overusing SST to the point of physical detriment, undernourishment and impending mental retardation. These causal occurrences have been remedied through SST medication, introduced by the Pfizer Corporation, which blocks cerebral CO2 inhibitor cells and puts the applicant into a transitory coma, invalidating them from future syntax-abuse.
SST is also the 39,149th independent record label started in the United States: SST Records, for alternating, punkish, and softcore rock music.
SST also refers to Super-Sonic donkey Travel; most commonly utilized in Russia.
SST is an acronym for Samoan Swat "kick-murder" wrestling Teams. The most widely known of them, the Headshrinkers, a colloquial moniker that denoted their aptitude in reducing their victims' heads into petty contorted stumps of flesh, were an expansion WWF team of belligerent Samoans who vanquished their opponents with H&K MP5's and Tactical USPs. Their record in the WWF currently remains undefeated.
In late 1983, the Catholic Church launched a crusade against the SST. It went exactly according to plan until their armies got bored and went home in early 1984.
|
Reorder eigenvalues in Schur factorization - MATLAB ordschur - MathWorks Deutschland
Reorder Schur Factorization
Reorder eigenvalues in Schur factorization
[US,TS] = ordschur(U,T,select)
[US,TS] = ordschur(U,T,keyword)
[US,TS] = ordschur(U,T,clusters)
[US,TS] = ordschur(U,T,select) reorders the Schur factorization X = U*T*U' produced by [U,T] = schur(X) and returns the reordered Schur matrix TS and the orthogonal matrix US, such that X = US*TS*US'.
In this reordering, the selected cluster of eigenvalues appears in the leading (upper left) diagonal blocks of the quasitriangular Schur matrix TS. The leading columns of US span the corresponding invariant subspace. The logical vector select specifies the selected cluster as e(select), where e = ordeig(T).
[US,TS] = ordschur(U,T,keyword) sets the selected cluster to include all eigenvalues in the region specified by keyword.
[US,TS] = ordschur(U,T,clusters) reorders multiple clusters simultaneously. ordschur sorts the specified clusters in descending order along the diagonal of TS, with the cluster of highest index appearing in the upper left corner.
Compute the Schur factors of a matrix, then reorder the factors according to a specified ordering of the eigenvalues.
Find the Schur factorization of a matrix X. The Schur factorization produces an upper quasitriangular matrix T and a unitary matrix U such that
\mathit{X}={\mathrm{UTU}}^{*}
U = 6×6 (orthogonal factor; numeric rows truncated)

T = 6×6 (upper quasitriangular factor; numeric rows truncated, with the eigenvalues of X, including 111, 27, and -27, on its diagonal)
Since T is triangular, the diagonal of T contains the eigenvalues of the original matrix X.
Reorder the Schur factorization so that the eigenvalues are in two clusters, with the cluster of negative eigenvalues appearing first along the diagonal of TS.
[US,TS] = ordschur(U,T,'lhp')
US = 6×6 (orthogonal factor; numeric rows truncated)

TS = 6×6 (reordered Schur matrix; numeric rows truncated, with the negative eigenvalues in the leading diagonal blocks)
Unitary matrix, specified as the matrix U returned by [U,T] = schur(X). The matrix U satisfies the properties X = U*T*U' and U'*U = eye(size(X)).
If U and T do not form a valid Schur decomposition, then ordschur does not produce an error and returns incorrect results.
T — Schur matrix
Schur matrix, specified as the upper quasitriangular matrix T returned by [U,T] = schur(X). The matrix T satisfies X = U*T*U'.
Cluster selector, specified as a logical vector with length equal to the number of eigenvalues. The eigenvalues appear along the diagonal of the matrix T produced by [U,T] = schur(X), and select marks those eigenvalues of e = ordeig(T) that belong to the selected cluster.
Cluster indices, specified as a vector of positive integers with length equal to the number of eigenvalues. clusters assigns each eigenvalue returned by e = ordeig(T) to a cluster; all eigenvalues with the same index value in clusters form one cluster.
Example: ordschur(U,T,[1 1 2 3 3]) groups five eigenvalues into three clusters.
US, TS — Reordered matrices
Reordered matrices, returned as matrices that satisfy X = US*TS*US'.
US is a unitary matrix, and TS is quasitriangular.
If T has complex conjugate pairs (nonzero elements on the subdiagonal), then you should move the pair to the same cluster. Otherwise, ordschur acts to keep the pair together:
If select is not the same for two eigenvalues in a conjugate pair, then ordschur treats both as selected.
If clusters is not the same for two eigenvalues in a conjugate pair, then ordschur treats both as part of the cluster with larger index.
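As a rough cross-check outside MATLAB, SciPy can perform the same kind of left-half-plane selection at factorization time. This is a hedged sketch using scipy.linalg.schur with sort='lhp' (analogous in effect to, but not the same API as, ordschur(U,T,'lhp'); the example matrix is ours):

```python
import numpy as np
from scipy.linalg import schur

# X has eigenvalues 2 and -5 (trace -3, determinant -10).
X = np.array([[1.0, 2.0],
              [3.0, -4.0]])

# sort='lhp' puts left-half-plane eigenvalues in the leading
# (upper left) diagonal block, like ordschur(U, T, 'lhp') in MATLAB.
T, Z, sdim = schur(X, sort='lhp')

print(sdim)                 # 1: one eigenvalue has negative real part
print(round(T[0, 0], 4))    # -5.0: the negative eigenvalue leads the diagonal
print(np.allclose(Z @ T @ Z.T, X))   # True: X = Z*T*Z' still holds
```

As with ordschur, the reordered factors still reconstruct the original matrix; only the position of the eigenvalues along the diagonal changes.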
ordeig | ordqz | schur
|
RoBERTa large model
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English.
Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='roberta-large')
>>> unmasker("Hello I'm a <mask> model.")
[{'sequence': "<s>Hello I'm a male model.</s>",
'token_str': 'Ġmale'},
{'sequence': "<s>Hello I'm a fashion model.</s>",
'token_str': 'Ġfashion'},
{'sequence': "<s>Hello I'm a professional model.</s>",
'token_str': 'Ġprofessional'},
{'sequence': "<s>Hello I'm a freelance model.</s>",
'token_str': 'Ġfreelance'},
{'sequence': "<s>Hello I'm a young model.</s>",
'token_str': 'Ġyoung'}]
from transformers import RobertaTokenizer, RobertaModel, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = RobertaModel.from_pretrained('roberta-large')      # PyTorch
model = TFRobertaModel.from_pretrained('roberta-large')    # or TensorFlow
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions:
>>> unmasker("The man worked as a <mask>.")
[{'sequence': '<s>The man worked as a mechanic.</s>',
'token_str': 'Ġmechanic'},
{'sequence': '<s>The man worked as a driver.</s>',
'token_str': 'Ġdriver'},
{'sequence': '<s>The man worked as a teacher.</s>',
'token_str': 'Ġteacher'},
{'sequence': '<s>The man worked as a bartender.</s>',
'token_str': 'Ġbartender'},
{'sequence': '<s>The man worked as a waiter.</s>',
'token_str': 'Ġwaiter'}]
>>> unmasker("The woman worked as a <mask>.")
[{'sequence': '<s>The woman worked as a nurse.</s>',
'token_str': 'Ġnurse'},
{'sequence': '<s>The woman worked as a waitress.</s>',
'token_str': 'Ġwaitress'},
{'sequence': '<s>The woman worked as a teacher.</s>',
'token_str': 'Ġteacher'},
{'sequence': '<s>The woman worked as a secretary.</s>',
'token_str': 'Ġsecretary'},
{'sequence': '<s>The woman worked as a cleaner.</s>',
'token_str': 'Ġcleaner'}]
The RoBERTa model was pretrained on the union of five datasets:
BookCorpus, a dataset consisting of 11,038 unpublished books;
English Wikipedia (excluding lists, tables and headers);
CC-News, a dataset containing 63 million English news articles crawled between September 2016 and February 2019;
OpenWebText, an open-source recreation of the WebText dataset used to train GPT-2;
Stories, a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.
Together these datasets weigh 160GB of text.
The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) with a vocabulary size of 50,000. The inputs of the model take pieces of 512 contiguous tokens that may span documents. The beginning of a new document is marked with <s> and the end of one by </s>.
The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The optimizer used is Adam with a learning rate of 4e-4,
\beta_1 = 0.9
\beta_2 = 0.98
\epsilon = 10^{-6}
|
A Transformation-Based Approach to Implication of GSTE Assertion Graphs (2013)
Guowu Yang, William N. N. Hung, Xiaoyu Song, Wensheng Guo
Generalized symbolic trajectory evaluation (GSTE) is a model checking approach that has successfully demonstrated its powerful capacity in the formal verification of VLSI systems. GSTE is an extension of symbolic trajectory evaluation (STE) to the model checking of \omega-regular properties. It is an alternative to classical model checking algorithms, where properties are specified as finite-state automata. In GSTE, properties are specified as assertion graphs, which are labeled directed graphs in which each edge carries two labeling functions: an antecedent and a consequent. In this paper, we show the complement relation between GSTE assertion graphs and finite-state automata with the expressiveness of regular languages and \omega-regular languages. We present an algorithm that transforms a GSTE assertion graph to a finite-state automaton and vice versa. By applying this algorithm, we transform the problem of GSTE assertion-graph implication to the problem of automata language containment. We demonstrate our approach with its application to the verification of a FIFO circuit.
Guowu Yang. William N. N. Hung. Xiaoyu Song. Wensheng Guo. "A Transformation-Based Approach to Implication of GSTE Assertion Graphs." J. Appl. Math. 2013 (SI10) 1 - 7, 2013. https://doi.org/10.1155/2013/709071
|
In Maple, an expression of the form f(x) is a function call: the name f applied to the argument x. More generally, f(a,b,c,...) denotes f applied to the argument sequence a, b, c, .... An unevaluated call such as sin(x) is an expression of type 'function', while the name sin itself is of type 'procedure'. For a procedure f with option remember, assignments of the form f(x) := value further populate this remember table.
> sin(Pi);
                                    0
> apply(sin, Pi);
                                    0
> sin(x);
                                  sin(x)
> apply(sin, x);
                                  sin(x)
> type(sin(x), 'function');
                                   true
> type(sin(x), 'procedure');
                                   false
> type(sin, 'function');
                                   false
> type(sin, 'procedure');
                                   true
> type('(x -> x^2)(a)', 'function');
                                   true
> (sin, cos)(x);
                              sin(x), cos(x)
> type([sin, cos](x), 'function');
                                   false
> type([sin, cos](x), 'list'('function'));
                                   true
> type('[sin, cos](x)', 'function');
                                   true
> pointto(assemble(kernelopts(dagtag = STOP)))(s);
                                  stop(s)
> type(pointto(assemble(kernelopts(dagtag = STOP)))(s), 'function');
                                   true
> interface('verboseproc' = 3):
> unassign('f'):
> eval(f);
                                    f
> f(2) := 3;
                                f(2) := 3
> eval(f);
              proc() option remember; 'procname(args)' end proc
              # (2) = 3
> f(3) := 1:
> f(1) := 2:
> eval(f);
              proc() option remember; 'procname(args)' end proc
              # (1) = 2
              # (2) = 3
              # (3) = 1
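The remember table shown above is Maple's built-in memoization. As a loose analogue (not Maple semantics), the same idea can be sketched in Python with a dictionary of remembered values and a symbolic fallback for unknown arguments:

```python
# A loose Python analogue of a Maple remember table: remembered
# values are looked up first; unknown arguments return an
# unevaluated symbolic call, like 'procname(args)' in Maple.
remember = {1: 2, 2: 3, 3: 1}   # the entries f(1)=2, f(2)=3, f(3)=1 above

def f(x):
    if x in remember:
        return remember[x]      # remembered result
    return f"f({x})"            # stay unevaluated (symbolic)

print(f(2))   # 3
print(f(5))   # f(5)
```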
|
Hull-White/Vasicek (HWV) Gaussian Diffusion model - MATLAB - MathWorks India
Create a hwv Object
Hull-White/Vasicek (HWV) Gaussian Diffusion model
Creates and displays hwv objects, which derive from the sdemrd (SDE with drift rate expressed in mean-reverting form) class.
Use hwv objects to simulate sample paths of NVars state variables expressed in mean-reverting drift-rate form. These state variables are driven by NBrowns Brownian motion sources of risk over NPeriods consecutive observation periods, approximating continuous-time Hull-White/Vasicek stochastic processes with Gaussian diffusions.
This model allows you to simulate vector-valued Hull-White/Vasicek processes of the form:
d{X}_{t}=S\left(t\right)\left[L\left(t\right)-{X}_{t}\right]dt+V\left(t\right)d{W}_{t}
S is an NVars-by-NVars matrix of mean reversion speeds (the rate of mean reversion).
dWt is an NBrowns-by-1 Brownian motion vector.
HWV = hwv(Speed,Level,Sigma)
HWV = hwv(___,Name,Value)
HWV = hwv(Speed,Level,Sigma) creates a default HWV object.
HWV = hwv(___,Name,Value) creates a HWV object with additional options specified by one or more Name,Value pair arguments.
The HWV object has the following Properties:
Although the hwv object does not enforce restrictions on the signs of any of these input arguments, each argument is specified as a positive value.
If StartState is a scalar, the hwv object applies the same initial value to all state variables on all trials.
If StartState is a column vector, the hwv object applies a unique initial value to each state variable on all trials.
If StartState is a matrix, the hwv object applies a unique initial value to each state variable on each trial.
F\left(t,{X}_{t}\right)=A\left(t\right)+B\left(t\right){X}_{t}
G\left(t,{X}_{t}\right)=D\left(t,{X}_{t}^{\alpha \left(t\right)}\right)V\left(t\right)
simBySolution Simulate approximate solution of diagonal-drift HWV processes
The Hull-White/Vasicek (HWV) short rate class derives directly from SDE with mean-reverting drift (that is, sdemrd):
d{X}_{t}=S\left(t\right)\left[L\left(t\right)-{X}_{t}\right]dt+V\left(t\right)d{W}_{t}
d{X}_{t}=0.2\left(0.1-{X}_{t}\right)dt+0.05d{W}_{t}
When you invoke these parameters with inputs, they behave like functions, giving the impression of dynamic behavior. The parameters accept the observation time t and a state vector Xt, and return an array of appropriate dimension. Even if you originally specified an input as an array, hwv treats it as a static function of time and state, thereby guaranteeing that all parameters are accessible through the same interface.
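For intuition about what an hwv object simulates, the scalar example above can be discretized with the Euler-Maruyama scheme. This Python sketch is illustrative only and does not use the MATLAB hwv API; the step size, horizon, and starting state are arbitrary choices:

```python
import numpy as np

def simulate_hwv(speed, level, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama path of dX = speed*(level - X) dt + sigma dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over dt
        x[k + 1] = x[k] + speed * (level - x[k]) * dt + sigma * dw
    return x

rng = np.random.default_rng(0)
path = simulate_hwv(speed=0.2, level=0.1, sigma=0.05,
                    x0=0.5, dt=0.01, n_steps=50_000, rng=rng)

# Mean reversion pulls the long-run average toward level = 0.1.
print(abs(path[25_000:].mean() - 0.1) < 0.05)
```

The drift term S(t)[L(t) - Xt] is what pulls the path back toward the level L, at a rate set by the speed S; the Brownian term V(t) dWt supplies the Gaussian noise around it.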
|
Aristarchus of Samos - zxc.wiki
Aristarchus of Samos (Greek Ἀρίσταρχος [ὁ Σάμιος], Arístarchos [ho Sámios]; * around 310 BC on Samos; † around 230 BC) was a Greek astronomer and mathematician.
He was a student of Strato of Lampsacus, the head of Aristotle's school in Athens, and one of the first Greek astronomers to advocate the heliocentric worldview, according to which the sun and not the earth is at the center of the universe. For this reason he is also known as the "Greek Copernicus". However, his theory met with hardly any recognition (an exception being Seleucus of Seleucia), so his ideas were overshadowed by the work of Aristotle and Ptolemy.
1.1 Heliocentric view of the world
1.2 size of the moon
1.3 Distance to the sun
Heliocentric worldview
The only work by Aristarchus that has survived is entitled On the Sizes and Distances of the Sun and Moon and is still based on the geocentric view of the world. From later quotations by other scholars it is known that he also advanced the hypothesis of a heliocentric worldview in another book. Archimedes wrote in his Sand Reckoner:
“You, King Gelon, know that astronomers call the 'universe' that sphere with the earth at its center, its radius being the distance between the center of the sun and the center of the earth. This is the general view as you have heard it from astronomers. But Aristarchus wrote a book that consists of certain hypotheses, and which, based on these assumptions, shows that the universe is many times larger than the 'universe' I just mentioned. His theses are that the fixed stars and the sun are immobile, that the earth moves around the sun on the circumference of a circle, with the sun in the middle of this orbit, and that the sphere of the fixed stars, the center of which is this sun and within which the earth moves, has such a great extent that the distance from the earth to this sphere equals the distance from this sphere to its center. "
As a consequence, he deduced that if the center was not the earth but the sun, a parallax would actually have to be observed: the appearance of the starry sky would have to change with the earth's position during one orbit around the sun. He explained the lack of parallax by an unimaginably great distance to the fixed stars, compared to which the diameter of the earth's orbit is vanishingly small. In fact, this parallax is less than one arc second even for the stars closest to the Sun and therefore cannot be seen with the naked eye. The apparent lack of parallax was the main argument against Aristarchus' model; stellar parallax was not detected until 1838, with telescopes.
The possible personal consequences for Aristarchus are indicated by Plutarch in the dialogue About the Moon Face :
“Kleanthes [a contemporary of Aristarchus] believed it was the duty of the Greeks to accuse Aristarchus of Samos of godlessness for having set the hearth of the universe [the earth] in motion, in an attempt to save the phenomena by assuming that the sky was at rest while the earth revolved in an oblique circle and rotated around its own axis.”
Aristarchus' calculations of the sizes of the earth, sun and moon (copy from the 10th century)
Aristarchus watched the moon move through the earth's shadow during a lunar eclipse . He concluded that the diameter of the earth was 2.85 times that of the moon. In fact, this factor is around 3.67.
Aristarchus realized that the half moon occurs when the sunlight strikes the moon exactly perpendicular to the direction in which we see it. At half moon, the triangle sun-moon-earth therefore has a right angle at the moon:
Aristarchus experimentally determined the angle between the sun and the moon at half moon to be at least 87°. By drawing a right-angled triangle with an angle of 87°, one can then determine the ratio of the earth-sun distance to the earth-moon distance. Nowadays this ratio can of course also be calculated trigonometrically as 1/cos(87°) ≈ 19.1. This shows that the sun is at least 19 times as far from the earth as the moon.
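Aristarchus' geometry is easy to check numerically: for a half moon, the ratio of the earth-sun distance to the earth-moon distance is 1/cos(angle). A short Python sketch (the function name is ours, for illustration):

```python
import math

def sun_moon_ratio(angle_deg):
    """Earth-sun distance over earth-moon distance at half moon:
    ratio = 1 / cos(measured sun-moon angle)."""
    return 1.0 / math.cos(math.radians(angle_deg))

print(round(sun_moon_ratio(87.0), 1))             # 19.1, Aristarchus' lower bound
print(round(sun_moon_ratio(89.0 + 51.0 / 60.0)))  # 382 with the rounded modern angle
```

The steep growth of 1/cos near 90° shows why a small error in the measured angle changes the inferred distance enormously.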
In fact, the sun is about 400 times as far from the earth as the moon, which corresponds to an angle of about 89° 51'. At the same time, this means that when viewed from the sun, the earth and moon appear only about nine arc minutes apart.
Aristarchus must have noticed that the angle is almost 90 degrees. In that case, the aim of the measurement had to be to answer the question: what is the minimum angle? Only in this way could a reliable statement be made about the minimum distance to the sun.
Since the sun and moon appear about the same size in the sky, Aristarchus concluded that their actual sizes must also differ from one another by at least a factor of 19. For the reasons mentioned, the true factor is about 400. But the mere fact that, according to Aristarchus, the sun had to be much larger than the earth was an important argument for the theory that such a large celestial body must also be at the center of the universe, unless the sun, as Aristarchus' opponents argued, consisted of massless ether.
Vitruvius mentions him as the inventor of a sundial in the shape of a hemispherical shell. Aristarchus also appears to have developed a theory of eclipses, as suggested by a remark in an anonymous commentary on Homer's Odyssey from the 2nd century BC.
Monument to Aristarchus at the Aristotle University of Thessaloniki, Greece
The lunar crater Aristarchus, named after the astronomer
The only Greek-speaking astronomer who followed the views of Aristarchus was Seleucus of Seleucia, who probably came from Seleucia-Ctesiphon. Aristarchus hypothesized that the earth moves around the sun and rotates around its axis; Seleucus is said to have provided the theoretical proof for this.
It was not until about 1800 years after Aristarchus that the heliocentric worldview was taken up again and fully elaborated by Nicolaus Copernicus. With explicit reference to Aristarchus, Copernicus propagated the heliocentric worldview in his work De revolutionibus orbium caelestium (1543).
The lunar crater Aristarchus and the asteroid (3999) Aristarchus are named after the astronomer.
Rudolf von Erhardt, Erika von Erhardt-Siebold: Archimedes' Sand-Reckoner. Aristarchus and Copernicus. In: Isis 33, 1942, pp. 578-602.
Thomas L. Heath: Aristarchus of Samos. The Ancient Copernicus. Oxford 1913, reprint New York 1981.
Alan C. Bowen, Bernard R. Goldstein: Aristarchus, Thales, and Heraclitus on solar eclipses. In: Physis Riv. Internaz. Storia Sci. (NS) 31, 3, 1994, pp. 689-729.
AG Gomez: Aristarchos of Samos, the Polymath . AuthorHouse, 2013, ISBN 9781496994233 .
M. Milankovitch: Aristarchus and Apollonios. The heliocentric and geocentric world systems of classical antiquity. In: Acad. Serb. Sci.Publ. Inst. Math. 9, 1956, pp. 79-92.
Otto Neugebauer: A History of Ancient Mathematical Astronomy. Springer, 3 volumes, 1975.
Wilhelm Nestle : The post-Socratics. 2 volumes. Jena 1923.
E. Wall: Anatomy of a precursor: The historiography of Aristarchos of Samos. In: Studies in History and Philosophy of Science 6, 3, 1975, pp. 201-228.
SV Zhitomirskii: The heliocentric hypothesis of Aristarchos of Samos and ancient cosmology. In: Istor.-Astronom. Issue 18, 1986, pp. 151–160 (Russian)
William H. Stahl: Aristarchus of Samos. In: Charles Coulston Gillispie (Ed.): Dictionary of Scientific Biography. Volume 1: Pierre Abailard - L. S. Berg. Charles Scribner's Sons, New York 1970, pp. 246-250.
Bartel Leendert van der Waerden : The heliocentric system in Greek, Hindu and Persian Astronomy , Annals New York Academy of Sciences, Volume 500, 1987, pp. 525-545
Friedrich Hultsch: Aristarchos 25. In: Paulys Realencyclopädie der classischen Altertumswissenschaft (RE). Volume II,1, Stuttgart 1895, col. 873-876.
Aristarchus of Samos is the main character of a historical novel:
Thomas Bührke : The sun in the center - Aristarchus of Samos. Ancient astronomy novel. Beck, Munich 2009, ISBN 978-3-406-58249-3 .
Josef M. Gaßner : From Aristotle to String Theory (2): Aristoteles, Aristarch and Eratosthenes , October 14, 2016.
Aristarchus - The Copernicus of antiquity on ancient-cultures.com
John J. O'Connor, Edmund F. Robertson : Aristarchus of Samos. In: MacTutor History of Mathematics archive .
Alan H. Batten: Aristarchus of Samos
↑ Joh. Christoph Sturm: The Incomparable Archimedes' Sand Reckoning, or profound invention of a number that can be pronounced with surprising ease, 1667; facsimile edition online at the Sächsische Landesbibliothek - Staats- und Universitätsbibliothek Dresden; Thomas L. Heath: Works of Archimedes, 1897 (English), various formats at www.archive.org
↑ 927f and 923a, Greek text on www.perseus.tufts.edu , English translation by Harold Cherniss and William C. Helmbold on www.perseus.tufts.edu
^ Alan C. Bowen, Bernard R. Goldstein: Aristarchus, Thales, and Heraclitus on solar eclipses: an astronomical commentary on P Oxy 53.3710 cols. 2.33-3.19 , Physis Riv. Internaz. Storia Sci. (NS), Volume 31, Issue 3, 1994, pp. 689-729.
↑ Strabon , Geographia 16; Plutarch, Quaestiones Platonicae .
↑ Bartel Leendert van der Waerden in: Annals New York Academy of Sciences , Volume 500, 1987, p. 528.
^ Alfred Stückelberger : The starry sky. To the Ptolemaic worldview. In: Christoph Markschies et al. (Ed.): Atlas der Weltbilder , Berlin 2011, pp. 42–52, here: 50; Copernicus writes: "Aristarchum Samium ferunt in eadem fuisse sententia".
This page is based on the copyrighted Wikipedia article "Aristarchos_von_Samos" (Authors); it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License. You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.
LLL - Maple Help
find a reduced basis of a lattice
LLL(lvect, integer)
lvect - list, set, or rtable of lists or Vectors, or a Matrix
The LLL(lvect) command finds a reduced basis (in the sense of Lovasz) of the lattice specified by the (row) vectors of lvect using the LLL algorithm.
If the lattice is generated by vectors with integer coefficients and the option integer is specified, then the reduction is performed using only integer arithmetic. This version is sometimes faster than the default version, which uses rational arithmetic.
This function requires that the dimension of the subspace generated by the vectors equals the number of vectors.
\mathrm{with}\left(\mathrm{IntegerRelations}\right):
\mathrm{LLL}\left([[1,2,3],[2,1,6]]\right)
[[0,-3,0],[1,-1,3]]
\mathrm{LLL}\left([[1,2,3],[2,1,6]],'\mathrm{integer}'\right)
[[0,-3,0],[1,-1,3]]
\mathrm{LLL}\left(\mathrm{Matrix}\left([[1,2,3],[-1,0,1],[0,1,1]]\right)\right)
\left[\begin{array}{ccc}-1 & 0 & 1\\ 0 & 1 & 1\\ 0 & -1 & 1\end{array}\right]
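The reductions above can be reproduced with a short textbook implementation of the Lenstra-Lenstra-Lovász algorithm. The following Python sketch (exact rational arithmetic via Fraction, with the standard parameter δ = 3/4) is an illustration of the algorithm, not Maple's internal routine; the function name lll_reduce is my own:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lll_reduce(basis, delta=Fraction(3, 4)):
    """Reduce a list of linearly independent integer row vectors
    with the textbook LLL algorithm (exact rational arithmetic)."""
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)

    def gram_schmidt():
        # Orthogonalize b, returning the orthogonal vectors and
        # the Gram-Schmidt coefficients mu[i][j].
        ortho, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], ortho[j]) / dot(ortho[j], ortho[j])
                v = [vi - mu[i][j] * oj for vi, oj in zip(v, ortho[j])]
            ortho.append(v)
        return ortho, mu

    k = 1
    while k < n:
        # Size reduction: subtract rounded multiples of earlier vectors.
        _, mu = gram_schmidt()
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q != 0:
                b[k] = [bk - q * bj for bk, bj in zip(b[k], b[j])]
        ortho, mu = gram_schmidt()
        # Lovász condition: accept b[k] or swap it backwards.
        if dot(ortho[k], ortho[k]) >= (delta - mu[k][k - 1] ** 2) * dot(ortho[k - 1], ortho[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]

print(lll_reduce([[1, 2, 3], [2, 1, 6]]))  # -> [[0, -3, 0], [1, -1, 3]]
```

On the first example this reproduces Maple's output exactly; recomputing the full Gram-Schmidt data in every iteration keeps the sketch simple at the cost of efficiency.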
Lenstra, A.K.; Lenstra, H.W.; and Lovász, L. "Factoring Polynomials with Rational Coefficients." Math. Ann., Vol. 261 (1982): 515-534.