# Improper integral of a cosine
I'm trying to follow some equations in an electrical engineering paper that I'm reading. I'll spare you the details, but at one point I come across:
$$\lim_{ T \rightarrow \infty }\int_{-T/2}^{T/2} \cos (\omega_r(t+\tau)) dt$$
For the reasoning in the paper to work, this integral should equal $T$. I can't prove this mathematically, nor find an intuitive reason for it. Intuitively I would have said the answer is $0$...
I guess it could also be rewritten as two integrals:
$$\lim_{ T \rightarrow \infty }\int_{-T/2}^{-\tau} \cos (\omega_r(t+\tau)) dt + \lim_{ T \rightarrow \infty }\int_{-\tau}^{T/2} \cos (\omega_r(t+\tau)) dt$$
but it didn't get me anywhere.
I know that $\displaystyle\lim_{ T \rightarrow \infty }\int_{0}^{T} \cos(x) dx$ is undefined as sinusoidal functions never converge, but I would expect the symmetry of $\displaystyle\lim_{ T \rightarrow \infty }\int_{-T}^{T} \cos(x) dx$ to make the integral equal to $0$.
I'd appreciate if anyone could point me in the right direction.
Edit: providing some context
It's one of the terms in a signal correlation. The full problem can be stated as follows:
Let
$$s(t) = a\cos (\omega_rt-\phi)+b$$ $$g(t) = \cos (\omega_rt)$$
Here $\omega_r$ is a constant (the angular frequency) and $s(t)$ is a phase-delayed version of $g(t)$ with a change of amplitude and a DC offset $b$.
They define the correlation of the signals as:
$$h(\tau)=(s\otimes g)(t)=\frac{1}{T}\lim_{T\rightarrow \infty }\int_{-T/2}^{T/2}s(t)\cdot g\left (t+\tau\right ) dt$$
And they state that the result of this integral is
$$h(\tau)=\frac{a}{2}\cos(\omega_r\tau+\phi)+b$$
without providing further details.
I naively did:
$$h(\tau)= \frac{1}{T}\lim_{T\rightarrow \infty }\int_{-T/2}^{T/2}\left [a\cos (\omega_rt-\phi)+b \right ]\cdot \cos (\omega_r(t+\tau)) dt$$
$$= \underbrace{\frac{a}{T}\lim_{T\rightarrow \infty }\int_{-T/2}^{T/2}\left [\cos (\omega_rt-\phi)\cos (\omega_r(t+\tau)) \right ]dt}_{\text{A}} + \underbrace{\frac{b}{T}\lim_{T\rightarrow \infty }\int_{-T/2}^{T/2} \cos (\omega_r(t+\tau)) dt}_{\text{B}}$$
and tried to work out the integral in term B, which must equal $T$ if that term is to equal $\frac{bT}{T}=b$.
Maybe I'm doing something obviously wrong :)
• Sparing the details isn't necessarily helpful here, since the assumptions of their integration are likely key. Can you include a link? – Semiclassical Jul 29 '14 at 20:37
• Good point. I can't link because it's unpublished material but you're probably right about assumptions. The equation might actually just be some abuse of notation, I see a lot of that in my field. – Sébastien Dawans Jul 29 '14 at 20:44
• Understandable. Only thing I can guess off the top of my head is that that's not an integral per se: rather, that they're considering a generic time-average $\langle f(t) \rangle$ and assuming that $f(t)=A\cos(\omega(t+\tau))$ for all signals of interest (i.e. that there's only one dominant frequency.) But without context I can only speculate. – Semiclassical Jul 29 '14 at 20:47
• Since you can't link the material directly, one thing you could do is find a published paper that has similar arguments and link that instead. That would at least give us something in the ball-park. – Semiclassical Jul 29 '14 at 20:56
• I'm going through similar work and will hopefully find a useful one to link. – Sébastien Dawans Jul 29 '14 at 21:06
Since $\cos$ is even, $\int_{-T}^{T} \cos(x)\; dx = 2 \int_{0}^{T} \cos(x)\; dx$, not $0$. But surely you know $\int_0^T \cos(x)\; dx = \sin(T)$?
More generally, $$\int_{-T/2}^{T/2} \cos(\omega (t + \tau))\; dt = 2\,{\frac {\sin \left( \omega\,T/2 \right) \cos \left( \omega\,\tau \right) }{\omega}}$$
This has no limit as $T \to \infty$ unless $\cos(\omega \tau)$ happens to be $0$. Either the paper is wrong, or you're misreading it.
• It's an electrical engineering paper, so probably the obvious interpretation isn't what they intend. (I speculated a bit above about what it might really mean.) – Semiclassical Jul 29 '14 at 20:49
• Maybe they're just saying it's bounded by $T$? And perhaps to make things nontrivial, $\omega$ is not constant? – Robert Israel Jul 29 '14 at 20:52
• Thanks for the lesson. As for $\omega$ it is the signal pulsation and always constant. I'll add a bit more context to the problem, thanks for your time – Sébastien Dawans Jul 29 '14 at 20:53
• I don't think it's a bounding thing---his statement makes me suspect they're considering time-averaged quantities, e.g. $$\langle f(t)\rangle=\lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} f(t)\cos(\omega(t+\tau))\,dt.$$ But it's decidedly a guess. – Semiclassical Jul 29 '14 at 20:54
• I added a second section to my initial question giving the context – Sébastien Dawans Jul 29 '14 at 21:06
It turns out to be a notation issue in the paper: another domain-specific assumption is that the time $T$ over which the signals are correlated is always much larger than the period of the signals.
Therefore it's implied that the limit includes the $1/T$ factor:
$$\lim_{ T \rightarrow \infty } \frac{1}{T}\int_{-T/2}^{T/2}\cdots$$
The B term in the original question is thus $0$.
As for the $b$, it's assumed to be a different DC offset than the one in the original signal $s(t)$ but they use the same variable.
I'm accepting Robert's answer as the original question is simply ill-posed. Thanks
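A quick numerical sketch (arbitrary parameter values, pure-Python trapezoidal rule; not from the paper) illustrates the resolution: the time average of $s(t)\,g(t+\tau)$ tends to $\frac{a}{2}\cos(\omega_r\tau+\phi)$, while the B term vanishes as $T$ grows.

```python
import math

# Arbitrary test values (not from the paper).
a, b, phi = 2.0, 0.7, 0.9
omega_r, tau = 3.0, 0.4

def s(t):
    return a * math.cos(omega_r * t - phi) + b

def g(t):
    return math.cos(omega_r * t)

def time_average(f, T, n=100_000):
    # (1/T) * integral of f over [-T/2, T/2], trapezoidal rule
    h = T / n
    total = 0.5 * (f(-T / 2) + f(T / 2))
    for k in range(1, n):
        total += f(-T / 2 + k * h)
    return total * h / T

T = 1000.0
h_tau = time_average(lambda t: s(t) * g(t + tau), T)
b_term = time_average(lambda t: b * g(t + tau), T)
print(h_tau, a / 2 * math.cos(omega_r * tau + phi))  # nearly equal
print(b_term)                                        # nearly 0
```

With $T$ a thousand periods long, the oscillatory remainder terms are of order $1/(\omega_r T)$ and the two printed values agree to three decimal places.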
# How do you express (x^2+8)/(x^2-5x+6) in partial fractions?
Apr 21, 2016
$\frac{{x}^{2} + 8}{{x}^{2} - 5 x + 6} = 1 - \frac{12}{x - 2} + \frac{17}{x - 3}$
#### Explanation:
$\frac{{x}^{2} + 8}{{x}^{2} - 5 x + 6}$
$= \frac{\left({x}^{2} - 5 x + 6\right) + \left(5 x + 2\right)}{{x}^{2} - 5 x + 6}$
$= 1 + \frac{5 x + 2}{{x}^{2} - 5 x + 6}$
$= 1 + \frac{5 x + 2}{\left(x - 2\right) \left(x - 3\right)}$
$= 1 + \frac{A}{x - 2} + \frac{B}{x - 3}$
$= 1 + \frac{A \left(x - 3\right) + B \left(x - 2\right)}{{x}^{2} - 5 x + 6}$
$= 1 + \frac{\left(A + B\right) x - \left(3 A + 2 B\right)}{{x}^{2} - 5 x + 6}$
Equating coefficients we get:
$\left\{\begin{matrix}A + B = 5 \\ 3 A + 2 B = - 2\end{matrix}\right.$
Subtract twice the first equation from the second to find:
$A = - 12$
Then substitute this value of $A$ into the first equation to find:
$B = 17$
So:
$\frac{{x}^{2} + 8}{{x}^{2} - 5 x + 6} = 1 - \frac{12}{x - 2} + \frac{17}{x - 3}$
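Since such decompositions are easy to get wrong by a sign, here is a quick exact check in Python, evaluating both sides at a few rational points away from the poles $x=2$ and $x=3$:

```python
from fractions import Fraction

# left side: (x^2 + 8) / (x^2 - 5x + 6), in exact rational arithmetic
def lhs(x):
    return Fraction(x**2 + 8, x**2 - 5 * x + 6)

# right side: 1 - 12/(x - 2) + 17/(x - 3)
def rhs(x):
    return 1 - Fraction(12, x - 2) + Fraction(17, x - 3)

for x in (0, 1, 4, 5, 10, -7):
    assert lhs(x) == rhs(x)
print("decomposition verified")
```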
1. AIEEE 2008 (marks: +4, −1)

The mean of the numbers a, b, 8, 5, 10 is 6 and the variance is 6.80. Then which one of the following gives possible values of a and b?

A. a = 0, b = 7
B. a = 5, b = 2
C. a = 1, b = 6
D. a = 3, b = 4
2. AIEEE 2007 (marks: +4, −1)

The average marks of boys in a class is 52 and that of girls is 42. The average marks of boys and girls combined is 50. The percentage of boys in the class is

A. 80
B. 60
C. 40
D. 20
3. AIEEE 2006 (marks: +4, −1)

Suppose a population A has 100 observations 101, 102, ..., 200, and another population B has 100 observations 151, 152, ..., 250. If $V_A$ and $V_B$ represent the variances of the two populations, respectively, then $V_A / V_B$ is

A. $1$
B. $\frac{9}{4}$
C. $\frac{4}{9}$
D. $\frac{2}{3}$
4. AIEEE 2005 (marks: +4, −1)

If in a frequency distribution the mean and median are 21 and 22 respectively, then its mode is approximately

A. 20.5
B. 22.0
C. 24.0
D. 25.5
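For reference, all four answers can be verified with a short Python sketch (the last question uses the usual empirical relation mode ≈ 3·median − 2·mean):

```python
import statistics as st

# Q1 (AIEEE 2008): the data must have mean 6 and population variance 6.80.
candidates = [(0, 7), (5, 2), (1, 6), (3, 4)]
matches = [(a, b) for a, b in candidates
           if st.mean([a, b, 8, 5, 10]) == 6
           and abs(st.pvariance([a, b, 8, 5, 10]) - 6.80) < 1e-9]
print("Q1:", matches)        # only (3, 4) survives -> option D

# Q2 (AIEEE 2007): fraction of boys p solves 52p + 42(1 - p) = 50.
p = (50 - 42) / (52 - 42)
print("Q2:", 100 * p)        # 80 percent -> option A

# Q3 (AIEEE 2006): variance is unchanged by shifting every observation.
A = list(range(101, 201))
B = [x + 50 for x in A]
ratio = st.pvariance(A) / st.pvariance(B)
print("Q3:", ratio)          # 1.0 -> option A

# Q4 (AIEEE 2005): empirical relation mode ~ 3 * median - 2 * mean.
mode = 3 * 22 - 2 * 21
print("Q4:", mode)           # 24 -> option C
```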
# Closed form for a simple hypergeometric $q$ series
I've run across an interesting hypergeometric $q$-series that I feel must have been found before:
$$\sum_{n=0}^{\infty}(-1)^n\frac{e^{n b y}}{\prod_{k=1}^{n}\left(e^{\pi k b^2}-e^{\pi k b^{-2}}\right)} = \sum_{n=0}^{\infty}(-1)^n\frac{z^n}{(q^2;q^2)_n}\,q^{\binom{n}{2}} = \sum_{n=0}^{\infty}(-1)^n\frac{z^n}{(q;q)_n(-q;q)_n}\,q^{\binom{n}{2}}$$

I've been playing around with it for a while, using the basic identities (and limits thereof) found in Gasper and Rahman, but I've been unable to find a closed form for the series. I'm not sure if this is quite a research-level question, but I was curious whether anyone had any insight into a closed form for this expression. Thank you!

## 1 Answer

Comparing with the modern (Gasper-Rahman) definition of the basic hypergeometric function, we have the expression

$$\sum_{n=0}^\infty\frac{(-1)^n z^n q^{\binom{n}{2}}}{(-q;q)_n(q;q)_n}={}_1\phi_1\left({0\atop -q}\;\middle|\;q,z\right)$$

(I assumed the omission of the indices in the $q$-Pochhammer symbols was unintentional.)
I do not know if there is a simpler closed form.
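Not an answer to the closed-form question, but a numerical sanity check that the two series forms in the question agree, using the identity $(q^2;q^2)_n=(q;q)_n(-q;q)_n$ (the values of $q$ and $z$ below are arbitrary test points with $|q|<1$):

```python
def qpoch(a, q, n):
    # finite q-Pochhammer symbol (a; q)_n = prod_{k=0}^{n-1} (1 - a q^k)
    prod = 1.0
    for k in range(n):
        prod *= 1 - a * q**k
    return prod

def series(z, q, terms=60, split=False):
    # sum_{n>=0} (-1)^n z^n q^{C(n,2)} / (q^2; q^2)_n, with the denominator
    # optionally written as the product (q; q)_n * (-q; q)_n
    total = 0.0
    for n in range(terms):
        denom = (qpoch(q, q, n) * qpoch(-q, q, n) if split
                 else qpoch(q**2, q**2, n))
        total += (-1)**n * z**n * q**(n * (n - 1) // 2) / denom
    return total

q, z = 0.35, 0.8
print(series(z, q), series(z, q, split=True))  # the two forms agree
```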
# Coefficient of relationship
The coefficient of relationship is a measure of the degree of consanguinity (or biological relationship) between two individuals. The term coefficient of relationship was defined by Sewall Wright in 1922, and was derived from his definition of the coefficient of inbreeding of 1921. The measure is most commonly used in genetics and genealogy. A coefficient of inbreeding can be calculated for an individual, and is typically one-half the coefficient of relationship between the parents.
In general, the higher the level of inbreeding, the closer the coefficient of relationship between the parents approaches a value of 1, expressed as a percentage;[a] it approaches a value of 0 for individuals with arbitrarily remote common ancestors.
## Coefficient of relationship
The coefficient of relationship (r) between two individuals B and C is obtained by a summation of coefficients calculated for every line by which they are connected to their common ancestors. Each such line connects the two individuals via a common ancestor, passing through no individual which is not a common ancestor more than once. A path coefficient between an ancestor A and an offspring O separated by n generations is given as:
${\displaystyle p_{AO}=2^{-n}\cdot {\left({\frac {1+f_{A}}{1+f_{O}}}\right)}^{1/2}}$
where fA and fO are the coefficients of inbreeding for A and O, respectively.
The coefficient of relationship rBC is now obtained by summing over all path coefficients:
${\displaystyle r_{BC}=\sum {p_{AB}\cdot p_{AC}}}$
By assuming that the pedigree can be traced back to a sufficiently remote population of perfectly random-bred stock (fA = 0), the definition of r may be simplified to
${\displaystyle r_{BC}=\sum _{p}{2^{-L(p)}}}$
where p enumerates all paths connecting B and C with unique common ancestors (i.e. all paths terminate at a common ancestor and may not pass through a common ancestor to a common ancestor's ancestor), and L(p) is the length of the path p.
To give an (artificial) example: Assuming that two individuals share the same 32 ancestors of n = 5 generations ago, but do not have any common ancestors at four or fewer generations ago, their coefficient of relationship would be
${\textstyle r=2^{n}\cdot 2^{-2n}=2^{-n}}$, which for n = 5 is ${\textstyle 2^{-5}={\frac {1}{32}}}$, or approximately 0.0313 (about 3%).
Individuals in the same situation with respect to their 1024 ancestors of ten generations ago would have a coefficient of r = 2^−10 ≈ 0.1%. It follows that the value of r can be given to an accuracy of a few percent if the family tree of both individuals is known for a depth of five generations, and to an accuracy of a tenth of a percent if the known depth is at least ten generations. The contribution to r from common ancestors of 20 generations ago (corresponding to roughly 500 years in human genealogy, or the contribution from common descent from a medieval population) falls below one part per million.
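The worked example and the decay claims above amount to one line of arithmetic per case:

```python
# r = (number of shared ancestors) * 2**(-2n), with 2**n shared ancestors
# n generations back and no closer common ancestors.
for n in (5, 10, 20):
    r = 2**n * 2 ** (-2 * n)
    print(n, r)
# n = 5 gives 1/32 (~3%); n = 10 gives ~0.1%; n = 20 falls below 1e-6.
```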
## Human relationships
Diagram of common family relationships, where the area of each colored circle is scaled according to the coefficient of relatedness. All relatives of the same relatedness are included together in one of the gray ellipses. Legal degrees of relationship can be found by counting the number of solid-line connections between the self and a relative.[b]
The coefficient of relationship is sometimes used to express degrees of kinship in numeric terms in human genealogy.
In human relationships, the value of the coefficient of relationship is usually calculated based on the knowledge of a full family tree extending to a comparatively small number of generations, perhaps of the order of three or four. As explained above, the value for the coefficient of relationship so calculated is thus a lower bound, with an actual value that may be up to a few percent higher. The value is accurate to within 1% if the full family tree of both individuals is known to a depth of seven generations.[c]
| Degree of relationship | Relationship | Coefficient of relationship (r) |
|---|---|---|
| 0 | identical twins; clones | 100%[d] (1) |
| 1 | mother / father / daughter / son [2] | 50% (2^−1) |
| 1 | parent's identical twin / identical twin's child | 50% (2^−1) |
| 2 | half-sister / half-brother | 25% (2^−2) |
| 2 | full sister / full brother | 50% (2⋅2^−2) |
| 2 | 3/4-sister / 3/4-brother | 37.5% (2^−2 + 2^−3) |
| 2 | grandmother / grandfather / granddaughter / grandson | 25% (2^−2) |
| 3 | half-aunt / half-uncle / half-niece / half-nephew | 12.5% (2^−3) |
| 3 | aunt / uncle / niece / nephew | 25% (2⋅2^−3) |
| 4 | half-first cousin | 6.25% (2^−4) |
| 4 | first cousin | 12.5% (2⋅2^−4) |
| 4 | sesqui-first cousin | 18.75% (3⋅2^−4) |
| 4 | double-first cousin | 25% (4⋅2^−4) |
| 3 | great-grandmother / great-grandfather / great-granddaughter / great-grandson | 12.5% (2^−3) |
| 4 | half-grandaunt / half-granduncle / half-grandniece / half-grandnephew | 6.25% (2^−4) |
| 4 | grandaunt / granduncle / grandniece / grandnephew | 12.5% (2⋅2^−4) |
| 5 | half-first cousin once removed | 3.125% (2^−5) |
| 5 | first cousin once removed | 6.25% (2⋅2^−5) |
| 5 | sesqui-first cousin once removed | 9.375% (3⋅2^−5) |
| 5 | double-first cousin once removed | 12.5% (4⋅2^−5) |
| 6 | half-second cousin | 1.5625% (2^−6) |
| 6 | second cousin | 3.125% (2⋅2^−6) |
| 6 | sesqui-second cousin | 4.6875% (3⋅2^−6) |
| 6 | double-second cousin | 6.25% (4⋅2^−6) |
| 6 | sester-second cousin | 7.8125% (5⋅2^−6) |
| 6 | triple-second cousin | 9.375% (6⋅2^−6) |
| 6 | sesqua-second cousin | 10.9375% (7⋅2^−6) |
| 4 | great-great-grandmother / great-great-grandfather / great-great-granddaughter / great-great-grandson | 6.25% (2^−4) |
| 5 | half-great-grandaunt / half-great-granduncle / half-great-grandniece / half-great-grandnephew | 3.125% (2^−5) |
| 5 | great-grandaunt / great-granduncle / great-grandniece / great-grandnephew | 6.25% (2⋅2^−5) |
| 6 | half-first cousin twice removed | 1.5625% (2^−6) |
| 6 | first cousin twice removed | 3.125% (2⋅2^−6) |
| 6 | sesqui-first cousin twice removed | 4.6875% (3⋅2^−6) |
| 6 | double-first cousin twice removed | 6.25% (4⋅2^−6) |
| 7 | half-second cousin once removed | 0.78125% (2^−7) |
| 7 | second cousin once removed | 1.5625% (2⋅2^−7) |
| 7 | sesqui-second cousin once removed | 2.34375% (3⋅2^−7) |
| 7 | double-second cousin once removed | 3.125% (4⋅2^−7) |
| 7 | sester-second cousin once removed | 3.90625% (5⋅2^−7) |
| 7 | triple-second cousin once removed | 4.6875% (6⋅2^−7) |
| 7 | sesqua-second cousin once removed | 5.46875% (7⋅2^−7) |
| 7 | quadruple-second cousin once removed | 6.25% (8⋅2^−7) |
| 8 | third cousin | 0.78125% (2⋅2^−8) |
| 5 | great-great-great-grandmother / great-great-great-grandfather / great-great-great-granddaughter / great-great-great-grandson | 3.125% (2^−5) |
| 6 | half-great-great-grandaunt / half-great-great-granduncle / half-great-great-grandniece / half-great-great-grandnephew | 1.5625% (2^−6) |
| 6 | great-great-grandaunt / great-great-granduncle / great-great-grandniece / great-great-grandnephew | 3.125% (2⋅2^−6) |
| 7 | first cousin thrice removed | 1.5625% (2⋅2^−7) |
| 8 | second cousin twice removed | 0.78125% (2⋅2^−8) |
| 9 | third cousin once removed | 0.390625% (2⋅2^−9) |
| 10 | fourth cousin | 0.1953125% (2⋅2^−10)[e] |
Most incest laws concern the relationships where r = 25% or higher, although many ignore the rare case of double first cousins. Some jurisdictions also prohibit sexual relations or marriage between cousins of various degree, or individuals related only through adoption or affinity. Whether there is any likelihood of conception is generally considered irrelevant.
## Kinship coefficient
The kinship coefficient is a simple measure of relatedness, defined as the probability that a pair of randomly sampled homologous alleles are identical by descent.[3] More simply, it is the probability that an allele selected randomly from an individual, i, and an allele selected at the same autosomal locus from another individual, j, are identical and from the same ancestor.
| Relationship | Kinship coefficient |
|---|---|
| individual with self | 1/2 |
| full sister / full brother | 1/4 |
| mother / father / daughter / son | 1/4 |
| grandmother / grandfather / granddaughter / grandson | 1/8 |
| aunt / uncle / niece / nephew | 1/8 |
| first cousin | 1/16 |
| half-sister / half-brother | 1/8 |

Several of the most common family relationships and their corresponding kinship coefficients.
For non-inbred individuals, the coefficient of relatedness is equal to twice the kinship coefficient.
### Calculation
The kinship coefficient between two individuals, i and j, is represented as Φij. The kinship coefficient between a non-inbred individual and itself, Φii, is equal to 1/2. This is because humans are diploid, so the only way for the randomly chosen alleles to be identical by descent is if the same allele is chosen twice (probability 1/2). Similarly, the kinship between a parent and a child is the probability that the randomly picked allele in the child came from that parent (probability 1/2) times the probability that the allele picked from the parent is the same one passed to the child (probability 1/2). Since these two events are independent, the probabilities multiply: Φij = 1/2 × 1/2 = 1/4.[4][5]
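This kind of recursion can be sketched in a few lines of Python. The tiny pedigree below is invented for illustration; founders are assumed non-inbred and mutually unrelated, so the coefficient of relationship is simply r = 2Φ:

```python
from functools import lru_cache

# Invented example pedigree: parents[i] is the pair of parents of i,
# or None for founders (assumed non-inbred and mutually unrelated).
parents = {
    "gm": None, "gf": None,
    "m": ("gm", "gf"), "aunt": ("gm", "gf"),
    "f": None, "uncle": None,
    "child": ("m", "f"), "cousin": ("aunt", "uncle"),
}

def depth(i):
    # generations between i and its most remote founder ancestor
    return 0 if parents[i] is None else 1 + max(depth(p) for p in parents[i])

@lru_cache(maxsize=None)
def kinship(i, j):
    if i == j:
        return 0.5                 # Phi_ii for a non-inbred individual
    if depth(i) < depth(j):        # recurse through the deeper individual,
        i, j = j, i                # which cannot be an ancestor of the other
    if parents[i] is None:
        return 0.0                 # founders are unrelated to everyone else
    p, q = parents[i]
    return 0.5 * (kinship(p, j) + kinship(q, j))

print(2 * kinship("m", "child"))       # 0.5   (parent-child)
print(2 * kinship("m", "aunt"))        # 0.5   (full siblings)
print(2 * kinship("child", "cousin"))  # 0.125 (first cousins)
```

The recovered values match the table above: r = 50% for parent-child and full siblings, and 12.5% for first cousins.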
## Notes
1. ^ strictly speaking, r=1 for clones and identical twins, but since the definition of r is usually intended to estimate the suitability of two individuals for breeding, they are typically taken to be of opposite sex.
2. ^ For instance, one's sibling connects to one's parent, which connects to one's self (2 lines) while one's aunt/uncle connects to one's grandparent, which connects to one's parent, which connects to one's self (3 lines).
3. ^ A full family tree of seven generations (128 paths to ancestors of the 7th degree) is unreasonable even for members of high nobility. For example, the family tree of Queen Elizabeth II is fully known for a depth of six generations, but becomes difficult to trace in the seventh generation.
4. ^ By replacing the notion of "generation" in the definition with "meiosis". Since identical twins are not separated by meiosis, there are no "generations" between them, hence n=0 and r=1. See: [1]
5. ^ This degree of relationship is usually indistinguishable from the relationship to a random individual within the same population (tribe, country, ethnic group).
## References
1. ^
2. ^ "Kin Selection". Benjamin/Cummings. Retrieved 2007-11-25.
3. ^ Lange, Kenneth (2003). Mathematical and statistical methods for genetic analysis. Springer. p. 81. ISBN 978-0-387-21750-5.
4. ^ Lange, Kenneth (2003). Mathematical and statistical methods for genetic analysis. Springer. pp. 81–83.
5. ^ Jacquard, Albert (1974). The genetic structure of populations. Springer-Verlag. ISBN 978-3-642-88415-3.
# Integration homework
I haven't done the calculation myself, but according to Euler's formula you should take the real part of the sum of exponential functions.
Isn't the real part the $\cos(2nx)$ series?
vela (Staff Emeritus, Homework Helper):

> I know, you are not allowed to solve the question for me, but if the student is stuck shouldn't you give some hints? :(

We've given you hints, but for some reason, you've completely ignored them.
I ignored some because Euler's formula isn't covered by the syllabus. Maybe we should use $\operatorname{cis}(2x)+(\operatorname{cis}(2x))^2+\cdots+(\operatorname{cis}(2x))^n$ as a suitable geometric series.
I was able to do the integral without using the series but that isn't acceptable for this question.
Let $$I_n=\int_0^{\pi/2}\frac{\sin ((2n+1)x)}{\sin(x)}dx$$
We can use the sum-to-product formula
$$\sin((2n+1)x)-\sin((2n-1)x)=2\sin(x)\cos(2nx)$$
to split the integrand:
$$I_n=\int_0^{\pi/2} \frac{\sin((2n+1)x)}{\sin(x)}dx=\int_0^{\pi/2} \frac{\sin((2n-1)x)}{\sin(x)}dx+2\int_0^{\pi/2}\cos(2nx)dx$$
The cosine integral is zero for every integer $n\geq 1$, since $\int_0^{\pi/2}\cos(2nx)\,dx=\frac{\sin(n\pi)}{2n}=0$, so we have that $$I_n=I_{n-1}$$
By induction this implies that $$I_n=I_0=\pi/2$$
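The result can also be checked numerically; here is a short Python sketch using Simpson's rule (the integrand's singularity at $x=0$ is removable, with value $2n+1$):

```python
import math

def integrand(x, n):
    # sin((2n+1)x)/sin(x), extended by its limit 2n+1 at x = 0
    if x == 0.0:
        return 2 * n + 1
    return math.sin((2 * n + 1) * x) / math.sin(x)

def simpson(f, a, b, m=2000):
    # composite Simpson's rule; m must be even
    h = (b - a) / m
    total = f(a) + f(b)
    for k in range(1, m):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

for n in range(5):
    print(n, simpson(lambda x: integrand(x, n), 0.0, math.pi / 2))
# every value is ~1.5707963..., i.e. pi/2, independent of n
```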
# zbMATH — the first resource for mathematics
## Indagationes Mathematicae. New Series
Short Title: Indag. Math., New Ser.
Publisher: Elsevier, Amsterdam; Royal Netherlands Academy of Arts and Sciences, Amsterdam
ISSN: 0019-3577
Online: http://www.sciencedirect.com/science/journal/00193577
Predecessor: Indagationes Mathematicae
Comments: Indexed cover-to-cover; previously also published as Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen. Series A. Mathematical Sciences
Documents Indexed: 1,640 publications (since 1990)
References Indexed: 1,585 publications with 28,176 references
#### Latest Issues
32, No. 5 (2021) 32, No. 4 (2021) 32, No. 3 (2021) 32, No. 2 (2021) 32, No. 1 (2021) 31, No. 6 (2020) 31, No. 5 (2020) 31, No. 4 (2020) 31, No. 3 (2020) 31, No. 2 (2020) 31, No. 1 (2020) 30, No. 6 (2019) 30, No. 5 (2019) 30, No. 4 (2019) 30, No. 3 (2019) 30, No. 2 (2019) 30, No. 1 (2019) 29, No. 6 (2018) 29, No. 5 (2018) 29, No. 4 (2018) 29, No. 3 (2018) 29, No. 2 (2018) 29, No. 1 (2018) 28, No. 6 (2017) 28, No. 5 (2017) 28, No. 4 (2017) 28, No. 3 (2017) 28, No. 2 (2017) 28, No. 1 (2017) 27, No. 5 (2016) 27, No. 4 (2016) 27, No. 3 (2016) 27, No. 2 (2016) 27, No. 1 (2016) 26, No. 5 (2015) 26, No. 4 (2015) 26, No. 3 (2015) 26, No. 2 (2015) 26, No. 1 (2015) 25, No. 5 (2014) 25, No. 4 (2014) 25, No. 3 (2014) 25, No. 2 (2014) 25, No. 1 (2014) 24, No. 4 (2013) 24, No. 3 (2013) 24, No. 2 (2013) 24, No. 1 (2013) 23, No. 4 (2012) 23, No. 3 (2012) 23, No. 1-2 (2012) 22, No. 3-4 (2011) 22, No. 1-2 (2011) 21, No. 3-4 (2011) 21, No. 1-2 (2011) 20, No. 4 (2009) 20, No. 3 (2009) 20, No. 2 (2009) 20, No. 1 (2009) 19, No. 4 (2008) 19, No. 3 (2008) 19, No. 2 (2008) 19, No. 1 (2008) 18, No. 4 (2007) 18, No. 3 (2007) 18, No. 2 (2007) 18, No. 1 (2007) 17, No. 4 (2006) 17, No. 3 (2006) 17, No. 2 (2006) 17, No. 1 (2006) 16, No. 3-4 (2005) 16, No. 2 (2005) 16, No. 1 (2005) 15, No. 4 (2004) 15, No. 3 (2004) 15, No. 2 (2004) 15, No. 1 (2004) 14, No. 3-4 (2003) 14, No. 2 (2003) 14, No. 1 (2003) 13, No. 4 (2002) 13, No. 3 (2002) 13, No. 2 (2002) 13, No. 1 (2002) 12, No. 4 (2001) 12, No. 3 (2001) 12, No. 2 (2001) 12, No. 1 (2001) 11, No. 4 (2000) 11, No. 3 (2000) 11, No. 2 (2000) 11, No. 1 (2000) 10, No. 4 (1999) 10, No. 3 (1999) 10, No. 2 (1999) 10, No. 1 (1999) 9, No. 4 (1998) 9, No. 3 (1998) 9, No. 2 (1998) ...and 31 more Volumes
#### Authors
17 Shorey, Tarlok Nath 14 Luca, Florian 13 Tijdeman, Robert 11 Nowak, Marian 11 Shparlinski, Igor E. 10 Korevaar, Jacob 9 Bridges, Douglas Suth 9 Broer, Henk W. 9 Hudzik, Henryk 9 Saradha, N. 8 de Pagter, Bernardus 8 Kaashoek, Marinus Adriaan 8 Ricker, Werner Joseph 8 Schikhof, Wilhelmus Hendricus 8 Śliwa, Wiesław 8 Van der Waall, Robert Willem 7 Driver, Kathy A. 7 Hajdu, Lajos 7 van der Put, Marius 6 Beukers, Frits 6 Boulabiar, Karim Mohamed 6 Gerritzen, Lothar 6 Hanßmann, Heinz 6 Ishihara, Hajime 6 Maligranda, Lech 6 Shamseddine, Khodr 6 van Gaans, Onno Ward 6 van Neerven, Jan M. A. M. 6 Weber, Michel Jean Georges 6 Wiegerinck, Jan J. O. O. 5 Álvarez-Nodarse, Renato 5 Claasen, Henk L. 5 Goldbach, R. W. 5 Jiang, Kan 5 Kubzdela, Albert 5 Ran, André C. M. 5 Sukochev, Fedor Anatol’evich 5 Talebi, Gholamreza 5 Tenenbaum, Gérald 5 Thuswaldner, Jörg Maximilian 5 Top, Jaap 5 van Dijk, Gerrit 5 van Mill, Jan 5 van Rooij, Arnoud C. M. 5 Zaharescu, Alexandru 4 Abramovich, Yuriĭ Aleksandrovich 4 Alzer, Horst 4 Biswas, Indranil 4 Boers, Arie Hendrik 4 Boutabaa, Abdelbaki 4 Buskes, Gerard J. H. M. 4 Dajani, Karma 4 Dow, Alan S. 4 Et-Taoui, Boumediene 4 Fokkink, Robbert Johan 4 Győry, Kálmán 4 Hesselink, Wim H. 4 Khrennikov, Andreĭ Yur’evich 4 Klop, Jan Willem 4 Koelink, Erik 4 Labuschagne, Coenraad C. A. 4 Laishram, Shanta 4 Lubinsky, Doron S. 4 Maciejewski, Andrzej J. 4 Marcellán Español, Francisco 4 Nowicki, Andrzej 4 Ochsenius Alarcón, Maria Herminia 4 Pambuccian, Victor V. 4 Panyushev, Dmitri Ivanovich 4 Perez-Garcia, Cristina 4 Pintér, Ákos 4 Ponnusamy, Saminathan 4 Raynaud, Yves 4 San Martin, Luiz Antonio Barrera 4 Schofield, Aidan 4 Taghavi Jelodar, Ali 4 Wójtowicz, Marek 4 Zaïmi, Toufik Mostepha 4 Zhang, Xiaodong 3 Akiyama, Shigeki 3 Appell, Jürgen M. 3 Azad, Hassan 3 Baake, Michael 3 Balasubramanian, Ramachandran 3 Ballico, Edoardo 3 Banks, William D. 3 Bennett, Michael A. 
3 Bingham, Nicholas Hugh 3 Bugeaud, Yann 3 Coppens, Marc 3 Coquand, Thierry 3 Das, Pratulananda 3 de Snoo, Hendrik S. V. 3 Dodds, Peter G. 3 Duistermaat, Johannes Jisse 3 Dujella, Andrej 3 Escassut, Alain 3 Fechner, Włodzimierz 3 Fernández Carrión, Antonio 3 Gabeleh, Moosa ...and 1,902 more Authors
#### Fields
314 Number theory (11-XX) 302 Functional analysis (46-XX) 181 Operator theory (47-XX) 135 Algebraic geometry (14-XX) 127 Dynamical systems and ergodic theory (37-XX) 108 Group theory and generalizations (20-XX) 80 Functions of a complex variable (30-XX) 77 Special functions (33-XX) 77 Differential geometry (53-XX) 73 Topological groups, Lie groups (22-XX) 71 Measure and integration (28-XX) 67 Nonassociative rings and algebras (17-XX) 65 Several complex variables and analytic spaces (32-XX) 63 Mathematical logic and foundations (03-XX) 61 Combinatorics (05-XX) 61 Probability theory and stochastic processes (60-XX) 48 Real functions (26-XX) 48 Global analysis, analysis on manifolds (58-XX) 47 Harmonic analysis on Euclidean spaces (42-XX) 46 Partial differential equations (35-XX) 40 Ordinary differential equations (34-XX) 39 General topology (54-XX) 37 Linear and multilinear algebra; matrix theory (15-XX) 33 Associative rings and algebras (16-XX) 33 Abstract harmonic analysis (43-XX) 32 Field theory and polynomials (12-XX) 31 Approximations and expansions (41-XX) 31 Manifolds and cell complexes (57-XX) 30 Commutative algebra (13-XX) 30 Difference and functional equations (39-XX) 30 Convex and discrete geometry (52-XX) 25 Order, lattices, ordered algebraic structures (06-XX) 25 Computer science (68-XX) 24 Geometry (51-XX) 22 History and biography (01-XX) 21 Potential theory (31-XX) 21 Quantum theory (81-XX) 19 Sequences, series, summability (40-XX) 19 Algebraic topology (55-XX) 16 Statistical mechanics, structure of matter (82-XX) 15 Numerical analysis (65-XX) 14 Mechanics of particles and systems (70-XX) 11 Information and communication theory, circuits (94-XX) 10 General and overarching topics; collections (00-XX) 10 Category theory; homological algebra (18-XX) 7 Integral transforms, operational calculus (44-XX) 7 Calculus of variations and optimal control; optimization (49-XX) 6 $$K$$-theory (19-XX) 6 Integral equations (45-XX) 6 Operations research, mathematical 
programming (90-XX) 6 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 5 Biology and other natural sciences (92-XX) 5 Systems theory; control (93-XX) 4 Statistics (62-XX) 3 General algebraic systems (08-XX) 3 Mechanics of deformable solids (74-XX) 2 Fluid mechanics (76-XX) 2 Optics, electromagnetic theory (78-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Relativity and gravitational theory (83-XX)
#### Citations contained in zbMATH Open
1,148 Publications have been cited 5,774 times in 5,083 Documents.
On a conjectural filtration on the Chow groups of an algebraic variety. I: The general conjectures and some examples. Zbl 0805.14001
Murre, J. P.
1993
Semi-invariants of quivers for arbitrary dimension vectors. Zbl 1004.16012
Schofield, Aidan; Van den Bergh, Michel
2001
Multiple zeta values of fixed weight, depth, and height. Zbl 1031.11053
Ohno, Yasuo; Zagier, Don
2001
On a class of polynomials orthogonal with respect to a discrete Sobolev inner product. Zbl 0732.42016
Marcellan, F.; Ronveaux, A.
1990
Amemiya norm equals Orlicz norm in general. Zbl 1010.46031
Hudzik, Henryk; Maligranda, Lech
2000
On Gibbs measures of $$p$$-adic Potts model on the Cayley tree. Zbl 1161.82311
Mukhamedov, Farruh M.; Rozikov, Utkir A.
2004
Milnor fibration at infinity. Zbl 0806.57021
Némethi, András; Zaharia, Alexandru
1992
Completeness of quasi-normed symmetric operator spaces. Zbl 1298.46051
Sukochev, Fedor
2014
A class of superordination-preserving integral operators. Zbl 1019.30023
Bulboacă, Teodor
2002
On semi-classical linear functionals of class $$s=1$$. Classification and integral representations. Zbl 0783.33003
Belmehdi, S.
1992
Matrix valued orthogonal polynomials of the Jacobi type. Zbl 1070.33011
Grünbaum, F. A.; Pacharoni, I.; Tirao, J.
2003
Discrete-time stochastic processes on Riesz spaces. Zbl 1057.60041
Kuo, Wen-Chi; Labuschagne, Coenraad C. A.; Watson, Bruce A.
2004
Some Weyl-Heisenberg frame bound calculations. Zbl 1056.42512
Janssen, A. J. E. M.
1996
Structure of Cesàro function spaces. Zbl 1200.46027
Astashkin, Sergei V.; Maligranda, Lech
2009
Intersections of double cosets in algebraic groups. Zbl 0833.22001
Richardson, R. W.
1992
Orthogonal polynomials and coherent pairs: The classical case. Zbl 0843.42010
Marcellán, F.; Petronilho, J.
1995
Normed Köthe spaces: a non-commutative viewpoint. Zbl 1318.46041
Dodds, P. G.; de Pagter, B.
2014
Representations of classical Lie superalgebras of type I. Zbl 0849.17030
Penkov, Ivan; Serganova, Vera
1992
Relative categories: another model for the homotopy theory of homotopy theories. Zbl 1245.18006
Barwick, C.; Kan, D. M.
2012
Multiple elliptic hypergeometric series. An approach from the Cauchy determinant. Zbl 1051.33009
Kajihara, Yasushi; Noumi, Masatoshi
2003
Extensions of the well-poised and elliptic well-poised Bailey lemma. Zbl 1054.33013
Warnaar, S. Ole
2003
Constant terms in powers of a Laurent polynomial. Zbl 0916.22007
Duistermaat, J. J.; van der Kallen, Wilberd
1998
On extensions of triangular norms on bounded lattices. Zbl 1171.03011
Saminger-Platz, Susanne; Klement, Erich Peter; Mesiar, Radko
2008
Complemented lattice-ordered groups. Zbl 0735.06006
1990
Generalizations of Virasoro group and Virasoro algebra through extensions by modules of tensor-densities on $$S^1$$. Zbl 0932.17029
Ovsienko, Valentin; Roger, Claude
1998
Reduced words and a length function for $$G(e,1,n)$$. Zbl 0914.20036
Bremke, Kirsten; Malle, Gunter
1997
Banach lattices with the Fatou property and optimal domains of kernel operators. Zbl 1106.47026
Curbera, Guillermo P.; Ricker, Werner J.
2006
Diophantine equations with power sums and universal Hilbert sets. Zbl 0923.11103
Corvaja, P.; Zannier, U.
1998
Cones in Banach algebras. Zbl 0887.46026
Raubenheimer, H.; Rode, S.
1996
Simplicial cohomology of orbifolds. Zbl 1026.55007
Moerdijk, I.; Pronk, D. A.
1999
Expansiveness, entropy and polynomial growth for groups acting on subshifts by automorphisms. Zbl 0794.28010
Shereshevsky, Mark A.
1993
$$p$$-adic valued probability measures. Zbl 0872.60002
Khrennikov, Andrew
1996
Zeros of the hypergeometric polynomials $$F(-n,b;2b;z)$$. Zbl 0969.33003
Driver, Kathy; Duren, Peter
2000
$$L$$-weakly and $$M$$-weakly compact operators. Zbl 1028.47028
Chen, Z. L.; Wickstead, A. W.
1999
The sum of generalized exponents and Chevalley’s restriction theorem for modules of covariants. Zbl 0863.20017
Broer, Abraham
1995
Weights of Markov traces and generic degrees. Zbl 0984.20004
Geck, Meinolf; Iancu, Lacrimioara; Malle, Gunter
2000
Asymptotic properties of generalized Laguerre orthogonal polynomials. Zbl 1064.41022
Álvarez-Nodarse, Renato; Moreno-Balcázar, Juan J.
2004
Functional calculus on Riesz spaces. Zbl 0781.46008
Buskes, G.; de Pagter, B.; van Rooij, A.
1991
On orthogonal polynomials with respect to an inner product involving derivatives: Zeros and recurrence relations. Zbl 0704.42023
Bavinck, H.; Meijer, H. G.
1990
Polynomial approximation of one parameter families of (un)stable manifolds with rigorous computer assisted error bounds. Zbl 1359.37063
Mireles James, J. D.
2015
On finite pseudorandom sequences of $$k$$ symbols. Zbl 1049.11090
Mauduit, Christian; Sárközy, András
2002
Toward equivariant Iwasawa theory. II. Zbl 1142.11369
Ritter, Jürgen; Weiss, Alfred
2004
Cyclicity of common slow-fast cycles. Zbl 1233.34014
De Maesschalck, P.; Dumortier, F.; Roussarie, R.
2011
A field-like property of finite rings. Zbl 0771.16009
Claasen, H. L.; Goldbach, R. W.
1992
Comparison of some modules of the Lie algebra of vector fields. Zbl 0892.58002
Lecomte, P. B. A.; Mathonet, P.; Tousset, E.
1996
On the cohomology of $$\text{sl}(m+1,\mathbb R)$$ acting on differential operators and $$\text{sl}(m+1,\mathbb R)$$-equivariant symbol. Zbl 0990.58017
Lecomte, P. B. A.
2000
Distribution of parity of the partition function in arithmetic progressions. Zbl 1027.11079
Ahlgren, Scott
1999
Computation of maximal local (un)stable manifold patches by the parameterization method. Zbl 1336.65197
Breden, Maxime; Lessard, Jean-Philippe; Mireles James, Jason D.
2016
Existence results for abstract fractional differential equations with nonlocal conditions via resolvent operators. Zbl 1254.34011
Hernández, Eduardo; O’Regan, Donal; Balachandran, Krishnan
2013
Normality in Nikishin systems. Zbl 0817.41022
Driver, Kathy; Stahl, Herbert
1994
On the geometry of some Calderón-Lozanovskij interpolation spaces. Zbl 0831.46016
Cerdà, Joan; Hudzik, Henryk; Mastyło, Mieczysław
1995
Around Jouanolou non-integrability theorem. Zbl 0987.34005
Maciejewski, Andrzej J.; Moulin Ollagnier, Jean; Nowicki, Andrzej; Strelcyn, Jean-Marie
2000
Rationality of moduli of vector bundles on curves. Zbl 1043.14502
King, Alastair; Schofield, Aidan
1999
The dynamical Borel-Cantelli Lemma and the waiting time problems. Zbl 1134.37002
Galatolo, Stefano; Kim, Dong Han
2007
Generalized perfect spaces. Zbl 1177.46018
Calabuig, J. M.; Delgado, O.; Sánchez Pérez, E. A.
2008
Banach lattices with order isometric copies of $$l^\infty$$. Zbl 0919.46015
Hudzik, Henryk
1998
Plurisubharmonic functions and Kählerian metrics on complexification of symmetric spaces. Zbl 0777.32008
1992
Jensen’s and martingale inequalities in Riesz spaces. Zbl 1307.47095
Grobler, Jacobus
2014
Traces on operator ideals and related linear forms on sequence ideals. I. Zbl 1319.47067
Pietsch, A.
2014
Inequalities for the gradient of eigenfunctions of the invariant Laplacian in the unit ball. Zbl 0731.32003
Pavlović, Miroslav
1991
Convergence of Riesz space martingales. Zbl 1105.47034
Kuo, Wen-Chi; Labuschagne, Coenraad C. A.; Watson, Bruce A.
2006
$$K$$-moduli, moduli of smoothness, and Bernstein polynomials on a simplex. Zbl 0763.41016
Berens, H.; Xu, Y.
1991
Perturbing analytic discs attached to maximal real submanifolds of $$\mathbb{C}^ N$$. Zbl 0861.32013
Globevnik, Josip
1996
Fine and Wilf words for any periods. Zbl 1091.68088
Tijdeman, R.; Zamboni, L.
2003
On some new integral inequalities of Gronwall-Bellman-Bihari type with delay for discontinuous functions and their applications. Zbl 1334.26052
Liu, Xiaohong; Zhang, Lihong; Agarwal, Praveen; Wang, Guotao
2016
Two new properties of ideals of polynomials and applications. Zbl 1089.46027
Botelho, Geraldo; Pellegrino, Daniel M.
2005
The finite element approximation of evolutionary Hamilton-Jacobi-Bellman equations with nonlinear source terms. Zbl 1254.65104
Boulaaras, Salah; Haiour, Mohamed
2013
On weak model sets of extremal density. Zbl 1364.82011
Baake, Michael; Huck, Christian; Strungaru, Nicolae
2017
Modifications of quasi-definite linear functionals via addition of delta and derivatives of delta Dirac functions. Zbl 1089.33005
Álvarez-Nodarse, R.; Arvesú, J.; Marcellán, F.
2004
Improved Bohr’s inequality for locally univalent harmonic mappings. Zbl 1404.31002
Evdoridis, Stavros; Ponnusamy, Saminathan; Rasila, Antti
2019
Periodic points for onto cellular automata. Zbl 1024.37007
Boyle, Mike; Kitchens, Bruce
1999
A new fixed point theorem in the fractal space. Zbl 1338.54214
Ri, Song-il
2016
Fite-Hille-Wintner-type oscillation criteria for second-order half-linear dynamic equations with deviating arguments. Zbl 1395.34090
Bohner, Martin; Hassan, Taher S.; Li, Tongxing
2018
Colombeau algebras on a $$C^{\infty{}}$$-manifold. Zbl 0761.46022
de Roever, J. W.; Damsma, M.
1991
Coherent pairs and zeros of Sobolev-type orthogonal polynomials. Zbl 0784.33004
Meijer, H. G.
1993
The minimal number of nodes in Chebyshev type quadrature formulas. Zbl 0795.41025
Kuijlaars, Arno
1993
An elementary approach to the Serre-Rost invariant of Albert algebras. Zbl 0872.17029
Petersson, Holger P.; Racine, Michel L.
1996
On the transverse symbol of vectorial distributions and some applications to harmonic analysis. Zbl 0892.22010
Kolk, Johan A. C.; Varadarajan, V. S.
1996
Automorphisms of $$\mathbb Z^ d$$-subshifts of finite type. Zbl 0823.28007
Ward, Thomas B.
1994
Transcendental infinite sums. Zbl 0991.11043
2001
Orbit distribution on $$\mathbb R^2$$ under the natural action of $$\mathrm{SL}(2,\mathbb Z)$$. Zbl 1016.37003
Nogueira, Arnaldo
2002
Local monotonicity structure of Calderón-Lozanovskiĭ spaces. Zbl 1074.46020
Hudzik, Henryk; Narloch, Agata
2004
Distributional Wiener-Ikehara theorem and twin primes. Zbl 1111.40005
Korevaar, Jacob
2005
Global invariant manifolds in the transition to preturbulence in the Lorenz system. Zbl 1246.37037
Doedel, Eusebius J.; Krauskopf, Bernd; Osinga, Hinke M.
2011
The primitive length of a general $$k$$-gonal curve. Zbl 0845.14019
Coppens, Marc; Keem, Changho; Martens, Gerriet
1994
A perfect duality between $$p$$-adic Banach spaces and compactoids. Zbl 0837.46062
Schikhof, W. H.
1995
On the asymptotic approximation with operators of Bleimann, Butzer and Hahn. Zbl 0851.47009
Abel, Ulrich
1996
Periodic and subharmonic solutions for a $$2n$$th-order difference equation involving $$p$$-Laplacian. Zbl 1284.39016
Deng, Xiaoqing; Liu, Xia; Zhang, Yuanbiao; Shi, Haiping
2013
Generalized Berwald spaces with $$(\alpha, \beta)$$-metrics. Zbl 1343.53077
Tayebi, A.; Barzegari, M.
2016
On the Diophantine equation $$x^ 2+a^ 2=2y^ p$$. Zbl 1088.11021
Tengely, Sz.
2004
Graded Hermitian forms and Springer’s theorem. Zbl 1247.11052
Renard, J.-F; Tignol, J.-R; Wadsworth, A. R.
2007
A fan-theoretic equivalent of the antithesis of Specker’s theorem. Zbl 1132.03031
Berger, Josef; Bridges, Douglas
2007
Norm-parallelism in the geometry of Hilbert $$C^\ast$$-modules. Zbl 1353.46047
2016
Compact-like operators in lattice-normed spaces. Zbl 1440.47032
Aydın, A.; Emelyanov, E. Yu.; Erkurşun Özcan, N.; Marabeh, M. A. A.
2018
The unique solution for a fractional $$q$$-difference equation with three-point boundary conditions. Zbl 1387.39009
Zhai, Chengbo; Ren, Jing
2018
Normal forms of real symmetric systems with multiplicity. Zbl 0802.35176
Braam, P. J.; Duistermaat, J. J.
1993
On uniqueness of $$p$$-adic entire functions. Zbl 0935.30029
1997
On the chromatic number of the power graph of a finite group. Zbl 1317.05063
Ma, Xuanlong; Feng, Min
2015
Weil transfer of algebraic cycles. Zbl 1047.14004
Karpenko, Nikita A.
2000
A logical look at characterizations of geometric transformations under mild hypotheses. Zbl 0987.51010
Pambuccian, Victor
2000
The $$1 : 2 : 4$$ resonance in a particle chain. Zbl 1460.70019
Hanßmann, Heinz; Mazrooei-Sebdani, Reza; Verhulst, Ferdinand
2021
Symplectic invariants of semitoric systems and the inverse problem for quantum systems. Zbl 07298854
Pelayo, Álvaro
2021
Stinespring’s theorem for unbounded operator valued local completely positive maps and its applications. Zbl 07322774
Bhat, B. V. Rajarama; Ghatak, Anindya; Pamula, Santhosh Kumar
2021
Toeplitz band matrices with small random perturbations. Zbl 07298855
Sjöstrand, Johannes; Vogel, Martin
2021
Semi-classical mass asymptotics on stationary spacetimes. Zbl 1457.83012
Strohmaier, Alexander; Zelditch, Steve
2021
On additive and multiplicative decompositions of sets of integers with restricted prime factors. I: Smooth numbers. Zbl 07322764
Győry, K.; Hajdu, L.; Sárközy, A.
2021
The irreducibility of some Wronskian Hermite polynomials. Zbl 1456.11206
Grosu, Codruţ; Grosu, Corina
2021
Distance matrices of subsets of the Hamming cube. Zbl 1464.51012
Doust, Ian; Robertson, Gavin; Stoneham, Alan; Weston, Anthony
2021
Compact spaces with a P-base. Zbl 07370855
Dow, Alan; Feng, Ziqin
2021
Two-descent on some genus two curves. Zbl 07370863
Evink, Tim; van der Heiden, Gert-Jan; Top, Jaap
2021
Trinomials with given roots. Zbl 07152833
Bilu, Yuri; Luca, Florian
2020
On the moments of a polynomial in one variable. Zbl 1445.30003
Müger, Michael; Tuset, Lars
2020
Validated computations for connecting orbits in polynomial vector fields. Zbl 1439.34048
van den Berg, Jan Bouwe; Sheombarsing, Ray
2020
Realizations of holomorphic and slice hyperholomorphic functions: the Krein space case. Zbl 1462.47010
Alpay, Daniel; Colombo, Fabrizio; Sabadini, Irene
2020
Continued fraction expansions of the generating functions of Bernoulli and related numbers. Zbl 1461.11020
Komatsu, Takao
2020
Logarithmic submajorization, uniform majorization and Hölder type inequalities for $$\tau$$-measurable operators. Zbl 07243530
Dodds, P. G.; Dodds, T. K.; Sukochev, F. A.; Zanin, D.
2020
Dilation invariant Banach limits. Zbl 1459.46026
Semenov, E.; Sukochev, F.; Usachev, A.; Zanin, D.
2020
Simplicial cochain algebras for diffeological spaces. Zbl 07270995
Kuribayashi, Katsuhiko
2020
A combinatorial characterization of finite groups of prime exponent. Zbl 1431.20013
2020
Deformations of algebraic schemes via Reedy-Palamodov cofibrant resolutions. Zbl 1431.14007
Manetti, Marco; Meazzini, Francesco
2020
On the distribution of powers of a Gaussian Pisot number. Zbl 1436.11128
Zaïmi, Toufik
2020
m-accretive Laplacian on a non symmetric graph. Zbl 1433.05132
Anné, Colette; Balti, Marwa; Torki-Hamza, Nabila
2020
Bivariate functions of bounded variation: fractal dimension and fractional integral. Zbl 1435.28016
Verma, S.; Viswanathan, P.
2020
Some martingales associated with multivariate Jacobi processes and Aomoto’s Selberg integral. Zbl 1445.60056
Voit, Michael
2020
Two bifurcation sets arising from the beta transformation with a hole at 0. Zbl 1458.37053
Baker, Simon; Kong, Derong
2020
Linear relations of Ohno sums of multiple zeta values. Zbl 1456.11162
Hirose, Minoru; Murahara, Hideki; Onozuka, Tomokazu; Sato, Nobuo
2020
Nilpotent orbits of height 2 and involutions in the affine Weyl group. Zbl 07227653
Gandini, Jacopo; Möseneder Frajria, Pierluigi; Papi, Paolo
2020
On the structure of variable exponent spaces. Zbl 1461.46021
Flores, Julio; Hernández, Francisco L.; Ruiz, César; Sanchiz, Mauro
2020
An intuitive approach to the Martin boundary. Zbl 1452.31019
Loeb, Peter A.
2020
Improved Bohr’s inequality for locally univalent harmonic mappings. Zbl 1404.31002
Evdoridis, Stavros; Ponnusamy, Saminathan; Rasila, Antti
2019
Fourier-Taylor parameterization of unstable manifolds for parabolic partial differential equations: formalism, implementation and rigorous validation. Zbl 1419.35131
Reinhardt, Christian; Mireles James, J. D.
2019
Frames and Riesz bases for shift invariant spaces on the abstract Heisenberg group. Zbl 1405.43004
2019
Integral operators on BMO and Campanato spaces. Zbl 1429.42026
Ho, Kwok-Pun
2019
A semantic hierarchy for intuitionistic logic. Zbl 07049867
Bezhanishvili, Guram; Holliday, Wesley H.
2019
Convergence of Picard’s iteration using projection algorithm for noncyclic contractions. Zbl 06991917
Gabeleh, Moosa
2019
The ring $$\operatorname{M}_{8 k + 4}(\mathbb{Z}_2)$$ is nil-clean of index four. Zbl 1453.16029
Shitov, Yaroslav
2019
On weighted $$q$$-Daehee polynomials with their applications. Zbl 1443.11024
Araci, Serkan; Duran, Ugur; Acikgoz, Mehmet
2019
The factorization of generalized Nevanlinna functions and the invariant subspace property. Zbl 06991904
Wietsma, Hendrik Luit
2019
The higher-order differential operator for the generalized Jacobi polynomials – new representation and symmetry. Zbl 1414.33012
Markett, Clemens
2019
On minimal asymptotic basis of order $$g - 1$$. Zbl 1420.11022
Sun, Cui-Fang
2019
Mappings preserving $$B$$-orthogonality. Zbl 1411.46014
Wójcik, Paweł
2019
Invariant measures of Markov operators associated to iterated function systems consisting of $$\varphi$$-max-contractions with probabilities. Zbl 1405.28008
Georgescu, Flavian; Miculescu, Radu; Mihail, Alexandru
2019
Estimating the Hausdorff dimensions of univoque sets for self-similar sets. Zbl 1423.28013
Chen, Xiu; Jiang, Kan; Li, Wenxia
2019
Schwarz lemmas for mappings satisfying Poisson’s equation. Zbl 1443.31002
Chen, Shaolin; Ponnusamy, Saminathan
2019
On a sum involving the Euler totient function. Zbl 1446.11178
Wu, J.
2019
Maximal $$m$$-subharmonic functions and the Cegrell class $$\mathcal{N}_m$$. Zbl 1425.32029
Nguyen, Van Thien
2019
Orlicz Figà-Talamanca Herz algebras and invariant means. Zbl 1420.43002
Lal, Rattan; Kumar, N. Shravan
2019
A short note on band projections in partially ordered vector spaces. Zbl 1429.46001
Glück, Jochen
2019
Heinz type inequalities for mappings satisfying Poisson’s equation. Zbl 1406.35110
Zhong, Deguang; Tu, Hongqiang
2019
Continuous maps preserving Jordan triple products from $$\mathbb{U}_n$$ to $$\mathbb{D}_m$$. Zbl 1404.15014
2019
Cellular homology of real flag manifolds. Zbl 1426.57052
Rabelo, Lonardo; San Martin, Luiz A. B.
2019
The intermediate disorder regime for Brownian directed polymers in Poisson environment. Zbl 07101929
Cosco, Clément
2019
A functional analytic approach to Cesàro means. Zbl 1423.26061
2019
Optimal angle of the holomorphic functional calculus for the Ornstein-Uhlenbeck operator. Zbl 1441.47051
Harris, Sean
2019
Spectra of $$(H_1, H_2)$$-merged subdivision graph of a graph. Zbl 1427.05139
Rajkumar, R.; Gayathri, M.
2019
Natural extensions for $$p$$-adic $$\beta$$-shifts and other scaling maps. Zbl 1447.11085
Furno, Joanna
2019
The order spectrum of convolution operators in discrete Cesàro spaces. Zbl 07067926
Ricker, W. J.
2019
Geodesic vector fields and eikonal equation on a Riemannian manifold. Zbl 1416.53033
Deshmukh, Sharief; Khan, Viqar Azam
2019
Algebraic cycles and very special cubic fourfolds. Zbl 1442.14022
Laterveer, Robert
2019
Pervasive and weakly pervasive pre-Riesz spaces. Zbl 1421.46003
Kalauch, A.; Malinowski, H.
2019
Fite-Hille-Wintner-type oscillation criteria for second-order half-linear dynamic equations with deviating arguments. Zbl 1395.34090
Bohner, Martin; Hassan, Taher S.; Li, Tongxing
2018
Compact-like operators in lattice-normed spaces. Zbl 1440.47032
Aydın, A.; Emelyanov, E. Yu.; Erkurşun Özcan, N.; Marabeh, M. A. A.
2018
The unique solution for a fractional $$q$$-difference equation with three-point boundary conditions. Zbl 1387.39009
Zhai, Chengbo; Ren, Jing
2018
Bingham, N. H.; Ostaszewski, A. J.
2018
Dynamical systems arising from random substitutions. Zbl 1409.37023
Rust, Dan; Spindeler, Timo
2018
A new idea to construct the fractal interpolation function. Zbl 1392.28011
Ri, Songil
2018
The quartet spaces of G. ’t Hooft. Zbl 1401.15002
2018
Optimum solutions for a system of differential equations via measure of noncompactness. Zbl 06868638
Gabeleh, M.; Markin, J.
2018
On multivariate logarithmic polynomials and their properties. Zbl 1415.11056
Qi, Feng
2018
Eliminating disjunctions by disjunction elimination. Zbl 1437.03163
Rinaldi, Davide; Schuster, Peter; Wessel, Daniel
2018
On the stability of the solution mapping for parametric traffic network problems. Zbl 1394.90179
Hung, Nguyen Van
2018
Diffraction of compatible random substitutions in one dimension. Zbl 1457.82054
Baake, Michael; Spindeler, Timo; Strungaru, Nicolae
2018
Controllability for noninstantaneous impulsive semilinear functional differential inclusions without compactness. Zbl 1401.26018
Wang, JinRong; Ibrahim, A. G.; O’Regan, D.; Zhou, Yong
2018
To be or not to be constructive, that is not the question. Zbl 1437.03169
Sanders, Sam
2018
Nonlinear maps preserving the Jordan triple $$\ast$$-product between factors. Zbl 1425.46050
Zhao, Fangfang; Li, Changjing
2018
Uniformly locally univalent harmonic mappings associated with the pre-Schwarzian norm. Zbl 1401.31005
Liu, Gang; Ponnusamy, Saminathan
2018
The density of numbers $$n$$ having a prescribed G.C.D. with the $$n$$th Fibonacci number. Zbl 1417.11012
Sanna, Carlo; Tron, Emanuele
2018
Composition operators and closures of Dirichlet type spaces $$\mathcal{D}_\alpha$$ in the logarithmic Bloch space. Zbl 06939373
Qian, Ruishen; Li, Songxiao
2018
Modelling and computing homotopy types: I. Zbl 1423.55015
Brown, Ronald
2018
On the independence number of the power graph of a finite group. Zbl 1382.05053
Ma, Xuanlong; Fu, Ruiqin; Lu, Xuefei
2018
Composite genus one Belyi maps. Zbl 1447.14006
Vidunas, Raimundas; He, Yang-Hui
2018
Factors of generalised polynomials and automatic sequences. Zbl 1416.11041
Byszewski, Jakub; Konieczny, Jakub
2018
Domain of Riesz mean in some spaces of double sequences. Zbl 1457.46009
Yeşilkayagil, Medine; Başar, Feyzi
2018
Algebraic sums and products of univoque bases. Zbl 1403.11055
Dajani, Karma; Komornik, Vilmos; Kong, Derong; Li, Wenxia
2018
The creating subject, the Brouwer-Kripke schema, and infinite proofs. Zbl 1437.03173
van Atten, Mark
2018
Poissonian pair correlation and discrepancy. Zbl 1425.11141
Steinerberger, Stefan
2018
Fixed points of $$n$$-valued maps, the fixed point property and the case of surfaces – a braid approach. Zbl 1381.55002
Gonçalves, Daciberg Lima; Guaschi, John
2018
On the cofinality of the splitting number. Zbl 1436.03260
Dow, Alan; Shelah, Saharon
2018
Spectral radius of power graphs on certain finite groups. Zbl 1382.05042
Chattopadhyay, Sriparna; Panigrahi, Pratima; Atik, Fouzul
2018
Beau bounds for multicritical circle maps. Zbl 1394.37071
Estevez, Gabriela; de Faria, Edson; Guarino, Pablo
2018
Hypercyclic composition operators on the little Bloch space and the Besov spaces. Zbl 06868645
Liang, Yu-Xia; Zhou, Ze-Hua
2018
Pisot substitution sequences, one dimensional cut-and-project sets and bounded remainder sets with fractal boundary. Zbl 1396.37022
Frettlöh, Dirk; Garber, Alexey
2018
On Brouwer’s continuity principle. Zbl 1437.03175
Ishihara, Hajime
2018
Generating new ideals using weighted density via modulus functions. Zbl 1439.28002
Bose, Kumardipta; Das, Pratulananda; Kwela, Adam
2018
Even Fourier multipliers and martingale transforms in infinite dimensions. Zbl 1406.42015
Yaroslavtsev, Ivan S.
2018
On the approximation of weakly plurifinely plurisubharmonic functions. Zbl 1402.32036
Hong, Nguyen Xuan; Van Can, Hoang
2018
On the natural concept of dimension. Zbl 1378.01026
Brouwer, L. E. J.
2018
Some aspects of dimension theory for topological groups. Zbl 1385.54011
Arhangel’skii, A. V.; van Mill, J.
2018
Brouwer and Euclid. Zbl 1437.03171
Beeson, Michael
2018
Disjointness preserving $$\mathrm{C}_0$$-semigroups and local operators on ordered Banach spaces. Zbl 06851097
Kalauch, Anke; van Gaans, Onno; Zhang, Feng
2018
...and 1048 more Documents
#### Cited by 5,455 Authors
56 Marcellán Español, Francisco 35 Hudzik, Henryk 35 Sukochev, Fedor Anatol’evich 32 Laterveer, Robert 28 Luca, Florian 26 Mukhamedov, Farruh Maksutovich 22 Cui, Yunan 22 Sánchez-Pérez, Enrique Alfonso 21 Kolwicz, Paweł 20 Duran, Antonio J. 20 Ricker, Werner Joseph 20 Shorey, Tarlok Nath 19 Mireles-James, Jason D. 19 Shparlinski, Igor E. 17 Labuschagne, Coenraad C. A. 17 Maligranda, Lech 17 Ostaszewski, Adam J. 17 Watson, Bruce Alastair 16 Hajdu, Lajos 16 Röhrle, Gerhard E. 16 Zanin, Dmitriy V. 15 Botelho, Geraldo 15 Driver, Kathy A. 15 Moreno-Balcázar, Juan José 15 Śliwa, Wiesław 14 Bridges, Douglas Suth 14 Ponnusamy, Saminathan 14 Tijdeman, Robert 13 Bingham, Nicholas Hugh 13 Boulaaras, Salah Mahmoud 13 Grobler, Jacobus J. 13 Hančl, Jaroslav 13 Leśnik, Karol 13 O’Regan, Donal 12 Kamińska, Anna 12 Kuo, Wen-Chi 12 López Lagomasino, Guillermo Tomás 12 Nowak, Marian 12 Pietsch, Albrecht 12 Qi, Feng 12 Saradha, N. 12 Weber, Michel Jean Georges 11 Abel, Ulrich 11 Álvarez-Nodarse, Renato 11 Baake, Michael 11 Ballico, Edoardo 11 Buskes, Gerard J. H. M. 11 Kusraev, Anatoly Georgievich 11 Laishram, Shanta 11 Ma, Xuanlong 11 Pellegrino, Daniel Marinho 10 Bugeaud, Yann 10 Cho, Nak Eun 10 Hurder, Steven E. 10 Khakimov, Otabek N. 10 Mouton, Sonja 10 Nair, Radhakrishnan B. 10 Pérez, Teresa E. 10 Phạm Tiên So’n 10 San Martin, Luiz Antonio Barrera 10 Silbermann, Bernd 10 Weisz, Ferenc 10 Zhao, Wenhua 9 Biswas, Indranil 9 Curbera, Guillermo P. 9 Derksen, Harm 9 Dueñas, Herbert 9 El-Fassi, Iz-Iddine 9 Fan, Zhenbin 9 Gabeleh, Moosa 9 Goodwin, Simon M. 9 Kalauch, Anke 9 Koelink, Erik 9 Korevaar, Jacob 9 Kwon, Kil Hyun 9 Lessard, Jean-Philippe 9 Lukina, Olga 9 Moulin Ollagnier, Jean 9 Ólafsson, Gestur 9 Pambuccian, Victor V. 9 Piñar, Miguel A. 
9 Rueda, Pilar 9 Shamseddine, Khodr 9 Shi, Haiping 9 Strungaru, Nicolae 9 Togbé, Alain 9 van den Berg, Jan Bouwe 9 Vial, Charles 9 Winterhof, Arne 9 Wisła, Marek 9 Wójtowicz, Marek 8 Aqzzouz, Belmesnaoui 8 Delgado, Olvido 8 Escassut, Alain 8 Fernández Carrión, Antonio 8 Grünbaum, Francisco Alberto 8 Ishihara, Hajime 8 Ivan, Mircea Dumitru 8 Kaashoek, Marinus Adriaan 8 Klop, Jan Willem ...and 5,355 more Authors
#### Cited in 596 Journals
350 Indagationes Mathematicae. New Series 212 Journal of Mathematical Analysis and Applications 127 Journal of Algebra 109 Journal of Number Theory 105 Proceedings of the American Mathematical Society 99 Advances in Mathematics 94 Transactions of the American Mathematical Society 94 Positivity 70 Journal of Approximation Theory 66 Journal of Computational and Applied Mathematics 60 Journal of Pure and Applied Algebra 57 Communications in Algebra 55 Linear Algebra and its Applications 50 Journal of Functional Analysis 50 Monatshefte für Mathematik 45 Mathematische Zeitschrift 45 Theoretical Computer Science 43 Topology and its Applications 39 International Journal of Number Theory 37 Ergodic Theory and Dynamical Systems 36 Israel Journal of Mathematics 36 Journal of Mathematical Physics 35 Integral Equations and Operator Theory 34 Journal of Geometry and Physics 33 Integral Transforms and Special Functions 33 The Ramanujan Journal 32 Results in Mathematics 30 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 29 Communications in Mathematical Physics 29 Annales de l’Institut Fourier 29 Mathematische Annalen 28 Advances in Difference Equations 27 Journal of Differential Equations 27 Quaestiones Mathematicae 26 Rocky Mountain Journal of Mathematics 26 Archiv der Mathematik 26 Mathematische Nachrichten 25 International Journal of Mathematics 25 Transformation Groups 25 Mediterranean Journal of Mathematics 25 $$p$$-Adic Numbers, Ultrametric Analysis, and Applications 24 Duke Mathematical Journal 24 Comptes Rendus. Mathématique. Académie des Sciences, Paris 23 Journal de Théorie des Nombres de Bordeaux 23 Journal of Inequalities and Applications 23 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. 
RACSAM 23 Journal of Function Spaces 22 Manuscripta Mathematica 22 Journal of Difference Equations and Applications 21 Applied Mathematics and Computation 20 Bulletin of the Australian Mathematical Society 20 Periodica Mathematica Hungarica 20 Journal of Combinatorial Theory. Series A 20 Complex Analysis and Operator Theory 19 Acta Applicandae Mathematicae 19 Bulletin des Sciences Mathématiques 19 Journal of Algebra and its Applications 18 Fuzzy Sets and Systems 18 Inventiones Mathematicae 18 Aequationes Mathematicae 18 Journal of Algebraic Combinatorics 18 Complex Variables and Elliptic Equations 17 Mathematical Notes 17 Journal für die Reine und Angewandte Mathematik 17 Acta Mathematica Hungarica 17 Journal of Mathematical Sciences (New York) 17 Discrete and Continuous Dynamical Systems 17 SIAM Journal on Applied Dynamical Systems 16 Mathematical Proceedings of the Cambridge Philosophical Society 16 Advances in Applied Mathematics 15 Journal d’Analyse Mathématique 15 Letters in Mathematical Physics 15 Mathematics of Computation 15 Compositio Mathematica 15 Glasgow Mathematical Journal 15 Constructive Approximation 15 Forum Mathematicum 15 The Journal of Fourier Analysis and Applications 15 Representation Theory 15 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 15 SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 14 Discrete Mathematics 14 Linear and Multilinear Algebra 14 Theoretical and Mathematical Physics 14 Functiones et Approximatio. Commentarii Mathematici 14 Michigan Mathematical Journal 14 Rendiconti del Circolo Matemàtico di Palermo. Serie II 14 Differential Geometry and its Applications 14 Numerical Algorithms 14 Acta Mathematica Sinica. English Series 13 Nonlinearity 13 Geometriae Dedicata 12 Annali di Matematica Pura ed Applicata. 
Serie Quarta 12 Czechoslovak Mathematical Journal 12 Journal of Algebraic Geometry 12 Annales Mathématiques Blaise Pascal 12 Central European Journal of Mathematics 12 Proceedings of the Steklov Institute of Mathematics 12 Banach Journal of Mathematical Analysis 12 Advances in Operator Theory ...and 496 more Journals
#### Cited in 63 Fields
864 Number theory (11-XX) 830 Functional analysis (46-XX) 604 Operator theory (47-XX) 543 Algebraic geometry (14-XX) 476 Dynamical systems and ergodic theory (37-XX) 397 Special functions (33-XX) 335 Group theory and generalizations (20-XX) 313 Harmonic analysis on Euclidean spaces (42-XX) 272 Nonassociative rings and algebras (17-XX) 250 Ordinary differential equations (34-XX) 230 Combinatorics (05-XX) 230 Functions of a complex variable (30-XX) 229 Differential geometry (53-XX) 215 Several complex variables and analytic spaces (32-XX) 192 Topological groups, Lie groups (22-XX) 187 Associative rings and algebras (16-XX) 172 Measure and integration (28-XX) 168 Partial differential equations (35-XX) 163 Probability theory and stochastic processes (60-XX) 159 Real functions (26-XX) 153 Mathematical logic and foundations (03-XX) 141 Computer science (68-XX) 136 Global analysis, analysis on manifolds (58-XX) 134 Difference and functional equations (39-XX) 123 Approximations and expansions (41-XX) 121 Numerical analysis (65-XX) 119 Linear and multilinear algebra; matrix theory (15-XX) 119 General topology (54-XX) 117 Quantum theory (81-XX) 111 Commutative algebra (13-XX) 92 Order, lattices, ordered algebraic structures (06-XX) 92 Manifolds and cell complexes (57-XX) 90 Field theory and polynomials (12-XX) 82 Abstract harmonic analysis (43-XX) 82 Convex and discrete geometry (52-XX) 81 Category theory; homological algebra (18-XX) 69 Algebraic topology (55-XX) 68 Statistical mechanics, structure of matter (82-XX) 59 Information and communication theory, circuits (94-XX) 52 Potential theory (31-XX) 48 Mechanics of particles and systems (70-XX) 47 Geometry (51-XX) 40 Sequences, series, summability (40-XX) 38 Integral transforms, operational calculus (44-XX) 35 Biology and other natural sciences (92-XX) 34 Systems theory; control (93-XX) 31 $$K$$-theory (19-XX) 30 Integral equations (45-XX) 29 History and biography (01-XX) 27 Calculus of variations and optimal control; 
optimization (49-XX) 26 Mechanics of deformable solids (74-XX) 24 Operations research, mathematical programming (90-XX) 22 Statistics (62-XX) 21 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 15 Relativity and gravitational theory (83-XX) 8 General and overarching topics; collections (00-XX) 7 Optics, electromagnetic theory (78-XX) 5 General algebraic systems (08-XX) 3 Fluid mechanics (76-XX) 2 Astronomy and astrophysics (85-XX) 2 Geophysics (86-XX) 2 Mathematics education (97-XX) 1 Classical thermodynamics, heat transfer (80-XX)
# In an aggregate mix, the proportions of coarse aggregate, fine aggregate and mineral filler are 55%, 40% and 5%, respectively. The values of bulk specific gravity of the coarse aggregate, fine aggregate and mineral filler are 2.55, 2.65 and 2.70, respectively. The bulk specific gravity of the aggregate mix (round off to two decimal places) is _____
This question was previously asked in
GATE CE 2021 Official Paper: Shift 2
## Answer (Detailed Solution Below) 2.58 - 2.61
## Detailed Solution
Concept:
Theoretical specific gravity or apparent specific gravity:
It is the specific gravity of the mix computed without considering air voids, and is given as
$${G_t} = \frac{{{W_1} + {W_2} + {W_3} + {W_4}}}{{\frac{{{W_1}}}{{{G_1}}} + \frac{{{W_2}}}{{{G_2}}} + \frac{{{W_3}}}{{{G_3}}} + \frac{{{W_4}}}{{{G_4}}}}}$$
If the weights of the constituents are given as percentages of the total mix, then
$${G_t} = \frac{{100}}{{\frac{{{W_1}(\% )}}{{{G_1}}} + \frac{{{W_2}(\% )}}{{{G_2}}} + \frac{{{W_3}(\% )}}{{{G_3}}} + \frac{{{W_4}(\% )}}{{{G_4}}}}}$$
Calculation:
Given,
Proportions of coarse aggregate (W1%) = 55%
Proportions of fine aggregate (W2%) = 40%
Proportions of mineral filler (W3%) = 5%
G1 = 2.55, G2 = 2.65, G3 = 2.70
Theoretical specific gravity is given by, $${G_t} = \frac{{100}}{{\frac{{{W_1}(\% )}}{{{G_1}}} + \frac{{{W_2}(\% )}}{{{G_2}}} + \frac{{{W_3}(\% )}}{{{G_3}}}}}$$
$${G_t} = \frac{{100}}{{\frac{{55}}{{2.55}} + \frac{{40}}{{2.65}} + \frac{5}{{2.70}}}}$$
Gt = 100/38.51 = 2.60, which lies within the accepted range of 2.58 - 2.61.
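The arithmetic above can be checked with a few lines of Python (a quick sketch of the same weighted harmonic-mean formula, not part of the original solution):

```python
# Percent by weight of each constituent:
# coarse aggregate, fine aggregate, mineral filler
weights = [55.0, 40.0, 5.0]
# Corresponding bulk specific gravities
gravities = [2.55, 2.65, 2.70]

# Theoretical specific gravity: total weight divided by the
# sum of (weight / specific gravity) over all constituents
g_t = sum(weights) / sum(w / g for w, g in zip(weights, gravities))
print(round(g_t, 2))  # 2.6
```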
# Better prediction of Premier League matches using data from other competitions
In most of my football posts on this blog I have used data from the English Premier League to fit statistical models and make predictions. Only occasionally have I looked at other leagues, but always in isolation. That is, I have never combined data from different leagues and competitions into the same model. Using a league by itself works mostly fine, but I have experienced some issues. Model fitting and prediction making often simply do not work at the beginning of the season. The reason for this has mostly to do with newly promoted teams.
If only data from the Premier League is used to fit a model, then no data on the new teams is available at the beginning of the season. This makes predicting the outcome of the first matches of the new teams impossible. In subsequent matches the information available is also very limited compared to that for the other teams, for which we can rely on data from the previous seasons. This uncertainty about the new teams also propagates into the estimates and predictions for the other teams.
This problem can be remedied by using data from outside the Premier League to help estimate the parameters for the promoted teams. The most obvious place to look for data related to the promoted teams is in the Championship, where the teams played before they were promoted. The FA Cup, for which teams from the Championship and Premier League automatically qualify, should also be a good source of data.
To test how much the extra data helps make predictions in the Premier League, I did something similar to what I did in my post on the Dixon-Coles time weighting scheme. I used the independent Poisson model to make predictions for all the Premier League matches from 1st of January 2007 to 15th of January 2015. The predictions were made using a model fitted only with data from previous matches (going back to August 2005), thus emulating a realistic real-time prediction scenario. I weighted the data using the Dixon-Coles approach, with $$\xi=0.0018$$. This makes the scenario a bit unrealistic, since I estimated this parameter using the same Premier League matches I am going to predict here. I also experimented with using a different home field advantage for each of the competitions.
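The Dixon-Coles weighting used here down-weights old matches exponentially. A minimal sketch of the weight function, with the $$\xi = 0.0018$$ quoted above (the function name is my own):

```python
import math

def dc_weight(days_ago, xi=0.0018):
    """Dixon-Coles time weight, exp(-xi * t), so recent
    matches count for more in the model fitting."""
    return math.exp(-xi * days_ago)

# A match played today gets full weight; a match from roughly
# three seasons (~1095 days) ago contributes only about 14% as much.
print(dc_weight(0))     # 1.0
print(dc_weight(1095))  # ~0.14
```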
To measure prediction quality I used the Ranked Probability Score (RPS), which goes from 0 to 1, with 0 being perfect prediction. RPS is calculated for each match, and the RPS I report here is the average RPS of all predictions made. Since this is over 3600 matches, I am going to report the RPS with quite a lot of decimal places.
Although the RPS goes from 0 to 1, using RPS = 1 to represent worst possible prediction ability is unrealistic. To get a more realistic RPS to compare against, I calculated the RPS using, as the probabilities of home, draw and away, the raw proportions of these outcomes in my data. In statistical jargon this is often called the null model. The probabilities were 0.47, 0.25 and 0.28, respectively, and gave a RPS = 0.2249.
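For three ordered outcomes (home, draw, away), the RPS compares cumulative predicted probabilities with the cumulative observed outcome. A small sketch, re-using the null-model probabilities quoted above (the function name is my own):

```python
def rps(probs, outcome):
    """Ranked Probability Score for one match.

    probs:   predicted probabilities in outcome order,
             e.g. (home, draw, away)
    outcome: index of the observed outcome (0, 1 or 2)
    """
    cum_p, cum_o, total = 0.0, 0.0, 0.0
    for k in range(len(probs) - 1):
        cum_p += probs[k]
        cum_o += 1.0 if outcome == k else 0.0
        total += (cum_p - cum_o) ** 2
    return total / (len(probs) - 1)

null_model = (0.47, 0.25, 0.28)  # raw outcome proportions
# Averaging the null model's RPS over outcomes drawn with these
# same proportions lands close to the 0.2249 reported above.
avg = sum(p * rps(null_model, i) for i, p in enumerate(null_model))
print(round(avg, 3))  # ~0.225
```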
Using only Premier League data, skipping predictions for the first matches in a season involving newly promoted teams, gave a RPS of 0.19558.
Including data from the Championship in the model fitting, and assuming the home field advantage in both divisions were the same, gave a RPS of 0.19298. Adding a separate parameter for the home field advantage in the Championship gave an even better RPS of 0.19292.
Including data from the FA Cup (in addition to data from the Championship) was challenging. When data from the earliest rounds were included, the model fitting sometimes failed. I am not 100% sure of this, but I believe the reason is that some teams, or groups of teams, are mostly isolated from the rest of the teams. By that I mean that some groups of teams have only played each other, but not any other team in the data. While this is not actually the case (it can not be), I nevertheless think the time weights make this approximately true. Matches played a few years before the matches that predictions are made for will have weights that are almost 0. It seems reasonable that this, coupled with the incomplete design of the knockout format, is where the trouble comes from.
Anyway, I got it to work by excluding matches played by a team not in the Championship or Premier League in the respective season. An additional parameter for home field advantage in the Cup was included in the model as well. Interestingly, this gave a somewhat poorer prediction ability than using additional data from the Championship only, with a RPS of 0.192972, but still better than using Premier League data only. With the same overall home field advantage for all the competitions, the predictions were unsurprisingly poorer, with RPS = 0.1931.
I originally wanted to include data from Champions League and Europa League, as well as data from other European leagues, but the problems and results with the FA Cup made me dismiss the idea.
I am not sure why including the FA Cup didn’t give better predictions, but I have some theories. One is that a separate FA Cup home field advantage is unrealistic. Perhaps it would be better to assume that the home field advantage is the same as in the division the two opponents play in, if they play in the same division. If they played in different divisions, perhaps an overall average home field advantage could be used instead.
Another theory has to do with the time weighting scheme. The time weighting parameter I used was found by using data from the Premier League only. Since this gives uncertain estimates for the newly promoted teams, it will perhaps give more recent matches more weight to try to compensate. With more informative data from the previous season, this should probably be more influential. Perhaps the time weighting could be further refined with different weighting parameters for each division.
## 6 thoughts on “Better prediction of Premier League matches using data from other competitions”
1. I have been following your blog for some time now, and I would like to develop a model using the linear regression model. So I need some clarification. How do I compute the home effect, and with what parameters? And also, how do I incorporate the time weighting scheme, and with what parameter, so that the newer matches carry more weight? I have historical data for the past seven seasons. Thanks for your response.
• If you follow the links in this post to some of my earlier posts you will find R code for computing the weights that you can give to the glm() function. To properly fit the model you will usually need to reformat your data a bit, which I have explained in this post. You can also find some code for doing this on the excellent pena.lt/y blog.
2. Hi again, I’m afraid I do not understand how/why information from the 2nd league (e.g. championship) will improve the model.
I want to use the following (very unrealistic) example: let's assume that the attack/defense parameter distributions are equal for both leagues and the winner will be promoted to the PL. In that case the model will assume that the quality of the promoted team is equal to last year's champion of the PL. Can you elaborate on why the model accuracy improves when we add games from another sub-league (where the level is significantly lower)?
• I am not 100% sure, but I think it is because you have more data about the promoted teams. When you use data from two divisions, and some teams have played in both (promoted and relegated teams), the model will adjust the parameters to reflect this.
3. Hi, I’ve read lots of your blog posts on this website and I found most of them very inspiring and interesting. I don’t know R that well, so I implemented the Dixon-Coles model in Mathematica. Somehow I found that the calculation is very slow.
Let’s say that I want to predict the results for the Premier League in 2017. The data I want to use is all the match results of the Premier League, FA Cup and the Championship from 2006 to 2016. How long would it take for a full prediction for the season 2017 in R on your computer?
• Thank you. There are many things that can slow down the estimation time. I know from the optim function in R that different optimization algorithms often have different speeds, so it is worth checking this out. You could also try different starting values. Another thing to consider is the data. Especially including data from the FA cup can be problematic, if you include data from the earlier rounds, with teams from the lower divisions. If a team has really few matches this can create problems. I would also recommend creating a graph (http://opisthokonta.net/?p=1490) to make sure all teams are connected.
# The XTT^2 ALSV(FD) Specification
Author: Grzegorz J. Nalepa, based on the work with Antoni Ligęza
Version: Draft 2008Q3
Please put bigger remarks/discussion on the ALSV(FD) development page
## Introduction to Attributive Logics
Attributive logics constitute a simple yet widely-used tool for knowledge specification and development of rule-based systems. In fact in a large variety of applications in various areas of Artificial Intelligence (AI) and Knowledge Engineering (KE) attributive languages constitute the core knowledge representation formalism. The most typical areas of applications include rule-based systems, expert systems (ones based on rule formalism) and advanced database and data warehouse systems with knowledge discovery applications and contemporary business rules and business intelligence components.
The description of AL presented here is based on several papers, including
## ALSV(FD)
In SAL (Set Attributive Logic) as well as in its current version ALSV(FD), the very basic idea is that attributes can take atomic or set values.
After (ali2005thebook) it is assumed that an attribute A_i is a function (or partial function) of the form A_i : O → D_i. Here O is a set of objects and D_i is the domain of attribute A_i.
A generalized attribute A_i is a function (or partial function) of the form A_i : O → 2^D_i, where 2^D_i is the family of all the subsets of D_i.
The basic elements of the language of Attribute Logic with Set Values over Finite Domains (ALSV(FD) for short) are attribute names and attribute values. For simplicity of presentation no objects are considered here; in practice, the same attribute applied to two (or more) different objects can be considered as two (or more) new, different, object-labelled attributes. Moreover, unless two (or more) different objects are considered at the same time, no explicit reference to an object is necessary.
Let us consider:
• A – a finite set of attribute names,
• D – a set of possible attribute values (the domains).
Let A = A_1, A_2, … ,A_n be all the attributes such that their values define the state of the system under consideration. It is assumed that the overall domain D is divided into n sets (disjoint or not), D = D_1 ∪ D_2 ∪ … ∪ D_n, where D_i is the domain related to attribute A_i, i = 1, 2, …, n.
Any domain D_i is assumed to be a finite (discrete) set. The set can be ordered, partially ordered, or unordered; in case of ordered (partially ordered) sets some modifications of the notation are allowed.
As we consider dynamic systems, the values of attributes can change over time (or state of the system). We consider both simple attributes of the form A_i : T → D_i (i.e. taking a single value at any instant of time) and generalized ones of the form A_i: T → 2^D_i (i.e. taking a set of values at a time); here T denotes the time domain of discourse.
### Syntax
The legal atomic formulae of ALSV for simple attributes are presented in Table 1.
Table 1: Simple attribute formulas syntax
The legal atomic formulae of ALSV for generalized attributes are presented in Table 2.
Table 2: Generalized attribute formulas syntax
In case V_i is an empty set (the attribute takes in fact no value) we shall write A_i = ∅.
See the ANY and NULL section below.
More complex formulae can be constructed with conjunction (∧) and disjunction (∨); both symbols have their classical meaning and interpretation.
There is no explicit use of negation.
The proposed set of relations is selected for convenience and as such is not completely independent. For example, A_i = V_i can perhaps be defined as A_i ⊆ V_i ∧ A_i ⊇ V_i; but it is much more concise and convenient to use = directly.
Various notational conventions extending the basic notation can be used. For example, in case of domains being ordered sets, relational symbols such as >, >=, <, =< can be used with the straightforward meaning.
### Semantics
In SAL the semantics of A_i=d is straightforward – the attribute takes a single value.
The semantics of A_i=t is that the attribute takes all the values of t (the so-called internal conjunction) while the semantics of A_i \in t is that it takes one (in case of simple attributes) or some (in case of generalized attributes) of the values of t (the so-called internal disjunction).
As an example for the necessity of SAL one can consider the specification of working days (denoted with WDay) given as WDay = D, where D is the set of working days, D = { Monday,Tuesday,Wednesday,Thursday,Friday }. Now one can construct an atomic formula like CurrentDay \in D, or a rule of the form: DayOfInterest \in D → Status(OfficeOfInterest) = open.
The semantics of A_i = V_i is basically the same as in basic SAL: if V_i = {d_1, d_2, …, d_k}, then A_i = V_i is equivalent to the attribute taking all the values d_1, d_2, …, d_k, i.e. the attribute takes all the values specified with V_i (and nothing more).
The semantics of A_i ⊆ V_i, A_i ⊇ V_i and A_i ∼ V_i is defined as follows:

A_i ⊆ V_i means A_i = U where U ⊆ V_i, i.e. A_i takes some of the values from V_i (and nothing out of V_i),

A_i ⊇ V_i means A_i = W where V_i ⊆ W, i.e. A_i takes all of the values from V_i (and perhaps some more), and

A_i ∼ V_i means A_i = U where U ∩ V_i ≠ ∅, i.e. A_i takes some of the values from V_i (and perhaps some more).
As it can be seen, the semantics of ALSV is defined by means of relaxation of logic to simple set algebra.
### Inference Rules
Let V and W be two sets of values such that V ⊆ W. We have the following straightforward inference rules for atomic formulae: (A = W) → (A ⊇ V), i.e. if an attribute takes all the values of a certain set, it must take all the values of any subset of it (downward consistency).
Similarly, (A ⊆ V) → (A ⊆ W), i.e. if the values of an attribute are located within a certain set, they must also belong to any superset of it (upward consistency).
These rules may seem trivial, but they must be implemented to enable inference, e.g. they are used in rule precondition checking.
The summary of the inference rules for atomic formulae with simple attributes (where an atomic formula is the logical consequence of another atomic formula) is presented in Table 3. The table is to be read as follows: if an atomic formula in the leftmost column holds, and the condition stated in the same row is true, then the appropriate atomic formula in the topmost row is a logical consequence of the one from the leftmost column.
Table 3: Inference rules for atomic formulae for simple attributes
The summary of the inference rules for atomic formulae with generalized attributes (where an atomic formula is the logical consequence of another atomic formula) is presented in Table 4.
Table 4: Inference rules for atomic formulae for generalized attributes
(The rules must be checked; simple rules are for matching preconditions to the state formula. More complex rules can be for establishing truth-value propagation among atoms of preconditions within a table).
In Tables 3 and 4 the conditions are sufficient ones. However, it is important to note that in case of the first rows of the tables (the cases of A = d_i and A = V, respectively) all the conditions are also necessary ones. The interpretation of the tables is straightforward: if an atomic formula in the leftmost column in some row i is true, then the atomic formula in the topmost row in some column j is also true, provided that the relation indicated at the intersection of row i and column j holds. The rules of Tables 3 and 4 can be used for checking if preconditions of a formula hold or for verifying subsumption among rules.
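As an illustration of how such precondition checking reduces to simple set algebra, consider the following sketch; the function and relation names are mine, not part of the specification:

```python
# A state assigns each attribute its current set of values (a singleton for
# a simple attribute).  `holds` checks an ALSV(FD) atom against a state.
def holds(state, attr, rel, v):
    cur, v = set(state[attr]), set(v)
    if rel == "=":       # A = V: A takes exactly the values of V
        return cur == v
    if rel == "in":      # A in V (simple attribute): its single value is in V
        return len(cur) == 1 and cur <= v
    if rel == "subset":  # A subset V: all current values lie within V
        return cur <= v
    if rel == "supset":  # A supset V: A takes all the values of V
        return cur >= v
    if rel == "sim":     # A sim V: A shares at least one value with V
        return bool(cur & v)
    raise ValueError("unknown relation: " + rel)

state = {"WDay": {"Monday"}, "Langs": {"en", "pl"}}
holds(state, "WDay", "in", {"Monday", "Tuesday"})   # True
holds(state, "Langs", "supset", {"en"})             # True
```

Note how the upward-consistency rule corresponds to a plain subset test on the value sets.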
For further analysis, e.g. of intersection (overlapping) of rule preconditions, one may be interested in whether two atoms cannot simultaneously be true and, if so, under what conditions. For example, a conjunction of two atomic formulae referring to the same attribute may be inconsistent; Table 5 specifies the conditions for inconsistency.
Table 5: Inconsistency conditions for pairs of atomic formulae
The interpretation of Table 5 is straightforward: if the condition specified at the intersection of some row and column holds, then the atomic formulae labelling this row and column cannot simultaneously hold. Note, however, that this is a sufficient condition only.
The table can be used for analysis of the determinism of the system, i.e. whether satisfaction of the precondition of a rule implies that the other rules in the same table cannot be fired.
## ALSV(FD) and State, State Representation and Inference
When processing information, the current values of attributes form the state of the inference process. The values of attributes can, in general, be modified in the following three ways:
1. by an independent, external system,
2. by the inference process, and
3. as some clock-dependent functions.
The first case concerns attributes which represent some process variables, which are to be incorporated in the inference process, but depend only on the environment and external systems. As such, these variables cannot be directly influenced by the XTT system. Examples of such variables may be the external temperature, the age of a client or the set of foreign languages known by a candidate. Values of such variables are obtained as a result of some measurement or observation process. They are assumed to be put into the inference system via a blackboard communication method; in fact they are written directly into the internal memory whenever their values are obtained or changed.
The second case concerns the values of attributes obtained at a certain stage of reasoning as the result of the operations performed in the RHS of XTT rules. The new values of the attributes can be:
• asserted to global memory (and hence stored and made available for any components of the system), or
• kept as values of internal process variables.
The first solution is offered mostly for permanent changes; before asserting new values, typically an appropriate retract operation is to be performed so as to keep a consistent state. In this way also the history (trajectory) of the system can be stored, provided that each value of an attribute is stored with a temporal index. The second solution is offered for value passing and calculations which do not require permanent storage. For example, if a calculated value is to be passed to some next XTT component and is no longer used after that, it is not necessary to store it in the global memory.
### The State of the System
The current state of the system is considered as a complete set of values of all the attributes in use at a certain instant of time. The concept of the state is similar to the one in dynamic systems and state-machines. The representation of the state should satisfy the following requirements:
1. the specification is internally consistent,
2. the specification is externally consistent,
3. the specification is complete,
4. the specification is deterministic,
5. the specification is concise.
The first postulate says that the specification itself cannot be inconsistent at the syntactic level. For example, a simple attribute (one taking a single value) cannot take two different values at the same time. In general, assuming independence of the attributes and no use of explicit negation, each value of an attribute should be specified once. The second postulate says that only true knowledge (with respect to the external system) can be specified in the state. In other words, facts that are syntactically correct but false cannot occur in the state formula. The third postulate says that all the knowledge true at a certain instant of time should be represented within the state. The fourth postulate says that there can be no disjunctive knowledge specification within the state. Finally, the fifth postulate says that no unnecessary, dependent knowledge should be kept in the state. In databases and most knowledge bases this has a practical dimension: only true facts are represented explicitly.
The current values of all the attributes are specified with the contents of the knowledge base (including current sensor readings, measurements, input examination, etc.). From a logical point of view it is a formula of the form: (A_1 = S_1) ∧ (A_2 = S_2) ∧ … ∧ (A_n = S_n), where S_i = d_i (d_i ∈ D_i) for simple attributes and S_i = V_i (V_i ⊆ D_i) for complex ones.
In order to cover realistic cases some explicit notation for unspecified or unknown values is proposed; this is to deal with data containing NULL values imported from a database. The first case refers to an unspecified value of an attribute as a consequence of inapplicability: the attribute takes an empty set of values (no value at all) at the current instant of time (or forever) for the object under consideration. For example, the attribute Maiden_Name or The_Year_of_Last_Pregnancy is not applicable to a man, and hence takes no value for all men. The second case refers to a situation where the attribute may be applied to an object, but it takes no value. This is denoted as A = ∅. For example, the formula Phone_Number = ∅ means that the considered person has no phone number. The third case covers the NULL values present in relational databases: a formula of the form A = NULL means that the attribute takes an unspecified value.
### State and rule firing
In order to fire a rule, all the precondition facts defining its LHS must be true within the current state. The verification procedure consists in matching these facts against the state specification. A separate procedure concerns simple (single-valued) attributes, and a separate one is applied in case of complex attributes. The following tables provide a formal background for the precondition-matching and rule-firing procedure: Table 6 defines when a precondition of the form A ∝ d is satisfied with respect to a given state, and Table 7 defines the principles for matching preconditions defined with set-valued attributes against the state formula.
Table 6: Inference principles for firing rules, case of single-valued attributes.
Table 7: Inference principles for firing rules, case of general attributes.
## ALSV Rules
ALSV(FD) has been introduced with practical applications for rule languages in mind. In fact, the primary aim of the presented language is to extend the notational possibilities and expressive power of the XTT-based tabular rule-based systems. An important extension consists in allowing for the explicit specification of one of the symbols =, ≠, ∈, ∉, ⊆, ⊇, ∼, ≁, with an argument, in the table.
### Rule Format
Consider a set of n attributes A = A_1, A_2, …, A_n. Any rule is assumed to be of the form:

(A_1 ∝_1 V_1) ∧ (A_2 ∝_2 V_2) ∧ … ∧ (A_n ∝_n V_n) → RHS

where ∝_i is one of the admissible relational symbols in ALSV(FD), and RHS is the right-hand side of the rule covering the conclusion and perhaps the retract and assert definitions if necessary.
### Rule Firing
The current values of all the attributes are specified with the contents of the knowledge-base (including current sensor readings, measurements, inputs examination, etc.).
From a logical point of view it is a formula of the form: (A_1 = S_1) ∧ (A_2 = S_2) ∧ … ∧ (A_n = S_n), where S_i = d_i for simple attributes and S_i = V_i for complex ones.
### ANY and NULL
In case the value of A_i is unspecified we shall write A_i = NULL (a database convention).
Following the Prolog convention and logic, an ANY attribute value is possible in comparisons (see `_` in Prolog).
The semantics can be: “any value”, “not important”, etc.
The solution:
• in preconditions, we can only use ANY, i.e. an atom such as A=_ can be specified, meaning “any value”, “all possible values of the attribute”, “we don't care”
• on the other hand, attribute A unspecified, in the state formula means A=NULL, so we store NULL in state
• here we come to an inference rule: A=NULL =⇒ A=_. This seems to be valid… This rule should be optionally disabled/enabled in the inference engine.
It seems we could have three types of NULL-like values: not applicable; potentially applicable but taking no value (empty/not defined); and applicable and taking a value, but the value is unknown.
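These three NULL-like values could be represented explicitly, for example as an enumeration; the names below are my own reading of the wording above, not part of the specification:

```python
from enum import Enum

class Null(Enum):
    # three NULL-like values suggested at the end of the section
    NOT_APPLICABLE = 1  # the attribute makes no sense for this object
    NO_VALUE = 2        # applicable, but currently takes no value (the empty set)
    UNKNOWN = 3         # applicable and has a value, but the value is not known
```

Distinguishing the three cases matters for inference: only UNKNOWN should match an ANY (`_`) precondition without further assumptions.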
# How do you find all zeros with multiplicities of f(x)=6x^4-5x^3-9x^2?
Dec 20, 2017
$x = 0$ (multiplicity $2$) and $x = \frac{5 \pm \sqrt{241}}{12}$ (multiplicity $1$ for both)
#### Explanation:
First we can factor out an ${x}^{2}$:
$6 {x}^{4} - 5 {x}^{3} - 9 {x}^{2} = {x}^{2} \left(6 {x}^{2} - 5 x - 9\right)$
From this, it's quite clear that $x = 0$ is a solution, and it will have multiplicity $2$ because the factor is ${x}^{2}$.
To find the remaining solutions, we can use the quadratic formula to solve:
$6 {x}^{2} - 5 x - 9 = 0$
$x = \frac{5 \pm \sqrt{241}}{12}$
These solutions only occur once, so they both have a multiplicity of $1$.
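As a quick numerical sanity check of these zeros (my own addition, not part of the original answer):

```python
from math import sqrt, isclose

def f(x):
    # the polynomial from the question
    return 6 * x**4 - 5 * x**3 - 9 * x**2

# the three distinct zeros found above
roots = [0.0, (5 + sqrt(241)) / 12, (5 - sqrt(241)) / 12]
all(isclose(f(r), 0.0, abs_tol=1e-9) for r in roots)   # True
```

Counting multiplicities (2 for x = 0, 1 for each quadratic root) gives 4 zeros in total, matching the degree of the polynomial.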
## Side Effects and Functional Programming
Posted on July 9, 2012.
Today we’ll examine why functional languages have a reputation of being a mere novelty of academia, while imperative languages have dominated the software industry.
In my last post I mentioned that pure functions cannot have side effects. It follows from this definition that pure functions are referentially transparent, which means that evaluating a function multiple times with the same arguments will always yield the same result. How then could one implement, say, a function that returns the current time? Or one that generates a random number? Or one that reads the next line in a file? All of these functions in an imperative language would return different values if they were invoked multiple times. Isn’t the whole point of computer programs to do side effects like these?
This is the problem that the functional paradigm faces. Software engineers write programs that do useful tasks. Functional languages emphasize writing functions that, in essence, do nothing but return a value, which makes these languages very difficult to use if you’re trying to write a chat client, a web browser, or a first-person shooter.
Here is one solution. Let’s define a main function and implement the famous hello world program in it. I’ll use a fictional dialect of Python we’ll call Fython (for “functional Python”):
def main():
print("Hello, World!")
Stop right there! Our print function has side effects. That’s not functional at all. Let’s try this instead:
def main(old_universe):
new_universe = print("Hello, World!", old_universe)
return new_universe
Aha! Our program is functional once again. Instead of directly printing to the screen, we return a universe in which the printing had been done. In a sense, we are now describing a computation instead of executing one, which is the essence of the declarative paradigm. We could imagine a runtime which applies this function, with the current universe as the argument, and then assigns our universe to the universe returned by this program (thereby modifying our universe such that the computation has been performed). So the runtime does the dirty, stateful work for us, while we only need to describe what the universe would look like if a “Hello, World!” message was printed to the screen. What if we want to get some user input, and then print it back to the screen?
def main(old_universe):
input, intermediate_universe = get_input(old_universe)
new_universe = print("You typed: " + input, intermediate_universe)
return new_universe
Our program takes the current universe, then defines an intermediate universe in which a user has entered a string. With that string and intermediate universe, we define a new universe in which that string has been printed back to the user. The runtime then applies these transformations to our universe, and voilà! Our program has been run.
A pattern should now become clear. All of this is really just a guise for function composition, which is how we can enforce an ordering to the evaluation of our expressions. If our functional program is just an unordered set of equations, how does the runtime know to get user input and then print a message, and not the other way around? The expression which “prints” depends on the universe returned by the expression which “gets the input”, just how in the expression f(g(x)), g(x) must be evaluated before f(g(x)).
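The same universe-threading pattern can be written in real Python by passing an explicit "universe" value instead of mutating global state. The dictionary layout and function names here are illustrative, not a real runtime:

```python
def get_input(universe):
    # "read" the next line of pending input; return it plus a new universe
    value, *rest = universe["stdin"]
    return value, {**universe, "stdin": rest}

def write(text, universe):
    # return a new universe in which `text` has been written to the screen
    return {**universe, "stdout": universe["stdout"] + [text]}

def main(old_universe):
    s, intermediate_universe = get_input(old_universe)
    return write("You typed: " + s, intermediate_universe)

world = {"stdin": ["hello"], "stdout": []}
new_world = main(world)
# `world` is untouched; `new_world` records the effect of the program
```

Because `write` needs the universe produced by `get_input`, the data dependency fixes the evaluation order, exactly as with f(g(x)) above.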
However, what happens if we try to run something like:
def main(old_universe):
input1, new_universe1 = get_input(old_universe)
input2, new_universe2 = get_input(old_universe)
final_universe = print(input1 + input2, old_universe)
return final_universe
If you run this program, will it ask you for input once or twice? Is this even a valid program? Stay tuned to find out!
Tags: home theory
# P2418「yyy loves OI IV」
Posted by prc on Tue, 01 Mar 2022 09:43:00 +0100
## 1. Title
Title Link: P2418「yyy loves OI IV」 .
### Topic background
There are two OI experts in a school in 2015, yyy and c01.
### Title Description
All the N students in the school except them will worship one of them. Now the teacher will assign them dormitories. But here comes the problem:
Students in the same dormitory must either all worship the same expert, or the absolute difference between the number worshipping yyy and the number worshipping c01 must be no more than M. Otherwise they'll fight.
For convenience, the teacher asked the N students to stand in a row. Only students standing together contiguously can be assigned to the same dormitory.
Assuming that each dormitory can accommodate any number of people, how many dormitories should be arranged at least?
### Input format
First line, two positive integers N and M.
Lines 2 to N+1 each contain a single integer, 1 or 2. The number in line i represents the expert worshipped by the (i-1)-th person from the left.
1 means yyy and 2 means c01.
### Output format
One line with one integer: the minimum number of dormitories that must be arranged.
### Sample input and output
#### Enter #1
5 1
1
1
2
2
1
#### Output #1
1
### Description / tips
Difficult questions, be prepared ~
## 2. Problem solving
### analysis
Scan from the first student to the last. It is easy to see that, except possibly for the last dormitory, every dormitory:

• either contains only worshippers of the same expert,

• or has an absolute difference of no more than M between the numbers of worshippers of the two experts.
Let dp[i] represent the minimum number of dormitories required by the first i students. Let stu[i] represent the expert worshipped by the i-th student, encoded as −1 (for 1) and 1 (for 2), and let sum[i] = \sum_{j \leq i} stu[j] be the prefix sum, i.e. the difference between the numbers of worshippers of the two experts among the first i students (this is the reason 1 and 2 are mapped to −1 and 1: it makes this difference easy to compute). Finally, let ump[u] denote the minimum number of dormitories seen so far at a prefix whose absolute worshipper difference is u (because the range of u is large, an unordered_map is chosen to store the records).
[Note] The reason ump[u] can be set up this way is that in the best case the situation in all the dormitories is the same, i.e. the same expert has the larger number of worshippers (otherwise, since there is no upper limit on dormitory size, adjacent dormitories with many worshippers of the same expert can be combined, which still meets the conditions of the problem).
• For the first case: dp[i] \leq dp[last] + 1, where last is the position of the last student who worships a different expert than student i. That is, at most one more dormitory is needed than for the prefix ending at last, because all the students between last+1 and i worship the same expert and can share one dormitory.
• For the second case: dp[i] \leq ump[abs(sum[i]) - M] + 1, i.e. one more than the minimum number of dormitories at a prefix whose absolute difference is exactly M less than the current one.
Therefore, for u \leq 0 we take ump[u] = 0: abs(sum[i]) - M \leq 0 means the difference between the two groups among the first i students does not exceed M, so at most one dormitory is required, i.e. ump[abs(sum[i]) - M] = 0.
Combining the two cases, the state transition equation can be listed: dp[i] = min(dp[last] + 1, ump[abs(sum[i]) − M] + 1). The final result is dp[n].
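Before the optimized C++ version, the recurrence can be sketched directly in Python (an illustrative reimplementation following the analysis above, not the submitted solution):

```python
def min_dorms(beliefs, m):
    # beliefs: list of 1s and 2s; map to -1/+1 so that the prefix sum
    # equals the difference between the two groups' head counts
    n = len(beliefs)
    prefix = [0] * (n + 1)
    for i in range(1, n + 1):
        prefix[i] = prefix[i - 1] + (2 * beliefs[i - 1] - 3)
    INF = float("inf")
    dp = [0] + [INF] * n
    best = {}              # best[u]: min dorms at a prefix with |difference| u
    last = {1: 0, 2: 0}    # last position where each belief value occurred
    for i in range(1, n + 1):
        other = 3 - beliefs[i - 1]
        dp[i] = dp[last[other]] + 1          # close a single-belief dorm
        u = abs(prefix[i]) - m
        if u <= 0:
            dp[i] = min(dp[i], 1)            # whole prefix fits in one dorm
        elif u in best:
            dp[i] = min(dp[i], best[u] + 1)  # close a balanced dorm
        best[abs(prefix[i])] = min(best.get(abs(prefix[i]), INF), dp[i])
        last[beliefs[i - 1]] = i
    return dp[n]

min_dorms([1, 1, 2, 2, 1], 1)   # 1, matching the sample
```

The dictionary plays the role of the unordered_map in the analysis; queries with u ≤ 0 are answered by the one-dormitory case directly.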
### code
#include <bits/stdc++.h>
using namespace std;
const int MAXN = 1e6 + 5;
int stu[MAXN], sum[MAXN], dp[MAXN];
// fast integer input from stdin
int read()
{
int x=0,f=1;
char ch=getchar();
while(ch<'0'||ch>'9')
{
if(ch=='-')
f=-1;
ch=getchar();
}
while(ch>='0'&&ch<='9')
{
x=(x<<1)+(x<<3)+(ch^48);
ch=getchar();
}
return x*f;
}
void input(int n) {
    for(int i = 1; i <= n; ++i) {
        int a = read();   // 1 or 2
        stu[i] = 2*a - 3; // map 1 -> -1, 2 -> 1
    }
}
int myabs(int x) {
return x < 0 ? -x : x;
}
void init(int n) {
for(int i = 1; i <= n; ++i) {
sum[i] = sum[i-1] + stu[i];
}
}
unordered_map <int,int> ump;
void update(int u, int x) {
if(u > 0) {
if(ump.count(u) == 0) {
ump[u] = x;
} else {
ump[u] = min(ump[u], x);
}
}
}
int query(int u) {
if(ump.count(u) == 0) {
if(u <= 0) {
return 0;
} else {
return MAXN;
}
}
return ump[u];
}
int answer(int n, int m) {
int last1 = 0, last2 = 0;
for(int i = 1; i <= n; ++i) {
if(stu[i] == 1) {
dp[i] = dp[last2] + 1;
last1 = i;
} else {
dp[i] = dp[last1] + 1;
last2 = i;
}
dp[i] = min(dp[i], query(myabs(sum[i])-m)+1);
update(myabs(sum[i]), dp[i]);
}
return dp[n];
}
int main()
{
int n, m;
scanf("%d%d", &n, &m);
input(n);
    init(n);
    printf("%d\n", answer(n, m));
    return 0;
}
# Beamer with Powerpoint title page in LyX
I want to use beamer (Madrid style) for a presentation I'm giving next week, but I am forced to use the institutional title page.
That means that the whole background is filled by an image and the title itself cannot be in a coloured box.
In the worst case I will have to pdftk the title page onto the presentation, but I'm sure there must be a way to force LaTeX to do it.
I've tried using a PlainFrame with a TitleGraphic, but
• I can't get the image to stretch across the whole page
• the title itself will still be in a coloured box
Any ideas???
## migrated from stackoverflow.com Apr 4 '13 at 22:25
This question came from our site for professional and enthusiast programmers.
Welcome to TeX.sx! Your question was migrated here from StackOverFlow. Please register on this site, too, and make sure that both accounts are associated with each other (by using the same OpenID), otherwise you won't be able to comment on or accept answers or edit your question. – pythonista Apr 4 '13 at 23:51
Since you want to suppress the coloured box for the title, you'll need a redefinition of the title page template defined by the default inner theme used by Madrid.
To suppress the footline from the titlepage, you can use the plain option for the frame.
To include the image, you can set the background canvas template, using a standard \includegraphics (of course, instead of papiro, use one of your own images) command.
To keep all these changes local, you can use the grouping mechanism.
A little example:
\documentclass{beamer}
\makeatletter
\defbeamertemplate*{title page}{mytitlepage}[1][]
{
\vbox{}
\vfill
\begin{centering}
\begin{beamercolorbox}[sep=8pt,center,#1]{}
\usebeamerfont{title}\inserttitle\par%
\ifx\insertsubtitle\@empty%
\else%
\vskip0.25em%
{\usebeamerfont{subtitle}\usebeamercolor[fg]{subtitle}\insertsubtitle\par}%
\fi%
\end{beamercolorbox}%
\vskip1em\par
\begin{beamercolorbox}[sep=8pt,center,#1]{author}
\usebeamerfont{author}\insertauthor
\end{beamercolorbox}
\begin{beamercolorbox}[sep=8pt,center,#1]{institute}
\usebeamerfont{institute}\insertinstitute
\end{beamercolorbox}
\begin{beamercolorbox}[sep=8pt,center,#1]{date}
\usebeamerfont{date}\insertdate
\end{beamercolorbox}\vskip0.5em
{\usebeamercolor[fg]{titlegraphic}\inserttitlegraphic\par}
\end{centering}
\vfill
}
\makeatother
\title{A Modified Beamer Theme}
\subtitle{Background image and no special decorations}
\author{The Author}
\institute{The Institute}
\begin{document}
\begingroup
\setbeamertemplate{background canvas}{\includegraphics[width=\paperwidth,height=\paperheight]{papiro}}
\begin{frame}[plain]
\maketitle
\end{frame}
\endgroup
\begin{frame}
\frametitle{A regular frame}
test
\end{frame}
\end{document}
-
This is absolutely brilliant! Works like a treat. – Joanne Demmler Apr 5 '13 at 9:44
|
|
# 1.2 Use the language of algebra (Page 5/18)
Page 5 / 18
Evaluate $3{x}^{2}+4x+1$ when $x=3.$
40
Evaluate $6{x}^{2}-4x-7$ when $x=2.$
9
## Indentify and combine like terms
Algebraic expressions are made up of terms. A term is a constant, or the product of a constant and one or more variables.
## Term
A term is a constant, or the product of a constant and one or more variables.
Examples of terms are $7,y,5{x}^{2},9a,\text{and}\phantom{\rule{0.2em}{0ex}}{b}^{5}.$
The constant that multiplies the variable is called the coefficient .
## Coefficient
The coefficient of a term is the constant that multiplies the variable in a term.
Think of the coefficient as the number in front of the variable. The coefficient of the term 3 x is 3. When we write x , the coefficient is 1, since $x=1·x.$
Identify the coefficient of each term: 14 y $15{x}^{2}$ a .
## Solution
The coefficient of 14 y is 14.
The coefficient of $15{x}^{2}$ is 15.
The coefficient of a is 1 since $a=1\phantom{\rule{0.2em}{0ex}}a.$
Identify the coefficient of each term: $17x$ $41{b}^{2}$ z .
14 41 1
Identify the coefficient of each term: 9 p $13{a}^{3}$ ${y}^{3}.$
9 13 1
Some terms share common traits. Look at the following 6 terms. Which ones seem to have traits in common?
$\begin{array}{cccccccccccccccc}5x\hfill & & & 7\hfill & & & {n}^{2}\hfill & & & 4\hfill & & & 3x\hfill & & & 9{n}^{2}\hfill \end{array}$
The 7 and the 4 are both constant terms.
The 5x and the 3 x are both terms with x .
The ${n}^{2}$ and the $9{n}^{2}$ are both terms with ${n}^{2}.$
When two terms are constants or have the same variable and exponent, we say they are like terms .
• 7 and 4 are like terms.
• 5 x and 3 x are like terms.
• ${n}^{2}$ and $9{n}^{2}$ are like terms.
## Like terms
Terms that are either constants or have the same variables raised to the same powers are called like terms .
Identify the like terms: ${y}^{3},$ $7{x}^{2},$ 14, 23, $4{y}^{3},$ 9 x , $5{x}^{2}.$
## Solution
${y}^{3}$ and $4{y}^{3}$ are like terms because both have ${y}^{3};$ the variable and the exponent match.
$7{x}^{2}$ and $5{x}^{2}$ are like terms because both have ${x}^{2};$ the variable and the exponent match.
14 and 23 are like terms because both are constants.
There is no other term like 9 x .
Identify the like terms: $9,$ $2{x}^{3},$ ${y}^{2},$ $8{x}^{3},$ $15,$ $9y,$ $11{y}^{2}.$
9 and 15, ${y}^{2}$ and $11{y}^{2},$ $2{x}^{3}$ and $8{x}^{3}$
Identify the like terms: $4{x}^{3},$ $8{x}^{2},$ 19, $3{x}^{2},$ 24, $6{x}^{3}.$
19 and 24, $8{x}^{2}$ and $3{x}^{2},$ $4{x}^{3}$ and $6{x}^{3}$
Adding or subtracting terms forms an expression. In the expression $2{x}^{2}+3x+8,$ from [link] , the three terms are $2{x}^{2},3x,$ and 8.
Identify the terms in each expression.
1. $9{x}^{2}+7x+12$
2. $8x+3y$
## Solution
The terms of $9{x}^{2}+7x+12$ are $9{x}^{2},$ 7 x , and 12.
The terms of $8x+3y$ are 8 x and 3 y .
Identify the terms in the expression $4{x}^{2}+5x+17.$
$4{x}^{2},5x,17$
Identify the terms in the expression $5x+2y.$
5 x , 2 y
If there are like terms in an expression, you can simplify the expression by combining the like terms. What do you think $4x+7x+x$ would simplify to? If you thought 12 x , you would be right!
$\begin{array}{c}\hfill 4x+7x+x\hfill \\ \hfill x+x+x+x\phantom{\rule{1em}{0ex}}+x+x+x+x+x+x+x\phantom{\rule{1em}{0ex}}+x\hfill \\ \hfill 12x\hfill \end{array}$
Add the coefficients and keep the same variable. It doesn’t matter what x is—if you have 4 of something and add 7 more of the same thing and then add 1 more, the result is 12 of them. For example, 4 oranges plus 7 oranges plus 1 orange is 12 oranges. We will discuss the mathematical properties behind this later.
Simplify: $4x+7x+x.$
## How to combine like terms
Simplify: $2{x}^{2}+3x+7+{x}^{2}+4x+5.$
## Solution
Simplify: $3{x}^{2}+7x+9+7{x}^{2}+9x+8.$
$10{x}^{2}+16x+17$
Simplify: $4{y}^{2}+5y+2+8{y}^{2}+4y+5.$
$12{y}^{2}+9y+7$
## Combine like terms.
1. Identify like terms.
2. Rearrange the expression so like terms are together.
3. Add or subtract the coefficients and keep the same variable for each group of like terms.
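The three-step procedure above can be sketched in code (a plain-Python illustration of my own, not from the text; terms are modeled as (coefficient, variable-part) pairs):

```python
from collections import defaultdict

def combine_like_terms(terms):
    """terms: list of (coefficient, variable_part) pairs,
    e.g. 2x^2 + 3x + 7 -> [(2, 'x^2'), (3, 'x'), (7, '')].
    Like terms share the same variable part; add their coefficients."""
    totals = defaultdict(int)
    order = []
    for coeff, var in terms:
        if var not in totals:
            order.append(var)      # remember first-seen order of each group
        totals[var] += coeff       # add coefficients, keep the variable part
    return [(totals[var], var) for var in order]

# The worked example 2x^2 + 3x + 7 + x^2 + 4x + 5 simplifies to 3x^2 + 7x + 12:
print(combine_like_terms([(2, 'x^2'), (3, 'x'), (7, ''),
                          (1, 'x^2'), (4, 'x'), (5, '')]))
# -> [(3, 'x^2'), (7, 'x'), (12, '')]
```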
Amara currently sells televisions for company A at a salary of $17,000 plus a $100 commission for each television she sells. Company B offers her a position with a salary of $29,000 plus a $20 commission for each television she sells. How many televisions would Amara need to sell for the options to be equal?
what is the quantity and price of the televisions for both options?
karl
I'm a mathematics teacher from a highly recognized university.
is anyone else having issues with the links not doing anything?
Yes
Val
chapter 1 foundations 1.2 exercises variables and algebraic symbols
June needs 45 gallons of punch for a party and has 2 different coolers to carry it in. The bigger cooler is 5 times as large as the smaller cooler. How many gallons can each cooler hold? Enter the answers in decimal form.
Joseph would like to make 12 pounds of a coffee blend at a cost of $6.25 per pound. He blends Ground Chicory at $4.40 a pound with Jamaican Blue Mountain at $8.84 per pound. How much of each type of coffee should he use?
Samer
4x6.25= $25 coffee blend, 4×4.40= $17.60 ground chicory, 4x8.84= $35.36 blue mountain. In total they will spend $77.96 for the 12 pounds
tyler
DaMarcus and Fabian live 23 miles apart and play soccer at a park between their homes. DaMarcus rode his bike for three-quarters of an hour and Fabian rode his bike for half an hour to get to the park. Fabian’s speed was six miles per hour faster than DaMarcus’ speed. Find the speed of both soccer players.
i need help how to do this is confusing
what kind of math is it?
Danteii
help me to understand
huh, what is the algebra problem
Daniel
How many soldiers are there in a group of 27 sailors and soldiers if there are four fifths many sailors as soldiers?
tyler
What is the domain and range of heaviside
What is the domain and range of Heaviside and signum
Christopher
25-35
Fazal
The hypotenuse of a right triangle is 10cm long. One of the triangle’s legs is three times the length of the other leg. Find the lengths of the three sides of the triangle.
Tickets for a show are $70 for adults and $50 for children. For one evening performance, a total of 300 tickets were sold and the receipts totaled $17,200. How many adult tickets and how many child tickets were sold?
A 50% antifreeze solution is to be mixed with a 90% antifreeze solution to get 200 liters of a 80% solution. How many liters of the 50% solution and how many liters of the 90% solution will be used?
June needs 45 gallons of punch for a party and has 2 different coolers to carry it in. The bigger cooler is 5 times as large as the smaller cooler. How many gallons can each cooler hold?
Washing his dad’s car alone, eight-year-old Levi takes 2.5 hours. If his dad helps him, then it takes 1 hour. How long does it take Levi’s dad to wash the car by himself?
Bruce drives his car for his job. The equation R=0.575m+42 models the relation between the amount in dollars, R, that he is reimbursed and the number of miles, m, he drives in one day. Find the amount Bruce is reimbursed on a day when he drives 220 miles.
|
|
## Elementary Algebra
Let X represent the amount of pure alcohol to be added. The second solution is 10 liters, so the resulting solution will be 10 + X liters. We use the following guideline to solve this problem: amount of alcohol in the first solution + amount of alcohol in the second solution = amount of alcohol in the final solution. The first solution is all alcohol; it has X liters of it. The second solution has 10 $\times$ 70% = 10 $\times$ 0.7 = 7 liters of alcohol. The resulting solution is 90% alcohol, which means it has 90% $\times$ (10 + X) = 0.9 $\times$ (10 + X) = 9 + 0.9X liters of alcohol. We then solve the equation:

X + 7 = 9 + 0.9X
0.1X = 2

Divide both sides by 0.1:

X = 20

So 20 liters of alcohol must be added.
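A quick numeric check of this solution (a plain-Python sketch, not from the text):

```python
X = 20                    # liters of pure alcohol added (the claimed answer)
total = 10 + X            # liters of final solution
alcohol = X + 7           # pure alcohol plus the 10 * 0.7 = 7 liters already present
print(alcohol / total)    # -> 0.9, i.e. the required 90% concentration
```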
|
|
Could someone explain, like just a geometric description, how the space $\mathbb{R}^3 \setminus A$, where $A$ is the unit circle in the $xy$-plane, is homotopy equivalent to $S^2 \vee S^1$, the one point union of a 2-sphere and a circle. I can't see it.
|
|
# Thread: Help with a Limit Problem
1. ## Help with a Limit Problem
I don't understand how to go about doing this problem:
$$\lim_{\Delta x \to 0} \frac{\left(\frac{1}{3} + \Delta x\right)^2 - \frac{1}{9}}{\Delta x}$$
Any help will be much appreciated!
2. Use this $\left( {\frac{1}{3} + \Delta x} \right)^2 - \frac{1}{9} = \frac{{2\Delta x}}{3} + \left( {\Delta x} \right)^2$.
3. Originally Posted by Plato
Use this $\left( {\frac{1}{3} + \Delta x} \right)^2 - \frac{1}{9} = \frac{{2\Delta x}}{3} + \left( {\Delta x} \right)^2$.
What happens to the 1/9?
4. Originally Posted by blurain
What happens to the 1/9?
It goes away because you have (1/9) - (1/9) = 0.
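Putting the two hints together, the limit works out as follows (the final simplification is mine):

$$\lim_{\Delta x \to 0} \frac{\left( \frac{1}{3} + \Delta x \right)^2 - \frac{1}{9}}{\Delta x} = \lim_{\Delta x \to 0} \frac{\frac{2\Delta x}{3} + (\Delta x)^2}{\Delta x} = \lim_{\Delta x \to 0} \left( \frac{2}{3} + \Delta x \right) = \frac{2}{3}.$$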
|
|
### Large-scale simulations of cosmic necklaces
David Weir [he/him/his] - University of Helsinki - davidjamesweir
This talk: saoghal.net/slides/lisa-necklaces
LISA CosWG workshop, 16 January 2019
### With:
Mark Hindmarsh • Anna Kormu • Asier Lopez-Eiguren • Kari Rummukainen
### What are cosmic necklaces?
• Necklaces are networks of strings carrying monopoles.
• Models can be embedded in GUTs, including $\mathrm{SO}(10)$.
• Might behave differently to strings - role of monopoles?
• Observations:
• Cosmic ray and $\gamma$-ray signals? astro-ph/9704257
• Gravitational waves? (or lack thereof)
### The model
• "$\mathrm{SU}(2)$ Georgi-Glashow model with two adjoint Higgses" $$\begin{multline} V(\Phi_1,\Phi_2) = - m_1^2\mathrm{Tr}\Phi_1^2 - m_2^2\mathrm{Tr}\Phi_2^2 \\ +\lambda (\mathrm{Tr}\Phi_1^2)^2 + \lambda (\mathrm{Tr}\Phi_2^2)^2 +\kappa(\mathrm{Tr}\Phi_1\Phi_2)^2. \end{multline}$$ with $\Phi_n^a = \phi_n^a\sigma^a/2$ and the usual action.
• If $m_1^2 > m_2^2$, get necklaces with monopoles, charge $\{\pm 1\}$.
• If $m_1^2 = m_2^2$, behaviour depends on $\kappa/2\lambda$:
• If $\kappa = 2\lambda$, there's a global $\mathrm{U}(1)$ - superfluidity.
• If $\kappa \neq 2\lambda$, monopoles split into two semipoles .
### Necklaces
• When $m_1^2 > m_2^2$, $\mathrm{U}(1)$ breaks to $Z_2$
• Strings form, joining up the monopoles
### Semipoles
• If $m_1^2 = m_2^2$ and $\kappa/2\lambda \neq 1$, get semipoles (here $\kappa/2\lambda < 1$)
• Four types of pole, with complex charge $\{1, i, -1, -i\}$.
• Linked by four types of string, with complexified flux $\Phi_B^{(1)} + i \Phi_B^{(2)} \in \{1 + i, -1 + i, -1 - i, 1 - i\}$.
### Necklaces: field theory movie
Video credit: Anna Kormu [link to Vimeo]
### Necklaces: string and pole position movie
Video credit: Asier Lopez-Eiguren
### Semipoles ($\kappa/2\lambda < 1$)
Video credit: Anna Kormu [link to Vimeo]
### Dynamics: key quantities
• Lattice projectors give the positions of monopoles (total $N$) and plaquettes with winding (total $L$).
• Average defect separations in volume $V$ $$\xi_\mathrm{m} = \left(\frac{V}{N}\right)^{1/3}; \qquad \xi_\mathrm{s} = \left(\frac{V}{L}\right)^{1/2}.$$
• Comoving pole density $$n = \frac{\xi_\mathrm{s}^2}{\xi_\mathrm{m}^3}.$$
• Energy, Lagrangian-derived measures less successful 🤷♂️.
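The bookkeeping on this slide reduces to a few lines of code (a plain-Python sketch of my own, with made-up numbers; note that $n$ simplifies to $N/L$, the number of poles per unit of comoving string length):

```python
def separations(V, N, L):
    """V: comoving volume, N: monopole count, L: plaquettes with winding."""
    xi_m = (V / N) ** (1 / 3)   # average monopole separation
    xi_s = (V / L) ** (1 / 2)   # average string separation
    n = xi_s**2 / xi_m**3       # comoving pole density; algebraically N / L
    return xi_m, xi_s, n

# Illustrative lattice-sized numbers (not from any actual run):
xi_m, xi_s, n = separations(V=512**3, N=1000, L=200000)
```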
### $\xi_\mathrm{s}$ and $\xi_\mathrm{m}$ - necklaces
• String separation: linear scaling with conformal time $\tau$.
• Monopole separation: $\xi_\mathrm{m} \propto \tau^{2/3}$.
• No real dependence on initial $\xi_\mathrm{m}$.
### $\xi_\mathrm{s}$ and $\xi_\mathrm{m}$ - semipoles
• Same general story as for necklaces.
• Some hints of different $\kappa/2\lambda$ behaviour...
### Monopole linear density - necklaces and semipoles
• Note 'pseudopoles': projecting wrong complexified $\mathbf{B}$ fields.
• Everything physical goes to constant $n$ (more or less).
### Consequences for gravitational waves
• Key point: constant comoving distance between poles.
• "Nambu-Goto + blobs" model: Siemens, Martin, Olum
• No periodic non-self intersecting solutions.
• ☛ Necklaces chopped up until no monopoles left.
• GUT-scale loops might be metre-scale, not horizon-size.
• String tension constraints relaxed?
### Conclusion
• Large simulations of necklaces in radiation era: 10M CPUh.
• String separation scales with conformal time: $\xi_\mathrm{s} \propto \tau$.
• But comoving pole density $n$ along strings is constant
⇒ Monopoles are "along for the ride"...
• Models in the literature don't quite work.
### View this talk again
saoghal.net/slides/lisa-necklaces
(We also have 3D and 360 movies!)
## Extra stuff
### Implementing the simulations ...
• Real-time lattice simulation, temporal gauge ($A^0 = 0$).
• Initial conditions and evolution must satisfy Gauss law.
• Fundamental quantities: links $U_i \in \mathrm{SU}(2)$ and fields $\Phi_n$.
• Our key measurements are the location of monopoles and plaquettes with winding number.
### Do the simulations work?
• ✅ Yes, we can obtain the residual $\mathrm{U}(1)$ field.
• ✅ Yes, we can measure the vortex winding associated with the other Higgs field.
• ✅ Yes, they're both gauge invariant.
• For necklaces, the heavier field (wlog $\Phi_1$) forms monopoles.
• For semipoles, we have to make two magnetic fields (both scalar fields have a vev in places).
### More semipoles
• When $m_1^2 = m_2^2$ and $\kappa/2\lambda > 1$, slight difference...
• Vacua rotated by 45° so string fields are rotated, too, $$\Phi_\pm = (\Phi_1 \pm \Phi_2)/\sqrt{2}.$$
### Special case
• When $m_1^2 = m_2^2$ and $\kappa/2\lambda = 1$, there is a global $\mathrm{U}(1)$.
(We won't look at this any further today...)
### Getting the monopoles
• Make projectors $\Pi_\pm = \frac{1}{2} (1 \pm \hat{\Phi}_1)$ where $$\hat{\Phi}_1 = \Phi_1\sqrt{2/\mathrm{Tr}\, \Phi_1^2}$$
• Get the $\mathrm{U}(1)$ gauge field corresponding to $\Phi_1$,
• $$u_\mu(x) = \Pi_+(x) U_\mu(x) \Pi_+(x+\hat{\mu}).$$
• Construct an effective "field strength tensor" hep-lat/0009037 $$\alpha_{\mu\nu} = \frac{2}{g} \; \mathrm{arg} \; \mathrm{Tr}\; u_\mu(x) u_\nu(x+\hat{\mu}) u_\mu^\dagger(x+\hat{\nu}) u_\nu^\dagger(x).$$
• From which effective magnetic field and charge is $$B_i = \frac{1}{2} \epsilon_{ijk} \alpha_{jk}; \qquad \rho_{\mathrm{M}}= \sum_{i=1}^3 [B_i(x+\hat{\imath}) - B_i(x)]$$
### Getting the winding
• Similar to Abelian Higgs hep-ph/9809334
• Difference in phase angle for $\Phi_2$ hep-lat/0009037 $$\begin{multline}\delta_i(x) = \mathrm{arg} \; \mathrm{Tr} \; \big[ \hat{\Phi_2}(x) \Pi_-(x) U_i(x) \Pi_-(x+\hat{\imath}) \\ \hat{\Phi_2}(x +\hat{\imath}) \Pi_+(x+\hat{\imath}) U_i^\dagger(x) \Pi_+(x) \big]. \end{multline}$$
• Winding number through a plaquette in the $ij$-plane at $x$ is then $$Y_{ij}(x) = \delta_i(x) + \delta_j(x+\hat{\imath}) - \delta_i(x+\hat{\jmath}) - \delta_j(x) - g \alpha_{ij}(x).$$
• We can trace this (and the monopoles) through the lattice.
### Running the simulations
• "Naive" random initial conditions (memory soon lost).
• Simulate physical equations of motion, but first:
• Some smoothing $$\Phi_n(\mathbf{x}) \to \frac{1}{12} \sum_i \left[ \Phi_n(\mathbf{x}-\hat{\imath}) + 2\Phi_n(\mathbf{x}) + \Phi_n(\mathbf{x} + \hat{\imath}) \right].$$
• Then some heavy damping in a Minkowski background.
• And 'core growth' (run equations with $s=-1$).
• $1920^3$ lattices, run for one light crossing time.
• Gauss Law OK, energy conservation < 1%.
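The smoothing step above can be sketched as follows (a plain-NumPy illustration of my own, assuming periodic boundaries; the stencil weights sum to one, so the lattice average is preserved):

```python
import numpy as np

def smooth(phi):
    """One pass of the smoothing stencil on a 3D periodic lattice:
    phi(x) -> (1/12) * sum_i [phi(x - i_hat) + 2*phi(x) + phi(x + i_hat)]."""
    out = np.zeros_like(phi)
    for axis in range(3):
        out += np.roll(phi, 1, axis) + 2 * phi + np.roll(phi, -1, axis)
    return out / 12.0
```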
### Limitations of the simulations
(in addition to the usual caveats for field theory strings)
• Mass scales $m_1^2$ and $m_2^2$ will never be that different 😕
• Largest we have is $m_2^2/m_1^2 = 0.04$.
• Decrease $m_2$? Fatter strings reduce statistics.
• Decrease $m_2$? Defect formation dynamics happens on time $1/m_2$ - need longer simulations. Light crossing time?
• Increase $m_1$? Smaller monopoles risk pinning on lattice.
### Extra: Semipoles ($\kappa/2\lambda > 1$)
Video credit: Anna Kormu [link to Vimeo]
(note, isosurfaces of unrotated $\Phi_1$, $\Phi_2$)
### Extra: $r$
• Define the linear monopole density $n$ in units of $d_\mathrm{BV}$, $$r = d_\mathrm{BV} \frac{n}{a}, \quad \text{where} \quad d_\mathrm{BV} = \frac{M_\mathrm{m}}{\mu}$$
• and the network energy density is $$\rho_n \simeq \frac{\mu}{\xi_\mathrm{s}^2}(1+r).$$
• So $r$ gives the ratio of energy density in strings to monopoles.
• If $\xi_\mathrm{s} \propto \tau$ and $r$ constant, monopoles are a constant fraction of the total energy.
|
|
# apt installing more packages than specified as dependencies
I was trying to install texmaker from the repository. To install TeX Live I followed the steps described here, using this control file, which has texlive-binaries in it.
# Math notation on WordPress.com?
I am a new user with WordPress, and I would like to use mathematical formulas. I have been reading the mathjax pages for hours, such as this one and quite a few others. This may be a bad question, but can someone please help me? All I want to do is enable Latex, and I am hopelessly lost. My most recent attempts have been trying to find the “header file” so that I can copy and paste in
|
|
# Could use some help on physics (mechanics problem help)
1. May 4, 2004
### shivas
i was wondering if someone on this board would kindly bestow their knowledge onto this problem... thank you..
A model rocket launches with an initial net upward acceleration of 1.5g, where g is the absolute value of the acceleration due to gravity. This acceleration is maintained for 3 seconds. After the 3 seconds have passed, the rocket coasts and eventually falls back to earth. At what time does the rocket reach its maximum height?
-- thanks!
2. May 4, 2004
### Integral
Staff Emeritus
What have you done?
What equations have you been given that might be of use? There are several different ways of solving this problem depending on the level of your Math, show us some of what you know about it. That will help us know where to start.
3. May 4, 2004
### shivas
well im thinkin i should use kinematics equations to solve this.... but not sure which one
i know how to do calculus...
4. May 4, 2004
### Parth Dave
kinematics equations are all you need.
First of all, what can you say about the velocity when the rocket reaches its max height? see if that helps.
5. May 4, 2004
### shivas
velocity is zero at maX, right?
6. May 4, 2004
### Parth Dave
correct. Now what formula do you have to find velocity?
and what variables do you know in the formula?
7. May 5, 2004
### ShawnD
Break it into 2 parts, accelerating and decelerating. First, we know the time for the acceleration part, now find the velocity for it.
$$V = at$$
$$V = (1.5g)(3)$$
$$V = 4.5g$$
Now for the second part. We know the final velocity is 0 but we don't know the time, so lets find the time.
$$V_f = V_i + at$$
Vf is 0, Vi is the V from part one, a is -g (since it's the opposite direction of the velocity) and t is just t.
$$0 = 4.5g + (-g)(t)$$
$$gt = 4.5g$$
$$t = 4.5$$
So you know the time from part one is 3s and the time from part 2 is 4.5 seconds. Now add em together
$$t = 3 + 4.5$$
$$t = 7.5$$ seconds
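ShawnD's two-phase answer can be double-checked numerically (a quick Python sketch of my own; I use g = 10 m/s² for round numbers, since its value cancels out anyway):

```python
g = 10.0                      # m/s^2; the value cancels in the final answer

# Phase 1: net upward acceleration of 1.5g for 3 s
t_burn = 3.0
v_burnout = 1.5 * g * t_burn  # velocity at burnout, the "4.5g" above

# Phase 2: coasting, decelerating at g until the velocity reaches 0
t_coast = v_burnout / g       # = 4.5 s, independent of g

t_max_height = t_burn + t_coast
print(t_max_height)           # -> 7.5
```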
|
|
# Have I Shot Myself in the Foot
1. Apr 30, 2010
Basically, I'm about to be a freshman in college and turned down Rice University for UChicago, which will be ~$100k more expensive. I've planned on doing physics or math or maybe econ...but reading on these fora it looks like an engineering degree is just as good/better than a physics degree for almost everything after undergrad. Have I made a truly awful decision? I've no doubt that I like Chicago better than Rice, but have I erred? I'm pretty sure that I like physics more than engineering, since I like learning more than I like making things, but what good is that if I've got a Ph.D and am unemployed (as posts on this forum seem to portray as fairly likely)?
2. Apr 30, 2010
### marcusl
Don't make your choices on the basis of complaints from the un-/under-employed on these forums. First of all, we've just come through a terrible recession where the economy nearly collapsed, so this has not been a typical job market. Keep in mind that you are nine (8 to 11 anyway) years away from a PhD, so the job market you face will be very different. Second, you can always do engineering as a physicist, and in general industry appreciates and rewards PhD's. Third, UChicago is a name-brand school. I wouldn't worry. Do your job (that will be to learn, explore Chicago, do undergrad things, excel in your field) and the rest should take care of itself.
3. Apr 30, 2010
### chingkui
Welcome to PF. If you acquire skills that employers find useful, it is unlikely you will be unemployed. There are probably a lot more opportunities than you think there are; just keep your mind open, and you might get a job in a totally unrelated industry. That said, businesses hire people to build things rather than learn things. If you want a job, you need to acquire skills that businesses find valuable. I understand your frustration; I have been a freshman before, and have worried about lots of things in choosing a career.
Whether it is engineering or physics, you will get involved in both learning and building things, and in engineering school, if you desire, you can acquire a very solid education in the science and mathematics behind building things. I am not trying to persuade you to get an engineering degree; rather, I am just making the point that physics vs engineering is not learning vs building things. If you look at college as an investment, you are investing in yourself to acquire marketable skills and credentials, as well as building your network. Whether paying $100K more is worth it has to be answered by you: do you think you can recoup the cost, or is the premium worth it if it will provide you with a unique experience/opportunity not easily measurable in monetary terms?
4. Apr 30, 2010
### twofish-quant
It's really, really important to distinguish the academic job market from the general job market. It's hard for a physics Ph.D. to get employment as a research professor, but among the physics Ph.D.'s that I've known, not a single one is or has been unemployed in the last year, and everyone that wants one has some decent middle class job.
5. Apr 30, 2010
### Iforgot
Physicists can be hired as engineers, but only if they've demonstrated expertise in the specific branch of engineering. I know the job boards indicate lots of positions for physics PhD's, but each one is looking for a specific set of skills and experience.
While in undergraduate, the best way to build skills experience that is widely respected is through corporate internships. Academic summer internships can be productive, but you might be stuck working on some "who cares" project with no well defined goal. Corporate internships have well defined goals i.e. make people money by doing something productive. That looks really good on a resume. (I increased packing efficiency by using technique x, saving q million dollars)
6. Apr 30, 2010
### twofish-quant
Financial positions for physics Ph.D.'s usually aren't looking for things that are too specific. There is the general issue of how good are you at computers and how good are you at math, but other than that financial firms would actually prefer a good mix of people with different backgrounds.
7. Apr 30, 2010
### zooxanthellae
The only qualm I have about going into the corporate world is that (and this, for you older and wiser people, probably makes you chuckle and shake your head) I'm interested in Physics because I think it's the most important thing out there. Physics for me is investigating what reality is - what the world is really made of and how it works. I don't think I'd be looking at those fundamental questions in the working world; it seems like I'd do something more along the lines of applied math or computer science.
Of course, all of this is likely pretty idealistic and naive.
Also, thank you for the many responses. This forum is becoming a wealth of information for me.
8. May 1, 2010
### twofish-quant
So is politics, marketing, law, and economics. It's a different sort of reality, but it's still reality.
Surprisingly you do. Let me ask a very basic fundamental question that people don't think very much about. What is money? How does money behave? How do you create money? How do you destroy money? It's a pretty fundamental question, but the more I think about it, the less I really understand.
You can try to argue that money is a concept that isn't "real" but then you end up thinking about what is "real". Are atoms real? There's this idea that you can break everything up into particles and if you understand the interactions between particles, you can understand everything. That doesn't work well with money.
And then there is "time" which is another interesting concept.
I've found it's useful to keep asking questions, and the thing that you have to be careful about is not get yourself in a situation where you don't feel comfortable asking a question because, "it's not your major."
9. May 1, 2010
### zooxanthellae
I'm not arguing that fields like economics or finance or law aren't incredibly complex and interesting, but I will argue that they aren't as fundamental as (what I imagine) Physics is. Namely, because those fields deal with fluid human constructs, not universal ones. At the end of the day, if I rise to the top of the heap in Economics, I've become better than most at understanding only how humans do things. Physics seems more encompassing, if that makes any sense.
Which is honestly a reason I picked Chicago over Rice, because I feel that will give me a good grounding in many subjects and keep my options much more open.
10. May 1, 2010
### twofish-quant
That's a common idea. Personally I would disagree with it, but one thing that you should start looking into is where this idea came from (it started with Plato).
You are getting this from Plato. Personally, I come from a very, very different philosophical tradition in which the division between human constructs and Platonic universals is less sharp. Also I went to school in places that quite explicitly reject this idea.
Part of the problem here is that you learn your ideas from human beings, and if you don't understand where those ideas came from, it makes it more difficult to question them. One trap that I've seen physics undergraduates fall into is that because they just study physics and nothing else, they don't really think about or question some of the reasons why they study physics, and this causes problems when you get into the "real world."
One big problem with Plato's ideas is that they really aren't economically sustainable. Sure it's great to be a philosopher-king until you find out that everyone wants to be a philosopher-king and no one plows the fields.
It does, but you need to ask yourself where you got that idea from. Usually the answer is your parents and teachers. Then you ask where *they* got their ideas from. I'm pretty sure if you follow this thread, you'll end up with Plato. At that point you can have a conversation with Plato, so see if what you think really makes sense.
One thing that you really, really do need to do is to spend your graduate school years somewhere other than Chicago. University of Chicago has an interesting history and so what they teach and the way that they teach it will be influenced by that history. This is good, but you'll end up learning a lot more if you go somewhere for graduate school that is really, really different.
Chicago-physics and Chicago-economics are different from NYC-physics and NYC-economics, just like the pizza you get in Chicago is different from what you get in NYC.
11. May 1, 2010
### zooxanthellae
I was under the impression that almost everybody goes somewhere different for grad school? Is this not true?
Also, could you recommend any books that give a good picture of what life as a quant is like, or is that not possible?
Last edited: May 1, 2010
12. May 1, 2010
### twofish-quant
Start with the classics. Plato's _Republic_. Also wikipedia has a good article on Plato. Some other people that you should read are Leo Strauss, Mortimer Adler, and Allan Bloom. Note here that reading someone doesn't mean that you should agree with them. The reason I mention Strauss, Adler, and Bloom are going to be important is that you will run into their ghosts while you are at Chicago, and it's useful to put names to the ghosts you meet.
It tends to be true, but what I think is important is to go to a different *type* of school.
The complete guide to capital markets for quantitative professionals by Alex Kuznetsov is probably the best book since it provides an overview of "so what is it that people do?"
My Life as Quant by Derman is good if you read it as history and not as a description of what skills are useful now.
Mark Joshi has a "advice" pdf on his site.
13. May 2, 2010
### zooxanthellae
Thanks for the recommendations; I'll try to read a few of those this summer.
Just out of curiosity, what other schools would you say are similar to Chicago's "type"?
The Kuznetsov book - would it be good for someone who truly has very little knowledge of the stock market? I know the most basic stuff and that's about it (some of the things that a stock's price reflects, value investing, and little else). I do not, for example, really have any idea of what a derivative is. I only ask because the price is a bit prohibitive, and I want to make sure this is a book I'll be able to understand. Also, one Amazon review recommends it for people who have "1-3 years of industry experience"...?
Thanks for all the help, by the way.
Last edited: May 2, 2010
14. May 3, 2010
### twofish-quant
It depends on the classification scheme, but there are a number of other elite private universities in the US (i.e. Columbia, Harvard, Yale).
The Kuznetsov book doesn't go into any of that. The great thing about the Kuznetsov book is that it's the only book that I know of that answers the question "so what do people in an investment bank actually do?" It's more about people than products. It's likely to be hopelessly out of date in about two years.
You can go onto Amazon and look at the preview.
If you find something else, let me know.
Something else that's a great resource is Dominic Connor's FAQ
|
|
# Transactions of the Geological Society, 1st series, vol. 4/On the Stream Works of Pentowan
On the Stream Works of Pentowan by Edward Smith (geologist)
XXIII. On the Stream Works of Pentowan.
By EDWARD SMITH, Esq.
When, a little time back on a mining excursion in the district of St. Austle, my avocation for the first time called me to visit a stream-work in that neighbourhood, I was so struck with what I saw, that I employed a second day from morning to evening in a most scrupulous examination of one of the works, from a wish of communicating my observations to the Geological Society.
The works that I visited are called the upper and lower Pentowan Stream Works, and are situated on the river which flows from Hensbarrow Hill by St. Austle, and enters the sea about three miles and a half south of that town, and at about the same distance north of Chappel Point, after a course of somewhat more than eight miles. I calculate the elevation of Hensbarrow Hill at 900 or 1000 feet above the sea; from thence to St. Austle the descent of the ground being very rapid the river is precipitated over many considerable rocks, and during the rainy season may be considered as a succession of cascades. In the dry season there is but little water, but after sudden rains the rise is both rapid and dangerous. Small rounded pebbles are found all the way in this part of its bed. From St. Austle to the sea the descent of the land is very gentle, and the hills running north and south on each side of the river seem to direct its course. In some places these approach very near together, in others they widen, and leave a greater expanse of plain. The whole of these levels from within half a mile of St. Austle to the sea having been found rich in deposits of tin, have at various times been turned over, and great quantities of ore have thus been obtained: they are all at this time enclosed and cultivated. At the mouth of the river the shore is flat on both sides to some distance; at half a mile to the east begin the high cliffs that extend to Black Bear point.
The Upper Pentowan work lies about one mile north of the beginning of the sea-beach, and about one mile and a quarter north of the sea, the valley being there about half a mile wide. The following is the section of the Strata which I observed in the Upper Pentowan Stream Work.
| Stratum | Feet | Depth from surface (feet) |
|---|---|---|
| 1. Soil with trees growing thereon | 3 | 3 |
| 2. Deposit of mud mixed with small gravel, waving thus | 20 | 23 |
| 3. Small grained spar and killas | 3 | 26 |
| 4. Growan (or decomposed granite), spar, killas, &c. similar to those which are now found on Hensbarrow and other neighbouring hills | 5 | 31 |
| 5. Gravel, at the bottom of which are oak trees and branches, of great size | 5 | 36 |
| 6. Tin ground | 5 | 41 |
| 7. Clay, in which were the roots of a vast oak tree, which seemed to remain in the very position in which it grew | | |

I was told that the trunk had been cut away for fire-wood, and that various trees had been found. Three feet above this level, and at no great distance from the root, was the end of a branch of oak of great size projecting from the wall of the work; the part visible was about 4 feet long and 3 feet in diameter. It is not improbable that at a greater depth a second mineral deposit may be found, as has been known to occur elsewhere under a bed of clay.
The lower Pentowan work lies three quarters of a mile south of the upper, and about half a mile north of the sea. The plain is more contracted here than at the upper work, and the river flows immediately west of the excavation, and nearly on a level with the upper part of it. The excavation measures from north to south about 400 feet, from east to west about 250, and is 54$\scriptstyle \frac 12$ feet deep. In form it resembles an amphitheatre, being cut into deep stopes (as the miners term them) by which their work is upheld. The miner's object is to come at a deposit of tin five feet thick at the bottom of the pit, and as he works forward he throws behind him the waste matter. Water is conveyed from the river by a wooden trough into an insulated mass of the lower stratum, in which the tin is washed.
The following is the Section of the Strata at the Lower Work.
| Stratum | Feet | Depth from surface (feet) |
|---|---|---|
| 1. Soil with trees growing in great luxuriance, some very old, and gravel towards the bottom | 3 | 3 |
| 2. Fine peat. At the bottom are roots of trees, fallen trunks with ivy attached to them, and sticks impregnated with salt. In this stratum also are found sea laver and rushes | 12 | 15 |
| 3. Sea mud, which when dry resembles fine grey sand. At the top are masses of leaves compressed flat, whose characters are still to be distinguished. Under the leaves are cockle shells, well preserved; these, as the stratum deepens, become more decayed. At the bottom is a bed of very small shells in great abundance, 1 foot thick, and then a thin layer of small shells in a very decayed state | 20 | 35 |
| 4. Sea mud, with large oyster shells and cockles | 4 | 39 |
| 5. Vegetable substances, with flat compressed leaves, and a few rotten shells | 6½ | 45½ |
| 6. Vegetable substances, without shells; containing rushes, fallen trees, flat compressed leaves, roots covered with moss and compressed to an oval form, and wings of coleopterous insects | 1 | 46½ |
| 7. At the top are found moss, sticks, and hazle nuts. Beneath are small stones of killas, growan, and other pebbles, which are known by the miners to have belonged to the neighbouring hills, so far distant as Hensbarrow | 3 | 49½ |
| 8. Rough tin ground, containing the lighter and poorer stones | 2 | 51½ |
| 9. Rough tin ground, containing rich tin stones, some of great size and weight. Mixed with these are rounded pebbles of quartz and other stones, and a yellow ferruginous clay | 3 | 54½ |
| 10. Solid killas rock, on which all the preceding alluvia were deposited | | |

At 4 feet from the bottom of stratum 3, and at 31 feet from the surface, have been found many bones of animals, viz. the horns of two deer, very large and of equal size; two human skulls, one belonging to a child, the grinders not having yet shot through the jaw; the shoulder and thigh bone of some large animal; and the vertebræ of some smaller animals. The trees in strata 2, 5, and 6 are so numerous that the miners collect from them great stacks of fire-wood. The level of the solid killas rock does not differ much from that of low water mark.
In addition to these observations I have not many remarks to offer. The lower work is much richer in metallic produce than the upper, owing probably to the valley being narrower at the former place, which confined the mineral matter within a smaller space, and prevented it from being dispersed in the plain. The stones at the upper work were much the largest, as might be expected from its greater proximity to the hills. Among the tin stones of both works are found such as agree with the ores of particular lodes, that traverse the several hills all the way up to Hensbarrow hill, and the old miners had themselves made these distinctions, and rendered them perfectly clear to me. Thus, I think, I may venture to say, that the tin stones have been washed down from the neighbouring hills into the Pentowan valley.
The chief difference to be observed between the strata of the two Pentowan Stream Works is the want of marine matter in those of the upper. In the lower Stream Work I have described the killas rock, upon which are deposited 5 feet of tin ground, 10½ feet of vegetable matter, 24 feet of sea mud, 12 feet of peat, and 3 feet of soil, on a level with which flows the river, 54½ feet above the solid rock.
────────
The following notice and sections have also been transmitted to the Society.
British antiquities (celts, spear-heads, &c.) have been discovered in the Stream Works at the depth of 20, 30, and 40 feet, from whence it appears probable that the greatest part of the accumulation of soil has taken place at a comparatively modern period.
An accurate representation and description of the Stream Work at Porth in the parish of St. Blazey, of this county, have been presented to the public by Philip Rashleigh, Esq. in the second part of his Description of British Minerals, published in the year 1802.
Section of the Pentowan Stream Work in 1807.

| Stratum | Feet | Inches |
|---|---|---|
| 1. Micaceous sandy clay, interspersed with stones and gravel | 9 | ─ |
| 2. Peat, intermixed with roots and leaves | 7 | ─ |
| 3. Sand, in which are found branches and trunks of trees | 8 | ─ |
| 4. Finer sand with shells, in which bones, horns, &c. are found | 12 | ─ |
| 5. Coarse gravel | 2 | ─ |
| 6. Closer sand mixed with clay, with decayed leaves, almost forming peat towards the bottom | 12 | ─ |
| 7. Loose stones and gravel | 1 | ─ |
| 8. Tin Ground | 1 | ─ |

N.B. The horns in stratum 4 are chiefly those of cattle and stags; a joint of the vertebra of a whale and a human skull were likewise found in this stratum, the former now in the possession of the Rev. John Rogers, of Mawnan, in this county.

────────

Carnon Stream Work, 1807.

| Stratum | Feet | Inches |
|---|---|---|
| 1. Mud and sand | 7 | ─ |
| 2. Granite gravel intermixed with small pieces resembling charcoal, and a few shells | 4 | ─ |
| 3. Fine gravel, mud and shells | 12 | ─ |
| 4. Closer mud intermixed with shells | 19 | ─ |
| 5. Tin Ground, varying in depth from 1 to 6 feet | | |

About the depth of stratum 3 are several irregular strata of oysters, about 4 or 5 feet in thickness, extending irregularly to within 4 or 5 feet of the tin ground. In stratum 4 have been found several branches and trunks of trees, some of which had evident marks of being cut with an axe or other sharp instrument; also horns and bones of stags, likewise human skulls.

────────

Tregoney Stream Work, 1807.

| Stratum | Feet | Inches |
|---|---|---|
| 1. Granite gravel with layers of mud | 11 | 6 |
| 2. Black mud with a few shells | 15 | ─ |
| 3. Tin Ground; average depth 2 feet | | |

N.B. In stratum 2 were found a cow's horn, 3¾ inches in diameter and 1 foot 1 inch in circumference, and several stags' horns.
# LL(1) Parsing Exercise
Compute the First and Follow sets and construct the parsing table for each of the following LL(1) grammars; solutions are on the next page. In the name LL(1), the first L stands for a left-to-right scan of the input, the second L for a leftmost derivation, and the 1 for the single token of lookahead; generally k = 1, so LL(k) may also be written as LL(1). To check whether a grammar is LL(1) you can draw its predictive parsing table: the grammar is LL(1) exactly when no table cell holds more than one production. (To construct the LALR(1) parsing table, by contrast, we use the canonical collection of LR(1) items; the acronym LALR means "LookAhead LR" parsing.) A comprehensive presentation of the classical results of LL(k) parsing can be found in Aho and Ullman (1972b, 1973a).

Exercises:
a) Provide a language that is recognised by an LL(1) grammar for which there exists no LL(0) grammar.
b) Examine the grammar and rewrite it so that an LL(1) predictive parser can be built for the corresponding language.
c) Modify the tokens_are_valid method to use LL(1) to verify whether the token stream is well formed.
The goal of LL(1) parsing theory is a formal, rigorous description of those grammars for which a top-down parse can be chosen by looking ahead just one token, plus the corresponding algorithms. Imagine a cursor, or current position, that moves through the string being parsed: when top-down parsing is used, a node of the parse tree is expanded into several other nodes, and as tokens and non-terminals are matched they are pushed onto a second stack, the semantic stack. An LL(1) parser generator computes three things: (1) which nonterminals and productions derive λ, (2) the FIRST sets, and (3) the FOLLOW sets of all non-terminal symbols; from these it builds the parse table. The speed of the resulting parser is comparable to that of a hand-coded recursive descent parser; Turbo Pascal, for instance, used a recursive descent parser. In most monographs the presentation is restricted to SLL(1) parsing. (Fall 2016-2017, Compiler Principles, Lecture 2: LL parsing, Roman Manevich, Ben-Gurion University of the Negev.)
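The λ-derivation and FIRST computations described above can both be done as a simple fixpoint iteration. As an illustrative sketch (the grammar dict below is a made-up toy example, not one of the exercise grammars), in Python:

```python
# Fixpoint computation of NULLABLE and FIRST for a context-free grammar.
# Nonterminals are the dict keys; anything else is treated as a terminal.
GRAMMAR = {
    "S": [["A", "b"]],            # S -> A b
    "A": [["a", "A"], []],        # A -> a A | λ
}

def nullable_and_first(grammar):
    nullable = {n: False for n in grammar}
    first = {n: set() for n in grammar}
    changed = True
    while changed:                # iterate until nothing changes
        changed = False
        for lhs, alternatives in grammar.items():
            for rhs in alternatives:
                all_nullable = True
                for sym in rhs:
                    syms = first[sym] if sym in grammar else {sym}
                    if not syms <= first[lhs]:
                        first[lhs] |= syms
                        changed = True
                    if sym in grammar and nullable[sym]:
                        continue  # nullable nonterminal: keep scanning
                    all_nullable = False
                    break         # FIRST contribution stops here
                if all_nullable and not nullable[lhs]:
                    nullable[lhs] = True
                    changed = True
    return nullable, first

nullable, first = nullable_and_first(GRAMMAR)
# For this toy grammar, A is nullable and FIRST(S) = {a, b}.
```

FOLLOW sets are computed by a similar fixpoint pass once NULLABLE and FIRST are known.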
In computer science, an LL parser is a top-down parser for a subset of the context-free grammars, and a language is said to be LL(1) if it can be generated by an LL(1) grammar. A given grammar G may fail to be LL(1) for several possible reasons: G is ambiguous, G is left recursive, or G is not left-factored. Most programming language grammars are not LL(1) as written; some can be made LL(1), but others cannot, and there are tools that build LL(1) tables automatically. The predictive parsing method can be realised either as a set of recursive functions (in our LL(1) parsing example we replaced non-terminal symbols with functions that did the expansions and the matching for us) or in its non-recursive, table-driven form. In LALR(1) parsing, by contrast, the LR(1) items which have the same productions but different lookaheads are combined to form a single set of items. For the exercises you may use Python, Java, C, C++, or Haskell; one-sentence answers are sufficient for the "essay" questions.
An LL parser parses the input from left to right, performing a leftmost derivation of the sentence, and LL(k) parsing predicts which production rule to use from k tokens of lookahead; recursive predictive parsing is a special form of recursive descent parsing without backtracking. Having tokens by themselves, however, offers little value: the parser must give them structure. The key condition is on the lookahead sets: if a nonterminal has two alternatives α and β with FIRST(α) ∩ FIRST(β) ≠ ∅, the grammar is not LL(1). A grammar requiring k > 1 can sometimes be reduced to k = 1 by back-substitution, but as the back-substitutions increase, the grammar can quickly become large, repetitive, and hard to understand. (LALR(1) grammars, for their part, are not easy to use to manually construct parsers, and some heuristics exist for constructing the synchronization sets used in error recovery.)

As a running example, consider the grammar below, where S and A are nonterminals and (, ), b, and $ are terminals:

1. S → Ab$
2. A → (bAb)
3. A → (Ab)
4. A → λ

(iii) LL(1) Parse Tables (10 points): using your results from part (ii), construct the LL(1) parse table for your updated grammar.
Topics: context-free grammars, top-down parsing, backtracking, LL(1), recursive descent parsing, predictive parsing. We have seen that a lexical analyzer can identify tokens with the help of regular expressions and pattern rules; parsing then assembles those tokens into structure. Parsing techniques are mostly indicated with abbreviations like LL, LL(1), LL(k), LR, LR(1), LR(k), LALR, LALR(1), and LALR(k). An LL(1) parser is a top-down, fast, predictive, non-recursive parser which uses a state-machine parsing table instead of the recursive calls to productions used in a recursive descent parser. In practice, LL(k) parsing techniques for values of k greater than one have generally not been used. It is nevertheless sometimes possible to parse a non-LL(1), or even ambiguous, language top-down by resolving the choices at parse time: LLLR(1) parsing, for example, starts with LL(1) parsing but, to avoid an LL(1) conflict on a lookahead symbol for a nonterminal A, starts an embedded left LR(k) parser for A. SLR and LR(1) parsing are covered in CS143 Handout 11 (Summer 2012), written by Maggie Johnson and revised by Julie Zelenski. Exercise: write a C program to implement operator precedence parsing.
Given a context-free grammar, it is not possible to deduce that it is LL(1) simply by examining its structure; successful construction of an LL(1) parse table is the test that a grammar needs to pass to qualify as LL(1), and the process of showing that a grammar is not LL(1) will expose exactly where it fails. For every parse tree there is a unique leftmost and a unique rightmost derivation; an LL parser produces the leftmost one. A table-driven predictive parser consists of the following components: an input buffer, a stack, a parsing table, and an output stream. Note that the discussion here is informal and does not cover all of the cases found in left factorization and LL(1) parsing grammars; for the full story see Aho, Sethi, and Ullman, Compilers: Principles, Techniques and Tools, or Aho and Ullman, The Theory of Parsing, Translation, and Compiling, vol. II. (There is also an article that aims to be the simplest introduction to constructing an LL(1) parser in Go, in that case for parsing SQL queries.)

Problem 1 [10 points], Predictive Top-Down Parsing (CSCI 565, Compiler Design, Spring 2010, Homework 2): explain why a left-recursive grammar cannot be parsed using the predictive top-down parsing algorithms.
Constructing LL(1) parsing tables is relatively easy (compared to constructing the tables for lexical analysis), and a grammar failing the LL(1) test does not condemn its language: there may be a different grammar for the same language which is LL(1). There are also readable recursive descent parsers that are not based on LL(1) grammars, and a conflicted parser itself gives a clue about how to reduce a grammar to LL(1): you left-factor and resolve the conflict in the semantic action. LR(1) parsing, by contrast, uses a shift/reduce algorithm, and whenever a yacc parser reduces a production it can execute an action rule associated with it; LL(*), as used by ANTLR, is not bound to some fixed number of lookahead tokens to figure out which grammar rule to use. (An LL(1) parser is unlikely to cope with any realistic natural language processing project, however.) You can read all about these techniques in Parsing Techniques by Dick Grune and Ceriel J. H. Jacobs, which is a definitive resource on parsing; for a programmer-friendly LL(1) parser generator in Python, see "Yet Another Python Parser System" (YAPPS) by Amit Patel, found by following the links on the String SIG page [Kuc00].

Exercise: show the analysis table (stack, input, and actions) for the parsing process of the zvxy input sequence.
The LL(1) property is the precondition for recursive descent parsing: the input can be analyzed from left to right with left-canonical derivations (the leftmost nonterminal is derived first) and 1 lookahead symbol, so with the restrictions of an LL(1) grammar the parser only ever needs to see the next symbol. Notation: T = set of terminals (tokens), N = set of nonterminals, $ = end-of-file character (terminal-like, but in neither N nor T). LL(k) grammars have the very nice property of being context-free, which is where a lot of the simplicity comes from, because the parser does not need to keep state; LL(k) parsers, both generated and hand-written, are very popular, and LL(1) conflicts can be resolved by a multi-symbol lookahead or by semantic checks. To compute FOLLOW(A) for a grammar symbol A, we must first compute FIRST for some grammar symbols. (The LL parsing provided in JFLAP is what is formally referred to as LL(1) parsing, and SimpleParse is a BSD-licensed Python package providing a simple and fast parser generator using a modified version of the mxTextTools text-tagging engine. In the grammatical sense, parsing is the exercise of breaking a text down into its component parts of speech, with an explanation of the form, function, and syntactic relationship of each part, so that the text can be understood.)

Continuing the exercise on the grammar with numbered productions 1. S → Ab$, 2. A → (bAb), 3. A → (Ab), 4. A → λ:

3) Fill in the LL(1) parse table based on your predict sets (4 points):

|   | ( | ) | b | $ |
|---|---|---|---|---|
| S | 1 |   | 1 |   |
| A | 2, 3 |   | 4 |   |

4) Is this an LL(1) grammar? Why or why not? (2 points) This is not an LL(1) grammar: if we are processing A and the lookahead is (, we will not know whether to predict production 2 or production 3.

On the LR side: LALR(1) is a version of LR(1) that uses less main memory, which was useful in the 1970s, while the CLR(1) construction produces more states than SLR(1); as an exercise, compute the GOTO function for the sets of items. (CMSC 430, Spring '09, Midterm 1: you have until 4:45pm to complete the midterm.) In yacc, note that the last action code specified is redundant, since $$ = $1 is the default action if none is specified.
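To make the table-driven mechanics concrete, here is a minimal predictive parse loop in Python. It deliberately uses a small conflict-free toy grammar, S → aSb | λ (recognizing aⁿbⁿ), rather than the exercise grammar above, whose table has a conflict:

```python
# Table-driven LL(1) predictive parser (sketch) for S -> a S b | λ.
TABLE = {
    ("S", "a"): ["a", "S", "b"],  # predict S -> a S b on lookahead a
    ("S", "b"): [],               # predict S -> λ  (b is in FOLLOW(S))
    ("S", "$"): [],               # predict S -> λ  ($ is in FOLLOW(S))
}
NONTERMINALS = {"S"}

def parse(text):
    stack = ["$", "S"]            # end marker below the start symbol
    tokens = list(text) + ["$"]
    pos = 0
    while stack:
        top = stack.pop()
        look = tokens[pos]
        if top in NONTERMINALS:
            rhs = TABLE.get((top, look))
            if rhs is None:
                return False      # empty table cell: syntax error
            stack.extend(reversed(rhs))  # leftmost RHS symbol ends on top
        elif top == look:
            pos += 1              # terminal matched, consume input
        else:
            return False          # terminal mismatch
    return pos == len(tokens)     # accept iff all input was consumed

# parse("aabb") accepts; parse("aab") rejects.
```

Note how the stack replaces the call stack of a recursive descent parser: expanding a nonterminal pushes its right-hand side, and matching a terminal consumes input.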
As it turns out, in spite of the promising start to parsing on the previous slides, the grammar for our little programming language is not LL(1). A typical symptom appears in a rule whose first alternative begins with the nonterminal being defined, as when the first line of a grammar effectively assigns expr to itself: this direct left recursion causes a conflict, because a predictive parser would loop forever expanding expr. After removing this direct left recursion one ends up with an equivalent grammar that a predictive parser can handle; any compiler text should provide more details. Left recursion can also be indirect. Consider the indexed grammar S₃ → A₂B₁, A₂ → S₃ | a, B₁ → b: obviously this grammar is left-recursive, since S₃ ⇒ A₂B₁ ⇒ S₃B₁, yet for the indices 1 and 2 the condition that every right-hand side starts either with a terminal or with a nonterminal of higher index is satisfied. This LL(1) restriction is what enables efficient parsing, compared to the algorithms that must be used for LL(k) languages with k > 1 (showing that LL(k+1) properly contains LL(k) is a rather trivial exercise), and a special attribute of some LR parser constructions is that any LR(k) grammar with k > 1 can be transformed into an LR(1) grammar. LL(k) parser generators are also straightforward to learn: just look at the generated code and you can see directly how it works.

Exercise: consider the PDA with states q₀, q₁, q₂, q₃ and transitions a, ε → $; b, ε → b; c, ε → ε; d, b → ε; ε, $ → ε. Give the formal definition of the PDA (i.e. the 6-tuple which defines it). a) Parse the …
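The standard rewrite for direct left recursion replaces A → Aα | β with A → βA′ and A′ → αA′ | λ. A mechanical sketch in Python (the function name and the list-of-lists grammar representation are illustrative, not from any particular tool):

```python
# Remove direct left recursion from the productions of one nonterminal:
#   A -> A α | β   becomes   A -> β A'   and   A' -> α A' | λ
def remove_direct_left_recursion(nt, productions):
    recursive = [rhs[1:] for rhs in productions if rhs[:1] == [nt]]
    others = [rhs for rhs in productions if rhs[:1] != [nt]]
    if not recursive:
        return {nt: productions}  # nothing to do
    prime = nt + "'"
    return {
        nt: [beta + [prime] for beta in others],
        prime: [alpha + [prime] for alpha in recursive] + [[]],  # [] is λ
    }

# The classic case: expr -> expr + term | term
g = remove_direct_left_recursion("expr", [["expr", "+", "term"], ["term"]])
# yields expr -> term expr'  and  expr' -> + term expr' | λ
```

The transformed grammar generates the same language but is no longer left-recursive, so a predictive parser can handle it.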
We also saw how grammars can be used to generate strings of the corresponding language; grammars can also perform the reverse role, recognizing strings. Predictive parsing does so with no backtracking and is efficient, but it needs a special form of grammar: the LL(1) grammars. To create an LL(1) parser, we need to know the predict set for each production in the grammar, and recursive descent parsing has the same restrictions as the general LL(1) parsing method described earlier. Grammars that can be parsed using this algorithm are called LL grammars, and they form a subset of the grammars that can be represented using deterministic pushdown automata. Although LL(1) grammars are easier for people to work with, LR(1) grammars turn out to be very suitable for machine processing, and they are used as the basis for the parsing process in many compilers. (One freely available tool generates the SLR(1) parse table in two formats, a table for people and a JSON matrix for accommodating our robot overlords; there was a companion LL(1) parser generator tool named "Lime", but the source code for Lime has been lost.) As an aside, given a parse tree for a sentence xzy and a string z̄, an incremental parser builds the parse tree for the sentence xz̄y by reusing as much of the parse tree for xzy as possible.

Exercise: suppose we are given a grammar and asked to find its type; what is the algorithm to be followed for deciding whether it is LL(1), LR(0), CLR(1), or LALR(1)?
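Once the predict sets are known, filling the table and detecting LL(1) failures is mechanical. A sketch (the predict sets below are written out by hand for the toy grammar S → aSb | λ, where PREDICT(S → aSb) = {a} and PREDICT(S → λ) = FOLLOW(S) = {b, $}):

```python
# Build an LL(1) parse table from (lhs, rhs, predict-set) triples; any
# cell that would receive two productions is an LL(1) conflict.
PREDICT = [
    ("S", ["a", "S", "b"], {"a"}),
    ("S", [],              {"b", "$"}),
]

def build_table(predict_sets):
    table, conflicts = {}, []
    for lhs, rhs, lookaheads in predict_sets:
        for tok in lookaheads:
            if (lhs, tok) in table:
                conflicts.append((lhs, tok))  # grammar is not LL(1)
            else:
                table[(lhs, tok)] = rhs
    return table, conflicts

table, conflicts = build_table(PREDICT)
# No conflicts here, so this grammar passes the LL(1) test; feeding in
# the two A -> ( ... productions of the exercise grammar instead would
# report a conflict in the cell (A, "(").
```

This is exactly the test described earlier: the grammar qualifies as LL(1) if and only if the table is built without conflicts.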
Each parsing technique is best suited for recognizing a specific type of grammar, and for hand-written parsers LL wins, hands down. Two terminology notes: we usually omit the "(1)" after "SLR", since we shall not deal here with parsers having more than one symbol of lookahead; and in the CLR(1) construction, the reduce entries are placed only under the lookahead symbols. One necessary condition throughout is that the grammar is not ambiguous.

Problem 6 (University of Southern California, Computer Science Department, Syntactic Analysis sample exercises, Spring 2014): construct a table-based LL(1) predictive parser for the grammar G with start symbol bexpr, nonterminals {bexpr, bterm, bfactor}, and terminals {not, or, and, …}.
Problem 1: Predictive Top-Down Parsing. Explain why a left-recursive grammar cannot be parsed using the predictive top-down parsing algorithms. We usually omit the "(1)" after "SLR", since we shall not deal here with parsers having more than one symbol of lookahead. LL(1) conflicts can be resolved by a multi-symbol lookahead or by semantic checks. Example grammar: S → Ab$, A → (bAb), A → (Ab), A → λ. Designing your own programming language is no small feat. Each technique is best suited for recognizing a specific type of grammar. How do we build a semantic parser in a new domain starting with zero training examples? We introduce a new methodology. LL wins here, hands down. In CLR(1), we place the reduce entries only under the lookahead symbols. The parser then makes its prediction based on this deferred input. Please explain how this grammar is LL(1), and how to prove it. Example of LL(1) parse table construction, Question 1: construct the LL(1) table for a grammar starting E → TX.
Parse Trees, Left- and Rightmost Derivations: for every parse tree, there is a unique leftmost and a unique rightmost derivation. An LL(1) grammar is a context-free unambiguous grammar which can be parsed by LL(1) parsers. The type of LL parsing in JFLAP is LL(1) parsing. Compute the First and Follow sets as well as construct the parsing table for the following LL(1) grammars. The only issue with LL, including LL(1), is that a rule cannot be left-recursive or it will create an infinite loop, such that the following is not allowed: A → A c. I can see that in the first line the assignment operator basically assigns expr to itself, which causes a conflict. Thinking in terms of objects, an object in the tree constructs other objects. Consider the rules S3 → A2 B1, A2 → S3, A2 → a, B1 → b. Obviously, this grammar is left-recursive: S3 ⇒ A2 B1 ⇒ S3 B1. For the indices 1 and 2, the condition that every right-hand side starts either with a terminal or with a non-terminal of higher index is satisfied. 1 means that one symbol in the input string is used to help guide the parse. After the 2nd b is shifted and the 3rd b appears in the lookahead buffer, the embedded left LR(k) parser.
Building a Semantic Parser Overnight. Yushi Wang and Jonathan Berant, Stanford University. Parsing non-LL(1) languages top-down anyway: it is sometimes possible to parse a non-LL(1) or even ambiguous language top-down by resolving the choices at parse time. CS416 Compiler Design: Recursive-Descent Parsing (uses backtracking). Notice that top-down or predictive parsing techniques (such as LL(1) parsers) will produce a leftmost derivation. How do LL(1) parsers build syntax trees? So far our LL(1) parser has acted like a recognizer. All we need to do is drive a tokenizer and a parse table in conjunction with a stack, and we'll have what's known as a pushdown automaton (PDA), which we can use to parse. Coco/R is a compiler generator, which takes an attributed grammar of a source language and generates a scanner and a parser for this language. Up to the point that many computer languages were purposefully designed to be described by an LL(1) grammar. Recall that for LL(1) parsing, we had a predictor table; for LR(1) parsing, we have an oracle, in the form of a DFA. The answer is given as option (A), but if we take it, that will definitely be in FOLLOW(S), and we didn't compute the FIRST of any symbol. LL(1) Parsing.
LL(1) Grammar - good for building recursive descent parsers. A grammar is LL(1) if, for each nonterminal X, the first sets of different alternatives of X are disjoint, and, if X is nullable, first(X) is disjoint from follow(X) and only one alternative of X may be nullable. For each LL(1) grammar we can build a recursive-descent parser. It is suitable for writing both lexers (also known as tokenizers or scanners) and parsers, but not for writing one-stage parsers that combine lexing and parsing into one step. In this case, we'll assume that any existing suffix appears after a comma, as in Charles Edward Ross, Jr. The second L stands for leftmost derivation. However, there may be portions of a grammar that are not LL(1). This question seems to be focused on LL(0) parsers, so let's define them. Parsing: use the table to build a semantic value for a token. Language-theoretic comparison of LL and LR grammars. On the other hand, since LL parsers commit to what rule they are parsing before they parse that rule's tokens, an LL parser knows the… TDOP is another alternative. This restriction enables the efficient parsing of an LL(1) language, compared to the algorithms that can be used to parse LL(k) languages for values of k greater than 1. The parser uses recursive descent. The scanner works as a deterministic finite automaton. No backup is ever needed. Trust me, I was stubborn about it and I regretted it.
The LALR(1) parser is less powerful than the LR(1) parser, and more powerful than the SLR(1) parser, though they all use the same production rules. The problem is I still have left recursion for A' and B'. How can I solve this? Is the reconstructed grammar wrong, or is the whole grammar not suitable for LL(1) parsing? It assumes minimal programming competence (functions, structs, ifs and for-loops). If you identify any LL(1) conflicts, don't worry; just put all applicable entries in the table. Rules for FOLLOW, illustrated with the grammar: stmt → if-stmt | while-stmt | begin-stmt | assg-stmt; if-stmt → if bool-expr then stmt else stmt; while-stmt → while bool-expr… It would be better if we always knew the correct action to take. Notes on LL(1) parsing tables: if any entry is multiply defined then G is not LL(1) (for instance, if G is ambiguous, left-recursive, or not left-factored); most programming language grammars are not LL(1); there are tools that build LL(1) tables. In LALR(1) parsing, the LR(1) items which have the same productions but different lookaheads are combined to form a single set of items. The 1 stands for using one input symbol at each step. The problem with current parsing technology is that it is not compositional: most parser generators do not parse arbitrary context-free grammars but certain subclasses that are not closed under composition. Modify the tokens_are_valid method to use LL(1) to verify whether the token stream is well formed.
Actually, building a recursive descent/LL(1) parser by hand might not be a bad idea; it will help you understand the output of ANTLR. LL conflict resolution using the embedded left LR parser; relationship between parser types. Programs: parse a string using the First and Follow algorithm and an LL(1) parser; implement a recursive descent parser; program of an LL parser. Exercise 6 (hand in before the exercise class on 06. Each technique is best suited for recognizing a specific type of grammar. 1) Place $ in FOLLOW(S), where S is the start symbol. CLR refers to canonical LR with lookahead. For example, the parser is LL(k) only at such points, but remains LL(1) everywhere else for better performance. 3) Fill in the LL(1) parse table based on your predict sets (4 points): with lookaheads (, ), b, and $, row S predicts production 1 under ( and under b, and row A predicts productions 2 and 3 under ( and production 4 under b. 4) Is this an LL(1) grammar? Why or why not? (2 points) This is not an LL(1) grammar: if we are processing A and the lookahead is (, we will not know whether to predict production 2 or production 3. In the interim, you can read about how to determine First and Follow sets (PDF from the Programming Languages course at the University of Alaska Fairbanks) and the significance of First and Follow sets in top-down (LL(1)) parsing. The embedded LR(k) parser must sometimes parse substrings derived from a sentential form starting with the LL(k)-conflicting nonterminal instead of from that nonterminal only.
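Much of the above concerns table-driven LL(1) parsing: a stack, a token stream with one symbol of lookahead, and a parse table indexed by (nonterminal, lookahead). A minimal sketch is below; the tiny grammar (S → ( S ) | a) and the table layout are invented for illustration and are not taken from any of the exercises quoted above.

```python
# Table-driven LL(1) parser sketch for the hypothetical grammar:
#   S -> ( S )   |   a
# TABLE maps (nonterminal, lookahead) -> right-hand side to push.
TABLE = {
    ("S", "("): ["(", "S", ")"],
    ("S", "a"): ["a"],
}

def ll1_parse(tokens):
    """Return True if the token list derives from S (end marker '$' added)."""
    stack = ["$", "S"]              # parse stack, start symbol on top
    tokens = list(tokens) + ["$"]
    pos = 0
    while stack:
        top = stack.pop()
        look = tokens[pos]
        if top == look:             # terminal (or $): match and advance
            pos += 1
        elif (top, look) in TABLE:  # nonterminal: predict a production
            stack.extend(reversed(TABLE[(top, look)]))
        else:
            return False            # empty table cell: syntax error
    return pos == len(tokens)
```

Because every (nonterminal, lookahead) cell holds at most one production, the parser never backtracks, which is exactly the LL(1) property discussed above.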
|
|
# Tag Info
19
This is "left involution". ("left" because it doesn't work when you try it on the right.) \begin{align*} x \circ y &= z & \\ x \circ (x\circ y) &= x \circ z & [\text{apply $x \circ -$}] \\ y &= x \circ z & [\text{simplify the involution}] \text{.} \end{align*} I would be shocked to see anyone use that term ...
5
A helpful way to rewrite that statement would be (assuming subtraction for simplicity): $x - y - z = x - z - y$. We are observing how swapping y and z does not change the value of the expression. While it may initially look like there is a useful property behind this, the example is showing an easy case of what you are allowed to swap. Here is a visual ...
1
I don't know if this word is used specifically to describe this phenomenon, but the term "complement" is used in general to refer to two things that combine to make some third thing, so this applies here. When we subtract $b$ from $a$, we're basically asking what $b$'s (additive) complement with respect to $a$ is. Another term that could be ...
11
I have never seen a name for this property specifically. When I was in grade school, I recall learning about Fact Families, which are generated by this property. The idea is that a fact family is all of the arithmetic equations generated by the same numbers. This property in particular is really just a consequence that subtraction is the inverse of addition ...
|
|
# Journal of Operator Theory
Volume 47, Issue 1, Winter 2002 pp. 3-35.
Contractive extension problems for matrix valued almost periodic functions of several variables
Authors Leiba Rodman (1), Ilya M. Spitkovsky (2), and Hugo J. Woerdeman (3)
Author institution: (1) Department of Mathematics, P.O. Box 8795, The College of William and Mary, Williamsburg, VA 23187--8795, USA
(2) Department of Mathematics, P.O. Box 8795, The College of William and Mary, Williamsburg, VA 23187--8795, USA
(3) Department of Mathematics, P.O. Box 8795, The College of William and Mary, Williamsburg, VA 23187--8795, USA
Summary: Problems of Nehari type are studied for matrix valued $k$-variable almost periodic Wiener functions: Find contractive $k$-variable almost periodic Wiener functions having prespecified Fourier coefficients with indices in a given halfspace of $\mathbb{R}^k$. We characterize the existence of a solution, give a construction of the solution set, and exhibit a particular solution that has a certain maximizing property. These results are used to obtain various distance formulas and multivariable almost periodic extensions of Sarason's theorem. In the periodic case, a generalization of Sarason's theorem is proved using a variation of the commutant lifting theorem. The main results are further applied to a model-matching problem for multivariable linear filters.
Contents Full-Text PDF
|
|
CrediblyCurious
Nick Tierney's (mostly) rstats blog
some CRAN Gotchas
2017/08/09
Categories: rstats Blag
Recently I submitted visdat to CRAN, and have just submitted naniar to CRAN as well (fingers crossed!).
There were a couple of small things that I changed in order to get everything on CRAN OK, and I thought it might be helpful to list them here, both for me and for others, and also for others to comment on.
Package Version too large.
I use pkgdown to make my website docs for visdat and naniar. It’s really nifty, and looks super pro with minimal work from me. The files are a bit weird, so I thought that this error was to do with that.
Nope, this is literally to do with the package version - once I changed it to 0.1.0 everything worked fine.
Figures in example code.
I was getting a strange error about things in the doc folder being too large, I can’t find it now (re-writing this blog post in 2019!), but it was there.
This made me go through and compress all my PNG images in the pkgdown docs folder. I worked really hard on trying to fix this. Was this the solution to the problem?
No.
The answer? Don’t have too many figures in your examples. Use \dontrun{} like so:
#' @examples
#' \dontrun{
#' library(ggplot2)
#' # example plotting code here
#' }
I decided to move the rmd figures inside the man/ folder by setting the fig.path option in a code chunk of the README.Rmd:
---
output: github_document
---
{r, echo = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-"
)
Which I copied off of what I had seen in the ggplot2 README. That’s a nice thing about github - want to know some kind of best practice? Check out what the rstudio tidyverse team does.
(Edit: this option is now part of usethis::use_readme_rmd())
It’s on CRAN - what next?
• Add a cran status badge with usethis::use_cran_badge(), and copy and paste the text provided.
Some reflections
In general, I have held off from submitting to CRAN because I was scared of getting something wrong. I wrote my first R package in 2013, thanks largely to Hilary Parker’s amazing blog post and then following through with Hadley’s R Packages book. But I have held off on putting things on CRAN for a long time.
In some ways, this has made things good, for example, visdat changed name 3 times (footprintr -> vizdat -> visdat), and naniar actually changed name 4 times (ggmissing -> naniar -> narnia -> naniar). Naming things is hard.
I’ve also spent a large amount of time making sure that the naming of my functions has been good, and made sense. This meant changing names from
summary_missing_variables to miss_var_summary, and then establishing a naming scheme where the missing summaries and friends start with miss_ - this makes it easier for them to be tabbed through. Similarly, there is gg_miss_var, which was initially gg_missing_variables - which I decided was too long. Iteration here took time.
But, I think that I’ll back myself in the future more now. Having github there as a way to test out ideas, and rapidly change things has been really nice, and in some ways has saved me time.
But it leaves me wondering - maybe had I put these packages out onto CRAN sooner then maybe I would have gotten more feedback from a wider audience?
|
|
Easy
# Specific Heat Capacity of Aluminum
APCHEM-3K4NQ6
Aluminum and water samples of equal mass are left out in the hot sun.
Which of the following is the expected change in temperature of the two samples after five hours?
Note that:
• $C_{Al}$ = 897 J/kg K
• $C_{water}$ = 4181 J/kg K
A
Both samples will increase in temperature until they reach the same temperature.
B
The aluminum sample will reach a higher temperature.
C
The water sample will reach a higher temperature.
D
There is not enough information to determine how their temperatures will change.
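Since $Q = mC\Delta T$, equal masses absorbing roughly equal amounts of energy warm up in inverse proportion to their specific heat capacities. A quick sanity check (the absorbed energy value here is made up purely for illustration):

```python
C_al, C_water = 897.0, 4181.0  # specific heats, J/(kg K)
m = 1.0                        # kg, equal masses
Q = 10_000.0                   # J absorbed; illustrative value only

dT_al = Q / (m * C_al)         # temperature rise of the aluminum
dT_water = Q / (m * C_water)   # temperature rise of the water

# Aluminum warms ~4.7x more than water for the same energy input.
ratio = dT_al / dT_water
```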
|
|
# Introduction to Signal Levels
Sound travels as a wave. The amplitude of the wave is related to the amount of acoustic energy it carries, or how loud the sound will appear to be. As the amplitude of the sound wave increases, the sound is perceived to be louder. There are several different ways to describe the amplitude of a sound wave. It is important to be aware of the different methods for characterizing the loudness of a sound signal and how they differ from each other. Not all sounds are the same type of wave, and some ways of characterizing a sound wave are more appropriate than others depending on the type of wave. This section will describe the different ways in which a sound wave can be characterized.
The following diagram shows the three most common ways of characterizing the loudness of a sound signal. The two simplest ways of characterizing a sound wave are by its peak pressure and by its peak-to-peak pressure. The peak pressure, also called the 0-to-peak pressure, is the range in pressure between zero and the greatest pressure of the signal. The peak-to-peak pressure is the range in pressure between the most negative pressure and the most positive pressure of the signal. A more complex way of characterizing a sound wave is the root-mean-square pressure. The root-mean-square pressure (abbreviated as rms pressure) is the square root of the average of the square of the pressure of the sound signal over a given duration.
Figure 1: A simple sound wave and three common methods used to characterize the loudness of a sound signal. Image credit: DOSITS.
The root-mean-square pressure is most often used to characterize a sound wave because it is directly related to the energy carried by the sound wave, which is called the intensity. The intensity of a sound wave is the average amount of energy transmitted per unit time through a unit area in a specified direction. The intensity I of a sound wave is proportional to the average over time of the square of its pressure p (See Introduction to Decibels):
$I= \left ( \frac{p^2}{\rho c} \right ) _{average}$
(“ρ” is the density of medium carrying the sound and c is the speed of sound). The density and sound speed are relatively constant, and so the intensity is directly related to the mean square pressure:
$mean \; square \; pressure = (p^2) _{average}$
The root-mean-square (rms) pressure is then just the square root of this:
$rms \; pressure = \sqrt{(p^2) _{average}}$
To calculate the rms pressure, there are four steps. First, the pressure of the sound is measured at points along the sound signal. The pressure is then squared each time it is measured. All of these measurements are averaged (added together and divided by the number of measurements). Finally, the square root of the average of the squared pressures is taken to give the rms pressure of the sound signal. The steps for calculating the rms pressure of the signal shown in the following diagram are given in the table below.
Figure 2: Simple Sound Wave. Image credit: DOSITS.
Steps for calculating rms pressure, with a sample calculation:
1. Measure the pressure at points along the sound signal: 0, 2, 0, -2, 0, 2, 0, -2
2. Square the measured pressures: 0, 4, 0, 4, 0, 4, 0, 4
3. Average the squared pressures: (0+4+0+4+0+4+0+4)/8 = 2
4. Take the square root of the average of the squared pressures: √2 ≈ 1.4
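The four steps above can be checked directly in a few lines (a sketch in Python, using the sample pressures from the table):

```python
import math

pressures = [0, 2, 0, -2, 0, 2, 0, -2]    # step 1: sampled pressures

squared = [p ** 2 for p in pressures]      # step 2: square each sample
mean_square = sum(squared) / len(squared)  # step 3: average the squares
rms = math.sqrt(mean_square)               # step 4: square root -> ~1.4
```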
There can be significant differences between the three ways of characterizing pressure, peak, peak-to-peak, and rms. This can result in a serious misunderstanding of the magnitude of a signal. The following figure illustrates that each of the methods for describing the pressure of the signal gives a different value. The peak pressure value is 2 μPa, the peak-to-peak pressure value is 4 μPa, and the rms pressure of the signal is 1.4 μPa.
Figure 3: Simple Sound Wave. Image credit: DOSITS.
The rms pressure is most often used to characterize sound waves that have the simple shape shown in the above figures. Not all sound waves have such simple shapes, which contain only one frequency (See How are sound viewed and analyzed?). The situation becomes more complicated when the sound signal is a short impulsive signal, such as sound waves generated by a lightning strike or an airgun. It is easy to estimate the peak-to-peak and peak pressures in these cases, but it becomes more difficult to calculate the rms pressure.
Figure 4: Impulsive Sound Signal. Image credit: DOSITS.
The rms method requires the scientist to select a duration over which to average the pressure of the signal. This method is appropriate for tones in which the average pressure can be directly related to the intensity of the sound signal. It is not appropriate to use rms pressure to measure impulsive sounds, since the rms pressure will vary dramatically depending on the duration over which the signal is averaged. In fact, the longer the time duration over which the signal is averaged, the lower the rms value will be. This is illustrated in the following figure, where the rms pressure is calculated using three different durations for an impulsive signal.
Duration over which signal is averaged, and the resulting rms pressure:
• 0.5 sec: 1.4 µPa
• 1 sec: 1.0 µPa
• 2 sec: 0.8 µPa
Figure 5: Impulsive Sound Signal. Image credit: DOSITS.
This can be compared to the peak-to-peak pressure of 3.75 μPa and the 0-to-peak pressure of 2 μPa for the same signal. These methods for characterizing a signal do not depend on the duration of the signal.
The appropriate way in which to characterize a sound wave depends on the question being asked. For example, in order to evaluate the effect of a sound on marine mammals, it is important to describe the sound correctly. Because marine (and other) mammal ears are sensitive to the intensity of a sound wave, the rms pressure is therefore appropriate to use when discussing hearing. The risk of impulsive sound causing physical damage to the ears of a marine mammal depends in part on the peak pressure, which is therefore one of the ways in which impulsive sounds should be characterized.
Sound pressures are often given as relative pressures in units called decibels, The sound pressure is compared to a reference pressure, using the following equation:
$20 \: log \left ( \frac{p_{sound}}{p_{reference}} \right )$
The relative pressure in decibels for a sound signal can take on different values depending on the method used to characterize the pressure of the signal. For underwater sounds the reference pressure $p_{reference}$ is an rms pressure of 1 μPascal. This is why the units for decibels are given as “dB re 1 μPa,” indicating that the reference pressure is 1 μPa rms. If the sound pressure $p_{sound}$ used is the rms pressure of a sound wave containing a single frequency, then this equation gives the relative intensity, I, in decibels of the sound wave (See Introduction to Decibels). When other methods for characterizing the pressure of the sound wave, such as peak pressure or peak-to-peak pressure, are used, the result is not the relative intensity of the signal, but the relative pressure. The following table lists decibel values that have all been computed for the same sound signal, which is given in the previous figure.
Pressure and the corresponding decibel level:
• rms (0.5 sec): 1.4 µPa → 2.9 dB re 1 µPa
• rms (1 sec): 1.0 µPa → 0 dB re 1 µPa
• rms (2 sec): 0.8 µPa → -1.9 dB re 1 µPa
• 0-to-peak: 2 µPa → 6 dB re 1 µPa
• peak-to-peak: 3.75 µPa → 11.4 dB re 1 µPa
The decibel levels in this example range from -1.9 dB re 1 μPa to 11.4 dB re 1 μPa, even though they are all describing the same signal. If the threshold of hearing were 5 dB re 1 μPa, one could conclude either that the sound would be inaudible or would be heard, depending on which value is used.
In addition to stating the reference pressure, it is just as important to tell whether the pressure being used to compute decibels is root-mean-square (rms), peak, or peak-to-peak pressure. There can be significant differences between these measures of signal level, resulting in a serious misunderstanding of the magnitude of a signal if the pressure being used is not specified. On this website, we use underwater dB to indicate decibels computed using root-mean-square pressure, unless otherwise indicated. For situations in which the peak pressure is used, the unit of dB peak will be indicated. If the peak-to-peak pressure is used, the unit of dB peak-peak will be indicated.
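The decibel values discussed above follow directly from $20 \log_{10}(p_{sound}/p_{reference})$. A small sketch (pressures in µPa, reference 1 µPa rms):

```python
import math

def to_db(p_sound, p_ref=1.0):
    """Relative pressure level in dB re p_ref."""
    return 20 * math.log10(p_sound / p_ref)

# The five characterizations of the same signal, as above:
levels = {
    "rms (0.5 sec)": to_db(1.4),   # ~2.9 dB re 1 µPa
    "rms (1 sec)": to_db(1.0),     # 0 dB re 1 µPa
    "rms (2 sec)": to_db(0.8),     # ~-1.9 dB re 1 µPa
    "0-to-peak": to_db(2.0),       # ~6 dB re 1 µPa
    "peak-to-peak": to_db(3.75),   # ~11.4 dB re 1 µPa
}
```

The spread of values for one and the same signal is exactly why the pressure convention (rms, peak, or peak-to-peak) must always be stated alongside the decibel figure.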
|
|
# Lesson 14
Solving Systems by Elimination (Part 1)
## 14.1: Notice and Wonder: Hanger Diagrams (5 minutes)
### Warm-up
The purpose of this warm-up is to give students an intuitive and concrete way to think about combining two equations that are each true.
Students are presented with diagrams of three balanced hangers, which suggest that the weights on the two sides of each hanger are equal. Each side of the last hanger shows the combined objects from the corresponding side of the first two hangers. Students can reason that if 2 circles weigh the same as 1 square, and 1 circle and 1 triangle weigh the same as 1 pentagon, then the combined weight of 3 circles and 1 triangle should also be equal to the combined weight of 1 square and 1 pentagon.
### Launch
Display the hanger diagrams for all to see. Ask students to think of at least one thing they notice and at least one thing they wonder. Give students 1 minute of quiet think time, and then 1 minute to discuss the things they notice and wonder with their partner, followed by a whole-class discussion.
### Student Facing
What do you notice? What do you wonder?
### Student Response
For access, consult one of our IM Certified Partners.
### Activity Synthesis
Ask students to share the things they noticed and wondered. Record and display their responses for all to see. If possible, record the relevant reasoning on or near the image. After all responses have been recorded without commentary or editing, ask students, “Is there anything on this list that you are wondering about now?” Encourage students to respectfully disagree, ask for clarification, or point out contradicting information.
The idea to emphasize is that the weights on each side of the third hanger come from combining the weights on the corresponding sides of the first two hangers. If no one points this out, raise it as a point for discussion. Ask students:
• “What do you notice about the left side of the last hanger? What about the right side?"
• “If we only saw the first two hangers but knew that the third hanger has the combined weights from the corresponding sides of the first two hangers, could we predict whether the weights on the third hanger would balance? Why or why not?” (Yes. We can think of it as adding the weights from the second hanger to the first one, or vice versa. If the same weight is added to each side of a balanced hanger, the hanger would still be balanced.)
## 14.2: Adding Equations (15 minutes)
### Activity
In the warm-up of this lesson, students saw a visual representation of two equations being added to form a third equation. Because the first two equations are balanced, the third is also balanced. In this activity, students continue to develop the idea of adding two equations to form a third equation and use it to help them solve systems of linear equations.
Along the way, students examine the work of others and practice explaining their reasoning and critiquing that of others (MP3). They also see that sometimes adding equations is a productive way to solve systems, but other times it isn't.
### Launch
Arrange students in groups of 2. Give students 2 minutes of quiet time to think about the first set of questions and then time to share their thinking with their partner. Follow with a whole-class discussion before students proceed to the second set of questions.
Invite students to share their analysis of Diego's work—what Diego has done to solve the system and why he might have done it that way. Discuss questions such as:
• "In this case, what happens when the equations are added? Why might it be helpful to do so?" (The expressions with $$x$$ add up to 0, so it's removed from the equation, making it possible to solve for $$y$$.)
• "How does finding the value of $$y$$ help with solving the system?" (Once we know the value of one variable, we can use it to find the value of the other, by substituting it back into one of the equations and solving that equation.)
• "How can we be sure that $$x=1$$ and $$y=2$$ simultaneously make both equations true and is a solution to the system?" (We can substitute those values into the equations and see if the equations are true. We can also graph the system and see if it intersects at $$(1,2)$$.)
Next, ask students to complete the remainder of the activity.
Representation: Internalize Comprehension. Begin the activity with concrete or familiar contexts: Explicitly link the warm-up to the representation involved in analyzing Diego’s work. Connect the intuitive understanding of balance to familiar mathematical statements that do not require solving. Present strategically; in the same format as Diego’s work. For instance, ask students if $$2 + 2 = 4$$ is a true and balanced statement. Then, if $$3 + 1 = 4$$ is also a true and balanced statement. Once students have labeled these as true, add them together in the same arrangement as the upcoming problems and ask students if the result $$(5 + 3 = 8)$$, will therefore also be true and balanced. Then, introduce Diego’s work.
Supports accessibility for: Conceptual processing; Memory
### Student Facing
Diego is solving this system of equations:
\begin{cases} \begin {align}4x + 3y &= 10\\ \text-4x + 5y &= 6 \end{align} \end{cases}
Here is his work:
\begin {align}4x + 3y &= 10\\ \text-4x + 5y &= \hspace{2mm}6 \quad+\\ \overline {\quad 0 + 8y} &\overline{ \hspace{1mm}= 16 \qquad}\\ y &= 2 \end{align}
\begin {align}4x + 3(2) &= 10\\ 4x + 6 &= 10\\ 4x &= 4\\ x &= 1 \end{align}
1. Make sense of Diego’s work and discuss with a partner:
1. What did Diego do to solve the system?
2. Is the pair of $$x$$ and $$y$$ values that Diego found actually a solution to the system? How do you know?
2. Does Diego’s method work for solving these systems? Be prepared to explain or show your reasoning.
a. \begin {cases} \begin {align}2x + y &= 4\\ x - y &= 11 \end {align} \end {cases}
b. \begin {cases} \begin{align} 8x + 11y &= 37\\ 8x + \hspace{4.5mm} y &= \hspace{2mm} 7 \end{align} \end{cases}
### Student Response
For access, consult one of our IM Certified Partners.
### Activity Synthesis
Invite students to share their responses to the last set of questions and discuss whether Diego's method works for solving the two systems. Ask students:
• “Why does adding the equations work for solving the system with $$2x + y =4$$ and $$x-y=11$$, but doesn't work for $$8x + 11y =37$$ and $$8x + y =7$$?” (In the resulting equation, there are still two variables whose values we don't know.)
• “What if we subtract the equations? Would that help us solve the last system?” (Yes, subtracting the second equation from the first gives $$10y=30$$ or $$y=3$$, which we can then use to find the $$x$$-value.)
• “How is adding the two equations here like combining the shapes in the two hanger diagrams earlier? How is it different?” (It is alike in that the result is another equation: the combined parts on the left side stay on the left, and the combined parts on the right stay on the right. It is unlike the hanger diagrams in that these equations use numbers and variables, and the parts on the left that add up to 0 disappear from the third equation.)
Make sure students see that if we choose to add or subtract strategically, in each of the new equations, one variable is eliminated, making it possible to solve for the other variable. When the value of that variable is substituted into either of the original equations, we can solve for the variable that was eliminated. Tell students that this method of solving a system is called solving by elimination.
Point out that there is nothing wrong with adding the equations in the last system. It simply doesn't get us any closer to the solution and is therefore unproductive.
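For reference, the two elimination moves discussed above can be written out in the same format as Diego’s work (a worked sketch for the teacher, not part of the student-facing materials). Adding the equations in the first system eliminates $$y$$:

\begin {align} 2x + y &= 4\\ x - y &= 11 \quad+\\ \overline {\quad 3x + 0} &\overline{ \hspace{1mm}= 15 \qquad}\\ x &= 5 \end {align}

Substituting back, $$2(5) + y = 4$$, so $$y = \text-6$$. Subtracting the second equation from the first in the other system eliminates $$x$$:

\begin {align} 8x + 11y &= 37\\ 8x + \hspace{2.3mm} y &= \hspace{2mm}7 \quad-\\ \overline {\quad 0 + 10y} &\overline{ \hspace{1mm}= 30 \qquad}\\ y &= 3 \end {align}

Substituting back, $$8x + 3 = 7$$, so $$x = \frac12$$.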
Writing, Conversing: MLR1 Stronger and Clearer Each Time. Use this routine to help students improve their writing by providing them with multiple opportunities to clarify their explanations through conversation. Give students time to meet with 1–2 partners to share their response to the final question about Diego’s method. Provide listeners with prompts for feedback that will help their partner add detail to strengthen and clarify their ideas. For example, students can ask their partner: “How did you use Diego’s method to solve this problem?” or “Can you say more about....?” Give students 2–3 minutes to revise their initial draft based on feedback from their peers. This will help students produce a written generalization for how they can determine whether adding equations is a useful strategy for solving a linear system.
Design Principle(s): Support sense-making; Optimize output (for explanation)
## 14.3: Adding and Subtracting Equations to Solve Systems (15 minutes)
### Activity
Earlier, students saw that adding or subtracting the equations in a system creates a third equation that can help us solve the system. In this activity, students reconnect the idea of the solution to a system to the intersection of the graphs of the equations. They graph each original pair of equations and the equation that results from adding or subtracting them. They then observe that the graph of the third equation intersects the other two graphs at the exact same point—at the intersection of the first two.
At this point, students simply get a graphical confirmation that adding or subtracting equations can help them find the solution to a system. They are not yet expected to be able to articulate why this is the case. That understanding will be developed over a few upcoming lessons.
### Launch
Remind students that earlier they added or subtracted pairs of equations to form new equations. Explain that they will now graph each pair of equations in the systems given earlier, as well as the third equation that came from adding or subtracting those equations, and then make some observations about them.
Arrange students into groups of 3 and provide access to graphing technology. Assign one system for each group member to graph. Ask students to discuss their observations after graphing.
If possible, make available graphing technology that allows users to enter linear equations in standard form, such as Desmos (available under Math Tools). Otherwise, give students time to rearrange the equations into a form that can be used with the technology and to check their equivalent equations. If time is limited, provide these equivalent equations:
System A
\begin {cases}\begin {align}y &= \text-\frac43x+\frac{10}{3}\\ y&=\hspace{2mm}\frac45x + \frac65 \end{align} \end{cases}
System B
\begin {cases}\begin {align}y &= \text-2x+4\\ y&=\hspace{3mm}x-11 \end{align} \end{cases}
System C
\begin {cases}\begin {align}y &= \text-\frac{8}{11}x+\frac{37}{11}\\ y&=\hspace{3.7mm}\text-8x + 7 \end{align} \end{cases}
Action and Expression: Internalize Executive Functions. To support development of organizational skills, check in with students within the first 2–3 minutes of work time. Look for students who are able to find the sum or difference of the equations with ease. If students struggle, direct them to place parentheses around each entire equation being added or subtracted to enhance clarity and reduce errors.
Supports accessibility for: Memory; Organization
### Student Facing
Here are three systems of equations you saw earlier.
System A
\begin {cases}\begin {align}4x + 3y &= 10\\ \text-4x + 5y &= \hspace{2mm}6 \end{align} \end{cases}
System B
\begin {cases} \begin {align}2x + y &= 4\\ x - y &= 11 \end {align} \end {cases}
System C
\begin {cases} \begin{align} 8x + 11y &= 37\\ 8x + \hspace{4mm} y &= \hspace{2mm} 7 \end{align} \end{cases}
For each system:
1. Use graphing technology to graph the original two equations in the system. Then, identify the coordinates of the solution.
2. Find the sum or difference of the two original equations that would enable the system to be solved.
3. Graph the third equation on the same coordinate plane. Make an observation about the graph.
### Student Response
For access, consult one of our IM Certified Partners.
### Student Facing
#### Are you ready for more?
Mai wonders what would happen if we multiply equations. That is, we multiply the expressions on the left side of the two equations and set them equal to the expressions on the right side of the two equations.
1. In system B, write out the equation that you would get if you multiply the two equations in this manner.
2. Does your original solution still work in this new equation?
3. Use graphing technology to graph this new equation on the same coordinate plane. Why is this approach not particularly helpful?
### Student Response
For access, consult one of our IM Certified Partners.
### Anticipated Misconceptions
When solving system B, some students may not notice that the $$y$$-variable has a positive coefficient in one equation and a negative coefficient in the other, and consequently decide to subtract the second equation from the first rather than add the two equations. They may struggle to figure out why the solution pair they find doesn't match what is on the graph. Suggest that they express the second equation in terms of addition, $$x+ (\text-y) = 11$$, and try eliminating one variable again.
### Activity Synthesis
Display the graphs that students generated for all to see and ask students to share their observations. Highlight that the graph of the new equation intersects the graphs of the equations in the original system at the same point.
Time permitting, ask students to subtract the equations they previously added (or add the equations they previously subtracted) and then to graph the resulting equation on the same coordinate plane. Ask them to comment on the graphs. Students are likely to see that the graphs of the new equations are no longer horizontal or vertical lines, but they still intersect at the same point as the original graphs.
Invite students to make some conjectures as to why the graph of the new equation intersects the other two graphs at the same point. Without confirming or correcting their conjectures, tell students that they will investigate this question in the coming activities.
## Lesson Synthesis
### Lesson Synthesis
Now that students have an additional strategy for solving systems in their toolkit, invite students to reflect on three systems seen in the synthesis of a previous lesson, in which they made a case for solving one by substitution.
System 1
$$\begin {cases} 3m + n = 71\\2m-n =30 \end {cases}$$
System 2
$$\begin {cases} 4x + y = 1\\y = \text-2x+9 \end {cases}$$
System 3
$$\displaystyle \begin{cases} 5x+4y=15 \\ 5x+11y=22 \end{cases}$$
Ask students to discuss the following questions with a partner and be prepared to report their partner's responses:
• "Look at a system that you would have chosen to solve by substitution. Would you still choose to solve it by substitution now? Why or why not?"
• "Look at a system that you would not have chosen to solve by substitution. Would it help to solve by elimination? Why or why not?"
With a little bit of rearranging (of the equations), all of these systems could be solved by substitution or elimination. Students should at least recognize that systems 1 and 3 can be efficiently solved by elimination, while system 2 can be efficiently solved by substitution.
## 14.4: Cool-down - What to Do with This System? (5 minutes)
### Cool-Down
For access, consult one of our IM Certified Partners.
## Student Lesson Summary
### Student Facing
Another way to solve systems of equations algebraically is by elimination. Just like in substitution, the idea is to eliminate one variable so that we can solve for the other. This is done by adding or subtracting equations in the system. Let’s look at an example.
\begin {cases} \begin {align}5x+7y=64\\ 0.5x - 7y = \text-9 \end{align} \end {cases}
Notice that one equation has $$7y$$ and the other has $$\text-7y$$.
If we add the second equation to the first, the $$7y$$ and $$\text-7y$$ add up to 0, which eliminates the $$y$$-variable, allowing us to solve for $$x$$.
\begin {align} 5x+7y&=64\\ 0.5 x - 7y &= \text-9 \quad+\\ \overline {5.5 x + 0} &\overline {\hspace{1mm}= 55}\\ 5.5x &= 55 \\x &=10 \end {align}
Now that we know $$x = 10$$, we can substitute 10 for $$x$$ in either of the equations and find $$y$$:
\begin {align} 5x+7y&=64\\ 5(10)+7y &= 64\\ 50 + 7y &= 64\\ 7y &=14 \\y &=2 \end {align}
\begin {align} 0.5 x - 7y &= \text-9\\ 0.5(10) - 7y &= \text-9\\ 5 - 7y &= \text-9\\ \text-7y &= \text-14\\ y &= 2 \end {align}
In this system, the coefficient of $$y$$ in the first equation happens to be the opposite of the coefficient of $$y$$ in the second equation. The sum of the terms with $$y$$-variables is 0.
What if the equations don't have opposite coefficients for the same variable, like in the following system?
\begin {cases}\begin {align}8r + 4s &= 12 \\ 8r + \hspace{2.3mm}s &= \hspace{1mm}\text-3 \end{align} \end {cases}
Notice that both equations have $$8r$$ and if we subtract the second equation from the first, the variable $$r$$ will be eliminated because $$8r-8r$$ is 0.
\begin {align} 8r + 4s &= 12\\ 8r + \hspace{2.3mm}s &= \hspace{1mm}\text-3 \quad-\\ \overline {\hspace{2mm}0 + 3s }& \overline{ \hspace{1mm}=15} \\ 3s &= 15 \\s &=5 \end {align}
Substituting 5 for $$s$$ in one of the equations gives us $$r$$:
\begin {align} 8r + 4s &= 12\\ 8r + 4(5) &= 12\\ 8r + 20 &=12 \\ 8r &= \text-8 \\ r &= \text-1 \end {align}
Adding or subtracting the equations in a system creates a new equation. How do we know the new equation shares a solution with the original system?
If we graph the original equations in the system and the new equation, we can see that all three lines intersect at the same point, but why do they?
In future lessons, we will investigate why this strategy works.
## digitalmars.D.learn - Linking C libraries with DMD
• jmh530 (24/24) Jan 21 I'm trying to understand calling C libraries from D on Windows
• W.J. (8/32) Jan 21 You need to port the header file to d. i believe there's the htod
• Andrea Fontana (6/8) Jan 21 You should try with dstep too.
• W.J. (3/11) Jan 21 Interesting read. Thanks for sharing!
• jmh530 (12/19) Jan 21 I already ported the header file to d. What I can't get to work
• Dibyendu Majumdar (15/22) Jan 21 Hi I am also new to D and trying to do similar things - i.e. call
• jmh530 (5/19) Jan 21 I'm not having any luck using your options with dmd either
• Dibyendu Majumdar (5/14) Jan 21 Sorry forgot to mention that I also include the library I am
• jmh530 (8/23) Jan 21 The -L/LIBPATH:c:\lib gives me an error that
• Dibyendu Majumdar (4/10) Jan 21 OPTLINK is for 32-bit code - the options I showed are for 64-bit,
• jmh530 (15/26) Jan 21 Thanks. I had been trying to get 32bit code to work. I don't
• Dibyendu Majumdar (7/20) Jan 21 Sorry the option should be -L/IMPLIB:.. - with single slash but
• jmh530 (15/21) Jan 21 The single slash didn't make a difference. I tried that myself
• jmh530 (2/2) Jan 21 I also added an enhancement request:
• Dibyendu Majumdar (15/26) Jan 21 Okay then you don't need the /IMPLIB option. But you do need to
• jmh530 (10/39) Jan 21 I ran
• W.J. (22/70) Jan 21 The linker will most certainly get confused by this "-L
• jmh530 (5/20) Jan 21 The -L-L stuff from the LearningD book is making more sense. The
• bachmeier (8/16) Jan 21 Have you used pragma(lib)? https://dlang.org/spec/pragma.html#lib
• jmh530 (44/52) Jan 21 Looks like the sections are split apart by a couple hundred
• bachmeier (7/44) Jan 21 That should be implib /system libclib.lib libclib.dll. I'm pretty
• Mike Parker (51/58) Jan 21 Your confusion appears to be coming from a lack of understanding
• jmh530 (2/3) Jan 21 Thanks for the detailed reply.
• Mike Parker (55/71) Jan 21 I've take your example, modified it slightly, compiled the DLL
• jmh530 (2/3) Jan 22 Thanks again! Will review.
• jmh530 (17/18) Feb 02 Thanks again for your help. I've worked through some simple
• jmh530 (5/7) Feb 02 Yeah, this is what finally allowed me to progress.
jmh530 <john.michael.hall gmail.com> writes:
I'm trying to understand calling C libraries from D on Windows
with DMD. I made a simple example and compiled it with a static
library fine (so I've converted the .h file correctly). Then, I
compiled with gcc to a shared library (because I cannot for the
life of me figure out how to do this with DMC). I then used
implib to generate a .lib file (the fact that this is necessary
is not described nearly well enough in the documentation).
The next step is getting this to compile with dmd. I'm getting an
error that the function isn't getting called, which is suggesting
that I'm not linking the .lib file with DMD properly.
The dmd page
https://dlang.org/dmd-windows.html
explains that -L is used to pass stuff to the linker. It seems to
indicate that this should be the folder that the library is in.
The dll and lib files are in the same folder as the .d file I'm
trying to compile. So presumably, this should be -L. or -L\. or
like -LC:\folder\. But nothing like that works. There's a link on
the dmd page to optlink, which doesn't really help me figure this
out either. There's also some stuff about set LIB in the sci.ini.
Not sure I'm supposed to mess with that.
The LearningD book has some examples as well. I don't have it in
front of me right now, but I think I tried what they recommend
also. Nevertheless, I feel like the documentation should be clear
enough so that this isn't so frustrating.
Jan 21
W.J. <invalid email.address> writes:
On Thursday, 21 January 2016 at 16:14:40 UTC, jmh530 wrote:
I'm trying to understand calling C libraries from D on Windows
with DMD. I made a simple example and compiled it with a static
library fine (so I've converted the .h file correctly). Then, I
compiled with gcc to a shared library (because I cannot for the
life of me figure out how to do this with DMC). I then used
implib to generate a .lib file (the fact that this is necessary
is not described nearly well enough in the documentation).
The next step is getting this to compile with dmd. I'm getting
an error that the function isn't getting called, which is
suggesting that I'm not linking the .lib file with DMD properly.
The dmd page
https://dlang.org/dmd-windows.html
explains that -L is used to pass stuff to the linker. It seems
to indicate that this should be the folder that the library is
in. The dll and lib files are in the same folder as the .d file
I'm trying to compile. So presumably, this should be -L. or
-L\. or like -LC:\folder\. But nothing like that works. There's
a link on the dmd page to optlink, which doesn't really help me
figure this out either. There's also some stuff about set LIB
in the sci.ini. Not sure I'm supposed to mess with that.
The LearningD book has some examples as well. I don't have it
in front of me right now, but I think I tried what they
recommend also. Nevertheless, I feel like the documentation
should be clear enough so that this isn't so frustrating.
You need to port the header file to D. I believe there's the htod
utility, however I haven't used that yet.
Then, basically all you have to do is to tell the linker to link
against your C .lib.
Remember that -LC:\folder (for dmd) passes "C:\folder" on to the
linker. Assuming the library folder flag for your linker is -L,
you'd want to use -L-LC:\folder.
Jan 21
Andrea Fontana <nospam example.com> writes:
On Thursday, 21 January 2016 at 16:57:26 UTC, W.J. wrote:
You need to port the header file to d. i believe there's the
htod utility, however I haven't used that yet.
You should try with dstep too.
http://wiki.dlang.org/List_of_Bindings
And here:
http://wiki.dlang.org/D_binding_for_C
Jan 21
W.J. <invalid email.address> writes:
On Thursday, 21 January 2016 at 17:00:14 UTC, Andrea Fontana
wrote:
On Thursday, 21 January 2016 at 16:57:26 UTC, W.J. wrote:
You need to port the header file to d. i believe there's the
htod utility, however I haven't used that yet.
You should try with dstep too.
http://wiki.dlang.org/List_of_Bindings
And here:
http://wiki.dlang.org/D_binding_for_C
Interesting read. Thanks for sharing!
Jan 21
jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 21 January 2016 at 16:57:26 UTC, W.J. wrote:
You need to port the header file to d. i believe there's the
htod utility, however I haven't used that yet.
Then, basically all you have to do is to tell the linker to
Remember that -LC:\folder (for dmd) passes "C:\folder" on to
the linker. Assuming the library folder flag for your linker is
-L, you'd want to use -L-LC:\folder.
I already ported the header file to d. What I can't get to work
is linking a dynamic library.
The whole -L-L is definitely not intuitive. The optlink
documentation doesn't even describe a -L option. Anyway, it
doesn't give an error when I use a plus so I tried
dmd <file.d> -L-L+C:\folder\
and it still isn't picking it up. I figured I needed to tell the
linker what the file actually is, so I tried
dmd <file.d> -L-L+C:\folder\ -L-lib+<libfile.lib>
and that (and a variety of variations) gives errors that LIB
isn't recognized.
Jan 21
Dibyendu Majumdar <d.majumdar gmail.com> writes:
On Thursday, 21 January 2016 at 16:14:40 UTC, jmh530 wrote:
I'm trying to understand calling C libraries from D on Windows
with DMD. I made a simple example and compiled it with a static
library fine (so I've converted the .h file correctly). Then, I
compiled with gcc to a shared library (because I cannot for the
life of me figure out how to do this with DMC). I then used
implib to generate a .lib file (the fact that this is necessary
is not described nearly well enough in the documentation).
Hi I am also new to D and trying to do similar things - i.e. call
a shared library written in C from D, but also create a shared
library in D.
For the latter - on Windows 10 64-bit - I am using the following
options, for example:
-shared -L/LIBPATH:c:\\lib -L//IMPLIB:mylib.lib
In my case I would like stuff from my D code to be exported. I
found that I need to do following if I want to export a C API.
extern (C) export void myfunc();
I did not find examples of how to export D classes / functions -
and right now I am getting link errors when trying to export D
code.
Regards
Dibyendu
Jan 21
jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 21 January 2016 at 21:39:08 UTC, Dibyendu Majumdar
wrote:
Hi I am also new to D and trying to do similar things - i.e.
call a shared library written in C from D, but also create a
shared library in D.
For the latter - on Windows 10 b64-bit - I am using following
options for example:
-shared -L/LIBPATH:c:\\lib -L//IMPLIB:mylib.lib
In my case I would like stuff from my D code to be exported. I
found that I need to do following if I want to export a C API.
extern (C) export void myfunc();
I did not find examples of how to export D classes / functions
- and right now I am getting link errors when trying to export
D code.
Regards
Dibyendu
I'm not having any luck using your options with dmd either
(excluding -shared because I don't need to create a shared D
library).
Jan 21
Dibyendu Majumdar <d.majumdar gmail.com> writes:
On Thursday, 21 January 2016 at 21:55:10 UTC, jmh530 wrote:
For the latter - on Windows 10 b64-bit - I am using following
options for example:
-shared -L/LIBPATH:c:\\lib -L//IMPLIB:mylib.lib
I'm not having any luck using your options with dmd either
(excluding -shared because I don't need to create a shared D
library).
Sorry, forgot to mention that I also include the library I am linking:
dmd -m64 prog.d -L/LIBPATH:c:\lib -Lyourlib.lib
Where yourlib.lib and yourlib.dll are in c:\lib folder.
Jan 21
jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 21 January 2016 at 22:02:57 UTC, Dibyendu Majumdar
wrote:
On Thursday, 21 January 2016 at 21:55:10 UTC, jmh530 wrote:
For the latter - on Windows 10 b64-bit - I am using following
options for example:
-shared -L/LIBPATH:c:\\lib -L//IMPLIB:mylib.lib
I'm not having any luck using your options with dmd either
(excluding -shared because I don't need to create a shared D
library).
Sorry, forgot to mention that I also include the library I am linking:
dmd -m64 prog.d -L/LIBPATH:c:\lib -Lyourlib.lib
Where yourlib.lib and yourlib.dll are in c:\lib folder.
The -L/LIBPATH:c:\lib gives me an error that
OPTLINK : Warning 9: Unknown Option : LIBPATH
and then gives the path I put is not found.
At least when it's outputting the text, it's combining
:C:\lib\yourlib.lib
so it seemingly is finding it.
Jan 21
Dibyendu Majumdar <d.majumdar gmail.com> writes:
On Thursday, 21 January 2016 at 22:09:47 UTC, jmh530 wrote:
The -L/LIBPATH:c:\lib gives me an error that
OPTLINK : Warning 9: Unknown Option : LIBPATH
and then gives the path I put is not found.
At least when it's outputting the text, it's combining
:C:\lib\yourlib.lib
so it seemingly is finding it.
OPTLINK is for 32-bit code - the options I showed are for 64-bit,
which uses MS LINK. You get 64-bit code by adding -m64.
Regards
Jan 21
jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 21 January 2016 at 22:14:25 UTC, Dibyendu Majumdar
wrote:
On Thursday, 21 January 2016 at 22:09:47 UTC, jmh530 wrote:
The -L/LIBPATH:c:\lib gives me an error that
OPTLINK : Warning 9: Unknown Option : LIBPATH
and then gives the path I put is not found.
At least when it's outputting the text, it's combining
:C:\lib\yourlib.lib
so it seemingly is finding it.
OPTLINK is for 32-bit code - the options I showed are for
64-bit, which uses MS LINK. You get 64-bit code by adding -m64.
Regards
Thanks. I had been trying to get 32bit code to work. I don't
think I did anything special with gcc to compile the dll as
64bit. Anyway, this is what I get when I try it again (stuff in
brackets I replaced).
C:<folder>>dmd -m64 <file>.d -L/LIBPATH:C:<folder>
-L//IMPLIB:<libfile>.lib
LINK : warning LNK4044: unrecognized option
'//IMPLIB:<libfile>.lib'; ignored
callC0.obj : error LNK2019: unresolved external symbol
<cfunction> referenced in f
unction _Dmain
callC0.exe : fatal error LNK1120: 1 unresolved externals
--- errorlevel 1120
Jan 21
Dibyendu Majumdar <d.majumdar gmail.com> writes:
On Thursday, 21 January 2016 at 22:23:36 UTC, jmh530 wrote:
Thanks. I had been trying to get 32bit code to work. I don't
think I did anything special with gcc to compile the dll as
64bit. Anyway, this is what I get when I try it again (stuff in
brackets I replaced).
C:<folder>>dmd -m64 <file>.d -L/LIBPATH:C:<folder>
-L//IMPLIB:<libfile>.lib
LINK : warning LNK4044: unrecognized option
'//IMPLIB:<libfile>.lib'; ignored
callC0.obj : error LNK2019: unresolved external symbol
<cfunction> referenced in f
unction _Dmain
callC0.exe : fatal error LNK1120: 1 unresolved externals
--- errorlevel 1120
Sorry the option should be -L/IMPLIB:.. - with single slash but
you only need this if you are trying to create a shared library
which presumably you are not?
I believe to create a static library you need to use -lib, else
it is an app so you need to supply a main function.
Regards
Jan 21
jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 21 January 2016 at 22:35:29 UTC, Dibyendu Majumdar
wrote:
Sorry the option should be -L/IMPLIB:.. - with single slash but
you only need this if you are trying to create a shared library
which presumably you are not?
I believe to create a static library you need to use -lib, else
it is an app so you need to supply a main function.
Regards
The single slash didn't make a difference. I tried that myself
before posting. I got the same error.
I'm not trying to create a shared library in D. My goal is to
use a shared library from C in D. Right now, I'm working with a
simple test case to make sure I could understand it before
working with the actual shared library I want to use.
I recall some discussion in LearningD (don't have it in front of
me now) that different types of shared libraries are needed on
32bit vs. 64bit because there is a different linker. This is what
I did to create the shared library:
gcc -Wall -fPIC -c <file>.c -I.
gcc -shared -o <libfile>.dll <file>.o -I.
implib <libfile>.lib <libfile>.dll
Jan 21
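For anyone following along, the C side of such a test case can be as small as one function. The names below are hypothetical (the thread never shows the actual source); this is just the kind of file the gcc commands above would compile:

```c
/* clib.c (hypothetical name) - the entire C side of a minimal test
 * library. Built into a DLL with the gcc commands quoted above, then
 * turned into an import library with implib so the D linker can
 * resolve the symbol. */
int add_ints(int a, int b)
{
    return a + b;
}
```

The matching D-side declaration would then just be `extern (C) int add_ints(int a, int b);`, after which `add_ints` can be called like any D function.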
jmh530 <john.michael.hall gmail.com> writes:
I also added an enhancement request:
https://issues.dlang.org/show_bug.cgi?id=15588
Jan 21
Dibyendu Majumdar <d.majumdar gmail.com> writes:
On Thursday, 21 January 2016 at 22:49:06 UTC, jmh530 wrote:
I'm not trying to create a shared library in D. My goal is to
use a shared library from C in D. Right now, I'm working with a
simple test case to make sure I could understand it before
working with the actual shared library I want to use.
I recall some discussion in LearningD (don't have it in front
of me now) that different types of shared libraries are needed
on 32bit vs. 64bit because there is a different linker. This is
what I did to create the shared library:
gcc -Wall -fPIC -c <file>.c -I.
gcc -shared -o <libfile>.dll <file>.o -I.
implib <libfile>.lib <libfile>.dll
Okay then you don't need the /IMPLIB option. But you do need to
specify the library via -L as I mentioned before.
i.e. use:
dmd -m64 -L/LIBPATH:<path to lib> -L<library name> prog.d
Where <library name> is yourlib.lib and this is present along
with the DLL in the path you gave.
Plus your prog.d needs to have appropriate code. Example:
module app;
extern (C) void testing();
void main()
{
testing();
}
Here testing() is provided in the DLL.
Jan 21
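To complete the picture, the C implementation behind that extern (C) declaration is just an ordinary function with the same name. The body below is made up (the thread never shows it); the counter exists only so the effect of a call is observable:

```c
#include <stdio.h>

/* Hypothetical implementation of testing(), compiled into the DLL.
 * The extern (C) declaration in the D example above matches this
 * unmangled C symbol through the import library. */
static int call_count = 0;   /* file-local; just makes calls observable */

void testing(void)
{
    ++call_count;
    printf("testing() has been called %d time(s)\n", call_count);
}
```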
jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 21 January 2016 at 22:54:26 UTC, Dibyendu Majumdar
wrote:
On Thursday, 21 January 2016 at 22:49:06 UTC, jmh530 wrote:
I'm not trying to create a shared library in D. My goal is to
use a shared library from C in D. Right now, I'm working with
a simple test case to make sure I could understand it before
working with the actual shared library I want to use.
I recall some discussion in LearningD (don't have it in front
of me now) that different types of shared libraries are needed
on 32bit vs. 64bit because there is a different linker. This
is what I did to created the shared library:
gcc -Wall -fPIC -c <file>.c -I.
gcc -shared -o <libfile>.dll <file>.o -I.
implib <libfile>.lib <libfile>.dll
Okay then you don't need the /IMPLIB option. But you do need to
specify the library via -L as I mentioned before.
i.e. use:
dmd -m64 -L/LIBPATH:<path to lib> -L<library name> prog.d
Where <library name> is yourlib.lib and this is present along
with the DLL in the path you gave.
Plus your prog.d needs to have appropriate code. Example:
module app;
extern (C) void testing();
void main()
{
testing();
}
Here testing() is provided in the DLL.
I ran
dmd -m64 <file>.d -L/LIBPATH:<path to lib> -L<library name>
and got
<library name> : fatal error LNK1136: invalid or corrupt file
--- errorlevel 1136
At least that's progress.
LNK1136 is for a corrupt or abnormally small file. I did notice
that the original dll was 82kb and the lib file was 2kb.
Jan 21
W.J. <invalid email.address> writes:
On Thursday, 21 January 2016 at 23:07:06 UTC, jmh530 wrote:
On Thursday, 21 January 2016 at 22:54:26 UTC, Dibyendu Majumdar
wrote:
On Thursday, 21 January 2016 at 22:49:06 UTC, jmh530 wrote:
I'm not trying to create a shared library in D. My goal is
to use a shared library from C in D. Right now, I'm working
with a simple test case to make sure I could understand it
before working with the actual shared library I want to use.
I recall some discussion in LearningD (don't have it in front
of me now) that different types of shared libraries are
needed on 32bit vs. 64bit because there is a different
linker. This is what I did to create the shared library:
gcc -Wall -fPIC -c <file>.c -I.
gcc -shared -o <libfile>.dll <file>.o -I.
implib <libfile>.lib <libfile>.dll
Okay then you don't need the /IMPLIB option. But you do need
to specify the library via -L as I mentioned before.
i.e. use:
dmd -m64 -L/LIBPATH:<path to lib> -L<library name> prog.d
Where <library name> is yourlib.lib and this is present along
with the DLL in the path you gave.
Plus your prog.d needs to have appropriate code. Example:
module app;
extern (C) void testing();
void main()
{
testing();
}
Here testing() is provided in the DLL.
I ran
dmd -m64 <file>.d -L/LIBPATH:<path to lib> -L<library name>
and got
<library name> : fatal error LNK1136: invalid or corrupt file
--- errorlevel 1136
At least that's progress.
The linker will most certainly get confused by this "-L<library
name>" - as it will take it for an object file. The difference is
that a library is a collection of object files.
The GNU linker ld, for instance, uses the -l<libname> switch for
adding libraries to link against and -L<path> to add a search
path to look for the libraries passed in with -l<libname>.
If you leave it to the compiler to invoke the linker you need to
remember the -L compiler switch is passing what follows to the
linker (minus the -L compiler switch).
I.e. -L-LC:\lib\path will be passed on as "-LC:\lib\path",
-L-lsomelib => "-lsomelib", etc.
You won't find help about linker options on any compiler manual.
You have to refer to your linker manual.
Also make sure to adhere to naming conventions. It could very
well be that your linker quits with an error message if you pass it
-lsomelib for a file named somelib.lib when it expects to find a
file named libsomelib.a or libsomelib.so.
LNK1136 is for a corrupt or abnormally small file. I did notice
that the original dll was 82kb and the lib file was 2kb.
The lib for a 'DLL' is small because it just tells the linker
where to find the code in the 'DLL' - the actual code is in the
'DLL'.
Hope this helps
Jan 21
jmh530 <john.michael.hall gmail.com> writes:
On Friday, 22 January 2016 at 00:43:05 UTC, W.J. wrote:
The GNU linker ld, for instance, uses the -l<libname> switch
for adding libraries to link against and -L<path> to add a
search path to look for the libraries passed in with
-l<libname>.
If you leave it to the compiler to invoke the linker you need
to remember the -L compiler switch is passing what follows to
the linker (minus the -L compiler switch).
I.e. -L-LC:\lib\path will be passed on as "-LC:\lib\path",
-L-lsomelib => "-lsomelib", etc.
The -L-L stuff from the LearningD book is making more sense. The
book is using Linux examples, linux uses ld, which has those
flags.
LNK1136 is for a corrupt or abnormally small file. I did
notice that the original dll was 82kb and the lib file was 2kb.
The lib for a 'DLL' is small because it just tells the linker
where to find the code in the 'DLL' - the actual code is in the
'DLL'.
Hope that helps
That's clear. Thanks.
Jan 21
bachmeier <no spam.net> writes:
On Thursday, 21 January 2016 at 23:07:06 UTC, jmh530 wrote:
I ran
dmd -m64 <file>.d -L/LIBPATH:<path to lib> -L<library name>
and got
<library name> : fatal error LNK1136: invalid or corrupt file
--- errorlevel 1136
At least that's progress.
LNK1136 is for a corrupt or abnormally small file. I did notice
that the original dll was 82kb and the lib file was 2kb.
Have you used pragma(lib)? https://dlang.org/spec/pragma.html#lib
There's also a section on it in Learning D.
I don't use Windows much, but when I link to a dll, that's what I
do. I've actually never used -L options on Windows. If I want to
call functions from R.dll, I use implib /system R.lib R.dll to
create R.lib. Then I put pragma(lib, "R.lib"); in my .d file and
compile.
Jan 21
jmh530 <john.michael.hall gmail.com> writes:
On Friday, 22 January 2016 at 01:34:00 UTC, bachmeier wrote:
Have you used pragma(lib)?
https://dlang.org/spec/pragma.html#lib
There's also a section on it in Learning D.
Looks like the sections are split apart by a couple hundred
pages. I tried it with the .lib I created earlier without much
luck. Also note that the LearningD section recommends not using
the pragma. At this point, I'd rather just have something
working.
I don't use Windows much, but when I link to a dll, that's what
I do. I've actually never used -L options on Windows. If I want
to call functions from R.dll, I use implib /system R.lib R.dll
to create R.lib. Then I put pragma(lib, "R.lib"); in my .d file
and compile.
Thanks for walking through what you do.
I tried to create an example that more closely resembles what is
in LearningD (see
https://github.com/aldacron/LearningD/tree/master/Chapter09_Connecting%20D%20with%20C/clib). I created two files
clib.c
----------------
#include <stdio.h>
int some_c_function(int);
int some_c_function(int a) {
printf("Hello, D! from C! %d\n", a);
return a + 20;
}
and
dclib.d
-----------------
pragma(lib, "libclib.lib");
extern(C) @nogc nothrow {
int some_c_function(int);
}
void main()
{
import std.stdio : writeln;
writeln(some_c_function(10));
}
------------------
I then ran
gcc -Wall -fPIC -c clib.c
gcc -shared -o libclib.dll clib.o
implib libclib.lib libclib.dll
I'm getting an error on the implib command on my home computer.
Maybe running it on a different computer would work.
The LearningD book says that you should compile the libraries
with DMC on Windows, but I can't figure out how to generate a
shared library on DMC. I didn't get the implib error for what I
was working on before.
I feel like getting stuff to work with Windows is always such a
hassle, but that's the only way I'll be able to use this stuff at
work.
Jan 21
bachmeier <no spam.net> writes:
On Friday, 22 January 2016 at 02:39:33 UTC, jmh530 wrote:
I tried to create an example that more closely resembles what
is in LearningD (see
https://github.com/aldacron/LearningD/tree/master/Chapter09_Connecting%20D%20with%20C/clib). I created two files
clib.c
----------------
#include <stdio.h>
int some_c_function(int);
int some_c_function(int a) {
printf("Hello, D! from C! %d\n", a);
return a + 20;
}
and
dclib.d
-----------------
pragma(lib, "libclib.lib");
extern(C) @nogc nothrow {
int some_c_function(int);
}
void main()
{
import std.stdio : writeln;
writeln(some_c_function(10));
}
------------------
I then ran
gcc -Wall -fPIC -c clib.c
gcc -shared -o libclib.dll clib.o
implib libclib.lib libclib.dll
That should be implib /system libclib.lib libclib.dll. I'm pretty
sure the /system option is required. (Also be sure you are not
mixing 32-bit and 64-bit libraries. I don't think implib works
with 64-bit.)
I'm getting an error on the implib command on my home computer.
Maybe running it on a different computer would work.
The LearningD book says that you should compile the libraries
with DMC on Windows, but I can't figure out how to generate a
shared library on DMC. I didn't get the implib error for what I
was working on before.
You won't need to use DMC if you're using implib.
I feel like getting stuff to work with Windows is always such a
hassle, but that's the only way I'll be able to use this stuff
at work.
It definitely is more difficult.
Jan 21
Mike Parker <aldacron gmail.com> writes:
On Friday, 22 January 2016 at 02:39:33 UTC, jmh530 wrote:
The LearningD book says that you should compile the libraries
with DMC on Windows, but I can't figure out how to generate a
shared library on DMC. I didn't get the implib error for what I
was working on before.
I feel like getting stuff to work with Windows is always such a
hassle, but that's the only way I'll be able to use this stuff
at work.
Your confusion appears to be coming from a lack of understanding
of what's going on under the hood. When working with a system
language like D, it is imperative to understand what the compiler
and linker are doing. The same issues you are having can arise
when using C and C++, they are just less common as you tend to
use the same compiler toolchain for both your executable and your
libraries.
First of all, understand that DMD does not use just one linker on
Windows. The default is OPTLINK, which only works with 32-bit
object files (and by extension, library files, as they are just
archives of objects) in the OMF format. When compiling with
-m32mscoff or -m64, DMD uses the Microsoft linker, which deals
with objects in the COFF format. This matters at *link time*, not
at runtime. So it *generally* (see below) doesn't matter which
format your DLL is in, as it is loaded at runtime no matter how
you compile.
Second, understand that when you choose to link with an import
library rather than loading the DLL manually, then it is the
format of the import library that's important. It needs to be in
the OMF format if you are compiling with vanilla DMD and in the
COFF format if not. OMF import libraries can be generated from
COFF DLLs with implib. Import libraries generated by the MinGW
toolchain are actually in the COFF format, but they are not
always compatible with the Microsoft toolchain. You are likely
going to have issues even when compiling with -m32mscoff or -m64.
Your implib difficulties may actually be arising because the DLL
was compiled with MinGW, despite it being in COFF.
Third, understand that passing -L to DMD tells it that the
succeeding flag should be passed to the linker. On Windows, -L-L
has no meaning, as neither OPTLINK nor the MS linker accept the
-L switch. -L is used with GCC to specify the library path, so in
the command line -L-L/path/to/libs, the first -L tells DMD that
the next part is for the linker and the second -L tells the
linker where to find libraries. Again, this is only for the GCC
toolchain. For DMD on Windows, how you specify the library path
depends on whether you are linking with OPTLINK or the MS linker.
As for the libraries themselves, you don't actually
need the -L flag on Windows. In fact, you can save yourself some
trouble and just pass the full path to any libraries you need
with no flags at all:
dmd myapp.d C:\path\to\libs\mylib.lib
As long as the library is in the appropriate format, this command
line will do the right thing.
I strongly recommend that you compile your DLL and generate the
import library with the Microsoft tools. Then you should be able
to use the 32-bit version with -m32mscoff and the 64-bit version
with -m64. This should /just work/.
Development on Windows is not any more difficult than on Linux.
It's annoying, sure, but not difficult. You just need to make
sure that all of the tools you are using are compatible.
Jan 21
jmh530 <john.michael.hall gmail.com> writes:
On Friday, 22 January 2016 at 04:03:27 UTC, Mike Parker wrote:
[snip]
Thanks for the detailed reply.
Jan 21
Mike Parker <aldacron gmail.com> writes:
I've taken your example, modified it slightly, compiled the DLL
with Visual Studio, and got a working executable. First up, the C
file. Here's your original:
clib.c
----------------
#include <stdio.h>
int some_c_function(int);
int some_c_function(int a) {
printf("Hello, D! from C! %d\n", a);
return a + 20;
}
First, the function prototype is not needed. You only need those
in header files for other C modules to have access to them.
Declaring them in the same source file as the function
implementation serves no purpose.
Second, the Microsoft linker needs to know which functions you
intend to export from your DLL. In order to tell it, you either
need to add a __declspec(dllexport) to the functions you plan to
export, or provide a module definition file on the command line
(see [1]). I opted for the former approach. With that, your C
source file looks like this:
#include <stdio.h>
__declspec(dllexport) int some_c_function(int a) {
printf("Hello, D! from C! %d\n", a);
return a + 20;
}
In the D source file, I opted to remove the pragma in favor of
passing the import library on the command line:
extern(C) @nogc nothrow {
int some_c_function(int);
}
void main()
{
import std.stdio : writeln;
writeln(some_c_function(10));
}
OK, now create the following file/folder hierarchy:
-vctest
--src
----c/clib.c
----d/dclib.d
--lib
--bin
I have Visual Studio Community 2015 installed. Whichever version
you have, you should find a folder for it in the Windows start
menu that provides shortcuts to various command prompts. I opted
for the one labeled VS2015 x64 Native Tools Command Prompt. You
might select the 32-bit (x86) version instead. Open one of them,
navigate to the vctest directory, and execute the following
command line:
cl /D_USRDLL /D_WINDLL src\c\clib.c /LD /Felib\clib.lib /link
Note the backslashes in src\c\clib.c and lib\clib.lib. You'll
likely see an error with forward slashes, unless you put the
paths in quotes (see [2] for compiler options). This should
create both clib.dll and the import library clib.lib in the lib
directory. Next, copy the dll to the bin directory:
copy lib\clib.dll bin
Now, either in the same command prompt or a separate one where
DMD is on the path (depending on your configuration), execute the
following:
dmd -m64 src/d/dclib.d lib/clib.lib -ofbin/dclib.exe
Replace -m64 with -m32mscoff if you used the 32-bit toolchain
instead of the 64-bit.
Following these steps, I produced a working executable that
output the following:
Hello, D! from C! 10
30
[1] https://msdn.microsoft.com/en-us/library/34c30xs1.aspx
[2] https://msdn.microsoft.com/en-us/library/19z1t1wy.aspx
Jan 21
jmh530 <john.michael.hall gmail.com> writes:
On Friday, 22 January 2016 at 04:43:52 UTC, Mike Parker wrote:
[snip]
Thanks again! Will review.
Jan 22
jmh530 <john.michael.hall gmail.com> writes:
On Friday, 22 January 2016 at 04:43:52 UTC, Mike Parker wrote:
[snip]
Thanks again for your help. I've worked through some simple
examples and started trying to write a binding to a C library.
I think I've got the .h file converted properly (the D file
compiles), but I was getting a linking error. The enums were
showing up, but not the functions. I tried creating a simple
project that mimics what was happening in the C library and found
no issues.
I think I narrowed it down to what is causing the linking error.
I had not compiled the C dll myself. Looking in to the issue, I
noticed it was compiled with MinGW. There were instructions for
using it with Visual Studio (to make the .def and .dll files into
a .lib) though and I assumed following those was sufficient.
However, based on what you've said, I suspect that if I
re-compile the dll using Visual Studio, then it will work with D.
I don't think I would have figured that out without your comments
above.
Feb 02
jmh530 <john.michael.hall gmail.com> writes:
On Tuesday, 2 February 2016 at 20:06:05 UTC, jmh530 wrote:
I suspect that if I re-compile the dll using Visual Studio,
then it will work with D.
Yeah, this is what finally allowed me to progress.
Unfortunately, my sample example compiles, but throws an Access
Violation error when I run it. I think I will start a new thread
# Subsets of fixed size with no adjacent elements
One can go over all subsets of the set $\{1,...,n\}$ by using a binary representation - $0$ at index $i$ means that element $i$ is not in the set, while $1$ at index $i$ means that it is.
Furthermore, one can count the number of subsets that have no adjacent elements (ie, if $i$ is in the subset then $i-1,i+1$ are not) by counting binary vectors of length $n$ with no adjacent $1$s, recursively: denote by $a_n$ the number of such binary vectors. Either element $1$ is not in the subset (ie, at index $1$ in the binary vector there is a $0$) and there are $a_{n-1}$ ways to continue, or there is $10$ at indexes $1,2$ in the binary vector and there are $a_{n-2}$ ways to continue. Meaning, the number of such binary vectors (subsets) is $a_n = a_{n-1} + a_{n-2}$.
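As a quick sanity check (not part of the original question), the recursion $a_n = a_{n-1} + a_{n-2}$ can be verified against brute-force enumeration:

```python
from itertools import product

def count_no_adjacent(n):
    # a_n = a_{n-1} + a_{n-2}, with a_0 = 1 (the empty string) and a_1 = 2.
    if n == 0:
        return 1
    a, b = 1, 2  # a_0, a_1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

def brute_force(n):
    # Enumerate all binary vectors of length n and keep those with no "11".
    return sum(1 for v in product([0, 1], repeat=n)
               if all(not (v[i] and v[i + 1]) for i in range(n - 1)))

for n in range(12):
    assert count_no_adjacent(n) == brute_force(n)
print(count_no_adjacent(10))  # 144 - these are shifted Fibonacci numbers
```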
However, how can we count the number of subsets of some fixed size $0 \leq k \leq n$ that have no adjacent elements? Equivalently, we need to count the binary strings of length $n$ that have exactly $k$ $1$'s and no adjacent $1$'s. For comparison, the total number of size-$k$ subsets of the main set is $\binom{n}{k}$.
I would appreciate some guidance.
• See Wilf's generatingfunctionology, problems 11-13 in chapter 1 exercises and example 1 in section 4.3. – Alexander Burstein Jan 19 '18 at 7:27
• Wow, a super useful handbook. I'll make sure to browse it well. Much, much, much appreciated – TheNotMe Jan 19 '18 at 10:07
It will be ${n - k + 1 \choose k}$. For every size-$k$ subset of the set $\{1, 2, ..., n-k+1\}$, add $0$ to the smallest element, $1$ to the next element, ..., $k-1$ to the largest element. You will get a size-$k$ subset of $\{1,...,n\}$ with no adjacent elements. Conversely, if from a set with no adjacent elements you subtract $k-1$ from the largest element, $k-2$ from the second largest element, ..., $0$ from the smallest element, you will get a size-$k$ subset of the set $\{1, 2, ..., n-k+1\}$ - so the given function is a bijection.
• Note that in generatingfunctionology (what was referred to in Alexander's comment above) it says that the answer is $\binom{n-k+1}{k}$ – TheNotMe Jan 19 '18 at 10:36
• Yes, a small mistake - there are $k+1$ numbers in the set $\{0,...,k\}$, so we should add $k-1$ to the largest element - I've corrected the answer – J. Doe Jan 19 '18 at 11:35
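The bijection in the answer can be checked directly (again, not part of the original thread):

```python
from itertools import combinations
from math import comb

def no_adjacent_subsets(n, k):
    # Brute force: size-k subsets of {1,...,n} with no two consecutive elements.
    return [s for s in combinations(range(1, n + 1), k)
            if all(s[i + 1] - s[i] >= 2 for i in range(k - 1))]

def lift(s):
    # The bijection from the answer: add 0, 1, ..., k-1 to the sorted
    # elements of a size-k subset of {1,...,n-k+1}.
    return tuple(x + i for i, x in enumerate(sorted(s)))

n, k = 10, 3
subsets = no_adjacent_subsets(n, k)
assert len(subsets) == comb(n - k + 1, k)  # C(8, 3) = 56
# Lifting every size-k subset of {1,...,n-k+1} yields exactly the
# no-adjacent subsets, confirming the map is a bijection.
lifted = {lift(s) for s in combinations(range(1, n - k + 2), k)}
assert lifted == set(subsets)
```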
Research Article
# A Novel Bayesian DNA Motif Comparison Method for Clustering and Retrieval
• Naomi Habib (equal contributor; contributed equally with Tommy Kaplan)
Affiliations: School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel; Department of Molecular Genetics and Biotechnology, Faculty of Medicine, The Hebrew University, Jerusalem, Israel
• Tommy Kaplan (equal contributor; contributed equally with Naomi Habib)
Affiliations: School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel; Department of Molecular Genetics and Biotechnology, Faculty of Medicine, The Hebrew University, Jerusalem, Israel
• Hanah Margalit
Affiliation: Department of Molecular Genetics and Biotechnology, Faculty of Medicine, The Hebrew University, Jerusalem, Israel
• Nir Friedman (nir@cs.huji.ac.il)
Affiliation: School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel
• Published: February 29, 2008
• DOI: 10.1371/journal.pcbi.1000010
Corrections
9 May 2011: Habib N, Kaplan T, Margalit H, Friedman N (2011) Correction: A Novel Bayesian DNA Motif Comparison Method for Clustering and Retrieval. PLoS Comput Biol 7(5): 10.1371/annotation/d876137b-59c5-48cf-8491-c8cf12f26a9b. doi: 10.1371/annotation/d876137b-59c5-48cf-8491-c8cf12f26a9b | View correction
## Abstract
Characterizing the DNA-binding specificities of transcription factors is a key problem in computational biology that has been addressed by multiple algorithms. These usually take as input sequences that are putatively bound by the same factor and output one or more DNA motifs. A common practice is to apply several such algorithms simultaneously to improve coverage at the price of redundancy. In interpreting such results, two tasks are crucial: clustering of redundant motifs, and attributing the motifs to transcription factors by retrieval of similar motifs from previously characterized motif libraries. Both tasks inherently involve motif comparison. Here we present a novel method for comparing and merging motifs, based on Bayesian probabilistic principles. This method takes into account both the similarity in positional nucleotide distributions of the two motifs and their dissimilarity to the background distribution. We demonstrate the use of the new comparison method as a basis for motif clustering and retrieval procedures, and compare it to several commonly used alternatives. Our results show that the new method outperforms other available methods in accuracy and sensitivity. We incorporated the resulting motif clustering and retrieval procedures in a large-scale automated pipeline for analyzing DNA motifs. This pipeline integrates the results of various DNA motif discovery algorithms and automatically merges redundant motifs from multiple training sets into a coherent annotated library of motifs. Application of this pipeline to recent genome-wide transcription factor location data in S. cerevisiae successfully identified DNA motifs in a manner that is as good as semi-automated analysis reported in the literature. Moreover, we show how this analysis elucidates the mechanisms of condition-specific preferences of transcription factors.
## Author Summary
Regulation of gene expression plays a central role in the activity of living cells and in their response to internal (e.g., cell division) or external (e.g., stress) stimuli. Key players in determining gene-specific regulation are transcription factors that bind sequence-specific sites on the DNA, modulating the expression of nearby genes. To understand the regulatory program of the cell, we need to identify these transcription factors, when they act, and on which genes. Transcription regulatory maps can be assembled by computational analysis of experimental data, by discovering the DNA recognition sequences (motifs) of transcription factors and their occurrences along the genome. Such an analysis usually results in a large number of overlapping motifs. To reconstruct regulatory maps, it is crucial to combine similar motifs and to relate them to transcription factors. To this end we developed an accurate fully-automated method, termed BLiC, based upon an improved similarity measure for comparing DNA motifs. By applying it to genome-wide data in yeast, we identified the DNA motifs of transcription factors and their putative target genes. Finally, we analyze motifs of transcription factors that alter their target genes under different conditions, and show how cells adjust their regulatory program in response to environmental changes.
### Introduction
Transcription initiation is modulated by transcription factors that recognize sequence-specific binding sites in regulatory regions. The organization of binding sites around a gene specifies which factors can bind to it and where, and consequently determines to what extent the gene is transcribed under different conditions. To understand this regulatory mechanism, one must specify the DNA binding preferences of transcription factors. These preferences are usually characterized by a motif that summarizes the commonalities among the binding sites of a transcription factor [1]. Multiple tools were developed for finding motifs (e.g., [2]-[5]), however there are several problems in interpreting their output. Typically these algorithms output multiple results which require filtering and scoring. Moreover, different motif discovery methods have complementary successes, and therefore it is beneficial to apply multiple methods simultaneously and collate their results [6]. In addition, the motif discovery algorithms frequently produce a redundant output and the transcription factor that binds each motif is usually unknown. As similar motifs may represent binding sites of the same factor, eliminating this redundancy is essential for elucidating the true transcriptional regulatory program. The general strategy is thus to cluster similar motifs and merge motifs within each cluster to create a library of non-redundant motifs [6] (Figure 1B). Next, in order to interpret the meaning of the discovered motifs, they are compared to databases of previously characterized motifs (Figure 1C). In large-scale experiments, where the motif output set is very large, the tasks of scoring, merging and identifying motifs need to be automated. To address both the clustering and the retrieval challenges, we need an accurate and sensitive method for comparing DNA motifs.
In the literature there is an ongoing discussion regarding the best representation of DNA motifs [1], [7]-[10]. Here we use a Position Frequency Matrix (PFM), which has the benefits of being relatively simple yet flexible. A PFM is a matrix of positions in the binding site versus nucleotide preferences, where each row represents one residue and each column represents the nucleotide count at each position in a set of aligned binding sites. This representation assumes that the choice of nucleotides at different positions is independent of all other positions.
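To make the PFM representation concrete, the following short sketch (illustrative only; the aligned binding sites are made up) builds a count matrix with one row per nucleotide and one column per position:

```python
import numpy as np

# Toy aligned binding sites (hypothetical, not from the paper's data).
sites = ["TACGAT", "TATAAT", "TACAAT", "TATGTT"]
NT = "ACGT"

# PFM: rows are nucleotides A, C, G, T; columns are positions.
pfm = np.zeros((4, len(sites[0])), dtype=int)
for site in sites:
    for pos, base in enumerate(site):
        pfm[NT.index(base), pos] += 1

print(pfm)
# Each column sums to the number of sites; e.g. the first position is
# T in all four sites, so column 0 has its full count in the T row.
```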
To compare two PFMs, we need to consider all possible alignments between them. Given two aligned PFMs, we utilize the position-independence assumption to decompose the similarity score into a sum of the similarities of single aligned positions. Several similarity scores can be used to compare a pair of aligned positions. One approach uses the Pearson correlation coefficient (e.g., [11],[12]). This measure, however, might inappropriately capture similarities between probabilities (Figure 2 and Figure S1). Alternative approaches are based on similarity between two distributions. This can be a metric distance, such as the Euclidean distance [13] or an information-theoretic measure, such as the Jensen-Shannon divergence [14]. While these latter distances do not have the artifacts of the Pearson correlation, they equally weight positions with similar nucleotide distributions that are specific (e.g., a strong preference for an A) and similar positions that are non-informative (e.g., identical to the background distribution) (Figure 2 and Figure S1). It is important to differentiate between these two situations: Two positions whose similarity is due to a resemblance to the background distribution are less relevant to motif similarity, as they do not contribute to sequence-specific binding of proteins [15],[16]. In this work we use this intuition to develop a novel method for comparing and merging DNA motifs, based on Bayesian probabilistic reasoning. We define a new similarity score that combines the statistical similarity between the motifs with their dissimilarity to the background distribution. To calculate this score we estimate the probabilities of DNA nucleotides in each position of the DNA motif, by a Bayesian estimator with a Dirichlet mixture prior [17],[18] to model the multi-modal nucleotide distribution at different binding site positions.
We use this motif similarity score to identify similar motifs that represent binding sites of the same factor, and to cluster motifs. For the latter, we devised a hierarchical agglomerative clustering procedure that is based on our motif similarity score. Our results show that the new method outperforms other alternatives in accuracy and sensitivity in both the clustering and retrieval tasks.
Using our new similarity score and the clustering method based upon it, we developed a large-scale analysis pipeline of DNA motif sets. This pipeline is designed for analysis following concurrent motif search by a combination of methods (using the TAMO package [19]). The goal is to process the set of DNA motifs into a set of reliable non-redundant motifs. We use our method to identify sets of DNA motifs from a large-scale ChIP-chip assay in S. cerevisiae [13]. This allows us to examine how transcription factors alter their DNA binding preferences under various environmental conditions and elucidate mechanisms for condition-specific preferences.
### Results
#### A Novel DNA Motif Similarity Score
Our goal is to determine whether two DNA motifs represent the same binding preferences. Since the less informative positions in a motif do not contribute to sequence-specific binding of proteins, we developed a similarity score that measures the similarity between two DNA motifs, while taking into account their dissimilarity from the background distribution.
We now develop the details of the score. We can view DNA motifs as a list of binding sites from which the nucleotide distribution at each position is estimated. This view allows us to perform statistical evaluations. We assume that each binding site was sampled independently from a common distribution over nucleotides, which satisfies the position independence properties (in correspondence with the motif PFM representation described above). Then, we can evaluate the likelihood ratio of different source distributions of the sampled binding sites. In practice, we keep only the sufficient statistics allowing us to evaluate the likelihood of the binding sites. These sufficient statistics are the counts of each nucleotide in each position, represented as a PFM.
Our score is composed of two components: the first measures whether the two motifs were generated from a common distribution, while the second reflects the distance of that common distribution from the background (see Methods). Our Bayesian Likelihood 2-Component (BLiC) score for comparing motifs m1 and m2 is:

$$\mathrm{BLiC}(m_1,m_2)=\log\frac{P(m_1,m_2\mid\text{common source})}{P(m_1\mid\text{source}_1)\,P(m_2\mid\text{source}_2)}+\log\frac{P(m_1,m_2\mid\text{common source})}{P(m_1,m_2\mid\text{background})}\qquad(1)$$
Under the position independence assumption, the score decomposes into a sum of local position scores. More precisely, our likelihood-based score measures the probability of the nucleotide counts in each position of the motif given a source distribution. For two aligned positions in the compared motifs, let n1 and n2 be the corresponding positions (count vectors) in the two motifs. The similarity score is then:

$$\mathrm{BLiC}(n_1,n_2)=\log\frac{P(n_1,n_2\mid\hat\theta)}{P(n_1\mid\hat\theta_1)\,P(n_2\mid\hat\theta_2)}+\log\frac{P(n_1,n_2\mid\hat\theta)}{P(n_1,n_2\mid P_{bg})}\qquad(2)$$

where $P(n\mid\theta)\propto\prod_{j\in NT}\theta_j^{n_j}$; $\hat\theta_1$, $\hat\theta_2$, and $\hat\theta$ are the estimators for the source distribution of $n_1$, the source distribution of $n_2$, and the common source distribution, respectively; $P_{bg}$ is the background nucleotide distribution; and $NT = \{A,C,G,T\}$.
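As an illustrative sketch of the two-component idea (not the paper's implementation: posterior means under a single symmetric Dirichlet prior stand in here for the paper's five-component Dirichlet-mixture estimator), the per-position score can be computed as:

```python
import numpy as np

def log_lik(counts, theta):
    # log P(counts | theta) up to the multinomial coefficient:
    # sum_j counts_j * log theta_j over NT = {A, C, G, T}.
    return float(np.sum(counts * np.log(theta)))

def blic_position(n1, n2, p_bg, alpha=1.0):
    # Sketch of the per-position two-component score. Posterior-mean
    # estimates under Dirichlet(alpha) replace the paper's mixture prior.
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    t1 = (n1 + alpha) / (n1.sum() + 4 * alpha)               # source of n1
    t2 = (n2 + alpha) / (n2.sum() + 4 * alpha)               # source of n2
    tc = (n1 + n2 + alpha) / ((n1 + n2).sum() + 4 * alpha)   # common source
    common = log_lik(n1 + n2, tc)
    return ((common - log_lik(n1, t1) - log_lik(n2, t2))     # common vs. separate
            + (common - log_lik(n1 + n2, p_bg)))             # common vs. background

bg = np.full(4, 0.25)
# Two informative, similar positions (mostly A) score high...
hi_score = blic_position([9, 1, 0, 0], [8, 0, 1, 1], bg)
# ...while two near-background positions score near zero despite being similar.
lo_score = blic_position([3, 2, 3, 2], [2, 3, 2, 3], bg)
print(round(hi_score, 2), round(lo_score, 2))
```

This reproduces the key behavior motivated above: similarity that merely resembles the background contributes little to the score.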
Since the source distribution is unknown, we must estimate it from the nucleotide counts at each position. We used a Bayesian estimator, where a priori knowledge and the number of samples were integrated into the estimation process. We considered two alternative priors. The first is a standard Dirichlet prior [20], which is conjugate to the multinomial distribution, enabling us to compute the estimations efficiently (see Methods). However with this prior we cannot model our prior knowledge that a position in a DNA motif tends to have specific preference to one or more nucleotides. Such prior knowledge can be described with a Dirichlet mixture prior [17],[18], which represents a prior that consists of several “typical” distributions. Specifically, we used a five-component mixture prior, with four components representing an informative distribution, giving high probability for a single nucleotide: A, C, G, or T. The fifth component represents the uniform distribution (see Methods).
In the above discussion we assumed that the motifs are aligned, but in practice, we compare unaligned motifs. Thus, we defined the similarity score for two motifs as the score of the best possible alignment (without gaps) between them, including the reverse complement alignment.
In addition, we need statistical calibration of the similarity scores, since a high similarity score might be due to chance events [21],[22]. In particular, when comparing a single motif against motifs of different lengths, the probability of similarity by chance depends on the query motif and the length of the target. To circumvent these problems we use the p-value of the similarity score, which is computed empirically for each query against the distribution of scores of random motifs of a given length (see Methods and Figure 3).
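The empirical calibration step can be sketched as follows (illustrative only; the null scores here are synthetic stand-ins for similarity scores of random motifs of the target's length):

```python
import random

def empirical_p_value(query_score, null_scores):
    # Empirical p-value: the fraction of null scores at least as high as the
    # observed score, with add-one smoothing so the estimate is never zero.
    hits = sum(1 for s in null_scores if s >= query_score)
    return (hits + 1) / (len(null_scores) + 1)

random.seed(0)
# Hypothetical null distribution of similarity scores against 999 random motifs.
null = [random.gauss(0.0, 1.0) for _ in range(999)]
p_strong = empirical_p_value(3.0, null)   # few random motifs score this high
p_weak = empirical_p_value(-1.0, null)    # most random motifs score higher
print(p_strong, p_weak)
```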
##### Clustering motifs.
An important application of motif similarity scores is clustering. There are many clustering methods [23] that can be applied to motifs. Here we consider one of the simplest and straightforward clustering procedures where we combined a similarity score, such as our BLiC score, within a hierarchical agglomerative clustering algorithm. In each iteration, the algorithm computes the similarity between all pairs of motifs and then merges the most similar pair into a new motif based on the best alignment between the two motifs (see Figure 1). It then replaces the two original motifs by the new motif. These iterations are performed until we are left with a single motif. The order of merge operations results in a tree, where the leaves are the initial motifs, and inner nodes correspond to merged motifs that represent all motifs in the relevant sub-tree (see Figure 4A). We stress that this procedure is different than standard hierarchical clustering (such as UPGMA clustering). Since we merge motifs to create a new one, the similarity of the merged motif to another is different from the average similarity of each of the original motifs to that third motif.
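The merge-based loop described above can be sketched as follows (illustrative only: toy count-matrix "motifs" of equal length, a simple frequency-distance similarity, and count addition as the merge operation stand in for the paper's BLiC score and alignment-based merging):

```python
import numpy as np

def merge_clustering(motifs, similarity, merge):
    # Agglomerative procedure that merges motifs rather than averaging
    # similarities: repeatedly merge the most similar pair into a new motif,
    # recording the merge order as a nested-tuple tree over leaf indices.
    nodes = list(range(len(motifs)))
    motifs = list(motifs)
    while len(motifs) > 1:
        i, j = max(((a, b) for a in range(len(motifs))
                    for b in range(a + 1, len(motifs))),
                   key=lambda ab: similarity(motifs[ab[0]], motifs[ab[1]]))
        merged = merge(motifs[i], motifs[j])
        node = (nodes[i], nodes[j])
        for idx in (j, i):           # delete higher index first
            del motifs[idx], nodes[idx]
        motifs.append(merged)
        nodes.append(node)
    return nodes[0]

# Stand-in similarity: negative Euclidean distance between frequency matrices.
def sim(a, b):
    return -np.linalg.norm(a / a.sum(1, keepdims=True)
                           - b / b.sum(1, keepdims=True))

m0 = np.array([[8, 0, 1, 1], [0, 9, 1, 0]])
m1 = np.array([[7, 1, 1, 1], [1, 8, 0, 1]])   # close to m0
m2 = np.array([[1, 1, 8, 0], [0, 1, 1, 8]])   # different
tree = merge_clustering([m0, m1, m2], sim, lambda a, b: a + b)
print(tree)  # (2, (0, 1)): m0 and m1 merge first, then the pair joins m2
```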
We use the clustering tree to distill a large group of motifs into a concise non-redundant set, by splitting the tree into a subset of clusters, each representing a group of redundant motifs by choosing a frontier in the tree (see Figure 4A and Methods).
#### Comprehensive Evaluation of Similarity Scores
We set out to compare our similarity score to existing ones in the literature, in the context of both motif comparison and clustering. We use two different data sets.
The first data set, which we refer to as “Yeast” is a synthetic one where we know the true labeling of motifs and use it to benchmark different procedures by relating their results with the underlying truth. To generate synthetic motifs in a realistic manner that reflects true binding properties of transcription factors, we use the genome-wide catalogue of transcription factor binding locations in S. cerevisiae [13]. This catalogue has high-confidence binding sites (based on combination of experimental assays with evolutionary conservation considerations). From these, we selected nine transcription factors to represent different binding patterns (in terms of inner arrangements of informative positions and length). From the binding sites of each factor we sampled sets of binding sites, and from each set generated a motif (see Figure 3A). For each factor we generated noisy motifs that differ in their quality. To do so, we varied the number of binding sites (sizes of 5, 15 or 35) and the coverage of the motif (full site, its beginning, middle, or end). We repeated this for each motif 20 times, creating a set of 240 random motifs for each of the nine transcription factors.
The second data set, which we refer to as “Structural”, was compiled by Mahony et al. [24]. Their evaluation is based on structural information. Since structurally related transcription factors often have similar DNA-binding preferences, the best match to a given motif is expected to be a motif associated with a member of the same structural class. Mahony et al. compiled a data set that contains the motifs of the families with four or more profiles in JASPAR [25].
Using these two data sets we compared different possible similarity scores for DNA motifs. Specifically, we compared the Pearson correlation coefficient; the information-theory based Jensen-Shannon divergence; the Euclidean distance; and our BLiC score.
##### Motif comparison evaluation—Identifying similar motifs.
We evaluated the sensitivity and specificity of motif similarity scoring methods by comparing all possible pairs of motifs from the data sets described above, and testing whether pairs that have high similarity were indeed generated from the same source. In the “Yeast” data set we label a pair as true if the two motifs were generated from binding locations of the same transcription factor, and in the “Structural” data set we label a pair as true if the motifs belong to factors from the same structural class. For each motif pair, if the similarity is statistically significant we label the pair as positive, and otherwise as negative. We compared this prediction to the label of the pair, and calculated the sensitivity and specificity at each p-value threshold to create ROC curves (Figure 3B and 3C and Figure S2). Comparing the ROC curves of our score to those of previously suggested scores, we see that the BLiC score outperformed all other scores throughout the range of possible sensitivity/specificity tradeoffs on both data sets.
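The threshold sweep behind these ROC curves can be sketched in a few lines (a minimal Python illustration with hypothetical helper names, not the code used in the study):

```python
def roc_points(p_values, labels):
    """Sweep a p-value threshold over all motif pairs and collect
    (FPR, TPR) points for an ROC curve.

    p_values -- similarity p-value for each motif pair (lower = more similar)
    labels   -- True if the pair's motifs come from the same source
    """
    pos = sum(1 for y in labels if y)          # true pairs
    neg = len(labels) - pos                    # false pairs
    points = []
    for thr in sorted(set(p_values)):          # one point per threshold
        tp = sum(1 for p, y in zip(p_values, labels) if p <= thr and y)
        fp = sum(1 for p, y in zip(p_values, labels) if p <= thr and not y)
        points.append((fp / neg, tp / pos))
    return points
```

Each (FPR, TPR) point corresponds to calling a pair "positive" at one p-value cutoff, as described above.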
The construction of the “Yeast” data set allows examining different parameters that make the task more challenging. We do so by restricting the number of binding sites or by checking whether the motif is partial or not. Using a smaller number of sites results in higher variability among motifs of the same factor, and using partial coverage means smaller overlap between compared motifs; see Figure S2. These results show that as the task becomes harder all the methods have reduced success rates: for 5% False Positive Rate (FPR), the True Positive Rates (TPR) vary from 65% (for partially overlapping motifs from samples of size 5) to 99% (for motifs with different offsets compared to the full-length motifs from samples of size 35). Nonetheless, using our score improves the retrieval rates substantially in most tasks; for example, when looking at sub-motifs with partial overlap from samples of size 35, for 5% FPR using the BLiC score leads to 80% TPR, compared to 62% with the Euclidean distance or 57% with the Pearson Correlation (see Figure 3B). For some tasks, such as comparing the motifs of different offsets to the full-length motifs, our method did not show statistically significant improvement (see Figure S2).
Comparing our two alternative priors, the Dirichlet prior versus the Dirichlet-mixture prior, our results show that the more complex prior, which better models the nucleotide distribution in binding sites, leads to better results as the number of samples decreases (see Figure S2). When the number of samples is larger, the two priors yield similar performance.
##### Motif clustering evaluation—Reducing the redundancy.
To further evaluate the accuracy of the different similarity scores we used them to cluster motifs from the two data sets. For this, we used the hierarchical agglomerative clustering algorithm described above. We then examined whether clusters consisted of motifs that are considered similar (either from the same factor in the “Yeast” data set, or the same structural family in the “Structural” data set). Examining the cluster hierarchy at different levels of granularity yields a tradeoff curve between two criteria: the True Positive Rate (TPR) of all clusters, and the number of clusters; see Figure 4. The results show that the BLiC score outperformed the other similarity scores on both the “Yeast” and the “Structural” data sets.
As in the motif comparison evaluation, we can perform the clustering evaluation on various subsets of the “Yeast” data set (see Figure 4 and Figure S3). From these results we see that in harder tasks, all methods have reduced success rates. Using our score improves the clustering rates significantly when clustering all the motifs or different subsets of motifs as described above; for example, when looking at all motifs from sample sets of size 15, using our BLiC score we reach 95% TPR with fewer than 14 clusters, while all others reach no more than 57% TPR (see Figure 4B).
#### Large-Scale DNA Motif Analyses
##### Motif analysis pipeline.
To facilitate analysis of many motifs we developed an automatic motif analysis pipeline, based on our BLiC score. This is a three-step method for processing and integrating large-scale data of newly discovered DNA motifs into coherent and reliable sets of non-redundant motifs. The inputs for this procedure are multiple groups of co-regulated DNA sequences, and the output is a set of non-redundant motifs and a ranking of their relevance for each of the input groups (Figure 5). The three steps of the pipeline include:
#### Step 1: Motif searching and filtering
We begin by applying complementary motif discovery algorithms to each group of sequences. This is done using the TAMO package [19]. Then, the newly discovered motifs undergo an initial filtration according to their abundance among the group of sequences (see Methods).
#### Step 2: Clustering and merging motifs
The integrated sets of motifs (from all input groups) are clustered and merged to create a non-redundant set. First, the discovered motifs for each group are clustered and merged separately. Then, motifs from all groups are assembled, clustered and merged. After each stage of clustering, a subset of refined motifs is automatically chosen based on the clustering tree (see Methods).
#### Step 3: Ranking and identifying motifs
Finally, the non-redundant set of motifs is ranked and filtered once again, using the abundance of the motifs in the original groups of DNA sequences (see Methods). To determine which motifs are new and which were previously characterized, we compare the motifs to a library of known DNA motifs from the literature (TRANSFAC [26], SCPD [27], YPD [28]). By this comparison we associate the motifs with transcription factors.
#### Genome-Wide Yeast Motif Library
As a real-life application of this pipeline we examined genome-wide ChIP-chip measurements in S. cerevisiae of 177 transcription factors under several environmental conditions. In total we analyzed 301 experiments for different factors and conditions [13]. We used seven motif discovery algorithms to produce a set of motifs for each ChIP-chip experiment. These motifs were clustered, filtered, ranked and compared to known motifs from the literature (as described above and in the Methods). This resulted in a concise set of DNA motifs attributed to each transcription factor under each environmental condition (all the motif sets can be found at the Supplementary Web site http://compbio.cs.huji.ac.il/BLiC).
To further analyze the resulting Yeast DNA motif library, we contrasted it with the wealth of genomic annotations in the yeast literature. To do so, we scanned each motif in the library against the promoters of yeast genes (see Methods) and created a target gene set for each motif. We then scored the enrichment of these motif gene sets against different types of gene annotations: the original ChIP-chip data [13], GO functional annotations [29], and groups of genes which are up or down regulated according to gene expression data (assembled by [30]–[32]). This allowed us to relate each motif to specific genomic annotations. To visualize these relationships we created a combined clustering of motifs and annotations using EdgeCluster, a clustering algorithm recently developed in our lab [33]. The novelty of EdgeCluster is in the integration of various sources of information into the clustering process. These information sources can be attributes of motifs (e.g., extent of enrichment in different gene sets) and pairwise information about motifs (i.e., the similarity of motif pairs). Figure S4 demonstrates the clustering of all the motifs. Clustering of a partial set of motifs is presented in Figure 6.
#### Comparison to Previous Work
In the works of Harbison et al. [13] and MacIsaac et al. [34], the same ChIP-chip data was used to construct a global transcriptional regulatory map in yeast. The motif analyses performed in these two works differ from ours in the similarity score used (the Euclidean distance) and in the different motif clustering and merging methods. In addition, the output of these two works was a single motif for each transcription factor. To be consistent with these previous works in the comparison, we narrowed down our set of motifs for each ChIP experiment to a single motif.
We first looked only at transcription factors with previously characterized motifs. Our criterion for comparison is the similarity to known motifs from the literature (TRANSFAC [26], SCPD [27], YPD [28]), measured using our BLiC score. To narrow down our motif set to a single motif for each factor we chose (as done in these previous works) the motif most similar to the known motif. In 65% of the cases our motifs have the highest similarity to the known motifs (Figure 7, Table S1). The motifs learned by the algorithms of MacIsaac et al. and Harbison et al. had the highest similarity in only 22% and 12% of the cases, respectively.
For transcription factors with no previously known binding motif in the literature, we compared the enrichment of the motifs within the ChIP-chip groups of sequences. For the comparison, we narrowed the motif sets by choosing the most significant motif for each factor and environmental condition (similarly to what was done in these previous studies). We scanned the genomic sequences and computed the enrichment of each motif (see Methods), using the same procedure and parameters for motifs from all three methods. Our motifs were found to have the highest enrichments in 80% of the cases (see Figure 7 and Table S1).
To ensure that the improvement we see is not due to differences in motif discovery methods, we repeated the analysis using the original output of the motif discovery of Harbison et al. (data not shown). This led to slight changes in the output motifs, as our original analysis used a superset of these motifs. Comparing these modified results against the results of Harbison et al. and MacIsaac et al., we see essentially the same improvement as reported above (in 62% of the cases our motifs have the highest similarity to the known motifs, and in 65% of the cases our motifs were found to have the highest enrichments).
#### Elucidating Conditional Binding of Transcription Factors
Using the motif sets we have learned, we next turned to examine the change in the binding specificities of the transcription factors under different conditions. We distinguish between two types of factors. A condition-independent factor binds the same targets in multiple conditions, while a condition-dependent factor changes its set of targets between conditions. An example of a condition-independent transcription factor in yeast is Fhl1, a master regulator of ribosomal genes, which according to the ChIP data remains bound to 75% of its targets under different conditions (see Figure S5A). This is consistent with previous work [35] and with the motif analysis, where similar motifs are related to Fhl1 in all three conditions (see Figure S5B).
A condition-dependent regulator can show a range of behaviors in response to a change in condition. It may expand and bind additional targets, it may alter and bind to a different set of targets, or it may even not bind any targets [13]. Various mechanisms may be involved in mediating condition-dependent binding. A factor may expand its targets due to a dosage change of the active transcription factor in the nucleus [13]. Alternatively, a factor may alter its targets due to several probable mechanisms (see Figure S6). One mechanism is a change in the factor's specificity to the DNA, which we can trace by identifying variations in the DNA motif (Figure S6A). Another possible mechanism is a change in the factor's binding partner, which may be detected through co-occurrence of motifs of different factors (Figure S6B). In addition, a change of targets may be caused by a change in the accessibility of the binding site, which we cannot identify by analyzing motifs (Figure S6C).
We focus here on factors that alter their targets under different conditions and try to elucidate the mechanism. We defined a transcription factor as altering its target genes between two conditions if the number of target genes in the intersection is less than half of the number in each condition separately. In addition, we considered only factors with at least 20 target genes in each of the two conditions (a sufficient number for motif discovery). Out of the 72 transcription factors for which ChIP-chip experiments were carried out in more than one condition, 50 factors alter their target genes between two conditions (in total, 112 pairs of differential conditions) (Table S2). We searched for differential motifs in the motif set of each factor at every condition. We say a motif is differential if there is a significant difference (p<0.05, chi-square test) in the fraction of ChIP targets containing the motif between the two conditions (excluding the genes in the intersection). This analysis can potentially elucidate the mechanism through which a factor changes its DNA targets, by finding different variants of motifs, or co-occurrence of motifs of different factors as explained above. In about half of these pairs we did not find statistically significant motifs in at least one of the compared conditions and thus could not search for differential motifs. Finding a motif only for one condition could be meaningful on its own, since this may indicate that in the other condition there is no direct binding of the factor to the DNA. On the other hand, it could result from technical reasons, such as noise in the input set of sequences, and thus in this work we do not analyze these cases. Out of the remaining 52 pairs (spanning 27 different transcription factors), we found differential motifs for 88% of the factors (47 cases spanning 24 factors, see Table S3) with a p-value of less than 0.05.
#### Condition-Dependent Binding of Ste12 under Conditions of Mating and Filamentous Growth
An example of a transcription factor that shows condition-dependent binding is Ste12, which activates genes in two alternative pathways—mating and filamentous growth [36],[37] (Figure 8A). Under filamentous growth signaling (Butanol induction) we found that Ste12 binds promoters enriched with its known motif [38], as well as the known recognition sequence of Tec1 [38], a co-factor that binds the DNA with Ste12 under filamentous growth [37],[39] (Figure 8B). However, under mating conditions (Alpha factor induction) we find that Ste12 binds promoters with another variant of the motif more highly enriched than the known one. This variant is a near-perfect tandem repeat of its known site, suggesting that Ste12 binds the DNA as a homodimer following Alpha factor induction [40],[41] (Figure 8B). An additional player found in our analysis is Mcm1, whose known motif [42] is enriched among promoters bound by Ste12 under both conditions. This is consistent with the role of Mcm1 inhibiting expression of mating genes in diploid cells [42]. Mcm1 may play a similar role in the filamentous growth pathway, in which haploid cells undergo invasive growth, and diploid cells undergo pseudohyphal growth. Interestingly, the exact same motifs were learned for the ChIP targets of the cofactor Dig1, under all the conditions stated above, which indicates that Dig1 does not bind the DNA directly [37]. Thus, looking at the discovered motif sets, we can reveal the regulators involved and propose a mechanism through which a transcription factor alters its targets under different conditions. Here we propose the altered binding is caused by a change in the DNA binding partner: Ste12 binds the DNA with Tec1 under filamentous growth and as a homodimer under mating conditions.
#### Condition-Dependent Binding of the Iron-Regulated Factor Aft2
Another interesting example is provided by the iron-regulated transcription factor Aft2, required for iron homeostasis and resistance to oxidative stress [43]. This factor exhibits significant environment-dependent binding, switching targets between low and high H2O2 conditions (Figure 9A). The role of Aft2 in iron homeostasis and resistance to oxidative stress is poorly understood. In low H2O2, we find that Aft2-bound promoters are highly enriched with a motif similar to the known recognition sequence of Aft2 (GgGTG) [43]. However, in high H2O2 we find abundant occurrences of a low complexity Poly-GT motif (Figure 9B). This result indicates that a possible explanation for the change in Aft2 DNA targets is a change in its DNA binding specificity over these conditions. We reach this conclusion due to the lack of the known motif or motifs of other factors in the bound targets under high H2O2, and due to the similarity of the Poly-GT to the known motif. Furthermore, the poly-GT motif under high H2O2 may suggest that Aft2 binds the DNA as a homodimer. Interestingly, the known motif of Aft1 (Rcs1) [43], a paralog of Aft2, was enriched among the Aft2-bound promoters in the low H2O2 condition. This implies a possible overlap between the targets of Aft2 and Aft1, supported by ChIP-chip data of the two factors (Figure 9B). Based on our analysis, we report two similar (but not identical) motifs for the two paralogs (as suggested by [43],[44]). Since it is known that Aft2 and Aft1 have independent and partially redundant roles in iron regulation [43],[44], this strengthens our assumption that Aft2 binding to the DNA does not depend on Aft1, but is due to a change in its specificity to the DNA. The ChIP-chip data and our motif analysis suggest that under high H2O2 conditions Aft2 has a unique role in gene regulation. Here again, by looking at the motif sets, we propose a mechanism for condition-dependent binding of a transcription factor. In this case, we propose the cause is a change in the factor's specificity to the DNA.
### Discussion
An accurate motif comparison method is important for clustering redundant DNA motifs into coherent groups and for connecting the discovered motifs to previously characterized motifs. In this study we present a novel similarity score, the BLiC score, based on Bayesian probabilistic principles. We use the new comparison method as a basis for motif clustering and retrieval procedures, and compare it to several commonly used alternatives. This comparison shows that our BLiC score improves the specificity and sensitivity of motif comparisons and clustering tasks. The resulting motif clustering and retrieval procedures are incorporated in a large-scale automated pipeline for analyzing DNA motifs, which integrates the output of various DNA motif discovery algorithms and automatically merges redundant motifs from multiple training sets. The output of our pipeline is a coherent annotated library of motifs. Application of this pipeline to genome-wide location data of transcription factors in S. cerevisiae, successfully identified DNA motifs in a manner that is as good as semi-automated analyses reported in the literature. Moreover, we demonstrate how motif analysis can lead to insights into regulatory mechanisms.
#### Hierarchical Agglomerative Clustering
We used our BLiC score to develop a hierarchical agglomerative clustering algorithm for merging similar motifs, in which we ensure that the motifs within every sub-tree are properly aligned. Furthermore, such an approach allows us to trim the cluster tree at various levels, thus merging motifs at different resolutions. In our method a new agglomerative node results from aligning and merging the motifs of its descendant nodes, and then computing the similarity of this new motif to all other nodes. As a consequence, the hierarchical progression ensures that each sub-tree is coherent. This is in contrast to many clustering methods, such as k-means and typical hierarchical clustering [45], which find a set of motifs that are all similar to each other, but are not necessarily coherent in the sense that they cannot all be aligned.
#### Motif Analysis
Our motif analysis pipeline is designed to process discovered DNA motifs into a set of non-redundant motifs and compare these with known motifs. As we have shown, our approach improves the sensitivity and specificity in the analysis of the outputs of standard motif discovery methods. By automating all the steps, we enable the analysis of hundreds of input groups. In addition, we achieve a wide view on transcription regulation by running several motif discovery algorithms in parallel, and integrating their outputs. By comparing motifs from different input groups we are able to connect between transcription factors that play a role in different processes. Our analysis does not focus on finding the “best” single motif for each input group (e.g., targets of ChIP-chip assay), but rather we find a set of non-redundant motifs and their relations (enrichment) to each input group. This output better captures the complexity of the underlying regulatory program. For example, in many cases we find motifs of co-factors (e.g., Ste12 and Tec1). In other cases we see that a factor changes its binding specificity under different conditions (e.g., Aft2). For these cases, several DNA motifs better capture the DNA binding preferences of the transcription factor than a single motif.
#### Relations to Previous Work
There are several different approaches attempting to quantify similarities between DNA motifs. Two previous works [21],[22] showed that using p-values when comparing motifs is more accurate than using the raw similarity scores. Specifically, Gupta et al. [21] compared seven motif-motif position similarity functions, including the Pearson Correlation coefficient (e.g., [11],[46]), the average log-likelihood ratio (ALLR) [16], the Kullback-Leibler divergence [47]–[49], and the Euclidean distance (ED) [13],[50]. They found that the Euclidean distance is slightly better than the alternatives they considered. The data set used by Gupta et al. has a design similar to that of our data set, but it is based on the TRANSFAC database [26]. Not surprisingly, our results are consistent with theirs. Here we also use p-values to calibrate similarity scores, and show that our score is more accurate than the Euclidean distance, the second-best score.
Several resources are available for DNA motif analysis. There are many open access motif discovery tools available (e.g., [2],[3],[11]) and motif comparison tools [11],[21],[51]. In addition there are several available tools that integrate multiple motif discovery tools, and supply additional tools for filtering, comparison and ranking motifs [19],[49],[52]. In our motif analysis pipeline we use the TAMO package [19], for motif discovery and filtering, with a different genomic scan approach using statistical tools [53]. The main difference is that for the motif comparison and clustering we use our new BLiC score and a hierarchical agglomerative clustering (as discussed above).
#### From DNA Motifs to Regulatory Mechanisms
Sequence information is a highly accessible resource, and thus it is interesting to ask whether it can help elucidate mechanisms of transcription regulation. We examined transcription factors that alter their targets in response to an environmental change, and found a differential motif in 88% of these cases (24/27 factors). These differential motifs can suggest the potential mechanism through which the factor changes its targets. We show that motifs provide an indication for potential mechanisms when the factor changes its binding partner (Figure S6B) or its specificity to the DNA (Figure S6A), as we discussed thoroughly for the cases of Ste12 and Aft2. Nevertheless, motif analysis obviously does not reveal the whole regulatory picture. For example, chromatin-remodeling mediated regulation cannot be inferred from motif analysis (Figure S6C). Thus, for a complete understanding of the regulatory mechanisms additional information is needed.
A significant limitation of motif analysis in general, is the discrepancy between putative binding sites and actual functional binding events. This raises the question addressed frequently before [10],[54], whether our representation of transcription factor binding preferences is sufficiently accurate.
In this study we overcome a basic obstacle in DNA motif analysis, by developing an accurate motif comparison method. Our motif analysis pipeline, which includes clustering and retrieval procedures based on our novel score, is fully automated and produces accurate results. This is highly important in large-scale analysis, such as the one reported here. We showed the power of motif analyses, which is useful not only for building regulatory maps, but also for understanding more profoundly regulatory mechanisms.
### Methods
#### Motif Representation
We use a Position Frequency Matrix (PFM) representation for a DNA motif. This is an n×4 matrix, where cell (i, j) contains the count of nucleotide j at position i of the motif.
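For illustration, a PFM can be built from a set of aligned, equal-length binding sites as follows (a sketch; the A, C, G, T column order is our assumption):

```python
NUCS = "ACGT"  # assumed column order: A, C, G, T

def pfm_from_sites(sites):
    """Build an n x 4 Position Frequency Matrix of nucleotide counts
    from equal-length binding sites."""
    n = len(sites[0])
    pfm = [[0, 0, 0, 0] for _ in range(n)]
    for site in sites:
        for i, base in enumerate(site):
            pfm[i][NUCS.index(base)] += 1
    return pfm
```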
#### Scores
We define the similarity score for two aligned PFMs. Due to the positional independence assumption in PFMs, the score decomposes into a sum of scores over corresponding positions. Our score is composed of two components: the first measures whether the two motifs were generated from a common distribution; the second reflects the distance of that common distribution from the background. Thus, for aligned positions n1 and n2, our score is as described in Equation 2. Statistically, the score sums the log-likelihood ratios of two pairs of hypotheses.
The first component:
H0: The two samples were drawn from a common source distribution.
H1: The two samples were drawn independently from different source distributions.
The second component:
H0: The two samples were drawn from a common source distribution that is distinct from the background.
H1: The two samples were drawn from the background distribution.
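Since Equation 2 itself is not reproduced in this text, the score for a pair of aligned positions can be sketched as the sum of the two log-likelihood ratios above (our reconstruction, not the paper's exact notation; here $\hat{\theta}_c$ is the estimated common source distribution, $\hat{\theta}_1$ and $\hat{\theta}_2$ the separate source distributions, and $\theta_{bg}$ the background):

```latex
\mathrm{score}(n_1, n_2) =
\log \frac{P(n_1, n_2 \mid \hat{\theta}_c)}
          {P(n_1 \mid \hat{\theta}_1)\, P(n_2 \mid \hat{\theta}_2)}
+ \log \frac{P(n_1, n_2 \mid \hat{\theta}_c)}
            {P(n_1, n_2 \mid \theta_{bg})}
```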
#### Estimation
We estimate the source distributions from the PFM using a Bayesian approach, with a Dirichlet prior. The Dirichlet prior is specified by a set of hyper-parameters α = (α1, α2,…, αn) and has the form:
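The display equation is not reproduced in this text; the prior referred to is the standard Dirichlet density:

```latex
P(\theta \mid \alpha) =
\frac{\Gamma\!\left(\sum_{i} \alpha_i\right)}{\prod_{i} \Gamma(\alpha_i)}
\prod_{i} \theta_i^{\alpha_i - 1}
```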
Where Γ(x) is the Gamma function. We use two prior variants: The first is a standard Dirichlet prior [20], with hyper-parameters of (1,1,1,1). When using this prior, the estimated distribution for position n is:
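The estimate referred to here is the standard posterior mean under a Dirichlet prior (our reconstruction, with $c_{n,j}$ the count of nucleotide $j$ at position $n$):

```latex
\hat{\theta}_{n,j} = \frac{c_{n,j} + \alpha_j}{\sum_{i} \left( c_{n,i} + \alpha_i \right)}
```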
where α is the vector of hyper-parameters.
The second prior we use is a five-component Dirichlet-mixture prior [17]. We merge five Dirichlet priors using uniform weights. Four of the components give high probability to a single DNA nucleotide: A, C, G, or T. The fifth component represents the uniform distribution. We use the hyper-parameters (5,1,1,1) for A, (1,5,1,1) for C, and so on. For the fifth component we use the hyper-parameters (2,2,2,2). Using this, the estimated distribution for position n is:
This is a weighted average, where the weights are the posterior probabilities of each component given the observed counts, computed via Bayes' rule from the marginal likelihood of the counts under each Dirichlet component.
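Both estimators follow from standard Bayesian identities and can be sketched in a few lines of Python (not the paper's code; function names are ours):

```python
from math import lgamma, exp

def dirichlet_posterior_mean(counts, alpha):
    """Posterior mean of the nucleotide distribution under a Dirichlet prior."""
    total = sum(counts) + sum(alpha)
    return [(c + a) / total for c, a in zip(counts, alpha)]

def log_marginal(counts, alpha):
    """log P(counts | alpha) under the Dirichlet-compound model
    (multinomial coefficient omitted; it cancels across mixture components)."""
    val = lgamma(sum(alpha)) - lgamma(sum(counts) + sum(alpha))
    for c, a in zip(counts, alpha):
        val += lgamma(c + a) - lgamma(a)
    return val

def mixture_posterior_mean(counts, components):
    """Weighted average of per-component posterior means; the weights are
    the posterior probabilities of each component given the counts
    (Bayes' rule, uniform component weights as in the text above)."""
    logs = [log_marginal(counts, a) for a in components]
    m = max(logs)
    post = [exp(l - m) for l in logs]      # unnormalized posteriors
    z = sum(post)
    post = [p / z for p in post]
    est = [0.0] * len(counts)
    for w, a in zip(post, components):
        pm = dirichlet_posterior_mean(counts, a)
        est = [e + w * p for e, p in zip(est, pm)]
    return est
```

With strongly skewed counts, the mixture estimate is pulled toward the component favoring the dominant nucleotide.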
#### Clustering and Comparing Motifs
##### Comparing motifs.
The similarity score of two PFMs is the score of the best possible alignment (without gaps) between them, including the reverse complement alignment. The unaligned flanks of the motif are scored according to their distance from the background distribution multiplied by a relaxing factor of 0.2.
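The alignment search described here can be sketched as follows (hypothetical function names; `column_score` and `flank_score` stand in for the per-position similarity and background-distance terms, and the 0.2 flank factor follows the text above):

```python
def revcomp_pfm(pfm):
    """Reverse-complement a PFM, assuming column order A, C, G, T."""
    return [row[::-1] for row in pfm[::-1]]

def best_alignment_score(pfm1, pfm2, column_score, flank_score):
    """Best ungapped alignment of two PFMs over all offsets, in both the
    forward and reverse-complement orientations. Aligned columns are
    scored by column_score; unaligned flank columns by 0.2 * flank_score."""
    best = float("-inf")
    for cand in (pfm2, revcomp_pfm(pfm2)):
        for offset in range(-(len(cand) - 1), len(pfm1)):
            total = 0.0
            for i, col in enumerate(pfm1):
                j = i - offset
                if 0 <= j < len(cand):
                    total += column_score(col, cand[j])  # aligned pair
                else:
                    total += 0.2 * flank_score(col)      # unaligned flank
            for j, col in enumerate(cand):
                if not (0 <= j + offset < len(pfm1)):
                    total += 0.2 * flank_score(col)      # flank of the other motif
            best = max(best, total)
    return best
```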
##### Assigning p-values to motif similarity scores.
We devised an empirical p-value estimation procedure for motif similarity scores. For each motif, we computed the score distribution against motifs of all possible lengths, by comparison to 1000 random motifs of a specified length. Since the BLiC score distribution depends on the specificity of each motif, the distribution is computed for each motif separately to retain the overall characteristics of the motif. The random motifs were generated by sampling positions of motifs from the TRANSFAC database [26]. The p-value of the similarity of a given DNA motif to another, is calculated empirically from the score distribution of the first motif against random motifs of the same length as the second motif (calculating the fraction of random motifs that got the same score or higher).
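The empirical p-value computation reduces to a rank statistic (a sketch with hypothetical names):

```python
def empirical_p_value(score, random_scores):
    """Empirical p-value of a similarity score: the fraction of scores
    against random motifs (of the target motif's length) that are at
    least as high as the observed score."""
    hits = sum(1 for s in random_scores if s >= score)
    return hits / len(random_scores)
```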
#### Clustering and Trimming the Tree
To cluster motifs, we implemented a hierarchical agglomerative clustering algorithm, using various motif comparison scores. In each iteration, the algorithm computes the similarity between all pairs of motifs and then merges the pair with the highest similarity score into a new motif (see Figure 4A). This merge includes aligning the motifs according to the best scoring alignment between them, and then combining the evidence from both of them by summing their nucleotide counts at each position (i.e., the motifs are weighted according to their number of samples). These iterations are repeated until we are left with a single motif. The order of merge operations results in a tree, where the leaves represent the initial motifs, and each inner node represents the merge of all the original motifs in the sub-tree below it.
The clustering tree is used to distill the input set into a non-redundant group, by splitting the tree into clusters representing groups of redundant motifs. To obtain this non-redundant set, which covers the initial set, we choose a frontier in the clustering tree. A frontier in a tree is a subset of nodes, none of which is a descendant of another, such that every leaf in the tree is a descendant of one of them. The frontier is chosen by a bottom-up traversal over the tree. Specifically, we consider every two motifs that were merged into one in the tree. We want to identify situations where this merge resulted in a motif that is very different from each of the two motifs that were merged. To test that, we compare the degree of similarity between the two motifs to the maximal score we could have attained (the maximum of the similarity of each one to itself). If the ratio of the observed score to this maximum is less than a preset threshold, the two motifs are added to the frontier. In the motif analysis pipeline, we use a stringent threshold of 60% of the maximum for creating non-redundant motifs (chosen according to hand-curated splits of 10 trees).
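The 60%-of-maximum splitting rule can be sketched as a simple predicate applied during the bottom-up traversal (hypothetical names):

```python
def keep_merge(pair_score, self_score_a, self_score_b, threshold=0.6):
    """Coherence test for one merge in the clustering tree: compare the
    similarity of the two merged motifs to the maximal attainable score
    (the larger of the two self-similarities). Below the threshold, the
    merge is rejected and both children join the frontier."""
    max_attainable = max(self_score_a, self_score_b)
    return pair_score / max_attainable >= threshold
```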
#### Motif Analysis
##### Motif discovery algorithms.
In the analysis pipeline we applied several motif discovery algorithms: MDScan [2], AlignACE [11], and MEME [3] were used through the TAMO package [19], with the default parameters (apart from the MEME algorithm, for which we changed the parameters to output six motifs). We also included conserved and abundant motifs in the yeast genome [55], and the output of MEME_c [13], Converge [13] and the SeedSearcher motif discovery algorithm [56]. The discovered motifs underwent an initial filtration according to their enrichment among the initial group of sequences (p-value threshold of 10−5, calculated using the TAMO package [19]). All motifs are converted to a PFM representation.
##### Clustering motifs.
In the second step of the pipeline we cluster the motifs—first we clustered the motifs discovered for each transcription factor under each environmental condition separately, then the (merged) motifs for each factor under all conditions, and finally the entire set of motifs. The motifs are clustered and merged as described above.
##### Truncating motifs.
Uninformative positions at the two edges of motifs were truncated automatically. This was done by a chi-square test (threshold of 0.05), testing if the nucleotides at a motif position distribute according to the background.
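The edge-truncation test can be sketched as follows (a sketch; we hard-code the chi-square critical value for 3 degrees of freedom at p = 0.05 rather than calling a statistics library):

```python
def chi2_stat(counts, background=(0.25, 0.25, 0.25, 0.25)):
    """Chi-square statistic of observed nucleotide counts at one motif
    position against the background distribution."""
    n = sum(counts)
    return sum((c - n * b) ** 2 / (n * b) for c, b in zip(counts, background))

CHI2_CRIT_DF3_P05 = 7.815  # critical value for df = 3 at p = 0.05

def truncate_uninformative(pfm, background=(0.25, 0.25, 0.25, 0.25)):
    """Trim positions at both edges of the PFM whose counts are consistent
    with the background (chi-square test, p > 0.05), as described above."""
    informative = [chi2_stat(col, background) > CHI2_CRIT_DF3_P05 for col in pfm]
    if not any(informative):
        return []
    lo = informative.index(True)
    hi = len(pfm) - 1 - informative[::-1].index(True)
    return pfm[lo:hi + 1]
```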
##### Identifying the motifs.
To connect the discovered motifs with transcription factors, we compared the motifs against a set of known motifs (TRANSFAC [26], SCPD [27], YPD [28]).
##### Ranking motifs.
In the third step of the pipeline, we rank and filter the merged motifs according to their enrichment (−log hyper-geometric p-value) in the input groups of DNA sequences. For filtering we use a threshold of 3 after applying a Bonferroni correction for multiple hypotheses. For this, we find the occurrences of each motif using a statistical tool for genomic scanning, the TestMotif program [53]. To scan the genome with our motifs, we converted them from PFMs (count matrices) to profiles (frequencies), using the estimation with the Dirichlet-mixture prior described above. After scanning with the TestMotif program [53], we incorporate evolutionary conservation data to find the occurrences of motifs. Specifically, we decide that a DNA sequence contains a motif if one of the two following criteria holds:
• The sequence contains a highly statistically significant binding site, using a p-value threshold of 0.03 after Bonferroni correction for multiple hypotheses according to the average length of the scanned sequences (a good sequence match between the motif and the binding site).
• A less statistically significant occurrence of the motif (threshold of 0.1) that is highly conserved among seven species of the genus Saccharomyces (average conservation of the motif of at least 0.6, according to the phastCons conservation track [57], obtained through the UCSC Genome Browser Database [58]).
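The two criteria above can be sketched as a single decision rule; the thresholds come from the text, while the function name and argument layout are illustrative:

```python
def has_motif_occurrence(site_pvalue, avg_conservation,
                         strict_p=0.03, lax_p=0.1, min_conservation=0.6):
    """Return True if a sequence is considered to contain the motif.

    Implements the two criteria above: a highly significant binding site
    on its own, or a moderately significant site that is well conserved
    across the Saccharomyces species.
    """
    strong_match = site_pvalue < strict_p
    conserved_match = site_pvalue < lax_p and avg_conservation >= min_conservation
    return strong_match or conserved_match

print(has_motif_occurrence(0.01, 0.0))  # True: sequence match alone suffices
print(has_motif_occurrence(0.08, 0.7))  # True: weaker match rescued by conservation
print(has_motif_occurrence(0.08, 0.3))  # False: weak match, poorly conserved
```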
##### Parameter tuning.
The threshold values listed above were chosen according to an extensive search of parameters that maximize the true positive rate, allowing up to 2% false positive calls. This optimization was based on location analysis data of Gcn4 [13], and location and expression data for Sko1 (unpublished data).
### Supporting Information
Figure S1.
Distinguishing between informative and non-informative positions: Two pairs of aligned motifs are presented (as sequence logos). This is an alignment of the known motif for the invertebrate factor Dfd versus two variants of the vertebrate factor Pax4, all taken from TRANSFAC [26] (matrix accessions I$DFD_01, V$PAX4_02, and V$PAX4_04, all from version 8.3). While it is clear that the first pair (left) should get a lower similarity score than the second pair (right), scoring the two pairs of aligned motifs using the Jensen-Shannon divergence yields a higher score for the first pair. The desired similarity score should distinguish between high similarity at informative positions and at non-informative positions.
doi:10.1371/journal.pcbi.1000010.s001
(0.73 MB TIF)
Figure S2.
Evaluation of motif comparison. Using different subsets of motifs out of the “Yeast” data set, we compare our BLiC score (green, using a Dirichlet prior, and blue, using a Dirichlet-mixture prior) with other similarity scores: Jensen-Shannon divergence (red), Euclidean distance (purple) and Pearson Correlation coefficient (cyan). Each of the nine panels represents a different comparison. The columns correspond to the number of samples used for constructing the motifs. The rows correspond to different choices of query sets and target sets for comparison (illustrated by the logos on the right): In the top row all motifs of partial offsets are queries against the same set. In the middle row, all motifs, including full-length motifs and partial offsets are compared against themselves. In the bottom row, we use partial offset motifs as queries and full motifs as targets. In each panel we plot True Positive Rate (y-axis) vs. False Positive Rate (x-axis) as in Figure 3B.
doi:10.1371/journal.pcbi.1000010.s002
(1.67 MB TIF)
Figure S3.
Evaluation of motif clustering. Using different subsets of motifs out of the “Yeast” data set, we compare our BLiC score (green, using a Dirichlet prior, and blue, using a Dirichlet-mixture prior) with other similarity scores: Jensen-Shannon divergence (red), Euclidean distance (purple) and Pearson Correlation coefficient (cyan). Each of the nine panels represents the average performance of 9 repeats of clustering over different motif sets. The columns correspond to the number of samples used for constructing the motifs. The rows correspond to different choices of motif sets (illustrated by the logos on the right): In the top row we cluster all motifs of partial offsets. In the middle row, we cluster all motifs, including full-length motifs and partial offsets. In the bottom row, we cluster only full motifs. In each panel we plot True Positive Rate (y-axis) vs. number of clusters (x-axis) as in Figure 4B.
doi:10.1371/journal.pcbi.1000010.s003
(1.88 MB TIF)
Figure S4.
Overview of the discovered motifs. Investigation of the properties of discovered motifs. Each motif (column) is compared to other motifs using the BLiC score (rows, top group), to the average expression of its targets in different experiments [30]–[32] (second group), to the enrichment of its targets in GO annotations [29] (third group), and in ChIP-chip location assays [13] (bottom group). The rows and columns were clustered using the EdgeCluster [33] algorithm, which integrates various sources of information into the clustering process. These information sources are attributes of motifs and pairwise information about motifs. The result is clusters of motifs that have not only similar attributes, as in a regular clustering algorithm, but also similar relations to motifs in other clusters.
doi:10.1371/journal.pcbi.1000010.s004
(8.43 MB TIF)
Figure S5.
Condition independent binding of Fhl1. (A) A Venn diagram representing the results of the ChIP-chip experiment [13] for the transcription factor Fhl1 under YPD conditions, amino-acid starvation and nutrient deprived conditions. The targets of Fhl1 do not change under these three environments. (B) Under all conditions the same motif is found to be highly enriched.
doi:10.1371/journal.pcbi.1000010.s005
(2.70 MB TIF)
Figure S6.
Possible mechanisms for condition-dependent binding of TFs. Motif analysis for condition-dependent transcription factors that bind different targets under different conditions. Here, three possible mechanisms that may be involved in monitoring condition-dependent binding, which lead to altered targets, are presented schematically. For each mechanism we show the scheme of the promoter organization of the target genes (above the dashed line) and the result of motif discovery (under the dashed line). (A) The first mechanism is through a change in the cofactor. This may be detected through co-occurrence of motifs of different factors. (B) The second mechanism is through a change in the specificity to the DNA. This change can be traced by identifying variations in the DNA motif. (C) The third mechanism is a change in the chromatin state. This change cannot be traced using motif analysis.
doi:10.1371/journal.pcbi.1000010.s006
(0.98 MB TIF)
Table S1.
Comparison of the Yeast motif library to previous works.
doi:10.1371/journal.pcbi.1000010.s007
(0.02 MB XLS)
Table S2.
Transcription factors with condition-dependent binding which alter their targets.
doi:10.1371/journal.pcbi.1000010.s008
(0.02 MB XLS)
Table S3.
List of differential motifs.
doi:10.1371/journal.pcbi.1000010.s009
(0.04 MB XLS)
### Acknowledgments
We thank Aviv Regev, Andrew Capaldi, and the members of Nir Friedman's and Hanah Margalit's labs for useful discussions related to this work. We thank Yoseph Barash for his help with the SeedSearcher motif discovery algorithm, Kenzie MacIsaac for help with the TAMO package, and Takis Benos for providing us with the data set of Mahoney et al. [24].
### Author Contributions
Conceived and designed the experiments: NH TK HM NF. Performed the experiments: NH. Analyzed the data: NH TK. Wrote the paper: NH TK HM NF.
### References
1. 1. Stormo G (2000) DNA binding sites: representation and discovery. Bioinformatics 16: 16–23.
2. 2. Liu X, Brutlag D, Liu J (2002) An algorithm for finding protein-DNA binding sites with applications to chromatin-immunoprecipitation microarray experiments. Nat Biotechnol 20: 835–839.
3. 3. Bailey T, Elkan C (1995) The value of prior knowledge in discovering motifs with MEME. Proc Int Conf Intell Syst Mol Biol 3: 21–29.
4. 4. Kaplan T, Friedman N, Margalit H (2005) Ab initio prediction of transcription factor targets using structural knowledge. PLoS Comput Biol 1: e1.
5. 5. Morozov A, Havranek J, Baker D, Siggia E (2005) Protein-DNA binding specificity predictions with structural models. Nucleic Acids Res 33: 5781–5798.
6. 6. MacIsaac K, Fraenkel E (2006) Practical strategies for discovering regulatory DNA sequence motifs. PLoS Comput Biol 2: e36.
7. 7. Osada R, Zaslavsky E, Singh M (2004) Comparative analysis of methods for representing and searching for transcription factor binding sites. Bioinformatics 20: 3516–3525.
8. 8. Day W, McMorris F (1992) Critical comparison of consensus methods for molecular sequences. Nucleic Acids Res 20: 1093–1099.
9. 9. Benos P, Bulyk M, Stormo G (2002) Additivity in protein-DNA interactions: how good an approximation is it? Nucleic Acids Res 30: 4442–4451.
10. 10. Barash Y, Elidan G, Kaplan T, Friedman N (2003) Modeling Dependencies in Protein-DNA Binding Sites. Proc of the 7th Ann Int Conf in Comp Mol Bio (RECOMB).
11. 11. Hughes J, Estep P, Tavazoie S, Church G (2000) Computational identification of cis-regulatory elements associated with groups of functionally related genes in Saccharomyces cerevisiae. J Mol Biol 296: 1205–1214.
12. 12. Xie X, Lu J, Kulbokas E, Golub T, Mootha V, Lindblad-Toh K, et al. (2005) Systematic discovery of regulatory motifs in human promoters and 3′ UTRs by comparison of several mammals. Nature 434: 338–345.
13. 13. Harbison C, Gordon D, Lee T, Rinaldi N, Macisaac K, et al. (2004) Transcriptional regulatory code of a eukaryotic genome. Nature 431: 99–104.
14. 14. Lin J (1991) Divergence measures based on the Shannon entropy. IEEE Trans Inf Theory 37: 145–151.
15. 15. Yona G, Levitt M (2002) Within the twilight zone: a sensitive profile-profile comparison tool based on information theory. J Mol Biol 315: 1257–1275.
16. 16. Wang T, Stormo GD (2003) Combining phylogenetic data with co-regulated genes to identify regulatory motifs. Bioinformatics 19: 2369–2380.
17. 17. Sjolander K, Karplus K, Brown M, Hughey R, Krogh A, Mian I, et al. (1996) Dirichlet mixtures: a method for improved detection of weak but significant protein sequence homology. Comput Appl Biosci 12: 327–345.
18. 18. Xing EP, Jordan MI, Karp RM, Russell R (2002) A Hierarchical Bayesian Markovian Model for Motifs in Biopolymer Sequences. NIPS 15. MIT Press.
19. 19. Gordon D, Nekludova L, McCallum S, Fraenkel E (2005) TAMO: a flexible, object-oriented framework for analyzing transcriptional regulation using DNA-sequence motifs. Bioinformatics 21: 3164–3165.
20. 20. DeGroot M (1970) Optimal Statistical Decisions. New York: McGraw-Hill.
21. 21. Gupta S, Stamatoyannopoulos JA, Bailey TL, Noble WS (2007) Quantifying similarity between motifs. Genome Biol 8: R24.
22. 22. Bailey TL, Gribskov M (1998) Methods and statistics for combining motif match scores. J Comput Biol 5: 211–221.
23. 23. Jain AK, Murty MN, Flynn PJ (1999) Data clustering: a review. J ACM Comput Surv 31: 264–323.
24. 24. Mahony S, Auron PE, Benos PV (2007) DNA familial binding profiles made easy: comparison of various motif alignment and clustering strategies. PLoS Comput Biol 3: e61.
25. 25. Sandelin A, Alkema W, Engstrom P, Wasserman WW, Lenhard B (2004) JASPAR: an open-access database for eukaryotic transcription factor binding profiles. Nucleic Acids Res 32: D91–D94.
26. 26. Matys V, Fricke E, Geffers R, Gossling E, Haubrock M, et al. (2003) TRANSFAC: transcriptional regulation, from patterns to profiles. Nucleic Acids Res 31: 374–378.
27. 27. Zhu J, Zhang M (1999) SCPD: a promoter database of the yeast Saccharomyces cerevisiae. Bioinformatics 15: 607–611.
28. 28. Csank C, Costanzo M, Hirschman J, Hodges P, Kranz J, et al. (2002) Three yeast proteome databases: YPD, PombePD, and CalPD (MycoPathPD). Methods Enzymol 350: 347–373.
29. 29. Harris M, Clark J, Ireland A, Lomax J, Ashburner M, et al. (2004) The Gene Ontology (GO) database and informatics resource. Nucleic Acids Res 32: D258–D261.
30. 30. DeRisi JL, Iyer VR, Brown PO (1997) Exploring the metabolic and genetic control of gene expression on a genomic scale. Science 278: 680–686.
31. 31. Gasch AP, Spellman PT, Kao CM, Carmel-Harel O, Eisen MB, et al. (2000) Genomic expression programs in the response of yeast cells to environmental changes. Mol Biol Cell 11: 4241–4257.
32. 32. Spellman PT, Sherlock G, Zhang MQ, Iyer VR, Anders K, et al. (1998) Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol Biol Cell 9: 3273–3297.
33. 33. Friedman N, Jaimovich A (2008) EdgeCluster: Probabilistic Agglomerative Clustering of Genes with Relational Observations. School of Computer Science and Engineering, The Hebrew University TR-2008-2.
34. 34. MacIsaac K, Wang T, Gordon D, Gifford D, Stormo G, et al. (2006) An improved map of conserved regulatory sites for Saccharomyces cerevisiae. BMC Bioinformatics 7: 113.
35. 35. Martin D, Soulard A, Hall M (2004) TOR regulates ribosomal protein gene expression via PKA and the Forkhead transcription factor FHL1. Cell 119: 969–979.
36. 36. Zeitlinger J, Simon I, Harbison C, Hannett N, Volkert T, et al. (2003) Program-specific distribution of a transcription factor dependent on partner transcription factor and MAPK signaling. Cell 113: 395–404.
37. 37. Chou S, Lane S, Liu H (2006) Regulation of mating and filamentation genes by two distinct Ste12 complexes in Saccharomyces cerevisiae. Mol Cell Biol 26: 4794–4805.
38. 38. Madhani H, Fink G (1997) Combinatorial control required for the specificity of yeast MAPK signaling. Science 275: 1314–1317.
39. 39. Chou S, Huang L, Liu H (2004) Fus3-regulated Tec1 degradation through SCFCdc4 determines MAPK signaling specificity during mating in yeast. Cell 119: 981–990.
40. 40. Schaber J, Kofahl B, Kowald A, Klipp E (2006) A modelling approach to quantify dynamic crosstalk between the pheromone and the starvation pathway in baker's yeast. FEBS J 273: 3520–3533.
41. 41. Wang Y, Dohlman H (2006) Pheromone-regulated sumoylation of transcription factors that mediate the invasive to mating developmental switch in yeast. J Biol Chem 281: 1964–1969.
42. 42. Gelli A (2002) Rst1 and Rst2 are required for the a/alpha diploid cell type in yeast. Mol Microbiol 46: 845–854.
43. 43. Courel M, Lallet S, Camadro J, Blaiseau P (2005) Direct activation of genes involved in intracellular iron use by the yeast iron-responsive transcription factor Aft2 without its paralog Aft1. Mol Cell Biol 25: 6760–6771.
44. 44. Rutherford J, Jaron S, Ray E, Brown P, Winge D (2001) A second iron-regulatory system in yeast independent of Aft1p. Proc Natl Acad Sci U S A 98: 14322–14327.
45. 45. Eisen MB, Spellman PT, Brown PO, Botstein D (1998) Cluster analysis and display of genome-wide expression patterns. Proc Natl Acad Sci U S A 95: 14863–14868.
46. 46. Pietrokovski S (1996) Searching databases of conserved sequence regions by aligning protein multiple-alignments. Nucleic Acids Res 24: 3836–3845.
47. 47. Roepcke S, Grossmann S, Rahmann S, Vingron M (2005) T-Reg Comparator: an analysis tool for the comparison of position weight matrices. Nucleic Acids Res 33: W438–W441.
48. 48. Thijs G, Marchal K, Lescot M, Rombauts S, Moor B, et al. (2002) A Gibbs sampling method to detect overrepresented motifs in the upstream regions of coexpressed genes. J Comput Biol 9: 447–464.
49. 49. Aerts S, Thijs G, Coessens B, Staes M, Moreau Y, et al. (2003) Toucan: deciphering the cis-regulatory logic of coregulated genes. Nucleic Acids Res 31: 1753–1764.
50. 50. Choi IG, Kwon J, Kim SH (2004) Local feature frequency profile: a method to measure structural similarity in proteins. Proc Natl Acad Sci U S A 101: 3797–3802.
51. 51. Mahony S, Benos PV (2007) STAMP: a web tool for exploring DNA-binding motif similarities. Nucleic Acids Res 35: W253–W258.
52. 52. Che D, Jensen S, Cai L, Liu JS (2005) BEST: binding-site estimation suite of tools. Bioinformatics 21: 2909–2911.
53. 53. Barash Y, Elidan G, Kaplan T, Friedman N (2005) CIS: compound importance sampling method for protein-DNA binding site p-value estimation. Bioinformatics 21: 596–600.
54. 54. Bulyk M, Johnson P, Church G (2002) Nucleotides of transcription factor binding sites exert interdependent effects on the binding affinities of transcription factors. Nucleic Acids Res 30: 1255–1261.
55. 55. Kellis M, Patterson N, Endrizzi M, Birren B, Lander E (2003) Sequencing and comparison of yeast species to identify genes and regulatory elements. Nature 423: 241–254.
56. 56. Barash Y (2005) Unified Models for Regulatory Mechanisms. PhD thesis. Jerusalem: Hebrew University.
57. 57. Siepel A, Bejerano G, Pedersen J, Hinrichs A, Hou M, et al. (2005) Evolutionarily conserved elements in vertebrate, insect, worm, and yeast genomes. Genome Res 15: 1034–1050.
58. 58. Karolchik D, Baertsch R, Diekhans M, Furey T, Hinrichs A, et al. (2003) The UCSC Genome Browser Database. Nucleic Acids Res 31: 51–54.
# How do you find the x and y intercept of y-3/5x-12?
May 28, 2017
#### Answer:
Assumption: The expression should be an equation and of the form
$y = \frac{3}{5} x - 12$
y-intercept$\to \left(x , y\right) = \left(0 , - 12\right)$
x-intercept$\to \left(x , y\right) = \left(20 , 0\right)$
#### Explanation:
y-intercept is the same as the constant $\to y = - 12$
That is because the y-intercept is at $x = 0$
$y = \frac{3}{5} \left(0\right) - 12 \text{ "=" } - 12$
y-intercept$\to \left(x , y\right) = \left(0 , - 12\right)$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
x-intercept is at $y = 0$
$\implies y = 0 = \frac{3}{5} x - 12$
Add 12 to both sides
$12 = \frac{3}{5} x$
Multiply both sides by 5/3
$\frac{5}{3} \times 12 = \frac{3}{5} \times \frac{5}{3} \times x$
$5 \times \frac{12}{3} = 1 \times 1 \times x$
$20 = x$
x-intercept$\to \left(x , y\right) = \left(20 , 0\right)$
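Both intercepts can be checked with a few lines of exact rational arithmetic (a verification sketch, not part of the original working):

```python
from fractions import Fraction

def line(x):
    # y = (3/5)x - 12, computed with exact rationals to avoid rounding
    return Fraction(3, 5) * x - 12

# y-intercept: set x = 0
print(line(0))  # -12

# x-intercept: solve 0 = (3/5)x - 12, so x = 12 * (5/3)
x_int = Fraction(12) * Fraction(5, 3)
print(x_int, line(x_int))  # 20 0
```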
# Rubin causal model
(Redirected from Rubin Causal Model)
The Rubin causal model (RCM), also known as the Neyman–Rubin causal model,[1] is an approach to the statistical analysis of cause and effect based on the framework of potential outcomes, named after Donald Rubin. The name "Rubin causal model" was first coined by Paul W. Holland.[2] The potential outcomes framework was first proposed by Jerzy Neyman in his 1923 Master's thesis,[3] though he discussed it only in the context of completely randomized experiments.[4] Rubin extended it into a general framework for thinking about causation in both observational and experimental studies.[1]
## Introduction
The Rubin causal model is based on the idea of potential outcomes. For example, a person would have a particular income at age 40 if they had attended college, whereas they would have a different income at age 40 if they had not attended college. To measure the causal effect of going to college for this person, we need to compare the outcome for the same individual in both alternative futures. Since it is impossible to see both potential outcomes at once, one of the potential outcomes is always missing. This dilemma is the "fundamental problem of causal inference".
Because of the fundamental problem of causal inference, unit-level causal effects cannot be directly observed. However, randomized experiments allow for the estimation of population-level causal effects.[5] A randomized experiment assigns people randomly to treatments: college or no college. Because of this random assignment, the groups are (on average) equivalent, and the difference in income at age 40 can be attributed to the college assignment since that was the only difference between the groups. An estimate of the average causal effect (also referred to as the average treatment effect) can then be obtained by computing the difference in means between the treated (college-attending) and control (not-college-attending) samples.
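The difference-in-means estimate described in this paragraph can be sketched as follows (the income numbers are invented purely for illustration):

```python
def difference_in_means(outcomes, treated):
    """Estimate the average treatment effect as mean(treated) - mean(control).

    Under random assignment, this difference estimates the average causal
    effect, since the groups are equivalent on average.
    """
    t = [y for y, d in zip(outcomes, treated) if d]
    c = [y for y, d in zip(outcomes, treated) if not d]
    return sum(t) / len(t) - sum(c) / len(c)

# Hypothetical incomes at age 40 (arbitrary units) for college / no-college groups
incomes = [52, 61, 49, 40, 38, 45]
went_to_college = [True, True, True, False, False, False]
print(difference_in_means(incomes, went_to_college))  # 13.0
```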
In many circumstances, however, randomized experiments are not possible due to ethical or practical concerns. In such scenarios there is a non-random assignment mechanism. This is the case for the example of college attendance: people are not randomly assigned to attend college. Rather, people may choose to attend college based on their financial situation, parents' education, and so on. Many statistical methods have been developed for causal inference, such as propensity score matching. These methods attempt to correct for the assignment mechanism by finding control units similar to treatment units.
## An extended example
Rubin defines a causal effect:
"Intuitively, the causal effect of one treatment, E, over another, C, for a particular unit and an interval of time from ${\displaystyle t_{1}}$ to ${\displaystyle t_{2}}$ is the difference between what would have happened at time ${\displaystyle t_{2}}$ if the unit had been exposed to E initiated at ${\displaystyle t_{1}}$ and what would have happened at ${\displaystyle t_{2}}$ if the unit had been exposed to C initiated at ${\displaystyle t_{1}}$: 'If an hour ago I had taken two aspirins instead of just a glass of water, my headache would now be gone,' or 'because an hour ago I took two aspirins instead of just a glass of water, my headache is now gone.' Our definition of the causal effect of the E versus C treatment will reflect this intuitive meaning."[5]
According to the RCM, the causal effect of your taking or not taking aspirin one hour ago is the difference between how your head would have felt in case 1 (taking the aspirin) and case 2 (not taking the aspirin). If your headache would remain without aspirin but disappear if you took aspirin, then the causal effect of taking aspirin is headache relief. In most circumstances, we are interested in comparing two futures, one generally termed "treatment" and the other "control". These labels are somewhat arbitrary.
### Potential outcomes
Suppose that Joe is participating in an FDA test for a new hypertension drug. If we were omniscient, we would know the outcomes for Joe under both treatment (the new drug) and control (either no treatment or the current standard treatment). The causal effect, or treatment effect, is the difference between these two potential outcomes.
| subject | ${\displaystyle Y_{t}(u)}$ | ${\displaystyle Y_{c}(u)}$ | ${\displaystyle Y_{t}(u)-Y_{c}(u)}$ |
| --- | --- | --- | --- |
| Joe | 130 | 135 | −5 |
${\displaystyle Y_{t}(u)}$ is Joe's blood pressure if he takes the new pill. In general, this notation expresses the potential outcome which results from a treatment, t, on a unit, u. Similarly, ${\displaystyle Y_{c}(u)}$ is the potential outcome under a different treatment, c or control, on a unit, u. In this case, ${\displaystyle Y_{c}(u)}$ is Joe's blood pressure if he doesn't take the pill. ${\displaystyle Y_{t}(u)-Y_{c}(u)}$ is the causal effect of taking the new drug.
From this table we only know the causal effect on Joe. Everyone else in the study might have an increase in blood pressure if they take the pill. However, regardless of what the causal effect is for the other subjects, the causal effect for Joe is lower blood pressure, relative to what his blood pressure would have been if he had not taken the pill.
Consider a larger sample of patients:
| subject | ${\displaystyle Y_{t}(u)}$ | ${\displaystyle Y_{c}(u)}$ | ${\displaystyle Y_{t}(u)-Y_{c}(u)}$ |
| --- | --- | --- | --- |
| Joe | 130 | 135 | −5 |
| Mary | 140 | 150 | −10 |
| Sally | 135 | 125 | 10 |
| Bob | 135 | 150 | −15 |
The causal effect is different for every subject, but the drug works for Joe, Mary and Bob because the causal effect is negative. Their blood pressure is lower with the drug than it would have been if each did not take the drug. For Sally, on the other hand, the drug causes an increase in blood pressure.
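The unit-level effects in this table can be reproduced directly (a sketch using the table's numbers; variable names are ours):

```python
# (Y_t(u), Y_c(u)) for each subject, taken from the table above
potential = {
    "Joe": (130, 135),
    "Mary": (140, 150),
    "Sally": (135, 125),
    "Bob": (135, 150),
}

# Unit-level causal effect: Y_t(u) - Y_c(u)
effects = {u: yt - yc for u, (yt, yc) in potential.items()}

# The drug "works" for a subject when the effect is negative (lower pressure)
helped = [u for u, e in effects.items() if e < 0]

print(effects)  # {'Joe': -5, 'Mary': -10, 'Sally': 10, 'Bob': -15}
print(helped)   # ['Joe', 'Mary', 'Bob']; Sally's pressure rises instead
```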
In order for a potential outcome to make sense, it must be possible, at least a priori. For example, if there is no way for Joe, under any circumstance, to obtain the new drug, then ${\displaystyle Y_{t}(u)}$ is impossible for him. It can never happen. And if ${\displaystyle Y_{t}(u)}$ can never be observed, even in theory, then the causal effect of treatment on Joe's blood pressure is not defined.
### No causation without manipulation
The causal effect of the new drug is well defined because it is the simple difference of two potential outcomes, both of which might happen. In this case, we (or something else) can manipulate the world, at least conceptually, so that it is possible that one thing or a different thing might happen.
This definition of causal effects becomes much more problematic if there is no way for one of the potential outcomes to happen, ever. For example, what is the causal effect of Joe's height on his weight? Naively, this seems similar to our other examples. We just need to compare two potential outcomes: what would Joe's weight be under the treatment (where treatment is defined as being 3 inches taller) and what would Joe's weight be under the control (where control is defined as his current height).
A moment's reflection highlights the problem: we can't increase Joe's height. There is no way to observe, even conceptually, what Joe's weight would be if he were taller because there is no way to make him taller. We can't manipulate Joe's height, so it makes no sense to investigate the causal effect of height on weight. Hence the slogan: No causation without manipulation.
### Stable unit treatment value assumption (SUTVA)
We require that "the [potential outcome] observation on one unit should be unaffected by the particular assignment of treatments to the other units" (Cox 1958, §2.4). This is called the stable unit treatment value assumption (SUTVA), which goes beyond the concept of independence.
In the context of our example, Joe's blood pressure should not depend on whether or not Mary receives the drug. But what if it does? Suppose that Joe and Mary live in the same house and Mary always cooks. The drug causes Mary to crave salty foods, so if she takes the drug she will cook with more salt than she would have otherwise. A high salt diet increases Joe's blood pressure. Therefore, his outcome will depend on both which treatment he received and which treatment Mary receives.
A SUTVA violation makes causal inference more difficult. We can account for dependent observations by considering more treatments. We create four treatments by taking into account whether or not Mary receives treatment.
| subject | Joe = c, Mary = t | Joe = t, Mary = t | Joe = c, Mary = c | Joe = t, Mary = c |
| --- | --- | --- | --- | --- |
| Joe | 140 | 130 | 125 | 120 |
Recall that a causal effect is defined as the difference between two potential outcomes. In this case, there are multiple causal effects because there are more than two potential outcomes. One is the causal effect of the drug on Joe when Mary receives treatment, calculated as ${\displaystyle 130-140}$. Another is the causal effect on Joe when Mary does not receive treatment, calculated as ${\displaystyle 120-125}$. The third is the causal effect of Mary's treatment on Joe when Joe is not treated, calculated as ${\displaystyle 140-125}$. The treatment Mary receives has a greater causal effect on Joe than the treatment Joe receives has on Joe, and it is in the opposite direction.
By considering more potential outcomes in this way, we can cause SUTVA to hold. However, if any units other than Joe are dependent on Mary, then we must consider further potential outcomes. The greater the number of dependent units, the more potential outcomes we must consider and the more complex the calculations become (consider an experiment with 20 different people, each of whose treatment status can affect outcomes for everyone else). In order to (easily) estimate the causal effect of a single treatment relative to a control, SUTVA should hold.
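The three causal effects discussed above can be checked from Joe's table of joint potential outcomes (a sketch; the dictionary layout is ours):

```python
# Joe's potential outcomes under the four joint assignments (from the table):
# keys are (Joe's treatment, Mary's treatment)
joe = {("c", "t"): 140, ("t", "t"): 130, ("c", "c"): 125, ("t", "c"): 120}

# Effect of the drug on Joe, holding Mary's assignment fixed
drug_effect_mary_treated = joe[("t", "t")] - joe[("c", "t")]  # 130 - 140 = -10
drug_effect_mary_control = joe[("t", "c")] - joe[("c", "c")]  # 120 - 125 = -5

# Effect of Mary's treatment on Joe, when Joe is untreated
mary_effect_joe_control = joe[("c", "t")] - joe[("c", "c")]   # 140 - 125 = 15

print(drug_effect_mary_treated, drug_effect_mary_control, mary_effect_joe_control)
```

Note how Mary's treatment moves Joe's outcome more than Joe's own treatment does, and in the opposite direction.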
### Average causal effect
Consider:
| subject | ${\displaystyle Y_{t}(u)}$ | ${\displaystyle Y_{c}(u)}$ | ${\displaystyle Y_{t}(u)-Y_{c}(u)}$ |
| --- | --- | --- | --- |
| Joe | 130 | 135 | −5 |
| Mary | 130 | 145 | −15 |
| Sally | 130 | 145 | −15 |
| Bob | 140 | 150 | −10 |
| James | 145 | 140 | +5 |
| MEAN | 135 | 143 | −8 |
One may calculate the average causal effect by taking the mean of all the causal effects.
How we measure the response affects what inferences we draw. Suppose that we measure changes in blood pressure as a percentage change rather than in absolute values. Then, depending on the exact numbers, the average causal effect might be an increase in blood pressure. For example, assume that George's blood pressure would be 154 under control and 140 with treatment. The absolute size of the causal effect is −14, but the percentage difference (in terms of the treatment level of 140) is −10%. If Sarah's blood pressure is 200 under treatment and 184 under control, then the causal effect is 16 in absolute terms but 8% in terms of the treatment value. A smaller absolute change in blood pressure (−14 versus 16) yields a larger percentage change (−10% versus 8%) for George. Even though the average causal effect for George and Sarah is +1 in absolute terms, it is −1% in percentage terms.
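The George/Sarah example can be verified numerically (percentage effects taken relative to the treatment level, as in the text; variable names are ours):

```python
# (Y_t, Y_c) blood-pressure pairs from the example in the text
george = (140, 154)
sarah = (200, 184)

def abs_effect(pair):
    yt, yc = pair
    return yt - yc

def pct_effect(pair):
    yt, yc = pair
    return (yt - yc) / yt  # relative to the treatment level

pairs = [george, sarah]
avg_abs = sum(abs_effect(p) for p in pairs) / 2
avg_pct = sum(pct_effect(p) for p in pairs) / 2

print(avg_abs)  # +1.0 in absolute terms
print(avg_pct)  # about -0.01, i.e. -1% on the percentage scale
```

The sign of the average effect flips with the measurement scale, which is the point of the example.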
### The fundamental problem of causal inference
The results we have seen up to this point would never be measured in practice. It is impossible, by definition, to observe the effect of more than one treatment on a subject over a specific time period. Joe cannot both take the pill and not take the pill at the same time. Therefore, the data would look something like this:
| subject | ${\displaystyle Y_{t}(u)}$ | ${\displaystyle Y_{c}(u)}$ | ${\displaystyle Y_{t}(u)-Y_{c}(u)}$ |
| --- | --- | --- | --- |
| Joe | 130 | ? | ? |
Question marks are responses that could not be observed. The Fundamental Problem of Causal Inference[2] is that directly observing causal effects is impossible. However, this does not make causal inference impossible. Certain techniques and assumptions allow the fundamental problem to be overcome.
Assume that we have the following data:
| subject | ${\displaystyle Y_{t}(u)}$ | ${\displaystyle Y_{c}(u)}$ | ${\displaystyle Y_{t}(u)-Y_{c}(u)}$ |
| --- | --- | --- | --- |
| Joe | 130 | ? | ? |
| Mary | ? | 125 | ? |
| Sally | 100 | ? | ? |
| Bob | ? | 130 | ? |
| James | ? | 120 | ? |
| MEAN | 115 | 125 | −10 |
We can infer what Joe's potential outcome under control would have been if we make an assumption of constant effect:
${\displaystyle Y_{t}(u)=T+Y_{c}(u)}$
and
${\displaystyle Y_{t}(u)-T=Y_{c}(u).}$
If we wanted to infer the unobserved values we could assume a constant effect. The following table illustrates data consistent with the assumption of a constant effect.
| subject | ${\displaystyle Y_{t}(u)}$ | ${\displaystyle Y_{c}(u)}$ | ${\displaystyle Y_{t}(u)-Y_{c}(u)}$ |
| --- | --- | --- | --- |
| Joe | 130 | 140 | −10 |
| Mary | 115 | 125 | −10 |
| Sally | 100 | 110 | −10 |
| Bob | 120 | 130 | −10 |
| James | 110 | 120 | −10 |
| MEAN | 115 | 125 | −10 |
All of the subjects have the same causal effect even though they have different outcomes under the treatment.
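The constant-effect imputation can be sketched as follows (the helper name and data layout are ours; the numbers are the table's):

```python
def impute_constant_effect(observed, T):
    """Fill in the missing potential outcome assuming Y_t(u) = Y_c(u) + T.

    `observed` maps each unit to a pair (y_t or None, y_c or None), with
    exactly one entry observed per unit; T is the assumed constant effect.
    """
    full = {}
    for unit, (yt, yc) in observed.items():
        if yt is None:
            yt = yc + T      # treated outcome inferred from control
        elif yc is None:
            yc = yt - T      # control outcome inferred from treated
        full[unit] = (yt, yc)
    return full

observed = {"Joe": (130, None), "Mary": (None, 125), "Sally": (100, None),
            "Bob": (None, 130), "James": (None, 120)}
print(impute_constant_effect(observed, T=-10))
```

With T = −10 this reproduces the filled-in table above.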
### The assignment mechanism
The assignment mechanism, the method by which units are assigned treatment, affects the calculation of the average causal effect. One such assignment mechanism is randomization. For each subject we could flip a coin to determine if she receives treatment. If we wanted five subjects to receive treatment, we could assign treatment to the first five names we pick out of a hat. When we randomly assign treatments we may get different answers.
Assume that this data is the truth:
| subject | ${\displaystyle Y_{t}(u)}$ | ${\displaystyle Y_{c}(u)}$ | ${\displaystyle Y_{t}(u)-Y_{c}(u)}$ |
| --- | --- | --- | --- |
| Joe | 130 | 115 | 15 |
| Mary | 120 | 125 | −5 |
| Sally | 100 | 125 | −25 |
| Bob | 110 | 130 | −20 |
| James | 115 | 120 | −5 |
| MEAN | 115 | 123 | −8 |
The true average causal effect is −8. But the causal effect for a given individual is generally not equal to this average: the causal effect varies across individuals, as it typically does in real life. After assigning treatments randomly, we might estimate the causal effect as:
| subject | ${\displaystyle Y_{t}(u)}$ | ${\displaystyle Y_{c}(u)}$ | ${\displaystyle Y_{t}(u)-Y_{c}(u)}$ |
| --- | --- | --- | --- |
| Joe | 130 | ? | ? |
| Mary | 120 | ? | ? |
| Sally | ? | 125 | ? |
| Bob | ? | 130 | ? |
| James | 115 | ? | ? |
| MEAN | 121.67 | 127.5 | −5.83 |
A different random assignment of treatments yields a different estimate of the average causal effect.
| subject | ${\displaystyle Y_{t}(u)}$ | ${\displaystyle Y_{c}(u)}$ | ${\displaystyle Y_{t}(u)-Y_{c}(u)}$ |
| --- | --- | --- | --- |
| Joe | 130 | ? | ? |
| Mary | 120 | ? | ? |
| Sally | 100 | ? | ? |
| Bob | ? | 130 | ? |
| James | ? | 120 | ? |
| MEAN | 116.67 | 125 | −8.33 |
The average causal effect varies because our sample is small and the responses have a large variance. If the sample were larger and the variance were less, the average causal effect would be closer to the true average causal effect regardless of the specific units randomly assigned to treatment.
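Under complete randomization the difference-in-means estimator is unbiased: averaging it over every possible assignment of three units to treatment recovers the true average causal effect of −8. A sketch using the numbers from the "truth" table above (variable names are ours):

```python
from itertools import combinations

# True potential outcomes (Y_t, Y_c) for Joe, Mary, Sally, Bob, James
yt = [130, 120, 100, 110, 115]
yc = [115, 125, 125, 130, 120]

true_ate = sum(a - b for a, b in zip(yt, yc)) / 5  # -8.0

# Enumerate all C(5, 3) = 10 assignments of exactly 3 units to treatment
estimates = []
for treated in combinations(range(5), 3):
    t = [yt[i] for i in treated]
    c = [yc[i] for i in range(5) if i not in treated]
    estimates.append(sum(t) / 3 - sum(c) / 2)

# Individual estimates vary, but their average equals the true ATE
print(true_ate, sum(estimates) / len(estimates))  # -8.0 -8.0 (up to rounding)
```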
Alternatively, suppose the mechanism assigns the treatment to all men and only to them.
| subject | ${\displaystyle Y_{t}(u)}$ | ${\displaystyle Y_{c}(u)}$ | ${\displaystyle Y_{t}(u)-Y_{c}(u)}$ |
| --- | --- | --- | --- |
| Joe | 130 | ? | ? |
| Bob | 110 | ? | ? |
| James | 105 | ? | ? |
| Mary | ? | 130 | ? |
| Sally | ? | 125 | ? |
| Susie | ? | 135 | ? |
| MEAN | 115 | 130 | −15 |
Under this assignment mechanism, it is impossible for women to receive treatment and therefore impossible to determine the average causal effect on female subjects. In order to make any inferences about the causal effect on a subject, the probability that the subject receives treatment must be greater than 0 and less than 1.
### The perfect doctor
Consider the use of the perfect doctor as an assignment mechanism. The perfect doctor knows how each subject will respond to the drug or the control and assigns each subject to the treatment that will most benefit her. The perfect doctor knows this information about a sample of patients:
subject ${\displaystyle Y_{t}(u)}$ ${\displaystyle Y_{c}(u)}$ ${\displaystyle Y_{t}(u)-Y_{c}(u)}$
Joe 130 115 15
Bob 120 125 −5
James 100 150 −50
Mary 115 125 −10
Sally 120 130 −10
Susie 135 105 30
MEAN 120 125 −5
Based on this knowledge she would make the following treatment assignments:
subject ${\displaystyle Y_{t}(u)}$ ${\displaystyle Y_{c}(u)}$ ${\displaystyle Y_{t}(u)-Y_{c}(u)}$
Joe ? 115 ?
Bob 120 ? ?
James 100 ? ?
Mary 115 ? ?
Sally 120 ? ?
Susie ? 105 ?
MEAN 113.75 110 3.75
The perfect doctor distorts both averages by filtering out poor responses to both the treatment and control. The difference between means, which is the supposed average causal effect, is distorted in a direction that depends on the details. For instance, a subject like Susie who is harmed by taking the drug would be assigned to the control group by the perfect doctor and thus the negative effect of the drug would be masked.
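The perfect doctor's bias can be reproduced with a small sketch (hypothetical code, not from the article); it assumes, as the table's assignments imply, that a lower outcome (e.g. blood pressure) is better:

```python
# Potential outcomes (Y_t, Y_c) from the text's perfect-doctor table.
outcomes = {
    "Joe":   (130, 115),
    "Bob":   (120, 125),
    "James": (100, 150),
    "Mary":  (115, 125),
    "Sally": (120, 130),
    "Susie": (135, 105),
}

true_ate = sum(yt - yc for yt, yc in outcomes.values()) / len(outcomes)  # -5.0

# The perfect doctor treats exactly those for whom treatment gives the lower
# (better) outcome -- everyone except Joe and Susie.
treated = {s for s, (yt, yc) in outcomes.items() if yt < yc}
yt_obs = [outcomes[s][0] for s in treated]
yc_obs = [outcomes[s][1] for s in outcomes if s not in treated]
naive_ate = sum(yt_obs) / len(yt_obs) - sum(yc_obs) / len(yc_obs)  # 3.75

print(true_ate, naive_ate)  # the naive estimate even flips the sign of the effect
```

The naive difference of means (+3.75) not only differs from the true average causal effect (−5) but has the opposite sign, because the assignment filtered out the poor responders on both sides.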
## Conclusion
The causal effect of a treatment on a single unit at a point in time is the difference between the outcome variable with the treatment and without the treatment. The Fundamental Problem of Causal Inference is that it is impossible to observe the causal effect on a single unit. You either take the aspirin now or you don't. As a consequence, assumptions must be made in order to estimate the missing counterfactuals.
The Rubin causal model has also been connected to instrumental variables (Angrist, Imbens, and Rubin, 1996)[6] and other techniques for causal inference. For more on the connections between the Rubin causal model, structural equation modeling, and other statistical methods for causal inference, see Morgan and Winship (2007)[7] and Pearl (2000).[8] Pearl (2000) has shown that all potential outcomes can be derived from Structural Equation Models (SEM) thus unifying econometrics and modern causal analysis.
|
|
Molecular genetics techniques
One student is comparing expression of the gene $\mathrm{HMG2}$ under the different conditions. Molecular genetics is the study of the connection between genotype and phenotype. Cloning of genes in E. coli is a common technique in molecular biology, since it allows large quantities of the DNA for a gene to be made, which allows further analysis or manipulation. Cloning can also be used to produce useful proteins, such as insulin, in microbes. Transformation is the introduction of DNA (usually recombinant plasmids) into bacteria. The cells may be transformed using a gene for antibiotic resistance or a fluorescent reporter so that the mutants with the desired phenotype can be selected. Restriction enzymes were used to linearize DNA for separation by electrophoresis, and Southern blotting allowed for the identification of specific DNA segments via hybridization probes.[4] Restriction enzymes are natural endonucleases used in molecular biology to cut DNA sequences at specific sites. Genetic linkage studies can usually only roughly locate the chromosomal position of a "disease" gene. Explain why complementation analysis will not work with dominant mutations. Give some specific applications for each blotting technique. If you have the following libraries at your disposal (genomic library from skin cells, cDNA library from skin cells, genomic library from neurons, cDNA library from neurons), which could you use and why? Although overlapping with biochemical techniques, molecular genetics techniques are deeply involved with the direct study of DNA. Electrophoresis, cloning, probes, and the polymerase chain reaction (PCR) are a few such genetic techniques.
Molecular diagnostics is the outcome of the fruitful interplay among laboratory medicine, genomics knowledge, and technology in the field of molecular genetics, especially with significant discoveries in the field of molecular genomic technologies. This field has been revolutionized by the invention of recombinant DNA technology. A number of foreign proteins have been expressed in bacterial and mammalian cells. Describe the essential features of a recombinant plasmid that are required for expression of a foreign gene. What are the advantages and applications of plasmids as cloning vectors? In 1971, Berg utilized restriction enzymes to create the first recombinant DNA molecule and the first recombinant DNA plasmid.[5][6] How can expression analysis and DNA sequence analysis help locate a disease gene within the region identified by linkage mapping? How can linkage-disequilibrium mapping sometimes provide a much higher resolution of gene location than classical linkage mapping? …helped create the field of molecular genetics and earned him (with George Beadle and Joshua Lederberg) the Nobel Prize for Physiology or Medicine in 1958. How do they differ? What is an important medical application of knockout mice? Describe the three steps in each cycle of a PCR reaction. How might your analysis of a genetic mutation be different depending on whether a particular mutation is recessive or dominant? Molecular genetics relies heavily on genetic engineering (recombinant DNA technology), which can be used to… Why are temperature-sensitive mutations useful for uncovering the function of a gene?
How can the loxP-Cre system be used to conditionally knock out a gene? Molecular biology involves the isolation and analysis of DNA and other macromolecules. In addition to the unknown gene mutation, the yeast are lacking the genes required to synthesize the amino acids leucine and lysine. Explain why complementation analysis will not work with dominant mutations. How might your analysis of a genetic mutation be different depending on whether a particular mutation is recessive or dominant? Describe the differences between SNPs and STR polymorphisms. The cells may be transformed using a gene for antibiotic resistance or a fluorescent reporter so that the mutants with the desired phenotype are selected from the non-mutants. Forward genetics is a molecular genetics technique used to identify genes or genetic mutations that produce a certain phenotype. The subject index is comprehensive. Restriction enzymes and DNA ligase play essential roles in DNA cloning. The fossil record is often used to determine the phylogeny of groups containing hard body parts; it is also used to date divergence times of… Finally, the location and specific nature of the mutation is mapped via sequencing. Genetics is the study of the inheritance and variation of biological traits. Illustrations are clear and easy to understand. What is a temperature-sensitive mutation? You have the following libraries at your disposal: a genomic library from skin cells, a cDNA library from skin cells, a genomic library from neurons, and a cDNA library from neurons. Often, a secondary assay in the form of a selection may follow mutagenesis where the desired phenotype is difficult to observe, for example in bacteria or cell cultures.
Two methods for functionally inactivating a gene without altering the gene sequence involve dominant-negative alleles and RNA interference (RNAi). A lab studies yeast cells, comparing their growth in two different sugars, glucose and galactose. What is an important medical application of knockout mice? Why was the discovery of a thermostable DNA polymerase (e.g., Taq polymerase) so important for the development of PCR? Forward genetics is an unbiased approach and often leads to many unanticipated discoveries, but may be costly and time consuming.[15] DNA polymorphisms can be used as DNA markers. DNA fragments with compatible ends can be joined together through ligation. If the ligation produces a sequence not found in nature, the molecule is said to be recombinant.[12] Today, through the application of molecular genetic techniques, genomics is being studied in many model organisms and data is being collected in computer databases like NCBI and Ensembl. What is the advantage of expressing a protein in mammalian cells versus bacteria? Knockdown may also be achieved by RNA interference (RNAi). What reaction is catalyzed by DNA ligase? Molecular biology was first referred to as the study of the chemical and physical structure of biological macromolecules such as nucleic acids and proteins. Molecular genetics is a sub-field of biology that addresses how differences in the structures or expression of DNA molecules manifest as variation among organisms. In a genetic screen, random mutations are generated with mutagens (chemicals or radiation) or transposons, and individuals are screened for the specific phenotype.
Isolation of total genomic DNA involves separating DNA from protein and other cellular components, for example by ethanol precipitation of DNA.[13] Watson and Crick (in conjunction with Franklin and Wilkins) figured out the structure of DNA, a cornerstone for molecular genetics. Polymerase chain reaction (PCR) using Taq polymerase, invented by Mullis in 1985, enabled scientists to create millions of copies of a specific DNA sequence that could be used for transformation or manipulated using agarose gel separation.[9]
|
|
# There are n balls in a jar labeled with numbers 1,2,...,n. A total of k balls are drawn WITH REPLACEMENT.
There are n balls in a jar, labeled with the numbers 1, 2, . . . , n. A total of k balls are drawn, one by one with replacement, to obtain a sequence of numbers.
What is the probability that the sequence obtained is strictly increasing?
For this problem, since you are dealing with replacement, there are $n^k$ ways of selecting the balls. The key observation is that if you draw a ball with a number already selected, you cannot have a strictly increasing sequence.
There are ${n \choose k}$ ways of selecting the labels, with $n^k$ total possibilities. However, you are interested only in one sequential ordering of the labels (increasing), so the probability of a strictly increasing sequence is $\frac{ {n\choose k }}{k!\,n^k}$.
If you have 3 numbered balls (1, 2, 3) and you draw $\textbf{WITH REPLACEMENT}$, then you have a $3 \times 3$ square. The diagonal represents a repeated label being pulled. The upper triangle represents the sequential increasing outcome space of interest, $\{ (1,2), (1,3), (2,3) \}$, out of 9 total possibilities. Dividing by $2!$ represents selecting only the upper triangle and not the lower triangle $\{(2,1), (3,1), (3,2)\}$.
This is more of a discussion, to check whether this reasoning is correct. Thank you very much.
• The correct probability, with replacement, is $\binom{n+k-1}{k}/n^k$. The numerator counts the number of ways to select $k$ elements from an $n$ element sets, where order does not matter, but repeats are allowed. Aug 29 '18 at 17:59
• @MikeEarnest: That's for weakly increasing; the question is about strictly increasing. Aug 30 '18 at 2:49
• @joriki You are right, I was confused. Aug 30 '18 at 4:03
Using the example with $$n=3$$ and $$k=2,$$ namely
\begin{align} \begin{bmatrix} (1,1) & (1,2) & (1,3) \\ (2,1) & (2,2) & (2,3) \\ (3,1) & (3,2) & (3,3) \end{bmatrix}, \end{align} the derived expression gives probability: $$\frac{\binom{3}{2}}{2!3^{2}} = \frac{1}{6}.$$ However, we observe three outcomes in the event $$(1,2), (1,3),$$ and $$(2,3)$$ and nine outcomes in the sample space. Hence, the probability is $$\frac{1}{3}.$$
The example shows that the $$k!$$ term in the denominator is incorrect. Hence, the solution is $$\frac{\binom{n}{k}}{n^{k}}.$$
There is also the following solution by Newb, which describes the numerator $$\binom{n}{k}.$$
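The corrected answer $\binom{n}{k}/n^k$ is easy to verify numerically. The following sketch (the function names are mine) compares the exact value — each strictly increasing sequence is one of the $\binom{n}{k}$ $k$-element subsets, arranged in its unique increasing order — with a Monte Carlo simulation of drawing with replacement:

```python
import random
from math import comb

def p_exact(n, k):
    # C(n, k) strictly increasing sequences out of n**k equally likely draws.
    return comb(n, k) / n**k

def p_simulated(n, k, trials=100_000, seed=0):
    rng = random.Random(seed)
    hits = sum(
        all(a < b for a, b in zip(seq, seq[1:]))
        for seq in ([rng.randint(1, n) for _ in range(k)] for _ in range(trials))
    )
    return hits / trials

print(p_exact(3, 2))  # 1/3: the 3 outcomes (1,2), (1,3), (2,3) out of 9
```

For the $n=3$, $k=2$ example this reproduces the answer's $\frac{1}{3}$, and the simulated frequency agrees to within sampling error.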
|
|
# All Questions
224 views
### Why are we not using multiple ciphers per message?
I am aware of at least rsa, elgamal-encryption, and variations of elliptic-curves relying on different problems and that those problems are considered hard. However, if someone figures out a way to ...
232 views
### Spoofing protocol nonce
Amy and Betty have a shared key $k$, and the protocol below is to provide a mutual authentication for both Betty and Amy. A sends B : $n_a$ B sends A : $n_b \;\|\; E(k, n_a)$ A sends B : $E(k, n_b)$ ...
257 views
### Solving Vigenère Encryption
I'm currently trying to crack a cipher that I believe is Vigenère encrypted and I'm currently stuck. I calculated the key length by finding repeated sequences in the cipher and calculating the ...
401 views
### One time pad: why is it useless in practice?
The symmetric cryptosystem One time pad (OTP) seems to be very beautiful since it is perfectly secret according to Shannon. Many books, however, point out that the main drawback is that one must ...
705 views
### Does AES-128 have the same strength as AES-256 with a padded key?
When I use the same 128-bit key for AES-128 and AES-256 with a known/public padding for the latter, is there some weakness in AES-256 that is not present in AES-128 with essentially the same key? ...
525 views
### Is Chaocipher a secure cipher under ciphertext-only attack?
Chaocipher was invented by John F. Byrne in 1919. The algorithm was recently revealed – see Moshe Rubin's Chaocipher Revealed, the Algorithm (PDF). While a known plaintext attack successfully finds ...
868 views
### How secure is the AES master key if Round Keys are found
If an attacker finds some round key of AES256, is it possible to find the master key? How safe is the master key if an attackers finds multiple round keys?
2k views
### Why does Openssh use only SHA1 for signing and verifying of digital signatures?
I am learning SSH protocol. With my current understanding of SSH protocol, I think that message digest algorithms for using in digital signature should be derived from Key Exchange. But Openssh ...
205 views
### Should different key pairs be used for signing and encryption?
In the recent iOS Security white paper from Apple (February 2014), the section on iMessage discusses using two different asymmetric key types as part of its standard operation: When a user turns ...
303 views
I want to encrypt data and protect its integrity and confidentiality. However, I cannot increase the length of the data. Are there any cipher modes of operation which provide confidentiality and ...
236 views
### Is the following symmetric design secure?
Assume: $O$ be a reversible random permutation oracle on a finite set and $O^{-1}$ the inverse permutation (pretty much equivalent to a random permutation: What is the difference between a bijective ...
186 views
### Difference between computational and statistical indistinguishabilities
What is the difference between the two notions of computational and statistical indistinguishability?
281 views
### How to iteratively calculate $a^e \bmod n$ with modulus $n$ sized 4096 bits
In most sites the exponent of the RSA public key is 24 bits. But the modulus can get to 4096 bits size. I have an accelerator that can get max. 2112 bit size modulus. It calculates ...
2k views
### Why does second pre-image resistance imply pre-image resistance
I am studying hash functions. I can understand why collision resistance implies second preimage resistance, but I don't get why second preimage resistance should imply first preimage resistance. ...
189 views
### Can S/MIME be still considered secure?
Previoulsy I had asked this question at http://stackoverflow.com/questions/18235983/can-s-mime-be-still-considered-secure but I feel this forum is topic-wise the right place. Recently there has been ...
5k views
### Is javascript RSA signing safe?
I'm currently working on a secure open source messaging system (https://github.com/DSMNET/DSMNET). I'm currently using the cryptico.js (https://github.com/wwwtyro/cryptico) library to encrypt all the ...
18k views
### What is the “shared secret” used for in IPSec VPN?
Can somebody explain what the "shared secret" and "password" do when opening/creating a VPN tunnel? In this specific case I setup a VPN to my Fritz!Box and I had to provide a shared secret (which was ...
441 views
### Does SRP reduce to DH key exchange when shared password is not secret?
I can find JavaScript implementations of SRP (Secure Remote Password protocol), but nothing that inspires confidence for Diffie-Hellmen key exchange. I also have a separate need for SRP later. I ...
784 views
### How do I unpack the x and y values from the BITSTRING in a DER ECDSA public key?
In ASN.1, the X and Y values for a 256-bit elliptic curve key are stored as a single 66-byte ASN.1 BITSTRING. Are the values just the first and second half of this bitstring? The private key is an ...
715 views
### Is it feasible to break Diffie-Hellman key exchange when the implementation uses a poor-quality PRNG?
I've come across an implementation of DH in Java that uses the Random class to generate the secret integer value $a$, as shown in in Wikipedia's description of the ...
942 views
### Is there a practical security difference between XXX-bit encryption?
I know I'm treading in dangerous waters asking this - my comprehension of cryptography math is sorely lacking. On the flip side it gives me massive admiration for what many of you are able to do. ...
3k views
### what kind of hash function can provide a short hash and be collision resistant?
When you try to connect via SSH, you see a signature which is short but I heard it is even stronger than sha256. It is perhaps stronger because it uses more rounds. Is there a hash function or a ...
431 views
### Source for examples with broken cryptography
I've heard again and again that many crypto systems have been broken in the past for one reason or another and that it is best to use one that has been peer reviewed, etc etc. However, I've yet to see ...
4k views
### AES-GCM and its IV/nonce value
I was reading about the differences between the GCM and the CBC more here and I have a follow up doubt on the same. In the CBC mode the person who performs the encryption is the one who provides the ...
753 views
### Which block cipher modes of operation allow a predictable IV?
Recently I found out that in the modes CBC and PCBC the IV may be passed in cleartext but never must be predictable. However for this part of my app I rather have the IV be predictable and unique ...
625 views
First, just to make sure I understand "salting" correctly: You randomly generate a string to append to the password before hashing it, so as to increase its length and make precomputed tables much ...
528 views
### How secure is my OTP program?
I'm writing an One-Time Pad encryption program, because I got really interested in the idea of " encryption which has been proven to be impossible to crack if used correctly". I'm writing the program ...
241 views
If we have a hash function $h(x)$ and then a hash function $H(X) = h(h(X_0) || h(X_1))$ where $X_0$ is the first half of $X$, $X_1$ is the second half of $X$ and $||$ is concatenation. Then assuming ...
14k views
### How to use RCON In Key Expansion of 128 Bit Advanced Encryption Standard
I have a question about RCON. Here is my illustration... this is the 128-bit key... ...
1k views
### If you had to implement the BGN Cryptosystem, how would you do it?
If you had to implement BGN, how would you do it? I'm looking for an implementation of the public-key cryptosystem due to Boneh, Goh, and Nissim (aka BGN), or at least some suggestions on ...
432 views
### Provable Encryption
Is it possible to encrypt data in a way that it can be proven that the data is encrypted, without revealing the key? Alice chooses some plaintext, then she encrypts it with a certain scheme. She ...
1k views
### Why is ElGamal considered non-deterministic?
One difference between RSA and ElGamal is that ElGamal isn't necessarily deterministic (while RSA is). What makes it non-deterministic? Is this advantageous to security? How else does this property ...
639 views
### Does the XML Encryption flaw affect SSL/TLS?
A "practical attack against XML's cipher block chaining (CBC) mode" has been demonstrated: XML Encryption Flaw Leaves Web Services Vulnerable. Does this weakness of CBC-mode which is used here also ...
187 views
### When is an asymmetric scheme considered broken?
Does the following quote imply that valid encrypted data can be created and decrypted by someone other than the owner of a private key: An asymmetric encryption scheme is considered to be broken ...
60 views
### Strength of $H(k\|H(m))$ as a MAC algorithm
What is the strength of $H(k \| H(m))$ compared to HMAC? Compared to $H(m \| k)$? What is the strength in bits of a given key/output size?
113 views
### For what does “Rcon” stand for in Rijndael/AES?
How do we expand the abbreviation “Rcon” used in the Rijndael key schedule? I know what it is, I know how it works; I just do not know for what it is a shortcut for. I mean: if "Sbox" is ...
314 views
### What is considered a “weak key” in AES?
I need to construct an AES key from an array of bytes (in Java), but I first I have to check if the key created from these bytes would be weak. Java can check for weakness for DES, but not for AES. ...
112 views
### What is Deterministic Authenticated Encryption?
I came across something known as deterministic authenticated encryption in my studies and a lot of people were associating it with Synthetic IV mode. I am having trouble what exactly DAE is because I ...
84 views
### Why are obfuscators generally defined to be probabilistic algorithms, rather than deterministic ones?
One possible explanation is, randomness is not going to hurt you, so might as well use it. Also, if we are hoping to prove impossibility, it only makes the result stronger. One other explanation is ...
71 views
### Importance of random number generation in Schnorr's signature
In Schnorr's digital signature protocol (https://en.wikipedia.org/wiki/Schnorr_signature), the signing process (as described in wikipedia) requires the generation of a random bit $r$. I am wondering ...
113 views
### What is scale-invariance notion of a fully homomorphic encryption scheme?
I read this paper https://eprint.iacr.org/2012/078.pdf and I didn't understand what does the author mean with scale-invariance perspective. The perspective in which we view the ciphertext is ...
100 views
### Distribution based integer factorization
The past week I've been looking into - and playing with - algorithms for factoring RSA moduli. From what I understand, it's important that the primes p and q are of the same magnitude as the square ...
153 views
### CBC-MAC just to verify integrity
I have a message and I want to verify its integrity and authenticity. I have a shared secret (key) that is not used for other encryption. Is it safe to send a plaintext message containing all the ...
257 views
### Implementing CBC Encryption Using Decryption
I've implemented this algorithm, which, working from the end of the message backwards, creates a valid CBC ciphertext from any plaintext, using the block cipher's decryption operation instead of the ...
273 views
### Can a 1 byte difference in AES 128 bit keys make huge difference in output?
If we take some randomly generated key of AES-128 and we change any random 1 byte of that 16 byte key, will this make huge difference in the AES cipher text generated over same input string? Does ...
138 views
### Turning a 64 bit block cipher into a 128 bit block cipher
There are quite a few block cipher modes of operation that require 128 bits. There are also modes of operation where a higher block size than 128, e.g. a block size of 256 bit would even be practical. ...
2k views
### What is the difference between online and offline brute force attacks?
I read some papers saying a certain scheme is secure for offline brute force attacks, but vulnerable to online brute force attacks. I wonder the difference between the online and offline brute force ...
500 views
### Are there some problems to use pseudo-random number generator in Smart Card?
A Smart Card is a kind of secure device, with limited storage capacity and computational resource. If we use a Pseudo-Random Number Generator to generate random numbers in a Smart Card, then is there ...
|
|
## Saturday, June 01, 2013
### Quintuplets in physics
Cool anniversary: In late January, we celebrated the 30th anniversary of the announcement of the discovery of the W-boson. Today, we celebrate the 30th anniversary of the Z-boson. These were comparably important discoveries to the recent discovery of the God particle.
Sport: Viktoria Pilsen defeated Hradec, a much weaker team, 3-to-0 in the last round so we won the top soccer league for the 2nd time (after 2011). Because the Pilsner ice-hockey team has won the top league as well, Pilsen became the 2nd town in Czechia after Prague that collected both titles in the same year (correction: wrong, 3rd town, Ostrava did it in 1981).
Ms Alexandra Kiňová (23) is expecting Czechia's first naturally born quintuplets (a package of 5 babies) on Sunday morning (tomorrow; update: they're out fine), which would mean that we match the achievement of the most fertile U.S. state – Utah – from last week.
The Daily Mail tells us that the pregnancy has been easy so far. Doctors were still talking about "twins" in January and "quadruplets" in April. The probability that a birth produces $n$-tuplets goes like $1/90^{n-1}$ or so, but the decrease slows down relative to this formula for really high multiplicities.
In physics, quintuplets are rare, too. By quintuplets, we mean five-dimensional irreducible representations of groups.
Correct me if I am wrong but I think that among the simple Lie groups, only $SU(2)=SO(3)$, $USp(4)=SO(5)$, and $SU(5)$ have irreducible five-dimensional representations. Let's look at them because looking at all quintuplets in group theory and physics is a rather unusual direction of approach to a subset of wisdom contained across the structure of maths and physics.
First, $SU(2)$. That's a three-dimensional group of $2\times 2$ complex matrices $M$ obeying $MM^\dagger={\bf 1}$ and $\det M=1$. The basic isomorphisms behind spinors imply that this group is the same as the group $SO(3)$ of rotations of the three-dimensional space except that the matrices $+M$ and $-M$ have to be identified.
The irreducible representations of $SU(2)$ are labeled by the spin $j$ which must be either non-negative integer or positive half-integer (only the former may also be interpreted as proper representations of $SO(3)$; the latter change their sign after a 360-degree rotation). Because the $z$-projection goes from $m=-j$ to $m=+j$ with the spacing equal to one, the representation is $(2j+1)$-dimensional.
The $j=0$ representation is the trivial singlet that doesn't transform at all; the $j=1/2$ is the two-dimensional pseudoreal spinor; the $j=1$ representation is equivalent to the usual 3-dimensional vector; the $j=3/2$ representation is a gravitino-like four-dimensional "spinvector". And finally, the $j=2$ representation is the traceless symmetric tensor. What do I mean by that?
Imagine that you consider the tensor product $V\otimes W$ of two copies of the three-dimensional vector space $V=W=\RR^3$. The tensor product is composed of objects $T_{ij}$ where $i,j$ are vector indices: it's composed of tensors. Clearly, such a tensor has $3\times 3 = 9$ independent components. They can be split into several pieces:
$${\bf 3}\otimes {\bf 3} = {\bf 5} \oplus {\bf 1}\oplus {\bf 3}$$
The identity $3\times 3 = 5+1+3$ is the consistency check that verifies that the representations above have the right dimensions, but the boldface identity above says more than just this arithmetic claim about integers: the two sides are representations of the whole group, and the identity says that they transform in equivalent ways under all elements of the group. Why is this decomposition right? Well, the tensor $T_{ij}$ may be divided into the symmetric tensor part, which is 6-dimensional, and the antisymmetric tensor, which is 3-dimensional (it is equal to $\epsilon_{ijk}v_k$, i.e. equivalent to some vector $v_k$).
However, the 6-dimensional symmetric tensor isn't an irreducible representation of $SO(3)$. The trace $\sum_{i=1}^3 T_{ii}$ is independent of the coordinate system, i.e. invariant under rotations, and may be separated from the 6-dimensional representation. The trace may be set to zero by removing it, i.e. by considering
$$T^\text{traceless part}_{ij} = T_{ij} - \frac 13 \delta_{ij} T_{kk}$$
and such a traceless tensor has 5 independent components; it is a quintuplet. The quadrupole moment tensor is one of the most famous applications of this 5-dimensional object. You could think it's just an accident that this number 5 is equal to the number of integers between $m=-2$ and $m=+2$; you could claim that the agreement is pure numerology, an agreement between the dimensions of two representations. But it is more than numerology: the representations are completely equivalent. The translation between the components $T_{ij}$ of the (complexified) traceless tensor and the five complex amplitudes $c_m$ for $-2\leq m\leq 2$ is nothing else than a linear change of basis. It has to be so because for every $j$, the irreducible representation of $SU(2)$ is unique.
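The decomposition ${\bf 3}\otimes{\bf 3}={\bf 5}\oplus{\bf 1}\oplus{\bf 3}$ can also be checked numerically. The following sketch (not from the post; the variable names are mine) splits a generic $3\times 3$ tensor into the three pieces and confirms that $9 = 5 + 1 + 3$:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))          # a generic rank-2 tensor, 9 components

sym     = (T + T.T) / 2                  # symmetric part: 6 components
anti    = (T - T.T) / 2                  # antisymmetric part: 3 components (j=1)
singlet = np.trace(T) / 3 * np.eye(3)    # trace part: 1 component (j=0)
quintet = sym - singlet                  # traceless symmetric: 5 components (j=2)

assert np.allclose(quintet + singlet + anti, T)   # the pieces reassemble T
assert abs(np.trace(quintet)) < 1e-12             # the quintet is traceless
```

Each piece transforms into itself under rotations, which is exactly what makes the decomposition a statement about representations rather than mere component counting.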
Now, let's talk about $SO(5)$. Clearly, this group of rotations of the 5-dimensional space has a 5-dimensional vector representation consisting of $v_i$. But what some readers aren't aware of is that the group $SO(5)$ may also be identified with the isomorphic $\ZZ_2$ quotient of a spinor-based group, namely $USp(4)$. What is this group? It's the unitary (U) symplectic (Sp) group of complex $4\times 4$ matrices $M$ that obey
$$MM^\dagger = M^\dagger M = 1, \quad M A M^T = A.$$
Both conditions have to be satisfied. The first condition is the well-known unitarity condition, effectively meaning that $s_i^* s_i$ is kept invariant (it's the squared Pythagorean length of the vector computed with the absolute values). The other condition is equivalent to keeping the antisymmetric cross-like product of two vector-like objects, $s_i A_{ij} t_j$, invariant, where $A_{ij}$ are elements of the (non-singular) antisymmetric matrix $A$ above. Note that in this invariant, there is no complex conjugation.
Simple linear redefinitions of the 4 complex components $s_i$ may always translate your convention for $A$ to mine, which is
$$A = \text{block-diag} \zav{ \pmatrix{0&+1\\-1&0}, \pmatrix{0&+1\\-1&0} }$$
You just arrange the right number of the "simplest nonzero antisymmetric matrices" along the (block) diagonal. The two conditions (unitary and symplectic) may then be seen to imply that $M$ is composed of $2\times 2$ blocks of the form
$$\pmatrix{ \alpha&+\beta\\ -\beta^*&\alpha^*},\quad \alpha,\beta\in\CC$$
and the addition and matrix-multiplication rules for such matrices are the same as the addition and multiplication rules for the quaternions $\HHH$. So the group $USp(2N)$ may also be called $U(N,\HHH)$, the unitary group over quaternions. In particular, $USp(4)=U(2,\HHH)$. Such a quaternionization is possible with all pseudoreal representations.
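As a sanity check (not from the post; the helper name is mine), one can verify numerically that $2\times 2$ blocks of the displayed form are closed under multiplication and realize the quaternion algebra:

```python
import numpy as np

def block(alpha, beta):
    """The 2x2 complex block [[alpha, beta], [-beta*, alpha*]] from the text."""
    return np.array([[alpha, beta],
                     [-np.conjugate(beta), np.conjugate(alpha)]])

rng = np.random.default_rng(1)
a1, b1, a2, b2 = rng.standard_normal(4) + 1j * rng.standard_normal(4)

P = block(a1, b1) @ block(a2, b2)
# Closure: the product has the same (alpha, beta) block structure.
assert np.isclose(P[1, 1], np.conjugate(P[0, 0]))
assert np.isclose(P[1, 0], -np.conjugate(P[0, 1]))

# An imaginary quaternion unit: the block with alpha=0, beta=1 squares to -1.
assert np.allclose(block(0, 1) @ block(0, 1), -np.eye(2))
```

Closure plus the $-1$ square of the imaginary unit is exactly the quaternionic multiplication table hiding inside these complex blocks.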
So the fundamental representation of $USp(4)$ is complex 4-dimensional, but it is actually pseudoreal (equivalent to its complex conjugate), and it may be viewed as a spinor of $SO(5)$. It is no coincidence that the $4$ in $USp(4)$ is a power of two. How do you get the five-dimensional vector (the analogue of $j=1$) out of these four-dimensional spinors?
Note that for $SO(3)\sim SU(2)$, we had ${\bf 2}\otimes{\bf 2} = {\bf 3}\oplus {\bf 1}.$ The tensor product of two spinors produced a vector (the triplet; also the symmetric part of the tensor with two spinor indices) and a singlet (the antisymmetric part of the tensor with two 2-valued indices). Similarly, here we have ${\bf 4}\otimes{\bf 4} = {\bf 5}\oplus {\bf 1}\oplus {\bf 10}.$ The decomposition of $4\times 4 = 16$ into $6+10$ is the usual decomposition of a "tensor with two spinor indices" into the antisymmetric part and the symmetric part, respectively. The symmetric part may be identified as the antisymmetric tensor with two vector indices; note that $5\times 4 / (2\times 1) = 10$. And the antisymmetric part is actually reducible here. It's because the invariant for the symplectic groups is antisymmetric, $A_{ij}$, rather than the symmetric $\delta_{ij}$ we had for the orthogonal groups, so it's the antisymmetric part that decomposes into two irreducible pieces, ${\bf 5}\oplus{\bf 1}$.
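The splitting of the antisymmetric part into ${\bf 5}\oplus{\bf 1}$ can be made concrete: subtracting the symplectic trace from an antisymmetric $4\times 4$ tensor leaves exactly the 5 components of the quintuplet. A minimal numerical sketch, assuming NumPy:

```python
import numpy as np

J = np.array([[0, 1], [-1, 0]])
A = np.block([[J, np.zeros((2, 2))], [np.zeros((2, 2)), J]])  # symplectic form

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
B = (M - M.T) / 2                    # antisymmetric tensor: 6 components

s = np.sum(A * B)                    # symplectic trace A_{ij} B_{ij}: the singlet
B5 = B - s / np.sum(A * A) * A       # symplectic-traceless part: the quintuplet

assert abs(np.sum(A * B5)) < 1e-12   # the singlet has been removed
assert np.allclose(B, B5 + s / 4 * A)  # np.sum(A*A) == 4 for this A
```

The counting is $6 = 5 + 1$, mirroring the $\delta_{ij}$-trace story for symmetric tensors of the orthogonal groups.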
By tensor multiplying ${\bf 4}$ with copies of itself, we may obtain all representations of $USp(4)$ and $SO(5)$ by picking pieces of the decomposed tensor products. That's what we mean by saying that the representation ${\bf 4}$ is "fundamental". Whenever an even number of these ${\bf 4}$ factors appears in the tensor product, we obtain honest representations of $SO(5)$ that are invariant under 360-degree rotations and all these representations may also be given a natural description in terms of tensors with vector indices.
Finally, the special unitary group $SU(5)$ has an obvious 5-dimensional complex representation. It is a genuinely complex one, i.e. a representation inequivalent to its complex conjugate: ${\bf 5}\neq \overline{\bf 5}.$ This representation (and its complex conjugate, of course) is important in the simplest grand unified models in particle physics. One may say that $SU(5)$ is an obvious extension of the QCD colorful group $SU(3)$. We keep the first three colors (red, green, blue, so to say) and add two more colors that are interpreted as two lepton species from the same generation. The full collection of fifteen 2-component left-handed spinors per generation (they describe quarks and leptons; a Dirac spinor is composed of two 2-component spinors; the right-handed neutrino is not included among the fifteen) is interpreted as ${\bf 5}\oplus\overline{\bf 10},$ the direct sum of the fundamental quintuplet of $SU(5)$ we have already mentioned and the antisymmetric "tensor" with $5\times 4/(2\times 1)$ components. Note that the counting of the components is the same as it was for the representation of $SO(5)$ above. However, the 10-dimensional representation of $SU(5)$ is a complex one, inequivalent to its complex conjugate (I won't explain why the bar appears in the decomposition above, it's a technicality). The list of 15 spinors may be extended to 16, $10+5+1$, if we add one right-handed neutrino, and this ${\bf 16}$ is then the spinor representation of $SO(10)$, a somewhat larger group that is capable of being the grand unified group (it is no accident that 16 is a power of two: that's what spinors always do).
The number 5 may be thought of as the first "irregular" integer of a sort but it is still small and special enough and is therefore linked to many special things in maths and physics. In maths, five is special because the square root of five appears in the golden ratio; and a pentagram may be constructed with a compass and straightedge (these two facts are actually related). Quadrupole moments, moments of inertia, five-dimensional rotations, and grand unifications are among the physical topics in which 5-dimensional representations are used as "elementary building blocks".
I hope that Ms Kiňová's birth will be as smooth as her pregnancy.
#### snail feedback (13) :
Wow, she could make a lot of money if she were to sell the babies to gay couples... Is it allowed in Czechia? (I'm being sarcastic, although this could be the case.)
I am sure she will be doing fine without that, too. Ministers etc. are talking about it, effectively preparing an Amendment of the Constitution about the Quintuplets, if I exaggerate just a little bit - extra aid etc.
Look at this cute and funny video of quadruplets laughing :-) (and then let's imagine when they cry all together).
That's cute! LOL.
No, she will give only one baby because there's a $Z_{5}$ symmetry between them which gives us the same Lagrangian.
Would you call that a litter?
One of my silly questions in MO was about SO(5)
http://mathoverflow.net/questions/75875/why-su3-is-not-equal-to-so5
I totally agree with you. It is silly. How can SU(3) be SO(5)? SU(3) is 8-dimensional, SO(5) is 10-dimensional. The dimensions are different but so close that clearly neither of the two may be a subgroup of the other.
Thanks for this nice representation of some group theory Lumo ;-)
Do you know in a bit more detail what is inside Nadir Jeevanjee's book? Does it give nice cool physics examples as you always do, while explaining group theory?
That's definitely most endearing quads and a nice mum.
But when I look at the photo of the very happy and glowing-looking face of the quintuplets-expecting woman - i.e. a female who is in actual fact in a very precarious state with five parasites growing inside her so that they almost make her belly burst, then it makes me think: "That's how strongly and ruthlessly Mother Nature beats her biological (bongo-) drum! %-|
# Algebra-valued exterior forms
We can extend the view of exterior forms as real-valued linear mappings to define algebra-valued forms. These follow the same construction as in the prior section, starting from an algebra-valued 1-form $${\check{\Theta}\colon V\rightarrow \mathfrak{a}}$$, so that general forms are alternating multilinear maps from $${k}$$ vectors to a real algebra $${\mathfrak{a}}$$ whose vector multiplication takes the place of multiplication in $${\mathbb{R}}$$. Since this vector multiplication may not be commutative, we need to be more careful in terms of ordering in the isomorphism to ensure antisymmetry, i.e. for two algebra-valued 1-forms we define
\begin{aligned} (\check{\Theta}\wedge\check{\Psi})(v,w)\equiv\check{\Theta}(v)\check{\Psi}(w)-\check{\Theta}(w)\check{\Psi}(v). \end{aligned}
An algebra-valued form whose values are defined by matrices is a matrix-valued form. Exterior forms that take values in a matrix group can also be considered as matrix-valued forms, but it must be understood that under addition the values may no longer be in the group. One can also form the exterior product between a matrix-valued form and a vector-valued form. To reduce confusion when dealing with algebra- and vector-valued forms, we will indicate them with (non-standard) decorations, for example in the case of a matrix-valued 1-form acting on a vector-valued 1-form,
\begin{aligned} (\check{\Theta}\wedge\vec{\varphi})(v,w)\equiv\check{\Theta}(v)\vec{\varphi}(w)-\check{\Theta}(w)\vec{\varphi}(v). \end{aligned}
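A quick numerical check (assuming NumPy; the forms here are toy matrix-valued 1-forms on $\mathbb{R}^2$ invented for the test) confirms that the exterior product defined above is antisymmetric in its vector arguments:

```python
import numpy as np

rng = np.random.default_rng(2)
M1, M2, N1, N2 = rng.standard_normal((4, 3, 3))

# matrix-valued 1-forms on R^2: linear maps from vectors to 3x3 matrices
Theta = lambda v: v[0] * M1 + v[1] * M2
Psi   = lambda v: v[0] * N1 + v[1] * N2

def wedge(T, P):
    """Exterior product of two matrix-valued 1-forms, as defined above."""
    return lambda v, w: T(v) @ P(w) - T(w) @ P(v)

v, w = rng.standard_normal(2), rng.standard_normal(2)
TP = wedge(Theta, Psi)
assert np.allclose(TP(v, w), -TP(w, v))   # antisymmetric in v and w
```

Note that because matrix multiplication is non-commutative, the ordering of the factors in `wedge` matters, exactly as the text warns.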
Δ An additional distinction can be made between forms that take values which are concrete matrices and column vectors (and thus depend upon the basis of the underlying vector space), and forms that take values which are abstract linear transformations and abstract vectors (and thus are basis-independent). We will attempt to distinguish between these by referring to the specific matrix or abstract group, and by only using “vector-valued” when the value is an abstract vector.
A notational issue arises in the particular case of Lie algebra-valued forms, where the related associative algebra in the relation $${[\check{\Theta},\check{\Psi}]=\check{\Theta}\check{\Psi}-\check{\Psi}\check{\Theta}}$$ could also be in use. In this case multiplication of values could use either the Lie commutator or that of the related associative algebra. We will denote the exterior product using the Lie commutator by $${\check{\Theta}[\wedge]\check{\Psi}}$$. Some authors use $${[\check{\Theta},\check{\Psi}]}$$ or $${[\check{\Theta}\wedge\check{\Psi}]}$$, but both can be ambiguous, motivating us to introduce our (non-standard) notation. The expression $${\check{\Theta}\wedge\check{\Psi}}$$ is then reserved for the exterior product using the underlying associative algebra (e.g. that of matrix multiplication if the associative algebra is defined this way). For two Lie algebra-valued 1-forms we then have
\begin{aligned} (\check{\Theta}[\wedge]\check{\Psi})(v,w) & =[\check{\Theta}(v),\check{\Psi}(w)]-[\check{\Theta}(w),\check{\Psi}(v)]\\ & =\check{\Theta}(v)\check{\Psi}(w)-\check{\Psi}(w)\check{\Theta}(v)-\check{\Theta}(w)\check{\Psi}(v)+\check{\Psi}(v)\check{\Theta}(w). \end{aligned}
Note that $${[\check{\Theta},\check{\Psi}](v,w)=\check{\Theta}(v)\check{\Psi}(w)-\check{\Psi}(v)\check{\Theta}(w)}$$ is a distinct construction, as is $${[\check{\Theta}(v),\check{\Psi}(w)]=\check{\Theta}(v)\check{\Psi}(w)-\check{\Psi}(w)\check{\Theta}(v)}$$; neither are in general anti-symmetric and thus do not yield forms. Also note that e.g. for two 1-forms $${\check{\Theta}[\wedge]\check{\Psi}\neq\check{\Theta}\wedge\check{\Psi}-\check{\Psi}\wedge\check{\Theta}}$$, and $${(\check{\Theta}[\wedge]\check{\Theta})(v,w)=2[\check{\Theta}(v),\check{\Theta}(w)]}$$ does not in general vanish, so $${[\wedge]}$$ does not act like a Lie commutator in these respects. Instead it forms a graded Lie algebra, so that for algebra-valued $${j}$$- and $${k}$$-forms $${\check{\Theta}}$$ and $${\check{\Psi}}$$ we have the graded commutativity rule
$$\displaystyle \check{\Theta}[\wedge]\check{\Psi}=(-1)^{jk+1}\check{\Psi}[\wedge]\check{\Theta},$$
and with an algebra-valued $${m}$$-form $${\check{\Xi}}$$ we have the graded Jacobi identity
$$\displaystyle (-1)^{jm}(\check{\Theta}[\wedge]\check{\Psi})[\wedge]\check{\Xi}+(-1)^{kj}(\check{\Psi}[\wedge]\check{\Xi})[\wedge]\check{\Theta}+(-1)^{mk}(\check{\Xi}[\wedge]\check{\Theta})[\wedge]\check{\Psi}=0.$$
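For two 1-forms ($j=k=1$, so the sign $(-1)^{jk+1}$ is $+1$), the graded commutativity rule and the non-vanishing of $\check{\Theta}[\wedge]\check{\Theta}$ can be spot-checked numerically. The sketch below assumes NumPy and uses toy matrix-valued 1-forms:

```python
import numpy as np

rng = np.random.default_rng(3)
M1, M2, N1, N2 = rng.standard_normal((4, 3, 3))
Theta = lambda v: v[0] * M1 + v[1] * M2
Psi   = lambda v: v[0] * N1 + v[1] * N2

comm = lambda X, Y: X @ Y - Y @ X     # Lie commutator of matrices

def lie_wedge(T, P):
    """The [^] product of two Lie-algebra-valued 1-forms, as defined above."""
    return lambda v, w: comm(T(v), P(w)) - comm(T(w), P(v))

v, w = rng.standard_normal(2), rng.standard_normal(2)
# for j = k = 1 the sign (-1)^{jk+1} is +1: Theta[^]Psi = Psi[^]Theta
assert np.allclose(lie_wedge(Theta, Psi)(v, w), lie_wedge(Psi, Theta)(v, w))
# Theta[^]Theta = 2[Theta(v), Theta(w)], which generally does not vanish
assert np.allclose(lie_wedge(Theta, Theta)(v, w), 2 * comm(Theta(v), Theta(w)))
```

This makes the point of the text concrete: $[\wedge]$ is symmetric on two 1-forms and does not kill $\check{\Theta}[\wedge]\check{\Theta}$, so it does not behave like an ordinary Lie commutator.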
Algebra-valued forms also introduce potentially ambiguous index notation. If a basis is chosen for the algebra $${\mathfrak{a}}$$, the value of an algebra-valued form may be expressed using component notation as $${\Theta^{\mu}}$$; or if the algebra is defined in terms of matrices, an element might be written $${\Theta^{\alpha}{}_{\beta}}$$, an expression that has nothing to do with the basis of $${\mathfrak{a}}$$. Then for example an algebra-valued 1-form might be written $${\Theta^{\mu}{}_{\gamma}}$$ or $${\Theta^{\alpha}{}_{\beta\gamma}}$$.
Δ In considering algebra-valued forms expressed in index notation, extra care must be taken to identify the type of form in question, and to match each index with the aspect of the object it was meant to represent.
# Why are high pressures used in cracking of long-chain hydrocarbons?
If we have a long-chain hydrocarbon, such as decane, and we split it through thermal cracking (say in an industrial plant), we use high temperatures and high pressures. Cracking produces smaller molecules: alkanes and alkenes. It might look like this:
$$\ce{CH3-CH2-CH2-CH2-CH2-CH2-CH2-CH2-CH2-CH3}$$ Which is cracked into products like this: $$\ce{CH3-CH2-CH2-CH2-CH2-CH2-CH2-CH2-CH2-CH3-> 2CH2=CH2 + C3H6 + C3H8}$$
Why do we use high pressure in this case? Le Chatelier's principle states that the equilibrium will shift to resist the change - in this case it would move the position of equilibrium to the left, not the right. In cracking we want the maximum amount of products (i.e. the right) - therefore surely a low pressure would shift the equilibrium that way.
The reaction does not depend on molecules colliding - enough thermal energy merely has to be provided to enough molecules to cause them to split (homolytic fission) and become free radicals, which react to form smaller molecules. Therefore a pressure requirement to increase the collisions doesn't seem necessary.
Why then do we use high pressure when cracking hydrocarbons?
-
First of all, cracking is not an equilibrium process, therefore the Le Chatelier-Braun principle doesn't apply (it's irrelevant)! Also, the mechanism is much more complicated than this:
• In the gas phase, radical reactions generally have extremely complex reaction networks. E.g. the burning of methane with oxygen involves thousands of elementary reactions, if not more. I am pretty sure that gas-phase cracking is not a pure first-order reaction.
• In industry, cracking is generally done on a heterogeneous catalyst, and is therefore not a gas-phase reaction. Heterogeneous catalysis is pretty much controlled by the adsorbed amount of molecules on the catalyst surface: high pressure, faster reactions.
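The pressure dependence of the adsorbed amount can be illustrated with the Langmuir isotherm, $\theta = KP/(1+KP)$: the fraction of covered surface sites (and hence the surface-reaction rate) grows with pressure. The sketch below uses a made-up adsorption equilibrium constant purely for illustration:

```python
def coverage(P, K=0.05):
    """Langmuir surface coverage at pressure P (K is a hypothetical
    adsorption equilibrium constant, units 1/bar, chosen for illustration)."""
    return K * P / (1 + K * P)

low, high = coverage(1.0), coverage(30.0)
# higher pressure -> more adsorbed reactant -> faster surface reaction
assert high > low
```

The coverage saturates toward 1 at very high pressure, which is why raising pressure helps most in the unsaturated regime.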
-
If you were a refinery, you'd put through hundreds of tonnes/day from one reactor. To keep process volumes manageable, you use high gas pressures and fast throughputs. If you thermally cracked long-chain alkanes, you would need red heat and would get coking. If you did it over an acid catalyst, typically a zeolite whose pores define the product by fit, you'd get coking. The way to do it is to reform under high pressure and modestly high temperature over a noble-metal-doped zeolite catalyst plus added hydrogen. Coking is then deeply suppressed - "Platformate."
Ethylene is better made otherwise; alpha-olefins overall are assembled catalytically. You do not want straight-chain products. You want a maximally branched ~$\ce{C8}$-cut for gasoline, with maximum octane number. Linear ~$\ce{C10}$-cut and up is cheap diesel and kerosene, with maximum cetane number.
Government intervention tortures your access to liquid fuels. The most recently built refinery is ancient. Enviro-whinerism has carefully lowered the energy content/gallon. "Sustainability" means you pay for it big time but don't have a right to receive it. How is it bottom of the barrel diesel costs so much? It's the law! written by idiots and crooks.
-
I think it's worthwhile defining what coking is. – Ari Ben Canaan Apr 15 at 16:09
I suspect there may, in fact, be a good answer to this question hidden in your first paragraph above, and perhaps the second one too, if you just unpacked it enough so that even people who are not industrial chemists could understand it. As for the last paragraph, whether valid or not, it doesn't seem to have any relevance to the question (or to the topic of this site) whatsoever. – Ilmari Karonen Apr 15 at 17:34
Chair parade. Given the answer, look up its sources. Pyrolyzing organics absent air produces coke (graphite) as by-product. This wastes carbon in light streams and ruins catalysts. Low value still bottoms are terminally pyrolyzed to recover lighter ends plus coke. Part (most?) of the National Petroleum Reserve is refinery waste purchased as Saudi light crude then disappeared in Louisiana solution-mined salt domes, re third paragraph. – Uncle Al Apr 15 at 18:24
# Job Costs Using a Plantwide Overhead Rate Naranjo Company designs industrial prototypes for outside companies. Budgeted...
###### Question:
Job Costs Using a Plantwide Overhead Rate Naranjo Company designs industrial prototypes for outside companies. Budgeted overhead for the year was $260,000, and budgeted direct labor hours were 20,000. The average wage rate for direct labor is expected to be $25 per hour. During June, Naranjo Company worked on four jobs. Data relating to these four jobs follow:

| | Job 39 | Job 40 | Job 41 | Job 42 |
| --- | --- | --- | --- | --- |
| Beginning balance | $23,700 | $34,600 | $17,000 | $0 |
| Materials requisitioned | 8,350 | 18,900 | 10,000 | 21,400 |
| Direct labor cost | 12,000 | 18,000 | 3,000 | 2,900 |

Overhead is assigned as a percentage of direct labor cost. During June, Jobs 39 and 40 were completed; Job 39 was sold at 130 percent of cost. (Naranjo had originally developed Job 40 to order for a customer; however, that customer was near bankruptcy and the chance of Naranjo being paid was growing dimmer. Naranjo decided to hold Job 40 in inventory while the customer worked out its financial difficulties. Job 40 is the only job in Finished Goods Inventory.) Jobs 41 and 42 remain unfinished at the end of the month.
Required:
1. Calculate the balance in Work in Process as of June 30. $46,318
2. Calculate the balance in Finished Goods as of June 30. $84,120
3. Calculate the cost of goods sold for June. $57,800
4. Calculate the price charged for Job 39. $75,140
5. What if the customer for Job 40 was able to pay for the job by June 30? What would happen to the balance in Finished Goods? Finished Goods would decrease. What would happen to the balance of Cost of Goods Sold? Cost of Goods Sold would increase.
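The plantwide rate itself follows directly from the budgeted figures: budgeted direct labor cost is $20{,}000 \times \$25 = \$500{,}000$, so overhead is applied at $\$260{,}000 / \$500{,}000 = 52\%$ of direct labor cost. A small sketch (the `job_cost` helper and its sample inputs are illustrative, not the graded answers):

```python
budgeted_overhead = 260_000
budgeted_dl_hours = 20_000
wage_rate = 25

dl_budget = budgeted_dl_hours * wage_rate   # $500,000 budgeted direct labor cost
oh_rate = budgeted_overhead / dl_budget     # overhead as a fraction of DL cost
assert oh_rate == 0.52                      # i.e. 52% of direct labor cost

def job_cost(beginning, materials, direct_labor):
    """Total job cost: applied overhead rides on the job's direct labor cost."""
    return beginning + materials + direct_labor + oh_rate * direct_labor
```

Each job's balance is then its beginning balance plus materials, direct labor, and 52% of direct labor as applied overhead.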
|
|
# Ragged Matrix Indexing
A ragged matrix is a matrix that may have a different number of elements in each row. Your challenge is to write a program, in any language of your choice, that finds the indices of all elements of the ragged matrix that fall within a target range.
# Input:
A list of ragged lists (which may be empty) of positive integers, and a target range given as two positive integers, e.g. 26-56. For languages that do not support this type of list, you can take the input as a string representation instead.
You may assume that a <= b.
# Output:
If a number in the ragged list lies within the range (inclusive of both a and b), output the index of the ragged list it belongs to, followed by the index of the number within that list, e.g. 0 4: here 0 identifies the first ragged list in the input and 4 is the index of the number within that first list.
# Test cases:
[[[1,3,2,32,19],[19,2,48,19],[],[9,35,4],[3,19]],19-53]
->
[[0,3],[0,4],[1,0],[1,2],[1,3],[3,1],[4,1]]
[[[1,2,3,2],[],[7,9,2,1,4]],2-2]
->
[[0,1],[0,3],[2,2]]
You can choose to follow the output format above or output it in the following as well:
[[[1,3,2,32,19],[19,2,48,19],[],[9,35,4],[3,19]],19-53]
->
0 3 0 4 1 0 1 2 1 3 3 1 4 1
Both 0-based and 1-based indexing are allowed.
You can output your answers in any way, as long as it is clear which index refers to the sublist and which refers to the number within it.
You may assume the integers in the list are always positive and non-zero
This is code-golf, so shortest code wins!
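For reference, the task above can be stated as a short ungolfed Python function (a sketch for clarity, not a competing answer; the name find_indices is ours):

```python
def find_indices(matrix, a, b):
    """Return [row, col] for every element within the inclusive range a..b."""
    return [[i, j]
            for i, row in enumerate(matrix)
            for j, x in enumerate(row)
            if a <= x <= b]

# First test case from above:
print(find_indices([[1,3,2,32,19],[19,2,48,19],[],[9,35,4],[3,19]], 19, 53))
# → [[0, 3], [0, 4], [1, 0], [1, 2], [1, 3], [3, 1], [4, 1]]
```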
• Can we assume the input integers are strictly positive (i.e. never zero)?
Jan 27 at 7:45
• You may assume a<=b. Then why example is 56-26?
– tsh
Jan 27 at 7:46
• May I choose to input the range as a range object in python? (e.g. range(26,57) for 26-56)
– tsh
Jan 27 at 7:46
• Typo, @tsh, sorry about that. And yes, I think it will be fine to use range, but isn't it shorter to not use it? Jan 27 at 7:48
• jelly's ŒṪ seems to be made for this challenge Jan 27 at 8:49
# Jelly, 8 7 bytes
r/iⱮⱮŒṪ
Try it online!
## Walkthrough
r/    Reduce the input range list by the range function
ⱮⱮ    For each at depth 2 (on the right argument):
i       find the first index of RHS in LHS
        (or, in this case: check whether the
        current item (RHS) is within the range (LHS))
ŒṪ    All truthy multidimensional indices
• Nice job. A great Jelly tip: when you find yourself using @, look for dyads that do what the one you're using does, but with swapped arguments. In this case, to save a byte I would do r/ċⱮⱮŒṪ. Jan 27 at 19:43
• You can also use i instead of e@ if you only need truthy/falsey values instead of 1/0: Relevant tip Jan 27 at 19:44
# APL (Dyalog Extended), 8 bytes
Full program. Prompts for range, then array.
⍸↑⎕∊¨…/⎕
Try it online!
Explained from the right:
⎕ prompt (for range)
…/ reduce using the range function (gives enclosed list of all number in the range)
∊¨ check if the elements of each of the following lists are in the entire (because it is enclosed) range:
⎕ prompt for array
↑ mix list of Boolean lists into a matrix, padding with Falses
⍸ indices of Trues
• That was quick! Nice use of an anon func. Jan 27 at 7:47
• @DialFrost Well, not an anon func anymore.
Jan 27 at 7:58
# Wolfram Language (Mathematica), 23 bytes
Position[a_/;#<=a<=#2]&
Try it online!
Input [a, b][list]. Returns 1-indexed.
# R, 87 bytes
Or R>=4.1, 80 bytes by replacing the word function with a \.
function(l,a,b,`[`=lapply)cbind(rep(seq(l),lengths(w<-l[`%in%`,a:b][which])),unlist(w))
Try it online!
Solution shorter in R>=4.1:
### R, 92 bytes
Or R>=4.1, 78 bytes by replacing two function occurrences with \s.
function(l,a,b)cbind(rep(seq(l),lengths(w<-lapply(l,function(x)which(x%in%a:b)))),unlist(w))
Try it online!
Outgolfed by @Giuseppe. See that answer for a detailed explanation and comparison of our approaches.
• 78 bytes by avoiding sapply -- I didn't think I would ever use lengths and sequence in the same answer ... Jan 27 at 17:41
• @Giuseppe - Wow, you're an R magician! Do you remember all these seemingly-weird functions, or do you have a secret way to search for them? Jan 27 at 20:04
• @Giuseppe, I think you should post it as a separate answer. Jan 27 at 20:05
• @DominicvanEssen sequence comes up every now and again and I've been trying to find a use for lengths with all the ragged-list questions of late (looks like pajonk beat me there though). I'll admit after a couple years of golfing in R I had a slow year at work so I tried to read through the docs for most base functions and every now and again some weird ones come up :-) Jan 27 at 20:25
• @pajonk I'll post it separately with an explanation in a bit. Jan 27 at 20:25
# Python 3, 69 bytes
lambda l,r,e=enumerate:[[X,Y]for X,x in e(l)for Y,y in e(x)if y in r]
Try it online!
-3 thanks to @tsh
• 69 if you accept input as a range object: lambda l,r,e=enumerate:[[X,Y]for X,x in e(l)for Y,y in e(x)if y in r]
– tsh
Jan 27 at 8:11
• @tsh thanks a lot!! Jan 27 at 16:23
# K (ngn/k), 13 bytes
{+&~(y+!2)'x}
Try it online!
Takes the ragged matrix as x (first arg, a list of lists) and the range as y (second arg, a pair of integers).
• (y+!2) convert the range from e.g. 19 53 to 19 54 (to do a [lb;ub] comparison rather than [lb;ub))
• (...)'x do a binary lookup of x in the list generated by (...). If a value in x lies between the (adjusted) bounds, 0 is returned
• ~ not the above; values within the range become 1 and all others 0
• & use "deep-where" to return indices of 1's
• + transpose the result
It's likely that the final + (transpose) can be elided depending on what constitutes acceptable output.
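The binary-lookup idea above can be sketched in Python with the standard bisect module (a loose translation of the K trick, not its exact semantics; the helper name is ours). With the adjusted bounds [lb, ub+1], bisect_right returns 1 exactly for values in the inclusive range:

```python
from bisect import bisect_right

def in_range_mask(row, lb, ub):
    # Binary-search each value against the two adjusted bounds; values in
    # [lb, ub] land strictly between them, so bisect_right yields 1.
    bounds = [lb, ub + 1]
    return [int(bisect_right(bounds, x) == 1) for x in row]

print(in_range_mask([1, 3, 2, 32, 19], 19, 53))
# → [0, 0, 0, 1, 1]
```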
# R, 78 bytes
function(l,a,b)cbind(rep(seq(l),L<-lengths(l)),sequence(L))[unlist(l)%in%a:b,]
Try it online!
Originally posted as a comment on pajonk's answer, so uses that test harness. That answer is a very natural way to approach the problem in R, so if I hadn't peeked first, I probably wouldn't have found this gem.
Using R-style indexing, a single value in l is l[[i]][j], i.e., the jth (vector) element of the ith element of the list l. pajonk's answer generates a list of all the indices j for each i such that l[[i]][j] %in% a:b and then joins them with an appropriately-sized list of i, as below:
function(l,a,b){
in_ab <- sapply(l,%in%,a:b) # for each element of l, return TRUE/FALSE where elements are in a..b
j_idx_list <- sapply(in_ab,which) # now return the 1-based indices of the TRUE values
j_idx <- unlist(j_idx_list) # and collapse into a single vector
i_lengths <- lengths(j_idx_list) # take the # of valid j-indices in each sublist
i_idx <- rep(seq(l),i_lengths) # and generate i-indices to match
cbind(i_idx,j_idx) # and combine
}
This answer instead uses two functions rarely seen in golfing, sequence and lengths, to generate all the valid pairs i,j and then filters them directly. In fact, sequence is such an odd function in R that the docs used to say¹ that "it mainly exists in reverence to the very early history of R", as I have mentioned before.
Ungolfed a bit:
function(l,a,b){
L <- lengths(l) # find the lengths of each element of l
j_idx <- sequence(L) # for each Length h, generate 1..h and append them together in order
i_idx <- rep(seq(l),L) # replicate each of 1..length(l) times equal to the length of l[[i]]
filter_idx <- unlist(l)%in%a:b # index for filtering
cbind(i_idx,j_idx)[filter_idx,] # combine i,j indices as columns of a matrix, and filter the rows
}
¹ Starting in version 4.0.0, sequence was updated to accept a from and by argument, much like seq, and this line was removed.
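For comparison, the generate-all-pairs-then-filter shape of this answer translates to ungolfed Python roughly as follows (0-based rather than R's 1-based indexing; the helper name is ours):

```python
from itertools import chain

def all_pairs_filter(matrix, a, b):
    # Like rep(seq(l), lengths(l)) paired with sequence(lengths(l)):
    # build every valid (i, j) index pair up front.
    pairs = [(i, j) for i, row in enumerate(matrix) for j in range(len(row))]
    # Like unlist(l): flatten the ragged matrix in the same order.
    flat = chain.from_iterable(matrix)
    # Like the %in% a:b row filter on the index matrix.
    return [list(p) for p, x in zip(pairs, flat) if a <= x <= b]

print(all_pairs_filter([[1,2,3,2],[],[7,9,2,1,4]], 2, 2))
# → [[0, 1], [0, 3], [2, 2]]
```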
# Charcoal, 34 bytes
≔⮌I⪪η-η≔…·⊟η⊟ηηFLθF⌕AE§θι№ηκ¹I⟦⟦ικ
Try it online! Link is to verbose version of code. Explanation:
≔⮌I⪪η-η≔…·⊟η⊟ηη
Convert the second input into a range.
FLθ
Loop over the sublists.
F⌕AE§θι№ηκ¹
Loop over the indices where the element lies within the range.
I⟦⟦ικ
Output the outer and inner indices on separate lines double-spaced between each pair of indices.
# 05AB1E, 13 (or 7?) bytes
εNUεIŸsåiXN‚?
First input is the matrix; second the range as pair of integers.
Outputs pairs of 0-based indices without delimiter to STDOUT.
Not sure if this is a valid output-format:
ŸIåεƶ0K
First input is the range as pair of integers; second the matrix.
Outputs the 1-based truthy indices of each row.
Explanation:
ε # Map over each row of the (implicit) first input-matrix:
NU # Store the row-index in variable X
ε # Map over each integer in the row:
I # Push the second (implicit) input-pair
Ÿ # Convert it to an inclusive ranged list
såi # If the current integer is in this list:
XN‚ # Pair X with the inner map-index
? # Pop and output it
Ÿ # Convert the (implicit) first input-pair to an inclusive ranged list
I # Push the second input-matrix
å # Check for each inner-most value if it's within this range
ε # Map over these integers:
ƶ # Multiply each value by its 1-based index
0K # Remove all 0s
# (after which the list of lists is output implicitly)
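The multiply-by-index-then-drop-zeros idiom of the second program can be sketched in ungolfed Python (a loose translation, using 1-based indices as in the explanation; the helper name is ours):

```python
def truthy_indices(row, lo, hi):
    # å-style membership mask, then ƶ: multiply each entry by its 1-based index
    scaled = [k * (lo <= x <= hi) for k, x in enumerate(row, 1)]
    # 0K: remove all zeros, leaving only the indices of in-range values
    return [v for v in scaled if v]

print(truthy_indices([1, 3, 2, 32, 19], 19, 53))
# → [4, 5]
```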
# Factor, 71 bytes
[ '[ [ _ _ between? ] arg-where ] map <enumerated> expand-values-push ]
## Explanation
• '[ [ _ _ between? ] arg-where ] map Map the indices inside the range to each row of the ragged matrix.
• <enumerated> Zip a list with its indices.
• expand-values-push Combine each key with each of the elements in its value, collecting the results in a single list of pairs.
! { { 1 3 2 32 19 } { 19 2 48 19 } { } { 9 35 4 } { 3 19 } } 19 53
'[ [ _ _ between? ] arg-where ] map ! { V{ 3 4 } V{ 0 2 3 } V{ } V{ 1 } V{ 1 } }
<enumerated> ! { { 0 V{ 3 4 } } { 1 V{ 0 2 3 } } { 2 V{ } } { 3 V{ 1 } } { 4 V{ 1 } } }
expand-values-push ! V{ { 0 3 } { 0 4 } { 1 0 } { 1 2 } { 1 3 } { 3 1 } { 4 1 } }
# JavaScript (V8), 58 56 bytes
(a,l,r)=>a.map((x,i)=>x.map((y,j)=>l>y|y>r||print(i,j)))
Try it online!
Saved 2 bytes thanks to Arnauld.
• l<=y&y<=r&& can be turned into l>y|y>r|| Jan 27 at 21:12
# Python3, 80 bytes
lambda l,a,b:[ [X,Y] for X,x in enumerate(l) for Y,y in enumerate(x) if a<=y<=b]
Try it online!
( x^4+3y ) – 6 trinomials that are common did before solutions an exponential is...
Dps School In Pcmc, Class Of Noun, Associate Of The Society Of Actuaries Salary, Hong Leong Bank Quarterly Report, Victoria University Australia Ranking, Weather 24 Pmb 7 Day Forecast, The Farm Restaurant, Port Austin, Mi Menu, Aussiedoodle Adoption Mn,
|
|
# Hochschild homology – motivation and examples
I’m currently trying to learn about Hochschild homology of differential graded algebras. After reading the definition, the notion of Hochschild homology seems somewhat unmotivated and mysterious to me. What is the motivation for defining Hochschild homology, and what are some nice examples?
I’m particularly interested in the Hochschild homology of truncated polynomial algebras $$k[x]/(x^{n+1})$$ where $k$ is a field of characteristic zero and $x$ is of some degree $d$.
Are there any nice references for Hochschild homology?
#### Answer
Set $R = k[x]/(x^{n+1}),\,u=x\otimes 1-1\otimes x,\,v=\sum_{i=0}^n x^i\otimes x^{n-i} \in R^e := R \otimes_k R$.
First, let’s recall from Weibel (Ex. 9.1.4) that in the ungraded case a projective resolution of $R$ over $R^e$ is given by the periodic complex
$$\cdots \xrightarrow[]{v} R^e \xrightarrow[]{u} R^e \xrightarrow[]{v} R^e \xrightarrow[]{u} R^e \xrightarrow[]{\mu} R \to 0$$
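Since $u\,v = x^{n+1}\otimes 1 - 1\otimes x^{n+1} = 0$ in $R^e$, consecutive maps compose to zero and the periodic sequence really is a complex. A standalone Python sketch of this check (an ad hoc representation of $R^e$ by exponent dictionaries, not part of Weibel's treatment):

```python
# Elements of R^e = k[x]/(x^{n+1}) tensor k[y]/(y^{n+1}) as {(i, j): coeff},
# meaning coeff * x^i tensor y^j with 0 <= i, j <= n.
def multiply(a, b, n):
    out = {}
    for (i, j), ca in a.items():
        for (k, l), cb in b.items():
            if i + k <= n and j + l <= n:  # x^{n+1} = 0 = y^{n+1}
                key = (i + k, j + l)
                out[key] = out.get(key, 0) + ca * cb
    return {key: c for key, c in out.items() if c != 0}

def u(n):  # u = x tensor 1 - 1 tensor x
    return {(1, 0): 1, (0, 1): -1}

def v(n):  # v = sum_i x^i tensor x^{n-i}
    return {(i, n - i): 1 for i in range(n + 1)}

# u v = v u = 0 in R^e, so d^2 = 0 in the periodic resolution.
for n in range(1, 6):
    assert multiply(u(n), v(n), n) == {}
    assert multiply(v(n), u(n), n) == {}
```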
Now suppose $R$ is a DGA with $\deg(x)=d$ and zero differentials. The latter implies
that the notions of the Hochschild homology of $R$ as DGA and as graded algebra agree. Hence we can compute the Hochschild homology of $R$ by a projective resolution of $R$ over $R^e$ in the category of graded $R^e$-modules.
For a graded $R^e$-module $M$ let $\Sigma^kM$ be the shifted graded $R^e$-module given by
$(\Sigma^kM)_i := M_{i-k}$. Set $e_k := (0,\ldots,1\otimes 1,\ldots 0) \in (\Sigma^kR^e)_k$. Then $\Sigma^kR^e=R^e\cdot e_k$ is a free graded $R^e$-module (in particular it’s a projective object in the category of graded $R^e$-modules).
Taking into account $\deg u = d, \,\deg v=nd$, we can adjust the projective resolution from Weibel above and find the following projective resolution of $R$ over $R^e$ (taken in
the category of graded $R^e$-modules):
$$\cdots \to \Sigma^{(n+1)d}R^e \xrightarrow[]{d_2} \Sigma^dR^e \xrightarrow[]{d_1} R^e \to R \to 0$$
$$\cdots \to \Sigma^{(n+1)di}R^e\xrightarrow[]{d_{2i}}\Sigma^{(n+1)di-nd}R^e \xrightarrow[]{d_{2i-1}}\Sigma^{(n+1)d(i-1)}R^e\to\cdots$$
where $d_{2i}: e_{(n+1)di} \mapsto v\cdot e_{(n+1)di-nd},\,d_{2i-1}: e_{(n+1)di-nd} \mapsto u \cdot e_{(n+1)d(i-1)}$.
Now $HH_\ast(R,M)$ can be computed by tensoring this complex with $M$ (over $R^e$) and taking the homology. Using the relation $M \otimes_{R^e}\Sigma^kR^e=\Sigma^k M$ we obtain, for example, for $M=R$ the complex
$$\displaystyle\cdots \to \Sigma^{(n+1)di}R\xrightarrow[]{d_{2i}}\Sigma^{(n+1)di-nd}R \xrightarrow[]{0}\Sigma^{(n+1)d(i-1)}R\to\cdots$$
where $d_{2i}: e_{(n+1)di} \mapsto (n+1)x^n\cdot e_{(n+1)di-nd}$. Hence
If $n+1$ is invertible in $k$ then (as graded $R$-module)
$$HH_{2i}(R,R)=\Sigma^{(n+1)di}Rx,\quad HH_{2i-1}(R,R)=\Sigma^{(n+1)di-nd}R/(x^n), \quad HH_0(R,R)=R.$$
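These dimensions can be double-checked mechanically: after tensoring with $R$ the odd differentials vanish and the even ones act as multiplication by $(n+1)x^n$ on $R$, so the homology dimensions are just the kernel and cokernel dimensions of that one $k$-linear map. A standalone Python sketch (ad hoc helper names, characteristic zero assumed):

```python
from fractions import Fraction

def rank(mat):
    # Rank over Q via Gaussian elimination.
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def mult_by(coeffs, n):
    # Matrix of multiplication by sum_k coeffs[k] x^k on k[x]/(x^{n+1}),
    # in the basis 1, x, ..., x^n (column j is the image of x^j).
    mat = [[0] * (n + 1) for _ in range(n + 1)]
    for j in range(n + 1):
        for k, c in enumerate(coeffs):
            if j + k <= n:
                mat[j + k][j] += c
    return mat

n = 4                                   # R = k[x]/(x^5)
d_even = mult_by([0] * n + [n + 1], n)  # the map (n+1) x^n : R -> R
r = rank(d_even)
# For i >= 1: dim HH_{2i} = dim ker, dim HH_{2i-1} = dim coker.
assert (n + 1) - r == n  # dim ker = dim coker = n = dim xR = dim R/(x^n)
```

For $n=4$ both kernel and cokernel have dimension $4$, matching $\dim_k xR = \dim_k R/(x^n) = n$.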
# Universal crossovers between entanglement entropy and thermal entropy
## Abstract
We postulate the existence of universal crossover functions connecting the universal parts of the entanglement entropy to the low temperature thermal entropy in gapless quantum many-body systems. These scaling functions encode the intuition that the same low energy degrees of freedom which control low temperature thermal physics are also responsible for the long range entanglement in the quantum ground state. We demonstrate the correctness of the proposed scaling form and determine the scaling function for certain classes of gapless systems whose low energy physics is described by a conformal field theory. We also use our crossover formalism to argue that local systems which are “natural” can violate the boundary law at most logarithmically. In particular, we show that several non-Fermi liquid phases of matter have entanglement entropy that is at most of order $L^{d-1}\log L$ for a region of linear size $L$, thereby confirming various earlier suggestions in the literature. We also briefly apply our crossover formalism to the study of fluctuations in conserved quantities and discuss some subtleties that occur in systems that spontaneously break a continuous symmetry.
## I Introduction
Recently, an exchange of ideas between quantum information science and many-body physics has led to an improved understanding of the “corner” of Hilbert space in which ground states of local Hamiltonians reside. One of the most important tools for investigating the properties of many-body ground states is entanglement entropy, defined as the von Neumann entropy of the reduced density matrix of a spatial subsystem. The ubiquitous presence of a boundary law for the entanglement entropy, as reviewed in Ref. Eisert et al., 2010; Amico et al., 2008, has provided a rough guide to the entanglement properties of quantum ground states. This rough intuition led to a new class of quantum states generically called tensor network states Refs. Verstraete et al., 2008; Gu et al., 2008; Vidal, 2008 as well as new insights into the classification and identification of many-body phases and phase transitions in Refs. Kitaev and Preskill, 2006; Levin and Wen, 2006; Chen et al., 2011; Fidkowski and Kitaev, 2011; Pfeifer et al., 2009.
In this paper we show that by considering the relationship between thermal and entanglement entropy we can place significant constraints on ground state entanglement structure for “natural” systems. One of our main motivations is to characterize the possible violations of the boundary law for entanglement entropy at zero temperature. There have been many constructions of anomalously entangled ground states in the quantum information community e.g. Ref. Vitagliano et al., 2010, but what do these have to do with ordinary quantum systems relevant for laboratory studies? There are also motivations from the study of mutual information in quantum systems at finite temperature in Refs. Wolf et al., 2008; Melko et al., 2010. Interesting critical phenomena are visible in the mutual information, and a first step toward understanding these numerical results is a more complete understanding of the temperature dependence of the von Neumann entropy of a single region.
Our basic assumption that connects thermodynamics with entanglement is that the same low energy degrees of freedom are responsible for both long range entanglement and low temperature thermal physics. To give a concrete example, in one dimensional relativistic critical systems, while the high energy physics contributes an area law term to the entanglement entropy, only the low energy modes contribute to the long range entanglement and to the low temperature thermal properties, as discussed in Refs. Holzhey et al. (1994); Calabrese and Cardy (2004); Korepin (2004). Indeed, there is a universal crossover function that interpolates between the zero temperature entanglement entropy and the finite temperature thermal entropy of a given subregion (consisting of a single interval). We study and generalize this crossover phenomenon in a variety of critical systems in different dimensions.
In more detail, we will make the following assumptions throughout this paper. We always study gapless systems since it is obvious (though not rigorous) that generic gapped phases obey a boundary law. Our primary assumption is that there exists a universal crossover function that relates thermal and entanglement entropy. This crossover function is only defined up to boundary law terms coming from high energy physics. We also assume that the system does not possess extensive ground state degeneracy and that the Hamiltonian is not fine tuned (beyond the tuning necessary to reach criticality). We will mostly consider translation invariant states, but we do discuss disordered states in Sec. VI. In short, we want to consider sensible gapless ground states of local Hamiltonians, but in an effort to be precise, we give the above assumptions as sufficient criteria for “sensibleness”. Finally, let us note that the renormalization group perspective on entanglement structure permeates our entire discussion.
We study the von Neumann entropy of a real space region of linear size $L$ in $d$ spatial dimensions as a function of temperature $T$ and region geometry. Recall that at zero temperature most gapless quantum systems in $d$ dimensions satisfy a boundary law $S \sim L^{d-1}$ for the entanglement entropy, with the coefficient of this term non-universal (see Ref. Eisert et al., 2010). However, there are gapless systems that violate the boundary law for entanglement entropy, including free fermions with a Fermi surface Wolf (2006); Gioev and Klich (2006); Swingle (2010a), Landau Fermi liquids Swingle (2010b), and Weyl fermions in a magnetic field at weak and strong coupling Swingle (2010c). These examples have an entanglement entropy $S \sim L^{d-1}\log L$.
It is of enormous interest to generalize this result to understand the entanglement structure of non-Fermi liquid ground states of matter. Many such states share with the Fermi liquid the crucial feature that there are gapless excitations that reside at a surface in momentum space. However, unlike in a Fermi liquid, there is no description of these excitations in terms of a Landau quasiparticle picture. Such states were suggested to also violate the area law for the entanglement entropy based on a heuristic argument that views the gapless momentum space surface as a collection of effective one dimensional systems Swingle (2010a). If the area law is indeed violated, can the violation be stronger than in a Fermi liquid?
An example of a non-Fermi liquid state with a gapless surface in momentum space was studied numerically in Ref. Zhang et al. (2011). The second Renyi entropy of a wavefunction (obtained by Gutzwiller projecting a free Fermi sea) for a gapless quantum spin liquid phase of an insulating spin system in two dimensions was calculated using Monte Carlo methods. The second Renyi entropy was shown to obey a behavior consistent with $L\log L$. Given the current limitations on system size it is hard to distinguish this from a power law violation of the area law. It is therefore important to have a general understanding of how seriously the area law can be violated in such a spin liquid state.
The quantum spin liquid phase discussed above is expected to be described by a low energy effective theory with a Fermi surface of emergent fermionic spin-$1/2$ particles (spinons) coupled to an emergent gauge field. Similar effective field theories describe Bose metals, some quantum critical points in metals, and other exotic gapless systems. In all these cases the violation of the boundary law is suggested by heuristic arguments. Based on the analogy with Fermi liquids we might guess that the violation is logarithmic Swingle (2010a). It is clearly important to have a firm argument for the correctness of this guess. Providing such an argument is one of the purposes of this paper. What about other non-Fermi liquid states where the effective theory is not yet understood? We will address a class of such states that have a critical Fermi surface with appropriate scaling properties Senthil (2008) to discuss the scaling constraints on their entanglement structure. In all these cases we argue that $L^{d-1}\log L$ is the fastest possible parametric scaling with $L$ in $d$ dimensions.
Besides the von Neumann entropy, we also investigate the scaling behavior of fluctuations in conserved quantities as in Refs. Wolf, 2006; Swingle, 2010a; Song et al., 2011. Here the structure is slightly richer, but the basic conclusions are very similar. In phases with unbroken symmetry, the fluctuations in the conserved quantity generating the symmetry scale no faster than $L^{d-1}\log L$ at zero temperature, again under the assumption that the same low energy modes responsible for thermal fluctuations also give rise to these zero temperature fluctuations.
This paper is organized as follows. We begin with a discussion of the crossover behavior in the simplest conformally invariant case in general dimensions. Next we discuss the case of codimension one critical manifolds relevant for Fermi liquids, and then we discuss the general structure including higher codimension critical manifolds. Finally, we turn to a discussion of fluctuations in conserved quantities. We conclude with a discussion of possible violations of our scaling formalism.
## Ii Scaling formalism: introduction
### ii.1 Conformal symmetry
Consider a local quantum system with Hamiltonian $H$ at finite temperature $T$, so that the entire system is in the mixed state $\rho(T) \propto e^{-H/T}$. As $T \to 0$ we recover the ground state up to corrections exponential in the gap to the first excited state. We will be exclusively interested in systems where the ground state is either in a gapless phase or at a critical point. Thus we will always have some notion of scaling symmetry although we will often not have the full power of the conformal group.
Consider now a region $R$ of linear size $L$ inside a larger many-body system. The complement of region $R$ is denoted $\bar R$. The reduced density matrix of $R$ is
$$\rho_R(L,T)=\operatorname{Tr}_{\bar R}\bigl(\rho(T)\bigr) \qquad (1)$$
and the von Neumann entropy of this reduced density matrix is
$$S_R(L,T)=-\operatorname{Tr}_R(\rho_R\log\rho_R). \qquad (2)$$
We will also be interested in generalizations of the von Neumann entropy called Renyi entropies labeled by a parameter $n$:
$$S_n=\frac{1}{1-n}\log\operatorname{Tr}_R(\rho_R^n). \qquad (3)$$
The limit $n \to 1$ is simply $S_R$, the von Neumann entropy of $R$.
Let us initially consider the special case of a conformal field theory in $d$ spatial dimensions. Two simple limits exist. As $T \to 0$ (a non-universal velocity is suppressed throughout) the von Neumann entropy recovers the usual entanglement entropy of the ground state. We know from earlier studies that the entanglement entropy may contain a mixture of universal and cutoff dependent terms, see Refs. Metlitski et al., 2009; Casini and Huerta, 2009; Ryu and Takayanagi, 2006 for representative calculations. For example, the boundary law term, going as $L^{d-1}$, is non-universal, but there are universal logarithmic terms in odd $d$. In even $d$ the universal logarithmic term is replaced by a constant term. On the other hand, as $LT \to \infty$ we recover the usual thermal entropy going as $T^dL^d$.
Using our basic assumption we write the entropy of region $R$ as
$$S_R(L,T)=T^{\phi}f_R(LT)+\ldots \qquad (4)$$
where $\ldots$ stands for the aforementioned addition or subtraction of non-universal terms involving the momentum cutoff $\Lambda$. Let us now determine the properties of $f_R$ and the exponent $\phi$. For the moment we suppress the region dependence, writing $f_R$ as $f$. First, as $LT \to \infty$ we must recover the extensive thermal entropy and hence $f(x) \sim x^d$ at large $x$. This further implies that $\phi = 0$ to obtain the correct temperature dependence of the entropy. In the opposite limit $LT \to 0$ the only possibility for a non-zero and finite contribution is $f \to \text{const}$ or $f \to \log(LT)$. The possibility of the logarithm is allowed since the $T$ appearing in the logarithm can be replaced by $\Lambda$ at the expense of a non-universal term.
We conclude that our scaling assumption is consistent with either a universal constant term or a universal logarithm in the entanglement entropy of a conformal field theory at zero temperature. Indeed, both these possibilities are realized: the logarithmic term obtains for $d$ odd and the constant for $d$ even. This is also an appropriate time to mention the possibility of shape dependence; for example, the fact that sharp corners produce logarithmic corrections is completely consistent with our scaling framework (see Refs. Casini and Huerta, 2007, 2009). It is important to note that the constant term in even $d$ is only meaningful in the absence of corners. Other types of universal terms are not allowed unless they violate our assumptions and are unrelated to the thermal physics. Of course, this conclusion is very natural from the renormalization group point of view.
We briefly elaborate on this point and discuss the structure of high energy contributions in more detail. Locality demands that all high energy contributions be proportional to integrals of local geometric data over the boundary. Consider an entangling surface in $d$ dimensions. We may choose coordinates adapted to the surface, in terms of which we have the induced metric (we only consider flat space here, the generalization is straightforward). In addition to the intrinsic geometry of the surface we also have extrinsic geometry related to the embedding of the surface into flat space. For example, a cylinder has extrinsic curvature but no intrinsic curvature whereas a sphere has both. The extrinsic geometry is controlled by the extrinsic curvature $K$, which is built from the normal vector $n$ and the projector onto the surface. Now an important constraint for global pure states is the requirement that $S_R = S_{\bar R}$, and since we consider here only high energy contributions that are independent of the low energy physics, we may still demand this symmetry at finite temperature for the terms of interest. Since the only difference between the boundary of $R$ and that of $\bar R$ is the direction of the normal, we conclude that only even powers of $n$ can appear. Thus only even powers of the extrinsic curvature and hence only even powers of derivatives can appear (the same is true for intrinsic terms). Roughly speaking, we must form fully contracted invariants involving the normal vector and the gradient, but requiring the normal vector to appear with only even powers forces the same for gradients due to rotation invariance. This explains the general even/odd structure of universal terms via a simple scaling argument, e.g. Ref. Swingle, 2010b.
Let $r$ denote the length scale of interest along the RG flow. The infinitesimal contribution to the entanglement entropy from degrees of freedom at scale $r$ is of the form described above:
$$r\frac{dS}{dr}=\frac{L^{d-1}}{r^{d-1}}\left(c_0+c_2\frac{r^2}{L^2}+\ldots\right) \qquad (5)$$
where we have used the appropriate logarithmic measure for $r$. The presence of only even corrections comes from our argument above. Performing this integral from the UV to the IR gives the desired structure. More generally, one should cut this integral off at $r \sim 1/T$ where, roughly speaking, the von Neumann entropy becomes thermal in nature.
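Carrying out this integral makes the even/odd structure explicit (a quick sketch, with the non-universal velocity again suppressed):

$$S\sim\int_{\epsilon}^{1/T}\frac{dr}{r}\,\frac{L^{d-1}}{r^{d-1}}\left(c_0+c_2\frac{r^2}{L^2}+\ldots\right)=\sum_{2k\neq d-1}\frac{c_{2k}}{d-1-2k}\,\frac{L^{d-1-2k}}{\epsilon^{d-1-2k}}+\ldots$$

Each term with $2k\neq d-1$ integrates to a cutoff sensitive power law plus finite pieces, while a term with $2k=d-1$, which exists only for $d$ odd, integrates to $c_{d-1}\log(1/\epsilon T)$. This is the origin of the universal logarithm for odd $d$; for even $d$ no power matches and the universal piece is instead a constant.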
Returning to our main development, the simplest example of such a crossover function occurs in conformal field theories where the single interval case is dictated by conformal invariance. The result is
$$S(L,T)=\frac{c}{3}\log\left(\frac{\beta\Lambda}{\pi}\sinh\left(\frac{\pi L}{\beta}\right)\right), \qquad (6)$$
and we see immediately that this form is consistent with our general scaling hypothesis. The high energy cutoff $\Lambda$ can be shifted by a boundary law respecting term, but the thermal physics and long range entanglement is independent of the precise choice of $\Lambda$ (again up to a non-universal dimensionful conversion factor).
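Both limits of this crossover are easy to check numerically. A standalone sketch (units $v=1$, an arbitrary cutoff $\Lambda=1000$, helper names ad hoc): as $T\to 0$ eq. (6) reduces to the ground state entanglement $\frac{c}{3}\log(\Lambda L)$, while at large $LT$ it becomes extensive with $dS/dL\to\pi cT/3$.

```python
import math

def log_sinh(x):
    # Numerically stable log(sinh(x)) for x > 0 (avoids overflow at large x).
    if x > 20.0:
        return x + math.log1p(-math.exp(-2.0 * x)) - math.log(2.0)
    return math.log(math.sinh(x))

def S(L, T, c=1.0, cutoff=1000.0):
    # Eq. (6): S = (c/3) log((beta * Lambda / pi) sinh(pi L / beta)), beta = 1/T.
    beta = 1.0 / T
    return (c / 3.0) * (math.log(beta * cutoff / math.pi)
                        + log_sinh(math.pi * L / beta))

# T -> 0: pure entanglement, S -> (c/3) log(Lambda * L).
assert abs(S(2.0, 1e-6) - math.log(1000.0 * 2.0) / 3.0) < 1e-4

# LT >> 1: extensive thermal entropy, dS/dL -> pi * c * T / 3.
T = 50.0
assert abs((S(3.0, T) - S(2.0, T)) - math.pi * T / 3.0) < 1e-6
```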
### ii.2 Conformal field theories in d>1: Holographic calculation
We have already mentioned one concrete example of such a crossover function in dimensional conformal field theory. In general the computation of such a crossover function is a highly non-trivial task for interacting conformal field theories in spatial dimensions . However, one set of examples where a computation is possible is provided by holography. For an introduction to this set of ideas, see Ref. McGreevy, 2009. For our purposes it suffices to mention three facts. First, certain strongly interacting conformal theories living in flat space are dual to theories of gravity in a curved higher dimensional space known as Anti-de Sitter (AdS) space . The UV of the conformal field theory lives at the boundary of the gravitational spacetime. Second, there is a simple prescription to compute the entropy in such theories, at least in a special limit. Briefly, we must compute the “area” (in Planck units) of a minimal “surface” in the extended gravitational geometry that terminates in the UV on the boundary of the region of interest. Third, finite temperature effects are dual to placing a black hole in the gravitational spacetime.
Putting all these facts together permits a geometric calculation of the entropy of the field theory that is precisely of the form we assumed. We will now give the details of this calculation. Consider first the case of a $1+1$ dimensional CFT. $t$ and $x$ are the CFT directions while $r$ is the emergent scale coordinate ($r=0$ is the UV boundary). The gravitational geometry in this case is the AdS$_3$ black hole with metric
$$ds^2=\frac{L_\Lambda^2}{r^2}\left(-f\,dt^2+\frac{1}{f}dr^2+dx^2\right) \qquad (7)$$
where $f = 1 - r^2/r_0^2$. We study minimal curves terminating on an interval of length $\ell$ at finite temperature. The minimal length is given by
$$J=\int_{-\ell/2}^{\ell/2}\frac{dx}{r}\sqrt{1+(dr/dx)^2f^{-1}}. \qquad (8)$$
Minimizing this length with respect to the curve $r(x)$ gives an equation of motion which may immediately be integrated to yield a conserved quantity
$$\frac{1}{r\sqrt{1+(dr/dx)^2f^{-1}}}=\frac{1}{r_m} \qquad (9)$$
where $r_m$ is the maximum depth achieved by the curve. Solving this for $dr/dx$ gives
$$dr/dx=\sqrt{\left(\frac{r_m^2}{r^2}-1\right)f}=\frac{\sqrt{(r_m^2-r^2)f}}{r}. \qquad (10)$$
We may now rewrite the length as
$$J=2\int_\epsilon^{r_m}dr\,\frac{r}{\sqrt{(r_m^2-r^2)f}}\,\frac{r_m}{r^2} \qquad (11)$$
while the parameter $r_m$ is determined from
$$\ell=2\int_\epsilon^{r_m}dr\,\frac{r}{\sqrt{(r_m^2-r^2)f}}. \qquad (12)$$
$\epsilon$ is a UV cutoff.
Changing variables to $w = r^2/r_m^2$ allows us to rewrite the integral for $\ell$ as
$$\ell=r_0\int_0^1dw\,\frac{1}{\sqrt{(1-w)(\alpha^2-w)}} \qquad (13)$$
where we have safely put $\epsilon = 0$ and defined $\alpha = r_0/r_m$. Performing the integral we obtain
$$\ell/r_0=\log\left(\frac{\alpha^2-1}{(\alpha-1)^2}\right)=\log\left(\frac{\alpha+1}{\alpha-1}\right). \qquad (14)$$
We may now solve for $\alpha$ in terms of $\ell$:
$$\alpha=\frac{1+e^{-\ell/r_0}}{1-e^{-\ell/r_0}}. \qquad (15)$$
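The elementary integral (13) and its inversion (15) can be spot-checked numerically; substituting $w = 1-u^2$ removes the integrable endpoint singularity. A standalone sketch with Simpson quadrature:

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson rule (n must be even).
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

def ell_over_r0(alpha):
    # Eq. (13) after w = 1 - u^2:  int_0^1 2 du / sqrt(alpha^2 - 1 + u^2).
    return simpson(lambda u: 2.0 / math.sqrt(alpha * alpha - 1.0 + u * u), 0.0, 1.0)

for alpha in (1.2, 2.0, 5.0):
    closed = math.log((alpha + 1.0) / (alpha - 1.0))   # eq. (14)
    assert abs(ell_over_r0(alpha) - closed) < 1e-8
    x = math.exp(-closed)                              # x = exp(-ell / r0)
    assert abs((1.0 + x) / (1.0 - x) - alpha) < 1e-12  # eq. (15)
```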
Returning now to the length we find
$$J=\frac{r_0}{r_m}\int_\epsilon^1dw\,\frac{1}{\sqrt{(1-w)(\alpha^2-w)}}\,\frac{1}{w}. \qquad (16)$$
Doing this integral gives
$$J=2\log\left(\frac{2r_0}{\epsilon}\sinh\left(\frac{\ell}{2r_0}\right)\right). \qquad (17)$$
To compute the entropy we append the factor $L_\Lambda/(4G_N)$ and use the relation $c = 3L_\Lambda/(2G_N)$ to obtain
$$S=\frac{c}{3}\log\left(\frac{2r_0}{\epsilon}\sinh\left(\frac{\ell}{2r_0}\right)\right). \qquad (18)$$
We may now determine $r_0$ in terms of the temperature. Zooming in near the horizon and writing $r = r_0 - \rho$ we have
$$ds^2\sim\frac{L_\Lambda^2}{r_0^2}\left(-\frac{2\rho}{r_0}dt^2+\frac{r_0}{2\rho}d\rho^2\right). \qquad (19)$$
Changing variables to $u = L_\Lambda\sqrt{2\rho/r_0}$, the near horizon metric is brought into the form
$$ds^2\sim-\frac{u^2}{r_0^2}dt^2+du^2. \qquad (20)$$
Demanding periodicity in imaginary time gives $\beta = 2\pi r_0$ or $r_0 = 1/(2\pi T)$. Plugging this into our entropy formula reproduces the usual crossover function
$$S(\ell,T)=\frac{c}{3}\log\left(\frac{1}{\pi T\epsilon}\sinh(\pi T\ell)\right). \qquad (21)$$
Let us now consider regions in higher dimensional conformal field theories. For a $d+1$ dimensional CFT the relevant metric is the AdS$_{d+2}$ black hole
$$ds^2=\frac{L_\Lambda^2}{r^2}\left(-f\,dt^2+\frac{1}{f}dr^2+dx_d^2\right) \qquad (22)$$
where $f = 1 - (r/r_0)^{d+1}$.
We consider strip-like regions in the field theory of cross-section $A$ and width $\ell$ ($A$ has units of length$^{d-1}$). The minimal surface area is
$$\sigma=A\int_{-\ell/2}^{\ell/2}\frac{dx}{r^d}\sqrt{1+(dr/dx)^2f^{-1}}. \qquad (23)$$
As before, we may immediately integrate the equation of motion to yield the conserved quantity
$$\frac{1}{r^d\sqrt{1+(dr/dx)^2f^{-1}}}=\frac{1}{r_m^d}. \qquad (24)$$
$r_m$ has the same meaning as before. Solving for $dr/dx$ we find
$$dr/dx=\frac{\sqrt{(r_m^{2d}-r^{2d})f}}{r^d}. \qquad (25)$$
Putting these facts together gives two integrals
$$\sigma=2A\int_\epsilon^{r_m}dr\,\frac{r^d}{\sqrt{(r_m^{2d}-r^{2d})f}}\,\frac{1}{r^d}\,\frac{r_m^d}{r^d} \qquad (26)$$
and
$$\ell=2\int_\epsilon^{r_m}dr\,\frac{r^d}{\sqrt{(r_m^{2d}-r^{2d})f}}. \qquad (27)$$
We may set $\epsilon = 0$ in the integral for $\ell$, which gives a cutoff independent relation between $\ell$, $r_m$, and $r_0$. Of course, when $T = 0$ we have $r_m \propto \ell$.
For the area integral we write
$$\frac{\sigma}{2A}=\int_\epsilon^{r_m}dr\left[\left(\frac{r_m^d}{r^d\sqrt{(r_m^{2d}-r^{2d})f}}-\frac{1}{r^d}\right)+\frac{1}{r^d}\right]. \qquad (28)$$
We have essentially subtracted off the UV sensitive boundary law term so that the integral in parentheses converges as $\epsilon \to 0$. The $\epsilon$ dependence is now trivial to extract and we find
$$\frac{\sigma}{A}=\frac{2}{d-1}\frac{1}{\epsilon^{d-1}}+(\sigma/A)_{\rm fin}(\ell,r_0). \qquad (29)$$
But this is of the required cross-over form: a universal cross-over function plus a boundary law respecting piece sensitive to UV physics.
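This decomposition can be checked numerically in the simplest case, $d=2$ at $T=0$ (so $f=1$, and we may set $r_m=1$): subtracting the $1/\epsilon$ divergence from a direct evaluation of (28) should leave a cutoff independent constant. A standalone sketch (the substitutions $t=1/r$ and $r=1-u^2$ just tame the integrand):

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson rule (n must be even).
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

def sigma_over_2A(eps):
    # integral_eps^1 dr / (r^2 sqrt(1 - r^4))   (d = 2, T = 0, r_m = 1)
    # Piece r in [eps, 1/2]: substitute t = 1/r, giving a smooth integrand.
    p1 = simpson(lambda t: 1.0 / math.sqrt(1.0 - t ** -4), 2.0, 1.0 / eps, 20000)
    # Piece r in [1/2, 1]: substitute r = 1 - u^2 to remove the sqrt singularity.
    def g(u):
        r = 1.0 - u * u
        return 2.0 / (r * r * math.sqrt((1.0 + r) * (1.0 + r * r)))
    p2 = simpson(g, 0.0, math.sqrt(0.5), 2000)
    return p1 + p2

# The epsilon dependence is exactly 1/eps; the remainder is cutoff independent.
d1 = sigma_over_2A(1e-2) - 1.0 / 1e-2
d2 = sigma_over_2A(1e-3) - 1.0 / 1e-3
assert abs(d1 - d2) < 1e-3
```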
We may put this function into the precise form considered above by scaling the variables appropriately. In (27) set $w = r/r_m$ and $g(\ell T) = 2r_m/\ell$ to obtain
$$1=g(\ell T)\int_0^1\frac{dw\,w^d}{\sqrt{1-w^{2d}}\sqrt{1-k\,(\ell T\,g(\ell T))^{d+1}w^{d+1}}} \qquad (30)$$
with $k$ some constant. This is an implicit equation for the scaling function $g$ that can easily be shown to have the properties claimed above. In particular, it shows relativistic length-energy scaling. Plugging this scaling form into (28) shows that $(\sigma/A)_{\rm fin}$ has the form
$$T^{d-1}\frac{1}{(\ell T\,g(\ell T))^{d-1}}\left[2\int_0^1dw\left(\frac{1}{w^d\sqrt{1-w^{2d}}\sqrt{1-k(\ell T g(\ell T))^{d+1}w^{d+1}}}-\frac{1}{w^d}\right)-\frac{2}{d-1}\right]=T^{d-1}f(\ell T). \qquad (31)$$
One comment is necessary: the overall factor of $T^{d-1}$ differs from the $T^\phi$ obtained above simply because we are here working in a limit where the cross-section $A$ is bigger than all other scales. Repeating our general analysis above predicts $\phi = d-1$ since we must have $f(x) \to x$ as $x \to \infty$. Similarly, as $\ell T \to 0$ we must have $f(x) \sim x^{-(d-1)}$ to compensate the vanishing powers of $T$.
### ii.3 Non-relativistic scale invariance
In this section we discuss critical theories with dynamical exponent $z$. The dynamical exponent controls the relative scaling of space and time, leading to the invariant combination $LT^{1/z}$, where again we suppress a non-universal dimensionful parameter. The thermal entropy of such a theory scales as $T^{d/z}L^d$, as follows simply from the requirement of dimensionlessness and extensivity. Let us again introduce a universal scaling function following the assumptions above. We write the entropy as
$$S(L,T)\sim T^{\phi}f(LT^{1/z})+\ldots$$
and use the limit $LT^{1/z} \to \infty$ to establish that $f(x) \sim x^d$ and $\phi = 0$. The rest of the analysis for the non-relativistic case is unchanged and again we are permitted at most universal constant or logarithmic terms in the entanglement entropy.
One can also perform a holographic computation in this setting using so called Lifshitz geometries. In fact, the spatial part of the metric is unchanged at zero temperature, hence the structure of the entanglement entropy is identical. For example, even space dimensions have subleading constants while odd space dimensions have subleading logs. These zero temperature solutions may also be generalized into finite temperature black hole solutions, at least for certain values of $z$. Ref. Balasubramanian and McGreevy, 2009 contains a nice example of such a Lifshitz black hole with $z=2$ and metric
$$ds^2=-\frac{f\,dt^2}{r^{2z}}+\frac{1}{f}\frac{dr^2}{r^2}+\frac{dx_2^2}{r^2} \qquad (32)$$
where $f$ again vanishes at a horizon radius $r_0$. As claimed, the only difference between this metric and the relativistic examples above is in the $r$ dependence of the $dt^2$ term and the different power of $r$ appearing in $f$. The same manipulations establish a nearly identical crossover structure to the relativistic case except that the argument of all scaling functions is $\ell T^{1/z}$ instead of $\ell T$. As always, a dimensionful constant has been suppressed.
## Iii Scaling formalism: codimension 1
Now we turn to the case where the low energy degrees of freedom reside on a codimension one subspace in momentum space. By contrast, the scale invariant theories in the previous section had low energy degrees of freedom only at a single point in momentum space (or a finite set of isolated points). Examples of systems with a codimension one gapless surface include Fermi liquids with a $d-1$ dimensional Fermi surface in $d$ dimensions, Bose metals, spinon Fermi surfaces, and much more. Later we will consider the case of a gapless manifold of general codimension.
### iii.1 Review of Fermi liquids
The low energy physics of a Fermi liquid is, for many purposes, effectively one dimensional (an exception to this rule is provided by zero sound which requires the full Fermi surface to participate). Thus Fermi liquids violate the boundary law for entanglement entropy because one dimensional gapless systems violate the boundary law. The anomalous term has been found to be universal in the sense that it depends only on the geometry of the interacting Fermi surface and not on any Landau parameters. Remarkably, this term also controls the finite temperature entropy to leading order in . The universal part of the entanglement entropy and the low temperature thermal entropy are connected by a universal scaling function which can be calculated using one dimensional conformal field theory.
We work in $d=2$ for concreteness. Consider a real space region of linear size $L$ in a Fermi liquid with spherical Fermi sea $\Gamma$. The entanglement entropy for this region scales as $L^{d-1}\log L$, with the boundary law violating term universal and the subleading $L^{d-1}$ term non-universal. The precise value of the boundary law violating term is expressed in terms of the geometry of the real space boundary and the Fermi surface as
$$S_L=\frac{1}{2\pi}\,\frac{1}{12}\int_{\partial A}\int_{\partial\Gamma}dA_x\,dA_k\,|n_x\cdot n_k|\,\log(L) \qquad (33)$$
where $n_x$ and $n_k$ are unit normals to $\partial A$ and $\partial\Gamma$. The intuition behind this formula is simply that the Fermi surface in a box of size $L$ is equivalent to roughly $L^{d-1}$ gapless one dimensional modes that each contribute $\log L$ to the entanglement entropy. This formula is known as the Widom formula because of its relation to a conjecture of Widom in signal processing. The Widom formula has not yet been rigorously proven, but it has been checked numerically and can be obtained simply from the one dimensional point of view. To generalize to finite temperature we must replace the zero temperature one dimensional entanglement entropy by the general result at finite temperature given by
S_{1+1}(L,T) = \frac{c+\tilde{c}}{6}\,\log\!\left(\frac{\beta v\Lambda}{\pi}\,\sinh\!\left(\frac{\pi L}{\beta v}\right)\right).   (34)
Fermi liquids are described by many nearly free chiral fermions with and . The marginal forward scattering interactions do not change the number of low energy modes, and hence the mode counting picture still works quantitatively. However, we will only require a much more crude scaling assumption for our results.
Following this one dimensional result a higher dimensional Fermi liquid also possesses a universal crossover between the low temperature thermal entropy and the universal part of the entanglement entropy. This scaling function depends only on the geometry of the real space region and on the shape of the Fermi surface . For a spherical Fermi surface and spherical real space region of radius this universal crossover function is given by
S(L,T) = \frac{1}{2\pi}\,\frac{1}{12}\int_{\partial A}\int_{\partial\Gamma} dA_x\, dA_k\, |n_x\cdot n_k|\,\log\!\left(\sinh\!\left(\frac{2\pi L\,|n_x\cdot n_k|}{\beta v_F(n_k)}\right)\right).   (35)
### iii.2 Non-Fermi liquids
We now consider the entanglement structure of non-Fermi liquid states. We will restrict attention to the class of such states that have a codimension gapless surface in momentum space, a critical Fermi surface, but where there is no Landau quasiparticle description of the excitations. A general scaling formalism has been developed for these states in Ref. Senthil, 2008. Of primary importance to us is the scaling of the thermal entropy. Our considerations will apply to any non-Fermi liquid falling into the general scaling formalism of Ref. Senthil, 2008 irrespective of the detailed low energy theory. However, to be concrete let us consider a Fermi surface coupled to a gauge field in .
Recently there has been a controlled calculation of the properties of this system in terms of the gauge field dynamical critical exponent and the number of fermion flavors in Ref. Mross et al., 2010 following important earlier work in Refs. Lee, 2009; Metlitski and Sachdev, 2010. The expansion parameters are and with a controlled limit possible as with fixed. This system was found to possess a critical Fermi surface. Following the intuition for Fermi liquids, this system will violate the boundary law for entanglement entropy because of the presence of many gapless one dimensional degrees of freedom. However, this situation is not a trivial generalization of the Fermi liquid case because the system lacks a quasiparticle description.
Thermodynamic quantities can be understood roughly in terms of many one dimensional gapless degrees of freedom on the Fermi surface with a dynamical critical exponent . The thermodynamic entropy is predicted to be ( just measures the size of the Fermi surface). Additionally, the low energy theory is such that only antipodal patches of the critical Fermi surface couple strongly to each other. With our current knowledge, we cannot formulate the patch theory as a truly one dimensional theory; nevertheless, thermodynamic quantities are correctly captured. Furthermore, although the Fermi surface curvature must be kept in all existing formulations, this curvature enjoys a non-renormalization property which makes it into a kind of gauge variable: dispersing perpendicular to the patch normal is roughly like changing patches. However, we reiterate that our results depend only on the thermodynamic entropy being given by .
As usual, we write the von Neumann entropy as
S(L,T) = k_F\, T^{\phi}\, f(LT^{1/z})   (36)
where is the fermion dynamical critical exponent and is an exponent to be determined. just measures the size of the Fermi surface. It does not enter into the scaling argument in a non-trivial way. Now from thermodynamics we know that for we must have . The dependence requires that as , and the dependence forces us to choose . As we must have in order to cancel the diverging dependence. More generally, we must have with as before. In particular, powers of log are not allowed because these would produce divergent terms that are supposed to be finite and universal. This demonstrates that these non-Fermi liquid states may violate the boundary law at most logarithmically.
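The exponent matching in the preceding paragraph can be written out explicitly. The following is a sketch under the assumption of a two dimensional system, with the thermal entropy taken to scale as $S_{\rm th} \propto k_F L^2 T^{1/z}$ (the symbols $\phi$ and $z$ are those of the scaling ansatz above):

\begin{aligned}
S(L,T) &= k_F\, T^{\phi} f(LT^{1/z}), \\
LT^{1/z}\gg 1:&\quad f(x)\sim x^{2} \;\Rightarrow\; S \sim k_F L^{2} T^{\phi+2/z} \;\propto\; L^{2}T^{1/z} \;\Rightarrow\; \phi = -\tfrac{1}{z}, \\
T\to 0 \ (x\to 0):&\quad S = k_F\, T^{-1/z} f(x)\ \text{finite} \;\Rightarrow\; f(x)\sim c\,x \ \text{as } x\to 0,
\end{aligned}

so that $S(L,0) \propto k_F L$, a boundary law, with at most a logarithm of $L$ entering through the UV scale in the scaling function (as in the one dimensional crossover formula), matching the "at most logarithmic" conclusion stated in the text.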
Let us also make some more detailed speculations. The critical Fermi surface actually corresponds to a marginal Fermi liquid, so for this theory with flavors we suspect that the boundary law violating term has the usual Fermi liquid form
S_L = N\times\frac{1}{2\pi}\,\frac{1}{12}\int_{\partial A}\int_{\partial\Gamma} dA_x\, dA_k\, |n_x\cdot n_k|\,\log(L).   (37)
At finite we expect modifications of the prefactor due to dependent corrections. However, it is likely that the geometric dependence of the integral remains unchanged. Indeed, the different patches in the critical Fermi surface decouple much more strongly than they do in a Fermi liquid. We also note this geometrical form has recently been verified in a holographic setup with log violations of the boundary law. Depending on how precisely the critical Fermi surface is effectively one dimensional, a more structured crossover function of the form
S_L = \frac{1}{2\pi}\,\frac{T^{1/z}\,C(N,\epsilon)}{12}\int_{\partial A}\int_{\partial\Gamma} dA_x\, dA_k\, |n_x\cdot n_k|\, f_{\epsilon,N}(LT^{1/z})   (38)
may be expected. The function would play the role of in the Fermi liquid case. This detailed geometric form may be too strong a requirement in general, but the general scaling form in the previous paragraph is certainly reasonable.
Similar scaling arguments can be made for the Renyi entropy . We know that the Renyi entropy at finite temperature has a specific relationship to the thermal entropy because of the simple scaling with of the finite temperature free energy. The complete result is
S_n(T) = \frac{n - n^{-1/z_f}}{n-1}\,\frac{1}{1+\frac{1}{z_f}}\, S(T)   (39)
where is the thermal entropy. This dependence of the Renyi entropy actually holds for all and in the case of a Fermi liquid. It would be interesting to determine if this is also true for the theory. Because the Renyi entropy is potentially much easier to calculate numerically and analytically, we believe it is a useful target for future work.
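The quoted relation between the Renyi and thermal entropies follows directly from the free energy scaling mentioned above. As a sketch, assume each one dimensional patch contributes $\log Z(\beta) = a\,\beta^{-1/z_f}$ (the constant $a$ is our notation):

\begin{aligned}
S_n &= \frac{1}{1-n}\log\frac{Z(n\beta)}{Z(\beta)^n}
     = \frac{a\,\beta^{-1/z_f}}{1-n}\left(n^{-1/z_f} - n\right)
     = \frac{n - n^{-1/z_f}}{n-1}\, a\,\beta^{-1/z_f}, \\
S(T) &= -\frac{\partial F}{\partial T} = \left(1+\tfrac{1}{z_f}\right) a\, T^{1/z_f}
\quad\Rightarrow\quad
S_n(T) = \frac{n - n^{-1/z_f}}{n-1}\,\frac{1}{1+\frac{1}{z_f}}\, S(T).
\end{aligned}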
## IV Scaling formalism: general codimension
As a simple example of what we have in mind, consider a free fermion system tuned so that the “Fermi manifold” is of codimension in the dimensional momentum space. is the generic case, a Fermi line in and a Fermi surface in . in and in correspond to Dirac points. The interesting case of in is the problem of Fermi lines where the zero energy locus is one dimensional in the three dimensional momentum space. The codimension is a useful parameter because it tells us the effective space dimension of the local excitations in momentum space. For example, the Fermi surface case always indicates that the excitations are effectively moving in one dimension (radially). Similarly, the case in corresponds to modes that move in two effective dimensions since there is no dispersion along the Fermi line. Just as we could calculate the entropy of Fermi surfaces by integrating over the contributions of one dimensional degrees of freedom, we can obtain the entropy of these higher codimension systems by integrating over the contributions of dimension degrees of freedom. We now make this explicit with a scaling argument.
Suppose that there exists a universal scaling function connecting the entanglement entropy to the thermal entropy for these codimension free Fermi systems. If we assume a generic bandstructure then the dispersion may be linearized near the Fermi manifold yielding a dynamical exponent . Thus we write the von Neumann entropy of region as
S(L,T) = k_F^{\,d-q}\, T^{\phi}\, f(LT)   (40)
where the factor of accounts for the size of the Fermi manifold. The thermal entropy of such a system scales as and hence as usual we require as . This also fixes . Requiring regularity in the limit as , we see that as where . Thus we discover that such systems may have a universal term proportional to with either a constant or logarithmic prefactor. We emphasize that this is precisely what one expects from integrating a dimensional contribution over the Fermi manifold. In particular, we expect a constant prefactor for even and a logarithmic prefactor for odd because the dimensional system has and hence resembles a relativistic scale invariant theory of the type we considered in Sec. II. These statements may be checked in the free fermion case because the entanglement entropy can be computed exactly; however, we defer a full discussion of this case to a future publication.
We conclude this section by noting that our conclusion is unmodified even if we have an interacting theory with a codimension gapless manifold and with general . The scaling function has the form
S = k_F^{\,d-q}\, T^{\phi}\, f(LT^{1/z})   (41)
with which is obtained by matching to the thermal entropy . Although we do not rigorously prove that the thermal entropy is always of this form, such a form does follow from a very general scaling analysis in momentum space and we know of no exceptions. We still predict a universal term, constant or logarithmic, proportional to at zero temperature. For an example of such a transition, see Ref. Senthil and Shankar, 2009 which considered a quantum critical point between a line nodal metal and a paired superconductor.
## V Fluctuations of conserved quantities
In this section we give a brief description of our scaling formalism as applied to an interesting observable: the fluctuations of a conserved charge. These considerations are motivated by the direct experimental accessibility of such fluctuations as well as by a desire to illustrate the general nature of our arguments. As the primary example, consider a conserved number operator that may be written as a sum of local densities . This operator commutes with the Hamiltonian and we may label energy eigenstates with different values of . Hence itself need have no fluctuations in the ground state. However, we can consider the restricted operator which need not commute with the Hamiltonian and may have fluctuations. An interesting measure of the correlations between and its environment is thus the quadratic fluctuations in
\Delta N_R^2 = \left\langle \left(N_R - \langle N_R\rangle\right)^2 \right\rangle.   (42)
We expect in a gapped phase of matter that these fluctuations satisfy a boundary law . Gapless phases in higher dimensions also appear to satisfy a boundary law so long as the symmetry generated by is unbroken. As usual, one dimension is an exception where it is known that although the prefactor is not as universal as for the entanglement entropy i.e. it depends on the Luttinger parameter. It has also been shown that Fermi liquids violate the boundary law for fluctuations in higher dimensions with .
Because the discussion is so similar to the case of the entropy, we will not give any of the details here. However, a few points are worth mentioning. First, what we study is now the crossover between the zero temperature number fluctuations and the finite temperature number fluctuations as controlled by the thermodynamic compressibility. Indeed, precisely such a crossover argument was used in Ref. Swingle, 2010a to argue that the number fluctuations in a Fermi liquid are modified by Fermi liquid parameters (unlike the entanglement entropy). As before, we are allowed to subtract off any boundary law contributions since they can be generated by high energy degrees of freedom. For relativistic scale invariant systems the finite temperature fluctuations go like and we again find that only constant or logarithmic universal terms are allowed at . More generally, we conclude that is the most severe violation of the boundary law that is possible without breaking a symmetry e.g. in critical Fermi surface systems that remain compressible (see Ref. Senthil, 2008). If the symmetry is broken then we expect as follows from a mean-field wave function for a superfluid of the form .
For conformal field theories these statements may be straightforwardly checked using the fact that the conserved current has dimension . To compute the charge fluctuations we must compute
\Delta N_R^2 = \int_R d^dx \int_R d^dy\, \langle J^0(x) J^0(y)\rangle = \int_R d^dx \int_R d^dy\, \frac{1}{|x-y|^{2d}} + \text{contact terms}.   (43)
The contact terms are necessary to cancel a naive short distance volume scaling in the fluctuations. Such a volume scaling from UV modes is unphysical because UV fluctuations are local and hence can only change the charge in region by appearing near the boundary. Let us take to be a sphere of radius . We can do the integral over by changing variables to and integrating over in polar coordinates. The limits of integration are from to where is the angle between and . Let us specialize to for simplicity. The fluctuations now go like
\int_R d^2x \int d\theta\, \frac{1}{r_\star(\theta)^2}   (44)
where we have discarded the aforementioned UV divergent volume piece. remains non-zero for all unless , and it can be shown that the leading singularity is
\int d^2x\, \frac{1}{(L-|x|)^2} \sim \frac{L}{\epsilon}   (45)
i.e. the boundary law. If we further expand the integral in powers of we also find a sub-leading term giving rise to a logarithmic correction in agreement with our scaling result.
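The boundary law scaling asserted in the last two displayed integrals can be checked in a worked step. Here we assume the region is a disk of radius $L$, a short distance cutoff $\epsilon$, and that near the boundary $r_\star(\theta) \propto (L-|x|)$:

\begin{aligned}
\int_R d^2x\, \frac{1}{(L-|x|)^2}
&= 2\pi \int_0^{L-\epsilon} \frac{\rho\, d\rho}{(L-\rho)^2}
 = 2\pi \int_\epsilon^{L} \frac{(L-u)\, du}{u^2} \\
&= 2\pi\left(\frac{L}{\epsilon} - 1 - \log\frac{L}{\epsilon}\right),
\end{aligned}

reproducing the leading term linear in $L$ (the boundary law) together with the subleading logarithmic correction mentioned in the surrounding text.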
We wish to mention one final subtle point that arises in phases with broken symmetry and which is not properly captured in our thermodynamic treatment. Thus consider a superfluid phase where the particle number symmetry is broken. Besides the usual sound modes that possess an energy scaling as ( is system size), there is also a zero mode with energy levels scaling like . This zero mode plays an important role in finite size systems, like quantum Monte Carlo simulations, where it is responsible for ensuring that although the symmetry is broken, the many-body state has a definite particle number. In other words, it is related to the fact that the many-body ground state in a finite size system is properly a cat state unless the symmetry is broken explicitly. This zero mode is not easily visible in thermodynamics, but it does affect the entanglement entropy and number fluctuations in an important way.
Let us ignore the sound modes and ask for a state that captures the dynamics of the zero mode. Such a cat state has the form
|M\rangle \sim \int d\theta\, e^{-iM\theta} \bigotimes_r |\theta\rangle_r   (46)
where is a state of definite phase on site . We wish to trace out part of the system and compute the entropy of the remainder, but this problem has to be regulated because ambiguities are encountered in this procedure. Consider a simpler system consisting of states per site with a symmetry relating them, and where the many body state is of the form . If we now trace out part of the system we find a reduced density matrix for region of the form
\rho_R = \mathrm{tr}_{\bar R}\,\rho = \sum_x \bigotimes_{r\in R} |x\rangle\langle x|_r.   (47)
Perfect correlation with the environment has rendered the reduced density matrix completely diagonal. The entropy is now trivially . To connect this model to the superfluid, we need only estimate the effective value of . We do this by counting the effective number of orthogonal states in . Now the many-body coherent state of the form
|\theta\rangle = \bigotimes_{r\in R} e^{-|\alpha|^2/2} \sum_n \frac{\alpha^n e^{in\theta}}{\sqrt{n!}}\, |n\rangle_r   (48)
has an overlap with a neighboring state of the form
\langle\theta|\theta'\rangle = \exp\!\left[|R||\alpha|^2\left(e^{-i(\theta-\theta')}-1\right)\right].   (49)
Expanding in small , the first real term is and hence states greater than are effectively orthogonal. Thus we may take to give an entropy contribution of the form . We also see that this mean-field cat state captures the number fluctuations, extensive in the subsystem size, while maintaining a ground state with definite particle number.
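The orthogonality estimate above can be written out explicitly. With $\delta = \theta-\theta'$ and the shorthand $\bar N_R = |R||\alpha|^2$ (our notation for the mean particle number in the region):

\begin{aligned}
\langle\theta|\theta'\rangle &= \exp\!\left[\bar N_R\left(e^{-i\delta}-1\right)\right], \qquad
e^{-i\delta}-1 = -i\delta - \frac{\delta^2}{2} + O(\delta^3), \\
|\langle\theta|\theta'\rangle| &\approx e^{-\bar N_R\, \delta^2/2}
\;\Rightarrow\; \text{states with } \delta \gtrsim \bar N_R^{-1/2} \text{ are effectively orthogonal},
\end{aligned}

so the number of distinguishable phase states is $q \sim \bar N_R^{1/2}$ and the resulting entropy contribution is $\log q \sim \tfrac{1}{2}\log \bar N_R$, logarithmic in the subsystem size.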
We can also pin the order parameter to remove the anomalous contribution to the entanglement. For the superfluid, there is now no anomalous entropy although there are still extensive number fluctuations. On the other hand, in an anti-ferromagnet we may pin the Neel field to point in a particular direction. If we take as a mean field state
|\mathrm{Neel}\rangle = \bigotimes_{r\in A} |\uparrow\rangle_r \bigotimes_{r\in B} |\downarrow\rangle_r   (50)
then we see immediately that there is no anomalous entanglement and the fluctuations of are not extensive (they are zero). However, the fluctuations of and are still extensive. These simple considerations have been considerably developed in Ref. Song et al., 2011. A careful treatment including the interactions between the zero mode and the sound modes is also expected soon in Ref. Metlitski et al. They find that the coefficient of the logarithm is somewhat modified from the simple minded argument above.
## VI Potential violations
Before concluding, let us point out some potential ways that one might violate the scaling forms we have developed. A nice example is provided by a certain degenerate spin chain. By fine tuning the strength of the nearest neighbor couplings as a function of position along the chain, Ref. Vitagliano et al. (2010) showed that it was possible to find a ground state with entanglement entropy that scaled as the length of the interval. This was arranged by adjusting the couplings so that a real space RG procedure always coupled the boundary spin inside the region with a spin outside the boundary i.e. no spins formed singlets within the interval. This required exponentially decaying couplings and is clearly not generic.
Another route is provided by large ground state degeneracy, if we use the completely mixed state. Recently such systems have received a lot of attention due to results showing non-Fermi liquid behavior in certain holographic systems. However, we emphasize that nothing is special about the holographic setting; it is but one example. A simple condensed matter example with “ground state degeneracy” is provided by the spin incoherent Luttinger liquid. This system is a Luttinger liquid where the spin energy scale is much less than the charge energy scale. At temperatures above the spin energy scale, the spin incoherent state emerges where the spin degrees of freedom are totally disordered. Such a state has an extensive temperature independent entropy, but it also cannot be a true ground state. To force the state to zero temperature, we must fine tune an infinite number of relevant operators (the entire spin Hamiltonian) to zero. We call this an IR incomplete theory since it cannot be smoothly connected to zero temperature. More generally, we can imagine intermediate scale RG fixed points that control the physics over a wide range of energies but which cannot be interpreted as ground states due to an infinite fine tuning. We know one possibility is that such a state may have extensive entropy, but perhaps there are other possibilities where the entanglement entropy scales like for .
Another setting where violations might occur is in random systems. In one dimension we know that even at infinite randomness fixed points the boundary law for the average entanglement entropy is violated no worse than in the conformal case. However, we do not know if infinite randomness fixed points would violate the boundary law in higher dimensions. We expect any finite random fixed point will not, and we suspect that infinite randomness fixed points would not either, but we do not give a definite argument at this time. We also note that there are considerable subtleties in these systems e.g. typical versus average values. Since the entanglement entropy of a region has a probability distribution , it would be interesting to determine if the distribution was a function of only or something more complicated. In any event, there are many open issues at such random fixed points, the thermodynamics does not obey the simple forms we have considered here, and so we do not have much else to say about these issues at this time.
Let us also briefly mention long range interactions. If these interactions are due to massless fields with a non-singular action within the physical description e.g. fluctuating gauge fields or other critical bosonic modes, then a proper renormalization group description is possible and the entanglement entropy should have no additional anomalous structure. Similarly, so long as the long range forces present in the system have such an interpretation, even if they must be introduced as auxiliary fields, we might expect no new anomalies to appear. On the other hand, consider the “1d chain” where, in addition to nearest neighbor hoppings, every site can hop to every other site with the same strength. Calling such a system one dimensional is a perversion, but it is an extreme form of long range interactions. Clearly such a system can be expected to violate the one dimensional boundary law more than logarithmically. The task that emerges is thus to understand where the crossover point is, as a function of the interaction range, to conventional one dimensional behavior. One quite interesting situation where these considerations are directly applicable is momentum space entanglement where the region is some subset of momentum space instead of position space. This topic, which has already received some preliminary attention, deserves its own exposition which we will present elsewhere.
Finally, we note that from the general codimension section our formalism can in some sense encompass states that violate the boundary law more seriously than logarithmically provided that is effectively less than . For example, roughly describes a state with gapless excitations everywhere in some region of finite measure in momentum space. This is a lot like the situation with ground state degeneracy. Nevertheless, as we have already said, we know of no example of a sensible ground state with .
## VII Discussion
We have argued that entanglement entropy and thermal entropy may be connected via a universal crossover function in gapless phases and at critical points. One major consequence of this assumption is that local quantum systems cannot violate the boundary law more than logarithmically. However, we hasten to add that should our assumptions be violated, we have no objection. In particular, possible loopholes escaping our conclusion include fine tuning in the Hamiltonian, systems with many degenerate ground states, and systems with long range interactions. Models showing these characteristics may indeed be physically realistic in special cases, nevertheless, we argue that conventional gapless systems, even those with critical Fermi surfaces, will not violate the boundary law more than logarithmically. Actually calculating the entanglement entropy in a model of a critical Fermi surface, perhaps in a expansion, and studying in more detail the entanglement properties for are projects we leave for the future.
Many conventional quantum critical points that describe symmetry breaking transitions, like the XY critical point, fall into the category of conformal field theories discussed in Sec. II. However, it was recently shown in Ref. Swingle and Senthil, 2011 that “deconfined” quantum critical points have a different entanglement structure due to proximate topologically ordered phases. For example, Ref. Swingle and Senthil, 2011 discussed the XY* critical point in dimensions which has the same correlation length exponent as the XY transition but a different anomalous dimension for the order parameter. This arises because the order parameter is actually composite and it is the “fractional” bosons that undergo the XY transition. However, an important difference is that the bosons are coupled to a gauge field and hence not all operators in the XY theory of the s are gauge invariant. Furthermore, there must always exist somewhere in the high energy spectrum gapped vortices. This is all to say that while the thermal entropies of XY and XY* are identical, they have different crossover functions and different entanglement properties at zero temperature.
A potentially profitable generalization of our work here would be to include in the scaling formalism the effect of relevant operators that move away from the critical point. Similarly, it would be interesting to study the scaling structure of the full Renyi entropy in more detail. Scale invariance fixes the high temperature Renyi dependence, but the zero temperature entanglement structure as a function of Renyi parameter could be rather rich. Perhaps our scaling approach could shed some light on this structure. The generalization to multiple regions is also open and is especially relevant to studies of mutual information.
It would also be quite interesting to develop a variational class of density matrices that encode the kind of crossover behavior we described here. Since the mutual information always obeys a boundary law at finite temperature Wolf et al. (2008), such states could in principle look like a density matrix generalization of tensor network states, although presumably the bond dimension would have to grow as the temperature was lowered to account for systems that violate the boundary law at zero temperature. Perhaps the new branching MERA approach Evenbly and Vidal (2011) could help? One could compute some of these crossover functions in field theory to see what sort of universal information is easily accessible. We have already done the analogous calculations in holographic theories, but there it was already known that the entanglement entropy contains relatively little information due to the large limit.
### References
1. J. Eisert, M. Cramer, and M. B. Plenio, Rev. Mod. Phys. 82, 277 (2010).
2. L. Amico, R. Fazio, A. Osterloh, and V. Vedral, Rev. Mod. Phys. 80, 517 (2008), URL http://link.aps.org/doi/10.1103/RevModPhys.80.517.
3. F. Verstraete, J. Cirac, and V. Murg, Adv. Phys. 57, 143 (2008).
4. Z.-C. Gu, M. Levin, and X.-G. Wen, Phys. Rev. B 78, 205116 (2008).
5. G. Vidal, Phys. Rev. Lett. 101, 110501 (2008).
6. A. Kitaev and J. Preskill, Phys. Rev. Lett. 96, 110404 (2006).
7. M. Levin and X.-G. Wen, Phys. Rev. Lett. 96, 110405 (2006).
8. X. Chen, Z.-C. Gu, and X.-G. Wen, Phys. Rev. B 83, 035107 (2011).
9. L. Fidkowski and A. Kitaev, Phys. Rev. B 83, 075103 (2011).
10. R. N. C. Pfeifer, G. Evenbly, and G. Vidal, Phys. Rev. A 79, 040301 (2009).
11. G. Vitagliano, A. Riera, and J. I. Latorre, New Journal of Physics 12, 113049 (2010), eprint 1003.1292.
12. M. M. Wolf, F. Verstraete, M. B. Hastings, and J. I. Cirac, Phys. Rev. Lett. 100, 070502 (2008).
13. R. G. Melko, A. B. Kallin, and M. B. Hastings, Phys. Rev. B 82, 100409 (2010).
14. C. Holzhey, F. Larsen, and F. Wilczek, Nuc. Phys. B 424, 443 (1994).
15. P. Calabrese and J. Cardy, J. Stat. Mech. 04, 06002 (2004).
16. V. E. Korepin, Phys. Rev. Lett. 92, 096402 (2004), URL http://link.aps.org/doi/10.1103/PhysRevLett.92.096402.
17. M. Wolf, Phys. Rev. Lett. 96, 010404 (2006).
18. D. Gioev and I. Klich, Phys. Rev. Lett. 96, 100503 (2006).
19. B. Swingle, Phys. Rev. Lett. 105, 050502 (2010a).
20. B. Swingle (2010b), eprint arXiv:1002.4635.
21. B. Swingle (2010c), eprint arXiv:1003.2434.
22. Y. Zhang, T. Grover, and A. Vishwanath, Phys. Rev. Lett. 107, 067202 (2011), URL http://link.aps.org/doi/10.1103/PhysRevLett.107.067202.
23. T. Senthil, Phys. Rev. B 78, 035103 (2008), URL http://link.aps.org/doi/10.1103/PhysRevB.78.035103.
24. B. Swingle, ArXiv e-prints (2010a), eprint 1007.4825.
25. H. F. Song, S. Rachel, C. Flindt, I. Klich, N. Laflorencie, and K. Le Hur, ArXiv e-prints (2011), eprint 1109.1001.
26. M. A. Metlitski, C. A. Fuertes, and S. Sachdev, Phys. Rev. B 80, 115122 (2009), URL http://link.aps.org/doi/10.1103/PhysRevB.80.115122.
27. H. Casini and M. Huerta, Journal of Physics A: Mathematical and Theoretical 42, 504007 (2009).
28. S. Ryu and T. Takayanagi, Phys. Rev. Lett. 96, 181602 (2006), URL http://link.aps.org/doi/10.1103/PhysRevLett.96.181602.
29. H. Casini and M. Huerta, Nuclear Physics B 764, 183 (2007), eprint arXiv:hep-th/0606256.
30. B. Swingle, ArXiv e-prints (2010b), eprint 1010.4038.
31. J. McGreevy, ArXiv e-prints (2009), eprint 0909.0518.
32. K. Balasubramanian and J. McGreevy, Phys. Rev. D 80, 104039 (2009), eprint 0909.0263.
33. D. F. Mross, J. McGreevy, H. Liu, and T. Senthil, Phys. Rev. B 82, 045121 (2010), URL http://link.aps.org/doi/10.1103/PhysRevB.82.045121.
34. S.-S. Lee, Phys. Rev. B 80, 165102 (2009), URL http://link.aps.org/doi/10.1103/PhysRevB.80.165102.
35. M. A. Metlitski and S. Sachdev, Phys. Rev. B 82, 075127 (2010), URL http://link.aps.org/doi/10.1103/PhysRevB.82.075127.
36. T. Senthil and R. Shankar, Phys. Rev. Lett. 102, 046406 (2009), URL http://link.aps.org/doi/10.1103/PhysRevLett.102.046406.
37. H. F. Song, N. Laflorencie, S. Rachel, and K. Le Hur, Phys. Rev. B 83, 224410 (2011), URL http://link.aps.org/doi/10.1103/PhysRevB.83.224410.
38. M. Metlitski, T. Grover, and X.-L. Qi, to appear.
39. B. Swingle and T. Senthil, ArXiv e-prints (2011), eprint 1109.3185.
40. G. Evenbly and G. Vidal, Journal of Statistical Physics 145, 891 (2011), eprint 1106.1082.
# infinite procedural world

### #1 sliders_alpha (Posted 30 April 2012 - 06:39 AM)

Hi,
I want to make a Minecraft clone but I don't know if I'm going about it the right way or not. Having never taken any game programming courses, I might be re-inventing the wheel^^

First things first: my world is stored in a 2D sector array. I give the illusion of movement by drawing the world backward when the player wants to go forward. Here is my idea for infinite world generation. (In reality I have a bigger array called cache which contains the world sectors and the surrounding ones; this cache object manages hard disk access on its own to always give the world quick access to new sectors.)

What do you think?
Thanks

#### Attached Thumbnails

### #2 Waterlimon (Posted 30 April 2012 - 06:44 AM)

So basically you have a square grid which holds the visible chunks of the world, and it moves them around trying to keep the player in the center cell? k.

You could make it not-square but something similar to a circle (leave some corner cells not pointing anywhere) as they're not really useful in the corners, since you can't see that far in all of the directions... o3o

### #3 sliders_alpha (Posted 30 April 2012 - 06:55 AM)

Yeah, but I'm using Java. Since I'm declaring an array of chunks, I think that even if I put nothing inside a cell the memory for it is still allocated.

I just thought about something: it's all OK if the player starts at 0,0, but how do I move him back to where he was last time? My current thought is to store the chunk name and the coordinates on the chunk.
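The scheme from the first post — a fixed grid of chunk references that is re-centered on the player, so "the world moves, not the player" — can be sketched roughly as below. This is a minimal illustration only: the `ChunkGrid`/`Chunk` names and the `loadChunk` stub are invented for the example, not taken from any real engine.

```java
// Minimal sketch of a recentered chunk grid: the player conceptually stays in
// the center cell, and moving shifts which world chunks the grid references.
import java.util.HashMap;
import java.util.Map;

class Chunk {
    final int worldX, worldZ; // world-space chunk coordinates
    Chunk(int x, int z) { worldX = x; worldZ = z; }
}

class ChunkGrid {
    private final int radius;             // grid holds (2*radius+1)^2 cells
    private final Chunk[][] cells;
    private int centerX = 0, centerZ = 0; // world chunk the player is in
    // stand-in for the disk-backed cache described in the original post
    private final Map<Long, Chunk> cache = new HashMap<>();

    ChunkGrid(int radius) {
        this.radius = radius;
        this.cells = new Chunk[2 * radius + 1][2 * radius + 1];
        refill();
    }

    private Chunk loadChunk(int wx, int wz) {
        // hypothetical loader: real code would hit disk or a generator here
        return cache.computeIfAbsent(((long) wx << 32) | (wz & 0xffffffffL),
                                     k -> new Chunk(wx, wz));
    }

    private void refill() {
        for (int dx = -radius; dx <= radius; dx++)
            for (int dz = -radius; dz <= radius; dz++)
                cells[dx + radius][dz + radius] = loadChunk(centerX + dx, centerZ + dz);
    }

    // "the world moves, not the player": shift the center and re-reference
    void movePlayer(int dx, int dz) {
        centerX += dx;
        centerZ += dz;
        refill(); // a real engine would only load/drop the edge rows
    }

    Chunk centerChunk() { return cells[radius][radius]; }
}
```

Note that, as discussed just above, the 2D array only holds references, so leaving corner cells as `null` (Waterlimon's circle idea) costs essentially nothing.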
### #4Waterlimon Crossbones+ - Reputation: 2836 Like 0Likes Like Posted 30 April 2012 - 07:23 AM But if theyre kind of references the memory usage will be a lot lower without those extra chunks as youre just storing something that can point to chunks, not the chunks themselves. o3o ### #5Krohm Crossbones+ - Reputation: 3410 Like 0Likes Like Posted 30 April 2012 - 09:05 AM I see no problem in line of theory. Keep in mind however that wrapping the world to reset it around the origin might be more problematic than expected. I can tell for sure Bullet (a physics API I'm using) wouldn't allow you to do so. If you're going to write your own physics, keep in mind the periodic "re-centering" operation will likely have to be supported somehow. It is possible to do something like what you have drawn in the first two images, let the camera move forward for a few chunks (say 1024*16 units) and keep streaming (letting the model "grow", without changing camera position). Then, wrap to the origin only when a certain rationale is met, such as distance from the origin > 1024*16. This is fairly easy: just subtract a constant such as 1024*16 along a certain axis. How to move back where it was last time? It's not like we are talking about something we don't know. We know exactly how big a chunk is. Think at it. As long as FP precision allows, the operation will be perfect. What's the deal with chunk names and coordinates? It appears to me this information is not required. But if theyre kind of references the memory usage will be a lot lower without those extra chunks as youre just storing something that can point to chunks, not the chunks themselves. I don't completely agree for two reasons. • Nobody says chunks will be reused. In the case of minecraft, they are mostly unique - this means the difference will be close to zero but anyway.. • Java does not really create a array of objects. It creates arrays of references to objects. 
Java does not really deal with the objects themselves like C/C++ does; it deals with the references only.
• I hardly believe the chunk structure itself will take much space. Its data will.

### #6 sliders_alpha Members - Reputation: 107

Posted 30 April 2012 - 10:46 AM

You were both right: making an array of references to chunks does not allocate anything, it just creates references. Then I have to instantiate a class and link it to this reference.

> I see no problem in line of theory. Keep in mind however that wrapping the world to reset it around the origin might be more problematic than expected. I can tell for sure Bullet (a physics API I'm using) wouldn't allow you to do so. If you're going to write your own physics, keep in mind the periodic "re-centering" operation will likely have to be supported somehow.

Well, I'm planning on coding everything, I'm doing this for fun^^
What other kinds of infinite world algorithms are there out there? I searched the first 10 pages of Google and each camera implementation is the same: the camera never moves, the world does. Therefore I don't see any other way to do it.

> It is possible to do something like what you have drawn in the first two images, let the camera move forward for a few chunks (say 1024*16 units) and keep streaming (letting the model "grow", without changing camera position). Then, wrap to the origin only when a certain rationale is met, such as distance from the origin > 1024*16. This is fairly easy: just subtract a constant such as 1024*16 along a certain axis.

Why? Basically you are asking me to do exactly the same thing but to increase the "reset" distance from 1 chunk to 1024 chunks. What will it change?

Edited by sliders_alpha, 30 April 2012 - 10:50 AM.
### #7 Ravyne GDNet+ - Reputation: 9585

Posted 30 April 2012 - 11:54 AM

That's more or less sound; what you're doing is generally called "streaming" -- However, you probably don't want to decide which "chunks" of the world to cache based on the direction the player is facing, since the player can turn quite rapidly. I would cache chunks based on distance to the player, and perhaps use direction of travel as a secondary influence -- so basically, cache concentric rings around the player from the inside out, and on those rings start with the chunks directly in front of the player, going around the ring in either direction until they meet behind the player.

Within that set you might use a heat-map to identify commonly-traveled areas, so that you know to get updates for those more frequently, if it matters for your game. For example, most of the time when I play Minecraft I'm at my home, or in a nearby area where I hunt mobs.

### #8 sliders_alpha Members - Reputation: 107

Posted 30 April 2012 - 01:30 PM

Well, I'm not choosing which chunks to cache, I'm caching EVERY chunk around the player. For now I'm only working with 49 chunks, but storing 600-900 chunks in RAM is more than acceptable. It would take around 400 MB, and today's computers have at least 4 GB of RAM, so why shouldn't I use it?

Edited by sliders_alpha, 30 April 2012 - 01:40 PM.

### #9 Acharis Crossbones+ - Reputation: 4240

Posted 30 April 2012 - 02:39 PM

I would do it like this, assuming the visibility range is 10 units:
- Load all chunks within 12 units range. Do not use any separate threading, just load chunks one by one without hurry; you are loading these before they are needed (since you have a 2-chunk buffer).
- Unload all chunks that are farther than 30 units range (much bigger than the visibility range and the load range). Players frequently go back and forth, so unloading prematurely is a big waste.
Do not forget the bottleneck is CPU, not memory, nowadays. You should waste some memory in order to give the CPU some break.

Europe1300.eu - Historical Realistic Medieval Sim (RELEASED!) PocketSpaceEmpire - turn based 4X with no micromanagement FB Twitter DevLog

### #10 sliders_alpha Members - Reputation: 107

Posted 30 April 2012 - 03:09 PM

It's true that expanding the cache size from radius+1 to radius*radius would allow me to load chunks in a dedicated thread. However, are you sure that it changes anything when a player is going back and forth? Look at this: if the player is going back and forth, the red lines keep getting loaded/unloaded. But not having to be in a hurry to load chunks is quite nice^^

### #11 Ravyne GDNet+ - Reputation: 9585

Posted 30 April 2012 - 03:10 PM

> Do not forget the bottleneck is CPU, not memory nowadays. You should waste some memory in order to give the CPU some break.

Your reasoning is actually exactly the opposite of reality. CPUs are *highly* sensitive to memory latency and cache misses. The CPU can't do any useful work at all if it's waiting for the data it needs to operate on. Your conclusion is, however, on the mark. The reason you'd "waste" memory by pre-caching information you're not actively using is to move the data closer to the CPU within the memory hierarchy (typically: CPU -> CPU registers, L1$, L2$, L3$, RAM, HDD). The nearer you are to the CPU, the greater the bandwidth and the less latency involved. Bandwidth across SATA is more than an order of magnitude slower than typical bandwidth to main memory. Mechanical storage like traditional HDDs or optical discs usually isn't able to saturate the available bandwidth, and introduces seek latencies in addition.
Edited by Ravyne, 30 April 2012 - 03:12 PM.
### #12 Krohm Crossbones+ - Reputation: 3410
Posted 01 May 2012 - 02:50 AM
> Basically you are asking me to do exactly the same thing but to increase the "reset" distance from 1 chunk to 1024 chunks. What will it change?
If the re-centering operation is perfect, it will change almost nothing. If not, this reduces the chance of a hiccup to a fraction. It also involves slightly less transform matrix uploading - but I'd expect the benefit not to be measurable. In general, the camera should be allowed to somehow move as expected or, at least, some system must keep track of the "real" position. But if you just want to do minecraft, I suppose there will be no problems anyway.
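The re-centering trick described above (subtract a constant once the camera drifts past a threshold, while an accumulated offset preserves the "real" position) fits in a few lines. A minimal language-agnostic sketch in Python, using the 1024*16 figure from the thread; the names and single-axis simplification are illustrative, not from any real engine:

```python
CHUNK_SIZE = 16
RECENTER_DIST = 1024 * CHUNK_SIZE  # wrap only after drifting this far

def recenter(camera_x, world_offset_x):
    """Wrap the camera back toward the origin once it drifts too far.

    The 'real' position (camera_x + world_offset_x) is unchanged; we only
    keep the local coordinate small so FP precision stays healthy.
    """
    if abs(camera_x) >= RECENTER_DIST:
        shift = RECENTER_DIST if camera_x > 0 else -RECENTER_DIST
        camera_x -= shift
        world_offset_x += shift
    return camera_x, world_offset_x

# after walking 1024 chunks (plus 40 units) forward, wrap exactly once:
cam, off = recenter(1024 * 16 + 40, 0)
```

Because the shift is an exact multiple of the chunk size, chunk-grid indices stay aligned after every wrap, which is what makes the operation "perfect" as long as FP precision holds.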
### #13 Malabyte Members - Reputation: 589
Posted 01 May 2012 - 04:31 AM
Minecraft isn't infinite *grrrr*
> but how to move it back to where he was last time?
With one big-ass text-file. (jk, I got no idea)
Edited by DrMadolite, 01 May 2012 - 04:35 AM.
- Awl you're base are belong me! -
- I don't know, I'm just a noob -
### #14 Waterlimon Crossbones+ - Reputation: 2836
Posted 01 May 2012 - 04:57 AM
> however are you sure that it changes anything for when a player is going back and forth?
> look at this, if the player is going back and forth, the red lines keep getting loaded/unloaded.
> but not having to be in a hurry to load chunks is quite nice^^
I think he meant that you unload chunks only if you go far away from them, but you only load them when they get into the original cache area. So you don't unload already loaded chunks until you need to, because you might come back soon.
o3o
### #15 noizex Members - Reputation: 941
Posted 01 May 2012 - 05:19 AM
> however are you sure that it changes anything for when a player is going back and forth?
> look at this, if the player is going back and forth, the red lines keep getting loaded/unloaded.
> but not having to be in a hurry to load chunks is quite nice^^
>
> I think he meant that you unload chunks if you go far away from them, but you only load them if they get in the original cache area. So that you don't unload already loaded chunks until you need to because you might come back soon.
Yeah, this would allow you to unload unneeded chunks and not fall into this load/unload loop if the player moves back and forth. If he does, the closest chunks will always be in cache, so you can just fetch the reference and do whatever you want (render etc.)
But I'd make the decision about unloading chunks not distance-based - I suggest making a limited "list" or "deque" that holds references to chunk data; it will act as a cache that can store, for example, 500 chunks. You set the limit, or allow the user to choose based on his available memory. Whenever you hit the limit, you remove the oldest chunk as you insert a new one. Whenever you reference a chunk for rendering, you push that chunk to the top of that "deque"; this gives you chunks ordered by last access time. This way you can freely remove chunks from the bottom of the list, because those are the oldest chunks and are not used anymore.
There is of course the problem of quickly accessing a required chunk (based on its coordinates?), but you can keep two structures - one linked list or some kind of deque to keep information about chunks' last access time (used to unload unused chunks), and another storage for fast chunk access (hash or dictionary kind) so you can quickly ask for a chunk at (x, y, z); if it's in cache it will fetch the reference. I assume that asking for a chunk from the rendering engine would always return a cached chunk, because we ensure that chunks within some distance around the player are prefetched even if they're not needed right now.
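The two-structure idea above (fast coordinate lookup plus eviction by last access time) is exactly a least-recently-used cache. As a language-agnostic sketch, Python's `OrderedDict` happens to provide both the hash lookup and the access ordering in one object; the 500-chunk limit is just the example figure from the post (in Java, a `LinkedHashMap` in access order plays the same role):

```python
from collections import OrderedDict

class ChunkCache:
    """Keeps at most `limit` chunks, evicting the least recently accessed one."""

    def __init__(self, limit=500):
        self.limit = limit
        self.chunks = OrderedDict()  # (x, y, z) -> chunk data, oldest access first

    def get(self, coord):
        chunk = self.chunks.get(coord)
        if chunk is not None:
            self.chunks.move_to_end(coord)  # touched: now most recently used
        return chunk

    def put(self, coord, chunk):
        self.chunks[coord] = chunk
        self.chunks.move_to_end(coord)
        if len(self.chunks) > self.limit:
            self.chunks.popitem(last=False)  # evict the oldest entry

cache = ChunkCache(limit=2)
cache.put((0, 0, 0), "a")
cache.put((1, 0, 0), "b")
cache.get((0, 0, 0))       # touch (0,0,0) so it is no longer the oldest
cache.put((2, 0, 0), "c")  # cache full: evicts (1,0,0) instead
```

This gives the "inertia" described below: chunks the player keeps revisiting stay cached, and eviction only happens when genuinely new chunks push out long-untouched ones.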
Edit: Added a diagram so it explains better what I had in mind. As you can see, it gives us some sort of 'inertia' where chunk loading doesn't take place immediately when the player moves (forward, or back and forth) but only when we have to load completely new chunks.
Edited by noizex, 01 May 2012 - 06:05 AM.
### #16 sliders_alpha Members - Reputation: 107
Posted 01 May 2012 - 09:06 AM
> Minecraft isn't infinite *grrrr*

A VERY BIG procedural world then

Thanks for the pic noizex, it really helped =D
However, erasing according to the last access time sounds quite tricky to implement, so I made a simpler version (used your graph, hope you don't mind):
Edited by sliders_alpha, 01 May 2012 - 09:07 AM.
# 5.8 Modeling using variation
A quantity $y$ varies inversely with the square of $x$. If $y=8$ when $x=3$, find $y$ when $x$ is 4.
$\frac{9}{2}$
## Solving problems involving joint variation
Many situations are more complicated than a basic direct variation or inverse variation model. One variable often depends on multiple other variables. When a variable is dependent on the product or quotient of two or more variables, this is called joint variation. For example, the cost of busing students for each school trip varies with the number of students attending and the distance from the school. The variable $c$, cost, varies jointly with the number of students, $n$, and the distance, $d$.
## Joint variation
Joint variation occurs when a variable varies directly or inversely with multiple variables.
For instance, if $x$ varies directly with both $y$ and $z$, we have $x=kyz$. If $x$ varies directly with $y$ and inversely with $z$, we have $x=\frac{ky}{z}$. Notice that we only use one constant in a joint variation equation.
## Solving problems involving joint variation
A quantity $x$ varies directly with the square of $y$ and inversely with the cube root of $z$. If $x=6$ when $y=2$ and $z=8$, find $x$ when $y=1$ and $z=27$.
Begin by writing an equation to show the relationship between the variables.
$x=\frac{k{y}^{2}}{\sqrt[3]{z}}$
Substitute $x=6$, $y=2$, and $z=8$ to find the value of the constant $k$.
$\begin{array}{ccc}\hfill 6& =& \frac{k{2}^{2}}{\sqrt[3]{8}}\hfill \\ \hfill 6& =& \frac{4k}{2}\hfill \\ \hfill 3& =& k\hfill \end{array}$
Now we can substitute the value of the constant into the equation for the relationship.
$x=\frac{3{y}^{2}}{\sqrt[3]{z}}$
To find $x$ when $y=1$ and $z=27$, we will substitute values for $y$ and $z$ into our equation.
$\begin{array}{ccc}\hfill x& =& \hfill \frac{3{\left(1\right)}^{2}}{\sqrt[3]{27}}\\ & =& 1\hfill \end{array}$
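The two steps of the worked example (solve for the constant $k$ from the known point, then evaluate the relationship at the new point) can be checked numerically. A short sketch mirroring the example above; the function names are just illustrative:

```python
def solve_k(x, y, z):
    # From x = k * y**2 / z**(1/3): isolate the constant of variation k.
    return x * z ** (1 / 3) / y ** 2

def evaluate(k, y, z):
    # Evaluate x = k * y**2 / z**(1/3) at a new point (y, z).
    return k * y ** 2 / z ** (1 / 3)

k = solve_k(6, 2, 8)        # 6 * cbrt(8) / 2**2 = 6 * 2 / 4 = 3
x_new = evaluate(k, 1, 27)  # 3 * 1**2 / cbrt(27) = 3 / 3 = 1
```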
A quantity $x$ varies directly with the square of $y$ and inversely with $z$. If $x=40$ when $y=4$ and $z=2$, find $x$ when $y=10$ and $z=25$.
$x=20$
Access these online resources for additional instruction and practice with direct and inverse variation.
Visit this website for additional practice questions from Learningpod.
## Key equations
Direct variation: $y=k{x}^{n}$, $k$ a nonzero constant.

Inverse variation: $y=\frac{k}{{x}^{n}}$, $k$ a nonzero constant.
## Key concepts
• A relationship where one quantity is a constant multiplied by another quantity is called direct variation. See [link] .
• Two variables that are directly proportional to one another will have a constant ratio.
• A relationship where one quantity is a constant divided by another quantity is called inverse variation. See [link] .
• Two variables that are inversely proportional to one another will have a constant multiple. See [link] .
• In many problems, a variable varies directly or inversely with multiple variables. We call this type of relationship joint variation. See [link] .
## Verbal
What is true of the appearance of graphs that reflect a direct variation between two variables?
The graph will have the appearance of a power function.
If two variables vary inversely, what will an equation representing their relationship look like?
Is there a limit to the number of variables that can vary jointly? Explain.
No. Multiple variables may jointly vary.
## Algebraic
For the following exercises, write an equation describing the relationship of the given variables.
On-line ISSN 2448-6795 · Print ISSN 1665-5346
### Rev. mex. econ. finanz vol.14 no.1 México ene./mar. 2019
#### https://doi.org/10.21919/remef.v14i1.360
Research article
Impact of Income Taxes on Wages. A Non Parametric Analysis of the Mexican Case by Gender
Impacto de los impuestos al salario. Un análisis no paramétrico del caso de México por Género
1Centro de Investigación en Alimentación y Desarrollo (CIAD), México
2Centro de Investigación en Alimentación y Desarrollo (CIAD), México. Université Laval, Canada
3Universidad de Sonora (UNISON), México
Abstract
The objective of this work is to measure the effect of income tax on wages in Mexico with respect to gender and working hours. A microsimulation of the tax burden of formal wage earners was carried out, using a non-parametric method to avoid assuming a functional relationship of variables through the use of the Gaussian kernel function. The results show that the tax burden only falls on 60% of salaried employees. Men bear a higher tax burden than women and an increase in working hours does not cause a significant increase in the payment of the tax. It is recommended to establish strategies in favor of better salaries that allow an increase in collection. The limitation of the study is that it does not consider the incidence of corporate or self-employed taxes. The originality of this is to offer an alternative measure of income tax considering wages and working hours without establishing a priori conditions between them. The conclusion is that a low-wage labor market will not contribute to the strengthening of public income.
Keywords: Tax burden; Income taxes; Wages; Non-parametric analysis; Redistribution
JEL Classification: H22; H24; J31; C14; H23
Resumen
El objetivo del trabajo es medir el efecto del impuesto sobre la renta en los salarios en México en función del sexo y las horas trabajadas. Realizamos microsimulación de la carga impositiva de los asalariados formales y utilizamos un método no paramétrico para evitar asumir una relación funcional de las variables, mediante el uso de la función kernel Gaussiana. Los resultados muestran que la carga impositiva solo recae en el 60 por ciento de los asalariados. Los hombres soportan una mayor carga tributaria que las mujeres y un aumento en las horas de trabajo no causa un aumento significativo en el pago del impuesto. Se recomienda establecer estrategias a favor de mejores salarios que permita incrementar la recaudación. La limitante del estudio es que no considera la incidencia de impuestos de sociedades o trabajadores por cuenta propia. La originalidad es ofrecer una medición alternativa del ISR teniendo en cuenta los salarios y las horas de trabajo sin establecer condiciones a priori entre ellos. Se concluye que un mercado laboral de bajos salarios no contribuirá al fortalecimiento de los ingresos públicos.
Palabras clave: Carga fiscal; Impuestos directos; Salarios; Análisis no paramétrico, Redistribución
Clasificación JEL: H22; H24; J31; C14; H23
1. Introduction
Tax systems in Latin American countries are characterized by low tax collection relative to their level of development. According to information from the Economic Commission for Latin America and the Caribbean (ECLAC), in 2013 Chile, Mexico, Panama, the Dominican Republic and Venezuela showed the widest gap with respect to the world's average. Meanwhile, from 2000 to 2014 Argentina, Bolivia, Brazil, Ecuador and Nicaragua stood out for having improved their tax collection as a percentage of their GDP. By comparison, Mexico grew its tax collection by merely 3 percentage points of its GDP, among the lowest in the group. Thus, ECLAC recommends implementing a series of structural reforms to the tax system, direct and indirect, to reinforce tax collection in most Latin American countries (ECLAC, 2016).
The tax system implemented in Mexico in 2014 seeks to lower its dependency on oil revenues while improving its tax structure in order to make it more equitable. That is, a better distributed tax burden such that equals are treated equally (Musgrave, 1990). The addition of new income tax rates to people earning over 750 thousand pesos seeks to make the tax system more equitable. According to the estimates made by Mexico’s internal revenue service (SHCP, by its initials in Spanish) for 2016, this would impact the top 1 percent while leaving the remaining 99 percent unaffected.
Mexico's fiscal system gets most of its tax revenues from the income tax. In 2014 it represented 53 percent, which is 5.56 percentage points of GDP. Out of that revenue stream, 49 percent comes from taxes seized from wage earners and 51 percent from taxes seized from companies, which are 2.74 and 2.82 percentage points of GDP respectively. Tax collection from personal income grew by 19.39 percent from 2012 to 2014. Wage earners were the tax base with the largest increase during this period, registering a 25 percent growth in tax payments. Meanwhile, tax revenue coming from companies grew by merely 14 percent (SHCP, 2016).
But given the low wage levels in Mexico, is the current fiscal structure adequate for current incomes in terms of direct taxes? Specifically, does the new personal income tax scheme have an impact on labor? That is, do these structural changes translate into more equitable tax burdens in terms of wage levels or hours worked? Our research estimates the effect of the income tax on wages.
Unlike the literature on personal income tax incidence, this study offers an alternative non-parametric measure, without setting a priori conditions between variables and letting the data speak for itself. The use of a non-parametric approach avoids assuming any sort of functional relationship, letting the data provide evidence regarding the response between the tax burden and the different groups of taxpayers. Thus, the purpose is to measure the effect of income taxes on wages for Mexico, differentiating workers by sex and hours worked. We also reveal the extent of low wage earners receiving an earned income tax credit in Mexico. We find that only 60 percent of wage earners earn enough income to pay income taxes without any tax credit, while almost 40 percent of the workforce is unable to pay their full tax share. Our calculations reveal that men bear a greater burden than women when faced with the marginal rates in the new tax structure.
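This excerpt does not spell out the estimator, but a standard non-parametric choice consistent with "Gaussian kernel, no functional form imposed" is Nadaraya-Watson kernel regression of the tax burden on wages. The following is a generic sketch of that estimator, not the paper's actual implementation; the data and bandwidth are entirely made up for illustration:

```python
import math

def nw_estimate(x0, xs, ys, bandwidth):
    """Nadaraya-Watson estimator with a Gaussian kernel: the fitted value
    at x0 is a locally weighted average of ys, with weights decaying
    smoothly in |x - x0|; no parametric form is imposed."""
    weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# toy data only: a tax burden rising with the gross wage
wages = [100.0, 200.0, 300.0, 400.0]
burden = [0.00, 0.05, 0.10, 0.15]
est = nw_estimate(250.0, wages, burden, bandwidth=50.0)
```

The bandwidth controls the smoothing: a small bandwidth lets the fitted curve follow the data closely, while a large one averages over wider wage ranges.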
Our work is presented in the following order. The next section shows a review of national and international research literature relevant to the study of direct taxation. The following section explains the model and theoretical framework of our work as well as the mathematical derivation of our functioning estimates. The fourth section shows the empirical application and the results of our study. In the fifth and final section we present our concluding remarks and recommendations.
2. Literature Review
The prevailing research literature has focused on the analysis of the progressiveness of direct taxation considering income among other relevant figures, such as social security and transfers, taking into account its impact on horizontal inequity - i.e., a fiscal system that treats equal taxpayers unevenly (Musgrave, 1990). On the other hand, research on labor participation linked to the tax burden on income is described in detail in recent surveys of the topic, such as Keane (2011) and Mastrogiacomo et al. (2017).
The role exerted by income taxation and the international tax-burden
Recent studies on the effect of taxes on Canada's taxpayers have considered four central aspects: (1) the income tax, (2) household subsidies and government tax credits, (3) intergenerational transfers and social welfare, and (4) unemployment benefits. Using data from Canada's consumer finances from 1991, studies by Duclos and Tabi (1996) and Davidson and Duclos (1997) estimated an index of progressiveness focusing on tax redistribution. This index evaluates the progressiveness of Canada's fiscal system as a whole, taking into consideration the effect of taxes and transfers - income being one of the most progressively taxed components in Canada.
Perrote, Rodríguez and Salas (2002) estimated the horizontal inequity of the income tax on Spain's taxpayers. They found that from 1988 to 1995 the effect of direct taxes and the total redistributive effect of the fiscal system were crucial in the increase of welfare of the citizenry. They illustrate the importance of direct taxation for redistribution among the different cohorts of taxpayers, as well as the importance of having a more efficient method of tax collection.
Comparative studies among different countries show the effects of income distribution among different types of taxpayers and households (OECD, 2005; Joumard, Pisu and Bloch, 2012; Paturot, Mellbye and Brys, 2013; Lustig, Pessino and Scott, 2014; ECLAC, 2016; Castañeda, 2016). Overall, they show that the tax rate along with the system of transfers reduces vertical inequity of income redistribution among most members of the Organization for Economic Cooperation and Development (OECD). These studies claim that a more redistributive tax system leads to higher levels of tax collection. Paturot, Mellbye and Brys (2013) analyze the progressiveness of payroll taxes for 34 OECD countries and consider the effect of direct taxes upon five intervals of wage income - i.e., between 50 and 200 percent of each country’s mean wage. According to their results tax rates are more progressive among the lower income levels, decreasing along higher levels.
According to Lustig, Pessino and Scott (2014), the tax and transfer systems in countries like Argentina, Brazil and Uruguay show a decrease in inequality and poverty. Meanwhile, countries like Mexico, Bolivia and Peru show a decreased redistributive effect as a result of a low collection of income tax with respect to their GDP, in spite of their progressive direct taxes. Similarly, ECLAC (2016) points out that tax revenues from direct taxation are insufficient to have a significant impact on redistribution. Additionally, the personal income tax in Latin-American countries averaged 1.3 percent of GDP in 2011, less than one fifth of the OECD average (8.5 percent). The effective tax rate on personal income in Latin America is also distinctively low (2.3 percent) with a diminished effect on the reduction of inequality - i.e., -2.1 points of the Gini coefficient. By comparison, the average effective tax rate in OECD countries is 13.3 percent, reducing inequality by -11.6 points (ECLAC, 2016: 104-105).
In contrast Castañeda (2016) claims that for revenues purpose the efficiency of the personal income tax has gone hand-in-hand with Latin America’s opening up to trade as well as a low (personal income) tax burden. Castañeda also provides evidence showing countries in Latin America relied on a greater burden upon indirect taxation - i.e., Value Added Tax - rather than personal income taxes to fund an increase in government spending to afford greater coverage for health and education.
Rodriguez (2014) showed a decrease in the informal sector of Colombia’s economy as a result of a reduction in tax rates - especially in income taxes. He suggests that after 20 years of implementing reduction in rates employment would grow by 2.93 percent. Thus the informal sector would decrease at a rate of 9.02 percent, from 39.1 to 35.57 percent of GDP. Rodriguez proposes an income tax of 24.5 percent, calibrated as a simple average of the effective marginal tax rates in Colombia. This level is similar to that of Mexico circa 2002. Since, there has been a decrease in the effective marginal tax rates in Mexico to 18 percent in recent years (Huesca and Araar, 2016).
According to some studies, Mexico’s tax system as a whole is progressive. Their empirical evidence over three decades (1984-2014) shows that unlike revenue from indirect taxes, revenue from direct taxes bestow progressiveness upon its tax system (Vargas, 2010; Scott, 2014; Huesca and Araar, 2016). These studies have also attributed a limited redistributive impact on the tax system due to low tax productivity, multiple exemptions and inefficient collection (Scott, 2014; Huesca and Araar, 2016).
Relevant research on the labor and income-taxation.
The role of labor and its burden in the fiscal system has been an issue of remarkable relevance. For instance, Keane (2011) surveys the topic, focusing on the effects of wages and taxes and separating those effects by sex. His research studies this relationship considering the output of the literature from a three-model perspective: static models, life-cycle models with savings, and life-cycle models with both savings and human capital. When women are analyzed separately, fixed-cost variables and demographics such as fertility or marriage interact with human capital.
Considerable controversy can be found in the literature over labor supply and its response to changes in wages and taxes. At least for males, Keane (2011) confirms small estimates of labor supply elasticities; once corrected for the controls above, the elasticities become greater. That study also documents a faster reaction in the labor market for women, whether in deciding to work more or in being able to pay a greater share of the income tax burden than men. Finally, the elasticities are greater for female workers than for male workers.
Another important study, by Mastrogiacomo et al. (2017), examines how responsive labor supply in the Netherlands is to income taxation as well as other benefits. The main findings show that differences in labor supply responses between households with and without children are much bigger than those presented in the literature. An efficient tax benefit system should take into account the substantial heterogeneity in behavioral responses to labor market participation. On the contrary, in Germany the study of Hayo and Uhl (2015) reveals that gender does not have a significant influence on labor market participation when payroll taxes change, but it is relevant for self-employed labor supply decisions.
After reviewing extensive empirical evidence, Meghir and Phillips (2010) found that there is a consensus on an annual labor supply elasticity for women close to 1; however, when weekly hours are considered for the estimates, the elasticities tend to be quite small (ranging from 0.0 to 0.3). In line with the generalized belief that women are more responsive to incentives - which implies that they respond more to tax changes - the authors conclude that the effect of tax changes on women's hours worked (particularly married women and lone mothers) is slightly stronger than it is for men.
Also, wage subsidies (such as the EITC program) have consequences for labor supply. The empirical literature has documented generally positive impacts, such as growth of net incomes for low-income families and improvement in children's wellbeing; likewise, the decline in labor participation of married women is offset by the rise in participation of single mothers (Nichols and Rothstein, 2015). However, in the United States (US) the EITC expansions have effectively subsidized married women to stay at home (Eissa and Williamson, 2004). Despite this, the EITC is highly effective as an antipoverty policy, since most of the credit recipients in the US were either below the poverty line or were helped to close the poverty gap; hence, the EITC is considered the largest cash or near-cash transfer for low income households in the US (Scholz, 1994). For those reasons, EITC expansions have been of high interest for fiscal policy makers.
For the Mexican case, Alberro (2018) proposes a negative income tax (NIT), an EITC-like program, to eradicate chronic poverty and decrease income inequality. Eligibility is restricted to the working population aged over 18 with total incomes under a certain threshold (which considers two parameters: the poverty line and the income poverty gap). The NIT also encourages workers to increase their income. According to the author's estimates, the NIT's fiscal cost represents 1.25% of GDP.
Conclusions from the literature review
The review of the current literature on income taxation and labor participation gives us a clear picture of how good or bad jobs are linked to the burden exerted through a fiscal system. In general we can conclude that:
• A limited redistributive impact of tax systems is registered, due to low tax productivity, multiple exemptions and inefficient collection of public revenues.
• Personal income tax has gone hand-in-hand with Latin America’s opening to international trade where a decreasing pattern on income-tax burden was found.
• A shift towards a tax collection system mostly funded by indirect taxes can be observed.
• Higher female labor participation is observed even under higher fiscal pressure from tax payments, while their male counterparts tend to work fewer and fewer hours.
• EITC programs are effective as an antipoverty policy, even though different response on labor force participation by gender is observed.
3. Model and determinants of the willingness to work and pay taxes
In order to determine the income tax payments levied upon wage earners, we develop a theoretical model based upon a simple deterministic fiscal system. We express said system as the sum of the net wage (N), income tax payments (T) and social security contributions (SSC), forming an equation that describes the gross wage of workers (X) as:3
X=N+T+SSC (1)
We assume each worker faces a tradeoff between higher wages and less leisure, or more hours worked. The dilemma the representative worker faces between an increase in wages and the implied increase in tax burden can be represented by a simple utility function. We specify a utility function Uij(C,H,X) for worker i ∈ {1,2,...,n} of gender j ∈ {1,2} - i.e., j = 1 represents women - where C represents the consumption of a basket of goods, H represents hours worked, and X represents the worker's gross wage given a tax burden T.4 The utility function of worker i is affected by the tax burden T upon his or her wage income Xij. Thus, we likewise represent this direct taxation burden T(X) upon workers under the corresponding tax brackets and law, such that:
T=X-N-SSC (2)
where N is the net disposable income and SSC stands for the social security contributions. We formulate the hypothesis that, for a rational worker with ordered preferences, an increase in working hours does not necessarily imply a greater tax burden, and that male workers bear the brunt of the tax burden in the fiscal system. For the purposes of this model, there are constant returns to scale between hours worked and wages; that is, additional hours worked are paid at the same wage rate.
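A minimal sketch of the identities in Equations (1)-(2), using the 2016 sample means reported in Table A.2 of the Appendix (the function name is ours, not the paper's):

```python
# Sketch of the fiscal identities X = N + T + SSC and T = X - N - SSC.
# Sample values are the yearly means from Table A.2; the helper name
# income_tax is a hypothetical label for illustration.

def income_tax(gross_wage, net_wage, ssc):
    """Equation (2): T = X - N - SSC."""
    return gross_wage - net_wage - ssc

X, N, SSC = 114_109.0, 101_869.0, 2_782.0  # 2016 means, Table A.2
T = income_tax(X, N, SSC)
assert N + T + SSC == X                    # Equation (1) holds by construction
print(T)  # → 9458.0, matching the mean income tax in Table A.2
```

The decomposition is deterministic by construction, which is why the paper later adds a stochastic error term in Equations (5) and (11).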
3.1. A non-parametric method of analysis
The usual empirical approach analyzes agent behavior through a demand schedule with its corresponding elasticities to estimate the marginal efficiency of any given tax. Instead, we employ non-parametric techniques to allow the data to ‘speak for itself’ and minimize the effect of outliers in the distribution of income upon our observed results (Duclos and Araar, 2006, 267).5
Theorem 1. Given a differentiable and continuous kernel density function, there is a weighted kernel of the form K((X − Xi)/h) that reduces the bias among "neighboring" individuals in the data near the Ti's, with weights inversely proportional to the distance X − Xi, such that (nh)^0.5 (m̂(X) − m(X)) converges, where h ∝ n^−0.2 is the bandwidth, with h → 0 and nh → ∞ as n → ∞, and where the conditional variance of the tax burden is bounded accordingly (see Härdle, 1990, Proposition 3.1.1).
In Definition 1 we explain why a Gaussian kernel function is preferred in our empirical application.
Definition 1. The definition of the estimator function f̂(Ti) takes into account the continuity and differentiability properties of the kernel function K(u). For the sake of convenience, a kernel function symmetric around zero is selected, with ∫K(u)du = 1, ∫uK(u)du = 0 and ∫u²K(u)du = σ²K > 0. A Gaussian-type function meets the properties described in this definition, and has the form:
K(u) = (2π)^−0.5 exp(−0.5u²) (3)
Proof 1. The "noise" under this Gaussian kernel has the traditional bell shape and is easily differentiable at any desired order, with a rate of convergence guaranteed since σ²K = 1.

Proof 2. Thus we can differentiate (3) and introduce the bandwidth h to rewrite the expression for the Gaussian kernel estimator:
Kh(u) = (1 / (h√(2π))) exp(−0.5λi u²) (4)

where −0.5λi is a factor of convergence to the real values.
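The moment conditions in Definition 1 and the unit variance stated in Proof 1 can be verified numerically for the Gaussian kernel in (3); this is an illustrative check of ours, not part of the authors' procedure:

```python
import numpy as np

# Numerical check of Definition 1 for the Gaussian kernel of Equation (3):
# it integrates to one, has zero mean and unit variance (sigma_K^2 = 1).

def gauss_kernel(u):
    return (2.0 * np.pi) ** -0.5 * np.exp(-0.5 * u ** 2)

u = np.linspace(-8.0, 8.0, 100_001)   # fine grid; tails beyond |8| are negligible
du = u[1] - u[0]
K = gauss_kernel(u)
print(round(float((K * du).sum()), 4))           # → 1.0   (∫K du)
print(round(abs(float((u * K * du).sum())), 4))  # → 0.0   (∫uK du, by symmetry)
print(round(float((u**2 * K * du).sum()), 4))    # → 1.0   (∫u²K du = sigma_K²)
```

The unit second moment is what lets the bandwidth h alone, rather than the kernel shape, govern the bias-variance tradeoff of Theorem 1.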
We introduce the non-parametric estimation for the tax burden, Ti, as the predicted response to the income tax in the term m(Xi), such that: 6
Ti = m(Xi) + εi, where E[εi] = 0 (5)
The function m(Xi) = E[Ti|Xi] overcomes the empirical inability to observe a direct and linear response of the tax rates to Xi that could explain the functional and deterministic form of the tax burden recorded in Ti. We calculate an estimate using a local non-parametric regression that connects the m points, combining Theorem 1 and Expression (5) and employing ordinary least squares without an a priori functional specification. Thus the tax burden can be estimated through f̂(Ti), weighting each observation by the square root of the Gaussian kernel, Ki(X)^0.5, such that: 7
Ki(X)^0.5 Ti = α Ki(X)^0.5 + β Ki(X)^0.5 (Xi − X) + εi (6)
where α and β are the coefficients for the constant term and the estimated slope, respectively, Xi represents worker i's gross wage and εi is a random term. The estimated density function of the tax burden f̂(Ti) is represented by the estimator Ki, as described in (6) under the conditions established in Proof 2. From the equation in (6) we obtain the estimators for Ti = T(Xi) as well as Tij = T(Hij), with their respective predictions given by:
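A sketch of the kernel-weighted local regression in Equation (6), under the reading that scaling both sides by Ki(X)^0.5 lets ordinary least squares recover the local level α and slope β. The data and function names below are synthetic illustrations, not ENIGH records:

```python
import numpy as np

# Local linear regression as in Equation (6): scale the constant, the
# regressor (Xi - X) and Ti by Ki(X)^0.5, then run OLS at each point X.

def local_linear(x_data, t_data, x0, h):
    """Return (alpha, beta): local level and slope of E[T|X] at x0."""
    u = (x_data - x0) / h
    k = np.exp(-0.5 * u ** 2) / (h * np.sqrt(2.0 * np.pi))  # Gaussian Ki(X)
    w = np.sqrt(k)                                          # Ki(X)^0.5 scaling
    design = np.column_stack([w, w * (x_data - x0)])
    coef, *_ = np.linalg.lstsq(design, w * t_data, rcond=None)
    return coef[0], coef[1]

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 2_000)
t = 0.25 * x + rng.normal(0.0, 0.05, x.size)  # a flat 25-percent "schedule"
alpha, beta = local_linear(x, t, x0=5.0, h=0.8)
# alpha should be close to m(5) = 1.25 and beta close to the slope 0.25
```

The paper runs the actual estimation with the cnpe.ado module of DASP (see footnote 7); this snippet only mirrors the mechanics of (6).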
E[Ti | Xi] = βi (7)
E[Tij | Hij] = γij (8)
We use the first partial derivatives to estimate the marginal income tax rates with respect to gross wage levels, a non-parametric conditioning of the form:
E[dTi/dXi | Xi] = βi^δ (9)
Similarly we use first partial derivatives to estimate marginal income tax rates with respect to hours worked. This non-parametric conditioning measure intends to capture the marginal utility of an individual’s wage with respect to tax payments, using hours worked to register this procedure for the worker’s sex:
E[dTij/dHij | Hij] = γij^δ (10)
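Under the same reading, the local slope evaluated along a grid of gross wages traces the marginal rates of (9)-(10). A sketch on a synthetic two-bracket schedule (all names and numbers are illustrative, not the Mexican tariff):

```python
import numpy as np

# The slope coefficient of the kernel-weighted local regression, read as the
# marginal tax rate E[dT/dX | X], on a synthetic schedule with a kink at 50:
# a 10 percent rate below the kink, 30 percent above it.

def local_slope(x, t, x0, h):
    w = np.sqrt(np.exp(-0.5 * ((x - x0) / h) ** 2))  # Ki(X)^0.5 weights
    design = np.column_stack([w, w * (x - x0)])
    coef, *_ = np.linalg.lstsq(design, w * t, rcond=None)
    return coef[1]                                   # slope = marginal rate

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 100.0, 5_000)
t = np.where(x < 50.0, 0.10 * x, 5.0 + 0.30 * (x - 50.0))  # kink at x = 50
rates = [local_slope(x, t, g, h=4.0) for g in (20.0, 80.0)]
print([round(r, 2) for r in rates])  # → [0.1, 0.3]
```

Far from the kink, the local slope recovers each bracket's statutory rate; near the kink it smooths between them, which is how Figures 2 and 6 should be read.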
These conditions let us identify which groups have been somewhat affected in a progressive manner with respect to their tax burdens and hours worked.
3.2. A non-parametric estimate of direct taxes
Using the kernel in Expression (3), we identify the direct tax burden T(X) as a variable influenced by attributes other than income. Thus we formulate a non-parametric estimate of the tax burden as a stochastic function of a worker's wage X, T = E[T|X] + εi, where the additional term behaves as a stochastic determinant of taxes:
T=T(X)+εi (11)
After running this non-parametric regression of T(X) with respect to X, we run a similar regression with respect to hours worked (H) such that:
T=T(H)+εi (12)
For the sake of robustness of the estimate of the non-parametric relationship between increases in T and X subject to a progressive tax burden structure, we establish the following three axioms:
Axiom 1: Assumptions about preferences in T: The representative worker is rational and orders his or her preferences according to his or her wage level (X). For an individual i that chooses more H and thereby bears a higher tax burden T, conditions sufficiently near H will be reflected in T, if and only if:
∂(Xi/Hi)/∂Hi ≥ 1, ∂T(X)/∂Xi > 0, sufficient condition, (13a)
∂(Xi/Hi)/∂Hi ≤ 1, necessary condition (13b)
where the necessary condition in Equation (13b) stands for regressivity in T: as long as the partial derivative ∂(Xi/Hi)/∂Hi decreases while H is higher, with 0 ≤ ∂(Xi/Hi)/∂Xi ≤ 1, then T(Xi) should be negative. In our framework, the necessary condition applies only for income levels in the range Xi ∈ [0, z+], where z+ represents a negative tax (a tax credit in our case).
Axiom 2: The representative worker shows monotonicity in his or her decisions: to wit, although a lower tax burden (T) is preferred, upon an increase in income taxes, T(X) > 0, an increase in working hours will yield a greater tax burden, ∂Tij/∂Hij ≥ 1.
Proof. Considering a utility of individual i of gender j as a function of wages and hours worked Uij (X,H), with a total derivative such that:
(∂Uij/∂Hij)dH + (∂Uij/∂Xij)dX = 0 (14)
Solving for dH/dX gives a positive slope, designating an increase in hours worked (H) in order to increase wages (X) and yielding a greater tax burden given the condition dH/dX ≥ 1, such that ∂Tij/∂Hij ≥ 1.
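The step from Expression (14) to the positive slope can be written out explicitly; the sign assumptions (hours worked are a "bad", wages a good) are implicit in the text:

```latex
% Total differential of U_ij(X,H) along an indifference curve, Eq. (14):
\frac{\partial U_{ij}}{\partial H_{ij}}\,dH
  + \frac{\partial U_{ij}}{\partial X_{ij}}\,dX = 0
\quad\Longrightarrow\quad
\frac{dH}{dX}
  = -\,\frac{\partial U_{ij}/\partial X_{ij}}{\partial U_{ij}/\partial H_{ij}}
  > 0,
% since \partial U_{ij}/\partial X_{ij} > 0 (wages are a good) and
% \partial U_{ij}/\partial H_{ij} < 0 (hours worked are a bad),
% so the ratio is negative and the minus sign makes the slope positive.
```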
Axiom 3: Consumption goods are normal goods with respect to the tax burden: Consumption for an individual i of gender j, Cij, increases with an increase in wages, Xij, and decreases with an increase in tax burdens, Tij, if and only if the following derivatives hold:
∂Tij/∂Xij > 0, ∀Xij (condition of progressiveness and greater tax burden), (15a)

∂Cij/∂Tij < 0, Tij > 0, Xij ≥ z+ (condition of reduction in consumption with a tax increase). (15b)
3.3. Methodology and formulating the income tax
We formulate the taxpayers' payments using Mexico's National Statistics Institute's (INEGI, by its initials in Spanish) Household Income and Expenditure Survey (ENIGH, by its initials in Spanish). By construction, the information is associated with three units of analysis (dwellings, households and household members); in this research we consider individuals whose source of income is wages. This survey offers a detailed description of the sources of income and characteristics of the participants and has been widely used to analyze Mexico's fiscal system (SHCP, 2016; Huesca and Araar, 2016; Lustig et al., 2014; Vargas, 2010). Hence, the estimation results can be generalized to the population, since the survey is probabilistic as well as representative of the country and its 32 states.
We use the ENIGH database for 2016 and purge the data based on sex, weekly hours worked and age.8 We limit ourselves to wage earners with their corresponding salaries from their main occupations, expressed in Mexican pesos at current prices - i.e., variable N in Equation (1). Further, we only consider individuals enrolled in a social security scheme, guaranteeing their formality and thus corroborating that the corresponding income tax payments have been made.9 Table A.1 in the annex shows the imputation sources used in the survey.
We apply the corresponding fiscal rules in the survey according to the reconstruction of the fiscal system described in Equation (1). The unit of analysis is the individual wage earner, not the household. The ENIGH 2016 survey indicates that a total of 44,834 individuals represent 22.1 million wage earners. This represents 69 percent of the real census of taxpayers enrolled with the SHCP, which reaches a grand total of 32.2 million as of August 2016 (SAT, 2017).10
The fiscal system features eleven distinct income tax rates for 2016. An annual fee must be paid at each rate according to a corresponding fixed quota, denominated in Mexican pesos, of [0.00, 114.24, 2,966.76, 7,130.88, 9,438.60, 13,087.44, 39,929.04, 73,703.40, 180,850.80, 260,850.84, 940,850.76]. The marginal income tax rates ranged from 1.92 percent to 35 percent (see Figure 1).11 Also, according to the income tax law in Mexico, workers with annual wages lower than 88,587.96 pesos are entitled to receive a tax credit which is offset against PIT liability (akin to the earned income tax credit -EITC- programs in the United States and other OECD countries).12 Hereafter, this paper refers to the Mexican tax credit as EITC. The annual amounts of the EITC were [4,884.24, 4,881.96, 4,879.44, 4,713.24, 4,589.52, 4,250.76, 3,898.44, 3,535.56, 3,042.48, 2,611.32].13
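The quota-plus-marginal-rate mechanics described above can be sketched as follows. The bracket limits and intermediate rates below are illustrative placeholders, not the 2016 tariff: only the 1.92/35 percent endpoints, one EITC amount and the 88,587.96 threshold come from the text.

```python
# Sketch of a bracket-plus-fixed-quota PIT with an EITC offset.
# BRACKETS is HYPOTHETICAL: (lower limit, fixed quota, marginal rate).
# Real 2016 limits and quotas should be taken from the Mexican tax law.

BRACKETS = [
    (0.0,            0.0, 0.0192),
    (50_000.0,     960.0, 0.1088),
    (250_000.0, 22_720.0, 0.30),
    (750_000.0, 172_720.0, 0.35),
]
EITC_THRESHOLD = 88_587.96  # annual wage ceiling for the credit (from the text)

def pit(annual_wage, eitc=4_879.44):
    """Statutory liability minus the credit for eligible low-wage earners."""
    lower, quota, rate = max(b for b in BRACKETS if b[0] <= annual_wage)
    liability = quota + rate * (annual_wage - lower)
    if annual_wage < EITC_THRESHOLD:
        liability -= eitc       # a negative result is a net transfer (EITC)
    return liability

print(pit(30_000.0) < 0)  # True: a low-wage earner receives a net credit
```

This is the mechanism behind the negative tax values in Figures 2 and 3: below the threshold the credit exceeds the statutory liability, so T is negative.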
4. Empirical Results
Figure 2 shows the distribution of the income tax burden for 2016, regardless of worker gender.14 For the sake of robustness we have formulated our estimates with a 95 percent confidence interval. These confidence intervals appear as narrow bands in both graphs of this figure, a result of the non-parametric techniques and the large sample in the survey. Recalling Equation (7), we observe increasing PIT payments over the entire range of annual wages, a result of PIT progression. Likewise, the negative income tax values for annual wages lower than 63,000 Mexican pesos capture the effect of the tax credit (see the left graph in Figure 2).
These results coincide with the marginal changes described by the equation in Expression (9). The right panel of Figure 2 shows the extent to which a monetary unit increase in gross wages yields an increase in the income tax burden over the entire distribution of wage earners' incomes. We observe a greater tax burden on annual wages exceeding 750 thousand pesos. For annual wages of a million and a half Mexican pesos the effective marginal rate reaches 25 percent.
Figure 3 shows how the EITC benefits people with a monthly wage below 7,382.33 Mexican pesos (3.34 minimum wages), or 88,587.96 per year. This validates the necessary condition in Expression (13b) of Axiom 1: workers with insufficient income, below the threshold, do not contribute a greater share of the tax payments. To wit, by using non-parametric techniques we were able to observe a high horizontal inequity in the tax system. The top 60 percent of wage earners bears the brunt of the income tax payments. Again, the confidence intervals are narrow, signaling strong robustness of the estimations across all percentiles of the population.
Recall that Mexican income tax law reduces the income tax liability of low wage earners; that is, PIT is offset with an EITC. Furthermore, if the worker is eligible for the credit, the employer must pay the worker the amount of the EITC by adding it to the salary (i.e., it serves as a subsidy). In this regard, Mexican law protects low wage earners from being made worse off by the tax system. However, such a measure reduces public revenues. The magnitude of the EITC effect reveals the existence of an enormous low-wage labor market. It is urgent to establish measures that raise salaries in line with real labor productivity; consequently, the taxpayer base would be expanded.
This evidence matches previous studies' findings of a high incidence of horizontal inequity in tax contributions in Mexico (Huesca and Araar, 2016). According to Vargas (2010), the incidence of PIT on wages worsened in 2002 compared to 1984, when 90 percent of the tax burden was paid by the last three deciles, due to the low wages of the remaining deciles of contributors who received the EITC. In subsequent research, Flores, Valero, Chapa and Bedoy (2005) found that low wages prevail in the informal sector in Mexico; if the corresponding PIT rates were applied, informal workers would potentially represent a fiscal burden -due to EITC eligibility- instead of being a source of tax collection. The Mexican results are similar to those of Guatemala, where 73 percent of employees are exempt from PIT payment (Díaz, Garcimartín and Ruiz-Huerta, 2010).
Further analysis of the marginal changes in income tax rates with respect to gross wages shows a positive change over the entire income distribution. These marginal changes remain near zero up to the 30th percentile, by virtue of the pro-poor effect of the tax system for this segment of the population (Huesca and Llamas, 2016). On the contrary, the effective tax rate for the top 1 percent of wage earners is close to 20 percent, even though the statutory tax rate is 35 percent.
Figure 4 shows the wage dispersion for four workday categories: a) less than half-time (up to 19 hours a week), b) half-time to less than full-time (20 to 34 hours a week), c) full-time (35 to 48 hours a week), and d) above full-time (more than 48 hours a week). We observe that 75 percent of wage earners (inside the box) receive up to 2,500 Mexican pesos a week in all workday categories. It is worth noticing that people with weekly wages of up to 1,703 Mexican pesos receive an EITC, showing evidence of low wage levels for a great portion of the people in Mexico regardless of working hours (see the horizontal line in Figure 4).
Intuitively, from Figure 4 one would expect larger box plots as working hours increase, since weekly gross incomes should improve with a longer workday. However, the data reflect that the representative Mexican wage earner receives a low salary regardless of longer working hours; that is, they reveal a labor market with low wages. The wage-earning tax base that pays the bulk of the income tax and does not receive an EITC is mostly represented by the distance between the top of the box and the adjacent line above, along with the corresponding outliers.
Figure 5 shows the estimated wage levels by sex as well as the change in marginal rates by hours worked. We observe that men have greater wages than women. The tangency is located at 45-50 weekly hours, a result of low wages even above full-time workdays. Comparing the locations of tangency, we observe a faster rate of wage increase for men than for women, potentially implying greater gender inequality regardless of hours worked.
The bottom panel of Figure 5 shows the marginal changes in wages with respect to hours worked. The curves for men and women cross as a result of a change in the sign of the partial derivatives of wages with respect to hours worked. We can infer that men's wages reach their highest level when working hours exceed 50 hours a week.
According to the equations in Expressions (8) and (10), we simultaneously show the estimates for the weekly tax burden by sex and hours worked. Figure 6 reveals that men make greater income tax contributions than women. Furthermore, the negative tax burden (EITC) for reduced workdays is greater for women than for men, as a result of the low wages received (see the upper panel of Figure 6 and the level reached by negative taxes for women working fewer than 11 hours a week).
The reduced tax contributions reflect low wage levels for both genders, with a greater impact on women. The hypothesis is proven inasmuch as income tax payments do not reveal a growing tendency beyond weekly full-time working hours for either women or men.
According to Equations (8) and (10), the estimates in Figure 6 depict that, in spite of working longer hours, the workdays do not translate into incomes able to contribute more direct taxes to the public purse - so Axiom 2's condition ∂Tij/∂Hij ≥ 1 fails, and, in terms of the utility function in Expression (14), the marginal changes equal zero. The marginal changes in 2016 show a declining trend for both genders (from around 48 hours for males and 43 for females). Thus a wage earner facing longer workdays in Mexico will not necessarily pay more taxes.
In light of this evidence, and despite the high progressiveness of the income tax in Mexico, Axiom 3 and Expression (15a) are confirmed, while Expression (15b) provides grounds for unraveling the low-purchasing-power labor market. This market is the recipient of the EITC and contributes less to the payment of direct taxes in Mexico, regardless of gender and of an increase in hours worked.
5. Concluding Remarks
We examine Mexican wage taxation using non-parametric techniques with microdata. Aspects such as tax incidence, progressivity and tax credits on one hand, and wage levels and tax burden by sex and hours worked on the other, are analyzed without imposing a functional relationship. Our analysis shows that Mexico's wage and labor policies have effects beyond the fiscal realm in spite of the structural improvements to the income tax - i.e., high horizontal inequity and limited efficiency of tax collection. We are able to prove our stated hypothesis: namely, income tax payments do not show an increasing trend in 2016 even with an increase in a wage earner's working hours.
Although the personal income tax in Mexico is highly progressive, our estimates reveal that the top 1 percent of wage earners bears an effective marginal tax rate of 20 percent, which is 15 percentage points below the statutory rate. Furthermore, 75 percent of wage earners reach a weekly wage of up to 2,500 Mexican pesos (120,000 annually). However, wage earners up to the 40th percentile are the main recipients of the negative tax burden (EITC), which benefits workers with yearly wages below 88,587.96 Mexican pesos.
The direct tax system redistributes the EITC to a large portion of low-income contributors. Our estimates show a high level of horizontal inequity in Mexico, since the top 60 percent of wage earners carry the payments of the personal income tax. The tax credit is not exclusive to employees with reduced workdays, since low salaries extend beyond full-time working hours in this country.
A Mexican wage earner facing longer workdays will not necessarily pay more taxes, as negative marginal changes in personal income tax payments are observed (Figure 6). The diminishing marginal tax burdens for workers with longer working hours can be attributed to a diverse array of low-paying jobs with no substantial contributions to the personal income tax. We also find a higher tax burden for men than for women based on hours worked, regardless of their income. There is a gender wage gap, with marginal income tax rates for men higher than for women for working hours beyond part-time. Moreover, the EITC for reduced workdays is greater for women than for men, as a result of the low wages received.
Our assessment is that a labor market with depressed wages and low purchasing power will not contribute to a steady strengthening of public revenues. A representative Mexican wage earner receives a low salary regardless of longer working hours. Thus, this research sets the stage for future studies linking tax collection and tax burden by workday and occupation. It also allows us to extend our analysis to databases that include the remaining tax figures (i.e., value added and excise taxes). Overall, Mexico's tax credits act as a compensation wage for poverty. This will therefore continue to yield a limited income tax base, resulting in insufficient public funds.
References
Alberro, J. (2018), Costo fiscal de erradicar la pobreza extrema en México introduciendo un impuesto negativo al ingreso, Estudios y Perspectivas, Series de la CEPAL, México, April 2018.
Araar, A. (2008), "Social Classes, Inequality and Redistributive Policies in Canada", working paper 08-17, CIRPÉE Université Laval, Canada, August 2008.
Araar, A. and Duclos, J.-Y. (2013), DASP: Distributive Analysis Stata Package, User Manual, Version 2.3, PEP, CIRPÉE and World Bank, Université Laval, June.
Castañeda, V. (2016), "La globalización y sus relaciones con la tributación, una constatación para América Latina y la OCDE", Cuadernos de Economía, Vol. 35 No. 68, pp. 379-406.
ECLAC (2016), Economic Survey of Latin America and the Caribbean 2016: The 2030 Agenda for Sustainable Development and the challenges of financing development, CEPAL LC/G.2684-P, Santiago.
Davidson, R. and Duclos, J.-Y. (1997), "Statistical Inference for the Measurement of the Incidence of Taxes and Transfers", Econometrica, Vol. 65 No. 6, pp. 1453-1465.
Díaz de Serralde, S., Garcimartín, C. and Ruiz-Huerta, J. (2010), "La paradoja de la progresividad en países de baja tributación: el impuesto a la renta en Guatemala", Revista CEPAL.
Duclos, J.-Y. and Tabi, M. (1996), "The measurement of progressivity, with an application to Canada", The Canadian Journal of Economics/Revue canadienne d'Economique, Vol. 29 (Special Issue Part I), pp. s165-s170.
Duclos, J.-Y. and Araar, A. (2006), Poverty and Equity: Measurement, Policy, and Estimation with DAD, Springer, Ottawa, Canada.
Eissa, N. and Hoynes, H. W. (2004), "Taxes and the labor market participation of married couples: the earned income tax credit", Journal of Public Economics, Vol. 88 No. 9-10, pp. 1931-1958.
Flores, D., Valero, J. N., Chapa, J. C. and Bedoy, B. (2005), "El sector informal en México: medición y cálculo para la recaudación potencial", Ciencia UANL, Vol. 8 No. 4, pp. 490-494.
Härdle, W. (1990), Applied Nonparametric Regression, Cambridge University Press.
Hayo, B. and Uhl, M. (2015), "Taxation and labour supply: Evidence from a representative population survey", Journal of Macroeconomics, Vol. 45, pp. 336-346.
Huesca, L. and Serrano, A. (2005), "Impacto Fiscal Re-distributivo Desagregado del Impuesto al Valor Agregado en México: Vías de reforma", Investigación Económica, Vol. LXIV No. 253, pp. 89-122.
Huesca, L. and Araar, A. (2016), "Comparison of the fiscal system progressivity over time: Theory and application in Mexico", Estudios Económicos, Vol. 31 No. 1, pp. 3-45.
Huesca, L. and Llamas, L. (2016), "Testing for Pro-Poorness of Growth through the Tax System: The Mexican Case", Journal of Reviews on Global Economics, Vol. 5, pp. 101-115.
INEGI (2017), Encuesta Nacional de Ingresos y Gastos de los Hogares 2016, Microdata, available at: http://inegi.org.mx
Joumard, I., Pisu, M. and Bloch, D. (2012), "Less income inequality and more growth - are they compatible? Part 3. Income redistribution via taxes and transfers across OECD Countries", OECD Economics Department Working Paper No. 926, OECD Publishing, Paris. DOI: 10.1787/5k9h296b1zjf-en
Keane, M. P. (2011), "Labor supply and taxes: A survey", Journal of Economic Literature, Vol. 49 No. 4, pp. 961-1075.
Lustig, N., Pessino, C. and Scott, J. (2014), "The impact of taxes and social spending on inequality and poverty in Argentina, Bolivia, Brazil, Mexico, Peru, and Uruguay: Introduction to the special issue", Public Finance Review, Vol. 42 No. 3, pp. 287-303.
Mastrogiacomo, M., Bosch, N. M., Gielen, M. D. and Jongen, E. L. (2017), "Heterogeneity in labour supply responses: Evidence from a major tax reform", Oxford Bulletin of Economics and Statistics.
Meghir, C. and Phillips, D. (2010), "Labour supply and taxes", in Adam, S., Besley, T., Blundell, R., Bond, S., Chote, R., Gammie, M., Johnson, P., Myles, G. and Poterba, J. (Eds.), Dimensions of Tax Design: The Mirrlees Review, Oxford University Press, Oxford, pp. 202-274.
Musgrave, R. (1990), "Horizontal equity, once more", National Tax Journal, pp. 113-122.
Nichols, A. and Rothstein, J. (2015), "The earned income tax credit (EITC)", NBER Working Paper No. 21211, National Bureau of Economic Research.
OECD (2005), Taxing Working Families: A Distributional Analysis, OECD Tax Policy Study No. 12, OECD Publishing, Paris. DOI: http://dx.doi.org/10.1787/9789264013216-en
Paturot, D., Mellbye, K. and Brys, B. (2013), "Average Personal Income Tax Rate and Tax Wedge Progression in OECD Countries", OECD Taxation Working Papers No. 15, OECD Publishing, Paris. DOI: 10.1787/5k4c0vhzsq8v-en
Perrote, I., Rodríguez, J.G. and Salas, R. (2002), "Una descomposición no paramétrica de la redistribución en sus componentes vertical y horizontal: una aplicación al IRPF", working paper 11/02, Instituto de Estudios Fiscales, Madrid.
Rodríguez, J.J. (2014), "Efectos de las políticas tributaria y fiscalizadora sobre el tamaño del sector informal en Colombia", Cuadernos de Economía, Vol. 33 No. 63, pp. 487-511.
SAT (2017), "Cifras SAT Padrón por tipo de contribuyente", available at: http://www.sat.gob.mx/cifras_sat/Paginas/datos/vinculo.html?page=giipTipCon.html (accessed 21 November 2017).
Scott, J. (2014), "Redistributive impact and efficiency of Mexico's fiscal system", Public Finance Review, Vol. 42 No. 3, pp. 368-390.
SHCP (2016), "Distribución del pago de impuestos y recepción del gasto público por deciles de hogares y personas. Resultados para el año de 2014", available at: http://www.hacienda.gob.mx/INGRESOS/ingresos_distribucion_pago/IG_2016(ENIGH2014).pdf (accessed 1 March 2016).
Scholz, J. (1994), "The earned income tax credit: participation, compliance, and antipoverty effectiveness", National Tax Journal, Vol. 47 No. 1, pp. 63-87.
Vargas, C. (2010), "¿Es redistributivo el sistema fiscal en México? La experiencia de 1984-2002", Estudios Sociales, Vol. 18 No. 35, CIAD, Hermosillo, pp. 54-97.
2 Except Cuba and Haiti.
3The deductions and exemptions described in Mexico's income tax law are not applicable to this study, given: (1) the difficulty of distinguishing the expenditures a taxpayer would prefer to deduct according to the survey answers; and (2) even if that were possible, it would require making assumptions about the deductible amounts allowed by law.
4While the amount of labor, or hours worked, is considered something undesired or a bad, we opted to use this metric instead of the amount of hours for leisure - i.e., (24 - H). This allows us to simultaneously measure the effect of a marginal change in wages upon a wage earner’s willingness to work as well as the effect of an increase in taxation upon said earner’s willingness to work for those wages.
5Instead of defining the dependent variable as a function of independent variables and parameters with known distributions, non-parametric techniques allow us to forgo any assumption about the “representative” household or economic agent in the sample. To wit, instead of assuming certainty on the average of the parameters, non-parametric methods allow us to infer through rigorous sampling with replacement of the entire data.
6Unlike the deterministic income tax rates levied upon wage earners by the authorities, this measure seeks to elicit the stochastically formulated decision by each wage earner to garnish the wages to meet that burden. Instead of simply assuming a willingness to increase working hours as a means to increase wages where the tax burden is taken as a given, we are attempting to measure the effect of an increase in a tax burden upon said willingness to work and therefore garnish the income necessary to meet it.
7In order to run these regressions we use the module cnpe.ado from Distributive Analysis Stata Package (DASP) v2.3 software programmed by Araar and Duclos (2013).
8We exclude child labor from estimations.
9We incorporate social security payments only as a means to formulate individuals’ gross salaries. We generate a social security variable to capture the corresponding social security fees for each worker in the sample, creating a category for the sort of social security affiliation for all the individuals. Drawing from the sources of income as well as the social security system, we appoint social security fees through a proxy variable for workers by mandatory scheme and the average rate of total contributions made by employers and employees. This is the grand total of the rate of contribution of all the sectors of insurance. (See Table A.1 in the Appendix.)
10See the descriptive statistics in Table A.2 in the Appendix for further detail.
11It is worth mentioning that ENIGH survey captured only 3 observations with wages exceeding 3 million Mexican pesos annually. These values were not included in Figure 1, which is why the tax bracket for 35 percent is not observed.
12In United States (US), EITC operates different than in Mexico; i.e., taxpayers must accomplish certain requirements -besides their low income condition- and fill a claiming procedure for a tax return. In Mexico, EITC is only based on earnings.
13For instance, in 2016 if a worker earned one minimum wage, annual PIT liability was $1,439 Mexican pesos, while EITC was$4,879. As a result, the low wage earner received a net EITC of \$3,440 (i.e., the difference is added to the salary).
14We have bounded our estimates to an annual income level of a million and a half Mexican pesos. Past that income level there are no strong trends due to presence of outliers, with only twenty observations. This calculation does not affect tax burdens, only a graphical spectrum that bounds the range of incomes.
Appendix: Gross Wage Definition and Data
Table A.1 Formulation of the gross wage for income tax simulation
Tax burden /a: Indicators
Personal income tax
- Direct tax on income (respective brackets and quotas in 2016)
- Tax credit (respective quotas)
Employer’s social security contributions
- By social security
- For contributory pensions
- For public housing (Government housing funds)
Employee’s social security contributions
- By social security
- For contributory pensions
- For public housing (Government housing funds)
Note: /a Obtained through imputed simulation methods.
Source: Authors’ elaboration based on information from ENIGH survey and the Secretariat of Finance and Public Credit (SHCP) sources for 2016.
Table A.2 Yearly descriptive statistics for 2016.
Variable Observations Mean Std. Dev. Min Max
All workers, 2016
Gross wage (X) 22,105,069 114,109 127,925 123 3,655,531
Net wage (N) 22,105,069 101,869 100,154 5,002 2,720,564
Income tax (T) 22,105,069 9,458 24,933 -4,882 845,843
Social security payments (SSC) 22,105,069 2,782 3,119 3 89,124
Hours worked 22,105,069 49 16 1 189
Age 22,105,069 38 12 16 95
Women
Gross wage (X) 8,093,439 103,417 109,492 123 1,665,590
Net wage (N) 8,093,439 93,233 86,645 5,002 1,271,739
Income tax (T) 8,093,439 7,662 20,407 -4,882 353,242
Social security payments (SSC) 8,093,439 2,521 2,670 3 40,608
Hours worked 8,093,439 44 15 1 188
Age 8,093,439 38 11 16 83
Men
Gross wage (X) 14,011,630 120,285 137,071 144 3,655,531
Net wage (N) 14,011,630 106,858 106,866 5,022 2,720,564
Income tax (T) 14,011,630 10,495 27,152 -4,882 845,843
Social security payments (SSC) 14,011,630 2,933 3,342 4 89,124
Hours worked 14,011,630 52 16 1 189
Age 14,011,630 39 12 16 95
Source: Authors’ elaboration based on information from ENIGH survey for 2016.
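As a sanity check on the figures above, the all-worker means in Table A.2 should decompose as gross wage = net wage + income tax + social security payments (the women's and men's rows differ by one peso due to rounding):

```python
# Mean annual figures for all workers, 2016 (Mexican pesos), from Table A.2.
gross_wage = 114_109
net_wage = 101_869
income_tax = 9_458
social_security = 2_782

# Gross wage should equal net wage plus income tax plus SSC.
print(net_wage + income_tax + social_security == gross_wage)  # True
```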
Received: September 20, 2017; Accepted: September 14, 2018
*Corresponding author. Address: Carretera a La Victoria km 0.6, Hermosillo, Sonora, México, C.P. 83304. Phone number: 52+662-289 2400 ext. 371. E-mail: lhuesca@ciad.mx
This is an open-access article distributed under the terms of the Creative Commons Attribution License
|
|
# On independent sets, 2-to-2 games, and Grassmann graphs
@article{Khot2016OnIS,
title={On independent sets, 2-to-2 games, and Grassmann graphs},
author={Subhash Khot and Dor Minzer and Shmuel Safra},
journal={Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing},
year={2016}
}
• Published 19 June 2017
• Computer Science, Mathematics
• Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing
We present a candidate reduction from the 3-Lin problem to the 2-to-2 Games problem and present a combinatorial hypothesis about Grassmann graphs which, if correct, is sufficient to show the soundness of the reduction in a certain non-standard sense. A reduction that is sound in this non-standard sense implies that it is NP-hard to distinguish whether an n-vertex graph has an independent set of size ( 1- 1/√2 ) n - o(n) or whether every independent set has size o(n), and consequently, that it…
52 Citations
On non-optimally expanding sets in Grassmann graphs
• Mathematics
Israel Journal of Mathematics
• 2021
We study the structure of non-expanding sets in the Grassmann graph. We put forth a hypothesis stating that every small set whose expansion is smaller than 1 − δ must be correlated with one of a…
UG-hardness to NP-hardness by losing half
• Mathematics, Computer Science
Electron. Colloquium Comput. Complex.
• 2019
It is shown that the reduction can be transformed in a non-trivial way to give a stronger guarantee in the completeness case: for at least (1/2 - ε) fraction of the vertices on one side, all the constraints associated with them in the Unique Games instance can be satisfied.
On the proof of the 2-to-2 Games Conjecture
This article gives an overview of the recent proof of the 2-to-2 Games Conjecture in [68, 39, 38, 69] (with additional contributions from [75, 18, 67]). The proof requires an understanding of
Tight Approximation Ratio for Minimum Maximal Matching
• Mathematics, Computer Science
IPCO
• 2019
With a stronger variant of the Unique Games Conjecture, namely the Small Set Expansion Hypothesis, the authors are able to improve the hardness result up to the factor of $\frac{3}{2}$.
Pseudorandom Sets in Grassmann Graph Have Near-Perfect Expansion
• Mathematics, Computer Science
2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS)
• 2018
We prove that pseudorandom sets in the Grassmann graph have near-perfect expansion. This completes the last missing piece of the proof of the 2-to-2-Games Conjecture (albeit with imperfect
Towards a proof of the 2-to-1 games conjecture?
• Computer Science, Mathematics
Electron. Colloquium Comput. Complex.
• 2016
A polynomial time reduction from gap-3LIN to label cover with 2-to-1 constraints is presented and an NP-hardness gap of 1/2−ε vs. ε for unique games is implied, assuming a certain combinatorial hypothesis on the Grassmann graph.
On Rich $2$-to-$1$ Games
• Computer Science, Mathematics
Electron. Colloquium Comput. Complex.
• 2019
A variant of the 2-to-1 Games Conjecture is proposed that is equivalent to the Unique Gamesconjecture and it is shown that it is equivalent in terms of hardness of approximation results that necessarily require perfect completeness.
On non-optimally expanding sets in Grassmann graphs
• Computer Science, Mathematics
Electron. Colloquium Comput. Complex.
• 2017
The Expansion Hypothesis has been subsequently proved in its full form thereby proving the agreement hypothesis of [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] and completing the proof of the 2-to-1 Games Conjecture (albeit with imperfect completeness).
$d$-to-$1$ Hardness of Coloring $4$-colorable Graphs with $O(1)$ colors
• Mathematics, Computer Science
Electron. Colloquium Comput. Complex.
• 2019
It is proved that the d-to-1 conjecture for any fixed d implies the hardness of coloring a 3-colorable graph with C colors for arbitrarily large integers C.
Small-Set Expansion in Shortcode Graph and the 2-to-2 Conjecture
• Mathematics, Computer Science
Electron. Colloquium Comput. Complex.
• 2018
Dinur, Khot, Kindler, Minzer and Safra (2016) recently showed that the (imperfect completeness variant of) Khot's 2 to 2 games conjecture follows from a combinatorial hypothesis about the soundness
#### References
SHOWING 1-10 OF 33 REFERENCES
Vertex cover might be hard to approximate to within 2-epsilon
• Mathematics, Computer Science
J. Comput. Syst. Sci.
• 2008
A stronger result is shown, namely, based on the same conjecture, vertex cover on k-uniform hypergraphs is hard to approximate within any constant factor better than k.
Approximation resistance from pairwise independent subgroups
• S. Chan
• Computer Science, Mathematics
STOC '13
• 2013
The main ingredient is a new gap-amplification technique inspired by XOR-lemmas that improves the NP-hardness of approximating Independent-Set on bounded-degree graphs, Almost-Coloring, Two-Prover-One-Round-Game, and various other problems.
The Hardness of Approximate Optima in Lattices, Codes, and Systems of Linear Equations
• Computer Science
J. Comput. Syst. Sci.
• 1997
It is shown that result 2 also holds for the Shortest Lattice Vector Problem in the l norm, and for some of these problems the same result as above is proved, but for a larger factor such as 2 1 & = n or n.
Ruling out PTAS for graph min-bisection, densest subgraph and bipartite clique
• Subhash Khot
• Mathematics, Computer Science
45th Annual IEEE Symposium on Foundations of Computer Science
• 2004
It is shown that graph min-bisection, densest subgraph and bipartite clique have no PTAS, and a way of certifying that a given polynomial belongs to a given subspace of polynomials is given.
Interactive proofs and the hardness of approximating cliques
• Mathematics, Computer Science
JACM
• 1996
The connection between cliques and efficient multi-prover interactive proofs is shown to yield hardness results on the complexity of approximating the size of the largest clique in a graph.
Improved inapproximability results for MaxClique, chromatic number and approximate graph coloring
• Subhash Khot
• Computer Science
Proceedings 2001 IEEE International Conference on Cluster Computing
• 2001
The author presents improved inapproximability results for three problems: the problem of finding the maximum clique size in a graph, the problem of finding the chromatic number of a graph, and the
Improved approximation algorithms for the vertex cover problem in graphs and hypergraphs
Improved algorithms for finding small vertex covers in bounded degree graphs and hypergraphs are obtained and an approximation algorithm for the weighted independent set problem is obtained, matching a recent result of Halldorsson.
Some optimal inapproximability results
We prove optimal, up to an arbitrary ε > 0, inapproximability results for Max-E k-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations
Conditional Hardness for Approximate Coloring
• Mathematics, Computer Science
SIAM J. Comput.
• 2009
The AprxColoring problem is studied and tight bounds on generalized noise-stability quantities are extended, which extend the recent work of Mossel, O'Donnell, and Oleszkiewicz and should have wider applicability.
A Two Prover One Round Game with Strong Soundness
• Mathematics, Computer Science
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science
• 2011
It is shown that unless NP has quasi-polynomial time deterministic algorithms, the Quadratic Programming Problem is inapproximable within factor $(\log n)^{1/6 - o(1)}$.
|
|
# 52 in simplest radical form
For example, the square roots of 9 are −3 and +3, since (−3)² = (+3)² = 9. To reduce a fraction to lowest terms (also called its simplest form), divide both the numerator and denominator by their greatest common divisor (GCD): the GCD of 6 and 8 is 2, so 6/8 reduces to 3/4. A radical expression is in its simplest form when the radicand has no square factors and is not a fraction — that is, when there are no more square roots, cube roots, 4th roots, etc. left to extract. Note that any positive real number has two square roots, one positive and one negative; the radical sign √ denotes the positive one.

To simplify a square root, factor out the largest perfect square under the radical. For example, 250 has the square factor 25, so √250 = √(25 · 10) = 5√10. Likewise √72 = √(4 · 18) = 2√18, but 18 still has the square factor 9, so 2√18 = 2√(9 · 2) = 2 · 3√2 = 6√2. For √90, a factor tree 90 → 30 · 3 → 10 · 3 · 3 → 5 · 2 · 3 · 3 collects one pair of 3s, giving √90 = 3√10. A mixed radical can also be written as an entire radical by moving the coefficient back inside: a√b = √(a²b).

Worked examples:
- √52: the factors of 52 are 1, 2, 4, 13, 26 and 52; the largest perfect square among them is 4, so √52 = √(4 · 13) = 2√13.
- √48 = √(16 · 3) = 4√3.
- √162: the perfect-square factors of 162 are 1, 9 and 81; dividing out 81 gives √162 = 9√2.
- 2√45 = 2√(9 · 5) = 2 · 3√5 = 6√5.
- √63 = √(9 · 7) = 3√7.
- When 9^(2/3) is written in simplest radical form, 9^(2/3) = ∛(9²) = ∛81 = ∛(27 · 3) = 3∛3, so 3 remains under the radical.
- √51 and √14 have no square factors, so they are already in simplest radical form (√14 ≈ 3.7417).
- √949 does not reduce to a simpler radical: 1,001 = 7 · 11 · 13 (a fun fact to know for brain teasers and for factoring), and since 949 = 1,001 − 52 and 52 is divisible by 13, so is 949; one long division then shows 949 = 13 · 73, and neither factor is a perfect square.

Applications, with answers left in simplest radical form:
- A right triangle with legs x and 3 and hypotenuse 8 satisfies x² + 3² = 8², so x² = 55 and x = √55 (≈ 7.42), which is already in simplest radical form; write √55 rather than a rounded decimal.
- A square with diagonal length 9 m has side length 9/√2 = 9√2/2 ≈ 6.36 m, to the nearest centimeter.
- If a television screen is a rectangle with a known diagonal and a width of 8 inches, its length follows from the Pythagorean theorem, as does the height reached by a ladder leaning against a wall (e.g., a 12-foot ladder against a house, or an 8-ft ladder whose base is 5.5 ft from a stone wall).
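The factor-out-the-largest-square procedure described above can be sketched in a few lines (`simplest_radical` is a hypothetical helper written for illustration, not from any particular textbook):

```python
def simplest_radical(n):
    """Return (a, b) with sqrt(n) == a * sqrt(b) and b square-free."""
    a, b = 1, n
    f = 2
    while f * f <= b:
        while b % (f * f) == 0:   # pull the square factor f^2 outside
            b //= f * f
            a *= f
        f += 1
    return a, b

print(simplest_radical(52))    # (2, 13)  ->  2*sqrt(13)
print(simplest_radical(48))    # (4, 3)   ->  4*sqrt(3)
print(simplest_radical(162))   # (9, 2)   ->  9*sqrt(2)
print(simplest_radical(51))    # (1, 51)  ->  already in simplest form
```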
Called the radical is in simple radical form of the television ) 63 3 the radicals not. Factor out one term for every two same terms of â162 that there ⦠20, 48, 52.. ) = 2 ( 3â27 ( 2x ) 52 in simplest radical form math and other.., the answer in simplest radical form calculator is a fun fact to know for brain teasers and factoring. Length 9 m. what is the length, width, and diagonal measurements form a Triple. 63 3 leans the top of a stone wall is: â 51 radical... Enter two known sides or angles 52 in simplest radical form calculate unknown side, angle or area,. Be written in simplest form means to get the simplest radical form and Fractional Exponents can enter two sides. Some question write me using the contact form or email me on mathhelp @ mathportal.org 13, then is! Against the top of an 8-ft ladder against the side length in simplest form! Given fraction ) 63 3 ( 2x ) 54 f ) 63 3: what is the side a! Or email me on mathhelp @ mathportal.org simplest form means to get the simplest radical just! 9 or 27 ) 3 a square root of 162 means to get the number 48 inside the radical as... Form and the width is 8 inches, what is 55 in simplest,. ) 54 f ) 91 g ) 3135 i ) 3 number to the radical sign what... For reducing 48 into simplest radical form write-in simplest radical form, which value remains the... Form when the radicand is not a fraction ) 91 g ) 3135 i ) 3 20,,... Of 51 is already in the simplest radical form of â52 52 13 is close enough to 1001 use. So that there ⦠20, 48, 52 13 =~ ( 7 first... ) 53 f ) 63 3 it says write the answer in simplest form means to get simplest...: express the following products and express answers in simplest form when the radicand is not a.! This online simplest radical form SAY that a square root â25 *.! Triangle given any 2 values also explore many more calculators covering geometry, math other. Number to the nearest centimeter nearest centimeter long division shows that 949 = ( 7 ) Remove ⦠is! 
Unknown side, angle or area expressions step-by-step 3 ) 33 i ) 112 4 out term. C ) 3256 e ) 360 g ) 28 h ) 33 i ) 3 500 5,! 73 ) rests against the side length in simplest radical form 51 simplest radical form just follow on from we., width, and perimeter of a right triangle looks calculator the â is... If you want to contact me, probably have some question write me the. Number 51 inside the radical sign 3â27 ( 2x ) ) 3 500 5 d ) 600 e ) f! And for factoring the nearest centimeter answers in simplest form, when the is. Be 55. it says write the answer is: â 51 simplest radical form just means simplifying a,! The radical sign means to get the simplest radical form and the specified trigonometric as. Site Might Help you triangle Problems that it is in simple radical form its simplest form, if.! So is 949 for reducing 48 into simplest radical form calculator the â symbol is called the radical sign find! 3135 i ) 3 easier where the value is displayed in a fraction out to be it. Form or email me on mathhelp @ mathportal.org ps: 949 is 1,001-52, and of! 52 is divisible by 13, then so is 949 5^5/2 3 3 25 is already in first! Triangle Problems remains under the radical â as low as possible root, that! On from what we learned in the simplest radical form ) is a fun fact to some! Is 1,001-52, and perimeter of a stone wall or angles and calculate unknown side angle! Division shows that 949 = ( 13 ) is a free online tool displays... And easier where the value is displayed in a fraction find the first 2 in. For reducing 48 into simplest radical form as you can enter two known sides or angles and calculate unknown,. Of 51 in its simplest form @ mathportal.org given radical is also in simplest radical form â52! Given any 2 values the + portion of the television if you want to contact,! Stone wall and the specified trigonometric ratios as fractions in lowest terms ( Objective 2 2... 1,001 = ( 7 ) Order of the given radical term â63 3... 
Can 52 in simplest radical form radicals, we need to know some rules about them the... 500 5 easier where the value is displayed in a fraction of seconds square has dlagonal 9... Does not reduce square factors that there ⦠20, 48, 52 express in simplest radical form 949... Simplified form of the given fraction or angles and calculate unknown side angle. Me using the contact form or email me on mathhelp @ mathportal.org is close enough to 1001 to use to... Root radical is 2 is close enough to 1001 to use calculator to solve triangle! We need to know some rules about them Share on Twitter Share on Twitter Share on Share... One negative ) 2 x 3 y - 7 5 this Site Help. Steps and how the right triangle calculator to solve right triangle looks ) first, substitute in simplest... 9^2/3 is write-in simplest radical form, which value remains under the radical as. Value remains under the radical sign ft from the wall 51 in its simplest form, if.. On mathhelp @ mathportal.org if possible: what is the side length of the given fraction in their simplest calculator. Of a right triangle given any 2 values 5^5/2 3 3 25 algebra Intermediate algebra Problems! 3, 9 or 18 ) 9 ) 62 e ) 53 f ) 63 3 screen. Email me on mathhelp @ mathportal.org w e SAY that a square has length... Radical in simplest radical form Exponents and Fractional Exponents simplify radicals, we have to out. Nights test where the value is displayed in a fraction length of the to! Any such number, so â949 does not reduce angles and calculate unknown side,,. How the right triangle given any 2 values ladder rests against the side length simplest. Write in simplest radical form calculator simplifies any positive real number has two square roots, one positive and negative! Two same terms free online tool that displays the simplified form of â52 radical., we have to factor out one term for every two same terms form or email me on mathhelp mathportal.org! 
Repairman leans the top of a house on Twitter Share on Twitter Share on that!, find the following products and express answers in simplest radical form express the following in... Is close enough to 1001 to use the Arabian Nights test triangle given any 2.. Inches, what is 2â45 expressed in simplest radical form and the width is 8 inches, what 55... Know some rules about them calculators covering geometry, math and other topics ( 4x+3 ) =\~ ( 7 Order. There ⦠20, 48, 52 13 2 ( 3â27 ( 2x ), 52! ¦ there is n't any such number, so that it is in simplest radical.... Angle or area ) 52 b ) 62 e ) 54 f ) 63 3 divisible by,! Is the length, angle, height, area, and 52 is divisible by 13 then. 53 f ) 63 3 value is displayed in a fraction Twitter Share Twitter! Is simplified, or in its simplest form means to get the number 51 inside radical... 20, 48, 52 express in simplest form means to get the form! Before we can simplify radicals, we need to know for brain teasers and for factoring calculator â! Radicals between 0 and 100 that can be written in simplest radical..
|
|
MORE IN Mechanical Measurement and Control
MU Mechanical Engineering (Semester 5)
Mechanical Measurement and Control
May 2016
Total marks: --
Total time: --
INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
1(a) Explain generalized measurement system elements with block diagram.
5 M
1(b) What is mathematical modeling? Write significance of mathematical modeling in control systems.
5 M
1(c) Write short note on PID controller.
5 M
1(d) Write the working principle of piezoelectric accelerometer
5 M
2(a) Explain the following terms with respect to measurement system: (i) Span and Range (ii) Drift
5 M
2(b) Illustrate the working principle of 'Nozzle flapper' for displacement measurement.
5 M
2(c) Convert the following state-space system of a single-input single-output system into a transfer function: $\begin{Bmatrix} \dot{x}_1\\ \dot{x}_2 \end{Bmatrix}=\begin{bmatrix} -3 & 2\\ 1 & 1 \end{bmatrix}\begin{Bmatrix} x_1\\ x_2 \end{Bmatrix}+\begin{Bmatrix} 0\\ 2 \end{Bmatrix}u(t)\\\\y(t)=\begin{bmatrix}1 &0\end{bmatrix} \begin{Bmatrix} x_1\\ x_2 \end{Bmatrix}$
Here x1 and x2 are state variables, u(t) is the input (force) vector, and y(t) is the system response.
10 M
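One way to work question 2(c) is to form $G(s) = C(sI-A)^{-1}B$ directly. For a 2×2 SISO system this can be sketched in plain Python using the characteristic polynomial and the matrix determinant lemma (a sketch for checking hand calculations, not part of the syllabus):

```python
# G(s) = C (sI - A)^{-1} B: det(sI - A) gives the denominator, and the
# matrix determinant lemma det(sI - A + BC) - det(sI - A) = C adj(sI - A) B
# gives the numerator polynomial.

def char_poly_2x2(M):
    """Coefficients [1, c1, c0] of det(sI - M) for a 2x2 matrix M."""
    trace = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [1, -trace, det]

A = [[-3, 2], [1, 1]]
B = [0, 2]   # input column vector
C = [1, 0]   # output row vector

# A - B*C (outer product subtracted entrywise), since
# det(sI - A + BC) = det(sI - (A - BC)).
A_minus_BC = [[A[i][j] - B[i] * C[j] for j in range(2)] for i in range(2)]

den = char_poly_2x2(A)
num = [p - q for p, q in zip(char_poly_2x2(A_minus_BC), den)]

print("num:", num)  # [0, 0, 4]
print("den:", den)  # [1, 2, -5]  ->  G(s) = 4 / (s^2 + 2s - 5)
```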
3(a) With a neat sketch explain working of an Operational Amplifier (Op-amp). Enumerate limitations of the same.
5 M
3(b) What are desired, interfering and modifying inputs w.r.t. measurement of a system?
5 M
3(c) For a unity feedback system having $$G(s)=\dfrac{10(s+1)}{s^2(s+2)(s+10)}$$ determine (i) error coefficients (ii) Steady State error for input as $$1+4t+\dfrac{t^2}{2}$$
10 M
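For 3(c), since $G(s)$ has two free integrators the system is type 2, so $K_p = K_v = \infty$ and only the acceleration constant determines the steady-state error; a quick numeric check (a sketch, not a full symbolic computation):

```python
# Type-2 unity-feedback system G(s) = 10(s+1) / (s^2 (s+2)(s+10)).
# Error coefficients: Kp = lim G(s) = inf, Kv = lim s G(s) = inf,
# Ka = lim s^2 G(s) = 10(s+1) / ((s+2)(s+10)) evaluated at s = 0.
Ka = 10 * (0 + 1) / ((0 + 2) * (0 + 10))

# Input r(t) = 1 + 4t + t^2/2: the step and ramp terms contribute no
# steady-state error (Kp, Kv infinite); the parabolic term t^2/2 with
# unit coefficient contributes 1/Ka.
ess = 1 / Ka
print(Ka, ess)  # 0.5 2.0
```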
4(a) What are the different elastic transducers used for pressure measurement? Illustrate the working principle of any one in detail.
10 M
4(b) A system is represented by the characteristic equation $s^8+5s^6+2s^4+3s^2+1=0$. Examine the stability of the system by using Routh's criterion.
10 M
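Assuming the garbled equation in 4(b) reads $s^8 + 5s^6 + 2s^4 + 3s^2 + 1 = 0$, only even powers of $s$ appear, so the $s^7$ row of the Routh array is entirely zero and the auxiliary-polynomial step is required; that symmetry alone already rules out asymptotic stability. A numeric cross-check:

```python
# Numeric cross-check by root finding (assumes the characteristic
# equation is s^8 + 5s^6 + 2s^4 + 3s^2 + 1 = 0).
import numpy as np

coeffs = [1, 0, 5, 0, 2, 0, 3, 0, 1]   # s^8 down to s^0
roots = np.roots(coeffs)

# An even polynomial has roots in +/- pairs, so the roots cannot all lie
# strictly in the left half-plane: the system is not asymptotically stable.
print(max(r.real for r in roots) >= -1e-9)  # True
```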
5(a) Sketch Bode plot and assess the stability for the control system having open loop transfer function $G(S)H(S)=\dfrac{120}{(S+2)(S+10)}$
10 M
5(b) With a neat sketch explain the constructional features and working of (i) Ionization Gauge, (ii) Thermocouples
10 M
6(a) Draw the root-locus of the control system whose open-loop transfer function is given by $G(S)H(S)=\dfrac{K}{S^2(S+1)}$
10 M
6(b) With a neat sketch explain the constructional features and working of (i) digital tachometer, (ii) Rotameter.
10 M
More question papers from Mechanical Measurement and Control
|
|
## Are Maxwell's Equations The Most Pivotal Postulate of Classical Physics?
Every textbook I read seems to follow the same logic/derivation of physics:
-Gauss' Law is observed experimentally, shows us there's this thing E
-Biot-Savart's Law is observed experimentally, shows us there's this thing B
-Ampere's Law (after being fixed by Maxwell) is observed experimentally and, together with Faraday's law, defines the interrelationship between B and E
-Maxwell's equations condense these experimentally observed behaviors of these B and E things.
-Maxwell's equations are not Galilean invariant but Lorentz invariant.
-By assuming Lorentz Invariance, Linear, Temporal and Angular Homogeneity and Extremal Action we derive Relativist Mechanics which in the small energy limit gives us Classical Mechanics.
My question is this: Is there ANY WAY to divorce Maxwell's equations from those 4 experimentally observed facts (i.e. there exists something called E and it behaves this way)? Is there any appeal to symmetry or some such that allows us to say "If we assume our universe has a symmetry of the form blah and blah, we see that there MUST be some quantity E attached to each point in space and it MUST have these properties"?
Is there any more abstract way to motivate and derive Maxwells equations like we do for Classical Mechanics (like in the first chapter of Landau's mechanics) or MUST they be taken as 100% the result of experiment?
This seems to me to be the ultimate linchpin in the derivation of classical mechanics through the eyes of Euler-Lagrange and Extremal Action. We can motivate and derive everything else from some assumptions about the homogeneity of space and such but then we ALWAYS just tack on Maxwell's Equations and the existence of E and B as a matter of experimental fact.
In parallel in Quantum Mechanics we do the same things with Spin (we simply tack it on because experiment say it is there), however, when we generalize to quantum field theory, spin actually becomes a PREDICTION and not an experimental artifact. Can the same be done for Maxwell's equations?
Quote by maverick_starstrider [...] Can the same be done for Maxwell's equations?
I would settle for a derivation from Coulomb's Law + The Requirement of Lorentz Invariance. Can you derive Maxwell's Equations from those alone?
Blog Entries: 9 Recognitions: Homework Help Science Advisor Special relativity was built to offer a dynamical foundation for the theory of the electromagnetic field in vacuum. The electromagnetic field is fundamental at the classical level; Maxwell's equations are postulated and interpreted in a specially relativistic mathematical setting offered by a flat 4D manifold without boundary, called Minkowski space and denoted by $\mathbb{M}_{4}$. In other words, the e-m field is postulated, if you wish to have an axiomatic description of classical dynamics.
## Are Maxwell's Equations The Most Pivotal Postulate of Classical Physics?
Quote by maverick_starstrider Is there any more abstract way to motivate and derive Maxwells equations like we do for Classical Mechanics (like in the first chapter of Landau's mechanics) or MUST they be taken as 100% the result of experiment?
I believe what you are looking for is exactly what is done in Landau & Lifshitz, Vol. 2. After motivating a plausible form for the lagrangian of the EM field, they go on to DERIVE the Maxwell equations.
And to boot, in Zee's Quantum Field Theory in a Nutshell he derives such things as why like charges repel and the inverse square law for the Coulomb force, using the same Lagrangian mentioned above; it's pretty awesome. There's an EM book by Brau which is at a grad level but takes a modern approach. It starts with SR and (as mentioned above) proposes a Lagrangian and derives Maxwell from it. It's pretty decent, with a nice selection of exercises, though the book does have typos (but there's an errata somewhere).
Blog Entries: 9 Recognitions: Homework Help Science Advisor Though putting the Lagrangian in first place may seem elegant, I still prefer the old traditional approach employed by some books: postulating the 4 equations in vacuum and then, from the proof of incompatibility with Galilei's relativity principle (Newton's first law), constructing a new dynamics with new geometry and observables, and expressing the so-called non-covariant formalism of Maxwell in the covariant one of Einstein and Minkowski. Or one could just invent the Lagrangian density from thin air and use it to obtain everything.
Blog Entries: 9
Recognitions:
Homework Help
Quote by cmos I believe what you are looking for is exactly what is done in Landau & Lifshitz, Vol. 2. After motivating a plausible form for the lagrangian of the EM field, they go on to DERIVE the Maxwell equations.
I don't have their book right now, but can you summarize their <motivation> and final form of the Lagrangian ?
Mentor
Quote by maverick_starstrider Is there any appeal to symmetry or some such that allows us to say "If we assume our universe has a symmetry of the form blah and blah [...]"
If you're willing to start with quantum field theory, you can start by assuming local U(1) gauge invariance of the fields and invoke Noether's theorem. You get a conserved quantity which turns out to be electric charge, and a gauge field which turns out to be the electromagnetic 4-vector potential, from which you can derive the E and B fields.
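A minimal sketch of that chain of reasoning (schematic, with units and sign conventions suppressed): demanding that the matter Lagrangian stay invariant under a local phase rotation forces the replacement of the ordinary derivative by a covariant one, which introduces the gauge field $A_\mu$: $$\psi \rightarrow e^{i\theta(x)}\psi, \qquad \partial_\mu \rightarrow D_\mu = \partial_\mu + ieA_\mu, \qquad A_\mu \rightarrow A_\mu - \frac{1}{e}\partial_\mu \theta(x).$$ The simplest Lorentz- and gauge-invariant kinetic term one can then write for $A_\mu$ is $-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$ with $F_{\mu\nu}=\partial_\mu A_\nu - \partial_\nu A_\mu$, and its Euler-Lagrange equations are Maxwell's equations with the Noether current as source.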
Landau and Lifshitz give: $$S=\int_a^b\{-mcds-\frac{e}{c}A_i dx^i-\frac{1}{16\pi}\int_V F^{ik}F_{ik}dVdt\}$$

The form of the first piece of the Lagrangian is obtained by assuming merely the principle of relativity and the invariance of ds. The third term can be obtained by supposing that the fields must obey the principle of superposition (and therefore the Lagrangian must be quadratic in the field terms; this principle itself is based on experiment) and the principle of relativity (and therefore the variation of the action must use a scalar). The second term, however, requires experimental verification. The exact form of the second term cannot be obtained from first-principles considerations alone (even Landau admits this). We know that the second term is correct because it gives us the Lorentz force law, which is experimentally verified. It should be noted that the constants (pi and c) are in this Lagrangian to make the units work out to Gaussian units.

I myself see no reason to elevate action principles to be somehow "fundamental", however. In the end, the action principle itself must be experimentally verified.
Blog Entries: 9 Recognitions: Homework Help Science Advisor And what's the connection between F and A ? How is this justified ? How is the use of A justified ?
$$F_{ik}=\frac{\partial A_k}{\partial x^i}-\frac{\partial A_i}{\partial x^k}$$ Landau put "A" there to be just some 4-vector which characterizes the field. He then shows the equations of motion that are derived from such a Lagrangian, and makes a definition of the E and B fields (the equations come to be the Lorentz force law except instead of E and B, you have A's in there, and then you make the usual definitions so as to coincide with the rest of the physics community). The F's arise if you try to cast the Lorentz force law in 4-D notation. I don't want to give the full derivation here.
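For completeness, here is what that variation actually yields (Gaussian units, following the Landau & Lifshitz conventions quoted above; signs depend on the metric convention): varying the action with respect to $A_i$ gives the inhomogeneous pair of Maxwell's equations, while the antisymmetric definition of $F_{ik}$ makes the homogeneous pair an identity: $$\frac{\partial F^{ik}}{\partial x^k} = -\frac{4\pi}{c}\,j^i, \qquad \frac{\partial F_{ik}}{\partial x^l} + \frac{\partial F_{kl}}{\partial x^i} + \frac{\partial F_{li}}{\partial x^k} = 0.$$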
Blog Entries: 9 Recognitions: Homework Help Science Advisor So essentially there's no advantage to postulating the Lagrangian density, because at every step you must justify each term against the known results. I dislike this, really.
Well, the Lagrangian method MUST give solutions equivalent to the regular method, so in some sense there has to be some justification for what to put in the Lagrangian. For relativistic mechanics, the Lagrangian is no longer necessarily just T-V (it can be shown that the first term in the Lagrangian is NOT the kinetic energy). The advantage of using the Lagrangian method is "the attainment of maximum generality, unity, and simplicity of presentation", as Landau states.
Blog Entries: 9 Recognitions: Homework Help Science Advisor I don't and didn't contest the elegance and utility of the Lagrangian approach, but rather the presentation: starting with it requires, from my perspective, an unpleasant justification of each result. Normally, the Lagrangian should be derived already knowing what E, B, A, phi are and how the field equations look in the noncovariant formalism; then define A and F in the covariant formalism. Finally, postulate the Lagrangian of the field and deduce its simplest coupling to matter.
The justification for a term in the Lagrangian is that it produces the correct equations of motion, I do see how this can seem "unpleasant", but I don't think it's any worse than just supposing the equations of motion (Lorentz force law) and the field equations (Maxwell's equations) are just axiomatically true based on experimental fact. If you know of a better "derivation" I would surely like to see it!
Quote by Matterwave Landau and Lifshitz gives: $$S=\int_a^b\{-mcds-\frac{e}{c}A_i dx^i-\frac{1}{16\pi}\int_V F^{ik}F_{ik}dVdt\}$$ The second term; however, requires experimental verification. The exact form of the second term cannot be gotten from first principle considerations alone (even Landau admits this). We know that the second term is correct because it gives us the Lorentz force law which is experimentally verified.
I don't see why the second term requires experimental verification. Isn't that just the coupling of a 4-current (matter) to a field? Plugged into Lagrange's equation, that would just give you a source term.
In fact, isn't that term responsible for Huygens' principle, something desirable for waves? That's what Green's functions are - Huygens wavelets created at sources - and don't Green's functions appear in all differential equations (not necessarily just wave equations) so long as they're linear?
The coupling from the second term to the last term gets you the field equations, and the coupling from the second term to the first term gets you the Lorentz force law. Experimental verification is needed, if only to specify that you can parameterize the equations for your particle with a single parameter (charge). I'm not entirely sure I get your argument though, how would you arrive at that exact form for the second term from first principles?
|
|
# Inverse Laplace transform of the Meijer G-function
$$\mathcal M_Y(s)=\frac{1/\sqrt\pi}{\prod_{i=1}^N\Gamma(m_i)}G_{2,N}^{N,2}\left[\frac4{s^2}\prod_{i=1}^N\left(\frac{m_i}{\Omega_i}\right)\Bigg|_{m_1,m_2,\dots,m_N}^{1/2,1}\right]\tag{3}$$ $$f_Y(y)=\frac2{y\prod_{i=1}^N\Gamma(m_i)}G_{0,N}^{N,0}\left[y^2\prod_{i=1}^N\left(\frac{m_i}{\Omega_i}\right)\Bigg|_{m_1,m_2,\dots,m_N}\right]\tag{4}$$
Can anyone tell me how they got equation (4) from (3)? They say that they did the inverse Laplace transform to get it, but I don't know how! If anyone used Mathematica to solve it, that would be more than enough. Equation (3) is the moment generating function and equation (4) is the pdf.
|
|
### SLA_REFRO
Refraction
ACTION:
Atmospheric refraction, for radio or optical/IR wavelengths.
CALL:
CALL sla_REFRO (ZOBS, HM, TDK, PMB, RH, WL, PHI, TLR, EPS, REF)
##### GIVEN:
ZOBS D observed zenith distance of the source (radians)
HM D height of the observer above sea level (metre)
TDK D ambient temperature at the observer (K)
PMB D pressure at the observer (mb)
RH D relative humidity at the observer (range 0 – 1)
WL D effective wavelength of the source ($\mu m$)
PHI D latitude of the observer (radian, astronomical)
TLR D temperature lapse rate in the troposphere (K per metre)
EPS D precision required to terminate iteration (radian)
##### RETURNED:
REF D refraction: in vacuo ZD minus observed ZD (radians)
NOTES:
(1)
A suggested value for the TLR argument is 0.0065D0 (sign immaterial). The refraction is significantly affected by TLR, and if studies of the local atmosphere have been carried out a better TLR value may be available.
(2)
A suggested value for the EPS argument is 1D$-$8. The result is usually at least two orders of magnitude more computationally precise than the supplied EPS value.
(3)
The routine computes the refraction for zenith distances up to and a little beyond $90^{\circ}$ using the method of Hohenkerk & Sinclair (NAO Technical Notes 59 and 63, subsequently adopted in the Explanatory Supplement to the Astronomical Almanac, 1992 – see section 3.281).
(4)
The code is based on the AREF optical/IR refraction subroutine (HMNAO, September 1984, RGO: Hohenkerk 1985), with extensions to support the radio case. The modifications to the original HMNAO optical/IR refraction code which affect the results are:
• The angle arguments have been changed to radians, any value of ZOBS is allowed (see Note 6, below) and other argument values have been limited to safe values.
• Revised values for the gas constants are used, from Murray (1983).
• A better model for ${P}_{s}\left(T\right)$ has been adopted, from Gill (1982).
• More accurate expressions for $P{w}_{o}$ have been adopted (again from Gill 1982).
• The formula for the water vapour pressure, given the saturation pressure and the relative humidity, is from Crane (1976), expression 2.5.5.
• Provision for radio wavelengths has been added using expressions devised by A. T. Sinclair, RGO (Sinclair 1989). The refractivity model is from Rueger (2002).
• The optical refractivity for dry air is from IAG (1999).
(5)
The radio refraction is chosen by specifying WL $>100$ $\mu m$. Because the algorithm takes no account of the ionosphere, the accuracy deteriorates at low frequencies, below about 30 MHz.
(6)
Before use, the value of ZOBS is expressed in the range $\pm\pi$. If this ranged ZOBS is negative, the result REF is computed from its absolute value before being made negative to match. In addition, if it has an absolute value greater than $93^{\circ}$, a fixed REF value equal to the result for ZOBS $=93^{\circ}$ is returned, appropriately signed.
(7)
As in the original Hohenkerk & Sinclair algorithm, fixed values of the water vapour polytrope exponent, the height of the tropopause, and the height at which refraction is negligible are used.
(8)
The radio refraction has been tested against work done by Iain Coulson, JACH, (private communication 1995) for the James Clerk Maxwell Telescope, Mauna Kea. For typical conditions, agreement at the $0.1''$ level is achieved for moderate ZD, worsening to perhaps $0.5''$ – $1.0''$ at ZD $80^{\circ}$. At hot and humid sea-level sites the accuracy will not be as good.
(9)
It should be noted that the relative humidity RH is formally defined in terms of “mixing ratio” rather than pressures or densities as is often stated. It is the mass of water per unit mass of dry air divided by that for saturated air at the same temperature and pressure (see Gill 1982). The familiar $\nu ={p}_{w}/{p}_{s}$ or $\nu ={\rho }_{w}/{\rho }_{s}$ expressions can differ from the formal definition by several percent, significant in the radio case.
(10)
The algorithm is designed for observers in the troposphere. The supplied temperature, pressure and lapse rate are assumed to be for a point in the troposphere and are used to define a model atmosphere with the tropopause at 11km altitude and a constant temperature above that. However, in practice, the refraction values returned for stratospheric observers, at altitudes up to 25km, are quite usable.
REFERENCES:
(1)
Coulsen, I. 1995, private communication.
(2)
Crane, R.K., Meeks, M.L. (ed), 1976, “Refraction Effects in the Neutral Atmosphere”, Methods of Experimental Physics: Astrophysics 12B, Academic Press.
(3)
Gill, A.E. 1982, Atmosphere-Ocean Dynamics, Academic Press.
(4)
Hohenkerk, C.Y. 1985, private communication.
(5)
Hohenkerk, C.Y., & Sinclair, A.T. 1985, NAO Technical Note No. 63, Royal Greenwich Observatory.
(6)
International Association of Geodesy, XXIIth General Assembly, Birmingham, UK, 1999, Resolution 3.
(7)
Murray, C.A. 1983, Vectorial Astrometry, Adam Hilger, Bristol.
(8)
Seidelmann, P.K. et al. 1992, Explanatory Supplement to the Astronomical Almanac, Chapter 3, University Science Books.
(9)
Rueger, J.M. 2002, Refractive Index Formulae for Electronic Distance Measurement with Radio and Millimetre Waves, in Unisurv Report S-68, School of Surveying and Spatial Information Systems, University of New South Wales, Sydney, Australia.
(10)
Sinclair, A.T. 1989, private communication.
|
|
## Chapter 1 Homework
$c=\lambda v$
Porus_Karwa_2E
Posts: 72
Joined: Fri Sep 28, 2018 12:24 am
### Chapter 1 Homework
Is Chapter 1 homework due in our next discussion period or is it due on the one after that?
Lily Smith 4C
Posts: 16
Joined: Fri Sep 28, 2018 12:27 am
### Re: Chapter 1 Homework
I don't think it goes by chapter, but I think another set of questions from Set E or Set F is due at your next discussion section
Hilda Sauceda 3C
Posts: 76
Joined: Fri Sep 28, 2018 12:24 am
### Re: Chapter 1 Homework
Porus_Karwa_3B wrote:Is Chapter 1 homework due in our next discussion period or is it due on the one after that?
I think they said to do problems that were relevant to what we were learning.
shaunajava2e
Posts: 66
Joined: Fri Sep 28, 2018 12:26 am
### Re: Chapter 1 Homework
7 problems relevant to the material are due every week in your discussion section
Dimitri Speron 1C
Posts: 60
Joined: Fri Sep 28, 2018 12:17 am
Been upvoted: 1 time
### Re: Chapter 1 Homework
I believe you are supposed to do the ones from the unit on the syllabus that we are in right now. So for this week it would be anything from the "Quantum World" section
rosemarywang4i
Posts: 34
Joined: Fri Sep 28, 2018 12:28 am
### Re: Chapter 1 Homework
Dr. Lavelle said in lecture today that the homework is supposed to be designed to be flexible to how we learn— since there's a test this week, the homework questions we submit this week can be the ones from Review of Chemical and Physical Principles because it'll help us to review as long as they're not the same questions we did for last week.
Kaylee Kang 1G
Posts: 65
Joined: Fri Sep 28, 2018 12:24 am
### Re: Chapter 1 Homework
For this week, I believe we can submit homework from either the fundamentals section or chapter one.
|
|
# The Proximal Operator for the ${L}_{1}$ Regularized Least Squares Problem
I'm supposed to implement certain optimization algorithms (ISTA, FISTA) to minimize: $$\frac12 ||Ax-(Ax_0+z)||_2^2 + \lambda ||x||_1.$$
$A$ is a matrix, $x$ is a vector, $z$ is some noise filled with random data from a certain distribution. $\lambda$ is to be chosen so as to "yield a sparse solution". Ok.
My notes tell me I need to start at some $x$ and progress by: $$x^{(k+1)} = S_{\alpha \lambda}\left(x^{(k)} - \alpha A^T\left(Ax^{(k)} - \left(Ax_0 + z\right)\right)\right)$$ where $S$ is the proximal operator. I'm using $$S_a(b) = (|b| > a) * (b-a \cdot \operatorname{sign}(b))$$
which I think is the proximal operator for the $l_1$ norm, which is in the original equation. But I'm getting divergent results (i.e. $\frac{||x^{(k+1)}-x^{(k)}||_2}{||x^{(k)}||_2}$ is growing roughly logarithmically instead of converging to zero).
Am I using the wrong proximal operator here? I've tried multiple values of $\alpha$ and $\lambda$, and have looked for bugs in my code, but maybe I missed something in the math.
• What line search / step size criterion are you using? – Michael Grant Apr 11 '16 at 6:57
• @MichaelGrant Like I said I've tried many values for alpha and lambda, from 0.01 to 100. It just scales the graph bigger or smaller, but the shape (log increasing) is always pretty much the same. – Leo 254 Apr 11 '16 at 15:09
• Good question. Hopefully will be addressed soon. math.stackexchange.com/questions/1733919/… – Royi Jun 2 '16 at 5:02
First, as @MichaelGrant commented, you should compute the step size $\alpha$ properly. Here is a MATLAB code snippet:
Hess = @(x) A'*(A*x);                   % operator form of the Hessian A'*A
eigsOpt.issym = 1; eigsOpt.tol = 1e-5;
Lf = eigs(Hess, n, 1, 'LM', eigsOpt);   % largest eigenvalue = Lipschitz constant
alpha = 0.95/Lf;                        % step size safely below 1/Lf
Alternatively, you need a line search method to compute a different step length at each iteration.
Your soft-thresholding operator is correct (I assume that $*$ stands for the element-wise multiplication). An alternative way to write it is
$$S_{a\lambda}(x) = \mathrm{prox}_{a\lambda\|\cdot\|_1}(x) = \mathrm{sign}(x)*(|x|-a\lambda)_+,$$
but it's exactly the same.
Maybe your problem, however, is very badly conditioned in which case ISTA may converge after many iterations or may, in practice, not converge. Maybe first run the above code to determine the correct step length $\alpha$ and check whether it is too small. Then you can either do some simple preconditioning, or use FISTA which is somewhat less sensitive to conditioning, or try some other algorithm altogether.
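For what it's worth, here is a minimal ISTA sketch in Python/NumPy implementing exactly the update rule in the question, with the step size fixed at $1/L$ (problem sizes, $\lambda$, and the noise level are made up for illustration):

```python
import numpy as np

def soft_threshold(b, a):
    # prox of a*||.||_1 : S_a(b) = sign(b) * max(|b| - a, 0)
    return np.sign(b) * np.maximum(np.abs(b) - a, 0.0)

def ista(A, y, lam, n_iter=500):
    # Fixed step size alpha = 1/L, where L = ||A||_2^2 is the Lipschitz
    # constant of the gradient of 0.5*||Ax - y||^2.
    L = np.linalg.norm(A, 2) ** 2
    alpha = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                       # gradient of smooth part
        x = soft_threshold(x - alpha * grad, alpha * lam)
    return x

def objective(A, y, x, lam):
    return 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.abs(x))

# Illustrative problem: sparse ground truth, Gaussian sensing matrix and noise
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x0 = np.zeros(100); x0[:5] = 1.0
y = A @ x0 + 0.01 * rng.standard_normal(40)
x_hat = ista(A, y, lam=0.1)
print(objective(A, y, x_hat, 0.1) < objective(A, y, np.zeros(100), 0.1))  # True
```

With $\alpha \le 1/L$ the objective is guaranteed to decrease monotonically; a growing relative change like the one described usually means $\alpha$ exceeds $2/L$ for the particular $A$, which is why computing $L$ first (as in the MATLAB snippet above) matters more than trying many hand-picked values.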
|
|
# Connexions
# Ap0020: Self-assessment, Assignment and Arithmetic Operators
Module by: Richard Baldwin. E-mail the author
Summary: Part of a self-assessment test designed to help you determine how much you know about assignment and arithmetic operators in Java.
## Preface
This module is part of a self-assessment test designed to help you determine how much you know about object-oriented programming using Java.
The test consists of a series of questions with answers and explanations of the answers.
The questions and the answers are connected by hyperlinks to make it easy for you to navigate from the question to the answer and back.
Programming challenge questions
The module also contains a section titled Programming challenge questions . This section provides specifications for one or more programs that you should be able to write once you understand the answers to all of the questions. (Note that it is not always possible to confine the programming knowledge requirement to this and earlier modules. Therefore, you may occasionally need to refer ahead to future modules in order to write the programs.)
Unlike the other questions, solutions are not provided for the Programming challenge questions . However, in most cases, the specifications will describe the output that your program should produce.
Listings
I recommend that you open another copy of this document in a separate browser window and use the links under Listings to easily find and view the listings while you are reading about them.
## Questions
### Question 1 .
What output is produced by the program shown in Listing 1 ?
• A. Compiler Error
• B. Runtime Error
• C. 3.0
• D. 4.0
• E. 7.0
#### Listing 1: Listing for Question 1.
public class Ap010{
public static void main(
String args[]){
new Worker().doAsg();
}//end main()
}//end class definition
class Worker{
public void doAsg(){
double myVar;
myVar = 3.0;
myVar += 4.0;
System.out.println(myVar);
}//end doAsg()
}//end class definition
### Question 2
What output is produced by the program shown in Listing 2 ?
• A. Compiler Error
• B. Runtime Error
• C. 2.147483647E9
• D. 2.14748365E9
#### Listing 2: Listing for Question 2.
public class Ap011{
public static void main(
String args[]){
new Worker().doAsg();
}//end main()
}//end class definition
class Worker{
public void doAsg(){
double myDoubleVar;
//Integer.MAX_VALUE = 2147483647
int myIntVar = Integer.MAX_VALUE;
myDoubleVar = myIntVar;
System.out.println(myDoubleVar);
}//end doAsg()
}//end class definition
### Question 3
What output is produced by the following program?
• A. Compiler Error
• B. Runtime Error
• C. 2147483647
• D. 2.147483647E9
#### Listing 3: Listing for Question 3.
public class Ap012{
public static void main(
String args[]){
new Worker().doAsg();
}//end main()
}//end class definition
class Worker{
public void doAsg(){
//Integer.MAX_VALUE = 2147483647
double myDoubleVar =
Integer.MAX_VALUE;
int myIntVar;
myIntVar = myDoubleVar;
System.out.println(myIntVar);
}//end doAsg()
}//end class definition
### Question 4
What output is produced by the program shown in Listing 4 ?
• A. Compiler Error
• B. Runtime Error
• C. 2147483647
• D. 2.147483647E9
#### Listing 4: Listing for Question 4.
public class Ap013{
public static void main(
String args[]){
new Worker().doAsg();
}//end main()
}//end class definition
class Worker{
public void doAsg(){
//Integer.MAX_VALUE = 2147483647
double myDoubleVar =
Integer.MAX_VALUE;
int myIntVar;
myIntVar = (int)myDoubleVar;
System.out.println(myIntVar);
}//end doAsg()
}//end class definition
### Question 5
What output is produced by the program shown in Listing 5 ?
• A. Compiler Error
• B. Runtime Error
• C. 4.294967294E9
• D. 4294967294
#### Listing 5: Listing for Question 5.
public class Ap014{
public static void main(
String args[]){
new Worker().doMixed();
}//end main()
}//end class definition
class Worker{
public void doMixed(){
//Integer.MAX_VALUE = 2147483647
int myIntVar = Integer.MAX_VALUE;
System.out.println(2.0 * myIntVar);
}//end doMixed()
}//end class definition
### Question 6
What output is produced by the program shown in Listing 6 ?
• A. Compiler Error
• B. Runtime Error
• C. 2147483649
• D. -2147483647
#### Listing 6: Listing for Question 6.
public class Ap015{
public static void main(
String args[]){
new Worker().doMixed();
}//end main()
}//end class definition
class Worker{
public void doMixed(){
//Integer.MAX_VALUE = 2147483647
int myVar01 = Integer.MAX_VALUE;
int myVar02 = 2;
System.out.println(
myVar01 + myVar02);
}//end doMixed()
}//end class definition
### Question 7
What output is produced by the program shown in Listing 7 ?
• A. Compiler Error
• B. Runtime Error
• C. 33.666666
• D. 34
• E. 33
#### Listing 7: Listing for Question 7.
public class Ap016{
public static void main(
String args[]){
new Worker().doMixed();
}//end main()
}//end class definition
class Worker{
public void doMixed(){
int myVar01 = 101;
int myVar02 = 3;
System.out.println(
myVar01/myVar02);
}//end doMixed()
}//end class definition
### Question 8
What output is produced by the program shown in Listing 8 ?
• A. Compiler Error
• B. Runtime Error
• C. Infinity
• D. 11
#### Listing 8: Listing for Question 8.
public class Ap017{
public static void main(
String args[]){
new Worker().doMixed();
}//end main()
}//end class definition
class Worker{
public void doMixed(){
int myVar01 = 11;
int myVar02 = 0;
System.out.println(
myVar01/myVar02);
}//end doMixed()
}//end class definition
### Question 9
What output is produced by the program shown in Listing 9 ?
• A. Compiler Error
• B. Runtime Error
• C. Infinity
• D. 11
#### Listing 9: Listing for Question 9.
public class Ap018{
public static void main(
String args[]){
new Worker().doMixed();
}//end main()
}//end class definition
class Worker{
public void doMixed(){
double myVar01 = 11;
double myVar02 = 0;
System.out.println(
myVar01/myVar02);
}//end doMixed()
}//end class definition
### Question 10
What output is produced by the program shown in Listing 10 ?
• A. Compiler Error
• B. Runtime Error
• C. 2
• D. -2
#### Listing 10: Listing for Question 10.
public class Ap019{
public static void main(
String args[]){
new Worker().doMod();
}//end main()
}//end class definition
class Worker{
public void doMod(){
int myVar01 = -11;
int myVar02 = 3;
System.out.println(
myVar01 % myVar02);
}//end doMod()
}//end class definition
### Question 11
What output is produced by the program shown in Listing 11 ?
• A. Compiler Error
• B. Runtime Error
• C. 2
• D. 11
#### Listing 11: Listing for Question 11.
public class Ap020{
public static void main(
String args[]){
new Worker().doMod();
}//end main()
}//end class definition
class Worker{
public void doMod(){
int myVar01 = -11;
int myVar02 = 0;
System.out.println(
myVar01 % myVar02);
}//end doMod()
}//end class definition
### Question 12
What output is produced by the program shown in Listing 12 ?
• A. Compiler Error
• B. Runtime Error
• C. -0.010999999999999996
• D. 0.010999999999999996
#### Listing 12: Listing for Question 12.
public class Ap021{
public static void main(
String args[]){
new Worker().doMod();
}//end main()
}//end class definition
class Worker{
public void doMod(){
double myVar01 = -0.11;
double myVar02 = 0.033;
System.out.println(
myVar01 % myVar02);
}//end doMod()
}//end class definition
### Question 13
What output is produced by the program shown in Listing 13 ?
• A. Compiler Error
• B. Runtime Error
• C. 0.0
• D. 1.5499999999999996
#### Listing 13: Listing for Question 13.
public class Ap022{
public static void main(
String args[]){
new Worker().doMod();
}//end main()
}//end class definition
class Worker{
public void doMod(){
double myVar01 = 15.5;
double myVar02 = 1.55;
System.out.println(
myVar01 % myVar02);
}//end doMod()
}//end class definition
### Question 14
What output is produced by the program shown in Listing 14 ?
• A. Compiler Error
• B. Runtime Error
• C. Infinity
• D. NaN
#### Listing 14: Listing for Question 14.
public class Ap023{
public static void main(
String args[]){
new Worker().doMod();
}//end main()
}//end class definition
class Worker{
public void doMod(){
double myVar01 = 15.5;
double myVar02 = 0.0;
System.out.println(
myVar01 % myVar02);
}//end doMod()
}//end class definition
### Question 15
What output is produced by the program shown in Listing 15 ?
• A. Compiler Error
• B. Runtime Error
• C. -3 2
• D. -3 -2
#### Listing 15: Listing for Question 15.
public class Ap024{
public static void main(
String args[]){
new Worker().doMod();
}//end main()
}//end class definition
class Worker{
public void doMod(){
int x = 11;
int y = -3;
System.out.println(
x/y + " " + x % y);
}//end doMod()
}//end class definition
## Programming challenge questions
### Question 16
Write the program described in Listing 16 .
#### Listing 16: Listing for Question 16.
/*File Ap0020a1.java Copyright 2012, R.G.Baldwin
Instructions to student:
Beginning with the code fragment shown below, write a
method named doIt that:
1. Illustrates the proper use of the combined
arithmetic/assignment operators such as the following
operators:
+=
*=
**********************************************************/
public class Ap0020a1{
public static void main(String args[]){
new Worker().doIt();
}//end main()
}//end class definition
//=======================================================//
class Worker{
//-----------------------------------------------------//
//Student: insert the method named doIt between these
// lines.
//-----------------------------------------------------//
}//end class definition
//=======================================================//
### Question 17
Write the program described in Listing 17 .
#### Listing 17: Listing for Question 17.
/*File Ap0020b1.java Copyright 2012, R.G.Baldwin
Instructions to student:
Beginning with the code fragment shown below, write a
method named doIt that:
1. Illustrates the detrimental impact of integer arithmetic
overflow.
**********************************************************/
public class Ap0020b1{
public static void main(String args[]){
new Worker().doIt();
}//end main()
}//end class definition
//=======================================================//
class Worker{
//-----------------------------------------------------//
//Student: insert the method named doIt between these
// lines.
//-----------------------------------------------------//
}//end class definition
//=======================================================//
### Question 18
Write the program described in Listing 18.
#### Listing 18: Listing for Question 18.
/*File Ap0020c1.java Copyright 2012, R.G.Baldwin
Instructions to student:
Beginning with the code fragment shown below, write a
method named doIt that:
1. Illustrates the effect of integer truncation that
occurs with integer division.
**********************************************************/
public class Ap0020c1{
public static void main(String args[]){
new Worker().doIt();
}//end main()
}//end class definition
//=======================================================//
class Worker{
//-----------------------------------------------------//
//Student: insert the method named doIt between these
// lines.
//-----------------------------------------------------//
}//end class definition
//=======================================================//
### Question 19
Write the program described in Listing 19.
#### Listing 19: Listing for Question 19.
/*File Ap0020d1.java Copyright 2012, R.G.Baldwin
Instructions to student:
Beginning with the code fragment shown below, write a
method named doIt that:
1. Illustrates the effect of double divide by zero.
2. Illustrates the effect of integer divide by zero.
**********************************************************/
public class Ap0020d1{
public static void main(String args[]){
new Worker().doIt();
}//end main()
}//end class definition
//=======================================================//
class Worker{
//-----------------------------------------------------//
//Student: insert the method named doIt between these
// lines.
//-----------------------------------------------------//
}//end class definition
//=======================================================//
### Question 20
Write the program described in Listing 20.
#### Listing 20: Listing for Question 20.
/*File Ap0020e1.java Copyright 2012, R.G.Baldwin
Instructions to student:
Beginning with the code fragment shown below, write a
method named doIt that:
1. Illustrates the effect of the modulus operation with
integers.
**********************************************************/
public class Ap0020e1{
public static void main(String args[]){
new Worker().doIt();
}//end main()
}//end class definition
//=======================================================//
class Worker{
//-----------------------------------------------------//
//Student: insert the method named doIt between these
// lines.
//-----------------------------------------------------//
}//end class definition
//=======================================================//
### Question 21
Write the program described in Listing 21.
#### Listing 21: Listing for Question 21.
/*File Ap0020f1.java Copyright 2012, R.G.Baldwin
Instructions to student:
Beginning with the code fragment shown below, write a
method named doIt that:
1. Illustrates the effect of the modulus operation with
doubles.
**********************************************************/
public class Ap0020f1{
public static void main(String args[]){
new Worker().doIt();
}//end main()
}//end class definition
//=======================================================//
class Worker{
//-----------------------------------------------------//
//Student: insert the method named doIt between these
// lines.
//-----------------------------------------------------//
}//end class definition
//=======================================================//
### Question 22
Write the program described in Listing 22.
#### Listing 22: Listing for Question 22.
/*File Ap0020g1.java Copyright 2012, R.G.Baldwin
Instructions to student:
Beginning with the code fragment shown below, write a
method named doIt that:
1. Illustrates the concatenation of the following strings
separated by space characters.
"This"
"is"
"fun"
Cause your program to produce the following output:
This
is
fun
This is fun
**********************************************************/
public class Ap0020g1{
public static void main(String args[]){
new Worker().doIt();
}//end main()
}//end class definition
//=======================================================//
class Worker{
//-----------------------------------------------------//
//Student: insert the method named doIt between these
// lines.
//-----------------------------------------------------//
}//end class definition
//=======================================================//
## Listings
I recommend that you open another copy of this document in a separate browser window and use the following links to easily find and view the listings while you are reading about them.
## Miscellaneous
This section contains a variety of miscellaneous information.
### Note:
Housekeeping material
• Module name: Ap0020: Self-assessment, Assignment and Arithmetic Operators
• File: Ap0020.htm
• Originally published: January 7, 2002
• Published at cnx.org: December 1, 2012
• Revised: December 11, 2012
### Note:
Disclaimers:
Financial : Although the Connexions site makes it possible for you to download a PDF file for this module at no charge, and also makes it possible for you to purchase a pre-printed version of the PDF file, you should be aware that some of the HTML elements in this module may not translate well into PDF.
I also want you to know that, I receive no financial compensation from the Connexions website even if you purchase the PDF version of the module.
In the past, unknown individuals have copied my modules from cnx.org, converted them to Kindle books, and placed them for sale on Amazon.com showing me as the author. I neither receive compensation for those sales nor do I know who does receive compensation. If you purchase such a book, please be aware that it is a copy of a module that is freely available on cnx.org and that it was made and published without my prior knowledge.
Affiliation : I am a professor of Computer Information Technology at Austin Community College in Austin, TX.
C. -3 2
#### Explanation 15
String concatenation
This program uses String concatenation, which has not been previously discussed in this group of self-assessment modules.
In this case, the program executes both an integer divide operation and an integer modulus operation, using String concatenation to display both results on a single line of output.
Quotient = -3 with a remainder of 2
Thus, the displayed result is the integer quotient followed by the remainder.
What is String concatenation?
If either operand of the plus (+) operator is of type String , no attempt is made to perform arithmetic addition. Rather, the other operand is converted to a String , and the two strings are concatenated.
A space character, " "
The string containing a space character (" ") in this expression appears as the right operand of one plus operator and as the left operand of the other plus operator.
If you already knew about String concatenation, you should have been able to figure out the correct answer to the question on the basis of the answers to earlier questions in this module.
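As a quick illustration of the concatenation rule, the following sketch (not one of the original listings) uses the same values as Listing 15 and contrasts the case where no String operand is present:

```java
public class ConcatDemo {
    public static void main(String[] args) {
        int x = 11;
        int y = -3;
        // Left-to-right evaluation: (x / y) yields the int -3, which is
        // concatenated with " " to form "-3 "; then (x % y), which is 2,
        // is concatenated, producing "-3 2".
        System.out.println(x / y + " " + x % y);
        // Without a String operand, + performs arithmetic first:
        // 1 + 2 adds to 3, then "3" + " " + 1 + 2 concatenates.
        System.out.println(1 + 2 + " " + 1 + 2);   // prints "3 12"
    }
}
```

Note how the same + operator switches meaning mid-expression once a String operand appears.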
D. NaN
#### Explanation 14
Floating modulus operation involves floating divide
The modulus operation with floating operands and 0.0 as the right operand produces NaN , which stands for Not a Number .
What is the actual value of Not a Number?
A symbolic constant that is accessible as Double.NaN specifies the value that is returned in this case.
Be careful what you try to do with it. It has some peculiar behavior of its own.
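One peculiar behavior worth knowing: NaN is defined to be unequal to every value, including itself. A minimal sketch (the class name is mine, not from the listings) using the same 15.5 % 0.0 operation as Listing 14:

```java
public class NanDemo {
    public static void main(String[] args) {
        double result = 15.5 % 0.0;   // floating modulus with a zero right operand
        System.out.println(result);   // prints NaN

        // An equality test against Double.NaN is always false,
        // because NaN is not equal even to itself ...
        System.out.println(result == Double.NaN);   // false
        // ... so Double.isNaN is the correct way to test for it.
        System.out.println(Double.isNaN(result));   // true
    }
}
```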
D. 1.5499999999999996
#### Explanation 13
A totally incorrect result
Unfortunately, due to floating arithmetic inaccuracy, the modulus operation in this program produces an entirely incorrect result.
The result should be 0.0, and that is the result produced by my hand calculator.
Terminates one step too early
However, this program terminates the repetitive subtraction process one step too early and produces an incorrect remainder.
Be careful
This program is included here to emphasize the need to be very careful how you interpret the result of performing modulus operations on floating operands.
C. -0.010999999999999996
#### Explanation 12
Modulus operator can be used with floating types
In this case, the program returns the remainder that would be produced by dividing a double value of -0.11 by a double value of 0.033 and terminating the divide operation at the beginning of the fractional part of the quotient.
Say that again
Stated differently, the result of the modulus operation is the remainder that results after
• subtracting the right operand from the left operand an integral number of times, and
• terminating the repetitive subtraction process when the magnitude of the result is less than the magnitude of the right operand
Modulus result is not exact
According to my hand calculator, taking into account the fact that the left operand is negative, this operation should produce a modulus result of -0.011. As you can see, the result produced by the application of the modulus operation to floating types is not exact.
B. Runtime Error
#### Explanation 11
Integer modulus involves integer divide
The modulus operation with integer operands involves an integer divide.
Therefore, it is subject to the same kind of problem as an ordinary integer divide when the right operand has a value of zero.
Program produces a runtime error
In this case, the program produced a runtime error that terminated the program. The error produced by JDK 1.3 is as follows:
##### Note:
Exception in thread "main" java.lang.ArithmeticException: / by zero
at Worker.doMod(Ap020.java:14)
at Ap020.main(Ap020.java:6)
Dealing with the problem
As with integer divide, you can either test the right operand for a zero value before performing the modulus operation, or you can deal with the problem after the fact using try-catch.
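A minimal sketch of the test-before-use approach (the modOrDefault helper name is hypothetical, not from the listings):

```java
public class SafeMod {
    // Hypothetical helper: returns the remainder, or a caller-supplied
    // fallback value when the right operand is zero.
    static int modOrDefault(int left, int right, int fallback) {
        if (right == 0) {
            return fallback;   // avoid the ArithmeticException entirely
        }
        return left % right;
    }

    public static void main(String[] args) {
        System.out.println(modOrDefault(11, 0, 0));   // prints 0, no crash
        System.out.println(modOrDefault(11, 3, 0));   // prints 2
    }
}
```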
D. -2
#### Explanation 10
What is a modulus operation?
In elementary terms, we like to say that the modulus operation returns the remainder that results from a divide operation.
In general terms, that is true.
Some interesting behavior
However, the modulus operation has some interesting behaviors that are illustrated in this and the next several questions.
This program returns the modulus of -11 and 3, with -11 being the left operand.
What is the algebraic sign of the result?
Here is a rule:
The result of the modulus operation takes the sign of the left operand, regardless of the sign of the quotient and regardless of the sign of the right operand. In this program, that produced a result of -2.
Changing the sign of the right operand would not have changed the sign of the result.
Exercise care with sign of modulus result
Thus, you may need to exercise care as to how you interpret the result when you perform a modulus operation having a negative left operand.
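The sign rule can be checked directly; this sketch tries all four sign combinations of the operands from the question:

```java
public class ModSign {
    public static void main(String[] args) {
        System.out.println(-11 % 3);    // -2 : sign follows the left operand
        System.out.println(-11 % -3);   // -2 : right operand's sign is ignored
        System.out.println(11 % -3);    //  2 : positive left operand, positive result
        System.out.println(11 % 3);     //  2
    }
}
```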
C. Infinity
#### Explanation 9
Floating divide by zero
This program attempts to divide the double value of 11 by the double value of zero.
No runtime error with floating divide by zero
In the case of floating types, an attempt to divide by zero does not produce a runtime error. Rather, it returns a value that the println method interprets and displays as Infinity.
What is the actual value?
The actual value returned by this program is provided by a static final variable in the Double class named POSITIVE_INFINITY .
(There is also a value for NEGATIVE_INFINITY, which is the value that would be returned if one of the operands were a negative value.)
Is this a better approach?
Is this a better approach than throwing an exception as is the case for integer divide by zero?
I will let you be the judge of that.
In either case, you can test the right operand before the divide to assure that it isn't equal to zero.
Cannot use exception handling in this case
For floating divide by zero, you cannot handle the problem by using try-catch.
However, you can test the result following the divide to see if it is equal to either of the infinity values mentioned above.
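A sketch of the test-after-divide approach described above (the class name is mine):

```java
public class DivInf {
    public static void main(String[] args) {
        double result = 11.0 / 0.0;
        System.out.println(result);                               // Infinity
        // Compare against the symbolic constant ...
        System.out.println(result == Double.POSITIVE_INFINITY);   // true
        System.out.println(-11.0 / 0.0);                          // -Infinity
        // ... or let Double.isInfinite catch both signs in one test.
        System.out.println(Double.isInfinite(result));            // true
    }
}
```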
B. Runtime Error
#### Explanation 8
Dividing by zero
This program attempts to divide the int value of 11 by the int value of zero.
Integer divide by zero is not allowed
This produces a runtime error and terminates the program.
The runtime error is as follows under JDK 1.3:
##### Note:
Exception in thread "main" java.lang.ArithmeticException: / by zero
at Worker.doMixed(Ap017.java:14)
at Ap017.main(Ap017.java:6)
Two ways to deal with this sort of problem
One way is to test the right operand before each divide operation to assure that it isn't equal to zero, and to take appropriate action if it is.
A second (possibly preferred) way is to use exception handling and surround the divide operation with a try block, followed by a catch block for the type
##### Note:
java.lang.ArithmeticException.
The code in the catch block can be designed to deal with the problem if it occurs. (Exception handling will be discussed in a future module.)
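A sketch of the try-catch approach, assuming the same 11/0 divide as the listing (the messages are illustrative):

```java
public class DivCatch {
    public static void main(String[] args) {
        int x = 11;
        int y = 0;
        try {
            System.out.println(x / y);   // throws ArithmeticException
        } catch (ArithmeticException e) {
            // Control arrives here instead of the program terminating.
            System.out.println("Cannot divide by zero: " + e.getMessage());
        }
        System.out.println("Program continues normally");
    }
}
```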
E. 33
#### Explanation 7
Integer truncation
This program illustrates the integer truncation that results when the division operator is applied to operands of the integer types.
The result of simple long division
We all know that when we divide 101 by 3, the result is 33.666666 with the sixes extending out to the limit of our arithmetic accuracy.
The result of rounding
If we round the result to the next closest integer, the result is 34.
Integer division does not round
However, when division is performed using operands of integer types in Java, the fractional part is simply discarded (not rounded) .
The result is the whole number result without regard for the fractional part or the remainder.
Thus, with integer division, 101/3 produces the integer value 33.
If either operand is a floating type ...
If either operand is one of the floating types,
• the integer operand will be converted to the floating type,
• the result will be of the floating type, and
• the fractional part of the result will be preserved to some degree of accuracy
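A short sketch contrasting integer and floating division of 101 by 3:

```java
public class TruncDemo {
    public static void main(String[] args) {
        System.out.println(101 / 3);     // 33 : fraction discarded, not rounded
        System.out.println(101 % 3);     // 2  : the discarded remainder
        System.out.println(101 / 3.0);   // 33.666..., double arithmetic preserves it
        // Rounding, when you actually want it, must be explicit:
        System.out.println(Math.round(101 / 3.0));   // 34
    }
}
```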
D. -2147483647
#### Explanation 6
Danger, integer overflow ahead!
This program illustrates a very dangerous situation involving arithmetic using operands of integer types. This situation involves a condition commonly known as integer overflow .
The good news
The good news about doing arithmetic using operands of integer types is that as long as the result is within the allowable value range for the wider of the integer types, the results are exact (floating arithmetic often produces results that are not exact) .
The bad news about doing arithmetic using operands of integer types is that when the result is not within the allowable value range for the wider of the integer types, the results are garbage, having no usable relationship to the correct result (floating arithmetic has a high probability of producing approximately correct results, even though the results may not be exact).
For this specific case ...
As you can see by the answer to this question, when a value of 2 was added to the largest positive value that can be stored in type int , the incorrect result was a very large negative value.
The result is simply incorrect. (If you know how to do binary arithmetic, you can figure out how this happens.)
No safety net in this case -- just garbage
Furthermore, there was no compiler error and no runtime error. The program simply produced an incorrect result with no warning.
You need to be especially careful when writing programs that perform arithmetic using operands of integer types. Otherwise, your programs may produce incorrect results.
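A sketch reproducing the overflow, plus two possible guards (Math.addExact requires Java 8 or later):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int big = Integer.MAX_VALUE;          // 2147483647
        System.out.println(big + 2);          // -2147483647 : wrapped around, no warning
        // Math.addExact turns silent overflow into an exception:
        try {
            System.out.println(Math.addExact(big, 2));
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");   // a diagnostic, not garbage
        }
        // Widening to long before the addition avoids the overflow entirely:
        System.out.println((long) big + 2);   // 2147483649
    }
}
```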
C. 4.294967294E9
#### Explanation 5
Mixed-type arithmetic
This program illustrates the use of arithmetic operators with operands of different types.
Declare and initialize an int
The method named doMixed declares a local variable of type int named myIntVar and initializes it with the largest positive value that can be stored in type int .
Evaluate an arithmetic expression
An arithmetic expression involving myIntVar is evaluated and the result is passed as a parameter to the println method where it is displayed on the computer screen.
Multiply by a literal double value
The arithmetic expression uses the multiplication operator (*) to multiply the value stored in myIntVar by 2.0 (this literal operand is type double by default) .
Automatic conversion to wider type
When arithmetic is performed using operands of different types, the type of the operand of the narrower type is automatically converted to the type of the operand of the wider type, and the arithmetic is performed on the basis of the wider type.
Result is of the wider type
The type of the result is the same as the wider type.
In this case ...
Because the left operand is type double , the int value is converted to type double and the arithmetic is performed as type double .
This produces a result of type double , causing the floating value 4.294967294E9 to be displayed on the computer screen.
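A sketch contrasting the int-only and mixed-type versions of the same multiplication:

```java
public class MixedDemo {
    public static void main(String[] args) {
        int myIntVar = Integer.MAX_VALUE;   // 2147483647
        // int * int stays int and overflows:
        System.out.println(myIntVar * 2);   // -2
        // int * double promotes the int to double first, so no overflow:
        System.out.println(myIntVar * 2.0); // 4.294967294E9
    }
}
```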
C. 2147483647
#### Explanation 4
Uses a cast operator
This program, named Ap013.java , differs from the earlier program named Ap012.java in one important respect.
This program uses a cast operator to force the compiler to allow a narrowing conversion in order to assign a double value to an int variable.
The cast operator
The statement containing the cast operator is shown below for convenient viewing.
##### Note:
myIntVar = (int)myDoubleVar;
Syntax of a cast operator
The cast operator consists of the name of a type contained within a pair of matching parentheses.
A unary operator
The cast operator always appears to the left of an expression whose type is being converted to the type specified by the cast operator.
Assuming responsibility for potential problems
When dealing with primitive types, the cast operator is used to notify the compiler that the programmer is willing to assume the risk of a possible loss of precision in a narrowing conversion.
No loss of precision here
In this case, there was no loss in precision, but that was only because the value stored in the double variable was within the allowable value range for an int .
In fact, it was the largest positive value that can be stored in the type int . Had it been any larger, a loss of precision would have occurred.
More on this later ...
I will have quite a bit more to say about the cast operator in future modules. I will also have more to say about the use of the assignment operator in conjunction with the non-primitive types.
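A sketch of the cast operator's behavior, including what happens when the double value does not fit in an int (the variable names are illustrative):

```java
public class CastDemo {
    public static void main(String[] args) {
        double fits = 2147483647.0;        // exactly Integer.MAX_VALUE
        double tooBig = 3000000000.0;      // outside the int range

        System.out.println((int) fits);    // 2147483647 : no damage here
        // Out-of-range values are clamped to the nearest int limit:
        System.out.println((int) tooBig);  // also 2147483647 -- silently wrong
        // The fractional part is discarded, not rounded:
        System.out.println((int) 33.9);    // 33
    }
}
```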
A. Compiler Error
#### Explanation 3
Conversion from double to int is not automatic
This program attempts to assign a value of type double to a variable of type int .
Even though we know that the specific double value involved would fit in the int variable with no loss of precision, the conversion from double to int is not a widening conversion.
This is a narrowing conversion
In fact, it is a narrowing conversion because the allowable value range for an int is less than the allowable value range for a double .
The conversion is not allowed by the compiler. The following compiler error occurs under JDK 1.3:
##### Note:
Ap012.java:16: possible loss of precision
found : double
required: int
myIntVar = myDoubleVar;
C. 2.147483647E9
#### Explanation 2
Declare a double
The method named doAsg first declares a local variable of type double named myDoubleVar without providing an initial value.
Declare and initialize an int
Then it declares an int variable named myIntVar and initializes its value to the integer value 2147483647 (you learned about Integer.MAX_VALUE in an earlier module) .
Assign the int to the double
Following this, the method assigns contents of the int variable to the double variable.
An assignment compatible conversion
This is an assignment compatible conversion. In particular, the integer value of 2147483647 is automatically converted to a double value and stored in the double variable.
The double representation of that value is what appears on the screen later when the value of myDoubleVar is displayed.
What is an assignment compatible conversion?
An assignment compatible conversion for the primitive types occurs when the required conversion is a widening conversion.
What is a widening conversion?
A widening conversion occurs when the allowable value range of the type of the left operand of the assignment operator is greater than the allowable value range of the right operand of the assignment operator.
A double is wider than an int
Since the allowable value range of type double is greater than the allowable value range of type int , assignment of an int value to a double variable is allowed, with conversion from int to double occurring automatically.
A safe conversion
It is also significant to note that there is no loss in precision when converting from an int to a double .
An unsafe but allowable conversion
However, a loss of precision may occur when an int is assigned to a float , or when a long is assigned to a double .
What would a float produce ?
The value of 2.14748365E9 shown for selection D is what you would see for this program if you were to change the double variable to a float variable. (Contrast this with 2147483647 to see the loss of precision.)
Widening is no guarantee that precision will be preserved
The fact that a type conversion is a widening conversion does not guarantee that there will be no loss of precision in the conversion. It simply guarantees that the conversion will be allowed by the compiler. In some cases, such as that shown above , an assignment compatible conversion can result in a loss of precision, so you always need to be aware of what you are doing.
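A sketch of the conversions discussed above: int to double is exact, while int to float is a widening conversion that can still lose precision (the class name is mine):

```java
public class WidenDemo {
    public static void main(String[] args) {
        int myIntVar = 2147483647;

        double asDouble = myIntVar;   // widening, value preserved exactly
        float asFloat = myIntVar;     // widening, but float carries only ~7 digits

        System.out.println(asDouble); // 2.147483647E9
        System.out.println(asFloat);  // 2.14748365E9 : last digits lost
        // The float actually holds 2147483648 -- off by one from the original:
        System.out.println((long) asFloat);   // 2147483648
    }
}
```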
E. 7.0
#### Explanation 1
Declare but don't initialize a double variable
The method named doAsg begins by declaring a double variable named myVar without initializing it.
Use the simple assignment operator
The simple assignment operator (=) is then used to assign the double value 3.0 to the variable. Following the execution of that statement, the variable contains the value 3.0.
Use the arithmetic/assignment operator
The next statement uses the combined arithmetic/assignment operator (+=) to add the value 4.0 to the value of 3.0 previously assigned to the variable. The following two statements are functionally equivalent:
##### Note:
myVar += 4.0;
myVar = myVar + 4.0;
Two statements are equivalent
This program uses the first statement listed above. If you were to replace the first statement with the second statement, the result would be the same.
In this case, either statement would add the value 4.0 to the value of 3.0 that was previously assigned to the variable named myVar , producing the sum of 7.0. Then it would assign the sum of 7.0 back to the variable. When the contents of the variable are then displayed, the result is that 7.0 appears on the computer screen.
No particular benefit
To the knowledge of this author, there is no particular benefit to using the combined arithmetic/assignment notation other than to reduce the amount of typing required to produce the source code. However, if you ever plan to interview for a job as a Java programmer, you need to know how to use the combined version.
Four other similar operators
Java supports several combined operators. Some involve arithmetic and some involve other operations such as bit shifting. Five of the combined operators are shown below. These five all involve arithmetic.
• +=
• -=
• *=
• /=
• %=
In all five cases, you can construct a functionally equivalent arithmetic and assignment statement in the same way that I constructed the functionally equivalent statement for += above.
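A sketch exercising all five combined operators in sequence (the starting value is arbitrary):

```java
public class CombinedOps {
    public static void main(String[] args) {
        double myVar = 12.0;
        myVar += 4.0;   // myVar = myVar + 4.0  -> 16.0
        myVar -= 6.0;   // myVar = myVar - 6.0  -> 10.0
        myVar *= 3.0;   // myVar = myVar * 3.0  -> 30.0
        myVar /= 4.0;   // myVar = myVar / 4.0  -> 7.5
        myVar %= 2.0;   // myVar = myVar % 2.0  -> 1.5
        System.out.println(myVar);   // prints 1.5
    }
}
```

Beyond saved typing, the combined form does have one genuine semantic difference: it applies an implicit cast to the type of the left operand, so `byte b = 1; b += 1;` compiles while `b = b + 1;` does not.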
-end-
# Writing a percent as a fraction in simplest form
In the case that they can, do the same thing to the top and the bottom. Simplifying Complex Fractions Another common obstacle you might encounter to writing a fraction in its simplest form is a complex fraction — that is, a fraction that has another fraction in either its numerator or its denominator, or both.
The denominator is the number on the bottom of the fraction. Sciencing Video Vault So the final fraction is. This is your answer. The first step is to find out what number can go into the top number numerator AND the bottom number denominator.
What is 75 percent as a fraction in simplest form? Write percent as a fraction in simplest form? In this example, the largest factor that both numbers have in common is 2. If the GCF is 1, the fraction is in its simplest form.
Now I see we can take out another 2: Every percent is a fraction of a hundred. Expressed as a proper fraction in its simplest form, List the Common Factors Write out the factors for the numerator of your fraction, then write out the factors for the denominator. Since 11 is a prime number, this is the simplest form.
Simplify On Your Own: Now our fraction looks like this: One is when a radical or square root sign shows up in the denominator of the fraction: The number that you need to use as a divisor is the Greatest Common Factor of the numerator and denominator.What do the fractions 1/2, 2/4, 3/6, / and / have in common?
They're all equivalent, because if you reduce them all to their simplest form, they all equal the same thing: 1/2. In this example, you'd simply factor out the greatest common factors from both numerator and denominator until you arrived at 1/2.
To convert a Percent to a Fraction follow these steps: Step 1: Write down the percent divided by 100, like this: percent/100. Step 2: If the percent is not a whole number, then multiply both top and bottom by 10 for every number after the decimal point.
You can easily write a number in percentage form as a fraction in its simplest form by converting your numbers from one form to the other. A percentage can be directly converted to a fraction, or a percentage can be converted to a.
22% = 22/100; both numerator and denominator can be divided by 2 to result in 11/50. Since 11 is a prime number, this is the simplest form. SOLUTION: Write the percent as a fraction in simplest form. 3. 15% Any help with this equation would be greatly appreciated.
Write the percent as a fraction in the simplest form. 24%.
Solution. Step 1: The percent written in fraction form. 24% = $\frac{24}{100}$ Step 2: The fraction can be reduced to simplest form as follows: $\frac{24}{100} = \frac{6}{25}$.
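The steps above (write the percent over 100, then divide numerator and denominator by their greatest common factor) can be sketched as follows; the class and method names are illustrative only:

```java
public class PercentFraction {
    // Greatest common factor via Euclid's algorithm.
    static int gcf(int a, int b) {
        return b == 0 ? a : gcf(b, a % b);
    }

    // Hypothetical helper: renders a whole-number percent as
    // "numerator/denominator" in simplest form.
    static String percentAsFraction(int percent) {
        int g = gcf(percent, 100);
        return (percent / g) + "/" + (100 / g);
    }

    public static void main(String[] args) {
        System.out.println(percentAsFraction(24));   // 6/25
        System.out.println(percentAsFraction(22));   // 11/50
        System.out.println(percentAsFraction(75));   // 3/4
    }
}
```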
Huffman Codes
Huffman coding is a way of transmitting a message with fewer bits than you would naively assume it would take. For a lower-case English message, there are 26 characters (plus the space character). If these occurred with equal frequency and with no correlations to earlier or later characters in a message, the entropy per character would be $H=\log_2 27 \approx 4.75$. English characters do not have equal frequency though (as you know if you've ever played Scrabble). In 1951, David Huffman was given a problem in a graduate course to determine an efficient coding system for messages, and he came up with a tool we now call Huffman Coding, based on the frequency of different characters occurring in an average message. For an English message, characters are represented with the following strings of binary digits in Huffman's example.
| Character | Code | Bits | Character | Code | Bits |
| --- | --- | --- | --- | --- | --- |
| space | 111 | 3 | | | |
| a | 1011 | 4 | n | 0111 | 4 |
| b | 011000 | 6 | o | 1001 | 4 |
| c | 00001 | 5 | p | 101000 | 6 |
| d | 01101 | 5 | q | 11000010101 | 11 |
| e | 010 | 3 | r | 0001 | 4 |
| f | 110011 | 6 | s | 0011 | 4 |
| g | 011001 | 6 | t | 1101 | 4 |
| h | 0010 | 4 | u | 00000 | 5 |
| i | 1000 | 4 | v | 1100000 | 7 |
| j | 1100001011 | 10 | w | 110001 | 6 |
| k | 11000011 | 8 | x | 110000100 | 10 |
| l | 10101 | 5 | y | 101001 | 6 |
| m | 110010 | 6 | z | 11000010100 | 11 |
I've written a little javascript code that will take an input (make sure it doesn't have punctuation) and convert it into a Huffman-coded bitstring. You should be able to play around with different inputs to see cases where Huffman coding does well, and where it does poorly.
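As a rough sketch of what such an encoder does, here is a Java version (the class and method names are mine, not from the page) that encodes a message using a few of the codes from the table above:

```java
import java.util.Map;

public class HuffmanDemo {
    // A few of the character codes from the table above (Java 9+ Map.of).
    static final Map<Character, String> CODES = Map.of(
            ' ', "111", 't', "1101", 'h', "0010", 'i', "1000",
            's', "0011", 'f', "110011", 'u', "00000", 'n', "0111");

    // Concatenate the variable-length code of each character.
    static String encode(String message) {
        StringBuilder bits = new StringBuilder();
        for (char c : message.toCharArray()) {
            bits.append(CODES.get(c));
        }
        return bits.toString();
    }

    public static void main(String[] args) {
        String bits = encode("this is fun");
        // 11 characters at a naive 5 bits each would take 55 bits;
        // the Huffman coding above needs only 45.
        System.out.println(bits.length());
    }
}
```

Because frequent characters like space and e get short codes, typical English text compresses well, while text heavy in rare characters (q, z, x) can approach or exceed the naive length.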
# The 20 latest publications of the LPTMS
Archives:
• ## Archive ouverte HAL – When Random Walkers Help Solving Intriguing Integrals
### Satya Majumdar 1 Emmanuel Trizac 1
#### Satya Majumdar, Emmanuel Trizac. When Random Walkers Help Solving Intriguing Integrals. Physical Review Letters, American Physical Society, 2019, 123 (2), ⟨10.1103/PhysRevLett.123.020201⟩. ⟨hal-02291790⟩
We revisit a family of integrals that delude intuition, and that recently appeared in mathematical literature in connection with computer algebra package verification. We show that the remarkable properties displayed by these integrals become transparent when formulated in the language of random walks. In turn, the random walk view naturally leads to a plethora of nontrivial generalizations, that are worked out. Related complex identities are also derived, without the need of explicit calculation. The crux of our treatment lies in a causality argument where a message that travels at finite speed signals the existence of a boundary.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• ## Archive ouverte HAL – Wave breaking and formation of dispersive shock waves in a defocusing nonlinear optical material
### M. Isoard 1 A. M. Kamchatnov 2 N. Pavloff 1
#### M. Isoard, A. M. Kamchatnov, N. Pavloff. Wave breaking and formation of dispersive shock waves in a defocusing nonlinear optical material. Physical Review A, American Physical Society 2019, 99 (5), ⟨10.1103/PhysRevA.99.053819⟩. ⟨hal-02291908⟩
We theoretically describe the quasi one-dimensional transverse spreading of a light pulse propagating in a nonlinear optical material in the presence of a uniform background light intensity. For short propagation distances the pulse can be described within a nondispersive approximation by means of Riemann's approach. For larger distances, wave breaking occurs, leading to the formation of dispersive shocks at both ends of the pulse. We describe this phenomenon within Whitham modulation theory, which yields an excellent agreement with numerical simulations. Our analytic approach makes it possible to extract the leading asymptotic behavior of the parameters of the shock.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• 2. Institute of Spectroscopy
• ## Archive ouverte HAL – Topological proximity effects in a Haldane graphene bilayer system
### Peng Cheng 1 Philipp W. Klein Kirill Plekhanov 2, 3 Klaus Sengstock 4 Monika Aidelsburger Christof Weitenberg 5 Karyn Le Hur 2
#### Peng Cheng, Philipp W. Klein, Kirill Plekhanov, Klaus Sengstock, Monika Aidelsburger, et al.. Topological proximity effects in a Haldane graphene bilayer system. Physical Review B : Condensed matter and materials physics, American Physical Society, 2019, 100 (8), ⟨10.1103/PhysRevB.100.081107⟩. ⟨hal-02291915⟩
We reveal a proximity effect between a topological band (Chern) insulator described by a Haldane model and spin-polarized Dirac particles of a graphene layer. Coupling weakly the two systems through a tunneling term in the bulk, the topological Chern insulator induces a gap and an opposite Chern number on the Dirac particles at half-filling resulting in a sign flip of the Berry curvature at one Dirac point. We study different aspects of the bulk-edge correspondence and present protocols to observe the evolution of the Berry curvature as well as two counter-propagating (protected) edge modes with different velocities. In the strong-coupling limit, the energy spectrum shows flat bands. Therefore we build a perturbation theory and address further the bulk-edge correspondence. We also show the occurrence of a topological insulating phase with Chern number one when only the lowest band is filled. We generalize the effect to Haldane bilayer systems with asymmetric Semenoff masses. We propose an alternative definition of the topological invariant on the Bloch sphere.
• 1. DALEMBERT - Institut Jean Le Rond d'Alembert
• 2. CPHT - Centre de Physique Théorique [Palaiseau]
• 3. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• 4. Zentrum für Optische Quantentechnologien
• 5. MPQ - Max-Planck-Institut für Quantenoptik
• ## Archive ouverte HAL – The equilibrium landscape of the Heisenberg spin chain
### Enej Ilievski Eoin Quinn 1
#### Enej Ilievski, Eoin Quinn. The equilibrium landscape of the Heisenberg spin chain. SciPost Physics, SciPost Foundation, 2019, 7 (3), ⟨10.21468/SciPostPhys.7.3.033⟩. ⟨hal-02295879⟩
We characterise the equilibrium landscape, the entire manifold of local equilibrium states, of an interacting integrable quantum model. Focusing on the isotropic Heisenberg spin chain, we describe in full generality two complementary frameworks for addressing equilibrium ensembles: the functional integral Thermodynamic Bethe Ansatz approach, and the lattice regularisation transfer matrix approach. We demonstrate the equivalence between the two, and in doing so clarify several subtle features of generic equilibrium states. In particular we explain the breakdown of the canonical Y-system, which reflects a hidden structure in the parametrisation of equilibrium ensembles.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• ## Archive ouverte HAL – The algebraic area of closed lattice random walks
### Stephane Ouvry 1 Shuang Wu 1
#### Stephane Ouvry, Shuang Wu. The algebraic area of closed lattice random walks. Journal of Physics A: Mathematical and Theoretical, IOP Publishing, 2019, ⟨10.04098⟩. ⟨hal-02292208⟩
We propose a formula for the enumeration of closed lattice random walks of length $n$ enclosing a given algebraic area. The information is contained in the Kreft coefficients which encode, in the commensurate case, the Hofstadter secular equation for a quantum particle hopping on a lattice coupled to a perpendicular magnetic field. The algebraic area enumeration is possible because it is split into $2^{n/2-1}$ pieces, each tractable in terms of explicit combinatorial expressions.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• ## Archive ouverte HAL – Stress-dependent amplification of active forces in nonlinear elastic media
### Pierre Ronceray 1 Chase Broedersz 2 Martin Lenz 3
#### Soft Matter, Royal Society of Chemistry, 2019
The production of mechanical stresses in living organisms largely relies on localized, force-generating active units embedded in filamentous matrices. Numerical simulations of discrete fiber networks with fixed boundaries have shown that buckling in the matrix dramatically amplifies the resulting active stresses. Here we extend this result to a bucklable continuum elastic medium subjected to an arbitrary external stress, and derive analytical expressions for the active, nonlinear constitutive relations characterizing the full active medium. Inserting these relations into popular "active gel" descriptions of living tissues and the cytoskeleton will enable investigations into nonlinear regimes previously inaccessible due to the phenomenological nature of these theories.
• 1. Princeton University
• 2. Arnold Sommerfeld Center for Theoretical Physics
• 3. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• ## Archive ouverte HAL – Statistical mechanics of asymmetric tethered membranes: Spiral and crumpled phases
### Tirthankar Banerjee 1 Niladri Sarkar 2 John Toner 2 Abhik Basu
#### Tirthankar Banerjee, Niladri Sarkar, John Toner, Abhik Basu. Statistical mechanics of asymmetric tethered membranes: Spiral and crumpled phases. Physical Review E , American Physical Society (APS), 2019, 99 (5), ⟨10.1103/PhysRevE.99.053004⟩. ⟨hal-02291841⟩
We develop the elastic theory for inversion-asymmetric tethered membranes and use it to identify and study their possible phases. Asymmetry in a tethered membrane causes spontaneous curvature, which in general depends upon the local in-plane dilation of the tethered network. This in turn leads to long-ranged interactions between the local mean and Gaussian curvatures, which are not present in symmetric tethered membranes. This interplay between asymmetry and Gaussian curvature leads to a new {\em double-spiral} phase not found in symmetric tethered membranes. At temperature $T=0$, tethered membranes of arbitrarily large size are always rolled up tightly into a conjoined pair of Archimedes' spirals. At finite $T$ this spiral structure swells up significantly into algebraic spirals characterized by universal exponents which we calculate. These spirals have long range orientational order, and are the asymmetric analogs of statistically flat symmetric tethered membranes. We also find that sufficiently strong asymmetry can trigger a structural instability leading to crumpling of these membranes as well. This provides a new route to crumpling for asymmetric tethered membranes. We calculate the maximum linear extent $L_c$ beyond which the membrane crumples, and calculate the universal dependence of $L_c$ on the membrane parameters. By tuning the asymmetry parameter, $L_c$ can be continuously varied, implying a {\em scale-dependent} crumpling. Our theory can be tested in controlled experiments on lipids with artificial deposits of spectrin filaments, in in-vitro experiments on red blood cell membrane extracts, and on graphene coated on one side.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• 2. MPI-PKS - Max Planck Institute for the Physics of Complex Systems
• ## Archive ouverte HAL – Spontaneous rotation can stabilise ordered chiral active fluids
### Ananyo Maitra 1 Martin Lenz 1
#### Ananyo Maitra, Martin Lenz. Spontaneous rotation can stabilise ordered chiral active fluids. Nature Communications, Nature Publishing Group, 2019, 10 (1), ⟨10.1038/s41467-019-08914-7⟩. ⟨hal-02102862⟩
Active hydrodynamic theories are a powerful tool to study the emergent ordered phases of internally driven particles such as bird flocks, bacterial suspension and their artificial analogues. While theories of orientationally ordered phases are by now well established, the effect of chirality on these phases is much less studied. In this paper, we present the first complete dynamical theory of orientationally ordered chiral particles in two-dimensional incompressible systems. We show that phase-coherent states of rotating chiral particles are remarkably stable in both momentum-conserved and non-conserved systems in contrast to their non-rotating counterparts. Furthermore, defect separation -- which drives chaotic flows in non-rotating active fluids -- is suppressed by intrinsic rotation of chiral active particles. We thus establish chirality as a source of dramatic stabilization in active systems, which could be key in interpreting the collective behaviours of some biological tissues, cytoskeletal systems and collections of bacteria.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• ## Archive ouverte HAL – Smoluchowski flux and lamb-lion problems for random walks and Lévy flights with a constant drift
### Satya Majumdar 1 Philippe Mounaix 2 Gregory Schehr 1
#### Satya Majumdar, Philippe Mounaix, Gregory Schehr. Smoluchowski flux and lamb-lion problems for random walks and Lévy flights with a constant drift. Journal of Statistical Mechanics: Theory and Experiment, IOP Publishing, 2019, 2019 (8), pp.083214. ⟨10.1088/1742-5468/ab35e5⟩. ⟨hal-02272076⟩
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• 2. CPHT - Centre de Physique Théorique [Palaiseau]
• ## Archive ouverte HAL – Simply modified GKL density classifiers that reach consensus faster
### J. Ricardo G. Mendonça 1
#### J. Ricardo G. Mendonça. Simply modified GKL density classifiers that reach consensus faster. Physics Letters A, Elsevier, 2019, 383 (19), pp.2264-2266. ⟨10.1016/j.physleta.2019.04.033⟩. ⟨hal-02291810⟩
The two-state Gacs-Kurdyumov-Levin (GKL) cellular automaton has been a staple model in the study of complex systems due to its ability to classify binary arrays of symbols according to their initial density. We show that a class of modified GKL models over extended neighborhoods, but still involving only three cells at a time, achieves comparable density classification performance but in some cases reaches consensus more than twice as fast. Our results suggest the time to consensus (relative to the length of the CA) as a complementary measure of density classification performance.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• ## Archive ouverte HAL – Shortcut to stationary regimes: A simple experimental demonstration
### Stéphane Faure 1 Sergio Ciliberto 2 Emmanuel Trizac 3 David Guéry-Odelin 1
#### American Journal of Physics, American Association of Physics Teachers, 2019, 87 (2), pp.125-129. 〈10.1119/1.5082933〉
• 1. Atomes Froids (LCAR)
• 2. Phys-ENS - Laboratoire de Physique de l'ENS Lyon
• 3. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• ## Archive ouverte HAL – Seismiclike organization of avalanches in a driven long-range elastic string as a paradigm of brittle cracks
### Jonathan Bares 1 Daniel Bonamy 2 Alberto Rosso 3
#### Jonathan Bares, Daniel Bonamy, Alberto Rosso. Seismiclike organization of avalanches in a driven long-range elastic string as a paradigm of brittle cracks. Physical Review E , American Physical Society (APS), 2019, 100 (2), pp.023001. ⟨10.1103/PhysRevE.100.023001⟩. ⟨hal-02269109⟩
Crack growth in heterogeneous materials sometimes exhibits crackling dynamics, made of successive impulselike events with a specific scale-invariant time and size organization reminiscent of earthquakes. Here, we examine this dynamics in a model which identifies the crack front with a long-range elastic line driven in a random potential. We demonstrate that, under some circumstances, fracture grows intermittently, via scale-free impulses organized into aftershock sequences obeying the fundamental laws of statistical seismology. We examine the effects of the driving rate and overall system stiffness (unloading factor) on the scaling exponents and cutoffs associated with the time and size organization. We unravel the specific conditions required to observe a seismiclike organization in the crack propagation problem. Beyond failure problems, implications of these results for other crackling systems are finally discussed.
• 1. Servex - Moyens expérimentaux
• 2. SPHYNX - Systèmes Physiques Hors-équilibre, hYdrodynamique, éNergie et compleXes
• 3. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• ## Archive ouverte HAL – Run-and-tumble particle in one-dimensional confining potentials: Steady-state, relaxation, and first-passage properties
### Abhishek Dhar 1 Anupam Kundu 1 Satya N. Majumdar 2 Sanjib Sabhapandit 3 Gregory Schehr 2
#### Abhishek Dhar, Anupam Kundu, Satya N. Majumdar, Sanjib Sabhapandit, Gregory Schehr. Run-and-tumble particle in one-dimensional confining potentials: Steady-state, relaxation, and first-passage properties. Physical Review E , American Physical Society (APS), 2019, 99 (3), ⟨10.1103/PhysRevE.99.032132⟩. ⟨hal-02102138⟩
We study the dynamics of a one-dimensional run and tumble particle subjected to confining potentials of the type $V(x) = \alpha \, |x|^p$, with $p>0$. The noise that drives the particle dynamics is telegraphic and alternates between $\pm 1$ values. We show that the stationary probability density $P(x)$ has a rich behavior in the $(p, \alpha)$-plane. For $p>1$, the distribution has a finite support in $[x_-,x_+]$ and there is a critical line $\alpha_c(p)$ that separates an active-like phase for $\alpha > \alpha_c(p)$ where $P(x)$ diverges at $x_\pm$, from a passive-like phase for $\alpha < \alpha_c(p)$ where $P(x)$ vanishes at $x_\pm$. For $p<1$, the stationary density $P(x)$ collapses to a delta function at the origin, $P(x) = \delta(x)$. In the marginal case $p=1$, we show that, for $\alpha < \alpha_c$, the stationary density $P(x)$ is a symmetric exponential, while for $\alpha > \alpha_c$, it again is a delta function $P(x) = \delta(x)$. For the special cases $p=2$ and $p=1$, we obtain exactly the full time-dependent distribution $P(x,t)$, that allows us to study how the system relaxes to its stationary state. In addition, in these two cases, we also study analytically the full distribution of the first-passage time to the origin. Numerical simulations are in complete agreement with our analytical predictions.
• 1. International Centre for Theoretical Sciences, Tata Institute of Fundamental Research, Bangalore
• 2. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• 3. Raman Research Insitute
• ## Archive ouverte HAL – Rotating trapped fermions in two dimensions and the complex Ginibre ensemble: Exact results for the entanglement entropy and number variance
### Bertrand Lacroix-A-Chez-Toine 1 Satya N. Majumdar 1 Grégory Schehr 1
#### Phys.Rev.A, 2019, 99 (2), pp.021602. 〈10.1103/PhysRevA.99.021602〉
We establish an exact mapping between the positions of N noninteracting fermions in a two-dimensional rotating harmonic trap in its ground state and the eigenvalues of the N×N complex Ginibre ensemble of random matrix theory (RMT). Using RMT techniques, we make precise predictions for the statistics of the positions of the fermions, both in the bulk as well as at the edge of the trapped Fermi gas. In addition, we compute exactly, for any finite N, the Rényi entanglement entropy and the number variance of a disk of radius r in the ground state. We show that while these two quantities are proportional to each other in the (extended) bulk, this is no longer the case very close to the trap center nor at the edge. Near the edge, and for large N, we provide exact expressions for the scaling functions associated with these two observables.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• ## Archive ouverte HAL – Rolled Up or Crumpled: Phases of Asymmetric Tethered Membranes
### Tirthankar Banerjee 1 Niladri Sarkar 2 John Toner 2 Abhik Basu
#### Tirthankar Banerjee, Niladri Sarkar, John Toner, Abhik Basu. Rolled Up or Crumpled: Phases of Asymmetric Tethered Membranes. Physical Review Letters, American Physical Society, 2019, 122 (21), ⟨10.1103/PhysRevLett.122.218002⟩. ⟨hal-02291826⟩
We show that inversion-asymmetric tethered membranes exhibit a new double-spiral phase with long range orientational order not present in symmetric membranes. We calculate the universal algebraic spiral shapes of these membranes in this phase. Asymmetry can trigger the crumpling of these membranes as well. In-vitro experiments on lipid, red blood cell membrane extracts, and on graphene coated on one side, could test these predictions.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• 2. MPI-PKS - Max Planck Institute for the Physics of Complex Systems
• ## Archive ouverte HAL – Role of information backflow in the emergence of quantum Darwinism
### Nadia Milazzo 1 Salvatore Lorenzo Mauro Paternostro 2 G. Massimo Palma
#### Nadia Milazzo, Salvatore Lorenzo, Mauro Paternostro, G. Massimo Palma. Role of information backflow in the emergence of quantum Darwinism. Physical Review A, American Physical Society 2019, 100 (1), ⟨10.1103/PhysRevA.100.012101⟩. ⟨hal-02291799⟩
Quantum Darwinism attempts to explain the emergence of an objective reality of the state of a quantum system in terms of redundant information about the system acquired by independent non-interacting fragments of the environment. The consideration of interacting environmental elements gives rise to a rich phenomenology, including the occurrence of non-Markovian features, whose effects on objectification à la quantum Darwinism need to be fully understood. We study a model of local interaction between a simple quantum system and a multi-mode environment that allows for a clear investigation of the interplay between information trapping and propagation in the environment and the emergence of quantum Darwinism. We provide strong evidence of the correlation between non-Markovianity and quantum Darwinism in such a model, thus pointing to a potential link between these fundamental phenomena.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• 2. Centre for Theoretical Atomic, Molecular and Optical Physics
• ## Archive ouverte HAL – Rigid Fuchsian systems in 2-dimensional conformal field theories
### Vladimir Belavin Yoshishige Haraoka Raoul Santachiara 1
#### Commun.Math.Phys., 2019, 365 (1), pp.17-60. 〈10.1007/s00220-018-3274-x〉
We investigate Fuchsian equations arising in the context of 2-dimensional conformal field theory (CFT) and we apply the Katz theory of Fuchsian rigid systems to solve some of these equations. We show that the Katz theory provides a precise mathematical framework to answer the question of whether the fusion rules of degenerate primary fields are enough for determining the differential equations satisfied by their correlation functions. We focus on the case of ${\mathcal{W}_{3}}$ Toda CFT: we argue that the differential equations arising for four-point conformal blocks with one nth level semi-degenerate field and a fully-degenerate one in the fundamental sl$_{3}$ representation are associated to Fuchsian rigid systems. We show how to apply Katz theory to determine the explicit form of the differential equations, the integral expression of solutions and the monodromy group representation. The theory of twisted homology is also used in the analysis of the integral expression. The computation of the connection coefficients is done for the first time in the case of a Katz system with multiplicities, thus extending the work done by Oshima in the multiplicity free case. This approach allows us to construct the corresponding fusion matrices and to perform the whole bootstrap program: new explicit factorization of ${\mathcal{W}_{3}}$ correlation functions as well as shift relations between structure constants for general Toda theories are also provided.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• ## Archive ouverte HAL – Quantum Hall skyrmions at ν = 0 , ± 1 in monolayer graphene
### Thierry Jolicoeur 1 Bradraj Pandey 1
#### Thierry Jolicoeur, Bradraj Pandey. Quantum Hall skyrmions at ν = 0 , ± 1 in monolayer graphene. Physical Review B : Condensed matter and materials physics, American Physical Society, 2019, 100 (11), ⟨10.1103/PhysRevB.100.115422⟩. ⟨hal-02291775⟩
Monolayer graphene under a strong perpendicular field exhibits quantum Hall ferromagnetism with spontaneously broken spin and valley symmetry. The approximate SU(4) spin/valley symmetry is broken by small lattice scale effects in the central Landau level corresponding to filling factors $\nu=0,\pm 1$. Notably, the ground state at $\nu=0$ is believed to be a canted antiferromagnetic (AF) or a ferromagnetic (F) state depending on the component of the magnetic field parallel to the layer and the strength of small anisotropies. We study the skyrmions for the filling factors $\nu=\pm 1,0$ by using exact diagonalizations on the spherical geometry. If we neglect anisotropies we confirm the validity of the standard skyrmion picture generalized to four degrees of freedom. For filling factor $\nu=- 1$ the hole skyrmion is an infinite-size valley skyrmion with full spin polarization because it does not feel the anisotropies. The electron skyrmion is also always of infinite size. In the F phase it is always fully polarized while in the AF phase it undergoes continuous magnetization under increasing Zeeman energy. In the case of $\nu=0$ the skyrmion is always maximally localized in space, in both the F and AF phases. In the F phase it is fully polarized while in the AF phase it also shows progressive magnetization with Zeeman energy. The magnetization process is unrelated to the spatial profile of the skyrmions, contrary to the SU(2) case. In all cases the skyrmion physics is dominated by the competition between anisotropies and the Zeeman effect but not directly by the Coulomb interactions, breaking universal scaling with the ratio of Zeeman to Coulomb energy.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• ## Archive ouverte HAL – Quadratic Mean Field Games
### Denis Ullmo 1 Igor Swiecicki 2, 1 Thierry Gobron 2
#### Denis Ullmo, Igor Swiecicki, Thierry Gobron. Quadratic Mean Field Games. Physics Reports, Elsevier, 2019. ⟨hal-02291869⟩
Mean field games were introduced independently by J-M. Lasry and P-L. Lions, and by M. Huang, R.P. Malhamé and P. E. Caines, in order to bring a new approach to optimization problems with a large number of interacting agents. The description of such models splits into two parts, one describing the evolution of the density of players in some parameter space, the other the value of a cost functional that each player tries to minimize for himself, anticipating the rational behavior of the others. Quadratic Mean Field Games form a particular class among these systems, in which the dynamics of each player is governed by a controlled Langevin equation with an associated cost functional quadratic in the control parameter. In such cases, there exists a deep relationship with the non-linear Schrödinger equation in imaginary time, a connection which leads to effective approximation schemes as well as a better understanding of the behavior of Mean Field Games. The aim of this paper is to serve as an introduction to Quadratic Mean Field Games and their connection with the non-linear Schrödinger equation, providing physicists with a good entry point into this new and exciting field.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• 2. LPTM - Laboratoire de Physique Théorique et Modélisation
• ## Archive ouverte HAL – Properties of additive functionals of Brownian motion with resetting
### Frank Den Hollander 1 Satya N. Majumdar 2 Janusz M. Meylahn 1 Hugo Touchette 3
#### Frank Den Hollander, Satya N. Majumdar, Janusz M. Meylahn, Hugo Touchette. Properties of additive functionals of Brownian motion with resetting. Journal of Physics A: Mathematical and General , IOP Publishing, 2019. ⟨hal-02102127⟩
We study the distribution of additive functionals of reset Brownian motion, a variation of normal Brownian motion in which the path is interrupted at a given rate and placed back to a given reset position. Our goal is two-fold: (1) For general functionals, we derive a large deviation principle in the presence of resetting and identify the large deviation rate function in terms of a variational formula involving large deviation rate functions without resetting. (2) For three examples of functionals (positive occupation time, area and absolute area), we investigate the effect of resetting by computing distributions and moments, using a formula that links the generating function with resetting to the generating function without resetting.
• 1. Leiden University
• 2. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• 3. Institute of Theoretical Physics
## Graph of a function $f: RP^1 \rightarrow RP^1$
The graph of a function $f$ is the set of ordered pairs $(x, f(x))$ of elements of $RP^1$.
We consider $RP^1$ to be $S^1$, the circle, so the graph is a subset of $S^1 \times S^1$: the torus.
# "Natural" reductions vs "Polynomial-time many-one" reductions (Karp Reductions)
For two problems $A$ and $B$ and a Karp reduction $R$ from $A$ to $B$, we call the reduction $R$ natural if, for any instance $I$ of problem $A$, the size of $R(I)$ (as well as the possible numerical parameters of $R(I)$) depends only on the size of $I$, and the sizes of $I$ and $R(I)$ are polynomially related.
We know that all the textbook reductions (SAT to 3-SAT, 3-SAT to Vertex Cover, Hamiltonian Cycle, etc.) are natural in the above sense. In fact, all known natural NP-complete problems are complete under "natural" reductions.
I have following two questions:
1. Is it possible for a Karp-reduction to be "non-natural" in the above sense?
2. Are "natural" reductions only a special case of Karp reductions, or can we generalize them to other reductions (like logspace or linear-time reductions)?
To understand the context better: in their seminal paper, Kabanets and Cai [1999] proved that the Minimum Circuit Size Problem (popularly known as MCSP) being NP-hard under so-called "natural" reductions implies that the class $E$ has superpolynomial circuit size. More recently, Murray and Williams [2015] proved that MCSP being NP-hard under "polynomial-time" reductions implies $EXP \neq NP \cap P/poly$ (this is weaker than the Kabanets-Cai result). So, surely "natural" reductions are stricter than "polynomial-time" reductions. (Or am I missing something here?)
• I think it's probably mostly gadget reductions that are "natural".
– Raphael
Nov 9, 2017 at 6:49
• Yes. But as Ariel pointed out in the answer, gadget reductions can be made "non-natural" too by slight tweaking. I suspect there is no known natural NP-complete problem which fails the "natural" reduction test. Not sure though. Nov 9, 2017 at 7:06
• IIRC there are plenty of NP-complete problems for which the "well-known" reductions are not gadget reductions. I suspect that other types are more often non-natural, but I'd have to read to find out. That's your job, though: pick up Garey/Johnson and have a look!
– Raphael
Nov 9, 2017 at 8:18
Obviously a Karp reduction can be non-natural (otherwise the definition would be of no use). Consider a reduction from CLIQUE to SAT: I can make it non-natural by first checking whether the input graph is planar, and if so outputting some fixed unsatisfiable formula (assuming the parameter $k$ is greater than $4$, since a planar graph cannot contain a $5$-clique). This breaks naturality, since the output formula's length now depends on the planarity of the input graph, not only on its size.
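To make the idea concrete, here is a toy Python sketch of my own (not a real CLIQUE-to-SAT reduction). Since an exact planarity test is long to write, I swap in triangle-freeness as the easily checkable structural property: a triangle-free graph has no clique of size $k \ge 3$, so the reduction may output one fixed unsatisfiable formula in that case. The `unnatural_reduction` helper and the placeholder "growing" formula are illustrative assumptions:

```python
from itertools import combinations

def has_triangle(vertices, edges):
    """Exact brute-force check for a 3-clique."""
    e = {frozenset(p) for p in edges}
    return any({frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= e
               for a, b, c in combinations(vertices, 3))

def unnatural_reduction(vertices, edges, k):
    """Toy 'non-natural' Karp-style map from a CLIQUE instance to a formula string.

    A triangle-free graph has no clique of size k >= 3, so in that case we may
    output one fixed unsatisfiable formula, whatever the size of the graph.
    """
    if k >= 3 and not has_triangle(vertices, edges):
        return "(x) AND (NOT x)"
    # Placeholder for the standard CLIQUE -> SAT encoding:
    # its length grows with the instance.
    return " AND ".join(f"c{i}" for i in range(len(vertices) * k))

# Two instances of identical size (6 vertices, 6 edges, same k)...
cycle = list(range(6)), [(i, (i + 1) % 6) for i in range(6)]              # C6: triangle-free
triangles = list(range(6)), [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
# ...map to outputs of very different sizes, so the reduction is not natural:
print(len(unnatural_reduction(*cycle, 3)), len(unnatural_reduction(*triangles, 3)))
```

The two input instances have exactly the same size, yet the output sizes differ wildly, which is precisely what naturality forbids.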
LaTeX commands for JFQA
Included below are the commands that you need to satisfy the style requirements for publishing in The Journal of Financial and Quantitative Analysis. Please note that these are not official instructions from the journal, but only what I have learnt through my own experience.
Each command below is preceded by an explanation of what the command does; these explanations are on lines that have as their first character the “%” sign, which tells LaTeX to treat the line as a comment rather than code to be executed.
The easiest thing to do is to cut-and-paste these commands into the preamble of your LaTeX text file -- starting with the very top line. After you do this, the only thing you need to check is that you are not calling again (in your own LaTeX code) any of the packages that are already being called in the text below, and that you are not doing something inadvertently to undo the changes made by the code below.
Please let me know if there are any errors, or if you think some instructions are missing and should be added to these lists of commands. (But, please do not write to me with general questions about LaTeX --- instead, search on the internet, where there are real experts whose advice can resolve your problems.)
Style requirements for The Journal of Financial and Quantitative Analysis
Included below are (some of) the commands that you need to meet the style requirements for publishing in JFQA. These are quite straightforward, and there are only a few things that need to be changed --- these changes are numbered below.
% --------------------------------------------------------------------------------------------
%:JFQA STYLE COMMANDS START HERE --- START CUTTING FROM HERE
% (1) The line below asks LaTeX to use the ``article'' class for typesetting,
% with the options being:
% 12pt font, letterpaper, left-equation numbering, and double-spacing
\documentclass[12pt,letterpaper,leqno,doublespacing]{article}
% (2) This package enables the ``doublespacing'' option used in the first line
\usepackage{setspace}
% (3) This package formats the footnotes appropriately
\usepackage{footmisc}
\renewcommand{\footnotelayout}{\doublespacing}
\setlength{\footnotesep}{0.5cm}
% (4) This package sets the formatting of the captions for tables and figures
\usepackage[labelfont={bf,small},textfont={bf,small},format=hang]{caption}
% The ``makeatletter'' command is a special LaTeX switch
% that changes the meaning of the ``@'' character, so that
% this character can be used in the commands that follow.
% The ``makeatletter'' switch will be turned off using
% the ``makeatother'' command below.
% (5) Changing format of Section number
% The command below adds a period after the section number.
\makeatletter
\renewcommand{\@seccntformat}[1]{\csname the#1\endcsname.\quad}
\makeatother
% (6) The next few commands change the format of the numbers
% for sections, subsections, and subsubsections
\renewcommand{\thesection}{\Roman{section}}
\renewcommand{\thesubsection}{\Alph{subsection}}
\renewcommand{\thesubsubsection}{\Alph{subsection}.\arabic{subsubsection}}
%:JFQA STYLE COMMANDS END HERE --- END CUTTING HERE
% --------------------------------------------------------------------------------------------
|
|
# Cloudy with a chance of meatballs
I discovered that I can control the local weather, depending on what color shirt and hat I wear at a given location. My home country of Flanmanistan has hired me to go back and do drought and famine relief. I never know which combination of shirt and hat will deliver which weather at a new location, because each location has a different formula. So I wear a different shirt and hat combo each day, and I fill in a grid to keep track. For example, when I wore a green shirt and purple hat, the weather became cloudy. Tomorrow, I’ll finish the grid, but I can see the pattern already. What will be the weather when I wear a blue shirt and a white hat?
Hint 1: I ask that you use decimal math.
Hint 2: My choice of weather labels is not very important. "Rain" could be changed to "Showers", "Sardines" could be changed to "Chips", etc.
Update: Hint 3: The country name (my beloved Flanmanistan) and the concept of hats and shirts are not important. Look for a way to map each color combo to one of the five weather conditions. The process is fairly simple, once you know it. Notice that Yellow and Orange produce the same weather, so something about those two colors maps the same way.
$$\textbf{Weather Control Grid for Flanmanistan:}$$ $$\begin{array}{|c|c|c|c|c|} \hline \textbf{Shirt\downarrow Hat \rightarrow}&\textbf{Black}&\textbf{White}&\textbf{Purple}&\textbf{Gold}&\textbf{Silver} \\ \hline \textbf{Red}&Sun&Sun&Meatballs&Meatballs&Rain \\ \hline \textbf{Green}&Sardines&Sun&Cloudy&Meatballs&Rain \\ \hline \textbf{Blue}&Meatballs&?&Sardines&Sardines&Sun \\ \hline \textbf{Yellow}&Rain&Sardines&Sun&Cloudy&Meatballs \\ \hline \textbf{Orange}&Rain&Sardines&Sun&Cloudy&Meatballs \\ \hline \end{array}$$
CSV version:
Shirt Hat,Black,White,Purple,Gold,Silver
Red,Sun,Sun,Meatballs,Meatballs,Rain
Green,Sardines,Sun,Cloudy,Meatballs,Rain
Blue,Meatballs,?,Sardines,Sardines,Sun
Yellow,Rain,Sardines,Sun,Cloudy,Meatballs
Orange,Rain,Sardines,Sun,Cloudy,Meatballs
Another CSV version:
Shirt,Hat,Weather
Red,Black,Sun
Red,White,Sun
Red,Purple,Meatballs
Red,Gold,Meatballs
Red,Silver,Rain
Green,Black,Sardines
Green,White,Sun
Green,Purple,Cloudy
Green,Gold,Meatballs
Green,Silver,Rain
Blue,Black,Meatballs
Blue,White,?
Blue,Purple,Sardines
Blue,Gold,Sardines
Blue,Silver,Sun
Yellow,Black,Rain
Yellow,White,Sardines
Yellow,Purple,Sun
Yellow,Gold,Cloudy
Yellow,Silver,Meatballs
Orange,Black,Rain
Orange,White,Sardines
Orange,Purple,Sun
Orange,Gold,Cloudy
Orange,Silver,Meatballs
• What sort of weathers are meatballs and sardines? – JMP Jul 30 '19 at 6:04
• @JonMarkPerry: You've not read the book or seen the film then? – Jaap Scherphuis Jul 30 '19 at 6:13
• @JaapScherphuis; Frayed Knot, is it relevant to the answer? – JMP Jul 30 '19 at 6:37
• @JonMarkPerry I wouldn't think so given the tags, but if it is then a knowledge tag would need to be added. – Jaap Scherphuis Jul 30 '19 at 7:11
• @Conifers "Is it related to Galois finite field" Not as far as I know. I just read about Galois finite fields, and it doesn't seem to apply here. – FlanMan Jul 30 '19 at 21:59
Meatballs
It's hard to put your finger on a definite algorithm for this. The puzzle is labeled "calculation" and hint #1 urges you to use decimal math, however no numbers at all are provided. I was tempted to count the number of letters or add up the value of vowels / consonants in words for some kind of calculation, but other hints said words weren't important so that isn't going to work. Assigning random numbers to colors / weather patterns is also questionable as many different calculations can be obtained just by virtue of switching what you are assigning to each color. In the end, staring at the grid I did see a pattern emerge, and I am reasonably sure that this answer is correct.
Hint 3 pointed us to look at the similarities between Yellow and Orange, and that is an important observation, as that paints the way to other observations that are in the same style. Before we get to that though, one thing that needs to be observed is that Yellow and Orange display all 5 weather patterns, as does Green, but not Blue or Red.
Second observation to make is that Green not only displays all 5 weather patterns but also it is in the same order as Yellow, except shifted to the left by 1.
Third observation is that Red displays the same order of weather patterns as Green, except that Gold replaces Purple weather and White replaces Black weather.
Now putting it all together:
Just as Yellow and Orange have similarities - in that they are the same - Blue has similarities to all the other colors. It is similar to Green, but instead of the weather pattern order being shifted left by 1 from Yellow, in Blue's case it is shifted right by 2. Blue is also similar to Red, in that Gold replaces Purple and White replaces Black. The weather pattern on Blue shirt days would have been "Cloudy, Meatballs, Rain, Sardines, Sun" without the Red substitutions.
So the missing weather on a Blue shirt / White hat day is Meatballs.
Amorydai gave the correct answer, using a method that seems to parallel my intended solution, so I have accepted Amorydai’s answer.
Here is my intended solution:
This “Hint 1: I ask that you use decimal math” was a hint to do the following:
Use decimal ascii values. “Ask” is a hint for “ascii”.
You can calculate the weather values as follows:
1) Take the first letter of each color combo. For example: Red and Black => R and B
2) Take the decimal ascii codes of #1 and add them together. For example: 82 + 66 = 148
3) Take the last digit of #2. For example: 8
4) Map from #3 to a weather value as follows. For example 8 maps to “Sun”
0 or 1 => Cloudy
2 or 3 => Meatballs
4 or 5 => Rain
6 or 7 => Sardines
8 or 9 => Sun
So a Yellow or Orange shirt plus a given hat color will always map to the same weather value, because Y and O have decimal ascii values (89 and 79) that differ by 10, leaving the last digit of the sum unchanged.
Here is the grid with the expanded math:
$$\textbf{Weather Control Grid for Flanmanistan:}$$ $$\begin{array}{|c|c|c|c|c|} \hline \textbf{Shirt\downarrow Hat \rightarrow}&\textbf{Black}&\textbf{White}&\textbf{Purple}&\textbf{Gold}&\textbf{Silver} \\ \hline \textbf{Red}&82 + 66 = 148 => 8 => Sun&82 + 87 = 169 => 9 => Sun&82 + 80 = 162 => 2 => Meatballs&82 + 71 = 153 => 3 => Meatballs&82 + 83 = 165 => 5 => Rain \\ \hline \textbf{Green}&71 + 66 = 137 => 7 => Sardines&71 + 87 = 158 => 8 => Sun&71 + 80 = 151 => 1 => Cloudy&71 + 71 = 142 => 2 => Meatballs&71 + 83 = 154 => 4 => Rain \\ \hline \textbf{Blue}&66 + 66 = 132 => 2 => Meatballs&\color{red} {66 + 87 = 153 => 3 => Meatballs}&66 + 80 = 146 => 6 => Sardines&66 + 71 = 137 => 7 => Sardines&66 + 83 = 149 => 9 => Sun \\ \hline \textbf{Yellow}&89 + 66 = 155 => 5 => Rain&89 + 87 = 176 => 6 => Sardines&89 + 80 = 169 => 9 => Sun&89 + 71 = 160 => 0 => Cloudy&89 + 83 = 172 => 2 => Meatballs \\ \hline \textbf{Orange}&79 + 66 = 145 => 5 => Rain&79 + 87 = 166 => 6 => Sardines&79 + 80 = 159 => 9 => Sun&79 + 71 = 150 => 0 => Cloudy&79 + 83 = 162 => 2 => Meatballs \\ \hline \end{array}$$
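For readers who want to check the rule mechanically, here is a short Python sketch of the intended calculation (the function name `weather` is my own):

```python
# Map the last digit of the ASCII sum to a weather condition.
WEATHER = {0: "Cloudy", 1: "Cloudy", 2: "Meatballs", 3: "Meatballs",
           4: "Rain", 5: "Rain", 6: "Sardines", 7: "Sardines",
           8: "Sun", 9: "Sun"}

def weather(shirt, hat):
    # Add the decimal ASCII codes of the two initial letters,
    # then keep only the last digit of the sum.
    last_digit = (ord(shirt[0]) + ord(hat[0])) % 10
    return WEATHER[last_digit]

print(weather("Blue", "White"))   # -> Meatballs  (66 + 87 = 153)
```

Running it over the whole grid reproduces every entry, including the identical Yellow and Orange rows.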
|
|
# Vector question on the relation: $\vec{F}=q\vec{E}+q\vec{v}\times\vec{B}$
The force on a charged particle due to electric and magnetic fields is given by $\vec{F}=q\vec{E}+q\vec{v}\times\vec{B}$. If $\vec{E}$ is along the $X$-axis and $\vec{B}$ along the $Y$-axis, in what direction and with what minimum speed $v$ should a positively charged particle be sent so that the net force on it is $\mathrm{ZERO}$?
I have done it like this: for the force to be $0$, $q\vec{v}\times\vec{B}$ has to be equal in magnitude and opposite in direction to $q\vec{E}$, and so $||\vec{v}||=\frac{||\vec{E}||}{||\vec{B}||\sin\theta}$.
I need to know certain things.
1. I understand that the velocity has to be sent in the $Z$-axis but whether it's positive $Z$ axis or negative $Z$ axis will depend on the direction of the electric and magnetic fields right?
2. So what should be the speed: $\frac{||\vec{E}||}{||\vec{B}||}$ right? And the velocity should be $\frac{±||\vec{E}||}{||\vec{B}||}$ along $Z$ axis right? (Depending upon positive or negative $Z$ axis?)
3. By any means, do we know the direction (positive or negative axis) of the velocity, electric field and magnetic field from what's mentioned in the question?
Please check my solution and if possible, do write a fair solution for me as well in the answer section.
• "along the X axes" probably means here in the positive direction of x. Same for y. – Ofek Gillon May 14 '17 at 3:39
• Because like you said, if not, we don't have enough details to solve the question. Sometimes when people say "along the axes" they mean parallel to it and specifically not anti parallel. This interpretation of the question is vital for getting a solution :) – Ofek Gillon May 14 '17 at 3:40
• Yeah, true. So my reasoning was correct, right? But the fact that charge is sent in the positive or negative axis has no relation to it being positive or perhaps negative, right? I know the total force will depend upon it being negative or positive but the positivity/negativity of the charge won't be represented by the positive/negative axis I suppose. So how would it be represented? By using (+,-) sign. Am I correct? – Mathejunior May 14 '17 at 3:45
• In the end of the question, the charge of the particle is given. – Mitchell May 14 '17 at 3:47
• @BhavyaSharma Yeah, but that doesn't mean if it's in the positive or negative axis right? – Mathejunior May 14 '17 at 3:54
When the direction of the vectors is not specified, 'along the axes' by default means along the positive axes.
A charged particle sent along the $-z$ axis will experience a force, due to both the magnetic and the electric field, towards the $+x$ axis. In this case the net force cannot be zero, whatever the speed of the charged particle may be.
The net force can only be zero if the charged particle moves along the $+z$ axis,
and specifically the angle between the velocity and the magnetic field has to be $90^{\circ}$; only then can the magnetic force completely cancel the electric force. For two forces to cancel each other, they have to be anti-parallel and act along the same line.
Therefore,
$\vec{E}=-\vec{v} \times \vec{B}$
Here the -ve sign means only one thing, the vector that we get from the cross product of $\vec{v}$ and $\vec{B}$ is opposite to the electric field and equal in magnitude.
The direction of the velocity can also be interpreted from this relation.
$\vec{E}= +\hat{i}$
$\vec{B}=+\hat{j}$
For, $\vec{v} \times \vec{B} = -\hat{i}$, $\vec{v}$ has to be directed along $+\hat{k}$,
because, $\hat{k}\times \hat{j}=-\hat{i}$
Comparing the directions of the right-hand side and the left-hand side: $\hat{i}=-(-\hat{i})$, so LHS = RHS.
$|\vec{v}|=\frac{|\vec{E}|}{|\vec{B}|}$; this much is certain.
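A quick numerical cross-check of this result (my own sketch; the field magnitudes are arbitrary illustrative values):

```python
import numpy as np

q = 1.0                          # positive charge
E = np.array([2.0, 0.0, 0.0])    # electric field along +x (V/m)
B = np.array([0.0, 0.5, 0.0])    # magnetic field along +y (T)

# Velocity along +z with speed |E|/|B|:
v = np.array([0.0, 0.0, np.linalg.norm(E) / np.linalg.norm(B)])

F = q * E + q * np.cross(v, B)   # Lorentz force
print(F)                         # -> [0. 0. 0.]
```

Flipping the velocity to the $-z$ axis instead gives a nonzero force along $+x$, in line with the argument above.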
|
|
# Physics:Carnot heat engine
Short description: Theoretical engine
Axial cross section of Carnot's heat engine. In this diagram, abcd is a cylindrical vessel, cd is a movable piston, and A and B are constant–temperature bodies. The vessel may be placed in contact with either body or removed from both (as it is here).[1]
A Carnot heat engine[2] is a heat engine that operates on the Carnot cycle. The basic model for this engine was developed by Nicolas Léonard Sadi Carnot in 1824. The Carnot engine model was graphically expanded by Benoît Paul Émile Clapeyron in 1834 and mathematically explored by Rudolf Clausius in 1857, work that led to the fundamental thermodynamic concept of entropy. The Carnot engine is the most efficient heat engine which is theoretically possible.[3] The efficiency depends only upon the absolute temperatures of the hot and cold heat reservoirs between which it operates.
A heat engine acts by transferring energy from a warm region to a cool region of space and, in the process, converting some of that energy to mechanical work. The cycle may also be reversed. The system may be worked upon by an external force, and in the process, it can transfer thermal energy from a cooler system to a warmer one, thereby acting as a refrigerator or heat pump rather than a heat engine.
Every thermodynamic system exists in a particular state. A thermodynamic cycle occurs when a system is taken through a series of different states, and finally returned to its initial state. In the process of going through this cycle, the system may perform work on its surroundings, thereby acting as a heat engine.
## Carnot's diagram
In the adjacent diagram, from Carnot's 1824 work, Reflections on the Motive Power of Fire,[4] there are "two bodies A and B, kept each at a constant temperature, that of A being higher than that of B. These two bodies to which we can give, or from which we can remove the heat without causing their temperatures to vary, exercise the functions of two unlimited reservoirs of caloric. We will call the first the furnace and the second the refrigerator."[5] Carnot then explains how we can obtain motive power, i.e., "work", by carrying a certain quantity of heat from body A to body B. Run in reverse, the same arrangement transfers heat from B to A, and hence it can also act as a refrigerator.
## Modern diagram
Carnot engine diagram (modern) - where an amount of heat QH flows from a high temperature TH furnace through the fluid of the "working body" (working substance) and the remaining heat QC flows into the cold sink TC, thus forcing the working substance to do mechanical work W on the surroundings, via cycles of contractions and expansions.
The previous image shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engine. The figure at right shows a block diagram of a generic heat engine, such as the Carnot engine. In the diagram, the “working body” (system), a term introduced by Clausius in 1850, can be any fluid or vapor body through which heat Q can be introduced or transmitted to produce work. Carnot had postulated that the fluid body could be any substance capable of expansion, such as vapor of water, vapor of alcohol, vapor of mercury, a permanent gas, air, etc. Although in those early years, engines came in a number of configurations, typically QH was supplied by a boiler, wherein water was boiled over a furnace; QC was typically removed by a stream of cold flowing water in the form of a condenser located on a separate part of the engine. The output work, W, is transmitted by the movement of the piston as it is used to turn a crank-arm, which in turn was typically used to power a pulley so as to lift water out of flooded salt mines. Carnot defined work as “weight lifted through a height”.
## Carnot cycle
Figure 1: A Carnot cycle illustrated on a PV diagram to illustrate the work done.
Figure 2: A Carnot cycle acting as a heat engine, illustrated on a temperature-entropy diagram. The cycle takes place between a hot reservoir at temperature TH and a cold reservoir at temperature TC. The vertical axis is temperature, the horizontal axis is entropy.
Main page: Physics:Carnot cycle
The Carnot cycle when acting as a heat engine consists of the following steps:
1. Reversible isothermal expansion of the gas at the "hot" temperature, TH (isothermal heat addition or absorption). During this step (A to B) the gas is allowed to expand and it does work on the surroundings. The temperature of the gas (the system) does not change during the process, and thus the expansion is isothermic. The gas expansion is propelled by absorption of heat energy QH and of entropy $\displaystyle{ \Delta S_\text{H}=Q_\text{H}/T_\text{H} }$ from the high temperature reservoir.
2. Isentropic (reversible adiabatic) expansion of the gas (isentropic work output). For this step (B to C) the piston and cylinder are assumed to be thermally insulated, thus they neither gain nor lose heat. The gas continues to expand, doing work on the surroundings, and losing an equivalent amount of internal energy. The gas expansion causes it to cool to the "cold" temperature, TC. The entropy remains unchanged.
3. Reversible isothermal compression of the gas at the "cold" temperature, TC. (isothermal heat rejection) (C to D) Now the gas is exposed to the cold temperature reservoir while the surroundings do work on the gas by compressing it (such as through the return compression of a piston), while causing an amount of waste heat QC < 0 (with the standard sign convention for heat) and of entropy $\displaystyle{ \Delta S_\text{C}=Q_\text{C}/T_\text{C} \lt 0 }$ to flow out of the gas to the low temperature reservoir. (In magnitude, this is the same amount of entropy absorbed in step 1. The entropy decreases in isothermal compression since the multiplicity of the system decreases with the volume.) In terms of magnitude, the recompression work performed by the surroundings in this step is less than the work performed on the surroundings in step 1 because it occurs at a lower pressure due to the lower temperature (i.e. the resistance to compression is lower under step 3 than the force of expansion under step 1).
4. Isentropic compression of the gas (isentropic work input). (D to A) Once again the piston and cylinder are assumed to be thermally insulated and the cold temperature reservoir is removed. During this step, the surroundings continue to do work to further compress the gas and both the temperature and pressure rise now that the heat sink has been removed. This additional work increases the internal energy of the gas, compressing it and causing the temperature to rise to TH. The entropy remains unchanged. At this point the gas is in the same state as at the start of step 1.
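The heat and work bookkeeping over one cycle can be sketched for an ideal gas (an illustrative calculation with nR set to 1; the temperatures and volumes are arbitrary choices):

```python
import math

T_h, T_c = 500.0, 300.0    # hot and cold reservoir temperatures (K)
V_a, V_b = 1.0, 2.0        # volumes before/after the isothermal expansion

# Step 1: isothermal expansion at T_h absorbs Q_H = n R T_h ln(V_b/V_a).
Q_h = T_h * math.log(V_b / V_a)

# Step 3: isothermal compression at T_c rejects heat with the same |dS|,
# so Q_C = -n R T_c ln(V_b/V_a)  (negative by the sign convention).
Q_c = -T_c * math.log(V_b / V_a)

# Steps 2 and 4 are adiabatic: no heat exchanged, so per cycle W = Q_h + Q_c.
W = Q_h + Q_c

print(W / Q_h, 1.0 - T_c / T_h)   # both equal the Carnot efficiency, 0.4
```

The ratio W/Q_H reproduces 1 − T_C/T_H independently of the volume ratio chosen, which is the content of Carnot's theorem for the ideal-gas case.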
## Carnot's theorem
Main page: Physics:Carnot's theorem (thermodynamics)
Real ideal engines (left) compared to the Carnot cycle (right). The entropy of a real material changes with temperature. This change is indicated by the curve on a T–S diagram. For this figure, the curve indicates a vapor-liquid equilibrium (See Rankine cycle). Irreversible systems and losses of heat (for example, due to friction) prevent the ideal from taking place at every step.
Carnot's theorem is a formal statement of this fact: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between the same reservoirs.
$\displaystyle{ \eta_{I}=\frac{W}{Q_{\mathrm{H}}}=1-\frac{T_{\mathrm{C}}}{T_{\mathrm{H}}} }$
Explanation
This maximum efficiency $\displaystyle{ \eta_\text{I} }$ is defined as above:
W is the work done by the system (energy exiting the system as work),
$\displaystyle{ Q_\text{H} }$ is the heat put into the system (heat energy entering the system),
$\displaystyle{ T_\text{C} }$ is the absolute temperature of the cold reservoir, and
$\displaystyle{ T_\text{H} }$ is the absolute temperature of the hot reservoir.
A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient.
It is easily shown that the efficiency η is maximum when the entire cyclic process is a reversible process. This means the total entropy of system and surroundings (the entropies of the hot furnace, the "working fluid" of the heat engine, and the cold sink) remains constant when the "working fluid" completes one cycle and returns to its original state. (In the general and more realistic case of an irreversible process, the total entropy of this combined system would increase.)
Since the "working fluid" comes back to the same state after one cycle, and entropy of the system is a state function, the change in entropy of the "working fluid" system is 0. Thus, it implies that the total entropy change of the furnace and sink is zero, for the process to be reversible and the efficiency of the engine to be maximum. This derivation is carried out in the next section.
The coefficient of performance (COP) of the cycle, when it is run in reverse as a heat pump, is the reciprocal of its efficiency as an engine.
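As a worked example, the efficiency bound is a one-line calculation; the numbers below are the 800 °C / 20 °C cycle Diesel proposed (discussed later in this article), which indeed comes out near 73%:

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency between two reservoirs (absolute temperatures, K)."""
    return 1.0 - t_cold / t_hot

eta = carnot_efficiency(800.0 + 273.15, 20.0 + 273.15)
print(f"{eta:.1%}")   # -> 72.7%
```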
## Efficiency of real heat engines
For a real heat engine, the total thermodynamic process is generally irreversible. The working fluid is brought back to its initial state after one cycle, and thus the change of entropy of the fluid system is 0, but the sum of the entropy changes in the hot and cold reservoir in this one cyclical process is greater than 0.
The internal energy of the fluid is also a state variable, so its total change in one cycle is 0. So the total work done by the system W is equal to the net heat put into the system, the sum of $\displaystyle{ Q_\text{H} }$ > 0 taken up and the waste heat $\displaystyle{ Q_\text{C} }$ < 0 given off:[6]
$\displaystyle{ W=Q=Q_\text{H}+Q_\text{C} }$
(2)
For real engines, stages 1 and 3 of the Carnot cycle, in which heat is absorbed by the "working fluid" from the hot reservoir, and released by it to the cold reservoir, respectively, no longer remain ideally reversible, and there is a temperature differential between the temperature of the reservoir and the temperature of the fluid while heat exchange takes place.
During heat transfer from the hot reservoir at $\displaystyle{ T_\text{H} }$ to the fluid, the fluid would have a slightly lower temperature than $\displaystyle{ T_\text{H} }$, and the process for the fluid may not necessarily remain isothermal. Let $\displaystyle{ \Delta S_\text{H} }$ be the total entropy change of the fluid in the process of intake of heat.
$\displaystyle{ \Delta S_\text{H}=\int_{Q_\text{in}} \frac{\text{d}Q_\text{H}}{T} }$
(3)
where the temperature of the fluid T is always slightly lesser than $\displaystyle{ T_\text{H} }$, in this process.
So, one would get:
$\displaystyle{ \frac{Q_\text{H}}{T_\text{H}}=\frac{\int \text{d}Q_\text{H}}{T_\text{H}} \leq \Delta S_\text{H} }$
(4)
Similarly, at the time of heat rejection from the fluid to the cold reservoir one would have, for the magnitude of total entropy change $\displaystyle{ \Delta S_\text{C} }$ < 0 of the fluid in the process of expelling heat:
$\displaystyle{ \Delta S_\text{C}\geqslant\frac{Q_\text{C}}{T_\text{C}}\lt 0 }$
(5)
where, during this process of transfer of heat to the cold reservoir, the temperature of the fluid T is always slightly greater than $\displaystyle{ T_\text{C} }$.
We have only considered the magnitude of the entropy change here. Since the total change of entropy of the fluid system for the cyclic process is 0, we must have
$\displaystyle{ \Delta S_\text{H}+\Delta S_\text{C} = \Delta S_\text{cycle} = 0 }$
(6)
The previous three equations combine to give:[7]
$\displaystyle{ -\frac{Q_\text{C}}{T_\text{C}}\geqslant\frac{Q_\text{H}}{T_\text{H}} }$
(7)
Equations (2) and (7) combine to give
$\displaystyle{ \frac{W}{Q_\text{H}} \leq 1- \frac{T_\text{C}}{T_\text{H}} }$
(8)
Hence,
$\displaystyle{ \eta \leq \eta_\text{I} }$
(9)
where $\displaystyle{ \eta = \frac{W}{Q_\text{H}} }$ is the efficiency of the real engine, and $\displaystyle{ \eta_\text{I} }$ is the efficiency of the Carnot engine working between the same two reservoirs at the temperatures $\displaystyle{ T_\text{H} }$ and $\displaystyle{ T_\text{C} }$. For the Carnot engine, the entire process is 'reversible', and Equation (7) is an equality. Hence, the efficiency of the real engine is always less than the ideal Carnot engine.
Equation (7) signifies that the total entropy of system and surroundings (the fluid and the two reservoirs) increases for the real engine, because (in a surroundings-based analysis) the entropy gain of the cold reservoir as $\displaystyle{ Q_\text{C} }$ flows into it at the fixed temperature $\displaystyle{ T_\text{C} }$, is greater than the entropy loss of the hot reservoir as $\displaystyle{ Q_\text{H} }$ leaves it at its fixed temperature $\displaystyle{ T_\text{H} }$. The inequality in Equation (7) is essentially the statement of the Clausius theorem.
According to the second theorem, "The efficiency of the Carnot engine is independent of the nature of the working substance".
## The Carnot engine and Rudolf Diesel
In 1892 Rudolf Diesel patented an internal combustion engine inspired by the Carnot engine. Diesel knew a Carnot engine is an ideal that cannot be built, but he thought he had invented a working approximation. His principle was unsound, but in his struggle to implement it he developed the practical engine that bears his name.
The conceptual problem was how to achieve isothermal expansion in an internal combustion engine, since burning fuel at the highest temperature of the cycle would only raise the temperature further. Diesel's patented solution was: having achieved the highest temperature just by compressing the air, to add a small amount of fuel at a controlled rate, such that heating caused by burning the fuel would be counteracted by cooling caused by air expansion as the piston moved. Hence all the heat from the fuel would be transformed into work during the isothermal expansion, as required by Carnot's theorem.
For the idea to work a small mass of fuel would have to be burnt in a huge mass of air. Diesel first proposed a working engine that would compress air to 250 atmospheres at 800 °C, then cycle to one atmosphere at 20 °C. However, this was well beyond the technological capabilities of the day, since it implied a compression ratio of 60:1. Such an engine, could it have been built, would have had an efficiency of 73%. (In contrast, the best steam engines of his day achieved 7%.)
Accordingly, Diesel sought to compromise. He calculated that, were he to reduce the peak pressure to a less ambitious 90 atmospheres, he would sacrifice only 5% of the thermal efficiency. Seeking financial support, he published the "Theory and Construction of a Rational Heat Engine to Take the Place of the Steam Engine and All Presently Known Combustion Engines" (1893). Endorsed by scientific opinion, including Lord Kelvin, he won the backing of Krupp and Maschinenfabrik Augsburg. He clung to the Carnot cycle as a symbol. But years of practical work failed to achieve an isothermal combustion engine, nor could they have, since it requires such an enormous quantity of air that the engine cannot develop enough power to compress it. Furthermore, controlled fuel injection turned out to be no easy matter.
Even so, it slowly evolved over 25 years to become a practical high-compression air engine, its fuel injected near the end of the compression stroke and ignited by the heat of compression, in a word, the diesel engine. Today its efficiency is 40%.[8]
## Notes
1. Figure 1 in Carnot (1824, p. 17) and Carnot (1890, p. 63). In the diagram, the diameter of the vessel is large enough to bridge the space between the two bodies, but in the model, the vessel is never in contact with both bodies simultaneously. Also, the diagram shows an unlabeled axial rod attached to the outside of the piston.
2. In French, Carnot uses machine à feu, which Thurston translates as heat-engine or steam-engine. In a footnote, Carnot distinguishes the steam-engine (machine à vapeur) from the heat-engine in general. (Carnot, 1824, p. 5 and Carnot, 1890, p. 43)
3. English translation by Thurston (Carnot, 1890, p. 51-52).
4. Planck, M. (1945). Treatise on Thermodynamics. Dover Publications. p. 90. "§90, eqs.(39) & (40)"
5. Fermi, E. (1956). Thermodynamics. Dover Publications (still in print). p. 47. "below eq.(63)"
6. Bryant, Lynwood (August 1969). "Rudolf Diesel and His Rational Engine". Scientific American 221 (2): 108–117. doi:10.1038/scientificamerican0869-108. Bibcode1969SciAm.221b.108B.
|
|
# Amar and Rajesh are running on a circular track. Their 8th point of meeting
Manager
Joined: 24 Jan 2013
Posts: 95
Location: India
GMAT 1: 660 Q49 V31
Amar and Rajesh are running on a circular track. Their 8th point of me [#permalink]
Updated on: 31 Mar 2020, 11:15
Amar and Rajesh are running on a circular track. Their 8th point of meeting is the same as their 20th. If they are running in opposite directions and the ratio of their speeds is x:1 (x is a natural number), which of the following cannot be the value of x?
a. 5
b. 2
c. 4
d. 11
e. 3
Originally posted by zs2 on 31 Mar 2020, 11:13.
Last edited by Bunuel on 31 Mar 2020, 11:15, edited 1 time in total.
Renamed the topic and edited the question.
DS Forum Moderator
Joined: 19 Oct 2018
Posts: 1997
Location: India
Re: Amar and Rajesh are running on a circular track. Their 8th point of me [#permalink]
31 Mar 2020, 21:06
Total number of distinct meeting points = x+1.
Since their 8th meeting point is the same as their 20th, 8 and 20 leave the same remainder when divided by x+1:
8 = a*(x+1) + k, where a and k are non-negative integers with k < x+1;
20 = b*(x+1) + k, where b is a non-negative integer.
Subtracting the two equations:
12 = (b-a)*(x+1)
so x+1 is a factor of 12.
x+1 can be 1, 2, 3, 4, 6 or 12, so
x can be 0 (rejected), 1, 2, 3, 5 or 11. Hence x = 4 cannot be the ratio, and the answer is (c).
zs2 wrote:
Amar and Rajesh are running on a circular track. Their 8th point of meeting is the same as their 20th. If they are running in opposite directions and the ratio of their speeds is x:1 (x is a natural number), which of the following cannot be the value of x?
a. 5
b. 2
c. 4
d. 11
e. 3
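A brute-force check of this logic (my own sketch, not from the thread): on a unit track with speeds x and 1 in opposite directions, the k-th meeting happens at time k/(x+1), when the faster runner is at position k·x/(x+1) mod 1.

```python
from fractions import Fraction

def same_meeting_point(x, m, n):
    # Position of the faster runner at the k-th meeting, on a unit track.
    pos = lambda k: Fraction(k * x, x + 1) % 1
    return pos(m) == pos(n)

for x in (5, 2, 4, 11, 3):
    print(x, same_meeting_point(x, 8, 20))
# Only x = 4 gives False, so 4 cannot be the ratio.
```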
Senior Manager
Joined: 14 Oct 2019
Posts: 406
Location: India
GPA: 4
WE: Engineering (Energy and Utilities)
Re: Amar and Rajesh are running on a circular track. Their 8th point of me [#permalink]
31 Mar 2020, 21:11
nick1816 can you please explain "total number of distinct meetings = x+1 " ?
DS Forum Moderator
Joined: 19 Oct 2018
Posts: 1997
Location: India
Re: Amar and Rajesh are running on a circular track. Their 8th point of me [#permalink]
31 Mar 2020, 21:32
Suppose circumference of the track = C
Time taken by them to meet for the first time $= \frac{C}{x+1}$ (since they are moving in opposite directions, their relative speed is $x+1$).
Time taken by them to meet at the starting point for the first time =$$LCM(\frac{C}{x}, \frac{C}{1}) = \frac{LCM(C,C)}{HCF(x,1) }= C$$
Number of times they will meet before meeting at the starting point $= \frac{C}{C/(x+1)} = x+1$
preetamsaha wrote:
nick1816 can you please explain "total number of distinct meetings = x+1 " ?
Senior Manager
Joined: 14 Oct 2019
Posts: 406
Location: India
GPA: 4
WE: Engineering (Energy and Utilities)
Re: Amar and Rajesh are running on a circular track. Their 8th point of me [#permalink]
01 Apr 2020, 11:22
nick1816 thanks for the explanation.
|
|
Machine-learning 5 minutes
The bias-variance-noise decomposition
The MSE loss is attractive because the expected error in prediction can be explained by the bias-variance of the model and the variance of the noise. This is called the bias-variance-noise decomposition. In this article, we will introduce this decomposition using the tools of probability theory.
In short, when $\ry = \ff(\rvx) + \epsilon$, the bias-variance-noise decomposition is:
Notations
Let $(\rvx, \ry)$ be a pair of random variables on $\realvset{\sd} \times \realset$.
Assume there exists a $0$-mean random noise $\epsilon$ and a function $\ff$ such that:
The goal of a regression is to use a sample $\trainset$ to estimate this function:
For instance, in a linear regression the function $\ff$ is a linear function with parameter $\vw$:
And the regression aims at estimating $\vw$ from the training-set:
Once the function $\ff_{\trainset}$ is estimated, we can measure the error between a prediction $\hat{\sy}_{\trainset} = \ff_{\trainset}(\vx)$ and the true value $\sy$:
The expected error in prediction is:
Define $A$ as a shorthand.
$A$ does not depend on $\epsilon$ and $\epsilon$ does not depend on $\sets$, so:
Recall that $\expectation[\epsilon] = 0$:
Since $\epsilon$ is a $0$-mean noise we have:
Hence:
Finally, the term $\expectation_{\sets}[A^2]$ is exactly the error in estimation between $\ff$ and $\hat{\ff}$. We can express it using the bias-variance decomposition. Putting everything together, the bias-variance-noise decomposition is:
$$\expectation\left[(\sy - \hat{\sy}_{\trainset})^2\right] = \underbrace{\left(\ff(\vx) - \expectation_{\sets}[\ff_{\trainset}(\vx)]\right)^2}_{\text{bias}^2} + \underbrace{\mathrm{Var}_{\sets}\left[\ff_{\trainset}(\vx)\right]}_{\text{variance}} + \underbrace{\mathrm{Var}[\epsilon]}_{\text{noise}}$$
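As a sanity check, the decomposition can be verified numerically. Below is a minimal Monte-Carlo sketch in plain Python (not from the article; the toy model $y = 2x + \epsilon$, the sampling ranges and the helper names are all assumptions for illustration):

```python
import random
import statistics

random.seed(0)
SIGMA = 0.5                       # noise standard deviation
f = lambda x: 2.0 * x             # true regression function

def fit_slope(n=10):
    # Toy regression: least-squares slope through the origin, fitted on a
    # fresh training set of n noisy samples.
    xs = [random.uniform(0.5, 1.5) for _ in range(n)]
    ys = [f(x) + random.gauss(0.0, SIGMA) for x in xs]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

x0 = 0.7
preds, sq_errors = [], []
for _ in range(20000):
    w = fit_slope()                          # estimator from a fresh sample
    y0 = f(x0) + random.gauss(0.0, SIGMA)    # fresh noisy observation at x0
    preds.append(w * x0)
    sq_errors.append((y0 - w * x0) ** 2)

mse = statistics.fmean(sq_errors)                 # expected squared error
bias2 = (statistics.fmean(preds) - f(x0)) ** 2    # squared bias
variance = statistics.pvariance(preds)            # estimator variance
noise = SIGMA ** 2                                # irreducible noise
# mse should match bias2 + variance + noise up to Monte-Carlo error
```

With the seed fixed, the simulated MSE agrees with the sum of the three terms to within the Monte-Carlo fluctuation.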
|
|
Cos -3pi 4
Sep 03, 2020: The minimum value of 3 cos x + 4 sin x + 8 is: A. 5, B. 9, C. 7, D. 3. Find the minimum value of p for which cos(p sin x) = sin(p cos x) has a solution in [0, 2π].
Feb 27, 2017: Expand cos(4x). Rewrite cos(4x) as cos(2·2x), apply the double-angle identity cos(2u) = cos²(u) − sin²(u), and expand the square using the FOIL method and the distributive property.
$$\int (1-\sin^2(x)) \cos^4(x) \, dx = \int \cos^4(x) \,dx - \int\sin^2(x) \cos^4(x) \, dx$$ but that's the same as our original integral, so we can move it over to the LHS to get $$\frac{6}{5} \int \sin^2(x) \cos^4(x) \, dx = -\sin(x)\frac{\cos^5(x)}{5} + \frac{1}{5}\int \cos^4(x) \, dx$$ so that we now have a reduced problem. Nov 01, 2012: Given that cos x/sin x = cot x, differentiating gives (cot x)' = −cosec²x (where cosec²x = 1/sin²x), so one can substitute u = cot x, du = −cosec²x dx. By power reduction, $$\frac{1 - 2 \cos(2x) + \cos^2(2x) + \cos(2x) - 2 \cos^2(2x) + \cos^3(2x)}{8} = \frac{1 - \cos(2x) - \cos^2(2x) + \cos^3(2x)}{8}.$$ This is all the side work we need to do here, because we know that $$\int \cos^2(2x)\,dx = \frac{x}{2} + \frac{\sin(4x)}{8} + c_1 \quad\text{and}\quad \int \cos^3(2x)\,dx = \frac{\sin(2x)}{2} - \frac{\sin^3(2x)}{6} + c_2.$$
Cos π/4 Value in Radians / Degrees | Cos Values for π/4. Use this simple cos calculator to calculate the cos value for π/4 in radians / degrees.
Find the integral of cos^4(x) dx. Mar 30, 2015: I am a bit confused here. cos²x + sin²x = 1; thus, can I say cos⁴x + sin⁴x = 1 if I just take the square root of each term: √(cos⁴x) + √(sin⁴x) = √1 = 1?
Prove that: sin⁴θ − cos⁴θ = 1 − 2cos²θ.
6 May 2011, multiple-angle expansions:
cos 1x = cos x
cos 2x = −sin²x + cos²x
cos 3x = −3 cos x sin²x + cos³x
cos 4x = sin⁴x − 6 cos²x sin²x + cos⁴x
cos 5x = 5 cos x sin⁴x − 10 cos³x sin²x + cos⁵x
cos 6x = −sin⁶x + 15 cos²x sin⁴x − 15 cos⁴x sin²x + cos⁶x
cos α · cos 2α · cos 4α · cos 8α = 1/16, if α = 24 degrees.
To find the second solution, subtract the reference angle from 2π to find the solution in the fourth quadrant, then simplify. Returns Double: the cosine of d. If d is equal to NaN, NegativeInfinity, or PositiveInfinity, this method returns NaN.
Formula of cos 4x, i.e. you need the angle expressed in x. Since we know cos 2x = cos²x − sin²x and sin 2x = 2 sin x cos x, applying the above:
$$\cos 4x = \cos(2 \cdot 2x) = \cos^2 2x - \sin^2 2x = (\cos^2 x - \sin^2 x)^2 - (2 \sin x \cos x)^2 = \cos^4 x - 6\cos^2 x \sin^2 x + \sin^4 x.$$
In mathematics, trigonometric substitution is the substitution of trigonometric functions for other expressions.
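A quick numerical sanity check of the cos(4x) expansion above (a sketch in Python, not part of the quoted sources):

```python
import math

def cos4x_expanded(x):
    # cos(4x) = cos^4(x) - 6*cos^2(x)*sin^2(x) + sin^4(x)
    c, s = math.cos(x), math.sin(x)
    return c ** 4 - 6 * c ** 2 * s ** 2 + s ** 4

# The identity holds for arbitrary angles, to floating-point precision.
for x in (0.0, 0.3, 1.1, 2.5):
    assert abs(math.cos(4 * x) - cos4x_expanded(x)) < 1e-12
```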
|
|
# How do you solve x^2-x=6?
Mar 24, 2018
Set the trinomial equal to zero and factor it into two binomials.
$x = 3$ and $x = - 2$
#### Explanation:
Subtract 6 from both sides: ${x}^{2} - x - 6 = 6 - 6$. This gives
${x}^{2} - x - 6 = 0$
The C value is negative; this means one of the binomials must be negative and the other positive.
The B value is negative; this means that the negative factor is larger in magnitude than the positive one.
The magnitude of the B value is 1; this means that the two factors differ by 1.
Factors of 6 are $6 \times 1$ and $2 \times 3$
2 and 3 have a difference of 1 so these are the factors that work
The 3 must be negative because it is larger, and the 2 positive, so
$\left(x - 3\right) \times \left(x + 2\right) = 0$
Set each factor equal to zero and solve for each of the binomials to find the possible values for x.
$x - 3 = 0$ add 3 to both sides
$x - 3 + 3 = 0 + 3$
$x = 3$
$x + 2 = 0$ subtract 2 from both sides
$x + 2 - 2 = 0 - 2$ this gives
$x = - 2$
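As a final sanity check (not part of the original answer), substituting both values back into $x^2 - x = 6$ in Python:

```python
def satisfies(x):
    # Original equation: x^2 - x = 6
    return x ** 2 - x == 6

assert satisfies(3) and satisfies(-2)   # both solutions check out
assert not satisfies(2)                 # a non-solution fails
```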
|
|
If your brand’s style guide is composed mostly of subtle, delicate touches, then you’ll want to steer clear of large, heavy type. All in all, Octanis is a perfect typeface for apparel design, headlines, logos, and typographic compositions. With Octanis, you have 8 beautiful modern font styles to play around with! Just try strong Octanis Sans or its opposite, soft Octanis Sans Rounded, then maybe vintage Octanis Slab and Octanis Serif for your design projects.

Incorporating multiple font combinations into one design is a tricky business. Get it right, and your poster, website or flyer design will become so much more dynamic. Get it wrong, and things start to look messy. Pairing fonts together requires a deft touch and a keen eye, all the makings of a sophisticated graphic designer. That’s where Berlin comes in: this sans serif font is geometric and structured, and stylistically it is the opposite of the handwritten font. The handwritten font is bold and strong, so it needs something modern and basic to balance it; Berlin is minimalist and clean, perfect to pair with a font that evokes movement.

What is the opposite of a “bold-faced” font? I mean the typeface which has thinner lines for letters: a light-faced one. The opposite of bold could be timid, cautious, or reticent. Antonyms for boldface include meek, shy, timid, light, condensed, narrow, thin and roman; synonyms for bold face are bold and boldface. A font (also fount, typeface, face, case) is a specific size and style of type within a type family. Etymology: bold (English), from bold (Middle English, 1100–1500), from bold (Old English, ca. 450–1100).

How do I set a ‘semi-bold’ font via CSS? A font-weight of 600 doesn’t make it look like the semi-bold I see in my Photoshop file. Related questions: how to reference a bold version without using font-weight: bold, and how to show text in bold if @font-face is not working. The CSS shorthand is object.style.font = "italic small-caps bold 12px arial,sans-serif". The numbers in a browser-support table specify the first browser version that fully supports the property (for the font property: 1.0, 4.0, 1.0, 1.0, 3.5; see individual browser support for each value below). When lighter or bolder is specified, the chart shows how the absolute font weight of the element is determined: only four font weights are considered, thin (100), normal (400), bold (700), and heavy (900), and if a font-family has more weights available, they are ignored for the purposes of relative weight calculation.

Block-level HTML elements define sections of a page and normally start on a new line; examples include paragraphs, tables, forms and divisions.

Bold-text generator: click the “copy” button next to the bold style you want to use; your bold text is now copied to your computer’s clipboard. Go back to your profile post and paste the bold text that you copied in the previous step, and you’re done. Tip: the “sans” serif style matches Facebook’s font the best.

Formata is a sans serif font family that was specially designed for headlines and outlines. The German graphic font designer Bernd Möllenstädt took charge of designing and releasing it in 1984, and his special gift in the form of Formata has helped many designers achieve their designing tasks. Unfortunately, he is no longer part of our world: he died in 2013.

Mason comes in both a serif and a sans serif version, and each package comes with three variants, Regular, Alternate, and Super, each in two weights. Together, these fonts contain several variants for each alphabetic character, each in a capital, small cap, and superscript cap version. Andy Bold is gentle and dreamy, a bit eclectic: this script will decorate the packaging of sweets, look great on a cover and in a congratulatory text on a card, and is easy to imagine as a beautiful tattoo. Script typefaces are based upon the varied and often fluid stroke created by handwriting, pretty much like cursive fonts, just typically more elegant; they are organized into highly regular formal types similar to cursive writing and looser, more casual scripts. In most fonts, horizontal alignment of the x-height is sacred and is usually accomplished with a great deal of adjustments and optical corrections to the individual characters; Variex ignores this idea and emphasizes the opposite in the service of its overall concept. Its headset has freely proportioned characters, high contrast, and opposite deflections of faces at the junction of the main stroke. Geminian is a set of fonts that started as a simple idea on a theoretical level and developed over a long time, taking shape under a creative impulse inspired by the need to communicate, today more than ever. From an astrological point of view, it celebrates and contributes to this practice, the study of the stars’ position and movement and their influence on people’s destiny.

In LilyPond markup, \abs-fontsize #size arg uses size as the absolute font size to display arg, adjusting baseline-skip and word-space accordingly, and \bold arg switches to the bold font-series. For example: \markup { default text font size \hspace #2 \abs-fontsize #16 { text font size 16 } \hspace #2 \abs-fontsize #12 { text font size 12 } }

Hi, I’m trying to create custom markings for a model aircraft and need to figure out how to slant a font item: the numbers on the nose lean slightly towards the aft of the plane. I’m sure it’s something simple, but I’m still learning.

Looking for Opposite fonts? Click to find the best 7 free fonts in the Opposite style; every font is free to download. Google Fonts offers a wide variety of fonts for use on the web, free for personal and commercial use, so you can use them to type up your blog or for designing your next product. Download and install the Hussar Bold Opposite Oblique One font for free from FFonts.net; the Hussar Bold Opposite Oblique Four font contains 1533 beautifully designed characters (customize your own preview on FFonts.net to make sure it fits), and Hussar Bold Opposite Oblique Five (OpenType) is also a free download, downloaded 40+ times so far. Hussar Bold Opposite Oblique Five by (c) 201. Download 186,286 free fonts at ufonts.com.
|
|
# Thread: mean squared error vs. root mean square
1. ## mean squared error vs. root mean square
what is the difference please? I cannot get it
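In short: the mean squared error (MSE) is the average of the squared errors, while the root mean square error (RMSE) is just the square root of the MSE, which puts the error back into the same units as the data. A small illustration in Python (the sample numbers are made up):

```python
import math

y_true = [3.0, -0.5, 2.0, 7.0]   # observed values (made-up data)
y_pred = [2.5,  0.0, 2.0, 8.0]   # model predictions (made-up data)

# MSE: average of the squared residuals (units are squared)
mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# RMSE: square root of the MSE (same units as the data)
rmse = math.sqrt(mse)
```

Here mse is 0.375 and rmse is about 0.612; since the square root is monotonic, minimizing one minimizes the other.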
|
|
# Sums, products, limits and extrapolation¶
The functions listed here permit approximation of infinite sums, products, and other sequence limits. Use mpmath.fsum() and mpmath.fprod() for summation and multiplication of finite sequences.
## Summation¶
### nsum()¶
mpmath.nsum(f, *intervals, **options)
Computes the sum
$S = \sum_{k=a}^b f(k)$
where $$(a, b)$$ = interval, and where $$a = -\infty$$ and/or $$b = \infty$$ are allowed, or more generally
$S = \sum_{k_1=a_1}^{b_1} \cdots \sum_{k_n=a_n}^{b_n} f(k_1,\ldots,k_n)$
if multiple intervals are given.
Two examples of infinite series that can be summed by nsum(), where the first converges rapidly and the second converges slowly, are:
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> nsum(lambda n: 1/fac(n), [0, inf])
2.71828182845905
>>> nsum(lambda n: 1/n**2, [1, inf])
1.64493406684823
When appropriate, nsum() applies convergence acceleration to accurately estimate the sums of slowly convergent series. If the series is finite, nsum() currently does not attempt to perform any extrapolation, and simply calls fsum().
Multidimensional infinite series are reduced to a single-dimensional series over expanding hypercubes; if both infinite and finite dimensions are present, the finite ranges are moved innermost. For more advanced control over the summation order, use nested calls to nsum(), or manually rewrite the sum as a single-dimensional series.
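The expanding-hypercubes idea can be illustrated in plain Python (a sketch, independent of mpmath's actual implementation): a two-dimensional sum over $j, k \ge 1$ is enumerated shell by shell, where shell $m$ contains the lattice points with $\max(j, k) = m$.

```python
import math

def shell_sum_2d(f, shells):
    # Sum f(j, k) over 1 <= j, k <= shells, one "hypercube" shell at a time:
    # shell m holds every lattice point (j, k) with max(j, k) == m.
    total = 0.0
    for m in range(1, shells + 1):
        edge = [(m, k) for k in range(1, m + 1)] + [(j, m) for j in range(1, m)]
        total += sum(f(j, k) for j, k in edge)
    return total

# sum_{j,k >= 1} 1/(jk)^2 = zeta(2)^2 = (pi^2/6)^2
target = (math.pi ** 2 / 6) ** 2
approx = shell_sum_2d(lambda j, k: 1.0 / (j * j * k * k), 1000)
```

With 1000 shells the truncation error is a few times $10^{-3}$; nsum() additionally applies convergence acceleration over the shell index rather than truncating.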
Options
tol
Desired maximum final error. Defaults roughly to the epsilon of the working precision.
method
Which summation algorithm to use (described below). Default: 'richardson+shanks'.
maxterms
Cancel after at most this many terms. Default: 10*dps.
steps
An iterable giving the number of terms to add between each extrapolation attempt. The default sequence is [10, 20, 30, 40, …]. For example, if you know that approximately 100 terms will be required, efficiency might be improved by setting this to [100, 10]. Then the first extrapolation will be performed after 100 terms, the second after 110, etc.
verbose
Print details about progress.
ignore
If enabled, any term that raises ArithmeticError or ValueError (e.g. through division by zero) is replaced by a zero. This is convenient for lattice sums with a singular term near the origin.
Methods
Unfortunately, an algorithm that can efficiently sum any infinite series does not exist. nsum() implements several different algorithms that each work well in different cases. The method keyword argument selects a method.
The default method is 'r+s', i.e. both Richardson extrapolation and the Shanks transformation are attempted. A slower method that handles more cases is 'r+s+e'. For very high precision summation, or if the summation needs to be fast (for example if multiple sums need to be evaluated), it is a good idea to investigate which method works best and use only that one.
'richardson' / 'r':
Uses Richardson extrapolation. Provides useful extrapolation when $$f(k) \sim P(k)/Q(k)$$ or when $$f(k) \sim (-1)^k P(k)/Q(k)$$ for polynomials $$P$$ and $$Q$$. See richardson() for additional information.
'shanks' / 's':
Uses Shanks transformation. Typically provides useful extrapolation when $$f(k) \sim c^k$$ or when successive terms alternate signs. Is able to sum some divergent series. See shanks() for additional information.
'levin' / 'l':
Uses the Levin transformation. It performs better than the Shanks transformation for logarithmically convergent or alternating divergent series. The 'levin_variant' keyword selects the variant; valid choices are “u”, “t”, “v” and “all”, whereby “all” uses all three variants u, t and v simultaneously (this is good for performance comparison in conjunction with “verbose=True”). Instead of the Levin transform one can also use the Sidi-S transform by selecting the method 'sidi'. See levin() for additional details.
'alternating' / 'a':
This is the convergence acceleration of alternating series developed by Cohen, Villegas and Zagier. See cohen_alt() for additional details.
'euler-maclaurin' / 'e':
Uses the Euler-Maclaurin summation formula to approximate the remainder sum by an integral. This requires high-order numerical derivatives and numerical integration. The advantage of this algorithm is that it works regardless of the decay rate of $$f$$, as long as $$f$$ is sufficiently smooth. See sumem() for additional information.
'direct' / 'd':
Does not perform any extrapolation. This can be used (and should only be used for) rapidly convergent series. The summation automatically stops when the terms decrease below the target tolerance.
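The 'direct' strategy is simple enough to sketch in plain Python (an illustration, not mpmath's actual code): keep adding terms until they drop below the tolerance.

```python
import math

def direct_sum(f, a=0, tol=1e-15, maxterms=10 ** 6):
    # Plain summation with a stopping rule.  Only sound for rapidly
    # convergent series whose terms decrease monotonically in magnitude.
    total = 0.0
    for k in range(a, a + maxterms):
        term = f(k)
        total += term
        if abs(term) < tol:
            break
    return total

# sum_{n>=0} 1/n! = e; the terms shrink fast enough for direct summation
approx = direct_sum(lambda n: 1.0 / math.factorial(n))
```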
Basic examples
A finite sum:
>>> nsum(lambda k: 1/k, [1, 6])
2.45
Summation of a series going to negative infinity and a doubly infinite series:
>>> nsum(lambda k: 1/k**2, [-inf, -1])
1.64493406684823
>>> nsum(lambda k: 1/(1+k**2), [-inf, inf])
3.15334809493716
nsum() handles sums of complex numbers:
>>> nsum(lambda k: (0.5+0.25j)**k, [0, inf])
(1.6 + 0.8j)
The following sum converges very rapidly, so it is most efficient to sum it by disabling convergence acceleration:
>>> mp.dps = 1000
>>> a = nsum(lambda k: -(-1)**k * k**2 / fac(2*k), [1, inf],
... method='direct')
>>> b = (cos(1)+sin(1))/4
>>> abs(a-b) < mpf('1e-998')
True
Examples with Richardson extrapolation
Richardson extrapolation works well for sums over rational functions, as well as their alternating counterparts:
>>> mp.dps = 50
>>> nsum(lambda k: 1 / k**3, [1, inf],
... method='richardson')
1.2020569031595942853997381615114499907649862923405
>>> zeta(3)
1.2020569031595942853997381615114499907649862923405
>>> nsum(lambda n: (n + 3)/(n**3 + n**2), [1, inf],
... method='richardson')
2.9348022005446793094172454999380755676568497036204
>>> pi**2/2-2
2.9348022005446793094172454999380755676568497036204
>>> nsum(lambda k: (-1)**k / k**3, [1, inf],
... method='richardson')
-0.90154267736969571404980362113358749307373971925537
>>> -3*zeta(3)/4
-0.90154267736969571404980362113358749307373971925538
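The mechanism behind these results can be sketched in ordinary Python floats (an illustration, independent of mpmath): evaluate partial sums at $n_0, 2n_0, 4n_0, \ldots$ and eliminate the $1/n, 1/n^2, \ldots$ terms of the error expansion one power at a time.

```python
import math

def partial_sum(f, n):
    return sum(f(k) for k in range(1, n + 1))

def richardson_sum(f, n0=10, levels=6):
    # T[i][0] is the partial sum at n0 * 2**i; the recurrence
    #   T[i][j] = (2**j * T[i][j-1] - T[i-1][j-1]) / (2**j - 1)
    # cancels the 1/n**j term of the error expansion at each level.
    T = [[partial_sum(f, n0 * 2 ** i)] for i in range(levels)]
    for j in range(1, levels):
        for i in range(j, levels):
            T[i].append((2 ** j * T[i][j - 1] - T[i - 1][j - 1]) / (2 ** j - 1))
    return T[-1][-1]

# sum 1/k^2 = pi^2/6: plain summation to n = 320 is only ~3e-3 accurate,
# while the extrapolated value is accurate to near machine precision
approx = richardson_sum(lambda k: 1.0 / k ** 2)
```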
Examples with Shanks transformation
The Shanks transformation works well for geometric series and typically provides excellent acceleration for Taylor series near the border of their disk of convergence. Here we apply it to a series for $$\log(2)$$, which can be seen as the Taylor series for $$\log(1+x)$$ with $$x = 1$$:
>>> nsum(lambda k: -(-1)**k/k, [1, inf],
... method='shanks')
0.69314718055994530941723212145817656807550013436025
>>> log(2)
0.69314718055994530941723212145817656807550013436025
Here we apply it to a slowly convergent geometric series:
>>> nsum(lambda k: mpf('0.995')**k, [0, inf],
... method='shanks')
200.0
Finally, Shanks’ method works very well for alternating series where $$f(k) = (-1)^k g(k)$$, and often does so regardless of the exact decay rate of $$g(k)$$:
>>> mp.dps = 15
>>> nsum(lambda k: (-1)**(k+1) / k**1.5, [1, inf],
... method='shanks')
0.765147024625408
>>> (2-sqrt(2))*zeta(1.5)/2
0.765147024625408
The following slowly convergent alternating series has no known closed-form value. Evaluating the sum a second time at higher precision indicates that the value is probably correct:
>>> nsum(lambda k: (-1)**k / log(k), [2, inf],
... method='shanks')
0.924299897222939
>>> mp.dps = 30
>>> nsum(lambda k: (-1)**k / log(k), [2, inf],
... method='shanks')
0.92429989722293885595957018136
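The transform itself is short enough to sketch in plain Python (an illustration, independent of mpmath): one pass replaces $s_n$ by $s_{n+1} - (\Delta s_n)^2 / \Delta^2 s_{n-1}$, and iterating the pass accelerates convergence dramatically.

```python
import math

def shanks(s):
    # One pass of the Shanks transformation over a list of partial sums.
    return [
        s[i + 1] - (s[i + 1] - s[i]) ** 2 / (s[i + 1] - 2 * s[i] + s[i - 1])
        for i in range(1, len(s) - 1)
    ]

# Partial sums of the alternating harmonic series, which converges to log(2)
partial, total = [], 0.0
for k in range(1, 12):
    total += (-1) ** (k + 1) / k
    partial.append(total)

t = partial
for _ in range(3):        # iterate the transform: 11 -> 9 -> 7 -> 5 terms
    t = shanks(t)
best = t[-1]              # the raw 11-term partial sum is off by ~0.04
```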
Examples with Levin transformation
The following example calculates Euler’s constant as the constant term in the Laurent expansion of zeta(s) at s=1. This sum converges extremely slowly because of the logarithmic convergence behaviour of the Dirichlet series for zeta.
>>> mp.dps = 30
>>> z = mp.mpf(10) ** (-10)
>>> a = mp.nsum(lambda n: n**(-(1+z)), [1, mp.inf], method = "levin") - 1 / z
>>> print(mp.chop(a - mp.euler, tol = 1e-10))
0.0
Now we sum the zeta function outside its range of convergence (attention: This does not work at the negative integers!):
>>> mp.dps = 15
>>> w = mp.nsum(lambda n: n ** (2 + 3j), [1, mp.inf], method = "levin", levin_variant = "v")
>>> print(mp.chop(w - mp.zeta(-2-3j)))
0.0
The next example resums an asymptotic series expansion of an integral related to the exponential integral.
>>> mp.dps = 15
>>> z = mp.mpf(10)
>>> # exact = mp.quad(lambda x: mp.exp(-x)/(1+x/z),[0,mp.inf])
>>> exact = z * mp.exp(z) * mp.expint(1,z) # this is the symbolic expression for the integral
>>> w = mp.nsum(lambda n: (-1) ** n * mp.fac(n) * z ** (-n), [0, mp.inf], method = "sidi", levin_variant = "t")
>>> print(mp.chop(w - exact))
0.0
The following highly divergent asymptotic expansion needs some care. First, we need a copious amount of working precision. Second, the stepsize must not be chosen too large, otherwise nsum may miss the point where the Levin transform converges and reach the region where only numerical garbage is produced due to numerical cancellation.
>>> mp.dps = 15
>>> z = mp.mpf(2)
>>> # exact = mp.quad(lambda x: mp.exp( -x * x / 2 - z * x ** 4), [0,mp.inf]) * 2 / mp.sqrt(2 * mp.pi)
>>> exact = mp.exp(mp.one / (32 * z)) * mp.besselk(mp.one / 4, mp.one / (32 * z)) / (4 * mp.sqrt(z * mp.pi)) # this is the symbolic expression for the integral
>>> w = mp.nsum(lambda n: (-z)**n * mp.fac(4 * n) / (mp.fac(n) * mp.fac(2 * n) * (4 ** n)),
... [0, mp.inf], method = "levin", levin_variant = "t", workprec = 8*mp.prec, steps = [2] + [1 for x in range(1000)])
>>> print(mp.chop(w - exact))
0.0
The hypergeometric function can also be summed outside its range of convergence:
>>> mp.dps = 15
>>> z = 2 + 1j
>>> exact = mp.hyp2f1(2 / mp.mpf(3), 4 / mp.mpf(3), 1 / mp.mpf(3), z)
>>> f = lambda n: mp.rf(2 / mp.mpf(3), n) * mp.rf(4 / mp.mpf(3), n) * z**n / (mp.rf(1 / mp.mpf(3), n) * mp.fac(n))
>>> v = mp.nsum(f, [0, mp.inf], method = "levin", steps = [10 for x in range(1000)])
>>> print(mp.chop(exact-v))
0.0
Examples with Cohen’s alternating series resummation
The next example sums the alternating zeta function:
>>> v = mp.nsum(lambda n: (-1)**(n-1) / n, [1, mp.inf], method = "a")
>>> print(mp.chop(v - mp.log(2)))
0.0
The derivative of the alternating zeta function outside its range of convergence:
>>> v = mp.nsum(lambda n: (-1)**n * mp.log(n) * n, [1, mp.inf], method = "a")
>>> print(mp.chop(v - mp.diff(lambda s: mp.altzeta(s), -1)))
0.0
Examples with Euler-Maclaurin summation
The sum in the following example has the wrong rate of convergence for either Richardson or Shanks to be effective.
>>> f = lambda k: log(k)/k**2.5
>>> mp.dps = 15
>>> nsum(f, [1, inf], method='euler-maclaurin')
0.38734195032621
>>> -diff(zeta, 2.5)
0.38734195032621
Increasing steps improves speed at higher precision:
>>> mp.dps = 50
>>> nsum(f, [1, inf], method='euler-maclaurin', steps=[250])
0.38734195032620997271199237593105101319948228874688
>>> -diff(zeta, 2.5)
0.38734195032620997271199237593105101319948228874688
Divergent series
The Shanks transformation is able to sum some divergent series. In particular, it is often able to sum Taylor series beyond their radius of convergence (this is due to a relation between the Shanks transformation and Padé approximations; see pade() for an alternative way to evaluate divergent Taylor series). The Levin transform examples above also include some divergent series resummations.
Here we apply it to $$\log(1+x)$$ far outside the region of convergence:
>>> mp.dps = 50
>>> nsum(lambda k: -(-9)**k/k, [1, inf],
... method='shanks')
2.3025850929940456840179914546843642076011014886288
>>> log(10)
2.3025850929940456840179914546843642076011014886288
A particular type of divergent series that can be summed using the Shanks transformation is geometric series. The result is the same as using the closed-form formula for an infinite geometric series:
>>> mp.dps = 15
>>> for n in range(-8, 8):
... if n == 1:
... continue
... print("%s %s %s" % (mpf(n), mpf(1)/(1-n),
... nsum(lambda k: n**k, [0, inf], method='shanks')))
...
-8.0 0.111111111111111 0.111111111111111
-7.0 0.125 0.125
-6.0 0.142857142857143 0.142857142857143
-5.0 0.166666666666667 0.166666666666667
-4.0 0.2 0.2
-3.0 0.25 0.25
-2.0 0.333333333333333 0.333333333333333
-1.0 0.5 0.5
0.0 1.0 1.0
2.0 -1.0 -1.0
3.0 -0.5 -0.5
4.0 -0.333333333333333 -0.333333333333333
5.0 -0.25 -0.25
6.0 -0.2 -0.2
7.0 -0.166666666666667 -0.166666666666667
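Because the one-step exactness on geometric series is an algebraic identity, it can be checked independently of mpmath in exact rational arithmetic. The sketch below is our own helper, not mpmath code: a single Shanks step applied to the first three partial sums of a divergent geometric series already recovers $$1/(1-n)$$ exactly.

```python
from fractions import Fraction

def shanks_step(s0, s1, s2):
    # S(A_k) = (A_{k+1} A_{k-1} - A_k^2) / (A_{k+1} + A_{k-1} - 2 A_k)
    return (s2 * s0 - s1 * s1) / (s2 + s0 - 2 * s1)

for n in (-8, -3, 2, 7):
    # first three partial sums of the (divergent for |n| > 1) series sum n^k
    s = [sum(Fraction(n) ** k for k in range(m + 1)) for m in range(3)]
    print(n, shanks_step(*s))  # equals 1/(1 - n) exactly
```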
Multidimensional sums
Any combination of finite and infinite ranges is allowed for the summation indices:
>>> mp.dps = 15
>>> nsum(lambda x,y: x+y, [2,3], [4,5])
28.0
>>> nsum(lambda x,y: x/2**y, [1,3], [1,inf])
6.0
>>> nsum(lambda x,y: y/2**x, [1,inf], [1,3])
6.0
>>> nsum(lambda x,y,z: z/(2**x*2**y), [1,inf], [1,inf], [3,4])
7.0
>>> nsum(lambda x,y,z: y/(2**x*2**z), [1,inf], [3,4], [1,inf])
7.0
>>> nsum(lambda x,y,z: x/(2**z*2**y), [3,4], [1,inf], [1,inf])
7.0
Some nice examples of double series with analytic solutions or reductions to single-dimensional series (see [1]):
>>> nsum(lambda m, n: 1/2**(m*n), [1,inf], [1,inf])
1.60669515241529
>>> nsum(lambda n: 1/(2**n-1), [1,inf])
1.60669515241529
>>> nsum(lambda i,j: (-1)**(i+j)/(i**2+j**2), [1,inf], [1,inf])
0.278070510848213
>>> pi*(pi-3*ln2)/12
0.278070510848213
>>> nsum(lambda i,j: (-1)**(i+j)/(i+j)**2, [1,inf], [1,inf])
0.129319852864168
>>> altzeta(2) - altzeta(1)
0.129319852864168
>>> nsum(lambda i,j: (-1)**(i+j)/(i+j)**3, [1,inf], [1,inf])
0.0790756439455825
>>> altzeta(3) - altzeta(2)
0.0790756439455825
>>> nsum(lambda m,n: m**2*n/(3**m*(n*3**m+m*3**n)),
... [1,inf], [1,inf])
0.28125
>>> mpf(9)/32
0.28125
>>> nsum(lambda i,j: fac(i-1)*fac(j-1)/fac(i+j),
... [1,inf], [1,inf], workprec=400)
1.64493406684823
>>> zeta(2)
1.64493406684823
A hard example of a multidimensional sum is the Madelung constant in three dimensions (see [2]). The defining sum converges very slowly and only conditionally, so nsum() is lucky to obtain an accurate value through convergence acceleration. The second evaluation below uses a much more efficient, rapidly convergent 2D sum:
>>> nsum(lambda x,y,z: (-1)**(x+y+z)/(x*x+y*y+z*z)**0.5,
... [-inf,inf], [-inf,inf], [-inf,inf], ignore=True)
-1.74756459463318
>>> nsum(lambda x,y: -12*pi*sech(0.5*pi * \
... sqrt((2*x+1)**2+(2*y+1)**2))**2, [0,inf], [0,inf])
-1.74756459463318
Another example of a lattice sum in 2D:
>>> nsum(lambda x,y: (-1)**(x+y) / (x**2+y**2), [-inf,inf],
... [-inf,inf], ignore=True)
-2.1775860903036
>>> -pi*ln2
-2.1775860903036
An example of an Eisenstein series:
>>> nsum(lambda m,n: (m+n*1j)**(-4), [-inf,inf], [-inf,inf],
... ignore=True)
(3.1512120021539 + 0.0j)
References
### sumem()¶
mpmath.sumem(f, interval, tol=None, reject=10, integral=None, adiffs=None, bdiffs=None, verbose=False, error=False, _fast_abort=False)
Uses the Euler-Maclaurin formula to compute an approximation accurate to within tol (which defaults to the present epsilon) of the sum
$S = \sum_{k=a}^b f(k)$
where $$(a,b)$$ are given by interval and $$a$$ or $$b$$ may be infinite. The approximation is
$S \sim \int_a^b f(x) \,dx + \frac{f(a)+f(b)}{2} + \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(b)-f^{(2k-1)}(a)\right).$
The last sum in the Euler-Maclaurin formula is not generally convergent (a notable exception is if $$f$$ is a polynomial, in which case Euler-Maclaurin actually gives an exact result).
The summation is stopped as soon as the quotient between two consecutive terms falls below reject. That is, by default (reject = 10), the summation is continued as long as each term adds at least one decimal.
Although the correction series is not convergent, convergence to a given tolerance can often be “forced” if $$b = \infty$$ by summing up to $$a+N$$ and then applying the Euler-Maclaurin formula to the sum over the range $$(a+N+1, \ldots, \infty)$$. This procedure is implemented by nsum().
By default numerical quadrature and differentiation is used. If the symbolic values of the integral and endpoint derivatives are known, it is more efficient to pass the value of the integral explicitly as integral and the derivatives explicitly as adiffs and bdiffs. The derivatives should be given as iterables that yield $$f(a), f'(a), f''(a), \ldots$$ (and the equivalent for $$b$$).
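To make the formula concrete, here is a pure-Python sketch (the helper name em_tail and the small Bernoulli table are ours, not mpmath's) that evaluates the tail $$\sum_{k \ge 32} 1/k^2$$ using the integral, the boundary term, and just two Bernoulli corrections:

```python
import math

def em_tail(derivs, integral, terms=2):
    """Euler-Maclaurin estimate of sum_{k=a}^inf f(k), given
    integral = int_a^inf f(x) dx and derivs = [f(a), f'(a), ...],
    assuming f and all its derivatives vanish at infinity."""
    bernoulli = {2: 1/6, 4: -1/30, 6: 1/42}   # B_2, B_4, B_6
    s = integral + derivs[0] / 2
    for k in range(1, terms + 1):
        # the f^{(2k-1)}(b) terms vanish at b = infinity
        s -= bernoulli[2 * k] / math.factorial(2 * k) * derivs[2 * k - 1]
    return s

# f(k) = 1/k^2 from a = 32: f^{(n)}(a) = (-1)^n (n+1)! / a^(n+2)
a = 32
derivs = [(-1) ** n * math.factorial(n + 1) / a ** (n + 2) for n in range(4)]
approx = em_tail(derivs, 1 / a, terms=2)
print(approx)  # ~ 0.0317433665, close to zeta(2) - sum_{k<32} 1/k^2
```

With only two correction terms the error is governed by the first omitted term, roughly $$B_6/(42 \cdot 32^7)$$, i.e. about $$10^{-12}$$ here.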
Examples
Summation of an infinite series, with automatic and symbolic integral and derivative values (the second should be much faster):
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> sumem(lambda n: 1/n**2, [32, inf])
0.03174336652030209012658168043874142714132886413417
>>> I = mpf(1)/32
>>> D = ((-1)**n*fac(n+1)*32**(-2-n) for n in range(999))
>>> sumem(lambda n: 1/n**2, [32, inf], integral=I, adiffs=D)
0.03174336652030209012658168043874142714132886413417
An exact evaluation of a finite polynomial sum:
>>> sumem(lambda n: n**5-12*n**2+3*n, [-100000, 200000])
10500155000624963999742499550000.0
>>> print(sum(n**5-12*n**2+3*n for n in range(-100000, 200001)))
10500155000624963999742499550000
### sumap()¶
mpmath.sumap(f, interval, integral=None, error=False)
Evaluates an infinite series of an analytic summand f using the Abel-Plana formula
$\sum_{k=0}^{\infty} f(k) = \int_0^{\infty} f(t) dt + \frac{1}{2} f(0) + i \int_0^{\infty} \frac{f(it)-f(-it)}{e^{2\pi t}-1} dt.$
Unlike the Euler-Maclaurin formula (see sumem()), the Abel-Plana formula does not require derivatives. However, it only works when $$|f(it)-f(-it)|$$ does not increase too rapidly with $$t$$.
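As a worked illustration (a pure-Python sketch of the formula, not sumap()'s internals), take $$f(x) = 1/(1+x)^2$$, for which $$i\,(f(it)-f(-it))$$ simplifies by hand to the real expression $$4t/(1+t^2)^2$$; composite Simpson quadrature on the rapidly decaying integrand then recovers $$\sum_{k \ge 0} 1/(1+k)^2 = \zeta(2)$$:

```python
import math

def f_term(t):
    """i*(f(it) - f(-it)) / (e^(2 pi t) - 1) for f(x) = 1/(1+x)^2,
    simplified by hand to a purely real expression."""
    if t == 0.0:
        return 2 / math.pi                      # limit as t -> 0
    return 4 * t / ((1 + t * t) ** 2 * math.expm1(2 * math.pi * t))

# composite Simpson's rule on [0, 10]; the integrand decays like e^(-2 pi t)
n, a, b = 20000, 0.0, 10.0
h = (b - a) / n
acc = f_term(a) + f_term(b)
for i in range(1, n):
    acc += (4 if i % 2 else 2) * f_term(a + i * h)
I = acc * h / 3

# Abel-Plana: sum_{k>=0} 1/(1+k)^2 = int_0^inf dt/(1+t)^2 + f(0)/2 + I
S = 1.0 + 0.5 + I
print(S)  # ~ 1.6449340668, i.e. zeta(2)
```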
Examples
The Abel-Plana formula is particularly useful when the summand decreases like a power of $$k$$; for example when the sum is a pure zeta function:
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> sumap(lambda k: 1/k**2.5, [1,inf])
1.34148725725091717975677
>>> zeta(2.5)
1.34148725725091717975677
>>> sumap(lambda k: 1/(k+1j)**(2.5+2.5j), [1,inf])
(-3.385361068546473342286084 - 0.7432082105196321803869551j)
>>> zeta(2.5+2.5j, 1+1j)
(-3.385361068546473342286084 - 0.7432082105196321803869551j)
If the series is alternating, numerical quadrature along the real line is likely to give poor results, so it is better to evaluate the first term symbolically whenever possible:
>>> n=3; z=-0.75
>>> I = expint(n,-log(z))
>>> chop(sumap(lambda k: z**k / k**n, [1,inf], integral=I))
-0.6917036036904594510141448
>>> polylog(n,z)
-0.6917036036904594510141448
## Products¶
### nprod()¶
mpmath.nprod(f, interval, nsum=False, **kwargs)
Computes the product
$P = \prod_{k=a}^b f(k)$
where $$(a, b)$$ = interval, and where $$a = -\infty$$ and/or $$b = \infty$$ are allowed.
By default, nprod() uses the same extrapolation methods as nsum(), except applied to the partial products rather than partial sums, and the same keyword options as for nsum() are supported. If nsum=True, the product is instead computed via nsum() as
$P = \exp\left( \sum_{k=a}^b \log(f(k)) \right).$
This is slower, but can sometimes yield better results. It is also required (and used automatically) when Euler-Maclaurin summation is requested.
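The nsum=True mechanism is just the elementary exp/log identity. A quick pure-Python check (independent of mpmath) on the Wallis factors shows the two routes agree to rounding error:

```python
import math

# the identity behind nsum=True: a product becomes a sum of logarithms
f = lambda k: (4 * k * k) / (4 * k * k - 1)     # Wallis factors for pi/2
direct, logsum = 1.0, 0.0
for k in range(1, 10001):
    direct *= f(k)
    logsum += math.log(f(k))
print(direct, math.exp(logsum))  # both ~ pi/2; the raw partial product
                                 # converges slowly, like pi/2 * (1 - 1/(4n))
```

The slow convergence of the raw partial product is precisely why nprod() applies extrapolation on top of this.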
Examples
A simple finite product:
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> nprod(lambda k: k, [1, 4])
24.0
A large number of infinite products have known exact values, and can therefore be used as a reference. Most of the following examples are taken from MathWorld [1].
A few infinite products with simple values are:
>>> 2*nprod(lambda k: (4*k**2)/(4*k**2-1), [1, inf])
3.141592653589793238462643
>>> nprod(lambda k: (1+1/k)**2/(1+2/k), [1, inf])
2.0
>>> nprod(lambda k: (k**3-1)/(k**3+1), [2, inf])
0.6666666666666666666666667
>>> nprod(lambda k: (1-1/k**2), [2, inf])
0.5
Next, several more infinite products with more complicated values:
>>> nprod(lambda k: exp(1/k**2), [1, inf]); exp(pi**2/6)
5.180668317897115748416626
5.180668317897115748416626
>>> nprod(lambda k: (k**2-1)/(k**2+1), [2, inf]); pi*csch(pi)
0.2720290549821331629502366
0.2720290549821331629502366
>>> nprod(lambda k: (k**4-1)/(k**4+1), [2, inf])
0.8480540493529003921296502
>>> pi*sinh(pi)/(cosh(sqrt(2)*pi)-cos(sqrt(2)*pi))
0.8480540493529003921296502
>>> nprod(lambda k: (1+1/k+1/k**2)**2/(1+2/k+3/k**2), [1, inf])
1.848936182858244485224927
>>> 3*sqrt(2)*cosh(pi*sqrt(3)/2)**2*csch(pi*sqrt(2))/pi
1.848936182858244485224927
>>> nprod(lambda k: (1-1/k**4), [2, inf]); sinh(pi)/(4*pi)
0.9190194775937444301739244
0.9190194775937444301739244
>>> nprod(lambda k: (1-1/k**6), [2, inf])
0.9826842777421925183244759
>>> (1+cosh(pi*sqrt(3)))/(12*pi**2)
0.9826842777421925183244759
>>> nprod(lambda k: (1+1/k**2), [2, inf]); sinh(pi)/(2*pi)
1.838038955187488860347849
1.838038955187488860347849
>>> nprod(lambda n: (1+1/n)**n * exp(1/(2*n)-1), [1, inf])
1.447255926890365298959138
>>> exp(1+euler/2)/sqrt(2*pi)
1.447255926890365298959138
The following two products are equivalent and can be evaluated in terms of a Jacobi theta function. Pi can be replaced by any value (as long as convergence is preserved):
>>> nprod(lambda k: (1-pi**-k)/(1+pi**-k), [1, inf])
0.3838451207481672404778686
>>> nprod(lambda k: tanh(k*log(pi)/2), [1, inf])
0.3838451207481672404778686
>>> jtheta(4,0,1/pi)
0.3838451207481672404778686
This product does not have a known closed form value:
>>> nprod(lambda k: (1-1/2**k), [1, inf])
0.2887880950866024212788997
A product taken from $$-\infty$$:
>>> nprod(lambda k: 1-k**(-3), [-inf,-2])
0.8093965973662901095786805
>>> cosh(pi*sqrt(3)/2)/(3*pi)
0.8093965973662901095786805
A doubly infinite product:
>>> nprod(lambda k: exp(1/(1+k**2)), [-inf, inf])
23.41432688231864337420035
>>> exp(pi/tanh(pi))
23.41432688231864337420035
A product requiring the use of Euler-Maclaurin summation to compute an accurate value:
>>> nprod(lambda k: (1-1/k**2.5), [2, inf], method='e')
0.696155111336231052898125
References
## Limits (limit)¶
### limit()¶
mpmath.limit(f, x, direction=1, exp=False, **kwargs)
Computes an estimate of the limit
$\lim_{t \to x} f(t)$
where $$x$$ may be finite or infinite.
For finite $$x$$, limit() evaluates $$f(x + d/n)$$ for consecutive integer values of $$n$$, where the approach direction $$d$$ may be specified using the direction keyword argument. For infinite $$x$$, limit() evaluates values of $$f(\mathrm{sign}(x) \cdot n)$$.
If the approach to the limit is not sufficiently fast to give an accurate estimate directly, limit() attempts to find the limit using Richardson extrapolation or the Shanks transformation. You can select between these methods using the method keyword (see documentation of nsum() for more information).
Options
The following options are available with essentially the same meaning as for nsum(): tol, method, maxterms, steps, verbose.
If the option exp=True is set, $$f$$ will be sampled at exponentially spaced points $$n = 2^1, 2^2, 2^3, \ldots$$ instead of the linearly spaced points $$n = 1, 2, 3, \ldots$$. This can sometimes improve the rate of convergence so that limit() may return a more accurate answer (and faster). However, do note that this can only be used if $$f$$ supports fast and accurate evaluation for arguments that are extremely close to the limit point (or if infinite, very large arguments).
Examples
A basic evaluation of a removable singularity:
>>> from mpmath import *
>>> mp.dps = 30; mp.pretty = True
>>> limit(lambda x: (x-sin(x))/x**3, 0)
0.166666666666666666666666666667
Computing the exponential function using its limit definition:
>>> limit(lambda n: (1+3/n)**n, inf)
20.0855369231876677409285296546
>>> exp(3)
20.0855369231876677409285296546
A limit for $$\pi$$:
>>> f = lambda n: 2**(4*n+1)*fac(n)**4/(2*n+1)/fac(2*n)**2
>>> limit(f, inf)
3.14159265358979323846264338328
Calculating the coefficient in Stirling’s formula:
>>> limit(lambda n: fac(n) / (sqrt(n)*(n/e)**n), inf)
2.50662827463100050241576528481
>>> sqrt(2*pi)
2.50662827463100050241576528481
Evaluating Euler’s constant $$\gamma$$ using the limit representation
$\gamma = \lim_{n \rightarrow \infty } \left[ \left( \sum_{k=1}^n \frac{1}{k} \right) - \log(n) \right]$
(which converges notoriously slowly):
>>> f = lambda n: sum([mpf(1)/k for k in range(1,int(n)+1)]) - log(n)
>>> limit(f, inf)
0.577215664901532860606512090082
>>> +euler
0.577215664901532860606512090082
With default settings, the following limit converges too slowly to be evaluated accurately. Changing to exponential sampling however gives a perfect result:
>>> f = lambda x: sqrt(x**3+x**2)/(sqrt(x**3)+x)
>>> limit(f, inf)
0.992831158558330281129249686491
>>> limit(f, inf, exp=True)
1.0
## Extrapolation¶
The following functions provide a direct interface to extrapolation algorithms. nsum() and limit() essentially work by calling the following functions with an increasing number of terms until the extrapolated limit is accurate enough.
The following functions may be useful to call directly if the precise number of terms needed to achieve a desired accuracy is known in advance, or if one wishes to study the convergence properties of the algorithms.
### richardson()¶
mpmath.richardson(seq)
Given a list seq of the first $$N$$ elements of a slowly convergent infinite sequence, richardson() computes the $$N$$-term Richardson extrapolate for the limit.
richardson() returns $$(v, c)$$ where $$v$$ is the estimated limit and $$c$$ is the magnitude of the largest weight used during the computation. The weight provides an estimate of the precision lost to cancellation. Due to cancellation effects, the sequence must typically be computed at a much higher precision than the target accuracy of the extrapolation.
Applicability and issues
The $$N$$-step Richardson extrapolation algorithm used by richardson() is described in [1].
Richardson extrapolation only works for a specific type of sequence, namely one converging like partial sums of $$P(1)/Q(1) + P(2)/Q(2) + \ldots$$ where $$P$$ and $$Q$$ are polynomials. When the sequence does not converge at such a rate, richardson() generally produces garbage.
Richardson extrapolation has the advantage of being fast: the $$N$$-term extrapolate requires only $$O(N)$$ arithmetic operations, and usually produces an estimate that is accurate to $$O(N)$$ digits. Contrast with the Shanks transformation (see shanks()), which requires $$O(N^2)$$ operations.
richardson() is unable to produce an estimate for the approximation error. One way to estimate the error is to perform two extrapolations with slightly different $$N$$ and compare the results.
Richardson extrapolation does not work for oscillating sequences. As a simple workaround, richardson() detects if the last three elements do not differ monotonically, and in that case applies extrapolation only to the even-index elements.
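The weight structure is simple enough to sketch in a few lines of pure Python (this mirrors, but is not, mpmath's implementation). In exact rational arithmetic the $$N$$-term extrapolate annihilates the correction terms $$c_1/k + \ldots + c_{N-1}/k^{N-1}$$, so a sequence of the stated $$P/Q$$ type is recovered exactly:

```python
from fractions import Fraction
from math import factorial

def richardson_extrapolate(seq):
    """N-term Richardson extrapolate of seq = [A_1, ..., A_N], using
    the weights (-1)^(k+N) k^N / (k! (N-k)!); a pure-Python sketch,
    not mpmath's richardson()."""
    N = len(seq)
    total = Fraction(0)
    for k in range(1, N + 1):
        weight = Fraction((-1) ** (k + N) * k ** N,
                          factorial(k) * factorial(N - k))
        total += weight * seq[k - 1]
    return total

# A_k = 3 + 1/k + 1/k^2 has exactly the polynomial convergence rate
# described above, so six terms recover the limit 3 exactly.
seq = [Fraction(3) + Fraction(1, k) + Fraction(1, k * k) for k in range(1, 7)]
print(richardson_extrapolate(seq))  # -> 3
```

The large alternating weights also make the cancellation issue visible: for floating-point input the intermediate terms are far larger than the result, which is why extra working precision is needed.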
Example
Applying Richardson extrapolation to the Leibniz series for $$\pi$$:
>>> from mpmath import *
>>> mp.dps = 30; mp.pretty = True
>>> S = [4*sum(mpf(-1)**n/(2*n+1) for n in range(m))
... for m in range(1,30)]
>>> v, c = richardson(S[:10])
>>> v
3.2126984126984126984126984127
>>> nprint([v-pi, c])
[0.0711058, 2.0]
>>> v, c = richardson(S[:30])
>>> v
3.14159265468624052829954206226
>>> nprint([v-pi, c])
[1.09645e-9, 20833.3]
References
1. [BenderOrszag] pp. 375-376
### shanks()¶
mpmath.shanks(seq, table=None, randomized=False)
Given a list seq of the first $$N$$ elements of a slowly convergent infinite sequence $$(A_k)$$, shanks() computes the iterated Shanks transformation $$S(A), S(S(A)), \ldots, S^{N/2}(A)$$. The Shanks transformation often provides strong convergence acceleration, especially if the sequence is oscillating.
The iterated Shanks transformation is computed using the Wynn epsilon algorithm (see [1]). shanks() returns the full epsilon table generated by Wynn’s algorithm, which can be read off as follows:
• The table is a list of lists forming a lower triangular matrix, where higher row and column indices correspond to more accurate values.
• The columns with even index hold dummy entries (required for the computation) and the columns with odd index hold the actual extrapolates.
• The last element in the last row is typically the most accurate estimate of the limit.
• The difference to the third last element in the last row provides an estimate of the approximation error.
• The magnitude of the second last element provides an estimate of the numerical accuracy lost to cancellation.
For convenience, the extrapolation is stopped at an odd index so that shanks(seq)[-1][-1] always gives an estimate of the limit.
Optionally, an existing table can be passed to shanks(). This can be used to efficiently extend a previous computation after new elements have been appended to the sequence. The table will then be updated in-place.
The Shanks transformation
The Shanks transformation is defined as follows (see [2]): given the input sequence $$(A_0, A_1, \ldots)$$, the transformed sequence is given by
$S(A_k) = \frac{A_{k+1}A_{k-1}-A_k^2}{A_{k+1}+A_{k-1}-2 A_k}$
The Shanks transformation gives the exact limit $$A_{\infty}$$ in a single step if $$A_k = A + a q^k$$. Note in particular that it extrapolates the exact sum of a geometric series in a single step.
Applying the Shanks transformation once often improves convergence substantially for an arbitrary sequence, but the optimal effect is obtained by applying it iteratively: $$S(S(A_k)), S(S(S(A_k))), \ldots$$.
Wynn’s epsilon algorithm provides an efficient way to generate the table of iterated Shanks transformations. It reduces the computation of each element to essentially a single division, at the cost of requiring dummy elements in the table. See [1] for details.
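A pure-Python sketch of the iterated transformation, using the defining formula directly rather than Wynn's epsilon algorithm (so this is not mpmath's implementation), shows the acceleration on the Leibniz series for $$\pi$$:

```python
from fractions import Fraction
import math

def shanks_pass(A):
    """One Shanks pass: S(A)_k for the interior indices of A."""
    return [(A[k + 1] * A[k - 1] - A[k] ** 2) /
            (A[k + 1] + A[k - 1] - 2 * A[k]) for k in range(1, len(A) - 1)]

# partial sums of the Leibniz series 4*sum (-1)^n/(2n+1), in exact
# rational arithmetic to sidestep cancellation
S0 = [4 * sum(Fraction(-1) ** n / (2 * n + 1) for n in range(m))
      for m in range(1, 10)]
S3 = shanks_pass(shanks_pass(shanks_pass(S0)))  # three iterations, 9 -> 3 terms
print(float(S3[-1]))  # ~ 3.14159..., far better than the raw partial sums
```

Each pass shortens the sequence by two terms, which is why the epsilon table in the doctest below is triangular.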
Precision issues
Due to cancellation effects, the sequence must typically be computed at a much higher precision than the target accuracy of the extrapolation.
If the Shanks transformation converges to the exact limit (such as if the sequence is a geometric series), then a division by zero occurs. By default, shanks() handles this case by terminating the iteration and returning the table it has generated so far. With randomized=True, it will instead replace the zero by a pseudorandom number close to zero. (TODO: find a better solution to this problem.)
Examples
We illustrate by applying Shanks transformation to the Leibniz series for $$\pi$$:
>>> from mpmath import *
>>> mp.dps = 50
>>> S = [4*sum(mpf(-1)**n/(2*n+1) for n in range(m))
... for m in range(1,30)]
>>>
>>> T = shanks(S[:7])
>>> for row in T:
... nprint(row)
...
[-0.75]
[1.25, 3.16667]
[-1.75, 3.13333, -28.75]
[2.25, 3.14524, 82.25, 3.14234]
[-2.75, 3.13968, -177.75, 3.14139, -969.937]
[3.25, 3.14271, 327.25, 3.14166, 3515.06, 3.14161]
The extrapolated accuracy is about 4 digits, and about 4 digits may have been lost due to cancellation:
>>> L = T[-1]
>>> nprint([abs(L[-1] - pi), abs(L[-1] - L[-3]), abs(L[-2])])
[2.22532e-5, 4.78309e-5, 3515.06]
Now we extend the computation:
>>> T = shanks(S[:25], T)
>>> L = T[-1]
>>> nprint([abs(L[-1] - pi), abs(L[-1] - L[-3]), abs(L[-2])])
[3.75527e-19, 1.48478e-19, 2.96014e+17]
The value for pi is now accurate to 18 digits. About 18 digits may also have been lost to cancellation.
Here is an example with a geometric series, where the convergence is immediate (the sum is exactly 1):
>>> mp.dps = 15
>>> for row in shanks([0.5, 0.75, 0.875, 0.9375, 0.96875]):
... nprint(row)
[4.0]
[8.0, 1.0]
References
1. [GravesMorris]
2. [BenderOrszag] pp. 368-375
### levin()¶
mpmath.levin(method='levin', variant='u')
This interface implements Levin’s (nonlinear) sequence transformation for convergence acceleration and summation of divergent series. It performs better than the Shanks/Wynn-epsilon algorithm for logarithmically convergent or alternating divergent series.
Let A be the series we want to sum:
$A = \sum_{k=0}^{\infty} a_k$
Attention: all $$a_k$$ must be non-zero!
Let $$s_n$$ be the partial sums of this series:
$s_n = \sum_{k=0}^n a_k.$
Methods
Calling levin returns an object with the following methods.
update(...) works with the list of individual terms $$a_k$$ of A, and update_step(...) works with the list of partial sums $$s_k$$ of A:
v, e = ...update([a_0, a_1,..., a_k])
v, e = ...update_psum([s_0, s_1,..., s_k])
step(...) works with the individual terms $$a_k$$ and step_psum(...) works with the partial sums $$s_k$$:
v, e = ...step(a_k)
v, e = ...step_psum(s_k)
v is the current estimate for A, and e is an error estimate which is simply the difference between the current estimate and the last estimate. One should not mix update, update_psum, step and step_psum.
A word of caution
One can only hope for good results (i.e. convergence acceleration or resummation) if the $$s_n$$ have some well-defined asymptotic behavior for large $$n$$ and are not erratic or random. Furthermore, one usually needs very high working precision because of the numerical cancellation; if the working precision is insufficient, levin may silently produce numerical garbage. Even if the Levin transformation converges, in the general case there is no proof that the result is mathematically sound; only for very special classes of problems (for example Stieltjes-type integrals) can one prove that the Levin transformation converges to the expected result. Finally, the Levin transform is quite expensive (i.e. slow) in comparison to Shanks/Wynn-epsilon, Richardson & co. In summary, the Levin transformation is powerful but unreliable, and it may need a copious amount of working precision.
The Levin transform has several variants differing in the choice of weights. Some variants are better suited for the possible flavours of convergence behaviour of A than other variants:
convergence behaviour    levin-u   levin-t   levin-v   shanks/wynn-epsilon
logarithmic                 +         -         +               -
linear                      +         +         +               +
alternating divergent       +         +         +               +

"+" means the variant is suitable, "-" means the variant is not suitable;
for comparison the Shanks/Wynn-epsilon transform is listed, too.
The variant is controlled through the variant keyword (i.e. variant="u", variant="t" or variant="v"). Overall “u” is probably the best choice.
Finally it is possible to use the Sidi-S transform instead of the Levin transform by using the keyword method='sidi'. The Sidi-S transform works better than the Levin transformation for some divergent series (see the examples).
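For orientation, one common textbook formulation of the Levin t-transform (see [1]) can be sketched in pure Python. The function below uses our own indexing convention and the remainder estimates $$\omega_j = a_j$$; it is a sketch of the idea, not mpmath's implementation:

```python
from fractions import Fraction
import math

def levin_t(a):
    """Levin t-transform built from all of a[0..k]: weights are
    (-1)^j C(k,j) ((j+1)/(k+1))^(k-1), remainder estimates w_j = a_j."""
    k = len(a) - 1
    s = list(a)
    for j in range(1, len(s)):            # partial sums s_j
        s[j] += s[j - 1]
    num = den = Fraction(0)
    for j in range(k + 1):
        c = (-1) ** j * math.comb(k, j) * Fraction(j + 1, k + 1) ** (k - 1)
        num += c * s[j] / a[j]
        den += c / a[j]
    return num / den

# alternating series for log(2): eleven terms suffice for many digits
terms = [Fraction((-1) ** (n + 1), n) for n in range(1, 12)]
v = float(levin_t(terms))
print(v)  # ~ 0.6931471805..., very close to log(2)
```

Exact rational arithmetic is used here so that the cancellation warned about above cannot contaminate the weights; with floats, extra working precision would be required instead.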
Parameters:
method "levin" or "sidi" chooses either the Levin or the Sidi-S transformation
variant "u","t" or "v" chooses the weight variant.
The Levin transform is also accessible through the nsum interface. method="l" or method="levin" selects the normal Levin transform while method="sidi" selects the Sidi-S transform. The variant is in both cases selected through the levin_variant keyword. The stepsize in nsum() must not be chosen too large, otherwise it will miss the point where the Levin transform converges, resulting in numerical overflow/garbage. For highly divergent series a copious amount of working precision must be chosen.
Examples
First we sum the zeta function:
>>> from mpmath import mp
>>> mp.prec = 53
>>> eps = mp.mpf(mp.eps)
>>> with mp.extraprec(2 * mp.prec): # levin needs a high working precision
... L = mp.levin(method = "levin", variant = "u")
... S, s, n = [], 0, 1
... while 1:
... s += mp.one / (n * n)
... n += 1
... S.append(s)
... v, e = L.update_psum(S)
... if e < eps:
... break
... if n > 1000: raise RuntimeError("iteration limit exceeded")
>>> print(mp.chop(v - mp.pi ** 2 / 6))
0.0
>>> w = mp.nsum(lambda n: 1 / (n*n), [1, mp.inf], method = "levin", levin_variant = "u")
>>> print(mp.chop(v - w))
0.0
Now we sum the zeta function outside its range of convergence (attention: This does not work at the negative integers!):
>>> eps = mp.mpf(mp.eps)
>>> with mp.extraprec(2 * mp.prec): # levin needs a high working precision
... L = mp.levin(method = "levin", variant = "v")
... A, n = [], 1
... while 1:
... s = mp.mpf(n) ** (2 + 3j)
... n += 1
... A.append(s)
... v, e = L.update(A)
... if e < eps:
... break
... if n > 1000: raise RuntimeError("iteration limit exceeded")
>>> print(mp.chop(v - mp.zeta(-2-3j)))
0.0
>>> w = mp.nsum(lambda n: n ** (2 + 3j), [1, mp.inf], method = "levin", levin_variant = "v")
>>> print(mp.chop(v - w))
0.0
Now we sum the divergent asymptotic expansion of an integral related to the exponential integral (see also [2] p.373). The Sidi-S transform works best here:
>>> z = mp.mpf(10)
>>> exact = mp.quad(lambda x: mp.exp(-x)/(1+x/z),[0,mp.inf])
>>> # exact = z * mp.exp(z) * mp.expint(1,z) # this is the symbolic expression for the integral
>>> eps = mp.mpf(mp.eps)
>>> with mp.extraprec(2 * mp.prec): # high working precisions are mandatory for divergent resummation
... L = mp.levin(method = "sidi", variant = "t")
... n = 0
... while 1:
... s = (-1)**n * mp.fac(n) * z ** (-n)
... v, e = L.step(s)
... n += 1
... if e < eps:
... break
... if n > 1000: raise RuntimeError("iteration limit exceeded")
>>> print(mp.chop(v - exact))
0.0
>>> w = mp.nsum(lambda n: (-1) ** n * mp.fac(n) * z ** (-n), [0, mp.inf], method = "sidi", levin_variant = "t")
>>> print(mp.chop(v - w))
0.0
Another highly divergent integral is also summable:
>>> z = mp.mpf(2)
>>> eps = mp.mpf(mp.eps)
>>> exact = mp.quad(lambda x: mp.exp( -x * x / 2 - z * x ** 4), [0,mp.inf]) * 2 / mp.sqrt(2 * mp.pi)
>>> # exact = mp.exp(mp.one / (32 * z)) * mp.besselk(mp.one / 4, mp.one / (32 * z)) / (4 * mp.sqrt(z * mp.pi)) # this is the symbolic expression for the integral
>>> with mp.extraprec(7 * mp.prec): # we need copious amount of precision to sum this highly divergent series
... L = mp.levin(method = "levin", variant = "t")
... n, s = 0, 0
... while 1:
... s += (-z)**n * mp.fac(4 * n) / (mp.fac(n) * mp.fac(2 * n) * (4 ** n))
... n += 1
... v, e = L.step_psum(s)
... if e < eps:
... break
... if n > 1000: raise RuntimeError("iteration limit exceeded")
>>> print(mp.chop(v - exact))
0.0
>>> w = mp.nsum(lambda n: (-z)**n * mp.fac(4 * n) / (mp.fac(n) * mp.fac(2 * n) * (4 ** n)),
... [0, mp.inf], method = "levin", levin_variant = "t", workprec = 8*mp.prec, steps = [2] + [1 for x in range(1000)])
>>> print(mp.chop(v - w))
0.0
These examples run with 15-20 decimal digits precision. For higher precision the working precision must be raised.
Examples for nsum
Here we calculate Euler’s constant as the constant term in the Laurent expansion of $$\zeta(s)$$ at $$s=1$$. This sum converges extremely slowly because of the logarithmic convergence behaviour of the Dirichlet series for zeta:
>>> mp.dps = 30
>>> z = mp.mpf(10) ** (-10)
>>> a = mp.nsum(lambda n: n**(-(1+z)), [1, mp.inf], method = "l") - 1 / z
>>> print(mp.chop(a - mp.euler, tol = 1e-10))
0.0
The Sidi-S transform performs excellently for the alternating series of $$\log(2)$$:
>>> a = mp.nsum(lambda n: (-1)**(n-1) / n, [1, mp.inf], method = "sidi")
>>> print(mp.chop(a - mp.log(2)))
0.0
Hypergeometric series can also be summed outside their range of convergence. The stepsize in nsum() must not be chosen too large, otherwise it will miss the point where the Levin transform converges, resulting in numerical overflow/garbage:
>>> z = 2 + 1j
>>> exact = mp.hyp2f1(2 / mp.mpf(3), 4 / mp.mpf(3), 1 / mp.mpf(3), z)
>>> f = lambda n: mp.rf(2 / mp.mpf(3), n) * mp.rf(4 / mp.mpf(3), n) * z**n / (mp.rf(1 / mp.mpf(3), n) * mp.fac(n))
>>> v = mp.nsum(f, [0, mp.inf], method = "levin", steps = [10 for x in range(1000)])
>>> print(mp.chop(exact-v))
0.0
References:
[1] E.J. Weniger - “Nonlinear Sequence Transformations for the Acceleration of
Convergence and the Summation of Divergent Series” arXiv:math/0306302
[2] A. Sidi - “Practical Extrapolation Methods”
[3] H.H.H. Homeier - “Scalar Levin-Type Sequence Transformations” arXiv:math/0005209
### cohen_alt()¶
mpmath.cohen_alt()
This interface implements the convergence acceleration of alternating series as described in H. Cohen, F.R. Villegas, D. Zagier - “Convergence Acceleration of Alternating Series”. This series transformation only works well if the individual terms of the series have an alternating sign. It belongs to the class of linear series transformations (in contrast to the Shanks/Wynn-epsilon or Levin transforms). This series transformation is also able to sum some types of divergent series. See the paper for the conditions under which this resummation is mathematically sound.
Let A be the series we want to sum:
$A = \sum_{k=0}^{\infty} a_k$
Let $$s_n$$ be the partial sums of this series:
$s_n = \sum_{k=0}^n a_k.$
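The heart of the method (Algorithm 1 in the Cohen-Villegas-Zagier paper) fits in a few lines; the sketch below is our own floating-point transcription, not mpmath's internal code. Given $$n$$ positive terms it approximates $$\sum_k (-1)^k a_k$$ with error roughly $$(3+\sqrt{8})^{-n}$$:

```python
import math

def cvz_alt_sum(a):
    """Cohen-Villegas-Zagier acceleration of sum_k (-1)^k a[k],
    using all n = len(a) terms (a[k] > 0 assumed)."""
    n = len(a)
    d = (3 + math.sqrt(8)) ** n
    d = (d + 1 / d) / 2
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c                # Chebyshev-derived weight c_k
        s += c * a[k]
        b *= (k + n) * (k - n) / ((k + 0.5) * (k + 1))
    return s / d

# sum_k (-1)^k / (k+1) = log(2); twenty terms already give ~13+ digits
v = cvz_alt_sum([1 / (k + 1) for k in range(20)])
print(v)  # ~ 0.69314718056
```

Note the contrast with Shanks or Levin: the weights here do not depend on the terms themselves, which is what makes the transformation linear.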
Interface
Calling cohen_alt returns an object with the following methods.
update(...) works with the list of individual terms $$a_k$$ and update_psum(...) works with the list of partial sums $$s_k$$:
v, e = ...update([a_0, a_1,..., a_k])
v, e = ...update_psum([s_0, s_1,..., s_k])
v is the current estimate for A, and e is an error estimate which is simply the difference between the current estimate and the last estimate.
Examples
Here we compute the alternating zeta function using update_psum:
>>> from mpmath import mp
>>> AC = mp.cohen_alt()
>>> S, s, n = [], 0, 1
>>> while 1:
... s += -((-1) ** n) * mp.one / (n * n)
... n += 1
... S.append(s)
... v, e = AC.update_psum(S)
... if e < mp.eps:
... break
... if n > 1000: raise RuntimeError("iteration limit exceeded")
>>> print(mp.chop(v - mp.pi ** 2 / 12))
0.0
Here we compute the product $$\prod_{n=1}^{\infty} \Gamma(1+1/(2n-1)) / \Gamma(1+1/(2n))$$:
>>> A = []
>>> AC = mp.cohen_alt()
>>> n = 1
>>> while 1:
... A.append( mp.loggamma(1 + mp.one / (2 * n - 1)))
... A.append(-mp.loggamma(1 + mp.one / (2 * n)))
... n += 1
... v, e = AC.update(A)
... if e < mp.eps:
... break
... if n > 1000: raise RuntimeError("iteration limit exceeded")
>>> v = mp.exp(v)
>>> print(mp.chop(v - 1.06215090557106, tol = 1e-12))
0.0
cohen_alt is also accessible through the nsum() interface:
>>> v = mp.nsum(lambda n: (-1)**(n-1) / n, [1, mp.inf], method = "a")
>>> print(mp.chop(v - mp.log(2)))
0.0
>>> v = mp.nsum(lambda n: (-1)**n / (2 * n + 1), [0, mp.inf], method = "a")
>>> print(mp.chop(v - mp.pi / 4))
0.0
>>> v = mp.nsum(lambda n: (-1)**n * mp.log(n) * n, [1, mp.inf], method = "a")
>>> print(mp.chop(v - mp.diff(lambda s: mp.altzeta(s), -1)))
0.0
|
|
# Code smell when recursively returning a List
So this is part of my AVL tree implementation, namely the infix representation of the tree. Since I wanted to use a recursive method to stay close to the original algorithm and want to return a list, I have to store the values in a parameter.
The smelly part is in getInfix when I reference that parameter to return the list. I am not even sure what's the name of this code smell but it feels wrong. I am literally only calling getInfixInternal for its side effect! But I can't simply return a list either...
How can I make this better?
private List<T> getInfix() {
List<T> result = new ArrayList<>();
getInfixInternal(root, result);
return result;
}
private void getInfixInternal(Node node, List<T> infixValues) {
    if (node != null) {
        getInfixInternal(node.getLeft(), infixValues);
        infixValues.add(node.getValue());
        getInfixInternal(node.getRight(), infixValues);
    }
}
My Node class (excluding the getters and setters for reasons of space):
public class Node <T extends Comparable> {
private T value;
private Node leftChild;
private Node rightChild;
private Node parent;
}
• Recursion is itself a code smell – Brad Thomas Jul 17 '16 at 16:19
• @BradThomas that statement is possibly the most offensive and stupid statement about programming that I heard in months. Congratulations. Recursion is a standard approach for method / function behaviour in all languages – Vogel612 Jul 17 '16 at 16:48
• "Standard" means nothing. Bad practice is also "standard" across the industry, in all languages. I have never come across a meaningful use of recursion (outside of academic interest) that could not be more readably and maintainably coded without the use of recursion. – Brad Thomas Jul 17 '16 at 16:50
• @Brad Thomas care to elaborate? I have read similar in the power of ten rules but cant really follow the Argumentation – AdHominem Jul 17 '16 at 17:17
• Recursion is used for the sake of purported brevity, elegance but has a heavy cost in terms of readability and maintainability. I'm not saying it's never the most appropriate solution, only that I believe it is widely over-used for most line-of-business applications. In contrast to academic study questions, these applications are often in service over many years and require considerable ongoing maintenance. IMO a simpler flat enumeration over the tree would often suffice and be easier to analyze and maintain in most cases. Being widely over-used makes it a code smell i.e. likely inappropriate – Brad Thomas Jul 17 '16 at 17:27
I wouldn't say there is a code smell with what you have now. However, it could be made more OOP-friendly.
An actual problem is that you're using raw types for Node, and that is never a good idea. So, first of all, consider having this instead:
private List<T> getInfix() {
List<T> result = new ArrayList<>();
getInfixInternal(root, result);
return result;
}
private void getInfixInternal(Node<T> node, List<T> infixValues) {
    if (node != null) {
        getInfixInternal(node.getLeft(), infixValues);
        infixValues.add(node.getValue());
        getInfixInternal(node.getRight(), infixValues);
    }
}
and
public class Node<T extends Comparable<? super T>>
Then, to avoid this getInfixInternal, you can reason about what the method getInfix means. The first thing to realize is that it works on a given Node. In fact, it is really a property of the node itself: every node can return a list of values resulting from an infix traversal of the subtree rooted at this node. Therefore, instead of having an internal method taking a Node as parameter, make it a method of Node:
public class Node<T extends Comparable<? super T>> {
// ...
public List<T> getInfix() {
    List<T> result = new ArrayList<>();
    if (leftChild != null) {
        result.addAll(leftChild.getInfix());
    }
    result.add(value);
    if (rightChild != null) {
        result.addAll(rightChild.getInfix());
    }
    return result;
}
}
With such a method, you can then have:
private List<T> getInfix() {
return root == null ? Collections.emptyList() : root.getInfix();
}
|
|
Numerical Analysis of Multiscale Computations - Bjorn Engquist
Jan 10, 2021 posted by pyjob
Numerical Analysis of Multiscale Computations: Proceedings of a Winter Workshop at the Banff International Research Station (Lecture Notes in Computational Science and Engineering, vol. 82, Springer), edited by Björn Engquist, Olof Runborg and Yen-Hsi R. Tsai, is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms and practical computational advice for analysing single and multiphysics models containing a variety of scales in time and space. Contributors include Eric Chung (Chinese University of Hong Kong) and Per Lötstedt (professor in numerical analysis, Uppsala University).
The heterogeneous multiscale methods (HMM) are a general framework for the numerical approximation of multiscale problems. This framework is demonstrated on two canonical problems: the elliptic problem with multiscale coefficients and the quasi-continuum method. It is also developed for ordinary differential equations containing different time scales. A representative chapter is Gil Ariel, Björn Engquist and Richard Tsai, “Oscillatory Systems with Three Separated Time Scales - Analysis and Computation”, pp. 23-45. Related work includes Björn Engquist and Lexing Ying, “Fast algorithms for high frequency wave propagation”, and “Nonuniform sampling and multiscale computation”, Multiscale Modeling and Simulation, 12-4.
Björn Engquist (also Bjorn Engquist; born 2 June 1945 in Stockholm) has been a leading contributor in the areas of multiscale modeling and scientific computing, and a productive educator of applied mathematicians. He received his Ph.D. in numerical analysis from Uppsala University in 1975 and was director of the Research Institute for Industrial Applications of Scientific Computing. At Princeton University, he was director of the Program in Applied and Computational Mathematics and the Princeton Institute for Computational Science, and he is now the director of the ICES Center for Numerical Analysis in Austin. A pioneer in the field of multiscale methods and partial differential equations, he is honored by a conference celebrating his many contributions.
|
|
edited Apr 4 '11 at 20:50 by Joey Adams

Produces the following error output in GCC (4.4.5):

$ g++ golf.cpp 2>&1 | wc -c
537854
$ clang golf.cpp 2>&1 | wc -c
22666
$ g++ -ftemplate-depth-10000 golf.cpp 2>&1 | wc -c  # uses 268+ MB of RAM and almost 15 minutes
200750356

Ungolfed (produces longer output):

template <class T> struct Wrap { T value; Wrap(T v) : value(v) {} };
template <class T> void func(T x) { func(Wrap<T>(x)); }
int main(void) { func(0); return 0; }

I discovered this when I wanted to see if C++ supports polymorphic recursion (and, as you can clearly see, it doesn't). Here's a trivial example of polymorphic recursion in Haskell:

Prelude> let f :: (Show a) => a -> String; f x = show x ++ " " ++ f [x]
Prelude> f 0
"0 [0] [[0]] [[[0]]] [[[[0]]]] [[[[[0]]]]] [[[[[[0]]]]]] [[[[[[[0]]]]]]] [[[[[[[[0]] ...

Here, this requires Haskell to act like it instantiates Show x, Show [x], Show [[x]], Show [[[x]]], ad infinitum. Haskell does it by turning (Show x) => into an implicit parameter to the function f added by the compiler, something like this:

type Show a = a -> String

showList :: Show a -> [a] -> String
showList show [] = "[]"
showList show (x:xs) = '[' : show x ++ showItems xs
  where showItems [] = "]"
        showItems (x:xs) = ',' : show x ++ showItems xs

f :: Show a -> a -> String
f show x = show x ++ " " ++ f (showList show) [x]

C++ does it by literally trying to construct such instances until the template instantiation depth is exceeded.

answered Apr 4 '11 at 20:15 by Joey Adams
|
|
## Stream: Is there code for X?
### Topic: Bilinear form on finsupp
#### Heather Macbeth (Mar 07 2021 at 17:38):
I'd like to know that A : (α × α) → R induces a bilinear form on α →₀ R, where the operation is to take two elements p q : α →₀ R, and sum A (i, j) * p i * q j for all i j : α. This doesn't seem to exist.
In fact I also can't seem to find the simpler fact that A : α → R induces an element of the dual of α →₀ R, where the operation is to take an element p : α →₀ R, and sum A i * p i for all i : α. This I might well have missed, since there are small amounts of finsupp material scattered across many files in the linear algebra library.
This came up while trying to extract some of the work in #6427 as general-purpose lemmas, cc @Thomas Browning.
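For intuition, the two operations described above can be sketched in ordinary Python (an illustration of the mathematics only, not mathlib code; the weights A1, A2 and the elements p, q are made-up examples), modelling a finitely supported function α →₀ R as a dict carrying only its support:

```python
# Illustration only (not Lean/mathlib): a finitely supported function
# alpha ->0 R is modelled as a dict whose keys form its support.

def pair_linear(A, p):
    """Element of the dual: sum of A(i) * p(i) over the support of p."""
    return sum(A(i) * x for i, x in p.items())

def pair_bilinear(A, p, q):
    """Bilinear form: sum of A(i, j) * p(i) * q(j) over both supports."""
    return sum(A(i, j) * x * y
               for i, x in p.items()
               for j, y in q.items())

# Made-up data for the sketch.
p = {0: 2, 1: 3}            # p = 2*e_0 + 3*e_1
q = {1: 5}                  # q = 5*e_1
A1 = lambda i: i + 1        # a hypothetical weight alpha -> R
A2 = lambda i, j: i * j     # a hypothetical kernel (alpha x alpha) -> R

print(pair_linear(A1, p))       # 1*2 + 2*3 = 8
print(pair_bilinear(A2, p, q))  # 0*2*5 + 1*3*5 = 15
```

Because the sums range only over the (finite) supports, both pairings are well defined with no convergence questions, which is exactly the point of working with α →₀ R rather than all of α → R.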
#### Heather Macbeth (Mar 07 2021 at 17:44):
Here's messy code for the second one, as an illustration:
import linear_algebra.finsupp
variables {R : Type*} [comm_ring R]
noncomputable example {α : Type*} (A : α → R) : (α →₀ R) →ₗ[R] R :=
{ to_fun := λ p, p.sum (λ i x, A i * x),
map_add' := begin
intros p q,
let g : α → R →+ R := λ i, (A i • (linear_map.id : R →ₗ[R] R)).to_add_monoid_hom,
end,
map_smul' := begin
intros c p,
let g : α → R → R := λ i x, A i * x,
have : ∀ i, g i 0 = 0 := by simp [g],
convert finsupp.sum_smul_index this,
change _ * _ = _,
rw finsupp.mul_sum,
congr,
ext i x,
simp [g, mul_comm, mul_assoc],
end }
#### Eric Wieser (Mar 07 2021 at 17:49):
The first one is almost docs#matrix.to_bilin
#### Heather Macbeth (Mar 07 2021 at 17:50):
Yes, it would be nice if a lot of the matrix code worked for finsupp on an arbitrary type rather than just on a fintype!
#### Kevin Buzzard (Mar 07 2021 at 18:04):
This would be the theory of finite rank endomorphisms of a module. I've used these in a paper. They even have characteristic polynomials -- det(1-XL) makes sense if L is a finite rank linear map -- you restrict to any finite dimensional subspace containing the image and the resulting poly is independent of the choice
#### Heather Macbeth (Mar 09 2021 at 18:25):
#6606
Last updated: May 07 2021 at 21:10 UTC
|
|
## Seminars and Colloquia by Series
Monday, January 26, 2015 - 14:00 , Location: Skiles 006 , Ina Petkova , Rice University , Organizer: John Etnyre
In joint work with Vera Vertesi, we extend the functoriality in Heegaard Floer homology by defining a Heegaard Floer invariant for tangles which satisfies a nice gluing formula. We will discuss the construction of this combinatorial invariant for tangles in S^3, D^3, and I x S^2. The special case of S^3 gives back a stabilized version of knot Floer homology.
Tuesday, January 20, 2015 - 14:05 , Location: Skiles 006 , , UCDavis , , Organizer: Stavros Garoufalidis
Among n-dimensional regions with fixed volume, which one has the least boundary? This question is known as an isoperimetric problem; its nature depends on what is meant by a "region". I will discuss variations of an isoperimetric problem known as the generalized Cartan-Hadamard conjecture: If Ω is a region in a complete, simply connected n-manifold with curvature bounded above by κ ≤ 0, then does it have the least boundary when the curvature equals κ and Ω is round? This conjecture was proven when n = 2 by Weil and Bol; when n = 3 by Kleiner, and when n = 4 and κ = 0 by Croke. In joint work with Benoit Kloeckner, we generalize Croke's result to most of the case κ < 0, and we establish a theorem for κ > 0. It was originally inspired by the problem of finding the optimal shape of a planet to maximize gravity at a single point, such as the place where the Little Prince stands on his own small planet.
Monday, January 19, 2015 - 14:05 , Location: Skiles 006 , None , None , Organizer: Dan Margalit
Friday, January 9, 2015 - 14:05 , Location: Skiles 006 , , IAS, Princeton , , Organizer: Stavros Garoufalidis
Recently, a "symplectic duality" between D-modules on certain pairs of algebraic symplectic manifolds was discovered, generalizing classic work of Beilinson-Ginzburg-Soergel in geometric representation theory. I will discuss how such dual spaces (some known and some new) arise naturally in supersymmetric gauge theory in three dimensions.
Monday, December 8, 2014 - 14:00 , Location: Skiles 006 , Emily Riehl , Harvard University , Organizer: Kirsten Wickelgren
Groups, rings, modules, and compact Hausdorff spaces have underlying sets ("forgetting" structure) and admit "free" constructions. Moreover, each type of object is completely characterized by the shadow of this free-forgetful duality cast on the category of sets, and this syntactic encoding provides formulas for direct and inverse limits. After we describe a typical encounter with adjunctions, monads, and their algebras, we introduce a new "homotopy coherent" version of this adjoint duality together with a graphical calculus that is used to define a homotopy coherent algebra in quite general contexts, such as appear in abstract homotopy theory or derived algebraic geometry.
Monday, December 1, 2014 - 14:00 , Location: Skiles 006 , Tye Lidman , University of Texas, Austin , Organizer: John Etnyre
The Lickorish-Wallace theorem states that every closed, connected, orientable three-manifold can be expressed as surgery on a link in the three-sphere (i.e., remove a neighborhood of a disjoint union of embedded $S^1$'s from $S^3$ and re-glue). It is natural to ask which three-manifolds can be obtained by surgery on a single knot in the three-sphere. We discuss a new way to obstruct integer homology spheres from being surgery on a knot and give some examples. This is joint work with Jennifer Hom and Cagri Karakurt.
Monday, November 24, 2014 - 14:05 , Location: Skiles 006 , Igor Belegradek , Georgia Tech , Organizer: Igor Belegradek
I will sketch how to detect nontrivial higher homotopy groups of the space of complete nonnegatively curved metrics on an open manifold.
Monday, November 17, 2014 - 14:00 , Location: Skiles 006 , David Gepner , Purdue University , Organizer: Kirsten Wickelgren
The algebraic K-theory of the sphere spectrum, K(S), encodes significant information in both homotopy theory and differential topology. In order to understand K(S), one can apply the techniques of chromatic homotopy theory in an attempt to approximate K(S) by certain localizations K(L_n S). The L_n S are in turn approximated by the Johnson-Wilson spectra E(n) = BP[v_n^{-1}], and it is not unreasonable to expect to be able to compute K(BP). This would lead inductively to information about K(E(n)) via the conjectural fiber sequence K(BP) --> K(BP) --> K(E(n)). In this talk, I will explain the basics of the K-theory of ring spectra, define the ring spectra of interest, and construct some actual localization sequences in their K-theory. I will then use trace methods to show that the actual fiber of K(BP) --> K(E(n)) differs from K(BP), meaning that the situation is more complicated than was originally hoped. All this is joint work with Ben Antieau and Tobias Barthel.
Monday, November 10, 2014 - 14:00 , Location: Skiles 006 , , Georgia Tech , , Organizer: Mohammad Ghomi
We prove that the torsion of any smooth closed curve in Euclidean space which bounds a simply connected locally convex surface vanishes at least 4 times (vanishing of torsion means that the first 3 derivatives of the curve are linearly dependent). This answers a question of Rosenberg related to a problem of Yau on characterizing the boundary of positively curved disks in 3-space. Furthermore, our result generalizes the 4 vertex theorem of Sedykh for convex space curves, and thus constitutes a far reaching extension of the classical 4 vertex theorem for planar curves. The proof follows from an extensive study of the structure of convex caps in a locally convex surface.
Monday, October 27, 2014 - 14:05 , Location: Skiles 006 , Evgeny Fominykh and Andrei Vesnin , Chelyabinsk State University , , Organizer: Stavros Garoufalidis
These are two half-hour talks. Evgeny's abstract: The most useful approach to a classification of 3-manifolds is the complexity theory founded by S. Matveev. Unfortunately, exact values of complexity are known only for a few infinite series of 3-manifolds. We present results on complexity for two infinite series of hyperbolic 3-manifolds with boundary. Andrei's abstract: We define coordinates on virtual braid groups. We prove that these coordinates are faithful invariants of virtual braids on two strings, and present evidence that they are also very powerful invariants for general virtual braids. The talk is based on joint work with V. Bardakov and B. Wiest.
|
|
# Math Help - Prime numbers
1. ## Prime numbers
Find all primes $p$ such that the number $p^6+6$ is also prime.
How to begin to solve this problem?
2. ## Re: Prime numbers
The way I solved this problem was to use Wolfram Alpha: Plug in expressions of the form
Factor p^6 + 6
where p = 2, 3, 5, 7, 11, 13, 17, 19 and so on until you see a pattern. Then use what you know (perhaps Fermat's Little Theorem) to prove your guess. Others might be able to see the solution more quickly, but that's how I did it.
3. ## Re: Prime numbers
Hint: Consider $p^6 + 6$ modulo some other prime $q$. Without much computation, I was able to find a $q$ such that $p^6 + 6 \equiv 0 \pmod q$ for every prime $p$ not divisible by $q$.
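Following the hint, the pattern is easy to confirm numerically. Here is a plain-Python sketch; the choice $q = 7$ is the one Fermat's Little Theorem suggests, since $p^6 \equiv 1 \pmod 7$ whenever $7 \nmid p$:

```python
# Check the hint with q = 7: for every prime p other than 7, Fermat's
# Little Theorem gives p^6 ≡ 1 (mod 7), so p^6 + 6 ≡ 0 (mod 7) and
# p^6 + 6 is composite. The leftover candidate p = 7 fails separately.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for p in (m for m in range(2, 100) if is_prime(m)):
    if p != 7:
        assert (p**6 + 6) % 7 == 0       # divisible by 7 ...
        assert not is_prime(p**6 + 6)    # ... and larger than 7, hence composite

# The remaining candidate p = 7: 7^6 + 6 = 117655 ends in 5.
assert (7**6 + 6) % 5 == 0 and not is_prime(7**6 + 6)
print("no prime p makes p^6 + 6 prime")
```

So the checks agree with the hint: every candidate is composite, and no such prime $p$ exists.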
|
|
zbMATH — the first resource for mathematics
Semantics of structured normal logic programs. (English) Zbl 1279.68190
Summary: In this paper we provide semantics for normal logic programs enriched with structuring mechanisms and scoping rules. Specifically, we consider constructive negation and expressions of the form $${Q} \supset {G}$$ in goals, where $${Q}$$ is a program unit, $${G}$$ is a goal and $${\supset}$$ stands for the so-called embedded implication. Allowing the use of these expressions can be seen as adding block structuring to logic programs. In this context, we consider static and dynamic rules for visibility in blocks. In particular, we provide new semantic definitions for the class of normal logic programs with both visibility rules. For the dynamic case we follow a standard approach. We first propose an operational semantics. Then, we define a model-theoretic semantics in terms of ordered structures, which are a kind of intuitionistic Beth structure. Finally, an (effective) fixpoint semantics is provided and we prove the equivalence of these three definitions. In order to deal with the static case, we first define an operational semantics and then we present an alternative semantics in terms of a transformation of the given structured programs into flat ones. We finish by showing that this transformation preserves the computed answers of the given static program.
Reviewer: Reviewer (Berlin)
MSC:
68Q55 Semantics in the theory of computing
68N17 Logic programming
|
|
Distributed computing of efficient routing schemes in generalized chordal graphs - Archive ouverte HAL
Journal Articles Theoretical Computer Science Year : 2012
Distributed computing of efficient routing schemes in generalized chordal graphs
Nicolas Nisse
Ivan Rapaport
Karol Suchan
Abstract
Efficient algorithms for computing routing tables should take advantage of the particular properties arising in large scale networks. Two of them are of particular interest: low (logarithmic) diameter and high clustering coefficient. High clustering coefficient implies the existence of few large induced cycles. Considering this fact, we propose here a routing scheme that computes short routes in the class of $k$-chordal graphs, i.e., graphs with no induced cycles of length more than $k$. In the class of $k$-chordal graphs, our routing scheme achieves an additive stretch of at most $k-1$, i.e., for all pairs of nodes, the length of the route never exceeds their distance plus $k-1$. In order to compute the routing tables of any $n$-node graph with diameter $D$ we propose a distributed algorithm which uses messages of size $O(\log n)$ and takes $O(D)$ time. The corresponding routing scheme achieves the stretch of $k-1$ on $k$-chordal graphs. We then propose a routing scheme that achieves a better additive stretch of $1$ in chordal graphs (notice that chordal graphs are 3-chordal graphs). In this case, the distributed computation of the routing tables takes $O(\min\{\Delta D , n\})$ time, where $\Delta$ is the maximum degree of the graph. Our routing schemes use addresses of size $\log n$ bits and local memory of size $2(d-1) \log n$ bits per node of degree $d$.
Dates and versions
hal-00741970 , version 1 (15-10-2012)
Identifiers
• HAL Id : hal-00741970 , version 1
Cite
Nicolas Nisse, Ivan Rapaport, Karol Suchan. Distributed computing of efficient routing schemes in generalized chordal graphs. Theoretical Computer Science, 2012, 444 (27), pp.17-27. ⟨hal-00741970⟩
|
|
## Linear / Quadratic Rational Function
Function: $$\displaystyle f(x) = \frac{\beta_0 + \beta_1x}{1 + \beta_2x + \beta_3x^2}, \ \ \beta_1 \neq 0, \ \beta_3 \neq 0$$
Function Family: Rational
Statistical Type: Nonlinear
Domain: $$\displaystyle (-\infty, \infty)$$
with undefined points at
$$\displaystyle x = \frac{-\beta_2 \pm \sqrt{\beta_2^2 - 4\beta_3}} {2\beta_3}$$
There will be 0, 1, or 2 real solutions to this equation, corresponding to whether
$$\displaystyle \beta_2^2 - 4\beta_3$$
is negative, zero, or positive.
Range: $$\displaystyle (-\infty, \infty)$$
Special Features:
Horizontal asymptote at:
$$\displaystyle y = 0$$
and vertical asymptotes at:
$$\displaystyle x = \frac{-\beta_2 \pm \sqrt{\beta_2^2 - 4\beta_3}} {2\beta_3}$$
There will be 0, 1, or 2 real solutions to this equation corresponding to whether
$$\displaystyle \beta_2^2 - 4\beta_3$$
is negative, zero, or positive.
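As a quick numerical check (not part of the handbook text), the undefined points can be located from the discriminant directly; the parameter values below are arbitrary illustrations, not fitted estimates:

```python
import math

# Sketch: locate the undefined points (vertical asymptotes) of the
# linear/quadratic rational function f(x) = (b0 + b1*x)/(1 + b2*x + b3*x**2).
def undefined_points(b2, b3):
    """Real roots of 1 + b2*x + b3*x**2 = 0, i.e. where f(x) is undefined."""
    disc = b2 * b2 - 4.0 * b3
    if disc < 0:
        return []                          # no real roots: no vertical asymptote
    r = math.sqrt(disc)
    # A set collapses the double root when disc == 0 (one asymptote).
    return sorted({(-b2 + r) / (2.0 * b3), (-b2 - r) / (2.0 * b3)})

print(undefined_points(b2=1.0, b3=-2.0))   # → [-0.5, 1.0]
```

The three branches (negative, zero, positive discriminant) give 0, 1, or 2 asymptotes, matching the statement above.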
|
|
# Fourier Series
Structures presque symplectiques et pinceaux de Lefschetz en dimension 4. The work was based on ideas from theoretical physics, and just a little more than 10 years later, physicists (superstring theorists to be precise) came up with even more spectacular results. The exploration is consistent with the principles of such as Paul Feyerabend (Against Method: outline of an anarchistic theory of knowledge, 1975; Conquest of Abundance: a tale of abstraction versus the richness of being, 1999) -- as previously discussed ( Value Embodiment: participatory engagement with environmental reality, 2008; Declaration of Universal Independence: delinking from detachment through radical questioning, 2009).
Publisher: Cambridge University Press (1962)
ISBN: B00AZSE3W8
Analytic Hilbert Modules (Chapman & Hall/CRC Research Notes in Mathematics Series)
Fundamentals of Topology
Cox Rings (Cambridge Studies in Advanced Mathematics)
The arrangement seems to be almost totally lacking in the kinds of regularities which one instinctively anticipates. such questions cannot easily be answered as the probabilities depend on how accessible a common fold is to different sequences. ‘random’ evolution some are more likely to accumulate than others. Equivalent proteins from related species usually have similar structures and sequences and a comparative analysis of these can tell us about residue substitutions and how the structure adapts to accommodate them Categorical Perspectives (Trends in Mathematics) http://californiajaxwax.com/library/categorical-perspectives-trends-in-mathematics. The Coverage slider as well as the QGrid slider values will have an impact on the size and accuracy of the Chamfer. The Flat Subdivision slider defines the number of grid-style subdivisions applied to the model. It creates a uniform grid across the model’s surface The Geometry of Minkowski Spacetime: An Introduction to the Mathematics of the Special Theory of Relativity (Applied Mathematical Sciences) read here.
Fractal Geometry and Number Theory
First, we put the experiment's results in the form of a string of $100$ numbers. Next, thinking mathematically, we see this string as a point in the $100$-dimensional space First Concepts of Topology: The Geometry of Mappings of Segments, Curves, Circles, and Disks (New Mathematical Library, 18) (Nml 18) read pdf. For example, a sphere is of genus $0$ and a torus is of genus $1$. The genus $g$ is directly related to the Euler-characteristic $\chi$ by the formula $\chi=2-2g$. In the case of multiple surfaces involving $K$ connected components, the total genus is related to the total Euler-characteristic by the formula: $\chi=2(K-g)$ Spaces of Kleinian Groups download here kaftanpretty.com. Similarly, given any set of functions $J\subset R$, the vanishing set of points $V(J)=\{x\in X\colon f(x)=0(x)\forall f\in J\}$ can be described as the set of points $x$ whose associated ideal $\ker x$ is contained in $J$, i.e. $V(J)=\{x\in X\colon \ker x\subset J\}$ Colloquium on Algebraic Topology, August 1-10, 1962; [lectures. http://blog.micaabuja.org/?books/colloquium-on-algebraic-topology-august-1-10-1962-lectures. Third, the study of D-branes and surface operators has led theoretical physicists and their mathematical collaborators to a deepened appreciation for higher categorical methods in organizing and constructing field theories. Because the observables of a perturbative quantum field theory form a factorization algebra, there are immediate applications of factorization methods in physics, and indeed factorization algebras provide a unifying language for many approaches to quantum field theory , source: Elliptic Curves: Function download pdf tiny-themovie.com. Any intersection of closed sets is closed. Any union of finitely many closed sets is closed. 
The equivalence of those properties with the axiomatic properties previously stated for open sets is based on the fact that the complement of an intersection is the union of the complements, whereas the complement of a union is the intersection of the complements (de Morgan's Laws).
Knots and Physics (Proceedings of the Enea Workshops on Nonlinear Dynamics)
An Introduction to Electrotechnology
Effective Algebraic Topology (Memoirs of the American Mathematical Society)
Topics in Nonlinear Analysis and Applications
Symmetries and Curvature Structure in General Relativity (World Scientific Lecture Notes in Physics)
Theory of Lattice-Ordered Groups (Chapman & Hall/CRC Pure and Applied Mathematics)
Set Theory and its Applications: Proceedings of a Conference held at York University, Ontario, Canada, Aug. 10-21, 1987 (Lecture Notes in Mathematics)
Introduction to Metric and Topological Spaces (Oxford Mathematics)
Variational Principles of Topology: Multidimensional Minimal Surface Theory (Mathematics and its Applications)
An Introduction to General Topology
Structure and Representations of Q-Groups (Lecture Notes in Mathematics)
General Topology: Chapters 1-4
Fractals and Spectra: Related to Fourier Analysis and Function Spaces (Monographs in Mathematics)
Introduction to Topology and Modern Analysis
Arrangements of Curves in the Plane- Topology, Combinatorics, and Algorithms - Primary Source Edition
An Introduction to Contact Topology (Cambridge Studies in Advanced Mathematics)
Algebraic and Analytic Geometry (London Mathematical Society Lecture Note Series)
Continuous Bounded Cohomology of Locally Compact Groups
Fundamentals of Three-dimensional Descriptive Geometry: Workbk
Proximity Approach to Problems in Topology and Analysis
Cosmological spacetimes are some of the simplest solutions to GR that we know, and even they admit all kinds of potential complexities, beyond the most obvious possibilities. This is probably a stupid question, but how can a universe be isotropic if it isn’t also homogeneous? Broadly speaking, our research — performed by undergraduates, postgraduates, postdoctoral fellows, and academic staff — is concerned with the rich interaction and deep interconnections between algebra and geometry with a view to new applications and solutions to long-standing problems. The first result is that if an R-covered Anosov flow has all free homotopy classes that are finite, then up to a finite cover the flow is topologically conjugate to either a suspension or a geodesic flow. This is a strong rigidity result that says that infinite free homotopy classes are extremely common amongst Anosov flows in 3-manifolds. I will explain recent joint work with Richard Hind, showing that a version of this result holds in all dimensions. We present several classification results for Lagrangian tori, all proven using the splitting construction from symplectic field theory. Notably, we classify Lagrangian tori in the symplectic vector space up to Hamiltonian isotopy; they are either product tori or rescalings of the Chekanov torus. I find the recent book [42] on the geometry of submanifolds quite interesting and easy to read. There are many different numerical schemes in the literature, and perhaps the most successful one in solid mechanics has been the finite element method.
There have been difficulties in using the finite element method, e.g. in problems with internal constraints like incompressibility (volume locking and checkerboarding of pressure). From the table of contents: Introduction; Analytic Categories; Analytic Topologies; Analytic Geometries; Coherent Analytic Categories; Coherent Analytic Geometries; and more. This book collects accessible lectures on four geometrically flavored fields of mathematics that have experienced great development in recent years: hyperbolic geometry, dynamics in several complex variables, convex geometry, and volume estimation. To obtain (Gromov-Witten) invariants in this non-compact setting we observe that the energy function on the moduli space of holomorphic curves in a l.c.s.m. behaves in crucial examples like a bounded-from-below proper abstract moment map in the sense of Karshon. One basic example is that of a locally conformally symplectic manifold $C \times S^{1}$ coming from a contact manifold $C$, and using our theory we obtain "counts" of Reeb orbits of $C$ with respect to a contact form $\lambda$, in case $C$ is compact. Together the α-helix and β-sheet structures are referred to as secondary structure; the hydrogen-bonded networks found in proteins are remarkably regular. The almost only other solution of structural importance in proteins (known as β structure), referred to as a β-sheet, is formed by two remote parts of the chain lining up to form a ‘ladder’ of hydrogen bonds between them, resulting in a general sheet structure; all residues also have polar atoms in their main-chain, and this includes the hydrophobic residues which we would otherwise like to see buried in the core.
|
|
# Generating a categorical variable from an imputed variable
I am using multiple imputation to impute a continuous variable ($X$) with $\approx30\%$ missing values. I have a question regarding the generation of a new categorical variable ($Y$), starting from this imputed variable.
I want to use $Y$ as an independent variable in a logistic regression model instead of $X$ (I understand I'm losing information by doing this).
This is an example of how the categories in $Y$ would work:
$$Y = \begin{cases} 0 &{\rm if}\ \ \quad\quad\quad\! X < 100 \\ 1 &{\rm if}\ \ 100\le X < 200 \\ 2 &{\rm if}\ \ 200\le X < 300 \\ 3 &{\rm if}\ \ 300 \le X \end{cases}$$
As imputed values vary between each set, I end up with a (not that much) different number of cases in each category of $Y$ on each imputed set.
Is this an appropriate approach if $X$ was imputed?
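For concreteness, this is how I derive $Y$ within each imputed dataset (a sketch with made-up data; `imputed_sets` stands in for my actual imputations):

```python
import numpy as np
import pandas as pd

# Sketch: derive the categorical Y from the (imputed) continuous X within each
# imputed dataset, using the cut points from the question. The data below are
# made up for illustration; `imputed_sets` stands in for the real imputations.
bins = [-np.inf, 100, 200, 300, np.inf]

def categorize(df):
    out = df.copy()
    # right=False gives half-open intervals [a, b), matching the definition of Y.
    out["Y"] = pd.cut(out["X"], bins=bins, labels=[0, 1, 2, 3], right=False)
    return out

imputed_sets = [pd.DataFrame({"X": [50, 150, 250, 350]}),
                pd.DataFrame({"X": [120, 80, 310, 205]})]
categorized = [categorize(df) for df in imputed_sets]
print(categorized[0]["Y"].tolist())   # → [0, 1, 2, 3]
```

Because the imputed $X$ values differ across sets, the per-category counts of $Y$ differ slightly between the categorized datasets, as described above.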
• Is there a reason you can't just use the imputed X as the variable? I don't see any advantage in this. – gung Mar 18 '14 at 14:10
• Thanks for the edit! The idea is to allow for easy interpretation and comparison with other studies by using standard reference points found in the literature. – user42148 Mar 18 '14 at 17:37
|
|
# What does Axiom of Choice mean [duplicate]
So this is a fundamental assumption in mathematics. Can someone explain informally what it actually is, please?
My guess is that it's when we say in proofs "Let $x \in X$". But I am not sure.
## marked as duplicate by Patrick Stevens, Henning Makholm, Charles, hardmath, Asaf Karagila♦ Feb 3 '16 at 21:54
• The body of a Question should be as self-contained as possible, not relying on the title to pose the essential problem. Please review How to Ask. – hardmath Feb 3 '16 at 21:53
Informally the axiom of choice says if you have a bunch of sets (possibly infinitely many) then you can choose one element from each set and build a new set out of them.
That you can do such a thing sounds at first incredibly intuitive and obvious. But over time mathematicians have realized that it is exactly that assumption that leads to a bunch of really disturbing results. Therefore some people have concluded that we shouldn't allow that assumption. However, if you don't, there are also a bunch of basic things that become much harder, if not impossible, to prove.
It's an ongoing debate with no end in sight.
• so does this mean that you can just keep making sets after making new sets? So you can make infinite amount of sets? – snowman Feb 3 '16 at 21:47
• Well sure you can always make an infinite amount of sets. But the AOC is saying something a bit more specific, it's saying from any collection of sets you can make a new set that has one element from each of them. It's that specific operation of choosing one element from each set that is involved in the AOC. – Gregory Grant Feb 3 '16 at 21:54
For the deepest results about partially ordered sets we need a new set-theoretic tool; ... We begin by observing that a set is either empty or it is not, and if it is not, then, by the definition of the empty set, there is an element in it. This remark can be generalized. If $X$ and $Y$ are sets, and if one of them is empty, then the Cartesian product $X\times Y$ is empty. If neither $X$ nor $Y$ is empty, then there is an element $x$ in $X$, and there is an element $y$ in $Y$; it follows that the ordered pair $(x,y)$ belongs to the Cartesian product $X\times Y$, so that $X\times Y$ is not empty. The preceding remarks constitute the cases $n=1$ and $n=2$ of the following assertion: if $\{ X_i\}$ is a finite sequence of sets, for $i$ in $n$, say, then a necessary and sufficient condition that their Cartesian product be empty is that at least one of them be empty. The assertion is easy to prove by induction on $n$. .... The generalization to infinite families of the non-trivial part of the assertion in the preceding paragraph (necessity) is the following important principle of set theory.
Axiom of choice: The Cartesian product of a non-empty family of non-empty sets is non-empty.
Paul Halmos Naive Set Theory page 59.
• why downvotes? ${}$ – user153330 Feb 3 '16 at 21:36
• I didn't downvote you, but I'm guessing it's because the OP asked for an "informal" explanation. – Gregory Grant Feb 3 '16 at 21:37
• I didn't downvote – snowman Feb 3 '16 at 21:37
• I'd like to register my opinion here. Unless a post is clearly vandalism or abusive then I think it's rude to downvote it without a constructive explanation. – Gregory Grant Feb 3 '16 at 21:39
• @snowman do you want me to give you the entire motivation halmos gives so that the "informal" side of the axiom of choice becomes clear to you? – user153330 Feb 3 '16 at 21:41
Suppose you have a set $X$ (which may be infinite which is when things get exciting) and another set $Y$. Suppose that for each $x\in X$ you associate a non-empty subset $Y_x$ of $Y$, then there is a function $f:X\to Y$ such that $f(x)\in Y_x$. This seemingly intuitive "fact" (and in fact, trivial for finite $X$) is the Axiom of Choice. To be specific, the existence of $f$ is what the Axiom tells you. $f$ is a function that for each $x$ "chooses" in $f(x)\in Y_x$.
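If it helps, the same statement can be written formally. Here is a sketch in Lean 4 (no extra libraries assumed): `P x y` plays the role of "$y \in Y_x$", and `Exists.choose` is implemented via `Classical.choice`, Lean's form of the axiom of choice.

```lean
-- Given a family of "sets" described by a predicate P, with a witness for
-- each x (i.e. each Y_x is non-empty), choice yields a function f with
-- P x (f x), i.e. f(x) ∈ Y_x, for every x.
example {X Y : Type} (P : X → Y → Prop) (h : ∀ x, ∃ y, P x y) :
    ∃ f : X → Y, ∀ x, P x (f x) :=
  ⟨fun x => (h x).choose, fun x => (h x).choose_spec⟩
```

For a finite $X$ this needs no axiom at all (induction suffices); it is the infinite case where choice does real work.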
• Ok, not very informal... think of having an infinite stall, with infinite amount of boxes, in each box there is a kind of fruit different from all the other boxes (and there are infinite kind of fruits). You can "choose" a shopping basket that has exactly one fruit per kind, and all kinds are accounted for. – Oskar Limka Feb 3 '16 at 21:43
|
|
## Classics in Materials Science: Harper’s experiments with Snoek pendulum
### Introduction
Dislocations are one type of defect in a crystalline solid; they distort the crystalline lattice around them; these distortions around a dislocation in a crystal could be dilatational (the distance between planes is more than what it should be) or compressive (the distance between planes is less than what it should be). Such distortions (strains) cost the system energy.
Consider a solid solution; let us say it is made up of two components; typically, there is a size difference between the different types of atoms. So, in a crystalline lattice of a solid solution, there are also strains associated with the different components.
The strain fields around the solutes/solvents and the dislocations in the crystal of a solid solution can interact — leading to an overall reduction in the energy. One such interaction was pointed out by Cottrell. Imagine a dislocation in a crystal of a solid solution; if the larger atoms of the solution sit in the dilatational regions around a dislocation and the smaller ones near the compressive regions, then, the elastic stresses can partly be relieved. This kind of migration of atoms to regions in a crystal where they can relieve the distortions in a lattice the most, leads to what is known as Cottrell atmosphere — a clustering of atoms around a dislocation.
Further, Cottrell, along with Bilby, in 1949, wrote a classic paper (which deserves a post of its own — and will be given its due in one of the posts in the future), describing the yield and strain ageing of iron by considering the ‘condensation’ of the atmosphere on dislocations; in addition, in their paper, Cottrell and Bilby also showed that the kinetics of re-condensation of the atmosphere on a dislocation that is torn from the one it initially had, can be described by a $t^{2/3}$ law, where, $t$ is the time of strain-ageing after a steel specimen has been plastically deformed.
Around the middle of the last century, an experimental verification of the predictions of Cottrell and Bilby was one of the challenging problems that faced the materials scientists (more specifically, the physical metallurgists). What was needed was a measurement of the change with time of the free carbon dissolved in iron at ambient temperatures — at temperatures where only a minute fraction of 1% of carbon can be dissolved in iron. As Cahn notes in his classic [2],
Harper performed this apparently impossible task and obtained [the required data] by using a torsional pendulum, invented just as the War began by a Dutch physicist, Snoek, …
In this post, let us take a look at the classic paper of Harper [3], how he managed to achieve what he did achieve, and discuss the relevance of Harper’s experiments then, and their continuing relevance now.
### Internal friction and stress induced interstitial diffusion
Harper, begins his paper by describing some of the earlier studies on strain ageing and in the very second paragraph sets forth how his experiment is different from the earlier ones:
By measuring the internal friction arising from the stress-induced interstitial diffusion of the solute atoms it is possible to measure the amount of dissolved solute at any time and thus conveniently and quantitatively study the precipitation process.
The key words are: internal friction (a mechanism of damping of oscillations in solid materials by converting the vibrational energy into heat) and stress-induced interstitial diffusion (the movement of atoms in interstitial positions — that is, positions in a crystalline lattice where in general there are no atoms) due to applied stress. How are these two connected? Cahn, in his commentary on the paper, explains in lucid manner, as is his wont:
Roughly speaking, the dissolved carbon atoms, being small, sit in interstitial lattice sites close to an edge of the cubic unit cell of iron, and when the edge is elastically compressed and one perpendicular to it is stretched by an applied stress, then the equilibrium concentrations of carbon in sites along the two cube edges become slightly different: the carbon atoms “prefer” to sit in sites where the space available is slightly enhanced. After half a cycle of oscillation, the compressed edge becomes stretched and vice versa. When the frequency of oscillation matches the most probable jump frequency of carbon atoms between adjacent sites, then the damping is maximum.
A carbon atom situated in an ‘atmosphere’ around a dislocation is locked to the stress-field of the dislocation and thus cannot oscillate between sites; it therefore does not contribute to peak damping.
Of course, the next question is as to how to probe this peak; for that, Harper used the Snoek’s apparatus and the theories put forth by Zener in his classic text [4]. Let us digress a bit to look at Snoek’s apparatus before we return to the results of Harper.
### Snoek’s apparatus
Snoek was a Dutch physicist with wide-ranging contributions; according to this biographical sketch [5] of Snoek,
From a purely physical point of view, his finest work might be the discovery and explanation of the so-called diffusion damping in solids: the Snoek effect, …
Snoek’s paper describing the damping effect and its measurement using the apparatus (which is nothing but a torsional pendulum), not surprisingly, deals with carbon and nitrogen in alpha-iron [6], though, according to [5]
Nowadays, the term "the Snoek effect" is used in a wider sense, implying the relaxation phenomenon associated with reorientation of isolated solute atoms in any crystal lattice or even in amorphous structures.
For studying the damping effect, Snoek used a torsional pendulum and its oscillations:
The simplest way of determining the elastic after effect of a given material seems to make it part of a freely oscillating system and measuring the logarithmic decrement $\Delta$ of the system.
A detailed description of the pendulum arrangement is given both in Snoek's paper and in Cahn's book (with a schematic). The overall idea is that when a wire, held slightly in tension, is set into torsional vibration, the amplitude of its vibration will slowly decrease, and this decrease can then be related to the internal friction associated with the movement of the dissolved carbon. In effect, from the variation of the temperature of peak damping with the pendulum frequency, one can determine the jump frequency, and hence the diffusion coefficient — even at very low temperatures. Further, the magnitude of the peak damping is proportional to the amount of carbon in solution.
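To make the measurement concrete, here is a small numerical sketch (my illustration, not Snoek's data; the damping value is made up) of how the logarithmic decrement $\Delta$ is obtained from the successive amplitudes of the decaying oscillation:

```python
import numpy as np

# Sketch: estimate the logarithmic decrement (a measure of internal friction)
# from successive swing amplitudes of a freely decaying torsional pendulum.
def log_decrement(amplitudes):
    """Mean of ln(A_n / A_{n+1}) over successive oscillation amplitudes."""
    a = np.asarray(amplitudes, dtype=float)
    return float(np.mean(np.log(a[:-1] / a[1:])))

# Synthetic decay A_n = exp(-delta * n) with an illustrative delta.
true_delta = 0.05
amps = np.exp(-true_delta * np.arange(10))
print(round(log_decrement(amps), 6))   # → 0.05
```

Since the peak damping is proportional to the amount of carbon in solution, tracking $\Delta$ over time tracks the dissolved carbon.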
### Harper’s experiment
Harper used the same torsional arrangement, except that his specimen of steel wire was stretched beyond its yield point — that is, to stress levels that make the material undergo permanent (plastic) deformation by introducing and moving dislocations inside the material. Now, if such a wire is clamped to a torsional pendulum of the type used by Snoek and set into oscillation, the decay of the damping coefficient with the passage of time can be followed, from which the amount of dissolved carbon as a function of time can be calculated. If $f$ is the fraction of carbon atoms that form the ‘atmosphere’ around dislocations, Harper showed that the logarithm of $(1-f)$, plotted against time raised to the exponent $(2/3)$, follows the exact linear relationship that Cottrell and Bilby predicted! See the figure below:
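The Cottrell-Bilby check can be sketched numerically: if the condensed fraction follows $f(t) = 1 - \exp(-k\,t^{2/3})$ (with an illustrative rate constant $k$, not Harper's measured value), then $\log(1-f)$ is exactly linear in $t^{2/3}$:

```python
import numpy as np

# Sketch of the linearity check: generate f(t) from the t^(2/3) law and
# verify that log(1 - f) vs t^(2/3) is a straight line of slope -k.
k = 0.02                                   # illustrative rate constant
t = np.linspace(1.0, 1000.0, 50)           # ageing time, arbitrary units
f = 1.0 - np.exp(-k * t ** (2.0 / 3.0))    # fraction condensed on dislocations

x = t ** (2.0 / 3.0)
y = np.log(1.0 - f)                        # should equal -k * x
slope, intercept = np.polyfit(x, y, 1)     # least-squares straight line
print(float(round(slope, 6)))              # → -0.02
```

On real data the points scatter about the line; Harper's plot showed exactly this linearity.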
Nearly fifty years after the experimental proof offered by Harper for strain ageing, Cottrell atmosphere can now be directly imaged, as Cahn points out, using atom-probe [7].
### The relevance: then and now
Harper’s experiment, according to Cahn,
… encapsulates very clearly what was new in physical metallurgy in the middle of the [last] century. The elements are: an accurate theory of the effects in question, preferably without disposable parameters; and, to check the theory, the use of a technique of measurement (the Snoek pendulum) which is simple in the extreme in construction and use but subtle in its quantitative interpretation, so that theory ineluctably comes into measurement itself.
On the other hand, the idea of relaxations and means of measuring them continue to be of interest and relevance. As Koiwa notes [5] in his sketch, the Snoek effect
… is probably the most fully investigated and well understood among a variety of relaxation phenomena.
And, finally, these ideas are of use in studying a material, which, to quote Kipling
Gold is for the mistress — silver for the maid —
Copper for the craftsman cunning at his trade.
“Good!” said the Baron, sitting in his hall,
“But Iron — Cold Iron — is master of them all.”
What more can one ask for!
### References
[1] A H Cottrell and B A Bilby, Dislocation theory of yielding and strain ageing of iron, Proceedings of Physical Society A, 62, pp. 49-62, 1949.
[2] R W Cahn, The coming of materials science, Pergamon Materials Series, Pergamon Press, pp. 191-195, 2001.
[3] S Harper, Precipitation of carbon and nitrogen in cold-worked alpha-iron, Physical Review, 83, pp. 709-712, 1951.
[4] C Zener, Elasticity and anelasticity of metals, The University of Chicago Press, Chicago, 1948.
[5] M Koiwa, A note on Dr. J. L. Snoek, Materials Science and Engineering A, 370, pp. 9-11, 2004.
[6] J L Snoek, Effect of small quantities of carbon and nitrogen on the elastic and plastic properties of iron, Physica, 8, pp. 711-733, 1941.
[7] J Wilde, A Cerezo, and G D W Smith, Three-dimensional atomic scale mapping of a Cottrell atmosphere around a dislocation in iron, Scripta Materialia, 43, pp. 39-48, 2000.
|
|
# hi there, nooby here
## Recommended Posts
hello everyone, i'm not very new to game development but i haven't finished any games yet, so i'm not a pro either. i mostly use blender3d, which is a great open source modelling software (www.blender3d.org), and i also do a bit of programming in C. i also have knowledge in quickbasic and a bit of other languages that i haven't used for a long while, such as html and pascal. so now i want to make my own little game in C and i kinda need some help because my knowledge of this language is still really bad. so i have a few questions which i hoped you could answer:

1) my compiler can't find the libraries; where do i tell it where they are?

2) how do i make my compiler write in different colors depending on the type of the command (at school it does that)?

3) if you know, maybe: how does the computer know how to draw a line? i mean, i set the starting point and the ending point, but how does it know where it should put the other pixels? (this will really help me with a simple physics engine i have planned: it will fill each pixel with a different material, exactly the same way it does with colors, only different... i'll make my own code where 1 is the character, 2 is walls, and so on... i think it'll be very good to have such a thing for accurate collision detection.)

4) how do i use the mouse? and how do i make integers which have the same name but different numbers? (should be easy, something like "int location(100,100)", only i think there is another command for that.)

well, that's all, hope you can help me out here.

forgottensoul
##### Share on other sites
Welcome to GDNet.
I assume you mean your IDE rather than your compiler. An IDE, or Integrated Development Environment is a set of tools programmers use to aid them in developing programs, and usually includes such things as a text-editor, linker, compiler, and perhaps some debugging tools among other things. The compiler takes the code you write (in your IDE, or with a stand-alone text editor), and translates it into a form the computer can understand. If you tell us which IDE you're using I'm sure someone will be able to help out.
If you do actually only have a compiler and no IDE, then you have a few options for free IDEs you could get, including Dev-C++, Code::Blocks, and Microsoft's Visual Studio 2005 Express (specifically Visual C++), although since you're a beginner and are planning to work in C rather than C++ you may not want to use the Microsoft tools, which will allow you to work with C but may require some extra configuration to do so since they assume you'll want to use C++ and develop .NET applications.
Quote:
2)how do i make my compiler write in different colors depends on the type of the command (at school he does that).
That's called syntax highlighting, and it's a feature offered by most IDEs out there. You can also get text-editors that support syntax highlighting for various languages.
For your other questions, I think it would be best if you first made sure you've covered the basics of the language if you're unsure about it. You can find some pretty good tutorials here [www.cprogramming.com] if you want to do that. Your question about the integers will be answered by reading about arrays (they're what you're actually talking about there) for example.
I hope that helps you out a bit, feel free to ask if you have any more questions or I didn't answer any of that properly. [smile]
Quote:
Original post by forgottensoul
Well, you should probably learn more about C because it sounds like you still have a lot to learn. And of your questions, I find that only 4 is pretty easy to answer (the other ones you need a book for, can't really be explained in a couple lines). What you are looking for is called an array. You use it like this:
int integers[100];   /* a C array of 100 ints */
integers[40] = 5;    /* set the element at index 40 */

Notice that if you put 100 in the brackets like I did, you can access indices 0 through 99, not 1 to 100.
And as for your third question, have a look at Bresenham's Line Drawing Algorithm. It takes a start and end position in screen coordinates and then computes the pixels in between that should light up to show the line.
But as yaroslavd said, it is probably best to become fluent with your chosen language and working environment before trying your hand at any of this.
Good luck, Illco.
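To make question 3 concrete, here is a minimal sketch of Bresenham's line algorithm in Python (the function name and structure are just illustrative choices; the integer arithmetic ports directly to C):

```python
def bresenham(x0, y0, x1, y1):
    """Return the pixels on the line from (x0, y0) to (x1, y1)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1          # step direction in x
    sy = 1 if y0 < y1 else -1          # step direction in y
    err = dx + dy                      # running error term
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:                   # error says: take a step in x
            err += dy
            x0 += sx
        if e2 <= dx:                   # error says: take a step in y
            err += dx
            y0 += sy
    return points
```

For example, `bresenham(0, 0, 4, 2)` visits (0,0), (1,1), (2,1), (3,2), (4,2). Only additions, subtractions, and comparisons are needed per pixel, which is why the algorithm was so popular on early hardware, and the same grid could just as well store material codes instead of colors, as the original poster suggests.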
Quote:
Original post by forgottensoul
Who's this "he" you keep referring to?
@Mike: A programming teacher of some sort
He sounds like he's talking about the IDE:
" i set the starting point and the ending point but how does he know where he should put the other pixels "
" how do i make my compiler write in different colors depends on the type of the command (at school he does that) "
sorry if i sound like a nimrod
well, first of all: thanks to all. i'm glad this forum is full of helpful people. i did forget to say which IDE i am using: Borland Turbo C++.
Hi everyone! So I'm working in the ring of symmetric functions over a polynomial ring, say $\mathbb{Q}[q]$, and I have an expression of the form
$$q^2s_{2,1} + (q-1)s_{1,1,1}$$
I want to perform the substitution $q = q+1$ in this expression, but all of the usual methods (subs, substitute, etc.) give me an error. Does anyone know how to do this?
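One common workaround (an assumption on my part, not verified in a Sage session here) is to map the substitution over the coefficients rather than the element itself, e.g. `f.map_coefficients(lambda c: c.subs(q=q+1))`, since `subs` on the whole symmetric-function element does not reach into the base ring. The underlying idea can be sketched in plain Python, representing the expression as a dictionary from partitions to coefficient polynomials in $q$ (all names below are illustrative):

```python
from math import comb

def shift_poly(coeffs):
    """Substitute q -> q + 1 in a polynomial stored as [c0, c1, ...] (c_k q^k)."""
    out = [0] * len(coeffs)
    for k, c in enumerate(coeffs):
        # c * (q + 1)^k expands via the binomial theorem
        for j in range(k + 1):
            out[j] += c * comb(k, j)
    return out

def shift_expression(expr):
    """expr maps a partition (a tuple) to its coefficient polynomial in q."""
    return {lam: shift_poly(c) for lam, c in expr.items()}

# q^2 * s_{2,1} + (q - 1) * s_{1,1,1}
expr = {(2, 1): [0, 0, 1], (1, 1, 1): [-1, 1]}
shifted = shift_expression(expr)
# the s_{2,1} coefficient becomes (q+1)^2 = 1 + 2q + q^2,
# the s_{1,1,1} coefficient becomes (q+1) - 1 = q
```

So the substituted expression is $(q^2+2q+1)\,s_{2,1} + q\,s_{1,1,1}$: the point is that the Schur basis elements are untouched and only the base-ring coefficients change.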
# Two vertical poles, of heights 20 m and 80 m, stand apart on a horizontal plane. The height (in meters) of the point of intersection of the lines joining the top of each pole to the foot of the other, from this horizontal plane, is:
• Option 1)
$15$
• Option 2)
$18$
• Option 3)
$12$
• Option 4)
$16$
Place the foot of the 20 m pole at the origin and the foot of the 80 m pole at $(a,0)$, with the ground as the $x$-axis.

Equation of $AC$ (top of the 20 m pole to the foot of the other):
$$\frac{x}{a}+\frac{y}{20}=1 \qquad (1)$$

Equation of $BD$ (foot of the 20 m pole to the top of the other):
$$y-0=\frac{80-0}{a-0}\left ( x-0 \right )\;\Rightarrow\; ay=80x \qquad (2)$$

From (1) and (2), substituting $x=\frac{ay}{80}$:
$$\frac{ay}{80\cdot a}+\frac{y}{20}=1\;\Rightarrow\; y\left ( \frac{1}{80}+\frac{1}{20} \right )=1$$
$$y=16.$$
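A side remark worth recording: the pole heights enter only through their reciprocals, so the answer is independent of the separation $a$:

```latex
\frac{1}{y}=\frac{1}{h_1}+\frac{1}{h_2}
\qquad\Longrightarrow\qquad
y=\frac{h_1 h_2}{h_1+h_2}=\frac{20\cdot 80}{20+80}=16\ \text{m}.
```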
Hence the correct answer is Option 4), $16$.
# Is it possible to calculate the outcome of the Riemann zeta function for real input $r$ using integrals?
I was wondering if one would be able to calculate the outcome of the Riemann zeta function using integrals, based upon the fact that the Riemann sums are a way to calculate an integral.
The Riemann zeta function is $$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}$$ Now let's say that our input $r \in \mathbb{N}$ and $r > 1$. That way we can create the parametric function $$f_r(x) = \frac{1}{x^r}$$
Now my theory was that $\zeta(s)$ could be calculated by $$\zeta(r) \approx \int_{1}^{\infty}f_r(x)dx = \lim_{n \to \infty} F_r(n) - F_r(1)$$ For that I calculated that for $r>1$ $$\int f_r(x)dx = \int\frac{1}{x^r}dx = \int x^{-r}dx = \frac{1}{-r+1}\cdot x^{-r+1}$$ and for $r=1$: $\int f_r(x)dx = \int \frac{1}{x}dx = \ln(x)$
Using the information I had, I then figured that for instance $\zeta(2)$ could be calculated approximately using $$\zeta(2) \approx \lim_{n \to \infty} F_2(n) - F_2(1) = \lim_{n \to \infty} \frac{1}{-2+1}\cdot n^{-2+1}-\frac{1}{-2+1}\cdot 1^{-2+1} = \lim_{n \to \infty} \frac{1}{-n}+\frac{1}{1} = 1$$ which is nothing close to the $\frac{\pi^2}{6}$ Wikipedia told me about.
Now my question is what went wrong. Is the theory just plainly wrong or did I miscalculate something, for example in the anti-derivative?
The series definition implies that for any $s:\text{Re}(s)>1$ we have
$$\zeta(s)=\sum_{n\geq 1}\frac{1}{n^s} = \sum_{n\geq 1}\int_{0}^{+\infty}\frac{u^{s-1}}{\Gamma(s)}e^{-nu}\,du = \frac{1}{\Gamma(s)}\int_{0}^{+\infty}\frac{u^{s-1}}{e^u-1}\,du$$ but in approximating $\sum_{n\geq 1}\frac{1}{n^2}$ with $\int_{1}^{+\infty}\frac{dx}{x^2}$ you still have to consider the non-zero error term coming from the Euler-Maclaurin or Abel-Plana formula.
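To see numerically how large the neglected error term is for $s=2$ (a throwaway sketch, not from the answer):

```python
def zeta_partial(s, n_terms):
    """Partial sum of the zeta series: sum over n = 1..n_terms of 1/n^s."""
    return sum(1.0 / n ** s for n in range(1, n_terms + 1))

# The integral alone gives 1 = ∫_1^∞ x^{-2} dx, but the sum it was supposed
# to approximate is ζ(2) = π²/6 ≈ 1.6449: the missing error term is O(1).
integral = 1.0
approx = zeta_partial(2, 100_000)
```

The integral test only brackets the sum, $1 \le \zeta(2) \le 1 + f(1) = 2$; if I recall the first Euler-Maclaurin corrections correctly, $1 + \tfrac12 + \tfrac16 \approx 1.667$ already lands much closer to $\pi^2/6 \approx 1.645$, which is exactly the "non-zero error term" the answer refers to.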
• @SuperSjoerdie: exactly. $\sum_{n\geq 1}f(n) = \int_{1}^{+\infty}f(x)\,dx$ holds in very few occasions. – Jack D'Aurizio Feb 9 '18 at 18:47
• It is especially rare when $f$ is strictly decreasing as is common :). – Erick Wong Feb 10 '18 at 2:48
## Tuesday, July 30, 2013 ... /////
### Salon: AGW cause confined to a left-wing ghetto
Salon has reprinted an interesting sociological essay on the climate debate originally written by Geoff Dembicki for Canada's The Tyee:
How to talk to a conservative about climate change
Dembicki, a left-wing alarmist, starts by admitting that during the recent two decades, the climate worries have become increasingly confined to an intellectually sterile environment of brainwashed and stubborn people whose ideology strongly influences their very perception, something that Dembicki calls the "left-wing ghetto".
People not heavily invested into the left-wing ideology tend to reject the climate propaganda in the U.S., the U.K., and even in Canada. We're told something that folks like the Czech ex-president Václav Klaus have been saying for many and many years, namely that the purpose of the climate alarm isn't to care about the environment but to rebuild the human society.
### Shmoits face a German competitor
Book market flooded with garden-variety cranks
In 2010, a German conspiracy theorist and high school teacher named Alexander Unzicker released his anti-physics tirade that was reformatted as a book. The title was "Vom Urknall zum Durchknall" which I would translate as "From the Big Bang to Their Big Butts' Being Banged" but that was translated as "Bankrupting Physics" by the unimaginative translator. The English translation will appear tomorrow. You may pre-order it.
Most of the 21 chapters have titles saying things like "something is rotten in the state of physics", "why cosmology is going the wrong way", "how physics became a junk drawer", "branes, multiverses, and other supersicknesses: physics goes nuts", "string theory: how the elite became a sect and mafia", "what's wrong with the physics business", "get ready for the crash". You get the point.
I haven't read and I won't read the book because I consider the table of contents to be fully sufficient to know what's inside the book. It's surely not the first time when most of the 20th century physics is being trashed by an aggressive stupid asshole who has no clue about science. But a review of the book written by a TRF reader may be published later.
### ER-EPR correspondence and bipartite closed strings
This text is just an experiment. I want to know the number of viewers of a blog entry on a related topic and the composition and character of comments if there are any. The insight below is not necessarily correct; and it is not necessarily fundamental. ;-)
The Maldacena-Susskind ER-EPR correspondence invites one to think that (non-traversable) wormholes are natural, sort of inevitable – because spacetimes with this topology are physically equivalent to ordinary spacetimes with entangled degrees of freedom in two regions.
One of the things I was thinking about was whether there are other dual descriptions of such spacetimes. Consider, for example, two faraway Strominger-Vafa black holes in type IIB stringy vacua with 5 large dimensions and connect them with the ER bridge. Now, reduce the coupling $g_s$ to a very low value. What will you get?
### New iPhone likely to have a fingerprint scanner
One year ago, Apple bought AuthenTec, a Prague-based security company (7 Husinecká Street), for $356 million. One may now check that the Czech commercial register shows two key people from Cupertino, California, aside from a Czech male and a Czech female name that started the company with a basic registered capital of $10,000 (too late for great investments). ;-)
Yesterday, the Czech technological media reported that the iOS7 beta4 code contains a BiometricKitUI.axbundle folder along with some sentences suggesting that the fingerprint scanner will read your thumb when it touches the home button.
## Monday, July 29, 2013 ... /////
### Isidor Isaac Rabi: 115th birthday
Isidor Isaac Rabi was born on July 29th, 1898, to an orthodox Jewish family in Galicia, Northeastern Austria-Hungary. The town of Rymanów, sometimes shared with Ukraine, belongs to Poland these days (it's about 20 km from the Slovak borders). He died of cancer (after doctors would monitor him via magnetic resonance imaging) in New York about 25 years ago, in 1988.
Soon after he was born, his family moved to the New York City where his dad ran a grocery store in Brooklyn. Books about heliocentrism turned Izzy into an atheist. He asked: "Who ordered God?" if I improve his question a little bit. His compromise with the parents involved a lecture on the electric light he gave during a ceremony that turns 13-year-old Jewish boys into adults.
## Sunday, July 28, 2013 ... /////
I am getting lots of similar news and sometimes I collide with some of them in the media. Sometimes they sound amazing.
First, MIT has developed a perfect mirror – a material that reflects electromagnetic waves without any losses.
Now, Tesla Motors and Colorado's company SpaceX believe that they can build an amazingly fast superhighway of evacuated tubes where you can drive from California to NYC in an hour.
The figure is approximately equal to the global annual GDP – what the whole mankind produces in one year – or one-half of the total mankind's household assets ($125 trillion). A few bubbles of natural gas in an irrelevant faraway empire of ice will destroy 1/2 of the wealth of the average human. Holy cow.

### John Dalton: an anniversary

...and also Baron Loránd Eötvös...

This CV looks ordinary but it is an experiment. ;-)

John Dalton was born to a family of a weaver (and Quakers, independent Christians dissatisfied with the existing denominations) in Northwest England on September 6th, 1766. He died on July 27th, 1844, i.e. 169 years ago.

## Friday, July 26, 2013 ... /////

### Reuters' climate alarmism down by 50% since 2012

Media Matters for America, a propaganda arm of the neo-Stalinist movement in the U.S., has complained that the number of articles published by Reuters that promote the unlimited climate hysteria has dropped by 48 percent in the recent 12 months.

They take the data and interpretations from David Fogarty, an indisputable hardcore climate activist, who was employed until recently as the "climate change correspondent for Asia". Just try to appreciate how crazy such an arrangement was. A biased activist who makes Lysenko fair and balanced in comparison was hired by an agency for which truth and accuracy are the main asset. And he wasn't employed for a proper job that should exist in peaceful times. He wasn't hired to cover all of physical sciences or the whole Wall Street or the culture. He was hired to cover just stories about a fabricated tiny appendix of one of the least significant, least hard, and least prestigious subdisciplines of physical sciences. And in fact, he wasn't supposed to follow the whole of climatology because it would still be too hard for him; he would only cover the catastrophic climatology. And he wasn't even supposed to cover all of catastrophic climatology; he could make a living out of climate change articles about Asia.
So he was inventing, cherry-picking, and spinning stories about the environment in Asia, its hypothetical dark future, and the mankind's hypothetical cause behind that dark future, especially the carbon dioxide. He was earning tens of thousands of dollars by writing the same junk you may still see on his Twitter account.

## Thursday, July 25, 2013 ... /////

### Spanish train crash: quantifying the acceleration

A tragically motivated homework problem in mechanics

Chances are that you have already seen the dramatic video of the Wednesday Santiago de Compostela derailment. Warning: the following video is brutal. 78 people died and 145 extra ones were injured. In total, it's 223 people – more than the number of passengers, 218 (homework for you: why?).

A map allows one to determine that the crash occurred at the top middle point of this Google map. Using a piece of paper, I estimated the radius of this arc of circle to be $R\sim 380\,{\rm m}$ or so.

### Fermion masses from the Δ(27) group

Ivo de Medeiros Varzielas of Basel, Switzerland and Daniel Pidt of Dortmund, Germany released an interesting paper about the family symmetries:

Geometrical CP violation with a complete fermion sector

They continue in the authors' three-weeks-old research of quark masses and Varzielas' 2012 research and other developments and argue that the $\Delta(27)$ family symmetry seems fully appropriate to obtain not only quark masses but also lepton masses and the CP violation.

### LHCb: $3$- or $4$-$\sigma$ excess of $B$-mesons' muon decays

New physics may already be looking at us

Tommaso Dorigo has discussed something that may be interesting – a hint of new physics coming from the LHCb experiment:

A Four-Sigma Evidence Of New Physics In Rare B Decays Found By LHCb, And Its Interpretation

The deviation will be described in the LHCb-PAPER-2013-037 paper, now in preparation.
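For the Santiago de Compostela homework above, a back-of-the-envelope check of the lateral acceleration, taking the post's estimate $R\sim 380\,{\rm m}$ and assuming the roughly 190 km/h speed that circulated in press reports (both numbers are rough):

```python
R = 380.0                   # estimated curve radius in meters (from the post)
v_kmh = 190.0               # speed reported in the press, an assumption here
v = v_kmh / 3.6             # convert km/h to m/s

a_lateral = v ** 2 / R      # centripetal acceleration, v^2 / R, in m/s^2
g_units = a_lateral / 9.81  # the same in units of g
# roughly 7.3 m/s^2, i.e. about 0.75 g sideways, far beyond what a train
# on conventional track (or its standing passengers) can tolerate
```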
Locally, it is a $3.7$-$4.0\sigma$ effect which is reduced to $2.5$-$2.8\sigma$ once you take the 24 bins into account (look-elsewhere effect). See page 13 of Nicola Serra's presentation in Stockholm.

Update: A preprint will appear in December 2015, JHEP, ScienceAlert 2016.

We just described how strong the evidence is. But what events is the evidence about? Well, it is about the decay of the neutral $B$-mesons,
$$B^0 \to K^* \mu^+ \mu^-, \quad K^*\to K^+ + \pi^-.$$
Recall that the asterisk denotes a virtual particle in this experimental jargon. When some "new observables" are used, they see the aforementioned excess of events approximately in bins with the transferred momentum $1\GeV\lt \sqrt{|q^2|}\lt 3\GeV$ or so, especially between $2\GeV$ and $3\GeV$.

## Wednesday, July 24, 2013 ... /////

### Relativity bans faster-than-light warp drive

In the recent 24 hours, lots of media outlets including the NY Times, The Daily Mail, Russia Today, The PK Nation, the Times of India, and the Bend Bulletin discuss the NASA research into faster-than-light spaceships based on the "warp drive". The idea is being attributed to the Mexican fantasist (rather than physicist) Miguel Alcubierre while Dr (???) Harold White is being mentioned as the most active researcher working on this ambitious project.

The proposed idea is simple: reduce the magnitude of the space-like components of the metric tensor in front of the spaceship and increase it behind the spaceship. So the space will look shorter in front of you and the general relativistic causal limit will allow the coordinate speed to be greater than it is in the vacuum. Consequently, you will move from one place to another faster than light.

### F-theory on $Spin(7)$ manifolds, icezones to beat firewalls

I want to mention three new hep-th preprints. One is on F-theory and two are concerned with the black hole information puzzles.

Concerning the former, Federico Bonetti, Thomas W. Grimm, and Tom G. Pugh of Munich wrote a paper called Non-Supersymmetric F-Theory Compactifications on $Spin(7)$ Manifolds.

There exist just a few interesting holonomy groups that manifolds constructed of extra dimensions in string/M-theory may respect. Realistic vacua of F-theory are usually thought of as $SU(4)\approx Spin(6)$ holonomy Calabi-Yau manifolds that preserve 1/8 of the original SUSY, i.e. four real supercharges. Ice and fire dragons will be discussed momentarily.

However, there exists an even larger possible holonomy group for 8-real-dimensional manifolds that is in between the Calabi-Yau $SU(4)\approx Spin(6)$ holonomy and the generic orientable manifold's holonomy, $Spin(8)$, namely $Spin(7)$. You could expect that these compactifications preserve $1/16$ of the original SUSY but for $12-8=4$-dimensional vacua, you are only left with $32/16=2$ real supercharges, which is less than the minimal spinor in four dimensions. So you actually don't preserve any SUSY at all but the minimum SUSY may be restored if you compactify one dimension from $d=4$ to $d=3$. The paper discusses how it happens and offers some $d=3$ evidence in favor of the conjecture that M-theory on $Spin(7)$ manifolds is dual to F-theory on $Spin(7)$ manifolds times a line interval.

What's strange is that there have been almost no papers on $Spin(7)$ compactifications of F-theory since a bold proposal by Witten and the pioneering F-theory paper by Vafa in the mid 1990s. The present authors try to stop these two decades of silence.
### Edward Witten and the $i\varepsilon$ prescription

What would it look like if Wolfgang Amadeus Mozart decided to discuss A440, concert pitch (440 Hz), on 24 pages? Today, you may see the answer to a very similar question. Edward Witten finally attempted to solve a homework problem given not only to him by his (former) doctoral adviser in 1989 and wrote The Feynman $i\varepsilon$ in String Theory.

Almost all particle physicists learn about the $i\varepsilon$ prescription in their introductory courses. The Feynman propagators have to have the form
$$\frac{-i}{p^2+m^2-i\varepsilon}$$
in the mostly positive $({-}{+}{+}\cdots{+}{+})$ signature that Witten prefers. The extra infinitesimal term tells us in what direction we should circumvent the singularity when we integrate over the momenta in the loops, and that's why it matters. In the position basis, the addition of the infinitesimal imaginary term answers the question whether the propagators are retarded or advanced or something in between. Yes, C) is correct: they are Feynman propagators, stupid.

Note that the extra term adds an imaginary term to something that you could naively try to define by the real principal value, because
$$\frac{1}{z-i\varepsilon} = {\rm v.p.}\, \frac{1}{z}+i\pi\delta(z).$$

I would always say that you may imagine that this $i\varepsilon$ is an infinitesimal limit of something like $i\Gamma/2$ coming from a finite width (decay rate) – even if the lifetime is infinite, it has to be there for the stable intermediate particle to behave just like the unstable ones. There can't be any discontinuity if you just send the lifetime to infinity, and because the form of the propagator seems obvious for the unstable particles (whose wave functions exponentially decay with time), a "trace" of the exponential decrease with time has to be inserted into the stable particles' propagators, too. This is a moment in which the arrow of time enters the fundamental formulae, by the way.

## Sunday, July 21, 2013 ...
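The Sokhotski-Plemelj identity quoted in the $i\varepsilon$ post above, $\frac{1}{z-i\varepsilon} = {\rm v.p.}\,\frac{1}{z}+i\pi\delta(z)$, can be checked numerically: the imaginary part of $1/(x-i\varepsilon)$ is the Lorentzian $\varepsilon/(x^2+\varepsilon^2)$, a nascent delta function. A throwaway sketch, not from the post:

```python
import math

def imag_part_integral(f, eps, a=-10.0, b=10.0, n=200_001):
    """Trapezoid rule for the integral of f(x) * Im[1/(x - i*eps)] dx,
    i.e. of f(x) * eps / (x^2 + eps^2)."""
    h = (b - a) / (n - 1)
    total = 0.0
    for i in range(n):
        x = a + i * h
        w = h if 0 < i < n - 1 else h / 2.0   # trapezoid end-point weights
        total += w * f(x) * eps / (x * x + eps * eps)
    return total

# With a smooth test function f, the result should approach pi * f(0)
val = imag_part_integral(lambda x: math.exp(-x * x), 1e-3)
```

For $\varepsilon = 10^{-3}$ this lands within about a percent of $\pi$, as the identity predicts, and shrinking $\varepsilon$ further (with a correspondingly finer grid) pushes it closer.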
### Stephen Hawking got a flat tire

...but sang the Big Bang Theory theme song...

Just like your humble correspondent yesterday, Stephen Hawking got a flat tire, which is why he couldn't be at the Comic Con. At least, he prerecorded this monologue:

He also sings the theme song of TBBT, a very successful sitcom of CBS that is nominated for Emmy 2013, much like Sheldon Cooper and Amy Farrah Fowler.

## Saturday, July 20, 2013 ... /////

### Bernhard Riemann: an anniversary

Georg Friedrich Bernhard Riemann was born in a village in the Kingdom of Hanover on September 17th, 1826 and died in Selasca (Verbania), Northern Italy, on July 20th, 1866, i.e. 147 years ago. As you can see, he was 2 months short of 40 years when he died.

His father was a poor Lutheran pastor and a Napoleonic war veteran. His mother died when he was a kid. He was the second (oldest) among six children. Bernhard was shy, timid, afraid of speaking in public, and suffered from psychological problems and breakdowns. His math skills were obvious early on. However, he started to study at a lyceum and investigated the Bible intensely, with a rough plan to become a pastor to earn some money for his family.

## Friday, July 19, 2013 ... /////

### Naturalness and the LHC nightmare

Phil Gibbs wrote a nice essay, Naturally Unnatural, in which he discusses the annual EPS-HEP conference that just began in Stockholm and the subdued/nightmare feelings that phenomenologists may have due to the perfect agreement between the LHC and the Standard Model. He adds comments about naturalness and the multiverse.

Unnaturalness is sometimes in the eyes of the beholder. Nima is quoted as a defender of a $100\TeV$ collider. A possible result could be that nothing new is found, which would be considered fascinating by Nima because that would be a proof of some unnaturalness in the Universe.
Well, I have been defending "ever higher energies" as the most well-motivated post-LHC direction of particle physics for quite some time but I wouldn't be too thrilled by a negative result. It seems totally plausible to me – the probability is comparable to 30-50 percent, whatever – that the Standard Model would work even at such a higher-energy collider.

### Detroit declares bankruptcy

...but the distortion of the markets continues...

Detroit declared bankruptcy according to Chapter 9 of the federal law. You may see what it probably means for the city and the bondholders and others. We are talking about the "city proper" only – with 700,000 people, it's the 18th largest U.S. city (but the population was 2 million just decades ago). The metropolitan area hosts about 5 million people and is largely unaffected.

A century ago or so, Detroit was the leader of innovation in the same way as Silicon Valley is today (or at least was several years ago) after Henry Ford ignited his automobile revolution in the city just decades earlier. Some popular music was born there – the company Motown (standing for motor town) was founded in 1959.

Still, the bankruptcy is no surprise for me because a neighbor on a flight 7 years ago or so explained the ghost-town character of the contemporary Detroit to me in quite some detail – an hour of zir monologue. Ze told me about the empty skyscrapers and other sad things – I couldn't avoid thoughts about the deathbed of the industrial superiority. What was going wrong?

## Thursday, July 18, 2013 ... /////

### New miraculous ways how F-theory achieves gauge coupling unification

F-theory phenomenology has been described in dozens of TRF articles, e.g. these. Initiated by Vafa et al. less than a decade ago, this formally 12-dimensional approach to particle physics combines some of the fanciest higher-dimensional stringy geometric methods with those of braneworlds to make predictions in which particle physics is somewhat naturally "decoupled" from the four-dimensional gravity and in which the properties of particle physics are somewhat different and thoroughly geometrized as (often degenerate) eight-real-dimensional complex shapes.

Today, an interesting hep-ph string-inspired paper was written by James C. Callaghan, Stephen F. King, and George K. Leontaris:

Gauge Coupling Unification in E6 F-Theory GUTs with Matter and Bulk Exotics from Flux Breaking

F-theory models like to predict large grand unified gauge symmetries to start with. In the case of $E_6$, one finds fields in the "bulk" which transform in the adjoint, 78-dimensional representation of this exceptional Lie group. And there are other fields localized on "matter curves" that transform in the fundamental, complex 27-dimensional representation of $E_6$.

We're used to saying that whenever we add some generic extra light fields to the Minimal Supersymmetric Standard Model (MSSM), we modify the running of the gauge couplings, which destroys the wonderful numerical coincidence that many of us like so much, the gauge coupling unification.

## Wednesday, July 17, 2013 ... /////

### Exothermic double-disk dark matter

Anniversary: On July 16th, not only my namesakes had their name day ;-) but we also celebrated the 68th anniversary of the Trinity Test, the beginning of the nuclear age. See Wikipedia, YouTube, The Bulletin. The concentration and speed of the deaths was scary during the following month or so but the event finally saved millions of lives during the (shortened) World War II as well as the 68 years that followed.
This blog entry is about a very similar topic to the previous one; the paper released today also tries to incorporate the hints from the dark matter direct search experiments, both positive and negative ones. However, in their Exothermic Double-Disk Dark Matter, Matthew McCullough of MIT and Lisa Randall of Harvard adopt a very different philosophy about the inner structure of dark matter. It isn't composed of individual structureless particles that arise from some symmetries. Instead, in agreement with two 2013 papers that Lisa co-wrote, this dark matter may be composed of rather symmetry-uninspired particles, but what makes them special is that they are subject to internal interactions; they live almost just like the visible baryonic matter we know.

## Tuesday, July 16, 2013 ... /////

### Light Dirac neutralino dark matter

As the TRF readers are regularly reminded (see especially dark matter wars), an increasing number of direct dark matter search experiments indicate that there exists a very light particle of dark matter, $7$-$10\GeV$, that may have been detected underground. Its cross section with a nucleon, if spin-independent, would be close to $2\times 10^{-41}\,{\rm cm}^2$.

Off-topic, feeds: the former users of Google Reader may try Feedly, try the red feed logo near my photograph in the right sidebar, or The Old Reader, a link to this blog's feed.

LHC luminosity will be 10 times higher: new U.S. niobium tin quadrupole magnets capable of focusing the proton beams have succeeded in Fermilab tests. This should allow CERN to increase the luminosity within a decade (not since 2015, unfortunately) one order of magnitude above the maximum designed value! See Red Orbit, E&T.

Most recently, CDMS suggested an $8.6\GeV$ particle in their three events.
It's not impossible that LUX, an experiment in South Dakota that is already running, will reveal some spectacularly clear evidence in favor of this light dark matter particle by the end of this year when they almost certainly publish their first results.

However, the number of papers in the phenomenological literature that are implicitly compatible with such a light dark matter particle remains very low. In May, I talked about the proposed right-handed sneutrinos. The third hep-ph paper today, Phenomenology of Dirac Neutralino Dark Matter, by Matthew R. Buckley, Dan Hooper, and Jason Kumar (I know Jason in person), proposes another possibility, one that is arguably more attractive than the sneutrinos and that overlaps with a possibility promoted on TRF many times in the past: the extended supersymmetry.

## Monday, July 15, 2013 ... /////

### Bohmian mechanics, a ludicrous caricature of Nature

Some people can't get used to the fact that classical physics in the most general sense – a class of theories that identify Nature with the objectively well-defined values of certain (classical) degrees of freedom that are observable in principle and that evolve according to some (classical) equations of motion, usually differential equations that depend on time, mostly deterministic ones – has been excluded as a possible fundamental description of Nature for almost a century. Classical physics has been falsified and the falsification – a death for a theory – is an irreversible event. Nevertheless, those people would sleep with this zombie and do anything and everything else that is needed (but isn't sufficient) to resuscitate it. Of course, it's not possible to resuscitate it but those people just won't stop trying.

Bohmian mechanics, one of the main strategies to pretend that classical physics hasn't died and hasn't been superseded by fundamentally different quantum mechanics, was invented by Prince Louis de Broglie in 1927 who called it "the pilot-wave theory".
In the late 1920s, the 1930s, and 1940s, physicists were largely competent so they didn't have any doubts that the pilot wave theory was misguided by its very own guiding wave ;-). Exactly 25 years later, the approach was revived by David Bohm who made the picture popular, largely because he was a fashionable, media-savvy commie (he's almost certainly the recipient of Wolfgang Pauli's famous criticism "not even wrong" that was ironically hijacked by aggressive Shmoitian crackpots in the recent decade). Prince Louis de Broglie liked the new life that apparently returned to the veins of his old sick theory so he didn't even care too much that his theory was going to be attributed to someone else and that the someone else was a Marxist rather than an aristocrat. ### Origin of the name Motl When I was a baby, my father would often say that we come from a French aristocratic dynasty de Motl – for some time, I tended to buy it ;-). Much later, I knew about the Yiddish (extreme Jewish dialect of German) novel Motl der Operator and people would conjecture that I must have some Jewish roots which I never believed. Finally, I accidentally asked my editor, Ms Věra Amelová, who was just working on the index for the 2nd edition of the Czech Elegant Universe and who found an explanation of the origin in a book. I don't expect regular TRF readers to be interested in similar linguistic stuff but those who search for things using search engines may be interested. And I just wanted to write it down somewhere. ## Saturday, July 13, 2013 ... ///// ### Richard Lindzen vs Aljazeera gladiators Some time ago, mostly Arabic-language-based politically correct TV station Aljazeera bought Current TV, a failed U.S. TV station, from former vice-president Al Gore and his partner for hundreds of millions of dollars.
They apparently got more interested in the climate and decided to debate Prof Richard Lindzen of MIT, a famous climate skeptic, in "Head to Head" which is described as "Al Jazeera's new forum of ideas - a gladiatorial contest tackling big issues such as faith, the economic crisis, democracy and intervention in front of an opinionated audience at the Oxford Union." The video is 48 minutes long. Will you watch it? The web page with some words and the video above is here: Climate change: Fact or fiction? My understanding is that Dick was hired as a bull while the brainwashed journalists indefinitely repeating the clichés about man-made climate change are supposed to be the gladiators. ## Friday, July 12, 2013 ... ///// ### Summers, Yellen: candidates to replace Bernanke Sometime in January 2014, Ben Bernanke will leave the job of America's key central banker. The replacement is already being sought. According to polls, current Fed vice-president Janet Yellen (picture below) and ex-Harvard president Larry Summers are the two frontrunners. Arguments in favor of both are so asymmetric that I won't try to compare them. Needless to say, I know Larry in person and we've been on the same side of a battle for some fundamental values at Harvard so his candidacy for the most important economist's job in the world is far more interesting for me than Yellen's. Despite all complaints I could possibly find, I am obviously his supporter. At the same time, they have a similar background – they're connected with the universities, the Clinton administration, and they're equally Jewish, too. ## Thursday, July 11, 2013 ... ///// ### The "Past Hypothesis" nonsense is alive and kicking I can't believe how dense certain people are.
After many years that Sean Carroll had to localize the elementary, childish mistakes in his reasoning, he is still walking around and talking about the "arrow of time mystery" and the "Past Hypothesis": Cosmology and the Past Hypothesis His Santa Cruz, California talk about this non-problem took no less than 3.5 hours. Imagine actual people sitting and listening to this utter junk for such a long time. I can't even imagine that. He has even coined some new phrases that describe the very same misconception of yours. If there is one central idea, it’s the concept of a “cosmological realization measure” for statistical mechanics. Ordinarily, when we have some statistical system, we know some macroscopic facts about it but only have a probability distribution over the microscopic details. If our goal is to predict the future, it suffices to choose a distribution that is uniform in the Liouville measure given to us by classical mechanics (or its quantum analogue). If we want to reconstruct the past, in contrast, we need to conditionalize over trajectories that also started in a low-entropy past state — that the “Past Hypothesis” that is required to get stat mech off the ground in a world governed by time-symmetric fundamental laws. As talented enough students learn when they are college juniors or earlier, statistical mechanics explains thermodynamic phenomena by statistically analyzing large collections of atoms (or other large collections of degrees of freedom) and doesn't depend on cosmology (or anything that is larger than the matter whose behavior we want to understand) in any way whatsoever. It's the atoms, short-distance physics, that determines the behavior of larger objects (including the whole Universe), not the other way around! Reverse Times Square 2013. In the real world, objects moving forward and backward in time can't co-exist, a well-defined logical arrow of time indicating what evolves from what has to exist everywhere. 
That's also why none of the clowns above managed to unbreak an egg. Exercise for you: Can the video above be authentic or was it inevitably computer-reversed afterwards? You should be able to decide by looking at details of the video. For the advanced viewers: Which parts were edited? Moreover, since its very birth, statistical physics was a set of methods to deal with the laws of physics that were time-reversal- (or CPT-) symmetric and it always "got off the ground" beautifully. There is absolutely nothing new about these matters in 2013. In fact, nothing really qualitative has changed about statistical physics for a century or so. ## Wednesday, July 10, 2013 ... ///// ### Bob Carter, John Spooner: Taxing Air Previous related article: Bob Carter's job not renewed I just received my copy of the paperback edition of cartoonist John Spooner's and geologist Bob Carter's new book, Taxing Air. The Kindle edition may be bought from amazon.com here (icon on the left side); go to TaxingAir.COM for the book's web page and options to buy it for $30.
The book immediately impressed me by the colorful illustrations on pretty much every page. They're playful, witty, full of colors and life, and they also quickly convey some key ideas.
Perhaps because it seems easier to read a 280-page book whose significant portion is filled with similar pictures, I couldn't resist and immediately started to read the book. Let me say in advance that about one-half of the pictures are jokes, often with alarmists' and (mostly Australian) politicians' faces; the other half are graphs and diagrams that explain serious scientific concepts and the cold hard data.
## Tuesday, July 09, 2013 ... /////
### Papers on the ER-EPR correspondence
This new, standardized, elegant enough name of the Maldacena-Susskind proposal that I used in the title already exceeds the price of this blog entry that you had to pay. ;-)
Yesterday, there was a one-page critical paper by Hrvoje Nikolić of Zagreb, Croatia, EU trying to criticize the ER-EPR correspondence. When I am looking at similar articles, I am often ashamed to be a European: Nikolić attacking MS reminds me of an angry and hungry dog attacking Wolfgang Amadeus Mozart and Ludwig van Beethoven except that in those good old times, the composers were European and the dogs were American. This shame largely evaporates as soon as I see similarly dumb, low-quality articles inkspilled inside other continents.
Like a five-year-old spoiled boy, Nikolić screams that he has to be given some quantitative comparisons of ER and EPR correlators to trust such a thing. But it's simply not true that this is the only possible or allowed kind of evidence that the correspondence is valid. Moreover, it's obvious that once we allow the spaces with the ER bridges at all, the observables on the opposite sides of the throat will be entangled just like the rules of entanglement require; after all, despite the apparent huge distance through the normal space, they're close to each other due to the throat. In the absence of very high energy quanta, field operators at nearby points are almost equal to each other.
## Sunday, July 07, 2013 ... /////
### Cumrun Vafa: Strings and the magic of extra dimensions
One month ago, Cumrun Vafa's son determined that he needed to go to Bangalore, South Central India, and because Cumrun is a good father, he went with him.
So why wouldn't he give a public talk (one hour and one second) about strings and the magic of extra dimensions at the local campus of the Tata Institute of Fundamental Research?
## Saturday, July 06, 2013 ... /////
### Tim Maudlin's right and (more often) muddled opinions about physics
Vincent has asked me what I thought about an interview with philosopher of physics Tim Maudlin at 3:am,
On the foundations of physics.
Regular TRF readers remember Maudlin as the main villain in the 2011 story about Tom Banks and anti-quantum zealots which was ignited by a discussion at Preposterous Universe (well, it was at Cosmic Variance at that time).
Yesterday, Maudlin mostly said the same wrong things as he did 1.5 years ago but let me discuss them again.
## Friday, July 05, 2013 ... /////
### Negligible impact of dark matter on the Solar System
Constraints on Dark Matter in the Solar System by N.P. Pitjev and E.V. Pitjeva (Leningrad, Russia)
This article was celebrated by an impressive title and a "more than just uncritical" article at the Physics arXiv Blog:
The Incredible Dark Matter Mystery: Why Astronomers Say it is Missing in Action
Wow. The only comment over there that isn't preposterous is the comment by S. Seibert. Thankfully, Sean Carroll presents the same stance as your humble correspondent: in his opinion, the expected impact of the dark matter on the Solar System is comparable to the dark matter's influence on NBA three-pointers.
Why?
## Thursday, July 04, 2013 ... /////
### Summer School of Philosophy
I returned from a few days long trip at the Summer School of Philosophy 2013 in Dub nad Moravou (Oak Upon Morava, and it is Morava, not Morave, for all the Californian nitpickers!).
It's a small township (2,000 inhabitants) beneath a pretty presbytery 10 miles South of Olomouc, the historical and spiritual capital of Moravia (Eastern 1/2 of Czechia).
Pilgrim temple of the Purification of Virgin Mary. (Update: Wow, this translation was meant as a somewhat geeky joke – but I later found out that this is actually the correct name of the divine process in English.)
A Polish guy was made the manager of the building and he's doing it very well – the Poles have a much closer relationship to God, regardless of His existence (and to the late Pope John Paul II whose pictures are everywhere) than us, after all. ;-) Some young folks organizing the school knew him which is why it was possible for the participants to sleep there etc.
## Tuesday, July 02, 2013 ... /////
### Death Valley: highest temperature on Earth will survive 100th anniversary
Tons of journalists were recently hyping the warm weather that came to California and other places. Fox News and many others talk about "tied records" and "baked West".
But the real story which is almost totally overlooked is the comparison of the absolute temperature record on Earth and the modest temperatures in recent years that had no chance to match it. This real story is clearly inconvenient to the climate alarmists, one of the most unhinged and dishonest cliques of demagogues who have ever walked on the face of Earth, and journalists many of whom have always been trying to claim that what is happening right now is amazing even if it is not.
## Monday, July 01, 2013 ... /////
### CMS: $2.93\sigma$ hint of a second Higgs boson at $136.5\GeV$
Today's sensation is tomorrow's calibration. Or: Today's discovery is tomorrow's background. Ladies and Gentlemen, it's my pleasure to tell you that we are finally living in the tomorrow in the sense of these proverbs. CMS has finally published a paper that incorporates the $125$-$126\GeV$ Higgs boson into the background.
Properties of the observed Higgs-like resonance decaying into two photons (CMS PAS HIG-13-016)
ATLAS and CMS are doing a superb precision work. One of the consequences is that their papers have been found to agree with the Standard Model immensely well. In fact, we haven't seen virtually any $2\sigma$ excesses. One could argue that the number of false positives in other experiments than ATLAS and CMS – bumps that would fill many physicists with a "hope" that turned out to be unjustified – was much higher than 5% and the reason was that these experimenters were just sloppier than ATLAS and CMS.
But neither new physics nor flukes can be denied and confined indefinitely. The paper above which was released today is an example.
### America spying on Germany, EU
An ugly image of a double-faced, hypocritical America emerges
Off-topic, philosopher's birthday: One of the greatest polymaths in history, Gottfried von Leibniz, was born on July 1st, 1646. See this biography written in 2008. His father, a professor of moral philosophy in Leipzig, East of Germany (who was half-German, half-Sorbian i.e. Slavic: I couldn't understand a Sorbian song, looks like a countryside Czech dialect pronunciation of Polish words inspired by some Yugoslav speakers), died when Gottfried was six. From that moment on, he had access to the dad's huge library and was affected by his mother's teachings. He often wrote valuable things in dozens of disciplines but his standards and ingenuity in natural sciences couldn't match Newton's.
Edward Snowden hasn't been assassinated by the U.S. government yet so he is starting to make some real impact, unfortunately a negative one so far. The media such as the Guardian told us about the anger in Europe that erupted after some European countries' officials were shown documents indicating that Prism is spying on EU in general and Germany in particular:
Berlin accuses Washington of cold war tactics over snooping
The extremely promising Euro-American free-trade pact has been threatened after we learned that half a billion phone calls, text messages, and e-mails are tapped by the U.S. every month. I can't really imagine how this huge amount of information may be effectively monitored, how many (invisible?) people the U.S. intelligence services are employing etc.
It would be great if people kept their heads cool because the trade pact could be a great thing and if they tried to see all the things in the context – every major power is spying on someone and Snowden actually indicated that the U.K. is more brutal in this respect than the U.S., a subtlety that the Guardian and others fail to mention – but on the other hand, I won't hide that I am kind of troubled by the revelations, too.
I generally like the Americans' character but this story highlights some of its darker sides including an unmatched level of hypocrisy, a hardwired hardcore centralization sentiment, and a generally poor understanding of (and respect for) other nations' culture and inner working. I will get to these points momentarily.
|
|
## Video From 17w5118: $p$-adic Cohomology and Arithmetic Applications
Tuesday, October 3, 2017 09:02 - 10:11
Higher algebra and arithmetic
|
|
Removing catkin workspace
I've created 2 catkin workspaces, then I decided to rm -rf one of them. It seems like there should be a better way to remove it, because some environment variables are lingering around with that path, namely $CMAKE_PREFIX_PATH. 1. Where does ROS set $CMAKE_PREFIX_PATH, so I can remove any references to it now that I've deleted the workspace?
2. Is there a proper way to delete a catkin workspace so that all environmental variables are cleaned up?
Answer: I 'sourced' both my devel workspace 'setup.bash' and the groovy installation 'setup.bash'; when I removed that line from my .bashrc (as the commenter said), CMAKE_PREFIX_PATH was no longer set.
Deleting the build/ and devel/ directories and simply calling catkin_make again, then sourcing only my workspace in .bashrc, fixed it.
Thank you! :D
If the problem persists after a reboot, the problem is probably in your ~/.bashrc. Can you post the contents of that file?
(2013-06-26 06:32:47 -0600)
This stuff can get very confusing and cause serious errors.
The general recommendation when switching workspaces is always to open a new shell, and source the relevant setup.bash. For catkin, that is normally in devel/ subdirectory.
UPDATE: If you are sourcing the correct setup.bash in your ~/.bashrc and creating a new shell (or rebooting) does not work, there must have been something wrong when you originally created your workspace. The usual error is having forgotten to source the appropriate lower-level setup.bash before running catkin_make.
That is an easy mistake to make. When I find myself in that situation, I do one of these recovery procedures:
• Create a new workspace, more carefully this time.
• Delete everything but the src/ subdirectory, source the correct dependency, then re-run catkin_make or catkin_init_workspace.
There may be other solutions, but I find that hacking around in the various generated files just creates frustration and wastes time.
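The second recovery procedure above can be sketched as a few shell commands. This is a hedged illustration on a throwaway mock workspace rather than a real ~/ros_ws, since catkin_make itself needs a sourced ROS environment; the directory names match the usual catkin layout.

```shell
# Mock workspace standing in for ~/ros_ws (generated build/ and devel/, plus src/).
ws=$(mktemp -d)
mkdir -p "$ws/build" "$ws/devel" "$ws/src"

# Recovery: delete everything but the src/ subdirectory...
cd "$ws"
rm -rf build devel

# ...then, on a real workspace, source the lower-level setup and rebuild:
#   source /opt/ros/groovy/setup.bash
#   catkin_make
ls "$ws"   # only src remains
```

After the rebuild, sourcing the freshly generated devel/setup.bash in a new shell gives you a clean CMAKE_PREFIX_PATH again.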
Well, I've already done that, but when I reboot, the CMAKE_PREFIX_PATH persists. I'd really like to clean that up; do you know how?
(2013-06-26 06:20:34 -0600)
If you have a CMAKE_PREFIX_PATH after reboot, then you are most likely setting it in your ~/.bashrc. Look in that file to see if you are sourcing any setup.bash files. If you are, delete or comment that line out and open a new terminal. Rebooting is not necessary; just open a new terminal.
(2013-06-26 07:17:12 -0600)
I am sourcing 2 files in my .bashrc, 1 is /opt/ros/groovy/setup.bash the other is my current catkin_workspace ~/ros_ws/devel/setup.bash.. I'll try removing these when I get home and see if this solves my issue
(2013-06-26 07:47:30 -0600)
If things were set up correctly, the workspace should source /opt/ros/groovy/setup.sh for you.
(2013-06-26 07:50:30 -0600)
For those using catkin tools, an easy solution is: catkin clean --deinit at the directory of your workspace.
Last updated: Jun 26 '13
|
|
You may wish to create a cross reference to an equation, a statement in your document such as "As was shown in Equation 3...", but you want Word to insert the appropriate equation number, and update it if the number of the equation should change. At first glance it would appear that you could do an Insert, Cross-reference and select "Equation" as the reference type. However, this will only work if you let Word caption your equations, and Word will only caption an equation above or below the equation, which is not acceptable. The only way to cross reference an entity that you have numbered yourself via a seq field is to bookmark the sequence number.
Double-click on a master page to select it. Choose the Text tool from the Tools panel and click and drag on a master page to create a text frame. Choose Type > Insert Special Character > Markers > Current Page Number. Format this character with whatever font family and size you want to use, and then position it on the master page. The character displays as the same letter that the master page is named, but on the individual pages the text displays as the page numbers.
In Australia and New Zealand, the current standard (Australia/New Zealand joint standard AS/NZS 4819:2011 - Rural & Urban Addressing)[6] is directed at local governments that have the primary responsibility for addressing and road naming. The standard calls for lots and buildings on newly created streets to be assigned odd numbers (on the left) and even numbers (on the right) when facing in the direction of increasing numbers (the European system), reflecting already common practice. It first came into force in 2003 under AS/NZS 4819:2003 - Geographic Information – Rural & Urban Addressing.[7] Exceptions are where the road forms part of the boundary between different council areas or cities. For example, Underwood Road in Rochedale South is divided between Logan City and the City of Brisbane.
In bulleted lists, each paragraph begins with a bullet character. In numbered lists, each paragraph begins with an expression that includes a number or letter and a separator such as a period or parenthesis. The numbers in a numbered list are updated automatically when you add or remove paragraphs in the list. You can change the type of bullet or numbering style, the separator, the font attributes and character styles, and the type and amount of indent spacing.
In summary, paragraph numbering is really just an exercise in logic, and this blog post is showing the numbering styles for a very specific project. Your project may be similar, but not exactly the same. You just need to think though the levels and how you want to restart the numbers. I do my best to think it through correctly the first time, set it up, and then try as hard as I can to break it, so that I can find my errors. The good news is that once you get your numbers working, you shouldn’t ever have to think about it again.
Most libraries and booksellers display the book record for an invalid ISBN issued by the publisher. The Library of Congress catalogue contains books published with invalid ISBNs, which it usually tags with the phrase "Cancelled ISBN".[45] However, book-ordering systems such as Amazon.com will not search for a book if an invalid ISBN is entered to its search engine.[citation needed] OCLC often indexes by invalid ISBNs, if the book is indexed in that way by a member library.
You can't use Word's Numbering feature to generate a multilevel numbering system, even if you use built-in heading styles. Figure A shows a document with two styled heading levels: Heading 1 and Heading 2. You can apply the Numbering option (in the Paragraph group) and Word will number the headings consecutively, but the feature ignores different levels; if you expected 1, 1.1, 2, 2.1, and 2.2, you might be surprised. If you select the entire document first, Numbering not only ignores the different levels, but it also numbers the paragraphs!
For example, often, preliminary pages in a book will be numbered with roman numerals, with renumbering restarting at 1 with the first page of the main section of the book. This is an option that can be set in the Section Options dialog box. If your document has 12 prelims, and page 13 of the document is set to restart the numbering at one, choosing “Absolute numbering” in the Page Numbers to Words script will insert “Thirteen” in a text frame on that page. If “Section numbering” is chosen, the text frame on page 13 will show “One”.
In the early/mid 19th century numbering of long urban streets commonly changed (from clockwise, strict consecutive to odds (consecutive) which face evens (consecutive)). Where this took place it presents a street-long pitfall to researchers using historic street directories and other records. A very rare variation may be seen where a high street (main street) continues from a less commercial part — a road which breaks the UK conventions by not starting at 1 or 2. On one side of the main road between Stratford and Leytonstone houses up to no. 122 are "Leytonstone Road". The next house is "124 High Road, Leytonstone".
Add one or more text frames anywhere in your InDesign document that written-out page numbers should appear, and apply the object style created in step (1) above to this text frame. If you want the written-out page numbers to appear on all pages, a sensible place for these text frames would be on the left and right-hand pages of a master page. However, the text frames can be added anywhere, on any pages, and even as inline objects inserted into text.
The Port PS service may only be accessed by a "User" of the NPAC/SMS; i.e., an entity having executed the appropriate regional NPAC/SMS User Agreement(s) with NeuStar. The use of any data accessed through Port PS is at all times subject to each such NPAC/SMS User Agreement, which provides that the exclusive use of such data is for the purpose of routing, rating, or billing a call, or the performance of network maintenance in connection with the provision of telecommunications services, and for no other purpose, including, without limitation, commercial exploitation.
The first step is to figure out the desired look for the list, then the spacing that will be needed between the numbers and the text. For this example, I decided on larger, bolder numbers using a serif font. I created one line of text with the numbers "00." keyed in manually and styled. Zeros are used because they are almost always the widest numerals. Two zeros were used because the items in the list are more than nine, so they will span to double digits. From the rulers, the approximate spacing can now be determined. This line of text will be deleted once the Paragraph Style has been created.
An SBN may be converted to an ISBN by prefixing the digit "0". For example, the second edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has "SBN 340 01381 8" – 340 indicating the publisher, 01381 their serial number, and 8 being the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated.
If you know in advance that you need outline numbering for your paragraphs, you will want to choose the correct format from the Bullets and Numbering dialog box. Applying one of the preset formats to a paragraph or paragraphs that are already bulleted or numbered applies it to the entire list. There is a specific tab for outline numbers — the Outlined Numbered tab.
$$\begin{aligned}
s &= \bigl(11 - \bigl[(0\times 10)+(3\times 9)+(0\times 8)+(6\times 7)+(4\times 6)+(0\times 5)+(6\times 4)+(1\times 3)+(5\times 2)\bigr] \bmod 11\bigr) \bmod 11\\
&= \bigl(11 - (0+27+0+42+24+0+24+3+10) \bmod 11\bigr) \bmod 11\\
&= \bigl(11 - (130 \bmod 11)\bigr) \bmod 11\\
&= (11 - 9) \bmod 11\\
&= 2
\end{aligned}$$
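The worked check-digit computation above generalizes to any ISBN-10: weight the first nine digits 10 down to 2, and the check digit is (11 − (sum mod 11)) mod 11, written as "X" when it comes out 10. A minimal Python sketch (the function name is our own, not from any library):

```python
def isbn10_check_digit(first9):
    """Check digit for the first nine digits of an ISBN-10.

    Weights run 10 down to 2; the result 10 is rendered as 'X'.
    """
    s = sum(w * d for w, d in zip(range(10, 1, -1), first9))
    c = (11 - s % 11) % 11
    return "X" if c == 10 else str(c)

# The worked example above (digits 0-3-0-6-4-0-6-1-5):
print(isbn10_check_digit([0, 3, 0, 6, 4, 0, 6, 1, 5]))  # -> 2
```

The same weighting also verifies a complete ISBN-10: a valid number has a weighted sum (with the check digit weighted 1) divisible by 11.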
Erica Gamet has been involved in the graphics industry for an unbelievable 30 years! She is a speaker, writer, and trainer, focusing on Adobe InDesign and Illustrator, Apple Keynote and iBooks Author, and other print- and production-related topics. She is a regular presence at CreativePro Week’s PePcon and InDesign Conferences, and has spoken at ebookcraft in Canada and Making Design in Norway. You can find Erica’s online tutorials at CreativeLive and through her YouTube channel. When she isn’t at her computer, she can be found exploring her new homebase of Seattle and the greater Pacific Northwest.
All Products and services accessed through this portal are provided "as is" and "as available", and neither Neustar nor its affiliates, officers, directors, employees, agents or assigns make any representations or warranties to you or to any third party including, without limitation, end users, whether express, implied or statutory, including, by way of example and not limitation, warranties of merchantability, accuracy, fitness for any particular purpose, title and noninfringement, or as to any other matter, all of which are hereby expressly excluded and disclaimed.
Hi Jason! Hard to say when I’m not sure which part isn’t working for you. If the numbering isn’t continuing across separate frames, you need to make sure you’re using a list. If they are in the wrong order, remember it uses the paste/creation order to number them. If neither of those fix it, let me know what specific issue you’re having. Good luck!
A defined list can be interrupted by other paragraphs and lists, and can span different stories and different documents in a book. For example, use defined lists to create a multi-level outline, or to create a running list of numbered table names throughout your document. You can also define lists for separately numbered or bulleted items that are mixed together. For example, in a list of questions and answers, define one list for numbering the questions and another for numbering the answers.
IF a document is part of a book and the previous document ended on a right-hand page, AND if your book options are set to allow documents to start on left or right pages, AND if you choose “Automatic Page Numbering,” THEN the document will be allowed to automatically start on the left page. This is great if you need to break a document up in a spot where it’s not necessary for the next document to start on the right.
|
|
Asked in Science & Mathematics › Physics · 2 years ago
# What is the magnitude of the friction force on the disk?
The lightweight wheel on a road bike has a moment of inertia of 0.097 kg⋅m2. A mechanic, checking the alignment of the wheel, gives it a quick spin; it completes 5 rotations in 1.8 s. To bring the wheel to rest, the mechanic gently applies the disk brakes, which squeeze pads against a metal disk connected to the wheel. The pads touch the disk 7.1 cm from the axle, and the wheel slows down and stops in 1.3 s.
• 2 years ago
As the wheel rotates one time, it rotates an angle of 2 π radians.
ω = 5 * 2 π ÷ 1.8 = 10 π ÷ 1.8
This is approximately 17.5 rad/s. Let’s determine the initial angular momentum
L = 0.097 * 10 π ÷ 1.8 = 0.97 * π ÷ 1.8
This is approximately 1.69 kg·m²/s. Since the friction force causes the wheel to stop, the final angular momentum is 0 kg·m²/s. The friction force causes the torque that stops the wheel from rotating. The distance from the axle to the pads must be in meters.
d = 7.1 ÷ 100 = 0.071 m
Torque = Ff * 0.071
Torque * time = change of angular momentum
Ff * 0.071 * 1.3 = Ff * 0.0923
Ff * 0.0923 = 0.97 * π ÷ 1.8
Ff = (0.97 * π ÷ 1.8) ÷ 0.0923
The friction force is approximately 18.3 N. I hope this is helpful for you.
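As a sanity check, the impulse–momentum arithmetic in this answer can be reproduced in a few lines of Python (the values are the ones given in the problem):

```python
import math

I = 0.097                        # wheel moment of inertia, kg*m^2
omega0 = 5 * 2 * math.pi / 1.8   # 5 rotations in 1.8 s -> initial rad/s
r = 7.1 / 100                    # brake-pad distance from axle, m
t_stop = 1.3                     # stopping time, s

L0 = I * omega0                  # initial angular momentum, kg*m^2/s
F = L0 / (r * t_stop)            # torque * time = change in angular momentum
print(round(F, 1))               # -> 18.3
```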
• oubaas
Lv 7
2 years ago
ω = 2π·n/t = 10π/1.8 = 5.555π rad/s
torque T = J·Δω/Δt = 0.097·(5.555π − 0)/(1.3 − 0) = 1.30 N·m
F = T/d = 1.3*100/7.1 = 18.31 N
• 2 years ago
The friction force creates a torque on the wheel that stops it, the torque is also the time rate of change of angular momentum.
torque x time = change in angular momentum
f(0.071)(1.3) = Iω = 0.097(5)(2π)/1.8 = 1.693
f(0.0923) = 1.693
f ≈ 18.3 N
|
|
# Schedule for: 21w5120 - Entropic Regularization of Optimal Transport and Applications (Online)
Beginning on Sunday, June 20 and ending Friday June 25, 2021
All times in Banff, Alberta time, MDT (UTC-6).
Monday, June 21
08:45 - 09:00 Introduction and Welcome by BIRS Staff
A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions.
(Online)
09:00 - 09:40 Luca Tamanini: Small-time asymptotics of the metric Schrödinger problem
The Schrödinger problem as "noised" optimal transport is by now a well-established interpretation. From this perspective several natural questions stem, as for instance the convergence rate as the noise parameter vanishes of many quantities: optimal value, Schrödinger bridges and potentials... As for the optimal value, after the works of Erbar-Maas-Renger and Pal a first-order Taylor expansion is available. First aim of this talk is to improve this result in a twofold sense: from the first to the second order and from the Euclidean to the Riemannian setting (and actually far beyond). From the proof it will be clear that the statement is in fact a particular instance of a more general result. For this reason, in the second part of the talk we introduce a large class of dynamical variational problems, extending far beyond the classical Schrödinger problem, and for them we prove $\Gamma$-convergence towards the geodesic problem and a suitable generalization of the second-order Taylor expansion. (based on joint works with G. Conforti, L. Monsaingeon and D. Vorotnikov)
(Online)
09:50 - 10:30 Luca Nenna: (Entropic) Optimal Transport in the Grand Canonical ensemble
In this talk I will firstly review standard Multi-Marginal Optimal Transport ( a number N of marginals is fixed) focusing, in particular, on the applications in Quantum Mechanics (in this case the marginals are all the same and represent the electron density). I will then extend the Optimal Transportation problem to the grand canonical setting: only the expected number of marginals/electrons is now given (i.e. we can now define a OT problem with a fractional number of marginals). I will compare these two problems and show how they behave differently despite considering the same cost functions. Existence of minimisers, duality, entropic formulation and numerics will be discussed.
(Online)
10:40 - 11:20 Young-Heon Kim: Optimal transport in Brownian motion stopping
We consider an optimal transport problem arising from stopping the Brownian motion from a given distribution to get a fixed or free target distribution; the fixed target case is often called the optimal Skorokhod embedding problem in the literature, a popular topic in math finance pioneered by many people. Our focus is on the case of general dimensions, which has not been well understood. We explain that under certain natural assumptions on the transportation cost, the optimal stopping time is given by the hitting time to a barrier, which is determined by the solution to the dual optimization problem. In the free target case, the problem is related to the Stefan problem, that is, a free boundary problem for the heat equation. We obtain analytical information on the optimal solutions, including certain BV estimates. The fixed target case is mainly from the joint work with Nassif Ghoussoub and Aaron Palmer at UBC, while the free target case is the recent joint work (in-progress) with Inwon Kim at UCLA.
(Online)
11:30 - 12:10 Robert McCann: Inscribed radius bounds for lower Ricci bounded metric measure spaces with mean convex boundary
Consider an essentially nonbranching metric measure space with the measure contraction property of Ohta and Sturm. We prove a sharp upper bound on the inscribed radius of any subset whose boundary has a suitably signed lower bound on its generalized mean curvature. This provides a nonsmooth analog of results dating back to Kasue (1983) and subsequent authors. We prove a stability statement concerning such bounds and --- in the Riemannian curvature-dimension (RCD) setting --- characterize the cases of equality. This represents joint work with Annegret Burtscher, Christian Ketterer and Eric Woolgar.
(Online)
14:15 - 14:20 Group Photo
Please turn on your cameras for the "group photo" -- a screenshot in Zoom's Gallery view.
(Online)
14:30 - 15:10 Yongxin Chen: Graphical Optimal Transport and its Applications
Multi-marginal optimal transport (MOT) is a generalization of optimal transport theory to settings with possibly more than two marginals. The computation of the solutions to MOT problems has been a longstanding challenge. In this talk, we introduce graphical optimal transport, a special class of MOT problems. We consider MOT problems from a probabilistic graphical model perspective and point out an elegant connection between the two when the underlying cost for optimal transport allows a graph structure. In particular, an entropy regularized MOT is equivalent to a Bayesian marginal inference problem for probabilistic graphical models with the additional requirement that some of the marginal distributions are specified. This relation on the one hand extends the optimal transport as well as the probabilistic graphical model theories, and on the other hand leads to fast algorithms for MOT by leveraging the well-developed algorithms in Bayesian inference. We will cover recent developments of graphical optimal transport in theory and algorithms. We will also go over several applications in aggregate filtering and mean field games.
(Online)
15:20 - 16:20 Visual Talks: Adolfo Vargas-Jiménez, David Simmons, Axel Turnquist, Johannes Wiesel (Online)
Tuesday, June 22
09:00 - 09:40 Gabriel Peyré: Scaling Optimal Transport for High dimensional Learning
Optimal transport (OT) has recently gained a lot of interest in machine learning. It is a natural tool for comparing probability distributions in a geometrically faithful way. It finds applications in both supervised learning (using geometric loss functions) and unsupervised learning (to perform generative model fitting). OT is however plagued by the curse of dimensionality, since it might require a number of samples which grows exponentially with the dimension. In this talk, I will explain how to leverage entropic regularization methods to define computationally efficient loss functions, approximating OT with a better sample complexity. More information and references can be found on the website of our book "Computational Optimal Transport" https://optimaltransport.github.io/
(Online)
09:50 - 10:30 Anna Korba: Wasserstein Proximal Gradient
Wasserstein gradient flows are continuous time dynamics that define curves of steepest descent to minimize an objective function over the space of probability measures (i.e., the Wasserstein space). This objective is typically a divergence w.r.t. a fixed target distribution. In recent years, these continuous time dynamics have been used to study the convergence of machine learning algorithms aiming at approximating a probability distribution. However, the discrete-time behavior of these algorithms might differ from the continuous time dynamics. Besides, although discretized gradient flows have been proposed in the literature, little is known about their minimization power. In this work, we propose a Forward Backward (FB) discretization scheme that can tackle the case where the objective function is the sum of a smooth term and a nonsmooth geodesically convex term. Using techniques from convex optimization and optimal transport, we analyze the FB scheme as a minimization algorithm on the Wasserstein space. More precisely, we show under mild assumptions that the FB scheme has convergence guarantees similar to the proximal gradient algorithm in Euclidean spaces.
(Online)
10:40 - 11:20 Jonathan Niles-Weed: Asymptotics for semi-discrete entropic optimal transport
We compute exact second-order asymptotics for the cost of an optimal solution to the entropic optimal transport problem in the continuous-to-discrete, or semi-discrete, setting. In contrast to the discrete-discrete or continuous-continuous case, we show that the first-order term in this expansion vanishes but the second-order term does not, so that in the semi-discrete setting the difference in cost between the unregularized and regularized solution is quadratic in the inverse regularization parameter, with a leading constant that depends explicitly on the value of the density at the points of discontinuity of the optimal unregularized map between the measures. We develop these results by proving new pointwise convergence rates of the solutions to the dual problem, which may be of independent interest. Joint work with J. Altschuler and A. Stromme.
(Online)
11:30 - 12:10 Zaid Harchaoui: Schrödinger Bridge with Entropic Regularization: two-sample test, chaos decomposition, and large-sample limits
We consider an entropy-regularized statistic that allows one to compare two data samples drawn from possibly different distributions. The statistic admits an expression as a weighted average of Monge couplings with respect to a Gibbs measure. This coupling can be related to the static Schrödinger bridge given a finite number of particles. We establish the asymptotic consistency as the sample sizes go to infinity of the statistic and show that the population limit is the solution of Föllmer's entropy-regularized optimal transport. The proof technique relies on a chaos decomposition for paired samples. This is joint work with Lang Liu and Soumik Pal.
(Online)
14:30 - 15:10 Promit Ghosal: Geometry and large deviation of entropic optimal transport
Optimal transport (OT) theory has flourished due to its connections with geometry, analysis, probability theory, and other fields in mathematics. A renewed interest in OT stems from applied fields such as machine learning, image processing and statistics through the introduction of entropic regularization. In this talk, we will discuss the convergence of entropically regularized optimal transport. Our first result is about a large deviation principle of the associated optimizers in entropic OT and the second result is about the stability of the optimizers under weak convergence. To prove these results, we will introduce a new notion called 'cyclical invariance' of measures. This is a joint work with Marcel Nutz and Espen Bernton.
(Online)
15:20 - 16:20 Visual Talks: Tobias Schroeder, Lang Liu, Matthieu Heitz, Fang Han (Online)
Wednesday, June 23
09:00 - 09:40 Beatrice Acciaio: PQ-GAN: a market generation model consistent with observed spot prices and derivative prices
In this talk I will present a model for market generation that is consistent with both the observed spot prices and the market prices of derivatives. The structure used to learn the evolution of the asset prices (under the real-world measure P) is that of a conditional GAN for time series generation, that uses causal optimal transport in the training objective. On the other hand, the derivative prices are used to learn the change of measure from P to the pricing measure Q. This talk is based on a joint work with F. Krach.
(Online)
09:50 - 10:30 Alfred Galichon: Dynamic Matching Problems (joint w Pauline Corblet and Jeremy Fox)
Motivated by applications in economics, we formulate a class of dynamic matching problems. We investigate in particular the stationary case, along with computation and estimation issues.
(Online)
10:40 - 11:20 Ting-Kam Leonard Wong: Logarithmic divergences and statistical applications
We consider the Dirichlet optimal transport which is a multiplicative analogue of the Wasserstein transport and is deeply connected to the Dirichlet distribution. The log-likelihood of this distribution defines a logarithmic divergence, in the same way that the square loss comes from the normal distribution. Using this divergence, which can be extended to a family of generalized exponential families, we consider statistical methodologies including clustering and nonlinear principal component analysis. Our approach extends a well-known duality between exponential family and Bregman divergence. Joint work with Zhixu Tao, Jiaowen Yang and Jun Zhang.
(Online)
11:30 - 12:00 Giovanni Conforti: Hamilton Jacobi equations for controlled gradient flows: the comparison principle
This talk is devoted to the study of a class of Hamilton-Jacobi equations on the space of probability measures that arises naturally in connection with the study of a general form of the Schrödinger problem for interacting particle systems. After presenting the equations and their geometrical interpretation, I will move on to illustrate the main ideas behind a general strategy to prove uniqueness of viscosity solutions, i.e. the comparison principle. Joint work with D. Tonon (U. Padova) and R. Kraaij (TU Delft).
(Online)
12:00 - 12:15 MITACS Presentation (Online)
14:30 - 16:20 Gathertown (social gathering) (Online)
Thursday, June 24
09:00 - 09:40 Martin Huesmann: Fluctuations in the optimal matching problems
The optimal matching problem is one of the classical random optimization problems. While the asymptotic behavior of the expected cost is well understood, little is known about the asymptotic behavior of the optimal couplings - the solutions to the optimal matching problem. In this talk we show that at all mesoscopic scales the displacement under the optimal coupling converges in suitable Sobolev spaces to a Gaussian field which can be identified as the curl-free part of a vector Gaussian free field. (based on joint work with Michael Goldman)
(Online)
09:50 - 10:30 Mathias Beiglböck: The Wasserstein space of stochastic processes
Wasserstein distance induces a natural Riemannian structure for the probabilities on the Euclidean space. This insight of classical transport theory is fundamental for tremendous applications in various fields of pure and applied mathematics. We believe that an appropriate probabilistic variant, the adapted Wasserstein distance AW, can play a similar role for the class FP of filtered processes, i.e. stochastic processes together with a filtration. In contrast to other topologies for stochastic processes, probabilistic operations such as the Doob-decomposition, optimal stopping and stochastic control are continuous w.r.t. AW. We also show that (FP,AW) is a geodesic space, isometric to a classical Wasserstein space, and that martingales form a closed geodesically convex subspace.
(Online)
10:40 - 11:20 Anna Kausamo: Multi-marginal entropy-regularized optimal transportation for singular cost functions
I will introduce multi-marginal optimal transportation (MOT) for singular cost functions and mention some of its applications. Then I move on to the entropy-regularised framework, focusing on the Gamma-convergence proof of the regularized minimizers for the singular MOT problem towards a non-regularised solution when the regularisation parameter goes to zero. When one goes from two to many marginals and from attractive to singular cost function, different levels of difficulty are introduced. One of the aims of my talk is to show how these difficulties can be tackled.
(Online)
14:30 - 15:10 Geoffrey Schiebinger: Towards a mathematical theory of development
New measurement technologies like single-cell RNA sequencing are bringing 'big data' to biology. My group develops mathematical tools for analyzing time-courses of high-dimensional gene expression data, leveraging tools from probability and optimal transport. We aim to develop a mathematical theory to answer questions like How does a stem cell transform into a muscle cell, a skin cell, or a neuron? How can we reprogram a skin cell into a neuron? We model a developing population of cells with a curve in the space of probability distributions on a high-dimensional gene expression space. We design algorithms to recover these curves from samples at various time-points and we collaborate closely with experimentalists to test these ideas on real data.
(Online)
15:20 - 16:20 Open problem discussion
discussion to be held in the main zoom meeting room
(Online)
Friday, June 25
09:00 - 09:40 Max von Renesse: On Overrelaxation in the Sinkhorn Algorithm
We discuss a simple but potent modification of the Sinkhorn algorithm based on overrelaxation. We provide an a priori estimate for the crucial overrelaxation parameter which guarantees both global and improved local convergence.
(Online)
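As background for the talk above: a minimal pure-Python sketch of the Sinkhorn iteration with a multiplicative overrelaxation step. The cost matrix, marginals, and the choice theta = 1.5 are illustrative only; theta = 1 recovers the classical iteration, and the admissible range of the overrelaxation parameter is exactly the kind of question results like the one above address.

```python
import math

def sinkhorn_overrelaxed(C, a, b, eps=0.1, theta=1.5, iters=500):
    """Entropy-regularized OT via Sinkhorn scaling with overrelaxation.

    In log coordinates each update is extrapolated by a factor theta:
    log u <- (1 - theta) log u + theta log(a / (K v)), and likewise for v.
    """
    n, m = len(a), len(b)
    K = [[math.exp(-C[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        for i in range(n):
            Kv = sum(K[i][j] * v[j] for j in range(m))
            u[i] = u[i] ** (1 - theta) * (a[i] / Kv) ** theta
        for j in range(m):
            KTu = sum(K[i][j] * u[i] for i in range(n))
            v[j] = v[j] ** (1 - theta) * (b[j] / KTu) ** theta
    # transport plan P_ij = u_i K_ij v_j
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# tiny example: two points each, uniform marginals
C = [[0.0, 1.0], [1.0, 0.0]]
P = sinkhorn_overrelaxed(C, [0.5, 0.5], [0.5, 0.5])
print([round(sum(row), 3) for row in P])  # row marginals ≈ [0.5, 0.5]
```

This toy problem converges for any theta in (0, 2); the point of a priori estimates such as those in the talk is to guarantee a safe theta in general.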
09:50 - 10:30 Flavien Léger: Taylor expansions for the regularized optimal transport problem
We prove Taylor expansions of the regularized optimal transport problem with general cost as the temperature goes to zero. Our first contribution is a multivariate Laplace expansion formula. We show that the first-order terms involve the scalar curvature in the corresponding Hessian geometry. We then obtain: - first-order expansion of the potentials; - second-order expansion of the optimal transport value. Joint work with Pierre Roussillon, François-Xavier Vialard and Gabriel Peyré.
(Online)
10:40 - 11:20 Yunan Yang: Optimal transport-based objective function for physical inverse problems
We have proposed the quadratic Wasserstein distance from optimal transport theory for inverse problems, including nonlinear medium reconstruction for waveform inversions and chaotic dynamical systems parameter identification. Traditional methods for both applications suffered from longstanding difficulties such as nonconvexity and noise sensitivity. As we advance, we discover that the advantages of using optimal transport-based metrics apply in a broader class of data-fitting problems where the continuous dependence between the parameter and the data involves the change of data phase or support of the data. The implicit regularization effects of the Wasserstein distance, similar to a weak norm, also help improve the stability of parameter identification.
(Online)
11:30 - 12:10 Katy Craig: A blob method for diffusion and applications to sampling and two layer neural networks.
Given a desired target distribution and an initial guess of that distribution, composed of finitely many samples, what is the best way to evolve the locations of the samples so that they more accurately represent the desired distribution? A classical solution to this problem is to allow the samples to evolve according to Langevin dynamics, the stochastic particle method corresponding to the Fokker-Planck equation. In today’s talk, I will contrast this classical approach with a deterministic particle method corresponding to the porous medium equation. This method corresponds exactly to the mean-field dynamics of training a two layer neural network for a radial basis function activation function. We prove that, as the number of samples increases and the variance of the radial basis function goes to zero, the particle method converges to a bounded entropy solution of the porous medium equation. As a consequence, we obtain both a novel method for sampling probability distributions as well as insight into the training dynamics of two layer neural networks in the mean field regime. This is joint work with Karthik Elamvazhuthi (UCLA), Matt Haberland (Cal Poly), and Olga Turanova (Michigan State).
(Online)
|
|
# Why is $\int_0^{\pi/4} 5(1+\tan(x))^3\sec^2(x)\,dx$ equal to $18.75$ and not $3.75$?
I know the indefinite integral $\int 5(1+\tan(x))^3\sec^2(x)\,dx= \frac {5(1+\tan(x))^4} {4}+c$ by using $u$ substitution. Then shouldn't I evaluate that at $\frac{\pi}4$ minus that at $0$? Doing that gives $3.75$ but my textbook and Wolfram Alpha say the right answer is $18.75$
• It might be beneficial to us if you add your work so we can analyze what went wrong – Crescendo Dec 18 '17 at 23:15
Note that we have $\tan\left(\frac{\pi}{4} \right)=1$ and $\tan(0)=0$
A potential mistake is that you have forgotten to multiply by $5$. We have $\frac{15}4=3.75$ but I can't tell for sure unless more working is shown.
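A numerical check (a quick sketch using the composite Simpson rule; n is an arbitrary even number of subintervals) confirms the textbook value and shows that dropping the factor of $5$ is exactly what produces $3.75$:

```python
import math

def f(x):
    # integrand: 5 (1 + tan x)^3 sec^2 x
    return 5 * (1 + math.tan(x)) ** 3 / math.cos(x) ** 2

# composite Simpson rule on [0, pi/4]
n = 10_000
a, b = 0.0, math.pi / 4
h = (b - a) / n
simpson = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
simpson *= h / 3

# antiderivative from u = 1 + tan x:  F(x) = 5 (1 + tan x)^4 / 4
F = lambda x: 5 * (1 + math.tan(x)) ** 4 / 4
exact = F(b) - F(a)            # 5 * (2**4 - 1) / 4 = 75/4 = 18.75

print(round(simpson, 4), round(exact, 4), round(exact / 5, 4))
# 18.75 18.75 3.75  <- without the factor of 5 you get the asker's 3.75
```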