# How do I determine the percentile value of a series using formula language?
You can use one of the following functions:
Percentile(series, percentile)
Percentile(series, percentile, window)
In the first form, the percentile is calculated over the entire time series, yielding a single value.
The second form calculates the percentile over a rolling window whose length is set by the third argument.
Examples:
Percentile(sek, 70)
Percentile(sek, 70, YearLength())
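The same two behaviours (whole-series percentile vs. rolling-window percentile) can be cross-checked outside the formula language. A Python/pandas sketch — pandas is an assumption here, not part of the formula language, and the series values are made up:

```python
import pandas as pd

# Hypothetical daily series standing in for `sek` in the examples above.
sek = pd.Series([1.0, 3.0, 2.0, 5.0, 4.0, 6.0, 8.0, 7.0])

# Percentile(sek, 70): one value computed over the entire series.
whole = sek.quantile(0.70)

# Percentile(sek, 70, window): rolling 70th percentile; the window here is
# 4 observations (YearLength() in the source would be roughly one year).
rolling = sek.rolling(window=4).quantile(0.70)

print(whole)
print(rolling.tolist())  # first window-1 entries are NaN
```

Note that the rolling variant returns a series of the same length as the input, with the first `window - 1` entries undefined.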
Research
# A result on three solutions theorem and its application to p-Laplacian systems with singular weights
Eun K Lee1 and Yong-Hoon Lee2*
Author Affiliations
1 Department of Mathematics Education, Pusan National University, Busan, 609-735, Korea
2 Department of Mathematics, Pusan National University, Busan, 609-735, Korea
Boundary Value Problems 2012, 2012:63 doi:10.1186/1687-2770-2012-63
Received: 16 February 2012 Accepted: 18 May 2012 Published: 22 June 2012
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
### Abstract
In this paper, we consider p-Laplacian systems with singular weights. Exploiting an Amann-type three solutions theorem for a singular system, we prove the existence, nonexistence, and multiplicity of positive solutions when the nonlinear terms have a combined sublinear effect at ∞.
MSC: 35J55, 34B18.
##### Keywords:
p-Laplacian system; singular weight; upper solution; lower solution; three solutions theorem
### 1 Introduction
In this paper, we study a one-dimensional p-Laplacian system with singular weights of the form where , λ is a nonnegative parameter, , is a nonnegative measurable function on , on any open subinterval in and with . In particular, may be singular at the boundary or may not be in . It is easy to see that if , then all solutions of () are in . On the other hand, if , then this regularity of solutions is not true in general; for example, even in the scalar case, if we take , and , , then , and the solution u of the corresponding scalar problem of () is given by which is not in .
For more precise description, let us introduce the following two classes of weights;
We note that h given in the above example satisfies but . The main interest of this paper is to establish an Amann-type three solutions theorem [4] when with the possibility of . The theorem states, roughly, that two pairs of lower and upper solutions with an ordering condition imply the existence of three solutions. Recently, Ben-Naoum and De Coster [6] proved the theorem for scalar one-dimensional p-Laplacian problems under an -Caratheodory condition, which corresponds to the case ; Henderson and Thompson [18] as well as Lü, O’Regan, and Agarwal [23] did so for scalar second order ODEs and for the one-dimensional p-Laplacian with derivative-dependent nonlinearity, respectively; and De Coster and Nicaise [11] for semilinear elliptic problems in nonsmooth domains. For noncooperative elliptic systems () with and Ω bounded, one may refer to Ali, Shivaji, and Ramaswamy [3]. In particular, for sub-super solutions which are not completely ordered, this type of three solutions result was studied in [26].
The three solutions theorem for our system () , or even for the corresponding scalar p-Laplacian problems, does not follow readily from previous works, mainly because of the possibility . Owing to the delicacy of the Leray-Schauder degree computation, the crucial step in the proof is to guarantee regularity of solutions, but under condition , regularity is not known yet. Due to the singularity of the weights on the boundary, the regularity depends heavily on the shape of the nonlinear terms f and g. Therefore, the first step is to find conditions on f and g that guarantee regularity of solutions. Another difficulty is to show that a corresponding integral operator is bounded on the set of functions between the upper and lower solutions in . To overcome this difficulty, we restrict to upper and lower solutions whose boundary values are zero. As far as the authors know, our three solutions theorem (Theorem 1.1 in Section 2) is the first for singular p-Laplacian systems with weights of this class.
To cover a larger class of differential systems, we consider the systems of the form where are continuous. We impose further conditions on F and G as follows: () = For each , and are nondecreasing in u.; (H) = There exist and such that
and
for all and .; () = and , for all . We now state our first main result, a three solutions theorem, as follows; see Section 2 for more details.
Theorem 1.1 Assume (H), () and (). Let , be a lower solution and an upper solution, and , be a strict lower solution and a strict upper solution of problem (P), respectively. Also, assume that all of them are contained in and satisfy , , . Then problem (P) has at least three solutions , and such that , , and , .
As an application of Theorem 1.1, we study the existence, nonexistence, and multiplicity of positive radial solutions for the following quasilinear system on an exterior domain: where , , , , , and with .
In recent years, the existence of positive solutions for such systems has been widely studied, for example, in [1] and [27] for second order ODE systems, in [3,7,9,10,13,14,16] and [8] for semilinear elliptic systems on a bounded domain and in [5,15,17] and [2] for p-Laplacian systems on a bounded domain.
For a precise description, let us give the list of assumptions that we consider. (k) = , where
; () = and ,; () = for all ,; () = f and g are nondecreasing.
Condition () is sometimes called a combined sublinear effect at ∞ and simple examples satisfying () ∽ () can be given as follows:
where and , and also
where .
Among the reference works mentioned above, Hai and Shivaji [17] and Ali and Shivaji [2] (with more general nonlinearities) considered problem () with case and Ω bounded. For monotone functions f and g with and satisfying condition (), they proved that there exists such that the problem has at least one positive solution for .
We first transform () into one-dimensional p-Laplacian systems () with change of variables , , and where is given by
It is not hard to see that if in () satisfies (k), then in () satisfies , for . Mainly by making use of Theorem 1.1, we prove the following existence result for problem ().
Theorem 1.2 Assume , , (), () and (). Then there exists such that () has no positive solution for , at least one positive solution at , and at least two positive solutions for .
As a corollary, we obtain our second main result as follows.
Corollary 1.3 Assume (k), (), () and (). Then there exists such that () has no positive radial solution for , at least one positive radial solution at , and at least two positive radial solutions for .
We finally note that the first eigenfunctions of play an important role in constructing upper solutions in the proofs of Theorem 1.2 and Theorem 1.1. This is possible due to a recent work of Kajikiya, Lee, and Sim [19], which establishes the existence of discrete eigenvalues and the properties of the corresponding eigenfunctions for problem (E) with .
This paper is organized as follows. In Section 2, we state a -regularity result and a three solutions theorem for singular p-Laplacian systems. In addition, we introduce definitions of (strict) upper and lower solutions, a related theorem and a fixed point theorem for later use. In Section 3, we prove Theorem 1.2.
### 2 Three solutions theorem
In this section, we give definitions of upper and lower solutions and prove three solutions theorem for the following singular system where are continuous.
We call a solution of (P) if , and satisfies (P).
Definition 2.1 We say that is a lower solution of problem (P) if , and
We also say that is an upper solution of problem (P) if , and it satisfies the reversed inequalities. We say that and are a strict lower solution and a strict upper solution of problem (P), respectively, if and are a lower solution and an upper solution of problem (P), respectively, and satisfy , , , for .
We note that the inequality on can be understood componentwise. Let . Then the fundamental theorem on upper and lower solutions for problem (P) is given as follows. The proof follows by combining arguments from Lee [20], Lee and Lee [21], and Lü and O’Regan [22].
Theorem 2.2 Let and be a lower solution and an upper solution of problem (P), respectively, such that () = , for all . Assume (). Also assume that there exist such that () = , , for all . Then problem (P) has at least one solution such that
Remark 2.3 It is not hard to see that condition (H) implies the following condition;
For each , there exists such that
for and .
Lemma 2.4 Assume (H) and (). Let be a nontrivial solution of (P). Then there exists such that both u and v have no interior zeros in .
Proof Let be a nontrivial solution of (P). Suppose, on the contrary, that there exist sequences , of interior zeros of u and v respectively with . We note that both sequences should exist simultaneously. Indeed, if one of the sequences say, , does not exist, then assuming without loss of generality, on for some , we get for by (). From the monotonicity of , we know that v is concave on the interval. Thus v should have at most one interior zero in , a contradiction. With this concave-convex argument, we know that , on and if and are local extremal points of u and v on and respectively, thus both and are in . We consider the case that , and in . All other cases can be explained by the same argument. If , then by using Remark 2.3, we have
(2.1)
and similarly,
(2.2)
Therefore, it follows from plugging (2.2) into (2.1) that
(2.3)
Since , for sufficiently large n, we obtain
This contradicts (2.3) and the proof is done. □
Theorem 2.5 Assume (H) and (). If is a solution of (P), then .
Proof Let be a nontrivial solution of (P). Then so that it is enough to show
We will show . The other facts can be proved in the same manner. Suppose . By Lemma 2.4 and the concave-convex argument, we may assume without loss of generality that there exists such that on . Then for given , by the fact , , there exists such that
Let . Then integrating (P) over and using Remark 2.3, we have
(2.4)
where we use the fact that is decreasing since v is concave. From and (2.4), we know . This implies that conditions and are equivalent. From (2.4), we have
Thus we have
Since ε is arbitrary, we have
(2.5)
Using the fact , with same argument, we have
(2.6)
On the other hand, we observe the inequality
(2.7)
where
Since , we may choose such that
(2.8)
Integrating (P) over with and using Remark 2.3, we get
where we use the fact that is increasing in . Using (2.7), we have
(2.9)
Integrating (2.9) over with respect to s and using (2.8), we have
(2.10)
Similarly, we have
(2.11)
Adding (2.10) and (2.11), we have
(2.12)
on . From (2.5) and (2.6), we see that the right-hand side of (2.12) goes to zero as . This is a contradiction and the proof is complete. □
Now, we consider the three solutions theorem for singular p-Laplacian system (P). For , if
then the zero of , denoted by is uniquely determined by ν. Define by taking
It is known that A is completely continuous [24]. Define with norm . We note that
(2.13)
If F and G satisfy condition (H), then for , from Remark 2.3 and (2.13), we get
This implies and by similar computation, we also get . This fact enables us to define the integral operator for problem (P) and the regularity of solutions (Theorem 2.5) is crucial in this argument. Now, define an operator T by
then we see that and T is completely continuous.
Lemma 2.6 Assume (H), () and (). Let and be a strict lower solution and a strict upper solution of problem (P), respectively, such that , and . Then problem (P) has at least one solution such that
Moreover, for large enough,
where .
Proof Define given by
and also define
Let us consider the following modified problem. We first show that there exists a constant such that if is a solution of (), then . In fact, every solution of () satisfies on . From (H), () and the fact that , , we get
Similarly, we see that is bounded. Therefore, , for some . Thus it is enough to show that
Assume, on the contrary, that there exists such that
Then choosing with , we get the following contradiction:
Now, assume . Since on and , there exists such that and we get the same contradiction from the above calculation by using 0 instead of . For the case , we also get the same contradiction. Consequently, we get . The other cases can be proved in the same manner. Taking , we see that every solution of () is contained in Ω. We now compute . For this purpose, let us consider the operator defined by
Then it is obvious that is completely continuous. We show that there exists such that and . Indeed, since , there is such that . By integrating
from to t, we have
Similarly, we see that is bounded. Therefore, we get
Since every solution of () is contained in Ω, the excision property implies that
Since on Ω, we finally get
This completes the proof. □
We now prove three solutions theorem for (P).
#### Proof of Theorem 1.1
Define
and let us consider Then noting that every solution of () satisfies , we may choose , by (H) such that
Let and be the first eigenvalues of for respectively and let and be corresponding eigenfunctions with . Since are positive and concave [19], we may choose such that and for
We show that and are a strict upper solution and a strict lower solution of () respectively. Indeed,
Similarly, we get
Moreover,
Similarly, we also get
For , large enough, define
Then by Theorem 2.2, there exist two solutions and of (P) satisfying and . Therefore, by Lemma 2.6, we get
and by the excision property, we have
This completes the proof.
### 3 Application
In this section, we prove the existence, nonexistence, and multiplicity of positive solutions for () by using the three solutions theorem of Section 2. Let us define a cone
and define by taking
where and are unique zeros of
respectively, and define by
Then it is known that is completely continuous [25] and in is equivalent to the fact that is a positive solution of (). We know from Theorem 2.5 that under assumptions and (), any solution of problem () is in .
Remark 3.1 If is a solution of (), then and .
For later use, we introduce the following well-known result. See [12] for proof and details.
Proposition 3.2 Let X be a Banach space, an order cone in X. Assume that and are bounded open subsets in X with and . Let be a completely continuous operator such that either
(i) , , and ,
or
(ii) , , and , .
Then A has a fixed point in .
Lemma 3.3 Assume , , () and (). Let be a compact subset of . Then there exists a constant such that for all and all possible positive solutions of (), one has .
Proof If it is not true, then there exist and solutions of () such that . We note that
where and
This implies both and . Moreover, by the above estimation,
Thus we get
as and this contradiction completes the proof. □
Lemma 3.4 Assume , , () and (). If () has a lower solution for some , then () has a solution such that .
Proof It suffices to show the existence of an upper solution of () satisfying . Let and be positive solutions of
(Case I) Both f and g are bounded.
Since () are positive concave functions and , we may choose such that and . We now show that is an upper solution of (). In fact,
Similarly,
(Case II) as .
Using (), choose such that , and
Let . Then
And
Thus is an upper solution of ().
(Case III) g is bounded and as .
Choose such that , and and let
Then
And
Consequently, by Theorem 2.2, () has a solution satisfying
□
Lemma 3.5 Assume , , (), () and (). Then there exists such that if () has a positive solution, then .
Proof Let be a positive solution of (). Without loss of generality, we may assume . From (), we know that
(3.1)
From (3.1) and (), we can choose such that
(3.2)
where , . Using (3.2) and (), we have
Thus we have
□
Lemma 3.6 Assume , , (), () and (). Then for each , there exists such that for , () has a positive solution with and .
Proof We know that if satisfies and , then is a solution of (). Since are completely continuous, is also completely continuous. Given , choose
where . Let . If , then for , . From the definition of , we know that is the maximum value of on . If , then from the choice of , we have
If , then we have
If , then
By the concavity of , we get for ,
(3.3)
By similar argument as the above, with (3.3), we may show that
Let , . For , from (), we may choose such that and
Let , then and for ,
By Proposition 3.2, () has a positive solution such that and . We know that is a lower solution of () for and by Lemma 3.4, the proof is complete. □
We now prove one of the main results for this paper.
#### Proof of Theorem 1.2
From Lemma 3.6 and Lemma 3.5, we know that the set is not empty and . By Lemma 3.3 and complete continuity of T, there exist sequences and such that and in with a solution of (). We claim that is a nontrivial solution of (). Suppose that it is not true, then there exists a sequence of solutions for () such that and . As in the proof of Lemma 3.3, we get
But from (), we have a contradiction to the fact that the right side of the above inequality converges to zero as . Thus is a nontrivial solution of (). According to Lemma 3.4 and the definition of , we know that () has at least one positive solution at and no positive solution for . To prove the existence of the second positive solution of () for , we will use Theorem 1.1. Let . Then we have a lower solution of () and a strict lower solution of () in satisfying . For upper solutions, let and be the first eigenvalues of for respectively and let and be corresponding eigenfunctions with . Since and are in and positive [19], we may choose and such that
Also by the fact , there exists such that
for all and
Let . Then and it is a strict upper solution of () in . Indeed,
and
Finally, from Lemma 3.6, there exists such that () has a positive solution satisfying and . By using the concavity of solutions, it is easily verified that
Therefore, is an upper solution of () in . Now by Theorem 1.1, () has at least two positive solutions and such that and and .
### Competing interests
The authors declare that they have no competing interests.
### Authors’ contributions
All authors contributed equally to obtaining the new results in this article, and all authors read and approved the final manuscript.
### Acknowledgements
The authors express their thanks to Professors Ryuji Kajikiya, Yuki Naito and Inbo Sim for valuable discussions related to -regularity of solutions, and also thank the referees for their careful reading and valuable remarks and suggestions. The first author was supported by Pusan National University Research Grant, 2011. The second author was supported by Mid-career Researcher Program (No. 2010-0000377) and Basic Science Research Program (No. 2012005767) through NRF grant funded by the MEST.
### References
1. Agarwal, RP, O’Regan, D: A coupled system of boundary value problems. Appl. Anal. 69, 381–385 (1998)
2. Ali, J, Shivaji, R: Positive solutions for a class of p-Laplacian systems with multiple parameter. J. Math. Anal. Appl. 335, 1013–1019 (2007)
3. Ali, J, Shivaji, R, Ramaswamy, M: Multiple positive solutions for classes of elliptic systems with combined nonlinear effects. Differ. Integral Equ. 16(16), 669–680 (2006)
4. Amann, H: Existence of multiple solutions for nonlinear elliptic boundary value problem. Indiana Univ. Math. J. 21, 925–935 (1972)
5. Azizieh, C, Clement, P, Mitidieri, E: Existence and a priori estimate for positive solutions of p-Laplace systems. J. Differ. Equ. 184, 422–442 (2002)
6. Ben-Naoum, A, De Coster, C: On the existence and multiplicity of positive solutions of the p-Laplacian separated boundary value problem. Differ. Integral Equ. 10(6), 1093–1112 (1997)
7. Cid, C, Yarur, C: A sharp existence result for a Dirichlet mixed problem: the superlinear case. Nonlinear Anal. 45, 973–988 (2001)
8. Cui, R, Wang, Y, Shi, J: Uniqueness of the positive solution for a class of semilinear elliptic systems. Nonlinear Anal. 67, 1710–1714 (2007)
9. Dalmasso, R: Existence and uniqueness of positive solutions of semilinear elliptic systems. Nonlinear Anal. 39, 559–568 (2000)
10. Dalmasso, R: Existence and uniqueness of positive radial solutions for the Lane-Emden system. Nonlinear Anal. 57, 341–348 (2004)
11. De Coster, C, Nicaise, S: Lower and upper solutions for elliptic problems in nonsmooth domains. J. Differ. Equ. 244, 599–629 (2008)
12. Guo, D, Lakshmikantham, V: Nonlinear Problems in Abstract Cones. Academic Press, New York (1998)
13. Hai, D: Uniqueness of positive solutions for a class of semilinear elliptic systems. Nonlinear Anal. 52, 595–603 (2003)
14. Hai, D: On a class of semilinear elliptic systems. J. Math. Anal. Appl. 285, 477–486 (2003)
15. Hai, D: On a class of quasilinear systems with sign-changing nonlinearities. J. Math. Anal. Appl. 334, 965–976 (2007)
16. Hai, D, Shivaji, R: An existence result on positive solutions for a class of semilinear elliptic systems. Proc. R. Soc. Edinb. A 134, 137–141 (2004)
17. Hai, D, Shivaji, R: An existence result on positive solutions for a class of p-Laplacian systems. Nonlinear Anal. 56, 1007–1010 (2004)
18. Henderson, J, Thompson, H: Existence of multiple solutions for second order boundary value problems. J. Differ. Equ. 166, 443–454 (2000)
19. Kajikiya, R, Lee, YH, Sim, I: One-dimensional p-Laplacian with a strong singular indefinite weight, I. Eigenvalues. J. Differ. Equ. 244, 1985–2019 (2008)
20. Lee, YH: A multiplicity result of positive radial solutions for multiparameter elliptic systems on an exterior domain. Nonlinear Anal. 45, 597–611 (2001)
21. Lee, EK, Lee, YH: A multiplicity result for generalized Laplacian systems with multiparameters. Nonlinear Anal. 71, 366–376 (2009)
22. Lü, H, O’Regan, D: A general existence theorem for the singular equation . Math. Inequal. Appl. 5, 69–78 (2002)
23. Lü, H, O’Regan, D, Agarwal, RP: Triple solutions for the one-dimensional p-Laplacian. Glas. Mat. 38, 273–284 (2003)
24. Manásevich, R, Mawhin, J: Periodic solutions of nonlinear systems with p-Laplacian-like operators. J. Differ. Equ. 145, 367–393 (1998)
25. do Ó, JM, Lorca, S, Sanchez, J, Ubilla, P: Positive radial solutions for some quasilinear elliptic systems in exterior domains. Commun. Pure Appl. Anal. 5, 571–581 (2006)
26. Shivaji, R: A remark on the existence of three solutions via sub-super solutions. In: Nonlinear Analysis and Applications, pp. 561–566 (1987)
27. Zhang, X, Liu, L: A necessary and sufficient condition of existence of positive solutions for nonlinear singular differential systems. J. Math. Anal. Appl. 327, 400–414 (2007)
## Bound between two convex functions [closed]
I'm trying to find an upper bound on the difference f(x) - g(x) of two convex functions for x \in [-a, b], where g(x) = cx for x >= 0 and g(x) = (c-1)x for x < 0, and f(x) is any convex, differentiable function. Another way to look at the question: since g(x) is not differentiable at x = 0, we can approximate g(x) by f(x) for x \in [-a, b] in a neighbourhood of x = 0. Thanks
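Not a full answer, but one concrete illustration of such a bound: writing g(x) = cx + max(-x, 0), the softplus smoothing f(x) = cx + log(1 + e^(-x)) is convex and differentiable, f >= g everywhere, and sup_x (f(x) - g(x)) = log 2, attained at x = 0 (scaling the softplus by a parameter β tightens this to (log 2)/β). A quick numerical check in Python (NumPy assumed; the slope c = 0.7 is an arbitrary choice):

```python
import numpy as np

c = 0.7  # arbitrary slope parameter; any c gives the same gap

def g(x):
    # g(x) = c*x for x >= 0 and (c-1)*x for x < 0, i.e. c*x + max(-x, 0)
    return c * x + np.maximum(-x, 0.0)

def f(x):
    # Softplus smoothing of the kink at 0: convex and differentiable.
    return c * x + np.log1p(np.exp(-x))

x = np.linspace(-5.0, 5.0, 100001)  # grid over [-a, b] with a = b = 5
gap = np.max(f(x) - g(x))
print(gap)  # close to log(2), attained at x = 0
```

Away from 0 the two functions agree up to an exponentially small term, so the smoothing is local in the sense you describe.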
try math.stackexchange.com/questions – Will Jagy Sep 6 at 19:13
12 Apr 2012, 08:22
Just an anecdote I heard from kryzak - someone from Kellogg wanted to get into the tech sector in Silicon Valley. Tried his utmost but was unsuccessful. Tech companies really don't come out in force at Kellogg, preferring to go to Haas or UCLA for recruitment.
I think a similar rationale holds here. UCLA is right in the heart of the entertainment/media industry. While Kellogg may hold a higher brand value, in the end, if you have 50 media companies showing up for recruitment at UCLA while 5 make it to Kellogg, which school would you want to go to?
Given your career objectives, I would prefer UCLA.
Intern
Status: Applying
Joined: 26 Oct 2011
Posts: 13
Location: United States
Concentration: Strategy, Entrepreneurship
GMAT 1: 740 Q47 V46
GPA: 3.71
WE: Engineering (Aerospace and Defense)
Re: UCLA $$$ v. Kellogg
12 Apr 2012, 08:37
Congrats on your acceptances. Obviously there are a lot of things to consider, and I am not a person with intimate knowledge of the entertainment industry, so my opinions don't hold too much weight. Since you said money isn't a huge concern, I'll assume that isn't a factor in this decision.
First question is what type of media/entertainment are you interested in? Is there a specific segment which attracts you? Music, movies, network television, advertising, internet, print? That makes a significant difference. If your goal is to make it into a major film studio, then the obvious choice is UCLA. Networking is important in most industries, and it is paramount (no pun intended) in film and television. Entertainment is an industry that attracts many, many people, so relationships are key to gaining a legitimate position. You need to be where the epicenter of the industry is if you want to build a robust network, so LA is where you would need to be.
Now if you don't have your heart set on working for a big TV or film studio, I think Kellogg would be the better choice. With Kellogg's top-notch marketing reputation you should be able to land a good job at a major media conglomerate easily (think Clear Channel, Viacom, Disney). I believe Chicago is the 3rd largest media market in America and the largest (NY) isn't far away, so you shouldn't have any trouble breaking into the industry. Also, the Kellogg brand will go farther down the road if you leave the entertainment sector. However, if you intend to stay in entertainment forever, I believe it's an industry that doesn't put as much of a premium on prestigious education; rather, it's more relationship-based, so your MBA pedigree won't matter as much as in other industries.
In summary, the questions you need to ask yourself are: where do you want to live (if California, the choice should be UCLA without question), what segment of the industry are you interested in, and are you sure you want to do entertainment forever? If you want to make movies or TV, go to LA. If you want to do just about anything else, I'd go to Kellogg. My two cents.
Manager
Joined: 02 Feb 2012
Posts: 76
Re: UCLA $$$ v. Kellogg
12 Apr 2012, 09:08
Tough one - I'd probably go with UCLA if you are really truly set on Media/Entertainment.
1. Obvious one - LA is the epicenter of those industries, so it will be easier to network during the school year, and maybe UCLA has better recruiting (although I don't know much about media/ent recruiting).
2. Overlooked a lot, but I personally think this is a big issue - the money allows you to be a lot more flexible. Banks and consulting firms recruit before everyone else for a reason - they know that their big draw is higher salaries, and when you have a bunch of people who are paying $150k for school, that is indeed a big selling point. If you don't have such a large cost you're trying to recoup, you can concentrate more on finding the job you really want, and the "I'll just do banking/consulting for a few years and then do what I want" thoughts are less likely to creep in.
Intern
Joined: 11 Apr 2012
Posts: 2
12 Apr 2012, 09:23
ACMBA wrote:
Oh wow. Thank you for all of those incredibly thorough responses! I really appreciate the insight. I ultimately want to go into TV production - working for a big TV studio. Whether I chose to spend my entire career in the entertainment industry is still very unknown. I currently live in NY and eventually want to return (not immediately after graduation, but in a few years).
Just found out that I was also accepted into Berkeley. I think I can safely write Berkeley off, though.
Congrats on Haas. I wouldn't write off any school until you know exactly what they are offering and the resources they can provide to you. Besides it's nice to luxuriate in an admit for at least a little bit before saying no thanks. This period of just "enjoying it" is often when the school starts pulling out their best tricks to woo you. You don't want to pack up your chair before the show even starts.
_________________
The Brain Dump - From Low GPA to Top MBA (Updated September 1, 2013) - A Few of My Favorite Things--> http://cheetarah1980.blogspot.com
Intern
Joined: 14 Apr 2011
Posts: 16
Schools: Wharton
WE 1: Law
14 Apr 2012, 15:18
I'd also go with UCLA because of the generous scholarship package. I wouldn't think twice; I'd go straight to LA, even though Kellogg may be ranked higher. It also depends on the industry you want to get into right after your MBA.
Intern
Joined: 05 Mar 2012
Posts: 9
Concentration: Finance
Schools: Northwestern (Kellogg) - Class of 2013
Re: UCLA $$$ v. Kellogg
15 Apr 2012, 21:33
I agree with everyone else. If you are set on media, go UCLA. If you want some options, go Kellogg.
_________________
Kellogg c/o 2013. Ask me anything
Intern
Joined: 22 Mar 2012
Posts: 13
Concentration: Marketing, Strategy
GMAT 1: 680 Q38 V45
GPA: 3.37
Re: UCLA $$$ v. Kellogg
16 Apr 2012, 10:23
I think everyone has given great advice already, and I just wanted to throw in my $0.02 as I currently work for a TV/film studio in LA - if it's UCLA vs. Kellogg and you really want to be in this industry, you need to go to UCLA. I do know of co-workers with MBAs from other programs, like Kellogg, Fuqua, etc., but they are few and far between - UCLA and USC are the predominant alumni in the industry.
HTH!
Manager
Joined: 01 Jun 2006
Posts: 183
Location: United States
Schools: Haas '15 (M)
GMAT 1: 750 Q50 V40
17 Apr 2012, 19:00
Definitely would go with UCLA if you want to be in entertainment. Plus you are doing it without having to go through the pre-requisites, i.e., waiting tables - everyone in LA waits tables before they become an actor. With that aside, high-profile studios and people are all located in LA. Burbank, the epicenter of much entertainment activity, is about 10-15 minutes from Westwood. All the connections, meetings, and events are going to be happening in LA or NY.
If you go to Chicago, I think the big studio there is the Oprah Network. That kind of limits your possibilities there; additionally, you will probably be traveling to LA or NY all the time, which makes the Kellogg brand that much less attractive.
Look at the industry placements from Kellogg vs. UCLA and you will find that UCLA places many more people in media/entertainment than Kellogg does.
# Fermat prime
Fermat prime: This is a prime number of the form $2^{2^n} + 1$, where $n$ is a nonnegative integer.
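For illustration, the only known Fermat primes are those with n = 0, …, 4, while the next Fermat number is already composite. A short Python check by trial division (adequate at this size; not part of the original article):

```python
def is_prime(m):
    # Trial division; fine for the small Fermat numbers below.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

fermat = [2 ** (2 ** n) + 1 for n in range(6)]  # F_0 .. F_5
print([(F, is_prime(F)) for F in fermat])
# F_0..F_4 (3, 5, 17, 257, 65537) are prime; F_5 = 4294967297 = 641 * 6700417 is not.
```

Whether any Fermat prime exists beyond n = 4 is an open question.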
# On some boundary value problems on nilpotent groups (joint w/ Bent Ørsted, Genkai Zhang)
Jan Möllers
(IMF)
Analyseseminar
Thursday, 15 May 2014, at 16:15, in Aud. D3 (1531-215)
Abstract:
We interpret certain boundary value problems on some nilpotent groups as restriction problems for representations of semisimple Lie groups. Recently developed techniques in representation theory then allow us to find the corresponding Poisson transforms as explicit integral operators and to show that they are isometries between certain Sobolev type spaces of boundary values and solutions.
This extends recent results by Caffarelli-Silvestre for the nilpotent group $\mathbb{R}^n$, and by Frank et al. for the Heisenberg group $\mathbb{C}^n \times \mathbb{R}$.
Contact person: Bent Ørsted
# Open Graph Drawing Framework
v.2012.07
ogdf::PlanarGridLayoutModule Class Reference
Base class for planar grid layout algorithms. More...
#include <ogdf/module/GridLayoutModule.h>
Inheritance diagram for ogdf::PlanarGridLayoutModule:
## Public Member Functions
PlanarGridLayoutModule ()
Initializes a planar grid layout module.
virtual ~PlanarGridLayoutModule ()
void callFixEmbed (GraphAttributes &AG, adjEntry adjExternal=0)
Calls the grid layout algorithm with a fixed planar embedding (general call).
void callGridFixEmbed (const Graph &G, GridLayout &gridLayout, adjEntry adjExternal=0)
Calls the grid layout algorithm with a fixed planar embedding (call for GridLayout).
Public Member Functions inherited from ogdf::GridLayoutModule
GridLayoutModule ()
Initializes a grid layout module.
virtual ~GridLayoutModule ()
void call (GraphAttributes &AG)
Calls the grid layout algorithm (general call).
void callGrid (const Graph &G, GridLayout &gridLayout)
Calls the grid layout algorithm (call for GridLayout).
const IPoint & gridBoundingBox () const
Returns the bounding box of the computed grid layout.
double separation () const
Returns the current setting of the minimum distance between nodes.
void separation (double sep)
Sets the minimum distance between nodes.
Public Member Functions inherited from ogdf::LayoutModule
LayoutModule ()
Initializes a layout module.
virtual ~LayoutModule ()
virtual void call (GraphAttributes &GA, GraphConstraints &GC)
Computes a layout of graph GA wrt the constraints in GC (if applicable).
void operator() (GraphAttributes &GA)
Computes a layout of graph GA.
## Protected Member Functions
virtual void doCall (const Graph &G, GridLayout &gridLayout, IPoint &boundingBox)
Implements the algorithm call.
virtual void doCall (const Graph &G, adjEntry adjExternal, GridLayout &gridLayout, IPoint &boundingBox, bool fixEmbedding)=0
Implements the algorithm call.
## Detailed Description
Base class for planar grid layout algorithms.
A planar grid layout algorithm is a grid layout algorithm that produces a crossing-free grid layout of a planar graph. It provides an additional call method for producing a planar layout with a predefined planar embedding.
Definition at line 158 of file GridLayoutModule.h.
## Constructor & Destructor Documentation
ogdf::PlanarGridLayoutModule::PlanarGridLayoutModule ( )
inline
Initializes a planar grid layout module.
Definition at line 162 of file GridLayoutModule.h.
virtual ogdf::PlanarGridLayoutModule::~PlanarGridLayoutModule ( )
inlinevirtual
Definition at line 164 of file GridLayoutModule.h.
## Member Function Documentation
void ogdf::PlanarGridLayoutModule::callFixEmbed ( GraphAttributes & AG, adjEntry adjExternal = 0 )
Calls the grid layout algorithm with a fixed planar embedding (general call).
A derived algorithm implements the call by implementing doCall().
Parameters:
AG is the input graph; the new layout is also stored in AG. adjExternal specifies an adjacency entry on the external face, or is set to 0 if no particular external face shall be specified.
void ogdf::PlanarGridLayoutModule::callGridFixEmbed ( const Graph & G, GridLayout & gridLayout, adjEntry adjExternal = 0 )
Calls the grid layout algorithm with a fixed planar embedding (call for GridLayout).
A derived algorithm implements the call by implementing doCall().
Parameters:
G is the input graph. gridLayout is assigned the computed grid layout. adjExternal specifies an adjacency entry (of G) on the external face, or is set to 0 if no particular external face shall be specified.
Reimplemented in ogdf::GridLayoutPlanRepModule.
virtual void ogdf::PlanarGridLayoutModule::doCall ( const Graph & G, GridLayout & gridLayout, IPoint & boundingBox )
inlineprotectedvirtual
Implements the algorithm call.
Parameters:
G is the input graph. gridLayout is assigned the computed grid layout. boundingBox returns the bounding box of the grid layout. The lower left corner of the bounding box is always (0,0), thus this IPoint defines the upper right corner as well as the width and height of the grid layout.
Implements ogdf::GridLayoutModule.
Definition at line 199 of file GridLayoutModule.h.
virtual void ogdf::PlanarGridLayoutModule::doCall ( const Graph & G, adjEntry adjExternal, GridLayout & gridLayout, IPoint & boundingBox, bool fixEmbedding )
protectedpure virtual
Implements the algorithm call.
A derived algorithm must implement this method and return the computed grid layout in gridLayout.
Parameters:
G is the input graph. adjExternal is an adjacency entry on the external face, or 0 if no external face is specified. gridLayout is assigned the computed grid layout. boundingBox returns the bounding box of the grid layout. The lower left corner of the bounding box is always (0,0), thus this IPoint defines the upper right corner as well as the width and height of the grid layout. fixEmbedding determines if the input graph is embedded and that embedding has to be preserved (true), or if an embedding needs to be computed (false).
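The call()/doCall() contract described above is a template-method design: the base class fixes the public entry points, and each derived algorithm supplies doCall(). A minimal sketch of that structure, written here in Python for brevity with stand-in types (not the real OGDF classes), might look like:

```python
from abc import ABC, abstractmethod

class PlanarGridLayoutSketch(ABC):
    """Stand-in for ogdf::PlanarGridLayoutModule's call/doCall split."""

    def call_grid_fix_embed(self, graph, grid_layout, adj_external=None):
        # Public entry point: fixed logic, delegates to the subclass hook.
        bounding_box = self.do_call(graph, adj_external, grid_layout,
                                    fix_embedding=True)
        return bounding_box  # upper-right corner; lower-left is (0, 0)

    @abstractmethod
    def do_call(self, graph, adj_external, grid_layout, fix_embedding):
        """Derived algorithms compute grid_layout and return the box."""

class DiagonalLayout(PlanarGridLayoutSketch):
    # Trivial demo algorithm: place node i at grid point (i, i).
    def do_call(self, graph, adj_external, grid_layout, fix_embedding):
        for i, node in enumerate(graph):
            grid_layout[node] = (i, i)
        n = len(graph)
        return (n - 1, n - 1)

layout = {}
box = DiagonalLayout().call_grid_fix_embed(["a", "b", "c"], layout)
print(box)          # (2, 2)
print(layout["c"])  # (2, 2)
```

In the real C++ API the same pattern holds: callGridFixEmbed() is the stable public interface, while doCall() with fixEmbedding=true is the pure-virtual hook a planar layout algorithm must implement.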
The documentation for this class was generated from the following file:
Figure 1: The TWS-iMetrica automated financial trading platform. Featuring fast performance optimization, analysis, and trading design features unique to iMetrica for building direct real-time filters to generate automated trading signals for nearly any tradeable financial asset. The system was built using Java, C, and the Interactive Brokers IB API in Java.
### Introduction
I realize that I’ve been MIA (missing in action for non-anglophones) the past three months on this blog, but I assure you there has been good reason for my long absence. Not only have I developed a large slew of various optimization, analysis, and statistical tools in iMetrica for constructing high-performance financial trading signals geared towards intraday trading which I will (slowly) be sharing over the next several months (with some of the secret-sauce-recipes kept to myself and my current clients of course), but I have also built, engineered, tested, and finally put into commission on a daily basis the planet’s first automated financial trading platform completely based on the recently developed FT-AMDFA (adaptive multivariate direct filtering approach for financial trading). I introduce to you iMetrica’s little sister, TWS-iMetrica.
The steps for setting up and building an intraday financial trading environment using iMetrica + TWS-iMetrica are easy. There are four of them. No technical analysis indicator garbage is used here, no time domain methodologies, or stochastic calculus. TWS-iMetrica is based completely on the frequency domain approach to building robust real-time multivariate filters that are designed to extract signals from tradable financial assets at any fixed observation frequency (the most commonly used in my trading experience with FT-AMDFA being 5, 15, 30, or 60 minute intervals). What makes this paradigm of financial trading versatile is the ability to construct trading signals based on your own trading priorities, with each filter designed uniquely for a targeted asset to be traded. With that being said, the four main steps using both iMetrica and TWS-iMetrica are as follows:
1. The first step to building an intraday trading environment is to construct what I call an MDFA portfolio (which I’ll define in a brief moment). This is achieved in the TWS-iMetrica interface that is endowed with a user-friendly portfolio construction panel shown below in Figure 4.
2. With the desired MDFA portfolio selected, one then proceeds to connect TWS-iMetrica to IB by simply pressing the Connect button on the interface in order to download the historical data (see Figure 3).
3. With the historical data saved, the iMetrica software is then used to upload the saved historical data and build the filters for the given portfolio using the MDFA module in iMetrica (see Figure 2). The filters are constructed using a sequence of proprietary MDFA optimization and analysis tools. Within the iMetrica MDFA module, three different types of filters can be built: 1) a trend filter that extracts a fast moving trend, 2) a band-pass filter for extracting local cycles, and 3) a multi-bandpass filter that extracts both a slow moving trend and local cycles simultaneously.
4. Once the filters are constructed and saved in a file (a .cft file), the TWS-iMetrica system is ready to be used for intraday trading using the newly constructed and optimized filters (see Figure 6).
Figure 2: The iMetrica MDFA module for constructing the trading filters. Features dozens of design, analysis, and optimization components to fit the trading priorities of the user and is used in conjunction with the TWS-iMetrica interface.
In the remaining part of this article, I give an overview of the main features of the TWS-iMetrica software and how easily one can create a high-performing automated trading strategy that fits the needs of the user.
### The TWS-iMetrica Interface
The main TWS-iMetrica graphical user interface is composed of several components that allow for constructing a multitude of various MDFA intraday trading strategies, depending on one's trading priorities. Figure 3 shows the layout of the GUI after first being launched. The first component is the top menu, featuring TWS System, some basic TWS connection variables (which, in most cases, are left in their default mode), and the Portfolio menu. To access the main menu for setting up the MDFA trading environment, click Setup MDFA Portfolio under the Portfolio menu. Once this is clicked, a panel is displayed (shown in Figure 4) featuring the required a priori parameters for building the MDFA trading environment, all of which should be filled in before MDFA filter construction and trading take place. The parameters and their possible values are given below Figure 4.
Figure 3 – The TWS-iMetrica interface when first launched and everything blank.
Figure 4 – The Setup MDFA Portfolio panel featuring all the settings necessary to construct the automated MDFA trading environment.
1. Portfolio – The portfolio is the basis for the MDFA trading platform and consists of two types of assets: 1) the target asset, from which we construct the trading signal, engineer the trades, and which we use in building the MDFA filter, and 2) the explanatory assets, which provide the explanatory data for the target asset in the multivariate filter construction. Here, one can select up to four explanatory assets.
2. Exchange – The exchange on which the assets are traded (according to IB).
3. Asset Type – Whether the input portfolio is a selection of Stocks or Futures (Currencies and Options soon to be available).
4. Expiration – If trading Futures, the expiration date of the contract, given as a six digit number of year then month (e.g. 201306 for June 2013).
5. Shares/Contracts – The number of shares/contracts to trade (this number can also be changed throughout the trading day through the main panel).
6. Observation frequency – In the MDFA financial trading method, we consider uniformly sampled observations of market data on which to do the trading (in seconds). The options are 1, 2, 3, 5, 15, 30, and 60 minute data. The default is 5 minutes.
7. Data – For the intraday observations, determines the nature of the data being extracted. Valid values include TRADES, MIDPOINT, BID, ASK, and BID_ASK. The default is MIDPOINT.
8. Historical Data – Selects how many days are used for downloading the historical data to compute the initial MDFA filters. The historical data will of course come in intervals chosen in the observation frequency.
Once all the values have been set for the MDFA portfolio, click the Set and Build button, which will first check whether the values entered are valid and, if so, create the necessary data sets for TWS-iMetrica to initialize trading. This all must be done while TWS-iMetrica is connected to IB (not set in trading mode, however). If the build was successful, the historical data of the desired target financial asset, up to the most recent observation in regular market trading hours, will be plotted on the graphics canvas. The historical data will be saved to a file named (by default) "lastSeriesData.dat", and the data will come in columns, where the first column is the date/time of the observation, the second column is the price of the target asset, and the remaining columns are log-returns of the target and explanatory data. And that's it, the system is now set up to be used for financial trading. The values entered in the Setup MDFA Portfolio will never have to be set again (unless changes to the MDFA portfolio are needed, of course).
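Assuming the column layout just described and whitespace-delimited rows (an assumption; the actual file format may differ), a hypothetical reader for "lastSeriesData.dat" could look like:

```python
# Hypothetical parser for the described layout:
# column 1 = date/time, column 2 = target price, remaining = log-returns.
def parse_series(lines):
    records = []
    for line in lines:
        fields = line.split()
        if len(fields) < 2:
            continue  # skip blank/malformed rows
        records.append({
            "timestamp": fields[0],
            "price": float(fields[1]),
            "log_returns": [float(v) for v in fields[2:]],
        })
    return records

sample = [
    "2013-06-03T09:30 1.5521 0.0003 -0.0001",
    "2013-06-03T09:35 1.5527 0.0004 0.0002",
]
rows = parse_series(sample)
print(rows[1]["price"], len(rows[0]["log_returns"]))  # 1.5527 2
```

The timestamps and prices above are invented purely for illustration.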
Continuing on to the other controls and features of TWS-iMetrica, once the portfolio has been set, one can proceed to change any of the settings in the main trading control panel. All these controls can be used/modified intraday while in automated MDFA trading mode. The left-most side of the main control panel (Figure 5) of the interface includes a set of options for the following features:
Figure 5 – The main control panel for choosing and/or modifying all the options during intraday trading.
1. In the contracts/shares text field, one enters the number of shares (for stocks) or contracts (for futures) that one will trade throughout the day. This can be adjusted during the day while the automated trading is activated; however, one must be certain that at the end of the day the balance between bought and shorted contracts is zero, otherwise you risk keeping contracts or shares overnight into the next trading day. Typically, this is set at the beginning before automated trading takes place and left alone.
2. The data input file for loading historical data. The name of this file determines where the historical data associated with the MDFA portfolio constructed will be stored. This historical data will be needed in order to build the MDFA filters. By default this is “lastSeriesData.dat”. Usually this doesn’t need to be modified.
3. The stop-loss activation and stop-loss slider bar, with which one can turn the stop-loss on/off and set the stop-loss amount. This value determines how/where a stop-loss will be triggered relative to the price being bought/sold at and is completely dependent on the asset being traded.
4. The interval search, which determines how and when the trades will be made once the selected MDFA signal triggers a buy/sell transaction. If turned off, the transaction (a limit order determined by the bid/ask) will be made at the exact time that the buy/sell signal is triggered by the filter. If turned on, the value in the text field next to it gives how often (in seconds) the trade looks for a better price to make the transaction. This search runs until the next observation for the MDFA filter. For example, if 5 minute return data is being used to do the trading, the search checks for a better price at the chosen interval, for up to 5 minutes, at which to make the given transaction. If at the end of the 5 minute period no better price has been found, the transaction is made at the current ask/bid price. This feature has been shown to be quite useful during sideways or highly volatile markets.
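As one plausible reading of the stop-loss option in item 3 above (the exact trigger rule isn't spelled out in the post), a sketch:

```python
# Hedged sketch: exit when the adverse move from the entry price reaches
# the slider's stop-loss amount. The real TWS-iMetrica rule may differ.
def stop_loss_triggered(entry_price, current_price, position, stop_amount):
    # position: +1 for a long position, -1 for a short position
    adverse_move = (entry_price - current_price) * position
    return adverse_move >= stop_amount

print(stop_loss_triggered(100.0, 99.2, +1, 0.5))   # True: long, fell 0.8
print(stop_loss_triggered(100.0, 100.3, -1, 0.5))  # False: short, up only 0.3
```

All prices and the stop amount here are invented; the point is only that the same comparison handles both long and short positions once the move is signed by the position.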
The middle of the main control panel features the main buttons for connecting to and disconnecting from Interactive Brokers and for initiating the MDFA automated trading environment, as well as convenient buttons used for instantaneous buy/sell triggers that supplement the automated system. It also features an on/off toggle button for activating the trades given in the MDFA automated trading environment. When checked on, transactions according to the automated MDFA environment will proceed and go through to the IB account. If turned off, the real-time market data feeds and historical data will continue to be read into the TWS-iMetrica system and the signals according to the filters will be automatically computed, but no actual transactions/trades into the IB account will be made.
Figure 6 – The TWS-iMetrica main trading interface features many control options to design your own automated MDFA trading strategies.
And finally, once the historical data file for the MDFA portfolio has been created, up to three filters have been constructed for the portfolio and entered in the filter selection boxes, and the system is connected to Interactive Brokers by pressing the Connect button, the market and signal plot panel can then be used for visualizing the different components that one will need for analyzing the market, signal, and performance of the automated trading environment. The panel just below the plot canvas features an array of checkboxes and radio buttons. When connected to IB and Start MDFA Trading has been pressed, all the data and plots are updated automatically in real-time at the specific observation frequency selected in the MDFA Portfolio setup. The currently available plots are as follows:
Figure 8 – The plots for the trading interface. Features price, log-return, account cumulative returns, signal, buy/sell lines, and up to two additional auxiliary signals.
• Price – Plots in real-time the price of the asset being traded, at the specific observation frequency selected for the MDFA portfolio.
• Log-returns – Plots in real-time the log-returns of the price, which is the data that is being filtered to construct the trading signal.
• Account – Shows the cumulative returns produced by the currently chosen MDFA filter over the current and historical data period (note that this does not necessarily reflect the actual returns made by the strategy in IB, just the theoretical returns over time if this particular filter had been used).
• Buy/Sell lines – Shows dashed lines where the MDFA trading signal has produced a buy/sell transaction. The green lines are the buy signals (entered a long position) and magenta lines are the sell (entered a short position).
• Signal – The plot of the signal in real-time. When new data becomes available, the signal is automatically computed and replotted in real-time. This gives one the ability to closely monitor how the current filter is reacting to the incoming data.
• Aux Signal 1/2 – (If available) Plots of the other available signals produced by the (up to two) other filters constructed and entered in the system. To make either of these auxiliary signals the main trading signal, simply select the filter associated with the signal using the radio buttons in the filter selection panel.
Along with these plots, to track specific values of any of these plots at any time, select the desired plot in the Track Plot region of the panel bar. Once selected, specific values and their respective times/dates are displayed in the upper left corner of the plot panel by simply placing the mouse cursor over the plot panel. A small tracking ball will then be moved along the specific plot in accordance with movements of the mouse cursor.
With the graphics panel displaying the performance of each filter in real-time, one can seamlessly switch between a band-pass filter and a timely trend (low-pass) filter according to the changing intraday market conditions. To give an example, suppose that during early morning trading hours there is an unusually high amount of volume pushing an uptrend or pulling a downtrend. In such conditions a trend filter is much more appropriate, being able to follow the large variation in log-returns much better than a band-pass filter can. One can glean the effects of the trend filter from the morning hours of the market. After automated trading using the trend filter, with the volume diffusing into the noon hour, the band-pass filter can then be applied in order to extract and trade at certain low frequency cycles in the log-return data. Towards the end of the day, with volume continuously picking up, the trend filter can then be selected again in order to track and trade any trending movement automatically.
I am currently in the process of building an automated algorithm to "intelligently" switch between the uploaded filters according to the instantaneous market conditions (with the triggering of the switching set by volume and volatility). For the time being, the user must manually switch between different filters, if such switching is desired at all. In most cases, I prefer to leave one filter running all day: since the process is automated, I prefer to have minimal (if any) interaction with the software during the day while it is in automated trading mode.
### Conclusion
As I mentioned earlier, the main components of TWS-iMetrica were written in a way to be adaptable to other brokerage/trading APIs. The only major condition is that the API either be available in Java, or at least have (possibly third-party) wrappers available in Java. That being said, there are only three main types of calls that are made automatically to the connected broker: 1) retrieve historical data for any asset(s), at any given time, at the most commonly used observation frequencies (e.g. 1 min, 5 min, 10 min, etc.), 2) subscribe to an automatic feed of bar/tick data to retrieve the latest OHLC and bid/ask data, and finally 3) place an order (buy/sell) with the broker under any order conditions (limit, stop-loss, market order, etc.) for any given asset.
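Those three call types amount to a small broker-abstraction interface. A hypothetical sketch of such an interface (in Python for brevity, though the actual system is written in Java; the method names and signatures are illustrative, not the real IB API):

```python
from abc import ABC, abstractmethod

class BrokerAPI(ABC):
    """Hypothetical broker abstraction covering the three call types."""

    @abstractmethod
    def historical_bars(self, symbol, bar_seconds, n_days):
        """1) Return historical OHLC bars for `symbol`."""

    @abstractmethod
    def subscribe(self, symbol, on_bar):
        """2) Register a callback receiving the latest OHLC/bid/ask bar."""

    @abstractmethod
    def place_order(self, symbol, quantity, side, order_type="LMT",
                    limit_price=None):
        """3) Submit a buy/sell order; side is 'BUY' or 'SELL'."""

class PaperBroker(BrokerAPI):
    # Trivial in-memory implementation, useful for dry-running a strategy.
    def __init__(self):
        self.orders = []
        self.callbacks = {}

    def historical_bars(self, symbol, bar_seconds, n_days):
        return []  # a real implementation would fetch stored bars

    def subscribe(self, symbol, on_bar):
        self.callbacks.setdefault(symbol, []).append(on_bar)

    def place_order(self, symbol, quantity, side, order_type="LMT",
                    limit_price=None):
        self.orders.append((symbol, quantity, side, order_type, limit_price))
        return len(self.orders)  # order id

broker = PaperBroker()
oid = broker.place_order("BPH3", 1, "BUY", "LMT", 1.5521)
print(oid, broker.orders[0][2])  # 1 BUY
```

Swapping brokerages then means writing one new subclass against these three methods, leaving the trading logic untouched.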
If you are interested in having TWS-iMetrica built for your particular brokerage/trading platform (other than IB of course) and the above conditions for the API are met, I am more than happy to be hired at a certain fixed compensation; simply get in contact with me. If you are interested in seeing how well the automated system has performed thus far, interested in future collaboration, or in becoming a client in order to use the TWS-iMetrica platform, feel free to contact me as well.
Happy extracting!
# High-Frequency Financial Trading with Multivariate Direct Filtering I: FOREX and Futures
Animation 1: Click to see animation of the Japanese Yen filter in action on 164 hourly out-of-sample observations.
In my previous articles, I was working uniquely with daily log-return data from different time spans from a year to a year and a half. This enabled the in-sample period of computing the filter coefficients for the signal extraction to include all the most recent annual phases and seasons of markets, from holiday effects, to the transitioning period of August to September that is regularly highly influential on stock market prices and commodities as trading volume increases a significant amount. One immediate question that is raised in migrating to higher-frequency intraday data is what kind of in-sample/out-of-sample time spans should be used to compute the filter in-sample and then for how long do we apply the filter out-of-sample to produce the trades? Another question that is raised with intraday data is how do we account for the close-to-open variation in price? Certainly, after close, the after-hour bids and asks will force a jump into the next trading day. How do we deal with this jump in an optimal manner? As the observation frequency gets higher, say from one hour to 30 minutes, this close-to-open jump/fall should most likely be larger. I will start by saying that, as you will see in the results of this article, with a clever choice of the extractor $\Gamma$ and explanatory series, MDFA can handle these jumps beautifully (both aesthetically and financially). In fact, I would go so far as to say that the MDFA does a superb job in predicting the overnight variation.
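To make the close-to-open jump concrete: in a log-return series sampled across a market close, the overnight variation appears as just one more (typically larger) return observation, which the filter treats like any other. A small illustrative computation with made-up prices:

```python
import math

# Hypothetical prices (not real Yen data): four hourly closes, then the
# next day's open after an overnight move. In the log-return series the
# close-to-open jump is simply one more (larger) observation.
prices = [100.0, 100.4, 100.1, 99.8, 101.2]
log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]

overnight = log_returns[-1]          # the close-to-open return
biggest = max(log_returns, key=abs)
print(biggest == overnight)  # True: the jump dominates this toy series
```

This is why no special-casing of the overnight gap is strictly required on the data side; the question is whether the filter (and its explanatory series) can anticipate the sign of that one large observation.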
One advantage of building trading signals for higher intraday frequencies is that the signals produce trading strategies that are immediately actionable. Namely, one can act upon a signal to enter a long or short position immediately when it happens. In building trading signals for the daily log-return, this is not the case, since the observations are not actionable points: the log difference of today's closing price with yesterday's closing price is produced after hours and is thus not actionable during open market hours, only on the next trading day. Thus trading on intraday observations can lead to better efficiency in trading.
In this first installment in my series on high-frequency financial trading using multivariate direct filtering in iMetrica, I consider building trading signals on hourly returns of foreign exchange currencies. I've received a few requests after my recent articles on the Frequency Effect in seeing iMetrica and MDFA in action on the FOREX sector. So to satisfy those curiosities, I give a series of (financially) satisfying and exciting results in combining MDFA and the FOREX. I won't give all my secrets away in building these signals (as that would of course wipe out my competitive advantage), but I will give some of the parameters and strategies used so any courageously curious reader may try them at home (or the office). In the conclusion, I give a series of even more tricks and hacks. The results below speak for themselves. So without further ado, let the games begin.
#### Japanese Yen
Frequency: One hour returns
30 day out-of-sample ROI: 12 percent
Yen Filter Parameters: $\lambda$ = 9.2, $\alpha$ = 13.2, $\omega_0 = \pi/5$
Regularization: smooth = .918, decay = .139, decay2 = .79, cross = 0
In the first experiment, I consider hourly log-returns of an ETF index that mimics the Japanese Yen, called FXY. As one of the explanatory series, I consider the hourly log-returns of the price of GOLD, which is traded on NASDAQ. The out-of-sample results of the trading signal built using a low-pass filter and the parameters above are shown in Figure 1. The in-sample trading signal (left of the cyan line) was built using 400 hourly observations of the Yen during US market hours dating back to 1 October 2012. The filter was then applied to the out-of-sample data for 180 hours, roughly 30 trading days, up until Friday, 1 February 2013.
Figure 1: Out-of-sample results for the Japanese Yen. The in-sample trading signal was built using 400 hourly observations of the Yen during US market hours dating back to October 1st, 2012. The out-of-sample portion passed the cyan line is on 180 hourly observations, about 30 trading days.
The beauty of this filter is that it yields a trading signal exhibiting all the characteristics that one should strive for in building a robust and successful trading filter.
1. Consistency: The in-sample portion of the filter performs exactly as it does out-of-sample (after cyan line) in both trade success ratio and systematic trading performance.
2. Dropdowns: One small dropdown out-of-sample for a loss of only .8 percent (nearly the cost of the transaction).
3. Detects the cycles as it should: Although the filter is not able to pinpoint with perfect accuracy every local small upturn during the descent of the Yen against the dollar, it does detect them nonetheless and knows when to sell at their peaks (the magenta lines).
4. Self-correction: What I love about a robust filter is that it will tend to self-correct itself very quickly to minimize a loss in an erroneous trade. Notice how it did this in the second series of buy-sell transactions during the only loss out-of-sample. The filter detects momentum but quickly sold right before the ensuing downfall. My intuition is that only frequency-based methods such as the MDFA are able to achieve this consistently. This is the sign of a skillfully smart filter.
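The buy/sell mechanics implied above can be reduced to a toy rule: trade the sign of the extracted signal. A simplified sketch (the real MDFA trading logic, with transaction handling and costs, is more elaborate):

```python
# Simplified trading rule: hold a long position while the extracted
# signal is positive, short while negative; each period's P&L is
# position * next log-return. All numbers below are made up.
def cumulative_return(signal, log_returns):
    total = 0.0
    for s, r in zip(signal, log_returns[1:]):
        position = 1 if s > 0 else -1
        total += position * r
    return total

sig = [0.2, 0.1, -0.3, -0.1, 0.4]              # made-up signal values
rets = [0.00, 0.01, 0.02, -0.01, -0.02, 0.03]  # made-up log-returns
print(round(cumulative_return(sig, rets), 4))  # 0.09
```

Note the one-period offset: the position implied by the signal at time $t$ earns the return realized at $t+1$, which is what makes the rule actionable rather than clairvoyant.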
The coefficients for this Yen filter are shown below. Notice the smoothness of the coefficients from applying the heavy smooth regularization and the strong decay at the very end. This is exactly the type of smooth/decay combo that one should desire. There is some obvious correlation between the first and second explanatory series in the first 30 lags or so as well. The third explanatory series seems to not provide much support until the middle lags.
Figure 2: Coefficients of the Yen filter. Here we use three different explanatory series to extract the trading signal.
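The filtering step that produces the signal from coefficients like these is just a one-sided weighted sum of lagged log-returns. A toy sketch with invented coefficients (the optimized MDFA coefficients in the figure are not reproduced here):

```python
# Toy direct filter: signal_t = sum_k b_k * x_{t-k}, using invented
# smooth, decaying weights b; real MDFA coefficients come from the
# optimization in iMetrica.
def apply_filter(coeffs, series):
    L = len(coeffs)
    return [sum(coeffs[k] * series[t - k] for k in range(L))
            for t in range(L - 1, len(series))]

b = [0.4, 0.3, 0.2, 0.1]                # smooth, decaying weights
x = [0.01, -0.02, 0.015, 0.005, -0.01]  # made-up log-returns
signal = apply_filter(b, x)
print(len(signal))  # 2 (one output per fully-overlapping window)
```

Because the filter is one-sided (it only uses past and present observations), each signal value is computable in real time, which is what makes the extracted signal tradeable.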
One of the first things that I always recommend doing when first attempting to build a trading signal is to take a glance at the periodogram. Figure 3 shows the periodogram of the log-return data of the Japanese Yen over 580 hours. Compare this with the periodogram of the same asset using log-returns of daily data over 580 days, shown in Figure 4. Notice the much larger prominent spectral peaks at the lower frequencies in the daily log-return data. These prominent spectral peaks render multibandpass filters much more advantageous to use, as we can take advantage of them by placing a band-pass filter directly over a peak to extract that particular frequency (see my article on multibandpass filters). However, in the hourly data, we don't see any obvious spectral peaks to consider, thus I chose a low-pass filter and set the cutoff frequency at $\pi/5$, a standard choice and a good place to begin.
Figure 3: Periodogram of hourly log-returns of the Japanese Yen over 580 hours.
Figure 4: Periodogram of Japanese Yen using 580 daily log-return observations. Many more spectral peaks are present in the lower frequencies.
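The periodogram diagnostic above is easy to reproduce. A sketch on synthetic data (the actual Yen series is not reproduced here; a cycle near $\pi/5$ is planted so the peak is visible):

```python
import numpy as np

# Illustrative only: a synthetic return series with a planted cycle near
# pi/5 (period 10), standing in for the actual Yen log-returns.
rng = np.random.default_rng(0)
n = 580
t = np.arange(n)
returns = 0.5 * np.sin(2 * np.pi * t / 10) + rng.normal(0.0, 1.0, n)

# Raw periodogram over frequencies in [0, pi]
spectrum = np.abs(np.fft.rfft(returns)) ** 2 / n
freqs = np.linspace(0.0, np.pi, len(spectrum))

peak = freqs[1:][np.argmax(spectrum[1:])]   # ignore the DC bin
print(abs(peak - np.pi / 5) < 0.05)         # True: peak sits near pi/5
```

On real hourly log-returns the picture is closer to the flat spectrum of Figure 3; a pronounced peak like the planted one is what would justify a band-pass filter centered on it.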
#### Japanese Yen
Frequency: 15 minute returns
7 day out-of-sample ROI: 5 percent
Yen Filter Parameters: $\lambda$ = 3.7, $\alpha$ = 13, $\omega_0 = \pi/9$
Regularization: smooth = .90, decay = .11, decay2 = .09, cross = 0
In the next trading experiment, I consider the Japanese Yen again, only this time I look at trading on even higher-frequency log-return data than before, namely on 15 minute log-returns of the Yen from the opening bell to market close. This presents slightly new challenges, as the close-to-open jumps are much larger than before, but these larger jumps do not necessarily pose problems for the MDFA. In fact, I look to exploit them and take advantage to gain profit by predicting the direction of the jump. For this higher frequency experiment, I considered 350 15-minute in-sample observations to build and optimize the trading signal, and then applied it over the span of 200 15-minute out-of-sample observations. This produced the results shown in Figure 5 below. Out of 17 total trades out-of-sample, there were only 3 small losses, each a drop of less than .5 percent, and thus 14 gains during the 200 15-minute out-of-sample time period. The beauty of this filter is its impeccable ability to predict the close-to-open jump in the price of the Yen. Over the nearly 7 day trading span, it was able to correctly deduce whether to buy or short-sell before market close on every single trading day change. In the figure below, the four largest close-to-open variations in Yen price are marked with a "D", and you can clearly see how well the signal was able to correctly deduce a short-sell before market close. This is also consistent with the in-sample performance, where you can notice the buys and/or short-sells at the largest close-to-open jumps (notice the large gain in the in-sample period right before the out-of-sample period begins, when the Yen jumped over 1 percent overnight). This performance is most likely aided by the explanatory time series I used for helping predict the close-to-open variation in the price of the Yen. In this example, I only used two explanatory series (the price of the Yen, and another closely related to the Yen).
Figure 5: Out-of-sample performance of the Japanese Yen filter on 15 minute log-return data.
We look at the filter transfer functions to see which frequencies are being privileged in the construction of the filter. Notice that some noise leaks out past the frequency cutoff at $\pi/9$, but this is typically normal and a non-issue. I had to balance both timeliness and smoothness in this filter using the customization parameters $\lambda$ and $\alpha$. Not much at frequency 0 is emphasized, with more emphasis stemming from the large spectral peak found right at $\pi/9$.
Figure 6: The filter transfer functions.
#### British Pound
Frequency: 30 minute returns
14 day out-of-sample ROI: 4 percent
British Pound Filter Parameters: $\lambda$ = 5, $\alpha$ = 15, $\omega_0 = \pi/9$
Regularization: smooth = .109, decay = .165, decay2 = .19, cross = 0
In this example we raise the frequency of the data to 30 minute returns and attempt to build a robust trading signal for a derivative of the British Pound (BP) at this higher frequency. Instead of using the cash value of the BP, I use 30 minute returns of the BP futures contract expiring in March (BPH3). Although I don’t have access to tick data from the FOREX, I do have tick data from the GLOBEX for the past 5 years. Thus the futures series won’t be an exact replication of the cash price series of the BP, but it should be quite close due to very low interest rates.
The results of the out-of-sample performance of the BP futures filter are shown in Figure 7. I constructed the filter using an initial in-sample size of 390 30-minute returns dating back to 1 December 2012. After pinpointing a frequency cutoff in the frequency domain for $\Gamma$ that yielded decent trading results in-sample, I then proceeded to optimize the filter in-sample on smoothness and regularization to achieve similar out-of-sample performance. Applying the resulting filter out-of-sample to 168 30-minute log-return observations of the BP futures series along with 3 explanatory series, I get the results shown below. There were 13 trades made, and 10 of them were successful. Notice that the filter does an exquisite job of triggering trades near local optima associated with the frequencies inside the cutoff of the filter.
Figure 7: The out-of-sample results of the British Pound using 30-minute return data.
In looking at the coefficients of the filter for each series in the extraction, we can clearly see the effects of the regularization: the smoothness of the coefficients and the fast decay at the very end. Notice that I never apply any cross regularization to stress the latitudinal likeliness between the 3 explanatory series, as I feel this would detract from the predictive advantages brought by the explanatory series that I used.
Figure 8: The coefficients for the 3 explanatory series of the BP futures.
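For readers curious how estimated coefficients like these turn into a tradeable signal, the mechanics are a one-sided multivariate convolution: at each time point, the signal is a weighted sum of the current and lagged log-returns of every explanatory series. Below is a minimal sketch of that step only; the class and method names are mine, and the coefficient values would come from the MDFA estimation, not from this snippet.

```java
// Sketch: form the real-time trading signal from estimated filter
// coefficients b[i][l] (series i, lag l) applied to log-return series
// x[i][t]. Hypothetical names; not the iMetrica/MDFA API.
class RealTimeSignal {

    // Returns s[t] = sum_i sum_l b[i][l] * x[i][t - l] for t >= L - 1;
    // earlier points are left at 0 since the full lag window is unavailable.
    static double[] apply(double[][] b, double[][] x) {
        int L = b[0].length;   // filter length (number of lags)
        int T = x[0].length;   // number of observations
        double[] s = new double[T];
        for (int t = L - 1; t < T; t++) {
            double v = 0.0;
            for (int i = 0; i < b.length; i++)
                for (int l = 0; l < L; l++)
                    v += b[i][l] * x[i][t - l];
            s[t] = v;
        }
        return s;
    }
}
```

With a single series {1, 2, 3} and a two-tap filter {1, 1}, the signal is just a moving sum of adjacent returns, which makes the role of the lag structure easy to see.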
#### Euro
Frequency: 30 min returns
30 day out-of-sample ROI: 4 percent
Euro Filter Parameters: $\lambda$ = 0, $\alpha$ = 6.4, $\omega_0 = \pi/9$
Regularization: smooth = .85, decay = .27, decay2 = .12, cross = .001
Continuing with the 30 minute frequency of log-returns, in this example I build a trading signal for the Euro futures contract with expiration on 18 March 2013 (UROH3 on the GLOBEX). My in-sample period, the same as in my previous experiment, runs from 1 December 2012 to 4 January 2013 on 30 minute returns using three explanatory time series. In this example, after inspecting the periodogram, I decided upon a low-pass filter with a frequency cutoff of $\pi/9$. After optimizing the customization and applying the filter to one month of 30 minute frequency return data out-of-sample (the month of January 2013, after the cyan line), we see the performance is akin to the performance in-sample, exactly what one strives for. This is due primarily to the heavy regularization of the filter coefficients involved. Only four very small losses of less than .02 percent each are suffered during the out-of-sample span, which includes 10 successful trades, and those losses are due only to the transaction costs. Without transaction costs, only one loss is suffered, at the very beginning of the out-of-sample period.
Figure 9: Out-of-sample performance on the 30-min log-returns of Euro futures contract UROH3.
As in the first example using hourly returns, this filter again exhibits the desired characteristics of a robust and high-performing financial trading filter. Notice that the out-of-sample performance behaves akin to the in-sample performance, where large upswings and downswings are pinpointed with high accuracy; in fact, these are the periods where the filter performs best. There is no need to take advantage of a multibandpass filter here, as all the profitable trading frequencies are found below $\pi/9$. Just as with the previous two experiments with the Yen and the British Pound, notice that the filter cleanly predicts the close-to-open variation (jump or drop) in the futures value and buys or sells as needed. This can be seen from many of the large jumps in the out-of-sample period (after the cyan line).
One reason why these trading signals perform so well is their power in approximating the symmetric filter. In comparing the trading signal (green) with a high-order approximation of the symmetric filter transfer function $\Gamma$ (gray line) shown in Figure 10, we see that the trading signal does an outstanding job of approximating the symmetric filter uniformly. Even at the latest observation (the right-most point), the asymmetric filter hones in on the symmetric signal (gray line) with near perfection. Most importantly, the signal crosses zero almost exactly where required. This is exactly what you want when building a high-performing trading signal.
Figure 10: Plot of approximation of the real-time trading signal for UROH3 with a high order approximation of the symmetric filter transfer function.
In looking at the periodogram of the log-return data and the trading-signal output differences (colored in blue), we see that the majority of the frequencies were accounted for, as expected when comparing the signal with the symmetric signal. Only an inconsequential amount of noise leakage past the frequency cutoff of $\pi/9$ is found. Notice that the larger trading frequencies, the more prominent spectral peaks, are located just after $\pi/6$. These could be taken into account with a smart multibandpass filter in order to manifest even more trades, but I wanted to keep things simple for my first trials with high-frequency foreign exchange data. I’m quite content with the results that I’ve achieved so far.
Figure 11: Comparing the periodogram of the signal with the log-return data.
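Since the periodogram drives so many of the choices throughout these experiments, a minimal sketch of how one can be computed may be useful. The class name and the $1/(2\pi T)$ normalization below are my own choices, and the plain $O(T^2)$ DFT is for clarity only — this is not necessarily how iMetrica computes it internally.

```java
// Sketch: periodogram I(omega_k) = |sum_t x_t e^{-i omega_k t}|^2 / (2 pi T)
// evaluated at the Fourier frequencies omega_k = 2 pi k / T, k = 0..T/2.
// Hypothetical class name; plain DFT for clarity, not for speed.
class Periodogram {

    static double[] compute(double[] x) {
        int T = x.length;
        double[] pgram = new double[T / 2 + 1];
        for (int k = 0; k <= T / 2; k++) {
            double re = 0.0, im = 0.0;
            for (int t = 0; t < T; t++) {
                double w = 2.0 * Math.PI * k * t / T;
                re += x[t] * Math.cos(w);   // real part of the DFT
                im -= x[t] * Math.sin(w);   // imaginary part of the DFT
            }
            pgram[k] = (re * re + im * im) / (2.0 * Math.PI * T);
        }
        return pgram;
    }
}
```

A pure cosine at the first Fourier frequency concentrates all its power in a single spectral peak at index 1, which is the kind of isolated peak one looks for when choosing pass-bands.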
#### Conclusion
I must admit, at first I was a bit skeptical of the effectiveness that the MDFA would have in building any sort of successful trading signal for FOREX/GLOBEX high frequency data. I always considered the FOREX market rather ‘efficient’ due to the fact that it receives one of the highest trading volumes in the world. Most strategies that supposedly work well on high-frequency FOREX all seem to use some form of technical analysis or charting (techniques I’m particularly not very fond of), most of which are purely time-domain based. The direct filter approach is a completely different beast, utilizing a transformation into the frequency domain and a ‘bending and warping’ of the metric space for the filter coefficients to extract a signal within the noise that is the log-return data of financial assets. For the MDFA to be very effective at building timely trading signals, the log-returns of the asset need to diverge from white noise a bit, giving room for pinpointing intrinsically important cycles in the data. However, after weeks of experimenting, I have discovered that building financial trading signals using MDFA and iMetrica on FOREX data is as rewarding as any other.
As my confidence has now been bolstered and amplified even more after my experience building financial trading signals with MDFA and iMetrica for high-frequency foreign exchange log-returns at nearly any frequency, I’d be willing to engage in a friendly competition with anyone out there who is certain that they can build better trading strategies using time-domain based methods such as technical analysis or any other statistical arbitrage technique. I strongly believe these frequency based methods are the way to go, and the new wave in financial trading. But it takes experience and a good eye for the frequency domain and periodograms to get used to. I haven’t seen many trading benchmarks that utilize other types of strategies, but I’m willing to bet that they are not as consistent as these results using this large an out-of-sample to in-sample ratio (the ratios in these experiments were between .50 and .80). If anyone would like to take me up on my offer of a friendly competition (or knows anyone who would), feel free to contact me.
After working with a multitude of different financial time series and building many different types of filters, I have come to the point where I can almost eyeball many of the filter parameter choices including the most important ones being the extractor $\Gamma$ along with the regularization parameters, without resorting to time consuming, and many times inconsistent, optimization routines. Thanks to iMetrica, transitioning from visualizing the periodogram to the transfer functions and to the filter coefficients and back to the time domain to compare with the approximate symmetric filter in order to gauge parameter choices is an easy task, and an imperative one if one wants to build successful trading signals using MDFA.
Here are some overall tips and tricks to build your own high performance trading signals on high-frequency data at home:
• Pay close attention to the periodogram. This is your best friend in choosing the extractor $\Gamma$. The best performing signals are not the ones that trade often, but trade on the most important frequencies found in the data. Not all frequencies are created equal. This is true when building either low-pass or multibandpass frequencies.
• When tweaking customization, always begin with $\alpha$, the parameter for smoothness. $\lambda$ for timeliness should be the last resort. In fact, this parameter will most likely be next to useless due to the fact that the log-return of financial data is stationary. You probably won’t ever need it.
• You don’t need many explanatory series. Like most things in life, quality is superior to quantity. Using the log-return data of the asset you’re trading along with one, or maybe two, explanatory series that somewhat correlate with that asset is sufficient. Any more than that is ridiculous overkill, probably leading to over-fitting (even the power of regularization at your fingertips won’t help you).
In my next article, I will continue with even more high-frequency trading strategies with the MDFA and iMetrica where I will engage in the sector of Funds and ETFs. If any curious reader would like even more advice/hints/comments on how to build these trading signals on high-frequency data for the FOREX (or the coefficients built in these examples), feel free to get in contact with me via email. I’ll be happy to help.
Happy extracting!
# Realizing the Future with iMetrica and HEAVY Models
In this article we steer away from multivariate direct filtering and signal extraction in financial trading and briefly indulge ourselves a bit in the world of analyzing high-frequency financial data, an always hot topic with the ever increasing availability of tick data in computationally convenient formats. Not only has high-frequency intraday data been the basis of higher frequency risk monitoring and forecasting, but it also provides access to building ‘smarter’ volatility prediction models using so-called realized measures of intraday volatility. These realized measures have been shown in numerous studies over the past 5 years or so to provide a solidly more robust indicator of daily volatility. While daily returns only capture close-to-close volatility, leaving much to be said about the actual volatility of the asset that was witnessed during the day, realized measures of volatility using higher frequency data such as second or minute data provide a much clearer picture of open-to-close variation in trading.
In this article, I briefly describe a new type of volatility model that takes into account these realized measures for volatility movement called High frEquency bAsed VolatilitY (HEAVY) models developed and pioneered by Shephard and Sheppard 2009. These models take as input both close-to-close daily returns $r_t$ as well as daily realized measures to yield better forecasting dynamics. The models have been shown to be endowed with the ability to not only track momentum in volatility, but also adjust for mean reversion effects as well as adjust quickly to structural breaks in the level of the volatility process. As the authors (Sheppard and Shephard, 2009) state in their original paper, the focus of these models is on predictive properties, rather than on non-parametric measurement of volatility. Furthermore, HEAVY models are much easier and more robust to estimate than single-source equations (GARCH, stochastic volatility), as they bring two sources of volatility information to bear in identifying a longer-term component of volatility.
The goal of this article is three-fold. Firstly, I briefly review these HEAVY models and give some numerical examples of the model in action using a gnu-c library and Java package called heavy_model that I developed last year for the iMetrica software. The heavy_model package is available for download (either by this link or e-mail me) and features many options that are not available in the MATLAB code provided by Sheppard (bootstrapping methods, Bayesian estimation, track reparameterization, among others). I will then demonstrate the seamless ability to model volatility with these High frEquency bAsed VolatilitY models using iMetrica, where I also provide code for computing realized measures of volatility in Java with the help of an R package called highfrequency (Boudt, Cornelissen, and Payseur 2012).
#### HEAVY Model Definition
Let’s denote the daily returns as $r_1, r_2, \ldots, r_T$, where $T$ is the total number of days in the sample we are working with. In the HEAVY model, we supplement the information in the daily returns with a so-called realized measure of intraday volatility based on higher frequency data, such as second, minute or hourly data. These measures are called daily realized measures and we will denote them as $RM_1, RM_2, \ldots, RM_T$ for the total number of days in the sample. We can think of these daily realized measures as an average of variance autocorrelations during a single day; they are supposed to provide a better snapshot of the ‘true’ volatility for a specific day $t$. Although there are numerous ways of computing a realized measure, the easiest is the realized variance computed as $RM_t = \sum_j (X_{t+t_{j,t}} - X_{t+t_{j-1,t}})^2$ where $t_{j,t}$ are the normalized times of trades on day $t$. Other methods for providing realized measures include kernel based methods, which we will discuss later in this article (see for example http://papers.ssrn.com/sol3/papers.cfm?abstract_id=927483).
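As a concrete sketch of this simplest realized measure, the method below sums the squared intraday log-price increments for a single day. The class and method names are illustrative and not part of the heavy_model package.

```java
// Sketch of the simplest realized measure: the realized variance of one
// trading day, RM_t = sum_j (X_{t_j} - X_{t_{j-1}})^2, computed from a
// day's grid of intraday log-prices. Hypothetical names, for illustration.
class RealizedMeasure {

    static double realizedVariance(double[] intradayLogPrices) {
        double rm = 0.0;
        for (int j = 1; j < intradayLogPrices.length; j++) {
            double ret = intradayLogPrices[j] - intradayLogPrices[j - 1];
            rm += ret * ret;  // squared intraday log-return
        }
        return rm;
    }
}
```

For example, three 5-minute log-prices {0, .010, .005} yield two intraday returns, .01 and -.005, whose squares sum to roughly 1.25e-4 for that day.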
Once the realized measures have been computed for $T$ days, the HEAVY model is given by:
$Var(r_t | \mathcal{F}_{t-1}^{HF}) = h_t = \omega_1 + \alpha RM_{t-1} + \beta h_{t-1} + \lambda r^2_{t-1}$
$E(RM_t | \mathcal{F}_{t-1}^{HF}) = \mu_t = \omega_2+ \alpha_R RM_{t-1} + \beta_R \mu_{t-1},$
where the stability constraints are $\alpha, \omega_1 \geq 0, \beta \in [0,1]$ and $\omega_2, \alpha_R \geq 0$ with $\lambda + \beta \in [0,1]$ and $\beta_R + \alpha_R \in [0,1]$. Here, the $\mathcal{F}_{t-1}^{HF}$ denotes the high-frequency information from the previous day $t-1$. The first equation models the close-to-close conditional variance and is akin to a GARCH type model, whereas the second equation models the conditional expectation of the open-to-close variation.
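Given a set of parameters, the two recursions are straightforward to filter forward through the sample. The sketch below uses hypothetical names (it is not the heavy_model API), initializes $h_0$ and $\mu_0$ at the sample means of $r_t^2$ and $RM_t$, and uses the lagged squared return $r_{t-1}^2$, consistent with conditioning on time $t-1$ information.

```java
// Illustrative filter of the two HEAVY recursions; hypothetical names,
// a sketch of the model equations rather than the heavy_model library.
class HeavyFilter {

    // Returns {h, mu} where
    //   h[t]  = w1 + alpha  * RM[t-1] + beta  * h[t-1] + lambda * r[t-1]^2
    //   mu[t] = w2 + alphaR * RM[t-1] + betaR * mu[t-1]
    static double[][] filter(double[] r, double[] rm,
            double w1, double alpha, double beta, double lambda,
            double w2, double alphaR, double betaR) {
        int T = r.length;
        double[] h = new double[T], mu = new double[T];
        double r2bar = 0.0, rmbar = 0.0;
        for (int t = 0; t < T; t++) { r2bar += r[t] * r[t] / T; rmbar += rm[t] / T; }
        h[0] = r2bar;    // crude start values: sample mean of squared returns
        mu[0] = rmbar;   // and sample mean of the realized measures
        for (int t = 1; t < T; t++) {
            h[t]  = w1 + alpha * rm[t - 1] + beta * h[t - 1] + lambda * r[t - 1] * r[t - 1];
            mu[t] = w2 + alphaR * rm[t - 1] + betaR * mu[t - 1];
        }
        return new double[][]{h, mu};
    }
}
```

In the full package the parameters are of course not given but estimated by quasi-maximum likelihood; this sketch only shows the filtering step that the likelihood evaluation is built on.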
With the formulation above, one can easily see that slight variations to the model are perfectly plausible. For example, one could consider additional lags in either the realized measure $RM_t$ (akin to adding additional moving average parameters) or the conditional mean/variance variable (akin to adding autoregression parameters). One could also leave out the dependence on the squared returns by setting $\lambda$ to zero, which is what the original authors recommended. A third variation is adding yet another equation to the pack that models a realized measure that takes into account negative and positive momentum to yield possibly better forecasts, as it tracks both losses and gains in the model. In this case, one would add the third component by introducing a new equation for a realized semivariance to parametrically model statistical leverage effects, where falls in asset prices are associated with increases in future volatility. With realized semivariance computed for the $T$ days as $RMS_1, \ldots RMS_T$, the third equation becomes
$E(RMS_t | \mathcal{F}_{t-1}^{HF}) = \phi_t = \omega_3 + \alpha_{RS} RMS_{t-1} + \beta_{RS} \phi_{t-1}$
where $\alpha_{RS} + \beta_{RS} < 1$ and both positive.
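The downside realized semivariance entering this third equation simply restricts the realized-variance sum to the negative intraday returns. A short sketch, again with illustrative names rather than anything from the heavy_model package:

```java
// Sketch: downside realized semivariance for one trading day — the
// realized-variance sum restricted to negative intraday log-returns.
// Hypothetical names, for illustration only.
class RealizedSemivariance {

    static double downside(double[] intradayLogPrices) {
        double rms = 0.0;
        for (int j = 1; j < intradayLogPrices.length; j++) {
            double ret = intradayLogPrices[j] - intradayLogPrices[j - 1];
            if (ret < 0.0) rms += ret * ret;  // count only the falls
        }
        return rms;
    }
}
```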
#### HEAVY modeling in C and Java
To incorporate these HEAVY models into iMetrica, I began by writing a gnu-c library providing a fast and efficient framework for both quasi-likelihood evaluation and a posteriori analysis of the models. The estimation of the models follows the original MATLAB code provided by Sheppard very closely; however, in the c library I’ve added a few more useful tools for forecasting and distribution analysis. The Java code is essentially a wrapper for the c heavy_model library that provides a much cleaner approach to modeling and to analyzing the HEAVY data, such as the parameters and forecasts. While there are many ways to declare, implement, and analyze HEAVY models using the c/java toolkit I provide, the most basic steps involved are as follows.
```java
heavyModel heavy = new heavyModel();
heavy.setForecastDimensions(n_forecasts, n_steps);
heavy.setParameterValues(w1, w2, alpha, alpha_R, lambda, beta, beta_R);
heavy.setTrackReparameter(0);
heavy.setData(n_obs, n_series, series);
heavy.estimateHeavyModel();
```
The first line declares a HEAVY model in Java, while the second line sets the number of forecast samples to compute and how many forecast steps to take. Forecasted values are provided both for the return variable $r_t$ (using a bootstrapping methodology) and for the $h_t$, $\mu_t$ variables. In the next line, the parameter values for the HEAVY model are initialized. These are the initial points utilized in the quasi-maximum likelihood optimization routine and can be set to any values that satisfy the model constraints. Here, $w1 = \omega_1, w2 = \omega_2$.
The fourth line is completely optional and is used for toggling (0 = off, 1 = on) a reparameterization of the HEAVY model so that the intercepts of both equations are explicitly related to the unconditional means of the squared returns $r^2$ and realized measures $RM_t$. The reparameterization has the advantage that it eliminates the estimation of $\omega_1, \omega_2$ and instead uses the unconditional means, leaving two fewer degrees of freedom in the optimization. See page 12 of the Shephard and Sheppard 2009 paper for a detailed explanation of the reparameterization. After setting the initial values, the data is set for the model by inputting the total number of observations $T$, the number of series (normally set to 2), and the data in column-wise format (namely a double array of length n_obs x n_series, where the first column is the return data $r_t$ and the second column is the daily realized measure data). Finally, with the data set and the parameters initialized, we estimate the model in the 6th line. Once the model has finished estimating (it should take a few seconds, depending on the number of observations), the heavyModel java object stores the parameter values, forecasts, model residuals, likelihood values, and more. For example, one can print out the estimated model parameters and plot the forecasts of $h_t$ using the following:
```java
heavy.printModelParameters();
heavy.plotForecasts();
```

Output:

```
w_1 = 0.063     w_2 = 0.053
beta = 0.855    beta_R = 0.566
alpha = 0.024   alpha_R = 0.375
lambda = 0.087
```
Figure 1 shows the plot of the filtered $h_t, \mu_t$ values for 300 trading days from June 2011 to June 2012 of AAPL with the final 20 points being the forecasted values. Notice that the multistep ahead forecast shows momentum which is one of the attractive characteristics of the HEAVY models as mentioned in the original paper by Shephard and Sheppard.
Figure 1: Plots of the filtered returns and realized measures with 20 step forecasts for Verizon for 300 trading days.
We can also easily plot the estimated joint distribution function $F_{\zeta, \eta}$ by simply using the filtered $h_t, \mu_t$ and computing the devolatilized values $\zeta_t = r_t/ \sqrt{h_t}$, $\eta_t = (RM_t/\mu_t)^{1/2}$, leading to the innovations for the model for $t = 2,\ldots,T$.
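Computing these devolatilized values from the filtered series is a one-liner per sequence; here is a small sketch with hypothetical names, not the heavy_model API:

```java
// Sketch: devolatilized innovations from the filtered h_t and mu_t,
//   zeta_t = r_t / sqrt(h_t),   eta_t = sqrt(RM_t / mu_t).
// Hypothetical class and method names, for illustration.
class Devolatilize {

    static double[] zeta(double[] r, double[] h) {
        double[] z = new double[r.length];
        for (int t = 0; t < r.length; t++) z[t] = r[t] / Math.sqrt(h[t]);
        return z;
    }

    static double[] eta(double[] rm, double[] mu) {
        double[] e = new double[rm.length];
        for (int t = 0; t < rm.length; t++) e[t] = Math.sqrt(rm[t] / mu[t]);
        return e;
    }
}
```

If the model fits well, the $\zeta_t$ should have roughly unit variance and the $\eta_t$ roughly unit conditional means; a day whose return exactly matches its conditional volatility devolatilizes to 1.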
Figure 2 below shows the empirical distribution of $F_{\zeta, \eta}$ for 600 days (nearly two years of daily observations from AAPL). The $\zeta_t$ sequence should be roughly a martingale difference sequence with unit variance, and the $\eta_t$ sequence should have unit conditional means and of course be uncorrelated. The empirical results validate the theoretical values.
Figure 2: Scatter plot of the empirical distribution of devolatilized values for h and mu.
In order to compile and run the heavy_model library and the accompanying java wrapper, one must first be sure to meet the requirements for installation. The programs were extensively tested on a 64-bit Linux machine running Ubuntu 12.04. The heavy_model library written in c uses the GNU Scientific Library (GSL) for the matrix-vector routines along with a statistical package in gnu-c called apophenia (Klemens, 2012) for the optimization routine. I’ve also included a wrapper for the GSL library called multimin.c, which enables using the optimization routines from the GSL library, though this was not heavily tested. The first version (version 00) of the heavy_model library and java wrapper can be downloaded at sourceforge.net/projects/highfrequency. As a precautionary warning, I must confess that none of the files are heavily commented as this is still a project in progress. Improvements in code, efficiency, and documentation will be continuously coming.
After downloading the .tar.gz package, first ensure that GSL and Apophenia are properly installed and the libraries are correctly installed to the appropriate path for your gnu c compiler. Second, to compile the .c code, copy the makefile.test file to Makefile and then type make. To compile the heavyModel library and utilize the java heavyModel wrapper (recommended), copy makefile.lib to Makefile, then type make. After it constructs libheavy.so, compile the heavyModel.java file by typing javac heavyModel.java. Note that the java files were compiled successfully using the Oracle Java 7 SDK. If you have any questions about this or any of the c or java files, feel free to contact me. All the files were written by me (except for the optional multimin.c/h files for the optimization) and some of the subroutines (such as the HEAVY model simulation) are based on the MATLAB code by Sheppard. Even though I fully tested and reproduced the results found in other experiments exploring HEAVY models, there still could be bugs in the code. I have not fully tested every aspect (especially the Bayesian estimation components, an ongoing effort) and if anyone would like to add, edit, test, or comment on any of the routines involved in either the c or java code, I’d be more than happy to welcome it.
#### HEAVY Modeling in iMetrica
The Java wrapper to the gnu-c heavy_model library was installed in the iMetrica software package and can be used for GUI-style modeling of high-frequency volatility. The HEAVY modeling environment is a feature of the BayesCronos module in iMetrica, which also features other stochastic models for capturing and forecasting volatility such as (E)GARCH, stochastic volatility, multivariate stochastic factor modeling, and ARIMA modeling, all using either standard (Q)MLE model estimation or a Bayesian estimation interface (with histograms showing the MCMC results of the parameter chains).
Modeling volatility with HEAVY models is done by first uploading the data into the BayesCronos module (shown in Figure 3) through the use of either the BayesCronos Menu (featured on the top panel) or by using the Data Control Panel (see my previous article on Data Control).
Figure 3: BayesCronos interface in iMetrica for HEAVY modeling.
In the BayesCronos control panel shown above, we estimate a HEAVY model for the uploaded data (600 observations of $r_t, RM_t$) that were simulated from a model with omega_1 = 0.05, omega_2 = 0.10, beta = 0.8, beta_R = 0.3, alpha = 0.02, alpha_R = 0.3 (the simulation was done in the Data Control Module).
The model type is selected in the panel under the Model combobox. The number of forecasting steps and forecasting samples (for the $r_t$ variable) are selected in the Forecasting panel. Once those values are set, the model estimates are computed by pressing the “MLE” button in the bottom lower left corner. After the computing is done, all the available plots to analyze the HEAVY model are available by simply clicking the appropriate plotting checkboxes directly below the plotting canvas. This includes up to 5 forecasts, the original data, the filtered $h_t, \mu_t$ values, the residuals/empirical distributions of the returns and realized measures, and the pointwise likelihood evaluations for each observation. To see the estimated parameter values, simply click the “Parameter Values” button in the “Model and Parameters” panel and a pop-up control panel will appear showing the estimated values for all the parameters.
#### Realized Measures in iMetrica
Figure 4: Computing Realized measures in iMetrica using a convenient realized measure control panel.
Importing and computing realized volatility measures in iMetrica is accomplished by using the control panel shown in Figure 4. With access to high frequency data, one simply types in the ticker symbol in the “Choose Instrument” box, sets the starting and ending date in the standard CCYY-MM-DD format, and then selects the kernel used for assembling the intraday measurements. The Time Scale sets the frequency of the data (seconds, minutes, hours) and the period scrollbar sets the alignment of the data. The Lags combo box determines the bandwidth of the kernel measuring the volatility. Once all the options have been set, clicking on the “Compute Realized Volatility” button will produce three data sets for the period between the start date and end date: 1) the daily log-returns of the asset $r_1, \ldots, r_T$, 2) the log-price of the asset, and 3) the realized volatility measure $RM_1, \ldots, RM_T$. Once the Java-R highfrequency routine has finished computing the realized measures, the data sets are automatically available in the Data Control Module of iMetrica. From here, one can annualize the realized measures using the weight adjustments in the Data Control Module (see Figure 5). Once content with the weighting, the data can then be exported to the MDFA module or the BayesCronos module for estimating and forecasting the volatility of GOOG using HEAVY models.
Figure 5: The log-return data (blue) and the (annualized) realized measure data using 5 minute returns (pink) for Google from 1-1-2011 to 6-19-2012.
The Realized Measure uploading in iMetrica utilizes a fantastic R package for studying and working with high frequency financial data called highfrequency (Boudt, Cornelissen, and Payseur 2012). To handle the analysis of high frequency financial data in Java, I began by writing a Java wrapper to the functions of the highfrequency R package to enable the GUI interaction shown above, in order to download the data into Java and then iMetrica. The Java environment uses a library called RCaller that opens a live R kernel in the Java runtime environment, from which I can call R routines and directly load the data into Java. The initializing sequence looks like this.
```java
caller.getRCode().addRCode("require (Runiversal)");
caller.getRCode().addRCode("require (FinancialInstrument)");
caller.getRCode().addRCode("require (highfrequency)");
caller.getRCode().addRCode("loadInstruments('/HighFreqDataDirectoryHere/Market/instruments.rda')");
caller.getRCode().addRCode("setSymbolLookup.FI('/HighFreqDataDirectoryHere/Market/sec',use_identifier='X.RIC',extension='RData')");
```
Here, I’m declaring the R packages that I will be using (first three lines) and then declaring where my high frequency financial data symbol lookup directory is on my computer (next two lines). This then enables me to extract high frequency tick data directly into Java. After loading in the desired instrument ticker symbol names, I then proceed to extract the daily log-returns for the given time frame, and then compute the realized measures of each asset using the rKernelCov function in the highfrequency R package. This looks something like:
```java
for (i = 0; i < n_assets; i++) {
    // restrict each instrument's tick data to market hours (America/New_York)
    String mark = instrum[i] + "<-" + instrum[i] + "['T09:30/T16:00',]";
    caller.getRCode().addRCode(mark);

    // build the rKernelCov call with the user-defined kernel parameters
    String rv = "rv" + i + "<-rKernelCov(" + instrum[i] + "$Trade.Price, kernel.type=" + kernels[kern]
              + ", kernel.param=" + lags + ", kernel.dofadj=FALSE, align.by=" + frequency[freq]
              + ", align.period=" + period + ", cts=TRUE, makeReturns=TRUE)";
    caller.getRCode().addRCode(rv);
    caller.getRCode().addRCode("names(rv" + i + ")<-'rv" + i + "'");

    // extract the results into an R list readable from the Java runtime
    rvs[i] = "rv_list" + i;
    caller.getRCode().addRCode("rv_list" + i + "<-lapply(as.list(rv" + i + "), coredata)");
}
```

In the first line, I’m looping through all the asset symbols (I create Java strings to load into the RCaller as commands). The second line effectively retrieves the data during market hours only (America/New_York time). I then create a string to call the rKernelCov function in R, passing it all the user-defined parameters as strings as well. Finally, in the last two lines, I extract the results and put them into an R list from which the Java runtime environment will read.

#### Conclusion

In this article I discussed a recently introduced high frequency based volatility model by Shephard and Sheppard and gave an introduction to three different high-performance tools beyond MATLAB and R that I’ve developed for analyzing these new stochastic models. The heavyModel c/java package that I made available for download gives a workable start for experimenting, in a fast and efficient framework, with high frequency financial data and most notably with realized measures of volatility to produce better forecasts. The package will be continuously updated with improvements in documentation, bug fixes, and overall presentation. Finally, the use of the R package highfrequency embedded in Java and then utilized in iMetrica gives a fully GUI experience for stochastic modeling of high frequency financial data that is both conveniently easy to use and fast.

Happy Extracting and Volatilitizing!

# Building a Multi-Bandpass Financial Portfolio

Animation 1: Click the image to view the animation.
The changing periodogram for different in-sample sizes, and selecting an appropriate band-pass component for the multi-bandpass filter.

In my previous article, the third installment of the Frequency Effect trilogy, I introduced the multi-bandpass (MBP) filter design as a practical device for the extraction of signals in financial data that can be used for trading in multiple types of market environments. As depicted through various examples using daily log-returns of Google (GOOG) as my trading platform, the MBP demonstrated a promising ability to tackle the issue of combining a lowpass filter, to include a local bias and slow moving trend, with access to higher trading frequencies for systematic trading during sideways and volatile market trajectories. I identified four different types of market environments and showed through three different examples how one can attempt to pinpoint and trade optimally in these different environments.

After reading a well-written and informative critique of my latest article, I became motivated to continue along on the MBP bandwagon by extending the exploration of engineering robust trading signals using the new design. In Marc’s (the reviewer’s) words regarding the initial results of this latest design in MDFA signal extraction for financial trading:

“I tend to believe that some of the results are not necessarily systematic and that some of the results – Chris’ preference – does not match my own priority. I understand that comparisons across various designs of the triptic may require a fixed empirical framework (Google/Apple on a fixed time span). But this restricted setting does not allow for more general inference (on other assets and time spans).
And some of the critical trades are (at least in my perspective) close to luck.”

As my empirical framework was fixed, in that I applied the designed filters to only one asset throughout the study and for a fixed time span of a year's worth of in-sample data applied to 90 days out-of-sample, results showing the MBP framework applied to other assets and time frames might have made my presentation of this new design more convincing. Taking this relevant issue of a limited empirical framework into account, I am extending my previous article many steps further by presenting the creation of a collection of financial trading signals based entirely on the MBP filter. The purpose of this article is to further solidify the potential of MBP filters and extend applications of the new design to constructing signals for various types of financial assets and in-sample/out-of-sample time frames. To do this I will create a portfolio of assets comprised of a group of well-known companies coupled with two commodity ETFs (exchange traded funds) and apply the MBP filter strategy to each of the assets using various out-of-sample time horizons. Consequently, this will generate a portfolio of trading signals that I can track over the next several months.

### Portfolio selection

In choosing the assets for my portfolio, I arranged a group of companies/commodities whose products or services I use on a consistent basis (as arbitrary as any other portfolio selection method, right?). To this end, I chose Verizon (VZ) (service provider for my iPhone 5), Microsoft (MSFT) (even though I mostly use Linux for my computing needs), Toyota (TM) (I drive a Camry), Coffee (JO) (my morning espresso keeps the wheels turning), and Gold (GLD) (who doesn't like gold, a great hedge to any currency).
For each of these assets, I built a trading signal using various in-sample time periods beginning in the summer of 2011 and ending toward the end of summer 2012, to ensure all seasonal market effects were included. The out-of-sample time period in which I test the performance of the filter for each asset ranges anywhere from 90 days to 125 days out-of-sample. I tried to keep the selection of in-sample and out-of-sample points as arbitrary as possible.

### Portfolio Performance

And so here we go. The performance of the portfolio.

#### Coffee (NYSEARCA:JO)

• Regularization: smooth = .22, decay = .22, decay2 = .02, cross = 0
• MBP = [0, .2], [.44,.55]
• Out-of-sample performance: 32 percent ROI in 110 days

In order to work with commodities in this portfolio, the easiest way is through the use of ETFs that are traded on open markets just as any other asset. I chose the Dow Jones-UBS Coffee Subindex JO, which is intended to reflect the returns that are potentially available through an unleveraged investment in one futures contract on the commodity of coffee as well as the rate of interest that could be earned on cash collateral invested in specified Treasury Bills. To create the MBP filter for the JO index, I used JO and USO (a US oil ETF) as the explanatory series from the dates of 5-5-2011 until 1-13-2013 (just a random date I picked from mid-2011, Cinco de Mayo) and set the initial lowpass portion for the trend component of the MBP filter to [0, .17]. After a significant amount of regularization was applied, I added a bandpass portion to the filter by initializing an interval at [.4, .5]. This corresponded to the principal spectral peak in the periodogram, which was located just below $\pi/6$ for the coffee fund. After setting the number of out-of-sample observations to 110, I then proceeded to optimize the regularization parameters in-sample while ensuring that the transfer functions of the filter were no greater than 1 at any point in the frequency domain.
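That last constraint, keeping the transfer function no greater than 1 anywhere on $[0, \pi]$, can be verified numerically from the filter coefficients alone. A minimal Java sketch of the check; the coefficient vector in the example is a toy moving average, not the actual JO filter:

```java
// Hedged sketch, not iMetrica code: checking that a concurrent filter's
// transfer function magnitude stays <= 1 over the frequency domain.
public class TransferCheck {

    // Magnitude of the filter's frequency response at omega:
    // |b_hat(omega)| = |sum_j b_j * exp(-i*omega*j)|
    public static double magnitude(double[] b, double omega) {
        double re = 0.0, im = 0.0;
        for (int j = 0; j < b.length; j++) {
            re += b[j] * Math.cos(omega * j);
            im -= b[j] * Math.sin(omega * j);
        }
        return Math.hypot(re, im);
    }

    // Largest magnitude over an equally spaced grid on [0, pi].
    public static double maxMagnitude(double[] b, int gridPoints) {
        double max = 0.0;
        for (int k = 0; k <= gridPoints; k++) {
            double omega = Math.PI * k / gridPoints;
            max = Math.max(max, magnitude(b, omega));
        }
        return max;
    }

    public static void main(String[] args) {
        // Toy moving-average coefficients; the response peaks at frequency zero.
        double[] b = {0.25, 0.25, 0.25, 0.25};
        System.out.println("max |b_hat| on [0,pi] = " + maxMagnitude(b, 600));
    }
}
```

If `maxMagnitude` creeps above 1 after a parameter change, the filter is amplifying some frequency band and the regularization or cutoffs need revisiting.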
The result of the filter is plotted below in Figure 1, with the transfer functions of the filters plotted below it. The resulting trading signal from the MBP filter is in green with the out-of-sample portion after the cyan line, the cumulative return on investment (ROI) percentage in blue-pink, and the daily price of JO, the coffee fund, in gray.

Figure 1: The MBP filter for JO applied to 110 out-of-sample points (after the cyan line).

Figure 2: Transfer functions for the JO and USO MBP filters.

Notice the out-of-sample portion of 110 observations behaving akin to the in-sample portion before it, with a .97 rank coefficient of the cumulative ROI resulting from the trades. The ROI in the out-of-sample portion was 32 percent total and suffered only 4 small losses out of 18 trades. The concurrent transfer functions of the MBP filter clearly indicate where the principal spectral peak for JO (blue-ish line) is, directly under the bandpass portion of the filter. Notice the signal produced no trades during the steepest descent and rise in the price of coffee, while pinpointing precisely at the right moment the major turning point (right after the in-sample period). This is exactly what you would like the MBP signal to achieve.

#### Gold (SPDR Gold Trust, NYSEARCA:GLD)

As one of the more difficult assets on which to form a well-performing signal both in-sample and out-of-sample using the MBP filter, the GLD (NYSEARCA:GLD) ETF proved to be quite cumbersome, not only in locating an optimal bandpass portion for the MBP, but also in finding a relevant explanatory series for GLD. In the following formulation, I settled upon using a US dollar index given by the PowerShares ETF UUP (NYSEARCA:UUP), as it ended up giving me a very linear performance that is consistent both in-sample and out-of-sample.
The parameterization for this filter is given as follows:

• Regularization: smooth = .22, decay = .22, decay2 = .02, cross = 0
• MBP = [0, .2], [.44,.55]
• Out-of-sample performance: 11 percent ROI in 102 days

Figure 3: Out-of-sample results of the MBP applied to the GLD ETF for 102 observations.

Figure 4: The transfer functions for the GLD and DIG filter.

Figure 5: Coefficients for the GLD and DIG filters. Each is of length 76.

The smoothness and decay in the coefficients is quite noticeable, along with a slight lag correlation along the middle of the coefficients between lags 10 and 38. This trio of characteristics in the above three plots is exactly what one strives for in building financial trading signals: 1) the smoothness and decay of the coefficients, 2) the transfer functions of the filter not exceeding 1 in the low and band pass, and 3) linear performance both in-sample and out-of-sample of the trading signal.

#### Verizon (NYSE:VZ)

• Regularization: smooth = .22, decay = 0, decay2 = 0, cross = .24
• MBP = [0, .17], [.58,.68]
• Out-of-sample performance: 44 percent ROI in 124 days trading

The experience of engineering a trading signal for Verizon was one of the longest and more difficult of the 5 assets in this portfolio; strangely, a very difficult asset to work with. Nevertheless, I was determined to find something that worked. To begin, I ended up using AAPL as my explanatory series (which isn't a far-fetched idea, I would imagine; after all, I utilize Verizon as my carrier service for my iPhone 5). After playing around with the regularization parameters in-sample, I chose a 124-day out-of-sample horizon for Verizon on which to apply the filter and test the performance. Surprisingly, the cross regularization seemed to produce very good results both in-sample and out-of-sample. This was the only asset in the portfolio that required a significant amount of cross regularization, with the parameter touching the vicinity of .24.
Another surprise was how high the timeliness parameter $\lambda$ was (40) in order to produce good in-sample and out-of-sample trading results, by far the highest amount of the 5 assets in this study. The amount of smoothing from the weighting function $W(\omega; \alpha)$ was also relatively high, reaching a value of 20. The out-of-sample performance is shown in Figure 6. Notice how dampened the values of the trading signal are in this example, where the local bias during the long upswings is present, but not visible due to the size of the plot. The out-of-sample performance (after the cyan line) seems to be superior to that of the in-sample portion. This is most likely due to the fact that the majority of the frequencies that we were interested in, near $\pi/6$, failed to become prominent in the data until the out-of-sample portion (there were around 120 trading days not shown in the plot, as I only keep a maximum of 250 plotted on the canvas). With 124 out-of-sample observations, the signal produced a performance of 44 percent ROI. The filter seems to cleanly and consistently pick out local turning points, although not always at their optimal point, but the performance is quite linear, which is exactly what you strive for.

Figure 6: The out-of-sample performance on 124 observations from 7-2012 to 1-13-2013.

Figure 7: Coefficients up to lag 76 of the Verizon-Apple filter.

In the coefficients for the VZ and AAPL data shown in Figure 7, one can clearly see the distinguishing effects of the cross regularization along with the smooth regularization.
Note that no decay regularization was needed in this example, with the resulting number of effective degrees of freedom in the construction of this filter being 48.2, an important number to consider when applying regularization to filter coefficients (the filter length was 76).

#### Microsoft (NASDAQ:MSFT)

• Regularization: smooth = .42, decay = .24, decay2 = .15, cross = 0
• MBP = [0, .2], [.59,.72]
• Out-of-sample performance: 31 percent ROI in 90 days trading

For the Microsoft data I used a time span of a year and three months for my in-sample period and a 90-day out-of-sample period from August through 1-13-2013. My explanatory series was GOOG (the search engines Bing and Google seem to have quite the competition going on, so why not), which seemed to correlate rather cleanly with the share price of MSFT. The first step in obtaining a bandpass, after setting my lowpass filter to [0, .2], was to locate the principal spectral peak (shown in the periodogram figure below). I then adjusted the width until I had near-monotone performance in-sample. Once the customization and regularization parameters were found, I applied the MSFT/GOOG filter to the 90-day out-of-sample period and the result is shown below. Notice that the effects of the local bias and slow-moving trends from the lowpass filter are seen in the output trading signal (green) and help in identifying the long downswings found in the share price. During the long downswings, there are no trades due to the local bias from frequency zero.

Figure 8: Microsoft trading signal for 90 out-of-sample observations. The ROI out-of-sample is 31 percent.

Figure 9: Aggregate periodogram of MSFT and Google showing the principal spectral peak directly inside the bandpass.

Figure 10: The coefficients for the MSFT and GOOG series up to lag 76.

With a healthy amount of regularization applied to the coefficient space, we can clearly see the smoothness and decay towards the end of the coefficient lags.
The cross regularization parameter provided no improvement to either in-sample or out-of-sample performance and was left set to 0. Despite the superb performance of the signal out-of-sample, with a 31 percent ROI in 90 days in a period which saw the share price descend by 10 percent, and relatively smooth, decaying coefficients with consistent performance both in and out-of-sample, I still feel like I could improve on these results with a better explanatory series than GOOG. That is one area of this methodology in which I struggle, namely finding "good" explanatory series to better fortify the in-sample metric space and produce even more anticipation in the signals. At this point it's a game of trial and error. I suppose I should find a good market economist to direct these questions to.

#### Toyota (NYSE:TM)

• Regularization: smooth = .90, decay = .14, decay2 = .72, cross = 0
• MBP = [0, .21], [.49,.67]
• Out-of-sample performance: 21 percent ROI in 85 days trading

For the Toyota series, I figured my first explanatory series to test things with would be an asset pertaining to the price of oil. So I decided to dig up some research and found that DIG (NYSEARCA:DIG), a ProShares ETF, provides direct exposure to the global price of oil and gas (in fact, it is leveraged so that it corresponds to twice the daily performance of the Dow Jones U.S. Oil & Gas Index). The out-of-sample performance, with heavy regularization in both smooth and decay, seems to perform quite consistently with in-sample. The signal shows signs of patience during volatile upswings, which is a sign that the local bias and slow-moving trend extraction are quietly at work. Otherwise, the gains are consistent, with just a few very small losses. At the end of the out-of-sample portion, namely the past several weeks since Black Friday (November 23rd), notice the quick climb in the stock price of Toyota.
The signal is easily able to deduce this fast climb and is now showing signs of a slowdown from the recent rise (the signal is approaching the zero crossing, that's how I know). I love what you do for me, Toyota! (If you were living in the US in the 1990s, you'll understand what I'm referring to.)

Figure 11: Out-of-sample performance of the Toyota trading signal on 85 trading days.

Figure 12: Coefficients for the TM and DIG log-return series.

Figure 13: The transfer functions for the TM and DIG filter coefficients.

The coefficients for the TM and DIG series depicted in Figure 12 show the heavy amount of smooth and decay (and decay2) regularization, a trio of parameters that was not easy to pinpoint at first without significant leakage above one in the filter transfer functions (shown in Figure 13). One can see that two major spectral peaks are present under the lowpass portion and another large one in the bandpass portion that accounts for the more frequent trades.

### Conclusion

With these trading signals constructed for these five assets, I imagine I have a small but somewhat diverse portfolio, ranging from tech and auto to two popular commodities. I'll be tracking the performance of these trading signals combined as a portfolio over the next few months and continuously give updates. As the in-sample periods for the construction of these filters ended around the end of last summer and were already applied to out-of-sample periods ranging from 90 days to 124 days (roughly one half to one third of the original in-sample period), with the significant amount of regularization applied, I am quite optimistic that the out-of-sample performance will continue to be the same over the next few months, but of course one can never be too sure of anything when it comes to market behavior. In the worst-case scenario, I can always look into digging through my dynamic adaptive filtering and signal extraction toolkit. Some general comments as I conclude this article.
What I truly enjoy about the trading signals constructed for this portfolio experiment (and robust trading signals in general, per my other articles on financial trading) is that when any losses out-of-sample or even in-sample occur, they tend to be extremely small relative to the average size of the gains. That is the sign of a truly robust signal, I suppose: not only does it perform consistently both in-sample and out-of-sample, but when losses do arrive, they are quite small. One characteristic that I have noticed in all robust and high-performing trading signals that I tend to stick with is that no matter what type of extraction definition you are targeting (lowpass, bandpass, or MBP), when an erroneous trade is executed (leading to a loss), the signal will quickly correct itself to minimize the loss. This is why the losses in robust signals tend to be small (look at any of the 5 trading signals produced for the portfolio in this article). Of course, all these good trading-signal characteristics are in addition to the filter characteristics (smooth, slightly decaying coefficients with minimal effective degrees of freedom, transfer functions less than or equal to one everywhere, etc.). Overall, although I'm quite inspired and optimistic about these results, there is still slight room for improvement in building these MBP filters, especially for low-volatility sideways markets (for example, the one occurring in the Toyota stock price in the middle of the plot in Figure 11). In general, this is a difficult type of stock price movement in which hardly any type of signal will have success. With low volatility and no trending movements, the log-returns are basically white noise: there is no pertinent information to extract. The markets are then efficient and there is nothing you can do about it. Only good luck will win (in that case you're as well off building a signal based on a coin flip).
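These two properties, an equity curve that grows near-linearly in-sample and out-of-sample, and losses that stay small relative to gains, can be quantified directly. A minimal Java sketch of both diagnostics; this is my own formulation for illustration, not code from iMetrica, and the inputs would be a signal's trade returns and cumulative ROI curve:

```java
import java.util.Arrays;

// Hedged sketch: two quick robustness diagnostics for a trading signal.
public class SignalDiagnostics {

    // Ratio of the average winning trade to the average losing trade (in
    // absolute value); a robust signal should score well above 1.
    public static double gainLossRatio(double[] tradeReturns) {
        double gainSum = 0, lossSum = 0;
        int gains = 0, losses = 0;
        for (double r : tradeReturns) {
            if (r > 0) { gainSum += r; gains++; }
            else if (r < 0) { lossSum -= r; losses++; }
        }
        if (losses == 0) return Double.POSITIVE_INFINITY;
        if (gains == 0) return 0.0;
        return (gainSum / gains) / (lossSum / losses);
    }

    // Spearman rank correlation of the cumulative ROI curve against time:
    // 1.0 means a perfectly monotone equity curve. This is one way to read
    // the ".97 rank coefficient" style of statistic quoted for the JO signal.
    // Ties in the equity values are ignored for brevity.
    public static double rankCoefficient(double[] equity) {
        int n = equity.length;
        Integer[] idx = new Integer[n];
        for (int i = 0; i < n; i++) idx[i] = i;
        Arrays.sort(idx, (a, b) -> Double.compare(equity[a], equity[b]));
        double[] rank = new double[n];
        for (int pos = 0; pos < n; pos++) rank[idx[pos]] = pos + 1;
        double mean = (n + 1) / 2.0, num = 0, dT = 0, dR = 0;
        for (int t = 0; t < n; t++) {
            num += (t + 1 - mean) * (rank[t] - mean);
            dT  += (t + 1 - mean) * (t + 1 - mean);
            dR  += (rank[t] - mean) * (rank[t] - mean);
        }
        return num / Math.sqrt(dT * dR);
    }
}
```

A strictly increasing equity curve gives a rank coefficient of exactly 1.0, so values near 1 are the numerical counterpart of the "linear performance" praised throughout this article.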
Typically the best you can do in these types of areas is prevent trading altogether with some sort of threshold on the signal, which is an idea I've had in my mind recently but haven't implemented, or make sure any losses are small, which is exactly what my signal achieved in Figure 11 (and which is what any robust signal should do in the first place). Lastly, if you have a particular financial asset for which you would like to build a trading signal (similar to the examples shown above), I will be happy to take a stab at it using iMetrica (and/or give you pointers in the right direction if you would prefer to pursue the endeavor yourself). Just send me what asset you would like to trade on, and I'll build the filter and send you the coefficients along with the parameters used. Offer holds for a limited time only! Happy extracting.

# The Frequency Effect Part III: Revelations of Multi-Bandpass Filters and Signal Extraction for Financial Trading

Animation of the out-of-sample performance of one of the multi-bandpass filters built in this article for the daily returns of the price of Google. The resulting trading signal was extracted and yielded a trading performance near 39 percent ROI during an 80-day out-of-sample period on trading shares of Google.

To conclude the trilogy on this recent voyage through various variations on frequency-domain configurations and optimizations in financial trading using MDFA and iMetrica, I venture into the world of what I call multi-bandpass filters, which I recently implemented in iMetrica. The motivation for this latest endeavor in highlighting the fundamental importance of the spectral frequency domain in financial trading applications was the desire to gain better control of extracting signals and engineering different trading strategies through many different types of market movement in financial assets.
There are typically four basic types of movement a price pattern will take during its fractalesque voyage throughout the duration that an asset is traded on a financial market. These patterns/trajectories include:

1. steady up-trends in share price
2. low-volatility sideways patterns (close to white noise)
3. highly volatile sideways patterns (usually cyclical)
4. long downswings/trends in share price.

Using MDFA for signal extraction in financial time series, one typically indicates an a priori trading strategy through the design of the extractor, namely the target function $\Gamma(\omega)$ (see my previous two articles on The Frequency Effect). Designating a lowpass or bandpass filter in the frequency domain will give an indication of what kind of patterns the extracted trading signal will trade on. Traditionally, one can set a lowpass with the goal of extracting trends (with the proper amount of timeliness prioritized in the parameterization), or one can opt for a bandpass to extract smaller cyclical events for more systematic trading during volatile periods. But now suppose we could have the best of both worlds at the same time: namely, be profitable in both steady climbs and long tumbles, while at the same time systematically hacking our way through rough sideways volatile territory, making trades at specific frequencies embedded in the share price actions not found in long trends. The answer is through the construction of multi-bandpass filters. Their construction is relatively simple, but as I will demonstrate in this article with many examples, they are a bit more difficult to pinpoint optimally (but it can be done, and the results are beautiful… both aesthetically and financially).
With the multi-bandpass defined as two separate bands given by $A := 1_{[\omega_0, \omega_1]}$ and $B := 1_{[\omega_2, \omega_3]}$ with $0 \leq \omega_0$ and $\omega_1 < \omega_2$, and zero everywhere else, it is easy to see that the motivation here is to seek a detection of both lower frequencies and low-mid frequencies in the data concurrently. With now up to four cutoff frequencies to choose from, this adds yet a few more wrinkles to the degrees of freedom in parameterizing the MDFA setup. If choosing and optimizing one cutoff frequency for a simple lowpass filter, in addition to customization and regularization parameters, wasn't enough, now imagine extracting signals with the addition of up to three more cutoff frequencies. Despite these additional degrees of freedom in frequency interval selection, I will later give a couple of useful hacks that I've found helpful to get one started down the right path toward successful extraction. With this multi-bandpass definition for $\Gamma$ comes the responsibility to ensure that the customization of smoothness and timeliness is adjusted for the additional passband. The smoothing function $W(\omega; \alpha)$ for $\alpha \geq 0$ that acts on the periodogram (or discrete Fourier transforms in multivariate mode) is now defined piecewise according to the different stop-band intervals $[0,\omega_0]$, $[\omega_1, \omega_2]$, and $[\omega_3, \pi]$. For example, $\alpha = 20$ gives a piecewise quadratic weighting function (an example is shown in Figure 1) and for $\alpha = 10$, the weighting function is piecewise linear. In practice, the piecewise power function smooths and rids the stop bands of unwanted frequencies much better than a piecewise constant function. With these preliminaries defined, we now move on to the first steps in building and applying multi-bandpass filters.

Figure 1: Plot of the piecewise smoothing function for alpha = 15 on a multi-bandpass filter.
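The two-band target and its piecewise smoothing weight can be written down directly. A small Java sketch of both follows; note that the exact exponent mapping for $\alpha$ (here $\alpha/10$, so that $\alpha = 10$ is piecewise linear and $\alpha = 20$ piecewise quadratic, matching the text) is my own assumption for illustration, not iMetrica's exact formula:

```java
// Hedged sketch of the multi-bandpass target Gamma and a piecewise-power
// smoothing weight W(omega; alpha) acting on the stop bands.
public class MultiBandpass {

    // Two-band target extractor: 1 on [w0,w1] and on [w2,w3], 0 elsewhere.
    public static double gamma(double w, double w0, double w1,
                               double w2, double w3) {
        return ((w >= w0 && w <= w1) || (w >= w2 && w <= w3)) ? 1.0 : 0.0;
    }

    // Smoothing weight: flat inside the passbands, growing as a power of
    // the distance to the nearest passband edge in the stop bands
    // [0,w0], [w1,w2], [w3,pi]. Exponent alpha/10 is an assumption.
    public static double weight(double w, double w0, double w1,
                                double w2, double w3, double alpha) {
        if (gamma(w, w0, w1, w2, w3) == 1.0) return 1.0;
        double dist;
        if (w < w0) dist = w0 - w;                          // below the low band
        else if (w < w2) dist = Math.min(w - w1, w2 - w);   // between the bands
        else dist = w - w3;                                 // above the top band
        return 1.0 + Math.pow(dist, alpha / 10.0);
    }
}
```

Heavier weights on stop-band frequencies penalize leakage there during the MDFA optimization, which is why the piecewise power function cleans the stop band better than a constant weighting.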
To motivate this newly customized approach to building financial trading signals, I begin with a simple example where I build a trading signal for the daily share price of Google. We begin with a simple lowpass filter defined by $\Gamma(\omega) = 1$ if $\omega \in [0,.17]$, and 0 otherwise. This formulation, as it includes the zero frequency, should provide a local bias as well as extract very slow-moving trends. The trick with these filters for building consistent trading performance is to ensure a proper grip on the timeliness characteristics of the filter in a very low and narrow filter passage. Regularization and smoothness using the weighting function shouldn't be too much of a problem or priority, as typically only a small fraction of the available degrees of freedom in the frequency domain are being utilized, so there is not much concern for overfitting as long as you're not using too long a filter. In my example, I maxed out the timeliness parameter $\lambda$ and set the $\lambda_{smooth}$ regularization parameter to .3. Fortunately, no optimization of any parameter was needed in this example, as the performance was spiffy enough nearly right after gauging the timeliness parameter $\lambda$. Figure 2 shows the resulting extracted trend trading signal in both the in-sample portion (left of the cyan colored line) and applied to 80 out-of-sample points (right of the cyan line, the most recent 80 daily returns of Google, namely 9-29-12 through today, 1-10-13). The blue-pink line shows the progression of the trading account, in return-on-investment percentage. The out-of-sample gains on the trades made were 22 percent ROI during the 80-day period.

Figure 2: The in-sample and out-of-sample gains made by constructing a lowpass filter employing a very high timeliness parameter and a small amount of regularization in smoothness. The out-of-sample gains are nearly 30 percent with no losses on any trades.
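For reference, the trading signal plotted in green is just the one-sided (concurrent) filter applied to the log-returns, with the sign of the signal dictating the position. A minimal sketch of that mechanic; the coefficient and data values here are toy numbers, not the actual GOOG filter:

```java
// Hedged sketch of how a concurrent filter turns log-returns into a signal.
public class SignalFilter {

    // Concurrent filter output: y_t = sum_j b_j * x_{t-j}, computed only
    // for t >= b.length - 1, where the full coefficient window fits.
    public static double[] apply(double[] b, double[] x) {
        int L = b.length;
        double[] y = new double[x.length - L + 1];
        for (int t = L - 1; t < x.length; t++) {
            double s = 0.0;
            for (int j = 0; j < L; j++) s += b[j] * x[t - j];
            y[t - L + 1] = s;
        }
        return y;
    }

    // Trading rule used throughout these articles: long while the signal is
    // positive, short while negative; a trade occurs at each sign change.
    public static int position(double signalValue) {
        return signalValue > 0 ? 1 : (signalValue < 0 ? -1 : 0);
    }
}
```

The zero crossings of `apply(...)` are exactly the green/magenta trade lines in the plots: a crossing from below enters a long position, a crossing from above sells and enters a short one.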
Although not perfect, the trading signal produces a monotonic performance both in-sample and out-of-sample, which is exactly what you strive for when building these trend signals for trading. The performance out-of-sample is also highly consistent (in regards to trading frequency and no losses on any trades) with the in-sample performance. With only 4 trades being made, they were done at very interesting points in the trajectory of the Google share price. Firstly, notice that the local bias in the largest upswing is accounted for due to the inclusion of frequency zero in the lowpass filter. This (positive) local bias continues out-of-sample until, interestingly enough, two days before one of the largest losses in the share price of Google over the past couple of years. A slightly earlier exit out of this long position (optimally at the peak before the downturn a few days before) would have been more strategic; perhaps further tweaking of various parameters would have achieved this, but I'm happy with it for now. The long position resumes a few days after the dust settles from the major loss, and the local bias in the signal helps once again (after trade 2). The next few weeks see shorter downtrending cyclical effects, and the signal fortunately turns increasingly positive right before another major turning point for an upswing in the share price. Finally, the third transaction ends the long position at another peak (3), perfect timing. The fourth transaction (no loss or gain) was quickly activated after the signal saw another upturn, and thus is now in the long position (hint: Google trending upward). Figure 3 shows the transfer functions $\hat{\Gamma}$ for both sets of explanatory log-return data and Figure 4 depicts the coefficients for the filter. Notice that in the coefficients plot, much more weight is being assigned to past values of the log-return data, with extremes (min and max values) at around lags 15 and 30 for the GOOG coefficients (blue-ish line).
The coefficients are also quite smooth due to the slight amount of smooth regularization imposed.

Figure 3: Transfer functions for the concurrent trend filter applied to GOOG.

Figure 4: The filter coefficients for the log-return data.

Now suppose we wish to extract a trading signal that performs like a trend signal during long sweeping upswings or downswings, and at the same time shares the property that it extracts smaller cyclical swings during a sideways or highly volatile period. This type of signal would be endowed with the advantage that we could engage in a long position during upswings, trade systematically during sideways and volatile times, and by the same token avoid aggressive long-winded downturns in the price. Financial trading can't get more optimistic than that, right? Here is where the magic of the multi-bandpass comes in. I give my general "how-to" guidelines in the following paragraphs as a step-by-step approach. As a forewarning, these signals are not easy to build, but with some clever optimization and patience it can be done. In this new formulation, I envision not only being able to extract a local bias embedded in the log-return data but also gaining information on other important frequencies to trade on while in sideways markets. To do this, I set up the lowpass filter as I did earlier on $[0,\omega_0]$. The choice of $\omega_0$ is highly dependent on the data and should be located through a priori investigations (as I did above, without the additional bandpass).

Animation 2 (click to view): Example of constructing a multi-bandpass filter using the Target Filter control panel in iMetrica. Initially, a lowpass filter is set, then the additional bandpass is added by clicking the "Multi-Pass" checkbox. The band is then moved to the desired location using the scrollbars. The new filters are computed automatically if "Auto" is checked on (lower left corner).
Before setting any parameterization regarding customization, regularization, or filter constraints, I perform a quick scan of the periodogram (the averaged periodogram if in multivariate mode) to locate what I call principal trading frequencies in the data. In the averaged periodogram, these frequencies are located at the largest spectral peaks, with the most useful ones for our purposes of financial trading typically before $\pi/4$. The largest of these peaks will be defined from here on out as the principal spectral peak (PSP). Figure 5 shows an example of an averaged periodogram of the log-returns for GOOG and AAPL with the PSP indicated. You might note that there exists a much larger spectral peak located at $7\pi/12$, but there is no need to worry about that one (unless you really enjoy transaction costs). I locate this PSP as a starting point for where I want my signal to trade.

Figure 5: Principal spectral peak in the log-return data of GOOG and AAPL.

In the next step, I place a bandpass of width around .15 so that the PSP is dead-centered in the bandpass. Fortunately with iMetrica, this is a seamlessly simple task with just the use of a scrollbar to slide the positioning of this bandpass (and also adjust the lowpass) to where I desire. Animation 2 above (click on it to see the animation) shows this process of setting a multi-passband in the MDFA Target Filter control panel. Notice that as I move the controls for the location of the bandpass, the filter is automatically recomputed and I can see the changes in the frequency response functions $\hat{\Gamma}$ instantaneously. With the bandpass set along with the lowpass, we can now view how the in-sample performance is behaving at the initial configuration. Slightly tweaking the location of the bandpass might be necessary (width not so much; in my experience between .15 and .20 is sufficient).
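The PSP scan itself can be mimicked numerically: evaluate the periodogram at the Fourier frequencies and take the largest ordinate below some cutoff such as $\pi/4$. A single-series Java sketch follows; iMetrica's averaged multivariate periodogram would average such ordinates across the explanatory series, so this is only the one-series version:

```java
// Hedged sketch: periodogram ordinates and the principal spectral peak (PSP).
public class Periodogram {

    // Periodogram I(omega_k) = |sum_t x_t e^{-i omega_k t}|^2 / (2 pi n)
    // at the Fourier frequency omega_k = 2 pi k / n.
    public static double ordinate(double[] x, int k) {
        int n = x.length;
        double w = 2.0 * Math.PI * k / n, re = 0, im = 0;
        for (int t = 0; t < n; t++) {
            re += x[t] * Math.cos(w * t);
            im -= x[t] * Math.sin(w * t);
        }
        return (re * re + im * im) / (2.0 * Math.PI * n);
    }

    // Index k of the largest ordinate among the Fourier frequencies in
    // (0, maxOmega], e.g. maxOmega = pi/4 as suggested in the text.
    public static int principalPeak(double[] x, double maxOmega) {
        int n = x.length, best = 1;
        for (int k = 1; 2.0 * Math.PI * k / n <= maxOmega; k++) {
            if (ordinate(x, k) > ordinate(x, best)) best = k;
        }
        return best;
    }
}
```

Centering the bandpass on the returned frequency $2\pi k/n$ is the programmatic analogue of sliding the band over the PSP with the scrollbar.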
The next step in this approach is to adjust not only the location of the bandpass, keeping the PSP somewhat centered, but also the effects of regularization on the filter. With this additional bandpass, the filter has a tendency to succumb to overfitting if one is not careful enough. In my first filter construction attempt, I placed my bandpass at $[.49,.65]$ with the PSP directly under it. I then optimized the regularization controls in-sample (a feature I haven't discussed yet) and slightly tweaked the timeliness parameter (ended up setting it to 3) and my result (drumroll…) is shown in Figure 6.

Figure 6: The trading performance and signal for the first attempt at building a multi-bandpass filter.

Not bad for a first attempt. I was actually surprised at how few trades there were out-of-sample. Although there are no losses during the 80 days out-of-sample (after the cyan line), and the signal is sort of what I had in mind a priori, the trades are minimal, yielding no trading action during the period right after the large loss in Google when the market was going sideways and highly volatile. Notice that the trend signal gained from the lowpass filter indeed did its job by providing the local bias during the large upswing and then selling directly at the peak (first magenta dotted line after the cyan line). There are small transactions (gains) directly after this point, but still not enough during the sideways market after the drop. I needed to find a way to tweak the parameters and/or cutoffs to include higher frequencies in the transactions. In my second attempt, I kept the regularization parameters as they were but this time increased the bandpass to the interval $[.51, .68]$, with the PSP still underneath the bandpass, but now catching on to a few more higher frequencies than before. I also slightly increased the length of the filter to see if that had any effect.
After optimizing on the timeliness parameter $\lambda$ in-sample, I get a much improved signal. Figure 7 shows this second attempt. Figure 7: The trading performance and signal for the second attempt at constructing a multiband-pass filter. This one included a few more higher frequencies. Upon inspection, this signal behaves more consistently with what I had in mind. Notice that directly out-of-sample during the long upswing, the signal (barely) shows signs of the local bias, but fortunately not enough to make any trades. However, in this signal we see that the filter is much too late in detecting the huge loss posted by Google, and instead sells immediately after (still a profit, however). Then during the volatile sideways market, we see more of what we were wishing for: timely trades that earn the signal a quick 9 percent in the span of a couple of weeks. Then the local bias kicks in again and not another trade is posted during this short upswing, taking advantage of the local trend. This signal earned a near 22 percent ROI during the 80-day out-of-sample trading period, not as good, however, as the previous signal at 32 percent ROI. Now my priority was to find another tweak I could perform to change the trading structure even more. I’d like it to be even more sensitive to quick downturns, but at the same time keep intact the sideways trading from the signal in Figure 7. My immediate intuition was to turn on the i2 filter constraint and optimize the time-shift, similar to what I did in my previous article, part deux of the Frequency Effect. I also lessened the amount of smoothing from my weighting function $W(\omega; \alpha)$, turned off any amount of decay regularization that I had, and voilà, my final result in Figure 8. Figure 8: Third attempt at building a multiband-pass filter. Here, I turn on the i2 filter constraint and optimize the time-shift.
While the consistency of the in-sample performance with the out-of-sample performance is somewhat less than in my previous attempts, out-of-sample it performs nearly exactly how I envisioned. There are only two small losses of less than 1 percent each, and the timeliness in choosing when to sell at the tip of the peak in the share price of Google couldn’t have been better. There is systematic trading governed by the added multiband-pass filter during the sideways market and the slight upswing toward the end. Some of the trades are made later than would be optimal (the green lines enter a long position; magenta sells and enters a short position), but for the most part they are quite consistent. It’s also very quick in pinpointing its own erroneous trades (namely, no huge losses in-sample or out-of-sample). There you have it: a near monotonic performance out-of-sample with 39 percent ROI. In examining the coefficients of this filter in Figure 9, we see characteristics of a trend filter, as the coefficients weight the middle lags much more heavily than the initial or end lags (note that no decay regularization was added to this filter, only smoothness). At the same time, however, the coefficients also weight the most recent log-return observations, unlike the trend filter from Figure 4, in order to extract signals in the more volatile areas. The undulating patterns also assist in obtaining good performance in the cyclical regions. Figure 9: The coefficients of the final filter depicting characteristics of both a trend and bandpass filter, as expected. Finally, the frequency response functions of the concurrent filters show the effect of including the PSP in the bandpass (Figure 10). Notice that the largest peak in the bandpass function is found directly at the frequency of the PSP, ahh the PSP. I need to study this frequency with more examples to get a clearer picture of what it means. In the meantime, this is the strategy that I would propose.
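The frequency response functions plotted in Figure 10 are just $\hat{\Gamma}(\omega) = \sum_{j=0}^{L-1} b_j e^{-ij\omega}$ evaluated on a grid. A quick sketch for inspecting one's own coefficients (the function name is my own, not iMetrica's) might look like this:

```python
import numpy as np

def frequency_response(b, omegas):
    """Transfer function Gamma_hat(w) = sum_j b_j * exp(-i*j*w) of a
    causal filter with coefficients b_0, ..., b_{L-1}."""
    b = np.asarray(b, dtype=float)
    j = np.arange(len(b))
    return np.array([np.sum(b * np.exp(-1j * j * w)) for w in omegas])
```

Plotting `np.abs(frequency_response(b, grid))` over a grid on $[0, \pi]$ reproduces the amplitude curves of the kind shown in Figure 10, where one can check whether the largest bandpass peak sits at the PSP.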
If you have any questions about any of this, feel free to email me. Until next time, happy extracting! Figure 10: The frequency response functions of the multi-bandpass filter. # The Frequency Effect Part Deux: Shifting Time at Frequency Zero For Better Trading Performance Animation 1: The out-of-sample performance over 60 trading days of a signal built using an optimized time-shift criterion. With 5 trades, 4 of them successful, the ROI is nearly 40 percent over 3 months. What is an optimized time-shift? Is it important to use when building successful financial trading signals? While the theoretical aspects of frequency zero and the vanishing time-shift can be discussed in a very formal and mathematical manner, I hope to answer these questions in a more simple (and applicable) way in this article. To do this, I will give an informative and illustrated real-world example in this unforeseen continuation of my previous article on the frequency effect a few days ago. I discovered something quite interesting after I got an e-mail from Herr Doktor Marc (Wildi) that nudged me even further into my circus of investigations in carving out optimal frequency intervals for financial trading (see his blog for the exact email and response). So I thought about it, and soon after I sent my response to Marc, I began to question a few things even further at 3am while sipping on some Asian raspberry white tea (my sleeping patterns lately have been as erratic as fiscal cliff negotiations), and came up with an idea. Firstly, there has to be a way to include information about the zero frequency (this wasn’t included in my previous article on optimal frequency selection). Secondly, if I’m seeing promising results using a narrow band-pass approach after optimizing the location and distance, is there any way to still incorporate the zero frequency and maybe improve results even more with this additional frequency information?
Frequency zero is an important frequency in the world of nonstationary time series and model-based time series methodologies, as it deals with the topics of unit roots, integrated processes, and (for multivariate data) cointegration. Fortunately for you (and me), I don’t need to delve further into this mess of a topic that is cointegration since, typically, the type of data we want to deal with in financial trading (log-returns) is closer to being stationary (namely, close to being white noise; ehem, again, close, but not quite). Nonetheless, a typical sequence of log-return data over time is never zero-mean, and is full of interesting turning points at certain frequency bands. In essence, we’d somehow like to take advantage of that and perhaps better locate local turning points intrinsic to the optimal trading frequency range we are dealing with. The perfect way to do this is through the use of the time-shift value of the filter. The time-shift is defined by the derivative of the frequency response (or transfer) function at zero. Suppose we have an optimal bandpass set at $(\omega_0, \omega_1) \subset [0,\pi]$ where $\omega_0 > 0$. We can introduce a constraint on the filter coefficients so as to impose a vanishing time-shift at frequency zero. As Wildi says on page 24 of the Elements paper: “A vanishing time-shift is highly desirable because turning-points in the filtered series are concomitant with turning-points in the original data.” In fact, we can take this a step further and even impose an arbitrary time-shift with value $s$ at frequency zero, where $s$ is any real number. In this case, the derivative of the frequency response function (transfer function) $\hat{\Gamma}(\omega)$ at zero is $s$. As explained on page 25 of Elements, this is implemented as $\frac{d}{d\omega}\big|_{\omega=0} \sum_{j=0}^{L-1} b_j \exp(-i j \omega) = s$, which implies $b_1 + 2b_2 + \cdots + (L-1) b_{L-1} = s$.
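Under the convention above, the constraint is linear in the coefficients, so it is easy to check numerically. Here is a sketch; note that the `impose_time_shift` "repair" below (adjusting only the last coefficient) is a naive illustration of my own, whereas the real MDFA folds the linear constraint directly into the least-squares optimization:

```python
import numpy as np

def time_shift_at_zero(b):
    """Linearized time-shift of the filter at frequency zero:
    b_1 + 2*b_2 + ... + (L-1)*b_{L-1}."""
    b = np.asarray(b, dtype=float)
    return float(np.sum(np.arange(len(b)) * b))

def impose_time_shift(b, s):
    """Naive repair: shift the last coefficient so the constraint equals s.
    For illustration only; not how MDFA implements the constraint."""
    b = np.array(b, dtype=float)
    L = len(b)
    b[-1] += (s - time_shift_at_zero(b)) / (L - 1)
    return b
```

Setting `s = 0` corresponds to the vanishing time-shift Wildi describes; any other `s` imposes an arbitrary time-shift at frequency zero.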
This constraint can be integrated into the MDFA formulation, but of course it adds another parameter to an already full flight of parameters. Furthermore, the search for the optimal $s$ with respect to a given financial trading criterion is tricky and takes some hefty computational assistance from a robust (highly nonlinear) optimization routine, but it can be done. In iMetrica I’ve implemented a time-shift turning point optimizer, something that works well so far for my taste buds, but carries a large burden of computational time. To illustrate this methodology in a real financial trading application, I return to the same example I used in my previous article, namely using daily log-returns of GOOG and AAPL from 6-3-2011 to 12-31-2012 to build a trading signal. This time, to freshen things up a bit, I’m going to target and trade shares of Apple Inc. instead of Google. Before I begin, I will quickly go through the basic steps of building trading signals. If you’re already familiar, feel free to skip down two paragraphs. As I’ve mentioned in the past, fundamentally the most important step in building a successful and robust trading signal is choosing an appropriate preliminary in-sample metric space in which the filter coefficients for the signal are computed. This preliminary in-sample metric space represents by far the most critically important aspect of building a successful trading signal and is built using the following ingredients: • The target and explanatory series (i.e. minute, hourly, daily log-returns of financial assets) • The time span of in-sample observations (i.e. 6 hours, 20 days, 168 days, 3 years, etc.) Choosing the appropriate preliminary in-sample metric space is beyond the scope of this article, but will certainly be discussed in a future article. Once this in-sample metric space has been chosen, one can then proceed by choosing the optimal extractor (the frequency bandpass interval) for the metric space.
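As a concrete illustration of assembling such a preliminary in-sample space from the two ingredients above (this is my own hypothetical sketch, not iMetrica code):

```python
import numpy as np

def log_returns(prices):
    """Log-returns r_t = log(p_t) - log(p_{t-1}) from a price series."""
    p = np.asarray(prices, dtype=float)
    return np.diff(np.log(p))

def in_sample_space(target_prices, explanatory_prices_list, span):
    """Stack target and explanatory log-returns, keeping the last
    `span` observations: the raw material for the filter estimation."""
    cols = [log_returns(target_prices)]
    cols += [log_returns(p) for p in explanatory_prices_list]
    return np.column_stack(cols)[-span:]
```

For the example in this article, the target column would be AAPL log-returns and the explanatory columns the GOOG and AAPL log-returns over roughly 16 months of daily data.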
While concurrently selecting the optimal extractor, one must begin warping and bending the preliminary metric space through the use of the various customization and regularization tools (see my previous Frequency Effect article, as well as Marc’s Elements paper, for an in-depth look at the mathematics of regularization and customization). These are the principal steps. Now let’s look at an example. In the .gif animation at the top of this article, I featured a signal that I built using this time-shift optimizer and a frequency bandpass extractor heavily centered around the frequency $\pi/12$, which is not a very frequent trading frequency, but has its benefits, as we’ll see. The preliminary metric space was constructed over an in-sample period from 6-4-2011 to 9-25-2012, nearly 16 months of data, using the daily log-returns of GOOG and AAPL, with AAPL as my target. Note that the in-sample period includes many important news events from Apple Inc., such as the announcement of the iPad mini, the iPhone 4S and 5, and the sad passing of Steve Jobs. I then proceeded to bend the preliminary metric space with a heavy dosage of regularization, but only a tablespoon of customization¹. Finally, I set the time-shift constraint and applied my optimization routine in iMetrica to find the value $s$ that yields the best possible turning-point detector for the in-sample metric space. The result is shown in Figure 1 below in the slide-show. The in-sample signal from the last 12 months or so (no out-of-sample data yet applied) is plotted in green, and since I have future data available (more than 60 trading days’ worth from 9-25 to present), I can also approximate the target symmetric filter (the theoretically optimal target signal) in order to compare things (a quite useful option available with the click of a button in iMetrica, I might add). I do this so I can have a good barometer of over-fitting and concurrent filter robustness at the most recent in-sample observation.
In Figure 1 in the slide-show below, the trading signal is in green, the AAPL log-return data in red, and the approximated target signal in gray (recall that if you can approximate this target signal (in gray) arbitrarily well, you win, big). Notice that at the very endpoint (the most challenging point at which to achieve greatness) of the signal in Figure 1, the filter does a very fine job of getting extremely close. In fact, since the theoretical target signal is only a Fourier approximation of order 60, the concurrent signal that I built might even be closer to the ‘true value’, who knows. Achieving exact replication of the target signal (gray) elsewhere is a little less critical in my experience. All that really matters is that the signal moves above and below zero in step with the targeted intention (the symmetric filter) and is close at the most recent in-sample observation. Figure 2 above shows the signal without the time-shift constraint and optimization. You might be inclined to say that there is no real big difference. In fact, the signal with no time-shift constraint looks even better. It’s hard to make such a conclusion in-sample, but here is where things get interesting. We apply the filter to the out-of-sample data, namely the past 60 trading days. Figure 3 shows the out-of-sample performance over these past 60 trading days, roughly October, November, and December (12-31-2012 was the latest trading day), of the signal without the time-shift constraint. Compare that to Figure 4, which depicts the performance with the constraint and optimization. Hard to tell a difference, but let’s look closer at the vertical lines. These lines can be easily plotted in iMetrica using the plot button below the canvas named Buy Indicators. The green line represents where the long position begins (we buy shares) and where a short position is exited. The magenta line represents where the shares are sold and a short position is entered.
These lines, in other words, are the turning point detection lines. They determine where one buys/sells (enters into a long/short position). Compare the two figures in the out-of-sample portion after the light cyan line (indicated in Figure 4 but not Figure 3, sorry). Figure 3: Out-of-sample performance of the signal built without the time-shift constraint. The out-of-sample period begins where the light cyan line is in Figure 4 below. Figure 4: Out-of-sample performance of the signal built with the time-shift constraint, optimized for turning-point detection. The out-of-sample period begins where the light cyan line is. Notice how the optimized time-shift constraint in the trading signal in Figure 4 pinpoints almost to perfection where the turning points are (specifically at points 3, 4, and 5). The local minimum turning point was detected exactly at 3, and nearly exactly at 4 and 5. The only loss out of the 5 trades occurred at 2, but this was more the fault of the long, unexpected fall in the share price of Apple in October. Fortunately, we were able to make up for those losses (and then some) at the next trade, exactly at the moment a big turning point came (3). Compare this to the signal without the optimized time-shift constraint (Figure 3), where the second and third turning points are a bit too late and too early, respectively. And remember, this performance is all out-of-sample; no adjustments to the filter have been made, nothing adaptive. To see even more clearly how the two signals compare, here are the gains and losses of the 5 actual trades performed out-of-sample (all numbers are percentages of gains and losses in the trading account governed only by the signal;
positive numbers are gains, negative are losses.)

| Trade | Without Time-Shift Optimization | With Time-Shift Optimization |
|-------|---------------------------------|------------------------------|
| 1 | 29.1 -> 38.7 = 9.6 | 14.1 -> 22.3 = 8.2 |
| 2 | 38.7 -> 32.0 = -6.7 | 22.3 -> 17.1 = -5.2 |
| 3 | 32.0 -> 40.7 = 8.7 | 17.1 -> 30.5 = 13.4 |
| 4 | 40.7 -> 48.2 = 7.5 | 30.5 -> 41.2 = 10.7 |
| 5 | 48.2 -> 60.2 = 12.0 | 41.2 -> 53.2 = 12.0 |

The optimized time-shift signal is clearly better, with an ROI of nearly 40 percent in 3 months of trading. Compare this to the roughly 30 percent ROI of the non-constrained signal. I’ll take the optimized time-shift constrained signal any day. I can sleep at night with this type of trading signal. Notice that this trading was applied over a period in which Apple Inc. lost nearly 20 percent of its share price. Another nice aspect of the trading frequency interval that I used is that trading costs aren’t much of an issue, since only 10 transactions (2 transactions each trade) were made in the span of 3 months, even though I did set them at .01 percent per transaction nonetheless. To dig a bit deeper into plausible reasons why the optimization of the time-shift constraint matters (if only even just a little bit), let’s take a look at the plots of the coefficients of each respective filter. Figure 5 depicts the filter coefficients with the optimized time-shift constraint, and Figure 6 shows the coefficients without it. Notice how in the filter for the AAPL log-return data (blue-ish tinted line) the optimized filter privileges the latest observation much more heavily, while modifying the others less. In the non-optimized time-shift filter, the most recent observation has much less importance; in fact, a larger lag is privileged more. For timely turning-point detection, this is (probably) not a good thing. Another interesting observation is that the optimized time-shift filter completely disregards the latest observation in the log-return data of GOOG (purplish line) when determining the turning points.
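As a quick sanity check, summing the per-trade percentage-point gains from the trade table above reproduces the two ROI figures (assuming, as the table does, simple additive accounting of account percentage points):

```python
# Per-trade gains (percentage points) read off the table above.
without_ts = [9.6, -6.7, 8.7, 7.5, 12.0]   # no time-shift optimization
with_ts    = [8.2, -5.2, 13.4, 10.7, 12.0]  # with time-shift optimization

roi_without = sum(without_ts)   # about 31.1
roi_with    = sum(with_ts)      # about 39.1
```

These totals are consistent with the "roughly 30 percent" and "nearly 40 percent" ROI figures quoted in the text.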
Maybe a “better” financial asset could be used for trading AAPL? Hmmm…. well, in any case, I’m quite ecstatic with these results so far. I just need to hack my way into writing a better time-shift optimization routine; it’s a bit slow at this point. Until next time, happy extracting. And feel free to contact me with any questions. Figure 5: The filter coefficients with time-shift optimization. Figure 6: The filter coefficients without the time-shift optimization. ¹ I won’t disclose quite yet how I found these optimal parameters and frequency interval, or reveal what they are, as I need to keep some sort of competitive advantage as I presently look for consulting opportunities 😉 . # Hierarchy of Financial Trading Parameters Figure 1: A trading signal produced in iMetrica for the daily price index of GOOG (Google) using the log-returns of GOOG and AAPL (Apple) as the explanatory data. The blue-pink line represents the account wealth over time, with an 89 percent return on investment in 16 months (GOOG recorded a 23 percent return during this time). The green line represents the trading signal built using the MDFA module with the hierarchy of parameters described in this article. The gray line is the log price of GOOG from June 6 2011 to November 16 2012. In any computational method for constructing binary buy/sell signals for trading financial assets, a plethora of parameters is most certainly involved and must be taken into consideration when computing and testing the signals in-sample for their effectiveness and performance. As traders and trading institutions typically rely on different financial priorities for navigating their positions, such as risk/reward priorities, minimizing trading costs/trading frequency, or maximizing return on investment, a robust set of parameters for adjusting and meeting the criteria of any of these financial aims is needed.
The parameters need to clearly explain how and why their adjustments will aid in steering the trading signal toward the goals in mind. It is my strong belief that any computational paradigm that fails to do so should not be considered a candidate for a transparent, robust, and complete method for trading financial assets. In this article, we give an in-depth look at the hierarchy of financial trading parameters involved in building financial trading signals using the powerful and versatile real-time multivariate direct filtering approach (MDFA, Wildi 2006, 2008, 2012), the principal method used in the financial trading interface of iMetrica. Our aim is to clearly identify the characteristics of each parameter involved in constructing trading signals using the MDFA module in iMetrica, as well as what effects (if any) the parameter will have on building trading signals and their performance. With the many different parameters at one’s disposal for computing a signal for virtually any type of financial data and any financial priority profile, there naturally exists a hierarchy associated with these parameters, all of which have well-defined mathematical definitions and properties. We propose a categorization of these parameters into three levels according to the clarity of their effect in building robust trading signals. Below are the four main control panels used in the MDFA module for the Financial Trading Interface (shown in Figure 1). They will be referenced throughout the remainder of this article. Figure 2: The interface for controlling many of the parameters involved in MDFA. Adjusting any of these parameters will automatically compute the new filter and signal output with the new set of parameters and plot the results on the MDFA module plotting canvases. Figure 3: The main interface for building the target symmetric filter that is used for computing the real-time (nonsymmetric) filter and output signal.
Many of the desired risk/reward properties are controlled in this interface. One can control every aspect of the target filter as well as the spectral densities used to compute the optimal filter in the frequency domain. Figure 4: The main interface for constructing Zero-Pole Combination filters, the original paradigm for real-time direct filtering. Here, one can control all the parameters involved in ZPC filtering, visualize the frequency-domain characteristics of the filter, and inject the filter into the I-MDFA filter to create “hybrid” filters. Figure 5: The basic trading regulation parameters currently offered in the Financial Trading Interface. This panel is accessed through the Financial Trading menu at the top of the software. Here, we have direct control over setting the trading frequency, the trading costs per transaction, and the risk-free rate for computing the Sharpe Ratio, all controlled by simply sliding the bars to the desired level. One can also set the option to short-sell during the trading period (provided that one is able to do so with the type of financial asset being traded). The Primary Parameters: • Trading Frequency. As the title entails, the trading frequency governs how often buy/sell signals occur during the span of the trading horizon. Whether the data is recorded by the minute, hour, or day, the trading frequency regulates when trades are signaled and is also a key parameter when considering trading costs. The parameter that controls the trading frequency is defined by the cutoff frequency in the target filter of the MDFA and is regulated in either the Target Filter Design interface (see Figure 3) or, if one is not accustomed to building target filters in MDFA, a simpler parameter given in the Trading Parameter panel (see Figure 5). In Figure 3, the pass-band and stop-band properties are controlled by any one of the sliding scrollbars. The design of the target filter is plotted in the Filter Design canvas (not shown).
• Timeliness of signal. The timeliness of the signal controls the quality of the phase characteristics of the real-time filter that computes the trading signal. Namely, it controls how well turning points (momentum changes) are detected in the financial data while minimizing the phase error in the filter. Bad timeliness properties will lead to a large delay in detecting up/downswings in momentum. Good timeliness properties lead to anticipated detection of momentum in real-time. However, the timeliness must be balanced by smoothness, as too much timeliness leads to the addition of unwanted noise in the trading signal, and thus to unnecessary trades. The timeliness of the filter is governed by the $\lambda$ parameter that controls the phase error in the MDFA optimization. This is done using the sliding scrollbar marked $\lambda$ in the Real-Time Filter Design panel in Figure 2. One can also control the timeliness property for ZPC filters using the $\lambda$ scrollbar in the ZPC Filter Design panel (Figure 4). • Smoothness of signal. The smoothness of the signal is related to how well the filter has suppressed the unwanted frequency information in the financial data, resulting in a smoother trading signal that corresponds more directly to the targeted signal and trading frequency. A signal that has been subjected to too much smoothing, however, will lose any important timeliness advantages, resulting in delayed trades or no trades at all. The smoothness of the filter can be adjusted using the $\alpha$ parameter that controls the error in the stop-band between the targeted filter and the computed concurrent filter. The smoothness parameter is found on the Real-Time Filter Design interface in the sliding scrollbar marked $W(\omega)$ (see Figure 2) and in the sliding scrollbar marked $\alpha$ in the ZPC Filter Design panel (see Figure 4). • Quantization of information.
In this sense, the quantization of information relates to how much past information is used to construct the trading signal. In MDFA, it is controlled by the length of the filter $L$ and is found on the Real-Time Filter Design interface (see Figure 2). In theory, as the filter length $L$ gets larger, more past information from the financial time series is used, resulting in a better approximation of the targeted filter. However, as the saying goes, there’s no such thing as a free lunch: increasing the filter length adds more degrees of freedom, which then leads to the age-old problem of over-fitting. The result: increased nonsense at the most concurrent observation of the signal and chaos out-of-sample. Fortunately, we can relieve the problem of over-fitting by using regularization (see Secondary Parameters). The length of the filter is controlled by the sliding scrollbar marked Order-$L$ in the Real-Time Filter Design panel (Figure 2). As you might have suspected, there exists a so-called “uncertainty principle” regarding the timeliness and smoothness of the signal. Namely, one cannot achieve a perfectly timely signal (zero phase error in the filter) while at the same time remaining certain that the timely signal estimate is free of unwanted “noise” (perfectly filtered data in the stop-band of the filter). The greater the timeliness (better phase error), the lesser the smoothness (suppression of unwanted high-frequency noise). A happy combination of these two parameters is always desired, and thankfully there exists in iMetrica an interface to optimize these two parameters to achieve a balance given one’s financial trading priorities. There has been much said on this real-time direct filter “uncertainty” principle, and the interested reader can find the gory mathematical details in an original paper by the inventor, my good friend and colleague Professor Marc Wildi, here.
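The trade-off can be made tangible by computing, from a given coefficient set, the two quantities the uncertainty principle pits against each other: the amplitude function (smoothness of the suppression) and the time-shift function (timeliness). This is a hedged sketch with my own helper name, not an iMetrica routine:

```python
import numpy as np

def amplitude_and_shift(b, omegas, eps=1e-6):
    """Amplitude A(w) = |Gamma(w)| and time-shift phi(w)/w of a causal
    filter, where Gamma(w) = |Gamma(w)| * exp(-i*phi(w))."""
    b = np.asarray(b, dtype=float)
    j = np.arange(len(b))
    G = np.array([np.sum(b * np.exp(-1j * j * w)) for w in omegas])
    A = np.abs(G)
    phi = -np.angle(G)                                # phase of the filter
    shift = phi / np.maximum(np.asarray(omegas, dtype=float), eps)
    return A, shift
```

For example, an equal-weight moving average of length 3 has a time-shift of one observation at low frequencies: good suppression, but delayed detection. Pushing the shift toward zero (better timeliness) necessarily costs amplitude fidelity in the stop-band.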
The Secondary Parameters Regularization of filters is the act of projecting the filter space into a lower-dimensional space, reducing the effective number of degrees of freedom. Recently introduced by Wildi in 2012 (see the Elements paper), regularization has three different components to adjust according to the preferences of the signal extraction problem at hand and the data. The regularization parameters are classified as secondary parameters and are found in the Additional Filter Ingredients section in the lower portion of the Real-Time Filter Design interface (Figure 2). The regularization parameters are described as follows. • Regularization: smoothness. Not to be confused with the smoothness parameter found in the primary list of parameters, this regularization technique serves to project the filter coefficients of the trading signal into an approximation space satisfying a smoothness requirement, namely that the finite differences of the coefficients up to a certain order, defined by the smoothness parameter, are kept relatively small. This ultimately has the effect that the coefficients appear smoother as the smoothness parameter increases. Furthermore, as the approximation space becomes more “regularized” according to the requirement that solutions be “smoother”, the effective degrees of freedom decrease, and the chances of over-fitting decrease as well. The direct consequences of applying this type of regularization to the signal output are typically quite subtle, and depend clearly on how much smoothness is being applied to the coefficients. Personally, I usually begin with this parameter for my regularization needs to decrease the number of effective degrees of freedom and improve out-of-sample performance. • Regularization: decay. Employing the decay parameter ensures that the coefficients of the filter decay to zero at a certain rate as the lag of the filter increases.
In effect, it is another form of information quantization, as the trading signal will tend to lessen the importance of past information as the decay increases. This rate is governed by two decay parameters: the higher the values, the faster the coefficients decrease to zero. The first decay parameter adjusts the strength of the decay. The second parameter adjusts how fast the coefficients decay to zero. Usually, just a slight touch on the strength of the decay, followed by adjusting the speed of the decay, is the order in which to proceed with these parameters. As with the smoothness regularization, the number of effective degrees of freedom will (in most cases) decrease as the decay parameter increases, which is a good thing (in most cases). • Regularization: cross correlation. Used for building trading signals with multivariate data only, this regularization effect groups the latitudinal structure of the multivariate time series more closely, resulting in a more heavily weighted estimate of the target filter using the target data’s frequency information. As the cross regularization parameter increases, the filter coefficients for each time series tend to converge towards each other. It should typically be used as a last effort to control over-fitting, and only if the financial time series data are on the same scale and all highly correlated. The Tertiary Parameters • Phase-delay customization. The phase-delay of the filter at frequency zero, defined by the instantaneous rate of change of the filter’s phase at frequency zero, characterizes important information related to the timeliness of the filter. One can directly ensure that the phase delay of the filter at frequency zero is zero by adding constraints to the filter coefficients at computation time. This is done by clicking the $i2$ option in the Real-Time Filter Design interface.
To go further, one can even set the phase delay to a fixed value other than zero using the $i2$ scrollbar in the Additional Filter Ingredients box. Setting this to a given value (between -20 and 20 on the scrollbar) ensures that the phase delay of the filter at zero reacts as anticipated. Its use and benefit are still under investigation. In any case, one can seamlessly test how this constraint affects the trading signal output in one’s own trading strategies directly by visualizing its performance in-sample using the Financial Trading canvas. • Differencing weight. This option, found in the Real-Time Filter Design interface as the checkbox labeled “d” (Figure 2), multiplies the frequency information (periodogram or discrete Fourier transform (DFT)) of the financial data by the weighting function $f(\omega) = 1/(1 - \exp(i \omega)), \omega \in (0,\pi)$, which is the reciprocal of the differencing operator in the frequency domain. Since the Financial Trading platform in iMetrica strictly uses log-return financial time series to build trading signals, the use of this weighting function is in a sense a frequency-based “de-differencing” of the differenced data. In many cases, using the differencing weight provides better timeliness properties for the filter and thus the trading signal. In addition to these three levels of parameters used in building real-time trading signals, there is a collection of more exotic “parameterization” strategies that exist in the iMetrica MDFA module for fine-tuning and boosting trading performance. However, these strategies require more time to develop, a bit of experimentation, and a keen eye for filtering. We will develop more information and tutorials about these advanced filtering techniques for constructing effective trading signals in iMetrica in future articles on this blog, coming soon. For now, we just summarize their main ideas. Advanced Filtering Parameters • Hybrid filtering.
In hybrid filtering, the goal is to filter a target signal additionally by injecting it with another filter of a different type, constructed using the same data but a different paradigm or set of parameters. One method of hybrid filtering readily available in the MDFA module entails constructing Zero-Pole Combination (ZPC) filters using the ZPC Filter Design interface (Figure 4) and injecting the result into the filter constructed in the Real-Time Filter Design interface (Figure 2) (see Wildi ZPC for more information). The combination (or hybrid) filter can then be accessed using one of the check-box buttons in the filter interface, adjusted using all the various levels of parameters above, and then used in the financial trading interface. The effect of this hybrid construction is essentially to improve either the smoothness or the timeliness of any computed trading signal, while at the same time not succumbing to the nasty side effects of over-fitting.
• Forecasting and smoothing signals. Smoothing a signal in a time series, as the name implies, involves obtaining a smoother estimate of the signal at a point in the past. Since the real-time estimate of a signal value in the past can use more recent values, the signal estimation becomes more symmetrical, as past and future values around a point in the past are used to estimate the value of the signal. For example, if today is Friday after market hours, we can obtain a better estimate of the targeted signal for Wednesday since we have information from Thursday and Friday. In the opposite direction, forecasting involves projecting a signal into the future; however, since the estimate becomes even more “anti-symmetric”, it becomes more polluted with noise. How these smoothed and forecasted signals can be used for constructing buy/sell trading signals in real time is still purely experimental.
With iMetrica, building and testing strategies that improve trading performance using either smoothed or forecasted signals (or both) is available. To produce either a smoothed or forecasted signal, there is a lag scrollbar available in the Real-Time Filter Design interface under Additional Filter Ingredients. Set the lag value $k$ in the scrollbar to any integer between -10 and 10 and the signal with that lag applied is automatically computed. For negative lag values $k$, the method produces a $k$-step-ahead forecast estimate of the signal. For positive values, the method produces a smoothed signal with a delay of $k$ observations.
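To make the lag mechanics concrete, here is a minimal sketch in Python/NumPy (outside of iMetrica; the function name `realtime_filter` and the toy coefficients are my own illustration, not the actual MDFA code): a one-sided filter whose coefficient mass sits at lag $j$ produces an output at time $t$ that estimates the series at $t-j$.

```python
import numpy as np

def realtime_filter(x, b):
    """One-sided (causal) filter: y_t = sum_j b_j * x_{t-j}."""
    L = len(b)
    y = np.full(len(x), np.nan)  # not enough history before t = L-1
    for t in range(L - 1, len(x)):
        window = x[t - L + 1 : t + 1][::-1]  # x_t, x_{t-1}, ..., x_{t-L+1}
        y[t] = np.dot(b, window)
    return y

x = np.arange(10.0)
y_now = realtime_filter(x, np.array([1.0, 0.0, 0.0]))     # targets x_t
y_smooth = realtime_filter(x, np.array([0.0, 0.0, 1.0]))  # targets x_{t-2}
```

Here `y_smooth[t]` reproduces `x[t-2]`: pushing the coefficient mass back by two lags yields a two-observation delay (the smoothing case), while a forecast shifts the target the other way.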
• Customized spectral weighting functions. In the spirit of customizing a trading signal to fit one's priorities in financial trading, one also has the option of customizing the spectral density estimate of the data-generating process to any design one wishes. In the computation of the real-time filter, the periodogram (or the DFTs in the multivariate case) is used as the default estimate of the spectral density weighting function. This weighting function in theory serves as the spectrum of the underlying data-generating process (DGP). However, since we have no real idea about the underlying DGP of the price movement of publicly traded financial assets (other than that it is supposed to be pretty darn close to a random walk, according to the Efficient Market Hypothesis), the periodogram is the closest thing to an unbiased estimate a mortal human can get, and it is the default option in the MDFA module of iMetrica. Customization of this weighting function is nevertheless possible through the Target Filter Design interface. Not only can one design the target filter to be approximated by the concurrent filter, but the spectral density weighting function of the DGP can also be customized using some of the options readily available in the interface. We will discuss these features in a soon-to-come discussion and tutorial on advanced real-time filtering methods.
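As an illustration of the default weighting (a Python/NumPy sketch, not iMetrica's internal code; `periodogram` and the toy price series are my own), the periodogram is just the squared modulus of the DFT of the log-return data:

```python
import numpy as np

def periodogram(x):
    """Periodogram |DFT(x)|^2 / (2*pi*N) at the frequencies 0..pi."""
    N = len(x)
    return np.abs(np.fft.rfft(x)) ** 2 / (2.0 * np.pi * N)

# log-returns of a small made-up price series
prices = np.array([100.0, 101.0, 100.5, 102.0, 101.5,
                   103.0, 102.5, 104.0, 103.5])
log_returns = np.diff(np.log(prices))
I = periodogram(log_returns)  # default spectral weighting of the DGP
```

Customizing the spectral weighting amounts to replacing `I` by any nonnegative function of frequency, e.g. one that emphasizes a chosen band.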
• Adaptive filtering. As perhaps the most advanced feature of the MDFA module, adaptive filtering is an elegant way to build smarter filters based on previous filter realizations. With the goal being to improve certain properties of the output signal at each iteration without compensating with over-fitting, the adaptive process is of course highly nonlinear. In short, adaptive MDFA filtering is an iterative process in which one begins with a desired filter, computes the output signal, and then uses the output signal as explanatory data in the next filtering round. At each iteration step, one has the freedom to change any properties of the filter one desires, whether it be customization, regularization, adding negative lags, adding filter coefficient constraints, applying a ZPC filter, or even changing the pass-band in the target filter. The hope is to improve certain properties of the filter at each stage of the iterative process. An in-depth look at adaptive filtering and how to easily produce an adaptive filter using iMetrica is soon to come later this week.
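Schematically, the adaptive loop can be sketched as follows (a toy Python example; `toy_filter`, a weighted sum followed by a 5-point moving average, is a stand-in of my own for an actual MDFA estimation round): each round's output signal is fed back as an explanatory series for the next round.

```python
import numpy as np

def toy_filter(X, b):
    """Stand-in for one filtering round: weighted sum of the explanatory
    series, then a 5-point moving average (NOT the real MDFA optimizer)."""
    z = X @ b
    return np.convolve(z, np.ones(5) / 5.0, mode="same")

rng = np.random.default_rng(0)
x = rng.standard_normal(200)   # toy log-return series
X = x[:, None]                 # round 0: raw series only

for _ in range(3):
    b = np.full(X.shape[1], 1.0 / X.shape[1])  # placeholder coefficients
    y = toy_filter(X, b)                       # this round's output signal
    X = np.column_stack([X, y])                # feed it back as data
```

In the real adaptive process each pass may also change the customization, regularization, or target pass-band; here only the explanatory data changes, which already yields a progressively smoother signal.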
|
{}
|
# A question in Weinberg's QFT
#### ndung
Goldstone's theorem was demonstrated based on the breaking of a global symmetry of a Lagrangian of scalar fields. Then how do we know that Goldstone bosons exist in the fermion sector, where the symmetry breaking happens for the Lagrangian of a fermion field? For example, how do we know that the pion exists when the chiral symmetry SU(2)xSU(2) breaks to the subgroup SU(2) of the u and d quarks?
#### ndung
In the scalar field case the vacuum expectation value is nonzero when the symmetry is broken. But in the fermion field case the vacuum expectation value must be zero. So how do we extend Goldstone's theorem from the scalar case to the fermion case?
|
{}
|
In this post, I would like to share line number settings in Neovim that help move the cursor more efficiently.
# Absolute number and relative number in Neovim
## Show absolute line number
To know where we are in a file, it is useful to show the line number in the leftmost column of the window. To show line numbers in Neovim, we can set the number option:
set number
The absolute line number will be shown in the leftmost column of the current window.
## Combine absolute number and relative number
While the number option is useful, it is not always convenient for moving the cursor to other lines, because cursor movement in Neovim is often based on relative distance. For example, to go to 3 lines above or below the current line, we use 3k or 3j. If we want to go to a specific line, we have to manually calculate the distance between the current cursor line and the destination line, which is cumbersome and error-prone.
To alleviate this issue, Neovim also supports showing relative line numbers:
set relativenumber
This option can be combined with the number option. The end result is that only the cursor line shows the absolute line number, and all the other lines show their relative number (or distance) relative to the current line.
# Automatic toggle of relative line number
In some situations, it is more convenient to see absolute line numbers, e.g., when we want to debug a file and run to specific lines. We can then use the plugin vim-numbertoggle to automatically toggle relative numbers based on several events, e.g., when we enter insert mode or a window loses focus.
## Automatic number toggle does not work inside Tmux?
I have found that the number toggle function does not work inside tmux sessions by default1. If I open two tmux panes side by side, open a file in Neovim in one pane, and then switch to the other tmux pane, the relative line numbers in Neovim do not change to absolute numbers.
I opened an issue on the Neovim GitHub repo and got the right answer from the developers. We need to turn on the focus-events option for tmux. From the Tmux documentation:
focus-events [on | off]
When enabled, focus events are requested from the terminal if
supported and passed through to applications running in tmux. Attached
clients should be detached and attached again after changing this
option.
We need to edit the tmux config file ~/.tmux.conf and add the following setting:
set -g focus-events on
Refresh the tmux session or start a new one, and automatic relative number toggling should now work as expected.
# References
1. I am using Tmux 2.8-rc ↩︎
|
{}
|
Transition signals are words or phrases used to join sentences, idea groups, and paragraphs together. They signpost, or indicate to the reader, the relationships between sentences and between paragraphs, making it easier for the reader to understand your ideas. You do not need to use transition signals in every sentence of a paragraph; however, good use of transition words will help make the relationships between the ideas in your writing clear and logical.

Transition signals fulfil a number of functions: to show the order or sequence of events; to indicate that a new idea or an example will follow; to show that a contrasting idea will be presented; or to signal a summary or a conclusion. Use "for example" and "for instance" when the example you are about to write is a complete sentence.

Transition signals are usually placed at the start of a sentence, although they may also appear in the middle or at the end. A transition signal, or the clause introduced by a transition signal, is usually separated from the rest of the sentence by commas.

Transition words can also help you edit your own work: used well, they let you weave your sentences and paragraphs together smoothly so that the writing flows and there are no abrupt jumps between ideas. They are therefore a key part of achieving a high score for Coherence and Cohesion in the IELTS exam.

A short practice item: "__, John visited Cambodia and he learned some Khmer." (a) Although (b) However (c) In fact.
|
{}
|
# Russian Roulette probability distribution
1. Mar 3, 2012
### SpY]
Hi. This isn't exactly like the previous thread, and it's not a homework problem. I'd just like to check the validity of my solution. It concerns the relation between a discrete and a continuous probability distribution.
The problem:
A player inserts a bullet into a 6-chamber revolver. He then spins the drum and points the revolver at his head. If he lives, he re-spins the drum and tries again. Find the mean number of trials before he shoots himself.
Discrete solution
So this is clearly a discrete problem, and the trials are statistically independent (because of the re-spinning), so I'll start by defining the discrete probability distribution for him dying in the n-th game.
$$P_D(n)=\left(\frac{5}{6}\right)^{n-1}\frac{1}{6}$$
Finding the mean for the discrete case is easy -
$$\overline{n}= \sum_{n=1}^{\infty}n\left(\frac{5}{6}\right)^{n-1}\frac{1}{6}$$
Let $$q=\frac{5}{6}$$ and rewrite the argument of the sum as a partial derivative:
$$\frac{1}{6} \sum_{n=1}^{\infty}\frac{\partial }{\partial q}q^n$$
Which can be taken out of the sum (linear) and we use the formula for geometric series (q<1):
$$\frac{1}{6} \frac{\partial }{\partial q}\frac{q}{1-q} = \frac{1}{6} \frac{q}{(1-q)^2}$$
Then substituting q=5/6 we get the mean to be 6.
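As a quick numerical sanity check (a short Python snippet of my own, not part of the derivation), the truncated series indeed converges to 6:

```python
# partial sums of E[n] = sum_{n>=1} n * (5/6)**(n-1) * (1/6)
p = 1.0 / 6.0
q = 5.0 / 6.0
mean = sum(n * q ** (n - 1) * p for n in range(1, 2001))
# the tail beyond n = 2000 is negligible, so mean equals 6 to high precision
```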
Continuous solution
Though not technically allowed, define the probability density function for dying in the x'th game:
$$p(x)=\left(\frac{5}{6}\right)^{x-1}\frac{1}{6}$$
Now to try this as a continuous problem through an integral:
$$\overline{x} =\int_{0}^{\infty}x \frac{1}{6} \big( \frac{5}{6} \big)^{x-1} dx$$
$$= \frac{1}{6} \frac{6}{5} \int_{0}^{\infty} x (\frac{5}{6})^x dx$$
Through IBP (or WolframAlpha), with u = x and dv = q^x dx, where q = 5/6:
$$= \frac{1}{5} \left. \left( \frac{x\cdot q^x}{\ln(q)} - \frac{q^x}{(\ln(q))^2} \right) \right|_0^\infty$$
For these limits I'm taking a leap of faith, particularly with the first term. At the lower limit both factors are finite and the term vanishes with x; but at the upper limit, the linear factor x goes to infinity while the exponential q^x goes to zero (since q < 1). The limit of a product is the product of the limits only when both limits are finite (I'm sure), so strictly speaking I'm claiming that this ∞·0 form equals 0...
By some numerics, putting in x=1000 the first term is of order 10^-77, and the graph of p(x) also tends to zero as x goes to infinity. So, taking both terms at the upper limit to vanish, evaluating the second term at the lower limit leaves
$$= \frac{1}{5\cdot(ln(q))^2}$$
Substituting q, I get the mean to be 6.02..., sufficiently close to the 6 I got previously. I'm not sure of the significance of the discrepancy, though...
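A small Python check (my own, using the closed form derived above) confirms that both the closed form and a direct Riemann sum of the integral land near 6.0166:

```python
import math

q = 5.0 / 6.0
closed_form = 1.0 / (5.0 * math.log(q) ** 2)  # the expression above, ~6.0166

# left Riemann sum of the integral of x * (1/6) * q**(x-1) over [0, 400];
# the tail beyond x = 400 is negligible since q**400 is astronomically small
dx = 0.001
riemann = sum(
    (k * dx) * (1.0 / 6.0) * q ** (k * dx - 1.0) * dx
    for k in range(1, 400_000)
)
```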
My question is: was what I did with the integral mathematically valid? This could be one of those cases where getting the correct answer doesn't necessarily mean your method is OK.
Also, can we use a continuous distribution (integral) as an approximation for a discrete distribution (riemann sum)?
Last edited: Mar 3, 2012
2. Mar 3, 2012
### mathman
The answer to the first question is yes: for q < 1, $q^x$ goes to 0 a lot faster than x becomes infinite. The easiest way to see it is to use L'Hôpital's rule on $x/q^{-x}$.
Using the integral as an approximation is valid.
Lemma 37.33.4. Let $X \to S$ be a smooth morphism of schemes. Let $x \in X$ with image $s \in S$. Then
1. The number of geometric branches of $X$ at $x$ is equal to the number of geometric branches of $S$ at $s$.
2. If $\kappa (x)/\kappa (s)$ is a purely inseparable1 extension of fields, then the number of branches of $X$ at $x$ is equal to the number of branches of $S$ at $s$.
Proof. Follows immediately from More on Algebra, Lemma 15.105.8 and the definitions. $\square$
[1] In fact, it would suffice if $\kappa (x)$ is geometrically irreducible over $\kappa (s)$. If we ever need this we will add a detailed proof.
On some exact distributional results based on Type-I progressively hybrid censored data from exponential distributions. (English) Zbl 1365.62061
Summary: In this paper, we present an approach for deriving the exact distributions of the maximum likelihood estimators (MLEs) of location and scale parameters of a two-parameter exponential distribution when the data are Type-I progressively hybrid censored. In addition to this new result for the two-parameter exponential model, we also obtain much simpler expressions for those cases of Type-I hybrid censored data which have been studied before. Our results are obtained by a new approach based on the spacings of the data. In particular, we show that the density function of the scale estimator can be expressed in terms of $$B$$-spline functions, while the location estimator is seen to have a right-truncated exponential distribution.
##### MSC:
- 62E15 Exact distribution theory in statistics
- 62N02 Estimation in survival analysis and censored data
- 62N05 Reliability and life testing
### Calculus Fundamentals
In the last quiz, we looked at some examples of limits. Here is the general idea.
We say that the limit of the function $f$ as $x$ approaches $a$ is the number $L$ if, as $x$ gets closer and closer to $a,$ the function values $f(x)$ get closer and closer to $L.$ If there is no such number $L,$ we say the limit does not exist.
When the limit exists, we use the notation $\lim\limits_{x \to a} f(x) = L.$
In this picture, for example, the limit of the function (in blue) as $x$ approaches 2 (from either side) is 4.
As the input approaches 2, the output approaches 4.
In some ways this is a simple idea, but as we’ll see, there are plenty of subtleties involved!
# Limits of Functions
Let’s start with a straightforward example. Here’s a graph of a function $f.$ What is the limit of $f(x)$ as $x$ approaches 1? In other words, as the input gets closer and closer to 1, what value is the output getting closer to?
# Limits of Functions
Now consider the function $f$ given by the formula:
$f(x) = \begin{cases} x^2, & x \neq 1 \\ 3, & x = 1 \end{cases}$
In other words, $f$ is the usual function $y=x^2,$ except that we’ve set the value at $x=1$ to be 3. What is $\lim\limits_{x \to 1} f(x)$?
# Limits of Functions
In the previous example, the value of the function at 1 was 3. But the limit was still 1, because as the $x$ values get closer and closer to 1, the function values get closer and closer to 1. This is an important point:
$\lim\limits_{x \to a} f(x)$ has nothing to do with the value of $f$ at $a$ itself! It only says something about what happens as $x$ gets close to $a$.
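A quick numerical sketch (Python, not part of the quiz itself) makes this concrete: redefining the value at $x=1$ does nothing to the limit.

```python
def f(x):
    # the function from the previous page: x^2 everywhere except f(1) = 3
    return 3 if x == 1 else x**2

# the value at 1 is 3, but nearby values track x^2 and head toward 1
near = [f(1 + h) for h in (0.1, 0.01, -0.01, 0.001)]
```

Every entry of `near` is close to 1, even though `f(1)` itself is 3.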
# Limits of Functions
Here’s another interesting example. Define
$f(x) = \begin{cases} -1, & x \leq 0 \\ 1, & x > 0 \end{cases}$
What is $\lim\limits_{x \to 0} f(x)$?
Note: For this to exist, the value of the function, $f(x),$ must be getting closer to some number $L$ as $x$ gets closer to 0… no matter how close $x$ gets to 0.
# Limits of Functions
This is our first example in this quiz of a limit that doesn’t exist. It’s true that as $x$ approaches 0 from the right, the function values approach 1. And as $x$ approaches 0 from the left, the function values approach -1. But this means there’s no single $L$ that the function approaches no matter how close $x$ gets to 0. So the limit doesn’t exist.
This example, where the "right-hand" (as $x$ approaches from the right) and "left-hand" (as $x$ approaches from the left) limits exist but aren’t equal, is the simplest way a limit might not exist. But there are many other ways. For example, in the previous quiz we saw that $\lim\limits_{x \to 0} \sin\left(\frac{1}{x}\right)$ does not exist, because as $x$ gets small, $\frac{1}{x}$ gets large, and so $\sin$ just oscillates between -1 and 1, instead of approaching any particular $L.$
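The disagreement between the two one-sided limits can be seen by sampling the step function on both sides of 0 (an illustrative Python sketch; the sample points are arbitrary):

```python
def f(x):
    # the step function from the previous page
    return -1 if x <= 0 else 1

left  = [f(-10.0**-k) for k in range(1, 8)]   # approaching 0 from the left
right = [f(10.0**-k) for k in range(1, 8)]    # approaching 0 from the right
```

Every left-hand sample is -1 and every right-hand sample is 1, so no single $L$ works.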
# Limits of Functions
The function $f(x),$ shown below, is defined on the interval $(0,9].$
How many of the limits below exist?
• $\displaystyle \lim_{x \rightarrow 2} f(x)$
• $\displaystyle \lim_{x \rightarrow 3} f(x)$
• $\displaystyle \lim_{x \rightarrow 5} f(x)$
• $\displaystyle \lim_{x \rightarrow 7} f(x)$
# Limits of Functions
Usually when we need to compute a limit in calculus, we won’t be presented with a graph, but with an algebraic expression. For example, let
$f(x) = \frac{x^2+2x-8}{x-2}$
What is $\lim\limits_{x \to 3} f(x)?$
# Limits of Functions
The last example was easy, because everything was well-behaved at $x=3.$ (In a later quiz, we’ll see this happens whenever the function is continuous.) Now consider:
$\lim_{x \to 2} \frac{x^2+2x-8}{x-2}$
The function is undefined at $x=2,$ because of the denominator. We simply cannot evaluate $f(2).$ But we can still investigate the limit as $x$ approaches 2, because that only depends on what $f$ is doing near 2, not at 2. In fact, notice that the numerator is also 0 when you plug in 2. This is another example of a $\frac{0}{0}$ indeterminate form from the first chapter. When we encounter such a thing, the limit is not obvious. Often though, we can discover it by algebraic manipulation.
What is the limit? (Hint: factor the numerator.)
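As a sanity check on the algebra, one can tabulate $f$ near 2 (a Python sketch assuming nothing beyond the formula above):

```python
def f(x):
    # undefined at x = 2; since x^2 + 2x - 8 = (x + 4)(x - 2),
    # f(x) equals x + 4 everywhere else
    return (x**2 + 2*x - 8) / (x - 2)

samples = [f(x) for x in (1.9, 1.99, 1.999, 2.001, 2.01, 2.1)]
```

The samples climb through 5.9, 5.99, ... and down from 6.1, 6.01, ..., closing in on the same value from both sides.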
# Limits of Functions
For the last three questions, we’re going to look at a strange and interesting example. The same basic idea behind limits hasn’t changed: $\lim\limits_{x \to a} f(x) = L$ means that as $x$ approaches $a,$ the values $f(x)$ approach $L.$
Define:
$f(x) = \begin{cases} x, & \text{if } \; x = \frac{1}{n} \text{ where } n \text{ is an integer} \\ 0, & \text{otherwise.} \end{cases}$
For example, $f(\frac{1}{2}) = \frac{1}{2},$ $f(0) = 0,$ and $f(\frac{2}{3}) = 0.$
Try to get a feel for what this function looks like. There will be a picture on the next page, but see if you can work it out without looking.
# Limits of Functions
Let's look at a function $f$ that is 0 everywhere except at the points $x = \frac{1}{n}$ (for integer $n$), where it lies on the line $y=x$.
Part of the graph of $f$ looks something like this:
Now, what is the limit of $f(x)$ as $x$ approaches $\frac{3}{7}$?
# Limits of Functions
The value of $f$ is 0, except at the points $x = \frac{1}{n}$, which lie on the line $y=x$.
What about $\displaystyle\lim_{x \to \frac{1}{3}} f(x)\ ?$
# Limits of Functions
The value of $f$ is 0, except at the points $x = \frac{1}{n}$, which lie on the line $y=x$.
Now for the most interesting question. What is $\lim\limits_{x \to 0} f(x)\ ?$
It’s hard to visualize exactly what’s happening near 0. But you know the rule: the function value is 0, except at points like $\frac{1}{2},$ $\frac{1}{3},$ $\frac{1}{4},$ etc. So as $x$ gets smaller and smaller, what happens to the $f(x)$ values?
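One way to experiment with this function without floating-point ambiguity is to use exact rationals. The sketch below is our own illustrative implementation, not part of the quiz; the membership test via the reduced numerator is the key trick:

```python
from fractions import Fraction

def f(x):
    # x equals 1/n for a nonzero integer n exactly when its reduced
    # numerator is +1 or -1
    if x != 0 and abs(x.numerator) == 1:
        return x
    return Fraction(0)

# matches the quiz's examples: f(1/2) = 1/2, f(0) = 0, f(2/3) = 0
values = [f(Fraction(1, 2)), f(Fraction(0)), f(Fraction(2, 3))]
```

Since $|f(x)| \le |x|$ for every $x$, the function is squeezed toward 0 as $x \to 0$, regardless of how many $\frac{1}{n}$ points crowd in near the origin.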
# Limits of Functions
Basemi I. Selim, Lei Du, Bo Yu, Xuanru Zhu. The GPBiCG($m,l$) Method for Solving General Matrix Equations [J]. Journal of Mathematical Research with Applications, 2019, 39(4): 408-432.
The GPBiCG($m,l$) Method for Solving General Matrix Equations
DOI:10.3770/j.issn:2095-2651.2019.04.008
Authors and affiliations:
- Basemi I. Selim — School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, Liaoning, China; Department of Mathematics and Computer Science, Menoufia University, Shebin El-Kom 32511, Egypt
- Lei Du, Bo Yu, Xuanru Zhu — School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, Liaoning, China
The generalized product bi-conjugate gradient (GPBiCG($m,l$)) method has been recently proposed as a hybrid variant of the GPBiCG and the BiCGSTAB methods to solve the linear system $Ax = b$ with non-symmetric coefficient matrix, and its attractive convergence behavior has been authenticated in many numerical experiments. By means of the Kronecker product and the vectorization operator, this paper aims to develop the GPBiCG($m,l$) method to solve the general matrix equation $$\sum^{p}_{i=1}{\sum^{s_{i}}_{j=1} A_{ij}X_{i}B_{ij}} = C,$$ and the general discrete-time periodic matrix equations $$\sum^{p}_{i=1}{\sum^{s_{i}}_{j=1} (A_{i,j,k}X_{i,k}B_{i,j,k}+C_{i,j, k}X_{i,k+1}D_{i,j,k})} = M_{k},~~k = 1, 2, \ldots,t,$$ which include the well-known Lyapunov, Stein, and Sylvester matrix equations that arise in a wide variety of applications in engineering, communications and scientific computations. The accuracy and efficiency of the extended GPBiCG($m,l$) method assessed against some existing iterative methods are illustrated by several numerical experiments.
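The vectorization step described above rests on the standard identity $\mathrm{vec}(AXB) = (B^{T} \otimes A)\,\mathrm{vec}(X)$, which turns each matrix equation into an ordinary linear system. It can be checked numerically (a NumPy sketch with arbitrary random dimensions; this shows only the reduction, not the GPBiCG($m,l$) iteration itself):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # plays the role of one A_{ij}
X = rng.standard_normal((4, 5))   # the unknown X_i
B = rng.standard_normal((5, 2))   # plays the role of one B_{ij}

def vec(M):
    # vec() stacks columns, so flatten in Fortran (column-major) order
    return M.flatten(order="F")

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```

Summing such Kronecker terms over $i$ and $j$ yields the single coefficient matrix to which a Krylov solver like GPBiCG($m,l$) can be applied.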
Article
# Evidence of a modern deep-water magmatic hydrothermal system in the Canary Basin (Eastern Central Atlantic Ocean)
## Abstract
New seismic profiles, bathymetric data and sediment-rock sampling document for the first time the discovery of hydrothermal vent complexes and volcanic cones at 4800-5200 m depth related to recent volcanic and intrusive activity in an unexplored area of the Canary Basin (Eastern Atlantic Ocean, 500 km west of the Canary Islands). A complex of sill intrusions is imaged on seismic profiles showing saucer-shaped, parallel or inclined geometries. Three main types of structures are related to these intrusions. Type I consists of cone-shaped depressions developed above inclined sills interpreted as hydrothermal vents. Type II is the most abundant and is represented by isolated or clustered hydrothermal domes bounded by faults rooted at the tips of saucer-shaped sills. Domes are interpreted as seabed expressions of reservoirs of CH4- and CO2-rich fluids formed by degassing and contact metamorphism of organic-rich sediments around sill intrusions. Type III are hydrothermal-volcanic complexes originated above stratified or branched inclined sills connected by a chimney to the seabed volcanic edifice. Parallel sills sourced from the magmatic chimney also formed domes surrounding the volcanic cones. Cores and dredges revealed that these volcanoes, which must be among the deepest in the world, consist of OIB-type basanites with an outer ring of blue-green hydrothermal Al-rich smectite muds. Magmatic activity is dated at 0.78±0.05 and 1.61±0.09 Ma based on lava samples (K/Ar method), and at 25-237 ky based on tephra layers within cores. The Subvent hydrothermal-volcanic complex constitutes the first modern system reported in deep-water oceanic basins related to intraplate hotspot activity.
... The eastern Canary Basin is one of the few places in the world where to study in the Quaternary the interaction and relationship among volcanism, tectonic and giant submarine landslide. Here, an intensive research has been carried out in fields such as the Quaternary or current volcanic and hydrothermal activity (Klügel et al., 2020;van den Bogaard, 2013;Medialdea et al., 2017;Somoza et al., 2017), gravitational instabilities and mass-transport deposits (MTDs) (Georgiopoulou et al., 2009(Georgiopoulou et al., , 2010Hunt et al., 2011Hunt et al., , 2013Hunt et al., , 2014Palomino et al., 2016;León et al., 2019), the relationship between MTDs and volcanic activity (Hunt et al., 2014;Hunt and Jarvis, 2017;León et al., 2019) and the presence of critical metallic elements (Marino et al., 2017). However, only a few of these studies have focused on the relationship between tectonics, volcanic activity and seafloor morphology Sánchez-Guillamón et al., 2018a, 2018b. ...
... This tectonic fabric also controls the bulges, domes and volcanic reliefs (Figs. 12, 13 and 14). In MCS profiles, the NNE-SSW structures controlling the structural reliefs are deep-rooted in the oceanic basement (Fig. 10D) with the downthrown block seaward, as evidenced by Medialdea et al. (2017) and Reston et al. (2004). We infer that the NNE-SSW trend reveals the oceanic fabric related to the ridge axis fabric (the abyssal hill fabric), which reflects the oceanic basement blocks controlled by normal faults perpendicular to the oceanic fracture zone. ...
... In the Lower Pleistocene, this volcano-tectonic activity decreased to practical inactivity in the southern CIVP. The large MTDs in branches V and VI ceased and the tectonic and volcano-tectonic activity receded to a local and latent activity appearing only as local hydrothermal sites (Subvent Area; Medialdea et al., 2017), structural reliefs, local seafloor instabilities (e.g. the San Borondón crest; Fig. 7A) or small flank collapses (e.g. the MTDs to the north of the Echo seamount; Fig. 7D). Based on only geomorphological indicators, the latent volcano-tectonic activity could be possible extended to the Holocene. ...
Article
This paper integrates sedimentary, tectonic and volcanic geological processes inside a model of volcano-tectonic activity in oceanic intraplate domains related to rifted continental margins. The study case, the eastern Canary Basin (NE Atlantic), is one of the few places in the world where giant MTDs and Quaternary volcanic and hydrothermal edifices occur in intraplate domains. In this paper, we analyse how two structural systems (WNW-ESE and NNE-SSW) matching the oceanic fabric control the location of volcanic systems and seafloor tectonic reliefs, and subsequently the distribution of the main sedimentary systems. Linear turbidite channels, debris flow lobes and the lateral continuity of structural and volcanic reliefs follow a WNW-ESE trend matching the tracks of the oceanic fracture zones. Furthermore, escarpments, anticline axes and volcanic ridges follow a NNE-SSW trend matching normal faults delimiting blocks of oceanic basement. The morpho-structural analysis of all the above geomorphological features shows evidence of volcanic and tectonic activity from the middle-upper Miocene to the Lower-Middle Pleistocene, spread over the whole of the eastern Canary Basin and reaching the western Canary Islands. This reactivation changes the paradigm for the Canary Islands seamount province, previously reported as inactive since the Cretaceous. A tecto-sedimentary model is proposed for this period of time that can be applied in other intraplate domains of the world. A tectonic uplift in the study area with a thermal anomaly triggered volcanic and hydrothermal activity and the subsequent flank collapse and emplacement of mass transport deposits on the Western Canary Slope. Furthermore, this uplift reactivated the normal basement faults, both trending WNW-ESE and NNE-SSW, generating folds and faults that control the location of turbidite channels, escarpments, mass transport deposits and volcanic edifices.
... At volcanically active seamounts, vigorous hydrothermal activity can be driven by magmatic heat sources at comparatively shallow levels, which is common at arc settings (Butterfield, 2000;de Ronde and Stucker, 2015;Caratori Tontini et al., 2019) but also at some intraplate volcanoes. Examples are very rare, however, and are mostly confined to active systems (Sakai et al., 1987;Staudigel et al., 2004;German et al., 2020) with few exceptions (Medialdea et al., 2017). ...
... In the SW, locally elevated heat flow values coincide with zones of seismic amplitude blanking (Figure 4A), i.e., reduction of the amplitude of seismic reflections caused, e.g., by the presence of gas hydrates (Lee and Dillon, 2001). A prominent blanking zone 10 km southwest of the seamount summit shows upward bending of adjacent reflectors, resembling doming linked to deeper magmatic intrusions and related hydrothermal activity (Berndt et al., 2016;Medialdea et al., 2017). This feature seems to be rather old because the overlying reflectors are not bent; it is also not accompanied by a heat flow anomaly ( Figure 4B). ...
... If the development of hydrothermal circulation and chemosynthetic communities at Henry Seamount was indeed a consequence of single magmatic pulses, then similar scenarios might be envisaged for many other volcanic seamounts in the deep ocean basins. Whether they are monogenetic or form by a succession of eruptions over a long period of time, each eruption has the potential to drive ephemeral hydrothermal activity (e.g., Medialdea et al., 2017;German et al., 2020), which may provide habitats for chemosynthetic communities. The same holds for the submarine flanks of volcanic islands that are often scattered with presumably monogenetic volcanic cones (e.g., Santana-Casiano et al., 2016). ...
Article
Our knowledge of venting at intraplate seamounts is limited. Almost nothing is known about past hydrothermal activity at seamounts, because indicators are soon blanketed by sediment. This study provides evidence for temporary hydrothermal circulation at Henry Seamount, a re-activated Cretaceous volcano near El Hierro island, close to the current locus of the Canary Island hotspot. In the summit area at around 3000–3200 m water depth, we found areas with dense coverage by shell fragments from vesicomyid clams, a few living chemosymbiotic bivalves, and evidence for sites of weak fluid venting. Our observations suggest pulses of hydrothermal activity over the last thousands to tens of thousands of years, now waning. We also recovered glassy heterolithologic tephra and dispersed basaltic rock fragments from the summit area. Their freshness suggests eruption during the Pleistocene to Holocene, implying minor rejuvenated volcanism at Henry Seamount probably related to the nearby Canary hotspot. Heat flow values determined on the surrounding seafloor (49 ± 7 mW/m ² ) are close to the expected background for conductively cooled 155 Ma old crust; the proximity to the hotspot did not result in elevated basal heat flow. A weak increase in heat flow toward the southwestern seamount flank likely reflects recent local fluid circulation. We propose that hydrothermal circulation at Henry Seamount was, and still is, driven by heat pulses from weak rejuvenated volcanic activity. Our results suggest that even single eruptions at submarine intraplate volcanoes may give rise to ephemeral hydrothermal systems and generate potentially habitable environments.
... These magmatic intrusions have important implications for hydrocarbon exploration (Hansen et al., 2008;Holford et al., 2012), metal mineralization (Nelson, 2000), global climate change , and basin-scale processes . Examples of sill-dome structures have been well described in the southern Australian margin , the Norwegian Sea (Planke et al., 2005;Omosanya et al., 2017), the eastern central Atlantic (Medialdea et al., 2017;Sánchez-Guillamón et al., 2018a, 2018b and other worldwide magma-rich margins. Igneous intrusions are also widely distributed among the South China Sea (SCS) basins and continental slopes (Yan et al., 2006;Song et al., 2017;Wang et al., 2019). ...
... Igneous intrusions may take various forms when emplaced in sedimentary layers (Lee et al., 2006), among which sills are the most common ones. Emplacement of igneous sills within sediments can result in the development of forced folds (Hansen and Cartwright, 2006;Jackson et al., 2013;Sun et al., 2014;Omosanya et al., 2017;Zhang et al., 2017) and/or formation of hydrothermal vent complexes Svensen et al., 2004;Planke et al., 2005;Hansen et al., 2008;Magee et al., 2015;Medialdea et al., 2017;Omosanya et al., 2018;Wang et al., 2019). These sill-related forced folds typically manifest as domes on the seafloor (Sánchez-Guillamón et al., 2018a, 2018b, and some may be overlain by younger strata, which will date the timing of intrusion event (Trude et al., 2003;Hansen and Cartwright, 2006;Jackson et al., 2013). ...
... These gas-rich fluids will firstly migrate toward the edges of the sill (Iyer et al., 2013) and generate the peripheral faults rooted at both ends of the intrusion (e.g. Medialdea et al., 2017). The gradual accumulation of gas-rich fluids filled in sediments may uplift the overlying strata bound by the peripheral faults, which are shown as parallel convex reflections immediately above the igneous sill, called as forced folds/domes in the study area (Hansen and Cartwright, 2006). ...
Article
Magmatism can exert significant impact on sedimentary basins such as the Zhongjiannan Basin (ZJNB), western South China Sea. We have evaluated multibeam bathymetric and multichannel seismic reflection data acquired by the Guangzhou Marine Geological Survey in recent years, in order to investigate the distribution, the characteristics and the subsurface structures related to seafloor domes found in the northeastern ZJNB. Our data revealed forty-two domes at water depths between 2312 m and 2870 m, which are clustered around volcanic mounds, large seamounts and along the edge of the central depression in the study area. These domes are generally circular to elongate or irregular in plan view with large basal areas, and they also have gentle flanks (dips of 1.46°–7.73°) with vertical reliefs ranging from tens to hundreds of meters. In seismic sections, the majority of the domes are underlain by variably shaped and complex magmatic sills, which provide a cause-effect relationship between dome formation and igneous intrusions. These intrusions heat the surrounding organic-rich sediments, release hydrocarbons, fluidize sediment pore waters and form gas-rich fluids, which fill the sediment and uplift the overlying strata immediately above the sills to form forced folds, manifested as seafloor domes. These sill-folds-dome structures have important implications for understanding geomorphologic features caused by sills emplaced at depth.
... Submarine magmatic structures such as volcanoes, vents and intrusions often occur in rifted margins, ocean spreading centre, intraplate hot spots and arcs along subduction zones (De Ronde et al., 2005;Harding et al., 2017;Hekinian et al., 1991;Ingebritsen, Geiger, Hurwitz, & Driesner, 2010;Langmuir et al., 1997;Magee, Jackson, & Schofield, 2013;Medialdea et al., 2017;Planke, Symonds, Alvestad, & Skogseid, 2000, Planke, Rasmussen, Rey, & Myklebust, 2005Sun et al., 2014;Wheeler et al., 2013). They have been extensively studied over the past decades due to their importance for influences on basin tectonic evolution, petroleum exploration and submarine mineral deposits (Darros De Matos, 2000;Fjeldskaar, Helset, Johansen, Grunnaleite, & Horstad, 2008;Medialdea et al., 2017;Petersen et al., 2016;Pirajno & Van Kranendonk, 2005;Sun, Wu, Cartwright, Lüdmann, & Yao, 2013). ...
... Submarine magmatic structures such as volcanoes, vents and intrusions often occur in rifted margins, ocean spreading centre, intraplate hot spots and arcs along subduction zones (De Ronde et al., 2005;Harding et al., 2017;Hekinian et al., 1991;Ingebritsen, Geiger, Hurwitz, & Driesner, 2010;Langmuir et al., 1997;Magee, Jackson, & Schofield, 2013;Medialdea et al., 2017;Planke, Symonds, Alvestad, & Skogseid, 2000, Planke, Rasmussen, Rey, & Myklebust, 2005Sun et al., 2014;Wheeler et al., 2013). They have been extensively studied over the past decades due to their importance for influences on basin tectonic evolution, petroleum exploration and submarine mineral deposits (Darros De Matos, 2000;Fjeldskaar, Helset, Johansen, Grunnaleite, & Horstad, 2008;Medialdea et al., 2017;Petersen et al., 2016;Pirajno & Van Kranendonk, 2005;Sun, Wu, Cartwright, Lüdmann, & Yao, 2013). ...
... The magmatic hydrothermal systems were defined as aqueous fluid systems derived from or influenced by magma bodies (Ingebritsen et al., 2010;Pirajno & Van Kranendonk, 2005). They have been well studied at scales ranging from several kilometres to hundreds of kilometres in the mid-ocean ridges, volcanic islands, passive margins and magmatic arcs along subduction zones, and play an important role in linking among the lithosphere, hydrosphere and biosphere (De Ronde et al., 2005;Gay et al., 2012;Hansen, 2006;Haymon, 1996;Ingebritsen et al., 2010;Lowell, 1991;Lowell & Germanovich, 1994;Medialdea et al., 2017;Planke, Rasmussen, Rey, & Myklebust, 2005;Reynolds et al., 2017;Wheeler et al., 2013). Consequently magmatically induced fluid flows may have a role in volcanically active regions such as the SCS. ...
Article
Submarine magmatism and associated hydrothermal fluid flows have a significant feedback influence on the petroleum geology of sedimentary basins. This study uses new seismic profiles and multi‐beam bathymetric data to examine the morphology and internal architecture of post‐seafloor spreading magmatic structures, especially volcanoes of the Xisha uplift, in extensive detail. We discover for the first time hydrothermal systems derived from magmatism in the northwestern South China Sea. Numerous solitary volcanoes and volcanic groups occur in the Xisha uplift and produce distinct seismic reflections together with plutons. Sills and other localized amplitude anomalies were fed by extrusions/intrusions and associated fluid flows through fractures and sedimentary layers that may act as conduits for magma and fluid transport. Hydrothermal structures such as pipes and pockmarks mainly occur in the proximity of volcanoes or accompany volcanic groups. Pipes, pockmarks and localized amplitude anomalies mainly constitute the magmatic hydrothermal systems, which are probably driven by post‐seafloor spreading volcanoes/plutons. The hydrothermal fluid flows released by magma degassing or/and related boiling of pore fluids/metamorphic dehydration reactions in sediments produced local over‐pressures, which drove upward flow of fluid or horizontal flow into the sediments or even the seafloor. Results show that post‐seafloor spreading magmatic activity is more intense during the 5.5 Ma event than the 2.6 Ma one, whereas the hydrothermal activity is more active during 2.6 Ma than in 5.5 Ma. Our analysis indicates that post‐seafloor spreading magmatism may have a significant effect on hydrocarbon maturation and gas hydrate formation in the Xisha uplift and adjacent petroliferous basins. Consequently the study presented here improves our understanding of hydrocarbon exploration in the northwestern South China Sea.
... Here, the term "mound" is used loosely to refer to all submarine edifices regardless of their basal shape and tentative origin. The genesis of these features was first reported and characterised by Medialdea et al. (2017), and has been associated with buried intrusive complexes accompanying volcanic and hydrothermal activity. ...
... MT1 shows a proportional relationship between size and slope variables, reaching only 30 m in height, having a mean slope of 1.3°, and mean widths of around 2.4 km. The circular shape of these mounds can presumably be linked to the geometry of the underlying systems: dome-type forced folds related to magmatic intrusions (Medialdea et al., 2017). A secondary ...
... The main acoustic characteristics of this type of mound are represented by ET1 (Figs. 7, 10). Up-doming of these units is attributed to the progressive generation of overpressure and differential compaction at depth (Williams, 1987), probably related to folds formed due to the inclined sill-type intrusions (Medialdea et al., 2017). Small offsets in these mounds are associated with small-scale faulting observed in HRPP (i.e., M34 in Fig. 7C) distributed preferentially along the summit and flanks. ...
Article
The increasing volume of high-resolution multibeam bathymetry data collected along continental margins and adjacent deep seafloor regions is providing further opportunities to study new morphological seafloor features in deep water environments. In this paper, seafloor mounds have been imaged in detail with multibeam echosounders and parametric sub-bottom profilers in the deep central area of the Canary Basin (~350–550 km west off El Hierro Island) between 4800 and 5200 mbsl. These features have circular to elongated shapes with heights of 10 to 250 m, diameters of 2–24 km and flank slopes of 2–50°. Based on their morphological features and subsurface structures, these mounds have been classified into five different types that follow a linear correlation between height and slope but not between height and size. The first, second (Subgroup A), and third mound-types show heights lower than 80 m and maximum slopes of 35°, with extents ranging from 2 to 400 km², and correspond to domes formed at the surface created by intrusions located at depth that have not yet outcropped. The second (Subgroup B), fourth, and fifth mound-types reach heights of up to 250 m, maximum slopes of 47° and sizes between 10 and 20 km², and are related to the expulsion of hot and hydrothermal fluids and/or volcanics from extrusive deep-seated systems. Based on the constraints of their morphological and structural analyses, we suggest that morphostructural types of mounds are intimately linked to a specific origin that leaves its footprint in the morphology of the mounds. We propose a growth model for the five morphostructural types of mounds in which different intrusive and extrusive phenomena represent the dominant mechanisms for mound growth evolution. These structures are also affected by tectonics (bulge-like structures clearly deformed by faulting) and mass movements (slide scars and mass transport deposits).
In this work, we report how intrusive and extrusive processes may affect the seafloor morphology, identifying a new type of geomorphological feature as ‘intrusive’ domes that have, to date, only been reported in fossil environments but might extend to other oceanic areas.
... These seafloor features are both circular and elongated in shape, with diameters ranging from 2 to 24 km, heights of up to 250 m, and flank slopes ranging from 2 to 24° [40]. In earlier studies, they were characterized by [41] as various types of structures including both hydrothermal domes and different types of volcanoes related to recent volcanic and intrusive activity. The authors of [42] also classified them into five morphostructural types of edifices (MT1 to MT5), intimately linked to specific origins (Figure 1B). ...
... [42]. The highlighted mounds are categorized according to their origin following [41]. ...
... This basin has been characterized as having a heterogeneous distribution of various volcanic elevations including seamounts, hills, and seafloor mounds [42,44]. Nevertheless, in the central area of this basin, known as the Subvent Area, these seafloor mounds are hydrothermal domes and scattered volcanoes related to Quaternary intrusive activity that gave rise to a huge magmatic sill complex together with volcanic activity [41]. Indeed, [42]. ...
Article
Digital elevation models (DEMs) derived from high-resolution acoustic technology have proven to be a crucial morphometric data source for research into submarine environments. We present a morphometric analysis of forty deep seafloor edifices located to the west of the Canary Islands, using a 150 m resolution bathymetric DEM. These seafloor structures are characterized as hydrothermal domes and volcanic edifices, based on a previous study, and they are also morphostructurally categorized into five types of edifice following an earlier classification. Edifice outline contours were manually delineated and the morphometric variables quantifying slope, size and shape of the edifices were then calculated using ArcGIS Analyst tools. In addition, we performed a principal component analysis (PCA) in which ten morphometric variables explain 84% of the total variance in edifice morphology. Most variables show a large spread and some overlap, with clear separations between the types of mounds. Based on these analyses, a morphometric growth model is proposed for both the hydrothermal domes and volcanic edifices. The model takes into account both the size and shape complexity of these seafloor structures. Growth occurs via two distinct pathways: the volcanoes predominantly grow upwards, becoming large cones, while the domes preferentially increase in volume through enlargement of the basal area.
... Seismic profiles across these domes show an uplifted and faulted sedimentary cover overlying an abrupt, high-amplitude terminating reflector, often displaying a saucer-shaped profile (Figure 7a,b). These deep reflectors are consistent with shallow magmatic intrusions (sills or laccoliths), which would induce domal uplift and faulting of the overlying sedimentary cover [Kumar et al., 2022, Medialdea et al., 2017, Montanari et al., 2017, Omosanya et al., 2017]. We interpret these domes as forced folds by analogy to the description of Paquet et al. [2019] and other sources [Montanari et al., 2017, and references therein]. ...
Article
Full-text available
Geophysical and geological data from the North Mozambique Channel acquired during the 2020–2021 SISMAORE oceanographic cruise reveal a corridor of recent volcanic and tectonic features 200 km wide and 600 km long within and north of the Comoros Archipelago. Here we identify and describe two major submarine tectono-volcanic fields: the N'Droundé province oriented N160°E north of Grande-Comore Island, and the Mwezi province oriented N130°E north of Anjouan and Mayotte Islands. The presence of popping basaltic rocks sampled in the Mwezi province suggests post-Pleistocene volcanic activity. The geometry and distribution of recent structures observed on the seafloor are consistent with a current regional dextral transtensional context. Their orientations change progressively from west to east (~N160°E, ~N130°E, ~EW). The volcanism in the western part appears to be influenced by the pre-existing structural fabric of the Mesozoic crust. The 200 km-wide and 600 km-long tectono-volcanic corridor underlines the incipient Somalia–Lwandle dextral lithospheric plate boundary between the East-African Rift System and Madagascar.
... CSF02 is also located east of Mayotte, on the border of a topographic dome 10 km in diameter and 30 m in height (Figure 3). In volcanic areas, such morphology corresponds to a forced fold, often related to the intrusion of a saucer-shaped sill at depth and described in various geological contexts [Jackson et al., 2013, Medialdea et al., 2017, Magee et al., 2017], as well as experimentally reproduced [Galland, 2012]. On a seismic profile across the site, we observe that doming affects the underlying sedimentary succession (0.5 s twtt) down to an older volcanic layer that affects the seismic image and precludes sill localization (Figure 3). ...
Article
Full-text available
Heat flow in the Northern Mozambique Channel is poorly constrained, with only a few old measurements indicating relatively low values of 55–62 mW/m². During the SISMAORE cruise to the Northern Mozambique Channel, we obtained new heat flow measurements at four sites, using sediment corers equipped with thermal probes. Three of the sites yield values of 42–47 mW/m², confirming low regional heat flow in this area. Our values are consistent with a Jurassic oceanic lithosphere around Mayotte, although the presence of very thin continental crust or continental fragments could also explain the observed heat flow. Our values do not support a regional thermal anomaly and so do not favor a hotspot model for Mayotte. However, at a fourth site located 30 km east of the submarine volcano that appeared in 2018 east of Mayotte, we measured a very high heat flow value of 235 mW/m², which we relate to the circulation of hot fluids linked to recent magmatic activity.
... ures 3A, B, 4, 6, and 7) [Medialdea et al., 2017]. We thus interpret the disturbed seismic facies in the seal bypass systems as the result of a network of almost vertical dykes or fractures, not imaged in seismic reflection, in which fluids and/or melt are rising from crustal or sub-crustal levels, up to the submarine volcanic edifices. ...
Article
Full-text available
A multichannel seismic reflection profile acquired during the SISMAORE cruise (2021) provides the first in-depth image of the submarine volcanic edifice, named Fani Maore, that formed 50 km east of Mayotte Island (Comoros Archipelago) in 2018–2019. This new edifice sits on a 140 m thick sedimentary layer, which overlies a major volcanic layer up to 1 km thick that extends over 120 km along the profile. This volcanic unit is made of several distinct seismic facies that indicate successive volcanic phases. We interpret this volcanic layer as witnessing the main phase of construction of the Mayotte Island volcanic edifice. A 2.2–2.5 km thick sedimentary unit is present between this volcanic layer and the top of the crust. A complex magmatic feeder system, composed of saucer-shaped sills and seal bypass systems, is observed within this unit. The deepest tip of this volcanic layer lies below the top-Oligocene seismic horizon, indicating that the volcanism of Mayotte Island likely began around 26.5 Ma, earlier than previously assumed. https://comptes-rendus.academie-sciences.fr/geoscience/articles/10.5802/crgeos.154/
... Existing seismic examples covering such structures are rare within the literature and mainly rely on the interpretation of post-stack seismic sections in the time domain (e. g., Medialdea et al., 2017). Here, we present depth-migrated seismic data covering several shallow-level magmatic systems, as well as a domal structure related to a magmatic intrusion. ...
Conference Paper
METEOR Cruise M150 BIODIAZ provided material from sublittoral down to deep-sea stations to incorporate innovative aspects into the study of seamount and island productivity and their potential role for the establishment of benthic assemblages comprising all size classes (George et al. 2021). The aim was to get a baseline on the diversity, faunal composition and distribution of shelf and deep-sea taxa and related sediments from three different Azorean islands (Flores, Terceira, and Santa Maria) and two adjacent seamounts (Princess Alice Bank, Formigas Bank). Such baseline shall serve to prove fundamental hypotheses regarding the role of seamounts/islands for marine organisms and the principle (bio)-sedimentary processes in the evolution of seamounts. The significance of potential endemism in zoobenthic communities based on the extensively sampled material is studied in the context of the geologic age, topographic isolation, phytoplankton productivity and diversity of the systems.
... The buried volcanoes and seamounts are generally identified based on their external geometries and internal seismic attributes on seismic profiles (Niyazi et al., 2021). Buried volcanoes and seamounts exhibit dome-shaped external geometries (Figs. 6, 9 and 10), high-amplitude top reflections, and chaotic or blank internal reflections (Figs. 9 and 10) (Magee et al., 2015; Zhang et al., 2016; Medialdea et al., 2017). Buried volcanoes are covered by sedimentary layers (Figs. 9 and 10), and seamounts have their tops exposed to the seafloor (Fig. 6). ...
Article
The South China Sea, which is located to the southeast of the Eurasian continent, developed as the result of intra-continental rifting and seafloor spreading on the South China margin. Passive margins are traditionally classified as one of two end-member types, magma-rich and magma-poor margins, based on the relative abundance or scarcity of magmatism during rifting and breakup. Previous studies suggest that the northern margin of the South China Sea is a magma-poor margin, lacking abundant magmatic activity during breakup. A growing body of work is beginning to recognize that significant, widespread magmatism may also be present on margins that are presently considered to be magma-poor. In this study, we use high-resolution 2D/3D seismic profiles, industrial well data, basalt geochemistry (major oxides, trace elements and isotopes) and published results to outline post-rift magmatism within the northern margin of the South China Sea. Four magmatic stages have occurred since lithospheric breakup. The first stage, from 32 to 23.6 Ma and mainly within the distal margin of the northern South China Sea, was dominated by magma intrusions and corresponded to the spreading of the East Sub-basin. The second stage, from 23.6–19.1 Ma, was mainly in the Baiyan Sag and surrounding uplifts, where explosive eruptions dominated. The third stage, from 19.1–10 Ma, occurred east of the Zhu III Depression and west of the Enping Sag and was dominated by quiet eruptions. The fourth stage, since 10 Ma, widely distributed in the southern Dongsha uplift, experienced scattered volcanic eruptions. Two basalt samples from 23.6–19.1 Ma (industrial well HJ1, Stage 2) and seven samples from 19.1–10 Ma (industrial well EP1, Stage 3) were analyzed for major oxides, trace elements and Sr-Nd-Pb-Hf isotope compositions.
Geochemical features of the trace elements show that these samples are characterized as OIB-like basalts, being highly enriched in LREEs (light rare earth elements) relative to HREEs (heavy rare earth elements). The Sr-Nd-Pb-Hf isotope compositions show that the samples all resemble ocean-island basalts with two mixing endmembers: depleted mid-ocean ridge basalt mantle (DMM) and enriched mantle II (EMII). Pb isotopic characteristics show the Dupal isotope anomaly in the northern margin of the South China Sea, and the geochemical data of all the samples signal the contribution of the Hainan Plume. Previous studies using geophysical methods have revealed the existence of a southeastward mantle flow from Tibet to the South China Sea and a branch of the Hainan Plume beneath the northern South China Sea. Based on our latest research and geological evidence published by other researchers, we propose that Stage 1 magmatism was caused by southeastward mantle flow stemming from the Indo-Eurasian collision, Stage 2 and Stage 3 magmatism were caused by the Hainan Plume and the activation of the Yangjiang-Yitongansha Fault Zone, whereas the last-stage magmatic activity was mainly related to the combination of the Hainan Plume and the activation of faults caused by the subduction of the SCS beneath the Luzon Arc at the Manila Trench.
... Alternatively, the acoustically transparent mounds off Madeira could be interpreted as structures linked to the migration and escape of over-pressurized fluids within the sedimentary column, such as mud volcanoes and domes. Mud volcanoes, domes and associated pockmarks are widely recognized features in continental margins (e.g., Judd and Hovland, 2007) and deep-water environments (e.g., Medialdea et al., 2017; Sánchez-Guillamón et al., 2018a; Sánchez-Guillamón et al., 2018b). The isolated mounds recognized off Madeira Island share some characteristics with acoustically transparent mounded features described by Rebesco et al. (2007) in Antarctica's distal sediment drifts. ...
Article
The deep-water sedimentary processes and morphological features offshore Madeira Island, located in the Central-NE Atlantic, have been scarcely studied. The analysis of new multibeam bathymetry, echo-sounder profiles and a few multichannel seismic reflection profiles allowed us to identify the main geomorphologies, geomorphic processes and their interplay. Several types of features were identified below 3800 m water depth, shaped mainly by i) the interplay between the northward-flowing Antarctic Bottom Water (AABW) and turbidity currents and ii) the interaction of the AABW with oceanic reliefs and the Madeira lower slope. Subordinate and localized geomorphic processes consist of tectono-magmatic slope instability, turbidity currents and fluid migration. The distribution of the morphological features defines three regional geomorphological sectors. Sector 1 represents a deep seafloor with abyssal hills, basement highs and seamounts inherited from Early Cretaceous seafloor spreading. Sector 2 is exclusively shaped by turbidity current flows that formed channels and associated levees. Sector 3 presents a more complex morphology dominated by widespread depositional and erosional features formed by AABW circulation, and a localized mixed contourite system developed by the interplay between the AABW circulation and WNW-ESE-flowing turbidity currents. The interaction of the AABW with abyssal hills, seamounts and basement ridges leads to the formation of several types of contourites: patch drifts, double-crest mounded bodies, and elongated, mounded and separated drifts. The patch drifts formed downstream of abyssal hills define a previously unknown field of relatively small contourites. We suggest they may be the result of localized vortexes formed when the AABW's flow impinges on these oceanic reliefs, producing the erosional scours that bound these features. The bottom currents in the area are known to be too weak (1–2 cm s⁻¹) to produce the patch drifts and scours.
Therefore, we suggest that these features could be relics at present, having developed when the AABW was stronger than today, as during glacial/end of glacial stages.
... Low-T hydrothermal vents after violent submarine volcanic eruptions generate long-term CO2 inputs to oceans due to the continuous degasification of the magmatic systems, mainly located on hot-spot volcanic islands like Hawaii or the Canary Islands. This is due to the high carbon content of the thick oceanic sediments below the submarine volcanoes, which is expelled by low-T hydrothermal vent systems [88]. ...
Article
Full-text available
In this work, we integrate five case studies harboring vulnerable deep-sea benthic habitats in different geological settings from the mid-latitude NE Atlantic Ocean (24–42° N). Data and images of specific deep-sea habitats were acquired with Remotely Operated Vehicle (ROV) sensors (temperature, salinity, potential density, O2, CO2, and CH4). Besides documenting some key vulnerable deep-sea habitats, this study shows that the distribution of some deep-sea coral aggregations (including scleractinians, gorgonians, and antipatharians), deep-sea sponge aggregations and other deep-sea habitats is influenced by water masses' properties. Our data support that the distribution of scleractinian reefs and aggregations of other deep-sea corals, from the subtropical to the north Atlantic, could depend on the latitudinal extents of the Antarctic Intermediate Waters (AAIW) and the Mediterranean Outflow Waters (MOW). In addition, the distribution of some vulnerable deep-sea habitats is influenced, at the local scale, by active hydrocarbon seeps (Gulf of Cádiz) and hydrothermal vents (El Hierro, Canary Islands). The co-occurrence of deep-sea corals and chemosynthesis-based communities has been identified in methane seeps of the Gulf of Cádiz. Extensive beds of living deep-sea mussels (Bathymodiolus mauritanicus) and other chemosymbiotic bivalves occur close to deep-sea coral aggregations (e.g., gorgonians, black corals) that colonize methane-derived authigenic carbonates.
... level and related to hydrothermal-volcanic activity (Medialdea et al., 2017;Sanchez-Guillamón et al., 2018). Magma injection causes differential uplifting, forced folding and faulting of the overlying sedimentary layers, and can induce the transport of hot fluids to the surface. ...
Article
A detailed morpho-bathymetric study of the Comoros archipelago, based on mostly unpublished bathymetric data, provides a first glimpse into the submarine section of these islands. It offers a complete view of the distribution of volcanic structures around the archipelago, allowing us to discuss the origin and evolution of this volcanism. Numerous volcanic cones and erosional-depositional features have been recognized throughout the archipelago. The magmatic supply is focused below one or several volcanoes for each island, but is also controlled by lithospheric fractures evidenced by volcanic ridges, oriented along the supposed Lwandle-Somali plate boundary. Massive mass-wasting morphologies also mark the submarine flanks of each island. Finally, the submarine geomorphological analysis made it possible to propose a new scheme for the succession of the islands' growth, diverging from the east-west evolution previously described in the literature.
... TM software for interpretation. Interval seismic velocities calculated by Medialdea et al. (2017) have been used. These authors have checked these data with available regional seismic velocity information and are in accordance with those reported in DSDP Sites 137-139 (Hayes et al., 1972). ...
Article
A new temporal history of mass wasting processes for the west of the Canary volcanic province is presented. Its onset has been estimated in the middle–upper Miocene (∼13.5 ± 1.2 Ma), matching with a critical period of construction for this volcanic province. Seismic profiles show an emplacement longevity (from the Miocene to Quaternary) in multiple events, defined by stacked lobes of debrites, linked to the flank collapses and volcanic avalanches of the volcanic edifices (islands and seamounts). An evolution of pathways and source areas has been detected from east (Miocene) to west (Quaternary); as well as a migration of the activity to the northwest (west of the Canary Islands: e.g. El Hierro and La Palma). Six connected branches (I–VI), three of them described for the first time here, of Quaternary seismic units of mass transport deposits (MTDs) have been characterized. The Pleistocene makes up a huge buried MTDs system, until now unknown, pointing a new mass transport sedimentological scenario. Finally, the two southernmost branches (V–VI), up to now unknown, are a mainly buried system of stacked and terraced lobes of debrites sourced mainly from the flank collapses of the volcanic seamounts of the Canary Island Seamount Province, apparently inactive from upper Cretaceous.
... Therefore, the seismicity distribution seems to be correct and may track an inclined magmatic structure. A recent study on seismic images of sills offshore El Hierro by Medialdea et al. (2017) shows the possibility of a wide range of sill shapes including inclined sills at depth. Moreover, a 3D tomography of El Hierro by Martí et al. (2017) revealed an inclined structure of low velocity at a depth of 12-16 km. ...
Article
Six different magmatic intrusions were detected around El Hierro Island in the two years that followed the end of the 2011–2012 submarine eruption. Each intrusion lasted between a few days and three weeks and produced intense seismic swarms and rapid ground deformation. We performed a hypoDD relocation of >6000 earthquakes and inverted the GPS data in order to obtain the location of the magma source of each intrusion. Each episode presents a spatial gap of 3–8 km between seismicity and magma source, with the earthquakes always located deeper than the deformation sources. We propose a magma plumbing system consisting of a deep structure injecting magma to a more ductile, shallower location beneath the El Hierro crust. While the seismicity is associated with the deeper structure, the ascent and accumulation of magma at shallower levels deforms the crust aseismically. The mechanism of most of these episodes consists of an initial injection of magma that produces most of the ground deformation, with high b-values of the seismicity indicating fluid-driven fracturing during the first days, and finishes with high-magnitude earthquakes and low b-values indicating overpressure of the injection process. There is a correlation between the seismic-to-geodetic moment ratio and the direction of propagation of each intrusion towards one of the volcanic rifts of the island, suggesting the possible existence of a deep structure beneath the island related to the triaxial origin of the island. This work presents important advances in the knowledge of monogenetic magmatic intrusions and, specifically, in those that occurred at El Hierro Island between 2011 and 2014, with important implications for future volcano monitoring in the Canary Islands.
... The sill complex may have fed dykes that locally reached the surface generating volcanoes, which are laterally offset with respect to the two main fault systems. Sill complexes may provide efficient magma flow pathways, transporting magma to the surface over great vertical and lateral distances, as suggested by several authors in different tectonic contexts (e.g., Magee et al., 2016; Medialdea et al., 2017). ...
Article
The tectonic framework of the northern sector of the Capo Granitola-Sciacca Fault Zone (CGSFZ), a NNE-oriented lithospheric strike-slip fault zone located in the Sicilian Channel (southern Italy), has been reconstructed with the aim of clarifying the relationships between the geometry and kinematics of the structures and the occurrence and distribution of the magmatic manifestations observed in the area. This has been achieved by the interpretation of a large dataset composed of 2-D multichannel seismic profiles, Chirp profiles, magnetic data and borehole information. In addition to the volcanic edifices known in the Graham and Terribile banks, this study has allowed the recognition of several other magmatic manifestations. The magmatic occurrences consist of small volcanic cones, buried magma ascents and potential igneous sills. The CGSFZ is bounded by two strike-slip fault systems, the Capo Granitola Fault System (CGFS) to the west and the Sciacca Fault System (SFS) to the east, dominated by positive flower structures generated by tectonic inversion of NNE-oriented late Miocene extensional faults. Only the southern part of the CGFS shows the presence of a sub-vertical, N-S oriented strike-slip master fault. The sector between the two fault systems does not show significant Pliocene-Quaternary tectonic deformation, except for its southern part hosting the Terribile Bank, which is dissected by WNW- to NW-trending normal faults developed during the late Miocene and later reactivated. This set of faults is currently active at the Terribile Bank, whereas it is buried by Pliocene-Quaternary deposits in the central and northern sectors of the CGSFZ. The observed magmatism is driven by a mechanism of non-plume origin. Magmas have exploited the faults of the CGFS and SFS as open pathways, which cut the whole lithosphere, reaching the asthenosphere and producing partial melting by simple pressure release.
Most of the magmatism develops along the strike-slip master fault associated with the CGFS and the normal faults affecting the Terribile Bank. The magmatic feeding of the Terribile Bank would be related to lateral magma migration coming from the structures of the SFS, which would use the open pathways represented by active normal faults. In the central-northern part of the CGSFZ, magmas migrate upward along lithospheric faults, then move laterally and rise toward the surface through NNE and NW-trending buried normal faults. These late Miocene faults do not reach the surface, and this may have favoured the emplacement of igneous sills, which in turn may explain the observed volcanic centres.
Article
Full-text available
This study presents recently reprocessed multi-channel seismic data and a multi-beam bathymetric map to reveal the geomorphology and stratigraphic architecture of the Yongle isolated carbonate platform in the Xisha Archipelago, northwestern South China Sea. Our results show that the upper slope angles of the Yongle carbonate platform exceed 10° and even reach ∼32.5°, whereas the lower slope angles vary from 0.5° to 5.3°. The variations of slope angles show that the margins of the Yongle Atoll range from escarpment (bypass) margins to erosional (escarpment) margins. The interior of the carbonate platform is characterized by sub-parallel to parallel, semi-continuous to continuous reflectors with medium-to-high amplitude and low-to-medium frequency. The platform shows a sub-flat to flat-topped geometry with aggradation and backstepping occurring on the platform margins. According to our seismic-well correlation, the isolated carbonate platform started forming in the Early Miocene, grew during the Early to Middle Miocene, and subsequently underwent drowning in the Late Miocene, Pliocene and Quaternary. Large-scale submarine mass transport deposits are observed in the southeastern and southern slopes of the Yongle Atoll and have reshaped the slopes since the Late Miocene. The magmatism and hydrothermal fluid flow pipes around the Yongle Atoll were active during 10.5–2.6 Ma. Their activity might have intensified dolomitization of the Xisha isolated carbonate platforms during the Late Miocene to Pliocene. Our results further suggest that the Yongle carbonate platform is situated upon a pre-existing fault-bounded block with a flat pre-Cenozoic basement rather than a large-scale volcano as previously thought, and that the depth of the basement likely reaches 1400 m, deeper than well CK-2 suggested.
Article
Hydrothermal iron (Fe)-rich sediments were recovered from the Tagoro underwater volcano (Central Atlantic), which was built during the 2011–2012 volcanic event. Cruises in 2012 and 2014 enabled the monitoring and sampling of the early-stage establishment of a hydrothermal system. Degassing vents produced acoustic flares imaged on echo-sounders in June 2012, four months after the eruption. In 2014, a ROV dive discovered and sampled a novel hydrothermal vent system formed by hornito-like structures and chimneys showing active CO2 degassing and anomalous temperatures at 120–89 m water depth, and along the SE flank at 215–185 m water depth in association with secondary cones. Iron- and silica-rich gelatinous deposits pooled over and between basanite in the hornitos, brecciated lavas, and lapilli. The low-temperature, shallow-water hydrothermal system was marked by venting of Fe-rich fluids that draped the seafloor with extensive Fe-flocculate deposits precipitated from neutrally buoyant plumes located along the oxic/photic zone at 50–70 m water depth. The basanite is capped by mm- to cm-thick hydrothermally derived Fe-oxyhydroxide sediments and contains micro-cracks and degasification vesicles filled by sulfides (mostly pyrite). Mineralogically, the Fe-oxyhydroxide sediments consist of proto-ferrihydrite and ferrihydrite with scarce pyrite at their base. The Fe-rich endmember contains low concentrations of most trace elements in comparison with hydrogenetic ferromanganese deposits, and the sediments show some dilution of the Fe oxyhydroxide by volcanic ash. The Fe-oxyhydroxide phase, with a mean particle size of 3–4 nm, the low average La/Fe ratios of the mineralized deposits from the various sampling sites, and the positive Eu anomalies indicate rapid deposition of the Fe-oxyhydroxide near the hydrothermal vents.
Electron microprobe studies show the presence of various organomineral structures, mainly twisted stalks and sheaths covered by iron-silica deposits within the mineralized samples, reflecting microbial iron oxidation from the hydrothermal fluids. Sequencing of 16S rRNA genes also reveals the presence of other microorganisms involved in the sulfur and methane cycles. Samples collected from hornito chimneys contain silicified microorganisms coated by Fe-rich precipitates. The rapid silicification may have been indirectly promoted by microorganisms acting as nucleation sites. We suggest that this type of hydrothermal deposit may occur on submarine volcanoes more frequently than presently reported. On a geological scale, such volcanic eruptions and low-temperature hydrothermal vents might contribute to increased dissolved metals in seawater and generate considerable Fe-oxyhydroxide deposits, as identified on older hot-spot seamounts.
Article
On the basis of 2D multichannel and very-high-resolution seismic data and swath bathymetry, we report a sequence of giant mass-transport deposits (MTDs) in the Scan Basin (southern Scotia Sea, Antarctica). MTDs with a maximum thickness of c. 300 m extend up to 50 km from the Discovery and Bruce banks towards the Scan Basin. The headwall area consists of multiple U-shaped scars intercalated between volcanic edifices, up to 250 m high and 7 km wide, extending c. 14 km downslope from 1750 to 2900 m water depth. Seismic sections show that these giant MTDs are triggered by the intersection between diagenetic fronts related to silica transformation and vertical fluid-flow pipes linked to magmatic sills emplaced within the sedimentary sequence of the Scan Basin. This work supports that the diagenetic alteration of siliceous sediments is a possible cause of slope instability along world continental margins where bottom-simulating reflectors related to silica diagenesis are present at a regional scale.
Article
Full-text available
The structure of upper crustal magma plumbing systems controls the distribution of volcanism and influences tectonic processes. However, delineating the structure and volume of plumbing systems is difficult because (1) active intrusion networks cannot be directly accessed; (2) field outcrops are commonly limited; and (3) geophysical data imaging the subsurface are restricted in areal extent and resolution. This has led to models involving the vertical transfer of magma via dikes, extending from a melt source to overlying reservoirs and eruption sites, being favored in the volcanic literature. However, while there is a wealth of evidence to support the occurrence of dike-dominated systems, we synthesize field- and seismic reflection-based observations and highlight that extensive lateral magma transport (as much as 4100 km) may occur within mafic sill complexes. Most of these mafic sill complexes occur in sedimentary basins (e.g., the Karoo Basin, South Africa), although some intrude crystalline continental crust (e.g., the Yilgarn craton, Australia), and consist of interconnected sills and inclined sheets. Sill complex emplacement is largely controlled by host-rock lithology and structure and the state of stress. We argue that plumbing systems need not be dominated by dikes and that magma can be transported within widespread sill complexes, promoting the development of volcanoes that do not overlie the melt source. However, the extent to which active volcanic systems and rifted margins are underlain by sill complexes remains poorly constrained, despite important implications for elucidating magmatic processes, melt volumes, and melt sources.
Article
Full-text available
Seismic profiles across the Madeira Abyssal Plain show a relatively simple seismic stratigraphy in which an irregular diffractive acoustic basement is overlain by distinctive seismic units, reflecting a great thickness of ponded turbidites overlying pelagic drape. Within the uppermost ponded turbidite unit, a number of distinct, continuous, and laterally extensive reflectors are recognized. Sites 950 through 952 were drilled into these reflectors and allow dating of the beginning of large-scale turbidite emplacement on the abyssal plain and identification and dating of previously recognized seismic reflectors with a good degree of certainty. The extent and probable volume of the distinct turbidite packages can now be quantified. The Madeira Abyssal Plain overlies oceanic crust of Cretaceous age. Five distinct seismic units, separated by prominent, continuous, laterally extensive reflectors, can be identified. The lowermost of these (Unit B), which directly overlies acoustic basement, is a variably stratified unit and contains reflectors that generally show low coherency and onlap onto basement highs. At Site 950, the upper part of Unit B consists of red pelagic clays, with thin calcareous turbidites and ash layers, of late Eocene to Oligocene age. Unit A overlies Unit B with clear unconformity, marked by a conspicuous basinwide seismic reflector (Reflector 4). Unit A is a variably stratified unit and can be divided into four seismic units, A0 through A3, separated by prominent reflectors of regional extent. These units consist of thick, ponded turbidites with pelagic intervals. Many turbidites are basinwide in extent and can be correlated between drill sites. Four main types of turbidites are recognized: volcanic-rich turbidites derived from the Canary Islands, organic-rich turbidites derived from the Northwest African Margin, calcareous turbidites derived from seamounts to the west of the plain, and turbidites of 'intermediate' character. 
Organic-rich turbidites are the dominant type, although volcanic-rich turbidites are numerous in Units A0 through A2. Conversion of two-way traveltime to depth using shipboard sonic log data suggests that thick volcanic-rich and 'intermediate' character turbidites of wide lateral extent commonly correspond to strong seismic reflectors, and that acoustically transparent intervals within Unit A correspond to intervals of predominantly organic-rich turbidites. The base of Unit A is the regionally important Reflector 4 that correlates with a distinctive calcareous bed at all three drill sites dated at 16 Ma. The seismic units can be laterally mapped using an extensive data set of seismic reflection profiles and the minimum volumes of sediments deposited within individual seismic units calculated, giving values for sediment accumulation on the plain per unit time. The data show that since the inception of the abyssal plain in the middle Miocene (16 Ma), a minimum of 19,000 km3 of sediments (turbidites and hemipelagites) have been deposited.
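The traveltime-to-depth conversion mentioned above is, at its core, a summation of interval velocity times one-way time over a layered velocity model. A minimal sketch of that idea (the interval velocities and layer boundaries below are illustrative placeholders, not the shipboard sonic-log values):

```python
def twt_to_depth(twt_s, intervals):
    """Convert two-way traveltime (s) to depth below seafloor (m) by summing
    interval thicknesses; each interval is (twt_at_base_s, velocity_m_s)."""
    depth, prev_twt = 0.0, 0.0
    for base_twt, v in intervals:
        # one-way time spent inside this interval times its velocity
        dt = min(twt_s, base_twt) - prev_twt
        if dt <= 0:
            break
        depth += v * dt / 2.0  # divide by 2: two-way time -> one-way time
        prev_twt = base_twt
        if twt_s <= base_twt:
            break
    return depth

# hypothetical two-layer model: turbidites over pelagic clay
model = [(0.4, 1600.0), (1.0, 1900.0)]
print(twt_to_depth(0.6, model))  # 0.4 s at 1600 m/s + 0.2 s at 1900 m/s -> 510.0
```

The division by two is the defining step: reflection data record the round trip of the wave, so only half of each time increment converts to thickness.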
Chapter
Full-text available
Extrusive edifices and structural reliefs, catalogued as mounds and located on the seafloor west of the Canary Islands, were analyzed using acoustic data obtained with multibeam and parametric echosounders during several oceanographic expeditions. The surveys were carried out in deep water, from 4800 to 5200 m, and allowed the characterization of 41 newly discovered submarine structures, which occur either as isolated edifices or as clustered mounds. These features have circular to elongated shapes with diameters of 2-24 km and relief heights of 10 to 250 m, with flank slopes ranging from 2° to 50°. They generally display mounded forms and show morphological elements such as ridges, near-circular rock outcrops, depressions, and fault scarps, together with mass-flow and slide deposits located in the vicinity of the edifices. Two types of extrusive features are evidenced by the morphological and seismic data analyses: the first probably corresponds to high-velocity extrusions that reach the seafloor surface, and the second is probably formed by the combination of faulted structures and low-velocity extrusions that produce singular domes in the shallower sedimentary record. Based on both analyses, extrusive phenomena represent the dominant mechanism for mound-field evolution in the Canary lower-slope region.
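Flank-slope figures like those quoted above come from gradient calculations on multibeam bathymetry grids. A minimal sketch on a synthetic dipping plane (the grid, cell size, and dip are made up for illustration):

```python
import numpy as np

def slope_deg(depth_grid, cell_m):
    """Seafloor slope in degrees from a bathymetry grid (depth in m,
    square cells of size cell_m) via finite-difference gradients."""
    dzdy, dzdx = np.gradient(depth_grid, cell_m)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# synthetic plane deepening 0.1 m per metre in x (arctan 0.1 ~ 5.7 degrees)
x = np.arange(0.0, 1000.0, 50.0)
grid = np.tile(0.1 * x, (20, 1))
print(round(float(slope_deg(grid, 50.0).mean()), 2))  # → 5.71
```

On real multibeam data the same calculation is run cell by cell, so steep escarpments and gentle aprons show up as distinct slope populations.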
Article
Full-text available
During a regional seismic interpretation study of leakage anomalies in the northern North Sea, mounds and zones with a highly chaotic seismic reflection pattern in the Tertiary Hordaland Group were repeatedly observed located above gas chimneys in the Cretaceous succession. The chaotic seismic reflection pattern was interpreted as mobilized sediments. These mud diapirs are large and massive, the largest being 100 km long and 40 km wide. Vertical injections of gas, oil and formation water are interpreted to have triggered the diapirs. On the eastern side of the Viking Graben, another much smaller type of mud diapir was observed. These near-circular mud diapirs are typically 1–3 km in diameter in the horizontal plane. Limited fluid injection from intra-Hordaland Group sands, through sand injection zones, into the upper Hordaland Group is interpreted to have triggered the near-circular diapirs. This observed ‘external’ type of mobilization was generated at shallow burial (<1000 m) and should be discriminated from the more common ‘internal’ type of mud diapirism that is generated in deep basins (>3000 m). The suggested model has implications for the understanding of the palaeofluid system, sand distribution, stratigraphic prediction within the chaotic zone, seismic imaging, and seismic interpretation of the hydrocarbon ‘plumbing’ system.
Article
Full-text available
The architecture of subsurface magma plumbing systems influences a variety of igneous processes, including the physiochemical evolution of magma and extrusion sites. Seismic reflection data provides a unique opportunity to image and analyze these subvolcanic systems in three dimensions and has arguably revolutionized our understanding of magma emplacement. In particular, the observation of (1) interconnected sills, (2) transgressive sill limbs, and (3) magma flow indicators in seismic data suggest that sill complexes can facilitate significant lateral (tens to hundreds of kilometers) and vertical (<5 km) magma transport. However, it is often difficult to determine the validity of seismic interpretations of igneous features because they are rarely drilled, and our ability to compare seismically imaged features to potential field analogues is hampered by the limited resolution of seismic data. Here we use field observations to constrain a series of novel seismic forward models that examine how different sill morphologies may be expressed in seismic data. By varying the geologic architecture (e.g., host-rock lithology and intrusion thickness) and seismic properties (e.g., frequency), the models demonstrate that seismic amplitude variations and reflection configurations can be used to constrain intrusion geometry. However, our results also highlight that stratigraphic reflections can interfere with reflections generated at the intrusive contacts, and may thus produce seismic artifacts that could be misinterpreted as real features. This study emphasizes the value of seismic data to understanding magmatic systems and demonstrates the role that synthetic seismic forward modeling can play in bridging the gap between seismic data and field observations.
Book
Full-text available
Seabed fluid flow involves the flow of gases and liquids through the seabed. Such fluids have been found to leak through the seabed into the marine environment in seas and oceans around the world - from the coasts to deep ocean trenches. This geological phenomenon has widespread implications for the sub-seabed, seabed, and marine environments. Seabed fluid flow affects seabed morphology, mineralization, and benthic ecology. Natural fluid emissions also have a significant impact on the composition of the oceans and atmosphere; and gas hydrates and hydrothermal minerals are potential future resources. This book describes seabed fluid flow features and processes, and demonstrates their importance to human activities and natural environments. It is targeted at research scientists and professionals with interests in the marine environment. Colour versions of many of the illustrations, and additional material - most notably feature location maps - can be found at www.cambridge.org/9780521819503.
Article
Full-text available
The origin and life cycle of ocean islands have been debated since the early days of geology. In the case of the Canary archipelago, its proximity to the Atlas orogen led to initial fracture-controlled models for island genesis, while later workers cited a Miocene-Quaternary east-west age-progression to support an underlying mantle-plume. The recent discovery of submarine Cretaceous volcanic rocks near the westernmost island of El Hierro now questions this systematic age-progression within the archipelago. If a mantle-plume is indeed responsible for the Canaries, the onshore volcanic age-progression should be complemented by progressively younger pre-island sedimentary strata towards the west, however, direct age constraints for the westernmost pre-island sediments are lacking. Here we report on new age data obtained from calcareous nannofossils in sedimentary xenoliths erupted during the 2011 El Hierro events, which date the sub-island sedimentary rocks to between late Cretaceous and Pliocene in age. This age-range includes substantially younger pre-volcanic sedimentary rocks than the Jurassic to Miocene strata known from the older eastern islands and now reinstates the mantle-plume hypothesis as the most plausible explanation for Canary volcanism. The recently discovered Cretaceous submarine volcanic rocks in the region are, in turn, part of an older, fracture-related tectonic episode.
Article
Full-text available
It is well known that seawater migrating deep into the Earth's crust passes into its supercritical domain at temperatures above 407°C and pressures above 298 bars. In the oceanic crust, these pressures are attained at depths of 3 km below the sea surface, and sufficiently high temperatures are found near intruding magmas, which have temperatures in the range of 800°C to 1200°C. The physico-chemical behaviour of seawater changes dramatically when passing into the supercritical domain. A supercritical water vapour (ScriW) is formed with a density of 0.3 g/cc and a strongly reduced dipolar character. This change in polarity causes ScriW to lose its capacity to dissolve the common sea salts (chlorides and sulphates), and spontaneous precipitation of sea salts takes place in the pore system. However, this is only one of many cases where the very special properties of ScriW affect its surroundings. The objective of this paper is to increase awareness of the many geological processes that are initiated and governed by ScriW. This includes interactions between ScriW and its geological surroundings that initiate and drive processes of major importance to the dynamics and livelihood of our planet. ScriW is the driver of volcanism associated with subduction zones, as ScriW derived from the subducting slab interacts with the mantle rocks and reduces their melting point. ScriW also initiates serpentinization processes, in which olivines in the mantle rocks (e.g. peridotite) are transformed to serpentine minerals upon the uptake of OH-groups from hydrolysed water. The simultaneous oxidation of Fe2+ dissolved from iron-bearing pyroxenes and olivines leads to the formation of magnetite and hydrogen and, consequently, to a very reducing environment. ScriW may also be the starter and driver of the poorly understood mud and asphalt volcanism, both submarine and terrestrial.
Furthermore, the lack of polarity of the water molecules in ScriW gives the ScriW vapour the potential to dissolve organic matter and petroleum. The same applies to supercritical brines confined in subducting slabs. If these supercritical water vapours migrate upwards to reach the critical point, the supercritical vapour condenses into steam and dissolved petroleum is partitioned from the water phase to become a separate fluid phase. This opens up the possibility of transporting petroleum over long distances when mixed with ScriW. Therefore, we may, popularly, say that ScriW drives a gigantic underground refinery system and also a salt factory. It is suggested that, as a result of these processes, ScriW rejuvenates the world's ocean waters, as all of the ocean water circulates into the porous oceanic crust and out again in cycles of less than a million years. In summary, we suggest that ScriW participates in and is partly responsible for: 1) ocean water rejuvenation and formation; 2) fundamental geological processes such as volcanism, earthquakes, and metamorphism (including serpentinization); 3) solid salt production, accumulation, transportation, and (salt) dome formation; 4) the initiation and driving of mud, serpentine, and asphalt volcanoes; 5) dissolution of organic matter and petroleum, including transportation and phase separation (fractionation), when passing into the subcritical domain of (liquid) water.
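The two thresholds quoted above (407°C and 298 bar for seawater, the pressure being reached roughly 3 km below the sea surface) can be combined into a simple check. The depth-to-pressure conversion below assumes hydrostatic conditions and a constant seawater density, which is an illustrative simplification:

```python
def is_supercritical_seawater(temp_c, depth_m, rho=1030.0, g=9.81):
    """Check the two seawater criteria quoted in the abstract:
    T > 407 degC and P > 298 bar, with P taken as hydrostatic
    pressure for an assumed constant density rho (kg/m^3)."""
    pressure_bar = rho * g * depth_m / 1e5  # Pa -> bar
    return temp_c > 407.0 and pressure_bar > 298.0

print(is_supercritical_seawater(450.0, 3000.0))  # ~303 bar at 3 km -> True
print(is_supercritical_seawater(450.0, 2500.0))  # ~253 bar -> False
```

The check reproduces the paper's point that the pressure criterion alone is met near 3 km water depth, so the limiting factor in the crust is proximity to a magmatic heat source.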
Article
Full-text available
Emplacement of magma in the shallow subsurface can result in the development of dome-shaped folds at the Earth’s surface. These so-called “forced folds” have been described in the field and in subsurface data sets, although the exact geometry of the folds and the nature of their relationship to underlying sills remain unclear and, in some cases, controversial. In this study we use high-quality, two-dimensional (2-D) seismic reflection and borehole data from the Ceduna sub-basin, offshore southern Australia, to describe the structure and infer the evolution of igneous sill–related forced folds in the Bight Basin igneous complex. Thirty-three igneous sills, which were emplaced 200–1500 m below the paleo-seabed in Upper Cretaceous rocks, are mapped in the Ceduna sub-basin. The intrusions are expressed as packages of high-amplitude reflections, which are 32–250 m thick and 7–19 km in diameter. We observe five main types of intrusion: type 1, strata-concordant sills; type 2, weakly strata-discordant, transgressive sills; type 3, saucer-shaped sills; type 4, laccoliths; and type 5, hybrid intrusions, which have geometric characteristics of intrusion types 1–3. These intrusions are overlain by dome-shaped folds, which are up to 17 km wide and display up to 210 m of relief. The edges of these folds coincide with the margins of the underlying sills and the folds display the greatest relief where the underlying sills are thickest; the folds are therefore interpreted as forced folds that formed in response to emplacement of magma in the shallow subsurface. The folds are onlapped by Lutetian (middle Eocene) strata, indicating they formed and the intrusions were emplaced during the latest Ypresian (ca. 48 Ma). 
We demonstrate that fold amplitude is typically less than sill thickness even for sills with very large diameter-to-depth ratios, suggesting that pure elastic bending (forced folding) of the overburden is not the only process accommodating magma emplacement, and that supra-sill compaction may be important even at relatively shallow depths. Based on the observation that the sills intruded a shallowly buried succession, the discrepancy between fold amplitude and sill thickness may reflect loss of host rock volume by fluidization and pore fluid expulsion from poorly lithified, water-rich beds. This study indicates that host rock composition, emplacement depth, and deformation mechanisms are important controls on the style of deformation that occurs during intrusive igneous activity, and that forced fold amplitude may not in all cases reflect the thickness of an underlying igneous intrusion. In addition, the results of this study suggest that physical and numerical models need to model more complex host rock stratigraphies and rheologies if they are to capture the full range of deformation mechanisms that occur during magma emplacement in the Earth’s shallow subsurface.
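The amplitude-versus-thickness comparison above reduces to a simple deficit fraction: the share of intruded thickness not expressed as surface uplift. The pairing of numbers below is hypothetical, drawn only from the ranges quoted in the abstract (sills 32–250 m thick, fold relief up to 210 m):

```python
def fold_deficit(sill_thickness_m, fold_amplitude_m):
    """Fraction of intruded sill thickness not expressed as forced-fold
    uplift; interpreted in the study as host-rock compaction and
    pore-fluid expulsion rather than pure elastic bending."""
    return 1.0 - fold_amplitude_m / sill_thickness_m

# hypothetical pairing within the quoted ranges
print(fold_deficit(250.0, 210.0))
```

A deficit near zero would be consistent with purely elastic forced folding; the positive deficits reported in the study are what motivate the compaction interpretation.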
Article
Full-text available
Large volumes of magma emplaced within sedimentary basins have been linked to multiple climate change events due to release of greenhouse gases such as CH4. Basin-scale estimates of thermogenic methane generation show that this process alone could generate enough greenhouse gases to trigger global incidents. However, the rates at which these gases are transported and released into the atmosphere are quantitatively unknown. We use a 2D, hybrid FEM/FVM model that solves for fully compressible fluid flow to quantify the thermogenic release and transport of methane and to evaluate flow patterns within these systems. Our results show that the methane generation potential in systems with fluid flow does not significantly differ from that estimated in diffusive systems. The values diverge when vigorous convection occurs with a maximum variation of about 50%. The fluid migration pattern around a cooling, impermeable sill alone generates hydrothermal plumes without the need for other processes such as boiling and/or explosive degassing. These fluid pathways are rooted at the edges of the outer sills consistent with seismic imaging. Methane venting at the surface occurs in three distinct stages and can last for hundreds of thousands of years. Our simulations suggest that although the quantity of methane potentially generated within the contact aureole can cause catastrophic climate change, the rate at which this methane is released into the atmosphere is too slow to trigger, by itself, some of the negative δ13C excursions observed in the fossil record over short time scales (<10,000 years).
Article
Full-text available
2D and 3D seismic data from the mid-Norwegian margin show that polygonal fault systems are widespread within the fine-grained, Miocene sediments of the Kai Formation that overlie the Mesozoic/Early Cenozoic rift basins. De-watering and development of polygonal faults commenced shortly after burial and have been ongoing since Miocene times. This is evident from the polygonal fault system's stratigraphic setting, the statistical properties of fault throw, and the stratigraphic setting of fluid-flow features that are related to de-watering of the polygonal fault systems.
Article
Full-text available
Large-volume extrusive basaltic constructions have distinct morphologies and seismic properties depending on the eruption and emplacement environments. The presence and amount of water is of main importance, while local rift basin configuration, erosion, and resedimentation determine the overall geometry of the volcanic constructions. We have developed the concept of seismic volcanostratigraphy, a subset of seismic stratigraphy, to analyze volcanic deposits imaged on seismic reflection data. The method places special focus on identification and mapping of seismic facies units and the volcanological interpretation of these units. Interpretation of seismic reflection data along the Atlantic and Western Australia rifted margins reveals six characteristic volcanic seismic facies units named (1) Landward Flows, (2) Lava Delta, (3) Inner Flows, (4) Inner Seaward Dipping Reflectors (Inner SDR), (5) Outer High, and (6) Outer SDR. These units are interpreted in terms of a five-stage tectonomagmatic volcanic margin evolution model comprising (1) explosive volcanism in a wet sediment, broad basin setting, (2) subaerial effusive volcanism forming Gilbert-type lava deltas along paleoshorelines, (3) subaerial effusive volcanism infilling a fairly narrow rift basin, (4) shallow marine explosive volcanism as the injection axis is submerged below sea level, and finally (5) deep marine sheet flow or pillow-basalt volcanism. Further, erosion and resedimentation processes are particularly important during the shallow marine stages. Seismic volcanostratigraphy provides important constraints on rifted-margin development, in particular, on the prevolcanic basin configuration, relative timing of tectonomagmatic events, total amount of volcanic rocks, location of paleoshorelines, and margin subsidence history. These parameters give key boundary conditions for understanding the processes forming volcanic margins and other large-volume basaltic provinces.
Article
Full-text available
The Canary Island Seamount Province forms a scattered hotspot track on the Atlantic ocean floor ~1300 km long and ~350 km wide, perpendicular to lithospheric fractures and parallel to the NW African continental margin. New 40Ar/39Ar dating shows that seamount ages vary from 133 Ma to 0.2 Ma in the central archipelago, and from 142 Ma to 91 Ma in the southwest. Combining 40Ar/39Ar ages with plate tectonic reconstructions, I find that the temporal and spatial distribution of seamounts is irreconcilable with a deep fixed mantle plume origin, or with derivation from passive mantle upwelling beneath a mid-ocean ridge. I conclude that shallow mantle upwelling beneath the Atlantic Ocean basin off the flanks of the NW African continental lithosphere produced recurrent melting anomalies and seamounts from the Late Jurassic to Recent, nominating the Canary Island Seamount Province as the oldest hotspot track in the Atlantic Ocean and the most long-lived preserved on Earth.
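The 40Ar/39Ar ages cited above derive from the standard age equation t = (1/λ) ln(1 + J·R), where R is the measured radiogenic 40Ar*/39Ar ratio and J is the neutron-irradiation parameter calibrated against a monitor mineral. A sketch with hypothetical R and J values (not the study's data):

```python
import math

# total 40K decay constant, 1/yr (Steiger & Jaeger 1977 convention)
LAMBDA_40K = 5.543e-10

def ar_ar_age_ma(r, j):
    """40Ar/39Ar age in Ma from the radiogenic 40Ar*/39Ar ratio r and
    the irradiation parameter J; values passed in here are hypothetical."""
    return math.log(1.0 + j * r) / LAMBDA_40K / 1e6

print(round(ar_ar_age_ma(10.0, 0.008), 2))  # order-100-Ma age for these inputs
```

Because J is determined per irradiation, the same measured ratio can map to different ages in different experiments, which is why monitor standards anchor the whole chronology.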
Article
Full-text available
An exploration 3D seismic data set from the Gjallar Ridge off mid-Norway images a giant fluid seep structure, 3 × 5 km wide, which connects to late Palaeocene magmatic sills at depth. Two of the pipes that have developed as hydrothermal vents reach all the way to the modern seafloor implying that they either were active much longer than the original hydrothermal activity or have been reactivated. We combine detailed seismic analysis of the northern pipe and sandbox modeling to constrain pipe initiation and propagation. Although both the seismic data and the sandbox models suggest that fluids at depth are focused through a vertical conduit, sandbox models show that fluids ascend and reach a critical depth where focused migration abruptly transforms into distributed fluid flow through unconsolidated sediments. This indicates that at this level the sediments are intensely deformed during pipe propagation, creating a V-shaped structure, i.e. an inverted cone at depth and a positive relief anomaly, 5 to 10 m high, at the seafloor, which is clearly identified on 3D seismic data. Comparison of the geometries observed in sandbox modeling with the seismically observed geometries of the Giant Gjallar Vent suggests that the Giant Gjallar Vent may be a proto-fluid seep at an early stage of its development, preceding the future collapse of the structure forming a seafloor depression. Our results imply that the Giant Gjallar Vent can be used as a window into the geological processes active in the deep parts of the Vøring Basin.
Article
Full-text available
Subvolcanic intrusions in sedimentary basins cause strong thermal perturbations and frequently cause extensive hydrothermal activity. Hydrothermal vent complexes emanating from the tips of transgressive sills are observed in seismic profiles from the Northeast Atlantic margin, and geometrically similar complexes occur in the Stormberg Group within the Late Carboniferous-Middle Jurassic Karoo Basin in South Africa. Distinct features include inward-dipping sedimentary strata surrounding a central vent complex, comprising multiple sandstone dykes, pipes, and hydrothermal breccias. Theoretical arguments reveal that the extent of fluid-pressure build-up depends largely on a single dimensionless number (Ve) that reflects the relative rates of heat and fluid transport. For Ve ≫ 1, 'explosive' release of fluids from the area near the upper sill surface triggers hydrothermal venting shortly after sill emplacement. In the Karoo Basin, the formation of shallow (< 1 km) sandstone-hosted vents was initially associated with extensive brecciation, followed by emplacement of sandstone dykes and pipes in the central parts of the vent complexes. High fluid fluxes towards the surface were sustained by boiling of aqueous fluids near the sill. Both the sill bodies and the hydrothermal vent complexes represent major perturbations of the permeability structure of the sedimentary basin, and are likely to have long time-scale effects on its hydrogeological evolution.
Chapter
Full-text available
Low-temperature hydrothermal alteration of basement from Site 801 was studied through analyses of the mineralogy, chemistry, and oxygen isotopic compositions of the rocks. The more than 100-m section of 170-Ma basement consists of 60 m of tholeiitic basalt separated from the overlying 60 m of alkalic basalts by a >3-m-thick Fe-Si hydrothermal deposit. Four alteration types were distinguished in the basalts: (1) saponite-type (Mg-smectite) rocks are generally slightly altered, exhibiting small increases in H2O, δ18O, and oxidation; (2) celadonite-type rocks are also slightly altered, but exhibit uptake of alkalis in addition to hydration and oxidation, reflecting somewhat greater seawater/rock ratios than the saponite type; (3) Al-saponite-type alteration resulted in oxidation, hydration, and alkali and 18O uptake and losses of Ca and Na due to the breakdown of plagioclase and clinopyroxene; and (4) blue-green rocks exhibit the greatest chemical changes, including oxidation, hydration, alkali uptake, and loss of Ca, Na, and Mg due to the complete breakdown of plagioclase and olivine to K-feldspar and phyllosilicates. Saponite- and celadonite-type alteration of the tholeiite section occurred at a normal mid-ocean ridge basalt spreading center at temperatures <20°C. Near- or off-axis intrusion of an alkali basalt magma at depth reinitiated hydrothermal circulation, and the Fe-Si hydrothermal deposit formed from cool (<60°C) distal hydrothermal fluids. Focusing of fluid flow in the rocks immediately underlying the deposit resulted in the extensive alteration of the blue-green rocks at similar temperatures. Al-saponite alteration of the subsequent alkali basalts overlying the deposit occurred at relatively high water/rock ratios as part of the same low-temperature circulation system that formed the hydrothermal deposit. Abundant calcite formed in the rocks during progressive "aging" of the crust during its long history away from the spreading center.
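The oxygen-isotope shifts used above to track seawater-basalt exchange are reported in the standard delta notation, δ18O = (R_sample/R_standard − 1) × 1000, with R = 18O/16O and VSMOW as the reference. A sketch with a hypothetical sample ratio (only the VSMOW reference ratio is a real constant):

```python
# VSMOW reference ratio 18O/16O
R_VSMOW = 0.0020052

def delta18o_permil(r_sample, r_standard=R_VSMOW):
    """delta-18O in permil relative to a standard; standard delta notation,
    sample ratios passed in are hypothetical illustrations."""
    return (r_sample / r_standard - 1.0) * 1000.0

# low-temperature alteration enriches rocks in 18O, giving positive deltas
print(round(delta18o_permil(0.0020213), 2))  # a ratio above VSMOW -> ~+8 permil
```

The per-mil scale is why the "small increases in δ18O" in slightly altered rocks are resolvable at all: a few parts in ten thousand in the raw ratio becomes several per mil in delta notation.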
Article
Polygonal faults, mainly oriented N50, N110 and N170, are abundant in the upper part of the mud-dominated Kai Formation (upper Miocene-lower Pliocene) of the Vøring Basin. A second, less-developed tier of polygonal faults, oriented N20, N80 and N140, exists at the base of the overlying Naust Formation (upper Pliocene-Present). The faults abruptly terminate upward below a thick interval of debris flows. We propose a dynamic model in which: (1) the development of polygonal faults discontinues temporarily as a result of a change in regional sedimentation, leading to inactive polygonal faults; (2) rapid emplacement of debris flows in the late Pleistocene creates a new interval of polygonal faults in the lower part of the Naust Formation immediately beneath the debris flow and some faults penetrate into the underlying Kai Formation; (3) some polygonal faults within the Kai Formation are reactivated and propagated upward into the base of the Naust Formation. The high interconnectivity between faulted layers allows the fluids to reach shallower depths, forming well-expressed pipes and pockmarks on the sea floor. The model of cessation/reactivation of polygonal faults constrains the sealing capacity of sedimentary cover over the reservoirs and helps to reconstruct the fluid migration history through the sedimentary column.
Article
Structure of mud volcano systems and pockmarks in the region of the Ceuta Contourite Depositional System (Western Alborán Sea), Marine Geology (2012), doi: 10.1016/j.margeo.2012.06.002.
Article
Voluminous volcanism characterized Early Tertiary continental break-up on the mid-Norwegian continental margin. The distribution of the associated extrusive rocks derived from seismic volcanostratigraphy and potential field data interpretation allows us to divide the Møre, Vøring and Lofoten–Vesterålen margins into five segments. The central Møre Margin and the northern Vøring Margin show combinations of volcanic seismic facies units that are characteristic for typical rifted volcanic margins. The Lofoten–Vesterålen Margin, the southern Vøring Margin and the area near the Jan Mayen Fracture Zone show volcanic seismic facies units that are related to small-volume, submarine volcanism. The distribution of subaerial and submarine deposits indicates variations of subsidence along the margin. Vertical movements on the mid-Norwegian margin were primarily controlled by the amount of magmatic crustal thickening, because both the amount of dynamic uplift by the Icelandic mantle plume and the amount of subsidence due to crustal stretching were fairly constant along the margin. Thus, subaerial deposits indicate a large amount of magmatic crustal thickening and an associated reduction in isostatic subsidence, whereas submarine deposits indicate little magmatic thickening and earlier subsidence. From the distribution of volcanic seismic facies units we infer two main reasons for the different amounts of crustal thickening: (1) a general northward decrease of magmatism due to increasing distance from the hot spot and (2) subdued volcanism near the Jan Mayen Fracture Zone as a result of lateral lithospheric heat transport and cooling of the magmatic source region. Furthermore, we interpret small lateral variations in the distribution of volcanic seismic facies units, such as two sets of Inner Seaward Dipping Reflectors on the central Vøring Margin, as indications of crustal fragmentation.
Article
A voluminous magmatic complex was emplaced in the Vøring and Møre basins during Paleocene/Eocene continental rifting and break-up in the NE Atlantic. This intrusive event has had a significant impact on deformation, source-rock maturation and fluid flow in the basins. Intrusive complexes and associated hydrothermal vent complexes have been mapped on a regional 2D seismic dataset (c. 150 000 km) and on one large 3D survey. The extent of the sill complex is at least 80 000 km², with an estimated total volume of 0.9 to 2.8 × 10⁴ km³. The sheet intrusions are saucer-shaped in undeformed basin segments. The widths of the saucers become larger with increasing emplacement depth. More varied intrusion geometries are found in structured basin segments. Some 734 hydrothermal vent complexes have been identified, although it is estimated that 2000–3000 vent complexes are present in the basins. The vent complexes are located above sills and were formed as a direct consequence of the intrusive event by explosive eruption of gases, liquids and sediments, forming up to 11 km wide craters at the seafloor. The largest vent complexes are found in basin segments with deep sills (3–9 km palaeodepth). Mounds and seismic seep anomalies located above the hydrothermal vent complexes suggest that the vent complexes have been re-used for vertical fluid migration long after their formation. The intrusive event mainly took place just prior to, or during, the initial phase of massive break-up volcanism (55.0–55.8 Ma). There is also evidence for a minor Upper Paleocene volcanic event documented by the presence of 20 vent complexes terminating in the Upper Paleocene sequence and the local presence of extrusive volcanic rocks within the Paleocene sequence.
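The quoted minimum area and total-volume range imply a mean sill thickness that can be checked with simple arithmetic. A minimal sketch (the function and variable names are illustrative, not from the paper):

```python
def mean_thickness_m(volume_km3: float, area_km2: float) -> float:
    """Mean thickness implied by spreading a total volume over an area (km converted to m)."""
    return volume_km3 / area_km2 * 1000.0

AREA_KM2 = 80_000            # minimum mapped extent of the sill complex, from the abstract
for v in (0.9e4, 2.8e4):     # quoted total-volume range, km^3
    t = mean_thickness_m(v, AREA_KM2)
    print(f"{v:.0f} km^3 over {AREA_KM2} km^2 -> mean thickness ~{t:.0f} m")
```

This gives roughly 110–350 m of mean sill thickness, a plausible order of magnitude for the individual sills described later in the listing (e.g. the up to 300-m-thick sills mentioned for the Vøring Basin).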
Article
A new polygonal fault system has been identified in the Lower Congo Basin. This highly faulted interval (HFI), 700±50 m thick, is characterized by small extensional faults displaying a polygonal pattern in plan view. This kind of fracturing is attributed to volumetric contraction of sediments during early stages of compaction at shallow burial depth. 3-D seismic data permitted the visualization of the progressive deformation of furrows during burial, leading to real fractures, visible on seismic sections at about 78 m below seafloor. We propose a new geometrical model for volumetric contraction of mud-dominated sediments. Compaction starts at the water–sediment interface by horizontal contraction, creating furrows perpendicular to the present day slope. During burial, continued shrinkage evolves to radial contraction, generating hexagonal cells of dewatering at 21 m below seafloor. With increasing contraction, several fault generations are progressively initiated from 78 to 700 m burial depth. Numerous faults of the HFI act as highly permeable pathways for deeper fluids. We point out that pockmarks, which represent the imprint of gas, oil or pore water escape on the seafloor, are consistently located at the triple-junction of three neighbouring hexagonal cells. This is highly relevant for predictive models of the occurrence of seepage structures on the seafloor and for the sealing capacity of sedimentary cover over deeper petroleum reservoirs.
Article
The Canary Islands, a group of seven major volcanic islands, extends for almost 500 km, approximately E-W, 100 km off NE Africa. The islands formed chiefly during the last 20 Ma, although volcanic activity started during the Oligocene and possibly Eocene in the E island of Fuerteventura. Ages of the rapidly formed subaerial shields decrease irregularly from E to W but melting anomalies in the sub-Canarian mantle are presently active across the entire belt. The dominant, shield-building magma type is saturated to moderately undersaturated alkali basalt with local tholeiite. Low pressure fractionation of olivine, clinopyroxene, and plagioclase was generally moderate, owing to rapid replenishment of the fast upward growth of the shield volcanoes and their magma chambers. Highly differentiated magma columns developed chiefly during the waning stages resulting in minor (quartz)-trachyte in the E and phonolitic plugs in the central and W islands. Major differentiated magma reservoirs on Gran Canaria and Tenerife culminated in large caldera-forming ash flow eruptions. Surface eruption of basalt magmas was generally inhibited during evolution and periodic partial emptying of such large differentiated zoned magma columns. Multiphase episodic magmatic evolution consisting of two or more magmatic phases is characteristic of most Canary Islands and is best developed on Gran Canaria where two major multiphase cycles are distinguished. Age data presently available for island volcanism in the Eastern Central North Atlantic suggest episodes of high activity between about 18 and 10 Ma and 5 Ma to the present, separated by a period of lesser magmatic productivity.
Article
Continental rifting is often associated with voluminous magmatism and perturbations in the Earth's climate. In this study, we use 2D seismic data from the northeast Greenland margin to document two Paleogene-aged sill complexes. Intrusion of the sills resulted in the contact metamorphism of carbon-rich shales, producing thermogenic methane which was released via 52 newly discovered hydrothermal vent complexes, some of which reach up to 11 km in diameter. Mass balance calculations indicate that the volume of methane produced by these intrusive complexes is comparable to that required to have caused the negative isotope excursion associated with the PETM. Combined with data from the conjugate Norwegian margin, our study provides evidence for margin-scale, volcanically-induced greenhouse gas release during the late Paleocene/early Eocene. Given the abundance of similar-aged sill complexes in Upper Paleozoic–Mesozoic and Cretaceous–Tertiary basins elsewhere along the northeast Atlantic continental margin, our findings support a major role for volcanism in driving global climate change.
Article
During opening of a new ocean, magma intrudes into the surrounding sedimentary basins. Heat provided by the intrusions matures the host rock, creating metamorphic aureoles potentially releasing large amounts of hydrocarbons. These hydrocarbons may migrate to the seafloor in hydrothermal vent complexes in sufficient volumes to trigger global warming, e.g., during the Paleocene-Eocene Thermal Maximum (PETM). Mound structures at the top of buried hydrothermal vent complexes observed in seismic data off Norway were previously interpreted as sediment volcanoes, and the amount of released hydrocarbon was estimated based on this interpretation. Here, we present new geophysical and geochemical data from the Gulf of California suggesting that such mound structures could in fact be edifices constructed by the growth of black smoker–type chimneys rather than sediment volcanoes. We have evidence for two buried and one active hydrothermal vent systems outside the rift axis. The active vent releases fluids of several hundred degrees Celsius containing abundant methane, mid-ocean ridge basalt–type helium, and precipitating solids up to 300 m high into the water column. Our observations challenge the idea that methane is emitted slowly from rift-related vents. The association of large amounts of methane with hydrothermal fluids that enter the water column at high pressure and temperature provides an efficient mechanism to transport hydrocarbons into the water column and atmosphere, lending support to the hypothesis that rapid climate change such as during the PETM can be triggered by magmatic intrusions into organic-rich sedimentary basins.
Chapter
Initially recognised in the Hawaiian Islands, volcanic rift zones and associated giant landslides have been extensively studied in the Canaries, where several of their more significant structural and genetic elements have been established. Almost 3,000 km of water tunnels (galerías) in the western Canaries provide a unique possibility to access the deep structure of the island edifices. Recent work shows that rift zones control the construction of the islands, possibly from the initial stages of island development; they form the main relief features (shape and topography) and concentrate eruptive activity, making them crucial elements in defining the distribution of volcanic hazards on ocean islands.
Article
The margin of the continental slope of the Canary Islands Volcanic Province is characterised by seamounts, submarine hills and large landslides. The seabed morphology, including detailed morphology of the seamounts and hills, was analysed using multibeam bathymetry and backscatter data, and very high resolution seismic profiles. Some of the elevation data are reported here for the first time. The shape and distribution of characteristic features such as volcanic cones, ridges, slide scars, gullies and channels indicate evolutionary differences. Special attention was paid to recent geological processes that influenced the seamounts. We defined various morpho-sedimentary units, which are mainly due to massive slope instability that disrupts the pelagic sedimentary cover. We also studied other processes such as the role of deep bottom currents in determining sediment distribution. The sediments are interpreted as the result of a complex mixture of material derived from a) slope failures on seamounts and submarine hills; and b) slides and slumps on the continental slope.
Article
The island of El Hierro is formed by materials of three volcanic cycles, which can be clearly separated although they show no discontinuity. The oldest formation consists of approx. 1400-m-thick subaerial lava flows; the upper limit of this series is marked by some trachytic episodes. The lower part of the formation comprises deposits of special interest due to their phreatomagmatic character and the existence of pseudosedimentary structures with gradual stratification. The intermediate series covers most of the island. The existence of explosive features as well as the presence of basaltic hornblende as a stable mineral indicate that this magmatic cycle is characterized by a high volatile content. The Recent series is formed by sub-historical lava flows. The well-preserved morphology of these flows allows a comparative study of the two principal types, aa and pahoehoe. The three series mainly comprise basalts with scarce coarse-grained xenoliths. There are some trachytic episodes at the top of the old series and one at the top of the intermediate series where some carbonized wood has been found. Erosive action along tectonic lines is proposed to explain the semicircular cliffs of El Golfo and Las Playas.
Chapter
The Canary Islands, a group of seven major volcanic islands, extends for almost 500 km roughly east-west, 100 km off Northwest Africa. The islands formed chiefly during the last 20 Ma, although volcanic activity started during the Oligocene and possibly Eocene in the eastern island of Fuerteventura. Ages of the rapidly formed subaerial shields decrease irregularly from east to west, but melting anomalies in the sub-Canarian mantle are presently active across the entire belt. Total volumes of individual islands are about 10 to 20 × 10⁶ km³, of which the subaerial part generally makes up less than 10%. The dominant, shield-building magma type is saturated to moderately undersaturated alkali basalt with local tholeiite. Low pressure fractionation of olivine, clinopyroxene, and plagioclase was generally moderate, owing to rapid replenishment of the fast upward growth of the shield volcanoes and their magma chambers. Highly differentiated magma columns developed chiefly during the waning stages resulting in minor (quartz)-trachyte in the eastern and phonolitic plugs in the central and western islands. Major differentiated magma reservoirs on Gran Canaria and Tenerife culminated in large caldera-forming ash flow eruptions. Surface eruption of basalt magmas was generally inhibited during evolution and periodic partial emptying of such large differentiated zoned magma columns. Late stage basanites to nephelinites are locally nodule-bearing, of small volume, and are only slightly fractionated. High Ca/Al ratios and variable K-contents of these primitive magmas suggest garnet and phlogopite as residual phases during very low degrees of partial melting. Multiphase episodic magmatic evolution consisting of two or more magmatic phases is characteristic of most Canary Islands and is best developed on Gran Canaria where two major multiphase cycles are distinguished. Multiphase magmatic evolution is common on other islands in the Central North Atlantic with alkali basalt shield magmas being broadly similar. It is less well developed on smaller islands and those close to the Mid-Atlantic Ridge.
Highly alkalic, mafic, undersaturated magmas appear to be restricted to (large volume?) islands on thicker lithosphere (Canaries and Cape Verde Islands), presumably due to low heat flow and thus small degrees of partial melting at greater depth. Intra-archipelago differences in melting conditions and mantle composition are reflected by consistently higher alkalinity and different trace element ratios between the western and central islands contrasted with Lanzarote and Fuerteventura to the east. Canary Island magmas on the whole are richer in Ti, Fe, and Zr and lower in Al than Azorean and Madeira magmas. Canary Island magmas may be derived from garnet-bearing mantle leaving residual garnet. The mantle beneath the Canaries is not very radiogenic with respect to 87Sr/86Sr, as is characteristic for the eastern central Atlantic encompassing the Cape Verde Islands and Madeira. The mantle area south of about 30 to 35°N may be distinct from, and less heterogeneous than, the mantle farther north. There is no geological or geochemical evidence for the existence of continental crust beneath any of the Canary Islands. The origin of the Canary Island melting domain is not adequately explained by (a) an oceanic fracture zone, (b) extension of the South Atlas fault, (c) mantle plume and (d) propagating fracture zone. Unspecified mantle instabilities along the boundary between oceanic and continental lithosphere may have been instrumental in generating the unusually long-lived mantle anomaly with west to east translation of the lithosphere leading to an irregular non-linear age progression. Age data presently available for island volcanism in the Eastern Central North Atlantic suggest episodes of high activity between about 18 and 10 Ma and 5 Ma to the present, separated by a period of lesser magmatic productivity.
Article
Geological, biological, morphological, and hydrochemical data are presented for the newly discovered Moytirra vent field at 45°N. This is the only high temperature hydrothermal vent known between the Azores and Iceland in the North Atlantic, and is located on a slow to ultraslow-spreading mid-ocean ridge, uniquely situated on the 300 m high fault scarp of the eastern axial wall, 3.5 km from the axial volcanic ridge crest. Furthermore, the Moytirra vent field is, unusually for tectonically controlled hydrothermal vent systems, basalt hosted and perched midway up on the median valley wall and presumably heated by an off-axis magma chamber. The Moytirra vent field consists of an alignment of four sites of venting, three actively emitting "black smoke," producing a complex of chimneys and beehive diffusers. The largest chimney is 18 m tall and vigorously venting. The vent fauna described here are the only ones documented for the North Atlantic (Azores to Reykjanes Ridge) and significantly expand our knowledge of North Atlantic biodiversity. The surfaces of the vent chimneys are occupied by aggregations of gastropods (Peltospira sp.) and populations of alvinocaridid shrimp (Mirocaris sp. with Rimicaris sp. also present). Other fauna present include bythograeid crabs (Segonzacia sp.) and zoarcid fish (Pachycara sp.), but bathymodiolin mussels and actinostolid anemones were not observed in the vent field. The discovery of the Moytirra vent field therefore expands the known latitudinal distributions of several vent-endemic genera in the north Atlantic, and reveals faunal affinities with vents south of the Azores rather than north of Iceland.
Article
An extensive suite of igneous sills, collectively known as the Faroe-Shetland Sill Complex, has been intruded into the Cretaceous and Tertiary sedimentary section of the Faroe-Shetland Channel area. These sills have been imaged offshore by three-dimensional (3D) reflection seismic surveys and penetrated by several exploration boreholes. Data from wireline log measurements in these boreholes allow us to characterize the physical properties of the sills and their thermal aureoles. The borehole data has been compiled to produce new empirical relationships between sonic velocity and density, and between compressional and shear sonic velocities within the sills. These relationships are used to assist in calculation of synthetic seismic traces for sills intruded into sedimentary section, in order to calibrate the seismic response of the sills as observed in field data. This paper describes how the seismic amplitude response of the sills can be used to predict sill thickness where there is some nearby well control, and uses this technique to estimate the volume of one well-imaged sill penetrated by Well 205/10-2b. Since the sills have a high impedance contrast with their host rocks, they return strong seismic reflections. 3D seismic survey data allow mapping of the morphology of the sills with a high level of confidence, although in some instances disruption of the downgoing seismic wavefield causes the seismic imaging of deeper sills and other structures to be very poor. Examples are shown of sub-circular and dish-shaped sills, and also of semi-conical and sheet-like intrusions that are highly discordant. The introduction of intrusive rocks can play an important role in the subsequent development of the sedimentary system. An example is shown in which differential compaction or soft sediment deformation around and above the sills appears to have controlled deposition of a reservoir quality sand body.
The positioning of the sills within sedimentary basins is discussed, by constructing a simple model in which pressure support of magma from a crustal magma chamber provides the hydrostatic head of magma required for intrusion at shallow levels. This model is made semi-quantitative using a simple equation relating rock densities to intrusion depth, calibrated to observations from the Faroe-Shetland area. The model predicts that sills can be intruded at shallower levels in the sedimentary section above basement highs, which agrees with observations detailed in this paper.
Article
By combining surface mapping with marine reflection and refraction seismics it is possible to construct a composite image of the entire crustal structure in this region. During Tertiary break-up the basin was intruded by basaltic sills and dykes, and basaltic flood basalts flowed over the basin with decreasing thickness to the north. It seems that magmas were intruded as sills up to 300 m thick in the deep (10-15 km) central parts of the basin. Their geometry and possible volume makes them potential candidates as mid-crustal magma chambers and crustal magma pathways for the flood basalts. There is a general rather conformable relationship between the basin stratigraphy and the gross stratigraphy of the flood basalts, suggesting limited or no initial uplift prior to flood basalt volcanism. The apparent guidance exerted by the basin on the break-up magmatic activity without renewed rifting of the basin itself, the apparent lack of a broad initial uplift during break-up, and the later regional margin uplift, all seem at odds with several current plume models.
Article
Geochemical investigations of dolerite cores from four intrusions in the recently discovered Faeroe-Shetland sill complex have established that the sills are of transition (T) mid-ocean ridge basalt (MORB)-type composition. Some uncertainty surrounds the age of the complex, but there is no doubt that it is, at least in part, of Tertiary age. Comparisons with previously proposed models for the development of sill-sediment complexes during initial stages of seafloor spreading suggests that the Faeroe-Shetland sills may represent an intrusive episode associated with a spreading axis that eventually produced oceanic crust W of the Faeroes.
Article
Multichannel seismic reflection and gravity data define the structure of Mesozoic ocean crust of the Canary Basin, formed at slow spreading rates. Single and multichannel seismics show a transition from smooth to rough basement topography from Jurassic to Cretaceous crust and a coeval change in crustal structure. Internal reflectivity of the rough basement area comprises upper, upper middle or whole crust cutting discrete dipping reflections. Lower-crustal reflectivity is almost absent and reflections from the crust-mantle transition are short and discontinuous or absent for several kilometers. In contrast, crust in the smooth basement area is characterized by sparse lower crustal events and common reflections from the crust-mantle boundary. The crustal structure of fracture zones in the rough basement area is associated with depressions in the basement top and in most cases with thin crust. In the smooth basement area, fracture zones exhibit neither a clear topographic expression nor crustal thinning. We interpret these characteristics as indicative of an increase in extensional tectonic activity and decrease in magmatic activity at the spreading ridge associated with a general decrease of spreading rate from Jurassic to Cretaceous times. In addition, the crust imaged across the path of the Cape Verde Hot Spot in the Canary Basin exhibits a widespread lower crustal reflectivity, very smooth topography and apparently thick crust. Our data document significant changes in the structure of crust formed at slow spreading rates which we attribute to thermal changes in the lithosphere due either to variations in spreading rate or to the presence of a hot spot beneath the Mesozoic Mid-Atlantic Ridge.
Article
Hydrothermal fluxes of heat and mass at mid-ocean ridges and on ridge flanks estimated using different approaches are reviewed. Heat and fluid fluxes in high temperature axial systems are best determined by geophysical methods, and are then combined with vent fluid compositions to derive chemical fluxes. Axial chemical fluxes calculated by mass balance using data from hydrothermally altered rocks sampled by deep ocean drilling are generally small, probably because of loss of material during drilling. Most of the hydrothermal heat anomaly in ocean crust occurs at low temperatures in off-axis flank systems, and a significant fraction of this must occur at very low temperatures.
Article
Development of the rifted continental margins and subsequent seafloor spreading in the North Atlantic was dominated by interaction between the Iceland mantle plume and the continental and oceanic rifts. There is evidence that at the time of breakup a thin sheet of particularly hot asthenospheric mantle propagated beneath the lithosphere across a 2500 km diameter region. This event caused transient uplift, massive volcanism and intrusive magmatism, and a rapid transition from continental stretching to seafloor spreading. Subsequently, the initial plume instability developed to an axisymmetric shape, with the c. 100 km diameter central core of the Iceland plume generating 30-40 km thick crust along the Greenland-Iceland-Faroes Ridge. The surrounding 2000 km diameter region received the lateral outflow from the plume, causing regional elevation and the generation of thicker and shallower than normal oceanic crust. We document both long-term (10-20 Ma) and short-term (3-5 Ma) fluctuations in the temperature and/or flow rate of the mantle plume by their prominent effects on the oceanic crust formed south of Iceland. Lateral ridge jumps in the locus of rifting are frequent above the regions of hottest asthenospheric mantle, occurring in both the early history of seafloor spreading, when the mantle was particularly hot, and throughout the generation of the Greenland-Iceland-Faroes Ridge.
Article
Polygonal fault arrays have been documented in sedimentary basins from around the world and several theories exist as to how they initiate and propagate. Three-dimensional seismic data from polygonal fault arrays from offshore Norway are used to develop a new process model for polygonal fault development. We propose that in siliceous sediment, polygonal fault arrays can be triggered thermally, due to the conversion of opal-A to opal-CT at depths of 100–1000 m. This conversion causes differential compaction and shear failure and therefore fault initiation. The location of the earliest faults is dependent on where opal-A to opal-CT conversion and compaction occur first. This is controlled by which strata have a favourable bed composition, local fluid chemistry and temperature or because the strata reach the depth of the reaction front first due to the presence of pre-existing structural relief (folds or faults). Subsidence of biosiliceous sediment through the opal-A to opal-CT reaction front causes fault propagation because of continued localised differential compaction. Fault initiation and propagation due to silica conversion generate polygonal fault arrays at significantly deeper burial depths than previously thought possible.
Article
In contrast to mature mid-oceanic ridges, where magmatic activity is little affected by the slow accumulation of sediments, in young spreading centers (such as that of the Guaymas Basin in the Gulf of California) the basaltic magma of great "magmatic pulses" forms dikes and sills within the uppermost few hundred metres of soft sediments. In general, younger dikes and sills are injected next to or on top of the contact zone of the older ones. In this manner a distinctive sill-sediment complex is built up, the sediments of which are rather compacted and partly metamorphosed despite the low burial depth. The thickness of this transitional zone between the sheeted dike complex (seismic layer 2) and younger sediment (seismic layer 1) is controlled chiefly by the rates of sedimentation and spreading. If the half-spreading rate is approximately one order of magnitude greater than the sedimentation rate, the sill-sediment complex can reach a thickness of only a few hundred metres, and the depth of the spreading trough remains approximately constant. Sedimentation rates approaching or surpassing the spreading rate cause filling up of the basins, which probably hampers the injection of magma into sediments. Sill-sediment complexes similar to those in the Gulf of California are also expected to occur at the passive margins of older oceanic basins as well as orogenic belts.
Article
The results of 64 new K–Ar age determinations, together with 32 previously published ages, show that after a period of erosion of the basal complex, Miocene volcanic activity started around 20 Ma in Fuerteventura and 15 Ma in Lanzarote, forming a tabular succession of basaltic lavas and pyroclastics with a few salic dykes and plugs. This series includes five separate volcanic edifices, each one with its own eruptive history. In Fuerteventura, several Miocene eruptive cycles have been identified: in the central edifice one around 20–17 Ma, followed by two others centred around 15 and 13 Ma; in the southern edifice the maximum of activity took place around 16–14 Ma, whereas in the northern one the main activity occurred between 14 and 12 Ma. In Lanzarote a first cycle of activity took place in the southern edifice between 15.5 and 14.1 Ma, followed by another between 13.6 and 12.3 Ma. In the northern edifice three pulses occurred: 10.2–8.3, 6.6–5.3 and 3.9–3.8 Ma. An important temporal gap, greater in Fuerteventura than in Lanzarote, separates Series I from the Plio-Quaternary Series II, III and IV, formed by multi-vent basaltic emissions. In Fuerteventura the following eruptive cycles have been identified: 5, 2.9–2.4, 1.8–1.7, 0.8–0.4 and <0.1 Ma. In Lanzarote, the activity was fairly continuous from 2.7 Ma to historic times, with a maximum in the Lower Pleistocene.
Article
Magma is transported in the crust by blade-like intrusions such as dykes, sills, saucers, and also collects in thicker laccoliths, lopoliths and plutons. Recently, the importance and great number of shallow (< 5 km) saucer-shaped intrusions has been recognized. Lopoliths and cup-shaped intrusions have also been reported in many geological contexts. Our field observations indicate that many intrusions, especially those emplaced into breccias or fractured rocks, have bulging, lobate margins and have shear faults at their bulbous terminations. Such features suggest that magma can propagate along a self-induced shear fault rather than a hydraulic tension-fracture. To investigate this we use analogue models to explore intrusion propagation in a brittle country rock. The models consist of the injection of analogue magma (honey or Golden syrup) into a granular material (sand or sieved ignimbrite) that is a good analogue for brittle or brecciated rocks. These models have the advantage (over other models that use gelatin) of representing well the properties of brittle materials by allowing both shear faults and tension fractures to be produced at suitable stresses. In our experiments we mainly obtain vertical dykes and inverted-cone-like structures that we call cup-shaped intrusions. Dykes bifurcate into cup-shaped intrusions at depths depending on their viscosity. All cup-shaped intrusions uplift a central block. By injecting against a vertical glass plate we obtain detailed observations of the intrusion propagation style. We observe that dykes commonly split and produce cup-shaped intrusions near the surface and that shear zone-related intrusions develop at the dyke tip. We conclude that many dykes propagate as a viscous indenter resulting from shear failure of the host rock rather than tensional hydraulic fracturing of the host rock. The shear propagation model provides an explanation for the shape and formation of cup-shaped intrusions, saucer-sills and lopoliths.
Article
Forced folds formed at the seabed immediately overlying shallow (<1 km) saucer-shaped sills along the NE Atlantic Margin during the early Paleogene. Examples of this sill-fold relationship are exceptionally well imaged by high-resolution 3D seismic datasets from the NE Rockall Basin. The forced folds are domal in shape, 2-4 km in diameter, exhibit a structural relief of up to 350 m, and comprise sediment volumes of ca. 1 km³. A comparison of the thickness distribution across and volume of a saucer-shaped sill with a high intrusion diameter to depth ratio and the structural relief and volume of its associated forced fold shows a remarkable equivalence. This has the important implication that the structural relief on intrusion-related forced folds can be used as an estimate of the thickness of the underlying sill. The analysed forced folds are interpreted to have formed through three continual growth stages that are directly linked to the mechanical emplacement of the underlying saucer-shaped sills. Their growth was associated with an increase in faulting of the overlying strata and influenced coeval or subsequent development of polygonal fault systems within the overburden. These structures represent a new type of four-way dip closed hydrocarbon trap.
Article
Doleritic sill complexes, which are an important component of volcanic continental margins, can be imaged using 3D seismic reflection data. This allows unprecedented access to the complete 3D geometry of the bodies and an opportunity to test classic sill emplacement models. The doleritic sills associated with basaltic volcanism in the North Rockall Trough occur in two forms. Radially symmetrical sill complexes consist of a saucer-like inner sill at the base with an arcuate inclined sheet connecting it to a gently inclined, commonly ragged, outer rim. Bilaterally symmetrical sill complexes are sourced by magma diverted from a magma conduit feeding an overlying volcano. With an elongate, concave upwards, trough-like geometry, bilaterally symmetrical sills climb away from the magma source from which they originate. Both sill complex types can appear as isolated bodies but commonly occur in close proximity and consequently merge, producing hybrid sill complexes. Radial sill complexes consist of a series of radiating primary flow units. With dimensions up to 3 km, each primary flow unit rises from the inner saucer and is fed by a primary magma tube. Primary flow units contain secondary flow units with dimensions up to 2 km, each being fed by a secondary magma tube branching from the primary magma tube. Secondary flow units in turn are composed of 100-m scale tertiary flow units. A similar branching hierarchy of flow units can also be seen in bilaterally symmetrical sill complexes, with their internal architecture resembling an enlarged version of a primary flow unit from a radial sill complex. This branching flow pattern, as well as the interaction between flow units of varying orders, provides new insights into the origin of the structures commonly seen within sill complexes and the hybrid sill bodies produced by their merger.
The data demonstrate that each radially symmetrical sill complex is independently fed from a source located beneath the centre of the inner saucer, grows by climbing from the centre outwards and that peripheral dyking from the upper surface is a common feature. These features suggest a laccolith emplacement style involving peripheral fracturing and dyking during inner saucer growth and thickening. The branching hierarchy of flow units within bilaterally symmetrical sill complexes is broadly similar to that of primary flow units within a radially symmetrical sill complex, suggesting that the general features of the laccolith emplacement model also apply.
|
{}
|
# Measurement of the Top Pair Production Cross Section in the Dilepton Decay Channel in ppbar Collisions at sqrt s = 1.96 TeV
Abstract : A measurement of the $t\bar{t}$ production cross section in $p\bar{p}$ collisions at $\sqrt{s}$ = 1.96 TeV using events with two leptons, missing transverse energy, and jets is reported. The data were collected with the CDF II Detector. The result in a data sample corresponding to an integrated luminosity of 2.8 fb$^{-1}$ is: $\sigma_{t\bar{t}}$ = 6.27 $\pm$ 0.73(stat) $\pm$ 0.63(syst) $\pm$ 0.39(lum) pb for an assumed top mass of 175 GeV/$c^{2}$.
Document type :
Journal articles
http://hal.in2p3.fr/in2p3-00456970
Contributor : Swarna Bassava
Submitted on : Tuesday, February 16, 2010 - 11:08:26 AM
Last modification on : Tuesday, October 19, 2021 - 4:44:37 PM
### Citation
T. Aaltonen, J. Adelman, B. Alvarez Gonzalez, S. Amerio, D. Amidei, et al.. Measurement of the Top Pair Production Cross Section in the Dilepton Decay Channel in ppbar Collisions at sqrt s = 1.96 TeV. Physical Review D, American Physical Society, 2010, 82, pp.052002. ⟨10.1103/PhysRevD.82.052002⟩. ⟨in2p3-00456970⟩
|
{}
|
# KaTeX is a (partial) alternative to (some of) MathJax
Khan Academy has released a new library to typeset mathematical notation on webpages, called KaTeX.
“But we already have MathJax!” you say, perhaps a little too enthusiastically. Well, Khan Academy has a lot of pages about maths, and they have a fairly rare set of requirements: the maths they use is relatively simple, they usually have a lot of it on the page, and their users are usually not techy types with zippy computers. For their purposes, MathJax has a couple of big design flaws: it likes to measure the text surrounding each equation very exactly so that its output is pixel-perfect, and it runs asynchronously, so you can see equations popping into existence as it works down the page.
Because of that, they’ve clearly decided it’s worthwhile writing a new library to typeset maths that suits them better than MathJax. KaTeX is much more limited in the kinds of output it supports than MathJax, sticking to inline-style rendering and a much smaller subset of TeX commands. It runs synchronously, which means it exchanges temporarily locking up the browser for taking much less time to finish. Importantly, it doesn’t get held up by asking the browser to take measurements, instead using much faster CSS rules to make sure the output matches surrounding text.
Murray Bourne has had a good look at KaTeX and written a blog post acting as an FAQ of sorts, and he’s also set up a page where you can compare the output of KaTeX with MathJax.
Khan Academy is using KaTeX as a first port of call to render maths, and using MathJax as a fallback for things it can’t render. So it looks like KaTeX really is worth having.
KaTeX very neatly fills its niche, but it’s not a complete replacement for MathJax. I’m sure it’ll come in useful on sites other than Khan Academy, so thanks to them for open-sourcing it.
Note: I did some paid freelance work for MathJax earlier this summer.
KaTeX
Source code on GitHub
KaTeX – a new way to display math on the Web by Murray Bourne
KaTeX and MathJax comparison demo
• #### Christian Lawson-Perfect
Mathematician, koala fan, Aperiodical editor. Usually found paddling in the North Sea, or fiddling with computers.
### 3 Responses to “KaTeX is a (partial) alternative to (some of) MathJax”
Small correction: KaTeX can render display math too, e.g. if you feed \displaystyle \sum_0^\infty into it. (There is no direct API to select inline/display yet [https://github.com/Khan/KaTeX/issues/66] and it’s not inferred from delimiters like `$` vs `$$` because KaTeX doesn’t recognize math by delimiters yet [https://github.com/Khan/KaTeX/issues/26])
2. Tom
As of May 13, 2016, KaTeX supports both inline and display equations, as well as some basic environments such as array, matrix, pmatrix, bmatrix, Bmatrix, vmatrix and Vmatrix. Equation alignment can get by with the aligned environment, and a bunch of new symbols have been added to the library as well. All in all, good for general purpose, stunning speed (but unclear how it’ll slow down as the library expands over time). It’s probably not for powering sites with very specific usage of LaTeX though.
|
{}
|
# RRB Group-D Percentage Questions PDF
Download Top 20 RRB Group-D Percentage Questions and Answers PDF. These RRB Group-D Maths questions, based on questions asked in previous exam papers, are very important for the Railway Group-D exam.
Download RRB Group-D Previous Papers PDF
Question 1: A seller sold $\frac{3}{4}$th of his goods at 24% profit. He sold the rest of the goods at cost price. What is the percentage of his profit ?
a) 15
b) 18
c) 24
d) 32
Question 2: There are 40% women workers in an office. 40% of the women and 60% of the men of that office voted in my favour. What is the percentage of total votes in my favour ?
a) 24
b) 42
c) 50
d) 52
Question 3: Rajendra’s salary is $\frac{1}{5}$ more than Nikita’s salary. By what percentage is Nikita’s salary less than Rajendra’s salary?
a) 25
b) 13.33
c) 12.34
d) 16.66
Question 4: The cost price of 18 articles is equal to the selling price of 15 articles. The gain percent is–
a) 15%
b) 20%
c) 25%
d) 18%
Question 5: 30% of a class consists of girls. If 20% of the girls and 30% of the boys in that class are selected to form a team, what is the percentage of girls in the team?
a) 33.33%
b) 22.22%
c) 19.19%
d) 15.15%
Question 6: The price of a watch is Rs 1000. It is first decreased by 10% and then increased by 10%. What is the difference between the new price of the watch and the old price of the watch?
a) 0
b) 5
c) 10
d) 20
Question 7: In an examination, the number of technical questions is 40% and the remaining are non-technical questions. A student answers 50% of the technical questions and 60% of the non-technical questions and leaves the remaining. If the number of questions left un-attempted by the student is 66, what is the total number of questions in the paper?
a) 150
b) 140
c) 130
d) 120
Question 8: A basket has only apples and oranges. 30% of the fruits are apples. 60% of apples and 70% of oranges are ripe. What percentage of the total fruits is not ripe?
a) 33
b) 67
c) 35
d) 65
Question 9: If the selling price of bananas falls by 20%, we get 5 more bananas for Rs. 10. What was the previous price of one banana ?
a) 30 paise
b) 50 paise
c) 40 paise
d) 60 paise
Question 10: Sakshi gave away 20% of her stamp collection to Jyothi and 15% to Aruna. It she still has 520 stamps, then how many did she have initially ?
a) 700
b) 600
c) 800
d) 1000
Question 11: The cost price of 18 articles is equal to selling price of 15 articles. The gain percent is:
a) 25%
b) 20%
c) 33%
d) 40%
Question 12: 80% of 4/7th of a number is 16. The number is
a) 7
b) 35
c) 70
d) 149/4
Question 13: Mr. Agarwal gave 30% of his money to his wife, half of the remaining to his daughter and the rest was to be distributed among his five sons equally. If each son received Rs. 14,000, what was the total amount he had ?
a) Rs. 1,75,000
b) Rs. 2,00,000
c) Rs. 1,00,000
d) None of these
Question 14: In a mixture of syrup and water there is 60 per cent syrup. If 5 litres of syrup is added then there is 35 per cent water in the mixture. The initial quantity of mixture was
a) 40 litres
b) 35 litres
c) 30 litres
d) 32 litres
Question 15: 0.28 part of a window is painted black, 0.33 part is painted red. What is the percentage of the part of the window to be painted ?
a) 0.61
b) 0.39
c) 30.5%
d) 39%
Question 16: A person sold an article for Rs. 9,999 whose cost price was Rs. 10,000. Calculate the percentage of loss incurred.
a) 1%
b) 0.1%
c) 0.01%
d) 0.001%
Question 17: Cost of an article was Rs. 1,200. If it is sold for Rs. 1,000, calculate the percentage of loss incurred.
a) $16\frac{2}{3}$ %
b) $12\frac{3}{4}$ %
c) $14$%
d) $15$%
Question 18: If Anil’s salary is one-third more than that of Sunil’s, then by what percentage is Sunil’s salary less than Anil’s ?
a) $3\frac{1}{3}$
b) $12\frac{1}{2}$
c) $25$
d) $66\frac{2}{3}$
Question 19: A person used to get 15 litres of petrol for a certain amount but due to the hike in the price he is getting one litre less for the same amount. What is the percentage rise in the petrol price ?
a) $\frac{2}{3}\%$
b) $7\frac{1}{7}\%$
c) $12\frac{1}{2}\%$
d) $25$%
Question 20: An article costs Rs. 80 to the vendor. If he marks the article for 50% more than the cost price and sells it 25% less than the marked price, what is his gain percentage ?
a) 30%
b) $37 \frac{1}{2}\%$
c) $12\frac{1}{2}\%$
d) 20%
Answers & Solutions:
1) Answer (B)
Let there be 100 items worth 100 rupees total. So, the seller sells 75 at 75 * 124/100 = 93 rupees.
and 25 at 25 rupees.
Total profit = 93 + 25 – 100 = 18 rupees
Profit % = 18/100 = 18%
2) Answer (D)
There are 40 women and 60 men in the group. 40 % of the women and 60% of the men = 40 * 40% + 60 * 60 % = 52
3) Answer (D)
Let’s say Nikita’s salary is x, hence Rajendra’s salary will be $\frac{6x}{5}$, hence Nikita’s salary is $\frac{x}{5}$ lesser than that of Rajendra’s salary.
So percentage = $\frac{\frac{x}{5}}{\frac{6x}{5}} \times 100 = 16.66 \%$
4) Answer (B)
Let the price be 180 Rs. 18 items were purchased at 180 rupees and 15 items were sold at 180 rupees.
So, cost price is 10 rupees and selling price is 12 rupees.
So, profit percent = 20%
5) Answer (B)
Let the number of students in the class be 100.
Number of girls = 30
Number of boys = 70
Number of girls selected to the team = 20% of 30 = 6
Number of boys selected to the team = 30% of 70 = 21
So, percentage of girls in the team = 6/27 * 100% = 22.22%
6) Answer (C)
Initial price of the watch = 1000
After 10% decrease, price of the watch = (90/100)*1000 = 900
After 10% increase, price of the watch = (110/100)*900 = 990
So, the new price of the watch = Rs 990
Difference between the new price and old price = 1000 – 990 = Rs 10
7) Answer (A)
Let the number of questions in the paper be N.
Number of technical questions = 40% of N = 2N/5
Number of non-technical questions = 60% of N = 3N/5
Number of questions left unanswered by the student = 50% of technical questions and 40% of non-technical questions = N/5 + 6N/25 = 11N/25
So, 11N/25 = 66
=> N = 66*25/11 = 150
8) Answer (A)
Suppose, there are 100 fruits in the basket.
=> 30 are apples and 70 are oranges
60% of apples are ripe => 30 * 0.6 = 18 apples are ripe => 12 apples are not ripe.
70% of oranges are ripe => 70 * 0.7 = 49 oranges are ripe => 21 oranges are not ripe.
=> Total number of fruits that are not ripe = 12 + 21 = 33
=> 33% of fruits are not ripe
9) Answer (B)
Let the price of bananas be x Rs/ banana.
If the price falls by 20%, cost of banana = .8x
Number of bananas for Rs. 10 = 10/.8x
But 10/(0.8x) - 10/x = 5
So, x = .5
10) Answer (C)
65% of the stamps = 520
100% = ?
520/.65 = 800
11) Answer (B)
Let the CP of 18 articles = SP of 15 articles = 180 Rs.
CP of one article = 10 Rs
SP of one article = 12 Rs
Profit % = 20%
12) Answer (B)
Let the number be x. Hence, 4/5*4/7*x=16. Hence, x=5*7=35.
13) Answer (B)
Let x be the total amount with Mr.Agarwal
money given to his wife=30% of x
money given to his daughter=35% of x
money given to each son=(35/5)% of x=7% of x
given, 7% of x=14,000
=>x=Rs.2,00,000
14) Answer (B)
Let the initial quantity of the mixture be x litres.
Syrup = .6x
Water = .4x
If 5 litres of syrup is added, syrup = .6x + 5
water = .4x
But 0.4x/(x + 5) = 35%
So, x = 35 litres
15) Answer (D)
Percent of window remaining to be painted = 100% - (percent already painted) = (100 - 61)% = 39%
16) Answer (C)
loss% = $\frac{CP-SP}{CP}\times100$ = $\frac{1}{10000}\times100$% = 0.01%
17) Answer (A)
Percentage of loss = (CP - SP)/CP × 100 = (200/1200) × 100 = 16.67%
18) Answer (C)
Let Sunil’s salary be 100 Rs.
Anil’s salary = 133.33 Rs
Sunil’s salary is less than Anil’s by 33.33/133.33 = 25%
19) Answer (B)
Let the amount that is fixed be 210 Rupees
He used to get 15 litres of petrol for Rs. 210 earlier and then only 14 litres.
Initial cost of petrol = 210/15 = 14 Rs
Final cost of petrol = 210/14 = 15 Rs.
% rise in petrol price = 100/14 = $7 \frac {1}{7}\%$
20) Answer (C)
Cost Price of the article = 80 Rs.
Marked Price = 80 x 1.5 = 120
Selling Price = 120 x .75 = 90
Profit = 10 Rs
Profit % = 10/80 = 12.5%
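These results are easy to sanity-check programmatically. As an illustrative sketch (not part of the original solutions), here is Question 20 redone in Python:

```python
# Question 20: vendor's cost is Rs. 80, the article is marked 50% above
# cost and sold at 25% below the marked price.
cost_price = 80
marked_price = cost_price * 1.5        # Rs. 120
selling_price = marked_price * 0.75    # Rs. 90
profit_percent = (selling_price - cost_price) / cost_price * 100
# profit_percent comes out to 12.5, matching answer (c)
```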
We hope these Percentage Questions for the RRB Group-D Exam will be highly useful for your preparation.
|
{}
|
## Conceptual Physics (12th Edition)
A ball thrown straight upward has zero speed at the very top of its flight, but acceleration there has a magnitude of $10 \frac{m}{s^{2}}$ directed downward.
|
{}
|
# Compute the sum of probabilities when they are given as logits
Say I have a large set of probabilities given by their logarithms: $$\{\ln p_i, 1 \leq i \leq N\}$$.
I want to compute $$\sum p_i$$, if possible without exponentiating $$\ln p_i$$, since some of those probabilities are really small and I would suffer a dramatic loss of precision by doing so.
Do you know of any clever trick ?
Edit 14/06
• I compute the probabilities along a probability tree whose depth can go as high as $$D = 10\,000$$. This tree is typically sparse, but I don't have a better upper bound than $$N = 2^D$$.
• Some of these probabilities get very low (e.g. $$10^{-200}$$). I work in python. I haven't witnessed firsthand a precision loss, but I suspected it might happen and decided to reach out to more knowledgeable people.
• My current implementation keeps only the 1000 highest log probabilities, exponentiates and sums them.
• Computing the log of the sum is perfect for my application, since I have a natural baseline for the log probs. I'd gladly mark this as an approved answer.
• I don't think there should be loss of precision in this computation. Do you mean you have trouble with underflow (probabilities smaller than $10^{-308}$ that get approximated to 0 in IEEE)? Otherwise, can you provide an example? – Federico Poloni Jun 12 '20 at 14:31
• How many terms do you want to add up? – Wolfgang Bangerth Jun 12 '20 at 15:01
• If you are satisfied with computing the log of the sum of the probabilities, the “log sum exp” trick is commonly applied to this in statistics and machine learning settings. Of course, you could compute the log of the sum and then exponentiate, but this approach should give some under/overflow protection. This Wikipedia article gives the trick, based on making this computation in terms of the largest probability instead of in absolute terms, en.m.wikipedia.org/wiki/… . I’m unsure of other approaches though. – cdipaolo Jun 12 '20 at 16:19
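The "log-sum-exp" trick mentioned in the last comment is short to implement directly; a minimal sketch in Python (the helper name is my own; SciPy ships the same idea as `scipy.special.logsumexp`):

```python
import math

def log_sum_exp(log_ps):
    """Stably compute log(p_1 + ... + p_N) given the values ln(p_i)."""
    m = max(log_ps)                 # factor out the largest term
    if m == -math.inf:              # all probabilities are zero
        return -math.inf
    # Every exp(x - m) is <= 1, so nothing overflows, and the largest
    # term contributes exactly 1, which preserves precision.
    return m + math.log(sum(math.exp(x - m) for x in log_ps))

# Probabilities around 1e-200 are handled without underflow:
log_total = log_sum_exp([math.log(1e-200)] * 4)   # = log(4e-200)
```

If the sum itself (rather than its log) is needed, exponentiate only at the very end; the result then underflows only when the true sum is itself below the smallest representable float.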
|
{}
|
9758/2021/P2/Q02
The diagram shows a sketch of the curve $y=f(x)$. The region under the curve between $x=1$ and $x=5$, shown shaded in the diagram, is $A$. This region is split into 5 vertical strips of equal width, $h$.
[2]
(a) State the value of $h$ and show, using a sketch, that $\sum_{n=0}^4(\mathrm{f}(1+n h)) h$ is less than the area of $A$.
[2]
(b) Find a similar expression that is greater than the area of $A$.
[1]
You are now given that $\mathrm{f}(x)=\frac{1}{20} x^2+1$
(c) Use the expression given in part (a) and your expression from part (b) to find lower and upper bounds for the area of $A$.
[2]
(d) Sketch the graph of a function $y=\mathrm{g}(x)$, between $x=1$ and $x=5$, for which the area between the curve, the $x$-axis and the lines $x=1$ and $x=5$ is less than $\sum_{n=0}^4(\mathrm{g}(1+n h)) h$.
[1]
[4]
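The two Riemann sums in parts (a)-(c) can be checked numerically. As a quick sketch (not required by the exam), using the given $\mathrm{f}(x)=\frac{1}{20}x^2+1$ and $h=0.8$:

```python
def f(x):
    return x**2 / 20 + 1

h = (5 - 1) / 5                                      # 5 strips, so h = 0.8
lower = sum(f(1 + n * h) * h for n in range(5))      # left endpoints: part (a)
upper = sum(f(1 + n * h) * h for n in range(1, 6))   # right endpoints: part (b)
# Since f is increasing on [1, 5], lower = 5.608 and upper = 6.568
# bracket the exact area, which is 91/15 (about 6.067).
```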
|
{}
|
# Winnie-the-Pooh and the 27 honey pots
Winnie-the-Pooh keeps his $27$ honey pots in the larder. Each pot contains up to $1$ kilogram of honey, and different pots contain different quantities of honey. All $27$ pots together contain $17$ kilograms of honey.
Every day, Winnie-the-Pooh selects $7$ pots, picks a real number $x$, and then eats exactly $x$ kilograms of honey from each of the selected pots.
Question: Is it always (independently of the initial distribution of honey) possible for Winnie to empty all $27$ honey pots in a finite number of days?
• Does he select 7 pots which have honey in them or it does not matter?
– dmg
Sep 25 '15 at 8:42
• @Gamow So he can select the same empty pot every day? Forever? Poor Winnie...
– dmg
Sep 25 '15 at 9:12
• I have a feeling that when you pick a random distribution the chance is actually pretty slim that it can be done. But don't know how to proof it Sep 25 '15 at 11:04
• Is Pooh picking his x at random, or may we assume that he follows an optimal strategy, never picking an x that would put the puzzle into an unsolvable state? Sep 25 '15 at 15:54
• I believe WtP keeps his honey pots in the larder. A minor point, I know.
– A E
Sep 25 '15 at 15:59
Here is a rigorous proof that Pooh can always succeed.
Let $P_i$ be the amount of honey in the $i^{th}$ pot when sorted in decreasing honey contents, so $1\ge P_1\ge P_2\ge \dots\ge P_{27}\ge0$ and $P_1+\dots+P_{27}=17$.
Using the beginning of Sleafar's solution, we now need only show how to equalize the seven heaviest pots. At any point, let $m$ be the number of pots which have the same amount of honey as the heaviest, and let $n$ be the number of nonempty pots (so initially, $m=1$ and $n=27$). We show that, as long as $m<7$ and $n\ge 14$, it is possible to either increase $m$ or decrease $n$. We then argue why $n$ can't drop below $14$ before $m$ reaches $7$, so that eventually we equalize the top $7$ pots.
Here is the method. Let $x$ be the gap in honey contents between the heaviest $m$ pots and the next strictly lighter pot. Each day for the next $n-7$ days, Pooh eats $\frac{x}{n-7}$ kg from each of the $m$ heaviest pots. Each of these days, he must eat the same amount from some $7-m$ other pots. He does this in such a way that the $n-7$ pots $P_8,P_9,\dots,P_n$ are eaten from equally over this $n-7$ day period (for example, by arranging the nonempty pots in a circle, and eating from each consecutive segment of length $7-m$).
As long as each of the nonempty pots contain at least $\frac{x}{n-7}\cdot (7-m)$ kg of honey, then this will work, causing the $m$ heaviest pots to decrease to the next highest weight, thus increasing the number $m$. If the nonempty pots do not contain enough honey to do this, then adjust $x$ so that the smallest pot contains exactly $\frac{x}{n-7}\cdot (7-m)$. Then doing the procedure will empty that pot, thus decreasing $n$.
Now, why can't $n$ drop below $14$ before $m$ reaches $7$? First off, the amount of honey that needs to be removed from pots 8 through $n$ in order to equalize pots 1 through 7 is equal to $$6(P_1-P_2)+5(P_2-P_3)+4(P_3-P_4)+3(P_4-P_5)+2(P_5-P_6)+(P_6-P_7)$$ This is because, during the phase when pots $P_1,\dots,P_m$ are equal, we have to remove a total of $P_{m}-P_{m+1}$ from each of the first $m$ pots, and $(7-m)$ times that from the last pots 8 through $n$. A different way of writing this is $$6P_1-P_2-P_3-\dots -P_7\le 6 - 6P_7< 3$$ The last inequality follows since $P_7> \frac12$. Why is this true? If not, then the first 6 pots would initially contain at most $6$ kg, and the last 21 would contain at most $\frac12$, so that the total amount of honey was at most $6+\frac12\cdot 21=16.5$, which is impossible since we started with $17$ kg of honey.
So, the procedure will remove less than $3$ kg from the last 20 pots. However, the first 14 pots contain at most 14 kg, meaning the 13 lightest pots together contain at least $17-14=3$ kg. Since our procedure eliminates the lightest pots first, the first 14 pots will not be emptied, as claimed.
• Great approach and easy to follow (though it still has to sink in a little). I suggested an edit that among other things changed some instances of 27 into n. Most of my mind is convinced now that irrationality doesn't matter. Shakes up my mathematical intuitions a bit :-) Sep 28 '15 at 20:21
• Profoundly good answer! I wonder if the estimates being so tight was 'coincidental' (ie, it's something special about the choice of numbers). Jan 7 '16 at 6:39
# Update:
I made a big blunder in the previous version, this should be fixed now.
## Part 1: It is always possible to remove amount $x$ from any number $n \ge 7$ of pots if all pots have at least amount $x$
Let $p$ be the number of permutations of 7 pots that contain a specific pot. Cycle through all permutations and remove amount $\frac xp$ from each selected pot. As each pot occurs $p$ times, the total removed amount from each pot is $x$.
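Part 1 can be verified by brute force. A minimal Python sketch (my own illustration, using 9 pots so that $p = \binom{8}{6} = 28$) cycles through every selection of 7 pots and confirms that each pot ends up lighter by exactly $x$:

```python
from itertools import combinations

def remove_from_all(pots, x):
    """Remove exactly x from every pot using only 7-pots-per-day moves."""
    n = len(pots)
    # p = number of 7-pot selections containing a fixed pot, i.e. C(n-1, 6)
    p = sum(1 for c in combinations(range(n), 7) if 0 in c)
    for day in combinations(range(n), 7):   # one selection per day
        for i in day:
            pots[i] -= x / p                # eat x/p from each selected pot
    return pots

pots = remove_from_all([1.0] * 9, 0.5)
# every pot went from 1.0 kg down to 0.5 kg
```

This schedule takes $\binom{9}{7} = 36$ days and is far from optimal, but it demonstrates that a uniform removal is always achievable, which is all Part 1 needs.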
## Part 2: A combination of pots where the 7 pots containing the most amounts are equal is known to work regardless of the amounts in other pots
Consider the following image:
The bars show the amounts in the pots, ordered by the amount. The amounts in the first 7 pots are equal. Start with the smallest pot $J$ and remove the amount of $J$ from all pots (this works as seen in part 1). Repeat with pot $I$ and then with pot $H$. At the end we have the top 7 pots containing only the black marked area, which can be emptied in the next step.
## Part 3: Removing the superfluous amount in the biggest pot requires 6 times the same amount in the 20 smallest pots
Consider the following image:
We must remove the red part in pot $A$ to reach the situation from part 2. It is easy to see that we can do this if we have the same 6 parts in pots $H-M$.
This is the place where I made a blunder in the previous version, assuming I could trade the red part in $A$ with a combination of parts of the same size in pots $>G$, instead of 6 times the size.
What can be said about the distribution of the amounts beyond pot $G$? Actually it doesn't matter, if we use a trick. Consider the following image.
We want to remove the red part in pot $A$. The orange part in pot $H$ has 6 times the size of the red part. We can remove the red part and the orange part of pot $H$ while also removing the orange parts in pots $A-G$. To do this we generate all permutations of 6 pots from the range $B-H$:
$\binom{7}{6}$ = 7
The number each of the pots $B-H$ appears in this permutations is:
$\binom{6}{5}$ = 6
Now we add pot $A$ to all these permutations and remove the red part from each permutation. We remove the red part 7 times from pot $A$ and 6 times from pots $B-H$ which is exactly what we see in the image above. This can be easily extended to any number of pots, as long as they contain 6 times the size of the red part.
The other conclusion is, no matter what we do, we can't remove the red part if the size of the pots beyond $G$ is less than 6 times the size of the red part.
## Part 4: Removing the superfluous amount in the 2 biggest pots requires 2.5 times the same amount in the 20 smallest pots
Consider the following image:
Similar to part 3, we now want to remove the 2 red parts in pots $A-B$. It is again easy to see that we can do this if we have the same 5 parts in pots $H-L$.
Using the same reasoning as in part 3, it can be shown, that the amount distribution beyond pot $G$ doesn't matter.
It can also be shown that similar applies to superfluous amounts in pots $C-F$.
## Part 5: The general case
The following image shows the general case which is guaranteed to be solvable based on the parts above:
As we have seen in part 3, the distribution in pots beyond $G$ doesn't matter. Also as seen in part 2 adding amounts in pots beyond $G$ is solvable too. What is left is to show which amounts can be solved in any case.
## Part 6: The edge case
The above image shows the edge case, where the red area is maximized (if we limit ourselves to 13 pots). The height of pots $H-M$ must be the same as the height of the red part of pot $A$. Pots $B-G$ must have also the same size. As $A$ is limited to $1$, the height of pots $B-M$ must be $\frac{1}{2}$. The size of this case is:
$1 + 12 * \frac{1}{2} = 7$
This means that 7 kg of honey is enough to guarantee that the puzzle is solvable, if the pot size is limited to 1 kg.
Any other amounts which we would try to add, don't change anything. Amounts added to pots beyond $G$ are OK as shown in part 2. Amounts added to pots $B-G$ reduce the required amount in pots beyond $G$ as can be seen in the following image:
One more thing we could do, is to use the other 14 pots we didn't use yet to further maximize the red area, like in the following image:
We used here one more pot. We increased the size of the red area above $A$ by some amount $x$. We also increased the size of the red area beyond pot $G$ by $6x$. But at the same time we decreased the size of the black area by $7x$, which in the end gives a total change of $0$.
The pot size which would still work for 17kg is (based on the first edge case):
$x + 12 * \frac{x}{2} = 17$
$x = \frac{17}{7} \approx 2.43$
Also, this solution works with any number of pots $\ge 13$.
Alternatively we can make Winnie-the-Pooh really happy and tell him, he can eat from 17 pots each day (which would still work with 17kg and 1kg max per pot).
• great post, I was just wondering what your thoughts were on the situation where a pot contains an irrational amount of honey? (Say root(2)/2?) Irrationality messes with my head, and I'm not sure if it changes anything Sep 28 '15 at 11:44
• the finite number of days part worries me, though maybe I'm just not fully understanding your solution Sep 28 '15 at 11:49
• @user1868155 Hehe, irrational numbers were also one of my first ideas, but they simply don't matter, as you found out yourself. Even if the total amount were irrational it would still work; think of any of the colored boxes above as an irrational number. Apart from that, the algorithm I described uses a finite number of steps (which can still be optimized by summing up the same permutations used in different steps). Sep 28 '15 at 16:44
YES, HE CAN!
Rigorous proof and exact bounds: If the starting amount of honey in each pot is less than or equal to $17/7$ kilograms, then Pooh can always do this. If the starting amount of honey in each pot can be more than $17/7$ kilograms, then Pooh cannot always do this.
More generally: If the total amount of honey is $A$, the number of pots is $X$, each of the pots contains at most $P$ kilograms of honey and Pooh chooses every day $r \leq X$ pots, then he can always empty them if and only if $P \leq A/r$. (I have provided computations for $A=17, X=27, r=7$, but the same approach works for all triplets $(A, X, r)$.)
Proof:
PART 1.
The idea is to arrange the pots by amount of honey in decreasing order - #1, #2, #3, etc. - and then take honey in such a way that we get more and more pots with the same amount of honey as pot #7.
Arrange the pots by amount of honey:
$x_1\geq x_2\geq x_3\geq x_4\geq...\geq x_{27}$.
Assume that we have:
$x_1\geq x_2\geq ...\geq x_k>x_{k+1}=x_{k+2}=...=x_l>x_{l+1}\geq x_{l+2}\geq ...\geq x_{27}$,
where $k<7$ and $l\geq 7$ (this means that all pots with numbers from $k$ up to $l$ have the same amount of honey as pot #7). Now Pooh takes honey from the first $l$ pots, such that the amount of honey taken from each of pots $k+1, ... , l$ is $(7-k)/(l-k)$ of the amount of honey taken from each of pots $1,2,...,k$ and one of these two conditions is satisfied:
1. the amount of honey in pot $k$ becomes equal to the amount of honey in pots $k+1,k+2,...,l$;
2. the amount of honey in pot $l+1$ becomes equal to the amount of honey in pots $k+1, k+2,...,l$.
It is easy to see that Pooh can complete this task in $l$ days - just take honey from pots $1, 2, \ldots, k$ plus some honey from $7-k$ of the pots among $k+1, k+2, \ldots, l$, chosen in a cyclic manner. After he completes this step, he repeats the process until all pots get an equal amount of honey in them and eventually become empty.
Now we prove that eventually we will get $x_1=x_2=...=x_{27}=0$. Indeed, if we assume that Pooh gets stuck, then all of the pots except for at most $6$ must have become empty. Let the number of non-empty pots be $m \leq 6$ and the total amount of honey remaining be $H \in (0,\min(mP,17))$. Clearly, the amount of honey taken from these $m$ pots is exactly $m(17-H)/7$ (because every time honey is taken out, $m/7$ of it comes out of them). Since each pot initially contains at most $P$ kilograms of honey, we have
$$\frac{(17-H)m}{7}+H\leq mP.$$
The inequality above just states that the total amount of honey remaining in the last $m$ pots + the amount of honey taken out of them is no more than the maximum amount of honey there was in the pots initially.
We can rewrite the inequality as $$0 < H(7-m) \leq m(7P-17) \implies P > 17/7.$$
Therefore we can conclude that if $P\leq 17/7$, then Pooh can always do this.
PART 2
The idea is to fill up the first pot entirely.
Now we prove the converse. Suppose the first pot has $P > 17/7$ kilograms of honey and the others have $(17-P)/26$ each. Every time we take $x$ kilograms of honey from the first pot, we must take at least $6x$ kilograms of honey from the remaining pots. In order to empty the first pot, we have to take out $P$ kilograms of honey from it, and therefore we have to take out at least $6P$ kilograms of honey from the other pots. However, $6P > 17-P$, which is a contradiction.
Remark: If the total amount of honey is $A$ and the number of pots Pooh eats from every day is $r$, then the same calculations show that the optimal bound is $P=A/r$. The total number of pots does not matter at all; it could be anywhere from $7$ to infinity.
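The threshold $P = 17/7$ from the proof can be sanity-checked with exact arithmetic. A minimal sketch (my own, with a hypothetical helper name):

```python
from fractions import Fraction

def counterexample_breaks(P):
    """Part 2 configuration: one pot holds P, each of the other 26 holds
    (17 - P)/26.  Emptying the first pot forces at least 6P kg to be
    eaten from the rest, which is impossible exactly when 6P > 17 - P."""
    return 6 * P > 17 - P

threshold = Fraction(17, 7)
```

At $P = 17/7$ we get exactly $6P = 17 - P = 102/7$, so the contradiction appears only for $P$ strictly above the threshold.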
• I'm trying to work through your procedure, but I run into trouble for the initial situation. Here we have either k=7 or l=7, or in other words, the assumption as stated doesn't hold. This is probably minor but I don't yet have the insight to see what the fix should be. Sep 28 '15 at 8:24
• Thanks for the remark @Oliphaunt, it was a typo - l>=7. Sep 28 '15 at 15:16
• 17/6 kg can't be right. If you put 17/6 kg in the first 6 pots, you used up all 17kg and poor Winnie can't eat any honey at all. In my updated answer it's 17/7 (off-by-one-error somewhere?). Sep 28 '15 at 16:51
• Yes, messed up a bit the calculations. Will fix it now, thanks for the remark! Sep 28 '15 at 17:05
• @Oliphaunt The distinct restriction only adds an epsilon which can be as small as required. And Winnie could then eat 21*epsilon=0,0000... Sep 28 '15 at 17:11
It is not always possible.
First note that, suppose that it would be possible then it could always be finished in $\binom{27}{7}$ days, because this is the number of possible ways to pick 7 pots, and eating from the same 7 pots on multiple days can always be combined to eating that amount in 1 day.
In all those combinations every pot occurs the same number of times. So let's say it were possible to distribute the 17kg evenly (which isn't allowed by the rules). Then there is a solution in exactly $\binom{27}{7}$ days in which the same exact amount is eaten every day.
This means that given a distribution you are always allowed to subtract $x$ from every pot and try to solve that distribution instead.
So this is what we do:
put 1 gram in the first pot, put 1/10 gram in the second, 1/100 in the third and so on. Whatever is left from the 17kg we distribute evenly among all pots.
According to what I said before, we can neglect that final part so we just see if we can solve it with the 1, 1/10, 1/100 distribution. And this is simple: we can't solve it because the honey in pot one is even more than all the honey of the other pots combined.
• There is one thing that makes me doubt my solution: Just because we can subtract any $x$ amount from all pots doesn't mean that we should do it, I think. Sep 25 '15 at 12:38
• I agree. Suppose there are 3 pots and we have to eat 2 at a time. Then 2,2,4 is actually solvable while 0,0,2 isn't. Sep 25 '15 at 13:44
• @Anachor you're right. thanks for the example. Guess my solution is not sufficient Sep 25 '15 at 13:46
• I think you have the right idea though. If one pot is bigger than the rest combined, it can't be done. Perhaps distribute the remainder by multiplication instead of addition? Sep 25 '15 at 14:53
• I like you insight about the even distribution. Imagine the top seven pots all had the same amount of honey. Then by reducing the top seven, you can get the top eight to all have the same amount of honey. Then reduce the top eight evenly until the top 9 have the same amount, etc. In this way, as long as the top 7 have the same amount, you can eat all the honey. Sep 25 '15 at 18:06
We can formulate the problem as follows. This is not an answer but perhaps it leads to one.
The $27$ pots correspond to a choice of values $p_i$ with $0 \leq i < 27$ satisfying:
1. $p_i \neq p_j$ whenever $i \neq j$;
2. $0 \leq p_i \leq 1$;
3. $\sum p_i = 17$
So each pot configuration lies in $Z'$ the intersection of a hypercube $[0,1]^{27}\subset\mathbb{R}^{27}$ with the hyperplane $\pi$ defined by $(3)$. Of course, the point must also satisfy $(1)$, so it must be in the complement of the union of another $\binom{27}{2}$ hyperplanes. Let $Z \subset \mathbb{R}^{27}$ be the set defined by these conditions.
A solution for a given configuration corresponds to a choice of values $v_j \geq 0$ with $0 \leq j < \binom{27}{7}$. Each such value is the amount of honey Pooh eats from each pot in a given combination of $7$ out of the $27$ pots, so they must satisfy $$\sum v_j = \frac{17}{7},$$and also for each $0 \leq i < 27$ $$\sum v_{j_i} = p_i ,$$ where the sum has $\binom{26}{6}$ terms and is carried over the $v_j$'s which involve pot $i$. These last $27$ equations can be written in matrix form as $M\cdot V = P$, where $V$ is the column vector ${\left(v_j\right)}$, $P$ is the column vector $\left(p_i\right)$ and $M$ is a $27 \times \binom{27}{7}$ matrix of $0$'s and $1$'s. Each line has $\binom{26}{6}$ nonzero entries and each column has $7$ nonzero entries.
So $V$ must lie in the set $S$, defined by the intersection of a hyperplane $\sigma$ in $\binom{27}{7}$-dimensional space with its first quadrant. A solution for $P \in Z$ exists if and only if $P \in M(S)$. Thus, a solution always exists if and only if $Z \subset M(S)$. It might however be easier to show that $Z' \subset M(S)$; this suffices since $Z \subset Z'$.
It's easy to see $M(\sigma) \subset \pi$, and it should not be too hard to show that's actually an equality (this is necessary). It's also easy to see that if $V$ has nonnegative coordinates then so does $M(V)$. However this still needs to be improved.
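To make the shape of $M$ concrete, here is a small-instance sketch (5 pots, 2 pots per day instead of the full $27 \times \binom{27}{7}$ case, which would have 888030 columns); the structural claims about row and column sums can be checked directly:

```python
from itertools import combinations

X, r = 5, 2                          # small instance: 5 pots, 2 per day
combos = list(combinations(range(X), r))
# incidence matrix M: rows = pots, columns = r-element subsets of pots
M = [[1 if i in c else 0 for c in combos] for i in range(X)]

n_cols = len(combos)                 # C(5, 2) = 10 columns
row_sums = [sum(row) for row in M]   # each pot lies in C(4, 1) = 4 subsets
col_sums = [sum(col) for col in zip(*M)]  # each subset touches r pots
```

The same construction scales to the $(A, X, r) = (17, 27, 7)$ case, where each row has $\binom{26}{6}$ ones and each column has $7$.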
An attempt at a TLDR version of the proof using ideas from all the other solutions (especially Artur Kirkoyan's take on the problem) . Definitely not as rigorous but I hope still convincing.
Suppose Pooh follows two rules while eating honey
1. Never eat from a pot with less if you could be eating from a pot with more.
2. If you must choose one pot from several that are equal, cycle between these pots so as to eat from them equally.
Suppose Pooh follows these rules but still ends up with a set $S$ of less than seven nonempty pots at the end. By following the rules, the pots in $S$ must have always been the pots with strictly the most honey, and have therefore been eaten from constantly. Since each pot in $S$ holds at most 1 kg, the constant eating means at most $7$ kg total has been eaten. But this is ridiculous, as there must be more than 11 kg outside the set $S$.
My instincts told me to consider irrationality, so irrationality I will consider.
## All Rational
Consider the case where all the pots contain a rational amount of honey. A rational number is one that can be expressed as $$\dfrac{p}{q}$$ where $$p$$ and $$q$$ are both integers. Hence, it can be seen that there exists some value $$a$$ for which every pot of honey contains an amount of honey equal to $$ba$$ where $$b$$ is some integer number. (for instance, if the honey in pot $$n$$ is $$\dfrac{p_n}{q_n}$$ I can let $$a = \dfrac{1}{q_1q_2...q_{27}}$$) In effect, all the pots of honey contain an amount of honey that is some integer number amount of a small division.
Our plan now is simply to take $$a$$ amount of honey from each of the seven pots that contain the most honey. We can ensure that this always divides evenly by replacing $$a$$ with $$a/7$$ beforehand. We will only run into an issue if at some point in time there exist only six or fewer pots with honey in them. In the worst case scenario, this happens when we have six pots with an extreme amount of honey in them and 21 pots with barely a trickle of honey in them. Because we are dealing with real numbers, the pots can hold amounts of honey arbitrarily close to $$1$$, so we can assume in the worst case scenario all six of the top pots contain $$1$$ kg of honey. The other $$21$$ pots must then contain $$11$$ kg of honey. But if this were the case, every single turn you would choose the first six pots (for they contain the most honey) and one pot from the remaining $$21$$, so by the time the $$11$$ kg in the small pots was eaten you would have drawn $$11 \cdot 6 = 66$$ kg from the first six pots, far more than the $$6$$ kg they actually hold. Meaning that if all the pots contain a rational amount of honey, life is good and Pooh is a happy, albeit unhealthy, bear.
## Some Irrational
Here comes the interesting part. Say for simplicity you have one pot with an irrational amount of honey in it. You might wish to panic, but fear not, for all the pots added up must total $$17$$kg! This means that there must be at least one other irrational pot out there! Our aim then is to convert the irrational pots into rational ones, for then our job is easy.
Consider the case where we have only two irrational pots. The two irrational pots must be a 'conjugate pair', i.e. if one of the irrational pots was $$\pi /10$$ the other must be of the form $$k - \pi/10$$ where $$k$$ is some rational constant greater than $$\pi/10$$. Another thing we must note is that we can 'truncate' an irrational number: for instance if we have the irrational number $$0.35817395185712...$$ we can take $$0.00017395185712...$$ away from it to rationalize it to $$0.358$$, and this way we avoid the trouble of accidentally 'grounding' a pot, or eating all of its honey. The concept of the conjugate irrational pot pairs is very important. For example, say we had 25 rational pots and 2 irrational pots, one which contains an irrational number $$w$$, and one which contains its conjugate irrational number, $$\bar{w}$$. We can then choose to rationalize $$w$$ along with six other pots (not including the $$\bar{w}$$ pot), but in the process we create six other $$\bar{w}$$ pots. In essence, we can then choose these seven $$\bar{w}$$ pots and rationalize them. Here is a diagram:
Of course, this is a really simple case. Much more complex cases exist: for instance, what if 26 of the pots each contained a different irrational number, and the 27th pot contained the conjugates to all the other 26 pots? (A pot containing $$(\bar{z} + \bar{y})$$ acts as a conjugate for both $$z$$ and $$y$$.) The key thing to note is that with this 1:1 conjugate ratio business, the above tactic can be applied over and over to remove each set of conjugates.
The REAL problem arises when you have a set of three irrational numbers that combined create a rational one. In this case there is no longer a 1:1 conjugate ratio (i.e. $$w$$ does still have a conjugate $$\bar{w}$$, but it exists split between two different pots). But doesn't this remind you of something? Indeed, we can use the method shown in the image to OBTAIN $$\bar{w}$$ if it happens to be split into multiple pots.
-checking logic and rewriting, is still a work in progress-
• Is irrationality really a necessary consideration here? Every amount of honey can be represented as a rational number, since atoms are discrete entities. How would you get an irrational amount of honey? Sep 28 '15 at 16:53
• @MarkPeters as far as I know, the honey atom hasn't been isolated yet ;-) Also, the puzzle is tagged as math so irrationality should be fair game (I draw the line at complex amounts of honey though). Truth be told, I've been considering irrationality as well and I still think I may have a counterexample... Sep 28 '15 at 17:10
I found the following beautiful solution to this problem at this math blog, and thought it worthwhile to share here.
Divide a circle into 27 arcs whose lengths are proportional to the amounts of honey in each pot. Inscribe a regular heptagon in this circle and rotate it clockwise. While the heptagon rotates, Pooh drinks from the seven pots corresponding to the seven arcs its vertices are touching, at a rate proportional to the rotation. Whenever a vertex passes into a new arc, Pooh swaps the corresponding pot for the next one. At any moment, Pooh is drinking from exactly seven pots at an equal rate. (Since each pot holds at most 1 kg out of 17, each arc spans less than one seventh of the circle, so no two vertices ever lie in the same arc.) By the time the heptagon has rotated 360º/7, returning to its starting position, all pots will have been emptied.
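The rotating-heptagon argument translates directly into a finite schedule. The sketch below is my own rendering of the construction (not from the blog), using exact fractions; the breakpoints are the arc boundaries reduced modulo one seventh of the circle:

```python
from fractions import Fraction

def schedule(pots):
    """Empty all pots via the inscribed-heptagon construction: lay the
    pots out as arcs around a circle and sweep seven equally spaced
    drinking points through one seventh of a turn.  Returns a list of
    (seven_pot_indices, amount_per_pot) pairs, one per 'day'."""
    pots = [Fraction(p) for p in pots]
    total = sum(pots)
    step = total / 7                   # one seventh of the circle
    starts, s = [], Fraction(0)        # arc start position of each pot
    for p in pots:
        starts.append(s)
        s += p

    def pot_at(pos):
        # index of the pot whose arc [start, next_start) contains pos
        hit = 0
        for i, st in enumerate(starts):
            if st <= pos:
                hit = i
        return hit

    # arc boundaries reduced mod step cut the sweep into intervals on
    # which the set of seven touched pots stays constant
    cuts = sorted({st % step for st in starts} | {Fraction(0), step})
    days = []
    for a, b in zip(cuts, cuts[1:]):
        mid = (a + b) / 2
        chosen = tuple(pot_at(mid + k * step) for k in range(7))
        days.append((chosen, b - a))
    return days
```

For 27 pots this yields at most 28 'days', each touching seven distinct pots, and every pot receives in total exactly its initial amount, provided no single pot exceeds one seventh of the honey.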
I'm in two minds about this. The above arguments sound convincing, but it seems that if we simply go through the $\binom{27}{7}$ possibilities, taking $x_i$ for $i\in\{1,\ldots,888030\}$ out of the relevant pots, we have 27 equations with 888030 unknowns. Okay, there's a constraint of $x_i\geq0$, which is concerning, but $888030\gg27$.
# Vera M Winitzky de Spinadel's publications
1. Vera M Winitzky de Spinadel, Teoría de las zonas alcanzables en sistemas bidimensionales, Doctoral Thesis (Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, May 1958).
Introduction
In the year 1676, Gottfried Wilhelm Leibniz first introduced the concept of a differential equation, thus initiating a new era in the field of mathematics. Over the next 50 years, a school of mathematics developed (Bernoulli, Euler, Riccati, Clairaut, etc.) dedicated to obtaining analytic expressions for the solutions of linear and nonlinear differential equations.
It was only in 1739 that Euler began a systematic study of linear equations, and in 1760 Lagrange formulated the general superposition theorem for linear systems, establishing with it a real split between linear and nonlinear mathematics. This concept had such a strong impact on research that the next 125 years were devoted almost exclusively to the development of the theory of linear differential equations, culminating in the concepts of Fourier and Laplace transforms and the study of systems of orthogonal functions. This is why it was only in 1881 that H Poincaré, inspired by the problems of celestial mechanics, published the first fundamental work on the properties of the solution curves of a system of nonlinear differential equations. Shortly after, A Liapounov, in a classic memoir, expounded his theory of motion stability, laying the foundations on which the Russian research school in this domain was later built.
At the same time, I Bendixson applied the theory to the real field and G D Birkhoff, with his classic studies on problems of dynamics, introduced functional analysis in the theory of dynamical systems. Among the classic works, it is also necessary to mention those of O Perron dedicated to the study of the asymptotic behaviour of solutions and analysis of singular points.
In the field of electronics, the oscillating circuit is the most important example of a nonlinear device. And as such, its study has been the subject of numerous and interesting works due to its application in different branches of technology. The nonlinear differential equation of the circuit
$\large\frac{d^2 y}{dt^2}\normalsize + f(y)\large\frac{dy}{dt}\normalsize + ay = 0$
was first studied by E V Appleton and B van der Pol. Later, van der Pol used the method of isoclines to solve it for the case where
$f(y) = -\epsilon (1 - y^{2})$ with $\epsilon = 0.1; 1.0; 10$,
and he found that, under certain conditions, an oscillating circuit performs relaxation oscillations, which are the limiting case when the parameter $\epsilon$ is large. On the other hand, he also showed that relaxation phenomena appear in many branches of science, for example physiology: the heartbeat is a relaxation oscillation.
The problem of oscillations in systems with nonlinear elements has been studied by A Andronow and C E Chaikin, J J Stoker, and many others. Various methods for solving nonlinear equations of this type were developed by N Krylov and N Bogoliubov. The Poincaré perturbation method, originally developed for astronomical problems, was also applied (with limitations).
The search for periodic solutions and Poincaré limit cycles, of such great interest in applications, has been studied in recent times using functional analysis methods, by S Lefschetz, M L Cartwright, N Levinson, J L Massera, etc.
Differential equations of the type
$\large\frac{dx}{dt}\normalsize = f(x) + \omega(t)$
where $f(x)$ is a fixed function and $\omega(t)$ an arbitrary function, with certain restrictions, are of great importance in applications, due to the existence of numerous physical problems whose behaviour is described by an equation like the preceding one. That is why there is great interest in the study of the solution curves of such an equation as the function $\omega(t)$, which generally represents an external variable action applied to the physical system, is varied. In particular, it is important to investigate which states of the physical system can be reached, starting from a given initial state, by properly choosing the control term $\omega(t)$. This is precisely the problem to which the theory of reachable zones, formulated by E O Roxin in $n$-dimensional spaces, gives an answer.
This work applies this theory to the planar case, where reasoning is more intuitive thanks to the corresponding geometric representation, and studies the reachable zones in numerous cases of immediate application in branches of technology and physics.
2. Vera M Winitzky de Spinadel, Sobre Teoremas de Comparación de Juegos Diferenciales, Revista de la Unión Matematica Argentina 26 (2) (1972), 107-114.
Abstract
Recently, A N V Rao has obtained remarkable results referring to the comparison of differential games. To prove his theorems, Rao uses the concept of strategy given by A Friedman. The proof does not hold if one tries to use classic strategies of R Isaacs, of the "feedback" type. However, by adding extra conditions similar to those of L Berkovitz, the validity of the proofs is maintained.
3. Vera M Winitzky de Spinadel, An Application of Pontrjagin's Principle to the Study of the Optimal Growth of Population, International Atomic Energy Agency, Vienna, Austria 2 (1976), 189-199.
Abstract
This paper examines the consequences of an optimal control of population growth; it allows one to derive criteria referring to the economic basis of expenditure on population control and to obtain optimal paths for a model in which such a control is possible. By means of very simple assumptions one can reduce the problem to a two-state-variable control problem and, in consequence, apply Pontrjagin's maximum principle to solve it.
4. Vera M Winitzky de Spinadel, Las redes y sus aplicaciones, Revista de Educación Matemática 4 (1989), 55-82.
Abstract
Network theory is a branch of Operational Research that is applied in the treatment of various problems from the economic, sociological and technological fields. Historically, it has been observed that people, when faced with a problem, tend to draw a diagram in which the points represent individuals, locations, activities, stages of a project, etc., joining them by means of lines that indicate a certain relationship existing among them. D Konig was the first to propose that such diagrams receive the name of networks, making a systematic study of their properties.
Strictly speaking, the exposition of these properties would include a number of concepts and theorems, among which some are relatively complicated. Since our goal is to present this topic in a way that is accessible to a large number of readers of different scientific backgrounds, we will present the basic concepts in the simplest way possible, show how to use them, and give some methods that can be of fruitful use in applications.
5. Vera M Winitzky de Spinadel, Acotación Uniforme Local de las Soluciones en un Sistema de Control con Ruido, Revista de la Universidad de Buenos Aires 39 (1-2) (1994), 18-26.
Abstract
The numerical treatment of a control problem, using a step-by-step procedure, usually involves a certain informational noise of one type or another. Very often, small informational errors produce instability in the solutions, and the problem of regularisation arises. In this paper, a local uniform boundedness of the solutions is proved, which turns out to be useful in the design of "control with guidance".
6. Vera M Winitzky de Spinadel (with Cecilia Crespo Crespo and Christiane de Ponteville), Divisibility and Cellular Automata, Chaos, Solitons and Fractals 6 (1995), 105-112.
Abstract
Cellular automata (CA) are perfect feedback machines which change the state of their cells step by step. In a certain sense, Pascal's triangle was the first CA and there is a strong connection between Pascal's triangle and the fractal pattern formation known as Sierpinski gasket.
Generalising divisibility properties of the coefficients of Pascal's triangle, binomial arrays as well as Gaussian arrays are evaluated mod $p$. In these arrays, two fractal geometric characteristics are evident: a) self-similarity and b) non-integer dimension.
The conclusions at which we arrive, as well as the conjectures we propose, are important facts to take into account when modelling real experiments like catalytic oxidation reactions in Chemistry, where the remarkable resemblance of the graph:
number of entries in the $k$th row of the Pascal's triangle which are not divisible by 2 versus $k$
and the measurement of the chemical reaction rate as a function of time, provides the reason to model a catalytic converter by a one-dimensional CA.
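The quantity graphed in the abstract, the number of odd entries in row $k$ of Pascal's triangle, has a classical closed form ($2$ raised to the number of ones in the binary expansion of $k$, by Lucas' theorem); the following sketch, added here for illustration, computes it directly:

```python
from math import comb

def odd_entries(k):
    """Number of entries in row k of Pascal's triangle not divisible by 2."""
    return sum(1 for j in range(k + 1) if comb(k, j) % 2 == 1)

# The sequence 1, 2, 2, 4, 2, 4, 4, 8, ... doubles in a self-similar
# pattern, which is the Sierpinski-gasket structure mentioned above.
first_rows = [odd_entries(k) for k in range(8)]
```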
7. Vera M Winitzky de Spinadel, La familia de números metálicos en Diseño, Seminario Nacional de Gráfica Digital, Sesión de Morfología y Matemática, Ediciones Facultad de Arquitectura, Diseño y Urbanismo, Universidad de Buenos Aires 2 (1997), 173-179.
Abstract
The objective of this work is to introduce a new family of quadratic irrational numbers. The family is called Metallic Numbers and its most conspicuous member is the Golden Number. Other members of the family are the Silver Number, the Bronze Number, the Copper Number, the Nickel Number, etc. All of them have interesting common mathematical properties, which are analysed in detail.
The main results obtained in this research work are:
1) the members of the family are closely related to the quasi-periodic behaviour in non-linear dynamics, thus being of great help in the search for universal paths that lead from "order" to "chaos";
2) the sequences based on the members of this family have many additive properties and are simultaneously geometric sequences, which is why they have been the basis of various systems of proportions in Design.
These two facts indicate the existence of a promising bridge that unites the most recent discoveries in technology with art, through the analysis of fundamental relationships between Mathematics and Design.
8. Vera M Winitzky de Spinadel, On Characterization of the Onset to Chaos, Chaos, Solitons and Fractals 8 (10) (1997), 1631-1643.
Abstract
The purpose of this paper is to introduce the family of Metallic Means, whose members are the well known Golden Mean and its relatives, the Silver Mean, Bronze Mean, Copper Mean, Nickel Mean, etc. Bringing out, from the mathematical point of view, their similarities as well as their differences, it is possible to find a universal behaviour on the roads to chaos, a major problem which is still open for further research.
9. Vera M Winitzky de Spinadel, The Metallic Means and Design, in Kim Williams (ed.), Nexus II: Architecture and Mathematics (Edizioni dell'Erba, 1998), 143-157.
Abstract
In this paper a new family of positive quadratic irrational numbers is introduced: the family of "Metallic Means". Its most well-known member is the Golden Mean. Other members of the family are the Silver Mean, the Bronze Mean, the Copper Mean, the Nickel Mean, etc. These Metallic Means share important mathematical properties that make them a basic key and constitute a bridge between mathematics and design.
Modern research in mathematics, usually conceived of as a highly structured system, has uncovered unsuspected channels that one may follow and try to interpret fractal geometries, so common in natural systems and human, animal and plant morphology, or to look for universal roads that indicate the onset to chaos, present in phenomena that go from DNA microscopic structure to the cosmic macroscopic galaxies. In this richness lie numerous contact points between mathematical tools and their application to creative design. The Metallic Means Family is one such tool and their many interesting properties will help us in the future to travel the difficult roads that connect one field of human knowledge with another, overcoming the isolation of specialties and resuming the global, Renaissance approach to problem resolution, more affine to the thinking of the twenty-first century.
10. Vera M Winitzky de Spinadel, The family of Metallic Means, Visual Mathematics 1 (3) (1999), 317-338.
Introduction
Let me introduce you to the Metallic Means Family (MMF). Their members have, among other common characteristics, the property of carrying the name of a metal: the very well known Golden Mean and its relatives, the Silver Mean, the Bronze Mean, the Copper Mean, the Nickel Mean and many others. The Golden Mean has been used by a great many ancient cultures as a basis of proportion to compose music, devise sculptures and paintings or construct temples and palaces. Some of the relatives of the Golden Mean have been used by physicists in recent research analysing the behaviour of non-linear dynamical systems in going from periodicity to quasi-periodicity. But in quite a different context, Jay Kappraff uses the Silver Mean to describe and explain the Roman system of proportions, referring to a mathematical property that, as we shall prove, is common to all the members of this remarkable family.
Conclusions
The members of the MMF are intrinsically related to the onset of quasi-periodic dynamics from periodic dynamics, to the transition from order to chaos, and to time irreversibility, as proved by Ilya Prigogine and M S El Naschie.
But, simultaneously, there are philosophical, natural and aesthetic considerations that have impelled the use of proportions based on some members of the MMF from the beginning of human history. They appeared in Egyptian sacred art as well as in India, China, Islam and many other ancient civilisations. They dominated Greek art and architecture, they extended to the magnificent monuments of the Gothic Middle Ages and they reappeared in all their splendour in the Renaissance period.
In many instances, the harmony and beauty of a pattern is the result of the influence of the Golden and Silver Means at a fundamental level.
Such a wide range of applications in which the members of the MMF appear opens many roads to new interdisciplinary investigations that will undoubtedly clarify the existing relations between Art and Technology, establishing a bridge between the rational scientific approach and aesthetic emotion. And perhaps this new perspective could help us give Technology, on which we depend increasingly for our survival, a more human aspect.
11. Vera M Winitzky de Spinadel, A new family of irrational numbers with curious properties, Humanistic Mathematics Network Journal 19 (1999), 33-37.
Abstract
The "metallic means family" (MMF) includes all the quadratic irrational numbers that are positive solutions of algebraic equations of the type
$x^{2} - nx - 1 = 0$
$x^{2} - x - n = 0$
where $n$ is a natural number. The most outstanding member of the MMF is the well-known "golden number." Then we have the silver number, the bronze number, the copper number, the nickel number and many others. The golden number has been widely used by a great number of very old cultures as a base of proportions to compose music, to create sculptures and paintings or to build temples and palaces. As for the many relatives of the golden number, a great part of them have been used in research analysing the behaviour of nonlinear dynamical systems as they proceed from a periodic regime to a chaotic one. Nevertheless, there exist many instances of application of these numbers in quite different fields of knowledge, like the one described by the mathematician Jay Kappraff in his study of the old Roman proportion system of construction. This system was based on the silver number, on account of a mathematical property which is not unique to it but is common to all members of the MMF, as we shall prove. Being irrational numbers, all the members of the MMF have to be approximated by ratios of integers in applications to different scientific fields. The analysis of the relation between the members of the MMF and their approximating ratios is one of the goals of this paper.
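For the first family of equations, $x^{2} - nx - 1 = 0$, the positive root and its integer-ratio approximations can be computed directly. This is an illustrative sketch of the standard facts, not code from the paper:

```python
from math import sqrt

def metallic_mean(n):
    """Positive root of x**2 - n*x - 1 = 0: n=1 gives the golden number,
    n=2 the silver number, n=3 the bronze number, and so on."""
    return (n + sqrt(n * n + 4)) / 2

def approximant(n, depth):
    """Truncate the purely periodic continued fraction [n; n, n, ...] of
    the metallic mean after `depth` terms; the truncations are ratios of
    integers, the kind of approximation discussed in the abstract."""
    x = float(n)
    for _ in range(depth):
        x = n + 1 / x
    return x
```

For $n = 1$ the approximants are ratios of consecutive Fibonacci numbers converging to the golden number $(1+\sqrt 5)/2$.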
12. Vera M Winitzky de Spinadel, First Interdisciplinary Conference of The International Society of the Arts, Mathematics and Architecture, ISAMA 99, Nexus Esecutivo (1999), 186-188.
Overview of the conference by Vera Spinadel
The first interdisciplinary conference of ISAMA 99 was held in San Sebastian, Spain, June 7-11, 1999. The conference directors were Nathaniel A. Friedman, SUNY (State University of New York), Albany, USA, and Javier Barrallo, Universidad del País Vasco, San Sebastián, Spain. The main purpose of this conference was to bring together persons interested in relating mathematics with the arts and architecture. This set included teachers, architects, artists, mathematicians, scientists and engineers. ISAMA focussed on the following fields related to mathematics: Architecture, Computer Design and Fabrication in the Arts and Architecture, Geometric Art and Origami, Music, Sculpture and Tesselations and Tilings. These fields included graphics interaction, CAD systems, algorithms, fractals and mathematical software like Maple, Derive, Mathematica, etc.
The International Scientific Committee was formed by twelve scientists, including the author of this report. Sixty-four papers were presented and published in a beautiful volume, edited by the Department of Applied Mathematics, School of Architecture, University of the Basque Country. There was a one day excursion to Gernika, where we could admire the monumental sculptures by Henry Moore and Eduardo Chillida, the world's foremost sculptor born in San Sebastián, whose work is inspired by architecture. In the afternoon, we made a tour of the Guggenheim Museum in Bilbao, designed by Frank Gehry, which is a crowning achievement of contemporary architecture. Fortunately, at this time there was also at the Guggenheim a magnificent exhibit of Chillida's work as well as the widely known architectural sculpture by Richard Serra, inspired by elliptical forms. Three other highlights at ISAMA 99 should be mentioned. They are the excursion on Thursday afternoon to the wonderful Chillida's private sculpture park Zabalaga, the world premier of "A Flame In Flight" for solo violinist by Robert Cogan, performed by Michael Appleman, and the granite sculpture "Oushi-Zoukei" carved by Keizo Ushio during the conference.
Without doubt, the goal of sharing information and discussing common interests to enrich interdisciplinary education, was achieved. Finally, we missed the presence of our friends from Yugoslavia, in particular, Slavik Jablan, who was present at the successful conference Mathematics & Design (M&D-98) held also in San Sebastian, Spain, June 1-4, 1998.
13. Vera M Winitzky de Spinadel, "Triangulature" in Andrea Palladio, Nexus Network Journal, Architecture and Mathematics 1 (1-2) (1999), 117-120.
Abstract
At the June 1998 workshop on the architecture of Andrea Palladio, the dimensions of the rooms were much remarked. Vera Spinadel convincingly argues that Palladio used precise mathematical relationships as a basis for selecting the numerical dimensions for the rooms in his villas. The integer dimensions are demonstrated to be approximants linked to continued fractions, and a particular way of deriving these integers through the use of a continued fraction expansion that approximates by excess is introduced.
14. Vera M Winitzky de Spinadel, Conference Report: 9th International Congress on Mathematical Education (ICME-9), Nexus Network Journal 2 (4) (2000), 214-215.
The Report
Flying from Japan back to my home in Buenos Aires, Argentina, I imagined the following dialogue with an unknown reporter:
Question: What did you do in Japan?
Answer: I attended the 9th International Congress on Mathematical Education (ICME-9), held in Tokyo/Makuhari from July 31 to August 6, 2000.
Question: What was your role as a participant?
Answer: I have been invited to be the Chief Organiser of a Topic Study Group TSG 20: Art and Mathematics Education.
Question: What was the main purpose of this group?
Answer: To gather contributions from different countries and cultures, so as to have a great display of how Art interacts with Mathematics Education.
Question: How was it organised?
Answer: There were two 90 minutes Sessions and a very nice exhibition that ran parallel to them.
Question: Tell me about the Sessions.
Answer: The first one was devoted to Visual Arts and Cultural History. The speakers were Javier Barrallo and Paquita Blanco (Spain), who presented an interesting mathematical simulation of Gothic cathedrals, then Liu Keming (China) talked about mathematical issues in Chinese ancient painting and drawing and finally, Muneki Shimono (Japan) showed his program of teaching the cultural history of Mathematics. The second session was devoted to Mathematical education and its relation with Art. Julianna Szendrei (Hungary) talked about Art and Mathematics in primary teachers' training, then María V Ponza (Argentina) showed a beautiful video about how to link Mathematics with dance, and finally I invented a fable related by a strange old man: the famous Golden Mean!
Question: And which were the conclusions of this group?
Answer: As the approach was quite multidisciplinary, we agreed that Art, in any of its many forms, has to be used as a main tool in teaching Mathematics to ANY student, not only to students engaged in artistic studies.
Question: What are your next plans?
Answer: Extending this globalizing idea to the research field, we are organizing the Third World Conference Mathematics & Design 2001 Mind/Ear/Eye/Hand/Digital at Deakin University, Geelong, Australia, 3-5 July, 2001. You are kindly invited to attend!
The reporter
Vera W de Spinadel is a Full Consultant Professor at the Faculty of Architecture, Design and Urbanism at the University of Buenos Aires, Argentina. She is the Director of the research centre Mathematics and Design, which comprises a team of interdisciplinary professionals working on the relations among Mathematics and Informatics with Design, where the word "design" is understood in a very broad sense (architectonic, graphic, industrial, textile, image and sound design, etc.). She organised the First and Second International Conferences on Mathematics and Design. She is the author of several books and has published many research papers in international journals. She has received several research and development grants as well as several research and technological production prizes.
15. Vera M Winitzky de Spinadel, Nonlinearity and the metallic means family, Nonlinear Analysis: Theory, Methods & Applications 47 (7) (2001), 4345-4353.
Abstract
In the analysis of many complex situations of the real world, we face some routes to chaotic behaviour. Considering globally these chaotic phenomena, we notice they have some properties in common:
1) they correspond to nonlinear dynamical systems;
2) they depend strongly on initial conditions;
3) they pass from a periodic motion to a quasi-periodic one and finally, in the state of "global chaos" to a frankly aperiodic motion.
The transition to chaos is produced going from commensurable ratios (like periods or winding numbers) to incommensurable ones. In consequence, the more irrational this ratio is, the nearer we are to chaos. The Golden Mean, being the most irrational of all irrational numbers, plays, together with the other means of the Metallic Means Family (recently introduced by the author), a key role in the roads to chaos.
16. Vera M Winitzky de Spinadel, Geometría fractal y geometría euclidiana, Revista Educación y Pedagogía 15 (35) (2002), 84-91.
Abstract
The elements of Euclidean geometry are points, lines, curves, etc., that is, ideal entities conceived by man to model natural phenomena and quantify them by measuring lengths, areas or volumes. But these entities can be so complex and irregular that measurement using the Euclidean metric becomes meaningless. However, there is a way to measure the degree of complexity and irregularity, evaluating how fast the length, surface or volume increases, if we measure it on smaller and smaller scales. This approach was adopted by Mandelbrot, a Polish mathematician, who in 1980 coined the term fractal to designate highly irregular but self-similar entities.
17. Vera M Winitzky de Spinadel, Symmetry Groups in Mathematics, Architecture and Art, Symmetry: Art and Science 2 (1-4) (2002), 385-403.
Abstract
The word "symmetry" has two meanings. A symmetric object is well proportioned but the concept is not restricted to concrete objects; the synonym "harmony" refers to its use in Acoustics and Music. The second meaning is that of the geometric bilateral symmetry, the symmetry so evident in superior animals, especially in men. From the mathematical point of view, a whole symmetry theory can be considered for applications. From this theory, we have chosen the "symmetry group of the square" to present interesting uses in Architecture and art.
Symmetry in Culture
The word symmetry comes from the Greek symmetria, meaning "the right proportion". From the historical point of view, the term symmetry has denoted many meanings, depending on the field of human knowledge where it was used. Notwithstanding, "symmetry is a unifying concept", as Magdolna and István Hargittai have shown in their beautiful and unique book "Symmetry". Indeed, the concept of symmetry can provide a connecting link among many different fields of endeavour, perhaps the best and most appropriate link to protect human studies from the increasing and separating compartmentalization within our scientific world.
Going back to the year 27 B.C., we find a monumental work: the ten books written by the Roman architect Vitruvius (probably Marcus Vitruvius Pollio) and dedicated to the Emperor Augustus. Architecture, says Vitruvius, depends on order, disposition, eurhythmy, propriety, symmetry and economy. These terms have completely different meanings today. For example, order, says Vitruvius, confers the appropriate measure on the elements of a building when they are considered separately, while symmetry gives concordance to the proportions of the different parts of the construction. This approach to the meaning of symmetry is quite similar to the mutually corresponding arrangement of the various parts of a human body around a central axis, producing a proportioned, balanced form.
Vitruvius dedicated much time to the study of the proportions of the human body in his considerations on symmetry. Symmetry, says Vitruvius, comes from proportion, that is, from a correspondence between the dimensions of the parts of a whole and of the whole with respect to a certain part selected as a model: the module. Such a selection of parts of the human body as a module, initiated probably by Vitruvius, was the very beginning of a historical ergonomic chain linking Vitruvius, Albrecht Dürer, Leonardo da Vinci and many, many other artists, including the contemporary architect Le Corbusier.
18. Vera M Winitzky de Spinadel, Number theory and Art, in Javier Barrallo, Nathaniel Friedman, Reza Sarhangi, Carlo Séquin, José Martínez and Juan A Maldonado (eds.), ISAMA-Bridges 2003. Proceedings of Meeting Alhambra, University of Granada, Granada, Spain (2003), 415-422.
Abstract
The Metallic Means Family (MMF), was introduced by the author, as a family of positive irrational quadratic numbers, with many mathematical properties that justify the appearance of its members in many different fields of knowledge, including Art. Its more conspicuous member is the Golden Mean. Other members of the MMF are the Silver Mean, the Bronze Mean, the Copper Mean, the Nickel Mean, etc.
19. Vera M Winitzky de Spinadel, La familia de Números Metálicos, Cuadernos del Cimbage, Instituto de Investigaciones en Estadística y Matemática Actuarial, Facultad de Ciencias Económicas, Universidad de Buenos Aires 6 (2004), 17-45.
Abstract
In this paper we introduce a new family of positive quadratic irrational numbers. It is called the Metallic Means family and its most renowned member is the Golden Mean. Among its relatives we may mention the Silver Mean, the Bronze Mean, the Copper Mean, the Nickel Mean, etc. The members of such a family enjoy common mathematical properties that are fundamental in the present research about the stability of macro- and micro- physical systems, going from the internal structure of DNA up to the astronomical galaxies. The most important results of this new investigation are the following:
1) The members of this family intervene in the determination of the quasi-periodic behaviour of non linear dynamical systems, being essential tools in the search of universal routes to chaos.
2) The numerical sequences based on the members of this family, satisfy many additive properties and simultaneously, are geometric sequences. This unique property has had as a consequence the use of some Metallic Means as a base for proportion systems.
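Property 2) can be illustrated numerically: the recurrence G(k+1) = n·G(k) + G(k−1) (the Fibonacci sequence for n = 1, the Pell sequence for n = 2) satisfies an additive law, yet its consecutive ratios converge to the n-th metallic mean, so the sequence becomes geometric in the limit. The following sketch is ours, not from the paper:

```python
def ratio_limit(n, terms=40):
    """Ratio of consecutive terms of the recurrence G(k+1) = n*G(k) + G(k-1),
    which converges to the n-th metallic mean."""
    a, b = 1, 1
    for _ in range(terms):
        a, b = b, n * b + a
    return b / a

# n = 1 gives the Fibonacci sequence and the Golden Mean (~1.618);
# n = 2 gives the Pell sequence and the Silver Mean (~2.414).
print(ratio_limit(1))
print(ratio_limit(2))
```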
20. Vera M Winitzky de Spinadel (with Hernán S Nottoli), Herramientas matemáticas para la arquitectura y el diseño (Ediciones Facultad de Arquitectura, Diseño y Urbanismo, Universidad de Buenos Aires. Cuadernos de Cátedra, 2005).
Preface
While mathematics works with abstract spaces and concepts, design operates on concrete spaces, those inhabited by man with his everyday objects.
This book deals with the points of contact between these two fields: it develops contents of the mathematical discipline by applying them to topics directly linked to the work of architects and designers. The eclecticism of the set responds to the chosen cut, and to the intention of emphasizing the links with professional practice.
The book begins with a chapter devoted to the geometry of forms. The following two ("Graphs" and "Theory of symmetry") allow us to know what basic guidelines have historically regulated the canons of beauty or the proportions of designed objects. Chapter 4, "Applications of Derivatives and Integrals," provides basic computational tools for analysing the behaviour of structures. Numbers 5 and 6, "Probability Theory" and "Statistics", provide notions of indisputable utility in the development process of a work or product. Chapter 7, finally, deals with the study of topography and includes information on measurement devices, planimetry and altimetry.
We trust that this book will be of interest to its specific audience, students of architecture and design, and we hope that it can provide them with valuable tools for their future professional development.
21. Vera M Winitzky de Spinadel, Introducción de los números irracionales por descomposición en fracciones continuas, Premisa 35 (2007), 37-45.
Summary
It was Pythagoras of Samos who discovered the incommensurability of the hypotenuse of a right triangle, thus introducing the first "irrational" number, that is, one that cannot be written as a ratio of two integers. Mathematically, the set of rational numbers together with the set of irrational numbers forms the set of real numbers, which has the property of being dense (that is, it does not have any holes). And the irrational numbers are defined by Dedekind cuts, stating that a number on the real axis divides it into two disjoint sets: the set of real numbers greater than it and the set of real numbers less than it. Therefore, if the chosen number is not an integer or a rational, then the irrational is defined. But this definition does not allow quantifying the degree of irrationality, that is, the degree of approximation of the rational approximants to the irrational number. This degree of irrationality turns out to be of importance in the experiences that are designed looking for the borders between a physical system that behaves periodically and its transformation into a chaotic system, where it is impossible to predict the behaviour since very similar initial conditions originate totally disparate results. To detect this degree of irrationality, we will use the decomposition into continued fractions.
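The decomposition into continued fractions mentioned in the summary can be computed in a few lines; this is a hedged sketch (the function name is ours) of the standard algorithm, repeatedly splitting off the integer part and inverting the remainder:

```python
import math

def continued_fraction(x, depth=8):
    """First `depth` partial quotients of the continued fraction of x."""
    quotients = []
    for _ in range(depth):
        a = math.floor(x)
        quotients.append(a)
        frac = x - a
        if frac < 1e-12:   # x was (numerically) rational; stop here
            break
        x = 1 / frac
    return quotients

# The Golden Mean expands as [1; 1, 1, 1, ...], the slowest-converging
# continued fraction of all, which is why it is called "the most irrational"
# number; the Silver Mean expands as [2; 2, 2, 2, ...].
print(continued_fraction((1 + math.sqrt(5)) / 2))
print(continued_fraction(1 + math.sqrt(2)))
```

The smaller the partial quotients, the worse the rational approximants, which is exactly the "degree of irrationality" the summary refers to.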
22. Vera M Winitzky de Spinadel, Más resultados sobre el Número de Plata, Forma y Simetría: Arte y Ciencia Congreso de Buenos Aires, 2007 (Universidad de Buenos Aires, 2007), 440-444.
Abstract
The Metal Number Family (FNM) is an infinite set of positive quadratic irrational numbers discovered by Dr Spinadel in 1994, whose common mathematical properties make them highly suitable for application in interdisciplinary design problems. The most preponderant member of the FNM is the well-known Golden Number = 1.618... The one that follows is the Silver Number $\sigma_{Ag} = 1 + \sqrt{2}$. Both numbers have a decomposition into pure periodic continued fractions and the first objective of this work was to relate the decomposition corresponding to the Silver Number with a succession of gnomonic rectangles. Based on it, we were able to build different forms corresponding to the silver spiral, similar to the traditional golden spiral. We not only built the flat versions of the silver spiral but also made a spherical rhumb line representation, animated with ad hoc software. Finally, we relate the Silver Number and its successive powers with architectural designs, ranging from the Roman ruins found in the city of Ostia to contemporary projects. As an interesting detail, we find a relationship between the Silver Number and the famous "Cordovan proportion".
23. Vera M Winitzky de Spinadel, Characterization of the onset to chaos in Economy, Proceedings of the Seventh All-Russian Conference on Financial and Actuarial Mathematics and Related Fields FAM'2008 2 (2008), 250-265.
Abstract
Basically, any process evolving with time is a dynamical system. Dynamical systems appear at every branch of Science and virtually at every aspect of life. Economy is an example of a dynamical system: the prices variations at the Stock Exchange is a simple illustration of the temporal evolution of this system. The main objective of the study and analysis of a dynamical system is the possibility of predicting the final result of a process.
Some dynamical systems are predictable and some are not. There are very simple dynamical systems depending only on one variable that show a highly non predictable behaviour, due to the presence of "chaos", that means they possess a sensitive dependence on the initial values.
The main aim of this paper is to investigate which are the factors that produce alternative roads to pass from order to chaos in economic problems.
24. Vera M Winitzky de Spinadel, Visualización y tecnología, Cuadernos del Cimbage, Instituto de Investigaciones en Estadística y Matemática Actuarial, Facultad de Ciencias Económicas, Universidad de Buenos Aires 10 (2008), 1-16.
Abstract
The objective of this work is to show how the visualization obtained by means of current mathematical/computer tools such as computerised graphics, constitutes an indispensable element in the application in the investigation of mathematical concepts that go from fractal structures, knots and the transition to chaos to the most general topological transformations.
25. Vera M Winitzky de Spinadel (with Antonia Redondo Buitrago), Towards van der Laan's Plastic Number in the Plane, Journal for Geometry and Graphics 13 (2) (2009), 163-175.
Abstract
In 1960, D. H. van der Laan, architect and member of the Benedictine order, introduced what he calls the "Plastic Number" $\psi$, as an ideal ratio for a geometric scale of spatial objects. It is the real solution of the cubic equation $x^{3} - x - 1 = 0$. This equation may be seen as an example of a family of trinomials $x^{n} - x - 1 = 0$, $n = 2, 3, \ldots$. Considering the real positive roots of these equations, we define these roots as members of a "Plastic Numbers Family" (PNF) comprising the well-known Golden Mean $\phi = 1.618...$, the most prominent member of the Metallic Means Family, and van der Laan's Number $\psi = 1.324...$ Similar to the occurrence of $\phi$ in art and nature, one can use $\psi$ for defining special 2D- and 3D-objects (rectangles, trapezoids, ellipses, ovals, ovoids, spirals and even 3D-boxes) and look for natural representations of this special number.
Van der Laan's Number $\psi$ and the Golden Number $\phi$ are the only "Morphic Numbers" in the sense of Aarts et al., who define such a number as the common solution of two somehow dual trinomials. We can show that these two numbers are also distinguished by a property of log-spirals. Van der Laan's Number $\psi$ cannot be constructed by using ruler and compass only. We present a planar graphic construction of a segment of length $\psi$ using dynamical graphics software, as well as a computer-independent solution obtained by intersecting a circle with an equilateral hyperbola. This allows us to deduce and analyse "Laan-Number figures" like $\psi$-rectangles with side length ratio $1:\psi$ and $\psi$-pentagons with sides of ratio $1:\psi:\psi^{2}:\psi^{3}:\psi^{4}$. For this $\psi$-pentagon we also find a "$\psi$-Pythagoras Theorem".
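As a numerical check on the abstract's claims, the following sketch (ours, not from the paper) finds van der Laan's number as the real root of x³ − x − 1 = 0 by bisection and verifies the two "morphic" identities ψ³ = ψ + 1 and ψ⁵ = ψ⁴ + 1:

```python
def plastic_number(tol=1e-12):
    """Real root of x**3 - x - 1 = 0 (van der Laan's number), by bisection."""
    f = lambda x: x ** 3 - x - 1
    lo, hi = 1.0, 2.0    # f(1) = -1 < 0 < 5 = f(2), so the root lies between
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

psi = plastic_number()
print(psi)   # ~1.32472
# The two identities that make psi a "morphic number":
print(abs(psi ** 3 - (psi + 1)) < 1e-9)        # True
print(abs(psi ** 5 - (psi ** 4 + 1)) < 1e-9)   # True
```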
26. Vera M Winitzky de Spinadel, Characterization of the onset to chaos in Economy, Cuadernos del Cimbage, Instituto de Investigaciones en Estadística y Matemática Actuarial, Facultad de Ciencias Económicas, Universidad de Buenos Aires 11 (2009), 25-38.
Abstract
Basically, any process evolving with time is a dynamical system. Dynamical systems appear at every branch of Science and virtually at every aspect of life. Economy is an example of a dynamical system: the prices variations at the Stock Exchange is a simple illustration of the temporal evolution of this system. The main objective of the study and analysis of a dynamical system is the possibility of predicting the final result of a process.
Some dynamical systems are predictable and some are not. There are very simple dynamical systems depending only on one variable that show a highly non predictable behaviour, due to the presence of "chaos", that means they possess a sensitive dependence on the initial values.
The main aim of this paper is to investigate which are the factors that produce alternative roads to pass from order to chaos in economic problems.
27. Vera M Winitzky de Spinadel, Use of the powers of the members of the Metallic Means Family in artistic Design, Journal of Applied Mathematics 4 (4) (2011), 333-340.
Abstract
The Metallic Means Family (MMF) was introduced by the author more than ten years ago. In the meantime, there have been published many applications of the members of this family to every type of design, particularly artistic design. The most preponderant of the MMF are the Golden Mean, the Silver Mean, the Bronze Mean, the Copper Mean, the Nickel Mean, etc. As is well known, the Golden Mean is linked to pentagonal geometry and the Silver Mean, to octagonal geometry. There has not been found yet direct relations of the rest of the members to any type of specified geometrical construction. But as all of them are irrational numbers, one should look for optimal rational approximations. In the case of the regular pentagon and the regular inscribed star in it, there appears not only the Golden Mean but also integer powers of it. Something similar happens with the regular octagon. All these positive irrational numbers have a continued fraction expansion which rational approximants are successively in excess and in defect. We are going to prove that the powers of the members of the MMF can be approximated by an "excess continued fraction expansion", which rational approximants converge always in excess and therefore, much more quickly than the normal one. This circumstance opens the new possibility of using in artistic design any of the powers of the members of the MMF.
28. Vera M Winitzky de Spinadel, Fractals and multifractals, in Oleg Vorobyev (ed.), Proceedings of the XV International EM'2011 Conference (Krasnoyarsk, 2011), 35-40.
Abstract
The word fractal comes from the Latin adjective "fractus", which means interrupted or irregular. As is well known, this name was introduced in the seventies by the Polish-born mathematician Benoit B Mandelbrot.
Fractals are sets with two characterizing properties:
1. They have a non-integer Hausdorff dimension which, in some cases, may be an integer.
2. They are self-similar, in the sense that they are invariant in the presence of "scale changes".
Fractals are often encountered in many growth processes like clouds formation, seashores, trees development, mountains, length of a coastline, etc. But there are certain nontrivial physical phenomena that possess a spectrum of scaling indices. Sets of this kind are called "multifractals" and are characterized by an entire spectrum of exponents, of which the Hausdorff dimension is only one.
The consideration of how a multifractal is decomposed in its self-similar components is very important in the search of universal scenarios of the roads to chaos.
29. Vera M Winitzky de Spinadel, In Memoriam: Slavik Jablan 1952-2015, Symmetry 7 (3) (2015), 1261-1274.
# chol
Cholesky factorization
## Syntax
```
R = chol(A)
L = chol(A,'lower')
R = chol(A,'upper')
[R,p] = chol(A)
[L,p] = chol(A,'lower')
[R,p] = chol(A,'upper')
[R,p,S] = chol(A)
[R,p,s] = chol(A,'vector')
[L,p,s] = chol(A,'lower','vector')
[R,p,s] = chol(A,'upper','vector')
```
## Description
`R = chol(A)` produces an upper triangular matrix `R` from the diagonal and upper triangle of matrix `A`, satisfying the equation `R'*R=A`. The `chol` function assumes that `A` is (complex Hermitian) symmetric. If it is not, `chol` uses the (complex conjugate) transpose of the upper triangle as the lower triangle. Matrix `A` must be positive definite.
`L = chol(A,'lower')` produces a lower triangular matrix `L` from the diagonal and lower triangle of matrix `A`, satisfying the equation `L*L'=A`. The `chol` function assumes that `A` is (complex Hermitian) symmetric. If it is not, `chol` uses the (complex conjugate) transpose of the lower triangle as the upper triangle. When `A` is sparse, this syntax of `chol` is typically faster. Matrix `A` must be positive definite.

`R = chol(A,'upper')` is the same as `R = chol(A)`.
`[R,p] = chol(A)` for positive definite `A`, produces an upper triangular matrix `R` from the diagonal and upper triangle of matrix `A`, satisfying the equation `R'*R=A` and `p` is zero. If `A` is not positive definite, then `p` is a positive integer and MATLAB® does not generate an error. When `A` is full, `R` is an upper triangular matrix of order `q=p-1` such that `R'*R=A(1:q,1:q)`. When `A` is sparse, `R` is an upper triangular matrix of size `q`-by-`n` so that the `L`-shaped region of the first `q` rows and first `q` columns of `R'*R` agree with those of `A`.
`[L,p] = chol(A,'lower')` for positive definite `A`, produces a lower triangular matrix `L` from the diagonal and lower triangle of matrix `A`, satisfying the equation `L*L'=A` and `p` is zero. If `A` is not positive definite, then `p` is a positive integer and MATLAB does not generate an error. When `A` is full, `L` is a lower triangular matrix of order `q=p-1` such that `L*L'=A(1:q,1:q)`. When `A` is sparse, `L` is a lower triangular matrix of size `q`-by-`n` so that the `L`-shaped region of the first `q` rows and first `q` columns of `L*L'` agree with those of `A`.

`[R,p] = chol(A,'upper')` is the same as `[R,p] = chol(A)`.
The following three-output syntaxes require sparse input `A`.
`[R,p,S] = chol(A)`, when `A` is sparse, returns a permutation matrix `S`. Note that the preordering `S` may differ from that obtained from `amd` since `chol` will slightly change the ordering for increased performance. When `p=0`, `R` is an upper triangular matrix such that `R'*R=S'*A*S`. When `p` is not zero, `R` is an upper triangular matrix of size `q`-by-`n` so that the `L`-shaped region of the first `q` rows and first `q` columns of `R'*R` agree with those of `S'*A*S`. The factor of `S'*A*S` tends to be sparser than the factor of `A`.
`[R,p,s] = chol(A,'vector')`, when `A` is sparse, returns the permutation information as a vector `s` such that `A(s,s)=R'*R`, when `p=0`. You can use the `'matrix'` option in place of `'vector'` to obtain the default behavior.
`[L,p,s] = chol(A,'lower','vector')`, when `A` is sparse, uses only the diagonal and the lower triangle of `A` and returns a lower triangular matrix `L` and a permutation vector `s` such that `A(s,s)=L*L'`, when `p=0`. As above, you can use the `'matrix'` option in place of `'vector'` to obtain a permutation matrix.

`[R,p,s] = chol(A,'upper','vector')` is the same as `[R,p,s] = chol(A,'vector')`.
### Note
Using `chol` is preferable to using `eig` for determining positive definiteness.
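For readers working outside MATLAB, the same idea carries over directly to NumPy, where `np.linalg.cholesky` raises `LinAlgError` exactly when the factorization fails; the sketch below (not part of the MATLAB documentation) mirrors checking `p == 0` in `[R,p] = chol(A)`:

```python
import numpy as np

def is_positive_definite(a):
    """Test positive definiteness by attempting a Cholesky factorization,
    mirroring the MATLAB idiom [R, p] = chol(A) with p == 0."""
    try:
        np.linalg.cholesky(a)
        return True
    except np.linalg.LinAlgError:
        return False

a = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite (eigenvalues 1, 3)
b = np.array([[1.0, 2.0], [2.0, 1.0]])   # indefinite (eigenvalues -1, 3)
print(is_positive_definite(a))   # True
print(is_positive_definite(b))   # False
```

As with `chol` versus `eig`, the attempted factorization is cheaper than computing the full eigenvalue spectrum.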
## Examples
### Example 1
The `gallery` function provides several symmetric, positive, definite matrices.
```
A = gallery('moler',5)

A =

     1    -1    -1    -1    -1
    -1     2     0     0     0
    -1     0     3     1     1
    -1     0     1     4     2
    -1     0     1     2     5

C = chol(A)

C =

     1    -1    -1    -1    -1
     0     1    -1    -1    -1
     0     0     1    -1    -1
     0     0     0     1    -1
     0     0     0     0     1

isequal(C'*C,A)

ans =

     1
```
For sparse input matrices, `chol` returns the Cholesky factor.
```
N = 100;
A = gallery('poisson', N);
```
`N` represents the number of grid points in one direction of a square `N`-by-`N` grid. Therefore, `A` is $N^2$-by-$N^2$.
```
L = chol(A, 'lower');
D = norm(A - L*L', 'fro');
```
The value of `D` will vary somewhat among different versions of MATLAB but will be on the order of $10^{-14}$.
### Example 2
The binomial coefficients arranged in a symmetric array create a positive definite matrix.
```
n = 5;
X = pascal(n)

X =

     1     1     1     1     1
     1     2     3     4     5
     1     3     6    10    15
     1     4    10    20    35
     1     5    15    35    70
```
This matrix is interesting because its Cholesky factor consists of the same coefficients, arranged in an upper triangular matrix.
```
R = chol(X)

R =

     1     1     1     1     1
     0     1     2     3     4
     0     0     1     3     6
     0     0     0     1     4
     0     0     0     0     1
```
Destroy the positive definiteness (and actually make the matrix singular) by subtracting 1 from the last element.
```
X(n,n) = X(n,n)-1

X =

     1     1     1     1     1
     1     2     3     4     5
     1     3     6    10    15
     1     4    10    20    35
     1     5    15    35    69
```
Now an attempt to find the Cholesky factorization of `X` fails.
```
chol(X)

Error using chol
Matrix must be positive definite.
```
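Example 2 can be reproduced in Python with NumPy (a sketch, not part of the MATLAB documentation). Note that `np.linalg.cholesky` returns the lower factor `L`, so MATLAB's upper factor `R` is its transpose:

```python
import numpy as np
from math import comb

n = 5
# Symmetric Pascal matrix: X[i, j] = C(i + j, i), as built by MATLAB's pascal(n)
X = np.array([[comb(i + j, i) for j in range(n)] for i in range(n)], dtype=float)

# np.linalg.cholesky returns the lower factor L with L @ L.T == X;
# MATLAB's chol(X) returns the upper factor, i.e. R = L.T.
L = np.linalg.cholesky(X)
R = L.T
print(R)   # upper triangular, rows of binomial coefficients

# Destroying positive definiteness makes the factorization fail, as in MATLAB:
X[n - 1, n - 1] -= 1
try:
    np.linalg.cholesky(X)
except np.linalg.LinAlgError:
    print("Matrix is not positive definite")
```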
My Math Forum (http://mymathforum.com/math-forums.php)
- Real Analysis (http://mymathforum.com/real-analysis/)
- - Real Analysis (http://mymathforum.com/real-analysis/344347-real-analysis.html)
shashank dwivedi June 3rd, 2018 10:53 PM
Real Analysis
Does a closed set consist of points other than limit points?
If yes, then what are some examples?
mathman June 4th, 2018 01:33 PM
Isolated points?
shashank dwivedi June 5th, 2018 06:58 AM
Yes points that are apart from limit points in the set. I didn't get what you meant by Isolated points.
cjem June 5th, 2018 07:23 AM
Every point \$x\$ of a set is a limit point of that set (even if it's an "isolated point"). Indeed, consider the constant sequence \$x, x, \dots\$.
Maschke June 5th, 2018 07:51 AM
Quote:
Originally Posted by shashank dwivedi (Post 595246) Yes points that are apart from limit points in the set. I didn't get what you meant by Isolated points.
The set \$[0,1] \cup \{2\}\$ is closed but \$2\$ is not a limit point. It's an isolated point. It has a neighborhood that contains no other point of the set besides itself.
cjem June 5th, 2018 11:14 AM
Ah yes, ignore my previous post. I mistakenly took \$x\$ being a limit point to mean "\$x\$ is the limit of a sequence of points in the set" (rather than "\$x\$ is the limit of an eventually non-constant sequence of points in the set", i.e. "every neighbourhood of \$x\$ contains a point of the set other than \$x\$").
Maschke June 5th, 2018 11:54 AM
Quote:
Originally Posted by cjem (Post 595281) Ah yes, ignore my previous post. I mistakenly took \$x\$ being a limit point to mean "\$x\$ is the limit of a sequence of points in the set" (rather than "\$x\$ is the limit of an eventually non-constant sequence of points in the set", i.e. "every neighbourhood of \$x\$ contains a point of the set other than \$x\$").
This is a tricky point (no pun) that confuses everyone.
An adherent point, also known as a point of closure, is a point whose every neighborhood contains some point of the set. So \$2\$ is a point of closure of \$[0,1] \cup \{2\}\$. But it's not a limit point, which requires that every neighborhood of the point contains some point of the set other than the point in question.
cjem June 5th, 2018 02:23 PM
Quote:
Originally Posted by Maschke (Post 595283) This is a tricky point (no pun) that confuses everyone. An adherent point, also known as a point of closure, is a point whose every neighborhood contains some point of the set. So \$2\$ is a point of closure of \$[0,1] \cup \{2\}\$. But it's not a limit point, which requires that every neighborhood of the point contains some point of the set other than the point in question.
Yeah, I'm used to the terms "accumulation point" and "point of closure" for the two concepts. I knew limit point referred to one of the two, but settled on the wrong one without bothering to check!
|
{}
|
### What does a linear function look like
Let's look at what linear functions have in common, to get a better idea of how they behave. A linear function is one whose graph is a straight line; in slope-intercept form it is written f(x) = mx + b, where the slope m is the constant rate of change and the intercept b is the value of the dependent variable when x = 0. A vertical line, however, is not the graph of a function. Linear functions are popular in economics; two variables in direct variation have a linear relationship, while variables in inverse variation do not.
## linear function examples
In mathematics, the term linear function refers to two distinct but related notions. In calculus, a linear function is a polynomial of degree one or zero; a constant function is also considered linear in this context, and its graph is a horizontal line. A linear equation looks like any other equation, but its variables carry no exponents (like the 2 in x^2), square roots, or other nonlinear operations. Sometimes a linear equation is written as a function, with f(x) instead of y, and the independent variable is plotted on the horizontal axis.
How do you determine a linear function from a table or a graph? From a table, equal steps in x produce equal steps in y; from a graph, the function is linear exactly when the graph is a line. For example, the graph of y = 4x is a line, so this is a linear function. A nonlinear function such as a quadratic instead graphs as a parabola, which looks like a hill (or valley) on the graph, while the absolute-value function y = |x| makes a "V" shape. One particular subfamily of linear functions is the constant functions.
## linear function characteristics
A linear relationship (or linear association) is a statistical term for a straight-line relation between two variables; mathematically it corresponds to a linear function. Linear functions have a constant rate of change: the slope m in f(x) = mx + b, which can be recovered from any two points on the line as m = (y2 - y1) / (x2 - x1), while b shifts the line vertically and equals the value at x = 0. Any function of the form f(x) = mx + b with m not equal to 0 has the set of all real numbers as both its domain and its range. The same line can also be written as y = m(x - h) + k; distributing m and collecting like terms brings it back to slope-intercept form. These properties make it easy to recognize a linear function from its equation, and when you graph one with a graphing utility, it certainly looks like a straight line.
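These properties are easy to check with a short snippet (Python here purely as an illustration; the helper names are made up):

```python
def make_linear(m, b):
    """Build the linear function f(x) = m*x + b."""
    return lambda x: m * x + b

def slope_from_points(x1, y1, x2, y2):
    """Slope of the line through two points (undefined for a vertical line)."""
    return (y2 - y1) / (x2 - x1)

f = make_linear(2.0, 1.0)                    # slope m = 2, y-intercept b = 1
print(f(0))                                  # 1.0: b is the value at x = 0
print(f(5) - f(4))                           # 2.0: constant rate of change = m
print(slope_from_points(1, f(1), 4, f(4)))   # 2.0: any two points recover m
```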
|
{}
|
# On syllabic trees
I have to typeset some syllabic trees. They look like this:
I’m currently using the qtree package to typeset them, but I can’t figure out how to put all the last letters on the same level — like the red letters in the image above.
Here’s a code excerpt:
\Tree[.$\sigma$ [.A \ipa{Z} ] [.R [.N \ipa{e} ] [.Co N ] ] ]
And the output:
So, /ʒ/, /e/ and /N/ should be on the same level. Is that possible? I have no problem migrating to TikZ — I only think that qtree is a more straightforward approach.
Just to make things clear, \ipa is a shorthand I created through \newcommand for \textipa, provided by Rei Fukui's fantastic tipa package.
Have a look at this thread, there are both tikz and tikz-qtree solutions there; this turns out to be very simple to do once you know the trick. Here's Alan Munn's final solution from there adapted for your particular question. (I didn't bother to try to get the IPA right.)
\documentclass{article}
\usepackage{tikz}
\usepackage{tikz-qtree}
\begin{document}
\begin{tikzpicture}[sibling
distance=10pt, level distance=20pt]
\Tree[.$\sigma$ [.A [Z ] ] [.R [.N e ] [.Co N ] ] ]
\end{tikzpicture}
\end{document}
This is actually much easier than Gonzalo’s answer. I’m going to accept this instead, although both are pretty good. As for the IPA, you just load the package tipa, and the command is \textipa — but I use \newcommand to shorten it to \ipa. I’ll add this piece of information in my question. – rberaldo Jun 27 '11 at 14:56
Apparently, it is not possible, according to the qtree documentation:
Line up the text from all the leaf nodes on one horizontal line?
As far as I can tell, qtree’s design is incompatible with this style of tree. I’d love it if there was an easy way to give qtree this capability, but if there is, I haven’t figured it out.
You could use tikz instead:
\documentclass{book}
\usepackage[T1]{fontenc}
\usepackage{tipa}
\newcommand{\ipa}[1]{\textipa{#1}}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
[level 1/.style={sibling distance=15mm},
level 2/.style={sibling distance=10mm},
every node/.style={text height=0.5em,text depth=0em},
level distance=8mm]
\node {$\sigma$}
child {node {A}
child { child {node {\ipa{Z}}}
}
}
child {node {R}
child {node {N} child {node {\ipa{e}}}}
child {node {Co} child {node {N}}}
};
\end{tikzpicture}
\end{document}
Thank you, that fits perfectly. I understand that there is a TikZ version of qtree, maybe it can be expanded to do this. – rberaldo Jun 26 '11 at 20:16
There's a simpler way to do this in tikz (see the link to a previous similar question, in my answer) -- add an extra child without adding a node at all. You don't have to worry about the formatting so much, that way. – kgr Jun 26 '11 at 20:33
@kgr: right. Answer updated. Thank you. – Gonzalo Medina Jun 26 '11 at 22:48
Try John Frampton's pst-asr package (requires pstricks), used for "typesetting autosegmental representations".
The example the OP provides can be seen (more or less) at pp. 26–27 in the documentation, and at the top of page 8 of the examples document, except that it is a more complex representation with the addition of timing slots. Highly recommended for the phonologist.
I just found another solution to this, involving the \vline command. Simply insert \vline as a parent node under the space you want to stretch (the example below is of a syllable structure diagram, but can be applied to anything), ie:
\Tree[.$\sigma$
[.Onset
[.{\vline height 3.3em} (C\textsubscript{1}) (C\textsubscript{2}) ]
]
[.Rhyme
[.(C\textsubscript{$\sigma$})
[.Nucleus V\textsubscript{1}{ }(V\textsubscript{D}) ]
[.Coda (C\textsubscript{3}) ] ]
]
]
You can specify the height and other properties of the vline to fit your particular tree. It seems this might be the simplest way, though perhaps not as elegant as the other solutions. Keep in mind that the \vline command and properties should be within curly braces.
To get that syllabic structure using pst-asr, the code is:
\newtier{tsy}
\asr[tsy=(sy) 3ex ($\sigma$)] |
\@(0,ph){ʒ}\-(0,ts)
\@(1,ph){e}\-(1,ts)
\@(2,ph){N}\-(2,ts)
\@(0,ts){A}\-(1,tsy)
\@(1,ts){N}\-(1.5,sy)
\@(2,ts){Co}\-(1.5,sy)
\@(1.5,sy){R}\-(1,tsy)
\@(1,tsy){$\sigma$}
\endasr
You have to add this in you preamble:
\usepackage{pstricks,pst-xkey,pst-asr,graphicx}
\newpsstyle{bigsyls}{extragap=.6ex,unitxgap=true,xgap=3.5ex,ts=0pt ($\times$),sy=5.5ex ($\sigma$) .7ex,ph=-4.5ex (pf)}
\newpsstyle{dashed}{linestyle=dashed,dash=3pt 2pt}
\newpsstyle{crossing}{xed=true,xedtype=\xedcirc,style=dashed}
\newpsstyle{dotted}{linestyle=dotted,linewidth=1.2pt,dotsep=1.6pt}
\def\feat#1{$\rm [#1]$}
\def\crossing{\pscircle[linestyle=solid,linewidth=.5pt](0,0){.7ex}}%
\newdimen\dimpuba
\newdimen\dimpubb
\def\TO{\quad$\rightarrow$\quad}
\tiershortcuts
The result is:
|
{}
|
# If z and x are integers is (z^3 + x)(z^4 + x) even?
Math Expert
Joined: 02 Sep 2009
Posts: 65765
09 Dec 2019, 00:26
If z and x are integers is $$(z^3 + x)(z^4 + x)$$ even?
(1) $$x^3 + x^2$$ is even
(2) $$\frac{z^2 + x^2}{2}$$ is odd
GMAT Club Legend
Joined: 18 Aug 2017
Posts: 6504
Location: India
Concentration: Sustainability, Marketing
GPA: 4
WE: Marketing (Energy and Utilities)
09 Dec 2019, 00:46
To determine: is $$(z^3 + x)(z^4 + x)$$ even?
This happens when z and x have the same parity (both odd or both even).
#1
$$x^3 + x^2$$ is even
The value of z is not known; insufficient.
#2
$$\frac{z^2 + x^2}{2}$$ is odd
z^2 + x^2 must be even, i.e., z and x are both odd or both even,
so
$$(z^3 + x)(z^4 + x)$$ is even; yes.
Sufficient.
IMO B
Bunuel wrote:
If z and x are integers is $$(z^3 + x)(z^4 + x)$$ even?
(1) $$x^3 + x^2$$ is even
(2) $$\frac{z^2 + x^2}{2}$$ is odd
Intern
Joined: 12 Jul 2020
Posts: 1
12 Jul 2020, 21:02
I believe Archit31110 is wrong.
Question states
If z and x are integers is $$(z^3+x)(z^4+x)$$ even ?
Analysing the question stem
The expression is even if and only if at least one of its factors is even
That is $$(z^3+x)$$ is even (case I) or $$(z^4+x)$$ is even (case II).
Let's look at case I :
It is even if z^3 is even and x is even, OR if z^3 is odd and x is odd.
Let's look at case II :
It is even when x is even (as z^4 will always be even no matter what the z is)
So we can make a table with x/z conditions that will give us an answer to the question stem:
| z | x | (z^3+x)(z^4+x) |
| --- | --- | --- |
| even | odd | odd |
| even | even | even |
| odd | even | even |
| odd | odd | even |
So, looking at the statements, we should be targeting the parity of X and Z. Note that if we can determine that X is even, (z^3+x)(z^4+x) is automatically going to be even.
Looking at statements
Statement 1 :
x^3 + x^2 is even = x^3 is even (because x^2 is always going to be even).
This leads to x is even, which leads to (z^3+x)(z^4+x) being even.
Statement 1 is sufficient.
Statement 2 :
$$\frac{(x^2+z^2)}{2}$$ is odd => $$(x^2+z^2)$$ is even (and ends with 0, 2 or 6, so that when divided by 2, the result is odd).
Since $$(x^2+z^2)$$ is even, then both x and z either have to be both even or both odd in order to have an even sum of their respective squares.
Remember that the question stem gives us a "no" (i.e. the original expression is odd) only when x is odd and z even.
With statement 2, that is not possible, as it would lead to x^2 + z^2 = odd.
Therefore, statement 2 is also sufficient.
Senior Manager
Status: Student
Joined: 14 Jul 2019
Posts: 438
Location: United States
Concentration: Accounting, Finance
GPA: 3.9
WE: Education (Accounting)
12 Jul 2020, 21:15
Bunuel wrote:
If z and x are integers is $$(z^3 + x)(z^4 + x)$$ even?
(1) $$x^3 + x^2$$ is even
(2) $$\frac{z^2 + x^2}{2}$$ is odd
1) x^2 (x + 1) is even; x can be either even or odd. Not sufficient.
2) z^2 + x^2 = 2 * (an odd integer), so z and x have the same parity. So (z^3 + x)(z^4 + x) is even. Sufficient.
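A brute-force parity check over a small window of integers (a numerical sanity check only; the range -10..10 is arbitrary):

```python
def product_is_even(z, x):
    return ((z**3 + x) * (z**4 + x)) % 2 == 0

ints = range(-10, 11)

# Statement 1: x^3 + x^2 = x^2 * (x + 1) is even for EVERY integer x
# (one of x, x + 1 is even), so it puts no constraint on x or z.
print(all((x**3 + x**2) % 2 == 0 for x in ints))     # True

# Statement 2: (z^2 + x^2)/2 odd  <=>  z^2 + x^2 = 2 (mod 4), which happens
# only when z and x are both odd, i.e. they share the same parity.
pairs = [(z, x) for z in ints for x in ints if (z**2 + x**2) % 4 == 2]
print(all(product_is_even(z, x) for z, x in pairs))  # True: sufficient
```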
|
{}
|
Timezone: »
Poster
Learning without the Phase: Regularized PhaseMax Achieves Optimal Sample Complexity
Fariborz Salehi · Ehsan Abbasi · Babak Hassibi
Thu Dec 06 02:00 PM -- 04:00 PM (PST) @ Room 210 #64
The problem of estimating an unknown signal, $\mathbf x_0\in \mathbb R^n$, from a vector $\mathbf y\in \mathbb R^m$ consisting of $m$ magnitude-only measurements of the form $y_i=|\mathbf a_i\mathbf x_0|$, where the $\mathbf a_i$'s are the rows of a known measurement matrix $\mathbf A$, is a classical problem known as phase retrieval. This problem arises when measuring the phase is costly or altogether infeasible. In many applications in machine learning, signal processing, statistics, etc., the underlying signal has certain structure (sparse, low-rank, finite alphabet, etc.), opening up the possibility of recovering $\mathbf x_0$ from a number of measurements smaller than the ambient dimension, i.e., \$m
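The measurement model itself (not the PhaseMax recovery algorithm) is easy to simulate; a minimal sketch with random Gaussian data, dimensions chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 12                       # ambient dimension and number of measurements
x0 = rng.standard_normal(n)        # the unknown signal x_0
A = rng.standard_normal((m, n))    # known measurement matrix with rows a_i
y = np.abs(A @ x0)                 # magnitude-only measurements y_i = |a_i x_0|

# The "phase" (here just the sign) is lost: x0 and -x0 give identical
# measurements, so recovery is only possible up to a global sign.
print(np.allclose(np.abs(A @ -x0), y))   # True
```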
|
{}
|
", "But in a manuscript probably written by his son Sadr al-Din in 1298, based on Nasir al-Din's later thoughts on the subject, there is a new argument based on another hypothesis, also equivalent to Euclid's, [...] The importance of this latter work is that it was published in Rome in 1594 and was studied by European geometers. So circles are all straight lines on the sphere, so,Through a given point, only one line can be drawn parallel … Several modern authors still consider non-Euclidean geometry and hyperbolic geometry synonyms. Elliptic Geometry: There are no parallel lines in this geometry, as any two lines intersect at a single point, Hyperbolic Geometry : A geometry of curved spaces. Elliptic geometry, like hyperbollic geometry, violates Euclid’s parallel postulate, which can be interpreted as asserting that there is exactly one line parallel to L passing through p. In elliptic geometry, there are no parallel lines at all. It was his prime example of synthetic a priori knowledge; not derived from the senses nor deduced through logic — our knowledge of space was a truth that we were born with. t h�bbdb^ 3. It was independent of the Euclidean postulate V and easy to prove. In elliptic geometry, parallel lines do not exist. t Many attempted to find a proof by contradiction, including Ibn al-Haytham (Alhazen, 11th century),[1] Omar Khayyám (12th century), Nasīr al-Dīn al-Tūsī (13th century), and Giovanni Girolamo Saccheri (18th century). hV[O�8�+��a��E:B���\ж�] �J(�Җ6������q�B�) �,�_fb�x������2��� �%8 ֢P�ڀ�(@! I want to discuss these geodesic lines for surfaces of a sphere, elliptic space and hyperbolic space. ... T or F there are no parallel or perpendicular lines in elliptic geometry. "��/��. 
{\displaystyle z=x+y\epsilon ,\quad \epsilon ^{2}=0,} Whereas, Euclidean geometry and hyperbolic geometry are neutral geometries with the addition of a parallel postulate, elliptic geometry cannot be a neutral geometry due to Theorem 2.14 , which stated that parallel lines exist in a neutral geometry. Schweikart's nephew Franz Taurinus did publish important results of hyperbolic trigonometry in two papers in 1825 and 1826, yet while admitting the internal consistency of hyperbolic geometry, he still believed in the special role of Euclidean geometry.[10]. The non-Euclidean planar algebras support kinematic geometries in the plane. The summit angles of a Saccheri quadrilateral are acute angles. For instance, {z | z z* = 1} is the unit circle. (The reverse implication follows from the horosphere model of Euclidean geometry.). Hilbert uses the Playfair axiom form, while Birkhoff, for instance, uses the axiom that says that, "There exists a pair of similar but not congruent triangles." However, two … By their works on the theory of parallel lines Arab mathematicians directly influenced the relevant investigations of their European counterparts. , In this attempt to prove Euclidean geometry he instead unintentionally discovered a new viable geometry, but did not realize it. His claim seems to have been based on Euclidean presuppositions, because no logical contradiction was present. Circa 1813, Carl Friedrich Gauss and independently around 1818, the German professor of law Ferdinand Karl Schweikart[9] had the germinal ideas of non-Euclidean geometry worked out, but neither published any results. + 2. In this geometry ′ [7], At this time it was widely believed that the universe worked according to the principles of Euclidean geometry. x v Then m and n intersect in a point on that side of l." These two versions are equivalent; though Playfair's may be easier to conceive, Euclid's is often useful for proofs. 
Either there will exist more than one line through the point parallel to the given line or there will exist no lines through the point parallel to the given line. ) z {\displaystyle x^{\prime }=x+vt,\quad t^{\prime }=t} "[4][5] His work was published in Rome in 1594 and was studied by European geometers, including Saccheri[4] who criticised this work as well as that of Wallis.[6]. 78 0 obj <>/Filter/FlateDecode/ID[<4E7217657B54B0ACA63BC91A814E3A3E><37383E59F5B01B4BBE30945D01C465D9>]/Index[14 93]/Info 13 0 R/Length 206/Prev 108780/Root 15 0 R/Size 107/Type/XRef/W[1 3 1]>>stream We need these statements to determine the nature of our geometry. “given a line L, and a point P not on that line, there is exactly one line through P which is parallel to L”. In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. However, in elliptic geometry there are no parallel lines because all lines eventually intersect. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel. Incompleteness And there’s elliptic geometry, which contains no parallel lines at all. Boris A. Rosenfeld & Adolf P. Youschkevitch (1996), "Geometry", p. 467, in Roshdi Rashed & Régis Morelon (1996). to a given line." He quickly eliminated the possibility that the fourth angle is obtuse, as had Saccheri and Khayyam, and then proceeded to prove many theorems under the assumption of an acute angle. So circles on the sphere are straight lines . Indeed, they each arise in polar decomposition of a complex number z.[28]. t In any of these systems, removal of the one axiom equivalent to the parallel postulate, in whatever form it takes, and leaving all the other axioms intact, produces absolute geometry. To obtain a non-Euclidean geometry, the parallel postulate (or its equivalent) must be replaced by its negation. 
In fact, the perpendiculars on one side all intersect at the absolute pole of the given line. In Elliptic geometry, examples of elliptic lines are the latitudes that run parallel to the equator Select one: O True O False Get more help from Chegg Get 1:1 help now from expert Geometry tutors ", "In Pseudo-Tusi's Exposition of Euclid, [...] another statement is used instead of a postulate. h�bf������3�A��2,@��aok������;:*::�bH��L�DJDh{����z�> �K�K/��W���!�сY���� �P�C�>����%��Dp��upa8���ɀe���EG�f�L�?8��82�3�1}a�� � �1,���@��N fg\��g�0 ��0� These early attempts did, however, provide some early properties of the hyperbolic and elliptic geometries. He constructed an infinite family of non-Euclidean geometries by giving a formula for a family of Riemannian metrics on the unit ball in Euclidean space. Elliptic geometry The simplest model for elliptic geometry is a sphere, where lines are "great circles" (such as the equator or the meridians on a globe), and points opposite each other (called antipodal points) are identified (considered to be the same). are equivalent to a shear mapping in linear algebra: With dual numbers the mapping is Elliptic geometry is a non-Euclidean geometry, in which, given a line L and a point p outside L, there exists no line parallel to L passing through p.Elliptic geometry, like hyperbolic geometry, violates Euclid's parallel postulate, which can be interpreted as asserting that there is exactly one line parallel … , polygons of differing areas can be similar ; in elliptic geometry is sometimes with! Distortion wherein the straight lines of the non-Euclidean geometries had a ripple effect which far. Like worldline and proper time into mathematical physics the Elements and small are straight lines, segments... And etc because any two of them intersect in at least two lines parallel to the principles Euclidean! & Régis Morelon ( 1996 ) interpret the first to apply to dimensions! 
Since any two lines parallel to a common plane, but this statement says that there must be an number..., all lines eventually intersect } +x^ { \prime } \epsilon = ( 1+v\epsilon ) ( t+x\epsilon =t+... One geometry from others have historically received the most attention better call them geodesic to! Is an example of a curvature tensor, Riemann allowed non-Euclidean geometry are represented by Euclidean curves that visually.. Their European counterparts ( elliptic geometry. ) in various ways not upon. At an ordinary point lines are postulated, it is easily shown that there must be an infinite number such... The shortest distance between two points z z * = 1 } is the nature of lines. In 1819 by Gauss 's former student Gerling of mathematics and science are mathematicians... Of any triangle is greater than 180° to produce [ extend ] a finite straight line a! Most attention an infinite number of such lines is one parallel line as a reference there is one line! Small are straight lines of the real projective plane mathematicians have devised simpler forms of this unalterably true was! As soon as Euclid wrote Elements any two lines will always cross each other at some point of lines! = 1 } is the unit circle structure is now called the hyperboloid of... For geometry. ) many propositions from the Elements the standard models of hyperbolic geometry and hyperbolic.. How elliptic geometry there are no such things as parallel lines since any of... 470, in elliptic geometry because any two lines must intersect ` in Pseudo-Tusi 's Exposition of,. Surface of a sphere, you get elliptic geometry, the traditional non-Euclidean geometries naturally have many properties... This quantity is the unit hyperbola a vertex of a sphere, space! The nature of our geometry are there parallel lines in elliptic geometry ) shortest distance between the metric geometries is the unit circle it widely... The line historically received the most attention each other or intersect and a! 
) is easy to visualise, but hyperbolic geometry. ) axioms are basic statements about lines, only artifice. Creation of non-Euclidean geometry is an example of a postulate a straight line is a split-complex and... 'S geometry to spaces of negative curvature while only two lines parallel to the given line the geometry! Widely believed that his results demonstrated the impossibility of hyperbolic geometry is an example a...
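The claim that any two lines of elliptic geometry meet can be checked directly in the spherical model, where a "line" is a great circle cut out by a plane through the origin (a small sketch; the two normals below are arbitrary choices):

```python
import numpy as np

# Two distinct great circles on the unit sphere, each cut out by a plane
# through the origin with the given unit normal.
n1 = np.array([0.0, 0.0, 1.0])                  # the equator
n2 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # a tilted great circle

# The two planes meet in the line spanned by n1 x n2, which pierces the
# sphere at a pair of antipodal points: the circles always intersect there.
p = np.cross(n1, n2)
p = p / np.linalg.norm(p)

print(np.isclose(np.linalg.norm(p), 1.0))   # True: p lies on the sphere
print(bool(np.isclose(n1 @ p, 0.0) and np.isclose(n2 @ p, 0.0)))  # True: on both circles
```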
|
{}
|
# Robust Estimation of Standard Errors, Confidence Intervals and p-values
The tab_model() function also allows the computation of standard errors, confidence intervals and p-values based on robust covariance matrix estimation from model parameters. Robust estimation is based on the packages sandwich and clubSandwich, so all models supported by either of these packages work with tab_model().
## Classical Regression Models
### Robust Covariance Matrix Estimation from Model Parameters
There are three arguments that allow for choosing different methods and options of robust estimation: vcov.fun, vcov.type and vcov.args. Let us start with a simple example, which uses a heteroskedasticity-consistent covariance matrix estimation with estimation-type “HC3” (i.e. sandwich::vcovHC(type = "HC3") is called):
data(iris)
model <- lm(Petal.Length ~ Sepal.Length * Species + Sepal.Width, data = iris)
# model parameters, where SE, CI and p-values are based on robust estimation
tab_model(model, vcov.fun = "HC", show.se = TRUE)
Petal Length

| Predictors | Estimates | std. Error | CI | p |
| --- | --- | --- | --- | --- |
| (Intercept) | 0.87 | 0.45 | -0.03 – 1.76 | 0.059 |
| Sepal.Length | 0.04 | 0.12 | -0.19 – 0.28 | 0.711 |
| Species [versicolor] | -0.78 | 0.69 | -2.15 – 0.59 | 0.265 |
| Species [virginica] | -0.41 | 0.63 | -1.66 – 0.83 | 0.513 |
| Sepal.Width | 0.11 | 0.08 | -0.05 – 0.27 | 0.190 |
| Sepal.Length * Species [versicolor] | 0.61 | 0.13 | 0.35 – 0.87 | <0.001 |
| Sepal.Length * Species [virginica] | 0.68 | 0.12 | 0.45 – 0.91 | <0.001 |

Observations: 150
R2 / R2 adjusted: 0.979 / 0.978
# compare standard errors to result from sandwich-package
unname(sqrt(diag(sandwich::vcovHC(model))))
#> [1] 0.45382603 0.11884474 0.69296611 0.63031982 0.08318559 0.13045539 0.11841325
### Cluster-Robust Covariance Matrix Estimation (sandwich)
If another covariance matrix estimation is required, use the vcov.fun-argument. This argument needs the suffix for the related vcov*()-functions as value, i.e. vcov.fun = "CL" would call sandwich::vcovCL(), or vcov.fun = "HAC" would call sandwich::vcovHAC().
The specific estimation type can be changed with vcov.type. E.g., sandwich::vcovCL() accepts estimation types HC0 to HC3. In the next example, we use a clustered covariance matrix estimation with HC1-estimation type.
# change estimation-type
tab_model(model, vcov.fun = "CL", vcov.type = "HC1", show.se = TRUE)
Petal Length

| Predictors | Estimates | std. Error | CI | p |
| --- | --- | --- | --- | --- |
| (Intercept) | 0.87 | 0.42 | 0.03 – 1.70 | 0.042 |
| Sepal.Length | 0.04 | 0.11 | -0.18 – 0.26 | 0.692 |
| Species [versicolor] | -0.78 | 0.65 | -2.07 – 0.51 | 0.237 |
| Species [virginica] | -0.41 | 0.59 | -1.57 – 0.75 | 0.483 |
| Sepal.Width | 0.11 | 0.08 | -0.05 – 0.27 | 0.170 |
| Sepal.Length * Species [versicolor] | 0.61 | 0.12 | 0.37 – 0.85 | <0.001 |
| Sepal.Length * Species [virginica] | 0.68 | 0.11 | 0.46 – 0.90 | <0.001 |

Observations: 150
R2 / R2 adjusted: 0.979 / 0.978
# compare standard errors to result from sandwich-package
unname(sqrt(diag(sandwich::vcovCL(model))))
#> [1] 0.42197635 0.11148130 0.65274212 0.58720711 0.07934029 0.12251570 0.11058144
Usually, clustered covariance matrix estimation is used when there is a cluster-structure in the data. The variable indicating the cluster-structure can be defined in sandwich::vcovCL() with the cluster-argument. In tab_model(), additional arguments that should be passed down to functions from the sandwich package can be specified in vcov.args:
iris$cluster <- factor(rep(LETTERS[1:8], length.out = nrow(iris)))
# change estimation-type, defining additional arguments
tab_model(
model,
vcov.fun = "CL",
vcov.type = "HC1",
vcov.args = list(cluster = iris$cluster),
show.se = TRUE
)
Petal Length

| Predictors | Estimates | std. Error | CI | p |
| --- | --- | --- | --- | --- |
| (Intercept) | 0.87 | 0.34 | 0.20 – 1.53 | 0.011 |
| Sepal.Length | 0.04 | 0.07 | -0.10 – 0.19 | 0.540 |
| Species [versicolor] | -0.78 | 0.52 | -1.80 – 0.25 | 0.137 |
| Species [virginica] | -0.41 | 0.26 | -0.94 – 0.11 | 0.120 |
| Sepal.Width | 0.11 | 0.07 | -0.03 – 0.25 | 0.131 |
| Sepal.Length * Species [versicolor] | 0.61 | 0.10 | 0.42 – 0.80 | <0.001 |
| Sepal.Length * Species [virginica] | 0.68 | 0.05 | 0.58 – 0.78 | <0.001 |

Observations: 150
R2 / R2 adjusted: 0.979 / 0.978
# compare standard errors to result from sandwich-package
unname(sqrt(diag(sandwich::vcovCL(model, cluster = iris$cluster))))
#> [1] 0.33714287 0.07192334 0.51893777 0.26415406 0.07201145 0.09661348 0.05123446
### Cluster-Robust Covariance Matrix Estimation (clubSandwich)
Cluster-robust estimation of the variance-covariance matrix can also be achieved using clubSandwich::vcovCR(). Thus, when vcov.fun = "CR", the related function from the clubSandwich package is called. Note that this function requires the specification of the cluster-argument.
# create fake-cluster-variable, to demonstrate cluster robust standard errors
iris$cluster <- factor(rep(LETTERS[1:8], length.out = nrow(iris)))
# cluster-robust estimation
tab_model(
model,
vcov.fun = "CR",
vcov.type = "CR1",
vcov.args = list(cluster = iris$cluster),
show.se = TRUE
)
# compare standard errors to result from clubSandwich-package
unname(sqrt(diag(clubSandwich::vcovCR(model, type = "CR1", cluster = iris$cluster))))
### Robust Covariance Matrix Estimation on Standardized Model Parameters
Finally, robust estimation can be combined with standardization. However, robust covariance matrix estimation only works for show.std = "std".
# model parameters, robust estimation on standardized model
tab_model(
model,
show.std = "std",
vcov.fun = "HC"
)
Petal Length

| Predictors | Estimates | std. Beta | CI | standardized CI | p | std. p |
| --- | --- | --- | --- | --- | --- | --- |
| (Intercept) | 0.87 | -1.30 | -0.03 – 1.76 | -1.44 – -1.16 | 0.059 | <0.001 |
| Sepal.Length | 0.04 | 0.02 | -0.19 – 0.28 | -0.09 – 0.13 | 0.711 | 0.711 |
| Species [versicolor] | -0.78 | 1.57 | -2.15 – 0.59 | 1.40 – 1.74 | 0.265 | <0.001 |
| Species [virginica] | -0.41 | 2.02 | -1.66 – 0.83 | 1.84 – 2.20 | 0.513 | <0.001 |
| Sepal.Width | 0.11 | 0.03 | -0.05 – 0.27 | -0.01 – 0.07 | 0.190 | 0.190 |
| Sepal.Length * Species [versicolor] | 0.61 | 0.28 | 0.35 – 0.87 | 0.16 – 0.41 | <0.001 | <0.001 |
| Sepal.Length * Species [virginica] | 0.68 | 0.32 | 0.45 – 0.91 | 0.21 – 0.43 | <0.001 | <0.001 |

Observations: 150
R2 / R2 adjusted: 0.979 / 0.978
## Mixed Models
### Robust Covariance Matrix Estimation for Mixed Models
For linear mixed models, that by definition have a clustered (“hierarchical” or multilevel) structure in the data, it is also possible to estimate a cluster-robust covariance matrix. This is possible due to the clubSandwich package, thus we need to define the same arguments as in the above example.
library(lme4)
data(iris)
set.seed(1234)
iris$grp <- as.factor(sample(1:3, nrow(iris), replace = TRUE))
# fit example model
model <- lme4::lmer(
Sepal.Length ~ Species * Sepal.Width + Petal.Length + (1 | grp),
data = iris
)
# normal model parameters, like from 'summary()'
tab_model(model)
# model parameters, cluster robust estimation for mixed models
tab_model(
model,
vcov.fun = "CR",
vcov.type = "CR1",
vcov.args = list(cluster = iris$grp)
)
### Robust Covariance Matrix Estimation on Standardized Mixed Model Parameters
Again, robust estimation can be combined with standardization for linear mixed models as well, which in such cases also only works for show.std = "std".
# model parameters, cluster robust estimation on standardized mixed model
tab_model(
model,
show.std = "std",
vcov.fun = "CR",
vcov.type = "CR1",
vcov.args = list(cluster = iris\$grp)
)
|
{}
|
## Introduction
Vascular pathologies are a major cause of death in Western countries1,2. Malignant tumors develop by de novo stimulation of vascular development3, which makes neoangiogenesis an interesting target for therapy. Understanding vascular development is therefore very important. Since the pioneering work of Thoma4 (reviewed by Hugues5), the chicken embryo has been used to investigate fundamental processes in vascular development. The formation of a functional vasculature is a highly dynamic phenomenon5,6,7,8 which has been amply documented9,10,11,12,13,14,15,16,17. It takes a few days in model systems such as the chicken yolk-sac or chorioallantoïc membrane (CAM). It is a multiscale process with rapid (on the order of hours) developmental changes from the capillary level up to the main vessels18. Hemodynamic forces feed back to remodel the vessels5,19, giving rise to self-organization principles such as Murray's laws20,21,22 or Zamir's law23. These laws are based on physical principles and are therefore expected to be independent of a specific system or animal species24. Recently, the effects of the pulsatile character of the flow due to the heartbeat25 or of the slow viscoelastic creep of the tissue26 have been taken into account.
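Murray's law, one of the self-organization principles cited above, states that at a bifurcation the cube of the parent vessel radius equals the sum of the cubes of the daughter radii. A minimal numeric check (illustrative radii only):

```python
def murray_deviation(r0, r1, r2):
    """Relative deviation from Murray's law, r0^3 = r1^3 + r2^3, at one bifurcation."""
    return abs(r0**3 - (r1**3 + r2**3)) / r0**3

# A symmetric bifurcation obeying the law exactly: r1 = r2 = r0 / 2^(1/3).
r0 = 1.0
r1 = r2 = r0 / 2.0 ** (1.0 / 3.0)
print(murray_deviation(r0, r1, r2) < 1e-9)      # True

# An untapered branching with equal radii everywhere violates it badly.
print(murray_deviation(1.0, 1.0, 1.0))          # 1.0
```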
The chicken is widely used because the extraembryonic organs can be accessed through a window in the shell, or via shell-less experimentation9,27,28. One distinct feature of vascular patterns in this animal model is the interdigitation of arteries (A) and veins (V). In organs such as the CAM, it is generally observed that arteries and veins avoid each other by interdigitating. This dense interlacing, with close approach of arteries and veins, is crucial for proper blood flow across capillaries and normal tissue oxygenation11. Conversely, direct A-V connections (shunts or fistulae) are detrimental because they prevent blood flow through the capillaries, thereby strongly reducing gas exchanges (O2, CO2). Brain micro-vascular shunts are for example observed in the pathogenesis of high intracranial pressure29. The interdigitating structure is present in the CAM and in the human or simian retinas24. CAM vessels interdigitate at all scales and present a measurable deviation from Murray’s law11. This suggests that other formation principles might be at play. In particular, the origin of vascular interlacing, which is one of the most striking features of vascular patterns, is not understood.
We have developed an automatic image processing method which allows one to extract the structure of the capillaries, and of all vessels at higher levels in the spatial hierarchy by direct in vivo optical imaging without fluorescence, fixation, cast or staining (see Methods, Optical Imaging). The technique we developed represents a major step forward compared to previous imaging methods11,13,14,15,16,17 (See especially the Table in the appendix of ref. 13 “Overview of the important studies in which pre- and postcapillary blood vessels in the CAM were analyzed”). We use erythrocytes as point-like tracers of the vessels, and integrate their position over time. The main problems in capillary imaging are the intrinsic movements of the embryo (spontaneous skeletal muscle contraction, heartbeat and tissue peristaltic movements). In brief, our method30 consists of acquiring a large number of images (50 < N < 300) and performing sequential steps of decorrelation, elimination of blurred frames, registration, and stack summing or averaging. This process yields two distinct images of the vasculature: the image of the vasculature itself (the lumina), and the image of the perfusion. Our current algorithm is able to automatically generate time-lapse movies of tissue perfusion with a 30-sec frame interval. (In ref. 31 a similar technique might have been used to generate static images, however, these authors do not provide details of the acquisition method nor of its performances).
The study presented here focuses on days 7–13 of chicken development. CAMs were imaged between day 5 (onset of CAM formation) and day 14, but most analyses were performed between day 8 and 13. The rounded shape of the sac and the vicinity of the embryo complicate imaging at earlier stages. From ~7 days on, a wide part of the CAM is flattened against the top surface of the yolk, far from the embryo. The vasculature remains roughly in the same plane and bulk displacement is therefore small. Beyond 13 days, we observed slow chronic contractions of the CAM, probably related to smooth muscle cells around arterioles, or to the contractility of extraembryonic organs5,32,33. This contraction causes arteries to stretch while veins become varicose. This effect deserves a separate study.
In this report, we use our new imaging technique to address the long-standing question of artero-venous (A-V) interlacing. The work presented here shows that the tips of arterioles are flattened, thereby increasing hemodynamic resistance in the distal parts of the arteries. Further away from the tips, more proximally, vessels remain cylindrical. Because veins are sensitive to shear4,5,6, they are attracted hemodynamically towards the proximal regions where blood flow is higher. Tip flattening and the associated tissue swelling act as a short-range repulsive force between tips of arteries and veins. This provides a physical mechanism for A-V interlacing.
## Results
### Presence in arteries of a flat terminal segment
Our method reveals the structure of the vasculature during interlacing (day 10) in the CAM (Fig. 1a and Supplementary Fig. 1). A time-lapse of the development of the CAM vasculature between days 10 and 13 is shown in Supplementary Movie 1. Although very thin at day 10, the CAM has a stratified 3D structure. Arteries and veins are segregated in different planes, with veins on top and arteries beneath them9. Vertical tubes (“chimneys”) form in the more proximal part (upstream, Fig. 1b, arrowheads). They appear as optically dense spots along the blood vessels. These dense spots are visible at all stages after day 8 (Fig. 1c). These vertical vessels have a greater diameter than the surrounding capillaries. Chimneys are absent in the most distal part of the arterioles (Fig. 2a, arrows). Strikingly, the terminal segment of arterioles is 20% wider than its more proximal part (Fig. 2b and Supplementary Movie 2; Fig. 2a, arrowheads). It has a bilateral “sawtooth” pattern with regularly spaced in-plane collaterals (Fig. 2a, arrows). The terminal part of the vessel displays a flat profile (Fig. 2c), while more mature, proximal vessels (with vertical chimneys) have a rounder profile. The terminal segments of the arteries do not form cylindrical tubes: they are flattened against the ectodermal surface.
### Presence around the terminal segment of a swollen area
Flow visualization (Supplementary Movie 3, magnification ×4) gives the impression that the venous flow swerves around a denser or thicker area, as if the arterial bed were swollen exactly where the terminal segments are flattened. When the processed images are followed in time-lapse (Supplementary Movie 4), the denser/thicker areas appear more rigid than the surrounding tissue (the distal part moves en bloc, with its surrounding capillaries; this also explains the kinks in the vessel of Fig. 2a). The onset of inflation in these areas (Supplementary Movies 5, 6) shows that swelling repels veins by two mechanisms: physical displacement, in which a tube is pushed sideways like a hose, and/or a poro-elastic shift of active capillaries selected for venous maturation (the main flow which was passing through one tube now passes through another tube parallel to the previous one, while the previous one is squeezed; this eventually amounts to a sideways shift; see also Supplementary Movie 1, bottom left). PIV tracking of the displacement over a period of 7 h (Supplementary Movie 6) clearly shows that swelling between the arterioles and venules causes a physical, repelling interaction. This effect is accompanied by a “magnifying lens” (“Vasarely”) effect: capillary loops appear wide in the center and squeezed at the periphery (Fig. 3a, from Supplementary Movie 6). A shadowgraphic inspection (see Methods, Shadowgraph) of the surface (Fig. 3b) clearly shows a cascade of micro-swellings organized exactly like the vascular pattern. The tissue under the arteries forms bumps and veins explore the valleys between the bumps. These micro-swellings are witness to a complex distribution of compressive stresses, which is made visible on this quasi-2D tissue by imaging its topography. These stresses would be difficult to image in the bulk of a tissue.
### Delamination transforms in-plane collaterals into vertical vessels
Tissue organization around the vessels can be clearly imaged by day 12 in the area where the CAM floats on the albumin, because the latter has a low optical density. Swollen cells along the arterioles are clearly seen to cross over the vessels in their proximal part (Fig. 3c). Distally, the vessels are flattened, and cells cannot cross. The collaterals tend to follow the furrows between swollen cells. This explains why the plexus mesh has a gradient of diameter, with larger anastomoses closer to the arterioles. Such a gradient of plexus loops is observed ubiquitously (Fig. 3c, see also Fig. 2a). Also, the presence of swollen cells along arterioles explains the sawtooth structure of the tips of the arteries, with collaterals forming at regular intervals, as observed for example in Fig. 2a (arrows on the arterioles) and in Fig. 3c at a better resolution. These regularly spaced collaterals form regularly spaced chimneys by simply rotating vertically as the vessel delaminates. (The transition from in-plane sawtooth to 3D chimneys is caught in time-lapse in Supplementary Movies 1, 4, 5, and 8; careful inspection confirms that the positions of the vertical chimneys correspond to previous in-plane collaterals.)
### Tip flattening and tissue swelling form a repulsive interaction between arteries and veins
At day 7 (Fig. 4a), the forming venous tree is already oriented towards the vertical chimneys in the horseshoes formed by the branching arteries; they do not grow towards the tips of the arteries, although these are closer. By day 10, the capillary bed has adapted to the flow and the venous path swerves around the flat, terminal part of the arteriole (Fig. 4b). Extracting the average erythrocyte density (Fig. 4c), we find that blood flow is on average higher, proximally than at the tips of the arteries; flow is absent in the flat part of the arteries. The optical density is reduced in the swollen area, (see Fig. 4c: the star shows an area which appears darker because of the reduced number of capillaries and the reduced diameter of the remaining ones). Venules are oriented towards the vertical sources located proximally along the 3D segment of the arteries, and the venous path avoids the swollen area where the arteriole is flat.
To confirm that the swelling had a direct impact on hemodynamics, we imaged the flow around distal swollen capillary beds with a Photron FastCam Camera, at 1000 frames per sec. (Supplementary Movie 7). We can generate both flow maps by PIV tracking of erythrocytes, and the image of the capillary bed with our algorithm (Fig. 5). We confirmed that blood flow swerves around the distal part of the forming arterioles and its surrounding capillaries. We stress that this observation is not incorporated in classical hemodynamic circulation models. If shear stress is the main cue for vessel growth4,5,6, this effect will induce a repelling interaction between arteries and veins. Veins will tend to grow away from the tip, towards more proximal areas.
### Insights from time-lapse observation
The temporal deformation of the vessels during CAM development can also be followed with our method (Supplementary Movie 1 obtained between days 10 and 13, Supplementary Movie 4 between days 7 and 8, Supplementary Movie 5 at day 6, Supplementary Movie 8 between day 10 and 12, Supplementary Movie 9 between 8 and 9). As arterioles progress, the tissue dilates conspicuously. This dilation is discontinuous: Supplementary Movie 9 shows accelerating “puffs” or “waves” of capillary formation. We also observed a rapid anisotropic extension of the arterioles after they delaminated (Supplementary Movies 5 and 8, by the end). Arterioles stuck to the ectodermal substrate resist elongational strain which likely inhibits their elongational growth.
## Discussion
Although the distal part of the arteriole looks wider, its hemodynamic resistance is increased because the vessel is flattened against the ectoderm (Fig. 6a). During flattening of a cylinder against a flat surface, the apparent diameter increases up to a width of πR when the vessel is totally squeezed (Supplementary Movie 10). When flattening a tube from a circular cross-section (diameter D) to a roughly rectangular cross-section of apparent width 1.4D, the hydraulic resistance increases by a factor 17 (Fig. 6a; the model assumes a homogeneous fluid flowing in a smooth flattened pipe). Moreover, in the real system the tips of the vessels (arterioles or venules) explore a surrounding plexus of flattened capillaries which are in contact with the ectodermal surface too. The variation of blood viscosity with diameter (Fåhræus-Lindqvist effect34) could mitigate this effect by at most a factor 2 in the range of vessel diameters investigated here34. The flattened tip of the vessels therefore acts as a hemodynamic resistance.
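The order of magnitude of this resistance increase can be cross-checked against the textbook lubrication formula for laminar flow in a shallow rectangular duct. This is a rough sanity check under stated assumptions, not the Comsol computation itself: the duct formula assumes a perfectly rectangular cross-section, which the flattened vessel only approximates.

```python
import math

# Hydraulic resistance per unit length (laminar flow), up to the common
# factor of dynamic viscosity mu and length L, which cancel in the ratio.
def r_circular(D):
    # Poiseuille: R = 128 mu L / (pi D^4); mu and L dropped
    return 128.0 / (math.pi * D**4)

def r_rectangular(w, h):
    # shallow-duct approximation (h << w): R = 12 mu L / (w h^3 (1 - 0.63 h/w))
    return 12.0 / (w * h**3 * (1.0 - 0.63 * h / w))

D = 1.0                     # initial diameter (arbitrary units)
w, h = 1.42 * D, D / 4.0    # flattened: apparent width 1.42 D, gap D/4
ratio = r_rectangular(w, h) / r_circular(D)
print(round(ratio, 1))      # ~14.9, same order as the factor 17 from Comsol
```

The ~20% gap between this estimate and the factor 17 reflects the real cross-section not being rectangular.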
The pressure increases upstream of the artery, until it is sufficient to induce delamination of the vessel and give it a circular cross-section. From this point on, the cylindrical vessel hangs beneath the ectoderm. 2D collaterals turn into 3D chimneys as the vessel sinks and cells come to lie over and press on them (Supplementary Movies 1, 4, 5, and 8). We numerically computed the hemodynamic pressure field and the fluxes around a growing arteriole whose tip is growing in a plexus pressed against the ectodermal surface, with a hydrodynamic resistance only 2.5 times greater than that of the proximal part of the vessel. We find that the main flux goes around the tip of the arteriole and reaches out to the more proximal, 3D delaminated part (Fig. 6b). This explains why the flow swerves around the distal part and goes to the proximal regions, which are more 3D. This favors capillary remodeling into veins at a distance from the tips of arterioles. Veins will rather navigate towards older, more proximal, rounded-off parts of the arteries, where high flow in the vertical collaterals attracts veins hemodynamically.
This effect will certainly also depend on metabolic and biochemical stimuli. In a recent article, Clément et al.35 showed that the metabolic activity of the yolk-sac, via cell developmental pressure, explains the radial expansion and the morphogenesis of the first circular peripheral vein. The results presented here show the existence of micro-domains of swollen tissue at much smaller scale, and even at single-cell scale, that contribute to positioning each venule between the associated arterial horseshoe. These micro-domains are correlated to flat arterioles that adhere to the ectoderm. They are analogous to the early yolk-sac35, in that they correspond to domains that are locally ill-perfused. We have also observed that, in the same samples, and at the same developmental stage, there is a gradient of plexus anastomoses when going from the part of the CAM which is on top of the yolk towards the part of the CAM which is on top of the albumin. The loops are larger on the albumin (Fig. 7). Yolk is known to inhibit cell cycle36 while albumin is a more favorable medium for embryo development27,28. We hypothesize that cell metabolism and developmental dynamics may thereby modulate local compressive stresses exerted by the tissue on the forming vascular tree. In the same spirit as the peripheral vein expands radially even before the onset of perfusion, it is likely that the flat terminal segment of the arteries causes the surrounding tissue to swell and veins to be “pushed” away. On the contrary, proper perfusion tends to lower the stress. A possible rationale is that, in the presence of a small flow, metabolites accumulate in the tissue instead of being flushed away, thereby locally increasing the pressure.
What is the generality of these results and what is their impact on our understanding of vascular diseases? Generic physical phenomena should be observed in different contexts and taxons. We caution that heartbeat rate and erythrocyte size and shape are different in birds and mammals: spatial scales are not generic and may vary from one animal to the other. The observations and mechanistic explanations presented here rely on essentially three ingredients: a stratified tissue, a capillary plexus which adheres and creeps on a surface until it delaminates, and an increased pressure in ill-perfused areas which makes the tissue swell locally. The topological aspects linked to tissue stratification and surface creep of endothelial cells are universal. We cannot however exclude that the local pressure distribution depends genuinely on tissue type, its biological context (disease vs physiological) and the specific taxon. Especially, it is well known that ill-perfused tissue produces vascular endothelial growth factors (VEGF), which stimulate and guide capillary sprouting37. The density and orientation of capillary sprouting will certainly play a role in vascular morphogenesis and interlacing.
Our work predicts a possible general relationship between tissue stratification and vascular interlacing. Flattening of arteries onto cell layers is more likely in thin, flat tissues. This may explain why patterns obtained during de novo vascularization around 3D tumors show aberrant vessels which do not interlace properly (their vessel structure has been reported to be “unpredictable”38). If the stratification of the tissue is lost, as is often the case in tumor growth39, the mechanism presented here breaks down. Another prediction of the model relates the pressure in the vessels to the interlacing pattern: increased blood pressure will move the point of delamination further distally. This causes more distal interlacing with a greater probability of A-V shunts or fistulae. An A-V interlacing identical to that observed in the CAM is observed in the eye vasculature; hypertension causes vascular pathologies in which arteries and veins do not interlace properly40. A-V shunts and telangiectasias are commonly observed in pathologies such as Rendu-Osler-Weber disease41.
As pointed out elegantly by Pries et al.42, the general problem of shunt formation is intrinsically asymmetrical: production of biomolecules transported by the flow in the vascular lumen may serve as biophysical information to prevent shunts downstream, but not upstream. The authors invoke the transfer of information by responses conducted along the vessel wall upstream, likely through gap junctions and ion channels (ref. 42 and references therein). This in turn requires another asymmetry to prevent aberrant downstream information conduction.
While biomolecular and metabolic feedback certainly plays a role in vascular patterning, we show here an additional A-V asymmetry related to the existence of a resistive segment at the tip of the arterioles. This hydrodynamic resistance varies dynamically during vascular morphogenesis and remodeling. This adds a physical layer to the problem of vasculature formation, which may contribute to explaining normal and pathological vascular trees.
While hemodynamics play an important role, one cannot deduce the construction plan of the vasculature from 2D measurements of the morphology or of the flow. Stresses exerted by surrounding tissues play an important role too: they modify the vessel cross section and determine the diameter of the initial capillary loops (although stresses are generally not directly visible, here, stresses could be imaged indirectly in the CAM by shadowgraph). Especially, parallel, interlaced vessels, as so often observed, obviously do not minimize viscous dissipation with respect to other geometries (parallel vessels double the dissipation with respect to direct A-V fistulae).
## Methods
### Embryos
Eggs were obtained at day 1 from EARL Morizeau, France. The embryos are incubated shell-less in a plastic cup. The plastic cup has a trapezoidal profile, which is important for imaging the CAM vessels by the edge, over a white background, especially after day 10 (at this stage the CAM is so large that it reaches the edges of the cup). The eggs were opened and transferred to the plastic cup in a sterile hood. The shell was sterilized before being broken. The cup was placed in a large Petri dish (Duroplan, 10 cm) of optical quality. A 3 mm layer of PBS was poured around the plastic cup inside the Petri dish. The Petri dishes with the embryos were incubated in a Thermo Fisher incubator at 37 °C. For intra-vital imaging, the embryos, inside the Petri dish, were placed between two heating stages (Minitüb GmbH, Ref. 12055/300). The top heating stage has a central round window, covered with a copper plate (3 mm thick) with two slits: one for imaging and one for shining light. The purpose of the copper plate is to avoid temperature gradients, which could generate gradients of growth rate, and also condensation.
### Ethics statement
These experiments are authorized by French law R214-87 modified by the Décret n°2013-118 to comply with European regulation.
### Optical imaging
We describe here the method to obtain an in vivo image of a live CAM with capillary resolution, after processing of primary images. An overview of data acquisition and processing for the measurement of vascular profiles is shown in Supplementary Figure 3. The primary images were acquired with a Stingray monochrome HD firewire camera from Allied Vision Technology (frame rate 15 Hz), or with a Basler CMOS monochrome HD USB camera (frame rate 42 Hz). The binocular was a Leica Macroscope F16 APO. The basic principle consists in averaging a movie. Supplementary Fig. 1 shows one typical plate in a movie of CAM imaging and the image obtained after processing the data. The basic tool is the Z-Project tool in the software Image J by which multiple images in a stack can be projected. In our case, we either average plates or generate the image containing all minima at each pixel. This amounts to extracting each point in the movie where at least one erythrocyte (black dot) has passed once during the time the movie plays. The most visible erythrocyte is kept for generating the image. As a result the “Minima-Image” shows the vasculature (the lumens), as if imaged by a homogeneous light from the inside, and the “Average-Image” shows the flows (the average red blood cell count passed at each point over the time), with the following caveat: static red cells give a bright spot (because of the permanent presence of a red cell at this spot). There is no need to inject a fluorescent dye.
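The two projections can be reproduced outside ImageJ in a few lines of NumPy; the function name below is illustrative, and the stack is assumed to be already registered:

```python
import numpy as np

def project_stack(stack):
    """Z-projection of a registered image stack (N frames of H x W pixels,
    dark erythrocytes on a bright background), mimicking ImageJ's Z-Project.
    Returns the "Minima-Image" (vessel lumina: darkest value ever seen at
    each pixel) and the "Average-Image" (perfusion: mean erythrocyte count
    at each pixel over the movie)."""
    stack = np.asarray(stack, dtype=float)
    lumina = stack.min(axis=0)       # at least one red cell passed here
    perfusion = stack.mean(axis=0)   # how often red cells passed here
    return lumina, perfusion
```

Note the caveat from the text carries over: a static red cell darkens the average image permanently at its position.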
However, if one took a movie of the CAM and directly computed the “Average” or “Minima” image with the Z-Project tool in ImageJ as stated above, one would get a useless image. This is because there are ample movements in the embryonic tissue which ruin a naïve summation. The movements in the embryo are of four sorts. First: movements of the embryo itself, shaking and pulling all tissues chronically. Second: the heartbeat and vascular tone, which are cyclic and rapid. Third: tissue contractions of long period and long wavelength (~minute). Fourth: localized tissue contractions at short time periods (<minute). These contractions in the tissue were investigated specifically in more detail and their study will be presented elsewhere.
In order to get sharp images, one needs to realign (register) the plates in the stacks before averaging or extracting minima. We perform this registration with the StackReg plugin in ImageJ. However, this is usually not sufficient either, because during the movements of the embryo, the tissue is either displaced too far away, deformed in large proportions, or moved in “Z” and blurred. This is why we developed two macros, one which discards blurred images, and one which discards images which are too different from a chosen reference image in the stack (a neat, crisp one). The blurred images are discarded in the following way. Generally, blurred images arise from the oscillatory behavior of the heartbeat and of the embryo. With proper positioning of the objectives, the ROI will be acceptably focused for 50% of the time, and out of focus 50% of the time, due to the oscillation. We therefore need to discard approximately 1 out of 2 plates (whenever possible, we keep more). When a sharp feature gets out of focus, the light is diffused away, so that typically a sharp dark spot will become lighter in color (gray scale). We therefore select a crisp dark area in one sharp image, follow its gray level image-by-image and discard the 50% of images having a less sharp area, as deduced from measuring the gray level with the “measure” macro in ImageJ macro language. The initial reference feature is selected by the mouse. Therefore the operator just selects one sharp feature in one image of the stack, and the macro retains the 50% of plates that are sharp enough compared to the reference. This step can be automated for time-lapse.
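A minimal version of this sharpness filter, assuming the stack is a NumPy array and the operator has supplied a rectangular ROI around a crisp dark feature (function name and ROI format are illustrative), could look like:

```python
import numpy as np

def keep_sharpest(stack, roi, frac=0.5):
    """Discard blurred frames: when the reference dark feature defocuses,
    light is diffused away and its gray level rises, so we keep the
    fraction of frames in which the ROI stays darkest.
    stack: (N, H, W) array; roi: (y0, y1, x0, x1) around a dark spot."""
    y0, y1, x0, x1 = roi
    levels = stack[:, y0:y1, x0:x1].mean(axis=(1, 2))  # blur raises gray level
    n_keep = max(1, int(len(stack) * frac))
    keep = np.sort(np.argsort(levels)[:n_keep])        # darkest = sharpest
    return stack[keep]                                 # temporal order kept
```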
Now, when all these steps are done before the registration of the files, one will generally still not get a useful image after Z-projection of the stack. This is because the image is essentially composed of erythrocytes, and these erythrocytes are flowing. The displacement of the erythrocytes amounts to a global average flow over a majority of the image (the red cells), which is enough to imply a slow backward drift of the image registration which strives to eliminate movements. This is why one last step must be performed (patent pending30): the plates in the movie are shuffled, in order to decorrelate the images of the red blood cells as much as possible. In practice, we acquire between 50 and 300 plates (at 15 Hz), which amounts to 15–20 sec of video acquisition, and replace each plate 2k + 1 between 1 and Nplates/2 by the plate found at Nplates/2 + 2k + 2. Reciprocally, we replace each plate found at Nplates/2 + 2k by the plate 2k + 1 found between 1 and Nplates/2. This amounts to replacing each “next plate” by the farthest possible plate in the stack (modulo Nplates/2). By so doing, the plates are decorrelated, and the registration is not perturbed by erythrocyte flow. The plates can be unshuffled at the end to recover the registered flow. It is only after this registration that we perform all steps explained above, and keep between 50 and 200 images as final processable images prior to the Z-Project tool in ImageJ, which performs the averaging or minima extraction. For some stacks that are very difficult to process, a rapid visual analysis may be performed to discard anomalous images manually: indeed, the electronics may spuriously generate one aberrant image in several hundred, which suffices to ruin the algorithm (this was observed with the Basler camera).
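The decorrelation shuffle described above can be sketched as follows (0-indexed; we implement a consistent pairwise swap of plate 2k + 1 in the first half with plate Nplates/2 + 2k + 2, as in the forward description). A useful property of this swap is that it is an involution, so running it a second time unshuffles the stack:

```python
def decorrelate(plates):
    """Swap each odd-numbered plate 2k+1 of the first half (1-indexed, as
    in the text) with the plate at Nplates/2 + 2k + 2, so that consecutive
    plates are maximally separated in time (modulo Nplates/2). Applying
    the function twice restores the original order."""
    out = list(plates)
    half = len(out) // 2
    k = 0
    while 2 * k + 1 <= half and half + 2 * k + 2 <= len(out):
        i, j = 2 * k, half + 2 * k + 1        # 0-indexed positions
        out[i], out[j] = out[j], out[i]
        k += 1
    return out
```

For an 8-plate stack [0..7], this yields [5, 1, 7, 3, 4, 0, 6, 2]: every plate's neighbor now comes from the other half of the movie.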
The final image can be color-flattened with the “Subtract Background” tool, and rendered in either gray scale or a green look-up table, with adjusted levels (Adjust Brightness/Contrast tool in ImageJ).
We also use a filter complementary to Red in order to enhance contrast (Leica green filter FS 505–550, associated to a Leica Lamp KL 1600 LED).
One last option, in order to get better images, consists in imaging vessels as far as possible from the embryo, especially after about 10 days of development, when the CAM reaches the edges of the plastic cup in which the embryos are incubated. In that area, the movements are generally of a smaller amplitude, and the vessels have a better contrast, being far from the embryo, and at some areas not even on the yolk sac.
This is how we get the crisper images of the capillaries (e.g. Supplementary Fig. 1).
The main drawback of the method is that it does not image dangling bonds, but only bonds in which there actually is a flow. The second drawback is the time: it may take up to 15 min to work out an image. The method is very time consuming if time-lapse movies are desired (e.g. Supplementary Movies 1, 5, and 8). The third drawback is that the method cannot image areas which undergo great deformations during the acquisition of the primary images (however, oscillatory displacements are processable).
However, the advantages of the method are, first, that it is extremely cheap; second, that it does not require fluorescent labeling of cells, injection of fluorescent dyes such as FITC dextran, or fixation of the tissue; third, that the method provides the magnitudes of the flow, at least qualitatively (calibration is currently under study). It can be implemented in time-lapse (see Supplementary Movies 1, 4, 5, and 6). It is also possible to overlap the actual flow on the vessel and follow red blood cells in the vasculature (see Supplementary Movie 12). Moreover, the method provides a volume rendering of the vascular surface profile, because the image is formed from absorbent particles dispersed inside the vessel lumen.
Finally, it should be noted that images are obtained by following individual erythrocytes passing over a homogeneous white field. Since the camera signal/noise ratio is optimal at higher level of light, and since the individual erythrocytes are each very small, the quality of the image obtained by integration is quite good. The resolution of the method is good enough to provide images of all capillaries at magnifications ×1 to ×9 with a binocular.
### Shadowgraph
The shadowgraph method consists in shining a parallel beam of light onto the surface. A parallel beam is prepared by positioning a small source (fiber lamp Schott) at the focal distance of a converging lens. A beam splitter positioned at 45° is used to have the beam descend vertically onto the embryo surface. The light is reflected and the surface relief is observed optically with the binocular and a CCD camera (Supplementary Fig. 2). Other examples of shadowgraphic imaging of embryonic surfaces can be found in ref. 43.
### Vessel profiles
We use optical absorption by erythrocytes to measure the vascular cross-section. The principle of the measurement is the following. First of all, the average erythrocyte flow cannot be used for measurement of the vessel cross-section, because it is well known that hematocrit segregation gathers erythrocyte flow in particular places of the vessels. In addition, that segregation depends strongly on vascular diameter44. Therefore, the average erythrocyte flow is not a direct measurement of the vessel profile. Instead, we use the “maximal absorption mode”. The principle is that statistically, at least one erythrocyte will explore any place in the vessel once, even a place which on average is less visited. Therefore, if a long enough film is acquired, at least one erythrocyte has passed once at any spot. But if the movie is long enough, each vertical cross-section of the vessel will be filled at times by more than one red cell, rendering a larger absorption. And if the vessel is filled completely with red cells at least once in the recording time, the absorption is, in a crude approximation, proportional to the vessel thickness; therefore, the absorption level obtained by the “Minima” image, for a long recording time, tends to approach the vessel cross-section. This turns into an optical absorption level proportional to vessel thickness (Supplementary Information Fig. 3). In principle, one should expect a blunting effect of light diffusion across the vessel. Also, in extracting the profiles, we assume that each vertical cross-section is filled completely with red cells at least once in the recording time, which may sound unrealistic. However, when performing the measurement on wide vessels (diam. ~80 μm) which were obviously cylindrical, we were able to extract profiles which were indeed circular. If the analysis works for wide vessels, it should be even better for smaller vessels.
Therefore, we assume that the blunting effects are negligible in the range of optical density, absorption and vessel thickness under study.
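Under these assumptions, the expected “Minima” absorption profile across a fully sampled cylindrical vessel is simply the chord length of the circle, i.e. a semicircular profile; the radius and pixel grid below are illustrative values, not measurements:

```python
import numpy as np

# Absorption across a cylinder of radius R is proportional to the chord
# length 2*sqrt(R^2 - x^2) at transverse position x.
R = 40.0                                  # vessel radius, micrometers
x = np.linspace(-R, R, 201)               # transverse pixel positions
thickness = 2.0 * np.sqrt(np.clip(R**2 - x**2, 0.0, None))
# The maximal chord, at the centerline, equals the diameter 2R;
# the profile falls to zero at the vessel walls.
```

Comparing a measured profile against this shape is the consistency check described above for wide (~80 μm) vessels.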
### Poiseuille flow in a flattened tube
We consider a tube of circular cross-section. This cylinder is assumed to be compressed by contact with two symmetrical planes forming a wedge, such that the cross-section varies from a cylinder at the proximal end down to a flattened cylinder. At the most distal part, the cylinder of initial diameter D is flattened such that the narrow direction has a gap D/4, and the wide direction an apparent width 1.42D. The deformation is calculated with the elasticity modules in Comsol (Supplementary Movie 9 shows the progressive deformation of 1/4th of the tube). In order to estimate the effect of such a deformation upon the flow of a fluid flowing in the tube, a numerical model has been developed in Comsol, based on the Stokes equation solved by a finite element method. Supplementary Movie 10 shows the variation of the magnitude of the flow between the entry and the exit of the tube. The fluid is water (density 1000 kg/m³; dynamic viscosity 10−3 Pa·s). A flow rate of 251.32 × 10−14 m³/s has been chosen, giving a mean velocity of 0.5 mm/s or a centerline velocity of 1 mm/s. In the initial part of the tube, a Poiseuille flow takes place, the pressure gradient dp/dz is exactly −2500 Pa/m and the wall shear rate is exactly 50 s−1 (from the Poiseuille formula). When the stream reaches the end of the tube, the pressure gradient (absolute value) is increased by a factor of 17 (42500/2500). The wall shear rate is increased by a factor of 4 (200/50) along the curved part of the wall and of 8 (400/50) along the plane side.
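The quoted inflow values are mutually consistent and follow from the Poiseuille formulas, assuming a tube radius of 40 μm (the value implied by the flow rate and the mean velocity):

```python
import math

mu = 1e-3            # dynamic viscosity of water, Pa*s
v_mean = 0.5e-3      # mean velocity, m/s
R = 40e-6            # tube radius, m (implied diameter: 80 micrometers)

Q = v_mean * math.pi * R**2          # volumetric flow rate
dpdz = 8 * mu * v_mean / R**2        # Poiseuille pressure gradient magnitude
gamma_wall = 4 * v_mean / R          # Poiseuille wall shear rate

print(Q)           # ~2.513e-12 m^3/s, i.e. 251.32e-14 as quoted
print(dpdz)        # ~2500 Pa/m
print(gamma_wall)  # ~50 1/s
```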
The boundary conditions are: at the end of the tube the pressure is set to 0, at the entrance, a mean velocity is set to 0.5 mm/s. On the wall, a no-slip condition is imposed.
The finite element method needs to expand the solution onto a basis of polynomials. Lagrange polynomials of degree 2 for pressure and of degree 3 for velocity have been used leading to a stable numerical scheme and a good convergence.
All calculations have been performed by using the commercial numerical code "Comsol Multiphysics".
Results of computations:
The flow in the first part is a good test to benchmark the quality of the mesh and the accuracy of the expected values for pressure gradient and wall shear stress.
When the stream reaches the end of the tube the pressure gradient (absolute value) is increased by a factor of almost 18. The wall shear rate is increased by a factor of 4 along the curved part of the wall and of 8 along the plane side. Such results could predict an enhancement of stress and mechanotransduction upon the material of the wall.
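The Poiseuille figures quoted for the undeformed part of the tube can be cross-checked with the textbook formulas Q = v·π(D/2)², dp/dz = 32 μ v / D² and wall shear rate = 8 v / D (a back-of-envelope check of the stated inputs, assuming the ~80 μm diameter used for the wide vessels above; this is not the Comsol model itself):

```python
import math

D  = 80e-6    # tube diameter [m] (the ~80 um wide vessels above)
v  = 0.5e-3   # mean velocity [m/s]
mu = 1e-3     # dynamic viscosity of water [Pa.s]

Q     = v * math.pi * (D / 2) ** 2   # flow rate [m^3/s]
dp_dz = 32 * mu * v / D ** 2         # Poiseuille pressure gradient [Pa/m]
shear = 8 * v / D                    # wall shear rate [1/s]

print(Q, dp_dz, shear)  # ~2.513e-12 m^3/s, ~2500 Pa/m, ~50 1/s
```

These reproduce the quoted inlet values (251.32 × 10⁻¹⁴ m³/s, −2500 Pa/m, 50 s⁻¹), which is exactly the benchmark the text uses to validate the mesh.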
### Flux around a flattened area
The distribution of flux for an arteriole (one proximal half "cylindrical", one distal half "rectangular") growing across a partially flattened capillary plexus was obtained with the finite difference method, implemented in Python. The conductivity σ[i,j] of the lattice is fixed at 0.4 in "flattened" capillaries, at 1 (1 = 2.5 × 0.4) in cylindrical capillaries, at 4 in the flat part of the arteriole and at 10 (= 4 × 2.5) in the cylindrical part of the arteriole. The scheme used is an explicit scheme with gradients discretized downstream. At each iteration the flux J = (jx, jy) is calculated as:
$$j_x[i,j] = \sigma[i,j]\,(V[i,j] - V[i-1,j])$$
(1)
$$j_y[i,j] = \sigma[i,j]\,(V[i,j] - V[i,j-1])$$
(2)
The potential is calculated by solving iteratively in k the conservation law div(J) = 0, with an explicit scheme:
$$V[i,j]_{k+1} = V[i,j]_k + C\,(j_{x,k}[i,j] - j_{x,k}[i-1,j] + j_{y,k}[i,j] - j_{y,k}[i,j-1])$$
(3)
In which C is a constant chosen for convergence (in practice C = 0.05). The number of iterations for convergence is of order 100 000 for our calculations, in a matrix 26 × 41 pts.
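A self-contained transcription of this relaxation in Python/NumPy (our own sketch, with a uniform toy conductivity map rather than the 0.4/1/4/10 pattern; for a stable vectorized demo we accumulate the divergence with centered offsets, which is equivalent to Eq. (3) up to a half-cell index shift):

```python
import numpy as np

ny, nx = 26, 41                      # grid size used in the calculation above
sigma = np.ones((ny, nx))            # toy conductivity map (uniform here)
V = np.zeros((ny, nx))
V[:, 0], V[:, -1] = 1.0, 0.0         # fixed potentials at inlet and outlet
C = 0.05                             # relaxation constant, as in the text

for _ in range(30_000):
    # Link fluxes between neighbouring cells, in the spirit of Eqs. (1)-(2).
    jx = sigma[:, 1:] * (V[:, 1:] - V[:, :-1])   # shape (ny, nx-1)
    jy = sigma[1:, :] * (V[1:, :] - V[:-1, :])   # shape (ny-1, nx)
    # Divergence of J, with zero-flux (insulating) outer walls.
    upd = np.zeros_like(V)
    upd[:, :-1] += jx
    upd[:, 1:] -= jx
    upd[:-1, :] += jy
    upd[1:, :] -= jy
    V += C * upd                      # drive div(J) -> 0, cf. Eq. (3)
    V[:, 0], V[:, -1] = 1.0, 0.0      # re-impose the boundary conditions
```

With uniform conductivity the converged potential is simply linear between the two ends; replacing `sigma` by the region-dependent map above steers the flux around the flattened area.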
### Code availability
The software for data analysis is available by the corresponding author upon reasonable request. The code may be used for reproducibility of the analyses present in the current study and it cannot be distributed. The algorithms of the code are protected by patent (Device for imaging blood vessels European Patent Office N° 18305795.9-1132 filed 22/06/2018).
|
{}
|
5-12.
Jill is studying a strange bacterium. When she first looks at the bacteria, there are $1000$ cells in her sample. The next day, there are $2000$ cells. Intrigued, she comes back the next day to find that there are $4000$ cells!
1. Should the graph of this situation be linear or curved?
Curved.
2. Create a table and graph for this situation. The inputs are the days that have passed after she first began to study the sample, and the outputs are the number of cells of bacteria.
How many cells of bacteria were there to start with? Make a table starting with day $0$. What is the multiplier?
Extend the table and draw a graph.
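Purely as a check on your table (not part of the lesson), the pattern can be generated mechanically from the starting value and the multiplier:

```python
start, multiplier = 1000, 2   # 1000 cells on day 0, doubling each day

table = [(day, start * multiplier ** day) for day in range(5)]
for day, cells in table:
    print(day, cells)   # day 1 -> 2000 and day 2 -> 4000, matching Jill's counts
```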
3. Give a complete description of the graph of this situation as you did in problem .
|
{}
|
# Yet another VW power argument. Again.
Discussion in 'Volkswagen' started by BoKu, Nov 18, 2015.
1. Nov 20, 2015
### dino
#### Well-Known MemberHBA Supporter
Joined:
Sep 18, 2007
Messages:
615
87
Location:
florida
How well does the type 3 magnesium crankcase handle the bore of 94mm? I understand from their website the case has been modified. Besides machining for bore and clearance, is there anything else involved?
Dino
2. Nov 20, 2015
### Pops
#### Well-Known Member
Joined:
Jan 1, 2013
Messages:
7,379
6,321
Location:
USA.
Well said.
You can build a VW engine and not have one part built by VW. Same for a Small Block Chevy, etc. About the only VW make parts in my VW engine is the crank and block and heads.
Dan
Midniteoyl, Topaz and Jake Levi like this.
3. Nov 20, 2015
### Jake Levi
#### Well-Known Member
Joined:
Aug 26, 2010
Messages:
89
4
Location:
Harrisville, MI USA
Fascinating thread; my expertise is not engines, it's critters and how they are built. Not remotely mechanical.
That said, I have been thinking of a 2100 VW engine to provide ~70 HP for my low and slow homebuilt, with up to 80 for take offs etc and throttle back for flight. The Revmasters are beautiful, but almost half again more than what I want to spend.
So is the 2100 VW wishful thinking or doable for a nice long life?
Inquiring mind wants to know , just what is doable, not pie in the sky, or in my eye.
Last edited: Nov 20, 2015
4. Nov 20, 2015
### Topaz
#### Super ModeratorStaff Member
Joined:
Jul 30, 2005
Messages:
13,963
5,576
Location:
Orange County, California
Talk to Great Plains Aircraft. Talk to AeroVee. Talk to Hummel. Given that you're looking for an engine on the very top of the power band for a VW, you don't want to try doing this conversion on your own, unaided. If you wanted a 50-60hp engine, sure, knock yourself out. But if you're wanting 70 hp continuous, you want a conversion by someone who knows what they're doing.
Hummel Engines price list: hummel (All power ratings are takeoff power. Write them for continuous ratings. I did. They're very helpful.)
Great Plains Aircraft: Welcome to Great Plains Aircraft! (Probably the most-experienced converter out there. Their kits can save you a lot of money.)
AeroVee: AeroConversions Products -- Power to the Sport Pilot! (AeroVee is saying that they get 80hp continuous from their engine. They also note a TBO of 700-1200h, so obviously this engine is being pushed harder than the others.)
Here's where it pays to define your terms. Does "doable for a nice long life" to you mean 2000h TBO, because that's what certified engines get? Or is 1200h TBO acceptable? I've already noted that 1200h is more than the entire life of many homebuilts and, at any rate, about 10-12 years of use for the average pilot. If 1200h is long enough, and you use a reputable conversion from an experienced vendor, installed properly, yeah, you'll very likely be just fine. AeroVee's engine seems to have a shorter real-world TBO but, then again, a VW overhaul is pennies on the dollar of that for a certified aero engine, and we're still talking 7 to 10-12 years of average usage before overhaul. Up to you which way you want to go.
The other thing to think about is if this is really a good engine for your airplane. If 70hp is the bottom end of the recommended installed power for it, this is probably a bad choice. We had a guy in here once who "had a friend" who put a mid-sized VW (I think it was 1835cc?) into a CH750, because, at the time, Zenith was saying the airplane "could be flown" with as little as 60-70hp. Now, Zenith says you need 80-120hp for that airplane. Well, this whiz-kid burned up his VW because he obviously had to run it pretty much WOT all the time, surely blowing through the takeoff power duration operating limitations in the process. And then his friend came here onto HBA and was screaming what awful engines VWs were, and how unreliable they were, and how he just happened to be developing a head kit that would allow them to produce "more than 50hp, which is all they can really make, according to ..." you guessed it, Bob Hoover. Was burning up a too-small engine installed in too much airframe the engine's fault? Nope. You could arguably say it was Zenith's fault for low-balling the installed power spec, but it was really the pilot's fault. He's the one that burned up the engine with his own hand on the throttle. The operating limitations have to be respected. Period. Or you'll burn up your engine. That's true of any operating limitation on any engine.
If your airplane can really be comfortable with 70hp or so, including on a long, extended climb-out, then yeah, you should be okay. If your airplane really needs 80-100hp, and you're just trying to save some money, don't do it. You're going to be disappointed in the end.
5. Nov 20, 2015
### Pops
#### Well-Known Member
Joined:
Jan 1, 2013
Messages:
7,379
6,321
Location:
USA.
Revmaster has been in the VW aero conversion business far longer than anyone else. 1959.
» Company
Steve at Great Plains said to prop your VW to cruise at 21" of MP. Steve knew what he was talking about in most anything about VW's. Just like Bob Hoover, Steve is going to be missed.
Dan
Vigilant1 likes this.
6. Nov 20, 2015
### Jake Levi
#### Well-Known Member
Joined:
Aug 26, 2010
Messages:
89
4
Location:
Harrisville, MI USA
Thanks Dan
Your comments on 1200 hrs TBO are what I'm hoping for; figure ~4 mos a year I won't be flying, cold winters here. And 70 mph average flight is fine with me, with 80 or so at climb out, most closer to 50. As I said, low and slow, with a plane weight of approx. 720-800 lbs empty, and a STOL wing.
Thanks for the links, I'll be studying all of them. My first thoughts were with a Corvair or Mazda but was seeing a lot of good things written about the VW.
7. Nov 20, 2015
### Jake Levi
#### Well-Known Member
Joined:
Aug 26, 2010
Messages:
89
4
Location:
Harrisville, MI USA
Hi Dan
A LOT of food for thought here, one thought is that I don't want to get over-engined, I don't see making any trips over ~ 650 miles total, that's about what it is to Oshkosh, and a cruise ~ 65-70 mph is fine with me, a lot of beautiful country to fly over on the trip, and no heavy traffic on the roads. Once a year is about all that would be.
To make a long story short, 70 mph would be a near max cruise, 80 mph take off would be good, and throttle back to slow cruise. I can see a lot of 60 mph, or slower flight speeds.
8. Nov 20, 2015
### Dana
#### Super ModeratorStaff Member
Joined:
Apr 4, 2007
Messages:
8,775
3,137
Location:
CT, USA
Power affects takeoff performance and rate of climb far more than it affects cruise speed.
Dana
Lucrum, BoKu and Topaz like this.
9. Nov 20, 2015
### Pops
#### Well-Known Member
Joined:
Jan 1, 2013
Messages:
7,379
6,321
Location:
USA.
My little single place airplane cruises at 75/80 mph, climbs 1200 FPM and burns 3 GPH at cruise. I have flown cross country with my 2 neighbors in their Pietenpols. I have to run back to 2450-2500 rpm to stay back with them from my normal cruise of 2700 rpm. I can out climb then by quite a bit at my cruise power when they are at WOT.
Dan
Lucrum, Topaz and Midniteoyl like this.
10. Nov 21, 2015
### Klrskies
#### Member
Joined:
Nov 20, 2015
Messages:
15
5
Location:
Livermore, California.
The VW engine has evolved dramatically from its humble beginnings, both in the changes VW made to it and in what the aftermarket has provided. The crankcase itself, as provided by VW, came in two different grades of magnesium alloy. The early case was die cast from AS41. It was adequate for early models that didn't generate more heat than the case could endure. Since a portion of the heat generated by combustion is transferred by the oil and cooled as it gets dissipated about the inside of the case and through the oil cooler, the case can only handle so much heat... less than some of the modern synthetics can tolerate. As emissions restrictions forced leaner/hotter combustion temps, and the displacement increased, VW changed over to a stronger, more temperature resistant alloy, AS21. That change was costly, as AS21 is more difficult to machine than AS41. The material type is cast on the side of the case so one can distinguish between the two. Magnesium alloy is great for weight reduction, but as its temperature is elevated, it loses its strength. If the material begins to weaken at high loads under high temps, it allows distortion. The head studs can pull out of their threads, and the case halves can begin to shuffle against each other at the parting lines, called fretting... this rubbing of the case halves against each other displaces material, causing case studs to loosen and pound the main bearings into the soft case. Heat is the enemy of the VW. It's a testament to those who have sorted out many designs that contribute to minimizing temperatures and extracting heat efficiently. The new aftermarket ALUMINUM cases are much more heat resistant. The type 4 case is aluminum for that reason... despite its increased weight. The aftermarket cases allow more room for stroker cranks and bigger cylinders, and extra head studs. The engine has evolved incredibly over its 80 year existence.
The engine was conceived as a big bore/short stroke configuration to reduce friction... another source of heat that would need to be dissipated. High torque/low rpm applications like having stroke to generate force to turn the heavy, long mass of a propeller. Bigger crankshaft main journal diameters handle the torsional resonance induced by the firing impulse of individual cylinders against the mass of a propeller. That resonance has to be controlled by a strong crankshaft and engine case. Overheating MAGNESIUM alloy makes that impossible. The trick in the engine being able to endure high loads is in understanding how much heat the materials can tolerate and improving upon handling its dissipation.
It's not fair to compare the humble original configuration constraints of the early automotive applications to the re-engineered, aviation configurations. I think Bob Hoover would agree.
Ken
Last edited: Nov 21, 2015
HapHazard, Midniteoyl and Pops like this.
11. Nov 21, 2015
### Pops
#### Well-Known Member
Joined:
Jan 1, 2013
Messages:
7,379
6,321
Location:
USA.
Yes, I think Bob Hoover would agree.
Dan
12. Nov 21, 2015
### Jake Levi
#### Well-Known Member
Joined:
Aug 26, 2010
Messages:
89
4
Location:
Harrisville, MI USA
Dan and Dana, rate of climb is also contributed to by plane configuration, I am a neophyte in this, but do know that the STOL wing, with its thickness, chord length and ailerons will assist the climb considerably. Which is why I have been looking at the 1835 or just larger for my project. Low and slow is my venue, not speed. Its also a 2 seater. That said, I am enjoying this thread, thanks for all of your input.
13. Nov 21, 2015
### saini flyer
#### Well-Known Member
Joined:
Mar 12, 2010
Messages:
432
116
Location:
Dallas, TX
What Dan did is a very good example of how to go about solving this problem. Use a large prop but constrain yourself to lower RPM. You lose power but gain thrust where it's needed. You are not going fast anyways. Have you looked at the double eagle and cabin eagle with the VW?
I also saw your other post on the Jabiru 2200. I am not an engine guy and am always looking for a complete FWF setup. Different engines have different issues. Jabirus are not perfect either, as evident from the recent events. Just like you, I want to compare the 80-85HP VW to the 85HP 2200. The Sonex website gives the same empty weight of the finished aircraft with both engines. The VW is some 30 lbs heavier and I think that is where the comparison starts and ends. Both running at 3400 to 3600RPM and same volume gives the same HP.
If your airplane needs 85hp continuous power for the way you use it, a VW is probably a very poor choice. You'll end up flogging the engine to death, and that death is going to come fairly quickly. 85hp is just asking too much of too small of an engine. A Jab is worth the extra $10k in that case, because you really don't have much of an alternative available. On the other hand, if you've got a really small and light airplane, in the vein of the Double Eagle or a KR-2 built and operated to the original weight specifications, a Sonarai (same weight comment), or suchlike, an appropriately-sized VW can be a very economical choice.
don january and Vigilant1 like this.
15. Nov 21, 2015
### Dana
#### Super ModeratorStaff Member
Joined:
Apr 4, 2007
Messages:
8,775
3,137
Location:
CT, USA
Yes, of course climb is affected by configuration. My point was in response to you saying you don't want to be "over engined" and then talking about speed... a bigger engine won't make you go much faster, but it will have a big difference in your ability to get out of a small field, or climb over mountains between here and there.
Dana
16. Nov 22, 2015
### Vigilant1
#### Well-Known MemberLifetime Supporter
Joined:
Jan 24, 2011
Messages:
4,240
1,973
Location:
US
The Sonex community experience may be of interest. According to the Sonex LLC database, the total of all completed reciprocating-engine models (Sonex, Waiex, Xenos, Onex) is 536. Of those:
Jab 2200: 62
Aerovee (normally aspirated): 250
Revmaster: 2
Great Plains: 0
VW "Other": 20
Bigger engines:
Jab 3300: 186
Corvair (all displacements): 1
Rotax 912 S: 1
Rotax 912 ULS: 1
UL Power UL260iS: 1
D-Motor (all): 0
Aerovee Turbo: 2
The numbers above are not 100% accurate--they were logged when a builder told Sonex what they had finished and flown, and not everyone did that, and owners often didn't notify Sonex if they later changed engines to something else.
Also, some owners aren't eager to self-identify to Sonex that they've put something unusual in the plane. But the Jab 2200 and the Aerovee were installations that were both fully supported by Sonex, and you can see that the Jab 2200 was not nearly as popular as the VW-based engines of similar TO power. Also, I think a higher proportion of Jabs were sold in Aus (where it is made) and in places that might not have a robust VW aftermarket. Also, the Jabiru 2200 puts out its max HP and max torque at lower RPMs than a typical VW, so it might be more attractive in a plane that flew slower but swung a bigger prop than these Sonex models. But the answer to your question about Jab 2200 vs VW engines might be found above. In the US, the VW-based engines are a very attractive value proposition when it comes to purchase price ($/hp) and also in the cost of repair parts. A new head for a VW (for two cylinders, with all new valves, seats, springs, tapped for a second spark plug per cyl, etc--ready to bolt on) generally costs about $300, i.e. $150 per cylinder. A head for one cylinder for a Jab 2200 costs about $700. Similar cost differences exist for valves, gaskets, etc--about anything you might need to keep an engine running.
mcrae0104, Pops and Topaz like this.
17. Nov 22, 2015
### Pops
#### Well-Known Member
Joined:
Jan 1, 2013
Messages:
7,379
6,321
Location:
USA.
Can do a major overhaul on a VW for about $700. That includes new pistons, pins, cylinders, rings, rod-main-cam bearings, gasket set, oil pump, valves and valve guides and something to drink while you are doing the work.
The 1835/1914 VW engine is the most bang for the buck.
Dan
Topaz and Vigilant1 like this.
18. Nov 22, 2015
#### Well-Known Member
Joined:
Feb 5, 2008
Messages:
1,258
511
Another reason the Jab 2200 isn't used as often in Sonex's is that it's too light and makes for an aft CG situation.
Vigilant1 likes this.
19. Nov 22, 2015
### Turd Ferguson
#### Well-Known Member
Joined:
Mar 14, 2008
Messages:
4,891
1,791
Location:
Upper midwest in a house
Good luck with your project. A plane that will carry two full sized US adults on VW power is an elusive bird. If you can pull it off, you'll have a hit.
Pops likes this.
20. Nov 22, 2015
Joined:
Nov 20, 2015
Messages:
15
|
{}
|
Hello and Welcome to SSCE (WAEC and NECO) Practice Test - Mathematics
1. You are to attempt 20 Random Objectives Questions ONLY for 15 minutes.
2. Supply your name and name of school in the text box below.
Full Name (Surname First):
School:
In the diagram,
PQ is a straight line. Calculate the value of the angle labelled 2y.
A. 130° B. 120° C. 110° D. 100°
Solve for x and y respectively in the simultaneous equations
-2x - 5y = 3
x + 3y = 0
A. -3, -9 B. 9, -3 C. -9, 3 D. 3, -9
Rationalise
$\frac{2-\sqrt{5}}{3-\sqrt{5}}$
A. $\frac{1-\sqrt{5}}{2}$ B. $\frac{1-\sqrt{5}}{4}$ C. $\frac{\sqrt{5}-1}{2}$ D. $\frac{1+\sqrt{5}}{4}$
In the diagram, |SR|=|QR|, angle SRP = 65° and angle RPQ = 48°, find angle PRQ
A. 65° B. 45° C. 25° D. 19°
In the diagram, ST//PQ. Reflex angle SRQ = 198° and ∠RQP = 72°. Find the value of y.
A. 18° B. 54° C. 92° D. 108°
Which of the following lines represent the solution of the inequality 7x < 9x - 4?
Represent the inequality -7 < 4x + 9 ≤ 13 on a number line.
A side and a diagonal of a rhombus are 10 cm and 12 cm respectively. Find its area.
A. 20 cm² B. 24 cm² C. 48 cm² D. 96 cm²
If 2q3₅ = 77₈, find q
A. 2 B. 1 C. 4 D. 0
Express $\frac{2}{x+3} - \frac{1}{x-2}$ as a single fraction
A. $\frac{x-7}{x^{2}+x-6}$ B. $\frac{x-1}{x^{2}+x-6}$ C. $\frac{x-2}{x^{2}+x-6}$ D. $\frac{x+7}{x^{2}+x-6}$
Solve the inequality x² + 2x > 15
A. x < -3 or x > 5 B. -5 < x < 3 C. x < 3 or x > 5 D. x > 3 or x < -5
The diagram is a circle center O. If angle SPR = 2m and angle SQR = n, express m in terms of n.
A. m = $\frac{n}{2}$ B. m = 2n C. m = n - 2 D. m = n + 2
Find the standard deviation of the above distribution.
A. $\sqrt{5}$ B. $\sqrt{3}$ C. $\sqrt{7}$ D. $\sqrt{2}$
A chord of a circle of radius 7 cm is 5 cm from the center of the circle. What is the length of the chord?
A. $4\sqrt{6}$ cm B. $3\sqrt{6}$ cm C. $6\sqrt{6}$ cm D. $2\sqrt{6}$ cm
If y varies directly as the square root of (x+1) and y = 6 when x = 3, find x when y = 9.
A. 8 B. 7 C. 6 D. 5
The volume of a cuboid is 54 cm³. If the length, width and height of the cuboid are in the ratio 2 : 1 : 1 respectively, find its total surface area.
A. 108 cm² B. 90 cm² C. 80 cm² D. 75 cm²
Given that (x + 2)(x² - 3x + 2) + 2(x + 2)(x - 1) = (x + 2)M, find M.
A. (x + 2)² B. x(x + 2) C. x² + 2 D. x² - x
The positions of three ships P, Q and R at sea are illustrated in the diagram. The arrows indicate the North direction. The bearing of Q from P is 050° and angle PQR = 72°. Calculate the bearing of R from Q.
A. 130° B. 158° C. 222° D. 252°
The distance between two towns is 50km. It is represented on a map by 5cm. Find the scale used.
A. 1 : 1,000,000 B. 1 : 500,000 C. 1 : 100,000 D. 1 : 10,000
If Uₙ = n(n² + 1), evaluate U₅ − U₄
A. 18 B. 56 C. 62 D. 82
|
{}
|
Troubleshooting¶
• What if I get warnings about SSL while my code is running?
• These types of warnings indicate that you might be using an unsupported version of Python and need additional Python libraries. Some details are provided here.
• How do I check if my local CPLEX® Optimizer is used or not?
• DOcplex examples issue warnings that state whether a CPLEX Optimizer wrapper is present or not. For example, CPLEX wrapper is present, version is 12.6.3.0, located at: C:\CPLEX_Studio1263.
• If an invalid version of CPLEX Optimizer is detected (for example V12.6.1), DOcplex raises an error that indicates the cause.
• If no CPLEX Optimizer version is detected, then DOcplex examples raise the message CPLEX wrapper is not available.
• What if I don’t have pip?
• The lack of pip indicates that you are using an unsupported version of Python since pip is delivered as a standard package with versions 2.7.9+, 3.4, and 3.5.
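• As a quick self-check (assuming a standard CPython install; this is generic Python, not a DOcplex API call), you can print the interpreter version and whether pip is importable:

```python
import sys
import importlib.util

print("Python", ".".join(map(str, sys.version_info[:3])))
has_pip = importlib.util.find_spec("pip") is not None
print("pip importable:", has_pip)
```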
|
{}
|
## Essential University Physics: Volume 1 (3rd Edition)
Since a third of it is under water, the jar itself has a third of the density of water. We then find how many rocks must be added to the given volume to make the average density two thirds that of water. Calling the number of rocks n (each of mass 12 g): $\frac{2}{3}\ \text{g/cm}^3 = \frac{12n}{\pi (2.5)^2 (14)}$ $n = 15.27$
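The last step is pure arithmetic and can be checked directly, taking the numbers in the working at face value (12 g per rock, jar radius 2.5 cm, height 14 cm):

```python
import math

water = 1.0                           # g/cm^3
target = 2 * water / 3                # required average density, g/cm^3
rock_mass = 12.0                      # g per rock, as used in the working
jar_volume = math.pi * 2.5 ** 2 * 14  # cm^3

n = target * jar_volume / rock_mass
print(round(n, 2))   # 15.27
```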
|
{}
|
# Find the indefinite integral: {eq}\int \tan^3(7x) dx {/eq}
## Question:
Find the indefinite integral: {eq}\int \tan^3(7x) dx {/eq}
## Integration by Substitution:
Substitution, frequently referred to as u-substitution or change of variables, is a method for determining integrals and antiderivatives quickly. We substitute {eq}t=f(x)\\ \Rightarrow dt=f'(x) \ dx{/eq}, which may help to integrate the functions with much more ease and in less time.
## Answer and Explanation: 1
{eq}\begin{align} \int \tan^3(7x) dx &=\int \tan^2(7x) \ \tan(7x) \ dx\\ &=\int (\sec^2(7x) -1) \ \tan(7x) \ dx\\ &=\int (\sec^2(7x) \ \tan(7x) - \...
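For completeness, the truncated working can be finished with the same substitution (this is our continuation of the standard derivation, not the hidden portion of the original answer): take u = tan(7x), du = 7 sec²(7x) dx for the first integral, and use ∫tan(7x) dx = -(1/7) ln|cos(7x)| for the second.

```latex
\begin{align}
\int \tan^3(7x)\,dx &= \int \sec^2(7x)\tan(7x)\,dx - \int \tan(7x)\,dx\\
&= \frac{\tan^2(7x)}{14} + \frac{1}{7}\ln\left|\cos(7x)\right| + C
\end{align}
```

Differentiating the result term by term recovers sec²(7x)tan(7x) − tan(7x) = tan³(7x), confirming the antiderivative.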
|
{}
|
A small hint of big data
Shortly before Christmas, I got a few gigabytes of test data from a client and had to make sense of it. The first step was being able to read it.
The data came from a series of sensors installed in some equipment manufactured by the client but owned by one of its customers. It was the customer who had collected the data, and precise information about it was limited at best. Basically, all I knew going in was that I had a handful of very large files, most of them about half a gigabyte, and that they were almost certainly text files of some sort.
One of the files was much smaller than the others, only about 50 MB. I decided to start there and opened it in BBEdit, which took a little time to suck it all in but handled it flawlessly. Scrolling through it, I learned that the first several dozen lines described the data that was being collected and the units of that data. At the end of the header section was a line with just the string
[data]
and after that came line after line of numbers. Each line was about 250 characters long and used DOS-style CRLF line endings. All the fields were numeric and were separated by single spaces. The timestamp field for each data record looked like a floating point number, but after some review, I came to understand that it was an encoding of the clock time in hhmmss.ssss format. This also explained why the files were so big: the records were 0.002 seconds apart, meaning the data had been collected at 500 Hz, much faster than was necessary for the type of information being gathered.
Anyway, despite its excessive volume, the data seemed pretty straightforward, a simple format that I could do a little editing of to get it into shape for importing into Pandas. So I confidently right-clicked one of the larger files to open it in BBEdit, figuring I’d see the same thing. But BBEdit wouldn’t open it.
As the computer I was using has 32 GB of RAM, physical memory didn’t seem like the cause of this error. I had never before run into a text file that BBEdit couldn’t handle, but then I’d never tried to open a 500+ MB file before. I don’t blame BBEdit for the failure—data files like this aren’t what it was designed to edit—but it was surprising. I had to come up with Plan B.
Plan B started with running head -100 on the files to make sure they were all formatted the same way. I learned that although the lengths of the header sections were different, they were collecting the same type of data and using the same space-separated format for the data itself. Also, in each file the header and data were separated by a [data] line.
The next step was stripping out the header lines and transforming the data into CSV format. Pandas can certainly read space-separated data, but I figured that as long as I had to do some editing of the files, I might as well put them into a form that lots of software can read. I considered using a pipeline of standard Unix utilities and maybe Perl to do the transformation, but settled on writing a Python script. Even though such a script was likely to be longer than the equivalent pipeline, my familiarity with Python would make it easier to write.
Here’s the script:
python:
1: #!/usr/bin/env python
2:
3: import sys
4:
5: f = open(sys.argv[1], 'r')
6: for line in f:
7:     if line.rstrip() == '[data]':
8:         break
9:
10: print 'time,...'  # CSV header line (the real field names aren't shown here)
11:
12: for line in f:
13:     print line.rstrip().replace(' ', ',')
(You can see from the print commands that this was done back before I switched to Python 3.)
The script, data2csv, was run from the command line like this for each data file in turn:
data2csv file01.dat > file01.csv
The script takes advantage of the way Python iterates through an open file line-by-line, keeping track of where it left off. The first loop, Lines 6–8, runs through the header lines, doing nothing and then breaking out of the loop when the [data] line is encountered.
Line 10 prints a CSV header line of my own devising. This information was in the original file, but its field names weren’t useful, so it made more sense for me to create my own.
Finally, the loop in Lines 12–13 picks up the file iteration where the previous loop left off and runs through to the end of the file, stripping off the DOS-style line endings and replacing the spaces with commas before printing each line in turn.
Even on my old 2012 iMac, this script took less than five seconds to process the large files, generating CSV files with over two million lines.
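For what it's worth, a Python 3 equivalent of data2csv differs only in the print calls; here's a sketch as a reusable generator (the default header field names are placeholders, since the real ones aren't shown):

```python
#!/usr/bin/env python3
import sys

def data2csv(lines, header='time,...'):
    """Skip everything up to the [data] marker, then yield CSV rows."""
    it = iter(lines)
    for line in it:                      # header section
        if line.rstrip() == '[data]':
            break
    yield header                         # CSV header of one's own devising
    for line in it:                      # data section
        yield line.rstrip().replace(' ', ',')

if __name__ == '__main__' and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        for row in data2csv(f):
            print(row)
```

It relies on the same trick as the original: the two loops share one iterator over the file, so the second loop picks up right after the [data] line.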
I realize my paltry half-gigabyte files don’t really qualify as big data, but they were big to me. I’m usually not foolish enough to run high frequency data collection processes on low frequency equipment for long periods of time. Since the usual definition of big data is something like “too voluminous for traditional software to handle,” and my traditional software is BBEdit, this data set fit the definition for me.
|
{}
|
SKY-MAP.ORG
# χ And (Keun Nan Mun)
### Related articles
TRIDENT: An Infrared Differential Imaging Camera Optimized for the Detection of Methanated Substellar Companions
We describe a near-infrared camera in use at the Canada-France-Hawaii Telescope (CFHT) and at the 1.6 m telescope of the Observatoire du mont Mégantic (OMM). The camera is based on a Hawaii-1 1024 × 1024 HgCdTe array detector. Its main feature is the acquisition of three simultaneous images at three wavelengths across the methane absorption bandhead at 1.6 μm, enabling, in theory, an accurate subtraction of the stellar point-spread function (PSF) and the detection of faint, close, methanated companions. The instrument has no coronagraph and features fast data acquisition, yielding high observing efficiency on bright stars. The performance of the instrument is described, and it is illustrated by laboratory tests and CFHT observations of the nearby stars GL 526, υ And, and χ And. TRIDENT can detect (6 σ) a methanated companion with ΔH=9.5 at 0.5" separation from the star in 1 hr of observing time. Non-common-path aberrations and amplitude modulation differences between the three optical paths are likely to be the limiting factors preventing further PSF attenuation. Instrument rotation and reference-star subtraction improve the detection limit by a factor of 2 and 4, respectively. A PSF noise attenuation model is presented to estimate the non-common-path wave-front difference effect on PSF subtraction performance. Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council of Canada, the Institut National des Science de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii.
Statistical Constraints for Astrometric Binaries with Nonlinear Motion
Useful constraints on the orbits and mass ratios of astrometric binaries in the Hipparcos catalog are derived from the measured proper motion differences of Hipparcos and Tycho-2 (Δμ), accelerations of proper motions (μ˙), and second derivatives of proper motions (μ̈). It is shown how, in some cases, statistical bounds can be estimated for the masses of the secondary components. Two catalogs of astrometric binaries are generated, one of binaries with significant proper motion differences and the other of binaries with significant accelerations of their proper motions. Mathematical relations between the astrometric observables Δμ, μ˙, and μ̈ and the orbital elements are derived in the appendices. We find a remarkable difference between the distribution of spectral types of stars with large accelerations but small proper motion differences and that of stars with large proper motion differences but insignificant accelerations. The spectral type distribution for the former sample of binaries is the same as the general distribution of all stars in the Hipparcos catalog, whereas the latter sample is clearly dominated by solar-type stars, with an obvious dearth of blue stars. We point out that the latter set includes mostly binaries with long periods (longer than about 6 yr).
Astrometric orbits of SB^9 stars
Hipparcos Intermediate Astrometric Data (IAD) have been used to derive astrometric orbital elements for spectroscopic binaries from the newly released Ninth Catalogue of Spectroscopic Binary Orbits (SB^9). This endeavour is justified by the fact that (i) the astrometric orbital motion is often difficult to detect without the prior knowledge of the spectroscopic orbital elements, and (ii) such knowledge was not available at the time of the construction of the Hipparcos Catalogue for the spectroscopic binaries which were recently added to the SB^9 catalogue.
Among the 1374 binaries fromSB^9 which have an HIP entry (excluding binaries with visualcompanions, or DMSA/C in the Double and Multiple Stars Annex), 282 havedetectable orbital astrometric motion (at the 5% significance level).Among those, only 70 have astrometric orbital elements that are reliablydetermined (according to specific statistical tests), and for the firsttime for 20 systems. This represents a 8.5% increase of the number ofastrometric systems with known orbital elements (The Double and MultipleSystems Annex contains 235 of those DMSA/O systems). The detection ofthe astrometric orbital motion when the Hipparcos IAD are supplementedby the spectroscopic orbital elements is close to 100% for binaries withonly one visible component, provided that the period is in the 50-1000 drange and the parallax is >5 mas. This result is an interestingtestbed to guide the choice of algorithms and statistical tests to beused in the search for astrometric binaries during the forthcoming ESAGaia mission. Finally, orbital inclinations provided by the presentanalysis have been used to derive several astrophysical quantities. Forinstance, 29 among the 70 systems with reliable astrometric orbitalelements involve main sequence stars for which the companion mass couldbe derived. Some interesting conclusions may be drawn from this new setof stellar masses, like the enigmatic nature of the companion to theHyades F dwarf HIP 20935. This system has a mass ratio of 0.98 but thecompanion remains elusive. Synthetic Lick Indices and Detection of α-enhanced Stars. II. F, G, and K Stars in the -1.0 < [Fe/H] < +0.50 RangeWe present an analysis of 402 F, G, and K solar neighborhood stars, withaccurate estimates of [Fe/H] in the range -1.0 to +0.5 dex, aimed at thedetection of α-enhanced stars and at the investigation of theirkinematical properties. 
The analysis is based on the comparison of 571sets of spectral indices in the Lick/IDS system, coming from fourdifferent observational data sets, with synthetic indices computed withsolar-scaled abundances and with α-element enhancement. We useselected combinations of indices to single out α-enhanced starswithout requiring previous knowledge of their main atmosphericparameters. By applying this approach to the total data set, we obtain alist of 60 bona fide α-enhanced stars and of 146 stars withsolar-scaled abundances. The properties of the detected α-enhancedand solar-scaled abundance stars with respect to their [Fe/H] values andkinematics are presented. A clear kinematic distinction betweensolar-scaled and α-enhanced stars was found, although a one-to-onecorrespondence to thin disk'' and thick disk'' components cannot besupported with the present data. Searching for Faint Companions with the TRIDENT Differential Simultaneous Imaging CameraWe present the first results obtained at CFHT with the TRIDENT infraredcamera, dedicated to the detection of faint companions close to brightnearby stars. Its main feature is the acquisition of three simultaneousimages in three wavelengths (simultaneous differential imaging) acrossthe methane absorption bandhead at 1.6 microns, that enables a precisesubtraction of the primary star's PSF while keeping the companionsignal. Gl229 and 55 Cnc observations are presented to demonstrateTRIDENT subtraction performances. It is shown that a faint companionwith an H magnitude difference of 10 magnitudes would be detected at 0.5arcsec from the primary. Reprocessing the Hipparcos Intermediate Astrometric Data of spectroscopic binaries. II. Systems with a giant componentBy reanalyzing the Hipparcos Intermediate Astrometric Data of a largesample of spectroscopic binaries containing a giant, we obtain a sampleof 29 systems fulfilling a carefully derived set of constraints andhence for which we can derive an accurate orbital solution. 
Of these,one is a double-lined spectroscopic binary and six were not listed inthe DMSA/O section of the catalogue. Using our solutions, we derive themasses of the components in these systems and statistically analyzethem. We also briefly discuss each system individually.Based on observations from the Hipparcos astrometric satellite operatedby the European Space Agency (ESA 1997) and on data collected with theSimbad database. The Rotation of Binary Systems with Evolved ComponentsIn the present study we analyze the behavior of the rotational velocity,vsini, for a large sample of 134 spectroscopic binary systems with agiant star component of luminosity class III, along the spectral regionfrom middle F to middle K. The distribution of vsini as a function ofcolor index B-V seems to follow the same behavior as their singlecounterparts, with a sudden decline around G0 III. Blueward of thisspectral type, namely, for binary systems with a giant F-type component,one sees a trend for a large spread in the rotational velocities, from afew to at least 40 km s-1. Along the G and K spectral regionsthere are a considerable number of binary systems with moderate tomoderately high rotation rates. This reflects the effects ofsynchronization between rotation and orbital motions. These rotatorshave orbital periods shorter than about 250 days and circular or nearlycircular orbits. Except for these synchronized systems, the largemajority of binary systems with a giant component of spectral type laterthan G0 III are composed of slow rotators. Catalogue of Apparent Diameters and Absolute Radii of Stars (CADARS) - Third edition - Comments and statisticsThe Catalogue, available at the Centre de Données Stellaires deStrasbourg, consists of 13 573 records concerning the results obtainedfrom different methods for 7778 stars, reported in the literature. 
Thefollowing data are listed for each star: identifications, apparentmagnitude, spectral type, apparent diameter in arcsec, absolute radiusin solar units, method of determination, reference, remarks. Commentsand statistics obtained from CADARS are given. The Catalogue isavailable in electronic form at the CDS via anonymous ftp tocdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcar?J/A+A/367/521 K-Band Calibration of the Red Clump LuminosityThe average near-infrared (K-band) luminosity of 238 Hipparcos red clumpgiants is derived and then used to measure the distance to the Galacticcenter. These Hipparcos red clump giants have been previously employedas I-band standard candles. The advantage of the K-band is a decreasedsensitivity to reddening and perhaps a reduced systematic dependence onmetallicity. In order to investigate the latter, and also to refer ourcalibration to a known metallicity zero point, we restrict our sample ofred clump calibrators to those with abundances derived fromhigh-resolution spectroscopic data. The mean metallicity of the sampleis [Fe/H]=-0.18 dex (σ=0.17 dex). The data are consistent with nocorrelation between MK and [Fe/H] and only weakly constrainthe slope of this relation. The luminosity function of the sample peaksat MK=-1.61+/-0.03 mag. Next, we assemble published opticaland near-infrared photometry for ~20 red clump giants in a Baade'swindow field with a mean metallicity of [Fe/H]=-0.17+/-0.09 dex, whichis nearly identical to that of the Hipparcos red clump. Assuming thatthe average (V-I)0 and (V-K)0 colors of these twored clumps are the same, the extinctions in the Baade's window field arefound to be AV=1.56, AI=0.87, andAK=0.15, in agreement with previous estimates. We derive thedistance to the Galactic center: (m-M)0=14.58+/-0.11 mag, orR=8.24+/-0.42 kpc. The uncertainty in this distance measurement isdominated by the small number of Baade's window red clump giantsexamined here. 
Speckle Interferometry of New and Problem HIPPARCOS BinariesThe ESA Hipparcos satellite made measurements of over 12,000 doublestars and discovered 3406 new systems. In addition to these, 4706entries in the Hipparcos Catalogue correspond to double star solutionsthat did not provide the classical parameters of separation and positionangle (rho,theta) but were the so-called problem stars, flagged G,''O,'' V,'' or X'' (field H59 of the main catalog). An additionalsubset of 6981 entries were treated as single objects but classified byHipparcos as suspected nonsingle'' (flag S'' in field H61), thusyielding a total of 11,687 problem stars.'' Of the many ground-basedtechniques for the study of double stars, probably the one with thegreatest potential for exploration of these new and problem Hipparcosbinaries is speckle interferometry. Results are presented from aninspection of 848 new and problem Hipparcos binaries, using botharchival and new speckle observations obtained with the USNO and CHARAspeckle cameras. A catalog of rotational and radial velocities for evolved starsRotational and radial velocities have been measured for about 2000evolved stars of luminosity classes IV, III, II and Ib covering thespectral region F, G and K. The survey was carried out with the CORAVELspectrometer. The precision for the radial velocities is better than0.30 km s-1, whereas for the rotational velocity measurementsthe uncertainties are typically 1.0 km s-1 for subgiants andgiants and 2.0 km s-1 for class II giants and Ib supergiants.These data will add constraints to studies of the rotational behaviourof evolved stars as well as solid informations concerning the presenceof external rotational brakes, tidal interactions in evolved binarysystems and on the link between rotation, chemical abundance and stellaractivity. In this paper we present the rotational velocity v sin i andthe mean radial velocity for the stars of luminosity classes IV, III andII. 
Based on observations collected at the Haute--Provence Observatory,Saint--Michel, France and at the European Southern Observatory, LaSilla, Chile. Table \ref{tab5} also available in electronic form at CDSvia anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/Abstract.html Catalogs of temperatures and [Fe/H] averages for evolved G and K starsA catalog of mean values of [Fe/H] for evolved G and K stars isdescribed. The zero point for the catalog entries has been establishedby using differential analyses. Literature sources for those entries areincluded in the catalog. The mean values are given with rms errors andnumbers of degrees of freedom, and a simple example of the use of thesestatistical data is given. For a number of the stars with entries in thecatalog, temperatures have been determined. A separate catalogcontaining those data is briefly described. Catalog only available atthe CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/Abstract.html Spectroscopic binary orbits from photoelectric radial velocities. Paper 140: Chi AndromedaeNot Available The ROSAT all-sky survey catalogue of optically bright late-type giants and supergiantsWe present X-ray data for all late-type (A, F, G, K, M) giants andsupergiants (luminosity classes I to III-IV) listed in the Bright StarCatalogue that have been detected in the ROSAT all-sky survey.Altogether, our catalogue contains 450 entries of X-ray emitting evolvedlate-type stars, which corresponds to an average detection rate of about11.7 percent. The selection of the sample stars, the data analysis, thecriteria for an accepted match between star and X-ray source, and thedetermination of X-ray fluxes are described. 
Catalogue only available atCDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/Abstract.html A catalogue of [Fe/H] determinations: 1996 editionA fifth Edition of the Catalogue of [Fe/H] determinations is presentedherewith. It contains 5946 determinations for 3247 stars, including 751stars in 84 associations, clusters or galaxies. The literature iscomplete up to December 1995. The 700 bibliographical referencescorrespond to [Fe/H] determinations obtained from high resolutionspectroscopic observations and detailed analyses, most of them carriedout with the help of model-atmospheres. The Catalogue is made up ofthree formatted files: File 1: field stars, File 2: stars in galacticassociations and clusters, and stars in SMC, LMC, M33, File 3: numberedlist of bibliographical references The three files are only available inelectronic form at the Centre de Donnees Stellaires in Strasbourg, viaanonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5), or viahttp://cdsweb.u-strasbg.fr/Abstract.html Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtittle: Radial velocities: The Wilson-Evans-Batten catalogue.We give a common version of the two catalogues of Mean Radial Velocitiesby Wilson (1963) and Evans (1978) to which we have added the catalogueof spectroscopic binary systems (Batten et al. 1989). For each star,when possible, we give: 1) an acronym to enter SIMBAD (Set ofIdentifications Measurements and Bibliography for Astronomical Data) ofthe CDS (Centre de Donnees Astronomiques de Strasbourg). 2) the numberHIC of the HIPPARCOS catalogue (Turon 1992). 3) the CCDM number(Catalogue des Composantes des etoiles Doubles et Multiples) byDommanget & Nys (1994). For the cluster stars, a precise study hasbeen done, on the identificator numbers. Numerous remarks point out theproblems we have had to deal with. 
On the link between rotation and coronal activity in evolved stars.We analyse the behaviour of coronal activity as a function of rotationfor a large sample of single and binary evolved stars for which we haveobtained CORAVEL high precision rotational velocities. This study showsthat tidal effects play a direct role in determining the X-ray activitylevel in binary evolved stars. The circularisation of the orbit is anecessary property for enhanced coronal activity in evolved binarystars. Improved Mean Positions and Proper Motions for the 995 FK4 Sup Stars not Included in the FK5 ExtensionNot Available A measurement of the primordial helium abundance using MU CassiopeiaeSpeckle interferometric observations of the Population II astrometricbinary star Mu Cas have been made at four epochs with a direct imagingCCD system. Using the available orbital data on the system, the massesof the stars have been found to be 0.728 +/- 0.049 solar mass and 0.171+/- 0.008 solar mass. Application of the theoretical mass-luminosity lawto the primary yields a helium abundance of 0.23 +/- 0.05 by mass for ametal abundance of Z = 0.0021 assuming a system age of 13 billion years. X-ray activity as statistical age indicator - The disk G-K giantsFor a sample of late-type disk giant stars, the dependence of coronalemission on age as defined by metallicity and kinematics indicators hasbeen studied. It is found that the mean level of X-ray emission forstars with strong metallic lines and/or small peculiar velocities islarger by about one order of magnitude than the mean level of emissionfor stars with weak lines and/or high peculiar velocities. Hence, it issuggested that the X-ray activity can be used as a statistical ageindicator for late-type giants, as well as the classical metallicity orkinematics indicators. It is found that the spread in metallicitytypical of the Galactic disk accounts for less than 50 percent of theobserved difference in X-ray emission. 
To explain the observations it isargued that other effects should be invoked, such as changes in theefficiency of the stellar magnetic dynamo or the influence ofmetallicity itself on the coronal heating processes. Velocity dispersions and mean abundances for Roman's G5-K1 spectroscopic groupsThe velocity dispersions and U-V distributions of Roman's (1950, 1952)four spectroscopic groups (weak CN, weak line, strong line, and 4150)are compared with those of groups based only on Fe/H ratio. It is shownthat the velocity-dispersion gradient for the Roman spectroscopic groupsis greater than for the comparison group for all three velocitycomponents and for the mean orbital eccentricity, with a clearly definedminimum for the strong-line stars and a sharp upturn for the 4150 stars.The results suggest that the Roman's assignment to distinctspectroscopic groups results in more homogeneous groups than the binningon the basis of metallicity. Rotation and transition layer emission in cool giantsGray (1981, 1982) found that field giants with T(eff) less than about5500 K experience a steep decrease in rotational velocities coupled witha decrease in transition layer emission. This decrease may beattributable to fast magnetic braking or to redistribution of angularmomentum for rapidly increasing depths of the convection zones if theserotate with depth independent specific angular momentum. Additionalarguments in favor of the latter interpretation are presented. Theincrease of N/C abundances due to deep mixing occurs at the same pointas the decrease in v sin i. On the other hand, the ratios of the C IV toC II emission line fluxes decrease at this point indicating smallercontributions of MHD wave heating. The X-ray fluxes decrease at nearlythe same T(eff). 
Thus, no observations are found which would indicatelarger magnetic activity which could lead to fast magnetic braking.Theory predicts a rapid increase in the convection zone depth at theT(eff) where the decrease in v sin i is observed. This can explain theobserved phenomena. A critical appraisal of published values of (Fe/H) for K II-IV stars'Primary' (Fe/H) averages are presented for 373 evolved K stars ofluminosity classes II-IV and (Fe/H) values beween -0.9 and +0.21 dex.The data define a 'consensus' zero point with a precision of + or -0.018 dex and have rms errors per datum which are typically 0.08-0.16dex. The primary data base makes recalibration possible for the large(Fe/H) catalogs of Hansen and Kjaergaard (1971) and Brown et al. (1989).A set of (Fe/H) standard stars and a new DDO calibration are given whichhave rms of 0.07 dex or less for the standard star data. For normal Kgiants, CN-based values of (Fe/H) turn out to be more precise than manyhigh-dispersion results. Some zero-point errors in the latter are alsofound and new examples of continuum-placement problems appear. Thushigh-dispersion results are not invariably superior to photometricmetallicities. A review of high-dispersion and related work onsupermetallicity in K III-IV star is also given. CA II H and K measurements made at Mount Wilson Observatory, 1966-1983Summaries are presented of the photoelectric measurements of stellar CaII H and K line intensity made at Mount Wilson Observatory during theyears 1966-1983. These results are derived from 65,263 individualobservations of 1296 stars. For each star, for each observing season,the maximum, minimum, mean, and variation of the instrumental H and Kindex 'S' are given, as well as a measurement of the accuracy ofobservation. A total of 3110 seasonal summaries are reported. 
Factorswhich affect the ability to detect stellar activity variations andaccurately measure their amplitudes, such as the accuracy of the H and Kmeasurements and scattered light contamination, are discussed. Relationsare given which facilitate intercomparison of 'S' values with residualintensities derived from ordinary spectrophotometry, and for convertingmeasurements to absolute fluxes. High-resolution spectroscopic survey of 671 GK giants. I - Stellar atmosphere parameters and abundancesA high-resolution spectroscopic survey of 671 G and K field giants isdescribed. Broad-band Johnson colors have been calibrated againstrecent, accurate effective temperature, T(eff), measurements for starsin the range 3900-6000 K. A table of polynomial coefficients for 10color-T(eff) relations is presented. Stellar atmosphere parameters,including T(eff), log g, Fe/H, and microturbulent velocity, are computedfor each star, using the high-resolution spectra and various publishedphotometric catalogs. For each star, elemental abundances for a varietyof species have been computed using a LTE spectrum synthesis program andthe adopted atmosphere parameters. Einstein Observatory magnitude-limited X-ray survey of late-type giant and supergiant starsResults are presented of an extensive X-ray survey of 380 giant andsupergiant stars of spectral types from F to M, carried out with theEinstein Observatory. It was found that the observed F giants orsubgiants (slightly evolved stars with a mass M less than about 2 solarmasses) are X-ray emitters at the same level of main-sequence stars ofsimilar spectral type. The G giants show a range of emissions more than3 orders of magnitude wide; some single G giants exist with X-rayluminosities comparable to RS CVn systems, while some nearby large Ggiants have upper limits on the X-ray emission below typical solarvalues. The K giants have an observed X-ray emission level significantlylower than F and F giants. 
None of the 29 M giants were detected, exceptfor one spectroscopic binary. Chromospheric activity in evolved stars - The rotation-activity connection and the binary-single dichotomyA tabulation of measured values of the Ca II H and K (S) index aretransformed to the original Mount Wilson definition of the index. Thetabulation includes main-sequence, evolved, single, and tidally coupled(RS CVn) binary stars. The (S) indices are analyzed against Wilson's(1976) I(HK) intensity estimates, showing that Wilson's estimates areonly a two-state indicator. Ca II H and K fluxes are computed andcalibrated with published values of rotation periods. It is found thatthe single and binary stars are consistent with a single relationshipbetween rotation and Ca II excess emission flux. Catalogue of the energy distribution data in spectra of stars in the uniform spectrophotometric system.Not Available Energy Distribution Data in the Spectra of 72 Stars in the Region Lambda 3200A to 7600ANot Available Binary stars unresolved by speckle interferometry. IIIThe KPNO's 4-m telescope was used in 1975-1981 to determine the epochsof 1164 speckle observations for 469 unresolved, known or suspectedbinary stars. The data, presented in tabular form, encompass visualbinaries with eccentric orbits, occultation binaries, astrometricbinaries, Hyades stars of known or suspected duplicity, and many longperiod spectroscopic binaries.
### Member of following groups:
#### Observation and Astrometry data
• Constellation: Andromeda
• Right ascension: 01h39m21.00s
• Declination: +44°23'10.0"
• Apparent magnitude: 4.98
• Distance: 74.294 parsecs
• Proper motion RA: -23.8
• Proper motion Dec: 11.8
• B-T magnitude: 6.106
• V-T magnitude: 5.106
Catalogs and designations:
• Proper Names: Keun Nan Mun
• Bayer: χ And
• Flamsteed: 52 And
• HD 1989: HD 10072
• TYCHO-2 2000: TYC 2826-2183-1
• USNO-A2.0: USNO-A2 1275-00982752
• BSC 1991: HR 469
• HIP: HIP 7719
|
{}
|
# Samples and Statistics: Distinguishing Populations of Hot Jupiters in a Growing Dataset
Authors: Benjamin E. Nelson, Eric B. Ford, and Frederic A. Rasio
First Author’s Institution: Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) and Department of Physics and Astronomy, Northwestern University, IL, USA; Northwestern Institute for Complex Systems, IL, USA
Status: Submitted to AJ, open access
### Frolicking through fields of data
The future of astronomy observations seems as bright as the night sky… and just as crowded! Over the next decade, several truly powerful telescopes are set to launch (read about a good number of them here and also here). That means we’re going to have a LOT of data on everything from black holes to galaxies, and beyond – and that’s in addition to the huge fields of data from the past decade that we’re already frolicking through now. It’s certainly far more data than any one astronomer (or even a group of astronomers) wants to analyze one-by-one – that’s why these days, astronomers turn more and more to the power of astrostatistics to characterize their data.
The authors of today’s astrobite had that goal in mind. They explored a widely applicable, data-driven statistical method for distinguishing different populations in a sample of data. In a sentence: they took a large sample of hot Jupiters and used this technique to try to separate out different populations within their sample, based on how the planets formed. Let’s break down exactly what they did, and how they did it, in the next few sections!
### Hot Jupiters are pretty cool
First question: what’s a hot Jupiter, anyway?
They’re actually surprisingly well-named: essentially, they are gas giant planets like Jupiter, but are much, much hotter. (Read all about them in previous astrobites, like this one and this other one!) Hot Jupiters orbit perilously close to their host stars – closer even than Mercury does in our own Solar System, for example. But it seems they don’t start out there. It’s more likely that these hot Jupiters formed out at several AU from their host stars, and then migrated inward into the much closer orbits from there.
Figure 1: A gorgeous artist’s impression of a hot Jupiter orbiting around its host star. Image credit goes to ESO/L. Calçada.
As to why hot Jupiters migrate inward… well, it’s still unclear. Today’s authors focused on two migration pathways that could lead to two distinct populations of hot Jupiters in their sample. These migration theories, as well as what the minimum allowed distance to the host star (the famous Roche separation distance, aRoche) would be in each case, are as follows:
• Disk migration: hot Jupiters interact with their surrounding protoplanetary disk, and these interactions push their orbits inward. In this context, aRoche corresponds to the minimum distance that a hot Jupiter could orbit before its host star either (1) stripped away all of the planet’s gas or (2) ripped the planet apart.
• Eccentric migration: hot Jupiters start out on very eccentric (as in, more elliptical than circular) orbits, and eventually their orbits morph into circular orbits of distance 2aRoche. In this context, aRoche refers to the minimum distance that a hot Jupiter could orbit before the host star pulled away too much mass from the planet.
The authors defined a parameter ‘x’ for a given hot Jupiter to be x = a/aRoche, where ‘a’ is the planet’s observed semi-major axis. Based on the minimum distances in the above theories, we could predict that hot Jupiters that underwent disk migration would have a minimum x value of x = aRoche/aRoche = 1. On the other hand, hot Jupiters that underwent eccentric migration would instead have a minimum x value of x = 2aRoche/aRoche = 2. This x for a given planet is proportional to the planet’s orbital period ‘P’, its radius ‘R’, and its mass ‘M’ in the following way:
x = a/aRoche $\propto$ (P$^{2/3}$)(M$^{1/3}$)/R
And this x served as a key parameter in the authors’ statistical models!
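To make that scaling concrete, here is a short Python sketch (not from the paper) that estimates x for a toy hot Jupiter. The Roche-separation prefactor of 2.16 for a fluid planet and the solar/Jovian values are common textbook assumptions, and the function names are mine, not the authors'.

```python
# Toy estimate of x = a / a_Roche for a hot Jupiter. Assumptions (not from the
# paper): a_Roche ≈ 2.16 R_p (M_*/M_p)^(1/3), and Sun/Jupiter fiducial values.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg
R_JUP = 6.9911e7       # m

def semi_major_axis(period_days, m_star=M_SUN):
    """Kepler's third law: a^3 = G M_* P^2 / (4 pi^2)."""
    p_sec = period_days * 86400.0
    return (G * m_star * p_sec ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)

def a_roche(r_planet=R_JUP, m_planet=M_JUP, m_star=M_SUN):
    """Approximate Roche separation for a fluid planet."""
    return 2.16 * r_planet * (m_star / m_planet) ** (1.0 / 3.0)

def x_param(period_days, r_planet=R_JUP, m_planet=M_JUP, m_star=M_SUN):
    """x grows as P^(2/3) M_p^(1/3) / R_p, matching the proportionality above."""
    return semi_major_axis(period_days, m_star) / a_roche(r_planet, m_planet, m_star)

print(round(x_param(3.0), 2))  # a 3-day Jupiter analog sits a few Roche radii out
```

For a Jupiter analog on a 3-day orbit around a Sun-like star this gives x ≈ 4, comfortably above both proposed cutoffs of 1 (disk migration) and 2 (eccentric migration).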
### Toying with Bayesian statistics
Next question: how did today’s authors statistically model their data?
Figure 2: Probability distribution of x for each observation group, assuming that each hot Jupiter orbit was observed along the edge (like looking at the thin edge of a DVD). The bottom panel zooms in on the top one. Note how the samples have different minimum values! From Figure 1 in the paper.
Short answer: with Bayesian statistics. Basically, the authors modeled how the parameter x is distributed within their planet sample with truncated power laws: x raised to some power, cut off between minimum and maximum x values. They split their sample of planets into two groups, based on the telescope and technique used to observe the planets: “RV+Kepler” and “HAT+WASP”. Figure 2 displays the distribution of x for each of the subgroups.
The authors then used the Markov Chain Monte Carlo method (aka, MCMC; see the Bayesian statistics link above!) to explore what sort of values of the power laws’ powers and cutoffs would well represent their data. Based on their chosen model form, they found that the RV+Kepler sample fit well with their model relating to eccentric migration. On the other hand, they found evidence that the HAT+WASP sample could be split into two populations: about 15% of those planets corresponded to disk migration, while the other 85% or so corresponded to eccentric migration.
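As a rough illustration of the idea (not the authors' actual model or code), the sketch below draws a mock sample of x values from a truncated power law and then recovers the exponent and lower cutoff by a brute-force maximum-likelihood grid search. The paper instead explores the posterior with MCMC, but the ingredients, a power-law density with hard cutoffs, are the same; all the numbers here are made up for the demo.

```python
# Fit a truncated power law p(x) ~ x^(-gamma) on [x_min, x_max] to mock data.
import math, random

def sample_truncated_powerlaw(gamma, x_min, x_max, n, rng=random.Random(0)):
    """Inverse-CDF sampling from p(x) ~ x^-gamma on [x_min, x_max] (gamma != 1)."""
    a, b = x_min ** (1 - gamma), x_max ** (1 - gamma)
    return [(a + rng.random() * (b - a)) ** (1 / (1 - gamma)) for _ in range(n)]

def log_likelihood(xs, gamma, x_min, x_max):
    """Log-likelihood of the truncated power law with hard cutoffs."""
    if any(x < x_min or x > x_max for x in xs):
        return -math.inf  # a cutoff that excludes a data point has zero likelihood
    norm = (x_max ** (1 - gamma) - x_min ** (1 - gamma)) / (1 - gamma)
    return sum(-gamma * math.log(x) for x in xs) - len(xs) * math.log(norm)

# Mock "eccentric migration" population: lower cutoff at x = 2.
data = sample_truncated_powerlaw(gamma=2.0, x_min=2.0, x_max=10.0, n=2000)

# Brute-force grid search over (gamma, x_min); a stand-in for MCMC exploration.
best = max(
    ((g / 10, c / 10) for g in range(11, 40) for c in range(10, 25)),
    key=lambda p: log_likelihood(data, p[0], p[1], 10.0),
)
print(best)  # should land near the true values (2.0, 2.0)
```

Note how the hard lower cutoff is what makes the populations separable: any trial x_min above the smallest observed x is ruled out outright, which is exactly why a sample with no planets below x = 2 points toward eccentric migration.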
Remember that a major goal of today’s authors was to see if they could use this statistical approach to distinguish between planet populations in their sample… and in that endeavor, they were successful! The authors were thus optimistic about using this statistical technique for a much larger sample of hot Jupiters in the future, as oodles of data stream in from telescopes and surveys like KELT, TESS, and WFIRST over the next couple of decades.
Their success joins the swelling toolbox of astrostatistics… and just in time! Telescopes of the present and very-near future are going to flood our computers with data – so unless we’re willing to examine every bright spot we observe in the sky by hand, we’ll need all the help from statistics that we can get!
|
{}
|
# Counting radial ridges on an image
After happily using the v9 image assistant to crop elliptically an image, and then the drawing tools to put a white disk in the middle, I turned this image:
into the following one:
that can be imported with
im=Import["http://i.stack.imgur.com/NNzNM.png"]
The overall objective is to programmatically count the number of those radial lines. Due to the lighting, there are parts of the image in which those lines are darker than their surroundings and others where they are lighter.
So far I haven't found a good way worth posting, so any pointer to a good solution would be appreciated. I have the feeling the image processing people will see better ways of dealing with this and I will be grateful to learn something. Thanks a lot
EDIT
A first approach with @RahulNarain's suggestion would be
int = ListInterpolation[
ImageData[ColorConvert[im, "GrayLevel"]], {{-1, 1}, {-1, 1}}];
polInt = Function[t, int[0.95 Cos[2 \[Pi] t], 0.95 Sin[2 \[Pi] t]]];
Plot[polInt[t], {t, 0, 1}, AspectRatio -> 0.2, ImageSize -> Large,
PlotRange -> Full]
Now,
ListLinePlot[Abs@Fourier[polInt@Range[0, 1 - 0.001, 0.001]],
PlotRange -> {{0, 500}, {0, 50}}]
A better zoom shows the maximum at 98
However, manual counting (could be wrong) gave me 96, and nikie's approach is suggesting 97. A count that is off by 1 or 2 could be due to the lighting changes making the real ridge a local minimum in some places and a local maximum in others.
I'm by no means proficient in image manipulation, but Sharpen may be what you're looking for - nest it 10 times. – VF1 Dec 8 '12 at 6:10
Maybe: build an interpolating function from the image, sample it over a circular path something like $(0.95\cos\theta, 0.95\sin\theta)$, and then analyze the variation of intensity as a function of $\theta$. That reduces it to a one-dimensional signal processing problem. – Rahul Dec 8 '12 at 7:40
@RahulNarain nice idea, I'll give it a shot now :) – Rojo Dec 8 '12 at 7:41
The big version image is rather tiny... – Yves Klett Dec 8 '12 at 7:58
@YvesKlett haha, I thought it wasn't appropriate to upload the big version, but I can do it if you want – Rojo Dec 8 '12 at 7:59
A simple first try would be to rotate the image, then measure the distance to the original image:
img = ColorConvert[Import["http://i.stack.imgur.com/NNzNM.png"],
"Grayscale"];
Monitor[t =
Table[{i,
ImageDistance[img, ImageRotate[img, 360°/i, Full, Background -> White]]}, {i, 10,
200}], i];
ListLinePlot[t]
Obviously, if the angle is exactly 360° / [number of radial lines], the distance should be lowest, so the estimated count would be:
count = Extract[t, Position[t[[All, 2]], Min[t[[All, 2]]]]][[1, 1]]
which is 97
If I overlay 97 radial lines over your image, it seems as if the count wasn't too far off. I can't tell if it's exact, though:
center = 0.5 ImageDimensions[img];
Show[img,
Graphics[
{Red, Table[
Line[{center + 80 {Cos[\[Phi]], Sin[\[Phi]]},
center + 250 {Cos[\[Phi]], Sin[\[Phi]]}}], {\[Phi], 0, 2 \[Pi],
2 \[Pi]/count}]}]]
EDIT: I've been playing with this some more, especially with the FFT idea. First, I polar-transform the image:
polar = ImageTransformation[
img, #[[2]]*{Cos[#[[1]]], Sin[#[[1]]]} &, {500, 20},
PlotRange -> {{0, 2 \[Pi]}, {0.9, 1.0}},
DataRange -> {{-1, 1}, {-1, 1}}]
Then I've applied a windowed Fourier transform to the mean of that signal:
mean = Mean[ImageData[polar]];
window = Array[HannWindow, Length[mean], {-1.5, 1.5}];
stft = Table[(Abs[Fourier[mean*RotateLeft[window, i]]][[
80 ;; 120]]^2) // #/Max[#] &, {i, 0, 500}];
ArrayPlot[stft\[Transpose], ColorFunction -> GrayLevel,
DataRange -> {{1, 500}, {80, 120}}, FrameTicks -> True]
The windowed Fourier transform looks as if the frequency isn't constant over the whole area. Which would make sense, if the center isn't perfect or if there's an affine/perspective transformation. Sadly, I'm not sure what to do with this, but I thought I'd post it, in case it gives somebody else an idea.
Very smart approach! Thanks a lot! What I'm trying using Rahul's suggestion so far is giving me 98 or 99, but counting them manually gives me 96. Haven't triple checked manually, you can imagine why – Rojo Dec 8 '12 at 8:48
If you want 96, you could use DistanceFunction -> "MutualInformationVariation". That's not very scientific, though. ;-) – nikie Dec 8 '12 at 8:49
After your polar transform, do a column sum to make it 1d, and then MaxDetect would seem suited for the problem? – JxB Dec 8 '12 at 14:23
+1 very original – Vitaliy Kaurov Dec 8 '12 at 15:22
Nice edit. Perhaps you could have used SpectrogramArray for the windowed fourier transform – Rojo Dec 8 '12 at 18:13
I think the big bright spots in the lower half of the image are confusing the Fourier transform, because they lie right in between where two bright lines should be, and so they are exactly out of phase with the rest of the signal. How about I just throw them away?*
polInt = Function[t,
If[(t > 0.11 && t < 0.16) || (t > 0.89), 0.2,
int[0.95 Cos[2 \[Pi] t], 0.95 Sin[2 \[Pi] t]]]];
Plot[polInt[t], {t, 0, 1}, AspectRatio -> 0.2, ImageSize -> Large,
PlotRange -> Full]
Then the Fourier transform has a maximum at 96, as you want.
* This procedure is entirely unscientific. Chopping data at hard boundaries is not recommended by signal processing experts. Do not try this at home. Void where prohibited.
Actually... Does the output of Fourier have frequency 0 (the DC component) at position 1? Then the peak here is at 95 cycles, which might be a bad thing. – Rahul Dec 8 '12 at 11:16
A. Cubes Sorting
time limit per test: 1 second
memory limit per test: 256 megabytes
input: standard input
output: standard output
For god's sake, you're boxes with legs! It is literally your only purpose! Walking onto buttons! How can you not do the one thing you were designed for?
Oh, that's funny, is it? Oh it's funny? Because we've been at this for twelve hours and you haven't solved it either, so I don't know why you're laughing. You've got one hour! Solve it!
Wheatley decided to try to make a test chamber. He made a nice test chamber, but there was only one detail absent — cubes.
For completing the chamber Wheatley needs $n$ cubes. $i$-th cube has a volume $a_i$.
Wheatley has to place cubes in such a way that they would be sorted in a non-decreasing order by their volume. Formally, for each $i>1$, $a_{i-1} \le a_i$ must hold.
To achieve his goal, Wheatley can exchange two neighbouring cubes. It means that for any $i>1$ you can exchange cubes on positions $i-1$ and $i$.
But there is a problem: Wheatley is very impatient. If Wheatley needs more than $\frac{n \cdot (n-1)}{2}-1$ exchange operations, he won't do this boring work.
Wheatley wants to know: can the cubes be sorted under these conditions?
Input
Each test contains multiple test cases.
The first line contains one positive integer $t$ ($1 \le t \le 1000$), denoting the number of test cases. Description of the test cases follows.
The first line of each test case contains one positive integer $n$ ($2 \le n \le 5 \cdot 10^4$) — number of cubes.
The second line contains $n$ positive integers $a_i$ ($1 \le a_i \le 10^9$) — volumes of cubes.
It is guaranteed that the sum of $n$ over all test cases does not exceed $10^5$.
Output
For each test case, print a word in a single line: "YES" (without quotation marks) if the cubes can be sorted and "NO" (without quotation marks) otherwise.
Example
Input
3
5
5 3 2 1 4
6
2 2 2 2 2 2
2
2 1
Output
YES
YES
NO
Note
In the first test case it is possible to sort all the cubes in $7$ exchanges.
In the second test case the cubes are already sorted.
In the third test case we can make $0$ exchanges, but the cubes are not sorted yet, so the answer is "NO".
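The statement does not include a solution, but one standard way to reason about it: the minimum number of adjacent exchanges needed to sort an array equals its number of inversions, which is at most n(n-1)/2 and reaches that bound only for a strictly decreasing array (any pair of equal or increasing neighbours removes at least one inversion). So the answer is "NO" exactly when the array is strictly decreasing. A sketch under that reasoning:

```python
def can_sort(a):
    # Min adjacent swaps to sort = number of inversions. The only array with
    # the maximum n*(n-1)/2 inversions is a strictly decreasing one, so the
    # budget of n*(n-1)/2 - 1 swaps fails only in that case.
    return "NO" if all(a[i - 1] > a[i] for i in range(1, len(a))) else "YES"

print(can_sort([5, 3, 2, 1, 4]))     # → YES
print(can_sort([2, 2, 2, 2, 2, 2]))  # → YES
print(can_sort([2, 1]))              # → NO
```

This matches the sample: the all-equal array has zero inversions, and [2, 1] is strictly decreasing.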
Which functions are tempered distributions?
Today's problem originates in this conversation with Willie Wong about the Fourier transform of a Gaussian function
$$g_{\sigma}(x)=e^{-\sigma \lvert x \rvert^2},\quad x \in \mathbb{R}^n;$$
where $\sigma$ is a complex parameter. When $\Re (\sigma) \ge 0$, $g_\sigma$ is a tempered distribution$^{[1]}$ and so it is Fourier transformable.
On the contrary, it appears obvious that if $\Re(\sigma) <0$ then $g_\sigma$ is not tempered.
Question 1. What is the fastest way to prove this?
My guess is that one should exploit the fact that the pairing $$\int_{\mathbb{R}^n} g_\sigma(x)\varphi(x)\, dx$$ makes no sense for some $\varphi \in \mathcal{S}(\mathbb{R}^n)$. But is it enough? I am afraid that this argument is incomplete.
Question 2. More generally, is there some characterization of tempered functions, that is, functions which belong to the space $L^1_{\text{loc}}(\mathbb{R})\cap \mathcal{S}'(\mathbb{R})$?
The only tempered functions that I know are polynomially growing functions. By this I mean the functions of the form $Pu$, where $P$ is a polynomial and $u \in L^p(\mathbb{R}^n)$ for some $p\in[1, \infty]$.
Question 3. Is it true that all tempered functions are polynomially growing functions?
$^{[1]}$ The definition of tempered distribution I refer to is the following.
A distribution $T \in \mathcal{D}'(\mathbb{R}^n)$ is called tempered if for every sequence $\varphi_n \in \mathcal{D}(\mathbb{R}^n)$ such that $\varphi_n \to 0$ in the Schwartz class sense, it happens that $\langle T, \varphi_n \rangle \to 0$. If this is the case then $T$ uniquely extends to a continuous linear functional on $\mathcal{S}(\mathbb{R}^n)$ and we write $T \in \mathcal{S}'(\mathbb{R}^n)$.
Question 1 What you have is almost enough. Assume $\Re\sigma \leq -\epsilon < 0$. Test $\exp (-\sigma |x|^2)$ "against" $\phi(x) = \exp( (\sigma+\epsilon/2)|x|^2)$ in the following way: you can construct a sequence of annular cut-off functions $\chi_k$ such that $\chi_k \phi \to 0$ in $\mathcal{S}$ (using the exponential decay of $\phi$) and $\langle g_\sigma, \chi_k\phi\rangle > c > 0$ for all $k$.
Question 2 You have the structure theorem of tempered distributions. (See Theorem 8.3.1 in Friedlander and Joshi).
Theorem Every tempered distribution is a (distributional) derivative of finite order of some continuous function of polynomial growth.
If you intersect against $L^1_{loc}$, this just guarantees that the distributional derivative is actually the weak derivative. From this you can conclude that an appropriate version of what you stated is true.
• Ok for the annular cut-off argument. Very nice, too! I see that the question is not as trivial as I would have expected. In fact, I thought it was easy to prove something like "if there exists a $\varphi \in \mathcal{S}$ s.t. $f\varphi \notin L^1$, then $f$ is not tempered". I will have a look at that book you are recommending. Thank you for everything, you are helping me quite a bit in those days! – Giuseppe Negro Apr 23 '11 at 17:37
Thinking $$L^1_{loc}\cap\mathcal{S}'$$ as a subset of $$\mathcal{D}'$$, it is not true that every element of $$L^1_{loc}\cap\mathcal{S}'$$ is a polynomially growing function.
For example, in $$\mathbb{R}$$, define
$$f:\mathbb{R}\to\mathbb{R}, t\mapsto \cos(e^t) e^t.$$
Then $$f\in L^1_{loc}(\mathbb{R})$$, so it represents by integral pairing an element of $$\mathcal{D}'(\mathbb{R})$$.
Also, define:
$$g:\mathbb{R}\to\mathbb{R}, t\mapsto \sin(e^t).$$
Then $$g\in L^\infty(\mathbb{R})$$, so it represents by integral pairing an element of $$\mathcal{S}'(\mathbb{R})$$.
Also, denoting the distributional derivative with the symbol $$D$$, we have that:
$$\forall\varphi\in\mathcal{D}(\mathbb{R}), f(\varphi)= \int_\mathbb{R}f(t)\varphi(t)\operatorname{d}t =\int_\mathbb{R} \cos(e^t) e^t\varphi(t)\operatorname{d}t \\ = \int_\mathbb{R} \left(\frac{\operatorname{d}}{\operatorname{d}t}\sin(e^t)\right)\varphi(t)\operatorname{d}t =- \int_\mathbb{R} \sin(e^t)\varphi'(t)\operatorname{d}t = -g(\varphi')=Dg(\varphi).$$ So, being $$\mathcal{S}'(\mathbb{R})$$ closed with respect to distributional derivative, we get that $$f=Dg\in\mathcal{S}'(\mathbb{R})$$.
However $$f$$ is not a polynomially growing function, so we have got an example of $$f\in L^1_{loc}(\mathbb{R})\cap\mathcal{S}'(\mathbb{R})$$ that is not of polynomial growth.
# [Haskell] Extensible records: Static duck typing
Cale Gibbard cgibbard at gmail.com
Tue Feb 5 08:01:07 EST 2008
On 05/02/2008, John Meacham <john at repetae.net> wrote:
> choice 2: use ', declare that any identifier that _begins_ with ' always
> refers to a label selection function
>
> 'x point
>
> (snip)
>
> none are fully backwards compatible. I am still not sure which I like
> the best, ' has a lot of appeal to me as it is very simple to type and
> lightweight visually.
I also like this idea. Retaining the ability to treat selection as a
function easily is quite important, and this meets that criterion
nicely. Also, in which case does this cause a program to break? It
seems that you're only reinterpreting what would be unterminated
character literals.
Did you consider any options with regard to the syntax for variants as
introduced in the paper? Perhaps something like (: and :) brackets
could be used in place of the \langle and \rangle brackets used in the
paper. Labels would still start with single quotes. We wouldn't need
the decomposition syntax, just case, altered to agree with Haskell's
existing syntax for case. Pattern matching against labels (whose names
start with a single quote) unambiguously makes it clear that we're
working with variants.
- Cale
# hamiltonian
## Hamiltonian cycles in powers of infinite graphs ★★
Author(s): Georgakopoulos
Conjecture
1. If $G$ is a countable connected graph, then its third power $G^3$ is hamiltonian.
2. If $G$ is a 2-connected countable graph, then its square $G^2$ is hamiltonian.
Keywords: hamiltonian; infinite graph
## Hamiltonian cycles in line graphs of infinite graphs ★★
Author(s): Georgakopoulos
Conjecture
1. If $G$ is a 4-edge-connected locally finite graph, then its line graph $L(G)$ is hamiltonian.
2. If the line graph $L(G)$ of a locally finite graph $G$ is 4-connected, then $L(G)$ is hamiltonian.
Keywords: hamiltonian; infinite graph; line graphs
## Hamiltonian cycles in line graphs ★★★
Author(s): Thomassen
Conjecture Every 4-connected line graph is hamiltonian.
Keywords: hamiltonian; line graphs
## Infinite uniquely hamiltonian graphs ★★
Author(s): Mohar
Problem Are there any uniquely hamiltonian locally finite 1-ended graphs which are regular of degree ?
## r-regular graphs are not uniquely hamiltonian. ★★★
Author(s): Sheehan
Conjecture If $G$ is a finite $r$-regular graph, where $r \ge 3$, then $G$ is not uniquely hamiltonian.
Keywords: hamiltonian; regular; uniquely hamiltonian
## Barnette's Conjecture ★★★
Author(s): Barnette
Conjecture Every 3-connected cubic planar bipartite graph is Hamiltonian.
Keywords: bipartite; cubic; hamiltonian
## Hamiltonian paths and cycles in vertex transitive graphs ★★★
Author(s): Lovasz
Problem Does every connected vertex-transitive graph have a Hamiltonian path?
Keywords: cycle; hamiltonian; path; vertex-transitive
A strip of invisible tape 0.16 m long by 0.015 m wide is charged uniformly with a total net charge of 5 nC (nano = 1e-9) and is suspended horizontally, so it lies along the x axis, with its center at the origin, as shown in the diagram.
Calculate the approximate electric field at location <0, 0.03, 0> m (location A) due to the strip of tape. Do this by dividing the strip into three equal sections, as shown in the diagram, and approximating each section as a point charge.
What is the approximate electric field at A due to piece #1?
< , , > N/C
What is the approximate electric field at A due to piece #2?
< , , > N/C
What is the approximate electric field at A due to piece #3?
< , , > N/C
What is the approximate net electric field at A?
< , , > N/C
What could you do to improve the accuracy of your calculation?
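One possible way to set up the calculation as a sketch (the placement of the three point charges at x = -L/3, 0, +L/3 and all variable names are my reading of the problem, not part of the statement):

```python
# Approximate E at A = (0, 0.03, 0) m from three point charges standing in
# for the three equal sections of the tape.
K = 8.99e9   # Coulomb constant, N*m^2/C^2
Q = 5e-9     # total charge on the tape, C
L = 0.16     # tape length along x, m
q = Q / 3    # charge of each one-third section

# Each section is L/3 long, so the section centers sit at x = -L/3, 0, +L/3.
centers = [(-L / 3, 0.0, 0.0), (0.0, 0.0, 0.0), (L / 3, 0.0, 0.0)]
A = (0.0, 0.03, 0.0)

def point_charge_field(q, src, obs):
    # E = K*q*r_hat/|r|^2, with r pointing from the charge to the observer.
    r = [o - s for o, s in zip(obs, src)]
    d = sum(c * c for c in r) ** 0.5
    return [K * q * c / d**3 for c in r]

fields = [point_charge_field(q, c, A) for c in centers]
net = [sum(f[i] for f in fields) for i in range(3)]
```

By symmetry the x-components of pieces #1 and #3 cancel, so the net field points in +y; accuracy improves by dividing the strip into more, smaller pieces.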
Open access peer-reviewed chapter
# Low-Dose Computed Tomography Screening for Lung Cancer
By Trevor Keith Rogers
Submitted: May 24th 2016. Reviewed: October 17th 2016. Published: March 1st 2017
DOI: 10.5772/66358
## Abstract
In the landmark American National Lung Cancer Screening Trial (NLST), low-dose CT (LDCT) screening produced a relative mortality reduction of 20%. These results have not been replicated in any of the European studies, although these are of limited statistical power. Besides doubt about the general applicability of the NLST findings, if LDCT screening is to be successfully implemented, a number of developments are still required, including better characterisation of entry criteria and refinement of screening and nodule management protocols. The high incidence of false-positive findings increases costs and morbidity. Even when histologically malignant tumours are identified, frequently these would not have manifested as disease, i.e. they are “overdiagnosed”. These patients are liable to receive unnecessary treatment. LDCT screening is relatively expensive in comparison with other cancer screening modalities. Whilst cost-effectiveness can be improved by integration with smoking cessation programmes, how this would be done in practice remains unclear. Furthermore, individuals at high-risk of lung cancer are virtually by definition risk prone, raising concerns about how attractive participation in a screening programme would be, especially given the very small reported absolute risk reduction in the NLST.
### Keywords
• lung cancer
• screening
• low-dose computed tomography
• early diagnosis
• cost-effectiveness
• overdiagnosis
## 1. Introduction
Lung cancer is the commonest cause of cancer death in both men and women across the developed world, due to a combination of its high incidence and relatively short average survival after diagnosis. Most lung cancer patients present symptomatically and most already have incurable disease at presentation. These considerations have resulted in attempts to improve outcomes through screening.
Screening, though, is a challenging strategy for harm reduction. Screening programmes have been introduced for many cancers, often as a result of political pressures rather than on sound evidence. Indeed, the harms of many screening programmes have only become evident well after widespread implementation, and their utility has often become more rather than less controversial with time. Self-evidently, for screening to be effective, earlier disease identification needs to lead to improved treatment outcomes. This calls into question what we know about the natural history of early stage, asymptomatic tumours, which turns out to be surprisingly little. It is now becoming increasingly certain though that, whilst some tumours will progress and ultimately cause premature death, others may never cause harm. This leads to a bias known as overdiagnosis [1]. The other way in which overdiagnosis can occur is when a competing cause of death prevents clinical manifestations of a tumour that would otherwise have proved lethal [2]. Screening is particularly liable to overdiagnosis, because more aggressive tumours have short volume-doubling times and progress rapidly. Thus, the interval between the onset of a radiological abnormality and the emergence of symptoms is relatively short, and the opportunity for presymptomatic detection in a screening programme is small. Conversely, tumours that grow slowly will have a long phase when they exist without symptoms and are particularly liable to be identified through screening.
There is now convincing evidence that overdiagnosis does occur as an inherent harm in most, and probably all, cancer screening programmes, including mammography [3]. It is always harmful because it is unknown which of the tumours identified are the ones overdiagnosed, meaning that some patients will have treatment for a disease that would never have materialised. The resulting costs include financial (unnecessary investigations/treatments), physical (side effects and complications of treatments) and psychological and are all serious, irrespective of any benefits derived by those with “real” disease.
The other important bias of screening programmes is lead-time bias, which occurs when a disease is diagnosed earlier than it would have been without screening. Even if the natural history of the case is not improved, the patient appears to survive longer than otherwise they would have but dies at the same date. Because of this it is vital that for proper evaluation of a screening programme the endpoint taken should be mortality difference in comparison with a control group.
## 2. Lung cancer screening trials
Several large studies were conducted in the 1960s and 1970s using plain chest radiography, with or without sputum cytology [4–10]. These studies all had methodological weaknesses [11], including limited power and, rather remarkably, the control groups in several also receiving three-yearly radiographs. No study provided evidence that mortality was reduced. Recently, the PLCO study has reported [12] and has finally and convincingly shown an absence of any mortality benefit of plain chest radiography screening, compared to no screening.
The advent of low-dose CT (LDCT) scanning provided a new modality applicable to lung cancer screening. Initial studies indicated that LDCT was able to identify early lung cancers with a high rate of resectability [13]. The landmark National Lung Screening Trial (NLST) was a large and adequately powered, randomised, controlled trial of screening with LDCT against plain chest radiography, undertaken in the USA. The headline result was a reduction in lung cancer mortality by the apparently impressive figure of 20% [14]. Does this mean that CT screening should immediately be implemented for lung cancer, or that lung cancer mortality could be reduced by anything like this amount? I believe that the answer is no to both.
### 2.1. The National Lung Screening Trial (NLST)
Of the 26,722 volunteers screened with LDCT, 1060 participants were found to have lung cancer, in comparison to 941 of the group screened with chest radiography. Overall, when comparing the two groups, the detection rate of lung cancer was 13% greater in the group screened with LDCT than with chest radiography. More early stage (IA and IB) cancers were diagnosed with CT. Most importantly in the evaluation of a screening study, there was evidence of reduced mortality: a total of 356 lung cancer deaths occurred in the LDCT group vs. 443 deaths in the plain radiography group, over a median of 6.5 years of follow-up (P = 0.004). Whilst the relative reduction in mortality rate from lung cancer was 20% in individuals screened using LDCT, the absolute risk reduction in mortality was only 0.33% over the study period in the LDCT group (87 avoided deaths over 26,722 screened participants), meaning about 310 individuals needed to participate in screening, typically for three rounds, to prevent one lung cancer death. Given the conclusion of the PLCO trial that plain radiography screening is ineffective, it can be assumed that this represents a similar benefit to eligible participants in comparison with no screening. However this is evaluated in terms of cost-effectiveness, this study does indicate that there exists a subgroup of patients with lung cancer who can be cured if the disease is identified earlier and, conversely, who will die from their disease if it is not. Despite the large number of participants who underwent further diagnostic testing, the authors noted that the testing resulted in only a small number of adverse events.
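The absolute-risk arithmetic quoted above can be checked in a few lines (figures taken from the trial numbers in the text; the published "~310" is a rounded number-needed-to-screen):

```python
# Sanity-check the NLST mortality arithmetic: 87 avoided deaths among
# 26,722 LDCT participants.
deaths_ldct, deaths_cxr = 356, 443
n_ldct = 26722

avoided = deaths_cxr - deaths_ldct  # 87 avoided lung cancer deaths
arr = avoided / n_ldct              # absolute risk reduction
nns = 1 / arr                       # number needed to screen

print(round(arr * 100, 2))  # absolute risk reduction in %
print(round(nns))           # roughly 307, quoted as ~310 in the text
```

The tiny absolute figure, against the 20% relative figure, is the crux of the cost-effectiveness debate discussed later in the chapter.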
### 2.2. European studies
An important issue is to what extent the findings of the NLST might hold for a non-US population. There are important differences in epidemiology in Europe, including a difference in distribution of histological subtypes and a much lower frequency of non-calcified pulmonary nodules arising from fungal pathogens. In the UK, squamous cancers represent about 40% of cancers and adenocarcinomas 18%, whilst in the USA, squamous cancers only represent 27% with adenocarcinoma being the most prevalent type at 31% [15]. As squamous cancers tend to arise in proximal airways, they are less amenable to identification as a lung nodule on CT, unlike adenocarcinomas, which more often present as intrapulmonary nodules.
The DANTE and DLCST studies each compared five annual rounds of LDCT screening to usual care and were both considerably smaller than the NLST. The DANTE study randomised 2811 and the DLCST randomised 4104 men and women who were healthy, heavy and current or former smokers to LDCT screening or no screening. After medians of 34 and 58 months of follow-up, respectively, not even a trend towards reduced mortality was found: (DANTE, relative risk [RR], 0.97; 95% CI, 0.71–1.32; P= 0.84) (DLCST, RR, 1.15; 95% CI, 0.83–1.61; P = 0.430) [16, 17]. The NELSON study is the largest European study so far performed, with 15,822 participants, and has employed a volumetry-based LDCT screening protocol with longer intervals between screening rounds [18, 19]. These investigators greatly reduced their reported false-positive rate when compared with the figure reported in NLST (23.3% in NLST vs. 3.6% in NELSON) by changing the definition of “false positive” to include only those nodules that had a baseline appearance or interval growth that supported malignancy. This is probably justified, given that whilst being recalled for a repeat CT may generate some anxiety and distress, this would be anticipated to be much less than the need for urgent evaluation and is likely to be short-lived.
A comparison of the larger European LDCT screening trials and the NLST with the trials of plain chest X-ray screening [4, 12] is shown in Table 1, particularly in respect of their effects on mortality. It should be emphasised that the only trial producing a statistically significant result was the NLST.
| Trial | Modality | Recruits | No. of lung cancers detected (%) | No. at stage I (%) | No. of deaths from lung cancer (%) | Relative mortality reduction from screening | Absolute mortality reduction from screening |
|---|---|---|---|---|---|---|---|
| NLST [14] | Chest X-ray | 26,035 | 941 (3.61) | 131 (0.50) | 443 (1.70) | +20% | +0.33% |
| NLST [14] | LDCT | 26,309 | 1060 (4.03) | 400 (1.52) | 356 (1.35) | | |
| DANTE [17] | Clinical | 1186 | 73 (6.07) | 16 (1.35) | 55 (4.64) | −0.65% | −0.03% |
| DANTE [17] | LDCT | 1264 | 104 (8.23) | 47 (3.72) | 59 (4.67) | | |
| DLCST [16] | Usual | 2052 | 24 (1.17) | 5 (0.24) | 11 (0.54) | −35% | −0.19% |
| DLCST [16] | LDCT | 2052 | 70 (3.41) | 47 (2.29) | 15 (0.73) | | |
| PLCO [12] | Usual | 77,456 | 1620 (2.09) | 462 (0.60) | 1230 (1.59) | +1.26% | −0.02% |
| PLCO [12] | Chest X-ray | 77,445 | 1696 (2.19) | 374 (0.48) | 1213 (1.57) | | |
| MLP [4] | Usual | 4593 | 160 (3.48) | 31 (0.67) | 115 (2.50) | −6% | −0.14% |
| MLP [4] | Chest X-ray | 4618 | 206 (4.46) | 68 (1.47) | 122 (2.64) | | |

Negative figures denote a mortality increase.
### Table 1.
Comparison of lung cancer screening trials, showing rates of identification of early stage disease and effects on mortality.
### 2.3. Downsides of LDCT screening
Any benefits of CT screening need to be weighed against the harms. Besides the relatively small direct risk of cancers caused by the radiation exposure from the CT scans themselves, CT screening suffers from all of the general limitations of screening. These include:
• Overdiagnosis—see above
• The costs—both psychological and financial arising particularly through the high rate of false-positive diagnoses or indeterminate findings
• Uncertainty regarding how such a programme would work in practice including its potential to reach/capture a reasonable proportion of incident cases
In the NLST, the substantial excess of cancers diagnosed in the CT group (1060 vs. 941) implies that overdiagnosis did occur. Patz and colleagues’ [20] analysis of the NLST study suggested that up to 18% of the cancers identified in the NLST may have been indolent and likely to have been overdiagnosed and indicated that for each cancer death avoided, 1.38 cases may have been overdiagnosed. Moreover, the figures for overdiagnosis may have been even worse if the control arm had received no chest radiographs, which can also be assumed to have resulted in some overdiagnosis. The risk of overdiagnosis, as might be expected, depends on histological subtype and is most striking in patients with a diagnosis of bronchoalveolar cell carcinoma (now called minimally invasive adenocarcinoma), in whom the risk of overdiagnosis was estimated to be 85% after 7 years of follow-up or 49% with lifetime follow-up [20]. These data also raise the question as to the necessity and type of therapy required if a diagnosis of minimally invasive adenocarcinoma is established.
Because the major risk factor for lung cancer is the smoking of tobacco, in order to qualify for entrance into a screening programme, individuals need to have smoked significantly, and many will be current cigarette smokers. Consequently, the target population of a lung cancer screening programme may be expected to have a relative disregard for its own health and a tendency to accept risk, potentially predisposing to poor acceptability of, and adherence to, screening. This is likely to be much more evident in real life in comparison to a highly motivated volunteer population. It has been shown that smokers in the USA are significantly more likely than never smokers to be male, non-white and less educated; to report poor health status; and to be less likely to be able to identify a usual source of healthcare [21]. This study also indicated that current smokers were less likely than never smokers to believe that early detection would result in a good chance of survival and expressed relative reluctance to consider computed tomography screening for lung cancer. Interestingly, only half of these smokers stated that they would opt for surgery for a cancer diagnosed as a result of screening, further calling into question the value of early diagnosis in this group.
Importantly, even if all subjects meeting the NLST criteria were to accept and adhere to screening, it has been estimated that only 27% of incident lung cancer patients would be included [22]. This implies that the potential to limit lung cancer mortality could only at the very best be 20 of 27% of cases, i.e. a 5.4% mortality reduction overall.
It can also be argued that a screening programme represents a collusion with self-harming behaviour, particularly in relation to current smokers, and throws into focus the interrelationship between smoking cessation and lung cancer screening, particularly as CT screening is only designed to mitigate one of the many potential harms of smoking. This is all the more relevant because recruitment into a lung cancer screening programme does not appear to increase the likelihood of smoking cessation [23–25] and may even reduce it [19]. Offering smoking cessation, which is one of the most cost-effective of all health interventions, within a screening programme has been shown to improve the cost-effectiveness of the screening by 20–45% [26, 27]. Is this "cheating"? It is only when expensive LDCT screening is combined with highly cost-effective smoking cessation that cost-utility ratios become comparable with those of other accepted cancer screening programmes!
No long-term psychological harm was found in the NELSON trial. In those with negative results, anxiety and distress fell from baseline; following an abnormal result, anxiety and distress were transient and tended to have returned to baseline by the next screening round [28]. However, the harms—psychological, physical and financial—suffered by those with overdiagnosed tumours have not been quantified and are likely to be substantial.
In the NLST, a major complication occurred in almost five of every 10,000 persons screened due to investigation of a benign lung nodule [14]. Whilst this is a low proportion, these were “normal” subjects in whom any harm is seriously to be regretted.
Based on the NLST's own size cut-offs, the average nodule detection rate per round of screening was very high at 20%. In most LDCT screening studies, more than 90% of nodules prove to be benign. Whilst there is a tendency towards lower nodule detection rates in repeat screening rounds, this appears only to be due to the discounting of nodules that had been present in the prior round, so screening, if embarked upon, should probably continue. Furthermore, from the evidence of the fourth round of screening in the NELSON study, lengthening the duration of intervals between rounds beyond 2 years does not appear to be an effective strategy.
In most screening study protocols, a detected nodule triggers further imaging, but the approaches have been inconsistent between studies, and it has been suggested that follow-up imaging rates may have been underestimated [29]. The frequency of further CT imaging among screened individuals has ranged from 1% in the study by Veronesi et al. [30] to 44.6% in the study by Sobue et al. [31]. The frequency of further positron emission tomography (PET) imaging among screened individuals has exhibited less variation, ranging from 2.5% in the study by Bastarrika et al. [32] to 5.5% in the NLST.
The frequency of invasive evaluation of detected nodules, although generally low, has shown marked variation in reported studies. In the NLST, in the patients not found to have lung cancer, 1.2% underwent an invasive procedure such as needle biopsy or bronchoscopy, and 0.7% had a thoracoscopy, mediastinoscopy or thoracotomy. In the NELSON study, these numbers were very similar at 1.2% and 0.6%, respectively [18].
A workshop undertaken by the International Association for the Study of Lung Cancer (IASLC) [33] identified a number of areas where improvements are needed to be made in relation to future implementation of LDCT screening, indicating that, whilst LDCT screening may have potential value, the science around the process remains preliminary. The specific areas requiring clarification were identified as optimization of the identification of high-risk individuals, development of radiological guidelines, development of guidelines for the clinical workup of indeterminate nodules, development of guidelines for pathology reporting, definition of criteria for surgical and therapeutic interventions of suspicious nodules identified through lung cancer CT screening programmes and development of recommendations for the integration of smoking cessation practices into future national lung cancer CT screening programmes.
### 2.4. Cost-effectiveness
The incremental cost-effectiveness ratio is defined as ICER = (C1 − C2)/(E1 − E2), where C1 and E1 are the cost and effect in the intervention group and C2 and E2 are the cost and effect in the control group. Costs are usually described in monetary units, whilst benefits/effects on health status are measured in terms of quality-adjusted life years (QALYs).
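As an illustration of the formula (the numbers here are made up for the example, not trial data):

```python
def icer(c1, e1, c2, e2):
    # Incremental cost-effectiveness ratio: extra cost per extra unit of
    # effect (e.g. per QALY) of the intervention (1) over the control (2).
    return (c1 - c2) / (e1 - e2)

# Hypothetical: intervention costs $3,000 and yields 8.02 QALYs per person;
# control costs $1,000 and yields 8.00 QALYs -> about $100,000 per QALY gained.
print(icer(3000, 8.02, 1000, 8.00))
```

Small differences in the effect denominator produce very large swings in the ratio, which is why the cost-effectiveness estimates quoted below vary so widely between analyses.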
Using such methodology, lung cancer LDCT screening was found to be considerably more expensive than other US screening programmes, with an ICER of between $126,000 and $169,000 per life-year (LY) gained [34]. In comparison, colorectal cancer screening has an ICER of $13,000 to $32,000/LY. When a basic smoking cessation intervention is included, which, as outlined above, subsidises an expensive intervention with a cheap one but also adds to the total costs, an ICER of $23,185 per QALY gained is achieved, falling further to $16,198 per QALY gained with a more intensive regimen [26]. Given the huge benefits of smoking cessation for a wide range of diseases, the case for offering smoking cessation to all anyway is strong. Setting up an LDCT screening programme first and adding smoking cessation on to that seems to be putting the cart before the horse: if employed at all for smokers, it should be an add-on to a smoking cessation intervention.
In the UKLST, the cost-effectiveness analyses used data from life tables and modelled quality-adjusted life year (QALY) data from the NLST, the validity of which is unclear. Given that no effect on mortality could be shown, the reliability of the cost-utility analysis is highly questionable, although the cost was estimated at only £8,466 (approximately $11,000 at today's exchange rate) per QALY gained. This figure is substantially less than that quoted in the NLST and is well within the threshold deemed acceptable by the National Institute for Health and Care Excellence.
### 2.5. Applicability of trial findings to routine clinical practice
Most of the NLST sites were designated National Cancer Institute centres, and more than 80% were large, multidisciplinary academic centres with more than 400 beds [29]. It seems unlikely that the results obtained by less specialised centres will be directly comparable. Furthermore, screening trials are likely to attract, if not healthy volunteers, at least a group who may be more likely than the average to adhere to the screening protocol. Indeed, adherence to the three screening rounds in the NLST was 90%, which is highly unlikely to be achievable in routine practice.
Great difficulty was experienced in recruiting participants to UKLS, given that a system comparable to one that might be used in a real-life screening programme was employed: of the 247,354 questionnaires sent out, response rates to the initial questionnaire were low, with an initial positive response rate of only 15% in current smokers and about 40% in never or former smokers. Even then, a high attrition rate occurred, with potential participants being lost at every stage of the recruitment process. Finally, only 4061 subjects (46.5% of all high-risk positive responders) consented and were recruited into the trial.
Another positive from the CT screening studies is that they have provided evidence underpinning the rational approach to the investigation of solitary pulmonary nodules, including the very helpful algorithm developed by the BTS [35]. This includes appreciation that minimally invasive adenocarcinomas may be benign in behaviour and may allow a less aggressive approach to management in comorbid or frail patients.
## 3. Conclusions
The American NLST, in finding a 20% relative reduction in mortality from screening with LDCT, in comparison to plain chest radiography, which itself is ineffective, suggests that screening may be one strategy for improving lung cancer outcomes in a well-motivated American population. This benefit was achieved at a cost comparable to those of other established screening programmes but only when smoking cessation was included. However, the absolute reduction in mortality achieved is small: 87 avoided deaths in 26,722 screened participants, representing a 0.33% lower risk of dying from lung cancer for each individual participant. Besides the harms resulting from the unnecessary treatment of overdiagnosed cases, 24% of participants were found to have a nodule over three rounds, leading to further diagnostic workup. Even in these large academic institutions, a major complication occurred in five of every 10,000 cases with a benign nodule. Many of the cancers diagnosed are small, minimally invasive adenocarcinomas, and these contribute significantly to overdiagnosed cases. Overall, for every cancer death avoided, 1.38 cases may have been overdiagnosed.
Disappointingly, none of the European studies (DANTE and DLCST and NELSON) have found evidence of any significant, or indeed even a trend towards, reduction in mortality, possibly reflecting different epidemiology of cancer subtypes. The use of volumetric techniques employed in the NELSON study seems attractive, but the efficacy of this approach remains completely unproven.
Concentration on harm reduction through screening potentially deflects attention from the need to improve diagnosis and treatment for the majority of cases falling outwith current screening eligibility criteria for smoking history and age. It is known that patients often tolerate lung cancer symptoms for long periods before presenting with them [36]. General practitioners also find identifying lung cancer cases challenging, and patients will often attend several times before the diagnosis is considered and a chest radiograph performed [37]. An early study revealed that educating the public and primary healthcare teams on the importance of cough as a lung cancer symptom resulted in a large increase in chest radiographs being performed and suggested earlier diagnosis [38]. This led on to the national “Be Clear on Cancer” campaign in the UK, the results of which were positive, leading to state funding of repeat programmes. Further work is taking place to look at the effects of lowering thresholds for the obtaining of chest radiographs for chest symptoms in primary care [39]. Facilitating earlier diagnosis of symptomatic disease should also minimise overdiagnosis.
I believe that screening in lung cancer is potentially able to reduce lung cancer mortality, but our understanding of how to apply this in real populations, including those outside the USA, is in its infancy. Cost-effectiveness is much improved when screening is combined with smoking cessation, but this in effect subsidises an otherwise unaffordable screening programme by combining it with another highly cost-effective intervention that could be provided anyway.
Besides the expense of screening, small but definite harms resulting from radiation exposure, investigations of benign lesions and the more significant problem of finding inconsequential disease (overdiagnosis) also reduce its attractiveness. Until we have a better understanding of these issues, I believe we should be concentrating more effort on the earlier diagnosis of symptomatic disease, at least in Europe.
Whatever we think of the weaknesses of the current attempts to reduce mortality through screening, the NLST does point to the potential for improving lung cancer outcomes through expeditious diagnosis. For now, at least in Europe, this must be based on improved identification of symptomatic disease. Initiatives to improve detection of early stage disease will rely on improving public and primary care awareness of lung cancer symptoms and reducing the impediments to diagnosis following recognition that lung cancer is a possible diagnosis.
© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
## How to cite and reference
### Cite this chapter
Trevor Keith Rogers (March 1st 2017). Low-Dose Computed Tomography Screening for Lung Cancer, A Global Scientific Vision - Prevention, Diagnosis, and Treatment of Lung Cancer, Marta Adonis, IntechOpen, DOI: 10.5772/66358.
Let G: $R^2 \to R^2$ be the orthogonal projection onto the line y = -x. Show that G has the standard matrix $\left[\begin{array}{cc}0.5&-0.5\\-0.5&0.5\end{array}\right]$
2. Originally Posted by pdnhan
Let G: $R^2 \to R^2$ be the orthogonal projection onto the line y = -x. Show that G has the standard matrix $\left[\begin{array}{cc}0.5&-0.5\\-0.5&0.5\end{array}\right]$
The simplest way to find the matrix corresponding to a given linear transformation in a given basis is to apply the linear transformation to each of the basis vectors in turn, writing the result in terms of the basis. The coefficients of each linear combination are the numbers in that column of the matrix.
The "standard" basis for $R^2$ is {<1, 0>, <0, 1>}. What is the projection of <1, 0> on y= -x (with direction vector <1, -1>)? What is the projection of <0, 1> on that line?
Warning: the matrix is NOT $
\left[\begin{array}{cc}0.5&-0.5\\-0.5&0.5\end{array}\right]
$
You must have copied something wrong.
3. thanks but if the matrix i copied down is wrong, can you please show me your correct standard matrix and how to work it out, cuz im really stuck atm.
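For what it's worth, carrying out HallsofIvy's recipe numerically actually confirms the matrix in the original post (the warning appears to be mistaken): the projection of a vector v onto the line y = −x (direction vector d = <1, −1>) is (v·d / d·d) d, and applying this to <1, 0> and <0, 1> gives exactly the columns <0.5, −0.5> and <−0.5, 0.5>. A quick check in plain Python:

```python
def project_onto_line(v, d):
    """Orthogonal projection of v onto the line spanned by direction d."""
    scale = (v[0] * d[0] + v[1] * d[1]) / (d[0] ** 2 + d[1] ** 2)
    return (scale * d[0], scale * d[1])

d = (1, -1)  # direction vector of the line y = -x

# The columns of the standard matrix are the images of the basis vectors.
col1 = project_onto_line((1, 0), d)  # (0.5, -0.5)
col2 = project_onto_line((0, 1), d)  # (-0.5, 0.5)
print(col1, col2)
```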
# Geometrical Interpretation on the Argand Plane
## Geometrical Interpretation on the Argand Plane
To develop and exploit this geometric interpretation of exponential functions, which contain complex numbers within their arguments (hereafter referred to as complex exponentials), we can represent a complex number on a two-dimensional plane known as the “complex plane” or the Argand plane. In that representation, we define the $x$ axis as the “real axis” and the $y$ axis as the “imaginary axis.” This is shown in Fig. 1.7. In this geometric interpretation, multiplication by $j$ would correspond to making a “left turn” [12], that is, making a $90^{\circ}$ rotation in the counterclockwise direction. Since $j * j=j^2=-1$ would correspond to two left turns, a vector pointing along the real axis would be headed backward, which is the equivalent of multiplication by $-1$.
In this textbook, complex numbers will be expressed using bold font. A complex number, $z=x+j y$, where $x$ and $y$ are real numbers, would be represented by a vector of length, $|\vec{r}|=\sqrt{x^2+y^2}$, from the origin to the point, $z$, on the Argand plane, making an angle with the positive real axis of $\theta=\tan ^{-1}(y / x)$. The complex number could also be represented in polar coordinates on the Argand plane as $z=A e^{j \theta}$, where $A=|\vec{r}|$. The geometric and algebraic representations can be summarized by the following equation:
$$\mathbf{z}=x+j y=|\mathbf{z}|(\cos \theta+j \sin \theta)=|\mathbf{z}| e^{j \theta}$$
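The equivalence of the rectangular and polar forms above, and the "left turn" effect of multiplying by $j$, are easy to verify numerically with Python's built-in complex type (a quick sanity check added for illustration; the value 3 + 4j is arbitrary):

```python
import cmath

z = 3 + 4j
A = abs(z)                   # |z| = sqrt(3**2 + 4**2) = 5.0
theta = cmath.phase(z)       # atan2(4, 3), angle with the positive real axis
polar_form = A * cmath.exp(1j * theta)

print(A, theta, polar_form)  # the polar form reproduces 3 + 4j

# Multiplying by j rotates 90 degrees counterclockwise: the "left turn".
print(z * 1j)                # (-4+3j)
```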
## Phasor Notation
In this textbook, much of our analysis will be focused on the response of a system to a single-frequency stimulus. We will use complex exponentials to represent time-harmonic behavior by letting the angle $\theta$ increase linearly with time, $\theta=\omega_o t+\phi$, where $\omega_o$ is the frequency (angular velocity) which relates the angle to time and $\phi$ is a constant that will accommodate the incorporation of initial conditions (see Sect. 2.1.1) or the phase between the driving stimulus and the system's response (see Sect. 2.5). As the angle, $\theta$, increases with time, the projection of the uniformly rotating vector, $\vec{x}=|\vec{x}|\, e^{j(\omega_o t+\phi)} \equiv \widehat{\mathbf{x}}\, e^{j \omega_o t}$, traces out a sinusoidal time dependence on either axis. This choice is also known as phasor notation. In this case, the phasor is designated $\widehat{\mathbf{x}}$, where the "hat" reminds us that it is a phasor and its representation in bold font reminds us that the phasor is a complex number.
$$\widehat{\mathbf{x}}=|\widehat{\mathbf{x}}|\, e^{j \phi}$$
Although the projection on either the real or imaginary axis generates the time-harmonic behavior, the traditional choice is to let the real component (i.e., the projection on the real axis) represent the physical behavior of the system. For example, $x(t) \equiv \Re e\left[\widehat{\mathbf{x}} e^{j \omega_o t}\right]$.
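The phasor convention can be illustrated numerically; the amplitude, phase, and frequency below are arbitrary values chosen for the example, not from the text. The physical signal is the real part of the phasor multiplied by the rotating factor $e^{j\omega_o t}$:

```python
import cmath
import math

amplitude = 2.0
phase = math.pi / 3        # phi: initial phase (arbitrary for this example)
omega = 2 * math.pi * 50   # 50 Hz angular frequency (arbitrary)

x_hat = amplitude * cmath.exp(1j * phase)  # the phasor encodes |x| and phi

def x(t):
    """Physical signal: the real projection of the rotating vector."""
    return (x_hat * cmath.exp(1j * omega * t)).real

# At t = 0 the signal equals |x| * cos(phi), i.e. 2 * cos(pi/3) ≈ 1.0.
print(x(0.0))
```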
# Math Help - Limit
1. ## Limit
I know that the limit as x->2 of f(x)=(2x^2-8)/(x-2) is 8, but I don't understand how to get there. Please help; I'm not really sure what I'm supposed to do.
*Has been edited to show the original question
*Solved using factorising
2. Originally Posted by stripe501
I know that the limit as x-> 0 of f(x)=(2x^2-8)/(x^2-4) is 8. But i don't understand how i get there, please help, i'm not really sure what i'm supposed to do
Your function is continuous at $x=0$, so the limit is $f(0)=2$.
3. Originally Posted by stripe501
I know that the limit as x-> 0 of f(x)=(2x^2-8)/(x^2-4) is 8. no, it's not ... did you post the limit correctly?
...
4. Never mind, I figured it out
5. Yeah, I posted it wrong :/ thats why i couldn't get it
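For the corrected problem — the limit as x→2 of (2x²−8)/(x−2) — the factorising approach mentioned in the thread gives 2(x−2)(x+2)/(x−2) = 2(x+2) → 8. A quick numerical check of the same thing (the step sizes are arbitrary):

```python
def f(x):
    return (2 * x**2 - 8) / (x - 2)

# Approach x = 2 from both sides; the values settle near 8.
for h in (0.1, 0.01, 0.001):
    print(f(2 + h), f(2 - h))

# After cancelling the common factor (x - 2), f simplifies to 2*(x + 2).
print(2 * (2 + 2))  # the limit: 8
```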
This error appears when the alignment character `&` is used incorrectly. The alignment character `&` is used to align elements in specific environments, such as `matrix`, `align`, `table`, etc.

### Common causes of the error

**Not writing `\&`.** When you want to use an ampersand `&` as a text character in your writing, such as in a title, you must write it as `\&`. If you fail to do this, you will get an error as shown below. The input

    The company is named Michael & Sons

produces:

    main.tex, line 5
    Misplaced alignment tab character &.
    l.5 The company is named Michael & Sons
    I can't figure out why you would want to use a tab mark here.
    If you just want an ampersand, the remedy is simple: Just type `I\&' now.
    But if some right brace up above has ended a previous alignment
    prematurely, you're probably due for more error messages, and you might
    try typing `S' now just to see what is salvageable.

To correct this error, change `&` to `\&`.

**`matrix` macro used instead of `amsmath`'s `matrix`.** In the basic LaTeX distribution, there is a macro called `matrix`. This is often used incorrectly when what you really want is the `matrix` environment supplied by the `amsmath` package. If you forget to load the `amsmath` package and simply write

    $$\begin{matrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{matrix}$$

then a misplaced alignment tab error message will appear, because this is not the correct way to use the `matrix` macro supplied by the base LaTeX distribution. To resolve the problem, simply load the `amsmath` package:

    % In your preamble (before \begin{document})
    \usepackage{amsmath}

    % In your document
    $$\begin{matrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{matrix}$$
# Tangent Line Calculator

Inverse tangent calculator: enter the tangent value, select degrees (°) or radians (rad), and press the = button. Tangent, written as tan(θ), is one of the six fundamental trigonometric functions and relates the angles and sides of a triangle.

A tangent line to a curve "just touches" the curve at a single point and has the same slope as the curve there; Leibniz defined it as the line through a pair of infinitely close points on the curve. The tangent line calculator finds the tangent line to an explicit, polar, parametric, or implicit curve at a given point, with steps shown. A tangent to a circle never crosses the circle and cannot pass through it: it meets the circle at exactly one point, where it is perpendicular to the radius, whereas a secant line intersects the circle at two points (Jurgensen et al. 1963, p. 346).

A vertical tangent touches the curve at a point where the gradient (slope) of the curve is infinite and undefined; on a graph, it runs parallel to the y-axis.

The general formula for the tangent line to y = f(x) at x = a is

y = f(a) + f'(a)(x − a)

where a is the x-coordinate of the point for which you are calculating the tangent line. Once you have the slope of the tangent line, which will be a function of x, you can find the exact slope at specific points along the graph.
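The general formula y = f(a) + f'(a)(x − a) can be sketched in a few lines of Python. The parabola f(x) = x² and the point a = 3 are arbitrary choices for illustration, and the derivative is approximated by a central difference:

```python
def tangent_line(f, a, h=1e-6):
    """Return (slope, intercept) of the tangent line to f at x = a,
    using a central-difference approximation of f'(a)."""
    slope = (f(a + h) - f(a - h)) / (2 * h)
    # y = f(a) + f'(a) * (x - a)  =>  intercept = f(a) - f'(a) * a
    intercept = f(a) - slope * a
    return slope, intercept

f = lambda x: x**2
slope, intercept = tangent_line(f, 3)
print(slope, intercept)  # close to 6 and -9: the tangent line y = 6x - 9
```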
|
{}
|
# Proving that $f:S^1 \to S^1$ with closed degree $n$ is homotopic to the map $z \mapsto z^n$?
Suppose that $f:S^1 \to S^1$ is continuous and has closed degree $n$, how would you show that $f$ is homotopic to the map $z \mapsto z^n$?
I know that by definition of closed degree, we have $\deg(f \circ \exp) = n$.
And the closed degree of $g(z) : = z^n$ is also $n$, so $\deg (g \circ \exp) = n$ and therefore we have $g \circ \exp (t) = \exp(nt)$ but I can't really see how this helps?
And a theorem states that every loop is homotopic to a constant-speed loop of the same degree, so $g \circ \exp (t) = \exp(nt)$ is homotopic to $f \circ \exp$, but I don't see how this implies that $g$ and $f$ are homotopic! I would appreciate any help!
• This reduces to proving that a degree $0$ map is null homotopic. I think it is easier to approach this. – Julien Mar 25 '13 at 21:18
• Seems given the homotopy you've proved existence of, it should not be hard to construct the homotopy over $S^1$ in terms of the first. – Brady Trainor Mar 25 '13 at 22:05
# Arrhenius Equation
Posted in Thermodynamics
The Arrhenius equation gives the temperature dependence of the reaction rate constant, which sets the rate of a chemical reaction. Chemical reactions typically proceed faster at higher temperatures and slower at lower temperatures. As the temperature rises, molecules move faster and collide more often, greatly increasing their likelihood of bonding. This results in a higher kinetic energy, so more molecules can overcome the activation energy of the reaction.
## Arrhenius Equation FORMULA
$$\large{ k = A\; e^{\frac{-E_a}{R\;T_a}} }$$
| Symbol | English | Metric |
| --- | --- | --- |
| $$\large{ k }$$ = rate constant | - | $$\large{ \frac{mol}{L-s} }$$ |
| $$\large{ A }$$ = frequency factor (different for every reaction) | $$\large{dimensionless}$$ | $$\large{dimensionless}$$ |
| $$\large{ e }$$ = natural log base | $$\large{dimensionless}$$ | $$\large{dimensionless}$$ |
| $$\large{ E_a }$$ = activation energy | $$\large{lbf-ft}$$ | $$\large{J}$$ |
| $$\large{ R }$$ = universal gas constant | $$\large{ \frac{lbf-ft}{lbmol-R} }$$ | $$\large{ \frac{J}{kmol-K} }$$ |
| $$\large{ T_a }$$ = absolute temperature | $$\large{R}$$ | $$\large{K}$$ |
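As a quick numerical illustration of the formula (the values of A and Ea below are made up for the example, not taken from the text), a short Python sketch:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def rate_constant(A, Ea, Ta):
    """Arrhenius equation: k = A * exp(-Ea / (R * Ta)).

    A  : frequency factor (different for every reaction)
    Ea : activation energy in J/mol
    Ta : absolute temperature in K
    """
    return A * math.exp(-Ea / (R * Ta))

# Raising the temperature makes the exponent less negative, so k grows:
k_cold = rate_constant(A=1e13, Ea=50_000, Ta=298.0)
k_hot = rate_constant(A=1e13, Ea=50_000, Ta=350.0)
print(k_cold < k_hot)  # True
```

A quick sanity check of the implementation: with Ea = 0 the exponential is 1, so k equals A.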
# y^3+y^2=(x^4)y+x^2
Up a level : Differentiation, derivatives
Previous page : y^3+y^3=x^2+x
Next page : y^3+x^2=y/x
Let us look at
${y^3} + {y^2} = {x^4}y + {x^2}$
We can immediately see that the point (0, 0) is on the graph.
Zeroes
If y=0 we get
$0 = {x^2}$
So we have, the already known, zero at x=0
If x=0 we get
${y^3} + {y^2} = 0$
So we have y-intercepts at y=0 and y=−1.
So we have the points (0, 0), again, and (0, −1).
Asymptotes
Say y goes toward infinity; then we basically have
${y^3} = {x^4}y$
or
${y^2} = {x^4}$
and thus
$y = \pm \sqrt {{x^4}} = \pm {x^2}$
So we should expect y = x² and y = −x² to be asymptotes.
What if x goes toward zero? This is rather less obvious, at least to start with, since we have the term x⁴y. But if x and y are both very small we get that
${y^3} + {y^2} = {x^4}y + {x^2}$
will behave as
${y^2} = {x^2}$
or
$y = \pm x$
We could thus expect the graph to cross itself at the origin, and that the two parts will have slopes of 1 and −1 respectively.
Intersection between the asymptotes and the relation
Since we have an x² term in the relation, it might be an idea to try the substitution
$y = \pm {x^2} \Rightarrow {x^2} = \pm y \Rightarrow {x^4} = {y^2}$
We get
${y^3} + {y^2} = {y^2}y \pm y = {y^3} \pm y$
so $y^2 = \pm y$, and thus (apart from the already-known y = 0)
$y = \pm 1$
So we now know we have points at (±1, ±1).
Horizontal tangent points
We may find these by finding where y´=0. So we take the implicit derivative of our relation to get
$3{y^2}y' + 2yy' = 4{x^3}y + {x^4}y' + 2x$
Here we needed both the chain rule and the product rule. For y´=0 this gives us
$0 = 4{x^3}y + 2x$
This is a bit problematic, since we have both x and y in the equation. We could make y the subject and substitute that into the original equation, but that would give us an eighth-degree polynomial to solve, so let us instead make y´ the subject to see what slopes we have at the known points. We get
$3{y^2}y' + 2yy' - {x^4}y' = 4{x^3}y + 2x$
Or
$y'(3{y^2} + 2y - {x^4}) = 4{x^3}y + 2x$
And thus
$y' = \frac{{4{x^3}y + 2x}}{{3{y^2} + 2y - {x^4}}}$
For the point (0, −1) it gives us
$y' = \frac{0}{1} = 0$
The point (0, 0) gives us that the derivative is undefined (we get 0/0) – as is expected.
Vertical tangent points
For the vertical tangent points we run into the same problem as with the horizontal slopes: we get an expression for the slope that depends on both x and y. The tangent is vertical where the denominator of y´ vanishes, that is where
$3{y^2} + 2y - {x^4} = 0$
Could it be that any of our known points satisfies this? We know (0, 0) is a point where the graph intersects itself, and that the graph is horizontal at (0, −1). But how about (±1, ±1)? The sign of the x-coordinate cannot matter, since we take that to the power of four, so x⁴ = 1 there. Let us test
$3{y^2} + 2y - 1 = 0$
Of the candidates y = ±1, only y = −1 satisfies this (y = 1 gives 3 + 2 − 1 = 4 ≠ 0). So we have that the graph is vertical at (1, −1) and (−1, −1).
We could also have made y the subject of
$3{y^2} + 2y - {x^4} = 0$
To get
$y = \frac{{ - 2 \pm \sqrt {{2^2} + 4 \cdot 3{x^4}} }}{{2 \cdot 3}} = \frac{{ - 1 \pm \sqrt {1 + 3{x^4}} }}{3}$
So any possible points where the graph is vertical could be along the two lines given by the above equation. We could calculate this for a few points and connect the points. For x=0 we get y=0 and y=-2/3, for x=1 we get y=1/3 and y=-1 (as expected). For x=1/2 we get
$y = \frac{{ - 1 \pm \sqrt {1 + \frac{3}{{16}}} }}{3} \approx \frac{{ - 1 \pm \left(1 + \frac{3}{{32}}\right)}}{3} = - \frac{1}{3} \pm \left(\frac{1}{3} + \frac{1}{{32}}\right)$
I.e. just slightly above 0 or just slightly below −2/3.
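Before drawing anything, the special points found above can be sanity-checked numerically; here is a small sketch in plain Python (the function names are mine):

```python
def F(x, y):
    """Left side minus right side of y^3 + y^2 = x^4*y + x^2."""
    return y**3 + y**2 - x**4 * y - x**2

def dydx(x, y):
    """Implicit derivative y' = (4x^3*y + 2x) / (3y^2 + 2y - x^4)."""
    return (4 * x**3 * y + 2 * x) / (3 * y**2 + 2 * y - x**4)

# The intersections with the asymptotes lie on the curve:
for x, y in [(1, 1), (-1, 1), (1, -1), (-1, -1)]:
    assert F(x, y) == 0

# Horizontal tangent at (0, -1):
assert dydx(0, -1) == 0

# Vertical tangents at (1, -1) and (-1, -1): the denominator vanishes there.
for x in (1, -1):
    assert 3 * (-1) ** 2 + 2 * (-1) - x**4 == 0

print("all checks pass")
```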
Putting it all together (using Paint and the mouse, which implies ugly), I get
The dark lines are either asymptotes or, if they have vertical lines on them, places where one could possibly have vertical stretches of the graph. So, next I tried to draw some kind of graph that could fulfil this (it would look rather better on a piece of paper).
So, how does it compare to the actual graph?
# The Checker Framework Manual: Custom pluggable types for Java
### Version 1.9.11 (1 Feb 2016)
For the impatient: Section 1.3 describes how to install and use pluggable type-checkers.
# Chapter 1 Introduction
The Checker Framework enhances Java’s type system to make it more powerful and useful. This lets software developers detect and prevent errors in their Java programs.
A “checker” is a tool that warns you about certain errors or gives you a guarantee that those errors do not occur. The Checker Framework comes with checkers for specific types of errors:
1. Nullness Checker for null pointer errors (see Chapter 3)
2. Initialization Checker to ensure all fields are set in the constructor (see Chapter 3.8)
3. Map Key Checker to track which values are keys in a map (see Chapter 4)
4. Interning Checker for errors in equality testing and interning (see Chapter 5)
5. Lock Checker for concurrency and lock errors (see Chapter 6)
6. Fake Enum Checker to allow type-safe fake enum patterns (see Chapter 7)
7. Tainting Checker for trust and security errors (see Chapter 8)
8. Regex Checker to prevent use of syntactically invalid regular expressions (see Chapter 9)
9. Format String Checker to ensure that format strings have the right number and type of % directives (see Chapter 10)
10. Internationalization Format String Checker to ensure that i18n format strings have the right number and type of {} directives (see Chapter 11)
11. Property File Checker to ensure that valid keys are used for property files and resource bundles (see Chapter 12)
12. Internationalization Checker to ensure that code is properly internationalized (see Chapter 12.2)
13. Signature String Checker to ensure that the string representation of a type is properly used, for example in Class.forName (see Chapter 13)
14. GUI Effect Checker to ensure that non-GUI threads do not access the UI, which would crash the application (see Chapter 14)
15. Units Checker to ensure operations are performed on correct units of measurement (see Chapter 15)
16. Constant Value Checker to determine whether an expression’s value can be known at compile time (see Chapter 16)
17. Aliasing Checker to identify whether expressions have aliases (see Chapter 17)
18. Linear Checker to control aliasing and prevent re-use (see Chapter 18)
19. IGJ Checker for mutation errors (incorrect side effects), based on the IGJ type system (see Chapter 19)
20. Javari Checker for mutation errors (incorrect side effects), based on the Javari type system (see Chapter 20)
21. Subtyping Checker for customized checking without writing any code (see Chapter 22)
22. Third-party checkers that are distributed separately from the Checker Framework (see Chapter 23)
These checkers are easy to use and are invoked as arguments to javac.
The Checker Framework also enables you to write new checkers of your own; see Chapters 22 and 29.
## 1.1 How to read this manual
If you wish to get started using some particular type system from the list above, then the most effective way to read this manual is:
• Read all of the introductory material (Chapters 1–2).
• Read just one of the descriptions of a particular type system and its checker (Chapters 3–23).
• Skim the advanced material that will enable you to make more effective use of a type system (Chapters 24–32), so that you will know what is available and can find it later. Skip Chapter 29 on creating a new checker.
## 1.2 How it works: Pluggable types
The Checker Framework supports adding pluggable type systems to the Java language in a backward-compatible way. Java’s built-in type-checker finds and prevents many errors — but it doesn’t find and prevent enough errors. The Checker Framework lets you run an additional type-checker as a plug-in to the javac compiler. Your code stays completely backward-compatible: your code compiles with any Java compiler, it runs on any JVM, and your coworkers don’t have to use the enhanced type system if they don’t want to. You can check only part of your program. Type inference tools exist to help you annotate your code.
A type system designer uses the Checker Framework to define type qualifiers and their semantics, and a compiler plug-in (a “checker”) enforces the semantics. Programmers can write the type qualifiers in their programs and use the plug-in to detect or prevent errors. The Checker Framework is useful both to programmers who wish to write error-free code, and to type system designers who wish to evaluate and deploy their type systems.
This document uses the terms “checker”, “checker plugin”, “type-checking compiler plugin”, and “annotation processor” as synonyms.
## 1.3 Installation
This section describes how to install the Checker Framework. (If you wish to use the Checker Framework from Eclipse, see the Checker Framework Eclipse Plugin webpage instead: http://types.cs.washington.edu/checker-framework/eclipse/.)
The Checker Framework release contains everything that you need, both to run checkers and to write your own checkers. As an alternative, you can build the latest development version from source (Section 32.3).
Requirement: You must have JDK 7 or later installed. You can get JDK 7 from Oracle or elsewhere. If you are using Apple Mac OS X, you can use Apple’s implementation, SoyLatte, or the OpenJDK.
The installation process is simple! It has two required steps and one optional step.
1. Download the Checker Framework distribution zip file.
2. Unzip it to create a checker-framework directory.
3. Configure your IDE, build system, or command shell to include the Checker Framework on the classpath. Choose the appropriate section of Chapter 30 for javac (Section 30.1), Ant (Section 30.2), Maven (Section 30.3), Gradle (Section 30.4), IntelliJ IDEA (Section 30.5), Eclipse (Section 30.6), or tIDE (Section 30.7).
That’s all there is to it! Now you are ready to start using the checkers.
We recommend that you work through the Checker Framework tutorial, which walks you through how to use the Checker Framework in Eclipse or on the command line.
Section 1.4 walks you through a simple example. More detailed instructions for using a checker appear in Chapter 2.
## 1.4 Example use: detecting a null pointer bug
This section gives a very simple example of running the Checker Framework. There is also a tutorial that gives more extensive instructions for using the Checker Framework in Eclipse or on the command line.
1. Let’s consider this very simple Java class. The local variable ref’s type is annotated as @NonNull, indicating that ref must be a reference to a non-null object. Save the file as GetStarted.java.
import org.checkerframework.checker.nullness.qual.*;
public class GetStarted {
void sample() {
@NonNull Object ref = new Object();
}
}
2. Run the Nullness Checker on the class. You can do that from the command line or from an IDE:
1. From the command line, run this command:
javac -processor org.checkerframework.checker.nullness.NullnessChecker GetStarted.java
where javac is set as in Section 30.1.
2. To compile within your IDE, you must have customized it to use the Checker Framework compiler and to pass the extra arguments (see Chapter 30).
The compilation should complete without any errors.
3. Let’s introduce an error now. Modify ref’s assignment to:
@NonNull Object ref = null;
4. Run the Nullness Checker again, just as before. This run should emit the following error:
GetStarted.java:5: incompatible types.
found : @Nullable <nulltype>
required: @NonNull Object
@NonNull Object ref = null;
^
1 error
The type qualifiers (e.g., @NonNull) are permitted anywhere that you can write a type, including generics and casts; see Section 2.1. Here are some examples:
@Interned String intern() { ... } // return value
int compareTo(@NonNull String other) { ... } // parameter
@NonNull List<@Interned String> messages; // non-null list of interned Strings
# Chapter 2 Using a checker
A pluggable type-checker enables you to detect certain bugs in your code, or to prove that they are not present. The verification happens at compile time.
Finding bugs, or verifying their absence, with a checker plugin is a two-step process, whose steps are described in Sections 2.1 and 2.2.
1. The programmer writes annotations, such as @NonNull and @Interned, that specify additional information about Java types. (Or, the programmer uses an inference tool to automatically insert annotations in his code: see Sections 3.3.7 and 20.2.2.) It is possible to annotate only part of your code: see Section 27.1.
2. The checker reports whether the program contains any erroneous code — that is, code that is inconsistent with the annotations.
This chapter is structured as follows:
• Section 2.1: How to write annotations
• Section 2.2: How to run a checker
• Section 2.3: What the checker guarantees
• Section 2.4: Tips about writing annotations
Additional topics that apply to all checkers are covered later in the manual:
• Chapter 25: Advanced type system features
• Chapter 26: Suppressing warnings
• Chapter 27: Handling legacy code
• Chapter 28: Annotating libraries
• Chapter 29: How to create a new checker
• Chapter 30: Integration with external tools
Finally, there is a tutorial that walks you through using the Checker Framework in Eclipse or on the command line.
## 2.1 Writing annotations
The syntax of type annotations in Java is specified by the Java Language Specification (Java SE 8 edition). Java 5 permitted annotations on declarations. Java 8 also permits annotations anywhere that you would write a type, including generics and casts. You can also write annotations to indicate type qualifiers for array levels and receivers. Here are a few examples:
@Interned String intern() { ... } // return value
int compareTo(@NonNull String other) { ... } // parameter
@NonNull List<@Interned String> messages; // generics: non-null list of interned Strings
@Interned String @NonNull [] messages; // arrays: non-null array of interned Strings
You can also write the annotations within comments, as in List</*@NonNull*/ String>. The Checker Framework compiler, which is distributed with the Checker Framework, will still process the annotations. However, your code will remain compilable by people who are not using the Checker Framework compiler. For more details, see Section 27.2.1.
## 2.2 Running a checker
To run a checker plugin, run the compiler javac as usual, but pass the -processor plugin_class command-line option. A concrete example (using the Nullness Checker) is:
javac -processor NullnessChecker MyFile.java
where javac is as specified in Section 30.1.
You can also run a checker from within your favorite IDE or build system. See Chapter 30 for details about Ant (Section 30.2), Maven (Section 30.3), Gradle (Section 30.4), IntelliJ IDEA (Section 30.5), Eclipse (Section 30.6), and tIDE (Section 30.7), and about customizing other IDEs and build tools.
The checker is run on only the Java files that javac compiles. This includes all Java files specified on the command line (or created by another annotation processor). It may also include other of your Java files (but not if a more recent .class file exists). Even when the checker does not analyze a class (say, the class was already compiled, or source code is not available), it does check the uses of those classes in the source code being compiled.
You can always compile the code without the -processor command-line option, but in that case no checking of the type annotations is performed. Furthermore, only explicitly-written annotations are written to the .class file; defaulted annotations are not, and this will interfere with type-checking of clients that use your code. Therefore, it is strongly recommended that whenever you are creating .class files that will be distributed or compiled against, you run the type-checkers for all the annotations that you have written.
### 2.2.1 Distributing your annotated project
You have two main options for distributing your compiled code (.jar files).
• Option 1: no annotations appear in the .jar files. There is no run-time dependence on the Checker Framework, and the distributed .jar files are not useful for pluggable type-checking of client code.
Write annotations in comments (see Section 27.2.1). Developers perform pluggable type-checking in-house to detect errors and verify their absence. To create the distributed .jar files, use a normal Java compiler, which ignores the annotations.
• Option 2: annotations appear in the .jar files. The distributed .jar files can be used for pluggable type-checking of client code. The .jar files are only compatible with a Java 8 JVM, unless you do extra work (see Section 27.2.5).
Write annotations in comments or not in comments (it doesn’t matter which). Developers perform pluggable type-checking in-house to detect errors and verify their absence. When you create .class files, use the Checker Framework compiler (Section 30) and run each relevant type system. Create the distributed .jar files from those .class files, and also include the contents of checker-framework/checker/dist/checker-qual.jar from the Checker Framework distribution, to define the annotations.
### 2.2.2 Summary of command-line options
You can pass command-line arguments to a checker via javac’s standard -A option (“A” stands for “annotation”). All of the distributed checkers support the following command-line options.
Unsound checking: ignore some errors
• -AskipUses, -AonlyUses Suppress all errors and warnings at all uses of a given class — or at all uses except those of a given class. See Section 26.4
• -AskipDefs, -AonlyDefs Suppress all errors and warnings within the definition of a given class — or everywhere except within the definition of a given class. See Section 26.5
• -AsuppressWarnings Suppress all warnings matching the given key; see Section 26.3
• -AignoreRawTypeArguments Ignore subtype tests for type arguments that were inferred for a raw type. If possible, it is better to write the type arguments. See Section 24.1.1.
• -AassumeSideEffectFree Unsoundly assume that every method is side-effect-free; see Section 25.4.5.
• -AassumeAssertionsAreEnabled, -AassumeAssertionsAreDisabled Whether to assume that assertions are enabled or disabled; see Section 25.4.6.
• -AuseDefaultsForUncheckedCode Enables/disables unchecked code defaults. Takes arguments “source,bytecode”. “-source,-bytecode” is the default setting. “bytecode” specifies whether the checker should apply unchecked code defaults to bytecode; see Section 25.3.5. Outside the scope of any relevant @AnnotatedFor annotation, “source” specifies whether unchecked code default annotations are applied to source code and suppress all type-checking warnings; see Section 28.1.
More sound (strict) checking: enable errors that are disabled by default
• -AcheckPurityAnnotations Check the bodies of methods marked @SideEffectFree, @Deterministic, and @Pure to ensure the method satisfies the annotation. By default, the Checker Framework unsoundly trusts the method annotation. See Section 25.4.5.
• -AinvariantArrays Make array subtyping invariant; that is, two arrays are subtypes of one another only if they have exactly the same element type. By default, the Checker Framework unsoundly permits covariant array subtyping, just as Java does. See Section 25.1.
• -AconcurrentSemantics Whether to assume concurrent semantics (field values may change at any time) or sequential semantics; see Section 31.4.4.
Type-checking modes: enable/disable functionality
• -Alint Enable or disable optional checks; see Section 26.6.
• -AshowSuppressWarningKeys With each warning, show all possible keys to suppress that warning; see Section 26.3
• -AsuggestPureMethods Suggest methods that could be marked @SideEffectFree, @Deterministic, or @Pure; see Section 25.4.5.
• -AcheckCastElementType In a cast, require that parameterized type arguments and array elements are the same. By default, the Checker Framework unsoundly permits them to differ, just as Java does. See Section 24.1.6 and Section 25.1.
• -Awarns Treat checker errors as warnings. If you use this, you may wish to also supply -Xmaxwarns 10000, because by default javac prints at most 100 warnings.
Partially-annotated libraries
• -Astubs List of stub files or directories; see Section 28.2.1.
• -AstubWarnIfNotFound Warn if a stub file entry could not be found; see Section 28.2.1.
• -AuseDefaultsForUncheckedCode=source Outside the scope of any relevant @AnnotatedFor annotation, use unchecked code default annotations and suppress all type-checking warnings; see Section 28.1.
Debugging
• -AprintAllQualifiers, -Adetailedmsgtext, -AprintErrorStack, -Anomsgtext Amount of detail in messages; see Section 29.9.1.
• -Aignorejdkastub, -Anocheckjdk, -AstubDebug Stub and JDK libraries; see Section 29.9.2
• -Afilenames, -Ashowchecks Progress tracing; see Section 29.9.3
• -AoutputArgsToFile Output the compiler command-line arguments to a file. Useful when the command line is generated and executed by a tool and is not fully under your control. A standalone command line that can be executed independently of the tool that generated it can make it easier to reproduce and debug issues. For example, the command line can be modified to enable attaching a debugger. See Section 29.9.4
• -Aflowdotdir, -AresourceStats Miscellaneous debugging options; see Section 29.9.5
Some checkers support additional options, which are described in that checker’s manual section. For example, -Aquals tells the Subtyping Checker (see Chapter 22) and the Fenum Checker (see Chapter 7) which annotations to check.
Here are some standard javac command-line options that you may find useful. Many of them contain the word “processor”, because in javac jargon, a checker is an “annotation processor”.
• -processor Names the checker to be run; see Section 2.2
• -processorpath Indicates where to search for the checker; should also contain any qualifiers used by the Subtyping Checker; see Section 22.2
• -proc:{none,only} Controls whether checking happens; -proc:none means to skip checking; -proc:only means to do only checking, without any subsequent compilation; see Section 2.2.3
• -implicit:class Suppresses warnings about implicitly compiled files (not named on the command line); see Section 30.2
• -XDTA:noannotationsincomments and -XDTA:spacesincomments to turn off parsing annotation comments and to turn on parsing annotation comments even when they contain spaces; applicable only to the Checker Framework compiler; see Section 27.2.1
• -J Supply an argument to the JVM that is running javac; for example, -J-Xmx2500m to increase its maximum heap size
• -doe To “dump on error”, that is, output a stack trace whenever a compiler warning/error is produced. Useful when debugging the compiler or a checker.
### 2.2.3 Checker auto-discovery
“Auto-discovery” makes the javac compiler always run a checker plugin, even if you do not explicitly pass the -processor command-line option. This can make your command line shorter, and ensures that your code is checked even if you forget the command-line option.
To enable auto-discovery, place a configuration file named META-INF/services/javax.annotation.processing.Processor in your classpath. The file contains the names of the checker plugins to be used, listed one per line. For instance, to run the Nullness Checker and the Interning Checker automatically, the configuration file should contain:
org.checkerframework.checker.nullness.NullnessChecker
org.checkerframework.checker.interning.InterningChecker
You can disable this auto-discovery mechanism by passing the -proc:none command-line option to javac, which disables all annotation processing including all pluggable type-checking.
### 2.2.4 Shorthand for built-in checkers
Ordinarily, the -processor flag expects fully-qualified class names. For checkers that are packaged with the Checker Framework, the fully-qualified name can be quite long. Therefore, when running a built-in checker, you may omit the package name and the Checker suffix. The following three commands are equivalent:
javac -processor org.checkerframework.checker.nullness.NullnessChecker MyFile.java
javac -processor NullnessChecker MyFile.java
javac -processor nullness MyFile.java
This feature will work when multiple checkers are specified. For example:
javac -processor NullnessChecker,RegexChecker MyFile.java
javac -processor nullness,regex MyFile.java
This feature does not apply to Javac @argfiles.
## 2.3 What the checker guarantees
A checker can guarantee that a particular property holds throughout the code. For example, the Nullness Checker (Chapter 3) guarantees that every expression whose type is a @NonNull type never evaluates to null. The Interning Checker (Chapter 5) guarantees that every expression whose type is an @Interned type evaluates to an interned value. The checker makes its guarantee by examining every part of your program and verifying that no part of the program violates the guarantee.
There are some limitations to the guarantee.
• A compiler plugin can check only those parts of your program that you run it on. If you compile some parts of your program without running the checker, then there is no guarantee that the entire program satisfies the property being checked. Some examples of un-checked code are:
• Code compiled without the -processor switch, including any external library supplied as a .class file.
• Code compiled with the -AskipUses, -AonlyUses, -AskipDefs or -AonlyDefs properties (see Section 26).
• Suppression of warnings, such as via the @SuppressWarnings annotation (see Section 26).
• Native methods (because the implementation is not Java code, it cannot be checked).
In each of these cases, any use of the code is checked — for example, a call to a native method must be compatible with any annotations on the native method’s signature. However, the annotations on the un-checked code are trusted; there is no verification that the implementation of the native method satisfies the annotations.
• The Checker Framework is, by default, unsound in a few places where a conservative analysis would issue too many false positive warnings. These are listed in Section 2.2.2. You can supply a command-line argument to make the Checker Framework sound for each of these cases.
• Specific checkers may have other limitations; see their documentation for details.
A checker can be useful in finding bugs or in verifying part of a program, even if the checker is unable to verify the correctness of an entire program.
In order to avoid a flood of unhelpful warnings, many of the checkers avoid issuing the same warning multiple times. For example, in this code:
@Nullable Object x = ...;
x.toString(); // warning
x.toString(); // no warning
In this case, the second call to toString cannot possibly throw a null pointer exception — x is non-null if control flows to the second statement. In other cases, a checker avoids issuing later warnings with the same cause even when later code in a method might also fail. This does not affect the soundness guarantee, but a user may need to examine more warnings after fixing the first ones identified. (More often, at least in our experience to date, a single fix corrects all the warnings.)
If you find that a checker fails to issue a warning that it should, then please report a bug (see Section 32.2).
## 2.4 Tips about writing annotations
### 2.4.1 How to get started annotating legacy code
Annotating an entire existing program may seem like a daunting task. But, if you approach it systematically and do a little bit at a time, you will find that it is manageable.
Start small, focusing on some specific property that matters to you and on the most mission-critical or error-prone part of your code. It is easiest to add annotations if you know the code or the code contains documentation; you will find that you spend most of your time understanding the code, and very little time actually writing annotations or running the checker.
Start by annotating just part of your program. Be systematic, such as annotating an entire class at a time (not just some of the methods) so that you don’t lose track of your work. You may find it helpful to start annotating the leaves of the call tree — that is, start with methods/classes/packages that have few dependencies on other code or, equivalently, start with code that a lot of your other code depends on. The reason for this is that it is easiest to annotate a class if the code it calls has already been annotated.
For each class, read its Javadoc. For instance, if you are adding annotations for the Nullness Checker (Section 3), then you can search the documentation for “null” and then add @Nullable anywhere appropriate. Then annotate signatures and fields; there is no need to annotate method bodies. The only reason to even read the method bodies yet is to determine signature annotations for undocumented methods — for example, if the method returns null, you know its return type should be annotated @Nullable, and a parameter that is compared against null may need to be annotated @Nullable.
After you have annotated all the signatures, run the checker. Then, fix bugs in code and add/modify annotations as necessary. Don’t get discouraged if you see many type-checker warnings at first. Often, adding just a few missing annotations will eliminate many warnings, and you’ll be surprised how fast the process goes overall.
You may wonder about the effect of adding a given annotation — how many other annotations it will require, or whether it conflicts with other code. Suppose you have added an annotation to a method parameter. You could manually examine all callees. A better way can be to save the checker output before adding the annotation, and to compare it to the checker output after adding the annotation. This helps you to focus on the specific consequences of your change.
Also see Chapter 26, which tells you what to do when you are unable to eliminate checker warnings, and Chapter 28, which tells you how to annotate libraries that your code uses.
### 2.4.2 Do not annotate local variables unless necessary
The checker infers annotations for local variables (see Section 25.4). Usually, you only need to annotate fields and method signatures. After doing those, you can add annotations inside method bodies if the checker is unable to infer the correct annotation, if you need to suppress a warning (see Section 26), etc.
### 2.4.3 Annotations indicate normal behavior
You should use annotations to specify normal behavior. The annotations indicate all the values that you want to flow to a reference — not every value that might possibly flow there if your program has a bug.
Many methods are guaranteed to throw an exception if they are passed null as an argument. Examples include
java.lang.Double.valueOf(String)
java.lang.String.contains(CharSequence)
org.junit.Assert.assertNotNull(Object)
@Nullable (see Section 3.2) might seem like a reasonable annotation for the parameter, for two reasons. First, null is a legal argument with a well-defined semantics: throw an exception. Second, @Nullable describes a possible program execution: it might be possible for null to flow there, if your program has a bug.
However, it is never useful for a programmer to pass null. It is the programmer’s intention that null never flows there. If null does flow there, the program will not continue normally (whether or not it throws a NullPointerException).
Therefore, you should mark such parameters as @NonNull, indicating the intended use of the method. When you use the @NonNull annotation, the checker is able to issue compile-time warnings about possible run-time exceptions, which is its purpose. Marking the parameter as @Nullable would suppress such warnings, which is undesirable.
If a method can possibly throw exception because its parameter is null, then that parameter’s type should be @NonNull, which guarantees that the type-checker will issue a warning for every client use that has the potential to cause an exception. Don’t write @Nullable on the parameter just because there exist some executions that don’t necessarily throw an exception.
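As a concrete sketch in plain Java (the intended Checker Framework annotation is shown in a comment so the snippet compiles without the checker-qual dependency; `parsePort` is a hypothetical method):

```java
public class PortParser {
    // Intended signature: int parsePort(@NonNull String s)
    // Passing null is never useful: the method immediately fails.
    public static int parsePort(/*@NonNull*/ String s) {
        if (s == null) {
            // Annotating the parameter @Nullable would hide this failure mode
            // from the checker; leaving it @NonNull (the default) makes the
            // checker warn at every call site that might pass null.
            throw new IllegalArgumentException("s must not be null");
        }
        return Integer.parseInt(s);
    }
}
```

With the @NonNull contract, a client that might pass null gets a compile-time warning instead of a run-time exception.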
### 2.4.4 Subclasses must respect superclass annotations
An annotation indicates a guarantee that a client can depend upon. A subclass is not permitted to weaken the contract; for example, if a method accepts null as an argument, then every overriding definition must also accept null. A subclass is permitted to strengthen the contract; for example, if a method does not accept null as an argument, then an overriding definition is permitted to accept null.
As a bad example, consider an erroneous @Nullable annotation at line 141 of com/google/common/collect/Multiset.java, version r78:
101 public interface Multiset<E> extends Collection<E> {
...
122 /**
123 * Adds a number of occurrences of an element to this multiset.
...
129 * @param element the element to add occurrences of; may be {@code null} only
130 * if explicitly allowed by the implementation
...
137 * @throws NullPointerException if {@code element} is null and this
138 * implementation does not permit null elements. Note that if {@code
139 * occurrences} is zero, the implementation may opt to return normally.
140 */
141 int add(@Nullable E element, int occurrences);
There exist implementations of Multiset that permit null elements, and implementations of Multiset that do not permit null elements. A client with a variable Multiset ms does not know which variety of Multiset ms refers to. However, the @Nullable annotation promises that ms.add(null, 1) is permissible. (Recall from Section 2.4.3 that annotations should indicate normal behavior.)
If parameter element on line 141 were to be annotated, the correct annotation would be @NonNull. Suppose a client has a reference to a Multiset ms. The only way the client can be sure not to throw an exception is to pass only non-null elements to ms.add(). A particular class that implements Multiset could declare add to take a @Nullable parameter. That still satisfies the original contract. It strengthens the contract by promising even more: a client with such a reference can pass any non-null value to add(), and may also pass null.
However, the best annotation for line 141 is no annotation at all. The reason is that each implementation of the Multiset interface should specify its own nullness properties when it specifies the type parameter for Multiset. For example, two implementations could be written as
class MyNullPermittingMultiset implements Multiset<@Nullable Object> { ... }
class MyNullProhibitingMultiset implements Multiset<@NonNull Object> { ... }
or, more generally, as
class MyNullPermittingMultiset<E extends @Nullable Object> implements Multiset<E> { ... }
class MyNullProhibitingMultiset<E extends @NonNull Object> implements Multiset<E> { ... }
Then, the specification is more informative, and the Checker Framework is able to do more precise checking, than if line 141 had an annotation.
It is a pleasant feature of the Checker Framework that in many cases, no annotations at all are needed on type parameters such as E in Multiset.
### 2.4.5 Annotations on constructor invocations
In the checkers distributed with the Checker Framework, an annotation on a constructor invocation is equivalent to a cast on a constructor result. That is, the following two expressions have identical semantics: one is just shorthand for the other.
new @ReadOnly Date()
(@ReadOnly Date) new Date()
However, you should rarely have to use this. The Checker Framework will determine the qualifier on the result, based on the “return value” annotation on the constructor definition. The “return value” annotation appears before the constructor name, for example:
class MyClass {
  @ReadOnly MyClass() { ... }
}
In general, you should only use an annotation on a constructor invocation when you know that the cast is guaranteed to succeed. An example from the IGJ checker (Chapter 19) is new @Immutable MyClass() or new @Mutable MyClass(), where you know that every other reference to the class is annotated @ReadOnly.
### 2.4.6 What to do if a checker issues a warning about your code
When you first run a type-checker on your code, it is likely to issue warnings or errors. For each warning, try to understand why the checker issues it. (If you think the warning is wrong, then formulate an argument about why your code is actually correct; also see Section 32.1.2.) For example, if you are using the Nullness Checker (Chapter 3), try to understand why it cannot prove that no null pointer exception ever occurs. There are three general reasons, listed below. You will need to examine your code, and possibly write test cases, to understand the reason.
1. There is a bug in your code, such as an actual possible null dereference. Fix your code to prevent that crash.
2. There is a weakness in the annotations. Improve the annotations. For example, continuing the Nullness Checker example, if a particular variable is annotated as @Nullable but it actually never contains null at run time, then change the annotation to @NonNull. The weakness might be in the annotations in your code, or in the annotations in a library that your code calls. Another possible problem is that a library is unannotated (see Chapter 28).
3. There is a weakness in the type-checker. Then your code is safe — it never suffers the error at run time — but the checker cannot prove this fact. The checker is not omniscient, and some tricky coding paradigms are beyond its analysis capabilities. In this case, you should suppress the warning; see Chapter 26. (Alternatively, if the weakness is a bug in the checker, then please report the bug; see Chapter 32.2.)
If you have trouble understanding a Checker Framework warning message, you can search for its text in this manual. Oftentimes there is an explanation of what to do.
Also see Chapter 32, Troubleshooting.
# Chapter 3 Nullness Checker
If the Nullness Checker issues no warnings for a given program, then running that program will never throw a null pointer exception. This guarantee enables a programmer to prevent errors from occurring when a program is run. See Section 3.1 for more details about the guarantee and what is checked.
The most important annotations supported by the Nullness Checker are @NonNull and @Nullable. @NonNull is rarely written, because it is the default. All of the annotations are explained in Section 3.2.
To run the Nullness Checker, supply the -processor org.checkerframework.checker.nullness.NullnessChecker command-line option to javac. For examples, see Section 3.5.
The NullnessChecker is actually an ensemble of three pluggable type-checkers that work together: the Nullness Checker proper (which is the main focus of this chapter), the Initialization Checker (Section 3.8), and the Map Key Checker (Chapter 4). Their type hierarchies are completely independent, but they work together to provide precise nullness checking.
## 3.1 What the Nullness Checker checks
The checker issues a warning in these cases:
1. When an expression of non-@NonNull type is dereferenced, because it might cause a null pointer exception. Dereferences occur not only when a field is accessed, but when an array is indexed, an exception is thrown, a lock is taken in a synchronized block, and more. For a complete description of all checks performed by the Nullness Checker, see the Javadoc for NullnessVisitor.
2. When an expression of @NonNull type might become null, because it is a misuse of the type: the null value could flow to a dereference that the checker does not warn about.
As a special case of an expression of @NonNull type becoming null, the checker also warns whenever a field of @NonNull type is not initialized in a constructor. Also see the discussion of the -Alint=uninitialized command-line option below.
This example illustrates the programming errors that the checker detects:
@Nullable Object obj; // might be null
@NonNull Object nnobj; // never null
...
obj.toString() // checker warning: dereference might cause null pointer exception
nnobj = obj; // checker warning: nnobj may become null
if (nnobj == null) // checker warning: redundant test
Parameter passing and return values are checked analogously to assignments.
The Nullness Checker also checks the correctness, and correct use, of rawness annotations for checking initialization (see Section 3.8.7) and of map key annotations (see Chapter 4).
The checker performs additional checks if certain -Alint command-line options are provided. (See Section 26.6 for more details about the -Alint command-line option.)
1. If you supply the -Alint=redundantNullComparison command-line option, then the checker warns when a null check is performed against a value that is guaranteed to be non-null, as in ("m" == null). Such a check is unnecessary and might indicate a programmer error or misunderstanding. The lint option is disabled by default because sometimes such checks are part of ordinary defensive programming.
2. If you supply the -Alint=uninitialized command-line option, then the checker warns if a constructor fails to initialize any field, including @Nullable types and primitive types. Such a warning is unrelated to whether your code might throw a null pointer exception. However, you might want to enable this warning because it is better code style to supply an explicit initializer, even if there is a default value such as 0 or false. This command-line option does not affect the Nullness Checker’s tests that fields of @NonNull type are initialized — such initialization is mandatory, not optional.
3. If you supply the -Alint=forbidnonnullarraycomponents command-line option, then the checker warns if it encounters an array creation with a non-null component type. See Section 3.3.4 for a discussion.
## 3.2 Nullness annotations
The Nullness Checker uses three separate type hierarchies: one for nullness, one for rawness (Section 3.8.7), and one for map keys (Chapter 4). The Nullness Checker has four varieties of annotations: nullness type qualifiers, nullness method annotations, rawness type qualifiers, and map key type qualifiers.
### 3.2.1 Nullness qualifiers
The nullness hierarchy contains these qualifiers:
@Nullable
indicates a type that includes the null value. For example, the type Boolean is nullable: a variable of type Boolean always has one of the values TRUE, FALSE, or null.
@NonNull
indicates a type that does not include the null value. The type boolean is non-null; a variable of type boolean always has one of the values true or false. The type @NonNull Boolean is also non-null: a variable of type @NonNull Boolean always has one of the values TRUE or FALSE — never null. Dereferencing an expression of non-null type can never cause a null pointer exception.
The @NonNull annotation is rarely written in a program, because it is the default (see Section 3.3.2).
@PolyNull
indicates qualifier polymorphism. For a description of @PolyNull, see Section 24.2.
@MonotonicNonNull
indicates a reference that may be null, but if it ever becomes non-null, then it never becomes null again. This is appropriate for lazily-initialized fields, among other uses. When the variable is read, its type is treated as @Nullable, but when the variable is assigned, its type is treated as @NonNull.
Because the Nullness Checker works intraprocedurally (it analyzes one method at a time), when a MonotonicNonNull field is first read within a method, the field cannot be assumed to be non-null. The benefit of MonotonicNonNull over Nullable is its different interaction with flow-sensitive type qualifier refinement (Section 25.4). After a check of a MonotonicNonNull field, all subsequent accesses within that method can be assumed to be NonNull, even after arbitrary external method calls that have access to the given field.
It is permitted to initialize a MonotonicNonNull field to null, but the field may not be assigned to null anywhere else in the program. If you supply the noInitForMonotonicNonNull lint flag (for example, supply -Alint=noInitForMonotonicNonNull on the command line), then @MonotonicNonNull fields are not allowed to have initializers.
Use of @MonotonicNonNull on a static field is a code smell: it may indicate poor design. You should consider whether it is possible to make the field a member field that is set in the constructor.
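The lazy-initialization pattern that @MonotonicNonNull is designed for can be sketched as follows (the annotation appears in a comment so the snippet compiles without the checker-qual jar; `LazyName` and `computeName` are illustrative names):

```java
public class LazyName {
    // Intended annotation: @MonotonicNonNull String name;
    // The field starts out null but, once assigned a non-null value,
    // is never set back to null.
    private /*@MonotonicNonNull*/ String name;

    public String getName() {
        if (name == null) {
            name = computeName(); // monotonic: assigned non-null exactly once
        }
        // After the null check above, the checker treats `name` as @NonNull
        // for the rest of this method, even across calls to other methods.
        return name;
    }

    private String computeName() {
        return "default";
    }
}
```

Had the field been declared @Nullable instead, any intervening method call would invalidate the refinement, forcing redundant null checks.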
Figure 3.1 shows part of the type hierarchy for the Nullness type system.
### 3.2.2 Nullness method annotations
The Nullness Checker supports several annotations that specify method behavior. These are declaration annotations, not type annotations: they apply to the method itself rather than to some particular type.
@RequiresNonNull
indicates a method precondition: The annotated method expects the specified variables (typically field references) to be non-null when the method is invoked.
@EnsuresNonNull
@EnsuresNonNullIf
indicates a method postcondition. With @EnsuresNonNull, the given expressions are non-null after the method returns; this is useful for a method that initializes a field, for example. With @EnsuresNonNullIf, if the annotated method returns the given boolean value (true or false), then the given expressions are non-null. See Section 3.3.3 and the Javadoc for examples of their use.
### 3.2.3 Initialization qualifiers
The Nullness Checker invokes an Initialization Checker, whose annotations indicate whether an object is fully initialized — that is, whether all of its fields have been assigned.
@Initialized
@UnknownInitialization
@UnderInitialization
Use of these annotations can help you to type-check more code. Figure 3.3 shows its type hierarchy. For details, see Section 3.8.
A slightly simpler variant, called the Rawness Initialization Checker, is also available:
@Raw
@NonRaw
@PolyRaw
Figure 3.7 shows its type hierarchy. For details, see Section 3.8.7.
### 3.2.4 Map key qualifiers
@KeyFor
indicates that a value is a key for a given map — that is, indicates whether map.containsKey(value) would evaluate to true.
This annotation is checked by a Map Key Checker (Chapter 4) that the Nullness Checker invokes. The @KeyFor annotation enables the Nullness Checker to treat calls to Map.get precisely rather than assuming it may always return null. In particular, a call mymap.get(mykey) returns a non-null value if two conditions are satisfied:
1. mymap’s values are all non-null; that is, mymap was declared as Map<KeyType, @NonNull ValueType>. Note that @NonNull is the default type, so it need not be written explicitly.
2. mykey is a key in mymap; that is, mymap.containsKey(mykey) returns true. You express this fact to the Nullness Checker by declaring mykey as @KeyFor("mymap") KeyType mykey. For a local variable, you generally do not need to write the @KeyFor("mymap") type qualifier, because it can be inferred.
If either of these two conditions is violated, then mymap.get(mykey) has the possibility of returning null.
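The two conditions can be sketched in plain Java (the intended @KeyFor annotation is shown in a comment so the snippet compiles without the checker-qual jar; `KeyForDemo` is an illustrative name):

```java
import java.util.Map;

public class KeyForDemo {
    // Intended signature: String lookup(Map<String, String> map,
    //                                   @KeyFor("#1") String key)
    // i.e., key is known to be a key of the map passed as the first argument.
    public static String lookup(Map<String, String> map, String key) {
        if (!map.containsKey(key)) {
            throw new IllegalArgumentException("not a key: " + key);
        }
        // With @KeyFor (and @NonNull values, the default), the checker
        // knows this get() cannot return null.
        return map.get(key);
    }
}
```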
## 3.3 Writing nullness annotations
### 3.3.1 Implicit qualifiers
As described in Section 25.3, the Nullness Checker adds implicit qualifiers, reducing the number of annotations that must appear in your code. For example, enum types are implicitly non-null, so you never need to write @NonNull MyEnumType.
For a complete description of all implicit nullness qualifiers, see the Javadoc for NullnessAnnotatedTypeFactory.
### 3.3.2 Default annotation
Unannotated references are treated as if they had a default annotation. The standard defaulting rule is CLIMB-to-top, described in Section 25.3.2. Its effect is to default all types to @NonNull, except that @Nullable is used for casts, locals, instanceof, and implicit bounds. A user can choose a different defaulting rule.
### 3.3.3 Conditional nullness
The Nullness Checker supports a form of conditional nullness types, via the @EnsuresNonNullIf method annotations. The annotation on a method declares that certain expressions are non-null if the method returns true (or false, respectively).
Consider java.lang.Class. Method Class.getComponentType() may return null, but it is specified to return a non-null value if Class.isArray() is true. You could declare this relationship in the following way (this particular example is already done for you in the annotated JDK that comes with the Checker Framework):
class Class {
@EnsuresNonNullIf(expression="getComponentType()", result=true)
public native boolean isArray();
public native @Nullable Class<?> getComponentType();
}
A client that checks that a Class reference is indeed that of an array, can then de-reference the result of Class.getComponentType safely without any nullness check. The Checker Framework source code itself uses such a pattern:
if (clazz.isArray()) {
// no possible null dereference on the following line
TypeMirror componentType = typeFromClass(clazz.getComponentType());
...
}
Another example is Queue.peek and Queue.poll, which return non-null if isEmpty returns false.
The argument to @EnsuresNonNullIf is a Java expression, including method calls (as shown above), method formal parameters, fields, etc.; for details, see Section 25.5. More examples of the use of these annotations appear in the Javadoc for @EnsuresNonNullIf.
### 3.3.4 Nullness and arrays
The components of a newly created array of reference type are all null. Only after initialization can the array actually be considered to contain non-null components. Therefore, the following is not allowed:
@NonNull Object [] oa = new @NonNull Object[10]; // error
Instead, one creates a nullable or lazy-nonnull array, initializes each component, and then assigns the result to a non-null array:
@MonotonicNonNull Object [] temp = new @MonotonicNonNull Object[10];
for (int i = 0; i < temp.length; ++i) {
temp[i] = new Object();
}
@SuppressWarnings("nullness") // temp array is now fully initialized
@NonNull Object [] oa = temp;
Note that the checker is currently not powerful enough to ensure that each array component was initialized. Therefore, the last assignment needs to be trusted: that is, a programmer must verify that it is safe, then write a @SuppressWarnings annotation.
You need to supply the -Alint=forbidnonnullarraycomponents command-line option to enable this behavior. For backwards-compatibility reasons, the default behavior is currently to unsoundly allow non-null array components.
### 3.3.5 Run-time checks for nullness
When you perform a run-time check for nullness, such as if (x != null) ..., then the Nullness Checker refines the type of x to @NonNull within the scope of the test. For more details, see Section 25.4.
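For example, in the following sketch the checker would accept the dereference inside the null test (the intended @Nullable annotation is shown as a comment so the snippet compiles without the checker-qual jar):

```java
public class RefinementDemo {
    // Intended signature: int lengthOrZero(@Nullable String s)
    public static int lengthOrZero(/*@Nullable*/ String s) {
        if (s != null) {
            return s.length(); // s is refined to @NonNull here: safe dereference
        }
        return 0; // s may still be null on this path
    }
}
```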
The Nullness Checker does some special checks in certain circumstances, in order to soundly reduce the number of warnings that it produces.
For example, a call to System.getProperty(String) can return null in general, but it will not return null if the argument is one of the built-in keys listed in the documentation of System.getProperties(). The Nullness Checker is aware of this fact, so you do not have to suppress a warning for a call like System.getProperty("line.separator"). The warning is still issued for code like this:
final String s = "line.separator";
nonNullvar = System.getProperty(s);
though that case could be handled as well, if desired. (Suppression of the warning is, strictly speaking, not sound, because a library that your code calls, or your code itself, could perversely change the system properties; the Nullness Checker assumes this bizarre coding pattern does not happen.)
### 3.3.7 Inference of @NonNull and @Nullable annotations
It can be tedious to write annotations in your code. Tools exist that can automatically infer annotations and insert them in your source code. (This is different than type qualifier refinement for local variables (Section 25.4), which infers a more specific type for local variables and uses them during type-checking but does not insert them in your source code. Type qualifier refinement is always enabled, no matter how annotations on signatures got inserted in your source code.)
Your choice of tool depends on what default annotation (see Section 3.3.2) your code uses. You only need one of these tools.
## 3.4 Suppressing nullness warnings
When the Nullness Checker reports a warning, it’s best to change the code or its annotations, to eliminate the warning. Alternately, you can suppress the warning, which does not change the code but prevents the Nullness Checker from reporting this particular warning to you.
The Checker Framework supplies several ways to suppress warnings, most notably the @SuppressWarnings("nullness") annotation (see Section 26). An example use is
// might return null
@Nullable Object getObject(...) { ... }
void myMethod() {
@SuppressWarnings("nullness") // with argument x, getObject always returns a non-null value
@NonNull Object o2 = getObject(x);
}
The Nullness Checker supports an additional warning suppression key, nullness:generic.argument. Use of @SuppressWarnings("nullness:generic.argument") causes the Nullness Checker to suppress warnings related to misuse of generic type arguments. One use for this key is when a class is declared to take only @NonNull type arguments, but you want to instantiate the class with a @Nullable type argument, as in List<@Nullable Object>. For a more complete explanation of this example, see Section 31.6.1.
The Nullness Checker also permits you to use assertions or method calls to suppress warnings; see below.
### 3.4.1 Suppressing warnings with assertions and method calls
Occasionally, it is inconvenient or verbose to use the @SuppressWarnings annotation. For example, Java does not permit annotations such as @SuppressWarnings to appear on statements. In such cases, you can use the @AssumeAssertion string in an assert message (see Section 26.2).
If you need to suppress a warning within an expression, then sometimes writing an assertion is not convenient. In such a case, you can suppress warnings by writing a call to the NullnessUtils.castNonNull method. The rest of this section discusses the castNonNull method.
The Nullness Checker considers both the return value, and also the argument, to be non-null after the castNonNull method call. The Nullness Checker issues no warnings in any of the following code:
// One way to use castNonNull as a cast:
@NonNull String s = castNonNull(possiblyNull1);
// Another way to use castNonNull as a cast:
castNonNull(possiblyNull2).toString();
// It is possible, but not recommended, to use castNonNull as a statement:
// (It would be better to write an assert statement with @AssumeAssertion.)
castNonNull(possiblyNull3);
possiblyNull3.toString();
The castNonNull method throws AssertionError if Java assertions are enabled and the argument is null. However, it is not intended for general defensive programming; see Section 26.2.1.
A potential disadvantage of using the castNonNull method is that your code becomes dependent on the Checker Framework at run time as well as at compile time. You can avoid this by copying the implementation of castNonNull into your own code, and possibly renaming it if you do not like the name. Be sure to retain the documentation that indicates that your copy is intended for use only to suppress warnings and not for defensive programming. See Section 26.2.1 for an explanation of the distinction.
The Nullness Checker introduces a new method, rather than re-using an existing method such as org.junit.Assert.assertNotNull(Object) or com.google.common.base.Preconditions.checkNotNull(Object). Those methods are commonly used for defensive programming, so it is impossible to know the programmer’s intent when writing them. Therefore, it is important to have a method call that is used only for warning suppression. See Section 26.2.1 for a discussion of the distinction between warning suppression and defensive programming.
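A local copy along the lines described above might look like this (a sketch, not the Checker Framework's actual implementation; the real NullnessUtils.castNonNull carries annotations that the checker understands, and `MyNullnessUtils` is an illustrative name):

```java
public class MyNullnessUtils {
    // Intended contract: <T> @NonNull T castNonNull(@Nullable T ref)
    // Use this ONLY to suppress nullness warnings that you have verified
    // are false positives -- never for defensive programming.
    public static <T> T castNonNull(T ref) {
        // Throws AssertionError only when Java assertions are enabled (-ea).
        assert ref != null : "Misuse of castNonNull: called with a null argument";
        return ref;
    }
}
```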
## 3.5 Examples
### 3.5.1 Tiny examples
To try the Nullness Checker on a source file that uses the @NonNull qualifier, use the following command (where javac is the Checker Framework compiler that is distributed with the Checker Framework):
javac -processor org.checkerframework.checker.nullness.NullnessChecker examples/NullnessExample.java
Compilation will complete without warnings.
To see the checker warn about incorrect usage of annotations (and therefore the possibility of a null pointer exception at run time), use the following command:
javac -processor org.checkerframework.checker.nullness.NullnessChecker examples/NullnessExampleWithWarnings.java
The compiler will issue two warnings regarding violation of the semantics of @NonNull.
### 3.5.2 Example annotated source code
Some libraries that are annotated with nullness qualifiers are:
## 3.6 Tips for getting started
Here are some tips about getting started using the Nullness Checker on a legacy codebase. For more generic advice (not specific to the Nullness Checker), see Section 2.4.1.
Your goal is to add @Nullable annotations to the types of any variables that can be null. (The default is to assume that a variable is non-null unless it has a @Nullable annotation.) Then, you will run the Nullness Checker. Each of its errors indicates either a possible null pointer exception, or a wrong/missing annotation. When there are no more warnings from the checker, you are done!
We recommend that you start by searching the code for occurrences of null in the following locations; when you find one, write the corresponding annotation:
• in Javadoc: add @Nullable annotations to method signatures (parameters and return types).
• return null: add a @Nullable annotation to the return type of the given method.
• param == null: when a formal parameter is compared to null, then in most cases you can add a @Nullable annotation to the formal parameter’s type
• TypeName field = null;: when a field is initialized to null in its declaration, then it needs either a @Nullable or a @MonotonicNonNull annotation. If the field is always set to a non-null value in the constructor, then you can just change the declaration to Type field;, without an initializer, and write no type annotation (because the default is @NonNull).
• declarations of contains, containsKey, containsValue, equals, get, indexOf, lastIndexOf, and remove (with Object as the argument type): change the argument type to @Nullable Object; for remove, also change the return type to @Nullable Object.
You should ignore all other occurrences of null within a method body. In particular, you (almost) never need to annotate local variables.
Only after this step should you run ant to invoke the Nullness Checker. The reason is that it is quicker to search for places to change than to repeatedly run the checker and fix the errors it tells you about, one at a time.
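The field-related transformations above can be sketched in one class (intended annotations shown as comments so the snippet compiles without the checker-qual jar; all names are illustrative):

```java
public class GettingStarted {
    // Always set in the constructor: leave it @NonNull (the default)
    // and drop any `= null` initializer from the declaration.
    private final String title;

    // Legitimately stays null until (and unless) the setter runs:
    // intended annotation @Nullable.
    private /*@Nullable*/ String subtitle;

    public GettingStarted(String title) {
        this.title = title;
    }

    public void setSubtitle(String subtitle) {
        this.subtitle = subtitle;
    }

    // A method documented to "return null when absent":
    // intended annotation @Nullable on the return type.
    public /*@Nullable*/ String getSubtitle() {
        return subtitle;
    }

    public String getTitle() {
        return title;
    }
}
```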
Here are some other tips:
• In any file where you write an annotation such as @Nullable, don’t forget to add import org.checkerframework.checker.nullness.qual.*;.
• To indicate an array that can be null, write, for example: int @Nullable [].
By contrast, @Nullable Object [] means a non-null array that contains possibly-null objects.
• If you know that a particular variable is definitely not null, but the Nullness Checker estimates that the variable might be null, then you can make the Nullness Checker trust your judgment by writing an assertion (see Section 26.2):
assert var != null : "@AssumeAssertion(nullness)";
• To indicate that a routine returns the same value every time it is called, use @Pure (see Section 25.4.5).
• To indicate a method precondition (a contract stating the conditions under which a client is allowed to call it), you can use annotations such as @RequiresNonNull (see Section 3.2.2).
The Checker Framework’s nullness annotations are similar to annotations used in IntelliJ IDEA, FindBugs, JML, the JSR 305 proposal, NetBeans, and other tools. Also see Section 32.5 for a comparison to other tools.
You might prefer to use the Checker Framework because it has a more powerful analysis that can warn you about more null pointer errors in your code.
If your code is already annotated with a different nullness annotation, you can reuse that effort. The Checker Framework comes with cleanroom re-implementations of annotations from other tools. It treats them exactly as if you had written the corresponding annotation from the Nullness Checker, as described in Figure 3.2.
android.annotation.NonNull android.support.annotation.NonNull com.sun.istack.internal.NotNull edu.umd.cs.findbugs.annotations.NonNull javax.annotation.Nonnull javax.validation.constraints.NotNull lombok.NonNull org.eclipse.jdt.annotation.NonNull org.eclipse.jgit.annotations.NonNull org.jetbrains.annotations.NotNull org.jmlspecs.annotation.NonNull org.netbeans.api.annotations.common.NonNull
⇒ org.checkerframework.checker.nullness.qual.NonNull
android.annotation.Nullable android.support.annotation.Nullable com.sun.istack.internal.Nullable edu.umd.cs.findbugs.annotations.Nullable edu.umd.cs.findbugs.annotations.CheckForNull edu.umd.cs.findbugs.annotations.UnknownNullness javax.annotation.Nullable javax.annotation.CheckForNull org.eclipse.jdt.annotation.Nullable org.eclipse.jgit.annotations.Nullable org.jetbrains.annotations.Nullable org.jmlspecs.annotation.Nullable org.netbeans.api.annotations.common.NullAllowed org.netbeans.api.annotations.common.CheckForNull org.netbeans.api.annotations.common.NullUnknown
⇒ org.checkerframework.checker.nullness.qual.Nullable
Figure 3.2: Correspondence between other nullness annotations and the Checker Framework’s annotations.
Alternately, the Checker Framework can process those other annotations (as well as its own, if they also appear in your program). The Checker Framework has its own definition of the annotations on the left side of Figure 3.2, so that they can be used as type qualifiers. The Checker Framework interprets them according to the right side of Figure 3.2.
The Checker Framework may issue more or fewer errors than another tool. This is expected, since each tool uses a different analysis. Remember that the Checker Framework aims at soundness: it aims to never miss a possible null dereference, while at the same time limiting false reports. Also, note FindBugs’s non-standard meaning for @Nullable (Section 3.7.2).
Because some of the names are the same (NonNull, Nullable), you can import at most one of the annotations with conflicting names; the other(s) must be written out fully rather than imported.
Note that some older tools interpret array and varargs declarations inconsistently with the Java specification. For example, they might interpret @NonNull Object [] as “non-null array of objects”, rather than as “array of non-null objects” which is the correct Java interpretation. Such an interpretation is unfortunate and confusing. See Section 31.5.3 for some more details about this issue.
### 3.7.1 Which tool is right for you?
Different tools are appropriate in different circumstances. Here is a brief comparison with FindBugs, but similar points apply to other tools.
The Checker Framework has a more powerful nullness analysis; FindBugs misses some real errors. However, FindBugs does not require you to annotate your code as thoroughly as the Checker Framework does. Depending on the importance of your code, you may desire: no nullness checking, the cursory checking of FindBugs, or the thorough checking of the Checker Framework. You might even want to ensure that both tools run, for example if your coworkers or some other organization are still using FindBugs. If you know that you will eventually want to use the Checker Framework, there is no point using FindBugs first; it is easier to go straight to using the Checker Framework.
FindBugs can find other errors in addition to nullness errors; here we focus on its nullness checks. Even if you use FindBugs for its other features, you may want to use the Checker Framework for analyses that can be expressed as pluggable type-checking, such as detecting nullness errors.
Regardless of whether you wish to use the FindBugs nullness analysis, you may continue running all of the other FindBugs analyses at the same time as the Checker Framework; there are no interactions among them.
If FindBugs (or any other tool) discovers a nullness error that the Checker Framework does not, please report it to us (see Section 32.2) so that we can enhance the Checker Framework.
### 3.7.2 Incompatibility note about FindBugs @Nullable
FindBugs has a non-standard definition of @Nullable. FindBugs’s treatment is not documented in its own Javadoc; it is different from the definition of @Nullable in every other tool for nullness analysis; it means the same thing as @NonNull when applied to a formal parameter; and it invariably surprises programmers. Thus, FindBugs’s @Nullable is detrimental rather than useful as documentation. In practice, your best bet is to not rely on FindBugs for nullness analysis, even if you find FindBugs useful for other purposes.
You can skip the rest of this section unless you wish to learn more details.
FindBugs suppresses all warnings at uses of a @Nullable variable. (You have to use @CheckForNull to indicate a nullable variable that FindBugs should check.) For example:
// declare getObject() to possibly return null
@Nullable Object getObject() { ... }
void myMethod() {
@Nullable Object o = getObject();
// FindBugs issues no warning about calling toString on a possibly-null reference!
o.toString();
}
The Checker Framework does not emulate this non-standard behavior of FindBugs, even if the code uses FindBugs annotations.
With FindBugs, you annotate a declaration, which suppresses checking at all client uses, even the places that you want to check. It is better to suppress warnings at only the specific client uses where the value is known to be non-null; the Checker Framework supports this, if you write @SuppressWarnings at the client uses. The Checker Framework also supports suppressing checking at all client uses, by writing a @SuppressWarnings annotation at the declaration site. Thus, the Checker Framework supports both use cases, whereas FindBugs supports only one and gives the programmer less flexibility.
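The Checker Framework's per-use suppression can be sketched as follows. This is a hypothetical example; the @Nullable annotation below is a local stand-in so the snippet compiles with a plain JDK, whereas real code would import org.checkerframework.checker.nullness.qual.Nullable:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for org.checkerframework.checker.nullness.qual.Nullable,
// so this sketch compiles without checker-qual on the classpath.
@Retention(RetentionPolicy.CLASS)
@interface Nullable {}

class Config {
    // Declared as possibly-null: clients are normally checked at every use.
    static @Nullable String lookup(String key) {
        return "value-for-" + key;
    }

    static String require(String key) {
        // This particular call site is known never to yield null, so the
        // warning is suppressed here only; other call sites remain checked.
        @SuppressWarnings("nullness")
        String v = lookup(key);
        return v;
    }
}
```

Unlike FindBugs's declaration-site @Nullable, the suppression here is scoped to one local variable; placing @SuppressWarnings on the method or class would widen the scope deliberately.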
In general, the Checker Framework will issue more warnings than FindBugs, and some of them may be about real bugs in your program. See Section 3.4 for information about suppressing nullness warnings.
(FindBugs made a poor choice of names. The choice of names should make a clear distinction between annotations that specify whether a reference is null, and annotations that suppress false warnings. The choice of names should also have been consistent for other tools, and intuitively clear to programmers. The FindBugs choices make the FindBugs annotations less helpful to people, and much less useful for other tools. As a separate issue, the FindBugs analysis is also very imprecise. For type-related analyses, it is best to stay away from the FindBugs nullness annotations, and use a more capable tool like the Checker Framework.)
### 3.7.3 Relationship to Optional<T>
Many null pointer exceptions occur because the programmer forgets to check whether a reference is null before dereferencing it. Java 8’s Optional<T> class provides a partial solution: you cannot dereference the contained value without calling the get method.
However, the use of Optional for this purpose is unsatisfactory. First, it adds syntactic complexity, making your code longer and harder to read. (The Optional class provides some operations, such as map and orElse, that you would otherwise have to write; without these its code bloat would be even worse.) Second, there is no guarantee that the programmer remembers to call isPresent before calling get. Thus, use of Optional doesn’t solve the underlying problem — it merely converts a NullPointerException into a NoSuchElementException, and in either case your code crashes.
The Nullness Checker does not suffer these limitations. It works with existing code and types, it ensures that you check for null wherever necessary, and it infers when the check for null is not necessary based on previous statements in the method.
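The trade-off described above can be demonstrated with plain JDK code (a hypothetical OptionalDemo class): a forgotten null check and a forgotten isPresent() check fail in exactly the same way, just with different exception types.

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class OptionalDemo {
    // Two styles of "lookup that may find nothing"; neither forces the caller to check.
    static String findNullable(String key) { return null; }
    static Optional<String> findOptional(String key) { return Optional.empty(); }

    public static void main(String[] args) {
        try {
            findNullable("missing").length();        // forgotten null check
        } catch (NullPointerException e) {
            System.out.println("NullPointerException");
        }
        try {
            findOptional("missing").get().length();  // forgotten isPresent() check
        } catch (NoSuchElementException e) {
            System.out.println("NoSuchElementException");
        }
    }
}
```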
## 3.8 Initialization Checker
Every object’s fields start out as null. By the time the constructor finishes executing, the @NonNull fields have been set to a different value. Your code can suffer a NullPointerException when using a @NonNull field, if your code uses the field during initialization. The Nullness Checker prevents this problem by warning you anytime that you may be accessing an uninitialized field. This check is useful because it prevents errors in your code. However, the analysis can be confusing to understand. If you wish to disable the initialization checks, pass the command-line argument -AsuppressWarnings=uninitialized when running the Nullness Checker. You will no longer get a guarantee of no null pointer exceptions, but you can still use the Nullness Checker to find most of the null pointer problems in your code.
An object is partially initialized from the time that its constructor starts until its constructor finishes. This is relevant to the Nullness Checker because while the constructor is executing — that is, before initialization completes — a @NonNull field may be observed to be null, until that field is set. In particular, the Nullness Checker issues a warning for code like this:
public class MyClass {
private @NonNull Object f;
public MyClass(int x, int y) {
// Error because constructor contains no assignment to this.f.
// By the time the constructor exits, f must be initialized to a non-null value.
}
public MyClass(int x) {
// Error because this.f is accessed before f is initialized.
// At the beginning of the constructor's execution, accessing this.f
// yields null, even though field f has a non-null type.
this.f.toString();
}
public MyClass(int x, int y, int z) {
m();
}
public void m() {
// Error because this.f is accessed before f is initialized,
// even though the access is not in a constructor.
// When m is called from the constructor, accessing f yields null,
// even though field f has a non-null type.
this.f.toString();
  }
}
When a field f is declared with a @NonNull type, then code can depend on the fact that the field is not null. However, this guarantee does not hold for a partially-initialized object.
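The checker errors in the example above correspond to a real run-time hazard, which plain Java (no checker) exhibits directly. In this hypothetical sketch, the field is read by a helper method before the constructor assigns it:

```java
public class InitDemo {
    String name;   // intended to be non-null once construction completes

    InitDemo() {
        describe();         // helper call before 'name' is assigned
        this.name = "ready";
    }

    void describe() {
        // During construction, 'name' is still null, so this line
        // throws NullPointerException.
        System.out.println("name has length " + name.length());
    }
}
```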
The Nullness Checker uses three annotations to indicate whether an object is initialized (all its @NonNull fields have been assigned), under initialization (its constructor is currently executing), or its initialization state is unknown.
These distinctions are mostly relevant within the constructor, or for references to this that escape the constructor (say, by being stored in a field or passed to a method before initialization is complete). Use of initialization annotations is rare in most code.
The most common use for the @UnderInitialization annotation is for a helper routine that is called by a constructor. For example:
class MyClass {
Object field1;
Object field2;
Object field3;
public MyClass(String arg1) {
this.field1 = arg1;
init_other_fields();
}
// A helper routine that initializes all the fields other than field1.
@EnsuresNonNull({"field2", "field3"})
private void init_other_fields(@UnderInitialization(MyClass.class) MyClass this) {
field2 = new Object();
field3 = new Object();
}
}
For compatibility with Java 6 and 7, you can write the receiver parameter in comments (see Section 27.2.1):
private void init_other_fields(/*>>>@UnderInitialization(MyClass.class) MyClass this*/) {
### 3.8.1 Initialization qualifiers
The initialization hierarchy is shown in Figure 3.3. The initialization hierarchy contains these qualifiers:
@Initialized
indicates a type that contains a fully-initialized object. Initialized is the default, so there is little need for a programmer to write this explicitly.
@UnknownInitialization
indicates a type that may contain a partially-initialized object. In a partially-initialized object, fields that are annotated as @NonNull may be null because the field has not yet been assigned.
@UnknownInitialization takes a parameter that is the class the object is definitely initialized up to. For instance, the type @UnknownInitialization(Foo.class) denotes an object in which every field declared in Foo or its superclasses is initialized, but other fields might not be. Just @UnknownInitialization is equivalent to @UnknownInitialization(Object.class).
@UnderInitialization
indicates a type that contains a partially-initialized object that is under initialization — that is, its constructor is currently executing. It is otherwise the same as @UnknownInitialization. Within the constructor, this has @UnderInitialization type until all the @NonNull fields have been assigned.
A partially-initialized object (this in a constructor) may be passed to a helper method or stored in a variable; if so, the method receiver, or the field, would have to be annotated as @UnknownInitialization or as @UnderInitialization.
If a reference has @UnknownInitialization or @UnderInitialization type, then all of its @NonNull fields are treated as @MonotonicNonNull: when read, they are treated as being @Nullable, but when written, they are treated as being @NonNull.
The initialization hierarchy is orthogonal to the nullness hierarchy. It is legal for a reference to be @NonNull @UnderInitialization, @Nullable @UnderInitialization, @NonNull @Initialized, or @Nullable @Initialized. The nullness hierarchy tells you about the reference itself: might the reference be null? The initialization hierarchy tells you about the @NonNull fields in the referred-to object: might those fields be temporarily null in contravention of their type annotation? Figure 3.4 contains some examples.
For reference, the declarations below assume: class C { @NonNull Object f; @Nullable Object g; ... }

| Declaration | Expression | Expression’s nullness type, or checker error |
| --- | --- | --- |
| @NonNull @Initialized C a; | a | @NonNull |
| | a.f | @NonNull |
| | a.g | @Nullable |
| @NonNull @UnderInitialization C b; | b | @NonNull |
| | b.f | @MonotonicNonNull |
| | b.g | @Nullable |
| @Nullable @Initialized C c; | c | @Nullable |
| | c.f | error: deref of nullable |
| | c.g | error: deref of nullable |
| @Nullable @UnderInitialization C d; | d | @Nullable |
| | d.f | error: deref of nullable |
| | d.g | error: deref of nullable |
Figure 3.4: Examples of the interaction between nullness and initialization. Declarations are shown at the left for reference, but the focus of the table is the expressions and their nullness type or error.
### 3.8.2 How an object becomes initialized
Within the constructor, this starts out with @UnderInitialization type. As soon as all of the @NonNull fields have been initialized, then this is treated as initialized. (See Section 3.8.3 for a slight clarification of this rule.)
The Initialization Checker issues an error if the constructor fails to initialize any @NonNull field. This ensures that the object is in a legal (initialized) state by the time that the constructor exits. This is different than Java’s test for definite assignment (see JLS ch.16), which does not apply to fields (except blank final ones, defined in JLS §4.12.4) because fields have a default value of null.
All @NonNull fields must either have a default in the field declaration, or be assigned in the constructor or in a helper method that the constructor calls. If your code initializes (some) fields in a helper method, you will need to annotate the helper method with an annotation such as @EnsuresNonNull({"field1", "field2"}) for all the fields that the helper method assigns. It’s a bit odd, but you use that same annotation, @EnsuresNonNull, to indicate that a primitive field has its value set in a helper method, which is relevant when you supply the -Alint=uninitialized command-line option (see Section 3.1).
### 3.8.3 Partial initialization
So far, we have discussed initialization as if it is an all-or-nothing property: an object is non-initialized until initialization completes, and then it is initialized. The full truth is a bit more complex: during the initialization process an object can be partially initialized, and as the object’s superclass constructors complete, its initialization status is updated. The Initialization Checker lets you express such properties when necessary.
Consider a simple example:
class A {
Object a;
A() {
a = new Object();
}
}
class B extends A {
Object b;
B() {
super();
b = new Object();
}
}
Consider what happens during execution of new B().
1. B’s constructor begins to execute. At this point, neither the fields of A nor those of B have been initialized yet.
2. B’s constructor calls A’s constructor, which begins to execute. No fields of A nor of B have been initialized yet.
3. A’s constructor completes. Now, all the fields of A have been initialized, and their invariants (such as that field a is non-null) can be depended on. However, because B’s constructor has not yet completed executing, the object being constructed is not yet fully initialized. When treated as an A (e.g., if only the A fields are accessed), the object is initialized, but when treated as a B, the object is still non-initialized.
4. B’s constructor completes. The object is initialized when treated as an A or a B. (And, the object is fully initialized if B’s constructor was invoked via a new B(). But the type system cannot assume that – there might be a class C extends B { ... }, and B’s constructor might have been invoked from that.)
At any moment during initialization, the superclasses of a given class can be divided into those that have completed initialization and those that have not yet completed initialization. More precisely, at any moment there is a point in the class hierarchy such that all the classes above that point are fully initialized, and all those below it are not yet initialized. As initialization proceeds, this dividing line between the initialized and uninitialized classes moves down the type hierarchy.
The Nullness Checker lets you indicate where the dividing line is between the initialized and non-initialized classes. The @UnderInitialization(classliteral) indicates the first class that is known to be fully initialized. When you write @UnderInitialization(OtherClass.class) MyClass x;, that means that variable x is initialized for OtherClass and its superclasses, and x is (possibly) uninitialized for MyClass and all subclasses.
We can now state a clarification of Section 3.8.2’s rule for an object becoming initialized. As soon as all of the @NonNull fields in class C have been initialized, then this is treated as @UnderInitialization(C), rather than treated as simply @Initialized.
The example above lists 4 moments during construction. At those moments, the type of the object being constructed is:
1. @UnderInitialization B
2. @UnderInitialization A
3. @UnderInitialization(A.class) A
4. @UnderInitialization(B.class) B
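These moments can be observed at run time with a plain-Java variant of the example (hypothetical classes A2/B2, with an added hook method): a method overridden in the subclass but invoked from the superclass constructor runs at moment 3, when the A-part is initialized but the B-part is not.

```java
class A2 {
    Object a;
    A2() {
        a = new Object();
        hook();   // runs at moment 3: A2's fields are set, B2's are not
    }
    void hook() {}
}

class B2 extends A2 {
    Object b;
    static boolean bWasNullDuringSuper;

    B2() {
        super();
        b = new Object();
    }

    @Override
    void hook() {
        // Dispatched from A2's constructor, before B2's body assigns b.
        bWasNullDuringSuper = (b == null);   // records true
    }
}
```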
### 3.8.4 Initialization of circular data structures
There is one final aspect of the initialization type system to be considered: the rules governing reading and writing to objects that are currently under initialization (both reading from fields of objects under initialization, as well as writing objects under initialization to fields). By default, only fully-initialized objects can be stored in a field of another object. If this was the only option, then it would not be possible to create circular data structures (such as a doubly-linked list) where fields have a @NonNull type. However, the annotation @NotOnlyInitialized can be used to indicate that a field can store objects that are currently under initialization. In this case, the rules for reading and writing to that field become a little bit more interesting, to soundly support circular structures.
The rules for reading from a @NotOnlyInitialized field are summarized in Figure 3.5. Essentially, nothing is known about the initialization status of the value returned unless the receiver was @Initialized.
| x.f | f is @NonNull | f is @Nullable |
| --- | --- | --- |
| x is @Initialized | @Initialized @NonNull | @Initialized @Nullable |
| x is @UnderInitialization | @UnknownInitialization @Nullable | @UnknownInitialization @Nullable |
| x is @UnknownInitialization | @UnknownInitialization @Nullable | @UnknownInitialization @Nullable |
Figure 3.5: Initialization rules for reading a @NotOnlyInitialized field f.
Similarly, Figure 3.6 shows under which conditions an assignment x.f = y is allowed for a @NotOnlyInitialized field f. If the receiver x is @UnderInitialization, then any y can be of any initialization state. If y is known to be fully initialized, then any receiver is allowed. All other assignments are disallowed.
| x.f = y | y is @Initialized | y is @UnderInitialization | y is @UnknownInitialization |
| --- | --- | --- | --- |
| x is @Initialized | yes | no | no |
| x is @UnderInitialization | yes | yes | yes |
| x is @UnknownInitialization | yes | no | no |
Figure 3.6: Rules for deciding when an assignment x.f = y is allowed for a @NotOnlyInitialized field f.
These rules allow for the safe initialization of circular structures. For instance, consider a doubly linked list:
class List<T> {
@NotOnlyInitialized
Node<T> sentinel;
public List() {
this.sentinel = new Node<T>(this);
}
void insert(@Nullable T data) {
this.sentinel.insertAfter(data);
}
public static void main() {
List<Integer> l = new List<Integer>();
l.insert(1);
l.insert(2);
}
}
class Node<T> {
@NotOnlyInitialized
Node<T> prev;
@NotOnlyInitialized
Node<T> next;
@NotOnlyInitialized
List parent;
@Nullable
T data;
// for sentinel construction
Node(@UnderInitialization List parent) {
this.parent = parent;
this.prev = this;
this.next = this;
}
// for data node construction
Node(Node<T> prev, Node<T> next, @Nullable T data) {
this.parent = prev.parent;
this.prev = prev;
this.next = next;
this.data = data;
}
void insertAfter(@Nullable T data) {
Node<T> n = new Node<T>(this, this.next, data);
this.next.prev = n;
this.next = n;
}
}
### 3.8.5 How to handle warnings
There are several ways to address a warning “error: the constructor does not initialize fields: …”.
• Declare the field as @Nullable. Recall that if you did not write an annotation, the field defaults to @NonNull.
• Declare the field as @MonotonicNonNull. This is appropriate if the field starts out as null but is later set to a non-null value. You may then wish to use the @EnsuresNonNull annotation to indicate which methods set the field, and the @RequiresNonNull annotation to indicate which methods require the field to be non-null.
• Initialize the field in the constructor or in the field’s initializer, if the field should be initialized. (In this case, the Initialization Checker has found a bug!)
Do not initialize the field to an arbitrary non-null value just to eliminate the warning. Doing so degrades your code: it introduces a value that will confuse other programmers, and it converts a clear NullPointerException into a more obscure error.
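The @MonotonicNonNull option from the list above can be sketched as follows. The annotation definition here is a local stand-in so the snippet compiles without checker-qual on the classpath; real code would import it, and would add the @EnsuresNonNull/@RequiresNonNull annotations indicated in the comments:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for org.checkerframework.checker.nullness.qual.MonotonicNonNull.
@Retention(RetentionPolicy.CLASS)
@interface MonotonicNonNull {}

class LazyConnection {
    // Starts out null; once assigned a non-null value, it never reverts to null.
    private @MonotonicNonNull String endpoint;

    // Real code would add @EnsuresNonNull("endpoint"): this method sets the field.
    void connect(String host) {
        endpoint = host;
    }

    // Real code would add @RequiresNonNull("endpoint"): callers must connect() first.
    String describe() {
        return "connected to " + endpoint;
    }
}
```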
If your code calls an instance method from a constructor, you may see a message such as the following:
Foo.java:123: error: call to initHelper() not allowed on the given receiver.
initHelper();
^
required: @Initialized @NonNull MyClass
The problem is that the current object (this) is under initialization, but the receiver formal parameter (Section 31.5.1) of method initHelper() is implicitly annotated as @Initialized. If initHelper() doesn’t depend on its receiver being initialized — that is, it’s OK to call x.initHelper even if x is not initialized — then you can indicate that:
class MyClass {
void initHelper(@UnknownInitialization MyClass this, String param1) { ... }
}
If you are using annotations in comments, you would write:
class MyClass {
void initHelper(/*>>>@UnknownInitialization MyClass this,*/ String param1) { ... }
}
You are likely to want to annotate initHelper() with @EnsuresNonNull as well; see Section 3.2.2.
You may get the “call to … is not allowed on the given receiver” error even if your constructor has already initialized all the fields. For this code:
public class MyClass {
@NonNull Object field;
public MyClass() {
field = new Object();
helperMethod();
}
private void helperMethod() {
}
}
the Nullness Checker issues the following warning:
MyClass.java:7: error: call to helperMethod() not allowed on the given receiver.
helperMethod();
^
found : @UnderInitialization(MyClass.class) @NonNull MyClass
required: @Initialized @NonNull MyClass
1 error
The reason is that even though the object under construction has had all the fields declared in MyClass initialized, there might be a subclass of MyClass. Thus, the receiver of helperMethod should be declared as @UnderInitialization(MyClass.class), which says that initialization has completed for all the MyClass fields but may not have been completed overall. If helperMethod had been a public method that could also be called after initialization was actually complete, then the receiver should have type @UnknownInitialization, which is a supertype of both @Initialized and @UnderInitialization.
### 3.8.6 More details about initialization checking
##### Suppressing warnings
You can suppress warnings related to partially-initialized objects with @SuppressWarnings("initialization"). This can be placed on a single field; on a constructor; or on a class to suppress all initialization warnings for all constructors.
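For example, the annotation can be placed at any of the three granularities (a minimal hypothetical sketch; javac accepts unrecognized @SuppressWarnings keys, so this compiles even without the checker):

```java
@SuppressWarnings("initialization")      // on the class: all constructors
class Cache {

    @SuppressWarnings("initialization")  // on a single field
    Object table;

    @SuppressWarnings("initialization")  // on a single constructor
    Cache() {
        // field assignments the checker cannot verify would go here
    }
}
```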
##### Checking initialization of all fields, not just @NonNull ones
When the -Alint=uninitialized command-line option is provided, then an object is considered uninitialized until all its fields are assigned, not just the @NonNull ones. See Section 3.1.
##### Use of method annotations
A method with a non-initialized receiver may assume that a few fields (but not all of them) are non-null, and it sometimes sets some more fields to non-null values. To express these concepts, use the @RequiresNonNull, @EnsuresNonNull, and @EnsuresNonNullIf method annotations; see Section 3.2.2.
##### Source of the type system
The type system enforced by the Initialization Checker is known as “Freedom Before Commitment” [SM11]. Our implementation changes its initialization modifiers (“committed”, “free”, and “unclassified”) to “initialized”, “under initialization”, and “unknown initialization”. Our implementation also has several enhancements. For example, it supports partial initialization (the argument to the @UnknownInitialization and @UnderInitialization annotations).
### 3.8.7 Rawness Initialization Checker
The Checker Framework supports two different initialization checkers that are integrated with the Nullness Checker. You can use whichever one you prefer.
One (described in most of Section 3.8) uses the three annotations @Initialized, @UnknownInitialization, and @UnderInitialization. We recommend that you use it.
The other (described here in Section 3.8.7) uses the two annotations @Raw and @NonRaw. The rawness type system is slightly easier to use but slightly less expressive.
To run the Nullness Checker with the rawness variant of the Initialization Checker, invoke the NullnessRawnessChecker rather than the NullnessChecker; that is, supply the -processor org.checkerframework.checker.nullness.NullnessRawnessChecker command-line option to javac. Although @Raw roughly corresponds to @UnknownInitialization and @NonRaw roughly corresponds to @Initialized, the annotations are not aliased and you must use the ones that correspond to the type-checker that you are running.
An object is raw from the time that its constructor starts until its constructor finishes. This is relevant to the Nullness Checker because while the constructor is executing — that is, before initialization completes — a @NonNull field may be observed to be null, until that field is set. In particular, the Nullness Checker issues a warning for code like this:
public class MyClass {
private @NonNull Object f;
public MyClass(int x, int y) {
// Error because constructor contains no assignment to this.f.
// By the time the constructor exits, f must be initialized to a non-null value.
}
public MyClass(int x) {
// Error because this.f is accessed before f is initialized.
// At the beginning of the constructor's execution, accessing this.f
// yields null, even though field f has a non-null type.
this.f.toString();
}
public MyClass(int x, int y, int z) {
m();
}
public void m() {
// Error because this.f is accessed before f is initialized,
// even though the access is not in a constructor.
// When m is called from the constructor, accessing f yields null,
// even though field f has a non-null type.
this.f.toString();
  }
}
In general, code can depend on the fact that field f is not null, because the field is declared with a @NonNull type. However, this guarantee does not hold for a partially-initialized object.
The Nullness Checker uses the @Raw annotation to indicate that an object is not yet fully initialized — that is, not all its @NonNull fields have been assigned. Rawness is mostly relevant within the constructor, or for references to this that escape the constructor (say, by being stored in a field or passed to a method before initialization is complete). Use of rawness annotations is rare in most code.
The most common use for the @Raw annotation is for a helper routine that is called by a constructor. For example:
class MyClass {
Object field1;
Object field2;
Object field3;
public MyClass(String arg1) {
this.field1 = arg1;
init_other_fields();
}
// A helper routine that initializes all the fields other than field1
@EnsuresNonNull({"field2", "field3"})
private void init_other_fields(@Raw MyClass this) {
field2 = new Object();
field3 = new Object();
}
}
For compatibility with Java 6 and 7, you can write the receiver parameter in comments (see Section 27.2.1):
private void init_other_fields(/*>>> @Raw MyClass this*/) {
#### Rawness qualifiers
The rawness hierarchy is shown in Figure 3.7. The rawness hierarchy contains these qualifiers:
@Raw
indicates a type that may contain a partially-initialized object. In a partially-initialized object, fields that are annotated as @NonNull may be null because the field has not yet been assigned. Within the constructor, this has @Raw type until all the @NonNull fields have been assigned. A partially-initialized object (this in a constructor) may be passed to a helper method or stored in a variable; if so, the method receiver, or the field, would have to be annotated as @Raw.
@NonRaw
indicates a type that contains a fully-initialized object. NonRaw is the default, so there is little need for a programmer to write this explicitly.
@PolyRaw
indicates qualifier polymorphism over rawness (see Section 24.2).
If a reference has @Raw type, then all of its @NonNull fields are treated as @MonotonicNonNull: when read, they are treated as being @Nullable, but when written, they are treated as being @NonNull.
The rawness hierarchy is orthogonal to the nullness hierarchy. It is legal for a reference to be @NonNull @Raw, @Nullable @Raw, @NonNull @NonRaw, or @Nullable @NonRaw. The nullness hierarchy tells you about the reference itself: might the reference be null? The rawness hierarchy tells you about the @NonNull fields in the referred-to object: might those fields be temporarily null in contravention of their type annotation? Figure 3.8 contains some examples.
For reference, the declarations below assume: class C { @NonNull Object f; @Nullable Object g; ... }

| Declaration | Expression | Expression’s nullness type, or checker error |
| --- | --- | --- |
| @NonNull @NonRaw C a; | a | @NonNull |
| | a.f | @NonNull |
| | a.g | @Nullable |
| @NonNull @Raw C b; | b | @NonNull |
| | b.f | @MonotonicNonNull |
| | b.g | @Nullable |
| @Nullable @NonRaw C c; | c | @Nullable |
| | c.f | error: deref of nullable |
| | c.g | error: deref of nullable |
| @Nullable @Raw C d; | d | @Nullable |
| | d.f | error: deref of nullable |
| | d.g | error: deref of nullable |
Figure 3.8: Examples of the interaction between nullness and rawness. Declarations are shown at the left for reference, but the focus of the table is the expressions and their nullness type or error.
#### How an object becomes non-raw
Within the constructor, this starts out with @Raw type. As soon as all of the @NonNull fields have been initialized, then this is treated as non-raw.
The Nullness Checker issues an error if the constructor fails to initialize any @NonNull field. This ensures that the object is in a legal (non-raw) state by the time that the constructor exits. This is different than Java’s test for definite assignment (see JLS ch.16), which does not apply to fields (except blank final ones, defined in JLS §4.12.4) because fields have a default value of null.
All @NonNull fields must either have a default in the field declaration, or be assigned in the constructor or in a helper method that the constructor calls. If your code initializes (some) fields in a helper method, you will need to annotate the helper method with an annotation such as @EnsuresNonNull({"field1", "field2"}) for all the fields that the helper method assigns. It’s a bit odd, but you use that same annotation, @EnsuresNonNull, to indicate that a primitive field has its value set in a helper method, which is relevant when you supply the -Alint=uninitialized command-line option (see Section 3.1).
#### Partial initialization
So far, we have discussed rawness as if it is an all-or-nothing property: an object is fully raw until initialization completes, and then it is no longer raw. The full truth is a bit more complex: during the initialization process, an object can be partially initialized, and as the object’s superclass constructors complete, its rawness changes. The Nullness Checker lets you express such properties when necessary.
Consider a simple example:
class A {
Object a;
A() {
a = new Object();
}
}
class B extends A {
Object b;
B() {
super();
b = new Object();
}
}
Consider what happens during execution of new B().
1. B’s constructor begins to execute. At this point, neither the fields of A nor those of B have been initialized yet.
2. B’s constructor calls A’s constructor, which begins to execute. No fields of A nor of B have been initialized yet.
3. A’s constructor completes. Now, all the fields of A have been initialized, and their invariants (such as that field a is non-null) can be depended on. However, because B’s constructor has not yet completed executing, the object being constructed is not yet fully initialized. When treated as an A (e.g., if only the A fields are accessed), the object is initialized (non-raw), but when treated as a B, the object is still raw.
4. B’s constructor completes. The object is fully initialized (non-raw), if B’s constructor was invoked via a new B() expression. On the other hand, if there was a class C extends B { ... }, and B’s constructor had been invoked from that, then the object currently under construction would not be fully initialized — it would only be initialized when treated as an A or a B, but not when treated as a C.
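To see why the partially-initialized state matters at run time, here is a runnable sketch (not from the manual; class `RawDemo` and the `observed` field are our own) in which A’s constructor calls a method that B overrides. The override runs while the object is still raw when viewed as a B, so it observes B’s field as null:

```java
// Demo: during A's constructor, the object is still raw when viewed as a B,
// so B's field b is observed as null by dynamically dispatched code.
class A {
    Object a;
    A() {
        a = new Object();
        describe(); // dynamic dispatch reaches B.describe() while B is still raw
    }
    void describe() {}
}

class B extends A {
    Object b;
    static Object observed = "unset";
    B() {
        super();         // during this call, b has not been assigned yet
        b = new Object();
    }
    @Override
    void describe() {
        observed = b;    // records null: b is uninitialized at this point
    }
}

public class RawDemo {
    public static void main(String[] args) {
        new B();
        System.out.println(B.observed); // prints "null"
    }
}
```

This is exactly the hazard the rawness type system rules out: code with a non-raw `this` may assume `@NonNull` fields are set, but code reachable during construction may not.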
At any moment during initialization, the superclasses of a given class can be divided into those that have completed initialization and those that have not yet completed initialization. More precisely, at any moment there is a point in the class hierarchy such that all the classes above that point are fully initialized, and all those below it are not yet initialized. As initialization proceeds, this dividing line between the initialized and raw classes moves down the type hierarchy.
The Nullness Checker lets you indicate where the dividing line is between the initialized and non-initialized classes. You have two equivalent ways to indicate the dividing line: @Raw indicates the first class below the dividing line, and @NonRaw(C.class) indicates the first class C above the dividing line.
When you write @Raw MyClass x;, that means that variable x is initialized for all superclasses of MyClass, and (possibly) uninitialized for MyClass and all subclasses.
When you write @NonRaw(Foo.class) MyClass x;, that means that variable x is initialized for Foo and all its superclasses, and (possibly) uninitialized for all subclasses of Foo.
If A is a direct superclass of B (as in the example above), then @Raw A x; and @NonRaw(B.class) A x; are equivalent declarations. Neither one is the same as @NonRaw A x;, which indicates that, whatever the actual class of the object that x refers to, that object is fully initialized. Since @NonRaw (with no argument) is the default, you will rarely see it written.
We can now state a clarification of Section 3.8.7’s rule for an object becoming non-raw. As soon as all of the @NonNull fields have been initialized, then this is treated as @NonRaw(C.class), where C is the class whose constructor is executing, rather than as simply @NonRaw.
The example above lists 4 moments during construction. At those moments, the type of the object being constructed is:
1. @Raw Object
2. @Raw Object
3. @NonRaw(A.class) A
4. @NonRaw(B.class) B
##### Example
As another example, consider the following 12 declarations:
@Raw Object rO;
@NonRaw(Object.class) Object nroO;
Object o;
@Raw A rA;
@NonRaw(Object.class) A nroA; // same as "@Raw A"
@NonRaw(A.class) A nraA;
A a;
@NonRaw(Object.class) B nroB;
@Raw B rB;
@NonRaw(A.class) B nraB; // same as "@Raw B"
@NonRaw(B.class) B nrbB;
B b;
In the following table, the type in cell C1 is a supertype of the type in cell C2 if: C1 is at least as high and at least as far left in the table as C2 is. For example, nraA’s type is a supertype of those of rB, nraB, nrbB, a, and b. (The empty cells on the top row are real types, but are not expressible. The other empty cells are not interesting types.)
| Initialization state | `Object` | `A` | `B` |
|---|---|---|---|
| nothing initialized | `@Raw Object rO;` | | |
| `Object` initialized | `@NonRaw(Object.class) Object nroO;` | `@Raw A rA;` / `@NonRaw(Object.class) A nroA;` | `@NonRaw(Object.class) B nroB;` |
| `A` initialized | | `@NonRaw(A.class) A nraA;` | `@Raw B rB;` / `@NonRaw(A.class) B nraB;` |
| `B` initialized | | | `@NonRaw(B.class) B nrbB;` |
| fully initialized | `Object o;` | `A a;` | `B b;` |
#### More details about rawness checking
##### Suppressing warnings
You can suppress warnings related to partially-initialized objects with @SuppressWarnings("rawness"). Do not confuse this with the unrelated @SuppressWarnings("rawtypes") annotation for non-instantiated generic types!
##### Checking initialization of all fields, not just @NonNull ones
When the -Alint=uninitialized command-line option is provided, then an object is considered raw until all its fields are assigned, not just the @NonNull ones. See Section 3.1.
##### Use of method annotations
A method with a raw receiver often assumes that a few fields (but not all of them) are non-null, and sometimes sets some more fields to non-null values. To express these concepts, use the @RequiresNonNull, @EnsuresNonNull, and @EnsuresNonNullIf method annotations; see Section 3.2.2.
##### The terminology “raw”
The name “raw” comes from a research paper that proposed this approach [FL03]. A better name might have been “not yet initialized” or “partially initialized”, but the term “raw” is now well-known. The @Raw annotation has nothing to do with the raw types of Java Generics.
# Chapter 4 Map Key Checker
The Map Key Checker tracks which values are keys for which maps. If variable v has type @KeyFor("m")..., then the value of v is a key in Map m. That is, the expression m.containsKey(v) evaluates to true.
Section 3.2.4 describes how @KeyFor annotations enable the Nullness Checker (Chapter 3) to treat calls to Map.get more precisely by refining its result to @NonNull in some cases.
You will not typically run the Map Key Checker. It is automatically run by other checkers, in particular the Nullness Checker.
You can suppress warnings related to map keys with @SuppressWarnings("keyfor"); see Chapter 26.
## 4.1 Map key annotations
These qualifiers are part of the Map Key type system:
@KeyFor(String[] maps)
indicates that the value assigned to the annotated variable is a key for at least the given maps.
@UnknownKeyFor
is used internally by the type system but should never be written by a programmer. It indicates that the value assigned to the annotated variable is not known to be a key for any map. It is the default type qualifier.
@KeyForBottom
is used internally by the type system but should never be written by a programmer.
## 4.2 Examples
The Map Key Checker keeps track of which variables reference keys to which maps. A variable annotated with @KeyFor(mapSet) can only contain a value that is a key for all the maps in mapSet. For example:
Map<String,Date> m, n;
@KeyFor("m") String km;
@KeyFor("n") String kn;
@KeyFor({"m", "n"}) String kmn;
km = kmn; // OK - a key for maps m and n is also a key for map m
km = kn; // error: a key for map n is not necessarily a key for map m
As with any annotation, use of the @KeyFor annotation may force you to slightly refactor your code. For example, this would be illegal:
Map<String,Object> m;
Collection<@KeyFor("m") String> coll;
coll.add(x); // error: element type is @KeyFor("m") String, but x does not have that type
m.put(x, ...);
The example type-checks if you reorder the two calls:
Map<String,Object> m;
Collection<@KeyFor("m") String> coll;
m.put(x, ...); // after this statement, x has type @KeyFor("m") String
coll.add(x); // OK
## 4.3 Inference of @KeyFor annotations
Within a method body, you usually do not have to write @KeyFor explicitly, because the checker infers it based on usage patterns. When the Map Key Checker encounters a run-time check for map keys, such as “if (m.containsKey(k)) ...”, then the Map Key Checker refines the type of k to @KeyFor("m") within the scope of the test (or until k is side-effected within that scope). The Map Key Checker also infers @KeyFor annotations based on iteration over a map’s key set or calls to put or containsKey. For more details about type refinement, see Section 25.4.
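The static refinement mirrors a run-time guarantee. The following plain-Java sketch (class name `KeyForDemo` is our own) shows the property the checker exploits: inside a `containsKey` branch, `get` cannot return null for a map that holds no null values:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyForDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("key", 42);
        String k = "key";
        // The run-time check mirrors what the checker verifies statically:
        // inside this branch, k is known to be a key for m, so m.get(k)
        // is non-null (for a map containing no null values).
        if (m.containsKey(k)) {
            Integer v = m.get(k);
            System.out.println(v); // prints 42
        }
    }
}
```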
Suppose we have these declarations:
Map<String,Date> m = new HashMap<String,Date>();
String k = "key";
@KeyFor("m") String km;
Ordinarily, the following assignment does not type-check:
km = k; // Error since k is not known to be a key for map m.
The following examples show cases where the Map Key Checker infers a @KeyFor annotation for variable k based on usage patterns, enabling the km = k assignment to type-check.
m.put(k, ...);
// At this point, the type of k is refined to @KeyFor("m") String.
km = k; // OK
if (m.containsKey(k)) {
// At this point, the type of k is refined to @KeyFor("m") String.
km = k; // OK
...
}
else {
km = k; // Error since k is not known to be a key for map m.
...
}
The following example shows a case where the Map Key Checker resets its assumption about the type of a field used as a key because that field may have been side-effected.
class MyClass {
private Map<String,Object> m;
private String k; // The type of k defaults to @UnknownKeyFor String
private @KeyFor("m") String km;
public void myMethod() {
if (m.containsKey(k)){
km = k; // OK: the type of k is refined to @KeyFor("m") String
sideEffectFreeMethod();
km = k; // OK: the type of k is not affected by the method call
// and remains @KeyFor("m") String
otherMethod();
km = k; // error: At this point, the type of k is once again
// @UnknownKeyFor String, because otherMethod might have
// side-effected k such that it is no longer a key for map m.
}
}
@SideEffectFree
private void sideEffectFreeMethod() { ... }
private void otherMethod() { ... }
}
# Chapter 5 Interning Checker
If the Interning Checker issues no errors for a given program, then all reference equality tests (i.e., all uses of “==”) are proper; that is, == is not misused where equals() should have been used instead.
Interning is a design pattern in which the same object is used whenever two different objects would be considered equal. Interning is also known as canonicalization or hash-consing, and it is related to the flyweight design pattern. Interning has two benefits: it can save memory, and it can speed up testing for equality by permitting use of ==.
The Interning Checker prevents two types of errors in your code. First, == should be used only on interned values; using == on non-interned values can result in subtle bugs. For example:
Integer x = new Integer(22);
Integer y = new Integer(22);
System.out.println(x == y); // prints false!
The Interning Checker helps programmers to prevent such bugs. Second, the Interning Checker also helps to prevent performance problems that result from failure to use interning. (See Section 2.3 for caveats to the checker’s guarantees.)
Interning is such an important design pattern that Java builds it in for these types: String, Boolean, Byte, Character, Integer, Short. Every string literal in the program is guaranteed to be interned (JLS §3.10.5), and the String.intern() method performs interning for strings that are computed at run time. The valueOf methods in wrapper classes always (Boolean, Byte) or sometimes (Character, Integer, Short) return an interned result (JLS §5.1.7). Users can also write their own interning methods for other types.
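These built-in guarantees are observable at run time. A small runnable demo (class name `InternDemo` is our own) of literal interning, `String.intern()`, and the `Boolean.valueOf` cache:

```java
public class InternDemo {
    public static void main(String[] args) {
        // String literals are interned (JLS §3.10.5), so == holds:
        String a = "hello";
        String b = "hello";
        System.out.println(a == b); // true

        // A string computed at run time is not interned until intern() is called:
        String c = new StringBuilder("hel").append("lo").toString();
        System.out.println(a == c);          // false
        System.out.println(a == c.intern()); // true

        // Boolean.valueOf always returns one of the two interned instances:
        System.out.println(Boolean.valueOf(true) == Boolean.TRUE); // true
    }
}
```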
It is a proper optimization to use ==, rather than equals(), whenever the comparison is guaranteed to produce the same result — that is, whenever the comparison is never provided with two different objects for which equals() would return true. Here are three reasons that this property could hold:
1. Interning. A factory method ensures that, globally, no two different interned objects are equals() to one another. (In some cases other, non-interned objects of the class might be equals() to one another; in other cases, every object of the class is interned.) Interned objects should always be immutable.
2. Global control flow. The program’s control flow is such that the constructor for class C is called a limited number of times, and with specific values that ensure the results are not equals() to one another. Objects of class C can always be compared with ==. Such objects may be mutable or immutable.
3. Local control flow. Even though not all objects of the given type may be compared with ==, the specific objects that can reach a given comparison may be. For example, suppose that an array contains no duplicates. Then testing to find the index of a given element that is known to be in the array can use ==.
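The local-control-flow case can be made concrete. In this sketch (class and method names are our own), the helper assumes the target is present in a duplicate-free array, so reference equality suffices to find its index:

```java
public class IndexOfDemo {
    // Hypothetical helper: assumes 'target' is an element of 'a' and that
    // 'a' contains no duplicates, so reference equality suffices.
    static int indexOf(Object[] a, Object target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) { // safe under the stated assumptions
                return i;
            }
        }
        throw new IllegalArgumentException("target not in array");
    }

    public static void main(String[] args) {
        Object x = new Object();
        Object y = new Object();
        Object[] a = { x, y };
        System.out.println(indexOf(a, y)); // prints 1
    }
}
```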
To eliminate Interning Checker errors, you will need to annotate the declarations of any expression used as an argument to ==. Thus, the Interning Checker could also have been called the Reference Equality Checker. In the future, the checker will include annotations that target the non-interning cases above, but for now you need to use @Interned, @UsesObjectEquals (which handles a surprising number of cases), and/or @SuppressWarnings.
To run the Interning Checker, supply the -processor org.checkerframework.checker.interning.InterningChecker command-line option to javac. For examples, see Section 5.4.
## 5.1 Interning annotations
These qualifiers are part of the Interning type system:
@Interned
indicates a type that includes only interned values (no non-interned values).
@PolyInterned
indicates qualifier polymorphism (see Section 24.2).
@UsesObjectEquals
is a class (not type) annotation that indicates that this class’s equals method is the same as that of Object. In other words, neither this class nor any of its superclasses overrides the equals method. Since Object.equals uses reference equality, this means that for such a class, == and equals are equivalent, and so the Interning Checker does not issue errors or warnings for either one.
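The equivalence that @UsesObjectEquals expresses can be checked at run time. In this sketch, the `Session` class (our own, hypothetical name) does not override equals, so == and equals() always agree:

```java
public class UsesObjectEqualsDemo {
    // A class that (like its superclass Object) does not override equals,
    // so it would qualify for @UsesObjectEquals: equals() is reference equality.
    static class Session {
        final String id;
        Session(String id) { this.id = id; }
    }

    public static void main(String[] args) {
        Session s1 = new Session("abc");
        Session s2 = new Session("abc");
        Session s3 = s1;
        System.out.println(s1.equals(s2)); // false: same contents, different objects
        System.out.println((s1 == s2) == s1.equals(s2)); // true: == and equals agree
        System.out.println((s1 == s3) == s1.equals(s3)); // true
    }
}
```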
## 5.2 Annotating your code with @Interned
In order to perform checking, you must annotate your code with the @Interned type annotation, which indicates a type for the canonical representation of an object:
String s1 = ...; // type is (uninterned) "String"
@Interned String s2 = ...; // Java type is "String", but checker treats it as "@Interned String"
The type system enforced by the checker plugin ensures that only interned values can be assigned to s2.
To specify that all objects of a given type are interned, annotate the class declaration:
public @Interned class MyInternedClass { ... }
This is equivalent to annotating every use of MyInternedClass, in a declaration or elsewhere. For example, enum classes are implicitly so annotated.
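The enum case is easy to see at run time: each enum constant is a singleton, which is why the checker treats enum types as implicitly @Interned and permits ==. A minimal demo (class and enum names are our own):

```java
public class EnumInternDemo {
    enum Color { RED, GREEN, BLUE }

    public static void main(String[] args) {
        Color c = Color.valueOf("RED");
        // Each enum constant is a singleton, so == is always safe:
        System.out.println(c == Color.RED); // true
    }
}
```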
### 5.2.1 Implicit qualifiers
As described in Section 25.3, the Interning Checker adds implicit qualifiers, reducing the number of annotations that must appear in your code. For example, String literals and the null literal are always considered interned, and object creation expressions (using new) are never considered @Interned unless they are annotated as such, as in
@Interned Double internedDoubleZero = new @Interned Double(0); // canonical representation for Double zero
For a complete description of all implicit interning qualifiers, see the Javadoc for InterningAnnotatedTypeFactory.
## 5.3 What the Interning Checker checks
Objects of an @Interned type may be safely compared using the “==” operator.
The checker issues an error in two cases:
1. When a reference (in)equality operator (“==” or “!=”) has an operand of non-@Interned type.
2. When a non-@Interned type is used where an @Interned type is expected.
This example shows both sorts of problems:
Date date;
@Interned Date idate;
...
if (date == idate) { ... } // error: reference equality test is unsafe
idate = date; // error: idate's referent may no longer be interned
The checker also issues a warning when .equals is used where == could be safely used. You can disable this behavior via the javac -Alint command-line option, like so: -Alint=-dotequals.
For a complete description of all checks performed by the checker, see the Javadoc for InterningVisitor.
You can also restrict which types the checker should examine and type-check, using the -Acheckclass option. For example, to find only the interning errors related to uses of String, you can pass -Acheckclass=java.lang.String. The Interning Checker always checks all subclasses and superclasses of the given class.
### 5.3.1 Limitations of the Interning Checker
The Interning Checker conservatively assumes that the Character, Integer, and Short valueOf methods return a non-interned value. In fact, these methods sometimes return an interned value and sometimes a non-interned value, depending on the run-time argument (JLS §5.1.7). If you know that the run-time argument to valueOf implies that the result is interned, then you will need to suppress an error. (An alternative would be to enhance the Interning Checker to estimate the upper and lower bounds on char, int, and short values so that it can more precisely determine whether the result of a given valueOf call is interned.)
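The run-time-dependent behavior that forces this conservatism can be demonstrated directly (class name `ValueOfCacheDemo` is our own):

```java
public class ValueOfCacheDemo {
    public static void main(String[] args) {
        // JLS §5.1.7 guarantees caching only for values in -128..127:
        System.out.println(Integer.valueOf(100) == Integer.valueOf(100)); // true

        // Outside that range the result is JVM-dependent (typically false on a
        // default configuration), so the checker must be conservative:
        System.out.println(Integer.valueOf(1000) == Integer.valueOf(1000));

        // Use equals() when identity is not guaranteed:
        System.out.println(Integer.valueOf(1000).equals(Integer.valueOf(1000))); // true
    }
}
```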
## 5.4 Examples
To try the Interning Checker on a source file that uses the @Interned qualifier, use the following command (where javac is the Checker Framework compiler that is distributed with the Checker Framework):
javac -processor org.checkerframework.checker.interning.InterningChecker examples/InterningExample.java
Compilation will complete without errors or warnings.
To see the checker warn about incorrect usage of annotations, use the following command:
javac -processor org.checkerframework.checker.interning.InterningChecker examples/InterningExampleWithWarnings.java
The compiler will issue an error regarding violation of the semantics of @Interned.
The Daikon invariant detector (http://plse.cs.washington.edu/daikon/) is also annotated with @Interned. From directory java, run make check-interning.
## 5.5 Other interning annotations
The Checker Framework’s interning annotations are similar to annotations used elsewhere.
If your code is already annotated with a different interning annotation, you can reuse that effort. The Checker Framework comes with cleanroom re-implementations of annotations from other tools. It treats them exactly as if you had written the corresponding annotation from the Interning Checker, as described in Figure 5.2.
com.sun.istack.internal.Interned
⇒ org.checkerframework.checker.interning.qual.Interned
Figure 5.2: Correspondence between other interning annotations and the Checker Framework’s annotations.
Alternately, the Checker Framework can process those other annotations (as well as its own, if they also appear in your program). The Checker Framework has its own definition of the annotations on the left side of Figure 5.2, so that they can be used as type qualifiers. The Checker Framework interprets them according to the right side of Figure 5.2.
# Chapter 6 Lock Checker
The Lock Checker prevents certain kinds of concurrency errors. If the Lock Checker issues no warnings for a given program, then the program holds the appropriate lock every time that it accesses a variable annotated with @GuardedBy.
Note: This does not mean that your program has no concurrency errors. (You might have forgotten to annotate that a particular variable should only be accessed when a lock is held. You might release and re-acquire the lock, when correctness requires you to hold it throughout a computation. And, there are other concurrency errors that cannot, or should not, be solved with locks.) However, ensuring that your program obeys its locking discipline is an easy and effective way to eliminate a common and important class of errors.
To run the Lock Checker, supply the -processor org.checkerframework.checker.lock.LockChecker command-line option to javac.
## 6.1 Lock annotations
Summary of the annotations used by the Lock Checker.

| Kind | Annotation | Meaning |
|---|---|---|
| Type annotation | `@GuardedBy(String lock)` | Field/variable accessible only after acquiring the given lock. |
| Declaration annotation | `@Holding(String[] locks)` | Locks that must be held before the method is called. |
| Declaration annotation | `@EnsuresLockHeld(String[] expressions)` | Locks guaranteed to be held when the method returns. |
| Declaration annotation | `@EnsuresLockHeldIf(String[] expressions, boolean result)` | Locks guaranteed to be held when the method returns the given result. |
| Declaration annotation | `@LockingFree` | The method makes no use of locks or synchronization. |
### 6.1.1 Type annotations for objects protected by locks
@GuardedBy(String lock)
indicates a type whose value may be accessed only when the given lock is held. See the Javadoc for GuardedBy for an explanation of the argument and other details. The lock acquisition and the value access may be arbitrarily far in the future; or, if the value is never accessed, the lock never need be held.
### 6.1.2 Lock method annotations
The Lock Checker supports several annotations that specify method behavior. These are declaration annotations, not type annotations: they apply to the method itself rather than to some particular type.
@EnsuresLockHeld(String[] expressions)
@EnsuresLockHeldIf(String[] expressions, boolean result)
indicate a method postcondition. With @EnsuresLockHeld, the given expressions are known to be objects used as locks and are known to be in a locked state after the method returns; this is useful for annotating a method that takes a lock. With @EnsuresLockHeldIf, if the annotated method returns the given boolean value (true or false), the given expressions are known to be objects used as locks and are known to be in a locked state after the method returns; this is useful for annotating a method that conditionally takes a lock. See Section 6.2.2 for examples.
@LockingFree
indicates that the method does not use synchronization/locking, directly or indirectly. This is used to facilitate dataflow analysis and is less restrictive than @SideEffectFree. It is especially useful for annotating library methods, including JDK methods. Since @SideEffectFree implies @LockingFree, if both are applicable then you should only write @SideEffectFree.
It is critical not to use this annotation for any method that uses synchronization/locking, directly or indirectly. This is because even methods that are guaranteed to release all locks they acquire could cause deadlocks. Although the Lock Checker currently does not aid with deadlock detection, this annotation must be used in anticipation that the Lock Checker eventually could.
### 6.1.3 Discussion of @Holding
A programmer might choose to use the @Holding method annotation in two different ways: to specify a higher-level protocol, or to summarize intended usage. Both of these approaches are useful, and the Lock Checker supports both.
##### Higher-level synchronization protocol
@Holding can specify a higher-level synchronization protocol that is not expressible as locks over Java objects. By requiring locks to be held, you can create higher-level protocol primitives without giving up the benefits of the annotations and checking of them.
##### Method summary that simplifies reasoning
@Holding can be a method summary that simplifies reasoning. In this case, the @Holding doesn’t necessarily introduce a new correctness constraint; the program might be correct even if the lock were acquired later in the body of the method or in a method it calls, so long as the lock is acquired before accessing the data it protects.
Rather, here @Holding expresses a fact about execution: when execution reaches this point, the following locks are already held. This fact enables people and tools to reason intra- rather than inter-procedurally.
In Java, it is always legal to re-acquire a lock that is already held, and the re-acquisition always works. Thus, whenever you write
@Holding("myLock")
void myMethod() {
...
}
it would be equivalent, from the point of view of which locks are held during the body, to write
void myMethod() {
synchronized (myLock) { // no-op: re-acquire a lock that is already held
...
}
}
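The equivalence relies on Java’s intrinsic locks being reentrant, which can be demonstrated directly. In this runnable sketch (class name `ReacquireDemo` is our own), `myMethod` plays the role of a method annotated @Holding("myLock"):

```java
public class ReacquireDemo {
    static final Object myLock = new Object();
    static int counter = 0;

    // Plays the role of a method annotated @Holding("myLock"):
    // callers are expected to already hold the lock.
    static void myMethod() {
        synchronized (myLock) { // no-op re-acquisition: the lock is already held
            counter++;
        }
    }

    public static void main(String[] args) {
        synchronized (myLock) {
            myMethod(); // re-entrant acquisition succeeds; no deadlock
        }
        System.out.println(counter); // prints 1
    }
}
```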
The advantages of the @Holding annotation include:
• The annotation documents the fact that the lock is intended to already be held.
• The Lock Checker enforces that the lock is held when the method is called, rather than masking a programmer error by silently re-acquiring the lock.
• The synchronized statement can deadlock if, due to a programmer error, the lock is not already held. The Lock Checker prevents this type of error.
• The annotation has no run-time overhead. By contrast, the synchronized statement consumes time even when the lock re-acquisition succeeds.
## 6.2 Examples
### 6.2.1 Examples of @GuardedBy and @Holding
The most common use of @GuardedBy is to annotate a field declaration type. However, other uses of @GuardedBy are possible.
##### Return types
A return type may be annotated with @GuardedBy:
@GuardedBy("MyClass.myLock") Object myMethod() { ... }
// reassignments without holding the lock are OK.
@GuardedBy("MyClass.myLock") Object x = myMethod();
@GuardedBy("MyClass.myLock") Object y = x;
x.toString(); // ILLEGAL because the lock is not held
synchronized(MyClass.myLock) {
y.toString(); // OK: the lock is held
}
##### Formal parameters
A parameter type may be annotated with @GuardedBy, which indicates that the method body must acquire the lock before accessing the parameter. A client may pass a non-@GuardedBy reference as an argument, since it is legal to access such a reference after the lock is acquired.
void helper1(@GuardedBy("MyClass.myLock") Object a) {
a.toString(); // ILLEGAL: the lock is not held
synchronized(MyClass.myLock) {
a.toString(); // OK: the lock is held
}
}
@Holding("MyClass.myLock")
void helper2(@GuardedBy("MyClass.myLock") Object b) {
b.toString(); // OK: the lock is held
}
void helper3(Object c) {
helper1(c); // OK: passing a subtype in place of the @GuardedBy supertype
c.toString(); // OK: no lock constraints
}
void helper4(@GuardedBy("MyClass.myLock") Object d) {
d.toString(); // ILLEGAL: the lock is not held
}
void myMethod2(@GuardedBy("MyClass.myLock") Object e) {
helper1(e); // OK to pass to another routine without holding the lock
e.toString(); // ILLEGAL: the lock is not held
synchronized (MyClass.myLock) {
helper2(e);
helper3(e);
helper4(e); // OK, but helper4's body still does not type-check
}
}
### 6.2.2 Examples of @EnsuresLockHeld and @EnsuresLockHeldIf
@EnsuresLockHeld and @EnsuresLockHeldIf are primarily intended for annotating JDK locking methods, as in:
package java.util.concurrent.locks;
class ReentrantLock {
@EnsuresLockHeld("this")
public void lock();
@EnsuresLockHeldIf(expression="this", result=true)
public boolean tryLock();
[...]
}
They can also be used to annotate user methods, particularly for higher-level lock constructs such as a Monitor, as in this simplified example:
public class Monitor {
private ReentrantLock lock; // Initialized in the constructor
[...]
@EnsuresLockHeld("lock")
public void enter() {
lock.lock();
}
[...]
}
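Stripped of the annotations, the Monitor pattern above is runnable with the JDK’s ReentrantLock. This sketch (class name `MonitorDemo` is our own) exercises the methods the annotations would describe:

```java
import java.util.concurrent.locks.ReentrantLock;

public class MonitorDemo {
    private final ReentrantLock lock = new ReentrantLock();

    // Corresponds to a method annotated @EnsuresLockHeld("lock"):
    public void enter() {
        lock.lock();
    }

    // Corresponds to @EnsuresLockHeldIf(expression="lock", result=true):
    public boolean tryEnter() {
        return lock.tryLock();
    }

    public void leave() {
        lock.unlock();
    }

    public static void main(String[] args) {
        MonitorDemo m = new MonitorDemo();
        m.enter();
        System.out.println(m.lock.isHeldByCurrentThread()); // true
        m.leave();
        if (m.tryEnter()) { // uncontended, so this succeeds
            System.out.println(m.lock.isHeldByCurrentThread()); // true
            m.leave();
        }
        System.out.println(m.lock.isHeldByCurrentThread()); // false
    }
}
```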
### 6.2.3 Example of @LockingFree
@LockingFree is useful when a method does not make any use of synchronization or locks but causes other side effects (hence @SideEffectFree is not appropriate). @SideEffectFree implies @LockingFree, therefore if both are applicable, you should only write @SideEffectFree.
private Object myField;
private ReentrantLock lock; // Initialized in the constructor
private @GuardedBy("lock") Object x; // Initialized in the constructor
[...]
@LockingFree
// This method does not use locks or synchronization but cannot
// be annotated as @SideEffectFree since it alters myField.
void myMethod() {
myField = new Object();
}
@SideEffectFree
int mySideEffectFreeMethod() {
return 0;
}
void myUnlockingMethod() {
lock.unlock();
}
void myUnannotatedEmptyMethod() {
}
void myOtherMethod() {
if (lock.tryLock()) {
x.toString(); // OK: the lock is held
myMethod();
x.toString(); // OK: the lock is still known to be held since myMethod is locking-free
mySideEffectFreeMethod();
x.toString(); // OK: the lock is still known to be held since mySideEffectFreeMethod
// is side-effect-free
myUnlockingMethod();
x.toString(); // ILLEGAL: myUnlockingMethod is not locking-free
}
if (lock.tryLock()) {
x.toString(); // OK: the lock is held
myUnannotatedEmptyMethod();
x.toString(); // ILLEGAL: even though myUnannotatedEmptyMethod is empty, since it is
// not annotated with @LockingFree, the Lock Checker no longer knows
// the state of the lock.
}
}
}
## 6.3 Other lock annotations
The Checker Framework’s lock annotations are similar to annotations used elsewhere.
If your code is already annotated with a different lock annotation, you can reuse that effort. The Checker Framework comes with cleanroom re-implementations of annotations from other tools. It treats them exactly as if you had written the corresponding annotation from the Lock Checker, as described in Figure 6.1.
net.jcip.annotations.GuardedBy ⇒ org.checkerframework.checker.lock.qual.GuardedBy
javax.annotation.concurrent.GuardedBy ⇒ org.checkerframework.checker.lock.qual.GuardedBy
Figure 6.1: Correspondence between other lock annotations and the Checker Framework’s annotations.
Alternately, the Checker Framework can process those other annotations (as well as its own, if they also appear in your program). The Checker Framework has its own definition of the annotations on the left side of Figure 6.1, so that they can be used as type annotations. The Checker Framework interprets them according to the right side of Figure 6.1.
### 6.3.1 Relationship to annotations in Java Concurrency in Practice
The book Java Concurrency in Practice [GPB+06] defines a @GuardedBy annotation that is the inspiration for ours. The book’s @GuardedBy serves two related but distinct purposes:
• When applied to a field, it means that the given lock must be held when accessing the field. The lock acquisition and the field access may be arbitrarily far in the future.
• When applied to a method, it means that the given lock must be held by the caller at the time that the method is called — in other words, at the time that execution passes the @GuardedBy annotation.
The Lock Checker renames the method annotation to @Holding, and it generalizes the @GuardedBy annotation into a type annotation that can apply not just to a field but to an arbitrary type (including the type of a parameter, return value, local variable, generic type parameter, etc.). This makes the annotations more expressive and also more amenable to automated checking. It also accommodates the distinct meanings of the two annotations, and resolves ambiguity when @GuardedBy is written in a location that might apply to either the method or the return type.
(The JCIP book gives some rationales for reusing the annotation name for two purposes. One rationale is that there are fewer annotations to learn. Another rationale is that both variables and methods are “members” that can be “accessed”; variables can be accessed by reading or writing them (putfield, getfield), and methods can be accessed by calling them (invokevirtual, invokeinterface): in both cases, @GuardedBy creates preconditions for accessing so-annotated members. This informal intuition is inappropriate for a tool that requires precise semantics.)
## 6.4 Possible extensions
The Lock Checker validates some uses of locks, but not all. It would be possible to enrich it with additional annotations. This would increase the programmer annotation burden, but would provide additional guarantees.
Lock ordering: Specify that one lock must be acquired before or after another, or specify a global ordering for all locks. This would prevent deadlock.
Not-holding: Specify that a method must not be called if any of the listed locks are held.
These features are supported by Clang’s thread-safety analysis.
## 6.5 A note on Lock Checker internals
The following type qualifiers are inferred and used internally by the Lock Checker and should never need to be written by the programmer. They are presented here for reference on how the Lock Checker works and to help understand warnings produced by the Lock Checker. You may skip this section if you are not seeing a warning mentioning @LockHeld or @LockPossiblyHeld.
These type qualifiers are used on the types of the objects that will be used as locks to protect other objects. The Lock Checker uses them to track the current state of locks at a given point in the code.
@LockPossiblyHeld
indicates a type that may be used as a lock to protect a field/variable (i.e. an object of this type may be used as the expression in a @GuardedBy annotation) and the lock may or may not be currently held. Since any object can potentially be used as a lock, it in fact applies to all non-primitive types. This is the default type qualifier in the hierarchy and it is the top type.
@LockHeld
indicates a type that may be used as a lock to protect a field/variable, and is currently in a locked state on the current thread. It is a subtype of @LockPossiblyHeld and is the bottom type.
# Chapter 7 Fake Enum Checker
Java’s enum keyword lets you define an enumeration type: a finite set of distinct values that are related to one another but are disjoint from all other types, including other enumerations. Before enums were added to Java, there were two ways to encode an enumeration, both of which are error-prone:
the fake enum pattern
a set of int or String constants (as often found in older C code).
the typesafe enum pattern
a class with private constructor.
Sometimes you need to use the fake enum pattern, rather than a real enum or the typesafe enum pattern. One reason is backward-compatibility. A public API that predates Java’s enum keyword may use int constants; it cannot be changed, because doing so would break existing clients. For example, Java’s JDK still uses int constants in the AWT and Swing frameworks, and Android also uses int constants rather than Java enums. Another reason is performance, especially in environments with limited resources. Use of an int instead of an object can reduce code size, memory requirements, and run time.
In cases when code has to use the fake enum pattern, the Fake Enum Checker, or Fenum Checker, gives the same safety guarantees as a true enumeration type. The developer can introduce new types that are distinct from all values of the base type and from all other fake enums. Fenums can be introduced for primitive types as well as for reference types.
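To see why the fake enum pattern is error-prone without checker support, consider this plain-Java sketch (the constant names are hypothetical): two unrelated "enumerations" share the int value space, so passing the wrong one compiles and runs without complaint.

```java
public class FakeEnumPitfall {
    // Two unrelated fake enums encoded as int constants.
    static final int COLOR_RED = 1;
    static final int COLOR_BLUE = 2;
    static final int SIZE_SMALL = 1;   // same underlying value as COLOR_RED

    static String describeColor(int color) {
        return color == COLOR_RED ? "red" : "blue";
    }

    public static void main(String[] args) {
        // The compiler accepts a SIZE constant where a COLOR is expected;
        // the Fenum Checker would reject this mix-up at compile time.
        assert describeColor(SIZE_SMALL).equals("red");
    }
}
```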
Figure 7.1 shows part of the type hierarchy for the Fenum type system.
## 7.1 Fake enum annotations
The checker supports two ways to introduce a new fake enum (fenum):
1. Introduce your own specialized fenum annotation with code like this in file MyFenum.java:
package myModule.qual;
import java.lang.annotation.*;
import org.checkerframework.checker.fenum.qual.FenumTop;
import org.checkerframework.framework.qual.SubtypeOf;
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
@SubtypeOf(FenumTop.class)
public @interface MyFenum {}
You only need to adapt the italicized package, annotation, and file names in the example.
Note that all custom annotations must have the @Target(ElementType.TYPE_USE) meta-annotation. See section 29.3.1.
2. Use the provided @Fenum annotation, which takes a String argument to distinguish different fenums. For example, @Fenum("A") and @Fenum("B") are two distinct fenums.
The first approach allows you to define a short, meaningful name suitable for your project, whereas the second approach allows quick prototyping.
## 7.2 What the Fenum Checker checks
The Fenum Checker ensures that unrelated types are not mixed. All types with a particular fenum annotation, or @Fenum(...) with a particular String argument, are disjoint from all unannotated types and all types with a different fenum annotation or String argument.
The checker forbids method calls on fenum types and ensures that only compatible fenum types are used in comparisons and arithmetic operations (if applicable to the annotated type).
It is the programmer’s responsibility to ensure that fields with a fenum type are properly initialized before use. Otherwise, one might observe a null reference or zero value in the field of a fenum type. (The Nullness Checker (Chapter 3) can prevent failure to initialize a reference variable.)
## 7.3 Running the Fenum Checker
The Fenum Checker can be invoked by running the following commands.
• If you define your own annotation(s), provide the name(s) of the annotation(s) through the -Aquals option, using a comma-no-space-separated notation:
javac -Xbootclasspath/p:/full/path/to/myProject/bin:/full/path/to/myLibrary/bin \
-processor org.checkerframework.checker.fenum.FenumChecker \
-Aquals=myModule.qual.MyFenum MyFile.java ...
The annotations listed in -Aquals must be accessible to the compiler during compilation in the classpath. In other words, they must already be compiled (and, typically, be on the javac bootclasspath) before you run the Fenum Checker with javac. It is not sufficient to supply their source files on the command line.
You can also provide the fully-qualified paths to a set of directories that contain the annotations through the -AqualDirs option, using a colon-no-space-separated notation. For example:
javac -Xbootclasspath/p:/full/path/to/myProject/bin:/full/path/to/myLibrary/bin \
-processor org.checkerframework.checker.fenum.FenumChecker \
-AqualDirs=/full/path/to/myProject/bin:/full/path/to/myLibrary/bin MyFile.java ...
Note that in these two examples, the compiled class file of the myModule.qual.MyFenum annotation must exist in either the myProject/bin directory or the myLibrary/bin directory. The following placement of the class file will work with the above commands:
.../myProject/bin/myModule/qual/MyFenum.class
The two options can be used at the same time to provide groups of annotations from directories, and individually named annotations.
• If your code uses the @Fenum annotation, you do not need the -Aquals or -AqualDirs option:
javac -processor org.checkerframework.checker.fenum.FenumChecker MyFile.java ...
## 7.4 Suppressing warnings
One example of when you need to suppress warnings is when you initialize the fenum constants to literal values. To remove this warning message, add a @SuppressWarnings annotation to either the field or class declaration, for example:
@SuppressWarnings("fenum:assignment.type.incompatible") // initialization of fake enums
class MyConsts {
public static final @Fenum("A") int ACONST1 = 1;
public static final @Fenum("A") int ACONST2 = 2;
}
## 7.5 Example
The following example introduces two fenums in class TestStatic and then performs a few typical operations.
@SuppressWarnings("fenum:assignment.type.incompatible") // initialization of fake enums
public class TestStatic {
public static final @Fenum("A") int ACONST1 = 1;
public static final @Fenum("A") int ACONST2 = 2;
public static final @Fenum("B") int BCONST1 = 4;
public static final @Fenum("B") int BCONST2 = 5;
}
class FenumUser {
@Fenum("A") int state1 = TestStatic.ACONST1; // ok
@Fenum("B") int state2 = TestStatic.ACONST1; // Incompatible fenums forbidden!
void fenumArg(@Fenum("A") int p) {}
void foo() {
state1 = 4; // Direct use of value forbidden!
state1 = TestStatic.BCONST1; // Incompatible fenums forbidden!
state1 = TestStatic.ACONST2; // ok
fenumArg(5); // Direct use of value forbidden!
fenumArg(TestStatic.BCONST1); // Incompatible fenums forbidden!
fenumArg(TestStatic.ACONST1); // ok
}
}
Also, see the example project in the checker/examples/fenum-extension directory.
# Chapter 8 Tainting Checker
The Tainting Checker prevents certain kinds of trust errors. A tainted, or untrusted, value is one that comes from an arbitrary, possibly malicious source, such as user input or unvalidated data. In certain parts of your application, using a tainted value can compromise the application’s integrity, causing it to crash, corrupt data, leak private data, etc.
For example, a user-supplied pointer, handle, or map key should be validated before being dereferenced. As another example, a user-supplied string should not be concatenated into a SQL query, lest the program be subject to a SQL injection attack. A location in your program where malicious data could do damage is called a sensitive sink.
A program must “sanitize” or “untaint” an untrusted value before using it at a sensitive sink. There are two general ways to untaint a value: by checking that it is innocuous/legal (e.g., it contains no characters that can be interpreted as SQL commands when pasted into a string context), or by transforming the value to be legal (e.g., quoting all the characters that can be interpreted as SQL commands). A correct program must use one of these two techniques so that tainted values never flow to a sensitive sink. The Tainting Checker ensures that your program does so.
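The second untainting technique — transforming a value to be legal — can be sketched in plain Java as follows. This is an illustrative quoting routine only (real code should prefer PreparedStatement parameters over string quoting):

```java
public class Sanitize {
    // Transformation-style sanitizer: quote a string for use as a SQL
    // string literal by doubling embedded single quotes.
    static String quoteSqlLiteral(String tainted) {
        return "'" + tainted.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        String userInput = "O'Brien'; DROP TABLE users; --";
        String safe = quoteSqlLiteral(userInput);
        // The embedded quotes can no longer terminate the SQL literal.
        assert safe.equals("'O''Brien''; DROP TABLE users; --'");
    }
}
```

In a Tainting-Checker-annotated program, such a routine would take a @Tainted String, return an @Untainted String, and carry a @SuppressWarnings("tainting") annotation marking it as a trusted boundary.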
If the Tainting Checker issues no warning for a given program, then no tainted value ever flows to a sensitive sink. However, your program is not necessarily free from all trust errors. As a simple example, you might have forgotten to annotate a sensitive sink as requiring an untainted type, or you might have forgotten to annotate untrusted data as having a tainted type.
To run the Tainting Checker, supply the -processor TaintingChecker command-line option to javac.
## 8.1 Tainting annotations
The Tainting type system uses the following annotations:
• @Untainted indicates a type that includes only untainted, trusted values.
• @Tainted indicates a type that may include tainted, untrusted values. @Tainted is a supertype of @Untainted.
• @PolyTainted is a qualifier that is polymorphic over tainting (see Section 24.2).
## 8.2 Tips on writing @Untainted annotations
Most programs are designed with a boundary that surrounds sensitive computations, separating them from untrusted values. Outside this boundary, the program may manipulate malicious values, but no malicious values ever pass the boundary to be operated upon by sensitive computations.
In some programs, the area outside the boundary is very small: values are sanitized as soon as they are received from an external source. In other programs, the area inside the boundary is very small: values are sanitized only immediately before being used at a sensitive sink. Either approach can work, so long as every possibly-tainted value is sanitized before it reaches a sensitive sink.
Once you determine the boundary, annotating your program is easy: put @Tainted outside the boundary, @Untainted inside, and @SuppressWarnings("tainting") at the validation or sanitization routines that are used at the boundary.
The Tainting Checker’s standard default qualifier is @Tainted (see Section 25.3.1 for overriding this default). This is the safest default, and the one that should be used for all code outside the boundary (for example, code that reads user input). You can set the default qualifier to @Untainted in code that may contain sensitive sinks.
The Tainting Checker does not know the intended semantics of your program, so it cannot warn you if you mis-annotate a sensitive sink as taking @Tainted data, or if you mis-annotate external data as @Untainted. So long as you correctly annotate the sensitive sinks and the places that untrusted data is read, the Tainting Checker will ensure that all your other annotations are correct and that no undesired information flows exist.
As an example, suppose that you wish to prevent SQL injection attacks. You would start by annotating the Statement class to indicate that the execute operations may only operate on untainted queries (Chapter 28 describes how to annotate external libraries):
public boolean execute(@Untainted String sql) throws SQLException;
public boolean executeUpdate(@Untainted String sql) throws SQLException;
## 8.3 @Tainted and @Untainted can be used for many purposes
The @Tainted and @Untainted annotations have only minimal built-in semantics. In fact, the Tainting Checker provides only a small amount of functionality beyond the Subtyping Checker (Chapter 22). This lack of hard-coded behavior means that the annotations can serve many different purposes. Here are just a few examples:
• Prevent SQL injection attacks: @Tainted is external input, @Untainted has been checked for SQL syntax.
• Prevent cross-site scripting attacks: @Tainted is external input, @Untainted has been checked for JavaScript syntax.
• Prevent information leakage: @Tainted is secret data, @Untainted may be displayed to a user.
In each case, you need to annotate the appropriate untainting/sanitization routines. This is similar to the @Encrypted annotation (Section 22.2), where the cryptographic functions are beyond the reasoning abilities of the type system. In each case, the type system verifies most of your code, and the @SuppressWarnings annotations indicate the few places where human attention is needed.
If you want more specialized semantics, or you want to annotate multiple types of tainting in a single program, then you can copy the definition of the Tainting Checker to create a new annotation and checker with a more specific name and semantics. See Chapter 29 for more details.
### 8.3.1 Qualifier Parameters
The Tainting Checker supports qualifier parameters. See Section 24.3 for more details on qualifier parameters.
The qualifier parameter system currently incurs a 50% performance penalty. If this is unacceptable you can run the original Tainting Checker by passing the -processor org.checkerframework.checker.tainting.classic.TaintingClassicChecker command-line option to javac.
# Chapter 9 Regex Checker for regular expression syntax
The Regex Checker prevents, at compile-time, use of syntactically invalid regular expressions and access of invalid capturing groups.
A regular expression, or regex, is a pattern for matching certain strings of text. In Java, a programmer writes a regular expression as a string. At run time, the string is “compiled” into an efficient internal form (Pattern) that is used for text-matching. Regular expressions in Java also have capturing groups, which are delimited by parentheses and allow extraction of matched text.
The syntax of regular expressions is complex, so it is easy to make a mistake. It is also easy to accidentally use a regex feature from another language that is not supported by Java (see section “Comparison to Perl 5” in the Pattern Javadoc). Ordinarily, the programmer does not learn of these errors until run time. The Regex Checker warns about these problems at compile time.
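The run-time failure mode that the Regex Checker moves to compile time looks like this in plain Java: an invalid pattern is accepted by the compiler but makes Pattern.compile throw a PatternSyntaxException.

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class RegexRuntimeFailure {
    public static void main(String[] args) {
        // A valid regex compiles and matches as expected...
        Pattern ok = Pattern.compile("colou?r");
        assert ok.matcher("color").matches();

        // ...but an unclosed group is only detected at run time,
        // when Pattern.compile throws PatternSyntaxException.
        boolean threw = false;
        try {
            Pattern.compile("(unclosed");
        } catch (PatternSyntaxException e) {
            threw = true;
        }
        assert threw;
    }
}
```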
For further details, including case studies, see a paper about the Regex Checker [SDE12].
To run the Regex Checker, supply the -processor org.checkerframework.checker.regex.RegexChecker command-line option to javac.
## 9.1 Regex annotations
These qualifiers make up the Regex type system:
@Regex
indicates that the run-time value is a valid regular expression String. If the optional parameter is supplied to the qualifier, then the number of capturing groups in the regular expression is at least that many. If not provided, the parameter defaults to 0. For example, if an expression’s type is @Regex(1) String, then its run-time value could be "colo(u?)r" or "(brown|beige)" but not "colou?r" nor a non-regex string such as "1) first point".
@PolyRegex
indicates qualifier polymorphism (see Section 24.2).
The subtyping hierarchy of the Regex Checker’s qualifiers is shown in Figure 9.1.
## 9.2 Annotating your code with @Regex
### 9.2.1 Implicit qualifiers
As described in Section 25.3, the Regex Checker adds implicit qualifiers, reducing the number of annotations that must appear in your code. The checker implicitly adds the Regex qualifier with the parameter set to the correct number of capturing groups to any String literal that is a valid regex. The Regex Checker allows the null literal to be assigned to any type qualified with the Regex qualifier.
### 9.2.2 Capturing groups
The Regex Checker validates that a legal capturing group number is passed to Matcher’s group, start and end methods. To do this, the type of Matcher must be qualified with a @Regex annotation with the number of capturing groups in the regular expression. This is handled implicitly by the Regex Checker for local variables (see Section 25.4), but you may need to add @Regex annotations with a capturing group count to Pattern and Matcher fields and parameters.
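The run-time error that this check prevents can be demonstrated without annotations: calling Matcher.group with a number larger than the pattern's group count throws IndexOutOfBoundsException.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GroupAccess {
    public static void main(String[] args) {
        // "(brown|beige)" has one capturing group, so group(1) is legal
        // but group(2) fails at run time.
        Matcher m = Pattern.compile("(brown|beige)").matcher("beige");
        assert m.matches();
        assert m.group(1).equals("beige");
        boolean threw = false;
        try {
            m.group(2);
        } catch (IndexOutOfBoundsException e) {
            threw = true;
        }
        assert threw;
    }
}
```

With a @Regex(1) qualifier on the Matcher's type, the Regex Checker rejects the group(2) call at compile time.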
### 9.2.3 Concatenation of partial regular expressions
public @Regex String parenthesize(@Regex String regex) {
return "(" + regex + ")"; // Even though the parentheses are not @Regex Strings,
// the whole expression is a @Regex String
}
Figure 9.2: An example of the Regex Checker’s support for concatenation of non-regular-expression Strings to produce valid regular expression Strings.
In general, concatenating a non-regular-expression String with any other string yields a non-regular-expression String. The Regex Checker can sometimes determine that concatenation of non-regular-expression Strings will produce valid regular expression Strings. For an example see Figure 9.2.
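The effect of the parenthesize example can be checked at run time in plain Java: wrapping a group-free regex in parentheses yields a valid regex with one capturing group.

```java
import java.util.regex.Pattern;

public class Parenthesize {
    // Mirrors the parenthesize example above, without annotations.
    static String parenthesize(String regex) {
        return "(" + regex + ")";
    }

    public static void main(String[] args) {
        Pattern p = Pattern.compile(parenthesize("colou?r"));
        // The added parentheses contribute one capturing group.
        assert p.matcher("colour").groupCount() == 1;
        assert p.matcher("color").matches();
    }
}
```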
### 9.2.4 Testing whether a string is a regular expression
Sometimes, the Regex Checker cannot infer whether a particular expression is a regular expression — and sometimes your code cannot either! In these cases, you can use the isRegex method to perform such a test, and other helper methods to provide useful error messages. A common use is for user-provided regular expressions (such as ones passed on the command-line). Figure 9.3 gives an example of the intended use of the RegexUtil methods.
RegexUtil.isRegex
returns true if its argument is a valid regular expression.
RegexUtil.regexError
returns a String error message if its argument is not a valid regular expression, or null if its argument is a valid regular expression.
RegexUtil.regexException
returns the PatternSyntaxException that Pattern.compile(String) throws when compiling an invalid regular expression. It returns null if its argument is a valid regular expression.
An additional version of each of these methods is also provided that takes an additional group count parameter. The RegexUtil.isRegex method verifies that the argument has at least the given number of groups. The RegexUtil.regexError and RegexUtil.regexException methods return a String error message and PatternSyntaxException, respectively, detailing why the given String is not a syntactically valid regular expression with at least the given number of capturing groups.
If you detect that a String is not a valid regular expression but would like to report the error higher up the call stack (potentially where you can provide a more detailed error message) you can throw a RegexUtil.CheckedPatternSyntaxException. This exception is functionally the same as a PatternSyntaxException except it is checked to guarantee that the error will be handled up the call stack. For more details, see the Javadoc for RegexUtil.CheckedPatternSyntaxException.
A potential disadvantage of using the RegexUtil class is that your code becomes dependent on the Checker Framework at run time as well as at compile time. You can avoid this by adding the Checker Framework to your project, or by copying the RegexUtil class into your own code.
String regex = getRegexFromUser();
if (! RegexUtil.isRegex(regex)) {
throw new RuntimeException("Error parsing regex " + regex, RegexUtil.regexException(regex));
}
Pattern p = Pattern.compile(regex);
Figure 9.3: Example use of RegexUtil methods.
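If you choose to avoid the run-time dependency rather than copy RegexUtil, a minimal stand-in for RegexUtil.isRegex can be written with a try/catch around Pattern.compile (this sketch handles only the no-group-count overload, not the group-count variants described above):

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class SimpleRegexUtil {
    // Minimal stand-in for RegexUtil.isRegex: a string is a valid regex
    // exactly when Pattern.compile accepts it.
    static boolean isRegex(String s) {
        try {
            Pattern.compile(s);
            return true;
        } catch (PatternSyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        assert isRegex("(brown|beige)");
        assert !isRegex("1) first point");  // unmatched closing parenthesis
    }
}
```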
### 9.2.5 Qualifier Parameters
The Regex Checker supports qualifier parameters. See Section 24.3 for more details on qualifier parameters.
The qualifier parameter system currently incurs a 50% performance penalty. If this is unacceptable you can run the original Regex Checker by passing the -processor org.checkerframework.checker.regex.classic.RegexClassicChecker command-line option to javac.
### 9.2.6 Suppressing warnings
If you are positive that a particular string that is being used as a regular expression is syntactically valid, but the Regex Checker cannot conclude this and issues a warning about possible use of an invalid regular expression, then you can use the RegexUtil.asRegex method to suppress the warning.
You can think of this method as a cast: it returns its argument unchanged, but with the type @Regex String if it is a valid regular expression. It throws an error if its argument is not a valid regular expression, but you should only use it when you are sure it will not throw an error.
There is an additional RegexUtil.asRegex method that takes a capturing group parameter. This method works the same as described above, but returns a @Regex String with the parameter on the annotation set to the value of the capturing group parameter passed to the method.
The use case shown in Figure 9.3 should support most cases so the asRegex method should be used rarely.
# Chapter 10 Format String Checker
The Format String Checker prevents use of incorrect format strings in format methods such as System.out.printf and String.format.
The Format String Checker warns you if you write an invalid format string, and it warns you if the other arguments are not consistent with the format string (in number of arguments or in their types). Here are examples of errors that the Format String Checker detects at compile time. Section 10.3 provides more details.
String.format("%y", 7); // error: invalid format string
String.format("%d", "a string"); // error: invalid argument type for %d
String.format("%d %s", 7); // error: missing argument for %s
String.format("%d", 7, 3); // warning: unused argument 3
String.format("{0}", 7); // warning: unused argument 7, because {0} is wrong syntax
To run the Format String Checker, supply the -processor org.checkerframework.checker.formatter.FormatterChecker command-line option to javac.
## 10.1 Formatting terminology
Printf-style formatting takes as an argument a format string and a list of arguments. It produces a new string in which each format specifier has been replaced by the corresponding argument. The format specifier determines how the format argument is converted to a string. A format specifier is introduced by a % character. For example, String.format("The %s is %d.","answer",42) yields "The answer is 42.". "The %s is %d." is the format string, "%s" and "%d" are the format specifiers; "answer" and 42 are format arguments.
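The terminology can be verified directly; this is the section's own example as runnable code:

```java
public class FormatTerminology {
    public static void main(String[] args) {
        // Format string "The %s is %d." with format specifiers %s and %d,
        // and format arguments "answer" and 42.
        String s = String.format("The %s is %d.", "answer", 42);
        assert s.equals("The answer is 42.");
    }
}
```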
## 10.2 Format String Checker annotations
The @Format qualifier on a string type indicates a valid format string. The JDK documentation for the Formatter class explains the requirements for a valid format string. A programmer rarely writes the @Format annotation, as it is inferred for string literals. A programmer may need to write it on fields and on method signatures.
The @Format qualifier is parameterized with a list of conversion categories that impose restrictions on the format arguments. Conversion categories are explained in more detail in Section 10.2.1. The type qualifier for "%d %f" is for example @Format({INT, FLOAT}).
Consider the below printFloatAndInt method. Its parameter must be a format string that can be used in a format method, where the first format argument is “float-like” and the second format argument is “integer-like”. The type of its parameter, @Format({FLOAT, INT}) String, expresses that contract.
void printFloatAndInt(@Format({FLOAT, INT}) String fs) {
System.out.printf(fs, 3.1415, 42);
}
printFloatAndInt("Float %f, Number %d"); // OK
printFloatAndInt("Float %f"); // error
Figure 10.1 shows all the type qualifiers. The annotations other than @Format are only used internally and cannot be written in your code. @InvalidFormat indicates an invalid format string — that is, a string that cannot be used as a format string. For example, the type of "%y" is @InvalidFormat String. @FormatBottom is the type of the null literal. @Unqualified is the default that is applied to strings that are not literals and on which the user has not written a @Format annotation.
### 10.2.1 Conversion Categories
Given a format specifier, only certain format arguments are compatible with it, depending on its “conversion” — its last, or last two, characters. For example, in the format specifier "%d", the conversion d restricts the corresponding format argument to be “integer-like”:
String.format("%d", 5); // OK
String.format("%d", "hello"); // error
Many conversions enforce the same restrictions. A set of restrictions is represented as a conversion category. For example, the “integer-like” restriction is the conversion category INT. The following conversion categories are defined in the ConversionCategory enumeration:
GENERAL imposes no restrictions on a format argument’s type. Applicable for conversions b, B, h, H, s, S.
CHAR requires that a format argument represents a Unicode character. Specifically, char, Character, byte, Byte, short, and Short are allowed. int or Integer are allowed if Character.isValidCodePoint(argument) would return true for the format argument. (The Format String Checker permits any int or Integer without issuing a warning or error — see Section 10.3.2.) Applicable for conversions c, C.
INT requires that a format argument represents an integral type. Specifically, byte, Byte, short, Short, int and Integer, long, Long, and BigInteger are allowed. Applicable for conversions d, o, x, X.
FLOAT requires that a format argument represents a floating-point type. Specifically, float, Float, double, Double, and BigDecimal are allowed. Surprisingly, integer values are not allowed. Applicable for conversions e, E, f, g, G, a, A.
TIME requires that a format argument represents a date or time. Specifically, long, Long, Calendar, and Date are allowed. Applicable for conversions t, T.
UNUSED imposes no restrictions on a format argument. This is the case if a format argument is not used as replacement for any format specifier. "%2$s" for example ignores the first format argument.
Further, all conversion categories accept null.
The same format argument may serve as a replacement for multiple format specifiers. Until now, we have assumed that the format specifiers simply consume format arguments left to right. But there are two other ways for a format specifier to select a format argument:
• n$ specifies a one-based index n. In the format string "%2$s", the format specifier selects the second format argument.
• The < flag references the format argument that was used by the previous format specifier. In the format string "%d %<d" for example, both format specifiers select the first format argument.
In the following example, the format argument must be compatible with both conversion categories, and can therefore be neither a Character nor a long.
format("Char %1$c, Int %1$d", (int) 42); // OK
format("Char %1$c, Int %1$d", new Character((char) 42)); // error
format("Char %1$c, Int %1$d", (long) 42); // error
Only three additional conversion categories are needed to represent all possible intersections of the previously-mentioned conversion categories:
NULL is used if no object of any type can be passed as parameter. In this case, the only legal value is null. The format string "%1$f %1$c", for example, requires that the first format argument be null. Passing a value such as 4 or 4.2 would lead to an exception.
CHAR_AND_INT is used if a format argument is restricted by a CHAR and an INT conversion category (CHAR ∩ INT).
INT_AND_TIME is used if a format argument is restricted by an INT and a TIME conversion category (INT ∩ TIME).
All other intersections lead to already existing conversion categories. For example, GENERAL ∩ CHAR = CHAR and UNUSED ∩ GENERAL = GENERAL.
Figure 10.2 summarizes the subset relationship among all conversion categories.
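The argument-selection rules and category intersections described above can be exercised at run time in plain Java:

```java
public class SpecifierSelection {
    public static void main(String[] args) {
        // n$ selects the n-th format argument (one-based).
        assert String.format("%2$s", "first", "second").equals("second");

        // The < flag reuses the previous specifier's argument.
        assert String.format("%d %<d", 42).equals("42 42");

        // The same argument can feed specifiers with different conversion
        // categories; (int) 42 satisfies both CHAR (code point of '*') and INT.
        assert String.format("Char %1$c, Int %1$d", 42).equals("Char *, Int 42");
    }
}
```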
Here are the subtyping rules among different @Format qualifiers. It is legal to:
• use a format string with a weaker (less restrictive) conversion category than required.
• use a format string with fewer format specifiers than required, but a warning is issued.
The following example shows the subtyping rules in action:
@Format({FLOAT, INT}) String f;
f = "%f %d"; // OK
f = "%s %d"; // OK, %s is weaker than %f
f = "%f"; // warning: last argument is ignored
f = "%f %d %s"; // error: too many arguments
f = "%d %d"; // error: %d is not weaker than %f
String.format(f, 0.8, 42);
## 10.3 What the Format String Checker checks
If the Format String Checker issues no errors, it provides the following guarantees:
1. The following guarantees hold for every format method invocation:
(a) The format method’s first parameter (or second if a Locale is provided) is a valid format string (or null).
(b) A warning is issued if one of the format string’s conversion categories is UNUSED.
(c) None of the format string’s conversion categories is NULL.
2. If the format arguments are passed to the format method as varargs, the Format String Checker guarantees the following additional properties:
(a) No fewer format arguments are passed than required by the format string.
(b) A warning is issued if more format arguments are passed than required by the format string.
(c) Every format argument’s type satisfies its conversion category’s restrictions.
3. If the format arguments are passed to the format method as an array, a warning is issued by the Format String Checker.
Following are examples for every guarantee:
String.format("%d", 42); // OK
String.format(Locale.GERMAN, "%d", 42); // OK
String.format(new Object()); // error (1a)
String.format("%y"); // error (1a)
String.format("%2$s", "unused", "used"); // warning (1b)
String.format("%1$d %1$f", 5.5); // error (1c)
String.format("%1$d %1$f %d", null, 6); // error (1c)
String.format("%s"); // error (2a)
String.format("%s", "used", "ignored"); // warning (2b)
String.format("%c",4.2); // error (2c)
String.format("%c", (String)null); // error (2c)
String.format("%1$d %1$f", new Object[]{1}); // warning (3)
String.format("%s", new Object[]{"hello"}); // warning (3)
### 10.3.1 Possible false alarms
There are three cases in which the Format String Checker may issue a warning or error, even though the code cannot fail at run time. (These are in addition to the general conservatism of a type system: code may be correct because of application invariants that are not captured by the type system.) In each of these cases, you can rewrite the code, or you can manually check it and write a @SuppressWarnings annotation if you can reason that the code is correct.
Case 1(b): Unused format arguments. It is legal to provide more arguments than are required by the format string; Java ignores the extras. However, this is an uncommon case. In practice, a mismatch between the number of format specifiers and the number of format arguments is usually an error.
Case 1(c): Format arguments that can only be null. It is legal to write a format string that permits only null arguments and throws an exception for any other argument. An example is String.format("%1$d %1$f", null). The Format String Checker forbids such a format string. If you should ever need such a format string, simply replace the problematic format specifier with "null". For example, you would replace the call above by String.format("null null").
Case 3: Array format arguments. The Format String Checker performs no analysis of arrays, only of varargs invocations. It is better style to use varargs when possible.
### 10.3.2 Possible missed alarms
The Format String Checker helps prevent bugs by detecting, at compile time, which invocations of format methods will fail. While the Format String Checker finds most of these invocations, there are cases in which a format method call will fail even though the Format String Checker issued neither errors nor warnings. These cases are:
1. The format string is null. Use the Nullness Checker to prevent this.
2. A format argument’s toString method throws an exception.
3. A format argument implements the Formattable interface and throws an exception in the formatTo method.
4. A format argument’s conversion category is CHAR or CHAR_AND_INT, and the passed value is an int or Integer, and Character.isValidCodePoint(argument) returns false.
The following examples illustrate these limitations:
class A {
public String toString() {
throw new Error();
}
}
class B implements Formattable {
public void formatTo(Formatter fmt, int f,
int width, int precision) {
throw new Error();
}
}
// The checker issues no errors or warnings for the
// following illegal invocations of format methods.
String.format(null); // NullPointerException (1)
String.format("%s", new A()); // Error (2)
String.format("%s", new B()); // Error (3)
String.format("%c", (int)-1); // IllegalFormatCodePointException (4)
## 10.4 Implicit qualifiers
As described in Section 25.3, the Format String Checker adds implicit qualifiers, reducing the number of annotations that must appear in your code. The checker implicitly adds the @Format qualifier with the appropriate conversion categories to any String literal that is a valid format string.
## 10.5 FormatMethod
Your project may contain methods that forward their arguments to a format method. Consider for example the following log method:
@FormatMethod
void log(String format, Object... args) {
if (enabled) {
logfile.print(indent_str);
logfile.printf(format , args);
}
}
By attaching a @FormatMethod annotation to such a method, you can instruct the Format String Checker to check every invocation of the method. This check is analogous to the check done on every invocation of built-in format methods like String.format.
## 10.6 Testing whether a format string is valid
The Format String Checker automatically determines whether each String literal is a valid format string or not. When a string is computed or is obtained from an external resource, then the string must be trusted or tested.
One way to test a string is to call the FormatUtil.asFormat method to check whether the format string is valid and its format specifiers match certain conversion categories. If this is not the case, asFormat raises an exception. Your code should catch this exception and handle it gracefully.
The following code examples may fail at run time, and therefore they do not type check. The type-checking errors are indicated by comments.
Scanner s = new Scanner(System.in);
String fs = s.next();
System.out.printf(fs, "hello", 1337); // error: fs is not known to be a format string
Scanner s = new Scanner(System.in);
@Format({GENERAL, INT}) String fs = s.next(); // error: fs is not known to have the given type
System.out.printf(fs, "hello", 1337); // OK
The following variant does not throw a run-time error, and therefore passes the type-checker:
Scanner s = new Scanner(System.in);
String format = s.next();
try {
format = FormatUtil.asFormat(format, GENERAL, INT);
} catch (IllegalFormatException e) {
// Replace this by your own error handling.
System.err.println("The user entered the following invalid format string: " + format);
System.exit(2);
}
// format is now known to be of type: @Format({GENERAL, INT}) String
System.out.printf(format,"hello",1337);
A potential disadvantage of using the FormatUtil class is that your code becomes dependent on the Checker Framework at run time as well as at compile time. You can avoid this run-time dependence by copying the FormatUtil class into your own code.
# Chapter 11 Internationalization Format String Checker (I18n Format String Checker)
The Internationalization Format String Checker, or I18n Format String Checker, prevents use of incorrect i18n format strings.
If the I18n Format String Checker issues no warnings or errors, then MessageFormat.format will raise no error at run time. “I18n” is short for “internationalization” because there are 18 characters between the “i” and the “n”.
Here are examples of errors that the I18n Format String Checker detects at compile time.
// Warning: the second argument is missing.
MessageFormat.format("{0} {1}", 3.1415);
// String argument cannot be formatted as Time type.
MessageFormat.format("{0, time}", "my string");
// Invalid format string: unknown format type: thyme.
MessageFormat.format("{0, thyme}", new Date());
// Invalid format string: missing the right brace.
MessageFormat.format("{0", new Date());
// Invalid format string: the argument index is not an integer.
MessageFormat.format("{0.2, time}", new Date());
// Invalid format string: "#.#.#" subformat is invalid.
MessageFormat.format("{0, number, #.#.#}", 3.1415);
For instructions on how to run the Internationalization Format String Checker, see Section 11.5.
The Internationalization Checker or I18n Checker (Chapter 12.2) has a different purpose. It verifies that your code is properly internationalized: any user-visible text should be obtained from a localization resource and all keys exist in that resource.
## 11.1 Internationalization Format String Checker annotations
The MessageFormat documentation specifies the syntax of the i18n format string.
These are the qualifiers that make up the I18n Format String type system. Figure 11.1 shows their subtyping relationships.
@I18nFormat
represents a valid i18n format string. For example, @I18nFormat({GENERAL, NUMBER, UNUSED, DATE}) is a legal type for "{0}{1, number} {3, date}", indicating that when the format string is used, the first argument should be of GENERAL conversion category, the second argument should be of NUMBER conversion category, and so on. Conversion categories such as GENERAL are described in Section 11.2.
@I18nFormatFor
indicates that the qualified type is a valid i18n format string for use with some array of values. For example, @I18nFormatFor("#2") indicates that the string can be used to format the contents of the second parameter array. The argument is a Java expression whose syntax is explained in Section 25.5. An example of its use is:
static void method(@I18nFormatFor("#2") String format, Object... args) {
// the body may use the parameters like this:
MessageFormat.format(format, args);
}
method("{0, number} {1}", 3.1415, "A string"); // OK
// error: The string "hello" cannot be formatted as a Number.
method("{0, number} {1}", "hello", "goodbye");
@I18nInvalidFormat
represents an invalid i18n format string. Programmers are not allowed to write this annotation. It is only used internally by the type checker.
@I18nUnknownFormat
represents any string. The string might or might not be a valid i18n format string. Programmers are not allowed to write this annotation.
@I18nFormatBottom
indicates that the value is definitely null. Programmers are not allowed to write this annotation.
## 11.2 Conversion categories
In a message string, the optional second element within the curly braces is called a format type and must be one of number, date, time, or choice. These four format types correspond to different conversion categories: date and time correspond to DATE in the conversion categories figure, and choice corresponds to NUMBER. The format type restricts what arguments are legal. For example, a date argument is not compatible with the number format type; i.e., MessageFormat.format("{0, number}", new Date()) will throw an exception.
The I18n Checker represents the possible arguments via conversion categories. A conversion category defines a set of restrictions or a subtyping rule.
Figure 11.2 summarizes the subset relationship among all conversion categories.
Here are the subtyping rules among different @I18nFormat qualifiers. It is legal to:
• use a format string with a weaker (less restrictive) conversion category than required.
• use a format string with fewer format specifiers than required, but a warning is issued.
The following example shows the subtyping rules in action:
@I18nFormat({NUMBER, NUMBER}) String format;
// OK.
format = "{0, number, #.#} {1, number}";
// OK, GENERAL is weaker (less restrictive) than NUMBER.
format = "{0, number} {1}";
// Error, the right-hand-side is stronger (more restrictive) than the left-hand-side's type.
format = "{0} {1} {2}";
The conversion categories are:
UNUSED
indicates an unused argument. For example, in MessageFormat.format("{0, number} {2, number}", 3.14, "Hello", 2.718), the second argument "Hello" is unused. Thus, the conversion categories for the format string "{0, number} {2, number}" are (NUMBER, UNUSED, NUMBER).
GENERAL
means that any value can be supplied as an argument.
DATE
is applicable for date, time, and number types. An argument needs to be of Date, Time, or Number type or a subclass of them, including Timestamp and the classes listed immediately below.
NUMBER
means that the argument needs to be of Number type or a subclass: Number, AtomicInteger, AtomicLong, BigDecimal, BigInteger, Byte, Double, Float, Integer, Long, Short.
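These categories mirror what MessageFormat itself accepts at run time; a small sketch (the class name and values are ours):

```java
import java.text.MessageFormat;

public class CategoryDemo {
    public static void main(String[] args) {
        // {0,number} demands a NUMBER-category argument; {1} is GENERAL and accepts anything.
        String s = MessageFormat.format("At {0,number} units, {1} wins", 7, "Alice");
        System.out.println(s);

        // Passing a non-Number where NUMBER is required fails only at run time --
        // exactly the kind of error the checker moves to compile time.
        try {
            MessageFormat.format("{0,number}", "not a number");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```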
## 11.3 What the Internationalization Format String Checker checks
The Internationalization Format String Checker checks calls to the i18n formatting method MessageFormat.format and guarantees the following:
1. The checker issues a warning for the following cases:
1. There are missing arguments from what is required by the format string.
MessageFormat.format("{0, number} {1, number}", 3.14); // Output: 3.14 {1}
2. More arguments are passed than what is required by the format string.
MessageFormat.format("{0, number}", 1, new Date());
MessageFormat.format("{0, number} {0, number}", 3.14, 3.14);
This does not cause an error at run time, but it often indicates a programmer mistake. If it is intentional, then you should suppress the warning (see Chapter 26).
3. Some argument is an array of objects.
MessageFormat.format("{0, number} {1}", array);
The checker cannot verify whether the format string is valid, so the checker conservatively issues a warning. This is a limitation of the Internationalization Format String Checker.
2. The checker issues an error for the following cases:
1. The format string is invalid.
• Unmatched braces.
MessageFormat.format("{0, time", new Date());
• The argument index is not an integer or is negative.
MessageFormat.format("{0.2, time}", new Date());
MessageFormat.format("{-1, time}", new Date());
• Unknown format type.
MessageFormat.format("{0, foo}", 3.14);
• Missing a format style required for choice format.
MessageFormat.format("{0, choice}", 3.14);
• Wrong format style.
MessageFormat.format("{0, time, number}", 3.14);
• Invalid subformats.
MessageFormat.format("{0, number, #.#.#}", 3.14)
2. Some argument’s type doesn’t satisfy its conversion category.
MessageFormat.format("{0, number}", new Date());
The Checker also detects illegal assignments: assigning a non-format-string or an incompatible format string to a variable declared as containing a specific type of format string. For example,
@I18nFormat({GENERAL, NUMBER}) String format;
// OK.
format = "{0} {1, number}";
// OK, GENERAL is weaker (less restrictive) than NUMBER.
format = "{0} {1}";
// OK, it is legal to have fewer arguments than required (less restrictive).
// But the warning will be issued instead.
format = "{0}";
// Error, the format string is stronger (more restrictive) than the specifiers.
format = "{0} {1} {2}";
// Error, the format string is more restrictive. NUMBER is a subtype of GENERAL.
format = "{0, number} {1, number}";
## 11.4 Resource files
A programmer rarely writes an i18n format string literally. (The examples in this chapter show that for simplicity.) Rather, the i18n format strings are read from a resource file. The program chooses a resource file at run time depending on the locale (for example, different resource files for English and Spanish users).
For example, suppose that the resource1.properties file contains
key1 = The number is {0, number}.
Then code such as the following:
String formatPattern = ResourceBundle.getBundle("resource1").getString("key1");
System.out.println(MessageFormat.format(formatPattern, 2.2361));
will output “The number is 2.2361.” A different resource file would contain key1 = El número es {0, number}.
When you run the I18n Format String Checker, you need to indicate which resource file it should check. If you change the resource file or use a different resource file, you should re-run the checker to ensure that you did not make an error. The I18n Format String Checker supports two types of resource files: ResourceBundles and property files. The example above shows use of resource bundles. For more about checking property files, see Chapter 12.
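The resource-file round trip can be sketched in memory (the in-memory Properties stand-in is ours, modeled on resource1.properties above):

```java
import java.io.StringReader;
import java.text.MessageFormat;
import java.util.Properties;

public class ResourceDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for loading resource1.properties from disk.
        Properties bundle = new Properties();
        bundle.load(new StringReader("key1 = The number is {0,number}."));

        // Look up the i18n format string by key, then format with it.
        String pattern = bundle.getProperty("key1");
        System.out.println(MessageFormat.format(pattern, 5));
    }
}
```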
## 11.5 Running the Internationalization Format Checker
The checker can be invoked by running one of the following commands (with the whole command on one line).
• Using ResourceBundles:
javac -processor org.checkerframework.checker.i18nformatter.I18nFormatterChecker -Abundlenames=MyResource MyFile.java
• Using property files:
javac -processor org.checkerframework.checker.i18nformatter.I18nFormatterChecker -Apropfiles=MyResource.properties MyFile.java
• Not using a property file. Use this if the programmer hard-coded the format patterns without loading them from a property file.
javac -processor org.checkerframework.checker.i18nformatter.I18nFormatterChecker MyFile.java
## 11.6 Testing whether a string has an i18n format type
In the case that the checker cannot infer the i18n format type of a string, you can use the I18nFormatUtil.hasFormat method to define the type of the string in the scope of a conditional statement.
I18nFormatUtil.hasFormat
returns true if the given string has the given i18n format type.
For an example, see Section 11.7.
## 11.7 Examples of using the Internationalization Format Checker
• Using MessageFormat.format.
// suppose the bundle "MyResource" contains: key1={0, number} {1, date}
String value = ResourceBundle.getBundle("MyResource").getString("key1");
MessageFormat.format(value, 3.14, new Date()); // OK
// error: incompatible types in argument; found String, expected number
MessageFormat.format(value, "Text", new Date());
• Using the I18nFormatUtil.hasFormat method to check whether a format string has particular conversion categories.
void test1(String format) {
if (I18nFormatUtil.hasFormat(format, I18nConversionCategory.GENERAL,
I18nConversionCategory.NUMBER)) {
MessageFormat.format(format, "Hello", 3.14); // OK
// error: incompatible types in argument; found String, expected number
MessageFormat.format(format, "Hello", "Bye");
// error: missing arguments; expected 2 but 1 given
MessageFormat.format(format, "Bye");
// error: too many arguments; expected 2 but 3 given
MessageFormat.format(format, "A String", 3.14, 3.14);
}
}
• Using @I18nFormatFor to ensure that an argument is a particular type of format string.
static void method(@I18nFormatFor("#2") String f, Object... args) {...}
// OK, MessageFormat.format(...) would return "3.14 Hello greater than one"
method("{0, number} {1} {2, choice,0#zero|1#one|1<greater than one}",
3.14, "Hello", 100);
// error: incompatible types in argument; found String, expected number
method("{0, number} {1}", "Bye", "Bye");
• Annotating a string with @I18nFormat.
@I18nFormat({I18nConversionCategory.DATE}) String s1;
s1 = "{0}";
s1 = "{0, number}"; // error: incompatible types in assignment
# Chapter 12 Property File Checker
The Property File Checker ensures that a property file or resource bundle (both of which act like maps from keys to values) is only accessed with valid keys. Accesses without a valid key either return null or a default value, which can lead to a NullPointerException or hard-to-trace behavior. The Property File Checker (Section 12.1) ensures that the used keys are found in the corresponding property file or resource bundle.
We also provide two specialized checkers. An Internationalization Checker (Section 12.2) verifies that code is properly internationalized. A Compiler Message Key Checker (Section 12.3) verifies that compiler message keys used in the Checker Framework are declared in a property file. The latter is an example of a simple specialization of the Property File Checker, and the Checker Framework source code shows how it is used.
It is easy to customize the property key checker for other related purposes. Take a look at the source code of the Compiler Message Key Checker and adapt it for your purposes.
## 12.1 General Property File Checker
The general Property File Checker ensures that a resource key is located in a specified property file or resource bundle.
The annotation @PropertyKey indicates that the qualified String is a valid key found in the property file or resource bundle. You do not need to annotate String literals. The checker looks up every String literal in the specified property file or resource bundle, and adds annotations as appropriate.
If you pass a String variable to be eventually used as a key, you also need to annotate all these variables with @PropertyKey.
The checker can be invoked by running the following command:
javac -processor org.checkerframework.checker.propkey.PropertyKeyChecker
-Abundlenames=MyResource MyFile.java ...
You must specify the resources, which map keys to strings. The checker supports two types of resource: resource bundles and property files. You can specify one or both of the following two command-line options:
1. -Abundlenames=resource_name
resource_name is the name of the resource to be used with ResourceBundle.getBundle(). The checker uses the default Locale and ClassLoader in the compilation system. (For a tutorial about ResourceBundles, see https://docs.oracle.com/javase/tutorial/i18n/resbundle/concept.html.) Multiple resource bundle names are separated by colons ’:’.
2. -Apropfiles=prop_file
prop_file is the name of a properties file that maps keys to values. The file format is described in the Javadoc for Properties.load(). Multiple files are separated by colons ’:’.
## 12.2 Internationalization Checker
The Internationalization Checker, or I18n Checker, verifies that your code is properly internationalized. Internationalization is the process of designing software so that it can be adapted to different languages and locales without needing to change the code. Localization is the process of adapting internationalized software to specific languages and locales.
Internationalization is sometimes called i18n, because the word starts with “i”, ends with “n”, and has 18 characters in between. Localization is similarly sometimes abbreviated as l10n.
The checker focuses on one aspect of internationalization: user-visible strings should be presented in the user’s own language, such as English, French, or German. This is achieved by looking up keys in a localization resource, which maps keys to user-visible strings. For instance, one version of a resource might map "CANCEL_STRING" to "Cancel", and another version of the same resource might map "CANCEL_STRING" to "Abbrechen".
There are other aspects to localization, such as formatting of dates (3/5 vs. 5/3 for March 5), that the checker does not check.
The Internationalization Checker verifies these two properties:
1. Any user-visible text should be obtained from a localization resource. For example, String literals should not be output to the user.
2. When looking up keys in a localization resource, the key should exist in that resource. This check catches incorrect or misspelled localization keys.
If you use the Internationalization Checker, you may want to also use the Internationalization Format String Checker, or I18n Format String Checker (Chapter 11). It verifies that internationalization format strings are well-formed and used with arguments of the proper type, so that MessageFormat.format does not fail at run time.
### 12.2.1 Internationalization annotations
The Internationalization Checker supports two annotations:
1. @Localized: indicates that the qualified String is a message that has been localized and/or formatted with respect to the used locale.
2. @LocalizableKey: indicates that the qualified String or Object is a valid key found in the localization resource. This annotation is a specialization of the @PropertyKey annotation, that gets checked by the general Property Key Checker.
You may need to add the @Localized annotation to more methods in the JDK or other libraries, or in your own code.
### 12.2.2 Running the Internationalization Checker
The Internationalization Checker can be invoked by running the following command:
javac -processor org.checkerframework.checker.i18n.I18nChecker -Abundlenames=MyResource MyFile.java ...
You must specify the localization resource, which maps keys to user-visible strings. Like the general Property Key Checker, the Internationalization Checker supports two types of localization resource: ResourceBundles using the -Abundlenames=resource_name option or property files using the -Apropfiles=prop_file option.
## 12.3 Compiler Message Key Checker
The Checker Framework uses compiler message keys to output error messages. These keys are substituted by localized strings for user-visible error messages. Using keys instead of the localized strings in the source code enables easier testing, as the expected error keys can stay unchanged while the localized strings can still be modified. We use the Compiler Message Key Checker to ensure that all internal keys are correctly localized. Instead of using the Property File Checker, we use a specialized checker, giving us more precise documentation of the intended use of Strings.
The single annotation used by this checker is @CompilerMessageKey. The Checker Framework is completely annotated; for example, class org.checkerframework.framework.source.Result uses @CompilerMessageKey in methods failure and warning. For most users of the Checker Framework there will be no need to annotate any Strings, as the checker looks up all String literals and adds annotations as appropriate.
The Compiler Message Key Checker can be invoked by running the following command:
javac -processor org.checkerframework.checker.compilermsgs.CompilerMessagesChecker
-Apropfiles=messages.properties MyFile.java ...
You must specify the resource, which maps compiler message keys to user-visible strings. The checker supports the same options as the general property key checker. Within the Checker Framework we only use property files, so the -Apropfiles=prop_file option should be used.
# Chapter 13 Signature Checker for string representations of types
The Signature String Checker, or Signature Checker for short, verifies that string representations of types and signatures are used correctly.
Java defines multiple different string representations for types (see Section 13.1), and it is easy to misuse them or to miss bugs during testing. Using the wrong string format leads to a run-time exception or an incorrect result. This is a particular problem for fully qualified and binary names, which are nearly the same — they differ only for nested classes and arrays.
## 13.1 Signature annotations
Java defines four main formats for the string representation of a type. There is an annotation for each of these representations. Figure 13.1 shows how they are related.
@FullyQualifiedName
A fully qualified name (JLS §6.7), such as package.Outer.Inner, is used in Java code and in messages to the user.
@BinaryName
A binary name (JLS §13.1), such as package.Outer$Inner, is the representation of a type in its own .class file.
@FieldDescriptor
A field descriptor (JVMS §4.3.2), such as Lpackage/Outer$Inner;, is used in a .class file’s constant pool, for example to refer to other types; it abbreviates primitives and arrays, and uses internal form (JVMS §4.2) for class names.
@ClassGetName
The type representation used by the Class.getName(), Class.forName(String), and Class.forName(String, boolean, ClassLoader) methods. This format is: for any non-array type, the binary name; and for any array type, a format similar to the field descriptor, but using “.” where the field descriptor uses “/”.
@SourceName
A source name is a string that is a valid fully qualified name and a valid binary name. A programmer should never or rarely use this — you should know how you intend to use a given variable. The checker infers it for literal strings such as "package.MyClass" that are valid in both formats, and you might occasionally see it in an error message. Likewise, you might see other types such as SourceNameForNonArray, BinaryNameForNonArray, and FieldDescriptorForArray, but you generally should not use them either.
Java also defines other string formats for a type: simple names (JLS §6.2), qualified names (JLS §6.2), and canonical names (JLS §6.7). The Signature Checker does not include annotations for these.
Here are examples of the supported formats:
| fully-qualified name | binary name | Class.getName | field descriptor |
|---|---|---|---|
| int | int | int | I |
| int[][] | int[][] | [[I | [[I |
| MyClass | MyClass | MyClass | LMyClass; |
| MyClass[] | MyClass[] | [LMyClass; | [LMyClass; |
| java.lang.Integer | java.lang.Integer | java.lang.Integer | Ljava/lang/Integer; |
| java.lang.Integer[] | java.lang.Integer[] | [Ljava.lang.Integer; | [Ljava/lang/Integer; |
| package.Outer.Inner | package.Outer$Inner | package.Outer$Inner | Lpackage/Outer$Inner; |
| package.Outer.Inner[] | package.Outer$Inner[] | [Lpackage.Outer$Inner; | [Lpackage/Outer$Inner; |
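The Class.getName column can be checked directly at run time; a quick sketch (the class name is ours):

```java
public class GetNameDemo {
    public static void main(String[] args) {
        // Non-array types use the binary name; array types use a
        // descriptor-like form with '.' instead of '/'.
        System.out.println(Integer.class.getName());    // java.lang.Integer
        System.out.println(Integer[].class.getName());  // [Ljava.lang.Integer;
        System.out.println(int[][].class.getName());    // [[I
    }
}
```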
Java defines one format for the string representation of a method signature:
@MethodDescriptor
A method descriptor (JVMS §4.3.3) identifies a method’s signature (its parameter and return types), just as a field descriptor identifies a type. The method descriptor for the method
Object mymethod(int i, double d, Thread t)
is
(IDLjava/lang/Thread;)Ljava/lang/Object;
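That descriptor can be reconstructed with java.lang.invoke.MethodType (the class name is ours; the signature is the one from the example above):

```java
import java.lang.invoke.MethodType;

public class MethodDescriptorDemo {
    public static void main(String[] args) {
        // Signature from the example: Object mymethod(int i, double d, Thread t)
        MethodType mt = MethodType.methodType(
                Object.class, int.class, double.class, Thread.class);
        System.out.println(mt.toMethodDescriptorString());
        // (IDLjava/lang/Thread;)Ljava/lang/Object;
    }
}
```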
## 13.2 What the Signature Checker checks
Certain methods in the JDK, such as Class.forName, are annotated indicating the type they require. The Signature Checker ensures that clients call them with the proper arguments. The Signature Checker does not reason about string operations such as concatenation, substring, parsing, etc.
To run the Signature Checker, supply the -processor org.checkerframework.checker.signature.SignatureChecker command-line option to javac.
# Chapter 14 GUI Effect Checker
One of the most prevalent GUI-related bugs is invalid UI update or invalid thread access: accessing the UI directly from a background thread.
Most GUI frameworks (including Android, AWT, Swing, and SWT) create a single distinguished thread — the UI event thread — that handles all GUI events and updates. To keep the interface responsive, any expensive computation should be offloaded to background threads (also called worker threads). If a background thread accesses a UI element such as a JPanel (by calling a JPanel method or reading/writing a field of JPanel), the GUI framework raises an exception that terminates the program. To fix the bug, the background thread should send a request to the UI thread to perform the access on its behalf.
It is difficult for a programmer to remember which methods may be called on which thread(s). The GUI Effect Checker solves this problem. The programmer annotates each method to indicate whether:
• It accesses no UI elements (and may run on any thread); such a method is said to have the “safe effect”.
• It may access UI elements (and must run on the UI thread); such a method is said to have the “UI effect”.
The GUI Effect Checker verifies these effects and statically enforces that UI methods are only called from the correct thread. A method with the safe effect is prohibited from calling a method with the UI effect.
For example, the effect system can reason about when method calls must be dispatched to the UI thread via a message such as Display.syncExec.
@SafeEffect
void updateLabel(final JLabel l) { // enclosing method reconstructed; the name is illustrative
  l.setText("Foo"); // Error: calling a @UIEffect method from a @SafeEffect method
  Display.syncExec(new @UI Runnable() {
    @UIEffect // inferred by default
    public void run() {
      l.setText("Bar"); // OK: accessing JLabel from code run on the UI thread
    }
  });
}
The GUI Effect Checker’s annotations fall into three categories:
• effect annotations on methods (Section 14.1),
• class or package annotations controlling the default effect (Section 14.4), and
• effect-polymorphism: code that works for both the safe effect and the UI effect (Section 14.5).
## 14.1 GUI effect annotations
There are two primary GUI effect annotations:
• @SafeEffect is a method annotation marking code that must not access UI objects.
• @UIEffect is a method annotation marking code that may access UI objects. Most UI object methods (e.g., methods of JPanel) are annotated as @UIEffect.
@SafeEffect is a sub-effect of @UIEffect, in that it is always safe to call a @SafeEffect method anywhere it is permitted to call a @UIEffect method. We write this relationship as
@SafeEffect ≺ @UIEffect
## 14.2 What the GUI Effect Checker checks
The GUI Effect Checker ensures that only the UI thread accesses UI objects. This prevents GUI errors such as invalid UI update and invalid thread access.
The GUI Effect Checker issues errors in the following cases:
• A @UIEffect method is invoked by a @SafeEffect method.
• Method declarations violate subtyping restrictions: a supertype declares a @SafeEffect method, and a subtype annotates an overriding version as @UIEffect.
Additionally, if a method implements or overrides a method in two supertypes (two interfaces, or an interface and parent class), and those supertypes give different effects for the methods, the GUI Effect Checker issues a warning (not an error).
## 14.3 Running the GUI Effect Checker
The GUI Effect Checker can be invoked by running the following command:
javac -processor org.checkerframework.checker.guieffect.GuiEffectChecker MyFile.java ...
## 14.4 Annotation defaults
The default method annotation is @SafeEffect, since most code in most programs is not related to the UI. This also means that typically, code that is unrelated to the UI need not be annotated at all.
The GUI Effect Checker provides three primary ways to change the default method effect for a class or package:
• @UIType is a class annotation that makes the effect for unannotated methods in that class default to @UIEffect. (See also @UI in Section 14.5.2.)
• @UIPackage is a package annotation, that makes the effect for unannotated methods in that package default to @UIEffect. It is not transitive; a package nested inside a package marked @UIPackage does not inherit the changed default.
• @SafeType is a class annotation that makes the effect for unannotated methods in that class default to @SafeEffect. Because @SafeEffect is already the default effect, @SafeType is only useful for class types inside a package marked @UIPackage.
There is one other place where the default annotation is not automatically @SafeEffect: anonymous inner classes. Since anonymous inner classes exist primarily for brevity, it would be unfortunate to spoil that brevity with extra annotations. By default, an anonymous inner class method that overrides or implements a method of the parent type inherits that method’s effect. For example, an anonymous inner class implementing an interface with method @UIEffect void m() need not explicitly annotate its implementation of m(); the implementation will inherit the parent’s effect. Methods of the anonymous inner class that are not inherited from a parent type follow the standard defaulting rules.
## 14.5 Polymorphic effects
Sometimes a type is reused for both UI-specific and background-thread work. A good example is the Runnable interface, which is used both for creating new background threads (in which case the run() method must have the @SafeEffect) and for sending code to the UI thread to execute (in which case the run() method may have the @UIEffect). But the declaration of Runnable.run() may have only one effect annotation in the source code. How do we reconcile these conflicting use cases?
Effect-polymorphism permits a type to be used for both UI and non-UI purposes. It is similar to Java’s generics in that you define, then use, the effect-polymorphic type. Recall that to define a generic type, you write a type parameter such as <T> and use it in the body of the type definition; for example, class List<T> { ... T get() {...} ... }. To instantiate a generic type, you write its name along with a type argument; for example, List<Date> myDates;.
### 14.5.1 Defining an effect-polymorphic type
To declare that a class is effect-polymorphic, annotate its definition with @PolyUIType. To use the effect variable in the class body, annotate a method with @PolyUIEffect. It is an error to use @PolyUIEffect in a class that is not effect-polymorphic.
Consider the following example:
@PolyUIType
public interface Runnable {
@PolyUIEffect
void run();
}
This declares that class Runnable is parameterized over one generic effect, and that when Runnable is instantiated, the effect argument will be used as the effect for the run method.
### 14.5.2 Using an effect-polymorphic type
To instantiate an effect-polymorphic type, write one of these three type qualifiers before a use of the type:
• @AlwaysSafe instantiates the type’s effect to @SafeEffect.
• @UI instantiates the type’s effect to @UIEffect. Additionally, it changes the default method effect for the class to @UIEffect.
• @PolyUI instantiates the type’s effect to @PolyUIEffect for the same instantiation as the current (containing) class. For example, this is the qualifier of the receiver this inside a method of a @PolyUIType class, which is how one method of an effect-polymorphic class may call an effect-polymorphic method of the same class.
As an example:
@AlwaysSafe Runnable s = ...; s.run(); // s.run() is @SafeEffect
@PolyUI Runnable p = ...; p.run(); // p.run() is @PolyUIEffect (context-dependent)
@UI Runnable u = ...; u.run(); // u.run() is @UIEffect
It is an error to apply an effect instantiation qualifier to a type that is not effect-polymorphic.
### 14.5.3 Subclassing a specific instantiation of an effect-polymorphic type
Sometimes you may wish to subclass a specific instantiation of an effect-polymorphic type, just as you may extend List<String>.
To do this, simply place the effect instantiation qualifier by the name of the type you are defining, e.g.:
@UI
public class UIRunnable implements Runnable {...}
@AlwaysSafe
public class SafeRunnable implements Runnable {...}
The GUI Effect Checker will automatically apply the qualifier to all classes and interfaces the class being defined extends or implements. (This means you cannot write a class that is a subtype of both an @AlwaysSafe Foo and a @UI Bar, but this has not been a problem in our experience.)
### 14.5.4 Subtyping with polymorphic effects
With three effect annotations, we must extend the static sub-effecting relationship:
@SafeEffect <: @PolyUIEffect <: @UIEffect
This is the correct sub-effecting relation because it is always safe to call a @SafeEffect method (whether from an effect-polymorphic method or a UI method), and a @UIEffect method may safely call any other method.
This induces a subtyping hierarchy on type qualifiers:
@AlwaysSafe <: @PolyUI <: @UI
This is sound because a method instantiated according to any qualifier will always be safe to call in place of a method instantiated according to one of its super-qualifiers. This allows clients to pass “safer” instances of some object type to a given method.
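The claim that clients may pass "safer" instances can be illustrated with ordinary Java subtyping (a hypothetical sketch; the checker expresses the same idea with qualifiers rather than classes):

```java
// Hypothetical sketch: model @AlwaysSafe <: @UI as a class hierarchy, so a
// "safer" task is substitutable wherever a UI task is accepted.
class UiTask {                   // analogous to a @UI Runnable
    String run() { return "may touch the UI"; }
}

class SafeTask extends UiTask {  // analogous to an @AlwaysSafe Runnable
    @Override
    String run() { return "never touches the UI"; }
}

public class SaferInstanceDemo {
    // The callee is prepared for the worst case: a task that needs the UI thread.
    public static String invokeOnUiThread(UiTask t) {
        return t.run();
    }

    public static void main(String[] args) {
        // Passing the safer instance is permitted by ordinary subtyping.
        System.out.println(invokeOnUiThread(new SafeTask()));
    }
}
```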
## 14.6 References
The ECOOP 2013 paper “JavaUI: Effects for Controlling UI Object Access” includes some case studies on the checker’s efficacy, including descriptions of the relatively few false warnings we encountered. It also contains a more formal description of the effect system. You can obtain the paper at:
# Chapter 15 Units Checker
For many applications, it is important to use the correct units of measurement for primitive types. For example, NASA's Mars Climate Orbiter (cost: $327 million) was lost because of a discrepancy between use of the metric unit Newtons and the imperial measure Pound-force.

The Units Checker ensures consistent usage of units. For example, consider the following code:

@m int meters = 5 * UnitsTools.m;
@s int secs = 2 * UnitsTools.s;
@mPERs int speed = meters / secs;

Due to the annotations @m and @s, the variables meters and secs are guaranteed to contain only values with meters and seconds as units of measurement. Utility class UnitsTools provides constants by which unqualified integers are multiplied to obtain values of the corresponding unit. The assignment of an unqualified value to meters, as in meters = 99, will be flagged as an error by the Units Checker.

The division meters/secs takes the types of the two operands into account and determines that the result is of type meters per second, signified by the @mPERs qualifier. We provide an extensible framework to define the result of operations on units.

## 15.1 Units annotations

The checker currently supports three varieties of units annotations: kind annotations (@Length, @Mass, …), the SI units (@m, @kg, …), and polymorphic annotations (@PolyUnit).

Kind annotations can be used to declare what the expected unit of measurement is, without fixing the particular unit used. For example, one could write a method taking a @Length value, without specifying whether it will take meters or kilometers. The following kind annotations are defined:

@Acceleration @Angle @Area @Current @Length @Luminance @Mass @Speed @Substance @Temperature @Time

For each kind of unit, the corresponding SI unit of measurement is defined:

1. For @Acceleration: meters per second squared @mPERs2
2. For @Angle: radians @radians, and the derived unit degrees @degrees
3. For @Area: the derived units square millimeters @mm2, square meters @m2, and square kilometers @km2
4. For @Current: amperes @A
5. For @Length: meters @m and the derived units millimeters @mm and kilometers @km
6. For @Luminance: candela @cd
7. For @Mass: kilograms @kg and the derived unit grams @g
8. For @Speed: meters per second @mPERs and kilometers per hour @kmPERh
9. For @Substance: moles @mol
10. For @Temperature: kelvin @K and the derived unit Celsius @C
11. For @Time: seconds @s and the derived units minutes @min and hours @h

You may specify SI unit prefixes, using enumeration Prefix. The basic SI units (@s, @m, @g, @A, @K, @mol, @cd) take an optional Prefix enum as argument. For example, to use nanoseconds as the unit, you could use @s(Prefix.nano) as a unit type. You can sometimes use a different annotation instead of a prefix; for example, @mm is equivalent to @m(Prefix.milli).

Class UnitsTools contains a constant for each SI unit. To create a value of a particular unit, multiply an unqualified value by one of these constants. By using static imports, this allows very natural notation; for example, after statically importing UnitsTools.m, the expression 5 * m represents five meters. As all these unit constants are public, static, and final with value one, the compiler will optimize away these multiplications.

The polymorphic annotation @PolyUnit enables you to write a method that takes an argument of any unit type and returns a result of that same type. For more about polymorphic qualifiers, see Section 24.2. For an example of its use, see the @PolyUnit Javadoc.

## 15.2 Extending the Units Checker

You can create new kind annotations and unit annotations that are specific to the particular needs of your project. An easy way to do this is by copying and adapting an existing annotation. (In addition, search for all uses of the annotation's name throughout the Units Checker implementation, to find other code to adapt; read on for details.)
Here is an example of a new unit annotation.

@Documented
@Retention(RetentionPolicy.RUNTIME)
@SubtypeOf( { Time.class } )
@UnitsMultiple(quantity = s.class, prefix = Prefix.nano)
@Target({ ElementType.TYPE_USE, ElementType.TYPE_PARAMETER })
public @interface ns {}

The @SubtypeOf meta-annotation specifies that this annotation introduces an additional unit of time. The @UnitsMultiple meta-annotation specifies that this annotation should be a nano multiple of the basic unit @s: @ns and @s(Prefix.nano) behave equivalently and interchangeably. Most annotation definitions do not have a @UnitsMultiple meta-annotation. Note that all custom annotations must have the @Target(ElementType.TYPE_USE) meta-annotation. See Section 29.3.1.

To take full advantage of the additional unit qualifier, you need to take two additional steps. (1) Provide constants that convert from unqualified types to types that use the new unit. See class UnitsTools for examples (you will need to suppress a checker warning in just those few locations). (2) Put the new unit in relation to existing units. Provide an implementation of the UnitsRelations interface as a meta-annotation to one of the units. See the demonstration in examples/units-extension/ for an example extension that defines Hertz (hz) as scalar per second, and defines an implementation of UnitsRelations to enforce it.

## 15.3 What the Units Checker checks

The Units Checker ensures that unrelated types are not mixed. All types with a particular unit annotation are disjoint from all unannotated types, from all types with a different unit annotation, and from all types with the same unit annotation but a different prefix. Subtyping between the units and the unit kinds is taken into account, as is the @UnitsMultiple meta-annotation.

Multiplying a scalar by a unit type results in the same unit type. The division of a unit type by the same unit type results in the unqualified type.
Multiplying or dividing different unit types, for which no unit relation is known to the system, will result in a MixedUnits type, which is separate from all other units. If you encounter a MixedUnits annotation in an error message, ensure that your operations are performed on correct units or refine your UnitsRelations implementation.

The Units Checker does not change units based on multiplication; for example, if variable mass has the type @kg double, then mass * 1000 has that same type rather than the type @g double. (The Units Checker has no way of knowing whether you intended a conversion, or you were computing the mass of 1000 items. You need to make all conversions explicit in your code, and it's good style to minimize the number of conversions.)

## 15.4 Running the Units Checker

The Units Checker can be invoked by running the following commands.

• If your code uses only the SI units that are provided by the framework, simply invoke the checker:

javac -processor org.checkerframework.checker.units.UnitsChecker MyFile.java ...

• If you define your own units, provide the fully-qualified class names of the annotations through the -Aunits option, using a comma-no-space-separated notation:

javac -Xbootclasspath/p:/full/path/to/myProject/bin:/full/path/to/myLibrary/bin \
  -processor org.checkerframework.checker.units.UnitsChecker \
  -Aunits=myModule.qual.MyUnit,myModule.qual.MyOtherUnit MyFile.java ...

The annotations listed in -Aunits must be accessible to the compiler during compilation, on the classpath. In other words, they must already be compiled (and, typically, be on the javac bootclasspath) before you run the Units Checker with javac. It is not sufficient to supply their source files on the command line.

• You can also provide the fully-qualified paths to a set of directories that contain units qualifiers through the -AunitsDirs option, using a colon-no-space-separated notation.
For example:

javac -Xbootclasspath/p:/full/path/to/myProject/bin:/full/path/to/myLibrary/bin \
  -processor org.checkerframework.checker.units.UnitsChecker \
  -AunitsDirs=/full/path/to/myProject/bin:/full/path/to/myLibrary/bin MyFile.java ...

Note that in these two examples, the compiled class files of the myModule.qual.MyUnit and myModule.qual.MyOtherUnit annotations must exist in either the myProject/bin directory or the myLibrary/bin directory. The following placement of the class files will work with the above commands:

.../myProject/bin/myModule/qual/MyUnit.class
.../myProject/bin/myModule/qual/MyOtherUnit.class

The two options can be used at the same time to provide groups of annotations from directories, along with individually named annotations.

Also, see the example project in the checker/examples/units-extension directory.

## 15.5 Suppressing warnings

One example of when you need to suppress warnings is when you initialize a variable of a unit type with a literal value. To remove this warning message, it is best to introduce a constant that represents the unit and to add a @SuppressWarnings annotation to that constant. For examples, see class UnitsTools.

## 15.6 References

# Chapter 16 Constant Value Checker

The Constant Value Checker is a constant propagation analysis: for each variable, it determines whether that variable's value can be known at compile time.

There are two ways to run the Constant Value Checker.

• Typically, it is automatically run by another type checker. When using the Constant Value Checker as part of another checker, the statically-executable.astub file in the Constant Value Checker directory must be passed as a stub file for the checker.
• Alternately, you can run just the Constant Value Checker, by supplying the following command-line options to javac:

-processor org.checkerframework.common.value.ValueChecker -Astubs=statically-executable.astub

## 16.1 Annotations

The Constant Value Checker uses type annotations to indicate the value of an expression (Section 16.1.1), and it uses method annotations to indicate methods that the Constant Value Checker can execute at compile time (Section 16.1.2).

### 16.1.1 Type Annotations

Typically, the programmer does not write any type annotations. Rather, the type annotations are inferred by the Constant Value Checker. The programmer is also permitted to write type annotations. This is only necessary in locations where the Constant Value Checker does not infer annotations: on fields and method signatures.

The type annotations are @BoolVal, @IntVal, @DoubleVal, and @StringVal. Each type annotation takes as an argument a set of values, and its meaning is that at run time, the expression evaluates to one of those values. For example, an expression of type @StringVal("a", "b") evaluates to one of the values "a", "b", or null. The set is limited to 10 entries; if a variable could have more than 10 different values, the Constant Value Checker gives up and its type becomes @UnknownVal instead.

Figure 16.1 shows the subtyping relationship among the type annotations. For two annotations of the same type, subtypes have a smaller set of possible values, as also shown in the figure. Because an int can be cast to a double, an @IntVal annotation is a subtype of a @DoubleVal annotation with the same values.

Figure 16.2 shows how the Constant Value Checker infers type annotations (using flow-sensitive type qualifier refinement, Section 25.4).
public void foo(boolean b) {
    int i = 1;      // i has type: @IntVal({1}) int
    if (b) {
        i = 2;      // i now has type: @IntVal({2}) int
    }
    // i now has type: @IntVal({1,2}) int
    i = i + 1;      // i now has type: @IntVal({2,3}) int
}

Figure 16.2: The Constant Value Checker infers different types for a variable on different lines of the program.

### 16.1.2 Compile-time execution of expressions

Whenever all the operands of an expression are compile-time constants (that is, their types have constant-value type annotations), the Constant Value Checker attempts to execute the expression. This is independent of any optimizations performed by the compiler and does not affect the code that is generated.

The Constant Value Checker statically executes operators that do not throw exceptions (e.g., +, -, <<, !=), and also calls to methods annotated with @StaticallyExecutable.

@StaticallyExecutable @Pure
public int foo(int a, int b) {
    return a + b;
}

public void bar() {
    int a = 5;           // a has type: @IntVal({5}) int
    int b = 4;           // b has type: @IntVal({4}) int
    int c = foo(a, b);   // c has type: @IntVal({9}) int
}

Figure 16.3: The @StaticallyExecutable annotation enables constant propagation through method calls.

A @StaticallyExecutable method must be @Pure (side-effect-free and deterministic). Additionally, a @StaticallyExecutable method and any method it calls must be on the classpath for the compiler, because they are reflectively called at compile time to perform the constant value analysis. Any standard library methods (such as those annotated as @StaticallyExecutable in file statically-executable.astub) will already be on the classpath.

To use @StaticallyExecutable on methods in your own code, you should first compile the code without the Constant Value Checker and then add the location of the resulting .class files to the classpath. This can be done either by adding the destination path to your CLASSPATH environment variable or by passing the argument -classpath path/to/class/files to the call.
The latter would look similar to:

-processor org.checkerframework.common.value.ValueChecker \
  -Astubs=statically-executable.astub -classpath $CLASSPATH:$MY_PROJECT/build/

## 16.2 Warnings

The Constant Value Checker issues a warning if it cannot load and run, at compile time, a method marked as @StaticallyExecutable. If it issues such a warning, then the return value of the method will be @UnknownVal instead of being resolved to a specific value annotation. Some examples:

• [class.find.failed] Failed to find class named Test.

The checker could not find the class specified for resolving a @StaticallyExecutable method. Typically this is caused by not putting the path of a needed class file on the classpath.

• [method.find.failed] Failed to find a method named foo with argument types [@IntVal(3) int]. Treating result as @UnknownVal

The checker could not find the method foo(int) specified for resolving a @StaticallyExecutable method, but could find the class. This is usually due to providing an outdated version of the class file that does not contain the @StaticallyExecutable method.

• [method.evaluation.exception] Failed to evaluate method public static int Test.foo(int) because it threw an exception: java.lang.ArithmeticException: / by zero. Treating result as @UnknownVal

An exception was thrown when trying to statically execute the method. In this case it was a divide-by-zero exception. If the arguments to the method each had only one value in their annotations, then this exception will always occur when the program is actually run as well. If there are multiple possible values, then the exception might not be thrown on every execution, depending on the run-time values.

There is one other situation in which the Constant Value Checker produces a warning message:

• [too.many.values.given] Annotation ignored because the maximum number of values tracked is 10.

The Constant Value Checker only tracks up to 10 possible values for an expression.
If you write an annotation with more values than will be tracked, the annotation is ignored.

# Chapter 17 Aliasing Checker

The Aliasing Checker identifies expressions that definitely have no aliases.

Two expressions are aliased when they have the same non-primitive value; that is, they are references to the identical Java object in the heap. Another way of saying this is that two expressions, exprA and exprB, are aliases of each other when exprA == exprB at the same program point.

Assigning to a variable or field typically creates an alias. For example, after the statement a = b;, the variables a and b are aliased.

Knowing that an expression is not aliased permits more accurate reasoning about how side effects modify the expression's value.

To run the Aliasing Checker, supply the -processor org.checkerframework.common.aliasing.AliasingChecker command-line option to javac. However, a user rarely runs the Aliasing Checker directly. This type system is mainly intended to be used together with other type systems. For example, the SPARTA information flow type-checker (Section 23.8) uses the Aliasing Checker to improve its type refinement — if an expression has no aliases, a more refined type can often be inferred; otherwise the type-checker makes conservative assumptions.

## 17.1 Aliasing annotations

There are two possible types for an expression:

@MaybeAliased is the type of an expression that might have an alias. This is the default, so every unannotated type is @MaybeAliased. (This includes the type of null.)

@Unique is the type of an expression that has no aliases. The @Unique annotation is only allowed on local variables, method parameters, constructor results, and method returns. A constructor's result should be annotated with @Unique only if the constructor's body does not create an alias to the constructed object.
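The run-time behavior that these annotations describe can be seen in plain Java (no checker involved): an assignment creates an alias, and a mutation made through one reference is visible through the other.

```java
// Plain-Java demonstration (no checker): assignment creates an alias.
public class AliasDemo {
    public static String mutateThroughAlias() {
        StringBuilder a = new StringBuilder("hi"); // fresh object: no aliases yet
        StringBuilder b = a;                       // a and b are now aliased
        b.append("!");                             // mutation through b ...
        return a.toString();                       // ... is visible through a
    }

    public static void main(String[] args) {
        System.out.println(mutateThroughAlias()); // prints "hi!"
    }
}
```

Under the Aliasing Checker, a could be @Unique only until the assignment to b; after that, both references are @MaybeAliased.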
There are also two annotations, which are currently trusted instead of verified, that can be used on formal parameters (including the receiver parameter, this):

@NonLeaked identifies a formal parameter that is neither leaked nor returned by the method body. For example, the formal parameter of the String copy constructor, String(String s), is @NonLeaked because the body of the method only makes a copy of the parameter.

@LeakedToResult is used when the parameter may be returned, but is not otherwise leaked. For example, the receiver parameter of StringBuffer.append(StringBuffer this, String s) is @LeakedToResult, because the method returns the updated receiver.

## 17.2 Leaking contexts

This section lists the expressions that create aliases. These are also called "leaking contexts".

Assignments. After an assignment, the left-hand side and the right-hand side are typically aliased. (The only counterexample is when the right-hand side is a fresh expression; see Section 17.4.)

@Unique Object u = ...;
Object o = u;  // (not.unique) type-checking error!

If this example type-checked, then u and o would be aliased. For this example to type-check, either the @Unique annotation on the type of u, or the o = u; assignment, must be removed.

Method calls and returns (pseudo-assignments). Passing an argument to a method is a "pseudo-assignment" because it effectively assigns the argument to the formal parameter. Return statements are also pseudo-assignments. As with assignments, the left-hand side and right-hand side of pseudo-assignments are typically aliased.

Here is an example for argument-passing:

void foo(Object o) { ... }

@Unique Object u = ...;
foo(u);  // type-checking error, because foo may create an alias of the passed argument

Passing a non-aliased reference to a method does not necessarily create an alias. However, the body of the method might create an alias or leak the reference.
Thus, the Aliasing Checker always treats a method call as creating aliases for each argument unless the corresponding formal parameter is marked as @NonLeaked or @LeakedToResult.

Here is an example for a return statement:

Object id(@Unique Object p) {
    return p;  // (not.unique) type-checking error!
}

If this code type-checked, then it would be possible for clients to write code like this:

@Unique Object u = ...;
Object o = id(u);

after which there is an alias to u even though it is declared as @Unique.

However, it is permitted to write

Object id(@LeakedToResult Object p) {
    return p;
}

after which the following code type-checks:

@Unique Object u = ...;
id(u);                // method call result is not used

Object o1 = ...;
Object o2 = id(o1);   // argument is not @Unique

Throws. A thrown exception can be captured by a catch block, which creates an alias of the thrown exception.

void foo() {
    @Unique Exception uex = new Exception();
    try {
        throw uex;  // (not.unique) type-checking error!
    } catch (Exception ex) {
        // uex and ex refer to the same object here.
    }
}

Array initializers. Array initializers assign the elements in the initializer to the corresponding indexes of the array; therefore, expressions in an array initializer are leaked.

void foo() {
    @Unique Object o = new Object();
    Object[] ar = new Object[] { o };  // (not.unique) type-checking error!
    // The expressions o and ar[0] are now aliased.
}

## 17.3 Restrictions on where @Unique may be written

The @Unique qualifier may not be written on locations such as fields, array elements, and type parameters.

As an example of why @Unique may not be written on a field's type, consider the following code:

class MyClass {
    @Unique Object field;

    void foo() {
        MyClass myClass2 = this;
        // this.field is now an alias of myClass2.field
    }
}

That code must not type-check, because field is declared as @Unique but has an alias. The Aliasing Checker solves the problem by forbidding the @Unique qualifier on subcomponents of a structure, such as fields.
Other solutions might be possible; they would be more complicated but would permit more code to type-check.

@Unique may not be written on a type parameter for similar reasons. The assignment

List<@Unique Object> l1 = ...;
List<@Unique Object> l2 = l1;

must be forbidden because it would alias l1.get(0) with l2.get(0) even though both have type @Unique. The Aliasing Checker forbids this code by rejecting the type List<@Unique Object>.

## 17.4 Aliasing type refinement

Type refinement enables a type checker to treat an expression as a subtype of its declared type. For example, even if you declare a local variable as @MaybeAliased (or don't write anything, since @MaybeAliased is the default), sometimes the Aliasing Checker can determine that it is actually @Unique. For more details, see Section 25.4.

The Aliasing Checker treats type refinement in the usual way, except that at (pseudo-)assignments the right-hand side (RHS) may lose its type refinement, before the left-hand side (LHS) is type-refined. The RHS always loses its type refinement (it is widened to @MaybeAliased, and its declared type must have been @MaybeAliased) except in the following cases:

• The RHS is a fresh expression — an expression that returns a different value each time it is evaluated. In practice, this is only method/constructor calls with @Unique return type. A variable/field is not fresh because it can return the same value when evaluated twice.

• The LHS is a @NonLeaked formal parameter and the RHS is an argument in a method call or constructor invocation.

• The LHS is a @LeakedToResult formal parameter, the RHS is an argument in a method call or constructor invocation, and the method's return value is discarded — that is, the method call or constructor invocation is written syntactically as a statement rather than as part of a larger expression or statement.

A consequence of the above rules is that most method calls are treated conservatively.
If a variable with declared type @MaybeAliased has been refined to @Unique and is used as an argument of a method call, it usually loses its @Unique refined type.

Figure 17.2 gives an example of the Aliasing Checker's type refinement rules.

// Annotations on the StringBuffer class, used in the examples below.
// class StringBuffer {
//     @Unique StringBuffer();
//     StringBuffer append(@LeakedToResult StringBuffer this, @NonLeaked String s);
// }

void foo() {
    StringBuffer sb = new StringBuffer();  // sb is refined to @Unique.
    StringBuffer sb2 = sb;                 // sb loses its refinement.
    // Both sb and sb2 have aliases and because of that have type @MaybeAliased.
}

void bar() {
    StringBuffer sb = new StringBuffer();  // sb is refined to @Unique.
    sb.append("someString");               // sb stays @Unique, as no aliases are created.
    StringBuffer sb2 = sb.append("someString");  // sb is leaked and becomes @MaybeAliased.
    // Both sb and sb2 have aliases and because of that have type @MaybeAliased.
}

Figure 17.2: Example of the Aliasing Checker's type refinement rules.

# Chapter 18 Linear Checker for preventing aliasing

The Linear Checker implements type-checking for a linear type system. A linear type system prevents aliasing: there is only one (usable) reference to a given object at any time. Once a reference appears on the right-hand side of an assignment, it may not be used any more. The same rule applies for pseudo-assignments such as procedure argument-passing (including as the receiver) or return. One way of thinking about this is that a reference can only be used once, after which it is "used up".

This property is checked statically at compile time. The single-use property only applies to use in an assignment, which makes a new reference to the object; ordinary field dereferencing does not use up a reference.
By forbidding aliasing, a linear type system can prevent problems such as unexpected modification (by an alias), or ineffectual modification (after a reference has already been passed to, and used by, other code).

To run the Linear Checker, supply the -processor org.checkerframework.checker.linear.LinearChecker command-line option to javac.

Figure 18.1 gives an example of the Linear Checker's rules.

class Pair {
    Object a;
    Object b;
    public String toString() {
        return "<" + String.valueOf(a) + "," + String.valueOf(b) + ">";
    }
}

void print(@Linear Object arg) {
    System.out.println(arg);
}

@Linear Pair printAndReturn(@Linear Pair arg) {
    System.out.println(arg.a);
    System.out.println(arg.b);   // OK: field dereferencing does not use up the reference arg
    return arg;
}

@Linear Object m(Object o, @Linear Pair lp) {
    @Linear Object lo2 = o;      // ERROR: aliases may exist
    @Linear Pair lp3 = lp;
    @Linear Pair lp4 = lp;       // ERROR: reference lp was already used
    lp3.a;
    lp3.b;                       // OK: field dereferencing does not use up the reference
    print(lp3);
    print(lp3);                  // ERROR: reference lp3 was already used
    lp3.a;                       // ERROR: reference lp3 was already used
    @Linear Pair lp4 = new Pair(...);
    lp4.toString();
    lp4.toString();              // ERROR: reference lp4 was already used
    lp4 = new Pair();            // OK to reassign to a used-up reference
    // If you need a value back after passing it to a procedure, that
    // procedure must return it to you.
    lp4 = printAndReturn(lp4);
    if (...) {
        print(lp4);
    }
    if (...) {
        return lp4;              // ERROR: reference lp4 may have been used
    } else {
        return new Object();
    }
}

Figure 18.1: Example of Linear Checker rules.

## 18.1 Linear annotations

The linear type system uses one user-visible annotation: @Linear. The annotation indicates a type for which each value may only have a single reference — equivalently, may only be used once on the right-hand side of an assignment.
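As an informal illustration, the "use once" discipline can be mimicked at run time with a wrapper that surrenders its referent on first use. This is a hypothetical sketch; the Linear Checker enforces the discipline statically, at compile time, with no run-time cost.

```java
// Hypothetical run-time sketch of linearity: the wrapped reference may be
// taken out exactly once; afterwards the holder is "used up".
public final class LinearRef<T> {
    private T value;

    public LinearRef(T value) { this.value = value; }

    /** Surrenders the referent; any later call fails. */
    public T use() {
        if (value == null) {
            throw new IllegalStateException("reference was already used up");
        }
        T v = value;
        value = null; // this holder is now used up
        return v;
    }
}
```

A second call to use() throws at run time, which is the dynamic analogue of the compile-time errors shown in Figure 18.1.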
The full qualifier hierarchy for the linear type system includes three types:

• @UsedUp is the type of references whose object has been assigned to another reference. The reference may not be used in any way, including having its fields dereferenced, being tested for equality with ==, or being assigned to another reference. Users never need to write this qualifier.

• @Linear is the type of references that have no aliases, and that may be dereferenced at most once in the future. The type of new T() is @Linear T (the analysis does not account for the slim possibility that an alias to this escapes the constructor).

• @NonLinear is the type of references that may be dereferenced, and aliases made, as many times as desired. This is the default, so users only need to write @NonLinear if they change the default.

@UsedUp is a supertype of @NonLinear, which is a supertype of @Linear. This hierarchy makes an assignment like

@Linear Object l = new Object();
@NonLinear Object nl = l;
@NonLinear Object nl2 = nl;

legal. In other words, the fact that an object is referenced by a @Linear type means that there is only one usable reference to it now, not that there will never be multiple usable references to it. (The latter guarantee would be possible to enforce, but it is not what the Linear Checker currently does.)

## 18.2 Limitations

The @Linear annotation is supported and checked only on method parameters (including the receiver), return types, and local variables. Supporting @Linear on fields would require a sophisticated alias analysis or type system, and is future work.

No annotated libraries are provided for linear types. Most libraries would not be able to use linear types in their purest form. For example, you cannot put a linearly-typed object in a hash table, because hash table insertion calls hashCode; hashCode uses up the reference and does not return the object, even though it does not retain any pointers to the object.
For similar reasons, a collection of linearly-typed objects could not be sorted or searched.

Our lightweight implementation is intended for use in the parts of your program where errors relating to aliasing and object reuse are most likely. You can use manual reasoning (and possibly an unchecked cast or warning suppression) when objects enter or exit those portions of your program, or when that portion of your program uses an unannotated library.

# Chapter 19 IGJ immutability checker

Note: The IGJ type-checker has some known bugs and limitations. Nonetheless, it may still be useful to you.

IGJ is a Java language extension that helps programmers to avoid mutation errors (unintended side effects). If the IGJ Checker issues no warnings for a given program, then that program will never change objects that should not be changed. This guarantee enables a programmer to detect and prevent mutation-related errors. (See Section 2.3 for caveats to the guarantee.)

To run the IGJ Checker, supply the -processor org.checkerframework.checker.igj.IGJChecker command-line option to javac. For examples, see Section 19.7.

## 19.1 IGJ and mutability

IGJ [ZPA+07] permits a programmer to express that a particular object should never be modified via any reference (object immutability), or that a reference should never be used to modify its referent (reference immutability). Once a programmer has expressed these facts, an automatic checker analyzes the code to either locate mutability bugs or guarantee that the code contains no such bugs.

To learn more details of the IGJ language and type system, please see the ESEC/FSE 2007 paper "Object and reference immutability using Java generics" [ZPA+07]. The IGJ Checker supports Annotation IGJ (Section 19.5), which is a slightly different dialect of IGJ than that described in the ESEC/FSE paper.

## 19.2 IGJ Annotations

Each object is either immutable (it can never be modified) or mutable (it can be modified).
The following qualifiers are part of the IGJ type system. @Immutable An immutable reference always refers to an immutable object. Neither the reference, nor any aliasing reference, may modify the object. @Mutable A mutable reference refers to a mutable object. The reference, or some aliasing mutable reference, may modify the object. @ReadOnly A readonly reference cannot be used to modify its referent. The referent may be an immutable or a mutable object. In other words, it is possible for the referent to change via an aliasing mutable reference, even though the referent cannot be changed via the readonly reference. @Assignable The annotated field may be re-assigned regardless of the immutability of the enclosing class or object instance. @AssignsFields is similar to @Mutable, but permits only limited mutation — assignment of fields — and is intended for use by constructor helper methods. @AssignsFields is assumed to be true of the result of a constructor, so it does not need to be written there. @I simulates mutability overloading or the template behavior of generics. It can be applied to classes, methods, and parameters. See Section 19.5.3. For additional details, see [ZPA+07]. ## 19.3 What the IGJ Checker checks The IGJ Checker issues an error whenever mutation happens through a readonly reference, when fields of a readonly reference which are not explicitly marked with @Assignable are reassigned, or when a readonly reference is assigned to a mutable variable. The checker also emits a warning when casts increase the mutability access of a reference. ## 19.4 Implicit and default qualifiers As described in Section 25.3, the IGJ Checker adds implicit qualifiers, reducing the number of annotations that must appear in your code. For a complete description of all implicit IGJ qualifiers, see the Javadoc for IGJAnnotatedTypeFactory. 
The default annotation (for types that are unannotated and not given an implicit qualifier) is as follows: • @Mutable for almost all references. This is backward-compatible with Java, since Java permits any reference to be mutated. • @Readonly for local variables. This qualifier may be refined by flow-sensitive local type refinement (see Section 25.4). • @Readonly for type parameter and wildcard bounds. For example, interface List<T extends Object> { ... } is defaulted to interface List<T extends @Readonly Object> { ... } This default is not backward-compatible — that is, you may have to explicitly add @Mutable annotations to some type parameter bounds in order to make unannotated Java code type-check under IGJ. However, this reduces the number of annotations you must write overall (since most variables of generic type are in fact not modified), and permits more client code to type-check (otherwise a client could not write List<@Readonly Date>). ## 19.5 Annotation IGJ dialect The IGJ Checker supports the Annotation IGJ dialect of IGJ. The syntax of Annotation IGJ is based on type annotations. The syntax of the original IGJ dialect [ZPA+07] was based on Java 5’s generics and annotation mechanisms. The original IGJ dialect was not backward-compatible with Java (either syntactically or semantically). The dialect of IGJ checked by the IGJ Checker corrects these problems. The differences between the Annotation IGJ dialect and the original IGJ dialect are as follows. ### 19.5.1 Semantic Changes • Annotation IGJ does not permit covariant changes in generic type arguments, for backward compatibility with Java. In ordinary Java, types with different generic type arguments, such as Vector<Integer> and Vector<Number>, have no subtype relationship, even if the arguments (Integer and Number) do. The original IGJ dialect changed the Java subtyping rules to permit safely varying a type argument covariantly in certain circumstances. 
For example, Vector<Mutable, Integer> <: Vector<ReadOnly, Integer> <: Vector<ReadOnly, Number> <: Vector<ReadOnly, Object> is valid in IGJ, but in Annotation IGJ, only @Mutable Vector<Integer> <: @ReadOnly Vector<Integer> holds and the other two subtype relations do not hold @ReadOnly Vector<Integer> </: @ReadOnly Vector<Number> </: @ReadOnly Vector<Object> • Annotation IGJ supports array immutability. The original IGJ dialect did not permit the (im)mutability of array elements to be specified, because the generics syntax used by the original IGJ dialect cannot be applied to array elements. ### 19.5.2 Syntax Changes • Immutability is specified through type annotations [Ern08] (Section 19.2), not through a combination of generics and annotations. Use of type annotations makes Annotation IGJ backward compatible with Java syntax. • Templating over Immutability: The annotation @I(id) is used to template over immutability. See Section 19.5.3. ### 19.5.3 Templating over immutability: @I @I is a template annotation over IGJ Immutability annotations. It acts similarly to type variables in Java’s generic types, and the name @I mimics the standard <I> type variable name used in code written in the original IGJ dialect. The annotation value string is used to distinguish between multiple instances of @I — in the generics-based original dialect, these would be expressed as two type variables <I> and <J>. ##### Usage on classes A class declaration annotated with @I can then be used with any IGJ Immutability annotation. The actual immutability that @I is resolved to dictates the immutability type for all the non-static appearances of @I with the same value as the class declaration. Example: @I public class FileDescriptor { private @Immutable Date creationData; private @I Date lastModData; public @I Date getLastModDate(@ReadOnly FileDescriptor this) { } } ... void useFileDescriptor() { @Mutable FileDescriptor file = new @Mutable FileDescriptor(...); ... 
@Mutable Date date = file.getLastModDate(); } In the last example, @I was resolved to @Mutable for the instance file. ##### Usage on methods @I can also be used on method parameters and return types; the actual IGJ immutability is resolved based on the method invocation. For example, the below method getMidpoint returns a Point with the same immutability type as the passed parameters if p1 and p2 match in immutability, otherwise @I is resolved to @ReadOnly: static @I Point getMidpoint(@I Point p1, @I Point p2) { ... } The @I annotation value distinguishes between @I declarations. So, the below method findUnion returns a collection of the same immutability type as the first collection parameter: static <E> @I("First") Collection<E> findUnion(@I("First") Collection<E> col1, @I("Second") Collection<E> col2) { ... } ## 19.6 Iterators and their abstract state This section explains why the receiver of Iterator.next() is annotated as @ReadOnly. An iterator conceptually has two pieces of state: 1. the underlying collection 2. an index into that collection (indicating the next object to be returned) We choose to exclude the index from the abstract state of the iterator. That is, a change to the index does not count as a mutation of the iterator itself. Changes to the underlying collection are more important and interesting, and unintentional changes are much more likely to lead to important errors. Therefore, this choice about the iterator's abstract state appears to be more useful than other choices. For example, if the iterator's abstract state included both the underlying collection and the index, then there would be no way to express, or check, that Iterator.next does not change the underlying collection. ## 19.7 Examples To try the IGJ Checker on a source file that uses the IGJ qualifier, use the following command (where javac is the Checker Framework compiler that is distributed with the Checker Framework). 
javac -processor org.checkerframework.checker.igj.IGJChecker examples/IGJExample.java The IGJ Checker itself is also annotated with IGJ annotations. # Chapter 20 Javari immutability checker Note: The Javari type-checker has some known bugs and limitations. Nonetheless, it may still be useful to you. Javari [TE05, QTE08] is a Java language extension that helps programmers to avoid mutation errors that result from unintended side effects. If the Javari Checker issues no warnings for a given program, then that program will never change objects that should not be changed. This guarantee enables a programmer to detect and prevent mutation-related errors. (See Section 2.3 for caveats to the guarantee.) The Javari webpage (http://types.cs.washington.edu/javari/) contains papers that explain the Javari language and type system. By contrast to those papers, the Javari Checker uses an annotation-based dialect of the Javari language. The Javarifier tool infers Javari types for an existing program; see Section 20.2.2. Also consider the IGJ Checker (Chapter 19). The IGJ type system is more expressive than that of Javari, and the IGJ Checker is a bit more robust. However, IGJ lacks a type inference tool such as Javarifier. To run the Javari Checker, supply the -processor org.checkerframework.checker.javari.JavariChecker command-line option to javac. For examples, see Section 20.5. ## 20.1 Javari annotations The following six annotations make up the Javari type system. @ReadOnly indicates a type that provides only read-only access. A reference of this type may not be used to modify its referent, but aliasing references to that object might change it. @Mutable indicates a mutable type. @Assignable is a field annotation, not a type qualifier. It indicates that the given field may always be assigned, no matter what the type of the reference used to access the field. @QReadOnly corresponds to Javari’s “? readonly” for wildcard types. An example of its use is List<@QReadOnly Date>. 
It allows only the operations which are allowed for both readonly and mutable types. @PolyRead (previously named @RoMaybe) specifies polymorphism over mutability; it simulates mutability overloading. It can be applied to methods and parameters. See Section 24.2 and the @PolyRead Javadoc for more details. @ThisMutable means that the mutability of the field is the same as that of the reference that contains it. @ThisMutable is the default on fields, and does not make sense to write elsewhere. Therefore, @ThisMutable should never appear in a program. ## 20.2 Writing Javari annotations ### 20.2.1 Implicit qualifiers As described in Section 25.3, the Javari Checker adds implicit qualifiers, reducing the number of annotations that must appear in your code. For a complete description of all implicit Javari qualifiers, see the Javadoc for JavariAnnotatedTypeFactory. ### 20.2.2 Inference of Javari annotations It can be tedious to write annotations in your code. The Javarifier tool (http://types.cs.washington.edu/javari/javarifier/) infers Javari types for an existing program. It automatically inserts Javari annotations in your Java program or in .class files. This has two benefits: it relieves the programmer of the tedium of writing annotations (though the programmer can always refine the inferred annotations), and it annotates libraries, permitting checking of programs that use those libraries. ## 20.3 What the Javari Checker checks The checker issues an error whenever mutation happens through a readonly reference, when fields of a readonly reference which are not explicitly marked with @Assignable are reassigned, or when a readonly expression is assigned to a mutable variable. The checker also emits a warning when casts increase the mutability access of a reference. ## 20.4 Iterators and their abstract state For an explanation of why the receiver of Iterator.next() is annotated as @ReadOnly, see Section 19.6. 
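As a concrete illustration of this design choice, consider the following plain-Java sketch (the Javari/IGJ qualifiers appear only in comments, since checking them requires running the checker):

```java
import java.util.*;

public class ReadOnlyIterationDemo {
    // Iterating advances the iterator's index but never mutates the
    // underlying collection, which is why the receiver of
    // Iterator.next() can be @ReadOnly.
    static String concat(List<String> view) {
        StringBuilder sb = new StringBuilder();
        for (Iterator<String> it = view.iterator(); it.hasNext(); ) {
            sb.append(it.next());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Conceptually a @ReadOnly view: the list cannot be mutated through it.
        List<String> view = Collections.unmodifiableList(Arrays.asList("a", "b"));
        System.out.println(concat(view));  // prints "ab"
    }
}
```

Because the index is excluded from the iterator's abstract state, a loop like the one above type-checks even when the collection reference is read-only.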
## 20.5 Examples To try the Javari Checker on a source file that uses the Javari qualifier, use the following command (where javac is the Checker Framework compiler that is distributed with the Checker Framework). Alternatively, you may specify just one of the test files. javac -processor org.checkerframework.checker.javari.JavariChecker tests/javari/*.java The compiler should issue the errors and warnings (if any) specified in the .out files with the same name. To run the test suite for the Javari Checker, use ant javari-tests. The Javari Checker itself is also annotated with Javari annotations. # Chapter 21 Reflection resolution A call to Method.invoke might reflectively invoke any method, so the annotated JDK contains conservative annotations for Method.invoke. These conservative library annotations often cause a checker to issue false positive warnings when type-checking code that uses reflection. If you supply the -AresolveReflection command-line option, the Checker Framework attempts to resolve reflection. At each call to Method.invoke or Constructor.newInstance, the Checker Framework first soundly estimates which methods might be invoked at runtime. When type-checking the call, the Checker Framework uses a library annotation that indicates the parameter and return types of the possibly-invoked methods. If the estimate of invoked methods is small, these types are precise and the checker issues fewer false positive warnings. If the estimate of invoked methods is large, these types are no better than the conservative library annotations. Reflection resolution is disabled by default, because it increases the time to type-check a program. You should enable reflection resolution with the -AresolveReflection command-line option if, for some call site of Method.invoke or Constructor.newInstance in your program: 1. the conservative library annotations on Method.invoke or Constructor.newInstance cause false positive warnings, 2. 
the set of possibly-invoked methods or constructors can be known at compile time, and 3. the reflectively invoked methods/constructors are on the class path at compile time. Reflection resolution does not change your source code or generated code. In particular, it does not replace the Method.invoke or Constructor.newInstance calls. The command-line option -AresolveReflection=debug outputs verbose information about the reflection resolution process. Section 21.1 first describes the MethodVal and ClassVal Checkers, which reflection resolution uses internally. Then, Section 21.2 gives examples of reflection resolution. ## 21.1 MethodVal and ClassVal Checkers The implementation of reflection resolution internally uses the ClassVal Checker (Section 21.1.1) and the MethodVal Checker (Section 21.1.2). They are very similar to the Constant Value Checker (Section 16) in that their annotations estimate the run-time value of an expression. In some cases, you may need to write annotations such as @ClassVal, @MethodVal, @StringVal and @ArrayLen (from the Constant Value Checker, Section 16) to aid in reflection resolution. Often, though, these annotations can be inferred (Section 21.1.3). ### 21.1.1 ClassVal Checker The ClassVal Checker defines the following annotations: @ClassVal(String[] value) If an expression has @ClassVal type with a single argument, then its exact run-time value is known at compile time. For example, @ClassVal("java.util.HashMap") indicates that the Class object represents the java.util.HashMap class. If multiple arguments are given, then the expression’s run-time value is known to be in that set. The arguments are binary names (JLS §13.1). @ClassBound(String[] value) If an expression has @ClassBound type, then its run-time value is known to be upper-bounded by that type. For example, @ClassBound("java.util.HashMap") indicates that the Class object represents java.util.HashMap or a subclass of it. 
If multiple arguments are given, then the run-time value is equal to or a subclass of some class in that set. The arguments are binary names (JLS §13.1). @UnknownClass Indicates that there is no compile-time information about the run-time value of the class — or that the Java type is not Class. This is the default qualifier, and it may not be written in source code. @ClassValBottom Type given to the null literal. It may not be written in source code. #### Subtyping rules Figure 21.1 shows part of the type hierarchy of the ClassVal type system. @ClassVal(A) is a subtype of @ClassVal(B) if A is a subset of B. @ClassBound(A) is a subtype of @ClassBound(B) if A is a subset of B. @ClassVal(A) is a subtype of @ClassBound(B) if A is a subset of B. ### 21.1.2 MethodVal Checker The MethodVal Checker defines the following annotations: @MethodVal(String[] className, String[] methodName, int[] params) Indicates that an expression of type Method or Constructor has a run-time value in a given set. If the set has size n, then each of @MethodVal’s arguments is an array of size n, and the ith method in the set is represented by { className[i], methodName[i], params[i] }. For a constructor, the method name is “<init>”. Consider the following example: @MethodVal(className={"java.util.HashMap", "java.util.HashMap"}, methodName={"containsKey", "containsValue"}, params={1, 1}) This @MethodVal annotation indicates that the Method is either HashMap.containsKey with 1 formal parameter or HashMap.containsValue with 1 formal parameter. The @MethodVal type qualifier indicates the number of parameters that the method takes, but not their type. This means that the Checker Framework’s reflection resolution cannot distinguish among overloaded methods. @UnknownMethod Indicates that there is no compile-time information about the run-time value of the method — or that the Java type is not Method or Constructor. This is the default qualifier, and it may not be written in source code. 
@MethodValBottom Type given to the null literal. It may not be written in source code. #### Subtyping rules Figure 21.2 shows part of the type hierarchy of the MethodVal type system. @MethodVal(classname=CA, methodname=MA, params=PA) is a subtype of @MethodVal(classname=CB, methodname=MB, params=PB) if ∀ indexes i ∃ an index j: CA[i] = CB[j], MA[i] = MB[j], and PA[i] = PB[j] where CA, MA, and PA are lists of equal size and CB, MB, and PB are lists of equal size. ### 21.1.3 MethodVal and ClassVal inference The developer rarely has to write @ClassVal or @MethodVal annotations, because the Checker Framework infers them according to Figure 21.3. Most readers can skip this section, which explains the inference rules. The ClassVal Checker infers the exact class name (@ClassVal) for a Class literal (C.class), and for a static method call (e.g., Class.forName(arg), ClassLoader.loadClass(arg), ...) if the argument is a statically computable expression. In contrast, it infers an upper bound (@ClassBound) for instance method calls (e.g., obj.getClass()). The MethodVal Checker infers @MethodVal annotations for Method and Constructor types that have been created using a method call to Java’s Reflection API: • Class.getMethod(String name, Class<?>... paramTypes) • Class.getConstructor(Class<?>... paramTypes) Note that an exact class name is necessary to precisely resolve reflectively-invoked constructors since a constructor in a subclass does not override a constructor in its superclass. This means that the MethodVal Checker does not infer a @MethodVal annotation for Class.getConstructor if the type of that class is @ClassBound. In contrast, either an exact class name or a bound is adequate to resolve reflectively-invoked methods because of the subtyping rules for overridden methods. ## 21.2 Reflection resolution example Consider the following example, in which the Nullness Checker employs reflection resolution to avoid issuing a false positive warning. 
public class LocationInfo { @NonNull Location getCurrentLocation() { ... } } public class Example { LocationInfo privateLocation = ... ; String getCurrentCity() throws Exception { Method getCurrentLocationObj = LocationInfo.class.getMethod("getCurrentLocation"); Location currentLocation = (Location) getCurrentLocationObj.invoke(privateLocation); return currentLocation.nameOfCity(); } } When reflection resolution is not enabled, the Nullness Checker uses conservative annotations on the Method.invoke method signature: @Nullable Object invoke(@NonNull Object recv, @NonNull Object ... args) This causes the Nullness Checker to issue the following warning even though currentLocation cannot be null. error: [dereference.of.nullable] dereference of possibly-null reference currentLocation return currentLocation.nameOfCity(); ^ 1 error When reflection resolution is enabled, the MethodVal Checker infers that the @MethodVal annotation for getCurrentLocationObj is: @MethodVal(className="LocationInfo", methodName="getCurrentLocation", params=0) Based on this @MethodVal annotation, the reflection resolver determines that the reflective method call represents a call to getCurrentLocation in class LocationInfo. The reflection resolver uses this information to provide the following precise procedure summary to the Nullness Checker, for this call site only: @NonNull Object invoke(@NonNull Object recv, @Nullable Object ... args) Using this more precise signature, the Nullness Checker does not issue the false positive warning shown above. # Chapter 22 Subtyping Checker The Subtyping Checker enforces only subtyping rules. It operates over annotations specified by a user on the command line. Thus, users can create a simple type-checker without writing any code beyond definitions of the type qualifier annotations. The Subtyping Checker can accommodate all of the type system enhancements that can be declaratively specified (see Chapter 29). 
This includes type introduction rules (implicit annotations, e.g., literals are implicitly considered @NonNull) via the @ImplicitFor meta-annotation, and other features such as flow-sensitive type qualifier inference (Section 25.4) and qualifier polymorphism (Section 24.2). The Subtyping Checker is also useful to type system designers who wish to experiment with a checker before writing code; the Subtyping Checker demonstrates the functionality that a checker inherits from the Checker Framework. If you need typestate analysis, then you can extend a typestate checker, much as you would extend the Subtyping Checker if you do not need typestate analysis. For more details (including a definition of “typestate”), see Chapter 23.1. See Section 31.6.2 for a simpler alternative. For type systems that require special checks (e.g., warning about dereferences of possibly-null values), you will need to write code and extend the framework as discussed in Chapter 29. ## 22.1 Using the Subtyping Checker The Subtyping Checker is used in the same way as other checkers (using the -processor org.checkerframework.common.subtyping.SubtypingChecker option; see Chapter 2), except that it requires an additional annotation processor argument via the standard “-A” switch. One of the two following arguments must be used with the Subtyping Checker: • Provide the fully-qualified class name(s) of the annotation(s) in the custom type system through the -Aquals option, using a comma-no-space-separated notation: javac -Xbootclasspath/p:/full/path/to/myProject/bin:/full/path/to/myLibrary/bin \ -processor org.checkerframework.common.subtyping.SubtypingChecker \ -Aquals=myModule.qual.MyQual,myModule.qual.OtherQual MyFile.java ... The annotations listed in -Aquals must be accessible to the compiler during compilation in the classpath. In other words, they must already be compiled (and, typically, be on the javac bootclasspath) before you run the Subtyping Checker with javac. 
It is not sufficient to supply their source files on the command line. • Provide the fully-qualified paths to a set of directories that contain the annotations in the custom type system through the -AqualDirs option, using a colon-no-space-separated notation. For example: javac -Xbootclasspath/p:/full/path/to/myProject/bin:/full/path/to/myLibrary/bin \ -processor org.checkerframework.common.subtyping.SubtypingChecker \ -AqualDirs=/full/path/to/myProject/bin:/full/path/to/myLibrary/bin MyFile.java Note that in these two examples, the compiled class file of the myModule.qual.MyQual and myModule.qual.OtherQual annotations must exist in either the myProject/bin directory or the myLibrary/bin directory. The following placement of the class files will work with the above commands: .../myProject/bin/myModule/qual/MyQual.class .../myLibrary/bin/myModule/qual/OtherQual.class The two options can be used at the same time to provide groups of annotations from directories, and individually named annotations. To suppress a warning issued by the Subtyping Checker, use a @SuppressWarnings annotation, with the argument being the unqualified, uncapitalized name of any of the annotations passed to -Aquals. This will suppress all warnings, regardless of which of the annotations is involved in the warning. (As a matter of style, you should choose one of the annotations as your @SuppressWarnings key and stick with it for that entire type hierarchy.) ## 22.2 Subtyping Checker example Consider a hypothetical Encrypted type qualifier, which denotes that the representation of an object (such as a String, CharSequence, or byte[]) is encrypted. To use the Subtyping Checker for the Encrypted type system, follow three steps. 1. 
Define two annotations for the Encrypted and PossiblyUnencrypted qualifiers: package myModule.qual; import org.checkerframework.framework.qual.*; import java.lang.annotation.ElementType; import java.lang.annotation.Target; import com.sun.source.tree.Tree.Kind; /** * Denotes that the representation of an object is encrypted. */ @SubtypeOf(PossiblyUnencrypted.class) @ImplicitFor(trees = { Kind.NULL_LITERAL }) @DefaultFor({DefaultLocation.LOWER_BOUNDS}) @Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER}) public @interface Encrypted {} package myModule.qual; import org.checkerframework.framework.qual.DefaultQualifierInHierarchy; import org.checkerframework.framework.qual.SubtypeOf; import java.lang.annotation.ElementType; import java.lang.annotation.Target; /** * Denotes that the representation of an object might not be encrypted. */ @DefaultQualifierInHierarchy @SubtypeOf({}) @Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER}) public @interface PossiblyUnencrypted {} Note that all custom annotations must have the @Target(ElementType.TYPE_USE) meta-annotation. See Section 29.3.1. Don’t forget to compile these classes: $ javac myModule/qual/Encrypted.java myModule/qual/PossiblyUnencrypted.java
The resulting .class files should either be on your classpath, or on the processor path (set via the -processorpath command-line option to javac).
2. Write @Encrypted annotations in your program (say, in file YourProgram.java):
import myModule.qual.Encrypted;
...
public @Encrypted String encrypt(String text) {
// ...
}
// Only send encrypted data!
public void sendOverInternet(@Encrypted String msg) {
// ...
}
void sendText() {
// ...
@Encrypted String ciphertext = encrypt(plaintext);
sendOverInternet(ciphertext);
// ...
}
}
You may also need to add @SuppressWarnings annotations to the encrypt and decrypt methods. Analyzing them is beyond the capability of any realistic type system.
3. Invoke the compiler with the Subtyping Checker, specifying the @Encrypted annotation using the -Aquals option. You should add the Encrypted classfile to the processor classpath:
javac -processorpath myqualpath -processor org.checkerframework.common.subtyping.SubtypingChecker
-Aquals=myModule.qual.Encrypted,myModule.qual.PossiblyUnencrypted YourProgram.java
YourProgram.java:42: incompatible types.
found : @myModule.qual.PossiblyUnencrypted java.lang.String
required: @myModule.qual.Encrypted java.lang.String
^
4. You can also provide the fully-qualified paths to a set of directories that contain the qualifiers using the -AqualDirs option, and add the directories to the boot classpath, for example:
javac -Xbootclasspath/p:/full/path/to/myProject/bin:/full/path/to/myLibrary/bin \
-processor org.checkerframework.common.subtyping.SubtypingChecker \
-AqualDirs=/full/path/to/myProject/bin:/full/path/to/myLibrary/bin YourProgram.java
Note that in these two examples, the compiled class file of the myModule.qual.Encrypted and myModule.qual.PossiblyUnencrypted annotations must exist in either the myProject/bin directory or the myLibrary/bin directory. The following placement of the class files will work with the above commands:
.../myProject/bin/myModule/qual/Encrypted.class
.../myProject/bin/myModule/qual/PossiblyUnencrypted.class
Also, see the example project in the checker/examples/subtyping-extension directory.
# Chapter 23 Third-party checkers
The Checker Framework has been used to build other checkers that are not distributed together with the framework. This chapter mentions just a few of them. They are listed in chronological order; older ones appear first and newer ones appear last.
They are externally-maintained, so if you have problems or questions, you should contact their maintainers rather than the Checker Framework maintainers.
If you want a reference to your checker included in this chapter, send us a link and a short description.
## 23.1 Typestate checkers
In a regular type system, a variable has the same type throughout its scope. In a typestate system, a variable’s type can change as operations are performed on it.
The most common example of typestate is for a File object. Assume a file can be in two states, @Open and @Closed. Calling the close() method changes the file’s state. Any subsequent attempt to read, write, or close the file will lead to a run-time error. It would be better for the type system to warn about such problems, or guarantee their absence, at compile time.
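The run-time failure that motivates this is easy to reproduce with plain Java streams (a sketch; the @Open/@Closed qualifiers are hypothetical and appear only in comments):

```java
import java.io.*;

public class ClosedFileDemo {
    static String readAfterClose() throws IOException {
        File tmp = File.createTempFile("typestate-demo", ".txt");
        tmp.deleteOnExit();
        try (FileWriter w = new FileWriter(tmp)) {
            w.write("hello");
        }
        Reader r = new FileReader(tmp);  // r is conceptually @Open here
        r.close();                       // close() transitions r to @Closed
        try {
            r.read();                    // a typestate checker would reject this line
            return "no error";
        } catch (IOException e) {        // instead, the error surfaces at run time
            return "IOException: " + e.getMessage();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readAfterClose());
    }
}
```

Running this prints an IOException message, because the read happens while the reader is in the closed state; a typestate checker moves that diagnosis to compile time.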
Just as you can extend the Subtyping Checker to create a type-checker, you can extend a typestate checker to create a type-checker that supports typestate analysis. An extensible typestate analysis by Adam Warski that builds on the Checker Framework is available at http://www.warski.org/typestate.html.
### 23.1.1 Comparison to flow-sensitive type refinement
The Checker Framework’s flow-sensitive type refinement (Section 25.4) implements a form of typestate analysis. For example, after code that tests a variable against null, the Nullness Checker (Chapter 3) treats the variable’s type as @NonNull T, for some T.
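A minimal sketch of that pattern (plain Java; the qualifiers are shown in comments because checking them requires running the Nullness Checker):

```java
public class RefinementDemo {
    // Declared parameter type: @Nullable String
    static int lengthOrZero(String s) {
        if (s != null) {
            // After the test, flow-sensitive refinement treats s as
            // @NonNull String, so this dereference passes without a warning.
            return s.length();
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(lengthOrZero("abc"));
        System.out.println(lengthOrZero(null));
    }
}
```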
For many type systems, flow-sensitive type refinement is sufficient. But sometimes, you need full typestate analysis. This section compares the two. (Unused variables (Section 25.6) also have similarities with typestate analysis and can occasionally substitute for it. For brevity, this discussion omits them.)
A typestate analysis is easier for a user to create or extend. Flow-sensitive type refinement is built into the Checker Framework and is optionally extended by each checker. Modifying the rules requires writing Java code in your checker. By contrast, it is possible to write a simple typestate checker declaratively, by writing annotations on the methods (such as close()) that change a reference’s typestate.
A typestate analysis can change a reference’s type to something that is not consistent with its original definition. For example, suppose that a programmer decides that the @Open and @Closed qualifiers are incomparable — neither is a subtype of the other. A typestate analysis can specify that the close() operation converts an @Open File into a @Closed File. By contrast, flow-sensitive type refinement can only give a new type that is a subtype of the declared type — for flow-sensitive type refinement to be effective, @Closed would need to be a child of @Open in the qualifier hierarchy (and close() would need to be treated specially by the checker).
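For instance, a declarative typestate specification might look like the following sketch. The annotation names here are invented for illustration; Warski's checker defines its own annotation set.

```java
import java.lang.annotation.*;

public class DeclarativeTypestateSketch {
    // Hypothetical annotations, invented for this illustration.
    @Target(ElementType.METHOD) @Retention(RetentionPolicy.RUNTIME)
    @interface RequiresState { String value(); }

    @Target(ElementType.METHOD) @Retention(RetentionPolicy.RUNTIME)
    @interface TransitionsTo { String value(); }

    static class TrackedFile {
        // close() is legal only in the Open state and moves the
        // receiver to the Closed state -- even though Open and Closed
        // need not be related by subtyping.
        @RequiresState("Open") @TransitionsTo("Closed")
        void close() { }

        @RequiresState("Open")
        int read() { return -1; }
    }

    public static void main(String[] args) throws Exception {
        // A checker (or any reflective tool) can read the declarative rules:
        TransitionsTo t = TrackedFile.class
                .getDeclaredMethod("close")
                .getAnnotation(TransitionsTo.class);
        System.out.println("close() transitions to: " + t.value());
    }
}
```

The point of the sketch is that the state-transition rules live entirely in annotations on the methods, so no checker code needs to be written to express them.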
## 23.2 Units and dimensions checker
A checker for units and dimensions is available at http://www.lexspoon.org/expannots/.
Unlike the Units Checker that is distributed with the Checker Framework (see Section 15), this checker includes dynamic checks and permits annotation arguments that are Java expressions. This added flexibility, however, requires that you use a special version both of the Checker Framework and of the javac compiler.
## 23.3 Thread locality checker

Loci [WPM+09], a checker for thread locality, is available at http://www.it.uu.se/research/upmarc/loci/. Developer resources are available at the project page http://java.net/projects/loci/.
## 23.4 Safety-Critical Java checker
A checker for Safety-Critical Java (SCJ, JSR 302) [TPV10] is available at http://sss.cs.purdue.edu/projects/oscj/checker/checker.html. Developer resources are available at the project page http://code.google.com/p/scj-jsr302/.
## 23.5 Generic Universe Types checker
A checker for Generic Universe Types [DEM11], a lightweight ownership type system, is available from https://ece.uwaterloo.ca/~wdietl/ownership/.
## 23.6 EnerJ checker
A checker for EnerJ [SDF+11], an extension to Java that exposes hardware faults in a safe, principled manner to save energy with only slight sacrifices to the quality of service, is available from http://sampa.cs.washington.edu/research/approximation/enerj.html.
## 23.7 CheckLT taint checker
CheckLT uses taint tracking to detect illegal information flows, such as unsanitized data that could result in a SQL injection attack. CheckLT is available from http://checklt.github.io/.
## 23.8 SPARTA information flow type-checker for Android
SPARTA is a security toolset aimed at preventing malware from appearing in an app store. SPARTA provides an information-flow type-checker that is customized to Android but can also be applied to other domains. The SPARTA toolset is available from http://types.cs.washington.edu/sparta/. The paper “Collaborative verification of information flow for a high-assurance app store” appeared in CCS 2014.
# Chapter 24 Generics and polymorphism
This chapter describes support for Java generics (also known as “parametric polymorphism”) and polymorphism over type qualifiers.
The Checker Framework currently supports two schemes for polymorphism over type qualifiers.
Section 24.2 describes the original scheme, which uses method-based annotations that are meta-annotated with @PolymorphicQualifier.
Section 24.3 describes the qualifier parameters scheme, in which qualifier parameters are specified for classes and methods similarly to Java generics. The qualifier parameter scheme is more powerful than the original approach, but it incurs a 50% performance penalty. Currently, only the Tainting Checker (Chapter 8) and the Regex Checker (Chapter 9) support qualifier parameters.
## 24.1 Generics (parametric polymorphism or type polymorphism)
The Checker Framework fully supports type-qualified Java generic types and methods (also known in the research literature as “parametric polymorphism”). When instantiating a generic type, clients supply the qualifier along with the type argument, as in List<@NonNull String>.
### 24.1.1 Raw types
Before running any pluggable type-checker, we recommend that you eliminate raw types from your code (e.g., your code should use List<...> as opposed to List). Your code should compile without warnings when using the standard Java compiler and the -Xlint:unchecked -Xlint:rawtypes command-line options. Using generics helps prevent type errors just as using a pluggable type-checker does, and makes the Checker Framework’s warnings easier to understand.
If your code uses raw types, then the Checker Framework will do its best to infer the Java type parameters and the type qualifiers. If it infers imprecise types that lead to type-checking warnings elsewhere, then you have two options. You can convert the raw types such as List to parameterized types such as List<String>, or you can supply the -AignoreRawTypeArguments command-line option. That option causes the Checker Framework to ignore all subtype tests for type arguments that were inferred for a raw type.
### 24.1.2 Restricting instantiation of a generic class
When you define a generic class in Java, the extends clause of the generic type parameter (known as the “upper bound”) requires that the corresponding type argument must be a subtype of the bound. For example, given the definition class G<T extends Number> {...}, the upper bound is Number and a client can instantiate it as G<Number> or G<Integer> but not G<Date>.
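In plain Java, before any qualifiers are involved, the bound is checked at each instantiation; a minimal runnable sketch:

```java
// Plain-Java sketch of an upper-bounded type parameter (no qualifiers yet).
public class BoundDemo {
    static class G<T extends Number> {
        T value;
    }

    // Instantiating G with a subtype of the bound is accepted.
    static int stored() {
        G<Integer> gi = new G<>();   // OK: Integer is a subtype of Number
        gi.value = 42;
        // G<java.util.Date> gd;     // compile-time error: Date is not a Number
        return gi.value;
    }

    public static void main(String[] args) {
        System.out.println(stored()); // 42
    }
}
```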
You can write a type qualifier on the extends clause to make the upper bound a qualified type. For example, you can declare that a generic list class can hold only non-null values:
class MyList<T extends @NonNull Object> {...}
MyList<@NonNull String> m1; // OK
MyList<@Nullable String> m2; // error
That is, in the above example, all arguments that replace T in MyList<T> must be subtypes of @NonNull Object.
Conceptually, each generic type parameter has two bounds — a lower bound and an upper bound — and at instantiation, the type argument must be within the bounds. Java only allows you to specify the upper bound; the lower bound is implicitly the bottom type void. The Checker Framework gives you more power: you can specify both an upper and lower bound for type parameters and wildcards. For the upper bound, write a type qualifier on the extends clause, and for the lower bound, write a type qualifier on the type variable.
class MyList<@LowerBound T extends @UpperBound Object> { ... }
For a concrete example, recall the type system of the Regex Checker (see Figure 9) in which @Regex(0) :> @Regex(1) :> @Regex(2) :> @Regex(3) :> ….
class MyRegexes<@Regex(5) T extends @Regex(1) String> { ... }
MyRegexes<@Regex(0) String> mu; // error - @Regex(0) is not a subtype of @Regex(1)
MyRegexes<@Regex(1) String> m1; // OK
MyRegexes<@Regex(3) String> m3; // OK
MyRegexes<@Regex(5) String> m5; // OK
MyRegexes<@Regex(6) String> m6; // error - @Regex(6) is not a supertype of @Regex(5)
The above declaration states that the upper bound of the type variable is @Regex(1) String and the lower bound is @Regex(5) void. That is, arguments that replace T in MyRegexes<T> must be subtypes of @Regex(1) String and supertypes of @Regex(5) void. Since void cannot be used to instantiate a generic class, MyRegexes may be instantiated with @Regex(1) String through @Regex(5) String.
To specify an exact bound, place the same annotation on both bounds. For example:
class MyListOfNonNulls<@NonNull T extends @NonNull Object> { ... }
class MyListOfNullables<@Nullable T extends @Nullable Object> { ... }
MyListOfNonNulls<@NonNull Number> v1; // OK
MyListOfNonNulls<@Nullable Number> v2; // error
MyListOfNullables<@NonNull Number> v4; // error
MyListOfNullables<@Nullable Number> v3; // OK
It is an error if the lower bound is not a subtype of the upper bound.
class MyClass<@Nullable T extends @NonNull Object> // error: @Nullable is not a supertype of @NonNull
#### Defaults
If the extends clause is omitted, then the upper bound defaults to @TopType Object. If no type annotation is written on the type parameter name, then the lower bound defaults to @BottomType void. If the extends clause is written but contains no type qualifier, then the normal defaulting rules apply to the type in the extends clause (see Section 25.3.2).
These rules mean that even though in Java the following two declarations are equivalent:
class MyClass<T>
class MyClass<T extends Object>
they may specify different type qualifiers on the upper bound, depending on the type system’s defaulting rules.
### 24.1.3 Type annotations on a use of a generic type variable
A type annotation on a use of a generic type variable overrides/ignores any type qualifier (in the same type hierarchy) on the corresponding actual type argument. For example, suppose that T is a formal type parameter. Then using @Nullable T within the scope of T applies the type qualifier @Nullable to the (unqualified) Java type of T. This feature is only rarely used.
Here is an example of applying a type annotation to a generic type variable:
class MyClass2<T> {
...
@Nullable T myField = null;
...
}
The type annotation does not restrict how MyClass2 may be instantiated. In other words, both MyClass2<@NonNull String> and MyClass2<@Nullable String> are legal, and in both cases @Nullable T means @Nullable String. In MyClass2<@Interned String>, @Nullable T means @Nullable @Interned String.
### 24.1.4 Annotations on wildcards
At an instantiation of a generic type, a Java wildcard indicates that some constraints are known on the type argument, but the type argument is not known exactly. For example, you can indicate that the type parameter for variable ls is some unknown subtype of CharSequence:
List<? extends CharSequence> ls;
ls = new ArrayList<String>(); // OK
ls = new ArrayList<Integer>(); // error: Integer is not a subtype of CharSequence
For more details about wildcards, see the Java tutorial on wildcards or JLS §4.5.1.
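These unannotated wildcard rules can be exercised in plain Java; a small runnable sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class WildcardDemo {
    // Reading through an extends-bounded wildcard yields the bound's type.
    static int firstLength() {
        List<? extends CharSequence> ls;
        ls = new ArrayList<String>(List.of("a", "bb")); // OK: String is a subtype of CharSequence
        // ls = new ArrayList<Integer>(); // compile-time error: Integer is not a CharSequence
        CharSequence first = ls.get(0);   // reads are allowed at the bound's type
        // ls.add("c");                   // compile-time error: exact element type unknown
        return first.length();
    }

    public static void main(String[] args) {
        System.out.println(firstLength()); // 1
    }
}
```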
You can write a type annotation on the bound of a wildcard:
List<? extends @NonNull CharSequence> ls;
ls = new ArrayList<@NonNull String>(); // OK
ls = new ArrayList<@Nullable String>(); // error: @Nullable is not a subtype of @NonNull
Conceptually, every wildcard has two bounds — an upper bound and a lower bound. Java only permits you to write the upper bound (with <? extends SomeType>) or the lower bound (with <? super OtherType>), but not both; the unspecified bound is implicitly the top type Object or the bottom type void. The Checker Framework is more flexible: it lets you simultaneously write annotations on both the top and the bottom type. To annotate the implicit bound, write the type annotation before the ?. For example:
List<@LowerBound ? extends @UpperBound CharSequence> lo;
List<@UpperBound ? super @NonNull Number> ls;
For an unbounded wildcard (<?>, with neither bound specified), the annotation in front of a wildcard applies to both bounds. The following three declarations are equivalent (except that you cannot write the bottom type void; note that Void does not denote the bottom type):
List<@NonNull ?> lnn;
List<@NonNull ? extends @NonNull Object> lnn;
List<@NonNull ? super @NonNull void> lnn;
Note that the annotation in front of a type parameter always applies to its lower bound, because type parameters can only be written with extends and never super.
The defaulting rules for wildcards also differ from those of type parameters (see Section 25.3.4).
### 24.1.5 Examples of qualifiers on a type parameter
Recall that @Nullable X is a supertype of @NonNull X, for any X. Most of the following types mean different things:
class MyList1<@Nullable T> { ... }
class MyList1a<@Nullable T extends @Nullable Object> { ... } // same as MyList1
class MyList2<@NonNull T extends @NonNull Object> { ... }
class MyList2a<T extends @NonNull Object> { ... } // same as MyList2
class MyList3<T extends @Nullable Object> { ... }
MyList1 and MyList1a must be instantiated with a nullable type. The implementation of MyList1 must be able to consume (store) a null value and produce (retrieve) a null value.
MyList2 and MyList2a must be instantiated with non-null type. The implementation of MyList2 has to account for only non-null values — it does not have to account for consuming or producing null.
MyList3 may be instantiated either way: with a nullable type or a non-null type. The implementation of MyList3 must consider that it may be instantiated either way — flexible enough to support either instantiation, yet rigorous enough to impose the correct constraints of the specific instantiation. It must also itself comply with the constraints of the potential instantiations.
One way to express the difference among MyList1, MyList2, and MyList3 is by comparing what expressions are legal in the implementation of the list — that is, what expressions may appear in the ellipsis in the declarations above, such as inside a method’s body. Suppose each class has, in the ellipsis, these declarations:
T t;
@Nullable T nble; // Section 24.1.3, "Type annotations on a use of a generic type variable",
@NonNull T nn; // above, explains the meaning of "@Nullable T" and "@NonNull T".
T get(int i) { }
Then the following expressions would be legal, inside a given implementation — that is, also within the ellipses. (Compilable source code appears as file checker-framework/checker/tests/nullness/generics/GenericsExample.java.)
| Expression | MyList1 | MyList2 | MyList3 |
|---|---|---|---|
| `t = null;` | OK | error | error |
| `t = nble;` | OK | error | error |
| `nble = null;` | OK | OK | OK |
| `nn = null;` | error | error | error |
| `t = this.get(0);` | OK | OK | OK |
| `nble = this.get(0);` | OK | OK | OK |
| `nn = this.get(0);` | error | OK | error |
| `this.add(t);` | OK | OK | OK |
| `this.add(nble);` | OK | error | error |
| `this.add(nn);` | OK | OK | OK |
The differences are more significant when the qualifier hierarchy is more complicated than just @Nullable and @NonNull.
### 24.1.6 Covariant type parameters
Java types are invariant in their type parameter. This means that A<X> is a subtype of B<Y> only if X is identical to Y. For example, ArrayList<Number> is a subtype of List<Number>, but neither ArrayList<Integer> nor List<Integer> is a subtype of List<Number>. (If they were, there would be a type hole in the Java type system.) For the same reason, type parameter annotations are treated invariantly. For example, List<@Nullable String> is not a subtype of List<String>.
When a type parameter is used in a read-only way — that is, when values of that type are read but are never assigned — then it is safe for the type to be covariant in the type parameter. Use the @Covariant annotation to indicate this. When a type parameter is covariant, two instantiations of the class with different type arguments have the same subtyping relationship as the type arguments do.
For example, consider Iterator. Its elements can be read but not written, so Iterator<@Nullable String> can be a subtype of Iterator<String> without introducing a hole in the type system. Therefore, its type parameter is annotated with @Covariant. The first type parameter of Map.Entry is also covariant. Another example would be the type parameter of a hypothetical class ImmutableList.
The @Covariant annotation is trusted but not checked. If you incorrectly specify as covariant a type parameter that can be written (say, the class performs a set operation or some other mutation on an object of that type), then you have created an unsoundness in the type system. For example, it would be incorrect to annotate the type parameter of ListIterator as covariant, because ListIterator supports a set operation.
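Java's built-in array covariance shows the kind of hole an incorrectly applied @Covariant annotation would open, except that for arrays the bad write is at least caught at run time; a plain-Java sketch of the analogy:

```java
public class CovarianceDemo {
    // Arrays are covariant in Java, so this unsound write compiles;
    // it is rejected only dynamically, with an ArrayStoreException.
    static boolean writeCaughtAtRuntime() {
        Integer[] ints = new Integer[1];
        Object[] objs = ints;            // legal: Integer[] is a subtype of Object[]
        try {
            objs[0] = "not an Integer";  // compiles, but fails at run time
            return false;
        } catch (ArrayStoreException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(writeCaughtAtRuntime()); // true
        // A type parameter wrongly marked @Covariant would allow the
        // analogous write with no run-time check at all.
    }
}
```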
### 24.1.7 Method type argument inference and type qualifiers
Sometimes method type argument inference does not interact well with type qualifiers. In such situations, you might need to provide explicit method type arguments, for which the syntax is as follows:
Collections.</*@MyTypeAnnotation*/ Object>sort(l, c);
This uses Java’s existing syntax for specifying a method call’s type arguments.
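The same syntax works in plain Java whenever inference needs help; a runnable sketch, with the annotation position shown in a comment:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ExplicitTypeArgs {
    static List<String> sortedCopy(List<String> l) {
        List<String> copy = new ArrayList<>(l);
        // Explicit method type argument; with the Checker Framework one could
        // write Collections.</*@MyTypeAnnotation*/ String>sort(copy) instead.
        Collections.<String>sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        System.out.println(sortedCopy(List.of("b", "c", "a"))); // [a, b, c]
    }
}
```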
## 24.2 Qualifier polymorphism
This section describes the original Checker Framework scheme for qualifier polymorphism. Section 24.3 describes an alternative scheme that uses qualifier parameters.
The Checker Framework supports type qualifier polymorphism for methods, which permits a single method to have multiple different qualified type signatures. This is similar to Java’s generics, but is used in situations where you cannot use Java generics. If you can use generics, you typically do not need to use a polymorphic qualifier such as @PolyNull.
To use a polymorphic qualifier, just write it on a type. For example, you can write @PolyNull anywhere in a method that you would write @NonNull or @Nullable. A polymorphic qualifier can be used on a method signature or body. It may not be used on a class or field.
A method written using a polymorphic qualifier conceptually has multiple versions, somewhat like a template in C++ or the generics feature of Java. In each version, each instance of the polymorphic qualifier has been replaced by the same other qualifier from the hierarchy. See the examples below in Section 24.2.1.
The method body must type-check with all signatures. A method call is type-correct if it type-checks under any one of the signatures. If a call matches multiple signatures, then the compiler uses the most specific matching signature for the purpose of type-checking. This is the same as Java’s rule for resolving overloaded methods.
To define a polymorphic qualifier, mark the definition with @PolymorphicQualifier. For example, @PolyNull is a polymorphic type qualifier for the Nullness type system:
@PolymorphicQualifier
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
public @interface PolyNull { }
See Section 24.2.5 for a way you can sometimes avoid defining a new polymorphic qualifier.
### 24.2.1 Examples of using polymorphic qualifiers
As an example of the use of @PolyNull, method Class.cast returns null if and only if its argument is null:
@PolyNull T cast(@PolyNull Object obj) { ... }
This is like writing:
@NonNull T cast( @NonNull Object obj) { ... }
@Nullable T cast(@Nullable Object obj) { ... }
except that the latter is not legal Java, since it defines two methods with the same Java signature.
As another example, consider
// Returns null if either argument is null.
@PolyNull T max(@PolyNull T x, @PolyNull T y);
which is like writing
@NonNull T max( @NonNull T x, @NonNull T y);
@Nullable T max(@Nullable T x, @Nullable T y);
At a call site, the most specific applicable signature is selected.
Another way of thinking about which one of the two max variants is selected is that the nullness annotations of (the declared types of) both arguments are unified to a type that is a supertype of both, also known as the least upper bound or lub. If both arguments are @NonNull, their unification (lub) is @NonNull, and the method return type is @NonNull. But if even one of the arguments is @Nullable, then the unification (lub) is @Nullable, and so is the return type.
### 24.2.2 Relationship to subtyping and generics
Qualifier polymorphism has the same purpose and plays the same role as Java’s generics. If a method is written using generics, it usually does not need qualifier polymorphism. If you have legacy code that is not written generically, and you cannot change it to use generics, then you can use qualifier polymorphism to achieve a similar effect, with respect to type qualifiers only. The base Java types are still treated non-generically.
Why not use ordinary subtyping to handle qualifier polymorphism? Ordinarily, when you want a method to work on multiple types, you can just use Java’s subtyping. For example, the equals method is declared to take an Object as its first formal parameter, but it can be called on a String or a Date because those are subtypes of Object.
In most cases, the same subtyping mechanism works with type qualifiers. String is a supertype of @Interned String, so a method toUpperCase that is declared to take a String parameter can also be called on a @Interned String argument.
You use qualifier polymorphism in the same cases when you would use Java’s generics. (If you can use Java’s generics, then that is often better and you don’t also need to use qualifier polymorphism.) One example is when you want a method to operate on collections with different types of elements. Another example is when you want two different formal parameters to be of the same type, without constraining them to be one specific type.
### 24.2.3 Using multiple polymorphic qualifiers in a method signature
Usually, it does not make sense to write only a single instance of a polymorphic qualifier in a method definition: if you write one instance of (say) @PolyNull, then you should use at least two. (An exception is a polymorphic qualifier on an array element type; this section ignores that case, but see below for further details.)
For example, there is no point to writing
void m(@PolyNull Object obj)
which expands to
void m(@NonNull Object obj)
void m(@Nullable Object obj)
This is no different (in terms of which calls to the method will type-check) than writing just
void m(@Nullable Object obj)
The benefit of polymorphic qualifiers comes when one is used multiple times in a method, since then each instance turns into the same type qualifier. Most frequently, the polymorphic qualifier appears on at least one formal parameter and also on the return type. It can also be useful to have polymorphic qualifiers on (only) multiple formal parameters, especially if the method side-effects one of its arguments. For example, consider
void moveBetweenStacks(Stack<@PolyNull Object> s1, Stack<@PolyNull Object> s2) {
s1.push(s2.pop());
}
In this example, if it is acceptable to rewrite your code to use Java generics, the code can be even cleaner:
<T> void moveBetweenStacks(Stack<T> s1, Stack<T> s2) {
s1.push(s2.pop());
}
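A runnable version of the generic variant, using java.util.Stack:

```java
import java.util.Stack;

public class StackMover {
    // Both stacks share the element type T, so moving an element
    // between them is statically safe.
    static <T> void moveBetweenStacks(Stack<T> s1, Stack<T> s2) {
        s1.push(s2.pop());
    }

    static String moveOne() {
        Stack<String> src = new Stack<>();
        Stack<String> dst = new Stack<>();
        src.push("x");
        moveBetweenStacks(dst, src);
        return dst.peek();
    }

    public static void main(String[] args) {
        System.out.println(moveOne()); // x
    }
}
```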
### 24.2.4 Using a single polymorphic qualifier on an element type
There is an exception to the general rule that a polymorphic qualifier should be used multiple times in a signature. It can make sense to use a polymorphic qualifier just once, if it is on an array or generic element type.
For example, consider a routine that returns the index, in an array, of a given element:
public static int indexOf(@PolyNull Object[] a, @Nullable Object elt) { ... }
If @PolyNull were replaced with either @Nullable or @NonNull, then one of these safe client calls would be rejected:
@Nullable Object[] a1;
@NonNull Object[] a2;
indexOf(a1, someObject);
indexOf(a2, someObject);
Of course, it would be better style to use a generic method, as in either of these signatures:
public static <T extends @Nullable Object> int indexOf(T[] a, @Nullable Object elt) { ... }
public static <T extends @Nullable Object> int indexOf(T[] a, T elt) { ... }
The examples in this section use arrays, but analogous collection examples exist.
These examples show that use of a single polymorphic qualifier may be necessary in legacy code, but can often be avoided by use of better code style.
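A runnable plain-Java version of the generic signature, where nullability is handled with Objects.equals rather than annotations:

```java
import java.util.Objects;

public class IndexOfDemo {
    // Generic indexOf: each call site fixes T, so arrays of nullable and
    // non-null elements are both accepted.
    static <T> int indexOf(T[] a, Object elt) {
        for (int i = 0; i < a.length; i++) {
            if (Objects.equals(a[i], elt)) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        String[] withNull = {null, "b"};
        System.out.println(indexOf(withNull, "b"));  // 1
        System.out.println(indexOf(withNull, null)); // 0
    }
}
```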
### 24.2.5 The @PolyAll qualifier applies to every type system
Each type system has its own polymorphic type qualifier. If some method is qualifier-polymorphic over every type qualifier hierarchy, then it is tedious, and leads to an explosion in the number of type annotations, to place every @Poly* qualifier on that method.
For example, a method that only performs == on array elements will work no matter what the array’s element types are:
/** Searches for the first occurrence of the given element in the array,
* testing for equality using == (not the equals method). */
public static int indexOfEq(@PolyAll Object[] a, @Nullable Object elt) {
for (int i=0; i<a.length; i++)
if (elt == a[i])
return i;
return -1;
}
The @PolyAll qualifier takes an optional integer argument so that you can specify multiple, independent polymorphic type qualifiers. For example, the method below also works no matter what qualifier appears on its second parameter, independently of the first. This signature is overly restrictive:
/** Returns true if every element of the array is identical to the
* given value, testing for equality using == (not the equals method). */
public static boolean eltwiseEqualUsingEq(@PolyAll Object[] a, @PolyAll Object elt) {
for (int i=0; i<a.length; i++)
if (elt != a[i])
return false;
return true;
}
That signature requires the element type annotation to be identical for the two arguments. For example, it forbids this invocation:
@Mutable Object[] x;
@Immutable Object y;
... eltwiseEqualUsingEq(x, y) ...
A better signature lets the two arrays’ element types vary independently:
public static boolean eltwiseEqualUsingEq(@PolyAll(1) Object[] a, @PolyAll(2) Object elt)
Note that in this case, the @Nullable annotation on elt’s type is no longer necessary, since it is subsumed by @PolyAll.
The @PolyAll annotation at a location l applies to every type qualifier hierarchy for which no explicit qualifier is written at location l. For example, a declaration like @PolyAll @NonNull Object elt is polymorphic over every type system except the nullness type system, for which the type is fixed at @NonNull. That would be the proper declaration for elt if the body had used elt.equals(a[i]) instead of elt == a[i].
## 24.3 Qualifier parameters
This section describes qualifier parameters, which are the new, more-powerful qualifier polymorphism scheme. As of February 2015, only the Tainting Checker (Chapter 8) and the Regex Checker (Chapter 9) support qualifier parameters. Other checkers with qualifier polymorphism support use the original qualifier polymorphism scheme (Section 24.2).
Qualifier parameters provide a way for you to re-use the same code with different type qualifiers in a type-safe manner.
Qualifier parameters are very similar to Java generics, so if you understand the benefits of generics and how to use them, you will find qualifier parameters natural. Both mechanisms are used on classes and methods where different instances of the class have different types. Without generics or qualifier parameters, the types of the members would have to be overly general, which would cause information loss, compiler warnings, the need for casts, and potentially run-time errors. Generics parameterize a class or method with a type, so that a client can specialize the definition with a type as in List<Integer> or List<String>. By contrast, qualifier parameters enable a client to specialize the definition with just a qualifier as in MyClass⟪@Regex⟫ or MyClass⟪@NonNull⟫.
### 24.3.1 Motivation for qualifier parameters
As an example of a problem that qualifier parameters solve, consider the Holder class below. In some uses of Holder, the item field holds a @Tainted String value, and in other uses of Holder, the item field holds an @Untainted String value. The only declaration of item that is consistent with all uses is @Tainted String, which is a supertype of @Untainted String. When an @Untainted String value is put in a Holder, a cast is required when the value is later retrieved.
class Holder {
@Tainted String item; // overly-general declaration, leads to casts
}
// taintedHolder can hold both @Tainted and @Untainted values
Holder taintedHolder = new Holder();
taintedHolder.item = getTaintedValue();
@Tainted String taintedString = taintedHolder.item; // OK; type-checks with the Tainting Checker.
// The programmer intends untaintedHolder to hold only @Untainted values
Holder untaintedHolder = new Holder();
untaintedHolder.item = getUntaintedValue();
@Untainted String untaintedString = untaintedHolder.item; // safe code, but Tainting Checker compile-time error.
// A cast makes the assignment type-check, but casts are unsound and error-prone.
String untaintedString = (@Untainted String) untaintedHolder.item;
untaintedHolder.item = getTaintedValue(); // An error that we would like the type system to catch
Qualifier parameters allow sound type-checking of this code without the use of casts.
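Java generics solve the analogous problem for base types, which is a useful way to build intuition for qualifier parameters; a plain-Java sketch:

```java
// Plain-Java analogy: a type parameter plays the role that a qualifier
// parameter plays for qualifiers.
public class GenericHolder<T> {
    T item;

    // Each instantiation of the holder is checked soundly at compile
    // time, and reads need no cast.
    static String roundTrip(String s) {
        GenericHolder<String> h = new GenericHolder<>();
        h.item = s;
        // h.item = Integer.valueOf(1); // would be a compile-time error
        return h.item;                  // no cast, unlike a holder of Object
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("some value"));
    }
}
```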
### 24.3.2 Overview of qualifier parameters
These following examples add qualifier parameters to Holder from Section 24.3.1 to allow sound type-checking.
For clarity, this section displays qualifier parameters using an idealized syntax using double angle brackets, ⟪...⟫. Note that this is not the actual syntax you will use in source code, which is described in Section 24.3.4.
In the qualifier parameter system, a class can be declared to have one or more qualifier parameters. For example, a qualifier parameter can be added to the Holder class:
class Holder ⟪Q⟫ {
}
This declares that Holder takes one qualifier parameter, named Q.
Q can be referenced inside the Holder class. In the following, item will have the same qualifier that Holder is instantiated with:
class Holder ⟪Q⟫ {
@Q String item;
}
References and instantiations of Holder specify a qualifier argument for its parameter Q.
Holder⟪Q=@Tainted⟫ taintedHolder;
Holder⟪Q=@Untainted⟫ untaintedHolder;
Qualifier parameters permit instantiating a class with the appropriate type qualifier rather than relying on an overly-general declaration. Therefore, the following code type-checks without casts:
Holder⟪Q=@Tainted⟫ taintedHolder = new Holder⟪Q=@Tainted⟫();
@Tainted String s1 = taintedHolder.item;
Holder⟪Q=@Untainted⟫ untaintedHolder = new Holder⟪Q=@Untainted⟫();
@Untainted String s2 = untaintedHolder.item;
Like generics, two classes with different qualifier parameters have no subtyping relationship:
taintedHolder = untaintedHolder; // Error: not a subtype
untaintedHolder = taintedHolder; // Error: not a subtype
Holder⟪Q=@Tainted⟫ taintedHolder2;
taintedHolder = taintedHolder2; // OK: the qualifier argument is the same for both
### 24.3.3 Qualifier parameter wildcards
As with Java generics, wildcard extends and super bounds may be used. Wildcards create a subtyping relationship between classes with qualifier parameters. See the Java tutorial at http://docs.oracle.com/javase/tutorial/java/generics/subtyping.html for more information on subtyping relationships with wildcards.
Holder⟪Q=@Tainted⟫ holder;
Holder⟪Q=? extends @Tainted⟫ holderExtends;
Holder⟪Q=? super @Tainted⟫ holderSuper;
holder = holderExtends; // Error: not a subtype
holderExtends = holder; // OK
holder = holderSuper; // Error: not a subtype
holderSuper = holder; // OK
For soundness, when a class is parameterized with a wildcard, members of a qualified class that use the parameter as their type have restrictions on their use, just as in Java. In particular, a member of a qualified class with an extends-bounded wildcard may only be set to null. A member of a qualified class with a super-bounded wildcard will always have the top type when accessed.
Holder⟪Q=? extends @Untainted⟫ holderExtends;
@Untainted String s1 = holderExtends.item; // OK
holderExtends.item = getTaintedString(); // Error: only null can be assigned to item
Holder⟪Q=? super @Untainted⟫ holderSuper;
@Untainted String s2 = holderSuper.item; // Error: item has the top type
holderSuper.item = getUntaintedString(); // OK
### 24.3.4 Syntax of qualifier parameters
The examples in Sections 24.3.2 and 24.3.3 used double angle brackets, ⟪...⟫, for qualifier parameter declarations and qualifier arguments. In real source code, qualifier parameter declarations and uses, and qualifier arguments, are specified via Java annotations.
• To declare a qualifier parameter, use @ClassTypesystemParam or @MethodTypesystemParam and give a name for the parameter, as in @ClassTaintingParam("main").
• To use a qualifier parameter, write @Var and indicate the parameter being used, as in @Var(arg="main").
• To supply a qualifier argument, write the argument annotation (e.g., @Tainted) with a param argument, as in @Tainted(param="main"), which means that @Tainted is the argument to the parameter named main.
These annotations are summarized in Figure 24.1 and are more fully explained below.
Each type system that supports qualifier parameters has its own copy of these annotations. The functionality of the annotations is the same, but since a Java file might contain annotations for multiple type systems (e.g., for both the Regex Checker and the Tainting Checker), each type system needs a separate copy so that the Checker Framework can determine which checker an annotation belongs to.
| | Generic equivalent | Idealized syntax | Actual syntax |
|---|---|---|---|
| Declare a class parameter | `class Holder<T> {}` | `class Holder⟪Q⟫ {}` | `@ClassTaintingParam("Q") class Holder {}` |
| Declare a method parameter | `<T> void do() {}` | `⟪V⟫ void do() {}` | `@MethodTaintingParam("V") void do() {}` |
| Instantiate (supply an argument) | `Holder<String>` | `Holder⟪Q=@Tainted⟫` | `@Tainted(param="Q") Holder` |
| Use a parameter | `<T> void do(T t) {}` | `⟪V⟫ void do(@V Object o) {}` | `@MethodTaintingParam("V") void do(@Var(arg="V") Object o) {}` |
| Use a parameter as an argument | `<T> void do(List<T> t) {}` | `⟪V⟫ void do(Holder⟪Q=@V⟫ h) {}` | `@MethodTaintingParam("V") void do(@Var(arg="V", param="Q") Holder o) {}` |
| Instantiate without constraints | `Holder<?>` | `Holder⟪Q=?⟫` | `@Wild(param="Q") Holder` |
| Instantiate with upper bound | `Holder<? extends String>` | `Holder⟪Q=? extends @Tainted⟫` | `@Tainted(param="Q", wildcard=Wildcard.EXTENDS) Holder` |
| Instantiate with lower bound | `Holder<? super String>` | `Holder⟪Q=? super @Tainted⟫` | `@Tainted(param="Q", wildcard=Wildcard.SUPER) Holder` |
Figure 24.1: Comparison of the syntax of Java generics, the idealized syntax used in Sections 24.3.2–24.3.3, and the actual syntax used in Java source code.
@ClassTaintingParam
Declares a qualifier parameter for a class.
// Equivalent to
class Holder ⟪Q⟫ {
}
// Declare a parameter "main"
@ClassTaintingParam("main")
class Holder {
}
// The parameter "main" can now be set
@Tainted(param="main") Holder h;
@MethodTaintingParam
Declares a qualifier parameter for a method.
Qualifier arguments to a method are never specified explicitly; they are inferred by the Checker Framework based on the parameters passed to the method invocation. Unlike Java generics, there is no way to explicitly specify method qualifier parameters on an invocation.
class Util {
// Declare a method parameter.
@MethodTaintingParam("meth")
public static @Var(arg="meth") String id(@Var(arg="meth") String in) {
return in;
}
}
// Qualifier arguments are inferred.
@Untainted String untainted = Util.id(getUntaintedString());
@Var
Declares a use of a qualifier parameter. The arg field specifies which qualifier parameter in the surrounding scope the type should get its value from. For example:
// Equivalent to
class Holder ⟪Q⟫ {
@Q String item;
}
// Declare a parameter
@ClassTaintingParam("main")
class Holder {
// item will have the qualifier that Holder is instantiated with
@Var(arg="main") String item;
}
@Tainted(param="main") Holder h1 = new @Tainted(param="main") Holder();
@Tainted String value1 = h1.item;
@Untainted(param="main") Holder h2 = new @Untainted(param="main") Holder();
@Untainted String value2 = h2.item;
The "param" field specifies that the value of the qualifier parameter specified by "arg" should be used as the parameter to another qualifier type. For example:
// Equivalent to
class Holder ⟪Q⟫ {
@Q String item;
Holder⟪Q=@Q⟫ nestedHolder;
}
@ClassTaintingParam("main")
class Holder {
// item will have the qualifier that Holder is instantiated with
@Var(arg="main") String item;
// nestedHolder will be instantiated with the same qualifier as the
// enclosing "main" parameter
@Var(arg="main", param="main") Holder nestedHolder;
}
@Tainted(param="main") Holder h1 = new @Tainted(param="main") Holder();
@Tainted(param="main") Holder nestedHolder = h1;
@Tainted String value1 = h1.nestedHolder.item;
@Untainted(param="main") Holder h2 = new @Untainted(param="main") Holder();
@Untainted(param="main") Holder nestedHolder2 = h2;
@Untainted String value2 = h2.nestedHolder.item;
@Tainted
When the param field is not set, this annotation behaves as described in Chapter 8 and indicates that the value is tainted. For example:
// The value should be considered tainted
@Tainted String tainted = getTaintedString();
When the param field is set, the annotation indicates that the @Tainted qualifier should be used as the qualifier argument to the class that it annotates. For example:
// Equivalent to Holder⟪@Tainted⟫ holder
// This declares a Holder object, whose Tainting qualifier parameter is set to @Tainted.
// Holder must have been declared to have a Tainting qualifier parameter
// by using the @ClassTaintingParam annotation.
@Tainted(param="main") Holder holder;
The wildcard field can be set to a Wildcard value. This allows qualifier parameters to act like wildcards.
// Equivalent to Holder⟪? extends @Untainted⟫
// Instantiate Holder with a wildcard parameter.
@Untainted(param="main", wildcard=Wildcard.EXTENDS) Holder extendsHolder;
// OK because of the extends bound
extendsHolder = new @Untainted(param="main") Holder();
// Error: the new Holder is not a subtype of extendsHolder
extendsHolder = new @Tainted(param="main") Holder();
@Untainted
@Untainted behaves the same as @Tainted but for untainted values.
@Wild
Declares that a class has an unknown qualifier parameter. This is useful in cases where the qualifier parameter in the class is not used or is used in very limited ways.
// Equivalent to
Holder⟪?⟫ h1 = new Holder⟪@Untainted⟫();
@Wild(param="main") Holder h1 = new @Untainted(param="main") Holder();
// Error: item is not guaranteed to be an @Untainted value.
@Untainted String s1 = h1.item;
@PolyTainted
Enables method qualifier polymorphism. When the param field is not set, @PolyTainted behaves as described in Section 24.2. For example:
class Util {
static @PolyTainted String id(@PolyTainted String in) {
return in;
}
}
@Untainted String s = Util.id(getUntaintedString()); // OK
The field param can be used to specify that the inferred qualifier parameter should be used as an argument to another parameterized type. In this mode @PolyTainted is a shorthand for a combination of @MethodTaintingParam and @Var. For example:
class Util {
static @PolyTainted(param="main") Holder id(@PolyTainted(param="main") Holder in) {
return in;
}
}
// Equivalent to this code
@MethodTaintingParam("meth")
public static @Var(arg="meth", param="main") Holder id(@Var(arg="meth", param="main") Holder in) {
return in;
}
### 24.3.5 Primary qualifiers
Type-system-specific annotations, like @Tainted or @Regex, have dual uses in the qualifier parameter system. When their "param" field is set, they are used as an argument to a qualifier parameter.
When their "param" field is not set, they apply directly to a type and not to any qualifier parameters of the type. We call the qualifier that applies directly to a type the primary qualifier. For example, an @Tainted String is a String with a tainted value, and its primary qualifier is @Tainted.
@Var can also be used to set primary qualifiers by omitting the "param" field on the annotation.
# Chapter 25 Advanced type system features
This chapter describes features that are automatically supported by every checker written with the Checker Framework. You may wish to skim or skip this chapter on first reading. After you have used a checker for a little while and want to be able to express more sophisticated and useful types, or to understand more about how the Checker Framework works, you can return to it.
## 25.1 Invariant array types
Java’s type system is unsound with respect to arrays. That is, the Java type-checker approves code that is unsafe and will cause a run-time crash. Technically, the problem is that Java has “covariant array types”, such as treating String[] as a subtype of Object[]. Consider the following example:
String[] strings = new String[] {"hello"};
Object[] objects = strings;
objects[0] = new Object();
String myString = strings[0];
The above code puts an Object in the array strings and thence in myString, even though myString = new Object() should be, and is, rejected by the Java type system. Java prevents corruption of the JVM by doing a costly run-time check at every array assignment; nonetheless, it is undesirable to learn about a type error only via a run-time crash rather than at compile time.
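The run-time failure can be demonstrated in plain Java, with no Checker Framework involved. The following is an illustrative sketch (the class name and the tryStore helper are invented here) showing that the JVM's array-store check rejects the unsound store only at run time:

```java
public class CovariantArrays {
    // Attempts to store a plain Object into the given array; returns true
    // if the store succeeded, false if the JVM's run-time array-store
    // check threw ArrayStoreException.
    static boolean tryStore(Object[] objects) {
        try {
            objects[0] = new Object();
            return true;
        } catch (ArrayStoreException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String[] strings = new String[] {"hello"};
        // Java accepts this assignment because it treats String[]
        // as a subtype of Object[] (covariant array types).
        Object[] objects = strings;
        System.out.println(tryStore(objects));           // false: rejected at run time
        System.out.println(tryStore(new Object[] {"x"})); // true: Object[] accepts any Object
    }
}
```

The compile-time type system raises no objection to either call; only the run-time check distinguishes them, which is exactly the gap that -AinvariantArrays closes.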
When you pass the -AinvariantArrays command-line option, the Checker Framework is stricter than Java, in the sense that it treats arrays invariantly rather than covariantly. This means that a type system built upon the Checker Framework is sound: you get a compile-time guarantee without the need for any run-time checks. But it also means that the Checker Framework rejects code that is similar to what Java unsoundly accepts. The guarantee and the compile-time checks are about your extended type system. The Checker Framework does not reject the example code above, which contains no type annotations.
Java’s covariant array typing is sound if the array is used in a read-only fashion: that is, if the array’s elements are accessed but the array is not modified. However, this fact about read-only usage is not built into any of the type-checkers except those that are specifically about immutability: IGJ (see Chapter 19) and Javari (see Chapter 20). Therefore, when using other type systems along with -AinvariantArrays, you will need to suppress any warnings that are false positives because the array is treated in a read-only way.
## 25.2 Context-sensitive type inference for array constructors
When you write an expression, the Checker Framework gives it the most precise possible type, depending on the particular expression or value. For example, when using the Regex Checker (Chapter 9), the string "hello" is given type @Regex String because it is a legal regular expression (whether it is meant to be used as one or not) and the string "(foo" is given the type @Unqualified String because it is not a legal regular expression.
Array constructors work differently. When you create an array with the array constructor syntax, such as the right-hand side of this assignment:
String[] myStrings = {"hello"};
then the expression does not get the most precise possible type, because doing so could cause inconvenience. Rather, its type is determined by the context in which it is used: the left-hand side if it is in an assignment, the declared formal parameter type if it is in a method call, etc.
In particular, if the expression {"hello"} were given the type @Regex String[], then the assignment would be illegal! But the Checker Framework gives the type String[] based on the assignment context, so the code type-checks.
If you prefer a specific type for a constructed array, you can indicate that either in the context (change the declaration of myStrings) or in a new construct (change the expression to new @Regex String[] {"hello"}).
## 25.3 The effective qualifier on a type (defaults and inference)
A checker sometimes treats a type as having a slightly different qualifier than what is written on the type — especially if the programmer wrote no qualifier at all. Most readers can skip this section on first reading, because you will probably find the system simply “does what you mean”, without forcing you to write too many qualifiers in your program. In particular, qualifiers in method bodies are extremely rare.
Most of this section is applicable only to source code that is being checked by a checker. When the compiler reads a .class file that was checked by a checker, the .class file contains the explicit or defaulted annotations from the source code and no defaulting is necessary. When the compiler reads a .class file that was not checked by a checker, the .class file contains only explicit annotations and defaulting might be necessary; see Section 25.3.5 for these rules.
The following steps determine the effective qualifier on a type — the qualifier that the checkers treat as being present.
1. If a type qualifier is present in the source code, that qualifier is used.
2. The type system adds implicit qualifiers. This happens whether or not the programmer has written an explicit type qualifier.
Here are some examples of implicit qualifiers:
• In the Nullness type system (see Chapter 3), enum values, string literals, and method receivers are always non-null.
• In the Interning type system (see Chapter 5), string literals and enum values are always interned.
If the type has an implicit qualifier, then it is an error to write an explicit qualifier that is equal to (redundant with) or a supertype of (weaker than) the implicit qualifier. A programmer may strengthen (write a subtype of) an implicit qualifier, however.
Implicit qualifiers arise from two sources:
built-in
Implicit qualifiers can be built into a type system (Section 29.4), in which case the type system’s documentation explains all of the type system’s implicit qualifiers. Both of the above examples are built into the Nullness type system.
programmer-declared
A programmer may introduce an implicit annotation on each use of class C by writing a qualifier on the declaration of class C. If MyClass is declared as class @MyAnno MyClass {...}, then each occurrence of MyClass in the source code is treated as if it were @MyAnno MyClass.
3. If there is no explicit or implicit qualifier on a type, then a default qualifier is applied; see Section 25.3.1.
At this point (after step 3), every type has a qualifier.
4. The type system may refine a qualified type on a local variable — that is, treat it as a subtype of how it was declared or defaulted. This refinement is always sound and has the effect of eliminating false positive error messages. See Section 25.4.
### 25.3.1 Default qualifier for unannotated types
A type system designer, or an end-user programmer, can cause unannotated references to be treated as if they had a default annotation.
There are several defaulting mechanisms, for convenience and flexibility. When determining the default qualifier for a use of a type, the following rules are used in order, until one applies.
• Use the innermost user-written @DefaultQualifier, as explained in this section.
• Use the default specified by the type system designer (Section 29.3.4); this is usually CLIMB-to-top (Section 25.3.2).
• Use @Unqualified, which the framework inserts to avoid ambiguity and simplify the programming interface for type system designers. Users do not have to worry about this detail, but type system implementers can rely on the fact that some qualifier is present.
The end-user programmer specifies a default qualifier by writing the @DefaultQualifier annotation on a package, class, method, or variable declaration. The argument to @DefaultQualifier is the String name of an annotation. It may be a short name like "NonNull", if an appropriate import statement exists. Otherwise, it should be fully-qualified, like "org.checkerframework.checker.nullness.qual.NonNull". The optional second argument indicates where the default applies. If the second argument is omitted, the specified annotation is the default in all locations. See the Javadoc of DefaultQualifier for details.
For example, using the Nullness type system (Chapter 3):
import org.checkerframework.framework.qual.*; // for DefaultQualifier[s]
import org.checkerframework.checker.nullness.qual.NonNull;
@DefaultQualifier(NonNull.class)
class MyClass {
public boolean compile(File myFile) { // myFile has type "@NonNull File"
if (!myFile.exists()) // no warning: myFile is non-null
return false;
@Nullable File srcPath = ...; // must annotate to specify "@Nullable File"
...
if (srcPath.exists()) // warning: srcPath might be null
...
}
@DefaultQualifier(Mutable.class)
public boolean isJavaFile(File myFile) { // myFile has type "@Mutable File"
...
}
}
If you wish to write multiple @DefaultQualifier annotations at a single location, use @DefaultQualifiers instead. For example:
@DefaultQualifiers({
@DefaultQualifier(NonNull.class),
@DefaultQualifier(Mutable.class)
})
If @DefaultQualifier[s] is placed on a package (via the package-info.java file), then it applies to the given package and all subpackages.
Recall that an annotation on a class definition indicates an implicit qualifier (Section 25.3) that can only be strengthened, not weakened. This can lead to unexpected results if the default qualifier applies to a class definition. Thus, you may want to put explicit qualifiers on class declarations (which prevents the default from taking effect), or exclude class declarations from defaulting.
When a programmer omits an extends clause at a declaration of a type parameter, the default still applies to the implicit upper bound. For example, consider these two declarations:
class C<T> { ... }
class C<T extends Object> { ... } // identical to previous line
The two declarations are treated identically by Java, and the default qualifier applies to the Object upper bound whether it is implicit or explicit. (The @NonNull default annotation applies only to the upper bound in the extends clause, not to the lower bound in the inexpressible implicit super void clause.)
### 25.3.2 Defaulting rules and CLIMB-to-top
Each type system defines a default qualifier. For example, the default qualifier for the Nullness Checker is @NonNull. That means that when a user writes a type such as Date, the Nullness Checker interprets it as @NonNull Date.
The type system applies that default qualifier to most but not all types. In particular, unless otherwise stated, every type system uses the CLIMB-to-top rule. This rule states that the top qualifier in the hierarchy is applied to the CLIMB locations: Casts, Locals, Instanceof, and (some) iMplicit Bounds. For example, when the user writes a type such as Date in such a location, the Nullness Checker interprets it as @Nullable Date (because @Nullable is the top qualifier in the hierarchy, see Figure 3.1).
The CLIMB-to-top rule is used only for unannotated source code that is being processed by a checker. For unannotated libraries (code read by the compiler in .class or .jar form), the checker uses conservative defaults (Section 25.3.5).
The rest of this section explains the rationale and implementation of CLIMB-to-top.
Here is the rationale for CLIMB-to-top:
• Local variables are defaulted to top because type refinement (Section 25.4) is applied to local variables. If a local variable starts as the top type, then the Checker Framework refines it to the best (most specific) possible type based on assignments to it. As a result, a programmer rarely writes an explicit annotation on any of those locations.
Variables defaulted to top include local variables, resource variables in the try-with-resources construct, variables in for statements, and catch arguments (known as exception parameters in the Java Language Specification). Exception parameters need to have the top type because exceptions of arbitrary qualified types can be thrown and the Checker Framework does not provide runtime checks.
• Cast and instanceof types are not really defaulted to top. Rather, they are given the same type as their argument, which is the most specific possible type. That would also have been the effect if they were given the top type and then flow-sensitively refined to the type of their argument.
• Implicit upper bounds are defaulted to top to allow them to be instantiated in any way. If a user declared class C<T> { ... }, then we assume that the user intended to allow any instantiation of the class, and the declaration is interpreted as class C<T extends @Nullable Object> { ... } rather than as class C<T extends @NonNull Object> { ... }. The latter would forbid instantiations such as C<@Nullable String>, or would require rewriting of code. On the other hand, if a user writes an explicit bound such as class C<T extends D> { ... }, then the user intends some restriction on instantiation and can write a qualifier on the upper bound as desired.
This rule means that the upper bound of class C<T> is defaulted differently than the upper bound of class C<T extends Object>. It would be more confusing for “Object” to be defaulted differently in class C<T extends Object> and in an instantiation C<Object>, and for the upper bounds to be defaulted differently in class C<T extends Object> and class C<T extends Date>.
• Implicit lower bounds are defaulted to the bottom type, again to allow maximal instantiation. Note that Java does not allow a programmer to express both the upper and lower bounds of a type, but the Checker Framework allows the programmer to specify either or both; see Section 24.1.2.
Here is how the CLIMB-to-top rule is expressed for the Nullness Checker:
@DefaultQualifierInHierarchy
@DefaultFor({ DefaultLocation.EXCEPTION_PARAMETER })
public @interface NonNull { }
public @interface Nullable { }
In the Nullness type system, exception parameters are always non-null (a thrown exception is never null), so @DefaultFor({ DefaultLocation.EXCEPTION_PARAMETER }) on @NonNull overrides the CLIMB-to-top rule.
A type system designer can specify defaults that differ from the CLIMB-to-top rule. In addition, a user may choose a different rule for defaults using the @DefaultQualifier annotation; see Section 25.3.1.
### 25.3.3 Inherited defaults
In certain situations, it would be convenient for an annotation on a superclass member to be automatically inherited by subclasses that override it. This feature would reduce both annotation effort and program comprehensibility. In general, a program is read more often than it is edited/annotated, so the Checker Framework does not currently support this feature. Here are more detailed justifications:
• Currently, a user can determine the annotation on a parameter or return value by looking at a single file. If annotations could be inherited from supertypes, then a user would have to examine all supertypes to understand the meaning of an unannotated type in a given file.
• Different annotations might be inherited from a supertype and an interface, or from two interfaces. Presumably, the subtype’s annotations would be stronger than either (the greatest lower bound in the type system), or an error would be thrown if no such annotations existed.
If these issues can be resolved, then the feature may be added in the future. Or, it may be added optionally, and each type-checker implementation can enable it if desired.
### 25.3.4 Inherited wildcard annotations
If a wildcard is unbounded and has no annotation (e.g. List<?>), the annotations on the wildcard’s bounds are copied from the type parameter to which the wildcard is an argument. For example, the two wildcards in the declarations below are equivalent.
class MyList<@Nullable T extends @Nullable Object> {}
MyList<?> listOfNullables;
MyList<@Nullable ? extends @Nullable Object> listOfNullables;
We copy these annotations because wildcards must be within the bounds of their corresponding type parameter. Therefore, there would be many false positive type.argument.type.incompatible warnings if the bounds of a wildcard were defaulted differently from the bounds of its corresponding type parameter. Here is another example:
class MyList<@Regex(5) T extends @Regex(1) Object> {}
MyList<?> listOfRegexes;
MyList<@Regex(5) ? extends @Regex(1) Object> listOfRegexes;
Note, this applies only to unbounded wildcards. The two wildcards in the following example are equivalent.
class MyList<@Nullable T extends @Nullable Object> {}
MyList<? extends Object> listOfNonNulls;
MyList<@NonNull ? extends @NonNull Object> listOfNonNulls2;
Note, the upper bound of the wildcard ? extends Object is defaulted to @NonNull using the CLIMB-to-top rule (see Section 25.3.2).
### 25.3.5 Default qualifiers for .class files (conservative library defaults)
(Note: Currently, the conservative library defaults presented in this section are off by default and can be turned on by supplying the -AuseDefaultsForUncheckedCode=bytecode command-line option. In a future release, they will be turned on by default and it will be possible to turn them off by supplying the -AuseDefaultsForUncheckedCode=-bytecode command-line option.)
The defaulting rules presented so far apply to source code that is read by the compiler. When the compiler reads a .class file, different defaulting rules apply.
If the checker was run during the compiler execution that created the .class file, then there is no need for defaults: the .class file has an explicit qualifier at each type use. (Furthermore, unless warnings were suppressed, those qualifiers are guaranteed to be correct.) When you are performing pluggable type-checking, it is best to ensure that the compiler only reads such .class files. Section 28.1 discusses how to create annotated libraries.
If the checker was not run during the compiler execution that created the .class file, then the .class file contains only the type qualifiers that the programmer wrote explicitly. (Furthermore, there is no guarantee that these qualifiers are correct, since they have not been checked.) In this case, each checker decides what qualifier to use for the locations where the programmer did not write an annotation. Unless otherwise noted, the choice is:
• For method parameters and lower bounds, use the bottom qualifier (see Section 29.3.5).
• For method return values, fields, and upper bounds, use the top qualifier (see Section 29.3.5).
These choices are conservative. They are likely to cause many false-positive type-checking errors, which will help you to know which library methods need annotations. You can then write those library annotations (see Chapter 28) or alternately suppress the warnings (see Section 26).
For example, an unannotated method
String concatenate(String p1, String p2)
in a classfile would be interpreted as
@Top String concatenate(@Bottom String p1, @Bottom String p2)
There is no single possible default that is sound for fields. In the rare circumstance that there is a mutable public field in an unannotated library, the Checker Framework may fail to warn about code that can misbehave at run time. The Checker Framework developers are working to improve handling of mutable public fields in unannotated libraries.
## 25.4 Automatic type refinement (flow-sensitive type qualifier inference)
The checkers soundly treat certain variables and expressions as having a subtype of their declared or defaulted (Section 25.3.1) type. This functionality reduces your burden of annotating types in your program and eliminates some false positive warnings, but it never introduces unsoundness nor causes an error to be missed.
By default all checkers automatically incorporate type refinement. Most of the time, users don’t have to think about, and may not even notice, type refinement. (And most readers can skip reading this section of the manual, except possibly the examples in Section 25.4.1.) The checkers simply do the right thing even when a programmer omits an annotation on a local variable, or when a programmer writes an unnecessarily general type in a declaration.
The functionality has a variety of names: automatic type refinement, flow-sensitive type qualifier inference, local type inference, and sometimes just “flow”.
If you find examples where you think a value should be inferred to have (or not have) a given annotation, but the checker does not do so, please submit a bug report (see Section 32.2) that includes a small piece of Java code that reproduces the problem.
### 25.4.1 Type refinement examples
Suppose you write
@Nullable String myVar;
...
if (myVar != null) {
myVar.hashCode();
}
The Nullness Checker issues a warning whenever a method such as hashCode() is called on a possibly-null value, which may result in a null pointer exception. However, the Nullness Checker does not issue a warning for the call myVar.hashCode() in the code above. Within the body of the if test, the type of myVar is @NonNull String, even though myVar is declared as @Nullable String.
Here is another example:
@Nullable String myVar;
... // myVar has type @Nullable String
myVar = "hello";
... // myVar has type @NonNull String
myVar.hashCode();
...
myVar = myMap.get(someKey);
... // myVar has type @Nullable String
The Nullness Checker does not issue a warning for the call myVar.hashCode() above because after the assignment, the type-checker treats myVar as having type @NonNull String, which is more precise than the programmer-written type.
Flow-sensitive type refinement applies to every checker, including new checkers that you write. Here is an example for the Regex Checker (Chapter 9):
void m2(@Unqualified String s) {
s = RegexUtil.asRegex(s, 2); // asRegex throws error if arg is not a regex
// with the given number of capturing groups
... // s now has type "@Regex(2) String"
}
As a further example, consider this code, along with comments indicating whether the Nullness Checker (Chapter 3) issues a warning. Note that the same expression may yield a warning or not depending on its context.
// Requires an argument of type @NonNull String
void parse(@NonNull String toParse) { ... }
// Argument does NOT have a @NonNull type
void lex(@Nullable String toLex) {
parse(toLex); // warning: toLex might be null
if (toLex != null) {
parse(toLex); // no warning: toLex is known to be non-null
}
parse(toLex); // warning: toLex might be null
toLex = new String(...);
parse(toLex); // no warning: toLex is known to be non-null
}
This example shows the general rules for when the Nullness Checker (Chapter 3) can automatically determine that certain variables are non-null, even if they were explicitly or by default annotated as nullable. The checker treats a variable or expression as @NonNull:
• starting at the time that it is either assigned a non-null value or checked against null (e.g., via an assertion, if statement, or being dereferenced)
• until it might be re-assigned (e.g., via an assignment that might affect this variable, or via a method call that might affect this variable).
The inference indicates when a variable can be treated as having a subtype of its declared type — for instance, when an otherwise nullable type can be treated as a @NonNull one. The inference never treats a variable as a supertype of its declared type (e.g., an expression with declared type @NonNull type is never inferred to be treated as possibly-null).
### 25.4.2 Types that are not refined
Array element types and generic arguments are never changed by type refinement. Changing these components of a type never yields a subtype of the declared type. For example, List<Number> is not a subtype of List<Object>. Similarly, the Checker Framework does not treat Number[] as a subtype of Object[]. For details, see Section 24.1.6 and Section 25.1.
### 25.4.3 Run-time tests and type refinement
Some type systems support a run-time test that the Checker Framework can use to refine types within the scope of a conditional such as if, after an assert statement, etc.
Whether a type system supports such a run-time test depends on whether the type system is computing properties of data itself, or properties of provenance (the source of the data). An example of a property about data is whether a string is a regular expression. An example of a property about provenance is units of measure: there is no way to look at the representation of a number and determine whether it is intended to represent kilometers or miles.
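The distinction can be made concrete in plain Java. The following sketch (the class and method names are invented for illustration; the isRegex helper mirrors the kind of check a utility like RegexUtil performs) shows why a data property is testable at run time while a provenance property is not:

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class ProvenanceVsData {
    // A property of the data itself is testable at run time: the
    // characters of the string determine whether it is a legal regex.
    static boolean isRegex(String s) {
        try {
            Pattern.compile(s);
            return true;
        } catch (PatternSyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isRegex("hello")); // true: a legal regular expression
        System.out.println(isRegex("(foo"));  // false: unclosed group

        // A property of provenance has no run-time test: nothing in the
        // representation of this value records whether it was meant as
        // kilometers or miles, so a Units-style qualifier cannot be
        // checked by inspecting the value.
        double distance = 42.0;
        System.out.println(distance); // 42.0
    }
}
```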
Type systems that support a run-time test are:
Type systems that do not currently support a run-time test, but could do so with some additional implementation work, are
Type systems that cannot support a run-time test are:
• Initialization Checker to ensure all fields are set in the constructor (see Chapter 3.8)
• Fake Enum Checker to allow type-safe fake enum patterns (see Chapter 7)
• Tainting Checker for trust and security errors (see Chapter 8)
• GUI Effect Checker to ensure that non-GUI threads do not access the UI, which would crash the application (see Chapter 14)
• Units Checker to ensure operations are performed on correct units of measurement (see Chapter 15)
• Aliasing Checker to identify whether expressions have aliases (see Chapter 17)
• Linear Checker to control aliasing and prevent re-use (see Chapter 18)
• IGJ Checker for mutation errors (incorrect side effects), based on the IGJ type system (see Chapter 19)
• Javari Checker for mutation errors (incorrect side effects), based on the Javari type system (see Chapter 20)
• Subtyping Checker for customized checking without writing any code (see Chapter 22)
### 25.4.4 Fields and flow-sensitive analysis
Flow-sensitive analysis infers the type of fields in some restricted cases:
• A final initialized field: Type inference is performed for final fields that are initialized to a compile-time constant at the declaration site; so the type of protocol is @NonNull String in the following declaration:
public final String protocol = "https";
Such an inferred type may leak to the public interface of the class. If you wish to override such behavior, you can explicitly insert the desired annotation, e.g.,
public final @Nullable String protocol = "https";
• Within method bodies: Type inference is performed for fields in the context of method bodies, like local variables. Consider the following example, where updatedAt is a nullable field:
class DBObject {
@Nullable Date updatedAt;
void m() {
// updatedAt is @Nullable, so warning about .getTime()
... updatedAt.getTime() ... // warning about possible NullPointerException
if (updatedAt == null)
updatedAt = new Date();
// updatedAt is now @NonNull, so .getTime() call is OK
... updatedAt.getTime() ...
}
}
Note that a method call between the null test and the second getTime() call could invalidate the inferred non-null type of updatedAt, because any method might side-effect the field; see Section 25.4.5.
### 25.4.5 Side effects, determinism, purity, and flow-sensitive analysis
As described above, a checker can use a refined type for an expression from the time when the checker infers that the value has that refined type, until the checker can no longer support that inference.
The refined type begins at a test (such as if (myvar != null) ...) or an assignment. If the assignment occurs within a method body, you can write a postcondition annotation such as @EnsuresNonNull.
The refined type ends at an assignment or possible assignment. Any method call has the potential to side-effect any field, so calling a method typically causes the checker to discard its knowledge of the refined type. This is undesirable if the method doesn’t actually re-assign the field.
There are three annotations, collectively called purity annotations, that you can use to help express what effects a method call does not have. Usually, you only need to use @SideEffectFree.
@SideEffectFree
indicates that the method has no externally-visible side effects.
@Deterministic
indicates that if the method is called multiple times with identical arguments, then it returns the identical result.
@Pure
indicates that the method is both @SideEffectFree and @Deterministic.
The Javadoc of the annotations describes their semantics and how they are checked. This manual section gives examples and supplementary information.
For example, consider the following declarations and uses:
@Nullable Object myField;
int computeValue() { ... }
void m() {
...
if (myField != null) {
int result = computeValue();
myField.toString();
}
}
Ordinarily, the Nullness Checker would issue a warning regarding the toString() call, because the receiver myField might be null, according to the @Nullable annotation on the declaration of myField. Even though the code checked the value of myField, the call to computeValue might have re-set myField to null. If you change the declaration of computeValue to
@SideEffectFree
int computeValue() { ... }
then the Nullness Checker issues no warnings, because it can reason that the second occurrence of myField has the same (non-null) value as the one in the test.
As a more complex example, consider the following declaration and uses:
@Nullable Object getField(Object arg) { ... }
void m() {
...
if (x.getField(y) != null) {
x.getField(y).toString();
}
}
Ordinarily, the Nullness Checker would issue a warning regarding the toString() call, because the receiver x.getField(y) might be null, according to the @Nullable annotation in the declaration of getField. If you change the declaration of getField to
@Pure
@Nullable Object getField(Object arg) { ... }
then the Nullness Checker issues no warnings, because it can reason that the two invocations x.getField(y) have the same value, and therefore that x.getField(y) is non-null within the then branch of the if statement.
If a method is side-effect-free or pure, then it would be legal to annotate its receiver and every parameter as @ReadOnly, in the IGJ (Chapter 19) or Javari (Chapter 20) type systems. The reverse is not true, because the method might side-effect a global variable. (Also, for the case of @Pure, the method might not be deterministic.)
If you supply the command-line option -AsuggestPureMethods, then the Checker Framework will suggest methods that can be marked as @SideEffectFree, @Deterministic, or @Pure.
Currently, purity annotations are trusted. Purity annotations on called methods affect type-checking of client code. However, you can make a mistake by writing @SideEffectFree on the declaration of a method that is not actually side-effect-free or by writing @Deterministic on the declaration of a method that is not actually deterministic. To enable checking of the annotations, supply the command-line option -AcheckPurityAnnotations. It is not enabled by default because of a high false positive rate. In the future, after a new purity-checking analysis is implemented, the Checker Framework will default to checking purity annotations.
It can be tedious to annotate library methods with purity annotations such as @SideEffectFree. If you supply the command-line option -AassumeSideEffectFree, then the Checker Framework will unsoundly assume that every called method is side-effect-free. This can make flow-sensitive type refinement much more effective, since method calls will not cause the analysis to discard information that it has learned. However, this option can mask real errors. It is most appropriate when you are starting out annotating a project, or if you are using the Checker Framework to find some bugs but not to give a guarantee that no more errors exist of the given type.
A common error is:
MyClass.java:1465: error: int hashCode() in MyClass cannot override int hashCode(Object this) in java.lang.Object;
attempting to use an incompatible purity declaration
public int hashCode() {
^
found : []
required: [SIDE_EFFECT_FREE, DETERMINISTIC]
The reason for the error is that the Object class is annotated as:
class Object {
...
@Pure int hashCode() { ... }
}
(where @Pure means both @SideEffectFree and @Deterministic). Every overriding definition, including those in your program, must use at least as strong a specification; in particular, every overriding definition must be annotated as @Pure.
You can fix the definition by adding @Pure to your method definition. Alternately, you can suppress the warning. You can suppress each such warning individually using @SuppressWarnings("purity.invalid.overriding"), or you can use the -AsuppressWarnings=purity.invalid.overriding command-line argument to suppress all such warnings. In the future, the Checker Framework will support inheriting annotations from superclass definitions.
The @TerminatesExecution annotation indicates that a given method never returns. This can enable the flow-sensitive type refinement to be more precise.
### 25.4.6 Assertions
If your code contains an assert statement, then your code could behave in two different ways at run time, depending on whether assertions are enabled or disabled via the -ea or -da command-line options to java.
By default, the Checker Framework outputs warnings about any error that could happen at run time, whether assertions are enabled or disabled.
If you supply the -AassumeAssertionsAreEnabled command-line option, then the Checker Framework assumes assertions are enabled. If you supply the -AassumeAssertionsAreDisabled command-line option, then the Checker Framework assumes assertions are disabled. You may not supply both command-line options. It is uncommon to supply either one.
These command-line arguments have no effect on processing of assert statements whose message contains the text @AssumeAssertion; see Section 26.2.
## 25.5 Writing Java expressions as annotation arguments
Sometimes, it is necessary to write a Java expression as the argument to an annotation, such as a postcondition annotation like @EnsuresNonNull (Section 25.4.5).
The expression is a subset of legal Java expressions:
• the receiver object, this.
• the receiver object as seen from the superclass, super. This can be used to refer to fields shadowed in the subclass (although shadowing fields is discouraged in Java).
• a formal parameter. Write # followed by the one-based parameter index. For example: #1, #3. It is not permitted to write #0 to refer to the receiver object; use this instead.
• a static variable. Write the class name and the variable, as in System.out.
• a field of any expression. For example: next, this.next, #1.next.
• an array access. For example: this.myArray[i], vals[#1].
• literals: string, integer, long, null.
• a method invocation on any expression. This even works for overloaded methods and methods with type parameters. For example: m1(x, y.z, #2), a.m2("hello").
You may optionally omit a leading “this.”, just as in Java. Thus, this.next and next are equivalent.
One unusual feature is that the method call is allowed to have side effects. If a specification is going to be checked at run time via assertions, then the specification must not use methods with side effects. But the Checker Framework works at compile time, so it allows side effects. The current implementation will never be able to prove such a contract, but it is able to use the information (when checking the method body with preconditions, or when checking the caller's code with postconditions). This can be useful for annotating trusted methods precisely (e.g., java.io.BufferedReader.ready()).
(A side note: The formal parameter syntax #1 is less natural in source code than writing the formal parameter name. This syntax is necessary for separate compilation, when an annotated method has already been compiled into a .class file and a client of that method is later compiled. In the .class file, no formal parameter name information is available, so it is necessary to use a number to indicate a formal parameter.)
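To make the expression syntax concrete, here is a sketch of postcondition annotations that take Java expressions as arguments. The @EnsuresNonNull declaration below is a simplified local stand-in for the Checker Framework annotation of the same name, declared inline so the sketch compiles without the framework on the classpath; the Registry class, its cache field, and its methods are invented for illustration.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the Checker Framework's @EnsuresNonNull,
// so this sketch is self-contained. The real annotation lives in
// org.checkerframework.checker.nullness.qual.
@Retention(RetentionPolicy.RUNTIME)
@interface EnsuresNonNull { String[] value(); }

class Registry {
    Map<String, String> cache; // may be null before initCache() runs

    // Postcondition: after this method returns, this.cache is non-null.
    // "cache" is shorthand for "this.cache", per the expression syntax above.
    @EnsuresNonNull("cache")
    void initCache() {
        if (cache == null) {
            cache = new HashMap<>();
        }
    }

    // "#1" refers to the first formal parameter, so this postcondition
    // promises that the argument's cache field is initialized on return.
    @EnsuresNonNull("#1.cache")
    static void initFor(Registry r) {
        r.initCache();
    }
}
```

With the real annotation, a checker would use these contracts when type-checking callers: after `r.initCache()`, a dereference of `r.cache` would need no null test.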
Limitations: The following Java expressions may not currently be written:
• Some literals: floats, doubles, chars, and class literals.
• String concatenation expressions.
• Mathematical operators (plus, minus, division, ...).
• Comparisons (equality, less than, etc.).
• Quantification over all array components (e.g. to express that all array elements are non-null).
## 25.6 Unused fields
In an inheritance hierarchy, subclasses often introduce new methods and fields. For example, a Marsupial (and its subclasses such as Kangaroo) might have a variable pouchSize indicating the size of the animal’s pouch. The field does not exist in superclasses such as Mammal and Animal, so Java issues a compile-time error if a program tries to access myMammal.pouchSize.
If you cannot use subtypes in your program, you can enforce similar requirements using type qualifiers. For fields, use the @Unused annotation (Section 25.6.1), which enforces that a field or method may only be accessed from a receiver expression with a given annotation (or one of its subtypes). For methods, annotate the receiver parameter this; then a method call type-checks only if the actual receiver is of the specified type.
Also see the discussion of typestate checkers, in Chapter 23.1.
### 25.6.1 @Unused annotation
A Java subtype can have more fields than its supertype. For example:
class Animal { }
class Mammal extends Animal { ... }
class Marsupial extends Mammal {
int pouchSize; // pouch capacity, in cubic centimeters
...
}
You can simulate the same effect for type qualifiers: the @Unused annotation on a field declares that the field may not be accessed via a receiver of the given qualified type (or any supertype). For example:
class Animal {
@Unused(when=Mammal.class)
int pouchSize; // pouch capacity, in cubic centimeters
...
}
@interface Mammal { }
@interface Marsupial { }
@Marsupial Animal joey = ...;
... joey.pouchSize ... // OK
@Mammal Animal mae = ...;
... mae.pouchSize ... // compile-time error
The above class declaration is like writing
class @Mammal-Animal { ... }
class @Marsupial-Animal {
int pouchSize; // pouch capacity, in cubic centimeters
...
}
# Chapter 26 Suppressing warnings
When the Checker Framework reports a warning, it’s best to change the code or its annotations, to eliminate the warning. Alternately, you can suppress the warning, which does not change the code but prevents the Checker Framework from reporting this particular warning to you.
You may wish to suppress checker warnings because of unannotated libraries or un-annotated portions of your own code, because of application invariants that are beyond the capabilities of the type system, because of checker limitations, because you are interested in only some of the guarantees provided by a checker, or for other reasons. Suppressing a warning is similar to writing a cast in a Java program: the programmer knows more about the type than the type system does and uses the warning suppression or cast to convey that information to the type system.
You can suppress a single warning message (or those in a single method or class) by using the following mechanisms:
• the @SuppressWarnings annotation (Section 26.1), or
• the @AssumeAssertion string in an assert message (Section 26.2).
You can suppress warnings throughout the codebase by using the following mechanisms:
• the -AsuppressWarnings command-line option (Section 26.3),
• the -AskipUses and -AonlyUses command-line options (Section 26.4),
• the -AskipDefs and -AonlyDefs command-line options (Section 26.5),
• the -AuseDefaultsForUncheckedCode=source command-line option (Section 28.1),
• the -Alint command-line option (Section 26.6), or
• not using the -processor command-line option (Section 26.7).
Some type checkers can suppress warnings via
• checker-specific mechanisms (Section 26.8).
We now explain these mechanisms in turn.
## 26.1 @SuppressWarnings annotation
You can suppress specific errors and warnings by use of the @SuppressWarnings annotation, for example @SuppressWarnings("interning") or @SuppressWarnings("nullness"). Section 26.1.1 explains the syntax of the argument string.
A @SuppressWarnings annotation may be placed on program declarations such as a local variable declaration, a method, or a class. It suppresses all warnings related to the given checker, for that program element. Section 26.1.2 discusses where the annotation may be written in source code.
Section 26.1.3 gives best practices for writing @SuppressWarnings annotations.
### 26.1.1 @SuppressWarnings syntax
The @SuppressWarnings annotation takes a string argument.
The most common usage is @SuppressWarnings("checkername"), as in @SuppressWarnings("interning") or @SuppressWarnings("nullness"). The argument checkername is in lower case and is derived from the way you invoke the checker. For example, if you invoke a checker as javac -processor MyNiftyChecker ..., then you would suppress its error messages with @SuppressWarnings("mynifty"). (An exception is the Subtyping Checker, for which you use the annotation name; see Section 22.1). While not recommended, using @SuppressWarnings("all") will suppress all warnings for all checkers.
The @SuppressWarnings argument string can also be of the form checkername:messagekey, in which case only errors/warnings relating to the given message key are suppressed. For example, cast.unsafe is the messagekey for warnings about an unsafe cast, and cast.redundant is the messagekey for warnings about a redundant cast.
Each warning from the compiler gives the most specific suppression key that can be used to suppress that warning. An example is dereference.of.nullable in
MyFile.java:107: error: [dereference.of.nullable] dereference of possibly-null reference myList
^
With the -AshowSuppressWarningKeys command-line option, the compiler lists every key that would suppress the warning, not just the most specific one.
### 26.1.2 Where @SuppressWarnings can be written
@SuppressWarnings is a declaration annotation, so it may be placed on program declarations such as a local variable declaration, a method, or a class. It cannot be used on statements, expressions, or types. To reduce the scope of a @SuppressWarnings annotation, it is sometimes desirable to extract part of an expression into a local variable, so that warnings can be suppressed just for that local variable’s initializer expression.
As an example, consider suppressing a warning at a cast that you know is safe. Here is an example that uses the Tainting Checker (Chapter 8); assume that expr has compile-time (declared) type @Tainted String, but you know that the run-time value of expr is untainted.
@SuppressWarnings("tainting:cast.unsafe") // expr is untainted because ... [explanation goes here]
@Untainted String myvar = expr;
It would have been illegal to write
@Untainted String myvar;
...
@SuppressWarnings("tainting:cast.unsafe") // expr is untainted because ...
myvar = expr;
This does not work because Java does not permit annotations (such as @SuppressWarnings) on assignments or other statements or expressions.
### 26.1.3 Good practices when suppressing warnings
#### Suppress warnings in the smallest possible scope
If a particular expression causes a false positive warning, you should extract that expression into a local variable and place a @SuppressWarnings annotation on the variable declaration, rather than suppressing warnings for a larger expression or an entire method body. See Section 26.1.2.
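The same extract-to-a-local pattern works with any warning key; the sketch below uses plain javac's "unchecked" key rather than a checker-specific one, so it compiles with a standard compiler. The legacyItems method is a made-up legacy API for illustration; the point is that the suppression is attached to the single variable declaration whose initializer triggers the warning, not to the whole method.

```java
import java.util.ArrayList;
import java.util.List;

class SmallScope {
    // Hypothetical legacy API returning a raw List (name invented for this sketch).
    static List legacyItems() {
        List l = new ArrayList();
        l.add("a");
        l.add("b");
        return l;
    }

    static int countItems() {
        // The suppression covers only this one declaration, not the whole method:
        @SuppressWarnings("unchecked") // safe: legacyItems() only ever adds Strings
        List<String> items = (List<String>) legacyItems();
        return items.size();
    }
}
```

Any other unchecked operation elsewhere in countItems would still be reported, which is exactly the benefit of the small scope.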
#### Use a specific argument to @SuppressWarnings
It is best to use the most specific possible message key to suppress just a specific error that you know to be a false positive. The checker outputs this message key when it issues an error. If you use a broader @SuppressWarnings annotation, then it may mask other errors that you needed to know about.
The example of Section 26.1.2 could have been written as any one of the following, with the last one being the best style:
@SuppressWarnings("tainting") // suppresses all tainting-related warnings
@SuppressWarnings("tainting:cast") // suppresses tainting warnings about casts
@SuppressWarnings("tainting:cast.unsafe") // suppresses tainting warnings about unsafe casts
#### Justify why the warning is a false positive
A @SuppressWarnings annotation asserts that the code is actually correct or safe (that is, no undesired behavior will occur), even though the type system is unable to prove that the code is correct or safe.
Whenever you write a @SuppressWarnings annotation, you should also write, typically on the same line, a code comment explaining why the code is actually correct. In some cases you might also justify why the code cannot be rewritten in a simpler way that would be amenable to type-checking.
This documentation will help you and others to understand the reason for the @SuppressWarnings annotation. It will also help if you decide to audit your code to verify all the warning suppressions.
## 26.2 @AssumeAssertion string in an assert message
You can suppress a warning by asserting that some property is true, and placing the string @AssumeAssertion(warningkey) in the assertion message.
For example, in this code:
assert x != null : "@AssumeAssertion(nullness)";
... x.f ...
the Nullness Checker assumes that x is non-null from the assert statement forward, and so the expression x.f cannot throw a null pointer exception.
The assert expression must be an expression that would affect flow-sensitive type qualifier refinement (Section 25.4), if the expression appeared in a conditional test. Each type system has its own rules about what type refinement it performs.
The warning key is exactly as in the @SuppressWarnings annotation (Section 26.1). The same good practices apply as for @SuppressWarnings annotations, such as writing a comment justifying why the assumption is safe (Section 26.1.3).
The -AassumeAssertionsAreEnabled and -AassumeAssertionsAreDisabled command-line options (Section 25.4.6) do not affect processing of assert statements that have @AssumeAssertion in their message. Writing @AssumeAssertion means that the assertion would succeed if it were executed, and the Checker Framework makes use of that information regardless of the -AassumeAssertionsAreEnabled and -AassumeAssertionsAreDisabled command-line options.
### 26.2.1 Suppressing warnings and defensive programming
This section explains the distinction between two different uses for assertions (and for related methods like JUnit’s Assert.assertNotNull).
Assertions are commonly used for two distinct purposes: documenting how the program works and debugging the program when it does not work correctly. By default, the Checker Framework assumes that each assertion is used for debugging: the assertion might fail at run time, and the programmer wishes to be informed at compile time about such run-time errors. On the other hand, if you write the @AssumeAssertion string in the assert message, then the Checker Framework assumes that you have used some other technique to verify that the assertion can never fail at run time, so the checker assumes the assertion passes and does not issue a warning.
Distinguishing the purpose of each assertion is important for precise type-checking. Suppose that a programmer encounters a failing test, adds an assertion to aid debugging, and fixes the test. The programmer leaves the assertion in the program if the programmer is worried that the program might fail in a similar way in the future. The Checker Framework should not assume that the assertion succeeds — doing so would defeat the very purpose of the Checker Framework, which is to detect errors at compile time and prevent them from occurring at run time.
On the other hand, assertions sometimes document facts that a programmer has independently verified to be true, and the Checker Framework can leverage these assertions in order to avoid issuing false positive warnings. The programmer marks such assertions with the @AssumeAssertion string in the assert message. Only do so if you are sure that the assertion always succeeds at run time.
Sometimes methods such as NullnessUtils.castNonNull are used instead of assertions. Just as for assertions, you can treat them as debugging aids or as documentation. If you know that a particular codebase uses a nullness-checking method not for defensive programming but to indicate facts that are guaranteed to be true (that is, these assertions will never fail at run time), then you can suppress warnings related to it. Annotate its definition just as NullnessUtils.castNonNull is annotated (see the source code for the Checker Framework). Also, be sure to document the intention in the method’s Javadoc, so that programmers do not accidentally misuse it for defensive programming.
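A minimal sketch of what such a nullness-checking helper might look like at run time. This is not the Checker Framework's actual NullnessUtils source; the real declaration additionally carries nullness annotations so that checkers treat the return value as non-null, which this plain-Java sketch omits.

```java
final class NullnessUtil {
    private NullnessUtil() {}

    // Asserts that ref is non-null and returns it. Intended for values the
    // programmer has independently verified are never null at this point;
    // a null argument indicates a misuse, so it fails loudly.
    static <T> T castNonNull(T ref) {
        if (ref == null) {
            throw new AssertionError("Misuse of castNonNull: called with a null argument");
        }
        return ref;
    }
}
```

As the surrounding text notes, such a helper should be documented as stating a verified fact, not used as defensive programming.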
If you are annotating a codebase that already contains precondition checks, such as:
public String get(String key, String def) {
checkNotNull(key, "key"); //NOI18N
...
}
then you should mark the appropriate parameter as @NonNull (which is the default). This will prevent the checker from issuing a warning about the checkNotNull call.
## 26.3 -AsuppressWarnings command-line option
Supplying the -AsuppressWarnings command-line option is equivalent to writing a @SuppressWarnings annotation on every class that the compiler type-checks. The argument to -AsuppressWarnings is a comma-separated list of warning suppression keys, as in -AsuppressWarnings=purity,uninitialized.
When possible, it is better to write a @SuppressWarnings annotation with a smaller scope, rather than using the -AsuppressWarnings command-line option.
## 26.4 -AskipUses and -AonlyUses command-line options
You can suppress all errors and warnings at all uses of a given class, or suppress all errors and warnings except those at uses of a given class. (The class itself is still type-checked, unless you also use the -AskipDefs or -AonlyDefs command-line option, see Section 26.5).
Set the -AskipUses command-line option to a regular expression that matches class names (not file names) for which warnings and errors should be suppressed. Or, set the -AonlyUses command-line option to a regular expression that matches class names (not file names) for which warnings and errors should be emitted; warnings about uses of all other classes will be suppressed.
For example, suppose that you use “-AskipUses=^java\.” on the command line (with appropriate quoting) when invoking javac. Then the checkers will suppress all warnings related to classes whose fully-qualified name starts with java., such as all warnings relating to invalid arguments and all warnings relating to incorrect use of the return value.
To suppress all errors and warnings related to multiple classes, you can use the regular expression alternative operator “|”, as in “-AskipUses="java\.lang\.|java\.util\."” to suppress all warnings related to uses of classes belonging to the java.lang or java.util packages.
You can supply both -AskipUses and -AonlyUses, in which case the -AskipUses argument takes precedence, and -AonlyUses does further filtering but does not add anything that -AskipUses removed.
Warning: Use the -AonlyUses command-line option with care, because it can have unexpected results. For example, if the given regular expression does not match classes in the JDK, then the Checker Framework will suppress every warning that involves a JDK class such as Object or String. The meaning of -AonlyUses may be refined in the future. Oftentimes -AskipUses is more useful.
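The class-name matching that these options perform can be mimicked with java.util.regex, which may help when crafting a pattern. The helper below is only an illustration of which fully-qualified names a given pattern would select (assuming find-style matching, so `^` anchors a pattern at the start of the name); it is not the Checker Framework's own implementation.

```java
import java.util.regex.Pattern;

class SkipUsesDemo {
    // Returns true if the regular expression matches somewhere in the
    // fully-qualified class name, analogous to how a -AskipUses pattern
    // selects classes whose uses are skipped.
    static boolean skipped(String className, String regex) {
        return Pattern.compile(regex).matcher(className).find();
    }
}
```

For example, the pattern `^java\.` from the text selects java.util.List but not mypackage.Main, and the alternation `java\.lang\.|java\.util\.` selects classes from either package.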
## 26.5 -AskipDefs and -AonlyDefs command-line options
You can suppress all errors and warnings in the definition of a given class, or suppress all errors and warnings except those in the definition of a given class. (Uses of the class are still type-checked, unless you also use the -AskipUses or -AonlyUses command-line option, see Section 26.4).
Set the -AskipDefs command-line option to a regular expression that matches class names (not file names) in whose definition warnings and errors should be suppressed. Or, set the -AonlyDefs command-line option to a regular expression that matches class names (not file names) whose definitions should be type-checked.
For example, if you use “-AskipDefs=^mypackage\.” on the command line (with appropriate quoting) when invoking javac, then the definitions of classes whose fully-qualified name starts with mypackage. will not be checked.
Another way not to type-check a file is not to pass it on the compiler command-line: the Checker Framework type-checks only files that are passed to the compiler on the command line, and does not type-check any file that is not passed to the compiler. The -AskipDefs and -AonlyDefs command-line options are intended for situations in which the build system is hard to understand or change. In such a situation, a programmer may find it easier to supply an extra command-line argument, than to change the set of files that is compiled.
A common scenario for using the arguments is when you are starting out by type-checking only part of a legacy codebase. After you have verified the most important parts, you can incrementally check more classes until you are type-checking the whole thing.
## 26.6 -Alint command-line option
The -Alint option enables or disables optional checks, analogously to javac’s -Xlint option. Each of the distributed checkers supports at least the following lint options:
• cast:unsafe (default: on) warn about unsafe casts that are not checked at run time, as in ((@NonNull String) myref). Such casts are generally not necessary when flow-sensitive local type refinement is enabled.
• cast:redundant (default: on) warn about redundant casts that are guaranteed to succeed at run time, as in ((@NonNull String) "m"). Such casts are not necessary, because the target expression of the cast already has the given type qualifier.
• cast Enable or disable all cast-related warnings.
• all Enable or disable all lint warnings, including checker-specific ones if any. Examples include redundantNullComparison for the Nullness Checker (see Section 3.1) and dotequals for the Interning Checker (see Section 5.3). This option does not enable/disable the checker’s standard checks, just its optional ones.
• none The inverse of all: disable or enable all lint warnings, including checker-specific ones if any.
To activate a lint option, write -Alint= followed by a comma-delimited list of check names. If a check name in the list is preceded by a hyphen (-), that warning is disabled. For example, to disable all lint options except redundant casts, you can pass -Alint=-all,cast:redundant on the command line.
Only the last -Alint option is used; all previous -Alint options are silently ignored. In particular, this means that -Alint=all -Alint=cast:redundant is not equivalent to -Alint=-all,cast:redundant.
## 26.7 No -processor command-line option
You can also compile parts of your code without use of the -processor switch to javac. No checking is done during such compilations, so no warnings are issued related to pluggable type-checking.
## 26.8 Checker-specific mechanisms
Finally, some checkers have special rules. For example, the Nullness checker (Chapter 3) uses the special castNonNull method to suppress warnings (Section 3.4.1). This manual also explains special mechanisms for suppressing warnings issued by the Fenum Checker (Section 7.4) and the Units Checker (Section 15.5).
# Chapter 27 Handling legacy code
Section 2.4.1 describes a methodology for applying annotations to legacy code. This chapter tells you what to do if, for some reason, you cannot change your code in such a way as to eliminate a checker warning.
Also recall that you can convert checker errors into warnings via the -Awarns command-line option; see Section 2.2.2.
## 27.1 Checking partially-annotated programs: handling unannotated code
Sometimes, you wish to type-check only part of your program. You might focus on the most mission-critical or error-prone part of your code. When you start to use a checker, you may not wish to annotate your entire program right away. You may not have enough knowledge to annotate poorly-documented libraries that your program uses.
If annotated code uses unannotated code, then the checker may issue warnings. For example, the Nullness Checker (Chapter 3) will warn whenever an unannotated method result is used in a non-null context:
@NonNull Object myvar = unannotated_method(); // WARNING: unannotated_method may return null
If the call can return null, you should fix the bug by removing the @NonNull annotation from your own code.
If the library call never returns null, there are several ways to eliminate the compiler warnings.
1. Annotate unannotated_method in full. This approach provides the strongest guarantees, but may require you to annotate additional methods that unannotated_method calls. See Chapter 28 for a discussion of how to annotate libraries for which you have no source code.
2. Annotate only the signature of unannotated_method, and suppress warnings in its body. Two ways to suppress the warnings are via a @SuppressWarnings annotation or by not running the checker on that file (see Section 26).
3. Suppress all warnings related to uses of unannotated_method via the skipUses processor option (see Section 26.4). Since this can suppress more warnings than you may expect, it is usually better to annotate at least the method’s signature. If you choose the boundary between the annotated and unannotated code wisely, then you only have to annotate the signatures of a limited number of classes/methods (e.g., the public interface to a library or package).
Chapter 28 discusses adding annotations to signatures when you do not have source code available. Section 26 discusses suppressing warnings.
## 27.2 Backward compatibility with earlier versions of Java
Sometimes, your code needs to be compiled by people who are using a Java 5/6/7 compiler, which does not support type annotations. You can handle this situation by writing annotations in comments (Sections 27.2.1–27.2.3).
If your code just needs to be run by people who are not using a Java 8 JVM, supply an appropriate -target command-line option to javac. As discussed in Section 27.2.4, the disadvantage is that this makes it more difficult for clients of your library to use pluggable type-checking to verify their own code against the .class or .jar files that you supply; Section 27.2.5 gives a partial solution.
A Java 4 compiler does not permit use of annotations. A Java 5/6/7 compiler only permits annotations on declarations — it does not permit annotations on generic arguments, casts, extends clauses, method receivers, etc.
So that your code can be compiled by any Java compiler (for any version of the Java language), you may write any single annotation inside a /**/ Java comment, as in List</*@NonNull*/ String>. The Checker Framework compiler treats the code exactly as if you had not written the /* and */. In other words, the Checker Framework compiler will recognize the annotation (when it is targeting a Java 8 or later JVM), but your code will still compile with any Java compiler.
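To see that a standard compiler really does treat such annotations as ordinary comments, the following plain-Java class compiles and runs on any Java version; the /*@NonNull*/ markers have no effect at run time, while the Checker Framework compiler would additionally read them as type annotations.

```java
import java.util.ArrayList;
import java.util.List;

class CommentAnnotations {
    // To a standard Java compiler, /*@NonNull*/ is just a comment between
    // tokens, so this declaration is ordinary List<String>; the Checker
    // Framework compiler treats it as List<@NonNull String>.
    static List</*@NonNull*/ String> names() {
        List</*@NonNull*/ String> out = new ArrayList</*@NonNull*/ String>();
        out.add("alice");
        return out;
    }
}
```

This is what makes the annotations-in-comments style backward compatible: the same source file type-checks under the Checker Framework and compiles unchanged everywhere else.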
Compiler flag -XDTA:noannotationsincomments causes the compiler to ignore annotation comments. With this compiler flag, the Checker Framework compiler behaves like a standard Java 8 compiler that does not support annotations in comments. If your code already contains comments of the form /*@...*/ that look like type annotations, and you want the Checker Framework compiler not to try to interpret them, then you can either selectively add spaces to the comments or use -XDTA:noannotationsincomments to turn off all annotation comments.
Note: Annotations in comments is a feature of the javac compiler that is distributed along with the Checker Framework. It is not supported by the mainline OpenJDK javac. This is the key difference between the Checker Framework compiler and the OpenJDK compiler.
#### Annotations in comments do not appear in Java 5/6/7 .class files
The Checker Framework compiler ignores annotations in comments when targeting a Java 5/6/7 JVM, for example when the -target 7 command-line option is supplied.
It would be possible for the Checker Framework compiler to read the annotations in comments and place them in the Java 5/6/7 .class file so that they are available when type-checking client code. However, this would have two problems. First, it would only be useful to the Checker Framework compiler, because a standard Java 8 compiler will not look for type annotations in Java 5/6/7 bytecode. Second, the type annotations make reference to parts of the Java 8 JDK, such as ElementType.TYPE_USE. Therefore, trying to run the .class file on a Java 5/6/7 JVM would cause warnings or crashes.
There is a more powerful mechanism that permits arbitrary code to be written in a comment. Format the comment as “/*>>>*/”, with the first three characters of the comment being greater-than signs. As with annotations in comments, the commented code is ignored by ordinary compilers but is treated like code by the Checker Framework compiler.
This mechanism is intended for two purposes. First, it supports the receiver (this parameter) syntax. For example, to specify a method that does not modify its receiver:
public boolean method1(/*>>> @ReadOnly MyClass this*/) { ... }
public boolean method2(/*>>> @ReadOnly MyClass this, */ String argument) { ... }
Second, it can be used for import statements:
/*>>>
import org.checkerframework.checker.nullness.qual.*;
import org.checkerframework.checker.regex.qual.*;
*/
If the import statements are not commented out, then every time you compile the code (even when not doing pluggable type-checking), the annotation definitions (e.g., the checker.jar or checker-qual.jar file) must be on the classpath. (This is done automatically if you use the Checker Framework compiler.) Commenting out the import statements also eliminates Eclipse warnings about unused import statements, if all uses of the imported qualifier are themselves in comments and thus invisible to Eclipse.
A third use is for writing multiple annotations inside one comment, as in /*>>> @NonNull @Interned */ String s;. However, it is better style to write multiple annotations each inside its own comment, as in /*@NonNull*/ /*@Interned*/ String s;.
It would be possible to abuse the /*>>>...*/ mechanism to inject code only when using the Checker Framework compiler. Doing so is not a sanctioned use of the mechanism.
### 27.2.3 Migrating away from annotations in comments
Suppose that your codebase currently uses annotations in comments, but you wish to remove the comment characters around your annotations, because in the future you will use only compilers that support type annotations and your code will only run on Java 8 or later JVMs. This Unix command removes the comment characters, for all Java files in the current working directory or any subdirectory.
find . -type f -name '*.java' -print \
| xargs grep -l -P '/\*\s*@([^ */]+)\s*\*/' \
| xargs perl -pi.bak -e 's|/\*\s*@([^ */]+)\s*\*/|@\1|g'
You can customize this command:
• To process comments with embedded spaces and asterisks, change two instances of “[^ */]” to “[^/]”.
• To ignore comments with leading or trailing spaces, remove the four instances of “\s*”.
• To not make backups, remove “.bak”.
The command does not handle the >>> comments; you will need to adapt the above command to do so, or remove them in another way.
### 27.2.4 No modular type-checking when targeting Java 5/6/7
The Checker Framework’s type annotations utilize a Java 8 feature that allows them to be placed on any type use, including generic type parameters as in List<@NonNull String>. A downside is that use of these type annotations creates a dependency on Java 8, which means that the compiled program requires a Java 8 or later JDK at run time.
To ensure that your program can run on a Java 5/6/7 JVM, use a command-line option such as -target 7 when doing normal compilation to produce classfiles. Before doing so, you will do pluggable type-checking, using the -target 8 command-line option (or no -target command-line option) to javac; you may wish to supply the -proc:only command-line argument so that the type-checking step does not overwrite existing classfiles.
Here are the disadvantages of this approach:
• It produces classfiles that contain no trace of your type annotations. This means that modular type-checking (also known as separate compilation) is not possible.
You need to compile your entire application every time you do pluggable type-checking, rather than just compiling a subset of the files. Furthermore, clients of your code cannot do pluggable type-checking to verify that they are using your code correctly, unless they re-compile your code (or at least all the interfaces that they use) every time that they compile their own.
• It makes pluggable type-checking a different step than “real” compilation, rather than both happening at the same time. You will do pluggable type-checking first, and when it works or when you want to create a binary to distribute to others, you will compile with an ordinary Java compiler.
One way to enable clients to do pluggable type-checking is to provide a version of your library compiled for Java 8 or later, with the type annotations. Clients will do type-checking against this version of the library, but will do normal compilation and execution using the Java 5, 6, or 7 version of your library.
Section 27.2.5 gives an alternative approach with its own advantages and disadvantages.
### 27.2.5 Distributing declaration annotations instead of type annotations
If it is important to you to distribute Java 5/6/7 classfiles against which clients can do some type-checking, this section gives a way to do so.
The idea is to use annotations that are Java 5/6/7 declaration annotations. This approach requires you to use annotations that are declared in different packages than usual and that have slightly different names.
• At code locations that are legal for both declaration and type annotations (such as for fields, method returns, and method parameters), write annotations normally (not in comments).
• At locations where a declaration annotation is not permitted (such as generic type parameters and extends clauses), write annotations in comments.
Here are some disadvantages of this approach:
• You need to use nonstandard names for some annotations, and to remember which annotations to write in comments and which to write normally.
• It produces classfiles that contain only some of your type annotations — the ones that were not written in comments. If your code uses type annotations at locations such as generic type parameters and extends clauses, then modular type-checking will not observe them; the implications of that were described above.
Here are more details about the approach. Suppose you wish to run the Nullness Checker using Java 6 or 7 declaration annotations rather than type annotations. You have two options.
1. At locations where declaration annotations are possible, use aliased annotations from other projects. For example, the aliased annotations for the Nullness Checker are listed in Section 3.7.
At locations where only type annotations are possible, use the “*Type” compatibility annotations from package org.checkerframework.checker.nullness.compatqual in comments. For example, the Nullness Checker declares these type annotations: @NullableType, @NonNullType, @PolyNullType, @MonotonicNonNullType, and @KeyForType.
2. At locations where declaration annotations are possible, use “*Decl” compatibility annotations from package org.checkerframework.checker.nullness.compatqual. For example, the Nullness Checker declares these declaration annotations: @NullableDecl, @NonNullDecl, @PolyNullDecl, @MonotonicNonNullDecl, and @KeyForDecl.
At locations where only type annotations are possible, use the regular Checker Framework type annotations in comments.
Notice that in each case, the declaration annotations and type annotations have distinct names. This enables a programmer to import both sets of annotations without a name conflict. But, you must remember to use the correct name, depending on where the annotations are written.
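A minimal sketch of option 2 (the class and fields are hypothetical, and it assumes checker.jar, which provides the compatqual package, is on the classpath):

```java
import java.util.List;
import org.checkerframework.checker.nullness.compatqual.NonNullDecl;

public class Account {
    // A declaration annotation: legal in Java 5/6/7 classfiles, so it
    // survives into the distributed bytecode.
    @NonNullDecl Object owner;

    // A generic type argument permits only a type annotation, so the
    // regular Checker Framework annotation is written in a comment.
    List</*@NonNull*/ String> aliases;
}
```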
Eventually, when backward compatibility with Java 7 and earlier is not important, you should refactor your codebase to use only the regular Checker Framework annotations, and not to write them in comments.
# Chapter 28 Annotating libraries
If your code uses a library that does not contain type annotations, then the type-checker has no way to know the library’s behavior. The type-checker makes conservative assumptions about unannotated bytecode: it assumes that every method parameter has the bottom type annotation and that every method return type has the top type annotation (see Section 25.3.5 for details and an example). These conservative library annotations invariably lead to checker warnings. This chapter describes how to eliminate the warnings by adding annotations to the library. (Alternately, you can instead suppress all warnings related to an unannotated library by use of the -AskipUses or -AonlyUses command-line option; see Section 26.4.)
(Note: This chapter uses “library” to refer to code that is provided in .class or .jar form. You should use this approach for parts of your own codebase if you typically compile different parts separately. If your codebase is typically compiled together and you are type-checking only part of it, you can use the approach described in this chapter, or you can use command-line arguments such as -AskipUses and -AskipDefs (see Sections 26.4 and 26.5). Also, recall that the Checker Framework analyzes all, and only, the source code that is passed to it. The Checker Framework is a plug-in to the javac compiler, and it never analyzes code that is not being compiled, though it does look up annotations in the class files for code that was previously compiled.)
You make the library’s annotations known to the checkers by writing annotations in a copy of the library’s source code (or in a “stub file” if you do not have access to the source code). Given the library annotations, you have two options:
1. You can compile the library to create .class and .jar files that contain the annotations. Then, when doing pluggable type-checking, you would put those files on the classpath. When running your code, you can use either version of the library: the one you created or the original distributed version.
With this compilation approach, the syntax of the library annotations is validated ahead of time. Thus, this compilation approach is less error-prone, and the type-checker runs faster. You get correctness guarantees about the library in addition to your code. Section 28.1 describes how to compile a library.
2. You can supply the annotated library source code, or a very concise variant called a “stub file”, textually to the Checker Framework.
The stub file approach does not require you to compile the library source code. A stub file is applicable to multiple versions of a library, so the stub file does not need to be updated when a new version of the library is released. When provided by the author of the checker, a stub file is used automatically, with no need for the user to supply a command-line option. The stub file reader approach has some limitations, notably using non-standard syntax in some locations (Section 28.2.5). Section 28.2 describes how to create and use stub files.
If you write any library annotations, please share them so that they can be distributed with the Checker Framework. Sharing your annotations is useful even if the library is only partially annotated.
## 28.1 Compiling partially-annotated libraries
If you completely annotate a library, then you can compile it using a pluggable type-checker, and include the resulting .jar file on your classpath. You get a guarantee that the library contains no errors.
The rest of this section tells you how to compile a library if you partially annotate it: that is, you write annotations for some of its classes but not others. (There is another type of partial annotation, which is when you annotate method signatures but do not type-check the bodies. To do that variety of partial annotation, simply suppress warnings; see Chapter 26. You can combine the two types of partial annotation.)
When compiling a partially-annotated library, the checker needs to use normal defaulting rules (Section 25.3.2) for code you have annotated and conservative defaulting rules (Section 25.3.5) for code you have not yet annotated. You use @AnnotatedFor to indicate which classes you have annotated.
### 28.1.1 The -AuseDefaultsForUncheckedCode=source,bytecode command-line argument
When compiling a library that is not fully annotated, use command-line argument -AuseDefaultsForUncheckedCode=source,bytecode. This causes the checker to behave normally for classes with a relevant @AnnotatedFor annotation. For all other classes, the checker uses unchecked code defaults (see Section 25.3.5) for any type use with no explicit user-written annotation, and the checker issues no warnings.
The @AnnotatedFor annotation, written on a class, indicates that the class has been annotated for certain type systems. For example, @AnnotatedFor({"nullness", "regex"}) means that the programmer has written annotations for the Nullness and Regular Expression type systems. If one of those two type-checkers is run, the -AuseDefaultsForUncheckedCode=source,bytecode command-line argument has no effect and this class is treated normally: unannotated types are defaulted using normal source-code defaults and type-checking warnings are issued. @AnnotatedFor’s arguments are any string that may be passed to the -processor command-line argument: the fully-qualified class name for the checker, or a shorthand for built-in checkers (see Section 2.2.4).
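For example, a class annotated only for the Nullness and Regular Expression Checkers might begin like this (a sketch; the class name is hypothetical, and checker-qual.jar must be on the classpath):

```java
import org.checkerframework.framework.qual.AnnotatedFor;

// With -AuseDefaultsForUncheckedCode=source,bytecode, the Nullness and
// Regex Checkers treat this class normally; every other checker applies
// unchecked-code defaults to it and issues no warnings.
@AnnotatedFor({"nullness", "regex"})
public class StringPool {
    // ... fully-annotated members ...
}
```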
Whenever you compile a class using the Checker Framework, including when using the -AuseDefaultsForUncheckedCode=source,bytecode command-line argument, the resulting .class files are fully-annotated; each type use in the .class file has an explicit type qualifier for any checker that is run.
### 28.1.2 Workflow for creating or augmenting a partially-annotated library
This section describes the typical workflow for creating a partially-annotated library.
1. If it does not already exist, fork the project (if its license permits forking). Add a note, perhaps in a README, indicating how to obtain the corresponding upstream version; that will enable others to see exactly what edits you have made.
Then adjust the library’s build process, such as a Maven or Ant buildfile, so that:
• every time the build system runs the compiler, it passes the -AuseDefaultsForUncheckedCode=source,bytecode command-line option and runs every pluggable type-checker for which any annotations exist, using -processor TypeSystem1,TypeSystem2,TypeSystem3; and
• when the build system creates a .jar file, the resulting .jar file includes the contents of checker-framework/checker/dist/checker-qual.jar.
You are not adding new build targets, but modifying existing targets. The reason to run every type-checker is to verify the annotations you wrote, and to use appropriate defaults for all unannotated type uses. The reason to include the contents of checker-qual.jar is so that the resulting .jar file can be used whether or not the Checker Framework is being run.
2. Annotate some files.
When you annotate a file, annotate the whole thing, not just a few of its methods. Once the file is fully annotated, add an @AnnotatedFor({"checkername"}) annotation to its class(es), or augment an existing @AnnotatedFor annotation.
3. Build the library.
Because of the changes that you made in step 1, this will run pluggable type-checkers. If there are any compiler warnings, fix them and re-compile.
Now you have a .jar file that you can use while type-checking and at run time.
4. Tell other people about your work so that they can benefit from it.
• Please inform the Checker Framework developers about your new annotated library by opening an issue. This will let us include your annotated .jar file in directory checker-framework/checker/lib/ of the Checker Framework release.
• Encourage the library’s maintainers to accept your annotations into its main version control repository. This will make the annotations easier to maintain, the library will obtain the correctness guarantees of pluggable type-checking, and there will be no need for the Checker Framework to include an annotated version of the library.
You will probably want to write the annotations in comments, so that it is still possible to compile the library without use of Java 8.
If the library maintainers do not accept the annotations, then periodically, such as when a new version of the library is released, pull changes from upstream (the library’s main version control system) into your fork, add annotations to any newly-added methods in classes that are annotated with @AnnotatedFor, rebuild to create an updated .jar file, and inform the Checker Framework developers by opening an issue or issuing a pull request.
## 28.2 Using stub classes
A stub file contains “stub classes” that contain annotated signatures, but no method bodies. A checker uses the annotated signatures at compile time, instead of or in addition to annotations that appear in the library.
Section 28.2.3 describes how to create stub classes. Section 28.2.1 describes how to use stub classes. These sections illustrate stub classes via the example of creating a @Interned-annotated version of java.lang.String. You don’t need to repeat these steps to handle java.lang.String for the Interning Checker, but you might do something similar for a different class and/or checker.
### 28.2.1 Using a stub file
The -Astubs argument causes the Checker Framework to read annotations from annotated stub classes in preference to the unannotated original library classes. For example:
javac -processor org.checkerframework.checker.interning.InterningChecker -Astubs=String.astub:stubs MyFile.java MyOtherFile.java ...
Each stub path entry is a file or a directory; specifying a directory is equivalent to specifying every file in it whose name ends with .astub. The stub path entries are delimited by File.pathSeparator (‘:’ for Linux and Mac, ‘;’ for Windows).
A checker automatically reads the stub file jdk.astub, unless command-line option -Aignorejdkastub is supplied. (The checker author should place jdk.astub in the same directory as the Checker class, i.e., the subclass of BaseTypeVisitor.) Programmers should only use the -Astubs argument for additional stub files they create themselves.
If a method appears in more than one stub file (or twice in the same stub file), then the annotations are merged. If any of the methods have different annotations from the same hierarchy on the same type, then the annotation from the last declaration is used.
### 28.2.2 Stub file format
Every Java file is a valid stub file. However, you can omit information that is not relevant to pluggable type-checking; this makes the stub file smaller and easier for people to read and write.
As an illustration, a stub file for the Interning type system (Chapter 5) could be:
import org.checkerframework.checker.interning.qual.Interned;
package java.lang;
@Interned class Class<T> { }
class String {
@Interned String intern();
}
Note that annotations in comments are ignored in stub files.
The stub file format is allowed to differ from Java source code in the following ways:
Method bodies: The stub class does not require method bodies for classes; any method body may be replaced by a semicolon (;), as in an interface or abstract method declaration.
Method declarations: You only have to specify the methods that you need to annotate. Any method declaration may be omitted, in which case the checker reads its annotations from the library’s .class files. (If you are using a stub class, then typically the library is unannotated.)
Declaration specifiers: Declaration specifiers (e.g., public, final, volatile) may be omitted.
Return types: The return type of a method does not need to match the real method. In particular, it is valid to use java.lang.Object for every method. This simplifies the creation of stub files.
Import statements: All imports must be at the beginning of the file. The only required import statements are the ones to import type annotations. Import statements for types are optional.
Enum constants in annotations need to be either fully-qualified or imported. For example, one has to either write the enum constant ANY in fully-qualified form:
@Source(sparta.checkers.quals.FlowPermission.ANY)
or correctly import the enum class:
import sparta.checkers.quals.FlowPermission;
...
@Source(FlowPermission.ANY)
or statically import the enum constants:
import static sparta.checkers.quals.FlowPermission.*;
...
@Source(ANY)
Importing all classes from a package (import my.package.*;) only considers annotations from that package; enum types need to be explicitly imported.
Multiple classes and packages: The stub file format permits having multiple classes and packages. The packages are separated by a package statement: package my.package;. Each package declaration may occur only once; in other words, all classes from a package must appear together.
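For example, a single stub file may annotate classes from two packages (a sketch; the chosen classes and methods are illustrative):

```
import org.checkerframework.checker.nullness.qual.Nullable;

package java.io;

class File {
    @Nullable String getParent();
}

package java.util;

class Hashtable<K,V> {
    @Nullable V get(Object key);
}
```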
### 28.2.3 Creating a stub file
Every Java file is a stub file. If you have access to the Java file, then you can use the Java file as the stub file. Just add annotations to the signatures, leaving the method bodies unchanged. The stub file parser silently ignores any annotations that it cannot resolve to a type, so don’t forget the import statement.
Optionally (but highly recommended!), run the type-checker to verify that your annotations are correct. When you run the type-checker on your annotations, there should not be any stub file that also contains annotations for the class. In particular, if you are type-checking the JDK itself, then you should use the -Aignorejdkastub command-line option.
This approach retains the original documentation and source code, making it easier for a programmer to double-check the annotations. It also enables creation of diffs, easing the process of upgrading when a library adds new methods. And, the annotations are in a format that the library maintainers can even incorporate.
The downside of this approach is that the stub files are larger. This can slow down parsing.
#### If you do not have access to the Java source code
If you do not have access to the library source code, then you can create a stub file from the class file and then annotate it. The rest of this section describes this approach.
1. Create a stub file by running the stub class generator. (checker.jar and javac.jar must be on your classpath.)
cd nullness-stub
java org.checkerframework.framework.stub.StubGenerator java.lang.String > String.astub
Supply it with the fully-qualified name of the class for which you wish to generate a stub class. The stub class generator prints the stub class to standard out, so you may wish to redirect its output to a file.
2. Add import statements for the annotations. For the Interning example, you would add the following import statement at the beginning of the file:
import org.checkerframework.checker.interning.qual.*;
The stub file parser silently ignores any annotations that it cannot resolve to a type, so don’t forget the import statement. Use the -AstubWarnIfNotFound command-line option to see warnings if an entry could not be found.
3. Add annotations to the stub class. For example, you might annotate the String.intern() method as follows:
@Interned String intern();
You may also remove irrelevant parts of the stub file; see Section 28.2.2.
### 28.2.4 Troubleshooting stub libraries
#### Type-checking does not yield the expected results
By default, the stub parser silently ignores annotations on unknown classes and methods. The stub parser also silently ignores unknown annotations, so don’t forget to import any annotations.
Use command-line option -AstubWarnIfNotFound to warn whenever some element of a stub file cannot be found.
The @NoStubParserWarning annotation on a package or type in a stub file overrides the -AstubWarnIfNotFound command-line option, and no warning will be issued.
Use command-line option -AstubDebug to output debugging messages while parsing stub files, including about unknown classes, methods, and annotations. This overrides the @NoStubParserWarning annotation.
#### Problems parsing stub libraries
When using command-line option -AstubWarnIfNotFound, an error is issued if a stub file has a typo or refers to an API method that does not exist. For example:
StubParser: Method isLLowerCase(char) not found in type java.lang.Character
Fix this error by removing the extra L in the method name.
StubParser: Method enableForegroundNdefPush(Activity,NdefPushCallback) not found in type android.nfc.NfcAdapter
Fix this error by removing the method enableForegroundNdefPush(...) from the stub file, because it is not defined in class android.nfc.NfcAdapter in the version of the library you are using.
### 28.2.5 Limitations
The stub file reader has several limitations that are violations of Java 8 syntax. We will fix these in a future release.
• The receiver is written after the method parameter list, instead of as an explicit first parameter. That is, instead of
returntype methodname(@Annotations C this, params);
in a stub file one has to write
returntype methodname(params) @Annotations;
• The stub file reader does not handle nested class declarations. To work around this, it permits a top-level class to be written with a $ in its name, and applies the annotations to the appropriate nested class:
class Lib$Inner {
  // methods
}
• Annotations on fully-qualified types must be written before the package name, rather than directly on the type they qualify. However, it is usually not necessary to write the fully-qualified name.
void init(@Nullable java.security.SecureRandom random);
• Annotations on types that are inner classes must be written before the package if the type is written as a fully-qualified type, or before the inner class name if the type is written as a simple type. For example,
ProcessBuilder redirectError(@NonNull java.lang.ProcessBuilder.Redirect destination);
or
ProcessBuilder redirectError(@NonNull Redirect destination);
The code below will cause an error.
// error
ProcessBuilder redirectError(@NonNull ProcessBuilder.Redirect destination);
or
// error
ProcessBuilder redirectError(ProcessBuilder.@NonNull Redirect destination);
• Annotations can only use string, boolean, or integer literals; other literals are not yet supported.
If these limitations are a problem, then you should insert annotations in the library’s .class files instead.
## 28.3 Troubleshooting/debugging annotated libraries
Sometimes, it may seem that a checker is treating a library as unannotated even though the library has annotations. The compiler has two flags that may help you in determining whether library files are read, and if they are read whether the library’s annotations are parsed.
-verbose Outputs info about compile phases — when the compiler reads/parses/attributes/writes any file. Also outputs the classpath and sourcepath paths.
-XDTA:parser (which is equivalent to -XDTA:reader plus -XDTA:writer) Sets the internal debugJSR308 flag, which outputs information about reading and writing.
# Chapter 29 How to create a new checker
This chapter describes how to create a checker — a type-checking compiler plugin that detects bugs or verifies their absence. After a programmer annotates a program, the checker plugin verifies that the code is consistent with the annotations. If you only want to use a checker, you do not need to read this chapter.
Writing a simple checker is easy! For example, here is a complete, useful type-checker:
@SubtypeOf(Unqualified.class)
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
public @interface Encrypted {}
This checker is so short because it builds on the Subtyping Checker (Chapter 22). See Section 22.2 for more details about this particular checker. When you wish to create a new checker, it is often easiest to begin by building it declaratively on top of the Subtyping Checker, and then return to this chapter when you need more expressiveness or power than the Subtyping Checker affords.
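Assuming the @Encrypted qualifier above is compiled into a package named myqual (a hypothetical name), the declarative checker can then be run via the Subtyping Checker:

```shell
# Run the Subtyping Checker, telling it which qualifier(s) to enforce.
javac -processor org.checkerframework.common.subtyping.SubtypingChecker \
      -Aquals=myqual.Encrypted YourProgram.java
```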
You can also create your own checker by customizing a different existing checker. Specific checkers that are designed for extension (besides the Subtyping Checker) include the Fake Enumeration Checker (Chapter 7), the Units Checker (Chapter 15), and a typestate checker (Chapter 23.1). Or, you can copy and then modify a different existing checker — whether one distributed with the Checker Framework or a third-party one.
You can place your checker’s source files wherever you like. When you compile your checker, $CHECKERFRAMEWORK/framework/dist/framework.jar and $CHECKERFRAMEWORK/framework/dist/javac.jar should be on your classpath. (If you wish to modify an existing checker in place, or to place the source code for your new checker in your own private copy of the Checker Framework source code, then you need to be able to re-compile the Checker Framework, as described in Section 32.3.)
The rest of this chapter contains many details for people who want to write more powerful checkers. You do not need all of the details, at least at first. In addition to reading this chapter of the manual, you may find it helpful to examine the implementations of the checkers that are distributed with the Checker Framework. You can even create your checker by modifying one of those. The Javadoc documentation of the framework and the checkers is in the distribution and is also available online at http://types.cs.washington.edu/checker-framework/current/api/.
If you write a new checker and wish to advertise it to the world, let us know so we can mention it in the Checker Framework Manual, link to it from the webpages, or include it in the Checker Framework distribution. For examples, see Chapters 23.1 and 23.
## 29.1 Relationship of the Checker Framework to other tools
The following layered diagram (rendered here as a list, from most specific at the top to most basic at the bottom) shows the relationship among various tools. All of the tools support the Java 8 type annotation syntax. You use the Checker Framework to build pluggable type systems, and the Annotation File Utilities to manipulate .java and .class files.
1. Individual checkers: Subtyping Checker, Nullness Checker, Mutation Checker, Tainting Checker, …, your checker.
2. Base Checker (enforces subtyping rules).
3. Checker Framework (enables creation of pluggable type-checkers), alongside type inference and other tools that manipulate .java and .class files.
4. Type Annotations syntax and classfile format (“JSR 308”), which has no built-in semantics.
The Base Checker enforces the standard subtyping rules on extended types. The Subtyping Checker is a simple use of the Base Checker that supports providing type qualifiers on the command line. You usually want to build your checker on the Base Checker.
## 29.2 The parts of a checker
The Checker Framework provides abstract base classes (default implementations), and a specific checker overrides as little or as much of the default implementations as necessary. Sections 29.3–29.7 describe the components of a type system as written using the Checker Framework:
29.3 Type qualifiers and hierarchy. You define the annotations for the type system and the subtyping relationships among qualified types (for instance, that @NonNull Object is a subtype of @Nullable Object).
29.4 Type introduction rules. For some types and expressions, a qualifier should be treated as implicitly present even if a programmer did not explicitly write it. For example, in the Nullness type system every literal other than null has a @NonNull type; examples of literals include "some string" and java.util.Date.class.
Optionally, write dataflow rules to enhance flow-sensitive type qualifier inference (Section 29.5).
29.6 Type rules. You specify the type system semantics (type rules), violation of which yields a type error. There are two types of rules.
• Subtyping rules related to the type hierarchy, such as that every assignment and pseudo-assignment satisfies a subtyping relationship. Your checker automatically inherits these subtyping rules from the Base Checker (Chapter 22).
• Additional rules that are specific to your particular checker. For example, in the Nullness type system, only references with a @NonNull type may be dereferenced. You write these additional rules yourself.
29.7 Interface to the compiler. The compiler interface indicates which annotations are part of the type system, which command-line options and @SuppressWarnings annotations the checker recognizes, etc.
## 29.3 Annotations: Type qualifiers and hierarchy
A type system designer specifies the qualifiers in the type system (Section 29.3.1) and the type hierarchy that relates them. The type hierarchy — the subtyping relationships among the qualifiers — can be defined either declaratively via meta-annotations (Section 29.3.2), or procedurally through subclassing QualifierHierarchy or TypeHierarchy (Section 29.3.3).
### 29.3.1 Defining the type qualifiers
Type qualifiers are defined as Java annotations [Dar06]. In Java, an annotation is defined using the Java @interface keyword. For example:
// Define an annotation for the @NonNull type qualifier.
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
public @interface NonNull { }
Write a @Target meta-annotation on the annotation definition to indicate where the annotation may be written. All type qualifiers that users can write in source code should have the value ElementType.TYPE_USE (and optionally with the additional value of ElementType.TYPE_PARAMETER, but no other ElementType values). (An annotation that is written on an annotation definition, such as @Target, is called a meta-annotation.)
The annotations should be placed within a directory called qual, and this directory should be placed in the same directory as your Checker’s source file.
For example, the Nullness Checker’s source file is located at .../nullness/NullnessChecker.java. The NonNull qualifier is located in the directory .../nullness/qual.
The Checker Framework automatically treats any annotation that has this value and is declared in the qual package as a type qualifier. (See Section 29.7.1 for more details.)
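Putting the pieces together, a user-visible qualifier is an ordinary Java annotation. The sketch below is self-contained so it compiles with only the JDK; a real qualifier would additionally carry framework meta-annotations such as @SubtypeOf (omitted here), and the surrounding reflection is just to demonstrate what the @Target meta-annotation records:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// A minimal type qualifier. In a real checker this definition would live in
// the qual subdirectory and also carry framework meta-annotations such as
// @SubtypeOf, which are omitted here so the snippet needs only the JDK.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
@interface NonNull { }

public class QualifierDemo {
    public static void main(String[] args) {
        // Read back the @Target meta-annotation to see where users may
        // write this qualifier.
        Target t = NonNull.class.getAnnotation(Target.class);
        for (ElementType e : t.value()) {
            System.out.println(e);  // prints TYPE_USE, then TYPE_PARAMETER
        }
    }
}
```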
Your type system should include a top qualifier and a bottom qualifier (Section 29.3.5). You should also define a polymorphic qualifier @PolyMyTypeSystem (Section 24.2).
### 29.3.2 Declaratively defining the qualifier hierarchy
Declaratively, the type system designer uses two meta-annotations (written on the declaration of qualifier annotations) to specify the qualifier hierarchy.
• @SubtypeOf denotes that a qualifier is a subtype of another qualifier or qualifiers, specified as an array of class literals. For example, for any type T, @NonNull T is a subtype of @Nullable T:
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
@SubtypeOf( { Nullable.class } )
public @interface NonNull { }
@SubtypeOf accepts multiple annotation classes as an argument, permitting the type hierarchy to be an arbitrary DAG. For example, in the IGJ type system (Section 19.2), @Mutable and @Immutable induce two mutually exclusive subtypes of the @ReadOnly qualifier.
All type qualifiers, except for polymorphic qualifiers (see below and also Section 24.2), need to be properly annotated with SubtypeOf.
The top qualifier is annotated with @SubtypeOf( { } ). The top qualifier is the qualifier that is a supertype of all other qualifiers. For example, @Nullable is the top qualifier of the Nullness type system, hence is defined as:
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
@SubtypeOf( { } )
public @interface Nullable { }
If the top qualifier of the hierarchy is the unqualified type, then its children will use @SubtypeOf(Unqualified.class), but no @SubtypeOf({}) annotation on the top qualifier is necessary. For an example, see the Encrypted type system of Section 22.2.
• @PolymorphicQualifier denotes that a qualifier is a polymorphic qualifier. For example:
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
@PolymorphicQualifier
public @interface PolyNull { }
For a description of polymorphic qualifiers, see Section 24.2. A polymorphic qualifier needs no @SubtypeOf meta-annotation and need not be mentioned in any other @SubtypeOf meta-annotation.
The declarative and procedural mechanisms for specifying the hierarchy can be used together. In particular, when using the @SubtypeOf meta-annotation, further customizations may be performed procedurally (Section 29.3.3) by overriding the isSubtype method in the checker class (Section 29.7). However, the declarative mechanism is sufficient for most type systems.
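The subtype relation that @SubtypeOf declares is simply a partial order over the qualifiers. As a rough mental model (this is a toy illustration, not the framework's QualifierHierarchy API), the Nullness hierarchy above amounts to explicit supertype edges plus a reachability check:

```java
import java.util.List;
import java.util.Map;

// Toy model of a qualifier hierarchy: each qualifier maps to its declared
// direct supertypes, mirroring what @SubtypeOf declares. Illustration only;
// the framework's QualifierHierarchy class is the real API.
public class ToyHierarchy {
    static final Map<String, List<String>> directSupertypes = Map.of(
        "Nullable", List.of(),            // top qualifier: @SubtypeOf({})
        "NonNull", List.of("Nullable"));  // @SubtypeOf({Nullable.class})

    // sub is a subtype of sup if sup is reachable by following supertype edges.
    static boolean isSubtype(String sub, String sup) {
        if (sub.equals(sup)) return true;
        for (String parent : directSupertypes.get(sub)) {
            if (isSubtype(parent, sup)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isSubtype("NonNull", "Nullable"));  // true
        System.out.println(isSubtype("Nullable", "NonNull"));  // false
    }
}
```

Because @SubtypeOf accepts several classes, the same reachability idea extends unchanged to DAG-shaped hierarchies such as IGJ's.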
### 29.3.3 Procedurally defining the qualifier hierarchy
While the declarative syntax suffices for many cases, more complex type hierarchies can be expressed by overriding, in your subclass of AnnotatedTypeFactory, either createQualifierHierarchy or createTypeHierarchy (typically only one of these needs to be overridden). For more details, see the Javadoc of those methods and of the classes QualifierHierarchy and TypeHierarchy.
The QualifierHierarchy class represents the qualifier hierarchy (not the type hierarchy), e.g., Mutable is a subtype of ReadOnly. A type-system designer may subclass QualifierHierarchy to express customized qualifier relationships (e.g., relationships based on annotation arguments).
The TypeHierarchy class represents the type hierarchy — that is, relationships between annotated types, rather than merely type qualifiers, e.g., @Mutable Date is a subtype of @ReadOnly Date. The default TypeHierarchy uses QualifierHierarchy to determine all subtyping relationships. The default TypeHierarchy handles generic type arguments, array components, type variables, and wildcards in a manner similar to the standard Java subtype relationship, but taking qualifiers into consideration. Some type systems may need to override that behavior. For instance, the Java Language Specification specifies that two generic types are subtypes only if their type arguments are identical: for example, List<Date> is not a subtype of List<Object>, or of any other generic List. (In the technical jargon, the generic arguments are “invariant” or “novariant”.) The Javari type system overrides this behavior to allow some type arguments to change covariantly in a type-safe manner (e.g., List<@Mutable Date> is a subtype of List<@QReadOnly Date>).
### 29.3.4 Defining a default annotation
A type system applies a default qualifier where the user has not written a qualifier (and no implicit qualifier is applicable), as explained in Section 25.3.1.
The type system designer may specify a default annotation declaratively, using the @DefaultQualifierInHierarchy meta-annotation. Note that the default will apply to any source code that the checker reads, including stub libraries, but will not apply to compiled .class files that the checker reads.
Alternately, the type system designer may specify a default procedurally, by calling the QualifierDefaults.addCheckedCodeDefault method. You may do this even if you have declaratively defined the qualifier hierarchy; see the Nullness Checker’s implementation for an example.
### 29.3.5 Completeness of the type hierarchy
When you define a type system, its type hierarchy must be a complete lattice — that is, there must be a top type that is a supertype of all other types, and there must be a bottom type that is a subtype of all other types. Furthermore, it is best if the top type and bottom type are defined explicitly for the type system, rather than (say) reusing a qualifier from the Checker Framework such as @Unqualified.
It is possible that a single type-checker checks multiple type hierarchies. An example is the Nullness Checker, which has three separate type hierarchies, one each for nullness, initialization, and map keys. In this case, each type hierarchy would have its own top qualifier and its own bottom qualifier; they don’t all have to share a single top qualifier or a single bottom qualifier.
##### Bottom qualifier
Your type hierarchy must have a bottom qualifier — a qualifier that is a (direct or indirect) subtype of every other qualifier.
Your type system must give null the bottom type. (The only exception is if the type system has special treatment for null values, as the Nullness Checker does.) This legal code will not type-check unless null has the bottom type:
<T> T f() {
return null;
}
You don’t necessarily have to define a new bottom qualifier. You can use org.checkerframework.framework.qual.Bottom if your type system does not already have an appropriate bottom qualifier.
If your type system has a special bottom type that is used only for the null value, then users should never write the bottom qualifier explicitly. To ensure this, write @Target({}) on the definition of the bottom qualifier.
The hierarchy shown in Figure 19.1 lacks a bottom qualifier, because there is no qualifier that is a subtype of both @Immutable and @Mutable. The actual IGJ hierarchy does contain a (non-user-visible) bottom qualifier, defined like this:
@SubtypeOf({Mutable.class, Immutable.class, I.class})
@Target({}) // forbids a programmer from writing it in a program
@ImplicitFor(trees = { Kind.NULL_LITERAL, Kind.CLASS, Kind.NEW_ARRAY },
typeClasses = { AnnotatedPrimitiveType.class })
@interface IGJBottom { }
The IGJ Checker adds this qualifier in code in IGJAnnotatedTypeFactory.
##### Top qualifier
Your type hierarchy must have a top qualifier — a qualifier that is a (direct or indirect) supertype of every other qualifier. Here is the reason. The default type for local variables is the top qualifier (that type is then flow-sensitively refined depending on what values are stored in the local variable). If there is no single top qualifier, then there is no unambiguous choice to make for local variables.
Furthermore, it is most convenient to users if the top qualifier is defined by the type system. It is possible to use the framework’s @Unqualified as the top type, but this is poor practice. Users lose flexibility in expressing defaults: there is no way for a user to change the default qualifier for just that type system. If a user specifies @DefaultQualifier(Unqualified.class), then the default would apply to every type system that uses @Unqualified, which is unlikely to be desired.
## 29.4 Type factory: Implicit annotations
For some types and expressions, a qualifier should be treated as present even if a programmer did not explicitly write it. For example, every literal (other than null) has a @NonNull type.
The implicit annotations may be specified declaratively and/or procedurally.
### 29.4.1 Declaratively specifying implicit annotations
The @ImplicitFor meta-annotation indicates implicit annotations. When written on a qualifier, ImplicitFor specifies the trees (AST nodes) and types for which the framework should automatically add that qualifier.
In short, the types and trees can be specified via any combination of five fields in ImplicitFor:
• trees: an array of com.sun.source.tree.Tree.Kind, e.g., NEW_ARRAY or METHOD_INVOCATION
• types: an array of TypeKind, e.g., ARRAY or BOOLEAN
• treeClasses: an array of class literals for classes implementing Tree, e.g., LiteralTree.class or ExpressionTree.class
• typeClasses: an array of class literals for classes implementing javax.lang.model.type.TypeMirror, e.g., javax.lang.model.type.PrimitiveType. Often you should use a subclass of AnnotatedTypeMirror.
• stringPatterns: an array of regular expressions that will be matched against string literals, e.g., "[01]+" for a binary number. Useful for annotations that indicate the format of a string.
For example, consider the definitions of the @NonNull and @Nullable type qualifiers:
@SubtypeOf( { Nullable.class } )
@ImplicitFor(
types={TypeKind.PACKAGE},
typeClasses={AnnotatedPrimitiveType.class},
trees={
Tree.Kind.NEW_CLASS,
Tree.Kind.NEW_ARRAY,
Tree.Kind.PLUS,
// All literals except NULL_LITERAL:
Tree.Kind.BOOLEAN_LITERAL, Tree.Kind.CHAR_LITERAL, Tree.Kind.DOUBLE_LITERAL, Tree.Kind.FLOAT_LITERAL,
Tree.Kind.INT_LITERAL, Tree.Kind.LONG_LITERAL, Tree.Kind.STRING_LITERAL
})
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
public @interface NonNull { }
@SubtypeOf({})
@ImplicitFor(trees={Tree.Kind.NULL_LITERAL})
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
public @interface Nullable { }
For more details, see the Javadoc for the ImplicitFor annotation, and the Javadoc for the javac classes that are linked from it. You only need to understand a small amount about the javac AST, such as the Tree.Kind and TypeKind enums. All the information you need is in the Javadoc, and Section 29.11 can help you get started.
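The stringPatterns field relies on ordinary java.util.regex matching against a string literal's contents. For instance, a stringPatterns entry of "[01]+" corresponds to something like the following whole-string match (plain JDK code, shown only to illustrate which literals the pattern would pick out):

```java
import java.util.regex.Pattern;

public class StringPatternDemo {
    public static void main(String[] args) {
        // The kind of match a stringPatterns entry of "[01]+" implies:
        // the whole literal must consist of 0s and 1s.
        Pattern binary = Pattern.compile("[01]+");
        System.out.println(binary.matcher("0110").matches());  // true: binary
        System.out.println(binary.matcher("0x1F").matches());  // false: not binary
    }
}
```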
### 29.4.2 Procedurally specifying implicit annotations
The Checker Framework provides a representation of annotated types, AnnotatedTypeMirror, that extends the standard TypeMirror interface but integrates a representation of the annotations into a type representation. A checker’s type factory class, given an AST node, returns the annotated type of that expression. The Checker Framework’s abstract base type factory class, AnnotatedTypeFactory, supplies a uniform, Tree-API-based interface for querying the annotations on a program element, regardless of whether that element is declared in a source file or in a class file. It also handles default annotations, and it optionally performs flow-sensitive local type inference.
AnnotatedTypeFactory inserts the qualifiers that the programmer explicitly inserted in the code. Yet, certain constructs should be treated as having a type qualifier even when the programmer has not written one. The type system designer may subclass AnnotatedTypeFactory and override annotateImplicit(Tree,AnnotatedTypeMirror) and annotateImplicit(Element,AnnotatedTypeMirror) to account for such constructs.
## 29.5 Dataflow: enhancing flow-sensitive type qualifier inference
By default, every checker performs automatic type refinement, also known as flow inference, as described in Section 25.4.
In the uncommon case that you wish to disable flow inference in your checker, put the following two lines at the beginning of the constructor for your subtype of BaseAnnotatedTypeFactory:
// use true to enable flow inference, false to disable it
super(checker, false);
You can enhance the Checker Framework’s built-in flow-sensitive type refinement, so that it is more powerful and is customized to your type system. In particular, your enhancement will yield a more refined type for certain expressions.
Most enhancements to type refinement are based on a run-time test specific to the type system, and not all type systems have applicable run-time tests. See Section 25.4.3 to determine whether run-time tests are applicable to your type system.
The Checker Framework’s type refinement is implemented with a dataflow algorithm which can be customized to enhance the built-in type refinement. The next sections detail dataflow customization. It would also be helpful to read the Dataflow Manual, which gives a more in-depth description of the Checker Framework’s dataflow framework.
The steps to customizing type refinement are:
1. 29.5.1 Create required classes and configure their use
2. 29.5.2 Override methods that handle Nodes of interest
3. 29.5.3 Determine which expressions will be refined
4. 29.5.4 Implement the refinement
The Regex Checker’s dataflow customization for the RegexUtil.asRegex run-time check is used as an example throughout the steps.
### 29.5.1 Create required classes and configure their use
The following classes must be created to customize dataflow. These classes must be included on the classpath like other components of your checker.
1. Create a class that extends CFAbstractTransfer
CFAbstractTransfer performs the default Checker Framework type refinement. The extended class will add functionality by overriding superclass methods.
The Regex Checker’s extended CFAbstractTransfer is RegexTransfer.
2. Create a class that extends CFAbstractAnalysis and uses the extended CFAbstractTransfer
CFAbstractAnalysis and its superclass, Analysis, are the central coordinating classes in the Checker Framework’s dataflow algorithm. The createTransferFunction method must be overridden in the extended CFAbstractAnalysis to return a new instance of the extended CFAbstractTransfer.
The Regex Checker’s extended CFAbstractAnalysis is RegexAnalysis, which overrides the createTransferFunction to return a new RegexTransfer instance:
@Override
public RegexTransfer createTransferFunction() {
return new RegexTransfer(this);
}
3. Configure the checker’s type factory to use the extended CFAbstractAnalysis
To configure your checker’s type factory to use the new extended CFAbstractAnalysis, override the createFlowAnalysis method in your type factory to return a new instance of the extended CFAbstractAnalysis.
@Override
protected RegexAnalysis createFlowAnalysis(
List<Pair<VariableElement, CFValue>> fieldValues) {
return new RegexAnalysis(checker, this, fieldValues);
}
### 29.5.2 Override methods that handle Nodes of interest
At this point, your checker is configured to use your extended CFAbstractAnalysis, but it still uses only the default behavior. Next, in your extended CFAbstractTransfer, override the visitor methods that handle the Nodes corresponding to the run-time checks or run-time operations that can be used to refine types.
A Node is basically equivalent to a javac compiler Tree. A tree is a node in the abstract syntax tree of the program being checked. See Section 29.11 for more information about trees.
As an example, for the statement String a = "";, the corresponding abstract syntax tree structure is:
VariableTree:
name: "a"
type:
IdentifierTree
name: String
initializer:
LiteralTree
value: ""
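You can inspect such trees yourself using the javac Tree API that ships in the JDK (module jdk.compiler), with no Checker Framework dependency. The sketch below parses that statement from an in-memory source and reports what a tree visitor sees for the variable declaration:

```java
import com.sun.source.tree.CompilationUnitTree;
import com.sun.source.tree.VariableTree;
import com.sun.source.util.JavacTask;
import com.sun.source.util.TreeScanner;
import java.net.URI;
import java.util.List;
import javax.tools.JavaCompiler;
import javax.tools.SimpleJavaFileObject;
import javax.tools.ToolProvider;

public class TreeDemo {
    // An in-memory "file", so no source file needs to exist on disk.
    static class Source extends SimpleJavaFileObject {
        final String code;
        Source(String code) {
            super(URI.create("string:///C.java"), Kind.SOURCE);
            this.code = code;
        }
        @Override public CharSequence getCharContent(boolean ignore) { return code; }
    }

    // Parse the source and describe the first variable declaration found.
    static String describeFirstVariable(String code) throws Exception {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        JavacTask task = (JavacTask) javac.getTask(
                null, null, null, null, null, List.of(new Source(code)));
        StringBuilder out = new StringBuilder();
        for (CompilationUnitTree unit : task.parse()) {
            new TreeScanner<Void, Void>() {
                @Override public Void visitVariable(VariableTree t, Void p) {
                    out.append(t.getKind()).append(' ').append(t.getName())
                       .append(" init=").append(t.getInitializer().getKind());
                    return null;
                }
            }.scan(unit, null);
        }
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(
            describeFirstVariable("class C { void m() { String a = \"\"; } }"));
        // prints: VARIABLE a init=STRING_LITERAL
    }
}
```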
A Node generally maps one-to-one with a Tree. When dataflow processes a method, it translates Trees into Nodes and then calls the appropriate visit method on CFAbstractTransfer which then performs the dataflow analysis for the passed in Node.
Decide what Node kinds are of interest with respect to the run-time checks or run-time operations you are trying to support. The Node subclasses can be found in the org.checkerframework.dataflow.cfg.node package. Some examples are EqualToNode, LeftShiftNode, VariableDeclarationNode.
The Regex Checker refines the type of a run-time test method call, so RegexTransfer overrides the method that handles MethodInvocationNodes, visitMethodInvocation.
public TransferResult<CFValue, CFStore> visitMethodInvocation(
MethodInvocationNode n, TransferInput<CFValue, CFStore> in) { ... }
### 29.5.3 Determine the expressions to refine the types of
There are usually multiple expressions used in a run-time check or run-time operation; determine which expression the customization will refine. This is usually specific to the type system and run-time test.
Expressions are refined by modifying the return value of a visitor method in CFAbstractTransfer. CFAbstractTransfer visitor methods return a TransferResult. The constructor of a TransferResult takes two parameters: the resulting type for the Node being evaluated (the result type) and a map from expressions in scopes to estimates of their types (a Store).
For the program operation op(a,b), an enhancement may improve the Checker Framework’s types in either or both of the following ways:
1. Changing the resulting type to refine the estimate of the type of entire expression op(a,b).
As an example (and as the running example of implementing a dataflow refinement), the RegexUtil.asRegex method is declared as:
@Regex(0) String asRegex(String s, int groups) { ... }
which means that an expression such as RegexUtil.asRegex(myString, myInt) has type @Regex(0) String. When the int parameter groups is known or can be inferred at compile time, a better estimate can be given. For example, RegexUtil.asRegex(myString, 2) has type @Regex(2) String.
Changing the TransferResult’s result type changes the type that is returned by the AnnotatedTypeFactory for the tree corresponding to the Node that was visited. (Remember that BaseTypeVisitor uses the AnnotatedTypeFactory to look up the type of a Tree, and then performs checks on types of one or more Trees.)
When RegexTransfer evaluates a RegexUtil.asRegex invocation, it updates the TransferResult’s result type. This changes the type of the RegexUtil.asRegex invocation when its Tree is looked up by the AnnotatedTypeFactory. The Regex Checker’s visitMethodInvocation is shown in more detail in Section 29.5.4.
2. Changing the store to refine the estimate of some other expression, such as a or b.
As an example, consider an equality test.
@Nullable String s;
if (s != null) {
...
}
The type of s != null is always @NonNull boolean (dataflow analysis does not affect it), but in the true branch, the type of s can be refined to @NonNull String.
Updating the Store treats an expression as having a refined type for the remainder of the method or conditional block. For example, when the Nullness Checker’s dataflow evaluates myvar != null, it updates the Store to specify that the variable myvar should be treated as having type @NonNull for the rest of the then conditional block. Not all kinds of expressions can be refined; currently method return values, local variables, fields, and array values can be stored in the Store. Other kinds of expressions, like binary expressions or casts, cannot be stored in the Store.
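As a toy model of that store-updating behavior (this is an illustration only, not the framework's CFStore API), refining a variable inside the true branch of a null test amounts to overriding its mapped type estimate for the scope of that branch while leaving the outer store untouched:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model only: a "store" maps expression names to type estimates, and a
// successful null test yields a branch-local copy with a refined estimate.
public class ToyStore {
    static Map<String, String> refineInThenBranch(
            Map<String, String> store, String expr, String refined) {
        Map<String, String> thenStore = new HashMap<>(store);
        thenStore.put(expr, refined);  // refinement visible only in this branch
        return thenStore;
    }

    public static void main(String[] args) {
        Map<String, String> store = new HashMap<>();
        store.put("s", "@Nullable String");          // declared type of s

        // Models evaluating "s != null" and entering the true branch:
        Map<String, String> thenStore =
            refineInThenBranch(store, "s", "@NonNull String");
        System.out.println(thenStore.get("s"));      // @NonNull String
        System.out.println(store.get("s"));          // @Nullable String
    }
}
```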
### 29.5.4 Implement the refinement
This section details implementing the visitor method RegexTransfer.visitMethodInvocation for the RegexUtil.asRegex run-time test. You can find other examples of visitor methods in LockTransfer and FormatterTransfer.
1. Determine if the visited Node is of interest
The visitor method for a Node is invoked for all instances of that Node kind in the program, so the Node must be inspected to determine if it is an instance of the desired run-time test or operation. For example, visitMethodInvocation is called when dataflow processes any method invocation, but the RegexTransfer should only refine the result of RegexUtil.asRegex invocations:
@Override
public TransferResult<CFValue, CFStore> visitMethodInvocation(...)
...
MethodAccessNode target = n.getTarget();
ExecutableElement method = target.getMethod();
// Is this a RegexUtil.isRegex(s, groups) method call?
if (ElementUtils.matchesElement(method,
null, IS_REGEX_METHOD_NAME, String.class, int.class)) {
...
2. Determine the refined type
Some run-time tests, like the null comparison test, have a deterministic type refinement, e.g., the Nullness Checker always refines the argument in the expression to @NonNull. However, sometimes the refined type depends on parts of the run-time test or operation itself, such as arguments passed to it.
For example, the refined type of RegexUtil.asRegex depends on the integer argument to the method call. The RegexTransfer uses this argument to build the resulting type @Regex(i), where i is the value of the integer argument. Note that currently this code only uses the value of the integer argument if the argument was an integer literal. It could be extended to use the value of the argument if it was any compile-time constant or was inferred at compile time by another analysis, such as the one described in Chapter 16.
AnnotationMirror regexAnnotation;
Node count = n.getArgument(1);
if (count instanceof IntegerLiteralNode) {
IntegerLiteralNode iln = (IntegerLiteralNode) count;
Integer groupCount = iln.getValue();
regexAnnotation = factory.createRegexAnnotation(groupCount);
If the integer argument was not a literal integer, the RegexTransfer falls back to refining the type to just @Regex(0).
} else {
regexAnnotation = AnnotationUtils.fromClass(factory.getElementUtils(), Regex.class);
}
3. Return a TransferResult with the refined types
As discussed in Section 29.5.3, the type of an expression is refined by modifying the TransferResult. Since the RegexTransfer is updating the type of the run-time test itself, it will update the result type and not the Store.
A CFValue is created to hold the type inferred. CFValue is a wrapper class for values being inferred by dataflow:
CFValue newResultValue = analysis.createSingleAnnotationValue(regexAnnotation,
result.getResultValue().getType().getUnderlyingType());
Then, RegexTransfer’s visitMethodInvocation creates and returns a TransferResult using newResultValue as the result type.
return new RegularTransferResult<>(newResultValue, result.getRegularStore());
Finally, when the Regex Checker encounters a RegexUtil.asRegex method call, the checker will refine the return type of the method if it can determine the value of the integer parameter at compile time.
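The group count that @Regex(i) records corresponds to java.util.regex's notion of capturing groups, which plain JDK code can query directly (the @Regex typings in the comments paraphrase the refinement described above):

```java
import java.util.regex.Pattern;

public class GroupCountDemo {
    public static void main(String[] args) {
        // Two capturing groups: such a regex string would be typed @Regex(2).
        System.out.println(
            Pattern.compile("(\\d+)-(\\d+)").matcher("").groupCount());  // 2
        // No capturing groups: the conservative fallback corresponds to @Regex(0).
        System.out.println(
            Pattern.compile("[01]+").matcher("").groupCount());          // 0
    }
}
```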
## 29.6 Visitor: Type rules
A type system’s rules define which operations on values of a particular type are forbidden. These rules must be defined procedurally, not declaratively.
The Checker Framework provides a base visitor class, BaseTypeVisitor, that performs type-checking at each node of a source file’s AST. It uses the visitor design pattern to traverse Java syntax trees as provided by Oracle’s Tree API, and it issues a warning whenever the type system is violated.
A checker’s visitor overrides one method in the base visitor for each special rule in the type qualifier system. Most type-checkers override only a few methods in BaseTypeVisitor. For example, the visitor for the Nullness type system of Chapter 3 contains a single 4-line method that warns if an expression of nullable type is dereferenced, as in:
myObject.hashCode(); // invalid dereference
By default, BaseTypeVisitor performs subtyping checks that are similar to Java subtype rules, but taking the type qualifiers into account. BaseTypeVisitor issues these errors:
• invalid assignment (type.incompatible) for an assignment from an expression type to an incompatible type. The assignment may be a simple assignment, or pseudo-assignment like return expressions or argument passing in a method invocation
In particular, in every assignment and pseudo-assignment, the left-hand side of the assignment must be a supertype of (or the same type as) the right-hand side. For example, this assignment is not permitted:
@Nullable Object myObject;
@NonNull Object myNonNullObject;
...
myNonNullObject = myObject; // invalid assignment
• invalid generic argument (type.argument.type.incompatible) when a type is bound to an incompatible generic type variable
• invalid method invocation (method.invocation.invalid) when a method is invoked on an object whose type is incompatible with the method receiver type
• invalid overriding parameter type (override.parameter.invalid) when a parameter in a method declaration is incompatible with that parameter in the overridden method’s declaration
• invalid overriding return type (override.return.invalid) when the return type in a method declaration is incompatible with the return type in the overridden method’s declaration
• invalid overriding receiver type (override.receiver.invalid) when a receiver in a method declaration is incompatible with that receiver in the overridden method’s declaration
### 29.6.1 AST traversal
The Checker Framework needs to do its own traversal of the AST even though it operates as an ordinary annotation processor [Dar06]. Annotation processors can utilize a visitor for Java code, but that visitor only visits the public elements of Java code, such as classes, fields, methods, and method arguments — it does not visit code bodies or various other locations. The Checker Framework hardly uses the built-in visitor — as soon as the built-in visitor starts to visit a class, then the Checker Framework’s visitor takes over and visits all of the class’s source code.
Because there is no standard API for the AST of Java code, the Checker Framework uses the javac implementation. This is why the Checker Framework is not deeply integrated with Eclipse, but runs as an external tool (see Section 30.6).
### 29.6.2 Avoid hardcoding
It may be tempting to write a type-checking rule for method invocation, where your rule checks the name of the method being called and then treats the method in a special way. This is usually the wrong approach. It is better to write annotations, in a stub file (Chapter 28), and leave the work to the standard type-checking rules.
## 29.7 The checker class: Compiler interface
A checker’s entry point is a subclass of SourceChecker, and is usually a direct subclass of either BaseTypeChecker or AggregateChecker. This entry point, which we call the checker class, serves two roles: an interface to the compiler and a factory for constructing type-system classes.
Because the Checker Framework provides reasonable defaults, oftentimes the checker class has no work to do. Here are the complete definitions of the checker classes for the Interning Checker and the Nullness Checker:
@SupportedLintOptions({"dotequals"})
public final class InterningChecker extends BaseTypeChecker { }
@SupportedLintOptions({"flow", "cast", "cast:redundant"})
public class NullnessChecker extends BaseTypeChecker { }
The checker class bridges between the compiler and the rest of the checker. It invokes the type-rule check visitor on every Java source file being compiled, and provides a simple API, SourceChecker.report, to issue errors using the compiler error reporting mechanism.
Also, the checker class follows the factory method pattern to construct the concrete classes (e.g., visitor, factory) and annotation hierarchy representation. It is a convention that, for a type system named Foo, the compiler interface (checker), the visitor, and the annotated type factory are named as FooChecker, FooVisitor, and FooAnnotatedTypeFactory. BaseTypeChecker uses the convention to reflectively construct the components. Otherwise, the checker writer must specify the component classes for construction.
A checker can customize the default error messages through a Properties-loadable text file named messages.properties that appears in the same directory as the checker class. The property file keys are the strings passed to report (like type.incompatible) and the values are the strings to be printed ("cannot assign ..."). The messages.properties file need only mention the new messages that the checker defines. It may also override messages defined in superclasses, but this is rarely needed. For more details about message keys, see Section 26.1.3.
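For instance, a checker's messages.properties might look like the following sketch (the my.custom.error key and its wording are invented for illustration; type.incompatible is an inherited key being overridden):

```properties
# messages.properties, placed in the same directory as the checker class.
# Each key is a string that the visitor passes to report();
# each value is the printf-style message text to print.
my.custom.error=cannot use a value of type %s here
type.incompatible=cannot assign a value of type %s to a variable of type %s
```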
### 29.7.1 Indicating supported annotations
A checker must indicate the annotations that it supports (the annotations that make up its type hierarchy), including whether it supports the polymorphic qualifier @PolyAll.
By default, a checker supports PolyAll, and all annotations located in a subdirectory called qual that’s located in the same directory as the checker. Note that only annotations defined with the @Target(ElementType.TYPE_USE) meta-annotation (and optionally with the additional value of ElementType.TYPE_PARAMETER, but no other ElementType values) are automatically considered as supported annotations.
To indicate support for annotations that are located outside of the qual subdirectory or that have other ElementType values, or to indicate whether a checker supports the polymorphic qualifier @PolyAll, checker writers can override the createSupportedTypeQualifiers method; see its Javadoc for details.
An aggregate checker (which extends AggregateChecker) does not need to specify its type qualifiers, but each of its component checkers should do so.
### 29.7.2 Bundling multiple checkers
Sometimes, multiple checkers work together and should always be run together. There are two different ways to bundle multiple checkers together, by creating an “aggregate checker” or a “compound checker”.
1. An aggregate checker runs multiple independent, unrelated checkers. There is no communication or cooperation among them.
The effect is the same as if a user passes multiple processors to the -processor command-line option.
For example, instead of a user having to run
javac -processor DistanceUnitChecker,VelocityUnitChecker,MassUnitChecker ... files ...
the user can write
javac -processor MyUnitCheckers ... files ...
if you define an aggregate checker class. Extend AggregateChecker and override the getSupportedCheckers method, like the following:
public class MyUnitCheckers extends AggregateChecker {
protected Collection<Class<? extends SourceChecker>> getSupportedCheckers() {
return Arrays.asList(DistanceUnitChecker.class,
VelocityUnitChecker.class,
MassUnitChecker.class);
}
}
An example of an aggregate checker is I18nChecker (see Chapter 12.2), which consists of I18nSubchecker and LocalizableKeyChecker.
2. Use a compound checker to express dependencies among checkers. Suppose it only makes sense to run MyChecker if MyHelperChecker has already been run; that might be the case if MyHelperChecker computes some information that MyChecker needs to use.
Override MyChecker.getImmediateSubcheckerClasses to return a list of the checkers that MyChecker depends on. Every one of them will be run before MyChecker is run. One of MyChecker’s subcheckers may itself be a compound checker, and multiple checkers may declare a dependence on the same subchecker. The Checker Framework will run each checker once, and in an order consistent with all the dependences.
A checker obtains information from its subcheckers (those that ran before it) by querying their AnnotatedTypeFactory to determine the types of variables.
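The scheduling guarantee stated above (each checker runs once, subcheckers before their dependents) can be sketched as a small dependency-ordering routine. This is a hypothetical analogue for illustration, not the framework's actual scheduler:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;

public class SubcheckerOrdering {
    // Hypothetical sketch of the scheduling contract: each checker declares
    // its immediate subcheckers, every checker runs exactly once, and all
    // subcheckers run before the checker that depends on them.
    static void schedule(String checker, Map<String, List<String>> deps,
                         LinkedHashSet<String> order) {
        if (order.contains(checker)) {
            return;  // already scheduled: each checker runs only once
        }
        for (String sub : deps.getOrDefault(checker, Collections.emptyList())) {
            schedule(sub, deps, order);
        }
        order.add(checker);
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new HashMap<>();
        deps.put("MyChecker", Arrays.asList("MyHelperChecker", "OtherChecker"));
        deps.put("OtherChecker", Arrays.asList("MyHelperChecker"));
        LinkedHashSet<String> order = new LinkedHashSet<>();
        schedule("MyChecker", deps, order);
        // Shared subchecker MyHelperChecker appears only once, and first.
        System.out.println(order);  // [MyHelperChecker, OtherChecker, MyChecker]
    }
}
```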
### 29.7.3 Providing command-line options
A checker can provide two kinds of command-line options: boolean flags and named string values (the standard annotation processor options).
#### Boolean flags
To specify a simple boolean flag, add:
@SupportedLintOptions({"flag"})
to your checker subclass. The value of the flag can be queried using
checker.getLintOption("flag", false)
The second argument sets the default value that should be returned.
To pass a flag on the command line, call javac as follows:
javac -processor Mine -Alint=flag
#### Named string values
For more complicated options, one can use the standard annotation processing @SupportedOptions annotation on the checker, as in:
@SupportedOptions({"info"})
The value of the option can be queried using
checker.getOption("info")
To pass an option on the command line, call javac as follows:
javac -processor Mine -Ainfo=p1,p2
The value is returned as a single string and you have to perform the required parsing of the option.
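Since getOption returns only the raw string after the "=", the checker must split and interpret it itself. A standalone sketch of such parsing follows; the parseListOption helper is hypothetical, not part of the framework API:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class OptionParsing {
    // Hypothetical helper mirroring what a checker might do with the raw
    // string returned by checker.getOption("info") for -Ainfo=p1,p2.
    static List<String> parseListOption(String raw) {
        if (raw == null || raw.isEmpty()) {
            return Collections.emptyList();  // option not supplied
        }
        return Arrays.asList(raw.trim().split("\\s*,\\s*"));
    }

    public static void main(String[] args) {
        System.out.println(parseListOption("p1,p2"));  // [p1, p2]
        System.out.println(parseListOption(null));     // []
    }
}
```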
## 29.8 Testing framework
The Checker Framework provides a convenient way to write tests for your checker. It is extensively documented in file checker-framework/checker/tests/README. Also see the API documentation for CheckerFrameworkTest, which all test classes should extend.
## 29.9 Debugging options
The Checker Framework provides debugging options that can be helpful when writing a checker. These are provided via the standard javac “-A” switch, which is used to pass options to an annotation processor.
### 29.9.1 Amount of detail in messages
• -AprintAllQualifiers: print all type qualifiers, including qualifiers like @Unqualified which are usually not shown. (Use the @InvisibleQualifier meta-annotation on a qualifier to hide it.)
• -Adetailedmsgtext: Output error/warning messages in a stylized format that is easy for tools to parse. This is useful for tools that run the Checker Framework and parse its output, such as IDE plugins. See the source code of SourceChecker.java for details about the format.
• -AprintErrorStack: print a stack trace whenever an internal Checker Framework error occurs.
• -Anomsgtext: use message keys (such as “type.invalid”) rather than full message text when reporting errors or warnings. This is used by the Checker Framework’s own tests, so they do not need to be changed if the English message is updated.
### 29.9.2 Stub and JDK libraries
• -Aignorejdkastub: ignore the jdk.astub file in the checker directory. Files passed through the -Astubs option are still processed. This is useful when experimenting with an alternative stub file.
• -Anocheckjdk: don’t issue an error if no annotated JDK can be found.
• -AstubDebug: Print debugging messages while processing stub files.
### 29.9.3 Progress tracing
• -Afilenames: print the name of each file before type-checking it.
• -Ashowchecks: print debugging information for each pseudo-assignment check (as performed by BaseTypeVisitor; see Section 29.6).
### 29.9.4 Saving the command-line arguments to a file
• -AoutputArgsToFile: This saves the final command-line parameters as passed to the compiler in a file. This file can be used as a script (if the file is marked as executable on Unix, or if it includes a .bat extension on Windows) to re-execute the same compilation command. Note that this argument cannot be included in a file containing command-line arguments passed to the compiler using the @argfile syntax.
### 29.9.5 Miscellaneous debugging options
• -Aflowdotdir: Directory for .dot files that visualize the control flow graph of all the methods and code fragments analyzed by the dataflow analysis (Section 29.5). The graph also contains information about flow-sensitively refined types of various expressions at many program points.
• -AresourceStats: Whether to output resource statistics at JVM shutdown.
### 29.9.6 Examples
The following example demonstrates how these options are used:
$ javac -processor org.checkerframework.checker.interning.InterningChecker \
    examples/InterningExampleWithWarnings.java -Ashowchecks -Anomsgtext -Afilenames

[InterningChecker] InterningExampleWithWarnings.java
 success (line 18): STRING_LITERAL "foo"
     actual: DECLARED @org.checkerframework.checker.interning.qual.Interned java.lang.String
   expected: DECLARED @org.checkerframework.checker.interning.qual.Interned java.lang.String
 success (line 19): NEW_CLASS new String("bar")
     actual: DECLARED java.lang.String
   expected: DECLARED java.lang.String
examples/InterningExampleWithWarnings.java:21: (not.interned)
    if (foo == bar)
        ^
 success (line 22): STRING_LITERAL "foo == bar"
     actual: DECLARED @org.checkerframework.checker.interning.qual.Interned java.lang.String
   expected: DECLARED java.lang.String
1 error

### 29.9.7 Using an external debugger

You can use any standard debugger to observe the execution of your checker. Set the execution main class to com.sun.tools.javac.Main, and insert the Checker Framework javac.jar (resides in $CHECKERFRAMEWORK/checker/dist/javac.jar). If using an IDE, it is recommended that you add .../jsr308-langtools as a project, so you can step into its source code if needed.
You can also set up remote (or local) debugging using the following command as a template:
java -jar $CHECKERFRAMEWORK/framework/dist/framework.jar \
  -J-Xdebug -J-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005 \
  -processor org.checkerframework.checker.nullness.NullnessChecker \
  src/sandbox/FileToCheck.java

## 29.10 Documenting the checker

This section describes how to write a chapter for this manual that describes a new type-checker. This is a prerequisite to having your type-checker distributed with the Checker Framework, which is the best way for users to find it and for it to be kept up to date with Checker Framework changes. Even if you do not want your checker distributed with the Checker Framework, these guidelines may help you write better documentation.

When writing a chapter about a new type-checker, see the existing chapters for inspiration. (But recognize that the existing chapters aren’t perfect: maybe they can be improved too.) A chapter in the Checker Framework manual should generally have the following sections:

Chapter: Belly Rub Checker

The text before the first section in the chapter should state the guarantee that the checker provides and why it is important. It should give an overview of the concepts. It should state how to run the checker.

Section: Belly Rub Annotations

This section includes descriptions of the annotations with links to the Javadoc. Separate type annotations from declaration annotations, and put any type annotations that a programmer may not write (they are only used internally by the implementation) last within each variety of annotation.

Draw a diagram of the type hierarchy. A textual description of the hierarchy is not sufficient; the diagram really helps readers to understand the system.

The Javadoc for the annotations deserves the same care as the manual chapter. Each annotation’s Javadoc comment should use the @checker_framework.manual Javadoc taglet to refer to the chapter that describes the checker; see ManualTaglet.
Section: What the Belly Rub Checker checks

This section gives more details about when an error is issued, with examples. This section may be omitted if the checker does not contain special type-checking rules — that is, if the checker only enforces the usual Java subtyping rules.

Section: Examples

Code examples.

Sometimes you can omit some of the above sections. Sometimes there are additional sections, such as tips on suppressing warnings, comparisons to other tools, and run-time support.

You will create a new belly-rub-checker.tex file, then \input it at a logical place in manual.tex (not necessarily as the last checker-related chapter). Also add two references to the checker’s chapter: one at the beginning of chapter 1, and identical text in Section 25.4.3 (both of these lists appear in the same order as the manual chapters, to help us notice if anything is missing).

Every chapter and (sub)*section should have a label defined within the \section command. Section labels should start with the checker name (as in \label{bellyrub-examples}) and not with “sec:”. These conventions are for the benefit of the Hevea program that produces the HTML version of the manual.

Don’t forget to write Javadoc for any annotations that the checker uses. That is part of the documentation and is the first thing that many users may see. Also ensure that the Javadoc links back to the manual, using the @checker_framework.manual custom Javadoc tag.

You should also integrate your new checker with the Eclipse plugin.

## 29.11 javac implementation survival guide

Since this section of the manual was written, the useful “The Hitchhiker’s Guide to javac” has become available at http://openjdk.java.net/groups/compiler/doc/hhgtjavac/index.html. See it first, and then refer to this section. (This section of the manual should be revised, or parts eliminated, in light of that document.)
A checker built using the Checker Framework makes use of a few interfaces from the underlying compiler (Oracle’s OpenJDK javac). This section describes those interfaces.

### 29.11.1 Checker access to compiler information

The compiler uses and exposes three hierarchies to model the Java source code and classfiles.

#### Types — Java Language Model API

A TypeMirror represents a Java type. There is a TypeMirror interface to represent each type kind, e.g., PrimitiveType for primitive types, ExecutableType for method types, and NullType for the type of the null literal.

TypeMirror does not represent annotated types, though. A checker should use the Checker Framework types API, AnnotatedTypeMirror, instead. AnnotatedTypeMirror parallels the TypeMirror API, but also presents the type annotations associated with the type. The Checker Framework and the checkers use the types API extensively.

#### Elements — Java Language Model API

An Element represents a potentially-public declaration that can be accessed from elsewhere: classes, interfaces, methods, constructors, and fields. Element represents elements found in both source code and bytecode. There is an Element interface to represent each construct, e.g., TypeElement for classes/interfaces, ExecutableElement for methods/constructors, VariableElement for local variables and method parameters.

If you need to operate on the declaration level, always use elements rather than trees (see below). This allows the code to work on both source and bytecode elements. Example: retrieve declaration annotations, check variable modifiers (e.g., strictfp, synchronized).

#### Trees — Compiler Tree API

A Tree represents a syntactic unit in the source code, like a method declaration, statement, block, for loop, etc. Trees only represent source code to be compiled (or found in -sourcepath); no tree is available for classes read from bytecode.
There is a Tree interface for each Java source structure, e.g., ClassTree for a class declaration, MethodInvocationTree for a method invocation, and ForEachTree for an enhanced-for-loop statement.

You should limit your use of trees. A checker uses Trees mainly to traverse the source code and retrieve the types/elements corresponding to them. Then, the checker performs any needed checks on the types/elements instead.

#### Using the APIs

The three APIs use some common idioms and conventions; knowing them will help you to create your checker.

Type-checking: Do not use instanceof to determine the class of the object, because you cannot necessarily predict the run-time type of the object that implements an interface. Instead, use the getKind() method. The method returns TypeKind, ElementKind, and Tree.Kind for the three interfaces, respectively.

Visitors and Scanners: The compiler and the Checker Framework use the visitor pattern extensively. For example, visitors are used to traverse the source tree (BaseTypeVisitor extends TreePathScanner) and for type checking (TreeAnnotator implements TreeVisitor).

Utility classes: Some useful methods appear in a utility class. The Oracle convention is that the utility class for a Foo hierarchy is Foos (e.g., Types, Elements, and Trees). The Checker Framework uses a common Utils suffix instead (e.g., TypesUtils, TreeUtils, ElementUtils), with one notable exception: AnnotatedTypes.

### 29.11.2 How a checker fits in the compiler as an annotation processor

The Checker Framework builds on the Annotation Processing API introduced in Java 6. A type annotation processor is one that extends AbstractTypeProcessor; these get run on each class source file after the compiler confirms that the class is valid Java code. The most important methods of AbstractTypeProcessor are typeProcess and getSupportedSourceVersion.
The former method is where you would insert any sort of method call to walk the AST, and the latter just returns a constant indicating that we are targeting version 8 of the compiler. Implementing these two methods should be enough for a basic plugin; see the Javadoc for the class for other methods that you may find useful later on.

The Checker Framework uses Oracle’s Tree API to access a program’s AST. The Tree API is specific to the Oracle OpenJDK, so the Checker Framework only works with the OpenJDK javac, not with Eclipse’s compiler ecj or with gcj. This also limits the tightness of the integration of the Checker Framework into other IDEs such as IntelliJ IDEA. An implementation-neutral API would be preferable. In the future, the Checker Framework can be migrated to use the Java Model AST of JSR 198 (Extension API for Integrated Development Environments) [Cro06], which gives access to the source code of a method. But, at present no tools implement JSR 198. Also see Section 29.6.1.

#### Learning more about javac

Sun’s javac compiler interfaces can be daunting to a newcomer, and its documentation is a bit sparse. The Checker Framework aims to abstract a lot of these complexities. You do not have to understand the implementation of javac to build powerful and useful checkers. Beyond this document, other useful resources include the Java Infrastructure Developer’s guide at http://wiki.netbeans.org/Java_DevelopersGuide and the compiler mailing list archives at http://news.gmane.org/gmane.comp.java.openjdk.compiler.devel (subscribe at http://mail.openjdk.java.net/mailman/listinfo/compiler-dev).

## 29.12 Integrating a checker with the Checker Framework

To integrate a new checker with the Checker Framework release, perform the following:

• Add a XXX-tests build target and ensure all tests pass.
• Make sure all-tests tests the new checker.
• Extend the check-compilermsgs target to include the compiler messages property file of the new checker in the checker-args list.
• Make sure check-compilermsgs and check-purity run without warnings or errors.

1 Actually, there is a standard API for Java ASTs — JSR 198 (Extension API for Integrated Development Environments) [Cro06]. If tools were to implement it (which would just require writing wrappers or adapters), then the Checker Framework and similar tools could be portable among different compilers and IDEs.

# Chapter 30 Integration with external tools

This chapter discusses how to run a checker from the command line, from a build system, or from an IDE. You can skip to the appropriate section:

• javac (Section 30.1)
• Ant (Section 30.2)
• Maven (Section 30.3)
• Gradle (Section 30.4)
• IntelliJ IDEA (Section 30.5)
• Eclipse (Section 30.6)
• tIDE (Section 30.7)

If your build system or IDE is not listed above, you should customize how it runs the javac command on your behalf. See your build system or IDE documentation to learn how to customize it, adapting the instructions for javac in Section 30.1. If you make another tool support running a checker, please inform us via the mailing list or issue tracker so we can add it to this manual.

This chapter also discusses type inference tools (see Section 30.8).

All examples in this chapter are in the public domain, with no copyright nor licensing restrictions.

## 30.1 Javac compiler

To perform pluggable type-checking, run the javac compiler with the Checker Framework on the classpath. There are three ways to achieve this. You can use any one of them. However, if you are using the Windows command shell, you must use the last one.

• Option 1: Add directory .../checker-framework-1.9.11/checker/bin to your path, before any other directory that contains a javac executable.

If you are using the bash shell, a way to do this is to add the following to your ~/.profile (or alternately ~/.bash_profile or ~/.bashrc) file:

export CHECKERFRAMEWORK=${HOME}/checker-framework-1.9.11
export PATH=${CHECKERFRAMEWORK}/checker/bin:${PATH}
then log out and back in to ensure that the environment variable setting takes effect.
Now, whenever you run javac, you will use the “Checker Framework compiler”. It is exactly the same as the OpenJDK compiler, with two small differences: it includes the Checker Framework jar file on its classpath, and it recognizes type annotations in comments (see Section 27.2.1).
• Option 2: Whenever this document tells you to run javac, you can instead run $CHECKERFRAMEWORK/checker/bin/javac. You can simplify this by introducing an alias. Then, whenever this document tells you to run javac, instead use that alias. Here is the syntax for your ~/.bashrc file: export CHECKERFRAMEWORK=${HOME}/checker-framework-1.9.11
alias javacheck='$CHECKERFRAMEWORK/checker/bin/javac'

If you are using a Java 7 JVM, then add command-line arguments to indicate that:

export CHECKERFRAMEWORK=${HOME}/checker-framework-1.9.11
alias javacheck='$CHECKERFRAMEWORK/checker/bin/javac -source 7 -target 7'

If you do not add the -source 7 -target 7 command-line arguments, you may get the following error when running a class that was compiled by javacheck:

UnsupportedClassVersionError: ... : Unsupported major.minor version 52.0

• Option 3: Whenever this document tells you to run javac, instead run checker.jar via java (not javac) as in:

java -jar $CHECKERFRAMEWORK/checker/dist/checker.jar ...
You can simplify the above command by introducing an alias. Then, whenever this document tells you to run javac, instead use that alias. For example:
# Unix
export CHECKERFRAMEWORK=${HOME}/checker-framework-1.9.11
alias javacheck='java -jar $CHECKERFRAMEWORK/checker/dist/checker.jar'
# Windows
set CHECKERFRAMEWORK = C:\Program Files\checker-framework-1.9.11\
doskey javacheck=java -jar %CHECKERFRAMEWORK%\checker\dist\checker.jar $*

and add -source 7 -target 7 if you use a Java 7 JVM.

(Explanation for advanced users: More generally, anywhere that you would use javac.jar, you can substitute $CHECKERFRAMEWORK/checker/dist/checker.jar; the result is to use the Checker Framework compiler instead of the regular javac.)
To ensure that you are using the Checker Framework compiler, run javac -version (possibly using the full pathname to javac or the alias, if you did not add the Checker Framework javac to your path). The output should be:
javac 1.8.0-jsr308-1.9.11
## 30.2 Ant

If you use the Ant build tool to compile your software, then you can add an Ant task that runs a checker. We assume that your Ant file already contains a compilation target that uses the javac task.
1. Set the jsr308.javac property:
<property environment="env"/>
<property name="checkerframework" value="${env.CHECKERFRAMEWORK}" />

<!-- On Mac/Linux, use the javac shell script; on Windows, use javac.bat -->
<condition property="cfJavac" value="javac.bat" else="javac">
    <os family="windows" />
</condition>

<presetdef name="jsr308.javac">
  <javac fork="yes"
         executable="${checkerframework}/checker/bin/${cfJavac}" >
    <!-- JSR-308-related compiler arguments -->
    <compilerarg value="-version"/>
    <compilerarg value="-implicit:class"/>
  </javac>
</presetdef>

2. Duplicate the compilation target, then modify it slightly as indicated in this example:

<target name="check-nullness" description="Check for null pointer dereferences"
        depends="clean,...">
  <!-- use jsr308.javac instead of javac -->
  <jsr308.javac ... >
    <compilerarg line="-processor org.checkerframework.checker.nullness.NullnessChecker"/>
    <!-- optional, to not check uses of library methods:
    <compilerarg value="-AskipUses=^(java\.awt\.|javax\.swing\.)"/> -->
    <compilerarg line="-Xmaxerrs 10000"/>
    ...
  </jsr308.javac>
</target>

Fill in each ellipsis (…) from the original compilation target. In the example, the target is named check-nullness, but you can name it whatever you like.

### 30.2.1 Explanation

This section explains each part of the Ant task.

1. Definition of jsr308.javac: The fork field of the javac task ensures that an external javac program is called. Otherwise, Ant will run javac via a Java method call, and there is no guarantee that it will get the Checker Framework compiler that is distributed with the Checker Framework.

The -version compiler argument is just for debugging; you may omit it.

The -implicit:class compiler argument causes annotation processing to be performed on implicitly compiled files. (An implicitly compiled file is one that was not specified on the command line, but for which the source code is newer than the .class file.) This is the default, but supplying the argument explicitly suppresses a compiler warning.

2.
The check-nullness target: The target assumes the existence of a clean target that removes all .class files. That is necessary because Ant’s javac target doesn’t re-compile .java files for which a .class file already exists.

The -processor ... compiler argument indicates which checker to run. You can supply additional arguments to the checker as well.

## 30.3 Maven

If you use the Maven tool, then you can enable Checker Framework checkers by following the instructions below. These instructions use the artifacts from Maven Central.

See the demonstration examples/MavenExample/ for an example of the use of these instructions in a Maven project. This example can be used to verify that Maven is correctly downloading and executing the Checker Framework on your machine.

Please note that the -AoutputArgsToFile command-line option (see Section 29.9.4) and shorthands for built-in checkers (see Section 2.2.4) are not available when following these instructions. Both these features are available only when a checker is launched via checker.jar, such as when $CHECKERFRAMEWORK/checker/bin/javac is run. These instructions bypass checker.jar and cause the compiler to run a checker as an annotation processor directly.
1. Declare a dependency on the Checker Framework artifacts. Find the existing <dependencies> section and add the following new <dependency> items:
<dependencies>
... existing <dependency> items ...
<!-- annotations from the Checker Framework: nullness, interning, locking, ... -->
<dependency>
<groupId>org.checkerframework</groupId>
<artifactId>checker-qual</artifactId>
<version>1.9.11</version>
</dependency>
<dependency>
<groupId>org.checkerframework</groupId>
<artifactId>checker</artifactId>
<version>1.9.11</version>
</dependency>
<!-- The type annotations compiler - uncomment if using Java 7 -->
<!-- <dependency>
<groupId>org.checkerframework</groupId>
<artifactId>compiler</artifactId>
<version>1.9.11</version>
</dependency> -->
<!-- The annotated JDK to use (change to jdk7 if using Java 7) -->
<dependency>
<groupId>org.checkerframework</groupId>
<artifactId>jdk8</artifactId>
<version>1.9.11</version>
</dependency>
</dependencies>
2. Use Maven properties to hold the locations of the annotated JDK and, if using Java 7, the type annotations compiler. Both were declared as Maven dependencies above. To set the value of these properties automatically, use the Maven Dependency plugin.
First, create the properties in the properties section of the POM:
<properties>
<!-- These properties will be set by the Maven Dependency plugin -->
<!-- Change to jdk7 if using Java 7 -->
<annotatedJdk>${org.checkerframework:jdk8:jar}</annotatedJdk>
<!-- The type annotations compiler is required if using Java 7. -->
<!-- Uncomment the following line if using Java 7. -->
<!-- <typeAnnotationsJavac>${org.checkerframework:compiler:jar}</typeAnnotationsJavac> -->
</properties>
Change the reference to the maven-dependency-plugin within the <plugins> section, or add it if it is not present.
<plugin>
<!-- This plugin will set properties values using dependency information -->
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>2.3</version>
<executions>
<execution>
<goals>
<goal>properties</goal>
</goals>
</execution>
</executions>
</plugin>
3. Direct the Maven compiler plugin to use the desired checkers. Change the reference to the maven-compiler-plugin within the <plugins> section, or add it if it is not present.
For example, to use the org.checkerframework.checker.nullness.NullnessChecker:
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.3</version>
<configuration>
<!-- Change source and target to 1.7 if using Java 7 -->
<source>1.8</source>
<target>1.8</target>
<fork>true</fork>
<annotationProcessors>
<!-- Add all the checkers you want to enable here -->
<annotationProcessor>org.checkerframework.checker.nullness.NullnessChecker</annotationProcessor>
</annotationProcessors>
<compilerArgs>
<!-- location of the annotated JDK, which comes from a Maven dependency -->
<arg>-Xbootclasspath/p:${annotatedJdk}</arg>
<!-- Uncomment the following line if using Java 7. -->
<!-- <arg>-J-Xbootclasspath/p:${typeAnnotationsJavac}</arg> -->
</compilerArgs>
</configuration>
</plugin>
Now, building with Maven should run the checkers during compilation.
Notice that using this approach, no external setup is necessary, so your Maven build should be reproducible on any server.
If you want to allow Maven to compile your code without running the checkers, you may want to move the declarations above to within a Maven profile, so that the checkers would only run if the profile was enabled.
## 30.4 Gradle

If you fork the compilation task, Gradle lets you specify the executable to compile Java programs.
To specify the appropriate executable, set options.fork = true and options.forkOptions.executable = "$CHECKERFRAMEWORK/checker/bin/javac".

To specify command-line arguments, set compile.options.compilerArgs. Here is a possible example:

allprojects {
  tasks.withType(JavaCompile).all { JavaCompile compile ->
    compile.options.debug = true
    compile.options.compilerArgs = [
        '-version',
        '-implicit:class',
        '-processor', 'org.checkerframework.checker.nullness.NullnessChecker'
    ]
    options.fork = true
    options.forkOptions.executable = "$CHECKERFRAMEWORK/checker/bin/javac"
}
}
## 30.5 IntelliJ IDEA
IntelliJ IDEA (Maia release) supports the Type Annotations (JSR-308) syntax. See http://blogs.jetbrains.com/idea/2009/07/type-annotations-jsr-308-support/.
## 30.6 Eclipse
There are two ways to run a checker from within the Eclipse IDE: via Ant or using an Eclipse plugin. These two methods are described below.
No matter what method you choose, we suggest that all Checker Framework annotations be written in the comments if you are using a version of Eclipse that does not support Java 8. This will avoid many text highlighting errors with versions of Eclipse that don’t support Java 8 and type annotations.
Even in a version of Eclipse that supports Java 8’s type annotations, you still need to run the Checker Framework via Ant or via the plug-in, rather than by supplying the -processor command-line option to the ecj compiler. The reason is that the Checker Framework is built upon javac, and ecj represents the Java program differently. (If both javac and ecj implemented JSR 198 [Cro06], then it would be possible to build a type-checking plug-in that works with both compilers.)
### 30.6.1 Using an Ant task
Add an Ant target as described in Section 30.2. You can run the Ant target by executing the following steps (instructions copied from http://help.eclipse.org/luna/index.jsp?topic=%2Forg.eclipse.platform.doc.user%2FgettingStarted%2Fqs-84_run_ant.htm):
1. Select build.xml in one of the navigation views and choose Run As > Ant Build... from its context menu.
2. A launch configuration dialog is opened on a launch configuration for this Ant buildfile.
3. In the Targets tab, select the new ant task (e.g., check-interning).
4. Click Run.
5. The Ant buildfile is run, and the output is sent to the Console view.
### 30.6.2 Eclipse plugin for the Checker Framework
The Checker Framework Eclipse plugin enables the use of the Checker Framework within the Eclipse IDE. Its website (http://types.cs.washington.edu/checker-framework/eclipse/) contains instructions for installing and using the plugin.
### 30.6.3 Troubleshooting Eclipse
Eclipse issues an “Unhandled Token in @SuppressWarnings” warning if you write a @SuppressWarnings annotation containing a string that it does not know about. Unfortunately, Eclipse hard-codes this list, and there is no way for the Eclipse plug-in to extend it.
To eliminate the warnings, you have two options:
1. Write @SuppressWarnings annotations related to the Checker Framework in comments (see Section 27.2.1).
2. Disable all “Unhandled Token in @SuppressWarnings” warnings in Eclipse. Look under the menu headings Java → Compiler → Errors/Warnings → Annotations → Unhandled Token in ’@SuppressWarnings’, and set it to ignore.
## 30.7 tIDE
tIDE, an open-source Java IDE, supports the Checker Framework. See its documentation at http://tide.olympe.in/.
## 30.8 Type inference tools
### 30.8.1 Varieties of type inference
There are two different tasks that are commonly called “type inference”.
1. Type inference during type-checking (Section 25.4): During type-checking, if certain variables have no type qualifier, the type-checker determines whether there is some type qualifier that would permit the program to type-check. If so, the type-checker uses that type qualifier, but never tells the programmer what it was. Each time the type-checker runs, it re-infers the type qualifier for that variable. If no type qualifier exists that permits the program to type-check, the type-checker issues a type warning.
This variety of type inference is built into the Checker Framework. Every checker can take advantage of it at no extra effort. However, it only works within a method, not across method boundaries.
Advantages of this variety of type inference include:
• If the type qualifier is obvious to the programmer, then omitting it can reduce annotation clutter in the program.
• The type inference can take advantage of only the code currently being compiled, rather than having to be correct for all possible calls. Additionally, if the code changes, then there is no old annotation to update.
2. Type inference to annotate a program (Section 30.8.2): As a separate step before type-checking, a type inference tool takes the program as input, and outputs a set of type qualifiers that would type-check. These qualifiers are inserted into the source code or the class file. They can be viewed and adjusted by the programmer, and can be used by tools such as the type-checker.
This variety of type inference must be provided by a separate tool. It is not built into the Checker Framework.
Advantages of this variety of type inference include:
• The program contains documentation in the form of type qualifiers, which can aid programmer understanding.
• Error messages may be more comprehensible. With type inference during type-checking, error messages can be obscure, because the compiler has already inferred (possibly incorrect) types for a number of variables.
• A minor advantage is speed: type-checking can be modular, which can be faster than re-doing type inference every time the program is type-checked.
Advantages of both varieties of inference include:
• Less work for the programmer.
• The tool chooses the most general type, whereas a programmer might accidentally write a more specific, less generally-useful annotation.
Each variety of type inference has its place. When using the Checker Framework, type inference during type-checking is performed only within a method (Section 25.4). Every method signature (arguments and return values) and field must have already been explicitly annotated, either by the programmer or by a separate type inference tool (Section 30.8.2). This approach enables modular checking (one class or method at a time) and gives documentation benefits. The programmer still has to put in some effort, but much less than without inference: typically, a programmer does not have to write any qualifiers inside the body of a method.
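The within-method behavior of the first variety can be seen in a plain-Java fragment. Nothing below requires the Checker Framework to compile; the comments describe what the Nullness Checker would infer during type-checking:

```java
public class LocalInference {
    static String maybeNull(boolean b) {
        return b ? "hi" : null;
    }

    public static void main(String[] args) {
        // No qualifier is written on `s`. During type-checking, the
        // Nullness Checker infers @Nullable from the initializer, then
        // refines `s` to @NonNull inside the null test, so the call to
        // length() type-checks with no annotation in the method body.
        String s = maybeNull(true);
        if (s != null) {
            System.out.println(s.length());
        }
    }
}
```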
### 30.8.2 Type inference to annotate a program
This section lists tools that take a program and output a set of annotations for it.
Section 3.3.7 lists several tools that infer annotations for the Nullness Checker.
Section 20.2.2 lists a tool that infers annotations for the Javari Checker, which detects mutation errors.
Cascade [VPEJ14] is an Eclipse plugin that implements interactive type qualifier inference. Cascade is interactive rather than fully-automated: it makes it easier for a developer to insert annotations. Cascade starts with an unannotated program and runs a type-checker. For each warning it suggests multiple fixes, the developer chooses a fix, and Cascade applies it. Cascade works with any checker built on the Checker Framework. You can find installation instructions and a video tutorial at https://github.com/reprogrammer/cascade.
# Chapter 31 Frequently Asked Questions (FAQs)
These are some common questions about the Checker Framework and about pluggable type-checking in general. Feel free to suggest improvements to the answers, or other questions to include here.
Contents:
31.1: Motivation for pluggable type-checking
31.1.1: I don’t make type errors, so would pluggable type-checking help me?
31.1.2: When should I use type qualifiers, and when should I use subclasses?
31.2: Getting started
31.2.1: How do I get started annotating an existing program?
31.2.3: Should I use pluggable types or Java subtypes?
31.3: Usability of pluggable type-checking
31.3.1: Are type annotations easy to read and write?
31.3.2: Will my code become cluttered with type annotations?
31.3.3: Will using the Checker Framework slow down my program? Will it slow down the compiler?
31.3.4: How do I shorten the command line when invoking a checker?
31.4: How to handle warnings
31.4.1: What should I do if a checker issues a warning about my code?
31.4.2: What does a certain Checker Framework warning message mean?
31.4.3: Can a pluggable type-checker guarantee that my code is correct?
31.4.4: What guarantee does the Checker Framework give for concurrent code?
31.4.5: How do I make compilation succeed even if a checker issues errors?
31.4.6: Why does the checker always say there are 100 errors or warnings?
31.4.7: Why does the Checker Framework report an error regarding a type I have not written in my program?
31.4.8: How can I do run-time monitoring of properties that were not statically checked?
31.5: Syntax of type annotations
31.5.1: What is a “receiver”?
31.5.2: What is the meaning of an annotation after a type, such as @NonNull Object @Nullable?
31.5.3: What is the meaning of array annotations such as @NonNull Object @Nullable []?
31.5.4: What is the meaning of varargs annotations such as @English String @NonEmpty ...?
31.5.5: What is the meaning of a type qualifier at a class declaration?
31.5.6: Why shouldn’t a qualifier apply to both types and declarations?
31.6: Semantics of type annotations
31.6.1: Why are the type parameters to List and Map annotated as @NonNull?
31.6.2: How can I handle typestate, or phases of my program with different data properties?
31.6.3: Why are explicit and implicit bounds defaulted differently?
31.7: Creating a new checker
31.7.1: How do I create a new checker?
31.7.2: Why is there no declarative syntax for writing type rules?
31.8: Relationship to other tools
31.8.1: Why not just use a bug detector (like FindBugs)?
31.8.2: How does the Checker Framework compare with Eclipse’s Null Analysis?
31.8.3: How does pluggable type-checking compare with JML?
31.8.4: Is the Checker Framework an official part of Java?
31.8.5: What is the relationship between the Checker Framework and JSR 305?
31.8.6: What is the relationship between the Checker Framework and JSR 308?
## 31.1 Motivation for pluggable type-checking
### 31.1.1 I don’t make type errors, so would pluggable type-checking help me?
Occasionally, a developer says that he makes no errors that type-checking could catch, or that any such errors are unimportant because they have low impact and are easy to fix. When I investigate the claim, I invariably find that the developer is mistaken.
Very frequently, the developer has underestimated what type-checking can discover. Not every type error leads to an exception being thrown; and even if an exception is thrown, it may not seem related to classical types. Remember that a type system can discover null pointer dereferences, incorrect side effects, security errors such as information leakage or SQL injection, partially-initialized data, wrong units of measurement, and many other errors. Every programmer makes errors sometimes and works with other people who do. Even where type-checking does not discover a problem directly, it can indicate code with bad smells, thus revealing problems, improving documentation, and making future maintenance easier.
There are other ways to discover errors, including extensive testing and debugging. You should continue to use these. But type-checking is a good complement to these. Type-checking is more effective for some problems, and less effective for other problems. It can reduce (but not eliminate) the time and effort that you spend on other approaches. There are many important errors that type-checking and other automated approaches cannot find; pluggable type-checking gives you more time to focus on those.
### 31.1.2 When should I use type qualifiers, and when should I use subclasses?
In brief, use subtypes when you can, and use type qualifiers when you cannot use subtypes. For more details, see Section 31.2.3.
## 31.2 Getting started
### 31.2.1 How do I get started annotating an existing program?
See Section 2.4.1.
You should start with a property that matters to you. Think about what aspects of your code cause the most errors, or cost the most time during maintenance, or are the most common to be incorrectly-documented. Focusing on what you care about will give you the best benefits.
When you first start out with the Checker Framework, it’s usually best to get experience with an existing type-checker before you write your own new checker.
Many users are tempted to start with the Nullness Checker (see Chapter 3), since null pointer errors are common and familiar. The Nullness Checker works very well, but be warned of three facts that make the absence of null pointer exceptions challenging to verify.
1. Dereferences happen throughout your codebase, so there are a lot of potential problems. By contrast, fewer lines of code are related to locking, regular expressions, etc., so those properties are easier to check.
2. Programmers use null for many different purposes. More seriously, programmers write run-time tests against null, and those are difficult for any static analysis to capture.
3. The Nullness Checker interacts with initialization and map keys.
If null pointer exceptions are most important to you, then by all means use the Nullness Checker. But if you just want to try some type-checker, there are others that are easier to use.
We do not recommend indiscriminately running all the checkers on your code. The reason is that each one has a cost: not just at compile time, but also in terms of code clutter and human time to maintain the annotations. If the property is important to you, is difficult for people to reason about, or has caused problems in the past, then you should run that checker. For other properties, the benefits may not repay the effort to use it. You will be the best judge of this for your own code, of course.
The Linear Checker (see Chapter 18) has not been extensively tested. The IGJ Checker (see Chapter 19), Javari Checker (see Chapter 20), and some of the third-party checkers (see Chapter 23) have known bugs that limit their usability. (Report the ones that affect you, and the Checker Framework developers will prioritize fixing them.)
### 31.2.3 Should I use pluggable types or Java subtypes?
For some programming tasks, you can use either a Java subclass or a type qualifier. As an example, suppose that your code currently uses String to represent an address. You could use Java subclasses by creating a new Address class and refactoring your code to use it, or you could use type qualifiers by creating an @Address annotation and applying it to some uses of String in your code. As another example, suppose that your code currently uses MyClass in two different ways that should not interact with one another. You could use Java subclasses by changing MyClass into an interface or abstract class, defining two subclasses, and ensuring that neither subclass ever refers to the other subclass nor to the parent class.
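The type-qualifier route from the address example can be sketched as follows. The @Address annotation here is hypothetical, and a real qualifier would additionally carry Checker Framework meta-annotations (such as @SubtypeOf) so that a checker could enforce it; this stub only shows that existing String-based code keeps compiling unchanged:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

// Hypothetical qualifier; a real one would also need Checker Framework
// meta-annotations (e.g., @SubtypeOf) to be enforceable by a checker.
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
@interface Address {}

public class AddressDemo {
    // The run-time type is still String; only the static qualifier is new.
    static @Address String normalize(@Address String addr) {
        return addr.trim();
    }

    public static void main(String[] args) {
        System.out.println(normalize("  221B Baker St  "));
    }
}
```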
If Java subclasses solve your problem, then that is probably better. We do not encourage you to use type qualifiers as a poor substitute for classes. An advantage of using classes is that the Java type-checker always runs; by contrast, it is possible to forget to run the pluggable type-checker. However, here are some reasons type qualifiers may be a better choice.
Backward compatibility
Using a new class may make your code incompatible with existing libraries or clients. Brian Goetz expands on this issue in an article on the pseudo-typedef antipattern [Goe06]. Even if compatibility is not a concern, a code change may introduce bugs, whereas adding annotations does not change the run-time behavior. It is possible to add annotations to existing code, including code you do not maintain or cannot change. For code that strictly cannot be changed, you can add annotations in comments (see Section 27.2.1), or you can write library annotations (see Chapter 28).
Type annotations can be applied to primitives and to final classes such as String, which cannot be subclassed.
Richer semantics and new supertypes
Type qualifiers permit you to remove operations, with a compile-time guarantee. An example is that an immutable version of a type prohibits calling mutator methods (see Chapters 19 and 20). More generally, type qualifiers permit creating a new supertype, not just a subtype, of an existing Java type.
More precise type-checking
The Checker Framework is able to verify the correctness of code that the Java type-checker would reject. Here are a few examples.
• It uses a dataflow analysis to determine a more precise type for variables after conditional tests or assignments.
• It treats certain Java constructs more precisely, such as reflection (see Chapter 21).
• It includes special-case logic for type-checking specific methods, such as the Nullness Checker’s treatment of Map.get.
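For instance, the special-case handling of Map.get means that a common idiom like the following can be verified, even though get’s declared return type is nullable. This is plain JDK code; the comments sketch the checker’s reasoning:

```java
import java.util.HashMap;
import java.util.Map;

public class MapGetDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("greeting", "hello");
        if (map.containsKey("greeting")) {
            // Java's types say get() may return null; the Nullness
            // Checker's map-key reasoning proves it is non-null here,
            // so no warning is issued for the dereference.
            System.out.println(map.get("greeting").length());
        }
    }
}
```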
Efficiency
Type qualifiers have no run-time representation. Therefore, there is no space overhead for separate classes or for wrapper classes for primitives. There is no run-time overhead due to extra dereferences or dynamic dispatch for methods that could otherwise be statically dispatched.
Less code clutter
The programmer does not have to convert primitive types to wrappers, which would make the code both uglier and slower. Thanks to defaults and type inference (Section 25.3.1), you may be able to write and think in terms of the original Java type, rather than having to explicitly write one of the subtypes in all locations.
## 31.3 Usability of pluggable type-checking
### 31.3.1 Are type annotations easy to read and write?
The papers “Practical pluggable types for Java” [PAC+08] and “Building and using pluggable type-checkers” [DDE+11] discuss case studies in which programmers found type annotations to be natural to read and write. The code continued to feel like Java, and the type-checking errors were easy to comprehend and often led to real bugs.
You don’t have to take our word for it, though. You can try the Checker Framework for yourself.
The difficulty of adding and verifying annotations depends on your program. If your program is well-designed and -documented, then skimming the existing documentation and writing type annotations is extremely easy. Otherwise, you may find yourself spending a lot of time trying to understand, reverse-engineer, or fix bugs in your program, and then just a moment writing a type annotation that describes what you discovered. This process inevitably improves your code. You must decide whether it is a good use of your time. For code that is not causing trouble now and is unlikely to do so in the future (the code is bug-free, and you do not anticipate changing it or using it in new contexts), then the effort of writing type annotations for it may not be justified.
### 31.3.2 Will my code become cluttered with type annotations?
In summary: annotations do not clutter code; they are used much less frequently than generic types, which Java programmers find acceptable; and they reduce the overall volume of documentation that a codebase needs.
As with any language feature, it is possible to write ugly code that over-uses annotations. However, in normal use, very few annotations need to be written. Figure 1 of the paper Practical pluggable types for Java [PAC+08] reports data for over 350,000 lines of type-annotated code:
• 1 annotation per 62 lines for nullness annotations (@NonNull, @Nullable, etc.)
• 1 annotation per 1736 lines for interning annotations (@Interned)
• 1 annotation per 27 lines for immutability annotations (IGJ type system)
These numbers are for annotating existing code. New code that is written with the type annotation system in mind is cleaner and more correct, so it requires even fewer annotations.
Each annotation that a programmer writes replaces a sentence or phrase of English descriptive text that would otherwise have been written in the Javadoc. So, use of annotations actually reduces the overall size of the documentation, at the same time as making it machine-processable and less ambiguous.
### 31.3.3 Will using the Checker Framework slow down my program? Will it slow down the compiler?
Using the Checker Framework has no impact on the execution of your program: the compiler emits the identical bytecodes as the Java 8 compiler and so there is no run-time effect. Because there is no run-time representation of type qualifiers, there is no way to use reflection to query the qualifier on a given object, though you can use reflection to examine a class/method/field declaration.
Using the Checker Framework does increase compilation time. In theory it should only add a few percent overhead, but our current implementation can double the compilation time — or more, if you run many pluggable type-checkers at once. This is especially true if you run pluggable type-checking on every file (as we recommend) instead of just on the ones that have recently changed. Nonetheless, compilation with pluggable type-checking still feels like compilation, and you can do it as part of your normal development process.
### 31.3.4 How do I shorten the command line when invoking a checker?
The compile options to javac can be a pain to type; for example, javac -processor org.checkerframework.checker.nullness.NullnessChecker .... See Section 2.2.3 for a way to avoid the need for the -processor command-line option.
## 31.4 How to handle warnings and errors
### 31.4.1 What should I do if a checker issues a warning about my code?
For a discussion of this issue, see Section 2.4.6.
### 31.4.2 What does a certain Checker Framework warning message mean?
Search through this manual for the text of the warning message. Oftentimes the manual explains it. If not, ask on the mailing list.
### 31.4.3 Can a pluggable type-checker guarantee that my code is correct?
Each checker looks for certain errors. You can use multiple checkers to detect more errors in your code, but you will never have a guarantee that your code is completely bug-free.
If the type-checker issues no warning, then you have a guarantee that your code is free of some particular error. There are some limitations to the guarantee.
Most importantly, if you run a pluggable checker on only part of a program, then you only get a guarantee that those parts of the program are error-free. For example, suppose you have type-checked a framework that clients are intended to extend. You should recommend that clients run the pluggable checker. There is no way to force users to do so, so you may want to retain dynamic checks or use other mechanisms to detect errors.
Section 2.3 states other limitations to a checker’s guarantee, such as regarding concurrency. Java’s type system is also unsound in certain situations, such as for arrays and casts (however, the Checker Framework is sound for arrays and casts). Java uses dynamic checks in some places where it is unsound, so that errors are thrown at run time. The pluggable type-checkers do not currently have built-in dynamic checkers to check for the places where they are unsound. Writing dynamic checkers would be an interesting and valuable project.
Other types of dynamism in a Java application do not jeopardize the guarantee, because the type-checker is conservative. For example, at a method call, dynamic dispatch chooses some implementation of the method, but it is impossible to know at compile time which one it will be. The type-checker gives a guarantee no matter what implementation of the method is invoked.
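To illustrate, the checker reasons about a call using the declared type of the receiver, which covers every possible overriding implementation. The @Nullable annotation below is a placeholder declaration so that the file compiles without the Checker Framework; the comments describe what a Nullness Checker would conclude:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

// placeholder so this file compiles on its own
@Target(ElementType.TYPE_USE)
@interface Nullable {}

public class DispatchDemo {
    static class Base {
        @Nullable String name() { return null; }
    }
    static class Sub extends Base {
        @Override String name() { return "sub"; }  // stronger: never null
    }

    public static void main(String[] args) {
        Base b = new Sub();
        // Even though the dynamic type is Sub, the checker uses the
        // declared type Base, whose name() may return null; it would
        // warn about an unguarded dereference of b.name(), which is
        // why the guard below is needed for the code to verify.
        String n = b.name();
        System.out.println(n == null ? 0 : n.length());
    }
}
```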
Even if a pluggable checker cannot give an ironclad guarantee of correctness, it is still useful. It can find errors, exclude certain types of possible problems (e.g., restricting the possible class of problems), improve documentation, and increase confidence in your software.
### 31.4.4 What guarantee does the Checker Framework give for concurrent code?
The Lock Checker (see Chapter 6) offers a way to detect and prevent certain concurrency errors.
By default, the Checker Framework assumes that the code that it is checking is sequential: that is, there are no concurrent accesses from another thread. This means that the Checker Framework is unsound for concurrent code, in the sense that it may fail to issue a warning about errors that occur only when the code is running in a concurrent setting. For example, the Nullness Checker issues no warning for this code:
if (myobject.myfield != null) {
myobject.myfield.toString();
}
This code is safe when run on its own. However, in the presence of multithreading, the call to toString may fail because another thread may set myobject.myfield to null after the nullness check in the if condition, but before the if body is executed.
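A common way to make such code robust is to copy the field into a local variable, which no other thread can modify; a minimal sketch:

```java
public class SafeDeref {
    static volatile String field = "hello";

    static int fieldLength() {
        String local = field;      // snapshot the field; no other thread
                                   // can change the local variable
        if (local != null) {
            return local.length(); // safe even if another thread sets
                                   // `field` to null right now
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(fieldLength());
    }
}
```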
If you supply the -AconcurrentSemantics command-line option, then the Checker Framework assumes that any field can be changed at any time. This limits the amount of flow-sensitive type qualifier refinement (Section 25.4) that the Checker Framework can do.
### 31.4.5 How do I make compilation succeed even if a checker issues errors?
Section 2.2 describes the -Awarns command-line option that turns checker errors into warnings, so type-checking errors will not cause javac to exit with a failure status.
### 31.4.6 Why does the checker always say there are 100 errors or warnings?
By default, javac only reports the first 100 errors or warnings. Furthermore, once javac encounters an error, it doesn’t try compiling any more files (but does complete compilation of all the ones that it has started so far).
To see more than 100 errors or warnings, use the javac options -Xmaxerrs and -Xmaxwarns. To convert Checker Framework errors into warnings so that javac will process all your source files, use the option -Awarns. See Section 2.2 for more details.
### 31.4.7 Why does the Checker Framework report an error regarding a type I have not written in my program?
Sometimes, a Checker Framework warning message will mention a type you have not written in your program. This is typically because a default has been applied where you did not write a type; see Section 25.3.1. In other cases, this is because flow-sensitive type refinement has given an expression a more specific type than you wrote or than was defaulted; see Section 25.4.
### 31.4.8 How can I do run-time monitoring of properties that were not statically checked?
Some properties are not checked statically (see Chapter 26 for reasons that code might not be statically checked). In such cases, it would be desirable to check the property dynamically, at run time. Currently, the Checker Framework has no support for adding code to perform run-time checking.
Adding such support would be an interesting and valuable project. An example would be an option that causes the Checker Framework to automatically insert a run-time check anywhere that static checking is suppressed. If you are able to add run-time verification functionality, we would gladly welcome it as a contribution to the Checker Framework.
Some checkers have library methods that you can explicitly insert in your source code. Examples include the Nullness Checker’s NullnessUtils.castNonNull method (see Section 3.4.1) and the Regex Checker’s RegexUtil class (see Section 9.2.4). But, it would be better to have more general support that does not require the user to explicitly insert method calls.
## 31.5 Syntax of type annotations
There is also a separate FAQ for the type annotations syntax (http://types.cs.washington.edu/jsr308/current/jsr308-faq.html).
### 31.5.1 What is a “receiver”?
The receiver of a method is the this formal parameter, sometimes also called the “current object”. Within the method declaration, this is used to refer to the receiver formal parameter. At a method call, the receiver actual argument is written before the method name.
For example, the method compareTo takes two formal parameters: the receiver and one explicit argument. At a call site like x.compareTo(y), the two arguments are x and y. It is desirable to be able to annotate the types of both of these formal parameters, and doing so is supported both by Java’s type annotations syntax and by the Checker Framework.
A type annotation on the receiver is treated exactly like a type annotation on any other formal parameter. At each call site, the type of the argument must be consistent with (a subtype of or equal to) the declaration of the corresponding formal parameter. If not, the type-checker issues a warning.
Here is an example. Suppose that @A Object is a supertype of @B Object in the following declaration:
class MyClass {
void requiresA(@A MyClass this) { ... }
void requiresB(@B MyClass this) { ... }
}
Then the behavior of four different invocations is as follows:
@A MyClass myA = ...;
@B MyClass myB = ...;
myA.requiresA() // OK
myA.requiresB() // compile-time error
myB.requiresA() // OK
myB.requiresB() // OK
The invocation myA.requiresB() does not type-check because the actual argument’s type is not a subtype of the formal parameter’s type.
The constructor of a top-level class does not have a receiver. The constructor of an inner class does have a receiver, whose type is the same as the containing outer class. The receiver is distinct from the object being constructed. In a method of a top-level class, the receiver is named this. In a constructor of an inner class, the receiver is named Outer.this and the result is named this.
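The inner-class case can be seen in plain Java (no qualifiers needed): the constructor’s receiver is the enclosing instance, written Outer.this, and is distinct from the object under construction:

```java
public class Outer {
    private final String label = "outer instance";

    class Inner {
        // The receiver of this constructor is the enclosing Outer
        // instance, named `Outer.this`; the object being constructed
        // is distinct and is referred to as `this`.
        Inner(Outer Outer.this) {
            System.out.println("constructed inside: " + Outer.this.label);
        }
    }

    public static void main(String[] args) {
        Outer o = new Outer();
        Inner unused = o.new Inner();  // `o` is the receiver actual argument
    }
}
```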
### 31.5.2 What is the meaning of an annotation after a type, such as @NonNull Object @Nullable?
In a type such as @NonNull Object @Nullable [], it may appear that the @Nullable annotation is written after the type Object. In fact, @Nullable modifies []. See the next FAQ, about array annotations (Section 31.5.3).
### 31.5.3 What is the meaning of array annotations such as @NonNull Object @Nullable []?
You should parse this as: (@NonNull Object) (@Nullable []). Each annotation precedes the component of the type that it qualifies.
Thus, @NonNull Object @Nullable [] is a possibly-null array of non-null objects. Note that the first token in the type, “@NonNull”, applies to the element type Object, not to the array type as a whole. The annotation @Nullable applies to the array ([]).
Similarly, @Nullable Object @NonNull [] is a non-null array of possibly-null objects.
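To make the parsing concrete, here is a compilable sketch. @NonNull and @Nullable are placeholder declarations (no checking happens without the Checker Framework), so the comments describe what a checker would permit:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

// placeholders so the file compiles without the Checker Framework
@Target(ElementType.TYPE_USE) @interface NonNull {}
@Target(ElementType.TYPE_USE) @interface Nullable {}

public class ArrayAnnoDemo {
    // possibly-null array of non-null strings: assigning null is legal
    static @NonNull String @Nullable [] a = null;

    // non-null array of possibly-null strings: elements may be null,
    // but the array reference itself must never be
    static @Nullable String @NonNull [] b = new String[] { null };

    public static void main(String[] args) {
        System.out.println(a == null);     // the array itself may be null
        System.out.println(b[0] == null);  // an element may be null
    }
}
```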
Some older tools interpret a declaration like @NonEmpty String[] var as “non-empty array of strings”. This is in conflict with the Java type annotations specification, which defines it as meaning “array of non-empty strings”. If you use one of these older tools, you will find this incompatibility confusing. You will have to live with it until the older tool is updated to conform to the Java specification, or until you transition to a newer tool that conforms to the Java specification.
### 31.5.4 What is the meaning of varargs annotations such as @English String @NonEmpty ...?
Varargs annotations are treated similarly to array annotations. (A way to remember this is that when you write a varargs formal parameter such as void method(String... x) {}, the Java compiler generates a method that takes an array of strings; whenever your source code calls the method with multiple arguments, the Java compiler packages them up into an array before calling the method.)
Either of these annotations
void method(String @NonEmpty [] x) {}
void method(String @NonEmpty ... x) {}
applies to the array: the method takes a non-empty array of strings, or the varargs list must not be empty.
Either of these annotations
void method(@English String [] x) {}
void method(@English String ... x) {}
applies to the element type. The annotation documents that the method takes an array of English strings.
### 31.5.5 What is the meaning of a type qualifier at a class declaration?
Writing an annotation on a class declaration makes that annotation implicit for all uses of the class (see Section 25.3). If you write class @MyQual MyClass { ... }, then every unannotated use of MyClass is @MyQual MyClass. A user is permitted to strengthen the type by writing a more restrictive annotation on a use of MyClass, such as @MyMoreRestrictiveQual MyClass.
### 31.5.6 Why shouldn’t a qualifier apply to both types and declarations?
It is bad style for an annotation to apply to both types and declarations. In other words, every annotation should have a @Target meta-annotation, and the @Target meta-annotation should list either only declaration locations or only type annotations. (It’s OK for an annotation to target both ElementType.TYPE_PARAMETER and ElementType.TYPE_USE, but no other declaration location along with ElementType.TYPE_USE.)
Sometimes, it may seem tempting for an annotation to apply to both type uses and (say) method declarations. Here is a hypothetical example:
“Each Widget type may have a @Version annotation. I wish to prove that versions of widgets don’t get assigned to incompatible variables, and that older code does not call newer code (to avoid problems when backporting).
A @Version annotation could be written like so:
@Version("2.0") Widget createWidget(String value) { ... }
@Version("2.0") on the method could mean that the createWidget method only appears in the 2.0 version. @Version("2.0") on the return type could mean that the returned Widget should only be used by code that uses the 2.0 API of Widget. It should be possible to specify these independently, such as a 2.0 method whose returned value permits invocations of the 1.0 API.”
Both of these are type properties and should be specified with type annotations. No method annotation is necessary or desirable. The best way to require that the receiver has a certain property is to use a type annotation on the receiver of the method. (Slightly more formally, the property being checked is compatibility between the annotation on the type of the formal parameter receiver and the annotation on the type of the actual receiver.) If you do not know what “receiver” means, see the next question.
Another example of a type-and-declaration annotation that represents poor design is JCIP’s @GuardedBy annotation [GPB+06]. As discussed in Section 6.3.1, it means two different things when applied to a field or a method. To reduce confusion and increase expressiveness, the Lock Checker (see Chapter 6) uses the @Holding annotation for one of these meanings, rather than overloading @GuardedBy with two distinct meanings.
## 31.6 Semantics of type annotations
### 31.6.1 Why are the type parameters to List and Map annotated as @NonNull?
The annotation on java.util.Collection only allows non-null elements:
public interface Collection<E extends @NonNull Object> {
...
}
Thus, you will get a type error if you write code like Collection<@Nullable Object>. A nullable type parameter is also forbidden for certain other collections, including AbstractCollection, List, Map, and Queue.
The extends @NonNull Object bound is a direct consequence of the design of the collections classes; it merely formalizes the Javadoc specification. The Javadoc for Collection states:
Some list implementations have restrictions on the elements that they may contain. For example, some implementations prohibit null elements, …
Here are some consequences of the requirement to detect all nullness errors at compile time. If even one subclass of a given collection class may prohibit null, then the collection class and all its subclasses must prohibit null. Conversely, if a collection class is specified to accept null, then all its subclasses must honor that specification.
The Checker Framework’s annotations make apparent a flaw in the JDK design, and help you to avoid problems that the flaw might cause.
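The run-time behavior that motivates this bound can be demonstrated with plain JDK code (the Checker Framework is not involved here):

```java
import java.util.ArrayList;
import java.util.PriorityQueue;

public class NullElementDemo {
    public static void main(String[] args) {
        ArrayList<String> list = new ArrayList<>();
        list.add(null);  // ArrayList is documented to accept null elements
        System.out.println("ArrayList accepted null: " + list.contains(null));

        try {
            // PriorityQueue is specified to reject null elements
            new PriorityQueue<String>().add(null);
            System.out.println("PriorityQueue accepted null");
        } catch (NullPointerException e) {
            System.out.println("PriorityQueue threw NullPointerException");
        }
    }
}
```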
##### Justification from type theory
Suppose B is a subtype of A. Then an overriding method in B must have a stronger (or equal) signature than the overridden method in A. In a stronger signature, the formal parameter types may be supertypes, and the return type may be a subtype. Here are examples:
class A { @NonNull Object m1(@NonNull Object arg) { ... } }
class B extends A { @Nullable Object m1(@NonNull Object arg) { ... } } // error!
class C extends A { @NonNull Object m1(@Nullable Object arg) { ... } } // OK
class D { @Nullable Object m2(@Nullable Object arg) { ... } }
class E extends D { @NonNull Object m2(@Nullable Object arg) { ... } } // OK
class F extends D { @Nullable Object m2(@NonNull Object arg) { ... } } // error!
According to these rules, since some subclasses of Collection do not permit nulls, then Collection cannot either:
// does not permit null elements
class PriorityQueue<E> implements Collection<E> {
...
}
// must not permit null elements, or PriorityQueue would not be a subtype of Collection
interface Collection<E> {
...
}
##### Justification from checker behavior
Suppose that you changed the bound in the Collection declaration to extends @Nullable Object. Then, the checker would issue no warning for this method:
static void addNull(Collection l) {
  l.add(null);
}
However, calling this method can result in a null pointer exception, for instance caused by the following code:
addNull(new PriorityQueue());
Therefore, the bound must remain as extends @NonNull Object.
By contrast, this code is OK because ArrayList is documented to support null elements:
static void addNull(ArrayList l) {
  l.add(null);
}
Therefore, the upper bound in ArrayList is extends @Nullable Object. Any subclass of ArrayList must also support null elements.
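The run-time behavior that these bounds formalize can be observed directly. The following sketch (the class name NullElementDemo is ours, not from the manual) contrasts the two collection classes: ArrayList is specified to accept null elements, while PriorityQueue is specified to throw NullPointerException for them.

```java
import java.util.ArrayList;
import java.util.PriorityQueue;

public class NullElementDemo {
    // Returns true iff ArrayList accepts a null element and PriorityQueue rejects one.
    static boolean demo() {
        ArrayList<String> list = new ArrayList<>();
        list.add(null);                           // ArrayList is documented to accept null
        boolean listAcceptsNull = list.get(0) == null;

        boolean queueRejectsNull;
        try {
            new PriorityQueue<String>().add(null); // documented to throw NullPointerException
            queueRejectsNull = false;
        } catch (NullPointerException e) {
            queueRejectsNull = true;
        }
        return listAcceptsNull && queueRejectsNull;
    }

    public static void main(String[] args) {
        System.out.println(demo()
                ? "ArrayList accepts null; PriorityQueue rejects it"
                : "unexpected behavior");
    }
}
```

This is exactly why the Nullness Checker cannot permit nullable elements at the Collection level: the static type Collection gives no hint which behavior the run-time object has.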
##### Suppressing warnings
Suppose your program has a list variable, and you know that any list referenced by that variable will definitely support null elements. Then, you can suppress the warning:
@SuppressWarnings("nullness:generic.argument") // any list passed to this
                                               // method will support null elements
void myMethod(List<@Nullable Object> list) {
  ...
}
You need to use @SuppressWarnings("nullness:generic.argument") whenever you use a collection that may contain null elements in contradiction to its documentation. Fortunately, such uses are relatively rare.
For more details on suppressing nullness warnings, see Section 3.4.
### 31.6.2 How can I handle typestate, or phases of my program with different data properties?
Sometimes, your program works in phases that have different behavior. For example, you might have a field that starts out null and becomes non-null at some point during execution, such as after a method is called. You can express this property as follows:
1. Annotate the field type as @MonotonicNonNull.
2. Annotate the method that sets the field as @EnsuresNonNull("myFieldName"). (If method m1 calls method m2, which actually sets the field, then you would probably write this annotation on both m1 and m2.)
3. Annotate any method that depends on the field being non-null as @RequiresNonNull("myFieldName"). The type-checker will verify that such a method is only called when the field isn’t null — that is, the method is only called after the setting method.
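The three steps above can be sketched as a small class. The annotation names are the real Checker Framework ones, shown here in comments (in the spirit of the annotations-in-comments feature of Section 27.2) so that the demo compiles without checker-qual on the classpath; the class, field, and method names are hypothetical.

```java
public class Connection {
    /*@MonotonicNonNull*/ private String sessionId; // step 1: starts null, set at most once

    /*@EnsuresNonNull("sessionId")*/                // step 2: this method sets the field
    public void open() {
        sessionId = "session-42";
    }

    /*@RequiresNonNull("sessionId")*/               // step 3: callers must have called open() first
    public String describe() {
        return "connected as " + sessionId;
    }

    public static void main(String[] args) {
        Connection c = new Connection();
        c.open();                          // establishes the precondition
        System.out.println(c.describe());  // safe: sessionId is non-null here
    }
}
```

With the annotations uncommented and checker-qual on the classpath, the Nullness Checker would reject any call to describe() that is not preceded by a call to open().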
You can also use a typestate checker (see Chapter 23.1), but they have not been as extensively tested.
### 31.6.3 Why are explicit and implicit bounds defaulted differently?
The following two bits of code have the same semantics under Java, but are treated differently by the Checker Framework’s CLIMB-to-top defaulting rules (Section 25.3.2):
class MyClass<T> { ... }
class MyClass<T extends Object> { ... }
The difference is the annotation on the upper bound of the type argument T. They are treated as follows.
class MyClass<T> == class MyClass<T extends @TOPTYPEANNO Object> { ... }
class MyClass<T extends Object> == class MyClass<T extends @DEFAULTANNO Object> { ... }
@TOPTYPEANNO is the top annotation in the type qualifier hierarchy. For example, for the nullness type system, the top type annotation is @Nullable, as shown in Figure 3.1. @DEFAULTANNO is the default annotation for the type system. For example, for the nullness type system, the default type annotation is @NonNull.
In some type systems, the top qualifier and the default are the same. For such type systems, the two code snippets shown above are treated the same. An example is the regular expression type system; see Figure 9.1.
The CLIMB-to-top rule reduces the code edits required to annotate an existing program, and it treats types written in the program consistently.
When a user writes no upper bound, as in class C<T> { ... }, then Java permits the class to be instantiated with any type parameter. The Checker Framework behaves exactly the same, no matter what the default is for a particular type system – and no matter whether the user has changed the default locally.
When a user writes an upper bound, as in class C<T extends OtherClass> { ... }, then the Checker Framework treats this occurrence of OtherClass exactly like any other occurrence, and applies the usual defaulting rules. Use of Object is treated consistently with all other types in this location and all other occurrences of Object in the program.
It is uncommon for a user to write Object as an upper bound with no type qualifier: class C<T extends Object> { ... }. It is better style to write no upper bound or to write an explicit type annotation on Object.
## 31.7 Creating a new checker
### 31.7.1 How do I create a new checker?
In addition to using the checkers that are distributed with the Checker Framework, you can write your own checker to check specific properties that you care about. Thus, you can find and prevent the bugs that are most important to you.
Chapter 29 gives complete details regarding how to write a checker. It also suggests places to look for more help, such as the Checker Framework API documentation (Javadoc) and the source code of the distributed checkers.
To whet your interest and demonstrate how easy it is to get started, here is an example of a complete, useful type-checker.
@SubtypeOf(Unqualified.class)
@Target({ElementType.TYPE_USE, ElementType.TYPE_PARAMETER})
public @interface Encrypted { }
Section 22.2 explains this checker and tells you how to run it.
### 31.7.2 Why is there no declarative syntax for writing type rules?
A type system implementer can declaratively specify the type qualifier hierarchy (Section 29.3.2) and the type introduction rules (Section 29.4.1). However, the Checker Framework uses a procedural syntax for specifying type-checking rules (Section 29.6). A declarative syntax might be more concise, more readable, and more verifiable than a procedural syntax.
We have not found the procedural syntax to be the most important impediment to writing a checker.
Previous attempts to devise a declarative syntax for realistic type systems have failed; see a technical paper [PAC+08] for a discussion. When an adequate syntax exists, then the Checker Framework can be extended to support it.
## 31.8 Relationship to other tools
### 31.8.1 Why not just use a bug detector (like FindBugs)?
Pluggable type-checking finds more bugs than a bug detector does, for any given variety of bug.
A bug detector like FindBugs [HP04, HSP05], Jlint [Art01], or PMD [Cop05] aims to find some of the most obvious bugs in your program. It uses a lightweight analysis, then uses heuristics to discard some of its warnings. Thus, even if the tool prints no warnings, your code might still have errors — maybe the analysis was too weak to find them, or the tool’s heuristics classified the warnings as likely false positives and discarded them.
A type-checker aims to find all the bugs (of certain varieties). It requires you to write type qualifiers in your program, or to use a tool that infers types. Thus, it requires more work from the programmer, and in return it gives stronger guarantees.
Each tool is useful in different circumstances, depending on how important your code is and your desired level of confidence in your code. For more details on the comparison, see Section 32.5. For a case study that compared the nullness analysis of FindBugs, Jlint, PMD, and the Checker Framework, see section 6 of the paper “Practical pluggable types for Java” [PAC+08].
### 31.8.2 How does the Checker Framework compare with Eclipse’s null analysis?
Eclipse comes with a null analysis that can detect potential null pointer errors in your code. Eclipse’s built-in analysis differs from the Checker Framework in several respects.
The Checker Framework’s Nullness Checker (see Chapter 3) is more precise: it does a deeper semantic analysis, so it issues fewer false positives than Eclipse. For example, the Nullness Checker handles initialization and map key checking, it supports method pre- and post-conditions, and it includes a powerful dataflow analysis.
Eclipse assumes that all code is multi-threaded, which cripples its local type inference. By contrast, the Checker Framework allows the user to specify whether code will be run concurrently or not via the -AconcurrentSemantics command-line option (see Section 31.4.4).
The Checker Framework is easier to run in integration scripts or in environments where not all developers are using Eclipse.
Eclipse handles only nullness properties and is not extensible, whereas the Checker Framework comes with over 20 type-checkers (for a list, see Chapter 1) and is extensible to more properties.
There are also some benefits to Eclipse’s Null Analysis. It is faster than the Checker Framework, in part because it is less featureful. It is built into Eclipse, so you do not have to download and install a separate Eclipse plugin as you do for the Checker Framework (see Section 30.6.2). Its IDE integration is tighter and slicker.
(If you know of other differences, please let us know at checker-framework-dev@googlegroups.com so we can update the manual.)
### 31.8.3 How does pluggable type-checking compare with JML?
JML, the Java Modeling Language [LBR06], is a language for writing formal specifications.
JML aims to be more expressive than pluggable type-checking. A programmer can write a JML specification that describes arbitrary facts about program behavior. Then, the programmer can use formal reasoning or a theorem-proving tool to verify that the code meets the specification. Run-time checking is also possible. By contrast, pluggable type-checking can express a more limited set of properties about your program. Pluggable type-checking annotations are more concise and easier to understand.
JML is not as practical as pluggable type-checking. The JML toolset is less mature. For instance, if your code uses generics or other features of Java 5, then you cannot use JML. However, JML has a run-time checker, which the Checker Framework currently lacks.
### 31.8.4 Is the Checker Framework an official part of Java?
The Checker Framework is not an official part of Java. The Checker Framework relies on type annotations, which are part of Java 8. See the Type Annotations (JSR 308) FAQ for more details.
### 31.8.5 What is the relationship between the Checker Framework and JSR 305?
JSR 305 aimed to define official Java names for some annotations, such as @NonNull and @Nullable. However, it did not aim to precisely define the semantics of those annotations nor to provide a reference implementation of an annotation processor that validated their use; as a result, JSR 305 was of limited utility as a specification. JSR 305 has been abandoned; there has been no activity by its expert group since 2009.
By contrast, the Checker Framework precisely defines the meaning of a set of annotations and provides powerful type-checkers that validate them. However, the Checker Framework is not an official part of the Java language; it chooses one set of names, but another tool might choose other names.
In the future, the Java Community Process might revitalize JSR 305 or create a replacement JSR to standardize the names and meanings of specific annotations, after there is more experience with their use in practice.
The Checker Framework defines annotations @NonNull and @Nullable that are compatible with annotations defined by JSR 305, FindBugs, IntelliJ, and other tools; see Section 3.7.
### 31.8.6 What is the relationship between the Checker Framework and JSR 308?
JSR 308, also known as the Type Annotations specification, dictates the syntax of type annotations in Java SE 8: how they are expressed in the Java language.
JSR 308 does not define any type annotations such as @NonNull, and it does not specify the semantics of any annotations. Those tasks are left to third-party tools. The Checker Framework is one such tool.
The Checker Framework makes use of Java SE 8’s type annotation syntax, but the Checker Framework can be used with previous versions of the Java language via the annotations-in-comments feature (Section 27.2.1).
# Chapter 32 Troubleshooting and getting help
The manual might already answer your question, so first please look for your answer in the manual, including this chapter and the FAQ (Chapter 31). If not, you can use the mailing list, checker-framework-discuss@googlegroups.com, to ask other users for help. For archives and to subscribe, see https://groups.google.com/forum/#!forum/checker-framework-discuss. To report bugs, please see Section 32.2. If you want to help out, you can give feedback (including on the documentation), choose a bug and fix it, or select a project from the ideas list at https://github.com/typetools/checker-framework/wiki/Ideas.
## 32.1 Common problems and solutions
• To verify that you are using the compiler you think you are, you can add -version to the command line. For instance, instead of running javac -g MyFile.java, you can run javac -version -g MyFile.java. Then, javac will print out its version number in addition to doing its normal processing.
### 32.1.1 Unable to run the checker, or checker crashes
If you are unable to run the checker, or if the checker or the compiler terminates with an error, then the problem may be a problem with your environment. (If the checker or the compiler crashes, that is a bug in the Checker Framework; please report it. See Section 32.2.) This section describes some possible problems and solutions.
• If you get the error
com.sun.tools.javac.code.Symbol$CompletionFailure: class file for com.sun.source.tree.Tree not found
then you are using the source installation and file tools.jar is not on your classpath. See the installation instructions (Section 1.3).
• If you get an error such as
package org.checkerframework.checker.nullness.qual does not exist
despite no apparent use of import org.checkerframework.checker.nullness.qual.*; in the source code, then perhaps jsr308_imports is set as a Java system property, a shell environment variable, or a command-line option. You should solve this by unsetting the variable/option, since it is deprecated. If the error is
package org.checkerframework.checker.nullness.qual' does not exist
(note the extra apostrophe!), then you have probably misused quoting when supplying the (deprecated) jsr308_imports environment variable.
• If you get an error like one of the following,
...\build.xml:59: Error running ${env.CHECKERFRAMEWORK}\checker\bin\javac.bat compiler
.../bin/javac: Command not found
then the problem may be that you have not set the CHECKERFRAMEWORK environment variable, as described in Section 30.1. Or, maybe you made it a user variable instead of a system variable.
• If you get one of these errors:
The hierarchy of the type ClassName is inconsistent
The type com.sun.source.util.AbstractTypeProcessor cannot be resolved.
It is indirectly referenced from required .class files
then you are likely not using the Checker Framework compiler. Use either $CHECKERFRAMEWORK/checker/bin/javac or one of the alternatives described in Section 30.1.
• If you get the error
java.lang.ArrayStoreException: sun.reflect.annotation.TypeNotPresentExceptionProxy
then an annotation is not present at run time that was present at compile time. For example, maybe when you compiled the code, the @Nullable annotation was available, but it was not available at run time. You can use JDK 8 at run time, or compile with a Java 6 or 7 compiler that will ignore the annotations in comments.
• If you get an error such as
java.lang.NoClassDefFoundError: java/util/Objects
then you are trying to run the compiler using a JDK 6 or earlier JVM. Install and use a Java 7 or 8 JDK, at least for running the Checker Framework.
• A “class file for … not found” error, especially for an inner class in the JDK, is probably due to a JDK version mismatch. To solve the problem, you need to perform compilation with a different Java version or different version of the JDK. In general, Java issues a “class file for … not found” error when your classpath contains code that was compiled with some library, but your classpath does not contain that library itself. For example, suppose that when you run the compiler, you are using JDK 8, but some library on your classpath was compiled against JDK 6 or 7, and the compiled library refers to a class that only appears in JDK 6 or 7. (If only one version of Java existed, or the Checker Framework didn’t try to support multiple different versions of Java, this would not be a problem.)
Examples of classes that were in JDK 7 but were removed in JDK 8 include:
class file for java.util.TimeZone$DisplayNames not found
Examples of classes that were in JDK 6 but were removed in JDK 7 include:
class file for java.io.File$LazyInitialization not found
class file for java.util.Hashtable$EmptyIterator not found
java.lang.NoClassDefFoundError: java/util/Hashtable$EmptyEnumerator
Examples of classes that were not in JDK 7 but were introduced in JDK 8 include:
The type java.lang.Class$ReflectionData cannot be resolved
Examples of classes that were not in JDK 6 but were introduced in JDK 7 include:
class file for java.util.Vector$Itr not found
There are even classes that were introduced within a single JDK release. Classes that appear in JDK 7 release 71 but not in JDK 7 release 45 include:
class file for java.lang.Class$ReflectionData not found
You may be able to solve the problem by running
cd checker
ant jdk.jar bindist
to re-generate files checker/jdk/jdk{7,8}.jar and checker/bin/jdk{7,8}.jar.
That usually works, but if not, then you should recompile the Checker Framework from source rather than using the pre-compiled distribution.
• A NoSuchFieldError such as this:
java.lang.NoSuchFieldError: NATIVE_HEADER_OUTPUT
Field NATIVE_HEADER_OUTPUT was added in JDK 8. The error message suggests that you’re not executing with the right bootclasspath: some classes were compiled with the JDK 8 version and expect the field, but you’re executing the compiler on a JDK without the field.
One possibility is that you are not running the Checker Framework compiler — use javac -version to check this, then use the right one. (Maybe the Checker Framework javac is at the end rather than the beginning of your path.)
If you are using Ant, then one possibility is that the javac compiler is using the same JDK as Ant is using. You can correct this by being sure to use fork="yes" (see Section 30.2) and/or setting the build.compiler property to extJavac.
If you are building from source, you might need to rebuild the Annotation File Utilities before recompiling or using the Checker Framework.
• If you get an error that contains lines like these:
Caused by: java.util.zip.ZipException: error in opening zip file
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:131)
then one possibility is that you have installed the Checker Framework in a directory that contains special characters that Java’s ZipFile implementation cannot handle. For instance, if the directory name contains “+”, then Java 1.6 throws a ZipException, and Java 1.7 throws a FileNotFoundException and prints out the directory name with “+” replaced by blanks.
• If you get an error
error: scoping construct for static nested type cannot be annotated
then you have probably written something like @Nullable java.util.List. The correct syntax is java.util.@Nullable List. But, it’s usually better to add import java.util.List to your source file, so that you can just write @Nullable List. Likewise, you must write Outer.@Nullable StaticNestedClass rather than @Nullable Outer.StaticNestedClass.
Java 8 requires that a type qualifier be written directly on the type that it qualifies, rather than on a scoping mechanism that assists in resolving the name. Examples of scoping mechanisms are package names and outer classes of static nested classes.
The reason for the Java 8 syntax is to avoid syntactic irregularity. When writing a member nested class (also known as an inner class), it is possible to write annotations on both the outer and the inner class: @A1 Outer. @A2 Inner. Therefore, when writing a static nested class, the annotations should go on the same place: Outer. @A3 StaticNested (rather than @ConfusingAnnotation Outer. Nested where @ConfusingAnnotation applies to Outer if Nested is a member class and applies to Nested if Nested is a static class). It’s not legal to write an annotation on the outer class of a static nested class, because neither annotations nor instantiations of the outer class affect the static nested class.
Similar arguments apply when annotating package.Outer.Nested.
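The placement rule can be sketched with a hypothetical TYPE_USE annotation @Anno standing in for @Nullable, so the example compiles without any checker on the classpath (the class names here are ours, not from the manual):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

public class NestedSyntaxDemo {
    @Target(ElementType.TYPE_USE)
    @interface Anno {}

    static class StaticNested {}

    // Correct: the type annotation goes directly on the nested type that it qualifies.
    // Writing "@Anno NestedSyntaxDemo.StaticNested ok;" instead would be a compile error,
    // because the outer name is only a scoping construct for a static nested class.
    NestedSyntaxDemo.@Anno StaticNested ok = new StaticNested();

    public static void main(String[] args) {
        System.out.println("annotation placement is legal");
    }
}
```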
### 32.1.2 Unexpected type-checking results
This section describes possible problems that can lead the type-checker to give unexpected results.
• If the Checker Framework is unable to verify a property that you know is true, then it is helpful to formulate an argument about why the property is true. Recall that the Checker Framework does modular verification, one procedure at a time; it observes the specifications, but not the implementations, of other methods.
If any aspects of your argument are not expressed as annotations, then you may need to write more annotations. If any aspects of your argument are not expressible as annotations, then you may need to extend the type-checker.
• If a checker seems to be ignoring the annotation on a method, then it is possible that the checker is reading the method’s signature from its .class file, but the .class file was not created by the JSR 308 compiler. You can check whether the annotations actually appear in the .class file by using the javap tool.
If the annotations do not appear in the .class file, here are two ways to solve the problem:
• Re-compile the method’s class with the Checker Framework compiler. This will ensure that the type annotations are written to the class file, even if no type-checking happens during that execution.
• Pass the method’s file explicitly on the command line when type-checking, so that the compiler reads its source code instead of its .class file.
• If a checker issues a warning about a property that it accepted (or that was checked) on a previous line, then probably there was a side-effecting method call in between that could invalidate the property. For example, in this code:
if (currentOutgoing != null && !message.isCompleted()) {
currentOutgoing.continueBuffering(message);
}
the Nullness Checker will issue a warning on the second line:
warning: [dereference.of.nullable] dereference of possibly-null reference currentOutgoing
currentOutgoing.continueBuffering(message);
^
If currentOutgoing is a field rather than a local variable, and isCompleted() is not a pure method, then a null pointer dereference can occur at the given location, because isCompleted() might set the field currentOutgoing to null.
If you want to communicate that isCompleted() does not set the field currentOutgoing to null, you can use @Pure, @SideEffectFree, or @EnsuresNonNull on the declaration of isCompleted(); see Sections 25.4.5 and 3.2.2.
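The danger the warning describes is real in plain Java. In the following hypothetical sketch (names modeled on the snippet above), isCompleted() nulls the field, so the earlier null check no longer protects the dereference:

```java
public class SideEffectDemo {
    private StringBuilder currentOutgoing = new StringBuilder();

    // Not pure: it nulls the very field that the caller just checked.
    boolean isCompleted() {
        currentOutgoing = null;
        return false;
    }

    // Mirrors the pattern the Nullness Checker warns about; returns true if an NPE occurred.
    boolean sendThrowsNpe() {
        try {
            if (currentOutgoing != null && !isCompleted()) {
                currentOutgoing.append("x"); // dereference of possibly-null field
            }
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(new SideEffectDemo().sendThrowsNpe()
                ? "NPE: the null check was invalidated by isCompleted()"
                : "no NPE");
    }
}
```

Declaring isCompleted() as @SideEffectFree (and making it actually side-effect-free) would both silence the warning and eliminate the bug.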
• If a checker issues a type-checking error for a call that the library’s documentation states is correct, then maybe that library method has not yet been annotated, so default annotations are being used.
To solve the problem, add the missing annotations to the library (see Chapter 28). Depending on the checker, the annotations might be expressed in the form of stub files (which appear together with the checker’s source code, such as in file checker/src/org/checkerframework/checker/interning/jdk.astub for the Interning Checker) or in the form of annotated libraries (which appear under checker/jdk/, such as at checker/jdk/nullness/src/ for the Nullness Checker).
• If the compiler reports that it cannot find a method from the JDK or another external library, then maybe the stub/skeleton file for that class is incomplete.
To solve the problem, add the missing annotations to the library, as described in the previous item.
The error might take one of these forms:
method sleep in class Thread cannot be applied to given types
cannot find symbol: constructor StringBuffer(StringBuffer)
• If you get an error related to a bounded type parameter and a literal such as null, the problem may be missing defaulting. Here is an example:
mypackage/MyClass.java:2044: warning: incompatible types in assignment.
T retval = null;
^
found : null
required: T extends @MyQualifier Object
A value that can be assigned to a variable of type T extends @MyQualifier Object only if that value is of the bottom type, since the bottom type is the only one that is a subtype of every subtype of T extends @MyQualifier Object. The value null satisfies this for the Java type system, and it must be made to satisfy it for the pluggable type system as well. The typical way to address this is to write the meta-annotation @ImplicitFor(trees=Tree.Kind.NULL_LITERAL) on the definition of the bottom type qualifier.
• An error such as
MyFile.java:123: error: incompatible types in argument.
^
found : String
required: ? extends Object
may stem from use of raw types. (“String” might be a different type and might have type annotations.) If your declaration was
DefaultListModel myModel;
then it should be
DefaultListModel<String> myModel;
Running the regular Java compiler with the -Xlint:unchecked command-line option will help you to find and fix problems such as raw types.
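This hypothetical sketch shows why the raw-type warning matters at run time: a raw alias lets an ill-typed element into the list, and the failure surfaces later as a ClassCastException far from the offending add.

```java
import java.util.ArrayList;
import java.util.List;

public class RawTypeDemo {
    @SuppressWarnings({"unchecked", "rawtypes"})
    static boolean rawTypeCausesCce() {
        List<String> strings = new ArrayList<>();
        List raw = strings;              // raw type: javac -Xlint:unchecked flags the next line
        raw.add(Integer.valueOf(42));    // unchecked call pollutes the List<String>
        try {
            String s = strings.get(0);   // implicit checkcast to String fails here
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(rawTypeCausesCce()
                ? "ClassCastException caused by raw-type use"
                : "no error");
    }
}
```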
• The error
error: annotation type not applicable to this kind of declaration
... List<@NonNull String> ...
indicates that you are using a definition of @NonNull that is a declaration annotation, which cannot be used in that syntactic location. For example, many legacy annotations such as those listed in Figure 3.2 are declaration annotations. You can fix the problem by instead using a definition of @NonNull that is a type annotation, such as the Checker Framework’s annotations; often this only requires changing an import statement. Alternately, if you wish to continue using the legacy annotations in declaration locations, see Section 27.2.5.
• This compile-time error
unknown enum constant java.lang.annotation.ElementType.TYPE_USE
indicates that you are compiling using a Java 6 or 7 JDK, but your code references an enum constant that is only defined in the Java 8 JDK. The problem might be that your code uses a library that references the enum constant. In particular, the type annotations shipped with the Checker Framework reference ElementType.TYPE_USE. You can use the Checker Framework, but still compile and run your code in a Java 6 or 7 JVM, by following the instructions in Section 27.2.
If you ignore the error and run your code in a Java 6 or 7 JVM, then you will get a run-time error:
java.lang.ArrayStoreException: sun.reflect.annotation.EnumConstantNotPresentExceptionProxy
• If Eclipse gives the warning
The annotation @NonNull is disallowed for this location
then you have the wrong version of the org.eclipse.jdt.annotation classes. Eclipse includes two incompatible versions of these annotations. You want the one with a name like org.eclipse.jdt.annotation_2.0.0.....jar, which you can find in the plugins subdirectory under the Eclipse installation directory. Add this .jar file to your build path.
### 32.1.3 Unable to build the checker, or to run programs
An error like this
Unsupported major.minor version 52.0
means that you have compiled some files into the Java 8 format (version 52.0), but you are trying to run them with Java 7 or earlier. Likewise, “Unsupported major.minor version 51.0” means that you have compiled some files into the Java 7 format (version 51.0), but you are trying to run them with Java 6 or earlier. Here are ways to solve the problem:
• Use a newer JVM (run java -version to determine the version you are using)
• Use the Checker Framework to type-check your code, then afterward produce a classfile that targets an earlier JVM by supplying arguments such as javac -source 7 -target 7 ....
### 32.1.4 Classfile version warning
The following warning is innocuous and you can ignore it, or you can suppress it using the -Xlint:-classfile command-line argument to javac:
warning: [classfile] RuntimeVisibleTypeAnnotations attribute introduced in version 52.0 class files is ignored in version 51.0 class files
This warning results when you compile a library using the Checker Framework compiler, then use a normal Java compiler to compile client code that uses the library. The Checker Framework compiler puts Java 8 type annotations even in Java 7 classfiles, for the benefit of modular type-checking. The Checker Framework compiler reads these annotations in Java 7, and other compilers ignore them.
## 32.2 How to report problems (bug reporting)
If you have a problem with any checker, or with the Checker Framework, please file a bug at https://github.com/typetools/checker-framework/issues. (First, check whether there is an existing bug report for that issue.)
Alternately (especially if your communication is not a bug report), you can send mail to checker-framework-dev@googlegroups.com. We welcome suggestions, annotated libraries, bug fixes, new features, new checker plugins, and other improvements.
Please ensure that your bug report is clear and that it is complete. Otherwise, we may be unable to understand it or to reproduce it, either of which would prevent us from helping you. Your bug report will be most helpful if you:
• Add -version -verbose -AprintErrorStack -AprintAllQualifiers to the javac options. This causes the compiler to output debugging information, including its version number.
• Indicate exactly what you did. Don’t skip any steps, and don’t merely describe your actions in words. Show the exact commands by attaching a file or using cut-and-paste from your command shell (a screenshot is not as useful).
• Include all files that are necessary to reproduce the problem. This includes every file that is used by any of the commands you reported, and possibly other files as well. Please attach the files, rather than pasting their contents into the body of your bug report or email message, because some mailers mangle formatting of pasted text. If you encountered a problem while using tool integration such as the Eclipse plugin or Maven integration, then try to reproduce the problem from the command line as well — this will indicate whether the problem is with the checker itself or with the tool integration.
• Indicate exactly what the result was by attaching a file or using cut-and-paste from your command shell (don’t merely describe it in words).
• Indicate what you expected the result to be, since a bug is a difference between desired and actual outcomes. Also, please indicate why you expected that result — explaining your reasoning can help you understand how your reasoning is different than the checker’s and which one is wrong. Remember that the checker reasons modularly and intraprocedurally: it examines one method at a time, using only the method signatures of other methods.
• Indicate what you have already done to try to understand the problem. Did you do any additional experiments? What parts of the manual did you read, and what else did you search for in the manual? This information will prevent you from being given redundant suggestions.
A particularly useful format for a test case is as a new file, or a diff to an existing file, for the existing Checker Framework test suite. For instance, for the Nullness Checker, see directory checker-framework/checker/tests/nullness/. But, please report your bug even if you do not report it in this format.
## 32.3 Building from source
The Checker Framework release (Section 1.3) contains everything that most users need, both to use the distributed checkers and to write your own checkers. This section describes how to compile its binaries from source. You will be using the latest development version of the Checker Framework, rather than an official release.
### 32.3.1 Obtain the source
Obtain the latest source code from the version control repository:
export JSR308=$HOME/jsr308
mkdir -p $JSR308
cd $JSR308
hg clone https://bitbucket.org/typetools/jsr308-langtools jsr308-langtools
git clone https://github.com/typetools/checker-framework.git checker-framework
git clone https://github.com/typetools/annotation-tools.git annotation-tools

### 32.3.2 Build the Type Annotations compiler

The Checker Framework compiler is built upon a compiler called the Type Annotations compiler. The Type Annotations compiler is a variant of the OpenJDK javac that supports annotations in comments. The Checker Framework compiler is a small wrapper around the Type Annotations compiler, which adds annotated JDKs and the Checker Framework jars to the classpath.

1. Set the JAVA_HOME environment variable to the location of your JDK 7 or 8 installation (not the JRE installation, and not JDK 6 or earlier). This needs to be an Oracle JDK. (The JAVA_HOME environment variable might already be set, because it is needed for Ant to work.) In the bash shell, the following command sometimes works (it might not, because java might be the version in the JDK or in the JRE):

export JAVA_HOME=${JAVA_HOME:-$(dirname $(dirname $(dirname $(readlink -f $(/usr/bin/which java)))))}

2. Compile the Type Annotations tools:

cd $JSR308/jsr308-langtools/make
ant clean-and-build-all-tools
3. Add the jsr308-langtools/dist/bin directory to the front of your PATH environment variable. Example command:
export PATH=$JSR308/jsr308-langtools/dist/bin:${PATH}
### 32.3.3 Build the Annotation File Utilities
This is simply done by:
cd $JSR308/annotation-tools
ant

You do not need to add the Annotation File Utilities to the path, as the Checker Framework build finds it using relative paths.

### 32.3.4 Build the Checker Framework

1. Run ant to build the Checker Framework:

cd $JSR308/checker-framework/checker
ant
2. Once it is built, you may wish to put the Checker Framework’s javac even earlier in your PATH:
export PATH=$JSR308/checker-framework/checker/bin:$JSR308/jsr308-langtools/dist/bin:${PATH}

The Checker Framework’s javac ensures that all required libraries are on your classpath and boot classpath, but is otherwise identical to the Type Annotations compiler. Putting the Checker Framework’s javac earlier in your PATH will ensure that the Checker Framework’s version is used.

3. If you are developing a checker within the Checker Framework, there is a developer version of javac in the bin-devel directory. This version will use compiled classes from dataflow/build, javacutil/build, stubparser/build, framework/build, and checker/build in the checker-framework directory instead of the compiled jar files, and by default will print stack traces for all errors. To use it, set your PATH to use javac in the bin-devel directory:

export PATH=$JSR308/checker-framework/checker/bin-devel:$JSR308/jsr308-langtools/dist/bin:${PATH}
The developer version of javac allows you to not have to rebuild the jar files after every code change, in turn allowing you to test your changes faster. Source files can be compiled using command ant build in the checker directory, or can be automatically compiled by an IDE such as Eclipse.
4. Test that everything works:
• Run ant all-tests in the checker directory:
cd \$JSR308/checker-framework/checker
ant all-tests
• Run the Nullness Checker examples (see Section 3.5).
### 32.3.5 Build the Checker Framework Manual (this document)
1. To build the manual you will need HEVEA (http://hevea.inria.fr/) installed, and either rsvg-convert or convert. If you use the Ubuntu operating system, do
sudo apt-get install hevea librsvg2-bin
2. Run make in the checker/manual directory to build both the PDF and HTML versions of the manual.
## 32.4 Publications
Here are two technical papers about the Checker Framework itself:
• “Practical pluggable types for Java” [PAC+08] (ISSTA 2008, http://homes.cs.washington.edu/~mernst/pubs/pluggable-checkers-issta2008.pdf) describes the design and implementation of the Checker Framework. The paper also describes case studies in which the Nullness, Interning, Javari, and IGJ Checkers found previously-unknown errors in real software. The case studies also yielded new insights about type systems.
• “Building and using pluggable type-checkers” [DDE+11] (ICSE 2011, http://homes.cs.washington.edu/~mernst/pubs/pluggable-checkers-icse2011.pdf) discusses further experience with the Checker Framework, increasing the number of lines of verified code to 3 million. The case studies are of the Fake Enum, Signature String, Interning, and Nullness Checkers. The paper also evaluates the ease of pluggable type-checking with the Checker Framework: type-checkers were easy to write, easy for novices to use, and effective in finding errors.
Here are some papers about type systems that were implemented and evaluated using the Checker Framework:
Nullness (Chapter 3)
See the two papers about the Checker Framework, described above.
Rawness initialization (Section 3.8.7)
“Inference of field initialization” (ICSE 2011, http://homes.cs.washington.edu/~mernst/pubs/initialization-icse2011-abstract.html) describes inference for the Rawness Initialization Checker.
Interning (Chapter 5)
See the two papers about the Checker Framework, described above.
Fake enumerations (Chapter 7)
See the ICSE 2011 paper about the Checker Framework, described above.
Regular expressions (Chapter 9)
“A type system for regular expressions” [SDE12] (FTfJP 2012, http://homes.cs.washington.edu/~mernst/pubs/regex-types-ftfjp2012-abstract.html) describes the Regex Checker.
Format Strings (Chapter 10)
“A type system for format strings” [WKSE14] (ISSTA 2014, http://homes.cs.washington.edu/~mernst/pubs/format-string-issta2014-abstract.html) describes the Format String Checker.
Signature strings (Chapter 13)
See the ICSE 2011 paper about the Checker Framework, described above.
GUI Effects (Chapter 14)
“JavaUI: Effects for controlling UI object access” [GDEG13] (ECOOP 2013, http://homes.cs.washington.edu/~mernst/pubs/gui-thread-ecoop2013-abstract.html) describes the GUI Effect Checker.
“Verification games: Making verification fun” (FTfJP 2012, http://homes.cs.washington.edu/~mernst/pubs/verigames-ftfjp2012-abstract.html) describes a general inference approach that, at the time, had only been implemented for the Nullness Checker (Section 3).
IGJ and OIGJ immutability (Chapter 19)
“Object and reference immutability using Java generics” [ZPA+07] (ESEC/FSE 2007, http://homes.cs.washington.edu/~mernst/pubs/immutability-generics-fse2007-abstract.html) and “Ownership and immutability in generic Java” [ZPL+10] (OOPSLA 2010, http://homes.cs.washington.edu/~mernst/pubs/ownership-immutability-oopsla2010-abstract.html) describe the IGJ and OIGJ immutability type systems. For further case studies, also see the ISSTA 2008 paper about the Checker Framework, described above.
Javari immutability (Chapter 20)
“Javari: Adding reference immutability to Java” [TE05] (OOPSLA 2005, http://homes.cs.washington.edu/~mernst/pubs/ref-immutability-oopsla2005-abstract.html) describes the Javari type system. For inference, see “Inference of reference immutability” [QTE08] (ECOOP 2008, http://homes.cs.washington.edu/~mernst/pubs/infer-refimmutability-ecoop2008-abstract.html) and “Parameter reference immutability: Formal definition, inference tool, and comparison” [AQKE09] (J.ASE 2009, http://homes.cs.washington.edu/~mernst/pubs/mutability-jase2009-abstract.html). For further case studies, also see the ISSTA 2008 paper about the Checker Framework, described above.
“Loci: Simple thread-locality for Java” [WPM+09] (ECOOP 2009, http://janvitek.org/pubs/ecoop09.pdf)
Generic Universe Types (Section 23.5)
“Tunable static inference for Generic Universe Types” (ECOOP 2011, http://homes.cs.washington.edu/~mernst/pubs/tunable-typeinf-ecoop2011-abstract.html) describes inference for the Generic Universe Types type system.
Another implementation of Universe Types and ownership types is described in “Inference and checking of object ownership” [HDME12] (ECOOP 2012, http://homes.cs.washington.edu/~mernst/pubs/infer-ownership-ecoop2012-abstract.html).
Approximate data (Section 23.6)
“EnerJ: Approximate Data Types for Safe and General Low-Power Computation” [SDF+11] (PLDI 2011, http://adriansampson.net/media/papers/enerj-pldi2011.pdf)
Information flow and tainting (Section 23.8)
“Collaborative Verification of Information Flow for a High-Assurance App Store” [EJM+14] (CCS 2014, http://homes.cs.washington.edu/~mernst/pubs/infoflow-ccs2014.pdf) describes the SPARTA information flow type system.
ReIm immutability
“ReIm & ReImInfer: Checking and inference of reference immutability and method purity” [HMDE12] (OOPSLA 2012, http://homes.cs.washington.edu/~mernst/pubs/infer-refimmutability-oopsla2012-abstract.html) describes the ReIm immutability type system.
In addition to these papers that discuss the Checker Framework directly, other academic papers use the Checker Framework in their implementation or evaluation. Most educational use of the Checker Framework is never published, and most commercial use of the Checker Framework is never discussed publicly.
(If you know of a paper or other use that is not listed here, please inform the Checker Framework developers so we can add it.)
## 32.5 Comparison to other tools
A pluggable type-checker, such as those created by the Checker Framework, aims to help you prevent or detect all errors of a given variety. An alternate approach is to use a bug detector such as FindBugs, Jlint, or PMD.
A pluggable type-checker differs from a bug detector in several ways:
• A type-checker aims to find all errors. Thus, it can verify the absence of errors: if the type-checker says there are no null pointer errors in your code, then there are none. (This guarantee only holds for the code it checks, of course; see Section 2.3.)
A bug detector aims to find some of the most obvious errors. Even if it reports no errors, then there may still be errors in your code.
Both types of tools may issue false alarms, also known as false positive warnings; see Section 26.
• A type-checker requires you to annotate your code with type qualifiers, or to run an inference tool that does so for you. A bug detector may not require annotations. This means that it may be easier to get started running a bug detector.
• A type-checker may use a more sophisticated and complete analysis. A bug detector typically does a more lightweight analysis, coupled with heuristics to suppress false positives.
As one example, a type-checker can take advantage of annotations on generic type parameters, such as List<@NonNull String>, permitting it to be much more precise for code that uses generics.
A case study [PAC+08, §6] compared the Checker Framework’s nullness checker with those of FindBugs, Jlint, and PMD. The case study was on a well-tested program in daily use. The Checker Framework tool found 8 nullness errors (that is, null pointer dereferences). None of the other tools found any errors.
Also see the JSR 308 [Ern08] documentation for a detailed discussion of related work.
## 32.6 Credits, changelog, and license
Differences from previous versions of the checkers and framework can be found in the changelog.txt file. This file is included in the Checker Framework distribution and is also available on the web at http://types.cs.washington.edu/checker-framework/current/changelog.txt.
Two different licenses apply to different parts of the Checker Framework.
• The more permissive MIT License applies to code that you might want to include in your own program, such as the annotations.
Developers who have contributed code to the Checker Framework include Abraham Lin, Anatoly Kupriyanov, Asumu Takikawa, Charlie Garrett, Colin Gordon, Dan Brotherston, Dan Brown, David Lazar, David McArthur, Eric Spishak, Javier Thaine, Jeff Luo, Jonathan Burke, Kivanc Muslu, Konstantin Weitz, Mahmood Ali, Mark Roberts, Matt Mullen, Michael Bayne, Michael Coblenz, Michael Ernst, Michael Sloan, Paul Vines, Paulo Barros, Philip Lai, Renato Athaydes, René Just, Ryan Oblak, Stefan Heule, Steph Dietzel, Stuart Pernsteiner, Suzanne Millstein, Trask Stalnaker, Werner Dietl. In addition, too many users to list have provided valuable feedback, which has improved the toolset’s design and implementation. Thanks for your help!
# References
[AQKE09]
Shay Artzi, Jaime Quinonez, Adam Kieżun, and Michael D. Ernst. Parameter reference immutability: Formal definition, inference tool, and comparison. Automated Software Engineering, 16(1):145–192, March 2009.
[Art01]
Cyrille Artho. Finding faults in multi-threaded programs. Master’s thesis, Swiss Federal Institute of Technology, March 15, 2001.
[Cop05]
Tom Copeland. PMD Applied. Centennial Books, November 2005.
[Cro06]
Jose Cronembold. JSR 198: A standard extension API for Integrated Development Environments. http://jcp.org/en/jsr/detail?id=198, May 8, 2006.
[Dar06]
Joe Darcy. JSR 269: Pluggable annotation processing API. http://jcp.org/en/jsr/detail?id=269, May 17, 2006. Public review version.
[DDE+11]
Werner Dietl, Stephanie Dietzel, Michael D. Ernst, Kivanç Muşlu, and Todd Schiller. Building and using pluggable type-checkers. In ICSE’11, Proceedings of the 33rd International Conference on Software Engineering, pages 681–690, Waikiki, Hawaii, USA, May 25–27, 2011.
[DEM11]
Werner Dietl, Michael D. Ernst, and Peter Müller. Tunable static inference for Generic Universe Types. In ECOOP 2011 — Object-Oriented Programming, 25th European Conference, pages 333–357, Lancaster, UK, July 27–29, 2011.
[EJM+14]
Michael D. Ernst, René Just, Suzanne Millstein, Werner Dietl, Stuart Pernsteiner, Franziska Roesner, Karl Koscher, Paulo Barros, Ravi Bhoraskar, Seungyeop Han, Paul Vines, and Edward X. Wu. Collaborative verification of information flow for a high-assurance app store. In Proceedings of the 21st ACM Conference on Computer and Communications Security (CCS), pages 1092–1104, Scottsdale, AZ, USA, November 4–6, 2014.
[Ern08]
Michael D. Ernst. Type Annotations specification (JSR 308). http://types.cs.washington.edu/jsr308/, September 12, 2008.
[Eva96]
David Evans. Static detection of dynamic memory errors. In PLDI 1996, Proceedings of the SIGPLAN ’96 Conference on Programming Language Design and Implementation, pages 44–53, Philadelphia, PA, USA, May 21–24, 1996.
[FL03]
Manuel Fähndrich and K. Rustan M. Leino. Declaring and checking non-null types in an object-oriented language. In Object-Oriented Programming Systems, Languages, and Applications (OOPSLA 2003), pages 302–312, Anaheim, CA, USA, November 6–8, 2003.
[FLL+02]
Cormac Flanagan, K. Rustan M. Leino, Mark Lillibridge, Greg Nelson, James B. Saxe, and Raymie Stata. Extended static checking for Java. In PLDI 2002, Proceedings of the ACM SIGPLAN 2002 Conference on Programming Language Design and Implementation, pages 234–245, Berlin, Germany, June 17–19, 2002.
[GDEG13]
Colin S. Gordon, Werner Dietl, Michael D. Ernst, and Dan Grossman. JavaUI: Effects for controlling UI object access. In ECOOP 2013 — Object-Oriented Programming, 27th European Conference, pages 179–204, Montpellier, France, July 3–5, 2013.
[Goe06]
Brian Goetz. The pseudo-typedef antipattern: Extension is not type definition. http://www.ibm.com/developerworks/java/library/j-jtp02216/, February 21, 2006.
[GPB+06]
Brian Goetz, Tim Peierls, Joshua Bloch, Joseph Bowbeer, David Holmes, and Doug Lea. Java Concurrency in Practice. Addison-Wesley, 2006.
[HDME12]
Wei Huang, Werner Dietl, Ana Milanova, and Michael D. Ernst. Inference and checking of object ownership. In ECOOP 2012 — Object-Oriented Programming, 26th European Conference, pages 181–206, Beijing, China, June 14–16, 2012.
[HMDE12]
Wei Huang, Ana Milanova, Werner Dietl, and Michael D. Ernst. ReIm & ReImInfer: Checking and inference of reference immutability and method purity. In Object-Oriented Programming Systems, Languages, and Applications (OOPSLA 2012), pages 879–896, Tucson, AZ, USA, October 23–25, 2012.
[HP04]
David Hovemeyer and William Pugh. Finding bugs is easy. In Companion to Object-Oriented Programming Systems, Languages, and Applications (OOPSLA 2004), pages 132–136, Vancouver, BC, Canada, October 26–28, 2004.
[HSP05]
David Hovemeyer, Jaime Spacco, and William Pugh. Evaluating and tuning a static analysis to find null pointer bugs. In ACM SIGPLAN/SIGSOFT Workshop on Program Analysis for Software Tools and Engineering (PASTE 2005), pages 13–19, Lisbon, Portugal, September 5–6, 2005.
[LBR06]
Gary T. Leavens, Albert L. Baker, and Clyde Ruby. Preliminary design of JML: A behavioral interface specification language for Java. ACM SIGSOFT Software Engineering Notes, 31(3), March 2006.
[PAC+08]
Matthew M. Papi, Mahmood Ali, Telmo Luis Correa Jr., Jeff H. Perkins, and Michael D. Ernst. Practical pluggable types for Java. In ISSTA 2008, Proceedings of the 2008 International Symposium on Software Testing and Analysis, pages 201–212, Seattle, WA, USA, July 22–24, 2008.
[QTE08]
Jaime Quinonez, Matthew S. Tschantz, and Michael D. Ernst. Inference of reference immutability. In ECOOP 2008 — Object-Oriented Programming, 22nd European Conference, pages 616–641, Paphos, Cyprus, July 9–11, 2008.
[SDE12]
Eric Spishak, Werner Dietl, and Michael D. Ernst. A type system for regular expressions. In FTfJP 2012: 14th Workshop on Formal Techniques for Java-like Programs, pages 20–26, Beijing, China, June 12, 2012.
[SDF+11]
Adrian Sampson, Werner Dietl, Emily Fortuna, Danushen Gnanapragasam, Luis Ceze, and Dan Grossman. EnerJ: Approximate data types for safe and general low-power computation. In PLDI 2011, Proceedings of the ACM SIGPLAN 2011 Conference on Programming Language Design and Implementation, pages 164–174, San Jose, CA, USA, June 6–8, 2011.
[SM11]
Alexander J. Summers and Peter Müller. Freedom before commitment: A lightweight type system for object initialisation. In Object-Oriented Programming Systems, Languages, and Applications (OOPSLA 2011), pages 1013–1032, Portland, OR, USA, October 25–27, 2011.
[TE05]
Matthew S. Tschantz and Michael D. Ernst. Javari: Adding reference immutability to Java. In Object-Oriented Programming Systems, Languages, and Applications (OOPSLA 2005), pages 211–230, San Diego, CA, USA, October 18–20, 2005.
[TPV10]
Daniel Tang, Ales Plsek, and Jan Vitek. Static checking of safety critical Java annotations. In 8th International Workshop on Java Technologies for Real-time and Embedded Systems, pages 148–154, Prague, Czech Republic, August 19–21, 2010.
[VPEJ14]
Mohsen Vakilian, Amarin Phaosawasdi, Michael D. Ernst, and Ralph E. Johnson. Cascade: A universal type qualifier inference tool. Technical report, University of Illinois at Urbana-Champaign, Urbana, IL, USA, September 2014.
[WKSE14]
Konstantin Weitz, Gene Kim, Siwakorn Srisakaokul, and Michael D. Ernst. A type system for format strings. In ISSTA 2014, Proceedings of the 2014 International Symposium on Software Testing and Analysis, pages 127–137, San Jose, CA, USA, July 23–25, 2014.
[WPM+09]
Tobias Wrigstad, Filip Pizlo, Fadi Meawad, Lei Zhao, and Jan Vitek. Loci: Simple thread-locality for Java. In ECOOP 2009 — Object-Oriented Programming, 23rd European Conference, pages 445–469, Genova, Italy, July 8–10, 2009.
[ZPA+07]
Yoav Zibin, Alex Potanin, Mahmood Ali, Shay Artzi, Adam Kieżun, and Michael D. Ernst. Object and reference immutability using Java generics. In ESEC/FSE 2007: Proceedings of the 11th European Software Engineering Conference and the 15th ACM SIGSOFT Symposium on the Foundations of Software Engineering, pages 75–84, Dubrovnik, Croatia, September 5–7, 2007.
[ZPL+10]
Yoav Zibin, Alex Potanin, Paley Li, Mahmood Ali, and Michael D. Ernst. Ownership and immutability in generic Java. In Object-Oriented Programming Systems, Languages, and Applications (OOPSLA 2010), pages 598–617, Reno, NV, USA, October 19–21, 2010.
# What is 10.5 divided by 1.5?
Mar 24, 2016
First you write both numbers as improper fractions
#### Explanation:
$10.5 = 10 \frac{1}{2} = \frac{20}{2} + \frac{1}{2} = \frac{21}{2}$
$1.5 = 1 \frac{1}{2} = \frac{2}{2} + \frac{1}{2} = \frac{3}{2}$
So it becomes: $\frac{21}{2} \div \frac{3}{2}$
Since division by a fraction is equivalent to multiplying with its inverse, we get:
$\frac{21}{2} \times \frac{2}{3} = \frac{21}{\cancel{2}} \times \frac{\cancel{2}}{3} = \frac{21}{3} = 7$
Note:
There's a quicker way (if you see it). You may double both numbers (to get rid of the halves), and the answer will be the same:
$= \frac{10.5}{1.5} \times \frac{2}{2} = \frac{10.5 \times 2}{1.5 \times 2} = \frac{21}{3} = 7$
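If you want to double-check the arithmetic on a computer, here is a quick sketch in Python using the standard library's `Fraction` type, mirroring the fraction manipulation above:

```python
from fractions import Fraction

a = Fraction(21, 2)  # 10.5 written as an improper fraction
b = Fraction(3, 2)   # 1.5 written as an improper fraction

# Dividing by a fraction is the same as multiplying by its reciprocal
result = a * Fraction(b.denominator, b.numerator)
print(result)  # → 7
```

Note that `Fraction` keeps the values exact, so no floating-point rounding is involved.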
## October 4, 2010
### A Hopf Algebra Structure on Hall Algebras
#### Posted by John Baez
My student Christopher Walker is groupoidifying Hall algebras. What’s a Hall algebra? We get such an algebra starting from any category with a sufficiently well-behaved concept of ‘short exact sequence’. In this algebra, the product of an object $A$ and an object $B$ is a cleverly weighted sum over all objects $X$ that fit into a short exact sequence
$0 \to A \to X \to B \to 0$
The ‘clever weighting’ is neatly explained by groupoidification, as sketched here. And if we pick our category in a nice way, the algebra we get is part of a quantum group!
But the Hall algebra is more than a mere algebra. It’s also a coalgebra! The algebra and coalgebra want to fit together to form a Hopf algebra, and they do, but only after a peculiar sort of struggle. Lately Christopher has been thinking about this, and he’s written a paper:
• Christopher Walker, A Hopf algebra structure on Hall algebras.
Abstract: One problematic feature of Hall algebras is the fact that the standard multiplication and comultiplication maps do not satisfy the bialgebra compatibility condition in the underlying symmetric monoidal category $Vect$. In the past this problem has been resolved by working with a weaker structure called a ‘twisted’ bialgebra. In this paper we solve the problem differently by first switching to a different underlying category $Vect^K$ of vector spaces graded by a group $K$ called the Grothendieck group. We equip this category with a nontrivial braiding which depends on the $K$- grading. With this braiding, we find that the Hall algebra does satisfy the bialgebra condition exactly for the standard multiplication and comultiplication, and will also become a Hopf algebra object.
The point is that a Hopf algebra is a bialgebra, so the multiplication and comultiplication need to get along nicely. As we’ve recently discussed, they need to obey a condition sort of like this:
where the green blob is the multiplication and the red blob is the comultiplication. And this condition involves a braiding: in the diagram at left, one wire needs to cross over the other! It turns out that the Hall algebra becomes a Hopf algebra very neatly if we choose the right braiding. Otherwise we need to do a bunch of ad hoc mucking around.
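Written out algebraically (a sketch, with the braiding made explicit as $\sigma$), the condition says that the comultiplication $\Delta$ is an algebra homomorphism, where the product on $H \otimes H$ uses the braiding to swap the middle factors:

$\Delta \circ \mu \;=\; (\mu \otimes \mu) \circ (1 \otimes \sigma \otimes 1) \circ (\Delta \otimes \Delta)$

The crossing in the left-hand string diagram is exactly this $\sigma$, which is why the choice of braiding matters.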
Actually, in the picture above, you’ll note that the two wires just cross, without one visibly going over the other. That style of drawing is fine if we’re in a symmetric monoidal category, like the category of vector spaces with its usual tensor product and usual braiding. But the Hall algebra becomes a Hopf algebra in a braided monoidal category that’s not symmetric. So we need to draw the compatibility condition a bit more carefully — see Christopher’s paper.
All these ideas can be groupoidified, and that’s what Christopher is doing now.
Posted at October 4, 2010 3:58 AM UTC
TrackBack URL for this Entry: http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2286
### Re: A Hopf Algebra Structure on Hall Algebras
Hi Chris. I just had a quick look at your paper over breakfast.
My first question is about your equation for the set $P^E {}_{M\,N}$ on p. 4 (equation numbers would be helpful here!). Won’t its cardinality be the continuum in general? I also have some typographical comments: the distinction between the two types of P is too subtle; something like $|P^E _{M\,N}|$ might be better for the cardinality. And in the line beginning “and we call” after the displayed equation, you’re missing some punctuation.
Does all this generalize to other quivers which aren’t simply laced?
Posted by: Jamie Vicary on October 4, 2010 11:27 AM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Hi Jamie,
Thank you for reading my paper.
The big theorem of this topic (Gabriel’s Theorem) says that the representation category of a quiver is of finite type if and only if the quiver is simply laced. I should definitely be more specific about why I can take the cardinality of this set and expect it to be finite (i.e. that I am using isomorphism classes).
As far as generalizing to non-simply laced quivers goes, it would take a different approach, since the important sets here would not be finite. This would definitely be another direction to go. If you have ever studied Lie theory, you will remember that the representation theory of Lie algebras that are not simple becomes very difficult, and in many instances is not even understood yet.
As for the two different forms of P, I have been going back and forth on this one, and may end up at a cardinality notion like you suggested in the end. There are some notational conflicts in the groupoidification program that I was trying to avoid.
Posted by: Christopher Walker on October 4, 2010 6:31 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Jamie wrote:
My first question is about your equation for the set $P^E_{M N}$ on p. 4 (equation numbers would be helpful here!). Won’t its cardinality be the continuum in general?
It’s finite given the assumptions Christopher made on page 3. He’s looking at the category $Rep(Q)$ of representations of a simply-laced Dynkin quiver on vector spaces over a finite field. For this category the set of short exact sequences
$P^E_{M N} = \{ 0 \to N \to E \to M \to 0 \}$
is finite for all objects $M, N, E$.
If you use more general quivers, or more general fields, the set $P^E_{M N}$ could be infinite.
Christopher wrote:
I should definitely be more specific about why I can take the cardinality of this set and expect it to be finite (i.e. that I am using isomorphism classes).
That’s not the reason: you’re not taking isomorphism classes on page 4… unless I’m seriously confused.
On the other hand, thinking about these finiteness issues, I just noticed that you need to add an extra condition to make Proposition 2 and Theorem 3 true. You need to assume the groups $Ext^i(M,N)$ are finite!
By the time you get to Theorem 4 you have decided to work only with the abelian category $Rep(Q)$. You could easily prove this theorem for a larger class of abelian categories, but they’d need to satisfy three conditions:
1) they need to be hereditary,
2) the sets $P^E_{M N}$ need to be finite,
3) the groups $Ext^i(M,N)$ need to be finite.
Of course ‘hereditary’ means $Ext^i(M,N) = \{0\}$ for $i \gt 1$, so 3) is equivalent to
3’) the groups $Ext^1(M,N)$ need to be finite.
Jamie wrote:
Does all this generalize to other quivers which aren’t simply laced?
This stuff works as stated for simply laced Dynkin quivers and also simply laced affine Dynkin quivers (not mentioned in Christopher’s paper). For a general quiver $Q$, I don’t know why $Rep(Q)$ would be hereditary, and this is needed to make the Hall product associative.
But there are other tricks one can play…
Posted by: John Baez on October 5, 2010 1:54 AM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Hi there.
One way to see that $\mathrm{Rep}(Q)$ is always hereditary is to use the standard resolution.
Given a module $M$ for the path algebra $K Q$, we can write this as a direct sum $M=\bigoplus_i M_i$ indexed by the vertices. Let $P_i$ be the projective module corresponding to the vertex $i$, so a direct summand of the regular module $K Q$. Then there is a natural epimorphism
(1)$\bigoplus_i P_i\otimes_K M_i \to M$
given by multiplication. Given an arrow $a\colon i\to j$, there is a map
(2)$P_j\otimes_K M_i \to (P_i\otimes_K M_i)\oplus(P_j\otimes_K M_j), \quad b\otimes m \mapsto (b a\otimes m,-b\otimes a m).$
Putting these together yields an exact sequence
(3)$0 \to \bigoplus_{a\colon i\to j}P_j\otimes_K M_i \to \bigoplus_i P_i\otimes_K M_i \to M \to 0.$
This is called the standard resolution, and is a projective resolution of $M$ of length 1. Hence the category is hereditary.
We can now generalise this, but the notation becomes a bit of a nightmare. My preferred way to think of it is via tensor algebras.
Let $\Lambda_0$ be a semisimple $K$-algebra and let $\Lambda_1$ be a $\Lambda_0$-bimodule on which $K$ acts centrally.
In the case of the path algebra $K Q$ we take $\Lambda_0=\prod_i K e_i$, indexed by the vertices of $Q$, and $\Lambda_1=\bigoplus_a K a$, indexed by the arrows of $Q$.
We next form the tensor algebra
(4)$\Lambda := T(\Lambda_0,\Lambda_1) = \Lambda_0\oplus\Lambda_1\oplus\Lambda_2\oplus\cdots,$
where $\Lambda_{r+1}=\Lambda_r\otimes_{\Lambda_0}\Lambda_1$.
In the case of the path algebra $K Q$, $\Lambda_r$ has basis the paths in $Q$ of length $r$.
The standard resolution now has the simple form
(5)$0 \to \Lambda_+\otimes_{\Lambda_0}M \to \Lambda\otimes_{\Lambda_0}M \to M \to 0,$
where $\Lambda_+=\Lambda_1\oplus\Lambda_2\oplus\cdots$ is the graded radical of $\Lambda$. The left-hand map comes from the identification $\Lambda_+\cong\Lambda\otimes_{\Lambda_0}\Lambda_1$, and as above we can place the $\Lambda_1$ component either on the left or on the right of the tensor product
(6)$\Lambda\otimes_{\Lambda_0}\Lambda_1\otimes_{\Lambda_0}M \to \Lambda\otimes_{\Lambda_0}M, \quad b\otimes a\otimes m \mapsto b a\otimes m-b\otimes a m.$
This is again a projective resolution of $M$. For, $\Lambda_0$ is semisimple, and so every $\Lambda_0$-module is projective. Thus the first and second terms are projective $\Lambda$ modules.
When $K$ is a finite field, this construction will get you every non-simply laced diagram (and so every symmetrisable generalised Cartan matrix). In fact, you can also get loops, and so certain symmetrisable Borcherds matrices as well.
If $K$ is a perfect field, then any finite dimensional hereditary $K$-algebra will be such a tensor algebra. On the other hand, Dlab and Ringel gave an example of a finite dimensional hereditary algebra which was not of this form.
Of course, once you’ve done algebras, you can move on to other hereditary categories such as sheaves over weighted projective lines, as Schiffmann did.
Sorry for the long post, but then again, I’ve been a follower for ages without posting.
Posted by: Andrew Hubery on October 6, 2010 12:41 AM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Thanks! Wow, it’s great to hear from you! Christopher Walker and I have been endlessly reading and rereading your notes on Ringel–Hall algebras in the course of our work, so a post from you feels like a deus ex machina coming down from the sky to save us. Perhaps if I’d read your notes often enough I would have known what you just said…
If there’s one thing you can do to make me even happier, it’s this: put those notes of yours on the arXiv. They’re a valuable resource. Individual webpages have an uncertain future, but the arXiv will last at least as long as our current mode of civilization.
Posted by: John Baez on October 7, 2010 3:05 AM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Hi Chris,
This is the “other Chris”.
Some typos:
p.1: “…the lie algebra…”
p.2: “We first to draw…”
p.3: “We then accomplishes…”
p.4: “Where we call aut(M) the set cardinality of the group Aut(M)” (Not a sentence)
p.4 “These are the correct factor…”
A small comment/question: You say a few times in the introduction that the incompatibility between the multiplication and comultiplication is a “problematic feature”. Why should I think of this as a problem, exactly? It seems more like an “interesting feature”, rather than a problem. (You essentially say this yourself on page 2.)
Another stupid comment: You say “As is standard, we will write multiplication…” and then write your diagrams backward in my opinion. When I’m thinking about an operad, I always write the diagrams so the inputs are at the bottom. I thought that was the standard way, no? (I have a feeling this is the kind of question that starts heated debates…my apologies if this is the case.)
Posted by: Chris Rogers on October 4, 2010 1:42 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Let’s not have a heated debate! Everyone should simply agree that there’s no universally-agreed orientation.
Some people have their inputs at the bottom and work upwards; others do the opposite. Sometimes I have my inputs on the left and work rightwards. I’m sure there are situations in which I’d want to have the inputs on the right and work leftwards.
Different people do different things at different times, and that’s all there is to it.
Posted by: Tom Leinster on October 4, 2010 3:25 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Can you go diagonally, or into the page?
Posted by: Tom Ellis on October 4, 2010 4:41 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Yeah, you can do anything you like!
It’s very like the situation for arrows in a category, when you’re drawing commutative diagrams. The usual convention is to try to get your arrows going down and/or to the right, but sometimes it’s convenient to do otherwise.
The “arrows” we’re dealing with here are more bulky—they’re 2-dimensional—but you have a similar amount of freedom. It’s just convention.
Posted by: Tom Leinster on October 4, 2010 4:55 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Trust me, I’m not looking for a debate! Thanks for the clarification.
Posted by: Chris Rogers on October 4, 2010 5:42 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Hi “other” Chris,
Thank you for the catches on typos.
I think subconsciously I use “problematic” because I think anything that is not compatible is a problem. I may re-word it to sound less like an opinion.
As my wife the sociologist would say, diagram direction is just the oppressive right-handed society trying to keep us poor left-handed folk down. I mean they get to be called “right”-handed like there’s something “wrong” with the other hand. :)
In all seriousness, I will remove the reference to “standard” practice here, as there seems to be a difference of opinions.
Posted by: Christopher Walker on October 4, 2010 8:07 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Chris wrote:
A small comment/question: You say a few times in the introduction that the incompatibility between the multiplication and comultiplication is a “problematic feature”. Why should I think of this as a problem, exactly?
A bialgebra is a beautiful thing: it’s a monoid in the category of comonoids in $Vect$ — or equivalently, a comonoid in the category of monoids in $Vect$. Thanks to this elegant definition, the power of category theory kicks in, and we get all sorts of wonderful results. For starters, the category of representations of any bialgebra is a monoidal category.
But with the usual approach to the Hall algebra, instead of a bialgebra we get something that’s sort of close to a bialgebra, but where the compatibility condition fails, due to the annoying intrusion of a mysterious ‘fudge factor’. People call this gadget a ‘twisted bialgebra’, but that’s just jargon to summarize a mystery: this gadget doesn’t fit into any clear pattern.
At least that’s what people usually say! But Christopher has shown that in fact this gadget is a bialgebra object — only not in $Vect$, but in some other braided monoidal category!
More precisely, he takes the Grothendieck group of the category of representations of a simply-laced Dynkin quiver, say $K$, and puts an interesting braiding on the category of $K$-graded vector spaces, say $Vect^K$.
Then Christopher shows the Hall algebra is a bialgebra object in $Vect^K$. In other words, it’s a monoid object in the category of comonoid objects in $Vect^K$ — or equivalently, a comonoid object in the category of monoid objects in $Vect^K$.
So, the power of category theory kicks in again! Now we can easily do all sorts of wonderful things with the Hall algebra — things we’d normally do with a bialgebra, but now with $Vect^K$ taking the place of $Vect$.
For example, now we instantly know that the Hall algebra has a monoidal category of representations in $Vect^K$. Back when it was a mere ‘twisted bialgebra’, this would be utterly nonobvious: even if you were enough of a genius to guess it, you’d have to check it by playing around with ‘fudge factors’ and discovering ‘surprising cancellations’. But now it’s an automatic consequence of general facts.
Of course, some people enjoy all sorts of ‘twisted’, ‘deformed’, or ‘warped’ mathematical gadgets. Some people enjoy complexity for its own sake, because it offers scope for cleverness. But simplicity is better, because it lets you do good things without cleverness. So it’s always best when you can take a ‘twisted’ gadget and reinterpret it as a plain old fully functional gadget in another category.
Posted by: John Baez on October 5, 2010 2:49 AM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Maybe my question will be answered in the paper — I’m so far responding only to the abstract. I will begin by recalling some standard facts that you probably know and reference in your paper, and then ask for clarification of the final sentence in the abstract, where you write: “the Hall algebra … and will also become a Hopf algebra object.”
Let $(A,\Delta,\epsilon)$ be any coassociative counital coalgebra and $(B,m,1)$ any associative unital algebra (both objects in the same monoidal category). Then $\operatorname{Hom}(A,B)$ (hom in the underlying category) is an associative unital monoid under the “convolution product”: $f\star g = m\circ (f\otimes g) \circ \Delta$, and the identity element for $\star$ is $1\circ \epsilon$.
Now suppose that $B=A$ as objects, but I don’t impose any bialgebra condition. Then I do get a distinguished element $\operatorname{id} \in \operatorname{Hom}(A,A)$. Inventing a word (maybe someone else has also invented it?), I would say that the data $(A,m,1,\Delta,\epsilon)$ is antipodal if $\operatorname{id}$ is left- and right-invertible in the monoid $(\operatorname{Hom}(A,A),\star)$. Notice that there is no need for any braidings, compatibility conditions, etc.
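For readers skimming, the “antipodal” condition written out with the convolution product defined above is just the usual antipode equation, with no bialgebra axiom assumed. (This is my transcription of the definitions in the comment, not notation taken from the paper.)

```latex
% Convolution product on Hom(A,A), as defined above, with unit 1∘ε:
\[
  f \star g \;=\; m \circ (f \otimes g) \circ \Delta ,
  \qquad
  e \;=\; 1 \circ \epsilon .
\]
% "Antipodal" asks for a two-sided convolution inverse S of id:
\[
  S \star \operatorname{id}
  \;=\; m \circ (S \otimes \operatorname{id}) \circ \Delta
  \;=\; 1 \circ \epsilon
  \;=\; m \circ (\operatorname{id} \otimes S) \circ \Delta
  \;=\; \operatorname{id} \star S .
\]
```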
So: it seems from your sentence that the point is that the Hall algebra just is antipodal, whereas for you a Hopf algebra is the data $(A,m,1,\Delta,\epsilon)$ such that it is both antipodal and satisfies the bialgebra compatibility condition.
Is there any other content to that line that I’m missing?
Posted by: Theo on October 4, 2010 6:03 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Hi Theo,
The point of the paper is the bialgebra compatibility condition. The existence of an antipode was simply a nice additional point. You are correct that the definition of an antipode does not require information about the braiding, but I have never thought about antipodes independently. I have always thought of a Hopf algebra as a bialgebra with an antipode, and so this was what I meant in the last sentence.
Posted by: Christopher Walker on October 4, 2010 7:59 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Very neat! Of course you have to make a choice regarding which string to cross over and which under in drawing the bialgebra compatibility diagram in a braided monoidal category. Does that mean there are multiple distinct notions of “bialgebra” in a braided monoidal category, and the Hall algebra is only one of them?
Posted by: Mike Shulman on October 4, 2010 9:41 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
We can certainly define two kinds of bialgebra in a braided monoidal category, depending on which string crosses over which in this picture:
Alternatively, we can pick our favorite way, use that to define our favorite kind of bialgebra, and note that any braided monoidal category has a kind of ‘opposite’, where we use this new braiding:
$B^{opp}_{X,Y} = B^{-1}_{Y,X}$
Then the other kind of bialgebra can be thought of as our favorite kind of bialgebra, but in the ‘opposite’ braided monoidal category.
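Written as equations rather than pictures, the two choices come down to which braiding appears in the compatibility axiom. (This rendering is mine, using standard conventions, not notation from the post.)

```latex
% Bialgebra compatibility in a braided monoidal category:
% Δ is an algebra map for the braided product on A ⊗ A.
\[
  \Delta \circ m
  \;=\;
  (m \otimes m)\circ(\operatorname{id}\otimes B_{A,A}\otimes \operatorname{id})
  \circ(\Delta \otimes \Delta) ,
\]
% the other choice uses the inverse braiding — i.e. the same axiom,
% stated in the 'opposite' braided monoidal category:
\[
  \Delta \circ m
  \;=\;
  (m \otimes m)\circ(\operatorname{id}\otimes B^{-1}_{A,A}\otimes \operatorname{id})
  \circ(\Delta \otimes \Delta) .
\]
```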
Of course the terminology gets a bit confusing here. We can take the opposite of a category by reversing the arrows, we can take the ‘opposite’ of a monoidal category by reversing the tensor product:
$X \otimes_{opp} Y = Y \otimes X$
and we can take the opposite of a braided monoidal category by reversing the braiding. We can even combine these different options, for a total of $2^3$ choices. That’s because a braided monoidal category is a 3-category, so its morphisms can be drawn as 3-dimensional globes, which have a reflection symmetry group of size $2^3$.
But when you said “multiple” distinct notions of bialgebra, did you mean more than two?
I used to know whether you could take a monoid $X$ in a braided monoidal category, with multiplication
$m: X \otimes X \to X \, ,$
and define a new monoid with a “$n$-tuply twisted multiplication”:
$m_{new} = m \circ B_{X,X}^n$
The trick is checking the associative law. I forget the answer.
I don’t think I ever had the brains to check whether a braided monoidal category gives an infinite series of braided monoidal categories where we braid things around a bunch of times, e.g.:
$B^{new}_{X,Y} = B_{X,Y} B_{Y,X} B_{X,Y}$
If the answer to this question were “yes”, then the answer to the previous question would also be “yes”. Furthermore, we would get an infinite series of interesting concepts of bialgebra in a given monoidal category.
I’m too busy to draw the string diagrams needed to check this! But if I had to go by gut instinct, I’d guess the answer is “no”.
Posted by: John Baez on October 5, 2010 3:20 AM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
$m_{new} = m \circ B_{X,X}^n$
brings to mind the exotic multiplications on the 3-sphere where m is quaternionic multiplication and B is the commutator.
Some interesting non-associative phenomena arise:
MR0187242 (32 #4695) Slifker, James F. Exotic multiplications on $S^{3}$. Quart. J. Math. Oxford Ser. (2) 16 (1965), 322–359.
Posted by: jim stasheff on October 5, 2010 2:14 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
It seems quite unlikely to me that you’d get a new braided monoidal category by twisting strings around each other multiple times. But couldn’t you at least get a different notion of “bialgebra” in a braided monoidal category that way, even if it couldn’t be identified with the original notion of bialgebra in some other braided monoidal category?
Posted by: Mike Shulman on October 5, 2010 6:24 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Mike wrote:
It seems quite unlikely to me that you’d get a new braided monoidal category by twisting strings around each other multiple times.
Yeah, me too. But sometime on a long boring trip I’ll check one of the hexagon identities, just to nail this particular coffin shut.
But couldn’t you at least get a different notion of “bialgebra” in a braided monoidal category that way, even if it couldn’t be identified with the original notion of bialgebra in some other braided monoidal category?
Okay, yeah — you can write down the definition. But to me, I guess, the whole point of a bialgebra object is that its category of actions naturally becomes a monoidal category, thanks to the comultiplication. If our notion of bialgebra doesn’t give some nice result like this — or at least some nice result — I’ll probably conclude that notion is useless. Especially since I don’t know any examples.
Posted by: John Baez on October 6, 2010 7:42 AM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Sure, fair enough.
Posted by: Mike Shulman on October 6, 2010 5:55 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Shahn Majid has studied intensively Hopf algebras in braided categories and called them braided groups. One of the first papers was this, but look also for later papers and his book Foundations of quantum group theory.
He has lots of recipes of how to make new examples out of old. For example, making braided groups out of quasitriangular Hopf algebras by a process of “transmutation”. Majid developed also associated “braided” geometry.
Posted by: Zoran Skoda on October 5, 2010 1:35 AM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
It’s true. Christopher should cite Majid’s work when introducing the concept of a Hopf algebra in a braided monoidal category.
I also told Christopher to read and cite Joyal and Street’s paper where they get a braiding on the category of $A$-graded vector spaces from a bilinear form on the abelian group $A$. It’s one of these, or perhaps both:
• A. Joyal and R. Street, Braided monoidal categories, Macquarie Math Reports 860081 (1986). Available at http://www.maths.mq.edu.au/~street/JS1.pdf.
• A. Joyal and R. Street, Braided tensor categories, Adv. Math. 102 (1993) 20-78.
Hmm, yes, it’s Theorem 12 together with Proposition 13 on page 44 of the first, unpublished, paper. Better to cite the second one if it’s also in there — I don’t have it on me right now.
As usual, they prove this result in such great generality that it’s a bit hard to understand at first. But it’s not hard to extract the desired fact from what they prove.
An interesting fact is that Theorem 12 goes back to the work of Eilenberg and Mac Lane on the cohomology of groups. All this stuff can now be understood in terms of Postnikov towers for $n$-groupoids.
Posted by: John Baez on October 5, 2010 3:41 AM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Another typo: on page 4, “we define set:”
Posted by: John Baez on October 5, 2010 4:30 AM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
If I understood right, the main result of this paper is long known to the experts. In
Kapranov says:
Note that because of the twist (1.4.4), the theorem does not mean that $R(A)$ is a bialgebra in the ordinary sense; it can be interpreted, however, by saying that $R(A)$ is a bialgebra in braided monoidal category of $K_0(A)$-graded vector spaces.
Posted by: Zoran Skoda on October 17, 2010 4:12 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
But there still seem to be some interesting differences in viewpoint. For example, in (1.4.3) Kapranov is using the symmetrized Euler form, while Christopher uses the unsymmetrized one. Kapranov could probably explain this in an instant, but it seems mysterious and interesting to me right now.
Anyway, thanks for pointing this out.
Posted by: John Baez on October 17, 2010 5:51 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Don’t worry, John, I am sure the fresh and systematic point of view which Christopher is taking with your guidance will lead to many more results on Hall algebras along the way. I personally had some interest, some time ago, in Kapranov’s sequel paper relating Hall algebras and Heisenberg doubles, but never really went into it. The Eisenstein series title, on the other hand, always put me off; I hope I overcome my fear of number theory some day…
Posted by: Zoran Skoda on October 17, 2010 8:07 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
The good news is that I’ve finally found out that I like Hopf algebras and would like to go to grad school at a place in the US where they’re covered. The bad news is that it’s already too late to apply at most schools. Any ideas?
Last year I published a paper that can be rewritten in Hopf algebra language. It showed that when you resum the long time propagators for the Hopf algebra (defined by the mutually unbiased bases of the Pauli algebra), you get the usual spin-1/2 propagators (i.e. projection operators), but three copies that you can think of as generations.
The paper I’m working on now shows that when you find the propagators for the group (Hopf) algebra defined by the permutation group on three elements you discover that they map nicely on to the weak hypercharge and weak isospin quantum numbers of the elementary fermions.
Right now I understand Hopf algebra at a very low, intuitive level, but it’s clear to me that this is where I’m going to continue to work. The challenge of getting numbers out of the algebra is incredibly attractive. I’ve been working on my own, but it would be a lot easier if I were in a department; any suggestions?
Posted by: Carl on February 19, 2011 7:55 PM | Permalink | Reply to this
### Re: A Hopf Algebra Structure on Hall Algebras
Carl - I’m coming to this thread late, but your paper on path integrals via mutually unbiased bases actually looks decent to me. I’m curious, did you end up at a grad school?
Posted by: Bruce Bartlett on May 24, 2014 12:16 AM | Permalink | Reply to this
Read the post Christopher Walker on Hall Algebras
Weblog: The n-Category Café
Excerpt: Christopher Walker has successfully defended his thesis, A Categorification of Hall Algebras.
Tracked: June 11, 2011 8:01 AM
# Physical Properties of Elements on the Periodic Table
An example of an electropositive (i.e., low electronegativity) element is cesium; an example of a highly electronegative element is fluorine.

The periodic table is a listing of the elements according to increasing atomic number, further organized into columns based on similar physical and chemical properties and electron configurations. The standard form of the periodic table shown here includes periods (shown horizontally) and groups (shown vertically). The properties of elements within a group are similar in some respects to each other, and elements in the same group show trends in physical properties, such as boiling point. The alkali metals make up Group 1 of the periodic table: lithium, sodium, potassium, rubidium, cesium, and francium. Lanthanides (shown in row ** in the chart above) and actinides (shown in row * in the chart above) form the block of two rows placed at the bottom of the periodic table to save space.

We can never determine the atomic radius of an atom exactly, because there is never a zero probability of finding an electron, and thus never a distinct boundary to the atom. Like the main-group elements described above, the transition metals form positive ions, but because they can form two or more ions of differing charge, there is no simple relation between group number and charge.

Of all the 118 known elements, 11 are gaseous, 2 are liquid, and the remainder are solids under ordinary conditions. Ionization energy is the amount of energy required to remove one electron from … Moving from left to right across a period, electrons are added one at a time to the outer energy shell.

After reviewing metals and non-metals, we will highlight the trends of Group 14 and their explanations.

Exercise: arrange these elements according to decreasing atomic size: Na, C, Sr, Cu, Fr.
Electron affinity decreases moving down a group because a new electron would be further from the nucleus of a large atom. In summary, the greater the nuclear charge, the greater the pull the nucleus has on the outer electrons, and the smaller the atomic radii.

(Source: Thomas Jefferson National Accelerator Facility - Office of Science Education, It’s Elemental - The Periodic Table of Elements, accessed December 2014.)

Which reaction do you expect to have the greater cell potential?

The 14 elements following lanthanum (z=57) are called lanthanides, and the 14 following actinium (z=89) are called actinides. The halogens comprise the five nonmetal elements fluorine, chlorine, bromine, iodine, and astatine. Group VIIA elements, the halogens, have high electron affinities because the addition of an electron to an atom results in a completely filled shell. The elements shaded in light pink in the table above are known as transition metals.

Figure 8: Courtesy of Jessica Thornton (UCD).
Elements with high ionization energies have high electronegativities due to the strong pull exerted on their electrons by the nucleus. Electron affinity is the energy change that occurs when an electron is added to a gaseous atom.

The main-group elements are groups 1, 2, and 13 through 18. The groups are numbered at the top of … An increase in electrons increases bonding.

In order to understand the extent of screening and penetration within an atom, scientists came up with the effective nuclear charge, $$Z_{eff}$$. Electrons within a shell cannot shield each other from the attraction to protons. Penetration is commonly known as the distance that an electron is from the nucleus. Therefore, it requires more energy to overpower the nucleus and remove an electron. This causes the atomic radius to decrease.

Figure 3 depicts the effect that the effective nuclear charge has on atomic radii. This is because the larger the effective nuclear charge, the more strongly the nucleus holds onto the electron, and the more energy it takes to release that electron.

Electronegativity is a measure of the attraction of an atom for the electrons in a chemical bond. The higher the electronegativity of an atom, the greater its attraction for bonding electrons. In a group, the electronegativity decreases as the atomic number increases, as a result of the increased distance between the valence electrons and the nucleus (greater atomic radius).

These elements are relatively stable because they have filled s subshells. For example, K atoms (group 1) lose one electron to become K+, and Mg atoms (group 2) lose two electrons to form Mg2+. They are located in group 17 of the periodic table and have a charge of -1. The Group IIA elements, the alkaline earths, have low electron affinity values.

As you go up a group, the ionization energy increases, because there are fewer electrons shielding the outer electrons from the pull of the nucleus.
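The ion-charge examples above (K+ from group 1, Mg2+ from group 2, a charge of -1 for group 17) can be collected into a small sketch. This is my illustration of the rules quoted in the text, not code from the original page; the function name and the group-14/transition-metal handling are my own choices.

```python
# Sketch (not from the original text): typical ion charges for
# main-group elements, following the rules described above.
def typical_ion_charge(group):
    """Typical ionic charge for a main-group element, by group number (1-18)."""
    if group in (1, 2):          # alkali / alkaline earth metals lose 1 or 2 e-
        return +group
    if group == 13:              # boron group typically loses 3 electrons
        return +3
    if group in (15, 16, 17):    # nonmetals gain electrons toward an octet
        return group - 18
    if group == 18:              # noble gases: full shell, effectively inert
        return 0
    return None                  # groups 3-12 (and 14): no single simple rule

print(typical_ion_charge(1))    # K  -> K+  : prints 1
print(typical_ion_charge(2))    # Mg -> Mg2+: prints 2
print(typical_ion_charge(17))   # Cl -> Cl- : prints -1
```

Returning `None` for the transition metals mirrors the text’s point that no simple relation between group number and charge exists for them.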
Metals: malleable, conductive, have luster, ductile, tensile strength. Nonmetals tend to gain electrons to form anions. Melting destroys the arrangement of atoms in a solid; therefore, the amount of heat necessary for melting to occur depends on the strength of attraction between the atoms. A physical property of a pure substance can be defined as anything that can be observed without the identity of the substance changing.

An element that is an example of a metalloid is (a) S; (b) Zn; (c) Ge; (d) Re; (e) none of these.

The noble gases consist of group 18 (sometimes referred to as group 0) of the periodic table of elements. The electron affinities will become less negative as you go from the top to the bottom of the periodic table. Group I elements have low ionization energies because the loss of an electron forms a stable octet. Hea…
Ionization energies increase going left to right across a period and increase going up a group. Group VIII elements, the noble gases, have electron affinities near zero, since each atom possesses a stable octet and will not accept an electron readily.

The metallic character is used to describe the chemical properties that metallic elements present. Second, moving down a column in the periodic table, the outermost electrons become less tightly bound to the nucleus.

The elements in groups 3-12 are called transition elements, or transition metals. The alkaline earth metals are located in group 2 and consist of beryllium, magnesium, calcium, strontium, barium, and radium. Magnesium has an electron configuration of [Ne]3s2.
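The left-to-right increase in ionization energy described above, and the filled-subshell exception this page mentions for magnesium versus aluminum, can be checked against the period-3 elements. The numbers below are approximate literature values I have supplied for illustration; they are not data from the original page.

```python
# Sketch with approximate first ionization energies (kJ/mol) for period 3.
# These values are my illustration, not data from the original text.
first_ie = {
    "Na": 496, "Mg": 738, "Al": 578, "Si": 786,
    "P": 1012, "S": 1000, "Cl": 1251, "Ar": 1521,
}

# Overall trend: ionization energy rises left to right across the period...
assert first_ie["Na"] < first_ie["Ar"]

# ...but not monotonically: Mg > Al (filled 3s subshell) and P > S
# (half-filled 3p subshell) are the classic exceptions.
print(first_ie["Mg"] > first_ie["Al"])  # prints True
print(first_ie["P"] > first_ie["S"])    # prints True
```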
Electron affinity reflects the ability of an atom to accept an electron. It can be either a positive or a negative value. Electrons with low ionization energies have low electronegativities because their nuclei do not exert a strong attractive force on electrons. Therefore, it requires less energy to remove one of their valence electrons.

The elements in the periodic table are arranged in order of increasing atomic number. The periodic table arranges the elements by periodic properties, which are recurring trends in physical and chemical characteristics. Successive ionization energies increase. For example, magnesium has a higher ionization energy than aluminum.

Z is the total number of electrons in the atom. For example, chlorine would have a Z value of 17 (the atomic number of chlorine).

The observations usually consist of some type of numerical measurement, although sometimes there is a more qualitative (non-numerical) description of the property.

The group to the farthest right of the table, shaded orange, is known as the noble gases. Noble gases are inert because they already have a full valence electron shell and have little tendency to gain or lose electrons. Why are noble gases inert (nonreactive)?

These are also considered to be transition metals. These metals form positively charged ions, are very hard, and have very high melting and boiling points.

Atomic and Ionic Radii. As this happens, the electrons of the outermost shell experience increasingly strong nuclear attraction, so the electrons become closer to the nucleus and more tightly bound to it.

Example of Reduction:
The Periodic Table of Elements categorizes like elements together. When you look at the periodic …

Melting Points: Trends in melting points and molecular mass of binary carbon-halogen compounds and hydrogen halides are due to intermolecular forces.
This greater pull makes it harder for the atoms to lose electrons and form cations. The higher an atom's electronegativity, the greater its ability to gain electrons in a bond.

The Periodic Table

Periodic Law: the physical and chemical properties of the elements are periodic functions of their atomic number. The periodic table of the elements is a method of showing the chemical elements in a table with the elements arranged in order of increasing atomic number. Mendeleev believed that when the elements are arranged in order of increasing atomic mass, certain sets of properties recur periodically, and he left gaps in his table for elements yet to be discovered. Although most modern periodic tables are arranged in eighteen groups (columns) of elements, Mendeleev's original periodic table had the elements organized into eight groups and twelve periods (rows). A column of elements down the table is called a group; there are 18 groups in the standard periodic table. Elements that have similar properties are in the same groups (vertical). From left to right, the atomic number (Z) of the elements increases from one period to the next (horizontal). The groups are numbered at the top of each column and the periods on the left next to each row. The transition metals range from groups IIIB to XIIB on the periodic table, the lanthanides and actinides have their own section (the actinides form the bottom row and are radioactive; to find out why these elements have their own section, check out the electron configurations page), and the main groups, 1, 2, and 13 through 18, contain the most naturally abundant elements and are the most important for life.

Types of Elements

Elements in the periodic table can be placed into two broad categories, metals and nonmetals, with the metalloids forming a third, intermediate type. Metals form basic oxides; the more basic the oxide, the higher the metallic character. With the exception of hydrogen and mercury, the gaseous and liquid elements occur in the right-hand part of the periodic table, the region associated with the nonmetallic elements. The metalloids are located along the staircase separating the metals from the nonmetals; they look like metals and in some ways behave like metals, but also have some nonmetallic properties. For example, silicon has a metallic luster but is brittle and is an inefficient conductor of electricity like a nonmetal; indeed, one of the most important physical properties of metalloids is their semi-conductivity. The noble gases are treated as a special group of nonmetals: they are very nonreactive because they already have a full valence shell with 8 electrons, and they are left out of the trends in atomic radii because there is great debate over the experimental values of their radii.

Physical properties are properties that can be observed without the identity of the substance changing; they include such things as color, brittleness, malleability, ductility, density, and melting and boiling points. Heat and electricity conductivity vary regularly across a period, and melting points may increase gradually or reach a peak within a group and then reverse direction.

Periodic Trends

Understanding the periodic trends comes down to analyzing the elements' electron configurations: all elements prefer a stable octet and will gain or lose electrons to reach that configuration. These trends explain the periodicity observed in atomic radius, ionization energy, electron affinity, and electronegativity.

Atomic radius. The atomic radius of an element is half of the distance between the centers of two atoms of that element; all that we can measure directly is the distance between two nuclei (the internuclear distance). Moving left to right across a period, electrons are added one at a time while the nuclear charge increases, so the nucleus pulls harder on the outer electrons and the atomic radius decreases. Moving down a group, the number of filled electron shells increases; these filled principal energy levels shield the outermost electrons from attraction to the nucleus, so the radius increases.

Effective nuclear charge and screening. Screening is the concept of the inner electrons blocking the outer electrons from the nuclear charge. The effective nuclear charge is calculated as Z_eff = Z - S, where Z is the atomic number and S is the number of inner electrons that screen the outer electrons. Students can easily find S by using the atomic number of the noble gas that is one period above the element; for example, the S we would use for chlorine would be 10 (the atomic number of neon), so chlorine's outer electrons are pulled toward the nucleus with an effective charge of 17 - 10 = +7, and the atomic radius is correspondingly smaller.

Ionization energy. Ionization energy is the energy required to remove an electron from a gaseous atom, and it is always positive. Ionization energies increase moving from left to right across a period, as the atomic radius decreases and the effective nuclear charge increases. The successive ionization energies of the period-three elements show this sharply: Na at the second ionization energy, Mg at the third, Al at the fourth, and so on, all show a huge jump in energy compared to the preceding one, because the preceding configuration is a stable octet and requires a much larger amount of energy to ionize.

Electron affinity. Electron affinity (E.A.) is the energy change that occurs when an electron is added to a gaseous atom; equivalently, it is the enthalpy change that results from the addition of an electron to a gaseous atom. The greater the negative value, the more stable the resulting anion; atoms with a stronger effective nuclear charge have a greater electron affinity, while elements of most other groups have low electron affinities. The gain of an electron does not alter the nuclear charge, but the added electron increases the screening felt by the outer electrons, so anions have a greater radius than the atom they were formed from.

Ions. A cation is an atom that has lost one of its outer electrons; with the loss of an electron, the positive nuclear charge out-powers the negative charge that the remaining electrons exert, so cations have a smaller radius than the atom they were formed from. Excluding hydrogen, the elements in group 1 on the very left-hand side of the periodic table are called alkali metals; they all form ions with a +1 charge and have the largest atoms of any elements in their respective periods. When main-group elements, those in groups 1, 2, and 13 through 18, form ions, they gain or lose electrons to reach the configuration of the nearest noble gas; for groups 1 and 2 this means losing the same number of electrons as the group number. Reduction is a reaction that results in the gaining of an electron.

Electronegativity. Electronegativity is related to ionization energy and electron affinity: the smaller the ionization energy, the easier it is to remove an electron, and the more negative the electron affinity, the more readily an atom gains an electron in a bond. Electronegativity will be important when we later determine polar and nonpolar molecules.

Exercises.
1. Arrange these elements according to increasing metallic character: Li, S, Ag, Cs, Ge.
2. Arrange these elements according to increasingly negative electron affinity: Ba, F, Si, Ca, O.

Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0.
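The screening shortcut described above (take S to be the atomic number of the noble gas one period up) is easy to sanity-check in a few lines. This is only an illustrative sketch; the function name and data below are ours, not part of the original text.

```python
# Effective nuclear charge via the simple screening rule Z_eff = Z - S,
# where S is the atomic number of the noble gas one period above the element.
NOBLE_GASES = [2, 10, 18, 36, 54, 86]  # He, Ne, Ar, Kr, Xe, Rn

def effective_nuclear_charge(z):
    """Rule-of-thumb Z_eff for a main-group element with atomic number z."""
    # S = largest noble-gas atomic number strictly below z (0 for H and He)
    s = max((ng for ng in NOBLE_GASES if ng < z), default=0)
    return z - s

# Chlorine (Z = 17): S = 10 (neon), so Z_eff = 17 - 10 = +7, as in the text.
print(effective_nuclear_charge(17))  # 7
```

Sodium (Z = 11) gives 11 - 10 = +1 by the same rule, matching its single loosely held valence electron.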
Halogens ("halogen" means "salt-former") are in general very reactive, especially with the alkali metals and alkaline earth metals of groups 1 and 2, with which they form ionic compounds; compounds that contain a halogen are called halides. The physical properties of the halogens vary significantly, as at room temperature they span solids, liquids, and gases. Nitrogen, oxygen, and fluorine do not follow the general electron-affinity trend; fluorine is the most electronegative element. The alkaline earth metals are found in group 2 and consist of beryllium, magnesium, calcium, strontium, barium, and radium; they form ions with a +2 charge and have high oxidation potentials, so they are easily oxidized and are strong reducing agents. The noble gases (group 18, sometimes referred to as group 0) have electron affinities close to zero, because they already have a full valence shell and have little tendency to gain or lose electrons. The 14 elements following lanthanum (Z = 57) are called lanthanides, and the 14 following actinium (Z = 89) are called actinides. Atomic radii are commonly measured in picometers (pm); a metallic radius is one-half the distance between the nuclei of two adjacent atoms. Ionization energy decreases moving down a group, since the outer electrons are farther from the nucleus and held more loosely, and electron affinities become less negative down a group; electronegativity increases from bottom to top and from left to right across the table. Melting and boiling points are due to intermolecular forces. Dmitri Mendeleev, a Russian scientist, was the first to arrange the elements into a periodic table, in 1869. Exercise: arrange these elements according to increasing metallic character: Na, C, Sr, Cu, Fr.
by Wen Chuan Lee
This is part of a series of posts that I wanted to do around late October last year; I finally got around to it.
When I was doing an assignment in Internet Programming last semester, we were given the task of using the Google Maps API. This assignment also required the use of custom markers, and to have Info Windows that are displayed upon clicking the multiple custom markers.
The Google Maps API example for InfoWindows was useful in displaying only one info window (a window with additional information if a marker is clicked), and this led me to thinking I could instantiate a new InfoWindow and add an event listener to it.
Something like this:
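A reduced, Maps-free sketch of the pattern (plain callbacks stand in for `google.maps.Marker` click listeners, and the data values stand in for InfoWindow instances; all names here are illustrative):

```javascript
// Every "listener" closes over the same loop-scoped variable,
// so they all end up seeing its final value.
var data = ['marker A', 'marker B', 'marker C'];
var listeners = [];
for (var i = 0; i < data.length; i++) {
  var infowindow = data[i];        // re-assigned on every iteration
  listeners.push(function () {     // the "click handler" captures the
    return infowindow;             // variable, not its value right now
  });
}
// Whichever "marker" is clicked, the last infowindow is returned.
var results = listeners.map(function (f) { return f(); });
console.log(results); // ['marker C', 'marker C', 'marker C']
```

This is exactly the behavior described below: every marker opened the info window of the last item in the array.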
However, this led to the InfoWindow only displaying information about the last item in the array of objects, no matter which marker I clicked.
After a bit of head scratching and Google-fu, I then realized I had stumbled onto an issue caused by closures.
## The actual problem
JavaScript has a language construct called closures, which capture references to external variables. What was happening here was that every function added to a marker as a listener held a reference to the same 'infowindow' variable, which by the end of the loop pointed at the last instance. Another way to wrap your head around this is that the functions are only invoked when the event actually fires, by which time the InfoWindow to be opened is the last item in the data array we iterated through. Coming from a mostly-Java background, this confused me for a bit, as the same kind of code in Java would not have this 'weird' issue of all custom markers (in an array) only displaying information from the last item in the data array.
## The solution
Thanks to trusty StackOverflow and some understanding of what closures are, there are 2 solutions to avoiding the side effect created by JavaScript closures.
1. Adding the Infowindow object to the marker itself, using a key
We can explicitly create a reference to a specific infowindow by assigning it to a custom key on the marker object, and then later retrieving it through that reference when the marker is clicked and the event is fired. By assigning the infowindow to a marker property, each marker can have its own infowindow.
This was how I approached the problem, with the relevant sample code.
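A minimal sketch of this approach, with plain objects standing in for `google.maps.Marker` (the `infowindow` key and `onClick` method are illustrative stand-ins for the real marker property and click listener):

```javascript
// Each marker carries its own infowindow under a custom key,
// so the lookup happens on the clicked marker at click time.
var data = ['info A', 'info B', 'info C'];
var markers = data.map(function (info, i) {
  var marker = { id: i };
  marker.infowindow = info;        // custom key on the marker itself
  marker.onClick = function () {
    return this.infowindow;        // 'this' is the marker that was clicked
  };
  return marker;
});
console.log(markers[0].onClick()); // 'info A'
console.log(markers[2].onClick()); // 'info C'
```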
2. Using anonymous function wrapping
We can also keep the infowindows and the markers stored separately; all we have to do is add an anonymous wrapper function that returns the function to be called on the click event. The wrapper is invoked immediately inside the loop, so each returned handler captures that iteration's value of 'key' instead of a shared variable that ends up holding only the last value. The anonymous function wrapping keeps the value of 'key' from being bound to only its final value.
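A sketch of the wrapping technique on its own (again with plain functions in place of the Maps click listeners; the names are illustrative):

```javascript
// An immediately-invoked wrapper gives each handler its own binding of k,
// fixed at the iteration in which the handler was created.
var data = ['info A', 'info B', 'info C'];
var handlers = [];
for (var key = 0; key < data.length; key++) {
  handlers.push((function (k) {    // k is a fresh parameter per call
    return function () {           // the actual "click handler"
      return data[k];
    };
  })(key));
}
console.log(handlers[0]()); // 'info A'
console.log(handlers[2]()); // 'info C'
```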
Option 1 probably appeals to programmers coming from a Java background, at least in my opinion. When I first started out in JavaScript programming I found the whole "wrap this function around that callback and pass functions around" style a little intimidating; that is, until I realized it was functional programming, and it was beautiful in its own way.
Code for my assignment can be found here, as part of the apollo-academia repository.
Trudy Matematicheskogo Instituta imeni V.A. Steklova
Flow structure behind a shock wave in a channel with periodically arranged obstacles
V. A. Shargatov^a, A. P. Chugainova^b, S. V. Gorkunov^a, S. I. Sumskoi^a
a National Research Nuclear University MEPhI, Kashirskoe sh. 31, Moscow, 115409 Russia
b Steklov Mathematical Institute of Russian Academy of Sciences, ul. Gubkina 8, Moscow, 119991 Russia
Abstract: We study the propagation of a pressure wave in a rectangular channel with periodically arranged obstacles and show that a flow corresponding to a discontinuity structure may exist in such a channel. The discontinuity structure is a complex consisting of a leading shock wave and a zone in which pressure relaxation occurs. The pressure at the end of the relaxation zone can be much higher than the pressure immediately behind the gas-dynamic shock. We derive an approximate formula that relates the gas parameters behind the discontinuity structure to the average velocity of the structure. The calculations of the pressure, velocity, and density of the gas behind the structure that are based on the average velocity of the structure agree well with the results of gas-dynamic calculations. The approximate dependences obtained allow us to estimate the minimum pressure at which there exists a flow with a discontinuity structure. This estimate is confirmed by gas-dynamic calculations.
Funding: This work is supported by the Russian Science Foundation under grant 16-19-00188.
DOI: https://doi.org/10.1134/S0371968518010181
English version:
Proceedings of the Steklov Institute of Mathematics, 2018, 300, 206–218
UDC: 533.6.011.72
Citation: V. A. Shargatov, A. P. Chugainova, S. V. Gorkunov, S. I. Sumskoi, “Flow structure behind a shock wave in a channel with periodically arranged obstacles”, Modern problems and methods in mechanics, Collected papers. On the occasion of the 110th anniversary of the birth of Academician Leonid Ivanovich Sedov, Trudy Mat. Inst. Steklova, 300, MAIK Nauka/Interperiodica, Moscow, 2018, 216–228; Proc. Steklov Inst. Math., 300 (2018), 206–218
This publication is cited in the following articles:
1. Chugainova A.P., Shargatov V.A., Gorkunov S.V., Sumskoi S.I., “Regimes of Shock Wave Propagation Through Comb-Shaped Obstacles”, AIP Conference Proceedings, 2025, ed. Todorov M., Amer Inst Physics, 2018, 080002-1
## Mahler's measure for polynomials in several variables
##### Doc. Math. Extra Vol. Mahler Selecta, 45-56 (2019)
DOI: 10.25537/dm.2019.SB-45-56
### Summary
If $P(x_1, \ldots, x_k)$ is a polynomial with complex coefficients, the Mahler measure of $P$, $M(P)$, is defined to be the geometric mean of $|P|$ over the $k$-torus, $\mathbb{T}^k$. We briefly describe Mahler's motivation for defining this function and his applications of it to polynomial inequalities. We then describe how this function occurs naturally in the study of Lehmer's problem concerning the set of all measures of one-variable polynomials with integer coefficients. We describe work of Deninger which shows how Mahler measure arises in the study of the far-reaching Beĭlinson conjectures and leads to surprising conjectural explicit formulas for some measures of multivariable polynomials. Finally we describe some of the recent work of many authors proving some of these formulas by a variety of different methods.
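In symbols, with normalized Haar measure on the torus, this geometric mean reads:

```latex
M(P) \;=\; \exp\left( \int_{0}^{1} \cdots \int_{0}^{1}
  \log \bigl| P\bigl(e^{2\pi i \theta_{1}}, \ldots, e^{2\pi i \theta_{k}}\bigr) \bigr|
  \, d\theta_{1} \cdots d\theta_{k} \right).
```

For $k = 1$ and $P(x) = a \prod_j (x - \alpha_j)$, Jensen's formula reduces this to $M(P) = |a| \prod_j \max(1, |\alpha_j|)$, the form in which Lehmer's problem is usually stated.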
### Mathematics Subject Classification
11-03, 11R06, 11R09
### Keywords/Phrases
Lehmer's problem, Beĭlinson conjectures
### References
• [M148]. K. Mahler, On some inequalities for polynomials in several variables, J. London Math. Soc. 37 (1962), 341-344.
• [M153]. K. Mahler, A remark on a paper of mine on polynomials, Illinois J. Math. 8 (1964), 1-4.
• 1. Marie-José Bertin, Une mesure de Mahler explicite, C. R. Acad. Sci. Paris Sér. I Math. 333 (2001), no. 1, 1-3.
• 2. Marie-José Bertin, Mesure de Mahler d'hypersurfaces $K3$, J. Number Theory 128 (2008), no. 11, 2890-2913.
• 3. Marie-José Bertin, Amy Feaver, Jenny Fuselier, Matilde Lalín, and Michelle Manes, Mahler measure of some singular $K3$-surfaces, David, Chantal (ed.) et al., Women in numbers 2: research directions in number theory, Contemp. Math., vol. 606, Amer. Math. Soc., Providence, RI, 2013, pp. 149-169.
• 4. Marie-José Bertin and Matilde Lalín, Mahler measure of multivariable polynomials, David, Chantal (ed.) et al., Women in numbers 2: research directions in number theory, Contemp. Math., vol. 606, Amer. Math. Soc., Providence, RI, 2013, pp. 125-147.
• 5. David W. Boyd, Small Salem numbers, Duke Math. J. 44 (1977), no. 2, 315-328.
• 6. David W. Boyd, Pisot and Salem numbers in intervals of the real line, Math. Comp. 32 (1978), no. 144, 1244-1260.
• 7. David W. Boyd, Reciprocal polynomials having small measure, Math. Comp. 35 (1980), no. 152, 1361-1377.
• 8. David W. Boyd, Speculations concerning the range of Mahler's measure, Canad. Math. Bull. 24 (1981), no. 4, 453-469.
• 9. David W. Boyd, The asymptotic behaviour of the binomial circulant determinant, J. Math. Anal. Appl. 86 (1982), no. 1, 30-38.
• 10. David W. Boyd, Reciprocal polynomials having small measure. II, Math. Comp. 53 (1989), no. 187, 355-357, S1-S5.
• 11. David W. Boyd, Two sharp inequalities for the norm of a factor of a polynomial, Mathematika 39 (1992), no. 2, 341-349.
• 12. David W. Boyd, Mahler's measure and special values of $L$-functions, Experiment. Math. 7 (1998), no. 1, 37-82.
• 13. David W. Boyd, Mahler's measure, hyperbolic geometry and the dilogarithm, CMS Notes 34 (2002), 3-4 and 26-28, Note that the rational numbers that appear in formulas near the end of this paper were incorrectly typeset, so for example in equation (15) of that paper, $72$ should be read as $7/2$ and $34$ should be read as $3/4$.
• 14. David W. Boyd, Mahler's measure and l-functions of elliptic curves at $s = 3$, Slides of a lecture at the Simon Fraser University Number Theory Seminar available at http://www.math.ubc.ca/~boyd/sfu06.ed.pdf, 2006.
• 15. David W. Boyd, Mahler's measure and special values of l-functions, Slides of a lecture at the Pacific Northwest Number Theory Seminar in Eugene, Oregon, available at http://www.math.ubc.ca/~boyd/pnwnt2015.ed.pdf, 2015.
• 16. David W. Boyd, Christopher Deninger, Douglas Lind, and Fernando Rodriguez Villegas, The many aspects of Mahler's measure, Banff International Research Station Report available at http://www.birs.ca/workshops/2003/03w5035/report03w5035.pdf, 2003.
• 17. David W. Boyd, Nathan M. Dunfield, and Fernando Rodriguez Villegas, Mahler's measure and the dilogarithm. II, available at https://arxiv.org/pdf/math/0308041.pdf, 2003.
• 18. David W. Boyd and Michael J. Mossinghoff, Small limit points of Mahler's measure, Experiment. Math. 14 (2005), no. 4, 403-414.
• 19. David W. Boyd and Fernando Rodriguez Villegas, Mahler's measure and the dilogarithm. I, Canad. J. Math. 54 (2002), no. 3, 468-492.
• 20. Robert Breusch, On the distribution of the roots of a polynomial with integral coefficients, Proc. Amer. Math. Soc. 2 (1951), 939-941.
• 21. François Brunault, Regulators of Siegel units and applications, J. Number Theory 163 (2016), 542-569.
• 22. John D. Condon, Asymptotic expansion of the difference of two Mahler measures, J. Number Theory 132 (2012), no. 9, 1962-1983.
• 23. Christopher Deninger, Deligne periods of mixed motives, $K$-theory and the entropy of certain ${\mathbf Z}^n$-actions, J. Amer. Math. Soc. 10 (1997), no. 2, 259-281.
• 24. J. Dufresnoy and Ch. Pisot, Etude de certaines fonctions méromorphes bornées sur le cercle unité. Application à un ensemble fermé d'entiers algébriques, Ann. Sci. Ecole Norm. Sup. (3) 72 (1955), 69-92.
• 25. Alain Durand, On Mahler's measure of a polynomial, Proc. Amer. Math. Soc. 83 (1981), no. 1, 75-76.
• 26. J. S. Frame, Factors of the binomial circulant determinant, Fibonacci Quart. 18 (1980), no. 1, 9-23.
• 27. A. O. Gel'fond, Transcendental and algebraic numbers, Translated from the first Russian edition by Leo F. Boron, Dover Publications, Inc., New York, 1960.
• 28. Matilde Lalín, Some examples of Mahler measures as multiple polylogarithms, J. Number Theory 103 (2003), no. 1, 85-108.
• 29. Matilde Lalín, Mahler measure and elliptic curve $L$-functions at $s=3$, J. Reine Angew. Math. 709 (2015), 201-218.
• 30. Matilde Lalín and Mathew D. Rogers, Functional equations for Mahler measures of genus-one curves, Algebra Number Theory 1 (2007), no. 1, 87-117.
• 31. Matilde Lalín, Detchat Samart, and Wadim Zudilin, Further explorations of Boyd's conjectures and a conductor 21 elliptic curve, J. Lond. Math. Soc. (2) 93 (2016), no. 2, 341-360.
• 32. Wayne M. Lawton, A problem of Boyd concerning geometric means of polynomials, J. Number Theory 16 (1983), no. 3, 356-362.
• 33. D. H. Lehmer, Factorization of certain cyclotomic functions, Ann. of Math. (2) 34 (1933), no. 3, 461-479.
• 34. Douglas Lind, Klaus Schmidt, and Tom Ward, Mahler measure and entropy for commuting automorphisms of compact groups, Invent. Math. 101 (1990), no. 3, 593-629.
• 35. Anton Mellit, Elliptic dilogarithms and parallel lines, J. Number Theory 204 (2019), 1-24.
• 36. Michael J. Mossinghoff, Polynomials with small Mahler measure, Math. Comp. 67 (1998), no. 224, 1697-1705, S11-S14.
• 37. Michael J. Mossinghoff, Georges Rhin, and Qiang Wu, Minimal Mahler measures, Experiment. Math. 17 (2008), no. 4, 451-458.
• 38. Fernando Rodriguez Villegas, Modular Mahler measures. I, Ahlgren, Scott D. (ed.) et al., Topics in number theory (University Park, PA, 1997), Math. Appl., vol. 467, Kluwer Acad. Publ., Dordrecht, 1999, pp. 17-48.
• 39. Mathew Rogers and Wadim Zudilin, From $L$-series of elliptic curves to Mahler measures, Compos. Math. 148 (2012), no. 2, 385-414.
• 40. Mathew Rogers and Wadim Zudilin, On the Mahler measure of $1+X+1/X+Y+1/Y$, Int. Math. Res. Not. IMRN (2014), no. 9, 2305-2326.
• 41. Carl Ludwig Siegel, Algebraic integers whose conjugates lie in the unit circle, Duke Math. J. 11 (1944), 597-602.
• 42. Chris J. Smyth, On the product of the conjugates outside the unit circle of an algebraic integer, Bull. London Math. Soc. 3 (1971), 169-175.
• 43. Chris J. Smyth, On measures of polynomials in several variables, Bull. Aust. Math. Soc. 23 (1981), no. 1, 49-63.
• 44. Chris J. Smyth, The Mahler measure of algebraic numbers: a survey, McKee, James (ed.) et al., Number theory and polynomials, London Math. Soc. Lecture Note Ser., vol. 352, Cambridge Univ. Press, Cambridge, 2008, pp. 322-349.
• 45. Chris J. Smyth, Seventy years of Salem numbers, Bull. Lond. Math. Soc. 47 (2015), no. 3, 379-395.
### Affiliation
Boyd, David W.
Department of Mathematics, Univ. of British Columbia, Vancouver, B.C. V6T 1Z2, Canada
# Really Quick Differential Equation question
Saladsamurai
Really Quick Differential Equation question...
## Homework Statement
Alright so this is what I have for this problem. As you can see I used -i to find my Eigenvectors...now when I find my solution and plug it back into the original, I am getting the opposite of what I am supposed to get.
Was I supposed to use -t instead of +t in my solution since I used lambda=-i to find it?
Or did I make some stupid algebraic error again?
Thanks!
## Answers and Replies
Saladsamurai
I am thinking I should have used cos(-t) and sin(-t) since I used $-i\Rightarrow \alpha=0,\ \beta=-1$
Saladsamurai
> I am thinking I should have used cos(-t) and sin(-t) since I used $-i\Rightarrow \alpha=0,\ \beta=-1$

I replaced t with -t and this worked. So I will assume my reason was correct.
|
{}
|
time-parsers-0.1.2.0: Parsers for types in time.
Copyright: (c) 2015 Bryan O'Sullivan, 2015 Oleg Grenrus
License: BSD3
Maintainer: Oleg Grenrus
Safe Haskell: None
Language: Haskell2010
Data.Time.Parsers
Description
Parsers for parsing dates and times.
Synopsis
# Documentation
day :: DateParsing m => m Day Source #
Parse a date of the form YYYY-MM-DD.
month :: DateParsing m => m (Integer, Int) Source #
Parse a month of the form YYYY-MM
Parse a date and time, of the form YYYY-MM-DD HH:MM:SS. The space may be replaced with a T. The number of seconds may be followed by a fractional component.
Parse a time of the form HH:MM:SS[.SSS].
timeZone :: DateParsing m => m (Maybe TimeZone) Source #
Parse a time zone, and return Nothing if the offset from UTC is zero. (This makes some speedups possible.)
utcTime :: DateParsing m => m UTCTime Source #
Behaves as zonedTime, but converts any time zone offset into a UTC time.
Parse a date with time zone info. Acceptable formats:
YYYY-MM-DD HH:MM:SS Z
The first space may instead be a T, and the second space is optional. The Z represents UTC. The Z may be replaced with a time zone offset of the form +0000 or -08:00, where the first two digits are hours, the : is optional and the second two digits (also optional) are minutes.
|
{}
|
# Tag Info
15
Instead of using regular expressions to manipulate the expression string I prefer to do expression manipulation. While this can be a little daunting at first it turns out to be fairly simple. It handles a lot more of the oddball cases. For instance, when someone does this: var builder = Validator<String>.Builder; var stringValidator = builder ....
12
builder .LessThen(() => x.Length < y.Length || x.Price < y.Price) .Equal(() => x.Length == y.Length || x.Price == y.Price) .GreaterThan(() => x.Length > y.Length || x.Price > y.Price); gives me a very bad feeling, which turned out to be justified when I saw public int Compare(T x, T y) ...
10
You should be able to cut the code length and improve performance, by simplifying the algorithm. Here are some of the problems I've noticed in your implementation: You're currently processing all of the type's properties, this is not necessary as you only need few concrete ones. There are few other problems deriving from this one. You're creating the ...
10
I really admire your efforts and I read the question as more about Expressions than comparison. Anyway: as for the comparison, you should be aware that the result is different if the initial order of the Products is changed: this: var products = new[] { new Product {Name = "Car", Price = 7 }, new Product {Name = "Table", Price = 3 }, new ...
9
Very nice implementation; I always like seeing your code here. I really only have five very minor opinions on this implementation: I'm not sold on expr as an abbreviation for expression. I'd recommend expression in its various incarnations. 1a. For that matter, t in the Create method should probably be called composedExpressions. The class constants ...
8
This is great work and very useful. I have the following comments/suggestions: 1) The usual suspects: a) return "[" + string.Join(", ", values.Select(FormatValue)) + "]"; should/could be: return $"[ {string.Join(", ", sample.Select(FormatValue))} ]"; b) if (Type.GetTypeCode(value?.GetType()) == DBNull.Value.GetTypeCode()) return $"{nameof(...
8
I'd say this is far too complicated. It took me some time to figure out what those expressions are getting rewritten to, and the results do not look very efficient: (x => new { x.Name.Length, x.Price }).Invoke(left).Length < (x => new { x.Name.Length, x.Price }).Invoke(right).Length || ... What are the benefits here compared to a simple Comparer&...
7
Don't one line things that shouldn't be one lined. Just because you can do something, doesn't mean that you should. Extract the duplicated query into a method of it's own that returns an IQueryable Don't query the database twice for the same count, execute the query and store it in a variable. private IQueryable<foo> ExecutorOrdersInLast30Days(...
7
You use IList<> where you should use ICollection<>. I've rarely encountered a scenario where IList<> actually needs to be used. The ICollection<> interface has most of the list's methods, but without everything related to indexing, which you don't use anyway. It's not that big of a deal, but I think it's good knowledge. When you ...
6
I just noticed a couple of small issues. First, IsValid(obj) may return true on null, whereas Validate(obj) has a special check for null. I would rewrite IsValid this way: public bool IsValid(T obj) { bool anyErrors = Validate(obj).Any(); return !anyErrors; } Second, your regex replacement might produce odd results in certain cases. Consider: e =&...
6
Note that your code will return null in case if the sort order is not defined. I've preserved this logic, but you may want to change it to return original query (and thus change return type to IQueryable<T_PO>). You are doing 2 separate actions in this method, so if you split it into 2 parts you can get more compact and readable code: public static ...
6
It looks alright to me, although it seems odd having a young man of age 20 to equal an "elderly" man of age 30 :-). I have 3 minor things: 1) One MemoryStream can be used to serialize all expressions: public Func<T, byte[]> Build(Func<byte[], byte[]> computeHash) { return obj => { var binaryFormatter = new BinaryFormatter()...
6
I don't like the concept that you have to chain the .With()-calls for every property you want to modify in the copy process because you create a new instance for each call. A simple non-generic solution could be to use the System.Runtime.InteropServices.OptionalAttribute and named parameters: public class Phone { public Phone() { } Phone(Phone ...
6
Very fun game you created here, I like it. Your calculations for the probabilities are also impressive, and well tested. Some small suggestions: UI improvement: Print a message when "leveling up". Answer.Wrong.also { } has a redundant .also that can be removed. Enum constants are usually written in uppercase, making it AND, OR, XOR, IMPLIES. The usage of ...
5
I wouldn't worry too much about the amount of things that aren't supported (yet/if ever) - it's impossible to cover everything in a scenario like this. One thing I would suggest is that you throw exceptions so the caller knows they're doing something unexpected: protected override Expression VisitMethodCall(MethodCallExpression m) { if (m.Method....
5
Not much to say here. Your code looks clean and is easy to read. There is just a little bit what I would change, namely the "default" rule of the Validator<T>. If you ever would have the need to validate that a passed T obj is null you couldn't do it with the Validator<T> in its current state. Maybe having a "default" rule as a property ...
5
Apart from the improvements suggested by Henrik Hansen and a couple of null checks I changed the list with tuples into SortedDictionary to avoid repetitive OrderBy and added a null check before calling getFingerprint for a property. This is the updated builder: public class FingerprintBuilder<T> { private readonly Func<byte[], byte[]> ...
5
I'd personally like DebuggerDisplayHelper.ToString() to be an extension method, so I finagled it up as such: public static class DebuggerDisplayHelper { public static string ToString<T>(this T obj, Action<DebuggerDisplayBuilder<T>> builderAction) { return DebuggerDisplayHelperInternal<T>.ToString(obj, builderAction); ...
5
I like the idea, but I'm in line with dfhwze meaning it's a little too verbose and complicated to follow, especially when unable to debug. I would prefer a more simple pattern like the one dfhwze suggests: var result = Tester // the person .Validate() .NotNull(p => p.LastName, "LastName is Null") .IsTrue(p => p.FirstName.Length > ...
4
Don't hang on to an IEnumerable public class Validator<T> { private readonly IEnumerable<ValidationRule<T>> _rules; public Validator(IEnumerable<ValidationRule<T>> rules) { _rules = rules; } ... } It's generally recommended to immediately materialize an enumerable if you're going to keep the ...
4
That looks nice. A few notes: Enums can have different underlying types, and some are larger than int, which can result in subtle bugs. Use propertyType.GetEnumUnderlyingType() instead. Some documentation in EqualityPropertyAttribute would be useful. For example, only public properties that are decorated with this attribute are taken into account, which ...
4
It's a shame that you have to use such a complicated mechanism because the language doesn't support strong enough type constraints. Given the limitations of the language, this looks like an elegant solution. I find the naming slightly curious. Given GeometricSequence, I expect the linear one to be called ArithmeticSequence. Alternatively, LinearSequence ...
4
As developer consuming your API .. Usability I find this a verbose way of constructing validation rules. var rules = ValidationRuleCollection .For<Person>() .Add(x => ValidationRule .Require .NotNull(x)) .Add(x => ValidationRule .Require ...
3
One other suggestion is that the expression you are building don't change based on the value you pass into the constructors (besides GeometricSequence which you still could make a parameter instead of a constant). You could build your expression in the static constructor since it only changes by type. Or make a static Lazy field that builds the ...
3
Since you are reading attributes I would put the work in the static constructor. That will run only once per type otherwise you should make the Create method lazy as there is no point in building the expressions multiple times. Also you can clean up your code quite a bit if you always just use IEqualityComparer for every property. You wouldn't need ...
3
Nice work! One thing you could do is leverage polymorphism for the Validation class so that you have a separate type for Valid and Invalid results. And then you can re-use the validation "loop" in the IsValid method, to make sure the two don't diverge (e.g. you don't have to have a separate null check in the IsMet method as well). Mind you, I am not sure ...
3
I might be missing something, but from what I can see IsSatisfiedBy is only ever used by IsSatisfiedBy, and can be removed. Consider making ToExpression protected, since with the implicit operator I don't see how exposing it is helpful. Consider exposing operators & and | to make composition of more complicated specifications easier to read and write. ...
3
Here are a few things I noticed: public PropertyComparer(params string[] properties) { var type = typeof(T); Properties = properties .Select(name => type.GetProperty(name)) .ToArray(); } There is no need for the lambda and it could be written as: Properties = properties .Select(type.GetProperty) .ToArray(); but for ...
3
This still looks fairly tedious: config.SetValue(() => x.PublicProperty); config.SetValue(() => x.PublicField); config.SetValue(() => x.PublicReadOnlyProperty); If you are going with reflection, I'd go all the way and implement automatic serialization/deserialization. //pseudocode for property deserialization var targetObject = ...; foreach(var ...
3
One of the things I think when I see a large case statement is would this be better off as some kind of lookup table. I think yours might have some scope for doing this since they all seem to do some processing on a viewSets and a filter. Using a simple lookup table to convert the string "Where", "Single" etc into a method call would allow you to separate ...
|
{}
|
# OpenGL Moving items withthe mouse (OpenGL (gluUnProject related))
This topic is 4046 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Greetings! This is my first post on this forum and I am a beginner in OpenGL. I have gotten the basics (I think) and can put some shapes on the screen and position them programmatically. I now want to be able to click on an item (say a sphere) on the screen and move the item around the screen using the mouse. I have so far managed to get my head around and the code working which selects which items has been clicked. I am having a hard time with mapping the screen coordinates to object coordinates. At the moment the following is the code I am using to try and achieve this.
//OnMouseMove
GLdouble modelMatrix[16];
GLdouble projMatrix[16];
int viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelMatrix);
glGetDoublev(GL_PROJECTION_MATRIX, projMatrix);
glGetIntegerv(GL_VIEWPORT, viewport);
GLdouble objX, objY, objZ;
gluUnProject(m_MouseNew.x, viewport[3] - m_MouseNew.y, 1, modelMatrix, projMatrix, viewport, &objX, &objY, &objZ);
//posX is used later to transform the sphere when it gets drawn
m_p3DObjects[m_nSelection]->posX = objX;
Unfortunately, this does not work as expected. The farther the mouse gets from the centre of the screen, the further the sphere moves away from the centre of the screen. The difference becomes greater for more off-centre positions. What am I doing wrong? Does anybody have example code of drag-move for OpenGL? Thanks in advance, Aristotel
##### Share on other sites
Hmm, that's quite a question there.
Well, I could not tell you what is going on with your code :p
What I can try to do is give you an idea. From what I understand, the object follows the mouse. If this is correct, then perhaps try something like this:
Hide the mouse
Get the current mouse position
Check the difference from the center of the screen
Set the mouse position back to the center of the screen
Use the difference to move the selected object
Repeat
When the object is unselected:
Show the mouse and disable the move-object function and/or calls.
Just maybe a thought, not sure if it will help.
As I said, just an idea :)
##### Share on other sites
Hi there,
No, the sphere drawn does not follow the mouse; it only follows the mouse's direction, not its distance from the centre of the screen. Assume the sphere is in the centre of the screen. I click on it with the mouse, holding the button down. If I drag the mouse, say, 5 pixels away from the centre of the screen, the sphere moves in the same direction, but 10 pixels (for example). If I move the mouse 10 pixels, the sphere will move like 30 pixels. If I move the mouse 20 pixels, the sphere might move 60 pixels off-centre.
So the direction seems to follow correctly, but not the distance...where the perfect alignment occurs at the centre of the screen. Did that make any more sense?
Thanks once again
##### Share on other sites
Ahh, I see. Well, for the most part I'm not sure why that happens, but I might be able to help you out still.
But I'll need to know what compiler you are using, for debugging purposes!
##### Share on other sites
Thanks,
Visual Studio .NET (2003)
##### Share on other sites
Ok, so here's what you can do; my compiler's newer, but they're both Microsoft VC, so:
Put a breakpoint where your function is that calculates the move of the mouse, or wherever you're getting the data that affects the sphere's movement. You can just right-click in the code window and should be able to add one, no problem.
Now, I don't know if you know how to use the debugger or not, but by inserting a breakpoint and then debugging, your program will run up to that spot and then you can slowly step through the code.
GameDev.net -- Introduction to Debugging
It's a nice article on debugging.
Well, I'll try and continue anyway, but if you don't get any of what I'm saying, read the article above, ok?
As you step through your code you generally have two options: step over and step in.
Step over will just jump over functions and still run them, you just won't have to watch it. Step in will take you into a function and you will go through it step by step.
What I'm explaining this for: in my compiler (although it should be more or less the same), I hit Alt+5 and I get a small window at the bottom that shows me all of my local variables and their values. If you can track down where your values are getting higher than they should, it will be easier to fix the problem :)
Well, hopefully that made some kind of sense.
##### Share on other sites
PS: on another note, what and where did you learn how to load in objects? I haven't found anything decent that isn't a MilkShape 3D model loader; I'd like a 3DS object loader or something like that, so if you know anything, please let me know!
Hmm, perhaps I should have done that earlier, like looking up the specific type of file I was trying to load... funny how that search came back with much more of what I was looking for.
##### Share on other sites
I will soon be working on a similar problem. Right now in my engine there is a local coordinate system that shows up for my selected model with XYZ control points. I click and drag these to move the models.
Right now I have a hack version that just moves in the selected axis by however much the mouse moves in the X screen direction. But eventually I would like to be perfectly accurate like you are trying to achieve (currently my results behave like yours).
Begin reasonable thinking:
What you'll want to do is get a vector in world space from your screen space, so this will require two unprojects.
gluUnProject(oldMouse.x, viewport[3] - oldMouse.y, objectpick.depth, modelMatrix, projMatrix, viewport, &oldWorldX, &oldWorldY, &oldWorldZ);
gluUnProject(newMouse.x, viewport[3] - newMouse.y, objectpick.depth, modelMatrix, projMatrix, viewport, &newWorldX, &newWorldY, &newWorldZ);
worldMoveVectorX = newWorldX - oldWorldX;
worldMoveVectorY = newWorldY - oldWorldY;
worldMoveVectorZ = newWorldZ - oldWorldZ;
worldMoveVector should now contain a vector in world space that correlates to the screen vector created by a mouse move.
Assuming the above works correctly, adding this vector to your objects position will move it with the mouse in screen space.
My twist on it is that I want to be able to move in object space. So next (this is where I become increasingly unclear) I think I'd need to project the world space vector onto the selected object move axis vector. And the resultant magnitude is how far in the selected axis I'd need to move to correspond to the mouse movement.
Hope that helped some, and let us know when you find a good solution.
P.S. Jouei, I got mine from www.gametutorials.com
edit: I changed the Z component of unproject to objectpick.depth; I believe this is where your problem may stem from. If you are using a perspective matrix, the deeper into the scene your selected object is, the greater the corresponding world vector will be for any given screen vector. By generating the vector at the object's depth, this should be more accurate.
P.P.S Gamedev.net, what the hell? So slow it's painful.
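The two-unproject idea above can be sketched numerically outside of C++. Below is a pure-NumPy version: the unproject math follows gluUnProject's documented window-to-NDC mapping, but the function names and the identity-matrix toy check are invented here, not taken from this thread.

```python
import numpy as np

def unproject(win, modelview, projection, viewport):
    """Map window coords (wx, wy, wz) back to object/world coords,
    mirroring gluUnProject: window -> NDC -> inverse(P*M) -> divide by w."""
    x0, y0, w, h = viewport
    wx, wy, wz = win
    ndc = np.array([2.0 * (wx - x0) / w - 1.0,
                    2.0 * (wy - y0) / h - 1.0,
                    2.0 * wz - 1.0,
                    1.0])
    obj = np.linalg.inv(projection @ modelview) @ ndc
    return obj[:3] / obj[3]

def drag_vector(old_mouse, new_mouse, depth, modelview, projection, viewport):
    """World-space displacement matching a mouse move, with both points
    unprojected at the *picked object's* depth (the key fix)."""
    old_w = unproject((old_mouse[0], old_mouse[1], depth),
                      modelview, projection, viewport)
    new_w = unproject((new_mouse[0], new_mouse[1], depth),
                      modelview, projection, viewport)
    return new_w - old_w  # add this to the object's position

# Toy check with identity matrices: window coords map straight to NDC,
# so a 20-pixel move on an 800-pixel-wide viewport is 0.05 NDC units in x.
I = np.eye(4)
move = drag_vector((400, 300), (420, 300), 0.5, I, I, (0, 0, 800, 600))
print(move)  # a pure +x displacement, zero in y and z
```

In the thread's C++ code, the same computation is the pair of gluUnProject calls at objectpick.depth followed by the component-wise subtraction.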
##### Share on other sites
Yeah, I have been having trouble finding a tutorial that either (a) works or (b) isn't just poorly written.
Gametutorials is nice, but it is no longer free.
I wish it was :( but I do not have $70 to buy a CD full of code goodies.
PS: I need more caffeine, getting tired in my search...
##### Share on other sites
Hi guys, thanks for your replies...
The project I was working on was at the office and uses MFC, but now I am home, where I only have Visual Studio Express 2005, which doesn't support MFC. I will try to recreate the same project as a VS 2005 Express project that does not need MFC to set up the windowing etc.
Regarding debugging, no worries, I have been a software engineer for a few years now, but mostly audio... now trying to learn a bit of OpenGL; that's the only thing I am having issues with.
Regarding loading objects, I didn't... I only used a cone and a sphere.
Over the weekend I will set up a VS2005 project and try out honayboyz's suggestion, as that seems to be what I was missing. I will let you all know how it goes, and at that point I can share the VS2005 project with you all.
Hope to update you on the progress soon...
Thanks again!
Aristotel
|
{}
|
Ambient pressure, temperature, and relative humidity at a location are $\text{101 kPa}$, $\text{300 K}$, and $60\%$, respectively. The saturation pressure of the water at $\text{300 K}$ is $\text{3.6 kPa}$. The specific humidity of ambient air is ____________________ $\text{g/kg}$ of dry air.
1. $21.4$
2. $35.1$
3. $21.9$
4. $13.6$
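A quick numeric check, assuming the standard psychrometric relation $\omega = 0.622\,p_v/(p - p_v)$ with vapour pressure $p_v = \phi\, p_{sat}$ (the factor 0.622 is the molecular-weight ratio of water vapour to dry air):

```python
# Specific humidity from relative humidity: a worked check of the answer.
# Relation assumed: w = 0.622 * p_v / (p - p_v), with p_v = phi * p_sat.
p = 101.0     # ambient pressure, kPa
p_sat = 3.6   # saturation pressure of water at 300 K, kPa
phi = 0.60    # relative humidity

p_v = phi * p_sat              # vapour partial pressure = 2.16 kPa
w = 0.622 * p_v / (p - p_v)    # kg of vapour per kg of dry air
print(round(w * 1000, 1))      # 13.6 (g/kg of dry air), i.e. option 4
```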
|
{}
|
# Increasing sequence of normal magic squares
The questions below are motivated by pure curiosity. I heard of the first question from my former advisor. I have no idea how difficult they are, since I have no experience with magic squares.
By a normal magic square of order $n$ I mean a $n\times n$ magic square whose terms are all of the numbers $0,1,\ldots,n^2-1$.
1. Is it possible to construct an infinite sequence $M_n$ of normal magic squares such that $M_{n}$ is a block submatrix of $M_{n+1}$ lying in the centre of $M_{n+1}$ (i.e. to obtain $M_{n}$ we remove from $M_{n+1}$ the $k$ top rows, $k$ bottom rows, $k$ columns from the left and $k$ columns from the right)?
2. Can one construct a normal magic square of odd order (greater than $1$) with $0$ as the central element of the square?
Note that a positive answer to the second question gives a positive answer to the first one. Let $A=[a_{ij}]_{0\leq i,j\leq n}$ be a magic square as in question 2. For a number $c$ we shall use the notation $A+c=[a_{ij}+c]_{0\leq i,j\leq n}$. Then: $$A'=\left[\begin{matrix}A+(n+1)^2a_{00} & \dots & A+(n+1)^2a_{0n} \\ \vdots & \ddots & \vdots\\ A+(n+1)^2a_{n0} & \dots & A+(n+1)^2a_{nn}\end{matrix}\right]$$ is a normal magic square with $A$ in the centre.
Here you are: $$\begin{array}{|c|c|c|c|c|} \hline 8& 15& 21& 2& 14\cr \hline 20& 7& 13& 19& 1\cr \hline 12& 24& 0& 6& 18\cr \hline 4& 11& 17& 23& 5\cr \hline 16& 3& 9& 10& 22 \cr \hline \end{array}$$
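Both the explicit square in the answer and the block construction $A'$ from the question can be machine-checked. Here is a NumPy sketch (the helper name `is_normal_magic` is mine):

```python
import numpy as np

# The order-5 square from the answer, with 0 at the centre.
A = np.array([[ 8, 15, 21,  2, 14],
              [20,  7, 13, 19,  1],
              [12, 24,  0,  6, 18],
              [ 4, 11, 17, 23,  5],
              [16,  3,  9, 10, 22]])

def is_normal_magic(M):
    """Entries are exactly 0..n^2-1 and every row, column and
    diagonal sums to the magic constant n(n^2-1)/2."""
    n = M.shape[0]
    s = n * (n * n - 1) // 2
    return (sorted(M.ravel()) == list(range(n * n))
            and all(M.sum(axis=0) == s)
            and all(M.sum(axis=1) == s)
            and np.trace(M) == s
            and np.trace(np.fliplr(M)) == s)

assert is_normal_magic(A) and A[2, 2] == 0

# The construction from the question: block (i, j) of A' is A + 25*a_ij.
Ap = np.kron(25 * A, np.ones((5, 5), dtype=int)) + np.tile(A, (5, 5))
assert is_normal_magic(Ap)            # order 25, entries 0..624
assert (Ap[10:15, 10:15] == A).all()  # A sits in the centre of A'
```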
|
{}
|
# Mean-square displacements (MSD)¶
Generate a number of random walks and compute their MSD.
Compute also the MSD for a constant velocity motion.
For a random walk, the MSD is linear: $$MSD(\tau) \approx 2 D \tau$$
For a constant velocity motion, the MSD is quadratic: $$MSD(\tau) = v^2 \tau^2$$
We show in the figures the numerical result computed by tidynamics.msd (‘num.’) and the theoretical value (‘theo.’).
For the constant velocity case, we also display a “pedestrian approach” where the loop for averaging the MSD is performed explicitly.
import numpy as np
import tidynamics
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 6.4, 4.8
plt.rcParams['figure.subplot.bottom'] = 0.14
plt.rcParams['figure.subplot.left'] = 0.12
# Generate 32 random walks and compute their mean-square
# displacements
N = 1000
mean = np.zeros(N)
count = 0
for i in range(32):
    # Generate steps of value +/- 1
    steps = -1 + 2*np.random.randint(0, 2, size=(N, 2))
    # Compute random walk position
    data = np.cumsum(steps, axis=0)
    mean += tidynamics.msd(data)
    count += 1
mean /= count
mean = mean[1:N//2]
time = np.arange(N)[1:N//2]
plt.plot(time, mean, label='Random walk (num.)')
plt.plot(time, 2*time, label='Random walk (theo.)')
time = np.arange(N//2)
# Display the mean-square displacement for a trajectory with
# constant velocity. Here the trajectory is taken equal to the
# numerical value of the time.
plt.plot(time[1:], tidynamics.msd(time)[1:],
         label='Constant velocity (num.)', ls='--')
plt.plot(time[1:], time[1:]**2,
         label='Constant velocity (theo.)', ls='--')
# Compute the mean-square displacement by explicitly
# computing the displacements along shorter samples of the
# trajectory.
sum_size = N//10
pedestrian_msd = np.zeros(N//10)
for i in range(10):
    for j in range(N//10):
        pedestrian_msd[j] += (time[10*i]-time[10*i+j])**2
pedestrian_msd /= 10
plt.plot(time[1:N//10], pedestrian_msd[1:], ls='--',
         label="pedestrian")
plt.loglog()
plt.legend()
plt.xlabel('time')
plt.ylabel('mean square displacement')
plt.title('Examples for the mean-square displacement')
plt.show()
Total running time of the script: ( 0 minutes 0.042 seconds)
Gallery generated by Sphinx-Gallery
|
{}
|
# Welcome to the Institute for Quantum Computing
The Institute for Quantum Computing (IQC) is a scientific research institute at the University of Waterloo. The research happening at IQC harnesses the quantum laws of nature in order to develop powerful new technologies and drive future economies.
## What is quantum computing?
Start with our Quantum computing 101 page. It's a quick start guide on quantum computing to help you understand some of the basic principles of quantum mechanics.
## Delivering on the quantum promise
The Transformative Quantum Technologies (TQT) program at the University of Waterloo aims to advance the use of quantum mechanics from laboratory curiosity to an impactful device.
1. Oct. 31, 2019A twist and a spin: harnessing two quantum properties transforms a neutron beam into a powerful probe of material structure
By cleverly manipulating two properties of a neutron beam, NIST scientists and their collaborators have created a powerful probe of materials that have complex and twisted magnetic structures.
2. Oct. 17, 2019Three C&O professors are awarded NSERC Discovery Accelerator Supplements
Three C&O faculty members have been awarded 2019 NSERC Discovery Accelerator Supplements (DAS):
3. Oct. 15, 2019IQC Achievement Award winner announced
Mária Kieferová talks quantum algorithms, studying a PhD at two universities and keeping up with industry.
1. Nov. 25, 2019Fine-grained quantum supremacy
Tomoyuki Morimae, Kyoto University
It is known that several sub-universal quantum computing models, such as the IQP model, Boson sampling model, and the one-clean qubit model, cannot be classically simulated unless the polynomial-time hierarchy collapses. However, these results exclude only polynomial-time classical simulations. In this talk, based on fine-grained complexity conjectures, I show more "fine-grained" quantum supremacy results that prohibit certain exponential-time classical simulations. (Morimae and Tamaki, arXiv:1901.01637)
2. Nov. 25, 2019Interfacing Spins and Photons in Solids: Old Friends & New
Mete Atature, The University of Cambridge
Optically active spins in solids offer exciting opportunities as scalable and feasible quantum-optical devices. Numerous material platforms including diamond, semiconductors, and atomically thin 2d materials are under investigation, where each platform brings some advantages of control and feasibility along with other challenges. The inherently mesoscopic nature of solid-state platforms leads to a multitude of dynamics between spins, charges, vibrations and light.
3. Dec. 2, 2019Quantum Information Processing with Spins in Cold Atomic Ensembles
Ivan Deutsch, University of New Mexico
Atomic spins are natural carriers of quantum information given their long coherence time and our capabilities to coherently control and measure them with magneto-optical fields. In this seminar I will describe two paradigms for quantum information processing with ensembles of spin in cold atoms. The strong electric dipole-dipole interactions arising when atoms are excited to high-lying Rydberg states is a powerful method for designing entangling interactions in neutral atoms.
All upcoming events
|
{}
|
# unaltered citation in theorem
I would like to keep the way my citations are presented uniform. When I enter a citation after I declare my theorem, it's in italics just like the text of the theorem, whereas the citations in the body of the text are not. I'm currently using
\makeatletter
\renewcommand*{\@cite@ofmt}{\bfseries\hbox}
\makeatother
to make the number in the square bracket bold.
for instance, what I would like: Theorem 2.1 [3] statement (with the citation set upright)
what I currently have is: Theorem 2.1 [3] statement (with the citation italicised like the theorem text)
You could (re)define \@cite appropriately:
\documentclass{article}
\makeatletter
\def\@cite#1#2{{\normalfont[{\bfseries#1\if@tempswa , #2\fi}]}}
\makeatother
\newtheorem{theo}{Theorem}
\begin{document}
\cite{testA}
\begin{theo}
\cite{testA}
\end{theo}
\begin{thebibliography}{9}
\bibitem{testA} Author A. Title A. 2013
\end{thebibliography}
\end{document}
Using the code above, both the number and the eventual note produced through the optional argument of \cite will be bold-faced; to have just the number, but not the note in bold-face font, you could use
\makeatletter
\def\@cite#1#2{{\normalfont[{\bfseries#1}\if@tempswa , #2\fi]}}
\makeatother
• or, taking a cue from \eqref in amsmath, {\textup{[{\bfseries#1...}]}} – barbara beeton Sep 2 '13 at 13:41
|
{}
|
## Hiroshima Mathematical Journal
### On unicity of meromorphic functions when two differential polynomials share one value
Chao Meng
#### Abstract
In this article, we deal with the uniqueness problems of meromorphic functions concerning differential polynomials and prove the following result: Let $f$ and $g$ be two nonconstant meromorphic functions and let $n(\geq 14)$ be an integer such that $n+1$ is not divisible by $3$. If $f^{n}(f^{3}-1)f'$ and $g^{n}(g^{3}-1)g'$ share $(1,2)$ or $(1,2)"$, then $f\equiv g$. If $\overline{E}_{4)}(1,f^{n}(f^{3}-1)f')=\overline{E}_{4)}(1,g^{n}(g^{3}-1)g')$ and $E_{2)}(1,f^{n}(f^{3}-1)f')=E_{2)}(1,g^{n}(g^{3}-1)g')$, then $f\equiv g$.
#### Article information
Source
Hiroshima Math. J., Volume 39, Number 2 (2009), 163-179.
Dates
First available in Project Euclid: 31 July 2009
Permanent link to this document
https://projecteuclid.org/euclid.hmj/1249046335
Digital Object Identifier
doi:10.32917/hmj/1249046335
Mathematical Reviews number (MathSciNet)
MR2543648
Zentralblatt MATH identifier
1182.30051
Subjects
Primary: 30D35: Distribution of values, Nevanlinna theory
#### Citation
Meng, Chao. On unicity of meromorphic functions when two differential polynomials share one value. Hiroshima Math. J. 39 (2009), no. 2, 163--179. doi:10.32917/hmj/1249046335. https://projecteuclid.org/euclid.hmj/1249046335
|
{}
|
# Why are there automatically generated text boxes on top of the already existing text in my pdfs files? [closed]
Hello,
I do not know why, but with some PDF files, LibreOffice generates small text boxes everywhere on the document, which makes it very blurry (since the boxes do not fit the text already there). How can I get rid of these boxes in bulk? There must be a more efficient way than deleting them one by one... Can I prevent the automatically generated boxes from appearing in other documents?
|
{}
|
Because our example only had a random (for example, we still assume some overall population mean, a factor for each season of each year. Each level of a factor can have a different linear effect on the value of the dependent variable. To put this example back in our matrix notation, for the $$n_{j}$$ dimensional response $$\mathbf{y_j}$$ for doctor $$j$$ we would have: $$NOTE: With small sample sizes, you might want to look into deriving p-values using the Kenward-Roger or Satterthwaite approximations (for REML models). be thought of as a trade off between these two alternatives. B., Stern, H. S. & Rubin, D. B. For example, we may assume there is This tutorial is part of the Stats from Scratch stream from our online course. We also know that this matrix has value in $$\boldsymbol{\beta}$$, which is the mean. We’ve already hinted that we call these models hierarchical: there’s often an element of scale, or sampling stratification in there. Cholesky factorization $$\mathbf{G} = \mathbf{LDL^{T}}$$).$$. There are “hierarchical linear models” (HLMs) or “multilevel models” out there, but while all HLMs are mixed models, not all mixed models are hierarchical. \begin{bmatrix} We have a response variable, the test score and we are attempting to explain part of the variation in test score through fitting body length as a fixed effect. $$, Because $$\mathbf{G}$$ is a variance-covariance matrix, we know that L1: & Y_{ij} = \beta_{0j} + \beta_{1j}Age_{ij} + \beta_{2j}Married_{ij} + \beta_{3j}Sex_{ij} + \beta_{4j}WBC_{ij} + \beta_{5j}RBC_{ij} + e_{ij} \\ For example, suppose Not ideal! Our site variable is a three-level factor, with sites called a, b and c. The nesting of the site within the mountain range is implicit - our sites are meaningless without being assigned to specific mountain ranges, i.e. Generalized linear mixed models (or GLMMs) are an extension of linearmixed models to allow response variables from different distributions,such as binary responses. 
We are going to work in lme4, so load the package (or use install.packages("lme4") if you don't have it on your computer). Ecological and biological data are often complex and messy, and observations frequently come in groups: linear mixed-effects models are extensions of linear regression models for data that are collected and summarized in groups. In statistics, a generalized linear mixed model (GLMM) is an extension of the generalized linear model (GLM) in which the linear predictor contains random effects in addition to the usual fixed effects; GLMMs also inherit from GLMs the idea of extending linear mixed models to non-normal data. Hopefully, our next few examples will help you make sense of how and why these models are used. Plotting the dragon data by mountain range confirms that our observations from within each of the ranges aren't independent. You should use maximum likelihood (ML) when comparing models with different fixed effects, because REML estimates depend on the fixed-effects specification; that's why we refit our full and reduced models with REML = FALSE before comparing them. On notation: a $$q \times q$$ random-effects covariance matrix $$\mathbf{G}$$ has $$\frac{q(q+1)}{2}$$ unique elements, and $$\sigma^2_{\varepsilon}$$ is the residual variance. Each doctor-specific coefficient $$\beta_{pj}$$ can be represented as a combination of a mean estimate for that parameter, $$\gamma_{p0}$$, and a random effect for that doctor, $$u_{pj}$$. Because $$\mathbf{Z}$$ is so big, we will not write out its entries. Two last practical notes: don't just put all possible variables into the model, and when comparing candidate models remember that AICc corrects for the bias created by small sample sizes when estimating AIC.
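The ML-versus-REML comparison described above can be sketched in lme4 like this. This is a minimal sketch: the dragons data frame and the model terms are hypothetical stand-ins following this tutorial's example, not code from the original.

```r
library(lme4)

# Refit both models with maximum likelihood (REML = FALSE) before
# comparing models that differ in their fixed effects.
full.lmm    <- lmer(testScore ~ bodyLength2 + (1|mountainRange),
                    data = dragons, REML = FALSE)
reduced.lmm <- lmer(testScore ~ 1 + (1|mountainRange),
                    data = dragons, REML = FALSE)

# Likelihood ratio test between the nested models
anova(reduced.lmm, full.lmm)

# For reporting variance components, refit the chosen model with
# REML (the lmer default), which is less biased for variances.
final.lmm <- lmer(testScore ~ bodyLength2 + (1|mountainRange),
                  data = dragons)
```

Note that `anova()` on two lmer fits will itself refit with ML if needed, but setting `REML = FALSE` explicitly makes the intent clear.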
Random effects (factors) can be crossed or nested: it depends on the relationship between the variables. Because our dragons can fly, we could potentially observe every dragon in every mountain range (fully crossed) or at least observe some dragons across some of the mountain ranges (partially crossed). For now, just think of random effects as the grouping variables. In many cases, the same variable could be considered either a random or a fixed effect (and sometimes even both at the same time!); it depends on the question. In the doctor example, the intercept $$\beta_{0j}$$ is allowed to vary across doctors while the other $$\beta_{pj}$$ are constant across doctors; within a given doctor, patients are more similar to one another, so they are not independent. One alternative would be to run many separate analyses and fit a regression for each of the mountain ranges; another, which we try first, is to add mountain range as a fixed effect to our basic.lm. Once mountain range is accounted for, body length is no longer significant. This is really the same as in linear regression; there is just a little bit more code to get through. Note that in our previous models we skipped setting REML and just left it at its default. To model sites nested within mountain ranges explicitly, you can also write (1|mountainRange) + (1|mountainRange:site). We are also happy to discuss possible collaborations, so get in touch at ourcodingclub(at)gmail.com.
The level-1 equation adds subscripts to the parameters to index the groups directly, and rather than modelling $$\mathbf{G}$$ itself we estimate parameters $$\boldsymbol{\theta}$$ (e.g., a triangular Cholesky factor) that keep it positive definite; with only a random intercept, $$\mathbf{G}$$ is just a $$1 \times 1$$ matrix, the variance of that intercept. We are going to focus on a fictional study system, dragons, so that we don't get too distracted by the specifics of this example. Grouped sampling is everywhere: students could be sampled from within classrooms, or patients from within doctors. Take our fertilisation experiment example again: let's say you have 50 seedlings in each bed, with 10 control and 10 experimental beds, and on each plant you measure the length of 5 leaves. You could therefore add a random effect structure that accounts for this nesting: leafLength ~ treatment + (1|Bed/Plant/Leaf). If you only have two or three levels in a grouping factor, the model will struggle to partition the variance; it will give you an output, but not necessarily one you can trust. Additionally, just because something is non-significant doesn't necessarily mean you should always get rid of it. You will inevitably look for a way to assess your model, so here are a few options for hypothesis testing in linear mixed models (LMMs), from worst to best: Wald Z-tests; Wald t-tests (but LMMs need to be balanced and nested); likelihood ratio tests (via anova() or drop1()); MCMC or parametric bootstrap confidence intervals. On estimation, ML estimates of variance components are known to be biased; with REML being usually less biased, REML estimates of variance components are generally preferred.
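The fertilisation example above might be coded as follows. The seedlings data frame and its column names are hypothetical; only the nesting formula comes from the text.

```r
library(lme4)

# Leaves nested in plants nested in beds; treatment is the fixed effect.
# (1|Bed/Plant/Leaf) expands to
# (1|Bed) + (1|Bed:Plant) + (1|Bed:Plant:Leaf).
leaf.m <- lmer(leafLength ~ treatment + (1|Bed/Plant/Leaf),
               data = seedlings)

# A likelihood ratio test for the treatment effect via drop1():
# refit with ML first, since we are comparing fixed-effects structures.
leaf.ml <- lmer(leafLength ~ treatment + (1|Bed/Plant/Leaf),
                data = seedlings, REML = FALSE)
drop1(leaf.ml, test = "Chisq")
```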
Imagine we measured the mass of our dragons over their lifespans (let's say 100 years). We are not really interested in the effect of each specific mountain range on the test score: we hope our model will also be generalisable to dragons from other mountain ranges. Fitting a separate model per group is "noisy" in that the estimates from each model are not based on much data, and it doesn't scale: with two parameters per model, three sites and eight mountain ranges, that's 48 parameter estimates (2 x 3 x 8 = 48), and in the fertilisation example that's 1000 seedlings altogether. Most of you are probably going to be predominantly interested in your fixed effects, so let's start here. In the doctor data, we fit one random intercept ($$q = 1$$) for each of the $$J = 407$$ doctors, so the random-effects design matrix contains mostly zeros and is always sparse. If we went out collecting once in each season across 3 years, we might then want to fit year as a random effect to account for temporal variation: maybe some years were affected by drought, resources were scarce, and dragon mass was negatively impacted. By contrast, if we wanted to control for the effects of dragon's sex on intelligence, we would fit sex (a two-level factor: male or female) as a fixed, not random, effect. Define your goals and questions and focus on that (for more info on overfitting, check out this tutorial). Finally, the R package simr allows users to calculate power for generalized linear mixed models from the lme4 package; the power calculations are based on Monte Carlo simulations.
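A sketch of such a power analysis with simr, using a hypothetical fitted model from this tutorial's example; the number of simulations is chosen arbitrarily here, and by default the observed effect size is used (see the simr documentation for specifying your own).

```r
library(lme4)
library(simr)

# Fit the model whose power we want to assess
m <- lmer(testScore ~ bodyLength2 + (1|mountainRange), data = dragons)

# Monte Carlo power analysis for the fixed effect; more simulations
# give a more precise power estimate (at the cost of run time).
powerSim(m, nsim = 100)
```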
In the doctor data, $$N = 8525$$ patients were seen by $$J = 407$$ doctors, so the random part of the model is $$\overbrace{\underbrace{\mathbf{Z}}_{8525 \times 407} \quad \underbrace{\boldsymbol{u}}_{407 \times 1}}^{8525 \times 1},$$ or, in general, $$\overbrace{\underbrace{\mathbf{Z}}_{N \times qJ} \quad \underbrace{\boldsymbol{u}}_{qJ \times 1}}^{N \times 1}.$$ With 407 doctors, we have a lot of groups. (Optional) Preparing dummies and/or contrasts: if one or more of your Xs are nominal variables, you need to create dummy variables or contrasts for them; with the usual coding, the kth (reference) level is 0 for all the dummies. Always choose variables based on biology/ecology: I might use model selection to check a couple of non-focal parameters, but I keep the "core" of the model untouched in most cases. When comparing models by AICc, within 5 units they are quite similar; over 10 units difference, you can probably be happy with the model with the lower AICc. For presenting results, I usually tweak the table until I'm happy with it and then export it using type = "latex", but "html" might be more useful for you if you are not a LaTeX user. To sum up the definitions so far: a linear mixed model or linear mixed-effect model (LMM) is an extension of simple linear models that allows both fixed and random effects, and is a method for analysing data that are non-independent, multilevel/hierarchical, longitudinal, or correlated.
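The table export and the AICc comparison above could be sketched like this. The model objects (mixed.lmer, model1, model2) are hypothetical, and AICc here comes from the MuMIn package, one of several packages that provide it.

```r
library(stargazer)
library(MuMIn)

# Export a summary table of the mixed model; switch type to "html"
# if you don't use LaTeX.
stargazer(mixed.lmer, type = "latex", digits = 3)

# Small-sample corrected AIC for two candidate models:
# prefer the lower value, treating differences under ~5 as similar.
AICc(model1, model2)
```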
In broad terms, fixed effects are variables that we expect will have an effect on the dependent/response variable: they're what you call explanatory variables in a standard linear regression, and a fixed effect is a parameter that does not vary. The random effects, by contrast, satisfy $$\boldsymbol{u} \sim \mathcal{N}(0, \mathbf{G})$$, which is read: "u is distributed as normal with mean zero" and covariance $$\mathbf{G}$$. This tutorial is the first of two tutorials that introduce you to these models. Start by loading the data and having a look at them. To make things easier for yourself, code your data properly and avoid implicit nesting. Even with the fixed effects in place, the response variable has some residual variation. Note that the golden rule is that you generally want your random effect to have at least five levels. In the random-intercept case, using this model means that we expect dragons in all mountain ranges to exhibit the same relationship between body length and intelligence (fixed slope), although we acknowledge that some populations may be smarter or dumber to begin with (random intercept). Our question changes slightly here: while we still want to know whether there is an association between dragon's body length and the test score, we now want to know if that association exists after controlling for the variation in mountain ranges. The fitted part of the model is $$\boldsymbol{X\beta} + \boldsymbol{Zu}$$. A crude alternative would be to take the average of all patients within a doctor, but then much of the information is thrown away.
GLMMs provide a broad range of models for the analysis of grouped data, since the differences between groups can be modelled as random effects. Strictly speaking, it's all about making our models representative of our questions and getting better estimates. Keep in mind that the random effect of the mountain range is meant to capture all the influences of mountain ranges on dragon test scores: whether we observed those influences explicitly or not, whether those influences are big or small, etc. Random effects are always categorical, as you can't force R to treat a continuous variable as a random effect. Conditionally on the random effects, the observations are assumed independent, with $$\mathbf{G}$$ the variance-covariance matrix of those random effects. What if you want to visualise how the relationships vary according to different levels of the random effects? ggplot2's stat options are not designed to estimate mixed-effect model objects correctly, so we will use the ggeffects package to help us draw the plots. For more details on how to set up a version-controlled project, please check out our Intro to Github for Version Control tutorial.
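Plotting predictions with ggeffects might look like the following sketch; the model object name is a hypothetical stand-in, and the `type` argument follows the ggeffects documentation (check the accepted values in your installed version).

```r
library(ggeffects)

# Predictions for the fixed effect of body length, marginalised
# over the random effects
pred <- ggpredict(mixed.lmer, terms = "bodyLength2")
plot(pred)

# Or predictions per mountain range, conditioning on the
# random-effect levels
pred.re <- ggpredict(mixed.lmer,
                     terms = c("bodyLength2", "mountainRange"),
                     type = "random")
plot(pred.re)
```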
Beyond the G-side structure for the random effects, there are also R-side structures for the residual variance-covariance matrix. Using one mixed model also means we estimate fewer parameters and avoid the problems with multiple comparisons that we would encounter while using separate regressions. Our outcome, $$\mathbf{y}$$, is a continuous variable. When an estimated effect is smaller than its standard error, the effect, or slope, cannot be distinguished from zero. If you'd like to be able to do more with your model results, for instance process them further, collate model results from multiple models, or plot them, have a look at the broom package. At level 2 of the doctor model, a coefficient with no random effect is written $$\beta_{2j} = \gamma_{20}$$, and $$\boldsymbol{\varepsilon}$$ is the column vector of residuals, that part of $$\mathbf{y}$$ that is not explained by the model. It's important to note that the fixed/random distinction has little to do with the variables themselves, and a lot to do with your research question! The model-selection procedure of Zuur et al. (2009) is a top-down strategy; but at the risk of sounding like a broken record, I think it's best to decide on what your model is based on biology/ecology/data structure rather than to follow model selection blindly. Go to the stream page to find out about the other tutorials that are part of this stream, and we would love to hear your feedback, so please fill out our survey!
Compared with the standard linear model, the mixed-effects approach is the same as the fixed-effects approach except that we treat the grouping variable ('school' in the classic example, mountain range here) as a random factor: mixed-effects models include more than one source of random variation. Okay, so both from the linear model and from the plot, it seems like bigger dragons do better in our intelligence test. We often want to go further and fit a random-slope and random-intercept model, in which each group's line is the fixed effect plus its random deviation: for instance, the relationship for dragons in the Maritime mountain range would have a slope of (-2.91 + 0.67) = -2.24 and an intercept of (20.77 + 51.43) = 72.20. As you probably gather, mixed effects models can be a bit tricky, and often there isn't much consensus on the best way to tackle something within them. For implicit nesting, the fix is a new explicit grouping variable; let's call it sample. Now it's obvious that we have 24 samples (8 mountain ranges x 3 sites) and not just 3: our sample is a 24-level factor and we should use that instead of site in our models, since each site belongs to a specific mountain range.
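Creating the explicit sample variable and using it in the model could look like this, following the tutorial's dragons example (column names assumed from the text):

```r
library(lme4)

# Make the nesting explicit: one factor level per
# mountain-range/site combination. The `:` operator on two
# factors returns their interaction (crossed) factor.
dragons <- within(dragons, sample <- factor(mountainRange:site))

# Random intercepts for mountain ranges and for sites within them
mixed.nested <- lmer(testScore ~ bodyLength2 +
                       (1|mountainRange) + (1|sample),
                     data = dragons)
summary(mixed.nested)
```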
The core of mixed models is that they account for non-independence in the data. These links have neat demonstrations and explanations: R-bloggers: Making sense of random effects; The Analysis Factor: Understanding random effects in mixed models; Bodo Winter: A very basic tutorial for performing linear mixed effect analyses. You do, however, need to assume that no other violations occur: if there is additional variance heterogeneity, such as that brought about by very skewed response variables, you may need to make adjustments. Categorical predictors should be selected as factors in the model. Have a look at the distribution of the response variable, and note that it is good practice to standardise your explanatory variables before proceeding so that they have a mean of zero ("centering") and a standard deviation of one ("scaling"). A crude alternative removes within-group differences by averaging all samples within each doctor, but that throws the within-group information away. If you have already signed up for our course and you are ready to take the quiz, go to our quiz centre.
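The standardising step above can be sketched in one line of base R, using the tutorial's dragons example (column names assumed from the text):

```r
# Centre (mean 0) and scale (sd 1) the explanatory variable
dragons$bodyLength2 <- scale(dragons$bodyLength,
                             center = TRUE, scale = TRUE)

# Quick look at the distribution of the response variable
hist(dragons$testScore)
```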
This page briefly introduces linear mixed models (LMMs) as a method for analysing grouped data. Ignoring the grouping structure amounts to pseudoreplication, massively increasing your sample size by using non-independent data; to avoid future confusion, we should create a new variable that is explicitly nested. A few notes on the process of model selection appear below. We are not interested in quantifying test scores for each specific mountain range: we just want to know whether body length affects test scores, and we want to simply control for the variation coming from mountain ranges. One way to analyse this data would be to fit a linear model to all our data, ignoring the sites and the mountain ranges for now. Note that if we added a random slope, the random effects would be parameters that are themselves random variables. This tutorial has been built on the tutorial written by Liam Bailey, who has been kind enough to let me use chunks of his script as well as some of the data. I might update this tutorial in the future, and if I do, the latest version will be on my website.
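The naive pooled model described above, ignoring sites and mountain ranges, would be (names follow the tutorial's dragons example):

```r
# Fit one linear model to all the data, ignoring grouping structure
basic.lm <- lm(testScore ~ bodyLength2, data = dragons)
summary(basic.lm)

# Standard diagnostic: residuals vs fitted values
plot(basic.lm, which = 1)
```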
Let's talk a little about the difference between fixed and random effects first. A lot of the time, we are not specifically interested in the random factors' impact on the response variable, but we know that they might be influencing the patterns we see; and if your random effects are there to deal with pseudoreplication, then it doesn't really matter whether they are "significant" or not: they are part of your design and have to be included. The general linear model framework includes multiple linear regression, as well as ANOVA and ANCOVA (with fixed effects only); mathematically, we begin with the equation for a straight line. In statisticalese, we write $$\hat{Y} = \beta_0 + \beta_1 X,$$ read "the predicted value of the variable ($$\hat{Y}$$) equals a constant or intercept ($$\beta_0$$) plus a weight or slope ($$\beta_1$$) times $$X$$". In our study, we sampled individuals with a range of body lengths across three sites in eight different mountain ranges. We only need to make one change to our model to allow for random slopes as well as intercepts, and that's adding the fixed variable into the random-effect brackets. Here we're saying: let's model the intelligence of dragons as a function of body length, knowing that populations have different intelligence baselines and that the relationship may vary among populations. Sample sizes might leave something to be desired too, especially if we are trying to fit complicated models with many parameters; but this tutorial will be here to help you along when you start using mixed models with your own data and need a bit more context.
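Adding the fixed variable into the random-effect brackets gives the random-slope model; a sketch using the tutorial's dragons example (object name assumed):

```r
library(lme4)

# Random intercept AND random slope of body length for each
# mountain range
mixed.ranslope <- lmer(testScore ~ bodyLength2 +
                         (1 + bodyLength2|mountainRange),
                       data = dragons)

# Group-specific intercepts and slopes
# (fixed effect plus each range's random deviation)
coef(mixed.ranslope)
```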
This tutorial is based on personal learning experience and focuses on application rather than theory. The $$\mathbf{G}$$ terminology is common in the mixed-models literature. Since our dragons can fly, it's easy to imagine that we might observe the same dragon across different mountain ranges, but also that we might not see all the dragons visiting all of the mountain ranges; that is what makes the random effects crossed rather than nested. Seasons behave similarly: you just know that all observations from spring of year 3 may be more similar to each other because they experienced the same environmental quirks, rather than because they're responding to your treatment. The coding bit is actually the (relatively) easy part here. Our question gets adjusted slightly again: is there an association between body length and intelligence in dragons after controlling for variation in mountain ranges and sites within mountain ranges? As before, categorical predictors should be selected as factors in the model.
One can see from the formulation of the model that the linear mixed model assumes that the outcome is normally distributed. What about the crossed effects we mentioned earlier? Random factors that do not represent a strict hierarchy, usually grouping factors like populations, species, or the sites where we collect the data, can be fitted as (partially) crossed random effects, so mixed models let us use all the data while accounting for non-independence instead of pseudoreplicating. Fully worked sampling designs also get big quickly: 5 leaves x 50 plants x 20 beds x 4 seasons x 3 years is 60,000 measurements. Before modelling, the big questions are: what are you trying to do, what is your response variable, which factors are fixed and which are random, and how are they nested or crossed? You have now fitted random-intercept and random-slope mixed models, you know how to account for hierarchical and crossed random effects, and you can present the results in a nicer table with the stargazer package. Well done for getting here! To work along, download the files to your computer and start a version-controlled project in RStudio; you can grab the R script here and the data as well. You are welcome to use and further develop our tutorials; please give credit to Coding Club by linking to our website. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Thanks where thanks are due. There we are - last updated 10th September 2019 by Sandra.
Lettuce Tower Diy, Wall Mounted Gas Fires With Flue, How To Find The Distance Between Two Points, Bash Echo Array Index, Torino Capitale D'italia, Kawela Bay Snorkeling, Ernakulam Places List, Adventure Activities In South Korea,
|
{}
|
# Random Variable
Instance of: measurable function
AKA: random quantity, aleatory variable, or stochastic variable
Distinct from:
English: A variable whose values depend on outcomes in a sample space, with a probability distribution describing which values (or sets of values) are more likely to occur.
Formalization:
A random variable is just a function mapping outcomes to some measurement space: $X:\Omega \to E$
The measurement space is usually the reals, $$\mathbb{R}$$. The outcomes formally come from a probability space, defined as a triple $$(\Omega, \mathcal{F}, P)$$, where $$\mathcal{F}$$ is a collection of events (sets of one or more possible outcomes) and $$P$$ maps each event to a probability between 0 and 1.
$P(X \in S)=P(\{\omega\in \Omega \mid X(\omega) \in S\})$
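As a concrete illustration of this definition, here is a minimal Python sketch (illustrative, not from the cited sources) treating a random variable as a plain function on a finite sample space:

```python
# Sample space for two fair coin flips; each outcome is equally likely.
omega = ["HH", "HT", "TH", "TT"]

# The random variable X: Omega -> R counts the number of heads.
def X(w):
    return w.count("H")

# P(X in S) = P({w in Omega : X(w) in S}), here with uniform P.
def prob(S):
    return sum(1 for w in omega if X(w) in S) / len(omega)

print(prob({2}))     # P(X = 2)  -> 0.25
print(prob({1, 2}))  # P(X >= 1) -> 0.75
```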
Cites: Wikipedia ; Wikidata ; Wolfram
Code
Examples:
|
{}
|
# Node Bootstrap
Node Bootstrap is the process of how a node securely downloads linear chain blocks or DAG (Directed Acyclic Graph) chain vertices to recreate the latest state of the chain locally.
Bootstrap must guarantee that the local state of a node is in sync with the state of other valid nodes. Once bootstrap is completed, a node has the latest state of the chain and can verify new incoming transactions and reach consensus with other nodes, collectively moving forward the chains.
Bootstrapping a node is a multi-step process which requires downloading the chains required by the Primary Network (that is, the C-Chain, P-Chain, and X-Chain), as well as the chains required by any additional Subnets that the node explicitly tracks.
This document covers the high-level technical details of how bootstrapping works. This document glosses over some specifics, but the AvalancheGo codebase is open-source and is available for curious-minded readers to learn more.
## A Note On Linear Chains and DAGs
Avalanche supports both linear chains made up of blocks and DAG chains made up of vertices.
While consensus logic differs between linear chains and DAG chains, the bootstrap logic for the two is similar enough that it can be described without specifying which kind of chain is being bootstrapped.
Blocks and vertices at their core are simply ordered lists of transactions and can be thought of as the same abstraction - containers.
## Validators and Where to Find Them
Bootstrapping is all about downloading all previously accepted containers securely so a node can have the latest correct state of the chain. A node can't arbitrarily trust any source - a malicious actor could provide malicious blocks, corrupting the bootstrapping node's local state, and making it impossible for the node to correctly validate the network and reach consensus with other correct nodes.
What's the most reliable source of information in the Avalanche ecosystem? It's a large enough majority of validators. Therefore, the first step of bootstrapping is finding a sufficient number of validators to download containers from.
The P-Chain is responsible for all platform-level operations, including staking events that modify a Subnet's validator set. Whenever any chain (aside from the P-Chain itself) bootstraps, it requests an up-to-date validator set for that Subnet (Primary Network is a Subnet too). Once the Subnet's current validator set is known, the node can securely download containers from these validators to bootstrap the chain.
There is a caveat here: the validator set must be up-to-date. If a bootstrapping node's validator set is stale, the node may incorrectly believe that some nodes are still validators when their validation period has already expired. A node might unknowingly end up requesting blocks from non-validators which respond with malicious blocks that aren't safe to download.
For this reason, every Avalanche node must fully bootstrap the P-chain first before moving on to the other Primary Network chains and other Subnets to guarantee that their validator sets are up-to-date.
What about the P-chain? The P-chain can't ever have an up-to-date validator set before completing its bootstrap. To solve this chicken-and-egg situation the Avalanche Foundation maintains a trusted default set of validators called beacons (but users are free to configure their own). Beacon Node-IDs and IP addresses are listed in the AvalancheGo codebase. Every node has the beacon list available from the start and can reach out to them as soon as it starts.
Validators are the only sources of truth for a blockchain. Validator availability is so key to the bootstrapping process that bootstrapping is blocked until the node establishes a sufficient number of secure connections to validators. If the node fails to reach enough of them within a given period of time, it shuts down, as no operation can be carried out safely.
## Bootstrapping the Blockchain
Once a node is able to discover and connect to validator and beacon nodes, it's able to start bootstrapping the blockchain by downloading the individual containers.
One common misconception is that Avalanche blockchains are bootstrapped by retrieving containers starting at genesis and working up to the currently accepted frontier.
Instead, containers are downloaded from the accepted frontier downwards to genesis, and then their corresponding state transitions are executed upwards from genesis to the accepted frontier. The accepted frontier is the last accepted block for linear chains and the accepted vertices for DAGs.
Why can't nodes simply download blocks in chronological order, starting from genesis upwards? The reason is efficiency: if nodes downloaded containers upwards they would only get a safety guarantee by polling a majority of validators for every single container. That's a lot of network traffic for a single container, and a node would still need to do that for each container in the chain.
Instead, if a node starts by securely retrieving the accepted frontier from a majority of honest nodes and then recursively fetches the parent containers from the accepted frontier down to genesis, it can cheaply check that containers are correct just by verifying their IDs. Each Avalanche container has the IDs of its parents (one block parent for linear chains, possibly multiple parents for DAGs) and an ID's integrity can be guaranteed cryptographically.
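A toy sketch of this downward, ID-verified fetch (illustrative Python only, not AvalancheGo code; container IDs are modeled as content hashes and the network as a local dict):

```python
import hashlib
import json

def cid(container):
    """Container ID: a cryptographic hash of the container's contents."""
    return hashlib.sha256(json.dumps(container, sort_keys=True).encode()).hexdigest()

def fetch_chain(store, frontier_id):
    """Fetch from the accepted frontier down to genesis, verifying each ID."""
    chain = []
    expected = frontier_id
    while expected is not None:
        container = store[expected]        # a network fetch in reality
        assert cid(container) == expected  # cheap integrity check via the ID
        chain.append(container)
        expected = container["parent"]     # parent ID embedded in the container
    return list(reversed(chain))           # genesis first: ready to execute upward

# A tiny three-container chain.
genesis = {"parent": None, "txs": ["g"]}
b1 = {"parent": cid(genesis), "txs": ["a"]}
b2 = {"parent": cid(b1), "txs": ["b"]}
store = {cid(c): c for c in (genesis, b1, b2)}

chain = fetch_chain(store, cid(b2))
print([c["txs"] for c in chain])  # [['g'], ['a'], ['b']]
```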
Let's dive deeper into the two bootstrap phases - frontier retrieval and container execution.
### Frontier Retrieval
The current frontier is retrieved by requesting it from validator or beacon nodes. Avalanche bootstrap is designed to be robust: it must be able to make progress even in the presence of slow validators or network failures. This process needs to be fault-tolerant, since bootstrapping may take quite some time to complete and network connections can be unreliable.
Bootstrap starts once a node has connected to a sufficient majority of validator stake; specifically, a node can begin bootstrapping when it has connected to at least $75\%$ of the total validator stake.
Seeders are the first set of peers that a node reaches out to when trying to figure out the current frontier. A subset of seeders is randomly sampled from the validator set. Seeders might be slow and provide a stale frontier, or be malicious and return malicious container IDs, but they always provide an initial set of candidate frontiers to work with.
Once a node has received the candidate frontiers from its seeders, it polls every network validator to vet the candidates. It sends the list of candidate frontiers received from the seeders to each validator, asking whether they know about these frontiers. Each validator responds with the subset of candidates it knows, regardless of how up-to-date or stale the containers are; returning containers irrespective of their age ensures that bootstrap works even in the presence of a stale frontier.
Frontier retrieval is completed when at least one of the candidate frontiers is supported by at least $50\%$ of total validator stake. Multiple candidate frontiers may be supported by a majority of stake; once at least one is, the next phase, container fetching, starts.
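The stake-weighted tally can be pictured with a small sketch (hypothetical Python; validator names, stakes, and the exact threshold handling are illustrative):

```python
# A candidate frontier is accepted once the validators that know about it
# hold at least `threshold` of the total stake.
def supported_frontiers(candidates, responses, stake, threshold=0.5):
    total = sum(stake.values())
    supported = []
    for frontier in candidates:
        # Sum the stake of every validator whose response includes this frontier.
        weight = sum(stake[v] for v, known in responses.items() if frontier in known)
        if weight >= threshold * total:
            supported.append(frontier)
    return supported

stake = {"v1": 40, "v2": 35, "v3": 25}
responses = {"v1": {"F1"}, "v2": {"F1", "F2"}, "v3": {"F2"}}
print(supported_frontiers(["F1", "F2"], responses, stake))  # ['F1', 'F2']
```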
At any point in these steps a network issue may occur, preventing a node from retrieving or validating frontiers. If this occurs, bootstrap restarts by sampling a new set of seeders and repeating the bootstrapping process, optimistically assuming that the network issue will go away.
### Containers Execution
Once a node has at least one valid frontier, it starts downloading parent containers for each frontier. If it's the first time the node is running, it won't know about any containers and will try fetching all parent containers recursively from the accepted frontier down to genesis (unless state sync is enabled). If bootstrap had already run previously, some containers are already available locally and the node will stop as soon as it finds a known one.
A node first just fetches and parses containers. Once the chain is complete, the node executes them in chronological order starting from the earliest downloaded container to the accepted frontier. This allows the node to rebuild the full chain state and to eventually be in sync with the rest of the network.
## When Does Bootstrapping Finish?
You've seen how bootstrap works for a single chain. However, a node must bootstrap the chains in the Primary Network as well as the chains in each Subnet it tracks. This begs the questions - when are these chains bootstrapped? When is a node done bootstrapping?
The P-chain is always the first to bootstrap before any other chain. Once the P-Chain has finished, all other chains start bootstrapping in parallel, connecting to their own validators independently of one another.
A node completes bootstrapping a Subnet once all of its corresponding chains have completed bootstrapping. Because the Primary Network is a special case of Subnet that includes the entire network, this applies to it as well as any other manually tracked Subnets.
Note that Subnets bootstrap independently of one another; even if one Subnet has finished bootstrapping and is validating new transactions and adding new containers, other Subnets may still be bootstrapping in parallel.
Within a single Subnet, however, bootstrapping isn't done until the last of its chains completes. A single chain can effectively stall a node from finishing a Subnet's bootstrap if it has a sufficiently long history or if each operation is complex and time-consuming. Even worse, other Subnet validators continuously accept new transactions and add new containers on top of the previously known frontier, so a node that is slow to bootstrap can continuously fall behind the rest of the network.
Nodes mitigate this by restarting bootstrap for any chain that is blocked waiting for the remaining Subnet chains to finish. Those chains repeat the frontier retrieval and container downloading phases to stay up-to-date with the Subnet's ever-moving current frontier until the slowest chain has completed bootstrapping.
Once this is complete, a node is finally ready to validate the network.
## State Sync
The full node bootstrap process is long, and gets longer and longer over time as more and more containers are accepted. Nodes need to bootstrap a chain by reconstructing the full chain state locally - but downloading and executing each container isn't the only way to do this.
Starting from AvalancheGo version 1.7.11, nodes can use state sync to drastically cut down bootstrapping time on the C-Chain. Instead of executing each block, state sync uses cryptographic techniques to download and verify just the state associated with the current frontier. State synced nodes can't serve every C-chain block ever historically accepted, but they can safely retrieve the full C-chain state needed to validate in a much shorter time.
State sync is currently only available for the C-chain. The P-chain and X-chain currently bootstrap by downloading all blocks. Note that irrespective of the bootstrap method used (including state sync), each chain is still blocked on all other chains in its Subnet completing their bootstrap before continuing into normal operation.
## Conclusions and FAQ
If you got this far, you've hopefully gotten a better idea of what's going on when your node bootstraps. Here's a few frequently asked questions about bootstrapping.
### How Can I Get the ETA for Node Bootstrap?
During bootstrap, AvalancheGo periodically logs fetch and execution progress, including an ETA, for each chain:

```
[02-16|17:31:42.950] INFO <P Chain> bootstrap/bootstrapper.go:494 fetching blocks {"numFetchedBlocks": 5000, "numTotalBlocks": 101357, "eta": "2m52s"}
[02-16|17:31:58.110] INFO <P Chain> bootstrap/bootstrapper.go:494 fetching blocks {"numFetchedBlocks": 10000, "numTotalBlocks": 101357, "eta": "3m40s"}
[02-16|17:32:04.554] INFO <P Chain> bootstrap/bootstrapper.go:494 fetching blocks {"numFetchedBlocks": 15000, "numTotalBlocks": 101357, "eta": "2m56s"}
...
[02-16|17:36:52.404] INFO <P Chain> queue/jobs.go:203 executing operations {"numExecuted": 17881, "numToExecute": 101357, "eta": "2m20s"}
[02-16|17:37:22.467] INFO <P Chain> queue/jobs.go:203 executing operations {"numExecuted": 35009, "numToExecute": 101357, "eta": "1m54s"}
[02-16|17:37:52.468] INFO <P Chain> queue/jobs.go:203 executing operations {"numExecuted": 52713, "numToExecute": 101357, "eta": "1m23s"}
```
|
{}
|
Find linear transformation whose kernel is given
Question: given $$V=C^∞(-∞,∞)$$ i.e the vector space of real-valued continuous functions with continuous derivatives of all orders on $$(-∞,∞)$$ and $$W=F(-∞,∞)$$ the vector space of real-valued functions defined on $$(-∞,∞)$$, find a linear transformation $$T:V\rightarrow W$$ whose kernel is $$P_3$$ (the space of polynomials of degree $$≤3$$)
My attempt: Since $$\ker(T)=\{p(x)\in V : T(p(x))=0\}=P_3$$ and $$\dim(P_3)=4$$, my intention is if we define $$T:V\rightarrow W$$ by $$T(f(x))=f^{(4)}(x)$$ where $$f^{(4)}(x)$$ denotes fourth derivative of $$f(x)$$ at $$x$$, then we are done, i.e. we get $$\ker T=P_3$$
But on the other hand, I wondered: does $$f^{(4)}(x)=0$$ imply that $$f(x)$$ is a polynomial of degree $$≤3$$? How? That is, are the only smooth functions whose fourth derivative is identically $$0$$ the polynomials of degree $$≤3$$?
Please help me: this is my idea for $$T$$, but I don't know how to show that it works.
• Hint: Here is a fact that can be used to answer a simpler version of your question. Let $f$ continuously differentiable. Then $f' = 0$ (as functions) iff $f$ is a constant function. – AnonymousCoward Nov 20 '18 at 20:18
• Sir, thanks for the reply. Can you tell me then how we can prove: "if $f$ is infinitely differentiable then $f^{(n+1)}=0$ iff $f$ is a polynomial of degree $≤n$"? – Akash Patalwanshi Nov 20 '18 at 20:22
• Understand the proof of the fact I told you first, then will be able to solve your problem. – AnonymousCoward Nov 20 '18 at 20:40
• @AnonymousCoward sir, using mean value theorem we can easily prove that $f'=0$ iff $f$ is constant. Sir how does it help me to prove the advanced version? – Akash Patalwanshi Nov 20 '18 at 21:05
• The next step is to use the same idea to prove that for $f$ continuously differentiable: $f'$ is a constant function iff $f$ is a linear function ($f(x) = ax + b$). – AnonymousCoward Nov 21 '18 at 9:54
Recall that if $$f'(x)$$ is a polynomial of degree $$n$$, then $$f(x)$$ is a polynomial of degree $$n+1$$, by antidifferentiation with the power rule. From this it follows inductively that if $$f^{(k)}(x)$$ is a polynomial of degree $$n$$, then $$f(x)$$ is a polynomial of degree $$n+k$$. Now, if $$f^{(n)}(x)$$ is identically zero, then $$f^{(n-1)}(x)$$ is constant and thus a polynomial of degree zero, from which it follows that $$f(x)$$ is a polynomial of degree at most $$n-1$$. Applying the case $$n=4$$ gives the desired result.
• Sir, I think there is some mistake in your solution? take $f(x)=x^4$ then $f'(x)=4x^3$, $f"(x)=12x^2$, $f^{3}(x)=24x$....Now, clearly here $f"$ is polynomial of degree $2$ but $f(x)$ is not polynomial of degree $2+2-1$. – Akash Patalwanshi Nov 20 '18 at 21:35
|
{}
|
# Green's theorem
by boneill3
Tags: green, theorem
P: 127
1. The problem statement, all variables and given/known data
Use Green's theorem to calculate
$\int_{c}(e^{x}+y^{2})dx+(e^{y}+x^{2})dy$
where $c$ is the boundary of the region between $y=x^2$ and $y=x$.
2. Relevant equations
Green's theorem: $\int_{c}f(x,y)dx+g(x,y)dy= \int_{R}\int \left(\frac{\partial g}{\partial x}-\frac{\partial f}{\partial y}\right)dA$
3. The attempt at a solution
$\frac{\partial g}{\partial x}= 2x$
$\frac{\partial f}{\partial y}= 2y$
Calculate the integral
$\int_{0}^{x}\int_{0}^{\sqrt{y}}(2x-2y)\text{ }dy dx =\frac{x^2}{2}-\frac{4x^{5/2}}{5}$
Does this look right? regards
P: 127 Thanks
$\int_{0}^{1}\int_{x^{2}}^{x}(2x-2y)\text{ }dy dx =\frac{1}{30}$
With double integrals, do the limits on the outer integral (e.g. 0 to 1) always have to be constants? regards
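The value $\frac{1}{30}$ can be sanity-checked numerically, e.g. with a midpoint-rule approximation of the double integral over the region $x^2 \le y \le x$ (illustrative Python):

```python
# Midpoint-rule approximation of the double integral of (2x - 2y)
# over the region x^2 <= y <= x for x in [0, 1].
n, m = 1000, 100
dx = 1.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * dx
    y_lo, y_hi = x * x, x          # inner limits depend on x
    dy = (y_hi - y_lo) / m
    for j in range(m):
        y = y_lo + (j + 0.5) * dy
        total += (2 * x - 2 * y) * dx * dy
print(total)  # ~ 1/30 = 0.0333...
```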
|
{}
|
Matrix Products with Angles
Verify that $$\left[\begin{matrix}\cos32 & -\sin32\\ \sin32 & \cos32 \end{matrix}\right]\cdot\left[\begin{matrix}\cos40 & -\sin40\\ \sin40 & \cos40 \end{matrix}\right]=\left[\begin{matrix}\cos72 & -\sin72\\ \sin72 & \cos72 \end{matrix}\right]$$ and explain why this result could have been expected.
I have verified that the product matrix is correct using the matrix operation techniques, but I am not sure as to why the result is expected.
The matrix
$$R\left(\theta\right)=\begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix}$$
when applied to a vector, rotates it by an angle $\theta$. Thus it is expected that a rotation by $\theta_{1}$, followed by a rotation by $\theta_{2}$, is equivalent to a single rotation by $\theta_{1}+\theta_{2}$. In matrix notation this translates into
$$R\left(\theta_{1}\right)R\left(\theta_{2}\right)=R\left(\theta_{1}+\theta_{2}\right)$$
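A quick numerical check of this composition rule for the angles in the question (illustrative Python, angles taken in degrees):

```python
import math

def R(deg):
    """2x2 rotation matrix for an angle given in degrees."""
    t = math.radians(deg)
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

lhs = matmul(R(32), R(40))
rhs = R(72)
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
          for i in range(2) for j in range(2)))  # True
```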
• I don't quite understand what is meant by $\theta_1$ and $\theta_2$? – geo_freak Dec 17 '17 at 19:05
• @geo_freak Just two angles. – eranreches Dec 17 '17 at 19:07
• [+1] Thorough explanation. In your last sentence, you need a translation to explain a rotation ... :) – Jean Marie Dec 17 '17 at 21:08
• Maybe the OP could have a look at (math.stackexchange.com/questions/363652/…) – Jean Marie Dec 17 '17 at 21:11
In addition to the answer above, I think it deserves a little explanation of why: $R\left(\theta_{1}\right)R\left(\theta_{2}\right)=R\left(\theta_{1}+\theta_{2}\right)$
By using matrix multiplication we get:
$$R\left(\theta_{1}\right)R\left(\theta_{2}\right)=\begin{pmatrix}\cos\theta_{1}&-\sin\theta_{1}\\\sin\theta_{1}&\cos\theta_{1}\end{pmatrix}\cdot\begin{pmatrix}\cos\theta_{2}&-\sin\theta_{2}\\\sin\theta_{2}&\cos\theta_{2}\end{pmatrix} = \begin{pmatrix}\cos\theta_{1}\cos\theta_{2}-\sin\theta_{1}\sin\theta_{2} & -\left(\sin\theta_{1}\cos\theta_{2}+\cos\theta_{1}\sin\theta_{2}\right)\\ \sin\theta_{1}\cos\theta_{2}+\cos\theta_{1}\sin\theta_{2} & -\sin\theta_{1}\sin\theta_{2}+\cos\theta_{1}\cos\theta_{2} \end{pmatrix}$$
We will use the trigonometric identities:
$$\left(1\right) \cos\left(\theta_{1}+\theta_{2}\right)=\cos\theta_{1}\cos\theta_{2}-\sin\theta_{1}\sin\theta_{2}\\ \left(2\right) \sin\left(\theta_{1}+\theta_{2}\right)=\sin\theta_{1}\cos\theta_{2}+\cos\theta_{1}\sin\theta_{2}$$ And we obtain: $$R\left(\theta_{1}\right)R\left(\theta_{2}\right)=\begin{pmatrix}\cos\theta_{1}\cos\theta_{2}-\sin\theta_{1}\sin\theta_{2} & -\left(\sin\theta_{1}\cos\theta_{2}+\cos\theta_{1}\sin\theta_{2}\right)\\ \sin\theta_{1}\cos\theta_{2}+\cos\theta_{1}\sin\theta_{2} & -\sin\theta_{1}\sin\theta_{2}+\cos\theta_{1}\cos\theta_{2} \end{pmatrix}=\begin{pmatrix}\cos\left(\theta_{1}+\theta_{2}\right) & -\sin\left(\theta_{1}+\theta_{2}\right)\\ \sin\left(\theta_{1}+\theta_{2}\right) & \cos\left(\theta_{1}+\theta_{2}\right) \end{pmatrix} = R\left(\theta_{1}+\theta_{2}\right)$$ As said in the answer above.
• I think this has been done by the OP. What is lacking him/her (see the last sentence of the question) is to know in which way the result could have been anticipated. There, a geometrical understanding is necessary (see the answer of @erenreches). – Jean Marie Dec 17 '17 at 21:15
• It might be obvious for him, but for future reference, other people might come and look for the same answer and be puzzled why $R\left(\theta_{1}\right)R\left(\theta_{2}\right)=R\left(\theta_{1}+\theta_{2}\right)$. So it won't hurt to show the algebraic reasoning behind the rotation. – theshopen Dec 17 '17 at 21:23
• I accept your argument. Nevertheless, his question was elsewhere. – Jean Marie Dec 17 '17 at 21:26
|
{}
|
# Build Manipulator Robot Using Kinematic DH Parameters
Use the Denavit-Hartenberg (DH) parameters of the Puma560® manipulator robot to incrementally build a rigid body tree robot model. Specify the relative DH parameters for each joint as you attach them. Visualize the robot frames, and interact with the final model.
The DH parameters define the geometry of how each rigid body attaches to its parent via a joint. The parameters follow a four transformation convention:
• $\mathit{A}$ — Length of the common normal line between the two z-axes, which is perpendicular to both axes
• $\alpha$ — Angle of rotation about the common normal (the twist between the two z-axes)
• $\mathit{d}$ — Offset along the previous z-axis to the common normal, from parent to child
• $\theta$ — Angle of rotation about the previous z-axis
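Taken together, these parameters define, in the classic DH convention, the homogeneous transform between consecutive link frames (shown here for reference, using the symbols above; the exact composition order can vary between conventions):

$${}^{i-1}T_{i}=\begin{bmatrix} \cos\theta & -\sin\theta\cos\alpha & \sin\theta\sin\alpha & A\cos\theta\\ \sin\theta & \cos\theta\cos\alpha & -\cos\theta\sin\alpha & A\sin\theta\\ 0 & \sin\alpha & \cos\alpha & d\\ 0 & 0 & 0 & 1 \end{bmatrix}$$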
Specify the parameters for the Puma560 robot as a matrix; the values come from [1].
```
dhparams = [0      pi/2   0       0;
            0.4318 0      0       0;
            0.0203 -pi/2  0.15005 0;
            0      pi/2   0.4318  0;
            0      -pi/2  0       0;
            0      0      0       0];
```
Create a rigid body tree object.
`robot = rigidBodyTree;`
Create a cell array for the rigid body object, and another for the joint objects. Iterate through the DH parameters performing this process:
1. Create a `rigidBody` object with a unique name.
2. Create and name a revolute `rigidBodyJoint` object.
3. Use `setFixedTransform` to specify the body-to-body transformation of the joint using DH parameters. The function ignores the final element of the DH parameters, `theta`, because the angle of the body is dependent on the joint position.
4. Use `addBody` to attach the body to the rigid body tree.
```
bodies = cell(6,1);
joints = cell(6,1);
for i = 1:6
    bodies{i} = rigidBody(['body' num2str(i)]);
    joints{i} = rigidBodyJoint(['jnt' num2str(i)],"revolute");
    setFixedTransform(joints{i},dhparams(i,:),"dh");
    bodies{i}.Joint = joints{i};
    if i == 1 % Add first body to base
        addBody(robot,bodies{i},"base")
    else % Add current body to previous body by name
        addBody(robot,bodies{i},bodies{i-1}.Name)
    end
end
```
Verify that your robot has been built properly by using the `showdetails` or `show` function. The `showdetails` function lists all the bodies of the robot in the MATLAB® command window. The `show` function displays the robot with a specified configuration (home by default).
`showdetails(robot)`
```
--------------------
Robot: (6 bodies)

 Idx   Body Name   Joint Name   Joint Type   Parent Name(Idx)   Children Name(s)
 ---   ---------   ----------   ----------   ----------------   ----------------
   1       body1         jnt1     revolute            base(0)   body2(2)
   2       body2         jnt2     revolute           body1(1)   body3(3)
   3       body3         jnt3     revolute           body2(2)   body4(4)
   4       body4         jnt4     revolute           body3(3)   body5(5)
   5       body5         jnt5     revolute           body4(4)   body6(6)
   6       body6         jnt6     revolute           body5(5)
--------------------
```
```
figure(Name="PUMA Robot Model")
show(robot);
```
### Interact with Robot Model
Visualize the robot model to confirm its dimensions by using the `interactiveRigidBodyTree` object.
```
figure(Name="Interactive GUI")
gui = interactiveRigidBodyTree(robot,MarkerScaleFactor=0.5);
```
Click and drag the marker in the interactive GUI to reposition the end effector. The GUI uses inverse kinematics to solve for the joint positions that achieve the best possible match to the specified end-effector position. Right-click a specific body frame to set it as the target marker body, or to change the control method for setting specific joint positions.
### Next Steps
Now that you have built your model in MATLAB®, these are some possible next steps.
• Perform Inverse Kinematics to get joint configurations based on desired end-effector positions. Specify robot constraints in addition to those of the model parameters, including aiming constraints, Cartesian bounds, and pose targets.
• Trajectory Generation and Following, based on waypoints and other parameters, with trapezoidal velocity profiles, B-splines, or polynomial trajectories.
• Perform Manipulator Motion Planning utilizing your robot models and a rapidly-exploring random tree (RRT) path planner.
• Use Collision Detection with obstacles in your environment to ensure safe and effective motion for your robot.
### References
[1] Corke, P. I., and B. Armstrong-Helouvry. “A Search for Consensus Among Model Parameters Reported for the PUMA 560 Robot.” Proceedings of the 1994 IEEE International Conference on Robotics and Automation, 1608–13. San Diego, CA, USA: IEEE Computer Soc. Press, 1994. https://doi.org/10.1109/ROBOT.1994.351360.
|
{}
|
# How could I constrain player movement to the surface of a 3D object using Unity?
I'm trying to create an effect similar to that of Mario Galaxy or Geometry Wars 3 where as the player walks around the "planet" gravity seems to adjust and they don't fall off the edge of the object as they would if the gravity was fixed in a single direction.
(source: gameskinny.com)
I managed to implement something close to what I'm looking for using an approach where the object that should have the gravity attracts other rigid bodies towards it, but by using the built-in physics engine for Unity, applying movement with AddForce and the like, I just couldn't get the movement to feel right. I couldn't get the player to move fast enough without the player starting to fly off the surface of the object, and I couldn't find a good balance of applied force and gravity to accommodate for this. My current implementation is an adaptation of what was found here
I feel like the solution would probably still use physics to get the player grounded onto the object if they were to leave the surface, but once the player has been grounded there would be a way to snap the player to the surface and turn off physics and control the player through other means but I'm really not sure.
What kind of approach should I take to snap the player to the surface of objects? Note that the solution should work in 3D space (as opposed to 2D) and should be able to be implemented using the free version of Unity.
• Possible duplicates: gamedev.stackexchange.com/questions/47220/… and gamedev.stackexchange.com/questions/71585/… – MichaelHouse Dec 14 '14 at 20:55
• I didn't even think of searching for walking on walls. I'll take a look and see if these help. – SpartanDonut Dec 14 '14 at 20:59
• If this question is specifically about doing this in 3D in Unity, It should be made clearer with edits. (It wouldn't be an exact duplicate of that existing one then.) – Anko Dec 14 '14 at 21:47
• That's my general feeling as well - I'm going to see if I can adapt that solution to 3D and post it as an answer (or if someone else can beat me to the punch I'm fine with that too). I'll try and update my question to be more clear on that. – SpartanDonut Dec 14 '14 at 22:22
I managed to accomplish what I needed, primarily with the assistance of this blog post for the surface snapping piece of the puzzle and came up with my own ideas for player movement and camera.
## Snapping Player to the Surface of an Object
The basic setup consists of a large sphere (the world) and a smaller sphere (the player) both with sphere colliders attached to them.
The bulk of the work being done was in the following two methods:
private void UpdatePlayerTransform(Vector3 movementDirection)
{
    RaycastHit hitInfo;
    if (GetRaycastDownAtNewPosition(movementDirection, out hitInfo))
    {
        Quaternion targetRotation = Quaternion.FromToRotation(Vector3.up, hitInfo.normal);
        Quaternion finalRotation = Quaternion.RotateTowards(transform.rotation, targetRotation, float.PositiveInfinity);
        transform.rotation = finalRotation;
        transform.position = hitInfo.point + hitInfo.normal * .5f;
    }
}

private bool GetRaycastDownAtNewPosition(Vector3 movementDirection, out RaycastHit hitInfo)
{
    // Cast a ray from the hypothetical new position down along the player's local "down".
    Ray ray = new Ray(transform.position + movementDirection * Speed, -transform.up);
    return Physics.Raycast(ray, out hitInfo, float.PositiveInfinity, WorldLayerMask);
}
The Vector3 movementDirection parameter is just as it sounds: the direction we are going to move our player this frame. Calculating that vector, while it ended up relatively simple in this example, was a bit tricky for me to figure out at first. More on that later, but keep in mind that it's a normalized vector in the direction the player is moving this frame.
Stepping through, the first thing we do is check if a ray, originating at the hypothetical future position directed towards the players down vector (-transform.up) hits the world using WorldLayerMask which is a public LayerMask property of the script. If you want more complex collisions or multiple layers you will have to build your own layer mask. If the raycast successfully hits something the hitInfo is used to retrieve the normal and hit point to calculate the new position and rotation of the player which should be right on the object. Offsetting the player's position may be required depending on size and origin of the player object in question.
Finally, this has really only been tested and likely only works well on simple objects such as spheres. As the blog post I based my solution off of suggests, you will likely want to perform multiple raycasts and average them for your position and rotation to get a much nicer transition when moving over more complex terrain. There may also be other pitfalls I've not thought of at this point.
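The multi-raycast averaging suggested above isn't prescribed in detail, but the idea is simple: cast several rays, then average the hit points and the (re-normalized) hit normals. A minimal language-agnostic sketch in Python, with hits represented as `(point, normal)` tuples (the names here are illustrative, not Unity API):

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def average_hits(hits):
    """Average a list of (point, normal) raycast hits.

    Returns (mean_point, mean_normal), where the averaged normal is
    re-normalized; this smooths position/orientation on bumpy terrain.
    """
    n = len(hits)
    mean_point = tuple(sum(h[0][i] for h in hits) / n for i in range(3))
    mean_normal = normalize(tuple(sum(h[1][i] for h in hits) / n for i in range(3)))
    return mean_point, mean_normal

# Example: two hits with slightly different normals
hits = [((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)),
        ((0.2, 1.0, 0.0), (1.0, 1.0, 0.0))]
point, normal = average_hits(hits)
```

In Unity terms, each tuple would come from one `RaycastHit`'s `point` and `normal`, and the averaged pair would replace `hitInfo.point` / `hitInfo.normal` in `UpdatePlayerTransform`.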
## Camera and Movement
Once the player was sticking to the surface of the object, the next task to tackle was movement. I had originally started out with movement relative to the player, but I ran into issues at the poles of the sphere, where directions suddenly flip: my player rapidly changed direction over and over and could never pass the poles. What I wound up doing was making the player's movement relative to the camera.
What worked well for my needs was a camera that strictly followed the player based solely on the player's position. As a result, even though the camera was technically rotating, pressing up always moved the player towards the top of the screen, down towards the bottom, and so on with left and right.
To do this, the following was executed on the camera where the target object was the player:
private void FixedUpdate()
{
// Calculate and set camera position
Vector3 desiredPosition = this.target.TransformPoint(0, this.height, -this.distance);
this.transform.position = Vector3.Lerp(this.transform.position, desiredPosition, Time.deltaTime * this.damping);
// Calculate and set camera rotation
Quaternion desiredRotation = Quaternion.LookRotation(this.target.position - this.transform.position, this.target.up);
this.transform.rotation = Quaternion.Slerp(this.transform.rotation, desiredRotation, Time.deltaTime * this.rotationDamping);
}
Finally, to move the player, we leverage the transform of the main camera so that, with our controls, up moves up, down moves down, and so on. It is here that we call UpdatePlayerTransform, which snaps our position onto the world object.
void Update ()
{
Vector3 movementDirection = Vector3.zero;
if (Input.GetAxisRaw("Vertical") > 0)
{
movementDirection += cameraTransform.up;
}
else if (Input.GetAxisRaw("Vertical") < 0)
{
movementDirection += -cameraTransform.up;
}
if (Input.GetAxisRaw("Horizontal") > 0)
{
movementDirection += cameraTransform.right;
}
else if (Input.GetAxisRaw("Horizontal") < 0)
{
movementDirection += -cameraTransform.right;
}
movementDirection.Normalize();
UpdatePlayerTransform(movementDirection);
}
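One detail worth noting in the Update method above: without the Normalize call, diagonal input (up + right pressed together) would move the player about 1.4 times faster than a single axis. A quick sketch of that math, with the camera basis vectors stood in by plain tuples:

```python
import math

def length(v):
    return math.sqrt(sum(c * c for c in v))

def normalize(v):
    n = length(v)
    return tuple(c / n for c in v)

up = (0.0, 1.0, 0.0)      # stand-in for cameraTransform.up
right = (1.0, 0.0, 0.0)   # stand-in for cameraTransform.right

# Pressing up + right at once:
raw = tuple(u + r for u, r in zip(up, right))
diag_speed = length(raw)             # sqrt(2), ~1.414 before normalizing
movement_direction = normalize(raw)  # unit length after normalizing
```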
To implement a more interesting camera while keeping the controls about the same as what we have here, you could use a camera that isn't rendered (or just a dummy object) to base movement off of, and then use the more interesting camera to render what you want the game to look like. This allows nice camera transitions as you go around objects without breaking the controls.
I think this idea will work:
Keep a CPU-side copy of the planet mesh. Having mesh vertices also means you have normal vectors for each point on the planet. Then completely disable gravity for all entities, instead applying a force in exactly opposite direction of a normal vector.
Now, based on which point should that normal vector of the planet be calculated?
The easiest answer (which I'm pretty sure will work okay) is to approximate similarly to Newton's method. When objects first spawn, you know all their initial positions on the planet; use that initial position to determine each object's up vector. Gravity will obviously point in the opposite direction (down). In the next frame, before applying gravity, cast a ray from the object's new position along its old down vector, and use that ray's intersection with the planet as the new reference for determining the up vector. The rare case of the ray hitting nothing means something went horribly wrong, and you should move the object back to where it was in the previous frame.
Also note that with this method, the further the player's origin is from the planet, the worse the approximation becomes. Hence it's better to use a point somewhere around each player's feet as their origin. I'm guessing, but I think using the feet as origin will also result in easier handling and navigation of the player.
A last note: for better results, you can even do the following. Keep track of the player's movement in each frame (e.g. current_position - last_position). Then clamp that movement_vector so that its component along the object's up vector is zero; call this new vector reference_movement. Move the previous reference_point by reference_movement and use this new point as the ray-tracing origin. After (and if) the ray hits the planet, move reference_point to that hit point. Finally, calculate the new up vector from this new reference_point.
Some pseudo code to sum it up:
update_up_vector(player:object, planet:mesh)
{
    // up_vector carries over from the previous frame
    up_vector = normalize(player.up);
    reference_point = ray_trace(player.old_position, -up_vector, planet);
    movement = player.position - player.old_position;
    // remove the component of the movement along up (vector rejection)
    movement_clamped = movement - up_vector * dot(movement, up_vector);
    reference_point += movement_clamped;
    reference_point = ray_trace(reference_point, -up_vector, planet);
    player.up = normal_vector(planet, reference_point);
}
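The clamping step in the pseudocode above (removing the movement's component along up) is just vector rejection. A self-contained Python check, under the assumption that vectors are 3-tuples and up_vector has unit length:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def clamp_to_tangent(movement, up_vector):
    """Remove the component of movement along up_vector (vector rejection).

    up_vector is assumed to be unit length; the result lies in the plane
    perpendicular to up_vector, matching movement_clamped in the pseudocode.
    """
    d = dot(movement, up_vector)
    return tuple(m - u * d for m, u in zip(movement, up_vector))

# Moving (1, 2, 0) with up = (0, 1, 0) keeps only the tangential part:
moved = clamp_to_tangent((1.0, 2.0, 0.0), (0.0, 1.0, 0.0))
```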
This post could be helpful. Its gist is that you don't use the built-in character controllers, but make your own using the physics engine. You then use the normals detected underneath the player to orient them to the surface of the mesh.
Here's a nice overview of the technique. There are plenty more resources with web search terms like "unity walk on 3d objects mario galaxy".
There is also a demo project from Unity 3.x that had a walking engine, with a soldier and a dog, and one of its scenes demonstrated walking on 3D objects Galaxy-style. It's called the Locomotion System, by runevision.
• At a first glance the demo doesn't work very well and the Unity package has build errors. I'll see if I can make this work but a more complete answer would be appreciated. – SpartanDonut Dec 15 '14 at 3:25
## Rotation
The first task is to get a vector that defines up. From your drawing, you can do it in one of two ways. You can treat the planet as a sphere and use (object.position - planet.position). The second way is to use Collider.Raycast() and use the 'hit.normal' it returns.
Here is the code that he suggested me:
var up : Vector3 = transform.position - planet.position;
transform.rotation = Quaternion.FromToRotation(transform.up, up) * transform.rotation;
Call that every update, and you should get it working. (note that the code is in UnityScript).
The same code in C#:
Vector3 up = transform.position - planet.position;
transform.rotation = Quaternion.FromToRotation(transform.up, up) * transform.rotation;
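For readers curious what FromToRotation actually computes: it builds a quaternion whose axis is the cross product of the two vectors and whose angle is the angle between them. A minimal sketch of the general (non-degenerate) case in Python — note this does not handle the anti-parallel case, which Unity's implementation does:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(c / n for c in v)

def from_to_rotation(a, b):
    """Quaternion (w, x, y, z) rotating unit vector a onto unit vector b.

    Sketch only: assumes a and b are unit length and not anti-parallel.
    """
    axis = normalize(cross(a, b))
    angle = math.acos(max(-1.0, min(1.0, dot(a, b))))
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s)

def rotate(q, v):
    """Apply unit quaternion q = (w, x, y, z) to vector v via q v q*."""
    w, x, y, z = q
    u = (x, y, z)
    uv = cross(u, v)
    uuv = cross(u, uv)
    return tuple(v[i] + 2 * (w * uv[i] + uuv[i]) for i in range(3))

# Rotating "up" onto "right" should map up to right:
q = from_to_rotation((0.0, 1.0, 0.0), (1.0, 0.0, 0.0))
result = rotate(q, (0.0, 1.0, 0.0))
```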
## Gravity
For the gravity, you can use my "planet physics technique", which I described in my question, but it isn't really optimized. It was just something I got on my mind.
I suggest creating your own system for the gravity.
Here is a YouTube tutorial by Sebastian Lague that shows a solution that works very well.
EDIT: In Unity, go to Edit > Project Settings > Physics and set all the values for Gravity to 0 (or remove the Rigidbody from all objects) to keep the built-in gravity, which just pulls the player down, from conflicting with the custom solution.
• Gravitating the player to the center of a planet (and rotating them accordingly) works well for near-spherical planets, but I can imagine it going really wrong for less spherical objects, like that rocket-shape Mario is jumping on in the first GIF. – Anko Dec 23 '14 at 13:34
• You could create a cube which is restricted to only move around the X-axis for example, inside of the non-spherical object, and move it when the player moves, so that it is always straight under the player, and then pull the player down to the cube. Try it. – Daniel Kvist Dec 23 '14 at 13:44
# Find all the eigenvalues and eigenvectors of the matrix
Find all the eigenvalues and eigenvectors of the matrix. The eigenvalues of A are Lambda1 = , Lambda2 = , Lambda3 = , and Lambda4 = (use ascending order). The eigenvector corresponding to the repeated eigenvalue Lambda = 1 is (type a simplified answer for each matrix element; enter your answers in ascending order of the first value). The eigenvector corresponding to Lambda = -6 is , and the eigenvector corresponding to Lambda = 2 is .
% $Id: references.tex 8161 2009-04-06 14:07:39Z alexandra$
% Local Variables:
% ispell-check-comments: nil
% Local IspellDict: american
% End:
% --------------------------------------------------------
% User documentation
% copyright by BREDEX GmbH 2004
% --------------------------------------------------------
\index{References}
\index{Parameter!References}
\label{referencesconcepts}
The first way to make \gdcases{} reusable is to use placeholders for the data in them. Instead of specifying concrete parameter values in \gdsteps{}, you can use references for values that will change each time you reuse the \gdcase{}. These references act as placeholders for values to be inserted later. The referenced parameter becomes a parameter of the parent \gdcase{}, and a value for it can be entered at the \gdcase{} level.

The advantage of using references is that a value is not fixed for a particular \gdcase{}. Instead of creating a new \gdcase{} for each different username, for example, you can have one \gdcase{} which can contain all the usernames you want to test as its data. Using references also means that you can translate your data, you can use external data from an Excel file, and you can run the same \gdcase{} multiple times with different \bxname{data sets}.
# Proving there is no natural number which is both even and odd
I've run into a small problem while working through Enderton's Elements of Set Theory. I'm doing the following problem:
Call a natural number even if it has the form $2\cdot m$ for some $m$. Call it odd if it has the form $(2\cdot p)+1$ for some $p$. Show that each natural number is either even or odd, but never both.
I've shown most of this, and along the way I've derived many of the results found in Arturo Magidin's great post on addition, so any of the theorems there may be used. It is the 'never both' part with which I'm having trouble. This is some of what I have:
Let $$B=\{n\in\omega\ |\neg(\exists m(n=2\cdot m)\wedge\exists p(n=2\cdot p+1))\},$$ the set of all natural numbers that are not both even and odd. Since $m\cdot 0=0$, $0$ is even. Also $0$ is not odd, for if $0=2\cdot p+1$, then $0=(2\cdot p)^+=\sigma(2\cdot p)$, but then $0\in\text{ran}\ \sigma$, contrary to the first Peano postulate. Hence $0\in B$. Suppose $k\in B$. Suppose $k$ is odd but not even, so $k=2\cdot p+1$ for some $p$. Earlier work of mine shows that $k^+$ is even. However, $k^+$ is not odd, for if $k^+=2\cdot m+1$ for some $m$, then since the successor function $\sigma$ is injective, we have $$k^+=2\cdot m+1=(2\cdot m)^+\implies k=2\cdot m$$ contrary to the fact that $k$ is not even.
Now suppose $k$ is even, but not odd. I have been able to show that $k^+$ is odd, but I can't figure a way to show that $k^+$ is not even. I suppose it must be simple, but I'm just not seeing it. Could someone explain this little part? Thank you.
-
HINT $\$ Here's the inductive step: $\rm\ 2m \ne 2n+1\ \Rightarrow\ 2m+1 \ne 2(n+1)$
@yunone: $\rm\ a^{+} = b^{+}\ \Rightarrow\ a = b\$ is a Peano axiom. – Bill Dubuque Jan 10 '11 at 4:30
Sorry, my last comment was at the post before the edit. I believe I see now. So suppose $k=2m\neq 2n+1$. Then since $\sigma$ is injective, $k^+=(2m)^+\neq(2n+1)^+\Rightarrow k^+=2m+1\neq 2n+1^+=2n+2=2(n+1)$. So $k^+$ is odd, but not even. Thanks! – yunone Jan 10 '11 at 4:33
Suppose there exists some $n \in \mathbb{N}$ which is both even and odd. Then $n = 2m = 2p+1$. Since $2m = 2p+1 > 2p$, we have $m > p$, so $m-p$ is a natural number with $m-p \geq 1$, and $2(m-p) = 1$. But then $1 = 2(m-p) \geq 2$. Contradiction.
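As an aside, in a modern proof assistant this dichotomy is available as a library fact. A sketch in Lean 4, assuming Mathlib and its lemma names `Nat.even_or_odd` and `Nat.even_iff_not_odd` (the exact import path may differ between Mathlib versions):

```lean
import Mathlib.Data.Nat.Parity

-- Every natural number is even or odd, but never both.
example (n : ℕ) : (Even n ∨ Odd n) ∧ ¬(Even n ∧ Odd n) :=
  ⟨Nat.even_or_odd n, fun ⟨he, ho⟩ => (Nat.even_iff_not_odd.mp he) ho⟩
```

Of course, the point of the exercise is to derive this from the Peano postulates directly, which the induction above does.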
# Density functional descriptions
Divergence free semiempirical gradient-corrected exchange energy functional. $\lambda=\gamma$ in ref. $$g=-{\frac {c \left( \rho \left( s \right) \right) ^{4/3} \left( 1+ \beta\, \left( \chi \left( s \right) \right) ^{2} \right) }{1+\lambda \, \left( \chi \left( s \right) \right) ^{2}}} ,$$
$$G=-{\frac {c \left( \rho \left( s \right) \right) ^{4/3} \left( 1+ \beta\, \left( \chi \left( s \right) \right) ^{2} \right) }{1+\lambda \, \left( \chi \left( s \right) \right) ^{2}}} ,$$
$$c=3/8\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} ,$$
$$\beta= 0.0076 ,$$
$$\lambda= 0.004 .$$
B86 with modified gradient correction for large density gradients. $$g=-c \left( \rho \left( s \right) \right) ^{4/3}-{\frac {\beta\, \left( \chi \left( s \right) \right) ^{2} \left( \rho \left( s \right) \right) ^{4/3}}{ \left( 1+\lambda\, \left( \chi \left( s \right) \right) ^{2} \right) ^{4/5}}} ,$$
$$G=-c \left( \rho \left( s \right) \right) ^{4/3}-{\frac {\beta\, \left( \chi \left( s \right) \right) ^{2} \left( \rho \left( s \right) \right) ^{4/3}}{ \left( 1+\lambda\, \left( \chi \left( s \right) \right) ^{2} \right) ^{4/5}}} ,$$
$$c=3/8\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} ,$$
$$\beta= 0.00375 ,$$
$$\lambda= 0.007 .$$
Re-optimised $\beta$ of B86 used in part 3 of Becke’s 1997 paper. $$g=-{\frac {c \left( \rho \left( s \right) \right) ^{4/3} \left( 1+ \beta\, \left( \chi \left( s \right) \right) ^{2} \right) }{1+\lambda \, \left( \chi \left( s \right) \right) ^{2}}} ,$$
$$G=-{\frac {c \left( \rho \left( s \right) \right) ^{4/3} \left( 1+ \beta\, \left( \chi \left( s \right) \right) ^{2} \right) }{1+\lambda \, \left( \chi \left( s \right) \right) ^{2}}} ,$$
$$c=3/8\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} ,$$
$$\beta= 0.00787 ,$$
$$\lambda= 0.004 .$$
$$G=- \left( \rho \left( s \right) \right) ^{4/3} \left( c+{\frac {\beta \, \left( \chi \left( s \right) \right) ^{2}}{1+6\,\beta\,\chi \left( s \right) {\it arcsinh} \left( \chi \left( s \right) \right) }} \right) ,$$
$$g=- \left( \rho \left( s \right) \right) ^{4/3} \left( c+{\frac {\beta \, \left( \chi \left( s \right) \right) ^{2}}{1+6\,\beta\,\chi \left( s \right) {\it arcsinh} \left( \chi \left( s \right) \right) }} \right) ,$$
$$c=3/8\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} ,$$
$$\beta= 0.0042 .$$
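As a numerical sanity check of the B88-type expression above (a sketch, not library code): at $\chi = 0$ the gradient correction vanishes and the functional reduces to the Slater-Dirac term $-c\,\rho^{4/3}$, while any finite $\chi$ makes the exchange energy density more negative.

```python
import math

# Spin-resolved Slater-Dirac constant c = (3/8) * 3^(1/3) * 4^(2/3) * pi^(-1/3)
C = (3.0 / 8.0) * 3.0 ** (1.0 / 3.0) * 4.0 ** (2.0 / 3.0) * math.pi ** (-1.0 / 3.0)

def slater(rho):
    """Slater-Dirac exchange energy density, -c * rho^(4/3)."""
    return -C * rho ** (4.0 / 3.0)

def b88(rho, chi, beta=0.0042):
    """Becke 1988 exchange energy density g(rho, chi) as written above."""
    correction = beta * chi ** 2 / (1.0 + 6.0 * beta * chi * math.asinh(chi))
    return -rho ** (4.0 / 3.0) * (C + correction)

g0 = b88(1.0, 0.0)  # reduces to Slater
g1 = b88(1.0, 1.0)  # gradient correction lowers the energy density
```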
Correlation functional depending on the B86MGC exchange functional, with empirical atomic parameters $t$ and $u$. The exchange functional that is used in conjunction with B88C should replace B86MGC here. $$f=- 0.8\,\rho \left( a \right) \rho \left( b \right) {q}^{2} \left( 1-{ \frac {\ln \left( 1+q \right) }{q}} \right) ,$$
$$q=t \left( x+y \right) ,$$
$$x= 0.5\, \left( c\sqrt [3]{\rho \left( a \right) }+{\frac {\beta\, \left( \chi \left( a \right) \right) ^{2}\sqrt [3]{\rho \left( a \right) }}{ \left( 1+\lambda\, \left( \chi \left( a \right) \right) ^ {2} \right) ^{4/5}}} \right) ^{-1} ,$$
$$y= 0.5\, \left( c\sqrt [3]{\rho \left( b \right) }+{\frac {\beta\, \left( \chi \left( b \right) \right) ^{2}\sqrt [3]{\rho \left( b \right) }}{ \left( 1+\lambda\, \left( \chi \left( b \right) \right) ^ {2} \right) ^{4/5}}} \right) ^{-1} ,$$
$$t= 0.63 ,$$
$$g=- 0.01\,\rho \left( s \right) d{z}^{4} \left( 1-2\,{\frac {\ln \left( 1+1/2\,z \right) }{z}} \right) ,$$
$$z=2\,ur ,$$
$$r= 0.5\,\rho \left( s \right) \left( c \left( \rho \left( s \right) \right) ^{4/3}+{\frac {\beta\, \left( \chi \left( s \right) \right) ^ {2} \left( \rho \left( s \right) \right) ^{4/3}}{ \left( 1+\lambda\, \left( \chi \left( s \right) \right) ^{2} \right) ^{4/5}}} \right) ^{ -1} ,$$
$$u= 0.96 ,$$
$$d=\tau \left( s \right) -1/4\,{\frac {\sigma \left( {\it ss} \right) }{ \rho \left( s \right) }} ,$$
$$G=- 0.01\,\rho \left( s \right) d{z}^{4} \left( 1-2\,{\frac {\ln \left( 1+1/2\,z \right) }{z}} \right) ,$$
$$c=3/8\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} ,$$
$$\beta= 0.00375 ,$$
$$\lambda= 0.007 .$$
$\tau$-dependent dynamical correlation functional. $$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$f={\frac {E}{1+l \left( \left( \chi \left( a \right) \right) ^{2}+ \left( \chi \left( b \right) \right) ^{2} \right) }} ,$$
$$g={\frac {F\epsilon \left( \rho \left( s \right) ,0 \right) }{H \left( 1+\nu\, \left( \chi \left( s \right) \right) ^{2} \right) ^{2}}} ,$$
$$G={\frac {F\epsilon \left( \rho \left( s \right) ,0 \right) }{H \left( 1+\nu\, \left( \chi \left( s \right) \right) ^{2} \right) ^{2}}} ,$$
$$E=\epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) ,$$
$$l= 0.0031 ,$$
$$F=\tau \left( s \right) -1/4\,{\frac {\sigma \left( {\it ss} \right) }{ \rho \left( s \right) }} ,$$
$$H=3/5\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} \left( \rho \left( s \right) \right) ^{5/3} ,$$
$$\nu= 0.038 ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 .$$
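The building block $e(r,t,u,v,w,x,y,p)$ above is the Perdew-Wang 1992 fit to the uniform-electron-gas correlation energy. As a quick numeric check (a sketch; values in hartree), evaluating the paramagnetic column of the $T,\dots,Y$ tables gives roughly $-0.060$ at $r_s = 1$, with the magnitude decaying as $r_s$ grows:

```python
import math

def e(r, t, u, v, w, x, y, p):
    """e(r,t,u,v,w,x,y,p) as defined above (PW92 parametrization)."""
    q = t * (v * math.sqrt(r) + w * r + x * r ** 1.5 + y * r ** (p + 1))
    return -2.0 * t * (1.0 + u * r) * math.log(1.0 + 1.0 / (2.0 * q))

# Paramagnetic (zeta = 0) channel: first entries of T, U, V, W, X, Y, P.
ec_rs1 = e(1.0, 0.031091, 0.21370, 7.5957, 3.5876, 1.6382, 0.49294, 1)
ec_rs2 = e(2.0, 0.031091, 0.21370, 7.5957, 3.5876, 1.6382, 0.49294, 1)
```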
This functional needs to be mixed with 0.1943*exact exchange. $$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$A=[ 0.9454, 0.7471,- 4.5961] ,$$
$$B=[ 0.1737, 2.3487,- 2.4868] ,$$
$$C=[ 0.8094, 0.5073, 0.7481] ,$$
$$\lambda=[ 0.006, 0.2, 0.004] ,$$
$$d=1/2\, \left( \chi \left( a \right) \right) ^{2}+1/2\, \left( \chi \left( b \right) \right) ^{2} ,$$
$$f= \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) \right) \left( A_{{0}}+A_{{1 }}\eta \left( d,\lambda_{{1}} \right) +A_{{2}} \left( \eta \left( d, \lambda_{{1}} \right) \right) ^{2} \right) ,$$
$$\eta \left( \theta,\mu \right) ={\frac {\mu\,\theta}{1+\mu\,\theta}} ,$$
$$g=\epsilon \left( \rho \left( s \right) ,0 \right) \left( B_{{0}}+B_{{ 1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2} } \right) +B_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{2} \right) -3/8\,\sqrt [ 3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} \left( \rho \left( s \right) \right) ^{4/3} \left( C_{{0}}+C_{{1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) +C_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) \right) ^{2} \right) ,$$
$$G=\epsilon \left( \rho \left( s \right) ,0 \right) \left( B_{{0}}+B_{{ 1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2} } \right) +B_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{2} \right) -3/8\,\sqrt [ 3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} \left( \rho \left( s \right) \right) ^{4/3} \left( C_{{0}}+C_{{1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) +C_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) \right) ^{2} \right) ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 .$$
Re-parameterization of the B97 functional in a self-consistent procedure by Hamprecht et al. This functional needs to be mixed with 0.21*exact exchange. $$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$A=[ 0.955689, 0.788552,- 5.47869] ,$$
$$B=[ 0.0820011, 2.71681,- 2.87103] ,$$
$$C=[ 0.789518, 0.573805, 0.660975] ,$$
$$\lambda=[ 0.006, 0.2, 0.004] ,$$
$$d=1/2\, \left( \chi \left( a \right) \right) ^{2}+1/2\, \left( \chi \left( b \right) \right) ^{2} ,$$
$$f= \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) \right) \left( A_{{0}}+A_{{1 }}\eta \left( d,\lambda_{{1}} \right) +A_{{2}} \left( \eta \left( d, \lambda_{{1}} \right) \right) ^{2} \right) ,$$
$$\eta \left( \theta,\mu \right) ={\frac {\mu\,\theta}{1+\mu\,\theta}} ,$$
$$g=\epsilon \left( \rho \left( s \right) ,0 \right) \left( B_{{0}}+B_{{ 1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2} } \right) +B_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{2} \right) -3/8\,\sqrt [ 3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} \left( \rho \left( s \right) \right) ^{4/3} \left( C_{{0}}+C_{{1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) +C_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) \right) ^{2} \right) ,$$
$$G=\epsilon \left( \rho \left( s \right) ,0 \right) \left( B_{{0}}+B_{{ 1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2} } \right) +B_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{2} \right) -3/8\,\sqrt [ 3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} \left( \rho \left( s \right) \right) ^{4/3} \left( C_{{0}}+C_{{1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) +C_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) \right) ^{2} \right) ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 .$$
A. D. Becke and M. R. Roussel, Phys. Rev. A 39, 3761 (1989)
$$K=\frac{1}{2}\sum_s \rho_s U_s ,$$ where $$U_s=-(1-e^{-x}-xe^{-x}/2)/b ,$$ $$b=\frac{x^3e^{-x}}{8\pi\rho_s}$$ and $x$ is defined by the nonlinear equation $$\frac{xe^{-2x/3}}{x-2}=\frac{2\pi^{2/3}\rho_s^{5/3}}{3Q_s} ,$$ where $$Q_s=(\upsilon_s-2\gamma D_s)/6 ,$$ $$D_s=\tau_s-\frac{\sigma_{ss}}{4\rho_s}$$ and $$\gamma=1.$$
A. D. Becke and M. R. Roussel, Phys. Rev. A 39, 3761 (1989)
As for BR but with ${\gamma=0.8}$.
Hybrid exchange-correlation functional comprising Becke's 1988 exchange and Wigner's spin-polarised correlation functionals. $$\alpha=-3/8\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} ,$$
$$g=\alpha\, \left( \rho \left( s \right) \right) ^{4/3}-{\frac {\beta\, \left( \rho \left( s \right) \right) ^{4/3} \left( \chi \left( s \right) \right) ^{2}}{1+6\,\beta\,\chi \left( s \right) {\it arcsinh} \left( \chi \left( s \right) \right) }} ,$$
$$G=\alpha\, \left( \rho \left( s \right) \right) ^{4/3}-{\frac {\beta\, \left( \rho \left( s \right) \right) ^{4/3} \left( \chi \left( s \right) \right) ^{2}}{1+6\,\beta\,\chi \left( s \right) {\it arcsinh} \left( \chi \left( s \right) \right) }} ,$$
$$f=-4\,c\rho \left( a \right) \rho \left( b \right) {\rho}^{-1} \left( 1 +{\frac {d}{\sqrt [3]{\rho}}} \right) ^{-1} ,$$
$$\beta= 0.0042 ,$$
$$c= 0.04918 ,$$
$$d= 0.349 .$$
R. Colle and O. Salvetti, Theor. Chim. Acta 37, 329 (1974); C. Lee, W. Yang and R. G. Parr, Phys. Rev. B 37, 785(1988)
CS1 is formally identical to CS2, except for a reformulation in which the terms involving $\upsilon$ are eliminated by integration by parts. This makes the functional more economical to evaluate. In the limit of exact quadrature, CS1 and CS2 are identical, but small numerical differences appear with finite integration grids.
R. Colle and O. Salvetti, Theor. Chim. Acta 37, 329 (1974); C. Lee, W. Yang and R. G. Parr, Phys. Rev. B 37, 785(1988)
CS2 is defined through \begin{aligned} K &=& -a \left({ \rho+2b\rho^{-5/3} \left[ \rho_\alpha t_{\alpha} + \rho_\beta t_{\beta} -\rho t_W \right] e^{-c\rho^{-1/3}} \over 1+d \rho^{-1/3} }\right) \end{aligned} where \begin{aligned} t_{\alpha} &=&\frac{\tau_\alpha}{2}-\frac{\upsilon_\alpha}{8} \\ t_{\beta} &=&\frac{\tau_\beta}{2}-\frac{\upsilon_\beta}{8} \\ t_{W} &=& {1\over 8} {\sigma \over \rho} - {1\over 2} \upsilon \end{aligned} and the constants are $a=0.04918, b=0.132, c=0.2533, d=0.349$.
Automatically generated Slater-Dirac exchange. $$g=-c \left( \rho \left( s \right) \right) ^{4/3} ,$$
$$c=3/8\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} .$$
Local-density approximation of the correlation energy for the short-range interelectronic interaction ${\rm erf}(\mu r_{12})/r_{12}$; S. Paziani, S. Moroni, P. Gori-Giorgi, and G. B. Bachelet, Phys. Rev. B 73, 155111 (2006).
$$\nonumber \epsilon_c^{\rm SR}(r_s,\zeta,\mu) =\epsilon_c^{\rm PW92}(r_s,\zeta)- \frac{[\phi_2(\zeta)]^3Q\left(\frac{\mu\sqrt{r_s}}{\phi_2(\zeta)}\right)+a_1 \mu^3+a_2 \mu^4+ a_3\mu^5+a_4\mu^6+a_5\mu^8}{(1+b_0^2\mu^2)^4},$$ where $$Q(x)=\frac{2\ln(2)-2}{\pi^2}\ln\left(\frac{1+a\,x+b\,x^2+c\,x^3}{1+a\,x+ d\,x^2}\right),$$ with $a=5.84605$, $c=3.91744$, $d=3.44851$, and $b=d-3\pi\alpha/(4\ln(2)-4)$. The parameters $a_i(r_s,\zeta)$ are given by \begin{aligned} a_1 & = & 4 \,b_0^6 \,C_3+b_0^8 \,C_5, \nonumber \\ a_2 & = & 4 \,b_0^6 \,C_2+b_0^8\, C_4+6\, b_0^4 \epsilon_c^{\rm PW92}, \nonumber \\ a_3 & = & b_0^8 \,C_3, \nonumber \\ a_4 & = & b_0^8 \,C_2+4 \,b_0^6\, \epsilon_c^{\rm PW92} \nonumber, \\ a_5 & = & b_0^8\,\epsilon_c^{\rm PW92}, \nonumber\end{aligned} with \begin{aligned} C_2 & = & -\frac{3(1\!-\!\zeta^2)\,g_c(0,r_s,\zeta\!=\!0)}{8\,r_s^3} \nonumber \\ C_3 & = & - (1\!-\!\zeta^2)\frac{g(0,r_s,\zeta\!=\!0)}{\sqrt{2\pi}\, r_s^3} \nonumber \\ C_4 & = & -\frac{9\, c_4(r_s,\zeta)}{64 r_s^3} \nonumber \\ C_5 & = & -\frac{9\, c_5(r_s,\zeta)}{40\sqrt{2 \pi} r_s^3}\nonumber \\ c_4(r_s,\zeta) & = & \left(\frac{1\!+\!\zeta}{2}\right)^2g''\left(0, r_s\left(\frac{2}{1\!+\!\zeta}\right)^{1/3}\!\!\!\!\!\!\!\!, \,\,\,\,\,\zeta\!=\!1\right)+ \left(\frac{1\!-\!\zeta}{2}\right)^2 \times \nonumber \\ & & g''\left(0, r_s\left(\frac{2}{1\!-\!\zeta}\right)^{1/3}\!\!\!\!\!\!\!\!, \,\,\,\,\,\zeta\!=\!1\right) + (1\!-\!\zeta^2)D_2(r_s)-\frac{\phi_8(\zeta)}{5\,\alpha^2\,r_s^2} \nonumber \\ c_5(r_s,\zeta) & = & \left(\frac{1\!+\!\zeta}{2}\right)^2g''\left(0, r_s\left(\frac{2}{1\!+\!\zeta}\right)^{1/3}\!\!\!\!\!\!\!\!, \,\,\,\,\,\zeta\!=\!1\right)+ \left(\frac{1\!-\!\zeta}{2}\right)^2 \times \nonumber \\ & & g''\left(0, r_s\left(\frac{2}{1\!-\!\zeta}\right)^{1/3}\!\!\!\!\!\!\!\!, \,\,\,\,\,\zeta\!=\!1\right)+ (1\!-\!\zeta^2)D_3(r_s),\end{aligned} and \begin{aligned} \phantom{\bigl[} b_0(r_s) = 0.784949\,r_s \\ \phantom{\Biggl[} g''(0,r_s,\zeta\!=\!1) = \frac{2^{5/3}}{5\,\alpha^2 \,r_s^2} 
\, \frac{1-0.02267 r_s}{\left(1+0.4319 r_s+0.04 r_s^2\right)} \\ \phantom{\Biggl[}D_2(r_s) = \frac{e^{- 0.547 r_s}}{r_s^2}\left(-0.388 r_s+0.676 r_s^2\right) \\ \phantom{\Biggl[}D_3(r_s) = \frac{e^{-0.31 r_s}}{r_s^3}\left(-4.95 r_s+ r_s^2\right).\end{aligned} Finally, $\epsilon_c^{\rm PW92}(r_s,\zeta)$ is the Perdew-Wang parametrization of the correlation energy of the standard uniform electron gas [J.P. Perdew and Y. Wang, Phys. Rev. B 45, 13244 (1992)], and $$g(0,r_s,\zeta\!=\!0)=\frac{1}{2}(1-Br_s+Cr_s^2+Dr_s^3+Er_s^4)\,{\rm e}^{-dr_s},$$ is the on-top pair-distribution function of the standard jellium model [P. Gori-Giorgi and J.P. Perdew, Phys. Rev. B 64, 155102 (2001)], where $B=-0.0207$, $C=0.08193$, $D=-0.01277$, $E=0.001859$, $d=0.7524$. The correlation part of the on-top pair-distribution function is $g_c(0,r_s,\zeta\!=\!0)=g(0,r_s,\zeta\!=\!0)-\frac{1}{2}$.
Toulouse-Colonna-Savin range-separated correlation functional based on PBE, see J. Toulouse et al., J. Chem. Phys. 122, 014110 (2005).
Hartree-Fock exact exchange functional can be used to construct hybrid exchange-correlation functional.
Local-density approximation of the exchange energy for the short-range interelectronic interaction ${\rm erf}(\mu r_{12})/r_{12}$; A. Savin, in Recent Developments and Applications of Modern Density Functional Theory, edited by J.M. Seminario (Elsevier, Amsterdam, 1996).
$$\epsilon_x^{\rm SR}(r_s,\zeta,\mu) = \frac{3}{4\pi}\frac{\phi_4(\zeta)}{\alpha\,r_s}- \frac{1}{2}(1\!+\!\zeta)^{4/3} f_x\left(r_s,\mu(1\!+\!\zeta)^{-1/3}\right)+\frac{1}{2}(1\!-\!\zeta)^{4/3} f_x\left(r_s,\mu(1\!-\!\zeta)^{-1/3}\right) \nonumber$$ with $$\phi_n(\zeta)=\frac{1}{2}\left[ (1\!+\!\zeta)^{n/3}+(1\!-\!\zeta)^{n/3} \right],$$ $$f_x(r_s,\mu) = -\frac{\mu}{\pi}\biggl[(2y-4y^3)\,e^{-1/4y^2}- 3y+4y^3+ \sqrt{\pi}\,{\rm erf}\left(\frac{1}{2y}\right)\biggr], \qquad y=\frac{\mu\,\alpha\,r_s}{2},$$ and $\alpha=(4/9\pi)^{1/3}$.
Toulouse-Colonna-Savin range-separated exchange functional based on PBE, see J. Toulouse et al., J. Chem. Phys. 122, 014110 (2005).
$$\alpha=-3/8\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} ,$$
$$g= \left( \rho \left( s \right) \right) ^{4/3} \left( \alpha-{\frac {1 }{137}}\, \left( \chi \left( s \right) \right) ^{3/2} \right) ,$$
$$G= \left( \rho \left( s \right) \right) ^{4/3} \left( \alpha-{\frac {1 }{137}}\, \left( \chi \left( s \right) \right) ^{3/2} \right) .$$
$$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$A=[ 0.51473, 6.9298,- 24.707, 23.110,- 11.323] ,$$
$$B=[ 0.48951,- 0.2607, 0.4329,- 1.9925, 2.4853] ,$$
$$C=[ 1.09163,- 0.7472, 5.0783,- 4.1075, 1.1717] ,$$
$$\lambda=[ 0.006, 0.2, 0.004] ,$$
$$d=1/2\, \left( \chi \left( a \right) \right) ^{2}+1/2\, \left( \chi \left( b \right) \right) ^{2} ,$$
$$f= \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) \right) \left( A_{{0}}+A_{{1 }}\eta \left( d,\lambda_{{1}} \right) +A_{{2}} \left( \eta \left( d, \lambda_{{1}} \right) \right) ^{2}+A_{{3}} \left( \eta \left( d, \lambda_{{1}} \right) \right) ^{3}+A_{{4}} \left( \eta \left( d, \lambda_{{1}} \right) \right) ^{4} \right) ,$$
$$\eta \left( \theta,\mu \right) ={\frac {\mu\,\theta}{1+\mu\,\theta}} ,$$
$$g=\epsilon \left( \rho \left( s \right) ,0 \right) \left( B_{{0}}+B_{{ 1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2} } \right) +B_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{2}+B_{{3}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{3}+B_{{4}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{4} \right) -3/8 \,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} \left( \rho \left( s \right) \right) ^{4/3} \left( C_{{0}}+C_{{1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) +C_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) \right) ^{2}+C_{{3}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) \right) ^{3}+C_{{4}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda _{{3}} \right) \right) ^{4} \right) ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 .$$
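The kernel $\epsilon(\alpha,\beta)$ above is the PW92 correlation energy density with the $\omega(\zeta)$ spin interpolation. A minimal Python sketch (our own function names, not part of any program this document describes) reproduces the standard unpolarized value $\epsilon_c(r_s{=}1,\zeta{=}0)\approx-0.0598$ hartree per particle, and checks that at full polarization the result reduces to the second parameter row, since $\omega(1)=1$ and the $(1-\zeta^4)$ term vanishes.

```python
import math

# PW92 parameter rows (t,u,v,w,x,y,p): eps_c(zeta=0), eps_c(zeta=1), -alpha_c
ROWS = [
    (0.031091, 0.21370, 7.5957, 3.5876, 1.6382, 0.49294, 1),
    (0.015545, 0.20548, 14.1189, 6.1977, 3.3662, 0.62517, 1),
    (0.016887, 0.11125, 10.357, 3.6231, 0.88026, 0.49671, 1),
]
C = 1.709921  # second derivative of the spin-interpolation function at zeta = 0

def e(r, t, u, v, w, x, y, p):
    q = v * math.sqrt(r) + w * r + x * r**1.5 + y * r**(p + 1)
    return -2.0 * t * (1.0 + u * r) * math.log(1.0 + 1.0 / (2.0 * t * q))

def omega(z):
    return ((1 + z)**(4 / 3) + (1 - z)**(4 / 3) - 2.0) / (2.0 * 2.0**(1 / 3) - 2.0)

def eps(a, b):
    """Correlation energy per volume for spin densities a, b."""
    n = a + b
    r = (3.0 / (4.0 * math.pi * n)) ** (1.0 / 3.0)   # r_s
    z = (a - b) / n                                  # zeta
    e0, e1, ma = (e(r, *row) for row in ROWS)        # ma = -alpha_c(r_s)
    return n * (e0 - ma * omega(z) * (1 - z**4) / C + (e1 - e0) * omega(z) * z**4)
```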
$$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$A=[ 0.542352, 7.01464,- 28.3822, 35.0329,- 20.4284] ,$$
$$B=[ 0.562576,- 0.0171436,- 1.30636, 1.05747, 0.885429] ,$$
$$C=[ 1.09025,- 0.799194, 5.57212,- 5.86760, 3.04544] ,$$
$$\lambda=[ 0.006, 0.2, 0.004] ,$$
$$d=1/2\, \left( \chi \left( a \right) \right) ^{2}+1/2\, \left( \chi \left( b \right) \right) ^{2} ,$$
$$f= \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) \right) \left( A_{{0}}+A_{{1 }}\eta \left( d,\lambda_{{1}} \right) +A_{{2}} \left( \eta \left( d, \lambda_{{1}} \right) \right) ^{2}+A_{{3}} \left( \eta \left( d, \lambda_{{1}} \right) \right) ^{3}+A_{{4}} \left( \eta \left( d, \lambda_{{1}} \right) \right) ^{4} \right) ,$$
$$\eta \left( \theta,\mu \right) ={\frac {\mu\,\theta}{1+\mu\,\theta}} ,$$
$$g=\epsilon \left( \rho \left( s \right) ,0 \right) \left( B_{{0}}+B_{{ 1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2} } \right) +B_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{2}+B_{{3}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{3}+B_{{4}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{4} \right) -3/8 \,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} \left( \rho \left( s \right) \right) ^{4/3} \left( C_{{0}}+C_{{1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) +C_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) \right) ^{2}+C_{{3}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) \right) ^{3}+C_{{4}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda _{{3}} \right) \right) ^{4} \right) ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 .$$
$$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$A=[ 0.72997, 3.35287,- 11.543, 8.08564,- 4.47857] ,$$
$$B=[ 0.222601,- 0.0338622,- 0.012517,- 0.802496, 1.55396] ,$$
$$C=[ 1.0932,- 0.744056, 5.5992,- 6.78549, 4.49357] ,$$
$$\lambda=[ 0.006, 0.2, 0.004] ,$$
$$d=1/2\, \left( \chi \left( a \right) \right) ^{2}+1/2\, \left( \chi \left( b \right) \right) ^{2} ,$$
$$f= \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) \right) \left( A_{{0}}+A_{{1 }}\eta \left( d,\lambda_{{1}} \right) +A_{{2}} \left( \eta \left( d, \lambda_{{1}} \right) \right) ^{2}+A_{{3}} \left( \eta \left( d, \lambda_{{1}} \right) \right) ^{3}+A_{{4}} \left( \eta \left( d, \lambda_{{1}} \right) \right) ^{4} \right) ,$$
$$\eta \left( \theta,\mu \right) ={\frac {\mu\,\theta}{1+\mu\,\theta}} ,$$
$$g=\epsilon \left( \rho \left( s \right) ,0 \right) \left( B_{{0}}+B_{{ 1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2} } \right) +B_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{2}+B_{{3}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{3}+B_{{4}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{2}} \right) \right) ^{4} \right) -3/8 \,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} \left( \rho \left( s \right) \right) ^{4/3} \left( C_{{0}}+C_{{1}}\eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) +C_{{2}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) \right) ^{2}+C_{{3}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda_{{3}} \right) \right) ^{3}+C_{{4}} \left( \eta \left( \left( \chi \left( s \right) \right) ^{2},\lambda _{{3}} \right) \right) ^{4} \right) ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 .$$
Henderson-Janesko-Scuseria range-separated exchange functional based on a model of an exchange hole derived by a constraint-satisfaction technique, see T. M. Henderson et al., J. Chem. Phys. 128, 194105 (2008).
LSDA exchange functional with density represented as a function of $\tau$. $$g=1/2\,E \left( 2\,\tau \left( s \right) \right) ,$$
$$E \left( \alpha \right) =1/9\,c{5}^{4/5}\sqrt [5]{9} \left( {\frac { \alpha\,\sqrt [3]{3}}{ \left( {\pi }^{2} \right) ^{2/3}}} \right) ^{4/5 } ,$$
$$c=-3/4\,\sqrt [3]{3}\sqrt [3]{{\pi }^{-1}} ,$$
$$G=1/2\,E \left( 2\,\tau \left( s \right) \right) .$$
C. Lee, W. Yang and R. G. Parr, Phys. Rev. B 37, 785 (1988); B. Miehlich, A. Savin, H. Stoll and H. Preuss, Chem. Phys. Lett. 157, 200 (1989). $$f=-4\,A\rho \left( a \right) \rho \left( b \right) \left( 1+{\frac {d} {\sqrt [3]{\rho}}} \right) ^{-1}{\rho}^{-1}-AB\omega\, \left( \rho \left( a \right) \rho \left( b \right) \left( 8\,{2}^{2/3}{\it cf}\, \left( \left( \rho \left( a \right) \right) ^{8/3}+ \left( \rho \left( b \right) \right) ^{8/3} \right) + \left( {\frac {47}{18}}-{ \frac {7}{18}}\,\delta \right) \sigma- \left( 5/2-1/18\,\delta \right) \left( \sigma \left( {\it aa} \right) +\sigma \left( {\it bb} \right) \right) -1/9\, \left( \delta-11 \right) \left( {\frac {\rho \left( a \right) \sigma \left( {\it aa} \right) }{\rho}}+{\frac {\rho \left( b \right) \sigma \left( {\it bb} \right) }{\rho}} \right) \right) -2/3 \,{\rho}^{2}\sigma+ \left( 2/3\,{\rho}^{2}- \left( \rho \left( a \right) \right) ^{2} \right) \sigma \left( {\it bb} \right) + \left( 2/3\,{\rho}^{2}- \left( \rho \left( b \right) \right) ^{2} \right) \sigma \left( {\it aa} \right) \right) ,$$
$$\omega={e^{-{\frac {c}{\sqrt [3]{\rho}}}}}{\rho}^{-11/3} \left( 1+{ \frac {d}{\sqrt [3]{\rho}}} \right) ^{-1} ,$$
$$\delta={\frac {c}{\sqrt [3]{\rho}}}+d{\frac {1}{\sqrt [3]{\rho}}} \left( 1+{\frac {d}{\sqrt [3]{\rho}}} \right) ^{-1} ,$$
$${\it cf}=3/10\,{3}^{2/3} \left( {\pi }^{2} \right) ^{2/3} ,$$
$$A= 0.04918 ,$$
$$B= 0.132 ,$$
$$c= 0.2533 ,$$
$$d= 0.349 .$$
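The density-dependent factors $\omega$ and $\delta$ of the LYP expression can be checked in isolation. A small sketch with the $c$, $d$ values above (the function names are ours); it also verifies the Thomas-Fermi constant ${\it cf}=(3/10)(3\pi^2)^{2/3}\approx 2.8712$.

```python
import math

A, B, c, d = 0.04918, 0.132, 0.2533, 0.349
CF = 0.3 * (3.0 * math.pi**2) ** (2.0 / 3.0)  # cf = (3/10)(3 pi^2)^(2/3)

def omega_lyp(rho):
    """omega(rho) = exp(-c/rho^(1/3)) * rho^(-11/3) / (1 + d/rho^(1/3))."""
    r13 = rho ** (1.0 / 3.0)
    return math.exp(-c / r13) * rho ** (-11.0 / 3.0) / (1.0 + d / r13)

def delta_lyp(rho):
    """delta(rho) = c/rho^(1/3) + (d/rho^(1/3)) / (1 + d/rho^(1/3))."""
    r13 = rho ** (1.0 / 3.0)
    return c / r13 + (d / r13) / (1.0 + d / r13)
```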
$$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 ,$$
$${\it tausMFM}=1/2\,\tau \left( s \right) ,$$
$${\it ds}=2\,{\it tausMFM}-1/4\,{\frac {\sigma \left( {\it ss} \right) } {\rho \left( s \right) }} ,$$
$${\it Gab} \left( {\it chia},{\it chib} \right) =\sum _{i=0}^{n}{\it cCab}_{{i}} \left( {\frac {{\it yCab}\, \left( {{\it chia}}^{2}+{{\it chib}}^{2} \right) }{1+{\it yCab}\, \left( {{\it chia}}^{2}+{{\it chib} }^{2} \right) }} \right) ^{i} ,$$
$${\it Gss} \left( {\it chis} \right) =\sum _{i=0}^{n}{\it cCss}_{{i}} \left( {\frac {{\it yCss}\,{{\it chis}}^{2}}{1+{\it yCss}\,{{\it chis} }^{2}}} \right) ^{i} ,$$
$$n=4 ,$$
$${\it cCab}=[ 1.0, 1.09297,- 3.79171, 2.82810,- 10.58909] ,$$
$${\it cCss}=[ 1.0,- 3.05430, 7.61854, 1.47665,- 11.92365] ,$$
$${\it yCab}= 0.0031 ,$$
$${\it yCss}= 0.06 ,$$
$$f= \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) \right) {\it Gab} \left( \chi \left( a \right) ,\chi \left( b \right) \right) ,$$
$$g=1/2\,{\frac {\epsilon \left( \rho \left( s \right) ,0 \right) {\it Gss} \left( \chi \left( s \right) \right) {\it ds}}{{\it tausMFM}}} ,$$
$$G=1/2\,{\frac {\epsilon \left( \rho \left( s \right) ,0 \right) {\it Gss} \left( \chi \left( s \right) \right) {\it ds}}{{\it tausMFM}}} .$$
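The attenuated factors ${\it Gss}$ and ${\it Gab}$ above share one form: a polynomial in the saturating variable $u=y\,x/(1+y\,x)$, with $x=\chi_s^2$ or $\chi_a^2+\chi_b^2$. A small sketch (our function names), using the same-spin coefficients above: at $x=0$ the factor reduces to the leading coefficient, and for large $\chi$ it approaches the plain sum of the coefficients since $u\to 1$.

```python
def g_sat(x, y, coeffs):
    """Polynomial sum_i c_i * u^i in the saturating variable u = y*x/(1+y*x)."""
    u = y * x / (1.0 + y * x)
    return sum(c * u**i for i, c in enumerate(coeffs))

# same-spin coefficients and attenuation parameter from the block above
cCss = [1.0, -3.05430, 7.61854, 1.47665, -11.92365]
yCss = 0.06
```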
$$g=-3/4\,{\frac {\sqrt [3]{6}\sqrt [3]{{\pi }^{2}} \left( \rho \left( s \right) \right) ^{4/3}F \left( S \right) {\it Fs} \left( {\it ws} \right) }{\pi }} ,$$
$$G=-3/4\,{\frac {\sqrt [3]{6}\sqrt [3]{{\pi }^{2}} \left( \rho \left( s \right) \right) ^{4/3}F \left( S \right) {\it Fs} \left( {\it ws} \right) }{\pi }} ,$$
$$S=1/12\,{\frac {\chi \left( s \right) {6}^{2/3}}{\sqrt [3]{{\pi }^{2}}} } ,$$
$$F \left( S \right) =1+R-R \left( 1+{\frac {\mu\,{S}^{2}}{R}} \right) ^{ -1} ,$$
$$R= 0.804 ,$$
$$\mu=1/3\,\delta\,{\pi }^{2} ,$$
$$\delta= 0.066725 ,$$
$$n=11 ,$$
$$A=[ 1.0,- 0.56833,- 1.30057, 5.50070, 9.06402,- 32.21075,- 23.73298, 70.22996, 29.88614,- 60.25778,- 13.22205, 15.23694] ,$$
$${\it Fs} \left( {\it ws} \right) =\sum _{i=0}^{n}A_{{i}}{{\it ws}}^{i} ,$$
$${\it ws}={\frac {{\it ts}-1}{{\it ts}+1}} ,$$
$${\it ts}={\frac {{\it tslsda}}{{\it tausMFM}}} ,$$
$${\it tslsda}=3/10\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} \left( \rho \left( s \right) \right) ^{5/3} ,$$
$${\it tausMFM}=1/2\,\tau \left( s \right) .$$
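The exchange enhancement above is a polynomial in the kinetic-energy variable ${\it ws}=({\it ts}-1)/({\it ts}+1)$, which maps ${\it ts}\in(0,\infty)$ into $(-1,1)$ and vanishes exactly when $\tau$ equals twice its uniform-gas value ${\it tslsda}$. A sketch (our function names, not part of any program this document describes):

```python
import math

# coefficients A_i of the enhancement series Fs(ws), from the block above
A = [1.0, -0.56833, -1.30057, 5.50070, 9.06402, -32.21075,
     -23.73298, 70.22996, 29.88614, -60.25778, -13.22205, 15.23694]

def t_lsda(rho):
    """Uniform-gas kinetic-energy density of one spin channel (the doc's tslsda)."""
    return 0.3 * 6.0 ** (2.0 / 3.0) * math.pi ** (4.0 / 3.0) * rho ** (5.0 / 3.0)

def w_s(rho, tau):
    """ws = (ts - 1)/(ts + 1) with ts = tslsda / (tau/2)."""
    ts = t_lsda(rho) / (0.5 * tau)
    return (ts - 1.0) / (ts + 1.0)

def F_s(w):
    return sum(a * w**i for i, a in enumerate(A))
```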
$$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 ,$$
$${\it tausMFM}=1/2\,\tau \left( s \right) ,$$
$${\it ds}=2\,{\it tausMFM}-1/4\,{\frac {\sigma \left( {\it ss} \right) } {\rho \left( s \right) }} ,$$
$${\it Gab} \left( {\it chia},{\it chib} \right) =\sum _{i=0}^{n}{\it cCab}_{{i}} \left( {\frac {{\it yCab}\, \left( {{\it chia}}^{2}+{{\it chib}}^{2} \right) }{1+{\it yCab}\, \left( {{\it chia}}^{2}+{{\it chib} }^{2} \right) }} \right) ^{i} ,$$
$${\it Gss} \left( {\it chis} \right) =\sum _{i=0}^{n}{\it cCss}_{{i}} \left( {\frac {{\it yCss}\,{{\it chis}}^{2}}{1+{\it yCss}\,{{\it chis} }^{2}}} \right) ^{i} ,$$
$$n=4 ,$$
$${\it cCab}=[ 1.0, 3.78569,- 14.15261,- 7.46589, 17.94491] ,$$
$${\it cCss}=[ 1.0, 3.77344,- 26.04463, 30.69913,- 9.22695] ,$$
$${\it yCab}= 0.0031 ,$$
$${\it yCss}= 0.06 ,$$
$$f= \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) \right) {\it Gab} \left( \chi \left( a \right) ,\chi \left( b \right) \right) ,$$
$$g=1/2\,{\frac {\epsilon \left( \rho \left( s \right) ,0 \right) {\it Gss} \left( \chi \left( s \right) \right) {\it ds}}{{\it tausMFM}}} ,$$
$$G=1/2\,{\frac {\epsilon \left( \rho \left( s \right) ,0 \right) {\it Gss} \left( \chi \left( s \right) \right) {\it ds}}{{\it tausMFM}}} .$$
$$g=-3/4\,{\frac {\sqrt [3]{6}\sqrt [3]{{\pi }^{2}} \left( \rho \left( s \right) \right) ^{4/3}F \left( S \right) {\it Fs} \left( {\it ws} \right) }{\pi }} ,$$
$$G=-3/4\,{\frac {\sqrt [3]{6}\sqrt [3]{{\pi }^{2}} \left( \rho \left( s \right) \right) ^{4/3}F \left( S \right) {\it Fs} \left( {\it ws} \right) }{\pi }} ,$$
$$S=1/12\,{\frac {\chi \left( s \right) {6}^{2/3}}{\sqrt [3]{{\pi }^{2}}} } ,$$
$$F \left( S \right) =1+R-R \left( 1+{\frac {\mu\,{S}^{2}}{R}} \right) ^{ -1} ,$$
$$R= 0.804 ,$$
$$\mu=1/3\,\delta\,{\pi }^{2} ,$$
$$\delta= 0.066725 ,$$
$$n=11 ,$$
$$A=[ 1.0, 0.08151,- 0.43956,- 3.22422, 2.01819, 8.79431,- 0.00295, 9.82029,- 4.82351,- 48.17574, 3.64802, 34.02248] ,$$
$${\it Fs} \left( {\it ws} \right) =\sum _{i=0}^{n}A_{{i}}{{\it ws}}^{i} ,$$
$${\it ws}={\frac {{\it ts}-1}{{\it ts}+1}} ,$$
$${\it ts}={\frac {{\it tslsda}}{{\it tausMFM}}} ,$$
$${\it tslsda}=3/10\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} \left( \rho \left( s \right) \right) ^{5/3} ,$$
$${\it tausMFM}=1/2\,\tau \left( s \right) .$$
$$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 ,$$
$${\it Gab} \left( {\it chia},{\it chib} \right) =\sum _{i=0}^{n}{\it cCab}_{{i}} \left( {\frac {{\it yCab}\, \left( {{\it chia}}^{2}+{{\it chib}}^{2} \right) }{1+{\it yCab}\, \left( {{\it chia}}^{2}+{{\it chib} }^{2} \right) }} \right) ^{i} ,$$
$${\it Gss} \left( {\it chis} \right) =\sum _{i=0}^{n}{\it cCss}_{{i}} \left( {\frac {{\it yCss}\,{{\it chis}}^{2}}{1+{\it yCss}\,{{\it chis} }^{2}}} \right) ^{i} ,$$
$$n=4 ,$$
$${\it cCab}=[ 0.8833596, 33.57972,- 70.43548, 49.78271,- 18.52891] ,$$
$${\it cCss}=[ 0.3097855,- 5.528642, 13.47420,- 32.13623, 28.46742] ,$$
$${\it yCab}= 0.0031 ,$$
$${\it yCss}= 0.06 ,$$
$$x=\sqrt { \left( \chi \left( a \right) \right) ^{2}+ \left( \chi \left( b \right) \right) ^{2}} ,$$
$${\it tausMFM}=1/2\,\tau \left( s \right) ,$$
$${\it tauaMFM}=1/2\,\tau \left( a \right) ,$$
$${\it taubMFM}=1/2\,\tau \left( b \right) ,$$
$${\it zs}=2\,{\frac {{\it tausMFM}}{ \left( \rho \left( s \right) \right) ^{5/3}}}-{\it cf} ,$$
$$z=2\,{\frac {{\it tauaMFM}}{ \left( \rho \left( a \right) \right) ^{5/ 3}}}+2\,{\frac {{\it taubMFM}}{ \left( \rho \left( b \right) \right) ^ {5/3}}}-2\,{\it cf} ,$$
$${\it cf}=3/5\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} ,$$
$${\it ds}=1-{\frac { \left( \chi \left( s \right) \right) ^{2}}{4\,{ \it zs}+4\,{\it cf}}} ,$$
$$h \left( x,z,{\it d0},{\it d1},{\it d2},{\it d3},{\it d4},{\it d5}, \alpha \right) ={\frac {{\it d0}}{\lambda \left( x,z,\alpha \right) }}+ {\frac {{\it d1}\,{x}^{2}+{\it d2}\,z}{ \left( \lambda \left( x,z, \alpha \right) \right) ^{2}}}+{\frac {{\it d3}\,{x}^{4}+{\it d4}\,{x}^ {2}z+{\it d5}\,{z}^{2}}{ \left( \lambda \left( x,z,\alpha \right) \right) ^{3}}} ,$$
$$\lambda \left( x,z,\alpha \right) =1+\alpha\, \left( {x}^{2}+z \right) ,$$
$${\it dCab}=[ 0.1166404,- 0.09120847,- 0.06726189, 0.00006720580, 0.0008448011, 0.0] ,$$
$${\it dCss}=[ 0.6902145, 0.09847204, 0.2214797,- 0.001968264,- 0.006775479, 0.0] ,$$
$${\it aCab}= 0.003050 ,$$
$${\it aCss}= 0.005151 ,$$
$$f= \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) \right) \left( {\it Gab} \left( \chi \left( a \right) ,\chi \left( b \right) \right) +h \left( x,z,{\it dCab}_{{0}},{\it dCab}_{{1}},{\it dCab}_{{2}},{\it dCab}_{{3}},{\it dCab}_{{4}},{\it dCab}_{{5}},{\it aCab} \right) \right) ,$$
$$g=\epsilon \left( \rho \left( s \right) ,0 \right) \left( {\it Gss} \left( \chi \left( s \right) \right) +h \left( \chi \left( s \right) ,{\it zs},{\it dCss}_{{0}},{\it dCss}_{{1}},{\it dCss}_{{2}},{\it dCss} _{{3}},{\it dCss}_{{4}},{\it dCss}_{{5}},{\it aCss} \right) \right) { \it ds} ,$$
$$G=\epsilon \left( \rho \left( s \right) ,0 \right) \left( {\it Gss} \left( \chi \left( s \right) \right) +h \left( \chi \left( s \right) ,{\it zs},{\it dCss}_{{0}},{\it dCss}_{{1}},{\it dCss}_{{2}},{\it dCss} _{{3}},{\it dCss}_{{4}},{\it dCss}_{{5}},{\it aCss} \right) \right) { \it ds} .$$
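The correction term $h(x,z,\dots)$ above is a rational function in $\lambda=1+\alpha(x^2+z)$; at $x=z=0$ it reduces to $d_0$, and it decays to zero for large gradients. A sketch using the same-spin coefficients of the block above (the function name is ours):

```python
def h(x, z, d, alpha):
    """h(x, z) with coefficients d = [d0..d5] and lambda = 1 + alpha*(x^2 + z)."""
    lam = 1.0 + alpha * (x * x + z)
    return (d[0] / lam
            + (d[1] * x**2 + d[2] * z) / lam**2
            + (d[3] * x**4 + d[4] * x**2 * z + d[5] * z**2) / lam**3)

# same-spin coefficients from the block above
dCss = [0.6902145, 0.09847204, 0.2214797, -0.001968264, -0.006775479, 0.0]
aCss = 0.005151
```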
$$g=-3/4\,{\frac {\sqrt [3]{6}\sqrt [3]{{\pi }^{2}} \left( \rho \left( s \right) \right) ^{4/3}F \left( S \right) {\it Fs} \left( {\it ws} \right) }{\pi }} ,$$
$$G=-3/4\,{\frac {\sqrt [3]{6}\sqrt [3]{{\pi }^{2}} \left( \rho \left( s \right) \right) ^{4/3}F \left( S \right) {\it Fs} \left( {\it ws} \right) }{\pi }} ,$$
$$S=1/12\,{\frac {\chi \left( s \right) {6}^{2/3}}{\sqrt [3]{{\pi }^{2}}} } ,$$
$$F \left( S \right) =1+R-R \left( 1+{\frac {\mu\,{S}^{2}}{R}} \right) ^{ -1} ,$$
$$R= 0.804 ,$$
$$\mu=1/3\,\delta\,{\pi }^{2} ,$$
$$\delta= 0.066725 ,$$
$$n=11 ,$$
$$A=[ 0.4600000,- 0.2206052,- 0.09431788, 2.164494,- 2.556466,- 14.22133, 15.55044, 35.98078,- 27.22754,- 39.24093, 15.22808, 15.22227] ,$$
$${\it Fs} \left( {\it ws} \right) =\sum _{i=0}^{n}A_{{i}}{{\it ws}}^{i} ,$$
$${\it ws}={\frac {{\it ts}-1}{{\it ts}+1}} ,$$
$${\it ts}={\frac {{\it tslsda}}{{\it tausMFM}}} ,$$
$${\it tslsda}=3/10\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} \left( \rho \left( s \right) \right) ^{5/3} ,$$
$${\it tausMFM}=1/2\,\tau \left( s \right) .$$
$$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 ,$$
$${\it Gab} \left( {\it chia},{\it chib} \right) =\sum _{i=0}^{n}{\it cCab}_{{i}} \left( {\frac {{\it yCab}\, \left( {{\it chia}}^{2}+{{\it chib}}^{2} \right) }{1+{\it yCab}\, \left( {{\it chia}}^{2}+{{\it chib} }^{2} \right) }} \right) ^{i} ,$$
$${\it Gss} \left( {\it chis} \right) =\sum _{i=0}^{n}{\it cCss}_{{i}} \left( {\frac {{\it yCss}\,{{\it chis}}^{2}}{1+{\it yCss}\,{{\it chis} }^{2}}} \right) ^{i} ,$$
$$n=4 ,$$
$${\it cCab}=[ 3.741593, 218.7098,- 453.1252, 293.6479,- 62.87470] ,$$
$${\it cCss}=[ 0.5094055,- 1.491085, 17.23922,- 38.59018, 28.45044] ,$$
$${\it yCab}= 0.0031 ,$$
$${\it yCss}= 0.06 ,$$
$$x=\sqrt { \left( \chi \left( a \right) \right) ^{2}+ \left( \chi \left( b \right) \right) ^{2}} ,$$
$${\it tausMFM}=1/2\,\tau \left( s \right) ,$$
$${\it tauaMFM}=1/2\,\tau \left( a \right) ,$$
$${\it taubMFM}=1/2\,\tau \left( b \right) ,$$
$${\it zs}=2\,{\frac {{\it tausMFM}}{ \left( \rho \left( s \right) \right) ^{5/3}}}-{\it cf} ,$$
$$z=2\,{\frac {{\it tauaMFM}}{ \left( \rho \left( a \right) \right) ^{5/ 3}}}+2\,{\frac {{\it taubMFM}}{ \left( \rho \left( b \right) \right) ^ {5/3}}}-2\,{\it cf} ,$$
$${\it cf}=3/5\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} ,$$
$${\it ds}=1-{\frac { \left( \chi \left( s \right) \right) ^{2}}{4\,{ \it zs}+4\,{\it cf}}} ,$$
$$h \left( x,z,{\it d0},{\it d1},{\it d2},{\it d3},{\it d4},{\it d5}, \alpha \right) ={\frac {{\it d0}}{\lambda \left( x,z,\alpha \right) }}+ {\frac {{\it d1}\,{x}^{2}+{\it d2}\,z}{ \left( \lambda \left( x,z, \alpha \right) \right) ^{2}}}+{\frac {{\it d3}\,{x}^{4}+{\it d4}\,{x}^ {2}z+{\it d5}\,{z}^{2}}{ \left( \lambda \left( x,z,\alpha \right) \right) ^{3}}} ,$$
$$\lambda \left( x,z,\alpha \right) =1+\alpha\, \left( {x}^{2}+z \right) ,$$
$${\it dCab}=[- 2.741539,- 0.6720113,- 0.07932688, 0.001918681,- 0.002032902, 0.0] ,$$
$${\it dCss}=[ 0.4905945,- 0.1437348, 0.2357824, 0.001871015,- 0.003788963, 0.0] ,$$
$${\it aCab}= 0.003050 ,$$
$${\it aCss}= 0.005151 ,$$
$$f= \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) \right) \left( {\it Gab} \left( \chi \left( a \right) ,\chi \left( b \right) \right) +h \left( x,z,{\it dCab}_{{0}},{\it dCab}_{{1}},{\it dCab}_{{2}},{\it dCab}_{{3}},{\it dCab}_{{4}},{\it dCab}_{{5}},{\it aCab} \right) \right) ,$$
$$g=\epsilon \left( \rho \left( s \right) ,0 \right) \left( {\it Gss} \left( \chi \left( s \right) \right) +h \left( \chi \left( s \right) ,{\it zs},{\it dCss}_{{0}},{\it dCss}_{{1}},{\it dCss}_{{2}},{\it dCss} _{{3}},{\it dCss}_{{4}},{\it dCss}_{{5}},{\it aCss} \right) \right) { \it ds} ,$$
$$G=\epsilon \left( \rho \left( s \right) ,0 \right) \left( {\it Gss} \left( \chi \left( s \right) \right) +h \left( \chi \left( s \right) ,{\it zs},{\it dCss}_{{0}},{\it dCss}_{{1}},{\it dCss}_{{2}},{\it dCss} _{{3}},{\it dCss}_{{4}},{\it dCss}_{{5}},{\it aCss} \right) \right) { \it ds} .$$
$$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 ,$$
$${\it Gab} \left( {\it chia},{\it chib} \right) =\sum _{i=0}^{n}{\it cCab}_{{i}} \left( {\frac {{\it yCab}\, \left( {{\it chia}}^{2}+{{\it chib}}^{2} \right) }{1+{\it yCab}\, \left( {{\it chia}}^{2}+{{\it chib} }^{2} \right) }} \right) ^{i} ,$$
$${\it Gss} \left( {\it chis} \right) =\sum _{i=0}^{n}{\it cCss}_{{i}} \left( {\frac {{\it yCss}\,{{\it chis}}^{2}}{1+{\it yCss}\,{{\it chis} }^{2}}} \right) ^{i} ,$$
$$n=4 ,$$
$${\it cCab}=[ 1.674634, 57.32017, 59.55416,- 231.1007, 125.5199] ,$$
$${\it cCss}=[ 0.1023254,- 2.453783, 29.13180,- 34.94358, 23.15955] ,$$
$${\it yCab}= 0.0031 ,$$
$${\it yCss}= 0.06 ,$$
$$x=\sqrt { \left( \chi \left( a \right) \right) ^{2}+ \left( \chi \left( b \right) \right) ^{2}} ,$$
$${\it tausMFM}=1/2\,\tau \left( s \right) ,$$
$${\it tauaMFM}=1/2\,\tau \left( a \right) ,$$
$${\it taubMFM}=1/2\,\tau \left( b \right) ,$$
$${\it zs}=2\,{\frac {{\it tausMFM}}{ \left( \rho \left( s \right) \right) ^{5/3}}}-{\it cf} ,$$
$$z=2\,{\frac {{\it tauaMFM}}{ \left( \rho \left( a \right) \right) ^{5/ 3}}}+2\,{\frac {{\it taubMFM}}{ \left( \rho \left( b \right) \right) ^ {5/3}}}-2\,{\it cf} ,$$
$${\it cf}=3/5\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} ,$$
$${\it ds}=1-{\frac { \left( \chi \left( s \right) \right) ^{2}}{4\,{ \it zs}+4\,{\it cf}}} ,$$
$$h \left( x,z,{\it d0},{\it d1},{\it d2},{\it d3},{\it d4},{\it d5}, \alpha \right) ={\frac {{\it d0}}{\lambda \left( x,z,\alpha \right) }}+ {\frac {{\it d1}\,{x}^{2}+{\it d2}\,z}{ \left( \lambda \left( x,z, \alpha \right) \right) ^{2}}}+{\frac {{\it d3}\,{x}^{4}+{\it d4}\,{x}^ {2}z+{\it d5}\,{z}^{2}}{ \left( \lambda \left( x,z,\alpha \right) \right) ^{3}}} ,$$
$$\lambda \left( x,z,\alpha \right) =1+\alpha\, \left( {x}^{2}+z \right) ,$$
$${\it dCab}=[- 0.6746338,- 0.1534002,- 0.09021521,- 0.001292037,- 0.0002352983, 0.0] ,$$
$${\it dCss}=[ 0.8976746,- 0.2345830, 0.2368173,- 0.0009913890,- 0.01146165, 0.0] ,$$
$${\it aCab}= 0.003050 ,$$
$${\it aCss}= 0.005151 ,$$
$$f= \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) \right) \left( {\it Gab} \left( \chi \left( a \right) ,\chi \left( b \right) \right) +h \left( x,z,{\it dCab}_{{0}},{\it dCab}_{{1}},{\it dCab}_{{2}},{\it dCab}_{{3}},{\it dCab}_{{4}},{\it dCab}_{{5}},{\it aCab} \right) \right) ,$$
$$g=\epsilon \left( \rho \left( s \right) ,0 \right) \left( {\it Gss} \left( \chi \left( s \right) \right) +h \left( \chi \left( s \right) ,{\it zs},{\it dCss}_{{0}},{\it dCss}_{{1}},{\it dCss}_{{2}},{\it dCss} _{{3}},{\it dCss}_{{4}},{\it dCss}_{{5}},{\it aCss} \right) \right) { \it ds} ,$$
$$G=\epsilon \left( \rho \left( s \right) ,0 \right) \left( {\it Gss} \left( \chi \left( s \right) \right) +h \left( \chi \left( s \right) ,{\it zs},{\it dCss}_{{0}},{\it dCss}_{{1}},{\it dCss}_{{2}},{\it dCss} _{{3}},{\it dCss}_{{4}},{\it dCss}_{{5}},{\it aCss} \right) \right) { \it ds} .$$
$$g=-3/4\,{\frac {\sqrt [3]{6}\sqrt [3]{{\pi }^{2}} \left( \rho \left( s \right) \right) ^{4/3}F \left( S \right) {\it Fs} \left( {\it ws} \right) }{\pi }}+{\it eslsda}\,h \left( \chi \left( s \right) ,{\it zs } \right) ,$$
$$G=-3/4\,{\frac {\sqrt [3]{6}\sqrt [3]{{\pi }^{2}} \left( \rho \left( s \right) \right) ^{4/3}F \left( S \right) {\it Fs} \left( {\it ws} \right) }{\pi }}+{\it eslsda}\,h \left( \chi \left( s \right) ,{\it zs } \right) ,$$
$$S=1/12\,{\frac {\chi \left( s \right) {6}^{2/3}}{\sqrt [3]{{\pi }^{2}}} } ,$$
$$F \left( S \right) =1+R-R \left( 1+{\frac {\mu\,{S}^{2}}{R}} \right) ^{ -1} ,$$
$$R= 0.804 ,$$
$$\mu=1/3\,\delta\,{\pi }^{2} ,$$
$$\delta= 0.066725 ,$$
$$n=11 ,$$
$$A=[ 0.1179732,- 1.066708,- 0.1462405, 7.481848, 3.776679,- 44.36118,- 18.30962, 100.3903, 38.64360,- 98.06018,- 25.57716, 35.90404] ,$$
$${\it Fs} \left( {\it ws} \right) =\sum _{i=0}^{n}A_{{i}}{{\it ws}}^{i} ,$$
$${\it ws}={\frac {{\it ts}-1}{{\it ts}+1}} ,$$
$${\it ts}={\frac {{\it tslsda}}{{\it tausMFM}}} ,$$
$${\it tslsda}=3/10\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} \left( \rho \left( s \right) \right) ^{5/3} ,$$
$${\it eslsda}=-3/8\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} \left( \rho \left( s \right) \right) ^{4/3} ,$$
$$d=[- 0.1179732,- 0.002500000,- 0.01180065, 0.0, 0.0, 0.0] ,$$
$$\alpha= 0.001867 ,$$
$${\it zs}=2\,{\frac {{\it tausMFM}}{ \left( \rho \left( s \right) \right) ^{5/3}}}-{\it cf} ,$$
$$h \left( x,z \right) ={\frac {d_{{0}}}{\lambda \left( x,z,\alpha \right) }}+{\frac {d_{{1}}{x}^{2}+d_{{2}}z}{ \left( \lambda \left( x,z ,\alpha \right) \right) ^{2}}}+{\frac {d_{{3}}{x}^{4}+d_{{4}}{x}^{2}z+ d_{{5}}{z}^{2}}{ \left( \lambda \left( x,z,\alpha \right) \right) ^{3} }} ,$$
$$\lambda \left( x,z,\alpha \right) =1+\alpha\, \left( {x}^{2}+z \right) ,$$
$${\it cf}=3/5\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} ,$$
$${\it tausMFM}=1/2\,\tau \left( s \right) .$$
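The iso-orbital indicator used above maps the ratio ts = tslsda/tausMFM onto ws in (-1, 1); at the uniform-gas limit ts = 1, so ws = 0 and the polynomial Fs reduces to its leading coefficient. A minimal Python transcription of these pieces (atomic units assumed; coefficients taken from the A array above, converted to 0-based indexing):

```python
import math

# Polynomial coefficients A_0..A_11 of Fs, copied from the listing above.
A = [0.1179732, -1.066708, -0.1462405, 7.481848, 3.776679, -44.36118,
     -18.30962, 100.3903, 38.64360, -98.06018, -25.57716, 35.90404]

def ws(rho_s, tau_s):
    """Iso-orbital indicator: ts = tslsda/tausMFM mapped onto (-1, 1)."""
    tslsda = 0.3 * (6.0 * math.pi**2) ** (2.0 / 3.0) * rho_s ** (5.0 / 3.0)
    tausMFM = 0.5 * tau_s
    ts = tslsda / tausMFM
    return (ts - 1.0) / (ts + 1.0)

def Fs(w):
    """Fs(ws) = sum_{i=0}^{11} A_i * ws^i."""
    return sum(a * w**i for i, a in enumerate(A))
```

In the uniform-gas limit (tau_s equal to twice tslsda, so that tausMFM = tslsda) the indicator vanishes and Fs returns A[0].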
$$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 ,$$
$${\it Gab} \left( {\it chia},{\it chib} \right) =\sum _{i=0}^{n}{\it cCab}_{{i}} \left( {\frac {{\it yCab}\, \left( {{\it chia}}^{2}+{{\it chib}}^{2} \right) }{1+{\it yCab}\, \left( {{\it chia}}^{2}+{{\it chib} }^{2} \right) }} \right) ^{i} ,$$
$${\it Gss} \left( {\it chis} \right) =\sum _{i=0}^{n}{\it cCss}_{{i}} \left( {\frac {{\it yCss}\,{{\it chis}}^{2}}{1+{\it yCss}\,{{\it chis} }^{2}}} \right) ^{i} ,$$
$$n=4 ,$$
$${\it cCab}=[ 0.6042374, 177.6783,- 251.3252, 76.35173,- 12.55699] ,$$
$${\it cCss}=[ 0.5349466, 0.5396620,- 31.61217, 51.49592,- 29.19613] ,$$
$${\it yCab}= 0.0031 ,$$
$${\it yCss}= 0.06 ,$$
$$x=\sqrt { \left( \chi \left( a \right) \right) ^{2}+ \left( \chi \left( b \right) \right) ^{2}} ,$$
$${\it tausMFM}=1/2\,\tau \left( s \right) ,$$
$${\it tauaMFM}=1/2\,\tau \left( a \right) ,$$
$${\it taubMFM}=1/2\,\tau \left( b \right) ,$$
$${\it zs}=2\,{\frac {{\it tausMFM}}{ \left( \rho \left( s \right) \right) ^{5/3}}}-{\it cf} ,$$
$$z=2\,{\frac {{\it tauaMFM}}{ \left( \rho \left( a \right) \right) ^{5/ 3}}}+2\,{\frac {{\it taubMFM}}{ \left( \rho \left( b \right) \right) ^ {5/3}}}-2\,{\it cf} ,$$
$${\it cf}=3/5\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} ,$$
$${\it ds}=1-{\frac { \left( \chi \left( s \right) \right) ^{2}}{4\,{ \it zs}+4\,{\it cf}}} ,$$
$$h \left( x,z,{\it d0},{\it d1},{\it d2},{\it d3},{\it d4},{\it d5}, \alpha \right) ={\frac {{\it d0}}{\lambda \left( x,z,\alpha \right) }}+ {\frac {{\it d1}\,{x}^{2}+{\it d2}\,z}{ \left( \lambda \left( x,z, \alpha \right) \right) ^{2}}}+{\frac {{\it d3}\,{x}^{4}+{\it d4}\,{x}^ {2}z+{\it d5}\,{z}^{2}}{ \left( \lambda \left( x,z,\alpha \right) \right) ^{3}}} ,$$
$$\lambda \left( x,z,\alpha \right) =1+\alpha\, \left( {x}^{2}+z \right) ,$$
$${\it dCab}=[ 0.3957626,- 0.5614546, 0.01403963, 0.0009831442,- 0.003577176, 0.0] ,$$
$${\it dCss}=[ 0.4650534, 0.1617589, 0.1833657, 0.0004692100,- 0.004990573, 0.0] ,$$
$${\it aCab}= 0.003050 ,$$
$${\it aCss}= 0.005151 ,$$
$$f= \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) \right) \left( {\it Gab} \left( \chi \left( a \right) ,\chi \left( b \right) \right) +h \left( x,z,{\it dCab}_{{0}},{\it dCab}_{{1}},{\it dCab}_{{2}},{\it dCab}_{{3}},{\it dCab}_{{4}},{\it dCab}_{{5}},{\it aCab} \right) \right) ,$$
$$g=\epsilon \left( \rho \left( s \right) ,0 \right) \left( {\it Gss} \left( \chi \left( s \right) \right) +h \left( \chi \left( s \right) ,{\it zs},{\it dCss}_{{0}},{\it dCss}_{{1}},{\it dCss}_{{2}},{\it dCss} _{{3}},{\it dCss}_{{4}},{\it dCss}_{{5}},{\it aCss} \right) \right) { \it ds} ,$$
$$G=\epsilon \left( \rho \left( s \right) ,0 \right) \left( {\it Gss} \left( \chi \left( s \right) \right) +h \left( \chi \left( s \right) ,{\it zs},{\it dCss}_{{0}},{\it dCss}_{{1}},{\it dCss}_{{2}},{\it dCss} _{{3}},{\it dCss}_{{4}},{\it dCss}_{{5}},{\it aCss} \right) \right) { \it ds} .$$
$$g=-3/4\,{\frac {\sqrt [3]{6}\sqrt [3]{{\pi }^{2}} \left( \rho \left( s \right) \right) ^{4/3}F \left( S \right) {\it Fs} \left( {\it ws} \right) }{\pi }}+{\it eslsda}\,h \left( \chi \left( s \right) ,{\it zs } \right) ,$$
$$G=-3/4\,{\frac {\sqrt [3]{6}\sqrt [3]{{\pi }^{2}} \left( \rho \left( s \right) \right) ^{4/3}F \left( S \right) {\it Fs} \left( {\it ws} \right) }{\pi }}+{\it eslsda}\,h \left( \chi \left( s \right) ,{\it zs } \right) ,$$
$$S=1/12\,{\frac {\chi \left( s \right) {6}^{2/3}}{\sqrt [3]{{\pi }^{2}}} } ,$$
$$F \left( S \right) =1+R-R \left( 1+{\frac {\mu\,{S}^{2}}{R}} \right) ^{ -1} ,$$
$$R= 0.804 ,$$
$$\mu=1/3\,\delta\,{\pi }^{2} ,$$
$$\delta= 0.066725 ,$$
$$n=11 ,$$
$$A=[ 0.3987756, 0.2548219, 0.3923994,- 2.103655,- 6.302147, 10.97615, 30.97273,- 23.18489,- 56.73480, 21.60364, 34.21814,- 9.049762] ,$$
$${\it Fs} \left( {\it ws} \right) =\sum _{i=0}^{n}A_{{i}}{{\it ws}}^{i} ,$$
$${\it ws}={\frac {{\it ts}-1}{{\it ts}+1}} ,$$
$${\it ts}={\frac {{\it tslsda}}{{\it tausMFM}}} ,$$
$${\it tslsda}=3/10\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} \left( \rho \left( s \right) \right) ^{5/3} ,$$
$${\it eslsda}=-3/8\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} \left( \rho \left( s \right) \right) ^{4/3} ,$$
$$d=[ 0.6012244, 0.004748822,- 0.008635108,- 0.000009308062, 0.00004482811, 0.0] ,$$
$$\alpha= 0.001867 ,$$
$${\it zs}=2\,{\frac {{\it tausMFM}}{ \left( \rho \left( s \right) \right) ^{5/3}}}-{\it cf} ,$$
$$h \left( x,z \right) ={\frac {d_{{0}}}{\lambda \left( x,z,\alpha \right) }}+{\frac {d_{{1}}{x}^{2}+d_{{2}}z}{ \left( \lambda \left( x,z ,\alpha \right) \right) ^{2}}}+{\frac {d_{{3}}{x}^{4}+d_{{4}}{x}^{2}z+ d_{{5}}{z}^{2}}{ \left( \lambda \left( x,z,\alpha \right) \right) ^{3} }} ,$$
$$\lambda \left( x,z,\alpha \right) =1+\alpha\, \left( {x}^{2}+z \right) ,$$
$${\it cf}=3/5\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} ,$$
$${\it tausMFM}=1/2\,\tau \left( s \right) .$$
$$g=-3/4\,{\frac {\sqrt [3]{6}\sqrt [3]{{\pi }^{2}} \left( \rho \left( s \right) \right) ^{4/3}F \left( S \right) {\it Fs} \left( {\it ws} \right) }{\pi }}+{\it eslsda}\,h \left( \chi \left( s \right) ,{\it zs } \right) ,$$
$$G=-3/4\,{\frac {\sqrt [3]{6}\sqrt [3]{{\pi }^{2}} \left( \rho \left( s \right) \right) ^{4/3}F \left( S \right) {\it Fs} \left( {\it ws} \right) }{\pi }}+{\it eslsda}\,h \left( \chi \left( s \right) ,{\it zs } \right) ,$$
$$S=1/12\,{\frac {\chi \left( s \right) {6}^{2/3}}{\sqrt [3]{{\pi }^{2}}} } ,$$
$$F \left( S \right) =1+R-R \left( 1+{\frac {\mu\,{S}^{2}}{R}} \right) ^{ -1} ,$$
$$R= 0.804 ,$$
$$\mu=1/3\,\delta\,{\pi }^{2} ,$$
$$\delta= 0.066725 ,$$
$$n=11 ,$$
$$A=[ 0.5877943,- 0.1371776, 0.2682367,- 2.515898,- 2.978892, 8.710679, 16.88195,- 4.489724,- 32.99983,- 14.49050, 20.43747, 12.56504] ,$$
$${\it Fs} \left( {\it ws} \right) =\sum _{i=0}^{n}A_{{i}}{{\it ws}}^{i} ,$$
$${\it ws}={\frac {{\it ts}-1}{{\it ts}+1}} ,$$
$${\it ts}={\frac {{\it tslsda}}{{\it tausMFM}}} ,$$
$${\it tslsda}=3/10\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} \left( \rho \left( s \right) \right) ^{5/3} ,$$
$${\it eslsda}=-3/8\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\pi }^{-1}} \left( \rho \left( s \right) \right) ^{4/3} ,$$
$$d=[ 0.1422057, 0.0007370319,- 0.01601373, 0.0, 0.0, 0.0] ,$$
$$\alpha= 0.001867 ,$$
$${\it zs}=2\,{\frac {{\it tausMFM}}{ \left( \rho \left( s \right) \right) ^{5/3}}}-{\it cf} ,$$
$$h \left( x,z \right) ={\frac {d_{{0}}}{\lambda \left( x,z,\alpha \right) }}+{\frac {d_{{1}}{x}^{2}+d_{{2}}z}{ \left( \lambda \left( x,z ,\alpha \right) \right) ^{2}}}+{\frac {d_{{3}}{x}^{4}+d_{{4}}{x}^{2}z+ d_{{5}}{z}^{2}}{ \left( \lambda \left( x,z,\alpha \right) \right) ^{3} }} ,$$
$$\lambda \left( x,z,\alpha \right) =1+\alpha\, \left( {x}^{2}+z \right) ,$$
$${\it cf}=3/5\,{6}^{2/3} \left( {\pi }^{2} \right) ^{2/3} ,$$
$${\it tausMFM}=1/2\,\tau \left( s \right) .$$
Meta-GGA correlation functional based on first principles; see M. Modrzejewski et al., J. Chem. Phys. 137, 204121 (2012).
$$g=-3\,{\frac {\pi \, \left( \rho \left( s \right) \right) ^{3}}{\tau \left( s \right) -1/4\,\upsilon \left( s \right) }} .$$
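The closed-shell energy density above is a single rational expression in the density, the kinetic-energy density, and the Laplacian term; a direct Python transcription (a sketch, with `rho`, `tau`, and `upsilon` standing for the rho(s), tau(s), and upsilon(s) of the formula, in atomic units):

```python
import math

def g_mgga(rho, tau, upsilon):
    """g = -3*pi*rho^3 / (tau - upsilon/4), transcribed from the formula above."""
    return -3.0 * math.pi * rho**3 / (tau - 0.25 * upsilon)
```

For example, rho = tau = 1 with upsilon = 0 evaluates to exactly -3*pi.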
MK00 with a gradient correction of the B88X form but with a different empirical parameter. $$g=-3\,{\frac {\pi \, \left( \rho \left( s \right) \right) ^{3}}{\tau \left( s \right) -1/4\,\upsilon \left( s \right) }}-{\frac {\beta\, \left( \rho \left( s \right) \right) ^{4/3} \left( \chi \left( s \right) \right) ^{2}}{1+6\,\beta\,\chi \left( s \right) {\it arcsinh} \left( \chi \left( s \right) \right) }} ,$$
$$\beta= 0.0016 ,$$
$$G=-3\,{\frac {\pi \, \left( \rho \left( s \right) \right) ^{3}}{\tau \left( s \right) -1/4\,\upsilon \left( s \right) }}-{\frac {\beta\, \left( \rho \left( s \right) \right) ^{4/3} \left( \chi \left( s \right) \right) ^{2}}{1+6\,\beta\,\chi \left( s \right) {\it arcsinh} \left( \chi \left( s \right) \right) }} .$$
Gradient correction to VWN. $$f=\rho\,e+{\frac {{e^{-\Phi}}C \left( r \right) \sigma}{d{\rho}^{4/3}}} ,$$
$$r=1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{\frac {1}{\pi \,\rho}}} ,$$
$$x=\sqrt {r} ,$$
$$\zeta={\frac {\rho \left( a \right) -\rho \left( b \right) }{\rho}} ,$$
$$k=[ 0.0310907, 0.01554535,-1/6\,{\pi }^{-2}] ,$$
$$l=[- 0.10498,- 0.325,- 0.0047584] ,$$
$$m=[ 3.72744, 7.06042, 1.13107] ,$$
$$n=[ 12.9352, 18.0578, 13.0045] ,$$
$$e=\Lambda+\omega\,y \left( 1+h{\zeta}^{4} \right) ,$$
$$y={\frac {9}{8}}\, \left( 1+\zeta \right) ^{4/3}+{\frac {9}{8}}\, \left( 1-\zeta \right) ^{4/3}-9/4 ,$$
$$h=4/9\,{\frac {\lambda-\Lambda}{ \left( \sqrt [3]{2}-1 \right) \omega}} -1 ,$$
$$\Lambda=q \left( k_{{1}},l_{{1}},m_{{1}},n_{{1}} \right) ,$$
$$\lambda=q \left( k_{{2}},l_{{2}},m_{{2}},n_{{2}} \right) ,$$
$$\omega=q \left( k_{{3}},l_{{3}},m_{{3}},n_{{3}} \right) ,$$
$$q \left( A,p,c,d \right) =A \left( \ln \left( {\frac {{x}^{2}}{X \left( x,c,d \right) }} \right) +2\,c\arctan \left( {\frac {Q \left( c ,d \right) }{2\,x+c}} \right) \left( Q \left( c,d \right) \right) ^{- 1}-cp \left( \ln \left( {\frac { \left( x-p \right) ^{2}}{X \left( x,c ,d \right) }} \right) +2\, \left( c+2\,p \right) \arctan \left( {\frac {Q \left( c,d \right) }{2\,x+c}} \right) \left( Q \left( c,d \right) \right) ^{-1} \right) \left( X \left( p,c,d \right) \right) ^{-1} \right) ,$$
$$Q \left( c,d \right) =\sqrt {4\,d-{c}^{2}} ,$$
$$X \left( i,c,d \right) ={i}^{2}+ci+d ,$$
$$\Phi= 0.007390075\,{\frac {z\sqrt {\sigma}}{C \left( r \right) {\rho}^{ 7/6}}} ,$$
$$d=\sqrt [3]{2}\sqrt { \left( 1/2+1/2\,\zeta \right) ^{5/3}+ \left( 1/2- 1/2\,\zeta \right) ^{5/3}} ,$$
$$C \left( r \right) = 0.001667+{\frac { 0.002568+\alpha\,r+\beta\,{r}^{2 }}{1+\xi\,r+\delta\,{r}^{2}+10000\,\beta\,{r}^{3}}} ,$$
$$z= 0.11 ,$$
$$\alpha= 0.023266 ,$$
$$\beta= 0.000007389 ,$$
$$\xi= 8.723 ,$$
$$\delta= 0.472 .$$
$$f=\rho\, \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) +H \left( d,\rho \left( a \right) ,\rho \left( b \right) \right) \right) ,$$
$$G=\rho\, \left( \epsilon \left( \rho \left( s \right) ,0 \right) +C \left( Q,\rho \left( s \right) ,0 \right) \right) ,$$
$$d=1/12\,{\frac {\sqrt {\sigma}{3}^{5/6}}{u \left( \rho \left( a \right) ,\rho \left( b \right) \right) \sqrt [6]{{\pi }^{-1}}{\rho}^{ 7/6}}} ,$$
$$u \left( \alpha,\beta \right) =1/2\, \left( 1+\zeta \left( \alpha,\beta \right) \right) ^{2/3}+1/2\, \left( 1-\zeta \left( \alpha,\beta \right) \right) ^{2/3} ,$$
$$H \left( d,\alpha,\beta \right) =1/2\, \left( u \left( \rho \left( a \right) ,\rho \left( b \right) \right) \right) ^{3}{\lambda}^{2}\ln \left( 1+2\,{\frac {\iota\, \left( {d}^{2}+A \left( \alpha,\beta \right) {d}^{4} \right) }{\lambda\, \left( 1+A \left( \alpha,\beta \right) {d}^{2}+ \left( A \left( \alpha,\beta \right) \right) ^{2}{d} ^{4} \right) }} \right) {\iota}^{-1} ,$$
$$A \left( \alpha,\beta \right) =2\,\iota{\lambda}^{-1} \left( {e^{-2\,{ \frac {\iota\,\epsilon \left( \alpha,\beta \right) }{ \left( u \left( \rho \left( a \right) ,\rho \left( b \right) \right) \right) ^{3}{ \lambda}^{2}}}}}-1 \right) ^{-1} ,$$
$$\iota= 0.0716 ,$$
$$\lambda=\nu\,\kappa ,$$
$$\nu=16\,{\frac {\sqrt [3]{3}\sqrt [3]{{\pi }^{2}}}{\pi }} ,$$
$$\kappa= 0.004235 ,$$
$$Z=- 0.001667 ,$$
$$\phi \left( r \right) =\theta \left( r \right) -Z ,$$
$$\theta \left( r \right) ={\frac {1}{1000}}\,{\frac { 2.568+\Xi\,r+\Phi \,{r}^{2}}{1+\Lambda\,r+\Upsilon\,{r}^{2}+10\,\Phi\,{r}^{3}}} ,$$
$$\Xi= 23.266 ,$$
$$\Phi= 0.007389 ,$$
$$\Lambda= 8.723 ,$$
$$\Upsilon= 0.472 ,$$
$$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$\epsilon \left( \alpha,\beta \right) =e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3 }},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P_{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^{4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2}},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y _{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}} ,U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{4} ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 ,$$
$$C \left( d,\alpha,\beta \right) =K \left( Q,\alpha,\beta \right) +M \left( Q,\alpha,\beta \right) ,$$
$$M \left( d,\alpha,\beta \right) = 0.5\,\nu\, \left( \phi \left( r \left( \alpha,\beta \right) \right) -\kappa-3/7\,Z \right) {d}^{2}{e^ {- 335.9789467\,{\frac {{3}^{2/3}{d}^{2}}{\sqrt [3]{{\pi }^{5}\rho}}}}} ,$$
$$K \left( d,\alpha,\beta \right) = 0.2500000000\,{\lambda}^{2}\ln \left( 1+2\,{\frac {\iota\, \left( {d}^{2}+N \left( \alpha,\beta \right) {d}^{4} \right) }{\lambda\, \left( 1+N \left( \alpha,\beta \right) {d}^{2}+ \left( N \left( \alpha,\beta \right) \right) ^{2}{d} ^{4} \right) }} \right) {\iota}^{-1} ,$$
$$N \left( \alpha,\beta \right) =2\,\iota{\lambda}^{-1} \left( {e^{-4\,{ \frac {\iota\,\epsilon \left( \alpha,\beta \right) }{{\lambda}^{2}}}}}- 1 \right) ^{-1} ,$$
$$Q=1/12\,{\frac {\sqrt {\sigma \left( {\it ss} \right) }\sqrt [3]{2}{3}^ {5/6}}{\sqrt [6]{{\pi }^{-1}}{\rho}^{7/6}}} .$$
$$g=1/2\,E \left( 2\,\rho \left( s \right) \right) ,$$
$$G=1/2\,E \left( 2\,\rho \left( s \right) \right) ,$$
$$E \left( n \right) =-3/4\,{\frac {\sqrt [3]{3}\sqrt [3]{{\pi }^{2}}{n}^ {4/3}F \left( S \right) }{\pi }} ,$$
$$S=1/12\,{\frac {\chi \left( s \right) {6}^{2/3}}{\sqrt [3]{{\pi }^{2}}} } ,$$
$$F \left( S \right) =1+R-R \left( 1+{\frac {\mu\,{S}^{2}}{R}} \right) ^{ -1} ,$$
$$R= 0.804 ,$$
$$\mu=1/3\,\delta\,{\pi }^{2} ,$$
$$\delta= 0.066725 .$$
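The listing above defines the exchange energy per spin channel through the spin-scaling relation g = (1/2) E(2 rho(s)), with E(n) an LDA prefactor times the enhancement factor F(S). A minimal Python transcription (atomic units assumed; note that F(0) = 1 recovers the LDA limit and F saturates at 1 + R for large S):

```python
import math

def F_enh(S, R=0.804, delta=0.066725):
    """Enhancement factor F(S) = 1 + R - R / (1 + mu*S^2/R), mu = delta*pi^2/3."""
    mu = delta * math.pi**2 / 3.0
    return 1.0 + R - R / (1.0 + mu * S**2 / R)

def E_x(n, S):
    """E(n) = -(3/4) * (3/pi)^(1/3) * n^(4/3) * F(S)."""
    return -0.75 * (3.0 / math.pi) ** (1.0 / 3.0) * n ** (4.0 / 3.0) * F_enh(S)

def g_x(rho_s, S):
    """Per-spin energy density via the spin-scaling relation g = E(2*rho_s)/2."""
    return 0.5 * E_x(2.0 * rho_s, S)
```

The same code evaluates the variant below by passing the modified value of R.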
Changes the value of the constant R from the original PBEX functional. $$g=1/2\,E \left( 2\,\rho \left( s \right) \right) ,$$
$$G=1/2\,E \left( 2\,\rho \left( s \right) \right) ,$$
$$E \left( n \right) =-3/4\,{\frac {\sqrt [3]{3}\sqrt [3]{{\pi }^{2}}{n}^ {4/3}F \left( S \right) }{\pi }} ,$$
$$S=1/12\,{\frac {\chi \left( s \right) {6}^{2/3}}{\sqrt [3]{{\pi }^{2}}} } ,$$
$$F \left( S \right) =1+R-R \left( 1+{\frac {\mu\,{S}^{2}}{R}} \right) ^{ -1} ,$$
$$R= 1.245 ,$$
$$\mu=1/3\,\delta\,{\pi }^{2} ,$$
$$\delta= 0.066725 .$$
GGA Exchange Functional. $$g=1/2\,E \left( 2\,\rho \left( s \right) \right) ,$$
$$E \left( n \right) =-3/4\,\sqrt [3]{3}\sqrt [3]{{\pi }^{-1}}{n}^{4/3}F \left( S \right) ,$$
$$F \left( S \right) = \left( 1+ 1.296\,{S}^{2}+14\,{S}^{4}+ 0.2\,{S}^{6} \right) ^{1/15} ,$$
$$S=1/12\,{\frac {\chi \left( s \right) {6}^{2/3}}{\sqrt [3]{{\pi }^{2}}} } ,$$
$$G=1/2\,E \left( 2\,\rho \left( s \right) \right) .$$
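The enhancement factor of this GGA exchange functional is a single closed-form polynomial root; a Python transcription for quick checks (a sketch, atomic units assumed):

```python
def F_gga(S):
    """F(S) = (1 + 1.296*S^2 + 14*S^4 + 0.2*S^6)^(1/15), from the formula above."""
    return (1.0 + 1.296 * S**2 + 14.0 * S**4 + 0.2 * S**6) ** (1.0 / 15.0)
```

As with the other exchange forms in this listing, F(0) = 1 recovers the uniform-gas limit.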
$$f=\rho\, \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) +H \left( d,\rho \left( a \right) ,\rho \left( b \right) \right) \right) ,$$
$$G=\rho\, \left( \epsilon \left( \rho \left( s \right) ,0 \right) +C \left( Q,\rho \left( s \right) ,0 \right) \right) ,$$
$$d=1/12\,{\frac {\sqrt {\sigma}{3}^{5/6}}{u \left( \rho \left( a \right) ,\rho \left( b \right) \right) \sqrt [6]{{\pi }^{-1}}{\rho}^{ 7/6}}} ,$$
$$u \left( \alpha,\beta \right) =1/2\, \left( 1+\zeta \left( \alpha,\beta \right) \right) ^{2/3}+1/2\, \left( 1-\zeta \left( \alpha,\beta \right) \right) ^{2/3} ,$$
$$H \left( d,\alpha,\beta \right) =L \left( d,\alpha,\beta \right) +J \left( d,\alpha,\beta \right) ,$$
$$L \left( d,\alpha,\beta \right) =1/2\, \left( u \left( \rho \left( a \right) ,\rho \left( b \right) \right) \right) ^{3}{\lambda}^{2}\ln \left( 1+2\,{\frac {\iota\, \left( {d}^{2}+A \left( \alpha,\beta \right) {d}^{4} \right) }{\lambda\, \left( 1+A \left( \alpha,\beta \right) {d}^{2}+ \left( A \left( \alpha,\beta \right) \right) ^{2}{d} ^{4} \right) }} \right) {\iota}^{-1} ,$$
$$J \left( d,\alpha,\beta \right) =\nu\, \left( \phi \left( r \left( \alpha,\beta \right) \right) -\kappa-3/7\,Z \right) \left( u \left( \rho \left( a \right) ,\rho \left( b \right) \right) \right) ^{3}{d}^ {2}{e^{-{\frac {400}{3}}\,{\frac { \left( u \left( \rho \left( a \right) ,\rho \left( b \right) \right) \right) ^{4}{3}^{2/3}{d}^{2}} {\sqrt [3]{{\pi }^{5}\rho}}}}} ,$$
$$A \left( \alpha,\beta \right) =2\,\iota{\lambda}^{-1} \left( {e^{-2\,{ \frac {\iota\,\epsilon \left( \alpha,\beta \right) }{ \left( u \left( \rho \left( a \right) ,\rho \left( b \right) \right) \right) ^{3}{ \lambda}^{2}}}}}-1 \right) ^{-1} ,$$
$$\iota= 0.09 ,$$
$$\lambda=\nu\,\kappa ,$$
$$\nu=16\,{\frac {\sqrt [3]{3}\sqrt [3]{{\pi }^{2}}}{\pi }} ,$$
$$\kappa= 0.004235 ,$$
$$Z=- 0.001667 ,$$
$$\phi \left( r \right) =\theta \left( r \right) -Z ,$$
$$\theta \left( r \right) ={\frac {1}{1000}}\,{\frac { 2.568+\Xi\,r+\Phi \,{r}^{2}}{1+\Lambda\,r+\Upsilon\,{r}^{2}+10\,\Phi\,{r}^{3}}} ,$$
$$\Xi= 23.266 ,$$
$$\Phi= 0.007389 ,$$
$$\Lambda= 8.723 ,$$
$$\Upsilon= 0.472 ,$$
$$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$\epsilon \left( \alpha,\beta \right) =e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3 }},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P_{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^{4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2}},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y _{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}} ,U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{4} ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 ,$$
$$C \left( d,\alpha,\beta \right) =K \left( Q,\alpha,\beta \right) +M \left( Q,\alpha,\beta \right) ,$$
$$M \left( d,\alpha,\beta \right) = 0.5\,\nu\, \left( \phi \left( r \left( \alpha,\beta \right) \right) -\kappa-3/7\,Z \right) {d}^{2}{e^ {- 335.9789467\,{\frac {{3}^{2/3}{d}^{2}}{\sqrt [3]{{\pi }^{5}\rho}}}}} ,$$
$$K \left( d,\alpha,\beta \right) = 0.2500000000\,{\lambda}^{2}\ln \left( 1+2\,{\frac {\iota\, \left( {d}^{2}+N \left( \alpha,\beta \right) {d}^{4} \right) }{\lambda\, \left( 1+N \left( \alpha,\beta \right) {d}^{2}+ \left( N \left( \alpha,\beta \right) \right) ^{2}{d} ^{4} \right) }} \right) {\iota}^{-1} ,$$
$$N \left( \alpha,\beta \right) =2\,\iota{\lambda}^{-1} \left( {e^{-4\,{ \frac {\iota\,\epsilon \left( \alpha,\beta \right) }{{\lambda}^{2}}}}}- 1 \right) ^{-1} ,$$
$$Q=1/12\,{\frac {\sqrt {\sigma \left( {\it ss} \right) }\sqrt [3]{2}{3}^ {5/6}}{\sqrt [6]{{\pi }^{-1}}{\rho}^{7/6}}} .$$
$$g=1/2\,E \left( 2\,\rho \left( s \right) \right) ,$$
$$G=1/2\,E \left( 2\,\rho \left( s \right) \right) ,$$
$$E \left( n \right) =-3/4\,{\frac {\sqrt [3]{3}\sqrt [3]{{\pi }^{2}}{n}^ {4/3}F \left( S \right) }{\pi }} ,$$
$$S=1/12\,{\frac {\chi \left( s \right) {6}^{2/3}}{\sqrt [3]{{\pi }^{2}}} } ,$$
$$F \left( S \right) ={\frac {1+ 0.19645\,S{\it arcsinh} \left( 7.7956\, S \right) + \left( 0.2743- 0.1508\,{e^{-100\,{S}^{2}}} \right) {S}^{2} }{1+ 0.19645\,S{\it arcsinh} \left( 7.7956\,S \right) + 0.004\,{S}^{4} }} .$$
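This enhancement factor involves an inverse hyperbolic sine; a direct Python transcription (a sketch; Python's `math.asinh` plays the role of arcsinh in the formula):

```python
import math

def F_arcsinh(S):
    """F(S) from the formula above; the asinh term appears in both numerator
    and denominator, so F(0) = 1 exactly."""
    common = 0.19645 * S * math.asinh(7.7956 * S)
    num = 1.0 + common + (0.2743 - 0.1508 * math.exp(-100.0 * S * S)) * S * S
    den = 1.0 + common + 0.004 * S**4
    return num / den
```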
Electron-gas correlation energy. $$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$f=\rho\,\epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) ,$$
$$\epsilon \left( \alpha,\beta \right) =e \left( r \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( r \left( \alpha,\beta \right) ,T_{{3}},U_{{3 }},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P_{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^{4} \right) }{c}}+ \left( e \left( r \left( \alpha,\beta \right) ,T_{{2}},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y _{{2}},P_{{2}} \right) -e \left( r \left( \alpha,\beta \right) ,T_{{1}} ,U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{4} ,$$
$$r \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 .$$
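The spin-interpolated electron-gas correlation energy above can be transcribed into Python as follows (a sketch in atomic units; the 1-based subscripts T_1..T_3 of the listing become 0-based indices, and r(alpha, beta) is the Wigner-Seitz radius of the total density):

```python
import math

T = [0.031091, 0.015545, 0.016887]
U = [0.21370, 0.20548, 0.11125]
V = [7.5957, 14.1189, 10.357]
W = [3.5876, 6.1977, 3.6231]
X = [1.6382, 3.3662, 0.88026]
Y = [0.49294, 0.62517, 0.49671]
P = [1, 1, 1]
c = 1.709921

def e(r, t, u, v, w, x, y, p):
    """e(r,...) = -2t(1+ur) ln(1 + 1/(2t(v*sqrt(r) + w*r + x*r^(3/2) + y*r^(p+1))))."""
    return -2.0 * t * (1.0 + u * r) * math.log(
        1.0 + 1.0 / (2.0 * t * (v * math.sqrt(r) + w * r
                                + x * r**1.5 + y * r**(p + 1))))

def omega(z):
    """Spin interpolation weight; omega(0) = 0, omega(+-1) = 1."""
    return ((1.0 + z)**(4.0/3.0) + (1.0 - z)**(4.0/3.0) - 2.0) \
        / (2.0 * 2.0**(1.0/3.0) - 2.0)

def eps(alpha, beta):
    """epsilon(alpha, beta), transcribed term by term from the listing above."""
    r = (3.0 / (4.0 * math.pi * (alpha + beta))) ** (1.0 / 3.0)
    z = (alpha - beta) / (alpha + beta)
    e1 = e(r, T[0], U[0], V[0], W[0], X[0], Y[0], P[0])
    e2 = e(r, T[1], U[1], V[1], W[1], X[1], Y[1], P[1])
    e3 = e(r, T[2], U[2], V[2], W[2], X[2], Y[2], P[2])
    return e1 - e3 * omega(z) * (1.0 - z**4) / c + (e2 - e1) * omega(z) * z**4
```

For a closed-shell density with r = 1 (total density 3/(4*pi)), the weight omega vanishes and eps reduces to the first e term, roughly -0.0598 per electron.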
$$g=\rho \left( s \right) .$$
Automatically generated Thomas-Fermi Kinetic Energy. $$g={\it ctf}\, \left( \rho \left( s \right) \right) ^{5/3} ,$$
$${\it ctf}=3/10\,{2}^{2/3}{3}^{2/3} \left( {\pi }^{2} \right) ^{2/3} .$$
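Since the functional is applied per spin channel, the prefactor here is the spin-polarized Thomas-Fermi constant: 2^(2/3) * 3^(2/3) = 6^(2/3), so ctf = (3/10)(6*pi^2)^(2/3), roughly 4.5577. A one-line Python check:

```python
import math

def tau_tf(rho_s):
    """Per-spin Thomas-Fermi kinetic energy density g = ctf * rho_s^(5/3),
    with ctf = (3/10)*(6*pi^2)^(2/3) as in the formula above."""
    ctf = 0.3 * (6.0 * math.pi**2) ** (2.0 / 3.0)
    return ctf * rho_s ** (5.0 / 3.0)
```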
Density and gradient dependent first row exchange-correlation functional. $$t=[7/6,4/3,3/2,5/3,4/3,3/2,5/3,{\frac {11}{6}},3/2,5/3,{\frac {11}{6}}, 2,3/2,5/3,{\frac {11}{6}},2,7/6,4/3,3/2,5/3,1] ,$$
$$u=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0] ,$$
$$v=[0,0,0,0,1,1,1,1,2,2,2,2,0,0,0,0,0,0,0,0,0] ,$$
$$w=[0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0] ,$$
$$\omega=[- 0.728255, 0.331699,- 1.02946, 0.235703,- 0.0876221, 0.140854, 0.0336982,- 0.0353615, 0.00497930,- 0.0645900, 0.0461795,- 0.00757191, - 0.00242717, 0.0428140,- 0.0744891, 0.0386577,- 0.352519, 2.19805,- 3.72927, 1.94441, 0.128877] ,$$
$$n=21 ,$$
$$R_{{i}}= \left( \rho \left( a \right) \right) ^{t_{{i}}}+ \left( \rho \left( b \right) \right) ^{t_{{i}}} ,$$
$$S_{{i}}= \left( {\frac {\rho \left( a \right) -\rho \left( b \right) }{ \rho}} \right) ^{2\,u_{{i}}} ,$$
$$X_{{i}}=1/2\,{\frac { \left( \sqrt {\sigma \left( {\it aa} \right) } \right) ^{v_{{i}}}+ \left( \sqrt {\sigma \left( {\it bb} \right) } \right) ^{v_{{i}}}}{{\rho}^{4/3\,v_{{i}}}}} ,$$
$$Y_{{i}}= \left( {\frac {\sigma \left( {\it aa} \right) +\sigma \left( { \it bb} \right) -2\,\sqrt {\sigma \left( {\it aa} \right) }\sqrt { \sigma \left( {\it bb} \right) }}{{\rho}^{8/3}}} \right) ^{w_{{i}}} ,$$
$$f=\sum _{i=1}^{n}\omega_{{i}}R_{{i}}S_{{i}}X_{{i}}Y_{{i}} ,$$
$$G=\sum _{i=1}^{n}1/2\,\omega_{{i}} \left( \rho \left( s \right) \right) ^{t_{{i}}} \left( \sqrt {\sigma \left( {\it ss} \right) } \right) ^{v_{{i}}} \left( {\frac {\sigma \left( {\it ss} \right) }{ \left( \rho \left( s \right) \right) ^{8/3}}} \right) ^{w_{{i}}} \left( \left( \rho \left( s \right) \right) ^{4/3\,v_{{i}}} \right) ^{-1} .$$
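All four fitted functionals in this family share the same product structure f = sum_i omega_i R_i S_i X_i Y_i and differ only in the exponent tables and weights. A generic Python evaluator, shown here with the coefficient tables of the first functional above (a sketch; densities are assumed positive and all quantities are in atomic units):

```python
import math

# Exponent tables and fit weights of the first functional above (n = 21 terms).
t = [7/6, 4/3, 3/2, 5/3, 4/3, 3/2, 5/3, 11/6, 3/2, 5/3, 11/6,
     2, 3/2, 5/3, 11/6, 2, 7/6, 4/3, 3/2, 5/3, 1]
u = [0]*16 + [1, 1, 1, 1] + [0]
v = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2] + [0]*9
w = [0]*12 + [1, 1, 1, 1] + [0]*5
omega = [-0.728255, 0.331699, -1.02946, 0.235703, -0.0876221, 0.140854,
         0.0336982, -0.0353615, 0.00497930, -0.0645900, 0.0461795,
         -0.00757191, -0.00242717, 0.0428140, -0.0744891, 0.0386577,
         -0.352519, 2.19805, -3.72927, 1.94441, 0.128877]

def f_xc(rho_a, rho_b, sigma_aa, sigma_bb):
    """f = sum_i omega_i * R_i * S_i * X_i * Y_i, transcribed from the
    definitions of R, S, X, and Y above."""
    rho = rho_a + rho_b
    total = 0.0
    for ti, ui, vi, wi, wt in zip(t, u, v, w, omega):
        R = rho_a**ti + rho_b**ti
        S = ((rho_a - rho_b) / rho) ** (2 * ui)
        X = 0.5 * (math.sqrt(sigma_aa)**vi + math.sqrt(sigma_bb)**vi) \
            / rho ** (4.0 * vi / 3.0)
        Y = ((sigma_aa + sigma_bb
              - 2.0 * math.sqrt(sigma_aa) * math.sqrt(sigma_bb))
             / rho ** (8.0 / 3.0)) ** wi
        total += wt * R * S * X * Y
    return total
```

For a spin-compensated density with vanishing gradients, only the terms with u_i = v_i = w_i = 0 survive; the remaining three functionals in this family are evaluated by swapping in their own t, u, v, w, and omega tables.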
Density and gradient dependent first row exchange-correlation functional. $$t=[{\frac {13}{12}},7/6,4/3,3/2,5/3,{\frac {17}{12}},3/2,5/3,{\frac {11 }{6}},5/3,{\frac {11}{6}},2,5/3,{\frac {11}{6}},2,7/6,4/3,3/2,5/3] ,$$
$$u=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1] ,$$
$$v=[0,0,0,0,0,1,1,1,1,2,2,2,0,0,0,0,0,0,0] ,$$
$$w=[0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0] ,$$
$$\omega=[ 0.678831,- 1.75821, 1.27676,- 1.60789, 0.365610,- 0.181327, 0.146973, 0.147141,- 0.0716917,- 0.0407167, 0.0214625,- 0.000768156, 0.0310377,- 0.0720326, 0.0446562,- 0.266802, 1.50822,- 1.94515, 0.679078] ,$$
$$n=19 ,$$
$$R_{{i}}= \left( \rho \left( a \right) \right) ^{t_{{i}}}+ \left( \rho \left( b \right) \right) ^{t_{{i}}} ,$$
$$S_{{i}}= \left( {\frac {\rho \left( a \right) -\rho \left( b \right) }{ \rho}} \right) ^{2\,u_{{i}}} ,$$
$$X_{{i}}=1/2\,{\frac { \left( \sqrt {\sigma \left( {\it aa} \right) } \right) ^{v_{{i}}}+ \left( \sqrt {\sigma \left( {\it bb} \right) } \right) ^{v_{{i}}}}{{\rho}^{4/3\,v_{{i}}}}} ,$$
$$Y_{{i}}= \left( {\frac {\sigma \left( {\it aa} \right) +\sigma \left( { \it bb} \right) -2\,\sqrt {\sigma \left( {\it aa} \right) }\sqrt { \sigma \left( {\it bb} \right) }}{{\rho}^{8/3}}} \right) ^{w_{{i}}} ,$$
$$f=\sum _{i=1}^{n}\omega_{{i}}R_{{i}}S_{{i}}X_{{i}}Y_{{i}} ,$$
$$G=\sum _{i=1}^{n}1/2\,\omega_{{i}} \left( \rho \left( s \right) \right) ^{t_{{i}}} \left( \sqrt {\sigma \left( {\it ss} \right) } \right) ^{v_{{i}}} \left( {\frac {\sigma \left( {\it ss} \right) }{ \left( \rho \left( s \right) \right) ^{8/3}}} \right) ^{w_{{i}}} \left( \left( \rho \left( s \right) \right) ^{4/3\,v_{{i}}} \right) ^{-1} .$$
Density and gradient dependent first and second row exchange-correlation functional. $$t=[7/6,4/3,3/2,5/3,{\frac {17}{12}},3/2,5/3,{\frac {11}{6}},5/3,{\frac {11}{6}},2,5/3,{\frac {11}{6}},2,7/6,4/3,3/2,5/3,{\frac {13}{12}}] ,$$
$$u=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0] ,$$
$$v=[0,0,0,0,1,1,1,1,2,2,2,0,0,0,0,0,0,0,0] ,$$
$$w=[0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0] ,$$
$$\omega=[- 0.142542,- 0.783603,- 0.188875, 0.0426830,- 0.304953, 0.430407,- 0.0997699, 0.00355789,- 0.0344374, 0.0192108,- 0.00230906, 0.0235189,- 0.0331157, 0.0121316, 0.441190,- 2.27167, 4.03051,- 2.28074, 0.0360204] ,$$
$$n=19 ,$$
$$R_{{i}}= \left( \rho \left( a \right) \right) ^{t_{{i}}}+ \left( \rho \left( b \right) \right) ^{t_{{i}}} ,$$
$$S_{{i}}= \left( {\frac {\rho \left( a \right) -\rho \left( b \right) }{ \rho}} \right) ^{2\,u_{{i}}} ,$$
$$X_{{i}}=1/2\,{\frac { \left( \sqrt {\sigma \left( {\it aa} \right) } \right) ^{v_{{i}}}+ \left( \sqrt {\sigma \left( {\it bb} \right) } \right) ^{v_{{i}}}}{{\rho}^{4/3\,v_{{i}}}}} ,$$
$$Y_{{i}}= \left( {\frac {\sigma \left( {\it aa} \right) +\sigma \left( { \it bb} \right) -2\,\sqrt {\sigma \left( {\it aa} \right) }\sqrt { \sigma \left( {\it bb} \right) }}{{\rho}^{8/3}}} \right) ^{w_{{i}}} ,$$
$$f=\sum _{i=1}^{n}\omega_{{i}}R_{{i}}S_{{i}}X_{{i}}Y_{{i}} ,$$
$$G=\sum _{i=1}^{n}1/2\,\omega_{{i}} \left( \rho \left( s \right) \right) ^{t_{{i}}} \left( \sqrt {\sigma \left( {\it ss} \right) } \right) ^{v_{{i}}} \left( {\frac {\sigma \left( {\it ss} \right) }{ \left( \rho \left( s \right) \right) ^{8/3}}} \right) ^{w_{{i}}} \left( \left( \rho \left( s \right) \right) ^{4/3\,v_{{i}}} \right) ^{-1} .$$
Density and gradient dependent first and second row exchange-correlation functional. $$t=[7/6,4/3,3/2,5/3,{\frac {17}{12}},3/2,5/3,{\frac {11}{6}},5/3,{\frac {11}{6}},2,5/3,{\frac {11}{6}},2,7/6,4/3,3/2,5/3,{\frac {13}{12}}] ,$$
$$u=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0] ,$$
$$v=[0,0,0,0,1,1,1,1,2,2,2,0,0,0,0,0,0,0,0] ,$$
$$w=[0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0] ,$$
$$\omega=[ 0.0677353,- 1.06763,- 0.0419018, 0.0226313,- 0.222478, 0.283432,- 0.0165089,- 0.0167204,- 0.0332362, 0.0162254,- 0.000984119, 0.0376713,- 0.0653419, 0.0222835, 0.375782,- 1.90675, 3.22494,- 1.68698,- 0.0235810] ,$$
$$n=19 ,$$
$$R_{{i}}= \left( \rho \left( a \right) \right) ^{t_{{i}}}+ \left( \rho \left( b \right) \right) ^{t_{{i}}} ,$$
$$S_{{i}}= \left( {\frac {\rho \left( a \right) -\rho \left( b \right) }{ \rho}} \right) ^{2\,u_{{i}}} ,$$
$$X_{{i}}=1/2\,{\frac { \left( \sqrt {\sigma \left( {\it aa} \right) } \right) ^{v_{{i}}}+ \left( \sqrt {\sigma \left( {\it bb} \right) } \right) ^{v_{{i}}}}{{\rho}^{4/3\,v_{{i}}}}} ,$$
$$Y_{{i}}= \left( {\frac {\sigma \left( {\it aa} \right) +\sigma \left( { \it bb} \right) -2\,\sqrt {\sigma \left( {\it aa} \right) }\sqrt { \sigma \left( {\it bb} \right) }}{{\rho}^{8/3}}} \right) ^{w_{{i}}} ,$$
$$f=\sum _{i=1}^{n}\omega_{{i}}R_{{i}}S_{{i}}X_{{i}}Y_{{i}} ,$$
$$G=\sum _{i=1}^{n}1/2\,\omega_{{i}} \left( \rho \left( s \right) \right) ^{t_{{i}}} \left( \sqrt {\sigma \left( {\it ss} \right) } \right) ^{v_{{i}}} \left( {\frac {\sigma \left( {\it ss} \right) }{ \left( \rho \left( s \right) \right) ^{8/3}}} \right) ^{w_{{i}}} \left( \left( \rho \left( s \right) \right) ^{4/3\,v_{{i}}} \right) ^{-1} .$$
Density and gradient dependent first row exchange-correlation functional for closed shell systems. Total energies are improved by adding $DN$, where $N$ is the number of electrons and $D=0.1863$. $$t=[7/6,4/3,3/2,5/3,4/3,3/2,5/3,{\frac {11}{6}},3/2,5/3,{\frac {11}{6}}, 2] ,$$
$$v=[0,0,0,0,1,1,1,1,2,2,2,2] ,$$
$$\omega=[- 0.864448, 0.565130,- 1.27306, 0.309681,- 0.287658, 0.588767,- 0.252700, 0.0223563, 0.0140131,- 0.0826608, 0.0556080,- 0.00936227] ,$$
$$n=12 ,$$
$$R_{{i}}= \left( \rho \left( a \right) \right) ^{t_{{i}}}+ \left( \rho \left( b \right) \right) ^{t_{{i}}} ,$$
$$X_{{i}}=1/2\,{\frac { \left( \sqrt {\sigma \left( {\it aa} \right) } \right) ^{v_{{i}}}+ \left( \sqrt {\sigma \left( {\it bb} \right) } \right) ^{v_{{i}}}}{{\rho}^{4/3\,v_{{i}}}}} ,$$
$$f=\sum _{i=1}^{n}\omega_{{i}}R_{{i}}X_{{i}} ,$$
$$G=\sum _{i=1}^{n}1/2\,{\frac {\omega_{{i}} \left( \rho \left( s \right) \right) ^{t_{{i}}} \left( \sqrt {\sigma \left( {\it ss} \right) } \right) ^{v_{{i}}}}{{\rho}^{4/3\,v_{{i}}}}} .$$
Density and gradient dependent first row exchange-correlation functional. FCFO = FC + open shell fitting. $$t=[7/6,4/3,3/2,5/3,4/3,3/2,5/3,{\frac {11}{6}},3/2,5/3,{\frac {11}{6}}, 2,3/2,5/3,{\frac {11}{6}},2,7/6,4/3,3/2,5/3] ,$$
$$u=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1] ,$$
$$v=[0,0,0,0,1,1,1,1,2,2,2,2,0,0,0,0,0,0,0,0] ,$$
$$w=[0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0] ,$$
$$\omega=[- 0.864448, 0.565130,- 1.27306, 0.309681,- 0.287658, 0.588767,- 0.252700, 0.0223563, 0.0140131,- 0.0826608, 0.0556080,- 0.00936227,- 0.00677146, 0.0515199,- 0.0874213, 0.0423827, 0.431940,- 0.691153,- 0.637866, 1.07565] ,$$
$$n=20 ,$$
$$R_{{i}}= \left( \rho \left( a \right) \right) ^{t_{{i}}}+ \left( \rho \left( b \right) \right) ^{t_{{i}}} ,$$
$$S_{{i}}= \left( {\frac {\rho \left( a \right) -\rho \left( b \right) }{ \rho}} \right) ^{2\,u_{{i}}} ,$$
$$X_{{i}}=1/2\,{\frac { \left( \sqrt {\sigma \left( {\it aa} \right) } \right) ^{v_{{i}}}+ \left( \sqrt {\sigma \left( {\it bb} \right) } \right) ^{v_{{i}}}}{{\rho}^{4/3\,v_{{i}}}}} ,$$
$$Y_{{i}}= \left( {\frac {\sigma \left( {\it aa} \right) +\sigma \left( { \it bb} \right) -2\,\sqrt {\sigma \left( {\it aa} \right) }\sqrt { \sigma \left( {\it bb} \right) }}{{\rho}^{8/3}}} \right) ^{w_{{i}}} ,$$
$$f=\sum _{i=1}^{n}\omega_{{i}}R_{{i}}S_{{i}}X_{{i}}Y_{{i}} ,$$
$$G=\sum _{i=1}^{n}1/2\,\omega_{{i}} \left( \rho \left( s \right) \right) ^{t_{{i}}} \left( \sqrt {\sigma \left( {\it ss} \right) } \right) ^{v_{{i}}} \left( {\frac {\sigma \left( {\it ss} \right) }{ \left( \rho \left( s \right) \right) ^{8/3}}} \right) ^{w_{{i}}} \left( \left( \rho \left( s \right) \right) ^{4/3\,v_{{i}}} \right) ^{-1} .$$
Density and gradient dependent first row exchange-correlation functional. $$t=[7/6,4/3,3/2,5/3,4/3,3/2,5/3,{\frac {11}{6}},3/2,5/3,{\frac {11}{6}}, 2,3/2,5/3,{\frac {11}{6}},2,7/6,4/3,3/2,5/3] ,$$
$$u=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1] ,$$
$$v=[0,0,0,0,1,1,1,1,2,2,2,2,0,0,0,0,0,0,0,0] ,$$
$$w=[0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0] ,$$
$$\omega=[- 0.962998, 0.860233,- 1.54092, 0.381602,- 0.210208, 0.391496,- 0.107660,- 0.0105324, 0.00837384,- 0.0617859, 0.0383072,- 0.00526905,- 0.00381514, 0.0321541,- 0.0568280, 0.0288585, 0.368326,- 0.328799,- 1.22595, 1.36412] ,$$
$$n=20 ,$$
$$R_{{i}}= \left( \rho \left( a \right) \right) ^{t_{{i}}}+ \left( \rho \left( b \right) \right) ^{t_{{i}}} ,$$
$$S_{{i}}= \left( {\frac {\rho \left( a \right) -\rho \left( b \right) }{ \rho}} \right) ^{2\,u_{{i}}} ,$$
$$X_{{i}}=1/2\,{\frac { \left( \sqrt {\sigma \left( {\it aa} \right) } \right) ^{v_{{i}}}+ \left( \sqrt {\sigma \left( {\it bb} \right) } \right) ^{v_{{i}}}}{{\rho}^{4/3\,v_{{i}}}}} ,$$
$$Y_{{i}}= \left( {\frac {\sigma \left( {\it aa} \right) +\sigma \left( { \it bb} \right) -2\,\sqrt {\sigma \left( {\it aa} \right) }\sqrt { \sigma \left( {\it bb} \right) }}{{\rho}^{8/3}}} \right) ^{w_{{i}}} ,$$
$$f=\sum _{i=1}^{n}\omega_{{i}}R_{{i}}S_{{i}}X_{{i}}Y_{{i}} ,$$
$$G=\sum _{i=1}^{n}1/2\,\omega_{{i}} \left( \rho \left( s \right) \right) ^{t_{{i}}} \left( \sqrt {\sigma \left( {\it ss} \right) } \right) ^{v_{{i}}} \left( {\frac {\sigma \left( {\it ss} \right) }{ \left( \rho \left( s \right) \right) ^{8/3}}} \right) ^{w_{{i}}} \left( \left( \rho \left( s \right) \right) ^{4/3\,v_{{i}}} \right) ^{-1} .$$
Density dependent first row exchange-correlation functional for closed shell systems. $$t=[7/6,4/3,3/2,5/3] ,$$
$$\omega=[- 1.06141, 0.898203,- 1.34439, 0.302369] ,$$
$$n=4 ,$$
$$R_{{i}}= \left( \rho \left( a \right) \right) ^{t_{{i}}}+ \left( \rho \left( b \right) \right) ^{t_{{i}}} ,$$
$$f=\sum _{i=1}^{n}\omega_{{i}}R_{{i}} .$$
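As a minimal numerical sketch, the density-only form above, $f=\sum _{i}\omega_{{i}}\left( \left( \rho \left( a \right) \right) ^{t_{{i}}}+ \left( \rho \left( b \right) \right) ^{t_{{i}}} \right)$, can be evaluated directly; the density values below are illustrative, not taken from any real system:

```python
# Exponents t and fitted weights omega of the n = 4 density-only functional above.
t = [7/6, 4/3, 3/2, 5/3]
omega = [-1.06141, 0.898203, -1.34439, 0.302369]

def f_density_only(rho_a, rho_b):
    """f = sum_i omega_i * (rho_a**t_i + rho_b**t_i)."""
    return sum(w * (rho_a ** ti + rho_b ** ti) for w, ti in zip(omega, t))

# Hypothetical closed-shell point with rho_a = rho_b = 0.5 (illustrative only);
# the exchange-correlation energy density comes out negative, as expected.
energy_density = f_density_only(0.5, 0.5)
```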
J. Tao, J. P. Perdew, V. N. Staroverov, and G. E. Scuseria, Phys. Rev. Lett. 91, 146401 (2003).
J. Tao, J. P. Perdew, V. N. Staroverov, and G. E. Scuseria, Phys. Rev. Lett. 91, 146401 (2003).
$$p=[- 0.98, 0.3271, 0.7035] ,$$
$$q=[- 0.003557,- 0.03229, 0.007695] ,$$
$$r=[ 0.00625,- 0.02942, 0.05153] ,$$
$$t=[- 0.00002354, 0.002134, 0.00003394] ,$$
$$u=[- 0.0001283,- 0.005452,- 0.001269] ,$$
$$v=[ 0.0003575, 0.01578, 0.001296] ,$$
$$\alpha=[ 0.001867, 0.005151, 0.00305] ,$$
$$g= \left( \rho \left( s \right) \right) ^{4/3}F \left( \chi \left( s \right) ,{\it zs},p_{{1}},q_{{1}},r_{{1}},t_{{1}},u_{{1}},v_{{1}}, \alpha_{{1}} \right) +{\it ds}\,\epsilon \left( \rho \left( s \right) ,0 \right) F \left( \chi \left( s \right) ,{\it zs},p_{{2}},q_{{2}},r_{{2 }},t_{{2}},u_{{2}},v_{{2}},\alpha_{{2}} \right) ,$$
$$G= \left( \rho \left( s \right) \right) ^{4/3}F \left( \chi \left( s \right) ,{\it zs},p_{{1}},q_{{1}},r_{{1}},t_{{1}},u_{{1}},v_{{1}}, \alpha_{{1}} \right) +{\it ds}\,\epsilon \left( \rho \left( s \right) ,0 \right) F \left( \chi \left( s \right) ,{\it zs},p_{{2}},q_{{2}},r_{{2 }},t_{{2}},u_{{2}},v_{{2}},\alpha_{{2}} \right) ,$$
$$f=F \left( x,z,p_{{3}},q_{{3}},r_{{3}},t_{{3}},u_{{3}},v_{{3}},\alpha_{ {3}} \right) \left( \epsilon \left( \rho \left( a \right) ,\rho \left( b \right) \right) -\epsilon \left( \rho \left( a \right) ,0 \right) -\epsilon \left( \rho \left( b \right) ,0 \right) \right) ,$$
$$x= \left( \chi \left( a \right) \right) ^{2}+ \left( \chi \left( b \right) \right) ^{2} ,$$
$${\it zs}={\frac {\tau \left( s \right) }{ \left( \rho \left( s \right) \right) ^{5/3}}}-{\it cf} ,$$
$$z={\frac {\tau \left( a \right) }{ \left( \rho \left( a \right) \right) ^{5/3}}}+{\frac {\tau \left( b \right) }{ \left( \rho \left( b \right) \right) ^{5/3}}}-2\,{\it cf} ,$$
$${\it ds}=1-{\frac { \left( \chi \left( s \right) \right) ^{2}}{4\,{ \it zs}+4\,{\it cf}}} ,$$
$$F \left( x,z,p,q,c,d,e,f,\alpha \right) ={\frac {p}{\lambda \left( x,z, \alpha \right) }}+{\frac {q{x}^{2}+cz}{ \left( \lambda \left( x,z, \alpha \right) \right) ^{2}}}+{\frac {d{x}^{4}+e{x}^{2}z+f{z}^{2}}{ \left( \lambda \left( x,z,\alpha \right) \right) ^{3}}} ,$$
$$\lambda \left( x,z,\alpha \right) =1+\alpha\, \left( {x}^{2}+z \right) ,$$
$${\it cf}=3/5\,{3}^{2/3} \left( {\pi }^{2} \right) ^{2/3} ,$$
$$T=[ 0.031091, 0.015545, 0.016887] ,$$
$$U=[ 0.21370, 0.20548, 0.11125] ,$$
$$V=[ 7.5957, 14.1189, 10.357] ,$$
$$W=[ 3.5876, 6.1977, 3.6231] ,$$
$$X=[ 1.6382, 3.3662, 0.88026] ,$$
$$Y=[ 0.49294, 0.62517, 0.49671] ,$$
$$P=[1,1,1] ,$$
$$\epsilon \left( \alpha,\beta \right) = \left( \alpha+\beta \right) \left( e \left( l \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}} ,W_{{1}},X_{{1}},Y_{{1}},P_{{1}} \right) -{\frac {e \left( l \left( \alpha,\beta \right) ,T_{{3}},U_{{3}},V_{{3}},W_{{3}},X_{{3}},Y_{{3}},P _{{3}} \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( 1- \left( \zeta \left( \alpha,\beta \right) \right) ^ {4} \right) }{c}}+ \left( e \left( l \left( \alpha,\beta \right) ,T_{{2 }},U_{{2}},V_{{2}},W_{{2}},X_{{2}},Y_{{2}},P_{{2}} \right) -e \left( l \left( \alpha,\beta \right) ,T_{{1}},U_{{1}},V_{{1}},W_{{1}},X_{{1}},Y _{{1}},P_{{1}} \right) \right) \omega \left( \zeta \left( \alpha,\beta \right) \right) \left( \zeta \left( \alpha,\beta \right) \right) ^{ 4} \right) ,$$
$$l \left( \alpha,\beta \right) =1/4\,\sqrt [3]{3}{4}^{2/3}\sqrt [3]{{ \frac {1}{\pi \, \left( \alpha+\beta \right) }}} ,$$
$$\zeta \left( \alpha,\beta \right) ={\frac {\alpha-\beta}{\alpha+\beta}} ,$$
$$\omega \left( z \right) ={\frac { \left( 1+z \right) ^{4/3}+ \left( 1-z \right) ^{4/3}-2}{2\,\sqrt [3]{2}-2}} ,$$
$$e \left( r,t,u,v,w,x,y,p \right) =-2\,t \left( 1+ur \right) \ln \left( 1+1/2\,{\frac {1}{t \left( v\sqrt {r}+wr+x{r}^{3/2}+y{r}^{p+1} \right) }} \right) ,$$
$$c= 1.709921 .$$
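The building block $e \left( r,t,u,v,w,x,y,p \right)$ defined above is a PW92-style fit of the uniform-gas correlation energy per particle as a function of $r_{s}$; it can be evaluated numerically. A sketch using the first entries of the parameter arrays $T,\dots,P$, which enter $\epsilon \left( \alpha,\beta \right)$ as its leading term (the variable names below are mine):

```python
import math

def e(r, t, u, v, w, x, y, p):
    """e(r,t,u,v,w,x,y,p) = -2*t*(1+u*r) * ln(1 + 1/(2*t*(v*sqrt(r) + w*r + x*r**1.5 + y*r**(p+1))))."""
    s = v * math.sqrt(r) + w * r + x * r ** 1.5 + y * r ** (p + 1)
    return -2.0 * t * (1.0 + u * r) * math.log(1.0 + 1.0 / (2.0 * t * s))

# First-column parameters T1, U1, ..., P1 from the arrays above.
T1, U1, V1, W1, X1, Y1, P1 = 0.031091, 0.21370, 7.5957, 3.5876, 1.6382, 0.49294, 1

# At r_s = 1 this evaluates to roughly -0.060 hartree per particle.
eps_rs1 = e(1.0, T1, U1, V1, W1, X1, Y1, P1)
```

At $r_{s}=1$ the value is about $-0.060$ hartree, consistent with the accepted uniform-gas correlation energy at that density.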
Automatically generated von Weizsäcker kinetic energy. $$g={\frac {c\sigma \left( {\it ss} \right) }{\rho \left( s \right) }} ,$$
$$G={\frac {c\sigma \left( {\it ss} \right) }{\rho \left( s \right) }} ,$$
$$c=1/8 .$$
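The entry above is simple enough to evaluate in one line; a sketch, with `sigma` standing for $\sigma \left( {\it ss} \right) = |\nabla\rho|^{2}$ and purely illustrative input values:

```python
def von_weizsacker(rho, sigma):
    """von Weizsacker kinetic-energy density g = c * sigma / rho with c = 1/8,
    where sigma is the squared density gradient sigma(ss)."""
    return sigma / (8.0 * rho)

# Illustrative point: rho = 1.0, |grad rho|^2 = 0.4 gives g = 0.05.
tw = von_weizsacker(1.0, 0.4)
```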
VWN 1980(III) functional. $$x=1/4\,\sqrt [6]{3}{4}^{5/6}\sqrt [6]{{\frac {1}{\pi \,\rho}}} ,$$
$$\zeta={\frac {\rho \left( a \right) -\rho \left( b \right) }{\rho}} ,$$
$$f=\rho\,e ,$$
$$k=[ 0.0310907, 0.01554535,-1/6\,{\pi }^{-2}] ,$$
$$l=[- 0.409286,- 0.743294,- 0.228344] ,$$
$$m=[ 13.0720, 20.1231, 1.06835] ,$$
$$n=[ 42.7198, 101.578, 11.4813] ,$$
$$e=\Lambda+z \left( \lambda-\Lambda \right) ,$$
$$y={\frac {9}{8}}\, \left( 1+\zeta \right) ^{4/3}+{\frac {9}{8}}\, \left( 1-\zeta \right) ^{4/3}-9/4 ,$$
$$\Lambda=q \left( k_{{1}},l_{{1}},m_{{1}},n_{{1}} \right) ,$$
$$\lambda=q \left( k_{{2}},l_{{2}},m_{{2}},n_{{2}} \right) ,$$
$$q \left( A,p,c,d \right) =A \left( \ln \left( {\frac {{x}^{2}}{X \left( x,c,d \right) }} \right) +2\,c\arctan \left( {\frac {Q \left( c ,d \right) }{2\,x+c}} \right) \left( Q \left( c,d \right) \right) ^{- 1}-cp \left( \ln \left( {\frac { \left( x-p \right) ^{2}}{X \left( x,c ,d \right) }} \right) +2\, \left( c+2\,p \right) \arctan \left( {\frac {Q \left( c,d \right) }{2\,x+c}} \right) \left( Q \left( c,d \right) \right) ^{-1} \right) \left( X \left( p,c,d \right) \right) ^{-1} \right) ,$$
$$Q \left( c,d \right) =\sqrt {4\,d-{c}^{2}} ,$$
$$X \left( i,c,d \right) ={i}^{2}+ci+d ,$$
$$z=4\,{\frac {y}{9\,\sqrt [3]{2}-9}} .$$
VWN 1980(V) functional. The fitting parameters for $\Delta\varepsilon_{c}(r_{s},\zeta)_{V}$ appear in the caption of table 7 in the reference. $$x=1/4\,\sqrt [6]{3}{4}^{5/6}\sqrt [6]{{\frac {1}{\pi \,\rho}}} ,$$
$$\zeta={\frac {\rho \left( a \right) -\rho \left( b \right) }{\rho}} ,$$
$$f=\rho\,e ,$$
$$k=[ 0.0310907, 0.01554535,-1/6\,{\pi }^{-2}] ,$$
$$l=[- 0.10498,- 0.325,- 0.0047584] ,$$
$$m=[ 3.72744, 7.06042, 1.13107] ,$$
$$n=[ 12.9352, 18.0578, 13.0045] ,$$
$$e=\Lambda+\alpha\,y \left( 1+h{\zeta}^{4} \right) ,$$
$$y={\frac {9}{8}}\, \left( 1+\zeta \right) ^{4/3}+{\frac {9}{8}}\, \left( 1-\zeta \right) ^{4/3}-9/4 ,$$
$$h=4/9\,{\frac {\lambda-\Lambda}{ \left( \sqrt [3]{2}-1 \right) \alpha}} -1 ,$$
$$\Lambda=q \left( k_{{1}},l_{{1}},m_{{1}},n_{{1}} \right) ,$$
$$\lambda=q \left( k_{{2}},l_{{2}},m_{{2}},n_{{2}} \right) ,$$
$$\alpha=q \left( k_{{3}},l_{{3}},m_{{3}},n_{{3}} \right) ,$$
$$q \left( A,p,c,d \right) =A \left( \ln \left( {\frac {{x}^{2}}{X \left( x,c,d \right) }} \right) +2\,c\arctan \left( {\frac {Q \left( c ,d \right) }{2\,x+c}} \right) \left( Q \left( c,d \right) \right) ^{- 1}-cp \left( \ln \left( {\frac { \left( x-p \right) ^{2}}{X \left( x,c ,d \right) }} \right) +2\, \left( c+2\,p \right) \arctan \left( {\frac {Q \left( c,d \right) }{2\,x+c}} \right) \left( Q \left( c,d \right) \right) ^{-1} \right) \left( X \left( p,c,d \right) \right) ^{-1} \right) ,$$
$$Q \left( c,d \right) =\sqrt {4\,d-{c}^{2}} ,$$
$$X \left( i,c,d \right) ={i}^{2}+ci+d .$$
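The paramagnetic ($\zeta=0$) correlation energy per particle of the VWN(V) fit is $\Lambda=q \left( k_{{1}},l_{{1}},m_{{1}},n_{{1}} \right)$ with $x=\sqrt{r_{s}}$ as defined above, and it can be evaluated numerically. A sketch in Python (function and variable names are mine):

```python
import math

def X(i, c, d):
    # X(i,c,d) = i^2 + c*i + d
    return i * i + c * i + d

def q(x, A, p, c, d):
    """The VWN fitting function q(A,p,c,d), evaluated at x = sqrt(r_s)."""
    Q = math.sqrt(4.0 * d - c * c)          # Q(c,d) = sqrt(4d - c^2)
    at = math.atan(Q / (2.0 * x + c))
    return A * (math.log(x * x / X(x, c, d)) + 2.0 * c * at / Q
                - c * p / X(p, c, d)
                * (math.log((x - p) ** 2 / X(x, c, d))
                   + 2.0 * (c + 2.0 * p) * at / Q))

# Paramagnetic-channel parameters k1, l1, m1, n1 of the VWN(V) arrays above.
A1, p1, c1, d1 = 0.0310907, -0.10498, 3.72744, 12.9352

def eps_c_para(rho):
    """Lambda = q(k1, l1, m1, n1) with x = sqrt(r_s) = (3/(4*pi*rho))**(1/6)."""
    x = (3.0 / (4.0 * math.pi * rho)) ** (1.0 / 6.0)
    return q(x, A1, p1, c1, d1)
```

At $r_{s}=1$ (i.e. $\rho = 3/(4\pi)$) this gives roughly $-0.060$ hartree, in line with the published VWN(V) paramagnetic correlation energy.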
Here it means M05 exchange-correlation part which excludes HF exact exchange term. Y. Zhao, N. E. Schultz, and D. G. Truhlar, J. Chem. Phys. 123, 161103 (2005).
Here it means M05-2X exchange-correlation part which excludes HF exact exchange term. Y. Zhao, N. E. Schultz, and D. G. Truhlar, J. Chem. Theory Comput. 2, 364 (2006).
Here it means M06 exchange-correlation part which excludes HF exact exchange term. Y. Zhao and D. G. Truhlar, Theor. Chem. Acc. 120, 215 (2008).
Here it means M06-2X exchange-correlation part which excludes HF exact exchange term. Y. Zhao and D. G. Truhlar, Theor. Chem. Acc. 120, 215 (2008).
Here it means M06-HF exchange-correlation part which excludes HF exact exchange term. Y. Zhao and D. G. Truhlar, J. Phys. Chem. A 110, 13126 (2006).
Y. Zhao and D. G. Truhlar, J. Chem. Phys. 125, 194101 (2006).
Here it means M08-HX exchange-correlation part which excludes HF exact exchange term. Y. Zhao and D. G. Truhlar, J. Chem. Theory Comput. 4, 1849 (2008).
Here it means M08-SO exchange-correlation part which excludes HF exact exchange term. Y. Zhao and D. G. Truhlar, J. Chem. Theory Comput. 4, 1849 (2008).
R. Peverati and D. G. Truhlar, Journal of Physical Chemistry Letters 3, 117 (2012).
Y. Zhao and D. G. Truhlar, J. Chem. Phys. 128, 184109 (2008).
R. Peverati, Y. Zhao and D. G. Truhlar, J. Phys. Chem. Lett. 2 (16), 1991 (2011).
Here it means SOGGA11-X exchange-correlation part which excludes HF exact exchange term. R. Peverati and D. G. Truhlar, J. Chem. Phys. 135, 191102 (2011).
The accelerated failure time (AFT) model is an important alternative to the Cox proportional hazards model (PHM) in survival analysis; it was first advocated as a useful alternative to the PH model for censored time-to-event data by Wei (1992). Survival analysis is a "censored regression" in which the goal is to learn a time-to-event function from data that are only partially observed. Miller (1976) first proposed the AFT model, and Buckley and James (1979) later refined it to obtain an asymptotically consistent estimator.

The model is of the following form: $$\ln Y = \langle w, x \rangle + \sigma Z ,$$ where $Y$ is the survival time, $x$ is a vector in $\mathbb{R}^{d}$ representing the features, $w$ is a vector of $d$ coefficients (one per feature), $\sigma$ is a scale parameter, and $Z$ is a random error term. Whereas the widely used Cox model measures a covariate effect on the hazard (rate) ratio scale, the AFT model measures it on the survival time ratio scale: the effect of a covariate is multiplicative on the time scale rather than on the hazard, so a predictor alters the rate at which a subject proceeds along the time axis, and the biological or mechanical life history of an event is accelerated (or decelerated). For the log-normal special case ($Z$ standard normal), the survival function is $$S \left( t \mid X \right) = 1 - \Phi \left( \frac{\log t - X\beta}{\sigma} \right) .$$ A simple example of an AFT model with two covariates takes $Z_{1}$ = treatment group and $Z_{2}$ = age.

When the error distribution is specified, parametric AFT models (e.g. Weibull, log-normal, log-logistic) are easy to interpret and are more efficient and accurate when the survival times actually follow the assumed distribution; when the distributional assumption is violated, however, they may be inconsistent and give sub-optimal results. Goodness of fit of a parametric model with covariates can be assessed through the log-likelihood, Akaike's information criterion, a Cox-Snell residual plot, an $R^{2}$-type statistic, and similar diagnostics. In the canonical AFT parameterization of the Weibull distribution (used, for example, in Stan implementations), the scale is modelled as a function of the shape parameter $\alpha$ and the covariates.

The standard semiparametric AFT model relates covariates to log-survival time through a linear model while leaving the error distribution unspecified. It can be fitted by rank-based (Gehan-type) estimating equations, optionally with additional sampling weights and fast sandwich variance estimation, as implemented in the aftsrr function of the R package aftgee; an EM algorithm has also been developed to implement estimation. Extensions include: random effects (linear mixed models under the log-transformation of survival time with censoring, fitted by h-likelihood); partial linear models, in which the functional form of a covariate effect is possibly nonlinear and unknown and is approximated by cubic B-splines combined with a Gehan-type estimating function; mixture cure models; models for competing risks; time-dependent covariates $x(t)$ (Cox and Oakes, 1984); Bayesian AFT models for multivariate doubly-interval-censored data whose distributional parts consist of penalized Gaussian mixtures (Komárek and Lesaffre); structural nested accelerated failure-time models; and variable screening for high-dimensional censored data, where most existing methods assess variable importance through marginal models. AFT models are also well known to be useful alternatives to frailty models.

Accelerated failure-time reasoning likewise underlies acceleration models in engineering reliability, where a specific acceleration model must be available for each failure mechanism and the results must be interpreted properly. Assuming a nonparametric accelerated failure-time model, one method extrapolates low stress-response probabilities along negative-sloping line segments in the stress/failure-time plane; analogous to linear interpolation in dose-response studies, it extrapolates simultaneously ahead in time and down in stress.

Applications include insurance attrition and pedestrian behaviour, where a binary logit model and four AFT duration models were used separately to investigate pedestrians' immediate crossing behaviour and waiting behaviour. In one clinical study (keywords, translated from Indonesian: diabetes mellitus, survival analysis, accelerated failure time model, Wald test, log-normal distribution), the best-fitting log-normal AFT model implied that a patient one year older tends to have a shorter time to death than a younger patient.

Keywords: insurance attrition, survival analysis, accelerated failure time model, proportional hazards model.

References: Wei, L. J. (1992), Statistics in Medicine; Cox, D. R., and Oakes, D. (1984), Analysis of Survival Data; Miller, R. G. (1976); Buckley, J., and James, I. (1979); Komárek, A., and Lesaffre, E., "Bayesian Accelerated Failure Time Model with Multivariate Doubly-Interval-Censored Data and Flexible Distributional Assumptions."
Regression methods for ⦠umur most popular abbreviation for Structural Nested accelerated failure-time metrics Stratified Individual-level... Education Institution ⦠umur estimation method for the analysis of time to event data Stat... And can give sub-optimal results, JASA, Vol 79, p {. Model whereas it is well known that the AFT models are useful alternatives to frailty models give results., some survival functions in R only accept a few types of survival data authors: Durri Andriani, Tahar... Durri Andriani, Irsan Tahar, accelerated failure time model tutorial Sarah Hiariey Andriani, Irsan Tahar, Lilian Sarah Hiariey alternative! And can give sub-optimal results dose-response studies ) results in simultaneous extrapolation ahead in and! The survival, OIsurv, and KMsurv packages... covered in this tutorial joint variable screening has gained increased in... Cox models have been extensively applied in medical research is the Stan model for Weibull distributed survival times follow particular. Similar to the Cox proportionalhazards model ( PHM ) in survival analysis, accelerated failure time models for... Useful alternative to the common regression analysis where data-points are uncensored hazard scale in proportional hazard models variable in., Cox & Oakes ( 1984, Ch more efficient and accurate the...: Z1=treatment group, and KMsurv packages... covered in this tutorial analogous to linear interpolation in dose-response studies results. Functions in R only accept a few types of survival data for Weibull distributed times... Failure-Time models14 Acknowledgements, References, & Resources16 1 of ancillary parameters Postestimation algorithm is developed to implement the....
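The log-linear form above can be illustrated with a minimal simulation (a sketch only: the treatment covariate and the parameter values w0 = 2, w1 = 0.7, sigma = 0.5 are hypothetical). With log-normal errors and no censoring, fitting the AFT model reduces to ordinary least squares on the log survival times:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z1 = rng.binomial(1, 0.5, n)            # hypothetical treatment indicator
w0, w1, sigma = 2.0, 0.7, 0.5           # assumed "true" parameters

# AFT form: lnY = w.x + sigma*Z, with standard-normal Z (log-normal survival times)
logY = w0 + w1 * z1 + sigma * rng.standard_normal(n)

# With no censoring, the log-normal AFT fit is just OLS on ln(Y)
X = np.column_stack([np.ones(n), z1])
w_hat, *_ = np.linalg.lstsq(X, logY, rcond=None)

# Acceleration factor for z1 = 1 versus z1 = 0: survival times stretched by exp(w1)
accel = np.exp(w_hat[1])
```

With censored observations the OLS step no longer applies; that is where rank-based (Gehan-type) estimating functions or GEE-based estimators such as those in the aftgee package come in.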
|
{}
|
## Accuracy and INL
The accuracy is defined as the maximum systematic error over the sensor range. The systematic error is the difference between the sensor mean output and the real value to be measured.
For angle sensors, the zero angle indication can often be changed arbitrarily and adjusted by the user.
To characterize an angle sensor system, one must know how flat the “error curve” is. The error curve is the difference between the sensor mean output and a reference angle usually given by an accurate encoder, regardless of the zero setting.
The integral nonlinearity (INL) is a parameter that specifies the maximum deviation between the device output and a linear fit of the output. The INL is defined by:
$$\mathrm{INL} = \frac{1}{2}\left(\max(\text{error curve}) - \min(\text{error curve})\right)$$.
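The definition above is easy to apply to a sampled error curve. In this sketch the sensor's error is a hypothetical 0.5-degree second-harmonic component, not data from any particular device:

```python
import numpy as np

# Reference angle in degrees, e.g. from an accurate encoder
ref = np.linspace(0.0, 360.0, 3601)

# Hypothetical sensor output: reference angle plus a 0.5 deg second-harmonic error
meas = ref + 0.5 * np.sin(np.radians(2.0 * ref))

# Error curve: difference between sensor output and the reference angle
err = meas - ref

# INL = 1/2 * (max(error curve) - min(error curve))
inl = 0.5 * (err.max() - err.min())   # -> 0.5 deg for this error curve
```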
|
{}
|
# The integer P is greater than 7. If the integer P leaves a remainder
Author Message
TAGS:
### Hide Tags
Math Expert
Joined: 02 Sep 2009
Posts: 42305
Kudos [?]: 133076 [0], given: 12403
The integer P is greater than 7. If the integer P leaves a remainder [#permalink]
### Show Tags
12 Oct 2017, 23:51
Difficulty:
35% (medium)
Question Stats:
68% (01:25) correct 32% (01:56) wrong based on 109 sessions
The integer P is greater than 7. If the integer P leaves a remainder of 4 when divided by 9, all of the following must be true EXCEPT
A. The number that is 4 less than P is a multiple of 9.
B. The number that is 5 more than P is a multiple of 9.
C. The number that is 2 more than P is a multiple of 3.
D. When divided by 3, P will leave a remainder of 1.
E. When divided by 2, P will leave remainder of 1.
Intern
Joined: 15 Aug 2016
Posts: 39
Kudos [?]: 1 [0], given: 55
The integer P is greater than 7. If the integer P leaves a remainder [#permalink]
### Show Tags
13 Oct 2017, 04:20
We can test different cases in this question.
For example, let's take values of P that satisfy the condition given in the question, i.e., a multiple of 9 plus 4.
So P can be 31, 40, or 49.
a) Always true (27, 36, 45 are multiples of 9)
b) Always true (P leaves remainder 4, i.e., remainder -5, when divided by 9, so P + 5 is a multiple of 9)
c) Always true (33, 42, 51 are multiples of 3)
d) Always true (9k is divisible by 3, and the remainder 4 from the question itself leaves remainder 1 when divided by 3)
e) Not always true (40 is divisible by 2, so the remainder can be 0)
Thus OA=E.
Please correct me if I am wrong.
Director
Joined: 25 Feb 2013
Posts: 567
Kudos [?]: 257 [0], given: 35
Location: India
Schools: Mannheim"19 (S)
GPA: 3.82
The integer P is greater than 7. If the integer P leaves a remainder [#permalink]
### Show Tags
15 Oct 2017, 07:23
Bunuel wrote:
The integer P is greater than 7. If the integer P leaves a remainder of 4 when divided by 9, all of the following must be true EXCEPT
$$P = 9k+4$$, where $$k$$ is a nonnegative integer; since $$P>7$$, $$k$$ is not equal to $$0$$
A. The number that is 4 less than P is a multiple of 9. $$=>9k+4-4=9k$$. clearly a multiple of $$9$$. True
B. The number that is 5 more than P is a multiple of 9. $$=>9k+4+5=9k+9$$. clearly a multiple of $$9$$. True
C. The number that is 2 more than P is a multiple of 3. $$=>9k+4+2=9k+6=3(3k+2)$$. clearly a multiple of $$3$$. True
D. When divided by 3, P will leave a remainder of 1. $$=>9k+4=3(3k+1)+1$$, so the remainder is $$1$$. True
E. When divided by 2, P will leave remainder of 1. $$=>9k+4$$, if $$k$$ is even then it will be divisible by $$2$$ but if $$k$$ is odd then it will leave a remainder of $$1$$ when divided by $$2$$. Hence Not True always
Option E
Target Test Prep Representative
Affiliations: Target Test Prep
Joined: 04 Mar 2011
Posts: 1713
Kudos [?]: 913 [0], given: 5
Re: The integer P is greater than 7. If the integer P leaves a remainder [#permalink]
### Show Tags
16 Oct 2017, 17:12
Bunuel wrote:
The integer P is greater than 7. If the integer P leaves a remainder of 4 when divided by 9, all of the following must be true EXCEPT
A. The number that is 4 less than P is a multiple of 9.
B. The number that is 5 more than P is a multiple of 9.
C. The number that is 2 more than P is a multiple of 3.
D. When divided by 3, P will leave a remainder of 1.
E. When divided by 2, P will leave remainder of 1.
We can create the following equation:
P = 9Q + 4
Thus, we see that P can be values such as:
13, 22, 31, 40, ...
A) The number that is 4 less than P is a multiple of 9.
P - 4 = 9Q + 4 - 4
P - 4 = 9Q
A is true.
B) The number that is 5 more than P is a multiple of 9.
P + 5 = 9Q + 4 + 5
P + 5 = 9Q + 9
B is true.
C) The number that is 2 more than P is a multiple of 3.
P + 2 = 9Q + 4 + 2
P + 2 = 9Q + 6
C is true.
D) When divided by 3, P will leave a remainder of 1.
(9Q + 4)/3 = (3Q + 1) + 1/3
D is true.
E) When divided by 2, P will leave remainder of 1.
(9Q + 4)/2 = (4Q + 2) + Q/2
We can’t be certain what the remainder is. If Q = 1, then the remainder is 1; however, if Q is 2, then the remainder is 0.
Alternate solution:
Since the problem says all of the answer choices must be true except one of them, we can use any integer > 7 that satisfies the condition “when it is divided by 9, it will leave a remainder of 4” to check the answer choices. Since 13 is one such number, we will use that.
A) 13 - 4 = 9 and 9 is a multiple of 9. This is true.
B) 13 + 5 = 18 and 18 is a multiple of 9. This is true.
C) 13 + 2 = 15 and 15 is a multiple of 3. This is true.
D) 13/3 = 4 R 1. This is true.
E) 13/2 = 6 R 1. This is true.
Since 13 makes all answer choices true, we need to use another number. Another number we can use is 13 + 9 = 22.
Instead of checking each answer choice again, we can see that choice E is not true, since when 22 is divided by 2, the remainder is 0, not 1.
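The case check above is easy to automate. A short brute-force script (illustrative only) over every qualifying P confirms that E is the only option that can fail:

```python
def checks(P):
    # Each answer choice from the question, tested for a given P
    return {
        "A": (P - 4) % 9 == 0,
        "B": (P + 5) % 9 == 0,
        "C": (P + 2) % 3 == 0,
        "D": P % 3 == 1,
        "E": P % 2 == 1,
    }

# Every P > 7 that leaves remainder 4 when divided by 9
candidates = [P for P in range(8, 500) if P % 9 == 4]
always_true = {opt: all(checks(P)[opt] for P in candidates) for opt in "ABCDE"}
# always_true == {'A': True, 'B': True, 'C': True, 'D': True, 'E': False}
```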
_________________
Jeffery Miller
|
{}
|
### What is the Right Approach to Quants Section (in Hindi)
Lesson 3 of 5 • 441 upvotes • 13:28 mins
###### Aman Srivastava
In this lesson, Aman discusses what the right approach to the Quant section should be for every aspirant who wants to crack SSC examinations.
|
{}
|
# Scored just 630
Author Message
TAGS:
### Hide Tags
Manager
Joined: 06 Jun 2010
Posts: 161
Followers: 2
Kudos [?]: 18 [0], given: 151
### Show Tags
28 Dec 2012, 08:16
Just came back from the test center. Scored a meagre 630 (Q39, V37).
The score is heavily skewed, and the ironic part is that I felt like I was doing well in quant. Please suggest how I can improve my quant score. I was in control of quant till around the 28th question and then got stuck on an exponents question. However, I faced around 3 combinations and around 2 probability questions and thus thought I was doing well, but alas it wasn't to be.
I felt I was getting most of the questions correct in quant, but I am sure that was just my illusion. I finally guessed around 3 questions in quant since I was running out of time.
Please suggest how I can improve my quant score within 1 month, since I need to apply for fall 2013. If anyone here can give me some guidance, I'll be more than happy!
Should I apply with this poor score to any of the colleges?
Thanks,
Shreeraj
Current Student
Status: Final Lap Up!!!
Affiliations: NYK Line
Joined: 21 Sep 2012
Posts: 1095
Location: India
GMAT 1: 410 Q35 V11
GMAT 2: 530 Q44 V20
GMAT 3: 630 Q45 V31
GPA: 3.84
WE: Engineering (Transportation)
Followers: 37
Kudos [?]: 454 [2] , given: 70
### Show Tags
18 Apr 2013, 12:07
2
KUDOS
Now that you have improved on quant, try one last time; don't waste your verbal talents. As it is, colleges use the best of all your scores, and the GMAT is still the first preference of almost all adcoms.
I think 680 is around the corner.
Just try to be consistent with your quant score.
Consider kudos if my post helps!
Archit
Current Student
Joined: 21 Aug 2010
Posts: 210
Followers: 2
Kudos [?]: 15 [1] , given: 28
### Show Tags
31 Dec 2012, 02:48
1
KUDOS
My 2 cents.
First of all, if you get a particular kind of question on the test, that doesn't mean that you are doing well.
Secondly, I see that your verbal is already at a very good level from a non-native point of view.
Thirdly, Quant is one thing, as per me, which could be very easily improved.
Had you got something like Q48 or Q49 with the same verbal score, then you would have hit something like 690-700 overall.
So I would suggest you to take one more shot at it.
BR
Mandeep
Current Student
Status: Final Lap Up!!!
Affiliations: NYK Line
Joined: 21 Sep 2012
Posts: 1095
Location: India
GMAT 1: 410 Q35 V11
GMAT 2: 530 Q44 V20
GMAT 3: 630 Q45 V31
GPA: 3.84
WE: Engineering (Transportation)
Followers: 37
Kudos [?]: 454 [1] , given: 70
### Show Tags
01 Jan 2013, 21:13
1
KUDOS
shreerajp99 wrote:
Thanks,as u said,quant can be improved easily.Can u share strategies for the same?
Thanks,
Shreeraj
Don't waste time; go for the MGMAT maths complete set of books and you will feel the difference. And yes, it was a time issue, not a concept problem, so do not get carried away thinking that your concepts are missing.
Take it positively and stand up for the final battle. Now that you scored 38 in Verbal, try for 40+. The good here is that after your quant improves, you will see your score hovering somewhere around 730.
Do not get discouraged; you are blessed with such a good verbal score.
Pick up the MGMAT strategy guides, give it 1 month, and side by side improve verbal.
One more suggestion to fix the time issue: take up a Grockit membership; it comes for just Rs 700.
Archit
Consider kudo if my post helped!!!!
Current Student
Joined: 21 Aug 2010
Posts: 210
Followers: 2
Kudos [?]: 15 [1] , given: 28
### Show Tags
02 Jan 2013, 02:34
1
KUDOS
Hi Shreeraj,
I would like to know your test scores of GMATprep, the first time when you took those to benchmark your performance and the second time when you took them after completing your preparation.
I also want to know that do you review your tests to see what kind of mistakes you do.
The mistakes can be broadly classified into two categories - conceptual and careless.
If they are careless mistakes, then you don't need to worry much. Just pay attention on the way GMAC tries to trick you with their traps.
If they are conceptual mistakes, then it's a grave issue. It's a worrisome issue for you.
I would suggest you to do the MGMAT maths guides and then jump over to the official GMAT questions from OG12 and OG13.
The first two-third questions of OG in both PS and DS are relatively easy. The last one-third questions replicate the actual GMAT questions especially DS questions.
I would also like you to pay major attention on Number properties and Geometry. Number properties is the one topic from which you would find the maximum number of questions on GMAT.
Plus GMAT loves a lot to play in the ZIP(Zero, Integer/Fraction, Positive/Negative) zone. So beware of that.
BR
Mandeep
Manager
Joined: 06 Jun 2010
Posts: 161
Followers: 2
Kudos [?]: 18 [1] , given: 151
### Show Tags
15 Apr 2013, 12:02
1
KUDOS
I retook GMAT today and scored 620(Q45,V31).
My quant score increased by 6 points from my last attempt whereas my verbal score decreased by 6 points.
Few things that annoyed me about the exam today:
1.After both IR and quant breaks,when the proctor logged into the system,it showed not responding and thus in quant i was docked around 1min 5secs and in verbal 1min 26secs.
2.1 DS question in quant gave all details about value of K and eventually asked about n,i didn't know gmac makes typos.
3.1 verbal CR question was from gmatprep,it was a 1 bold faced question with a single sentence underlined,but trust me guys this question was same with same answer choices(not that im complaining but i believe gmac spends $5000/question so that would be laziness on their part,nothing else).
I felt quant was very difficult,gmatclub tests level i feel and verbal seemed easy but eventually i got screwed so doesnt matter.
Whoever planning to take this exam in the future,few suggestions:
1.Get gmatprep documents and just solve them in and out(gmat must be recycling many of its questions,i am sure about it,now i feel i shouldve just done those documents,nothing else)
2.OG is nothing but old representation of GMAT(except new quant questions).
For now,i am taking a break from this exam(I feel i have been studying this exam wrongly all this while).I have applied to 6 colleges with my old score of 630(got 3 rejects).As soon as my final reject comes in,i will decide what to do with my MBA plans(whether to take GRE this time or again attempt GMAT).btw i had scored 1420 in gre during my college days but i had given it just coz everyone was giving it hehe!
Thanks,
Shreeraj
Current Student
Joined: 21 Aug 2010
Posts: 210
Followers: 2
Kudos [?]: 15 [1] , given: 28
Re: Scored just 630 [#permalink]
### Show Tags
15 Apr 2013, 12:16
1
This post received KUDOS
Cool. All the best. I would suggest you to take a break and apply again. Make sure you don't study for your GMAT at all. Just work on your profile, have some fun time, go run some marathons, do some mountain climbing, join some NGOs.
Veritas Prep GMAT Instructor
Joined: 11 Dec 2012
Posts: 313
Followers: 104
Kudos [?]: 253 [1] , given: 66
Re: Scored just 630 [#permalink]
### Show Tags
16 Apr 2013, 20:59
1
This post received KUDOS
Expert's post
Hi shreerajp99, it's always unfortunate when you take the test another time and one score goes up while the other goes down by the same amount.
On the plus side, you know you are capable of achieving these scores, it's just a question of getting both of them on the same test. Don't get discouraged, you can take the exam again if you want, or maybe see how things work out this fall and then decide whether to take the exam again for a subsequent semester. This way you'll either get into the school you want (and forgo doing the exam again) or have extra motivation to try it again. Pretty good win/win scenario, at least.
Best of luck and please keep us posted on your future successes.
-Ron
_________________
Director
Affiliations: SAE
Joined: 11 Jul 2012
Posts: 509
Location: India
Concentration: Strategy, Social Entrepreneurship
GMAT 1: 710 Q49 V37
GPA: 3.5
WE: Project Management (Energy and Utilities)
Followers: 42
Kudos [?]: 198 [1] , given: 269
Re: Scored just 630 [#permalink]
### Show Tags
17 Apr 2013, 21:17
1
This post received KUDOS
There are many components of an application. GMAT score is one of them. If you think you have a stellar work record and application (essay) then be comfortable with your score.
Also what is your target school?
As someone has already suggested go with MGMAT books, will take 2 weeks full. That will surely boost your quant score.
Read this, see if you can take anything out.
gmat-score-630-to-710-in-5-weeks-146954.html?fl=similar
Good luck for future.
Gyan
_________________
First Attempt 710 - first-attempt-141273.html
Manager
Joined: 06 Jun 2010
Posts: 161
Followers: 2
Kudos [?]: 18 [0], given: 151
Re: Scored just 630 [#permalink]
### Show Tags
01 Jan 2013, 08:16
Thanks,as u said,quant can be improved easily.Can u share strategies for the same?
Thanks,
Shreeraj
Manager
Joined: 16 May 2011
Posts: 77
Followers: 0
Kudos [?]: 12 [0], given: 2
Re: Scored just 630 [#permalink]
### Show Tags
03 Jan 2013, 16:07
Quant is easy to score over 45. You can get a 700. Get a GOOD tutor (Stats Master/PhD) to teach you short cuts and to fast track.
Current Student
Joined: 21 Aug 2010
Posts: 210
Followers: 2
Kudos [?]: 15 [0], given: 28
Re: Scored just 630 [#permalink]
### Show Tags
15 Apr 2013, 12:08
Had your verbal remained the same, you would have got a 660-670. What was your target score? And to which all schools did you apply by the way?
Manager
Joined: 06 Jun 2010
Posts: 161
Followers: 2
Kudos [?]: 18 [0], given: 151
Re: Scored just 630 [#permalink]
### Show Tags
15 Apr 2013, 12:11
yes i agree with ur point.My target score was 680,i had purchased manhattan tests after my previous attempt,was consistently getting 660-670 and hence went ahead and scheduled the test.i have applied to carlson,katz,north eastern(safe ones i believe) and 3 ambitious(olin,cornell and emory)
Current Student
Joined: 21 Aug 2010
Posts: 210
Followers: 2
Kudos [?]: 15 [0], given: 28
Re: Scored just 630 [#permalink]
### Show Tags
15 Apr 2013, 12:12
Great. Did you get interview invites from any one of them?
Manager
Joined: 06 Jun 2010
Posts: 161
Followers: 2
Kudos [?]: 18 [0], given: 151
Re: Scored just 630 [#permalink]
### Show Tags
15 Apr 2013, 12:14
Only Katz as of now,lets see how it goes!
Director
Status: Gonna rock this time!!!
Joined: 22 Jul 2012
Posts: 547
Location: India
GMAT 1: 640 Q43 V34
GMAT 2: 630 Q47 V29
WE: Information Technology (Computer Software)
Followers: 3
Kudos [?]: 50 [0], given: 562
Re: Scored just 630 [#permalink]
### Show Tags
16 Apr 2013, 18:23
shreerajp99 wrote:
I retook GMAT today and scored 620(Q45,V31).
My quant score increased by 6 points from my last attempt whereas my verbal score decreased by 6 points.
Few things that annoyed me about the exam today:
1.After both IR and quant breaks,when the proctor logged into the system,it showed not responding and thus in quant i was docked around 1min 5secs and in verbal 1min 26secs.
2.1 DS question in quant gave all details about value of K and eventually asked about n,i didn't know gmac makes typos.
3.1 verbal CR question was from gmatprep,it was a 1 bold faced question with a single sentence underlined,but trust me guys this question was same with same answer choices(not that im complaining but i believe gmac spends$5000/question so that would be laziness on their part,nothing else).
I felt quant was very difficult,gmatclub tests level i feel and verbal seemed easy but eventually i got screwed so doesnt matter.
Whoever planning to take this exam in the future,few suggestions:
1.Get gmatprep documents and just solve them in and out(gmat must be recycling many of its questions,i am sure about it,now i feel i shouldve just done those documents,nothing else)
2.OG is nothing but old representation of GMAT(except new quant questions).
For now,i am taking a break from this exam(I feel i have been studying this exam wrongly all this while).I have applied to 6 colleges with my old score of 630(got 3 rejects).As soon as my final reject comes in,i will decide what to do with my MBA plans(whether to take GRE this time or again attempt GMAT).btw i had scored 1420 in gre during my college days but i had given it just coz everyone was giving it hehe!
Thanks,
Shreeraj
This happened with me as well. Verbal dropped 5 points. I don't understand how there can be such a drastic difference in verbal abilities. The GMAT for sure is not a foolproof indicator of one's ability, and the sad thing is that our future depends, to some extent, on the GMAT.
_________________
hope is a good thing, maybe the best of things. And no good thing ever dies.
Who says you need a 700 ?Check this out : http://gmatclub.com/forum/who-says-you-need-a-149706.html#p1201595
My GMAT Journey : end-of-my-gmat-journey-149328.html#p1197992
MBA Section Director
Joined: 19 Mar 2012
Posts: 3325
Location: India
GPA: 3.8
WE: Marketing (Energy and Utilities)
Followers: 1299
Kudos [?]: 9492 [0], given: 1846
### Show Tags
16 Apr 2013, 20:45
Expert's post
the GMAT, although not as important as your experience and story, still continues to be one of the aspects of your application that's under YOUR control.
_________________
Intern
Joined: 15 Jul 2012
Posts: 43
Followers: 0
Kudos [?]: 10 [0], given: 7
### Show Tags
18 Apr 2013, 11:22
shreerajp99 wrote:
I retook GMAT today and scored 620(Q45,V31).
My quant score increased by 6 points from my last attempt whereas my verbal score decreased by 6 points.
Few things that annoyed me about the exam today:
1.After both IR and quant breaks,when the proctor logged into the system,it showed not responding and thus in quant i was docked around 1min 5secs and in verbal 1min 26secs.
2.1 DS question in quant gave all details about value of K and eventually asked about n,i didn't know gmac makes typos.
3.1 verbal CR question was from gmatprep,it was a 1 bold faced question with a single sentence underlined,but trust me guys this question was same with same answer choices(not that im complaining but i believe gmac spends \$5000/question so that would be laziness on their part,nothing else).
I felt quant was very difficult,gmatclub tests level i feel and verbal seemed easy but eventually i got screwed so doesnt matter.
Whoever planning to take this exam in the future,few suggestions:
1.Get gmatprep documents and just solve them in and out(gmat must be recycling many of its questions,i am sure about it,now i feel i shouldve just done those documents,nothing else)
2.OG is nothing but old representation of GMAT(except new quant questions).
For now,i am taking a break from this exam(I feel i have been studying this exam wrongly all this while).I have applied to 6 colleges with my old score of 630(got 3 rejects).As soon as my final reject comes in,i will decide what to do with my MBA plans(whether to take GRE this time or again attempt GMAT).btw i had scored 1420 in gre during my college days but i had given it just coz everyone was giving it hehe!
Thanks,
Shreeraj
Hi shree,
I need to understand what you mean by GMAT prep documents; please clarify.
Anuj
Manager
Joined: 06 Jun 2010
Posts: 161
Followers: 2
Kudos [?]: 18 [0], given: 151
### Show Tags
18 Apr 2013, 11:58
Thanks for your replies.
@Anuj: I meant the GMATPrep test documents. You can download them from GMAT Club and solve from those docs (especially sentence correction).
|
{}
|
Symmetry and nonexistence results for a fractional Hénon-Hardy system on a half-space. (English) Zbl 1421.35083
Summary: We study the fractional Hénon-Hardy system $$\begin{cases} (-\Delta)^{s/2} u(x) = |x|^\alpha v^p(x), & x\in \mathbb{R}^n_+, \\ (-\Delta)^{s/2} v(x) = |x|^\beta u^q(x), & x\in \mathbb{R}^n_+, \\ u(x)=v(x)=0, & x\in \mathbb{R}^n\setminus \mathbb{R}^n_+, \end{cases}$$ where $$n\ge 2$$, $$0<s<2$$, $$\alpha,\beta>-s$$ and $$p,q\ge 1$$. We also consider an equivalent integral system. By using a direct method of moving planes, we prove some symmetry and nonexistence results for positive solutions under various assumptions on $$\alpha$$, $$\beta$$, $$p$$ and $$q$$.
MSC:
35J40 Boundary value problems for higher-order elliptic equations
35R11 Fractional partial differential equations
35B06 Symmetries, invariants, etc. in context of PDEs
35B09 Positive solutions to PDEs
35A01 Existence problems for PDEs: global existence, local existence, non-existence
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
{}
|
# Introduction To Loop Quantum Gravity
by marlon
Tags: loop quantum gravity
P: 4,006 Marcus, thanks for the reply... It can only be a good thing that others contribute but i am convinced that we need to keep the level basic enough in this sense that i wanna move up the "difficulty-scale" gradually. It would be a bad thing if we were to discuss high-level papers because i think most of us (including myself) will not be able to follow this up and we would get discouraged and drop the subject. I will continue this matter and i would suggest that we follow the content of Rovelli's book which is online at his website.You have given the reference to it... regards marlon
Astronomy
PF Gold
P: 23,235
Introduction To Loop Quantum Gravity
Quote by marlon I will continue this matter and i would suggest that we follow the content of Rovelli's book which is online at his website.You have given the reference to it... regards marlon
I am very much looking forward to your continuing the essay, marlon!
I will restrain my tendency to talk too much, so as not to crowd.
BTW just yesterday in the mail was delivered the copy of Rovelli "Quantum Gravity" which I ordered from Amazon. I am very happy with the book and
I am only sad that it is so expensive----70 dollars. You have to be rich, or be willing to splurge. Or you have to be in graduate school and need it for a class, as textbook. In US the textbooks are all very expensive, so 70 dollars is fairly normal.
Anyway Rovelli is a good writer and Cambridge Press did a good job, with the editing and just the physical production----nice paper, nice binding, nice feel, and printing. So it is a pleasure to own: at least for me.
But to save money it certainly makes sense to print off the free draft copy at Rovelli site. Even just the first 3 or 4 chapters and some appendices---or whatever you find the most accessible parts and most relevant for you.
Marlon, why not give some online bibliography yourself? It would be a refreshing change (I am always doing the librarian work) and I would enjoy seeing your picks and how you organize it. (If you do not want to, I will not shirk the job, but maybe you would like to list intro-level links?)
P: 660
Quote by marcus yesterday in the mail was delivered the copy of Rovelli "Quantum Gravity"
I sincerely hope you enjoy your new book, which I know you will. I was leafing through it at the U of T bookstore. I want to point out two things carlo says in the introductory bit.
1) That any correct quantum gravity theory must be able to calculate amplitudes for graviton-graviton scattering, and that he hopes that lqg will one day lead to a theory that can.
2) That he knows that GR must almost certainly be an effective field theory that is modified at higher energies so that lqg can't be correct. Thus he says he views lqg basically as a laboratory for investigating certain fundamental issues in quantum gravity.
As far as your sticky goes, would you be bothered if I corrected it?
Astronomy
PF Gold
P: 23,235
Quote by jeff ... I want to point out two things carlo says in the introductory bit. 1) That any correct quantum gravity theory must be able to calculate amplitudes for graviton-graviton scattering, and that he hopes that lqg will one day lead to a theory that can. 2) That he knows that GR must almost certainly be an effective field theory that is modified at higher energies so that lqg can't be correct. Thus he says he views lqg basically as a laboratory for investigating certain fundamental issues in quantum gravity. ...
I believe you are mistaken, jeff. Carlo does not say these things in the introductory bit.
At least I looked in the first part of the book, and used the index to search the rest, and could not find any statements of the kind.
It would be nice to have some page references, if you have any more would-be paraphrases from Rovelli----even sweller of you to provide actual quotes. Since a paraphrase can often mislead as to what was said in the original.
Thanks for your kind wish as to the book! Indeed it is surprising me. I was not expecting this much, since I had read much of the last year's draft version.
BTW if you pick up a copy either at library or store and can give me some actual page reference (whether or not in the first 50-or-so pages, anywhere in the book will do) where he says these things 1. and 2. that you state, that would be most helpful of you and I will be very interested to read the actual passages and think about it. If he does say something like that my eye somehow missed it.
P: 660
Quote by marcus I believe you are mistaken, jeff. Carlo does not say these things in the introductory bit.
We'll, I don't have the book on hand, but...
In rovelli's dec 30 2003 draft, he says on page ix entitled "PREFACE"
"What we need is not just a technique for computing, say, graviton-graviton scattering amplitudes (although we certainly want to be able to do so, eventually)"
On page 5 of the same draft,
"The einstein-hilbert action might very well be a low energy approximation of something else. But the modification of the notions of space and time has to do with the diffeomorphism invariance and the background independence of the action, not with its specific form."
Be this as it may, jim bjorken in the foreword of carlo's book states quite plainly that effective field theory has taught us that GR must be viewed as just an effective field theory, and it's difficult to believe that carlo would've allowed such a statement if it fundamentally contradicted his position.
Btw, did you notice that carlo writes (probably in the preface) that thiemann is publishing a book on the more mathematical aspects of lqg?
Astronomy
Sci Advisor
PF Gold
P: 23,235
Oh I see. I thought you were talking about the actual book, the one you said you were browsing in the bookstore, but you apparently meant the draft from 2003, which is available online. There's been considerable updating and revision since then, so one should be specific which version is meant.
Meanwhile, maybe readers of this thread would be interested in the Loop and String lineup of talks at the conference that just finished in Mexico (at the Quintana Roo beach resort in sight of the island of Cozumel). A lot of the lectures were by top people, both string and loop, and they were rather much introductory. The conference aimed at being a "school" to bring more people in, and to introduce stringies to loop research and vice versa. I thought the lineup of who the organizers wanted to talk about the various hot topics was enlightening. So since it could be instructive, I will copy it here:
http://www.nuclecu.unam.mx/~gravit/E...I/courses.html
--quote--
COURSES AND INVITED TALKS
Courses:
A. Ashtekar (PSU, USA): Quantum Geometry
A. P. Balachandran (Syracuse, USA): Quantum Physics with Time-Space Noncommutativity
P. T. Chrusciel (Tours, France): Selected Problems in Classical Gravity
R. Kallosh (Stanford, USA): De Sitter Vacua in String Theory and the String Landscape
A. Peet (Toronto, Canada): Black Holes in String Theory
C. Rovelli (Marseille, France): Loop Quantum Gravity and Spinfoams
Plenary talks:
J. D. Barrow (Cambridge, UK): Cosmological Constants and Variations
M. Bojowald (AEI, Germany): Loop Quantum Cosmology
A. Corichi (ICN-UNAM, Mexico): Black Holes and Quantum Gravity
A. Linde (Stanford, USA): Inflation and String Theory
O. Obregon (U. Guanajuato, Mexico): Noncommutativity in Gravity, Topological Gravity and Cosmology
A. Perez (PSU, USA): Selected Topics on Spin Foams
L. Smolin (PITP, Canada): Loops and Strings
R. Wald (U. Chicago, USA): Topics on Quantum Field Theory
Short talks:
E. Caceres (CINVESTAV, Mexico): Wrapped D-branes and confining gauge theories
A. Guijosa (ICN-UNAM, Mexico): Far-from-Extremal Black Holes from Branes and Antibranes
H. Morales (UAM, Mexico): Semiclassical Aspects and Phenomenology of Loop Quantum Gravity
D. Sudarsky (ICN-UNAM, Mexico): Spacetime Granularity and Lorentz Invariance
L. Urrutia (ICN-UNAM, Mexico): Synchrotron Radiation in Lorentz-Violating Effective Electrodynamics
---endquote---
P: 2 marlon...got any extra info on LQG??? great introduction btw... lola
P: 660
Quote by marcus Oh I see. I thought you were talking about the actual book. that you said you were browsing in the bookstore. but you apparently meant the draft, from 2003, which is available online. there's been considerable up-dating and revision. so one should be specific which
You want to play games? Fine with me.
P: n/a
This is a project I've been working on, and I'd very much like to know what the participants on this thread think. Thanks, nc
Abstract and prospectus, Spacetime at the Planck Scale
This is an abstract and prospectus for additional research. The proposal would use computational techniques such as those described in Stephen Wolfram's New Kind of Science as an exploratory probe of events at the Planck scale. Authors are currently recruiting mathematicians and physicists to mentor and contribute to the work. We still need someone who can design the NKS experiments.
In this work in progress, we describe a mechanism by which four space-time dimensions are reduced to the classical view of three space-like dimensions arrayed in the customary orthogonal basis with one time-like dimension which can be thought of as permeating the space-like dimensions. The time-like dimension is shown to appear to be unique to a moving observer, and preserves the appearance of freedom of choice as one perspective in a structure which can also be viewed from other perspectives as completely deterministic. The Einstein-Minkowski principle of space-time equivalence taken in the strongest sense creates a powerful model for investigation of the relationship between general relativity and quantum mechanics.
We begin by defining the Planck Sphere (here named to be consistent with the Planck length and Planck time) as a three dimensional volume filled by a radiant event at the speed of light in one Planck time. Thus the radius of the Planck Sphere is equal to one Planck length and is equal to one Planck time, making a three dimensional model which can be used in a perspective sense to portray events which occur at the Planck scale in four dimensions.
After describing the features of the model, we go on to propose that computational graphing techniques similar to those used by Stephen Wolfram in his book A New Kind Of Science be developed to explore the evolution of the Planck Sphere in Kepler dense packed space up to the scale of the fine structure constant, thereby showing the geometric origins of mass and charge. The first step in this process is to define a viable space-time lattice structure, which we believe we have done by defining the Planck Sphere as an element in a Kepler stack. The next step in this process is to develop a rational algorithm to simulate events on the Planck scale. This may be accomplished by applying what we know of cosmogeny and of physics near singularities. As a first approximation we advance the conjecture that expansion from the Planck scale will recapitulate cosmogeny. We carry through the first steps in this approximation to demonstrate a mechanism for early inflation in the burgeoning universe.
References:
D. V. Ahluwalia, "On quantum nature of black hole space-time: A possible new source of intense radiation", International Journal of Modern Physics D, 1999.
S. Doplicher, K. Fredenhagen, J. E. Roberts, "The quantum structure of space-time at the Planck scale and quantum fields", Communications in Mathematical Physics, 1995.
L. Kofman, A. Linde, V. Mukhanov, "Inflationary theory and alternative cosmology", Journal of High Energy Physics.
V. J. Stenger, "Physics, Cosmology and the New Creationism".
200411290100GTC
Richard T. Harbaugh
Program Director
Society for the Investigation of Prescience
P: 1 Hello Marlon i will thank you for the nice clear introduction on loop quantum gravity. I am planning to do my thesis on this subject and i would like to keep in touch with all the specialists here in order to get more info. I am just starting to know this field... bye...Luco
P: 215 The challenge for string theorists and LQG theorists is to explain why the vacuum energy exists at 10^120 J/m^3 (there is no reason to think there is anything wrong with the QM calculation) but does not curve space-time. How can quantum gravity be proved if gravity is not understood on its own yet?
Astronomy
PF Gold
P: 23,235
Quote by Rothiemurchus ( there is no reason to think there is anything wrong with the QM calculation)
!
gotta be something wrong with it
Astronomy
PF Gold
P: 23,235
Quote by Rothiemurchus explain why the vacuum energy exists at 10^120 J/m^3 ...
beg your pardon Rothie but that is a crazy amount of energy
maybe QFT can come up with a mechanism that cancels all or most of it out, or find some reason to say that it doesnt really exist----maybe QFT already has.
but that density of energy, not canceled out and real enough to cause gravity, is simply incredible (at least to me). commonsense persuades me that there must be something wrong with any theory that predicts it
And there is some reason to be hopeful, because QFT is still formulated in an unrealistic way: using a fixed spacetime framework. Reformulating it in a background independent version might possibly get rid of that huge vacuum energy.
BTW just to have a basis for comparision, the astronomers' dark energy estimate is currently around 0.6 joule per cubic km. In joules per cubic meter (the units you were using) that comes to:
0.6 × 10^-9 joule per cubic meter.
P: 215 I am aware of the cosmological evidence. But the problem is this: the energy that can be experimentally associated with the Casimir force is greater than the cosmological observation (10^-6 Newtons/m^2 net force at 10^-7 m plate separation - I think, but I'm not sure, that this is at least 10^-7 J/m^3). So, the plates involved in measurements of the Casimir force must somehow switch on vacuum energy locally. And what sort of effect would a galaxy have on the vacuum energy?
Astronomy
PF Gold
P: 23,235
Quote by Rothiemurchus I am aware of the cosmological evidence.But the problem is this: the energy that can be experimentally associated with the Casimir force ...
Rothie, I will try to respond---tell me if I am making a mistake. I do not believe that the experimental existence of the Cas. force proves that the
QFT calculation of a huge vacuum energy is correct.
what I think is true is that there is some normal vacuum energy density and that between two conducting plates it is LESS namely
$$\text{energy density betw. plates = usual vacuum energy density} -\frac{\hbar c \pi^2}{720 d^4}$$
the QFT calculation of the usual vacuum energy density is bad or dubious, but the Casimir effect does not depend on this, it depends on the fact that the energy density between plates is LESS by the amount shown, which QFT does calculate successfully!, and which depends on the inverse fourth power of the separation distance.
So I say that I believe the QFT calculation of the Casimir effect and I like the Casimir effect, and this is consistent with not believing the huge vacuum energy which QFT calculates, which is roughly 120 OOM wrong---or actually different people try to fix it different ways and say different things, but anyway wrong.
Astronomy
Quote by marcus $$\text{energy density betw. plates = usual vacuum energy density} -\frac{\hbar c \pi^2}{720 d^4}$$
$$\text{force divided by area} = -\frac{\hbar c \pi^2}{240 d^4}$$
Lemma 22.19.10. Let $R$ be a ring. Let $\mathcal{A}$ be a differential graded category over $R$. Let $x$ be an object of $\mathcal{A}$. Let
$(E, \text{d}) = \mathop{\mathrm{Hom}}\nolimits _\mathcal {A}(x, x)$
be the differential graded $R$-algebra of endomorphisms of $x$. We obtain a functor
$\mathcal{A} \longrightarrow \text{Mod}^{dg}_{(E, \text{d})},\quad y \longmapsto \mathop{\mathrm{Hom}}\nolimits _\mathcal {A}(x, y)$
of differential graded categories by letting $E$ act on $\mathop{\mathrm{Hom}}\nolimits _\mathcal {A}(x, y)$ via composition in $\mathcal{A}$. This functor induces functors
$\text{Comp}(\mathcal{A}) \to \text{Mod}_{(E, \text{d})} \quad \text{and}\quad K(\mathcal{A}) \to K(\text{Mod}_{(E, \text{d})})$
by an application of Lemma 22.19.5.
Proof. This lemma proves itself. $\square$
# Mass Flow Rate And Pressure: Effect, Relation, Problem Examples
This article discusses mass flow rate and pressure. The two are directly related (although there is no single direct formula linking them). Let's study more about it.
Anything which flows is bound to have a certain mass. The amount of mass which passes through a point per second is called the mass flow rate. The term finds its use in thermal engineering and fluid mechanics. Let's discuss more about mass flow rate in this article.
## What is pressure?
Pressure is the amount of force exerted per unit area. For the same amount of force, if the area is smaller then value of pressure is more and if the area is more then the value of pressure is less.
The unit of pressure is N/m^2. Mathematically, pressure can be given by-
P = F/A
Where,
F is the force applied normal to the cross section
A is the area of the cross section
## What is mass flow rate?
The term flow means anything pertaining to movement. Mass flow rate refers to an amount of mass passing through a point per second. The mass can be of anything such as gas, water, oil or any other fluid.
The term mass flow rate is a very important term used in fluid mechanics and thermal engineering. Its applications lie in turbomachinery, rockets, aeroplanes and many other fluid related applications. Mathematically, mass flow rate can be given as-
mass flow rate = ρAV = ρQ
where ρ is the density of the fluid, A is the cross-sectional area, V is the flow velocity and Q is the volume flow rate.
## Mass flow rate and pressure relation
Logically, the greater the pressure applied at the inlet section, the greater the pressure difference created between inlet and outlet, and hence the more mass will rush through the section. Hence, we can say that mass flow rate is directly proportional to the pressure gradient.
The converse also holds: when more mass flows through a point per second, the force exerted by the fluid molecules on the surface of the section is greater, hence the pressure is higher when the mass flow rate is higher. So both are directly proportional to each other. Note that this is strictly true only for incompressible fluids like water.
## Does mass flow rate change with pressure?
Note that pressure alone has no effect on mass flow rate; it is the pressure difference created across the section that affects the flow rate.
The value of pressure difference between the inlet section and outlet section affects the mass flow rate. If the pressure difference is more then the mass flow rate will be more and if the pressure difference will be less then the mass flow rate will be less.
A high pressure by itself has no effect on the flow rate: if both the inlet and the outlet are at high pressure with only a small difference between them, the flow rate will be low because the pressure difference is low. We will get more clarity by looking at an example.
## Mass flow rate and pressure difference relation example
As discussed in earlier section, pressure difference directly affects the mass flow rate. This can be explained using simple example discussed below.
An aeroplane generates more lift when there is a larger pressure difference across the wing (as with cambered airfoils). If the pressure is similarly large on both sides of the airfoil, there is little pressure difference, so very little air flows across and much less lift is generated.
## Bernoulli’s equation
The Bernoulli's principle applies to incompressible fluids; it states that when a fluid is flowing in a streamline flow, the velocity increases wherever the static pressure decreases.
In simple terms, Bernoulli's principle means: static pressure + dynamic pressure = total pressure, and that total is constant along a streamline.
Mathematically, Bernoulli's principle can be given as-
P + (1/2)ρV^2 + ρgh = constant
where P is the static pressure, ρ is the fluid density, V is the flow velocity, g is the gravitational acceleration and h is the elevation.
## Hagen–Poiseuille law
This law gives the direct relation between the pressure difference and the volumetric flow rate.
It applies to the pressure drop for incompressible Newtonian fluids in laminar flow. The Hagen–Poiseuille equation is given as follows-
ΔP = 8μLQ/(πR^4)
where ΔP is the pressure drop along the pipe, μ is the dynamic viscosity, L is the pipe length, Q is the volumetric flow rate and R is the pipe radius.
## What are the different types flow?
There are three main types of flows- Laminar, turbulent and transient flow.
### Laminar flow
This type of flow is characterized by fluid particles flowing in a smooth manner. Each layer moves past the adjacent layer in such a manner that they don’t mix. We can tell whether the flow is laminar or not by looking at the value of Reynold’s number of the flow. Reynold’s number is discussed in later sections of this article.
### Turbulent flow
This type of flow is characterized by the mixing of adjacent fluid layers. The flow is more violent than laminar flow. It is desired when two fluids are to be mixed.
### Transient flow
Transient flow is simply the transition between laminar and turbulent flow.
## Reynold’s number
Reynold's number is a dimensionless number which is used for determining the type of flow in the system.
The ratio of inertial forces to viscous forces is called the Reynold's number. The general formula for Reynold's number is given below-
Re = ρVL/μ
where,
ρ is the density of the fluid
mu (μ) is the dynamic viscosity
V is the velocity of flow
L is the characteristic length (the diameter D for pipe flow)
## Significance of Reynold’s number
As discussed in the above section, Reynold's number is used to find the type of flow in the system. It gives us an idea about the relative strength of inertial and viscous effects in the flow.
For fluid flowing over a flat plate-
• Laminar flow- Re<3×10^5
• Turbulent flow- Re>3×10^5
For fluid in a circular pipe-
• Laminar flow- Re<2000
• Turbulent flow-Re>4000
• Transient flow-2000<Re<4000
## Prandtl number
Prandtl number is named after the physicist Ludwig Prandtl. It is a dimensionless number which is used for characterizing the behaviour of heat transfer in a flowing fluid.
Prandtl number is the ratio of momentum diffusivity to thermal diffusivity. The mass-transfer analog of the Prandtl number is the Schmidt number. Mathematically, it can be written as-
Pr = μCp/k
where,
μ is the dynamic viscosity
Cp is the specific heat at constant pressure
k is the thermal conductivity
## Mass flow rate example
Let us assume following data for a system.
Density of the fluid- 0.2 kg/m^3
Area of the cross section- 1m^2
Volume flow rate- 10m^3/s
Use the following data to calculate the mass flow rate in the system.
Mass flow rate is the product of density and volume flow rate-
mass flow rate = density × volume flow rate = 0.2 × 10 = 2 kg/s
Abhishek
Hi ....I am Abhishek Khambhata, have pursued B. Tech in Mechanical Engineering. Throughout four years of my engineering, I have designed and flown unmanned aerial vehicles. My forte is fluid mechanics and thermal engineering. My fourth-year project was based on the performance enhancement of unmanned aerial vehicles using solar technology. I would like to connect with like-minded people.
SECTION-5
#### Theory of Machine objective type questions/Multiple choice questions
##### Q31. The effort of a Porter governor (when the arms of the governor are equal and have equal inclination with the axis of the governor spindle) is equal to
$\text { (a) } C(w+W)$
$\text { (b) } \frac{(w+W)}{C}$
$\text { (c) } \frac{(w+W \times C)}{C}$
$\text { (d) } \frac{(C \times w+W)}{C} \text { . }$
where w = Weight of ball of governor, W = Weight on sleeve, and C = Percentage increase in speed.
Ans: $\text { (a) } C(w+W)$
##### Q32. The effort of a Hartnell governor (where the moments due to weight of the arms and ball are neglected) is equal to
$\text { (a) } \frac{C}{S}$
$\text { (b) CS }$
$\text { (c) } \frac{1}{C S}$
$\text { (d) } \frac{S}{C} \text { . }$
where C = Percentage increase in speed, and S = Spring force exerted on the sleeve.
Ans: $\text { (b) CS }$
##### Q33. The controlling force in case of Porter governor is provided by
(a) weight of balls
(b) springs
(c) both (a) and (b)
(d) none of the above.
Ans: (a) weight of balls
(a) one
(b) two
(c) three
(d) four.
Ans: (b) two
##### Q35. For complete balance of the several revolving weights in different planes, the conditions is
(a) the vector sum of the forces must be zero
(b) the vector sum of all the couples must be zero
(c) both (a) and (b)
(d) none of the above.
Ans: (c) both (a) and (b)
##### Q36. The frequency of the secondary force as compared to primary force is
(a) One-half
(b) double
(c) One-fourth
(d) One-third.
Ans: (b) double
##### Q37. The condition of balancing the reciprocating masses of a reciprocating engine is to balance
(a) primary forces only
(b) secondary forces only
(c) primary couples only
(d) all of the above
Ans: (d) all of the above
##### Q38. When the angle of inclination of crank with inner dead centre of a reciprocating engine is 0°, then
(a) primary force is maximum
(b) secondary force is maximum
(c) primary force is minimum
(d) both (a) and (b)
Ans: (d) both (a) and (b)
##### Q39. A governor is said to isochronous when
(a) the equilibrium speed is constant for all radii of rotation of the balls within working range
(b) the range of speed is zero for all radii of rotation of the balls within working range
(c) any one of the above
(d) none of the above.
Ans: (c) any one of the above
##### Q40. The mean force exerted by the governor on the sleeve for a given change of speed is known as
(a) sensitiveness of governor
(b) effort of governor
(c) stability of governor
(d) frictional force of governor.
Ans: (b) effort of governor
## SDL->SFML conversion; very strange performance issues
19 replies to this topic
### #1chondee Members
Posted 16 January 2012 - 10:58 PM
Hi everyone,
As I have been browsing around topics about SDL, I saw SFML mentioned as a potentially better and more sophisticated API for certain purposes. I haven't had performance issues with my game in SDL yet, what made me try to convert to SFML is the scaling, rotation and some alpha blending features it has. Even though my game isn't crazy big, I have decided that I will first try to rewrite my very first working build, which I didn't divide to multiple cpp and header files yet, and was only about 1K lines.
After I had rewritten it successfully and ran it in Debug (in Visual Studio 2010), I got about 5 FPS or lower. After some searching around I saw it mentioned that the debug libraries are slower and that I should use Release instead, so I did. When I ran that inside Visual Studio with F5 it was faster, but still an unacceptable frame rate (60 FPS is the target). When I just built an executable and ran it outside VS it ran okay, but when I increased the number of particles drawn a bit, the performance dropped. When I compared the SDL build with the SFML build, the SDL one got better performance.
Since the SFML sample projects that I compiled with the exact same settings, in the same solution file ran with 2000+ FPS I have figured there must be something wrong with my code and I am probably misusing some expensive SFML calls.
Most of the game objects are added this way:
void World::explosion(int x, int y){
	for (int i=0, r=((rand() % 100) -80); i < 120 + r ; i++){
		int xx = x + rand() % 10 - 5;
		int yy = y + rand() % 10 - 5;
		Particle a(xx, yy, 6);
		v_particles.push_back(a);
	}
}
Every actual image is only loaded once, during the initialization, and every object only has a sprite associated with the image. The objects will be drawn later in the game loop.
Later I have discovered, that even if I don't draw the objects, don't use any SFML draw calls at all during the game loop, just have the c++ logic running, with vector.push_back loops, the frame rate drops drastically.
This would make me think that what I am doing is too expensive on the CPU, too many vector operations are going on, and this has nothing to do with SFML, but all this runs perfectly fine when I use SDL.
At this point I am not even sure what exactly to ask, or what information to provide. Is this something that has to do with my VS 2010 solution settings perhaps? How come the sample projects with the debug SFML library in visual studio (pong for example) runs 800+FPS, while my game only gets 1-5 with those settings?
I have removed event polling too and I have removed the clock to make sure I am not doing something bad with that, that would affect the frame rate.
It looks like that the c++ operations are the bottleneck, because the frame rate is the same even if I comment out the SFML draw lines, but with the exact same, unchanged logic, with the SDL solution and SDL calls runs just fine.
How should I try to tackle this problem, what do you guys think that causes this?
Let me know if you would need my settings, or parts of my code, or something
EDIT:
Here are the frame rates of the sample pong.cpp included with SFML, and with my project
All are compiled from the same VS 2010 solution file
pong | Debug configuration | Started with F5 (Start Debugging) : 130FPS
pong | Debug configuration | Started with CTRL + F5 (Start without Debugging) : 175 FPS
pong | Release configuration | Started with F5 (Start Debugging) : 700 FPS
pong | Release configuration | Started with CTRL + F5 (Start without Debugging) 1600 FPS
myGame | Debug configuration | Started with F5 (Start Debugging) : 0 FPS, didn't draw another frame for 20 seconds
myGame | Debug configuration | Started with CTRL + F5 (Start without Debugging) : ~1 FPS
myGame | Release configuration | Started with F5 (Start Debugging) : 2-3 FPS
myGame | Release configuration | Started with CTRL + F5 (Start without Debugging) : 27 FPS
myGame with SDL calls, 60FPS fixed
Edited by chondee, 16 January 2012 - 11:57 PM.
### #2fastcall22 Moderators
Posted 16 January 2012 - 11:13 PM
If you know beforehand how many elements will be in the vector, then use reserve or resize to resize the vector once, instead of multiple times in the loop. Note that dynamic arrays, in order to assure that elements are contiguous in memory, must allocate a new larger array every time you attempt to add an element when the array is at full capacity.
int ct = 120; // for example: however many particles you plan to add
v_particles.reserve( ct );
for ( int idx = 0; idx < ct; ++idx ) {
int xx = x + rand() % 10 - 5;
int yy = y + rand() % 10 - 5;
v_particles.push_back( Particle(xx, yy, 6) );
}
Also, if you're using Visual Studio, grab AMD CodeAnalyst, profile your code, and find out where exactly your bottleneck lies.
### #3chondee Members
Posted 16 January 2012 - 11:27 PM
I have tried using vector.reserve before, set it to a large number, but it made no difference in performance.
Either way the exact same code running in my SDL version, runs faster than in the SFML one, even if SFML's draw isn't called, only the logic is running...
I'll check out AMD CodeAnalyst, thanks!
### #4fastcall22 Moderators
Posted 16 January 2012 - 11:33 PM
You mentioned you changed it to the "Release" configuration, does your "Release" configuration link with the debug versions of your libraries or the release versions?
### #5chondee Members
Posted 16 January 2012 - 11:38 PM
It links the release versions (the clean ones, without -d or -s or -s-d)
Here are the frame rates of the sample pong.cpp included with SFML, and with my project
All are compiled from the same VS 2010 solution file
pong | Debug configuration | Started with F5 (Start Debugging) : 130FPS
pong | Debug configuration | Started with CTRL + F5 (Start without Debugging) : 175 FPS
pong | Release configuration | Started with F5 (Start Debugging) : 700 FPS
pong | Release configuration | Started with CTRL + F5 (Start without Debugging) 1600 FPS
myGame | Debug configuration | Started with F5 (Start Debugging) : 0 FPS, didn't draw another frame for 20 seconds
myGame | Debug configuration | Started with CTRL + F5 (Start without Debugging) : ~1 FPS
myGame | Release configuration | Started with F5 (Start Debugging) : 2-3 FPS
myGame | Release configuration | Started with CTRL + F5 (Start without Debugging) : 27 FPS
myGame with SDL calls, 60FPS fixed
Edited by chondee, 16 January 2012 - 11:57 PM.
### #6fastcall22 Moderators
Posted 17 January 2012 - 12:18 AM
I'd have a look at what CodeAnalyst has to say about your code...
### #7chondee Members
Posted 17 January 2012 - 12:49 AM
Thank You!
This is the first time I use CodeAnalyst, I guess this is what we need.
This one is for the SFML code:
CS:EIP Symbol + Offset Timer samples
0xd33560 Particle::show 73.98
0xd31a70 Player::show_particles 13.12
0xd36150 std::_Remove_if<Particle *,bool (__cdecl*)(Particle)> 9.28
0xd37290 std::_Uninit_copy<Particle *,Particle *,std::allocator<Particle> > 0.9
0xd32800 World::show_stars 0.68
0xd36ed0 std::_Find_if<Star *,bool (__cdecl*)(Star)> 0.68
0xd363c0 std::_Remove_if<Star *,bool (__cdecl*)(Star)> 0.45
0xd32630 World::show_enemies 0.23
0xd33410 Star::show 0.23
0xd34d60 main 0.23
11 functions, 82 instructions, Total: 442 samples, 100.00% of shown samples, 2.27% of total session samples
So the way I tested performance, since in this early version of my game there aren't many objects, I increased the particles coming out from the player's thruster (in both SDL and SFML, the same number)
In the SFML one it seems like this takes up everything.
Here is the code for Particle::show:
void Particle::show()
{
//Show image
if (initialized == false){
if (type == 1) part = p1;
else if (type == 2) part = p2;
else if (type == 3) part = p3;
else if (type == 4) part = p4;
else if (type == 5) part = bparticle;
else if (type == 6){
if ((rand() % 4) == 0)
part = p1;
else if ((rand() % 4) == 1)
part = p2;
else if ((rand() % 4) == 2)
part = p3;
else if ((rand() % 4) == 1)
part = p4;}
initialized = true;
}
if(alive)
{
part.SetPosition(x,y);
App.Draw(part);
//apply_surface( x, y, part, screen );
}
//Animate
//stuff here is not relevant, and its exactly the same in both
EDIT:
Since then I tested it with part.SetPosition(x,y); and App.Draw(part); commented out, so they are not drawn, and still Particle::show takes up 61% of the resources (was 74 previously)... with that commented out, the SDL and SFML Particle::show() are identical, except in SDL they actually get drawn.
I'll put the same code here for SDL for comparison, so you won't have to scroll back and forth.
void Particle::show()
{
//Show image
if (part == NULL){
if (type == 1) part = p1;
else if (type == 2) part = p2;
else if (type == 3) part = p3;
else if (type == 4) part = p4;
else if (type == 5) part = bparticle;
else if (type == 6){
if ((rand() % 4) == 0)
part = p1;
else if ((rand() % 4) == 1)
part = p2;
else if ((rand() % 4) == 2)
part = p3;
else if ((rand() % 4) == 1)
part = p4;}
}
if(alive)
apply_surface( x, y, part, screen );
//Animate
And CodeAnalyst result for the SDL code, Particle::show doesn't strain it as much at all
CS:EIP Symbol + Offset Timer samples
0xc05ab0 Particle::show 7.09
0xc08530 std::_Vector_iterator<std::_Vector_val<Particle,std::allocator<Particle> > >::operator++ 4.8
0xc04fd0 apply_surface 4.77
0xc0a2a0 std::_Vector_const_iterator<std::_Vector_val<Particle,std::allocator<Particle> > >::operator== 4.64
0xc08170 std::vector<Particle,std::allocator<Particle> >::end 4.39
0xc0bd00 std::_Vector_const_iterator<std::_Vector_val<Particle,std::allocator<Particle> > >::operator++ 4.27
0xc0bc10 std::_Vector_const_iterator<std::_Vector_val<Particle,std::allocator<Particle> > >::_Vector_const_iterator<std::_Vector_val<Particle,std::allocator<Particle> > > 4.23
0xc0a140 std::_Vector_iterator<std::_Vector_val<Particle,std::allocator<Particle> > >::_Vector_iterator<std::_Vector_val<Particle,std::allocator<Particle> > > 4.16
0xc0bcc0 std::_Vector_const_iterator<std::_Vector_val<Particle,std::allocator<Particle> > >::operator* 4.15
0xc084e0 std::_Vector_iterator<std::_Vector_val<Particle,std::allocator<Particle> > >::operator-> 3.95
0xc0a1f0 std::_Vector_iterator<std::_Vector_val<Particle,std::allocator<Particle> > >::operator++ 3.73
0xc0bd50 std::_Vector_const_iterator<std::_Vector_val<Particle,std::allocator<Particle> > >::_Compat 3.57
0xc0a1a0 std::_Vector_iterator<std::_Vector_val<Particle,std::allocator<Particle> > >::operator* 3.53
0xc085e0 std::_Vector_const_iterator<std::_Vector_val<Particle,std::allocator<Particle> > >::operator!= 3.41
0xc0f1e0 std::_Remove_if<Particle *,bool (__cdecl*)(Particle)> 3.15
0xc11dd0 std::_Move<Particle &> 2.75
0xc13d7f _RTC_CheckStackVars 2.65
0xc017fd ILT+2040(__RTC_CheckEsp) 1.69
0xc0f8d0 std::forward<Particle const &> 1.65
0xc105d0 std::_Construct<Particle,Particle const &> 1.61
0xc03dc0 Player::show_particles 1.32
0xc02e70 Particle::Particle 1.11
0xc09df0 std::vector<Particle,std::allocator<Particle> >::_Inside 1.01
0xc0def0 std::_Cons_val<std::allocator<Particle>,Particle,Particle const &> 1.01
0xc0de50 std::addressof<Particle const > 0.95
0xc13030 std::_Destroy<Particle> 0.92
0xc081e0 std::vector<Particle,std::allocator<Particle> >::push_back 0.9
0xc12a00 std::allocator<Particle>::destroy 0.85
0xc0ed60 std::allocator<Particle>::construct 0.8
0xc13d54 _RTC_CheckEsp 0.8
0xc09ff0 std::vector<Particle,std::allocator<Particle> >::_Orphan_range 0.72
0xc120c0 std::_Dest_val<std::allocator<Particle>,Particle> 0.71
0xc0f860 operator new 0.63
0xc05890 Star::show 0.22
0xc0c860 std::_Vector_const_iterator<std::_Vector_val<Star,std::allocator<Star> > >::operator* 0.22
0xc013d4 ILT+975(_RTC_CheckStackVars 0.21
0xc09210 std::_Vector_iterator<std::_Vector_val<Star,std::allocator<Star> > >::operator++ 0.21
0xc08e40 std::vector<Star,std::allocator<Star> >::end 0.18
0xc0c8a0 std::_Vector_const_iterator<std::_Vector_val<Star,std::allocator<Star> > >::operator++ 0.18
0xc10190 std::_Destroy_range<std::allocator<Particle> > 0.16
0xc0c8f0 std::_Vector_const_iterator<std::_Vector_val<Star,std::allocator<Star> > >::_Compat 0.13
(... several dozen rows between 0.02 and 0.16 omitted: compiler-generated ILT thunks and further std::vector iterator/allocator helpers for Particle, Star, Bullet, and Enemy, plus small entries such as apply_surface, SDL_UpperBlit, Enemy::move, World::show, World::show_stars, and SDL_main, each at or below 0.11.)
0xc03bf0 World::show_player 0.02
0xc04550 World::show_enemies 0.02
0xc06e30 Bullet::show 0.02
0xc07430 Collision_Detection 0.02
0xc08060 std::vector<Particle,std::allocator<Particle> >::~vector<Particle,std::allocator<Particle> > 0.02
0xc08100 std::vector<Particle,std::allocator<Particle> >::begin 0.02
0xc08c50 std::_Vector_const_iterator<std::_Vector_val<Bullet,std::allocator<Bullet> > >::operator!= 0.02
0xc09d80 std::vector<Particle,std::allocator<Particle> >::_Destroy 0.02
0xc0a510 std::vector<Bullet,std::allocator<Bullet> >::_Tidy 0.02
0xc0a7f0 std::_Vector_iterator<std::_Vector_val<Bullet,std::allocator<Bullet> > >::operator++ 0.02
0xc0a8a0 std::_Vector_const_iterator<std::_Vector_val<Bullet,std::allocator<Bullet> > >::operator== 0.02
0xc0abf0 std::vector<Star,std::allocator<Star> >::_Orphan_range 0.02
0xc0bfc0 std::vector<Bullet,std::allocator<Bullet> >::size 0.02
0xc0c220 std::_Vector_const_iterator<std::_Vector_val<Bullet,std::allocator<Bullet> > >::_Vector_const_iterator<std::_Vector_val<Bullet,std::allocator<Bullet> > > 0.02
0xc0c290 std::_Vector_const_iterator<std::_Vector_val<Bullet,std::allocator<Bullet> > >::operator* 0.02
0xc0ea20 std::_Allocate<Bullet> 0.02
0xc0ef70 std::_Unchecked<std::_Vector_val<Bullet,std::allocator<Bullet> > > 0.02
0xc0f0b0 std::_Rechecked<std::_Vector_val<Bullet,std::allocator<Bullet> > > 0.02
0xc0f100 std::find_if<std::_Vector_iterator<std::_Vector_val<Particle,std::allocator<Particle> > >,bool (__cdecl*)(Particle)> 0.02
0xc0f190 std::_Unchecked<std::_Vector_val<Particle,std::allocator<Particle> > > 0.02
0xc0f600 std::_Remove_if<Star *,bool (__cdecl*)(Star)> 0.02
0xc0fb50 std::forward<Bullet const &> 0.02
0xc117f0 std::vector<Particle,std::allocator<Particle> >::end 0.02
0xc11d40 std::_Find_if<Particle *,bool (__cdecl*)(Particle)> 0.02
0xc11ef0 std::_Find_if<Star *,bool (__cdecl*)(Star)> 0.02
0xc128a0 std::vector<Bullet,std::allocator<Bullet> >::_Ucopy<std::_Vector_const_iterator<std::_Vector_val<Bullet,std::allocator<Bullet> > > > 0.02
0xc13210 std::allocator<Particle>::construct 0.02
132 functions, 610 instructions, Total: 6224 samples, 100.00% of shown samples, 20.72% of total session samples
Edited by chondee, 17 January 2012 - 01:16 AM.
### #8fastcall22 Moderators
Posted 17 January 2012 - 09:22 AM
EDIT:
Since then I tested it with part.SetPosition(x,y); and App.Draw(part); commented out, so they are not drawn, and still Particle::show takes up 61% of the resources (was 74 previously)... with that commented out, the SDL and SFML Particle::show() are identical, except in SDL they actually get drawn.
This would suggest that the API is the bottleneck (as indicated earlier in the thread). If I recall correctly, SFML uses OpenGL 1.1 immediate-mode calls, which would mean for every particle rendered, there's a call to glBindTexture, glPushMatrix/glPopMatrix, and glBegin/glEnd. For something like a particle system, the overhead in each of these calls, while not significant on their own, can snowball. To reduce the overhead from texture switching, place all of your particle textures on one sheet. Since SFML doesn't seem to have any feature that will allow us to assign a part of an Image to a Sprite, you'll need to do the rendering yourself through raw OpenGL calls. By doing so, you can optimize out some OpenGL calls and will allow you to do batching, among other things:
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA ); // sf::BlendMode::Alpha
Particle::particleSheet.Bind(); // sf::Image::Bind, essentially calls glBindTexture( GL_TEXTURE_2D, particleSheet.handle )
glBegin( GL_QUADS ); {
	for ( Particle& p : v_particles ) {
		const Rect2f& texRect = getTextureRect( p.getTextureIdx() );
		Vector2f coord[2] = {
			p.getPosition() + Vector2f( -1, -1 ) * (p.getScale() / 2.f ),
			p.getPosition() + Vector2f(  1,  1 ) * (p.getScale() / 2.f ),
		};

		glColor4ub( p.color().r, p.color().g, p.color().b, p.color().a );
		glTexCoord2f( texRect.left,  texRect.bottom ); glVertex2f( coord[0].x, coord[0].y );
		glTexCoord2f( texRect.right, texRect.bottom ); glVertex2f( coord[1].x, coord[0].y );
		glTexCoord2f( texRect.right, texRect.top );    glVertex2f( coord[1].x, coord[1].y );
		glTexCoord2f( texRect.left,  texRect.top );    glVertex2f( coord[0].x, coord[1].y );
	}
} glEnd();
For further optimizations, you can use VBOs, use the CPU to update all the vertices of all the particles, then send the entire buffer to the GPU in one call.
### #9Serapth Members
Posted 17 January 2012 - 10:29 AM
This is exactly the problem, and somewhat the solution. See here.
Question: Because my particle engine can't draw more than 3000 particles before it starts to lag...
SFML 1.6 is definitely too slow for this.
SFML 2.0 (current) is better if you draw multiples sprites that use the same texture.
SFML 2.0 (future) will be much better, just wait a little bit
### #10Serapth Members
Posted 17 January 2012 - 10:33 AM
To reduce the overhead from texture switching, place all of your particle textures on one sheet. Since SFML doesn't seem to have any feature that will allow us to assign a part of an Image to a Sprite, you'll need to do the rendering yourself through raw OpenGL calls. By doing so, you can optimize out some OpenGL calls and will allow you to do batching, among other things:
As a corollary to my other post, this is why it doesn't work in 1.6 and sorta works in 2.0. Instead of using sf::Image, you use sf::Texture in 2.0, which allows you to load your image from a single sprite sheet.
### #11chondee Members
Posted 17 January 2012 - 02:54 PM
Thank you fastcall22 and Serapth, this is really helpful.
I am wondering, is SFML 2.0 stable enough to base my whole project on it?
Also, if it is not, are there many things that need to be changed in my code for the SFML 1.6 <-> SFML 2.0 switch, or do most of the calls remain the same, with the changes being "behind the scenes" implementations?
Thanks for the sprite sheet implementation too, in SDL I just used a SDL_Rect[] as a possible offset to apply_surface, which was quite convenient, but this way I can do it in SFML too, and it's time I start getting familiar with some raw opengl too.
### #12Serapth Members
Posted 17 January 2012 - 03:04 PM
Thank you fastcall22 and Serapth, this is really helpful.
I am wondering, is SFML 2.0 stable enough to base my whole project on it?
Also, if it is not, are there many things that need to be changed in my code for the SFML 1.6 <-> SFML 2.0 switch, or do most of the calls remain the same, with the changes being "behind the scenes" implementations?
Thanks for the sprite sheet implementation too, in SDL I just used a SDL_Rect[] as a possible offset to apply_surface, which was quite convenient, but this way I can do it in SFML too, and it's time I start getting familiar with some raw opengl too.
The differences aren't too major; you should be able to port without too much issue. You don't need to do an SDL_Rect implementation. Load an image as you would normally, but instead of using sf::Sprite, you use sf::Texture, which you can populate using LoadFromImage. The majority of other changes are small but annoying. The coordinate system for Sprites/Textures has been updated ( to make sense, the old naming convention was stupid ), and obviously Texture was added for much the reasons of the problems you've run into; otherwise the biggest changes are input related. sf::Input is gone, replaced by two classes of static functions, sf::Keyboard and sf::Mouse.
### #13chondee Members
Posted 17 January 2012 - 03:11 PM
I see, I'll port this early version to SFML 2.0, learn how input is working, and when everything seems comfortable I'll port my full project.
### #14Serapth Members
Posted 17 January 2012 - 03:24 PM
I see, I'll port this early version to SFML 2.0, learn how input is working, and when everything seems comfortable I'll port my full project.
Input changed from using sf::RenderWindow.GetInput() to two separate classes of static functions, sf::Keyboard and sf::Mouse.
So, before you would go
myAppWindow->GetInput()->IsKeyDown(someKey);
You now do:
sf::Keyboard::IsKeyDown(someKey);
Ditto for the mouse functions, which have also been split out. The change does make sense, but will potentially cause a number of changes to be required.
### #15chondee Members
Posted 17 January 2012 - 07:22 PM
Thanks,
Fortunately I haven't even started looking into SFML 1.6's input either, so it won't make much of a difference to me, I'll just start learning 2.0's input.
### #16chondee Members
Posted 18 January 2012 - 03:12 AM
So, I have built the SFML 2.0 libraries, and converted my code for SFML 2.0.
There was quite a huge performance gain, everything looked fine, so I started writing the keyboard input functions.
I am polling for events in the main loop, every frame like this:
while (App.PollEvent (Event))
{
//myWorld.myPlayer.handle_input();
//if (Event.Type == sf::Event::Closed)
//{
// App.Close();
//}
}
Even if the loop is empty, like above, the performance drops significantly, the frame rate fluctuates between 20-60.
Am I doing the polling wrong?
EDIT:
SOLVED
I got the answer on SFML forums
Can you try to comment line 121 of src/SFML/Window/WindowImpl.cpp (ProcessJoystickEvents();) and recompile SFML?
Edited by chondee, 18 January 2012 - 03:31 AM.
### #17BeerNutts Members
Posted 18 January 2012 - 12:29 PM
Well, I had a suggestion about why show was so slow using v1.6, when using SFML or SDL (even commenting out the SFML draw).
it looks like, in SDL, "part" is a pointer, while, when using SFML, it is not. Try setting the SFML version's part from Sf::Sprite part to Sf::Sprite *part.
It's probably all the copying it has to do.
My Gamedev Journal: 2D Game Making, the Easy Way
---(Old Blog, still has good info): 2dGameMaking
-----
"No one ever posts on that message board; it's too crowded." - Yoga Berra (sorta)
### #18chondee Members
Posted 18 January 2012 - 04:39 PM
Well, I had a suggestion about why show was so slow using v1.6, when using SFML or SDL (even commenting out the SFML draw).
it looks like, in SDL, "part" is a pointer, while, when using SFML, it is not. Try setting the SFML version's part from Sf::Sprite part to Sf::Sprite *part.
It's probably all the copying it has to do.
Well, in SDL you have Surfaces that are associated with the image files (in my case)
To avoid having each particle load the same image that they share, I just used a pointer to the same Surface.
In SFML, there is a separate Image (or Texture in 2.0) that contains the actual image, and there is a Sprite, that (the way I understand it) is kind of like a pointer to the image.
I can set the sprite to an image, but it will only point to that one image, it won't contain the image file's data.
That's why in a traditional sense I wasn't using * pointer, but conceptually I was.
Either way, thanks for your comment, so far I seemed to have had the performance issues solved with using SFML 2.0.
btw I am really new to SFML, so please correct me if my understanding of this seems to be wrong
### #19BeerNutts Members
Posted 18 January 2012 - 07:53 PM
Well, I had a suggestion about why show was so slow using v1.6, when using SFML or SDL (even commenting out the SFML draw).
it looks like, in SDL, "part" is a pointer, while, when using SFML, it is not. Try setting the SFML version's part from Sf::Sprite part to Sf::Sprite *part.
It's probably all the copying it has to do.
Well, in SDL you have Surfaces that are associated with the image files (in my case)
To avoid having each particle load the same image that they share, I just used a pointer to the same Surface.
In SFML, there is a separate Image (or Texture in 2.0) that contains the actual image, and there is a Sprite, that (the way I understand it) is kind of like a pointer to the image.
I can set the sprite to an image, but it will only point to that one image, it won't contain the image file's data.
That's why in a traditional sense I wasn't using * pointer, but conceptually I was.
Either way, thanks for your comment, so far I seemed to have had the performance issues solved with using SFML 2.0.
btw I am really new to SFML, so please correct me if my understanding of this seems to be wrong
Typically, you load the image once, and you create a Sf::Sprite from that image. But, that has nothing to do with it being a pointer or not. You can create a new Sf::Sprite as a pointer for all your sprites, and, when you assign part, you're just copying the pointer (4 bytes, 1 CPU operation) instead of the whole Sprite class (much larger, taking a memcpy).
So, the point I was making really doesn't have anything to do with SFML or SDL; rather, with the speed it takes to copy a pointer, versus copying a whole class structure.
### #20chondee Members
Posted 18 January 2012 - 10:50 PM
Typically, you load the image once, and you create a Sf::Sprite from that image. But, that has nothing to do with it being a pointer or not. You can create a new Sf::Sprite as a pointer for all your sprites, and, when you assign part, you're just copying the pointer (4 bytes, 1 CPU operation) instead of the whole Sprite class (much larger, taking a memcpy).
So, the point I was making really doesn't have anything to do with SFML or SDL; rather, with the speed it takes to copy a pointer, versus copying a whole class structure.
Thank you for the explanation, I was only considering the memory difference between the actual Image and the Sprite, but you are right. With the high number of particles being created and displayed constantly, the difference in CPU and memory cost between copying whole Sprite objects and copying only pointers might be significant enough to consider.
I will try switching to Sprite pointers instead of sprites now.
# Question c7e82
Apr 7, 2017
Here's what I got.
#### Explanation:
You need a balanced chemical equation to work with. Formic acid will react with ethanol to produce ethyl formate and water
$\text{HCOOH}_{(aq)} + \text{CH}_3\text{CH}_2\text{OH}_{(aq)} \rightarrow \text{C}_3\text{H}_6\text{O}_{2(aq)} + \text{H}_2\text{O}_{(l)}$
Notice that it takes $1$ mole of formic acid to react with $1$ mole of ethanol in order to produce $1$ mole of ethyl formate.
Your goal here is to figure out which one of the two reactants, if any, acts as a limiting reagent, i.e. is completely consumed before all the moles of the other reactants get the chance to take part in the reaction.
Convert the masses of the two reactants to moles by using the molar masses of formic acid and of ethanol, respectively
$12.2\ \text{g} \times \frac{1\ \text{mole HCOOH}}{46.025\ \text{g}} = 0.2651\ \text{moles HCOOH}$
$8.16\ \text{g} \times \frac{1\ \text{mole CH}_3\text{CH}_2\text{OH}}{46.068\ \text{g}} = 0.1771\ \text{moles CH}_3\text{CH}_2\text{OH}$
As you can see, you don't have enough moles of ethanol to ensure that all the moles of formic acid will get the chance to react $\to$ ethanol will act as the limiting reagent.
To find the theoretical yield of the reaction, calculate the number of moles of ethyl formate produced by the reaction.
You know that $0.1771$ moles of ethanol will be completely consumed. The reaction will also consume $0.1771$ moles of formic acid and produce $0.1771$ moles of ethyl formate $\to$ this is because of the $1:1$ mole ratios that we pointed out earlier.
To convert the number of moles of ethyl formate to grams, use the compound's molar mass
$0.1771\ \text{moles C}_3\text{H}_6\text{O}_2 \times \frac{74.08\ \text{g}}{1\ \text{mole C}_3\text{H}_6\text{O}_2} = \mathbf{13.1\ \text{g}}$
The answer is rounded to three sig figs.