Columns: paper (string, lengths 9–16) · proof (string, lengths 0–131k)
math/0102072
First note that the LHS in REF is MATH, whenever MATH. We can thus assume right away that MATH. Then, using the special extension with respect to the normal vector field MATH, we get MATH . Furthermore, the LHS, MATH, is of order MATH whenever MATH are in MATH. This finishes the proof of REF . To prove REF , assume that MATH are in MATH. Then MATH . This finishes the proof.
math/0102072
REF can be proved by calculating the lift of vector fields in local coordinates. First, taking local coordinates near the corner in MATH, we can introduce local coordinates on MATH, projective with respect to MATH: MATH . These coordinates are valid near MATH but away from MATH. There we calculate MATH . Near the fibre diagonal MATH we can assume that the coordinates MATH and MATH are paired, thus MATH is described by MATH. Near MATH, but away from MATH, we can introduce coordinates on MATH, projective with respect to MATH: MATH . From this we get: MATH . From these calculations REF is obvious. Finally, near the lifted diagonal MATH we can add the condition that the coordinates MATH and MATH are also paired. Then MATH is just MATH and the assertion of REF also follows from REF . For REF , we just mention that the map from MATH to the interior of MATH is given as follows. A nonzero element MATH lifts via MATH to a nonvanishing vector field MATH on MATH, which generates a flow MATH on that face. This flow is identified with the linear flow generated by MATH in the fibres of MATH.
math/0102072
Reduction to the case MATH is done as usual using NAME 's argument (see CITE or REF). In REF it then suffices to verify the mapping property for an operator MATH supported in a coordinate patch. There, write MATH. We can estimate MATH . Now, it is easy to see that there is an operator MATH such that MATH. Then, the above is certainly smaller than MATH . The MATH's are finite, since MATH maps the function MATH to MATH and the adjoint kernel satisfies the same estimates; REF are then consequences of REF using the fact that MATH and MATH.
math/0102072
This is the standard symbolic construction using the symbol sequence MATH . The symbol MATH is independent of MATH and invertible. We can therefore find a MATH and, because of the surjectivity of the symbol map MATH, also a MATH with MATH . We now set as usual MATH . The construction can be made holomorphic in the parameter MATH.
math/0102072
First, the exactness of the sequence in REF should be clear. The algebraic properties follow, since composition in MATH is induced by composition in MATH: Taking two operators MATH, MATH in the small calculus MATH, it follows from the composition formula, REF , that MATH is MATH whenever MATH or MATH is MATH. Thus, the composition of MATH and MATH is independent of the choice of extension to MATH.
math/0102072
First note that it follows from REF that for MATH tangent to the boundary MATH . Using this, we can write MATH which proves the claim.
math/0102072
First, we know from the general MATH-calculus that the operator MATH is invertible except if MATH is an indicial root, where it has a null space in the ``extended MATH-sense". Since MATH anticommutes with MATH, one easily finds that the set of indicial roots MATH is invariant under multiplication with MATH. Also it follows immediately from REF that MATH. It therefore suffices to show that MATH . This is easy. Let MATH. If MATH then the distribution MATH becomes integrable in MATH with respect to MATH and extends to a distribution (also denoted MATH) on MATH. This distribution fulfills MATH in all of MATH for some MATH. By elliptic regularity, we can conclude that MATH is in MATH, from which the claim follows.
math/0102072
Choose a cutoff function MATH. Then, the compactly supported section MATH, as treated above, is responsible for the term MATH in the index set MATH. Choosing a noninteger MATH we know that MATH, and by REF there is MATH satisfying MATH . By the general theory of MATH-pseudodifferential operators (compare REF) MATH . Now, choosing another cutoff function MATH with MATH, we can write MATH . But MATH is compactly supported and can therefore be treated as in the first part.
math/0102072
First, note that MATH restricted to MATH is just the family of (constant coefficient) fibre NAME operators given by MATH on MATH. Hence REF follows by applying REF fibre by fibre.
math/0102072
For purposes of notational simplicity we are going to ignore coefficients. From the symbolic construction, we have holomorphic families MATH . Setting MATH, we want to solve MATH holomorphically in MATH to order MATH at MATH. For this, we write MATH and MATH in a (finite) NAME series (sum): MATH and the maps MATH are holomorphic. As part of the proof we will show that MATH . In the first step, we have to solve MATH . Write MATH for MATH. It is a holomorphic family MATH. By REF , we can find a holomorphic family MATH which can be extended to a holomorphic family MATH . Thus, we have shown the case MATH. Assuming inductively that the equation has been solved up to order MATH, with index sets as indicated, we set MATH . Since MATH has index set MATH and MATH has index set MATH at MATH, the index set for MATH is just MATH. Now, the equation MATH can be solved holomorphically in MATH by REF and MATH has index set MATH. Setting MATH and MATH yields the result.
math/0102072
In REF , only the independence of the extension of MATH might be worth a little thought. It follows from the fact that for MATH we have MATH . Thus, the expression defining MATH vanishes for the difference of two extensions of MATH. REF is proved by induction. We have proved the case MATH in REF ; now assume that MATH . Then, starting with MATH such that MATH, we have MATH. Therefore, MATH by REF and the induction hypothesis.
math/0102072
This Lemma is a special case of a similar statement for elliptic operators on a compact manifold minus a point. An instance of this was presented in REF and we use this result for the proof of REF . Write MATH around MATH and consider the little calculation MATH . This allows us to calculate the indicial family at MATH as MATH which is just the indicial family whose roots were calculated to be MATH in REF .
math/0102072
In this proof, we will uniformly assume that the coefficient bundle is MATH and drop it from our notation. Then MATH . In order to reduce MATH to an error term on the face MATH, we have to solve away its expansion at MATH. Since by REF the indicial roots of MATH are just MATH, the theory of elliptic MATH-differential operators (see REF, or REF below for a similar statement) tells us that the equation MATH for the zero-mode part can be solved with MATH . Extend MATH and MATH to kernels MATH and MATH on MATH. All this can be done holomorphically in MATH. Noting that MATH we get, using REF , MATH . It remains to solve away MATH, the restriction of MATH to MATH. To do this, just define MATH to be MATH times the extension of MATH to MATH. Then, setting MATH proves the claim.
math/0102072
An accidental multiplicity in MATH is a point MATH with MATH such that MATH with MATH positive eigenvalues of MATH and MATH. This can be rewritten as MATH from which the assertion in REF immediately follows.
math/0102072
We remind the reader that we will be working with the coefficient bundle MATH. Assume for the moment that MATH, that is, MATH does not have any real indicial roots. Also assume for simplicity that MATH is not an accidental multiplicity, that is, MATH for MATH. Since MATH we can solve MATH with MATH . We can extend MATH to MATH in such a way that all the leading coefficients of MATH at MATH are in the zero modes and independent of MATH close to MATH. This gives an element MATH . This operator can be used to solve away the error term MATH. By REF we have MATH where the form of MATH is due to our special choice of extension. We are left with the problem of solving away an error perpendicular to the zero-modes. This is done as usual by extending MATH to MATH such that the coefficients associated to MATH at MATH are just MATH applied to the corresponding coefficients of MATH at MATH. Define MATH to be MATH times this extension, thus MATH . With REF it is now easy to see that, setting MATH, we have MATH with index sets as indicated in the Proposition. To prove the claimed meromorphy, a more careful analysis of the term MATH above is needed. Using MATH, recall that since MATH, the NAME transform MATH is holomorphic and (still assuming MATH) we have MATH . The inverse of the indicial family has the form REF . The integral is thus well defined, as long as MATH stays away from the real axis. When MATH approaches the real axis from MATH, only the eigenvalues smaller than MATH contribute a singularity. Denote by MATH the eigenvalues of MATH which are smaller than a given MATH. Writing MATH and MATH for the projections onto the corresponding eigenspaces of MATH, we get the decomposition MATH . The first term in this decomposition is uniformly bounded in MATH and has no poles in the strip MATH. 
Since the term MATH is rapidly decreasing at real infinity, performing the integral in REF for this part yields a section in MATH, which depends holomorphically on MATH. Inserted into REF , the summands of the third term yield integrals of the form MATH . For small MATH this can be evaluated by shifting contours to MATH. From the residue theorem (using the same method for the term stemming from the REF -eigenvalue) we get an expansion for MATH of the form MATH . Here, the coefficients MATH are holomorphic with values in the MATH-eigenspaces of MATH. The expansion for MATH can be similarly obtained by shifting contours to MATH. REF is meromorphic in the sense of being an expansion with meromorphic coefficients. Its extension to MATH is still meromorphic in this strong sense, that is, as an element in MATH, as long as we stay away from accidental multiplicities. Away from these points, the extensions of the leading coefficients MATH of the expansion can (be clearly distinguished and therefore) still be chosen to lie in the MATH-eigenspaces of MATH. Around points of accidental multiplicity, the extension of REF is still meromorphic in the weaker sense claimed in the Proposition.
math/0102072
This Lemma is a variation of REF, with an easier proof. Again, we first assume that MATH. We have to show how to solve for a section MATH of the form MATH, with MATH meromorphic in the MATH-sections over MATH and MATH. As usual, decompose MATH with MATH and MATH. Starting with MATH, we can just set MATH which, according to REF , continues to be meromorphic with extra poles arising only at points of accidental multiplicity. NAME REF solves our problem to first order in the zero modes: MATH . Again, solving for MATH is easy. Just set MATH . This solves the problem to first order. Iteration of this procedure finishes the proof.
math/0102072
The proof closely follows the proof of the analytic NAME theorem. For a MATH in MATH denote by MATH the projection (of finite rank) onto the null space of MATH. Then with respect to MATH and MATH the operator MATH can be written as the matrix MATH . The operator MATH is of the form MATH, with MATH . Also MATH is invertible for MATH close to MATH with an inverse of the same type. To see this, we write MATH and use the following ``semi-ideal" property of the space MATH, which is described in CITE, and which shows that the last term in the above sum is of the same type as the operator MATH: For MATH and MATH we have MATH. The rest of the proof now follows as usual: Assuming the invertibility of MATH, the operator MATH is invertible exactly if the endomorphism MATH is invertible, since MATH . Thus the inverse exists, and is of the claimed form, whenever MATH. The determinant is not constantly REF, since MATH is invertible by construction.
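Since all symbols in this proof are elided, the linear-algebra step behind "MATH is invertible exactly if the endomorphism MATH is invertible" can only be restated generically. In placeholder notation (a, b, c, d are not the document's symbols), it is the Schur-complement factorization: when d is invertible,

```latex
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
=
\begin{pmatrix} 1 & b\,d^{-1} \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} a - b\,d^{-1}c & 0 \\ 0 & d \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ d^{-1}c & 1 \end{pmatrix},
```

so the block operator is invertible if and only if the Schur complement a − b d⁻¹ c is, with the inverse given by the same triangular factors.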
math/0102072
Let MATH be a section in MATH such that MATH. Then, starting with the parametrix described in REF , we get the equation MATH on MATH. The adjoint of this equation can then be applied to MATH: MATH . Since by REF the remainder MATH has an expansion with index set of type MATH at the left face, we find MATH, with MATH as above. REF is an immediate consequence of this, since all indices in MATH have positive real part. The proof of REF then follows from this, once one recalls that MATH is obtained from MATH by continuation from MATH. Alternatively, MATH . All of these null spaces are of finite dimension, since the operator MATH is NAME on MATH. To analyze the structure of the null space of MATH, with MATH, we temporarily write MATH and MATH. Then MATH is our first guess at a parametrix: MATH with MATH, MATH and index sets given by MATH . Taking the limit MATH as before proves the claim.
math/0102072
Injectivity of the left map is clear, so is exactness in the middle. We only have to show that the map on the right is (well defined and) surjective. Also, it should be clear that for MATH, the expression MATH lies in MATH and pairs to MATH with MATH under REF . NAME then follows from the fact that MATH where we have chosen MATH such that MATH is NAME on MATH. Since MATH with this property can be chosen arbitrarily small, the claim follows.
math/0102072
To prove REF , note that MATH is not a branching point and MATH. It follows from the self-adjointness of MATH and the spectral radius formula that MATH. Thus MATH can have a pole of order at most MATH with a NAME expansion as in REF . The operators MATH fulfill the equalities MATH from the first two of which the claim follows. The proof of REF is analogous, when one uses the fact that around a branching point MATH the surface MATH is uniformized by MATH.
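For reference, the spectral radius formula invoked here reads, for a bounded self-adjoint operator A on a Hilbert space (generic notation, not the document's elided symbols):

```latex
\|A\| \;=\; r(A) \;=\; \sup\{\,|\lambda| : \lambda \in \sigma(A)\,\}
\;=\; \lim_{n\to\infty}\|A^{n}\|^{1/n}.
```

Combined with knowledge of the spectrum near the point in question, this is the standard way to bound the order of a pole of the resolvent.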
math/0102072
REF should be clear, REF follows from REF , since MATH is orthogonal to the MATH-null space of MATH. Also REF follow from REF , since MATH certainly belong to the formal eigenspaces to first order and the sums on the right-hand side are just the most general linear combination of formal eigensections, which are MATH for MATH in the physical domain.
math/0102072
This is a simple consequence of ``NAME 's formula", which looks like MATH in our context.
math/0102072
First, the above Corollary implies that the MATH and the MATH span the generalized eigenspace of MATH. The boundary values of any element in the generalized eigenspace have a unique decomposition into their MATH-parts, which can be written in two ways according to REF : MATH . This shows that MATH and MATH are inverses of each other! To prove unitarity, we use REF . For any MATH, for instance MATH for a fixed eigenvalue MATH, we get MATH and so on.
math/0102072
This proof is a good exercise to test the reader's versatility with the concepts introduced in this Section. We give the details for reference. First, using REF , we know that MATH . Therefore MATH can be written as the limiting function of a function on MATH in two ways: MATH . Note that in REF the NAME operator is applied to a function which is already in MATH! Thus MATH . Recalling REF of MATH, REF is proved. To prove REF , simply recall that MATH is in the extended MATH-null space of MATH. Thus, from the definition of the boundary pairing, MATH as claimed. The last equation then follows from REF .
math/0102072
We show how the map MATH is defined. Starting with MATH we first blow down all those faces which do not intersect any of the faces lifted from the left factor: MATH thus getting MATH . Now, obviously, MATH is a p-submanifold of MATH, that is, their blow ups can be exchanged and the triple diagonal can be blown down. The same argument lets us exchange the blow ups of MATH and MATH, yielding MATH . Now the face MATH in question here is really the preimage of this face under the blow up of MATH, and one easily checks that it is transversal to MATH. Being disjoint from the other blow ups, we can switch the corresponding blow up to the front and then blow down MATH altogether. Following this, it is easy to see that the blow up of MATH can be ``pulled to the front", if one switches the order of the blow ups MATH and MATH. Thus this face can be blown down, too, leaving us with MATH where we have written the faces MATH, MATH in their blown down form in MATH. It is important to note that, again, MATH is really the preimage of that face under the blow up of MATH, making it transversal to the lifted left diagonal. Thus MATH can be blown down and we are left with MATH which projects nicely to MATH.
math/0102072
First, recall the partial integration formula for the NAME derivative with respect to a vector field MATH on a manifold with boundary MATH: MATH . Now let MATH and MATH. We then have MATH . Since near MATH the density factor in MATH is of the form MATH we see that the last integral in the above expression vanishes for MATH, but contributes the constant MATH for MATH. Thus, recalling that MATH and using REF , we find for MATH: MATH . Note also that in the case MATH we have MATH therefore MATH that is, the mean value condition is fulfilled.
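With the specific symbols elided, the partial integration formula used above can only be restated generically: for a vector field V and a smooth density μ on a compact manifold M with boundary, Cartan's formula and Stokes' theorem give (generic notation, not the document's symbols)

```latex
\int_{M} \mathcal{L}_{V}(f\,\mu)
\;=\; \int_{M} d\,\iota_{V}(f\,\mu)
\;=\; \int_{\partial M} \iota_{V}(f\,\mu),
\qquad\text{hence}\qquad
\int_{M} V(f)\,\mu
\;=\; -\int_{M} f\,\operatorname{div}_{\mu}(V)\,\mu
\;+\; \int_{\partial M} f\,\iota_{V}\mu ,
```

where div_μ(V) is defined by 𝓛_V μ = div_μ(V) μ. The boundary contribution is exactly what produces the constant term discussed in the proof when the density degenerates at the boundary.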
math/0102072
We start by solving the above equation to first order at MATH. Applying MATH to both sides, we get first MATH . This is to be understood as an equation on the fibres of MATH. Writing MATH (and forgetting about the MATH-index until it is really needed) we get MATH . Taking the NAME transform in MATH yields MATH the solution of which is easily seen to be MATH which also shows that the solution is of order MATH at MATH in the calculus. Thus the compatibility condition at the corner reads MATH and the heat equation on the interior of MATH is MATH . Writing MATH we get the equation MATH which is solved by MATH . The compatibility condition at MATH is then MATH where MATH is the kernel of the projection onto the null space of MATH. Then REF is the initial value for the problem at MATH. In order to solve the heat equation at MATH, recall that MATH . Thus for MATH the expression MATH a priori only lies in MATH. However, it follows from REF that MATH can be extended to the interior in such a way that MATH and MATH . It is then clear that the heat equation at MATH can be solved by choosing MATH more concretely, namely MATH , and then using the extension which fulfills REF . Since the solutions at the faces MATH, MATH, MATH are compatible, we can choose an extension MATH with MATH . This gives a parametrix, which solves the heat equation to first order at all the faces MATH . In order to further improve our parametrix at MATH, we have to successively find MATH, MATH with MATH and MATH . Again writing MATH and so on, we have MATH . That this can be uniquely solved is the content of the following lemma: Let MATH and MATH. Then there is a unique solution MATH with MATH . First, the homogeneous equation MATH is uniquely solved by MATH. Writing MATH we see that MATH . Thus MATH solves the equation MATH that is, MATH is of the form MATH which is obviously in MATH. This also finishes the proof of the Theorem.
math/0102072
First, since the bundle MATH is just pulled back from MATH, it suffices to prove the above statements in the blown down picture. To prove REF , restrict to the face MATH. There, we can assume the coordinates MATH, MATH to be paired (compare REF). For a section MATH, homogeneous with respect to the grading of MATH, we can therefore define MATH . Then MATH which shows that the map MATH has image in the right-hand side of REF , and is an isomorphism of algebras. This proves REF . REF can be obtained by iteration of this scheme using an orthonormal basis in MATH and MATH respectively. The important observation is that an element MATH in MATH can be lifted from the left or the right to MATH, and we can define as above MATH and the same can be done for any MATH at MATH.
math/0102072
Since the connection MATH is the connection on MATH pulled back from MATH, it suffices to prove REF for vectors MATH, tangent to the fibre diagonal MATH. In view of the proof of REF and the fact, shown in REF , that MATH preserves the decomposition MATH over the boundary, the assertion is then clear. For REF to REF , look at the decomposition of the curvature (and see the remark at the end of REF) MATH . REF then follow immediately from REF .
math/0102072
REF is proved as in REF by writing down a local basis for MATH. To prove REF , we first check that sections in MATH satisfy the condition in REF of MATH. Note that for sections MATH the derivative MATH still lies in MATH. Thus MATH and MATH is of order at most MATH at MATH. The proof of the reverse inclusion proceeds by induction and is left to the reader.
math/0102072
We make abundant use of the following commutator identity: Let MATH, MATH. Then over MATH CASE: MATH ( MATH restricted to MATH) CASE: MATH is of order MATH at MATH. REF is just a consequence of our definition of MATH and MATH. For the proof of REF consider MATH . This is of order MATH at MATH, since it is covariant differentiation with respect to a vector tangent to MATH as can be seen from MATH . This finishes the proof of the Lemma. The proof of REF now proceeds as follows: Let MATH. Then using the identities from REF we get modulo lower order at MATH: MATH . Higher derivatives of the curvature do not contribute because of REF . To show REF , note that it is clear from REF that MATH all the higher derivatives being of order at most MATH. Thus MATH is in fact an element in MATH. The first equality in REF is now an immediate consequence of NAME 's rule while the second equality can be obtained by playing around with the definition in REF : MATH which is just MATH.
math/0102072
Again, since MATH is induced from MATH, away from MATH it suffices to prove REF for vectors MATH, tangent to the diagonal MATH. In view of the proof of REF the assertion is then clear. For REF , just note that MATH can be thought of as being of the form MATH for MATH. Away from MATH, REF then implies MATH which is obviously REF when restricted to MATH. The assertions extend to MATH by continuity.
math/0102072
REF follows from iterating the argument in the case of a simple rescaling, REF follows from REF and the fact that MATH on MATH is just induced from the connection on MATH.
math/0102072
As in REF this follows from the commutator identity in REF and the properties of the curvature described in REF . In addition, one has to verify that the term MATH (is of order REF and) preserves MATH at MATH! This will follow from the more concrete calculation of this term in REF below. Note also that MATH vanishes at MATH.
math/0102072
We do not give the details of this proof here. Basically, the bundle MATH on MATH can be lifted to MATH from the left and the right, using MATH, MATH. Composition of endomorphisms gives a well defined map MATH . The connections used in the definition of the rescaling can be lifted to MATH in the same way. Using compatibility of these connections with this product, as well as with the pullback and pushforward operations, the proof then boils down to NAME 's rule.
math/0102072
As an example, let us do the first part in REF (leaving out the ubiquitous MATH): MATH where, in MATH and MATH, we have used the fact that MATH vanishes at the face MATH.
math/0102072
This is really just a reformulation of REF . For REF just recall that we now also have to consider the action of MATH on MATH and MATH, which is MATH on MATH and MATH on MATH according to REF .
math/0102072
The proofs of REF follow the same pattern. To prove REF , note that MATH . This can be seen by looking at the map MATH flipping sides in MATH. Since, by assumption, the vector MATH is lifted from a vector on MATH, tangent to MATH, we can restrict ourselves to the following situation: MATH . Using REF above, we get MATH which proves REF . It now suffices to calculate the derivative of MATH in MATH along the fibres of MATH. But from REF , the assumption on MATH and REF , this is just a linear combination of sections of the type MATH . Thus for MATH representing a point in the fibre, we can use REF (and its analogue for sections lifted from the right) to get MATH which proves the claim.
math/0102072
The proof of REF at MATH is easy. First, recall again that MATH vanishes at MATH since MATH vanishes there. Thus MATH and we can calculate the derivative along the fibre as in the proof of REF . In this case, however, the constant term along the fibres does not vanish, accounting for the term MATH in the formula. Now using REF and writing MATH gives the result. It remains to show that the right-hand side in REF really is an element in MATH. This should be clear for the linear term in the fibres by REF . It is also true for MATH, since MATH is tangent to the fibres MATH, and can therefore be written in the form MATH for MATH. Then MATH but the order-REF part of MATH is seen to be in MATH in REF ! REF follows from REF . The proof of REF is completely analogous to REF once we restrict ourselves to MATH with extensions MATH in MATH as stated. Then again MATH and the result follows by calculating the derivative along the fibres as in REF . Finally REF : For MATH, MATH the term at MATH, MATH was calculated in REF .
math/0102072
REF follows from the fact that MATH vanishes at MATH and MATH where the last summand again vanishes at MATH. To prove REF , first note that MATH restricts as MATH at MATH. The first order contribution of MATH at MATH can be calculated using REF MATH as claimed.
math/0102072
These are direct consequences of the NAME REF for the NAME operator MATH and the calculations made in the above series of Propositions, especially REF. To see that we can use the NAME operator MATH instead of MATH, note that by the NAME formula and REF the difference of their squares: MATH vanishes at MATH and MATH.
math/0102072
We prove REF using (the generalization to families of) REF . The connection MATH, for MATH, appearing in the definition of MATH was calculated in REF : MATH . This is of the form required in REF with MATH and MATH. The underlying vector space MATH is any fibre of the bundle MATH, the metric MATH is given by the fibre metric MATH and the coefficient algebra MATH is MATH.
math/0102072
The solutions at the faces patch together because of REF . We thus get a parametrix which solves the heat equation to first order in the rescaled heat calculus. This is transformed into a parametrix with a smoothing remainder using the Composition REF . Then a final inversion step as in REF is performed.
math/0102072
REF follows from the fact, proved in REF (see also the remark in REF ), that the restriction of MATH to these corners contains no contribution of type MATH. Thus the supertrace has to vanish there. To prove REF we localize around the corner MATH (the argument at MATH is even easier), where we can use the coordinates MATH and MATH. Then, using the vanishing at the corner shown in REF , we can calculate the MATH-limit as follows: MATH for MATH with compact support and MATH. We can decompose MATH such that MATH vanishes at MATH, and MATH and MATH. Thus MATH which proves the claim.
math/0102072
First, at MATH the density factor looks like MATH. Therefore MATH . In REF , using the counting function MATH on MATH and REF we get: MATH .
math/0102072
Let MATH. Then MATH and MATH from which REF follow. REF follow from the usual NAME theoretic arguments.
math/0102072
Recall that MATH. We leave it to the more enthusiastic readers to use this with REF and the method of proof of REF to prove the claim. The appearance of the last summands in the MATH and MATH-terms above is due to the fact that MATH .
math/0102072
This is shown just as in the MATH-case. Recall that we know from REF that any element MATH is of the form MATH with MATH. Defining MATH to be the corresponding restriction map MATH we get the sequence MATH which is exact on the left and in the middle and compatible with the grading MATH. Thus MATH and we proceed to show that the right-hand side vanishes. The idea is to use the decomposition MATH and to show that MATH decomposes accordingly, that is, the projections on the left and right summand are maps in MATH. Once this is shown, the fact that the grading MATH switches the two summands in REF implies that the MATH eigenspaces of MATH in MATH have equal dimension. Start with MATH such that MATH. We may then assume that MATH pairs to MATH with MATH under the usual MATH-pairing REF . It then follows from REF that MATH for some MATH. According to REF this has the form MATH . Then MATH is a decomposition of the desired type, since MATH and analogously MATH.
math/0102072
The LHS in REF equals MATH as shown in REF . For the terms on the right-hand side use the fact that for MATH the relative curvature MATH is right NAME multiplication MATH with MATH, that is, (see REF) MATH . Recalling (see again CITE) that MATH one shows that MATH . Now rewrite REF as MATH and define MATH as in REF, but using MATH instead of MATH.
math/0102072
It suffices to prove this for MATH and MATH. If both MATH and MATH are even, the Lemma is a direct consequence of REF : MATH . When MATH and MATH are odd, we write MATH to see that MATH where we have used MATH.
math/0102072
For REF note that MATH . Analogously in REF , using MATH we get MATH and dividing everything by REF gives the result.
math/0102072
The proof proceeds as usual. MATH . The first summand on the right-hand side gives MATH since MATH is torsion-free. The second and third summands vanish since (MATH, MATH) is a NAME connection. The last summand gives MATH, which is just two times the vertical Laplacian in the above formula. This proves the first equation in the Proposition; for the proof of the second equation we refer to REF.
math/0102072
The NAME formula for the operator MATH with MATH is well known and proved for example in REF. For a general operator MATH just note that MATH from which the result follows immediately.
math/0102072
This follows from NAME 's formula once one has shown that MATH. This is done as in REF by calculating the normal action of the expressions appearing in REF , MATH, MATH, MATH and MATH, using their mapping properties as described at the beginning of this Appendix.
math/0102072
The proof proceeds just as in REF . MATH . Here we have used REF .
math/0102072
The first part is obtained using the specialization of the methods in REF based on the series of Propositions above. For REF we just mention that MATH . This is just of the type used in the NAME theorem REF .
math/0102074
We have, for arbitrary MATH (which we identify with MATH): MATH where we have used the form of the product REF the definition of the action REF and the undeformed coproduct of MATH. On the other hand we have: MATH which ends the proof.
math/0102074
Consider MATH, where MATH are homogeneous elements of MATH. On one hand: MATH on the other hand, calculating it directly: MATH .
math/0102083
Denote the right-hand side of REF by MATH. The bound MATH is immediate from the NAME inequality MATH . It remains to show MATH. We may fix a tree MATH such that MATH . From the definition of MATH we see that MATH if the constant MATH is chosen sufficiently large. The set on the left-hand side is the union of disjoint dyadic intervals MATH. Observe that MATH . By REF we have MATH . We thus have MATH . From the construction of MATH we have the pointwise estimate MATH . Integrating this on MATH and inserting into the previous we obtain MATH and the claim follows from REF .
math/0102083
The wave packets MATH are orthonormal whenever the MATH are disjoint. The claim then follows immediately from NAME 's inequality.
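If the elided inequality is Bessel's (an assumption consistent with the orthonormality just noted, since the wave packets form an orthonormal system), it reads, in generic notation:

```latex
\sum_{P} \bigl|\langle f, \phi_{P}\rangle\bigr|^{2} \;\le\; \|f\|_{L^{2}}^{2}
\qquad\text{whenever } \{\phi_{P}\}_{P} \text{ is orthonormal in } L^{2}.
```

This turns the square-function sum over disjoint tiles directly into an L² bound on f.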
math/0102083
This shall be a NAME version of the proof of REF. From REF it suffices to show that MATH for all MATH and all trees MATH, where MATH is the vector-valued function MATH . It suffices to prove this estimate in the case when MATH contains its top MATH, since in the general case one could then decompose MATH into disjoint trees with this property and then sum. In this case it thus suffices to show MATH . From the definition of MATH it is clear that we may restrict MATH and MATH to MATH, in which case it suffices to show MATH . We shall assume that MATH is centered at the frequency origin in the sense that REF is on the boundary of MATH. (The general case can then be handled by modulating by an appropriate NAME ``plane wave"). But then the linear operator MATH is a (vector-valued) dyadic NAME operator, and the claim follows from standard theory.
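The modulation step mentioned in parentheses is standard: conjugating by the plane wave M_ξ f(x) = e^{2πi x ξ} f(x) translates all frequency supports by ξ, since (generic notation)

```latex
\widehat{M_{\xi} f}(\eta) \;=\; \hat f(\eta - \xi),
\qquad M_{\xi}f(x) := e^{2\pi i x \xi} f(x),
```

so after conjugation one may indeed assume the tree is centered at the frequency origin, and the operator norms are unchanged because modulation is unitary on L².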
math/0102083
Let MATH, MATH be such that MATH. If MATH, then MATH, and so MATH. This proves the ``only if" part. Now suppose, to get a contradiction, that there is MATH and MATH such that MATH and MATH. Then MATH. If MATH, then we may find MATH such that MATH, hence MATH. But this implies that MATH, contradicting the disjointness hypothesis of the lemma. This proves the ``if" part.
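The argument above, like several later ones in this section, rests on the nesting property of dyadic intervals: two dyadic intervals are either disjoint, equal, or one contains the other. Since the lemma's actual tiles are redacted as MATH, the following is only an illustrative sketch of that trichotomy; the names `dyadic` and `relation` are hypothetical, not from the source.

```python
from fractions import Fraction

def dyadic(k, j):
    """The dyadic interval [k*2^j, (k+1)*2^j), as a pair of exact endpoints."""
    step = Fraction(2) ** j
    return (k * step, (k + 1) * step)

def relation(a, b):
    """Classify two dyadic intervals: 'disjoint', 'equal', or 'nested'."""
    (a0, a1), (b0, b1) = a, b
    if a1 <= b0 or b1 <= a0:
        return "disjoint"
    if (a0, a1) == (b0, b1):
        return "equal"
    return "nested"

# Exhaustive check of the trichotomy on a small grid of scales and positions:
# whenever two distinct dyadic intervals meet, one of them contains the other.
for j1 in range(-3, 4):
    for j2 in range(-3, 4):
        for k1 in range(-8, 8):
            for k2 in range(-8, 8):
                a, b = dyadic(k1, j1), dyadic(k2, j2)
                if relation(a, b) == "nested":
                    (a0, a1), (b0, b1) = a, b
                    assert (b0 <= a0 and a1 <= b1) or (a0 <= b0 and b1 <= a1)
```

The exact rational endpoints avoid any floating-point ambiguity at interval boundaries, which is exactly where the disjoint/nested distinction is decided.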
math/0102083
By REF , we need to show that MATH for any collection MATH of quartiles in MATH such that the tiles MATH are disjoint. Fix MATH, and define the set MATH by MATH . By REF we may write MATH for all MATH. We can simplify this as MATH where MATH is one of the adjoints of the bilinear NAME transform MATH. Since the MATH are orthonormal as MATH varies in MATH, we may use NAME 's inequality to estimate the left-hand side of REF by MATH . The claim then follows from REF and the assumptions MATH, MATH.
math/0102083
By repeating the proof of REF , we reduce to showing that MATH where MATH is an arbitrary subset of MATH. By duality we may write the left-hand side as MATH for some MATH-normalized function MATH. By REF we may estimate this by MATH . The claim then follows from REF .
math/0102083
By REF it suffices to show that MATH for some MATH and some MATH-tree MATH. We may assume (as in the proof of REF ) that MATH contains its top MATH, in which case we may reduce to MATH . From REF we see that the only quartiles MATH which matter are those such that MATH. Thus we may restrict MATH, MATH, MATH, MATH to MATH. Fix MATH, MATH, and define the set MATH by MATH . By REF as before we have MATH for all MATH. By the dyadic NAME estimate for the tree MATH we may thus reduce REF to MATH . But this follows from REF and the assumptions MATH, MATH.
math/0102083
Without loss of generality we may assume that MATH is a REF-tree. We then use NAME to estimate the left-hand side by MATH . From REF we have MATH for MATH. Also, since the singleton tree MATH is a MATH-tree with top MATH, we have MATH for all MATH. The claim follows.
math/0102083
The idea is to initialize MATH to equal MATH, and remove trees from MATH one by one (placing them into MATH) until REF is satisfied. We assume by pigeonholing that we only have quartiles MATH such that the length of MATH is an even (odd) power of MATH. We describe the tree selection algorithm. We shall need four collections MATH of trees, where MATH; we initialize all four collections to be empty. Suppose that we can find a MATH and a quartile MATH such that MATH . We may assume that MATH is maximal with respect to this property and the tile order MATH. Having assumed this maximality, we may then assume that MATH is maximal if MATH, or minimal if MATH; here MATH is the center of MATH. We then place the MATH-tree MATH with top MATH into the collection MATH, and then remove all the quartiles in this tree from MATH. We then place the MATH-tree MATH with top MATH into the collection MATH, and then remove all the quartiles in this tree from MATH. We then repeat this procedure until there are no further quartiles MATH which obey REF. After completing this algorithm, none of the tiles MATH in MATH will obey REF, so that REF holds for all MATH-trees in MATH. (If the tree does not contain its top, we can break it up as the disjoint union of trees which do). We then set MATH and MATH. It remains to prove REF. Since the trees in MATH have the same tops as those in MATH it suffices to prove the estimate for MATH. We shall only prove the claim for MATH, as the argument for MATH is similar. Fix MATH. The key geometric observation is that the tiles MATH are all pairwise disjoint. Indeed, suppose that there existed MATH and MATH such that MATH and MATH. Without loss of generality we may assume that MATH so that MATH . From the nesting of dyadic intervals, and from the assumption that two different scales differ at least by a factor of MATH, this implies that MATH . Since MATH consists entirely of MATH-trees, we have MATH, thus MATH .
On the other hand, since MATH is a MATH-tree, we have MATH . Since MATH, we thus see that MATH and MATH are disjoint and that MATH . Since we chose our trees MATH in MATH so that MATH was maximized, this implies that MATH was selected earlier than MATH. On the other hand, from REF and the nesting of dyadic intervals we have MATH which implies from REF that MATH . Thus MATH would have been selected for a tree in MATH at the same time that MATH was selected for MATH. But this contradicts the fact that MATH is part of MATH, and therefore selected at a later time for MATH. This establishes the pairwise disjointness of the MATH. From this disjointness and REF we have MATH . From REF we therefore have MATH as desired.
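Since every concrete object in this proof is redacted, the following is only a schematic sketch of the selection pattern used above: repeatedly pick a maximal quartile exceeding a size threshold, strip out the whole tree lying below it, and continue until nothing exceeds the threshold. The names `energy` and `leq`, and the toy interval data, are hypothetical stand-ins for the redacted size condition and tile order.

```python
def select_trees(quartiles, energy, threshold, leq):
    """Greedy tree selection: while some quartile exceeds the threshold,
    pick one that is maximal for the partial order `leq`, extract the tree
    of all remaining quartiles lying below it, and remove that tree."""
    stock = set(quartiles)
    trees = []
    while True:
        tops = [q for q in stock if energy(q) > threshold]
        if not tops:
            break
        # maximality: no other candidate lies strictly above the chosen top
        top = next(t for t in tops
                   if not any(leq(t, u) and t != u for u in tops))
        tree = {q for q in stock if leq(q, top)}
        trees.append((top, tree))
        stock -= tree
    return trees, stock

# toy run: intervals ordered by containment, "energy" = length
contains = lambda a, b: b[0] <= a[0] and a[1] <= b[1]   # a <= b
length = lambda q: q[1] - q[0]
trees, rest = select_trees([(0, 4), (0, 2), (2, 4), (0, 1)],
                           length, 1.5, contains)
```

Removing the whole tree at each step is what makes the selected tops pairwise incomparable, mirroring the disjointness argument in the text.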
math/0102083
Since MATH is finite, we see that the hypotheses of REF hold for all MATH if MATH for some sufficiently large MATH. Set the MATH to be empty for all MATH. Now initialize MATH and MATH. For MATH in turn, we apply REF , moving the quartiles in MATH from MATH into MATH and keeping the tiles in MATH inside MATH. We then increment MATH and repeat this process. Since we are assuming the MATH are non-zero, every quartile must eventually be absorbed into one of the MATH. The properties are then easily verified.
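The exhaustion step here can be pictured as follows. This is a purely schematic sketch (the selection rule `select_at`, the `energy` function, and the toy data stand in for the redacted quantities), showing why nonzero size forces every quartile to be absorbed at some finite level as the threshold is halved.

```python
import math

def exhaust(quartiles, energy, select_at):
    """Run the selection at thresholds 2**n0, 2**(n0-1), ...: at each level,
    `select_at(stock, thresh)` splits the stock into (absorbed, remaining);
    the absorbed part becomes the collection for that level.  Because every
    energy is nonzero and the thresholds shrink to 0, each quartile is
    eventually absorbed (here: once the threshold drops below its energy),
    so the loop terminates."""
    if not quartiles:
        return {}
    n = max(int(math.ceil(math.log2(energy(q)))) for q in quartiles) + 1
    stock, levels = set(quartiles), {}
    while stock:
        absorbed, stock = select_at(stock, 2.0 ** n)
        levels[n] = absorbed
        n -= 1
    return levels

# toy rule: absorb whatever exceeds half the current threshold
length = lambda q: q[1] - q[0]
pick = lambda stock, t: ({q for q in stock if length(q) > t / 2},
                         {q for q in stock if length(q) <= t / 2})
levels = exhaust([(0, 1), (0, 2), (0, 8)], length, pick)
```

Starting the threshold above the largest energy mirrors the observation that the hypotheses hold trivially for sufficiently large MATH.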
math/0102084
To motivate matters, let us first consider the simpler object MATH for some cube MATH. By NAME this is equal to MATH where MATH and MATH is the side-length of MATH. We can rewrite this as MATH where MATH ranges over all tri-tiles with frequency cube MATH and spatial interval MATH in MATH, MATH is the function MATH and MATH is the center of MATH. Note that MATH is a wave packet on MATH uniformly in MATH. Similarly, consider MATH . The multiplier MATH in the statement of the Corollary can now be rewritten as MATH where MATH . Note that the collection of tri-tiles MATH has rank REF, and similarly for the collection of tri-tiles MATH. Observe that we can get rid of the complex conjugation sign in the definition of MATH by redefining MATH to be MATH and redefining MATH accordingly; this also replaces the condition MATH by the condition MATH, but it does not change the rank one property of the collection MATH. The claim then follows by integrating the conclusion of REF over MATH, MATH, using the uniformity assumptions of that Theorem. (The finiteness condition on MATH and MATH can be removed by the usual limiting arguments.)
math/0102084
Let MATH, MATH be an extremizer of REF, and take MATH for all MATH.
math/0102084
The same as in REF .
math/0102084
In the NAME case this is immediate from NAME 's inequality. The argument in the NAME case is more technical, however. We may assume all trees in this lemma are sparse. By squaring both sides, we reduce to showing that MATH . We may assume that MATH since the inner product vanishes otherwise. By symmetry we may thus assume that MATH. From the decay of the MATH we have MATH so it suffices to show that MATH . Let us first consider the portion of the sum where MATH. In this case we estimate MATH. We treat the contribution of the first term MATH, as the second is similar. For each fixed MATH, there are only MATH many candidates MATH to appear as MATH satisfying all the above conditions, and for each fixed MATH the MATH summations have disjoint spatial intervals MATH. One can then perform the MATH summations and estimate this contribution by MATH which is acceptable by REF. It remains to consider the contribution when MATH. From REF applied to the singleton trees MATH, MATH we have MATH . It thus suffices to show that MATH for all trees MATH. From the assumptions on MATH and MATH and sparseness of the trees we see that the tree MATH which contains MATH must be distinct from MATH. By strong MATH-disjointness this implies that MATH. Also from strong MATH-disjointness we see that the MATH are disjoint. We thus have MATH and the claim then follows by summing in MATH.
math/0102084
By REF we may find a collection of strongly MATH-disjoint trees MATH, and complex coefficients MATH for all MATH such that MATH and such that MATH for all MATH and all sub-trees MATH of MATH. The claim then follows from NAME and NAME REF.
math/0102084
This is essentially REF (see also REF), but we give a proof here for completeness. By REF it suffices to show the estimate MATH for all MATH and MATH-trees MATH. Fix MATH. By frequency translation invariance we may assume that MATH contains the origin. Let us first assume that MATH is supported outside of MATH. From the decay of MATH we have MATH . Applying this estimate, we obtain MATH and the claim follows from NAME. Now suppose that MATH is supported on MATH. It suffices to show that MATH for all MATH. Since MATH contains the origin, we see that the vector-valued operator MATH is a NAME operator, and is hence weak-type MATH; note that the MATH boundedness of this operator follows from the almost orthogonality of the MATH. (For more general MATH this operator would be a modulated NAME operator). The claim follows from standard NAME theory.
math/0102084
If MATH, then MATH. The claim then follows from the sparseness of MATH.
math/0102084
We can divide into the cases MATH and MATH . In REF we use REF to reduce REF to MATH . But the proof of this estimate is essentially the same as REF with the roles of MATH and MATH reversed. Thus it suffices to prove REF under REF . We first consider the set of all pairs MATH such that MATH, where MATH is the tree in MATH containing MATH. These constraints imply MATH. By splitting into MATH cases we may assume that the ratio between MATH and MATH is fixed. We may also assume that the distance of MATH and MATH is MATH for some fixed MATH, provided we prove the final estimate with an extra factor of MATH. However, then we have MATH only for a bounded number of essentially unique MATH for any given MATH, and for those MATH we have MATH . Hence we can estimate the corresponding piece of REF using NAME by MATH . Now we consider the pairs MATH with MATH, where MATH is the tree containing MATH. We estimate the corresponding part of the left-hand side of REF by MATH . By REF, it suffices to show that MATH for each MATH. Fix MATH. Let us first estimate the contribution of the case when MATH. Define the collection MATH by MATH . By REF we may rewrite the contribution of this case to REF as MATH where MATH . By NAME and NAME we can bound the previous by MATH . The MATH are almost orthogonal as MATH varies, so we can bound this by MATH . It thus suffices to prove MATH . We write the left-hand side as MATH . We now consider each MATH with MATH as a tree by itself. By the strong REF-disjointness of the tree MATH we see that MATH with MATH are pairwise disjoint. Moreover, they are contained in MATH. In particular we have MATH, and REF follows from REF . This concludes the treatment of the case MATH. To finish the estimation of REF it remains to treat the contribution of the case MATH and MATH for each MATH, with an additional factor of MATH on the right-hand side. Fix MATH. In this case we use the crude estimate MATH from REF, and reduce to showing that MATH for each MATH.
Fix MATH. We split MATH into MATH and MATH. To control the former contribution we use the crude estimates MATH from REF and MATH and sum crudely in MATH. To control the latter contribution we observe that MATH has a MATH norm of MATH, so it suffices to show that MATH . But this follows by repeating the proof of REF.
math/0102084
By REF it suffices to show that MATH for any MATH and any MATH-tree MATH. We may assume (as in the proof of REF ) that MATH contains its top MATH, in which case we may reduce to MATH . Fix MATH. To prove REF, first consider the relatively easy case when MATH vanishes on MATH. In this case we shall prove the stronger estimate MATH for all MATH; REF then follows by square-summing in MATH. We now prove REF. Fix MATH. By REF we may estimate MATH . Interchanging the sum and integral and applying NAME we thus have MATH where for MATH, the square function MATH is the vector-valued quantity MATH . To show REF, it thus suffices by NAME to prove the weighted square-function estimate MATH for all MATH and MATH. But this follows since MATH is a modulated NAME operator whose kernel MATH decays like MATH for all MATH. This proves REF when MATH vanishes on MATH. A similar argument gives REF when MATH vanishes on MATH. We may thus reduce to the case when MATH, MATH are both supported on MATH. We may then assume that MATH. Define the collection MATH of tri-tiles by MATH . From REF we have MATH where MATH . To prove REF it thus suffices to show that MATH . The vector-valued operator MATH is a modulated NAME operator, so it suffices to show that MATH . But this follows from REF (or more precisely, the analogue of REF for the discretized operator MATH). This finishes the proof of REF.
math/0102084
By REF , it suffices to show that MATH for all collections MATH of strongly MATH-disjoint trees, and all coefficients MATH such that MATH for all MATH. Fix MATH, MATH. By REF we may write the left-hand side of REF as MATH where MATH . The claim then follows from REF and REF .
math/0102084
This proof is reproduced verbatim from CITE. Without loss of generality we may assume that MATH is a REF-tree. We then use NAME to estimate the left-hand side by MATH . From REF we have MATH for MATH. Also, since the singleton tree MATH is a MATH-tree with top MATH, we have MATH for all MATH. The claim follows.
math/0102084
The idea is to initialize MATH to equal MATH, and remove trees from MATH one by one (placing them into MATH) until REF is satisfied. If MATH is a tile, let MATH denote the center of MATH. If MATH and MATH are tiles, we write MATH if MATH and MATH, and MATH if MATH and MATH. We now perform the following algorithm. We shall need a collection MATH of trees, which we initialize to be the empty set. We consider the set of all trees MATH of type MATH in MATH which are ``upward trees" in the sense that MATH and which satisfy the size estimate MATH . If there are no trees obeying REF, we terminate the algorithm. Otherwise, we choose MATH among all such trees so that the center MATH of MATH is maximal (primary goal), and that MATH is maximal with respect to set inclusion (secondary goal). Let MATH denote the MATH-tree MATH . We remove both MATH and MATH from MATH, and add them to MATH. Then one repeats the algorithm until we run out of trees obeying REF. Since MATH is finite, this algorithm terminates in a finite number of steps, producing trees MATH. We claim that the trees MATH produced in this manner are strongly MATH-disjoint. It is clear from construction that MATH for all MATH; by the rank REF assumption we thus see that MATH for all MATH, MATH, MATH. Now suppose for contradiction that we had tri-tiles MATH, MATH such that MATH and MATH. From the sparseness assumption we thus have MATH. Since MATH and MATH, we thus see that MATH. By our selection algorithm this implies that MATH. Also, since MATH, MATH, and MATH we see that MATH. Since MATH, this means that MATH. But MATH and MATH are disjoint by construction, which is a contradiction. Thus the trees MATH are strongly MATH-disjoint. From this, REF, and REF we see that MATH . Since MATH has the same top as MATH, we may thus add all the MATH and MATH to MATH while respecting REF. Now consider the set MATH of remaining tri-tiles. 
We note that MATH for all trees MATH in MATH, since otherwise the portion of MATH which obeyed REF would be eligible for selection by the above algorithm. We now repeat the previous algorithm, but replace MATH by MATH (so that the trees MATH are ``downward-pointing" instead of ``upward-pointing") and select the trees MATH so that the center MATH is minimized rather than maximized. This yields a further collection of trees to add to MATH while still respecting REF, and the remaining collection of tiles MATH has the property that MATH for all trees MATH in MATH. Combining REF we obtain REF as desired.
math/0102084
Since MATH is finite, we see that the hypotheses of REF hold for all MATH if MATH for some sufficiently large MATH. Set the MATH to be empty for all MATH. Now initialize MATH and MATH. For MATH in turn, we apply REF , moving the tri-tiles in MATH from MATH into MATH and keeping the tri-tiles in MATH inside MATH. We then increment MATH and repeat this process. Since we are assuming the MATH are non-zero, every tri-tile must eventually be absorbed into one of the MATH. The properties are then easily verified.
math/0102086
: Since MATH, it suffices to stratify MATH only at regular MATH. Let MATH be the stratification of MATH at MATH, and suppose MATH exhibits the equivalent representations of the forcing extension by MATH or MATH, respectively. Since MATH and the MATH forcing is MATH-distributive, it follows that MATH and so MATH for some (quotient) forcing generic MATH. That is, MATH factors as MATH. And since MATH and MATH adds no MATH-sequences over MATH, it follows that MATH also adds no MATH-sequences over MATH. So the quotient forcing is MATH-distributive and we have witnessed the stratification of MATH at MATH.
math/0102086
: Standard arguments establish that if MATH is MATH-supercompact, then this can be preserved to a forcing extension in which MATH (one simply forces MATH with MATH at sufficiently many stages MATH in a reverse NAME iteration). This iteration admits a gap between any two nontrivial stages of forcing, and by starting the iteration beyond any particular MATH, we may ensure that it is MATH-closed. Afterwards, we may directly force MATH by adding a NAME subset to MATH; since this adds no subsets to MATH, it therefore preserves the MATH-supercompactness of MATH. So let us assume without loss of generality that we have already performed this forcing, if necessary, and that MATH and MATH in MATH. Let MATH be the reverse NAME MATH-iteration which at stage MATH forces with MATH, provided that MATH is above MATH and measurable in MATH. This forcing is MATH-closed and admits a gap between any two nontrivial stages of forcing. Suppose that MATH is MATH-generic. We claim that MATH has trivial NAME rank in MATH. If not, then there would be an embedding MATH with critical point MATH for which MATH is closed under MATH-sequences in MATH and MATH is measurable in MATH. By the Gap Forcing Theorem of CITE applied in MATH, it follows that MATH is measurable in MATH and consequently a stage of nontrivial forcing in MATH. Factoring MATH as MATH, it follows that MATH must be MATH for some MATH-generic NAME subset MATH. Since every subset of MATH in MATH is in MATH and the forcing MATH has size MATH, it follows that every subset of MATH in MATH is in MATH. In particular, every dense subset of MATH from MATH is in MATH and so MATH is actually MATH-generic as well. Since this contradicts the fact that MATH, there can be no such embedding MATH and so MATH has trivial NAME rank in MATH. It follows that MATH is neither MATH-strong nor MATH-supercompact in MATH. We claim nevertheless that MATH remains MATH-strongly compact in MATH. 
For this, we use a technique of NAME, unpublished by him but exposited in CITE, CITE, CITE, CITE, CITE, and CITE. Let MATH be a MATH-supercompactness embedding generated by a normal fine measure on MATH and MATH an ultrapower embedding by a measure on MATH of minimal NAME rank in MATH, so that MATH is not measurable in MATH. Let MATH be the combined embedding; it witnesses the MATH-strong compactness of MATH. We will lift this embedding to MATH so as to witness the MATH-strong compactness of MATH in MATH. Consider the forcing MATH, factored as MATH, and the forcing MATH, factored as MATH. With this notation, for example, MATH. Now let MATH be the term forcing poset for MATH over MATH (see CITE for the first published account of term forcing, or CITE; the notion is originally due to NAME Laver). That is, MATH consists of (sufficiently many) MATH-names for elements of MATH, ordered by MATH if and only if MATH. As in the proof of CITE, a full collection of names, meaning that any name forced by MATH to be in MATH is forced by MATH to be equal to one of them, can be found of size MATH in MATH, which has cardinality MATH in MATH. Further, since MATH is forced to be MATH-directed closed, it is easy to see that MATH is MATH-directed closed in MATH, and hence also MATH-directed closed in MATH. Since MATH has only MATH many dense sets for MATH, and this has cardinality MATH in MATH, we may by the usual diagonalization techniques (see, for example, CITE) construct a MATH-generic filter MATH in MATH. And since MATH is the ultrapower by a measure on MATH and MATH is MATH-closed, it follows that MATH is MATH-generic for the term forcing for MATH over MATH (see CITE or CITE). Thus, we may lift the embedding MATH to MATH. And since MATH is MATH-generic, it is also MATH-generic, and so we may form the extension MATH. Since MATH is not measurable in MATH, the stage MATH forcing in MATH is trivial, and so MATH is MATH-closed in MATH. 
Since the MATH forcing is highly closed in MATH, it cannot affect closure down at MATH, so the forcing MATH is MATH-closed in MATH. Going one step more, the poset MATH is MATH-closed in MATH, has size MATH there, which has size MATH in MATH, and MATH is closed under MATH-sequences in MATH. Thus, by the usual diagonalization techniques, we may construct in MATH (actually, we can do it in MATH) a MATH-generic MATH for this much of MATH. By the fundamental property of term forcing (see CITE or CITE), in MATH we may construct a MATH-generic filter MATH from MATH. Putting these filters together, let MATH and lift the embedding to MATH in MATH. The attentive reader will observe that we used the term forcing only to help construct MATH. Now that we have done so, we may discard MATH and MATH. To see that this lifted embedding witnesses the MATH-strong compactness of MATH in MATH, let MATH. One can now easily check that MATH, MATH, MATH and MATH. Thus, MATH induces a fine measure on MATH in MATH, as desired.
math/0102086
: We may assume without loss of generality, by forcing if necessary, that the gch holds and further, by forcing with the notion in CITE if necessary, that in MATH there is already a level-by-level agreement between strong compactness and supercompactness. Since these forcing notions admit a very low gap, by the Gap Forcing Theorem CITE they do not increase the degree of supercompactness or (since the forcing notions are also mild in the sense of CITE) strong compactness of any cardinal. It follows that no cardinal in MATH is supercompact up to a partially supercompact cardinal. Let MATH be the reverse NAME support MATH-iteration which begins by adding a NAME real and then has nontrivial forcing only at later stages MATH that are inaccessible limits of partially supercompact cardinals. At such a stage MATH in MATH, assuming MATH has a level-by-level agreement between strong compactness and supercompactness, we force with the lottery sum of all MATH-directed closed posets MATH, of size less than the next partially supercompact cardinal, that preserve this level-by-level agreement. (Please note that this will always include the trivial poset.) The iteration MATH is an example of a modified lottery preparation of the type used in CITE, CITE, and CITE. Supposing MATH is MATH-generic, we will refer to the iteration MATH and the resulting model MATH as the lottery preparation preserving level-by-level agreement. If MATH is MATH-supercompact in MATH for a regular cardinal MATH, then this remains true in MATH. In particular, after the lottery preparation for preserving level-by-level agreement MATH, the cardinal MATH remains fully supercompact. : (This same observation was essentially made in CITE, the modified lottery preparations resembling as they do the partial Laver preparations.) 
We may assume that MATH is a limit of partially supercompact cardinals, since otherwise the forcing MATH is equivalent to forcing that is small with respect to MATH and the result is immediate by CITE. Let MATH be a MATH-supercompactness embedding with critical point MATH such that MATH is not MATH-supercompact in MATH. Since MATH is MATH-supercompact in MATH and no cardinal is supercompact up to a partially supercompact cardinal, it follows that the next partially supercompact cardinal above MATH in MATH is at least MATH; further, MATH itself is not even measurable in MATH because if it were, then the MATH-supercompactness of MATH would imply that MATH is MATH-supercompact in MATH, contrary to our assumption. In short, the next partially supercompact cardinal in MATH above MATH is strictly above MATH. Thus, below a condition opting for trivial forcing at stage MATH in MATH, we may factor MATH as MATH, where MATH is MATH-closed. Thus, by the usual diagonalization techniques, we may construct in MATH a MATH-generic filter MATH and lift the embedding to MATH with MATH. This embedding witnesses that MATH is MATH-supercompact in MATH, as desired. For any ordinal MATH, there is a level-by-level agreement between strong compactness and supercompactness in MATH. In particular, the lottery preparation preserving level-by-level agreement MATH really does preserve the level-by-level agreement between strong compactness and supercompactness. : Suppose inductively that the result holds below MATH and consider MATH. Since successor stages of forcing always preserve the level-by-level agreement if it exists, we may assume that MATH is a limit of stages of forcing, and hence a limit of partially supercompact cardinals. For any MATH, our induction hypothesis guarantees a level-by-level agreement for MATH in MATH, and by the Gap Forcing Theorem CITE, MATH is neither strongly compact nor supercompact up to a partially supercompact cardinal in that model. 
Therefore, since the later non-trivial stages of forcing are closed beyond the next inaccessible limit of partially supercompact cardinals, which by our assumptions and the level-by-level agreement is beyond the degree of strong compactness or supercompactness of MATH, it follows that there is a level-by-level agreement for MATH in MATH. By CITE, cardinals above MATH are not affected by small forcing or anything equivalent to small forcing, and so we need only consider the cardinal MATH itself. Suppose accordingly that MATH is MATH-strongly compact in MATH for a regular cardinal MATH; we will show it is also MATH-supercompact there. By the Gap Forcing Theorem CITE, we know that MATH is MATH-strongly compact in MATH, and hence by the level-by-level agreement, it is also MATH-supercompact there. So by the previous lemma MATH is MATH-supercompact in MATH, as desired. If MATH is an inaccessible limit of partially supercompact cardinals, then any stratified MATH-directed closed forcing MATH over MATH preserves the level-by-level agreement between strong compactness and supercompactness for MATH and smaller cardinals. Indeed, MATH need only be stratified at regular cardinals MATH above MATH. : Suppose that the result holds for cardinals below MATH (with full Boolean value) and that MATH is MATH-directed closed in MATH and stratified for regular MATH above MATH. From the closure of MATH and the fact that no cardinal is supercompact beyond a partially supercompact cardinal in MATH and hence also (by the Gap Forcing Theorem CITE) in MATH, one sees that it must preserve the level-by-level agreement between strong compactness and supercompactness for all cardinals below MATH. So we consider the cardinal MATH itself. Suppose that MATH is MATH-strongly compact in MATH for some regular cardinal MATH. We will show that MATH is MATH-supercompact there as well. 
By the Gap Forcing Theorem CITE, we know that MATH is MATH-strongly compact in MATH, and hence by the level-by-level agreement, it is MATH-supercompact there as well. Fix a MATH-supercompactness embedding MATH with MATH not MATH-supercompact in MATH. As in REF it follows that the next partially supercompact cardinal of MATH above MATH is above MATH. Since MATH is stratified above MATH, we may factor MATH as MATH, where MATH and MATH is MATH-distributive. By cardinality considerations, forcing with MATH adds no new subsets of MATH over MATH, and so it suffices for us to show that MATH is MATH-supercompact in MATH, where MATH is MATH-generic. The cardinal MATH is an inaccessible limit of partially supercompact cardinals in MATH, and so there is a lottery at stage MATH in MATH. Furthermore, since the induction hypothesis holds up to MATH in MATH and MATH is MATH-directed closed, has size at most MATH in MATH and, by REF , is stratified above MATH (and this can be seen in MATH), it follows that MATH is allowed to appear in the stage MATH lottery of MATH. Thus, below a condition opting for MATH in this lottery, we may factor the forcing MATH as MATH, where MATH is MATH-closed. Now the usual diagonalization arguments apply and we may construct in MATH a MATH-generic filter MATH and lift the embedding to MATH where MATH. Since MATH is a directed subset of cardinality less than MATH in MATH, we may find a (master) condition MATH below it. Working now below MATH we diagonalize to construct a MATH-generic object MATH and lift the embedding fully to MATH, thereby witnessing the MATH-supercompactness of MATH in MATH, as desired. After the lottery preparation preserving level-by-level agreement MATH, the supercompactness of MATH becomes indestructible by any MATH-directed closed forcing that is stratified above MATH. : Suppose that MATH is stratified above MATH and MATH-directed closed in MATH, and assume MATH is MATH-generic. 
Select any regular MATH and a MATH-supercompactness embedding MATH for which MATH is not MATH-supercompact in MATH. It follows as in REF that MATH is allowed in the stage MATH lottery and the diagonalization arguments given in REF allow us to lift the embedding to MATH. This witnesses the MATH-supercompactness of MATH in MATH, as desired. This completes the proof of the theorem.
math/0102086
: These are all MATH-directed closed and either completely stratified or stratified above MATH. Indeed, since MATH, we can see that REF are special cases of MATH, in REF . And it is easy to see that this and MATH, for MATH and MATH regular, are stratified: any regular cardinal MATH in the extension must be either at most MATH or at least MATH, and one can trivially factor the forcing. (Note that we use the gch in MATH and the fact MATH in order to know that MATH if MATH is regular and at least MATH.) The cases of MATH or any MATH-directed closed MATH of size at most MATH are clearly stratified above MATH. The conditions of the forcing to add a stationary non-reflecting subset to MATH are simply the bounded subsets MATH which are not stationary in their supremum nor have any initial segment stationary in its supremum, ordered by end-extension. This forcing is MATH-directed closed by simply taking unions of conditions. To see that it is stratified, we note that for any MATH, the forcing is MATH-strategically closed and hence MATH-distributive, and for any regular MATH, by gch in MATH and the fact MATH, the forcing has size less than or equal to MATH. So in each case the factorization is trivial. Finally, it is easy to see that a non-overlapping iteration of these posets remains stratified above MATH.
math/0102086
: We reiterate our implicit assumption for this preparation that the ground model MATH satisfies the gch and a level-by-level agreement between strong compactness and supercompactness, and that no cardinal in MATH is supercompact up to a partially supercompact cardinal there. We begin by showing that for any cardinal MATH the forcing MATH over MATH preserves the level-by-level agreement between strong compactness and supercompactness. Suppose inductively that this holds (with full Boolean value) for all cardinals below MATH. Since MATH adds no small sets, it does not affect the level-by-level agreement between strong compactness and supercompactness for cardinals below MATH, and since the forcing is itself small with respect to larger cardinals, it preserves such agreement above MATH by CITE. So it remains only to check the agreement right at MATH. Accordingly, suppose that MATH is MATH-strongly compact for some regular cardinal MATH in MATH, where MATH is MATH-generic; we aim to show that MATH is also MATH-supercompact there. The Gap Forcing Theorem CITE implies that MATH is MATH-strongly compact in MATH and hence also MATH-supercompact, by the level-by-level agreement there, witnessed by an embedding MATH. We may assume that MATH is not MATH-supercompact in MATH. We will lift this embedding to MATH, thereby witnessing the MATH-supercompactness of MATH there. We note first that MATH must be a limit of partially supercompact cardinals, since otherwise the forcing MATH is equivalent to small forcing followed by MATH-closed forcing that adds a subset to MATH; but by CITE, all such forcing destroys the measurability of MATH, contrary to our assumption that MATH is MATH-strongly compact in MATH. And since these smaller cardinals are fixed by MATH, it follows now that MATH is an inaccessible limit of partially supercompact cardinals in MATH, and hence a nontrivial stage of forcing in MATH. 
By elementarity the induction hypothesis holds up to MATH in MATH, and so the forcing MATH, which is the same in MATH and MATH since MATH, preserves the level-by-level agreement between strong compactness and supercompactness over MATH. It is therefore allowed to appear in the stage MATH lottery of MATH. Below a condition opting for this poset in that lottery, therefore, the forcing MATH factors as MATH, where MATH is the part of the forcing at stages beyond MATH. Since the next partially supercompact cardinal in MATH must be beyond MATH, it follows that MATH is MATH-closed in MATH. Further, MATH is closed under MATH-sequences in MATH and the number of dense subsets of MATH in MATH is at most MATH. Therefore, we may simply line up these dense sets in MATH, diagonalize to construct a MATH-generic filter MATH and lift the embedding to MATH with MATH. It remains to lift the embedding through the forcing MATH. If MATH, and this is the easy case, the usual master condition argument allows us to lift the embedding. Specifically, since MATH one uses MATH to see that MATH is also in MATH, and since this is a directed subset of MATH of size less than MATH, there is a condition MATH, called the master condition, below it. NAME below this condition, one builds a MATH-generic filter MATH and lifts the embedding fully to MATH, with MATH, as desired. For the only remaining case, the hard case, assume MATH. In this case MATH will not be in MATH, and there will be no master condition. Nevertheless, with care we will still be able to construct a generic filter extending MATH. (This technique appears in CITE and CITE, and is similar to a technique involving strong cardinals in CITE.) We will construct a MATH-generic filter MATH in MATH such that MATH, and then lift the embedding fully to MATH, with MATH, thereby witnessing the MATH-supercompactness of MATH in MATH. 
As a first step towards this, suppose that MATH is a maximal antichain in MATH and MATH is a condition that is compatible with every element of MATH. We will find a condition MATH that decides MATH while remaining compatible with MATH. Since MATH is MATH-c.c., it follows that the antichain MATH has size at most MATH in MATH. Since also the usual supercompactness arguments show that MATH is unbounded in MATH, it follows that MATH for sufficiently large MATH. Fix such a MATH such that also MATH and let MATH. Since MATH has size at most MATH, it is in MATH and is a (master) condition for MATH, which is a complete subposet of MATH. Since MATH is compatible with every element of MATH, we know that MATH and MATH are compatible. Choose MATH below MATH and MATH and deciding MATH. We claim that MATH remains compatible with MATH. To see this, consider any condition MATH for MATH. Since MATH is a partial function from MATH into MATH, we may split it into two pieces MATH where MATH and MATH. It follows that MATH, with the key point being that the domain of MATH is disjoint from the domain of any element of MATH. In particular, MATH is compatible with MATH. Since also MATH, it follows that MATH is compatible with MATH, as we claimed. Now we iterate this idea to construct the MATH-generic filter MATH. Since the forcing MATH has size MATH and is MATH-c.c., it has MATH many maximal antichains in MATH. Since MATH, we may enumerate these antichains in a sequence MATH in MATH. We now define a descending sequence MATH of conditions in MATH, each of which is compatible with every element of MATH. At successor stages, if MATH is defined, we employ the argument of the previous paragraph to select a condition MATH that decides the antichain MATH and remains compatible with MATH. At limit stages MATH, let MATH. Because MATH is closed under MATH sequences in MATH, we know that MATH, and it clearly remains compatible with every element of MATH. 
Let MATH be the filter generated by the conditions MATH. Since these conditions decide every maximal antichain, MATH is MATH-generic. And since MATH, we may lift the embedding to MATH, where MATH, thereby witnessing the MATH-supercompactness of MATH in MATH, as desired. The careful reader will observe that we have actually proved that if MATH, a limit of partially supercompact cardinals, is MATH-supercompact in MATH for some regular cardinal MATH above MATH, then this is preserved by forcing over MATH with MATH. In particular, the case MATH shows that the supercompactness of MATH in MATH is indestructible by MATH, just as the theorem states, and so the proof is complete.
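The diagonalization just carried out has a standard shape, which may be worth recording schematically; all symbols here ($\mathbb{Q}$, $M$, $V[G]$, $j$, $\delta$, $q_\alpha$, $H$) are hypothetical stand-ins for the redacted objects above.

```latex
% Enumerate in V[G] the (at most \delta many) maximal antichains of \mathbb{Q} lying in M:
\langle A_\alpha \mid \alpha < \delta \rangle .
% Build a descending sequence of conditions, each compatible with every element of j''G:
q_0 \ge q_1 \ge \cdots \ge q_\alpha \ge \cdots \qquad (\alpha < \delta),
% where q_{\alpha+1} \le q_\alpha decides the antichain A_\alpha (by the argument of the
% previous paragraph), and at limits the closure of \mathbb{Q} inside M supplies a lower bound.
% The generated filter
H \;=\; \{\, q \in \mathbb{Q} \;:\; \exists \alpha\ (q_\alpha \le q) \,\}
% meets every maximal antichain, hence is M-generic, and since j''G \subseteq H the
% embedding lifts with j(G) = H.
```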
math/0102086
: We use a similar argument for this Corollary. First we claim for any MATH that forcing with MATH over MATH, where MATH is a regular cardinal below the next partially supercompact cardinal above MATH, preserves the level-by-level agreement between strong compactness and supercompactness. Suppose inductively that this holds below MATH and that MATH is MATH-generic. It is easy to see that the level-by-level agreement below or above MATH is not affected, so suppose that MATH is MATH-strongly compact for some regular cardinal MATH in MATH; we will show that MATH is MATH-supercompact there as well. By the Gap Forcing Theorem CITE we know that MATH is MATH-strongly compact and hence MATH-supercompact in MATH. If MATH, then since MATH adds no new subsets to MATH, the MATH-supercompactness of MATH in MATH is trivially preserved to MATH. So we may assume MATH. In this case it follows that MATH is a limit of partially supercompact cardinals, since otherwise the forcing MATH would be equivalent to small forcing followed by MATH-closed forcing adding a subset to MATH, which by CITE would destroy the MATH-strong compactness of MATH, contrary to our assumption that MATH is MATH-strongly compact in MATH. So, let MATH be a MATH-supercompactness embedding such that MATH is not MATH-supercompact in MATH. Since the induction hypothesis holds up to MATH in MATH, the forcing MATH appears in the stage MATH lottery of MATH, and below a condition opting for this poset in that lottery we may factor the iteration as MATH, where MATH is MATH-closed in MATH. And the diagonalization argument of REF shows how to lift this embedding to MATH, where MATH for some generic filter MATH constructed in MATH. It remains to lift the embedding through the forcing MATH. If MATH, this can be done with the usual master condition argument, and we omit the details. The remaining case, the hard case of MATH, proceeds as in the hard case of REF . 
Specifically, one first shows as before that if MATH is compatible with MATH and MATH is a maximal antichain in MATH, then there is a stronger condition MATH deciding MATH and still compatible with MATH. For this, one uses the fact that MATH is unbounded in MATH and consequently MATH is contained in MATH for sufficiently large MATH. By counting antichains and iterating this argument, we once again construct a descending sequence of conditions that eventually meet every maximal antichain of MATH in MATH. The filter MATH generated by these conditions is therefore MATH-generic and extends MATH, so we may lift the embedding to MATH, where MATH, thereby witnessing the MATH-supercompactness of MATH in MATH, as desired. Finally, we observe that we have actually proved that if MATH is a limit of partially supercompact cardinals and is MATH-supercompact in MATH, then forcing with MATH over MATH for any MATH preserves the MATH-supercompactness of MATH. In particular, the case MATH shows that the full supercompactness of MATH is indestructible by MATH for any regular MATH.
math/0102086
: The point here is that indestructibility by MATH implies resurrectibility by any MATH-directed closed forcing MATH of size MATH, assuming MATH. This is true because such a poset MATH completely embeds into the collapse poset (this can be seen by observing that MATH is MATH-closed, collapses MATH to MATH and has size MATH, and noting that there is only one forcing notion with these features). Thus, we may view MATH as MATH, where MATH is MATH, and so even if MATH happens to destroy the supercompactness of MATH, further forcing with MATH amounts altogether to forcing with MATH, which preserves the supercompactness of MATH.
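The parenthetical uniqueness assertion is the folklore absorption property of the collapse; schematically, in hypothetical notation (precise hypotheses such as separativity and $\lambda^{<\kappa}=\lambda$ vary by presentation):

```latex
% If \mathbb{Q} is {<}\kappa-closed, has size at most \lambda, and forces |\lambda| = \kappa,
% then \mathbb{Q} is forcing-equivalent to the collapse:
\mathbb{Q} \;\simeq\; \mathrm{Coll}(\kappa, \lambda).
% Consequently, for any {<}\kappa-directed closed \mathbb{R} of size at most \lambda, the
% product \mathbb{R} \times \mathrm{Coll}(\kappa,\lambda) has these same features, so
\mathrm{Coll}(\kappa, \lambda) \;\simeq\; \mathbb{R} \times \mathrm{Coll}(\kappa, \lambda),
% which yields the complete embedding of \mathbb{R} into the collapse used above.
```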
math/0102086
: We may assume, by forcing with the poset of CITE if necessary, that there is a level-by-level agreement between strong compactness and supercompactness in MATH and that the gch holds there (note that the CITE forcing preserves all supercompact cardinals and by the Gap Forcing Theorem CITE creates no new ones). Because MATH is supercompact, it is a limit of strong cardinals (see CITE). Furthermore, since MATH is the least supercompact cardinal in MATH, no cardinal below MATH is supercompact up to a strong cardinal (lest it be fully supercompact). Let MATH be the reverse NAME support MATH-iteration with nontrivial forcing only at stages MATH that are inaccessible limits of strong cardinals. At such a stage MATH, the forcing is the lottery sum of all MATH, where MATH, each MATH is regular, and MATH is less than the next strong cardinal above MATH (plus trivial forcing). Suppose that MATH is MATH-generic. The usual lifting arguments (see, for example, CITE) establish that MATH remains supercompact in MATH and furthermore that the supercompactness of MATH becomes indestructible there by further forcing with MATH whenever MATH and each MATH is regular. (Note: the possibility of strong cardinals above MATH is irrelevant here, since for any MATH one can use an embedding MATH for which MATH is not MATH-supercompact in MATH; consequently, the next strong cardinal in MATH above MATH is above MATH, which is all that is needed in the lifting argument). It follows as in REF that the supercompactness of MATH is strongly resurrectible in MATH. We now argue that the model MATH retains the level-by-level agreement between strong compactness and supercompactness. Since the forcing MATH has size MATH, we know that the level-by-level agreement for cardinals above MATH holds easily by CITE. It remains only to consider cardinals MATH below MATH. 
Accordingly, suppose that MATH is MATH-strongly compact in MATH for some regular cardinal MATH; we aim to show it is also MATH-supercompact there. By the Gap Forcing Theorem CITE, we know that MATH is MATH-strongly compact in MATH, and hence also MATH-supercompact there. Fix a MATH-supercompactness embedding MATH for which MATH is not MATH-supercompact in MATH. It follows that the next strong cardinal in MATH above MATH is above MATH. We first treat the case in which MATH is a limit of strong cardinals, that is, when MATH is a stage of forcing in MATH. In the stage MATH lottery, the generic MATH selected some winning poset MATH, and below a condition deciding this we may factor the forcing MATH as MATH where MATH is either trivial forcing or MATH for some MATH. Since MATH is closed beyond the next strong cardinal above MATH, it does not affect the MATH-supercompactness of MATH, and so it suffices for us to show that MATH is MATH-supercompact in MATH, where MATH is MATH-generic. If MATH is trivial, or if MATH, then the usual lifting arguments allow us to lift the embedding MATH to MATH, thereby witnessing the MATH-supercompactness of MATH in MATH. So we may assume MATH. Since MATH is a cardinal in MATH, it follows that MATH. If MATH, then the forcing MATH adds no new subsets to MATH, and so the MATH-supercompactness of MATH in MATH (from the case of trivial MATH above) is preserved to MATH. So we have reduced to the case that MATH. Since MATH is MATH-strongly compact in MATH, it is also MATH-strongly compact there, and hence MATH-supercompact in MATH. One may therefore employ the usual arguments to lift a MATH-supercompactness embedding from MATH to MATH, as desired. We now treat the case that MATH is not a limit of strong cardinals. Let MATH be the supremum of the strong cardinals below MATH. If MATH is not inaccessible, then the forcing MATH factors as MATH, where MATH is small relative to MATH and MATH is closed beyond the next strong cardinal above MATH. 
Such forcing must preserve the MATH-supercompactness of MATH. We may therefore assume alternatively that MATH is inaccessible, and hence a stage of forcing. The generic MATH selected a winning poset MATH in the stage MATH lottery, and below a condition deciding this we may factor MATH as MATH. The forcing MATH is closed beyond MATH, and does not affect the MATH-supercompactness of MATH. Thus, it suffices for us to see that MATH is MATH-supercompact in MATH, where MATH is MATH-generic. The forcing MATH is either trivial or MATH for some MATH. If MATH is trivial or MATH, then the forcing is small relative to MATH, and the result is immediate. So assume MATH. Since MATH is a cardinal, it must be that MATH. If in addition MATH, then MATH and we may ignore the forcing MATH, since it does not destroy the MATH-supercompactness of MATH in MATH, a small forcing extension. What remains is the case MATH. Here, as in REF , we make a key use of the main result of CITE: the forcing MATH is small forcing followed by MATH-closed forcing that adds a new subset to MATH. By CITE, such forcing necessarily destroys the MATH-strong compactness of MATH, contrary to our assumption that MATH is MATH-strongly compact in MATH, and hence in MATH. So the proof is complete.
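The stage lotteries appearing throughout this preparation are, in one standard (hypothetical) notation, sums of the following form:

```latex
% The lottery sum of a family \mathcal{A} of forcing notions: the generic filter
% first ``chooses'' one poset from \mathcal{A} and then forces with it.
\bigoplus \mathcal{A} \;=\; \{\, \langle \mathbb{Q}, p \rangle \;:\; \mathbb{Q} \in \mathcal{A},\ p \in \mathbb{Q} \,\} \cup \{\mathbb{1}\},
% ordered so that \mathbb{1} lies above everything and
\langle \mathbb{Q}, p \rangle \le \langle \mathbb{Q}', p' \rangle \;\iff\; \mathbb{Q} = \mathbb{Q}' \ \text{ and } \ p \le_{\mathbb{Q}} p'.
% ``Below a condition opting for \mathbb{Q} in the lottery'' means below \langle \mathbb{Q}, \mathbb{1}_{\mathbb{Q}} \rangle.
```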
math/0102086
: The basic point is that if a cardinal MATH is MATH-supercompact, then it has very high NAME rank in MATH-supercompactness. To see this, suppose MATH is a MATH-supercompactness embedding. Since all the MATH-supercompactness measures from MATH are in MATH, it follows that MATH is MATH-supercompact in MATH. Thus, for the induced MATH-supercompactness factor embedding MATH, the cardinal MATH is MATH-supercompact in MATH. So the NAME rank is at least MATH in MATH, and so it is at least MATH in MATH and therefore also in MATH; so it is at least MATH in MATH, and so at least MATH in MATH and therefore at least MATH in MATH, and so on cycling around up to MATH and beyond. To prove the claim, now, let MATH be any strong limit cardinal above MATH and let MATH be a MATH-supercompactness embedding by a measure with trivial NAME rank (or just make MATH as small as possible). It follows that MATH is not MATH-supercompact in MATH; but by the closure of MATH it is MATH-supercompact there. Therefore, since MATH is a strong limit cardinal, the basic point in the previous paragraph shows that it exhibits nontrivial NAME rank in MATH for every degree of supercompactness below MATH. So it is robust in MATH. And since this is true for arbitrarily large MATH, it follows that the set of robust cardinals below MATH is large with respect to supercompactness.
math/0102086
: As in REF , we may assume without loss of generality, by forcing if necessary, that the gch holds and further, by forcing with the notion in CITE if necessary, that in MATH there is already a level-by-level agreement between strong compactness and supercompactness. Thus, we also have that no cardinal is supercompact up to a partially supercompact cardinal. Let MATH be the reverse NAME support MATH-iteration which adds a NAME real and then has nontrivial forcing only at cardinals MATH that are inaccessible limits of partially supercompact cardinals. At such a stage MATH, the stage MATH forcing MATH is the lottery sum of all MATH-directed closed MATH of size less than MATH, the least cardinal such that MATH is not MATH-supercompact in MATH. Suppose that MATH is MATH-generic, and consider the model MATH. First, we claim that MATH is indestructibly supercompact in MATH. It suffices to argue that if MATH is MATH-directed closed in MATH and MATH is MATH-generic, then MATH is supercompact in MATH. Fix any regular cardinal MATH and a MATH-supercompactness embedding MATH so that MATH is not MATH-supercompact in MATH. Since MATH will necessarily be MATH-supercompact in MATH, the forcing MATH will appear in the stage MATH lottery of MATH. Below a condition opting for MATH in this lottery, the forcing MATH factors as MATH, where MATH is the remainder of the iteration. Note that since no cardinal is supercompact beyond a partially supercompact cardinal, no cardinal above MATH is partially supercompact. Thus, the next nontrivial stage of forcing in MATH is beyond MATH, and so MATH is MATH-closed in MATH. Therefore, by the usual diagonalization techniques, we may construct in MATH a MATH-generic filter MATH and lift the embedding to MATH with MATH. After this, we find a master condition below MATH in MATH and again by diagonalization construct a MATH-generic MATH in MATH. 
This allows us to lift the embedding fully to MATH, which witnesses the MATH-supercompactness of MATH in MATH, as desired. Second, we claim that in MATH there is a level-by-level agreement between strong compactness and supercompactness at the cardinals MATH that are robust in MATH. To see this, on the one hand, the main results of CITE show that since the forcing MATH is mild and admits a very low gap (see CITE for the relevant definitions), it does not increase the degree of strong compactness or supercompactness of any cardinal. On the other hand, we will now argue that it fully preserves all regular degrees of supercompactness of every partially supercompact cardinal MATH that is robust in MATH. Suppose that MATH is robust and MATH-supercompact in MATH for some regular cardinal MATH above MATH. A simple reflection argument shows that MATH must be a limit of partially supercompact cardinals, and therefore is a nontrivial stage of forcing. The generic filter MATH opted for some particular forcing MATH in the stage MATH lottery. By increasing MATH if necessary, we may assume that MATH. Since no cardinal is supercompact beyond a partially supercompact cardinal, as in REF , the next nontrivial stage of forcing beyond MATH is well beyond MATH. It therefore suffices to argue that MATH is MATH-supercompact in MATH. Fix any MATH-supercompactness embedding MATH with critical point MATH such that MATH is MATH-supercompact in MATH (this is where we use the robustness of MATH). It follows that MATH appears in the stage MATH lottery of MATH, and we may lift the embedding to MATH just as we did in the previous paragraph with MATH. Thus, MATH remains MATH-supercompact in MATH and hence in MATH. So we retain the level-by-level agreement between strong compactness and supercompactness for the partially supercompact cardinals MATH that are robust in MATH. 
Now let us argue that the collection MATH of partially supercompact cardinals that are robust in MATH remains large with respect to supercompactness in MATH. The point is that if MATH is a MATH-supercompactness embedding with critical point MATH, then below a condition opting for trivial forcing in the lottery at stage MATH, we may lift the embedding to MATH. In particular, if MATH for the original embedding, then this remains true for the lifted embedding. Indeed, this argument shows that our forcing preserves every set that is large with respect to supercompactness in the ground model.
math/0102087
Define a functor MATH and natural transformations MATH, MATH (these being the domain and codomain functors, respectively) factoring the identity MATH as follows. For a map MATH of MATH, let MATH be the set of all commutative diagrams MATH such that MATH, and MATH the coproduct of all these arrows MATH; MATH comes with a natural map to MATH. The value of MATH at MATH is the pushout of MATH with its natural induced map MATH (respectively, MATH) from MATH (respectively, to MATH). Beginning with MATH, MATH, MATH, build by transfinite induction a diagram of functors MATH and compatible natural transformations MATH, MATH. For a successor ordinal MATH, MATH, MATH, MATH. For a limit ordinal MATH, MATH, respectively MATH, MATH, are colimits along the chain MATH. Let MATH be a regular cardinal greater than the rank of presentability of the domain of any arrow in MATH. The requisite factorization of MATH is MATH. (MATH is a transfinite composition of coproducts of pushouts of arrows from MATH, but a coproduct of arrows is a transfinite composition of pushouts, and a transfinite composition of transfinite compositions is one such again.) The second part of the claim follows by factoring any MATH as MATH with MATH and MATH, and noting that MATH has the left lifting property with respect to MATH, in particular, with respect to MATH, which entails that it is a retract of MATH of the type claimed.
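A single step of this construction can be drawn schematically; the names $F_1$, $S_f$, $i_s$, $I$, $\lambda$ below are hypothetical, standing in for the redacted symbols.

```latex
% For f : X \to Y, let S_f be the set of commutative squares from maps i_s \in I to f.
% One step glues all of them on at once by a pushout:
F_1(f) \;=\; X \sqcup_{\coprod_{s \in S_f} \mathrm{dom}(i_s)} \coprod_{s \in S_f} \mathrm{cod}(i_s),
% giving a factorization X \to F_1(f) \to Y. Iterating transfinitely and stopping at a
% regular cardinal \lambda exceeding the presentability ranks of the domains of I yields
X \longrightarrow F_1(f) \longrightarrow F_2(f) \longrightarrow \cdots \longrightarrow F_\lambda(f) \longrightarrow Y,
% with X \to F_\lambda(f) a transfinite composition of pushouts of coproducts from I.
```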
math/0102087
CASE: The strategy is to exhibit a set MATH of morphisms such that MATH. From there, NAME 's axioms follow in a well-known way. Two small object arguments yield the factorization REF; the part of REF that is not the definition follows from REF, MREF and the retract argument; MREF holds by the definition of (co)fibrations and cREF; finally, a locally presentable category is complete and cocomplete, so MREF is satisfied. MATH itself will be constructed in two steps. REF shows that if a collection MATH of morphisms is ``dense'' between MATH and MATH, then MATH. REF, using cREF, constructs such a MATH that is only a set. Let MATH be a collection (set or possibly proper class) of maps in MATH such that for any commutative square MATH with MATH, MATH there exists MATH that factors it: MATH . Then any MATH can be factored as MATH with MATH, MATH. Corollary. Under the assumptions of the previous lemma, MATH. Proof of the corollary. MATH is the saturation of MATH under pushout, transfinite composition and retracts, MATH and MATH is supposed to be closed under these operations, so MATH. Conversely, consider any MATH and write it MATH as above. Since MATH and MATH, MATH is a retract of MATH (in the category of objects under the domain of MATH). So MATH. Proof of REF . This is rather like the ordinary small object argument, save that one glues on the ``interpolating'' maps MATH instead of the MATH. More precisely, we wish to build by transfinite induction on MATH certain factorizations MATH of MATH such that (this is the induction hypothesis) the diagram MATH is a continuous composition of maps belonging to MATH. Thence the composite itself will belong to MATH. Since MATH and MATH, the REF-of-REF property of MATH implies that MATH. Set MATH, MATH. At a successor stage, let MATH be the set of all commutative squares MATH with MATH. The density assumption on MATH means the existence of a factorization MATH with MATH, for each square MATH. 
Let MATH be the pushout MATH along the canonical MATH. Let MATH be the canonical pushout corner map from MATH to MATH. The connecting map MATH is a pushout of coproducts of morphisms from MATH. But any coproduct of maps is a transfinite composition (starting from the coproduct of the domains), so the connecting map belongs to MATH. At a limit ordinal MATH, MATH is the colimit of the diagram MATH, MATH. Let now MATH be a regular cardinal exceeding the rank of presentability of all the objects that occur as domains of maps in MATH. The required factorization of MATH is MATH. Indeed, consider any lifting problem MATH with MATH. Since MATH is regular, the diagram MATH is MATH-filtered, and since MATH commutes with MATH-filtered colimits by assumption, MATH factors through a prior stage MATH. If the lifting problem MATH is indexed by MATH, the solution to the original one is the bottom composite MATH . There exists a set MATH with the property required in REF . Indeed, consider the set of all morphisms (in the category of arrows) from MATH to the solution set MATH: MATH. Form the pushout MATH and the canonical corner map MATH, and factor MATH as MATH with MATH, MATH. Set MATH. MATH is the set of such MATH (one for each morphism from MATH to MATH). Indeed, MATH and MATH, so MATH. MATH by REF, so MATH and REF-of-REF imply MATH. Finally, any morphism from MATH to MATH can be factored, using cREF, as MATH . This completes the proof of REF and of NAME 's theorem.
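The smallness step at the end of the proof is the usual one; schematically (with hypothetical names $A$, $X_\alpha$, $\lambda$):

```latex
% A lifting problem against the stage-\lambda map factors through an earlier stage:
% the chain \langle X_\alpha \rangle_{\alpha < \lambda} is \lambda-filtered (\lambda regular),
% and A is \lambda-presentable, so the map
A \longrightarrow X_\lambda = \operatorname*{colim}_{\alpha < \lambda} X_\alpha
% factors as
A \longrightarrow X_\alpha \longrightarrow X_\lambda \qquad \text{for some } \alpha < \lambda,
% and the square glued on at stage \alpha + 1 supplies the required lift.
```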
math/0102087
Let MATH be a set of strong generators for MATH, so that the functors MATH, MATH, collectively reflect isomorphisms. (Any locally presentable category has such MATH.) Let MATH be the set of (isomorphism types of) regular quotients of these generators; finally, let MATH be the set of all (isomorphism types of) subobjects of members of MATH. Then MATH (a fortiori MATH, since monos are closed under retract). Argue by contradiction. Suppose MATH is a mono but MATH. By transfinite induction, we will build a chain MATH of subobjects of MATH that REF is properly increasing and REF satisfies MATH for MATH. This contradicts the fact that MATH is well-powered. Set MATH. At a successor stage, find MATH, MATH, that does not factor through MATH. Factor MATH as a regular epi followed by a mono: MATH. Form an effective subobject union diagram as above: MATH and define MATH. Note MATH. MATH is bigger than MATH since MATH factors through it, so the induction hypotheses are satisfied. At a limit ordinal MATH, set MATH and use REF .
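The contradiction obtained can be summarized schematically (hypothetical symbols $A_\alpha$, $B$):

```latex
% The transfinite construction produces a strictly increasing Ord-indexed chain
A_0 \subsetneq A_1 \subsetneq \cdots \subsetneq A_\alpha \subsetneq \cdots \qquad (\alpha \in \mathrm{Ord})
% of subobjects of a fixed object B. A well-powered category has only a set of
% subobjects of B, so no such proper-class chain can exist.
```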
math/0102087
More generally, any accessible functor satisfies the solution set condition (at every object); see NAME - REF .
math/0102087
The full inverse image of a full, accessible subcategory by an accessible functor is again accessible; see NAME - REF .
math/0102087
In fact, any full subcategory of a cocomplete category closed under MATH-filtered colimits for some MATH must be closed under retracts. This is because a retract is a colimit over an ``idempotent loop'' diagram MATH, which is MATH-filtered, that is, MATH-filtered for every MATH.
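Made explicit (with hypothetical names), the remark reads:

```latex
% A retract A of X is given by s : A \to X, r : X \to A with r \circ s = \mathrm{id}_A.
% Then e = s \circ r : X \to X is idempotent, e \circ e = e, and
A \;\cong\; \operatorname{colim}\,(X, e),
% the colimit of the one-object diagram whose indexing category has the two
% morphisms \{\mathrm{id}, e\}. Every subdiagram admits a cocone via e, so this
% category is \kappa-filtered for every \kappa; hence a full subcategory closed
% under \kappa-filtered colimits contains A whenever it contains X.
```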
math/0102087
By REF.
math/0102087
Each of these properties has the following form: a set of coherent sentences (in the language of the appropriate diagram of MATH-structures) implies certain coherent sentences. Topoi with enough points inherit the truth of such statements from MATH. As to MATH, the definition of having enough models in MATH is that such conclusions extend to an arbitrary topos. It is due to REF - NAME that countable coherent logic has enough models in MATH, and to NAME that finitary coherent sentences do. The case of universal NAME logic / essentially algebraic theories is classical (see NAME - CITE). The Boolean case MATH is NAME 's theorem.
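For orientation, the common shape of the coherent implications meant here is the following (the formulas $\varphi$, $\psi_i$ are hypothetical placeholders):

```latex
% A coherent sequent: coherent formulas are built from atomic formulas by
% \top, \bot, finite \wedge, (possibly infinitary) \vee, and \exists.
\forall \vec{x}\; \Bigl( \varphi(\vec{x}) \;\longrightarrow\; \bigvee_{i \in I} \exists \vec{y}_i \, \psi_i(\vec{x}, \vec{y}_i) \Bigr).
% A topos with enough points satisfies such a sequent whenever all of its
% \mathbf{Set}-valued points do, since the points jointly reflect its validity.
```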
math/0102087
By REF , MATH and MATH are closed under filtered colimits in the category of morphisms of MATH. If they are closed under composition, they must be closed under transfinite composition. (Use transfinite induction.)