| paper | proof |
|---|---|
quant-ph/0102054 | Let us assume that the matrix MATH is unitary. Then in compliance with REF , MATH and MATH, that is, the rows of the matrix are orthonormal. Let us assume that MATH and the rows of the matrix are normalized. Then in compliance with REF the rows of the matrix are orthogonal. Hence MATH and the matrix is unitary. |
quant-ph/0102054 | REF imply that NAME REF are satisfied iff the columns of the evolution matrix are orthonormal and rows are normalized. In compliance with REF , columns are orthonormal and rows are normalized iff the matrix is unitary. |
quant-ph/0102054 | By REF . |
quant-ph/0102054 | It is sufficient to prove that any deterministic finite automaton (DFA) can be simulated by an RPA. Let us consider a DFA with MATH states MATH, where MATH. To simulate MATH we shall construct an RPA MATH with MATH states. The set of states is MATH, where MATH are the newly introduced states, which are linked to MATH by a one-to-one relation MATH. Thus MATH has a one-to-one relation to MATH. The stack alphabet is MATH, where MATH; the set of accepting states is MATH and the set of rejecting states is MATH. As for the function MATH, MATH and MATH. We define the sets MATH and MATH as follows: MATH . The transition function MATH is constructed by the following rules: CASE: MATH; CASE: MATH; CASE: MATH; CASE: MATH; CASE: MATH; CASE: MATH; CASE: MATH. Thus we have defined MATH for all possible arguments. Our automaton simulates the DFA. Note that the automaton may reach a state in MATH only by reading the end-marking symbol MATH on the input tape. As soon as MATH reaches the end-marking symbol MATH, it goes to an accepting state if its current state is in MATH, and goes to a rejecting state otherwise. The construction is performed so that MATH satisfies NAME REF . As we know, an RPA automatically satisfies the local probability REF . Let us prove that the automaton satisfies the orthogonality REF . For an RPA, REF is equivalent to the requirement that for all triples MATH. If MATH, MATH by REF . Let us consider the case when MATH. We shall denote MATH as MATH respectively. Let us assume, to the contrary, that MATH. By REF , MATH. Hence MATH. By the definition of MATH, MATH and MATH. Since MATH, MATH. Therefore MATH, that is, MATH. We have arrived at a contradiction with the fact that MATH. If MATH, MATH by REF . If MATH then MATH by REF . In case MATH or MATH is MATH, or MATH, the proof is straightforward. Compliance with the row-vector norm REF and separability REF is proved in the same way. |
quant-ph/0102054 | Our RPA has four states MATH, where MATH is an accepting state, whereas MATH is a rejecting one. The stack alphabet MATH consists of two symbols MATH. A stack filled with REF's means that the processed part of the word MATH has more occurrences of a's than b's, whereas a stack of REF's means that there are more b's than a's. Furthermore, the length of the stack word equals the difference between the numbers of a's and b's. An empty stack denotes that the numbers of a's and b's are equal. The values of the transition function follow: MATH ; all other arguments yield MATH. |
quant-ph/0102054 | Sketch of proof. The automaton takes one of three equiprobable actions: the first compares MATH to MATH, whereas the second compares MATH to MATH. The input word is rejected if the third action is chosen. The acceptance probability totals MATH. |
quant-ph/0102054 | Sketch of proof. The automaton starts the following actions with the following amplitudes: REF with amplitude MATH compares MATH to MATH. REF with amplitude MATH compares MATH to MATH. REF with amplitude MATH accepts the input. If exactly one comparison gives a positive answer, the input is accepted with probability MATH. If both comparisons give a positive answer, the amplitudes, which are chosen to be opposite, cancel, and the input is accepted with probability MATH. |
quant-ph/0102078 | As shorthand, we write MATH for MATH. By REF , MATH . Let vectors MATH and MATH be defined by MATH . Then, by the NAME - NAME inequality, MATH where MATH denotes matrix transposition. Since each vector MATH is of unit norm, we have MATH, so MATH. The matrix product MATH is upper bounded by MATH, which is at most MATH by REF . MATH . |
quant-ph/0102078 | Similar to the proof of REF . By REF , MATH . Let MATH be such that MATH, where we let MATH range from MATH to MATH and simply set the projection operators thereby left undefined to be zero operators. Then by REF , MATH . Applying the NAME - NAME inequality, and in analogy with REF , MATH . Since MATH, we conclude that MATH. MATH . |
quant-ph/0102078 | Almost identical to the proof for REF , except that we now require a second vector MATH with MATH. Then by REF , MATH . In analogy with REF , we have MATH . Besides having MATH as in the proof of REF , we also have that MATH . Therefore, MATH. MATH . |
quant-ph/0102108 | Assume that the program MATH that minimizes the right-hand side of REF is MATH and the computed MATH is MATH: MATH . There is a universal quantum NAME machine MATH in the standard enumeration MATH such that for every quantum NAME machine MATH in the enumeration there is a self-delimiting program MATH (the index of MATH) and MATH for all MATH: if MATH then MATH. In particular, this holds for MATH such that MATH with auxiliary input MATH halts with output MATH. But MATH with auxiliary input MATH halts on input MATH also with output MATH. Consequently, the program MATH that minimizes the right-hand side of REF with MATH substituted for MATH, and computes MATH for some state MATH possibly different from MATH, satisfies MATH . Combining the two displayed inequalities, and setting MATH, proves the theorem. |
quant-ph/0102108 | Let MATH be such that MATH . Denote the program MATH that minimizes the right-hand side by MATH and the program MATH that minimizes the expression in the statement of the theorem by MATH. A dovetailed computation is a method related to NAME 's celebrated diagonalization method: run all programs alternatingly in such a way that every program eventually makes progress. On a list of programs MATH one divides the overall computation into stages MATH. In stage MATH of the overall computation one executes the MATH-th computation step of every program MATH for MATH. By running MATH on all binary strings (candidate programs) simultaneously in dovetailed fashion, one can enumerate all objects that are directly computable, given MATH, in order of their halting programs. Assume that MATH is also given a MATH-length program MATH to compute MATH - that is, to enumerate the basis vectors in MATH. This way MATH computes MATH, and the program MATH computes MATH. Now since the vectors of MATH are mutually orthogonal, MATH . Since MATH is one of the basis vectors, we have that MATH is the length of a prefix code (the NAME code) to compute MATH from MATH and MATH. Denoting this code by MATH, we have that the concatenation MATH is a program to compute MATH: parse it into MATH and MATH using the self-delimiting property of MATH and MATH. Use MATH to compute MATH, use MATH to compute MATH, and determine the probabilities MATH for all basis vectors MATH in MATH. Determine the NAME code words for all the basis vectors from these probabilities. Since MATH is the code word for MATH, we can now decode MATH. Therefore, MATH, which was what we had to prove. |
quant-ph/0102108 | Write MATH. For every state MATH in MATH-dimensional NAME space with basis vectors MATH we have MATH. Hence there is a MATH such that MATH. Let MATH be a MATH-bit program to construct a basis state MATH given MATH. Then MATH. Then MATH. |
quant-ph/0102108 | The uncomputability follows a fortiori from the classical case. The semicomputability follows because we have established an upper bound on the quantum NAME complexity, and we can simply enumerate all halting classical programs up to that length by running their computations in dovetailed fashion. The idea is as follows: Let the target state be MATH of MATH qubits. Then, MATH. (The unconditional case MATH is similar with MATH replaced by MATH.) We want to identify a program MATH such that MATH minimizes MATH among all candidate programs. To identify it in the limit, for some fixed MATH satisfying REF below for given MATH, repeat the computation of every halting program MATH with MATH at least MATH times and perform the assumed projection and measurement. For every halting program MATH in the dovetailing process we estimate the probability MATH from the fraction MATH: the fraction of MATH positive outcomes out of MATH measurements. The probability that the estimate MATH is off from the real value MATH by more than a MATH is given by NAME 's bound: for MATH, MATH . This means that the probability that the deviation MATH exceeds MATH vanishes exponentially with growing MATH. Every candidate program MATH satisfies REF with its own MATH or MATH. There are MATH candidate programs MATH and hence also MATH outcomes MATH with halting computations. We use this estimate to upper bound the probability of error MATH. For given MATH, the probability that some halting candidate program MATH satisfies MATH is at most MATH with MATH . The probability that no halting program does so is at least MATH. That is, with probability at least MATH we have MATH for every halting program MATH. It is convenient to restrict attention to the case that all MATH's are large. Without loss of generality, if MATH then consider MATH instead of MATH. Then, MATH .
The approximation algorithm is as follows: CASE: Set the required degree of approximation MATH and the number of trials MATH to achieve the required probability of error MATH. CASE: Dovetail the running of all candidate programs until the next halting program is enumerated. Repeat the computation of the new halting program MATH times REF : If there is more than one program MATH that achieves the current minimum, then choose the program with the least length (and hence the least number of successful observations). If MATH is the selected program with MATH successes out of MATH trials then set the current approximation of MATH to MATH . This exceeds the proper value of the approximation, based on the real MATH instead of MATH, by at most REF bit for all MATH. CASE: Go to REF . |
quant-ph/0102108 | Every orthonormal basis of MATH has MATH basis vectors and there are at most MATH programs of length less than MATH. Hence there are at most MATH programs of length MATH available to approximate the basis vectors. We construct an orthonormal basis satisfying the lemma: The set of directly computed pure quantum states MATH spans a MATH-dimensional subspace MATH with MATH in the MATH-dimensional NAME space MATH such that MATH. Here MATH is a MATH-dimensional subspace of MATH such that every vector in it is perpendicular to every vector in MATH. We can write every element MATH as MATH where the MATH's form an orthonormal basis of MATH and the MATH's form an orthonormal basis of MATH so that the MATH's and MATH's form an orthonormal basis MATH for MATH. For every state MATH, directly computed by a program MATH, given MATH, and basis vector MATH we have MATH. Therefore, MATH (MATH, MATH). This proves the lemma. |
quant-ph/0102108 | The theorem follows immediately from a generalization of REF to arbitrary orthonormal bases: Every orthonormal basis MATH of the MATH-dimensional NAME space MATH has at least MATH basis vectors MATH that satisfy MATH. Use the notation of the proof of REF . Let MATH be a set initially containing the programs of length less than MATH, and let MATH be a set initially containing the set of basis vectors MATH with MATH. Assume to the contrary that MATH. Then at least two of them, say MATH and MATH, and some pure quantum state MATH directly computed from a MATH-length program satisfy MATH, with MATH being the directly computed part of both MATH, MATH. This means that MATH since not both MATH and MATH can be equal to MATH. Hence for every directly computed pure quantum state of complexity MATH there is at most one basis state, say MATH, of the same complexity (in fact, only if that basis state is identical to the directly computed state). Now eliminate every directly computed pure quantum state MATH of complexity MATH from the set MATH, and the basis state MATH as above (if it exists) from MATH. We are now left with MATH basis states whose directly computed parts are included in MATH, with MATH, with every element in MATH of complexity MATH. Repeating the same argument we end up with MATH basis vectors whose directly computed parts are elements of the empty set MATH, which is impossible. |
quant-ph/0102108 | Recall the MATH and MATH complexities of pure quantum states CITE mentioned in the Introduction. Denote by MATH and MATH the maximal values of MATH and MATH over all MATH qubit states MATH, respectively. All of the following was shown in CITE (the notation as above and MATH an arbitrary state, for example MATH): MATH . Combining these inequalities gives the theorem. |
quant-ph/0102108 | REF is obvious. REF follows from REF . |
quant-ph/0102108 | By REF there is a program MATH to compute MATH with MATH and, by a similar argument to that used in the proof of REF , a program MATH to compute MATH from MATH with MATH, up to additive constants. Use MATH to construct two copies of MATH and MATH to construct MATH from one of the copies of MATH. The separation between these concatenated binary programs is taken care of by the self-delimiting property of the subprograms. An additional constant term takes care of the couple of MATH-bit programs that are required. |
quant-ph/0102108 | MATH by the proof of REF . Then, the lemma follows by REF . |
quant-ph/0102108 | Only the second inequality is non-obvious. Let MATH and let MATH be a maximally complex classical MATH-bit state. Then, MATH. Hence the MATH-bit program approximating MATH by observing input MATH, and outputting the resulting outcome, demonstrates MATH. Furthermore, MATH is approximated by MATH with MATH. Thus, MATH (the MATH-term is due to the specification of the length of MATH, and the MATH term is due to the requirement of self-delimiting coding). The lemma follows since MATH. |
quant-ph/0102108 | With MATH we have MATH for MATH with MATH fixed. Substitution in REF shows that there exists a state MATH such that (up to logarithmic additive terms) MATH and MATH. So writing (again up to logarithmic additive terms) MATH we obtain MATH, up to an additive logarithmic term, which, with MATH, can only hold for MATH. Hence, for large enough MATH and MATH, one of the MATH inequalities in the above chain must be false. |
quant-ph/0102108 | There are MATH binary strings of length less than MATH. A fortiori there are at most MATH elements of MATH that can be computed by binary programs of length less than MATH, given MATH. This implies that at least MATH elements of MATH cannot be computed by binary programs of length less than MATH, given MATH. Substituting MATH by MATH together with REF yields the lemma. |
quant-ph/0102132 | When MATH and MATH commute, we have MATH . Since we assumed that the metric is NAME, MATH, and we have MATH; differentiation then gives the NAME metric. |
cs/0103017 | Symmetry considerations imply that the bitangent sphere must be centered on the MATH-axis, so it can be described by the equation MATH for some constants MATH and MATH. Let MATH denote the intersection curve of MATH and the cylinder MATH. Every intersection point between MATH and the helix must lie on MATH. If we project the helix and the intersection curve to the MATH-plane, we obtain the sinusoid MATH and a portion of the parabola MATH. These two curves meet tangentially at the points MATH and MATH. The mean value theorem implies that MATH at most four times in the range MATH. (Otherwise, the curves MATH and MATH would intersect more than twice in that range.) Since the curves meet with even multiplicity at two points, those are the only intersection points in the range MATH. Since MATH is concave, we have MATH, so there are no intersections with MATH. Thus, the curves meet only at their two points of tangency. |
cs/0103017 | Let MATH and MATH be arbitrary points in MATH, and let MATH be the unique ball tangent to the helix at MATH and MATH. By REF , MATH does not otherwise intersect the helix and therefore contains no point in MATH. Thus, MATH and MATH are neighbors in the NAME triangulation of MATH. |
cs/0103017 | There are three cases to consider, depending on whether the spread is at least MATH, between MATH and MATH, or at most MATH. The first case is trivial. For the case MATH, we take a set of evenly spaced points on a helix with pitch MATH: MATH . Every point in MATH is connected by a NAME edge to every other point less than a full turn away on the helix, and each turn of the helix contains MATH points, so the total complexity of the NAME triangulation is MATH. The final case MATH is slightly more complicated. Our point set consists of several copies of our helix construction, with the helices positioned at the points of a square lattice, so the entire construction loosely resembles a mattress. Specifically, MATH is the set MATH where MATH and MATH are parameters to be determined shortly. This set contains MATH points. The diameter of MATH is MATH and the closest pair distance is MATH, so its spread is MATH. Thus, given MATH and MATH, we have MATH and MATH. Straightforward calculations imply that for all MATH and MATH, the bitangent sphere MATH has radius less than MATH. Since adjacent helices are separated by distance MATH, every point in MATH is connected in the NAME triangulation to every point at most half a turn away in the same helix. Each turn of each helix contains MATH points, so the NAME triangulation of MATH has complexity MATH. |
cs/0103017 | The outer surface MATH clearly has area MATH, so it suffices to bound the surface area of the `holes'. For each MATH, let MATH be the boundary of the MATH-th hole, and let MATH. For any point MATH, let MATH denote the open line segment of length MATH extending from MATH towards the center of the ball MATH with MATH on its boundary. (If MATH lies on the surface of more than one MATH, choose one arbitrarily.) Let MATH be the union of all such segments, and for each MATH, let MATH. Each MATH is a fragment of a spherical shell of thickness MATH inside the ball MATH. See REF . For each MATH, we have (after some tedious calculations) MATH where MATH is the radius of MATH. The triangle inequality implies that MATH and MATH are disjoint for any two points MATH, so the shell fragments MATH are pairwise disjoint. Finally, since MATH fits inside a ball of radius MATH, its volume is MATH. Thus, MATH. |
cs/0103017 | Without loss of generality, assume that MATH is centered at the origin and that MATH is the closest point of MATH to the origin. Let MATH be the open ball of radius MATH centered at the origin, let MATH be the open unit ball centered at MATH, and let MATH be the cone whose apex is the origin and whose base is the circle MATH. See REF . MATH lies entirely inside MATH, and since MATH, we easily observe that MATH lies entirely outside MATH. Thus, the surface area of MATH is at least the area of the spherical cap MATH, which is exactly MATH. |
cs/0103017 | Let MATH be an arbitrary point in MATH, and let MATH be a ball of radius MATH centered at MATH. Call a NAME neighbor of MATH a friend if it lies inside MATH, and call a friend MATH interesting if there is another point MATH (not necessarily a NAME neighbor of MATH) such that MATH and MATH; otherwise call the friend boring. A simple packing argument shows that MATH has at most MATH boring friends. Let MATH be the set of interesting friends of MATH. Every point MATH lies on the boundary of a NAME ball MATH that contains no points of MATH in its interior and also has MATH on its boundary. It is straightforward to prove that because MATH is interesting and has distance at least MATH from any other point, MATH must have radius at least MATH. Let MATH be the ball concentric with MATH with radius MATH less than the radius of MATH. Finally, for any point MATH, let MATH be the unit-radius ball centered at MATH. We now have a set of unit balls, one for each interesting friend of MATH, whose centers lie at distance exactly MATH from the boundary of the Swiss cheese MATH. By REF , MATH has surface area MATH, and by REF , each unit ball MATH contains MATH surface area of MATH. Since the unit balls are disjoint, it follows that MATH has at most MATH interesting friends. |
cs/0103017 | Call a point far-reaching if it has a NAME neighbor at distance at least MATH, and let MATH be the set of far-reaching points. Let MATH be a ball of radius MATH containing MATH. For each MATH, let MATH be a maximal empty ball containing MATH and its furthest NAME neighbor, and let MATH be the concentric ball with radius MATH smaller than MATH. By construction, each ball MATH has radius at least MATH. Finally, for any far-reaching point MATH, let MATH be the unit-radius ball centered at MATH. By REF , the Swiss cheese MATH has surface area MATH, and by REF , each unit ball MATH contains MATH surface area of MATH. Since these unit balls are disjoint, there are at most MATH of them. |
cs/0103017 | For all MATH, let MATH be the number of far-reaching points in MATH, that is, those with NAME edges of length at least MATH. From REF , we have MATH. By REF , if the farthest neighbor of a point MATH is at distance between MATH and MATH, then MATH has MATH neighbors. Thus, the total number of NAME edges is at most MATH . |
cs/0103017 | REF implies that only MATH points can be endpoints of crossing edges. Thus, we can assume without loss of generality that MATH. We compute the total number of crossing edges by iteratively removing the point with the fewest crossing edges and retriangulating the resulting hole, say by incremental flipping. Conjecture REF implies that we delete only MATH crossing edges with each point, so altogether we delete MATH crossing edges. Not all of these edges are in the original NAME triangulation, but that only helps us. |
cs/0103017 | Assume Conjecture REF is true, and let MATH be an arbitrary set of MATH points with diameter MATH, where the closest pair of points is at unit distance. MATH is contained in an axis-parallel cube MATH of width MATH. We construct a well-separated pair decomposition of MATH CITE, based on a simple octree decomposition of MATH. The octree has MATH levels. At each level MATH, there are MATH cells, each a cube of width MATH. Our well-separated pair decomposition includes, for each level MATH, the points in any pair of level-MATH cells separated by a distance between MATH and MATH. A simple packing argument implies that any cell in the octree is paired with MATH other cells, all at the same level, and so any point appears in MATH subset pairs. Every NAME edge of MATH is a crossing edge for some well-separated pair of cells. REF implies that the points in any well-separated pair of level-MATH cells have MATH crossing NAME edges. Since there are MATH such pairs, the total number of crossing edges between level-MATH cells is MATH. Thus, there are MATH . NAME edges altogether. REF also implies that for any well-separated pair of level-MATH cells, the average number of crossing edges per point is MATH. Since every point belongs to a constant number of subset pairs at each level, the total number of crossing edges at level MATH is MATH. Thus, the total number of NAME edges is MATH. |
cs/0103017 | Let MATH be an arbitrary MATH-sample of MATH. Amenta and Bern CITE observed that MATH for any points MATH. This observation implies that for any point MATH, we have MATH, where MATH is the sample point closest to MATH. Thus, we can cover MATH with circular neighborhoods of radius MATH around each sample point MATH. By similar arguments, the neighborhood of MATH has area at least MATH, and any point in the neighborhood of MATH has local feature size at most MATH. It follows that each neighborhood has sample measure MATH, and since there are MATH such neighborhoods, MATH. |
cs/0103017 | Let MATH be any parsimonious MATH-sample of MATH. Let MATH be a small sphere intersecting MATH in a non-planar curve, where the distance from MATH to any point of MATH is at least the radius of MATH. Such a sphere always exists unless MATH is itself a sphere. Let MATH and MATH be extremely short segments of the intersection curve MATH that approximate skew line segments. Straighten these curves slightly, keeping them on the surface MATH and keeping the endpoints fixed, to obtain curves MATH and MATH. Finally, let MATH and MATH be sets of MATH evenly spaced points on MATH and MATH, respectively. See REF . The NAME triangulation of MATH has complexity MATH; every point in MATH is a NAME neighbor of every point in MATH. Moreover, any NAME circumsphere of MATH closely approximates the sphere MATH and thus excludes every point in MATH. Thus, MATH is a parsimonious MATH-sample of MATH consisting of MATH points whose NAME triangulation has complexity MATH. |
cs/0103017 | The surface MATH is the boundary of two sausages MATH and MATH, each of which is the NAME sum of a unit sphere and a line segment. Specifically, let MATH where MATH is the unit ball centered at the origin, MATH, and MATH. The local feature size of every point on MATH is MATH, so any uniform MATH-sample of MATH has MATH points. Define the seams MATH and MATH as the maximal line segments in each sausage closest to the MATH-plane: MATH . Our uniform MATH-sample MATH contains MATH points along each seam: MATH . The NAME triangulation of these MATH points has complexity MATH. Let MATH be the ball whose boundary passes through MATH and MATH and is tangent to both seams. This ball may contain other portions of the surface, but we claim that the intersection is small enough that we can avoid it with our sample points. The intersection of MATH and MATH is a small oval, tangent to MATH and symmetric about the plane MATH. Tedious calculation (which we omit) implies that the width of the oval is MATH . See REF . So MATH lies entirely within a strip of width MATH centered along the seam MATH. A symmetric argument gives the analogous result for MATH. We can uniformly sample MATH so that no other sample point lies within either strip. Each segment MATH is an edge in the NAME triangulation of the sample, and there are MATH such segments. |
cs/0103017 | Intuitively, we produce the surface MATH by pushing two sausages into a spherical balloon. These sausages create a pair of conical wedges inside the balloon whose seams lie along two skew lines. The local feature size is small near the seams and drops off quickly elsewhere, so a large fraction of the points in any uniform sample must lie near the seams. We construct a particular sample with points exactly along the seams that form a quadratic-complexity triangulation, similarly to our earlier sausage construction. Our construction relies on several parameters: the radius MATH of the spherical balloon, the width MATH and height MATH of the wedges, and the distance MATH between the seams. Each wedge is the NAME sum of a unit sphere, a right circular cone with height MATH centered along the MATH-axis, and a line segment of length MATH parallel to one of the other coordinate axes. The boundary of each wedge can be decomposed into cylindrical, spherical, conical, and planar facets. The cylindrical and spherical facets constitute the blade of the wedge, and the seam of the blade is the line segment of length MATH that bisects the cylindrical facet. The local feature size of any point on the blade is exactly MATH, and the local feature size of any other boundary point is its distance from the blade. Straightforward calculations imply that the sample measure of the wedge is MATH. A first approximation MATH of the surface MATH is obtained by removing two wedges from a ball of radius MATH centered at the origin. One wedge points into the ball from below; its seam is parallel to the MATH-axis and is centered at the point MATH. The other wedge points into the ball from above; its seam is parallel to the MATH-axis and is centered at MATH. Let MATH denote the distance between the wedges. Our construction has MATH, so MATH. 
To obtain the final smooth surface MATH, we round off the sharp edges by rolling a ball of radius MATH inside MATH along the wedge/balloon intersection curves. We call the resulting warped toroidal patches the sleeves. The local feature size of any point on the sleeves or on the balloon is at least MATH. Since MATH is star-shaped and contained in a sphere of radius MATH, its surface area is at most MATH. It follows that the sleeves have constant sample measure. The local feature size of wedge points changes only far from the blades and by only a small constant factor, so MATH. To complete the construction, we set MATH, MATH, and MATH. See REF . Finally, we construct a uniform MATH-sample MATH with MATH sample points evenly spaced along each seam and every other point at least MATH away from the seams. Setting MATH (and thus MATH) ensures that the NAME spheres MATH between seam points do not touch the surface except on the blades. By the argument in REF , there are MATH . NAME edges between seam points. |
cs/0103017 | Let MATH be a set containing the following MATH points: MATH . We easily verify that every pair of points MATH and MATH lie on a sphere MATH with every other point in MATH at least unit distance outside. Let MATH, where MATH is the unit-radius sphere centered at MATH. Clearly, MATH for every point MATH, so MATH. Let MATH be an arbitrary parsimonious MATH-sample of MATH, let MATH, and for any point MATH, let MATH be the sample points on its unit sphere. Choose an arbitrary NAME pair MATH, and let MATH be a sphere concentric with MATH but with radius smaller by MATH. This sphere is tangent to MATH and MATH but is at least unit distance from every other component of MATH. Expand MATH about its center until it hits (without loss of generality) a point MATH, and then expand it about MATH until it hits a point MATH. The resulting sphere MATH passes through MATH and MATH and has no points of MATH in its interior, so MATH and MATH are joined by an edge in the NAME triangulation of MATH. There are at least MATH such edges. |
cs/0103017 | Intuitively, we create the surface MATH by pushing two rows of regularly spaced unit balls into a large spherical balloon, similarly to the proof of REF . As before, the surface contains two wedges, but now each wedge has a row of small conical teeth. Our construction relies on the same parameters MATH as our earlier construction. We now have an additional parameter MATH, which is simultaneously the height of the teeth, the distance between the teeth, and half the thickness of the `blade' of the wedge. Our construction starts with the (toothless) surface described in the proof of REF , but using a ball of radius MATH instead of a unit ball to define the wedges. We add MATH evenly-spaced teeth along the blade of each wedge, where each tooth is the NAME sum of a unit ball with a right circular cone of radius MATH. Each tooth is tangent to both planar facets of its wedge. To create the final smooth surface MATH, we roll a ball of radius MATH over the blade/tooth intersection curves. The complete surface has sample measure MATH. Finally, we set the parameters MATH, MATH, and MATH, so that MATH. Let MATH be a parsimonious MATH-sample of MATH, and let MATH. For any pair of teeth, one on each wedge, there is a sphere tangent to the ends of the teeth that has distance MATH from the rest of the surface. We can expand this sphere so that it passes through one point on each tooth and excludes the rest of the points. Thus, the NAME triangulation of MATH has complexity MATH. |
cs/0103017 | Consider the surface MATH consisting of MATH unit balls evenly spaced along two skew line segments, exactly as in the proof of REF , with thin cylinders joining them into a single connected surface. With high probability, a random sample of MATH points contains at least one point on each ball, on the side facing the opposite segment. Thus, with high probability, there is at least one NAME edge between any ball on one segment and any ball on the other segment. |
cs/0103017 | Let MATH be the surface used to prove REF , but with MATH teeth. With high probability, a weighted random sample of MATH contains at least one point at the tip of each tooth. |
cs/0103018 | Using the same state set (and some additional transitions labeled with the empty word) we can construct (in polynomial time) a finite automaton which accepts the following language MATH where MATH means that MATH is a descendant of MATH by the convergent rewriting system MATH. Then we complement MATH with respect to MATH and build the intersection with the regular set of freely reduced words. MATH . |
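The construction above intersects with the regular set of freely reduced words, that is, words in which no letter is adjacent to its inverse. A minimal sketch of free reduction, under the hypothetical convention (mine, not from the paper) that a lowercase letter and its uppercase form are mutual inverses:

```python
def free_reduce(word):
    """Cancel adjacent inverse pairs x * x^{-1} until none remain.
    Convention (an assumption): 'a' and 'A' are mutual inverses."""
    stack = []
    for ch in word:
        if stack and stack[-1] == ch.swapcase() and stack[-1] != ch:
            stack.pop()  # the adjacent inverse pair cancels
        else:
            stack.append(ch)
    return "".join(stack)

def is_freely_reduced(word):
    return free_reduce(word) == word

print(free_reduce("abBA"))       # -> "" (everything cancels)
print(is_freely_reduced("abA"))  # -> True
```

The stack-based pass is complete because free reduction is confluent: the fully reduced word is independent of the cancellation order.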
cs/0103018 | The first question can be solved by guessing a word MATH letter by letter and calculating MATH. The second question can be solved since MATH implies MATH for some MATH and MATH with MATH. Hence we can guess MATH and MATH. During the guess we compute MATH and then we verify MATH. MATH . |
cs/0103018 | Let MATH be a primitive word. In our setting the definition of the MATH-stable normal form of a word MATH depends on whether or not MATH is a factor of MATH. So we distinguish two cases; in the following we also write MATH to denote MATH. Then, for example, MATH means the same as MATH. CASE: We assume that MATH is not a factor of MATH. The idea is to replace each maximal factor of the form MATH with MATH by a sequence MATH and each maximal factor of the form MATH with MATH by a sequence MATH. This leads to the following notion: The MATH-stable normal form (first kind) of MATH is a shortest sequence (MATH is minimal) MATH such that MATH, MATH, MATH, MATH for MATH, and the following conditions are satisfied: CASE: MATH. CASE: MATH if and only if neither MATH nor MATH is a factor of MATH . CASE: If MATH, then: MATH . The MATH-stable normal form of MATH becomes MATH . Let MATH with MATH and MATH. Then the MATH-stable normal form of MATH is: MATH . CASE: We assume that MATH is a factor of MATH. Then we can write MATH with MATH and MATH, MATH. We allow MATH, hence the second case includes the case MATH. In fact, if MATH, then below we obtain the usual definition of MATH-stable normal form. Moreover, by switching to some conjugated word of MATH we could always assume that MATH for some letter MATH fixed by the involution, MATH, but this switch is not made here. The idea is to replace each maximal factor of the form MATH with MATH by a sequence MATH. In this notation MATH represents the factor MATH. The MATH-stable normal form (second kind) of MATH is now a shortest sequence (MATH is minimal) MATH such that MATH, MATH for MATH, and the following conditions are satisfied: CASE: MATH. CASE: MATH if and only if MATH is not a factor of MATH . CASE: If MATH, then: MATH . Since MATH, the MATH-stable normal form of MATH becomes MATH . So, for the second kind no negative integers occur. Let MATH with MATH. Then MATH and MATH. 
Let MATH . Then the MATH-stable normal form of MATH is: MATH . In both cases we can write the MATH-stable normal form of MATH as a sequence MATH where MATH are words and MATH are integers. For every finite semigroup MATH there is a number MATH such that for all MATH the element MATH is idempotent, that is, MATH. It is clear that the number MATH for our monoid MATH is the same as the number MATH. It is well-known CITE that we can take MATH (it is however more convenient to define MATH for MATH). Hence in the following MATH. For specific situations this might be an overestimation, but this choice guarantees MATH for all MATH and all MATH. Now, let MATH be words such that the MATH-stable normal forms are identical up to one position where an integer MATH appears for MATH and an integer MATH appears for MATH. We know MATH whenever the following conditions are satisfied: MATH, MATH, MATH, and MATH. Then we have MATH. This is the reason for changing the syntax of the MATH-stable normal form. Each non-zero integer MATH is written as MATH where MATH are uniquely defined by MATH, MATH, and MATH. For MATH we may choose MATH. We shall read MATH as a variable ranging over non-negative integers, but MATH, MATH, and MATH are viewed as constants. In fact, if MATH, then it is best to view MATH as a constant as well, in order to avoid problems with the constraints. Let MATH, MATH, and MATH be words such that MATH holds. Write these words in their MATH-stable normal forms: MATH . Since MATH there are many identities. For example, for MATH we have MATH, MATH, MATH, MATH, etc. What exactly happens depends only on the MATH-stable normal form of the product MATH. There are several cases, which can easily be listed. We treat only one of them, which is in some sense the worst case for producing a large exponent of periodicity. This is the case where MATH with MATH and MATH. Then it might be that MATH and MATH with MATH (and MATH). Hence we have MATH and MATH. 
It follows MATH, MATH, and there is only one non-trivial identity: MATH . By REF, the case MATH leads to the identity: MATH . Assume now that MATH and MATH. If we replace MATH, MATH, and MATH by some MATH, MATH, and MATH such that still MATH, then we obtain new words MATH, MATH, and MATH with the same images under MATH in MATH and still the identity MATH. What follows then is completely analogous to what has been done in detail in CITE. Using the MATH-stable normal form we can associate with an equation MATH of denotational length MATH together with its solution MATH some linear Diophantine system of MATH equations in at most MATH variables. The variables range over natural numbers since zeros are substituted. (In fact the number of variables can be reduced to at most MATH). The parameters of this system are such that the maximal size of a minimal solution (with respect to the componentwise partial order of MATH) is in MATH with the same approach as in CITE. This tight bound is based in turn on the work of CITE; a more moderate bound MATH (which is enough for our purposes) is easier to obtain, see for example, CITE. The maximal size of a minimal solution of the linear Diophantine system translates back into a bound on the exponent of periodicity. For this translation we have to multiply with the factor MATH and to add MATH. Putting everything together we obtain the claim of the proposition. MATH . |
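The argument above uses the standard fact that in a finite semigroup every element has an idempotent power. A sketch with a concrete transformation monoid (functions on a small set under composition; the example map is hypothetical, not the monoid MATH from the proof):

```python
def idempotent_power(f, max_iter=1000):
    """Return the least m >= 1 such that f^m is idempotent (f^m o f^m = f^m).
    f is a transformation of a finite set {0, ..., n-1}, encoded as a tuple."""
    n = len(f)
    def compose(g, h):  # (g o h)(i) = g[h[i]]
        return tuple(g[h[i]] for i in range(n))
    power = f
    for m in range(1, max_iter + 1):
        if compose(power, power) == power:
            return m, power
        power = compose(power, f)
    raise ValueError("no idempotent power found (impossible for finite f)")

# f maps 0 -> 1, 1 -> 2, 2 -> 1: eventually periodic with period 2
m, e = idempotent_power((1, 2, 1))
print(m, e)  # -> 2 (2, 1, 2)
```

The bound MATH quoted in the proof plays the same role as `m` here: a single exponent that works uniformly for all elements of the finite monoid.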
cs/0103018 | Clearly MATH and MATH for all MATH. Next by REF for MATH and MATH for MATH. Hence MATH for MATH and therefore MATH. This means MATH since MATH . |
cs/0103018 | CASE: Clearly, the only-if condition is satisfied by the definition of a projection since then MATH. For the converse, assume that MATH and that MATH implies MATH. Then for each MATH we can choose a word MATH such that MATH. We can make the choice such that MATH for all MATH. If MATH, then we can find MATH such that MATH, since we can take the shortest word MATH such that MATH. For MATH we know that there is some word MATH with MATH and MATH. Hence we can write MATH with MATH and MATH. For MATH we can demand MATH. For MATH we can demand MATH. Thus, we find a projection MATH such that MATH and moreover, MATH for all MATH. CASE: Using the reasoning in the proof of REF we may assume that MATH satisfies MATH for all MATH. Since MATH defines a base change with MATH, we know by REF that MATH is a solution of MATH. Clearly, MATH. MATH . |
cs/0103018 | By definition, MATH and MATH are extended to homomorphisms MATH and MATH leaving the letters of MATH invariant. Since MATH we have MATH and MATH. Since MATH is a solution, we have MATH and MATH leaves the letters of MATH invariant. The solution MATH satisfies MATH for all MATH. Hence, if MATH, then MATH. If MATH, then MATH and MATH, again by the definition of a partial solution. MATH . |
cs/0103018 | Let MATH, MATH, MATH, and MATH. The non-deterministic algorithm works as follows: For each MATH we guess admissible exponential expressions MATH and MATH with MATH. We define exponential expressions MATH and MATH. For each MATH we guess an admissible exponential expression MATH with MATH and MATH. Next we verify whether or not MATH. During this test we have to create an exponential expression MATH (and MATH, respectively) by replacing MATH in MATH (and MATH, respectively) with the expression MATH. This increases the size in the worst case by a factor of MATH. The other tests whether MATH for MATH and MATH for MATH involve admissible exponential expressions over Boolean matrices and can be done in polynomial time. The correctness of the algorithm follows from our general assumption that all MATH appear in MATH. Therefore, if we have MATH, then MATH (or MATH) necessarily appears as a factor in MATH. Hence MATH has an exponential expression of polynomial size by REF . Therefore guesses of MATH, MATH, and MATH as above are possible without running out of space. MATH . |
cs/0103018 | We first guess some alphabet MATH of polynomial size together with MATH. Then we guess some admissible base change MATH such that MATH and we compute MATH. Next we guess some admissible equation with constraints MATH which uses MATH and MATH. We check, using REF , that there is some partial solution MATH such that MATH. (Note that every equation with constraints MATH satisfying MATH for some MATH is admissible by REF .) Finally we check, using REF , that there is some projection MATH such that MATH. We obtain MATH . |
cs/0103018 | We may assume that MATH. By contradiction assume that MATH is not free. Then there is some MATH and some cut MATH such that MATH with MATH. If MATH, then we have an immediate contradiction. For MATH the relation MATH is due to some pair MATH, MATH with MATH or MATH. Since MATH contains no cut, we can use the same pair to find an interval MATH such that MATH and MATH. Using induction on MATH we see that MATH cannot be free. This is a contradiction, because then MATH is not free. MATH . |
cs/0103018 | Assume by contradiction that MATH is not free. Then it contains an implicit cut MATH with MATH. By the observation above: If MATH, then MATH is an implicit cut of MATH and MATH is not free. Otherwise, MATH and MATH is not free. MATH . |
cs/0103018 | We may assume that MATH is a positive interval, that is, MATH. We show the existence of MATH where MATH and MATH is a cut. The existence of MATH where MATH and MATH is a cut follows by a symmetric argument. If MATH, then MATH itself is a cut and we can choose MATH. Hence let MATH and consider the positive interval MATH. This interval is not free, but the only possible position for an implicit cut is MATH. Thus for some cut MATH we have MATH with MATH and MATH. A simple reflection shows that we have MATH and MATH. Hence we can choose MATH. MATH . |
cs/0103018 | Let MATH be maximal free. Then MATH and MATH is maximal free, too. Hence MATH and MATH is closed under involution. By REF we may assume that MATH is a cut. Say MATH. Then MATH and there is no other maximal free interval MATH with MATH because of REF . Hence there are at most MATH such intervals MATH. Symmetrically, there are at most MATH maximal free intervals MATH where MATH and MATH is a cut. MATH . |
cs/0103018 | It is enough to show that MATH and MATH can be represented by exponential expressions of size MATH. Then MATH can have size at most MATH and the assertion follows. We will estimate the size of an exponential expression for MATH, only. We start again with the MATH-transformation of MATH . If MATH is small there is nothing to do since MATH. An easy reflection shows that MATH can become large, only if there is some MATH such that MATH or MATH is long. By symmetry we treat the case MATH only and we fix some notation. We let MATH, MATH, and MATH. Let MATH be a minimal cover of MATH. We may assume that MATH is large. It is enough to find an exponential expression for the MATH-factor MATH having size in MATH, because we want the whole expression to have size in MATH. Note that MATH is a proper factor of MATH. Hence no critical word of MATH can appear as a factor inside MATH. This means there is some MATH such that both MATH and MATH. Indeed, if MATH, then we choose MATH. Otherwise we let MATH be minimal such that MATH. Then MATH is impossible because MATH would appear as a factor in MATH. We can write MATH and since MATH is a letter, it is enough to find exponential expressions for MATH, MATH, of size MATH each. As a conclusion it is enough to prove the following lemma. MATH . |
cs/0103018 | We show that there is an exponential expression of size MATH under the assumption MATH. This is enough, because we can always write MATH as MATH, where MATH, the MATH are letters, and each MATH satisfies the assumption. Note that the assumption implies MATH and we may define MATH as the suffix of length MATH of MATH. For MATH let MATH. Then MATH is a critical word which appears as a factor in MATH. If the words MATH, MATH are pairwise different, then MATH and we are done. Hence we may assume that there are repetitions. Let MATH be the smallest index such that a critical word is seen for the second time and let MATH be the first appearance of MATH. This means for MATH the words MATH are pairwise different and MATH. Now, MATH and MATH, hence MATH and MATH overlap in MATH. We can choose MATH maximal such that MATH is a prefix of the word MATH. (Note that the last factor MATH ensures that the prefix ends with MATH). For some index MATH we can write MATH . We claim that MATH. Indeed, let MATH be maximal such that MATH and assume that MATH. Then both MATH and MATH are periods of MATH, but MATH. Hence by NAME and NAME 's Theorem CITE we obtain that the greatest common divisor of MATH and MATH is a period, too. By the definition of a MATH-factorization (MATH was the first repetition), the length MATH is a multiple of MATH, and we must have MATH. This shows the claim. Moreover, we have MATH where MATH for MATH. We have MATH, hence MATH. It follows that MATH is an exponential expression of size MATH. More precisely, for some suitable constant MATH its size is at most MATH. The constant MATH depends only on the constant which is hidden when writing MATH. By induction on the size of the set MATH we may assume that MATH has an exponential expression of size at most MATH. Hence the exponential expression for MATH has size at most MATH . Thus, the size is in MATH. MATH . |
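The period argument above invokes what is presumably (given the anonymized "NAME and NAME 's Theorem") the Fine and Wilf periodicity theorem: if a word of length at least p + q - gcd(p, q) has periods p and q, then gcd(p, q) is also a period. A quick numeric sanity check (the word is a toy example of mine):

```python
from math import gcd

def has_period(w, p):
    """p is a period of w if w[i] == w[i + p] for all valid i."""
    return all(w[i] == w[i + p] for i in range(len(w) - p))

p, q = 4, 6
w = "ab" * 4                     # length 8 = p + q - gcd(p, q)
assert has_period(w, p) and has_period(w, q)
print(has_period(w, gcd(p, q)))  # -> True: gcd(p, q) = 2 is also a period
```

The length threshold is sharp: below p + q - gcd(p, q) there exist words with periods p and q but not gcd(p, q).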
cs/0103021 | By making a query to the black box, the input MATH will become MATH; after applying a NAME transform we obtain the quantum state MATH . From this we see that the problem of determining MATH is equivalent to estimating the amplitude of MATH (or MATH). The problem of estimating the amplitude of a quantum state, which is equivalent to the problem of counting the number of solutions to a quantum problem, is well-studied CITE. In CITE they prove that MATH queries are required for a MATH-approximate count, where MATH is the number of solutions, MATH is the set of possible inputs and MATH defines the closeness of the approximation. If we use this lower bound for the case of amplitude estimation, we get a lower bound of MATH, since the amplitude is MATH. In our case, MATH can take any value in MATH, MATH must be less than MATH and MATH, so we obtain the lower bound of MATH qubits. |
cs/0103021 | Suppose we are able to query the black box with frequencies in the range MATH. We claim that this black box can be simulated by a black box with only one tick rate MATH at the cost of replacing each query with at most MATH queries. This can be done since one query to the MATH black box with tick rate MATH is equivalent to MATH consecutive queries with tick rate MATH, using the output of one query as the input to the next. Notice that a superposition of queries to the MATH black box does not pose any challenge to the simulation, since we can also query the one-tick rate black box in a superposition of times. For such an input, the number of queries is defined to be the maximum over all states of the superpositions. Since in all cases we query the black box at most MATH times, this means that the one-tick rate version will run with at most MATH queries. Now, in REF we have already proved that when we use only one tick rate, we need to communicate at least MATH qubits, and therefore MATH. |
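The reduction above replaces one query at tick rate MATH by MATH chained queries at tick rate 1, feeding each output into the next query. Modeling the black box as a toy state-transition function (the dynamics are hypothetical, purely for illustration):

```python
def make_black_box(step):
    """A query at tick rate k advances the hidden state by k steps."""
    def query(state, k):
        for _ in range(k):
            state = step(state)
        return state
    return query

step = lambda s: (3 * s + 1) % 17  # hypothetical internal dynamics
box = make_black_box(step)

s, k = 5, 4
direct = box(s, k)        # one query at tick rate k
chained = s
for _ in range(k):        # k queries at tick rate 1, chained
    chained = box(chained, 1)
print(direct == chained)  # -> True
```

The equality holds by construction (both sides compute the k-fold iterate of `step`), which is exactly why the simulation costs at most a factor of k in query count.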
cs/0103024 | By using the method given in the preliminary section, we can compute the level of any given point MATH in polylogarithmic time by using MATH processors. We now apply parametric searching to obtain a sequential time bound for computing the point MATH. A parametric searching algorithm is usually stated as a sequential algorithm; however, it is naturally a parallel algorithm if we use a parallel decision algorithm and also a parallel sorting algorithm. We remark that we could do this more simply, without parametric searching, by examining the range-searching method in detail; however, we omit this in this paper. |
cs/0103024 | At-most-MATH-level (the part of the arrangement below the MATH-level) is a union of MATH concave chains such that all concave peaks in the chains appear in the MATH-level CITE. If a concave chain among them has a peak in MATH, the slope of the chain must change from positive to negative. Thus, the number of maximal peaks within MATH is the difference between the numbers of positive-slope lines at the two endpoints. |
cs/0103024 | If we construct the dual range-search data structure for the set of lines with positive slopes, the number of positive-slope lines below a given point MATH can be computed in MATH time. Hence, MATH and MATH can be computed in MATH time, and the number of maximal peaks can be computed by using REF . The number of minimal peaks is easily computed from the number of maximal peaks and the slopes of the MATH-level at the endpoints. |
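The counting primitive used here, "how many positive-slope lines pass below a given point", can be written naively in linear time; the data structure in the proof answers the same query faster. A sketch with hypothetical example lines:

```python
def positive_slope_lines_below(lines, q):
    """Count lines y = a*x + b with a > 0 passing strictly below the point q."""
    qx, qy = q
    return sum(1 for a, b in lines if a > 0 and a * qx + b < qy)

# lines as (slope, intercept) pairs; illustrative values only
lines = [(1.0, 0.0), (2.0, -1.0), (-1.0, 5.0), (0.5, 3.0)]
print(positive_slope_lines_below(lines, (1.0, 2.0)))  # -> 2
```

Taking the difference of this count at the two endpoints of an interval gives the number of maximal peaks, as in the statement above.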
cs/0103024 | The parametric searching method gives MATH time complexity apart from the MATH preprocessing time. We balance MATH and MATH to have MATH. If MATH, we instead use MATH. This gives the time complexity. |
cs/0103024 | First we search for a horizontal line MATH such that it intersects the MATH-level, and the number of peaks below it is not less than MATH. Such a line MATH can be found in MATH time. Next, let MATH and MATH be the leftmost intersection and the rightmost intersection with the MATH-level on MATH, and let MATH be the interval between them. If the MATH-level at an endpoint of the input interval MATH is below MATH, we connect the point on the MATH-level at the MATH-value of the endpoint to MATH with a segment to form a chain MATH with at most three segments. Let MATH be the set of lines in the arrangement intersecting the chain MATH. The cardinality of MATH is at most MATH because of REF . Finally, we find all peaks below MATH by using the MATH-branching binary search. Here, by using the windowing method of CITE, we need only consider lines in MATH together with lines below the endpoints of the chain MATH. There are at most MATH such lines. Hence, this second step can be done in MATH time. |
gr-qc/0103074 | The statement concerning the *-operation is obvious. In order to show that the algebra product can be calculated by REF , we first show that if MATH and MATH, then MATH. Clearly MATH is compactly supported and symmetric. We must show that in addition MATH. This can be seen by an application of CITE, which yields, in combination with REF for MATH, MATH . It is not difficult to see that the set on the right side of the above inclusion is in fact contained in MATH, thereby showing that MATH is in the class MATH, as desired. We finish the proof by showing that REF holds not only for smooth test functions, but also for our admissible test distributions MATH and MATH. To see this, we consider sequences of test functions MATH and MATH converging to MATH and MATH in the sense of MATH, respectively MATH (for a definition of these spaces and their pseudo topology, the so-called ``NAME pseudo topology'', see the Appendix), where MATH and MATH are closed conic sets in MATH and MATH, respectively, with the property that MATH and MATH. Now the operation of composing distributions, which forms the basis of the definition of the contracted tensor product REF , is continuous in the NAME pseudo topology. Therefore MATH in the space MATH, where MATH is a certain closed conic set in MATH, which is calculable from MATH and MATH using formula REF . Now expressions of the sort MATH arise from the pointwise product of distributions. This product is continuous in the NAME pseudo topology. Therefore we conclude that MATH. By a similar argument, it also follows that MATH. REF , applied to some vector MATH, is already known to hold for MATH and MATH, since these are smooth test functions. It follows that REF must also hold for our admissible test distributions. The last statement of the theorem is obvious from the definition of MATH when MATH and MATH are smooth functions. 
By a continuity argument similar to the one above, it also holds for distributional MATH and MATH. |
gr-qc/0103074 | MATH must be of the form REF where MATH is some natural number. We must show that MATH. Let us assume that MATH and that MATH. We show that this leads to a contradiction. By REF for all test functions. Using REF (and recalling that MATH), this gives us MATH . Using the relation MATH, the support properties of the advanced and retarded fundamental solutions and the fact that MATH is compactly supported, one finds from REF that the distribution MATH must be of compact support. In combination with a microlocal argument similar to the one given in the proof of REF , one finds moreover that MATH. Since MATH, it follows from REF that MATH, which contradicts our hypothesis. |
gr-qc/0103074 | In order to show that the right hand side of REF represents an element in MATH, we must show that MATH. We first note that, since MATH and MATH are NAME states, MATH is smooth. By CITE we therefore find MATH . The distribution MATH is by definition symmetric and of compact support. Therefore MATH, which gives us that MATH. Since every element in MATH can be written as a sum of elements of the form MATH, with MATH, we may therefore take REF as the definition of a linear map from MATH to MATH. That this map is a homomorphism is demonstrated by the following calculation. MATH where we have used the identity MATH . That MATH preserves the *-operation follows because MATH is real, which is in turn a consequence of the fact that MATH. That MATH is one-to-one can be seen from an explicit construction of its inverse, given by the same formula as REF, but with MATH replaced by MATH. |
gr-qc/0103074 | Let MATH be a quasi-free NAME state for the spacetime MATH and let MATH. Then MATH is the two-point function of a quasi-free NAME state MATH on MATH. (Here we are using the assumption that our isometry MATH is causality preserving.) By REF , we may assume that the abstract algebras MATH and MATH are concretely realized as linear operators on the GNS constructions of the quasi-free NAME states MATH and MATH. Since every element in MATH can be written as a sum of elements of the form MATH, the above formula gives, by linearity, a map from MATH to MATH. That this map is a *-homomorphism can easily be seen from REF , together with the relation MATH. That MATH is injective follows from the definition. |
gr-qc/0103074 | Let MATH be a NAME surface not intersecting the future of MATH and let MATH be a NAME surface not intersecting the past of MATH. Define a bidistribution MATH on MATH by MATH where MATH . By a standard argument based on NAME 's law (see, for example, CITE), one can see that MATH does not depend on the particular choice for MATH. Let MATH be an arbitrary smooth function on MATH satisfying MATH for all MATH and MATH for all MATH. We then define a linear map MATH by MATH . The distribution MATH satisfies the following properties: CASE: MATH is of compact support with MATH, CASE: MATH for all MATH and MATH. REF immediately follows from the fact that MATH for all MATH and the fact that MATH for all MATH. REF holds since MATH . We wish to show that the MATH-th tensor power of MATH gives a map MATH . We begin by showing that MATH has the following wave front set: MATH . In order to see this, we note that by definition, MATH . We are thus in a position to apply the ``propagation of singularities theorem'' CITE to MATH. This theorem tells us that an element MATH is in MATH if and only if every element of the form MATH is in MATH, where MATH with respect to MATH and where MATH with respect to MATH. Moreover, by definition of MATH, we have that MATH for all MATH. The wave front set of MATH is known to be MATH . Combining these two pieces of information then gives us the above wave front set for MATH. Since differentiating and multiplying a distribution by a smooth function does not enlarge its wave front set, it holds that MATH. By the rules CITE for calculating the wave front set of a tensor product of distributions, we get from this that MATH . Let MATH, that is, MATH is a symmetric, compactly supported MATH-point distribution with MATH. Then it follows from the above form of MATH that MATH . Therefore CITE applies and we conclude from that theorem that the linear operator MATH has a well-defined action on distributions MATH. 
The wave front set of the distribution MATH can be calculated from CITE using our knowledge about MATH and MATH: MATH . Since the distribution MATH is of compact support by REF , we have thus demonstrated that the MATH-th tensor power of MATH gives a map from MATH to MATH, as we had claimed. The algebras MATH and MATH are faithfully represented on the GNS NAME spaces of any quasi-free NAME states MATH respectively MATH on the corresponding NAME subalgebras. We may choose these quasi-free states (or rather their two-point functions) to have identical initial data on MATH. In view of REF , this amounts to saying that MATH for all compactly supported test functions MATH. We now define MATH by MATH where the MATH are the generators of MATH and where the MATH are the generators of MATH. We must show that this is indeed a *-isomorphism. That MATH respects the product in both algebras, REF , follows from MATH where MATH is used for the contractions in MATH on the left side, and MATH is used for the contractions in MATH on the right side, as one can easily verify using relation REF and the definition of the contracted tensor product. That MATH respects the *-operation follows because MATH is real. That MATH is invertible can be seen by an explicit construction of its inverse, given by the same construction as above, but with the spacetimes MATH and MATH interchanged. The definition of MATH does not depend on the specific choice for MATH, but it depends on a choice for MATH. It is however not difficult to see that isomorphism MATH itself is independent of that choice. We finally prove that the restriction of MATH to MATH is the identity. By REF above we have MATH for any MATH. Now if the support of MATH is in MATH (so that MATH) then the above expression vanishes on MATH. Since this expression is moreover a solution to the NAME equation, it must in fact vanish everywhere. Therefore, by the same argument as in the proof of REF , there is a MATH such that MATH. 
Since MATH, this implies that MATH for all MATH. This argument can be generalized to show that MATH for all MATH and arbitrary MATH, thus proving our claim. |
gr-qc/0103074 | Let MATH be a quasi-free NAME state for the theory at MATH. For all MATH, let MATH . Then MATH is the two-point function of a quasi-free NAME state of the theory scaled by MATH. (Note that REF is equivalent to the relation MATH between the smeared two-point functions, because the metric volume element transforms as MATH.) We use MATH to give a concrete realization of the algebra MATH. We then define (using the same symbol for the generators MATH in both algebras) MATH . MATH is a well defined map for all MATH, because MATH. Using REF , it is also easily checked to be a *-homomorphism. |
hep-th/0103100 | The proof is exactly the same as in the case where MATH and MATH is smooth, except that, in deriving this cohomology space, we need to cover MATH by more than two open subsets. |
hep-th/0103164 | This follows immediately from REF together with the definition of MATH. To see the last statement, assume that MATH is invariant. Then MATH for any MATH. |
math-ph/0103002 | Let MATH be the set of permutations where the origin belongs to a cycle of length greater than MATH. One has MATH . Then MATH. Since MATH with MATH a self-avoiding walk from REF to MATH, and MATH, one can write MATH . The first inequality is NAME 's lemma. The last term goes to REF as MATH since the sum over all cycles containing the origin converges. |
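The event above concerns permutations in which the origin belongs to a long cycle. The cycle length at the origin is computed by following the permutation until it returns; a minimal sketch (the permutation is a toy example, not drawn from the measure in the proof):

```python
def cycle_length_at(perm, start=0):
    """Length of the cycle of the permutation (i -> perm[i]) containing start."""
    length, i = 1, perm[start]
    while i != start:
        i = perm[i]
        length += 1
    return length

perm = [2, 0, 3, 1, 4]           # cycles: (0 2 3 1)(4)
print(cycle_length_at(perm, 0))  # -> 4
print(cycle_length_at(perm, 4))  # -> 1
```

Membership in the set MATH of the proof amounts to checking `cycle_length_at(perm, 0)` against the length threshold.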
math-ph/0103009 | MATH . The MATH property follows from the TF REF . REF follows from REF and the fact that MATH . The proof of MATH is given in REF . |
math-ph/0103009 | Since the potential MATH is spherically symmetric and REF is fulfilled for derivatives in all directions, we get for MATH . Hence, MATH, together with all derivatives, is bounded above by a constant on a ball around MATH. Since the operation MATH smears the potential, for every MATH, over a region MATH in the MATH-direction, the difference MATH can be expressed by MATH times a function MATH which is bounded by a constant. Since MATH, the same argument can be given for all derivatives. |
math-ph/0103009 | On MATH we apply REF to REF with MATH and MATH, which implies for arbitrary but fixed MATH (we may set the chemical potential MATH for simplicity; the computations for arbitrary MATH are essentially the same) MATH . Hence, multiplying by MATH and integrating over MATH leads to MATH . In the case of MATH the terms of REF have to be estimated separately. The semiclassical part reads MATH . An analogous estimate for MATH is derived by using REF . |
math-ph/0103009 | Obviously, the main contribution to the magnitude of the semiclassical term MATH is produced by the NAME singularity, that is, MATH . |
math-ph/0103009 | This is done by putting together REF and setting MATH. |
math-ph/0103009 | We now assume that we have already obtained the estimate MATH, which so far we have proved only on MATH, by setting MATH in REF . The missing part will be proved in REF . Since we know by REF , we get MATH for some MATH. This implies the statement of the lemma. |
math-ph/0103009 | By REF and by combining the estimates of REF with MATH. |
math-ph/0103009 | Recall MATH . By the NAME inequality we derive MATH . |
math-ph/0103009 | By decomposition we have on the one hand the fully semiclassical and easier-to-handle part MATH . On the other hand there is the more interesting term MATH . For MATH we separately calculate MATH and MATH . Whereas REF can be bounded by REF , REF can analogously be estimated by REF or REF . The term MATH is a bit more delicate. We can either use REF or REF , which states that for MATH, given in REF , with MATH fulfilling REF and MATH . Since MATH, we get that MATH in MATH. Hence, we can apply REF to our case, with MATH, yielding MATH . The term MATH stems from rescaling MATH to MATH. So, MATH . |
math-ph/0103009 | First we assume MATH. Applying REF with MATH and MATH we get for arbitrary but fixed MATH (we set MATH) MATH . After multiplying by MATH and integrating over MATH we get MATH . In the case of MATH we again have to decompose MATH, since MATH is smaller than MATH. So for fixed but arbitrary MATH, MATH is decomposed with respect to MATH. For MATH we proceed as in the previous section and estimate each term separately, and for MATH we immediately arrive at REF . The pure semiclassical part MATH can be estimated analogously to REF . |
math-ph/0103009 | As in REF we first assume MATH. The other case, where the terms have to be computed separately, works as in REF . Applying REF with MATH and MATH we get for arbitrary but fixed MATH (we set MATH) MATH . This implies MATH . |
math-ph/0103009 | Note that by REF the estimate REF is proved and the assumption of REF justified. So by REF we arrive at REF . |
math-ph/0103009 | Let us start with MATH . By the HLS inequality we get MATH . The term MATH is a bit more delicate and we refer to REF for a proof of the estimate REF . Note: REF is proved for the region MATH with a possibly very large parameter MATH. This parameter MATH is chosen such that only the lowest NAME band contributes to NAME 's calculations. Since we only treat the lowest NAME band case, the assertion of REF holds in our case on the whole region MATH. Furthermore, we remark that if REF were valid on MATH, we could immediately conclude by the HLS inequality that MATH. But since the validity of REF cannot be guaranteed on MATH, we have to refer to NAME 's method.
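Since the argument leans on the HLS inequality, it may help to recall its standard form. This is a generic reminder only; the exponents and dimension below are not taken from the paper's elided MATH placeholders:

```latex
% Hardy-Littlewood-Sobolev inequality (standard statement):
% for f \in L^p(\mathbb{R}^d), h \in L^q(\mathbb{R}^d), 0 < \lambda < d,
\left| \iint_{\mathbb{R}^d \times \mathbb{R}^d}
    \frac{f(x)\,h(y)}{|x-y|^{\lambda}}\,dx\,dy \right|
  \;\le\; C(d,\lambda,p)\,\|f\|_{L^p}\,\|h\|_{L^q},
\qquad \frac{1}{p}+\frac{1}{q}+\frac{\lambda}{d}=2 .
```

In the Coulomb case (λ = 1, d = 3) this is the usual tool for bounding interaction terms by density norms.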
math-ph/0103009 | By the definition of the MSTF functional, it suffices to show that MATH. For this purpose we note that for every non-negative function MATH, on a general measure space, one derives from convexity that MATH . Hence for every MATH and MATH, we have MATH . Since MATH, we arrive at MATH . |
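The convexity step used here can be recalled in generic form. The convex integrand φ below is illustrative and stands in for the paper's elided MATH:

```latex
% For a convex, differentiable \varphi and non-negative \rho, \sigma,
% the tangent-line (first-order) bound holds pointwise:
\varphi(\rho(x)) \;\ge\; \varphi(\sigma(x))
   + \varphi'(\sigma(x))\,\bigl(\rho(x)-\sigma(x)\bigr),
% and integrating over the measure space gives
\int \varphi(\rho) \;\ge\; \int \varphi(\sigma)
   + \int \varphi'(\sigma)\,(\rho-\sigma).
```

Choosing σ suitably and using the normalization of ρ is the typical way such a pointwise bound is turned into a functional inequality.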
math-ph/0103009 | REF immediately implies MATH . |
math-ph/0103009 | We set MATH for an arbitrary MATH. We get from REF that for every MATH there exists a MATH, such that MATH . Hence, this implies MATH . By convexity of MATH, for MATH, and by the equation MATH, we get MATH . Using REF and integrating over the MATH-variable, REF can be written as MATH . Consequently the functional MATH can be estimated from below by MATH for MATH appropriately chosen, where we used the notations MATH and MATH . |
math-ph/0103009 | Let MATH be a minimizing sequence of MATH. REF yields that there exists a constant MATH, such that MATH for all MATH. By the NAME theorem there exists a subsequence, still denoted by MATH, and a MATH, with MATH, such that MATH . Since MATH-norms are weakly lower semicontinuous, we derive for all MATH, MATH, and using NAME 's Lemma we consequently arrive at MATH . Moreover, since MATH for all MATH, we conclude by weak convergence MATH for each MATH. By REF and the dominated convergence theorem we have MATH . In order to show MATH we use the fact that for sequences of functions MATH, MATH, MATH defines a real inner product and consequently a real NAME MATH. Since REF yields MATH for all MATH, we can extract another subsequence MATH, such that MATH . Hence, we conclude MATH and consequently obtain REF . Altogether we have shown MATH . The uniqueness follows from the strict convexity of MATH.
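The compactness part of this argument rests on two standard facts, stated here generically; the exponent p is a placeholder and not taken from the text:

```latex
% (i) Banach-Alaoglu / weak compactness: a norm-bounded sequence
%     (\rho_n) in L^p, 1 < p < \infty, has a weakly convergent
%     subsequence \rho_{n_k} \rightharpoonup \rho.
% (ii) Weak lower semicontinuity of the norm along that subsequence:
\|\rho\|_{L^p} \;\le\; \liminf_{k\to\infty} \|\rho_{n_k}\|_{L^p}.
```

Together these yield a candidate minimizer whose energy does not exceed the infimum, which is the usual skeleton of the direct method in the calculus of variations.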
math-ph/0103009 | Let MATH. Then the same proof as in REF shows that there exists a MATH, with MATH and MATH . Obviously MATH, as a function of MATH, is non-increasing, and the convexity of MATH implies the convexity of MATH. Hence, by the definition of MATH and REF it is clear that MATH is strictly decreasing up to MATH and constant for MATH. Furthermore, we get that MATH for MATH. (Note that MATH would contradict MATH.)
math-ph/0103009 | The proof works analogously to REF , if the variable perpendicular to the field is replaced by the angular momentum quantum numbers. |
math-ph/0103009 | The existence of a minimizing density MATH follows from REF . The uniqueness follows from the strict convexity of MATH in MATH.
math-ph/0103009 | By definition of MATH, we have MATH, so the TF equation reads REF MATH with MATH . We assume MATH. CITE tells us that the potentials MATH and MATH behave like MATH as MATH. Hence, we get that for each MATH as well as MATH . Since we therefore get MATH, we can conclude that there exists a MATH and a MATH, such that MATH, which by REF contradicts MATH.
math-ph/0103009 | If we multiply REF with MATH and integrate over MATH, we get MATH . Note that by multiplication with MATH the MATH-bracket can be dropped, since MATH where MATH. Clearly MATH, so after summing over MATH we arrive at MATH . Moreover, REF tells us MATH which we use, together with symmetry, in order to estimate the right side of REF : MATH . Inserting into REF finally leads to MATH . |
math-ph/0103009 | The lower bound is obvious. For the upper bound we use the relation REF and take the NAME of MATH, that is, MATH . This density is neither in MATH nor in MATH, so we define a cut-off density MATH and use this as comparison density in REF , which leads to MATH . Since MATH we get by NAME 's inequality MATH . After estimating MATH and MATH we see that the minimum of REF as a function of MATH is achieved for MATH, which implies MATH . Next, optimizing the last two terms on the right side of REF with respect to MATH and multiplying with MATH yields the statement of the theorem.
math-ph/0103009 | This is an immediate consequence of REF , which says that MATH . |
math-ph/0103031 | Let MATH and let MATH denote the domain bounded by the curves MATH and MATH, where MATH. We first prove that MATH can be meromorphically continued to the domain MATH, that is, to the right of the domain MATH, by considering REF as the NAME equation MATH where MATH. The latter equation is equivalent to the second-order linear differential equation MATH through the standard transformations MATH where MATH is a point such that MATH. Note that, according to REF, the function MATH is analytic in MATH. Consider REF in the region MATH, where MATH, MATH. According to the assumption of the theorem, the function MATH is analytic in MATH, so that solutions of the linear REF are analytic in MATH. Thus, we can analytically continue MATH onto MATH. If MATH does not vanish in MATH, we get an analytic continuation of MATH onto MATH by REF. However, if MATH for some MATH, then MATH has a first order pole in MATH, which is an isolated singularity of MATH. Thus, we have obtained the required meromorphic continuation of MATH onto MATH. This process can be continued to the domains MATH, etc., in the same fashion. However, now we have to consider the possibility that MATH has a singularity at MATH. Then, MATH and, correspondingly, MATH may have singularities at MATH, where MATH. We need to show that these possible singularities of MATH are first order poles only. Let MATH be the NAME expansions of MATH and MATH near MATH in REF. Comparing the like powers of MATH in REF, we obtain MATH so that the principal part of MATH at MATH is MATH. Combining the expression for MATH with REF, we obtain MATH . Direct computations show that MATH and MATH, so that MATH . Considering now this equation near the point MATH, we obtain MATH . The proof that all the points MATH are either regular points or first order poles of MATH follows by induction from the following two lemmas.
The differential equation MATH where the coefficient MATH has the form MATH possesses two linearly independent solutions MATH that are analytic in a neighborhood of MATH. Here MATH and MATH with MATH. NAME multipliers of REF at MATH are MATH and MATH, so the first terms of MATH and MATH are MATH and MATH respectively. Suppose MATH . Computing MATH, we see that the odd coefficients MATH, so that MATH is of the form REF. Similar arguments work for the second solution MATH. Arguments of REF show that solutions to REF have no branching at MATH, since the NAME multipliers are integers and there are no logarithms. Note that the general solution to REF has the form MATH where MATH. The only nontrivial special solution to REF is proportional to MATH given by REF. In any case, the point MATH is a simple pole of the solution MATH to REF. The following lemma shows that the new coefficient MATH where MATH is a solution to REF, has the form REF with another MATH. Thus, the point MATH is also a simple pole or a regular point of the solution MATH. (Note that MATH is a regular point of MATH if the new MATH.) The new coefficient MATH has the form REF with the new MATH equal to MATH if MATH is given by REF or equal to MATH if MATH is given by REF. Consider, for example, the case when MATH is given by REF. Using expansion REF for MATH, we find that the first odd coefficient MATH in REF is MATH . On the other hand, we see that MATH, where the leading odd term of MATH is MATH. Then MATH where the leading odd term in the square brackets is MATH. Finally, we get the leading odd term of MATH as MATH. So, according to REF, the leading odd term of MATH at MATH is of the order MATH. The leading term of MATH is MATH. So, MATH has the form REF with the new MATH equal to MATH. The case when MATH is given by REF can be considered in a similar way. Consider, for example, the singular point MATH. According to REF, we have MATH at this point.
Solving the corresponding initial value problem for REF, we get the solution MATH of either type REF or REF. In any case, MATH has a first order pole. The corresponding function MATH, according to REF , has a second order pole at MATH with the principal part MATH if MATH is proportional to REF, and is regular at MATH if MATH is given by REF. We can continue these arguments to show that all points MATH are either first order poles or regular points. Thus, we have proved the meromorphic continuation of MATH on MATH. To prove the meromorphic continuation to the left of the domain MATH, let us note that the transformation MATH reduces REF to the equation of the same type MATH which has an analytic solution MATH on MATH. We can now use the previous arguments to continue MATH to the right on the whole complex plane.
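The "no branching" dichotomy invoked above is the standard Frobenius alternative at a regular singular point, recalled here generically (the symbols below are illustrative, not the paper's):

```latex
% Frobenius: near a regular singular point z_0 with indicial roots
% r_1 \ge r_2, a fundamental system has the form
y_1(z) = (z-z_0)^{r_1}\sum_{k\ge 0} a_k\,(z-z_0)^k, \qquad
y_2(z) = C\,y_1(z)\log(z-z_0) + (z-z_0)^{r_2}\sum_{k\ge 0} b_k\,(z-z_0)^k .
% When r_1 - r_2 \in \mathbb{Z} the constant C may vanish; if it does,
% and the roots are integers, both solutions are single-valued,
% i.e. there is no branching at z_0.
```

This is exactly the situation exploited in the lemmas: integer multipliers plus the absence of the logarithmic term force the continued solution to be meromorphic with at worst simple poles.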
math-ph/0103031 | The substitution MATH reduces REF to MATH . Comparing leading coefficients and taking into account MATH, we get MATH. Taking into account REF, the latter equation can now be rewritten as MATH . To expand MATH in powers of MATH we need the free term of REF to be of order MATH or less. Thus, we obtain MATH, so that the free term is of the order MATH. REF can now be rewritten as MATH . It is clear that the expression on the left-hand side is the dominant term of the latter equation and that the substitution MATH, satisfying REF, defines the coefficients MATH uniquely.
math-ph/0103031 | Our main idea is to reduce the considered DDE to an integro-differential equation (IDE) and to show that the latter equation can be solved by successive iterations in a proper sectorial neighborhood of infinity. Substituting MATH, we obtain REF for MATH. The inverse NAME transform MATH, applied to REF, yields (see, for example, CITE) MATH where MATH and MATH. After simple algebra, the latter equation becomes MATH where MATH. Note that MATH is a meromorphic function with simple poles at the points MATH, where MATH, and that MATH has no more than linear growth in any non-vertical direction MATH. Separating the linear part MATH of MATH along the positive real axis, we can rewrite REF as MATH where the function MATH is bounded on MATH. Applying now the NAME transform MATH to REF, we reduce REF to the IDE MATH where MATH, MATH and the convolution is defined by MATH with a sufficiently large MATH. Considering REF as a perturbed linear equation MATH, we rewrite the former as MATH where the nonlinear operator MATH denotes the right-hand side of REF and the contour of integration MATH is to be specified below. REF can be rewritten in the operator form as MATH. Let us assume for a while that MATH and consider the sector MATH. According to REF, this choice of MATH allows us to find a proper closed subsector MATH that contains the right half-plane. We want to solve the IDE REF in a sufficiently remote part of the sector MATH by successive approximations. In order to formulate the statement more precisely we need to introduce the following notations. Let MATH be the image of the sector MATH under the transformation MATH. Let MATH be a sufficiently remote point on the ray MATH and let MATH, where MATH denotes the parallel shift of MATH so that the vertex of MATH is shifted to MATH. For every MATH we define a contour MATH as a ray emanating from MATH and such that: MATH if MATH; MATH if MATH; and MATH if MATH.
Then the region MATH and the contour MATH are the images of MATH and MATH under the transformation MATH. Let MATH, MATH and MATH. We will show that the solution to REF is given by MATH where the series converges absolutely and uniformly in MATH for sufficiently large MATH. This can be done by introducing the NAME space MATH of functions MATH, such that MATH is analytic on MATH and satisfies MATH there with some constant MATH. (Note that MATH depends on MATH.) The norm of MATH is MATH. According to REF from CITE, the integral operator MATH is a bounded linear operator, where MATH is proportional to MATH. We start to study the nonlinear operator MATH by considering the convolution MATH where, according to CITE, MATH . Here MATH denotes the logarithmic derivative of the NAME. It is clear that MATH is a meromorphic function with double poles at MATH. The asymptotic expansion MATH where MATH are NAME numbers, follows from the NAME formula (see CITE). Combining the latter facts, we obtain the estimate MATH where the constant MATH depends only on the sector MATH. If MATH then MATH and MATH where MATH does not depend on MATH and MATH. Consider first the case where MATH belongs to the right half-plane MATH, where the number MATH is chosen so that MATH. Note that MATH does not depend on MATH. Setting MATH in REF, we obtain MATH . Note that we are integrating over the boundary of MATH and thus MATH. The fact that MATH is analytic in MATH follows immediately from the properties of MATH and MATH. Utilizing REF and the fact that MATH, we obtain MATH where MATH and MATH with MATH. Let MATH be the latter integral and let MATH denote its integrand, which has simple poles at the points MATH and MATH in the upper half-plane. Computing MATH via the residues of MATH in the upper half-plane, we obtain MATH . Thus MATH, since MATH can be taken greater than MATH. Consider now the half-plane MATH that is obtained by rotating MATH by the angle MATH, where MATH is chosen in such a way that MATH.
Define now another convolution MATH by REF, where the contour of integration is the boundary of MATH. Using the same arguments as above, we can obtain the estimate MATH where MATH and MATH continuously depends on MATH. However, MATH is an analytic continuation of MATH, since the functions coincide on MATH. Taking MATH, where MATH, we complete the proof of the proposition. To complete the proof of the theorem we use the standard technique to show the convergence of the iterations REF. Using properties of the NAME transform and of the operator MATH, one can show that MATH. Let MATH, and let us prove by induction that MATH if MATH is sufficiently large. Indeed, according to the estimate of the convolution, MATH where the induction assumption MATH was used to estimate the nonlinear term of MATH. It remains to choose MATH so that MATH for MATH to complete the proof of the theorem for the sector MATH. Let us now consider the general case MATH. The sector MATH is given by MATH if MATH and by MATH if MATH. Note that the opening of the sector MATH is greater than MATH for any MATH and that MATH contains the right half-plane if MATH. Let us now choose a proper closed subsector MATH of opening greater than MATH, and let MATH be the bisector of MATH, namely MATH. Let MATH define the NAME transform along the ray MATH. The contour for the corresponding inverse NAME transform, as well as for the corresponding convolution MATH, is a straight line perpendicular to MATH. Therefore, we can use our previous arguments to show the uniform and absolute convergence of the iterations REF in a properly constructed MATH. In the same fashion the theorem can be proven for any nonempty sector MATH, MATH. Recall that the function MATH has poles on the imaginary axis and, therefore, MATH is not defined for MATH. However, we can define MATH for MATH such that MATH and repeat the previous arguments for the sectors MATH, MATH.
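The convergence of the successive approximations follows the standard contraction pattern, recalled here in generic form (the operator T, constant k, and starting point x_0 below are illustrative, not the paper's objects):

```latex
% If T maps a closed ball B of a Banach space into itself and is a
% contraction there, i.e.
\|T u - T v\| \;\le\; k\,\|u - v\|, \qquad 0 \le k < 1,
% then the iterates x_{n+1} = T x_n satisfy
\|x_{n+1} - x_n\| \;\le\; k^{\,n}\,\|x_1 - x_0\|,
% so the series \sum_n (x_{n+1}-x_n) converges absolutely and the limit
% is the unique fixed point x^* = T x^* in B.
```

In the proof above, the role of the contraction constant is played by the bound on the convolution term, which can be made small by pushing the vertex of the sector sufficiently far out.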
math-ph/0103034 | Indeed, using REF we have: MATH so: MATH . Since MATH and MATH commute with each other, their coproduct does not change.