"""websocket cmd client for wssrv.py example.""" import argparse import asyncio import signal import sys import aiohttp async def start_client(loop: asyncio.AbstractEventLoop, url: str) -> None: name = input("Please enter your name: ") # input reader def stdin_callback() -> None: line = sys.stdin.buffer.readline().decode("utf-8") if not line: loop.stop() else: ws.send_str(name + ": " + line) loop.add_reader(sys.stdin.fileno(), stdin_callback) async def dispatch() -> None: while True: msg = await ws.receive() if msg.type == aiohttp.WSMsgType.TEXT: print("Text: ", msg.data.strip()) elif msg.type == aiohttp.WSMsgType.BINARY: print("Binary: ", msg.data) elif msg.type == aiohttp.WSMsgType.PING: await ws.pong() elif msg.type == aiohttp.WSMsgType.PONG: print("Pong received") else: if msg.type == aiohttp.WSMsgType.CLOSE: await ws.close() elif msg.type == aiohttp.WSMsgType.ERROR: print("Error during receive %s" % ws.exception()) elif msg.type == aiohttp.WSMsgType.CLOSED: pass break # send request async with aiohttp.ClientSession() as client: async with client.ws_connect(url, autoclose=False, autoping=False) as ws: await dispatch() ARGS = argparse.ArgumentParser( description="websocket console client for wssrv.py example." ) ARGS.add_argument( "--host", action="store", dest="host", default="127.0.0.1", help="Host name" ) ARGS.add_argument( "--port", action="store", dest="port", default=8080, type=int, help="Port number" ) if __name__ == "__main__": args = ARGS.parse_args() if ":" in args.host: args.host, port = args.host.split(":", 1) args.port = int(port) url = f"http://{args.host}:{args.port}" loop = asyncio.get_event_loop() loop.add_signal_handler(signal.SIGINT, loop.stop) loop.create_task(start_client(loop, url)) loop.run_forever()
{ "redpajama_set_name": "RedPajamaGithub" }
{"url":"https:\/\/math.stackexchange.com\/questions\/3087257\/tropical-algebra-and-tropical-semiring","text":"Tropical algebra and tropical semiring\n\nI want to know what is the difference between Tropical algebra and min-plus algebra and the difference between Tropical semiring and semiring . I need Reference explains these differences and tropical algebra please.\n\n\u2022 Tropical algebra is commonly encountered as either $(\\min, +)$ or $(\\max, +)$ algebra (the two are isomorphic). The $(\\min, +)$ semiring is a specific semiring where $\\min$ is acting as binary \"addition\" and $+$ is acting as \"multiplication\". For a reference you can check Butkovic's book or this expository article by Speyer and Sturmfels. \u2013\u00a0VHarisop Jan 25 at 16:20","date":"2019-05-22 18:38:38","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5429962277412415, \"perplexity\": 706.8742864338304}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-22\/segments\/1558232256948.48\/warc\/CC-MAIN-20190522183240-20190522205240-00067.warc.gz\"}"}
An alternative to your solution of using the queue, which I think might be simpler, would be to use node-red-contrib-semaphore.

The node readme is very basic and has no example flow. @Colin, I'm struggling to understand how this node would work in a flow environment?

I will post a flow in the morning; it is late here.

Here is an example of using node-red-contrib-semaphore. A semaphore is like the token used on a single-track railway to ensure only one train is on the line at a time. The semaphore take node allows the semaphore to be taken or acquired; once that happens, messages will be queued until the semaphore is released in the semaphore leave node. The effect is that the next message is not allowed to pass until the protected task completes. One thing you have to make sure of is that every exit from the task sends a message (and only one) to the semaphore leave node, otherwise the flow will grind to a halt. It may therefore be a good idea to include a catch node, as I have done, in case the task generates an error and does not send any output.

Thanks Colin, I was almost there! And yes, it does seem easier to use than node-red-contrib-simple-message-queue, so I'll probably change to using it.
{ "redpajama_set_name": "RedPajamaC4" }
Q: Find the numerical index of a list of characters in ascending alphabetical order

I am writing a cryptography program that does columnar transposition. A person enters a key as a string, e.g. key = 'ZEBRAS'. I need to determine the numerical index corresponding to each letter, in ascending alphabetical order. For example:

Z E B R A S
6 3 2 4 1 5

A is the highest, so its rank is 1; Z is the lowest, so its rank is 6. I want to store this value in an appropriate data structure, so that when I go to encrypt a message, I will read off the column corresponding to position 1 first, and 6 last.

A: Create a dictionary from the sorted, unique group of letters and the indices from 1 to the length of the string (you need uniqueness, or several indexes will be generated if there are several occurrences of a letter; as shown below, I have added an S to the word):

s = "ZEBRASS"
us = set(s)
sl = dict(zip(sorted(us), range(1, len(us) + 1)))
print(sl)

sl contains:

{'Z': 6, 'A': 1, 'E': 3, 'R': 4, 'S': 5, 'B': 2}

To "encrypt", apply the dictionary to your string:

sc = [sl[c] for c in s]
print(sc)

result:

[6, 3, 2, 4, 1, 5, 5]

A: Thank you for the submissions. I am looking for multiple repeated letters to count as a different score, so "ZEBRASAB" would rank each repeated letter separately (the intended result was shown in a picture, not reproduced here). What I did was essentially make a class, using the plaintext of the key to find the index of the letters in the string as well as the alphabetic weight of the letters. Then I sorted by initial weight and position, and adjusted the final weight to be heavier for letters in later positions in the string.
import alpha  # external helper module (not shown) that maps letters to weights

class KeyChar(object):
    def __init__(self, l, pos, w):
        self.letter = l
        self.position = pos
        self.weight = w

    def setWeight(self, w):
        self.weight = w

def getRawKeyList(key):
    key_list = []
    i = 0
    for c in key:
        weight = alpha.giveAlphabet('u')[c]
        char = KeyChar(c, i, weight)
        i = i + 1
        key_list.append(char)
    return key_list

def adjustKeyWeights(key_obj_list):
    # first sort based off weight, breaking ties by position
    kchar_sorted = sorted(key_obj_list, key=lambda kchar: (kchar.weight, kchar.position))
    # readjust weights based on the sorted order
    i = 0
    for k in kchar_sorted:
        k.setWeight(i)
        i = i + 1
    return kchar_sorted

# return weighted key sorted by letter weight (smallest letter first)
def getWeightedKeyList(key):
    k_adjusted = adjustKeyWeights(getRawKeyList(key))
    final_key = sorted(k_adjusted, key=lambda kchar: kchar.weight)
    key_list = [k.letter for k in final_key]  # replaces an undefined getKeyAsList helper
    return final_key, key_list

def main():
    key = 'ZEBRASAB'
    key_obj_list, key_list = getWeightedKeyList(key.upper())
    # DEBUGGING / TESTING
    for k in key_obj_list:
        print(k.letter, k.position, k.weight)

main()

A: Create a temp list to store the sorted word, and extract the positions from the temp list. Below is sample code:

>>> word = 'ZEBRAS'
>>> sorted_word = sorted(word)
>>> sorted_word
['A', 'B', 'E', 'R', 'S', 'Z']
>>> index_string = [sorted_word.index(a) + 1 for a in word]
>>> index_string
[6, 3, 2, 4, 1, 5]
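For the behaviour the second answer asks for (repeated key letters each getting their own rank, earlier occurrences first), a compact alternative sketch; key_order is a hypothetical helper name, not taken from any answer above:

```python
def key_order(key: str) -> list[int]:
    # Sort positions by (letter, position): ties between repeated letters
    # are broken by where they appear in the key, so every occurrence
    # receives a distinct rank.
    order = sorted(range(len(key)), key=lambda i: (key[i], i))
    ranks = [0] * len(key)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

print(key_order("ZEBRAS"))    # [6, 3, 2, 4, 1, 5]
print(key_order("ZEBRASAB"))  # repeated A and B get distinct ranks
```

Unlike the sorted_word.index approach, this never assigns the same rank twice, so it also works for keys with duplicate letters.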
{ "redpajama_set_name": "RedPajamaStackExchange" }
Microsoft products key

How to Find Microsoft Office Product Keys
2019-02-25
Sunday, February 24, 2019 11:54:43 PM, Adam

How do I find my product key on my Microsoft account? If you fail to connect the remote computer with ProduKey, read the instructions in the following blog post:. Exceed expectations: imagine and investigate your information in new and natural ways with a fresh user interface. When the process is finished, the status appears in the Action Status column of the dialog box.

Product key types

Key Type        Description
Not Applicable  No key is needed to install this product.

You can select a recommended product key or a product key from the All Product Keys list. When it's turned on, the odd and even rows are displayed in different colors, to make it easier to read a single line. Note: If you bought a new, unused Office 365 product key card to renew an Office 365 subscription or to buy your trial, you can enter that product key if Office prompts you for it. In order to use this option, you must have Administrator privileges on all computers on your local network. Be aware that the computer names will appear a few seconds after the scan of the product keys finishes. Most of these are new features only for previous Office users; Office 365 subscribers have had the same available for a long time. If you would above all like to activate your Microsoft Office, you may try using Re Loader Activator. This latest version has a remarkable style and lets us manage everything in one place. It would be better, though, if it could be higher.

Microsoft Office 365 Activation Key + Crack Full Version Download

The sort of key entered in the item determines the activation system. To export your keys, simply click on the Export all keys link at the far right of the Product Keys page. It is reliable, and its reliability is likely to be constant.
One key each has already been claimed for Visio Standard 2010 and Visio Premium 2010, and both have four keys remaining. Visual Studio subscriptions typically include five product keys for current versions of Windows and Office products, and three keys for older versions. It was packaged by Microsoft Inc. The Windows option should be unmarked, but the offer options should be marked. For most subscribers, this provides more than enough activations to meet their needs. The Windows versions and other software for which it offers product keys are Adobe programs, Microsoft Office 2013, Microsoft enterprise products, Nero, Microsoft Office 2010, video games, and all other versions of Microsoft Office, Microsoft Office 365 included. It depends on what you're trying to do. A core-level user should have the ability to use Microsoft Office Excel 2010 to make and edit professional-looking spreadsheets for a variety of purposes and situations. Your satisfaction is our primary goal. Product Key for Microsoft Office 2016 comes with many exciting features. Some of these products require product keys during installation, and some of those require activation.

Free Microsoft Office 2016 Product Key 2019 100% Working

The acceptance of Microsoft on Mac may not be unconnected with its wide acceptance among many people. Unfortunately, there are many dishonest sellers who offer stolen, abused, or otherwise unauthorized Microsoft product keys for sale.

License

This utility is released as freeware. Most product keys also allow multiple activations of the product for each key.

Microsoft Office 2019 Latest Version

If you genuinely need Office, you may want to receive one or the other. It can be used by an enterprise or a single user. We want to work with you to improve access to valuable Microsoft products by offering a great deal on new product keys.
ProduKey

Our stuff anytime, anyplace: sign in to Office and use OneDrive to effortlessly get to your current records on any gadget, with seamless integration. In support of this commitment, Microsoft has implemented daily key claim limits for Visual Studio subscriptions. Other services delivered over the internet with Microsoft Office are also granted access through Office 365. This key generator software is an online, software-based tool for generating keys. If you want to view the product key information on another computer, or in another operating system on the same computer, use the command-line options below.

Where to enter your Office product key

If you specify it with a save command-line option, an error message won't be displayed if the save action fails. If approved, product keys will be accessible in. Step 3: Locate the Office one-time purchase or individual Office app, and then select Install Office to view your product key (this doesn't actually install Office). To get the benefit of all these features, you need to have a Microsoft Office 2016 Product Key, and here it is for free. The majority of computer users utilize the Microsoft Windows operating system.

Can I view my product key in Office?

If you want to run ProduKey without the translation, simply rename the language file or move it to another folder. You can also enter the product key at. Visual Studio subscribers are licensed to demonstrate their applications to end users. Any Office 365 user can discover the appropriate product key if he genuinely wishes to receive it. You can claim a key from the download page for the product, or you can search for the key you need on the page. A new search tool gives users the capacity to look up all of the available commands quickly. Download Key Finder-Thing and enjoy it.
With endless capabilities, we can enjoy writing documents in Word, making outstanding presentations in PowerPoint, creating tables in Excel, receiving emails in Outlook, and enjoying live conversations with the latest Skype for Business.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
Q: DAX Sum of Average to Rank

I have a pivot table that pulls a rate by person from Power Pivot. The rate is a calculated field, not a column, and I need the Grand Total to show as a SUM of the rates, not an average. This is purely for ranking purposes, to show who has the highest overall rates down to the lowest. This is what it looks like with the field I want:

Person   Jan-2018   Feb-2018   Mar-2018   GrandTotal   [DesiredField]
A        80%        71%        82%        77.6%        233%
B        76%        [blank]    85%        80.5%        161%
C        82%        85%        79%        82%          246%

So person C would be at the top, followed by A, then B. Due to OLAP limitations I can't create a calculated field, and the 'Summarize Values By' option is greyed out. If there is a better workaround, please let me know.

A: Since there's no data model given, I assume you have a measure (calculated field), some persons, and a Date dimension with a 'Year-Month' column. You could try the SUMX function; it iterates over a given table of values. In this case, you need to sum up the calculated [Rate] for each 'Year-Month'. Example:

SumRatePerMonth := SUMX( VALUES('DimDate'[Year-Month]), [Rate] )

The formula iterates over the [Year-Month] column in the DimDate table, calculates the [Rate] for each [Year-Month], and sums the results.

Note: Like I said, I assume you have some sort of Date dimension from which you pull the Year-Month column. If my assumption is wrong, please provide us with a sample or screenshot of your data model.
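The SUMX semantics in the answer (evaluate the [Rate] measure once per Year-Month, then add the results instead of averaging them) can be sketched outside DAX. A pure-Python illustration with made-up numbers, not tied to the asker's actual model:

```python
from collections import defaultdict

# Hypothetical per-person, per-month counts from which a monthly rate
# measure would be computed (hits / total for that month).
rows = [
    ("A", "Jan-2018", 80, 100),
    ("A", "Feb-2018", 71, 100),
    ("B", "Jan-2018", 76, 100),
]

# SUMX analogue: compute the rate for each (person, month) and sum the
# rates per person, rather than letting a grand total average the rows.
sum_rate = defaultdict(float)
for person, month, hits, total in rows:
    sum_rate[person] += hits / total

print(dict(sum_rate))  # A accumulates 0.80 + 0.71
```

The key point, as in the DAX measure, is that the rate is evaluated per iterated month and only then aggregated, which is what makes the grand total a sum of rates.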
{ "redpajama_set_name": "RedPajamaStackExchange" }
Q: Drop-down navigation menu being overlapped by lower div Stack, I'm having difficulty creating a good drop down navigation menu using only css. The menu is simple, it needs to only have a few things in it, and only needs to drop down (no need to go down and over etc). The menu is somewhat working, but whenever I hover over it to drop it down, the menu gets overlapped by the wrapper div directly below the header div where the menu is located. I've tried setting the z-index of the drop down menu to like 20000 and it is still being overlapped. Here is a direct link to the test page I am working on: http://www.lolbrary.com/workspace/dropdownheader.php Any ideas? Thanky, billmalarky A: In #fullheaderbar and #profilemenu you need to remove overflow:hidden; and in .profile-menu ul you need to change position:relative to position:absolute; then tweak from there :)
{ "redpajama_set_name": "RedPajamaStackExchange" }
\section{Introduction}\label{sec:Intro} {\bf Anomalies} An anomaly is a quantum effect whereby a classically conserved current $J^\mu$ ceases to enjoy its conservation, $\nabla_\mu \langle J^\mu \rangle \neq 0 $ \cite{Weinberg:1996kr,Bertlmann:1996xk,Bilal:2008qx,Harvey:2005it}. To date, a multitude of different anomalies have been discovered that can be classified into two main categories: local (gauge) and global anomalies. A gauge anomaly corresponds to a gauged symmetry (and current) and the consistency of a quantum field theory requires this anomaly to vanish. While global anomalies are permitted, their existence still imposes stringent conditions on the structure of quantum field theories due to the anomaly matching condition discovered by 't Hooft \cite{'tHooft:1979bh}. The condition states that a result of an anomaly calculation must be invariant under the renormalisation group flow and is thus independent of whether it is computed in the UV microscopic theory or an IR effective theory. Of particular importance to quantum field theory have been the chiral anomalies, which are present in theories with massless fermions. The values of the current divergences resulting from these anomalies are known to be one-loop exact. From the point of view of the topological structure of gauge theories, one can suspect that this should be true very generically due to the fact that the anomaly is related to the topologically protected index of the Dirac operator. Perturbatively, non-renormalisation of the one-loop anomalies was established in \cite{Adler:1969gk,Bell:1969ts,PhysRev.182.1517}. In a typical four dimensional chiral theory, there are two classically conserved currents: the axial $J^\mu_5$ (associated with the $\gamma_5$ Dirac matrix) and the vector current $J^\mu$. 
By including quantum corrections, their Ward identities can be written as \begin{equation} \begin{aligned} \nabla_\mu \left\langle J^\mu_5 \right\rangle &= \epsilon^{\mu\nu\rho\sigma}\left(\kappa {F}_{A,\mu\nu} {F}_{A,\rho\sigma} + \gamma {F}_{V,\mu\nu} {F}_{V,\rho\sigma} + \lambda {R}^{\alpha_1}_{\;\;\alpha_2 \mu\nu} {R}^{\alpha_2}_{\;\;\alpha_1 \rho\sigma} \right),\\ \nabla_\mu \left\langle J^\mu \right\rangle &= 0, \end{aligned} \label{eqn:non-conserved-current} \end{equation} where $F_{A,\mu\nu}$, $F_{V,\mu\nu}$ are the field strengths associated with the axial and the vector gauge fields. $R^{\alpha}_{\;\;\beta\mu\nu}$ is the Riemann curvature tensor of the curved manifold on which the four dimensional field theory is defined, and $\kappa$, $\gamma$ and $\lambda$ are the three Chern-Simons coupling constants. While the axial current conservation is violated by quantum effects, the vector current remains conserved. Among other works, various arguments in favour of non-renormalisation of one-loop anomalies have been presented in \cite{Fukushima:2008xe,Newman:2005as,Kharzeev:2009pj,Jensen:2012jy,Banerjee:2012iz,Nair:2011mk,Sadofyev:2010pr,Sadofyev:2010is}. The situation is much less clear when, as in \cite{Jensen:2013vta}, one considers the contributions of mixed, gauge-global anomalies. In such cases, it was shown in \cite{Jensen:2013vta} that one should expect anomalous currents to receive radiative corrections at higher loops. The connection between this work and mixed, gauge-global anomalies will be elaborated upon below. A further set of open questions related to the non-renormalisation of anomalies enters the stage from the possibility of considering non-perturbative effects in QFT. 
From a historically more unconventional point of view, anomalies have recently also been studied through the (macroscopic) hydrodynamic entropy current analysis \cite{Son:2009tf,Neiman:2010zi}.\footnote{For a recent discussion of anomalies from the point of view of UV divergences in classical physics and its connection to the breakdown of the time reversal symmetry, see \cite{Polonyi:2013lma,Polonyi:2015tla}.} The effects of gravitational anomalies on the hydrodynamic gradient expansion were then studied by using the Euclidean partition function on a cone in \cite{Jensen:2012kj}. Macroscopic transport properties associated with anomalous conservation laws have now been analysed in detail (at least theoretically) both at non-zero temperature and density. To date, the most prominent and well-understood anomaly-induced transport phenomena have been associated with the chiral magnetic effect \cite{Kharzeev:2007tn,Fukushima:2008xe,Kharzeev:2009pj} and the chiral vortical effect \cite{Son:2009tf,Kharzeev:2010gr}. \\ {\bf Chiral conductivities in field theory} In the low-energy hydrodynamic limit, we expect that to leading order in the gradient expansion of relevant fields, the expectation values of these currents can be expressed in the form of Ohm's law. The corresponding conductivities can then be defined in the following way: If a chiral system is perturbed by a small external magnetic field $B^\mu = (1/2)\epsilon^{\mu\nu\rho\sigma} u_\nu F_{\rho\sigma}$ and a spacetime vortex $\omega^\mu = \epsilon^{\mu\nu\rho\sigma}u_\nu \nabla_\rho u_\sigma$, where $u^\mu$ is the fluid velocity vector in the laboratory frame, then the expectation values of the two currents change by $\langle \delta J^\mu\rangle$ and $\langle \delta J^\mu_5\rangle$. Note that unlike in Eq. \eqref{eqn:non-conserved-current}, both the axial and vector current conservation are now broken by the induced anomalies. 
To leading (dissipationless) order, the change can be expressed in terms of the conductivity matrix \begin{align}\label{CondDef} \left( \begin{array}{c} \langle \delta J^\mu \rangle \\ \langle \delta J^\mu_5 \rangle \end{array} \right) = \left( \begin{array}{cc} \sigma_{JB} & \sigma_{J\omega} \\ \sigma_{J_5 B} & \sigma_{J_5\omega} \end{array} \right) \left( \begin{array}{c} B^\mu \\ \omega^\mu \end{array} \right), \end{align} where $\sigma_{JB}$ is known as the {\em chiral magnetic conductivity}, $\sigma_{J\omega}$ as the {\em chiral vortical conductivity} and $\sigma_{J_5 B}$ as the {\em chiral separation conductivity}. The signature of anomalies can thus be traced all the way to the extreme IR physics and analysed by the linear response theory. This will be the subject studied in this work. By following a set of rules postulated in \cite{Loganayagam:2012zg} (see also \cite{Jensen:2013rga}), a convenient way to express the anomalous conductivities is in terms of the anomaly polynomials. We briefly review these rules in Appendix \ref{app:anomP}. They allow one to compute the anomalous conductivities from the structure of the anomaly polynomials in arbitrary (even) dimensions, independently of the value of the coupling constant \cite{Jensen:2012kj,Azeyanagi:2013xea,Loganayagam:2012zg,Jensen:2013rga}. In the IR limit, we may assume that the stress-energy tensor and the charge current can be expressed in a hydrodynamic gradient expansion \cite{Kovtun:2012rj,Baier:2007ix,Romatschke:2009kr,Grozdanov:2015kqa}. 
The constitutive relations for a fluid with broken parity, in the Landau frame, are \cite{Erdmenger:2008rm,Son:2009tf,Banerjee:2008th,Torabian:2009qk} \begin{equation} \begin{aligned} T^{\mu\nu}&= \varepsilon u^\mu u^\nu + P \Delta^{\mu\nu} - \eta \sigma^{\mu\nu} - \zeta \Delta^{\mu\nu} \nabla_\lambda u^\lambda+ \mathcal{O}\left(\partial^2\right),\\ J^\mu_{I} &= n_{I} u^\mu + \sigma_{I} \Delta^{\mu\nu} \left( u^\rho F_{I,\rho\nu} - T\, \nabla_\nu \left( \frac{\mu_{I}}{T}\right)\right) +\xi_{I,B} B_{I}^\mu + \xi_{I,\omega} \omega^\mu + \mathcal{O}\left(\partial^2\right), \end{aligned} \end{equation} where the index $I=\{A,V\}$ labels the axial and the vector currents ($J^\mu_5 = J^\mu_A$, $J^\mu = J^\mu_V$) and their respective transport coefficients. In the stress-energy tensor, $\varepsilon$, $P$, $\eta$ and $\zeta$ are the energy density, pressure, shear viscosity and bulk viscosity. Furthermore, $n$, $\sigma$, $T$, $\mu$ and $F_{\mu\nu}$ are the charge density, charge conductivity, temperature, chemical potential and the gauge field strength tensor. The vector field $u^\mu$ is the velocity field of the fluid, the transverse projector (to the fluid flow) $\Delta^{\mu\nu}$ is defined as $\Delta^{\mu\nu} = u^\mu u^\nu + g^{\mu\nu}$, with $g^{\mu\nu}$ the metric tensor and $\sigma^{\mu\nu}$ the symmetric, transverse and traceless relativistic shear tensor composed of $\nabla_\mu u_\nu$. Plugging the above constitutive relations into the anomalous Ward identities, one can show that the anomalous conductivities are controlled by the transport coefficients $\xi_B$ and $\xi_\omega$ (see e.g. \cite{Landsteiner:2011iq}). It was shown in \cite{Son:2009tf,Neiman:2010zi} that by demanding the non-negativity of local entropy production (and similarly, by using a Euclidean effective action in \cite{Jensen:2012kj,Jensen:2012jh,Banerjee:2012iz})\footnote{Note that the analysis in \cite{Son:2009tf,Jensen:2012jh,Banerjee:2012iz} only involves the axial gauge field. 
However, it is straightforward to generalise their results to the case with both the axial and the vector current.}, the anomalous chiral separation conductivity $\sigma_{J_5B}$ and the chiral magnetic conductivity $\sigma_{JB}$ become fixed by the anomaly coefficient $\gamma$: \begin{align}\label{s1} \sigma_{J_5 B} = -2 \gamma \mu, && \sigma_{J B} = -2 \gamma \mu_5. \end{align} On the other hand, the transport coefficient $\sigma_{J_5\omega} $ could not be completely determined by the anomaly and thermodynamic quantities. Its form contains an additional constant term, \begin{align}\label{s2} \sigma_{J_5\omega} = \kappa \mu^2 + \tilde{c} T^2, \end{align} where $\tilde{c}$ is some yet-undetermined constant, which could run along the renormalisation group flow. By using perturbative field theory methods \cite{Landsteiner:2011cp,Loganayagam:2012pz} and simple holographic models \cite{Landsteiner:2011iq,Azeyanagi:2013xea}, it was then suggested that $\tilde{c}$ could be fixed by the gravitational anomaly coefficients, $\lambda$.\footnote{We note that in the presence of chiral gravitinos, the relation between $\tilde c$ and the gravitational anomaly coefficient $\lambda$ is different from those studied in this work \cite{Loganayagam:2012pz,Chowdhury:2015pba}.} However, the gravitational anomaly enters the equations of motion \eqref{eqn:non-conserved-current} with terms at fourth order in the derivative expansion while $\xi_\omega$ and $\xi_B$ enter the equation of motion at second order. Thus, if one analysed the hydrodynamic expansion in terms of the na\"{i}ve gradient expansion with all fluctuations of the same order, it would seem to be impossible to express $\tilde{c}$ in terms of the gravitational anomaly. The above paradox was resolved in \cite{Jensen:2012kj}. There, the theory was placed on a product space of a cone and a two dimensional manifold. The deficit angle $\delta$ was defined along the thermal cycle, $\beta$, as $\beta \sim \beta + 2\pi (1+\delta) $. 
Demanding continuity of one-point functions in the vicinity of $\delta = 0$ then fixed the unknown coefficient $\tilde{c}$ in terms of the gravitational anomaly coefficient $\lambda$ (the gradient expansion breaks down). The above construction can be extended to theories outside the hydrodynamic regime in arbitrary even dimensions and in the presence of other types of anomalies, so long as the theories only involve background gauge fields and a background metric \cite{Jensen:2013rga}. In the presence of dynamical gauge fields, the anomalous transport coefficients do not seem to remain protected from radiative corrections. This is consistent with the fact that the chiral vortical conductivity $\sigma_{J\omega}$, given otherwise by the thermal field theory result \begin{align}\label{s3} \sigma_{J \omega} = 2\gamma \mu_5 \mu, \end{align} was also argued to get renormalised in theories with dynamical gauge fields by \cite{Golkar:2012kb,Hou:2012xg,Gorbar:2013upa}.\footnote{For a discussion of temperature dependence and thermal corrections to the chiral vortical conductivity in more complicated systems, see Ref. \cite{Kalaydzhyan:2014bfa}.} Furthermore, these various pieces of information regarding the renormalisation of the chiral conductivities are consistent with the findings of \cite{Jensen:2013vta} (already noted above) and lattice results \cite{Huang:1989kg,Yamamoto:2011ks,Yamamoto:2011gk,Fukushima:2010zza}: In theories with dynamical gauge fields and mixed, gauge-global anomalies, chiral conductivities renormalise. \\ {\bf Holography and universality of transport coefficients} Certain classes of strongly interacting theories at finite temperature and chemical potential can be formulated using gauge-gravity (holographic) duality. 
Thus, in comparison with the weakly coupled regime accessible to perturbative field theory calculations, holography can be seen as a convenient tool to investigate chiral transport properties at the opposite end of the coupling constant scale. Within holography, anomalous hydrodynamic transport was first studied in the context of fluid-gravity correspondence \cite{Bhattacharyya:2008jc} by \cite{Kalaydzhyan:2011vx,Erdmenger:2008rm,Banerjee:2008th} who added the Chern-Simons gauge field to the bulk. The two DC conductivities associated specifically with chiral magnetic and chiral vortical effects were then computed in the five-dimensional anti-de Sitter Reissner-Nordstr\"om black brane background in \cite{Gynther:2010ed,Amado:2011zx,Landsteiner:2011iq}. The results were extended to arbitrary dimensions in \cite{Azeyanagi:2013xea}. The work of \cite{Azeyanagi:2013xea} showed that these transport coefficients could be extracted from first-order differential equations (as opposed to the usual second-order wave equations in the bulk) due to the existence of a {\em conserved current} along the holographic radial direction. In a similar manner, this occurs in computations of the shear viscosity \cite{Policastro:2001yc,Kovtun:2004de} and other DC conductivities \cite{Iqbal:2008by,Donos:2015gia}. We will refer to this situation as the case when the membrane paradigm is applicable (see Fig. \ref{fig_membranes}). The existence of the membrane paradigm makes the calculation of chiral conductivities significantly simpler. Reassuringly, the holographic results for the chiral conductivities agree with the results obtained from conventional QFT methods described above and stated in Eqs. \eqref{s1}, \eqref{s2} and \eqref{s3} \cite{Landsteiner:2011cp,Loganayagam:2012zg,Loganayagam:2012pz}. More recently, these calculations were generalised to cases of non-conformal holography (in which $T^\mu_{\;\;\mu} \neq 0$), giving the same results \cite{Gursoy:2014ela,Gursoy:2014boa}. 
A way to think of such holographic setups is as geometric realisations of the renormalisation group flows. \begin{figure}[t] \centering \includegraphics[width=1\textwidth]{membranes1.pdf} \caption{A schematic representation of the membrane paradigm: The image on the left-hand-side corresponds to a holographic calculation (without the membrane paradigm) in which one has to solve for the bulk fields all along the $D$ dimensional bulk. On the right-hand-side (the membrane paradigm case), the field theory observable of interest can be read off from a conserved current (along the radial coordinate). Hence, we only need information about its dynamics at the horizon and the AdS boundary. The membrane paradigm enables us to consider independent {\em effective theories} at the two surfaces with $(D-1)$ dimensions. While the UV effective theory directly sources the dual field theory, it is the IR theory on the horizon that fixes the values of dual correlators in terms of the bulk black hole parameters. As in this paper, such a structure may enable us to make much more general (universal) claims about field theory observables than if the calculation depended on the details of the full $D$--dimensional dynamics.} \label{fig_membranes} \end{figure} Universal holographic statements, most prominent among them being the ratio of shear viscosity to entropy density, $\eta / s = \hbar / (4 \pi k_B)$ \cite{Policastro:2001yc,Kovtun:2004de,Iqbal:2008by}, can normally be reduced to an analysis of the dynamics of a minimally-coupled massless scalar mode and the existence of the membrane paradigm. The fact that the membrane paradigm exists in some theories for anomalous chiral conductivities thus naturally leads to the possibility of universality of these transport coefficients in holography. Motivated by this fact, in this work, we study whether and when non-renormalisation theorems for anomalous transport can be established in holography. 
Recently, a work by G\"{u}rsoy and Tarr\'io \cite{Gursoy:2014boa} made the first step in this direction by proving the universality of the chiral magnetic conductivity $\sigma_{JB}$ in a two-derivative Einstein-Maxwell-dilaton theory with an arbitrary scalar field potential and anomaly-inducing Chern-Simons terms. The only necessary assumptions were that the bulk geometry is asymptotically anti-de Sitter (AdS) and that the Ricci scalar is regular at the horizon. Because this statement is valid for two-derivative theories, it applies to duals at infinitely strong ('t Hooft) coupling $\lambda$ and an infinite number of adjoint colours, $N$. In this sense, it is applicable within the same class of theories as the statement of universality for $\eta / s$. Higher-derivative corrections to supergravity actions arise when $\alpha'$ corrections are computed from string theory. Usually, this is done either by computing loop corrections to the $\beta$-functions of the sigma model or by computing string scattering amplitudes and guessing the effective supergravity action that could result in the same amplitudes (see e.g. \cite{Grisaru:1986px,Gross:1986iv,Gross:1986mw}). Via the holographic dictionary, these higher-derivative corrections translate into (perturbative) coupling constant corrections in powers of the inverse coupling constant ($1/\lambda$) expanded around $\lambda \to \infty$ \cite{Gubser:1998nz}. The result $ \eta / s = 1 / (4\pi)$ (having set $\hbar = k_B = 1$) is not protected from higher-derivative bulk corrections; it receives coupling constant corrections both in four-derivative (curvature-squared) theories \cite{Kats:2007mq,Brigante:2007nu,Buchel:2008wy,Myers:2009ij} and in the presence of the leading-order top-down corrections to the $\mathcal{N} = 4$ supersymmetric Yang-Mills theory with an infinite number of colours (these $R^4$ corrections are proportional to $\alpha'^3 \sim 1 / \lambda^{3/2}$) \cite{Buchel:2004di}. 
An equivalent statement also exists in second-order hydrodynamics \cite{Baier:2007ix,Bhattacharyya:2008jc}. There, a particular linear combination of three transport coefficients, $2 \eta \tau_\Pi - 4 \lambda_1 - \lambda_2$, was shown to vanish for the same class of two-derivative theories as those that exhibit universality of $\eta /s $. It was then shown that the same linear combination of second-order transport coefficients vanishes to leading order in the coupling constant corrections even when curvature-squared terms \cite{Shaverin:2012kv,Grozdanov:2014kva} and $R^4$ terms dual to the $\mathcal{N}=4$ 't Hooft coupling corrections are included in the bulk action \cite{Grozdanov:2014kva}. However, by using the non-perturbative results for these transport coefficients in Gauss-Bonnet theory \cite{Grozdanov:2015asa}, one finds that the universal relation is violated non-perturbatively (or at second order in the perturbative coupling constant expansion) \cite{Grozdanov:2014kva}.\footnote{The violation of universality in second-order hydrodynamics was later also verified in \cite{Shaverin:2015vda} by using fluid-gravity methods in Gauss-Bonnet theory.} Our goal in this work is to study the universality of the four anomalous conductivities $\sigma_{JB}$, $\sigma_{J\omega}$, $\sigma_{J_5 B}$ and $\sigma_{J_5\omega}$ in general higher-derivative theories, thereby incorporating an infinite series of coupling constant corrections to results at infinite coupling (from two-derivative bulk theories). What we will show is that the expressions \eqref{s1}, \eqref{s2} and \eqref{s3} remain universal in any higher-derivative theory so long as the action (excluding the Chern-Simons terms) is gauge- and diffeomorphism-invariant.\footnote{As we are mainly interested in theories in which the anomalous Ward identity retains the form of Eq. \eqref{eqn:non-conserved-current}, the conditions of gauge- and diffeomorphism-invariance are imposed to avoid explicit violation of Eq. 
\eqref{eqn:non-conserved-current} by the bulk matter content (see Section \ref{section:massive-vector-fields} for a discussion of such an example that includes massive vector fields).} All we will assume, in analogy with \cite{Gursoy:2014boa}, is that the bulk theory is asymptotically AdS (it has a UV conformal fixed point) and that it permits a black brane solution with a regular, non-extremal horizon. In its essence, the proof will reduce to showing the validity of the membrane paradigm and then a study of generic, higher-derivative effective theories (all possible terms present in the conserved current) at the horizon and the boundary (as depicted in Fig. \ref{fig_membranes}). The condition of regularity of these constructions at the horizon will play a crucial role in the proof. By studying cases of theories for which the membrane paradigm fails, one can then find theories in which universality may be violated. Our findings can be seen as a test of holography in reproducing the correct Ward identities for the anomalous currents. The fact that we find universality of chiral conductivities with an infinite series of coupling constant corrections (albeit expanded around infinite coupling) is an embodiment of the fact that when only global anomalies are present, anomalous transport is protected from radiative corrections. An example related to the presence of mixed, gauge-global anomalies, which will invalidate the membrane paradigm, will be studied in Section \ref{section:examples}. Again, as expected from field theory arguments, a case like that will naturally be able to violate the universality (or non-renormalisation) of chiral conductivities. \\ {\bf Organisation of the paper} The paper is organised as follows: In Section \ref{section:holographic setup}, we describe the holographic theory at finite temperature and chemical potential that is studied in the main part of this work. 
We then turn to the proof of the universality of chiral conductivities in Section \ref{section:anomaly-induced-electric-current}. First, in Section \ref{sec.MemParad}, we show how to compute anomalous conductivities by using the membrane paradigm and specify the conditions that must be obeyed in order for the membrane paradigm to be valid. In Section \ref{section:any-higher-derivative}, we then prove that a gauge- and diffeomorphism-invariant action indeed satisfies those conditions and thus always gives the same anomalous conductivities. In Section \ref{section:examples}, we study examples that either obey or violate the conditions required for universality. In particular, those that violate the universality include either massive gauge fields or naked singularities in the bulk. The paper proceeds with a discussion of results and future directions in Section \ref{section:discussion}. Finally, Appendix \ref{app:anomP} includes a discussion of anomaly polynomials and the replacement rule. \section{The holographic setup} \label{section:holographic setup} In this work, we consider five-dimensional bulk actions with a dynamical metric $G_{ab}$, two massless gauge fields $A_a$ and $V_a$ that are dual to the axial and the vector current in the boundary theory, respectively, and a set of scalar (dilaton) fields, $\phi_I$: \begin{align}\label{FullAction} S = \int d^5x \sqrt{-G}\, \left\{ \mathcal{L} \left[A_a,V_a,G_{ab},\phi_I \right] + \mathcal{L}_{CS} \left[A_a,V_a,G_{ab} \right] \right\}. \end{align} The Lagrangian density $\mathcal{L}$ should be thought of as a general, diffeomorphism- and gauge-invariant action that may include arbitrary higher-derivative terms of the fields. Since we are interested in anomalous transport, \eqref{FullAction} must include the Chern-Simons terms, $\mathcal{L}_{CS}$, that source global chiral anomalies in the boundary theory. 
In holography, higher-than-second-derivative bulk terms correspond to the ('t Hooft) coupling corrections to otherwise infinitely strongly coupled states ($\lambda \to \infty$). Since $\mathcal{L}$ may include operators with arbitrary orders of derivatives (and corresponding bulk coupling constants), holographically computed quantities describing a hypothetical dual of \eqref{FullAction} are able to incorporate an infinite series of coupling constant corrections to observables at infinite coupling.\footnote{In type IIB theory, higher-derivative bulk terms and corrections to infinitely coupled results in $\mathcal{N} = 4$ theory are proportional to powers of $\alpha' \propto 1 / \lambda^{1/2}$. See e.g. \cite{Gubser:1998nz} and numerous subsequent works.} However, one should still think of these corrections as perturbative in powers of $1/\lambda$ due to various potential problems that may arise in theories with higher derivatives, such as the Ostrogradsky instability \cite{Ostrogradsky,Woodard:2015zca}.\footnote{See also \cite{Camanho:2014apa} for a recent discussion of causality violation in theories with higher-derivative bulk actions, in particular with four-derivative, curvature-squared actions.} The second source of corrections consists of the quantum gravity corrections that need to be computed in order to find the $1/N$-corrections in field theory. If we consider $S$ in Eq. \eqref{FullAction} to be a {\em local} quantum effective action, expanded in a gradient expansion, we may also claim that our holographic results incorporate certain types of (perturbative) $1/N$ corrections, included in $\mathcal{L}$. What is important is the expectation (or the condition) that the anomalous Chern-Simons terms in $\mathcal{L}_{CS}$ do not renormalise under quantum bulk corrections. 
It will prove convenient to write the action \eqref{FullAction} as \begin{align} \mathcal{L}\left[A_a,V_a,G_{ab},\phi_I \right] \equiv \mathcal{L}_G\left[R_{abcd}\right] + \mathcal{L}_\phi \left[\phi_I\right] + \mathcal{L}_A\left[A_a,R_{abcd},\phi_I\right]+ \mathcal{L}_V\left[V_a,R_{abcd},\phi_I \right] , \label{eqn:general-action} \end{align} where $\mathcal{L}_G$ now contains the Einstein-Hilbert term (along with the cosmological constant) and higher-derivative terms of the metric, expressed in terms of various contractions and derivatives of the Riemann curvature $R_{abcd}$. $\mathcal{L}_\phi$ contains kinetic and potential terms of a set of neutral scalar fields, $\phi_I$. By $F_{A,ab}$ and $F_{V,ab}$, we denote the field strengths corresponding to $A_a$ and $V_a$, respectively. Arbitrary derivatives of $F_{A,ab}$ and $F_{V,ab}$ may enter into $\mathcal{L}_A$ and $\mathcal{L}_V$, which, along with the Chern-Simons terms, take the form \begin{equation} \begin{aligned} \mathcal{L}_A \left[A_a,R_{abcd},\phi_I\right]&= \mathcal{L}_A\left[F_{A,ab},\nabla_a F_{A,bc},\ldots,R_{abcd},\nabla_a R_{bcde},\ldots , \phi_I, \partial_a \phi_I, \ldots \right],\\ \mathcal{L}_V \left[V_a,R_{abcd},\phi_I\right]&= \mathcal{L}_V\left[F_{V,ab},\nabla_a F_{V,bc},\ldots,R_{abcd},\nabla_a R_{bcde},\ldots , \phi_I, \partial_a \phi_I, \ldots \right],\\ \mathcal{L}_{CS} \left[A_a,V_a,G_{ab} \right] &= \epsilon^{abcde} A_a \left( \frac{\kappa}{3} F_{A,bc}F_{A,de} + \gamma F_{V,bc}F_{V,de} + \lambda R^{p}_{\;\;qbc} R^q_{\;\;pde} \right). \end{aligned} \label{eqn:LA-LV-LCS} \end{equation} The ellipses `$\ldots$' stand for higher-derivative terms built from $F_{A,ab}$, $F_{V,ab}$, $R$, $R_{ab}$, $R_{abcd}$ and $\phi_I$.\footnote{Latin letters $\{a,b,c,\ldots\}$ are used to label the spacetime indices in the five-dimensional bulk theory while the spacetime indices in the dual boundary theory are denoted by the Greek letters $\{\mu,\nu,\rho,\ldots \}$. 
The indices $\{i,j,k,\ldots \}$ represent the spatial directions of the boundary theory.} Note also that we have chosen $\mathcal{L}_A$ and $\mathcal{L}_V$ so as not to mix the two gauge fields. If there were mixing terms like $F_{A,ab}F_{V}^{ab}$ in the Lagrangian, then the anomalous Ward identities would no longer be those from Eq. \eqref{eqn:non-conserved-current} and additional complications regarding operator mixing would have to be dealt with. We note that the normalisation of the Levi-Civita tensor is chosen to be $\epsilon_{trxyz} = \sqrt{-G}$. Our goal is to study coupling constant corrections to the anomalous conductivities that arise from the Ward identity in Eq. \eqref{eqn:non-conserved-current}. We therefore avoid any ingredients in the action \eqref{eqn:general-action} that would explicitly introduce additional terms into \eqref{eqn:non-conserved-current}. Beyond imposing gauge- and diffeomorphism-invariance of \eqref{eqn:non-conserved-current}, we will also restrict our attention to Lagrangians $\mathcal{L}_A$ and $\mathcal{L}_V$ that contain no Levi-Civita tensor. An explicit example with violated (bulk) gauge-invariance that can generate a mixed, gauge-global anomaly on the boundary (altering the Ward identity \eqref{eqn:non-conserved-current}) will be studied in Section \ref{section:massive-vector-fields}. Furthermore, we assume that the bulk theory admits a homogeneous, translationally-invariant and asymptotically anti-de Sitter black brane solution of the form \begin{equation} \begin{aligned} ds^2 &= -r^2 f(r) d\bar t^2 + \frac{ dr^2}{r^2g(r)} + r^2 \left(d\bar x^2+d\bar y^2+d\bar z^2\right),\\ A &= A_t(r) d\bar t, \qquad V = V_t(r) d\bar t, \qquad \phi_I = \phi_I (r), \end{aligned} \label{eqn:background-solution-unboosted} \end{equation} with $f(r)$ and $g(r)$ two arbitrary functions of the radial coordinate $r$. At AdS infinity, \begin{align} \lim_{r\to\infty} f(r) = \lim_{r\to\infty} g(r) = 1. \end{align} The coordinates used in Eq. 
\eqref{eqn:background-solution-unboosted}, $\{\bar x^\mu,r\}$, will be referred to as the un-boosted coordinates. Near the (outer) horizon, we assume that the metric can be written in a non-extremal, Rindler form \begin{align} f(r) &= f_1 (r-r_h) + f_2 (r-r_h)^2+\mathcal{O}(r-r_h)^3, \label{nearH1}\\ g(r) &= g_1(r-r_h) + g_2 (r-r_h)^2+ \mathcal{O}(r-r_h)^3. \label{nearH2} \end{align} The Hawking temperature of this black brane background (and its dual) is given by \begin{equation} T = \frac{r_h^2}{4\pi} \sqrt{ f_1 g_1}. \label{eqn:temperature} \end{equation} The classical equations of motion describing this system can be obtained by varying the action \eqref{eqn:general-action}. Firstly, the variations of the two gauge fields give\footnote{In five spacetime dimensions, we define the Hodge dual of a $p$-form $\Omega = (p!)^{-1}\Omega_{a_1\ldots a_p}dx^{a_1}\wedge \ldots\wedge dx^{a_p}$ as \begin{equation}\nonumber \star \Omega = \frac{1}{p!(5-p)!} \sqrt{-G}\; \Omega_{a_1\ldots a_p}\epsilon^{a_1\ldots a_p}_{\;\;\;\;\;\;\;\;\;\;\; a_{p+1} \ldots a_{5}} dx^{a_{p+1}}\wedge\ldots \wedge dx^{a_5}. \end{equation} } \begin{align} d\star H_5 = 0, && d\star H = 0 , \label{eqn:Maxwell-eoms} \end{align} where the two-forms $H_5$ and $H$ are defined as \begin{equation} \begin{aligned} H_5 &= \frac{1}{2}\left( \frac{\delta \left( \mathcal{L}_A\right)}{\delta \left(\nabla^a A^b\right)}- \nabla_c \frac{\delta \left( \mathcal{L}_A\right)}{\delta \left( \nabla_c \nabla^a A^b\right)} + \ldots \right) dx^a dx^b + \kappa \star \omega_A + \gamma\star \omega_V + \lambda \star \omega_\Gamma , \\ H &= \frac{1}{2}\left( \frac{\delta \left( \mathcal{L}_V\right)}{\delta \left(\nabla^a V^b\right)}- \nabla_c \frac{\delta \left( \mathcal{L}_V\right)}{\delta \left( \nabla_c \nabla^a V^b\right)} + \ldots \right) dx^a dx^b + \gamma \star(V\wedge dA) . \end{aligned} \label{eqn:2-form-H} \end{equation} The ellipses again denote expressions coming from the higher-derivative terms. 
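As a simple illustration of the general expression \eqref{eqn:2-form-H} (a special case that is not needed for the proof below), one may take $\mathcal{L}_V$ to be the canonical two-derivative Maxwell term, $\mathcal{L}_V = -\frac{1}{4} F_{V,ab} F_V^{ab}$. In this case, $\delta \mathcal{L}_V / \delta \left( \nabla^a V^b \right) = -F_{V,ab}$, all higher-derivative variations indicated by the ellipsis vanish, and \begin{align} H = -\frac{1}{2} F_{V,ab}\, dx^a dx^b + \gamma \star \left( V \wedge dA \right) = -F_V + \gamma \star \left( V \wedge dA \right), \end{align} where $F_V = dV$. The equation of motion $d\star H = 0$ then reduces to the familiar Maxwell equation with a Chern-Simons source proportional to $\gamma\, F_V \wedge F_A$. 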
The three abelian Chern-Simons three-forms are composed of the two gauge field one-forms $A = A_a dx^a$ and $V= V_a dx^a$, and the Levi-Civita connection one-form $\Gamma^a_{~b} = \Gamma^a_{~bc}\, dx^c$ as \begin{align} \omega_X = {\mathrm{Tr}} \left( X \wedge dX + \frac{2}{3} X\wedge X \wedge X \right), \label{eqn:Chern-Simons-3-form} \end{align} where $X = \{ A ,V , \Gamma^a_{~b} \}$.\footnote{In terms of the index notation, the Chern-Simons form built out of the Levi-Civita connection is given by \begin{equation}\nonumber \omega_{abc} = \Gamma^{p_1}_{\;\;p_2a }\partial_b \Gamma^{p_2}_{\;\;p_1 c} + (2/3)\Gamma^{p_1}_{\;\;p_2 a} \Gamma^{p_2}_{\;\;p_3b} \Gamma^{p_3}_{\;\;p_1 c}. \end{equation}} Secondly, varying the metric gives the Einstein equations \begin{equation} R_{ab} - \frac{1}{2} G_{ab} R + \ldots = T^{M}_{ab} + \frac{1}{2}\nabla_c \left(\Sigma_{ab}^{~~c}+ \Sigma_{ba}^{~~c} \right), \label{eqn:Einstein-eoms} \end{equation} where $T^M_{ab}$ is the stress-energy tensor for the scalars and the gauge fields, excluding the Chern-Simons terms. The {\em spin current} $\Sigma_{ab}^{~~c}$ is defined as \begin{equation} \Sigma_{ab}^{~~c} = -\lambda\; \epsilon_a^{\,\,\, d_1d_2d_3d_4}F_{A,d_1d_2} R_{d_3d_4 b}^{~~~~~~ c}. \end{equation} We refer the reader to \cite{Azeyanagi:2013xea} for a more general definition of the spin current, its connection to the anomaly polynomial in Eq. \eqref{eqn:anomaly-polynomial} and expressions for $\Sigma_{ab}^{~~c}$ for different anomaly polynomials. We assume that the equations of motion coming from the variations of the scalar fields in \eqref{FullAction} can also be solved, but we will make no further reference to that set of equations. As stated above, the full system of equations is assumed to result in a non-extremal, asymptotically AdS black brane solution and non-trivial, backreacted profiles for the gauge and the scalar fields. 
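For completeness, let us also record how the temperature \eqref{eqn:temperature} follows from the near-horizon expansions \eqref{nearH1} and \eqref{nearH2}. Introducing the proper radial distance $\rho$ through $\rho^2 = 4 (r-r_h) / (r_h^2 g_1)$ brings the $(\bar t, r)$ part of the background \eqref{eqn:background-solution-unboosted} to the Rindler form \begin{align} ds^2_{(\bar t, r)} \approx - \frac{r_h^4 f_1 g_1}{4}\, \rho^2 \, d\bar t^2 + d\rho^2 , \end{align} so that the absence of a conical singularity in the Euclidean section requires the Euclidean time to be periodic with period $1/T$, with $T$ precisely the Hawking temperature quoted in Eq. \eqref{eqn:temperature}. 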
To find the set of anomalous conductivities $\{ \sigma_{J_5 B}, \sigma_{J B}, \sigma_{J_5 \omega}, \sigma_{J \omega}\}$ in all hypothetical duals of this holographic setup, it is convenient to consider the following perturbed metric in the boosted (fluid-gravity) frame \cite{Azeyanagi:2013xea}: \begin{equation} ds^2 = -2 \sqrt{\frac{f(r)}{g(r)}} u_\mu dr dx^\mu - r^2 f(r) u_\mu u_\nu dx^\mu dx^\nu +r^2 \Delta_{\mu\nu} dx^\mu dx^\nu + 2 r^2 h(r) u_\mu \omega_\nu dx^\mu dx^\nu , \label{eqn:perturbed-metric} \end{equation} where the projector $\Delta_{\mu\nu}$ is defined as $\Delta_{\mu\nu} = \eta_{\mu\nu} + u_\mu u_\nu$, with $\eta_{\mu\nu}$ the four-dimensional Minkowski metric. Note that once we set the fluid to be stationary, i.e. $u^\mu_{eq} = \{1,0,0,0\}$, the metric \eqref{eqn:perturbed-metric} will return to the un-boosted form \eqref{eqn:background-solution-unboosted}, but in the Eddington-Finkelstein coordinates, as is usual in the fluid-gravity correspondence \cite{Bhattacharyya:2008jc,Rangamani:2009xk}. The perturbations are organised so that the fluid velocity $u_\mu$ depends only on the boundary coordinates $x^\mu$ and all of the $r$-dependence is encoded in $h(r)$. Since the vorticity is defined as $\omega^\mu = \epsilon^{\mu\nu\rho\sigma} u_\nu \partial_\rho u_\sigma$, the last term in \eqref{eqn:perturbed-metric} corresponds to the metric perturbations at first order in the derivative expansion (in the $x^\mu$ coordinates). 
Similarly, the perturbed axial and vector gauge fields can be written as\footnote{Our choice of the metric and the gauge fields can be understood in the following way: If one considers the perturbed metric and the gauge fields with all possible terms at first order in gradient expansions, they have the form \begin{align}\nonumber ds^2 =-2 S(r) \,u_\mu dx^\mu dr+ F(r)\, u_\mu u_\nu dx^\mu dx^\nu + G(r)\, \Delta_{\mu\nu}dx^\mu dx^\nu + 2 H^\perp_\mu(r,x) \,u_\nu dx^\mu dx^\nu+ \Pi(r)\,\sigma_{\mu\nu} dx^\mu dx^\nu, \end{align} \begin{align}\nonumber &A = C(r) u_\mu dx^\mu + a^\perp_\mu(r,x) dx^\mu ,&& V = D(r) u_\mu dx^\mu + v^\perp_\mu(r,x) dx^\mu, \end{align} where $H^\perp_\mu$, $a^\perp_\mu$ and $v^\perp_\mu$ are vectors orthogonal to the fluid velocity $u^\mu$. Using the equations of motion for $\{ H^\perp_\mu, a^\perp_\mu, v^\perp_\mu\}$, one can show that they decouple from all other perturbations at the same order in the gradient expansion (see e.g. \cite{Erdmenger:2008rm,Banerjee:2008th}). Thus, to compute anomalous conductivities, one can consistently solve for only $\{ H^\perp_\mu, a^\perp_\mu, v^\perp_\mu\}$, setting the remaining perturbations to zero. To first order, this gives our Eqs. \eqref{eqn:perturbed-metric} and \eqref{eqn:perturbed-gauge-fields}. } \begin{equation} \begin{aligned} &A = -A_t (r)\, u_\mu dx^\mu + \tilde{a}(x^\mu) + a(r) \,\omega_\mu dx^\mu,\\ &V = -V_t (r)\, u_\mu dx^\mu + \tilde{v}(x^\mu) + v(r) \,\omega_\mu dx^\mu. \label{eqn:perturbed-gauge-fields} \end{aligned} \end{equation} One may use the one-forms $\tilde{a}$ and $\tilde{v}$ to define the magnetic field source $B^\mu = \epsilon^{\mu\nu\rho\sigma} u_\nu \partial_\rho \tilde{v}_\sigma$ and the (fictitious) axial magnetic field source $B_5^\mu = \epsilon^{\mu\nu\rho\sigma} u_\nu \partial_\rho \tilde{a}_\sigma $. 
\section{Proof of universality} \label{section:anomaly-induced-electric-current} In this section, we show that upon expanding the equations of motion \eqref{eqn:Maxwell-eoms} and \eqref{eqn:Einstein-eoms} to first order in the (boundary) derivative expansion, the conserved currents can be expressed as a total radial derivative of some function. Such a radially conserved quantity is necessary for the applicability of the membrane paradigm, used e.g. in \cite{Iqbal:2008by} and many other holographic studies. To express all four anomalous conductivities purely in terms of the near-horizon data, our work will generalise the membrane paradigm result for the chiral magnetic conductivity of G\"{u}rsoy and Tarr\'io \cite{Gursoy:2014boa}. This will then enable us to establish the universality of the four transport coefficients in the presence of a general higher-derivative bulk theory specified in Section \ref{section:holographic setup}. Furthermore, the structure of the equations will single out the properties that holographic theories must violate in order for there to be a possibility that the dual conductivities may get renormalised. Our proof can be divided into two steps: First (in Section \ref{sec.MemParad}), we expand the equations of motion for the gauge field \eqref{eqn:Maxwell-eoms} to first order in the (boundary coordinate) derivative expansions and arrange them into a total-derivative form of a conserved current along the radial direction. This radially conserved current can be written as a sum of the anomalous Chern-Simons terms and terms that come from the rest of the action. We identify the conditions that each of these terms has to satisfy in order for the anomalous conductivities to have a universal form fixed by the Chern-Simons action. 
Proving the validity of these conditions is then done in Section \ref{section:any-higher-derivative} by analysing the horizon and the boundary behaviour of the higher-derivative bulk effective action (and all possible resulting terms that can appear in the conserved current). \subsection{Anomalous conductivities and the membrane paradigm} \label{sec.MemParad} Let us begin by considering the axial and the vector currents, $\langle \delta J^\mu_5 \rangle$ and $\langle \delta J^\mu \rangle$, sourced by a small magnetic field and a small vortex. As in \cite{Gursoy:2014boa}, the membrane paradigm equations follow from the two Maxwell's equations in \eqref{eqn:Maxwell-eoms}. For conciseness, we only show the details of the axial current computation, which involves $H_5$ from Eq. \eqref{eqn:2-form-H}. A calculation for the vector current, involving $H$, proceeds along similar lines; in that case, we will only state the relevant results. To first order in the gradient expansion along the boundary directions $x^\mu$, both equations in \eqref{eqn:Maxwell-eoms} can be schematically written as \begin{equation} \partial_r \left(\sqrt{-G} H^{ra}_5\left(\partial^1\right) \right)+ \partial_\mu \left( \sqrt{-G} H^{\mu a}_5\left(\partial^0\right)\right) = 0, \label{eqn:expand-Maxwell} \end{equation} where $H^{ra}_5\left(\partial^1\right)$ and $H^{\mu a}_5\left(\partial^0\right)$ are the components of the conserved current two-form in Eq. \eqref{eqn:2-form-H} that contain one- and zero-derivative terms, respectively (derivatives are taken with respect to $x^\mu$). As our first goal is to rewrite the problem in terms of a radially conserved quantity, we need to consider the structure of the second term in \eqref{eqn:expand-Maxwell}. We will set the index $a$ to the four-dimensional index $\nu$. It is easy to see that only the Chern-Simons terms from $\mathcal{L}_{CS}$ can enter into this term at zeroth order in the (boundary) derivative expansion, i.e. 
$ \partial_\mu \left(\sqrt{-G} H_5^{\mu\nu}\left(\partial^0\right)\right)\vert_{\kappa=\gamma=\lambda=0}=0$ (cf. Eq. \eqref{eqn:LA-LV-LCS}). This is because $H_5^{\mu\nu}$ can only be constructed out of the (axial) gauge field \eqref{eqn:perturbed-gauge-fields} and the metric tensor \eqref{eqn:perturbed-metric}, containing no derivatives along $x^\mu$. At zeroth order in the derivative expansion, any two-tensor $X^{\mu\nu}$ can thus be decomposed as \begin{equation} X^{\mu\nu} = X_1 \, u^\mu u^\nu + X_2 \, \Delta^{\mu\nu} + X_3 \, u^{(\mu} A^{\nu)} + X_4\, u^{[\mu} A^{\nu]} , \end{equation} where $X_i$ are scalar functions of the radial coordinate. For an anti-symmetric $X^{\mu\nu}$, as are $H_5^{\mu\nu}$ and $H^{\mu\nu}$, $X_1$, $X_2$ and $X_3$ must vanish and only $X_4$ can be non-zero. Since such a term can only come from $\mathcal{L}_{CS}$, $\mathcal{L}$ cannot contribute to the second term in \eqref{eqn:expand-Maxwell}. For $a = \nu$, the two terms in Eq. \eqref{eqn:expand-Maxwell} are therefore given by \begin{equation} \begin{aligned} \partial_r \left[ \sqrt{-G} H^{r\nu}_5\left(\partial^1\right) \right] & = \frac{\partial}{\partial r}\left[\ldots + \kappa \left(A_t B_5^\nu + A_t^2 \omega^\nu \right) + \gamma \left(V_t B^\nu + V_t^2 \omega^\nu \right) + \lambda \frac{g(r^3f' )^2}{2r^2 f} \omega^\nu \right] ,\\ \partial_\mu \left[\sqrt{-G} H^{\mu \nu}_5 \left(\partial^0\right) \right] &= \kappa\left(\partial_r A_t\right) B_5^\nu + \gamma \left( \partial_r V_t \right) B^\nu = \partial_r \left( \kappa A_t B^\nu_5 + \gamma V_t B^\nu \right). \end{aligned} \label{eqn:CS-part-of-H} \end{equation} The ellipsis indicates the non-Chern-Simons terms. Hence, one can write the Maxwell equation for the axial gauge field as a derivative of a conserved current along the $r$-direction: \begin{equation} \partial_r \mathcal{J}^\mu_5 (r) = 0. 
\label{eqn:membrane-current-J5} \end{equation} The axial bulk current is defined as \begin{equation} \mathcal{J}^\mu_5 (r)= \mathcal{J}^{\mu}_{5,mb}(r) + \mathcal{J}^{\mu}_{5,r}(r) + \mathcal{J}^\mu_{5,CS}(r) , \label{eqn:def-CJ5} \end{equation} where the {\em membrane current} $\mathcal{J}^\mu_{5,mb}(r)$, the current $\mathcal{J}^\mu_{5,r}$ and the Chern-Simons current $\mathcal{J}^\mu_{5,CS}$ are defined as \begin{equation} \begin{aligned} \mathcal{J}^\mu_{5,mb} &= \sqrt{-G} \left( \frac{\partial \mathcal{L}_A}{\partial A'_\mu} - \partial_a \frac{ \partial \mathcal{L}_A }{\partial(\partial_a A'_\mu)} + \ldots \right)\biggr\vert_{h(r) \to 0} , \\ \mathcal{J}^\mu_{5,r} &= \sqrt{-G} \left( \frac{\partial \mathcal{L}_A }{\partial A'_\mu} - \partial_a \frac{ \partial \mathcal{L}_A}{\partial(\partial_a A'_\mu)}+\ldots \right)\biggr\vert_{a(r)\to 0} , \\ \mathcal{J}^\mu_{5,CS} &= 2\kappa A_t B^\mu_5 + 2\gamma V_t B^\mu + \left( \kappa A_t^2 + \gamma V_t^2 + \lambda \frac{g(r^2 f')^2}{2f } \right)\omega^\mu\, . \end{aligned} \label{eqn:membrane-and-CS-current} \end{equation} Note that the primes indicate derivatives with respect to the radial coordinate. The expectation value of the boundary current $\langle \delta J^\mu_5\rangle$, excited by the external sources of anomalous transport (cf. Eq. \eqref{CondDef}), is obtained by varying the perturbed on-shell action \eqref{eqn:general-action} with respect to the bulk axial gauge field fluctuation at the boundary. We find that it is the membrane current $\mathcal{J}^\mu_{5,mb}$ evaluated at the boundary ($r\to \infty$) that can be interpreted as its expectation value: \begin{equation}\label{ExpJasJmb} \langle \delta J^\mu_5 \rangle = \lim_{r\to \infty} \mathcal{J}^\mu_{5,mb}(r). \end{equation} This result is of central importance to the existence of the membrane paradigm in our discussion. Let us now study how $\mathcal{J}^\mu_{5,mb}$ can be related to the full conserved current $\mathcal{J}^\mu_5$ from Eq. \eqref{eqn:def-CJ5}. 
What will prove very convenient is the gauge choice for $A$ and $V$ whereby (see e.g. \cite{Gynther:2010ed}) \begin{align} \lim_{r\to\infty} A_t(r)=0, && \lim_{r\to\infty} V_t(r)=0 . \end{align} Such a choice results in\footnote{For an alternative gauge choice, see e.g. formalism B from Ref. \cite{Landsteiner:2012kd}.} \begin{align} \lim_{r\to \infty} \mathcal{J}^\mu_{5,CS} (r) = 0, \end{align} which together with the conservation equation \eqref{eqn:membrane-current-J5} and Eq. \eqref{ExpJasJmb} implies that \begin{align}\label{eqn:vev-J5} \langle \delta J^\mu_5 \rangle = \mathcal{J}^{\mu}_{5,mb}(r_h) + \mathcal{J}^{\mu}_{5,r}(r_h) - \mathcal{J}^\mu_{5,r}(\infty) + \mathcal{J}^\mu_{5,CS}(r_h). \end{align} What we will prove in the next section (Sec. \ref{section:any-higher-derivative}) is that for any theory specified by the action in \eqref{FullAction}, \begin{align}\label{eqn:condition-for-universality} \mathcal{J}^{\mu}_{5,mb}(r_h) + \mathcal{J}^{\mu}_{5,r}(r_h) - \mathcal{J}^\mu_{5,r}(\infty) = 0, \end{align} implying that the current $\langle \delta J^\mu_5\rangle$ can be completely determined by only the Chern-Simons current evaluated at the horizon, \begin{align}\label{MembraneParadigmFullCurrentFinal} \langle \delta J^\mu_5 \rangle &= \mathcal{J}^\mu_{5,CS}(r_h). \end{align} The same reasoning and equations \eqref{ExpJasJmb}--\eqref{MembraneParadigmFullCurrentFinal} apply also to the case of the vector current, up to the appropriate replacements of $A_a$ by $V_a$, $\mathcal{L}_A$ by $\mathcal{L}_V$ and the axial Chern-Simons current by \begin{equation} \mathcal{J}^\mu_{CS} = 2\gamma \left( A_t B^\mu + V_t B^\mu_5\right)+ 2\gamma A_t V_t \, \omega^\mu. \end{equation} Let us for now assume that the condition \eqref{eqn:condition-for-universality} is satisfied and proceed to compute the anomalous conductivities. 
In our gauge choice, the gauge fields at the horizon are related to the two chemical potentials via \begin{align}\label{AVhorizon} A_t(r_h) = -\mu_5, && V_t(r_h) = -\mu. \end{align} By using the near-horizon expansions \eqref{nearH1} and \eqref{nearH2}, which give $f \to f_1 (r-r_h)$, $f' \to f_1$ and $g \to g_1 (r-r_h)$ as $r \to r_h$, the last term in $\mathcal{J}^\mu_{5,CS}$ from \eqref{eqn:membrane-and-CS-current} can be related to the temperature \begin{equation} \lim_{r\to r_h} \frac{g \left(r^2f'\right)^2}{f} = r_h^4 f_1 g_1 =4 \left(2\pi T\right)^2. \end{equation} Furthermore, using the horizon values of the gauge fields from Eq. \eqref{AVhorizon} along with the definitions of the anomalous conductivities from \eqref{CondDef}, we find \begin{align} &\sigma_{J_5 B} = -2 \gamma \mu, && \sigma_{J B} = -2 \gamma \mu_5,\nonumber\\ & \sigma_{J_5 \omega} = \kappa \mu_5^2 + \gamma \mu^2 + 2\lambda (2\pi T)^2, && \sigma_{J \omega} = 2\gamma \mu_5 \mu . \label{eqn:universal-anomalous-conductivity-electric} \end{align} Hence, so long as the condition \eqref{eqn:condition-for-universality} is satisfied, the bulk theory \eqref{FullAction} gives precisely the non-renormalised, universal conductivities stated in Eqs. \eqref{s1}, \eqref{s2} and \eqref{s3}. \subsection{Universality} \label{section:any-higher-derivative} We will now show that the condition \eqref{eqn:condition-for-universality} always holds in theories in which $\mathcal{L}$ (as defined in Eq. \eqref{FullAction}) is gauge- and diffeomorphism-invariant. Thus, we will establish the universality of the anomaly-induced conductivities $\sigma_{J_5 B}$, $\sigma_{J B}$, $\sigma_{J_5 \omega}$ and $\sigma_{J \omega}$ from Eq. \eqref{eqn:universal-anomalous-conductivity-electric} in theories with arbitrary higher-derivative actions, dual to an infinite series of coupling constant corrections expanded around infinite coupling. 
The condition \eqref{eqn:condition-for-universality} requires us to understand how $\mathcal{J}_{5,mb}^\mu$ and $\mathcal{J}_{5,r}^\mu$ behave at the two ends of the five-dimensional geometry (boundary and horizon). To make general statements about that, we construct an effective field theory (or the effective current) in terms of the metric, gauge fields and dilatons with first-order perturbations to quadratic order in the amplitude expansion. The two conditions that we impose on the effective theory and the resulting currents are the following: \begin{itemize} \item[(1)] The theory must be regular at the non-extremal horizon, by which we mean that any Lorentz scalar present in the action (or a current) must be regular (non-singular) when evaluated at the horizon. \item[(2)] The bulk spacetime is asymptotically anti-de Sitter. \end{itemize} For conciseness, we again only analyse the axial gauge field, $A_a$. A completely equivalent procedure can be applied to the case of the vector gauge field, $V_a$. From the definitions of $\mathcal{J}^\mu_{5,mb}$ and $\mathcal{J}^\mu_{5,r}$ in Eq. \eqref{eqn:membrane-and-CS-current}, it is clear that the only relevant part of the action \eqref{eqn:general-action} for this analysis is $\mathcal{L}_A$. Because the two currents are independent of the Chern-Simons terms, they only depend on the terms encoded in $H^{ra}_{5}\left(\partial^1\right)$ (see discussion below Eq. \eqref{eqn:expand-Maxwell}). 
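Before proceeding, it is useful to illustrate condition (1) with the simplest gauge-invariant scalar built out of the axial field strength. Evaluated on the background \eqref{eqn:background-solution-unboosted}, one finds \begin{align} F_{A,ab} F_A^{ab} = - \frac{2 g(r)}{f(r)} \left( A_t' \right)^2 \to - \frac{2 g_1}{f_1} A_t'(r_h)^2 \quad \text{as} \quad r \to r_h, \end{align} which is finite at the non-extremal horizon because the expansions \eqref{nearH1} and \eqref{nearH2} imply $g/f \to g_1/f_1$. Regularity of scalars of this type is thus automatic on the background, while it becomes a non-trivial restriction on the perturbations $a(r)$ and $h(r)$. 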
The possible terms in $H^{ra}_5\left(\partial^1\right)$ that correspond to $\mathcal{J}^\mu_{5,mb}$ and $\mathcal{J}^\mu_{5,r}$ can be written (schematically, up to correct tensor structures of $ \mathcal{C}_{A,n} $ and $ \mathcal{C}_{G,n}$) as \begin{equation} H^{r\mu}_{5}\left(\partial^1\right) = \sum_{n=1}^\infty \left[ \mathcal{C}_{A,n} \partial^n_r a(r) + \mathcal{C}_{G,n}\partial^n_r h(r) \right] \omega^\mu + H^{r\mu}_{5,CS}\left(\partial^1\right), \label{eqn:non-CS-H} \end{equation} where $H^{r\mu}_{5,CS}$ is the irrelevant Chern-Simons part of $H^{r\mu}_5$, stated explicitly in Eq. \eqref{eqn:CS-part-of-H}. Since the action $\mathcal{L}_A$ does not contain any Levi-Civita tensors, the terms in $\{\mathcal{C}_{A,n},\,\mathcal{C}_{G,n}\}$ can only depend on $a(r)$ and $h(r)$. This implies that $\mathcal{C}_{A,n} = \mathcal{C}_{G,n} = 0$ when $a(r)=h(r)=0$, to first order in the boundary-coordinate derivative expansion. Hence, the problem reduces to the question of finding all possible structures of the tensorial coefficients $\{\mathcal{C}_{A,n},\,\mathcal{C}_{G,n}\}$ at the horizon and at the boundary. It is now convenient to return to the un-boosted coordinates, $\{ r, \bar{x}^\mu\}$, used in Eq. \eqref{eqn:background-solution-unboosted}. In these coordinates, the perturbed metric and the axial gauge field are (in analogy with \eqref{eqn:perturbed-metric} and \eqref{eqn:perturbed-gauge-fields}) \begin{align} ds^2 &= -r^2f(r) d\bar{t}^2 + \frac{d r^2}{r^2g(r)} + r^2 (d\bar x^2 + d\bar y^2 + d \bar z^2) + 2h_{\bar t i}(r,\bar x^i) d\bar t d\bar x^i,\\ A &= A_t d\bar t + a_i(r,\bar x^i) d\bar x^i, \end{align} where the perturbations are now denoted by $h_{\bar t i}$, $a_i$ and $v_i$ with $i=\{x,y,z\}$.
One can relate $\{ h_{\bar t i}, a_i\}$ to $\{h(r), a(r)\}$ by using the appropriate coordinate transformations, which give \begin{align} h_{\bar t i} &= \ldots +r^2 h(r) \, u_\mu \omega_\nu \frac{\partial x^\mu}{\partial \bar{t}} \frac{\partial x^\nu}{\partial \bar{x}^i} + \mathcal{O}\left(\partial^2\right), \nonumber\\ a_{i} &= \ldots + a(r) \, \omega_\mu \frac{\partial x^\mu}{\partial \bar x^i} + \mathcal{O}\left(\partial^2\right). \label{eqn:boosted-h-and-a} \end{align} Here, the ellipses denote the zeroth-order terms in the derivative expansion. It is convenient to consider $u^\mu-u^\mu_{eq}$ to be small, which gives \begin{align} u_\mu dx^\mu = dt + \delta u_i dx^i,&& dt = d\bar t + \frac{1}{r^2}\sqrt{\frac{1}{f(r)g(r)}} dr, && dx^i = d\bar x^i. \label{eqn:coordinate-transformation} \end{align} This choice of the fluid velocity further gives $\omega^t = B^t = 0$. Thus, in the remainder of this section, we will only write down the tensors $\{H^{r\mu}_5,\mathcal{J}^\mu_5,\mathcal{J}^\mu_{5,CS} \}$ with spatial components $\mu = \{i,j,k,\ldots \}$. It immediately follows that $H^{ri}_5 \left(r,x^\mu\right)$ in the boosted coordinates and $H^{ri}_5\left(r,\bar x^\mu\right)$ in the un-boosted coordinates have identical expressions. In analogy with \eqref{eqn:non-CS-H}, we expand $H_5^{ri}$ in the un-boosted coordinates to first order in the amplitudes of $a_i$ and $h_{\bar t i}$: \begin{equation} \begin{aligned} H_5^{ri}\left[a_i,h_{\bar t i}\right] &= \left( \mathcal{I}^{rirj}_{A,1}\partial_r a_j +\mathcal{I}^{rirrj}_{A,2} \partial^2_r a_j + \ldots \right) + \left( \mathcal{I}_{G,0}^{ri\bar t j} h_{\bar t j} + \mathcal{I}_{G,1}^{r i \bar t r j}\partial_r h_{\bar t j} + \mathcal{I}_{G,2}^{ri\bar t r r j}\partial^2_r h_{\bar t j}+ \ldots \right)\\ &\quad + \left( \text{terms with derivatives along $x^i$}\right).
\end{aligned} \label{eqn:delta-H-unboosted} \end{equation} Note that $\mathcal{I}^{rij}_{A,0}=0$ because gauge invariance of $\mathcal{L}_A$ excludes the possibility of any explicit dependence on $a_i$ (only derivatives of $a_i$ may appear). The ellipses represent terms with higher derivatives in $r$, and $\{\mathcal{I}_{A,n},\mathcal{I}_{G,n}\}$ are tensors contracted with $\partial^n_r a_i$ and $\partial^n_r h_{\bar t i}$. To verify \eqref{eqn:delta-H-unboosted}, we can use the coordinate transformations \eqref{eqn:boosted-h-and-a}, which show that all relevant terms from \eqref{eqn:non-CS-H} are indeed contained in \eqref{eqn:delta-H-unboosted}. Thus, one can determine the coefficients $\{\mathcal{C}_{A,n},\mathcal{C}_{G,n}\}$ by applying \eqref{eqn:coordinate-transformation} to \eqref{eqn:delta-H-unboosted} and matching the coefficients of $\partial^n_ra(r)\,\omega^i$ and $\partial^n_r h(r) \,\omega^i$. The structure of the $\{\mathcal{I}_{G,n}, \mathcal{I}_{A,n}\}$ tensors near the horizon and the AdS-boundary can be understood in the following way: in the un-boosted frame, we define five mutually orthogonal unit vectors or vielbeins, $e_{\hat{p}a} = \delta_{\hat p a}$, where the hatted indices $\{ \hat{p},\hat q,\ldots\} = \{\hat 0,\hat1,\hat2,\hat3,\hat4\}$ are used as (local flat space) bookkeeping indices. The full set of the five-dimensional vectors with upper Lorentz indices can now be written as $e^a_{\;\;\hat p} = \left[\sqrt{G}\right]^{ab} \delta_{\hat p b}$: \begin{equation} \begin{aligned} e_{\hat 0} &= \Big(\left(r^2f\right)^{-1/2},0,0,0,0\Big),\\ e_{\hat 1} &= \Big(0,1/r,0,0,0\Big),\\ e_{\hat 2} &= \Big(0,0,1/r,0,0\Big),\\ e_{\hat 3} &= \Big(0,0,0,1/r,0\Big),\\ e_{\hat 4} &= \Big(0,0,0,0,\left(r^2g\right)^{1/2}\Big).
\end{aligned} \label{eqn:vielbein-components} \end{equation} These normal vectors allow us to write the tensors $\{ \mathcal{I}_{G,n},\mathcal{I}_{A,n}\}$ as \begin{equation} \begin{aligned} \mathcal{I}^{a_1a_2\ldots a_m}_{A,n} &= \sum_{\hat{p}_1,\ldots ,\hat{p}_m} \mathcal{S}_{A,n}^{\hat p_1\ldots \hat p_m} e^{a_1}_{\;\;\hat p_1} \ldots e^{a_m}_{\;\;\hat p_m}, \\ \mathcal{I}^{a_1a_2\ldots a_m}_{G,n} &= \sum_{\hat{p}_1,\ldots ,\hat{p}_m} \mathcal{S}_{G,n}^{\hat p_1\ldots \hat p_m} e^{a_1}_{\;\;\hat p_1} \ldots e^{a_m}_{\;\;\hat p_m}, \end{aligned} \label{eqn:decomposed-CC} \end{equation} where $\{\mathcal{S}_{A,n},\mathcal{S}_{G,n}\}$ are (spacetime) Lorentz-scalars. The regularity condition imposed at the horizon demands that these scalars be non-singular at $r=r_h$. The question of whether $\mathcal{I}_{G,n}$ and $\mathcal{I}_{A,n}$ vanish at the horizon is therefore completely determined by the values the projectors $e^{a_1}_{\;\;\hat p_1} \ldots e^{a_m}_{\;\;\hat p_m}$ take when evaluated at the horizon. To demonstrate this fact more clearly, let us write down the first few relevant components of the tensors $\mathcal{I}_{G,n}$ and $\mathcal{I}_{A,n}$ explicitly: \begin{align*} &\mathcal{I}_{G,0}^{ri \bar t j}=\left(r^{-2}\sqrt{g/f}\right)\,\mathcal{S}_{G,0}^{4\hat i 0 \hat j}\;, & &\mathcal{I}_{A,0}^{rij} = 0\;,\\ &\mathcal{I}_{G,1}^{ri \bar t rj}=\left(r^{-1}\sqrt{g^2/f}\right)\,\mathcal{S}_{G,1}^{4\hat i 04\hat j}\;, &&\mathcal{I}_{A,1}^{ri rj} = g \, \mathcal{S}^{4\hat i 4\hat j}_{A,1}\;,\\ &\mathcal{I}_{G,2}^{ri\bar t rrj}=\left(\sqrt{g^3/f}\right) \,\mathcal{S}_{G,2}^{4\hat i 044\hat j}\;, &&\mathcal{I}_{A,2}^{rirrj}= \left(r g^{3/2}\right) \mathcal{S}_{A,2}^{4\hat i 44\hat j}\;,\\ &\mathcal{I}_{G,3}^{ri\bar t rrrj}=\left(r\sqrt{g^4/f}\right)\,\mathcal{S}_{G,3}^{4\hat i 0444\hat j}\;,&&\mathcal{I}_{A,3}^{rirrrj}= \left( r^2g^2 \right)\, \mathcal{S}_{A,3}^{4\hat i 444\hat j}\;, \end{align*} all evaluated at $r = r_h$.
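One can quickly confirm that the vielbeins \eqref{eqn:vielbein-components} are orthonormal with respect to the background metric, $e^a_{\;\;\hat p}\, G_{ab}\, e^b_{\;\;\hat q} = \eta_{\hat p \hat q}$. The following \texttt{sympy} snippet (an illustration of ours, with coordinates ordered as $(t,x,y,z,r)$) performs the check for arbitrary $f(r)$ and $g(r)$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
f = sp.Function('f', positive=True)(r)
g = sp.Function('g', positive=True)(r)

# background metric, coordinate order (t, x, y, z, r)
G = sp.diag(-r**2*f, r**2, r**2, r**2, 1/(r**2*g))

# vielbeins e^a_{\hat p} as rows, hatted indices ordered (0,1,2,3,4)
e = sp.Matrix([
    [1/sp.sqrt(r**2*f), 0,   0,   0,   0],
    [0,                 1/r, 0,   0,   0],
    [0,                 0,   1/r, 0,   0],
    [0,                 0,   0,   1/r, 0],
    [0,                 0,   0,   0,   sp.sqrt(r**2*g)],
])

eta = sp.simplify(e * G * e.T)   # should be mostly-plus Minkowski
assert eta == sp.diag(-1, 1, 1, 1, 1)
```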
As before, the tensor $\mathcal{I}^{rij}_{A,0}=0$ because of the gauge invariance of $\mathcal{L}_A$. With this decomposition, the problem of determining the non-zero terms in $H^{ri}_5$ has been reduced to simple power-counting. Namely, a tensor $\mathcal{I}^{a_1a_2\ldots}$ can only be non-zero at the horizon if the number of $e^{\bar{t}}_{\;\hat 0}$ in its decomposition is equal to or greater than the number of $e^{r}_{\;\hat 4}$. The regularity of the scalars $\mathcal{S}_{A,n}$ and $\mathcal{S}_{G,n}$ at the horizon plays a crucial role here. Hence, one can see that the only non-zero tensor from the set of $\{\mathcal{I}_{A,n},\mathcal{I}_{G,n}\}$ is $\mathcal{I}_{G,0}^{ri\bar t j}$. The conserved current evaluated at the horizon thus becomes \begin{equation} \mathcal{J}^i_5 = \sqrt{-G}\left(\sqrt{\frac{g}{f}} \;\mathcal{S}^{4\hat j 0 \hat i}_{G,0}\right) h(r_h) \,u_\mu \omega_\nu \frac{\partial x^\mu}{\partial \bar t} \frac{\partial x^\nu}{\partial \bar x^j} + \mathcal{J}^i_{5,CS} (r_h) . \label{eqn:non-CS-CJ5-general} \end{equation} To see why the first term in \eqref{eqn:non-CS-CJ5-general} has to vanish, recall that, like all other scalars, the Ricci scalar has to be regular at the horizon. As pointed out in \cite{Gursoy:2014boa}, this condition implies that $h_{\bar t i} \sim (r-r_h)$ at the horizon. Therefore, the conserved current at the horizon is indeed fully determined by the anomalous Chern-Simons term: \begin{align}\label{CJ5onlyCSmatters} \mathcal{J}^i_5 = \mathcal{J}^i_{5,CS} (r_h). \end{align} With $H^{rt}_5 = 0$, Eq. \eqref{CJ5onlyCSmatters} implies that the first two terms in the condition \eqref{eqn:condition-for-universality} vanish: \begin{align}\label{ProofIng1} \mathcal{J}^{\mu}_{5,mb}(r_h) + \mathcal{J}^{\mu}_{5,r}(r_h) = 0. \end{align} Similarly, we can determine the value of the current $\mathcal{J}^\mu_{5,r}$ at the boundary.
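Before turning to the boundary, we note that the horizon power-counting above can be condensed into a few lines. In the sketch below (our own illustration), each $e^{\bar t}_{\;\,\hat 0}$ contributes a factor $f^{-1/2}\sim (r-r_h)^{-1/2}$ and each $e^{r}_{\;\,\hat 4}$ a factor $g^{1/2}\sim (r-r_h)^{1/2}$, so a tensor survives at the horizon only when the resulting power of $(r-r_h)$ is non-positive:

```python
# Horizon power counting for the decomposition (eqn:decomposed-CC).
# Flat-space labels: 0 -> e^t_{hat 0} ~ f^{-1/2},  4 -> e^r_{hat 4} ~ g^{1/2};
# near the horizon f ~ f_1 (r - r_h) and g ~ g_1 (r - r_h).

from fractions import Fraction

def horizon_power(hatted_indices):
    """Power of (r - r_h) carried by the projector product."""
    n0 = hatted_indices.count(0)          # factors of f^{-1/2}
    n4 = hatted_indices.count(4)          # factors of g^{+1/2}
    return Fraction(n4 - n0, 2)

def survives_at_horizon(hatted_indices):
    # non-zero iff the number of e^t_{hat 0} >= the number of e^r_{hat 4}
    return horizon_power(hatted_indices) <= 0

# reproduce the table: I_{G,0}^{r i tbar j} is finite, the others vanish
assert survives_at_horizon([4, 1, 0, 2])          # I_{G,0}: ~ (r-r_h)^0
assert not survives_at_horizon([4, 1, 4, 2])      # I_{A,1}: ~ (r-r_h)^1
assert not survives_at_horizon([4, 1, 0, 4, 2])   # I_{G,1}: ~ (r-r_h)^{1/2}
```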
Since $\mathcal{J}^\mu_{5,r}$ includes only terms linear in $h(r)$, it is enough to consider \begin{equation} H^{ri}_5 = \left( \mathcal{I}_{G,0}^{ri\bar t j} h_{\bar t j} + \mathcal{I}_{G,1}^{r i \bar t r j}\partial_r h_{\bar t j} + \mathcal{I}_{G,2}^{ri\bar t r r j}\partial^2_r h_{\bar t j}+ \ldots \right) + \ldots. \label{eqn:delta-H-unboosted-Jr} \end{equation} Now, because the boundary is asymptotically AdS and higher-derivative terms considered here do not change the scaling behaviour near the boundary, we can use the near-AdS solution for $h(r)$ \cite{Azeyanagi:2013xea}: \begin{align} h(r) = \frac{\mathcal{H}}{r^4} + \mathcal{O}\left(r^{-5}\right) . \label{eqn:near-boundary-h} \end{align} Substituting the expansion for $h(r)$ into \eqref{eqn:delta-H-unboosted-Jr}, it immediately follows that the third term in the condition \eqref{eqn:condition-for-universality} vanishes as well when it is evaluated at the boundary (note again that $H^{rt}_5 = 0$): \begin{align}\label{ProofIng2} \mathcal{J}^\mu_{5,r} (\infty) = 0. \end{align} Together, Eqs. \eqref{ProofIng1} and \eqref{ProofIng2} imply the validity of the condition stated in Eq. \eqref{eqn:condition-for-universality}, which completes our proof. The analysis of the vector current $\mathcal{J}^\mu$ and a proof of a condition analogous to \eqref{eqn:condition-for-universality} follow through along exactly the same lines. This implies that all four anomalous conductivities take the universal form of \eqref{eqn:universal-anomalous-conductivity-electric} for all holographic theories specified in \eqref{FullAction} so long as the (effective) theory is regular at the non-extremal horizon and the bulk is asymptotically anti-de Sitter. 
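The boundary statement \eqref{ProofIng2} can likewise be illustrated with a quick symbolic check. With pure-AdS asymptotics, $f=g=1$, the radial prefactors of $\mathcal{I}_{G,n}$ read off from the table above scale as $r^{\,n-2}$, while $\sqrt{-G}\sim r^3$ and $h = \mathcal{H}/r^4$; every term in \eqref{eqn:delta-H-unboosted-Jr} then decays as $r^{-3}$. The following is our own illustrative sketch, not a substitute for the general argument:

```python
import sympy as sp

r, H = sp.symbols('r H', positive=True)
h = H / r**4                 # near-boundary fall-off of the perturbation
sqrt_minus_G = r**3          # sqrt(-G) with pure-AdS asymptotics, f = g = 1

for n in range(4):
    prefactor = r**(n - 2)   # radial scaling of I_{G,n} at f = g = 1
    term = sqrt_minus_G * prefactor * sp.diff(h, r, n)
    assert sp.limit(term, r, sp.oo) == 0   # each term decays as 1/r^3
```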
\section{Examples and counter-examples} \label{section:examples} In this section, we turn our attention to explicit examples of theories that obey and violate the conditions used in our proof in Section \ref{section:anomaly-induced-electric-current} and thus result in universal and renormalised anomalous conductivities, respectively. We will first demonstrate their universality in two- and four-derivative theories with a non-extremal horizon and then move on to describing two holographic models which violate the assumptions in the proof of Eq. \eqref{eqn:condition-for-universality}. More precisely, in Section \ref{section:Einstein-Hilbert-Maxwell-gravity}, we compute the conductivities in the two-derivative Einstein-Maxwell-dilaton theory. In Section \ref{section:4-derivatives}, we then show explicitly how our proof works in the case of the most general four-derivative action with Maxwell fields and dynamical gravity. In both of those cases, the conductivities are universal and the current at the horizon only depends on the metric fluctuation, as established by our effective theory method in \eqref{eqn:non-CS-CJ5-general}. In Section \ref{section:counter-examples}, we comment on the validity of our proof in gravity duals without a horizon. We use the examples of the confining soft/hard-wall models and charged dilatonic black holes at zero temperature. The membrane paradigm computation goes through as before in the case of the confining geometry. However, the conductivities no longer have any temperature dependence, which would require us to augment the replacement rule discussed in Appendix \ref{app:anomP}. As for the latter example, the family of theories considered suffers from naked singularities in the bulk. Lastly, in Section \ref{section:massive-vector-fields}, we point out how the bulk terms corresponding to field theories with a gauge-global anomaly violate the assumptions in our proof.
This is consistent with the known fact that anomalous conductivities in systems with mixed anomalies receive corrections along the renormalisation group flow. We will not review the details behind the holographic constructions of such systems but rather focus on the reasons why these models may violate the universality from the point of view of Section \ref{section:any-higher-derivative}. \subsection{Einstein-Maxwell-dilaton theory at finite temperature} \label{section:Einstein-Hilbert-Maxwell-gravity} For our first example, we consider the two-derivative Einstein-Maxwell-dilaton theory with a non-trivial dilaton profile: \begin{align} &\mathcal{L}_G = R - 2 \Lambda, && \mathcal{L}_\phi = -(\partial \phi)^2- V(\phi),\\ &\mathcal{L}_A = -\frac{1}{4}Z_A(\phi) F_{A,ab}F_{A}^{\;\;ab}, && \mathcal{L}_V = -\frac{1}{4} Z_V(\phi)F_{V,ab} F_{V}^{\;\;ab}, \end{align} having used the notation of the action in Eq. \eqref{eqn:general-action}. This is an extension of the case studied in \cite{Gursoy:2014boa}, which includes the gravitational anomaly and the anomalous conductivities that follow from a response to a small vortex. The theory has two charges that are conserved along the radial direction at zeroth order in the boundary-derivative expansion. The expressions follow from the $a = \mu$ components of the Maxwell equations: \begin{align} Q_5 &= r^3\sqrt{\frac{g}{f} } Z_A \partial_r A_t ,\\ Q &= r^3\sqrt{\frac{g}{f}} Z_V \partial_r V_t . \end{align} At first order in derivatives, the two conserved currents $\mathcal{J}^\mu_5$ and $\mathcal{J}^\mu$ are given by \begin{equation} \begin{aligned} \mathcal{J}^\mu_5 &= \left[ Q_5 h + r^3\sqrt{f g } Z_A \partial_r a \right] \omega^\mu + \mathcal{J}^\mu_{5,CS} , \\ \mathcal{J}^\mu &= \left[ Q h + r^3 \sqrt{f g } Z_V \partial_r v\right] \omega^\mu + \mathcal{J}^\mu_{CS}.
\end{aligned} \label{eqn:membrane-current-J5-J-Einstein} \end{equation} Thus, we can immediately read off the membrane currents: \begin{align} \mathcal{J}^\mu_{5,mb} &= r^3 \sqrt{f g} \, Z_A \,\partial_r a \,\omega^\mu,\\ \mathcal{J}^\mu_{mb} &= r^3 \sqrt{f g} \, Z_V \,\partial_r v \,\omega^\mu . \end{align} Moreover, the regularity of the black hole horizon implies that the metric fluctuation has to vanish at the horizon \cite{Gursoy:2014boa}, i.e. $h(r_h)=0$. At the horizon, the two currents $\mathcal{J}^\mu_5(r_h)$ and $\mathcal{J}^\mu(r_h)$ are therefore completely determined by the anomalous terms $\mathcal{J}^\mu_{5,CS}(r_h)$ and $\mathcal{J}^\mu_{CS}(r_h)$. Next, we investigate the behaviour of $\mathcal{J}^\mu_5$ and $\mathcal{J}^\mu$ at the boundary. Substituting the near-boundary solution \eqref{eqn:near-boundary-h} into \eqref{eqn:membrane-current-J5-J-Einstein}, one can see that $Q_5 h$ and $Q h$ are sub-leading, which implies that $\mathcal{J}^\mu_5$ and $\mathcal{J}^\mu$ at $r \to \infty$ become determined by the membrane currents evaluated at the boundary. \subsection{Four-derivative Einstein-Maxwell theory} \label{section:4-derivatives} In this section, we consider the most general four-derivative theory of massless gravitons and gauge fields. The action $\mathcal{L}_A$ can be written as (see \cite{Gross:1986mw,Anninos:2008sj,Kats:2006xp,Myers:2009ij,SGNonPert}): \begin{equation}\label{EMaction4d} \begin{aligned} \mathcal{L}_A =& -\frac{1}{4} F_{ab}F^{ab} + \alpha_4 R F_{ab} F^{ab} + \alpha_5 R^{ab} F_{ac} F_b^{\;\;c} + \alpha_6 R^{abcd} F_{ab}F_{cd} +\alpha_7 (F_{ab}F^{ab})^2\\ & + \alpha_8 \nabla_a F_{bc} \nabla^a F^{bc} + \alpha_9\nabla_a F_{bc} \nabla^b F^{ac}+\alpha_{10} \nabla_a F^{ab} \nabla^c F_{cb}+\alpha_{11} F^{ab} F_{bc} F^{cd}F_{da}, \end{aligned} \end{equation} and similarly for $\mathcal{L}_V$. Note that in Eq. \eqref{EMaction4d}, all subscripts $A$ denoting that $F_{ab}$ is the axial field strength have been suppressed.
The conserved current two-form, $H^{ab}_5$, in this theory is \begin{equation} \begin{aligned} H^{ab}_5 =& -F^{ab} + 4 \alpha_4 RF^{ab} + 2\alpha_5 (R^{ac}F_c^{\;\;b} -R^{bc}F_c^{\;\;a} ) + 4\alpha_6 R^{cdab}F_{cd} \\ & + 8 \alpha_7 F_{cd}F^{cd} F^{ab} - 4\alpha_8 \Box F^{ab} - 2\alpha_9 \nabla_c (\nabla^a F^{cb} - \nabla^b F^{ca}) \\ &+ 2\alpha_{10} (\nabla^b\nabla_c F^{ca} - \nabla^a\nabla_c F^{cb}) + 8\alpha_{11} F^{bc}F_{cd}F^{da}. \end{aligned} \label{eqn:conserved-current-2-form-4-deriv-Maxwell} \end{equation} The current $\mathcal{J}^\mu_5$ is then \begin{equation} \mathcal{J}^\mu_5= \mathcal{J}^\mu_{5,\text{Maxwell}}+\sum_{n=4}^{11} \alpha_n \mathcal{J}^\mu_{5,(n)} + \mathcal{J}^\mu_{5,CS}, \end{equation} where $\mathcal{J}^\mu_{5,\text{Maxwell}}$ is the axial current that follows from the two-derivative Maxwell action analysed in Section \ref{section:Einstein-Hilbert-Maxwell-gravity}. The remaining terms, $\mathcal{J}^\mu_{5,(n)}$, all have the schematic form \begin{equation} \mathcal{J}^\mu_{5,(n)} = \left[ C_{n,1} h + C_{n,2} \partial_r h + C_{n,3} \partial^2_r h + D_{n,1} \partial_r a + D_{n,2} \partial^2_r a + D_{n,3} \partial^3_r a \right] \omega^\mu , \end{equation} where the coefficients $C_{n,i}$ and $D_{n,i}$ depend on the background and the parameters of the action. The full expressions for these coefficients are lengthy and will not be presented here. Near the non-extremal horizon (assumed to exist), the metric must behave as in Eqs. \eqref{nearH1} and \eqref{nearH2}. We find that, when evaluated at the horizon, all coefficients except $C_{n,1}$ vanish. This result therefore precisely agrees with the structure of $\mathcal{J}^\mu_5$ predicted in \eqref{eqn:non-CS-CJ5-general}, which followed from our general treatment of $H^{r\mu}_5$ in Section \ref{section:any-higher-derivative}.
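Several of the coefficients in \eqref{eqn:conserved-current-2-form-4-deriv-Maxwell} follow from the simple variational rule $H^{ab} = 2\,\partial\mathcal{L}_A/\partial F_{ab}$ applied to the terms that depend on $F_{ab}$ only through the scalar $x \equiv F_{ab}F^{ab}$, for which $H^{ab} = 4\,\mathcal{L}_A'(x)\,F^{ab}$. The \texttt{sympy} cross-check below is our own illustration; the terms with covariant derivatives or Riemann contractions require the full variation and are not covered by it:

```python
import sympy as sp

x, R, a4, a7 = sp.symbols('x R alpha_4 alpha_7')   # x = F_ab F^ab

# terms of L_A that depend on F only through x = F_ab F^ab
L = -x/4 + a4*R*x + a7*x**2

# for L = L(x): H^{ab} = 2 dL/dF_ab = 4 L'(x) F^{ab}
H_coefficient = sp.expand(4 * sp.diff(L, x))       # coefficient of F^{ab}

# matches  -F^{ab} + 4 a4 R F^{ab} + 8 a7 (F_cd F^cd) F^{ab}
assert sp.simplify(H_coefficient - (-1 + 4*a4*R + 8*a7*x)) == 0
```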
At the horizon, the full set of $\mathcal{J}^\mu_{5,(n)}$ is given by \begin{equation} \begin{aligned} \mathcal{J}^\mu_{5,(4)} (r_h) &= -\frac{2 r_h^2\sqrt{g_1}A_t'}{f_1^{3/2}} \left( 20 f_1g_1 + 3f_2g_1 r_h + f_1g_2 r_h \right) h(r_h) \,\omega^\mu, \\ \mathcal{J}^\mu_{5,(5)} (r_h) &= - \frac{r_h \sqrt{g_1}A_t'}{f_1^{3/2}} \left( 14 r_hf_1g_1 + 2 r_h^2 g_1 f_2 +r_h^2 f_1g_2 \right) h(r_h)\,\omega^\mu,\\ \mathcal{J}^\mu_{5,(6)} (r_h) &= - \frac{2 r_h^2 \sqrt{g_1} A_t'}{f_1^{3/2}} \left( 8f_1g_1 + 3r_hg_1f_2 +r_h f_1g_2 \right) h(r_h)\, \omega^\mu,\\ \mathcal{J}^\mu_{5,(7)} (r_h) &= -\frac{16 r_hg_1^{3/2}(A_t')^3}{f_1^{3/2}}h(r_h) \,\omega^\mu,\\ \mathcal{J}^\mu_{5,(8)}(r_h) &= - \frac{2 8r_h^3\sqrt{g_1}}{f_1^{3/2}} \left( -g_1f_2 + f_1g_2+2f_1g_1A_t''/A_t' \right) h(r_h)\, \omega^\mu,\\ \mathcal{J}^\mu_{5,(9)} (r_h) &= \frac{1}{2} \mathcal{J}^\mu_{5,(8)}(r_h) ,\\ \mathcal{J}^\mu_{5,(10)} (r_h) &= \frac{ r_h^2\sqrt{g_1}}{f_1^{3/2}}\left( 6f_1g_1 -r_h g_1f_2 + r_h f_1g_2 + 2r_h f_1g_1A_t''/A_t' \right) h(r_h) \, \omega^\mu,\\ \mathcal{J}^\mu_{5,(11)} (r_h) &= - \frac{1}{2}\mathcal{J}^\mu_{5,(7)}(r_h). \end{aligned} \end{equation} Finally, imposing the horizon Ricci scalar regularity condition (see the discussion after Eq. \eqref{eqn:non-CS-CJ5-general}), $h(r_h)=0$, we find that all $\mathcal{J}^\mu_{5,(n)} (r_h) = 0$. At the AdS boundary ($r\to\infty$), we further find that all coefficients $C_{n,i} \sim r^{-m}$ with $m>0$. With this explicit verification, our results imply that the most general gauge- and diffeomorphism-invariant four-derivative theory \eqref{EMaction4d} satisfies the condition \eqref{eqn:condition-for-universality} and that the anomalous conductivities in its dual all have the universal form of Eq. \eqref{eqn:universal-anomalous-conductivity-electric}.
\subsection{Theories without horizons and theories with scaling geometries at zero temperature} \label{section:counter-examples} In this section, we consider two classes of backgrounds, each one a possible solution of the Einstein-Maxwell-dilaton theory of Section \ref{section:Einstein-Hilbert-Maxwell-gravity}. The first one belongs to the family of soft/hard wall models that are dual to a field theory with a mass gap \cite{Csaki:1998qr,Karch:2006pv,Gursoy:2007er,Batell:2008zm}. The second example is the scaling geometry that can arise as a solution of the Einstein-Maxwell-dilaton theory at zero temperature (see e.g. \cite{Charmousis:2010zz}). What we show is that the criterion for the universality of anomalous conductivities, i.e. Eq. \eqref{eqn:condition-for-universality}, is still satisfied in the gapped system. However, the conductivities can no longer be computed by using the replacement rule in the form stated in Eq. \eqref{eqn:replacement-rule-conductivity}. For the scaling geometries, the universality may be violated due to the presence of naked singularities. A way to retain a holographic theory at zero temperature in which the condition \eqref{eqn:condition-for-universality} is satisfied is to put very strong constraints on the geometry that avoid the naked singularity. These constraints restrict the allowed range of values of the hyperscaling violation exponent, $\theta$, and the dynamical critical exponent, $z$. Let us start with an example of the soft/hard wall geometry at zero density. In an un-boosted frame, the metric for these models can be written as \begin{equation} ds^2 = e^{-(M/u)^\nu} \left( -u^2 d\bar{t}^2 + \frac{du^2}{u^2} + u^2 (d\bar x^2+d\bar y^2+d\bar z^2) \right), \end{equation} where the parameter $M$ sets the scale of the mass gap. The nature of the spectrum is also controlled by the parameter $\nu$: while the gapped spectrum is continuous above the gap when $\nu =1$, it is discrete when $\nu >1$.
The hard wall model in which the AdS radius is capped off at $u\ll M$ corresponds to the limiting value of $\nu \to \infty$ \cite{Gursoy:2007er,Batell:2008me}. One can bring the above metric to the form of \eqref{eqn:background-solution-unboosted} by redefining the radial coordinate as $r = e^{-\frac{1}{2}(M/u)^\nu} u$. In the deep IR region, $u\ll M$, the functions $f(r)$ and $g(r)$ can be written as \begin{align} f_{IR}(r) = 1,&& g_{IR}(r) = g\left(u\ll M\right)=\nu^2 \left(\frac{M}{u} \right)^\nu e^{M/u}. \label{eqn:wall-f-and-g} \end{align} Despite there being no horizon, the dual of the above geometry can still have non-zero temperature; it can be interpreted as a thermal state before undergoing a phase transition to the black hole phase at high temperature, analogously to the Hawking-Page transition \cite{Herzog:2006ra}. The two currents, $\mathcal{J}^\mu_5$ and $\mathcal{J}^\mu$, must now be evaluated at $r = 0$ and at the boundary $(r = \infty)$. Because the geometry is still asymptotically AdS, their near-boundary behaviour is the same as in all the cases studied before. The fact that $g(r)$ diverges exponentially in the IR appears problematic at first. However, the volume form, which is proportional to $\sqrt{-G}$, is exponentially suppressed. Evaluating $\mathcal{J}^i_5$ at $u=0$, one finds that $\mathcal{J}^{\mu}_{5,mb}(0) + \mathcal{J}^{\mu}_{5,r}(0) = 0$, as in Section \ref{section:any-higher-derivative}. Thus, the universality condition \eqref{eqn:condition-for-universality} is still satisfied. On the other hand, the Chern-Simons current $\mathcal{J}^\mu_{5,CS}$ no longer behaves the same way.
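The competition between the exponential suppression of the volume form and the divergence of $g$ can be made explicit. Using the quoted IR form of $g$ and the radial redefinition $r = e^{-\frac{1}{2}(M/u)^\nu}u$, for the concrete choice $\nu = 2$, the membrane-current-like combination $r^3\sqrt{fg}$ indeed vanishes as $u\to 0$; the \texttt{sympy} check below is our own schematic illustration:

```python
import sympy as sp

u, M = sp.symbols('u M', positive=True)
nu = 2                                     # concrete choice with nu > 1

g_IR = nu**2 * (M/u)**nu * sp.exp(M/u)     # quoted IR form of g (f_IR = 1)
r = sp.exp(-sp.Rational(1, 2) * (M/u)**nu) * u   # radial redefinition r(u)

# combination entering the membrane current, r^3 sqrt(f g), with f = 1
measure = r**3 * sp.sqrt(g_IR)
assert sp.limit(measure, u, 0, '+') == 0   # suppression wins in the deep IR
```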
Although the profiles of the gauge fields $A_t$ and $V_t$ can be assumed to asymptote to constant values at $r = 0$, the derivative $f'$ can no longer be interpreted as setting the temperature of the dual theory (substituting \eqref{eqn:wall-f-and-g} into \eqref{eqn:membrane-and-CS-current}, we see that $\mathcal{J}^\mu_{5,CS}$ has no temperature dependence). Therefore, in the confining phase, the replacement rules discussed in Appendix \ref{app:anomP} are no longer applicable even if the condition \eqref{eqn:condition-for-universality} is satisfied. The above statements also apply to AdS soliton-like geometries. Next, we explore the scaling geometries at zero temperature. In the un-boosted frame, the metric can now be written as \begin{equation} ds^2 = -r^{n_0} d\bar t^2 + r^2\left(d\bar x^2+d\bar y^2+d\bar z^2 \right) + \frac{dr^2}{r^{n_1}}, \end{equation} where, in terms of $\theta$ and $z$, \begin{align} n_0 = 2 + \frac{6(z-1)}{3-\theta},\qquad n_1 = 2+ \frac{2\theta}{3-\theta}. \end{align} As mentioned in \cite{Charmousis:2010zz}, many of these geometries contain a naked singularity. As a result, the scalars $\{ \mathcal{S}_{A,n},\mathcal{S}_{G,n} \}$ used in Eq. \eqref{eqn:decomposed-CC} no longer have to be finite, so such systems can easily violate the universality condition \eqref{eqn:condition-for-universality}: in the presence of a naked singularity, the universality of the anomalous conductivities is no longer guaranteed. In this work, we do not study in detail what happens to anomalous conductivities in such cases and whether they nevertheless remain universal for some geometries. Are there special values of $z$ and $\theta$ for which it is easy to see that the condition \eqref{eqn:condition-for-universality} remains satisfied? In other words, what are the ranges of $\{ z,\theta\}$ for which the theory has no naked singularity?
This problem was addressed in \cite{Copsey:2012gw}, where it was found that geometries satisfying either one of the following two conditions, \begin{equation} n_0 = n_1 =2 ,\qquad n_0 = n_1 \ge 4, \label{eqn:condition-no-naked-singularity} \end{equation} have no naked singularities. The authors assumed that the matter content has to satisfy the null energy condition, which, for this geometry, is equivalent to imposing the following two inequalities: \begin{equation} n_0 \ge n_1, \qquad (n_0-2)(n_0+n_1+4) \ge 0. \label{eqn:NEC} \end{equation} The first solution in \eqref{eqn:condition-no-naked-singularity} is simply the empty AdS solution with $z=1$ and $\theta=0$. The second solution (or a family of solutions) is more involved and requires non-trivial matter to support such geometries. Of particular interest are charged dilatonic black holes with $z \to \infty$, $\theta \to -\infty$ and a fixed ratio $-\theta/z=\eta$, dual to strongly interacting theories at finite density (see e.g. \cite{Gubser:2009qt,Huijse:2011ef}). While such systems still satisfy the null energy condition, the geometries nevertheless exhibit a naked singularity at zero temperature. This means that, unless there is a way to resolve the singularity, the universal structure of the anomalous conductivities may (although not necessarily) be violated at zero temperature for all values of $\eta$. One way to resolve this issue, as mentioned in \cite{Gubser:2009qt} for $\eta=1$, is to lift the black hole solution to a ten- or eleven-dimensional solution of string or M-theory \cite{Cvetic:1999xp}. To study such solutions, one also needs to find the ten- and eleven-dimensional analogues of the Chern-Simons terms ($\mathcal{L}_{CS}$ in \eqref{eqn:LA-LV-LCS}). In the case of a supergravity setup, this was studied in \cite{Klebanov:2002gr} and many subsequent works.
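The interplay between the regularity conditions \eqref{eqn:condition-no-naked-singularity} and the null energy condition \eqref{eqn:NEC} is easy to explore numerically. The sketch below (our own illustration) encodes $n_0(z,\theta)$ and $n_1(z,\theta)$ and confirms that pure AdS$_5$ ($z=1$, $\theta=0$) realises the first regular case while also satisfying \eqref{eqn:NEC}:

```python
# exponents of the scaling metric in terms of z and theta (theta != 3)
def n0(z, theta):
    return 2 + 6*(z - 1)/(3 - theta)

def n1(z, theta):
    return 2 + 2*theta/(3 - theta)

def null_energy_ok(z, theta):
    a, b = n0(z, theta), n1(z, theta)
    return a >= b and (a - 2)*(a + b + 4) >= 0

def regular(z, theta):
    # no naked singularity: n0 = n1 = 2, or n0 = n1 >= 4
    a, b = n0(z, theta), n1(z, theta)
    return abs(a - b) < 1e-12 and (abs(a - 2) < 1e-12 or a >= 4)

# pure AdS5: z = 1, theta = 0
assert (n0(1, 0), n1(1, 0)) == (2, 2)
assert null_energy_ok(1, 0) and regular(1, 0)

# a Lifshitz-like example, z = 2, theta = 0: satisfies the NEC
# but is not of the regular form (n0 = 4, n1 = 2)
assert null_energy_ok(2, 0) and not regular(2, 0)
```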
An explicit computation of the chiral magnetic conductivity, $\sigma_{JB}$, in a top-down setup of probe flavour branes can be found in \cite{Hoyos:2011us}. More generally, it is plausible that the problems of IR singularities can be avoided when they are of the ``good type'' \cite{Gubser:2000nd}.\footnote{We thank Umut G\"{u}rsoy for discussions on this point.} In such scenarios, it may be the case that, so long as the naked singularity can be cloaked by an infinitesimal horizon, universality extends to very small temperatures. What is clear is that at strictly zero temperature, the regularity of (small) metric perturbations is no longer well-defined. We defer a more detailed study of these issues and of top-down constructions to future work. \subsection{Bulk theories with massive vector fields} \label{section:massive-vector-fields} In this section, we comment on the universality of anomalous conductivities in field theories with mixed, gauge-global anomalies. Such theories exhibit the following anomalous Ward identity: \begin{equation} \partial_\mu \left\langle J^\mu_5\right\rangle = \beta \epsilon^{\mu\nu\rho\sigma} \mathcal{F}_{\mu\nu} \mathcal{F}_{\rho\sigma} + \left(\text{global anomaly terms}\right) , \label{eqn:anomalous-Ward-identitiy-gluon} \end{equation} where $\mathcal{F}_{\mu\nu}$ is the field strength of the gluon fields (e.g. in QCD). The global anomaly terms were stated in Eq. \eqref{eqn:non-conserved-current}. As shown by perturbative quantum field theory calculations \cite{Neiman:2010zi,Golkar:2012kb,Hou:2012xg,Jensen:2013vta}, the anomalous conductivities in such theories are renormalised, i.e. they receive quantum corrections. Holographic models dual to theories with an anomalous Ward identity of the form of Eq. \eqref{eqn:anomalous-Ward-identitiy-gluon} were proposed and studied in \cite{Klebanov:2002gr,Casero:2007ae,Jimenez-Alba:2014iia,Gursoy:2014ela,Jimenez-Alba:2015awa}.
In this work, we focus on the bottom-up construction of \cite{Jimenez-Alba:2014iia}, in which the following terms are added to the bulk action \eqref{eqn:general-action}: \begin{equation} \Delta S = \int d^5x \sqrt{-G} \left( - \frac{m^2}{2} (A_a - \partial_a \theta)(A^a - \partial^a \theta) -\frac{\kappa}{3} \epsilon^{abcde} (\partial_a \theta) F_{bc}F_{de}\right) . \end{equation} We have set the vector and the gravitational Chern-Simons terms to zero, i.e. $\gamma = \lambda = 0$ (see Eq. \eqref{eqn:non-conserved-current}). The scalar field $\theta$ is the St\"uckelberg axion. A holographic theory with $\Delta S$ in the action can clearly evade the arguments of the proof of universality from Section \ref{section:anomaly-induced-electric-current}. The reason is that the equation of motion for a massive vector field cannot be written in the form of Eq. \eqref{eqn:Maxwell-eoms}. The right-hand side of \eqref{eqn:Maxwell-eoms} now contains terms which explicitly depend on $A_a$, and one can no longer reduce the equations to a total-derivative form, $\partial_r \mathcal{J}^\mu = 0$. Hence, in models with massive vector fields, dual to field theories with mixed, gauge-global anomalies, the anomalous conductivities can be renormalised. This is consistent with the field theory calculations mentioned above. More precisely, from the point of view of field theory, the operators associated with anomalous transport are renormalised along the renormalisation group flow. In gravity, they depend on the entire bulk geometry, and thus the condition of horizon regularity is not sufficient to ensure universality. In relation to our discussion of universality in field theory (see the Introduction, Section \ref{sec:Intro}), it would be interesting to understand what precisely happens to the arguments of the regularity of one-point functions on a cone in such cases.
\section{Discussion} \label{section:discussion} In this work, we studied the coupling constant dependence of the universality of the chiral conductivities associated with the anomalous axial and vector currents in holographic models with arbitrary higher-derivative actions of the metric, gauge fields and scalars. We showed that so long as the action (excluding the Chern-Simons terms) is gauge- and diffeomorphism-invariant, the membrane paradigm construction for the chiral conductivities remains valid, resulting in universal chiral conductivities (see Eq. \eqref{eqn:universal-anomalous-conductivity-electric}). The proof assumed the existence of a regular, non-extremal black brane in an asymptotically AdS geometry. This result is valid to infinite order in the expansion of coupling constant corrections around the infinite-coupling limit. Hence, it is complementary to perturbative field theory proofs (expanded around zero coupling) of the non-renormalisation of chiral conductivities in systems with global anomalies and, therefore, with anomalous Ward identities of the form of Eq. \eqref{eqn:non-conserved-current}. Furthermore, we explored cases which may violate universality, in particular, theories with naked singularities and theories with massive vector fields, which explicitly violate Eq. \eqref{eqn:non-conserved-current} through mixed, gauge-global anomalies. This work provides a consistency test of holography in its ability to reproduce the expected non-renormalisation of global Ward identities at the level of (non-zero temperature and density) transport in very general bulk constructions that include arbitrary higher-derivative actions. We believe that the methods presented in this work can also be of wider use for other holographic statements of universality that employ the membrane paradigm.
An important conceptual question that remains is the precise relation between the regularity condition of our constructions at the horizon and properties of their dual field theories. It is tempting to speculate that the regularity of the background geometry is related to the regularity of one-point functions on a cone that fix $\tilde c$ and ensure universality of anomalous conductivities in field theory (see discussion after Eq. \eqref{s2}).\footnote{We thank the anonymous JHEP referee for a discussion regarding this point.} We end this paper by listing some problems that are left to future works. Most importantly, there exists another anomalous conductivity in the stress-energy tensor, which can be sourced by a small vortex, $\delta T^{\mu\nu} = \sigma^\epsilon u^{(\mu} \omega^{\nu)}$. The analysis of this conductivity was not performed in this work. In the fluid-gravity framework, $\sigma^\epsilon$ was studied in the Einstein-Maxwell theory by \cite{Azeyanagi:2013xea}. Forming a conserved bulk current for computing components of the stress-energy tensor tends to be significantly more complicated than for those of a boundary current. However, it may be possible to achieve this by using the Hamiltonian methods recently employed for the calculations of the thermo-electric DC conductivities \cite{Donos:2014cya,Donos:2015gia,Banks:2015wha} in two-derivative theories, which should be extended to computations of anomalous transport in higher-derivative theories. One may also wonder what happens to anomalous transport in inhomogeneous and anisotropic systems. In standard non-anomalous transport, it is known that universal relations can be violated, e.g. in $\eta / s$ \cite{Mamo:2012sy,Jain:2014vka,Jain:2015txa,Hartnoll:2016tri,Alberte:2016xja,Burikham:2016roo}. While analysing such systems is in general significantly more difficult, the existence of the membrane paradigm, as e.g. 
in the case of the DC thermo-electric conductivities \cite{Donos:2015gia,Banks:2015wha,Donos:2015bxe}, may still enable one to prove general statements about the behaviour of conductivities in disordered systems \cite{Grozdanov:2015qia,Grozdanov:2015djs}. These methods remain to be explored in the context of anomalous transport. In even-dimensional theories, anomalous conductivities are directly related to the parity-odd hydrodynamic constitutive relation of \cite{Son:2009tf,Landsteiner:2011iq,Jensen:2012jy}. These parity-odd terms are related to global anomalies. In odd dimensions, one can still construct hydrodynamics with parity-odd terms, as e.g. in \cite{Jensen:2011xb}. A well-known parity-odd transport coefficient is the Hall viscosity \cite{Avron:1995fg,Avron:1997}. This quantity is related to topological states of matter, such as fractional quantum Hall systems (see e.g. \cite{Hoyos:2014pba} and references therein). A holographic theory with non-zero Hall viscosity can be obtained by adding a topological term similar to the dimensionally-reduced gravitational Chern-Simons term \cite{Saremi:2011ab}. Recently, in \cite{Haehl:2015pja}, the constitutive relation term associated with the Hall viscosity was generalised to a class of hydrodynamic terms that resemble the Berry curvature. Despite these similarities, there is no known non-renormalisation theorem for parity-odd transport coefficients in odd dimensions. Lastly, we point out that many recent works have found novel structures in entanglement entropy of theories with anomalies \cite{Castro:2014tta,Iqbal:2015vka,Nishioka:2015uka,Azeyanagi:2015uoa,Belin:2015jpa}. As a result of non-renormalisation, one may expect there to exist strong constraints on the structure of extremal bulk surfaces associated with entanglement entropy. 
It would be interesting to better understand the connection between geometric constraints on holographic entanglement entropy and non-renormalisation theorems for anomalies. \acknowledgments{The authors would like to thank Richard Davison, Misha Goykhman, Nabil Iqbal, Aron Jansen, Janos Pol\'{o}nyi, Petter S\"aterskog, Koenraad Schalm and Jan Zaanen for stimulating discussions. We are particularly grateful to Umut G\"{u}rsoy and Javier Tarr\'io for their comments on the draft of the manuscript. S. G. is supported in part by a VICI grant of the Netherlands Organisation for Scientific Research (NWO), and by the Netherlands Organisation for Scientific Research/Ministry of Science and Education (NWO/OCW). The work of N. P. is supported by the DPST scholarship from the Thai government and by Leiden University. }
Q: How to write prepare and execute statements in OOP PDO? I have a little problem here. I am new to OOP, so the question might sound stupid, but I couldn't find any helpful information. I am trying to login to the database and put the user's entered values inside of it and I want to ask you, in which place should I write PDO prepare and execute statements? I know they're needed, but I have no idea how to write it correctly... Besides that, I also got an error "Object of class PDO could not be converted to string on line 24". Thank you for any help, here's my code: <?php class Connection { public $connection; public $dbHost = 'localhost'; public $dbName = 'employees'; public $charset = 'charset=utf8'; public $dbUser = 'root'; public $dbPassword = ''; public function __construct() { try { $this->connection = new PDO ("mysql:host=$this->dbHost;$this->dbName;$this->dbUser; $this->dbPassword"); $this->connection->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); } catch (PDOException $e) { echo "There is something wrong with the database".$e->getMessage(); } } function insertUserValues($tableName, $data) { $query = "INSERT INTO ".$tableName."("; $query .= implode (",",array_keys($data)).') VALUES ('; $query .= "'" . implode ("','",array_values($data))."')"; } } $users = new Connection(); ?> A: I'm not really good with explanations bru, but I just see that there's no answer after a long time. I have created a basic class for you to insert values using PDO, I hope it will point you in the right direction, I will also share some useful links for you. First the connect. I can see you have done the connection already in your class, but below is the proper PDO connection setup. 
$host = '127.0.0.1'; $db = 'YourDatabase'; $user = 'YourDBUser'; $pass = 'YourDBPass'; $charset = 'utf8'; $dsn = "mysql:host=$host;dbname=$db;charset=$charset"; $opt = [ PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC, PDO::ATTR_EMULATE_PREPARES => false, ]; $dbh = new PDO($dsn, $user, $pass, $opt); that is how you set up a proper PDO connection. dsn stands for data source name. Reference for the above: https://phpdelusions.net/pdo#dsn this guy explains it better, everything you need to know. Now how do you put that connection all together with your class? Well, I will create a file called pdoClass.php and work from that class. <?php class Connection { private $host = "127.0.0.1"; private $dbName = "YourDB"; private $user = "YourUser"; private $pass = "YourPass"; private $charset = 'utf8'; private $dbh; private $error; private $stmt; //connection public function __construct() { $dsn = "mysql:host=" . $this->host . ";dbname=" . $this->dbName . ";charset=" . 
$this->charset; $options = array( PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC, PDO::ATTR_EMULATE_PREPARES => false ); try { // setup connection $this->dbh = new PDO($dsn, $this->user, $this->pass, $options); } //catch any errors catch (PDOException $e) { $this->error = $e->getMessage(); } } //prepare statement public function insertUserValues($query) { $this->stmt = $this->dbh->prepare($query); } //bind values public function bind($param, $value, $type = null) { if (is_null($type)) { switch (true) { case is_int($value): $type = PDO::PARAM_INT; break; case is_bool($value): $type = PDO::PARAM_BOOL; break; case is_null($value): $type = PDO::PARAM_NULL; break; default: $type = PDO::PARAM_STR; } } //actual value binding $this->stmt->bindValue($param, $value, $type); } //execute statement public function run() { return $this->stmt->execute(); } } ?> basically that's all you need to set up the database and the function to insert in your db, I have tried to comment some sections. Now to use this class, create index.php or whatever you like, then include the class <?php include 'pdoClass.php'; $users = new Connection(); $users->insertUserValues('INSERT INTO test (name, age, description) VALUES(?,?,?)'); $users->bind(1, 'User'); //bind each value $users->bind(2, 391); // bind $users->bind(3, 'This is a value'); if($users->run()){ echo "record inserted"; } ?> Done, if you have any questions or like me to explain anything, feel free to comment below, I will try my best to assist u. Edit: if you need to fetch the results, you can also make a new function in the class. Single row: public function SingleRow(){ $this->run(); return $this->stmt->fetch(); } see we use fetch(); to only fetch one row. 
Most people, when they fetch results, will fetch them like this: fetch(PDO::FETCH_ASSOC). But because we did a proper connection and defined our default fetch mode in the connection, we don't need all that; we can just use fetch();. To display those results on your index.php file, this is how you will do it: $users->insertUserValues("SELECT name, age, description FROM test WHERE name = :name"); $users->bind(':name','joe'); $row = $users->SingleRow(); echo '<pre>'; print_r($row); echo '</pre>'; This will display joe's result as an array. To get all the results from our db, we do another function to display all results. public function All(){ $this->run(); return $this->stmt->fetchall(); } You see the difference: now we use fetchall() because we want all the results. $users->insertUserValues("SELECT * FROM test"); $row = $users->All(); echo '<pre>'; print_r($row); echo '</pre>';
package gov.sandia.cognition.learning.function.categorization; import gov.sandia.cognition.annotation.PublicationReference; import gov.sandia.cognition.annotation.PublicationType; import gov.sandia.cognition.learning.algorithm.BatchLearner; import gov.sandia.cognition.learning.algorithm.SupervisedBatchLearner; import gov.sandia.cognition.learning.data.DefaultWeightedValueDiscriminant; import gov.sandia.cognition.learning.data.InputOutputPair; import gov.sandia.cognition.math.Ring; import gov.sandia.cognition.statistics.AbstractDistribution; import gov.sandia.cognition.statistics.ComputableDistribution; import gov.sandia.cognition.statistics.DataDistribution; import gov.sandia.cognition.statistics.ProbabilityFunction; import gov.sandia.cognition.statistics.distribution.DefaultDataDistribution; import gov.sandia.cognition.util.AbstractCloneableSerializable; import gov.sandia.cognition.util.DefaultWeightedValue; import gov.sandia.cognition.util.ObjectUtil; import gov.sandia.cognition.util.WeightedValue; import java.util.ArrayList; import java.util.Collection; import java.util.HashMap; import java.util.LinkedList; import java.util.Map; import java.util.Random; import java.util.Set; /** * Categorizer that returns the category with the highest posterior likelihood * for a given observation. This is known as a MAP categorizer, where * the posterior is proportionate to the category's conditional likelihood * for a given observation times the prior probability of the category. * @param <ObservationType> Type of observations * @param <CategoryType> Type of categories * @author Kevin R. 
Dixon * @since 3.0 */ @PublicationReference( author="Wikipedia", title="Maximum a posteriori estimation", type=PublicationType.WebPage, year=2010, url="http://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation" ) public class MaximumAPosterioriCategorizer<ObservationType,CategoryType> extends AbstractDistribution<ObservationType> implements DiscriminantCategorizer<ObservationType,CategoryType,Double> { /** * PMF of the various categories */ DataDistribution.PMF<CategoryType> categoryPriors; /** * Map that contains the probability functions for the observations * for the given categories. */ Map<CategoryType,ProbabilityFunction<ObservationType>> categoryConditionals; /** * Creates a new instance of MaximumAPosterioriCategorizer */ public MaximumAPosterioriCategorizer() { this.categoryPriors = new DefaultDataDistribution.PMF<CategoryType>( 2 ); this.categoryConditionals = new HashMap<CategoryType, ProbabilityFunction<ObservationType>>( 2 ); } @Override @SuppressWarnings("unchecked") public MaximumAPosterioriCategorizer<ObservationType,CategoryType> clone() { return (MaximumAPosterioriCategorizer<ObservationType,CategoryType>) super.clone(); } /** * Adds the given category with the given mass (which is divided by the * masses of all categories to determine the prior probability weight) * and the distribution function * @param category * Category to add * @param mass * Mass of the category * @param conditional * Conditional probability function of observations for the category */ public void addCategory( final CategoryType category, final double mass, final ProbabilityFunction<ObservationType> conditional ) { this.categoryPriors.increment(category, mass); this.categoryConditionals.put( category, conditional ); } /** * Gets the prior probability weight and conditional distribution for * the given category. * @param category * Category to consider * @return * Prior probability weight and conditional distribution for * the given category. 
*/ public WeightedValue<ProbabilityFunction<ObservationType>> getCategory( final CategoryType category ) { ProbabilityFunction<ObservationType> conditional = this.categoryConditionals.get(category); double prior = this.categoryPriors.evaluate(category); return new DefaultWeightedValue<ProbabilityFunction<ObservationType>>( conditional, prior ); } @Override public Set<? extends CategoryType> getCategories() { return this.categoryConditionals.keySet(); } @Override public CategoryType evaluate( final ObservationType input) { return this.evaluateWithDiscriminant(input).getValue(); } @Override public DefaultWeightedValueDiscriminant<CategoryType> evaluateWithDiscriminant( final ObservationType input) { CategoryType maxCategory = null; double maxPosterior = Double.NEGATIVE_INFINITY; for( CategoryType category : this.getCategories() ) { double posterior = this.computePosterior(input, category); if( maxPosterior < posterior ) { maxPosterior = posterior; maxCategory = category; } } return DefaultWeightedValueDiscriminant.create(maxCategory, maxPosterior); } /** * Computes the posterior of the observation given the category. * Actually, this is the conjunctive likelihood since we are not normalizing * by the likelihood of the observation over all categories. Since we're * only interested in finding the MAP category, we're doing the standard * thing and not normalizing. * @param observation * Observation to consider * @param category * Category to consider * @return * Posterior likelihood of the observation given the category. 
*/ public double computePosterior( final ObservationType observation, final CategoryType category ) { ProbabilityFunction<ObservationType> categoryConditional = this.categoryConditionals.get(category); double posterior; if( categoryConditional != null ) { double prior = this.categoryPriors.evaluate(category); double conditional = categoryConditional.evaluate(observation); posterior = conditional*prior; } else { posterior = 0.0; } return posterior; } /** * Gets the mean category, if it is a number or ring. * * @return * The mean. */ @SuppressWarnings("unchecked") public ObservationType getMean() { ObservationType mean = null; for( CategoryType category : this.getCategories() ) { ObservationType categoryMean = /* fix: the original line called this.getMean() recursively, which never terminates; the per-category mean must come from that category's conditional distribution (assumed here to implement DistributionWithMean) */ ((gov.sandia.cognition.statistics.DistributionWithMean<ObservationType>) this.categoryConditionals.get(category)).getMean(); double prior = this.categoryPriors.evaluate(category); if( categoryMean instanceof Number ) { if( mean == null ) { mean = (ObservationType) new Double( 0.0 ); } double weightedCategoryMean = prior * ((Number) categoryMean).doubleValue(); mean = (ObservationType) new Double( ((Number) mean).doubleValue() + weightedCategoryMean ); } else if( categoryMean instanceof Ring<?> ) { Ring<?> weightedCategoryMean = ((Ring<?>) categoryMean).scale(prior); if( mean == null ) { mean = (ObservationType) weightedCategoryMean; } else { ((Ring) mean).plusEquals( weightedCategoryMean ); } } else { throw new UnsupportedOperationException( "Mean not supported for type " + categoryMean.getClass() ); } } return mean; } @Override public void sampleInto( final Random random, final int numSamples, final Collection<? super ObservationType> output) { ArrayList<? 
extends CategoryType> categories = this.categoryPriors.sample(random, numSamples); for( CategoryType category : categories ) { ProbabilityFunction<ObservationType> pdf = this.categoryConditionals.get(category); output.add( pdf.sample(random) ); } } /** * Learner for the MAP categorizer * @param <ObservationType> Type of observations * @param <CategoryType> Type of categories */ public static class Learner<ObservationType,CategoryType> extends AbstractCloneableSerializable implements SupervisedBatchLearner<ObservationType,CategoryType,MaximumAPosterioriCategorizer<ObservationType,CategoryType>> { /** * Learner that creates the conditional distributions for each * category. */ private BatchLearner<Collection<? extends ObservationType>, ? extends ComputableDistribution<ObservationType>> conditionalLearner; /** * Default constructor */ public Learner() { this( null ); } /** * Creates a new instance of Learner * @param conditionalLearner * Learner that creates the conditional distributions for each * category. */ public Learner( final BatchLearner<Collection<? extends ObservationType>, ? extends ComputableDistribution<ObservationType>> conditionalLearner) { this.conditionalLearner = conditionalLearner; } @Override public MaximumAPosterioriCategorizer.Learner<ObservationType,CategoryType> clone() { @SuppressWarnings("unchecked") Learner<ObservationType,CategoryType> clone = (Learner<ObservationType,CategoryType>) super.clone(); clone.setConditionalLearner( ObjectUtil.cloneSmart( this.getConditionalLearner() ) ); return clone; } @Override public MaximumAPosterioriCategorizer<ObservationType, CategoryType> learn( final Collection<? extends InputOutputPair<? extends ObservationType, CategoryType>> data) { DataDistribution.PMF<CategoryType> categoryPrior = new DefaultDataDistribution.PMF<CategoryType>(); Map<CategoryType,LinkedList<ObservationType>> categoryData = new HashMap<CategoryType, LinkedList<ObservationType>>(); for( InputOutputPair<? 
extends ObservationType,CategoryType> pair : data ) { categoryPrior.increment( pair.getOutput() ); LinkedList<ObservationType> categoryValues = categoryData.get( pair.getOutput() ); if( categoryValues == null ) { categoryValues = new LinkedList<ObservationType>(); categoryData.put( pair.getOutput(), categoryValues ); } categoryValues.add( pair.getInput() ); } MaximumAPosterioriCategorizer<ObservationType,CategoryType> categorizer = new MaximumAPosterioriCategorizer<ObservationType, CategoryType>(); for( CategoryType category : categoryPrior.getDomain() ) { LinkedList<ObservationType> categoryValues = categoryData.get(category); ComputableDistribution<ObservationType> distribution = this.conditionalLearner.learn(categoryValues); ProbabilityFunction<ObservationType> conditional = distribution.getProbabilityFunction(); categorizer.addCategory( category, categoryPrior.get(category), conditional ); } return categorizer; } /** * Getter for conditionalLearner * @return * Learner that creates the conditional distributions for each * category. */ public BatchLearner<Collection<? extends ObservationType>,? extends ComputableDistribution<ObservationType>> getConditionalLearner() { return this.conditionalLearner; } /** * Setter for conditionalLearner * @param conditionalLearner * Learner that creates the conditional distributions for each * category. */ public void setConditionalLearner( BatchLearner<Collection<? extends ObservationType>, ? extends ComputableDistribution<ObservationType>> conditionalLearner) { this.conditionalLearner = conditionalLearner; } } }
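Stripped of the generics and the Foundry interfaces, the MAP decision rule implemented by evaluateWithDiscriminant above is just an argmax of prior times conditional likelihood, with no normalisation by the evidence. A self-contained sketch (this is illustrative code, not part of the Foundry library; the unnormalised Gaussian likelihood is only an example):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.DoubleUnaryOperator;

public class Main {
    // MAP rule: pick the category maximizing prior * conditional likelihood,
    // without normalising by the evidence (same as computePosterior above).
    static String mapCategory(double x, Map<String, Double> priors,
                              Map<String, DoubleUnaryOperator> conditionals) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, Double> e : priors.entrySet()) {
            double score = conditionals.get(e.getKey()).applyAsDouble(x) * e.getValue();
            if (score > bestScore) {
                bestScore = score;
                best = e.getKey();
            }
        }
        return best;
    }

    // Unnormalised Gaussian likelihood, used as a toy conditional.
    static DoubleUnaryOperator gauss(double mu, double sigma) {
        return x -> Math.exp(-0.5 * Math.pow((x - mu) / sigma, 2));
    }

    public static void main(String[] args) {
        Map<String, Double> priors = new LinkedHashMap<>();
        priors.put("low", 0.5);
        priors.put("high", 0.5);
        Map<String, DoubleUnaryOperator> conditionals = new LinkedHashMap<>();
        conditionals.put("low", gauss(0.0, 1.0));
        conditionals.put("high", gauss(5.0, 1.0));
        System.out.println(mapCategory(1.0, priors, conditionals));  // prints: low
        System.out.println(mapCategory(4.0, priors, conditionals));  // prints: high
    }
}
```

An observation at 1.0 is far more likely under the category centred at 0 than the one centred at 5, so the argmax picks "low"; at 4.0 the situation reverses.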
Q: Failed to load resource: the server responded with a status of 500 () when call servlet using ajax I have a servlet named copyImage, and the servlet code looks like this. protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { // TODO Auto-generated method stub String docpath = null; String docname = null; File original=new File(docpath); File dest=new File("T:\\Temp\\"); try { FileUtils.copyFileToDirectory(original, dest); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } String newPath="T:\\Temp\\"+docname; request.getSession().setAttribute("newPath", newPath); doGet(request, response); } I want to call the servlet using $.ajax POST. I wrote this code to call the servlet using ajax. But it gives me this error Failed to load resource: the server responded with a status of 500 () in console log. In Eclipse I get these errors java.lang.NullPointerException at java.io.File.<init>(Unknown Source) at org.solr.copyImage.doPost(copyImage.java:44) at javax.servlet.http.HttpServlet.service(HttpServlet.java:661) at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:650) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:800) at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:806) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498) at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Unknown Source) My ajax code is something like this. <script> function getImages(name,path){ var documentname=name; var documentpath=path; $.ajax({ type:"POST", url:"copyImage", data:{ docname:documentname, docpath:documentpath, }, success:function(data){ document.getElementById("showImages").src=data; }, }); } </script> Is the problem sending data? How do I fix it? A: Your code does: protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { String docpath = null; String docname = null; File original=new File(docpath); According to the documentation for the File constructor that takes a String: Throws: NullPointerException - If the pathname argument is null docpath in your code is null when you create the new File.
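As an aside (this sketch is not part of the original answer): the failure mode is easy to reproduce outside the servlet container, and the practical fix is to actually populate docpath from the request, e.g. String docpath = request.getParameter("docpath");, and validate it before constructing the File. A minimal standalone illustration, with a hypothetical safeFile helper standing in for the validation doPost should perform:

```java
import java.io.File;

public class Main {
    // Hypothetical guard mirroring what doPost should do with the
    // "docpath" request parameter before calling new File(...).
    static File safeFile(String path) {
        if (path == null || path.isEmpty()) {
            throw new IllegalArgumentException("missing 'docpath' parameter");
        }
        return new File(path);
    }

    public static void main(String[] args) {
        // new File((String) null) throws NullPointerException -- the
        // exact exception in the stack trace at copyImage.java:44.
        boolean npe = false;
        try {
            new File((String) null);
        } catch (NullPointerException e) {
            npe = true;
        }

        // The guarded version fails fast with a clear message instead.
        boolean guarded = false;
        try {
            safeFile(null);
        } catch (IllegalArgumentException e) {
            guarded = true;
        }

        System.out.println(npe && guarded
                && safeFile("/tmp/x.png").getName().equals("x.png"));  // prints: true
    }
}
```

Failing fast with a descriptive message turns a bare 500 with a NullPointerException into an error that tells you which request parameter was missing.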
\section{Introduction} This paper gives a differential geometric account of Mukai duality on K3 surfaces, intended as a background reference paper for the author's companion paper \cite{myG2paper} dealing with Mukai duality of adiabatic coassociative K3 fibrations. Mukai duality in the algebro-geometric setting has a well established literature \cite{Huy1}\cite{FourierMukai}\cite{Mukai}\cite{Mukai1}\cite{Mukai2}\cite{Mukai3}. The basic picture is that moduli spaces of stable vector bundles of certain topological types (specified by a Mukai vector $v(E)$) on a K3 surface $X$ are again K3 surfaces $X^\vee$, whose periods and symplectic forms can be described in terms of data on $X$. Moreover, one can use the (quasi)-universal bundle to define the Fourier-Mukai transform (a.k.a. Nahm transform), which under appropriate cohomology vanishing conditions converts vector bundles on $X$ to vector bundles on $X^\vee$ and vice versa. On the differential geometric side, the papers of Bartocci et al \cite{Bartocci1}\cite{Bartocci2} deal with the same subject and share some of the key intermediate results with this paper, although their technical emphasis is on twistor methods and reduction to complex geometry, while this paper primarily uses spinor techniques and can be more readily adapted to the $G_2$ setting \cite{myG2paper}. Our setting is also more general as we do not assume the Mukai vector to have zero degree. A major influence to this paper is the work of Braam and van Baal \cite{BraamBaal} on the Nahm transform over a 4-torus. The outline of this paper is as follows (for more details, see introductions to each Chapter): Chapter \ref{ModuliofvectorbundlesonK3} begins with a brief review of the algebro-geometric theory, and recalls the celebrated construction of the hyperk\"ahler structure on $X^\vee$. We then show carefully how to put an optimal connection $\nabla^{univ}$ on the universal bundle $\mathcal{E}\to X\times X^\vee$. 
Chapter \ref{TheMukaidualofaK3surface} studies several aspects of the geometry of the Mukai dual K3 surface $X^\vee$: the hyperk\"ahler periods of $X^\vee$, the variation of Hermitian-Yang-Mills connections over $X^\vee$ parametrised by $X$, and the interpretation of the hyperk\"ahler structure on $X$ in terms of data on $X^\vee$. Chapter \ref{TheNahmtransformonK3surfaces} studies the Nahm transform using differential geometric techniques, and is the more original part of this paper. The main result says \begin{thm}(\cf Theorem \ref{NahmtransformperservesASD} and \ref{Fourierinversiontheorem}) Let $(\mathcal{F}, \alpha)$ be an irreducible HYM connection over a hyperk\"ahler K3 surface $X$, whose Mukai vector $v(\mathcal{F})$ shares the same slope as $v(E)$ but $v(\mathcal{F})\neq v(E)$. Then the Nahm transform is well defined and produces a HYM connection $(\hat{\mathcal{F}}, \hat{\alpha})$ over $X^\vee$. Under further nonsingularity assumptions the inverse Nahm transform $(\hat{\hat{\mathcal{F}}}, \hat{\hat{\alpha}})$ of $(\hat{\mathcal{F}}, \hat{\alpha})$ is well defined and is canonically isomorphic to $(\mathcal{F},\alpha)$. \end{thm} \begin{rmk} Here the slope condition ensures the Hom bundle carries ASD connections rather than just HYM, and $v(\mathcal{F})\neq v(E)$ means the Fourier-Mukai transform is a bundle, not a skyscraper sheaf. \end{rmk} \begin{rmk} Most results in the present paper will be applied to \cite{myG2paper}. For readers interested in \cite{myG2paper}, familiarity with Chapter \ref{ModuliofvectorbundlesonK3} and Section \ref{HyperkahlerperiodsontheMukaidualK3}, \ref{ASDconnectionsonMukaidual}, \ref{Nahmtransform} is essential, and knowledge of other parts is useful. \end{rmk} \begin{Acknowledgement} The author thanks his PhD supervisor Simon Donaldson and co-supervisor Mark Haskins for inspiration, and the Simons Center for hospitality. 
\end{Acknowledgement} \section{Moduli of vector bundles on K3 surfaces}\label{ModuliofvectorbundlesonK3} Moduli of vector bundles on K3 surfaces, and Fourier-Mukai transforms in particular, have been studied extensively in algebraic geometry in the language of coherent sheaves and derived categories. From the viewpoint of our applications, it seems preferable to interpret this theory in differential geometry in terms of bundles and ASD instantons; this has the advantage of not favoring any particular complex structure, and makes easier contact with metric geometry and $G_2$ instantons. The fundamental link between these two viewpoints is the Hitchin-Kobayashi correspondence. Section \ref{Mukaidualreview} reviews some basic results about moduli spaces of stable vector bundles on K3 surfaces, which in some special cases turn out to be again K3 surfaces, called the Mukai dual. Section \ref{hyperkahlerquotientreview} reviews the celebrated construction of the hyperk\"ahler metric on these moduli spaces, from a gauge theoretic viewpoint. These materials are standard, which we include in the hope of making the paper more accessible. Section \ref{universalconnection} continues with the gauge theoretic thread, and explains, under some topological conditions, how to construct a universal connection on the universal bundle, which is compatible with all three complex structures, and restricts to ASD connections on each K3 fibre. When part of the topological conditions fails, we show how to modify the construction to achieve a second-best substitute. \begin{rmk} The construction of the universal connection is also an important step in the work of Bartocci et al \cite{Bartocci1}, but these authors applied a result which only works for semisimple structure groups, and missed out the topological issues arising from the centre of the $U(r)$ structure group, as we will discuss in Section \ref{universalconnection}. 
\end{rmk} \subsection{The Mukai dual of a K3 surface}\label{Mukaidualreview} This Section gives a very brief review of the standard algebro-geometric theory, mainly following \cite{Huy1}, Section 6.1, which in turn is based on the works of Mukai. Let $(X, \omega_1, \omega_2, \omega_3)$ be a \textbf{hyperk\"ahler K3 surface}. In this Section we consider the preferred complex structure, with holomorphic symplectic form $\omega_2+\sqrt{-1} \omega_3$. \begin{rmk} The algebraic theory requires the projectivity assumption, \ie the K\"ahler class of $\omega_1$ is rational. This is not restrictive because the 2-sphere of complex structures always contains projective members. \end{rmk} It is convenient in the study of bundles on $X$ to incorporate the topological data into the so-called Mukai vector: \begin{Def} The \textbf{Mukai vector} of a coherent sheaf $E$ on a smooth variety $V$ is $v(E)=ch(E) \sqrt{Td(V)} \in H^{2 *}(V, \Q)$. More concretely, for $V=X$ the K3 surface, let $r=rk(E), c_1=c_1(E), c_2=c_2(E)$, then $v(E)=(r, c_1, \frac{1}{2}c_1^2-c_2+r)$. \end{Def} \begin{Def} If $v=\oplus v_i \in \bigoplus H^{2i}(V, \Z)$, then let $v^\vee=\oplus (-1)^i v_i$. We define the \textbf{Mukai pairing} to be the bilinear form on $H^{2*}(V,\Q)$ \[ (v,w)= -\int_V v^\vee \cup w. \] \end{Def} The importance of the Mukai vector comes from the Riemann-Roch formula: \begin{lem}(\cf \cite{Huy1} Page 168) If $E$, $F$ are coherent sheaves on $X$ then the Euler characteristic of the pair $(E,F)$ is \[ \chi(E, F)= \sum (-1)^i \dim \text{Ext}^i(E,F)=- (v(E), v(F) ). \] \end{lem} We consider the (coarse) moduli space $\mathcal{M}(v)=M(r,c_1,c_2)$ of semistable sheaves with a given Mukai vector $v$, and we denote the locus of stable vector bundles inside by $\mathcal{M}^s(v)$. Now if $E$ is a stable vector bundle, then by stability and Serre duality $\text{H}^0(\End_0(E))=\text{H}^2(\End_0(E))=0$, so the moduli space is smooth at the point $E$ with dimension $(v,v)+2$. 
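For concreteness we record the pairing in components (this computation is our addition, though it is standard): writing $v=(r,c_1,s)$ and $w=(r',c_1',s')$, the definition above gives

```latex
% Since v^\vee = (r, -c_1, s), the pairing (v,w) = -\int_X v^\vee \cup w reads
(v,w) \;=\; c_1 \cdot c_1' \;-\; r s' \;-\; s r' ,
\qquad\text{so}\qquad
(v,v) \;=\; c_1^{2} - 2 r s .
% For a stable bundle E, Riemann-Roch gives \chi(E,E) = -(v,v), while
% stability and Serre duality give \dim\text{Hom}(E,E) = \dim\text{Ext}^2(E,E) = 1, hence
\dim \text{Ext}^1(E,E) \;=\; 2 + (v,v) ,
% recovering the dimension formula for the moduli space stated above.
```

In particular, an isotropic Mukai vector, $(v,v)=0$, gives a two-dimensional moduli space, consistent with the Mukai dual being again a surface.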
A feature of the theory is that low dimensional moduli spaces are easier to understand. \begin{thm}\label{Mukaidualalgebraic} (\cf \cite{Huy1} page 169) Assume $(v,v)=0$. If $\mathcal{M}^s(v)$ has a compact irreducible component, then $\mathcal{M}^s(v)$ is equal to this component. \end{thm} \begin{Def}\label{Mukaidualdefinition} Assume $(v,v)=0$, and that the moduli space $\mathcal{M}^s(v)$ is compact and nonempty. Then we call $X^\vee=\mathcal{M}^s(v)$ the \textbf{Mukai dual} of $X$. \end{Def} An important tool is the Fourier-Mukai transform (FM) on cohomology. This is easiest to explain when the universal bundle $\mathcal{E}\to X\times X^\vee$ exists, but even if it does not, FM can still be defined via the `quasi-universal family' which always exists (\cf Section 4.6 \cite{Huy1}). $FM$ can be thought of as the analogue of the Fourier transform and the inverse Fourier transform. \begin{Def}\label{DefinitionofFourierMukai} The \textbf{Fourier-Mukai transform on cohomology} is a pair of maps between the cohomologies of $X$ and $X^\vee$, denoted $FM: H^*(X, \Q)\to H^*(X^\vee, \Q)$ and $FM^\vee: H^*(X^\vee, \Q)\to H^*(X, \Q)$. Let $pr_X: X\times X^\vee\to X$, $pr_{X^\vee}: X\times X^\vee\to X^\vee$ be the two projections, and $v(\mathcal{E})$ be the Mukai vector of $\mathcal{E}$ on $X\times X^\vee$. Then \[ FM(\alpha)=pr_{X^\vee_*}( v(\mathcal{E})^\vee \cup pr_{X}^*\alpha ), \quad FM^\vee(\alpha')=pr_{X_*}( v(\mathcal{E}) \cup pr_{X^\vee}^*\alpha' ). \] \end{Def} \begin{rmk}\label{FourierMukaicohomology} The formula is motivated by compatibility with the Fourier-Mukai transform on bundles/ sheaves/ derived category/ K-theory, which also involves this kind of convolution operation. The basic idea is to start from any bundle/ coherent sheaf/complex of sheaves $\mathcal{F}\to X$, and the Fourier-Mukai transform will output the complex $R^*pr_{X^\vee_*}( \mathcal{E}^\vee \otimes pr_{X}^*\mathcal{F} ) $. 
The alternating sum \[ [R^0pr_{X^\vee *}( \mathcal{E}^\vee \otimes pr_{X}^*\mathcal{F} ) ]-[R^1pr_{X^\vee *}( \mathcal{E}^\vee \otimes pr_{X}^*\mathcal{F} ) ]+[R^2pr_{X^\vee *}( \mathcal{E}^\vee \otimes pr_{X}^*\mathcal{F} ) ] \] defines a K-theory class, and therefore its Chern character defines a cohomology class. The Mukai vector on $X^\vee$ constructed from this Chern character is related to the Mukai vector of $\mathcal{F}$ by the Fourier-Mukai transform on cohomology (\cf Corollary 5.29 in \cite{FourierMukai}). In many interesting situations, $R^0$ and $R^2$ vanish, and then the Fourier-Mukai transform on cohomology can be used to work out the Chern character of the $R^1$ term, which in good situations is a bundle over $X^\vee$. \end{rmk} \begin{Def} The natural weight two Hodge structure on the even degree cohomology $H^{even}(Y)$ of a compact complex surface $Y$ is given by prescribing $H^{even 2,0}(Y)= H^{2,0}(Y)$, $H^{even 0,2}(Y)= H^{0,2}(Y) $, and $H^{even 1,1}(Y)=H^0(Y)\bigoplus H^{1,1}(Y) \bigoplus H^4(Y)$. \end{Def} An important characterisation of the complex structure of $X^\vee$ is \begin{thm}\label{FourierMukaitransformoncohomologymainperoperties} (\cf \cite{Huy1}, Section 6.1) The Mukai dual $X^\vee$ is also a K3 surface. The Fourier-Mukai transform pair $FM$ and $FM^\vee$ are inverse to each other, define an isomorphism of weight two Hodge structures $H^*(X) \simeq H^*(X^\vee)$, preserve the Mukai pairing, and are mutually adjoint with respect to the Mukai pairing.
\end{thm} \begin{thm}(\cite{Huy1}, Proposition 6.1.14) The Fourier-Mukai transform $FM$ sends the orthogonal complement of $v$ in $H^*(X)$ to $H^2(X^\vee)\bigoplus H^4(X^\vee)$, and sends the isotropic vector $v$ to the fundamental class $[X^\vee]^*\in H^4(X^\vee)$, so induces a linear isometry $H^2(X^\vee, \R)\simeq v^\perp/\R v.$ If $v$ is furthermore primitive, \ie not divisible by any nontrivial integer, then $FM$ induces an isomorphism of Hodge structures $H^2(X^\vee, \Z)\simeq v^\perp /\Z v$. \end{thm} \begin{rmk} A crucial fact in the proof is that $X^\vee$ admits a hyperk\"ahler structure (see next Section), which by the classification of surfaces leaves only the possibility of K3 surfaces and Abelian surfaces. One uses further information from the cohomology of $X^\vee$, obtained by studying the transform $FM$, to show that it is a K3 surface. The theorem characterises the period data of the complex structure of $X^\vee$, so by the Torelli theorem determines the complex structure. \end{rmk} \subsection{Hyperk\"ahler structure on moduli of vector bundles}\label{hyperkahlerquotientreview} Fix a Hermitian vector bundle $E$ with Mukai vector $v$. The Hitchin-Kobayashi correspondence allows for comparison between the moduli space $\mathcal{M}^s(v)$ of stable holomorphic bundle structures on $E$, and Hermitian-Yang-Mills (HYM) connections $A$. We shall take a viewpoint where \emph{no complex structure is preferred}. Since K3 surfaces have no torsion cohomology, any $PU(r)$-bundle lifts topologically to a $U(r)$-bundle. The HYM condition is equivalent to saying that the associated $PU(r)$-connection is ASD, and that the central curvature satisfies $ \frac{\sqrt{-1}}{2\pi r}\Tr F_A= \mathcal{B}, $ where $\mathcal{B}$ is the harmonic representative of $\frac{1}{r}c_1(E)$.
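As a simple illustration (a standard observation, included only for orientation): when $r=1$ the $PU(1)$ condition is vacuous, and a HYM connection on a line bundle $L\to X$ is exactly a unitary connection with harmonic curvature,
\[
\frac{\sqrt{-1}}{2\pi}F_A=\mathcal{B}.
\]
Such a connection exists because $\mathcal{B}$ represents $c_1(L)$, and since $H^1(X, \R)=0$ for a K3 surface it is unique up to gauge: two such connections differ by a closed, hence exact, 1-form.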
Hitchin's hyperk\"ahler quotient leads to the following celebrated result: \begin{thm}(see \cite{Mukai})\label{hyperkahlerstructureonmodulitheorem} On $\mathcal{M}^s(v)$, there is a canonical hyperk\"ahler structure. \end{thm} \begin{cor} The Mukai dual $X^\vee$ has a natural \textbf{hyperk\"ahler structure}. \end{cor} \begin{proof} (Theorem \ref{hyperkahlerstructureonmodulitheorem}) Consider the affine space $\mathcal{A}$ of projective unitary connections on $E$, which has a Euclidean hyperk\"ahler structure $(\mathcal{A}, \overline{g}, \overline{\omega_1}, \overline{\omega_2}, \overline{\omega_3})$. The metric is given by \[ \overline{g}(a, b)=\frac{1}{4\pi^2} \int_X \langle a, b\rangle d\text{Vol}_X, \quad a\in T_A \mathcal{A}, \] where $T_A \mathcal{A}$ is identified with the space of traceless $ad(E)$ valued 1-forms, and the pointwise inner product is defined using the negative of the $su(r)$ trace pairing and the inner product on 1-forms. The hyperk\"ahler forms are \[ \overline{\omega}_i (a, b)= \frac{-1}{4\pi^2}\int_X \Tr (a \wedge b) \wedge \omega_i, \quad a, b\in T_A\mathcal{A}. \] The corresponding complex structures $I_i$ act pointwise on the 1-form part of $a$ by the negative of precomposition: \[ I_i a= -a\circ I_i, \quad a\in T_A\mathcal{A}. \] Therefore $\overline{\omega_i}(\cdot{}, \cdot{})=\overline{g}(I_i \cdot{}, \cdot{})$. The space $\mathcal{A}$ admits an action by the gauge group $\mathcal{G}$ of $PU(r)$ gauge transformations. The hyperk\"ahler moment map is $\mu=(\mu_1, \mu_2, \mu_3)$, where \[ \mu_i=\frac{-1}{4\pi^2} F_A \wedge \omega_i. \] The zeros of $\mu$ are just the ASD connections on $X$. At a point $A$ representing an irreducible ASD connection, the group action is free, and the moment map $\mu$ is regular at $A$. We have $X^\vee= \mu^{-1}(0) /\mathcal{G}$, which equips $X^{\vee}$ with a hyperk\"ahler structure. For applications in this paper, it is useful to recall the description of the hyperk\"ahler structure on the quotient.
At a smooth point $A$, there is a canonical orthogonal decomposition of vector spaces \begin{equation}\label{hyperkahlerquotientconstructiondecompositionformula} \begin{split} T_A \mathcal{A}=\Omega^1(X, ad_0 (E) )= & T_A\mathcal{M} \bigoplus \\ & (\Lie \mathcal{G}) A \oplus I_1(\Lie \mathcal{G}) A \oplus I_2(\Lie \mathcal{G}) A \oplus I_3(\Lie \mathcal{G}) A. \end{split} \end{equation} Here $ad_0(E)$ means the traceless part of $ad(E)$, and the tangent space for $\mathcal{M}$ is identified with the finite dimensional vector space of solutions to the linearised ASD equation and the Coulomb gauge condition \begin{equation} T_A \mathcal{M}= \{ a\in T_A\mathcal{A} : \quad d^+_A a=0, d_A^* a=0 \}. \end{equation} This space is preserved by the quaternionic action, \ie it is a module over the quaternions. The Lie algebra $\Lie \mathcal{G}$ acts at $A$, and the deformations it generates are of the form $d_A \Phi$ for some $\Phi \in \Omega^0(X, ad_0(E))$; these are the elements of $(\Lie \mathcal{G}) A$. The hyperk\"ahler structure $(g^\mathcal{M}, \omega_1^\mathcal{M}, \omega_2^\mathcal{M}, \omega_3^\mathcal{M}) $ on the tangent space $T_A\mathcal{M}$ is then the natural restriction of the Euclidean hyperk\"ahler structure. The Levi-Civita connection on the moduli space $\mathcal{M}$ is described as follows. Let $a'$ be a tangent vector field defined on a local open set $T\subset \mathcal{M}$ with coordinates $\tau_i$. We represent this $a'$ as a map from $T$ to the infinite dimensional vector space $\Omega^1(X, ad_0(E))$, which at any point $\tau\in T$ lands in the corresponding tangent space $T_A\mathcal{M}\subset T_A\mathcal{A}=\Omega^1(X, ad_0 (E) )$. Then to compute the Levi-Civita connection $\nabla^{L.C.}_{\frac{\partial}{\partial \tau_i} }a'$, we first calculate the derivative $\frac{\partial a'}{\partial \tau_i}$, and then orthogonally project to $T_A\mathcal{M}$. This turns out to be well defined.
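As a consistency check on the moment map (a standard pointwise linear algebra fact): on the hyperk\"ahler 4-manifold $X$ the forms $\omega_1, \omega_2, \omega_3$ span the self-dual 2-forms at every point, so for the curvature $F_A$ of a $PU(r)$ connection
\[
\mu(A)=0 \iff F_A\wedge \omega_i=0 \text{ for } i=1,2,3 \iff F_A^+=0,
\]
confirming that $\mu^{-1}(0)$ consists precisely of the ASD connections.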
\end{proof} \subsection{The universal connection}\label{universalconnection} The concept of a universal bundle differs subtly between differential geometry and algebraic geometry. We fix a Mukai vector $v=v(E)$ with $(v,v)=0$, which determines the topological type of a Hermitian bundle $E\to X$. The moduli space of irreducible HYM connections (assumed to be compact and nonempty) must be a K3 surface, called the Mukai dual K3 surface $X^\vee$. The associated universal bundle of $PU(r)$ ASD connections exists unconditionally over $X\times X^\vee$. Since $X\times X^\vee$ has no torsion cohomology, the $PU(r)$ bundle lifts topologically to a $U(r)$ vector bundle $\mathcal{E}\to X\times X^\vee$, and the connections on the $X$ fibres can be lifted to the tautological HYM connections on $E\to X$. This $\mathcal{E}\to X\times X^\vee$ is the \textbf{universal bundle} in differential geometry, which exists unconditionally, whereas in algebraic geometry compatibility with complex structures on $X^\vee$ imposes further conditions on $c_1(\mathcal{E})$ which are not always satisfied. Our aim is to put an optimal global connection on $\mathcal{E}$, which restricts fibrewise to the HYM connection on $X$, by adapting \cite{DonaldsonKronheimer} Section 5.2.3. The na\"ive idea is to start from the space of all unitary connections $\mathcal{A}$, and seek a canonical connection on the principal $U(r)$ bundle $P\times\mathcal{A}$ over $X\times\mathcal{A}$, where $P\to X$ is the principal bundle corresponding to $E\to X$. The canonical connection should be invariant under the action of the gauge group of unitary transformations $\mathcal{G}$, and one tries to descend it to the quotient bundle over $\mathcal{A}/\mathcal{G}$. This is the approach in \cite{Bartocci1}, but it has a gap because the quotient bundle is only a $PU(r)$ bundle, due to the nontrivial centre of $U(r)$. To remedy this, we first replace $\mathcal{A}$ and $\mathcal{G}$ by their projective unitary counterparts.
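To spell out the issue with the centre (a standard observation): constant central gauge transformations act trivially on the space of unitary connections, so even at an irreducible connection $A$ the stabiliser in the $U(r)$ gauge group is
\[
\mathrm{Stab}(A)=\{ e^{\sqrt{-1}\theta}\, \mathrm{Id} : \theta\in \R \}\simeq U(1),
\]
whereas the $PU(r)$ gauge group acts freely on irreducible connections, so the quotient construction behaves well after passing to $PU(r)$.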
We will tacitly restrict attention to the locus of irreducible connections. Now we define the universal connection $\nabla^{univ}$ on $P\times \mathcal{A}\to X\times\mathcal{A}$. At the point $(x, A)\in X\times \mathcal{A}$, we put \begin{equation}\label{universalconnectionansatz} \begin{cases} \nabla^{univ}_v= \nabla^A_v &\quad v\in T_xX, \\ \nabla^{univ}_a= \nabla^{trivial}_a+ G_A d_A^* a &\quad a\in T_A\mathcal{A}, \end{cases} \end{equation} where $G_A$ is the inverse of the Laplacian $\Lap_A=d_A^*d_A$, which is well defined because the connection is irreducible. If we think of the universal connection as a $u(r)$-valued 1-form on $P\times \mathcal{A}$, then the alternative definition is \[ \begin{cases} A^{univ}|_{P\times\{A\}}= A, \\ A^{univ}(a)= G_A d_A^* a, \quad a\in T_A\mathcal{A}. \end{cases} \] This $\nabla^{univ}$ is $\mathcal{G}$ invariant, so descends to the quotient bundle over $\mathcal{A}/\mathcal{G}$. \begin{lem}(\cf \cite{DonaldsonKronheimer} Proposition 5.2.17 with minor modifications) The curvature $F(\nabla^{univ})$ at the point $(x,A)\in X\times \mathcal{A}$ is given by \begin{equation}\label{universalbundlecurvature} \begin{cases} F( \nabla^{univ} ) (u_1, u_2)= F_A(u_1, u_2), &\quad u_1, u_2\in T_x X, \\ F( \nabla^{univ} )( a, u )=\langle a, u\rangle, &\quad a\in T_A \mathcal{A}, u\in T_x X, \quad d_A^*a=0, \\ F( \nabla^{univ})(a_1, a_2)= -2 G_A \{ a_1, a_2 \}, &\quad a_1, a_2 \in T_A \mathcal{A},\quad d_A^*a_1=d_A^*a_2=0. \end{cases} \end{equation} where $\{ a_1, a_2\}$ means pointwise taking the Lie bracket on the bundle part, and contracting the 1-form part using the metric on $X$. \end{lem} Now given the universal family $\mathcal{E}\to X\times X^\vee$, we get a family of irreducible $PU(r)$ ASD connections, hence a map into this quotient bundle, so we can pull back $\nabla^{univ}$ to obtain a connection on the associated $PU(r)$ bundle of $\mathcal{E}$, also denoted $\nabla^{univ}$, with the same curvature formula.
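As a quick check on the ansatz (implicit in the cited construction): the connection form $A^{univ}$ vanishes exactly on tangent vectors in Coulomb gauge, since $G_A$ is invertible and
\[
d_A^*\big( a- d_A G_A d_A^* a \big)= d_A^* a- \Lap_A G_A d_A^* a=0,
\]
so the horizontal subspace at $A$ is precisely $\ker d_A^*$, the $L^2$-orthogonal complement of the gauge orbit direction $d_A\,\Omega^0(X, ad(E))$.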
\begin{prop} The curvature of the $PU(r)$ connection $\nabla^{univ}$ over $X\times X^\vee$ is of Dolbeault type (1,1) on the complex manifold $X\times X^\vee$, under every choice of complex structure in the hyperk\"ahler triple. We may call such a connection \textbf{triholomorphic}. \end{prop} \begin{proof} We check that the $(0,2)$ component of $F(\nabla^{univ})$ vanishes; the $(2,0)$ part is similar. We test the curvature formula (\ref{universalbundlecurvature}) against complexified vectors of the form $u+\sqrt{-1} I_i u$ and $a+\sqrt{-1}I_i a$. This is a simple calculation. For the part of the curvature pairing with two tangent vectors from $T_AX^\vee\subset T_A\mathcal{A}$ (see Section \ref{hyperkahlerquotientreview} for notations), \[ F( \nabla^{univ})(a_1+ \sqrt{-1}I_ia_1, a_2+\sqrt{-1}I_i a_2)= -2 G_A \{ a_1+\sqrt{-1}I_ia_1, a_2+\sqrt{-1}I_ia_2 \}, \] which vanishes by \[ \begin{split} &\{ a_1+\sqrt{-1}I_ia_1, a_2+\sqrt{-1}I_ia_2 \} \\ & =\{ a_1, a_2\} -\{ I_i a_1, I_i a_2 \} +\sqrt{-1}( \{ I_ia_1, a_2 \} +\{ a_1, I_ia_2 \} ) =0. \end{split} \] For the part of $F(\nabla^{univ})$ pairing with two vectors from $T_x X$, this is merely the fibrewise ASD condition. For the cross term in $F(\nabla^{univ})$, \[ F( \nabla^{univ})(a+ \sqrt{-1}I_ia, u+\sqrt{-1}I_i u)=\langle a+ \sqrt{-1}I_ia, u+\sqrt{-1}I_i u \rangle, \] which vanishes by $ \langle I_ia, v\rangle= -\langle a, I_iv\rangle. $ We have tacitly used that $I_ia_j\in T_AX^\vee$ are still in Coulomb gauge, and that $I_i$ acts on the coupled 1-form $a_j$ by pointwise applying the negative of precomposition. These come from the description of the hyperk\"ahler quotient construction (\cf Section \ref{hyperkahlerquotientreview}).
\end{proof} \begin{thm}\label{universalconnectionmainproperties} There is a Hermitian connection on $\mathcal{E}\to X\times X^\vee$, still denoted $\nabla^{univ}$, which lifts the triholomorphic $PU(r)$ connection, and whose central curvature satisfies \begin{equation}\label{centralcurvatureBfield} \frac{\sqrt{-1}}{2\pi r}\Tr F(\nabla^{univ})= \mathcal{B}+\mathcal{B'}, \end{equation} where $ \mathcal{B},\mathcal{B'}$ are the harmonic 2-forms representing $\frac{1}{r}c_1(E)\in H^2(X)$ and $\frac{1}{r}c_1(\mathcal{E}|_x) \in H^2(X^\vee)$ for any $x\in X$. In particular, when $c_1(E)$ is orthogonal to the hyperk\"ahler triple on $X$ and $c_1(\mathcal{E}|_x)$ is orthogonal to the hyperk\"ahler triple on $X^\vee$, then $\nabla^{univ}$ is a triholomorphic Hermitian connection. \end{thm} \begin{proof} To obtain $\nabla^{univ}$ it is enough to prescribe a $U(1)$ connection on the line bundle $\Lambda^r \mathcal{E}$. The line bundle $\Lambda^r \mathcal{E}$ is topologically the tensor product of line bundles $L\to X$ and $L'\to X^\vee$; on both line bundles we can find connections with curvature $-2\pi\sqrt{-1}\, r\mathcal{B}$ and $-2\pi\sqrt{-1}\, r\mathcal{B'}$, because $r\mathcal{B}$ and $r\mathcal{B'}$ represent the appropriate first Chern classes; taking the tensor product connection gives the connection on $\Lambda^r \mathcal{E}$. This gives us $\nabla^{univ}$. When the first Chern classes satisfy the orthogonality conditions, then Hodge theory implies $\mathcal{B}\wedge \omega_i=0$ and $\mathcal{B}'\wedge\omega^{X^\vee}_i=0$, \ie the central part of $F(\nabla^{univ})$ is triholomorphic. But we also know the associated $PU(r)$ connection is triholomorphic, so the $U(r)$ connection $\nabla^{univ}$ is triholomorphic as well. \end{proof} \begin{rmk} The Chern class conditions are necessary for the existence of triholomorphic $U(r)$ connections, because a triholomorphic connection restricted to any fibre copy of $X$ and $X^\vee$ is ASD.
A related issue is that $\nabla^{univ}$ is not always holomorphic with respect to a given complex structure on $X^\vee$, namely the universal bundle in the algebro-geometric sense need not always exist. \end{rmk} \begin{rmk} The construction of the $U(r)$ connection $\nabla^{univ}$ has the ambiguity of twisting by a $u(1)$-valued exact 1-form pulled back from $X^\vee$, alternatively thought of as a flat $U(1)$ connection. This is intimately related to the fact that in algebraic geometry, the definition of the universal family involves a possible twist by a holomorphic line bundle. \end{rmk} \section{The Mukai dual of a K3 surface}\label{TheMukaidualofaK3surface} In this Chapter we study the geometry of the Mukai dual (\cf Definition \ref{Mukaidualdefinition}) in more detail using the $U(r)$ connection $\nabla^{univ}$ of Theorem \ref{universalconnectionmainproperties}, with particular emphasis given to the idea of \textbf{duality}. We rephrase the relation of the \textbf{hyperk\"ahler periods} on $X$ and $X^\vee$ in terms of Donaldson's $\mu$-map in Section \ref{HyperkahlerperiodsontheMukaidualK3}. The triholomorphic $PU(r)$ connection gives rise to a family of ASD connections on $X^\vee$, parametrised by $X$. Modulo the issue of strict stability, \ie the irreducibility of these ASD connections, this allows us to interpret $X$ as a moduli space of $PU(r)$ ASD connections over $X^\vee$, inducing another \textbf{hyperk\"ahler structure} on $X$, which turns out to agree with the original hyperk\"ahler structure, as we discuss in Section \ref{ASDconnectionsonMukaidual}. \subsection{Hyperk\"ahler periods on the Mukai dual K3 surface}\label{HyperkahlerperiodsontheMukaidualK3} We use the connection provided by Theorem \ref{universalconnectionmainproperties} to give a differential geometric understanding of the Hodge structure of $X^\vee$.
\begin{prop}(compare \cite{DonaldsonKronheimer} Page 197) The cohomology class of the hyperk\"ahler 2-form $\omega_i^{X^\vee}$ on the moduli space $X^\vee$ is given by the slant product \begin{equation}\label{Donaldsonmumaphyperkahler} [\omega_i^{X^\vee}]=(\frac{1}{2r} c_1(\mathcal{E})^2- ch_2(\mathcal{E}))\wedge [\omega_i]/[X]= -\frac{1}{2r}p_1(ad(\mathcal{E}))\cup [\omega_i]/[X]. \end{equation} \end{prop} \begin{proof} We can represent the Chern character $ch(\mathcal{E})$ of the universal bundle in terms of the curvature forms: \[ Ch(\nabla^{univ})= \Tr ( \exp \frac{ \sqrt{-1} }{2\pi} F(\nabla^{univ}) )=r+ Ch_1(\nabla^{univ}) +Ch_2(\nabla^{univ})+\ldots. \] In particular $ Ch_2(\nabla^{univ}) =-\frac{1}{8\pi^2} \Tr (F(\nabla^{univ})\wedge F(\nabla^{univ} ) ). $ It is convenient to decompose the curvature $F(\nabla^{univ})$ into 3 parts, depending on whether the components of the 2-form factor come from $X$ or $X^\vee$, \begin{equation}\label{typedecompositionofuniversalcurvature} F(\nabla^{univ})= F(\nabla^{univ})^{X, X} +F(\nabla^{univ})^{X, X^\vee}+ F(\nabla^{univ})^{X^\vee, X^\vee}. \end{equation} For later convenience, we denote \begin{equation}\label{definitionofOmega} \Omega= F(\nabla^{univ})^{X, X^\vee}. \end{equation} Thus the slant product $ch_2(\mathcal{E})\cup [\omega_i]/[X]$ can be represented by the integration along fibres \begin{equation*} \begin{split} &\int_X Ch_2(\nabla^{univ}) \wedge \omega_i=-\frac{1}{8\pi^2}\int_X \Tr (F(\nabla^{univ})\wedge F(\nabla^{univ} ) )\wedge \omega_i \\ &=-\frac{1}{8\pi^2} \int_X \Tr (\Omega\wedge \Omega)\wedge \omega_i-\frac{1}{4\pi^2}\int_X \Tr F(\nabla^{univ})^{X,X} \wedge F(\nabla^{univ} )^{X^\vee, X^\vee} \wedge \omega_i \\ &=-\frac{1}{8\pi^2} \int_X \Tr (\Omega\wedge \Omega)\wedge \omega_i+ (\int_X \mathcal{B}\wedge \omega_i) r \mathcal{B}' . \end{split} \end{equation*} which is a 2-form on $X^\vee$.
The last equality here uses the fibrewise ASD condition, which implies \[ F(\nabla^{univ})^{X,X}\wedge \omega_i= -2\pi\sqrt{-1}\, (\mathcal{B}\wedge \omega_i) I_\mathcal{E} .\] Now taking cohomology classes, we see \[ \begin{split} [-\frac{1}{8\pi^2} \int_X \Tr (\Omega\wedge \Omega)\wedge \omega_i]&=\int_X (ch_2(\mathcal{E})- \frac{1}{r} c_1(\mathcal{E}|_x) c_1(E))\wedge [\omega_i] \\ &=(ch_2(\mathcal{E})- \frac{1}{2r} c_1(\mathcal{E}) ^2)\wedge [\omega_i]/[X] \\ &=\frac{1}{2r}p_1(ad(\mathcal{E}) )\cup [\omega_i]/[X]. \end{split} \] We evaluate the 2-form at the point $A\in X^\vee$ on tangent vectors $a_1, a_2\in T_A X^\vee$, to get \[ \begin{split} & -\frac{1}{8\pi^2}\iota_{a_2}\iota_{a_1} \int_X \Tr (\Omega\wedge \Omega)\wedge \omega_i \\ =& -\frac{1}{8\pi^2}\iota_{a_2} \int_X \Tr (\iota_{a_1}\Omega\wedge \Omega)\wedge \omega_i -\frac{1}{8\pi^2}\iota_{a_2} \int_X \Tr (\Omega\wedge \iota_{a_1}\Omega)\wedge \omega_i \\ =& -\frac{1}{4\pi^2}\iota_{a_2} \int_X \Tr (\iota_{a_1}\Omega\wedge \Omega)\wedge \omega_i \\ =& \frac{1}{4\pi^2} \int_X \Tr (\iota_{a_1}\Omega\wedge\iota_{a_2} \Omega)\wedge \omega_i \\ =& \frac{1}{4\pi^2} \int_X \Tr (a_1\wedge a_2)\wedge \omega_i . \end{split} \] The last equality uses (\ref{universalbundlecurvature}), which says in particular \[ \Omega(a, v)=\langle a, v \rangle, \quad a\in T_A X^\vee, v\in T_x X. \] We recognise $\frac{1}{4\pi^2} \int_X \Tr (a_1\wedge a_2)\wedge \omega_i $ as minus the hyperk\"ahler form $\omega_i^{X^\vee}$ on the moduli space $X^\vee$, hence the claim. \end{proof} \begin{rmk} This quite delicate computation should be compared to \cite{DonaldsonKronheimer}, Page 197, which is essentially the same calculation but with different numerical factors. To clarify, our convention is that integration along $X$ commutes with wedging by forms on $X^\vee$.
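Explicitly, the convention is that for $\gamma\in \Omega^*(X)$ and $\beta \in \Omega^*(X^\vee)$,
\[
\int_X pr_X^* \gamma \wedge pr_{X^\vee}^* \beta= \Big( \int_X \gamma \Big)\, \beta,
\]
with no Koszul sign inserted when $\beta$ is moved past the fibre integration.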
\end{rmk} \begin{rmk} The relation between $\omega_i^{X^\vee}$ and $\omega_i$ is expressed by the map \begin{equation}\label{Donaldsonmumap} \tilde{\mu}: H^2(X) \to H^2(X^\vee), \quad \alpha\mapsto \frac{-1}{2r} p_1(ad(\mathcal{E}))\cup \alpha/[X], \end{equation} which for rank $r=2$ is Donaldson's $\mu$-map in the theory of 4-manifold invariants. \end{rmk} \begin{thm} \label{Mukaidualhyperkahlertriple} The $\mu$-map is an isometry. In particular the volumes of $X$ and $X^\vee$ are equal. \end{thm} \begin{proof} We shall make use of the FM transform on cohomology from Section \ref{Mukaidualreview}. We can compute the Mukai vector $v(\mathcal{E})= ch(\mathcal{E})\sqrt{Td(X) Td(X^\vee)}$, and $v(\mathcal{E}^\vee)= ch(\mathcal{E}^\vee)\sqrt{Td(X) Td(X^\vee)}$. Since $X$ and $X^\vee$ are K3 surfaces, we know $Td(X)=1+2[X]^*$ and $Td(X^\vee)=1+ 2[X^\vee]^*$. Thus for any $\alpha\in H^2(X)$, \[ \begin{split} FM(\alpha) _{H^0(X^\vee)}&=- \int_X \alpha \cup c_1(E), \\ FM(\alpha) _{H^2(X^\vee)}&= \alpha\cup ch_2(\mathcal{E}^\vee)/[X]=-\tilde{\mu}(\alpha) +\frac{1}{r}(\int_X c_1(E)\wedge \alpha)c_1(\mathcal{E}|_x) . \end{split} \] Here $\tilde{\mu}$ is the Donaldson $\mu$-map.
To calculate $FM(\alpha)_{H^4}$, we observe that the FM transform of the fundamental class of $X$ is \[ FM([X]^*)= v(\mathcal{E}^\vee)\cup [X]^*/[X]= v(\mathcal{E}^\vee|_x) \in H^*(X^\vee), \quad \forall x\in X, \] so the Mukai pairing \[ \begin{split} 0&=-([X]^*, \alpha)=-(FM([X]^*), FM(\alpha) )\\ &=FM(\alpha)_{H^0} (\int_{X^\vee} ch_2(\mathcal{E}|_x)+r)+\int_{X^\vee} FM(\alpha)_{H^2}\cup c_1(\mathcal{E}|_x)+ r \int_{X^\vee} FM(\alpha)_{H^4}\\ &=FM(\alpha)_{H^0} \frac{1}{2r}(\int_{X^\vee} c_1(\mathcal{E}|_x)^2)+\int_{X^\vee} FM(\alpha)_{H^2}\cup c_1(\mathcal{E}|_x)+ r \int_{X^\vee} FM(\alpha)_{H^4}\\ &= r \int_{X^\vee} FM(\alpha)_{H^4}- \int_{X^\vee}\tilde{\mu}(\alpha)\cup c_1(\mathcal{E}|_x) + \frac{1}{2r} (\int_X\alpha\cup c_1(E)) \int_{X^\vee} c_1(\mathcal{E}|_x)^2, \end{split} \] hence \[ FM(\alpha)_{H^4}=\frac{1}{r} \tilde{\mu}(\alpha)\cup c_1(\mathcal{E}|_x) -\frac{1}{2r^2}(\int_X\alpha \cup c_1(E)) c_1(\mathcal{E}|_x)^2. \] The $\mu$-map is an isometry because \[ \begin{split} \int_X \alpha^2&= (\alpha, \alpha)= (FM(\alpha), FM(\alpha) )\\ &= \int_{X^\vee} FM(\alpha)_{H^2}^2- 2 FM(\alpha)_{H^0} \int_{X^\vee} FM(\alpha)_{H^4}= \int_{X^\vee} \tilde{\mu}(\alpha)^2, \end{split} \] where the last step is proved by substituting the expressions for the components of $FM(\alpha)$ and cancelling terms. Since $[\omega_i]$ and $[\omega_i^{X^\vee}]$ are related by the $\mu$-map, the volumes of $X$ and $X^\vee$ are equal. \end{proof} \begin{rmk} When $r=2$, the integral $q(\alpha)=\int_{X^\vee} \tilde{\mu}(\alpha)^2$ is one of Donaldson's polynomial invariants for K3 surfaces, and this volume computation is just the higher rank generalisation.
\end{rmk} \subsection{ASD connections on $X^\vee$ and hyperk\"ahler structure}\label{ASDconnectionsonMukaidual} The universal $U(r)$ connection $\nabla^{univ}$ on $\mathcal{E}\to X\times X^\vee$ in Theorem \ref{universalconnectionmainproperties} induces a family of HYM connections (or equivalently, ASD $PU(r)$ connections) on $E'\simeq \mathcal{E}^\vee|_x\to X^\vee$ parametrised by $x\in X$, where $E'\to X^\vee$ denotes the underlying Hermitian bundle. The reason for considering the dual bundle is to be compatible with the inverse Fourier-Mukai transforms (\cf Section \ref{Mukaidualreview}). The goal of this Section is \begin{thm}\label{doubledualityhyperkahlerforms} If all these HYM connections are irreducible, then $X$ is the moduli space $\mathcal{M}^s(v(E'))$, and the hyperk\"ahler structure on $X$ induced from the moduli interpretation agrees with $(g, \omega_i)$. \end{thm} \begin{rmk} This means $X$ is also the Mukai dual of $X^\vee$, so $X$ and $X^\vee$ appear on equal footing. We will in fact separate the irreducibility/stability assumption from most of the intermediate arguments. \end{rmk} We first observe that $X$ has the correct virtual dimension: \begin{lem} The Mukai vector of $E'\to X^\vee$ satisfies $(v(E'), v(E'))=0$, so the moduli space $\mathcal{M}^s(v(E'))$ has real virtual dimension $4$. \end{lem} \begin{proof} The Fourier-Mukai transform on cohomology (\cf Section \ref{Mukaidualreview}) maps the fundamental class $[X]^*\in H^4(X)$ to \[ FM( [X]^* )= [X]^*\cup v(\mathcal{E}^\vee) /[X] = v(E') \in H^*(X^\vee). \] Now because the Fourier-Mukai transform preserves the Mukai pairing, we have \[ ( v(E') , v(E') )=( [X]^*, [X]^* ) = 0. \] The complex virtual dimension is $(v(E'), v(E') )+2=2$ (\cf Section \ref{Mukaidualreview}), hence the claim. \end{proof} We study the variation of these ASD $PU(r)$ connections on $X^\vee$ as $x\in X$ varies.
Let $T'\subset X$ be an open set with coordinates $x_i$, containing a point of interest $x_0\in T'$. We can topologically identify the smooth bundles $\mathcal{E}^\vee|_{x}\to \{x\}\times X^\vee$ for $x\in T'$ with the bundle $E'\to X^\vee$. Then we get a varying family of ASD connections $A_{x}= -\nabla^{univ,t}|_{x}$ parametrised by $x\in T'$, inside an infinite dimensional space modelled on $\Omega^1(X^\vee, ad_0 (E'))$ (\cf Section \ref{hyperkahlerquotientreview}). The minus transpose appears because we are working with the dual bundle. The topological identification can be twisted by gauge transformations. To rigidify the situation, we may use the transposed universal connection $-\nabla^{univ,t}$ to give an infinitesimal trivialisation around the central fibre, so that the derivative $\frac{\partial A_{x}}{\partial x_i} $ at the point $x_0$ agrees with the commutator \[ [-\nabla^{univ,t}_{ \frac{\partial }{\partial x_i} } , -\nabla^{univ,t} ]= -\iota_{\frac{\partial }{\partial x_i} } F(\nabla^{univ,t})= -\iota_{\frac{\partial }{\partial x_i} } \Omega^t\in \Omega^1(X^\vee, ad_0 (E')), \] where we have restricted the coupled 1-form $-\iota_{\frac{\partial }{\partial x_i} } F(\nabla^{univ,t}) $ to the central fibre, and only the component $\Omega$ of $F(\nabla^{univ})$ actually contributes. We can regard this commutator as the \textbf{infinitesimal variation of the ASD connections}. \begin{lem}\label{infinitesimalvariationofASDconnectionisinCoulumbgauge} The infinitesimal variation $ a_i=-\iota_{\frac{\partial }{\partial x_i} } \Omega^t \in \Omega^1(X^\vee, ad_0 (E')) $ satisfies the linearised ASD equation $d_{A_{x_0}}^+ a_i=0$, and the Coulomb gauge fixing condition $d_{A_{x_0}}^* a_i=0$. \end{lem} \begin{proof} Since $a_i$ is traceless, the $PU(r)$ version and the $U(r)$ version of these equations are equivalent. The linearised ASD equation follows directly from the variation of ASD connections.
To show the Coulomb gauge condition, we start from the equation over $X\times X^\vee$, \begin{equation} d_{A^{univ}}^* F(\nabla^{univ})=0, \end{equation} which is a consequence of the triholomorphic property. Now recall the decomposition (\ref{typedecompositionofuniversalcurvature}) and (\ref{definitionofOmega}), \[ F(\nabla^{univ})= F(\nabla^{univ})^{X, X} +\Omega+ F(\nabla^{univ})^{X^\vee, X^\vee}. \] Similarly, the operator $d_{A^{univ}}^ * $ can be decomposed into $ d_{A^{univ}}^ *= d_{A^{univ}}^ {X, *}+d_{A^{univ}}^ { X^\vee, *}, $ corresponding to decreasing the bidegree of forms in the $X$ direction or the $X^\vee$ direction. The Yang-Mills equation decomposes as \[ d_{A^{univ}}^ {X, *} F(\nabla^{univ})^{X, X}+ d_{A^{univ}}^ { X^\vee, *} \Omega=0, \] and \[ d_{A^{univ}}^ { X^\vee, *} F(\nabla^{univ})^{X^\vee, X^\vee}+ d_{A^{univ}}^ { X, *} \Omega=0. \] We observe that the connection is ASD on each fibre, so the fibrewise Yang-Mills equations are satisfied, which implies \[d_{A^{univ}}^ { X, *} F(\nabla^{univ})^{X, X}=0, \quad d_{A^{univ}}^ { X^\vee, *} F(\nabla^{univ})^{X^\vee, X^\vee}=0.\] Thus in turn we have \begin{equation}\label{YangMillsoncomponents} d_{A^{univ}}^ { X^\vee, *} \Omega=0, \quad d_{A^{univ}}^ { X, *}\Omega=0. \end{equation} Hence on the central fibre, \[ d_{A^{univ}}^ { X^\vee, *} \iota_{ \frac{\partial }{\partial x_i} } \Omega = - \iota_{ \frac{\partial }{\partial x_i} } d_{A^{univ}}^ { X^\vee, *}\Omega=0 \in \Omega^0(X^\vee, ad_0(E')), \] which is the desired Coulomb gauge condition, after taking the transpose. \end{proof} \begin{rmk} It is curious that the Coulomb gauge condition on the fibre is related to the Yang-Mills condition on the total space.
\end{rmk} The infinitesimal variation of ASD connections behaves well under quaternionic actions: \begin{lem}\label{infinitesimalvariationofASDconnectioniscompatiblewithquaternionicaction} For any choice of complex structure $I_k$, the infinitesimal variation induced by $I_k \frac{\partial}{\partial x_i}$ is $I_k a_i$, where by $I_k a_i$ we mean the negative of precomposition by $I_k$. \end{lem} \begin{proof} We compute using the triholomorphic property of $\Omega$ as follows: \[ \begin{split} \iota_{ I_k \frac{\partial}{\partial x_i} } \Omega =\Omega ( I_k \frac{\partial}{\partial x_i} , \cdot{} )=\Omega( \frac{\partial}{\partial x_i}, -I_k(\_) ) = - \iota_{ \frac{\partial}{\partial x_i} } \Omega \circ I_k. \end{split} \] We then take the transpose to see the result. \end{proof} We interpret the family of ASD instantons on $E'\to X^\vee$ as giving a map from $X$ to the space of $PU(r)$ connections on $E'\to X^\vee$, modulo gauge equivalence. We can compare this with the hyperk\"ahler quotient construction in Section \ref{hyperkahlerquotientreview}. The optimal hope is to identify $X$ with the moduli space $\mathcal{M}^s(v(E'))$ of ASD instantons on $E'\to X^\vee$ with its hyperk\"ahler structure. There are several issues here. First, in general these ASD instantons may fail to be irreducible, which is needed to ensure the smoothness of the moduli space. Second, it is not a priori clear that the map from $X$ to the moduli space is an immersion, because the infinitesimal variations may vanish. Despite these issues, we can nevertheless write down the (semi)-metric and the triple of 2-forms on $X$ induced from the moduli space picture, since Lemma \ref{infinitesimalvariationofASDconnectionisinCoulumbgauge} already puts us in the appropriate gauge fixing condition. (Compare with Section \ref{hyperkahlerquotientreview}).
The (semi)-metric is given by \begin{equation} g^{\vee\vee}(\frac{\partial}{\partial x_i},\frac{\partial}{\partial x_j} )=\frac{1}{4\pi^2}\int_{X^\vee} \langle \iota_{\frac{\partial}{\partial x_i}} \Omega^t , \iota_{\frac{\partial}{\partial x_j}} \Omega^t \rangle d\text{Vol}_{X^\vee}, \end{equation} where the pointwise inner product of coupled 1-forms on $X^\vee$ is given by combining the negative of the trace pairing on the bundle part, and the inner product on the 1-form part. This is clearly smooth and semi-positive definite, but not a priori known to be positive definite. The corresponding triple of 2-forms $\omega^{\vee\vee}_k$ is defined by the requirement \[ \omega_k^{\vee\vee}(\_, \_)= g^{\vee\vee}( I_k\_, \_ ). \] More explicitly, we can write down \[ \begin{split} \omega_k^{\vee\vee}(\frac{\partial}{\partial x_i},\frac{\partial}{\partial x_j} )&=\frac{-1}{4\pi^2}\int_{X^\vee} \Tr ( \iota_{\frac{\partial}{\partial x_i}} F(\nabla^{univ,t}) \wedge \iota_{\frac{\partial}{\partial x_j}} F(\nabla^{univ,t}) ) \wedge \omega_k^{X^\vee} \\ & =\frac{-1}{4\pi^2}\int_{X^\vee} \Tr ( \iota_{\frac{\partial}{\partial x_i}} \Omega \wedge \iota_{\frac{\partial}{\partial x_j}} \Omega )\wedge \omega_k^{X^\vee} . \end{split} \] We can write this more concisely by wedging with $dx_i\wedge dx_j$ and summing up: \begin{equation}\label{hyperkahlerformdoubledual} \omega_k^{\vee\vee}=\frac{1}{8\pi^2}\int_{X^\vee} \Tr ( F(\nabla^{univ,t}) \wedge F(\nabla^{univ,t}) ) \wedge \omega_k^{X^\vee}. \end{equation} Here we pick up another minus sign when we commute $dx_j$ with the coupled 1-form $\iota_{\frac{\partial}{\partial x_i}} F(\nabla^{univ,t}) $, and an extra factor $\frac{1}{2}$ because \[\omega_k^{\vee\vee}=\frac{1}{2}\sum_{i,j}\omega_k^{\vee\vee}(\frac{\partial}{\partial x_i},\frac{\partial}{\partial x_j}) dx_i\wedge dx_j .\] \begin{lem} The 2-forms $\omega^{\vee\vee}_i$ are closed.
\end{lem} \begin{proof} We note that the trace-of-curvature terms are closed on $X\times X^\vee$ by Chern-Weil theory. Thus the integrand $\Tr ( F(\nabla^{univ,t}) \wedge F(\nabla^{univ,t}) ) \wedge \omega_i^{X^\vee}$ is closed on $X\times X^\vee$, and integration along the fibres preserves closedness. \end{proof} \begin{lem} The hyperk\"ahler structure $(g^{\vee\vee}, \omega_i^{\vee\vee})$ agrees with $(g, \omega_i)$. \end{lem} \begin{proof} Both $g$ and $g^{\vee\vee}$ are compatible with the quaternionic actions of $I_1$, $I_2$, $I_3$, using Lemma \ref{infinitesimalvariationofASDconnectioniscompatiblewithquaternionicaction}. Such metrics are determined up to a scalar function $f$, so $g^{\vee\vee}=fg$ is conformal to $g$. Now $\omega_i^{\vee\vee}=f\omega_i$ is closed, so $df\wedge \omega_i=0$, which implies $df=0$, hence $f$ is a constant. Pinning down the constant requires some input from cohomology. By a computation completely analogous to (\ref{Donaldsonmumap}), \[ [\omega_i^{\vee\vee}]= -\frac{1}{2r} p_1(ad(\mathcal{E}))\cup [\omega_i^{X^\vee}]/[X^\vee] \] is given by the adjoint of Donaldson's $\mu$-map $\tilde{\mu}$ acting on $[\omega_i^{X^\vee}]$. Since $\tilde{\mu}$ is an isometry, its adjoint is its inverse, so $[\omega_i^{\vee\vee}]=\tilde{\mu}^{-1}([\omega_i^{X^\vee}])=[\omega_i]$, hence the constant $f=1$. \end{proof} We now prove Theorem \ref{doubledualityhyperkahlerforms}. \begin{proof} Since $g^{\vee\vee}$ is positive definite, we see a posteriori that the infinitesimal variation is nowhere vanishing, so the tautological map $X\to \mathcal{M}^s(v(E'))$ is an immersion. Since $X$ has the right virtual dimension, this is a local isomorphism once we assume irreducibility/stability. Thus $\mathcal{M}^s(v(E'))$ contains a compact component, namely the image of $X$, so $\mathcal{M}^s(v(E'))$ is the image by Theorem \ref{Mukaidualalgebraic}. But $\mathcal{M}^s(v(E'))$ is a hyperk\"ahler surface, so it cannot be a nontrivial unramified quotient of $X$.
We see $X=\mathcal{M}^s(v(E'))$. The hyperk\"ahler structure induced by the moduli interpretation is precisely $(g^{\vee\vee}, \omega_i^{\vee\vee})$, so agrees with $(g, \omega_i)$. \end{proof} \section{The Nahm transform on K3 surfaces}\label{TheNahmtransformonK3surfaces} The aim of this Chapter is to develop the theory of the Nahm transform (a.k.a.\ the Fourier-Mukai transform) on K3 surfaces in the differential geometric setting, by adapting the work of Braam and van Baal \cite{BraamBaal} on the Nahm transform over the 4-torus. There are two major technical difficulties: the non-flat nature of the ambient metric, and the non-explicit nature of the universal connection. Section \ref{Nahmtransform} defines the \textbf{Nahm transform} for irreducible HYM connections on bundles of certain slopes over the hyperk\"ahler K3 surface, gives a brief comparison with the algebraic viewpoint, and then proceeds to use spinor methods to show that the \textbf{Hermitian Yang-Mills} property is preserved by the Nahm transform. To prove this, we use a formula for the curvature of the transformed connection in terms of certain Green operators, as in \cite{BraamBaal}, and the heart of the argument is to introduce the natural action of complex structure operators on positive spinors (\cf the Appendix), which commute with the Green operators. This allows us to reduce the proof to a local calculation, where the key input is the properties of the universal connection on $\mathcal{E}\to X\times X^\vee$ from Theorem \ref{universalconnectionmainproperties}. The rest of the Chapter aims to prove the \textbf{Fourier inversion theorem} (\cf \ref{Fourierinversiontheorem}) for the Nahm transform by differential geometric calculations, and is by far the most difficult part of this paper. This means that the Nahm transform, suitably applied twice (the `\textbf{inverse Nahm transform}'), gives back the original bundle with its HYM connection.
A notable technical feature is the large amount of cancellation that takes place to remove many complicated expressions. Section \ref{inverseNahmtransformcomparisonmap} defines a \textbf{canonical comparison map} between the original bundle and the inverse Nahm transform, built out of Green operators and curvature operators on coupled spinors. We then show this is well defined, namely that the map lands in the correct target, by verifying a coupled Dirac equation. Section \ref{Injectiveisometry} shows that the canonical comparison map is a \textbf{Hermitian isometry}. The Hermitian inner product on the inverse Nahm transform takes the form of a correlator type expression, which is converted via functional analytic calculations into the integral of a Laplacian type expression involving some singular Schwartz kernels. The proof then proceeds by deriving delicate asymptotic formulae for the singularity, and evaluating them in the limit. Section \ref{Comparingtheconnections} \textbf{compares connections} on the original bundle and the inverse Nahm transform, and shows that they agree under the canonical comparison map. The proof extends the asymptotic calculation in Section \ref{Injectiveisometry} to higher order. \subsection{The Nahm transform}\label{Nahmtransform} The aim of this Section is to translate the Fourier-Mukai transform for stable vector bundles $\mathcal{F}$ with the same slope as $E$ on the K3 surface $X$, into the language of the Nahm transform, in analogy with the work of Braam and van Baal \cite{BraamBaal} in the 4-torus case. The main result says that under certain conditions the HYM condition is preserved under the Nahm transform; this is proved using spinor techniques. We briefly review the \textbf{algebraic viewpoint} for completeness, which will not be substantially used.
Let $(X, g, \omega_1, \omega_2, \omega_3)$ be a hyperk\"ahler K3 surface, and $X^\vee$ a Mukai dual K3 surface, such that there is a universal bundle $\mathcal{E}\to X\times X^\vee$ with a universal connection $\nabla^{univ}$, as in Theorem \ref{universalconnectionmainproperties}. Let $\mathcal{F}$ be a stable holomorphic vector bundle with the same slope as $E$, such that $\mathcal{F}\not \simeq \mathcal{E}|_\tau$ for any $\tau\in X^\vee$, which by the properties of stable vector bundles implies $\text{H}^0(X,\underline\Hom ( \mathcal{E}|_\tau, \mathcal{F}) )=0 $ and $\text{H}^0(X, \underline\Hom (\mathcal{F}, \mathcal{E}|_\tau))=\text{Ext}^0(\mathcal{F},\mathcal{E}|_\tau)=0$. By Serre duality, we also have $\text{Ext}^2(\mathcal{E}|_{\tau}, \mathcal{F})\simeq \text{Ext}^0( \mathcal{F}, \mathcal{E}|_\tau)^*=0$. The Fourier-Mukai transform on sheaves produces a vector bundle $R^1pr_{X^\vee *}( \mathcal{E}^\vee\otimes pr^*_{X}\mathcal{F} )$ over $X^\vee$, whose fibres at $\tau \in X^\vee$ are $ \text{H}^1(X, \underline{\Hom}(\mathcal{E}|_{\tau}, \mathcal{F} ) )\simeq \text{Ext}^1( \mathcal{E}|_{\tau} , \mathcal{F} ) $ by the base change theorem. Here $\underline{\Hom}$ denotes the holomorphic $\Hom$ bundle. From the \textbf{differential geometric viewpoint}, the Fourier-Mukai transform can be understood in terms of the Nahm transform (compare \cite{BraamBaal}, Section 1). We think of the stable holomorphic bundle $\mathcal{F}$ equivalently as a vector bundle $\mathcal{F}$ with an irreducible HYM connection $ \alpha$. (This notation is to avoid confusion with connections on $\mathcal{E}$.) Another description, which does not favour any complex structure, is that $\alpha$ induces an ASD $PU(rk(\mathcal{F}))$ connection, and the central curvature satisfies \[ \frac{\sqrt{-1}}{ 2\pi} \Tr F_\alpha=rk(\mathcal{F}) \mathcal{B}_\mathcal{F}, \] where $\mathcal{B}_\mathcal{F}$ is the harmonic 2-form representing the class $\frac{1}{rk(\mathcal{F})} c_1(\mathcal{F})$.
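The ASD condition invoked here has a pointwise linear-algebra characterization that drives the spinor arguments later in this Section: a 2-form on a hyperk\"ahler 4-manifold is anti-self-dual exactly when it is invariant under all three complex structures. The following numpy sketch checks this in the flat model $\mathbb{R}^4\cong\mathbb{H}$; the flat model and the concrete matrices are illustrative assumptions, the pointwise statement being what is actually used on the K3.

```python
import numpy as np
from itertools import permutations

# Left-multiplication complex structures on R^4 = H (basis 1, i, j, k).
I1 = np.array([[0., -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])
I2 = np.array([[0., 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]])
I3 = np.array([[0., 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]])
Iks = [I1, I2, I3]

def sign(p):
    """Sign of a permutation given as a tuple, by counting inversions."""
    s, p = 1, list(p)
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

EPS = {p: sign(p) for p in permutations(range(4))}

def star(B):
    """Hodge star on 2-forms, encoded as antisymmetric 4x4 matrices."""
    S = np.zeros((4, 4))
    for (i, j, k, l), e in EPS.items():
        S[i, j] += 0.5 * e * B[k, l]
    return S

# The Kahler forms w_k(u, v) = g(I_k u, v) have matrix I_k^T and are
# self-dual in this model.
for Ik in Iks:
    assert np.allclose(star(Ik.T), Ik.T)

# Split a random 2-form into self-dual and anti-self-dual parts.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M - M.T
Aminus = 0.5 * (A - star(A))
Aplus = 0.5 * (A + star(A))
assert np.allclose(star(Aminus), -Aminus)

# The ASD part is invariant under every I_k ...
for Ik in Iks:
    assert np.allclose(Ik.T @ Aminus @ Ik, Aminus)
# ... while a generic self-dual part is not.
assert not all(np.allclose(Ik.T @ Aplus @ Ik, Aplus) for Ik in Iks)
```

This is the finite-dimensional shadow of the manipulation that converts the ASD condition into invariance under the operators $I_k$, which can then be pushed past the Green operators.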
The slope condition is equivalent to \[ (\mathcal{B}_\mathcal{F}-\mathcal{B})\wedge \omega_i=0, \] where $\mathcal{B}$ is the harmonic representative of $\frac{1}{rk(E)} c_1(E)$ as in Theorem \ref{universalconnectionmainproperties}. Let $S^+_X, S^{-} _X\to X$ be the spinor bundles over $X$. This allows us to define the Dirac operators, by coupling $\alpha$ to the universal connection $\nabla^{univ}$ and the Levi-Civita connection on spinors: \[ \begin{split} & D^+_{\alpha_\tau }: \Gamma(X, S^+_X \otimes\Hom( \mathcal{E}|_{\tau}, \mathcal{F} ) ) \to \Gamma(X, S^-_X \otimes\Hom( \mathcal{E}|_{\tau}, \mathcal{F} ) ), \\ & D^-_{ \alpha_\tau }: \Gamma(X, S^-_X \otimes\Hom( \mathcal{E}|_{\tau}, \mathcal{F} ) ) \to \Gamma(X, S^+_X \otimes\Hom( \mathcal{E}|_{\tau}, \mathcal{F} ) ). \end{split} \] These two operators are formally adjoint to each other. It is important that the coupled connection $\alpha_\tau$ on $\text{Hom}(\mathcal{E}|_\tau, \mathcal{F})$ is an ASD unitary connection, rather than merely an ASD projective unitary connection. \begin{rmk} Using the arguments in \cite{DonaldsonKronheimer} Section 3.2, the Dirac operator is closely related to the Dolbeault complex, and its kernel can be identified with the Dolbeault cohomology groups: \[ \ker D^+_{\alpha_\tau }\simeq \text{H}^0(X, \underline{\Hom} (\mathcal{E}|_\tau, \mathcal{F} ) ) \oplus \text{H}^2(X, \underline{\Hom} (\mathcal{E}|_\tau, \mathcal{F} ) ) , \] \[ \ker D^-_{\alpha_\tau }\simeq \text{H}^1(X, \underline{\Hom} (\mathcal{E}|_\tau, \mathcal{F} ) ) . \] \end{rmk} \begin{lem}\label{vanishingkernellemma} If the topological type of $\mathcal{F}$ is different from that of $E$, then $\ker D^+_{\alpha_\tau } \subset \Gamma(X, S^+_X \otimes\Hom( \mathcal{E}|_\tau, \mathcal{F} ) )$ is zero for all $\tau\in X^\vee$.
\end{lem} \begin{proof}\label{Lichnerowichformulaargument} By the Lichnerowicz formula \[ D^-_{ \alpha_\tau } D^+_{\alpha_\tau }= \nabla^*_{\alpha_\tau} \nabla_{\alpha_\tau} + F_{\alpha_\tau}^+ + \frac{1}{4}R, \] where $ F_{\alpha_\tau}^+$ denotes the action of the curvature tensor on positive spinors, and $R$ is the scalar curvature. On a hyperk\"ahler K3 surface, $R$ is zero, and the ASD condition means $F_{\alpha_\tau}^+=0$. By a Bochner-type argument, we see that $\ker D^+_{\alpha_\tau }$ consists entirely of parallel coupled spinor fields. For a K3 surface $X$, the positive spin bundle $S^+_X$ is covariantly trivial. So using $\text{H}^0(X,\underline\Hom( \mathcal{E}|_{\tau}, \mathcal{F} ) ) =0$, this kernel must vanish. \end{proof} The vector spaces $\hat{\mathcal{F} }|_\tau=\ker D^-_{ \alpha_\tau }$ therefore fit together into a vector bundle $\hat{\mathcal{F}}\to X^\vee$. This can be thought of as a subbundle of the infinite dimensional Hermitian vector bundle $\hat{H}\to X^\vee$, whose fibres are $\hat{H}_\tau= \Gamma( X, S^-_X \otimes \Hom( \mathcal{E}|_{\tau}, \mathcal{F} ) )$. This bundle $\hat{H}$ has a natural covariant derivative $\hat{d}$, induced by the universal connection on $\mathcal{E}$ in the $X^\vee$ direction. Let $\hat{\alpha}$ be the subbundle connection on $\hat{\mathcal{F}}$, more concretely described by the covariant derivative $\hat{\nabla}=P\hat{d}$, where $P: \hat{H}\to \hat{\mathcal{F}}$ is the $L^2$ projection. Using Hodge theory, the $L^2$ projection operator is expressed as \[ P=1- D_{\alpha_\tau}^+ G_\tau D^-_{\alpha_\tau} \] with $G_\tau= ( D^-_{\alpha_\tau} D_{\alpha_\tau}^+ )^{-1}= ( \nabla^*_{\alpha_\tau}\nabla_{\alpha_\tau} )^{-1} $ acting on $\Gamma( X, \Hom ( \mathcal{E}|_\tau, \mathcal{F} ) \otimes S^+_X ) $. \begin{Def} The \textbf{Nahm transform} of $(\mathcal{F}, \alpha)$ is the pair of vector bundle with connection $(\hat{\mathcal{F}}, \hat{\alpha})$.
\end{Def} We follow \cite{BraamBaal} with some modifications to show \begin{thm}\label{NahmtransformperservesASD} The Nahm transform $(\hat{\mathcal{F}}, \hat{\alpha})$ is also a HYM connection, with the same slope as $E'\to X$ defined in Section \ref{HyperkahlerperiodsontheMukaidualK3}. \end{thm} \begin{proof} Let $\hat{f}^j(\tau)= \psi^j_\tau (x) \in \Gamma( X, S^-_X \otimes \Hom( \mathcal{E}|_{\tau}, \mathcal{F} ) ) $ with $\tau\in X^\vee$ and $j=1,2,\ldots , rk(\hat{\mathcal{F}})$ be a local orthonormal framing of $\hat{\mathcal{F}}\to X^\vee$. For a section $\hat{s}(\tau)= \sum_j \hat{s}_j \hat{f}^j(\tau)$, with $\hat{s}_j$ being local $C^\infty$ functions on $X^\vee$, one can compute \[ \hat{\nabla} \hat{s} = P\hat{d} \hat{s} =(1-D_{\alpha_\tau}^+ G_\tau D^-_{\alpha_\tau} ) [ \hat{d} \sum \hat{s}_j (\tau) \psi^j_\tau(x) ]. \] In components, we can write \[ \hat{\nabla} \hat{s}= ( d \hat{s}_j + \hat{\alpha}_{jk} \hat{s}_k ) \hat{f}^j, \] where the connection matrix $\hat{\alpha}_{jk}$ of $\hat{\alpha}$ is equal to \[ \hat{\alpha}_{jk}= \langle \hat{f}^j, \hat{d} \hat{f}^k\rangle= \langle \psi^j_\tau, \hat{d} \psi^k_\tau \rangle. \] The curvature matrix is \[ \hat{F}_{ij}= d\hat{\alpha}_{ij} +\sum_k\hat{\alpha}_{ik} \wedge \hat{\alpha}_{kj}= \langle \hat{d} \psi^i_\tau, \wedge \hat{d} \psi^j_\tau \rangle+ \langle \psi^i_\tau, \wedge \hat{d}^2 \psi^j_\tau \rangle + \sum_k \langle \psi^i_\tau, \hat{d} \psi^k_\tau \rangle \wedge \langle \psi^k_\tau, \hat{d} \psi^j_\tau \rangle.
\] Now $\langle \hat{d} \psi^i_\tau, \psi^k_\tau \rangle=-\langle \psi^i_\tau, \hat{d} \psi^k_\tau \rangle$ by the compatibility with the Hermitian structures, so the third term above is recognized as $- \langle P \hat{d} \psi^i_\tau,\wedge \hat{d} \psi^j_\tau \rangle$, and \[ \begin{split} \hat{F}_{ij}= & \langle \hat{d} \psi^i_\tau, \wedge \hat{d} \psi^j_\tau \rangle- \langle P \hat{d} \psi^i_\tau,\wedge \hat{d} \psi^j_\tau \rangle + \langle \psi^i_\tau, \wedge \hat{d}^2 \psi^j_\tau \rangle\\ = & \langle D_{\alpha_\tau}^+ G_\tau D^-_{\alpha_\tau} \hat{d} \psi^i_\tau, \wedge \hat{d} \psi^j_\tau \rangle + \langle \psi^i_\tau, \wedge \hat{d}^2 \psi^j_\tau \rangle\\ = & \langle G_\tau D^-_{\alpha_\tau} \hat{d} \psi^i_\tau, \wedge D_{\alpha_\tau}^- \hat{d} \psi^j_\tau \rangle+ \langle \psi^i_\tau, \wedge \hat{d}^2 \psi^j_\tau \rangle. \end{split} \] But since $ D^-_{\alpha_\tau}\psi^i_\tau=0$, \[ D^-_{\alpha_\tau} \hat{d} \psi^i_\tau=[ D^-_{\alpha_\tau}, \hat{d} ]\psi^i_\tau. \] Computing this commutator $[ D^-_{\alpha_\tau}, \hat{d} ]$ requires extra care compared to the $T^4$ case in \cite{BraamBaal}. In coordinates, this is given by \[ [ D^-_{\alpha_\tau}, \hat{d} ]= [ \sum_{\mu} c(dx_\mu) \nabla^{univ, t}_{\frac{\partial }{\partial x_\mu}}, \sum_\nu d\tau_\nu \nabla^{univ,t}_{ \frac{\partial }{\partial \tau_\nu}} ]. \] Here $c(dx_\mu)$ means the Clifford multiplication action of the 1-form $dx_\mu$ on spinors, and the covariant derivatives are essentially only acting on the dual of the universal bundle $\mathcal{E}$, not on $\mathcal{F}$, nor the spinor factor, which is why we see the transposed universal connection. We now recognize that \[ \Omega= \sum_{\mu, \nu} [ \nabla^{univ}_{\frac{\partial }{\partial x_\mu}}, \nabla^{univ}_{ \frac{\partial }{\partial \tau_\nu}} ] dx_\mu \wedge d\tau_\nu \] is part of the curvature of the universal connection on $\mathcal{E}\to X\times X^\vee$. 
We can now write \[ D^-_{\alpha_\tau} \hat{d} \psi^i_\tau=[ D^-_{\alpha_\tau}, \hat{d} ]\psi^i_\tau= -\Omega^t\cdot{} \psi^i_\tau, \] where by writing $-\Omega^t\cdot{} \psi^i_\tau$ we need to make cotangent vectors $dx_\mu$ on $X$ act on spinors by Clifford multiplication, take care of the minus transpose on the bundle factor, and leave $d\tau_\nu$ untouched. From this computation, we see the \textbf{curvature matrix} of $\hat{\alpha}$ is \begin{equation}\label{curvaturematrixNahmtransform} \hat{F}_{ij} = \langle G_\tau \Omega^t\cdot{} \psi^i_\tau, \wedge \Omega^t\cdot{} \psi^j_\tau \rangle+ \langle \psi^i_\tau, \wedge \hat{d}^2 \psi^j_\tau \rangle. \end{equation} We claim that this curvature matrix (\ref{curvaturematrixNahmtransform}) is HYM on $X^\vee$. This is more delicate than the flat 4-torus case in \cite{BraamBaal}. We first observe that $\hat{d}^2$ comes from the curvature of $\nabla^{univ}$ in the $X^\vee$ direction, so this contribution to $\hat{F}_{ij}$ is HYM. Its contribution to $\frac{\sqrt{-1}}{2\pi}\Tr \hat{F}$ is $-rk(\hat{\mathcal{F}})\mathcal{B}'$, with $\mathcal{B}'$ as in Theorem \ref{universalconnectionmainproperties}; the reason for the minus sign is that we are using the dualised bundle $\mathcal{E}^\vee$. It remains to show that the term $\langle G_\tau \Omega^t\cdot{} \psi^i_\tau, \wedge \Omega^t\cdot{} \psi^j_\tau \rangle$ is ASD.
Equivalently, for any complex structure $I_k$, we need to show $\langle G_\tau \Omega^t\cdot{} \psi^i_\tau, \wedge \Omega^t\cdot{} \psi^j_\tau \rangle$ is equal to \[ \begin{split} \sum \langle G_\tau \Omega^t(\frac{\partial}{\partial \tau_a}, \frac{\partial}{\partial x_\mu}) c(dx_\mu) \cdot{} \psi^i_\tau, \Omega^t(\frac{\partial}{\partial \tau_b}, \frac{\partial}{\partial x_\nu}) c(dx_\nu) \cdot{} \psi^j_\tau \rangle I_kd\tau_a \wedge I_k d\tau_b \\ = \sum \langle G_\tau \Omega^t(I_k \frac{\partial}{\partial \tau_a}, \frac{\partial}{\partial x_\mu}) c(dx_\mu) \cdot{} \psi^i_\tau, \Omega^t(I_k\frac{\partial}{\partial \tau_b}, \frac{\partial}{\partial x_\nu}) c(dx_\nu) \cdot{} \psi^j_\tau \rangle d\tau_a \wedge d\tau_b. \end{split} \] Using the triholomorphic property of $\Omega$, \[ \Omega( I_k v, w )= -\Omega( v, I_k w ), \quad v\in TX^\vee, w\in TX, \] the above is \[ \sum \langle G_\tau \Omega^t( \frac{\partial}{\partial \tau_a}, \frac{\partial}{\partial x_\mu}) c(I_k dx_\mu) \cdot{} \psi^i_\tau, \Omega^t(\frac{\partial}{\partial \tau_b}, \frac{\partial}{\partial x_\nu}) c(I_kdx_\nu) \cdot{} \psi^j_\tau \rangle d\tau_a \wedge d\tau_b. \] A source of difficulty is that, because the metric is not flat, we cannot directly commute the Green operator with Clifford multiplication $c(dx_\mu)$. The remedy is quite subtle, and we need to recall some \textbf{spin geometry} in dimension 4 (\cf the Appendix). Since $X$ is hyperk\"ahler, the complex structures $I_1$, $I_2$, $I_3$ can be made to act on $S^+_X$, and act trivially on $S^-_X$, in a way compatible with the Clifford multiplication (see (\ref{complexstructureactiononpositivespin1}), (\ref{complexstructureactiononpositivespin2})). The crucial observation is that, because $G_\tau$ does not act on the spinor part, it commutes with the operator $I_k^{S^+}$. 
Thus the above is \[ \begin{split} &\sum \langle G_\tau I_k^{S^+} \Omega^t( \frac{\partial}{\partial \tau_a}, \frac{\partial}{\partial x_\mu}) c( dx_\mu) \cdot{} \psi^i_\tau, I_k^{S^+}\Omega^t(\frac{\partial}{\partial \tau_b}, \frac{\partial}{\partial x_\nu}) c(dx_\nu) \cdot{} \psi^j_\tau \rangle d\tau_a \wedge d\tau_b \\ =& \sum \langle I_k^{S^+} G_\tau \Omega^t( \frac{\partial}{\partial \tau_a}, \frac{\partial}{\partial x_\mu}) c( dx_\mu) \cdot{} \psi^i_\tau, I_k^{S^+}\Omega^t(\frac{\partial}{\partial \tau_b}, \frac{\partial}{\partial x_\nu}) c(dx_\nu) \cdot{} \psi^j_\tau \rangle d\tau_a \wedge d\tau_b \\ =& \langle G_\tau \Omega^t \cdot{} \psi^i_\tau, \Omega^t \cdot{} \psi^j_\tau \rangle \end{split} \] as required. This verifies the ASD condition on $\langle G_\tau \Omega^t \cdot{} \psi^i_\tau, \Omega^t \cdot{} \psi^j_\tau \rangle $. \end{proof} \subsection{The inverse Nahm transform and the comparison map}\label{inverseNahmtransformcomparisonmap} Let us assume that the HYM connections on $E'\simeq \mathcal{E}^\vee|_x\to X^\vee$ parametrised by $x\in X$ are all irreducible, as in Section \ref{HyperkahlerperiodsontheMukaidualK3}. Let $(\hat{ \mathcal{F} }, \hat{\alpha})$ be the Nahm transform of $(\mathcal{F}, \alpha)$, which has the same slope as $E'\to X^\vee$. Assume $\text{H}^0(X^\vee, \underline{\Hom}(\mathcal{E}^\vee|_x, \hat{\mathcal{F}} ))=0$ for all $x$; then we can perform the \textbf{inverse Nahm transform construction} starting from $\hat{\mathcal{F} }$. We consider the coupled Dirac operators \[ \begin{split} D^+_{\hat{\alpha}_x} : \Gamma(X^\vee, S^+_{X^\vee } \otimes \Hom(\mathcal{E}^\vee|_x, \hat{\mathcal{F} } ))\to \Gamma(X^\vee, S^-_{X^\vee } \otimes \Hom(\mathcal{E}^\vee|_x, \hat{\mathcal{F} } )), \\ D^-_{\hat{\alpha}_x} : \Gamma(X^\vee, S^-_{X^\vee } \otimes \Hom(\mathcal{E}^\vee|_x, \hat{\mathcal{F} } ))\to \Gamma(X^\vee, S^+_{X^\vee } \otimes \Hom(\mathcal{E}^\vee|_x, \hat{\mathcal{F} } )).
\end{split} \] By Lemma \ref{vanishingkernellemma}, the kernel of $D^+_{\hat{\alpha}_x }$ vanishes, and the kernel of $D^-_{\hat{\alpha}_x}$ fits into a vector bundle $\hat{\hat{\mathcal{F}}}\to X$, equipped with a natural connection $\hat{\hat{\alpha}}$. The pair $(\hat{\hat{\mathcal{F}}}, \hat{\hat{\alpha}}) $ is called the inverse Nahm transform of $(\hat{\mathcal{F}}, \hat{\alpha})$. The aim of this Section is to describe a \textbf{canonical comparison map} between the two bundles $\mathcal{F}\to X$ and $\hat{\hat{\mathcal{F}}}\to X$, by adapting ideas in \cite{BraamBaal} where a similar construction is made over $T^4$. The main complication in our context is that the K3 metric is not flat. Still, the positive spinor bundles on $X$ and $X^\vee$ are flat, and their spaces of covariantly constant spinors are canonically identified. Given $x\in X$ and $f\in \mathcal{F}|_x^*$, we will construct out of $f$ a canonical section \[ G\Psi(f)\in \Gamma(X^\vee, \hat{\mathcal{F}}^*\otimes S^-_{X^\vee}\otimes \mathcal{E}^\vee|_x). \] At any $\tau\in X^\vee$, let $s\in \hat{\mathcal{F} }|_\tau \subset \Gamma(X, \mathcal{F}\otimes \mathcal{E}^\vee|_\tau\otimes S^-_X)$. We can make $\Omega$ act on $s$ as in Section \ref{Nahmtransform}, to obtain a section \[ \Omega^t\cdot s\in \Gamma(X, \mathcal{E}^\vee|_\tau \otimes \mathcal{F} \otimes S^+_X\otimes T^*_\tau X^\vee)= \Gamma(X, \mathcal{E}^\vee|_\tau \otimes \mathcal{F} \otimes S^+_X)\otimes T^*_\tau X^\vee. \] We make the Green operator $G_\tau$ act on the $\Gamma(X, \mathcal{E}^\vee|_\tau \otimes \mathcal{F} \otimes S^+_X)$ factor of $\Omega^t\cdot s$ and leave the $T^*_\tau X^\vee$ factor untouched. We then contract $T_\tau^* X^\vee$ with $S^+_X$ by Clifford multiplication. This produces a section in $ \Gamma(X, \mathcal{E}^\vee|_\tau \otimes \mathcal{F} )\otimes S^-_{ X^\vee}|_\tau. $ Now we evaluate the $\Gamma(X, \mathcal{E}^\vee|_\tau \otimes \mathcal{F} )$ factor at the point $x\in X$ against $f\in \mathcal{F}|_x^*$.
The result lands in $ \mathcal{E}^\vee|_{\tau,x} \otimes S^-_{ X^\vee}|_\tau$. This algorithm defines an element of $\hat{\mathcal{F}}^*\otimes \mathcal{E}^\vee|_{\tau,x} \otimes S^-_{ X^\vee}|_\tau$. When we vary $\tau$ we get a section of $ \Gamma(X^\vee, \hat{\mathcal{F }}^*\otimes S^-_{X^\vee}\otimes \mathcal{E}^\vee|_x) $ as promised, depending only on $f$. We denote this section as $G\Psi(f)$. The main result of this Section is \begin{prop}\label{comparisonmapDiracequation}(Compare Proposition 2.1 in \cite{BraamBaal}) The canonical section $G\Psi(f)$ satisfies the Dirac equation. \end{prop} This allows us to build the comparison map $\mathcal{F}\to \hat{\hat{\mathcal{F} }}$. Recall from the proof of Theorem \ref{NahmtransformperservesASD} the local orthonormal frame $\hat f^j= \psi_\tau^j$ of $\hat {\mathcal{F} }$. The section $G\Psi (f)$ evaluates on $\psi_\tau^j $ and outputs a local section of $S^-_{X^\vee}\otimes \mathcal{E}^\vee|_x$. There is an antilinear bundle automorphism $\epsilon$ acting on $S^-_{X^\vee}$ (\cf Appendix), which canonically extends to an antilinear bundle map $\epsilon: S^-_{X^\vee}\otimes \mathcal{E}^\vee|_x \to S^-_{X^\vee}\otimes \mathcal{E}|_x$ by using the Hermitian structure on $\mathcal{E}|_x$. Thus we can write a well defined section of $\hat{\mathcal{F} }\otimes S^-_{X^\vee}\otimes \mathcal{E}|_x\to X^\vee$, whose value at $\tau$ is \begin{equation}\label{comparisonmap} u_x(f) (\tau)= \frac{1}{2}\sum_j \epsilon( G\Psi(f)|_\tau (\psi^j_\tau ) ) \hat{f}^j. \end{equation} This section $u_x(f)$ depends on $f$ in an antilinear fashion. Therefore using the inner product on $\mathcal{F}|_x$, this construction gives a complex linear map from $\mathcal{F}|_x$ to $\Gamma(X^\vee,\hat{\mathcal{F} }\otimes S^-_{X^\vee}\otimes \mathcal{E}|_x )$. \begin{thm}\label{comparisonmapDiracequation1} The section $u_x(f)$ defined above satisfies the Dirac equation, and therefore is an element of $\hat{\hat{\mathcal{F}}}$. 
\end{thm} \begin{proof} This follows immediately from Proposition \ref{comparisonmapDiracequation} because $\epsilon$ preserves the Clifford multiplication and the Hermitian connection, so preserves the Dirac equation. \end{proof} The rest of this Section is devoted to the proof of Proposition \ref{comparisonmapDiracequation}. This requires us to understand how to take covariant derivatives of $G\Psi(f)$ as $\tau\in X^\vee$ varies. We follow the definition of $G\Psi(f)$. The most essential step is to compute $G_\tau \Omega^t\cdot s$. We can regard $s\mapsto G_\tau \Omega^t\cdot s$ as an element in the Hom bundle from $\hat{\mathcal{F}}$ to the infinite rank bundle with fibre $\Gamma(X, \mathcal{E}^\vee|_\tau \otimes \mathcal{F}\otimes S^+_X)\otimes T^*_\tau X^\vee$. The covariant derivative on this Hom bundle is induced from the covariant derivative on $\hat{\mathcal{F}}$ and the covariant derivative on the infinite rank bundle; the latter is trivial on the $\mathcal{F}\otimes S^+_X$ factor, agrees with the Levi-Civita connection on the $T^*X^\vee$ factor, and is induced from the transposed universal connection $-\nabla^{univ,t}$ on the $\mathcal{E}^\vee$ factor. 
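The projection $P=1- D_{\alpha_\tau}^+ G_\tau D^-_{\alpha_\tau}$ entering the computation below has a transparent finite-dimensional model: for a surjective matrix playing the role of $D^-$, with adjoint playing $D^+$, the same formula gives the orthogonal projection onto $\ker D^-$. A minimal numpy check (the matrix sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
# D_minus : C^5 -> C^3 plays the role of D^-; it is surjective, so that
# D^- D^+ is invertible (the analogue of ker D^+ = 0).
D_minus = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
D_plus = D_minus.conj().T                     # the adjoint, playing D^+

G = np.linalg.inv(D_minus @ D_plus)           # "Green operator"
P = np.eye(5) - D_plus @ G @ D_minus

assert np.allclose(P @ P, P)                  # idempotent
assert np.allclose(P, P.conj().T)             # self-adjoint, so orthogonal
assert np.allclose(D_minus @ P, 0)            # image lies in ker D^-
```

Invertibility of $D^-D^+$ here mirrors the role of the vanishing-kernel lemma in making the Green operator, and hence the subbundle connection, well defined.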
In concrete formulae, we need to compute \[ \nabla_{\frac{\partial}{\partial \tau_i}} \{ G_\tau (\Omega^t\cdot s) \}=[-\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau ](\Omega^t\cdot s )+G_\tau( \nabla_{\frac{\partial}{\partial \tau_i} } \Omega^t )\cdot s+ G_\tau ( \Omega^t\cdot (-\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }s) ) \] on the infinite rank bundle, and subtract off \[ \begin{split} G_\tau ( \Omega^t\cdot\{ \nabla^{\hat{\mathcal{F} } }_{\frac{\partial}{\partial \tau_i}} s \} )&=G_\tau \Omega^t\cdot\{ -\nabla^{univ,t }_{\frac{\partial}{\partial \tau_i}} s- D^+_{\alpha_\tau} G_\tau D^-_{\alpha_\tau}( -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i}} s ) \} \\ &=G_\tau \Omega^t\cdot\{ -\nabla^{univ,t }_{\frac{\partial}{\partial \tau_i}} s- D^+_{\alpha_\tau} G_\tau [D^-_{\alpha_\tau}, -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i}}]s \} \\ &= G_\tau \Omega^t\cdot\{ -\nabla^{univ,t }_{\frac{\partial}{\partial \tau_i}} s- D^+_{\alpha_\tau} G_\tau (\iota_{\frac{\partial}{\partial \tau_i}}\Omega^t)\cdot s \} . \end{split} \] The above equation uses the description of the connection on $\hat{\mathcal{F} }$, as discussed in Section \ref{Nahmtransform}. After this subtraction, we get \begin{equation}\label{GPsiderivativeintermediatestep} [-\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau ](\Omega^t\cdot s )+ G_\tau \Omega^t\cdot\{ D^+_{\alpha_\tau} G_\tau (\iota_{\frac{\partial}{\partial \tau_i}}\Omega^t)s \}+G_\tau( \nabla_{\frac{\partial}{\partial \tau_i} } \Omega^t )\cdot s . \end{equation} The task is to understand the individual terms. 
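The commutators $[-\nabla^{univ,t}_{\partial/\partial \tau_i}, G_\tau]$ appearing in the first term are computed in the next lemma; as remarked there, the mechanism is the formula for the derivative of an inverse matrix, $\partial_t A(t)^{-1}=-A^{-1}\dot{A}A^{-1}$. A finite-dimensional numpy sketch, where the family `laplacian(t)` is an arbitrary stand-in for the coupled Laplacian $\nabla^*_{\alpha_\tau}\nabla_{\alpha_\tau}$ as $\tau$ varies:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
M0, M1 = rng.standard_normal((2, n, n))

def laplacian(t):
    # A smooth family of positive-definite matrices, standing in for the
    # coupled Laplacian as the parameter tau varies.
    M = M0 + t * M1
    return M.T @ M + np.eye(n)

def green(t):
    # The finite-dimensional "Green operator" is just the inverse.
    return np.linalg.inv(laplacian(t))

t, h = 0.3, 1e-5
dG = (green(t + h) - green(t - h)) / (2 * h)          # d/dt of G
dL = (laplacian(t + h) - laplacian(t - h)) / (2 * h)  # d/dt of the Laplacian
# Derivative of the inverse: dG = -G (dL) G.
assert np.allclose(dG, -green(t) @ dL @ green(t), atol=1e-6)
```

In the lemma the variation of the Laplacian is identified with $2(-\iota_{\partial/\partial\tau_i}\Omega^t,\nabla_{\alpha_\tau})$, which is what produces the curvature-contraction operators sandwiched between two Green operators.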
\begin{lem}(Compare Lemma 2.2 in \cite{BraamBaal}, `partial derivative of the Green operator') The commutator \[[-\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau ]=2 G_\tau ( -\iota_{\frac{\partial}{\partial \tau_i} }\Omega^t , \nabla_{\alpha_\tau} ) G_\tau,\] where $( -\iota_{\frac{\partial}{\partial \tau_i} }\Omega^t , \nabla_{\alpha_\tau} )=\sum_{\mu}-\Omega^t( \frac{\partial}{\partial \tau_i} , \frac{\partial}{\partial x_\mu}) \nabla^{\alpha_\tau}_ \frac{\partial}{\partial x_\mu} $ for an orthonormal basis $\frac{\partial}{\partial x_\mu}$; in other words, we contract the 1-form part of $ -\iota_{\frac{\partial}{\partial \tau_i} }\Omega^t$ with $\nabla_{\alpha_\tau} $. \end{lem} \begin{proof} We compute the variation of the Laplacian $[-\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , \nabla^*_{\alpha_\tau} \nabla_{\alpha_\tau} ]$. By the Jacobi identity, \[ [ \nabla^*_{\alpha_\tau} \nabla_{\alpha_\tau} , -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } ]= \nabla^*_{\alpha_\tau}[ \nabla_{\alpha_\tau}, -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } ]+[ \nabla^*_{\alpha_\tau}, -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } ] \nabla_{\alpha_\tau}. \] The first term is $\nabla^*_{\alpha_\tau}\circ \iota_{\frac{\partial}{\partial \tau_i} } \Omega^t$, where $\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t$ should be understood as a pointwise curvature type operator acting on sections in $\Gamma(X, \mathcal{F}\otimes \mathcal{E}^\vee|_\tau)$. Since $\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t$ is in the Coulomb gauge (\cf the proof of Lemma \ref{infinitesimalvariationofASDconnectionisinCoulumbgauge} and switch the role of $X$ and $X^\vee$), we get the operator identity \[ \nabla^*_{\alpha_\tau}[ \nabla_{\alpha_\tau}, -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } ]=( -\iota_{\frac{\partial}{\partial \tau_i} }\Omega^t , \nabla_{\alpha_\tau} ).
\] Now for the second term in the Jacobi identity, we compute \[ [ \nabla^*_{\alpha_\tau}, -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } ] =\sum_{\mu} \iota_{\frac{\partial}{\partial x_\mu} } \circ \Omega^t( \frac{\partial}{\partial x_\mu},\frac{\partial}{\partial \tau_i} ), \] so the second term $[ \nabla^*_{\alpha_\tau}, -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } ] \nabla_{\alpha_\tau}=( -\iota_{\frac{\partial}{\partial \tau_i} }\Omega^t , \nabla_{\alpha_\tau} ).$ Thus \[ [ \nabla^*_{\alpha_\tau} \nabla_{\alpha_\tau} , -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } ]=2( -\iota_{\frac{\partial}{\partial \tau_i} }\Omega^t , \nabla_{\alpha_\tau} ), \] and multiplying $G_\tau$ both on the left and the right gives us the Lemma. The reader can understand this as essentially the formula for the derivative of the inverse matrix. \end{proof} \begin{lem}\label{computationOmegaDirac} The operator \[ \iota_{\frac{\partial}{\partial \tau_i} }\Omega^t\cdot D^+_{\alpha_\tau}=-( \iota_{\frac{\partial}{\partial \tau_i} }\Omega^t , \nabla_{\alpha_\tau} )+\sum_{k=1}^3 I_k^{S^+} ( \iota_{I_k\frac{\partial}{\partial \tau_i} }\Omega^t , \nabla_{\alpha_\tau} ). \] \end{lem} \begin{proof} We can write using a pointwise orthonormal frame $\frac{\partial}{\partial x_\mu}$ that \[ \iota_{\frac{\partial}{\partial \tau_i} }\Omega^t\cdot= \sum_{\mu} \Omega^t(\frac{\partial}{\partial \tau_i}, \frac{\partial}{\partial x_\mu} )c(\frac{\partial}{\partial x_\mu} ), \quad D^+_{\alpha_\tau} =\sum_{\nu}c(\frac{\partial}{\partial x_\nu} )\nabla^{\alpha_\tau}_{\frac{\partial}{\partial x_\nu} } . \] Now we write the composed operator $\iota_{\frac{\partial}{\partial \tau_i} }\Omega^t\cdot D^+_{\alpha_\tau} $ as the sum of $4$ parts, corresponding to $\frac{\partial}{\partial x_\mu}=\frac{\partial}{\partial x_\nu}$, and $\frac{\partial}{\partial x_\mu}=I_k\frac{\partial}{\partial x_\nu}$, for $k=1,2,3$. 
This leads to our claimed formula using the triholomorphic property of $\Omega^t$ and the properties of $I_k^{S^+}$. The manipulations are similar to the proof of Theorem \ref{NahmtransformperservesASD}. We leave the details to the reader as an exercise. \end{proof} The above two lemmas give \begin{cor}\label{computationGOmegaDGOmega} The term $ G_\tau \Omega^t\cdot\{ D^+_{\alpha_\tau} G_\tau (\iota_{\frac{\partial}{\partial \tau_i}}\Omega^t)s \} $ in (\ref{GPsiderivativeintermediatestep}) is equal to \[ \frac{1}{2}\sum_{j} d\tau_j \otimes \{ [ -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_j} } , G_\tau ] (\iota_{\frac{\partial}{\partial \tau_i}}\Omega^t)s +\sum_{k=1}^3 [ \nabla^{univ,t}_{I_k\frac{\partial}{\partial \tau_j} } , G_\tau ] (\iota_{I_k\frac{\partial}{\partial \tau_i}}\Omega^t)s \}. \] \end{cor} To proceed further, we follow the definition of $G\Psi(f)$ to perform the contraction $T^*_\tau X^\vee \otimes S^+_{X^\vee} \to S^-_{X^\vee}$, which by the Leibniz rule is compatible with taking covariant derivatives on $X^\vee$. Then we need to evaluate against the covector $f\in \mathcal{F}|_x^*$ to get the covariant derivative of $G\Psi(f)$; this is a trivial step, so we will sometimes suppress it to save some writing. To verify the Dirac equation, we need to Clifford multiply the contraction of (\ref{GPsiderivativeintermediatestep}) by $c(d\tau_i)$, and sum over $i=1,2,3,4$ to evaluate the Dirac operator on $G\Psi(f)$, and eventually show that everything cancels out to give us zero. Let us now analyse how this works for the three terms in (\ref{GPsiderivativeintermediatestep}). The first term gives $\sum_{i} c(d\tau_i) [- \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau ] (\Omega^t\cdot s)$.
The second term gives \[ \begin{split} &\sum_i c(d\tau_i) G_\tau \Omega^t\cdot\{ D^+_{\alpha_\tau} G_\tau (\iota_{\frac{\partial}{\partial \tau_i}}\Omega^t)\cdot s ) \} \\ =& \frac{1}{2}\sum_{i, j}c(d\tau_i) c(d\tau_j) \{ [- \nabla^{univ,t}_{\frac{\partial}{\partial \tau_j} } , G_\tau ] (\iota_{\frac{\partial}{\partial \tau_i}}\Omega^t)s +\sum_{k=1}^3 [ \nabla^{univ,t}_{I_k\frac{\partial}{\partial \tau_j} } , G_\tau ] (\iota_{I_k\frac{\partial}{\partial \tau_i}}\Omega^t)s \}\\ =& \frac{1}{2}\sum_{i, j} \{c(d\tau_i) c(d\tau_j) -\sum_{k=1}^3 c(I_k d\tau_i) c(I_k d\tau_j) \} \{ [- \nabla^{univ,t}_{\frac{\partial}{\partial \tau_j} } , G_\tau ] (\iota_{\frac{\partial}{\partial \tau_i}}\Omega^t)s \} \\ =& -\sum_{i, j} c(d\tau_j) c(d\tau_i) \{ [- \nabla^{univ,t}_{\frac{\partial}{\partial \tau_j} } , G_\tau ] (\iota_{\frac{\partial}{\partial \tau_i}}\Omega^t)s \} \\ =& \sum_{j} c(d\tau_j) [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_j} } , G_\tau ] \Omega^t\cdot s. \end{split} \] Thus the second term exactly cancels with the first term. The third term gives \begin{equation}\label{comparisonmapDiracequationthirdterm} \sum_{i} c(d\tau_i)G_\tau( \nabla_{\frac{\partial}{\partial \tau_i} } \Omega^t )\cdot s= \sum_{i} G_\tau c(d\tau_i)( \nabla_{\frac{\partial}{\partial \tau_i} } \Omega^t )\cdot s, \end{equation} since $G_\tau$ does not interfere with spinors on $X^\vee$. To clarify, the covariant derivative on $\Omega^t$ is defined by combining the universal connection on $ad(\mathcal{E})$, with the Levi-Civita connection for 1-forms on $X^\vee$. 
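To spell out this tensor product connection, for a vector field $V$ on $X^\vee$ the Leibniz rule reads, schematically,
\[
(\nabla_{\frac{\partial}{\partial \tau_i} } \Omega^t)(V)=\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }\big(\Omega^t(V)\big)-\Omega^t\big(\nabla^{LC}_{\frac{\partial}{\partial \tau_i} }V\big),
\]
where $\nabla^{LC}$ denotes the Levi-Civita connection of $X^\vee$.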
We compute by the relation between Clifford multiplication and wedge product \[ \sum_i c(d\tau_i)( \nabla_{\frac{\partial}{\partial \tau_i} } \Omega^t )\cdot s= c( d\tau_i \wedge \nabla_{\frac{\partial}{\partial \tau_i} } \Omega^t )\cdot s + ( d_{A^{univ}}^ { X^\vee, *}\Omega^t )\cdot s= c( d_{A^{univ}}^{X^\vee}\Omega^t )\cdot s , \] where we used the component form of the Yang-Mills equation (\ref{YangMillsoncomponents}) to conclude $ d_{A^{univ}}^ { X^\vee, *}\Omega^t =0$. To understand $d_{A^{univ}}^ { X^\vee}\Omega^t $, we start from the Bianchi identity \[ d_{A^{univ}} F(\nabla^{univ})=0, \] and decompose it into $X, X^\vee$ types, to see \[ d_{A^{univ}}^ { X^\vee}\Omega=-d_{A^{univ}}^ { X} F(\nabla^{univ})^{X^\vee, X^\vee}= -\sum_j dx_j\nabla^{univ}_{ \frac{\partial}{\partial x_j} } F(\nabla^{univ})^{X^\vee, X^\vee}. \] Now since $ F(\nabla^{univ})^{X^\vee, X^\vee}$ is projectively ASD on $X^\vee$ and its trace is independent of $x$, its variation \[\nabla^{univ}_{ \frac{\partial}{\partial x_j} } F(\nabla^{univ})^{X^\vee, X^\vee}\] must be a coupled ASD 2-form, so the action on the positive spinor $dx_j\cdot s$ on $X^\vee$ is zero. This discussion shows the contribution from the third term (\ref{comparisonmapDiracequationthirdterm}) is zero. The upshot is that we have verified the Dirac equation in Proposition \ref{comparisonmapDiracequation}. \subsection{Injective isometry}\label{Injectiveisometry} The aim of this Section is to show the canonical map $u: f\mapsto u_x(f)$ (\cf equation (\ref{comparisonmap})) is an injective isometry, by adapting calculations in \cite{BraamBaal}. This requires us to first unravel the rather difficult definition of the Hermitian norm on $\hat{\hat{\mathcal{F}}}|_x$, which is inherited from $\hat{\hat{\mathcal{F}}}|_x\subset \Gamma(X^\vee, \hat{\mathcal{F}}\otimes S^-_{X^\vee}\otimes \mathcal{E}|_x)$. 
By definition, \[ 4\langle u_x(f), u_x(f')\rangle= \int_{X^\vee} \sum_j \langle \epsilon(G\Psi(f)|_\tau(\psi^j_\tau) ), \epsilon(G\Psi(f')|_\tau(\psi^j_\tau)) \rangle_{ S^-_{X^\vee}|_\tau\otimes \mathcal{E}|_{x,\tau } } d\text{Vol}_{X^\vee}. \] Now we will try to understand the integrand in the above expression. \begin{lem}\label{HermitianisometryLemma1} The integrand \[ \sum_j \langle \epsilon(G\Psi(f)|_\tau(\psi^j_\tau) ), \epsilon(G\Psi(f')|_\tau(\psi^j_\tau)) \rangle_{ S^-_{X^\vee}|_\tau\otimes \mathcal{E}|_{x,\tau } } \] is equal to \[ \Tr_{ \mathcal{E}^\vee|_{x,\tau } } \langle f' , f\circ \Tr_{ S^-_{X^\vee}|_\tau} G_\tau \Omega^t\cdot P_\tau (\Omega^t\cdot)^\dag G_\tau\rangle. \] Here $\langle f' , f\circ \Tr_{ S^-_{X^\vee}|_\tau} G_\tau \Omega^t\cdot P_\tau (\Omega^t\cdot)^\dag G_\tau\rangle$ means evaluating the bundle-valued functional $ f\circ \Tr_{ S^-_{X^\vee}|_\tau} G_\tau \Omega^t\cdot P_\tau (\Omega^t\cdot)^\dag G_\tau$ against $f'\in \mathcal{F}^*|_x\simeq \mathcal{F}|_x$ to obtain a matrix in $\End( \mathcal{E}^\vee|_{x,\tau } )$. \end{lem} \begin{proof} Since $\epsilon$ is an antilinear isometry, we can rewrite the integrand as \[ \sum_j \langle G\Psi(f')|_\tau(\psi^j_\tau) , G\Psi(f)|_\tau(\psi^j_\tau) \rangle_{ S^-_{X^\vee}|_\tau\otimes \mathcal{E}^\vee|_{x,\tau } }. \] This expression involves summing over an orthonormal basis in the kernel of the Dirac equation. We can rewrite this as an infinite sum over an orthonormal basis $\psi'^{j}_\tau$ of $\Gamma(X, \Hom(\mathcal{E}|_\tau, \mathcal{F}\otimes S^-_X))$, by inserting the projection operator $P_\tau=1-D_{\alpha_\tau}^+ G_\tau D^-_{\alpha_\tau}$. The result is \[ \sum_{j=1}^\infty \langle f'\circ G_\tau \Omega^t\cdot \psi'^j_\tau, f\circ G_\tau \Omega^t\cdot P_\tau \psi'^j_\tau\rangle_{ S^-_{X^\vee}|_\tau\otimes \mathcal{E}^\vee|_{x,\tau } }. 
\] We interpret $f\circ G_\tau \Omega^t\cdot P_\tau$ as an $S^-_{X^\vee}|_\tau\otimes \mathcal{E}^\vee|_{x,\tau } $-valued functional on the function space $L^2(X, \Hom(\mathcal{E}|_\tau, \mathcal{F}\otimes S^-_X) )$. Thus the above expression is \begin{equation}\label{correlatorfunction1} \langle f'\circ G_\tau \Omega^t\cdot , f\circ G_\tau \Omega^t\cdot P_\tau \rangle_{ S^-_{X^\vee}|_\tau\otimes \mathcal{E}^\vee|_{x,\tau } \otimes L^2(X, \Hom(\mathcal{E}|_\tau, \mathcal{F}\otimes S^-_X|_\tau) ) ^* }, \end{equation} where we used the inner product on the finite dimensional vector space $S^-_{X^\vee}|_\tau\otimes \mathcal{E}^\vee|_{x,\tau }$ and the inner product on the linear functionals. To simplify this further, we need to calculate the adjoint of the operator \[ \Omega^t \cdot: L^2(X, \Hom(\mathcal{E}|_\tau, \mathcal{F}\otimes S^-_X) )\to L^2(X, \Hom(\mathcal{E}|_\tau, \mathcal{F} ) \otimes S^-_{X^\vee}|_\tau ), \] \[\Omega^t\cdot= \sum_{i,j} \Omega^t( \frac{\partial }{\partial \tau_j}, \frac{\partial }{\partial x_i}) c(d\tau_j) c(dx_i), \] which is \[ (\Omega^t \cdot)^\dagger: L^2(X, \Hom(\mathcal{E}|_\tau, \mathcal{F} ) \otimes S^-_{X^\vee}|_\tau )\to L^2(X, \Hom(\mathcal{E}|_\tau, \mathcal{F}\otimes S^-_X) ) \] \[ (\Omega^t\cdot)^\dag =\sum_{i,j} \Omega^t(\frac{\partial }{\partial x_i} , \frac{\partial }{\partial \tau_j}) c(dx_i)c(d\tau_j). \] This allows us to rewrite (\ref{correlatorfunction1}) as \[ \langle f'\circ G_\tau , f\circ G_\tau \Omega^t\cdot P_\tau (\Omega^t\cdot)^\dag\rangle_{ S^-_{X^\vee}|_\tau\otimes \mathcal{E}^\vee|_{x,\tau } \otimes L^2(X, \Hom(\mathcal{E}|_\tau, \mathcal{F})\otimes S^-_{X^\vee}|_\tau) ^* }. \] Using that $G_\tau$ is self-adjoint, the above is \[ \langle f' , f\circ G_\tau \Omega^t\cdot P_\tau (\Omega^t\cdot)^\dag G_\tau\rangle_{ S^-_{X^\vee}|_\tau\otimes \mathcal{E}^\vee|_{x,\tau } \otimes L^2(X, \Hom(\mathcal{E}|_\tau, \mathcal{F})\otimes S^-_{X^\vee}|_\tau) ^* }.
\] As a subtle analytic point, the evaluation functional $f'$ is not bounded on the $L^2$ space, but we can still make sense of the above expression, because the presence of $P_\tau$ implies the functional $f\circ G_\tau \Omega^t\cdot P_\tau (\Omega^t\cdot)^\dag G_\tau$ is represented by a smooth bundle-valued function, and we just need to evaluate this function against the vector $f'\in \mathcal{F}^*|_x \simeq \mathcal{F}|_x$, and then contract the $S^-_{X^\vee}|_\tau\otimes \mathcal{E}^\vee|_{x,\tau }$ factor. Finally, we observe that $f, f'$ do not interfere with the spinor factor. This means we can first calculate the spinor trace on $S^-_{X^\vee}|_\tau$, and then make the above evaluation, and contract the $ \mathcal{E}^\vee|_{x,\tau }$ factor. The result is \[ \Tr_{ \mathcal{E}^\vee|_{x,\tau } } \langle f' , f\circ \Tr_{ S^-_{X^\vee}|_\tau} G_\tau \Omega^t\cdot P_\tau (\Omega^t\cdot)^\dag G_\tau\rangle \] as required. \end{proof} We next deal with the expression $G_\tau \Omega^t\cdot P_\tau (\Omega^t\cdot)^\dag G_\tau$. The following Lemma is the analogue of Lemma 2.6 in \cite{BraamBaal}, although the non-flat nature of the K3 metric has made the calculations significantly more difficult. \begin{lem}\label{HermitianisometryLemma2} In a geodesic coordinate $\tau_i$ on $X^\vee$, we have \begin{equation} \Tr_{S^-_{X^\vee}|_\tau} G_\tau \Omega^t\cdot P_\tau (\Omega^t\cdot)^\dag G_\tau=-4\sum_i [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau] ] . \end{equation} \end{lem} \begin{proof} We use $P_\tau=1-D_{\alpha_\tau}^+ G_\tau D^-_{\alpha_\tau}$ to write \[ G_\tau \Omega^t\cdot P_\tau (\Omega^t\cdot)^\dag G_\tau= G_\tau \Omega^t\cdot (\Omega^t\cdot)^\dag G_\tau- G_\tau \Omega^t \cdot D_{\alpha_\tau}^+ G_\tau D^-_{\alpha_\tau}(\Omega^t\cdot)^\dag G_\tau.
\] By Lemma \ref{computationOmegaDirac}, the term $ G_\tau \Omega^t \cdot D_{\alpha_\tau}^+ G_\tau D^-_{\alpha_\tau}(\Omega^t\cdot)^\dag G_\tau$ is equal to \begin{equation}\label{computationGOmegaDGDOmegaG1} \begin{split} &\sum_j c(d\tau_j) G_\tau \{ -(\iota_{\frac{\partial}{\partial \tau_j} } \Omega^t, \nabla_{\alpha_\tau})+\sum_{k=1}^3 I_k^{S^+}( \iota_{I_k\frac{\partial}{\partial \tau_j} } \Omega^t, \nabla_{\alpha_\tau} ) \}G_\tau D^-_{\alpha_\tau}(\Omega^t\cdot)^\dag G_\tau \\ =& \sum_j G_\tau(-\iota_{\frac{\partial}{\partial \tau_j} } \Omega^t, \nabla_{\alpha_\tau})\{ c(d\tau_j)+\sum_{k=1}^3 c(I_kd\tau_j)I_k^{S^+} \}G_\tau D^-_{\alpha_\tau}(\Omega^t\cdot)^\dag G_\tau \\ =& \frac{1}{2}\sum_j [ -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_j} } , G_\tau ] \{ c(d\tau_j)+\sum_{k=1}^3 c(I_kd\tau_j)I_k^{S^+} \} D^-_{\alpha_\tau}(\Omega^t\cdot)^\dag G_\tau \\ =& 2\sum_j [ -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_j} } , G_\tau ] c(d\tau_j) D^-_{\alpha_\tau}(\Omega^t\cdot)^\dag G_\tau \end{split} \end{equation} To simplify further, we need to calculate \[ D^-_{\alpha_\tau}(\Omega^t\cdot)^\dag=( \Omega^t\cdot D^+_{\alpha_\tau} )^\dagger =-\sum_j \{ -(\iota_{\frac{\partial}{\partial \tau_j} } \Omega^t, \nabla_{\alpha_\tau})+\sum_{k=1}^3 I_k^{S^+}( \iota_{I_k\frac{\partial}{\partial \tau_j} } \Omega^t, \nabla_{\alpha_\tau} ) \}^\dag c(d\tau_j). \] A short calculation, using that $\iota_{\frac{\partial}{\partial \tau_j} } \Omega^t$ is in the Coulomb gauge (\cf Lemma \ref{infinitesimalvariationofASDconnectionisinCoulumbgauge}), shows that $(\iota_{\frac{\partial}{\partial \tau_j} } \Omega^t, \nabla_{\alpha_\tau})$ is self-adjoint. We also have that $I_k^{S^+}$ is anti-self-adjoint.
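For the reader's convenience, here is a sketch of that short calculation. Writing $a_\mu=\iota_{\frac{\partial}{\partial \tau_j} } \Omega^t(\frac{\partial}{\partial x_\mu})$ in a pointwise orthonormal frame, so that $(\iota_{\frac{\partial}{\partial \tau_j} } \Omega^t, \nabla_{\alpha_\tau})=\sum_\mu a_\mu \nabla^{\alpha_\tau}_{\frac{\partial}{\partial x_\mu}}$, and using that the components $a_\mu$ are skew-Hermitian while formally $(\nabla^{\alpha_\tau}_{\frac{\partial}{\partial x_\mu}})^\dag=-\nabla^{\alpha_\tau}_{\frac{\partial}{\partial x_\mu}}$ on the closed manifold $X$,
\[
\big(\sum_\mu a_\mu \nabla^{\alpha_\tau}_{\frac{\partial}{\partial x_\mu}}\big)^\dag
=\sum_\mu (\nabla^{\alpha_\tau}_{\frac{\partial}{\partial x_\mu}})^\dag a_\mu^\dag
=\sum_\mu \nabla^{\alpha_\tau}_{\frac{\partial}{\partial x_\mu}}\circ a_\mu
=\sum_\mu a_\mu \nabla^{\alpha_\tau}_{\frac{\partial}{\partial x_\mu}}
+\sum_\mu (\nabla^{\alpha_\tau}_{\frac{\partial}{\partial x_\mu}} a_\mu),
\]
and the last sum vanishes precisely by the Coulomb gauge condition.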
Thus \[ \begin{split} D^-_{\alpha_\tau}(\Omega^t\cdot)^\dag =& \sum_j \{ (\iota_{\frac{\partial}{\partial \tau_j} } \Omega^t, \nabla_{\alpha_\tau})- \sum_{k=1}^3 I_k^{S^+}( \iota_{I_k\frac{\partial}{\partial \tau_j} } \Omega^t, \nabla_{\alpha_\tau} ) \} c(d\tau_j) \\ =& \sum_i (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})\{ c(d\tau_i)+\sum_{l=1}^3 I_l^{S^+} c(I_l d\tau_i)\} \\ =& 4\sum_i (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau}) c(d\tau_i). \end{split} \] Substituting this into the above expression (\ref{computationGOmegaDGDOmegaG1}), we get \begin{equation}\label{computationGOmegaFGOmegaG2} \begin{split} &\Tr_{S^-_{X^\vee}|_\tau} G_\tau \Omega^t \cdot D_{\alpha_\tau}^+ G_\tau D^-_{\alpha_\tau}(\Omega^t\cdot)^\dag G_\tau \\ =& 8\sum_{i,j} [ -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_j} } , G_\tau ](\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})G_\tau \{ \Tr_{S^-_{X^\vee}|_\tau} c(d\tau_j) c(d\tau_i) \} \\ =& -16\sum_{i} [ -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau ](\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})G_\tau. 
\end{split} \end{equation} We calculate further by the definition of the commutator, and by using Lemma \ref{computationOmegaDirac}, \[ \begin{split} &[ -\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau ](\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})G_\tau \\ =&- \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } G_\tau (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})G_\tau +G_\tau \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }\circ (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})G_\tau \\ =& -\frac{1}{2}\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }[ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau] +G_\tau \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }\circ (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})G_\tau \\ =&-\frac{1}{2}\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }[ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau] + G_\tau [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau}) ]G_\tau \\ &+ G_\tau (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }\circ G_\tau \\ =& -\frac{1}{2}\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }[ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau] + G_\tau [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau}) ]G_\tau \\ &+ G_\tau (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})[\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, G_\tau]+ \frac{1}{2}[\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau ]\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } \\ =& -\frac{1}{2}[\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau]] + 
G_\tau [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau}) ]G_\tau \\ &+ G_\tau (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})[\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, G_\tau]. \end{split} \] Comparing this with \[ \begin{split} [-\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau ](\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})G_\tau =&-2 G_\tau (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})G_\tau (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})G_\tau \\ =& G_\tau (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})[-\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau ], \end{split} \] we obtain \[ \begin{split} & 2[-\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau ](\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau})G_\tau \\ =&-\frac{1}{2}[\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau]] + G_\tau [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau}) ]G_\tau. \end{split} \] Substituting this into (\ref{computationGOmegaFGOmegaG2}), we get \begin{equation}\label{computationGOmegaFGOmegaG3} \begin{split} &\Tr_{S^-_{X^\vee}|_\tau} G_\tau \Omega^t \cdot D_{\alpha_\tau}^+ G_\tau D^-_{\alpha_\tau}(\Omega^t\cdot)^\dag G_\tau \\ =& \sum_i 4[\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau]] -8 G_\tau [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau}) ]G_\tau. \end{split} \end{equation} We work in a geodesic coordinate $\tau_i$ on $X^\vee$. 
Then the Coulomb condition (\ref{YangMillsoncomponents}) reads \[ \sum_i \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t=0, \] hence \[ \sum_i [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, (\iota_{\frac{\partial}{\partial \tau_i} } \Omega^t, \nabla_{\alpha_\tau}) ]= \sum_{i,\mu} \Omega^t(\frac{\partial}{\partial \tau_i}, \frac{\partial}{\partial x_\mu})\circ \Omega^t(\frac{\partial}{\partial \tau_i}, \frac{\partial}{\partial x_\mu}), \] where $\frac{\partial }{\partial x_\mu}$ form an orthonormal basis at the given point. Using this, we can rewrite the RHS of (\ref{computationGOmegaFGOmegaG3}) as \[ 4\sum_i [\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau]] -8 G_\tau \sum_{i,\mu} \Omega^t(\frac{\partial}{\partial \tau_i}, \frac{\partial}{\partial x_\mu})\circ \Omega^t(\frac{\partial}{\partial \tau_i}, \frac{\partial}{\partial x_\mu}) G_\tau. \] However, a short computation using the triholomorphic property of $\Omega$ gives \[ \begin{split} & \Tr_{S^-_{X^\vee}|_\tau} G_\tau \Omega^t\cdot (\Omega^t\cdot)^\dag G_\tau = -8G_\tau\sum_{i,\mu} \Omega^t(\frac{\partial}{\partial \tau_i}, \frac{\partial}{\partial x_\mu})\circ \Omega^t(\frac{\partial}{\partial \tau_i}, \frac{\partial}{\partial x_\mu})G_\tau. \end{split} \] Combining the above two equations, we obtain \[ \Tr_{S^-_{X^\vee}|_\tau} G_\tau \Omega^t\cdot P_\tau (\Omega^t\cdot)^\dag G_\tau=-4\sum_i [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} }, [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i} } , G_\tau] ] \] as required. \end{proof} \begin{cor} The inner product $\langle u_x(f), u_x(f')\rangle $ is equal to \begin{equation}\label{doubletransformHermitiannorm1} -\int_{X^\vee} \Tr_{\mathcal{E}^\vee|_{x,\tau} } \langle f', f\circ \sum_i [\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i}}, [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i}}, G_\tau ] ] \rangle d\text{Vol}_{X^\vee}.
\end{equation} \end{cor} \begin{rmk} The expression $-\sum_i [\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i}}, [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i}}, G_\tau ] ]$ can be heuristically understood as the Laplacian of $G_\tau$ in the $X^\vee$ direction. If the Schwartz kernel $G_\tau(y,x)$ of the Green operator $G_\tau$ were smooth, then (\ref{doubletransformHermitiannorm1}) would be zero. But this is false, because $f'$ and $f$ are located at the same point $x\in X$, and $G_\tau(y,x)$ blows up at $y=x$. The delicate cancellations which took place in the above calculation say that despite the singular nature of $G_\tau$, the expression $ -\sum_i [\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i}}, [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i}}, G_\tau ] ]$ is smooth at $y=x$. \end{rmk} Now we proceed to evaluate (\ref{doubletransformHermitiannorm1}) by a careful consideration of the \textbf{asymptotes} as $y\to x$. For this, we first introduce a topological trivialisation of $\mathcal{E}\to X\times X^\vee$ over a local neighbourhood $T\subset X$, so that for all $y\in T$, the underlying smooth bundles of $\mathcal{E}|_y\to X^\vee$ become identified. In other words, we define the parallel transport operators $Q_\tau(x,y): \mathcal{E}^\vee|_{y,\tau}\to \mathcal{E}^\vee|_{x,\tau}$. In particular $Q_\tau(x,x)=1$. This $Q_\tau$ corresponds to a local flat connection $\nabla^{Q_\tau}$ on $\mathcal{E}^\vee|_\tau \to T\subset X$. We denote $\nabla^{univ}_{\frac{\partial}{\partial y_i} }=\nabla^{Q_\tau}_{\frac{\partial}{\partial y_i} }+A_i$, where $A_i$ can be thought of as the connection matrix of $\nabla^{univ}|_{\mathcal{E}|_\tau }$, and we can demand $A_i(x)=0$ at the particular point $y=x\in T\subset X$, although we cannot make $A_i$ vanish globally because $\nabla^{univ}$ is far from being flat.
\begin{lem}\label{HermitianisometryLemma3} The expression (\ref{doubletransformHermitiannorm1}) is equal to the limit \begin{equation}\label{doubletransformHermitiannorm2} \lim_{y\to x} \int_{X^\vee} \langle f', f\circ \Tr_{\mathcal{E}|_{\tau,x} } \{ ( \Lap_{X^\vee}^{univ} Q_\tau(x,y) )G_\tau (y,x) \} \rangle d\text{Vol}_{X^\vee}, \end{equation} where $\Lap_{X^\vee}^{univ}$ is $-\sum_i\nabla^{univ}_{\frac{\partial}{\partial \tau_i} } \nabla^{univ}_{\frac{\partial}{\partial \tau_i} }$ in geodesic local coordinates on $X^\vee$. \end{lem} \begin{proof} Recall $G_\tau(y,x)$ is the Schwartz kernel of $G_\tau$, so the Schwartz kernel of $[\nabla^{univ,t}_{\frac{\partial}{\partial \tau_i}}, [ \nabla^{univ,t}_{\frac{\partial}{\partial \tau_i}}, G_\tau ] ]$ is $\nabla^{univ}_{\frac{\partial}{\partial \tau_i}}\nabla^{univ}_{\frac{\partial}{\partial \tau_i}}G_\tau (y,x)$, where we are differentiating the $\mathcal{E}^\vee|_{y,\tau}\otimes \mathcal{E}|_{x,\tau}\otimes \mathcal{F}|_y \otimes \mathcal{F}|_x^*$ valued function $G_\tau(y,x)$ with respect to the parameter $\tau_i$, and when the connection acts on the dual bundle the minus transpose is implicitly understood. This is continuous at $y=x$ even though $G_\tau(y,x)$ is not. To take the trace over $\mathcal{E}|_{x,\tau}$, we need to multiply by the parallel transport operator $Q_\tau(x,y)$. This allows us to write (\ref{doubletransformHermitiannorm1}) as the limit \[ \lim_{y\to x} \int_{X^\vee} \langle f', f\circ \Tr_{\mathcal{E}|_{\tau,x} }\{Q_\tau(x,y) \Lap_{X^\vee}^{univ} G_\tau (y,x) \} \rangle d\text{Vol}_{X^\vee}. \] For any $y\neq x$, all expressions are smooth, and an application of Green's formula gives the claim. 
\end{proof} \begin{lem} The leading asymptote as $y\to x$ of $\Lap^{univ}_{X^\vee} Q_\tau(x,y)$ in the trivialisation defined by $Q_\tau$ is \begin{equation} \begin{split} \Lap^{univ}_{X^\vee} Q_\tau(x,y) & \sim \sum_{j,k}\{\frac{1}{2}\Lap^{univ}_{X^\vee} (\nabla^{Q_\tau}_{ \frac{\partial}{\partial y_k} }A_j(y))|_{y=x} \\ &-\sum_i \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } ) \Omega(\frac{\partial}{\partial y_k}, { \frac{\partial}{\partial \tau_i} } )|_{y=x}\} y_jy_k+ O( |x-y|^3 ). \end{split} \end{equation} Here in the coordinates $y_i$ the origin corresponds to the point $x$. \end{lem} \begin{proof} Since $Q_\tau(x,x)=1$, we see $\Lap^{univ}_{X^\vee} Q_\tau(x,x)=0$. For the first order derivative, we compute \[ \begin{split} \nabla^{Q_\tau}_{ \frac{\partial}{\partial y_j} }\nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } Q_\tau(x,y)=[\nabla^{Q_\tau}_{ \frac{\partial}{\partial y_j} }, \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } ]Q_\tau(x,y) \\ =[\nabla^{univ}_{ \frac{\partial}{\partial y_j} }, \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } ]Q_\tau(x,y)-[A_j, \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} }] Q_\tau(x,y) \\ =\Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } )Q_\tau(x,y)+ \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} }A_j(y) Q_\tau(x,y), \end{split} \] hence \[ \begin{split} &\nabla^{Q_\tau}_{ \frac{\partial}{\partial y_j} }\nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } Q_\tau(x,y) \\ =& [ \nabla^{Q_\tau}_{ \frac{\partial}{\partial y_j} }, \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } ]\nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } Q_\tau(x,y) + \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } \{ \{ \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } )+ \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} }A_j(y)\} Q_\tau(x,y) \} \\ =& \{ \Omega(\frac{\partial}{\partial y_j}, {
\frac{\partial}{\partial \tau_i} } )+ \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} }A_j(y)\}\nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } Q_\tau(x,y) \\ & + \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } \{ \{ \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } )+ \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} }A_j(y)\} Q_\tau(x,y) \} \\ =& 2 \{ \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } )+ \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} }A_j(y)\}\nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } Q_\tau(x,y) \\ & + \{\nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } )+ \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} }\nabla^{univ}_{ \frac{\partial}{\partial \tau_i} }A_j(y)\} Q_\tau(x,y). \end{split} \] Summing over $i$ and applying the Coulomb gauge condition \[ \sum_i \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } )=0, \] we obtain \begin{equation}\label{QtauLaplacianasymptote1} \begin{split} \nabla^{Q_\tau}_{ \frac{\partial}{\partial y_j} }\Lap^{univ}_{X^\vee} Q_\tau(x,y) =& (\Lap^{univ}_{X^\vee}A_j(y) )Q_\tau(x,y) \\ &- 2 \sum_i \{ \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } ) + \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} }A_j(y)\}\nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } Q_\tau(x,y) , \end{split} \end{equation} which, evaluated at $y=x$, gives \[ \{\nabla^{Q_\tau}_{ \frac{\partial}{\partial y_j} }\Lap^{univ}_{X^\vee} Q_\tau(x,y)\}|_{y=x}=0. \] Now we proceed to evaluate the second derivatives, at $y=x$.
By differentiating (\ref{QtauLaplacianasymptote1}) and commuting the operators, we get \[ \begin{split} \nabla^{Q_\tau}_{ \frac{\partial}{\partial y_k} }\nabla^{Q_\tau}_{ \frac{\partial}{\partial y_j} }\Lap^{univ}_{X^\vee} Q_\tau(x,y) |_{y=x} &= \Lap^{univ}_{X^\vee} (\nabla^{Q_\tau}_{ \frac{\partial}{\partial y_k} }A_j(y))|_{y=x} \\ &- 2\sum_i\Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } ) \Omega(\frac{\partial}{\partial y_k}, { \frac{\partial}{\partial \tau_i} } )|_{y=x}. \end{split} \] We have thus obtained the Taylor coefficients up to the second order for the expression $\Lap^{univ}_{X^\vee} Q_\tau(x,y)$ as $y\to x$. \end{proof} We also need to recall the short distance asymptote of the Green's function $G_\tau(y,x)$ as $y\to x$ in geodesic coordinates $y_i$ on $T\subset X$, as in \cite{BraamBaal}, proof of Proposition 2.7. Recall also the connection matrix of $\alpha_\tau$ vanishes at $y=x$ in our chosen trivialisation. \begin{lem} In the geodesic coordinates on $X$ and using our trivialisation of bundles, the asymptote as $y\to x$ of the Green's function $G_\tau(y,x)$ is \begin{equation}\label{asymptotesGreenfunction} G_\tau(y,x)\sim \frac{1}{4\pi^2 |y-x|^2} (I +O(|y-x|^{2-\epsilon}) ), \end{equation} where $|y-x|^2=\sum_i y_i^2$ and $\epsilon$ is any small positive number. \end{lem} \begin{lem}\label{HermitianisometryLemma4} The integral \[ \int_{X^\vee} \Tr_{\mathcal{E}|_{x,\tau} }\{ (\Lap^{univ}_{X^\vee} Q_\tau(x,y))G_\tau(y,x) \} d\text{Vol}_{X^\vee} \] converges to the identity matrix on $\mathcal{F}|_x$ as $y\to x$.
\end{lem} \begin{proof} Using the asymptotes of $\Lap^{univ}_{X^\vee} Q_\tau(x,y)$ and $G_\tau(y,x)$, we see that the limit is given by \[ \begin{split} \sum_{j,k} \frac{y_jy_k I_{\mathcal{F}|_x} }{4\pi^2|y-x|^2} \int_{X^\vee} \Tr_{\mathcal{E}|_{x,\tau} } \{\frac{1}{2}\Lap^{univ}_{X^\vee} (\nabla^{Q_\tau}_{ \frac{\partial}{\partial y_k} }A_j(y))|_{y=x} \\ - \sum_i \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } ) \Omega(\frac{\partial}{\partial y_k}, { \frac{\partial}{\partial \tau_i} } )|_{y=x}\}d\text{Vol}_{X^\vee}. \end{split} \] The integral \[ \int_{X^\vee} \Tr_{\mathcal{E}|_{x,\tau} } \Lap^{univ}_{X^\vee} (\nabla^{Q_\tau}_{ \frac{\partial}{\partial y_k} }A_j(y))|_{y=x} d\text{Vol}_{X^\vee}=0, \] because it is the integral of a Laplacian. The other integral \[ \frac{-1}{4\pi^2}\int_{X^\vee} \Tr_{\mathcal{E}|_{x,\tau} } \sum_i \Omega(\frac{\partial}{\partial x_j}, { \frac{\partial}{\partial \tau_i} } ) \Omega(\frac{\partial}{\partial x_k}, { \frac{\partial}{\partial \tau_i} } ) d\text{Vol}_{X^\vee}=\delta_{jk}, \] because the LHS is the same as the metric $g^{\vee\vee}(\frac{\partial}{\partial x_j}, \frac{\partial}{\partial x_k} )$ on $X$ which we treated in Section \ref{ASDconnectionsonMukaidual}, and there we showed that it agrees with the metric $g(\frac{\partial}{\partial x_j}, \frac{\partial}{\partial x_k} )$. Thus the whole expression of the limit is \[ \sum_{j,k} \frac{y_jy_k I_{\mathcal{F}|_x} }{|y-x|^2}\delta_{jk}= I_{\mathcal{F}|_x} \] as required. \end{proof} \begin{thm} The antilinear map $f\mapsto u_x(f)$ is an isometry $\mathcal{F}^*|_x\to \hat{\hat{\mathcal{F}}}|_x$. \end{thm} \begin{proof} By the Lemmas in this Section, the inner product $\langle u_x(f), u_x(f')\rangle$ is equal to (\ref{doubletransformHermitiannorm2}), which by the above Lemma is equal to $\langle f', f \rangle$. Thus the antilinear map $f\mapsto u_x(f)$ is an injective isometry. 
To show it is also an isomorphism, the family Atiyah-Singer theorem says that the Chern character (or equivalently the Mukai vector) of $\hat{\hat{\mathcal{F}} }$ can be specified by the Fourier-Mukai transform on cohomology \[ v( \hat{\mathcal{F} } )= FM( v(\mathcal{F} ) ), \quad v( \hat{\hat{\mathcal{F}} } )=FM^\vee( v(\hat{\mathcal{F} }) ). \] Using that $FM^\vee$ is inverse to $FM$, we see that the two bundles $\hat{\hat{\mathcal{F}}}$ and $\mathcal{F}$ have the same Mukai vector, and in particular have the same rank. Thus the injective isometry must be an isomorphism. \end{proof} \subsection{Comparing the connections}\label{Comparingtheconnections} The aim of this Section is to compare the connection $\hat{\hat{\alpha}}$ on the inverse Nahm transform $\hat{\hat{\mathcal{F}}}$, with the connection $\alpha$ on the original bundle $\mathcal{F}$. Let $f$ be a local section of the dual bundle $\mathcal{F}^*$. Via the canonical comparison map, this gives us a section of $\hat{\hat{\mathcal{F}}}$, whose value at each $x\in X$ is just \[ u_x(f)= \frac{1}{2}\sum_j \epsilon(G\Psi(f)|_\tau(\psi^j_\tau) )\otimes \hat{f}^j \in \hat{\hat{\mathcal{F}}}|_x\subset \Gamma(X^\vee, \hat{\mathcal{F}}\otimes S^-_{X^\vee}\otimes \mathcal{E}|_x). \] We need to understand the meaning of covariant derivatives on the bundle $\hat{\hat{\mathcal{F}}}$. This is entirely analogous to the definition of $\hat{\alpha}$. Given any section $s$ of $\hat{\hat{\mathcal{F}}}\to X$, we first regard it as a section of the infinite rank bundle $\hat{\hat{\mathcal{H}}}$ with fibres \[ \hat{\hat{\mathcal{H}}}|_x= \Gamma(X^\vee, \hat{\mathcal{F}}\otimes S^-_{X^\vee}\otimes \mathcal{E}|_x), \] take the natural covariant derivative of $s$ on the bundle $\hat{\hat{\mathcal{H}}}\to X$, and then project orthogonally to the subbundle $\hat{\hat{\mathcal{F}}}$. 
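In symbols, if $P_{\hat{\hat{\mathcal{F}}}}$ denotes the fibrewise orthogonal projection from $\hat{\hat{\mathcal{H}}}$ to the subbundle $\hat{\hat{\mathcal{F}}}$, the connection just described is
\[
\nabla^{\hat{\hat{\alpha}}} s= P_{\hat{\hat{\mathcal{F}}}}\big( \nabla^{\hat{\hat{\mathcal{H}}}} s \big),
\]
in complete analogy with the definition of $\hat{\alpha}$.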
The connection matrix on $\hat{\hat{\mathcal{F}}}$ is therefore specified by $\langle u_x(f), \nabla_x u_x(f')\rangle$ for any local sections $f, f'$ of $\mathcal{F}^*$, which without loss of generality satisfy $\nabla^{\alpha}f=0$ and $\nabla^{\alpha}f'=0$ at the given point $x\in X$. The following delicate computation follows the strategy of Section \ref{Injectiveisometry} of evaluating some integrals by asymptotic expansion. \begin{prop}\label{comparingconnectionmatrices} At $x\in X$, if $f$ and $f'$ have vanishing covariant derivatives, then for $\mu=1,2,3,4$, \[\langle u_x (f), \nabla_{\frac{\partial}{\partial x_\mu} }u_x(f')\rangle=0. \] In other words, the connection matrices of $\alpha$ and $\hat{\hat{\alpha}}$ agree under the canonical comparison map. \end{prop} \begin{proof} By the definition of the canonical comparison map \begin{equation}\label{inverseNahmtransformcovariantderivative} \begin{split} &\langle u_x(f), \nabla_x u_x(f')\rangle \\ =& \frac{1}{4} \int_{X^\vee} \sum_j \langle \epsilon ( G\Psi(f)|_\tau (\psi^j_\tau) ), \nabla_x \epsilon ( G\Psi(f')|_\tau (\psi^j_\tau) ) \rangle_{S^-_{X^\vee}|_\tau \otimes \mathcal{E}|_{x,\tau} } d\text{Vol}_{X^\vee} \\ =& \frac{1}{4} \int_{X^\vee} \sum_j \langle \nabla_x ( G\Psi(f')|_\tau (\psi^j_\tau) ), G\Psi(f)|_\tau (\psi^j_\tau) \rangle_{S^-_{X^\vee}|_\tau \otimes \mathcal{E}|_{x,\tau} } d\text{Vol}_{X^\vee} . \end{split} \end{equation} In our situation $\langle f, \nabla^\alpha f'\rangle =0$. By a small variant of the argument of Lemma \ref{HermitianisometryLemma1}, the expression (\ref{inverseNahmtransformcovariantderivative}) is equal to \[ \frac{1}{4}\Tr_{ \mathcal{E}^\vee|_{x,\tau} } \int_{X^\vee} \langle f'\circ \nabla_x \circ \Tr_{S^-_{X^\vee}|_\tau} G_\tau \Omega^t\cdot P_\tau (\Omega^t\cdot)^\dag G_\tau , f \rangle d\text{Vol}_{X^\vee} .
\] By Lemma \ref{HermitianisometryLemma2} and the argument of Lemma \ref{HermitianisometryLemma3}, this is \begin{equation}\label{inverseNahmtransformcovariantderivative2} \lim_{y\to x} \int_{X^\vee} \Tr_{ \mathcal{E^\vee}|_{x,\tau} } \langle f'| Q_\tau (x,y) \nabla^{\alpha_\tau}_y \Lap_{X^\vee} G_\tau(y,x) |f\rangle d\text{Vol}_{X^\vee}. \end{equation} Here the notation $\nabla^{\alpha_\tau}_y$ means $\sum_{\mu } dy_\mu \nabla^{\alpha_\tau}_{ \frac{\partial}{\partial y_\mu } }$. By computing commutators, \[ \begin{split} \nabla^{\alpha_\tau}_{ \frac{\partial}{\partial y_i } } \Lap_{X^\vee}=& \Lap_{X^\vee}\nabla^{\alpha_\tau }_{ \frac{\partial}{\partial y_i } } -\sum_j \nabla^{univ,t }_{\frac{\partial}{\partial \tau_j} }( \Omega^t( \frac{\partial}{\partial y_i }, \frac{\partial}{\partial \tau_j} ) )-2\sum_j \Omega^t( \frac{\partial}{\partial y_i }, \frac{\partial}{\partial \tau_j} )\nabla^{univ ,t }_{ \frac{\partial}{\partial \tau_j} } \\ = &\Lap_{X^\vee}\nabla^{\alpha_\tau }_{ \frac{\partial}{\partial y_i }} +2\sum_j \Omega^t( \frac{\partial}{\partial y_i }, \frac{\partial}{\partial \tau_j} )(-\nabla^{univ, t}_{ \frac{\partial}{\partial \tau_j} }) \end{split} \] where the second equality uses the Coulomb condition (\ref{YangMillsoncomponents}), and the minus transpose is inserted to account for the fact that $G_\tau(y,x)$ involves the dualised factor $\mathcal{E}^\vee|_{y,\tau}$. Using this and the Green's formula, the expression (\ref{inverseNahmtransformcovariantderivative2}) is \[ \begin{split} \lim_{y\to x} \int_{X^\vee} \Tr_{ \mathcal{E^\vee}|_{x,\tau} } & \{ \langle f'| (\Lap_{X^\vee} Q_\tau (x,y) ) \nabla^{\alpha_\tau }_y G_\tau(y,x) |f\rangle \\ & -2\sum_{i,j} dy_i \langle f'| (\nabla^{univ}_{\frac{\partial}{\partial \tau_j} } Q_\tau(x,y) ) \circ \Omega^t( \frac{\partial}{\partial y_i }, \frac{\partial}{\partial \tau_j} )G_\tau(y,x) |f \rangle \} d\text{Vol}_{X^\vee} . \end{split} \] The sign in front of the second term comes from integration by parts.
By the asymptotic formula of $G_\tau(y,x)$ (\cf (\ref{asymptotesGreenfunction})), we can replace $G_\tau(y,x)$ by $\frac{I}{4\pi^2|x-y|^2 }$, and $\nabla^{\alpha_\tau }_y G_\tau(y,x)$ by $- \frac{2\sum_i y_i dy_i I}{ 4\pi^2|x-y|^4 }$, because the lower order terms in the asymptotic expansion are smooth enough to be negligible when $y\to x$. Thus in the trivialisation given by $Q_\tau$ as in Section \ref{Injectiveisometry}, the quantity $\langle u_x (f), \nabla_{\frac{\partial}{\partial x_\mu} }u_x(f')\rangle$ is equal to \begin{equation}\label{inverseNahmtransformcovariantderivative3} \begin{split} \lim_{y\to x} \frac{-1}{2\pi^2|x-y|^2} & \int_{X^\vee} \{ \frac{y_\mu}{|x-y|^2 }\langle f'|\Tr_{ \mathcal{E}|_{x,\tau} } (\Lap_{X^\vee} Q_\tau (x,y) ) |f\rangle \\ & +\sum_{j} \langle f'|\Tr_{ \mathcal{E}|_{x,\tau} } ( \nabla^{univ}_{\frac{\partial}{\partial \tau_j} } Q_\tau(x,y)\Omega( \frac{\partial}{\partial y_\mu }, \frac{\partial}{\partial \tau_j} )) |f \rangle \} d\text{Vol}_{X^\vee} . \end{split} \end{equation} Here $\Omega^t$ has been changed to $\Omega$ when we convert the trace over $\mathcal{E}^\vee|_{x,\tau}$ to the trace over $\mathcal{E}|_{x,\tau}$. The trivialisation $Q_\tau$ is used to identify $\mathcal{E}|_{y,\tau}$ with $\mathcal{E}|_{x,\tau}$. We then proceed to the evaluation of this limit (\ref{inverseNahmtransformcovariantderivative3}). This requires us to know asymptotic expansions to one order higher compared to Section \ref{Injectiveisometry}. \begin{lem}(Higher order asymptotes)\label{HigherorderasymptotesLemma} In the geodesic coordinates on $X$, under the trivialisation induced by $Q_\tau$, we have the asymptotic formula as $y\to x$, \begin{equation}\label{thirdorderTaylorasymptote0} \frac{1}{4\pi^2|x-y|^2}\int_{X^\vee} \Tr_{ \mathcal{E}|_{x,\tau} }\langle f'| \Lap^{univ}_{X^\vee} Q_\tau(x,y) |f\rangle d\text{Vol}_{X^\vee}\sim \langle f', f\rangle +O(|y-x|^2).
\end{equation} Moreover, \begin{equation}\label{thirdorderTaylorasymptote0'} \begin{split} &\frac{1}{4\pi^2}\int_{X^\vee} \langle f'|\Tr_{ \mathcal{E}|_{x,\tau} } \{ \sum_i (\nabla^{univ}_{\frac{\partial}{\partial \tau_i} } Q_\tau(x,y) ) \Omega( \frac{\partial }{\partial y_\mu} , \frac{\partial }{\partial \tau_i} ) \} |f\rangle d\text{Vol}_{X^\vee}\\ &\sim -y_\mu \langle f', f\rangle + O(|y-x|^3). \end{split} \end{equation} \end{lem} Given (\ref{thirdorderTaylorasymptote0}) and (\ref{thirdorderTaylorasymptote0'}), we immediately see that the limit (\ref{inverseNahmtransformcovariantderivative3}) vanishes by a cancellation of asymptotic expressions. This shows \[ \langle u_x (f), \nabla_{\frac{\partial}{\partial x_\mu} }u_x(f')\rangle=0 \] as required. \end{proof} Now we show the higher order asymptotes. \begin{proof}(of Lemma \ref{HigherorderasymptotesLemma}) To save some writing, we will use the summation convention, and furthermore we will simply write $\nabla_k$ for $\nabla^{Q_\tau}_{ \frac{\partial}{\partial y_k } } $. The trace will be over the $\mathcal{E}$ factor using the trivialisation $Q_\tau$.
Starting from (\ref{QtauLaplacianasymptote1}), \[ \begin{split} & \nabla_k \nabla_j \Lap^{univ}_{X^\vee} Q_\tau(x,y) = (\nabla_k\Lap^{univ}_{X^\vee} A_j )Q_\tau(x,y) \\ &- 2 \{ \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } ) + \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} }A_j \} \{ \Omega(\frac{\partial}{\partial y_k}, { \frac{\partial}{\partial \tau_i} } ) + \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} }A_k\} Q_\tau(x,y) \\ & -2\nabla_k \{ \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } ) + \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} }A_j \} \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } Q_\tau(x,y) , \end{split} \] so taking one more derivative and evaluating at $y=x$, the third order derivative is given by \begin{equation}\label{thirdorderTaylorasymptote1} \begin{split} & \nabla_l \nabla_k \nabla_j \Lap^{univ}_{X^\vee} Q_\tau(x,y) |_{y=x} = \Lap^{univ}_{X^\vee} \nabla_l\nabla_k A_j - 2\nabla_l \{ \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } )\Omega(\frac{\partial}{\partial y_k}, { \frac{\partial}{\partial \tau_i} } )\} \\ & -2\{ ( \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } \nabla_l A_j ) \Omega(\frac{\partial}{\partial y_k}, { \frac{\partial}{\partial \tau_i} } ) + \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } ) \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } \nabla_l A_k \} \\ & -2\{ \nabla_k\Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } ) + \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } \nabla_k A_j \} \Omega( \frac{\partial}{\partial y_l}, { \frac{\partial}{\partial \tau_i} } ). \end{split} \end{equation} We then integrate $\Tr (\nabla_l \nabla_k \nabla_j \Lap^{univ}_{X^\vee} Q_\tau(x,y) |_{y=x})$ with respect to $\tau$ over $X^\vee$. 
Many terms do not contribute to the integral: for example, \[ \int_{X^\vee} \Tr \Lap^{univ}_{X^\vee} \nabla_l\nabla_k A_j d\text{Vol}_{X^\vee}= \int_{X^\vee} \Lap_{X^\vee} \Tr \nabla_l\nabla_k A_j d\text{Vol}_{X^\vee}= 0, \] because the integral of the Laplacian of a function is zero. For another example, \[ \begin{split} & \int_{X^\vee} \Tr ( \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } \nabla_l A_j ) \Omega(\frac{\partial}{\partial y_k}, { \frac{\partial}{\partial \tau_i} } ) d\text{Vol}_{X^\vee} \\ =& \int_{X^\vee} \Tr \nabla^{univ}_{ \frac{\partial}{\partial \tau_i} } \{(\nabla_l A_j) \Omega(\frac{\partial}{\partial y_k}, { \frac{\partial}{\partial \tau_i} } )\} d\text{Vol}_{X^\vee} \\ =& \int_{X^\vee} \text{div}_{X^\vee}\Tr \{(\nabla_l A_j) \Omega(\frac{\partial}{\partial y_k}, { \frac{\partial}{\partial \tau_i} } ) d\tau_i\} d\text{Vol}_{X^\vee} =0, \end{split} \] because the integral of the divergence of a 1-form is zero. Summing up all contributions, one obtains \begin{equation}\label{thirdorderTaylorasymptote2} \begin{split} & \int_{X^\vee} \Tr \nabla_l \nabla_k \nabla_j \Lap^{univ}_{X^\vee} Q_\tau(x,y) |_{y=x}d\text{Vol}_{X^\vee}\\ = &-2 \int_{X^\vee} \Tr \nabla_l \{ \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } )\Omega(\frac{\partial}{\partial y_k}, { \frac{\partial}{\partial \tau_i} } )\}+ \Tr \nabla_k\Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } )\Omega( \frac{\partial}{\partial y_l}, { \frac{\partial}{\partial \tau_i} } ) d\text{Vol}_{X^\vee} \\ = & -2 \int_{X^\vee} \Tr \nabla_k\Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } )\Omega( \frac{\partial}{\partial y_l}, { \frac{\partial}{\partial \tau_i} } ) d\text{Vol}_{X^\vee} +8\pi^2 \frac{\partial g_{jk} }{\partial y_l}, \end{split} \end{equation} where the last equality uses \[ -\frac{1}{4\pi^2}\int_{X^\vee} \Tr \{ \Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } 
)\Omega(\frac{\partial}{\partial y_k}, { \frac{\partial}{\partial \tau_i} } )\}d\text{Vol}_{X^\vee}= g(\frac{\partial}{\partial y_j}, \frac{\partial}{\partial y_k} )=g_{jk}, \] as in the proof of Lemma \ref{HermitianisometryLemma4}. We notice the LHS of (\ref{thirdorderTaylorasymptote2}) is symmetric in $l,k,j$, because in a trivialisation the partial derivatives commute. If we switch $j,l$, we obtain \[ \begin{split} & \int_{X^\vee} \Tr \nabla_l \nabla_k \nabla_j \Lap^{univ}_{X^\vee} Q_\tau(x,y) |_{y=x}d\text{Vol}_{X^\vee}\\ =& -2 \int_{X^\vee} \Tr \nabla_k\Omega(\frac{\partial}{\partial y_l}, { \frac{\partial}{\partial \tau_i} } )\Omega( \frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } ) d\text{Vol}_{X^\vee} +8\pi^2 \frac{\partial g_{lk} }{\partial y_j}. \end{split} \] Adding this to (\ref{thirdorderTaylorasymptote2}) and dividing by $8\pi^2$, we get \begin{equation}\label{thirdorderTaylorasymptote3} \frac{1}{4\pi^2}\int_{X^\vee} \Tr \nabla_l \nabla_k \nabla_j \Lap^{univ}_{X^\vee} Q_\tau(x,y) |_{y=x}d\text{Vol}_{X^\vee} = \frac{\partial g_{jk} }{\partial y_l}+ \frac{\partial g_{lk} }{\partial y_j}+ \frac{\partial g_{lj} }{\partial y_k}. \end{equation} Substituting this into (\ref{thirdorderTaylorasymptote2}), we get \begin{equation}\label{thirdorderTaylorasymptote4} \int_{X^\vee} \Tr \nabla_k\Omega(\frac{\partial}{\partial y_j}, { \frac{\partial}{\partial \tau_i} } )\Omega( \frac{\partial}{\partial y_l}, { \frac{\partial}{\partial \tau_i} } )|_{y=x} d\text{Vol}_{X^\vee} =-4\pi^2 g_{ls} \Gamma^s_{jk}, \end{equation} where $\Gamma^s_{jk}$ is the Christoffel symbol on $X$ at $x\in X$. Using (\ref{thirdorderTaylorasymptote3}), we can improve Lemma \ref{HermitianisometryLemma4} to the next order. In our coordinates, the point $x$ corresponds to the origin, and $|y-x|^2=g_{ij}(x)y_iy_j$.
Then \begin{equation}\label{thirdorderTaylorasymptote5} \begin{split} &\frac{1}{4\pi^2|x-y|^2}\int_{X^\vee} \Tr \Lap^{univ}_{X^\vee} Q_\tau(x,y) d\text{Vol}_{X^\vee} \\ & \sim I_{\mathcal{F}|_x}(1+ \frac{1}{2|x-y|^2}y_jy_ky_l \frac{\partial g_{lk} }{\partial y_j} ) +O(|y-x|^2). \end{split} \end{equation} In the geodesic coordinate, the third order derivative term vanishes, and we obtain (\ref{thirdorderTaylorasymptote0}), as required. Now we show the second part of the Lemma. We can do a similar but easier computation as before by calculating $\nabla^{univ}_{\frac{\partial}{\partial \tau_i} } Q_\tau$ up to the second order. After discarding some Laplacian and divergence terms in the $X^\vee$ integration, we get \[ \begin{split} &\int_{X^\vee} \Tr \{ (\nabla_k\nabla^{univ}_{\frac{\partial}{\partial \tau_i} } Q_\tau(x,y) ) \Omega( \frac{\partial }{\partial y_\mu} , \frac{\partial }{\partial \tau_i} ) \}|_{y=x} d\text{Vol}_{X^\vee} \\ =&\int_{X^\vee} \Tr \{ \Omega( \frac{\partial }{\partial x_k} , \frac{\partial }{\partial \tau_i} ) \Omega( \frac{\partial }{\partial x_\mu} , \frac{\partial }{\partial \tau_i} ) \}d\text{Vol}_{X^\vee}= -4\pi^2 g_{k\mu}, \end{split} \] and \[ \begin{split} &\int_{X^\vee} \Tr \{ (\nabla_j\nabla_k\nabla^{univ}_{\frac{\partial}{\partial \tau_i} } Q_\tau(x,y) ) \Omega( \frac{\partial }{\partial y_\mu} , \frac{\partial }{\partial \tau_i} ) \}|_{y=x} d\text{Vol}_{X^\vee} \\ =&\int_{X^\vee} \Tr \{ \nabla_j\Omega( \frac{\partial }{\partial y_k} , \frac{\partial }{\partial \tau_i} ) \Omega( \frac{\partial }{\partial y_\mu} , \frac{\partial }{\partial \tau_i} ) \}|_{y=x} d\text{Vol}_{X^\vee}= -4\pi^2 g_{s\mu}\Gamma^s_{jk}, \end{split} \] where the last equality uses (\ref{thirdorderTaylorasymptote4}). 
From this, the second derivative \[ \begin{split} &\int_{X^\vee} \frac{\partial^2}{\partial y_j \partial y_k} \Tr \{ (\nabla^{univ}_{\frac{\partial}{\partial \tau_i} } Q_\tau(x,y) ) \Omega( \frac{\partial }{\partial y_\mu} , \frac{\partial }{\partial \tau_i} ) \}|_{y=x} d\text{Vol}_{X^\vee} \\ =& -4\pi^2 g_{s\mu}\Gamma^s_{jk} +\int_{X^\vee} \Tr \{ \Omega( \frac{\partial }{\partial y_k} , \frac{\partial }{\partial \tau_i} ) \nabla_j \Omega( \frac{\partial }{\partial y_\mu} , \frac{\partial }{\partial \tau_i} ) \}|_{y=x} d\text{Vol}_{X^\vee} \\ & + \int_{X^\vee} \Tr \{ \Omega( \frac{\partial }{\partial y_j} , \frac{\partial }{\partial \tau_i} ) \nabla_k \Omega( \frac{\partial }{\partial y_\mu} , \frac{\partial }{\partial \tau_i} ) \}|_{y=x} d\text{Vol}_{X^\vee} \\ =& -4\pi^2 g_{s\mu}\Gamma^s_{jk}-4\pi^2 g_{sk}\Gamma^s_{j\mu}-4\pi^2 g_{sj}\Gamma^s_{\mu k} \\ =& -2\pi^2\{ \frac{\partial g_{jk}}{\partial y_\mu} + \frac{\partial g_{\mu k}}{\partial y_j} + \frac{\partial g_{j\mu }}{\partial y_k} \}. \end{split} \] Combining these, we get \begin{equation} \begin{split} &\frac{1}{4\pi^2}\int_{X^\vee} \Tr \{ (\nabla^{univ}_{\frac{\partial}{\partial \tau_i} } Q_\tau(x,y) ) \Omega( \frac{\partial }{\partial y_\mu} , \frac{\partial }{\partial \tau_i} ) \} d\text{Vol}_{X^\vee}\\ &\sim I_{\mathcal{F}|_x}\{-g_{k\mu}(x) y_k- \frac{1}{4} ( \frac{\partial g_{jk}}{\partial y_\mu} + \frac{\partial g_{\mu k}}{\partial y_j} + \frac{\partial g_{j\mu }}{\partial y_k} )(x) y_jy_k \}+ O(|y-x|^3). \end{split} \end{equation} In geodesic coordinates, we have that $g_{s\mu}(x)=\delta_{s\mu}$ and the Christoffel symbols vanish, so (\ref{thirdorderTaylorasymptote0'}) follows. \end{proof} \begin{rmk} We have worked in the geodesic coordinate on $X$, which has the advantage of simplifying the asymptotic formulae for $\Lap_{X^\vee} Q_\tau$ and $G_\tau$. 
It is an interesting exercise to show that even if we use more general coordinates and keep the Christoffel symbols, the asymptotic formulae still cancel out exactly in the proof of Proposition \ref{comparingconnectionmatrices}. \end{rmk} We now collect the main results of Sections \ref{inverseNahmtransformcomparisonmap}, \ref{Injectiveisometry} and \ref{Comparingtheconnections} to achieve \begin{thm}\label{Fourierinversiontheorem} (Fourier inversion) Assume the setup of Section \ref{inverseNahmtransformcomparisonmap} so that the inverse Nahm transform $(\hat{\hat{\mathcal{F}}},\hat{\hat{\alpha}})$ is well defined. Then the canonical comparison map $\mathcal{F}\to \hat{\hat{\mathcal{F}}}$ is an isometric isomorphism of Hermitian vector bundles, which identifies the connection $\alpha$ with $\hat{\hat{\alpha}}$. \end{thm} \begin{rmk} The reader may compare this to Proposition 10 and Corollary 3 in \cite{Bartocci2}, which show a derived category version of the Fourier inversion theorem. Their version is technically much simpler, due to the power of the derived category machinery. \end{rmk} \begin{rmk} Our proof depends on the algebraic theory in \cite{Huy1} only through the use of the Fourier-Mukai transform on cohomology. \end{rmk} \section*{Appendix: spinors in dimension 4} We review the well-known linear algebraic model of spinors in dimension 4, which serves to establish the conventions used in this paper. Let $S$ be the spin representation of $so(4)$, which admits the chiral splitting into positive and negative spinors $ S=S^+ \oplus S^-. $ For a concrete model of $S$, we take a two-dimensional complex Hermitian vector space $W$, with orthonormal basis $f_1,f_2 $, and let $S=\Lambda^*(W)$. Then $S^+=\C \oplus \Lambda^2(W)$ and $S^-=W$.
The Clifford multiplication of an orthonormal basis $\frac{\partial}{\partial x_i}$ of $\R^4$ on $S$ is given by \[ \begin{cases} c_1=f_1\wedge -f_1 \angle \\ c_2=if_1\wedge +if_1 \angle \\ c_3=f_2\wedge -f_2 \angle \\ c_4=if_2\wedge +if_2 \angle \end{cases} \] We see $c_i \in \Hom(S_+, S_-)\oplus \Hom(S_-, S_+)$, $c_i^\dagger=-c_i$, and the Clifford relations $c_ic_j+c_jc_i=-2\delta_{ij}$. The Clifford multiplication is multiplicative on norms: \[ |c(v)\cdot \xi |=|v| |\xi|, \quad |c(v)\cdot \eta |=|v| |\eta|, \quad v\in \R^4, \xi\in S^+, \eta\in S^-. \] With this choice of convention, the Dirac operator $D=c_i \nabla_{\frac{\partial}{\partial x_i}}$ is self adjoint. It is conventional to think of $D$ as a pair of formally adjoint operators, mapping between the positive and negative spinor bundles. It is also convenient to extend the Clifford multiplication to elements of $\Lambda^2(\R^4)$. For example, for $F=\sum_{i<j}{ F_{ij} dx^i \wedge dx^j }$, we make it act as $\sum_{i<j}{ F_{ij} c_ic_j }$. This Clifford multiplication and the wedge product can be compared by the formula \[ c(v)c(w)=c( v\wedge w)- (v, w), \quad v, w\in \R^{4*}. \] Self dual forms act only on positive spin, and ASD forms only act on negative spin. This applies to the standard triple of hyperk\"ahler 2-forms \[ \omega_1 =dx_1 dx_2+ dx_3 dx_4, \quad \omega_2 =dx_1 dx_3 + dx_4 dx_2, \quad \omega_3 =dx_1 dx_4+ dx_2 dx_3. \] The group $Spin(4)$ acts on $\R^4, S^+, S^-$, such that the Clifford multiplication map $\R^4\times S^{+}\to S^{-}$ is equivariant. Up to $2:1$ cover, we can think of standard complex structures $I_1, I_2, I_3 \in SO(4)$ as elements of $Spin(4)$, and as such they act on the spinors. 
The action of $I_k$ on $S^-$ can be chosen to be trivial; this uniquely specifies the action $I_k^{S^+}$ on $S^+$, which must satisfy the compatibility with Clifford multiplication: \begin{equation}\label{complexstructureactiononpositivespin1} c(I_k v)\cdot{} I_k^{S^+} \xi= c(v) \cdot{}\xi, \quad \xi\in S^+, v\in \R^4, \end{equation} and \begin{equation}\label{complexstructureactiononpositivespin2} I_k^{S^+} (c(v)\cdot{} \eta)= c(I_k v)\cdot{} \eta, \quad \eta\in S^-, v\in \R^4. \end{equation} One can more concretely think of $I_k^{S^+}$ as given by the matrices in the basis $\{1, f_1\wedge f_2\}$, \[ I_1^{S^+} = \begin{bmatrix} -i & 0 \\ 0 & i \\ \end{bmatrix}, I_2^{S^+} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \\ \end{bmatrix}, I_3^{S^+} = \begin{bmatrix} 0 & i \\ i & 0 \\ \end{bmatrix}. \] In particular they satisfy the quaternionic algebraic relations. On a hyperk\"ahler 4-fold, these actions globalise to actions on the positive spin bundle. The action on the negative spin bundle is trivial. Another way to understand these actions is that $I_i^{S^+}$ equals half of the Clifford multiplication by the 2-form $\omega_i$. There is an antilinear symmetry of $S$ as a Clifford module, coming from the $SU(2)$ structure on $S^+$ and $S^-$, given explicitly in our model by $\epsilon: S\rightarrow S$, \[ f_1 \mapsto f_2, f_2 \mapsto -f_1, 1 \mapsto -f_1\wedge f_2, f_1\wedge f_2 \mapsto 1. \] It satisfies $\epsilon^2=-1$, and commutes with Clifford multiplication and the $I^{S^+}_k$ action. On a hyperk\"ahler manifold, this antilinear symmetry can be globalised to a covariantly constant structure. Another viewpoint on $\epsilon$ is that together with the Hermitian structure it induces the complex symplectic form $\langle \langle \_, \_\rangle \rangle=\langle \epsilon\_, \_ \rangle$ on $S^+$ or $S^-$.
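As a quick consistency check of these conventions, multiplying the matrices $I_k^{S^+}$ given above verifies the quaternionic relations directly:

```latex
\[
I_1^{S^+} I_2^{S^+}
= \begin{bmatrix} -i & 0 \\ 0 & i \end{bmatrix}
  \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}
= \begin{bmatrix} 0 & i \\ i & 0 \end{bmatrix}
= I_3^{S^+},
\qquad
(I_1^{S^+})^2
= \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}
= -\mathrm{Id},
\]
```

and similarly for the other cyclic products, in accordance with the quaternion algebra.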
\section{Introduction} \IEEEPARstart{S}{upervisory} control and data acquisition (SCADA) is a well-established industrial system to automate/monitor processes and to gather data from remote or local equipment such as programmable logic controllers (PLCs), remote terminal units (RTUs) and human-machine interfaces (HMIs). SCADA became popular in the 1960s for power plants, water treatment~\cite{1}, and oil pipelines~\cite{2}, which were usually disconnected from the Internet and made use of hardware devices running proprietary protocols. The network was secured from harmful attacks by its obscurity, thus security measures were barely implemented. However, as more and more SCADA systems adopt the Modbus protocol over TCP and become accessible via the Internet, they are vulnerable to cyberattacks. In 2010, Stuxnet~\cite{stuxnet} spread across the world and damaged Iranian nuclear power plants. Since then, the need for industrial network security has become urgent. To safeguard SCADA networks, an intrusion detection system (IDS) needs to be implemented. An IDS can be signature-based or anomaly-based. Traditionally, signature-based IDS has been the mainstream approach to detecting SCADA attacks. It identifies specific patterns in traffic data to detect malicious activities and can be implemented as policy rules in IDS software such as Snort~\cite{snort,snort_c}. Ref.~\cite{4} investigates a set of attacks against Modbus and designs rules to detect attacks. Ref.~\cite{SRID} proposes a state-relation-based IDS (SRID) to increase the accuracy and decrease the false negative rate in denial-of-service (DoS) detection. However, these detection methods are too complicated and only valid for specific scenarios. Overall, as discovered in previous research, signature-based IDS is only efficient at finding known attacks and its performance relies heavily on the experts' knowledge and experience.
An anomaly-based IDS~\cite{AIDS} overcomes these challenges by introducing machine learning to identify attack patterns from data. It is also widely used in other applications such as mobile data misuse detection~\cite{aids_mobile}, software~\cite{aids_c} and wireless sensor security~\cite{aids_w}. Several machine learning algorithms have been proposed to develop anomaly-based IDS. Linda \textit{et al.}~\cite{linda2009neural} tailored a neural network model with error back-propagation and Levenberg-Marquardt learning rules in their IDS. Rrushi and Kang~\cite{rrushi2009detecting} combined logistic regression and maximum likelihood estimation to detect anomalies in process control networks. Poojitha \textit{et al.}~\cite{poojitha2010intrusion} trained a feedforward neural network (FNN) to classify intrusions on the KDD99 dataset and the industrial control system dataset. Zhang \textit{et al.}~\cite{zhang2011distributed} used support vector machines and artificial immune systems to identify malicious network traffic in the smart grid. Maglaras and Jiang~\cite{maglaras2014intrusion} developed a one-class support vector machine module to train network traces off-line and detect intrusions on-line. All these machine learning algorithms are excellent at observing the pattern of attacks from the in-packet features. None of them, however, takes into account the temporal features between packets and thus will not perform well on attacks such as DoS which have strong temporal dependence. DoS attacks are among the most popular attacks to slow down or even crash SCADA networks. Most of the devices in SCADA operate in low power mode with limited capacity and are vulnerable to DoS~\cite{8}. To date, various DoS types, including spoofing~\cite{spoof}, flooding and smurfing~\cite{smurf}, have been reported. Among all types of DoS, flooding DoS is widely exploited: hackers send a massive number of packets to jam the target network.
In~\cite{13}, the author exploits the TCP SYN flooding attack against the vulnerability of TCP transmission using the hping DoS attack tool. Flooding DoS, along with all other DoS, is difficult to detect because the in-packet features extracted from each data packet may not display any suspicious pattern~\cite{10}. Similar to DoS, man-in-the-middle (MITM) is another attack that is hard to detect by observing the in-packet features. It is more efficient to detect them by observing the inter-packet patterns in the time domain. Anomaly-based IDS for DoS and MITM has become popular along with the advances of machine learning. For example, in~\cite{AAKR}, an auto-associative kernel regression (AAKR) coupled with the statistical probability ratio test (SPRT) is implemented to detect DoS. The result is not satisfactory because the regression model does not take the temporal signatures of DoS into consideration. In~\cite{3}, FNN is used to classify abnormal packets in SCADA with 85\% accuracy for MITM-based random response injection and 90\% accuracy for DoS-based random response injection attacks, but only 12\% for replay-based attacks. The author exploits various attacks including DoS and MITM attacks in a testbed built on Modbus/RTU instead of Modbus/TCP. In~\cite{OCSVM}, the authors propose a one-class support vector machine (OCSVM) combined with the k-means clustering method to detect DoS. They set flags on every 10 packets to reflect the relationships of the time series, but such handcrafted features may be easily bypassed by expert attackers. To detect temporally correlated attacks such as flooding DoS and MITM, one should capture the temporal anomaly of these attacks. However, the above-mentioned IDSs are not designed to extract temporal patterns from packet sequences. A more practical approach is to implement an IDS with the capacity for time series analysis.
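To illustrate what inter-packet time series analysis buys over per-packet inspection, consider the following sketch. It is an illustration we add for exposition, not the detector developed in this paper, and the one-second window and 20-packet threshold are hypothetical values:

```python
from collections import deque

def flood_alerts(timestamps, window=1.0, threshold=20):
    """Flag each packet whose arrival pushes the number of packets seen
    in the trailing `window` seconds above `threshold`.  Both parameter
    values are illustrative, not tuned for any real SCADA deployment."""
    recent = deque()   # arrival times inside the current sliding window
    alerts = []
    for t in timestamps:
        recent.append(t)
        # expire packets that fell out of the window
        while recent and t - recent[0] > window:
            recent.popleft()
        alerts.append(len(recent) > threshold)
    return alerts
```

An LSTM learns far richer temporal signatures than this fixed-window count, but both exploit the same inter-arrival information that purely per-packet classifiers discard.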
Recurrent neural networks (RNN) are machine learning models that incorporate the recognition of temporal patterns. Among all RNN models, long short-term memory (LSTM) has gained popularity from speech recognition~\cite{Speech} and music composition~\cite{music} to machine translation~\cite{translation}. It is designed to predict future events according to the information in the previous time steps and is suitable for detecting attacks with temporal correlation. For example, Ref.~\cite{LSTM_ddos} applied LSTM to distributed DoS detection with a high success rate. In~\cite{6} the authors also developed a time-series anomaly detector based on LSTM~\cite{Hochreiter:1997:LSM:1246443.1246450} networks to enhance the performance of IDS and apply this framework to the dataset in~\cite{5}. But the number of DoS attacks in that dataset is relatively small and the time interval of the DoS attack in that dataset is too long, making the detection inefficient. Despite the excellent performance in detecting temporally correlated attacks such as DoS and MITM, the capacity of RNN to detect temporally uncorrelated attacks is limited compared to other types of machine learning algorithms such as FNN. In this paper, utilizing the advantages of both RNN and FNN while avoiding their disadvantages, we implement an omni IDS that can detect all attacks regardless of their temporal dependence. On a SCADA testbed~\cite{8}, we demonstrate that our IDS reaches the highest performance against all attacks compared to those that employ RNN or FNN alone. \section{SCADA Testbed and Data Synthesis} \label{sec:simulatednetwork} Our IDS is tested on a simulated SCADA testbed. A simulated network has the advantage of being easy to maintain, change and operate, and is less costly than a real device network. A software testbed, which simulates a SCADA industrial network and emulates the attacks, was built by L. Zhang~\cite{8} on the basis of the work of T. Morris~\cite{MORRIS201188}.
In the past, several preliminary studies on SCADA security were conducted on this testbed~\cite{Wang_Hierarchical,shivamThesis}. The attack target is a simple SCADA network, consisting of two tanks using Modbus over TCP. The liquid level of the tanks is controlled by pumps and measured by sensors via Modbus control information. The purpose of this network is to attract hackers and study possible defense methods. Such a system is called a honeypot, as it fools the attacker while studying his behaviour. This tank system is developed with the MBLogic HMIBuilder and HMIServer toolkit~\cite{mblogic} and has been extended by L. Zhang in~\cite{8}. The HMI's purpose is to pull data from the sensors or send the desired pump speed to the motor periodically. The back end of the HMI is a PLC while the front end is a web browser. As this system is simulated, we make use of four virtual machines as shown in Fig.~\ref{fig:network}. The SCADA system runs on a Modbus master and several slaves. The HMI is deployed on a virtual host called Nova, thus we refer to this host as the Modbus master. In order to extend the network, some Modbus slaves such as PLCs are simulated by the HoneyD software~\cite{honeyd}. This provides a more realistic honeypot. The role of a Modbus slave is to process commands from the master by pulling sensory data about the tank system from the PLCs and sending it back to the master. \begin{figure}[!ht] \centering \includegraphics[width=3.5in]{SCADA_network.png} \caption{Testbed architecture~\cite{8}} \label{fig:network} \end{figure} The data needed to feed the neural network is generated by an attack machine using a virtual host named Kali. Kali is a Debian-derived Linux host used for penetration testing and features many attack and defense tools. In addition to the message exchange between the Modbus master (Nova) and its slaves, we can launch normal traffic mixed with various attacks from Kali.
A command line tool, Modpoll~\cite{modpoll}, is used to send Modbus instructions to the PLC which controls sensitive tank system variables. An example Modpoll instruction which sends a pump speed of 5 to the system looks like this: \lstset{language=sh} \lstset{tabsize=1} \begin{lstlisting} $ modpoll -0 -r 32210 10.0.0.5 5 \end{lstlisting} The command addresses a simulated PLC with an IP address of 10.0.0.5 and a register address which contains either a threshold value (registers 42212 - 42215), the current pump speed (32210) or the tank level (42210, 42211), measured by the sensors. Modpoll will send Modbus requests with function code 16 to attempt a write action to the specified registers. By modifying the pump speed the attackers can exceed the allowed tank level and cause serious damage to the system. A script on Kali will randomly choose between these normal and malicious Modbus instructions and will launch a Modpoll instruction with another randomly chosen parameter. This ensures the desired distribution of attack/non-attack data. The traffic is recorded by the fourth virtual machine, referred to as ``Defense Wall'', which operates in bridge mode and thus is invisible to the attacker. With PyShark we capture the traffic between Nova and the Modbus slaves and between the attacker machine Kali and the PLCs. During this process we can label each packet as malicious or normal. \subsection{Features extracted from the data packets}\label{appendix_features} In our testbed, we use a self-developed IDS installed on the ``Defense Wall'' to extract 19 features from each data packet captured.
They are listed below: \begin{enumerate} \item Source IP address; \item Destination IP address; \item Source port number; \item Destination port number; \item TCP sequence number; \item Transaction identifier set by the client to uniquely identify each request; \item Function code identify the Modbus function used; \item Reference number of the specified register; \item Modbus register data; \item Modbus exception code; \item Time stamp; \item Relative time; \item Highest threshold; \item Lowest threshold; \item High threshold; \item Low threshold; \item Pump speed; \item Tank 1 water level; \item Tank 2 water level. \end{enumerate} Here, the ``Relative time'' represents the time in seconds for packets relative to the first packet in the same TCP session. To reduce the periodicity of this feature, we reset it to zero when ``Relative time'' reaches 3,000 seconds. In our IDS, we adopt feature scaling of each feature $x$ in the dataset according to \begin{equation} x^\prime=\frac{x-\bar{x}}{\sigma_x} \end{equation} where $\bar{x}$ and $\sigma_x$ are the mean and standard deviation of original feature $x$ and $x^\prime$ is the re-scaled feature from $x$ with zero mean and unity variance. \subsection{Types of attacks in our datasets}\label{appendix_attacks} \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{table.png} \caption{Data packet types distribution in Dataset I, II and online script. The ones with a superscript ``*'' are temporally correlated attacks. } \label{fig:attacks_dist} \end{figure} Using our scripts, we created two datasets. As illustrated in Fig.~\ref{fig:attacks_dist}, in addition to ``Normal'' data packets, Dataset I contains attacks that are uncorrelated in time domain while Dataset II contains temporally dependent attacks. Here we have incorporated 10 attacks in our testbed. 7 of them are temporally uncorrelated while the remaining 3 are correlated. 
The temporally uncorrelated attacks include ``Pump Speed'' (Pump), ``Tank 1 Level'' (T1), ``Tank 2 Level'' (T2), ``Threshold Highest'' (HH), ``Threshold Lowest'' (LL), ``Threshold High'' (H) and ``Threshold Low'' (L), whose detailed descriptions can be found in~\cite{8,MORRIS201188}. Among the temporally correlated attacks, two types of flooding DoS attacks are included~\cite{5}. The first, labelled ``Scan flooding'' (SCAN), sends massive numbers of scan commands, increasing the latency of communications between the HMI and the sensors in SCADA. The second type, labelled ``Incorrect CRC'' (CRC), floods the master with packets carrying incorrect cyclic redundancy check (CRC) values to increase its latency. Another temporally correlated attack included in this testbed is the ``Man-in-the-middle'' (MITM) attack. It is an eavesdropping attack in which the attacker secretly monitors the communication traffic between two parties. Here, the MITM attack is launched by Ettercap~\cite{ettercap} using ARP spoofing~\cite{arp_spoof}. One effective way to detect ARP spoofing is to inspect the Media Access Control (MAC) address in layer 2 of the OSI model. However, most network IDSs (NIDSs) do not support layer-2 protocols such as ARP. Even Snort requires an ARP spoof preprocessor~\cite{snort_pre} to collect MAC address information to detect ARP spoofing. Besides, the victim host of an ARP spoofing attack experiences packet retransmissions, and for SCADA networks packet retransmissions or delays may cause great damage. Therefore, the IDS should raise an alert when it detects either an MITM attack or packet retransmissions. To make the IDS robust in detecting both MITM attacks and packet retransmissions, we remove the MAC address feature, which was used for labeling the MITM attack, from the datasets used for training the neural networks.
In the first stage, the FNN and LSTM IDSs are trained as binary classifiers that only distinguish attacks from normal traffic and are tested on these datasets separately for performance comparison. In the online phase, these two IDSs, along with our FNN-LSTM ensemble IDS, are trained as multi-class classifiers on the combined datasets to distinguish the various types of attacks from normal traffic and are deployed on the testbed. In addition, we implement a script that launches real-time attacks for online testing. The online script randomly launches normal traffic and temporally uncorrelated and correlated attacks with the ratios shown in the table, to examine the omni-detection capability of the different IDSs. \section{IDS Implementation} In this paper, we implement three IDSs: a conventional FNN, an LSTM and an FNN-LSTM ensemble IDS. We use Keras~\cite{keras} to implement TensorFlow~\cite{tensorflow} based machine learning models, trained with the Adam optimizer~\cite{kingma2014adam}. The structures of these IDSs are detailed in the following subsections. \subsection{FNN IDS} \begin{figure}[!t] \centering \subfloat[\label{fig:basic1}]{\includegraphics[width=\columnwidth]{basic1.jpg}}\\ \subfloat[\label{fig:basic2}]{\includegraphics[width=\columnwidth]{basic2.png}}\\ \caption{(a) The schematic of the FNN IDS; (b) details of each neuron in the FNN.} \label{fig:basic} \end{figure} The basic structure of the FNN IDS is illustrated in Fig.~\ref{fig:basic}. A typical FNN is formed by an input layer, an output layer and one or more hidden layers in-between. Each layer has a number of neurons that use the neuron outputs from the previous layer as input and produce output for the neurons in the next layer. In our case, the inputs are the scaled and normalized features extracted from the data packets, and the outputs are the predictions of attacks and normal events.
Mathematically, the FNN can be expressed as: \begin{equation} \begin{array}{rcl} \textbf{z}^{(1)}&=&\textbf{W}^{(1)}\textbf{x}+\textbf{b}^{(1)}, \textbf{h}_1=f_h(\textbf{z}^{(1)})\\ \textbf{z}^{(2)}&=&\textbf{W}^{(2)}\textbf{h}_1+\textbf{b}^{(2)}, \textbf{h}_2=f_h(\textbf{z}^{(2)})\\ &...&\\ \textbf{z}^{(N+1)}&=&\textbf{W}^{(N+1)}\textbf{h}_N+\textbf{b}^{(N+1)}, \hat{\textbf{y}}=\textbf{z}^{(N+1)} \end{array} \end{equation} where $N$ is the number of hidden layers, $f_h$ is the ReLU activation function, and $\textbf{W}^{(1)},\textbf{W}^{(2)},..., \textbf{W}^{(N+1)}$, $\textbf{b}^{(1)},\textbf{b}^{(2)}, ..., \textbf{b}^{(N+1)}$ are the parameters to be trained. Here we use the softmax cross entropy as our loss function, which can be expressed as \begin{equation} f_L(\hat{\textbf{y}},\textbf{y})=-\sum_{i=1}^{C}\textbf{y}_{i}\log(f_{s}(\hat{\textbf{y}_i})) \end{equation} where $\hat{\textbf{y}}$ is the predicted label and $\textbf{y}$ the ground truth, $C$ is the number of all possible classes, $\textbf{y}_{i}$ and $\hat{\textbf{y}_i}$ are the actual and predicted labels that belong to class $i$, and $f_{s}$ is the softmax function. \subsection{LSTM IDS} The LSTM is built on a collection of single LSTM cells~\cite{5}. The structure of a single LSTM cell is shown in Fig.~\ref{fig_sim}. Each LSTM cell has 3 gates: an input gate, a forget gate and an output gate. The input gate selects useful information and pushes it to the cell. Irrelevant information is discarded at the forget gate. The output gate outputs the activation state \(o_t\). A hidden state vector \(h_t\) is transferred to the next time step.
\begin{figure}[!t] \centering \subfloat[\label{fig_sim}]{ \includegraphics[width=3.5in]{LSTM_neuron.jpg}} \\ \subfloat[\label{LSTM}]{\includegraphics[scale = 0.5]{LSTM_network.png}} \caption{The structure of (a) a single LSTM cell, (b) the LSTM network.} \end{figure} The following equations represent the processes of a single LSTM cell: \begin{equation} \begin{array}{rcl} \textbf{f}_t &=& \sigma(\textbf{W}_fx_t+\textbf{U}_fh_{t-1}+\textbf{b}_f)\\ \textbf{i}_t &=& \sigma(\textbf{W}_ix_t+\textbf{U}_ih_{t-1}+\textbf{b}_i)\\ \textbf{o}_t &=& \sigma(\textbf{W}_ox_t+\textbf{U}_oh_{t-1}+\textbf{b}_o)\\ \textbf{c}_t &=& \textbf{f}_t\circ \textbf{c}_{t-1}+\textbf{i}_t\circ \sigma_g(\textbf{W}_cx_t+\textbf{U}_ch_{t-1}+\textbf{b}_c)\\ \textbf{h}_t&=&\textbf{o}_t\circ \sigma_g(\textbf{c}_t) \end{array} \end{equation} where $\sigma_g$ is the hyperbolic tangent function, $\sigma$ is the sigmoid function, $\circ$ denotes the element-wise product, and $\textbf{W}$, $\textbf{U}$ and $\textbf{b}$ are the weight matrices and bias vectors of the gates. As shown in Fig.~\ref{LSTM}, the LSTM IDS includes two LSTM layers with 10 LSTM cells in each layer. An activation layer with a sigmoid activation function is placed after the last LSTM layer. The $\{x_1,x_2,...,x_t\}$ vector is the input vector containing the features of packets within $t$ time steps. The dataset is reshaped into this format and fed into the LSTM model. In our model, we set $t=10$. The loss function in this model is the binary cross entropy and the optimizer is the Adam optimizer~\cite{Adam}. \subsection{FNN-LSTM Ensemble IDS}\label{sub_ensemble} \begin{figure}[!t] \centering \includegraphics[scale=0.4]{ensemble.png} \caption{Ensemble Model.}\label{fig:ensemble} \end{figure} Our FNN-LSTM ensemble IDS aims to combine the advantages of both FNN and LSTM while avoiding their weaknesses~\cite{ensemble}. The schematic of this model is shown in Fig.~\ref{fig:ensemble}. In this model, the data packet features are fed into the FNN and the LSTM simultaneously to predict attacks as a multi-class classifier.
The output labels of both are concatenated as the input of a multilayer perceptron, which, through training, is capable of voting for the best prediction of the data packet under investigation. \section{Experiment and Result} To demonstrate their capability for detecting attacks with/without temporal correlation, we first implement the FNN and LSTM IDSs to establish references for comparison. At this stage, the IDSs only conduct binary classification to predict whether the data packet under investigation is normal (labeled ``0'') or an attack (labeled ``1''). Consequently, the sigmoid function \begin{equation} \sigma(z) = \frac{e^z}{1+e^z} \end{equation} is selected as the activation function. Here, $z$ is the output of the previous layer. \subsection{Hyper parameters tuning} Both IDSs are trained using 70\% of randomly chosen samples from the two datasets and tested with the remaining 30\% following a 10-fold training/testing procedure, so that the average and standard deviation of figures of merit including precision, recall and $\mathrm{F_1}$ can be used for evaluation. To determine the number of hidden layers necessary for our FNN, we computed $\mathrm{F_1}$ with 0, 1 and 2 hidden layers, obtaining 99.22\%, 99.96\% and 99.97\% respectively. As shown, employing one hidden layer increases $\mathrm{F_1}$ from 99.22\% to 99.96\%, while a second hidden layer yields only a marginal further improvement. Therefore, we select one hidden layer in our FNN implementation. In addition, to circumvent overfitting, we adopt an early stop procedure in the FNN such that training stops after 35 consecutive epochs in which the relative change in loss is below $\mathrm{10^{-6}}$~\cite{es}. Similarly, the LSTM adopts early stopping, with the number of epochs capped at 3.
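The early-stop criterion described above can be sketched in plain Python. The relative-change threshold of $10^{-6}$ and the patience of 35 epochs follow the text; the loss sequences used in the comments are purely illustrative.

```python
def should_stop(losses, rel_tol=1e-6, patience=35):
    """Return True once `patience` consecutive epochs have seen a
    relative loss change below `rel_tol` between consecutive epochs."""
    quiet = 0  # number of consecutive "no improvement" epochs so far
    for prev, cur in zip(losses, losses[1:]):
        rel = abs(prev - cur) / max(abs(prev), 1e-12)
        quiet = quiet + 1 if rel < rel_tol else 0
        if quiet >= patience:
            return True
    return False

# A flat loss history triggers the stop; a steadily decreasing one does not.
```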
In the implementation of the LSTM, we connect 10 LSTM cells in the input layer, where the features of 10 consecutive data packets are entered into the cells to predict whether the last packet is normal or an attack. In training, we adopt mini-batches with a batch size of $\mathrm{1,000}$. \subsection{Detection of temporally uncorrelated attacks} We use Dataset I described in Section~\ref{sec:simulatednetwork} to compare the detection capability of the FNN and LSTM for temporally uncorrelated attacks. To verify the models, learning curves are shown in Fig.~\ref{learning_dsetI}, plotting training and testing losses as a function of the number of training samples. Here the average value and standard deviation after 10-fold training/testing are represented by circle markers and error bars respectively. As shown, with training samples exceeding 40,000, the FNN training and testing losses (blue dashed lines) start to converge, while the LSTM (red solid lines) converges at sample sizes larger than 60,000. Overall, this confirms that the number of samples in Dataset I is sufficient for the training and testing of our IDS.
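The sliding-window reshaping described above can be sketched with numpy. The window length $t=10$ follows the text; the array shapes and the convention of labelling each window by its last packet are our interpretation of the setup.

```python
import numpy as np

def make_sequences(features, labels, t=10):
    """Group the features of `t` consecutive packets into one LSTM sample.

    `features` has shape (n_packets, n_features); the label of the last
    packet in each window is used as the target for that window.
    """
    X = np.stack([features[i:i + t] for i in range(len(features) - t + 1)])
    y = labels[t - 1:]
    return X, y
```

For example, 12 packets with 3 features each yield 3 overlapping windows of shape (10, 3), each labelled by its final packet.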
\begin{figure} \centering \includegraphics[scale=0.65]{LearningCurvenonDoS_Loss.png} \caption{Learning Curves of FNN and LSTM using temporally-uncorrelated-attacks dataset (Dataset I).}\label{learning_dsetI} \end{figure} \begin{table}[!t] \normalsize \centering \caption{\label{tab:test_nonDoS}Comparison of temporally-uncorrelated-attacks detection $(\%)$.} \begin{tabular}{|c|c|c|c|} \hline & { Precision} & { Recall} & { F}$_{1}$ \\\hline { FNN} & $\mathrm{99.996{\pm}0.006}$ & $\mathrm{99.84{\pm}0.05}$ & $\mathrm{99.92{\pm}0.03}$ \\\hline { LSTM} & $\mathrm{99.88{\pm}0.06}$ & $\mathrm{98.7{\pm}0.4}$ & $\mathrm{99.3{\pm}0.1}$ \\\hline \end{tabular} \end{table} \begin{table}[!t] \normalsize \caption{Confusion matrices of temporally-uncorrelated-attacks detection using Dataset I (averaged over 10 trials)}\label{tab:confusion_nonDoS} \centering \begin{tabular}{|c|c|c|c|c|} \cline{4-5} \multicolumn{3}{c|}{}&\multicolumn{2}{c|}{{\bf Predicted}} \\ \cline{4-5} \multicolumn{3}{c|}{}&{\bf Normal} & {\bf Attacks} \\ \hline \multirow{6}{*}{\vspace{0.3in}\rotatebox[origin=c]{90}{{\bf Actual}}} & \multirow{3}{*}{\vspace{0.1in}{\bf Normal}} & {\bf FNN} & $\mathrm{69,845.4}$ & $\mathrm{0.6}$ \\\cline{3-5} &&{\bf LSTM} & $\mathrm{69,902.2}$& $\mathrm{22.8}$ \\\cline{2-5} &\multirow{3}{*}{\vspace{0.1in}{\bf Attacks}} & {\bf FNN} & $\mathrm{30.7}$ & $\mathrm{19,741.3}$ \\\cline{3-5} &&{\bf LSTM}& $\mathrm{241.9}$& $\mathrm{19,448.1 }$ \\\hline \end{tabular} \end{table} After the IDSs are trained, we use 30\% of the samples in Dataset I for 10-fold testing. As shown in Tables~\ref{tab:test_nonDoS} and~\ref{tab:confusion_nonDoS}, on average, for the FNN, only 0.6 of the 69,846 normal data packets are mislabelled as attacks while only 30.7 out of 19,772 actual attacks are mislabelled as normal traffic, yielding a precision, recall and $\mathrm{F_1}$ of $\mathrm{99.996{\pm}0.006\%}$, $\mathrm{99.84{\pm}0.05\%}$, and $\mathrm{99.92{\pm}0.03\%}$.
In comparison, the LSTM mislabelled 22.8 normal packets as attacks and 241.9 attacks as normal packets, resulting in figures of merit of $\mathrm{99.88{\pm}0.06\%}$, $\mathrm{98.7{\pm}0.4\%}$ and $\mathrm{99.3{\pm}0.1\%}$. The comparison demonstrates that the FNN outperforms the LSTM in detecting temporally uncorrelated attacks, where recognition of in-packet feature patterns is critical. \subsection{Detection of temporally correlated attacks} \begin{figure}[!t] \centering \includegraphics[scale=0.65]{LearningCurveDoS_Loss.png} \caption{Learning Curves of FNN and LSTM using temporally-correlated-attacks dataset (Dataset II).}\label{fig:LC_DoS} \end{figure} \begin{table}[!t] \normalsize \caption{Comparison of temporally-correlated-attacks detection $\mathrm{(\%)}$}\label{tab:test_DoS} \centering \begin{tabular}{|c|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & {\bf Precision} & {\bf Recall} & {\bf F}$_{1}$ \\\hline {\bf FNN} & $\mathrm{73{\pm}2}$ & $\mathrm{49{\pm}4}$ & $\mathrm{58{\pm}2}$ \\\hline {\bf LSTM} & $\mathrm{99.60{\pm}0.01}$ & $\mathrm{99.52{\pm}0.02}$ & $\mathrm{99.56{\pm}0.01}$\\\hline \end{tabular} \end{table} \begin{table}[!t] \normalsize \caption{Confusion matrices of temporally-correlated-attacks detection using Dataset II}\label{tab:confusion_DoS} \centering \begin{tabular}{|c|c|c|c|c|} \cline{4-5} \multicolumn{3}{c|}{}&\multicolumn{2}{c|}{{\bf Predicted}} \\ \cline{4-5} \multicolumn{3}{c|}{}&{\bf Normal} & {\bf Attacks} \\ \hline \multirow{6}{*}{\vspace{0.3in}\rotatebox[origin=c]{90}{{\bf Actual}}} & \multirow{3}{*}{\vspace{0.1in}{\bf Normal}} & {\bf FNN} & $\mathrm{28,668.3}$ & $\mathrm{5,044.7}$\\\cline{3-5} &&{\bf LSTM}& $\mathrm{33,504.0}$ & $\mathrm{105.0}$ \\\cline{2-5} &\multirow{3}{*}{\vspace{0.1in}{\bf Attacks}} & {\bf FNN} & $\mathrm{13,510.4}$ & $\mathrm{13,169.6}$\\\cline{3-5} &&{\bf LSTM}& $\mathrm{128.4}$ & $\mathrm{26,652.6} $ \\\hline \end{tabular} \end{table} In this subsection the FNN and LSTM are re-trained and tested using Dataset II for comparison of their temporally
correlated attack detection. Again, the learning curves in Fig.~\ref{fig:LC_DoS} show that both the FNN (blue dashed lines) and the LSTM (red solid lines) converge for training samples exceeding 10,000, while the LSTM clearly shows a lower testing loss. This confirms that our dataset is sufficient to generalize the IDS models. The performance of each model is compared in Tables~\ref{tab:test_DoS} and \ref{tab:confusion_DoS}. As shown, the FNN is inefficient in detecting temporally correlated attacks, with precision, recall and $\mathrm{F_1}$ scores as low as $\mathrm{73{\pm}2\%}$, $\mathrm{49{\pm}4\%}$ and $\mathrm{58{\pm}2\%}$ respectively. In particular, 5,044.7 out of 33,713 normal packets are mislabelled as attacks while 13,510.4 out of 26,680 actual attacks are mislabelled as normal traffic. It is evident that the poor performance of the FNN is caused by its inability to capture inter-packet features. In contrast, the LSTM displays an outstanding performance, with the corresponding figures of merit being $\mathrm{99.60{\pm}0.01\%}$, $\mathrm{99.52{\pm}0.02\%}$ and $\mathrm{99.56{\pm}0.01\%}$, where only 105.0 normal packets are mislabelled as attacks and 128.4 attack packets are mislabelled as normal traffic. As expected, the LSTM outperforms the FNN in detecting temporally correlated attacks due to its inherent ability to observe data patterns in the time domain.
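The figures of merit quoted throughout this section follow directly from the confusion-matrix counts. A minimal Python check, using the average LSTM counts on Dataset II reported above (true positives, false positives, false negatives):

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Average LSTM counts on Dataset II: 26,652.6 true positives,
# 105.0 false positives, 128.4 false negatives.
p, r, f = prf1(26652.6, 105.0, 128.4)
# p, r, f come out at roughly 99.6%, 99.5% and 99.6%, consistent
# with the tabulated values.
```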
\subsection{Omni-attacks detection} \begin{table}[!t] \normalsize \caption{Macro-average comparison of omni-attacks detection $(\%)$}\label{omni_macro} \centering \begin{tabular}{|c|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} &{\bf Precision} & {\bf Recall} & {\bf F}$_{1}$ \\ \hline {\bf FNN}&$\mathrm{88{\pm}1}$ &$\mathrm{89.2{\pm}0.8}$&$\mathrm{87.4{\pm}0.6}$ \\ \hline {\bf LSTM} &$\mathrm{99.54{\pm}0.03}$&$\mathrm{99.01{\pm}0.07}$&$\mathrm{99.27{\pm}0.05}$\\\hline {\bf Ensemble} &$\mathrm{99.76{\pm}0.05}$&$\mathrm{99.57{\pm}0.03}$&$\mathrm{99.68{\pm}0.04}$\\\hline \end{tabular} \end{table} \begin{figure} \centering \subfloat[\label{omni_precision}]{ \includegraphics[scale=0.6]{precision.png} }\\ \subfloat[\label{omni_recall}]{ \includegraphics[scale=0.6]{recall.png} }\\ \subfloat[\label{omni_F1}]{ \includegraphics[scale=0.6]{F1.png} } \caption{(a) Precision, (b) Recall and (c) $\mathrm{F_1}$ of individual attacks in omni-attacks detection.}\label{fig:LC_DoS} \end{figure} Recognizing the complementary strengths of the FNN and LSTM IDSs in detecting temporally uncorrelated and correlated attacks, we here combine the advantages of both into an omni-attacks detector through an ensemble approach. The structure of the FNN-LSTM ensemble is described in Subsection~\ref{sub_ensemble}. To implement it, we first remodelled the FNN and LSTM as multi-class classifiers so that different attacks can be distinguished. Datasets I and II are combined and used to train the FNN and LSTM independently. The outputs of both are combined to form the input features of a multilayer perceptron for training. After training, the FNN, LSTM and FNN-LSTM ensemble IDSs are integrated into our SCADA testbed to detect and classify attacks. The traffic is generated online using the script that produces a pre-determined ratio of normal traffic and temporally correlated and uncorrelated attacks, as described in Fig.~\ref{fig:attacks_dist}.
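The concatenate-and-vote stage described above can be sketched in numpy. The layer sizes and weights below are illustrative placeholders; in the actual IDS the perceptron weights are learned by training on the concatenated FNN and LSTM outputs.

```python
import numpy as np

def ensemble_features(p_fnn, p_lstm):
    """Concatenate the class-probability outputs of the FNN and the
    LSTM into the input vector of the voting perceptron."""
    return np.concatenate([p_fnn, p_lstm], axis=-1)

def mlp_vote(x, W1, b1, W2, b2):
    """One-hidden-layer perceptron that votes for the final class."""
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    logits = h @ W2 + b2
    return int(np.argmax(logits))
```

With identity-like placeholder weights the vote simply sums the two classifiers' per-class probabilities, which illustrates how the perceptron can arbitrate between disagreeing predictions.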
To estimate the figures of merit, we evenly divide the predicted labels into 10 portions and compute the average and standard deviation of the macro-averaged precision, recall and $\mathrm{F_1}$. As shown in Table~\ref{omni_macro}, among the three IDSs, the FNN achieves the lowest performance, with macro-averaged figures of merit of $\mathrm{88{\pm}1\%}$, $\mathrm{89.2{\pm}0.8\%}$ and $\mathrm{87.4{\pm}0.6\%}$, while the LSTM reaches $\mathrm{99.54{\pm}0.03\%}$, $\mathrm{99.01{\pm}0.07\%}$ and $\mathrm{99.27{\pm}0.05\%}$. The FNN-LSTM ensemble IDS outperforms both, with figures of merit of $\mathrm{99.76{\pm}0.05\%}$, $\mathrm{99.57{\pm}0.03\%}$ and $\mathrm{99.68{\pm}0.04\%}$. Detailed analysis in Fig.~\ref{fig:LC_DoS} further confirms that the under-performance of the FNN (yellow bars) is due to mislabelling of temporally correlated attacks (MITM, CRC and SCAN), while the performance of the LSTM (red bars) is degraded by temporally uncorrelated attacks (``Pump Speed (Pump)'', ``Tank 1 Level (T1)'', ``Threshold High (H)'', etc.). Overall, the FNN-LSTM ensemble consistently outperforms both on all types of attacks. \section{Conclusion} In this paper we demonstrated that the FNN-LSTM ensemble IDS can detect all types of cyberattacks regardless of their temporal correlation. In contrast, the FNN performs well only on temporally uncorrelated attacks, while the LSTM is relatively weak on uncorrelated attacks. In future research we will further improve our model through field trials. \ifCLASSOPTIONcaptionsoff \newpage \fi \appendices \section*{Acknowledgment}
Library Plaza Dental is dedicated to providing excellent dental care that enables our patients to enjoy good dental health their whole lives. We provide general dental care that includes checkups, cleanings and fillings, but we also go the extra mile to make sure that you have a smile that will reflect your concern for your appearance and general health. Part of our role is to educate our patients on the best habits to cultivate to ensure a great smile. Two of the best habits to practice are brushing after every meal and flossing daily. To learn some helpful tips about these two topics, keep reading.

Even something as basic as brushing can bear brushing up on (pun intended). The first thing is to make sure your toothbrush is the right tool for the job. One mistake people make is to use a toothbrush that is too big for their mouths. If you have a large mouth, yes, a large toothbrush is appropriate, but if your mouth is average or small sized, choose a smaller toothbrush so you can reach all the nooks and crannies with ease. You should always choose a soft toothbrush and replace it every six months, or after an illness or cold sore.

Remember that you are using a toothbrush, not a tooth scrubber. Brush your teeth gently, being sure to angle your brush so that the tips of the bristles reach below the gumline. If you brush too aggressively, you can damage the enamel, the top layer of your tooth that protects your tooth.

Flossing your teeth is one of the best habits you can have to ensure your teeth's good health. To be more precise, flossing protects your teeth and gums. The goal of flossing is to clean your teeth below the gumline. The number one reason to do this is to prevent gum disease. When food particles sit under the gum line, they irritate the gum, which becomes inflamed. This is gingivitis and causes your gums to be red, swollen and to bleed. This is the warning signal that action needs to be taken.
Call your Library Plaza dentist if you notice these symptoms along your gumline. If left untreated, gingivitis develops into periodontal disease, which can lead to tooth loss. For any of our general dentistry or cosmetic services, call for a consultation appointment.
\section{Introduction} The constraints on black hole masses at the highest redshifts currently probed, $z\simeq6$, are few, and seem to provide conflicting results. (i) There seems to be little or no correlation between black hole mass and velocity dispersion, $\sigma$ \citep{Wang2010} in the brightest radio-selected quasars, (ii) typically black holes are `over-massive' at fixed galaxy mass/velocity dispersion compared to their $z=0$ counterparts \citep[e.g., Walter et al. 2004; at lower redshift see also][]{Mclure2004,Shields2006,Peng2006b, Decarli2010, Merloni2010, Woo2008}, but (iii) analysis of the black hole mass/luminosity function and clustering suggests that either many massive galaxies do not have black holes, or these black holes are less massive than expected \citep[W10 hereafter]{Willott2010}. As a result of point (ii), most authors propose that there is {\it positive} evolution in the $M_{\rm BH}-$galaxy relationships, and quantify it as a change in {\it normalization}, in the sense that at fixed galaxy properties (e.g. velocity dispersion, stellar mass), black holes at high redshift are more massive than today. For instance, Merloni et al. (2010) propose that $M_{\rm BH}-M_*$ evolves with redshift as $(1+z)^{0.68}$ while \cite{Decarli2010} suggest $(1+z)^{0.28}$. Point (iii) above, however, is inconsistent with this suggestion unless only about $1/100$ of galaxies with stellar mass $\simeq 10^{10}-10^{11} $ $M_\odot$ ~at $z=6$ host a black hole (W10). These galaxies are nonetheless presumed to be the progenitors of today's massive ellipticals, which typically host central massive black holes. When inferences on the population of massive black holes at the highest redshift are made, we have to take into consideration two important selection effects (see Lauer et al. 2007b). First, only the most massive black holes, powering the most luminous quasars, can be picked up at such high redshifts \citep{Shen2008, Vestergaard2008}. 
Second, as a result of the limited survey area of current imaging campaigns, only black holes that reside in relatively common galaxies can be recovered. Taken together, these biases imply that the observable population of black holes at high redshift will span a narrow range of masses and host properties (see also Adelberger et al. 2005, Fine et al. 2006). In this paper, we explore the impact of these observational biases on attempts to recover the intrinsic properties of the black hole population. Our calculations are based on simple models grounded in empirical relations measured at much lower redshift, and therefore our results should be treated with caution. The aim of this paper is only to highlight the effects of the different factors that can influence the measurement of the intrinsic properties of the black hole population at high redshift. In section 2 we describe how we generate Monte Carlo realizations of the $M_{\rm BH}-\sigma$ relation at $z=6$, varying the slope and normalization. We then select `observable' systems from these samples, considering both `shallow' and `pencil beam' surveys, and test how well we can recover the parameters of the $M_{\rm BH}-\sigma$ relation from the `observable' systems.
\section{Scatter and evolution of the $M_{\rm BH}-\sigma$ relation at high redshift} We can qualitatively show the effects of selection biases with a simple exercise. Let us assume an evolution of the $M_{\rm BH}-\sigma$ relationship of the form: \begin{equation} M_{\rm BH,\sigma}=10^8 \,{\rm M_\odot} \left( \frac{\sigma}{200 \,{\rm km\,s^{-1}}} \right)^\alpha (1+z)^{\gamma}, \label{eq:MS_z} \end{equation} where the slope $\alpha$ and the evolution index $\gamma$ may differ from the values measured at $z=0$ ($\alpha\simeq4$, $\gamma=0$). Let us now also assume that at fixed $\sigma$ the logarithmic scatter in black hole mass is $\Delta=$0.25-0.5 dex ($M_{\rm BH}=M_{\rm BH,\sigma}\times 10^{\Delta \delta}$, where $\delta$ is normally distributed; see, e.g., Gultekin et al. 2009, Merloni et al. 2010. The results are qualitatively unchanged for a uniform distribution in $\log\Delta$.) We create a Monte Carlo simulation of the $M_{\rm BH}-\sigma$ relation at $z=6$ assuming different values of $\alpha$ and $\gamma$. For this exercise we run a number of realizations $N(M_{h}) \propto 1/n(M_{h})$, where $n$ is the number density of halos of a given mass ($M_h$) calculated through the Press \& Schechter formalism. We then select only systems that are likely to be observed, considering a shallow survey and a pencil beam survey. A wide, shallow survey preferentially selects systems with high luminosity, but has the advantage of a large area. For instance, the SDSS quasar catalogue selects sources with luminosities larger than $M_i = -22.0$ ($\simeq 10^{45}$ erg s$^{-1}$) over an area of 9380 deg$^2$, corresponding to a volume of almost 7 comoving Gpc$^3$ at $z=6$. To simulate a shallow survey, we select black holes with a sizeable mass, implying that large luminosities can be achieved, $M_{\rm BH}>3\times 10^8$ $M_\odot$~(see, e.g. Salviander et al. 2007; Lauer et al. 2007b, Vestergaard et al. 2008, Shen et al. 2008, 2010 for a discussion of this bias), and hosted in halos with space density $n>1$ Gpc$^{-3}$.
Pencil beam surveys can probe fainter systems, but at the cost of a smaller area; e.g. the 2 Ms Chandra Deep Fields cover a combined volume of $\simeq10^5$ comoving Mpc$^3$ at $z=6$ and reach flux limits of $\simeq 10^{-17}$ and $\simeq 10^{-16}$ erg cm$^{-2}$ s$^{-1}$ in the 0.5-2.0 and 2-8 keV bands, respectively (the flux limit corresponds to a luminosity of $\simeq10^{43}$ and $\simeq10^{44}$ erg s$^{-1}$ at $z=6$). As an example of a pencil beam survey, we select black holes with mass $M_{\rm BH}>10^7$ $M_\odot$~hosted in halos with density $n>10^3$ Gpc$^{-3}$. To select sources that are observable in current surveys, we link the velocity dispersion, $\sigma$, to the mass of the host dark matter halo. Empirical correlations have been found between the central stellar velocity dispersion and the asymptotic circular velocity ($V_{\rm c}$) of galaxies (Ferrarese 2002; Baes et al. 2003; Pizzella et al. 2005). Some of these relationships (Ferrarese, Baes) closely mimic the simple $\sigma=V_c/\sqrt{3}$ definition that one derives assuming complete orbital isotropy. Indeed, it is difficult to imagine that the ratio between $\sigma$ and $V_c$ for massive, stable systems evolves strongly with redshift and that it can be much different from $\sqrt{3}$ or $\sqrt{2}$ (see Binney \& Tremaine 2008). Since the asymptotic circular velocity ($V_{\rm c}$) of galaxies is a measure of the total mass of the dark matter halo of the host galaxies, we can derive relationships between black hole and dark matter halo mass, adopting, for instance, Equation 1 with $\alpha=4$ and $\gamma=0$: \begin{equation} M_{\rm h}=8.2\times10^{13} M_\odot \left[\frac{M_{\rm BH}}{10^9 M_\odot} \right]^{0.75} \left[ \frac{{\Omega_m}}{{\Omega_m^{\,z}}} \frac{\Delta_{\rm c}} {18\pi^2} \right]^{-1/2} (1+z)^ {-3/2} . \label{eq:sqrt3} \end{equation} In the above relationship, $\Delta_{\rm c}$ is the over-density at virialization relative to the critical density.
For a WMAP5 cosmology we adopt here the fitting formula $\Delta_{\rm c}=18\pi^2+82 d-39 d^2$ (Bryan \& Norman 1998), where $d\equiv {\Omega_m^{\,z}}-1$ is evaluated at the redshift of interest, so that $ {\Omega_m^{\,z}}={{\Omega_m} (1+z)^3}/({{\Omega_m} (1+z)^3+{\Omega_{\Lambda}}+{\Omega_k} (1+z)^2})$. Given the mass of a host halo, we estimate the number density from the Press \& Schechter formalism (Sheth \& Tormen 1999). In this section we assume $\sigma=V_c/\sqrt{3}$, where $V_c$ is the virial circular velocity of the host halo. The results of this experiment are not strongly dependent on this specific assumption; in Section 3 below we discuss different scalings. Kormendy et al. (2011) question a correlation between black holes and dark matter halos (but see Volonteri et al. 2011 for an updated analysis). We notice that in any case Kormendy's argument is not a concern here, as at large masses Kormendy et al. (2011b) suggest that a cosmic conspiracy causes $\sigma$ and $V_c$ to correlate, thus making the link between $M_{\rm BH}$ and $V_c$ adequate. In the Monte Carlo simulation, at fixed halo mass ($M_h$, hence $\sigma$), we derive $M_{\rm BH,\sigma}$ from the adopted $M_{\rm BH}-\sigma$ relation (i.e. depending on the choice of $\alpha$ and $\gamma$), and then we draw the black hole mass from $M_{\rm BH}=M_{\rm BH,\sigma}\times 10^{\Delta \delta}$ with varying values of the scatter $\Delta$. \begin{figure} \includegraphics[width= \columnwidth]{f1.jpg} \caption{Top panel: $M_{\rm BH}-\sigma$ relation at $z=6$, assuming $\alpha=6$, $\gamma=0$, and a scatter of 0.25 dex. Cyan dots: `observable' population in a shallow survey. Blue line: linear fit to this `observable' population, yielding $\alpha=2.7$. Green dots: `observable' population in a pencil-beam survey. Dark green line: linear fit to this `observable' population, yielding $\alpha=4.5$. Red line: fit to the whole population, yielding $\alpha=6$.
Yellow dashed line: $M_{\rm BH}-\sigma$ at $z=0$ (Equation 1 with $\alpha=4$ and $\gamma=0$). Bottom panel: same for $\alpha=8$, $\gamma=-1$.} \label{MS_z6} \end{figure} First, we test a no-evolution case, where we set $\alpha=4$ and $\gamma=0$. We fit, in log-log space, the $M_{\rm BH}-\sigma$ relation of black holes implied by the `observable' population, considering both a shallow and pencil beam survey. In the no evolution case, we find $\alpha_{\rm fit} \simeq 1$ and $\gamma_{\rm fit} \simeq 0.7$ in the `shallow' survey, with almost all `observable' black holes lying {\it above} the $\alpha=4$ and $\gamma=0$ line, suggesting `overmassive' black holes, only because of the mass threshold that was imposed on the sample. In the `pencil beam' survey we find $\alpha_{\rm fit} \simeq 2.5$ and $\gamma_{\rm fit} \simeq -0.2$. In either case, fitting only the `observable' population yields a much shallower slope than that characterizing the whole population. In Figure~\ref{MS_z6} we show a Monte Carlo simulation of the $M_{\rm BH}-\sigma$ relation at $z=6$ one would find assuming $\Delta=0.25$, $\alpha=6$ and $\gamma=0$ (top panel), and $\alpha=8$ and $\gamma=-1$ (bottom panel). In section 3, we will show that these particular choices of $\alpha$ and $\gamma$ are motivated by our attempt to fit the black hole mass function of W10. In the $\alpha=6$ and $\gamma=0$ Monte Carlo simulation, we find that the best fit has $\alpha_{\rm fit} \simeq 2.7 \pm 0.2$ and $\gamma_{\rm fit} \simeq 0.45 \pm 0.04$ for the `shallow' survey. The apparent normalization of the relationship therefore increases by 0.35 dex (all the blue points lie above the yellow line in the top panel of figure~\ref{MS_z6}). So while the underlying population is characterized only by a change in slope (with respect to the $z=0$ relationship), what would be recovered from the `observable' population is a shallower slope and a positive evolution of the normalization (in agreement with point (ii) in \S1). 
We note, additionally, that the smaller the range in $M_{BH}$ that is probed, the more likely it is that the scatter $\Delta$ hides {\it any} correlation, likely explaining the lack of correlation (point (i) in \S1) found by Wang et al. (2010)\footnote{Wang et al. did not attempt any fit to the $M_{\rm BH}-\sigma$ relation. They note that they find significant scatter, extending to over 3 orders of magnitude, and that most of the quasar black hole masses lie above the local relationship. See also Shields et al. (2006) for quasars at $z=3$.}. If we increase the level of scatter ($\Delta$), the slope of the relationships recovered from the Monte Carlo sample becomes progressively shallower. We can repeat the same exercise for, e.g., $\alpha=8$ and $\gamma=-1$: although the underlying population has a much steeper slope and a {\it negative} evolution of the normalization of the $M_{\rm BH}-\sigma$ relation with redshift, the `observable' population in the shallow survey would nevertheless display no evolution at all (blue vs yellow lines in Fig.~\ref{MS_z6}). Summarizing, we find that: (1) selection effects can severely alter the mapping between black hole mass and host galaxy velocity dispersion, leading to observed black hole populations that are more massive than the true distribution; (2) scatter and selection effects can mask correlations between black hole mass and host galaxy properties, leading to observed $M_{\rm BH}-\sigma$ relations that are shallower than the true relation. Although the quantitative results must be taken with caution, the existence of biases towards measuring a positive evolution in the black hole-host correlations induced by selection and scatter is generically a robust result (e.g., Shields et al. 2006; Salviander et al. 2007; Lauer et al. 2007b). 
\section{Impact of evolution of $M_{\rm BH}-\sigma$ relation and scatter on the black hole mass function} We now turn to the mass function of black holes, and how its shape and normalization are affected by the evolution of the $M_{\rm BH}-\sigma$ relation and its scatter. We create theoretical mass functions based on Equation 1 coupled with the Press \& Schechter formalism, exploring how different values of $\alpha$ and $\gamma$ influence their functional form. As discussed in section 2, one can derive relationships between black hole and dark matter halo mass given a relationship between black hole mass and velocity dispersion (Equation 1), a relationship between velocity dispersion ($\sigma$) and asymptotic circular velocity (virial velocity, $V_c$), and the virial theorem. For instance, assuming Equation 1 with $\alpha=4$ and $\gamma=0$, and $\sigma=V_c/\sqrt[]{3}$, one derives Eq.~\ref{eq:sqrt3}, while adopting the relationship between $\sigma$ and $V_c$ proposed by Pizzella et al. (2005) yields: \begin{equation} M_{\rm h}=4.1\times10^{13} M_\odot \left[\frac{M_{\rm BH}}{10^9 M_\odot} \right]^{0.56} \left[ \frac{{\Omega_m}}{{\Omega_m^{\,z}}} \frac{\Delta_c} {18\pi^2} \right]^{-1/2} (1+z) ^{-3/2}. \label{eq:pizzella} \end{equation} To consider the range of possible black hole mass functions, we adopt the two mappings between black hole mass and halo mass provided by Equations~\ref{eq:sqrt3} and \ref{eq:pizzella}. We first consider the resulting black hole mass function when we adopt the local $M_{\rm BH}-\sigma$ relation ($\alpha=4$ and $\gamma=0$), and we will then investigate how the mass function changes if we vary $\alpha$ and $\gamma$. In particular, we will focus on $\alpha=6$ and $\gamma=0$, and $\alpha=8$ and $\gamma=-1$, because, as shown below, a steeper $M_{\rm BH}-\sigma$ relation yields better agreement between theoretical black hole mass functions and W10. 
We can estimate the mass function of black holes by convolving equations~\ref{eq:sqrt3}, \ref{eq:pizzella} (and their possible redshift evolution) with the mass density of dark matter halos with mass $M_{\rm h}$ derived from the Press \& Schechter formalism (Sheth \& Tormen 1999): \begin{equation} \frac{dN}{d \log M_{\rm BH}}=\frac{dN}{d \log M_{\rm h}} \frac{d\log M_{\rm h}}{d\log M_{\rm BH}}. \end{equation} We assume for the time being that black holes exist in all galaxies. The effect of dropping this assumption is discussed in detail in Section 4. In figure~\ref{testz6_PS_w2} we compare the mass function derived using this technique to the mass function proposed by W10, based on the luminosity function of quasars selected by the Canada-France High-z Quasar Survey, assuming a duty cycle of 0.75 (corresponding to the fraction of black holes that are active; we will refer to this quantity as the active fraction, AF, below) and assuming a lognormal distribution of Eddington fractions, $f_{\rm Edd}$, centered at 0.6 with a standard deviation of 0.30 dex \citep[see also][]{Shankar2010a}. W10 further assume the same fraction of obscured AGN as observed at lower redshift ($z=0-2$, Ueda et al. 2003), and correct for Compton-thick AGN following Shankar et al. (2009). Note that the evolution of the fraction of obscured or Compton-thick AGN (currently not well constrained at high redshift, but see Treister et al. 2011) can strongly influence the results by hiding part of the black hole population (see section 4). \begin{figure} \includegraphics[width= \columnwidth]{f2.jpg} \caption{Mass function of black holes. Black dots: Willott et al 2010. Orange stars: $M_{\rm BH}-M_*$ + Stark et al. 2009 (see Willott et al. 2010 for details). Blue long dashed curve: Press \& Schechter + equation~\ref{eq:sqrt3} ($\alpha=4$). Dark green short dashed curve: Press \& Schechter + equation~\ref{eq:pizzella} ($\alpha=4$). 
} \label{testz6_PS_w2} \end{figure} If the $M_{\rm BH}-\sigma$ relation evolved with redshift as proposed by \cite{Woo2008}, $\gamma=3.1$, the number density of black holes in the mass range $10^7-10^9$ M$_\odot$ would be $\simeq 0.5$ and $10^{-4}$ comoving Mpc$^{-3}$ respectively (the curve corresponding to this {\it very} strong evolution is not shown in the figure). We note, however, that the sample analyzed by \cite{Woo2008} is at $z\approx 0.4$, and there is no guarantee that such evolution holds at higher redshift. In all cases the analytical models greatly over-estimate the mass function at masses $M_{\rm BH}<10^9$ $M_\odot$, and possibly at all masses if we add the suggested {\it positive} redshift evolution of the $M_{\rm BH}-$galaxy relationships. In figure~\ref{testz6_PS_w1} we show instead the mass function we find when we assume different $\alpha$ and $\gamma$ values, with and without scatter. We include scatter, at the level of $\Delta=0.5$, by performing a Monte Carlo simulation, where for each black hole mass we create 500 realizations of the host mass. The W10 black hole mass function can be reproduced by a simple model that has $\alpha=8$ and $\gamma=-1$, if little or no scatter in the black hole properties with galaxy mass is present. We see that as $\alpha$ increases the mass function becomes shallower. At fixed black hole mass, above the `hinge' of Equation 1 (200 $\,{\rm km\,s^{-1}}$) black holes will be found in comparatively less massive galaxies, which have a higher number density. On the other hand, below the `hinge', the host of a black hole of a given mass would be more massive than in the $\alpha=4$ case, hence with a lower space density. This effect makes the mass function shallower. Any decrease in $\gamma$ tends to shift the black hole mass function to lower number densities at all masses. 
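The chain-rule convolution of this section (halo mass function times the Jacobian $d\log M_{\rm h}/d\log M_{\rm BH}$) can be sketched numerically. In the minimal version below, a hypothetical Schechter-like halo mass function stands in for Sheth \& Tormen at $z=6$ (assumed normalization and knee), and the slowly varying overdensity factor of Equation~\ref{eq:pizzella} is dropped for brevity.

```python
import numpy as np

def halo_mass_function(log_mh):
    """Toy Schechter-like dN/dlog10(M_h): hypothetical normalization and
    knee, standing in for the Sheth & Tormen mass function at z = 6."""
    m = 10.0 ** (log_mh - 12.5)
    return 1e-2 * m ** -0.9 * np.exp(-m)

def log_mh_of_log_mbh(log_mbh, z=6.0):
    """Pizzella et al. scaling, without the overdensity factor:
    log M_h = log(4.1e13) + 0.56 (log M_BH - 9) - 1.5 log(1 + z)."""
    return np.log10(4.1e13) + 0.56 * (log_mbh - 9.0) - 1.5 * np.log10(1.0 + z)

log_mbh = np.linspace(6.0, 10.0, 81)
log_mh = log_mh_of_log_mbh(log_mbh)
jacobian = np.gradient(log_mh, log_mbh)        # constant, = 0.56
bh_mf = halo_mass_function(log_mh) * jacobian  # dN/dlog10(M_BH)
```

For a power-law mapping the Jacobian is just the constant exponent (0.56 here), so the black hole mass function inherits the shape of the halo mass function, stretched in log mass.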
However, a significant amount of scatter increases the number density of observable black holes, as shown in the bottom panel of Figure~\ref{testz6_PS_w1}. This effect has been discussed extensively by \cite{Lauer2007b}, and we refer the reader to this paper for an exhaustive demonstration of its consequences. \cite{Lauer2007b} start from the luminosity function of galaxies, rather than the mass function of dark matter halos, and the fact that the most luminous galaxies are in the exponential part of the luminosity function implies that the scattering of very high-mass black holes ($M_{\rm BH}\simeq 10^9$ $M_\odot$) into lower mass galaxies has a stronger effect than the scattering of low-mass black holes into larger galaxies. A similar conclusion applies to the mass function of dark matter halos. Additionally, since the halo mass function becomes exponential at lower masses at high redshift, the effect of scatter on the shape of the black hole mass function becomes noticeable already at $M_{\rm BH}\simeq 10^7$ $M_\odot$. Including scatter, the simple model with $\alpha=8$ and $\gamma=-1$ is now a much poorer fit to the W10 mass function, but it still reproduces their slope very well. Summarizing, we find that the local $M_{\rm BH}-\sigma$ relation ($\alpha=4$ and $\gamma=0$) is unable to reproduce the W10 results, even more so when a level of scatter compatible with observational results ($\Delta=0.25-0.5$) is included. A steeper $M_{\rm BH}-\sigma$ relation, possibly with a negative evolution (e.g., $\alpha=8$ and $\gamma=-1$), provides a better fit, although high levels of scatter require an even more dramatic steepening of the slope in order to match the mass function proposed by W10. 
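The Eddington-type bias described by \cite{Lauer2007b} can be isolated with a toy convolution: scattering a steeply falling, exponential-tailed mass function with a 0.5 dex lognormal kernel inflates the number density on the tail by orders of magnitude, while barely changing the power-law regime. The knee position and units below are arbitrary illustrative choices.

```python
import numpy as np

log_m = np.linspace(6.0, 11.0, 1001)
m = 10.0 ** (log_m - 8.0)
phi_true = m ** -0.9 * np.exp(-m)   # toy mass function, knee at 10^8 (assumed)

# Lognormal scatter = convolution with a Gaussian kernel in log mass.
dx = log_m[1] - log_m[0]
x_kernel = np.arange(-400, 401) * dx            # +/- 2 dex support
kernel = np.exp(-0.5 * (x_kernel / 0.5) ** 2)   # 0.5 dex scatter
kernel /= kernel.sum()
phi_obs = np.convolve(phi_true, kernel, mode="same")

i_tail = int(np.argmin(np.abs(log_m - 9.5)))    # on the exponential tail
i_pl = int(np.argmin(np.abs(log_m - 7.0)))      # in the power-law regime
boost_tail = phi_obs[i_tail] / phi_true[i_tail]
boost_pl = phi_obs[i_pl] / phi_true[i_pl]
```

The asymmetry arises because far more objects sit just below the knee than on the tail, so up-scattering dominates wherever the intrinsic function falls exponentially.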
While the direct comparison with W10 strongly depends on the limitations of our empirical model, the relationship between increased scatter in the $M_{\rm BH}-\sigma$ and increased number density of black holes is a robust result that directly follows from the analysis presented in \cite{Lauer2007b}. \begin{figure} \includegraphics[width= \columnwidth]{f3.jpg} \caption{Mass function of black holes. Top panel: we vary the slope ($\alpha$) and normalization ($\gamma$) of the $M_{\rm BH}-\sigma$ relation, and assume no scatter in the relationship ($\Delta=0$). Bottom panel: we vary the slope of the $M_{\rm BH}-\sigma$ relation, and include scatter in the relationship ($\Delta=0.25$ dex or $\Delta=0.5$ dex). Black dots: Willott et al 2010. All curves assume Press \& Schechter + equation~\ref{eq:sqrt3}, with varying $\alpha$ as labelled in the Figure.} \label{testz6_PS_w1} \end{figure} \section{Occupation fraction of quiescent and active black holes} In section 3 we demonstrated how we derive the theoretical mass function of black holes from the mass function of their host halos and the relation between black hole and halo masses \citep[e.g.,][]{Haiman1998,Wyithe2002}. However, when we convolve equations~\ref{eq:sqrt3} and \ref{eq:pizzella} with the mass density of dark matter halos to derive a black hole mass function, we have to make a conjecture about the fraction of halos of a given mass which host a black hole, the occupation fraction (OF): \begin{equation} \frac{dN}{d \log M_{\rm BH}}={\rm OF}(M_{\rm h},z)\frac{dN}{d \log M_{\rm h}} \frac{d\log M_{\rm h}}{d\log M_{\rm BH}}. \end{equation} In the top panel of Fig.~\ref{testz6_PS_w3} we show black hole mass functions resulting from different choices of the OF. It is clear that a decreasing OF can compensate for an increased scatter in shaping the black hole mass function. As a result of this degeneracy, we can reproduce the W10 mass function for a range of values for the OF and scatter. 
Even the present-day slope, $\alpha=4$, combined with a sensible scatter can fit the data, at the cost, however, of making the presence of a black hole (regardless of whether it shines as a quasar) a very rare instance. We note that it is conceivable that the OF is not constant over all host masses, and a non-constant OF is expected particularly at high redshift, close to the epoch of galaxy and black hole formation \citep{Menou2001}. At face value, the W10 data can be reproduced by OF$=M_{\rm h}/(5\times10^{13}\,M_\odot)$ for $\alpha=4$, $\gamma=0$ and $\Delta=0.25$, or OF$=(M_{\rm h}/10^{13}\,M_\odot)^{1.25}$ for $\alpha=4$, $\gamma=0$ and $\Delta=0$. Such occupation fractions are several orders of magnitude lower than predicted by models of the formation and cosmic evolution of black holes \citep[e.g.,][]{Volonteri2010}, and we therefore still prefer solutions with steeper slopes. We will explore self-consistently the OF and its relationship with the establishment of the $M_{\rm BH}-\sigma$ relation, as a function of black hole formation and growth physics, in a future paper. Throughout this paper we have compared our theoretical mass function of black holes to constraints derived indirectly from the luminosity function of quasars in W10, rather than from direct black hole mass measurements (which are unfeasible at $z=6$). Empirically, one can derive the mass function of black holes from the luminosity function of quasars and a relation between black hole mass and quasar luminosity \citep[e.g.,][]{Shankar2010a,Shankar2010b,Willott2010}: \begin{equation} \frac{dN}{d \log M_{\rm BH}}=\frac{dN}{d \log L} \frac{d \log L}{d \log M_{\rm BH}}. 
\end{equation} For instance, we can estimate the mass function of black holes from the bolometric luminosity function of radio-quiet quasars \citep{Hopkins2007} assuming (1) that all black holes are active, and (2) that all black holes radiate at the same Eddington fraction, $f_{\rm Edd}$ (based on various observational results we expect high redshift quasars to radiate close to the Eddington limit, see W10 and references therein). The mass of a black hole powering a quasar with luminosity $L$ is then: \begin{equation} \frac{M_{\rm BH}}{10^9 M_\odot}=3\times 10^{-14}\frac{1}{f_{\rm Edd}}\frac{L}{L_\odot}, \label{eq:fedd} \end{equation} and one can trivially turn the luminosity function into a mass function. As discussed by W10, their mass function is derived assuming similar values of the Eddington ratio and the active fraction using a more accurate technique \citep[see][for details]{Shankar2010a}. Our simple approach provides results consistent with W10 if we assume a constant $f_{\rm Edd}=1$. When we deconvolve the luminosity function of quasars to derive the black hole mass function we have to assume an active fraction (AF): \begin{equation} \frac{dN}{d \log M_{\rm BH}}={\rm AF}(M_{\rm BH},z) \frac{dN}{d \log L} \frac{d \log L}{d \log M_{\rm BH}}, \end{equation} where we indicate that both the active fraction and the Eddington ratio can be functions of the black hole (and host) properties, and of cosmic time. The intrinsic shape of the mass function changes as a function of AF and $f_{\rm Edd}$, and, in particular, any departure from the assumptions $f_{\rm Edd}=1$ and AF$=1$ (which are upper limits to both quantities) will drive the mass function of black holes `up', that is, it will increase the number of black holes at a given mass. We therefore have to bear in mind that the semi-empirical mass function derived by W10 might be underestimating the true mass function. 
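The numerical coefficient in Equation~\ref{eq:fedd} is just Eddington bookkeeping, and can be checked directly with standard constants ($L_{\rm Edd}\simeq1.26\times10^{38}$ erg s$^{-1}$ per solar mass, $L_\odot\simeq3.85\times10^{33}$ erg s$^{-1}$):

```python
L_EDD_PER_MSUN = 1.26e38   # erg/s per solar mass (standard value)
L_SUN = 3.846e33           # erg/s

def mbh_from_luminosity(l_bol_lsun, f_edd=1.0):
    """Black hole mass (in Msun) powering a quasar of bolometric
    luminosity l_bol_lsun (in units of L_sun) at Eddington fraction f_edd."""
    return l_bol_lsun * L_SUN / (f_edd * L_EDD_PER_MSUN)

# Recover the quoted coefficient: M_BH / 1e9 Msun ~ 3e-14 (L / L_sun) / f_Edd
coeff = mbh_from_luminosity(1.0) / 1e9
```

The recovered coefficient is $\simeq3\times10^{-14}$, matching the equation in the text; lowering $f_{\rm Edd}$ below unity scales every inferred mass up by $1/f_{\rm Edd}$.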
The lower panel of Fig.~\ref{testz6_PS_w3} shows how the mass functions one derives from a luminosity function are modified by an AF or $f_{\rm Edd}$ that depend on the BH mass. For instance, we can trivially assume that $f_{\rm Edd}=M_{BH}/ 10^8$ $M_\odot$~ for $M_{BH}< 10^8$ $M_\odot$, and $f_{\rm Edd}=1$ otherwise. Then $M_{\rm BH}=10^8M_\odot (L/3\times10^{12}L_\odot)^{0.5}$, using equation~\ref{eq:fedd} in the last expression. In the same figure we also show the effect of a mass dependent AF, where we adopt the simple expression AF$=M_{BH}/ 10^8$ $M_\odot$~ for $M_{BH}< 10^8$ $M_\odot$, and AF$=1$ otherwise. These specific forms of the mass dependence of AF and $f_{\rm Edd}$ are motivated by the expectation that the most massive black holes at the earliest cosmic times are all actively and almost constantly accreting \citep{Haiman2004,Shapiro2005,Volonteri2005,Volonteri2006}. Such functional forms are here only used to prove that non constant accretion rates modify the expectations in terms of mass function and active fraction, but the expressions we adopt should be considered representative of any class of accretion rates and active fractions that are not constant, rather than actual predictions. If the Eddington ratio and/or AF are a function of halo or black hole mass, then what one derives from flux-limited surveys will be dependent on a combination of various properties. Figure~\ref{active_alpha} shows simple examples. We build a sample of black holes and hosts by performing a Monte Carlo sampling as described in Section 3 for two $M_{\rm BH}-\sigma$ relations ($\alpha=4$ and $\alpha=6$, both with $\gamma=0$) each with a scatter of 0.25 dex. 
We assign to each black hole a luminosity by assuming either a constant $f_{\rm Edd}=1$, or $f_{\rm Edd}=M_{BH}/ 10^8$~$M_\odot$~ for $M_{BH}< 10^8$ ~$M_\odot$~ and $f_{\rm Edd}=1$ for $M_{BH}> 10^8$ ~$M_\odot$ (we note that the assumption that $f_{\rm Edd}$ scales with the halo mass, rather than the black hole mass, yields very similar results). Even for a constant Eddington ratio, scatter in the $M_{\rm BH}-\sigma$ relation implies that at fixed halo mass the black hole mass, and hence its luminosity, is not uniquely determined. The `observed' active fraction of black holes will therefore be different from the `intrinsic' active fraction, which in this exercise was set to unity. A comparison between the left and right panels underscores how the $M_{\rm BH}-\sigma$ relation itself shapes the fraction of active black holes which are detected in optical imaging surveys. We notice that obscuration plays a role similar to the occupation or active fraction. If, say, all black holes are active, but a large fraction are Compton thick, then a large population of obscured quasars would be unaccounted for in optical quasar surveys. There is indeed evidence for a large fraction of high redshift quasars being obscured \citep[e.g.,][]{lafranca2005,Treister2009,Treister2010}. \begin{figure} \includegraphics[width= \columnwidth]{f4.jpg} \caption{Top: theoretical mass function of black holes derived from the mass function of dark matter halos (Eq.~5). Black dots: Willott et al 2010. Other curves as marked in the figure (from top to bottom). Willott et al. results can be reproduced for a range of possible assumptions on the relation between holes and halos. Bottom: empirical mass function of black holes derived from the luminosity function of quasars (Eq.~7 and Eq.~8, adopting the bolometric luminosity function of Hopkins et al. 2007). Triangles: $f_{\rm Edd}=1$ and $AF=1$. 
The blue solid and red dashed curves show how mass-dependent luminosities or active fractions can modify the shape of the mass function one would infer. } \label{testz6_PS_w3} \end{figure} \begin{figure} \includegraphics[width= \columnwidth]{f5.jpg} \caption{Fraction of active black holes that can be detected in a survey with given luminosity and volume limit. Red histograms (leftmost side of each panel): black holes with $L_{\rm min}=10^{44}$ erg/s and $n_{\rm min}=10^{-5}$ Mpc$^{-3}$ (example of a pencil beam survey). Black histograms (rightmost side of each panel): black holes with $L_{\rm min}=10^{46}$ erg/s and $n_{\rm min}=10^{-9}$ Mpc$^{-3}$ (example of a shallow survey). Left panels: $\alpha=4$; $\Delta=$0.25 dex. Right panels: $\alpha=6$; $\Delta=$0.25 dex. Top panels: $f_{\rm Edd}=1$. Bottom panels: $f_{\rm Edd}=M_{BH}/ 10^8$ $M_\odot$.} \label{active_alpha} \end{figure} \section{Accretion efficiency and host mass} In the previous section we discussed how the `observed' fraction of active black holes, which goes into determining the `observed' mass function, depends on the $M_{\rm BH}-\sigma$ relationship and on the link between accretion rate and black hole-host masses. In this section, we explore the consequences and likelihood of a galaxy mass-dependent black hole accretion rate. This hypothesis is plausible, as the gas supply, especially at high redshift, is likely dependent on the environment and mass of the host. For instance, cold gas that flows rapidly to the center of galaxies from filaments around haloes plays a major role in the buildup of massive galaxies at high redshift \citep{Brooks2009,Governato2009}, with a transition expected to occur when a galaxy has mass above $10^{11}$$M_\odot$, where gas is shocked before it can reach the galaxy's disk. Cold gas flowing into halos along large-scale structure filaments may, however, be dense enough to penetrate the shock front and deliver cold gas to the galaxy. 
Galaxies that form within a gas-rich filament will accrete gas from this cold flow and grow substantially before the filament dissipates. These galaxies embedded in filaments are expected to be high peaks of the density field, hence among the most massive at early times. Additionally, as discussed in section 3, instead of an overall normalization evolution, the link between black holes and their hosts might be better explained by an evolution in the slope of the $M_{\rm BH}-\sigma$ relationship. We are not claiming here that the evolution has the exact form that we use in this paper (Equation~\ref{eq:MS_z}). We here discuss a possible physical scenario that can lead to a steeper $M_{\rm BH}-\sigma$ relation. In the following toy model we just explore what physical process could drive the establishment of a given $M_{\rm BH}-\sigma$ relation at a given redshift ($z=6$ in this particular case). In other words, {\it if} the slope of the $M_{\rm BH}-\sigma$ relation at $z=6$ has a given $\alpha$, what can be the driver of such a correlation? Let us assume that all black holes start with the same initial mass, $M_0$, and let them grow until $z=6$ ($t_{H}=0.9$ Gyr). If at $z=6$ the slope of the $M_{\rm BH}-\sigma$ relationship is $\alpha$, and we assume that on average black holes accrete at a fraction $\langle f_{\rm Edd} \rangle$ of the Eddington rate, then we can relate this average accretion rate to the mass of the hole, $M_{\rm BH,6}$, and the velocity dispersion, $\sigma_6$, of the host at $z=6$, as follows: \begin{equation} M_{\rm BH,6}=M_0 \exp\left({\langle f_{\rm Edd} \rangle\frac{t_H}{t_{\rm Edd}}\frac{1-\epsilon}{\epsilon}}\right), \label{eq:BHdeltat} \end{equation} where $t_{\rm Edd}=M_{\rm BH} c^2/L_{\rm Edd}=\frac{\sigma_T \,c}{4\pi \,G\,m_p}= 0.45$ Gyr ($c$ is the speed of light, $\sigma_T$ is the Thomson cross section, $m_p$ is the proton mass), and the radiative efficiency, $\epsilon \simeq 0.1$. 
We can also express the relationship between black hole mass $M_{\rm BH,6}$ and $\sigma_6$ at $z=6$ as: \begin{equation} \sigma_6=200 \,{\rm km\,s^{-1}} \left( \frac{M_{\rm BH,6}}{10^8 \,{\rm M_\odot}} \right)^{1/\alpha}, \end{equation} so that the average accretion rate for a black hole that grows within a galaxy that has a velocity dispersion $\sigma_6$ at $z=6$ is then: \begin{equation} \langle f_{\rm Edd} \rangle=\frac{t_{\rm Edd}}{t_H} \frac{\epsilon}{1-\epsilon} \ln \left[ \left(\frac{10^8 \,{\rm M_\odot}}{M_0}\right)\left( \frac{\sigma_6}{200 \,{\rm km\,s^{-1}}} \right)^\alpha \right]. \label{alphabh} \end{equation} Equations 9--11 are based on the initial conditions and on the properties at $z=6$ only. The accretion rate is in principle galaxy-mass dependent, and specifically dependent also on the mass growth of the host. However, for the sake of simplicity it is here set to the average over the integration time. The average accretion rate of Eq.~11 is shown in Figure~\ref{f_Edd_fit} for $\alpha=4$, $\alpha=6$ and $\alpha=8$. Figure~\ref{f_Edd_fit} implies that if only black holes in hosts above a certain velocity dispersion, or mass, or depth of the potential well, can accrete efficiently, it is only natural to expect a different slope of the $M_{\rm BH}-\sigma$ relationship, depending on the exact threshold. One possibility is that black hole growth is indeed inefficient in low-mass galaxies at early cosmic times, because of the fragile environment where feedback can be very destructive \citep{Milos2009,Alvarez2009,Park2010,Johnson2010}. We note that all these possible scalings of accretion rate with halo mass are consistent with the independence of $f_{\rm Edd}$ from luminosity (and black hole mass) found by W10, as all their black holes have masses above $10^8\,{\rm M_\odot}$, where it is indeed expected that the accretion rate can reach $f_{\rm Edd}\simeq1$, as one can infer from Figure~\ref{f_Edd_fit}. 
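Equation~\ref{alphabh} is simple to evaluate; the sketch below does so for a few velocity dispersions, assuming an illustrative seed mass $M_0=100\,M_\odot$ (the qualitative behaviour, not the seed choice, is the point):

```python
import numpy as np

T_EDD = 0.45   # Eddington time in Gyr, as in the text
T_H = 0.9      # time available for growth up to z = 6, in Gyr
EPS = 0.1      # radiative efficiency

def mean_f_edd(sigma6, alpha, m0=100.0):
    """Average Eddington fraction needed for a seed of mass m0 (Msun) to
    land on an M_BH-sigma relation of slope alpha by z = 6;
    the 100 Msun seed is an illustrative assumption."""
    growth = np.log((1e8 / m0) * (sigma6 / 200.0) ** alpha)
    return (T_EDD / T_H) * (EPS / (1.0 - EPS)) * growth

sigma6 = np.array([100.0, 200.0, 300.0])   # km/s
f4 = mean_f_edd(sigma6, alpha=4.0)
f8 = mean_f_edd(sigma6, alpha=8.0)
# Steeper alpha -> less efficient accretion in shallow potentials and
# more efficient accretion in deep ones; the curves cross at 200 km/s.
```

For $\sigma_6=200\,{\rm km\,s^{-1}}$ this gives $\langle f_{\rm Edd}\rangle\simeq0.77$ independently of $\alpha$, consistent with the near-Eddington accretion inferred for the most massive $z=6$ black holes.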
This exercise is not meant to suggest that the typical accretion rate has the exact value of Eq.~\ref{alphabh}, but that {\it if} accretion is more efficient in more massive halos, then $\alpha$ increases, while if the accretion rate is mass independent, e.g. is constant in all hosts, then $\alpha$ tends to lower values. Equation~\ref{alphabh} simply demonstrates mathematically that in order to achieve a very steep $M_{\rm BH}-\sigma$ relation, accretion in massive haloes has to be more efficient than in small haloes (`selective accretion'), and it should not be used to make predictions about the accretion/growth history of black holes. To test this suggestion, dedicated simulations that can resolve the growth of black holes as a function of the host mass in a cosmological context are required. However, simulations that explore the cosmic evolution of accretion efficiency, taking into consideration feeding and feedback, as a function of host mass at sufficient resolution are not currently available. The experiment that is closest in spirit to what we propose was performed by \cite{Pelupessy2007}\footnote{Indirectly, similar information can be extracted from \cite{Sijacki2009}, although all their information on the accretion rate is cast in terms of black hole masses, rather than host properties.}, who suggest that the more massive the host halo, the higher the Eddington fraction. A very simple fit from their simulation results at $z=6$ (Figure 7 in Pelupessy et al. 2007) gives: \begin{equation} f_{\rm Edd} \approx\frac{M_{\rm h}}{10^{11} \,{\rm M_\odot}}. \label{PP} \end{equation} This equation is shown in Figure~\ref{f_Edd_fit} for different hosts, where we assumed $\sigma=V_c/\sqrt[]{3}$, and that the accretion rate is capped at the Eddington limit ($f_{\rm Edd}=1$); in the bottom panel the black hole mass uses Equation~\ref{eq:BHdeltat}. 
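Inserting the scaling of Equation~\ref{PP} into Equation~\ref{eq:BHdeltat} (with the rate frozen at its $z=6$ value, as in the text, and an illustrative $100\,M_\odot$ seed) makes the steepness quantitative: a factor of 10 in halo mass maps onto many orders of magnitude in black hole mass.

```python
import math

T_EDD, T_H, EPS = 0.45, 0.9, 0.1   # Gyr, Gyr, radiative efficiency

def f_edd_pelupessy(m_halo):
    """Eddington fraction scaling with halo mass (Msun), capped at 1."""
    return min(m_halo / 1e11, 1.0)

def mbh_grown(m_halo, m0=100.0):
    """Black hole mass after t_H of accretion at the halo-dependent rate;
    the 100 Msun seed mass is an illustrative assumption."""
    f = f_edd_pelupessy(m_halo)
    return m0 * math.exp(f * (T_H / T_EDD) * (1.0 - EPS) / EPS)

ratio_bh = mbh_grown(1e11) / mbh_grown(1e10)   # black hole mass ratio
# A factor of 10 in halo mass yields a factor > 10^6 in black hole mass.
```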
This scaling leads to very steep relationships between black holes and hosts, as shown in the bottom panel of Figure~\ref{f_Edd_fit}, as the black hole mass depends exponentially on the halo mass (if we insert equation 12 into equation 9) or on the time evolution of the halo mass, if we insert equation 12 into the expression for the accretion rate in Eddington units: $\dot M=f_{\rm Edd}(M/t_{\rm Edd})[(1-\epsilon)/\epsilon]=(M_{\rm h}(t)/10^{11}\,M_\odot)(M/t_{\rm Edd})[(1-\epsilon)/\epsilon]$. To integrate this equation properly one needs to know the growth rate of the host dark matter halo mass as a function of time in a $\Lambda$CDM cosmology, $M_{\rm h}(t)$. Such an exercise requires either merger trees that track the cosmic history of dark matter halos, or an analytical fit to their growth histories, and it is beyond the scope of this paper. {\it Selective} accretion, modulated by the host's potential and environment, is a possible key to explaining a shallow high-redshift black hole mass function without requiring an unrealistically low occupation fraction. At late cosmic times we expect the interaction of black holes and galaxies to become more closely linked to baryonic processes (e.g., bulge formation) rather than being related to the halo mass. For instance, secular effects might at late times decouple the properties of the central stellar-dominated region from the overall dark matter halo. As another example, gas accretion through cold flows in filaments is expected to occur only at early times. We can, for instance, see a parallel between the black hole-halo relationship and the baryon-halo relationship. It is expected that at very high redshift most halos possess a baryon fraction of order the cosmic baryon fraction, while at later times the baryonic content evolves under the effect of baryonic physics. 
In the same way at late times we expect black hole growth to be more closely related to the baryonic content of a galaxy, and hence less ``selectively" linked to the host halo (cf. Volonteri et al. 2011). \begin{figure} \includegraphics[width= \columnwidth]{f6.jpg} \caption{Top: Eddington fraction as a function of halo velocity dispersion, derived from equation~\ref{alphabh} (black solid: $\alpha=4$; blue long dashed: $\alpha=6$; red short dashed: $\alpha=8$) and equation ~\ref{PP} (green dot-dashes). Bottom: black hole mass versus $\sigma$ at $z=6$ assuming $f_{\rm Edd}$ is a function of halo properties, and letting the holes accrete for $t_{H}=0.9$ Gyr. } \label{f_Edd_fit} \end{figure} \section{Summary and conclusions} Recent observational results that focus on high-redshift black holes provide seemingly conflicting results. In particular: \begin{itemize} \item[(i)] there seems to be little or no correlation with velocity dispersion, $\sigma$ in the brightest radio-selected quasars, \item[(ii)] typically black holes are `over-massive' at fixed galaxy mass/velocity dispersion compared to their $z=0$ counterparts, \item[(iii)] clustering and analysis of the mass/luminosity function suggest that either many massive galaxies do not have black holes, or these black holes are less massive than expected. \end{itemize} To try and understand these observational results, we explore the role of scatter and observational biases in recovering the intrinsic properties of the black hole population. We generate Monte Carlo realizations of the $M_{\rm BH}-\sigma$ relation at $z=6$, varying the slope and normalization, and select `observable' systems from these samples, considering either `shallow' or `pencil beam' surveys. We test how well we can recover the parameters of the $M_{\rm BH}-\sigma$ relation from the `observable' systems only. 
We then create theoretical mass functions of black holes from the mass function of halos and the $M_{\rm BH}-\sigma$ relation, and test what assumptions can reproduce the mass function derived by W10. Our techniques are very simplified and we use empirical correlations that are not guaranteed to hold at all masses and redshifts. Therefore one should not interpret our results as solutions to the three conflicting points mentioned above, but rather regard them as a way of understanding how different physical parameters may affect black hole related quantities and their measurements. Our results can be summarized as follows: \begin{itemize} \item Scatter and selection biases can hide the intrinsic correlations between holes and hosts. When selecting within a small range in black hole and galaxy masses, at the high-mass end, the scatter washes out correlations (see point (i) above), and most of the selected systems will tend to lie above the underlying correlation. The correlations recovered from `observable' sub-samples of the whole population can therefore suggest positive evolution even when the underlying population is characterized by no or negative evolution. \item The slope and normalization of the local $M_{\rm BH}-\sigma$ correlation are unable to produce a black hole mass function compatible with W10, as the theoretical mass function greatly overestimates the density of black holes with $M_{\rm BH}<10^8$ $M_\odot$. The discrepancy can be minimized if very few halos (of order 1/1000) host a massive black hole or an AGN at $z=6$, or if most AGN at these redshifts are obscured. \item If the $M_{\rm BH}-\sigma$ correlation were steeper at $z=6$, then at fixed black hole mass, high-mass black holes would reside in comparatively less massive galaxies than in the $\alpha=4$ case. Their number density is therefore increased. Vice versa, low-mass black holes would be hosted in comparatively larger galaxies (compare red and yellow lines in Figure~1) with a lower space density. 
This effect helps reproduce the mass function of $z=6$ black holes proposed by W10. \item On the other hand, scatter in the $M_{\rm BH}-\sigma$ relation, at the level of what is observed locally, exacerbates the discrepancy, as it increases the number density of black holes at $M_{\rm BH}>10^7$ $M_\odot$. Any positive evolution of the $M_{\rm BH}-\sigma$ relation makes this discrepancy worse. \item Analysis of AGN samples might be underestimating the black hole mass function at low masses if the active fraction or luminosity are a function of host or black hole mass. \end{itemize} In the near future the synergy of {\it JWST} and {\it ALMA} can zoom in on quasars and their hosts, respectively, informing us of their relationship and how the $M_{\rm BH}-\sigma$ relation is established, or how the accretion properties depend on the black hole or halo mass. In the near-IR, {\it JWST} will have the technical capabilities to detect quasars at $z\mathrel{\rlap{\lower 3pt\hbox{$\sim$}} \raise 2.0pt\hbox{$>$}} 6$ down to a mass limit as low as $10^5-10^6$ $M_\odot$, owing to its large field of view and high sensitivity. At the expected sensitivity of {\it JWST}, $\simeq 1$ nJy, almost $7\times 10^3$ deg$^{-2}$ sources at $z> 6$ should be detected \citep{Salvaterra07}. At the same time, the exquisite angular resolution and sensitivity of {\it ALMA} can be used to explore black hole growth up to high redshift even in galaxies with high obscuration and active star formation. To date the best studies of the hosts of $z\simeq 6$ quasars have been performed at cm-wavelengths \citep{Walteretal2004,Wang2010}. The best-studied case is J1148+5251 at $z=6.42$. The host has been detected in thermal dust, non-thermal radio continuum, and CO line emission \citep{Bertoldi2003,Carilli2004,Walteretal2004}. {\it ALMA} will be able to detect the thermal emission from a source like J1148+5251 in a few seconds at sub-kpc resolution \citep{Carilli2008}. 
On a similar time-frame, Dark-Energy-oriented surveys (e.g., DES, LSST) will provide an enormous amount of quasar data as ancillary science. Coupling the information we derive from these extremely large yet shallow surveys with that derived from deep pencil-beam surveys will undoubtedly deepen our understanding of the growth of high-redshift black holes. In the meantime, we need to develop dedicated cosmological simulations of black hole formation and early growth that can aid the interpretation of these data. The suggestion that the accretion rate of massive black holes depends on their environment (through the host halo and its cosmic bias) must be tested with cosmological simulations that implement physically-motivated accretion and feedback prescriptions. We also need to derive predictions for the occupation fraction of black holes in galaxies based on black hole formation models (Bellovary et al. in prep.). This will be a significant improvement over current simulations of black hole cosmic evolution that typically place black holes in halos growing above some threshold mass, $\sim 10^{10}$ $M_\odot$, leading to a trivial occupation fraction function. There is no strong physical reason to believe that all and only galaxies with mass $>10^{10}$ $M_\odot$ host massive black holes in their centers. In this paper we focused on the very high-redshift Universe, $z\simeq 6$. Although this redshift range is not a special place, the concurrence of theoretical arguments and observational constraints allows us to make simplifying assumptions that are not expected to be valid at later times. For instance, a timescale argument requires black holes to grow fast to reach the masses probed by current luminosity functions. 
In turn, this argument, coupled with current observational constraints, suggests that the most luminous quasars accrete close to Eddington, and that both the active and occupation fractions must be of order unity at the high-luminosity, high-mass end. This is not true at $z=2$, where more variables come into play, making the analysis less constraining. An example of a detailed study that connects the mass function of black holes derived from the Press \& Schechter formalism to that derived from the luminosity function is presented in Croton (2009). \section*{Acknowledgments} We thank K. G\"ultekin, T. Lauer and D. Richstone for insightful comments on the manuscript. MV acknowledges support from SAO Award TM9-0006X and NASA award ATP NNX10AC84G. DPS acknowledges support from the STFC through the award of a Postdoctoral Research Fellowship.
# Math Help - Vector Calculus - Stokes and Divergence theorem

1. ## Vector Calculus - Stokes and Divergence theorem

Evaluate the surface integral

integral over S of (curl(f)) . dS

where S is the open surface of the hemisphere x^2+y^2+z^2=a^2, z>0, with the outward normal chosen, and f is the vector field:

f = (x+1)y i + y^2 j + 3x^2 y^3 z k

using
a) Stokes' theorem
b) Gauss' theorem

2. Okay, do you know what "Stokes' theorem" and "Gauss' theorem" are? If not, look them up! What do they say?

3. I know what they are, but whenever I try to apply them to a question I get it wrong, so I wanted to see a method to see what I'm doing wrong.

4. Originally Posted by staxxyp
I know what they are, but whenever I try to apply them to a question I get it wrong, so I wanted to see a method to see what I'm doing wrong.

I imagine what HallsofIvy was implying was that these problems require a fairly straightforward application of the theorems. That is, you have the closed surface and a given vector; all that is left is to apply the definition of the theorems. So what have you done? Where are you stuck? If you show us some work we can show you where you went wrong! If you just want examples of how to typically solve these problems, use google!
\section{Introduction} It has long been speculated that the vast energy of supernova remnants (SNRs) is tapped to produce the bulk of Galactic cosmic rays. Those SNRs interacting with molecular clouds are excellent targets to detect evidence of cosmic ray acceleration. The nearby dense gas acts as a target for hadronic cosmic rays, producing $\gamma$-rays via the decay of neutral pions (e.g., Drury et al. 1994). Interest in cosmic rays from interacting SNRs has been reinvigorated by recent detections of $\gamma$-ray emission at GeV- to multi-TeV-energies from supernova remnants. The HESS galactic plane survey yielded 14 new TeV detections, 7 of which are coincident with SNRs \citep{aharonian06}. Targeted observations are now capable of reaching sufficient sensitivities to resolve the morphology of $\gamma$-ray emission associated with the remnant \citep{hess_ctb37a,hess_w28}. In particular, a TeV-energy counterpart to IC 443 was observed to be spatially offset from the GeV-energy $\gamma$-ray source \citep{magic_ic443,veritas_ic443}. The spatial variations with energy may indicate the diffusion of cosmic rays accelerated by the supernova remnant out into the adjacent molecular cloud \citep{torres08}. Interacting SNRs represent a promising class of $\gamma$-ray sources which are likely to be uncovered in increasing numbers by the current generation of $\gamma$-ray observatories. However, to date the identification of $\gamma$-ray counterparts to SNRs has been inhibited by the poor spatial resolution and sensitivity of $\gamma$-ray telescopes \citep{esposito96,torres03}. Often, the association of $\gamma$-ray sources with interacting SNRs rests on supplemental evidence from common tracers of interaction with dense gas: broad molecular lines, bright at infrared to millimeter wavelengths (e.g., Reach et al. 2005), and the hydroxyl(OH) maser at 1720 MHz (e.g., Frail et al. 1994). 
OH masers, when detected only at 1720 MHz, are a particularly powerful diagnostic which is only associated with SNR shocks. While these ``SNR masers'' are only detected in $\sim$10\% of remnants, they unambiguously trace cool (25--150 K), dense ($\sim$10$^5$ cm$^{-3}$) gas in the wake of non-dissociative shocks and permit a direct measure of magnetic field strength \citep{lockett99}. Prominent interacting SNRs W28, W44 and IC 443 have been noted for both bright masers and coincident EGRET sources, leading to the suggestion that masers may trace cosmic-ray acceleration sites \citep{claussen97}. In this letter we address the nature of interacting SNRs with $\gamma$-ray counterparts: First, we show that an excellent correlation exists between SNR masers and a subset of GeV- and TeV-energy sources toward SNRs. This strengthens the argument for a hadronic origin for the observed $\gamma$-rays. Second, if pion decay is indeed the dominant emission mechanism, the implied cosmic ray enhancement is sufficient to explain the increased ionization needed to produce high columns of OH in the post-shock gas \citep{wardle99}. This suggests there is a more intimate relationship between SNR masers and $\gamma$-ray sources than merely being complementary tracers of dense gas interaction. \begin{deluxetable*}{rrlrrrrrrr} \tablecaption{Known SNR Masers and Coincident $\gamma$-ray Sources} \tablewidth{0pt} \tabletypesize{\scriptsize} \tablehead{ \colhead{$l$} & \colhead{$b$} & \colhead{SNR} & \colhead{Diameter} & \colhead{Distance} & \colhead{L$_{GeV}$} & \colhead{$\alpha_{GeV}$} & \colhead{L$_{TeV}$} & \colhead{$\alpha_{TeV}$} & Ref. 
\\ & & & \colhead{(\arcmin )} & \colhead{(kpc)} & \colhead{(ergs s$^{-1}$)} & & \colhead{(ergs s$^{-1}$)} } \startdata \multicolumn{10}{c}{Group A}\\ \hline 6.4 &--0.1 &W28 &42 &2.0 &4.8e35 &2.1 &1.5e33 &2.7 & 1,2\\ 34.7 &--0.4 &W44 &30 &2.5 &4.1e35 &1.9 &--- &--- & 1 \\ 49.2 &--0.7 &W51 C &30 &6 &1.7e36 &-- &1.5e33 &-- & 3,4 \\ 189.1 &+3.0 &IC 443 &50 &1.5 &1.0e35 &2.0 &1.2e33 &3.1 & 1,5,6\\ \hline \multicolumn{10}{c}{Group B}\\ \hline 0.0 &+0.0 &SgrA East &2.5 &8.5 &1.2e37 &1.7 &1.3e36 &2.2 & \\ 5.7 &--0.0 & &9 &3.2 &--- &--- &0.8e33 &2.3 & 2 \\ 8.7 &--0.1 &W30 &45 &3.9 &--- &--- &1.7e34 &2.7 & \\ 337.8 &--0.1 &Kes 41 &5 &12.3 &4.2e36 &2.5 &--- &--- & \\ 348.5 &+0.1 &CTB 37A &10 &11.3 &3.5e36 &2.3 &5.3e33 &2.3 & 7\\ 359.1 &--0.5 & &10 &5.0 &1.2e36 &2.2 &7.5e32 &2.7 & 8\\ \hline \multicolumn{10}{c}{Group C}\\ \hline 1.0 &--0.1 &Sgr D SNR&8 &8.5 & \\ 1.4 &--0.1 &&10 &8.5 & \\ 5.4 &--1.2 &Duck &35 &5.2 & \\ 9.7 &--0.0 &&11 &4.7 & \\ 16.7 &+0.1 &&4 &2/14 & \\ 21.8 &--0.6 &Kes 69 &20 &5.2 & \\ 31.9 &--0.0 &3C 391 &8 &9 & \\ 32.8 &--0.1 &Kes 78 &20 &5.5/8.5& \\ 337.0 &--0.1 &CTB 33 &3 &11 & \\ 346.6 &--0.2 &&8 &11 & \\ 348.5 &--0.0 &&10 &13.7 & \\ 349.7 &+0.2 &&2 &$>$11 & \\ 357.7 &+0.3 &Square &24 &6.4 & \\ 357.7 &--0.1 &Tornado &5 &$>$6 \enddata \tablecomments{ {\scriptsize EGRET luminosity from 0.04-6 GeV and HESS or VERITAS luminosity from 0.3-10 TeV, where $\alpha$ is the photon index. \noindent REFERENCES. -- (1) \citet{esposito96}, (2) \citet{hess_w28}, (3) \citet{abdo09}, (4) \cite{feinstein09}, (5) \cite{veritas_ic443}, (6) \citet{magic_ic443}, (7) \citet{hess_ctb37a}, (8) \cite{aharonian06} \label{tbl:list}} } \end{deluxetable*} \section{$\gamma$-ray Emission from Interacting SNRs} First, we assess $\gamma$-ray sources which may be directly associated with supernova remnants (hereafter, "$\gamma$-ray SNRs). 
To identify potential $\gamma$-ray counterparts to SNRs we have drawn from three catalogs: EGRET sources \citep{torres03}, the $Fermi$ Bright Source List \citep{abdo09} and the TeVCat\footnote{TeVCat is an online catalog of very-high-energy ($>$50 GeV) $\gamma$-ray sources. It can be accessed at http://tevcat.uchicago.edu .}. It is not trivial to determine the origins of each $\gamma$-ray source. Potential associations can be confused by the presence of pulsars or pulsar wind nebulae, which are also capable of producing high-energy $\gamma$-ray emission. Some sources reported by \citet{torres03} as potential EGRET counterparts to SNRs have since been associated with other astrophysical sources. The source 3EG J1410-6147, coincident with SNR G312.4-0.4, has recently been associated with the young pulsar J1410-6132 with variability detected by AGILE \citep{obrien08}. The source 3EG J1824-1514 has been confirmed as the microquasar LS 5039 \citep{ls5039}, ruling out an association with SNR G16.8-1.1 \citep{torres03}. Sources 3EG J2016+3657 and 3EG J2020+4017, coincident with SNRs G74.9+1.2 and G78.2+2.1 respectively, are identified as extragalactic blazars \citep{iyudin07}. Finally, the EGRET counterparts for SNRs G180.0-1.7, G355.6+0.0, G39.2-0.3 and G74.8+1.2 \citep{torres03} do not appear in the Fermi bright source catalog \citep{abdo09}, and we therefore do not include them as detections in our analysis. We include all remnants for which an association with the coincident $\gamma$-ray source has not been ruled out. There are 26 identified SNRs with $\gamma$-ray coincidences: 7 are young remnants ($\la$1 kyr), 12 are SNRs with evidence of interaction with dense clouds, and 7 are unclassified remnants. Of the 12 identified SNRs which are interacting with dense gas, all but two (MSH 11-61A and W49B) have detected SNR masers. 
The presence of masers gives several advantages: (1) masers signpost interaction with dense (10$^5$ cm$^{-3}$) clouds, which will enhance the pion decay signature, (2) the velocity of the maser gives a kinematic distance, and (3) an established velocity allows the adjacent cloud to be isolated from confusing Galactic emission along the line of sight. Table \ref{tbl:list} lists all SNRs with masers and the properties of potentially associated $\gamma$-ray emission. In the first five columns the Galactic coordinates, name, diameter and kinematic distance are listed for each remnant. The $\gamma$-ray luminosities, derived from the reported $\sim$100 MeV--10 GeV and $\gtrsim$1 TeV fluxes, are listed in columns 6 and 8. Spectral indices are given in columns 7 and 9. References for detections are given in the last column. The ten SNRs with coincident $\gamma$-ray sources are divided based on the certainty of their association: Group A includes four SNRs for which $\gamma$-rays are established as related to the SNR; Group B includes six SNRs with coincident $\gamma$-ray sources which have not been attributed to other astrophysical phenomena (pulsars, blazars, etc.) but for which an association with the SNR is less than certain; Group C lists the fourteen SNRs with masers which do not yet have detected $\gamma$-ray sources. Given that both masers and $\gamma$-rays are detected for only 10$\%$ of SNRs, the large number of coincident detections makes an association between SNR masers and $\gamma$-ray emission as tracers of interaction quite plausible. 
\begin{deluxetable*}{rrrrrrrrrrrrr} \label{tbl:contingency} \tablecaption{Contingency Table: Presence of SNR Masers versus SNRs with $\gamma$-ray Sources \label{tbl:contingency}} \tablewidth{0pt} \tabletypesize{\scriptsize} \tablehead{ \colhead{} & \multicolumn{3}{c}{$\gamma$-ray SNRs} && \multicolumn{3}{c}{No $\gamma$-ray Counterpart} \\ \cline{2-4} \cline{6-8} \\ \colhead{} & \colhead{} & \colhead{Expected} & \colhead{} && & \colhead{Expected} & \colhead{} \\ \colhead{OH (1720 MHz)} & & \colhead{from} & \colhead{} && \colhead{} & \colhead{from} & \colhead{} & \colhead{} \\ \colhead{Masers} & \colhead{Number} & \colhead{Counts} & \colhead{$\chi^2$} && \colhead{Number} & \colhead{Counts} & \colhead{$\chi^2$} & \colhead{Total} } \startdata Masers present ....... & 10 & 2.4 & 24.7 && 14 & 21.6 & 2.7 & 24\\ Masers absent ........ & 14 & 19.5 & 1.6 && 185 & 179.0& 0.2 & 199\\ No observations....... & 2 & 4.1 & 1.1 && 41 & 37.9 & 0.1& 43 \\ Total ................ & 26 & & && 240 & & & 266 & \enddata \tablecomments{We do not include those 43 SNRs which have not been observed for SNR Masers.} \end{deluxetable*} \subsection{Contingency Table Analysis} To explore the correlation between SNR masers and $\gamma$-ray sources, we use a contingency table analysis to test the null hypothesis that there is no association between the two groups. Here we include all remnants which have been searched for SNR masers \citep{frail96,green97,koralesky98,fyz99,hewitt09} and those $\gamma$-ray sources which have not been clearly identified as having a non-SNR origin. Table \ref{tbl:contingency} is the resulting contingency table. Three rows are given for SNRs with masers present, absent or not surveyed. Two columns divide SNRs with and without coincident $\gamma$-ray detections. 
Each of the two columns lists the ``Number'' of SNRs which meet both classifications, the number ``Expected from Counts'' assuming the maser and $\gamma$-ray properties of remnants are completely independent, and ``$\chi^2$'', the contribution to the total $\chi^2$, (observed $-$ expected)$^2$/(expected). For this test the total $\chi^2$ is 31 with two degrees of freedom. This gives a probability of $<$0.0001$\%$ that SNR masers and $\gamma$-ray counterparts to SNRs are not correlated. We note that two of the expected cell frequencies are less than five. To be statistically rigorous we also apply the Fisher Exact Probability Test in addition to the $\chi^2$ test. The Fisher test considers all possible outcomes of the contingency table in order to determine the probability of finding a correlation at least as strong as is observed. Here again we find a clear rejection of independence between $\gamma$-ray and maser detections in SNRs, with a probability of 0.002$\%$. The two classes are clearly associated. A similar contingency table analysis was used to quantify an association between SNR masers and mixed-morphology remnants, a class of SNRs with shell-type radio morphologies and prominent thermal X-ray emission from their interiors \citep{fyz03mmsnr}. Both classes of remnants are thought to result from interaction with dense gas and are strongly correlated. Of the identified $\gamma$-ray SNRs, 9 are classified as both mixed-morphology and maser-emitting, 2 are mixed-morphology without detected masers (SNRs W49B and MSH 11-61A), and only SNR G5.7-0.0 has detected masers but no X-ray detection. Therefore, either classification produces a nearly identically good correlation with $\gamma$-ray SNRs. Both X-rays and cosmic rays are capable of providing the ionization needed to produce SNR masers. Detailed discussion is given in Section \ref{sec:ionization}. 
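The $\chi^2$ bookkeeping described above can be reproduced in a few lines of pure Python. This is a sketch of the standard contingency-table computation applied to the observed counts of Table 2; small rounding differences relative to the published expected counts do not change the conclusion.

```python
# Chi-square contingency test on the counts of Table 2.
# Rows: masers present, masers absent, no maser observations.
# Columns: gamma-ray counterpart detected, not detected.
observed = [[10, 14], [14, 185], [2, 41]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected counts assume maser and gamma-ray properties are independent;
# each cell contributes (observed - expected)**2 / expected.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

dof = (len(observed) - 1) * (len(observed[0]) - 1)  # 2 degrees of freedom
# chi2 comes out near 31 with dof = 2, so independence is rejected at
# far beyond the 0.0001% level quoted in the text.
```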
An additional complication arises in that many SNRs have associated pulsars with their own wind nebulae that are also capable of accelerating particles to TeV energies. To account for this we separate the $\gamma$-ray SNRs into two categories based on whether a PWN is or is not also detected. This reduces the number of sources that fall under each classification, lowering the probability of independence from the Fisher test to 1.2$\%$, still significant enough to reject the null hypothesis. Furthermore, there is evidence that at least four SNRs (see Table \ref{tbl:list}, ``Group A'') are associated with $\gamma$-ray emission from the SNR-cloud interaction, and not the PWN. Improved spatial resolution is needed to discriminate between $\gamma$-ray emission from the PWN and SNR, but we note that there are no markedly different characteristics between the two groups of SNRs with and without PWN. \subsection{Properties of $\gamma$-ray SNRs with Masers} For SNRs interacting with dense gas, $\gamma$-ray counterparts are likely to be dominated by emission from neutral pion decay originating from interactions between hadronic cosmic rays and gas nuclei. In contrast to young SNRs, older interacting SNRs have little if any detected X-ray synchrotron emission. An inverse-Compton scenario requires small magnetic field strengths $<$1 $\mu$G \citep{yamazaki06} whereas Zeeman splitting of SNR masers measures field strengths of order $\sim$1 mG \citep{fyz96,claussen97,brogan00}. Electron bremsstrahlung emission has also been proposed, but the expectation would be for $\gamma$-ray emission to trace the radio shell morphology \citep{bykov00}. TeV emission from W28 and IC 443 shows no correlation with the radio shell, and an excellent correlation with dense gas \citep{hess_w28,veritas_ic443}. Forthcoming Fermi observations will have sufficient resolution to resolve this question at GeV energies. 
The presence of a large reservoir of dense gas around the SNR, the expectation that cosmic rays are accelerated by SNRs, and the relatively older ages of this subset of interacting SNRs supports a pion decay origin for $\gamma$-ray counterparts. The morphology of $\gamma$-ray emission is seen to be well matched to that of the molecular cloud. In Figure \ref{fig:histo}, histograms of $\gamma$-ray luminosity show medians of 1.7$\times$10$^{36}$ ergs s$^{-1}$ and 1.5$\times$10$^{33}$ ergs s$^{-1}$ for GeV and TeV detections, respectively. This is consistent with luminosity estimates if a significant fraction of the SN energy (10-20\%) is diverted to cosmic-rays which are incident on the adjacent molecular cloud \citep{drury94}. The orders of magnitude differences in the luminosity reflect the expected energy spectrum of accelerated particles. On average a hardening of the photon spectrum from GeV to TeV energies is also observed. The $\gamma$-ray spectral index, given in columns 7 and 9 of Table \ref{tbl:list}, steepen at higher energies from a mean of $\alpha_{GeV} \sim$ 2.1 to $\alpha_{TeV} \sim$ 2.6. It has been suggested for nearby SNR IC 443 that this spectral steepening results either from the presence of both pion decay and a strong bremsstrahlung component \citep{bykov00}, or the diffusion of cosmic rays at different energies through the dense cloud \citep{torres08}. In addition to IC 443, the SNRs W28, Sgr A East, and G359.1-0.5 show this characteristic spectral hardening at higher energies, but CTB 37A does not. Future detailed spectral modeling of these sources can determine whether a spectral signature can be used to discriminate between different $\gamma$-ray emission mechanisms. \begin{figure} \centering \includegraphics[height=1.5in]{fig1a.ps}\\ \includegraphics[height=1.5in]{fig1b.ps} \caption{Histograms of GeV (top) and TeV (bottom) luminosities for SNRs with masers. Bin sizes of 1 and 0.5 are used. 
Characteristic luminosities of 1.7$\times$10$^{36}$ ergs s$^{-1}$ and 1.5$\times$10$^{33}$ ergs s$^{-1}$ are found, respectively. \label{fig:histo}} \end{figure} \section{Enhanced Ionizations via Cosmic Rays\label{sec:ionization}} \begin{deluxetable*}{rrrrrrrrrrr} \tablecaption{Comparison of Cosmic Ray and X-ray Ionizations \label{tbl:crionization}} \tablewidth{0pt} \tabletypesize{\scriptsize} \tablehead{ \colhead{$SNR$} & \colhead{Distance} & \colhead{M$_{cloud}$} & \colhead{F$_{\gamma}$($>$100 MeV)} & \colhead{$\zeta_{CR}$ } & \colhead{$\zeta_{Xray}$ } & \colhead{Ref.} \\ & \colhead{(kpc)} & \colhead{(10$^5$ $\hbox{$M_\odot$}$)} & \colhead{(10$^{-8}$ cm$^{-2}$ s$^{-1}$)} & \colhead{(10$^{-16}$ s$^{-1}$)} & \colhead{(10$^{-16}$ s$^{-1}$)} } \startdata W28 &2.0 &0.5 &74.2 & 3.4 & 2.3 & 1\\ W44 &2.5 &3.0 &88.9 & 1.1 & 4.5 & 2\\ W51 C &6.0 &1.9 &40.9 & 4.4 & 8.8 & 3\\ IC 443 &1.5 &0.1 &51.4 & 5.5 & 3.6& 4\\ \enddata \tablecomments{References: (1) \citet{hess_w28}, (2) \citet{seta98}, (3) \citet{feinstein09}, (4) \citet{dickman92}. Fluxes are taken from the Fermi Bright Source List \citep{abdo09}.} \end{deluxetable*} The strong correlation observed between $\gamma$-ray-bright and maser SNRs may result from more than just both tracing interaction with high gas densities. The large post-shock OH abundances required for masing to occur are not produced in chemical models of slow, dense shocks; instead all gas-phase oxygen is rapidly converted to water before the gas cools to 50 K \citep{kaufman96}. The large observed OH column densities are thought to be produced by dissociation of the abundant post-shock water \citep{wardle99}. In this model energetic electrons produced by ionizations in the molecular gas will excite H$_2$, which will subsequently generate a weak flux of far-ultraviolet photons which photo-dissociate water molecules producing OH \citep{prasad83}. 
The high flux of X-rays from the hot interior characteristic of mixed morphology SNRs has been shown to be a viable ionization mechanism \citep{fyz03mmsnr}. Cosmic rays and X-rays are notionally lumped together in this model. The $\gamma$-ray luminosity can be used to discriminate the relative importance of ionizations by cosmic rays. The $\gamma$-ray spectrum resulting from pion decay of hadronic cosmic ray interactions in SNRs has been described by many authors \citep{drury94,baring99,bykov00}. Here we follow \citet{aharonian91}, formulating the $\gamma$-ray flux as a function of the cosmic ray enhancement in the cloud adjacent to SNR, neglecting any possible gradients through the cloud as a simplification. The $\gamma$-ray flux above energy E$_{\gamma}$ is given by the equation: \begin{equation} {\rm F( \geq E_\gamma) \simeq 3\times10^{-13} \left(\frac{E_\gamma}{TeV}\right)^{-1.6} \omega_{CR} \left(\frac{M_{5}}{d_{kpc}^2} \right) ph\ cm^{-2}\ s^{-1} }, \label{eqn:1} \end{equation} where $d_{kpc}$ is the distance to the SNR in kpc, E$_\gamma$ is the threshold above which the $\gamma$-ray flux is totaled, $\omega_{CR}$ is the ratio of cosmic ray density at the SNR to the local solar value assuming the same $\gamma$-ray emissivity, and M$_5$ is the total mass of the SNR shell and adjacent molecular cloud in units of 10$^5$ \hbox{$M_\odot$} . We note that \citet{drury94} established that spectral variations between 2.0 and 2.7 changes this estimate by less than 20\% . Assuming the observed $\gamma$-ray flux from the SNR is due to hadronic interactions with the surrounding dense gas, it is straightforward to roughly estimate the local energy density of cosmic rays, provided the distance and molecular environment are known. 
For EGRET and Fermi detections, E$_{\gamma}$ $>$ 100 MeV, equation (\ref{eqn:1}) can be rewritten to express the local cosmic ray enhancement $\omega_{CR}$: \begin{equation} \omega_{CR} \simeq\ \frac{F_{\gamma}(> 100 \ MeV)}{7 \times\ 10^{-7} {\rm\ ph \ cm^{-2}\ s^{-1}}} \ \left(\frac{d_{kpc}^2}{M_5}\right) \ . \end{equation} The cosmic ray ionization scales as the cosmic ray density, so we obtain the ionization rate due to cosmic rays $\zeta_{CR} \simeq\ \omega_{CR}\ \zeta_{\odot}$, where $\zeta_{\odot}$ is the local cosmic ray ionization rate of $\sim$4$\times$10$^{-17}$ s$^{-1}$ \citep{webber98}. In Table \ref{tbl:crionization} we list the observed properties of SNRs from which the local enhancement of cosmic rays is estimated. We find cosmic ray densities are typically increased by 30--150 times that observed for quiescent molecular clouds. This provides an ionization rate sufficient to produce OH abundances of $\sim$10$^{-7}$--10$^{-6}$ in the post-shock gas. For comparison we also list the ionization rate from interior X-ray emission derived by \citet{fyz03mmsnr}. The two sources of ionization are generally found to be comparable. Either could be the dominant ionization mechanism depending on the location of the gas with respect to the interior X-ray emitting plasma and cosmic ray acceleration sites, and may vary on a source-by-source basis. We caution that our estimates of cosmic ray ionization rate are somewhat crude. Even if the dominant emission mechanism is hadronic, there may still be a significant bremsstrahlung component. Detailed spectral modeling with data from the $Fermi$ $\gamma$-ray Space Telescope will permit a much better estimate of the local cosmic ray density. Observations of direct chemical tracers of cosmic ray ionization rates, such as H$_3^+$ and He$^+$ \citep{oka06,dalgarno06,indriolo09}, could be used to confirm the cosmic ray enhancements in the immediate environment of these interacting SNRs. 
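As a rough numerical check of the rewritten equation for $\omega_{CR}$, the following sketch recomputes the cosmic-ray enhancement and ionization rate for W28 from the Table 3 inputs; the $7\times10^{-7}$ normalization and the local ionization rate $\zeta_\odot$ are the values quoted in the text.

```python
# Cosmic-ray enhancement from the >100 MeV gamma-ray flux:
# omega_CR ~ [F(>100 MeV) / 7e-7 ph cm^-2 s^-1] * d_kpc**2 / M_5
zeta_sun = 4e-17  # local cosmic-ray ionization rate, s^-1 (Webber 1998)

def cr_enhancement(flux_100mev, d_kpc, m5):
    """Cosmic-ray density enhancement over the local solar value."""
    return (flux_100mev / 7e-7) * d_kpc**2 / m5

# W28 inputs from Table 3: F = 74.2e-8 ph cm^-2 s^-1, d = 2.0 kpc,
# cloud mass 0.5e5 M_sun (i.e. M_5 = 0.5).
omega_w28 = cr_enhancement(74.2e-8, 2.0, 0.5)  # enhancement of ~8.5
zeta_w28 = omega_w28 * zeta_sun  # recovers ~3.4e-16 s^-1, as in Table 3
```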
\section{Summary \& Conclusions} We have explored the class of $\gamma$-ray sources which can be explained by enhanced cosmic ray densities at the interaction sites between SNRs and molecular clouds. By correlating SNR masers with known GeV and TeV sources, we have identified an emerging class of $\gamma$-ray-bright interacting SNRs. Of the 24 known maser SNRs currently identified, ten have GeV- or TeV-energy $\gamma$-ray associations, and six have both. If $\gamma$-ray emission from these sources is largely due to hadronic cosmic rays, the enhanced local cosmic ray ionization rates in these clouds can explain the production of OH molecules behind a C-type shock, as suggested by \citet{wardle99}. Furthermore, cosmic ray ionization is typically comparable to or dominant over ionization from interior thermal X-rays, though without detailed knowledge of the cosmic ray spectrum these results have large uncertainties. Interacting SNRs represent a promising class of $\gamma$-ray sources which are likely to be uncovered in increasing numbers by future $\gamma$-ray observatories.
Ces gens-là is the eighth studio album by Jacques Brel, released in the summer of 1966. Untitled at the time of release, it is now identified by the title of the song that opens the record. This 12-inch 33 rpm LP compiles the tracks of the 10-inch 33 rpm record Jacky, released at the end of 1965, and four tracks recorded earlier and released on the 10-inch record Mathilde. About the album Discography and original releases: 1966: LP Barclay 80323S Ces gens-là 1965: 10-inch LP Barclay 80 284 Jacky EP Barclay 70 900 M: Ces gens-là - Jacky - L'âge idiot EP Barclay 70 901 M: Fernand - Grand-mère - Les Désespérés 1963: 10-inch LP Barclay 80222S Mathilde 1964: EP Barclay 70635: Mathilde, Tango funèbre, Titine, Les Bergers EP Barclay 70636: Jef, Les Bonbons, Au suivant, Le dernier repas Track listing The 2003 CD reissue adds the following bonus tracks: Credits Musicians Jean Corti: accordion (uncredited) Gérard Jouannest: piano (uncredited) Production Arrangements and orchestral direction: François Rauber Sound engineering: Gerhard Lehner (uncredited) Artwork credits: Alain Marouani, Hubert Grooteclaes References Albums released in 1966 Jacques Brel albums
Riri (stylized as RIRI) is the eponymous debut studio album by Japanese singer Riri, released February 14, 2018, by Sony Music Associated Records. The album served as her major-label debut following a record deal with Sony Music Entertainment Japan. Three promotional singles were released prior to the album. The first, a remix of "Rush", was released in November 2017. A second promotional single, "Keep Up", was released in December, and a third, "Crush on You", was released in January 2018. Upon its release, the album peaked at number 34 on the Oricon Albums Chart and number 18 on the Billboard Japan Hot Albums chart. Background In 2016, Riri was discovered by Japanese-American singer-songwriter Ai and her recently launched talent agency, The Mic-a-holics, after winning a talent contest. Later that year, Riri released her debut extended play I Love to Sing under the agency. In 2017, her second EP, Rush, was released by The Mic-a-holics. Unlike her previous EP, Rush charted in Japan, debuting and peaking at number 70 on the Billboard Japan Hot Albums chart. Riri was ultimately scouted by Sony Music Japan and eventually signed to Sony Music Associated Records. Development and production Wanting to appeal to both overseas and Japanese listeners, Riri worked with both Western and Far Eastern producers. Five previously released tracks from I Love to Sing and Rush were included, recorded in Japan, while five new tracks were recorded in Los Angeles. She worked with various songwriters and producers such as Nikki Flores, Joacim Persson, Damon Sharpe, Uta, Ai, and Mayu Wakisaka. Music and lyrics Riri has been described as an R&B and EDM album. Riri was influenced by artists such as Beyoncé, Mariah Carey and Whitney Houston. Riri's Western appeal for the album was heavily influenced by Ai, whom Riri has described as someone she "cherishes". In an interview with Oricon, Riri described "Promised Road" as a song about eventually having dreams come true. 
Track listing Notes Certain digital stores display "2018 remastered" within the title of tracks 1–2, 4, 6 and 10 Tracks 1 and 9 are stylized in all capitals Track 10 is stylized in sentence case Charts Release history References 2018 debut albums Onenation albums Sony Music Entertainment Japan albums
Barleeia trifasciata is a species of sea snail in the family Barleeiidae. The scientific name of the species was first validly published in 1960 by Habe. Barleeiidae
""" Make Randomization test with trimmed mean difference (`randtest-tmean`) available on the command line. """ from functools import partial from .base import randtest, test_statistic from .mcts import trimmed_mean from .argparser_bp import read_data, argparse_cli def main(): """Main function""" description = """ Randomization test for the comparison of trimmed means computed based on two independent samples gathered in a controlled experiment. """ parser = argparse_cli(description) parser.add_argument( "-t", metavar="[0-49]", type=int, choices=range(0, 50), default=20, help="value in percent used for trimming (default: 20).", ) args = parser.parse_args() alpha = args.t / 100 # Use functools.partial to set parameters tmean = partial(trimmed_mean, trim_percent=alpha) data_group_a = read_data(args.fname_data_A) data_group_b = read_data(args.fname_data_B) result = randtest( data_group_a=data_group_a, data_group_b=data_group_b, mct=tmean, tstat=test_statistic, num_permutations=args.p, alternative=args.a, num_jobs=args.n, log_level=args.l, seed=args.s, ) print(result) if __name__ == "__main__": main()
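The heavy lifting above lives in the package's other modules, but the idea of a randomization (permutation) test on trimmed means is compact enough to sketch on its own. The following is an illustrative, self-contained reimplementation, not the package's actual `randtest`/`trimmed_mean` code; the function names and defaults here are invented for the sketch.

```python
import random
import statistics

def trimmed_mean(values, trim_percent=0.2):
    """Mean after dropping the lowest and highest trim_percent of the values."""
    vals = sorted(values)
    k = int(len(vals) * trim_percent)
    return statistics.mean(vals[k:len(vals) - k])

def randomization_test(group_a, group_b, num_permutations=10_000, seed=0):
    """Two-sided p-value for a difference in trimmed means.

    Repeatedly relabels the pooled observations at random and counts how
    often the relabeled difference is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(trimmed_mean(group_a) - trimmed_mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(num_permutations):
        rng.shuffle(pooled)  # random relabelling of the pooled data
        diff = abs(trimmed_mean(pooled[:n_a]) - trimmed_mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / num_permutations
```

Clearly separated groups yield a small p-value, while identical groups yield a large one, which matches the intuition behind the command-line tool's output.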
Q: Graphics2D awt port of copyArea method to playn

I'm porting an old AWT Java game to the PlayN framework, and I have some graphics.copyArea calls. Is there some way to map this call onto some play.core.Canvas calls? Thanks

A: You should be able to use the same canvas as the source and destination of a Canvas.drawImage call:

CanvasImage image = graphics().createImage(100, 100);
// draw stuff in your canvas image
// define the source of the copyArea: 15x15+5+5
float sx = 5, sy = 5, swidth = 15, sheight = 15;
// note that we use swidth/sheight for dwidth/dheight because we don't want to scale,
// we just want to copy data from one place in the image to another
image.canvas().drawImage(image, 25, 25, swidth, sheight, sx, sy, swidth, sheight);

However, if you're really rendering everything to a Canvas on every frame, you're probably going to discover that that's very slow. You'll be better off restructuring things to use ImageLayer and other constructs that can be hardware accelerated.
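For readers who have not used copyArea: it copies a rectangular block of pixels from one position in a buffer to another, and a correct implementation snapshots the source first so overlapping regions behave. A minimal Python model of that semantics (purely illustrative; nothing here is PlayN or AWT API):

```python
def copy_area(pixels, sx, sy, w, h, dx, dy):
    """Copy the w-by-h rectangle at (sx, sy) to (dx, dy) within one buffer.

    A snapshot of the source rectangle is taken first, so the copy behaves
    correctly even when source and destination overlap.
    """
    patch = [[pixels[sy + j][sx + i] for i in range(w)] for j in range(h)]
    for j in range(h):
        for i in range(w):
            pixels[dy + j][dx + i] = patch[j][i]

# A 4x4 buffer with distinct values; copy the 2x2 block at (0, 0) to (2, 2).
img = [[10 * y + x for x in range(4)] for y in range(4)]
copy_area(img, 0, 0, 2, 2, 2, 2)
```

The drawImage trick in the answer achieves the same effect because the source and destination rectangles are given separately and no scaling occurs when their sizes match.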
Compex Software will again be showcasing more innovative shop management solutions at SUR/FIN 2018, June 4-6, Huntington Convention Center, Cleveland, Ohio. Visit our Booth #544 for a live presentation. Compex Software will again be showcasing innovative shop management solutions at SUR/FIN 2017 at the Atlanta World Congress Convention Center, Atlanta, Georgia, June 19-21. Visit our Booth #920 for live system demonstrations. We are excited to showcase some of our innovative software and system integration solutions at the 2014 SUR/FIN Exhibition in Cleveland, OH. Some of our latest IBMS (Integrated Business Management Software) suite features include the ability to deploy and access critical information on any enterprise network computing device, from desktops, tablets, and smartphones to embedded devices such as Raspberry Pi, Arduino, etc. Do you want to get paid sooner? We have more good news for users who are currently using Salesforce.com for their online (cloud-based) CRM solution. As of January 2012, we have officially released METFIN V9, our flagship ERP application for job shops and surface finishers, with an array of exciting new features. Training videos are now available for download.
Q: Why does my GridView control in C# ASP.NET display an old table below the new table?

The objective I'm trying to achieve is to display filtered data in the same grid control. So what I do is, e.g., A-DROPDOWN clicked, display XDATA in GRIDVIEW1. Then FilterA selected; this filter calls the "ADROPDOWN" clicked event again, then checks the appropriate filter, passes the appropriate SQL query, and displays "YDATA" in GRIDVIEW1. Now when I select B-DROPDOWN, ZDATA is displayed, however below it is the YDATA. If I select C-DROPDOWN, VDATA is displayed, however below it is still the YDATA. If I selected A-DROPDOWN, then XDATA would be displayed; then if I selected B-DROPDOWN without selecting any of the filters, WDATA would be displayed cleanly in GRIDVIEW1 without any data below or above it. I would appreciate it if anyone has any suggestions. I cannot show my code because it is 20,000 lines. I would like to add: the alphabets assigned to the dropdowns and data represent the different datasets.

A: SOLVED!!! In case it is of help to someone: the update panel in my front end was only updating the GridView, not the table where the GridView was located. I simply made it so that my entire table where the GridView was located was updating. Many thanks.
The Kolisch Quartet was a string quartet founded in Vienna in 1921 by Rudolf Kolisch; through its particular services to the performance and promotion of New Music, it ranks among the central ensembles of the 20th century. It broke up after its emigration to the USA, in 1944.

History
The Kolisch Quartet arose in the context of Arnold Schoenberg's Society for Private Musical Performances in Vienna in the early 1920s. At first with briefly changing personnel and interruptions to its existence, it constituted itself in the autumn of 1924 as the Vienna String Quartet. From its re-formation in 1927 it officially bore the name Kolisch Quartet. The ensemble devoted itself chiefly to the cultivation of New Music and, after several tours, soon became internationally known. It gave the world or local premieres of numerous works by composers of the Second Viennese School (Arnold Schoenberg, Alban Berg, Anton von Webern) and from their circle. The interpretations were often worked out in close cooperation with the composers, which lent them particular significance for the history of interpretation. This was true above all of works by Schoenberg, who as spiritus rector also exercised great influence on Rudolf Kolisch and his quartet in other respects. Radically committed to faithful rendering of the work, the ensemble strove in each case for an analytical penetration of the composition and the ideas underlying it. The rehearsal work, as intensive as possible, was directed at mastery of the full score (not merely of one's own part), and the quartet played a large repertoire from memory (to allow mutual coordination through eye contact). The aim was not an "effective" but a "correct" (authentic) interpretation of the musical text.

A lack of demand for contemporary music, the result of increasingly conservative cultural policy in a politically changed Europe, prompted the quartet in the early 1930s to take more classical works (above all Beethoven's late works) into its programs. The limited concert opportunities brought on by the economic crisis contributed to the musicians also seeking engagements overseas. Only from 1935 onward did various tours to the USA, Canada and South America come about. When Rudolf Kolisch, who had moved from Vienna to Berlin in 1929, relocated to the United States at the end of 1936, America thenceforth formed the center of the ensemble's concert activity. Yet concert tours through Europe remained a fixed part of the schedule, and at the moment of the "Anschluss" of Austria the musicians happened to be in Amsterdam. After their definitive emigration to the USA, the quartet found itself facing stiff competition, not least from other ensembles that had fled Europe from the National Socialists. Without supplementary income from orchestral engagements or teaching posts for its members, the continued existence of the Kolisch Quartet was no longer possible in the long run. After the departure of long-serving members (1939), Rudolf Kolisch did manage for a while to keep presenting a quartet, but in the end only with personnel changing at ever shorter intervals. The last concert of a Kolisch Quartet finally took place in New York in May 1944.

Members
Violin: Rudolf Kolisch (1921–1944)
Violin: Jaromir Czerny (1921–1922), Gustav Kinzel (1922), Oskar Fitz (1922–1923), Fritz Rothschild (1924–1927), Felix Khuner (1927–1941), Daniel Guilevitch (1941–1943), Lorna Freedman (1943–1944)
Viola: Othmar Steinbauer (1921–1922), Herbert Duesberg (1922–1923), Marcel Dick (1924–1927), Eugen Lehner (1927–1939), Jascha Veissi (1939–1941), Kurt Frederick (1941–1942), Ralph Hersh (1942–1943), Bernhard Milofsky (1943–1944)
Cello: Erik Skeel-Görling (1921–1922), Wilhelm Winkler (1922–1923), Joachim Stutschewsky (1924–1927), Benar Heifetz (1927–1939), Stefan Auber (1939–1941), Fritz Magg (1942–1943), Janos Scholz (1943–1944), Stefan Auber (1944)

Literature
Karoly Csipák: "Sehr schnelle Reflexe". Erinnerungen an das Kolisch-Quartett. In: Musica 40, 1986, pp. 518–524.
Claudia Maurer Zenck: "Was sonst kann ein Mensch denn machen, als Quartett zu spielen?" – Rudolf Kolisch und seine Quartette. Versuch einer Chronik der Jahre 1921–1944. In: Österreichische Musikzeitschrift, vol. 53 (1998), no. 11, pp. 8–57.
Walter Levin: Immigrant Musicians and the American Chamber Music Scene, 1930–1950. In: Reinhold Brinkmann, Christoph Wolff (eds.): Driven into Paradise. The Musical Migration from Nazi Germany to the United States. Berkeley, Los Angeles, London 1999, pp. 322–339.
Werner Unger: Rudolf Kolisch: Post-War Recordings. A Discography with Recordings from USA, Darmstadt (IMD) and European Radio Stations. Kehl, 2012.

External links
Article on Rudolf Kolisch in the Lexikon verfolgter Musiker und Musikerinnen der NS-Zeit
Discography of the Kolisch Quartet and of Rudolf Kolisch
Kemfast PASS is a certified aerospace consumable supplier to the OEM, MRO and defense industries. We supply a wide array of products, including adhesives, lubricants, paints, sealants, and many more. We are more than just your average consumable supplier. We strive to provide industry-leading customer service and quality in all aspects of our business. Our goal is to create long-term relationships with our customers and support them with unrivaled levels of quality and service. We would love the opportunity to serve you – please contact us with any RFQs or special requests.
Q: C# String Filepath Question

I am trying to set a file path for a SoundPlayer object. If I have a Sounds folder in my main project folder, how do I go about writing SoundPlayer test = new SoundPlayer("Sounds/Fireball.wav");?

A: Where the file is relative to your main project is not important. What's important is where the sound file will be relative to your application at deployment/debug time. If it will have the same relative path as that of the main .exe, then you can use the following (note that Assembly.Location is the path of the assembly file itself, so its directory must be taken first):

var root = Path.GetDirectoryName(typeof(Program).Assembly.Location);
var soundPath = Path.Combine(root, @"sounds\Fireball.wav");
var test = new SoundPlayer(soundPath);

A: Have you tried the path as @"Sounds\Fireball.wav"?

A: If you are running out of Visual Studio, the current working directory will be bin\Debug, so the file in question would need to be in bin\Debug\Sounds\Fireball.wav. Also, as others have mentioned, you should use a backslash \ rather than a forward slash /
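One subtlety worth spelling out: Assembly.Location points at the assembly file itself, not at its containing directory, so the directory must be extracted before joining a relative path onto it. The same file-versus-directory distinction, illustrated in Python with a hypothetical install path (the paths below are made up for the example):

```python
import posixpath

exe_path = "/opt/app/Game.exe"  # hypothetical location of the built binary

# Joining the relative path onto the *file* path produces a bogus location...
wrong = posixpath.join(exe_path, "Sounds", "Fireball.wav")
# ...so the containing directory has to be taken first.
right = posixpath.join(posixpath.dirname(exe_path), "Sounds", "Fireball.wav")
```

`posixpath` is used here only to keep the example deterministic across platforms; in real code you would use `os.path` (or `Path.GetDirectoryName`/`Path.Combine` in C#).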
import keyMirror from 'react/lib/keyMirror'; export default keyMirror({ LOAD_PAGE: null, LOAD_PAGE_SUCCESS: null, LOAD_PAGE_ERROR: null, CHANGE_LOCATION: null, CART_ADD: null, CART_REMOVE: null, CART_VISIBLE: null, SET_SELECTED: null, //Selects a product option RECEIVE_DATA: null, //Loads mock data SELECT_PRODUCT: null });
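keyMirror simply builds an object whose values equal its keys, so action-type constants never drift out of sync with their names. The same helper is a few lines in Python (an illustrative equivalent, not part of this codebase):

```python
def key_mirror(keys):
    """Return a dict mapping each key to itself, like React's keyMirror."""
    return {k: k for k in keys}

# Mirrors a subset of the action types defined above.
actions = key_mirror(["LOAD_PAGE", "LOAD_PAGE_SUCCESS", "CART_ADD"])
```

With this shape, `actions["LOAD_PAGE"]` is the string "LOAD_PAGE", so typos in lookups fail loudly instead of silently producing `undefined`-style values.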
<?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/custom_marker_view" android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="horizontal" android:background="@color/main_blue"> <ImageView android:id="@+id/profile_image" android:layout_width="match_parent" android:layout_height="match_parent" android:layout_gravity="center_horizontal" android:contentDescription="@null" android:src="@drawable/car" android:background="@color/main_blue" android:padding="3dp"/> </RelativeLayout>
Anistock is the only library where you pay once and receive your stock footage in multiple formats for free. HD is our standard format; select QuickTime, MP4, web, iPhone, Android formats, etc., all for one low price. The company's Web site, www.anistock.com, gives creative professionals instant access to the world's finest royalty-free stock video footage library, plus video backgrounds, motion backgrounds, video loops and motion graphics for their feature web, broadcast, and other new media channels and programs. Popular sports video backgrounds and sports stock footage are clips showing ski footage, surfing footage, chessboard video backgrounds and golf footage. Royalty-free stock video backgrounds with sports themes can be viewed everywhere, including TV commercials, sporting and adventure videos, plus video productions. Stock footage free video download links from Anistock.com, plus more royalty-free stock video links. People Video Backgrounds Library: people-themed HD video backgrounds, always royalty-free, plus a free weekly video to download. Video backgrounds with characters and character video animations from the Anistock stock footage and video background library now have a free weekly video to download. Royalty-free cartoon, animation, animal and funny-themed video backgrounds in HD, plus a cartoon stock footage video library. The growth of the internet as a video promotional channel has seen the use of high-definition video backgrounds and royalty-free stock video continually expand. Video backgrounds for health. Anistock stock video footage offers full HD stock video and video backgrounds for $1. Nature video backgrounds in high definition from the Anistock stock footage and video background library.
Wawasee Village is an unincorporated community in Turkey Creek Township, Kosciusko County, in the U.S. state of Indiana. History Wawasee Village took its name from Wawasee Lake. The lake was named after Wawasee, a Miami chief. Geography Wawasee Village is located at . References Unincorporated communities in Kosciusko County, Indiana Unincorporated communities in Indiana
If Only My Chemical Romance Stopped at Black Parade Photo Courtesy of Visionary Artistry Magazine In October of 2006, My Chemical Romance released their third album entitled The Black Parade. With the new album came their first number one hit "Welcome to the Black Parade", over three million copies sold worldwide, and the vast popularity they had been moving towards for five years. Millions of fans couldn't get enough of the band and they were smacking their lips for the next entry. But what if it had ended there? What if My Chemical Romance released the biggest commercial success of their career and then swiftly dissolved? According to front man Gerard Way's recent interview with Rolling Stone, that was exactly what was meant to happen. "I plan things pretty far in advance, and before we'd even done the first record, I'd written out titles of things," Way stated yesterday to the Magazine. "I definitely knew I had the title of the second album before we'd even recorded the first… by the time I got to the third album, which didn't have a name, I felt like that was the end. Basically the time spent after Black Parade was me fighting against that instinct, fighting against myself. The end of Black Parade felt like a very natural end. To go beyond that felt like betraying some sort of artistic command that I had within myself." So there it is. Everything that came after Black Parade was never meant to happen. What exactly would we have missed out on? Well, we would have unfortunately lost the hit single "Sing" off their 2010 follow-up Danger Days: The True Lives of the Fabulous Killjoys. We would also be without the band's wonderful cover of "Desolation Row" for the Watchmen end credits. Conversely, we would have been spared four consecutive years of The Black Parade rereleases and live concert recordings that overexposed the music into oblivion. 
The EP Live and Rare (2007), the redundantly titled live album The Black Parade is Dead (2008), The Black Parade: The B-Sides (2009, with only 3 out of 5 being unused tracks), and two different video albums were senselessly churned out in a span of three years. The sword had swung the other way. Original fans either grew tired or extremely radical, while those who were already annoyed with My Chemical Romance became violently critical. All in all, there would be a few small tragedies were My Chemical Romance to have followed the plan and gone their separate ways in 2006, but it would have preserved their greatest work as opposed to letting it be rammed down the throats of listeners for a good half decade. What's more, a 2006 breakup would have made The Black Parade the final third in a trilogy of albums that had all begun when Way witnessed the September 11th attacks and scribbled down his rage in the form of music. To end their career five years after the tragic event that brought them together would have been an ending filled with the dark poetry the band is known for. Since the band's official split last year, Way has begun a solo career of his own, citing Nick Cave, David Bowie, and Iggy Pop as a few of his influences. Perhaps the next few years will show us what solo work would have been produced after 2006 had Way stuck to his instincts.
Mathematical and Physical Journal for High Schools
Issued by the MATFUND Foundation

Problem C. 1353. (April 2016)

C. 1353. Find the integer solutions of the equation $x^2-xy+y^2=7$.

Proposed by B. Kovács, Szatmárnémeti

(5 points)

Deadline expired on May 10, 2016.

Statistics: 141 students sent a solution. 5 points: 75 students. 4 points: 20 students. 3 points: 14 students. 2 points: 5 students. 1 point: 14 students. 0 points: 13 students.

Problems in Mathematics of KöMaL, April 2016
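As a sanity check (an addition, not part of the original problem page): since $x^2-xy+y^2 = (x-\tfrac{y}{2})^2 + \tfrac{3}{4}y^2$, any integer solution satisfies $\tfrac{3}{4}y^2 \le 7$, so $|y| \le 3$, and by the symmetry of the form also $|x| \le 3$. A tiny brute-force search then lists every solution.

```python
# |x|, |y| <= 3 suffices: x^2 - x*y + y^2 = (x - y/2)^2 + 3*y^2/4,
# so 3*y^2/4 <= 7 forces |y| <= 3, and the form is symmetric in x and y.
solutions = sorted((x, y)
                   for x in range(-3, 4)
                   for y in range(-3, 4)
                   if x * x - x * y + y * y == 7)
```

The search finds twelve integer pairs, such as (3, 1), (3, 2) and (-1, 2), each coming with its negation since the form is even under (x, y) → (-x, -y).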
/** */ package CIM.IEC61970.Informative.InfAssets; /** * <!-- begin-user-doc --> * A representation of the model object '<em><b>SVC Info</b></em>'. * <!-- end-user-doc --> * * <p> * The following features are supported: * </p> * <ul> * <li>{@link CIM.IEC61970.Informative.InfAssets.SVCInfo#getCapacitiveRating <em>Capacitive Rating</em>}</li> * <li>{@link CIM.IEC61970.Informative.InfAssets.SVCInfo#getInductiveRating <em>Inductive Rating</em>}</li> * </ul> * * @see CIM.IEC61970.Informative.InfAssets.InfAssetsPackage#getSVCInfo() * @model annotation="http://iec.ch/TC57/2009/CIM-schema-cim14 Documentation='Properties of SVC assets. Allows the capacitive and inductive ratings for each phase to be specified individually if required.'" * annotation="http://langdale.com.au/2005/UML Profile\040documentation='Properties of SVC assets. Allows the capacitive and inductive ratings for each phase to be specified individually if required.'" * annotation="http://www.eclipse.org/emf/2002/GenModel Documentation='Properties of SVC assets. Allows the capacitive and inductive ratings for each phase to be specified individually if required.' Profile\040documentation='Properties of SVC assets. Allows the capacitive and inductive ratings for each phase to be specified individually if required.'" * @generated */ public interface SVCInfo extends FACTSDeviceInfo { /** * Returns the value of the '<em><b>Capacitive Rating</b></em>' attribute. * <!-- begin-user-doc --> * <p> * If the meaning of the '<em>Capacitive Rating</em>' attribute isn't clear, * there really should be more of a description here... * </p> * <!-- end-user-doc --> * @return the value of the '<em>Capacitive Rating</em>' attribute. 
* @see #setCapacitiveRating(float) * @see CIM.IEC61970.Informative.InfAssets.InfAssetsPackage#getSVCInfo_CapacitiveRating() * @model dataType="CIM.IEC61970.Domain.Reactance" required="true" * annotation="http://iec.ch/TC57/2009/CIM-schema-cim14 Documentation='Maximum capacitive reactive power'" * annotation="http://www.eclipse.org/emf/2002/GenModel Documentation='Maximum capacitive reactive power'" * @generated */ float getCapacitiveRating(); /** * Sets the value of the '{@link CIM.IEC61970.Informative.InfAssets.SVCInfo#getCapacitiveRating <em>Capacitive Rating</em>}' attribute. * <!-- begin-user-doc --> * <!-- end-user-doc --> * @param value the new value of the '<em>Capacitive Rating</em>' attribute. * @see #getCapacitiveRating() * @generated */ void setCapacitiveRating(float value); /** * Returns the value of the '<em><b>Inductive Rating</b></em>' attribute. * <!-- begin-user-doc --> * <p> * If the meaning of the '<em>Inductive Rating</em>' attribute isn't clear, * there really should be more of a description here... * </p> * <!-- end-user-doc --> * @return the value of the '<em>Inductive Rating</em>' attribute. * @see #setInductiveRating(float) * @see CIM.IEC61970.Informative.InfAssets.InfAssetsPackage#getSVCInfo_InductiveRating() * @model dataType="CIM.IEC61970.Domain.Reactance" required="true" * annotation="http://iec.ch/TC57/2009/CIM-schema-cim14 Documentation='Maximum inductive reactive power'" * annotation="http://www.eclipse.org/emf/2002/GenModel Documentation='Maximum inductive reactive power'" * @generated */ float getInductiveRating(); /** * Sets the value of the '{@link CIM.IEC61970.Informative.InfAssets.SVCInfo#getInductiveRating <em>Inductive Rating</em>}' attribute. * <!-- begin-user-doc --> * <!-- end-user-doc --> * @param value the new value of the '<em>Inductive Rating</em>' attribute. * @see #getInductiveRating() * @generated */ void setInductiveRating(float value); } // SVCInfo
Solve using Quotient Rule (thread from Math Help Forum)

• September 16th 2007, 07:26 PM, Tskate:
I'm stuck on what to do next.
Problem: H(s) = s/square root of s subtract 1. Differentiate using the Quotient Rule.
I got this far: (s^1/2-1)[s]-(s)[s^1/2-1]/(s^1/2)^2

• 07:42 PM, Jhevon (quoting Tskate): is this what you mean? $\frac d{ds} \frac {s}{\sqrt {s} - 1}$

• 07:58 PM, Krizalid: I think he said $H(s)=\frac s{\sqrt{s-1}}$

• 08:21 PM, Krizalid: Well, in that case $\frac s{\sqrt s-1}=\frac{s\left(\sqrt s+1\right)}{s-1}$. Now apply the quotient rule.
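Carrying the first reading, $H(s)=\frac{s}{\sqrt s-1}$, through the quotient rule gives $H'(s)=\frac{(\sqrt s-1)\cdot 1 - s\cdot\frac{1}{2\sqrt s}}{(\sqrt s-1)^2}=\frac{\frac{\sqrt s}{2}-1}{(\sqrt s-1)^2}$. A quick numerical sanity check of that formula (an addition to the thread, not part of it):

```python
from math import sqrt

def H(s):
    return s / (sqrt(s) - 1)

def H_prime(s):
    # Quotient rule with u = s, v = sqrt(s) - 1:
    # (u'v - uv') / v^2 = ((sqrt(s) - 1) - s/(2*sqrt(s))) / (sqrt(s) - 1)^2
    return (sqrt(s) / 2 - 1) / (sqrt(s) - 1) ** 2

# Central-difference check at s = 4 (safely away from the pole at s = 1).
s, eps = 4.0, 1e-6
numeric = (H(s + eps) - H(s - eps)) / (2 * eps)
```

At s = 4 the derivative is exactly 0 (since sqrt(4)/2 = 1), and the finite-difference estimate agrees to high precision.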
Q: Probability distribution of product of integers

I have a scoring system based on 5 factors with integer values from 1 to 5:

Score = A * B * C * D * E

So the Score can range from 1 to 3125. Each of the factors is uniformly distributed between 1 and 5. I want to find a way to partition the results into 5 ranges so that the probability of a score in each range is equal. Because the final values for Score are only integers with prime factors 2, 3, and 5, the distribution of possible scores is weighted heavily toward low numbers. I did a simple graph in Excel and it looks like a Poisson or Gamma distribution, but that's just an uneducated guess. Simply put, I want a way to say, "This score is between X and Y, so it is in the 2nd quintile of all scores."

A: Brute force gives quintile breaks at $40, 90, 180, 360$. These are far from exact breaks. That pattern 90, 180, 360 seems like something interesting might be going on. Specifically, there are 657 ways to get numbers from $1$ to $40$. There are $1257$ ways to get numbers from $1$ to $90$. There are $1922$ ways to get numbers from $1$ to $180$, and there are $2503$ ways to get numbers from $1$ to $360$.

The raw data. The count is the number of ways to get this value or less, so there are $31$ ways to get a value of $5$ or less.
$$\begin{matrix}\text{Count}&\text{Value}\\ 1& 1\\6& 2\\11& 3\\26& 4\\31& 5\\51& 6\\81& 8\\91& 9\\111& 10\\161& 12\\181& 15\\226& 16\\256& 18\\306& 20\\386& 24\\396& 25\\406& 27\\466& 30\\517& 32\\577& 36\\657& 40\\687& 45\\782& 48\\812& 50\\832& 54\\952& 60\\997& 64\\1067& 72\\1097& 75\\1192& 80\\1197& 81\\1257& 90\\1337& 96\\1397& 100\\1427& 108\\1567& 120\\1577& 125\\1607& 128\\1627& 135\\1687& 144\\1747& 150\\1827& 160\\1832& 162\\1922& 180\\1972& 192\\2042& 200\\2062& 216\\2092& 225\\2212& 240\\2213& 243\\2233& 250\\2248& 256\\2268& 270\\2298& 288\\2388& 300\\2438& 320\\2443& 324\\2503& 360\\2523& 375\\2543& 384\\2603& 400\\2608& 405\\2618& 432\\2648& 450\\2708& 480\\2738& 500\\2743& 512\\2763& 540\\2773& 576\\2833& 600\\2838& 625\\2858& 640\\2868& 675\\2898& 720\\2918& 750\\2923& 768\\2953& 800\\2983& 900\\3003& 960\\3023& 1000\\3024& 1024\\3034& 1125\\3064& 1200\\3069& 1250\\3074& 1280\\3094& 1500\\3104& 1600\\3109& 1875\\3119& 2000\\3124& 2500\\3125& 3125 \end{matrix}$$

A: The problem in this case is that since the numbers you're multiplying can be the same, there are many possible ways to express a single product value. For example, there are 80 ways to express 40 as the product of five numbers from 1 to 5. The chance that the boundaries of quintiles will fall between different product values is very small, and that is indeed the case here. Just to test the idea, I made a small Haskell program to produce all possible values:

import Control.Applicative
import Data.List (sort)

scores = sort $ prod <$> xs <*> xs <*> xs <*> xs <*> xs
  where prod a b c d e = a * b * c * d * e
        xs = [1..5]

And confirmed that the boundaries of the 1st and 2nd quintiles contain the same value:

> scores !! 624
40
> scores !! 625
40

So if you put the boundary between the 1st and 2nd quintiles to be $\geq 40$, the 1st quintile will be too small. And if you put it to be $> 40$, it will be too large.
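Both answers are easy to re-verify. The short script below (an addition, not part of the original answers) recomputes the cumulative counts reported in the table and the two sorted-index lookups done in Haskell.

```python
from itertools import product

# All 5^5 = 3125 ordered products of five factors drawn from 1..5.
scores = sorted(a * b * c * d * e
                for a, b, c, d, e in product(range(1, 6), repeat=5))

def count_le(v):
    """Number of five-factor products that are <= v."""
    return sum(1 for s in scores if s <= v)
```

With these helpers, `count_le(40)`, `count_le(90)`, `count_le(180)` and `count_le(360)` reproduce the 657/1257/1922/2503 cumulative counts quoted above, and `scores[624]` and `scores[625]` both equal 40, matching the Haskell session.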
Q: R install.packages fails with "object not found" error

I am currently trying to install packages in R. On startup, I get the normal R message with "Error: object 'getw' not found". When I use the install.packages function, I get the same error at the end of the installation, one for each package I tried to install. However, when I start R with

R --no-init-file

I can install packages normally. I have been fishing around with Rprofile and other initialization settings of R. I have also done clean installs of R, and the message still appears. Does anyone have an idea about how to remove this error? Also, this machine is running Ubuntu 14.04 Trusty Tahr.

A: This sounds like something is wrong with the .Rprofile file. There can be more than one such file. At the beginning of an R session, R first searches for such a file in the working directory, then in the home directory. You may also want to check if the environment variable R_PROFILE_USER is set (in an R shell, this can be checked with Sys.getenv("R_PROFILE_USER")). If yes, look at the .Rprofile file in that directory to see if there is any suspicious entry. If all fails, make a copy of the .Rprofile file in your home directory and (if applicable) in your working directory with a different name. Then delete the file and try the installation again. If this succeeds, you can afterwards restore the old .Rprofile file(s) by using the copy/copies that you made before.

A: I had the same error. In my case, this was due to a previous partially failed uninstall of the package I was trying to install. Manually removing the partially uninstalled version of the package then allowed install.packages to succeed. Full details: I had run devtools::install_github(...) which prompted about newer versions of some required packages being available. I opted to install these updated versions in response to the prompt.
One of these packages (Rcpp) failed to install, with an error about being unable to remove the older version of the package (presumably because a file was in use/locked somehow). When I tried to install a newer version of Rcpp via install.packages, I got the above error. After investigating various things, I eventually ran .libPaths(), which outputs the location where my packages are installed. I went to this folder and found the Rcpp subfolder, which was mostly empty except for one file (Rcpp.dll), presumably the file that had failed to be deleted before. I deleted this file manually and deleted the Rcpp folder. I then retried install.packages(...), which now succeeded.
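The renaming step suggested in the first answer (set the .Rprofile files aside, retry the install, restore them later) can be scripted. A minimal sketch; the scratch directory stands in for your real home or working directory:

```python
from pathlib import Path
import tempfile

def backup_rprofiles(*paths):
    """Rename each existing .Rprofile to .Rprofile.bak so R starts without init files."""
    moved = []
    for p in map(Path, paths):
        if p.is_file():
            target = p.with_name(p.name + ".bak")
            p.rename(target)
            moved.append(target)
    return moved

# Demo against a scratch directory rather than the real home/working directory.
demo = Path(tempfile.mkdtemp())
(demo / ".Rprofile").touch()
moved = backup_rprofiles(demo / ".Rprofile", demo / "elsewhere" / ".Rprofile")
print([m.name for m in moved])  # ['.Rprofile.bak']
```

Restoring afterwards is the reverse rename; paths that don't exist are simply skipped.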
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,209
Beringomyia politonigra is a species of two-winged fly (Diptera) described by Evgenyi Nikolayevich Savchenko in 1980. Beringomyia politonigra belongs to the genus Beringomyia and the family Limoniidae (limoniid crane flies). No subspecies are listed in the Catalogue of Life. Sources Limoniidae politonigra
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,415
BitConnect's CEO Flees India After Crackdown On $2.4 Billion Crypto Ponzi Scheme By Joe Pots, March 2, 2022 The long-standing crypto fraud case against BitConnect, its founder, and its promoters is yet to be resolved. The SEC has revealed that Satish Kumbhani, the founder of the crypto exchange, is nowhere to be found. The SEC is searching for BitConnect's indicted founder In a recent court filing, SEC attorney Richard Primoff stated that Kumbhani has disappeared from his native country, India, and that all attempts to determine his whereabouts have failed. "Since November, the commission has been consulting with that country's financial regulatory authorities in an attempt to locate Kumbhani's address. At present, however, Kumbhani's location remains unknown," Primoff said. As a result, the SEC has been unable to serve Kumbhani notice of the accusations leveled against him. The SEC asked the court for an extension of the case until May 30 as it continues to search for the 36-year-old. The SEC filing comes after Kumbhani was formally indicted last Friday in San Diego by the US Department of Justice (DOJ) for his role in a $2.4 billion Ponzi scheme. The indictment charges the BitConnect founder with misleading investors globally through the crypto exchange's "Lending Program." Under that program, Kumbhani and others told investors that the exchange could generate profits for them using two pieces of proprietary automated trading software, the "BitConnect Trading Bot" and "Volatility Software." While no such technology existed, the exchange carried on the Ponzi scheme for years, using funds from new investors to pay early investors while also siphoning off much of the money. In 2018, the exchange abruptly shut down the Lending Program after receiving cease-and-desist orders from state regulators, including those of Texas and North Carolina. Glenn Arcaro, BitConnect's promoter in the US, pleaded guilty to charges of fraud last September. The SEC is still going after other promoters of the Ponzi scheme. 
Kumbhani may face up to 70 years in prison if found guilty on all charges. The court is also seeking to recover the $2.4 billion stolen from investors. Victims of crypto scams can hope for justice While many crypto scams have remained unsolved, more and more cases are getting resolved. In the recent past, courts have become increasingly involved in bringing crypto scammers to book. This year has seen the recovery of $3.6 billion worth of Bitcoin stolen from Bitfinex in 2016, and two suspects were arrested through the efforts of investigators at the DOJ. Meanwhile, victims of the scams have been encouraged to come forward to claim damages. The post BitConnect's CEO Flees India After Crackdown On $2.4 Billion Crypto Ponzi Scheme appeared first on cryptotimes24.com.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,619
{"url":"https:\/\/math.stackexchange.com\/questions\/338319\/associativity-of-cartesian-product","text":"# Associativity of Cartesian Product\n\nI have a basic doubt about the associativity of the Cartesian product. Well, first Wikipedia says that the Cartesian product isn't associative, and there's a good argument for it: if $x\\in E$, $y\\in F$ and $z \\in G$ the identity $((x,y), z)=(x,(y,z))$ would imply that $(x,y) =x$ and $(y,z) = z$, so that $((x,y),z)=(x,y,z)$ would mean nothing.\n\nThat's fine, I like this argument. However, in his book Calculus on Manifolds, Spivak says that the Cartesian product is associative. He says: if $A \\subset \\mathbb{R}^m$, $B\\subset \\mathbb{R}^n$ and $C \\subset \\mathbb{R}^p$ then $(A\\times B)\\times C = A \\times (B \\times C)$, and both of these are denoted simply by $A \\times B \\times C$.\n\nWell, this confuses me because Spivak is always very rigorous, so he wouldn't state something that's not true in such a way. Is Spivak or Wikipedia right? Or does Spivak's statement only work for subsets of Euclidean spaces?\n\nThanks in advance and sorry if this question is too basic.\n\n\u2022 Commutative or associative? Your post switches back and forth between the two. \u2013\u00a0Asaf Karagila Mar 22 '13 at 21:45\n\u2022 In general, we often define as commutative a construction like this \"up to isomorphism.\" That is, if the two are equivalent in the category we are discussing. So while $X\\times Y$ might not be the same topological space as $Y\\times X$, they are homeomorphic in a \"natural\" way. That \"natural\" is not just a fuzzy word, when you get to category theory, it has meaning. In the same fashion, $X\\times (Y\\times Z)$ is not the same space as $(X\\times Y)\\times Z$, but these two spaces are \"naturally\" homeomorphic. \u2013\u00a0Thomas Andrews Mar 22 '13 at 21:45\n\u2022 You are worrying about set-theoretic details that are irrelevant. 
You will never, ever have to actually care whether two sets are literally identical. (Two subsets of a fixed set is another story.) The interesting question is whether there are maps between them and what properties those maps can have. \u2013\u00a0Qiaochu Yuan Mar 22 '13 at 21:58\n\u2022 In the particular situation you quoted, there is a way to make things work: we define $\\mathbb{R}^m \\times \\mathbb{R}^n = \\mathbb{R}^{m + n}$ (which is not true under the usual set-theoretic definitions) and then we will have strictly associative products for all subsets of all euclidean spaces. \u2013\u00a0Zhen Lin Mar 22 '13 at 22:01\n\u2022 Even though these set-theoretic details are ultimately, as @Qiaochu remarks, irrelevant, noticing that they were there to be worried about speaks well of the OP in my opinion. There's nothing wrong with worrying about irrelevant details, as long as one can eventually recognize them as irrelevant (for now) and get on with what one was doing originally. \u2013\u00a0hmakholm left over Monica Mar 22 '13 at 22:22\n\nAt the most formal set-theoretic level they are not truly the same.\n\nFor example, $(\\{1\\}\\times\\{2\\})\\times\\{3\\}$ is the set whose only element is $\\langle\\langle 1,2\\rangle,3\\rangle$, and $\\{1\\}\\times(\\{2\\}\\times\\{3\\})$ is the set whose only element is $\\langle 1,\\langle 2,3\\rangle\\rangle$. 
And those two only elements are not the same.\n\nOn the other hand, it can be extremely convenient to view both of $\\langle\\langle 1,2\\rangle,3\\rangle$ and $\\langle 1,\\langle 2,3\\rangle\\rangle$ as alternative representations of the same \"ideal thing\" which is an ordered sequence of three numbers, written $\\langle 1,2,3\\rangle$.\n\nAs long as everything we deal with is either numbers or ordered sequences of numbers, it is possible to define $A\\times B$ such that if either of $A$ and $B$ is a set of numbers (rather than sequences) then we first replace it with the corresponding set of one-element sequences, and then $A\\times B$ means the set of all concatenations of a sequence from $A$ in front of a sequence from $B$. With that interpretation we do indeed get identity between $(A\\times B)\\times C$ and $A\\times (B\\times C)$.\n\nHowever, Cartesian products are useful for many other things than numbers, and it is fairly tricky and tedious to formalize exactly how the sequence-concatenating Cartesian product works in full generality such that all of the cases we'd want to use the concept in are covered. For example, what if one of the sets has members that are things that we already represent as pairs of something, but we want to ignore that implementation detail while working with them at a higher abstraction level? This might well be the case even for numbers -- one popular implementation of the real numbers as Dedekind cuts represents each real number as a pair of two sets of rationals. We certainly don't want those pairs to collapse a member of $\\mathbb R\\times \\mathbb R$ into an ordered sequence of four subsets of $\\mathbb Q$!\n\nThe nitty-gritty of getting such details to work completely formally is not generally worth the trouble. 
Here's a case where it really is so much easier simply to \"know what one is doing\" than to get formulas following mindless rules to know it, that no one (even Spivak) bothers to be 100% formal.\n\nInstead of formulating airtight formal rules about Cartesian products, one then imagines one of two things:\n\n\u2022 That the Cartesian products are always sequence-concatenating products, and that an invisible \"make a set of things into a set of length-1 sequences of things\" operator is implied in formulas where it is necessary for the math to make sense (but not in any other places).\n\n\u2022 That $(A\\times B)\\times C$ and $A\\times(B\\times C)$ are not exactly identical, but are related by an invisible bijection that is implied in formulas wherever it is necessary for the math to make sense. (This is somewhat more general than the previous solution because the implicit-bijection idea can be used to dispose of a lot of other boring problems similar to this one, whereas the sequence-concatenation idea is specific to Cartesian products).\n\nIt is tacitly expected of students that they'll eventually learn enough to know how one could in principle make the invisible fix-it-up operators visible everywhere, though actually doing so brings no real reward and so is rarely done.\n\nWe are, after all, mathematicians. Knowing that a solution exists is enough for us.\n\nThe mathematician sees the tweezers and the bag of carrots, exclaims \"Aha! A solution exists!\" and goes back to bed.\n\n\u2022 Dear lord, tweezers and a bag of carrots? I know variants of this joke, often including a thief or a hotel fire. But what in the world would \"tweezers and a bag of carrots\" be used for? :-) \u2013\u00a0Asaf Karagila Mar 22 '13 at 22:41\n\u2022 [HM45] @Asaf: Isn't that obvious? No? Then it's left as an exercise for the reader. 
\u2013\u00a0hmakholm left over Monica Mar 22 '13 at 22:51\n\u2022 Can you give an example of one such non-number type where the sequence-concatenating product is tricky to define with? \u2013\u00a0hyperum Jun 3 '19 at 10:26","date":"2021-01-17 13:48:07","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.831002414226532, \"perplexity\": 356.95242883194936}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-04\/segments\/1610703512342.19\/warc\/CC-MAIN-20210117112618-20210117142618-00426.warc.gz\"}"}
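Both workarounds in the accepted answer are easy to make concrete in code. A small Python sketch (the function name is invented for illustration) contrasts the plain nested-pair product with a sequence-concatenating product of the kind described above:

```python
def concat_prod(A, B):
    """Cartesian product that concatenates sequences; bare elements are first
    promoted to length-1 tuples (the 'invisible operator' from the answer)."""
    seq = lambda x: x if isinstance(x, tuple) else (x,)
    return {seq(a) + seq(b) for a in A for b in B}

A, B, C = {1}, {2}, {3}

# The plain set-theoretic product is not associative on the nose:
left  = {((a, b), c) for a in A for b in B for c in C}   # {((1, 2), 3)}
right = {(a, (b, c)) for a in A for b in B for c in C}   # {(1, (2, 3))}
assert left != right

# The concatenating product is strictly associative:
assert concat_prod(concat_prod(A, B), C) == concat_prod(A, concat_prod(B, C))
assert concat_prod(concat_prod(A, B), C) == {(1, 2, 3)}
```

The promotion step in `concat_prod` is exactly the Dedekind-cut pitfall the answer warns about: anything already represented as a tuple gets flattened, whether or not that was intended.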
null
null
The Mega Duck WG-108 (also known as the Cougar Boy) is a handheld game console developed and manufactured by the Hong Kong-based Welback Holdings, through its Timlex International division, and released in 1993. It was marketed under several different brands around the world, including Creatronic and Videojet, and the console's shell came in white or black plastic. It was sold for about 129 guilders in the Netherlands and for a similar price in France and Germany. In South America (mainly Brazil), the Chinese-made Creatronic version was distributed by Cougar USA, also known as the "Cougar Electronic Organization", and sold as the "Cougar Boy", although Cougar USA never released the Cougar Boy in its home country. The cartridges are very similar to those of the Watara Supervision, but somewhat narrower, with fewer contacts (36 pins, whereas Supervision cartridges have 40). Conceptually, the electronics inside the Supervision and the Mega Duck are also very similar. The positions of the volume controls, contrast controls, buttons and connectors are practically identical. However, the Supervision's LCD is larger than the Mega Duck's. The Cougar Boy came with a 4-in-1 game cartridge and stereo earphones. With an external joystick (not included), two players can play against each other simultaneously. A variant in the form of an educational laptop for children was released in Germany by Hartung as the Super Junior Computer Mega Duck and in Brazil as the Super QuiQue.

Technical specifications

The Mega Duck has a multi-board design, separating the motherboard, the LCD and the controller board into three different assemblies. The battery compartment is located in the rear case, with the contacts connected by wires and soldered to the main board.

CPU: MOS version of the Z80 (embedded in the main VLSI)
Clock speed: 4.194304 MHz
RAM: 16 KB in two 8K chips (Goldstar GM76C88LFW)
System logic: 80-pin VLSI chip
LCD: 2.7" STN dot matrix (48 (h) x 51 (w) mm). Resolution 160 × 144 at 59.732155 Hz
Gray scales: 4 levels of dark blue on a green background
Player controls: 4 directional keys, A, B, Select and Start
Other controls: power switch and contrast and volume dials
Sound: built-in speaker (8 Ω, 200 mW) and stereo headphone output
Dimensions: 155 (l) x 97 (w) x 32 (h) mm
Weight: 249 g (without batteries)
Power: four AA batteries or a 6 VDC / 300 mA AC adapter
Power consumption: 700 mW
Play time: 15 hours on a set of four AA batteries
Expansion interface: serial link for two-player games (6 pins) or an external joystick
Game medium: 36-pin ROM cartridge, 63 (l) x 54 (w) mm and 7 mm thick, 17 grams

The video display controller of the Mega Duck / Cougar Boy has a special feature: the display logic uses two "display planes" to create parallax scrolling backgrounds, as if the image were drawn on two sheets of which the upper sheet is partially transparent.

List of games

This is an (incomplete) list of Mega Duck / Cougar Boy games. Mega Duck and Cougar Boy games are labelled similarly, since the same games were marketed for both systems, although not all games were released for the Cougar Boy. The notation MDxxx is used for Mega Duck games and CBxxx for Cougar Boy games. MD002 is exactly the same game as CB002, to the point that some "Cougar Boy" games start with the Mega Duck logo. Some notation numbers are not used: the numbering goes up to 037 but skips, for example, 012 and 023.

With the exception of the pack-in game for the Mega Duck (The Brick Wall), which was developed by the manufacturer, all games were developed by Thin Chen Enterprise under the "Sachen" and "Commin" brands and were later released for the Game Boy on 4-in-1 and 8-in-1 cartridges without Nintendo's license. Although 24 cartridges exist (not counting variants or the Cougar Boy add-ons of the Super Junior Computer), another game called Tip & Tap is listed on several sites; however, it is not known whether the game was ever released, or whether it ever existed.

See also

List of video game consoles
Fourth generation of video game consoles
Eighth-generation handheld game consoles

Handheld game consoles
Fourth-generation video game consoles
Products introduced in 1993
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,803
<html dir="LTR"> <head> <meta http-equiv="Content-Type" content="text/html; charset=Windows-1252" /> <meta name="vs_targetSchema" content="http://schemas.microsoft.com/intellisense/ie5" /> <title>FieldSelectorResult Members</title> <xml> </xml> <link rel="stylesheet" type="text/css" href="MSDN.css" /> </head> <body id="bodyID" class="dtBODY"> <div id="nsbanner"> <div id="bannerrow1"> <table class="bannerparthead" cellspacing="0"> <tr id="hdr"> <td class="runninghead">Apache Lucene.Net 2.4.0 Class Library API</td> <td class="product"> </td> </tr> </table> </div> <div id="TitleRow"> <h1 class="dtH1">FieldSelectorResult Members </h1> </div> </div> <div id="nstext"> <p> <a href="Lucene.Net.Documents.FieldSelectorResult.html">FieldSelectorResult overview</a> </p> <h4 class="dtH4">Public Static Fields</h4> <div class="tablediv"> <table class="dtTABLE" cellspacing="0"> <tr VALIGN="top"><td width="50%"><img src="pubfield.gif"></img><img src="static.gif" /><a href="Lucene.Net.Documents.FieldSelectorResult.LAZY_LOAD.html">LAZY_LOAD</a></td><td width="50%"> Lazily load this {@link Field}. This means the {@link Field} is valid, but it may not actually contain its data until invoked. {@link Document#GetField(String)} SHOULD NOT BE USED. {@link Document#GetFieldable(String)} is safe to use and should return a valid instance of a {@link Fieldable}. {@link Document#Add(Fieldable)} should be called by the Reader. </td></tr> <tr VALIGN="top"><td width="50%"><img src="pubfield.gif"></img><img src="static.gif" /><a href="Lucene.Net.Documents.FieldSelectorResult.LOAD.html">LOAD</a></td><td width="50%"> Load this {@link Field} every time the {@link Document} is loaded, reading in the data as it is encountered. {@link Document#GetField(String)} and {@link Document#GetFieldable(String)} should not return null. {@link Document#Add(Fieldable)} should be called by the Reader. 
</td></tr> <tr VALIGN="top"><td width="50%"><img src="pubfield.gif"></img><img src="static.gif" /><a href="Lucene.Net.Documents.FieldSelectorResult.LOAD_AND_BREAK.html">LOAD_AND_BREAK</a></td><td width="50%"> Load this field as in the {@link #LOAD} case, but immediately return from {@link Field} loading for the {@link Document}. Thus, the Document may not have its complete set of Fields. {@link Document#GetField(String)} and {@link Document#GetFieldable(String)} should both be valid for this {@link Field} {@link Document#Add(Fieldable)} should be called by the Reader. </td></tr> <tr VALIGN="top"><td width="50%"><img src="pubfield.gif"></img><img src="static.gif" /><a href="Lucene.Net.Documents.FieldSelectorResult.LOAD_FOR_MERGE.html">LOAD_FOR_MERGE</a></td><td width="50%"> Behaves much like {@link #LOAD} but does not uncompress any compressed data. This is used for internal purposes. {@link Document#GetField(String)} and {@link Document#GetFieldable(String)} should not return null. {@link Document#Add(Fieldable)} should be called by the Reader. </td></tr> <tr VALIGN="top"><td width="50%"><img src="pubfield.gif"></img><img src="static.gif" /><a href="Lucene.Net.Documents.FieldSelectorResult.NO_LOAD.html">NO_LOAD</a></td><td width="50%"> Do not load the {@link Field}. {@link Document#GetField(String)} and {@link Document#GetFieldable(String)} should return null. {@link Document#Add(Fieldable)} is not called. {@link Document#Add(Fieldable)} should not be called by the Reader. </td></tr> <tr VALIGN="top"><td width="50%"><img src="pubfield.gif"></img><img src="static.gif" /><a href="Lucene.Net.Documents.FieldSelectorResult.SIZE.html">SIZE</a></td><td width="50%">Expert: Load the size of this {@link Field} rather than its value. Size is measured as number of bytes required to store the field == bytes for a binary or any compressed value, and 2*chars for a String value. 
The size is stored as a binary value, represented as an int in a byte[], with the higher order byte first in [0] </td></tr> <tr VALIGN="top"><td width="50%"><img src="pubfield.gif"></img><img src="static.gif" /><a href="Lucene.Net.Documents.FieldSelectorResult.SIZE_AND_BREAK.html">SIZE_AND_BREAK</a></td><td width="50%">Expert: Like {@link #SIZE} but immediately break from the field loading loop, i.e., stop loading further fields, after the size is loaded </td></tr></table> </div> <h4 class="dtH4">Public Instance Methods</h4> <div class="tablediv"> <table class="dtTABLE" cellspacing="0"> <tr VALIGN="top"><td width="50%"><img src="pubmethod.gif"></img><a href="Lucene.Net.Documents.FieldSelectorResult.Equals.html">Equals</a></td><td width="50%"> </td></tr> <tr VALIGN="top"><td width="50%"><img src="pubmethod.gif"></img><a href="Lucene.Net.Documents.FieldSelectorResult.GetHashCode.html">GetHashCode</a></td><td width="50%"> </td></tr> <tr VALIGN="top"><td width="50%"><img src="pubmethod.gif"></img><a href="ms-help://MS.NETFrameworkSDKv1.1/cpref/html/frlrfSystemObjectClassGetTypeTopic.htm">GetType</a> (inherited from <b>Object</b>)</td><td width="50%">Gets the <a href="ms-help://MS.NETFrameworkSDKv1.1/cpref/html/frlrfSystemTypeClassTopic.htm">Type</a> of the current instance.</td></tr> <tr VALIGN="top"><td width="50%"><img src="pubmethod.gif"></img><a href="ms-help://MS.NETFrameworkSDKv1.1/cpref/html/frlrfSystemObjectClassToStringTopic.htm">ToString</a> (inherited from <b>Object</b>)</td><td width="50%">Returns a <a href="ms-help://MS.NETFrameworkSDKv1.1/cpref/html/frlrfSystemStringClassTopic.htm">String</a> that represents the current <a href="ms-help://MS.NETFrameworkSDKv1.1/cpref/html/frlrfSystemObjectClassTopic.htm">Object</a>.</td></tr></table> </div> <h4 class="dtH4">See Also</h4> <p> <a href="Lucene.Net.Documents.FieldSelectorResult.html">FieldSelectorResult Class</a> | <a href="Lucene.Net.Documents.html">Lucene.Net.Documents Namespace</a></p> <object 
type="application/x-oleobject" classid="clsid:1e2a7bd0-dab9-11d0-b93a-00c04fc99f9e" viewastext="true" style="display: none;"> <param name="Keyword" value="FieldSelectorResult class"> </param> <param name="Keyword" value="Lucene.Net.Documents.FieldSelectorResult class"> </param> <param name="Keyword" value="FieldSelectorResult class, all members"> </param> </object> <hr /> <div id="footer"> <p> </p> <p>Generated from assembly Lucene.Net [2.4.0.2]</p> </div> </div> </body> </html>
{ "redpajama_set_name": "RedPajamaGithub" }
2,025
Q: How to store comments (e.g. reasons) for filesystem permissions? I am responsible for managing filesystem permissions on a bunch of SMB/CIFS shares in a larger company. NTFS allows me to add entries to the access control lists (ACLs), but I cannot attach background information. All permissions have a reason. For example, my superior ordered me to give somebody from another department access to a folder. Currently, I am logging this kind of "access justification" in a Word document. Once in a while, I go through this document and compare it to the current ACLs, folder by folder. Is there a better way to do this? There should be a tool with a nice GUI where I can store the target ACL entries along with a link, ticket number or comment. The tool should be able to assist with comparing the target state with the real ACLs on the share(s). Bonus: it can run pre-defined test cases.
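Lacking a ready-made tool, one pragmatic approach is to keep the target state (with its justifications) in a machine-readable file and let a script do the folder-by-folder comparison. A minimal Python sketch; the share paths, accounts and ticket numbers are invented, and on a real system the `actual` dict would be populated from icacls output or the pywin32 security APIs rather than hard-coded:

```python
# Target state: each ACL entry carries its justification (ticket/reason).
target = {
    (r"\\server\share\finance", "DOMAIN\\jdoe"): {
        "rights": "Read", "ticket": "REQ-1042", "reason": "ordered by supervisor",
    },
    (r"\\server\share\finance", "DOMAIN\\finance-team"): {
        "rights": "Modify", "ticket": "REQ-0007", "reason": "owning department",
    },
}

# Actual state, as it would be collected from the file system.
actual = {
    (r"\\server\share\finance", "DOMAIN\\jdoe"): "Read",
    (r"\\server\share\finance", "DOMAIN\\old-intern"): "Modify",
}

def compare(target, actual):
    """Return (missing, unexpected, mismatched) ACL entries."""
    missing    = [k for k in target if k not in actual]
    unexpected = [k for k in actual if k not in target]
    mismatched = [k for k in target
                  if k in actual and actual[k] != target[k]["rights"]]
    return missing, unexpected, mismatched

missing, unexpected, mismatched = compare(target, actual)
print("missing:   ", missing)      # documented entry not applied yet
print("unexpected:", unexpected)   # access with no recorded justification
print("mismatched:", mismatched)
```

Each reported "unexpected" entry is either an ACL to remove or a justification to record, which also covers the "pre-defined test cases" wish: the asserted target simply is the test suite.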
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,335
Patara (Lycian 𐊓𐊗𐊗𐊀𐊕𐊀, Pttara), later renamed Arsinoe (Greek Arsinón), was a flourishing city and trading centre on the southwestern Mediterranean coast of Lycia, near the present-day small Turkish town of Gelemiş in Antalya Province. Patara is the birthplace of St. Nicholas, who spent most of his life in nearby Myra (Demre).

History

Patara, with its fine natural harbour, was according to legend founded by Patarus, a son of Apollo. The city stood 60 stadia southwest of the mouth of the river Xanthos (Eşen Çayı). In antiquity it was known for its temple and oracle of Apollo, the second most famous after Delphi. The god Apollo is often mentioned with the epithet Patareus. Herodotus says that the oracle operated only during a certain part of the year. His claim is confirmed by Servius, who says that it operated only during the six winter months. It appears that Dorians from Crete settled in Patara and that the cult of Apollo was certainly Doric. Ancient writers mention Patara only as one of the principal Lycian cities. It was the main port of Lycia and one of the chief cities of the Lycian League, with three electoral votes. Together with the other Lycian cities, it surrendered to Alexander the Great in 333 BC. During the Wars of the Diadochi (322–275 BC) the city was occupied first by Antigonus I Monophthalmus and then by Demetrius I of Macedon, after which it passed to the Ptolemies. Strabo reports that Ptolemy Philadelphus of Egypt enlarged the city and renamed it Arsinoe after his wife and sister Arsinoe II of Egypt. The city nevertheless kept its old name. Patara was taken by Antiochus III in 196 BC. Lycia was later given to Rhodes as a reward, and as an ally of Rome it received its freedom in 167 BC. In 88 BC Patara was besieged by the Pontic king Mithridates VI, and during the campaign against Mark Antony and Augustus it was occupied by Marcus Junius Brutus and Gaius Cassius Longinus. The city did not suffer a massacre like the one that befell nearby Xanthos. Patara was formally annexed to Pamphylia in 43 AD and incorporated into the Roman Empire.

Patara was Christianized very early, and the names of some of its bishops are known. In the New Testament it is mentioned as the city where Paul of Tarsus (the Apostle Paul) landed after sailing from Rhodes and from which he sailed on to Phoenicia. Nicholas of Myra, better known as Saint Nicholas, was born in Patara around 280.

Ruins

The ruins of the city lie on the Mediterranean coast a little east of the river Xanthus. Among them, on the northern side of a small hill, a Roman theatre, a temple and a deep circular pit of unusual appearance, which may have housed the seat of the oracle, have been discovered and excavated. The city walls enclosed a fairly large area, and their course can still be traced. The ground plan of the castle that commanded the harbour and the positions of several towers of the city wall are also clearly visible. Outside the walls there are numerous empty stone sarcophagi, most of them bearing funerary inscriptions. Within the city walls there are temples, altars, statue bases and fragments of statues. The old harbour is still visible, but it is marshy, partly silted up and overgrown with scrub. The theatre was built during the reign of Antoninus Pius (reigned 138–161 AD). Its diameter is 80.5 m and it has about 30 rows of seats. The city also contains the ruins of baths that were built during the reign of the emperor Vespasian (reigned 69–79 AD).

History of excavations

In 1993 the Roman milestone known as the Stadiasmus Patarensis was excavated in Patara: a commemorative pillar bearing an inscription in Greek dedicated to the emperor Claudius and an official record that the roads in the province of Lycia and Pamphylia were built by the governor Quintus Veranius Nepos. The pillar lists the names of cities and the distances to them, so it also serves as a kind of public itinerary. The milestone is displayed in the garden of the Antalya Museum. The ruins are excavated every year during the summer months by teams of Turkish archaeologists. By the end of 2007 all the sand had been removed from the theatre and several other buildings. Along the main street, columns have been partly restored and re-erected. The excavations have shown that the buildings are in very good condition.

Tourism

Besides its ruins, Patara also has a famous 18 km long sandy beach, which forms part of the Turkish Riviera.

References

Lycia
Archaeological sites in Turkey
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,045
Gaetano Castrovilli is an Italian footballer who plays as a midfielder for the club Fiorentina and for the Italy national team. Winner of the 2020 European Championship.

Club career

Castrovilli is a product of the Minervino and Bari youth academies. On 22 May 2015, in a match against Spezia, he made his debut in the Italian Serie B. In 2015 Gaetano moved on loan to the youth team of Fiorentina. In 2017 the club bought out the player's transfer and immediately loaned him to Cremonese. On 3 September, in a match against Avellino 1912, he made his debut for his new team. In the same game Gaetano scored his first goal for Cremonese. In the summer of 2019 Castrovilli returned to Fiorentina. On 24 August, in a match against Napoli, he made his debut in the Italian Serie A. On 29 September, in a game against Milan, Gaetano scored his first goal for Fiorentina.

International career

On 15 September 2019, in a UEFA Euro 2020 qualifying match against Bosnia and Herzegovina, Castrovilli made his debut for the Italy national team. In 2021 Castrovilli won the European Championship; at the tournament he played in the match against Wales.

Honours

Italy national team
European champion: 2020

Statistics

Club

Appearances for the Italy national team

State awards

Knight of the Order of Merit of the Italian Republic (16 July 2021), in recognition of the sporting values and national spirit that inspired the Italian team to victory at the 2020 European Football Championship

Notes

External links

Profile on the Fiorentina website

Italian footballers
Italy national under-21 football team players
Italy international footballers
UEFA European Championship winning players
Bari players
Fiorentina players
Cremonese players
{ "redpajama_set_name": "RedPajamaWikipedia" }
733
\section{Introduction} \label{sect:Intro} Various observations indicate that stars at the early stages of their evolution (Young Stellar Objects, YSOs~\citep{adams87}) have magnetic fields. Measurements of the Zeeman splitting of spectral lines show that classical T Tauri stars (Class II YSOs) have surface magnetic fields with strengths of $1-3$~kG~\citep{krull07}. Polarization maps of the thermal dust emission indicate that the accretion disks of young stars have a large-scale magnetic field~\citep{li16, li18}. The angular resolution is not yet sufficient to resolve the magnetic field geometry in detail. The theory of the fossil magnetic field predicts that the magnetic field of the accretion disks of young stars originates from the magnetic field of the parent molecular cloud (see for review~\cite{dudorov95, fmft}). The dynamo mechanism, driven by turbulent cyclonic motions and differential rotation in the conducting plasma, can also generate magnetic field in the disk (see, for example, the reviews of \cite{brandenburg05}, \cite{blackman12}). Some aspects of the dynamo action in accretion disks can be found in \cite{brandenburg95}, \cite{kitchatinov10}, \cite{gressel15}, \cite{moss16}. In this work, we use the MHD model of the accretion disks of young stars with fossil magnetic field developed by~\cite{fmfadys}. \cite{fmfadys} have shown that the magnetic field geometry varies through the disk. The magnetic field is quasi-azimuthal, $B_{\varphi}\sim B_z$ (in cylindrical coordinates), in the regions of thermal ionization, where the magnetic field is frozen into the gas. Throughout most of the disk, Ohmic diffusion, magnetic ambipolar diffusion and the Hall effect operate (see, for example, the review by~\cite{turner14}). The magnetic field is quasi-poloidal, $(B_r,\,B_{\varphi})\ll B_z$, inside the regions of low ionization fraction (``dead'' zones,~\cite{gammie96}), and quasi-azimuthal or quasi-radial, $B_r\sim B_z$, in the outer regions, depending on the ionization parameters.
The magnetic field has quasi-radial geometry near the borders of the ``dead'' zones~\citep{kh17}. The intensity of the azimuthal magnetic field $B_{\varphi}$ is amplified over a time scale of the order of the rotation period $P_{\rm{orb}}$. The intensity of $B_r$ is amplified over an accretion time scale $t_{\rm{acc}}$. In accretion disks $P_{\rm{orb}}\ll t_{\rm{acc}}$; therefore, $B_r\ll B_{\varphi}$ in the inner region (see \cite{fmfadys}). The question is which mechanism hinders the significant growth of the azimuthal magnetic field in the region of thermal ionization. \cite{fmfadys} have assumed that the magnetic buoyancy can be such a mechanism. Magnetic flux tubes (MFTs) form as a result of Parker instability of a gas layer with strong horizontal magnetic field (see~\cite{parker_book}). A number of numerical simulations have confirmed the development of the Parker instability and MFT formation~\citep{cattaneo88, matthews95, wissink00, fan01, vasil08}. Once formed, the MFTs rise from the disk under the action of the buoyancy force. This process leads to the escape of the excess magnetic flux from the regions of its generation. \cite{kh17} and \cite{kh17b} incorporated the magnetic buoyancy into the induction equation and showed that the buoyancy can be treated as an additional mechanism of the magnetic flux escape from the disks. The MFT dynamics in accretion disks has usually been investigated in the frame of the slender flux tube approximation~\citep{sakimoto89, torkelsson93, chakra94, schram96, bmfad}. The dynamics is determined by the buoyancy force, the drag forces, the thermal structure of the disk, the efficiency of the heat exchange with the ambient gas, and the relation between the centrifugal and magnetic tension forces. \cite{bmfad} considered the dynamics of slender adiabatic MFTs in the accretion disks of young stars. In this paper, we extend their approach by including the radiative heat exchange in the model equations.
In the frame of the slender flux tube approximation, we investigate the MFT dynamics in the accretion disks of T~Tauri stars. The initial parameters take into account the disk structure determined using our MHD model of the accretion disks~\citep{fmfadys, kh17}. In contrast to previous studies, we take into account the turbulent drag inside the disk. The paper is organized as follows. In Section~\ref{Sec:model}, we present our model of the magnetic flux tube dynamics. The model of the accretion disk is briefly discussed in Section~\ref{Sec:disk}. Section~\ref{Sec:res} presents the results of numerical simulations. The typical dynamics of the MFT is considered in Section~\ref{Sec:fiduc}. The dependence on the model parameters is investigated in Section~\ref{Sec:param}. We make analytical estimates of the terminal MFT velocities in Section~\ref{Sec:v_b}. We discuss the observational appearance of the MFT dynamics in Section~\ref{Sec:outflows}. Section~\ref{Sec:discussion} summarizes and discusses our findings. \section{Model of magnetic flux tube dynamics} \label{Sec:model} We investigate the MFT dynamics inside the accretion disks using the cylindrical coordinates $(r,\, \varphi, \,z)$. The $z$-axis coincides with the disk rotation axis. The magnetic field inside the disk has components ${\bf B}=(B_r,\,B_{\varphi},\,B_z)$. We assume that the toroidal magnetic field ${\bf B}_{\rm{t}}=(0,\,B_{\varphi},\,0)$ splits into the magnetic flux tubes due to Parker instability~\citep{parker_book}. The MFT has the form of a torus around the disk rotation axis with major radius $a_{\rm{maj}}=r$ and minor radius $a\ll a_{\rm{maj}}$. The MFT is azimuthally symmetric. Therefore, we can investigate the motion of a small part of the torus, i.e., a cylinder of unit length. This cylindrical MFT has radius $a$, gas pressure $P_{\rm{g}}$, density $\rho$, temperature $T$, and magnetic field strength $B_{\varphi} = B$.
The accretion disk is characterized by pressure $P_{\rm{e}}$, density $\rho_{\rm{e}}$, and temperature $T_{\rm{e}}$. We model the dynamics of the MFT following~\cite{dk85} and use the system of equations \begin{eqnarray} \rho\frac{d{\bf v}}{dt} &=& \left(\rho - \rho_{\rm{e}}\right){\bf g} + \rho{\bf f}_d\left({\bf v},\, \rho,\, T,\, a,\, \rho_e\right),\label{Eq:motion}\\ \frac{d{\bf r}}{dt} &=& {\bf v},\label{Eq:velocity}\\ M_{\rm{l}} &=& \rho\pi a^2,\label{Eq:mass}\\ \Phi &=& B\pi a^2,\label{Eq:mflux}\\ dQ &=& dU + P_{\rm{e}}dV,\label{Eq:dQ}\\ P_{\rm{g}} + \frac{B^2}{8\pi} &=& P_{\rm{e}},\label{Eq:pbal}\\ \frac{dP_{\rm{e}}}{dz} &=& -\rho_{\rm{e}}g_z,\label{Eq:Disk}\\ P_{\rm{g}} &=& \frac{R_{\rm{g}}}{\mu}\rho T,\label{Eq:eos}\\ U &=& \frac{P_{\rm{g}}}{\rho(\gamma - 1)} + \frac{B^2}{8\pi\rho}.\label{Eq:eos_kalor} \end{eqnarray} Equation (\ref{Eq:motion}) is the equation of motion (where ${\bf f}_d$ is the drag force per unit mass), (\ref{Eq:velocity}, \ref{Eq:mass}, \ref{Eq:mflux}) are the definitions of the velocity ${\bf v}$, the mass $M_{\rm{l}}$ per unit length and the magnetic flux $\Phi$ of the MFT, (\ref{Eq:dQ}) is the first law of thermodynamics ($Q$ is the heat per unit mass, $V=1/\rho$ is the specific volume), (\ref{Eq:pbal}) is the balance between the internal pressure ($P=P_{\rm{g}} + \frac{B^2}{8\pi}$) and the external pressure ($P_{\rm{e}}$), (\ref{Eq:Disk}) is the equation of hydrostatic equilibrium of the disk, (\ref{Eq:eos}) is the equation of state ($R_{\rm{g}}$ is the universal gas constant, $\mu=2.3$ is the molecular weight), and (\ref{Eq:eos_kalor}) is the energy per unit mass, where $\gamma$ is the adiabatic index. The first term on the right-hand side of Equation~(\ref{Eq:motion}), \begin{equation} {\bf F}_{\rm{b}} = \left(\rho - \rho_{\rm{e}}\right){\bf g}, \end{equation} is the buoyancy force, that is, the difference between the gravity and Archimedes forces; ${\bf g}$ is the gravitational acceleration.
We study the one-dimensional problem of the MFT motion in the $z$-direction, then ${\bf v} = (0,\,0,\, v)$, ${\bf r} = (0,\,0,\, z)$, ${\bf g} = (0,\,0,\,g_z)$. Equality (\ref{Eq:pbal}) shows that the MFT is lighter than the ambient gas, i.e. $\rho<\rho_{\rm{e}}$. Therefore, the buoyancy force ${\bf F}_{\rm{b}}=(0,\,0,\,F_{\rm{b}})$ causes the MFT to move upward. The drag force $\rho{\bf f}_{\rm{d}}=(0,\,0,\,\rho f_{\rm{d}})$ counteracts the motion. The aerodynamic drag force is (see~\cite{parker_book}) \begin{equation} f_{\rm{d}} = -\frac{\rho_{\rm{e}}v^2}{2}\frac{C_{\rm{d}}}{\rho\pi a^2},\label{Eq:fa} \end{equation} where $C_{\rm{d}}\sim 1$ is the drag coefficient. The turbulent drag force is~\citep{pneuman72} \begin{equation} f_{\rm{d}} = -\frac{\pi\rho_{\rm{e}}\left(\nu_{\rm{t}}av^3\right)^{1/2}}{\rho\pi a^2},\label{Eq:ft} \end{equation} where $\nu_{\rm{t}}$ is the turbulent viscosity. The latter is estimated as~\citep{ss73} \begin{equation} \nu_{\rm{t}} = \alpha v_{\rm{s}} H,\label{Eq:nu_t} \end{equation} where $\alpha$ is a non-dimensional constant characterizing the turbulence efficiency, \begin{equation} v_{\rm{s}} = \sqrt{\frac{R_{\rm{g}}T_{\rm{e}}}{\mu}} \end{equation} is the isothermal sound speed, and $H$ is the height scale of the disk. The turbulent drag force (Eq.~\ref{Eq:ft}) is taken into account inside the disk. The aerodynamic drag force (Eq.~\ref{Eq:fa}) is considered above the disk. The disk is assumed to be in hydrostatic equilibrium in the $z$-direction. The vertical component of the stellar gravity is \begin{equation} g_z = -z\frac{GM_{\star}}{r^3}\left(1 + \frac{z^2}{r^2}\right)^{-3/2}, \end{equation} where $M_{\star}$ is the mass of the star.
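For illustration, the two drag laws (Eqs.~\ref{Eq:fa} and \ref{Eq:ft}) and the turbulent viscosity estimate can be written as a short numerical sketch. The Python functions below are ours, not part of the paper's code; they assume cgs units and one-dimensional vertical motion, with the sign chosen so that the drag always opposes the motion:

```python
import numpy as np

# Sketch of the drag accelerations (force per unit MFT mass) in cgs units.
# Names are illustrative; v is the vertical tube velocity, rho / rho_e the
# tube / ambient densities, a the tube radius.

R_G = 8.314e7  # universal gas constant, erg mol^-1 K^-1

def aerodynamic_drag(v, rho, rho_e, a, C_d=1.0):
    """Aerodynamic drag, used above the disk."""
    return -np.sign(v) * 0.5 * rho_e * v**2 * C_d / (rho * np.pi * a**2)

def turbulent_drag(v, rho, rho_e, a, nu_t):
    """Turbulent drag (Pneuman & Raadu form), used inside the disk."""
    return -np.sign(v) * np.pi * rho_e * np.sqrt(nu_t * a * abs(v)**3) \
        / (rho * np.pi * a**2)

def nu_turbulent(alpha, T_e, mu, H):
    """Shakura-Sunyaev turbulent viscosity, nu_t = alpha * v_s * H."""
    v_s = np.sqrt(R_G * T_e / mu)
    return alpha * v_s * H
```

Note the different velocity scalings: the aerodynamic drag grows as $v^2$, while the turbulent drag grows only as $v^{3/2}$.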
The system of equations (\ref{Eq:motion}-\ref{Eq:eos_kalor}) can be reduced to \begin{eqnarray} \frac{dv}{dt} &=& \left(1 - \frac{\rho_{\rm{e}}}{\rho}\right)g_z + f_d,\label{Eq:v}\\ \frac{dz}{dt} &=& v,\label{Eq:z}\\ a &=& a_0\left(\frac{\rho}{\rho_0}\right)^{-1/2},\label{Eq:a}\\ B &=& B_0\frac{\rho}{\rho_0},\label{Eq:B}\\ \frac{d\rho}{dt} &=& \frac{h_{\rm{c}}P_T + U_T\rho_e g_z v}{P_T\left(U_\rho - \dfrac{P_{\rm{e}}}{\rho^2}\right) - U_T\left(P_\rho + C_{\rm{m}}\rho\right)},\label{Eq:rho}\\ \frac{dT}{dt} &=& \frac{\rho_e g_z v\left(U_\rho - \dfrac{P_{\rm{e}}}{\rho^2}\right) + h_{\rm{c}}\left(P_\rho + C_{\rm{m}}\rho\right)}{U_T\left(P_\rho + C_{\rm{m}}\rho\right) - P_T\left(U_\rho - \dfrac{P_{\rm{e}}}{\rho^2}\right)},\label{Eq:T}\\ \rho_{\rm{e}} &=& \rho_{\rm{m}}e^{-\frac{z^2}{2H^2}},\label{Eq:rhoe} \end{eqnarray} where $a_0$, $\rho_0$ and $B_0$ are the initial radius, density and magnetic field strength of the MFT, $\left(...\right)_T$ means the derivative with respect to $T$ (at constant $\rho$), $\left(...\right)_{\rho}$ means the derivative with respect to $\rho$ (at constant $T$), $h_{\rm{c}}$ is the heating power per unit mass, $C_{\rm{m}}=\dfrac{B_0^2}{4\pi\rho_0^2}$, $H=v_{\rm{s}}/\Omega$ is the height scale of the disk, $v_{\rm{s}}=\sqrt{R_{\rm{g}}T_{\rm{e}}/\mu}$ is the isothermal sound speed, and \begin{equation} \Omega = \sqrt{\frac{GM_{\star}}{r^3}} \end{equation} is the Keplerian angular velocity. Equations~(\ref{Eq:a}) and (\ref{Eq:B}) follow from mass and magnetic flux conservation (Eqs. (\ref{Eq:mass}) and (\ref{Eq:mflux})). Equations (\ref{Eq:rho}) and (\ref{Eq:T}) are derived from (\ref{Eq:dQ}, \ref{Eq:pbal}, \ref{Eq:Disk}). The heating power per unit mass is defined as $h_{\rm{c}}=dQ/dt$. In the diffusion approximation \begin{equation} h_{\rm{c}} \simeq -\frac{8}{3\kappa_{\rm{R}}\rho^2}\frac{\sigma_{\rm{R}}T^4 - \sigma_{\rm{R}}T_{\rm{e}}^4}{a^2}, \end{equation} where $\kappa_{\rm{R}}$ is the Rosseland mean opacity and $\sigma_{\rm{R}}$ is the Stefan-Boltzmann constant.
We determine $\kappa_{\rm{R}}$ as a power-law function of gas density and temperature following \citet{fmfadys}. Formula (\ref{Eq:rhoe}) is the solution of the hydrostatic equilibrium Equation (\ref{Eq:Disk}) in the isothermal case, $T_{\rm{e}}=\mathrm{const}$. We determine the surface of the disk as the locus $z=3\,H$. Above the surface, the temperature is constant, $T_{\rm{e}}$, and the density decreases with height according to Equation (\ref{Eq:rhoe}) down to the point where $\rho_{\rm{e}}$ becomes equal to the density of the interstellar medium, $\rho_{\rm{ism}}=3.8\times 10^{-20}\,\rm{g}\,\rm{cm}^{-3}$. \section{Model of the disk} \label{Sec:disk} We investigate the dynamics of the MFT in the accretion disk of a T~Tauri star. We use our MHD model of the accretion disks \citep{fmfadys} to calculate the structure and the magnetic field of the disks. Let us briefly describe the features of the model (see for details \cite{fmfadys} and \cite{kh17}). The model is an MHD generalization of the \cite{ss73} model. We solve the MHD equations in the approximation of a thin stationary disk. It is assumed that the turbulence is the main mechanism of the angular momentum transport. The turbulent viscosity is estimated according to (\ref{Eq:nu_t}). The model has two main parameters: $\alpha$ and the accretion rate $\dot{M}$. The temperature of the disk is calculated from the balance between viscous heating and radiative cooling. We use low-temperature opacities from \cite{semenov03}. The heating by stellar radiation and cosmic rays in the outer parts of the disk is also taken into account. The magnetic field components are calculated from the induction equation taking into account Ohmic diffusion, magnetic ambipolar diffusion, magnetic buoyancy and the Hall effect. The ionization fraction is determined from the equation of collisional ionization (see~\cite{spitzer_book}) considering the ionization by cosmic rays, X-rays and radioactive decay, radiative recombinations and the recombinations on the dust grains.
Additionally, the evaporation of the dust grains and thermal ionization are included in the model. The outer boundary of the disk, $r_{\rm{out}}$, is determined as the contact boundary, where the disk pressure equals the pressure of the external medium. In Figure~\ref{Fig0} we plot the radial profiles of the midplane temperature, surface density, midplane ionization fraction, vertical magnetic field strength and midplane plasma beta for the disk with $\alpha=0.01$, $\dot{M} = 10^{-7}\,M_{\odot}\,\mathrm{yr}^{-1}$. The stellar mass is $M=1\,M_{\odot}$. In the simulation, the cosmic-ray ionization rate is $\xi_0=10^{-17}\,\rm{s}^{-1}$, the attenuation length is $R_{\rm{CR}}=100\,\rm{g}\,\rm{cm}^{-2}$, the stellar X-ray luminosity is $L_{\rm{XR}}=10^{30}\,\rm{erg}\,\rm{s}^{-1}$, and the mean dust grain size is $a_{\rm{d}}=0.1\,\mu\rm{m}$. \begin{figure}[t] \centering \includegraphics[width=14.0cm, angle=0]{fig1.eps} \caption{The structure of the disk with $\alpha=0.01$, $\dot{M} = 10^{-7}\,M_{\odot}\,\mathrm{yr}^{-1}$ around a star with $M=1\,M_{\odot}$. Top left: surface density, top right: midplane temperature, bottom left: midplane ionization fraction, bottom right: vertical magnetic field strength (left $y$-axis, black line) and plasma beta (right $y$-axis, grey line). Gray dashed lines with numbers depict typical slopes.} \label{Fig0} \end{figure} Figure~\ref{Fig0} shows that the surface density and temperature gradually decrease with distance from the star. The temperature is $\sim 5000$~K near the inner edge of the disk and $15$~K near its outer edge, $r_{\rm{out}}=220$~au. In our model, the radial dependences of all physical quantities are power-law functions of the distance. The indexes of the power laws depend only on the parameters of the opacity. The indexes change throughout the disk as the opacity does. That is why the dependences $T(r)$ and $\Sigma(r)$ appear as piecewise-linear functions on a logarithmic scale in Figure~\ref{Fig0}.
The typical slope of the temperature profile is $p_{\rm{T}}=-0.9$ in the range $1-100$~au, and the typical slope of the surface density profile is $p_{\rm{\Sigma}}=-0.7$. The latter is consistent with observations indicating that $p_{\rm{\Sigma}}\in[0.4,\,1]$~\citep{andrews09}. The radial profile of $B_z$ is more complex. In the innermost part of the disk, $r<0.8$~au, the ionization fraction is high, $x>10^{-10}$, and the magnetic field is frozen in the gas. The radial profile of $B_z$ follows the surface density profile in this region, and $B_z\simeq 170$~G at the inner edge of the disk, $r_{\rm{in}}=0.027$~au. The ``dead'' zone is situated at $r>0.8$~au, where the ionization fraction is very low. Magnetic ambipolar diffusion reduces the magnetic field strength by 1-2 orders of magnitude in this region, so that typically $B_z(3\,\mathrm{au})=0.1$~G. Near the outer edge of the disk, the magnetic field is frozen in the gas and its intensity is $4\times 10^{-3}$~G. The plasma beta is not constant in the disk. It is $\sim 100$ near the inner edge of the disk, $\sim 10^{4}-10^5$ inside the ``dead'' zone and $\sim 10$ near the outer edge of the disk. \section{Results} \label{Sec:res} In this Section, we present the results of simulations of the MFT dynamics in the accretion disks. The system of dynamical equations (\ref{Eq:v}, \ref{Eq:z}, \ref{Eq:rho}, \ref{Eq:T}) is solved with the help of the explicit Runge-Kutta method of the fourth order with automatic selection of the time step and relative accuracy $\varepsilon=10^{-6}$. At each time step, the radius and magnetic field strength of the MFT are calculated with the help of relations (\ref{Eq:a}-\ref{Eq:B}). We assume that the MFT forms inside the disk at heights $z_0=[0.5,\,1,\,1.5]\,H$, in thermal equilibrium with the surrounding gas, $T_0=T_e$, and with velocity $u_0=0$.
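As an illustration of this numerical set-up, the sketch below integrates the simplified vertical motion of a single tube with a classical fixed-step RK4 scheme (standing in for the adaptive fourth-order method used in the paper). The closure adopted here, a tube that keeps its initial density contrast with the ambient gas instead of solving the full thermal equations, and all function names are our assumptions, so the velocities above the disk are only indicative:

```python
import numpy as np

# Fixed-step RK4 integration of dz/dt = v, dv/dt = buoyancy + drag, with
# turbulent drag inside the disk (z <= 3H) and aerodynamic drag above it.
# Crude closure: rho / rho_e is held at its initial value.  cgs units.
R_G, MU, GRAV = 8.314e7, 2.3, 6.674e-8
M_SUN, AU = 1.989e33, 1.496e13

def rho_initial(P_e, T0, beta0):
    """Initial tube density from pressure balance, expressed through beta_0."""
    return P_e / ((R_G * T0 / MU) * (1.0 + 1.0 / beta0))

def rise(r, M_star, rho_m, T_e, alpha=0.01, a0_frac=0.1, beta0=1.0,
         z0_frac=0.5, C_d=1.0, n_steps=20000):
    Omega = np.sqrt(GRAV * M_star / r**3)
    v_s = np.sqrt(R_G * T_e / MU)
    H = v_s / Omega
    nu_t = alpha * v_s * H
    z0, a0 = z0_frac * H, a0_frac * H
    rho_e0 = rho_m * np.exp(-0.5 * (z0 / H)**2)
    rho0 = rho_initial(rho_e0 * v_s**2, T_e, beta0)

    def rhs(y):
        z, v = y
        g_z = -z * GRAV * M_star / r**3 * (1.0 + (z / r)**2)**-1.5
        rho_e = rho_m * np.exp(-0.5 * (z / H)**2)
        rho = rho0 * rho_e / rho_e0               # fixed density contrast
        a = a0 * np.sqrt(rho0 / rho)              # mass conservation
        if z <= 3.0 * H:                          # inside the disk
            f_d = -np.sign(v) * rho_e * np.sqrt(nu_t * a * abs(v)**3) / (rho * a**2)
        else:                                     # above the disk
            f_d = -np.sign(v) * 0.5 * rho_e * v**2 * C_d / (rho * np.pi * a**2)
        return np.array([v, (1.0 - rho_e / rho) * g_z + f_d])

    y, dt = np.array([z0, 0.0]), (10.0 / Omega) / n_steps
    for _ in range(n_steps):
        k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
        y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        if y[0] >= 4.0 * H:                       # slender approximation breaks
            break
    return y, H
```

With the fiducial parameters of the paper (r = 0.027 au, solar-mass star, midplane density 2e-6 g/cm^3, T_e = 4830 K), the tube rises buoyantly toward the 4H dissipation height within a few orbital periods.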
We specify the initial magnetic field strength of the MFT using the plasma beta definition, \begin{equation} \beta_0 = \frac{8\pi P_{\rm{g}0}}{B_0^2},\label{Eq:beta0} \end{equation} where $P_{\rm{g}0}$ is the initial gas pressure inside the MFT. The initial density is determined from the condition of pressure equilibrium (\ref{Eq:pbal}) at $t=0$ in terms of $\beta_0$, \begin{equation} \rho_0 = \dfrac{P_{\rm{e}}(z_0)}{\dfrac{R_{\rm{g}}T_0}{\mu}\left(1 + \dfrac{1}{\beta_0}\right)}. \end{equation} The adiabatic index of the molecular hydrogen gas is $\gamma=7/5$. \subsection{Fiducial run} \label{Sec:fiduc} Let us first consider the typical picture of the MFT dynamics. In the fiducial run, the dynamical equations (\ref{Eq:v}, \ref{Eq:z}, \ref{Eq:rho}, \ref{Eq:T}) are solved assuming that the MFT is at the distance $r=0.027$~au inside the disk. The parameters of the disk at this distance are the following: temperature $T_{\rm{e}}=4830$~K, midplane density $\rho_{\rm{m}}=2\times 10^{-6}\,\rm{g}\,\rm{cm}^{-3}$, height scale $H=6.2\times 10^{-4}$~au, magnetic field strength $B_z=170$~G. The initial parameters of the MFT are: $a_0=0.1\,H$, $\beta_0=1$, $z_0=0.5\,H$. In Figure~\ref{Fig1}, we plot the profiles of the velocity, drag and buoyancy forces, densities $\rho$ and $\rho_{\rm{e}}$, and MFT radius. \begin{figure}[t] \centering \includegraphics[width=14.0cm, angle=0]{fig2.eps} \caption{Left panel: vertical profiles of the MFT velocity (left $y$-axis, black line) and forces per unit mass (right $y$-axis, black dashes: buoyancy force, orange: turbulent drag force, magenta: aerodynamic drag force). Right panel: vertical profiles of densities (left $y$-axis, black line with dots~--~MFT density, orange dashes~--~disk density) and MFT radius (right $y$-axis, magenta line).
Initial parameters: $a_0=0.1\,H$, $\beta_0=1$, $z_0=0.5\,H$.} \label{Fig1} \end{figure} The left panel of Figure~\ref{Fig1} shows that initially the MFT accelerates almost instantly to a velocity of $\simeq 0.7\,\rm{km}\,\rm{s}^{-1}$, because the turbulent drag force is much less than the buoyancy force. After this acceleration, the drag force and the buoyancy force become nearly equal, $f_t\lesssim f_b$, and the MFT moves with increasing velocity. The acceleration decreases when the MFT rises to a height $z\simeq 2.5-3\,H$. Above the disk, $z>3\,H$, the acceleration tends to zero, as the buoyancy and aerodynamic drag forces become very small. The MFT moves by inertia with a nearly constant velocity, $v\simeq 2\,\rm{km}\,\rm{s}^{-1}$. The MFT moves in a highly non-uniform medium. The right panel of Figure~\ref{Fig1} shows that the MFT expands during its motion and its density decreases. The external density also decreases with height. The density difference decreases from $50\,\%$ at the starting point to nearly zero at $z>3\,H$, i.e., the degree of the buoyancy decreases. The radius of the MFT becomes larger than the height of the disk at $z\simeq 4\,H$. Further motion of the MFT cannot be investigated in the frame of the slender tube approximation. We assume that the MFT dissipates at $z\simeq 4\,H$. The dissipation of the MFTs in the atmosphere of the disk leads to the formation of a non-stationary magnetized corona. The heating of the corona by the dissipation of the magnetic field rising from the disk has also been found and discussed by~\cite{galeev79} and~\cite{stella84} in the context of the accretion disks around black holes and by~\cite{miller00} in application to the disks around classical T~Tauri stars. \subsection{Dependence on parameters} \label{Sec:param} The dynamics of the MFT depends on its initial position inside the disk. In Figure~\ref{Fig2}, we present the dependence of the MFT velocity on the initial height and distance from the star.
We consider three distances, $r=0.027$~au (close to the inner boundary of the disk), $r=0.15$~au ($T_{\rm{e}}=2025$~K, $\rho_{\rm{m}}=4.1\times 10^{-8}\,\rm{g}\,\rm{cm}^{-3}$, $H=0.0053$~au, $B_z=29.5$~G) and $r=0.8$~au (the outer zone of the thermal ionization region, where $T_{\rm{e}}=970$~K, $\rho_{\rm{m}}=8.3\times 10^{-10}\,\rm{g}\,\rm{cm}^{-3}$, $H=0.045$~au, $B_z=0.14$~G). The dynamics of the MFTs rising from $z_0=0.5\,H$, $z_0=1\,H$ and $z_0=1.5\,H$ is considered. \begin{figure}[t] \centering \includegraphics[width=7.0cm, angle=0]{fig3.eps} \caption{Velocity profiles at the distances $r=0.027$~au (black), $r=0.15$~au (magenta), $r=0.8$~au (orange) for different initial heights (solid lines: $z_0=0.5\,H$, dashes: $z_0=1\,H$, dots: $z_0=1.5\,H$).} \label{Fig2} \end{figure} Figure~\ref{Fig2} shows that the MFTs rapidly accelerate in the beginning, as in the fiducial run (see Section~\ref{Sec:fiduc}). After that, the MFTs rise with increasing velocity. Above the disk, $z>3\,H$, the MFTs move with a constant velocity, which is $\sim 1$~km~s$^{-1}$ at $r=0.8$~au and $2-2.5$~km~s$^{-1}$ at $r=0.027$~au. Our simulations show that the rise times to the surface of the disk are $0.5\,P_{\rm{orb}}$, $0.8\,P_{\rm{orb}}$ and $1.2\,P_{\rm{orb}}$ for the MFTs with $z_0=0.5\,H$, $z_0=1\,H$ and $z_0=2\,H$, respectively ($P_{\rm{orb}}$ is the rotation period). \subsection{Terminal velocity of the MFT} \label{Sec:v_b} As shown in the previous Section, after the initial acceleration the MFT moves at a constant velocity, which is determined by the balance between the buoyancy and drag forces. In the case of the aerodynamic drag, we obtain the equality \begin{equation} \Delta\rho g_z = \frac{\rho_{\rm{e}}v_{\rm{b}}^2}{2}\frac{C_{\rm{d}}}{\pi a^2},\label{Eq:fbal} \end{equation} where $\Delta\rho = \rho_{\rm{e}}-\rho$. The density difference in thermal equilibrium (see Eq.
(\ref{Eq:pbal})) is \begin{equation} \Delta\rho = \frac{B^2}{8\pi v_{\rm{s}}^2}.\label{Eq:drho} \end{equation} Substituting (\ref{Eq:drho}) into Eq.~(\ref{Eq:fbal}), it is easy to derive the formula for calculating the terminal velocity~\citep{parker_book} \begin{equation} v_{\rm{b}} = v_{\rm{a}}\left(\frac{\pi}{C_{\rm{d}}}\right)^{1/2}\left(\frac{a}{H}\right)^{1/2}\left(\frac{z_0}{H}\right)^{1/2},\label{Eq:v_b} \end{equation} where \begin{equation} v_{\rm{a}} = \frac{B}{\sqrt{4\pi\rho}} \end{equation} is the Alfv{\'e}n speed. Formula (\ref{Eq:v_b}) shows that the terminal velocity of the MFT with $a\sim 1\,H$ at $z_0\sim 1\,H$ approximately equals $v_{\rm{a}}$. For convenience, we express the terminal velocity in terms of the plasma beta and the local sound speed $v_{\rm{s}}$, \begin{equation} v_{\rm{b}} = v_{\rm{s}}\sqrt{\frac{2}{\beta}}\left(\frac{\pi}{C_{\rm{d}}}\right)^{1/2}\left(\frac{a}{H}\right)^{1/2}\left(\frac{z_0}{H}\right)^{1/2}.\label{Eq:v_b2} \end{equation} Figure~\ref{Fig3} shows the dependence of the terminal velocity on the MFT radius for various plasma betas and initial heights $z_0$. Terminal velocities range from $2$ to $50$~km~s$^{-1}$ for radii in the range $[0.1,\,1]\,H$. The larger the initial height $z_0$, the larger the terminal velocity of the MFT. The MFTs with weak magnetic field ($\beta=10$) move slowly, with velocities smaller than the sound speed. The MFTs with strong magnetic field ($\beta=0.1$) accelerate to velocities of $\sim 40-50$~km~s$^{-1}$, characteristic of molecular outflows. Generally speaking, the rising MFTs can thus cause outflows. \begin{figure}[t] \centering \includegraphics[width=7.0cm, angle=0]{fig4.eps} \caption{Dependence of the terminal MFT velocity on the radius of the MFT. Black lines: plasma $\beta=0.1$, orange lines: $\beta=1$, magenta lines: $\beta=10$. Solid lines: $z_0=2\,H$, dashed lines: $z_0=1\,H$, dotted lines: $z_0=0.5\,H$.
} \label{Fig3} \end{figure} \subsection{Buoyancy-driven outflows} \label{Sec:outflows} In Sections~\ref{Sec:fiduc}-\ref{Sec:v_b} we showed that the MFTs can form outflows from the disk. The buoyancy extracts the magnetic flux from the disk over the time scale of the MFT rise to the surface of the disk. As we discuss in the introduction, the toroidal magnetic field is continuously amplified in the region of thermal ionization. When the magnetic field reaches a state with $\beta\sim 1$, the MFTs form, rise from the disk and carry away the excess of the magnetic flux. The time scale of the magnetic field amplification can be estimated from the induction equation. In the approximations of the accretion disk model, the time scale of the azimuthal magnetic field amplification is (see~\cite{fmfadys}) \begin{equation} t_{\rm{gen}} = \frac{2}{3}\frac{|B_{\varphi}|}{|B_z|}\Omega^{-1}\left(\frac{z}{r}\right)^{-1}\simeq 2.12P_{\rm{orb}}\left(\frac{z/r}{0.05}\right)^{-1}\frac{|B_{\varphi}|}{|B_z|}.\label{Eq:t_phi} \end{equation} Formula (\ref{Eq:t_phi}) shows that $t_{\rm{gen}}\simeq 2P_{\rm{orb}}$ at the height $z=1\,H$. A comparison of the rise times discussed in Section~\ref{Sec:param} with the estimate (\ref{Eq:t_phi}) shows that the MFT rise time is less than $t_{\rm{gen}}$. Therefore, the considered buoyancy-driven outflows are periodic. The period of the outflows will be of the order of the magnetic field amplification time scale, i.e., several rotation periods. We propose that periodically rising MFTs can contribute to the variability of the YSO radiation. In the region $r=[0.5,\,0.8]$~au, the temperature is $T\lesssim 1500$~K and the MFTs contain dust grains. Such rising MFTs can absorb stellar radiation and re-emit it in the infrared (IR). This process can be responsible for the IR-variability of YSOs. The time scale of the variability would be of the order of the magnetic field amplification time scale, i.e., rotation periods.
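The two estimates of this Section, the terminal velocity of Eq.~(\ref{Eq:v_b2}) and the amplification time scale of Eq.~(\ref{Eq:t_phi}), are easy to evaluate numerically; the helper names below are ours:

```python
import numpy as np

def v_terminal(v_s, beta, a_over_H, z0_over_H, C_d=1.0):
    """Terminal rise velocity for aerodynamic drag:
    v_b = v_s * sqrt(2/beta) * sqrt(pi/C_d) * sqrt(a/H) * sqrt(z0/H)."""
    return v_s * np.sqrt((2.0 / beta) * (np.pi / C_d) * a_over_H * z0_over_H)

def t_gen(P_orb, Bphi_over_Bz, z_over_r=0.05):
    """Amplification time scale of the azimuthal magnetic field:
    t_gen ~ 2.12 * P_orb * (z/r / 0.05)^-1 * |B_phi|/|B_z|."""
    return 2.12 * P_orb * (0.05 / z_over_r) * Bphi_over_Bz
```

For $\beta=1$, $a=H$, $z_0=H$ and $C_{\rm d}=1$ the first formula gives $v_{\rm b}=\sqrt{2\pi}\,v_{\rm s}\approx 2.5\,v_{\rm s}$, i.e. a mildly supersonic rise, consistent with the intermediate curves of Figure~\ref{Fig3}.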
This time scale ranges from several days at $r=0.027$~au to several months at $r=0.8$~au. \section{Conclusions and discussion} \label{Sec:discussion} In this paper, we investigate the dynamics of the magnetic flux tubes in the accretion disks of young stars. In our previous paper, the adiabatic motion of the MFTs was considered~\citep{bmfad}. Now we include in the model the radiative heat exchange between the MFT and the surrounding medium. The disk is considered to be in hydrostatic equilibrium. The density, temperature and magnetic field strength of the disk are calculated with the help of our MHD model of the accretion disks~\citep{fmfadys, kh17}. We investigate the dynamics of the MFT with various initial radii $a_0$ and plasma beta $\beta_0$, formed at different distances from the star, $r=[0.027,\,0.15,\,0.8]$~au, and at different heights above the midplane of the disk, $z_0=[0.5,\,1,\,2]$~$H$. The accretion disk with turbulence parameter $\alpha=0.01$ and accretion rate $\dot{M} = 10^{-7}\,M_{\odot}\,\mathrm{yr}^{-1}$ around a solar-mass T~Tauri star is considered. The simulations show that the MFTs rise from the disk to the atmosphere with velocities up to $\simeq 50\,\rm{km}\,\rm{s}^{-1}$. The farther from the star the MFT forms, the lower its terminal velocity. We divide the MFTs into two categories. Small MFTs (with radii less than $\sim 0.1\,H$) cannot accelerate to speeds of more than $10$~km s$^{-1}$. Large MFTs, having radii larger than $\sim 0.1\,H$, can reach velocities up to $50$~km~s$^{-1}$. The rise time of the MFT to the surface of the disk is of the order of the rotation period. This time is less than the time scale of the toroidal magnetic field amplification, $t_{\rm{gen}}$. Therefore, the MFTs form inside the disk and float from it periodically over time scales $\sim t_{\rm{gen}}$, which range from several days to several months in the region of thermal ionization. The MFTs catastrophically expand above the disk.
The MFTs with weak magnetic field ($\beta=10$) rise slowly, with speeds less than the sound speed. The MFTs with $\beta=1$ form an outflowing magnetized corona. Strongly magnetized MFTs ($\beta=0.1$) cause outflows with velocities of $20-50$~km s$^{-1}$. The outflow velocity is consistent with the velocity of the molecular outflows from YSOs (see reviews~\cite{ray07} and \cite{frank14}). The MFTs formed in the region of the disk with $T=[1000,\,1500]$~K contain dust particles. The rising MFTs will periodically absorb the stellar radiation and re-emit it in the IR. We propose that this process can contribute to the observational IR-variability found in many YSOs (see, for example,~\cite{flaherty16}). Shadowing of the outer disk regions by periodically rising MFTs can also appear as IR-variability of the disk. It should be noted that the specific mechanism of the magnetic field generation is not important from the point of view of the dynamics of the magnetic flux tubes. Our conclusions can also be generalized to the dynamo-generated magnetic field in the disks. In this work, an isothermal disk was considered. Several works have investigated the influence of radiation transfer on the vertical structure of protoplanetary disks (see, for example,~\cite{andes} and references therein). We will consider the disk thermal structure in detail in our next paper. We will also investigate the influence of the magnetic field of the disk on the MFT dynamics. It is also an interesting task to compare the theoretical variability due to periodically rising MFTs with the IR-periodicity of YSOs. \normalem \begin{acknowledgements} We thank the anonymous referee for some useful comments. This work is supported by the Russian Science Foundation (project 15-12-10017). \end{acknowledgements} \bibliographystyle{raa}
\section*{Introduction} Digital pathology allows scanning histology slides and saving them in high resolution so that the images can be viewed with virtual microscopy. However, analyzing such histology slides is very time-consuming and requires much expertise. There are two important differences between virtual histology slides and natural images: First, we know the physical resolution of a pixel and can therefore determine precisely the size of a cell. Second, the color-space variation within one tissue type and one scanning process is tiny. Deep learning can help to reduce the time-consuming parts like counting mitoses and decrease the variability of the predictions. However, the colorspace from different laboratories can look vastly different. This was tackled in the MIDOG 2021 challenge\cite{midog2021} by using different scanners. In the new MIDOG 2022 challenge\cite{midog2022}, unseen tissue types are present in the test dataset. This increases the domain shift enormously. To handle the domain shift between the training and the test set, there exist three possible solutions, which were already analyzed in previous works \cite{9234592}: \begin{itemize} \item Using different strategies to augment the training dataset, so that the source domain space gets enlarged. \item Normalization techniques decrease the diversity and overlap the source and target domains. \item Training the neural network such that domain-specific features in the dataset are not used. \end{itemize} In this work, we build on the scheme developed for the Radial Prediction Layer \cite{RPL}. The Radial Prediction Layer paper \cite{RPL} showed that training on prototypes instead of fixed classes is possible. This allows us to learn a network that removes not only the information about which scanner or tissue type an input comes from, but also the information about which case it belongs to.
\section*{Material and Methods} \begin{SCfigure*} \centering \includegraphics[width=17.8cm]{rp-dac2.png} \caption{Update of the RP-DAC layer} \label{fig:computerNo} \end{SCfigure*} \subsection{Dataset} We used only the dataset that was provided by the MIDOG 2022 challenge. The training set consists of 9501 mitotic figures from four scanners and six different tissue types; for one of the tissue types, no labeled information was provided. The dataset was split into 80\% for the training set, 10\% for the validation set, and 10\% for an internal validation test set. The unlabeled data was used in the training process to train the RP-DAC and remove the domain information from the network. \subsection{YOLOv5} Our base model was the YOLOv5s \cite{glenn_jocher_2020_4154370} model that was trained on the COCO dataset. We made minor changes by adding learnable parameters to the residual connection layers to allow the network to remove the domain-specific information. This was done for the layers from 6 to 12, 14 to 19, and 10 to 22. The model was trained with AdamW \cite{https://doi.org/10.48550/arxiv.1711.05101} for 800 epochs with small learning rates between 0.002 and 0.0005. \subsection{Data Augmentation} To enlarge the source domain, we used the technique introduced by Tellez et al. \cite{tellez2018whole} that manipulates the image's hematoxylin, eosin, and DAB channels by multiplying them with a factor. This factor is a hyperparameter. We use color deconvolution \cite{ruifrok2001quantification} to convert an image from RGB to HED and, after the transformation, back from HED to RGB. To find fair values for this hyperparameter for each scanner/tissue type, we sampled 100 images and calculated the mean of each HED channel. We use the lowest and the highest of these means as bounds when sampling possible factors for each scanner/tissue type separately. In combination with a beta distribution for sampling the factors, this yields realistic color transformations for each scanner/tissue type.
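A minimal NumPy sketch of this augmentation is given below, using the Ruifrok-Johnston stain matrix for the RGB-to-HED deconvolution. The default factor bounds `lo`/`hi` and the Beta parameters are illustrative placeholders; in practice they would be derived per scanner/tissue type from the sampled channel means as described above:

```python
import numpy as np

# Ruifrok & Johnston stain vectors (rows: hematoxylin, eosin, DAB in RGB
# optical-density space).
RGB_FROM_HED = np.array([[0.65, 0.70, 0.29],
                         [0.07, 0.99, 0.11],
                         [0.27, 0.57, 0.78]])
HED_FROM_RGB = np.linalg.inv(RGB_FROM_HED)

rng = np.random.default_rng()

def rgb_to_hed(rgb, eps=1e-6):
    """Color deconvolution: RGB in [0, 1] -> stain concentrations."""
    od = -np.log(np.clip(rgb, eps, 1.0))   # optical density
    return od @ HED_FROM_RGB

def hed_to_rgb(hed):
    """Inverse transform back to RGB in [0, 1]."""
    od = hed @ RGB_FROM_HED
    return np.clip(np.exp(-od), 0.0, 1.0)

def augment_hed(rgb, lo=0.7, hi=1.3, a=2.0, b=2.0):
    """Scale each stain channel by a Beta-distributed factor on [lo, hi]."""
    factors = lo + (hi - lo) * rng.beta(a, b, size=3)
    return hed_to_rgb(rgb_to_hed(rgb) * factors)
```

Since the forward and inverse matrices cancel exactly, setting `lo = hi = 1.0` reproduces the input image, which makes the transform easy to unit-test.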
\subsection{RP-DAC} We took the idea of the Radial Prediction Layer \cite{RPL} and modified it such that it can be used as a domain adaptation classifier with moving prototypes. For $n$ domain classes, we create an $n$-dimensional space with $n$ prototypes, each initialized with a constant value on a single axis; the starting points of the prototypes are hence proportional to the identity matrix. The training phase can be divided into two alternating steps. 1. Training of the RP-DAC, using non-augmented data as input and predicting the domain as output. The loss is the mean squared error of the distance between the prediction and the corresponding prototype, plus the distance from that prototype to its starting point. Only the RP-DAC weights, including the prototypes' locations, are trained in this step. The two losses can be weighted to allow more freedom for moving the prototypes in the space. 2. Training of the detection network, which includes augmented examples of the images. Here the RP-DAC loss is the distance from each prediction to each prototype, so that the detection network tries to map all RP-DAC predictions to one single point by updating only its own weights, thereby removing the domain information as much as possible. The training procedure allows us to use the scanner, the tissue type, or the case id as the domain. It can also handle different data augmentations, like the one described above, which would lead to different domains. For a visualization of the RP-DAC losses, see figure \ref{fig:computerNo}. The left graph shows an example with two domains. In phase one, the update of the RP-DAC weights is calculated from the losses L1 and L2; the middle graph shows the result after the update. In phase two, the detection network is updated using the losses L2 and L3; the right graph shows the result after the update.
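A minimal sketch of the two alternating loss computations, together with an uncertainty measure based on Manhattan distances to the prototypes, could look like the following. This is our own illustrative NumPy pseudocode, not the authors' implementation; class and parameter names such as `w_anchor` are assumptions, and in practice the losses would be computed in a deep-learning framework so that gradients flow to the respective weights.

```python
import numpy as np

class RPDAC:
    """Sketch of a Radial Prediction domain-adaptation classifier head."""

    def __init__(self, n_domains):
        # Prototype starting points are proportional to the identity matrix.
        self.start = np.eye(n_domains)
        self.prototypes = self.start.copy()

    def phase1_loss(self, preds, domain_ids, w_anchor=0.1):
        """Phase 1 (train RP-DAC): pull each prediction to its domain prototype
        and keep prototypes near their starting points (both MSE, weighted)."""
        proto = self.prototypes[domain_ids]
        l_pred = np.mean((preds - proto) ** 2)
        l_anchor = np.mean((self.prototypes - self.start) ** 2)
        return l_pred + w_anchor * l_anchor

    def phase2_loss(self, preds):
        """Phase 2 (train detector): distance from every prediction to every
        prototype, pushing all predictions toward a single point."""
        diff = preds[:, None, :] - self.prototypes[None, :, :]
        return np.mean(diff ** 2)

    def uncertainty(self, pred):
        """Manhattan distance to each prototype, normalised by the Manhattan
        distances from the prototype centre to all prototypes."""
        center = self.prototypes.mean(axis=0)
        num = np.abs(pred - self.prototypes).sum(axis=1)
        den = np.abs(center - self.prototypes).sum()
        return num / den
```

In phase one, only the RP-DAC (including the prototype locations) would receive gradients from `phase1_loss`; in phase two, only the detection network would receive gradients from `phase2_loss`.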
The dashed lines show the space where the cost of a training sample is minimized; if a data point lies far away from this line, this indicates that the network is uncertain about the sample. A non-normalized uncertainty can be calculated as the Manhattan distance to each prototype divided by the Manhattan distance from the center of all prototypes to all prototypes.
\subsection{Ensembling of the models} In the test phase, we wanted to use an ensemble of models trained on different input color spaces (HE, HED, RGB, GRAY) together with test-time augmentation such as mirroring. However, the run on the test set failed, so we had to remove this ensembling to decrease the execution time.
\section*{Results} The results of this technique can be found on the leaderboard of the MIDOG 2022 challenge and will be updated as soon as possible in this preprint. Each trained model without ensembling reaches an F1 score of around 0.76 on our internal validation set.
\section*{Discussion} In this work, we presented our contribution to the MIDOG 2022 challenge. Our solution shows that it is possible to train on prototypes instead of fixed classes. Learning with prototypes offers much potential for future work: smarter strategies in the learning process could sample or weight training data so that the domain shift is reduced even further. The residual connections of the YOLO network could use the strategies from \cite{9506562}, so that the domain features are not transferred. We see great potential in the usage of prototypes for domain adaptation classification. \begin{acknowledgements} The authors acknowledge the financial support by the Federal Ministry of Education and Research of Germany (BMBF) in the project deep.Health (project number 13FH770IX6). We also thank Dr. Christian Krumnow for his helpful feedback. \end{acknowledgements} \section*{Bibliography}
{ "redpajama_set_name": "RedPajamaArXiv" }
9,471
\section{Introduction} \label{sec:intro} \input{introduction.tex} \section{Feed-forward Neural Networks} \label{sec:networks} \input{networks.tex} \section{Decision procedures} \label{sec:decproc} \input{decproc.tex} \section{State of the art: a bird's eye view} \label{sec:stateofart} \input{stateofart.tex} \section{Challenges and perspectives} \label{sec:challenges} \input{challenges.tex} {\small \bibliographystyle{named}
{ "redpajama_set_name": "RedPajamaArXiv" }
1,112
![Lost in GCW](http://is.gd/smooley)

![Lost in GCW](monkey.png "title also lost in GCW")

![Lost in GCW][img]

[img]: smiley.png "this title is lost in GCW"

### Relative and Absolute Image URLs ###

- absolute ![foo](http://is.gd/smooley)
- absolute ![foo](https://thereisnoimageat.this/location.png)
- absolute ![foo](ftp://thereisnoimageat.this/location.png)
- relative ![foo](smiley.png)

### In Text and Lists ###

This sentence has a ![monkey](monkey.png) inside.

- item 1
- item 2 has a nested image: ![alt][img]
- item 3 has two nested images: ![alt][img] ![alt][img]
- item 4 ![monkey](monkey.png "monkey") says hello.

Look, a ![monkey](monkey.png "monkey")

### In Links ###

A plain image link: [![smiley](smiley.png "foo")](http://tango.freedesktop.org/).

A link with [text and ![smiley](smiley.png "foo")](http://tango.freedesktop.org/).
{ "redpajama_set_name": "RedPajamaGithub" }
2,189
Kel & Mel Reviews
We reserve the right to call it shit and still like it!

NCT 127 – Regular (English Version) + (Jimmy Kimmel Live! Version)
Posted by kelmelreviews on October 10, 2018

A new NCT 127 comeback is upon us and, with the group doing promotions in America ahead of the EP release, they are dropping a new (almost entirely) English-language track, and it is, as almost all NCT tracks are, A BOP. More than a lot of groups out of Korea in the idol spectrum, NCT 127 is not afraid to toot their own horn and let you know exactly where you stand with them. This song turns their attitude up to 11 and adds a little trap-sumption to the mix for good measure. The amount of sauce it takes to pull this off convincingly when a lot of your members are kittens is off the charts, but they pull it off well and make the song work expertly. A lot of this had to do with the writer of the track, Vedo. An American artist with his own discography and solid releases, he is able to take the try-hard out of this song by writing it from the perspective of someone used to the language in it. There is a lot of slang in this track, but it all works together and makes sense when you listen closely. The reason this works when Vedo does it and doesn't for other writers unfamiliar with AAVE is that he knows not to pull back too much or overuse it. When people naturally speak like this, it is sprinkled throughout the way they talk; it's not all dropped in a single sentence and never used again, nor is it used to call attention to certain words. It is a part of the speaker, and the way this is written makes everything come across as natural so that the charisma of the performers doesn't have to make up for unintentional cringe. There appear to be two things that have been toned down for an American audience from a production standpoint, though. The first is the harmonies.
While it's usually NCT U that comes with that real richness of backing vocals, NCT 127 haven't been slouches in that department either, and this track tends to push all that to the side for unison singing (except for the We got the wave/We gettin' paid part). While we find this disappointing, it makes sense in some respects, as American audiences are not used to full harmonies from singing groups anymore, especially on party jams. What surprised us is how much the bass has been turned down as well. NCT is probably best known for switching up what we think of as the drop. Songs like Boss, The 7th Sense, and Chain come in with most of the bass banging from the beginning and, when the chorus hits, drop enhancements in the melody to create the emphasis. When they do a more traditional drop, it's always near the beginning, like on Cherry Bomb. This track not only keeps the bassline the same for most of its run; there are large sections of the song where there is no bass whatsoever. Honestly, considering the Latin nature of the track, this is probably the best song to switch up with, but we would have loved a little more knock in the trunk. But slightly out-of-character choices don't make this song bad. This is probably one of the catchier songs to come out of a major group this year without coming across as gimmicky or filler. It has us very excited for the Korean version of the track as well as the new release, due out October 12. As a special bonus, we're also including the live stage from when the group performed the song on Jimmy Kimmel Live! It can be hard to hear some of the members over the screaming of the crowd and the backing track, but we're kind of impressed that they didn't completely lip-sync (Taeyong…we see why you're the leader because we can HEAR you).
We think some of the quietness may have come from several of the group members being unsure about their English, because we can hear just about everybody on their live version of Cherry Bomb, so we think it's pretty forgivable this time. Once they perform the track more, the live stages with live vocals are bound to improve. The lyrics are finally available on the original video, so hit the [CC] button to see what @VedoTheSinger did for them as a writer.

#comeback #English #레귤러 #Jimmy Kimmel #Kpop #NCT 127 #NCTzens #October 12 #Regular #Regular Irregular #Vedo the Singer #YouTube
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,852
"Inna Lillahi Wainna ilayhi Rajioon"."Indeed we belong to Allah and to Him do we indeed return." (The Holy Quran 2:156) Cousin of Dr Shabbir Haider, Ehtesham Zamir Jafri Passed away in Pakistan It is with deep sadness we wish to inform that Dr Shabbir Haider's Cousin Retired Major General Syed Ehtesham Zamir has passed away in CMH Pakistan on Tuesday 4h May, 2015.His funeral prayer was offered in Bahria Town Phase-IV in Rawalpindi. He was son of renowned Urdu humorous poet Zamir Jafri. Please remember Syed Ehtesham Zamir Marhum in your duas, and pray that Allah SWT may grant him an exalted place in Jannah and sabr-e-jameel to the family members.Please recite Sura-e-Fateha for the departed soul. Pakistan's Army Command and Politicians condoled the death of Maj. Gen. (R) Ehtesham Zameer.In a messages to the family, they said that we was shocked and grieved by the news of the death of Maj. Gen. (R) Ehtesham Zameer.They prayed to Almighty Allah to grant eternal peace to the departed soul. He also prayed for the family for grant of strength and fortitude to bear this irreparable loss with equanimity.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
50
Ulica Nowotarska (Nowotarska Street) is one of the most important streets in Zakopane. Together with Kasprowicza Street, it forms the exit road from the city to the north. It is part of national road no. 47 and of the Zakopianka. The street is mostly lined with modern buildings. It was laid out at the beginning of the 20th century along the old route leading from Kraków to Zakopane.

Notable buildings
no. 8 – Hungarian restaurant Czardasz
no. 34c – Villa "Barabaszówka"
no. 43 – Villa Mrzonka
no. 45 – PTSM Youth Hostel Szarotka
no. 59 – Villa "Płazówka"
Villa "Eljaszówka"

Infrastructure
At the intersection with Powstańców Śląskich Street there is a roundabout.

Bibliography
Nowotarska
{ "redpajama_set_name": "RedPajamaWikipedia" }
518
Some users of your service may be very young or have communication barriers and pictorial questions can be useful to reach out to them. Our systems allow you to use pictures in questions with the added benefit that these may prompt people to remember their own situation when using your service. You can also use your own pictures to help people identify premises and facilities. National PROMS Results – What does good look like?
{ "redpajama_set_name": "RedPajamaC4" }
2,406
Molgula euprocta is a species of sea squirt described by Richard von Drasche-Wartinberg in 1884. Molgula euprocta belongs to the genus Molgula and the family Molgulidae. No subspecies are listed in the Catalogue of Life.

Sources
Molgulidae
euprocta
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,207
The Corni di Canzo (Còrni, Curunghèj or Culunghèj in the Brianza dialect) are a mountain group located in the Larian Triangle.

Description
The Corni di Canzo are the highest peaks of the ridge separating the course of the Lambro from the Lake of Lecco. They lie in the south-eastern part of the Larian Triangle, within the group of mountains which, for morphological and historical reasons, is also called the "Isola senza Nome" (Nameless Island). Their iconic profile, the three summit horns, makes this mountain easy to recognise even from a great distance across the plain. The northern side consists largely of rocky flanks, while the remaining southern and western sides are generally grassy or wooded. The Corni di Canzo massif is bounded to the north by the Valbrona valley, and to the east by the shores of the eastern branch of Lake Como and by Monte Moregallo. The southern boundary is marked by the town of Valmadrera, where the South Ridge ends, and by the Val Ravella, which descends to the hamlet of Gajum and the town of Canzo. The western boundary is the Cranno Ridge, which descends towards the town of Asso and the Vallategna waterfall. The three rocky peaks, arranged from east to west, take their name from the municipality of Canzo: the central and the western peak delimit the northern head of the Val Ravella and at the same time form the border between the municipalities of Canzo and Valbrona. Only the third, lowest peak lies in the territory of the municipality of Valmadrera. The Western Horn is 1,373 m above sea level, the Central Horn 1,368 m, and the Eastern Horn 1,232 m. These three summit horns lie along the main ridge which runs from the Cranno Ridge to Monte Moregallo at the Bocchetta di Moregge.

Besides these three main horns, two minor horns are worth noting: the Corno Rat (or first Corno di Valmadrera) on the southern side (924 m), and the Ceppo della Bella Donna on the north-eastern side (1,140 m). The two highest peaks are clearly visible from Brianza and look like two large "horns", as can be seen in the photograph. Furthermore, all three peaks are simultaneously visible from the Menaggio lakefront up to the upper Lake Como; however, the further north one goes, the higher one must climb for them to remain visible. Their origin is the subject of a local legend telling of a war between archangels and devils, in which Canzio was a general.

Neighbouring mountains
The Corni di Canzo border Monte Moregallo to the east. The main point of contact is the Bocchetta delle Moregge, which forms the watershed between the Valle del Fiume (or Valle delle Moregge) to the north and the Valle di Sant'Antonio to the south. The boundary with Monte Rai, to the south, is marked by the Colma di Ravella, the watershed between the Val Ravella to the west and the Val Gatton to the east. The Val Ravella, also to the south, marks the boundary between the Corni di Canzo and Monte Cornizzolo. To the north, the Valbrona plain separates the northern side of the Corni from Monte Megna.

Hiking
The side facing the Val Ravella, above the town of Canzo and the hamlet of Gajum, offers numerous hiking routes. There are themed trails, such as the "Sentiero geologico Giorgio Achermann" or the "Spirito del Bosco", which are accessible and interesting even for beginners. Climbing instead towards Pianezzo, where the Rifugio SEV is located, the routes become more demanding in elevation gain and length, though in most cases they remain of grade E (hiking).

On the northern side too, climbing from Valbrona, there are numerous easily accessible routes, including the Oneda road, which allows authorised off-road vehicles to reach Pianezzo and the refuge. From Valmadrera, the trails leading to Pianezzo on the south-eastern side are among the most demanding, with significant length and elevation gain. Some of these routes are of grade EE, and therefore advisable only for expert hikers. Finally, the Cranno Ridge offers a long and demanding ascent from the municipality of Asso. From Pianezzo and the Rifugio SEV the summit of the Eastern Horn can be reached via a short grade E trail. Reaching the summits of the other horns, the Central Horn and the Western Horn, instead requires EE skills and in some cases A (mountaineering) skills, as for the chimney of the Western Horn (grade II). Although very popular, these peaks have a strong Dolomitic character (in both their shape and their rock type) and therefore often impose obligatory, exposed passages. On the Western Horn there is the demanding Ventennale CAI Canzo via ferrata, while on the Corno Rat there is the equally demanding Ventennale OSA via ferrata. Both ferratas have recently been re-equipped and modernised; it should nevertheless be noted that, although relatively short, they are physically demanding and technical routes, with significant exposure and verticality. They are not recommended for those without adequate experience in this type of progression.

Refuges
The Rifugio SEV (Società Escursionisti Valmadreresi) is located on the northern side of the Central Horn at 1,225 m above sea level. It can be reached from Valbrona along a convenient paved road which, being private property, is almost always closed off by an iron bar to prevent cars from passing.

The Rifugio Terz'Alpe, owned by the Azienda Regionale delle Foreste but run as an agriturismo, is located in the Val Ravella, along the trail that comes from Canzo and leads to the Corni.

Notes

Related entries
Grigna meridionale

Other projects

External links

Mountains of the Larian Triangle
Mountains of the province of Como
Mountains of the province of Lecco
Canzo
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,137
{"url":"https:\/\/www.intechopen.com\/books\/energy-management-of-distributed-generation-systems\/energy-storage-systems-for-energy-management-of-renewables-in-distributed-generation-systems","text":"Open access peer-reviewed chapter\n\nEnergy Storage Systems for Energy Management of Renewables in Distributed Generation Systems\n\nBy Amjed Hina Fathima and Kaliannan Palanisamy\n\nSubmitted: October 13th 2015Reviewed: March 1st 2016Published: July 13th 2016\n\nDOI: 10.5772\/62766\n\nAbstract\n\nDistributed generation (DG) systems are the key for implementation of micro\/smart grids of today, and energy storages are becoming an integral part of such systems. Advancement in technology now ensures power storage and delivery from few seconds to days\/months. But an effective management of the distributed energy resources and its storage systems is essential to ensure efficient operation and long service life. This chapter presents the issues faced in integrating renewables in DG and the growing necessity of energy storages. Types of energy storage systems (ESSs) and their applications have also been detailed. A brief literature study on energy management of ESSs in distributed microgrids has also been included. This is followed by a simple case study to illustrate the need and effect of management of ESSs in distributed systems.\n\nKeywords\n\n\u2022 energy storage\n\u2022 distributed generation\n\u2022 energy management\n\u2022 renewable\n\u2022 battery\n\n1. Introduction\n\nDistributed generation (DG) and electricity market liberalization have been the key drivers for the evolution of the concept of small-scale energy sources. Growing concerns about climatic changes further encouraged the use of renewable energy sources to ensure energy conservation and sustainability. But integrating renewable energy is turning out to be a real challenge for the smooth operation of DGs. Renewable power especially faces concern regarding power quality. 
Grid operators face immense issues in scheduling the generated power from the DGs, especially due to renewables and heat-driven energy sources which are difficult to be forecasted. DG may be the key to implement the much talked about micro grids and smart grids of today to ensure a clean and green energy portfolio. However, many issues as listed above need to be addressed to enable successful integration of DGs in the power grid. Thus, an effective management of the generating resources in the DG network is very essential.\n\nEnergy storages can be incorporated for energy management in many ways. Conventional usage of energy storage devices was mostly for long-term storage applications. But now they can be used for power storage and delivery from few seconds to days and months. Energy storage systems (ESSs) can act as spinning reserves for providing short-term power supply to manage instant variability in DG-generated power. They can compensate for the intermittency and variability of renewable resources and improve the power quality and reliability. ESSs can also provide ancillary services to enable quality power delivery to the end users. Optimized selection, sizing and siting of ESS will be critical for design engineers. Efficient management of energy generated in a DG system can enhance the performance of the system, thereby enabling quality and reliable power delivery. Market prices and other economic dynamics have great impact on the operation of DGs, in which case storage systems can act as added assets to achieve better economic dispatch solutions. Storage systems have proved to improve voltage stability, to smoothen wind power variations, to incorporate peak shaving and load leveling features. The main drawback of storage is its maintenance issues and life cycle failures. 
Effective implementation and usage of energy storages in the distributed grid requires intelligent and flexible energy management strategies capable of handling the dynamics of distributed systems, while ensuring effective and efficient usage of the storage device. The simplest energy management strategy avoids over charging\/discharging of the storage. However, further implementation of hardware-in-loop simulations, optimization algorithms and other intelligent tools and techniques have now been attempted to propose advanced energy management scheme strategies to improve storage lifetime and operability.\n\n2. Growth of renewable power generation\n\nEnergy crisis emerging from the early 1970s with increased environmental concern was the key factor for setting the foundation for development of renewable sources for electric power generation. Countries like Denmark, Germany and the United States led the green energy mission by creating critical markets and policy targets for development of renewable energy. Awareness on climatic change and its adverse effects further fueled this tremendous drive to reduce emissions and generate environment-friendly energy. Renewable power generation has seen accelerated growth in the developing countries, as it provided a means to reduce dependability on imported nonrenewable fuels. In the earlier days, renewable power was only advocated by scientists and environmentalists but with declining costs and expanding markets it eventually emerged as an implementable solution to improve energy security and diversify the energy portfolio of nations. The overall energy supply from renewables has grown to 76 EJ with a total investment amounting to 214.4 billion USD in 2013 which is a 30% increase from the scenario in 2004 [1]. 
Fastest growth of renewables has been recorded in the power sector with 560 GW being generated from renewables in 2013.\n\nSolar photovoltaics (PV) grabbed the highest growth rate of about 38% (average growth every year) in these 10 years. Wind power saw rapid expansion all over the world with many countries beginning to join in every year. Though China and the United States saw a mild decline in the wind market, it is still expanding due to technological improvements and fall in prices. Hydropower also increased in the last decade with many joint-venture large-scale projects being implemented. At the same time with microturbines installed on small streams and waterfalls, micro hydropower is also viewed as being a good complementary source to other renewables like wind and solar. Ocean energy is still in developmental stage, with France being the only country to have a functional tidal power plant with rated capacity of 240 MW.\n\n3. Microgrids and DG\n\nGrowing power demand is predicted to be highly devastating for our environment as power generation is the highest contributor to greenhouse gas emissions. Also, rapid depletion of conventional energy resources and increasing fuel prices are crippling the economy of many countries. With technological advances, many renewables are now competing as alternative energy sources to conventional fossil fuels. Conventional power generation was highly centralized due to geographical concentration of energy sources. It also faced many issues like need for extraction infrastructure of generated power, losses in transmission and distribution and lacked the flexibility of being set up at desired locations. This led to conceptualization of DG, wherein the power generation takes place at\/near the load centers by many small grid-connected power generating sources called distributed energy resources (DERs) which are mostly renewable in nature. 
Due to abundance of availability of natural renewable sources like solar and wind, these DERs could be set up anywhere making the DGs flexible, decentralized and modular. They are capable of capturing renewable power and minimizing losses occurring in transmission. This concept has gained a lot of interest due to many reasons as defined by the International Energy Agency [2]. They are congestion on centralized transmission lines, growth of renewables, increased customer demand, electricity market liberalization and environmental awareness. The benefits of DGs are listed below:\n\n1. Flexibility: DGs are flexible in planning, installation, operation and modularity. They can also be started and stopped much more easily as opposed to conventional plants which need startup and shutdown time and costs. Hence they can be easily modulated as per market norms.\n\n2. Reliability: In electrical power systems, it simply means uninterruptible supply to the consumers. This needs high maintenance of transmission network with increased costs for the utility grid. Industrial consumers demand uninterrupted power and hence are more willing for investing in backup and\/or local generators. Fuel cells and microturbines are vastly viewed as being excellent small-scale generators for improving system reliability.\n\n3. Power quality: In many developing countries, grid power is still marred with number of power quality issues like voltage sags and frequency deviations. These problems need to be addressed to make the systems reliable and improved. DGs can easily be brought into play to improve the power quality and deliver reliable power to the consumer.\n\n4. Green power: As most DERs can be renewables, DGs can be setup to promote green energy and reduce GHG emissions. Emissions omitted or reduced are now being viewed as amounting to energy saved. Many nations have now drafted environmental policies which encourage installation of DGs and urge adoption of green energy.\n\n5. 
Reducing grid congestion: In order to provide power to remote areas which are located far from generation facilities leads to heavy congestion of transmission lines. Hence setting up of DGs in vicinity of such areas prevents burdening of the grid and avoids investment costs for setting up of new lines.\n\n6. Additional benefits: DG also serve some additional purposes like deferral of upgrades from T&D, loss reduction in transmission lines, network support and ancillary services.\n\nThus DG can benefit power system delivery and mobilize new markets. They can be completely decentralized, serving a localized consumer independent of the grid or operate with the grid to address a part of the local load. Thus any DG which exhibits controllability on its connectivity interface can act as a microgrid. The control of microgrid poses many difficulties and needs extensive strategies to command operability based on grid conditions and customer requirements. They also need unique protection strategies to address any issues arising internally to not affect the power grid. Hence, a DG can be implemented with any combination of generating sources renewable or conventional with\/without storage as shown in Figure\u00a01. Thus, many nonrenewable small-scale generators like fuel-cell, diesel-based generators and microturbines are also incorporated into DGs to meet consumer demand [3].\n\n3.1. Issues and options for integrating renewables in DGs\n\nIntegration of renewable power sources poses many challenges during operation and scheduling of DGs. Intermittency, power quality and price of electricity generated are some of the issues which are being currently addressed in the distributed power scenario. DGs facilitate the power system by providing voltage support and power factor correction applications. However, increased penetration of renewables and extensive decentralization of power sources may cause problems in its voltage profile, making it unstable and unreliable. 
The major issues which need to be addressed for enabling increased implementation of DGs are explained in detail below:\n\n1. Costs and investments: DGs require installation of different power generating technologies at the load centre, each of which is characterized by a different energy cost. Energy costs include costs for investing in installation, operation, maintenance and replacement costs and are generally coined as cost per kilowatt hour of energy generated. Most of the DERs especially the renewable systems are very expensive compared to existing fossil fuel-based generating systems. As per status reports by REN21 [1], solar PV costs have declined by 50% in the past 4 years due to widespread awareness and learning towards conceptualization of grid parity. Now, introduction of concepts like microturbines and biomass generation is proving to be cheaper. Also costs of investments can be reduced during planning a DG by selecting the optimal combination of DERs so as to yield maximum energy at lowest costs and adapting optimal operating conditions.\n\n2. Unpredictability of renewable energy system (RES): A dispatchable power is defined as a power source whose output can be regulated to match the demand so as to ensure power balance. Renewables depend on nature and are considered to be nondispatchable due to their continued unavailability and intermittency. Solar power is available only as long as the sun shines and wind is ever varying in both speed and direction. The planning committee must ensure that any intermittency or variability caused in the generation side by these DERs is balanced before being fed to the consumers (or the power grid) to avoid any damage to consumer appliances and to ensure quality. Usually, any party generating power using DGs signs a contract with its local power system operator called the access responsible party (ARP) to ensure stability and operability of the grid as per the grid codes. 
A grid code encompasses the technical specifications and rules to be abided by the transmitting party to ensure grid safety and security; any discrepancy is penalized. Forecasting studies have come a long way in enabling better planning of renewable power scheduling, especially in the wind power sector. Plant owners, grid operators and energy traders all demand high accuracy in forecasting techniques to ease management of DGs and to improve system reliability.

3. Power quality issues: Frequency variations and voltage drops and deviations may impact the quality of power delivered to the consumer. The microgenerators in DG systems lack speed governors and spinning reserves and hence may not be able to provide frequency regulation. In islanded mode of operation, the DG becomes more sensitive to variations in the voltage profile caused by any variations in load. Additional variations fed in from renewables will further impact the frequency and voltage profile of the system. The main power quality problems faced by DG systems are harmonic distortions and voltage deviations introduced by the many inverters installed to integrate the various DERs like PV panels. Thus, connection of a large number of DERs into the DG system requires meticulous planning and control to establish load-frequency balance and power quality.

4. Connectivity issues: As explained before, DGs are capable of operating with and without the grid, ensuring reliable power delivery to the consumer. However, the decision to stay connected or operate in isolation requires extensive sensing equipment and intelligent control systems. In the case of voltage faults or grid failure, the microgrid needs to immediately island itself from the faulty portion of the grid. Similarly, it must also engage in a smooth transition from islanded to grid-connected mode after fault clearance. Switching between connected and stand-alone modes needs to be highly synchronized and safe.
Any bidirectional flow of current due to voltage imbalances at higher renewable penetration needs to be controlled and managed [4, 5].

5. Market regulation: Most countries have a monopolistic electricity market where the utility is the sole regulator of electricity distribution to consumers. In such cases, the power generated by DGs is purchased by the utility under authoritatively drafted agreements. Hence, for further penetration of DGs and microgrids, a more liberalized market is needed where DG power can be sold directly to consumers. Some countries implement only marginal norms for the purchase and sale of green power, which is not sufficient to justify the investment costs of newer technologies. However, liberalization alone is not the solution: it may also lead to increased complexity and prices, which adversely affect small DGs and renewable generators in meeting scheduled dispatches and buying backup power.

6. Regulatory frameworks: Microgrid and DG operators face most problems due to the lack of appropriate regulatory frameworks to support and govern investments and operations, both in isolated and grid-connected scenarios. A fitting legal design for microgrids with market liberalization is very much essential. A proper framework for costing and economic analysis needs to be structured, one which recognizes the value of reliable and nonintermittent power supply.

DGs and microgrids are and will be playing a pivotal role in meeting future energy challenges. However, the issues discussed above need to be addressed by DG operators, planners and policy makers. Combining energy storage in the portfolio of DERs of a DG will effectively address many of these issues and enable the DG system to operate reliably and securely.

3.2. Need for ESS

ESSs can aid in improving the operation and power delivery of the DG and can help eliminate uncertainties in the system.
Conventional power systems depended solely on rotational generators for spinning reserves and ancillary services. But most micro sources in a DG, especially renewable generation units, lack this facility and hence depend on external storage to fulfil these requirements. Some of the needs for ESS in DG systems are listed below:

1. Spinning reserve and short-term backup: Fuel-powered plants are usually held on standby to provide the spinning reserve, and yet they need considerable time (minutes) to respond. In the absence of spinning reserve, as in renewable systems, the ESS can aid in ramping up power delivery in times of need. Recent advanced storage systems are capable of ramping up within seconds to a few minutes; thus an effectively managed ESS can replace a much larger spinning reserve. ESSs can also store energy for delivering short-term backup power; these are slower to respond but can be brought into commission in about an hour.

2. Load levelling and peak shaving: Usually utilities operate peaking resources and generators to deliver the peak demand power. These are usually combustion engines and gas-fired plants, characterized by lower efficiencies and higher emissions. Efforts are being initiated to reduce the peak of the demand curve by improving end-user energy efficiency, educating on demand response measures and implementing peak pricing strategies. Energy storage is an attractive option for managing the peak of the demand curve: it can store energy at off-peak times and then discharge it at peak time, acting as a very responsive and flexible peak reserve. Storage is also essential for demand response programs, which will enable consented and co-ordinated direct control of end users' demand. ESS employed for peak shaving will result in reduced emissions as well.

3. Integration of renewable sources in DG: Wind power is generated mostly at night when the demand is low.
Hence, storing this energy and delivering it at demand times enhances the efficiency of the DG system. If the same storage systems are provided at the end-user side, then all the excess wind energy can be transmitted at night time (a time of low congestion) and stored near the delivery point. This will also reduce the congestion of T&D lines at peak times, causing fewer faults and outages. Similarly, solar power is available only as long as the sun shines and hence can be stored when generation exceeds demand and discharged at evening peak hours. Increased penetration of solar power resulted in a critical situation called the duck curve in California, which created a huge difficulty for the system providers in ramping up other generators to meet the sudden shift in demand. ESS can help in such scenarios to level the demand curve and aid in ramping up supply at peak times. It can also improve the penetration of renewable generators by eliminating their variability and intermittency.

4. Power quality support: Integration of renewables poses many power quality issues, ranging from flickers and fluctuations to spikes, swells and sags. Storage systems capable of responding rapidly to system fluctuations, like flywheels and ultracapacitors, can be implemented to maintain power quality standards as per grid codes. Harmonic elimination, low-voltage ride-through (LVRT) and transient response can also be managed with an ESS. Flywheels, supercapacitors and fast-responding batteries are extensively applied for frequency and voltage control and power quality improvement in DGs.

5. Ancillary services: Ancillary services are those needed by the grid operators to sustain stable and reliable operation of the grid. Frequency regulation is an important aspect of ancillary services, along with load following and energy arbitrage. Future deregulation of markets and introduction of time-based tariffs will create a platform for energy arbitrage.
Arbitrage involves charging the storage with cheap energy at off-peak times and delivering it at a higher price at peak times. It goes hand in hand with the peak shaving concept but focuses on commercializing the saved energy for maximum profit. However, it is important to note that storage systems used for regulation and load-following purposes need to be highly responsive and efficient, else the losses occurring in ramping the storage up/down will outweigh the advantages. Conventional generators used for ancillary services are usually operated below their rated capacities and hence have lower efficiencies and high emissions. ESS can hence be the emission-free, cheaper option for ancillary services, freeing generators to operate at maximum efficiency.

4. Types of ESSs

Storage and conservation of energy have been practiced by mankind for many decades. Hydro storage and electrochemical batteries have been the traditional face of electricity storage. Based on the technology used, the different ESSs can be classified as shown in Figure 2. Figure 3 shows the share of different energy storage technologies worldwide based on installed capacity.

4.1. Pumped hydro storage (PHS)

PHS is the most widely installed and operated grid-scale form of storage. It employs two reservoirs situated at different heights and pumps water from the lower reservoir to the upper reservoir using off-peak cheap electricity from the grid. When required, the stored water is released to the lower reservoir and electricity is generated through rotating turbines. PHS plants are capable of storing huge capacities of energy for many months and have long lifetimes of about 50–60 years. Their efficiency lies in the range of 70–80%, depending on plant capacity, height difference and the type of turbine used. They need extensive investments and long gestation and planning periods.
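For a sense of scale, the recoverable energy of a PHS plant follows E = ρ g h V η (water density × gravity × head × reservoir volume × round-trip efficiency). A quick illustrative calculation; all plant figures below are assumptions for the example, not taken from this chapter:

```python
# Illustrative PHS capacity estimate: E = rho * g * h * V * eta.
# The plant figures used here are assumed values, not from the chapter.
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def phs_energy_mwh(head_m: float, volume_m3: float, efficiency: float) -> float:
    """Recoverable energy (MWh) for a given head, reservoir volume and
    round-trip efficiency."""
    joules = RHO * G * head_m * volume_m3 * efficiency
    return joules / 3.6e9   # 1 MWh = 3.6e9 J

# A 300 m head, 2 million m^3 upper reservoir and 75% round-trip
# efficiency store roughly 1226 MWh.
print(phs_energy_mwh(300.0, 2.0e6, 0.75))
```

Even a modest reservoir thus holds grid-scale energy, which is consistent with PHS dominating the installed storage capacity shown in Figure 3.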
New developments in pumped hydro include features like variable-speed pumping, which further improves the system response times for ramping applications.

4.2. Compressed air energy storage (CAES)

Compressed air is stored as a form of potential energy in large underground caverns and mines. When needed, it is mixed with natural gas, burnt and passed through expansion turbines to generate electricity. This type of CAES usually has a low efficiency of about 50% and is called diabatic CAES. If the heat generated during compression is stored in some form of thermal storage and reused to heat the air in the discharging process, the system is termed an adiabatic CAES. Such systems are still being explored and, when developed, will give a much higher efficiency of about 75%. CAES is being used in many places, including North America and Australia. Its advantages include large energy capacities with long-duration storage, but it also needs huge investments and is site-dependent with low round-trip efficiency.

4.3. Flywheel energy storage systems (FESSs)

These store energy in the form of electromechanical kinetic energy in a rotating part, like an accelerated rotor or rotating cylinder. The concept is based on using the stored mechanical inertia of a rotating object. When charging, the flywheel is accelerated; while discharging, the reverse process takes place, with the flywheel acting as a brake to extract the stored energy. Advanced FESSs have high-speed rotors made of light-weight, high-strength carbon materials spinning at twenty to fifty thousand rpm under vacuum. They are capable of supplying high power ratings at minimum response times and are very suitable for ramping and spinning reserve applications. Other attractive features of a FESS include high efficiency, high power rating and long life with minimal maintenance. The major drawback is the high self-discharge occurring due to deceleration and resistive losses at the bearings.

4.4.
Hydrogen storage

Electricity is used to split water into hydrogen and oxygen in an electrolyzer, and the hydrogen generated is stored under pressure in separate tanks. On requirement, this hydrogen is passed with oxygen/air into a fuel cell to generate electricity and water (a reverse electrolysis process). It is thus a truly clean form of energy, as water and heat are the only byproducts of the entire process. This arrangement is also called a regenerative fuel cell (RFC). Hydrogen systems possess high modularity, scalability, and energy and power capacities, but they have low-to-medium efficiencies of about 50% and suffer from self-discharge.

4.5. Battery energy storage systems (BESSs)

These are the most widely implemented and commercially used storage systems in power system applications. The basic idea is to convert electricity into an electrochemical form and store it in electrolytes inside a cell. While discharging, the electrolytes react with the electrodes in the cell and the reverse reaction generates electric current. Over the years, many types of batteries have been developed, each with a varied range of characteristics, making battery storage technology highly versatile and multipurpose. Nonrechargeable batteries are known as primary batteries and are mostly used in military and medical applications. A hierarchy of rechargeable/secondary batteries is shown in Figure 4.

Lead-acid batteries have been in use since the 1890s, and recent advancements like valve-regulated and absorbent glass mat batteries are being proposed to improve battery performance and reduce maintenance. Nickel-cadmium batteries have been used in applications for stabilizing wind energy but have been discouraged due to environmental hazards from cadmium disposal. Lithium-ion batteries have very high energy density and find extensive application in portable electronic systems.
Recent lithium–polymer systems are also being developed, with excellent features for renewable integration. NaS batteries, with high power density and efficiency, are finding increased application in renewable systems. Metal–air batteries are being explored more recently: lithium–air batteries are most attractive for their high specific energies, and zinc–air batteries are also feasible, but metal–air batteries are extremely prone to fire risk due to the high reactivity between metals and air/humidity. Flow batteries are rechargeable batteries which store their liquid electrolytes externally in separate tanks. This increases their energy capacity many fold. They are highly modular and scalable, as power capacity can be added by stacking electrode cells. Instant recharging can be done by simply replacing the electrolytes in the tanks. Recyclable electrolytes make the system highly efficient, and flow batteries are capable of operating for many thousands of cycles. They are also called redox batteries, as reduction–oxidation reactions occur in the battery during the charging/discharging process. Some types of flow batteries are the vanadium redox battery (VRB), polysulphide bromide battery (PSB), zinc–bromide battery (ZBB) and iron–chromium battery (ICB). A detailed exploration of battery storage systems and their characteristics has been presented in [7].

4.6. Super-conducting magnet energy storage (SMES)

Each SMES unit has a superconducting coil maintained at low temperature, power conditioning equipment and a cooling system. The existence of electricity in electromagnetic form was discovered long ago, when supercooled metals were found to exhibit superconductivity. Employing this feature for storing electric energy began in the late 1960s, when strong magnetic fields formed by superconducting coils at extremely low temperatures, such as 4 Kelvin, were explored.
Recent research has led to the development of materials which can function at 100 Kelvin as well. The system has high efficiency but lower energy capacity and power ratings. Cooling systems also increase the cost and maintenance of the system, and it is still in the early stages of demonstration.

4.7. Electric double layer capacitors or supercapacitors

These store electric energy in the form of electrostatic charge. A supercapacitor is usually made of a parallel-plate structure with a dielectric between the plates which stores the charge. This is the technology which bridges the gap between conventional dielectric capacitors and batteries, as supercapacitors can store large capacities of energy, comparable to batteries. Since they do not involve any conversion process (as batteries do), they are very fast and capable of rapid charge and discharge cycles. They are highly efficient (>90%), environmentally safe, easily recyclable and suitable for frequency support applications. However, they suffer from very high self-discharge and find increased application in electric vehicles and traction systems.

4.8. Thermal energy storage (TES)

With increased solar power penetration, thermal storage has gained a lot of attention. Excess heat energy can be stored for later usage in buildings or in externally stored secondary energy carriers. Ice-based and molten salt technologies are being developed and, once in operation, are expected to serve a huge part of the electric demand for heating purposes. Ice-based thermal storage includes latent heat storage and, when used with solar power generation, is claimed to achieve exemplary efficiencies.

5. ESSs for energy management of DGs

Section 3 detailed the various issues faced in integrating renewable systems and the subsequent need for ESSs. It provided a general understanding of the various applications for which ESSs can be used.
Some applications, like power smoothing and frequency regulation, require storage systems capable of charging/discharging high power over short durations, whereas arbitrage and peak shaving applications need more energy capacity to hold the stored energy with low self-discharge. Depending on these applications, an appropriate type of storage needs to be selected and sized to be integrated with the DG. As storage systems are quite expensive, they need to be managed effectively to ensure their longevity and performance, else they may suffer O&M issues, reduced performance and premature failure. Proper management of both the storage and the generating units is necessary to ensure effective utilization of the DERs. Hence, energy management has been a key topic of interest to renewable energy researchers and developers. Figure 5 depicts the components of an energy management system.

Energy management strategies are a crucial part of the planning of autonomous and standalone power systems incorporating renewable power [8]. A storage system can be operated to address many applications in a microgrid, as detailed before, and the management strategy has to be proposed with a view to attaining the predetermined objectives without adversely affecting the system or the storage itself. As Nykamp et al. [9] stated in their study, from a distribution system operator (DSO) perspective a storage system's usage is to be primarily oriented towards peak shaving and renewable integration applications. On the contrary, a private energy trader would want to utilize the storage system to maximize profits through arbitrage and revenue-earning ancillary services. A vivid comparison of different applications of the same storage in a distribution grid is given, with simulation results, in [9]. Generally, energy management is based on the state of charge (SOC) of the energy storage devices, so as to avoid any over/undercharging issues [10–12]. Reihani et al.
[13] simulated grid-scale battery storage to implement peak load shaving, power smoothing and voltage regulation of a distribution system. Tewari and Mohan [14] analyzed and managed a sodium-sulphide (NaS) battery to shift wind power generation from off-peak to peak-load times and explored market participation opportunities for the battery. Díaz-González [15] managed a flywheel-based storage device to achieve wind power smoothing for grid integration using a vector-controlled algorithm, and Chandra et al. [16] used a battery system for damping of system frequency oscillations. Zhou et al. [17] investigated the sizing of a standalone PV-hydrogen system based on an optimized energy management strategy aimed at maximizing the operating efficiency of the hydrogen system. Garcia et al. [18, 19] proposed an intelligent energy management system to determine the power sharing between the hydrogen and battery storage based on operating costs in a hybrid standalone system, and yielded better results than simple state-based energy management strategies. Cheng et al. [20] used PSO with a roulette wheel redistribution mechanism to include power balance equality constraints in the energy management strategy. Their strategy considers the effects of depth of discharge of the battery storage and extends its longevity by penalizing charging and discharging based on SOC levels.

Gamarra and Guerrero give a detailed overview of the different optimization techniques applied to microgrid planning and energy management. Wang et al. [21] presented a hierarchical energy management strategy for minimizing operation costs while handling the uncertainties in wind and load, and tested it on the RT-LAB real-time platform. Mohammadi et al. optimized the management of a microgrid using Hong's point estimation [22] and stochastic-based frameworks [23] to minimize costs and manage uncertainties.
They also proposed a unit commitment formulation for the microgrid based on the cuckoo search algorithm [24]. Chaouachi et al. [25] attempted an online energy management strategy involving intelligent and multiobjective optimization techniques for a hybrid microgrid to minimize costs and emissions. Chen et al. [26] employed intelligent fuzzy-based management of battery SOC in a DC microgrid integrating renewable sources, resulting in improved battery lifetime. Feroldi et al. [27] conceptualized control based on a receding horizon strategy, giving the highest priority to wind generation and using hydrogen energy as the lowest-priority source; the strategy enabled an improvement of about 88% in power supply delivery. Palma-Behnke et al. [28] employed the rolling horizon strategy to manage a wind-PV-diesel-storage hybrid system by evaluating set points for generating and storage units for optimized operation of the microgrid based on demand side management. Stochastic dynamic programming is employed in [29] to adapt to uncertainties in wind energy and market price variations.

ESSs also find application in distributed systems for managing renewable power curtailment occurring due to transmission system constraints [30]. Fu [31] adapted a distribution feeder and operated it as a microgrid by integrating renewable sources and ESSs to measure and observe power quality issues. Silva-Monroy and Watson [32] addressed some core issues encountered while integrating energy storage devices for market management applications for the grid. Abdeltawab et al. [33] also proposed a market-oriented strategy for managing a wind-battery storage system with the aim of earning maximum profit in a deregulated market structure.

6.
Energy management of energy storage for a hybrid renewable system – a case study

A hybrid renewable system consisting of a wind turbine and a solar panel is considered in this case study, to understand the need for a storage system and to simulate how management of the energy storage helps in improving the reliability and performance of the power system. The wind turbine produces power through an asynchronous induction machine rated at 200 kW, 400 V. A 75 kW solar panel is connected to the power system through an inverter which converts the DC current into alternating current to deliver the load. The system is grid-connected and delivers the power generated to the grid through a small substation. The hybrid system is simulated by modeling the wind and solar power as detailed in equations (1) and (2) using Matlab, with wind speed and irradiation data fed from real-time measurements at a wind farm site.

a. Wind turbine modeling: The turbine generates power Pw depending on the wind speed v. If the speed is below the cut-in speed (vci) or above the cut-off speed (vco), the turbine does not generate power. For wind speeds between vci and the rated speed of the turbine vr, the power is proportional to the cube of the wind speed. Cp(λ, β) is the power coefficient as a function of the tip-speed ratio λ and the pitch angle β, ρ is the density of air and A denotes the wind turbine swept area.

Pw(v) = 0 for v ≤ vci or v ≥ vco
Pw(v) = 0.5 ρ A Cp(λ, β) v³ for vci ≤ v ≤ vr
Pw(v) = Prated,wt for vr ≤ v ≤ vco    (1)

b. Solar power modeling: Let Ypv be the rating of the solar panel in kilowatts. The solar power generated by a panel, Ppv, depends on the solar irradiation Gc and temperature Tc incident on the panel. α is the temperature coefficient and fpv is the derating factor of the solar panel.
GSTC and TSTC are the irradiation and temperature values at standard test conditions.

Ppv = Ypv fpv (Gc / GSTC) [1 + α (Tc − TSTC)]    (2)

The simulation is run on real-time data recorded at a wind test station in southern Tamil Nadu, India. The wind speed profile and the irradiation and temperature are shown in Figures 6 and 7, respectively. Data measurements were conducted for a period of ten days and simulations carried out.

The sum of the power generated by the wind and solar systems, calculated using equations (1) and (2), is denoted Pgen and is found to be highly intermittent and varying. It becomes a very difficult task for the system operator to schedule this power to the grid. Such high variability also tends to destabilize the grid and affect the quality of the delivered power. Hence, the power generated needs to be dispatched based on a scheduling strategy which eliminates the intermittencies and meets the demand. It is also to be noted that maximum wind generation occurs at times, such as late at night, when the load on the grid is at a minimum, and hence this power cannot be dispatched. Any sudden surges created by wind power cannot be accommodated immediately, and the hybrid renewable energy system (HRES) must adhere to the feed-in power limits of the grid. So, the scheduling strategy takes into account a certain amount of peak shaving and ramp-rate-limiting features to address these issues. The power generated (denoted Pgen) and the power to be dispatched (denoted Dem) are simulated and plotted in Figure 8. There is a need to manage the energy generated to meet the dispatch curve. Let us explore some energy management strategies to improve this power scenario and understand the need for, and role of, energy storage, in the form of case studies developed for a wind-PV HRES using Matlab. Each case study clearly presents the management strategy implemented to operate the storage system optimally.

6.1.
Case 1: Without storage

In this scenario the power generated by the HRES has to be delivered as it is generated. At times when Pgen > Dem, the demand is met and the excess generation is stalled, leading to spilling losses. At times when Pgen < Dem, the generated power is entirely fed to meet the demand and the excess demand has to be shed, leading to shedding losses. Additionally, the shed power has to be met by some other form of conventional energy, like a backup diesel generator. The power curve showing the mismatch between Pgen and Dem is plotted in Figure 9.

Evaluation of the simulation results shows that the total energy delivered by the HRES for the simulated duration of 10 days is 18.64 MWh, with a total of 4.5 and 4.8 MWh of energy lost due to spilling and shedding, respectively. This led to an overall estimated loss of power supply probability (LPSP) of 26.14% [34].

6.2. Case 2: With a single battery bank

In this scenario, consider a VRB integrated with the system to improve the power delivery and reduce losses. Flow batteries have been proving to be an excellent solution for the integration of renewable systems due to their ability to store huge capacities for a long time and deliver over 10,000 charge/discharge cycles. A detailed sizing study was conducted to evaluate the optimum size of the battery using the Bat optimization algorithm; the methodology can be referred to in [34]. The optimum size thus found for this case is 1250 Ah operating at 120 V. The specifications of the VRB are tabulated in Table 1. The VRB has a modular structure, with each module rated at 625 Ah, 48 V (30 kWh). The VRB hence has two parallel banks, each having three modules in series. Thus, the total number of battery modules is 6.
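The dispatch logic of this case can be sketched by combining the generation models of equations (1) and (2) with an SOC-limited battery step of the form given in equation (3) below. The sketch is illustrative only: all numeric parameters (cut-in/rated/cut-off speeds, power coefficient, SOC window, battery power limit and the toy three-hour data) are assumed values, not the case study's actual figures.

```python
# Sketch of the Case 2 loop: piecewise wind model (equation (1)),
# linear PV model (equation (2)) and an SOC-limited battery step
# (equation (3)). All numeric parameters are illustrative assumptions.

def wind_power(v, v_ci=3.0, v_r=12.0, v_co=25.0, p_rated=200.0,
               rho=1.225, area=707.0, cp=0.40):
    """Wind power in kW at wind speed v (m/s), per equation (1)."""
    if v < v_ci or v > v_co:
        return 0.0
    if v < v_r:
        return min(0.5 * rho * area * cp * v ** 3 / 1000.0, p_rated)
    return p_rated

def pv_power(g_c, t_c, y_pv=75.0, f_pv=0.9, alpha=-0.005,
             g_stc=1000.0, t_stc=25.0):
    """PV power in kW at irradiance g_c (W/m^2) and cell temperature
    t_c (deg C), per equation (2)."""
    return y_pv * f_pv * (g_c / g_stc) * (1.0 + alpha * (t_c - t_stc))

def battery_step(soc, p_req, e_bess, dt=1.0, soc_min=0.2, soc_max=0.8,
                 p_max=30.0):
    """One SOC-limited step per equation (3): p_req > 0 charges, < 0
    discharges. Returns (new SOC, power actually absorbed/delivered)."""
    p = max(-p_max, min(p_max, p_req))             # respect the power limit
    soc_new = soc + p * dt / e_bess                # equation (3)
    soc_new = max(soc_min, min(soc_max, soc_new))  # respect the SOC window
    return soc_new, (soc_new - soc) * e_bess / dt

# Toy three-hour run: (wind speed, irradiance, temperature, dispatch demand)
soc, e_bess = 0.5, 180.0
spill = shed = 0.0
for v, g, t, dem in [(10.0, 800.0, 30.0, 150.0),
                     (14.0, 0.0, 22.0, 180.0),
                     (2.0, 0.0, 20.0, 120.0)]:
    p_gen = wind_power(v) + pv_power(g, t)
    soc, p_b = battery_step(soc, p_gen - dem, e_bess)
    surplus = p_gen - dem - p_b
    spill += max(surplus, 0.0)    # generation that could not be stored
    shed += max(-surplus, 0.0)    # demand that could not be met
```

Spilling and shedding are accumulated exactly as in the Case 1 evaluation above: the battery absorbs part of each hour's mismatch, and only the residual counts as a loss.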
The VRB is modelled based on its SOC, given by

SOC(t) = SOC(t−1) + Pb(t) × Δt / Ebess    (3)

where Ebess is the energy capacity of the battery in kilowatt hours and Pb(t) is the power to be charged/discharged by the battery in the time duration Δt. A simple energy management strategy is developed to avoid any over/undercharging of the battery; it is shown in Figure 10. The power delivered (Pdis) plotted against the dispatch (Dem), together with the power mismatch (Dem − Pdis) curve, is shown in Figure 11. Figure 12 shows the SOC of the VRB.

The total energy delivered by the HRES is now increased to 23.2 MWh, and the shedding and spilling losses are reduced to 2.6 MWh and 2.2 MWh, respectively, resulting in a profit of $1020.4 over the simulation period of 10 days. Assuming a VRB energy cost of about $600/kWh, the payback period is evaluated at about 6 years. The LPSP is also reduced to 11.25%.

6.3. Case 3: With hybrid battery energy storage

In this scenario, let us consider a VRB and a lithium-ion battery integrated in parallel with the system to improve the power delivery and reduce losses. Lithium-ion batteries are characterized by high power capacities, which can be utilized to balance the intermittencies experienced in this case. The lithium-ion battery is operated for high power requirements and the VRB for storing energy over longer durations. Hence, the Li-ion battery has the higher power capacity and the VRB is sized with the higher energy capacity. The power limits for the Li-ion and VRB batteries are set to 60 and 30 kW, respectively. The specifications of the Li-ion battery are shown in Table 1. The VRB has an energy capacity of 625 Ah (75 kWh) and is formed by connecting three 625 Ah, 48 V modules in series. The Li-ion battery capacity is 250 Ah (30 kWh). A simple energy management strategy is developed to avoid any over/undercharging of both batteries, as shown in Figure 13.
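A minimal sketch of such a two-battery priority rule follows. The 30 kW/60 kW power limits are taken from the text; the SOC window is an assumed illustration, and the chapter's exact decision logic is the one given in Figure 13.

```python
def split_power(p_req, soc_vrb, soc_li, p_vrb_max=30.0, p_li_max=60.0,
                soc_min=0.2, soc_max=0.8):
    """Share a charge (+) or discharge (-) request between the VRB and
    the Li-ion battery: the VRB takes the request first, and the Li-ion
    takes only the excess beyond the VRB power limit, or everything when
    the VRB is full (cannot charge) or empty (cannot discharge).
    Power limits follow the case study; SOC thresholds are assumed."""
    vrb_blocked = (p_req > 0 and soc_vrb >= soc_max) or \
                  (p_req < 0 and soc_vrb <= soc_min)
    li_blocked = (p_req > 0 and soc_li >= soc_max) or \
                 (p_req < 0 and soc_li <= soc_min)
    p_vrb = 0.0 if vrb_blocked else max(-p_vrb_max, min(p_vrb_max, p_req))
    rest = p_req - p_vrb
    p_li = 0.0 if li_blocked else max(-p_li_max, min(p_li_max, rest))
    return p_vrb, p_li

# A 50 kW charge request is split 30 kW (VRB) / 20 kW (Li-ion); an 80 kW
# discharge request is split 30 kW / 50 kW; a full VRB passes the whole
# request to the Li-ion battery.
print(split_power(50.0, 0.5, 0.5))
print(split_power(-80.0, 0.5, 0.5))
print(split_power(40.0, 0.85, 0.5))
```

This reproduces the stated intent of Case 3: the high-power Li-ion battery cycles more often, covering the fast excursions, while the VRB carries the bulk energy.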
As shown, the VRB is checked for its power limits and operated first; the Li-ion battery is operated only when the power requirement is beyond the limit of the VRB or when the VRB is completely full/empty. The power delivered (Pdis) plotted against the dispatch (Dem), together with the power mismatch (Dem − Pdis) curve, is shown in Figure 14. Figure 15 shows the SOC of the VRB and Li-ion batteries. It is to be noted that the Li-ion battery undergoes more charging/discharging cycles due to its smaller capacity and greater depth of discharge.

7. Conclusion

Energy storage is becoming indispensable for the operation of DGs integrating renewable power sources. Advancements in technology now enable power storage and delivery over timescales from a few seconds to days or months. Optimized selection, sizing and siting of ESS will be critical for design engineers. Effective implementation and usage of ESS in the distributed grid requires intelligent and flexible energy management strategies capable of handling the dynamics of distributed systems. Most energy management systems focus on grid power balance and the SOC of the ESS. Recent research focuses on implementing energy management to minimize operating costs, manage uncertainties and reduce emissions. The application of optimization tools and techniques has enabled the development of flexible and effective energy management strategies. An effective dispatch and management strategy also needs to ensure efficient storage operation, so as to enable full life-cycle usage. The challenge is to prioritize these objectives and evaluate the strategy most suited to the considered application, one which can assure reliable power delivery without affecting system stability.

How to cite and reference

Amjed Hina Fathima and Kaliannan Palanisamy (July 13th 2016).
Energy Storage Systems for Energy Management of Renewables in Distributed Generation Systems, Energy Management of Distributed Generation Systems, Lucian Mihet-Popa, IntechOpen, DOI: 10.5772\/62766. Available from:","date":"2019-06-18 05:49:32","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.48973575234413147, \"perplexity\": 1916.600315974604}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": 
true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-26\/segments\/1560627998607.18\/warc\/CC-MAIN-20190618043259-20190618065259-00167.warc.gz\"}"}
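The SOC update (Eq. E3) and the hybrid dispatch rule described in the chapter above can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the clamping to [0, 1] models its over/under-charge guard, the 30/60 kW limits and 75/30 kWh capacities come from the text, while the class and method names are invented for the sketch.

```java
// Sketch of SOC(t) = SOC(t-1) + Pb(t)*dt/Ebess plus the VRB-first dispatch
// rule. Hypothetical names; limits/capacities taken from the chapter text.
public class HybridBessSketch {

    /** SOC update per Eq. E3, clamped to [0, 1] to model the
     *  over/under-charge guard of the energy management strategy. */
    static double socUpdate(double soc, double powerKw, double dtHours, double capacityKwh) {
        double next = soc + (powerKw * dtHours) / capacityKwh;
        return Math.max(0.0, Math.min(1.0, next));
    }

    /** Split a power mismatch between the VRB (served first, up to its limit)
     *  and the Li-ion battery (overflow only). Positive = charge, negative = discharge. */
    static double[] splitPower(double mismatchKw, double vrbLimitKw, double liLimitKw) {
        double vrb = Math.max(-vrbLimitKw, Math.min(vrbLimitKw, mismatchKw));
        double li  = Math.max(-liLimitKw, Math.min(liLimitKw, mismatchKw - vrb));
        return new double[] { vrb, li };
    }

    public static void main(String[] args) {
        // Charging a 75 kWh VRB at 30 kW for one hour raises SOC by 30/75 = 0.4
        System.out.println(socUpdate(0.5, 30.0, 1.0, 75.0));
        // A 50 kW surplus: the VRB takes its 30 kW limit, the Li-ion the remaining 20 kW
        double[] split = splitPower(50.0, 30.0, 60.0);
        System.out.println(split[0] + " / " + split[1]);
    }
}
```

The VRB-first split mirrors the strategy of Figure 13: the Li-ion battery only sees power the VRB cannot take.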
package com.secwebservices.api.exception;

/**
 * RSSGnome - RequestFailureException.java
 *
 * @author Robert Suppenbach
 * @created Nov 13, 2013
 */
@SuppressWarnings("serial")
public class RequestFailureException extends Exception {

    /**
     * @param message the failure message
     */
    public RequestFailureException(String message) {
        super(message);
    }

    /**
     * @param t the underlying cause
     */
    public RequestFailureException(Throwable t) {
        super(t);
    }

    /**
     * @param message the failure message
     * @param t the underlying cause
     */
    public RequestFailureException(String message, Throwable t) {
        super(message, t);
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
9,308
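A brief usage sketch of the exception class above. The wrap-and-rethrow pattern is what the message+cause constructor is for; the calling code and messages here are invented, and the exception is re-declared without its package so the snippet compiles on its own.

```java
// Standalone usage sketch: RequestFailureException is re-declared here (minus
// its package) so the example is self-contained; only the message+cause
// constructor from the class above is exercised.
class RequestFailureException extends Exception {
    public RequestFailureException(String message, Throwable t) {
        super(message, t);
    }
}

public class RequestFailureDemo {
    /** Run a request, wrapping any low-level failure while preserving the cause. */
    static String handle(Runnable request) {
        try {
            try {
                request.run();
                return "ok";
            } catch (RuntimeException e) {
                // Wrap the low-level error; getCause() keeps the original chain.
                throw new RequestFailureException("request failed", e);
            }
        } catch (RequestFailureException e) {
            return e.getMessage() + " (cause: " + e.getCause().getMessage() + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(() -> { throw new IllegalStateException("timeout"); }));
        System.out.println(handle(() -> {}));
    }
}
```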
title: Microsoft - MyAnalytics inshort: Definition translator: Microsoft Cognitive Services ---
{ "redpajama_set_name": "RedPajamaGithub" }
8,664
Table of Contents Title Page Copyright Page Dedication Acknowledgements Introduction JOE CARNAHAN KERRY CONRAN JON FAVREAU MICHEL GONDRY F. GARY GRAY DAVID GORDON GREEN LUKE GREENFIELD JOHN HAMBURG PATTY JENKINS RICHARD KELLY DYLAN KIDD KARYN KUSAMA NEIL LABUTE DOUG LIMAN MCG TREY PARKER & MATT STONE TODD PHILLIPS BRETT RATNER KEVIN SMITH CHRIS & PAUL WEITZ A PLUME BOOK THE MIND OF THE MODERN MOVIEMAKER JOSH HOROWITZ is a writer and television producer. His television work includes producing for _Charlie Rose_ on PBS and talk shows on CNBC and the Fox News Channel. His writings have appeared in numerous magazines and Web sites, including _Entertainment Weekly, Interview,_ Moviepoopshoot.com, and _Us Weekly_. His musings on popular culture can be read on BetterThanFudge.com. He lives in New York City. PLUME Published by Penguin Group Penguin Group (USA) Inc., 375 Hudson Street, New York, New York 10014, U.S.A. • Penguin Group (Canada), 90 Eglinton Avenue East, Suite 700, Toronto, Ontario, Canada M4P 2Y3 (a division of Pearson Penguin Canada Inc.) • Penguin Books Ltd., 80 Strand, London WC2R 0RL, England • Penguin Ireland, 25 St. Stephen's Green, Dublin 2, Ireland (a division of Penguin Books Ltd.) • Penguin Group (Australia), 250 Camberwell Road, Camberwell, Victoria 3124, Australia (a division of Pearson Australia Group Pty. Ltd.) • Penguin Books India Pvt. Ltd., 11 Community Centre, Panchsheel Park, New Delhi - 110 017, India • Penguin Books (NZ), cnr Airborne and Rosedale Roads, Albany, Auckland 1310, New Zealand (a division of Pearson New Zealand Ltd.) • Penguin Books (South Africa) (Pty.) Ltd., 24 Sturdee Avenue, Rosebank, Johannesburg 2196, South Africa Penguin Books Ltd., Registered Offices: 80 Strand, London WC2R 0RL, England First published by Plume, a member of Penguin Group (USA) Inc. 
First Printing, February 2006 Copyright © Joshua Horowitz, 2006 All rights reserved REGISTERED TRADEMARK—MARCA REGISTRADA LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA Horowitz, Josh, 1976-— The mind of the modern moviemaker : 20 conversations with the new generation of filmmakers / Josh Horowitz. p. cm. eISBN : 978-1-440-64949-3 Without limiting the rights under copyright reserved above, no part of this publication may be reproduced, stored in or introduced into a retrieval system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying, recording, or otherwise), without the prior written permission of both the copyright owner and the above publisher of this book. PUBLISHER'S NOTE The scanning, uploading, and distribution of this book via the Internet or via any other means without the permission of the publisher is illegal and punishable by law. Please purchase only authorized electronic editions, and do not participate in or encourage electronic piracy of copyrighted materials. Your support of the author's rights is appreciated. BOOKS ARE AVAILABLE AT QUANTITY DISCOUNTS WHEN USED TO PROMOTE PRODUCTS OR SERVICES. FOR INFORMATION PLEASE WRITE TO PREMIUM MARKETING DIVISION, PENGUIN GROUP (USA) INC., 375 HUDSON STREET, NEW YORK, NEW YORK 10014 <http://us.penguingroup.com> _For my parents_ _My mother—whose passion for the arts will_ _always be an inspiration_ _My father—who didn't walk out of_ What About Bob? _Thanks for always indulging me._ **ACKNOWLEDGMENTS** There are twenty-two filmmakers whose generosity with their time and candor made this effort possible. I've thanked all of them profusely in person, on the phone, and by e-mail. But here it is, one more time, a thanks immortalized in print. To each of the filmmakers' assorted agents, publicists, managers, and assistants, my thanks for your patience and cooperation. 
In particular, I'd like to mention Jason Abril, Raffi Adlan, Rowena Arguelles, Chantelle Aspey, Anders Bard, Samantha Bryant, Ed Choi, Joseph Garner, Todd Gold, Tim David Harms, Marc Hofstatter, Jennifer Howell, Melissa Kates, Suzanne Lehfeldt, Bebe Lerner, Heather Lylis, Phil Raskind, Leslie Rathe, Christine Richardson, Betsy Rudnick, Andy Shapiro, Gale Stanley, and Shannon Treusch. I was assisted in the transcribing of the interviews by Michael Fontana, Sheena Goldstein, and Priya Sanghvi. My thanks to everyone at The Creative Culture, most especially Nicole Diamond Austin and Laura Nolan, for leading me with a sure hand. I am indebted to my editor, Jake Klisivitch, for giving this book a proper home and presentation. When this book was only a vague idea in my head, several people lent support and advice that propelled me forward. My thanks to Chris Ryall, Keith Gordon, and to the trio of filmmakers who were the very first to agree to be a part of it: John Hamburg, Kevin Smith, and Chris Weitz. Many friends and colleagues were generous with advice and encouragement for this project throughout. Among them were Lewis Beale, John Bobey, Alexandra Bresnan, Mark Bryant, Renee Kaplan, Jerome Kramer, the Stuart Krischevsky agency, Cathy Saypol, Liz Topp, Irene Wang, and Cara Weber. My thanks to Courtney Litz for reading virtually every word of the book in your hands. Her recommendations and support were invaluable. All the thanks I can possibly muster to my family. To my parents and my brother and sister, I hope it won't surprise you to know that I actually enjoy you all even more than the movies. And I enjoy the movies a whole lot. I tend to obsess and agonize over most aspects of my life. This project was no different. Sadly for the patient, lovely, and wise Jenny Powers, she had to hear most of the whining. In the many highs and lows of this book, she never failed to be there for me. This book is better for her existence. And so am I. 
**INTRODUCTION** I have always found it remarkable that the experience of watching a film in a theater, unlike so many other experiences in life, has never really changed for me. Though I'll never again be ten, the rush of different emotions going to a movie elicits remains astoundingly constant. When the lights dim, I am back where I have been a thousand times before. I could be about to watch one of the most dismal wastes of celluloid in history, but for those first few moments, I'm watching _Citizen Kane_. It has me in its grasp, and whether it lets go of me is, in the end, up to the director. I walk into that theater with the expectation of greatness, that there may be at least one moment of brilliance. It's what director Luke Greenfield calls "the chill." It's those moments where, as he told me, "your hair stands up on your arms, and you feel like you can fucking fly." I wanted to do this book because I love how films can both excite and disappoint me, how they somehow can make me feel a little bit more alive, even if I've spent the last two hours sitting by myself in a dark room. There are clearly distinct and powerful voices emerging in film today, and it is time for attention to be paid. Was it such a different feeling for an audience member walking out of 2004's _Eternal Sunshine of the Spotless Mind_ as it was for the patron leaving 1954's _On the Waterfront_? _Eternal Sunshine_ tapped into something as honest and real and true as Marlon Brando did when he picked up Eva Marie Saint's glove a half century before. While _Eternal Sunshine_ was clearly an astonishing creation of originality, other films discussed in this book more clearly echo the past. They are, in their own way, no less original. Each generation puts its unique stamp on genres, whether it's Joe Carnahan's call back to _The French Connection_ in _Narc_ or Todd Phillips's counterculture comedy _Old School_ , a film which owes an obvious debt to _Animal House_. 
The fact that they stand up so well to (and often surpass) their inspirations is a testament to the craft of these modern moviemakers. Countless books have been written about the seminal filmmakers of the 1970s, a period worthy of examination even if, as Paul Weitz said to me, we often conveniently remove from our memories the sub-par work that came out of that era alongside the classics. But the book I always wanted to read about my heroes of the 1970s was the one written then. As fascinating as it is today to hear someone like Spielberg or Scorsese or Coppola or DePalma reflect on the past, what were they thinking at the time and in the moment? They are fully formed filmmakers now, resolute in their methods and ideologies but surely it must have been fascinating to hear from them as they were learning, while they were experiencing the early ebb and flow of a career. That is what this book is all about. There is astounding work being done today and great filmmakers are emerging every year. I wanted to capture the young talents of today before we know how it all pans out, before their Oscar acceptance speeches or their $100 million flop. The filmmakers in this book are still developing. For many, their best work is yet to come, and that is what is perhaps most exciting about them. Hindsight is 20/20. We ask how and when and why brilliant directors of the past lost their way. How could the talents behind such films as _The Godfather_ , _The Last Picture Show_ , and _The Exorcist_ be the same artists responsible for _Jack, The Cat's Meow,_ and _Jade_? Several of the filmmakers I spoke with worry about similar fates. Most agreed it was simply a matter of keeping in touch with the world around them. David Gordon Green said, "I think there's a point in your life as an artist when you're aggressively reacting to the world around you—and who you know or what you know and who you hear—and there's a point when you're not listening anymore and you're just Woody Allen." 
This sentiment was echoed by filmmakers as different from Green as McG and Brett Ratner. Can today's filmmakers avoid the mistakes of the previous generation and remain in the moment? Whether a filmmaker can maintain any degree of individuality in today's system was another topic very much on the minds of many filmmakers. Debates raged over the importance of final cut—that ability to have the final say over the cut of your film—and test screenings. Directors like Todd Phillips told me that "Testing a movie in general is crucial. I always find it amazing when directors—outside of Steven Spielberg—just say, 'Here's the movie; take it or leave it.' I just find it astounding, because ultimately you really don't know what you have until you put it up there." As for final cut, Brett Ratner asked rhetorically, "Why do I need final cut? Final cut is for artistes quote unquote, directors whose movies don't make a lot of money." On the other side of the fence was a filmmaker like Karyn Kusama, who in the midst of editing her first big studio film told me she'd likely never do another film of that kind without final cut. "I've realized I'm a strong-minded director with a very clear sense of what I want to do, and I just want to be left alone to do it," she said. As I talked to the filmmakers, I repeatedly asked myself, "What defines them? What qualities does such a diverse group share?" You first have to look at the time that has produced them. This is the first group of filmmakers to make films in the wake of Quentin Tarantino. Make no mistake, the long shadow of the famed former video-store clerk hangs over many whose stories are told here. As Joe Carnahan says, "Tarantino moved over the entire independent film scene like the fucking Hindenburg." Like Tarantino, these men and women grew up with super eight cameras, cable television, and VCRs. For the first time we have a generation of moviemakers who did not need to leave their homes to study the greats of the past. 
In fact, this may be the first filmmaking generation whose greatest influence wasn't the world around them so much as the world they saw in their living room on TV. David Gordon Green recalls the opening of the second Blockbuster video store in the nation just blocks away from him in Texas. "I remember a sign when they were first opening it saying 10,000 VIDEOS and I thought, whoa that's going to be trouble." Video literally changed these budding filmmakers' lives. If the filmmakers of the 1970s had Vietnam and the civil rights movement shaping their ethos, today's moviemaker is more likely to be shaped by the society reflected to him in pop entertainment. It's no surprise that perhaps the most dominant filmmaker in the young lives of these filmmakers was Steven Spielberg, the man who seemed to control Hollywood in the 1980s. This is a group that has lived through the rise and commodification of the independent film. Is such a seeming oxymoron possible? It would appear so when the term "indie" needs be redefined every year or two to the point where we now cast a cynical eye on the new independent feature released on 500 screens by a branch of one of the largest studio conglomerates in the world. But that does not mean there is no independence left. The filmmakers here look not just to Tarantino but to men like Steven Soderbergh and John Sayles for inspiration, two directors who have perfected the art of operating within the studio system without becoming slaves to it. One of the keys to negotiating this uneasy truce between commerce and art lies in technology. And it may be the burgeoning filmmaking technologies that will truly change the face of the filmmaker of tomorrow. No longer is filmmaking a forum for the few. The speed, ease, and affordability of digital video may have imploded the filmmaking world as we knew it. The filmmakers in this book represent the beginning of this shift. 
They are the last to be reared solely on celluloid and the first to enjoy the flexibility of the digital world. Nowhere is that reflected more clearly than in the story of Kerry Conran, an unassuming CalArts computer maven who began creating miraculous worlds in the comfort of his own apartment. It is not just the technology available to the makers of film that has made a difference to the directors of today. In an altogether different way, the advent of the Internet is surely one of the key factors in the career of Richard Kelly, whose _Donnie Darko_ was rescued from box-office oblivion thanks to a fervent online following. Like Kerry Conran, virtually all of the filmmakers in this book charted their own unique paths to successful careers. Of course, there is Kevin Smith famously cobbling together a modest budget for _Clerks_ on maxed-out credit cards, but there are also people like Richard Kelly spending more than $40,000 on an ambitious short film, and a playwright, Neil LaBute, with no film experience whatsoever, creating one of the most provocative debuts in the modern era of the medium. While talking to these filmmakers, I quickly realized that their stories—their journeys from wide-eyed adolescents to celluloid power players—were as compelling and illuminating as many of their films. Often their moviemaking dreams seemed almost too far out of the realm of possibility to even say aloud. "Daring to think that I could ever make films in Hollywood was beyond even a fantasy," Kerry Conran told me. When asked how his parents reacted to the news that he wanted to make movies, Kevin Smith said, "It was as if I'd said to them, 'I'm going to go discover the nineteenth dimension.' It was a foreign fucking notion to them. They were like, 'OK. Good luck with that. We support you, but if it doesn't work out, go get a real job.' 
" My goal for the group of directors I chose to talk to for this book was simple: to bring together a group of filmmakers who represent the breadth of talent and diverse voices making films for American audiences today. That meant filmmakers who have specialized in comedy, those who have specialized in drama, and those whose specialty has been in having no specialty. Perhaps the greatest testament to the breadth of talent that exists today is that twenty-two more filmmakers could have been added to this book without any diminishment of talent. I had no hard and fast criteria for the filmmakers in this book other than talent and significant and promising contributions to American filmmaking. No arbitrary age requirement was determined, no quotas were given. If a book like this were done thirty, twenty, or even ten years ago, the lack of diversity would have been striking. Even today, the reality remains that Hollywood filmmaking is still dominated by one gender and race. However, the playing field today is undeniably shifting and will, no doubt, continue to do so, though there is still a ways to go. Karyn Kusama reflected on how, as a female director, often she was treated as something special, an anomaly, after her first film, _Girlfight_ : "I don't want to be considered, like, the miracle baby." I was continually surprised by the generosity and candor of my subjects. Access and time are the greatest assets an interviewer can receive. All the filmmakers in this book were extremely generous with their time, some speaking with me on as many as four or five occasions. And in an industry where honest communication sometimes seems hamstrung by publicists, I was pleased by how many of these filmmakers spoke with disarming honesty. I remember being taken by surprise as Patty Jenkins revealed to me that, on the eve of filming her breakthrough movie, _Monster_ , she was on the brink of death because of a freak accident. 
I have a feeling that Trey Parker would have told a chimpanzee how much he and partner Matt Stone "fucking hate actors" the afternoon we spoke. I just happened to be the one chatting him up that day. I recall a wide-ranging conversation with F. Gary Gray where I found him interviewing me almost as much as I was him. Halfway through our afternoon on the phone, he admitted he was extremely nervous, having never spoken in such depth to a writer about his career. Only weeks after his second film had left movie theaters, I found myself in a coffee shop with Dylan Kidd, dissecting what had gone wrong on his sophomore effort, _P.S.,_ after everything had gone right with his first, _Roger Dodger_. I met Michel Gondry at a SoHo restaurant for lunch as he was taking a break from editing his third film. He admitted to being fearful about what his still-in-progress work would turn out to be, this only months after his _Eternal Sunshine of the Spotless Mind_ had been named one of the best films of the year by virtually every major film critic and institution. Fear, in fact, was a recurring theme in my talks with many of the filmmakers: fear of temptation, fear of money, fear of failure. Kevin Smith told me that he thought he might have been kicked out of the moviemaking club because of the disappointing returns of _Mallrats_. Luke Greenfield predicted that fear would be the overriding theme of his career. Fear was perhaps rivaled only by feelings of self-doubt. Perhaps it is the nature of the business to always believe someone more talented and driven is out there gunning for your job, but the men and women making movies today seem to doubt themselves at every moment. Michel Gondry has yet to be satisfied with any of his film work. F. Gary Gray says he dusts off his résumé every time he sees the first cut of one of his films. 
It goes beyond doubt for Trey Parker and Matt Stone; it's more about misery, as they told me how every moment of making movies is excruciatingly painful for them. Filmmaking, as if it were a surprise, is not a business for the weak. Despite the pain and self-doubt and borderline paranoia, each of these filmmakers holds an astounding measure of optimism and self-confidence. Their trepidation about making movies is rivaled only by their singular belief at times that they are truly meant to be making films. Many are resolute that this is their one true destiny. The first films of many I spoke with stand as testament to their belief that their stories, however uncommercial they may have seemed, demanded to be told. If there is a lesson to be learned from this group, it is perhaps to tell the story you want to tell however you can tell it. You will find no release slate for a major studio that includes such seemingly commercially averse properties as _Donnie Darko_ , _Monster_ , and _George Washington_. But each found its way to the screen thanks to a vision that could not be compromised. It is hard to believe sometimes that filmmaking as we know it is less than a century old—an extraordinary art form still in its infancy. The filmmakers here are aware of the past, hopeful of the present, and a little wary of the future. After all, just as the moviemakers of today are assuming the mantle of the last generation, there are always others rising to take their place. As Karyn Kusama says, "There're always new, young mavericks around the corner." Josh Horowitz July 2005 **JOE CARNAHAN** "I decided I didn't want to turn out like one of these guys at Starbucks, sitting there talking about what kind of movie they're going to make." 
**SELECTED CREDITS** _Blood, Guts, Bullets and Octane_ (1998)-writer/director _Narc_ (2002)-writer/director _Smokin' Aces_ (2006)-writer/director Joe Carnahan announces how audacious a filmmaker he is in the opening moments of his gritty 2002 crime drama, _Narc_. A high-speed chase between an undercover cop and his prey leaves you breathless. All bets are off by the time the sequence concludes with a pregnant woman shot and the hero screaming from the gut. Carnahan knew there was only one way to achieve what he wanted: Give a stuntman a camera and have him haul ass after his two actors. Those opening minutes set a tone for a film that recalls the danger and rawness of Friedkin's _The French Connection_ and Lumet's _Serpico_. But this was 2002, and no one was making movies like that anymore. No one except for Joe Carnahan, a former promotions producer for a Sacramento TV affiliate who managed to cobble together a career by using the tools around him and never giving up. _Narc_ , as he's quick to point out, was his sixteenth script. In the wake of the low-budget film, Carnahan found himself sought after for projects by the likes of Tom Cruise and Harrison Ford. Ford was said to have called _Narc_ the best film he had ever seen. _Where did you grow up?_ JC: I grew up in Michigan. I lived for a time briefly in Utah, and then in Michigan. _Your dad was in the Air Force, and he also had a grocery store?_ JC: Yeah. He had a small grocery store. _When did you realize your family wasn't well off financially?_ JC: We lost our house and we moved out to California. The bank basically took it. I'll never forget my mother one Christmas said, "Instead of having presents under the tree, why don't we cut pictures out of catalogues of things we like?" And I remember my brother and I laughing at that suggestion. I didn't realize until I was much older how hand-to-mouth it really was. My dad I think in his best year was making almost $19,000 a year.
I'll never forget to this day being in this really small house and the Realtor actually telling my parents, "You guys are poor." He literally said that. He was looking over our credit report. How my dad didn't kill him, I don't know. _That's got to hit you hard as a kid._ JC: Oh, yeah. We weren't below the poverty line or anything, but we certainly were lower middle class. _What early film experiences resonate with you today?_ JC: Nothing really knocked me on my ass like _Raiders of the Lost Ark_. I'll never have an experience like that in a movie again. I was just so thoroughly swept up. It was the first time where I realized I was being manipulated by a movie. I saw it in a little theater in Mt. Pleasant, Michigan. And I remember _Star Wars_ specifically. I remember in '97 I was seeing the new release, and it was a brand-new print. I was sitting there at the end in that trench scene and he goes, "Use the force," and I swear to God the hair on my arms stood up. I'm like eight years old again. It was that kind of moment. _What were the first shorts you had an opportunity to shoot?_ JC: We had a great super eight camera and I shot a bunch of shorts on that. I shot a Sergio Leone kind of thing called _Noose_. I did a bank-robbery movie with some buddies of mine because I had an old '77 Camaro that looked like a getaway car. We did all kinds of shit like that, and I shot it on this super eight with a wide-angle attachment because I just was in love with the way it looked. I remember sitting in my bathroom at my mom and dad's house during the 1988 NBA finals cutting my films. It was great. I was very contemptuous of video. _You went to San Francisco State? Is that where your film tastes began to expand?_ JC: That's where I really became aware of the avant-garde, Kenneth Anger, Stan Brakhage, Maya Deren, stuff that was really off kilter to me because I considered myself, and still do, as having reasonably commercial tastes. 
I remember looking at _Persona_ for the first time, and it really knocked me on my ass and I couldn't figure out why. Some of the images really haunted me. I was kind of picking my favorites and watching everything I could, just developing different tastes in movies. _Were there any moments where you became especially resolute about pursuing filmmaking as your career?_ JC: I was working myself through school, doing a lot of manual labor and working for a moving company. I worked for a deli supplier for like a year. It was just brutal. I'll never forget the conversation I had with the owner taking me through this warehouse with all this provolone and these big wedges of salami and him telling me, "Film you could do as a hobby; this you could do for your life." I'm thinking, "Oh my God!" _If there's anything that's going to motivate you to make films . . ._ JC: We all look for little catalysts, and that was kind of a little signpost, "Dead End Ahead." _When did you start to write?_ JC: I started actively writing in college. To be honest with you, I didn't know how to write a screenplay so I took _Silver Bullet_ , which is a published shooting script, and wrote it like that. I had no idea what I was doing. It was unreadable. I'll never forget sending a script to Shane Black, who at the time was probably twenty-five and he'd written _Lethal Weapon_ , and getting a letter back from him. He could have really dashed the hopes of this impressionable kid. He took the time to read it, so that was huge. I just thought that was such an amazing thing for him to do. _What did you say to him?_ JC: I was like, "Can I use your name to get an agent?" I was just a dumb kid. I didn't know any better. He said, "I'll do my best for you." But his hands were really tied. The fact that he was even willing to take time out was such a huge thing for me. I always tell people that _Narc_ was the sixteenth script that I'd written. You have to really go through that to appreciate it. 
It was a ton of stuff that will never get made! I remember writing three scripts in a ten-month period. I was working a warehouse job, and I would go in at five in the afternoon, get off at five in the morning, and come back and write until I had to work again. I was just seized by this kind of paralyzing fear that that's where I was going to be for the rest of my life. I didn't want to end up there under any circumstances. _How were the films you were making in school received?_ JC: Some were kind of head-scratchers. I did a thing called _Wired to Blow_ about a conversation between a prison psychologist and a mass murderer who had targeted women. When I look back on that now, it's really deeply disturbing some of the stuff that's coming out of this guy's mouth, how misogynistic it is. I remember screening that and making some enemies out of the women in that class. (laughs) _When you graduated, what were you thinking about your future?_ JC: I thought, "I have got to transition out of moving dressers and humping refrigerators. I have got to stop doing that and really take myself seriously." So I go to work for this guy named Gary Jones and you know there's, like, a handful of people in your life who you say, "OK, if I have an enemy list, this guy's on it." So I go to work for him and he's a Jet Ski salesman and he wanted to get into video duplication and commercials. Basically what he's doing is soft-core porn out of the back of this place. At night he's bringing strippers over. I just had nothing but contempt for this guy. I thought he was a jack-off and a punk. Gary had a habit of firing guys on a Friday so they wouldn't roll over into the next pay period. So he calls me into his office around four thirty and cans me, and I went after him. I had to be taken out of there forcibly. I was just livid because at the time my wife was five months pregnant and I thought, I'm done. So I go home and I sit there and I'm spinning. 
I literally sit down and I remember there was a TV station in town that did all their own on-air promotions. So on a lark I said, "To hell with it," and I sat down for an hour and wrote three of them. I take these down to the station and I put them in a manila envelope to the attention of the promotions director. I got the gig, and that began what I still consider the best day-to-day kind of time-clock job I ever had. _And it was because of the access to the equipment there that you were able to make a small film of your own?_ JC: Yeah, and by then I'd made a lot of professional contacts. I knew people at the TV stations in Sacramento. I'd done favors for them. I'd written short scripts for them, and I knew there was going to come a time where I was going to call on all these markers. And I'd read Robert Rodriguez's book [ _Rebel Without a Crew_ ]. He kind of demystified a lot of it. I took from that, how to do it quickly and inexpensively. _Is it true that_ Blood, Guts, Bullets and Octane _started as a play?_ JC: Yeah, it was called _Filibuster_. It was about the kidnapping of a congressman, and I thought, It's not working as a one-act. It doesn't have legs but these characters are really fascinating. I started shooting it. I just didn't want to wait anymore. I had, like, thirty really good pages and I thought, "What the hell, I don't know where it's going. Let's just go and we'll figure this out." It was thirteen days over six months we shot that film. I decided I didn't want to turn out like one of these guys at Starbucks, sitting there talking about what kind of movie they're going to make. It's in their head and it's never going to get out. I was twenty-six and I was like, "OK, I want to go, man." It was important for me to do it, because it was going to keep my skills sharp and it was better than just sitting with my thumb up my ass waiting for a break to come. _And it cost just $7,300 to make?_ JC: Yeah. That's what it wound up being. 
At the time it was like, "Guys, let's just have some fun." We weren't looking to launch careers. I was looking at it like, "OK, I'm done doing shorts. I just want to see if I can hold a three-act narrative together." And I'm using myself and a lot of amateur actors. You're not going to find any method-trained actors in it. (laughs) No one had read _An Actor Prepares_. _A lot of comparisons were made to Tarantino's style when people looked at it._ JC: It became very easy to say it's just a fucking Tarantino knockoff and that's all it is. I look at that film and I see a lot of my personal life in there, the relationship between this old man and this infirm, invalid woman and this whole sacrificing-at-all-costs stuff. I don't look at the aspects of it that are kind of crime fiction. Would I make that film again? Hell, no. Nor would I make any of the short films I've made. It's either you evolve or you don't. At the time Tarantino moved over the entire independent film scene like the fucking Hindenburg. I take nothing away from Quentin. I'm a huge fan! But I thought this idea that he'd somehow invented sliced white bread, it didn't hold. You were really discounting the work of John Woo and Sam Peckinpah and Elmore Leonard, a lot of the stuff that influenced him. _What was the film about if it wasn't the crime elements for you?_ JC: Once the hipster shit was stripped away, you had a story about a guy who was doing extraordinary things to save his dying wife. That's really what it was. It's very important to me and, if anything, that's the part that's most ignored. It's funny, I see the Danny Woo character in Ray's character in _Narc_ , that the ends do justify the means. Sometimes you have to do horrible shit to see a right done. I'm absolutely drawn to that. The stuff I'm working on traffics in that vein I think because maybe I see myself in a lot of ways like that. 
I consider myself a relatively decent guy and a polite guy and I treat everybody the same and I'm not an asshole, but there's a part of me that's incredibly mercenary. But I think it's all in the justification of the greater good. _What was the turning point for_ Blood, Guts _? Before Sundance, how did it gain momentum?_ JC: We went to the Independent Feature Film Market which is a meat market. But Amy Taubin in _The Village Voice_ decided to champion the movie and literally everything changed overnight. And then there was this whole hustle to get this thing ready for its Sundance premiere. _How much were you working on it up until Sundance?_ JC: Eighteen hours a day at some stretches, just brutalizing ourselves to get this thing done. I remember going to Seattle and they had done an optical—I think it was the sniper rifle scope POV—and I looked at it and thought it was one of the worst optical effects I'd ever seen. I remember going outside in front of this lab and pulling a signpost out of the ground and throwing it in the street. Once it went to Sundance, I had a bad screening and then at the Holiday Village Cinema, it was a bunch of snowboard kids and they loved it. Then once I got the film in decent shape I went on to Berlin, went to Edinburgh, went to Tokyo. It was great. I got to see most of Europe and Japan and Asia on a little $7,300 nothing investment. _Your follow-up was_ Narc _. Was that a script that was ever shopped around as something for someone else to direct?_ JC: Mark Pellington was a name that they were mentioning seriously at one point to direct, and I was just not having it. I was not going to allow that thing to go out without me directing. At the time, it was functioning as a tremendous writing sample. I was getting all these tremendous writing jobs off of it. _But you still couldn't get_ Narc _off the ground for you to direct?_ JC: For three years it was, "Nah, we're not interested." I was like, Jesus, I cannot get this thing afloat at all! 
But I was just bound and determined that I was going to do that film come hell or high water. If I had to do it for five hundred grand, if I had to do it for fifty grand, I was going to make that movie. _I know the turning point came when Ray Liotta signed on. Did you guys connect on the material right off the bat?_ JC: Almost immediately. Because I saw in Ray this willingness to allow himself to be kind of reinvented and for me to mess with his physical appearance. I like to take an image of something and just stand it on its ear. I don't see Ray when I see that movie. I see that character. I always remember watching the monitor while Ray is sitting in the car, talking about his wife. To watch something you've written conveyed so effortlessly and so beautifully and so sublimely by a guy that talented is just a treasure! You feel this kind of lump in your throat as you're watching him do this and you're just thinking, "My God, man, I'm literally watching a home run in slow motion." That's what it felt like. _There seem to be influences from William Friedkin's work and Errol Morris's_ The Thin Blue Line _in the film._ JC: Oh, absolutely. I was a huge fan of _The Thin Blue Line_. It's a much bigger trendsetter than it's remembered for. All those TV reenactment docudrama things owe a huge debt to that movie. I had always been a big fan of the cop film. I'd watched _Serpico_ and obviously Friedkin's stuff. They were inspired by the French New Wave and this kind of documentary style of capturing things. I just kind of wrote this script in the spirit of that. I also had a lot of friends that were cops, and one of my friends—one that was in a lot of my short films—is a raging heroin addict. He was a snitch for this undercover narcotics cop and, for whatever reason, I was granted kind of this fly-on-the-wall status when I would hang out with these guys. They would talk about any number of things in front of me, a lot of which wound up in the film. 
_Did the prevalence of television cop shows influence you at all?_ JC: I thought that TV had effectively killed what I considered to be this classic genre. _NYPD Blue_ had come out, and I thought it was incredibly by the numbers. If you ask anybody during the time we were making _Narc_ , I kept saying, "We cannot make fucking _Law & Order_!" Because when the script was making the rounds, I was hearing, "Oh this is fucking _Cagney & Lacey_. Who gives a shit about this stuff? Nobody cares about a cop film." It was rejected out of hand by everybody. _The film has an accomplished look despite the small budget. How much did you have?_ JC: If we had $3 million to make that film, I'd eat my hat, man. The crew was constantly being told there was no money and they were going to shut the film down. It created this powerful sense of dread and angst that if you're shooting a romantic comedy you're fucked, but the fact that we were doing this kind of heavy-duty melodrama, it worked beautifully. I would never want that kind of dread hanging over a project, but it actually pervaded the entire film and I think really worked well. _The money in fact nearly ran out at one point, didn't it?_ JC: I remember walking on the set and hearing the production manager say to the crew, "I don't know when you're going to be paid or if you're ever going to be paid again." Rounding the corner and seeing the entire crew standing there, what do you do in that moment? You really have to find your feet in that moment, because they are definitely put to the fire. So I was really straight with them. I was like, "Listen, I've got a wife and children and the family and bills to pay and if this shit continues, if we don't have money by the end of today, I'll walk off this film." And for whatever it was worth I think being totally honest with them in that moment in some way galvanized us and we carried through. 
_Having to lead in those conditions with seasoned pros like Liotta and Jason Patric, was there ever tension? I've heard you and Jason had some conflicts._ JC: We had plenty of tête-à-têtes. Literally that same morning I heard that announcement I got into a major screaming match with him, thinking, "OK, I'm going to hit this guy." It was that kind of shit and then we're drinking beers together by the end of the night. _What got you so enraged at the time?_ JC: My opinion at the time was not everybody was doing their homework. Not everybody was prepared to do what we needed to do. And I felt he was reacting from a place that was trying to cover that, trying to mask the fact that his level of preparedness was not where it needed to be. _So he was relying on his natural talents instead of knowing the material?_ JC: Or using diversionary tactics to kind of conceal what it really was, which was like, "Come on, kid, you don't know your lines." And again taking nothing from Jason, because I thought he was extraordinary and the guy does more with just a look in his eyes than most of this younger generation of actors can do with a fucking three-hour movie. _The chase scene that opens the film really sets the tone._ JC: I hadn't seen what I considered to be a really great foot chase. _To Live and Die in L.A._ had a really interesting one. I thought, You know what, let's give it a shot, man. Let's just shoot for the moon and see what happens. I wanted the immediacy and the rush to kind of just take people's breath away and just rocket them into that world. I really wanted the audience to feel the sensation of literally hunting someone down with the intention of either apprehending them or killing them. _Does_ Narc _feel like a personal film to you? Do you see yourself in it?_ JC: It's funny, I look at _Narc_ and I see the kind of disintegration of my marriage. 
They're always fighting and you know cop work, like in filmmaking, you're taken away for large periods of time and you obsess over your work and things get left by the wayside. It's kind of like, Goddamn I don't want to see something that the director doesn't personalize in some way. I look at _Punch Drunk Love_ and I think, Damn, I want to believe that this guy was in love like that at some point in his life, because it moves me in a way that I see myself in that. I think that's what all great films do for us. They come down to two very simple things: human beings and human behavior. At least, the ones that I'm really intrigued and interested by. I don't need to see fucking spaceships blow up. _When you look at the filmmakers out there today, is there one in particular to emulate?_ JC: To me, the career to have is Soderbergh's. Soderbergh is that throwback to kind of that early '70s filmmaker that was really iconoclastic and truly maverick in the way they made movies. He makes _Ocean's Eleven_ and then goes on to make _Full Frontal_ for $2 million. I have great admiration for those types of choices because they don't adhere to conventional wisdom. Unfortunately you wind up with the generation of filmmakers we have now. They're certainly not setting the world ablaze with personal vision. We're becoming more and more commercial, more driven by the opening weekend. It's a bunch of fucking nonsense, because those are not the movies that we're going to remember in twenty or twenty-five years, nor are we going to give a shit about the people who make them. There're so many directors kind of dabbling here and there. And you go back to Scorsese and he just made movies, man. He wasn't developing fucking TV! _Since_ Narc _you were attached to two very high-profile projects starring Harrison Ford and Tom Cruise respectively. 
Because your participation with both fell through, have you been concerned that perhaps there might be a perception that Joe Carnahan isn't ready for that level of filmmaking?_ JC: No. If anything, I've gotten this amazing street cred and it's not anything that I'm looking for. But I don't think that perception's out there. If it is, do I give a shit? Not really, because I know I was perfectly willing and able under circumstances that were obviously beneficial to me to execute both those films. I'm not saying that I'm any better than anybody else. I have my own personal demons and my own kind of things that I have to answer to internally and I also have two small children and if I don't love it every morning when I wake up and every night when I go to bed, I'm wasting my time. I'm taking time away from my kids and [that is what] really is going to mean something to me in fifty years when I can't get a film made because I'm too old or dead. I'm not going to be known as the dad who was never here or never around. I refuse to be that guy! If that winds up costing me professionally, I'm willing to pay that price. 
THE DIRECTOR'S TAKE JOE CARNAHAN _What is the first film you ever saw?_ _The Love Bug_ _What is your favorite film of all time?_ _Raiders of the Lost Ark_ and _Yojimbo_ _What's your favorite line in a film?_ "Isn't that just like a Dago, brings a knife to a gunfight."— _The Untouchables_ _What movie made you realize that film was an art?_ _Raging Bull_ _What movie do you consider your guilty pleasure?_ _American Pimp_ _Who is your favorite movie character of all time?_ Cool Hand Luke _What's your favorite movie snack food?_ Popcorn soaked in butter _Who is your favorite director of all time?_ Akira Kurosawa _Who is the most impressive filmmaker working today?_ Paul Thomas Anderson _What quality do the best directors share?_ Great humanity _Who is your favorite actor or actress of all time?_ Paul Newman _Who is your favorite actor or actress of today?_ Meryl Streep _Who would you cast as yourself in a film about your life?_ Myself _If you could remake one movie, what would it be?_ _Citizen Kane_ _What is your best quality as a director?_ Sense of humor _Finish this sentence: I'll never direct a movie with . . ._ more special effects than dialogue. _Finish this sentence: The perfect movie is . . ._ inspiring, consciousness-shifting. _What will they be saying about your work fifty years from now?_ "Joe who?" _What piece of advice do you have for aspiring filmmakers?_ Hustle your ass off and, to quote current San Diego Chargers coach Marty Schottenheimer, "Refuse to be blocked." _What are you as passionate about as moviemaking?_ My kids **KERRY CONRAN** "We've finally come to a time in our history where you can create what you see in your imagination, what you dream. To have that as an opportunity is very interesting." 
**SELECTED CREDITS** _Sky Captain and the World of Tomorrow_ (2004), writer/director Kerry Conran, a seemingly shy, self-effacing Michigan native, threw down the gauntlet as few filmmakers ever have with his first film, _Sky Captain and the World of Tomorrow._ The film blends cutting-edge technologies with traditional filmmaking techniques so seamlessly that it's no wonder the project had been labored on for more than a decade. Of course, Hollywood wasn't aware of it being in production. In fact, only a handful of people were. That's because it was all being created in Conran's apartment on his home computers. Eventually the ambitious fantasy throwback to _Flash Gordon_ serials became too big to live just there. It was brought to the screen in 2004 starring Jude Law and Gwyneth Paltrow. Conran's imagination and technical savvy may do nothing less than revolutionize and democratize the way films are made for decades to come. It almost sounds like another filmmaker who once dreamed impossible dreams of galaxies far, far away. _Could you tell me a little about where you grew up?_ KC: I was born in Flint, Michigan, and I lived in some of the surrounding communities as I grew up. When I was growing up, Flint was almost like a steel town in the sense that it was built and founded on the auto industry. During the time I was growing up, it was a pretty flourishing town. The auto industry was doing fine and it was very "Americana." It was probably like any other picturesque place you could grow up in. On the one hand, it was a very unremarkable childhood. My mom was an artist and a painter and my father wrote. So they kind of encouraged our imaginations and creative sides, which was perhaps the only unusual thing. In that sort of city you're usually being honed for the auto industry. To think of anything outside of that wasn't blasphemous, but uncommon. _Your dad worked in the auto industry?_ KC: Yes, in fact, he did. 
_What sort of writing did he do?_ KC: Just smaller short-story kinds of things. He would work with me, encouraging me, helping me, from a very early age. I think I was in first or second grade when I wrote a little play that we performed at the school and it was certainly something he helped me put together. _What was the first-grade opus of Kerry Conran all about?_ KC: It was the trial for Goldilocks and the three bears. She was being held accountable for breaking and entering. It was a continuation that I don't think had ever been visited. (laughs) A strange topic for a first-grader. _You're the middle child of three, and in fact your brother would later be the production designer on your film. Do you think you had any of the clichéd characteristics of the middle child?_ KC: Although I've grown to be more introverted, I wasn't that way growing up. Unbelievably, in high school I was actually the class clown. I was a bit more demanding of attention. Maybe that is the trait of a middle child. My brother and sister were happy to yield the stage to me. _It's very interesting that you say you were the class clown, because many people now call you introverted and even shy. What happened? Did you really change that much?_ KC: I did go through this sort of metamorphosis when I left high school, and I think it's because I got a little bit more serious about really trying to leave Flint. I had always toyed with the idea of making films when I grew up, but it was a very sobering thought that I knew no one. Daring to think that I could ever make films in Hollywood was beyond even a fantasy. So I decided it was really going to take some effort and motivation on my part. Literally the first summer I was out of school, I spoke to no one. I stayed in my room and started writing and thinking, sort of trying to figure things out for myself. I would say that some phenomenon did happen that made me shut off the rest of the world a little bit more later in life than I did growing up. 
When I left Flint, and this is the strange thing, that memory is frozen in time for me in a way. I have a little bit of a Peter Pan complex in that I don't feel any different. I really don't feel like a proper, functioning adult member of society yet. I still feel like I'm going through an extended childhood in a way. _You're just playing the part of an adult?_ KC: Exactly, and that's why it's hard when people actually take me seriously now. When you get in a room with adults and they're talking about giant robots, it's a surreal experience. _What do you think you were connecting with early on in the movies you saw?_ KC: I think it was a channel directly to what you imagine as a child when you're running around with towels on, pretending you can fly. In the case of _Godzilla_ or _King Kong_ , it was the sheer spectacle. The scope of it was maybe even a little bit bigger than your own imagination. _By my math you would have been about twelve years old when_ Star Wars _came out. I would guess that it had an impact on you?_ KC: I remember that vividly. I remember getting in the car and pretending you were in a spaceship. It was such an experience that it followed you all the way home and maybe for the rest of your life for those that wanted to follow in its footsteps. I would say, yeah, it was a huge influence. _When did you first get your hands on a film camera?_ KC: It was probably around ten, eleven, or twelve years old. I started off doing stop-motion animation. And that's actually what I would say was the other gigantic influence, the films of Ray Harryhausen. Eventually I started to do short films with friends and family. Probably the longest film I made was ten minutes long. I was about fifteen or so when I did that, and I spent probably a year or two on it. That was eventually the film I used to get into CalArts, actually. 
_How did you end up at CalArts?_ KC: It was a small arts school and, coming from a smaller town, that had an immediate appeal to me over the giant universities. It also encouraged experimentation, which was interesting to me. I had never even traveled outside of Michigan to that point, and I was sort of deposited there without a car. I was suddenly exposed to all these artists and film students. _Your thesis film is notable for its ambition. Tell me about_ That Darn Bear. KC: It's appallingly embarrassing to speak about now. I spent almost three whole years working on it. I saw an opportunity to work with some of the animators [at CalArts] to make a film that combined live action and animation not unlike _The Incredible Mr. Limpet_ , but in a way that hadn't been done before—to make it really interactive, to make this animated character physically interact with human beings in a live world. That was sort of the impetus for it. _Unfortunately Robert Zemeckis had a similar idea at that time with_ Who Framed Roger Rabbit? KC: Yeah, and it was a year into _That Darn Bear_ that I was just starting to hear about it. Obviously Zemeckis had really taken this well beyond what I could do on no money. I couldn't even dream to compete with what he had done. When it came time to do the animation, the expense was growing and my desire to finish it was diminished because I'd already seen a polished professional version of what I was trying to do. I never finished the film. _Coming out of school, did it seem you were any closer to realizing your dreams?_ KC: It seemed clear that the idea of having a film financed was still every bit as distant as it was when I was growing up in Flint. Even though I'd gotten much closer and I'd met some people, it was almost impossible to get anyone to take a chance on you. It was around that time that, for the second time in my life, I decided to drop out of existence. 
I realized I would never get an opportunity to do [a film] unless I made one for myself. So that's when I started looking at the emerging computer technology and how it could be used. And it was not unlike the same spirit that interested me in making _That Darn Bear_. It was: How do you integrate the computer technology with a live-action film but for a different purpose—to make it look like a real film but ultimately never leave your room? My goal was never to leave my apartment and feel like I'd traveled across the globe, and I thought it was possible. _When was the first inkling for_ Sky Captain _born?_ KC: Around 1990, when I was just leaving school. I wrote a spec for an animated Saturday-morning cartoon that I actually took to Hanna-Barbera, who promptly told me they don't take outside material. It was initially supposed to be a serial about a pilot and a reporter and they were pitted against a scientist who manned a giant robot. And the idea, not unlike the film, was to have it end with a cliffhanger each Saturday morning. So it really began there and was later sort of folded into a live-action version of it and expanded greatly and modified. _You were working on the short that became_ Sky Captain _essentially in isolation for years. How were you making a living?_ KC: By the time I'd graduated from school, I'd gotten a decent working knowledge of what you could do with the Mac. I had written a few little things for it, and somehow word got out that I was doing quirky nonintuitive things with the computer that filled certain gaps. I had gotten a phone call to write a piece of animation software. Instead of being paid, I asked to keep the equipment that had to be purchased to do it, always with a mind of trying to make this short film of mine. I was really on life support. I had just enough money to survive. That also contributed to me dropping out of society. I just dedicated every waking hour I had to working on the short. 
_Can you explain to me what the goal was? What were you trying to accomplish with the film you were creating in your computer?_ KC: I simply sat down and thought about the images I wanted to create and the story that I wanted to tell, and I looked at films. One of the films I studied and tore apart was _The Third Man_ , and I would look at not just the composition of the frame but what made it up. I would look at a closeup of Joseph Cotten and I'd realize that the background was a wall that had a splash of light and I thought, "Well, I can create that fairly easily." I can mimic that. Then I would look at an exterior view and it would be a city and I thought, "How would you create that?" I started looking at archival photos and I thought, as long as you had some sort of depth in front of the blue screen, you could make it look like people were walking in front of this photograph. Then I realized if I could put people in photos, then the sky was the limit. I literally could do anything. That was really the impetus and creation of this whole notion, which is to treat live action exactly how animation's been done for the last hundred years and use these new tools to facilitate it. _At one point I understand you considered presenting_ Sky Captain _as a sort of "lost film" that you had discovered?_ KC: I never expected anything I'd created to be seen outside of perhaps the Sundance Film Festival. That was the big dream at the time. I knew that, ambitious as I wanted to be, I could never really finish 120 minutes by myself. So I got this idea where I created a fictional character for myself and made him a protégé of Frank Capra. I thought, What if there was this unknown genius who had done things that no one had seen in that era but it was so beyond what anyone had seen? I was going to take photos of Capra and put someone else in it and just doctor still photography almost like _Zelig_ or _Forrest Gump_. I thought that's the way I can do it. 
I can burn up ten minutes just with still frames and audio and pretend the rest of it was lost. I wanted to present this thing not as a hoax, but a completely fictional accounting of a person and a film that never existed and present the entire thing as if it was newly discovered. It was really only when I showed the film to Jon Avnet that I realized that I could actually fund a real version of it instead and I went in that direction. I'll never really know if that was a mistake or the way to go. I really would have loved to have done the other version, I have to say. _At a certain point during the years you were working on the short, weren't you getting concerned over how long it would take to finish?_ KC: There was a point midway through the whole process that I found myself in the fetal position when I realized what I had gotten myself into. I accepted that doing it on my own would probably be a twenty-year enterprise. That's why I started investigating ways to maybe only have to make actually twenty minutes of footage and supplement it, presenting it a different way. My brother finally pulled it from my grasp and said, "You should show it." If he had not encouraged me to show it to Marsha Oglesby, who later showed it to Jon, I don't know what would have happened. There's every chance I'd still be working on it right now. _Marsha was a producer and a family friend. Jon Avnet, of course, had directed many films, including_ Fried Green Tomatoes _. After refusing to show what you'd been working on to virtually anyone, you must have been nervous to show the few minutes you had completed to a genuinely successful filmmaker._ KC: I was curious about how he was going to respond to it, but I almost didn't care. I never expected anyone to give me a chance or to help me. If something came from it, that would be great. If nothing came from it, that would be fine too. I could go home and continue to work. And if it took me forever, it took me forever. 
_When you showed him what you had, what did he say?_ KC: Clearly it made an impression on him. He wanted to see it again. Then he asked me, "What is it? Is it real?" I think he thought it was newsreel footage I had doctored. Then I sort of explained to him my idea behind it, that it was a method to make a film that was a little bit different. It was sort of reverse-engineering the way films had been made up to that point, with an eye towards being ambitious but economical. Just the business side of him probably made the lights go on. I think he saw that if we could create something for that little money—at the time we were talking about maybe no more than $5 million to make it—and have it nearly be competitive with mainstream films, then that really was something new and potentially revolutionary. Our exact conversation afterwards was, he asked, "What do you want?" and I said, "I want to finish my film." And he said, "I think we can do that." _There were several key changes to the film from how you initially conceived it. One was the decision to go from black-and-white to color._ KC: Every scene existed in black-and-white first, and we kind of colored on top of that. It was conceived as a black-and-white film. By the time [executive producer] Aurelio De Laurentiis got involved, it wasn't even a question. (laughs) It was, I won't give you money. Either make your movie or not, but it will be color. At one point in time it was maybe only for the foreign release will we have to go color. Then there was talk of maybe we can release a black-and-white version in New York and L.A. Needless to say, none of that ever happened. _How did casting the film take shape?_ KC: Honestly, discussing names, be it Gwyneth's or Jude's, just seemed absurd to me. It seemed to me so far-fetched that anybody would really want to be in this film. It was like, "Why are we even wasting time talking about these absurdities? Let's start interviewing unknowns and just get it over with." 
_Using unknowns was, in fact, your plan for casting from the start._ KC: I thought if you saw a recognizable face, you would be removed from the film. As the film got more ambitious, the financing of it was more predicated on Gwyneth Paltrow's and Jude Law's names attached to it. I understood [this] from a practical sense. My interest was to get the film made. I do think that all of these things changed and affected the film that I'd originally started. By the same token, I can't say I have any bad feelings about any of it. It's just a part of the process. _Tell me about your experience of working with actors in your film. I assume you had never worked with actors that much?_ KC: I had very little experience other than directing actors in student films, and those were mainly friends. Student actors and that sort of thing. To go immediately to a couple Oscar winners—Gwyneth and Angelina—and Jude, who's comparably accomplished, the strange thing was, it all seemed so surreal at a certain point, it didn't affect me anymore. The thing that I had to get over was probably with Gwyneth more than anyone else. We would be shooting a scene, and I would know that it's not exactly how I wanted it, and in my own mind I'm having this wrestling match of, "Who am I to go out there and tell Gwyneth Paltrow to do it another way?" That was the biggest struggle I had—to have enough confidence to give them my opinion. What I learned, especially from Gwyneth, was that she would have been terribly upset in fact if I _didn't_ do that. That was probably the greatest lesson that I learned. _What was that first morning of the shoot like for you?_ KC: That was probably the single most traumatic moment on the entire film for me. I just kind of woke up and realized, "What in God's name have I gotten myself into?" It was basically walking into that soundstage and strangely everything was just kind of barren. 
You walk in and see Jude Law and Gwyneth Paltrow dressed up as these characters that you wrote, sitting in this blue-screen environment and it was like, "How have they agreed to do this movie based on some goofy idea I had?" But mainly it was just all these people—veterans who had been making movies and had worked on _Star Wars_ —they all turned to me at the same time and I was The Guy. I could either turn and run away or finish what I'd gotten myself into. Once we'd shot the first couple of scenes, I knew everything was going to be OK. I can't say I ever relished being in the limelight or being the guy that everyone was looking to. _What are your thoughts on whether subtlety and nuance of performance is lost when a film utilizes as much blue-screen work as yours employed?_ KC: I don't buy the concept that necessarily you compromise performance. I will agree that it does require a lot more prep and planning and thought on the part of both the filmmakers and the actors, but there's no reason you can't get really solid performances. You can't begin to compare the experience to traditional filmmaking, and that's part of the mistake. It's more analogous to theatre. _Do you think your career will revolve around fantasy and sci-fi, or do you anticipate that you'll experiment in other genres?_ KC: I hope so. I am more drawn to spectacle and things that are more experimental in nature. For me, the filmmaker who has made films that have continued to reinvent themselves is Robert Zemeckis. _He seems like the most logical parallel for you as you begin your career._ KC: He is the guy who I would most pattern myself after. _It seems that both of you have that dual interest in creating interesting stories and an almost entrepreneurial approach to experimenting with new filmmaking techniques._ KC: To a tee. He is who I probably most feel a kinship with, both on the scale that he operates and just in the invention that goes into everything he does. 
There's always going to be something interesting, whether he succeeds or not. _Is there a goal as to what technological boundaries you'd like to push next?_ KC: I think the goal is to try to bring things to the screen that you've never seen before. I like to present things that are new and fresh, which is really difficult to do in this day and age. A lot of it falls on using all the tools that are available right now in creative ways and applying them in ways that you haven't seen before. For instance, in the use of live sets and miniatures. Film is such a rich medium that isn't just limited to filming live actors in front of a set. There's so much more that we can do, that we've yet to scratch the surface. We've finally come to a time in our history where you can create what you see in your imagination, what you dream. To have that as an opportunity is very interesting. _And that's what you want to be, the dream-maker, the guy that can actually translate the stuff in our heads to the big screen?_ KC: That's right. Yeah. _As you look to the future, what do you envision for yourself?_ KC: There is a group of us that made _World of Tomorrow_ who are now embarking on this project I think we'd like to form. The nearest I can compare it to is sort of a live-action Pixar production company—one that's all-encompassing but would principally deal with live action. The idea would be to utilize the technologies available to create very interesting, novel films that might be of a more independent nature, and also to give a chance to other filmmakers. If we can tear these budgets down, we might be able to produce films that give other people chances, but to use some of these techniques would allow them to dream a little bigger. 
THE DIRECTOR'S TAKE KERRY CONRAN _What is the first film you ever saw?_ _Snow White and the Seven Dwarfs_ _What is your favorite film of all time?_ _King Kong_ (1933) _What's your favorite line in a film?_ "You're gonna need a bigger boat."— _Jaws_ _What movie made you realize that film was an art?_ _The Third Man_ _What movie do you consider your guilty pleasure?_ _1941_ _Who is your favorite movie character of all time?_ James Bond _What's your favorite movie snack food?_ Red Vines _Who is your favorite director of all time?_ Steven Spielberg _Who is the most impressive filmmaker working today?_ Brad Bird _What quality do the best directors share?_ The best directors have a personal and unique point of view and know how to capture it. _Who is your favorite actor or actress of all time?_ Cary Grant _Who is your favorite actor or actress of today?_ Jude Law and Gwyneth Paltrow, of course. _Who would you cast as yourself in a film about your life?_ Chris Farley _If you could remake one movie, what would it be?_ _Metropolis_ _What is your best quality as a director?_ Patience _What is your greatest weakness as a director?_ Interminable politeness _Finish this sentence: I'll never direct a movie about . . ._ giant robots attacking New York City. _Finish this sentence: The perfect movie is . . ._ one that makes you forget you are watching a movie. _What will they be saying about your work fifty years from now?_ Perhaps, simply that I was an inventive storyteller and that my films were uniquely my own. _What piece of advice do you have for aspiring filmmakers?_ Take advantage of the amazing tools available to filmmakers today. Anyone can make a film with a home computer and a DV camera. The trick is to believe in the story you want to tell and never give up until you realize it. And don't be afraid to dream big. _What are you as passionate about as moviemaking?_ Honestly, there is nothing that comes close to making films for me. 
It's all I ever wanted to do my entire life. It is difficult to top seeing a dream come true. **JON FAVREAU** "I've always wanted to call the shots because I would rather fail than not have a chance to figure it out on my own." **SELECTED CREDITS** _Swingers_ (1996)-writer _Made_ (2001)-writer/director _Elf_ (2003)-director _Zathura_ (2005)-director If Jon Favreau had a filmmaking mantra, he might paraphrase _The Godfather Part II_: Jon Favreau always made money for his partners. At least, that's been the plan. A savvy filmmaker, Favreau learned early on that creating profitable films meant getting the opportunity to make more films, and on a larger scale. He created filmmaking opportunities for himself after penning _Swingers_ , a script that ironically was meant to launch his acting career. In fact, he was typecast in front of the camera. And so he's made every effort to not duplicate that fate behind it, helming a hard-edged R-rated comedy and a PG family film back to back in his first two efforts as a director. With _Elf_ , the Queens kid swung for the fences, trying to create a brand-new holiday classic from scratch. He actually did it too. _I understand both of your parents were teachers?_ JF: My mother was a teacher before I was born, but she gave it up to raise me. My dad worked with emotionally handicapped kids. He really loved his work. That was a big thing, that he loved his job, and the idea to not love what you do was not really in the cards for me. _What are your earliest memories of going to the movies?_ JF: My folks split up when I was about seven. I remember a lot of times when I would go with my dad, a movie would be the thing to do. I think that's pretty common. What else are you going to do? (laughs) There's something very special about seeing your dad laughing at the farting scene in _Blazing Saddles_. That's why Mel Brooks is one of my favorites—because it was something where I would genuinely laugh at it and [my dad] would laugh at it. 
_Any early fantasies of being in the movies?_ JF: Sure! I was always in school plays as long as I could remember. I even made a black-and-white super eight monster movie with my father when I was in grade school. I guess he was my cinematographer. I wrote it and directed it and starred in it. We shot that in College Point, the neighborhood that he lived in, in Queens. It doesn't really stand the test of time. It was me running around with torn-up clothing, chasing people around Flushing Meadow Park. _Where did you go to school?_ JF: I went to Queens College. I spent a lot of time around Washington Square Park and the NYU campus. It was heavy-duty geek time for me. I was playing Dungeons & Dragons a lot, not knowing that those skills that you learn playing those games would help me in my acting and directing later. All the negative things that I did in my life, all the doodling in my notebook as an artist . . . all helped me. It was that sort of amalgam of skills that came together in very strange places but all seem to sort of tie together now in what I do as a filmmaker. _Somehow you ended up working on Wall Street?_ JF: My friend's father had hired me away from college to work with him at Bear Stearns. I worked there for a year, getting, like, twenty grand a year, which was a lot of money. I finally gave my two weeks' notice. I didn't even want to make it to the Christmas bonus, which is a big thing on Wall Street. By 1988 I took the money and I bought a Harley. I went to Sturgis, South Dakota, for a Harley rally that August. My girlfriend at the time was in L.A., and I drove all the way from Sturgis to L.A. to meet her. I rode back alone and I passed through Chicago and I visited my friend Brian. He was in the Second City training center, and I went to watch him perform. There was his group and then there was a more experienced group. And that more experienced group was Chris Farley and Mike Myers and Tim Meadows and all these people who were great. 
And I remember they interviewed me onstage for one of the improv games. It blew my mind. I was like, "Oh my God, this is the best, better than standup, better than written plays, better than movies, this is great!" So I call my dad and I said, "I think I really want to do this." He said, "You're twenty-two years old; you're young enough that if you screw up, you could still change course again," and he gave me his blessing. So I moved out to Chicago and I started bartending and washing dishes at Second City. And all the things I was told I couldn't do, I was doing. I was auditioning and I started a small graphic-design company and I was making a living acting in commercials and doing dinner theater. It was a very, very happy time for me. _When you were in Chicago and performing at Second City, what was the dream?_ JF: Dreamwise, you work at Second City and you finally make it to the main stage. Lorne Michaels hears about you, you get hired for _SNL_ , and then maybe you get on a sitcom or maybe you do some comic movies. Then you esteem yourself as an actor and you get some clout and you're able to get movies made and then you start writing and directing your own stuff, doing what Woody Allen does. _So even at that point, the dream of all dreams was to direct?_ JF: Being Woody Allen. Directing your own shit, yeah. Being The Guy. Breaking ground. _You wanted to be the creative force._ JF: I've always wanted to call the shots because I would rather fail than not have a chance to figure it out on my own. I'm a very lazy person by nature. I have to be really engaged, and then I go straight from lazy to obsessive. I couldn't study chemistry, but I could memorize all the books for Dungeons & Dragons. It was ridiculous. The trick is to find what I like to do. _How did_ Rudy _come about?_ JF: I can't get hired by Second City. I've been there for close to four years. I had missed the first audition because I had done _The Untouchables_ pilot. 
I was like a boxing referee or something. They said, "You really should come in," so I came in for the callback. I didn't know it at the time, but I nailed it and had gotten the part. They had looked in New York, Los Angeles, and Chicago. That was like the dream come true for somebody in Chicago. I remember walking around and my life feeling like a dream because not only did I get the part, but I had to be in South Bend, Indiana, less than forty-eight hours later. And they don't fly you from Chicago. They sent me in a limo. (laughs) Very bizarre. _When you got to South Bend, what was your attitude?_ JF: I just wanted to absorb everything. I did not leave the set. I went every day when I wasn't even on. I had snuck onto sets of films in New York with friends just to be around it. I had done extra work in Chicago, not to be discovered, just to be working towards that special spot on the set where the lights were all pointed, just to watch them. _Do you remember any of those sets in particular?_ JF: Sure. A big one was _Hoffa_. During the riots, there were sixteen hundred extras. And it was a night shoot, so you were there from dusk till dawn every day in the cold. You'd show up and they'd put the shit in your hair to grease you up and they'd put you in the clothes. They'd make sure they weren't seeing the same people, and they'd stage you so that you'd run and they'd have all different groups. Well, I was the worst, because I would _always_ go out there, whatever group they'd call, because I always wanted to be out there. _So there are five different Jon Favreaus in the film?_ JF: Yes! You'd see me running one way with an ax handle, and then you'd see me running the other way with a shovel. I was all over the place—not good for editing, but good for me. Then of course Nicholson would come out and do his speech and I'd be, like, "Holy shit!" I'd listen to Nicholson and watch him do his thing. 
_On_ Rudy _, what did you observe about the way the director, David Anspaugh, ran his set?_ JF: He was the best. He's the one I still think about as a definitive experience with a director. My first day working, they were shooting me doing something comedic. I think it was a cutaway reaction-type shot and Anspaugh came up to me and asked if I was wearing a wireless mike. I was like, yeah. He reached around to my pack and he flipped the switch off so now the people with headphones couldn't hear what he was saying to me. Then he said, "Do you mind if I tell you you're going a little bit big?" And from that day on, I was just like, "This guy's the fucking best." _That small gesture of turning the mike off so other people didn't hear . . ._ JF: He allowed me to keep my dignity and he was looking out for me so that I wasn't going to look like an asshole. He was going to be my barometer. And when you have a director like that, you can give yourself completely over to them. It's a wonderful, free feeling as a performer, and I try to create that now for my people. _After_ Rudy _, what were the events that led you to create_ Swingers _?_ JF: I decided to travel out to L.A. for pilot season. I ended up moving out there, and my girlfriend didn't follow. She ended up living with another dude who I knew, and that broke my heart. I have issues from my mother dying when I was young, so I always took breakups really hard. I think it tapped into unresolved stuff. I was couch-surfing. It just sucked. I had nothing going on. That whole psychological state spoke to the character that I presented myself as in _Swingers_. I just wrote _Swingers_ to make my friends laugh. I got a piece of software from my dad, Final Draft, for Christmas, and just started tapping away at it, and it was amazing how much it looked like a screenplay. Next thing you know, I've got eight to ten pages, twenty pages. You're showing it to people, and they're laughing. So I wrote the whole thing. 
_Did the script attract any industry attention?_ JF: My agent sent it out, and people wanted to buy it. I was like, "Holy shit, what a great way to get my acting career going." So I took a meeting with this first-time director, and all his notes were to make it more badass. Put more of it in Vegas, more shit with the gun, more this, more that. And Vince's dialogue was annoying to him. I said, "Why don't you at least do a reading with the people that I had in mind?" and I thought maybe he'd remember one of them and cast them. We did the reading and it was great! My agent was like, "You should just do this. We'll find the money somewhere." I was like, "Shit, that's pretty cool that she would say that and turn the money down." So we pulled it off the table and we set out to making it. _You went to Sundance at one point before you had filmed it._ JF: I was out there with Jason Priestley, who was attached. He was going to play Sue, and it was going to be me and Vince and him. Me and Vince had gone out and celebrated. We thought it was done. We've got Jason Priestley, and now the money's going to come. There were millions of stories about the "almost" of _Swingers_ , but what it finally came down to was Doug Liman. _Liman initially was simply helping you prepare as you pitched yourself as the director for the project._ JF: He was the guy I was sort of going to so that I didn't sound like an idiot in meetings. I had auditioned for a part that Dave Chappelle ended up playing in his first movie, called _Getting In_. Finally, he was getting so involved with _Swingers_ that he was like, "Look, I can get the money together." At first he was talking about codirecting and right before we started shooting, it turned into him directing [alone]. _How exactly did it go to him directing it himself from the two of you collaborating?_ JF: Right before we went into production he said, "Look, you'll still be creatively involved." 
The way he presented it was I'd run post- and preproduction and he'd be on the set running the show. We'd be partners and we put that all in the contract so I had approvals over everything. It was very much him with the camera and me with the actors. Because it was such a small crew, it was a good marriage because there was a lot of responsibility to go around. I was in the editing room every day, and that process was good. It got contentious at times, but nothing outside the norms of a creative relationship. After things were successful, everybody had issues with everybody but that's like every success story. _What was the visual approach you and Doug took to the film?_ JF: My whole thing was get the money and shoot it as cheap as you can. It was really Doug's idea to use this documentary cinema vérité approach. His whole approach was, "Don't make _Party Girl_ with half as much money, make _Clerks_ with ten times as much money." He was taking all these chances. Of course, to us at the time it looked like he was kind of flying by the seat of his pants, but ultimately the movie looked really good and we couldn't have gotten that kind of energy or authenticity if we had done it the way I was planning to do it. _During shooting, was there tension about the way he was directing?_ JF: During shooting, it was just a little half-assed. We'd sneak onto the golf course with his camera in the bag. It just seems like he doesn't know as much as he does when you're working with him, so it's a little disconcerting. There's a whole air of chaos around the way he does things, which has really served him well since. He definitely has an unconventional way of running his sets, and a lot of people are confused about the way that he does things. But the movies are good so that goes a long way. Trusting that makes you put up with that shit, but we didn't know that then. He cared about the movie and busted his ass on it. 
_Did you find that immediately you were viable, as a writer, from_ Swingers _?_ JF: Yeah, I was totally on the radar as a writer, but not so much as an actor. And it was strange, because I'd never really thought of this as a calling card for my writing. It was a great role that I was creating for myself but, you know, surprise, surprise. _Meanwhile, your buddy Vince . . ._ JF: He totally popped off the movie. Part of it is that when you're playing the type of role you're playing in _Swingers_, people assume you're just playing yourself. They don't see your chops as an actor. The other thing is that Vince just totally fits the mold of what Hollywood is so hungry for in a movie star. He's got the look, the name, the talent. _Weren't you working on a sequel after you filmed_ Swingers _?_ JF: Yes, I wrote a sequel because I was so frustrated in the editing room when people were telling me a character wouldn't say that. I was like, "Shit, I'll write a whole movie about what these people would and wouldn't say and do." It was _Swingers: Part Two_ and it was half flashback to Mike's childhood, like in _Godfather_, and then the other half was what happened the next year. I ended up writing a western too after that, _Marshal of Revelation_, about a Hasidic Jewish gunfighter. (laughs) Miramax was ready to step up and do it. Harvey Weinstein was like, "Here's a million dollars; let's do it." And then that turned into a half million by the next phone call. But that wasn't the thing that scared me. The thing that scared me was not having final cut, because I saw the pressure that they had applied to other filmmakers. And that was very scary, because the film had a very dark ending and I don't think it was inherently a commercial property. _In this period, were you offered any films to direct?_ JF: The two I remember were _American Pie_ and _A Night at the Roxbury_. _American Pie_ didn't seem like a good follow-up for me at the time. I think the guys who did it did a great job. 
It was very successful, so it worked out for everybody. _Instead you gained experience as a director first in television._ JF: I wrote a pilot on spec and then I ended up using that leverage to get me to shoot the pilot and direct the pilot. Directing for TV put me in the position where I was responsible and accountable. The stakes weren't nearly as high. Unfortunately, nowadays you come out of film school and everybody's looking at that first project and judging your ability by that. That wasn't how it used to be. Steven Soderbergh worked in movies a long time before people started calling him a new young filmmaker. Now if you're not red hot out of the box, people are going to label you as you don't know what you're doing. There just aren't any more journeyman filmmakers. You've got to be great off the bat like Tarantino, or you're in trouble. _How did_ Made _emerge as the movie to make your feature directing debut on?_ JF: Vince and I were banging around, trying to get _Marshal_ set up, and we had a lot of near misses with Miramax and other investors and, ultimately, it just wasn't happening. It was like, shit, if we just had something more contemporary, a little more of a genre film, we could get it made easily. So it became about writing a piece of material, and when I was on _The Replacements_ , I wrote it. There's a lot of downtime when you're an actor. _With_ Made _it seems like you were combining your love of mob films with the sort of characters you are more familiar with._ JF: Very much so. What appealed to me was, "How do you make the genre movie but not lose the reality of it?" Because if you lose the reality and the characters are not relatable, you're taking a lot out of the equation that I'm good at. I mean, I love Tarantino's films and I love Scorsese's movies, but in both cases I don't feel like that could be me up there. I feel like I'm watching another world of people. 
What I tried to do with _Made_ was create people on such a low level that you could see yourself in that position. It was like the Rosencrantz and Guildenstern of the Mafia. _I know it was very important to you to get final cut on_ Made _._ JF: Yeah. The way it worked is Vince said, "I'm going to star in this movie, and Jon and I are going to produce it together. Jon's going to direct it and Jon's writing it, and we're going to make it for the amount of money that Artisan said, which is five million." I would not do the deal until they gave me final cut. _How important are rehearsals to you?_ JF: I always rehearse. Rehearsal is really an opportunity for the actors to voice whatever concerns they have. I'll usually adjust the language to fit the actor because I want it to be authentic to them. This isn't to say we're free-associating on the set. The beats are always very clearly defined, and I'm always very true to the material that's on the page. _Why_ Elf _?_ JF: I wanted to do a Christmas movie. I had been offered _Surviving Christmas_ , and I opted not to pursue that one any further. And then this one came along and I thought it really plays into my sensibilities. I thought the one thing about Will Ferrell that I had not seen him do a lot of was show his heart as a person and as a performer. In developing the script, although it was edgy and irreverent at times, I wanted to keep it a PG movie, not a PG-13 movie that made fun of Christmas. _What films did you look at for inspiration?_ JF: We watched _Tootsie_. We looked at the movies that were quote unquote one-concept, one-joke movies, and saw which ones worked and which ones didn't and why. _Big_ was very much a Rosetta stone for us. The thing that made _Elf_ not just a one-joke extended skit is we really gave the character an arc and a story and things change over the course of it. We weren't just a series of jokes strung together. 
We tried to make it something that had a little bit of a message to it, and people responded. _I appreciated the retro techniques you used in the North Pole scenes._ JF: It was all stop-motion and forced-perspective. Everything was done the old-fashioned way there. I think the CG stuff looks dated a few years later. By using analog technology and stop-motion, at worst it's going to seem charming and nostalgic. At best it's going to look just as good. And that look never changes, no matter how sophisticated the audience gets. Zathura _is an epic fantasy adventure on a much larger scale than_ Elf _._ JF: With this one, it's fun for me to explore special effects further. It's twice the budget _Elf_ was. This is like me trying to tell a story visually and learning and taking a step in that direction just to round myself out as a filmmaker. It's very fun for me. _What about the story drew you in?_ JF: The guy who wrote it, David Koepp, recently went through a divorce and he has two sons, and so the dialogue and the way he presents the kids is very real. It's in going through these adventures that the kids learn to become self-reliant and grow up. It becomes this coming-of-age thing for them. _It sounds like old Spielberg territory._ JF: When I read it, I was like, "Shit, that's what Spielberg would do." He would take a very mundane family relationship that people could relate to and set it as the backdrop of this extraordinary world and set of circumstances, like an alien arriving or _Close Encounters_ or _Jaws_ , even. If you look at the family scenes in _Jaws_ , they're very real. So that was the challenge of this one: to keep the story real. If you have the heart and care emotionally about these characters, it's going to make it that much funnier and scarier and more exciting. _You've said before that you think directing suits you because you're both decisive and obsessive._ JF: Yeah. 
(laughs) _Do you believe those are two important traits for a director?_ JF: Yeah, those are two good qualities, but it gets in the way of other things. What I've found is, if you have good people around you, you actually have to make very few decisions, because if a production designer comes to you and says, "Should the room be this color or this color?" you say, "Which one do you like?" They say, "Well, I like this one, but this one might look better on film." So then you pull your DP over and say, "Which one do _you_ like?" Sometimes, unfortunately, what happens though is that every decision you make becomes like a Solomon, cut-the-baby-in-half decision. _Having worked with directors I'd imagine you had disagreements with, what techniques have you avoided as a director?_ JF: I don't "handle" people. It's so much easier to manipulate actors than to really have an earnest discussion with them. It's very easy to say whatever's going to appease them and then turn around and do whatever you want to do. It's difficult to be forthright with people, because the job does not lend itself to that. (laughs) But I know that, as an actor, I appreciate it so much and I feel so much commitment to a director that's up-front with me. The trick is to create a stillness amidst the chaos, to be really able to discuss and discover what the scene is. Joel Schumacher used to do a thing where he always would turn to the actor when he was done and say, "Do you want to try another one for you?" He always found the time for anybody to do that, and I do that too. I learned that from him. I think it's all about making the actors understand that you are dialed-in to them. On _Zathura_ I was working with two stars who were seven and twelve, and I really would discuss things like intention, subtext within the scene, overall arc during the movie. . . . I just think it's a good part of the process. 
_What do you feel are the biggest challenges for you personally as a filmmaker?_ JF: I'm a very lazy person by nature, and if I'm not doing something I can be obsessive about, you're not going to get the most out of me. So the thing is for me to find movies that I can latch on to. I think the biggest challenge for any filmmaker unfortunately—or fortunately—is to learn how to make money for the people that are putting up the money, and that means don't spend more on a movie than it probably will make. Make sure somebody's going to turn a profit, whether you're making a $200,000 digital film or a $100 million action movie. _But at the same time, your business isn't to predict box-office results._ JF: It is though, because that's what gives you creative freedom. If I spend $35 million on _Elf_ and it makes $170 million, that gives me so much creative freedom the next time out. _Your first three films all turned a tidy profit. What happens when inevitably one of your films fails at the box office?_ JF: I will have enough of a track record by then that they'll know that it's an anomaly. They say Hyman Roth always made money for his partners. It sounds silly, but truer words were never spoken. Right now I'm making two PG movies in a row because that allows me to spend the kind of money, build the kind of sets, and learn about visual effects and computer-generated effects and marketing and releasing a movie wide and hitting a wide audience. That's a lot of fun for me after working on very small movies. It's very nice to be able to see a movie hit the whole culture at once. It was very fun with _Elf_ , seeing that happen. Probably after this one, I'll want to try something a little different. But I'll tell you this: If I'm making a movie for forty-year-old parents of small children, I'm probably not going to spend as much money as I would if I was doing a PG movie for kids. It's understanding the nature of the beast. 
Directing, you get one turn every year or two, and you've got to make enough money to live on; you've got to make an impression and you've got to do something you're going to be proud of, because you don't get that many posters on your wall when you die. THE DIRECTOR'S TAKE JON FAVREAU _What is the first film you ever saw?_ _Chitty Chitty Bang Bang_ _What is your favorite film of all time?_ _The Godfather_ _What's your favorite line in a film?_ "If they move, kill 'em."— _The Wild Bunch_ _What movie made you realize that film was an art?_ It is? _What movie do you consider your guilty pleasure?_ _Blazing Saddles_ _Who is your favorite movie character of all time?_ Bluto Blutarsky _What's your favorite movie snack food?_ Vince Vaughn turned me on to sprinkling Raisinets into popcorn. _Who is your favorite director of all time?_ Kurosawa _Who is the most impressive filmmaker working today?_ The Coen brothers _What quality do the best directors share?_ Their work is real and entertaining. _Who is your favorite actor or actress of all time?_ Buster Keaton _Who is your favorite actor or actress of today?_ Jack Nicholson _Who would you cast as yourself in a film about your life?_ Me. I already did. _If you could remake one movie, what would it be?_ _Death Race 2000_ _What is your best quality as a director?_ I really give a shit. _What is your greatest weakness as a director?_ I'm lazy. _Finish this sentence: I'll never direct a movie . . ._ that I don't believe in. _Finish this sentence: The perfect movie is . . ._ emotionally engaging and original. _What will they be saying about your work fifty years from now?_ "Who was he?" _What piece of advice do you have for aspiring filmmakers?_ Make it yourself with what you have. Don't wait for someone else to give you the green light. _What are you as passionate about as moviemaking?_ My kids. **MICHEL GONDRY** "My father said that my naiveté was my strength. I have this kind of optimism, even when things seem undoable." 
**SELECTED CREDITS** _Human Nature_ (2001)-director _Eternal Sunshine of the Spotless Mind_ (2004)-director _Dave Chappelle's Block Party_ (2006)-director _The Science of Sleep_ (2006)-writer/director "Michel is only looking for one thing: to extract a bit of magic and mystery from things," says Björk of frequent music-video collaborator and friend Michel Gondry. With scores of music videos and just a few feature films to his credit, the French-born Gondry is one of the most distinctive talents working in film today. _Eternal Sunshine of the Spotless Mind_ , one of the most celebrated films in years, showed off his ability to bring ingenious camera work, soulful direction of actors, and a unifying vision all to a complex script from the mind of Charlie Kaufman. Finally, his storytelling abilities had caught up with his groundbreaking technical work. Gondry is the rare filmmaker who can make the everyday extraordinary, and it all starts with his wonderfully off-kilter view of the world. _There's a dreamlike quality to much of your work. Do you dream a lot?_ MG: Everybody dreams, I guess, but it's the way you think about them or remember them that's different. I don't remember them a lot, but enough to find what's quirky and interesting. I don't like psychoanalysis too much, and I think what it says about dreams is very dated. I've read so much about neurobiology, which is much more captivating. It's like the difference between astrology and astronomy. Astrology is a bunch of old beliefs. As for astronomy, it's amazing to read a book about black holes. It's good, because I like to compare theory with practicality. _The film you're working on now is about dreams, isn't it? Can you tell me a little about_ The Science of Sleep _?_ MG: It's like a journal of dreams in parallel with this guy's feelings about a girl he's fallen in love with, and how first he thinks they just have friendship and then he realizes he's in love with her. _Have you finished filming it?_ MG: Yeah. 
I'm doing the editing right now. _How is that going?_ MG: It's going. It's hard to judge it. I really like the way the actors are. I'm not sure about the story in general. It's very weird. _Your last two films were as much known for Charlie Kaufman's writing as your directing._ MG: Of course. _And this is your first solo writing effort._ MG: Exactly! In my life, I would say there are three major people who have been inspiring and supportive as collaborators: Etienne from my band, Oui Oui; then Björk; and then Charlie Kaufman. And now for the first time I am on my own. _You've graduated, in a sense._ MG: Well, I didn't graduate yet. That's the scary thing. I will graduate if it goes somewhere. _I understand that you tend to doubt yourself a lot?_ MG: Yeah. _So doing this film essentially on your own without one of these influential collaborators must be all the more frightening._ MG: I got really scared doing this movie. I get scared all the time, but this one was different. _Do you feel more pressure because of all of the acclaim you garnered for_ Eternal Sunshine _?_ MG: Oh, yeah. Of course. It's more pressure, but it's not a bad pressure necessarily. A lot of people are excited to work with me. They forgive me a lot more than they would before. And my own paranoia is like, "Is it based on the success of _Eternal Sunshine,_ or because they liked it?" _You were born and raised in Versailles in France?_ MG: Yes. _What did your parents do for work?_ MG: My mom was a flute teacher. My father was a computer programmer. Before that, he was working in electronics, making speakers and microphones in a small company. He had this crazy hair and a crew of young women working with him. He looked like Johnny Hallyday, who was a big star in France. I think the girls liked him. _I understand you drew a lot as a child?_ MG: I always drew and I played with LEGO and creative toys like that. I always liked to make systems and inventions. 
I have a naive way of believing that things can work despite the technology or basic logic. My father said that my naiveté was my strength. I have this kind of optimism even when things seem undoable. _It's interesting you have that optimism to go along with the pessimism you seem to have in yourself._ MG: Absolutely. It goes in waves. _Did you have any experiences with a camera at home?_ MG: Yeah. My father, in the early seventies, bought a super eight camera, but I didn't do crazy stuff with it. I remember doing a little bit of animation. I would do stuff with my cousin where we would run and put the camera on a tripod and take a picture every ten seconds so we could run like Superman. I wasn't very consistent with it. I did a lot of flip books and cartoons with my cousin. My interest was animation. _Did you watch a lot of movies growing up?_ MG: Not more than anyone. I remember we watched a lot of Charlie Chaplin. I remember every Christmas they would have a festival on TV of the animator Tex Avery, who I was a big fan of. _And what did you want to do for a career at that point?_ MG: I wanted to be an illustrator or painter. I didn't think animation could be a job. It was only when I bought a 16-millimeter camera in '85 or '86 that I started to realize it was something I enjoyed. I was never precise about the details. I was very much about the idea and the result. With film, you have to wait until the lab processes it and when you see it, you see it all at once. You see this compilation of moments put together, and I always liked that a lot. That's why I liked animation. You would do a little bit and a little bit and a little bit, and then when you see it together, it's magic. _Many have compared what you're able to do with a camera to something like magic. Was that something you were ever into?_ MG: Yeah, a little bit. I'm not so agile with my fingers and my hands, and I'm not good at manipulating people.
I think magic is about how to condition people's perceptions and mislead them. This I'm not good at. _Isn't moviemaking all about manipulating an audience in a way?_ MG: Yeah, there is a little bit of that. But I think there's another quality about it, like seeing something different or more unique in everyday life and putting yourself unconsciously in what you're doing and telling stories. That's not manipulation. I went to see a play by Neil LaBute that was manipulative, and it was great, but I realized I could never do anything like that. It's really twisting your emotions and misleading you to make you feel strongly about a character. And that's like magic, and this I don't like to do. _What do you bring, then, if it's not that skill at manipulating an audience?_ MG: I think I bring a certain degree of naiveté that is not compatible with this way of telling a story. It's like my son: If he wants to lie, he's going to get caught. Something in his eye will be wrong. And I guess I have something like that. _You went to an art school when you were a teenager?_ MG: Yeah. I went to a school from when I was sixteen to nineteen. I did a lot of drawing and I met friends who were from different parts of France who had a similar way of thinking. We were not the ones who were the most verbal, but we could draw. I had a creative drive from when I was a kid. It was something that would keep me going. I was always unsure about life and death, and the only way to get me out of that was to think of material projects that I could construct. _It was around this time that your band Oui Oui began?_ MG: Yeah, exactly. That's why school was good for me. I met people who were in these crossover positions between music and art. _Were the videos you made for the group the first time you had tried anything like that?_ MG: Yeah, my first animated work with a camera was for my band. It basically was like, I wanted to do animation, so I would do a little story using the music of my band. 
I did three little videos completely on my own using the music. We didn't even have a recording at the time. And later, after we had a record deal and I did the video, I had to deal with the band, because they were creative, especially Etienne. We collaborated, and in some ways it could be frustrating, because I would like to have had complete control. On the other hand it was good, because it was a rich collaboration. And when I met Björk, I found this collaboration to be great because it makes me grow. I guess they both opened my mind. _Filmmaking is considered such a collaborative form. Do you feel that collaborating comes more naturally to you now?_ MG: It goes both ways. Sometimes I really enjoy collaborations, and sometimes it's good to do an idea without thinking intellectually too much. In collaborations you have to find a common ground for communication. You tend to work more with your brain than your instinct. Using instinct, you can pull out some ideas that are more surprising. But I think I've learned to be creative in collaborations. Even when an actor comes up with a different perspective, I can find a way to use what he says in a way that would not diminish it. _When you were starting to do videos for Björk and others, were you starting to think about doing a feature of your own?_ MG: No, it was years later. I would never think it would be possible until now. I didn't have the magic to put all those pieces together and tell one story. I still doubt that. At the time I was totally sure I could never do that. And, as well, I didn't read much and I didn't think I could write a story. It was something I never dared to think about. _Were there close calls before_ Human Nature _of landing a feature to direct?_ MG: Yeah. I remember I did a trip to Los Angeles and I was trying to do this movie, _I Know What You Did Last Summer_. I showed them my ideas for the suspense and tension and fear and I did not get the job. They did not understand me. 
_Was that a big disappointment to you at the time?_ MG: Yeah. I was excited to do it. But I had more feelings about this [other] project that I worked on for a year called _The Green Hornet_. I worked with a great screenwriter, and I had a great time doing that, but the studio kind of let us down. _What kind of a film was it?_ MG: It was before _The Matrix_ , and it had a lot of effects. I don't like the style that is too sleek and too comic-book. I like it more tongue-in-cheek like _Superman III_ , for instance, when it's a little bit absurd and sweet. _Superman III?_ MG: Oh, it's great! And when he fights with himself . . . _Had you read any of Charlie Kaufman's work before meeting him?_ MG: Yeah. I had read _Being John Malkovich_ , and I couldn't believe how quick and easy to read it was. It was complex but unpretentious. He had this quirkiness that I was looking for, but not in a contrived way. It was coming from a real place. The scripts I was reading were bizarre to be bizarre, like somebody conventional would do something that's bizarre to him—but it's not, really. _Were you nervous to be working on your first feature once production was under way on_ Human Nature _?_ MG: Yeah, of course. Especially when you start to do rehearsals with the actors and you don't know how exactly to communicate with them. I did this big reading with all the actors at the table, and it was very depressing, because some of it sounded very corny a lot of the time. It was really scary. _How do you look back on_ Human Nature _? It wasn't received very well. Did you have to make compromises?_ MG: I don't think I made compromises. Even when I watch _Eternal Sunshine_ , I don't feel it's there yet. Obviously it's better, because it had a good response, but I still can't really enjoy it like I do other films. I don't know if it's because I made it or because it's not as good as what I wanted to see. For me _Human Nature_ is a little bit sad, maybe even more so.
I don't feel I am watching a story. I don't know. _Is it the same for your music videos?_ MG: No, it's easier to watch the music videos. I think they are OK. They work. _So it's possible for you to be satisfied with your work._ MG: I hope so. I hope I will. It may take some time. _Did the poor response to_ Human Nature _significantly affect how you approached_ Eternal Sunshine _?_ MG: I think I was hurt by the response. I was not ready for it, because when you do a video, people don't come to you when they don't like it. And with a film, everyone becomes a film critic. You go to see a movie and it's part of the process of watching a movie to say what you like and what you don't like. And a lot of people came to me just to say they didn't like it. I didn't understand why they would talk to me about that. And I got to be a little depressed about it. I went into why these comments upset me, and I wrote it all in a notebook. I learned, for instance, sometimes we make a decision based on a reaction instead of as an objective opinion. For instance, I would sometimes be very reluctant to [use] Spike Jonze's opinions when he would give me advice or comments because I felt he couldn't see that we were different and he wouldn't let me be myself. So I was overly reluctant. I could have used more of what he said to me if I was more relaxed. So I learned a lot of things. On _Eternal Sunshine_ I wanted to try different ways. I storyboarded everything in miniature. And I wanted it to be like a play. I wanted there to be more room for the actors. I had forty pages in a notebook of comments like this guy who said _Human Nature_ was rotten because of this and that, and [I wondered] why is that? And maybe this is true, and then it was like, "What could I do to make this different?" It's funny how if you just take the time to write down the problems, you can find solutions and maybe apply them. 
_Did you use the same practice after_ Eternal Sunshine _?_ MG: I wanted to do my next project in a way that _Eternal Sunshine_ had not been successful. I remember the feeling I had when [ _Eternal Sunshine_ ] was finished and I watched it on my own. I didn't like it. I liked some of it, but overall I was happy when it was finished. It's not the actors or the story or anything. It's just me watching something I've done. When I watch my videos, I like it. When I watch my movies, it's painful. Maybe that's the way I'm going to go, or maybe I'll try different things until I like it. _In_ Eternal Sunshine _, virtually all of the effects were done in-camera, as opposed to using CGI. You have Jim Carrey playing a scene with himself by having him literally running to different locations in the same shot. Why?_ MG: It's generally something that's fun. It's exciting. I get this feeling of craft that I had when I did things as a kid and waited for the result. Like, with my cousin we would build a big bridge and a ball that would bounce on a scale and make the bridge explode, and we'd be on the side and it would go boom. This is something that I really enjoy. Doing stuff in-camera makes no sense in that it could be done so much easier in postproduction. Like this shot where there are two Jim Carreys, it would have been much easier in postproduction. But when it works, there are a lot of little elements that are unexpected. In fact the scene, as complicated as it was, made Jim's performance better because he was so worried about doing it right. He's not thinking about his performance, so he did it better. I think it's hard for an actor working with a blue screen. It's not much fun. I've seen movies done entirely on blue screen, and I couldn't watch them. _Do you think about the themes that reemerge in your work? Both of your first two films were very honest stories about love, for instance._ MG: I think about it, but not too much. 
I think there is a way you put yourself into your work, and it's through every stage. _How do you put yourself into your work?_ MG: It's all by choosing the projects I do. Next will be the way I meet actors and if we bond with each other and have common ideas and I can identify with them. And then the third step is the way I direct them. I try not to be in their face. All of this adds up to filmmaking that reflects the person making the film. A lot of times it's invisible. It's not something you can really express or put your finger on. But at the end of the day if you are true to yourself, after a bunch of projects, what you are starts to come across. _Is it a good thing to you for an audience member to recognize a film as your work?_ MG: As long as it's not too much [because of] the form, because I think I try to renew the form and explore different ways. I think the common link should more be in the spirit. I guess it's the way I see the world and who I am. But I don't want you to know too much. I think it's stronger if it comes across without me trying to make it come across. _Has the way you've worked with actors changed over the years?_ MG: It's a slow process from doing videos, from how you try to communicate with a singer or a band and they don't know what to do with their hand or their body. From the first day I did a video, I knew what I didn't like. I didn't like it when people were portrayed as heroes or stronger than who they are. And I didn't like it when people were diminished and you made fun of them. By trying to be in between those two poles I find my way, I guess. _What do you look for when you meet with an actor? How do you know when you're clicking?_ MG: Well, for instance when I met Mark Ruffalo for _Eternal Sunshine_ , he said to me, "I think I'd like my character to wear a pompadour." I just thought it was sweet, and obviously when you meet the guy, you want to be his friend. 
He doesn't project this thing that a lot of actors project like they're trying to impress you. He was somebody normal, that was funny and sweet. I guess I find these connections and I thought it would be good. I didn't have to read with him. It's hard sometimes when you read with people. They are so insecure and you cannot judge them. Björk taught me a lot of good things about that. Like, when she interviews people, she always makes them feel very safe. She never puts people in competition when she wants to work with them. She explained to me one time that if you do that, you make people feel very unconfident and then you don't know who they are. She finds qualities in people that they themselves don't even know they have. So I think it was a good example of how not to put people on the spot when you meet them. On the other hand, in America especially, people have a tendency to oversell themselves, so you have to figure out what's true and what's not. _Do you react to things you see in moviemaking today?_ MG: I try to not work in reaction. I've had that problem where I was like, I don't like that shot or whatever. And then you lose a little bit of your objectivity, because you're reacting to something you don't like and you build up all your opinions based on that. And then you even make marketing decisions based on that, like, "I don't like that, so I'll do the opposite." So I think I'm trying to be a little bit more relaxed and open-minded, although it becomes like an ethical problem sometimes, like having guns in movies. I have a gun in my first film, and I regret it a little bit. _At the same time, you probably don't want to make any hard and fast rules to limit yourself._ MG: No, but sometimes there is stuff. I don't think I could possibly do a movie about a guy who has no weaknesses. I don't think I would be interested. I would say a lot of movies I don't like show how people are superior to each other.
I think it's more interesting to show somebody having to deal with fear, like if there's a confrontation in a train and you say, "I can't believe nobody moved. Nobody said anything." I would be more interested to hear why somebody didn't move. Because in most of the cases, I would not move. I would be petrified in fear, and I would feel really ashamed about it. You always see people in situations in film when they're brave. In a way if you hear about someone being cowardly maybe you can grow from it better. _Kubrick was so selective with his work that he ended up sadly only making a few films. Are you able to go more quickly from project to project?_ _It seems like you are, with_ Science of Sleep _._ MG: I learned with music videos that you have to deal with imperfection. When it's done, it's done. You have to deliver it, and there's nothing you can do about it. The only thing is to learn for the next one. I wonder if Kubrick didn't have a bit of obsessive-compulsiveness with his films. But I don't know. My son is so into _A Clockwork Orange_ . He blasts Beethoven in his ghetto blaster when he goes to sleep. It's funny. _Your son lives with you here in New York. Does he influence your work?_ MG: I really care about what he says. Sometimes he gives me ideas. He gave me the most amazing concept last week that I think I should use. I decided to call my mom every day since he said it. An old lady sat in front of us on the train and I said, "She looks like Grandma." And I was like, "Mom is old now." And he said, "Of course she's old. Old people, the less you go to see them, the faster they grow old." I was like . . . ahhh! And it makes me think of so many things like, "Well, then if you go to see her every day, she's never going to die. She's going to stay young forever. And then, how many times am I going to see my mom until she dies? That's really sad. Maybe it's ten. Maybe it's five. You don't know!" 
_It's a great concept, but it's awfully depressing._ MG: I was doing an interview before I was doing _Science of Sleep_ , and I was really depressed. And we were talking about how I always work in loops and I realized it was because I was scared of dying. But it didn't come to me in one chunk. It was a long journey into my thinking. I was thinking, "Yeah, I always have been thinking in cycles, because I think of time and I think I'm going to go into a void." _Looking at the techniques you've used and the stories you've told in your career, it seems like a high priority for you has always been to do or create something unique. Would you agree?_ MG: I think it's partly true. In a way you want to be an artist. You want people to notice you. Obviously I think, "OK, if I do something completely different, people will notice me." But I learned to play the drums when I was an adolescent because I thought that it would make me special, and I was shy and I looked like a girl. I saw this drummer in this band and I said, "I wish I was like him." And I worked hard to try to impress people with my drumming, so this is not like I was trying to create something unique. It's part of the feeling of wanting to put something out there and be creative. That's why when you hear pop music, for instance, there is a part that needs to be different and groundbreaking and new and a part that makes you feel emotion and quotes something you've experienced before. I think filmmaking is somewhere in between art and industry, because you need to have this kind of response from your viewer. So you can't decide that it's going to be completely new. What I like is the idea. It keeps me going to keep trying something and seeing if it works. It could be in the editing, the shooting, the writing, whatever. 
THE DIRECTOR'S TAKE MICHEL GONDRY _What is the first film you ever saw?_ _Le Voyage en ballon_ _What is your favorite film of all time?_ _Back to the Future_ _What movie made you realize that film was an art?_ _The Gold Rush_ _What movie do you consider your guilty pleasure?_ _Stuck on You_ _Who is your favorite movie character?_ George McFly, _Back to the Future_ _What's your favorite movie snack food?_ Bon Bons _Who is your favorite director of all time?_ Jean Vigo _Who is the most impressive filmmaker working today?_ Ingmar Bergman _What quality do the best directors share?_ It's really a combination of so many invisible qualities. _Who is your favorite actor or actress of all time?_ Michel Simon _Who is your favorite actor or actress of today?_ Charlotte Gainsbourg _Who would you cast as yourself in a film about your life?_ John Cameron Mitchell _What is your best quality as a director?_ Creative chaos and precision _What is your greatest weakness as a director?_ I won't tell you. _Finish this sentence: I'll never direct a movie with . . ._ a gun (again). _Finish this sentence: The perfect movie is . . ._ something that stays with you. _What will they be saying about your work fifty years from now?_ How could I know?! _What piece of advice do you have for aspiring filmmakers?_ Number one is finish a project. Number two is start a project. _What are you as passionate about as moviemaking?_ Trying to tell a story **F. GARY GRAY** "I made _Friday_ and _Set It Off_ before I really started to delve into what it meant to be a filmmaker." **SELECTED CREDITS** _Friday_ (1995)-director _Set It Off_ (1996)-director _The Negotiator_ (1998)-director _A Man Apart_ (2003)-director _The Italian Job_ (2003)-director _Be Cool_ (2005)-director A conversation with F. Gary Gray is filled with laughter. His personality is positively buoyant, which is all the more remarkable, considering his beginnings. 
Raised in South Central Los Angeles, he is quick to say his background is not exactly the typical one for a successful filmmaker. What he did have was drive and an instinct for seizing upon opportunities when they came his way. And it's not as if filmmaking was a passing fancy for him. It has all been part of a master plan he literally put on paper when he was just a teenager. Since making the leap from music videos, his first film, _Friday_ , became a cult comedy phenomenon and his attention to character has elevated films like _Set It Off_ and _The Italian Job_ beyond the trappings of their genre. When I spoke with him, Gray said he had never before talked so openly and in such depth about his career. It clearly wasn't because he had nothing to say. _You were born in New York. How long were you there?_ FGG: Two weeks. My parents were on a trip from Chicago to New York, and I just happened to decide I wanted to show up in New York. (laughs) _Were there any early experiences growing up that pointed you in the direction of making films?_ FGG: I don't have a typical filmmaker background. I didn't grow up with a super eight camera or a video camera. I didn't start cutting movies when I was four or five. I actually didn't really start to get into the research of film until I was much older. I decided I wanted to direct a lot earlier than I started to do the research, which is really strange, but it is the case. _There must have at least been a point early on when you started to comprehend the power of the director behind a film?_ FGG: Honestly, I made films before I learned that. I made _Friday_ and _Set It Off_ before I really started to delve into what it meant to be a filmmaker and what you bring to a story as a filmmaker. I wish I had all the stories about how one day I watched _Citizen Kane_ and it inspired me, but it's not my story. I grew up in South Central L.A., which is a really bad part of L.A. 
I had a really good friend who's still my best friend to this day, and we would just hang out. I was probably about twelve or thirteen. I had this uncle, his name is Phil Lewis, who was an aspiring actor. He was doing a play at this local kind of community theater. It couldn't have seated more than a hundred people. This is right in the middle of South Central L.A., and he said, "Listen, I really want you guys to come down and watch this play." Now, where I'm from, if you mentioned that you were going to a play, they'd probably kick the living shit out of you. (laughs) So it took him a year to convince us to go to this play, and the only reason he got us to go is he said, "It's summertime and they have young pretty girls." That's all he had to say. Wham! We're there. (laughs). _He knew how to sell it to you._ FGG: Exactly! He knew exactly how to sell it. We went to the show and it was eye-opening for us. It was like a window to a whole other world. Sometimes when you grow up all you really see is the neighborhood. It was good to see black people performing, and it was just a great environment besides the fact there was a bunch of pretty girls who were in the show too. (laughs) He didn't lie. And that sparked my interest. _Your parents split up when you were young. Did you split your time between your parents, or were you with one more than the other?_ FGG: With my mother more than my father until I moved back to Illinois during high school. _Illinois must have been some culture shock for you after growing up in South Central._ FGG: It was definitely a different environment! It wasn't gated. You didn't have armed guards. (laughs) It was one of the top-ten public high schools in the nation. I'm one of six black people in the entire school, like twenty-five hundred kids. 
There was a freedom there that I didn't have in the high schools in L.A., and there was a life-skills class I took that taught you how to pretty much lay out your goals and identify what it was exactly you wanted to do. I joined a show there called _Giants in Action_. It was a show that we produced, directed, and edited for local cable access. And I remember all you had to do was bring your student ID and you could check out all this equipment. I'm thinking to myself, "Come on, this is not real. There has to be some sort of hitch. Where's the catch? I show them my ID, they give me $3,000 worth of equipment?" It was the first time I realized that people in privileged environments don't take advantage of what they have. So I take out this equipment and I do a documentary on the Shedd Aquarium. It was the ultimate freedom to me, because I realized that there was no one looking over my shoulder and no one questioning what was going on. And that was the beginning of the end. I was the kid who asked all the questions, almost to the point of annoying people. I produced, I edited, I directed, I did everything. And when I mixed what I learned technically with some of the life-skills courses and just some of the stuff I learned from the streets, I laid out what I wanted to do and how I wanted to do it. I did it very, very thoroughly—time lines and everything. In my original time line I wouldn't have made it to the point of where I am today until I was forty. _So according to your time line, where should you be today at about thirty-five?_ FGG: Working my way up through the ranks. You have to know that at the time, I couldn't really name any black filmmakers. I don't think even Spike Lee was around. Obviously there were black filmmakers—Gordon Parks and people like that—but I didn't know about that at the time. _Did making movies seem like an attainable dream to you?_ FGG: The way I was raised and the environment I grew up in, there was no such thing as dreaming. 
I think it did create an environment where I had tunnel vision, and it was kind of do or die for me. And I think because I was so focused and I didn't put a lot of time into anything else, I think that's a reason why it happened for me. Looking back, it looks kind of retarded, because I had no mentors, no clear examples of successful black filmmakers. It was just something I was interested in. But the one thing that you get from growing up in that type of environment is a certain level of determination and survival. It's something you cannot learn in school. _But why do you think it was making movies that you were so focused on early?_ FGG: You know, if my uncle had taken me up to the neighborhood clinic and there were a lot of pretty girls up there, maybe I'd be a doctor today. Who knows? But in looking back, it just seems like a natural progression. _After high school, you moved to L.A. and went to college, right?_ FGG: I went to L.A. City College for three months. _What happened?_ FGG: You're at that age and you're trying to break into this industry, you're trying to do a bunch of different things. You're trying to PA on sets. You're trying to get extra work on movies. You're trying to do so many different things to get close to it that, actually, I felt like school was getting in the way. Maybe I was impatient. I know you had to get a certain amount of credits to even get near a film class, and I was just like, "No, I just want to learn about film." _You've spent virtually all of your time since you were in high school just working, learning on the job how to make movies. Do you ever feel like you might have missed out on valuable life experiences that could inform the work?_ FGG: Realistically you miss out on certain experiences that could have been beneficial to me as a filmmaker and storyteller. And I'm very sensitive and aware of that right now. Here I am, thirty-five, and I've accomplished some things. 
But you know, maybe looking back I should have celebrated a little bit more. There was a point where if you looked at my schedule and how hard I worked and how long I worked, you could say, "Well, this guy is a robot." And looking back I probably was some form of robot. (laughs) But I had to be, because I wasn't born into this industry. I wasn't related to anyone in this industry. I look at my first five or six pictures as boot camp for me. It's hard enough to make a picture, period! It's even tougher to do it without learning it at college and learning at least the fundamentals of filmmaking. It's even tougher when you're in the middle of filmmaking and you're learning at the same time. So I had to work twenty hours a day for months and years on end to get a grasp of it as I was doing it. So yeah, there is a part of me that says you did miss something. So now what I'm doing is I'm kind of slowing down to experience a little more of life, not only for my career but just for myself, because I woke up one day and I was thirty. (laughs) I have an impressive résumé, but I definitely sacrificed a lot. _How did you come to direct a short film of your own?_ FGG: When I was younger I used to say, "I can't just walk in the door and say, 'I'm talented. Hire me.' " I would have to walk into a room and leave a tape that said this is what I'm capable of and if it's good enough, then great; and if it's not, I'll keep working at it. I never called myself a director until I directed. I wanted to be able to justify the title. When I was working as a cameraman, I made good money and I saved that money to do a short film. It was a really big risk and a gamble. You just don't give up that type of position at that age unless you feel like you're on to something. But I looked at it as an investment. And all these motivational books I was reading, they had all these examples of people who weren't afraid to take risks. 
Maybe it was silly, looking back at it, but it's what I was focused on. And I felt like making _Divided We Fall_ was definitely a step in the right direction. _What was the film about?_ FGG: _Divided We Fall_ was in the _Boyz N the Hood_ and _Menace II Society_ vein. It really touched on inner-city violence. I knew if I was to give any contribution, I had to look at my own environment and my own neighborhood and where I grew up and try to effect change, and that's what I was really focused on with that film. _And how did that film lead you to Ice Cube?_ FGG: I ran out of money and I started to look around for help and I went to Ice Cube's production company. I looked him up and said, "I'm trying to finish this project up. Would you like to help out?" I sat down with him and talked to him about this project and he said, "Yeah, I would." He got a company to donate some money for the production. We developed a relationship, and I started actually doing music videos for him. I would write these concepts that were short films—and I'm talking about with dialogue and action sequences and Steadicams and cranes. I would never pay myself, but I would always put in the director's fee to shoot 35-millimeter instead of 16, to add in an extra Steadicam or to put in an action sequence. I just really went out and used music videos . . . _. . . as your guerrilla film school?_ FGG: Yeah. And at the time I don't think a lot of people were doing that type of work in music videos. You would see a lot of performance videos, but you wouldn't see stories. I would put together stories with a beginning, middle, and end, and the artists would get something that was original and fresh. So that gave me the ability to walk onto a set and put together what I thought was a really decent story and kind of sharpen my chops technically. _You were quite young when you started to direct these videos, just twenty-two or twenty-three years old. You must have been one of the youngest people on the set. 
How do you lead in those circumstances?_ FGG: That's something you have to psychologically work around. I've always kind of looked young too. I have a picture of me on the set with Ice Cube back then, and I look like I'm fifteen years old. As far as being a leader, I was always very clear on what I wanted. If anybody else had any issues with it, then that was really just more their problem than it was mine. I wouldn't let anyone break my stride or my spirit. As I started to grow, I became aware that some of the stuff I was doing was wrong. And then I started to become aware of my approach and that I had to adjust it. _What did you need to adjust? Are you talking about technical approaches as a director?_ FGG: Not necessarily the technical stuff, but articulating your vision and motivating your actors instead of getting in there and result-directing, which I did for quite a while, for a few films actually, until I realized there was a different approach you could take as a director that is better for an actor. _Specifically how were you approaching your actors that was, in retrospect, the wrong approach?_ FGG: (laughs) Specifically, when I was twenty-four, I would say, "Well, in this scene you're supposed to be really mad and you're going to cry and I want you to really cry in it." (laughs) I didn't think I was doing anything wrong. But walking onto _The Negotiator_ and working with Kevin Spacey and Sam Jackson, it's a totally different situation. And I don't think until maybe halfway into the film did I realize that I was result-directing professionals. I started to read up on Spielberg and on different directors like Kubrick and Truffaut, and I realized that half the people I respected had problems directing actors. _So that happened during_ The Negotiator _? 
Were there a lot of conversations with Sam and Kevin about the better way to talk scenes out?_ FGG: No, I would never do that, because I was way too afraid to let on that I didn't know exactly how to do it technically. Kevin was amazing, though. I remember giving him, like, a thumbs-up and thumbs-down and stuff like that. I'm not really embarrassed now, because it's all a part of learning. I really changed, because I started going to acting classes and started to learn what it takes to motivate an actor in their language, not in my language. I've always known exactly what I wanted. I never had a problem with vision. Articulating that vision was my issue. _What's interesting is that many critics and others have cited from the start your attention to character. And from what you're saying, you were a bit clueless about working with actors in your first few films._ FGG: Well, even before then I was able to get good performances. It's just that it was a very kind of crude way of doing it. (laughs) I'm really embarrassed about this one. I look back and I laugh, and I hope Jada Pinkett will laugh at this too. In _Set It Off_ she's supposed to have an incredibly emotional reaction to her brother dying, and I remember showing her a moment in _Glory_ with Denzel Washington's character being whipped, and a tear drops from his eye. I'm showing her another actor in another performance in another movie, and I have no clue that this is just not how you motivate an actor to get there. I've never told that story before because I probably won't work again after that. (laughs) _Tell me about how your first film,_ Friday _, came about._ FGG: At the time, Matty Rich had done _Straight Out of Brooklyn_ and Ice Cube was interested in doing something in that vein. Not in subject matter, but shooting something for seventy-five grand. He said, "We can put together a movie for seventy-five grand in black-and-white, and we'll call it _Friday_ and it'll be a movie about the 'hood." 
And here I am, struggling to finish this short film and I'm like, "Great!" Then it ended up getting a lot bigger than a $75,000 black-and-white movie. It was funny, because it was hard and easy. Hard because it was my first movie and I was obviously trying to figure it out. But when I had read the script, I envisioned all of the characters and all the locations and the neighborhood that I grew up in, and so that made it a lot easier for me. I went and shot it in my neighborhood on the block I grew up on, in my best friend's house and my house. _So you'd recommend this for aspiring filmmakers—to just shoot their first film on their block?_ FGG: Shoot it on your block. It cuts down on the research and development. (laughs) I'm glad Ice didn't come to me with a project about the moon or something, because I probably wouldn't be standing here today if that was the case. _The shoot was something like twenty days? Did it feel like it was just sort of flying by?_ FGG: Absolutely. One take, two takes, three if we were feeling really rich. We flew through that thing. _What was your mind-set through the film? Excited? Terrified?_ FGG: I was definitely excited, but probably more nervous and terrified than excited, because you just don't know what you're doing until you put it together. I didn't know we would end up with a movie that people still watch to this day. I just basically used my instincts to put together what I thought was the funniest version of that script that I could and hope like hell it worked. _When you got into the edit room, did you feel like you had a good film?_ FGG: No. I was like, "I better start calling the TV stations and get my camera. I better dust off the résumé." (laughs) My first four times, every single time after the first cut, I'm thinking I better dust off my résumé and really think about doing something else. _You're that hard on yourself?_ FGG: Absolutely. I'm thinking, "How the hell am I going to make this work?" 
I mean, now sitting where I'm sitting today, it's a totally different process for me, but it was all very new to me. The Negotiator _was the largest budget ever given to an African American director at the time. The press made a lot of that. Did you?_ FGG: No. I've never spent a lot of time basking in any success. I've always wanted to get the green light and go ahead and immediately focus on what it's going to take to put [a movie] on the screen. Filmmaking is tough. It's hard enough to make a bad movie, let alone a movie that's relatively creative and then, beyond that, a movie people consider good. I use the example of eating a bus. You just don't know where to start. (laughs) You just have to take the first bite and keep going. _Did you feel like_ The Negotiator _did what you wanted it to do, at least in terms of your career?_ FGG: I think so. It definitely illustrated what I was capable of. It kind of launched me. _It launched you to directing_ Nutty Professor II _for a time._ FGG: Well, for a time. You did do your homework. Ouch! _The favored euphemism of "creative differences" was mentioned upon your exit from that project. What happened? What was the difference of opinion on the project?_ FGG: The difference in opinion from my perspective was that I didn't have enough time to put it together. It was a lot of special effects. Eddie Murphy had to play a lot of different characters, and I didn't have the prep time to put it together the way I felt was necessary. I didn't want to fail. I was working with Brian Grazer, and this was a great set of people. And if I put together the bad version of _Nutty Professor_ , that wouldn't have been good at all. _What was the appeal of_ A Man Apart _for you? It was called_ Diablo _back when you were working on it, right?_ FGG: Yeah, it was called _Diablo_. The deal was I wanted to make a movie that was kind of a mix of _Scarface_ and _Seven_. It was a much bigger movie than ended up on the screen. 
It ended up being much different than I originally envisioned. _The film looked to be a "troubled production." It was delayed a few times._ FGG: It was a tough production from beginning to end. I look at that film and it's hard for me not to think about what I had originally envisioned. It was something much bigger and cooler. Let me give an example: I didn't direct the last ten minutes of that movie. John Herzfeld did. And that was because I was on _The Italian Job_. I didn't have the chance to do the final mix the way I wanted to. I didn't have a chance to do the final score the way I wanted to. I thought we were going to do a completely different ending, and the script was much larger. It was the hardest experience I've ever had, and it was probably the best experience I had too, because I learned so much about not only filmmaking but about this industry and the politics. It definitely made me stronger as a professional, as a filmmaker, and as a storyteller. _You followed it up with a much better received film in_ The Italian Job _. I understand you put together a visual mission statement for your crew before starting the film._ FGG: I wanted everybody on the crew to have a sense of what movie we were making, because I didn't want it to feel like we were making different movies. If you have a lot of different players involved and they're making different movies, it causes a problem. I created a document that made it very clear on every level what movie I was making. You could easily reconcile any disagreements by saying, "Look, this is what I'm doing or thinking." I think that's the reason we experienced a certain amount of success with _The Italian Job_ —because I was very clear about what movie I was making, and I think most people felt the same way. _Do you think that if we talk in ten or twenty years we'll be able to look back at the films and say, "This makes an F. Gary Gray film"? 
What do you think the common denominator will be?_ FGG: I think they'll probably get crazier as I go. (laughs) I'm starting to feel that you'll probably get something a little crazier. Maybe I'm predicting the future, but hopefully I'll be able to do something different if I'm able to keep it going. Again, this really is boot camp. I've kind of survived boot camp so . . . _. . . now the real work begins?_ FGG: I think you're going to start seeing some different things from me. Will the next one be The One? Maybe, maybe not. But you're going to start seeing different stuff, I think. (laughs) I'm already starting my mind brewing. Definitely twenty years from now we're going to have a great conversation.

THE DIRECTOR'S TAKE

F. GARY GRAY

_What is the first film you ever saw?_
_Cooley High_

_What is your favorite film of all time?_
_The Godfather, Part II_ and _Imitation of Life_

_What's your favorite line in a film?_
Moss: What's your name? Blake: FUCK YOU, that's my name!! You know why, mister? 'Cause you drove a Hyundai to get here tonight; I drove a eighty-thousand-dollar BMW. That's my name!!— _Glengarry Glen Ross_

_What movie made you realize that film was an art?_
_Fellini Satyricon_

_What movie do you consider your guilty pleasure?_
_Chopper_ and _Office Space_

_What's your favorite movie snack food?_
Popcorn

_Who is your favorite director of all time?_
Fellini

_Who is the most impressive filmmaker working today?_
Alexander Payne

_What quality do the best directors share?_
Being prepared

_Who is your favorite actor or actress of all time?_
Jimmy Stewart and Meryl Streep

_Who is your favorite actor or actress of today?_
Johnny Depp and Charlize Theron

_Who would you cast as yourself in a film about your life?_
Don Cheadle

_If you could remake one movie, what would it be?_
_A Man Apart_

_What is your best quality as a director?_
I'm still working on that.

_What is your greatest weakness as a director?_
Craft Service

_Finish this sentence: I'll never direct a movie about . . ._
I never say never.

_Finish this sentence: The perfect movie is . . ._
a great script, a great cast, a great studio, great producers, and a great experience.

_What will they be saying about your work fifty years from now?_
That I wasn't afraid to take risks.

_What piece of advice do you have for aspiring filmmakers?_
On all levels, you must do your homework.

_What are you as passionate about as moviemaking?_
Music

**DAVID GORDON GREEN**

"I always like movies about people I recognize. I'm a lot less inspired by a trip to the moon than I am a trip to the grocery store."

**SELECTED CREDITS**

_George Washington_ (2000)-writer/director
_All the Real Girls_ (2003)-writer/director
_Undertow_ (2004)-cowriter/director

At just twenty-five, David Gordon Green's debut feature, _George Washington_, was recognized by Roger Ebert, the _New York Times_, and _Time_ magazine as one of the best films of 2000. But who would have thought the maker of films of such lyricism as _All the Real Girls_ and _George Washington_ grew up loving _Iron Eagle_ and _Red Dawn_? There you have the paradox that is Green. He devours all forms of moviemaking, but what he has created in a short period of time is entirely unique—odes to love and humanity and, of all things, the industrial decay of the South. There is poetry in Green's work all too rarely found in American filmmaking today. It is no wonder that he found a mentor early on in a filmmaker as preternaturally gifted and enigmatic as himself—Terrence Malick. _You grew up primarily in Richardson, Texas. Could you tell me about it?_ DGG: It's not so much anymore. It used to be amazing—a bunch of old crazy people and a lot of one-of-a-kind people. It was a good melting pot too. Every ethnicity, religion, economic background was represented.
Now it's just been kind of taken over by all the other crap that seems to pollute the nation. _One of those chains you're alluding to had a big impact on you as a kid, when you were about eleven._ DGG: Blockbuster? _Exactly._ DGG: Yeah! I was the first member of the second Blockbuster, apparently. They had two test shops when they first were getting going. For the longest time I thought I was the first member of Blockbuster ever. _It's pretty ironic, considering your thoughts on the chain now._ DGG: Yeah, I wouldn't be caught dead in one now. _But at the time you must have been excited for the opening._ DGG: I remember a sign when they were first opening saying 10,000 VIDEOS, and I thought, "Whoa, that's going to be trouble." It was literally three blocks from my house. I was sitting there counting the days until I could get to all these movies I'd never seen besides being edited for TV and that kind of shit. _Were you ever into escapist entertainment as a kid?_ DGG: I was totally into everything. I always liked _Big Trouble in Little China_ and _The Goonies_ and I liked a lot of horror movies too. I didn't like things that were kind of overblown. I mean, I wasn't into _Top Gun_ but I was totally into _Iron Eagle_. _That really tells a lot about somebody._ DGG: I was _totally_ into _Iron Eagle_. I saw all three [sic] _Iron Eagles,_ and I couldn't make it through _Top Gun_. I don't know. It's probably my knee-jerk reaction when people are all clamoring over something to go find the oddball version of it. Chappy Sinclair? I mean that movie was awesome. _You wouldn't strike me as a_ Predator _fan, but I heard you were._ DGG: Where would you have heard that? I mean, it's totally true, but I can't remember ever saying that. But, yeah, I love all the old Schwarzenegger movies, from _Commando_ to _Predator._ And _Red Dawn_ . . . can we talk about that? One of my goals in life is to make a movie called _PG-13_ that's about the opening of _Red Dawn_ , the first PG-13 movie. 
_I've also heard you were a big_ Dukes of Hazzard _fan._ DGG: It had a lot of characters that were familiar to me. I spent a lot of time in my childhood in this town called Longview in East Texas, where a lot of my family and relatives all lived, and it was very much just going out to the country and racing cars and getting drunk and going out on the lake. Those were the surroundings of my youth, so that show kind of brought out all the hee-haw spirit. _What are your earliest memories of wanting to make movies?_ DGG: I remember taking it very seriously in the sixth grade. We had to write a letter to ourselves in the future. It was this weird little project, and I remember writing a letter to myself asking what good movies I'd made lately. (laughs) I was pretty geared from maybe the second grade or third grade. I was always into music and drawing and art of any kind. I remember my parents were always like, "Yeah, you should go be an architect or work in advertising or something." They were trying to think of a more realistic way to kind of gear those instincts, but it didn't really work. I was always playing with a super eight camera, still photographs, and stuff like that. I was making movies when I was fourteen. _What were those first ones about?_ DGG: I remember the first one was called _Fried Chicken_. _What was that about?_ DGG: It was about this retarded kid that lived in my neighborhood. People used to make fun of him, so I decided to make a movie about him because I thought he was cool. I don't know why I called it _Fried Chicken_. Then I made another movie called _Black-Eyed Peter_ that was about a can of black-eyed peas that was possessed. I was the typical jackass kid trying to be funny and cool in the movies—get your friends and the kids in the neighborhood to come and make assholes out of themselves. _Where did you go to college?_ DGG: I went to the University of Texas in Austin for a year. 
After high school, I enlisted in the Marines, and then my mom convinced me to just give a year of college a try, so I went down there to UT. _Was joining the Marines your idea?_ DGG: I'd always wanted to join the Marines because I figured there were a bunch of jackasses and jarheads over there that needed to be led with some sense of intelligence. I've always been kind of angry about one thing or another. I was just wanting to take some of that leadership away from people that I thought were probably pretty ignorant around the country, meatheads being brainwashed and going through drills. _And your mom convinced you to reconsider?_ DGG: Yeah, she didn't like the idea of me going. My dad was a Navy guy, so he was all pumped up about it and my grandfather was in the Army. My mom was like, "Let's just try school for a year, and if you don't like it, then join the Marines." So I didn't like it and I was getting kicked out of school and I was getting ready to go back into the Marines, and then my mom saved the day again and found out about this art school in North Carolina. _Tell me about that school, the North Carolina School of the Arts._ DGG: It's kind of a little undiscovered thing. It's primarily known as a ballet school. I went there and saw a bunch of dancers hanging out on campus and decided I could be happy. (laughs). And I heard they had this huge, insane 35-millimeter collection that was coming in there. So I got the movie archives, I got the young lovelies prancing about, and it was in North Carolina, which is near the mountains and the beach. It seemed like a beautiful place to shack up for a little bit. I went there and met a lot of amazing people who I make movies with now. _Including Tim Orr, your cinematographer?_ DGG: Yup. Tim and Richard [Wright], my production designer and my sound mixer; my producer, Lisa Muskat; and probably forty other people I could rattle names off for you. 
_It sounds like it was exactly the environment you were looking for._ DGG: There was just instantly kind of a collaborative energy there. I met people who had common instincts, and they opened my horizons up to foreign influences I had just never been exposed to—Lindsay Anderson and stuff like that. I was just like, "Oh shit, there's a lot of amazing possibilities I never knew existed." I remember the University of Texas right when John Woo had blown up and I was kind of like, "Whatever, that's kind of boring." My taste was to go and watch _M*A*S*H_ or _McCabe & Mrs. Miller_. _Early on, did you ever try your hand at writing a big Hollywood kind of script?_ DGG: I write those all the time. The first script I ever wrote, as far as features are concerned—I was probably nineteen years old—was about a little girl who becomes president, called _P Is for President_. Everything was high-concept commercial. I wrote those all the time. Because my strategy was, let me be a hack Hollywood writer. Let me take that money and do the John Sayles thing. That was the first way I remember strategizing how a middle-class kid would get in the business. It was like, let's see every cheeseball movie that's made over $60 million in the last fifteen years and apply some of that garbage I've learned. _When did you move out to Los Angeles?_ DGG: The day after I graduated I drove out there. I didn't get an apartment or anything, but I lived under the staircase of a buddy of mine for six months. _What kind of work did you do?_ DGG: I did every imaginable job. I did a lot of PA work. I was an assistant to this cheeseball B-movie producer. That was amazing. It was shocking to see how movies were packaged. We would put together movies that were presold to foreign territories. So they'd already made 50 percent profit. The movie cost $1 million. 
They'd finance it based on $1.5 million foreign presale dollars, so then the movie went into production and the producer didn't even have to show up and give a shit about what it ended up like. It was just people who didn't care and were looking to get, you know, Joey Travolta in their next movie. (laughs) It was amazing to see how that process worked and how people spent all day every day putting together crap and pretending they cared about it. I would get so depressed. I was like, if this is what makes sense in Hollywood, let me go home and make a film that no one will ever see. I also worked at NRG, which is this big market-research company. So I would go to test screenings at night and watch how studios would come in and take a director's work and basically make it as mediocre as possible so that the largest number of people would like it. _The time in L.A. was not a total loss. You used the money you earned there for your first feature,_ George Washington _._ DGG: During that time in L.A., I went in there basically saying, "I have got to make as much money as I can." I was living for free, never going out, never eating out, never going to a bar, sneaking into movies always. I was working, like, five jobs. I worked as a maid. I worked as a janitor in an insane asylum. That was also during the dot-com boom, and I was gambling on the stock market a lot, so I was a little businessman. I had $30,000 by the time I got fed up and had to go. I needed $42,000 to be able to shoot _George Washington_ in anamorphic. For a while I was trying to get Biz Markie to come be in it. (laughs) I was like, "I just need somebody's name." But I was talking to my friend Tim, showing him the script, and he was like, "If we're going to do it, we really have got to do it the way we would dream of making it. And that means no stars." I needed to raise another twelve grand, so that's when I started writing letters to friends and family saying, "This is not an investment. 
You will not see your money back. If you donate money to the arts or NPR or your local library or whatever, look at me as your charitable art project for that year." So I ended up putting together about $12,000, most of it coming from people giving three or four hundred dollars. So we got enough to get it in the can and edit it. Nobody got paid on the movie. We ate pizza every day and chicken fingers and lived in the same house. George Washington _shares a landscape with_ All the Real Girls _and to a lesser extent,_ Undertow _. What is your preoccupation with industrial decay about? It's not a typical landscape seen in films._ DGG: I love where man meets Nature and where Nature comes back and takes what's rightfully hers. I don't know. There's something about the texture of rust and the color of collapsed industry that I've always been infatuated with. There's such a flavor to it. And just living there and knowing what it means to people, it's a very hazardous background to a lot of people's lovely lives. _Since_ George Washington _was so well received, I'd imagine you met with a lot of interesting people intrigued by your work._ DGG: I have a journal that would knock your socks off. The funniest thing is when I had breakfast with Sean Penn and he made me watch him read my script. He was just sitting there reading it to himself, and I was just sitting there watching him. That was amazing. _How long did that take?_ DGG: A while, actually. Long enough for it to be uncomfortable and then hilarious and then uncomfortable again and then really funny. _In virtually everything ever written about you, comparisons are made to Terrence Malick. He produced_ Undertow _after getting to know you after seeing_ George Washington _. When you finally met him, I'd imagine that was important for you._ DGG: Getting that first note of, "I like your movie and here's a project idea I've got for us," that was pretty surreal. 
But there's a little gear that switched when I went back to North Carolina and decided to spend every dime I had. I think it was the sanity switch. I just decided I'm going to invest everything I have. And I haven't been surprised since. It's like you can't shock me anymore, with the good and the bad of it. So I just said, "OK, when am I going to meet him?" and I met him and then was like, "He's a great guy; when am I going to make a movie with him?" He was very inspiring and encouraging. _A lot of people see his influence in your work from_ George Washington _on. Do you?_ DGG: I never really thought about that. _George Washington_ was just a reaction to things I loved and hated. In no way do any of the elements of the story or characters have anything to do with _Days of Heaven_. But once you take on kind of a cinematic style and a lyrical quality, there's not a whole lot of other reference points. People like to be able to put you in a box so they can define you. All the Real Girls _came from an idea you had been working on with your friend Paul Schneider, who starred in the film._ DGG: It was something we wrote when we were probably beating our head against a wall because some girl cheated on us or broke our heart or we cheated on them or whatever. I don't identify with a lot of movie love stories. They're always about the good-looking guy and the hot girl and the perfect music and beautiful ending. And it doesn't register as true, so I wanted some aggressive attempt at authenticity. A heartbreak movie from a guy's point of view hadn't been done. I mean, it was great in _Better Off Dead_ , but I wanted to make a realistic love story, which there are very few of. _Say Anything_ , maybe. 
It's really scary to make yourself vulnerable, to just put yourself out there saying, "I know I say some goofy shit with girls, and now I'm going to let people watch what I would say or what's said to me regardless of how pretentious or annoying or silly it is." _What was your budget?_ DGG: Just over $1 million. _Was there pressure to cast someone other than Paul in the lead role?_ DGG: Yeah, after _George Washington_ we had a lot of interest from places about it. We had an offer to set it up with a studio and they wanted to do a $14 million version of it and they wanted me to look at their up-and-comer list, and Paul certainly was not on it. (laughs) But it was kind of a promise I'd made to Paul early in the process. _Could you imagine shooting something in a studio rather than on location as you have thus far?_ DGG: I don't know. _All the Real Girls_ we shot in Paul's hometown. We would shoot outside his ex-girlfriend's house and try to make as much of an emotionally responsive atmosphere on location as possible. I always try to find the most appropriate context or situation for any movie I want to do. There's this Nick Cave book called _And the Ass Saw the Angel_ that is this really gothic Southern book, and I really want to do it on soundstages in eastern Europe. _Do you think that you will always use non-actors to some degree in your films?_ DGG: I hope so. It's always fun and unpredictable, and I like having that degree of uncertainty sometimes. A lot of trained actors have a difficult time improvising. I've certainly been unsuccessful in some instances of trying to make it work or get somebody up there and be themselves. But I'm always up for that challenge to get those really human notes that are a little less designed and a little less off the assembly line. _Are rehearsals an essential part of the process for you?_ DGG: I can't think of a scenario where a rehearsal wouldn't be beneficial. I just put little stock in the script. 
I don't even take a script to rehearsals. Script supervisors always quit my movie. They hate me. _I've heard you'll sometimes talk to an actor during a take?_ DGG: All the time. They hate that, but it's really fun. I'll just yell at them or something. In _Undertow_ when Josh Lucas is chasing the pig around, you can hear me laughing. We tried to cover that up with pig squeals, but you can still hear me laughing because it's so funny. I'll ruin takes because I can't hold my laugh. A lot of people just check out compositions and look through lenses and stuff. I don't do that. That's why I need Tim—because I don't need to do that. I just know it's going to work right, so I get right up there with the actors and act vicariously through them. _Did_ Undertow _feel like a natural progression for you? It was more of a genre film than your first two._ DGG: I knew I needed to do something that wasn't as slow-moving and lyrical as the first two movies. There was a strategy to open my career up to not be doing Southern weepy movies all the time, which is fine, but there're a lot of things besides that. _Is it true you wanted to film in Germany?_ DGG: Yeah, I wanted to do it in Dresden. Where'd you read that? I love Dresden. It's an amazing kind of dark city, and I wanted to do the second half of the movie in Dresden. _I assume it was another budgetary consideration to abandon that idea?_ DGG: Well, originally we were going to shoot the movie for $6 million, and then I got into conflicts with the studio and they didn't want to cast who I wanted to, so we agreed to part ways and I was going to make it for a million and a half with freedom. Undertow _makes it three films in a row of yours that have begun with a scene talking about love._ DGG: It's never been designed. It's just subconscious or coincidental or something. A moment of love would always seem like the right moment to start the movie. But yeah, I need to stop that. 
In _George Washington_ that scene was totally improvised. In _All the Real Girls_ there was a whole first act that we don't see because it was not great. And in _Undertow_ , that wasn't supposed to be the first scene. I don't look at scripts or anything like that after I start shooting a movie. And when we edit it, I just see what feels right. After we put it together I was like, "Shit, we did it again." _Do you want a moviegoer to recognize you as the director in one of your films? So far, your style seems pretty recognizable._ DGG: No. I don't want to be recognizable. I don't know that I can help it, but I don't want that. Just like any actor, no one wants to be stereotyped or put in a box. I have buddies that make big-budget movies and I go to their sets and they get forty takes sometimes. I've never had _four_ in my life except on commercials. If I had every option to get it to where I wanted, to where I predesigned it to be, there'd be a different game in the editing room. The hardest thing on _Undertow_ was it'd be like we didn't get the performance I needed and we would need to move on. _Do you feel at this stage in your career that your ambitions are exceeding the budgets you're being trusted with?_ DGG: Yeah, that's the problem. There're two big-budget movies I really want to make more than anything. There's this demolition derby action comedy, and there's this western about drug addicts in the Old West. It's about the birth of heroin, which is totally grim and dark and awesome as crap but nobody will ever let me make it unless I do certain things like put a nice ending on it. I need to get myself to a point where people trust me with more money and I'm more of a commercial commodity in order to get that done. Or do I find a way to make that a tiny movie rather than the movie I want to make in my head? Do I look at it strategically? I don't know. _You've said in the past that you wish you were making films back in the seventies. 
Do you think audiences back then were more in tune with the films you're interested in making?_ DGG: Yeah, audiences I think were open to challenges and now they're conditioned to the flavor of Starbucks. They're conditioned to see a romantic comedy that hits the same notes whether it's a foreign film or a domestic film. People want to know what they're getting into when they go to a movie. If you're advertising a thriller, it better not be funny! That's the dumb thing now. You don't even see scary movies anymore. It's all funny. Scary is now funny. _Do you worry about losing touch? Many filmmakers of the seventies have been accused of that._ DGG: That's exactly what happened to all those guys who made amazing movies. I think there's a point in your life as an artist when you're aggressively reacting to the world around you—who you know or what you know and who you hear—there's a point when you're not listening anymore and you're just Woody Allen. Woody Allen can't have the keen insight to human dialogue that he used to have, because when he goes out, people are all talking about who's sitting next to him. He's not able to be a fly on the wall and kind of pick and choose the funny parts anymore. I think people in their lives become so obsessed with what they do that they lose a little bit of who they are. I don't think it's necessarily running out of good ideas. It's just running out of the soul and patience in making a movie. And that's one of the things I really worry about as far as any sort of consistency in a career goes. _How do you avoid that?_ DGG: To live as anonymously outside the industry as I can and do all my writing in a way that doesn't have anything to do with Hollywood or movies. And work with people that aren't going to hit me with the burden of politics and the frustration of the industry. When I'm in production, it's kind of an escape and a freedom—an exploration. We're all discovering things. 
It would be really smart if I would take a couple years off and go join the French Foreign Legion. _There's always the Marines . . ._ DGG: I'm too old for the Marines now. I just found out, and I'm really upset. I can do the Peace Corps or the French Foreign Legion. I think the more you listen, the more you expose yourself to new elements and new ways of living. People get assistants, and their assistants go do their laundry for them. I mean, how do you find the cool voice of the lady at the Laundromat who's bitching about not getting her quarters out of the machine, you know? You don't know those human obstacles anymore, because you just went and paid someone to go and make your life easier. _What don't you like to see in films today?_ DGG: I don't like picturing the witty screenwriter. I just really get irritated when I see a writer taking over a performance. In my past I've attacked certain people, and I have no business doing that, because they're successful and they'll make money and people love their movies and they'll win Oscars. And, shit, I can't watch it. I get criticized for the same thing. I've got a perspective that some people don't agree with. People think I'm pretentious. But I kind of disagree because I just present what people come up with. What I hear, I write it down. THE DIRECTOR'S TAKE DAVID GORDON GREEN _What is the first film you ever saw?_ _Young Frankenstein_ , at two weeks old _What is your favorite film of all time?_ _Thunderbolt and Lightfoot_ or _The Conversation_ _What's your favorite line in a film?_ Ned Beatty saying "This corn is special" at the end of _Deliverance_ _What movie made you realize that film was an art?_ _Never Cry Wolf_ _What movie do you consider your guilty pleasure?_ _M.A.C. 
and Me_ and _Bad Ronald_ _Who is your favorite movie character of all time?_ Curtis Armstrong's "Booger" from _Revenge of the Nerds_ _What's your favorite movie snack food?_ My fingernails _Who is your favorite director of all time?_ Frederick Wiseman _Who is the most impressive filmmaker working today?_ Lukas Moodysson _What quality do the best directors share?_ Hunger _Who is your favorite actor or actress of all time?_ Richard Pryor _Who is your favorite actor or actress of today?_ Michael Shannon or Lily Tomlin _Who would you cast as yourself in a film about your life?_ Jamie Bell or Chris Elliott _If you could remake one movie, what would it be?_ _H.O.T.S._ _What is your best quality as a director?_ I go to bed early. _What is your greatest weakness as a director?_ No beard/can't dance _Finish this sentence: I'll never direct a movie about . . ._ Never say never. _Finish this sentence: The perfect movie is . . ._ watched at the right time. _What will they be saying about your work fifty years from now?_ I'm sure the world will have better things to be talking about. _What piece of advice do you have for aspiring filmmakers?_ Enjoy it. _What are you as passionate about as moviemaking?_ Tacos **LUKE GREENFIELD** "I am like the book of fear, and every one of my movies taps into that emotion." **SELECTED CREDITS** _The Animal_ (2001)-director _The Girl Next Door_ (2004)-director Luke Greenfield talks so passionately and earnestly about going for "the chill" in his work that it seems like he's still an adolescent dreaming of a career in the movies, instead of actually making them. "The chill" is that moment "where your hair stands up on your arms," Greenfield explains. A worshiper at the church of Spielberg, Greenfield speaks of his "audience" with the great filmmaker as if it were with the pope. His mother must have known something was up, naming her son after one of the most iconic film characters of all time, Cool Hand Luke.
Most in the industry agree that Greenfield's potential is far greater than the sum of his filmography's parts. After all, one Rob Schneider film and a _Risky Business_ retread doesn't amount to much on paper. But as anyone who witnessed one of Greenfield's "chill moments" in _The Girl Next Door_ can attest, the best is surely yet to come. _Where did you spend most of your childhood?_ LG: We moved around a little bit. My parents got divorced when I was four. I lived in Norwalk, Connecticut. I lived there for a bunch of years. I moved to Westport when I was eleven, and then I was there until I was eighteen. _Did you live at all with your dad? Were you close?_ LG: I visited my father every other weekend. It's not that my father and I aren't close. It's just that my mother and I are unusually close. We're much more best friends. My mom lives for me. She was part of this dream with me. Every time she gets onto the set of my movies, she's always crying. _Your mom in fact was an aggressive advocate for your career early on, wasn't she?_ LG: When my mom realized I seriously wanted to become a director, she wrote a letter to Steven Spielberg. I was sixteen years old and my mom wrote this letter saying, "My son has wanted to be you since he was ten. It's all he wants to do and I've never seen a kid so passionate for a dream at such a young age." She said, "We don't know anyone in Hollywood, we don't know how the business works, can you please tell me if my son has what it takes?" With the letter she enclosed two of my super eight movies and a research paper I did on Spielberg. Well, she sent it to Amblin and forgot to include a return address. Even to this day we don't know how it worked out, but somehow, some way Steven read my mom's letter, was touched, watched my little super eight movies, and turned to his assistant and said, "Find this kid." Steven wrote me this heartfelt two-page handwritten letter. It changed my life. I still have it memorized. 
He wrote in the end, "Your raw beginnings are so similar to my own, I know you'll make it. The next stage for you is film school." And the most important part of that letter is when he gave me advice on how to reach audiences. What he was saying was, "It's the 'truth in the telling' of stories that really reaches people." I remember him saying these great stories can be found in your hometown and your neighborhood. Basically he was saying _Jaws_ , _E.T._ , _Poltergeist_ , these huge concept movies, what they're really about is how relevant they are to us as people and the relationships and the things we go through. That's why his movies have reached so many billions of people. _Do you still have the letter?_ LG: I do. I still keep it in the original UPS brown package in a drawer in my desk. He's definitely been a strong influence in my life. _Much of Spielberg's work has been informed by his parents' divorce. Do you think yours is?_ LG: In Steven's work I always see the themes of divorce, and I don't have that theme at all. My themes are more about fear. For a long time I lived in fear and worried a lot and was afraid of the unknown. The movies I'm writing now are still thematically about fear. That stems completely from childhood and from not having a father figure from the age of four to eleven. If I could go back and do it all again, I would go back to elementary school and get in as many fistfights as I could. I think it would make me more of a courageous person, and I wouldn't have lived in fear for so many years. It's not fistfighting anymore. Now it's confrontations with studio heads. _What are the first moviegoing experiences that you remember?_ LG: My most memorable early movie experience was a traumatizing one. 
I was probably six years old, and I asked my father if we could go see _The Amityville Horror_ because I thought it was a ghost movie like _Casper the Friendly Ghost._ My father warned me that it'd be scary, but I assured him that _Casper_ didn't scare me. So my father took me to see _The Amityville Horror_ when I was six years old. It literally almost killed me. _No wonder you've been dealing with fear your whole life._ LG: No kidding! I'll never forget fifteen minutes into that movie I was so traumatized, he had to take me out of the theater. I couldn't sleep for years. Actually, it's funny; I became obsessed with scary movies. I really loved the _Halloweens_ and the _Friday the 13ths_. These movies scared the shit out of me, but for some reason I was addicted. One day I really do plan on making a truly terrifying movie. _How did you come to start making films when you were a kid?_ LG: When my uncle gave me a movie camera, that's when it all kind of began. _How prolific were you playing around with that camera?_ LG: By the time I graduated high school, I must have made about fifty short films on super eight. All my friends became the actors. I was basically making movies where these ten- and twelve-year-olds were playing adults with guns. And then I started getting much more into story and character. My last film in high school was called _The Prey_ and was about forty-five minutes long. _Were you storyboarding these films?_ LG: I can't draw. That's one of the main reasons that I'm a filmmaker. If I had the natural ability to draw, I would have been an artist. But what I kind of did was remedial shot lists. On _The Girl Next Door_ I called it my "visual design," creating the movie shot-for-shot. But it goes way beyond a typical shot list. It's actually twice the length of the actual script. I go into great detail for myself on exactly how I want to tell the story visually. It's my guide. And it's funny, I still think it stems from fear.
I can't imagine being on a set without a game plan. _I understand USC was an early goal for you?_ LG: I was obsessive about it. It was like USC film school or die. When I was thirteen, I started writing letters to the film school. I would send my super eight movies. So by the time I was eighteen and able to apply to USC, they all knew me. _When you got there, did you feel a kinship with your fellow students?_ LG: It was shocking because out of the thirty kids accepted to my program, only two of them had ever touched a camera before. They were scholastic people, academics who excelled in grades and had great SAT scores. The actual film program doesn't begin until your junior year. They want you to take their academic courses through sophomore year, which is a complete waste of time. Luckily for me I was so far ahead, technically, because I had shot and edited film. I had a good friend named Matthew Jensen, and he also was from the Steven Spielberg school of making mainstream films—making them connect to people, giving people "the chill" where your hair stands up on your arms at the unforgettable moments. And that's what we went for. _Was USC an environment that was welcoming for an admirer of Spielberg like yourself?_ LG: I was the enemy. At USC I was the biggest defender of Spielberg. This is in 1990. Basically everyone in film school in my experience are Scorsese fanatics, they're Kubrick fanatics, and they're like, "Fuck you, Spielberg." USC for me was just about the people I met and my own film experiences. I tell this to students: If you really have made a lot of films in high school and you know what you want to do and you've found your voice, go make independent films. Believe me, USC tried its best, but I really believe the only way you get better is if you keep making films. And USC is all competition. You have thirty kids and only two are picked your final semester to direct what they call the 480 film, and the other kids end up working on your crew. 
So you can pay $100,000 to go to film school to be a director and wind up just being a boom guy in one of your classmates' student films. It's ridiculous. _Were you one of the two picked?_ LG: I was, and believe me, it's just luck of the draw. I was very fortunate to have made a 480 film and screen my film at the First Look Festival and sign with a big agency. Jeff Robinov was a hot agent at ICM at the time, and he just picked me right up. I remember him calling me and he said, "I'll make it really clear to you: I want to represent you, and I think you should be directing mainstream studio films and I think I can get you a mainstream studio film based on your short film." _Did you feel like you had hit the lottery?_ LG: I was so green and so naive. I hardly knew what an agent really was, and here I thought, "I'm on the path." Little did I know there would be pure darkness and horrible times coming to me. My expectations of what was going to happen did not come true at all. Here I was thinking I am a hot young writer/director with a big agent over at ICM, and I'm delivering dailies tapes for Viacom for six dollars an hour plus mileage. My whole life I had been a Spielberg fanatic. I'd wanted to follow the chronology of his career, which is impossible to follow today. I wanted to make my first studio feature film at twenty-two. I knew the key for a director coming out of film school with a short film was to have a script that you're ready to direct. So I wrote a really personal story called _Sticks & Stones._ I did everything I could to set it up. ICM did everything they could, but there was never a close call. _How long did those lean years last?_ LG: From 1994 to 1999. The most frustrating thing was I hadn't been behind a camera for four or five years, and that was a dreadfully long time, seeing how I'd been behind a camera ever since I was ten years old. I was freaking out. It was a really depressing, horrible time and I didn't know what to do. 
I kept saying to my parents, "There's no way to work your way up as a director," and my family kept saying, "That can't be true." They kept lecturing me that in every industry you can work your way up. We always had this constant argument about why I wasn't trying to be an assistant director, and they wouldn't believe me that an A.D. is not a stepping stone to becoming a director. I'd tried numerous times to get into the music video/commercial world. I would have killed for that. They kept saying in order to get into music videos and commercials, you really should do something on spec. They said, "Go raise five thousand dollars and do a spec commercial music video." I said, "Motherfuckers, I can't even eat! I could _live_ off five thousand dollars for a year." It was a really brutal time because everything started falling through. _This is a long period of time to struggle. Is there a plan B at any point?_ LG: There is no plan B. For survival I was doing data entry at Disney Interactive. It was the most soul-killing job, working in this office in Glendale just punching in numbers. It was brutal. I was in debt and even if I won the lottery, I still wouldn't be happy, because I didn't care about the money. I just wanted to be a movie director. That's all I've wanted to do since I was ten, and when you rip that away from me I don't have anything left. _Eventually it was your short film,_ The Right Hook _, that sort of brought you back?_ LG: _The Right Hook_ brought me out of that abyss of darkness. There was a pending actors' strike, and everyone was terrified. _The Right Hook_ was done on videotape. And there was no sound mix. It was still a glamorized rough cut. I didn't have a manager. I didn't have an agent. And my plan was I wanted to hold _The Right Hook_ and not show it to anyone until I had a feature script ready to direct, and that was going to be _Destiny_.
This manager happened to see the film and he went crazy and said, "Luke, if you don't get your short film out there and this strike happens, good luck. Then your short film is a year or two old." He scared the hell out of me and he said, "Let me just show it to one person." It happened so quickly, it's kind of terrifying. This manager showed it to Greg Silverman at Revolution who loved it and raced it over to Todd Garner, and Garner called me immediately. He says, "Hey, it's Todd Garner. Listen, you're going to direct _The Animal_, so meet me tomorrow." He sent the script to me, and twenty-four hours later I'm in a room with Rob Schneider and Adam Sandler and the writer. _What happened at the meeting?_ LG: They make a semicircle around me and they start drilling me with questions about my favorite comedies. I started listing them off: _Midnight Run, Flirting with Disaster, Back to the Future, Swingers, Office Space_, and there are these frowns on their faces. They're like, "What the fuck are you talking about?" (laughs) Rob's very intense. He's looking for a certain personality. I just told him if you're looking for a guy who will kill himself and works nonstop, you've found him. The next day at a second meeting I remember him shaking my hand and he says, "You got the job, kid." I'd been waiting so many years and here it was—boom. From a $30,000 short to a $30 million feature. _Did you celebrate? What did you do after you knew you got the job?_ LG: I'm driving on Bundy, calling my best friend, telling him what happened, and Spike Jonze pulls up next to me in his Mercedes. So I rolled down the window and I signal him to roll down his window. He's like, "What the fuck do you want?" I smile and ask, "Do you know where I can get laid around here?" and Spike looks at me like "What?!" He's like, "Yeah, buddy. Go three lights up and make a right at the sign," and then every light we hit, he's running red lights because he doesn't want to get stuck next to me again.
I kind of creeped him out. I raced home and like the typical sap, I cried my eyes out. I finally did it. I have a movie, and I start shooting in six weeks. It's the most crazy time of my life. I'm prepping a movie where I have no control. It was an experience that I had to have, but it was everything backwards of what I thought filmmaking was. _What do you mean when you say_ The Animal _was everything backwards of what you thought filmmaking was?_ LG: _The Animal_ didn't have much to do with me. This was a Rob Schneider movie, and I was just a gun for hire. It was my first time directing something that I had nothing to do with creatively. Rob co-wrote the script, and he's very hands-on. I got along really well with Rob, and Rob's tough. He should be directing. I wouldn't even call myself the director on _The Animal_. I was a guy who knew how to tell a story, and that's what they kind of needed, but as far as any creative input, I look at _The Animal_ as Rob Schneider's movie through and through. _What are your memories of the morning of the first shooting day?_ LG: I remember listening to the U2 song "Beautiful Day," and I drive up to the set at five in the morning and there're just trailers and equipment trucks everywhere and I'm just like, wow. When you make short films you feel so alone. And all of a sudden you have a top-of-the-line crew ready to work—it was amazing. I was very comfortable. I've been making films since I was ten years old. I know what to do, and I went into it full steam. I had that entire movie storyboarded. Not that we followed the storyboards. (laughs) I wasn't really allowed to. But I knew exactly the movie I wanted to shoot. I think I immediately proved myself in preproduction and the first day of shooting because I was just so comfortable directing. I told myself I was back in Westport, directing my little short films on a super eight. 
_What was your goal for the film, knowing it wasn't really your film in the end?_ LG: I'll be honest with you, my goal of _The Animal_ was: don't get fired. Because if I got fired, it was over. It meant I couldn't handle a film with the big boys. It was not a filmmakers' movie. It was extremely frustrating, but I was very fortunate to get that experience so early in my career. _The Girl Next Door_ was the exact opposite experience. I was maniacally in control. _When you got the script for_ The Girl Next Door _, it was very different from the kind of film it ended up, wasn't it?_ LG: Oh, yeah. The original script was the tawdriest, a T&A movie with cum shots. It was completely the broadest film that you could possibly imagine, with absolutely no reality to it. _So what did you respond to?_ LG: I just loved the idea of a kid like me in high school who never had any adventure falling for this incredible girl who takes him on a wild journey to kind of capture all he'd missed out on because he was so focused on getting into college. It was me. All I wanted to do from age ten was get into USC film school, and the Matthew Kidman character in _Girl Next Door_ , all he wants to do is get into Georgetown. _The lessons learned from_ The Animal _could have been about control. I would think the lessons were different coming out of_ The Girl Next Door _._ LG: Love or hate _The Girl Next Door_ , every single frame of it is mine. The major lesson I got out of _The Girl Next Door_ was [about] the challenges of marketing. That was a huge wake-up call to me, that making a great movie is one thing, but if you don't have the people or the right ideas to market the film, you're dead in the water. So challenge number two is: How do you relay to America and the world the type of film you made? Nowadays when I'm working on my movies, I'm already thinking about the trailer; I'm thinking about the billboard; I'm thinking about what can be used in presentation materials. 
My whole goal is to share these stories for the world to see. I think _The Girl Next Door_ really deserved a lot more recognition and it was a bummer, but lesson learned; move on. _The irony, of course, is that the film that's so much closer to your heart was virtually ignored by the majority of the moviegoing public._ The Animal _did very well. That wasn't the case for_ The Girl Next Door _._ LG: _The Girl Next Door_ opened to $6 million. It bombed beyond belief. Our world in Hollywood was expecting this movie to make $100 million, and it was just crucifying. But it was a different thing than _The Animal_ , because instead of just agents and executives being excited, I've got people like Steven Spielberg calling me, which was much more rewarding. _Speaking of Spielberg, I know you finally met your boyhood hero._ LG: It was one of the most incredible days of my life. Steven came into the room, laughing, with a Xerox copy of the letter he'd written me sixteen years ago. He says, "Luke, this is very weird. I saw your movie twice because I loved it, and I had no idea I had written you a letter sixteen years ago." He sat down and he was talking to me as if I was George Lucas. He was looking to me and saying, "What should we do together?" We talked for a long time and I said, "My mom's going to kill me because I didn't tell anyone we were meeting." He looked at my cell phone and says, "Well, let's call her right now." He doesn't tell her who he is. He's like, "Hey, Beth, I really loved your son's movie. There's no question we'll be working together." My mom freaked out, crying hysterically. It was very cute. She was literally sobbing and saying, "You have no idea how much you mean to my son." It all came full circle. It was so surreal. _Does criticism matter to you? Roger Ebert, for one, really ripped apart_ The Girl Next Door _._ LG: He's a fucking idiot. I mean, Roger Ebert watches my film and says, "I feel dirty just watching it"? It just makes me sick. 
It's like either he's being paid, or he truly is a fucking idiot! I don't really care about the critics, to be honest with you, because the last thing I'm going to do is make movies for critics. What I really care about is making sure the audience loves the film and will remember it. There have been so many great movies that have been killed by critics. _What are you thinking about when you are on the set? What are your priorities?_ LG: My whole goal on the set is to get as close to what I picture in my head at all times. I have the whole entire movie edited in my head. I have the music. I have everything. So it's the challenge of making sure I can get everyone into my head, whether it's through the storyboards, whether it's playing music on the set, or just even acting things out. When I'm on the treadmill and I'm running or I'm driving my car and I'm listening to the music of the movie, I get these chills, and I want the world to feel those chills. _Do you have any shortcomings as a filmmaker that you feel you need to address?_ LG: I wish I had been allowed to shoot music videos and commercials, because I need to learn a hell of a lot more. I haven't been given enough experience. I mean, Spike Jonze and David Fincher have done a thousand music videos and commercials. I was never given the opportunity, so I can only go off my experience, which unfortunately for me has been screenwriting and script development. It helped me in a way, because I've been developing great material, but as far as being a technical director, I really lost out on a lot of experience. _What do you think your body of work is going to look like in five or ten years?_ LG: I want to make movies that people are never going to forget—the types of movies that made me want to be a filmmaker. I love what people call "dramedies," the films that make you laugh and then get incredibly dramatic and make you feel something. 
I know in my heart that I'll probably be remembered as a filmmaker for making a film called _Invincible_. It's something very close to my heart, and it's extremely edgy and groundbreaking. I'm going to wait a little until I can do it exactly the way I've wanted to. I've been planning to make this film since I was fourteen, so I'm going to make it right. It'll be more of an experience than a movie. _What do you think all Luke Greenfield films will share?_ LG: My themes always deal with fear. I am like the book of fear, and every one of my movies taps into that emotion. I think I live in fear of so many things. I'm motivated by fear. As a filmmaker and as a person, I'm going to overcome all my fears when I make _Invincible_. THE DIRECTOR'S TAKE LUKE GREENFIELD _What is the first film you ever saw?_ _The Spy Who Loved Me_ _What is your favorite film of all time?_ _One Flew Over the Cuckoo's Nest_ , _Cool Hand Luke_ , _Jaws, Jerry Maguire, Back to the Future, Out of Sight_ _What's your favorite line in a film?_ "You're gonna have to kill me."— _Cool Hand Luke_ "The Chinaman is not the issue here, Dude!"— _The Big Lebowski_ _What movie made you realize that film was an art?_ I'm not sure when I realized it was an art. Axel Foley in the original _Beverly Hills Cop_ had an effect on me, believe it or not. _What movie do you consider your guilty pleasure?_ _The Money Pit, Point Break, Rocky III_ _Who is your favorite movie character of all time?_ Randle McMurphy in _One Flew Over the Cuckoo's Nest_ Luke in _Cool Hand Luke_ _What's your favorite movie snack food?_ Candy I steal from the store around the corner _Who is your favorite director of all time?_ Spielberg _Who is the most impressive filmmaker working today?_ Spielberg _What quality do the best directors share?_ The best directors are natural-born storytellers. They can naturally see the story play out in their head. They don't need to concentrate or try to force it. It just comes, even when they don't want it to. 
_Who is your favorite actor or actress of all time?_ De Niro, Paul Newman, etc. _Who is your favorite actor or actress of today?_ Ben Stiller, Tom Cruise, lots of others . . . _If you could remake one movie, what would it be?_ I don't think I would do a remake, but if they were handing them out, it would be _The Fountainhead._ _What is your best quality as a director?_ Making audiences feel "the chill"—when you feel it down your back, your hair stands up on your arms, and you feel like you can fucking fly. It's that natural high. _What is your greatest weakness as a director?_ I'm a pussy. _Finish this sentence: I'll never direct a movie with . . ._ a story or a script I don't believe in. _What will they be saying about your work fifty years from now?_ "His movies took you on journeys and when you left the theater, you felt more alive. It was like an experience rather than just seeing a movie" or "What the hell was that guy's problem?" _What piece of advice do you have for aspiring filmmakers?_ Don't let the naysayers get you down and never, ever take "no." Make the stories that are personal to you. Make the movies that you're willing to die for. _What are you as passionate about as moviemaking?_ Getting a rush out of life, whether it be sex, danger, or some kind of adventure. But to be honest, there's nothing I'm as passionate about as making movies. That's why I have no life. **JOHN HAMBURG** "I remember getting into arguments with people who were like, 'Oh, you just care about the audience.' And I was like, 'Movies are _for_ an audience.' " **SELECTED CREDITS** _Safe Men_ (1998)-writer/director _Meet the Parents_ (2000)-cowriter _Zoolander_ (2001)-cowriter _Along Came Polly_ (2004)-writer/director _Meet the Fockers_ (2004)-cowriter John Hamburg is one of the most potent comic forces making movies today. 
As a writer he created the "circle of trust" between Ben Stiller and Robert De Niro in _Meet the Parents_ and its sequel, _Meet the Fockers_, which quickly became the highest-grossing live-action comedy of all time. Hamburg makes movies for audiences, and he is not ashamed of it. His second directorial effort, _Along Came Polly_, made more in its opening weekend than any other January comedy in history. Hamburg was raised in New York City by an attorney father and a radio-personality mother, so his beginnings were not necessarily humble. But he's proven that an angst-ridden childhood isn't a prerequisite for creating comedy that can resonate with so many. _Your mom has had a radio show on WOR in New York for a very long time._ JH: She's had a radio show for over twenty-five years. She started it when I was six or seven. My mother was a celebrity in a very weird way. She would get recognized on the street, which was weird because she was on the radio. Anytime we would go out to dinner, someone would come up and they were avid fans who felt like they knew her because of her job. _You and your sister were referred to often on the radio show. Did you have a heightened characterization on the show?_ JH: I don't know if I did. I certainly know that my movies have a high preponderance of yentas in the tri-state area who go see it opening weekend because of her publicity. Sometimes they've gone to see _Safe Men_ and they've been like, "What the hell is this?" That's why it was good to do _Meet the Parents_. It was one that the yentas could really— _It brought everyone together._ JH: Yeah, exactly. _What were some films that made an impact on you when you were a kid?_ JH: The first movie I became obsessed with was _The Jerk_ with Steve Martin. To me _The Jerk_ is a movie about a dumb guy written, acted, and directed by smart guys. I feel like I connected with that. I was into that, and then Eddie Murphy and his _Saturday Night Live_ stuff and _Trading Places_ and _48 Hours_.
_Spinal Tap_ must have come out when I was thirteen or fourteen, and I must have watched it a hundred times. I knew every line. And Woody Allen. I just went back and watched _Bananas_ and _Take the Money and Run_ and those kinds of movies on video. It was silliness, but it was inspired silliness. The stuff that was really deep in my bones was that type of comedy. When I think about the stuff that I've done, a lot of that has seeped through. _When did filmmaking in any small way enter the picture?_ JH: In the spring of my sophomore year of high school, my parents gave me a video camera. It was a huge half-inch video camera. And that was it. I became obsessed. I remember reading in _Film Comment_ about how they did the shots in _Raising Arizona_, and I just became really into it. Then I made a short film called _Ernie_, which was about a meek freshman who became a superhero and sort of beat up the bully, and it was comic and it had some funny camera work in it. We showed it in our school assembly. If it had been a preview screening, it would have tested in the high 90s. I was like, this is great! This is what I want to do! I remember someone coming up to me, this girl who was a year or two older and she was like, "It wasn't just the story, but the way you directed it." So that was pretty much it. _Was_ Ernie _about any kind of wish-fulfillment thing for you?_ JH: Definitely. I was pretty skinny as a kid. I think there're a lot of moviemakers who do movies about a sort of wimpy guy who beats up the big guy. You don't see a lot of jocks being movie directors. The movies I've done have always been about some guy who is a decent guy at heart just trying to make sense of the world around him and trying to overcome some type of adversity. It started with that movie. _What did you study at Brown?_ JH: I was a history major, but that was because it took the fewest requirements of any major. In my sophomore year I started getting back into making movies.
My senior year I took playwriting classes, and I feel like that's where I learned how to be a writer. I started to write monologues, and that was great because that got me really deeply into characters. _In looking back, was there a sense that this was going to be a snap? Were you naive at the time?_ JH: It was just what I wanted to do, and I just had a feeling that I would be able to do it. I had two close friends in a class and we each were writing comedy, and the teacher sat us down and said, "You know, guys, you're not all going to be able to be writers for _Saturday Night Live_. Maybe you should venture out from comedy." I remember when she said that. Not for a second did I think she was right. Maybe that delusion was healthy. _You had a short film at Sundance. Tell me how you got there._ JH: I made it the first year of NYU grad school. It's called _Tick_. At the time there was this school of high-concept comedies. I wanted to make the anti-version of that, which is just as high concept but then you completely forget about the concept and it's just this small character story. So I wrote this movie about two freelance bomb defusers in a make-believe town where bombs keep going off. It cost, like, two thousand bucks. I screened it for the professors at NYU, and it went over really well. If I look back on it, it was a tightly structured three-act movie. It was eight and a half minutes, but it had a beginning, middle, and an end. And it was a comedy. And at NYU, there weren't a lot of guys doing comedy. There were a lot of movies about homeless people in Washington Square Park. _Because it was different from the norm at NYU, did your classmates enjoy it, or did they think, "This guy has gone commercial already"?_ JH: I remember getting into arguments with people who were like, "Oh, you just care about the audience." And I was like, "Well, I don't just care about the audience, but movies are _for_ an audience." 
You sit down in a theater and watch them with an audience and you want to provoke some type of reaction. People say, "Oh, you're just being manipulative. You're manipulating people into laughing." To me, moviemaking is manipulative. Whether you're making _Breaking the Waves_ or _Zoolander_ , you're manipulating people by the choice of shots, by the actors, by the music. I never felt like I was selling out. I was just writing what came to my mind. _And what was the biggest thing to come out of Sundance? Was it getting financing for_ Safe Men _?_ JH: Yeah. By the time I got there I had written a feature, _Safe Men_ , that I had wanted to make. Endeavor read it and they called me and said, "All right, we're going to sell this script for a million dollars." I was like, "OK." I mean, I wanted to direct it but I literally did not know what was going on. At the same time I had decided to leave NYU. I just felt like I wanted to make features, write features, and direct. So I left NYU. They sent the script out as a spec script to everybody in town and everybody passed. They went from being like, "You're going to be a millionaire," to two days later, "Everybody passed." But I wasn't really depressed, because I wanted to direct it. And two of the producers I had met at Sundance found the money quite quickly for _Safe Men_. Safe Men _had a million-dollar budget?_ JH: That's right. At that time, those types of movies were a million. You kind of get that million and then figure out how to spend it. _A million dollars was obviously a lot more than you'd ever worked with before._ JH: My last movie was three thousand dollars. _How did you feel walking onto a set with that kind of budget?_ JH: There is a feeling I've had on the two movies I've directed of arriving on the set the first day and there is just a machine that exists. You don't quite know how these trucks got there, how this city was created. _Like a military campaign has been erected for your cause._ JH: Exactly. 
Driving up the first day, there were campers and you just go, "How did this happen?" We had scouted a location but I hadn't visited it the night before, and suddenly it looks the way we talked about it looking. That feeling is hard to beat. Definitely when I stepped on the set and called action, there was a moment where I was like, "Am I playing the role of the movie director?" _How were you working with actors at first?_ JH: I felt pretty comfortable from the beginning with actors. I think there's something when you've written the piece and people like the script where you feel like they give you a level of respect. Maybe you can't say exactly what you want in precisely the way Lee Strasberg would tell you to say it, but you can still go, "I'm feeling like this is not right." Safe Men _was released on only twenty screens in August of 1998 by October Films._ JH: They dumped it. I think October had just been bought by Universal. They were trying to turn them into Miramax, but that wasn't ever going to work. They were the _Secrets & Lies_ and _Breaking the Waves_ guys. No one did any publicity. _That must have really hurt at the time._ JH: In truth I couldn't wait for the movie to be over. They hired a guy named Bud Smith to recut it. He was a sort of Universal edit hatchet guy. He had no idea how to cut a comedy, and he just butchered it. We had to yell and scream to get it back to a version I could be happy with. I did a lot of reflecting. I was like, "The next time I do this, I want to have more control, be more involved," and I also wanted to build up a little more cachet before I directed another movie again. And I felt like the way to do that was through screenwriting. _How did you meet Ben Stiller?_ JH: _Safe Men_ was at the Nantucket Film Festival. We were on a panel together. We met right before the panel for a minute but he was like, "Maybe we can hang out in New York sometime." And then during the panel Ben was like, "Have you guys seen _Safe Men_? 
Because it is one of the funniest movies you will ever see." _Thus a friendship was born._ JH: Exactly. Anytime somebody tells me they like my work, they're pretty much in. We never intended to make all these movies together. It just happened. We're very different people, but we find similar things funny. We like more low-key things where you're not trying so hard to get a laugh. You're more playing the character and letting the character get the laugh. I think we both have a real appreciation for a lot of similar actors, like supporting actors that are a little off the beaten path. Ben always loves Sam Rockwell. They'll work together someday because he just loves that kind of offbeat thing. Ben champions those kinds of people. Owen Wilson was one of those guys. Ben saw _Bottle Rocket_ and put him in _Cable Guy_ , and everything happened from there with those two guys. _What was your main contribution to_ Zoolander _?_ JH: I think overall the thing I'm proudest of in it is the work I did on the Hansel/Derek relationship and trying to make that emotionally involving. There are things that I wrote in that movie that were sort of absurd that I couldn't get away with in _Meet the Parents_ or _Safe Men_ or anything I'd done—silly things like Derek calling a eulogy a "eugugoly" or picking up the scale model of the reading center and thinking that it's the real one. It was like, "Can we make this guy that stupid but have you also care about him?" I think the reason it's done quite well on DVD and the cult seems to be building up is because there is that emotional core to it. There's a real sweetness to characters like Derek and Hansel. Zoolander _was one of the first films to be released after 9/11. Did that event inform your work? You're a New Yorker._ JH: I've thought about it a lot. Literally the best view from my office was of the Twin Towers. I live twenty blocks away from where the World Trade Center was. 
I was in the middle of writing what became _Along Came Polly_. I definitely remember going, "Man, what I do is so pointless." I remember also going, "I have to go to my office and write funny stuff" and just not feeling funny at all. _Zoolander_ was the first comedy to come out after 9/11. A couple of people told me it was a real relief to go and be able to laugh at something. We can acknowledge that tragedy has happened and the world has changed, but you can still laugh. You're allowed to laugh. The reason I think there will always be movies is there's a need for that communal experience. Now we're in a world where people are creating their own home theaters. In spite of that I still think there's a need, because people love sharing an experience. And with comedies you can _hear_ the communal experience. I read a review of _Along Came Polly_ where the critic was saying she went on a Friday night and the audience was into this movie and laughing together and she was like, "Maybe that was part of the reason I enjoyed it so much." That made me happy, because that communal experience is a necessary human thing—to laugh together. _How did you get involved with_ Meet the Parents _? The initial screenplay was written by Jim Herzfeld._ JH: They were feeling that _Meet the Parents_ needed some work. It was originally written for Jim Carrey and Bill Murray. It was going to be a different kind of thing. Ben told Jay Roach and Jane Rosenthal, "There's this young guy in New York who made this cool little movie; he did a really good job on the first draft of _Zoolander_ —you should meet him." I went to a meeting with them and had some ideas about it and how I thought it could be better. For some reason I felt like I had a connection, because I understand familial relationships and being Jewish and dating a non-Jewish woman. 
_Did any of those initial ideas make it into the film?_ JH: One particular throughline: I felt the movie should be more about the private war between De Niro and Stiller. They should have a lot of conflict, and the family shouldn't know quite what's going on. And that throughline I think was a big one in the movie. _One contribution of yours was "the circle of trust" concept, and the phrase itself._ JH: Yes. I made it up. I remember sitting with De Niro and he was talking about CIA agents. He knew some agents because he has a CIA movie he's been working on. And I remember listening to De Niro, and I couldn't quite understand what he was talking about. I got the idea of it, but the specifics were a bit cloudy. I remember thinking that this character would use a term that sounds like you understand what it means but if you really look into it, what does it mean? And that became "the circle of trust." When he says it to Ben, I could picture Ben's face nodding but not really understanding what the hell [De Niro] was talking about. It echoed my reaction to De Niro. _It's an unusually talky film for a successful mainstream comedy. There are some comic set pieces, but there's also a lot of sitting around tables and in living rooms._ JH: It's that thing I really love about movies—which is that there really are no rules. If it's good, you'll just sit there and watch it. Part of that movie is you have to put the time in, because you put yourself in Ben's shoes. And part of the identification is sitting there for an excruciatingly long time. The scene that probably embodies that most is the dinner-table scene. That scene is eight or nine minutes. In the script it's probably ten pages. You would never do that in a comedy. I think none of us expected that scene to end up being as long as it ended up being in the final movie. In _Safe Men_ , Paul Giamatti has a long monologue in the bar, and at the end he finishes and [he and Sam Rockwell] just stare at each other. 
If I had final cut, it would have gone on for ten more seconds. I think it got funnier. It got weird, and then it got funnier. Those silences are great because that shit happens in life all the time. _Where did the idea for_ Along Came Polly _begin?_ JH: I had finished _Meet the Parents_ and was taking my first adult vacation with my girlfriend. There was this beautiful, handsome naked French guy walking on the beach, and I of course felt very clothed and inadequate compared to him. When I got back from vacation, I didn't know what movie I was going to write. I was thinking about that vacation, and I had seen movies where the girl gets dumped. I was also thinking about my life and the lives of my friends—how once you hit thirty, you're at that age where you're planning things out. I thought, "What would be interesting about that type of guy who plans everything out down to a tee and the rug gets pulled out from under him and he has to rethink his plan?" _Can you point to anything in_ Along Came Polly _as evidence of your evolution as a director?_ JH: I'm just trying to tell a story. There's a lot of subtle stuff that I think we're trying to do to make it look effortless and try to make you forget that you're watching a movie and just focus on the characters. I'm proud of that. If you watch it a few times, you'll see how specific the production design is of Reuben's apartment, which looks very different from Polly's apartment. There are technical things I'm proud of. Reuben's big dance routine—that took a lot technically, with extras, tons of camera angles, Steadicams, lighting, slow motion, and I'm happy with the way that turned out in the movie. I'm proud of the racquetball scene, which is just a very involved scene that wasn't working in the beginning and then we decided that they'd be playing but we'd take the ball away. You can't tell, because of the sound. There's one shot where we digitally put a ball in. 
I'm proud of something like that as a director because you go to the set and you're like, "This isn't working, the clock is ticking," then we all have to think on our feet. _What filmmakers do you admire?_ JH: When James L. Brooks does a movie, I'll go see it. Some people didn't love _As Good As It Gets_. I loved it. There's a reason that movie did as well as it did—because it connected with people. I sat in the movie theater, and the whole audience was rapt the whole time. I read a review in _Variety_ of _As Good As It Gets_ , and the review literally was as if the movie was an unmitigated disaster. I remember the review saying "destined to make no money" or something like that. And you watched it with an audience and it was clear something was working. _Do you see any common denominators among your peers today?_ JH: I honestly think it's too early to define. If you look at the people, they always cite, like, Wes Anderson or Alexander Payne . . . I don't know yet what defines their stuff. There's definitely a winking. Those guys aren't making films like _Shampoo_ which are just simple slices of life. They're letting you know who the filmmaker is. There's definitely a postmodern feel to those movies. _Your films are a little bit different._ Along Came Polly _in many ways is a return to a classic romantic comedy. So where do you fit in there?_ JH: When I started getting into making movies, I was more in the Coen brothers' school of really out-there characters and crazy camera angles. I think that's a more self-conscious school of moviemaking. And then I started to think, "What movies do I really love, that really stay with me?" I started to think I just want to tell stories about characters. The movies I love are movies like _Broadcast News_ , where those people live with you, not the tracking shot. 
So a guy like me you can say, "Oh, he doesn't show himself in his movies," but I think audiences connect with the stuff because they're just seeing something they can relate to. _So there's something to be said for someone who can pull back and tell a story and have an audience forget he's directing this movie?_ JH: Far and away, the best example of that is Billy Wilder. It took a while for people to realize what a great director he was. He never shows himself, yet he really does. That's what I think the dream is. A movie like _The Apartment_ is beautifully directed, but you can't put your finger on why it's such a good movie. It just is, because of the characters and story and all these things that he's subtly doing. That level of filmmaking is something I aspire to. _On the other end of that spectrum is someone like Tarantino, who is so present in every shot of his films. At the time of_ Safe Men _'s release, the dialogue in it was sometimes called Tarantino-esque in the reviews._ JH: It wasn't Quentin Tarantino that I was trying to copy. It was just that my world was a world of pop culture, and that was similar to Quentin Tarantino's. Like Tarantino, I grew up watching TV and movies and having action figures. So when I was writing _Safe Men_ , I was not thinking of Quentin Tarantino, I was thinking of what my friends and I argued about, like _Charlie's Angels_ and who was the hottest angel and things like that. _Perhaps that is one of the common denominators for this generation? Pop culture is such a part of our consciousness that it can't help but come out in the work._ JH: It's who we are. You look at _Bottle Rocket_ , and it's clear that there are references to heist movies. He's deconstructing that kind of thing. I just think we grew up on it, so we incorporate that stuff into our lives. There wasn't syndication before our generation, so we grew up watching every episode of _Happy Days_ and that's going to creep into what you think about. 
_Not to mention the explosion of VHS in the eighties. Everyone could be a cinephile._ JH: You could watch these movies over and over again. Quentin is the ultimate embodiment of a guy for whom movies and TV were his reference, and not really literature. It's a combination of him being first and being brilliant. Safe Men _, while it's become a cult film, got mixed reviews._ Along Came Polly _got mixed reviews._ JH: I don't read them. They weren't for the most part positive. _Do you think comedies are generally not given the respect they deserve by critics?_ JH: I do think there is a thing where critics think it's very easy to make an entire audience laugh. In the grander scheme of things, they think _Cold Mountain_ is much more difficult to do. Philip Seymour Hoffman and I were doing a press conference together for _Along Came Polly_ and the interviewers were like, "Is this just an easy time after _Cold Mountain_?" He's like, "It was harder than _Cold Mountain_." I try to take a Zen attitude about it. I care about how I feel and what the audience thinks. We had a critics' screening for _Along Came Polly_ and the entire audience was laughing the whole time, and there's a row of critics in the back, not smiling and writing into their pads. I just think they watch movies differently sometimes than the audience. If I had listened to the critics, I would have given up after _Safe Men_. 
THE DIRECTOR'S TAKE

JOHN HAMBURG

_What is the first film you ever saw?_
_Escape from Witch Mountain_

_What is your favorite film of all time?_
_Annie Hall_

_What's your favorite line in a film?_
"Make sure that everyone sees the cake before we cut it."—Hyman Roth in _The Godfather, Part II_

_What movie made you realize that film was an art?_
_Raising Arizona_

_What movie do you consider your guilty pleasure?_
_Runaway Bride_

_Who is your favorite movie character of all time?_
Broadway Danny Rose

_What's your favorite movie snack food?_
Goldenberg's Peanut Chews

_Who is your favorite director of all time?_
Woody Allen, Billy Wilder, Martin Scorsese, Joel Coen

_What quality do the best directors share?_
Confidence and openness

_Who is your favorite actor or actress of today?_
I want to work with too many people, so answering this could come back to haunt me.

_Who would you cast as yourself in a film about your life?_
Dana Carvey

_If you could remake one movie, what would it be?_
Truffaut's _Day for Night_

_What is your best quality as a director?_
A desire for world-class craft service on the set.

_What is your greatest weakness as a director?_
I have a tendency to wear the same outfit every day of production.

_Finish this sentence: I'll never direct a movie about . . ._
a wayward youth and the eccentric but wise homeless man who teaches him the meaning of life.

_Finish this sentence: The perfect movie is . . ._
uninteresting. Great movies need to be a little messy.

_What will they be saying about your work fifty years from now?_
"What's the deal with all the awkward hugs?"

_What piece of advice do you have for aspiring filmmakers?_
Do what you want, not what you think you should want.

_What are you as passionate about as moviemaking?_
Family. Friends. Televised golf.

**PATTY JENKINS**

"I wish I could make bucketloads of money . . . but more than anything, I realize the only thing that makes that kind of work worth it is to be engaged."
**SELECTED CREDITS**

_Monster_ (2003)-writer/director

The first line of the very first feature film Patty Jenkins ever directed is "I always wanted to be in the movies." And so Jenkins is today, after an aborted attempt to be an artist and no less than six years as a camera operator. Character studies like _Monster_ can become dominated by their central performance. And so it was with the story of serial killer Aileen Wuornos when star Charlize Theron received scores of accolades while Jenkins found herself often overlooked. But the powerful filmmaking by the writer/director was not lost on everyone. Roger Ebert, for one, called it the best film of the year. Jenkins says she is something of an emotional filmmaker and, to look at her first big-screen effort, it's not a surprise. Few filmmakers, man or woman, could create such a full-bodied portrait of a human being who was so seemingly beyond the pale of our understanding. It is the fine line between empathy and sympathy that Jenkins navigated so well that makes her future efforts so eagerly anticipated. _Where were you born?_ PJ: It's sort of a misleading answer. I was born on an Air Force base in Victorville, California. But I was only there for like a week. _Your dad was a pilot?_ PJ: Yeah, my dad was a pilot. _You traveled around a lot, from what I've read._ PJ: I lived in a different place every couple months or year up until I was five or six, and I lived in Kansas while my mother went through school. I moved during high school to Washington, D.C. And then to New York. _Was it in Kansas that you first started getting into the arts?_ PJ: Yeah. It was somewhere around junior high school. I don't remember making any conscious choice about it. It was just kind of automatic that all my extracurricular things were art, photography, and drama. And I was really into music. I remember watching movies all the time, but it never really occurred to me at all to be a filmmaker.
It was kind of like you could be a fine artist or a graphic artist, so then I put my sights on being a fine artist. I couldn't act and I didn't want to play music, so . . . _And filmmaking wasn't something you could even conceive of as a career?_ PJ: It's funny. I've found notes that I wrote when I was in high school that were like movie ideas. But it was actually something that was pure fun to me at the time. It was like, "This is something absolutely impossible that real people don't do but wouldn't this be a cool movie?" _Do you remember some of the ideas that you had back then?_ PJ: There was this one idea that sort of stuck around for a long time, about these kids in the hardcore scene—because I was in the punk rock scene in high school and junior high, so it was about this group of people that I knew. A lot of this stuff strangely went on to influence _Monster_. It was me trying to figure out how to explain some of these really damaged lives that I had encountered. _Can you tell me a bit about your first moviegoing experience?_ PJ: My first few really vivid memories are seeing films that I didn't understand at all. I think it was Jonathan Miller's _Alice in Wonderland_. I have these really strong memories of seeing it when I was incredibly young and being mesmerized by the funny feeling that it gave me and not knowing at all what it was about. I was drawn to strange movies like that, even though I couldn't understand them. I particularly remember Hal Ashby's movies. It's funny—now I look back and I'm sure I couldn't have really understood what they were about. I remember them being such a comfortable place to be as a little kid. I liked watching them. And then I remember coming out of _Reds_. I must have been eight or nine, and I remember sitting in the backseat driving home and feeling something very complex and looking out the window, thinking about what an interesting complex feeling I had. It wasn't joy. It wasn't happiness. 
It was a kind of ponderance of life. And that has always really stuck with me. _What were you passionate about as a kid?_ PJ: I was into photography and music probably more than anything. I still think probably music is my real love. I think that it's listening to music that made me want to be a filmmaker. When I was in film school, the second I sat down at a Steenbeck with film that I had shot and put in music to it, that was it. Instantaneously I couldn't get enough of it, and it had nothing to do with career or want or desire. I liked to sit there and do that twenty-four hours a day. (laughs) I was like, "I can't stop doing this. This is it." _By marrying music to images it seemed like you were getting at something close to how_ Reds _made you feel?_ PJ: Absolutely. That was it. I was mesmerized. I'm a very emotional person. And it's interesting that I went into the visual arts, because I don't actually think I'm particularly visually talented or interested in visual stuff. _You don't consider yourself a visual filmmaker?_ PJ: No, I totally don't. I feel very educated in visuals. I just don't feel interested in them. There are people like David Fincher who just paint magic with visuals. I don't have a zing at it, you know? I come from a totally emotional place. _I know you have a fascination with filmmaking of the 1970s. What is it about that era for you?_ PJ: That's what our generation grew up watching. The subject matter that they took on was fascinating and they didn't meddle and show their hands so much. They kind of stayed out of the way. And the pacing was incredibly confident and comfortable. It seems like the filmmakers who grew up watching films in the fifties went on to make the films in the seventies. Both of those two periods are probably the most evocative, important periods to me. They're both very emotional, very complex, and very character-driven. And both of them have such great films that kind of delve into the underbellies of life. 
_How did you end up at Cooper Union?_ PJ: It was a school that a lot of people in the country may have never heard of, but our art program in Kansas knew about it. And it was free if you got in. And it was incredibly hard to get in. So from a very young age I became sort of obsessed with it. I wanted to get to New York. I wanted to get out of the Midwest, to be sure. It was my singular focus. It was the only place I wanted to go. _Where was your head at when you were at Cooper Union? It's not a film school. Were you seriously considering film yet?_ PJ: I was thinking that I wanted to be a filmmaker. I didn't know anything about film or how to get into it. And for some reason my instinct was, I didn't want to transfer. I didn't want to go to a film school. I just started making short films. I remember somebody at NYU coming and watching my films and making the joke that it was like I had made a film doing absolutely everything you weren't supposed to do. And that was true. (laughs) How did I learn about jumping the line? I literally reinvented the wheel on every single issue. _Did it seem like a viable career to you at this point?_ PJ: I wasn't thinking it was viable or not. Painting was something I had trained myself to do to get to go to Cooper, and filmmaking was something I couldn't stop myself from being fascinated with. I decided definitely that I wanted to be a filmmaker, and I also decided at that point that I wanted to be a commercial filmmaker, because it was the emotions I was getting so engaged by. The whole point was bringing those emotions to a large audience. So experimental filmmaking was kind of out the window. I decided I wanted to work in commercials for some reason, just because I knew it was kind of the furthest thing from actually what I wanted to do, but it would teach me technique. _You spent a few years working as a camera person._ PJ: Yeah. Six years. 
_Were you frustrated or satisfied with your work during that time span?_ PJ: The first few years I was pretty happy. I was really excited I was learning so much. I was seeing that this was actually a real world where people could work, and that was thrilling. And I was finding myself in some really cool, high-end commercials. It was really some fun stuff going on. Shooting a basketball game one day and being on top of a Volvo, driving around, the next day. I'd say two years in or so, I started to feel anxious about where I was going and what I was doing and whether I was doing enough. About five years in, I was saying, "OK, I need to do something I'm not." I progressively started to feel more frustrated with my inability to write a feature in my free time. _How did the American Film Institute program emerge as something you were going to pursue?_ PJ: AFI, just like the Cooper thing, was just something I heard about. I was feeling more and more anxious all the time. I was not at all becoming more interested in becoming a cinematographer or being a crew person for the rest of my life. I sort of knew that I had gotten into this as an education to become a filmmaker and that felt like it was slipping away unless I did something. Having worked on commercials for six years, I didn't want to go to a film school and learn how to do camera work. I knew I wanted to go to write and direct, and that was it. Velocity Rules _was one of the films you made there?_ PJ: That was my last film, my thesis film. _What was it about?_ PJ: When I looked around at short films that won festivals, they reminded me of commercials. I don't feel talented at that at all. I'd sort of decided that I wasn't sure how likely it was that even if you won Sundance in short film it was going to be anything tangible. So I sort of decided to just say "fuck it" and just try to make mini-features—four of them, in very different styles. 
The last one was a campy, poppy, action short about a housewife who discovers she's a superhero. It was kind of like on that wave of superhero debunking that was going on. _How was it received?_ PJ: It was OK. I wasn't all that happy with it. (laughs) It was too ambitious for a short film. It's too long and it's too much, but I love and adore absolutely everything that it did to prepare me for turning around and doing _Monster_. _Tell me about the beginnings of_ Monster _way back when you first followed the Aileen Wuornos case._ PJ: Her story broke when I was in college, and I followed it. I watched it on the news and was left incredibly curious about her because to me she didn't fit. Not only the mold of other serial killers, but she didn't fit the mold of how she was being described to me at all. She would be categorically kind of defined as a crazy, man-hating person. It was [described as] this big mystery and I was like, "Well, it doesn't really look like that big of a mystery to me." She looks like someone who was beaten down for a really long time and lost it. I found her story really disturbing and unexplained. And then years later when Nick Broomfield's documentary came out, I went and saw it and I thought that was really good, but it didn't answer what had happened before the crime. _And that was the unanswered question for you?_ PJ: That was the unanswered question. She always looked to me like a quintessential war story—a normal person put in insane circumstances who becomes like a feral animal, a killing machine. I was curious what it takes to make a normal person get there. And I was disturbed. I think it was 1995 that I wrote down a note about a _Raging Bull_-style character film about Aileen Wuornos. Then, that wasn't a genre that was a popular idea at all, and it seemed like a depressing idea. So I never ever was going to pursue that. _When was the first time it reemerged as a potentially viable idea for you?_ PJ: At the AFI fest.
I sat down across from Brad Wyman, my now producing partner, and it was like a two-second conversation. It was the most random thing. He said that he had worked with someone who had just done a movie about Ted Bundy. And I said, "Oh, that's weird; I know about Ted Bundy, and I used to read true crime." And he was like, "They're doing all these serial-killer movies." And I said, "I thought of doing a movie about Aileen Wuornos," and he said, "You should do it. You can get it made in two seconds." I was just kind of like, "Fuck it, let me go see what's going on here." And then it's like the door kept opening and seeming more and more viable, and once I started to look into it, I wrote a letter to Aileen. And suddenly she was writing me back and I was completely sucked in. And then it just ratcheted way up, really quickly. And I suddenly realized, "You better not go forward with this unless you're really positive your partners are making the same movie you are, because I have no desire to make a lesbian exploitation film." I decided I couldn't accept any money unless it was a deal that I felt was structured to at least give me a shot to succeed in making the kind of movie that I wanted to. _What did you write to Aileen initially?_ PJ: I was very up-front. I told her everything I was thinking. And it wasn't like I won her over, either. She responded like, "Yeah, get in line; you're part of the media. You're gonna rip me off anyway so here's what I want." She had no reason to trust people, and she pretty much never did. _What was most of the correspondence about?_ PJ: All of our correspondence was about meeting. Her execution was not scheduled at the time. We assumed that would be two years in the future. Then out of nowhere her execution was scheduled and she was executed a month later, and we were already in preproduction at that point. And so the plans that we'd had for Charlize and I to go down and meet her went out the window. 
I always tried to be very up-front with her and very clear about how I felt and that I was not trying to screw her over, that I was trying to protect her story. I was never sure that she believed me. It seemed like she didn't. But then the night before she was executed she left us seven thousand of her personal letters. And that's for no money. At the end of the day, for nothing, she left me everything, which seemed so quintessentially her, that even though she was a mistrustful, damaged person, she was endlessly optimistic that one day it would work out. _You passed on some initial interest in the film until you'd written the script. You wanted to ensure the film that was made would be the kind of story that was in your head?_ PJ: Right. And I said, "I'm going to direct it." I've never ever had more confidence about demanding that I was going to be a director just for moral reasons. I was, "Either I'm making this movie because I'm now the person who's making these promises to Aileen, or nobody's making this movie." _How long did it take to write the script?_ PJ: It took me seven weeks. I just locked myself up and I did nothing but write for seven weeks. I didn't leave my house. I didn't talk to people. I didn't go on e-mail. I didn't do anything. I was like, "I'm sitting in this room until this script is done." And that was the only way that I got it done. _With the script complete, how did casting get under way?_ PJ: We were looking at name actors, but not A-list actors at all. And it was quite possibly going to go straight to video. I was obsessed with who was going to play it, totally obsessed. Charlize was the person I wanted since the first week of writing it. I remember saying it in the beginning of casting, and everybody was just like, "That's nice, you're on crack." It was close to happening with a lot of actors until we got Charlize to say yes, because the script got pretty popular in town.
It was a much more sought-after script than I ever expected it would be, and so the level of powerful talent rose really rapidly. And this whole thing, by the way, happens in a matter of four weeks. Like, from the moment I got done with the script, it was four weeks later we had Charlize, and a month later we're shooting. _This all must have been quite surreal for you, for it to happen so quickly._ PJ: Well, here's the other funny thing that I've never talked about in interviews but really the more that I think about it is kind of relevant to that question: I'm a speed skater, and I speed-skate outside here in Griffith Park. A week after I was done with the script, I got hit by a car going fifty-five miles an hour while I was skating. It was a horrific accident that I should completely be dead from. But for some unknown reason I didn't even break any bones. I went through the windshield and was in the hospital and on painkillers, like Vicodin, at home when Brad [Wyman] called me and said, "We're going to do this movie." So I was high on Vicodin when this whole thing first started happening. It's so funny. Truly, Brad and I have talked about this many times; it was my life before that accident, and the life after the accident. (laughs) _Like this has all been a dream since you had the accident._ PJ: Completely. I've always joked that I actually died in that accident. And that this is all my fantasy of what happened. The whole thing was so surreal. I was so distracted by my accident. It's like I'd ripped all my skin off my face, my arms, my back—everything. Suddenly I was trying to deal with my physical body and what was going on, which is the first time in twelve years that anything had superseded my directing career. It was like, "Oh, my God, am I going to live? Do I have a serious head injury? Am I going to die in my sleep?" They were like, "You're making a movie," and I was like, "Uh-huh. OK, well, just call me later. Tell me where I need to be."
I never really got that dramatic, jump-up-and-down moment. My life just went from recovering from the accident to walking around in big hats trying to protect my skin, sitting with twenty famous actresses a day in meetings. Once we got Charlize, that was really it for me. Because that was the day I realized there was a shot I was actually not only going to make the movie, but I was going to make the movie I wanted to make. And I had this powerful actress now not only attached but also producing. _How were you feeling during the shoot? Were you nervous?_ PJ: Yes, I was definitely nervous—desperately nervous from beginning to end of the entire movie, because I wanted it to be great. The whole time, all the time, I was like, "I don't know how this is going to happen, but oh well. Here we go!" And that continued until the very end. _The roller rink scene seems to me to recall that moment early on in your career when you realized what you wanted to do—to marry music to images. It just works._ PJ: That scene by far is my pride and joy. There are lots of parts in the movie that I really like, but that's definitely my favorite scene. I wrote the script all the way through and in the first act, I just wasn't feeling it. I wasn't actually feeling them fall in love. And the thing was, it gets so dark and violent later. That scene actually came out of a very frustrated night with me arguing with myself and saying, "You know, serial killer, lesbian, whatever; this is bullshit. You know what it's like when people fall in love." And it was just one of those great lucky things that I just was listening to music and I suddenly wanted them to be in a roller-skating rink and then I pulled up "Don't Stop Believin'" and I wrote that scene. It was just one of those things that happen sometimes, where you just see it happen in front of you and you just feel it completely. And it just comes out. The scene never changed from that half hour or hour it took me to write it.
_If that scene worked, what didn't in the film?_ PJ: First, everything leading up to the roller-skating rink, which is a lot. (laughs) The whole opening of the film didn't work out the way that I wanted it to. Our first week of shooting was pretty disastrous. We fell way behind schedule, and everybody was kind of getting up to speed and finding themselves. I was the most upset about the opening scene of the two of them [Aileen and Selby] meeting in the bar. That was a scene that on the page I really liked. That scene did not live up to what I hoped for it. _Charlize's performance got such an amazing amount of press and awards. Did you ever feel that your contribution was being ignored?_ PJ: It's funny. I mean, I don't want to sound like the grand master behind this or something, but the interesting thing about my relationship with her performance was that the design of the success of the movie for me _was_ that performance. My resolution before we made this movie was that whoever played this was going to be so good, they'd get nominated for an Oscar. I knew that the only shot such a dark movie like this had was for that performance to be incredible. And I remember being stunned when there started to be award shows, and people would be like, "You're not sitting in the award show; you're not the one that was nominated." And I was like, "I'm not?" There were definitely moments. People would shove me out of the way and tell Charlize how her dialogue was so incredible and I was kind of like, "OK, now you're crossing the line!" (laughs) _Do you ever feel like you're working in a man's world? Or is it something that doesn't even occur to you?_ PJ: It occurs to me, but I don't feel like that. I'm so uninterested in the subject. It's not out of a lack of appreciation for it. I've just talked way too much about it. And my reaction to that was to go, "Oh, fuck it, I don't even care," and just kind of do whatever I want. It's tricky. 
I don't want to take away from other women who feel oppressed. But for me I feel that it can be a great benefit as much as it can be a problem. There are things about it that piss me off and can be irritating and exhausting. There are also places that I feel that I can get to and observe, and trust that I can establish that I think is probably easier for me than a man. And so I try to just look at it that way. _Do you feel a need to strike now while the iron is still hot and get that second film out of the way, or is it more about making sure that second film is worthy of following the first?_ PJ: Well, I think it depends on what you want. I have my little panic attacks in the middle of the night like, "What if I've got to get it done right now?" But the truth is, my priority number one is that I got into filmmaking because I would sit there and see those scenes in front of me. And at the end of the day, making _Monster_ was unbelievably hard, as making any movie is. And the only thing that made it worth it is not those awards and all those kind of things that I can barely remember because I was so overwhelmed. It was really that night in the editing room, that day on set. It was those things. So yes, I wish it could happen right away and I wish I could make bucketloads of money and all of those things. But more than anything, I realize the only thing that makes that kind of work worth it is to be engaged. And so that's my priority. I do care about success and all of those things. But I don't care enough to do movies just for that reason.

THE DIRECTOR'S TAKE
PATTY JENKINS

_What is the first film you ever saw?_ _Pippi Longstocking_

_What is your favorite film of all time?_ _Pippi Longstocking_. Just kidding. Probably _A Face in the Crowd_ or _The Shining_

_What movie made you realize that film was an art?_ I think it always seemed that way to me. Probably films like _The Red Balloon_ made that impression when I was very young.
_What movie do you consider your guilty pleasure?_ _Poltergeist_

_Who is your favorite movie character of all time?_ Chopper

_What's your favorite movie snack food?_ Twizzlers/Red Vines

_Who is your favorite director of all time?_ Elia Kazan

_Who is the most impressive filmmaker working today?_ The Coen brothers

_What quality do the best directors share?_ Attention to detail and humanity

_Who is your favorite actor or actress of all time?_ So cliché, but true: Marlon Brando

_Who is your favorite actor or actress of today?_ Warren Beatty

_Who would you cast as yourself in a film about your life?_ Charlize Theron, though I'd have to ugly her up again

_If you could remake one movie, what would it be?_ _The Misfits_

_What is your best quality as a director?_ Tenacity

_What is your greatest weakness as a director?_ Delegating certain aspects of the process to other people completely

_Finish this sentence: I'll never direct a movie about . . ._ a jewelry heist. No, truly, if the character's story is good, anything is possible. So I guess "pure action" is the answer.

_Finish this sentence: The perfect movie is . . ._ an experiential journey through a story well told.

_What will they be saying about your work fifty years from now?_ I don't know, but I would like it to be that I succeeded at the above.

_What piece of advice do you have for aspiring filmmakers?_ To embrace how truly hard it is and always will be, to be really honest with yourself about whether that's really what you want to do with your life, and then to put one foot in front of the other relentlessly. Also to try to keep your eye on the ball about what you want out of it, and not get distracted by things you don't care about.

_What are you as passionate about as moviemaking?_ The people that I care about, and speed-skating.

**RICHARD KELLY** "I was aware that I was probably developing that reputation of being really difficult, but I didn't care!
I thought, 'If it's a good film, everyone's going to forget what an asshole and pain in the ass I was.' "

**SELECTED CREDITS**
_Donnie Darko_ (2001)-writer/director
_Domino_ (2005)-writer
_Southland Tales_ (2006)-writer/director

_Donnie Darko_ was barely a blip on the pop culture radar when it opened in 2001. It earned an inconsequential $500,000 in a brief theatrical run. As writer/director Richard Kelly says, it was an "experiment failed!" But something about this odd blend of sci-fi, social satire, and teen angst hit a nerve and spawned a massive cult following. Kelly was just twenty-six at the time of the film's release. In the intervening years his stature has only grown as his screenwriting career blossomed and the appetite grew for what his mind might conjure up for a second directorial effort. Amazingly, _Donnie Darko_ was his first script. Now it seems the creative valve cannot be shut, with a myriad of projects in development. So much for _Donnie Darko_, the film that bombed. The experiment, in fact, continues.

_What did your parents do?_ RK: My dad started out working for NASA, and then he became a mechanical engineer working in industrial robotics. My mom taught English and Spanish, and then she tutored kids who were not doing well in school. _I would think for a kid growing up, your dad's job seemed pretty glamorous._ RK: Yeah, he worked on the Viking Lander that went to Mars, and then he worked on the top-secret smokeless cigarette project for Philip Morris. _Did your dad's work appeal to you early on as a potential career?_ RK: For a while I really wanted to be an architect and then I thought of trying to be a cartoonist. There were a lot of different kind of outlets I was thinking of pursuing, and ultimately filmmaking ended up being the hybrid of all those things. _You drew a lot as a kid?_ RK: Oh, yeah. I had a pretty big art portfolio. I did a lot of black-and-white illustration.
Pencil illustration was my biggest focus, and I did watercolors and oil paintings. _What sort of movies were you into?_ RK: I definitely went to the movies a lot. I remember _Back to the Future_ and _Aliens_ , _Terminator_ , _E.T._ , Spielberg, Robert Zemeckis, Jim Cameron, and John Hughes. This was in the days long before DVD. No one I knew could afford a Laser Disc player. I remember when you could just start renting movies at video stores. I remember the first movie we rented ever was _Romancing the Stone_. We had this VCR that was top loaded, you know? (laughs) _Do you remember when you started to appreciate the craft behind filmmaking?_ RK: Probably when I saw Oliver Stone's _The Doors_ in 1990. I was fifteen, and I remember a bunch of us were drinking out in the mall parking lot, and I'm with this girl and we went and saw _The Doors_. The Bob Richardson cinematography in that film blew me away. That's when I was like, "This is really an art form." Maybe it was because that film was all about rock 'n' roll and drugs and sex, and it was me discovering the Doors for the first time. _Were you open with your parents early on about your intentions of becoming a filmmaker?_ RK: I didn't really vocalize film, because I didn't want to come across as being a dreamer and have people roll their eyes. I was immediately aware of the fact that saying you want to go to Hollywood to be a film director, people might laugh at that. I just kept it very general. I think I said, "I might be able to get an art scholarship with my portfolio," and I did get one to USC. _Did you have any experience with a camera before school?_ RK: I had no experience with a motion picture camera or even a still camera. I never really held a camera in my life until we started doing super eight film my junior year in college. _What was the first short you put together?_ RK: The first short was called _The Vomiteer_ , and it starred one of my fraternity brothers. 
I joined a fraternity right away, and that's where I got all my friends. To the film-school people I was the frat guy who they looked at with suspicion—as they should have, because my first film was about a guy who can't stop vomiting. It was very serious, kind of arty, but it was absurd. I guess it was my response to a lot of the people I met in film school who I just didn't fit in with. The most cringe-worthy films I saw there were the ones about a homeless man, or a girl contemplating suicide because she was raped by her dad. You were covering your face while watching these films, going, "Oh God, not again. Not more dark, depressing suicide and rape and molestation." It was some dark shit. (laughs) There's no law that says an art film cannot be tremendously entertaining or tremendously funny. To me, that is the essence of a great story—comedy and suspense. If you can successfully create those two things, the rest is gravy. Not that _The Vomiteer_ was any great work of art. (laughs) _Toward the end of school were you feeling confident about your prospects as a director? It must have been competitive._ RK: Very quickly people get beaten down in film school. If their stuff is not well received, it's a rude awakening. It's very painful and it's very personal to a lot of people who come to film school with dreams of being George Lucas or Steven Spielberg. Quickly those dreams are dashed when their film is mocked and laughed at by their peers or not well received at all. So they start dropping like flies in terms of wanting to direct. A lot decide, "I'm going to edit or I'm going to produce, or hell, I'm going to law school." I was one of the ones who was still gung-ho and I wanted to do a big 35-millimeter short film. 
So I thought, "After graduation I'm going to ask my dad for a little bit of money and I'm going to rent a 35-millimeter camera and I'm going to get some of my friends together and we're going to go do this big elaborate sci-fi film," which was called _Visceral Matter_. It was kind of _Mystery Science Theater 3000_-type camp stuff. It was about a mad scientist and a teleportation chamber. I spent the whole summer building this teleportation chamber in my garage in Hermosa Beach. We shot it in late July over a week around the desert and it was a pretty grueling process—a lot of visual effects, computer animation. I was just putting myself through a self-financed graduate school, was the way I looked at this short film. _How much did you spend on the film?_ RK: I ended up spending between forty and fifty thousand dollars. And it's about forty-five minutes long. It's got a lot of digital effects and a pretty elaborate sound design. I really went for it. It gave me great confidence to be on a set and finish something and see it through to the end and understand the process of dealing with a laboratory and dealing with postproduction and working on an Avid. I became very confident in my ability to put together a big elaborate film with a lot of different elements. _It was effectively made to be your calling card?_ RK: Yeah, well, shorts don't get you anywhere unless you have a feature screenplay. So then that's when I set out to write _Donnie Darko_. _Was_ Donnie Darko _the first feature you'd ever written?_ RK: Yeah. _Did it start with the title?_ RK: That and the engine. It started off as a big piece of ice. I remember a folk legend from my hometown, and this does actually happen—chunks of ice, either flushed from a toilet in an airplane or falling from a wing, sometimes crash into houses. That happened somewhere near where I grew up. That somehow became an engine that fell off a plane and that was what I built it around.
Then it became, "I want this to be a piece of social satire and kind of a comic-book portrait of the suburbs as I remember them." I thought, "Well, I can't set this in the present day. It's not going to feel right. I need to do it as a period piece. Let's do 1988, and let's do it right around the election" because I thought a jet engine could also be a satirical rendering of the death of the Reagan era. I thought putting together a comic-book story of a dysfunctional kid and this science-fiction mystery can also uncover the black comedy in the suburbs as I remember them in 1988. _Your depiction of the suburbs rang true for many people._ RK: It's very accurate to how I remember them. I grew up in a very right-wing small town. I didn't know any Democrats growing up. (laughs) Pretty much everyone I knew was very conservative. _Were you more of a silent rebel than Donnie?_ RK: Oh yeah. The only autobiographical thing that happened in the film was that I got in a fight with my health teacher about this sort of new-age curriculum that they were promoting. But I had never mouthed off to the extent that Donnie did. It was more on my wish list than anything. I was on the honor roll. I played soccer and I ran track-and-field and fit the mold of a normal kid, but on the inside I was just kind of a repressed artist maniac trying to get the hell out of this place. _What films were you thinking about when it came to the time-travel aspect of the story?_ RK: I was thinking of a lot of time-travel films, certainly _The Terminator_ and _Twelve Monkeys_ , and clearly _Back to the Future_. I thought there was something very elegant about a well-orchestrated time-travel story that pays off and comes together like a very satisfying puzzle. I wanted to make sure that my puzzle operated on all cylinders. _At any point in the writing of it were you concerned that it wasn't the most commercial story you were telling?_ RK: No. I think it's all about pleasing yourself. 
The biggest mistake a lot of writers make when they're first starting out is, "Oh, I have to try to please the studio executives or I have to try to write for the marketplace." That's where you lose your voice, and your voice is diluted by what you anticipate studio executives responding to. And that's when you are handicapping yourself. I never once thought, "I need to do this because I need to sell this screenplay." I thought, "I'm going to write exactly the movie I want to see." It was all about, "If no one else likes it, fuck 'em." I might be the only person in the world who understands this script and wants to see it made, but that was my attitude and it continues to be my attitude, because I think the purest essence of an artist is ultimately pleasing yourself first. Ultimately, if there's anyone else who likes it, great. If not? Fuck 'em. _How long did it take you to complete the script?_ RK: Four or five weeks, I think. I remember a feeling of enormous satisfaction when I finished it. Not only was it the first feature-length screenplay I had ever written, but I felt like I really had all the pieces that I needed. _With the script done, things happened pretty quickly for you. Through your friend Sean McKittrick, you got it in the hands of CAA agent Beth Swofford, who read it and immediately responded to it._ RK: Beth flipped out for it, and the Tuesday morning after that weekend she read it, I got a call from four agents at CAA. And they all said, "Come in, we want to meet you. We want to be a part of this. We want to help you." And then I got signed by CAA, and that was it. Here I am, twenty-three years old and I've got the most powerful agency in Hollywood saying they want to represent me. It was a huge deal for me! _Did you ever consider selling the script for someone else to direct?_ RK: I was very vocal and very aggressive saying, "I'm directing this, Sean's producing it, and there's no way we will ever sell it and relinquish control." 
We both knew this was our ticket, and Sean had seen so many other scripts get destroyed by the development process that we knew we'd regret it for the rest of our lives if we let this script go. There was almost a year when the script floated around and we met everyone in Hollywood. Everyone said, "Yeah, we like the script but we don't think you can direct this." It was, "Love the script, great writing sample, would you be interested in writing this for us?" They were offering me a lot of teen slasher films to rewrite. _Did you do any of that work?_ RK: I turned it all down, and people thought I was arrogant and probably nuts, but I don't know how to do something unless my heart's in it. The first writing job I took while we were trying to get _Donnie_ made was _Holes_. I was the first screenwriter to adapt it. Here was this whimsical children's novel, and I was telling them I want to make it a little more dark and adult. I think they thought maybe I would make it ten percent darker, and I went and made it about eighty percent darker. I changed everything. I turned it into a postapocalyptic wasteland, and the warden was Robert Duvall and all the kids were eighteen-year-old criminals. I was like, "Man, I'm so proud of this," and I turned it in and they freaked out. They said, "You have to go back and start from scratch, or you're fired." I wrote a fifteen-page memo pleading my case to [producer] Mike Medavoy, and it was a complete disaster. I got fired and my agent was just like, "Dude, you can't do this again." (laughs) So then we got Jason Schwartzman attached to _Donnie Darko_ and all of a sudden all this heat came back on the script and it got resurrected from the dead. And then we heard that Drew Barrymore and her production partner Nancy Juvonen had read it and they wanted to meet us immediately. So we jetted over to the set of _Charlie's Angels_, and we met with them in their trailer and I offered [Drew] the role of the English teacher.
So all of a sudden we had a star. We were able to raise $4.5 million. _How much tension was there for you on the set, considering this was your first feature? Did you have any anxiety?_ RK: No. I'm very confident on a movie set. I know what I'm doing. I feel like that's what I've been waiting my whole life to do. _How about working with the actors?_ RK: I had no idea how to talk to actors. I had to figure that out as I went along. Somehow it worked out. In terms of the actors, inside I was scared to death, but I just put on a straight face and talked to them as simply as possible. It's a very delicate process on your first film because they're looking at this first-time director going, "Does this guy know what he's doing?" You want to make them feel secure, like they're in good hands. _Jake Gyllenhaal says he was mimicking you in his performance of Donnie. Did you know this during production?_ RK: I didn't realize that until we had wrapped. I was literally losing five pounds per week. I was turning into this walking skeleton. I was sort of becoming Donnie Darko! I started channeling Donnie and Jake, playing Donnie, started channeling me. So it was sort of a subconscious thing going on between director and actor that I can't even explain. _Once you got into the editing room, how did you feel about what you had?_ RK: The editing process was not pleasant for me at all, because I realized this movie is just going to run so long and the story will not make sense. To cut this thing down to under two hours and have the story still make sense was going to be a huge undertaking. I became very paranoid and anxiety ridden because I felt the film was going to be taken away from me and cut behind my back by the financiers. I don't care how good your footage or your performances are, if you're a first-time director it's almost like you need to expect that. "You're a first-time director, we're going to re-cut you—fuck you." That's sort of the attitude. 
With each passing week it was, "OK, we're going to bring in this other editor and we're going to show the film to this executive over here at Warner Brothers and get his opinion." The writing was on the wall that the film was on the verge of being taken away from me and that was when I really lost my shit. At some point I got a note to streamline the film and focus more on "Donnie's journey." So I went in with the editors and I cut this butchered version of the film. And I had them go in and we put in a temp title up there in this very obnoxious, curly romance-novel font. And I had them color it hot pink and write DONNIE'S JOURNEY on the main title. That was just like throwing gasoline on the fire. I was just this total smartass, and part of me thinks I shouldn't have done that, but another part of me is proud I did. You cannot be a doormat as a first-time director and let them steamroll over you. _Weren't you concerned about getting a reputation as a difficult director on your first film?_ RK: I was aware that I was probably developing that reputation of being really difficult, but I didn't care! I knew all I had was this film and if it wasn't well received, then I might not ever work again! I kept thinking, "I have got to be a dick here. I've got to fight viciously and yell at people. I'm going to do it because all I have in the end is the critical reception of this film." I thought, "If it's a good film, everyone's going to forget what an asshole and pain in the ass I was." In retrospect maybe I took things too personally, but this was my life at the time and it's all that I had, and I had to defend it. It was a very unpleasant time, probably the most unpleasant time of my life, editing that film. I'm glad that it's over. Now you're making me relive it! _We can move on to a cheerier topic, like the film almost going straight to TV. You were taking the film to the festivals, but it wasn't getting sold?_ RK: We came into Sundance and everybody wanted to fuck us. 
And after Sundance it was like we had an STD and everyone decided they didn't even want to risk fucking us. They wouldn't even give us a blow job. They wouldn't even let me eat them out. (laughs) _But you did get interest from Newmarket?_ RK: They made an offer for $1 million that said "possibility of a theatrical release, not guaranteed, most likely pay cable, Starz network premiere." I brought the guys at Newmarket over to Flower Films and I got Drew Barrymore to put on that sweet, beautiful smile and say, "Guys, you're going to put this in theaters, aren't you?" And they said, "Of course. We were always going to put it in theaters." And it was just like, ugh! Here I am thinking that this movie that I worked so hard on is going to debut on Starz and my career's going to be over. (laughs) _The film presented quite a marketing challenge, I would think._ RK: They planned a Halloween release date, and we all decided the best way to try to market this thing was to tap into more of the thriller element of the story and even gear it toward the image of the rabbit mask. It was a smart decision to make at the time because it's such a difficult movie to market. It's one of those movies you almost need to market with the filmmaker, but I hadn't done anything yet, so they couldn't market it on my name. Everyone did the best they could. And then all of a sudden, September 11th. At that point you're just, like, the last thing anyone wants to have to think about—a very provocative, challenging, disturbing, difficult film like this. I was like, "Just fucking put it out there. Please just put it in theaters so I can not have to live with the fact that this thing just goes straight to pay cable." So everyone decided we're going to stick with this Halloween release date. Literally, like eight movies opened that weekend. It was an awful weekend to open a movie! So the per screen average was like eighteen hundred dollars or something on fifty-something screens. 
Newmarket was like, "Well, fuck it. We've just got to cut our losses." So there literally wasn't an ad come Sunday. The movie was over! Experiment failed! Failure on every level. Everyone was just heartbroken. But you know that was the end of Act One. _So what was the beginning of Act Two?_ RK: It was when the DVD came out, and then all of a sudden in New York it started playing at the Two Boots Pioneer Theater. So people started lining up to see it and the DVD started to really pick up. Each month it outsold the previous month, and these Internet sites started popping up. A year later the film was released in the UK, and it just started to explode. Teenagers connected with it and it just became this thing people wanted to talk about. The audience rescued the film from oblivion. _Did the fact that there were so many interpretations of the film surprise you?_ RK: I was just so excited that people were getting it, even if they were misinterpreting it and saying it's about mental illness or that Cherita Chen is a spy for the Chinese government or Gretchen is really a reincarnation of Donnie's mother and Donnie wants to fuck his mother—all these theories that were completely wrong. I was just glad people were talking about it. The real, accurate version of the story was the one I ended up doing in the director's cut, with the philosophy of time travel and the comic book—the sci-fi story that I'd always intended to tell. _Is there a filmmaker today whose status you'd like to enjoy?_ RK: Yeah. I mean, to be able to have the clout that someone like David Fincher can wield to be able to get _Fight Club_ made for $70 million at 20th Century-Fox. What Fincher pulled off with that film is just such a coup d'état. Just to imagine the look on Rupert Murdoch's face when he saw that film raised a fist for counterculture. That is the dream—to have the control or clout that people like Soderbergh or Fincher have. 
As long as the price is right, they can do everything within the studio corporate umbrella, because they've earned the right to do that. _Is it important to you that someone watching your films recognizes it as "a Richard Kelly film"?_ RK: It's not about the ego of this is "a Richard Kelly film." But at the same time, I write my own scripts and I feel I've earned enough to have that credit just because I do write my own material. It also helps you. It's called showbiz. You've got to get out there and sell yourself. If my name can somehow become something that the public recognizes as here's another weird crazy movie from the guy who did _Donnie Darko_ so we'll go see it. It's the same thing with how they market David Fincher's films or even Soderbergh's films or a lot of Spike Jonze/Charlie Kaufman collaborations. _What do you think will be common to all your work?_ RK: I think someone else is going to have to define that, because all I know is I don't ever want to make the same film twice. But at the same time I always want people to know it's a film I made, simply because there's something recognizable about me in it. I want them to all be personal. I don't ever want to make an impersonal film. It's like, I wrote a movie for Tony Scott that very much has my kind of stamp on it, but at the same time I wrote it for Tony. I'm just grateful to finally also be in production on the next film that I direct, because I don't want there to be a four-year lag between each film. I'll just go nuts because I'm sick of writing and I just want to direct a film hopefully every year. _How did your second film,_ Southland Tales _, begin?_ RK: I wrote the very first draft of _Southland Tales_ when we were trying to sell _Darko_ to a distributor in that hellish five months after Sundance. _Something interesting would have to come out of that rough period for you._ RK: Oh, yeah! I was very angry and frustrated and I wanted to write something that would cheer me up. 
I wanted to write something about L.A., this city that I'd spent the last nine or ten years in. It's evolved significantly, but it's still the basic core story of what I first wrote. I just can't wait to get behind a camera again. I think I'm going to pee in my pants on the first day of shooting, either out of excitement or nervousness. I feel like it's a gift to be able to direct a film and it's been so long and such a struggle to even get _Southland_ off the ground. It's even more elaborate and ambitious and daunting an undertaking than _Donnie Darko_. And the fact that I'm getting to do this at my age and have this level of creative control, I'm grateful for it and I'm never going to take it for granted. _It's been described as a combination of a number of genres. It's part musical, part comedy, et cetera?_ RK: That's basically the best way to describe it. I can't speak in movie marketing terms, and I think sometimes my producers, their eyes roll back in their heads when I say things like that, but that's what it is. It's a hybrid of sci-fi, thriller, comedy, and musical. And what those percentages end up being, I don't know. Maybe it'll end up being ten percent musical, maybe five percent musical. I see them, all four, as being right around twenty-five percent. _Tonally what films will it emulate?_ RK: Well, I'm going to be screening quite a few films. I'll be screening _Dr. Strangelove_, _Network_, _Brazil_, _Blade Runner_, _The Big Lebowski_, _Kiss Me Deadly_, _Heat_, and _Barry Lyndon_. Also _Pulp Fiction_. _What are you hoping the audience will come away with from_ Southland _?_ RK: More than anything I want it to be something completely unexpected. I want to take people by surprise. I am trying to create a really exquisite piece of pop art, with pop actors appearing in a very subversive and dark political allegory.
I am casting faces that you would never expect to see in this kind of film, and therefore it will be subversive on an immediate, aesthetic level. _What music is informing the film?_ RK: Moby has already completed the score to the film. His music is the film's aching heartbeat. _And it's something of a love letter to L.A.?_ RK: It is a love letter to L.A. A really, really, nasty one. _Are you prepared for the inevitability that many of your fans will be disappointed by whatever you follow up_ Darko _with, even if_ Southland _turns out great? How much of that is on your mind?_ RK: None of it. I know I'm not a one-trick pony. I know I have a lot more in me than _Darko_. I know that's only the beginning of what I'm capable of doing. There might be people who never like one of my other films more than _Darko_, but there's not a thing I can do about that. I wish that I could have gotten another film out already, but all the stuff I'd written after _Darko_ was in anticipation of _Darko_ being well received and an immediate success and me thinking, "Oh, OK, I can then get $15 million for my next film." In fact it poured molasses on my feet. The slow reception of this movie made it a lot harder for me to get the next project off the ground, because they're just as messed up and weird and provocative as _Darko_ and, unfortunately, they're more expensive. The slow-burn success of _Darko_ slowed down my career, but it's just made me a better filmmaker and it'll make the second film that much better. I've had the chance to really direct it in my mind, and I think the time will have been put to good use.

THE DIRECTOR'S TAKE
RICHARD KELLY

_What is the first film you ever saw?_ I don't remember.
_What is your favorite film of all time?_ _2001: A Space Odyssey_
_What movie made you realize that film was an art?_ _Brazil_
_What movie do you consider your guilty pleasure?_ I don't feel guilty enjoying any film.
_Who is your favorite movie character of all time?_ Probably "The Dude" from _The Big Lebowski_
_What's your favorite movie snack food?_ Popcorn
_Who is your favorite director of all time?_ Stanley Kubrick
_What quality do the best directors share?_ I don't know. Having an obsessive eye for detail?
_Who would you cast as yourself in a film about your life?_ I would never allow the film to be made. And if they made it without my consent, I would show up on set with a shotgun and threaten to kill them all.
_What is your best quality as a director?_ I don't know. And if I answered that question, I would appear smug.
_What is your greatest weakness as a director?_ I have many weaknesses, and I don't care to discuss them right now.
_Finish this sentence: I'll never direct a movie about . . ._ delusional teenagers who see rabbits in 1980s suburbia. Been there. Done that. I'm game for pretty much anything else.
_Finish this sentence: The perfect movie is . . ._ something that cannot exist. There is no such thing as perfection.
_What will they be saying about your work fifty years from now?_ I don't know. I've only made one movie so far.
_What piece of advice do you have for aspiring filmmakers?_ My advice is don't ask people for advice. Just go and figure it out on your own.
_What are you as passionate about as moviemaking?_ Nothing

**DYLAN KIDD**

"Maybe like life, you don't know how to make the movie until it's done."

**SELECTED CREDITS**
_Roger Dodger_ (2002)-writer/director
_P.S._ (2004)-writer/director

The tale of how _Roger Dodger_ came to be could serve as an inspiration to any of the legions of would-be filmmakers who tote around their script in a knapsack, wondering how they will ever realize their dreams. Dylan Kidd wrote and directed the acclaimed character study of a despicable lothario, played winningly by Campbell Scott. Kidd is an anomaly in many ways among his directing peers.
He is a New Yorker who refuses to trade in his digs for Hollywood, and a man more comfortable knocking himself than anyone else: "My self-esteem is too low to assume that anyone is interested in my style." He is also undeniably an heir to a proud tradition in American filmmaking—the New York writer/director. _What are your earliest memories of going to the movies?_ DK: All of my early memories of movies are of being traumatized by them. I saw _Close Encounters of the Third Kind_ when I was seven or eight, and my mother had to take me out of the theater because I was so scared. I remember seeing Philip Kaufman's _Invasion of the Body Snatchers_. I couldn't sleep for six months after seeing that movie. I was literally traumatized by it. I remember my father taking me to see _Amarcord_ and just being horrified and titillated. There was this scene with a woman with huge breasts. It was like movies can be either terrifying or erotically forbidden. _Did you rent a lot of videos when you were a kid?_ DK: Absolutely. Like any self-respecting film geek, I was not a happy adolescent and I think the most important moment in my life was when I bought my own VCR. I had a job, working at a library after school. It was the first time I had ever made money. I got, like, three hundred dollars working over the course of an entire summer, and I decided to buy a VCR. I bought this forty-pound monster. That thing only broke down recently. _What kinds of films were you attracted to early on?_ DK: I liked Spielberg, and I waited on line to see _E.T._ like everybody else. But I think when you're young, you sort of gravitate toward the more self-consciously arty stuff. _Rumble Fish_ was huge for me. After seven years of therapy, I still don't exactly know what hit me so hard about that movie. _Was there one film that piqued your interest in filmmaking?_ DK: When I was in film school, I was aware that a lot of people were there because of _Star Wars_. 
But for me, the movie that when I look back was probably the biggest in terms of that little spark of, "I think I could see myself doing that," was _Stranger Than Paradise_. I was getting _American Film_ magazine and I saw a production still of it. I didn't know anything about the movie and I just rented it. Thank God my video store had this movie. This was before _sex, lies, and videotape_, so there was no sense for me that some guy could just get his friends together and make a movie. And to see this film, it was not my style or aesthetic, it was like, "Holy shit," and I think a lightbulb just went off. It wasn't like a Godard film. It was, "Some guy in New York made this movie!" _In high school did you have much direction?_ DK: I was so miserable. I was just trying to survive. I really was. I was clinically depressed all throughout high school. So for me it was movies and sports. I would have been a professional baseball player, until I became embarrassed that I was not gifted physically at all. I went to college and I was a philosophy major, which just shows how fucked up and aimless I was. I was just very spaced out—and remain to this day a very spaced-out kid that loves movies. I never believed that I would succeed at anything. I felt totally ill-equipped for life. I just liked reading and watching movies. Socially I was hopeless. _What do you think accounts for your depression?_ DK: I think the biggest thing was that there was a history of depression in my family. A lot of people in my family are on heavy medication. There wasn't any one event. It was just that I didn't feel confident as a kid. I never felt that I could do anything. I always felt very clumsy, and that's why film school was so important for me. _You first attended George Washington University, though._ DK: I was such an underachiever in high school that I didn't get into any of the fancy colleges that I was supposed to.
And I went to George Washington thinking that I would be at this ivory tower, and I got thrown into the biggest party school outside of, like, Florida State. It was just insane! I knew I had to get the hell out of there. I have some memory of seeing an ad for the Tisch School of the Arts. I'm not sure I knew there was such a thing as a film school, and it was only at that moment I knew I had to transfer. I remember thinking that there was no way I would actually make movies. My plan was to apply to the film school and do that for a semester. I knew that wouldn't work out, so I would just go into the cinema studies department because then I could write about movies. And I just remember getting to film school, and that first time when you look through that viewfinder and the shutter starts, something exploded in my head. I was done. I still didn't ever think I was ever going to be able to do it, but I just fell in love with it. It was just, "Oh my God. This is the best!" There's a block down on the campus and every time I walk by there, I glance at the spot where I was the first time I actually operated a camera. _It sounds like you had found for the first time a place that felt right for you._ DK: I loved it. I see kids in film school now and my sense is that it's a little more career oriented. I went off to film school, like, six months after _sex, lies, and videotape_ had come out, so it was still in the best sense a film school, in that it's just a bunch of people that were nuts about this stuff. For us there was no thought like, "How are we going to get a job?" It was, "Let's watch this obscure Scandinavian film and talk about it for eight hours." It was just the best. I remember being in love with the way the film smelled. You were drunk on cinema. _When you graduated, what did you do?_ DK: When I graduated, I thought I wanted to be a cinematographer, but we graduated in the worst film economy. There was no work in New York. I was working at a pool hall.
It was ridiculous. The few jobs I got were little goofy things for MTV, loading cameras on the occasional music video. But still there was absolutely no sense that I could do this. I think my mother was expecting me to go back to law school. For me there was never a plan B. There was no plan A, either. Years went by. I honestly don't know what the breakthrough was, but it took me four years after graduation to finally realize that I needed to make a movie. If you love movies and you want to make movies, there comes a point in your career when you're like, "If I don't make a movie I am going to go insane." _What did you do to make that possible?_ DK: I did a short film in 1996 and I met my producer, Anne Chaisson, through that movie. When I made that short, it was a manic year. The plan was to make this short film, and then I would immediately write a feature. So I started writing a crazy script that was set in real estate, which was where I was working then. _You wrote another feature besides the one set in the real-estate world, didn't you?_ DK: I did a horror movie script when I decided that I just wanted to get a movie made. I was thinking I would shoot it on 16 [millimeter]. I was going to go the _Evil Dead_ route. It was going to be an incredibly erotic vampire thing. It was actually not bad, but I was still clueless. I don't know, a vampire film? What the hell was I thinking? _What was the short about?_ DK: The short was an incredibly pretentious, Bergman-esque sort of horror movie about a kid whose mother is sick and he goes to stay for a summer with his uncle and aunt. It's a very painful thing to look at now, but it's very competent. It won some awards. I think doing that reminded me no matter how hard it is, the actual process of being on set with everybody is like a drug. It's so great. Once I made the short, there was still no plan B but at least now I had a plan A, which was to write an original screenplay. 
"I'm going to try to raise some money and I'm going to direct the movie." I wanted to direct. The idea was never to be a writer. It was to write in order to get on set. _How did_ Roger Dodger _begin?_ DK: I decided, "Let me write something that I can shoot." There was no Nick character in the beginning. It was just going to be an hour and a half of Roger going from one bar to another and getting more and more sort of tweaked. The idea was to do something that we could literally go and do guerrilla-style. The style of the movie came from the original idea of doing it without permits and shooting all long lenses, hiding the crew and using radio mikes. _The script seems like it was written precisely to attract an actor to the lead role._ DK: That, I think, was calculated. I knew that we weren't going to be able to raise money without getting someone attached first. Once Anne got the script, we were both like, "OK, clearly this is not a commercial product, but man, some actor is going to want to do this role." So that was calculated. It was calculated in that she wanted me to write enough juicy roles that there could be three or four actors that would help us, because we were ready to do that movie for $150,000 if we had to. It would have been insane, but I was just that desperate to make it. Writing that movie literally felt like a fever dream. It felt at the time like this was my last chance. I quit real estate just in desperation. I was as miserable as I'd ever been, but I got lucky and got one of those long-term dream temp jobs where you're working at some huge database project for six months and you're not supervised at all and you can do two hours' worth of work. I just sat there, writing. So that script was started at a temp job, writing in this cubicle. _How long did it take to finish a draft?_ DK: The first draft was just insane. The first draft was finished, I think, in like three weeks.
_Did you have an actor in mind when you wrote the part of Roger?_ DK: Not when I was writing it. But when I was first done, I really got a sense that Sam Rockwell was going to be a huge star. And Paul Giamatti, as crazy as it sounds. The character was written to be a little bit younger than Campbell. When you're a self-obsessed person, you're obviously going to write about yourself. I was thirty-one when I was writing it, so he had always been less like the adult uncle and more the young adult. It ended up being so much better and spookier to have Campbell be this middle-aged adult and still be so regressed, because it seems like it's not just a phase. You're witnessing a lifestyle choice, not just a bad year. _The story of how you got Campbell to play the lead is almost legendary by now. How long were you walking around town with your script on you at all times?_ DK: Two weeks, maybe. I ran into Maya Rudolph from _Saturday Night Live_ on the street. It was the first time I had ever approached anybody, and she wasn't able to do it. She was in a hurry. She was very nice, but I remember thinking that it would have helped to have said, "Take this script with you just in case." I think that was where I was like, "I should never get caught without a script." _And how did you meet Campbell?_ DK: I was meeting my friend Louis, and we stopped at the Grey Dog on Carmine. Campbell walked in and my initial feeling was, "Wow, he's too old." I went out on the sidewalk and called Anne and we talked about it for five minutes. It was just like, "What the fuck, he's here. What's the worst that could happen?" It was like a thirty-second conversation. Campbell really is just an incredible guy. Most people would say, "I can't accept this script," but he said two things. He said, "I'm going to be brutally honest with you, and it's going to take me a couple of weeks." I think I might have lied and said we had some money. I honestly don't remember. 
I didn't think I'd ever hear from him again. He called me back two weeks later. I talked to him for five minutes before I realized who it was. _What did he say?_ DK: He said, "Hey, it's Campbell," and I went "Hey!" And then I realized, "Holy shit, it's Campbell Scott!" He was doing a reading somewhere and he invited me, and we went out afterwards and just talked about it. _What did he want to know from you before signing on?_ DK: He wanted to know if I had a plan. He was like, "These are really long scenes." I told him all about my two-camera plan and my long lenses and that this wasn't just going to be a play on film, this was going to be a very visual movie with cigarette smoke and swirling neon and all this stuff. So I think he bought it. He's a very smart guy. He probably thought, "He obviously doesn't have any money, but it seems like he has a plan." Once we met Campbell, it was just like winning the lottery. It was insane. It was April or May when I met Campbell, and within six months we were shooting. It never happens the way you expect it to be. _Campbell is a director himself. Were you ever concerned that he was going to take your baby away from you?_ DK: Huge paranoia on my part, which just seems ridiculous now, because Campbell really is the most honorable person I've ever met. I remember the one time that I really made Campbell angry early on was when he suggested that I speak to Andy Keir, who had edited a film that he had just directed. I remember just having a real sense that I have to have my own person in the editing room. This was all my own paranoid fantasy, and I remember having that conversation with Campbell and he was just pissed. He was like, "Look, man, if this is the way we're going to start dealing with each other . . ." And he was totally right, but I was obsessed with protecting my baby. _When you started to rehearse, was it odd for you to be directing actors with so much experience?_ DK: It was. Isabella was the one.
Fuck, it's Isabella Rossellini! She's totally cool, but she's still Isabella. Rehearsals are scary for a lot of directors. It's definitely scary for me. What do you say? Nobody teaches you how to rehearse. You don't know. You just have to do it. _Where was your head at during the shoot after struggling to get to this place for so long?_ DK: I had a complete Zen calm throughout that entire movie. We were so prepared. Everybody was so on the same page. We had the best crew. There was a total sense that this is what I was meant to be doing. We invited the entire crew up to the editing room to watch the first week's stuff that had already been cut together. And it was an incredible moment—the entire crew jammed in this room where we watched a twenty-minute chunk of the movie, and it was just so apparent that it was really working. _Visually what were you trying to accomplish with_ Roger Dodger _?_ DK: The game plan was not to make it feel like a movie. We didn't want you to be aware of coverage. We wanted you to feel like you were literally spying, like literally these four people went into a bar and we photographed them without them knowing. We wanted the sense that the camera was chasing Roger through the movie and this was a guy who didn't want to be seen. So in the end when the camera finally does lock on him and pin him down, it means something. I feel so bad for our cinematographer, Joaquin, because other cinematographers who see the movie realize what an achievement it is but a lot of people were like, "Well, they couldn't afford lights or a tripod." _What were the key differences in how you approached your second film,_ P.S. _, as opposed to_ Roger _?_ DK: The difference between the first movie and the second movie was that in the first I was listening to my gut and the second I was not listening to my gut. I need to get better about making sure I put myself in the position to succeed. In the first movie, we went to set so ready to make that movie. 
And in _P.S._, in a very self-sabotaging way, I put myself in a bad position which has nothing to do with the cast and crew, who were all wonderful. It was my own preparation about what kind of movie I was making. The script wasn't tight, and I agreed to make the movie without rehearsing. I'm not good enough to make a movie without a rehearsal. I'm not Orson Welles. I had some feeling that I didn't want to be the guy that waited too long before making a second film. You forget that the first one didn't turn out well because you're a genius: It turned out well because you prepped it for two years and spent eight months of your life thinking about the scenes and how you'll cover them and you rehearsed for a week and you meticulously worked on the script. It's self-sabotage, pure and simple, to walk on set before you're ready. And there's nobody else you can blame except yourself. _It felt like some tough choices had to be made in terms of deciding how much time to devote to the peripheral characters in the story._ DK: I didn't realize until maybe editing how hard the movie was in terms of how much you're juggling. And how much is too much to give the audience? It was designed to have three or four things happening at all times, and that's not always a good idea. Even if you have four things going on, you have to pick the one thing that the people latch on to. It's like instruments in a symphony. You need some kind of balance. You need to pick one thing and have there be other colors to put underneath. We got into editing and I just felt like, "Jesus Christ!" It ended up being this huge salad of different tones and genres. It's interesting. In a way you make the movie in order to teach yourself why you wanted to make the movie. And that's the great unfair thing. Maybe like life, you don't know how to make the movie until it's done. You're like, "OK, I'm ready to start making the movie." _You were able to get your second film out of the way pretty quickly.
You didn't sit around and debate what to do as much as some others._ DK: The last thing in the world I want to be is Quentin Tarantino. If you make a movie every five years, then every movie has to be a home run. If you're the guy who makes a movie every year, then you're sort of bulletproof. I want to be the guy that, before you know it, they've made five movies. I really admire people like Kevin Smith and Spike Lee. They got their careers moving so quickly. I admire the fact that Kevin Smith made _Clerks_ and then didn't fuck around. _Mallrats_ came out immediately, and then he did _Chasing Amy_ and before you know it, Kevin Smith's a viable director. There's a statute of limitations, you know? It's very easy to wait too long. _So far you've chosen to stay in New York and make your own films. Does the "Hollywood machine" frighten you?_ DK: My biggest fear is the time suck. I feel like just being this sort of broke-ass guy living in Queens and writing my own script, at least now I'm sort of in control of my time. For me the big fear is writing a script for a studio and them being like, "Great, we love you," and then you realize you still have to go out and do twelve different meetings and pitch it and you still might not get the gig. I'm a bit of a control freak and I would rather live a low-key lifestyle and be maybe a little bit more in control. _Are there filmmakers out there who you'd like to emulate?_ DK: Two guys I would like to emulate are [Richard] Linklater and [Michael] Winterbottom, because they're prolific and versatile. They're able to play in both worlds—big studio stuff and smaller indies. There's something less intimidating about Richard Linklater. I watch a P. T. Anderson movie and I feel like crawling under the covers like, "My God, I would never move the camera like that. That guy's a genius. I should go back and work in real estate."
With Linklater and Winterbottom there's something that isn't too precious about their movies that I really like. It never feels labored. It feels like a bunch of really smart, passionate people got together and made a movie. _What are you proud of when you look back at the first two films? They have to be considered successes just by the very fact of their existence in this tough business._ DK: As disappointed as we were with _Roger Dodger_ not doing more business, the fact is our first movie got bought, some people saw it and the same thing with _P.S._ We go to festivals and a lot of people emotionally connect to the movie. We're living in a culture where you don't get to feel good about hitting a double. You have to hit a home run every time. So a lot of times you have to just think, "I get paid to make a movie." That's amazing. Making a movie is the most joyously collaborative thing you'll ever do. It's totally life-affirming. Whatever you want to say about movies' place in culture and whether it's part of the corporate patriarchy, the actual activity of making a movie I can say without question is totally life-affirming and democratic and wonderful. So it's important to count your blessings. If I have any advantage, it's that I have to do this because I'm not doing it for the money. I simply have to do this. And I don't know if it's for some self-indulgent thing that I need to express myself, but I just have to. The only way I'll win is to not quit and keep doing it. THE DIRECTOR'S TAKE DYLAN KIDD _What is the first film you ever saw?_ _Fantasia_ _What is your favorite film of all time?_ _Fanny and Alexander_ _What's your favorite line in a film?_ "These blow up into funny shapes?" "Well, no. Unless round is funny."— _Raising Arizona_ _What movie made you realize that film was an art?_ _Rumble Fish_ _What movie do you consider your guilty pleasure?_ _Vision Quest_. Just thinking about J. C. Quinn's Pele monologue makes me cry. Wait, here I go . . . 
_Who is your favorite movie character of all time?_ Travis Bickle _What's your favorite movie snack food?_ Goldenberg Peanut Chews _Who is your favorite director of all time?_ Ingmar Bergman _Who is the most impressive filmmaker working today?_ Michael Winterbottom, Lukas Moodysson, Abbas Kiarostami, David Gordon Green _What quality do the best directors share?_ Determination _Who is your favorite actor or actress of all time?_ Marcello Mastroianni _Who is your favorite actor or actress of today?_ Campbell Scott, Laura Linney, and everyone else I've worked with so far. Also Woody Harrelson, Samantha Morton, Gene Wilder, and Selma Blair. _Who would you cast as yourself in a film about your life?_ Steve Coogan _If you could remake one movie, what would it be?_ The last one _What is your best quality as a director?_ I prepare like crazy. _What is your greatest weakness as a director?_ I don't like to hurt people's feelings. _Finish this sentence: I'll never direct a movie about . . ._ a renegade cop whose desk-jockey superiors won't let him do his job (unless Woody Harrelson is attached). _Finish this sentence: The perfect movie is . . ._ when you leave the theater feeling more human and less alone. _What will they be saying about your work fifty years from now?_ "Wow, digital video sure looked grainy back then!" _What piece of advice do you have for aspiring filmmakers?_ Remember: We're all going to die. Life is too short to make movies that don't come from your heart. _What are you as passionate about as moviemaking?_ Nothing, and that's becoming a problem. **KARYN KUSAMA** "I've realized I'm a strong-minded director with a very clear sense of what I want to do, and I just want to be left alone to do it." 
**SELECTED CREDITS** _Girlfight_ (2000)-writer/director _Aeon Flux_ (2005)-director It seems more than appropriate that Karyn Kusama began working in film under John Sayles, a filmmaker rare among his peers for balancing the commercial and artistic concerns of a career. Kusama, an introspective woman of mixed heritage ("My dad is Japanese and my mom is a farm girl from Illinois") is wrestling with her own place in American film today. Her first filmmaking effort, _Girlfight_ , told the story of a young woman channeling her aggression into boxing. The film opens with the protagonist giving the audience her best Kubrickian stare. Kusama has a similar attitude that seems to say, "This is who I am, take it or leave it." Who she is, is a filmmaker with lofty ambitions, creating films that strive to achieve what she calls a poetic sensibility. I spoke with her as she was finishing her second film, the Charlize Theron action vehicle _Aeon Flux_. Even as she was completing her first studio film, the inner turmoil over the route of her young career clearly consumed her. _You grew up in St. Louis?_ KK: That's right. _Can you tell me a little about your family and where you grew up?_ KK: I have a brother and sister and grew up mostly in a suburb of St. Louis. My dad's a child psychiatrist and my mom's an educational psychologist. _Did they have any particular interest in the arts?_ KK: They both do, but my dad especially. His first love is classical music and opera. We actually were lucky as kids to go to a lot of opera, theatre, and dance. _Were you a good student?_ KK: I wouldn't call it that so much as wanting to get the good grades and get the fuck out of there. The suburbs are interesting. The one I was living in was so racially uniform and basically kind of segregated. I've always gravitated toward city life where there's more variety in the people you see and meet. I wanted that diversity. 
_Can you tell me about some of the first films that made an impression on you?_ KK: The first couple of movies that were really important to me were probably the first and then eventually the second _King Kong_. _Really? The original is often cited by filmmakers, but the remake?_ KK: Yeah, the Dino De Laurentiis version. It's interesting to see how we still love to see such a primal, weird story being told even under pop circumstances. When the gorilla ripped up Jessica Lange's shirt, I was sitting next to my dad just quaking with excitement wondering if he was going to rip me out of the theater. That was really an important movie for me. Then, of all people, my mom took me to see _Eraserhead_ when I was ten, and that was pretty mind-blowing. She didn't realize how strange it was going to be. That was a very powerful experience. _Were you thinking about film as a career when you were in high school?_ KK: I was more interested in writing, actually—poetry and some short stories. That was my obsession at the time, and perhaps that contributes to my interest in finding the "off" moment in a film—the moment that could feel like a digression but ultimately has the heart of the film in it. _What led you to NYU?_ KK: I really wanted to be in a big city. I always knew from a young age I was going to live in New York. I found that a lot of the film schools on the East Coast were more film-theory oriented. I was more interested ultimately in having a camera in my hand and shooting, so NYU ended up being a pretty good fit for me. _How would you characterize the atmosphere of the school? What kind of filmmaking was encouraged?_ KK: When I got to the school, there was a very divided soul. There was a tradition of experimental filmmaking still being taught to film students to very little applause. It's not like I watched the Michael Snow movies and thought to myself, "I really want to make those kinds of movies." 
But I did think it was interesting and important to be exposed to it. _What sort of things were you making in school?_ KK: I found myself initially making sort of personal documentaries. I became very interested in the idea of experimental narrative. Documentary was really helpful to me at the time to understand the mechanics of storytelling. And I still find that a good documentary feels as gripping if not more so than a good narrative feature, because you're still crafting a story. Sleeping Beauties _was your thesis film?_ KK: Yes. I'm not sure if it succeeds. It sort of is what it is. It's something I had to do then. What's interesting, looking back at that time and even before that, is the unconscious quality you can have when you're working, when you're not sure how much you're saying about yourself or your dreams and demons. I hope I can stay there, despite the fact that the trappings of filmmaking are so focused on the physical business of getting the movie done now than ever before. For me it's important to stay in touch with the unconscious brain at work. _What were you tapping into from your unconscious mind back then?_ KK: I paid somebody thousands of dollars to talk about that stuff. (laughs) A central conflict in my life is definitely how you see yourself as an individual voice, the degree to which you believe in your individual voice, and then the degree to which you need or want or surrender to a will of the community. I found that a lot of the work then was finding that small kind of ambitious part of myself that just kept moving forward. The stories always revolved around how that existed in conflict to some degree with taking care of other people. I think that's always going to be something that interests me, because it's a relevant theme, especially in the American tradition. We're so obsessed with the lone-wolf hero. I'm trying to understand my own lone wolf, I suppose, and where that fits into a real life—or if it fits into a real life. 
_Coming out of NYU, did you have a game plan for the future?_ KK: Actually I don't think I had a game plan. I thought maybe I would work in film, you know, in production. So I did that for a while. And it was frankly not all that I'd hoped it would be, because the fact of the matter is when you're working on somebody else's film that isn't particularly good, or a Taylor Dane music video . . . _Was that one of the things you worked on?_ KK: Yeah. You don't want to commit too hard emotionally to things like that, because it's frankly not worth it. Music videos are pretty grim territory. What started to happen was I realized unless I had a story to tell, working in film isn't teaching me enough. I need to know the world better. I'd lost a really close friend of mine and suddenly felt like working in commercials and music videos was such a pathetic way to spend my time. I was grappling with much bigger issues about mortality and how we destroy ourselves. So I decided to take a backseat to working in film, and took care of kids and painted houses and all that kind of stuff to live and be in the world. And it probably was the most interesting, informative time of my early adulthood, because I was really searching. That's when I started boxing and meeting people in that world, and I went to artist colonies and all that kind of stuff became the focus for me, like, "How do I continue to experience life and stay engaged in the world and also burrow into my work and writing stories?" I think I found a pretty happy balance. To be honest I had no plan really beyond paying my rent and making it towards burrito night every Tuesday. _It doesn't sound like you're a fan of music videos as a training ground for directors._ KK: The problem with music videos is first and foremost you're working in an environment where you're selling a product, ultimately. 
And I think that gets problematic because, with my brief experience in the studio system, it's so important to insulate yourself from those concerns. And if you know from the beginning you can't insulate yourself, it infects you. Spike Jonze and Michel Gondry have figured out a way to make what I would call personal, interesting work, but there are too many filmmakers who come out of the commercial and music video traditions and I just don't sense they know how to tell stories. I think the way stylistic choices really resonate is if you have some connection to a literary tradition, a poetic tradition, an operatic tradition, a theatrical tradition. There's so much we need to know about art to make art. I feel like sometimes music videos and commercials become so limited and their worldview is so limited, but maybe that's just me being judgmental. As I'm getting older I think I'm talking out of my ass half the time. _Your entry back into the film world came with a job working for John Sayles._ KK: I was babysitting for a family that worked in film, and they knew I was looking for a more full-time job in film. They said, "Maybe you could be an assistant to our friend John," and I met him when he was mixing a movie and we got along and he hired me. I worked there for almost three years. I got to see _Lone Star_ from inception to release, and I got to see the beginning of _Men with Guns_ and _Limbo_ , so it was a really great education for me. It was a great match for me because I think he's successfully straddled the independent world in terms of making his own films and then can work pretty comfortably in the studio system as a writer. I felt like I got to see two ends of the business. Eventually he told me I probably shouldn't be working there for much longer and needed to make my own movies. _From what I understand_ Girlfight _came out of your own experience in the ring. What led you to try boxing?_ KK: Honestly I just wanted to try something new. 
I wanted to quit smoking and be in better shape, and I knew I needed a complete change of environment. And boxing had a certain pull on me as a kid through sitting and watching it with my dad. It's something that presented itself to me, and I just decided that I would pursue it. It ended up being a good thing. _The inspiration for_ Girlfight _came from a moment in the ring for you?_ KK: Yeah. I was in the ring with a very beautiful young guy who felt that I was being tentative, and he was being tentative with me and he sort of whispered in my ear, "You can hit me." And I found it kind of weird and thrilling that he would give me permission to some degree. It was a very intimate exchange and it really was interesting to realize that's the nature of the sport. You're alone in the ring in a sense, but the one partner you do have is your opponent. _There must have been butterflies once you got the opportunity to start shooting after dreaming about it for so long._ KK: I think I was so desperate just to be on a set and making a movie and working with actors that I don't think that point of it gave me butterflies. Just the fact that it was finally happening made it nerve-racking and a release at the same time. _I know you had to pare down a lot of your ambitions for the film because of the budget. How do you look back at the film?_ KK: I definitely look at the film as a product of its limitations. I look at it with, I hope, a very clear eye of what we were trying to accomplish in a very short time and with very little money. I'm proud of it for that reason, and I hope if I made it for five or ten times as much money, I'd still be proud of it. But there's an inevitable pang of regret in some areas when I watch the film, just because I know we were seriously limited. The fact that it exists means it's inevitably different than how it started in your brain. 
And I think that's the sort of key experience I'm finding with filmmaking: You're constantly grappling with a sense of loss, with the pure idea and having to embrace the reality of the footage in front of you or the cast or the set or the music or the crew. You accept that the film you have may be very different than the one you have imagined. That's a very difficult process for most of us, and I think that will be an ongoing experience for me. _You've said before that you want to make films that have a "poetic sensibility." Can you explain what you mean and give me an example?_ KK: To me the idea of a poetic sensibility is finding the moments that are outside of reason and a black-and-white ideology that can comfortably exist and won't conflict with whatever else is on the screen in terms of story. It's something about sequences or images or moments in film where you take the film outside of the linear path or the narrative trajectory to give breathing room to the film. I mean, you could look at movies like _Days of Heaven_ and the whole movie is a poetic sensibility with a very loose simple story hung at the edge of it. But then again I just recently saw _The Big Red One,_ directed by Samuel Fuller. His storytelling tactics were often very blunt and unsentimental, but then there would be these moments of incredible lyricism. It's a hard balance to strike when so many movies are at this point, as Hitchcock said, just pictures of people talking. _Do you think you achieved any moments demonstrating that "poetic sensibility" in_ Girlfight _?_ KK: There were definitely moments I strived for. I don't know if I achieved them. That movie was always meant to be blunt and consciously simple and unfancy filmmaking in a way. I think there's a way to take those kind of social-realist dramas and do something lyrical with them. Maybe the poetry of the film exists in a sort of heightened mental space of the ring itself. That in itself is an escape to a different sort of world. 
There's nothing more real than somebody whacking you in the face, but there's a sort of agreement that happens when you get into the ring and face your opponent that I think creates a sort of heightened reality. _What was the appeal of_ Aeon Flux _? What earns a year and a half of your life?_ KK: I'm still grappling with that. (laughs) The script was so full of feeling and expression, and it was all told in a very spare style. I really loved that sci-fi environment that wasn't this clatter of postapocalyptic detritus of the world. It was quite a bit more idyllic than that, and I thought that was a fresh redirection for the genre. There was a very interesting claustrophobic comfort and beauty in this world that reminded me a lot of where we are right now, where the tradeoff to having beauty and comfort in your life was that you stopped having a dialogue with others about the nature of your life. _You did not get final cut on_ Aeon Flux _._ KK: I don't think I'm going to ever work on a movie again where I don't have final cut. I've realized I'm a strong-minded director with a very clear sense of what I want to do, and I just want to be left alone to do it and I'm not sure studios are necessarily the most instructive places for filmmakers to be, except to maybe learn about the hard realities of commerce and art intermingling. I definitely feel very strongly that I should be kind of left alone to make movies. (laughs) _So you're saying if you again work on a studio film, you would need final cut?_ KK: Yeah. The fact is, it's very difficult to get final cut but I'm at a point where— _It's just not worth it to you? You'd rather work on a smaller scale with more control?_ KK: Oh, absolutely. The problem right now with filmmaking is it's very difficult to make those kind of movies. It's hard enough to scrape together $3 million for a risky movie. That's the conundrum I'm facing right now. 
The independent film world is not this welcoming safe haven for me or other filmmakers like me. I just want to make movies, and it's important to me to make good movies. I feel like you roll the dice every time you take somebody's money, so it's been an interesting kind of experience to be longing to make a smaller movie but not necessarily knowing what my venue would be to do that. _So if you had to hazard a guess, the next few Karyn Kusama movies will be films smaller than_ Aeon Flux _?_ KK: Well, I say that, but _Aeon Flux_ was overwhelming in a lot of ways, and the fact is the money was just a component of what made people say they had to always be interfering in the creative process. The money is definitely a component, but I feel like it's just the nature of the business. People who finance your movies generally get really nervous. So there's a part of me that feels like after _Aeon Flux_ if somebody said to me, "Listen, we have a $100 million action movie that happens to be really dark and challenging and interesting," I'd have to consider it, just because I know I could do it. I frankly find it a lot more creatively freeing to have a very limited budget. That's an exciting challenge to me. You start to really kind of hone what's crucial because you only have so many resources to go around. The only issue I have in terms of these big-budget films is it's just that much more money for a studio to be freaking out about. The irony is they torture you on everything. They'll torture you on a $20 million movie. If they have reason to torture you, they'll torture you. If only they knew how critical I was of myself. _I know it bothered you when_ Girlfight _came out that you were inundated with questions about being a female director. What was bothering you?_ KK: I think what I was resenting was being pigeonholed as some kind of miracle because I was a woman. I know that I have a lot to learn, but I know that I have an instinct for filmmaking. 
I know that I'm at least marginally good at what I do. (laughs) And it just bothers me that there's an assumption that there's something sort of special in being female. Maybe what's special is how long it took me to get here, and that's worth discussing perhaps, but I just don't really want to get into what it means. I think what frustrated me is oftentimes I felt there was a subtle or not so subtle condescension in the question. I think everyone expected me to be so desperately grateful and just sort of be googly-eyed. And the fact is that's not who I am. I've worked very hard to get to where I am, and I continue to do so. And I could make the argument that I have to work a lot harder than a lot of male directors because, frankly, I have to prove myself every day as opposed to every other day, and that's just the way it is right now. That's what pisses me off. I don't want to be considered like the miracle baby. _Do you believe that it's a given that it is a tougher road for women today to be filmmakers?_ KK: I think it's a little disingenuous to say it's not a different set of challenges, because people are simply more comfortable with men behind the camera and running the show, literally and figuratively. It's not something I hold against people anymore, because it's almost like a deficiency within the culture that I have to sort of figure out. I don't get too wrapped up in agonizing about it, but I do think it's just a reality of this business. But at the same time I feel like I am very lucky, because I don't think anyone thought I was going to be directing a movie like _Aeon Flux_ —least of all me! It's interesting because I complain about the fact I know things would be different if I were male, but the fact is it's very difficult to be a filmmaker, male or female, who tells ambiguous stories or stories that kind of wander or don't commit to a theme. In a way that's the tragedy. It goes beyond gender lines. 
The field of complexity is very narrow within pop filmmaking today. _What are the things you've learned to avoid or not avoid the next time from your experience on_ Aeon Flux _?_ KK: Well, my biggest problem is I'm a pretty decent person and I don't believe in being a screamer and I don't believe in big pompous declarations of my identity as an artist and the big bad studio system. I know there's an unholy alliance between commerce and art, and that is something I've been facing for a long time. I feel like what happens to me is I'm for the most part unfailingly polite, and politeness is not rewarded in this business the way brutality is, unfortunately. I'm trying to figure out how to get what I need without always having to raise my voice. That's always been a hard thing for me. I just don't believe I should have to raise my voice, but the fact is nobody pays attention until I do, so it's weird. _I've heard you say that it's important to you that an audience recognizes your films as your work._ KK: It's not necessarily that I want people to say, "Oh, that's Karyn Kusama," because I can't really expect that yet. But I want people to feel like they are not seeing a movie that's been made out of a mold. I ultimately want to make movies that people don't examine as much as experience. And that's a way I'd like to believe I had some kind of distinctive filmmaking voice. I think it's important to have a sureness of hand when you're taking an audience through your story, and that sureness ends up expressing itself whether your form is to stand back and let the story unfold, or if it's to be quite crafted and manipulative with every moment. The one steward of that experience for the audience over time is always going to end up being the director. I want an audience to watch _Aeon Flux_ or _Girlfight_ and say, "I felt like that director was taking me somewhere and shaping the material for me." Because the fact is, I think movies should be experiences. 
They should be like walking into a dream, and dreams have to have shape even if they feel shapeless. They have to be manufactured and there needs to be an architect to all of that, so I guess that's what I mean. _What's your take on the state of American filmmaking today? Are you optimistic about the future?_ KK: That's an interesting question. The thing is, it's always been a struggle to make movies but it's harder when the people who are overseeing all that money literally don't even watch movies or care. It's a big change I think from the previous studio incarnation, and it's already affected American filmmaking tremendously. I hope that because we're in a very dicey political situation it helps to incite filmmakers toward a more formally daring or inventive or politicized kind of filmmaking, because the fact is we don't even know how much longer we have on this earth to be lucky enough to work on this art form. At this point, why not get a little faster and looser with it and take chances and fight for your chances. Everyone's so fucking busy trying to keep their jobs, and I think it'd make a lot more sense if people started arguing a point and fighting for something meaningful as opposed to letting every meaningful choice go down the drain again. It's just not worth it to put oneself in jeopardy when we probably don't even have much longer on this planet. It seems worth it to make good movies now. And I sense that could happen, and I'm optimistic in that way. THE DIRECTOR'S TAKE KARYN KUSAMA _What is the first film you ever saw?_ _Fantasia_ _What is your favorite film of all time?_ _The Hustler, Rosemary's Baby, Safe, High and Low, Ikiru, The Elephant Man_ _What's your favorite line in a film?_ "Everybody, everybody wants a piece of me! Aren't you gonna have one?"— _The Hustler_ _What movie made you realize that film was an art?_ _King Kong_ (1933) _What movie do you consider your guilty pleasure?_ _Valley Girl_ , fer sure. I know every line. 
_Who is your favorite movie character of all time?_ Ellen Ripley, Fast Eddie Felson, Carol White _What's your favorite movie snack food?_ Popcorn _Who is your favorite director of all time?_ Michael Powell, Roman Polanski, Akira Kurosawa, Michael Ritchie, Stanley Kubrick _Who is the most impressive filmmaker working today?_ Todd Haynes, Edward Yang, Wong Kar-Wai, Claire Denis _What quality do the best directors share?_ An obsessive love for, and humbling by, the art form _Who is your favorite actor or actress of all time?_ Barbara Stanwyck, Paul Newman, Toshiro Mifune _Who would you cast as yourself in a film about your life?_ You're kidding, right? _If you could remake one movie, what would it be?_ _Alice in Wonderland_ _What is your best quality as a director?_ High tolerance for absurdity _What is your greatest weakness as a director?_ High tolerance for absurdity _Finish this sentence: I'll never direct a movie about . . ._ an alcoholic cop with a chip on his shoulder. _Finish this sentence: The perfect movie is . . ._ going to keep me up at night. _What will they be saying about your work fifty years from now?_ I'm just hoping we make it that far—so it's hard to say! _What piece of advice do you have for aspiring filmmakers?_ Engage in the world. _What are you as passionate about as moviemaking?_ Cooking **NEIL LABUTE** "I'll probably never make a _Gladiator_ , and I'll probably never make a _Miss Saigon_. I'm not as drawn to the spectacle as I am the people." **SELECTED CREDITS** _In the Company of Men_ (1997)-writer/director _Your Friends & Neighbors_ (1998)-writer/director _Nurse Betty_ (2000)-director _Possession_ (2002)-cowriter/director _The Shape of Things_ (2003)-writer/director _The Wicker Man_ (2006)-writer/director Few filmmakers have so quickly established a unique voice in their work as Neil LaBute did in his first two films, _In the Company of Men_ and _Your Friends & Neighbors_. 
He is, as every generation of moviemakers needs, a polarizing figure. To this day he is hailed for his unflinching examinations of the darkness inside all of us and by equal measure decried as a misanthrope. Both may be true. _In the Company of Men_ after all began in his mind with one phrase: "Let's hurt somebody." LaBute never had designs on a film career. He was and remains a playwright. Meanwhile, the breadth of his film work has expanded greatly since his initial efforts, taking viewers on a road trip full of violence and black comedy ( _Nurse Betty_ ) and an epic period love story ( _Possession_ ). _Where were you born?_ NL: I was born in Detroit. _But you didn't grow up there?_ NL: We moved to Washington State when I was about five. _What did your parents do for a living?_ NL: Both are retired now. My dad was a long-haul truck driver and my mom worked as a receptionist in a hospital. _Were either of your parents particularly interested in the arts? Did they push you in that direction?_ NL: There was no prompting from either of them towards the arts. My mother was interested in film from a filmgoer's point of view, reading movie magazines and watching films—strictly a fan's sort of interest, not in making them or anything like that. _What do you think the first signs were of your interest in pursuing writing and the arts?_ NL: I always had a pretty healthy interest in storytelling, whether it was imagining myself to be somebody else or whatever. It was pretty early that I found an interest in the church play, the school play, going to the movies, and television. As limited as TV was at that time, I scoured it for movies. There was a program on PBS that my mother probably got me interested in watching. It was the equivalent of what an International Film 101 class would be. Once a week it was either _Nights of Cabiria_ or _Wild Strawberries_ or _Seven Samurai_. I got pretty hooked on that program even if I couldn't follow the stories. 
Like they would play _Aleksandr Nevskiy_. I remember that vividly. Following the text or not, I thought, "This is something more than what I'm used to watching. This is something that is really challenging." Still today if I'm working, I'll pop on _La Dolce Vita_ or a good art film just to listen to the sound of it, in the way you'd put on a CD. I like the sound of it in the house. _What moviegoing experiences do you recall?_ NL: I remember going to a revival of _Gone With the Wind_ with my mother when I was young, and that certainly made an impression on me. I felt like we were there for part of my childhood, it was so long. I would sleep and wake up and it would be like the war is still going on. I vividly remember waking up at one point and seeing Rhett gathering Scarlett in his arms and carrying her up this amazing set of stairs toward the boudoir. It was a very powerful collection of all the things that can work in cinema—the soundtrack and the overpowering image. I remember being like, "Wow, what is this?" And then as we were older, the drive-in became very popular. You were thrown in the car and you'd go see a couple of movies. It was the drug of choice for me. My parents never had to worry if I was off at a party, because I was always going to the movies. _Has writing always come easily to you?_ NL: Certainly when I started writing I wasn't the best. You'd read something and go, "Oh, that's something Pinter wrote once or that's an Athol Fugard play, sort of, but bad." But that didn't make me stop and say, "I have got to do something else." I don't think it was, "I'm only going to do this if I'm good." The difficult thing with writing is, once you're done with something, you're unemployed. And you have to get people to buy this stuff. How do I sustain my life with this stuff? But the positive side said, to me: You can write wherever you are. I don't have to go to California and audition for things or move to New York. I can be working at McDonald's. 
I can be working as a doctor. _How did you end up attending Brigham Young for college?_ NL: I ended up at Brigham Young through a high school guidance counselor who suggested it. It was just, "Here's a school that has a very great arts program and there's a fair amount of money you could get for it." Ultimately it really came down to they made the best offer. _You were not raised as a Mormon, but you were entering a school where nearly everyone is. You must have had a fair amount of trepidation._ NL: Trepidation, yes. Because you're going off essentially for the first time by yourself. But it wasn't like I'm going off on a tramp steamer with a bunch of seasoned sailors. It was like, "Here's a bunch of people who are quite religious." And I knew several members while I was growing up and always liked those folks, so I was going in with basically positive feelings rather than, "Oh, I don't know what this is. Is this some sort of cult?" There was none of that kind of worry, just the simple worries of how am I going to pay for my groceries and that sort of thing. _At some point while there you converted to Mormonism._ NL: Everyone around me was Mormon. And I'm taking classes in the religion. You had to take twelve credits of religion, and several of those had to be Book of Mormon. There's a narrowness of vision anytime you get people of so many like minds together, because you're constantly supporting each other and saying, "Yes, you're right!" (laughs) There was just me saying, "Hey everybody, let's just think about this a minute." _But obviously you didn't convert from sheer weariness of disagreeing._ NL: And it certainly wasn't that. Nor was it me playing the angles going, "Oh, I see actually the Mormons get a break on their tuition." (laughs) I just gradually but steadily became sure that it was something that I wanted to do. 
_Were you studying film and theatre extensively there?_ NL: The thing about Brigham Young is they had these steadfast moral principles, where they didn't want people having sex before marriage and drinking and all that, so they provided all this entertainment to keep people busy. (laughs) Anything to keep them off of discovering they could actually go over to someone's house and have sex. So there was a film society that ran a lot of the old classic American films and then there was a great program called International Cinema. I just soaked that stuff up. _Was a film career something you had seriously considered by that point?_ NL: I loved film, but I really didn't have any designs on it as a practitioner, because theatre was there, and I could practice it and it was relatively simple to do. Get some friends together, put the show on the lawn or in a classroom or whatever. And I was very good about doing that kind of thing. _When did you consider turning one of your plays into a screenplay? I know you attended the Sundance playwrights lab._ NL: It snowballed from the Sundance lab. One of the pieces that I did at the lab was read or seen by someone who wanted to have me turn it into a screenplay. And that got me hooked up with [the production company] Good Machine. So I went through that process of, "How do I open this up as cinema?" I was introduced to the long process of film financing in the independent world and how long it can take to put a movie together. But also by that point I had started to hear these stories of guys who had put a little money together in all sorts of nefarious ways: selling their blood, laying money down in Atlantic City and using their credit cards, and all of these crazy ideas for making small films. That attracted me because that actually had more of a theatrical spirit, more like what I was used to. This is really like putting on a show in your parents' barn. Now, this I understand. 
So I went about thinking, "Maybe I could do one of these." Happily I was ignorant, because I had heard all these stories about, "Yeah, he made this movie for twenty-five grand" and all of that. While that's what we shot _In the Company of Men_ for, it certainly wasn't what it took to get it up on the screen. It took more money, which had to be raised very quickly once we realized that we were going to get into Sundance. But happily I didn't know that at the time, or I'm sure I would have just gone the other direction. I thought $25,000 was going to do it, and off we went and made this movie. And I used everything that I knew as a theatrical director and writer and just sort of applied it to film. Because also by that time I had watched a lot of movies and there were people that I really admired, like Ozu and Rohmer, who had this very still aesthetic as opposed to a camera that was constantly moving. I thought, "I don't need it to move." For me I'm happy just to lock it down and look at these things. I'm not caught up in the dynamics of the camera. I'm much more interested in what happens within the frame than constantly shifting the frame. _Supposedly_ In the Company of Men _began for you with one line: "Let's hurt somebody."_ NL: It really did. It was the first line, in fact, of the screenplay. I just wrote it down and started writing from there. I had a couple of guys sitting in an airport and they say that to each other, and I wrote the first scene off of that, but ultimately I ended up flipping that and making it the last scene of the first segment of the movie. I thought that was a provocative way to start a movie, although I ultimately didn't end up starting it that way. _Was it written specifically as a screenplay rather than a play?_ NL: It was written as I often write, which is sort of this nebulous world of dialogue and scenes. 
A play will start just because I start writing and I get excited about it and I want to finish it and it hasn't been promised to DreamWorks or anybody. I've joked before that if they stay in a room, looks like it's going to be a play, and if they go out to the car, then it starts to look like a movie. (laughs) _Is it true that you initially wanted to do_ In the Company of Men _in black-and-white?_ NL: I did. In fact, we shot it in color because it was the cheapest stock that we could find. It had a due-by date that was passed, and we got it cheap. The rough cut that we sent into Sundance was in black-and-white. When we got in, we found that to actually get a sort of pristine version of it ready for the festival, it was a much easier and less expensive road to get the color processed and ready and timed than it was the black-and-white. And so we ended up taking a color version up there, and that was it. _Why did you want to do it in black-and-white?_ NL: It fit that story to me. I was going off a love of things like _The Apartment_ , which always felt so strong in black-and-white. I've grown up probably seeing more black-and-white films than color. And I love the way those films look. It seemed to work for me—the starkness of it, the contrast really seemed to apply well to this piece. But I would have happily done that with _Your Friends & Neighbors_ or lots of things. You keep hoping that there'll be one that people will let you do in black-and-white. _It would have to be the right material that you could make a compelling argument for._ NL: Or you pay for it yourself, ultimately. You just sort of go, "Well fuck it. I'm just going to go do this on my own, then. And I'll show them how it's done." _Did you have a concrete plan for what you were going to do with the finished film?_ NL: I was working in a real vacuum. I shot it in Indiana and cut it there, and I wasn't around a bunch of other filmmakers. 
I just followed the only course that I knew, which was, "Let's send this in and see if Sundance is interested." All the things that came from that, getting it released and going to Cannes, were surprises. _The pressures of being on a film set where time is so sacred and costly must be quite different from your theater experiences._ NL: You always feel like you don't have enough time or money. We had an eleven-day shoot for _In the Company of Men_ , and I could feel that pressure of having to get it all in. Then to have a sixty-day shoot for _Nurse Betty_ , it still feels like there's not quite enough time. You're aware that the time is just clicking away. It's represented very nicely by the sound of the film running through the machine. You can just hear it. (laughs) To wipe all of that away and make an environment in which the actor feels completely safe and like they have all the time in the world is tricky. But that is, I think, the game for me because of how important the actors are to me. _What tricks have you learned to keep actors happy and in a good creative space on your sets?_ NL: You can't always do it one way. I can remember on _Your Friends & Neighbors_ I would often put headphones on in between shots and be playing something ungodly like Nirvana, because that chaotic music sort of calmed me down. But Catherine Keener wanted to feel like I was right there and available, so I knew I should take those headphones off and be where she needed me to be. So it's constantly adapting to what you see. _Did you feel like you were a sought-after commodity as a director coming out of Sundance with_ In the Company of Men _?_ NL: Coming out of Sundance, the film made a lot of what you'd consider "noise." People were very interested in it. It won an award, and it was in a very good place from being shot in the middle of nowhere. 
But walking away from that festival, it hadn't been tested in the ultimate arena for movie executives, which is, "How much money has it made?" And in fact it failed in one particularly important area, which is it didn't get picked up by anybody for domestic distribution. Later that year I went to the New Directors festival in New York, and Sony Classics picked it up and then it got into Cannes and then it was released in August. It did a humble amount of business—a couple million dollars of business. It turned a great profit. So that, then, made people go, "Oh, we're interested." Because they always do that crazy math where they go, "Well, if he can make that much profit with _that_ , then if we give him _this_ , he'll make _this_ much profit," which is not necessarily true. That is the way they think, and so then they became more interested. However, by that time I was already involved in making _Your Friends & Neighbors_. _So there weren't tempting offers of other things to direct after the first film?_ NL: I can remember at one point during the editing of _Your Friends & Neighbors_ that the script for _American Beauty_ came through. I don't think I actually read it. My agents said, "It's a really good script and you kind of have just done that kind of thing." And I'm like, Yeah, that's probably true." And then when it came out I was like, "OK, I should probably read those kind of things now." (laughs) _Was jumping into another film so quickly part of a sense for you of wanting to establish immediately that you were not just a flash in the pan?_ NL: Yeah, I think there was a sense of not wanting to agonize over the decision, that you could talk yourself in circles and into not doing anything. So I thought, "I've got something else here," and here's interest from someone I admire, Jason Patric, who had read the script for _Your Friends & Neighbors_ and said, "Yeah, I'd like to produce this." So I very quickly jumped on that, and off we went. 
In retrospect I think it was the best thing I could have done, rather than just having fumbled along and worried about what I was going to do and what did this say about me and should it be a bigger picture? The one-two punch of those films did as much to create the voice of me as anything did, and the plays have continued to do that. Nurse Betty _was your first film that didn't come from an original story by you. It would seem that it presented a lot of new challenges for you as a filmmaker._ NL: Well, yeah, there were a number of things that I knew I had to do. I needed to be able to step up and tell, as wacky as it was, a more conventional story in a more conventional way with the tools that people would expect. It's a road picture. I can't make it static. So now I'm planning a shoot-out and a scalping, and that's going to require makeup and CGI. . . . I had to just suddenly learn how to use these other tools without any real formal training. And again it's very gradual. It was never like, "I don't know how I'm going to learn this by tomorrow." It was stuff that you kind of have to sit down and go, "OK, now, how am I going to make this work?" And I also knew that once I had signed on to something like that, I wanted to be honest and true to the material rather than I'm going to try and make this movie more like my other pictures. It needed to be taken care of in the way it was designed. I liked the challenge of doing it. Yes, there are headaches of logistics of "Now we're going to go shoot in the desert or go to Rome, and how are we going to make all this work?" But I found that to be fun. Possession _didn't do well at the box office. Do you agonize over the commercial prospects for your work? Do you feel pressure to make a hit?_ NL: I'm aware of it because it is a business that I'm working in. 
And as much as you'd like to keep it in this pristine realm of artistry, what you are doing at the end of the day is a commercial venture, and you are hoping that people go see it. So that's tricky to try and slay both of those dragons. There was a kind of dumping of _Possession_ due to a changing of the guard at Focus. The people who said, "We're going to make this movie," were not the same people who were releasing the movie a year or so later. And it's a shame. You want it to get out there and have its life. But it doesn't take away how I feel about the movie. The only thing I walk away from _Possession_ saying is, "I wish it was a little bit longer." We had a cut probably fifteen minutes longer that I liked even better. But we were always trying to cut it down for time, I guess for some imaginary audience of young girls. I always just felt we should make the picture we're interested in. But that's always hindsight. In the more general sense, yeah, I think I'm always aware that to continue to be able to do this, I need to, on a very basic level, be able to return on what I spend. _Whether it's a $1 million film or a $20 million one._ NL: Exactly. I never felt more beholden than to the guys who gave me the money to do _In the Company of Men_. Those were real people with real money taken out of their bank account, as opposed to millions of dollars from Universal. I mean, you can't really comprehend $20 million as opposed to these guys who had $25,000. So yes, I'm aware of it without being completely shackled by, "Oh God, I must make something that people like." They say the worst thing that can happen to you is to make a movie that makes $100 million, because then people are expecting you to do that again. _Is there any less of a sense of ownership over films like_ Nurse Betty _and_ Possession _that come from other people's work?_ NL: Well, I never felt like the hired help, because they were projects I'd pursued. 
But the fact that those were from some other source, yes, you can't help but feel that they probably required a certain kind of filmmaking that was less like what I had established as my own aesthetic. I think you can see my hand in them easily, but I think that they're probably less like signature pieces than the others are naturally. _And because of that, are they any less satisfying?_ NL: Not for me. I feel like they're all kind of my babies in the end. It's just some of them I've seen from conception and the others I've adopted. (laughs) _Whose films today do you make a point of watching?_ NL: I'm a big film viewer, so I am constantly watching movies. If there's a new one by Woody Allen, I always go see it. I haven't been as satisfied lately in his material but I always go take a look. I'm always curious about what Eric Rohmer is doing, and Mike Leigh and Jane Campion. I'm always curious about what the Coen brothers are going to do. I like Paul Thomas Anderson and Wes Anderson. _You mention Woody Allen, and he strikes me as someone who, like yourself, has a very distinctive voice in film. He's been criticized in recent years for continuing to mine the same territory. Is that a concern for your work?_ NL: I can see _Annie Hall_ and _Manhattan_ and _Hannah and Her Sisters_ and yet I don't feel like he's repeating himself. This is an area in which he wants to work. He feels like he has something to say, and I'm happy to see that. Hopefully that's what people see from my work. In that regard, someone who'd be a sort of hero to me would be Eric Rohmer, who's made a number of films about the foibles of men. He goes back to the same ingredients of, "This guy wants to be with this woman but she actually likes his friends." He's dipped into that well a number of times. They're aesthetically not particularly different, but they're each their own kind of creature. 
That's probably more the way in which I'm headed rather than I need to do one science-fiction movie and one western. I'll probably never make a _Gladiator_ , and I'll probably never make a _Miss Saigon_. I'm not as drawn to the spectacle as I am the people. _You've mentioned before that your relationship with your father was less than ideal. Do you see a connection between that and the unsympathetic men that recur in your work?_ NL: Sure. My men are often harsh and untrustworthy, and that probably comes out of early experience. The male figures in my life early on were probably more like that—more aggressive and more quarrelsome. And a lot of that has seeped into my work. If you look at the gallery of rogues I've created, most of them are pretty tricky characters and I think that comes from somewhere. But it's not something I give a lot of credence to. It's also because I know that to create drama, you've got to have somebody that creates drama. So why not make it the guys? They can take it. (laughs) _Have you ever considered returning to some of the indelible characters you've created, such as Chad from_ In the Company of Men _?_ NL: Actually right before _Before Sunset_ came out, I talked to Aaron Eckhart. It'd be fun to drop in on Chad every ten years, sort of like _7 Up_ —that sort of thing. We could see where he is, if he's now married, what he's doing. Those kinds of things do interest me. THE DIRECTOR'S TAKE NEIL LABUTE _What is the first film you ever saw?_ _Gone With the Wind_ _What is your favorite film of all time?_ _La Dolce Vita_ _What's your favorite line in a film?_ I love the last line in _Manhattan_ : "You've gotta have a little faith in people." _What movie made you realize that film was an art?_ Probably _Deliverance_ or _The Godfather_ _What movie do you consider your guilty pleasure?_ A Don Knotts film called _The Ghost and Mr. Chicken_ used to make me laugh out loud when I was a kid, mostly because Don Knotts is some kind of comedic genius. 
I bought it immediately when it came out on DVD. It still works. _Who is your favorite movie character of all time?_ Marcello from _La Dolce Vita_ , Rooster Cogburn from _True Grit_ _What's your favorite movie snack food?_ Popcorn. No question. Butter, salt, napkins. _Who is your favorite director of all time?_ The guy who moves me the most as a filmmaker is Eric Rohmer. _Who is the most impressive filmmaker working today?_ Scorsese and Spielberg are such brilliant technical directors and still work extremely well with actors, and yet, someone who works in such a relatively simple way as Kiarostami makes me incredibly happy as a viewer. _What quality do the best directors share?_ A desire to understand the human spirit is essential. _Who is your favorite actor or actress of all time?_ Hard not to pick Brando because of his sheer ability, but I love guys like Gary Cooper as well. _Who is your favorite actor or actress of today?_ I never tire of watching Meryl Streep. _Who would you cast as yourself in a film about your life?_ I'd shoot for Tyrone Power, but should probably go for Albert Brooks. The best would be down the middle—a young Elliott Gould. _If you could remake one movie, what would it be?_ _The Wicker Man_ _What is your best quality as a director?_ An understanding of text, an ability to work with actors, an undying fear of being able to say, "I don't know." _What is your greatest weakness as a director?_ A certain reticence to be more forceful. I could stand to be a bit more assertive as both a director and as a person. _Finish this sentence: I'll never direct a movie with . . ._ singing and dancing as a strong, central part of the whole. _Finish this sentence: The perfect movie is . . ._ one that takes me to a place that I haven't yet been, whether it's down the street or across the earth. _What will they be saying about your work fifty years from now?_ "Talky. Static. Bruising. Unsettling." 
_What piece of advice do you have for aspiring filmmakers?_ Never take no for an answer. Believe it when someone says it to you, but immediately move on to the next person until somebody says yes. _What are you as passionate about as moviemaking?_ The theatre **DOUG LIMAN** "I'm not that guy who's making the cookie-cutter movies. I don't mind a little reputation." **SELECTED CREDITS** _Getting In_ (1994)-director _Swingers_ (1996)-director _Go_ (1999)-director _The Bourne Identity_ (2002)-director _Mr. & Mrs. Smith_ (2005)-director Doug Liman makes the conventional unconventional. That could describe the unique take he's brought to films like _Swingers, Go_ , and _The Bourne Identity_. It could also describe his approach to moviemaking itself. Liman's films have been besieged by reports of delays, re-shoots, and problems with their respective endings. Even his collaborators have questioned his methods at times. _Mr. & Mrs. Smith_ producer Akiva Goldsman calls him a "madman." _Swingers_ star Jon Favreau says, "There's a whole air of chaos around the way he does things." Both also swear by his talent. There is a method to Liman's madness and the proof is in the work, films that have been lauded for their energy, originality, and ingenuity. _Your father, Arthur Liman, was one of the most noted attorneys of his day. Was the law something you ever considered as a career?_ DL: No, because my older brother really was following in my father's footsteps. My mother is a painter and an artist so it wasn't like I came from a family of just lawyers. Ironically it was because of my father that I got into film. He had a client called Bell & Howell who made super eight movie cameras and he won a case for them. As a present they gave him a super eight camera and a projector. He had no interest in this camera, but I did. So as a six-year-old I started running around with it making movies. _What sort of movies did you make? 
What were they about?_ DL: They were about our family dog, Licorice. That was the only actor a six-year-old could get their hands on and control. _What do you think it was about film that drew you in at that age?_ DL: I love film. I like [watching] almost any movie that's 35-millimeter projected. Just the experience of seeing images on screen excites me to this day. I love having a roll of 35-millimeter film in my hand. So that's part of it and I'm a storyteller. My father was a storyteller. That's what trial lawyers do. There was a theatrical component to his court-room appearances. I think at the end of the day that's ultimately why I love filmmaking, telling stories. And that's how I know there's a movie I want to make. If I find myself, after reading a script, wanting to tell people what happens in the story and tell people this great moment, that's indicator number one that I'm likely to make that movie. The other criterion is that I have something really unique to say. If you look at the films I've chosen to do, they're films with a lot of room for the director to put his or her story into it. _Swingers_ is a script Jon Favreau wrote about getting over an ex-girlfriend. Meanwhile the film that I made was about male friendship. In no way is that more obvious than that his script ended with his character getting a call from his ex-girlfriend and his hanging up on her. The film that I made couldn't end like that because the love story I was telling was between Jon and Vince [Vaughn]. The story that was in my head was not the movie that Jon had written. _Even though you knew you wanted to go into film, you didn't go straight to a USC or NYU._ DL: I went to Brown. I decided that I needed to get a real education first so I would have stories to tell. So I would go to Brown and get away from film, which didn't exactly happen because I ended up starting the TV station at Brown. Then I applied to USC film school and got in. 
_What did you think of USC?_ DL: I pretty much immediately hated it. _Why such a strong reaction?_ DL: Brown had an attitude of, you pay a lot of money to come here, you're an adult, and if you don't want to come to class it's your loss. At USC they're treating you more like you're in medical school. I was like, this is ridiculous. I'm at an arts school. Why should there be any requirements? Why should attendance be mandatory? _Did you graduate?_ DL: I did not follow the rules while I was there. I got through it in about a year and a half or two years. I didn't get a degree, but I ended up with a thesis film. I broke huge rules. The deal I ultimately made with the dean of the school was for me to swear never to talk about it. _How much confidence did you have in yourself coming out of USC?_ DL: The crazy thing was, for my entire life I had been preparing to do this one thing [making movies]. Now I actually have to do it. And suddenly all of the insecurities I was able to dodge in high school and college came back with a vengeance. Everybody else in high school was like, "What am I going to do with my life?" and I was so mellow and I'm not a mellow person at all. Now out of film school it suddenly occurred to me, what if I'm not good at this? I had literally prepared to do nothing but this. I had lived a very charmed life. And suddenly with my first outing, _Getting In_ , I wasn't the overnight success everyone was expecting me to be. It also became clear that my father ultimately expected me to get a real job in film, meaning do something where one day you'll ultimately be running a studio. He was a very practical person and he was like, "How are you ever going to get a mortgage on a house?" He really started to worry that I was becoming a dilettante. We would have huge screaming fights. Our relationship really hit a low point. _What were the lessons learned on that first directing effort,_ Getting In _? 
It was a film that no one really took note of._ DL: _Getting In_ was made for a million dollars and was [released] direct to home video. I very quickly realized why a million dollars at that time was the worst number to make a movie for. I sort of knew that going in and had some ideas about how an independent film should be made, but I didn't really have enough confidence to fight for those ideas. But I did go shoot the title sequence using those ideas. Unfortunately I did it at the end. But it really paid off and the title sequence is the best thing in that movie. It involved a cat and a rat. I spent five hundred dollars on it and it was better than anything else in the movie. That gave me the confidence when I read _Swingers_ to say, "That's how I want to do this whole movie." _How did you and Jon Favreau come to work on_ Swingers _together?_ DL: Jon Favreau and I had become friendly. We both had scripts under our arms that we wanted to get made. But neither of us had read each other's scripts. I had no idea that my destiny was under his arm because everybody in L.A. has a script and usually the best thing is to see how long you can get away with not being asked to read it. Finally Jon asked me to take a look at it, and I fell in love with it. And I realized I could apply my theories about independent film to _Swingers_. Jon had been working with my housemate Nicole [LaLoggia] on the script for a while and they had the budget set at about a million and a half. I told Jon, "I don't want to step on anyone's toes, but that's the wrong way to make the movie. We should make it dirt cheap." I showed him the title sequence to _Getting In_. I said, "I did this with animals. Now I want to try it with humans." He ultimately went for it. The problem was I didn't have any money. It was a good pitch but ultimately you have to back that thing up. So the next step was to try to find money. 
I turned to my dad and I was like, "I'm trying to raise some money for this film," and he was like, "Maybe I can help you." This is after all those screaming fights. He got one of his clients to give us $200,000 and my dad did all of the paperwork. Obviously he had grave reservations. Suddenly he had a client's money on the line. So my dad became my studio and called me every day. He really instilled in me a sense of, someone's given you money to make a movie and that's a sacred trust and that's something that's stuck with me to this day. I'm often criticized by friends for flaking because I put a movie first, because those are my father's words resonating in my brain. I needed to do everything in my power to protect that investment of $200,000. It was not a license to go party with it. _It was also a big decision for Jon to make, to allow you to direct this pet project of his._ DL: It was huge, especially because everyone was telling him he couldn't make that movie for less than a million and a half. So here I am with basically no track record saying I'm going to go make it for $200,000. I don't have a great film under my belt. I have a thing I did with a rat and a cat for five hundred dollars and I'm saying, do the math and imagine this with you and Vince. Of all the things that have happened in my career the single best thing to happen to me was for God knows what reason Jon Favreau chose to trust me. _Favreau told me codirecting the film with him was a scenario that was discussed._ DL: That was never an idea I was open to but I also was very sensitive to the fact that this was his baby. When I approached him and said, "I want to direct your movie," I was approaching someone who had spent two years thinking he was going to direct it and just hadn't gotten anywhere. If you talk about making a movie with a real budget anyone can direct it. Making a movie for $200,000 requires a specific set of skills from the director. They better know how to DP.
They better know how to do every single job on that set. And when the money runs out, as it did on _Swingers_ , the director is going to have to do every single one of those jobs that's left because there won't be any crew left. In _Swingers_ every postproduction sound was recorded by me with a DAT machine I bought from The Good Guys, used for twenty-nine days, and then returned with their money-back guarantee. I was insane. But I had to do what I had to. I had taken somebody's money. I had to finish the movie. _You were also the DP of_ Swingers _. How often were you behind the camera?_ DL: Who else would be behind the camera? I don't think you understand how small our crew was. There was almost no crew. The gaffer was this guy known around L.A. as "Rod with a truck." You hire him because he comes with a truck with some lights in it. The majority of the lighting in that movie was done with lightbulbs I bought from Home Depot. _Favreau says he felt like you ran the set a little chaotically at times._ DL: Of course it was chaotic! I can't fucking believe it worked. The one thing I try to communicate to aspiring filmmakers about _Swingers_ is that the film gives the impression that it was easier to make than it was. The film is probably the most storyboarded of anything I've ever done because that was the only way we could get away with shooting in bars that were open to the public. For the actors the experience was pretty insane. There were situations like where Jon walks across the bar to talk to Heather [Graham]. Days in advance I had blocked this out in my mind and I preset some lights in the ceiling, including a spotlight on Heather. Heather Graham is shockingly beautiful and here she is with a gorgeous backlight on her so she's slightly brighter than anybody else at the bar. There was no camera anywhere near her because I was shooting her from across the bar. 
And there's a shot where I walk all around the bar with Jon and by the time we'd get to her some guy would be talking to her and I'd have to call cut and shoo them away. We never got a clean shot. In the finished movie, there's some dude talking to her like, this is a hot chick. So there's stuff from an actor's point of view that doesn't surprise me that Jon's like, this was insane because that would never happen on any other set ever. _But you have said in the past that you actually like an environment of chaos on a set._ DL: Yeah. I usually find that my ideas in advance aren't as good as the ideas I come up with in the moment. So I like to have an environment that's thought provoking, that gives me enough variables that are changing so I don't have to generate every single idea. I really don't like wooden filmmaking, and obviously with _Swingers_ I developed a technique to just hand-hold the camera and suddenly it's not wooden. Wooden filmmaking is the thing I'm most terrified of. _So it's all about avoiding wooden filmmaking even if that means scrapping a plan for a scene on the day of filming? That's something that you're said to do from time to time._ DL: If I come up with a better idea I can no longer shoot the less good idea no matter how much work has gone into the less good idea. _Did the success of_ Swingers _feel like you anticipated it would?_ DL: There's a superficial component to that that I missed out on a little bit because my father was very sick at the time we finished the movie. I had been in L.A. for about five years and wanted to experience being the guy who shows up at the party who everyone's looking at because he just sold a script or a movie. By the time I became that guy I just wanted to be back in New York with my dad. _Was he ever able to enjoy_ Swingers _?_ DL: Probably the best compliment I ever got on a movie was when I showed him and my family the movie after we sold it. 
My father had been battling cancer, and every test was more bad news. He never got a break. When the film ended my dad turned to my mom and said, "Maybe our luck is changing." He got to come to L.A. and come to the premiere. Six months later I got a call that MTV had selected me as their best new filmmaker. This was just a couple weeks before my father died. He was in the hospital and they didn't have MTV in the hospital so my mom got a hotel room across the street and snuck him out of the hospital in a wheelchair in the pouring rain and they went and watched it. _So he didn't have that worry about you being able to pay the mortgage by the time he passed away._ DL: We had a dispute about how to sell the movie. Miramax originally offered us $750,000 and he thought we should just take it. I meanwhile had been talking to this producer, Cary Woods, and he told me he wanted to come on board as executive producer and my dad's like, "You're just being taken advantage of. Why do you need another producer on the movie?" And I disobeyed him and made the deal with Cary Woods and turned down the $750,000. Three days later I got to send a fax to my dad with just one line, "Miramax. 5 mill." So he also got to see that I did have some good business sense. _Tell me about the decision to make_ Go _your follow-up to_ Swingers _._ DL: _Go_ was not the wise decision after _Swingers_. The wisdom that was being forced onto me by my agents was to do a studio film while you can. _Was there anything your agents were pushing on you in particular?_ DL: There was an Alicia Silverstone romantic comedy that was the perfect step up. It ended up being _Heartbreakers_. It was going to be Anjelica Huston and Alicia Silverstone. It was the perfect kind of thing to do, but I happened to fall in love with a little script called _Go_ that was another little indie movie [like _Swingers_ ]. 
The thing that I had in my back pocket going into _Go_ is that I had a pretty wild childhood, the usual outrageous stuff. What was unique was that nobody got hurt. The thing that I was wrestling with when I was going to do _Go_ was, how was it that all these bad things that had happened to me led to me being in such a good place? Like the jobs I didn't get that ultimately led to me doing _Swingers_. Just two years ago I was ice-skating in Central Park at night on the boat pond and I was arrested for unlawful ice activity. It was worth it. _Go_ was a celebration of stuff like that. My experience has been, while you're young you have a get-out-of-jail-free card and you should use it. _With_ Go _, you had some difficulty finding an appropriate ending and that's been a problem in virtually all of your films._ DL: I try to not do it the way it's been done before. With _Go_ , there is no other movie like _Go_ so there's just nobody to turn to. It was very obvious how to end _The Bourne Identity_ right off the bat. It's called the WIJ. It stands for "woman in jeopardy." It's such a common thing for action movies they had a nickname for it at USC. One of my first pitches to Matt Damon when we were talking about the film was, "I am not going to have a WIJ. There's got to be a way to end a spy movie without taking the woman hostage." There's a reason why all these spy movies have WIJs. They work. The problem is it's clichéd now. I was so frustrated with trying to come up with a non-WIJ thing that I actually came to a point where I said to Matt, "Fuck it. Maybe Chris Cooper should just grab her. That literally solves all our problems." And Matt's like, "You can't give up now." He gave me a pep talk. And we did find another way. _In the case of_ Mr. & Mrs. Smith _, you filmed a more conventional ending with villains that you decided not to use._ DL: What happened with _Mr. & Mrs. Smith_ is that I didn't like anything about the villains. 
I kept pushing the villain stuff off saying we'll worry about it later. Let's get the love story working because none of the villain stuff written was really working. And then we got into the situation where we had a stop date with Brad Pitt. He was going to have to leave to do _Ocean's Twelve_. So we had to shoot the ending. It was the only fight I ever really had with the studio because I was like, this ending doesn't work. And they were like, you don't have script approval. You have to shoot this. So we did it in two days and it didn't work. I tried to make it work. It wasn't like I tanked it. _Were you satisfied with the ending that you ended up using?_ DL: Honestly _Mr. & Mrs. Smith_ I consider my most flawed of movies. But given the constraints and the politics of making a movie like that I could not be more proud of it. It's a movie where I'm still working on the director's cut. Basically that was a cut of the movie I did that's in the theaters. I like George Lucas's approach to filmmaking, that a movie is never really done. It's a work in progress. _Unlike_ Mr. & Mrs. Smith _,_ The Bourne Identity _was a project that you developed over a number of years._ DL: _Bourne Identity_ was my movie. I developed that from scratch. Nobody got between me and the script. I predated every single person who worked on that movie. The script came from my head. The writer had never even read the book. It was very clear that it was my baby. _Mr. & Mrs. Smith_ wasn't my baby. It was developed by a producer [Akiva Goldsman] long before I was involved. No producer could exert themselves creatively on _Bourne Identity_ because it was my vision. _Mr. & Mrs. Smith_ was Akiva Goldsman's baby before it was mine. And you can't compare working with Matt Damon to working with Brad Pitt and Angelina Jolie. When I was working with Matt there wasn't a single paparazzi during the many months of filming. Early on in the process he was like, "My last two films have tanked. 
If you screw this up I'm really in trouble. I'm really counting on you now." The unique position that _Bourne Identity_ put me in was making action movies with actors that don't make action movies. I actually put myself in an awkward position because I told Matt we were making an art movie and I told the studio we were making a spy movie. And my goal was to make a good dramatic film that could be sold as a dumb summer movie. I made that bed. Then with _Mr. & Mrs. Smith_ you're working with two of the biggest stars in the world who have much more on the line than I do. If Brad Pitt chooses the wrong movie and becomes Vin Diesel, that's hundreds of millions of dollars of potential revenue gone. He had more at stake than anyone I'd ever worked with. I understand the terror that someone like Brad Pitt feels when he considers doing a film like _Mr. & Mrs. Smith_, which originally had John Woo on as the director. The wrong version of that movie and suddenly Brad Pitt is not cool anymore. _It's also dangerous for you, isn't it? If_ Mr. & Mrs. Smith _had bombed, you probably wouldn't be trusted with another $100 million film so soon._ DL: But the thing that makes me so infuriating to the studios is I don't care. I made more money off _Swingers_ than off any of these movies. I can go make a movie for a couple hundred thousand dollars again. Michael Bay needs the studios. I don't know if he knows how to make a movie for $200,000. But I haven't forgotten. I'm not in it for a career. I don't know how many more movies I'm going to make. I just don't want to repeat myself. _You don't think you're going to be making movies when you're eighty?_ DL: I don't think so. I'm honestly not in it for the career. I like telling the stories. It's like, I just read a script that I like that's a big-budget film. It needs a lot of work. It's a situation where it's already a good script and there are filmmakers who would just go make it. 
If I'm going to go make this thing, it's suddenly going to take on a whole other life. It's going to be something you haven't seen before. And it will be something I've never tried before; therefore, I won't necessarily know how to end it. There are a lot of stories I want to tell. I'm like, I need to try to use film to say something about politics in America. But then I'm like, film might not be the best medium for that. That's why you hear me having thoughts about leaving film just because I worry about the world right now and I'm not sure how much good I'm doing. _You haven't overtly tackled a message movie yet._ DL: Mostly because those movies are usually too preachy for me so I try to be a little bit more subversive. _Bourne Identity_ , that's Iran Contra. Chris Cooper is playing Oliver North. _Mr. & Mrs. Smith_ is ultimately meant to be a celebration of marriage. I'm certainly developing a lot of material that's more serious. It just usually comes out a little too preachy for me. I'm also not sure what it accomplishes. I might help the world more by doing popcorn movies and taking the money that I make and spending it on things the way Angie [Jolie] does, directly helping people. _There's been a lot written about your clashes on_ The Bourne Identity _with producer Frank Marshall. In an_ L.A. Times _article he said, "I'm not saying I directed the movie. But as the producer, it was my job to get the movie finished."_ DL: Clearly that article started with Frank disparaging me to this reporter. I hadn't even thought about Frank Marshall in two years. Success has many parents. If people want to take credit for work that I so clearly did, that's a huge compliment and I'll take it. I think when my father was warning me about Cary Woods and _Swingers_ and being careful of the producers in Hollywood the person he was really warning me about was Frank Marshall. He just didn't know it. 
_Does it bother you that you have a reputation for being "difficult"?_ DL: I take it the way Akiva Goldsman put it to me: "How can anyone have any issue with your methods when you're four for four? Who's to say there's a quote-unquote right way to make a movie?" Frank stole my franchise that I lovingly created myself. Whatever he has to do to sleep at night. I'm not that guy who's making the cookie-cutter movies. I don't mind a little reputation. I'd rather fail trying to push myself. I learned that early on with _Getting In_. Bowing down to the producers and playing it safe I ended up with a movie I wasn't proud of. I'd rather have a movie I'm proud of and burn some bridges than have everyone love me and me not love the movie. _You haven't really failed since your first film. Are you prepared to fail again?_ DL: You have to be. That's the thing I worry about the most. It's like when you're playing poker and you've won a lot, now suddenly you have this stash you're trying to protect and that's the surest way to fail because you stop being bold. The moment you play it safe you're dead.

THE DIRECTOR'S TAKE
DOUG LIMAN

_What is the first film you ever saw?_
_The Poseidon Adventure_ —I remember running out of the theater in the middle, I was so terrified.
_What is your favorite film of all time?_
Too many to name. My first favorite film was _The Four Feathers_.
_What's your favorite line in a film?_
The exchange in _What's Eating Gilbert Grape_ when Juliette Lewis catches Johnny Depp having an affair with the townswoman and she asks him if he cared about the woman and Depp says, "Yes," and she responds, "Good."
_What movie made you realize that film was an art?_
_Diva_
_What movie do you consider your guilty pleasure?_
Anything with Chevy Chase.
_Who is your favorite movie character of all time?_
Anything Katharine Hepburn played, especially Susan in _Bringing Up Baby_ and Tracy Samantha Lord in _The Philadelphia Story_.
_What's your favorite movie snack food?_
Nachos with cheese.
_Who is your favorite director of all time?_
Like my favorite movies, too many to mention. I would include Spielberg and Scorsese and Coppola and Lasse Hallström.
_Who is the most impressive filmmaker working today?_
Tom Tykwer
_What quality do the best directors share?_
Original voices
_Who is your favorite actor or actress of all time?_
Katharine Hepburn
_Who is your favorite actor or actress of today?_
I have loved all of the actors I have worked with.
_Who would you cast as yourself in a film about your life?_
I did cast Adam Brody in _The O.C._
_If you could remake one movie, what would it be?_
_Star Wars: Episode I_
_What is your best quality as a director?_
I love telling stories.
_What is your greatest weakness as a director?_
I'm a lousy writer.
_Finish this sentence: I'll never direct a movie . . ._
that is preachy.
_Finish this sentence: The perfect movie is . . ._
entertaining and substantive.
_What will they be saying about your work fifty years from now?_
"He never repeated himself."
_What piece of advice do you have for aspiring filmmakers?_
Succeeding is much harder than a book like this could ever communicate, but the thick skin you develop now will serve you later because ironically it never really gets easier.

**MCG**

"I think people create rooms where they have fake feedback sessions with their own staff and no one is truly empowered to say, 'This is shit.' "

**SELECTED CREDITS**
_Charlie's Angels_ (2000)-director
_Charlie's Angels: Full Throttle_ (2003)-director

The man born Joseph Nichol knows what is said about him. He is fully aware of his reputation. And that's precisely why McG, the director of pop entertainment like the _Charlie's Angels_ films, is dying to show off what else he can do. Of course, Hollywood isn't about to make him apologize for directing two blockbusters right off the bat of his career. 
In fact, with his debut film especially, he confounded the conventional wisdom by proving that a film as "troubled" as _Charlie's Angels_ was said to be could still stand apart from the cookie-cutter blockbuster crowd. In an impressive body of music-video work and two features, McG has proven he can create images that pop off the screen and connect with an audience. Whether the other facets of this Midwest-born man who says "There's a great deal of darkness in my spirit" emerge in future work, only time will tell. _You were born Joseph Nichol, but I understand you've pretty much always gone by McG? Why?_ McG: Yeah. My uncle was Joe, and my grandpa was Joe so they called me McG, short for McGinty, which is my mother's maiden name. _And that started early on?_ McG: Day one. My name has never ever been anything but McG, and it serves as sort of an unfortunate name at this point because everyone thinks it's sort of a Hollywood nickname and thinks, "What an asshole." But it'd be fraudulent for me to call myself Joe or Joseph. No one has ever addressed me as such in the history of my life. _You were born in Kalamazoo, Michigan. How long did you stay there?_ McG: Just until I was about two. My dad worked in pharmaceuticals and he took us all out west to Newport Beach, California. There were five of us in a Volkswagen Bug, with my Big Wheels strapped on the top. _You're so identified with California today. Is there any of the Midwest left in McG?_ McG: Oh, yeah. There's no doubt the family ethic in the household was very much a function of the Midwest, or even more so of the East, because my family's largely from Pennsylvania. _When you say "the family ethic," are you talking about a work ethic?_ McG: Yeah, a work ethic and sort of the patriarchal vibes of the screaming father. I sort of always respected my dad's no-excuses, get-up-and-work ethic. He was always a really hard worker, even at the expense of his own health sometimes. 
And hopefully I can have the wisdom to balance that out a little bit, but I always looked up to him for that and respected that he was an entrepreneurial guy that never took no for an answer and just got up and got it done. _Was music a big part of your family_ _'s life always?_ McG: My house was always filled with music. That's sort of my earliest memory—my dad listening to jazz, my mom listening to Motown, my sister listening to disco, and my brother blasting Led Zeppelin. _Did you gravitate in one musical direction?_ McG: I really enjoyed them all. I became a big fan of melody, independent of the genre that it existed in. I'd be my sister's dance partner to the _Thank God It's Friday_ soundtrack. My brother and I would go see _The Song Remains the Same_ with all the stoners at the local art-house theater. It was interesting. _I understand theatre was also a big part of your life as a kid?_ McG: Yeah. I was an actor as a kid, and I did Ronco commercials like the Mr. Microphone commercial. _You're kidding!_ McG: No. And I was part of a theatre group down in Orange County. I really enjoyed acting. There's sort of a famous line in my family, when my parents were informing my grandparents like, "Oh yeah, McG's taking acting lessons," and without a beat my grandpa said, "What for?" So you know there wasn't a great deal of sympathy for the arts in my household. But my dad went to Japan and he came back with an SLR—my first camera—just a cheap Japanese camera with three different lenses. And then I started taking pictures of things I found interesting. I would go rent darkroom space at this place in Costa Mesa and develop my own pictures. I started shooting in different film stocks and experimenting and figuring out how to manipulate the print time in the bath for color saturation and control of the graininess and blacks and whites and getting rid of the grays. I started to really develop a visual imprint in those earliest days. 
_This is when you were about thirteen?_ McG: Yeah, twelve or thirteen. Basically I'd do two things: I'd ride the bus to go to the darkroom, and I'd ride the bus to go to a place called Music Market that was ninety minutes away by bus. I'd have five or six dollars and records were $5.49. I'd buy one record that I'd be excited about and wouldn't be able to go to the fast-food spot to get a cheeseburger and have to ride home starving. (laughs) But I had to make that sacrifice because I knew I could eat something when I got home, but the record would last forever. _Why do you think pop culture, whether it be music or film, so resonated with you early on?_ McG: I always remember responding very emotionally to film. I had a lot of lonely time on my hands because I wasn't really the best-looking kid in town and I sort of pined after girls. I had to sort of immerse myself in the arts because girls weren't particularly interested in me. _What were the first films that made an impact on you?_ McG: I'm going to have to go with _The Sound of Music_. And watching _The Graduate_ , when I'm probably nine or ten years old, had a very profound impact on me. There's no doubt that there're certain songs and arrangements of music and certain pieces of film that release a chemical reaction in my brain. This sounds a little goofy, but I really believe that. It's such a euphoric experience that I sort of want to chase that experience as often as possible. And it's also that set of chemical circumstances in my brain that makes me fucked up. So, you know, you take the good with the bad. _When you say that you're "fucked up," what are you talking about?_ McG: I'm pretty convinced that there's a chemical reality to who I am, regarding my brain, that makes me kind of a strange guy. And there's the behavioral component of growing up in a house where my dad would lose his temper a lot and my mother has a little bit of a hypochondriac streak. 
It doesn't take a genius to see the writing on the wall. I'm fundamentally a pretty neurotic guy, but I've come to terms with that. In high school I read Freud and I got really into that. I was convinced I wanted to go to school and become a psychiatrist and go on to become a psychoanalyst. [Later] I came to terms with the fact that my interest in psychiatry and psychology was basically because I was so fucked up. And once I sort of came to terms with that, I was free to immerse myself back in the arts, which is really my passion and my love. _Who was the first filmmaker you studied?_ McG: It's clearly Alfred Hitchcock. I sort of did the same thing with Hitchcock that I did with Freud, and went out and diligently studied every picture he ever made. All this stuff I was picking up from what he was doing was making me very excited, but I was also very excited about things that I thought were visually stunning, like _Sound of Music_ and _Wizard of Oz_ and _Gone With the Wind_ and all the Busby Berkeley stuff. I liked films that were larger than life and spoke to my Walter Mitty escapist lifestyle. That's why my earliest work was a reaction to what was going on in Seattle at the time. I loved the Seattle movement like the next guy, and immersed myself in Nirvana and Soundgarden, but I never bought into the stare-at-your-shoes-and-be-bummed-out thing. I mean, I was legitimately bummed out in my life. But I just liked big performers and big explosive movies that were exciting, like _Raiders of the Lost Ark_ and _Star Wars._ Even though I would identify quietly with _The Graduate_ and _The Conversation_ , my whole thing was a reaction to Seattle. I'm going to bring out this Busby Berkeley component of color and excitement. To me that was sort of the start of the McG video. _Clearly one of the most important personal and professional relationships of your life has been with Sugar Ray_ _'s Mark McGrath._ McG: We've been best friends since we were about eight years old. 
We went to elementary school together, and junior high and high school. And we're best friends today. We were into breakdancing together. We were very excitable young kids that thought everything was cool, except for what we were about. _You sang back then, didn't you?_ McG: Yeah. We had a band called the Q-Tips in high school, and Mark was sort of my sidekick. He was too shy to sing, so I would, but all the girls would swoon and stare at him. I'd just sort of do everything I could to command attention. I wasn't all that successful. At the time I was very interested in what Rick Rubin was able to put together from his NYU dorm room. Here's this dude who realized hip-hop is real, there's a culture out there, and he taps into it and breaks LL Cool J, gets behind the Beastie Boys, Run-DMC . . . And that was my inspiration to take Mark and say, "You're a very charismatic, good-looking guy. You're a great performer; let's shoot a video." I didn't really know what it even meant to be a director, but I knew what I wanted the end result to look and feel like. _And you were essentially managing what became the group, Sugar Ray, with Mark._ McG: I produced their first record, and I was going to produce their second record, but they sort of marched in and said, "We don't want you." It was a very emotional, sad sort of parting of ways. But I kept doing all the videos for Cypress Hill and then Korn, and before I knew it I was pushing film through a camera every single week, which is a great way to cut your teeth and get your style. You can go to film school, but there's really no substitute for putting film through a camera and working with people. You just go out there and get it done and have your successes and have your failures. It's just trial by fire. _How did you even know how to approach directing that first music video?_ McG: Well, I didn't. We shot the first video for three thousand bucks on 35-millimeter, which is unheard of. 
I didn't know that to shoot on a location, you need permits and insurance. We didn't know anything. We just stole the locations and ran from the cops. I would put the film into the lab knowing I didn't have any money to get the film out of the lab. It was as do-it-yourself as you could possibly imagine! _Are you sensitive to criticism of music-video-turned-film directors like yourself?_ McG: I have a very strong opinion about that. I always quote Mark Romanek. He's a music video superstar who I really admire, who's truly this auteur and artist. He got some vanguard lifetime achievement award and said, "Let's face it: Most music videos are shit! But the three percent that are interesting is some of the most compelling filmmaking that's out there." And that's true! Because you look at what Michel Gondry was doing, you look at what Spike Jonze was doing, you look at what [David] Fincher was doing, and that's truly filmmaking. It's beautiful and it's unapologetic and it's storytelling and it's affecting kids' lives and adults' lives, and there's no shame in that. Under no circumstances do I think it should be a liability to say that you came from music videos, because you learn how to be responsible and get your work done in a day and get all the shots to tell your story. You learn to deal with difficult personalities and communicate with talent. It's arming you in an excellent way to step into _Charlie's Angels_ and deal with Bill Murray and deal with a $100 million budget and a studio that's not interested in fucking around. _Do you feel that directors who come out of music videos are treated as second-class citizens?_ McG: Less than it used to be, but yes, there's still some of that. I really look forward to that being brought to a rapid end. But that's the privilege of a fan. Listen, if you look online, it's not cool to say, "I like McG." When I was doing _Superman_ , people would fucking kill me day and night. That's their privilege. 
I'd only made two popcorn easy-going _Charlie's Angels_ movies. But under no circumstances am I going to apologize for taking a movie that everybody thought was going to be a disaster and making a pretty entertaining film and, most importantly, films that are tonally their own. You can't look at the _Charlie's Angels_ pictures and go, "Oh, that's just like this or that's just like that." It's a little like Bond. It's a little like _Austin Powers_ , but in the end it's its own thing, which I think is an interesting achievement. But obviously as a filmmaker I want to grow in my ability to tell stories and become more subtle and have more depth and grow into my nineteenth-century-literature/Merchant-Ivory phase. _I get the sense from you that you're just itching to show the public that you have more in you than_ Charlie's Angels _._ McG: That's what matters to me most, and it's real difficult! It's so easy to put people in a box. And like I say, that's the privilege of the public to just take what they see and draw conclusions. I'm not trying to deny anybody that, but I definitely want to get into a different filmmaking phase. But under no circumstances do I want to apologize for films that reacted with the public. I loved _E.T._ I like _Star Wars_. There was a time when the event movies were the best movies. Sometimes the Beatles are the biggest band in the world because they're arguably the best band in the world. _How do you go from music videos to a huge summer action film as your first feature? How did you land_ Charlie's Angels _?_ McG: I was doing the music video route, which was fun. You get a call on Monday. Everclear needs a video quickly. You come up with an idea. Four days later you're shooting, and five days after that it's airing. That kind of turnaround was tremendously satisfying. And to this day I sort of miss that. I had the best experience with a Gap commercial with Dwight Yoakam. 
We went in the studio and redid the song "Crazy Little Thing Called Love" because I like Freddie Mercury and Queen. And he did this little country version, and people really reacted to that campaign. It helped turn Gap around. Then I heard that there was a _Charlie's Angels_ movie out there. I was frustrated with what I felt like were bad movies that came from old television shows. I thought, "Why can't somebody just get this right and make an entertaining movie that comes from an old TV show?" So I said, "Look, I'd just like to talk to Drew Barrymore," because I knew she was behind it and she was kind of a rock 'n' roller and I figured we would have a similar aesthetic. But she cancelled on me like seven times. Finally she acquiesced and agreed to meet me. I remember meeting her at this Vietnamese restaurant in L.A. I said, "I want the movie to be funny and self-aware. I want it to have all the action that the boys' films have and an elegant subversiveness that only the most discriminating viewer will even be aware of." She started getting more and more excited until by the end of the meeting, we were both bouncing around the couch and just jumping out of our skin with enthusiasm. And she committed completely to us doing the picture together, which was insane for her because I'd never made a movie and Sony's counting on this being a big franchise. They want the movie in capable hands, not some shithead like myself. I one-hundred-percent owe it to Drew Barrymore, digging her heels in and giving me a shot. _Was the script close to being locked as you began the shoot?_ McG: Oh, my God. No! We never had a script, period! We had no script. _How could that not have scared you to death?_ McG: Well, that created a great deal of stress, but I really knew where I was going and that it was going to all come together. So I stayed the course. Sometimes those desperate times create the best stuff. 
Look, I'd prefer to have a locked document and really get into the nuances and subtleties of what you're trying to achieve. I look forward to the day I can make that kind of picture. _It does boggle the mind though to think that a studio can allow a film of that size to go into production without a locked script._ McG: It's miserable. I wouldn't advocate it for anybody. It's no way to do your thing, but when Cameron [Diaz] has this window and Drew has that window and Sony says, "This bus has left the station, we need that movie for November of 2000," that cart drives the horse, so you just keep going. _You opened the film in a way that really sets the tone, with a long, seemingly unbroken shot on the plane. Is that a nod to Scorsese or De Palma or anything in particular?_ McG: Of course. Scorsese and De Palma all the way! And _I Am Cuba_. _What statement, if any, were you trying to make with that opening?_ McG: I was excited about the prospect of an impossible shot and taking advantage of computer-generated components. I wanted to clearly establish the tone of the thing. _T.J. Hooker_ comes on and you say, "Ugh, another movie from an old TV show. What are you going to do, just walk out?" And then they jump out the window. I'm trying to say to the audience, "I get it! I know! You think this sucks. Give me a chance." _You're probably tired of talking about all the reported on-set conflicts that occurred on_ Charlie's Angels _._ McG: There were a lot of heated conversations, though. _So when, as was reported, people like Bill Murray and Lucy Liu are getting into it on the set, where does that leave you, the director?_ McG: It was strange for me, because I had the highest title on the set but in reality there were probably seven or eight people who had more juice than I did. So there's the title of director, and there's the reality of who's running the show. Fortunately there was a lot of trust between the actors and myself, and that went for Murray as well. 
I had a really good thing with Murray. He's really passionate about being the best he can be every time he steps out in front of the camera, and that's going to lead to some heated conversations, artists having different ideas about how different character arcs should play out. And I always encouraged that. And that's not to say people didn't have words with each other. Of course they did. To me that's representative of passion. That's fine so long as everyone fights fair and comes to a reasonable conclusion and you get out there and you shoot some film and you make your day. The bigger nightmare is people just sleep-walking through it and never challenging you and each other and not giving a fuck. _So from your perspective at least this passion is coming out of wanting to improve the film._ McG: I'm telling you straight up that's the truth. It was all coming out of passion. _Were there test screenings for_ Charlie's Angels _?_ McG: Yeah, we test-screened it once in Peoria, Arizona, and it went average. I remember a look that Drew and I gave each other across the hallway of the multiplex afterwards that was just a mix of so much desire for people wanting to respond to our film and seeing that the response was average. We put our nose to the grindstone, made a few changes, cut a few things, added a few things, and then three or four weeks later we tested the film again and it tested through the roof. _What did you adjust?_ McG: Story clarity, and we changed the cut-down a bit. People were able to lock down more and enjoy the scenes and emote with the characters. The difference was huge. _Do you think you'll always test your films?_ McG: Well, it's a double-edged sword. My fundamental inclination is to test, because you get caught up in your own little bubble of a universe and you want to believe what you want to believe. You have to have a lot of faith in the virginal eyes of the viewer who has no agenda. 
When you make films that are your own personal art offerings, I wouldn't test it, because it's your own and you don't care. But if you make films with north of $100 million of someone else's money and the object is for it to be received well by a great many people, you're doing yourself a favor by testing a picture. But you have to understand how to read the results. You can't just whore yourself out to every little whim and comment. If you think something is funny and you test it and three times in a row no one laughs, you need to swallow your medicine and realize that's not funny. To deny that is just being ridiculous. I think that's how a lot of errors in filmmaking are made—when directors insulate themselves in environments where they aren't subject to feedback. _That's the exact criticism often made about the last generation of directors—that they often lose touch with the real world and what made their work relevant in the first place._ McG: I'd like you and I to do a book on this subject. I think people create rooms where they have fake feedback sessions with their own staff and no one is truly empowered to say, "This is shit." You look at how many directors out there who [once] made great films, and you just scratch your head wondering why can they not make a film? What is going on? The last five films have been crap! First of all, an ability as an artist to be relevant is fleeting. Look at your favorite songwriters. Far more often than not they had a period of greatness and then fell the fuck off. And it's the same with filmmakers. They have a period where you love everything they're doing, and then they fall off. If you go to 7-Eleven for your coffee and you go to the multiplex to watch your movies, you have certain sensibilities that make you in touch with the reality of the human condition. 
If you start living behind gates and only seeing special screenings at your house and have your coffee custom made by your own barista, you're not as relevant as you want to believe you are. _Do you worry about that for yourself?_ McG: No, because I haven't made it yet. Hopefully I'll be subject to making shitty films, and hopefully you and I will have enough of a relationship where you'll call me and say, "What the hell are you doing?" And maybe I'll get with it. I really believe that. _After_ Charlie's Angels _, did a lot of filmmakers you respect come to you who enjoyed the film?_ McG: I was put on the creative rights council of the DGA [Directors Guild of America], and all my heroes are on that, like Peter Weir. Steven Soderbergh chairs it. I'm able to have very intimate one-on-ones with these people. They all seemed relatively amused and pleased with _Charlie's Angels_ , but they were very serious about what my choices in the future were going to be and what my plans were to show growth. And I wanted to honor that challenge and become a more accomplished storyteller. _Is it important to you to earn the respect of people you respect?_ McG: It's the single most important thing. I'm still insecure enough that I really want to be respected by David Fincher. I want a nod of appreciation by Steven Soderbergh, and those nods aren't given away. So it's my goal to earn them. To some degree I'm proud of what I've put together, but at the same time I feel a very distinct chip on my shoulder when people think I'm a lightweight, untalented, "Oh let's blow up the building and shoot it with twenty cameras" guy. From the bottom of my heart that's truthfully not who I am. I try to identify what's right for any given picture, and I tried to make the right moves for _Charlie's Angels_. And I think we definitely got it right in the first one and more or less got it right on the second one. _Do you have any regrets about making the sequel your second film? 
It's delayed, by a few years at least, an opportunity for you to show off what other kind of filmmaking you can do._ McG: That's such a tough question. I think if you have the good fortune of creating a franchise, you have to see it through. Not to mention, these people are my friends. These are people I'm emotionally invested in. None of us were planning on making a sequel, but we all sort of agreed it was all or nothing, Cameron, Lucy, Drew, and me. We were all going to do it, or we all weren't going to do it. And when it became clear that the public wanted another one and my friends at Sony wanted to do it, it was the natural choice. I don't regret it. There were some other films that I was circling that could have allowed me to flex different filmmaking moves and maybe not put me in that position where it's so fun to talk shit about me on the Internet. _So what did you learn not to do?_ McG: Never be redundant. Always show growth and keep your toes tapping. I wanted to do more with that picture than I was ultimately able to do. _You have a reputation for being extremely high-energy and positive on the set. How much of that is the genuine you and how much of it is a front for motivating the crew?_ McG: The _Charlie's Angels_ projects were particularly difficult because we never had the benefit of a locked script, so I needed to be that cheerleader and can-do guy. But ironically there's a great deal of darkness in my spirit. I would like to not just be that cheerleader. I would like to focus on artistry and spend a little more time behind the monitor and not in front of the camera cheerleading everybody on in the interest of making our day and getting our work done. _What films have resonated with you in recent years?_ McG: I appreciate what Charlie Kaufman has been up to with Spike [Jonze] and Michel Gondry. It makes me hungry and want to get out there and show the world what I've got. 
I really believe given the right piece of material, I can draw those performances out of the actors. I want that chance. I want that great piece of material where you know going in the script is phenomenal. This is going to be our _Million Dollar Baby_. This is going to be _Shakespeare in Love_ , our _American Beauty_. There's nothing about who I am to the layman to suggest I'm worthy or capable of doing any of those things, but that puts a perverse little smile on my face. I look forward to the challenge. I don't want to be doing karate and explosions the rest of my life.

THE DIRECTOR'S TAKE

McG

_What is the first film you ever saw?_ _Chitty Chitty Bang Bang_
_What is your favorite film of all time?_ _The Graduate_
_What's your favorite line in a film?_ "She's totally serious, asswipe."— _Sixteen Candles_
_What movie made you realize that film was an art?_ _Strangers on a Train_
_What movie do you consider your guilty pleasure?_ _Waiting for Guffman_
_Who is your favorite movie character of all time?_ Boo Radley
_What's your favorite movie snack food?_ Hot Tamales
_Who is your favorite director of all time?_ Alfred Hitchcock and Spike Jonze
_Who is the most impressive filmmaker working today?_ Michel Gondry
_What quality do the best directors share?_ Originality
_Who is your favorite actor or actress of all time?_ Cary Grant
_Who is your favorite actor or actress of today?_ Sam Rockwell, Charlotte Rampling
_Who would you cast as yourself in a film about your life?_ Philip Seymour Hoffman
_If you could remake one movie, what would it be?_ _Midnight Cowboy_
_What is your best quality as a director?_ Endurance
_What is your greatest weakness as a director?_ Tunnel vision
_Finish this sentence: I'll never direct a movie about . . ._ the Vichy government of France in World War II.
_Finish this sentence: The perfect movie is . . ._ _Risky Business_.
_What will they be saying about your work fifty years from now?_ "He was hungry."
_What piece of advice do you have for aspiring filmmakers?_ Never give up.
_What are you as passionate about as moviemaking?_ Aviation

**TREY PARKER & MATT STONE**

"The honest truth is we just fucking hate actors." —Trey Parker

**SELECTED CREDITS**

_Cannibal! The Musical_ (1996) - Parker, writer/director
_Orgazmo_ (1997) - Parker, writer/director
_South Park: Bigger, Longer & Uncut_ (1999) - Parker: cowriter/director; Stone: cowriter
_Team America: World Police_ (2004) - Parker: cowriter/director; Stone: cowriter

Trey Parker and Matt Stone hate watching movies. In fact, I'm not entirely sure they even like movies that much to begin with. That's as good a place as any to start when listing what sets them apart from virtually all other filmmakers working today. They began as two Colorado film students who shared a love for Monty Python and making each other laugh. Their collaborations thus far share the latter criterion: a musical about a cannibal, the story of a Mormon acting in pornography, a cartoon featuring Saddam Hussein and Satan in a homosexual affair, and a satire of Jerry Bruckheimer spectacles made with puppets. Meanwhile, they continue to produce one of the most popular comedy series of all time, _South Park_ , and operate in film with near autonomy, insisting always on getting final cut for their projects. Those who embrace the status quo are just bracing for what could possibly come next.

_Trey, can you tell me a little about Conifer, Colorado, where you grew up?_ TP: It was a typical small town. I definitely had feelings sometimes like I was in the middle of nowhere and wanted some excitement in my life, like probably every kid in the world does. But it was a great place to grow up. There was a lot there that bred creativity, in that there wasn't much to do but try to entertain yourself and your friends. And I think that really helped, because when I got to be about twelve years old, I started making movies all the time. 
Every weekend, we'd get together and make some dumb little ninja movie or something. We were all in Tae Kwon Do, and we all thought we were badasses, so we loved making martial-arts movies. I would be Karate Man Dave, and Karate Man Dave just thinks he's fucking awesome and he actually kind of sucks. It's really weird because looking back on it now, I'm twelve years old and I would, like, paint a total handlebar mustache on. _Matt, you grew up in Littleton. Were you also making movies when you were a kid?_ MS: No, not at all. I think it's weird that I ended up here, because I didn't do any of that stuff. I was into music. I played drums. But I was much more like a math kid. That's what I was good at—math and science—and that's what I did. I hadn't even thought about film until I took this summer class. It was just to get credit, because it sounded easy. I liked it, so I did the 16-millimeter class and that's where I met Trey. I never really even thought about films or filmmaking until I was in college. _What were the first films you were into as kids?_ TP: For me it was all comedy. It was Zucker brothers' stuff like _Airplane_ and _Naked Gun_ , and then anything that Monty Python did, of course. That really was the huge obsession. MS: I'd say for me Monty Python were the movies I remembered. TP: It's just sort of who we are as people, too. We just deal with everything with comedy, basically. MS: I remember one of the first movies that made me think about making movies, though, was _Raising Arizona_. It was kind of transparent that they were guys kind of fucking around behind the camera. That was the first time I thought that maybe I could do that. Before that I don't even think I thought that people made movies. They just somehow got to my TV or movie theater. _Trey, how serious were you about film when you were making these movies as a kid?_ TP: It was serious. Even when I was eight, I knew I wanted to be Steven Spielberg. 
My mom just dug up this thing from the sixth grade, and I guess you were supposed to draw what you wanted to be, and it's me sitting in a director's chair. If I had only known how hard and shitty it was, I would have changed my mind. _Were you as focused on a career, Matt? Was it all about math?_ MS: I was like a little prodigy at math. I drank that prodigy element away some time ago. Other than math, I wanted to be Stewart Copeland of The Police. I didn't really have big ambitions. In high school I remember thinking, "Fuck society, I'm going to go ride around Latin America on a motorcycle," and of course I didn't, because I'm too wimpy, and I just went to college. _How did you two start to collaborate in college?_ TP: We met in a 16-millimeter class. We just got paired up because everyone else in our class wanted to make black-and-white sexual-exploration movies and shit like that, and we wanted to make fucking _Monty Python and the Holy Grail_. _What was the film program like at the school?_ MS: It wasn't much of a program. It was just like thirty people in the production part. And then there was a film-theory part. There was a definite epic struggle between the film-theory side and the film-production side. The film-theory people were the ones sitting around talking about . . . you know, I don't know what the hell they talk about. And the production side was people like us who really didn't care about deconstructing film as much as just making it. It was a nothing department compared to USC or NYU. But on the other hand, there was very little structure and everyone had to chip in, so Trey would be the boom guy on my film and then I would be the costume guy on his. Everyone kind of did everything and, in doing that, you got to do a little of everything. It was really just "Let's make movies and then we'll fucking figure it out when we get there." TP: It was nice, too, being in Colorado. 
The first big thing that we did that really jump-started the whole career was _Cannibal! The Musical_ , and because it was in Colorado we were able to go around to people and go, "Hey, we're making a movie, you wanna help?" And they were like, "Sure! Have my house! Have my restaurant! You're making a movie?" People were so excited that a movie was actually shooting in Colorado that you could get whatever you wanted. _Do you think you would have floundered at a more competitive film school like NYU or USC?_ TP: Sure. Because when there's too much competition, we just get pissed off and leave. MS: Yeah. I would have gotten out. TP: We're the first guys to bail on anything difficult. Cannibal! _was originally made under the name of its subject, Alferd Packer, who was convicted of cannibalism in the 1800s. His story was a well-known one where you were._ TP: It's a pretty great story. We even talked about the idea to remake it now that we're all, like, thirty-five—the age those guys were supposed to be when all that shit happened. We could just go and spend $25 million and remake that. We love stupid ideas and we thought this would be a really stupid one. _Was it fun to be shooting this bizarre musical while still in school?_ MS: No. TP: No. It's like making anything. You hate it. And anyone I've ever talked to that's worth their salt actually kind of hates the process. Like, I think Michael Bay probably goes out and has a really great time making movies, but that's because he doesn't give a fucking crap. It's so funny, because as stupid as our shit is, we think about it way too hard and we just agonize over, "Is it the right joke? Is it the right point?" And that was sort of the beginning of that. Looking back it's like, "Yeah, that was a pretty great time wasn't it?" But then when we really think about specific days, it's like, "Man, we were fucking miserable making that movie out in the snow and you've got, like, a broken camera and six hours to shoot." 
_Looking back it was a pretty risky move to make a film like that for more than $100,000. Was it confidence or simply not knowing any better what a gamble you were taking?_ TP: Yeah, it's the thing that's made us all the money because it's the same ignorance that said, "Hey, let's go do an animated show." We don't know how to fucking do an animated show, and we didn't go to anyone and say, "Hey, how do you do an animated show?" If people knew how we do _South Park_ , their fucking heads would explode. We come up with the idea on Thursday and it's on the air Wednesday. There's not one thing of animation in before that. And if we had gone and said, "How do you make animation?" we probably would never have been able to do it. But just not knowing, we approached it in a totally different way and sort of invented a new way of doing animation. It was the same thing with _Packer_. It was just stupidity and naiveté, but it's the thing that has really helped us through our careers. _Was there a plan for what you were going to do with_ Packer _?_ MS: Our plan was to come out to L.A., sell it for a million dollars, pocket some of the money, come back to Colorado, and make another movie. TP: We immediately learned how stupid we were to think, "Well, you just make a movie, and you make money." We came out here and of course it was just the kind of thing that turned into, "Oh, well there's this company that's interested; let's hang out in L.A. another week and see what happens." And so it turned into a month and years. MS: I still want to go back to Colorado. TP: Our bags are still packed. _Trey, before_ South Park _, you'd done a little bit of that kind of animation in school._ TP: Yeah, it was an animation history class, but instead of having to write an essay I said, "Can I make an animated film?" and they said, "Yeah." 
Not knowing anything about animation, I decided to try to do a sort of Terry Gilliam kind of thing and make everything out of construction paper because the way I drew was super-crappy and child-like. It was absolute shit and total crap, and it got me a Student Academy Award. They flew me out to L.A., put me up in a hotel, and there's the other four finalists and their stuff is, like, beautiful pencil and watercolor animation. And my thing comes up and everyone's like, "What the fuck?" I learned there that crappy shit is totally funny. And I've made a whole career out of it. South Park _fans know that the show essentially happened because of the short you made for producer Brian Graden. How did you guys hook up with him?_ TP: We had finished _Packer_ , and we came out and had a little screening and Brian came to that and afterwards he came up to us and was like, "Whatever you guys want to do, let's do it." Of course, at the time we were like, "Holy shit, that's awesome!" We didn't know that it was like, "Whatever you guys want to do, we'll do it and then you'll get five hundred bucks to make it." _And that became_ The Spirit of Christmas _video you created for him as a holiday greeting card for his friends?_ TP: Yeah. He had seen the one we had done in college and said, "Make me another one of those, and I'll pay for the film plus you guys can each have, like, four hundred bucks" and I was like, "Wow!" So we went and worked our asses off. _When did you realize that_ The Spirit of Christmas _was becoming this mini-phenomenon—that everyone in Hollywood was talking about it?_ TP: When people were coming up to us in bars two months afterwards, saying, "You've got to see this thing. I got this off the Internet" or "I got this thing from my friend and you guys would love it." And then going and showing our own stuff to us. It would happen again and again and we were like, "This thing is fucking everywhere." MS: Brian only sent out about forty copies of it. 
He was going to send out a lot more, but when he saw it, he thought it was too offensive. So he only sent it to his select group of people who he thought would get it. It was a very fortunate time for us because the Internet had just kind of started and because the animation is just so shitty, it was one of the first things that you could watch over the Internet. You could download it and it was super-shitty, but it still was what it was. _How did_ Orgazmo _happen?_ TP: _Orgazmo_ was actually the movie we came up with while we were shooting _Cannibal!_ We were like, "After we go sell _Cannibal!_ for $10 million, we'll come back to Colorado and make this new movie," which was a musical called _Orgazmo_. The only regret I have is that we got talked out of making it a musical. Paramount saw _Cannibal!_ and said, "What do you guys want to do next?" and we were like, "OK, so there's this Mormon and he wants to be in porno" and they were like, "OK, no thanks, see you around." We found [producer] Kaz Kuzui, and her little company found a Japanese guy who was like, "Here, take a million dollars." It really was the perfect next step from _Cannibal!_ _Was it any more of an enjoyable experience than_ Cannibal! _was?_ TP: Once again, it was miserable. It's always miserable. I have never had a good time on a set. I've never had a good time making an episode of _South Park_ , either. I always hate it, and I always say, "I'll never do it again." _When_ South Park _came on the air, it quickly became huge. You guys were everywhere. What was the point where the frenzy hit its peak to you?_ MS: Actually it was the same week that the Denver Broncos won their first Super Bowl, which was, for us, the greatest part of that week. We were at that game, and I think it was the first Broncos game I ever got to go to. That was the week we were on the cover of _Rolling Stone_ , _Newsweek_ , and _Spin_. It was crazy. 
TP: I think for us it was like, "This is the machine that churns you up and spits you out, and so before it spits us out, let's do as much shit as we can. Let's do _Baseketball_. Let's do a _South Park_ album. We just said yes to everything because we were like, 'fuck it!' " MS: Because it's all going to be over in a year. TP: It's all going to be over in a year! You might as well just cash in. And everything sounded fun, like "Yeah, we'll do an album. Sure, that sounds fun!" It was a pretty confusing time. _It also seems like about that time was when no one in the world would tell Matt and Trey that something wasn't funny or that something sucked. All of a sudden everything you say is funny. You've said that one of the valuable lessons you learned from working with David Zucker on_ Baseketball _was choosing which battles to fight?_ TP: Yeah. Just with studios and how they were like, "Well, that joke worked. Do that again!" And we're like, "Well, that won't work again. That's why it was sweet—because it was one time." And they're like, "No, it's testing well. Do it again." MS: On the _South Park_ movie, when we were signing agreements we said, "No preview screenings, none of those cards, and Trey has final cut." That's the first day of negotiation. Unless you give us all this, we won't talk. TP: And they were like, "Guys, this is your first studio movie. Nobody gets final cut." MS: And we were like, "OK, see ya." TP: And one of the big fights was that they wanted the _South Park_ movie to be PG-13. And we wanted it in there that it had to be R. And that actually served us really well. And it was the same on _Team America:_ "OK, we're doing this weird-ass movie and we have final cut and there's no preview screenings." They really hate it but in a way I think they kind of like the fact that, OK, all the responsibility is on you. _Many filmmakers swear by test screenings—particularly for comedies. 
Why don't you believe in them?_ TP: The reason we don't do test screenings is because _South Park_ came so very close to not existing because of test screenings. MS: Yeah, the pilot tested horribly. TP: And it's like we knew from personal experience that the comedy that we love, Monty Python, tons of people totally didn't get. So when we were doing shit that even only a small group of people really responded to, we thought we were doing something good. MS: Those preview screenings are especially insidious because it's not even like thumbs up or thumbs down on a show, but thumbs up or down on this joke in the beginning of the third act. I'm a math guy. I like math. But it doesn't apply to movies. We make comedy for ourselves. TP: That's the thing that's really served us well. When I'm sitting down to write, I just think about my friends at home and I think about them watching it and I try to make them laugh. I can't try to make everyone laugh. _Why did you do the_ South Park _movie?_ TP: Part of it was because we could, and another part was here was a chance to do an R-rated version and not bleep everything. But more than that was we had a sweet idea for it. To this day they are still at our heels for a _South Park_ movie sequel. But the _South Park_ TV show is such a hungry baby, every good idea we have goes into that. And until we have another sort of epiphany, like, "Here's a sweet movie to do," we're not going to do it. _The_ South Park _film landed you an Oscar nomination for one of the songs, and you attended in drag as Jennifer Lopez and Gwyneth Paltrow. Did you ever consider performing the song yourselves for the show?_ TP: Hell no. We honestly were debating even going. We were the guys that were totally against that shit. That's what _South Park_ is all about. It's anti-celebrity. It's anti all that crap. And it's like, can we really walk down the red carpet in a tux going, "Here I am, I'm one of you now"? 
But on the other hand it was like, "Dude, it's the fucking Academy Awards. We should go for once in our lives." And that's when we came up with the idea to go in dresses. We could go but still say "fuck you" to everybody. _It must have been a tough moment to step out of that limo dressed as you were._ TP: It was super-tough. It was really hard. MS: It was fucking really hard, yeah. TP: But not as hard as fucking sitting through that goddamn thing. I mean, we got up and left halfway through, and it was still fucking brutal. _If you look at your work from_ South Park _to_ Team America _, it seems like you've always gravitated toward seemingly crude techniques for telling stories._ TP: We're not the guys who would be happy making fucking _Deuce Bigalow,_ you know what I mean? We always want to do something that's just really fucking weird, because at least if it's not funny, it's really fucking weird. The honest truth is, we just fucking hate actors. Having to work with any people and trying to get them to do your comedy, it pisses us off so much. _Why do you hate working with actors so much?_ TP: I think it goes back to the way I made movies as a kid. The way I was brought up, making movies was to turn the camera on and I go get in front of it and I think of the lines I'm going to say, and I do it. And I tell my friends around me what to say. So it's really hard for me to stand back. When we were doing _That's My Bush_ with real actors, I told them beforehand, "Look, I know the way you actors are, but the way I'm going to direct you is I'm going to act out your shit for you." Because it's the only way I know how to do it. I'm a horrible communicator. I have no patience, and I don't like people. I just have absolutely no respect for actors. I really don't. I honestly believe that eighty percent of people in the world can be pretty good actors. I just have no respect for their art at all. _Matt, are your views as extreme as Trey's?_ MS: No, I think actors are sweet. 
(laughs) We have to play good cop/bad cop if we make a movie with actors so we don't have a reputation. I'm not as good of an actor as Trey, so maybe I have a little bit more respect for them. But when you're someone who is behind the scenes—a writer, producer, or director—you do see the inordinate amount of acclaim that actors get sometimes and you're like, "Man, that's not even the fucking hard part." _You talked about how much you work to the last minute on_ South Park _episodes. The same thing happened on_ Team America _, didn't it?_ MS: Oh, yeah. TP: You can ask Scott Rudin. He said he's never seen anything like this. He's like, "I've never seen people write so much in postproduction." Because he saw it so much in the _South Park_ movie, too. He's like, "You guys wrote half that movie in postproduction." It's just how we work. MS: There were whole scenes in _Team America_ that were shot with the puppets saying one thing, and then we'd sit in the editing room and turn the sound off, and basically we'd _Mystery Science Theater 3000_ it and completely change the entire scene. There were at least two or three scenes that we did that with. _How close to the release were you still fiddling with it?_ MS: Oh, two or three weeks. Paramount was going crazy. We did the whole post-schedule in six weeks, beginning to end. Yeah, it was pretty hellish. _You've said that_ Team America _was a difficult filmmaking experience. Was it the technical complexity that frustrated you?_ TP: Yeah. Because we had to rewrite the entire script every single day. We'd get to the new scene, and the puppets couldn't even begin to do what was written, because we wrote the script without even knowing how to do this. About two months into shooting that movie, we became really good at making puppet movies, but we were already two months in and we had about a month to go. _Was there ever a point during the film where you thought you would not be able to complete the movie?_ TP: Every single day. 
That's why it was so miserable too—because we're working our asses off with no sleep. And in the backs of our minds we're going, "We're never going to make it. We are going to completely fail." So yeah, it was brutal. _You've had many run-ins with the MPAA over the ratings for your films. How do you feel about the organization?_ MS: We have a lot of problems with the ratings board itself. When we originally submitted _Orgazmo_ , we got an NC-17. And we wanted to make an R movie because that was really important for our career. We were like, "OK, what do we have to cut? What's the most offensive shit?" And they just wouldn't tell us. They were like, "Well, that would make us a censorship organization. It's just the overwhelming sexual content. So maybe try another crack at it." But the thing is, we were doing an independent film and to get an Avid was like three or four thousand dollars a week, and we couldn't afford it. And then when we did the _South Park_ movie we got an NC-17. We were like, "Well, we've gotten an NC-17 before," thinking we knew how to deal with it. And Scott Rudin was like, "Here are the notes they gave us: At a minute thirty-eight, cut this word; at a minute forty-one, cut . . ." It was in excruciating detail. And people won't say it, but as one of the signatories to the MPAA you get preferential treatment. And with independent films they just give you an NC-17 and say "fuck you." For _South Park_ , one of the last jokes we were arguing about, Rudin called and says, "We've got to cut this," and I just went crazy. I called him back and said, "Fuck that! This is bullshit!" I just freaked out on him. And so he called somebody and somebody called somebody and the next day we had an R. We didn't change a frame. We had an NC-17, the studio bitched, and then we had an R. _Matt, Trey has directed all of the films himself, even as you two continue to write together. Have you ever wanted to codirect one of the projects?_ MS: No. 
I think that this is where my lack of ambition has really helped our partnership. Trey is the director and always has been. On _Team America_ I got a little experience being second-unit director, but it's just not one of my strengths. And I'm just not as interested in it. I'm better at other things that Trey isn't good at. _Are there any significant differences in your comic sensibilities?_ TP: No. It's pretty damn similar. _What about in terms of the direction of the career? You're always on the same page?_ TP: We're totally the same person in the sense that neither one of us thinks more than about three months into the future. We don't plan at all, and I think that's helped us too, because by not planning we don't pigeonhole ourselves into anything. We're always sort of mentally available when something good comes along. MS: I think that's why we kind of freak people out a little bit in town, because we don't have a publicist and we don't have a managed career. We just kind of do what we feel like doing. _You say you don't have a managed career and that also seems to extend to the way you speak. You both are notoriously candid. Has it ever cost you anything in terms of your career?_ TP: Never. There have been times when we felt like, "OK, this is over. Everyone's sick of us. It's time to leave town." But we've always been able to say "OK, we can leave this town with our heads pretty fucking high" because the nature of what we do is so anti-selling out. Of course we've sold out in more ways than either of us ever thought we would, but we've always had that in the backs of our minds—that we better always do shit because we think it's cool. Especially now it's like we really better just do shit because we think it's cool and to fuck with people and to not make another million bucks. _Do you go to the movies a lot?_ MS: I don't really watch movies. TP: It's weird, because that's sort of the last thing that either of us wants to do. 
The last thing I want to do is think about story and structure and character and all that shit. I fucking hate that now. MS: I don't even have a TV at my house. I gave it up about a year and a half ago. _What's wrong with movies today, in your opinion?_ TP: There's so rarely that film now where you get knocked over the head, like, "What the fuck was that?" Now you go and you know what you're going to get. You sit there going, "Well, here they are, giving it to me. I know what scene is coming next." I hate to sound totally jaded and bitter and cynical, but it's like really there are so many movies where you just see the studio in it and you see the Ivy Leaguers sitting around a table saying, "Here's how you do this scene." _Do you feel like your work is given short shrift because it's full of fart jokes and profanity?_ TP: I know that we work way harder than the Michael Bays of the world, and I know that we put a lot more thought into our stuff than most people think. I remember when the _South Park_ movie came out, a lot of critics were like, "Well, they obviously didn't have a lot of ideas so they just filled time with music." And it's like, "Do you know how fucking hard it is to make a musical work?" But on the other hand you've got to stop and look around and say, "Dude, we've made a lot of money off of this, and that's pretty great." MS: Yeah, somebody must like it out there. And a good fart joke is— TP: We still laugh our asses off at fart jokes. MS: I just laughed at Trey this morning. He farted on our friend Jennifer, and it made me laugh. 
THE DIRECTORS' TAKE
TREY PARKER & MATT STONE

_What is the first film you ever saw?_
Trey: _Fantastic Voyage_
Matt: _Around the World in Eighty Days_
_What is your favorite film of all time?_
Trey: _A Christmas Story_
Matt: _Babe_
_What's your favorite line in a film?_
Trey: "Don't talk to me, I'm thinking."—Ralphie, in _A Christmas Story_
Matt: "A man for a husband."— _Raising Arizona_
_What movie made you realize that film was an art?_
Trey: _Timecop_
Matt: _The Road Warrior_
_What movie do you consider your guilty pleasure?_
Trey: _The Devil's Advocate_
Matt: _Raw_
_Who is your favorite movie character of all time?_
Trey: The Outlaw Josey Wales
Matt: Mad Max
_What's your favorite movie snack food?_
Trey: wine
Matt: Milk Duds
_Who is your favorite director of all time?_
Trey: Don't have one
Matt: Trey Parker
_Who is the most impressive filmmaker working today?_
Trey: Definitely not me
Matt: Besides Trey, Pedro Almodóvar
_What quality do the best directors share?_
Trey: They're not completely stupid.
Matt: Insanity
_Who is your favorite actor or actress of all time?_
Trey: I hate them all.
Matt: Eddie Murphy
_Who would you cast as yourself in a film about your life?_
Trey: Tom Wopat
Matt: Eddie Murphy
_If you could remake one movie, what would it be?_
Trey: _Orgazmo_
Matt: _Megaforce_
_What is your best quality as a director?_
Trey: I'm really nice.
_What is your greatest weakness as a director?_
Trey: I hate actors.
_Finish this sentence: I'll never direct a movie with . . ._
Trey: any chance of making money.
_Finish this sentence: The perfect movie is . . ._
Trey: funny, sad, and not longer than one hundred minutes.
Matt: less than ninety minutes.
_What will they be saying about your work fifty years from now?_
Trey: "Why did anyone ever give him money?"
_What piece of advice do you have for aspiring filmmakers?_
Trey: Don't listen to any advice from me. Or Michael Bay.
Matt: Have a backup plan. 
_What are you as passionate about as moviemaking?_
Trey: Lovemaking
Matt: Drinking margaritas

**TODD PHILLIPS**

"For me it's much more recognition having fifteen-year-olds quoting your movie than having Roger Ebert commenting on whether it's funny or not. _Old School_ wasn't made for fat guys struggling with their weight."

**SELECTED CREDITS**
_Hated_ (1994)-director
_Frat House_ (1998)-codirector
_Road Trip_ (2000)-cowriter/director
_Bittersweet Motel_ (2000)-director
_Old School_ (2003)-cowriter/director
_Starsky & Hutch_ (2004)-cowriter/director

Todd Phillips grew up with the antiestablishment comedies of the 1970s. They obviously made an impact. With the 2003 hit _Old School_ he contributed his own classic comic tale of rebelling against authority. Phillips is obviously proud: "Letterman said it was the funniest film he had seen in ten years." But for those who think this filmmaker is easy to peg, Phillips is full of surprises. For instance, the cursory fan of _Road Trip_ or _Starsky & Hutch_ probably has no idea that the director has devoted a sizeable portion of his career to making documentaries. Of course, even those films carry the unmistakable Phillips stamp of irreverence and unpredictability. In many ways, Phillips's abuse at the hands of overeager frat boys in _Frat House_ is as hysterical as anything Vince Vaughn put his recruits through years later. _You were born in Brooklyn?_ TP: Yeah, in Brooklyn. I grew up mostly on Long Island. _Your parents split up when you were just five years old. Did you divide your time between your mom and dad?_ TP: Oh, no. I probably haven't talked to my dad in twenty years. We just grew up with my mom. I have two older sisters. _Did you find yourself ever wishing you had a brother or some other male presence in your life?_ TP: Yeah, which is actually why I think it sort of found its way into my movies. All my movies are about relationships between guys. I don't think I had a lot of male bonding. 
(laughs) It's something I've thought about a lot. Every movie I've done is generally about guys and how guys relate to one another. _What were you like as a kid?_ TP: I was a bad kid. I had sort of a delinquent group of friends, honestly. We always had to go to family court. I was fifteen and I remember I had two JV cards, which were Juvenile Delinquent cards. If you get three, you have to go to reform school. _What did you get them for?_ TP: One was for shoplifting and one was for trespassing. _When did you start getting into movies?_ TP: I got into movies at a young age. I got into [the kind of] movies you normally get into as a kid. I didn't discover Preston Sturges at twelve. It was _Stripes_ and _Meatballs_ and that kind of stuff. I didn't understand what a director did, but I loved movies and comedies in particular. _Do you remember the first films you became obsessed with?_ TP: The first thing I became obsessed with was probably _Stripes_ and probably _Revenge of the Nerds_. _Highbrow from the start._ TP: (laughs) Yeah, all the way. It's the stuff I love. To me, what was great about those late-seventies and early-eighties comedies was that they were antiestablishment. It's the tone of that I always responded to. All the good comedies are antiestablishment movies. _Animal House_ is the classic antiestablishment film, and _Stripes_ is the same. _Did you enjoy NYU?_ TP: NYU was a good experience. I didn't officially graduate from NYU. I went through three years and then I kind of just stopped. You get what you can out of film school and then you realize, "OK, I'm done." That was my approach because it's not like you need a degree. It's just like, "What can I get out of this, and what can I do with this?" _Why do you think you gravitated first toward making documentaries at NYU?_ TP: Everyone's eighteen years old when they go to NYU and they expect you to sort of sit down and write a screenplay. 
Unless you're a totally gifted writer, what do you really have to write about when you're eighteen years old? Ninety-nine percent of people write from experience. They're not necessarily gifted writers. So you have a lot of people writing about where they grew up, and maybe if they were lucky, their parents were divorced and they had some conflict in their lives, but it was ultimately nonsense. I always felt the better route was to do documentaries and, in essence, live life on fast forward by absorbing experiences that you're not necessarily otherwise going to have. _Were you thinking that documentaries would be a means to get into fiction films?_ TP: I never saw them as a separate thing. Documentaries and movies are all storytelling. A good documentary has a beginning, middle, and end. There are a lot of people who call movies "documentaries," but they are basically journalistic pieces. The way to direct a documentary, you have to have a point of view. Michael Moore, whether you like him or hate him, has a point of view behind his movies. He doesn't claim to be a journalist. He's making movies with a beginning, middle, and end the same as _Road Trip_ has a beginning, middle, and end. So to me, I never saw them as separate things. I think documentaries as a means of breaking into film are a really interesting way to go, because documentaries are seventy percent the subject matter you choose, so it's a little bit of an easier road. Ultimately all you are as a director is a storyteller, and if I can tell a story about GG Allin, why can't I tell a story about Will Ferrell going back to college? I was probably naive in thinking that, but it worked and it probably works easier now. Back then it was treated as the ugly stepsister of filmmaking. The word "documentary" made people yawn. I think I was ahead of the curve—mentally, at least. Hated _was your first serious stab at a documentary?_ TP: I was a junior at NYU when I did _Hated_. 
I had only done photography because I didn't have a video camera when I was younger, so _Hated_ was the first movie I had done. _Can you tell me a little bit about the subject of the film, GG Allin, and how you hooked up with him?_ TP: I was really into punk rock from when I was probably fifteen, and GG was always the most extreme character in punk rock. At the time he was sort of legendary, so I knew he would be an interesting subject. Just by going to shows I ended up becoming friends with his brother, Merle. I told him I wanted to do a documentary with GG. He was like, "You should write him a letter." He was in jail at the time. I wrote him a letter in Michigan and he wrote me back. He was getting out of jail in three months and he was going to be on probation for a year. He couldn't leave Michigan. I was like, "Fuck, I want to do it now." He was like, "Well, if you send me a bus ticket I guess I could do it." (laughs) So we send him a bus ticket and he comes to New York. The whole time we're filming the movie he's broken parole, and then he gets caught at the end of the movie. He served another year in jail while we were editing. Then when he got out of jail, he came to the premiere of the movie at NYU and then he died a month later. I was actually with him, shooting some stills of him for the press kit when he OD'd on heroin. We didn't even know he was dead. We just thought he was passed out. _His routines onstage were pretty out-there, including hurling excrement at times._ TP: Yeah, there was that, but it was more about just violence. He was incredibly extreme. (laughs) I mean, this is not Marilyn Manson. This is the real thing. It wasn't like you were into GG's music when you went to see his show. It was just the excitement of the stage show and the adrenaline of the room, and that was what we tried to capture in the movie. It's sort of the danger of it, and to me it's the thing that I've always been attracted to. 
I even think it carries itself over into _Old School_. As punk rock as fuckin' GG is, I think that comedy can be as punk rock as that, because there are no rules with it. _I understand you got the money for the film in an unusual way._ TP: The movie cost $14,000, which I didn't have. GG was friends with John Wayne Gacy who was, at the time, on death row. This guy was a painter in prison. He painted clowns. I wrote him a letter and I said, "Hey, I'm doing this movie on GG and you know what would be great is if you could paint the movie poster and then I could sell it and then I would have money to make this movie." Gacy calls me at home from the Menard Correctional Facility in Illinois, and he said that he would do it for fifty dollars and art supplies and a photo of myself. And he sent me this bio that I had to fill out, which was sort of sexual and weird. _Oh my God._ TP: So I did all that, sent him the fifty bucks. Two months later he sends me the painting. He numbers and signs the posters. I put ads out in these punk-rock magazines. I sold them for fifteen dollars apiece in three months—every one of them—made $15,000. (laughs) I actually spoke to [Gacy] three days before he died. He really took to me in a weird way. _Your next documentary took you inside a fraternity's hazing rituals. Wasn't_ Frat House _supposed to be a project for HBO?_ TP: [HBO vice president ] Sheila Nevins loved _Hated_ , but she couldn't air it, because at the time they didn't air documentaries that are made outside of HBO. She was like, "I can't air this movie, but I fucking love it. What do you want to do next?" I hooked up with my friend from NYU Andrew Gurland, and we pitched a few ideas. We made a three-movie deal with HBO. We were going to make three documentaries based on eighties comedies that we grew up on. So we were going to do _Animal House_ , which was _Frat House_. We were going to do _Stripes_. Andrew and I were going to join the army and go through boot camp. 
And then we were going to do _Bachelor Party_ because Andrew was getting married. (laughs) It was a way to make documentaries entertaining which, by the way, now is happening all over the place, and it's great. _Did you go into_ Frat House _with any kind of preconceived feeling about fraternities?_ TP: Look, I went to NYU. I'm not a fraternity guy, even though I've now made two movies that have them in them. To me, the thing I was always fascinated by was the male bonding. I just want to understand why guys put themselves through that kind of thing. _You become part of the pledge process in the film. What was the worst part for you to endure?_ TP: I think the worst thing was when I was in that dog cage, because I'm a bit of a germ freak. They're, like, pouring tobacco spit on my head and urine and whatever. I couldn't believe I was in a dog cage. _Thinking, "How did I get to this place in my life?" You suffered through it all and you don't even get to be a part of the fraternity._ TP: (laughs) Yeah, exactly. I didn't even get to fuck any of the girls. Frat House _never made it to HBO despite doing very well at Sundance. What happened?_ TP: The movie got a little press, and a lot of the kids started to come out and say it was a setup and we told them what to do and we paid them. Well, first they came out and said they were drunk and stoned when they signed the release, which was absolutely true. (laughs) We found the best time to hand them paperwork was when they were drunk and stoned, which to me was logical. Otherwise, they're like, "Maybe I have to fax this to my dad." So they got us on that, but that didn't really matter, because the process of releases isn't really that legally binding. You don't necessarily need a release, because it's not like we're filming with hidden cameras. It became a big legal headache with parents threatening to sue and saying it was going to ruin their kid's life. So it went away after that. 
Andrew and I were fighting it for a little bit because we really wanted the movie to be seen, but we now look back on it and kind of like that it's this underground sort of thing. _It was at Sundance that you met Ivan Reitman, who would be important to your career. How did you two meet?_ TP: I met Ivan because his son was there with a short film. His son, at our Q&A, raised his hand and asked, "What was your inspiration for this movie?" And I told him the story I told you about how we wanted to make three movies based on eighties comedies and two of those three movies were his dad's movies. Two days later that same kid comes up to me and goes, "Hey, Dad, this is the guy I was telling you about." _He's produced two of your films. Would you consider him a mentor?_ TP: Yeah, for sure. I didn't know anyone in the film business, ever. It wasn't like I had an uncle in the business, so I always think of Ivan as that guy. _After_ Frat House _you made some money directing some commercials for a while?_ TP: I directed over a hundred commercials probably. _Did you feel your commercial work honed your skills behind the camera?_ TP: You know what it is? You finally get to really play with all the toys that you'll have available to you on your first feature. So it wasn't really just for the money; it was great experience and the best way to learn how it works on a real movie set, because I had never been on a real movie set before. _How were you working with your cast on_ Road Trip _? It wasn't a terribly experienced cast._ TP: I feel bad for these first-time directors that somehow luck out and get Tom Cruise in their movies. These guys were great for me because they really hadn't done a ton of stuff and we were all sort of in it together. It took a lot of the pressure off. Vince Vaughn is the greatest in the world, but I don't think I could have directed Vince in my first movie. He's just intimidating. 
Road Trip _was called a "gross-out" comedy by some, much in the same way that_ There's Something About Mary _and_ American Pie _were at that time. Do you find that term belittling?_ TP: I think it's belittling to all the movies in a way, but it didn't bother me, because at the time it was a marketing tool. I just find it dismissive of all those movies, because there're plenty of gross-out movies that don't do well because the story doesn't work. It's easy to be gross. It's hard to be funny. So it has to work on more levels than being just a gross-out thing. _You've said casting is one of the most important parts of your job. Did you have the guys in mind when you were writing_ Old School _?_ TP: We wrote the movie for Vince Vaughn. That was the only part we had in mind because I think Vince is the funniest guy ever, and I don't think he'd ever done a comedy like that. And we had to fight for Vince. _Vince is one of your favorite actors to work with. Why?_ TP: Out of anybody I've ever worked with he's the funniest guy in real life. And whether it comes across in his movies or not, I know having him on the set it's going to be great because when you're in those weird situations, he's going help you out. Vince is somebody who thinks more like a producer than an actor, and that's not a negative thing. He actually thinks about the movie as a whole as opposed to his role. So he's tracking story, which is what you should be doing as the director. It's always nice to have somebody else involved that's doing that. Fight Club _was obviously on your mind when you made_ Old School _. There are a number of references to it._ TP: Yeah, I always saw the movie as the comedy version of _Fight Club_. _Fight Club_ is my favorite movie of the nineties, other than _Boogie Nights_. _You've said that you see_ Old School _as a pull between responsibility and irresponsibility._ TP: It is. It's about that moment in life where you have to decide, "Am I going to go this way or that way?" 
I personally put off going "that way" for a long time. I'm definitely still "this way." I think it's why _Old School_ connected. It's not so much, "Oh, it's funny when Will gets shot with a dart in his neck." Comedies that last have to work on more than that one level of just being funny _. There's Something About Mary_ is great because it's funny, but it's also great because you're so invested in Ben Stiller's desire to get back with this girl, to get a second chance. It's something you can so relate to. _You're in_ Old School _. What's your one line?_ TP: "I'm here for the gang bang." I know it because I live here in L.A. and across the street they rented a house to eight college kids. Without fail on a Friday or Saturday night they have a party, and without fail at three in the morning they buzz my buzzer and say, "I'm here for the gang bang." And they all run away laughing. And then I see them when they're not drunk in the daytime and they're taking out the garbage and I'm like, "Guys, enough with that." And they're like, "Sorry, man, it was my buddy. It's not going to happen again." And then the next Saturday it happens again. _That character and your character in_ Road Trip _seem to have a lot in common._ TP: It's the same guy. It's an alter ego, and actually I had a part written for him in _Starsky_ , but we didn't have time to shoot it. I very givingly gave up my part. (laughs) That's Mr. Creepy, and he's very sexual. He finds sex in everything. He's so straight that sometimes he's even gay, if that makes sense. _What was the genesis of_ Starsky & Hutch _?_ TP: I met with Ben Stiller and he liked the stuff I'd done, and we started talking about doing a movie but we didn't know what. Suddenly a week or two later he called me and said, "Hey, I found a movie we should do. It's more of an idea: me and Owen Wilson as Starsky and Hutch." And I said, "All right, I'm in." Because I knew we could have fun with that, and I was just dying to work with Ben and Owen. 
_Tonally the film must have been a challenge. It has comedy and action and it's not exactly a spoof, but—_ TP: That's what was difficult about the movie. I always saw it as kind of a love letter to the show as opposed to an homage or a spoof of the show. Our goal was, "Let's write a prequel to the TV show; do the same thing tonally, but put different actors in it and thus hopefully make it funny." _With_ Starsky _, you were again working on a film about the relationship between men. Do you think this recurring theme might go back to the absence of your father?_ TP: It could, quite possibly. I don't know if it goes back to my dad. It more goes back to the fact that I was raised by three women—two older sisters and my mom. I've sort of been protected in that way, and I always was surrounded by women. I didn't have a brother. You can't help but explore something that's foreign to you in a way. _Interestingly, the relationship between Ben and Owen in_ Starsky _almost mirrors a traditional romantic comedy._ TP: I always said to Ben, "This is going to be a romantic comedy between two straight guys." It follows the beats of a regular romantic comedy because they're thrown together, there's tension, and they're thrown apart and then they come back together stronger than ever. A typical romantic comedy to me is like R&B music. It's just not my thing, but to take two guys and virtually make a romantic comedy with all those beats just seems interesting to me. I find romantic comedies are rarely romantic and rarely funny, which is why what P. T. Anderson did with _Punch Drunk_ [ _Love_ ] is like the greatest romantic comedy of all time—because it's actually funny _and_ romantic. _What sort of mood do you like to establish on set?_ TP: Extremely relaxed and light, almost too much so. Ben told me when we were shooting _Starsky,_ "I feel like we're making a student film." And Owen used to say, "The inmates are running the asylum." Maybe it is too light. I don't know. 
Look, I'm not delusional. Plenty of people don't like my movies. I've sat down with reviewers who don't like the movies but they'll always say, "Boy, it looks like you had a great time making it." And I think that is true and that the energy finds its way into the movie. So if you're not into the story, I think the enthusiasm and the energy of the fun sucks you in. Or that's what I'd like to believe. _Speaking of critics, does it bother you when a film as successful as_ Old School _still gets panned by prominent critics?_ TP: No. For me it's much more recognition having fifteen-year-olds quoting your movie than having Roger Ebert commenting on whether it's funny or not. _Old School_ wasn't made for fat guys struggling with their weight. I mean, it's tough to make that guy laugh. Just look at the poor guy. I'm not picking on critics. While some people hated that movie, it's also been loved by critics. I don't have any reviews framed, be they good or bad. Ebert loved _Starsky_ , but that didn't make me suddenly feel better. _Do you test-screen your films?_ TP: Every film I've done. I think testing a comedy is absolutely one-hundred-percent crucial, and I think testing a movie in general is crucial. I always find it amazing when directors—outside of Steven Spielberg—just say, "Here's the movie; take it or leave it." I find it astounding because, ultimately, you really don't know what you have until you put it up there. Certainly comedy, when you watch it with an audience, then you know, "OK, that works; that doesn't work." You're making a movie for the audience. _Road Trip_ was not the story I needed to tell. It wasn't going to be the pinnacle of my career. It was a movie I was making to be funny and I think ninety-nine percent of directors who make comedies will tell you the same thing. It's totally different for Paul Thomas Anderson and people making films that are personal stories that are like, "This is exactly the way I wanted to tell it. 
If it doesn't work for you, it doesn't work for you; so be it." When you're making a film [where the] sole purpose is to make people laugh, to not test it is kind of shocking because how do you know? It's not funny to you anymore. Are you going to just assume that [other people are] going to love it? And then you're shocked on that opening Friday night when you go to the theater and nobody is laughing? _At a certain point, wouldn't you want to trust your own instincts?_ TP: No. I would never not test-screen a movie. Ultimately it's a fine line, because you are a director and everyone wants to consider themselves an auteur. Certainly as a writer-director it's important that it maintain that original vision you had for it. It's not like you can change your movie a hundred-eighty degrees by testing it. It's still the movie you shot. If a joke fell flat and I'm taking it out of my movie, am I changing my vision? Did I compromise what I set out to do? No. The joke didn't work, and I'm adult enough to know it didn't work. Big deal.

THE DIRECTOR'S TAKE
TODD PHILLIPS

_What is the first film you ever saw?_
_Deep Throat_. I was five.
_What is your favorite film of all time?_
_Gimme Shelter_
_What's your favorite line in a film?_
"Better to be king for a day than schmuck for a lifetime."— _The King of Comedy_
_What movie made you realize that film was an art?_
_Under the Rainbow_
_What movie do you consider your guilty pleasure?_
_One Night in Paris_
_Who is your favorite movie character of all time?_
Rupert Pupkin
_Who is your favorite director of all time?_
Hal Ashby
_Who is the most impressive filmmaker working today?_
Paul Thomas Anderson
_What quality do the best directors share?_
Persistence of vision
_Who is your favorite actor or actress of all time?_
Al Pacino
_Who is your favorite actor or actress of today?_
Sean Penn
_Who would you cast as yourself in a film about your life?_
Hilary Swank
_If you could remake one movie, what would it be?_
_Sullivan's Travels_. Not that it needs to be remade; it is just a film that really connects with me.
_What is your best quality as a director?_
Are other people really answering these questions?
_What is your greatest weakness as a director?_
Low tolerance
_What will they be saying about your work fifty years from now?_
The same thing they say now, "Me and my friends got really high and watched (fill in title) . . . it was sooooo awesome."
_What piece of advice do you have for aspiring filmmakers?_
Don't agree to answer lame questionnaires. It's like doing someone else's homework.
_What are you as passionate about as moviemaking?_
Gambling and drugs and my dog

**BRETT RATNER**

"Why do I need final cut? Final cut is for artistes quote unquote—directors whose movies don't make a lot of money."

**SELECTED CREDITS**
_Money Talks_ (1997)-director
_Rush Hour_ (1998)-director
_Family Man_ (2000)-director
_Rush Hour 2_ (2001)-director
_Red Dragon_ (2002)-director
_After the Sunset_ (2004)-director
_X-Men 3_ (2006)-director

From the time he saw _Raging Bull_ as a kid, Brett Ratner knew he wanted to follow the path of Martin Scorsese. Today he may not be Scorsese, but no one else can claim to be anything like Brett Ratner. Start talking with him, and that is immediately clear. You soon realize he could probably talk his way in or out of any situation he wanted. It's no wonder he is up for so many high-profile projects. And it's not as if he has gotten by on enthusiasm alone. He is a student of film, throwing out film references with obvious love and affection. Ratner makes movies for the masses and would not think of apologizing for it. It is hard to argue with the results. The _Rush Hour_ series is one of the most lucrative franchises going today. And his Hannibal Lecter film, _Red Dragon_, attracted one of the finest ensembles of the last decade. 
_Can you tell me a little bit about your family and where you grew up?_ BR: I lived in one house with my mom, my grandparents, my great grandmother, and my mom's brother. It was a little house in Miami Beach. We were upper middle class. My grandfather was the patriarch. My whole life, since I can remember, I have dreamed of being a filmmaker. I don't know why. The thing my grandfather and I would do every weekend would be to go to the movies. He's Cuban, so we'd mostly pick action movies. He loved action movies because with them, you don't really have to understand the dialogue. _When did you start to make movies of your own?_ BR: My mom knew this guy named Nile Rodgers who became a record producer. Every Christmas he'd come to town, and one year he came and brought me a film camera. _And you were off to the races?_ BR: Yeah. And so I started using that to make my little films. Around that time they were shooting the pilot for _Miami Vice_ on the street, and I would ask my mom to let me skip school. She was a very liberal mom and she let me. She'd drop me off on the set, and I would watch them film. When you're a kid, it's all fantastic. I was there, seeing it live, and then I would see it on the TV. So I started to connect the dots. I was like, "Oh, I understand what that camera movement was." I used to stalk movie sets, and the other set that I stalked was _Scarface_. I stalked it so much that Brian De Palma said, "Hey, kid, get into the shot." I must have been twelve or thirteen. I'm in the background of that scene in the pool. I'm on a raft. Al Pacino was so cool, but I remember that is the day I decided I don't want to be him. I said, "I want to be the guy telling him what to do," and that guy was Brian De Palma. _What do you think you were being drawn to?_ BR: I think it was the control. I saw him collaborating with everybody, and I saw Pacino just sitting there. I was happy because I could focus at a very young age on what I wanted to do. 
_Raging Bull_ was my favorite movie, so I said, "OK, how did Martin Scorsese become a filmmaker?" He went to NYU film school. I go, "Well, that's where I'm going to go." So I did whatever I could. All I did every day was skip school or get home from school early, and me and my friends would get together and make these little short films. They were like mock _Miami Vice_ episodes. It's actually my best work. _It obviously paid off. You got into NYU._ BR: I went and the interviewer said, "How dare you apply to this school? You have the worst grades and the worst SAT scores." She said I should go to community college and after I got straight A's, maybe they'd consider letting me in. And I was like, "No, you don't understand." I brought my little projector with me and they were like, "No, we don't look at this." I left devastated. I literally walked down the street with my little suit and my projector and my films and I was like, "My life is over! How can I let this woman decide my future?" So I go to the dean's office, saying I need to see the dean. They let me in. I said, "Dean, my whole life I've dreamed of being a director. I need you to let me in here, or I'm going to be living on my mom's couch for the rest of my life." A few weeks go by and I go to my mailbox and [a letter] says "You've been accepted to NYU." That was a defining moment, and the fact that I got accepted made me realize that "no" wasn't a response that I was ever going to accept. _That seems like a lot of tenacity for someone so young. Where do you think it comes from?_ BR: The tenacity is from my grandmother. And my mom is a very outspoken, outgoing person. But the thing that I got [from them] that I think made me successful was the fact that I got a lot of love. I was never afraid to fail. If I didn't get into NYU, I would be OK. I'd still be making films, but they'd be in my backyard with my friends instead of with NYU students in Washington Square Park. 
My state of mind was, "If I can't be a director, my mom's still going to love me, so no big deal. I'm going to do the best I can." _You've always been a film buff, but was it in film school that you started to study the history of the medium?_ BR: In film school I started going back. In film school you're taking a lot of film history classes and a lot of film theory classes, which were all bullshit to me, but— _Why were they bullshit to you?_ BR: It's kind of like that joke with Woody Allen in _Annie Hall_ when he's in line and the film professor is talking about some writer, and Woody Allen pulls the guy out of the line and the guy says, "You have no idea what you're talking about." That's what I felt like for some of the professors. They were analyzing these films and saying, like, Daffy Duck was racist. I don't think Walt Disney was thinking about race. These professors act like this is the Bible, even though it was their theory. I didn't always agree. _What do you think your fellow film students made of you?_ BR: They didn't like me very much, because I was the mainstream guy there. My sensibility was to do comedies, and they were doing films about someone dying—really artsy kind of films. But my biggest inspiration was this professor who was a cinematographer of documentaries. He was the least commercial guy ever. Some kid did a film and it was so genius, it literally made me want to quit. I said, "There's no way I will ever make a film as good as this." The execution, the composition, the movement in the frame, it was literally genius! The film was about getting high. Only a few guys in the class had gotten high, so ten kids out of forty laughed. The professor said, "This film doesn't work." They started screaming, "You're a fucking communist! You're a fucking asshole!" I mean, NYU is very vocal. He was like, "Wait a second. What's the greatest story ever told?" Someone says the Bible. He says, "Yes.
The Bible is the best story ever told, because a small child can understand it and an adult can understand it. That's what your approach to your films has to be. If you're going to make films for just you and your friends, you're never going to succeed. If you want to get paid to make films, you have to make films for all audiences." That stayed with me. _You did a short film at NYU called_ Whatever Happened to Mason Reece? _What did you take away from that experience?_ BR: NYU teaches you how to make films. They don't teach you how to get a job. My professors told me, "Brett, you have to make a film. That's your résumé." So I shot this film. One night before I was supposed to shoot, the professor calls me. I'm shooting at seven in the morning. He says, "I read the script. It's not funny." He spoke with broken English. I said, "What do you mean it's not funny? You don't know if it's funny. You're from Poland! You don't speak English. How do you know what's funny?" He said, "No, you rewrite it, right now!" I sat there until one in the morning, rewriting the script. Ten years later, on _Money Talks_ , Chris Tucker and Charlie Sheen come in my trailer and they say we can't shoot this scene. They go, "It's not funny." I said, "What?" I had to rewrite it right there in front of them! And then I realized _that's_ what the fuck my professor was doing. He was preparing me for when I'm on the set of a $25 million movie, where the stars come to me and say we can't shoot and I only have six hours left in the day to shoot it. _Didn't you go through some unusual channels to pay for_ Mason Reece _?_ BR: Yeah. Before I finished the film, I came across a _Forbes_ magazine where I saw the forty most powerful people in Hollywood, and I sent a letter to all of them with a clip from the movie, asking for money. I didn't need the money, but I wanted the relationships. 
I got thirty-nine rejection letters, and I could not have been happier—letters from Peter Guber, from Steve Ross, from all the heads of all the studios. I thought, "One day I'm going to go out to Hollywood and I'm going to say, 'Hey, you wrote me a letter!' Even though it was a rejection letter, you wrote me a letter." The dean calls me in. I'm thinking I'm getting kicked out. I was the worst film student ever. I would keep equipment [overnight] when I wasn't supposed to. I was using the editing room when it was closed in the middle of the night. I was just crazy. He goes, "Steven Spielberg's office called. They're looking for you. Here's the number." I go back to my dorm room and call the number. They answer the phone, "Steven Spielberg's office." I'm like, "Is Mr. Spielberg in?" They say, "Who's calling?" I say, "Brett Ratner." "Hold on one second, he's expecting your call." My heart was pounding out of my chest. I'm like eighteen, nineteen years old. They finally go, "Mr. Spielberg's going to call you back." I remember falling asleep on the phone. I woke up with slobber all over my phone. When I woke up, the phone rang and it was Kathleen Kennedy. She said, "Steven saw your clip and we're very impressed, but we don't give money for short films and thank you for sending it to us." And I was like, "You don't understand. I'm going to be a big director like Steven Spielberg. You've got to trust me. This is an important film." I kept her on the phone for, like, ten minutes. A month goes by and I get a check from Amblin Entertainment. I was like, "Oh my God!" I blew it up the size of my wall in the dorm. I would show it to girls in the hallway to try to impress them to try to get laid. I don't think I ever cashed the check. I just carried it around in my wallet for three years. When I became a music video director, Quincy Jones called me and said, "I want to meet you." He said, "Come to this party with me." So I think there's going to be like a hundred people there. 
There are three people there. He goes, "Brett, this is Penny Marshall, Robert De Niro, and Steven Spielberg." I was like, "Oh my God!" Spielberg sat down next to me and he goes, "So did you go to film school?" I said, "It's funny you should ask, because you gave me money." I picked his brain for four hours. I said, "Where did you get that shot from in _Schindler's List_?" He goes, "Oh, that was Michael Powell's _Peeping Tom_." Every shot I asked him about was from another movie. And I realized, "Shit, I do the exact same thing!" So anyway, it was great. He gave me his phone number. He tried to fix me up with his daughter. I didn't want to go there, because I thought if I hurt his daughter, I'll never make it in Hollywood. _One of the most important relationships that you made at NYU was with Russell Simmons. How did you two meet?_ BR: When I was at NYU the first guy I met was Russell Simmons. I lived in the dorm where they started Def Jam. Russell and I became best friends. I did all the early Def Jam videos. I knew I had to do a hundred music videos to practice my craft. Music videos were like another film school to me. I was getting paid to learn. Because in film school, you don't really have all the tools. So I said, "OK, I didn't pay attention in film school to all this type of stuff, so I'm going to learn about every single piece of equipment." And that's what I did. _How did your first feature film,_ Money Talks _, come about?_ BR: Russell had just started the Def Comedy Jam, and I was at the auditions. Out of the hundreds of comedians one of them was Chris Tucker. I said, "This kid is a genius!" And I asked him to be in a music video I did with Heavy D. It was a big video on MTV, and it was the first thing he'd ever done. He went on to _Friday_ , and he was about to do _Money Talks_. Chris was like a cult underground comedian, so Mike De Luca gave him a development deal to do _Money Talks_. 
Mike hired a big commercial director, and a week before production he went to Mike and said, "I can't work with Chris Tucker. He wants to improvise. He wants to change the lines." That's the genius in Chris Tucker! At the end of the meeting, Mike said to the guy, "You're fired." So Chris said, "I remember this cool white boy named Brett Ratner." And Mike remembered my name because he's very hip to all the hot music directors. He called me in and hired me. _Were you nervous going into the shoot? You had almost no prep time._ BR: I had no clue as to what to expect. I had nothing to lose. I was like, "This is my dream." I didn't know what the fuck I was doing. But my instincts turned out to always be right. _There was never a worry of, "What if I'm making a bomb here?"_ BR: I didn't know what a bomb was! I'd never done a movie. I became more scared as I went. Each time I did a film, it became scarier and scarier. On the set of _Money Talks_ , I'm like, "Fuck, this is the greatest time of my life." I always had that fearless attitude. _How did_ Rush Hour _come about?_ BR: Mike De Luca said, "What do you want to do next?" I said, "I want to do a movie with Jackie Chan and Chris Tucker." So I read four or five different scripts that were all buddy cop things. I wanted to do a contemporary version of _Beverly Hills Cop_ with _Midnight Run_. Out of all the scripts I found, _Rush Hour_ was the best. It was written for Stallone, originally. The idea was to do a movie that was serious in tone but put a comedian in the middle of it. And we crossed the hip-hop and the Asian culture. _I've heard that you like to watch movies every night while you're shooting a film?_ BR: Yeah. I always do that. It keeps it fresh for me. I tend not to watch movies in the same genre. I watch movies in different genres and there might be a similar scene. I have so many references. 
That's why Scorsese and Spielberg are so quick on their feet and do such great work—because they have all the references. They've seen what works. I think that's what helps me too. Look, I'm not like De Palma or even Paul Thomas Anderson. I can watch Paul Thomas Anderson's films and tell you in every scene what movie he's taking from. I know those references, but that's kind of blatant stuff that he does because he wants to show you how he loves those movies. My stuff is subliminal. You would never even pick it up, really. It's very subtle stuff. _In the wake of_ Rush Hour _, what conversations with other filmmakers meant a lot to you?_ BR: Warren Beatty called me and said, "It's my favorite film of the year. I have got to meet you." And we became very close friends after that. _You were going to remake Cassavetes's_ Killing of a Chinese Bookie _with Warren, weren't you?_ BR: Yeah. I got cursed out by a lot of friends of mine who were just like, "That's a classic!" It's Paul Thomas Anderson's favorite movie, so to Paul I was the antichrist. Mike De Luca was going to greenlight it, and then I got _The Family Man_. I literally had to beg for that job. They didn't want to hire me. They said I was too young. I wasn't a family man myself. I said, "But I _am_ a family man. I'm just not the father in the family. I'm the son in the family." And the way I convinced Nic Cage, I said, "Watch this scene of _Kramer vs. Kramer_ where he's making French toast with his son. That's the relationship I want you to have with your little daughter." I gave him references so that he would understand. _The argument could be made that_ Family Man _was your most important work to date. After your first two comedy/action films, this one showed some range._ BR: It's because of that film that I ended up getting a film like _Red Dragon_ or why I was even being offered _Superman_. Fifty years ago a director would go from comedy to drama to romance to westerns to musicals. 
It didn't matter what genre it was; that was your job. Look at Billy Wilder. But nowadays it's like, "OK, this is who we go to for comedy. This is who we go to for the action stuff." They put you in a box. Rush Hour 2 _had a very large budget. Many filmmakers believe that perhaps a smaller budget encourages greater ingenuity. What do you think?_ BR: I agree with that. But you can't make _Rush Hour_ on a small budget. But yeah, creatively my best work was my work when I was ten years old. (laughs) Because it's about the creative ideas. Films today are always based on the formula. They don't make movies like _The Long Goodbye_ anymore, or _Harold and Maude_ or _Being There_ , which are full of heart and humanity. People don't die in the end of movies anymore. Warren Beatty died in the end of every one of his movies. Tom Cruise dying at the end of a movie? No way! _You've worked with a lot of exceptional actors, in particular on_ Red Dragon _. How do you work with actors? Do you give line readings?_ BR: Oh, yeah. I got yelled at three times a week on that movie. I always give line readings. I mean, that's a mistake. I'm not an actor. What I do know is what I like and what I don't like. It's my job to rein the actors in and express to them the tone of each scene based on where we've come from and where we're going. And that's what I'm very good at, I think. Because Philip Seymour Hoffman can say the line upside down and it's going to be good, but how does it fit into the puzzle? _You also have to be able to work with actors who come from different backgrounds and levels of study._ BR: Yeah. I got yelled at because I would talk to Anthony Hopkins sometimes like I was talking to Jackie Chan. And it's not that Jackie's not smart, it's that I had to speak in a different way to explain it to him. And Anthony Hopkins is looking at me like, "Who are you talking to?" A lot of times the English ones like Ralph [Fiennes] would come in with a preparation. 
These guys do it two hundred times in the mirror, or whatever their process is. But I completely throw them off. I go, "Guys, we have got to go in the other direction." But they're such good actors, since they're from the theatre, that they can do that. Whereas Edward Norton, probably being used to the laziness of American filmmakers and the confidence in his own memory and skill, won't work as hard on it but will rely on instinctual stuff. So everyone has a different way of working, and they don't all like the way they work together. I have to be the kind of intermediary. What I do have, I think, is an ear for dialogue. I know it when I've got it. And I do whatever I have to do to get it. I've given some of the worst direction ever, but I've gotten the performance. _It seems like you've made a conscious effort to jump around between different genres thus far._ BR: The good thing about me is that I've made movies in three or four different genres now. _Family Man_ doesn't look like _Rush Hour_. _Red Dragon_ doesn't look like _Rush Hour_. _Red Dragon_ doesn't look like _Money Talks_. It's like a good record producer makes the artists sound like themselves and doesn't force a sound on them. There are some directors where you see their movies and every movie looks the same, no matter what the genre. It doesn't mean they're not good directors. Michael Bay, whether you like his films or not, I admire that when you see the film, you know it's a Michael Bay film. _So does part of you want your films to be as identifiable as a Michael Bay_ _film is? Do you want a viewer to immediately know they_ _'re watching your work?_ BR: Every film director wishes for that, I think. People see my trailers and they say, "That's a Brett Ratner film, just because of the energy and the humor." I get that all the time: "I knew that was your film when I saw it. I didn't see the opening and then I saw the film and I was like, 'Brett must have directed this film.' 
" And that's the biggest compliment I can get. That's why Nic Cage agreed to do _Family Man_. He saw _Rush Hour_ and he met me and he said, "You're _in_ that movie!" But there's no particular shot that you could point to and say, "That's a Brett Ratner film." After the Sunset _was your first film that didn't succeed from a commercial perspective._ BR: That was the hardest movie I'd ever done. I was really lost in that film, because I didn't have a clue as to what the tone was. It had four different genres in it. I was like, "Fuck, this is crazy." It's kind of an amalgamation of all these different movies, which is why I was less sure of myself when I was giving the actors directions. It's all about finding the language of the film. I think aside from being a filmmaker, I'm a student of social behavior. I'm in the mix. Yeah, I live in a big house in Beverly Hills, but I'm not sitting there all day long. I'm going out. I'm interacting with people. I'm not just having lunch meetings at the Ivy. _After the Sunset_ wasn't a hit, but I'm not hiding out in a penthouse in Vegas, you know? (laughs) _You seem to have made an effort thus far in your career to work with people who have more experience than you behind the camera._ BR: Because I learn from them. When I'm fifty or sixty years old, then _I'm_ going to get the young, hip video guy. Now I'm all the hip you need. I want the guy who's made thirty movies. I recognize talent. And I surround myself with the best people because that makes me look better. I know my position. I don't think I'm better than anybody. I know I'm talented. I know what I'm good at and I know what my weaknesses are. I know that I'm young and have a lot more to do. I don't feel like I've arrived. I have so much more to learn. I have so much more to do. _Do you get final cut on your movies?_ BR: I don't ask for it and, believe it or not, I've never had one frame of my movies changed. My director's cut of _Rush Hour_ is ninety minutes. 
Why do I need final cut? Final cut is for artistes quote unquote—directors whose movies don't make a lot of money. Maybe Scorsese should have final cut because a guy like Harvey Weinstein or a studio might change it to make it a little more accessible or a little more commercial and he has a vision of what he wants it to be. He wants it to be four hours long or whatever. _Do you worry about losing touch? A lot of the great filmmakers of the seventies have been accused of losing whatever it was that made them great to begin with._ BR: I think a lot of that has to do with final cut. A lot of that has to do with guys that don't have to answer to anybody. They don't have to test the movie. I like sitting in the middle of an audience and seeing how people vibe and how they react and how they respond. That's how I learn from an audience. I'm making movies for an audience, not just for myself. _Do you think you have a smaller, less mainstream film in you?_ BR: My taste is accessible to what audiences want. Some people just have certain sensibilities, and I'm not going to apologize for mine. I was always envious of Paul Thomas Anderson because he was like, "Oh, me and Jonathan Demme are buddies and me and Kubrick hung out on the set with him with Tom and Nicole." I was jealous of that and I was like, "Shit, I want to be friends with these directors," and I thought I have to make my personal film about someone dying of brain cancer or whatever to get the respect. But then, after _Rush Hour_, when I got calls from Demme and Beatty and Bob Evans and all these guys I'm like, "You know what? Directors aren't snobs." They love a movie no matter what the genre is, if it works. It gave me so much confidence because I was just like, "OK, I don't have to go make _Boogie Nights_." _Do you think about a legacy for your work?_ BR: I want to be remembered. I want my films to be remembered.
Two hundred years from now I want people to say, "Oh, that's what it was like in that era." _Y tu mama tambien_ is the only film that has captured what was going on today with the youth. If they pull out _American Pie_ and look back, the future's going to think these people were fucking retarded. I want my movies to represent my generation or my era just like _Mean Streets_ represented Scorsese's youth.

THE DIRECTOR'S TAKE BRETT RATNER

_What is the first film you ever saw?_ _Star Wars_
_What is your favorite film of all time?_ _Being There_
_What's your favorite line in a film?_ "Don't ask me about my business."—Al Pacino, in _The Godfather_
_What movie made you realize that film was an art?_ _The Bicycle Thief_
_What movie do you consider your guilty pleasure?_ _Scarface_ (1983)
_Who is your favorite movie character of all time?_ Tony Montana
_What's your favorite movie snack food?_ Icee
_Who is your favorite director of all time?_ Roman Polanski
_Who is the most impressive filmmaker working today?_ Clint Eastwood
_What quality do the best directors share?_ A personal style and vision
_Who is your favorite actor or actress of all time?_ John Cazale/Sterling Hayden
_Who is your favorite actor or actress of today?_ Al Pacino
_Who would you cast as yourself in a film about your life?_ Sean Penn
_If you could remake one movie, what would it be?_ _The Killing of a Chinese Bookie_
_What is your best quality as a director?_ My passion
_What is your greatest weakness as a director?_ My cell phone
_Finish this sentence: I'll never direct a movie about . . ._ what I did last night . . . unless I really need the money.
_Finish this sentence: The perfect movie is . . ._ one that makes me laugh and cry!
_What will they be saying about your work fifty years from now?_ "His movies have definitely gotten better."
_What piece of advice do you have for aspiring filmmakers?_ Never give up.
_What are you as passionate about as moviemaking?_ Movie-watching

**KEVIN SMITH**

"Every fucking day I sit there going, 'Oh my God, am I any good at this whatsoever? Is today the day they're going to realize I'm fucking shit at my job?' "

**SELECTED CREDITS**

_Clerks_ (1994)-writer/director
_Mallrats_ (1995)-writer/director
_Chasing Amy_ (1997)-writer/director
_Dogma_ (1999)-writer/director
_Jay and Silent Bob Strike Back_ (2001)-writer/director
_Jersey Girl_ (2004)-writer/director
_Clerks II_ (2006)-writer/director

It is difficult to come up with a filmmaker today who has engendered more loyalty from his fans than Kevin Smith. Perhaps it's because he is a writer and director without a hint of pretension, always quick to knock himself down before anyone else can get a chance. Or perhaps it's because he simply knows of what he writes. In 1994, after all, he created a living, breathing document that spoke to a generation in _Clerks_. Since then, with films like _Chasing Amy_, _Dogma_, and _Jersey Girl_, he's jumped in and out of his so-called view-askew universe of characters, tackling adult issues like sex, religion, and loss without ever apologizing for an obvious love for the profane and scatological. He's also well known for a frankness that is all too rare in Hollywood today. This conversation was no exception.

_Can you tell me a little bit about where you were raised in Highlands, New Jersey?_ KS: At one point in an interview in _Time_ magazine I was quoted as saying that I was raised in a white-trash town. And boy did they hate that. _So set the record straight here and now._ KS: Apparently what I should say is I was raised in a very blue-collar town. Those of us downtown were regarded as white trash and clam diggers because fishing was the big industry. Apparently the uptown folks who raised the red flags about the quote didn't consider themselves white trash at all. I always liked Highlands. It was a really nice place to grow up.
_Your dad worked in the post office. Is that right?_ KS: He was the dude that cancelled your stamps. He was never a letter carrier, which I think he would have enjoyed more. _Why do you say that?_ KS: He liked being out, actually. When he joined the post office, he was something of a young go-getter, and then the post office just kind of crushed his spirit. He once told me that when he'd been there for about a month, he was pulled aside by a coworker who was like, "You really have got to slow down and pace yourself. You're making the rest of us look bad." He just hated it, because he wasn't allowed to excel at his job. So an early lesson he instilled in me was that whenever I eventually got a job, it should be something I really enjoyed doing. _Do you think you take after him in other ways?_ KS: My sister always says that I take after my father. My father later in life became something of a recluse. He didn't want to go out and interact with the world, and that's kind of how I've always felt about the world. _It's surprising to hear you say that. You seem like such a public person._ KS: Well, right now I'm sitting in an office with the curtains drawn, so it has a dark cavelike atmosphere. It drives my wife up the fucking wall. Maybe it has something to do with the fact that so often I have to go out and do things in terms of work or Q&As and stuff like that. So when there's downtime, I'm like, "Let's sit at home and draw the curtains and watch a lot of TV." And that was my old man. My sister maintains I stole the Silent Bob character off Dad because he was a dude that never really spoke and then when he did speak, he usually said something that went right to the funny bone. _What are the first moviegoing experiences that you remember?_ KS: I remember my parents taking me to drive-ins quite a bit, and the first movie I remember seeing was _The Gumball Rally_. It was essentially like _Cannonball Run_ without celebrities. 
It was either that or _The Land That Time Forgot_ or _Jaws_. The only real clear memory I have of _The Land That Time Forgot_ is I went with my aunt Judy, who wasn't really an aunt but one of those people you call aunt. She had packed up a bunch of peanut-butter sandwiches in tinfoil, and while everyone was watching the movie, I went through every fucking sandwich and ate each one. So about halfway through the film she was like, "Hey, pass up one of those sandwiches." And I was like, "There are no more." And the whole car freaked out. The whole car looked at me like, "Are you insane? How could you eat five or six sandwiches?" I didn't have much interest in the movie, but really was kind of taken with the notion of eating, which is something I've never been really able to separate myself from. _You've said before that you think you grew up to do what you do because you grew up overweight?_ KS: Pretty much. I got into trying to be funny because I grew up fat, and you tend to overcompensate when you don't look like everyone else. I did grow up kind of hefty, and because of that I was very aware that I would have to come up with something other than a trim, tight body to attract the opposite sex or also to keep dudes from beating me up. _And yet that's never been a subject you've really mined in your films._ KS: Once I saw the movie _Heavy_ I figured, "OK, that story's been done." It just seemed to me like that's rather obvious for me, for the fat guy to make the fat-guy movie. I don't know. _It's telling that the actor you've used most often as your proxy onscreen is Ben Affleck._ KS: Right, who's very far from heavy. I remember identifying a hell of a lot more with Affleck during _Chasing Amy_ when he was a little paunchier and his teeth weren't capped. It's been almost a full decade now of the trim, shapely Affleck, which has been very good for his career but not very good for my self-esteem. 
_So going back to your childhood, what was the earliest dream for you of a career?_ KS: I guess the earliest dream would have been writing for _SNL_. And then later on I would trade that dream in for the dream of owning a deli. _Those two dreams don't usually go hand in hand._ KS: Well, from high school through the age of twenty I really wanted to write for _SNL_. Before that I always wanted to be a writer, but I didn't quite know how. I figured, like, maybe I'll wind up working at a newspaper, because the _Asbury Park Press_ is something that I could conceive of working for. The idea of writing for TV or movies was just such a foreign notion. After high school I went to college for a semester at the New School for Social Research. There's a bunch of schools within that university, and the school I attended was Eugene Lang. I chose Lang because it billed itself as a writer's school and because it was in New York, which would put me close to Rockefeller Center, which was where _SNL_ was made. A lot of times I would just blow off class and go sit in the lobby of 30 Rock and wait to be discovered, which was so stupid. I assumed that one day Lorne Michaels would walk by and be like, "Hey, you look funny. Can you write?" There would be times I would just sit there and kind of cry because I would be like, "Oh my God. I don't know how to accomplish what I want to accomplish." I finally dropped out of Lang after only doing one semester, and I went back to living in Highlands and worked in convenience stores. I had given up on the notion of being a writer and decided, "OK, I know how to make a sandwich." I've worked in convenience stores and delis all my life. The best I could hope for was one day to own a deli. When I dropped out, I figured— _"That was my best shot and I missed it."_ KS: I missed it, and I should just resign myself to a normal life. 
It wouldn't be until a few years later, on my twenty-first birthday, when I went to see _Slacker_ at the Angelika in New York that film presented itself as an option. _You saw_ Slacker _with your friend Vincent Pereira, right?_ KS: Yeah. He was the first person I'd ever known who talked about wanting to make movies. He worked at Quick Stop. He'd come in at nine at night and stock the milk room and then mop the floors. It took us a few months to actually start talking to each other, and when we did, we bonded over this love of _Twin Peaks_. Vinny was the one that was first like, "Did you ever think about directing?" And I was like, "Well, I didn't ever think about making a movie. I just like to write." He was like, "The problem with just being a screenwriter is that somebody else controls the material." And that lodged in my mind. _So what clicked when you saw_ Slacker _?_ KS: _Slacker_ had this kind of inspiring effect on me. I viewed that movie with a mixture of awe and arrogance. Awe because I couldn't believe what I was seeing. It opened a whole new world to me. It was the first independent film I'd seen. And arrogance because I was like, "Oh my God, if this counts, I could do this. I could make one of these." So when we're driving home from the theater that night I remember saying to Vincent, "I'm going to be a filmmaker. I think I would want to make a film." So I started looking into film schools, but they were all too expensive and too long. At this point in my life I'm twenty-one; I just want to do it! I don't want to sit around and talk about doing it. I just want to throw myself into the process and learn while doing. So that ruled out places like Tisch or USC or UCLA. So it was in the _Village Voice_ that I found this ad for the Vancouver Film School, which had a 1-800 number. I figured, all right, it's eight months long; it's not a degree program, but that's cool because they're going to put equipment in my hands. 
I'm actually going to get to make a film and it's only going to cost me nine grand, which is way less than even one year of film school elsewhere. So I applied and was delighted to find out I got in, but when I got there I realized they let every applicant in. I went for about four months before I was like, "Fuck this," because it was a lot of theory and there wasn't a lot of hands-on stuff. I started talking to the front office and I was like, "If I drop out, do I get my tuition back?" And they told me, "If you drop out by this Tuesday, you get half of your tuition back." I knew it was a big deal dropping out, because at home everyone would be like, "Oh, fucking big shit said he was going to film school and then he fucking dropped out." And my parents would be like, "There's yet another thing that Kevin quit." But internally I was like, "This is a good idea because I'm going to take that money I'm going to save and I'm going to sink it into a feature." I'd met Scott Mosier, my producer, at Vancouver. So I said, "Look, man, whichever of us comes up with a script first, the other one will help that person go out and shoot their movie." And he was like, "Oh, absolutely. Sounds like a good idea." _What did your parents say when you told them your plan to make a film?_ KS: My parents, to their credit, were never like, "What are you, fucking stupid?" But they were puzzled because it's not like we had somebody in our family who was in entertainment in any way, shape, or form. It was as if I'd said to them, "I'm going to go discover the nineteenth dimension." It was a foreign fucking notion to them. They were like, "OK. Good luck with that. We support you, but if it doesn't work out, go get a real job." My mother was like, "Your brother's a waiter; why don't you go be a waiter?" But to their credit, they never tried to dissuade me. 
And then, when we finally got around to making _Clerks_ , we were doing it on credit cards and renting the camera package was going to be about a thousand dollars a week. So for the three-week shoot it was going to be three thousand dollars and they wouldn't take credit cards. So I went to my parents and I was like, "Look, I need three thousand dollars cash to rent from this place." My old man, my mother told me, never made more than, like, thirty grand a year. So for my parents to say, "OK, we can lend you the three thousand dollars," that was everything they fucking had. So it was really kind of a beautiful and big gesture that I never appreciated until later on in life when I found out we were as broke as we were. _Did you have any fear during the making of_ Clerks _?_ KS: Never any fear. That's the weird thing. The fear didn't kick in until the first screening at the Angelika. Up until that point it was all passion. At that first screening at the IFFM [International Feature Film Market], it felt like, "Oh my God, this is it." We worked this hard to get here and nothing was going to happen to this movie. That's when I started breaking down. I was like, "Why's everyone cursing? Everyone curses so much in this fucking movie. And it looks terrible! Why'd we go black-and-white?" I had a really hard-core panic attack for about ten minutes in that nearly empty theater. _If the IFFM screening was the low point, was the high point when Harvey Weinstein said he wanted to buy_ Clerks _?_ KS: Absolutely. That was definitely the high point. We went to Sundance with the assumption that it's dead. Nobody's going to buy this movie. But it didn't matter, because we'd been selected and I felt like, "Wow, the ball is rolling. It will be easier to make another movie next time." I could show them this movie, say we'd gotten into Sundance, and find financing for the next flick. So just getting into Sundance was the high point until Harvey bought the fucking movie. 
_In retrospect, how much does_ Clerks _feel like it was simply a case of the right film at the right time?_ KS: _Clerks_ definitely was right film, right time. That's all that movie is to me at this point. I mean, the movie of course means the world to me, but I look at it and I don't go, "Man, you can fucking feel the talent burning." It was just like we said something about being a twenty-something slacker when everybody was curious about what it was like to be a twenty-something slacker. A year later, nobody would give a fuck. That movie is very much me in a nutshell at that point in my life and, ironically, it's me in a nutshell at this point in my life. I am still one of the laziest people, which is ironic because I tend to do a lot of things but ultimately, given my druthers, I would do nothing. I always look for the easiest way out. It's one of the reasons I don't want to make a big action movie. It's too much work. It's just much easier for me to write people talking and shoot that. _With the success of_ Clerks _, did it feel like you'd arrived? Did it feel like, "Now this is my life; I'm a filmmaker"?_ KS: The weird thing that happened is how quickly you slip into the new role that is your career. Film went from something that I wanted to do to it becoming my career overnight! And the transition happened so seamlessly that it was almost as if I'd been doing it forever. It was kind of creepy, like Nicholson in _The Shining_ where Grady tells him, "You've always been here." It's one of those things where you hit the ground running. So we immediately set about making another movie as quickly as possible. And that's what it's been like with every movie since. I still have that fear in me that the moment we stop— _They'll forget about you?_ KS: Yeah. They'll forget, or it won't be as easy to make another movie. You're like, "I don't want to give them time to think, because with enough time they'll be like, 'What the fuck are we funding this idiot for?'
" I remember reading a review in _The Washington Post_ of _Chasing Amy_ where this dickhead wrote, like, "I can't believe Miramax continues to fund this person. If you see the words 'film by Kevin Smith' written large on the screen, run, don't walk from the theater." At the time I was fucking devastated by it. It played on my insecurity. I was like, "What if he's right? What if Miramax reads this review and goes, 'Yeah, fuck this Kevin Smith guy!' " _Do you think your work is ever overrated?_ KS: Some people will say, "Oh, he's overrated." And they're probably right to a large degree. To other people I'm incredibly underrated. And the truth is it's probably somewhere in between. I know I've gotten far more breaks than most people, and it's simply based on sheer goodwill and likability. I've never been a real prick, so they let shit slide. Look how much shit they let slide on our first movie. Our first movie looks terrible, and yet people let it slide because they liked the jokes and they kind of liked the idea of it. I've got a lot of people out there rooting for me not because they necessarily believe in me as an artist, but just because they like me. I'm not one of these people who's like, "I am a born filmmaker and my legacy is fucking resolute." Every fucking day I sit there going, "Oh my God. Am I any good at this whatsoever? Have I just been cruising by on nothing but goodwill? Is today the day they're going to realize I'm fucking shit at my job?" I've always kept my budgets kind of low, so that helps. I've never been like, "Give me $80 million because I have got to make my fucking dream film." The fact that I've always been able to return on the investment of what we've done helps a lot. Because at the end of the day, I assure you, no matter how much goodwill people have for you, if you don't make any money, they don't fucking want to know you. Does it mean the world has embraced our shit? No.
But enough people have embraced our shit that it's enabled me to continue going. Mallrats _was the first time you had to work in collaboration with a studio._ KS: Yeah. That was an interesting process. I really wasn't prepared for people telling me, "You've never made a movie, so we're going to tell you how to do it." _Do you think you were too much of a pushover at that point, that you didn't hold your ground enough?_ KS: Absolutely. I wish to God I'd had a little more stones at that point, but I was so terrified that they'd kick us out of the fucking moviemaking club or some shit. I remember talking to Nina Jacobson, who was working at Universal at the time, about cutting out the scene of boys sitting around talking about scars they've gotten from eating pussy, which was a scene that was later put in _Chasing Amy_. And Nina was like, "You know, Kevin, it's not like anyone's trying to hammer out your originality. Don't you want your film to be seen by the widest possible audience? How is that a bad thing?" And I was like, "I guess it isn't." They win you over that way. It wasn't her going, "Surprise! You've come to the dark side!" It was just her making the studio's point of view evident. At the end of the day, they're just in it as a business. That's what they do for a living. They make movies. Of course studios want to reach the widest possible audience. They don't want to make movies like I do. I wanted to make movies for me first and my friends second and if anyone else liked it, great. _How would you respond today to the kinds of things Nina said to you back then?_ KS: I'd be like, "I appreciate that, but then I guess we shouldn't be working together." On _Jersey Girl_ I had to actually listen to studio notes and Harvey, but it wasn't that bad, because I knew it was coming from a guy who ain't fucking corporate. 
But at the same time we had a problematic movie on our hands—a movie that featured two people that the world couldn't care a fuck less about at that point. So we had to make it more palatable to the fucking masses, otherwise the studio was afraid it was going to be seen as _Gigli 2_. So there were changes made on the movie. Having gone through that with _Jersey Girl_ , it made it like I never want to go through this shit again. I don't want to work with fucking really famous people. It got me to a point where I was like, "I don't want to fucking work with a lot of money, because that means that the studio is going to make you do whatever you can to make it more palatable to the masses." It's why I eventually wound up walking away from _Green Hornet_. Because after _Jersey Girl_ , the notion of working on this movie that would never be my movie but was made to sell action figures or to play multiple times to the same popcorn-eating audience over the summer where it was just going to get trashed by fucking critics for being yet another comic-book movie . . . none of that seemed appealing to me. So I was just like, "Run, don't walk, from that idea." _You say the experience of_ Jersey Girl _made you walk away from_ Green Hornet _. Each of your films seems like a reaction to the one you've just completed._ KS: Very much so. I'm kind of a reactionary filmmaker. _Chasing Amy_ was definitely a reaction to _Mallrats_. _Jay and Silent Bob Strike Back_ was a reaction to the process of releasing _Dogma_ , which was so hellish with all the protests and hate mail and death threats. I just wanted to make a movie that went down smooth. Then with _Jersey Girl_ , I'm thinking, "There's nobody that can protest _this_." Little did I know that the protesters wouldn't be people who were protesting the movie as much as the cast of the movie.
And _Clerks II_ is definitely a reaction to having gone through the process of making a $35 million movie and watching it not reach $35 million theatrically. I felt like because of what I went through on it I wanted to go and do something where I didn't have to worry about a budget or making a bunch of money back or a cast. _How much do you agonize over your film's box-office prospects? All of your films seem to top out at making $30 million domestically._ KS: I always have this fear that _Fletch Won_ is the movie that breaks through and does like $100 million in business. It would break my heart a little bit, because it ain't mine. I'm very elated and still amazed in this day and age that there's still $30 million worth of interest in one of my stupid ideas, but at the same time if I'm going to make a movie that really breaks through and crosses into the mainstream— _You want it to be yours._ KS: Yeah. I always thought the closest we would get would be _Jersey Girl_. So there's a part of me that's like, "Wow, if _Fletch_ is the one that really breaks through, I'm going to have mixed feelings about it." It would be kind of like how did Gus Van Sant feel when _Good Will_ broke through? Don't get me wrong, I love _Good Will Hunting_ , but I wonder if there are days when he's like, "Why'd it have to be that one?" _After_ Mallrats _bombed did you feel like your career was in jeopardy?_ KS: The career didn't feel like it was in jeopardy after _Mallrats_. The career felt like it was just flat-out dead. The reviews were so bad and it just made no noise. There were no calls coming in after that. I remember the morning after the opening night we were just completely in the dumps. And the worst part about it was that Monday I was supposed to go to Boston College. I had agreed to go out there and speak. So me and Mosier drove to Boston and gave perhaps the most depressed, blue lecture of all time. We were just, "What's the point? Don't bother trying. You'll only fail."
_How did_ Chasing Amy _start for you?_ KS: _Chasing Amy_ kind of started while we were in post on _Mallrats_ in the early days of my relationship with Joey [Lauren Adams]. It really started as an idea about a guy who falls in love with a lesbian. Mosier and I had met and started hanging out with the girls who made _Go Fish_ at Sundance. Guinevere Turner and Scott in particular hung out and hit it off. And it was clear that Scott was kind of smitten with her, but things weren't going to go very far, so I suggested to Scott that he should write a movie about falling in love with a lesbian and how fucking fruitless it is. And after I kept bugging him about doing it for like a month or so and he didn't show any interest in it, I was like, "Fuck it. I'm doing it." Also at this time Joey and I were getting deeper into our relationship and I started to feel really insecure about the amount of experience Joey had in terms of not so much dating, but fucking. And we would get into fights about that because I didn't know how to deal with it. So yes, _Chasing Amy_ is a movie about a guy who falls in love with a lesbian, but really the problem is that he's more hung up by the fact that she's had sex with [other] guys than the fact that she has sex with girls. So I started writing it as kind of therapy for the relationship. _The case could be made that_ Chasing Amy _was the most important film for your career._ KS: The credibility that movie afforded me was insane. _Mallrats_ did its very best to erase any credibility I'd garnered with _Clerks_. There are many who'd gone before me who were fucking ruined by the sophomore slump. _With all your speaking engagements and activity on the Internet, you might be one of, if not the most, accessible filmmakers in history._ KS: I've always felt like I am not the most successful filmmaker, but I'm definitely the most accessible filmmaker. Some people are like, "Aren't you worried about having stalkers and shit?"
I'm like, "You can't have a stalker if you're as accessible as I am. Stalking is about people who aren't accessible. You can see me at a college Q&A. You can find me on the Internet. You can always find me." That's been a big part of my success. I spend a lot of time on the Web, and if film enthusiasts feel like they can talk to you, you create this kind of Internet family. I always feel like that's helped grow the fan base. _There's also a loyalty your fans have to you that is very strong. They feel like you're one of them._ KS: Totally. They feel, and rightfully so, that you're one of them who's just kind of made it to the other side of the fence. _Does your interaction with fans inform the work?_ KS: It really gives me instant feedback, which is phenomenal. Before the Internet all you really had to go on were reviews and box office. Post-Internet it was like, "Wait a sec, I can finally figure out what people who buy tickets think about the movie," because you could jump in there and pretty much learn about what they thought right away. _You were talking once about John Hughes saying that perhaps he can't tell any more stories of youth because he may have outgrown them. Do you think that may be the case for you? Does_ Jersey Girl _represent the beginning of a new stage for you?_ KS: I don't know if it represents the beginning of the trend of, like, I can't tell stories about youth, but maybe it does. Because when I think about the _Clerks II_ script, it's really not about being young. It's about what happens when you can't be young anymore. What happens to the angry young man when he turns thirty-two? The shit that used to work for you doesn't really work anymore, and you have to figure out a new way to be and interact with the world. _Do you worry about falling into the habit of making films for the wrong reasons? 
Do you think audiences will notice if it happens?_ KS: I think the way you would recognize a big change is me making a lot of money doing stuff that I'm obviously not that comfortable with. There're people who will say the worst thing is to be a sellout. Well, did you like _Clerks_? Because I sold out in that movie. Somebody bought it and put it in the theater and that, technically, is selling out. But there's selling out and there's selling out. There's doing movies purely for the paycheck. _I would think you've had many opportunities to do that. Any close calls?_ KS: I came very close to directing a movie that I didn't write and originate. It was going to be with Ben. And it was this movie we were going to do after _Jersey Girl_ called _Ghosts of Girlfriends Past_. It was yet another variation on _A Christmas Carol_ , and it was Ben playing a character who's going to a friend's wedding and he fucking gets visited by three ex-girlfriends in the night who show him the error of his ways and blah, blah, blah. Could I have been matched up with that material pretty well? Sure. Was I doing it because I was like, "Man, this is a story that must be told and I really believe in this?" No. It was me doing it because Ben was like, "Let's do it together. It'll be fun." It didn't come to pass, thankfully, but that was the closest I came. _Do you envision a scenario where you're accepting an Oscar one day?_ KS: Never. I don't think it'll ever happen. I just don't work in the same way that the Academy seems to like to work. _I assume that doesn't keep you up at night?_ KS: Not in a million years. You know, what would bother me a lot more is if one day I woke up and the fan base wasn't there anymore. That to me is the fucking award, and I know that sounds fucking corny to say, but I find it far more rewarding to know that I can jump onto the board in the morning and find a bunch of people who fucking love our stuff. Who cares if a bunch of people in film like what I do? 
Most of those cats see the shit you do for free at premieres and whatnot. It's much more rewarding to me that the person who's not in film, who can't fucking see this shit for free, takes part of their hard-earned fucking buck for that week and blows it on something you've created. That's far more rewarding for me.

THE DIRECTOR'S TAKE

KEVIN SMITH

_What is the first film you ever saw?_ It's either _The Gumball Rally_ or _Jaws_.

_What is your favorite film of all time?_ _Jaws, JFK, A Man for All Seasons, Do the Right Thing, The Last Temptation of Christ_

_What's your favorite line in a film?_ "One dog goes one way, the other dog goes the other way, and this guy's sayin' 'Whadda ya want from me?' "—Joe Pesci in _Goodfellas_

_What movie made you realize that film was an art?_ Hal Hartley's _Trust_. It feels like a painting that talks.

_What movie do you consider your guilty pleasure?_ _Mystery Men_

_Who is your favorite movie character of all time?_ The one I most identify with is Holden McNeil from _Chasing Amy_.

_What's your favorite movie snack food?_ Chocolate-chip cookies.

_Who is your favorite director of all time?_ Quentin Tarantino

_Who is the most impressive filmmaker working today?_ Quentin Tarantino

_What quality do the best directors share?_ A sense of how the world should be

_Who is your favorite actor or actress of all time?_ Ben Affleck

_Who is your favorite actor or actress of today?_ Ben Affleck

_Who would you cast as yourself in a film about your life?_ Affleck. He would have to put on some weight, though.

_If you could remake one movie, what would it be?_ _A Man for All Seasons_ , but I would never do it, because it was so perfect the first time.

_What is your best quality as a director?_ It's always handy to have a director who can give you new lines on the set.

_What is your greatest weakness as a director?_ I'm not as interested in visual storytelling as I should be.

_Finish this sentence: I'll never direct a movie with . . ._ a huge cock in my pants. That's the only "never" I can think of. I know I'll never direct a movie while sporting a huge cock.

_Finish this sentence: The perfect movie is . . ._ one that I can watch back to back to back for a week straight.

_What will they be saying about your work fifty years from now?_ "Kevin who?"

_What piece of advice do you have for aspiring filmmakers?_ Never write or direct anything that you are not interested in yourself. Never be afraid to lay yourself bare on the canvas.

_What are you as passionate about as moviemaking?_ Fucking my wife

**CHRIS & PAUL WEITZ**

"We were thinking about directing only in the way that you think about dating one of the models in one of the _Sports Illustrated_ swimsuit issues. It was a nice idea, but to get there seemed completely implausible."—Paul Weitz

**SELECTED CREDITS**

_Antz_ (1998)-Chris & Paul, cowriters
_American Pie_ (1999)-Chris: producer, Paul: director
_Down to Earth_ (2000)-Chris & Paul: codirectors
_About a Boy_ (2002)-Chris & Paul: cowriters/codirectors
_In Good Company_ (2004)-Chris: producer, Paul: writer/director/producer
_American Dreamz_ (2006)-Chris: producer, Paul: writer/director/producer

Making movies was never the plan for Chris and Paul Weitz. That's something of a surprise, considering their lineage. Their grandmother, Lupita Tovar, was a famed Mexican actress, their mother, Susan Kohner, an Academy Award-nominated actress in films like _Imitation of Life_. Yet, to hear them tell it, these brothers virtually fell into the business. One was a struggling playwright, the other on a path to the State Department. Yet once they began collaborating, film emerged as the destination for both. Paul is the elder by three years. Otherwise, they're pretty much in sync, as evidenced by the cohesive vision on display in films like _American Pie_ and _About a Boy_.
While those two films wouldn't seem to have much in common, it is clear both are preoccupied with the challenges of being a "good man" today. It's no wonder, since their father was such an intriguing man himself. _Your father, John Weitz, was a pretty fascinating man._ CW: He was a pretty extraordinary guy with no ties to Hollywood whatsoever. He had this kind of amazing World War II experience. He was born in Germany in 1923. He moved to London in 1933 when Hitler came to power; emigrated to the U.S., I think in '39; joined the army and was recruited into the OSS and had extraordinary undercover experiences toward the end of the war. And then subsequently he became a fashion designer. _He was a very successful one too. In fact, he was something of a celebrity._ CW: I guess he and Bill Blass and Geoffrey Beene were the American designers in the 1950s who sort of set the style of men's style. They became licensable names. _He had so many interests, from fashion to racecar driving. What do you think you inherited from him?_ CW: Actually, it's funny. I think the flaw that he passed down to me is that I get bored really easily. PW: I got different flaws. (laughs) I take things overly personally. There's some terrific German word he used in his biography of Hitler's banker, the essence of which is "He had a great degree of 'here I am.' " Meaning that it was deeply important to throw himself out there, whether or not he was going to get hit on the chin. I don't actually think it's a flaw, but sometimes it can manifest itself in less than savory ways. _What did he make of your film careers?_ CW: He loved it. He had a German sense of humor, which is very sort of bawdy and scatological, so I think he genuinely enjoyed _American Pie_. He was really proud and supportive, and I'm sure he was relieved too. PW: He was more relieved on my part because I'm older and it took me longer to make a living. He was just extremely relieved that we were able to fare for ourselves. 
_What did each of you imagine you'd end up doing with your lives when you were young?_ CW: I didn't really have an idea. I guess I could have imagined being, like, an English professor but that would have been terrible. PW: I remember you were going to be a diplomat, right? CW: Yeah, I almost became a diplomat. I almost went into the State Department. _What about you, Paul? Was there anything you dreamed of becoming when you were a kid?_ PW: Umm . . . CW: Drug dealer? (both laugh) PW: I would say drug dealer, yeah. I would say an actor, then a drug dealer, then maybe bum off my dad for the rest of my life until I realized that was not particularly attractive to women. (laughs) And then the only other thing I could imagine besides doing what I'm doing was to be a teacher. _Did any films have a big impact on either of you growing up?_ CW: _Star Wars_ changed everything for me. We were driving to see it and I was pissing in my pants with excitement and my brother was being kind of mentally sadistic. I didn't really know what it was about, and he told me it was a documentary of a debate between two astronomers. It was really just a big intellectual argument. (laughs) _Paul, does that story ring a bell?_ PW: (laughs) Yeah, sounds about right. I also remember convincing Chris that it would be fun to team up shoplifting in the local bookstores. We would go in there with our book bags and then fit as many books in as possible. This is when Chris is maybe six. And then we would come home and add up the face value of all the books we'd stolen and sort of compete with ourselves. CW: They were secondhand books, though. PW: It was a lifelong love of reading. _Did you have a camera around, growing up? Did you make any movies?_ CW: Absolutely not. We didn't have that childhood of making short films. We weren't driven to express ourselves in film. It all seemed to happen pretty much accidentally.
I think we ended up as directors largely to protect our scripts, just to get as much control over it as possible. PW: I guess the earliest thing I did was I wrote a play my mom sent to this thing called the Young Playwrights Competition. It got into the semifinals or something, and I got a one-day performance at a theater and my dad got a limo and took me and a bunch of my friends to it. Then in the middle of the play he was horrified to realize that the play was a parody of him. _You hadn't bothered to mention this to him beforehand?_ PW: I hadn't. And he had a bunch of friends there, and it was really awful because it was an unfair parody. I remember him coming home late to dinner that night and at the table he was steaming, saying, "This is obviously what he thinks of me" and my mom saying, "No, darling. It's just a play." (laughs) So it had a rocky beginning, the whole writing thing. _Paul, you were writing plays for a while in New York before you teamed up with your brother._ PW: Yeah. I managed to get an agent in New York and then I managed to not find any work for about four or five years (laughs). _Through the agent's fault or your own or just circumstance?_ PW: Completely not through the agent's fault. I was trying to get into a training program for writing soap operas. CW: (laughs) I don't remember that! That's crazy. PW: It is. I didn't get in. My agent told me it was "a piss-pot full of money." That was her quote. And unfortunately I didn't get in. _So the dream was to get a writing gig on_ Days of Our Lives _?_ PW: Yes, exactly. If you were an unsuccessful playwright in New York, this was one of your hopes—to pay your rent. Our dad was desperate for me to get a real job, but he felt like it would be too bourgeois to actually kick me in the ass and say, "Get a job" or, "I'm not going to help you out anymore." The thing is that Chris was younger, so he had less time to be unemployed. CW: That's true, actually. 
_Chris, you went to school in England for much of your education. You got your bachelor's and master's from Cambridge?_ CW: Yeah. I stayed through college at Cambridge and the master's degree actually they throw in two years after you leave Cambridge, provided you haven't committed a felony. It's kind of like a bonus for good behavior. You can really rack up the degrees there. _Did you have any idea what you were going to do?_ CW: No. I didn't. It's a point of pride there that you don't do anything that would be considered vocational, with the result that you're left very confused at the end, with this weird arrogance that is associated with being at Cambridge but with no marketable skills. _Did you have anxiety about not knowing what to do?_ CW: Tremendous anxiety. I worked for a freelance journalist for a while but really never got it together enough to be steadily employed. If it hadn't been for my brother, God knows what would have happened. _What was the first thing you guys collaborated on?_ CW: Well, the first thing we successfully collaborated on was this screenplay, _Legit_ , about porn actors trying to make an art film. PW: I was probably about twenty-five. CW: Yeah, I was twenty-one. _How did screenwriting emerge as something for you two to try?_ PW: Well, I had been dumped by a girl probably partly because I was just bumming off of my dad. I had a couple of jobs, one of which was writing algebra videos, and I decided that a way to make a living was to write screenplays. I wrote _Legit_ with Chris, and I wrote a romantic comedy on my own. The romantic comedy was pretty bad, and _Legit_ was pretty funny. We pitched an idea called _Karma Cops_. It was about a nonviolent Hindu cop from New Delhi who teams with a tough New York cop and MGM paid us guild minimum for it, which was complete manna from heaven. It was like the greatest thing that ever happened. 
_You worked for a long time before you started to get actual credits on films._ CW: It was seven years before we got our first screen credit. But in the meantime we had been rewriting other people's stuff and writing our own things, plugging away. PW: We were plankton-eaters for a long time. _Was all the rewrite work you were doing rewarding?_ PW: The getting paid part was truly thrilling. I think I knew, more than Chris, that it was a distinct possibility to write a heck of a lot and not get paid, so I was determined to be the hardest-working guy in showbiz. _Your first credit came for cowriting_ Antz _. How did that come about?_ PW: It was DreamWorks' first animated movie, and the only reason we got into that meeting was that it was grade-D writers who were being considered. A meeting was set up with Jeffrey Katzenberg and we came in and said, "We're really quick, and we know how to do this," and pitched him a bunch of ideas and he sort of laughingly said, "What are you doing for the next three years of your life?" We thought he was kidding, but it turns out it was actually true. CW: Writing for animation is kind of this war of attrition. It's really hard work. PW: Also, watching Katzenberg gave us some good lessons about directing in that he was on top of every detail but at the same time he was really good at giving credit to other people when they had done a good job. And you would see him calculating which arguments were worth standing your ground on. He would give on a small point in order to get something that was more important to him, which I think is a good lesson about directing. _By this time were you thinking about directing a project of your own?_ PW: I think we were thinking about directing only in the way that you think about dating one of the models in one of the _Sports Illustrated_ swimsuit issues. It was a nice idea, but to get there seemed completely implausible. 
_And yet it wasn't long after that_ American Pie _came your way._ PW: We were assuming that we were going to have to do something indie first. CW: We were surprised that they brought us in to meet as directors. And then, once we got the meeting, one thing I distinctly remember is lying to them about having directed theatre. I sort of figured that they were not going to check up on me if I said that I had directed theatre in New York. _Did you come close to landing projects to direct before_ American Pie _?_ PW: There was a _Wonderful World of Disney_ movie that we missed out on. CW: Thank God! It was _Angels in the Endzone_. _What did you respond to in the script for_ American Pie _?_ CW: It was really funny and actually quite heartfelt. It's not the kind of movie that we watched at all, but it had some heart to it and some great set pieces. PW: I remember sitting in Chinatown in San Francisco, reading through the script for the second time, and I think that we actually kind of knew the direction that we wanted to take, which was to make it more palatable for girls. Also I felt like it was something that would not be hurt by being shot unpretentiously. I instinctually felt that we were probably not visual geniuses at that point, and a lot of first-time film directors make a mistake in trying to be stylistically bold. _Was there any awkwardness shooting the nudity in the film?_ PW: It was awkward because Chris would show up nude. (laughs) I think that we didn't really believe until the last moment that Shannon Elizabeth had read the part of the script where she was naked. I think we suspected that at the last moment we would say, "Yeah, and then you take your robe off and you're naked," and she would balk at it, whereas she was completely ready to do it. _Because you were making a relatively low-budget studio film, were you pretty much left on your own?_ CW: We were sort of under the radar. 
And then we had this insane test screening that was really, for us, unprecedented. It's the only time we've ever done that well in a test screening. We had the perfect audience, who thought it was just the greatest film ever made. That's because they were fourteen. It was like crack for these kids. _Even though you codirected it, the Guild wouldn't let you share the credit. How was it decided Paul would get the director credit? Was there a coin flip?_ CW: No. It was seniority. (both laugh) PW: Yeah, plus Chris knew that I was the less secure of the two of us. _The film came to be lumped in with other films, like_ There's Something About Mary, _as gross-out movies, accused of lowering the bar on humor. What did you make of that conversation?_ PW: It was interesting, because it came out right after the Columbine murders and there was this hue and cry about kids getting into R-rated movies when they were too young for it. And it was really interesting to see the degree to which society lumps in sex with violence and what a blind eye they turn to violence in film. Suddenly _American Pie_ was part of a larger societal discussion, which actually was great. I don't think that being lumped in as a gross-out movie ever bothered me actually. I didn't give a damn. I think the film was more subversive because it was sweet. _The film was hugely successful. Did you suddenly find yourselves being treated differently?_ CW: Afterwards we thought it was sort of routine to have a movie that grossed $100 million, which really disappointed us when our next movie didn't. It was this kind of bizarre, surreal experience when suddenly I remember we were invited to a lot of parties we had no business being in. But we were pretty much offered the same kind of stuff as _American Pie_ after that, because people feel more comfortable with you as a bankable commodity with something you've done before.
PW: I remember one of the producers of _American Pie_ coming in very excited and saying, "Hey, guys, I have a film to offer you; you're gonna love it. It's called _Chick Masters_." Things did change overnight for me, because someone having a manic episode broke into my apartment and it terrified me so much that I ended up moving within two days of the film's release. They had a delusion that I was running a pornography ring. (both laugh) I'm not kidding. I remember this person inside the apartment, ranting and raving, and having a delivery guy arrive at my house with a congratulatory basket for the film opening and trying to tell him not to go inside the house because there was somebody crazy in there. _You followed up_ American Pie _with a retelling of_ Heaven Can Wait _in_ Down to Earth _. It's not considered your best work._ CW: I think in retrospect it's very difficult to remake a beloved film and distinguish yourself. It's much better to remake a bad movie. Chris Rock wanted to make a very sweet romantic comedy, and we probably wanted something more edgy from him. So I don't think any of us were working at the top of our abilities. That's why we sort of sleepwalked through that one, not that we weren't trying hard. We just approached it as business as usual and didn't shake ourselves up in terms of thinking of doing things differently, which is probably what we should have done. PW: That movie would have been helped if we had brought a degree of visual inventiveness to it. CW: We should have been more pretentious on the second one. When we did _About a Boy_ afterwards, we felt we had every reason to kind of let it all hang out and do something totally different and mess around with the camera a lot more. _So by the time you filmed_ About a Boy _you realized it was now or never._ CW: I think we felt that we got such a hard kick in the ass from _Down to Earth_ that we had nothing to lose. 
I can remember there were specific shots on _About a Boy_ where we had our initial plan and then we had a bail-out position of shooting them more conventionally and then being on set and saying, "Screw the bail-out position; let's just live or die with this." _You wrote the film to Badly Drawn Boy. Do you always write to music?_ PW: For me it's really important in almost like a Pavlov's dog way. The most terrifying moment in writing is the moment of sitting down and beginning. So if I put on a particular piece of music it almost tricks me into thinking, "It's time to write." _Do you play music on the set for your actors?_ PW: No. I think only Cameron Crowe can get away with it. _You were both nominated for an Oscar for the screenplay. What do you remember from that experience?_ PW: The whole thing is a cringing experience. It's one of those moments when you have no control. You go from making a film where you have the illusion of control to experiences which are more or less humiliating because they bring out the worst aspects of you. And it was almost immediately after the invasion of Iraq. CW: That totally spoiled it for us. (both laugh) PW: So not only was there the slight feeling that the Kodak Theater was going to be blown up [by terrorists], but there was also the slight feeling that everyone involved there was in some macabre dance of death. _Has the system for how you've worked together, particularly in the writing process, changed over the years?_ CW: It varies. I'm not sure we've figured out the ideal way to work together. In terms of writing, we've gone from sitting in the same room for hours at a time to working more separately. I think Paul is probably more disciplined about things, and I tend to procrastinate more and do things quickly at the last moment. 
PW: I think there's two ways to allay one's natural fears about writing: One is to just sort of hope that the gods will descend, and the other way is to try to be a banker about it and sit there regardless of whether you feel particularly inspired or not. And while the banker looks like he's doing more work, I've noticed that Chris actually gets down to it more easily than I sometimes. But I do think that, in terms of collaboration it's all to do with prep, and aside from that, just respecting what the other guy does. It's very fun for me, when we're working on something together, to look at the new pages that Chris has done, because it's all stuff that I generally find very good and very funny. _And hopefully it inspires your creative end?_ PW: It's more like I'm thinking, "Oh, good, I don't have to do that." It's also just fun. Unfortunately we've both noticed that the things that crack us up about each other's writing are misspellings. There's nothing that will make us laugh like a good typo. _What about in terms of story sensibilities? Are you pretty much on the same page?_ CW: I think we're wired pretty similarly for comedy. Part of the reason I didn't codirect with Paul on _In Good Company_ was that it was a subject and themes that were very important to him at the time, and I was more attracted to doing a different kind of movie. _Coming out of_ About a Boy _, you wanted to do something on a much larger scale and for a while you were going to direct a big-budget film_ _based on the fantasy series_ His Dark Materials _._ CW: Yeah, I think that probably Paul thinks about contributing to a body of work piece by piece, whereas I am always thinking that the next movie is going to be the last movie for me because I find it to be a kind of exhausting process. I wanted to bite off as much as I could chew. It was probably a bit more than I could chew. _Do you regret leaving it now?_ CW: Sometimes I definitely do, because those books are incredibly important to me. 
I think the lesson is that a movie of that kind, which is sort of a CGI epic, is incredibly difficult to make, and you need a certain form of mania and insanity to make it. I gradually realized that I was going to have to convert myself into an obsessive-compulsive person to get it done. And I was also realizing that it was going to be a matter of two to three years, which is a different order of magnitude from anything we've done before. If I were in a different place in my life, then that might be something I could contemplate. It would essentially mean that I wouldn't have a life for two to three years, and I wasn't ready to do that. _Paul, you seem to be in a different place than Chris, being a husband and now a father. It seems like that had to inform_ In Good Company _._ PW: There is the father-daughter relationship that's at the core. I like to learn things when I'm writing something, because you have to empathize with the characters and you have to try to listen to them and there were a couple of areas there that I wanted to learn about. I wanted to project myself into what it might be like to try to be a good person in the thick of being middle-aged. And then the other character, Topher Grace's character, is very work obsessed and able to hide an emotional life by achieving certain things, such as having a beautiful wife or having a certain career track or buying a Porsche, and I could identify with that guy as well. So, to some degree, that part of the movie was me trying to make sense of myself in what I'd been like when I was a bit younger. But what was also interesting to me was that it was a realistic story. I felt, for instance, there was a certain point where I was tempted to have Dennis Quaid's character sabotage the younger character because that would add more drama. _According to filmmaking doctrine it seems almost like that's what the character is supposed to do._ PW: Exactly. 
In order to dramatize, or to have a sort of fake act one or act two swing but I thought, "Well, the character wouldn't do that." And I ended up, sort of consciously, opting for realism more than basing it on another movie. _Most of your films have dealt with, in one way or another, the difficulties of trying to be a "good man" in today_ _'s society. Is that a theme you've been surprised to see emerge again and again?_ PW: Well, to me that's funny because it's definitely the central theme of a few of the films, certainly _American Pie._ But I think now it's bothersome to me that it's so clearly a theme. It's muddying the waters, because if you are working on a theme, you don't necessarily want to think about it. You should be dealing with it primarily on a subconscious level. _Is it important to both of you that all of your films be "about something"? It seems like you always want your films to work on multiple levels._ CW: I think so. I mean, I don't think we ever want to make a "message movie." I find those kinds of movies sort of boring. PW: I think that things are either about something or about the avoidance of something. In other words, most Hollywood entertainment is about the avoidance of reality, but reality still lurks beneath it. So the answer is yes. For me, it's not about saying anything in particular, but just to acknowledge that there are layers of meaning beneath what we're seeing. CW: Movies today are made backwards. Most movies made today are business propositions, really. If they're about anything, they're about getting people's asses into the seats and then selling the DVDs and stuff. We have a rougher road convincing people to make our movies, because they're not always obvious business propositions. It's the idea that there are good people in it and it's a good script and people will like it because it's good. 
_There must be a temptation by this stage of your careers to go down the easier road and make a more overtly commercial film._ CW: Well, the safety valve is that we get depressed when we contemplate that. PW: Yeah, there are low moments when you think about how easy it would be to do this interesting film with some megastar, but then you project yourself ahead to when you're sitting in the editing room and you're only one of the schmucks who gets to determine what the tone of the piece is, as opposed to being the primary schmuck. _What's your take on contemporary American filmmaking? How does it compare to, say, the supposed glory days of the 1970s?_ PW: It's less clearly an interesting state than the seventies, but personally I think you get a false sense about how good things were in earlier days because you don't see all the crap that was being made. CW: There's just such a huge market that there's enough room for good films to be made, but also there are a lot of really scary tendencies that make me not want to go out and see a movie. _What tendencies?_ CW: (laughs) I think that sadism is a weird tendency in films at the moment—enjoying seeing people get killed or mangled. I find that disturbing, and it makes me feel like I'm out of touch. PW: There are a lot of violent films today by people who have not personally experienced violence. CW: Yeah, that's part of what upsets me. It's violence without regard for the consequences, really. It's no longer cartoonish violence where people sort of bounce back. You actually have enjoyment of mutilation. I can definitely tell I am not in the same mood as the audience when I'm watching that. And that part of it scares me in terms of wanting to make movies and wanting people to appreciate them. 
_Were you going to say something, Paul?_ PW: No, I'm just thinking how there's an air of delightful innocence to _American Pie._ (laughs)

THE DIRECTORS' TAKE
CHRIS & PAUL WEITZ

_What is the first film you ever saw?_
Chris: _Midway_
Paul: _Jaws_

_What is your favorite film of all time?_
Chris: _Lawrence of Arabia_
Paul: _The Four Hundred Blows_

_What's your favorite line in a film?_
Chris: "Nobody's perfect."— _Some Like It Hot_
Paul: "It goes up to eleven."— _This Is Spinal Tap_

_What movie made you realize that film was an art?_
Paul: That was always apparent.

_What movie do you consider your guilty pleasure?_
Chris: _Point Break_

_Who is your favorite movie character of all time?_
Chris: Max Fisher from _Rushmore_
Paul: I feel like I identify with the characters in almost any film I see. And for a few hours afterwards, I'm acting like the characters.

_What's your favorite movie snack food?_
Chris: Goobers
Paul: Popcorn

_Who is your favorite director of all time?_
Chris: Kurosawa
Paul: Truffaut

_Who is the most impressive filmmaker working today?_
Chris: Scorsese

_What quality do the best directors share?_
Paul: Humility toward the thing that they're making
Chris: Stubbornness

_Who is your favorite actor or actress of all time?_
Paul: Humphrey Bogart

_If you could remake one movie, what would it be?_
Paul: _Down to Earth_

_What is your best quality as a director?_
Paul: Politeness

_What is your greatest weakness as a director?_
Chris: Need for sleep

_Finish this sentence: I'll never direct a movie with . . ._
Paul: someone who sticks a gun in someone else's mouth.

_Finish this sentence: The perfect movie is . . ._
Chris: an hour and twenty minutes long.
Paul: cast entirely with monkeys.

_What will they be saying about your work fifty years from now?_
Paul: Nothing
Chris: "Yeah, I think I heard of that. . . ."

_What piece of advice do you have for aspiring filmmakers?_
Chris: Don't.
Paul: Remember that film is a mirror of life and not life itself.

_What are you as passionate about as moviemaking?_
Chris: Food
Paul: My family, and perceiving the stories that are in front of your nose every day
\section{Introduction} For a group $G$ let $G'$ be the commutator subgroup. For an element $g \in G'$ the \emph{commutator length} ($\mathrm{cl}(g)$) denotes the minimal number of commutators needed to express $g$ as their product. We define the \emph{stable commutator length (scl)} via $\mathrm{scl}(g) = \lim_{n \to \infty} \mathrm{cl}(g^n)/n$. Stable commutator length is well studied and has geometric meaning: Let $X$ be a topological space, let $\gamma$ be a loop in $X$ and let $[\gamma]$ be the conjugacy class in $\pi_1(X)$ corresponding to $\gamma$. Then both $\mathrm{cl}([\gamma])$ and $\mathrm{scl}([\gamma])$ measure the minimal complexity of an orientable surface needed to bound $\gamma$. The theory of these invariants is developed by Calegari in \cite{calegari:scl}. A group $G$ is said to have a \emph{gap in stable commutator length} if there is a constant $C>0$ such that either $\mathrm{scl}(g) = 0$ or $\mathrm{scl}(g) \geq C$ for every non-trivial $g \in G'$. If $G$ is non-abelian, such a constant necessarily satisfies $C \leq 1/2$. Similarly we may define gaps in scl for classes of groups. Many classes of ``negatively curved'' groups have a gap in scl; see Subsection \ref{subsec:spectral gaps of scl}. A common way of establishing gaps in $\mathrm{scl}$ is by constructing \emph{quasimorphisms} and using \emph{Bavard's Duality Theorem} (see \cite{bavard}): For an element $g \in G'$, \[ \mathrm{scl}(g) = \sup_{\bar{\phi} \in \mathcal{Q}(G)} \frac{\bar{\phi}(g)}{2 D(\bar{\phi}) } \] where $\mathcal{Q}(G)$ is the space of \emph{homogeneous quasimorphisms} and $D(\bar{\phi})$ is the \emph{defect of $\bar{\phi}$}; see Subsection \ref{subsec:quasimorphisms and Bavard's Duality} for the definitions and the precise statement. 
Though it is known that for every element $g \in G'$ the supremum in Bavard's Duality Theorem is obtained by a so-called \emph{extremal quasimorphism}, these maps are only known explicitly in special cases and are hard to construct; see \cite{calegari:extremal} and \cite{calegari:isometric}. In the first part of this paper, we will construct a family of extremal quasimorphisms on non-abelian free groups. Let $\mathbb{F}_2 = \langle \texttt{a}, \texttt{b} \rangle$ be the free group on generators $\texttt{a}$ and $\texttt{b}$ and let $w \in \mathbb{F}_2$ be such that it does not conjugate into $\langle \texttt{a} \rangle$ or $\langle \texttt{b} \rangle$. Then we will construct a homogeneous quasimorphism $\bar{\phi}$ such that $\bar{\phi}(w) \geq 1$ and $D(\bar{\phi})\leq 1$. This realises the well-known gap of $1/2$ in the case of non-abelian free groups. Our approach is as follows: instead of constructing more complicated quasimorphisms $\bar{\phi}$, we first ``simplify'' the element $w$. This simplification is formalised by functions $\Phi \colon G \to \mathcal{A} \subset \mathbb{F}_2$, called \emph{letter-quasimorphisms}; see Definition \ref{defn:letter quasihomomorphism}. Here $\mathcal{A}$ denotes the set of \emph{alternating words} in $\mathbb{F}_2 = \langle \texttt{a}, \texttt{b} \rangle$ with the generators $\texttt{a}$ and $\texttt{b}$. These are the words in which the letters alternate between $\{ \texttt{a}, \texttt{a}^{-1} \}$ and $\{ \texttt{b}, \texttt{b}^{-1} \}$. Letter-quasimorphisms are a special case of the quasimorphisms between arbitrary groups defined by Hartnick--Schweitzer \cite{hartnick-schweitzer}. After this simplification, the extremal quasimorphisms on $G$ are obtained by pulling back the most basic quasimorphisms $\mathbb{F}_2 \to \mathbb{R}$ via such letter-quasimorphisms $G \to \mathcal{A} \subset \mathbb{F}_2$.
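To fix ideas, here is a small example of the notion just defined: the words $\texttt{a} \texttt{b}^{-1} \texttt{a}$ and $\texttt{b} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a}$ are alternating, whereas $\texttt{a} \texttt{a} \texttt{b}$ and $\texttt{a} \texttt{b} \texttt{b}^{-1}$ are not, since in each of the latter two consecutive letters are drawn from the same set. Note that, in particular, every alternating word is reduced.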
We further deduce that such quasimorphisms are induced by a circle action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ by examining the defect and using Theorem \ref{thm:ghys} due to Ghys; see also \cite{ghys}. We show: \begin{reptheorem}{thm:main} Let $G$ be a group, $g \in G$ and suppose that there is a letter-quasimorphism $\Phi \colon G \to \mathcal{A}$ such that $\Phi(g)$ is non-trivial and $\Phi(g^n) = \Phi(g)^n$ for all $n \in \mathbb{N}$. Then there is an explicit homogeneous quasimorphism $\bar{\phi} \colon G \to \mathbb{R}$ such that $\bar{\phi}(g) \geq 1$ and $D(\bar{\phi}) \leq 1$. If $G$ is countable then there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ such that $[\delta^1 \bar{\phi}]=\rho^*\mathrm{eu}^\mathbb{R}_b \in \mathrm{H}^2_b(G,\mathbb{R})$, for $\mathrm{eu}^\mathbb{R}_b$ the real bounded Euler class. \end{reptheorem} By Bavard's Duality Theorem it is immediate that if such an element $g$ additionally lies in $G'$, then $\mathrm{scl}(g) \geq 1/2$. We state Theorem \ref{thm:main} separately as it may also be applied in other cases than the ones presented in this paper; see Remark \ref{rmk:chen-heuer}. Many groups $G$ have the property that for any element $g \in G'$ there is a letter-quasimorphism $\Phi_g \colon G \to \mathcal{A}$ such that $\Phi_g(g^n) = \Phi_g(g)^n$ where $\Phi_g(g) \in \mathcal{A}$ is non-trivial. We will see that residually free groups and right-angled Artin groups have this property. Note the similarities of this property with being \emph{residually free}; see Remark \ref{rmk:criterion for gaps}. In the second part of this paper we apply Theorem \ref{thm:main} to amalgamated free products using left-orders. A subgroup $H < G$ is called \emph{left-relatively convex} if there is an order on the left cosets $G/H$ which is invariant under left multiplication by $G$. We will construct letter-quasimorphisms $G \to \mathcal{A} \subset \mathbb{F}_2$ using the sign of these orders. 
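A minimal example, independent of what follows, may help to illustrate this definition. Let $G = \mathbb{Z}^2$ and $H = \{0\} \times \mathbb{Z}$. The left cosets $G/H$ are in bijection with the first coordinate, and the order given by $(m,n)H \preceq (m',n')H$ if and only if $m \leq m'$ is well defined and invariant under left multiplication; hence $H < \mathbb{Z}^2$ is left-relatively convex. By contrast, a proper subgroup $H$ of a finite group $G$ is never left-relatively convex: a $G$-invariant order on the finite set $G/H$ would force every element of $G$ to act as an order-automorphism of a finite chain, hence trivially, contradicting the transitivity of the action when $|G/H| > 1$. Note also that the trivial subgroup is left-relatively convex in $A$ if and only if $A$ is left-orderable; with $C$ trivial and $A = B = \mathbb{Z}$, the amalgamated product below becomes $\mathbb{F}_2$, so the results of Section \ref{sec:amalgamation} in particular recover the case of free groups.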
We deduce: \begin{reptheorem}{thm:amalgamation} Let $A, B, C$ be groups, $\kappa_A \colon C \hookrightarrow A$ and $\kappa_B \colon C \hookrightarrow B$ injections and suppose both $\kappa_A(C) < A$ and $\kappa_B(C) < B$ are left-relatively convex. If $g \in A \star_C B$ does not conjugate into one of the factors then there is a homogeneous quasimorphism $\bar{\phi} \colon A \star_C B \to \mathbb{R}$ such that $\bar{\phi}(g) \geq 1$ and $D(\bar{\phi}) \leq 1$. If $G$ is countable then there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ such that $[\delta^1 \bar{\phi}]=\rho^*\mathrm{eu}^\mathbb{R}_b \in \mathrm{H}^2_b(G,\mathbb{R})$, for $\mathrm{eu}^\mathbb{R}_b$ the real bounded Euler class. \end{reptheorem} It is possible to generalise Theorem \ref{thm:amalgamation} to graphs of groups; see Remark \ref{rmk:chen-heuer}. Again by Bavard's Duality Theorem we infer that any such $g$ which also lies in the commutator subgroup satisfies $\mathrm{scl}(g) \geq 1/2$. We apply this to right-angled Artin groups using the work of \cite{convexsub}. This way we prove: \begin{reptheorem}{thm:raags and scl} Every non-trivial element $g \in G'$ in the commutator subgroup of a right-angled Artin group $G$ satisfies $\mathrm{scl}(g) \geq 1/2$. This bound is sharp. \end{reptheorem} This is an improvement of the bound previously found in \cite{raags1} and \cite{raags2} who deduced a general bound of $1/24$ and a bound of $1/20$ if the right-angled Artin group is two dimensional. Every subgroup of a right-angled Artin group will inherit this bound. Such groups are now known to be an extremely rich class, following the theory of special cube complexes. See \cite{wise}, \cite{haglund-wise}, \cite{agol}, \cite{martin1} and \cite{martin2}. Stable commutator length may serve as an invariant to distinguish virtually special from special cube complexes. We collect some properties of the quasimorphisms constructed in this paper. 
\begin{itemize} \item The quasimorphisms are induced by circle actions $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ even though we do not construct the explicit action $\rho$. In particular, for every $e \not = g \in F'$, where $F$ is a non-abelian free group and $\mathrm{scl}(g) = 1/2$, there is an \emph{extremal} quasimorphism $\bar{\phi} \colon F \to \mathbb{R}$ induced by a circle action. It is unknown if for an arbitrary element $g \in F'$ there is an action of $F$ on the circle such that the induced quasimorphism is extremal with respect to $g$. \item There are relatively few quasimorphisms needed to obtain the $1/2$ bound in Theorem \ref{thm:raags and scl}. Let $G$ be a right-angled Artin group. Analysis of the constructions shows that there is a sequence $\mathcal{S}_N \subset \mathcal{Q}(G)$ of nested sets of homogeneous quasimorphisms such that for every non-trivial cyclically reduced element $g$ of length less than $N$ there is some $\bar{\phi} \in \mathcal{S}_N$ such that $\bar{\phi}(g) \geq 1$ and $D(\bar{\phi}) \leq 1$. We see that $|\mathcal{S}_N| = O(N)$, where the rate constant only depends on the number of generators of the right-angled Artin group. \item We obtain gap results even for elements which are not in the commutator subgroup. This suggests that it may be interesting to use Bavard's Duality Theorem as a generalisation of stable commutator length to an invariant of general group elements $g \in G$. That is, to study the supremum of $\bar{\phi}(g) / 2$ where $\bar{\phi}$ ranges over all homogeneous quasimorphisms with $D(\bar{\phi}) = 1$ which vanish or are bounded on a fixed generating set. In \cite{calegari:ziggurats} the authors studied this supremum over all homogeneous quasimorphisms induced by circle actions. They could prove that this supremum has certain qualitative similarities to the experimental values observed for $\mathrm{scl}$.
This includes the experimental phenomenon that values with low denominators appear more frequently in $\mathrm{scl}$. \end{itemize} \subsection{Organisation} In Section \ref{sec:QM and Bavard} we introduce notation, definitions and basic or well established results on stable commutator length, quasimorphisms and Bavard's Duality Theorem. In Section \ref{sec:letter-thin triples and alpha, beta} we introduce \emph{letter-thin triples} which are a special type of triples $(x_1,x_2,x_3)$ of alternating elements $x_1, x_2, x_3 \in \mathcal{A}$. These will be crucial in estimating the defect of the quasimorphisms constructed in this paper. We will define maps $\alpha, \beta \colon \mathcal{A} \to \mathcal{A}$, which we show to respect letter-thin triples in Lemma \ref{lemma:alpha keeps thin.}. In Section \ref{sec:gaps via Letter-Quasimorphisms} we define and study \emph{letter-quasimorphisms} which are maps from arbitrary groups to alternating words of the free group. We deduce Theorem \ref{thm:main} which serves as a criterion for $\mathrm{scl}$-gaps of $1/2$ using these letter-quasimorphisms. Section \ref{sec:Left orders and convex subgroups} recalls some results of \cite{convexsub} on left relatively convex subgroups and orders on groups. Using the sign of these orders we are able to deduce $1/2$ gaps for amalgamated free products in Section \ref{sec:amalgamation}; see Theorem \ref{thm:amalgamation}. We show the $1/2$ gaps for right-angled Artin groups in Section \ref{sec:RAAGs and scl}; see Theorem \ref{thm:raags and scl}. \subsection*{Acknowledgements} I would like to thank my supervisor, Martin Bridson, for his help, support and guidance, and Ric Wade for his very helpful comments. I would further like to thank the referee for carefully reading the paper and recommending helpful improvements. 
Moreover, I would like to thank the Isaac Newton Institute for Mathematical Sciences in Cambridge for support and hospitality during the programme \emph{Non-Positive Curvature Group Actions and Cohomology} where work on this paper was undertaken. I would like to thank Danny Calegari for a stimulating conversation at the Isaac Newton Institute and Max Forester for pointing out errors in a previous version of this paper. This work was supported by EPSRC grant no EP/K032208/1. The author is also supported by the Oxford-Cocker Scholarship. \section{Quasimorphisms and Bavard's Duality Theorem} \label{sec:QM and Bavard} In Subsection \ref{subsec:quasimorphisms and Bavard's Duality} we give basic properties and definitions of stable commutator length and Bavard's Duality Theorem. In Subsection \ref{subsec:spectral gaps of scl} we collect some known results on (spectral) gaps in stable commutator length. In Subsections \ref{subsec:generalised qm} we define generalised quasimorphisms and in Subsection \ref{subsec:brooks and 2-brooks} the well known Brooks quasimorphisms. \subsection{Quasimorphisms and Bavard's Duality Theorem} \label{subsec:quasimorphisms and Bavard's Duality} For what follows Greek letters ($\alpha$, $\beta$) will denote generic functions, upper-case Roman letters ($A$, $B$) will denote generic groups, lower-case Roman letters ($a,b$) generic group elements and code-font ($\texttt{a}$, $\texttt{b}$) will denote letters in a free group. We will stick to this notation unless it is mathematical convention to do otherwise. Let $G$ be a group. For two elements $g,h \in G$ the \emph{commutator} is defined via $[g,h] = g h g^{-1} h^{-1}$ and the group generated by all such commutators is called the \emph{commutator subgroup} of $G$ and is denoted by $G'$. For an element $g \in G'$ we set \[ \mathrm{cl}(g) = \min \{ k \mid g = \prod_{i=1}^k [g_i,h_i]; g_i, h_i \in G \} \] the \emph{commutator length of $g$}. 
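For orientation, consider the simplest example in $\mathbb{F}_2 = \langle \texttt{a}, \texttt{b} \rangle$. Clearly $\mathrm{cl}([\texttt{a},\texttt{b}]) = 1$, but commutator length is far from multiplicative on powers: a classical identity of Culler expresses $[\texttt{a},\texttt{b}]^3$ as a product of just two commutators, and in fact $\mathrm{cl}([\texttt{a},\texttt{b}]^n) = \lfloor n/2 \rfloor + 1$ for all $n \geq 1$; see \cite{calegari:scl}. In particular, the limit defining stable commutator length below evaluates to $\mathrm{scl}([\texttt{a},\texttt{b}]) = 1/2$.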
Note that $\mathrm{cl}$ is subadditive and hence the limit \[ \mathrm{scl}(g) = \lim_{n \to \infty} \frac{\mathrm{cl}(g^n)}{n} \] exists and is called \emph{stable commutator length (scl)}. See \cite{calegari:scl} for a comprehensive reference on scl. Calegari showed that in non-abelian free groups scl can be computed efficiently in polynomial time and is rational. For a group $G$, the set of possible values of $\mathrm{scl}$ is not fully understood, even for non-abelian free groups. See Subsection \ref{subsec:spectral gaps of scl} for a discussion on gaps in $\mathrm{scl}$. We note the following basic property: \begin{prop} $\mathrm{scl}$ is monotone and characteristic. That is, for any group homomorphism $\theta \colon G \to H$ and any $g \in G$ we have $\mathrm{scl}(g) \geq \mathrm{scl}( \theta(g))$. If $\theta$ is an automorphism, then $\mathrm{scl}(g) = \mathrm{scl}( \theta(g))$. \end{prop} A \emph{quasimorphism} is a map $\phi \colon G \to \mathbb{R}$ such that there is a constant $D$, such that for all $g, h \in G$, $|\phi(g) + \phi(h) - \phi(gh) | \leq D$. The infimum of all such $D$ is called the \emph{defect} of $\phi$ and denoted by $D(\phi)$. Note that quasimorphisms form a vector space under pointwise addition and scalar multiplication. A quasimorphism $\bar{\phi}$ is said to be \emph{homogeneous} if $\bar{\phi}(g^n) = n \bar{\phi}(g)$ for all $n \in \mathbb{Z}$, $g \in G$. In particular, $\bar{\phi}$ is \emph{alternating}, i.e. $\bar{\phi}(g^{-1}) = - \bar{\phi}(g)$ for all $g \in G$. Every quasimorphism $\phi \colon G \to \mathbb{R}$ is boundedly close to a unique homogeneous quasimorphism $\bar{\phi} \colon G \to \mathbb{R}$ defined via \[ \bar{\phi}(g) := \lim_{n \to \infty} \frac{\phi(g^n)}{n} \] and we call $\bar{\phi}$ the \emph{homogenisation} of $\phi$. Homogeneous quasimorphisms on $G$ form a vector space, denoted by $\mathcal{Q}(G)$. 
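An instructive example of a quasimorphism which is not a homomorphism, anticipating the Brooks quasimorphisms of Subsection \ref{subsec:brooks and 2-brooks}, is the following. For a fixed reduced word $w \in \mathbb{F}_2$ and $g \in \mathbb{F}_2$, let $C_w(g)$ denote the number of occurrences of $w$ as a subword of the reduced word representing $g$, and set \[ \phi_w(g) = C_w(g) - C_{w^{-1}}(g). \] When two reduced words are multiplied and the product is reduced, occurrences of $w$ or $w^{-1}$ can only appear or disappear in a bounded neighbourhood of the point of cancellation, so $\phi_w$ has finite defect. For $w = \texttt{a} \texttt{b}$ one checks that $\phi_w([\texttt{a},\texttt{b}]^n) = n$, whereas every homomorphism $\mathbb{F}_2 \to \mathbb{R}$ vanishes on $\mathbb{F}_2'$; hence $\phi_w$ is not at bounded distance from a homomorphism.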
\begin{prop} \label{prop:defect of homogenisation doubles} Let $\phi \colon G \to \mathbb{R}$ be a quasimorphism and let $\bar{\phi}$ be its homogenisation. Then $D(\bar{\phi}) \leq 2 D(\phi)$. \end{prop} See Lemma 2.58 of \cite{calegari:scl} for a proof. For what follows we will \emph{always} decorate homogeneous quasimorphisms with a bar-symbol, even if they are not explicitly induced by a non-homogeneous quasimorphism. We refer the reader to \cite{frigerio} and \cite{calegari:scl} for references on quasimorphisms and stable commutator length. If $g_1$ and $g_2$ lie in the same conjugacy class of $G$ then $\bar{\phi}(g_1) = \bar{\phi}(g_2)$, hence homogeneous quasimorphisms are class functions. The key ingredient to calculate gaps in stable commutator length is Bavard's Duality Theorem: \begin{thm} \label{thm:Bavards duality} \cite{bavard} Let $G$ be a group and let $g \in G'$. Then \[ \mathrm{scl}(g) = \sup_{\bar{\phi} \in \mathcal{Q}(G)} \frac{|\bar{\phi}(g)|}{2 D(\bar{\phi})}. \] \end{thm} See \cite{calegari:scl} for a proof and a generalisation of this statement. This theorem allows us to estimate stable commutator length using (homogeneous) quasimorphisms. It can be shown that the supremum in Bavard's Duality Theorem is attained. That is, for every element $g \in G'$ there is a homogeneous quasimorphism $\bar{\phi}$ with $D(\bar{\phi}) = 1$ such that $\mathrm{scl}(g) = \bar{\phi}(g)/2$. These quasimorphisms are called \emph{extremal} and were studied in \cite{calegari:extremal}. \subsection{(Spectral) Gaps in scl} \label{subsec:spectral gaps of scl} It was shown by \cite{DH} that every non-trivial element $w \in \mathbb{F}_n'$ in the commutator subgroup of a non-abelian free group satisfies $\mathrm{scl}(w) \geq 1/2$ and that every non-trivial commutator $[w_1,w_2] \in \mathbb{F}_n$ satisfies $\mathrm{scl}([w_1,w_2]) = 1/2$.
Using the monotonicity of scl we may conclude that for an arbitrary group $G$ every commutator $[g_1,g_2] \in G'$ satisfies $\mathrm{scl}([g_1,g_2]) \leq 1/2$. On the other hand, some elements $g \in G'$ satisfy $\mathrm{scl}(g) = 0$ for trivial reasons, for example if they are torsion elements or if a positive power of the element is conjugate to a negative power of it. We call the infimum of $\{ \mathrm{scl}(g) \mid g \in G', \mathrm{scl}(g) > 0 \}$ the \emph{gap of $\mathrm{scl}$}, often called the \emph{spectral gap}, and say that a group \emph{has a gap in scl} if this number is positive. Many classes of ``negatively curved'' groups have a gap in scl. \begin{itemize} \item Residually free groups have a gap of exactly $1/2$ by Duncan and Howie \cite{DH}. \item Mapping class groups of closed orientable surfaces, possibly with punctures, have a gap depending on the surface; see \cite{BBF}. \item Hyperbolic groups have a gap which depends on the hyperbolicity constant and the number of generators; see \cite{calegari_fujiwara}. \item Some classes of groups may not have a uniform gap but the first accumulation point of positive $\mathrm{scl}$ on conjugacy classes may be uniformly bounded away from zero. For example, for non-elementary, torsion-free hyperbolic groups and for the fundamental groups of closed hyperbolic manifolds this accumulation point is at least $1/12$; see Theorem B of \cite{calegari_fujiwara} and Theorem 3.11 of \cite{calegari:scl}. \item Sometimes, one may control $\mathrm{scl}$ on certain generic group elements. If $G = G_1 \star G_2$ is the free product of two torsion-free groups $G_1$ and $G_2$ and $g \in G'$ does not conjugate into one of the factors, then $\mathrm{scl}(g) \geq 1/2$; see \cite{chen} and \cite{Ivanov-Klyachko}. Similarly, if $G = A \star_C B$ and $g \in G'$ does not conjugate into one of the factors and is such that $C g C$ does not contain a copy of any conjugate of $g^{-1}$, then $\mathrm{scl}(g) \geq 1/12$.
See Theorem D of \cite{calegari_fujiwara} for the first proof of this gap and \cite{scl_in_bs_groups} for the sharp gap and a generalisation to graphs of groups. \item Baumslag--Solitar groups have a sharp uniform gap of $1/12$; see \cite{scl_in_bs_groups}. \end{itemize} Note that this list is not meant to be comprehensive. By monotonicity, having a gap in scl may serve as an obstruction to group embeddings. If $H$ and $G$ are non-abelian groups with $H \hookrightarrow G$ and $C > 0$ is such that every non-trivial element $g \in G'$ satisfies $\mathrm{scl}(g) \geq C$, then so does every non-trivial element of $H'$. \subsection{Generalised Quasimorphisms} \label{subsec:generalised qm} It is possible to generalise quasimorphisms $\phi \colon G \to \mathbb{R}$ to maps $\Phi \colon G \to H$ for $G,H$ \emph{arbitrary groups}. Two quite different proposals for such a generalisation come from Fujiwara--Kapovich (\cite{fuji-kapo}) and Hartnick--Schweitzer (\cite{hartnick-schweitzer}). Whereas the former maps are quite restrictive, the latter type of maps are very rich. The ``letter-quasimorphisms'' defined in this paper will be quasimorphisms as defined by Hartnick--Schweitzer, as shown at the end of Subsection \ref{subsec:letter-quasimorphisms and well-behaved letter quasimorphisms}. Adapting the definition of \cite{hartnick-schweitzer} we call a map $\Phi \colon G \to H$ between arbitrary groups a \emph{quasimorphism} if for every (ordinary) quasimorphism $\alpha \colon H \to \mathbb{R}$ the pullback $\alpha \circ \Phi \colon G \to \mathbb{R}$ of $\alpha$ to $G$ via $\Phi$ is a quasimorphism on $G$. A map $\phi \colon G \to \mathbb{R}$ is a quasimorphism in the sense of Hartnick--Schweitzer if and only if it is an ordinary quasimorphism.
The quasimorphisms $G \to \mathbb{R}$ constructed in this paper will all be pullbacks of the most basic quasimorphisms $\mathbb{F}_2 \to \mathbb{R}$ via letter-quasimorphisms $G \to \mathcal{A} \subset \mathbb{F}_2$; see Remark \ref{rmk:quasimorphisms are pullback of hs qm}. \subsection{Brooks Quasimorphisms} \label{subsec:brooks and 2-brooks} For what follows $\mathbb{F}_2$ will denote the free group on the two generators $\texttt{a}$ and $\texttt{b}$. A word $w = \texttt{x}_1 \cdots \texttt{x}_k \in F(\{ \texttt{a}, \texttt{b} \} ) = \mathbb{F}_2$ is called \emph{reduced} if it has no backtracking. Unless stated otherwise \emph{we will always assume that elements in the free group are represented by reduced words}. A sub-letter $\texttt{x}_i$ is called a \emph{power of $\texttt{a}$ (or $\texttt{b}$)} if $\texttt{x}_i \in \{ \texttt{a}, \texttt{a}^{-1} \}$ (or $\texttt{x}_i \in \{ \texttt{b}, \texttt{b}^{-1} \}$). Furthermore, $w$ is called \emph{alternating} if the letters of $w$ alternate between an element in $\{ \texttt{a}, \texttt{a}^{-1} \}$ and an element in $\{ \texttt{b}, \texttt{b}^{-1} \}$. The set of alternating words of $\mathbb{F}_2 = \langle \texttt{a}, \texttt{b} \rangle$ is denoted by $\mathcal{A}$. A word $v = \texttt{y}_1 \cdots \texttt{y}_l$ is called a \emph{subword} of $w = \texttt{x}_1 \cdots \texttt{x}_k$ if $l \leq k$ and there is an $n \in \{0, \ldots, k-l \}$ such that $\texttt{y}_i = \texttt{x}_{i+n}$ for every $i \in \{ 1, \ldots, l \}$. Let $w \in \mathbb{F}_2$, $g \in \mathbb{F}_2$ be arbitrary reduced words. Let $\nu_w(g)$ be the number of (possibly overlapping) occurrences of $w$ as a subword of the reduced word $g$. Then the function \[ \eta_w = \nu_w - \nu_{w^{-1}} \] is a quasimorphism, called a \emph{Brooks quasimorphism}. These maps were introduced by Brooks in \cite{brooks} to show that the vector space of (homogeneous) quasimorphisms of the free group is infinite dimensional.
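The counting functions $\nu_w$ and $\eta_w$ are elementary to compute on words encoded as strings; the sketch below uses upper-case letters to stand for inverse generators (this encoding and the helper names are ours, not from the text):

```python
def inverse(w):
    # formal inverse of a word: reverse it and swap case
    # (encoding: 'a', 'b' are the generators, 'A', 'B' their inverses)
    return w[::-1].swapcase()

def nu(w, g):
    # number of (possibly overlapping) occurrences of w in the reduced word g
    return sum(1 for i in range(len(g) - len(w) + 1) if g[i:i + len(w)] == w)

def eta(w, g):
    # the Brooks quasimorphism associated to w, evaluated at g
    return nu(w, g) - nu(inverse(w), g)

# eta_a vanishes on the commutator [a,b] = a b a^-1 b^-1, encoded 'abAB'
assert eta('a', 'abAB') == 0
# the combination eta_{ab} - eta_{ba} takes the value 2 on [a,b]
assert eta('ab', 'abAB') - eta('ba', 'abAB') == 2
```

The second assertion is the computation behind the extremal quasimorphism $\eta_{\texttt{a}\texttt{b}} - \eta_{\texttt{b}\texttt{a}}$ discussed next.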
Observe that for a letter $\texttt{x}$, the map $\eta_\texttt{x}$ is a homomorphism. Brooks quasimorphisms have been vastly generalised to other cases and groups; see \cite{EF} and \cite{me}. Let $g,h \in \mathbb{F}_2$ and let $c_1, c_2, c_3$ be words such that $g = c_1^{-1} c_2$, $h= c_2^{-1} c_3$ and $h^{-1} g^{-1} = c_3^{-1} c_1$, where each of these products is reduced. Then it is easy to see that the value $\eta_w(g) + \eta_w(h) - \eta_w(gh)$ only depends on the first $|w|-1$ letters of the words $c_1, c_2, c_3$, hence the defect is indeed finite. There is an extremal Brooks quasimorphism for the basic commutator $[ \texttt{a}, \texttt{b} ]$, namely $\eta_{\texttt{a} \texttt{b}} - \eta_{ \texttt{b} \texttt{a} }$. This and homomorphisms will be the only Brooks quasimorphisms occurring in this paper. \begin{exmp} \label{exmp: extemal brooks quasimorphisms on free group} Consider $[\texttt{a}, \texttt{b}]$, the commutator of the letters $\texttt{a}$ and $\texttt{b}$. Then it is easy to see that the quasimorphism $\eta_0 = \eta_{\texttt{a} \texttt{b}} - \eta_{\texttt{b} \texttt{a}}$ satisfies that $\eta_0([\texttt{a},\texttt{b}])=\bar{\eta}_0([\texttt{a},\texttt{b}])=2$, $D(\eta_0)=1$ and $D(\bar{\eta}_0) = 2$. As usual, $\bar{\eta}_0$ denotes the homogenisation of $\eta_0$. By Bavard's Duality Theorem (\ref{thm:Bavards duality}) we may estimate $\mathrm{scl}([\texttt{a},\texttt{b} ]) \geq \bar{\eta}_0([\texttt{a}, \texttt{b}])/(2 D(\bar{\eta}_0)) = 1/2$ and, as $\mathrm{scl}([\texttt{a},\texttt{b}]) \leq 1/2$ (see Subsection \ref{subsec:spectral gaps of scl}), we conclude $\mathrm{scl}([\texttt{a},\texttt{b}])=1/2$ and see that $\bar{\eta}_0$ is extremal. \end{exmp} \subsection{Bounded Cohomology} We define (bounded) cohomology of discrete groups and state its basic properties. We refer the reader to \cite{frigerio} for a thorough treatment of the bounded cohomology of discrete groups.
Let $G$ be a group, let $V$ be a $\mathbb{Z} G$-module and set $C^n(G, V) = \{ f \colon G^n \rightarrow V \}$. For what follows, $V = \mathbb{Z}$ or $V = \mathbb{R}$ and we think of $V$ as a $\mathbb{Z} G$-module with trivial action. Let $\| \cdot \|_\infty$ be the $l^\infty$-norm on $C^n(G, \mathbb{R})$ and set \[ C^n_b(G, V) = \{ f \in C^n(G,V) \mid \|f \|_\infty < \infty \} \subset C^n(G,V). \] Define the well-known coboundary maps for the inhomogeneous resolution $\delta^n \colon C^n(G, V) \rightarrow C^{n+1}(G, V)$ via \begin{align*} \delta^n(f) (g_1, \ldots, g_{n+1}) = {}& f(g_2, \ldots, g_{n+1}) + \sum_{i=1}^n (-1)^i f(g_1, \ldots, g_i g_{i+1}, \ldots, g_{n+1}) \\ &+ (-1)^{n+1} f(g_1, \ldots, g_n) \end{align*} and note that $\delta^n$ restricts to $\delta^n \colon C^n_b(G,V) \to C^{n+1}_b(G,V)$. Set \begin{align*} Z^n_{(b)}(G,V) &= \textrm{ker} \big( \delta^n \colon C_{(b)}^n(G, V) \to C_{(b)}^{n+1}(G,V) \big) \\ B^n_{(b)}(G,V) &= \textrm{im} \left( \delta^{n-1} \colon C_{(b)}^{n-1}(G, V) \to C_{(b)}^n(G,V) \right) \end{align*} the (bounded) cocycles $Z^n_{(b)}(G,V)$ and the (bounded) coboundaries $B^n_{(b)}(G,V)$. Then $\mathrm{H}^n(G,V) = Z^n(G,V) / B^n(G,V)$ is called the \emph{ordinary cohomology} and $\mathrm{H}_b^n(G,V) = Z_b^n(G,V) / B_b^n(G,V)$ is called the \emph{bounded cohomology} of $G$ with coefficients in $V$. The embedding $C_b^n(G, V) \hookrightarrow C^n(G, V)$ induces a map $c^n \colon \mathrm{H}_b^n(G,V) \to \mathrm{H}^n(G,V)$ called the \emph{comparison map}. Let $\phi \colon G \to \mathbb{R}$ be a quasimorphism. Then $\delta^1 \phi \in C^2_b(G,\mathbb{R})$ is a bounded $2$-cocycle and hence induces a class $[\delta^1 \phi] \in \mathrm{H}^2_b(G, \mathbb{R})$. These classes are exactly the classes which lie in the kernel of the comparison map $c^2 \colon \mathrm{H}_b^2(G,\mathbb{R}) \to \mathrm{H}^2(G,\mathbb{R})$ described above.
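The defining relation of these complexes, $\delta^{n+1} \circ \delta^n = 0$, can be sanity-checked numerically in the lowest interesting degree; the sketch below (helper names ours) does so for a random $1$-cochain on the cyclic group $\mathbb{Z}/5$:

```python
import random

MOD = 5  # the cyclic group Z/5, written additively

def delta1(f):
    # coboundary of a 1-cochain f: G -> R (inhomogeneous resolution)
    return lambda g1, g2: f(g2) - f((g1 + g2) % MOD) + f(g1)

def delta2(c):
    # coboundary of a 2-cochain c: G x G -> R
    return lambda g1, g2, g3: (c(g2, g3) - c((g1 + g2) % MOD, g3)
                               + c(g1, (g2 + g3) % MOD) - c(g1, g2))

f = {g: random.random() for g in range(MOD)}
c = delta1(f.__getitem__)
# delta^2(delta^1 f) vanishes identically, so every coboundary is a cocycle
assert all(abs(delta2(c)(x, y, z)) < 1e-9
           for x in range(MOD) for y in range(MOD) for z in range(MOD))
```

In particular $\delta^1\phi$ of any $1$-cochain $\phi$ is automatically a $2$-cocycle; boundedness of $\delta^1\phi$ is exactly the quasimorphism condition.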
(Bounded) Cohomology is functorial in both slots: Any homomorphism $\alpha \colon G \to H$ induces a well defined map $\alpha^* \colon \mathrm{H}_{(b)}^n(H,V) \to \mathrm{H}_{(b)}^n(G,V)$ on (bounded) cohomology by pulling back cocycles on $H$ to cocycles on $G$ via $\alpha$. The map $\mathbb{Z} \to \mathbb{R}$ induces a \emph{change of coefficients} map $\mathrm{H}_{(b)}^n(G,\mathbb{Z}) \to \mathrm{H}_{(b)}^n(G,\mathbb{R})$. \subsection{Bounded Cocycles via Actions on the Circle and Vice Versa} This subsection states a classical correspondence between bounded cohomology and circle actions developed by Ghys; see \cite{ghys}. Also, see \cite{cicle_quasimorph_modern} for a thorough treatment of this topic. Let $\mathrm{Homeo}^+(S^1)$ be the group of orientation preserving homeomorphisms of the circle and let \[ \mathrm{Homeo}^+_\mathbb{Z}(\mathbb{R}) = \{ f \in \mathrm{Homeo}^+(\mathbb{R}) \mid \forall n \in \mathbb{Z}, x \in \mathbb{R}: f(x+n) = f(x)+n \} \] be the subgroup of orientation preserving homeomorphisms of the real line that commute with the integer translations. By identifying $S^1 \cong \mathbb{R} / \mathbb{Z}$ we obtain a surjection $\pi \colon \mathrm{Homeo}^+_\mathbb{Z}(\mathbb{R}) \to \mathrm{Homeo}^+(S^1)$. The kernel of $\pi$ is isomorphic to $\mathbb{Z}$ via $\iota \colon n \mapsto f_n$ with $f_n \colon x \mapsto x+n$ and lies in the centre of $\mathrm{Homeo}^+_\mathbb{Z}(\mathbb{R})$. Hence \[ \begin{tikzcd} 0 \arrow[r] & \mathbb{Z} \arrow[r,"\iota"] & \mathrm{Homeo}^+_\mathbb{Z}(\mathbb{R}) \arrow[r,"\pi"] & \mathrm{Homeo}^+(S^1) \arrow[r] \arrow[l, bend right, "\sigma"] & 1 \end{tikzcd} \] is a central extension and hence corresponds to a class $\mathrm{eu} \in \mathrm{H}^2(\mathrm{Homeo}^+(S^1), \mathbb{Z})$, the \emph{Euler class}.
This class is represented by the cocycle $\omega \colon (g,h) \mapsto \sigma(g) \sigma(h) \sigma(gh)^{-1} \in \mathbb{Z}$ by identifying $\mathbb{Z}$ with $\mathrm{ker}(\pi)=\mathrm{im}(\iota)$, where $\sigma$ is any set-theoretic section $\sigma \colon \mathrm{Homeo}^+(S^1) \to \mathrm{Homeo}^+_\mathbb{Z}(\mathbb{R})$. Let $\sigma_b$ be the unique section such that $\sigma_b(f)(0) \in [0,1)$. Then $\omega_b(g,h) = \sigma_b(g) \sigma_b(h) \sigma_b(gh)^{-1}$ satisfies that for all $g,h \in \mathrm{Homeo}^+(S^1)$, $\omega_b(g,h) \in \{ 0,1 \}$ and hence $\omega_b$ is a \emph{bounded} cocycle. We call the class $\mathrm{eu}_b = [\omega_b] \in \mathrm{H}^2_b(\mathrm{Homeo}^+(S^1), \mathbb{Z})$ the \emph{bounded Euler class}. See \cite{me_extensions} for the correspondence between group extensions and bounded cohomology. The image of $\mathrm{eu}_b$ under the change of coefficients $\mathrm{H}^2_b(\mathrm{Homeo}^+(S^1), \mathbb{Z}) \to \mathrm{H}^2_b(\mathrm{Homeo}^+(S^1), \mathbb{R})$ is called the \emph{real bounded Euler class} and denoted by $\mathrm{eu}^\mathbb{R}_b$. Any action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ induces a bounded class $\rho^*\mathrm{eu}_b \in \mathrm{H}^2_b(G,\mathbb{Z})$ (resp. $\rho^*\mathrm{eu}_b^\mathbb{R} \in \mathrm{H}^2_b(G, \mathbb{R})$). Ghys (\cite{ghys}) showed that two actions $\rho_1, \rho_2 \colon G \to \mathrm{Homeo}^+(S^1)$ are \emph{semi-conjugate} if and only if $\rho_1^*\mathrm{eu}_b=\rho_2^*\mathrm{eu}_b \in \mathrm{H}^2_b(G,\mathbb{Z})$. See \cite{cicle_quasimorph_modern} for a precise definition of semi-conjugacy. Similarly, we have $\rho^* \mathrm{eu}^\mathbb{R}_b = 0 \in \mathrm{H}^2_b(G,\mathbb{R})$ if and only if $\rho$ is semi-conjugate to an action by rotations. The class $\rho^*\mathrm{eu}_b \in \mathrm{H}^2_b(G, \mathbb{Z})$ may be represented by a cocycle $\rho^*\omega_b \in Z^2_b(G, \mathbb{Z})$ such that for every $g,h \in G$, $\rho^*\omega_b(g,h) \in \{0,1 \}$.
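On the subgroup of rotations the cocycle $\omega_b$ can be written down completely explicitly: $\sigma_b$ lifts the rotation by angle $t$ to the translation $x \mapsto x + \{t\}$, so $\omega_b$ becomes the classical ``carrying'' cocycle $\{\alpha\} + \{\beta\} - \{\alpha+\beta\}$. A minimal sketch (our encoding of rotations by their angles, helper name ours):

```python
import random

def omega_b(alpha, beta):
    # bounded Euler cocycle on two rotations of the circle:
    # sigma_b lifts the rotation by angle t to x -> x + (t mod 1), so
    # sigma_b(g) sigma_b(h) sigma_b(gh)^{-1} is the integer translation
    # by {alpha} + {beta} - {alpha + beta}, which lies in {0, 1}
    return round((alpha % 1) + (beta % 1) - ((alpha + beta) % 1))

# omega_b only takes the values 0 and 1, so the class it represents is bounded
for _ in range(1000):
    s, t = random.uniform(-10, 10), random.uniform(-10, 10)
    assert omega_b(s, t) in (0, 1)
```

The same formula evaluated with exact rational angles also satisfies the cocycle identity $\omega_b(\beta,\gamma) - \omega_b(\alpha+\beta,\gamma) + \omega_b(\alpha,\beta+\gamma) - \omega_b(\alpha,\beta) = 0$ on the nose.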
Surprisingly, a converse statement holds: \begin{thm} \label{thm:ghys} \cite{ghys} Let $G$ be a discrete countable group and let $[\omega] \in \mathrm{H}^2_b(G,\mathbb{Z})$ be a class represented by a cocycle $\omega$ such that for all $g,h \in G$, $\omega(g,h) \in \{ 0, 1 \}$. Then there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ such that $\rho^*\mathrm{eu}_b = [\omega] \in \mathrm{H}_b^2(G, \mathbb{Z})$. \end{thm} See also Theorem 1.3 of \cite{cicle_quasimorph_modern} for a proof. This allows us to show that certain quasimorphisms are induced by a circle action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ without explicitly constructing $\rho$. \section{Letter-Thin Triples and the Maps $\alpha$ and $\beta$} \label{sec:letter-thin triples and alpha, beta} The set of alternating words $\mathcal{A} \subset \mathbb{F}_2$ is the set of all words in the letters $\texttt{a}$ and $\texttt{b}$ where the letters alternate between $\{ \texttt{a}, \texttt{a}^{-1} \}$ and $\{ \texttt{b}, \texttt{b}^{-1} \}$. For example, $\texttt{a} \texttt{b} \texttt{a}^{-1} \texttt{b}^{-1}$ is an alternating word but $\texttt{a} \texttt{b} \texttt{b} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{b}^{-1}$ is not. We will define maps $\alpha, \beta \colon \mathcal{A} \to \mathcal{A}$ and develop their basic properties in Subsection \ref{subsec:alpha and beta}. We also define a version of these maps on $\bar{\mathcal{A}}_0$, the set of conjugacy classes of \emph{even}-length words of $\mathcal{A}$, in order to understand how $\alpha, \beta$ behave on powers; see Proposition \ref{prop:powers of alpha, beta}. In Subsection \ref{subsec:letter-thin and alpha and beta} we define certain triples $(x_1,x_2,x_3)$ with $x_1,x_2,x_3 \in \mathcal{A}$, called \emph{letter-thin triples}. We think of them as the sides of (thin) triangles; see Figure \ref{fig:triangles}. Note that such triples are not triangles in the usual sense, i.e.
the sides $x_1,x_2,x_3$ do \emph{not} correspond to the geodesics between three points in some metric space like a Cayley graph. Letter-thin triples will be crucial in estimating the defect of the quasimorphisms we construct in this paper. We will see that $\alpha$ and $\beta$ map letter-thin triples to letter-thin triples in Lemma \ref{lemma:alpha keeps thin.}, which is the main technical result of this paper. In Subsection \ref{subsec:brooks qm, homomorphisms and letter-thin} we see that basic Brooks quasimorphisms and homomorphisms behave well on letter-thin triples. We usually prove the properties we state for $\alpha, \beta$ just for $\alpha$ and note that all properties may be deduced analogously for $\beta$ by interchanging $\texttt{a}$ and $\texttt{b}$; see Proposition \ref{prop:cutting words}, (\ref{prop-case:interchange a b}). \subsection{The Maps $\alpha$ and $\beta$, Definition and Properties} \label{subsec:alpha and beta} We will describe two maps $\alpha, \beta \colon \mathcal{A} \to \mathcal{A}$ sending alternating words to alternating words. Define $\mathcal{S}_\texttt{a}^+, \mathcal{S}_\texttt{a}^- \subset \mathcal{A}$ as \begin{align*} \mathcal{S}_\texttt{a}^+ & = \{ \texttt{a} \texttt{y}_1 \texttt{a} \cdots \texttt{a} \texttt{y}_l \texttt{a} \mid \texttt{y}_i \in \{ \texttt{b}, \texttt{b}^{-1} \}, l \in \mathbb{N} \} \\ \mathcal{S}_\texttt{a}^- & = \{ \texttt{a}^{-1} \texttt{y}_1 \texttt{a}^{-1} \cdots \texttt{a}^{-1} \texttt{y}_l \texttt{a}^{-1} \mid \texttt{y}_i \in \{ \texttt{b}, \texttt{b}^{-1} \}, l \in \mathbb{N} \} \end{align*} that is, $\mathcal{S}_\texttt{a}^+$ is the set of alternating words which start and end in $\texttt{a}$ and do not contain the letter $\texttt{a}^{-1}$ and $\mathcal{S}_\texttt{a}^-$ is the set of alternating words which start and end in $\texttt{a}^{-1}$ and do not contain the letter $\texttt{a}$. Note that we assume $0 \in \mathbb{N}$, i.e.
$\texttt{a} \in \mathcal{S}_\texttt{a}^+$ and $\texttt{a}^{-1} \in \mathcal{S}_\texttt{a}^-$. Analogously we define the sets $\mathcal{S}_\texttt{b}^+ \subset \mathcal{A}$ and $\mathcal{S}_\texttt{b}^- \subset \mathcal{A}$ as \begin{align*} \mathcal{S}_\texttt{b}^+ & = \{ \texttt{b} \texttt{x}_1 \texttt{b} \cdots \texttt{b} \texttt{x}_l \texttt{b} \mid \texttt{x}_i \in \{ \texttt{a}, \texttt{a}^{-1} \}, l \in \mathbb{N} \} \\ \mathcal{S}_\texttt{b}^- & = \{ \texttt{b}^{-1} \texttt{x}_1 \texttt{b}^{-1} \cdots \texttt{b}^{-1} \texttt{x}_l \texttt{b}^{-1} \mid \texttt{x}_i \in \{ \texttt{a}, \texttt{a}^{-1} \}, l \in \mathbb{N} \} \end{align*} and observe that $\texttt{b} \in \mathcal{S}_\texttt{b}^+$ and $\texttt{b}^{-1} \in \mathcal{S}_\texttt{b}^-$. We will decompose arbitrary words $w \in \mathcal{A}$ as a \emph{unique} product of elements in $\{ \texttt{b}, \texttt{b}^{-1} \}$ and $\mathcal{S}_\texttt{a}^+ \cup \mathcal{S}_\texttt{a}^-$: \begin{prop} \label{prop: a decomposition} Let $w \in \mathcal{A}$ be an alternating word. Then there are $\texttt{y}_0, \ldots, \texttt{y}_l$ and $s_1, \ldots, s_l$ such that \[ w = \texttt{y}_0 s_1 \texttt{y}_1 s_2 \cdots \texttt{y}_{l-1} s_l \texttt{y}_l \] where $\texttt{y}_i \in \{\texttt{b}, \texttt{b}^{-1} \}$ except that $\texttt{y}_0$ and/or $\texttt{y}_l$ may be empty and $s_i \in \mathcal{S}_\texttt{a}^+ \cup \mathcal{S}_\texttt{a}^-$. Moreover, $s_i$ alternates between $\mathcal{S}_\texttt{a}^+$ and $\mathcal{S}_\texttt{a}^-$, i.e. there is no $i \in \{ 1, \ldots, l-1 \}$ such that $s_i, s_{i+1} \in \mathcal{S}_\texttt{a}^+$ or $s_i, s_{i+1} \in \mathcal{S}_\texttt{a}^-$. This expression is unique. \end{prop} We will call this way of writing $w$ the \emph{ $\texttt{a}$-decomposition of $w$}. 
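Since the $\texttt{a}$-decomposition is obtained by a greedy left-to-right scan, it is straightforward to compute. The sketch below encodes alternating words as strings with upper-case letters denoting inverses (our encoding, not the paper's) and recovers the blocks with a regular expression:

```python
import re

# encoding: 'a', 'b' are the generators, 'A', 'B' their inverses;
# a block in S_a^+ matches a(?:[bB]a)*, a block in S_a^- matches
# A(?:[bB]A)*, and the leftover single letters are the y_i in {b, b^-1}
_BLOCKS = re.compile(r'a(?:[bB]a)*|A(?:[bB]A)*|[bB]')

def a_decomposition(w):
    # greedy scan of an alternating word; greediness of the regular
    # expression makes each block s_i maximal, which is exactly the
    # condition that consecutive s_i alternate between S_a^+ and S_a^-
    return _BLOCKS.findall(w)

# w = b a b^-1 a b a b^-1 a^-1 b a^-1 b a b a^-1, encoded as a string
w = 'baBabaBAbAbabA'
assert a_decomposition(w) == ['b', 'aBaba', 'B', 'AbA', 'b', 'a', 'b', 'A']
assert ''.join(a_decomposition(w)) == w  # the blocks partition w
```

The uniqueness in the proposition is mirrored by the fact that the greedy scan leaves no choices: each maximal block is determined by the word itself.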
Analogously, we may also write $w \in \mathcal{A}$ as \[ w = \texttt{x}_0 t_1 \texttt{x}_1 t_2 \cdots \texttt{x}_{l-1} t_l \texttt{x}_l \] (possibly with a different $l$), where $\texttt{x}_i \in \{ \texttt{a}, \texttt{a}^{-1} \}$ except that $\texttt{x}_0$ and/or $\texttt{x}_l$ may be empty and $t_i \in \mathcal{S}_\texttt{b}^+ \cup \mathcal{S}_\texttt{b}^-$ where the $t_i$ alternate between $\mathcal{S}_\texttt{b}^+$ and $\mathcal{S}_\texttt{b}^-$. We will call this way of writing $w$ the \emph{$\texttt{b}$-decomposition of $w$}. \begin{proof} (of Proposition \ref{prop: a decomposition}) Let $w \in \mathcal{A}$ be an alternating word. Since $\texttt{a} \in \mathcal{S}_\texttt{a}^+$ and $\texttt{a}^{-1} \in \mathcal{S}_\texttt{a}^-$, we may always find some $s_i \in \mathcal{S}_\texttt{a}^+ \cup \mathcal{S}_\texttt{a}^-$ and some $\texttt{y}_i \in \{ \texttt{b}, \texttt{b}^{-1} \}$ such that \[ w = \texttt{y}_0 s_1 \texttt{y}_1 s_2 \cdots \texttt{y}_{n-1} s_n \texttt{y}_n \] with possibly $\texttt{y}_n$ and/or $\texttt{y}_0$ empty. Now let $m$ be the minimal such $n$ over all products representing $w$, i.e. \[ w = \texttt{y}_0 s_1 \texttt{y}_1 s_2 \cdots \texttt{y}_{m-1} s_m \texttt{y}_m. \] Suppose there is an $i \in \{ 1, \ldots, m-1 \}$ such that $s_i,s_{i+1} \in \mathcal{S}_\texttt{a}^+$ (resp. $s_i,s_{i+1} \in \mathcal{S}_\texttt{a}^-$). Set $s' = s_i \texttt{y}_i s_{i+1}$ and note that $s' \in \mathcal{S}_\texttt{a}^+$ (resp. $s' \in \mathcal{S}_\texttt{a}^-$). Then \[ w = \texttt{y}_0 s_1 \texttt{y}_1 s_2 \cdots \texttt{y}_{i-1} s' \texttt{y}_{i+1} \cdots \texttt{y}_{m-1} s_m \texttt{y}_m \] which would contradict the minimality of $m$. Hence all $s_i$ alternate between $\mathcal{S}_\texttt{a}^+$ and $\mathcal{S}_\texttt{a}^-$. By comparing two such expressions we see that this expression is moreover unique.
\end{proof} \begin{defn} \label{defn:alpha and beta} Let $w \in \mathcal{A}$ and let $w = \texttt{y}_0 s_1 \cdots \texttt{y}_{l-1} s_l \texttt{y}_{l}$ be the $\texttt{a}$-decomposition of $w$. Then $\alpha \colon \mathcal{A} \to \mathcal{A}$ is defined via \[ \alpha \colon w \mapsto \texttt{y}_0 \texttt{x}_1 \texttt{y}_1 \texttt{x}_2 \cdots \texttt{y}_{l-1} \texttt{x}_l \texttt{y}_{l} \] with $\texttt{x}_i = \texttt{a}$ if $s_i \in \mathcal{S}^+_\texttt{a}$ and $\texttt{x}_i = \texttt{a}^{-1}$ if $s_i \in \mathcal{S}^-_{\texttt{a}}$. Analogously suppose that $w = \texttt{x}_0 t_1 \texttt{x}_1 t_2 \cdots \texttt{x}_{l-1} t_l \texttt{x}_l$ is the $\texttt{b}$-decomposition of $w$, with $l$ possibly different from above. We define the map $\beta \colon \mathcal{A} \to \mathcal{A}$ via \[ \beta \colon w \mapsto \texttt{x}_0 \texttt{y}_1 \texttt{x}_1 \texttt{y}_2 \cdots \texttt{x}_{l-1} \texttt{y}_l \texttt{x}_{l} \] with $\texttt{y}_i = \texttt{b}$ if $t_i \in \mathcal{S}^+_\texttt{b}$ and $\texttt{y}_i = \texttt{b}^{-1}$ if $t_i \in \mathcal{S}^-_{\texttt{b}}$. \end{defn} \begin{exmp} Let $w = \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a}^{-1} \texttt{b} \texttt{a} \texttt{b} \texttt{a}^{-1}$. Then the $\texttt{a}$-decomposition of $w$ is \[ w = \texttt{b} s_1 \texttt{b}^{-1} s_2 \texttt{b} s_3 \texttt{b} s_4 \] where $s_1 = \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b} \texttt{a} \in \mathcal{S}^+_\texttt{a}$, $s_2 = \texttt{a}^{-1} \texttt{b} \texttt{a}^{-1} \in \mathcal{S}_\texttt{a}^-$, $s_3 = \texttt{a} \in \mathcal{S}^+_\texttt{a}$ and $s_4 = \texttt{a}^{-1} \in \mathcal{S}^-_\texttt{a}$. Hence \[ \alpha(w) = \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} \texttt{b} \texttt{a}^{-1}. \] Observe that then $\alpha(\alpha(w)) = \alpha(w)$.
The $\texttt{b}$-decomposition of $\alpha(w)$ is \[ \alpha(w) = t_1 \texttt{a} t_2 \texttt{a}^{-1} t_3 \texttt{a}^{-1} \] where $t_1 = \texttt{b} \in \mathcal{S}_\texttt{b}^+$, $t_2 = \texttt{b}^{-1} \in \mathcal{S}_\texttt{b}^-$ and $t_3 = \texttt{b} \texttt{a} \texttt{b} \in \mathcal{S}_\texttt{b}^+$. Hence \[ \beta(\alpha(w)) = \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a}^{-1} \] and similarly, we may see that $\alpha(\beta(\alpha(w))) = \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} = [\texttt{b}, \texttt{a}]$. Then both $\alpha([\texttt{b}, \texttt{a}]) = [\texttt{b}, \texttt{a}]$ and $\beta([\texttt{b}, \texttt{a}])= [\texttt{b},\texttt{a}]$. We will formalise and use this behaviour later; see Proposition \ref{prop:cutting words} and Proposition \ref{prop:alpha on conjugacy classes decreases}. \end{exmp} The images of $\alpha$ and $\beta$ are obviously contained in the set of alternating words. Moreover, as the $s_i$ in the previous definition all alternate between $\mathcal{S}^+_{\texttt{a}}$ and $\mathcal{S}^-_{\texttt{a}}$, no two consecutive $\texttt{x}_i$ have the same sign in the image of $\alpha$ and no two consecutive $\texttt{y}_i$ have the same sign in the image of $\beta$. \begin{prop} \label{prop:cutting words} The maps $\alpha, \beta \colon \mathcal{A} \to \mathcal{A}$ have the following properties: \begin{enumerate} \item \label{prop-case:alpha alternating} For every $w \in \mathcal{A}$, $\alpha(w^{-1}) = \alpha(w)^{-1}$ and $\beta(w^{-1}) = \beta(w)^{-1}$. \item \label{prop-case:interchange a b} $\psi \circ \alpha = \beta \circ \psi$ and $\psi \circ \beta = \alpha \circ \psi$, where $\psi \colon \mathbb{F}_2 \to \mathbb{F}_2$ is the automorphism defined via $\psi \colon \texttt{a} \mapsto \texttt{b}, \texttt{b} \mapsto \texttt{a}$. \item For any $w \in \mathcal{A}$, $\alpha(\alpha(w)) = \alpha(w)$. Moreover, $|\alpha(w)| \leq |w|$ with equality if and only if $\alpha(w) = w$.
The analogous statement holds for $\beta$. \item \label{prop:cases,splitting} Let $v_1 \texttt{x} v_2$ be an alternating word with $v_1, v_2 \in \mathcal{A}$ and $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$. Then $\alpha(v_1 \texttt{x} v_2)$ is equal in $\mathbb{F}_2$ to the element represented by the non-reduced word $\alpha(v_1 \texttt{x}) \texttt{x}^{-1} \alpha(\texttt{x} v_2)$. The analogous statement holds for $\beta$. \end{enumerate} \end{prop} \begin{proof} To see $(1)$, note that if $w = \texttt{y}_0 s_1 \texttt{y}_1 \cdots \texttt{y}_{l-1} s_l \texttt{y}_{l}$ is the $\texttt{a}$-decomposition of $w$, then \[ \texttt{y}_l^{-1} s_l^{-1} \texttt{y}_{l-1}^{-1} \cdots \texttt{y}_1^{-1} s_1^{-1} \texttt{y}_0^{-1} \] is the $\texttt{a}$-decomposition of $w^{-1}$. As $s_i^{-1} \in \mathcal{S}_\texttt{a}^+$ if and only if $s_i \in \mathcal{S}_\texttt{a}^-$ and $s_i^{-1} \in \mathcal{S}_\texttt{a}^-$ if and only if $s_i \in \mathcal{S}_\texttt{a}^+$ we can conclude that $\alpha(w^{-1}) = \alpha(w)^{-1}$. The analogous argument holds for $\beta$. Point $(2)$ is evident from the symmetric way $\alpha$ and $\beta$ have been defined. To see $(3)$, note that $\alpha$ replaces each of the subwords $s_i$ by letters $\texttt{a}$ or $\texttt{a}^{-1}$. These have length strictly less than $|s_i|$ unless $s_i$ is the letter $\texttt{a}$ or $\texttt{a}^{-1}$ already. This shows $|\alpha(w)| \leq |w|$ with equality only if $\alpha(w) = w$ and it also shows that $\alpha \circ \alpha = \alpha$. For (\ref{prop:cases,splitting}) suppose that the $\texttt{a}$-decomposition of $v_1 \texttt{x}$ is $\texttt{y}^1_0 s^1_1 \texttt{y}^1_1 \cdots \texttt{y}^1_{l_1-1} s^1_{l_1}$ and the $\texttt{a}$-decomposition of $\texttt{x} v_2$ is $s^2_1 \texttt{y}^2_1 \cdots \texttt{y}^2_{l_2-1} s^2_{l_2} \texttt{y}^2_{l_2}$. Both $s^1_{l_1}$ and $s^2_1$ lie in the same set $\mathcal{S}_\texttt{a}^+$ or $\mathcal{S}_\texttt{a}^-$, depending on whether $\texttt{x} = \texttt{a}$ or $\texttt{x} = \texttt{a}^{-1}$.
Without loss of generality assume that $\texttt{x} = \texttt{a}$. The $\texttt{a}$-decomposition of $v_1 \texttt{x} v_2$ may be seen to be $\texttt{y}^1_0 s^1_1 \texttt{y}^1_1 \cdots \texttt{y}^1_{l_1-1} s \texttt{y}^2_1 \cdots \texttt{y}^2_{l_2-1} s^2_{l_2} \texttt{y}^2_{l_2}$ where $s \in \mathcal{S}_\texttt{a}^+$ is equal to $s^1_{l_1} \texttt{a}^{-1} s^2_1$ in $\mathbb{F}_2$. Hence $\alpha(v_1 \texttt{a}) = \texttt{y}^1_0 \texttt{x}^1_1 \texttt{y}^1_1 \cdots \texttt{y}^1_{l_1-1} \texttt{a}$, $\alpha(\texttt{a} v_2) = \texttt{a} \texttt{y}^2_1 \cdots \texttt{y}^2_{l_2-1} \texttt{x}^2_{l_2} \texttt{y}^2_{l_2}$ and \[ \alpha(v_1 \texttt{x} v_2) = \texttt{y}^1_0 \texttt{x}^1_1 \texttt{y}^1_1 \cdots \texttt{y}^1_{l_1-1} \texttt{a} \texttt{y}^2_1 \cdots \texttt{y}^2_{l_2-1} \texttt{x}^2_{l_2} \texttt{y}^2_{l_2}. \] Comparing terms finishes the proof. \end{proof} To study how the maps $\alpha, \beta \colon \mathcal{A} \to \mathcal{A}$ behave on powers of elements we need to define a version of them on conjugacy classes. Let $\bar{\mathcal{A}}_0$ be the set of conjugacy classes of even-length alternating words. Note that any two representatives $w_1,w_2 \in \mathcal{A}$ of the same conjugacy class in $\bar{\mathcal{A}}_0$ are necessarily equal up to a cyclic permutation of the letters. That is, there are elements $v_1, v_2 \in \mathcal{A}$ such that $w_1 = v_1 v_2$ and $w_2 = v_2 v_1$ as reduced words. Hence every representative $v \in \mathcal{A}$ of an element in $\bar{\mathcal{A}}_0$ is automatically cyclically reduced. \begin{rmk} \label{rmk:on conjugacy classes for acl} Every reduced representative $w \in \mathcal{A}$ of a class in $\bar{\mathcal{A}}_0$ has the same length. Every homogeneous quasimorphism $\bar{\phi} \colon \mathbb{F}_2 \to \mathbb{R}$ depends only on conjugacy classes and hence induces a well-defined map $\bar{\phi} \colon \bar{\mathcal{A}}_0 \to \mathbb{R}$.
We say that an element $[w] \in \bar{\mathcal{A}}_0$ \emph{lies in the commutator subgroup} if one (and hence any) representative $w$ of $[w]$ lies in the commutator subgroup of $\mathbb{F}_2$. \end{rmk} \begin{defn} \label{defn:maps alpha bar and beta bar} Define the map $\bar{\alpha} \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ as follows: Let $[w] \in \bar{\mathcal{A}}_0$. If $[w] = e$ set $\bar{\alpha}([w]) = e$. Otherwise, choose a representative $w \in \mathcal{A}$ of $[w]$ that starts with a power of $\texttt{a}$ and, as $w$ has even length, ends in a power of $\texttt{b}$. Suppose that $w$ starts with the letter $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and write $w= \texttt{x} w'$ for $w' \in \mathcal{A}$ such that $\texttt{x} w'$ is reduced. Then define $\bar{\alpha} \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ via \[ \bar{\alpha} \colon [w] \mapsto [\alpha(\texttt{x} w' \texttt{x}) \texttt{x}^{-1}] \in \bar{\mathcal{A}}_0. \] Define $\bar{\beta} \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ analogously: For every element $[w] \in \bar{\mathcal{A}}_0$ choose a representative $w \in \mathcal{A}$ which starts with the letter $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$ and write $w = \texttt{y} w'$. Then define $\bar{\beta} \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ via \[ \bar{\beta} \colon [w] \mapsto [\beta(\texttt{y} w' \texttt{y} ) \texttt{y}^{-1} ] \in \bar{\mathcal{A}}_0. \] \end{defn} To see that $\bar{\alpha}, \bar{\beta} \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ are well-defined, suppose that $w_1, w_2 \in \mathcal{A}$ are both even alternating words which start with a power of $\texttt{a}$ and both represent the same element $[w_1] = [w_2] \in \bar{\mathcal{A}}_0$. Let $\texttt{x}_1, \texttt{x}_2 \in \{ \texttt{a}, \texttt{a}^{-1} \}$ be the first letters of $w_1$ and $w_2$.
Then there are elements $v_1, v_2 \in \mathcal{A}$ such that $w_1 = \texttt{x}_1 v_1 \texttt{x}_2 v_2$ as a reduced word and $w_2 = \texttt{x}_2 v_2 \texttt{x}_1 v_1$. Then, by $(3)$ of Proposition \ref{prop:cutting words}, \begin{align*} \alpha(w_1 \texttt{x}_1) \texttt{x}_1^{-1} = \alpha(\texttt{x}_1 v_1 \texttt{x}_2 v_2 \texttt{x}_1) \texttt{x}_1^{-1} &= \alpha(\texttt{x}_1 v_1 \texttt{x}_2) \texttt{x}_2^{-1} \alpha(\texttt{x}_2 v_2 \texttt{x}_1) \texttt{x}_1^{-1} \\ \alpha(w_2 \texttt{x}_2) \texttt{x}_2^{-1} = \alpha(\texttt{x}_2 v_2 \texttt{x}_1 v_1 \texttt{x}_2) \texttt{x}_2^{-1} &= \alpha(\texttt{x}_2 v_2 \texttt{x}_1) \texttt{x}_1^{-1} \alpha(\texttt{x}_1 v_1 \texttt{x}_2) \texttt{x}_2^{-1} \end{align*} which are conjugate in $\mathbb{F}_2$ and so $[\alpha(w_1 \texttt{x}_1) \texttt{x}_1^{-1}] = [\alpha(w_2 \texttt{x}_2) \texttt{x}_2^{-1}]$. This shows that $\bar{\alpha}$ is well-defined and analogously that $\bar{\beta}$ is well-defined. The definition of $\bar{\alpha}$ given above is useful for performing calculations. However, there is a more geometric way to think about $\bar{\alpha}$ and $\bar{\beta}$ analogous to the definition of $\alpha$ and $\beta$. A common way to depict conjugacy classes in the free group is via labels on a circle: Let $w = \texttt{z}_1 \cdots \texttt{z}_n \in \mathbb{F}_2$ be a cyclically reduced word in the letters $\texttt{z}_i$. Then $w$ labels a circle by cyclically labelling the sides of the circle counterclockwise by $\texttt{z}_1, \texttt{z}_2, \ldots, \texttt{z}_n$ so that $\texttt{z}_n$ is next to $\texttt{z}_1$ on the circle. Two cyclically reduced words in $\mathbb{F}_2$ then yield the same labelling up to rotation if and only if they define the same conjugacy class. Let $[w] \in \bar{\mathcal{A}}_0$ be a conjugacy class of a word $w \in \mathcal{A}$ of even length that contains both at least one $\texttt{a}$ and one $\texttt{a}^{-1}$ as a subword.
We may similarly define an $\texttt{a}$-decomposition of such a cyclic labelling. One may show that in this geometric model the maps $\bar{\alpha}$ (resp. $\bar{\beta}$) can then be defined just like for $\alpha$ and $\beta$ by replacing the words in $\mathcal{S}_\texttt{a}^+$ by $\texttt{a}$ and the words in $\mathcal{S}_\texttt{a}^-$ by $\texttt{a}^{-1}$. If $[w] \in \bar{\mathcal{A}}_0$ does not contain both $\texttt{a}$ and $\texttt{a}^{-1}$ as subwords then $\bar{\alpha}([w])=e$ in either description. \begin{figure} \centering \subfloat[]{\includegraphics[width=0.6\textwidth]{cg.pdf} \label{fig:circle general}} \\ \subfloat[]{\includegraphics[width=0.3\textwidth]{ct.pdf} \label{fig:circle trivial}} \caption{Visualizing $\bar{\alpha}$: Conjugacy classes $[w]$ correspond to cyclic labels of a circle. One may define an $\texttt{a}$-decomposition and $\bar{\alpha}$ on such labels except when $[w]$ does not contain $\texttt{a}$ or $\texttt{a}^{-1}$ as a subword. See Example \ref{exmp:alpha bar}.} \label{fig:circles and conjugacy classes} \end{figure} Consider the following example: \begin{exmp} \label{exmp:alpha bar} Let $w = \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b} \in \mathcal{A}$. Its conjugacy class is depicted in Figure \ref{fig:circles and conjugacy classes}. We observe that $w$ starts with $\texttt{a}$ and set $w' = \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b}$ so that $w = \texttt{a} w'$. By Definition \ref{defn:maps alpha bar and beta bar}, $\bar{\alpha}([w]) = [\alpha(\texttt{a} w' \texttt{a}) \texttt{a}^{-1}] = [\left( \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} \right) \texttt{a}^{-1} ] = [\texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b}^{-1}]$.
However, we could have also done an $\texttt{a}$-decomposition of the elements on a circle as pictured in Figure \ref{fig:circles and conjugacy classes} ($\textrm{A}$) with $s_1 = \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b} \texttt{a} \in \mathcal{S}_\texttt{a}^+$ and $s_2 = \texttt{a}^{-1} \texttt{b} \texttt{a}^{-1} \in \mathcal{S}_\texttt{a}^-$ and obtained the same result. Similarly, let $w = \texttt{a} \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b}$. Its conjugacy class is represented by a cyclic labelling of a circle in Figure \ref{fig:circles and conjugacy classes} (\textrm{B}). The first letter of $w$ is $\texttt{a}$. Set $w' = \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b}$ so that $w = \texttt{a} w'$. The $\texttt{a}$-decomposition of $\texttt{a} w' \texttt{a}$ consists of the single block $s_1 = \texttt{a} w' \texttt{a} \in \mathcal{S}_{\texttt{a}}^+$. Hence $\bar{\alpha}([w]) = [\alpha(\texttt{a} w' \texttt{a}) \texttt{a}^{-1}] = [ \left( \texttt{a} \right) \texttt{a}^{-1}] = [e] \in \bar{\mathcal{A}}_0$. \end{exmp} \begin{prop} \label{prop:alpha on conjugacy classes decreases} Let $\bar{\alpha}, \bar{\beta} \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ be defined as above and let $[w] \in \bar{\mathcal{A}}_0$. Then $|\bar{\alpha}([w])| \leq |[w]|$ with equality if and only if $\bar{\alpha}([w]) = [w]$. The analogous statement holds for $\bar{\beta}$. If $[w]$ is a non-trivial class in the commutator subgroup of $\mathbb{F}_2$ then $\bar{\alpha}([w])$ and $\bar{\beta}([w])$ are non-trivial. If $\bar{\alpha}([w]) = [w] = \bar{\beta}([w])$ then $[w]$ may be represented by $w = [\texttt{a}, \texttt{b}]^n$ for $n \in \mathbb{Z}$. \end{prop} \begin{proof} That $\bar{\alpha}$ and $\bar{\beta}$ decrease length unless they fix classes follows by the same argument as in the proof of Proposition \ref{prop:cutting words}.
If $[w]$ is a non-trivial class in the commutator subgroup of $\mathbb{F}_2$ then there is a reduced representative $w$ such that $w = \texttt{a} v_1 \texttt{a}^{-1} v_2$ for some appropriate $v_1,v_2 \in \mathcal{A}$ and we see that $\bar{\alpha}([w])$ is non-trivial as it also contains the subletters $\texttt{a}$ and $\texttt{a}^{-1}$. If $w \in \mathcal{A}$ is a representative such that $\bar{\alpha}$ fixes $[w]$ then $w$ has to be of the form $w = \prod_{i=1}^k \texttt{a} \texttt{y}_i \texttt{a}^{-1} \texttt{y}'_i$ for some $\texttt{y}_i,\texttt{y}'_i \in \{ \texttt{b}, \texttt{b}^{-1} \}$, $k \geq 1$ and similarly, if $\bar{\beta}$ fixes a class then a representative has to be of the form $w = \prod_{i=1}^k \texttt{x}_i \texttt{b} \texttt{x}'_i \texttt{b}^{-1}$ for some $\texttt{x}_i, \texttt{x}'_i \in \{ \texttt{a}, \texttt{a}^{-1} \}$, $k \geq 1$. Comparing both yields the statement. \end{proof} \begin{prop} \label{prop:powers of alpha, beta} Assume that $w \in \mathcal{A}$ is non-empty, has even length and that $c_1, c_2 \in \mathcal{A}$ are words such that $c_1 w c_2 \in \mathcal{A}$ is again an alternating word. Then there are words $d_1, d_2, w' \in \mathcal{A}$ such that $\alpha(c_1 w^n c_2) = d_1 w'^{n-1} d_2 \in \mathcal{A}$ for all $n \geq 1$ as reduced words where $w'$ has even length and $[w'] = \bar{\alpha}([w]) \in \bar{\mathcal{A}}_0$. If $w$ lies in the commutator subgroup then $w'$ is non-empty. The analogous statement holds for $\beta$. \end{prop} \begin{proof} If $w \in \mathcal{A}$ does not contain both a positive and a negative power of $\texttt{a}$, the statement follows by an easy calculation. Note that this is the case if and only if $\bar{\alpha}([w]) = [e]$. Otherwise $w$ contains at least one sub-letter $\texttt{a}$ and one sub-letter $\texttt{a}^{-1}$. This is the case if $w$ lies in the commutator subgroup.
Suppose without loss of generality that $w = v_1 \texttt{a} v_2 \texttt{a}^{-1} v_3$ as a reduced word for some $v_1, v_2, v_3 \in \mathcal{A}$. By multiple applications of Proposition \ref{prop:cutting words}, we see that \begin{align*} \alpha(c_1 w^n c_2) &= \alpha(c_1 \left( v_1 \texttt{a} v_2 \texttt{a}^{-1} v_3 \right)^n c_2) \\ &= \alpha(c_1 v_1 \texttt{a}) \texttt{a}^{-1} \alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 (v_1 \texttt{a} v_2 \texttt{a}^{-1} v_3)^{n-1} c_2) \\ &= \alpha(c_1 v_1 \texttt{a}) \texttt{a}^{-1} \alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a}) \texttt{a}^{-1} \alpha( \texttt{a} v_2 \texttt{a}^{-1} v_3 (v_1 \texttt{a} v_2 \texttt{a}^{-1} v_3)^{n-2} c_2) \\ &= \alpha(c_1 v_1 \texttt{a}) \texttt{a}^{-1} \left( \alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a}) \texttt{a}^{-1} \right)^2 \alpha( \texttt{a} v_2 \texttt{a}^{-1} v_3 (v_1 \texttt{a} v_2 \texttt{a}^{-1} v_3)^{n-3} c_2) \\ &= \cdots \\ &= \alpha(c_1 v_1 \texttt{a}) \texttt{a}^{-1} (\alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a}) \texttt{a}^{-1} )^{n-1} \alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 c_2) \end{align*} as non-reduced elements in the free group. Then we define $d_1$, $d_2$ and $w'$ to be the reduced representatives of \[ \alpha(c_1 v_1 \texttt{a}) \texttt{a}^{-1}, \text{ } \alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 c_2) \text{ and } \alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a}) \texttt{a}^{-1} \] respectively. Moreover, $\alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a})$ is a reduced alternating word which starts and ends in $\texttt{a}$ and contains $\texttt{a}^{-1}$ as a sub-letter. It follows that $w'$, the reduced representative of $\alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a}) \texttt{a}^{-1}$, starts with $\texttt{a}$, contains $\texttt{a}^{-1}$ and ends with a power of $\texttt{b}$, so $w'$ is non-empty.
Further observe that $\bar{\alpha}([\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1])$ is represented by $\alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a}) \texttt{a}^{-1}$ and, since $[\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1] = [w]$, we conclude that $[w'] = \bar{\alpha}([w])$. \end{proof} \subsection{Letter-Thin Triples, $\alpha$ and $\beta$} \label{subsec:letter-thin and alpha and beta} In order to streamline proofs later and ease notation we define an equivalence relation on triples $(x_1,x_2,x_3)$. We think of such a triple as the sides of a (thin) triangle. We stress that the $x_i$ are not actually the sides of triangles in some metric space; see Figure \ref{fig:triangles}. Here, we study a special type of triples, namely \emph{letter-thin triples} in Definition \ref{defn:letter-thin}. \begin{defn} \label{defn:equivalent triples} Let $(x_1,x_2,x_3)$ be a triple of elements in $\mathbb{F}_2$ and let $\phi \colon \mathbb{F}_2 \to \mathbb{F}_2$ be a set-theoretic function. We will understand by $\phi(x_1,x_2,x_3)$ the triple $(\phi(x_1), \phi(x_2), \phi(x_3))$. We define $\sim$ to be the equivalence relation on triples generated by \begin{enumerate} \item[(i)] $(x_1,x_2,x_3) \sim (x_2, x_3, x_1)$ \item[(ii)] $(x_1,x_2,x_3) \sim (x_3^{-1}, x_2^{-1}, x_1^{-1})$ \item[(iii)] $(x_1,x_2,x_3) \sim \phi_\texttt{a}(x_1, x_2, x_3)$, where $\phi_\texttt{a} \colon \mathbb{F}_2 \to \mathbb{F}_2$ is the automorphism defined via $\texttt{a} \mapsto \texttt{a}^{-1}$ and $\texttt{b} \mapsto \texttt{b}$. \item[(iv)] $(x_1,x_2,x_3) \sim \phi_\texttt{b}(x_1, x_2, x_3)$, where $\phi_\texttt{b} \colon \mathbb{F}_2 \to \mathbb{F}_2$ is the automorphism defined via $\texttt{a} \mapsto \texttt{a}$ and $\texttt{b} \mapsto \texttt{b}^{-1}$. \end{enumerate} for all $x_1,x_2,x_3 \in \mathbb{F}_2$ and say that $(x_1,x_2,x_3)$ is \emph{equivalent} to $(y_1, y_2, y_3)$ if $(x_1,x_2,x_3) \sim (y_1, y_2, y_3)$ under this relation.
\end{defn} Imagining $(x_1,x_2,x_3)$ as labelling the sides of a triangle, two triples are equivalent if they may be obtained from each other by a sequence of rotations $(i)$, flips $(ii)$ or by changing the signs of their labels $(iii)$ \& $(iv)$. \begin{prop} \label{prop:alpha respects equivalence} Let $x_1,x_2,x_3,y_1,y_2,y_3 \in \mathbb{F}_2$ such that $(x_1,x_2,x_3) \sim (y_1, y_2, y_3)$. Then, if $x_1,x_2,x_3 \in \mathcal{A}$, also $y_1,y_2,y_3 \in \mathcal{A}$. Moreover, in this case $\alpha(x_1,x_2,x_3) \sim \alpha(y_1,y_2,y_3)$ and $\beta(x_1,x_2,x_3) \sim \beta(y_1, y_2, y_3)$. \end{prop} \begin{proof} The first part is clear from the definitions. Note that $\alpha$ commutes both with ``rotating the sides'' $(i)$ and taking inverses $(ii)$ as $\alpha$ satisfies that $\alpha(w^{-1}) = \alpha(w)^{-1}$ for $w \in \mathcal{A}$. Let $w \in \mathcal{A}$ and let $w = \texttt{y}_0 s_1 \texttt{y}_1 \cdots \texttt{y}_{k-1} s_k \texttt{y}_k$ be its $\texttt{a}$-decomposition (see Definition \ref{defn:alpha and beta}), where $\texttt{y}_i \in \{ \texttt{b}, \texttt{b}^{-1} \}$ and $s_i \in \mathcal{S}^+_\texttt{a} \cup \mathcal{S}^-_\texttt{a}$ alternates between $\mathcal{S}^+_\texttt{a}$ and $\mathcal{S}^-_\texttt{a}$. Then \[ \phi_\texttt{a}(w) = \texttt{y}_0 \phi_\texttt{a}(s_1) \texttt{y}_1 \cdots \texttt{y}_{k-1} \phi_\texttt{a}(s_k) \texttt{y}_k \] where $\phi_\texttt{a}(s_i) \in \mathcal{S}^+_\texttt{a}$ if and only if $s_i \in \mathcal{S}^-_\texttt{a}$ and $\phi_\texttt{a}(s_i) \in \mathcal{S}^-_\texttt{a}$ if and only if $s_i \in \mathcal{S}^+_\texttt{a}$. So $\alpha(\phi_\texttt{a}(w)) = \phi_\texttt{a}(\alpha(w))$ and hence $\alpha \circ \phi_\texttt{a}(x_1, x_2, x_3)$ is equivalent to $\alpha(x_1,x_2,x_3)$.
Similarly, $\phi_\texttt{b}(w) = \phi_\texttt{b}(\texttt{y}_0) \phi_\texttt{b}(s_1) \phi_\texttt{b}(\texttt{y}_1) \cdots \phi_\texttt{b}(\texttt{y}_{k-1}) \phi_\texttt{b}(s_k) \phi_\texttt{b}(\texttt{y}_k)$ where both $\phi_\texttt{b}(s_i)$ and $s_i$ lie in the same set $\mathcal{S}_\texttt{a}^+$ or $\mathcal{S}_\texttt{a}^-$. We see that, once more, $\alpha(\phi_\texttt{b}(w)) = \phi_\texttt{b}(\alpha(w))$ and hence also $\alpha \circ \phi_\texttt{b}(x_1,x_2,x_3)$ is equivalent to $\alpha(x_1,x_2,x_3)$. The statement for $\beta$ follows analogously. \end{proof} For a visualisation of the following definition we refer the reader to Figure \ref{fig:triangles}. \begin{defn} \label{defn:letter-thin} Let $x_1, x_2, x_3 \in \mathcal{A}$ be \emph{alternating} elements. The triple $(x_1, x_2, x_3)$ is called a \emph{letter-thin triple} in one of the following cases: \begin{itemize} \item[ $\text{[T1]}$ ] There are (possibly trivial) elements $c_1, c_2, c_3 \in \mathcal{A}$ such that \begin{itemize} \item[$\text{[T1$\at$]}$] $ (x_1,x_2,x_3) \sim (c_1^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1} \texttt{a} c_3, c_3^{-1} \texttt{a}^{-1} c_1)$ or \item[$\text{[T1$\bt$]}$] $ (x_1,x_2,x_3) \sim (c_1^{-1} \texttt{b} \texttt{a} c_2, c_2^{-1} \texttt{a}^{-1} \texttt{b} c_3, c_3^{-1} \texttt{b}^{-1} c_1)$ \end{itemize} where all words are required to be reduced. \item[ $\text{[T2]}$ ] There are (possibly trivial) elements $c_1, c_2 \in \mathcal{A}$ such that \begin{itemize} \item[$\text{[T2$\at$]}$] $(x_1,x_2,x_3) \sim (c_1^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1}, \texttt{b} c_1)$ or \item[$\text{[T2$\bt$]}$] $(x_1,x_2,x_3) \sim (c_1^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_2, c_2^{-1} \texttt{a}^{-1}, \texttt{a} c_1)$ \end{itemize} where all words are required to be reduced. \end{itemize} In all cases, $\sim$ denotes the equivalence of triples of Definition \ref{defn:equivalent triples}.
We say that a letter-thin triple $(x_1,x_2,x_3)$ is \emph{of type $\text{[T1$\at$]}$, $\text{[T1$\bt$]}$, $\text{[T2$\at$]}$} or \emph{$\text{[T2$\bt$]}$} if it is equivalent to the corresponding triple above. \end{defn} Note, for example, that in the representatives of $\text{[T1$\at$]}$ above, necessarily $c_1$, $c_3$ are either empty or their first letter is a power of $\texttt{b}$. Similarly, $c_2$ is either empty or its first letter is a power of $\texttt{a}$, else the $x_i$ would not be alternating. Note that for any letter-thin triple $(x_1,x_2,x_3)$ of type $\text{[T1$\at$]}$ we may always find elements $d_1, d_2, d_3 \in \mathcal{A}$, each either empty or with first letter a power of $\texttt{b}$, such that \begin{align} \label{equ:maybe letter thin} (x_1,x_2,x_3) = (d_1^{-1} \texttt{x}_1 d_2, d_2^{-1} \texttt{x}_2 d_3, d_3^{-1} \texttt{x}_3 d_1) \end{align} where $\texttt{x}_i \in \{ \texttt{a}, \texttt{a}^{-1} \}$ are such that \emph{not all of $\texttt{x}_1$, $\texttt{x}_2$ and $\texttt{x}_3$ are equal}, i.e.\ not all have the same parity. As we consider the triples only up to equivalence one may wonder if we can assume that any triple as in Equation (\ref{equ:maybe letter thin}) such that not all of the $d_i$ are empty is letter-thin of type $\text{[T1$\at$]}$. However, this is not the case: As $\texttt{x}_1$, $\texttt{x}_2$, $\texttt{x}_3$ do not all have the same parity, there is exactly one $i$ such that $\texttt{x}_i = \texttt{x}_{i+1}$ where indices are considered $\bmod 3$. Then one may see that $(x_1,x_2,x_3)$ is of type $\text{[T1$\at$]}$ \emph{if and only if} $d_{i+1}$ is non-trivial. For example, $(d_1^{-1} \texttt{a}, \texttt{a} d_3, d_3^{-1} \texttt{a}^{-1} d_1)$ is \emph{not} letter-thin for any $d_1, d_3 \in \mathcal{A}$ empty or starting with a power of $\texttt{b}$.
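This criterion is mechanical enough to be verified by a short computation. The following sketch is our own illustration (the function name and the encoding are assumptions of ours, not part of the text): a triple in the normal form of Equation (\ref{equ:maybe letter thin}) is recorded by the signs of the letters $\texttt{x}_i$ ($+1$ for $\texttt{a}$, $-1$ for $\texttt{a}^{-1}$) together with the words $d_i$ as strings, and the function decides type $\text{[T1$\at$]}$.

```python
# Sketch (ours) of the criterion above for triples in the normal form
# (d1^{-1} x1 d2, d2^{-1} x2 d3, d3^{-1} x3 d1):
#   x = (x1, x2, x3) with entries +1 (for a) or -1 (for a^{-1}),
#   d = (d1, d2, d3) as strings, "" denoting the empty word.

def is_type_T1a(x, d):
    """Decide type [T1a], assuming the x_i do not all have the same sign."""
    assert set(x) == {1, -1}, "not all x_i may be equal"
    # the unique 0-based index j with x_j = x_{j+1}, indices mod 3
    j = next(k for k in range(3) if x[k] == x[(k + 1) % 3])
    # the paper's d_{i+1} (1-based) is d[(j + 1) % 3] in 0-based indexing
    return d[(j + 1) % 3] != ""

# (a b, b^{-1} a, a^{-1}):  x = (+1, +1, -1), d = (e, b, e) -- type [T1a]
assert is_type_T1a((1, 1, -1), ("", "b", ""))
# (b^{-1} a, a b, b^{-1} a^{-1} b): an instance of
# (d1^{-1} a, a d3, d3^{-1} a^{-1} d1) with d1 = d3 = b -- not of type [T1a]
assert not is_type_T1a((1, 1, -1), ("b", "", "b"))
```

The second assertion reproduces the triple $(d_1^{-1} \texttt{a}, \texttt{a} d_3, d_3^{-1} \texttt{a}^{-1} d_1)$ with $d_1 = d_3 = \texttt{b}$, which by the discussion above is not letter-thin.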
\begin{exmp} $(\texttt{a}, \texttt{a}, \texttt{a}^{-1})$ is not letter-thin and by the previous discussion also the triple $(\texttt{b}^{-1} \texttt{a}^{-1} , \texttt{a}^{-1} \texttt{b}, \texttt{b}^{-1} \texttt{a} \texttt{b})$ is not letter-thin. However, $(\texttt{b}^{-1} \texttt{a}^{-1} \texttt{b}, \texttt{b}^{-1} \texttt{a}^{-1}, \texttt{a} \texttt{b})$ \emph{is} letter-thin. To see this, note that \begin{equ*}{rcl} (\texttt{b}^{-1} \texttt{a}^{-1} \texttt{b}, \texttt{b}^{-1} \texttt{a}^{-1}, \texttt{a} \texttt{b}) &\overset{(iii)}{\sim}& (\texttt{b}^{-1} \texttt{a} \texttt{b}, \texttt{b}^{-1} \texttt{a}, \texttt{a}^{-1} \texttt{b}) \\ & = & (c_1^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1} \texttt{a} c_3, c_3^{-1} \texttt{a}^{-1} c_1) \end{equ*} for $c_1 = \texttt{b}$, $c_2 = e$ and $c_3 = e$ and where $\overset{(iii)}{\sim}$ denotes the equivalence $(iii)$ of the definition of '$\sim$'; see Definition \ref{defn:equivalent triples}. \end{exmp} Note that by definition, if $(x_1,x_2,x_3)$ is letter-thin then \emph{all $x_1, x_2, x_3$ are alternating words}. See Figure \ref{fig:triangles} for the explanation of the name \emph{letter-thin triple}: First consider elements $g,h \in \mathbb{F}_2 = \langle \texttt{a}, \texttt{b} \rangle$. The triple $(g,h,(gh)^{-1})$ corresponds to sides of a geodesic triangle in the Cayley graph $\textrm{Cay}(\mathbb{F}_2, \{ \texttt{a}, \texttt{b} \})$ with endpoints $e, g, gh$. Note further that there are words $c_1, c_2, c_3 \in \mathbb{F}_2$ such that $g = c_1^{-1} c_2$, $h = c_2^{-1} c_3$, $(gh)^{-1} = c_3^{-1} c_1$ and all these expressions are freely reduced. A \emph{letter-thin} triple $(x_1,x_2,x_3)$ is such that each $x_i$ is in addition alternating and corresponds \emph{almost} to the sides of a geodesic triangle in a Cayley graph, apart from one letter $r \in \{ \texttt{a}, \texttt{b} \}$ in the ``middle'' of the triangle. 
Figure \ref{fig:triangles} (\textrm{B}) corresponds to case $\text{[T1]}$ of Definition \ref{defn:letter-thin}, Figure \ref{fig:triangles} (\textrm{C}) corresponds to case $\text{[T2]}$ of Definition \ref{defn:letter-thin}. These letter-thin triples $(x_1,x_2,x_3)$ do \emph{not} label sides of triangles in a Cayley graph or any other metric space. \begin{figure} \centering \subfloat[]{\includegraphics[width=0.3\textwidth]{thin.pdf} \label{fig:thin triangle}} \hfill \subfloat[]{\includegraphics[width=0.3\textwidth]{ltg.pdf} \label{fig:letter-thin triple}} \hfill \subfloat[]{\includegraphics[width=0.3\textwidth]{ltd.pdf} \label{fig:letter-thin degenerate}} \caption{Different ``triangles'': (\textrm{A}) arises as a generic thin triangle in the Cayley graph $\mathrm{Cay}(\mathbb{F}_2, \{ \texttt{a}, \texttt{b} \})$ of the free group with standard generating set. Figures (\textrm{B}) and (\textrm{C}) correspond to letter-thin triples $\text{[T1$\at$]}$, $\text{[T2$\at$]}$. The grey dotted circles indicate the part of the letter-thin triples which cannot be empty. These letter-thin triples do \emph{not} live in a Cayley graph or any well-known metric space.} \label{fig:triangles} \end{figure} Observe that $(x_1,x_2,x_3)$ is letter-thin if and only if $\psi(x_1,x_2,x_3)$ is letter-thin for $\psi$ defined as in Proposition \ref{prop:cutting words} (\ref{prop-case:interchange a b}), i.e.\ $\psi$ is the automorphism $\psi \colon \mathbb{F}_2 \to \mathbb{F}_2$ defined via $\psi \colon \texttt{a} \mapsto \texttt{b}$ and $\psi \colon \texttt{b} \mapsto \texttt{a}$. The maps $\alpha$ and $\beta$ respect letter-thin triples: \begin{lemma} \label{lemma:alpha keeps thin.} If $(x_1,x_2,x_3)$ is letter-thin, then both $\alpha(x_1, x_2, x_3)$ and $\beta(x_1,x_2,x_3)$ are letter-thin. \end{lemma} \begin{proof} We will proceed as follows: Let $(x_1,x_2,x_3)$ be a letter-thin triple.
By Proposition \ref{prop:alpha respects equivalence} it is enough to check that $\alpha(x_1,x_2,x_3)$ is letter-thin for one representative of the equivalence class. Hence it suffices to check that $\alpha(x_1, x_2, x_3)$ is letter-thin for \begin{enumerate} \item Type $\text{[T1$\at$]}$: $(x_1,x_2,x_3) = (c_1^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1} \texttt{a} c_3, c_3^{-1} \texttt{a}^{-1} c_1)$ \item Type $\text{[T1$\bt$]}$: $(x_1,x_2,x_3) = (c_1^{-1} \texttt{b} \texttt{a} c_2, c_2^{-1} \texttt{a}^{-1} \texttt{b} c_3, c_3^{-1} \texttt{b}^{-1} c_1)$ \item Type $\text{[T2$\at$]}$: $(x_1,x_2,x_3) = (c_1^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1}, \texttt{b} c_1)$ \item Type $\text{[T2$\bt$]}$: $(x_1,x_2,x_3) = (c_1^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_2, c_2^{-1} \texttt{a}^{-1}, \texttt{a} c_1)$ \end{enumerate} By symmetry, this will show the analogous statement for $\beta$. Proposition \ref{prop:cutting words}, (\ref{prop:cases,splitting}) allows us to compute $\alpha$ piecewise, i.e.\ after each occurrence of a letter $\texttt{a}$ or $\texttt{a}^{-1}$ in a reduced word. For any reduced word $c \in \mathcal{A}$ which is empty or starts with a power of $\texttt{b}$, we will write $c_+$ for the reduced word represented by $\texttt{a}^{-1} \alpha(\texttt{a} c)$, which, as written, is not reduced since $\alpha(\texttt{a} c)$ starts with an $\texttt{a}$. Similarly, we will write $c_-$ for the reduced word represented by $\texttt{a} \alpha(\texttt{a}^{-1} c)$. Note that $c_+$ and $c_-$ are either empty or their first letter is a power of $\texttt{b}$, as $\alpha(\texttt{a}^{\pm} c)$ is alternating. If $c$ is a word which already has a subscript, say $c_i$, then we will write $c_{i,+}$ and $c_{i,-}$, respectively. We consider each of the above cases independently.
For letter-thin triples $(x_1,x_2,x_3)$ of type $\text{[T1$\at$]}$ we compute $\alpha(x_1,x_2,x_3)$ and we will state exactly which equivalences $(i)$, $(ii)$, $(iii)$ and $(iv)$ of Definition \ref{defn:equivalent triples} are needed to obtain one of the representatives for $\text{[T1$\at$]}$, $\text{[T1$\bt$]}$, $\text{[T2$\at$]}$ and $\text{[T2$\bt$]}$ of letter-thin triples as in Definition \ref{defn:letter-thin}. For letter-thin triples $(x_1,x_2,x_3)$ of type $\text{[T1$\bt$]}$, $\text{[T2$\at$]}$ and $\text{[T2$\bt$]}$ we will just state the type of $\alpha(x_1,x_2,x_3)$ without explicitly giving the equivalence. \begin{enumerate} \item Type $\text{[T1$\at$]}$: Suppose $(x_1,x_2,x_3) = (c_1^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1} \texttt{a} c_3, c_3^{-1} \texttt{a}^{-1} c_1)$. As the $x_i$ are alternating, $c_2$ is either empty or starts with a positive or a negative power of $\texttt{a}$. We consider these cases separately: \begin{itemize} \item $c_2$ is empty. In this case we compute using Proposition \ref{prop:cutting words}, \begin{align*} \alpha(c_1^{-1} \texttt{a} \texttt{b}) &= \alpha(c_1^{-1} \texttt{a}) \texttt{a}^{-1} \alpha( \texttt{a} \texttt{b}) = \alpha( \texttt{a}^{-1} c_1)^{-1} \texttt{b} = (\texttt{a}^{-1} c_{1,-})^{-1} \texttt{b} = (c_{1,-})^{-1} \texttt{a} \texttt{b} \\ \alpha(\texttt{b}^{-1} \texttt{a} c_3) &= \alpha(\texttt{b}^{-1} \texttt{a}) \texttt{a}^{-1} \alpha(\texttt{a} c_3) = \texttt{b}^{-1} \texttt{a} c_{3,+} \\ \alpha(c_3^{-1} \texttt{a}^{-1} c_1) &= \alpha(c_3^{-1} \texttt{a}^{-1}) \texttt{a} \alpha(\texttt{a}^{-1} c_1) = \alpha(\texttt{a} c_3)^{-1} c_{1,-} = (\texttt{a} c_{3,+})^{-1} c_{1,-} = (c_{3,+})^{-1} \texttt{a}^{-1} c_{1,-} \end{align*} and hence \[ \alpha(x_1, x_2, x_3)=((c_{1,-})^{-1} \texttt{a} \texttt{b}, \texttt{b}^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1} c_{1,-}) \] which is of type $\text{[T1$\at$]}$.
Indeed, for $c_1' = c_{1,-}$, $c_2'=e$ and $c_3' = c_{3,+}$ we see that \[ \alpha(x_1, x_2, x_3) = ({c'_1}^{-1} \texttt{a} \texttt{b} c'_2, {c'_2}^{-1} \texttt{b}^{-1} \texttt{a} c'_3, {c'_3}^{-1} \texttt{a}^{-1} c'_1) \] and hence $\alpha(x_1,x_2,x_3)$ is of type $\text{[T1$\at$]}$. \item $c_2 = \texttt{a} d_2$ where $d_2 \in \mathcal{A}$. \[ \alpha(x_1, x_2, x_3)=((c_{1,-})^{-1} \texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1} c_{1,-}) \] which is of type $\text{[T2$\bt$]}$ if $c_{1,-}$ is trivial and of type $\text{[T1$\bt$]}$, else. To see this we distinguish between three different cases: \begin{itemize} \item $c_{1,-}$ is trivial: Then \begin{align*} \alpha(x_1, x_2, x_3) &=(\texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1}) \\ &\overset{(i)}{\sim} ((d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1}, \texttt{a} d_{2,+}) \\ &\overset{(iv)}{\sim} (\phi_\texttt{b}(d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} \phi_\texttt{b}(c_{3,+}), \phi_\texttt{b}(c_{3,+})^{-1} \texttt{a}^{-1}, \texttt{a} \phi_\texttt{b}(d_{2,+})) \\ &= ({c'_1}^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c'_2, {c'_2}^{-1} \texttt{a}^{-1}, \texttt{a} c'_1) \end{align*} for $c'_1 = \phi_\texttt{b}(d_{2,+})$ and $c'_2 = \phi_\texttt{b}(c_{3,+})$ and hence of type $\text{[T2$\bt$]}$. Here $\sim$ denotes the equivalences on triples defined in Definition \ref{defn:equivalent triples} with the corresponding numbering $(i) - (iv)$. \item $c_{1,-}$ is non-trivial and has first letter $\texttt{b}$. Then define $d_1$ via $c_{1,-} = \texttt{b} d_1$.
Hence $\alpha(x_1, x_2, x_3)$ equals: \begin{equ*}{ccl} & &(d_1^{-1} \texttt{b}^{-1} \texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1} \texttt{b} d_1) \\ &\overset{(iv)}{\sim} &(\phi_\texttt{b}(d_1)^{-1} \texttt{b} \texttt{a} \phi_\texttt{b}(d_{2,+}), \phi_\texttt{b}(d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} \phi_\texttt{b}(c_{3,+}), \phi_\texttt{b}(c_{3,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \phi_\texttt{b}(d_1)) \\ &= &({c'_1}^{-1} \texttt{b} \texttt{a} c'_2, {c'_2}^{-1} \texttt{a}^{-1} \texttt{b} c'_3, {c'_3}^{-1} \texttt{b}^{-1} c'_1) \end{equ*} for $c'_1 = \phi_\texttt{b}(d_1)$, $c'_2 = \phi_\texttt{b}(d_{2,+})$, $c'_3 = \texttt{a} \phi_\texttt{b}(c_{3,+})$ and hence is of type $\text{[T1$\bt$]}$. \item $c_{1,-}$ is non-trivial and has first letter $\texttt{b}^{-1}$. Then define $d_1$ via $c_{1,-} = \texttt{b}^{-1} d_1$. Hence $\alpha(x_1, x_2, x_3)$ equals: \begin{equ*}{ccl} & &(d_1^{-1} \texttt{b} \texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} d_1) \\ & \overset{(ii)}{\sim} & (d_1^{-1} \texttt{b} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} d_1) \\ & = & ({c'_1}^{-1} \texttt{b} \texttt{a} c'_2, {c'_2}^{-1} \texttt{a}^{-1} \texttt{b} c'_3, {c'_3}^{-1} \texttt{b}^{-1} c'_1) \end{equ*} for $c'_1 = d_1$, $c'_2 = c_{3,+}$, $c'_3 = \texttt{a} d_{2,+}$ and hence of type $\text{[T1$\bt$]}$. \end{itemize} \item $c_2 = \texttt{a}^{-1} d_2$ where $d_2 \in \mathcal{A}$. \[ \alpha(x_1, x_2, x_3)=((c_{1,-})^{-1} \texttt{a} \texttt{b} \texttt{a}^{-1} d_{2,-}, (d_{2,-})^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1} c_{1,-}) \] which is of type $\text{[T1$\bt$]}$ if $c_{3,+}$ is non-trivial and of type $\text{[T2$\bt$]}$, else. This can be seen analogously to the previous case.
\end{itemize} \item Type $\text{[T1$\bt$]}$: Suppose $(x_1,x_2,x_3) = (c_1^{-1} \texttt{b} \texttt{a} c_2, c_2^{-1} \texttt{a}^{-1} \texttt{b} c_3, c_3^{-1} \texttt{b}^{-1} c_1)$. Up to equivalence, there are the following sub-cases: \begin{itemize} \item Both of $c_1, c_3$ are empty. Then \[ \alpha(x_1, x_2, x_3)= ( \texttt{b} \texttt{a} c_{2,+}, (c_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}, \texttt{b}^{-1} ) \] which is of type $\text{[T1$\bt$]}$. \item $c_1$ is not empty, $c_3$ is empty. Then either \begin{itemize} \item $c_1 = \texttt{a} d_1$. In this case \[ \alpha(x_1, x_2, x_3)= ((d_{1,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}, \texttt{b}^{-1} \texttt{a} d_{1,+}) \] which is of type $\text{[T1$\bt$]}$. \item $c_1 = \texttt{a}^{-1} d_1$. In this case \[ \alpha(x_1, x_2, x_3)= ((d_{1,-})^{-1} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}, \texttt{b}^{-1} \texttt{a}^{-1} d_{1,-}) \] which is of type $\text{[T1$\at$]}$. \end{itemize} \item $c_1$ is empty and $c_3$ is not. Then either \begin{itemize} \item $c_3 = \texttt{a} d_3$, in which case \[ \alpha(x_1, x_2, x_3)= ( \texttt{b} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} d_{3,+}, (d_{3,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1}) \] which is of type $\text{[T1$\bt$]}$. \item $c_3 = \texttt{a}^{-1} d_3$, in which case \[ \alpha(x_1, x_2, x_3)= ( \texttt{b} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} d_{3,-}, (d_{3,-})^{-1} \texttt{a} \texttt{b}^{-1}) \] which is of type $\text{[T1$\at$]}$. \end{itemize} \item Both of $c_1, c_3$ are non-empty. Then either \begin{itemize} \item $c_1 = \texttt{a} d_1$, $c_3 = \texttt{a} d_3$. In this case \[ \alpha(x_1, x_2, x_3)= ( (d_{1,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} d_{3,+}, (d_{3,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} d_{1,+}) \] which is of type $\text{[T1$\bt$]}$.
\item $c_1 = \texttt{a} d_1$, $c_3 = \texttt{a}^{-1} d_3$. In this case \[ \alpha(x_1, x_2, x_3)= ( (d_{1,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} d_{3,-}, (d_{3,-})^{-1} \texttt{a} d_{1,+}) \] which is of type $\text{[T1$\bt$]}$ if $d_{3,-}$ is non-trivial, and of type $\text{[T2$\bt$]}$, else. \item $c_1 = \texttt{a}^{-1} d_1$, $c_3 = \texttt{a} d_3$. In this case \[ \alpha(x_1, x_2, x_3)= ( (d_{1,-})^{-1} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} d_{3,+}, (d_{3,+})^{-1} \texttt{a}^{-1} d_{1,-}) \] which is of type $\text{[T1$\bt$]}$ if $d_{1,-}$ is non-trivial and of type $\text{[T2$\bt$]}$, else. \item $c_1 = \texttt{a}^{-1} d_1$, $c_3 = \texttt{a}^{-1} d_3$. In this case \[ \alpha(x_1, x_2, x_3)= ( (d_{1,-})^{-1} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} d_{3,-}, (d_{3,-})^{-1} \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} d_{1,-}) \] which is of type $\text{[T1$\bt$]}$ if $c_{2,+}$ is non-trivial and of type $\text{[T2$\bt$]}$, else. \end{itemize} \end{itemize} \item Type $\text{[T2$\at$]}$: Suppose $(x_1,x_2,x_3) = (c_1^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1}, \texttt{b} c_1)$. We distinguish between the following cases: \begin{itemize} \item Both of $c_1, c_2$ are empty. Then \[ \alpha(x_1,x_2,x_3) = (\texttt{b}^{-1} \texttt{a} \texttt{b}, \texttt{b}^{-1}, \texttt{b}) \] which is of type $\text{[T2$\at$]}$. \item One of $c_1, c_2$ is empty. Up to equivalence and changing indices we may assume that $c_2$ is empty.
Then either \begin{itemize} \item $c_1 = \texttt{a} d_1$ in which case \[ \alpha(x_1,x_2,x_3) = ((d_{1,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b}, \texttt{b}^{-1}, \texttt{b} \texttt{a} d_{1,+}) \] which is of type $\text{[T2$\at$]}$ or \item $c_1 = \texttt{a}^{-1} d_1$ in which case \[ \alpha(x_1,x_2,x_3) = ((d_{1,-})^{-1} \texttt{a} \texttt{b}, \texttt{b}^{-1}, \texttt{b} \texttt{a}^{-1} d_{1,-}) \] which is of type $\text{[T1$\bt$]}$. \end{itemize} \item Both of $c_1, c_2$ are non-empty. Then either \begin{itemize} \item $c_1 = \texttt{a} d_1$, $c_2 = \texttt{a} d_2$ in which case \[ \alpha(x_1,x_2,x_3) = ((d_{1,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1}, \texttt{b} \texttt{a} d_{1,+}) \] which is of type $\text{[T1$\bt$]}$ or \item $c_1 = \texttt{a} d_1$, $c_2 = \texttt{a}^{-1} d_2$ in which case \[ \alpha(x_1,x_2,x_3) = ((d_{1,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b} \texttt{a}^{-1} d_{2,-}, (d_{2,-})^{-1} \texttt{a} \texttt{b}^{-1}, \texttt{b} \texttt{a} d_{1,+}) \] which is of type $\text{[T2$\at$]}$ or \item $c_1 = \texttt{a}^{-1} d_1$, $c_2 = \texttt{a} d_2$ in which case \[ \alpha(x_1,x_2,x_3) = ((d_{1,-})^{-1} \texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1}, \texttt{b} \texttt{a}^{-1} d_{1,-}) \] which is of type $\text{[T1$\at$]}$ or \item $c_1 = \texttt{a}^{-1} d_1$, $c_2 = \texttt{a}^{-1} d_2$ in which case \[ \alpha(x_1,x_2,x_3) = ((d_{1,-})^{-1} \texttt{a} \texttt{b} \texttt{a}^{-1} d_{2,-}, (d_{2,-})^{-1} \texttt{a} \texttt{b}^{-1}, \texttt{b} \texttt{a}^{-1} d_{1,-}) \] which is of type $\text{[T1$\bt$]}$. \end{itemize} \end{itemize} \item Type $\text{[T2$\bt$]}$: Suppose $(x_1,x_2,x_3) = (c_1^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_2, c_2^{-1} \texttt{a}^{-1}, \texttt{a} c_1)$. 
We see that \[ \alpha(x_1,x_2,x_3) = ((c_{1,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_{2,+}, (c_{2,+})^{-1} \texttt{a}^{-1}, \texttt{a} c_{1,+} ) \] which is of type $\text{[T2$\bt$]}$. \end{enumerate} This concludes the proof of Lemma \ref{lemma:alpha keeps thin.}. \end{proof} \subsection{Brooks Quasimorphisms, Homomorphisms and Letter-Thin Triples} \label{subsec:brooks qm, homomorphisms and letter-thin} For what follows we want to study how the Brooks quasimorphism $\eta_0 = \eta_{\texttt{a} \texttt{b}} - \eta_{\texttt{b} \texttt{a}}$ defined in Example \ref{exmp: extemal brooks quasimorphisms on free group} and certain homomorphisms behave on letter-thin triples. This will be done in Propositions \ref{prop:letter thin triples and two quasimorphisms} and \ref{prop: letter thin triples and homomorphisms}, respectively. \begin{prop} \label{prop:letter thin triples and two quasimorphisms} Let $\eta_0 = \eta_{\texttt{a} \texttt{b}} - \eta_{\texttt{b} \texttt{a}} \colon \mathbb{F}_2 \to \mathbb{Z}$ be as above. Then \[ |\eta_0(x_1) + \eta_0(x_2) + \eta_0(x_3)| = 1 \] for every letter-thin triple $(x_1,x_2,x_3)$. In particular $\eta_0(x_1)+\eta_0(x_2)+\eta_0(x_3) \in \{ -1, +1 \}$. \end{prop} \begin{proof} First note that if $w = w_1 w_2 \in \mathbb{F}_2$ as a reduced word and if $\texttt{z}_1$ is the last letter of $w_1$ and $\texttt{z}_2$ is the first letter of $w_2$, then \begin{align} \label{equ:split up words brooks} \eta_0(w) &= \eta_0(w_1) + \eta_0(\texttt{z}_1 \texttt{z}_2) + \eta_0(w_2). \end{align} Let $(x_1,x_2,x_3)$ be a triple. Note that the value \[ |\eta_0(x_1) + \eta_0(x_2) + \eta_0(x_3)| \] is invariant under the equivalences $(i)$ and $(ii)$ of Definition \ref{defn:equivalent triples}.
Up to these equivalences we see that any letter-thin triple $(x_1,x_2,x_3)$ is equivalent via $(i)$ and $(ii)$ to the following: \begin{itemize} \item Type $\text{[T1$\at$]}$: $(c_1^{-1} \texttt{x} \texttt{y} c_2, c_2^{-1} \texttt{y}^{-1} \texttt{x} c_3, c_3^{-1} \texttt{x}^{-1} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. If $c_i$ is empty set $\texttt{z}_i = e$. Else let $\texttt{z}_i$ be the first letter of $c_i$. Then, by using successively Equation (\ref{equ:split up words brooks}) we see that \begin{align*} \eta_0(x_1) &= \eta_0(c_1^{-1}) + \eta_0(\texttt{z}_1^{-1} \texttt{x}) + \eta_0(\texttt{x} \texttt{y}) + \eta_0(\texttt{y} \texttt{z}_2) + \eta_0(c_2) \\ \eta_0(x_2) &= \eta_0(c_2^{-1}) + \eta_0(\texttt{z}_2^{-1} \texttt{y}^{-1}) + \eta_0(\texttt{y}^{-1} \texttt{x}) + \eta_0(\texttt{x} \texttt{z}_3) + \eta_0(c_3) \\ \eta_0(x_3) &= \eta_0(c_3^{-1}) + \eta_0(\texttt{z}_3^{-1} \texttt{x}^{-1}) + \eta_0(\texttt{x}^{-1} \texttt{z}_1) + \eta_0(c_1) \end{align*} Using that $\eta_0(c^{-1}) = - \eta_0(c)$ for any $c \in \mathbb{F}_2$ we see that \[ |\eta_0(x_1) + \eta_0(x_2) + \eta_0(x_3)| = |\eta_0(\texttt{x} \texttt{y}) + \eta_0(\texttt{y}^{-1} \texttt{x})| \] and hence we see that for any choice $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$, $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$ \[ |\eta_0(x_1) + \eta_0(x_2) + \eta_0(x_3)| = 1. \] \item Type $\text{[T1$\bt$]}$: $(c_1^{-1} \texttt{y} \texttt{x} c_2, c_2^{-1} \texttt{x}^{-1} \texttt{y} c_3, c_3^{-1} \texttt{y}^{-1} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. This case is analogous to the previous case. \item Type $\text{[T2$\at$]}$: $(c_1^{-1} \texttt{y}^{-1} \texttt{x} \texttt{y} c_2, c_2^{-1} \texttt{y}^{-1}, \texttt{y} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. 
Again, if $c_i$ is empty, set $\texttt{z}_i = e$. Else let $\texttt{z}_i$ be the first letter of $c_i$. By successively using Equation (\ref{equ:split up words brooks}) we see that \begin{align*} \eta_0(x_1) &= \eta_0(c_1^{-1}) + \eta_0(\texttt{z}_1^{-1} \texttt{y}^{-1}) + \eta_0(\texttt{y}^{-1} \texttt{x}) + \eta_0(\texttt{x} \texttt{y}) + \eta_0(\texttt{y} \texttt{z}_2) + \eta_0(c_2) \\ \eta_0(x_2) &= \eta_0(c_2^{-1}) + \eta_0(\texttt{z}_2^{-1} \texttt{y}^{-1}) \\ \eta_0(x_3) &= \eta_0(\texttt{y} \texttt{z}_1) + \eta_0(c_1) \end{align*} and again we observe that \begin{align*} |\eta_0(x_1) + \eta_0(x_2) + \eta_0(x_3)| &= |\eta_0(\texttt{y}^{-1} \texttt{x}) + \eta_0(\texttt{x} \texttt{y})| = 1 \end{align*} for any choice of $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$, $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. \item Type $\text{[T2$\bt$]}$: $(c_1^{-1} \texttt{x}^{-1} \texttt{y} \texttt{x} c_2, c_2^{-1} \texttt{x}^{-1}, \texttt{x} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. This case is analogous to the previous case. \end{itemize} \end{proof} Recall that $\eta_\texttt{x} \colon \mathbb{F}_2 \to \mathbb{Z}$ denotes the homomorphism which counts the letter $\texttt{x}$. \begin{prop} \label{prop: letter thin triples and homomorphisms} Let $\eta = \eta_\texttt{x} + \eta_\texttt{y} \colon \mathbb{F}_2 \to \mathbb{Z}$ for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. Then \[ |\eta(x_1) + \eta(x_2) + \eta(x_3)| = 1 \] for any letter-thin triple $(x_1,x_2,x_3)$. In particular $\eta(x_1) + \eta(x_2) + \eta(x_3) \in \{ -1, +1 \}$. \end{prop} \begin{proof} Let $\eta$ be as in the proposition and suppose that $(x_1,x_2,x_3)$ is letter-thin.
Just like in the proof of the previous proposition we will consider the four different types of letter thin triples up to equivalences $(i)$ and $(ii)$ of Definition \ref{defn:equivalent triples}. \begin{itemize} \item Type $\text{[T1$\at$]}$: $(c_1^{-1} \texttt{x} \texttt{y} c_2, c_2^{-1} \texttt{y}^{-1} \texttt{x} c_3, c_3^{-1} \texttt{x}^{-1} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. We directly calculate, using that $\eta$ is a homomorphism: \begin{align*} \eta(x_1) &= \eta(c_1^{-1} \texttt{x} \texttt{y} c_2) = -\eta(c_1) + \eta(\texttt{x}) + \eta(\texttt{y}) + \eta(c_2) \\ \eta(x_2) &= \eta(c_2^{-1} \texttt{y}^{-1} \texttt{x} c_3) = -\eta(c_2) - \eta( \texttt{y}) + \eta(\texttt{x}) + \eta(c_3) \\ \eta(x_3) &= \eta(c_3^{-1} \texttt{x}^{-1} c_1) = - \eta(c_3) - \eta(\texttt{x}) + \eta(c_1) \end{align*} and hence \[ |\eta(x_1)+ \eta(x_2) + \eta(x_3)| = |\eta(\texttt{x})| = 1 \] for any $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$. \item Type $\text{[T1$\bt$]}$: $(c_1^{-1} \texttt{y} \texttt{x} c_2, c_2^{-1} \texttt{x}^{-1} \texttt{y} c_3, c_3^{-1} \texttt{y}^{-1} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. This case is analogous to the previous case. \item Type $\text{[T2$\at$]}$: $(c_1^{-1} \texttt{y}^{-1} \texttt{x} \texttt{y} c_2, c_2^{-1} \texttt{y}^{-1}, \texttt{y} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. 
Again we calculate \begin{align*} \eta(x_1) &= \eta(c_1^{-1} \texttt{y}^{-1} \texttt{x} \texttt{y} c_2) = - \eta(c_1) - \eta(\texttt{y}) + \eta(\texttt{x}) + \eta(\texttt{y}) + \eta(c_2) \\ \eta(x_2) &= \eta(c_2^{-1} \texttt{y}^{-1}) = -\eta(c_2) - \eta(\texttt{y}) \\ \eta(x_3) &= \eta(\texttt{y} c_1) = \eta(\texttt{y})+\eta(c_1) \end{align*} and hence again \[ |\eta(x_1)+ \eta(x_2) + \eta(x_3)| = |\eta(\texttt{x})| = 1 \] for any $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$. \item Type $\text{[T2$\bt$]}$: $(c_1^{-1} \texttt{x}^{-1} \texttt{y} \texttt{x} c_2, c_2^{-1} \texttt{x}^{-1}, \texttt{x} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. This case is analogous to the previous case. \end{itemize} \end{proof} \section{Gaps via Letter-Quasimorphisms} \label{sec:gaps via Letter-Quasimorphisms} The aim of this section is to define letter-quasimorphisms and deduce the criterion for $1/2$-gaps in $\mathrm{scl}$. There will be two types of letter-quasimorphisms: \emph{(general) letter-quasimorphisms} (Definition~\ref{defn:letter quasihomomorphism}) and \emph{well-behaved letter-quasimorphisms} (Definition \ref{defn:well behaved letter quasimorphisms}). The former is useful for applications, the latter for proofs. For each letter-quasimorphism $\Phi \colon G \to \mathcal{A}$ there will be an associated well-behaved letter-quasimorphism $\tilde{\Phi} \colon G \to \mathcal{A}$ where $\tilde{\Phi}(g)$ is obtained from $\Phi(g)$ by modifying its beginning and its end; see Proposition \ref{prop:every letter-qm induces well behaved}. \subsection{Letter-Quasimorphisms and Well-Behaved Letter-Quasimorphisms} \label{subsec:letter-quasimorphisms and well-behaved letter quasimorphisms} As always $\mathcal{A}$ denotes the set of alternating words of $\mathbb{F}_2$ in the generators $\texttt{a}$ and $\texttt{b}$. \begin{defn} \label{defn:letter quasihomomorphism} Let $G$ be a group.
We say that $\Phi \colon G \to \mathcal{A}$ is a \emph{letter-quasimorphism} if $\Phi$ is alternating, i.e. $\Phi(g^{-1}) = \Phi(g)^{-1}$ for every $g \in G$ and if for every $g,h \in G$ one of the following holds: \begin{enumerate} \item \label{defn:letter-qm:thin} $\Phi(g) \Phi(h) \Phi(g h)^{-1} = e$, or \item \label{defn:letter-qm:general} there are elements $c_1, c_2, c_3 \in \mathcal{A}$ and letters $\texttt{x}_1, \texttt{x}_2, \texttt{x}_3$ such that either $\texttt{x}_1, \texttt{x}_2, \texttt{x}_3 \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{x}_1 \texttt{x}_2 \texttt{x}_3 \in \{ \texttt{a}, \texttt{a}^{-1} \}$ or $\texttt{x}_1, \texttt{x}_2, \texttt{x}_3 \in \{ \texttt{b}, \texttt{b}^{-1} \}$ and $\texttt{x}_1 \texttt{x}_2 \texttt{x}_3 \in \{ \texttt{b}, \texttt{b}^{-1} \}$ which satisfy that $\Phi(g) = c_1^{-1} \texttt{x}_1 c_2$, $\Phi(h) = c_2^{-1} \texttt{x}_2 c_3$ and $\Phi(g h)^{-1} = c_3^{-1} \texttt{x}_3 c_1$ as freely reduced alternating words. \end{enumerate} \end{defn} The motivating example for letter-quasimorphisms is the following: \begin{exmp} \label{exmp:letter quasimorphisms on free group} Consider the map $\Phi \colon \mathbb{F}_2 \to \mathcal{A}$ defined as follows. Suppose that $w \in \mathbb{F}_2$ has reduced representation $\texttt{a}^{n_1} \texttt{b}^{m_1} \cdots \texttt{a}^{n_k} \texttt{b}^{m_k}$ with all $n_i, m_i \in \mathbb{Z}$ where all but possibly $n_1$ and / or $m_k$ are non-zero. Then set \[ \Phi(w) = \texttt{a}^{\textrm{sign}(n_1)} \texttt{b}^{\textrm{sign}(m_1)} \cdots \texttt{a}^{\textrm{sign}(n_k)} \texttt{b}^{\textrm{sign}(m_k)} \] where $\textrm{sign} \colon \mathbb{Z} \to \{ +1, 0, -1 \}$ is defined as usual. This may be seen to be a letter-quasimorphism and will be vastly generalised to amalgamated free products; see Lemma \ref{lemma:amalgamated yields letter-quasimorphism}. 
Observe that for any group $G$ and any homomorphism $\Omega \colon G \to \mathbb{F}_2$ the map $\Phi \circ \Omega \colon G \to \mathcal{A}$ is a letter-quasimorphism. Suppose that $G$ is \emph{residually free}. Then for every non-trivial element $g \in G$ there is a homomorphism $\Omega_g \colon G \to \mathbb{F}_2$ such that $\Omega_g(g) \in \mathbb{F}_2$ is non-trivial. By applying a suitable automorphism on $\mathbb{F}_2$ to $\Omega_g$ we may assume that $\Omega_g(g)$ starts in a power of $\texttt{a}$ and ends in a power of $\texttt{b}$. Then $\Phi_g := \Phi \circ \Omega_g$ is a letter-quasimorphism such that $\Phi_g(g)$ is non-trivial and such that $\Phi_g(g^n) = \Phi_g(g)^n$. \end{exmp} \begin{defn} \label{defn:well behaved letter quasimorphisms} We will call triples $(x_1,x_2,x_3)$ \emph{degenerate} if they are equivalent to a triple $(w, w^{-1}, e)$ for some $w \in \mathcal{A}$. Let $G$ be a group. A map $\Psi \colon G \to \mathcal{A}$ is called a \emph{well-behaved letter-quasimorphism} if $\Psi$ is alternating, i.e. $\Psi(g^{-1}) = \Psi(g)^{-1}$ for every $g \in G$, and for all $g,h \in G$, the triple \[ (\Psi(g), \Psi(h), \Psi(gh)^{-1}) \] is either letter-thin (see Definition \ref{defn:letter-thin}) or degenerate. \end{defn} \begin{rmk} \label{prop: alpha and beta preserve well-behaved letter-quasimorphisms} Note that a triple $(x_1,x_2,x_3)$ is degenerate if and only if there is some $w \in \mathcal{A}$ such that $(x_1,x_2,x_3)$ equals $(w,w^{-1},e)$, $(w,e, w^{-1})$ or $(e,w,w^{-1})$. Note that if $\Phi \colon G \to \mathcal{A}$ is a well-behaved letter-quasimorphism then also $\alpha \circ \Phi \colon G \to \mathcal{A}$ and $\beta \circ \Phi \colon G \to \mathcal{A}$ are well-behaved letter-quasimorphisms. This follows immediately from Lemma \ref{lemma:alpha keeps thin.} and the fact that $\alpha$ (resp. $\beta$) satisfies $\alpha(w^{-1}) = \alpha(w)^{-1}$ (resp. $\beta(w^{-1}) = \beta(w)^{-1}$) for any $w \in \mathcal{A}$.
\end{rmk} It is easy to see that every well-behaved letter-quasimorphism is also a letter-quasimorphism. The converse does not hold. The map $\Phi \colon \mathbb{F}_2 \to \mathcal{A}$ described in Example \ref{exmp:letter quasimorphisms on free group} is a letter-quasimorphism but not a well-behaved letter-quasimorphism. For example, for $g= \texttt{a}$, $h = \texttt{a}$ we obtain $(\Phi(g), \Phi(h), \Phi(h^{-1} g^{-1})) = (\texttt{a}, \texttt{a}, \texttt{a}^{-1})$, which is neither letter-thin nor degenerate. However, we may assign to each letter-quasimorphism $\Phi$ a well-behaved letter-quasimorphism $\tilde{\Phi}$. This will be done by post-composing $\Phi$ with a map $w \mapsto \tilde{w}$ defined as follows. Set $\tilde{w} = e$ whenever $w \in \{ \texttt{a}, e, \texttt{a}^{-1} \}$. Else let $\texttt{z}_s$ be the first and $\texttt{z}_e$ be the last letter of $w \in \mathcal{A}$. Define $\tilde{w}$ as the reduced element in $\mathbb{F}_2$ freely equal to $\zeta_s(\texttt{z}_s) w \zeta_e(\texttt{z}_e)$ where \[ \zeta_s(\texttt{z}) = \begin{cases} e & \text{ if } \texttt{z}=\texttt{a} \\ \texttt{a} & \text{ if } \texttt{z} = \texttt{b} \text{ or } \texttt{b}^{-1} \\ \texttt{a}^2 & \text{ if } \texttt{z} = \texttt{a}^{-1} \end{cases} \] and \[ \zeta_e(\texttt{z}) = \begin{cases} e & \text{ if } \texttt{z} = \texttt{a}^{-1} \\ \texttt{a}^{-1} & \text{ if } \texttt{z} = \texttt{b} \text{ or } \texttt{b}^{-1} \\ \texttt{a}^{-2} & \text{ if } \texttt{z} = \texttt{a}. \end{cases} \] The key point is that $\tilde{w}$ starts with $\texttt{a}$ and ends with $\texttt{a}^{-1}$, unless $w \in \{ \texttt{a}, e, \texttt{a}^{-1} \}$. Observe that $\zeta_e(\texttt{z})^{-1} = \zeta_s(\texttt{z}^{-1})$, and hence the map $w \mapsto \tilde{w}$ is alternating, i.e. $\widetilde{w^{-1}}= \tilde{w}^{-1}$.
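The maps $\zeta_s$, $\zeta_e$ and $w \mapsto \tilde{w}$ are elementary enough to be checked by machine. The following Python sketch (not part of the paper; the encoding of words as strings, with capital letters standing for inverse letters, is our own convention) implements free reduction and the map $w \mapsto \tilde{w}$:

```python
def reduce_word(w):
    """Freely reduce a word over {a, b}; capital letters denote inverse letters."""
    out = []
    for x in w:
        if out and out[-1] == x.swapcase():
            out.pop()  # cancel a letter against its inverse
        else:
            out.append(x)
    return "".join(out)

# zeta_s is keyed by the first letter of w, zeta_e by the last letter;
# "A" stands for a^{-1} and "B" for b^{-1}.
ZETA_S = {"a": "", "b": "a", "B": "a", "A": "aa"}
ZETA_E = {"A": "", "b": "A", "B": "A", "a": "AA"}

def tilde(w):
    """The map w -> w~: trivial on {a, e, a^{-1}}, otherwise the free
    reduction of zeta_s(first letter) * w * zeta_e(last letter)."""
    if w in ("", "a", "A"):
        return ""
    return reduce_word(ZETA_S[w[0]] + w + ZETA_E[w[-1]])
```

One checks, for instance, that `tilde("Ababa")` returns `"ababA"`, and that every non-trivial value starts with $\texttt{a}$ and ends with $\texttt{a}^{-1}$.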
For example, $\texttt{a} \mapsto e$, $\texttt{a} \texttt{b} \texttt{a}^{-1} \mapsto \texttt{a} \texttt{b} \texttt{a}^{-1}$ and $\texttt{a}^{-1} \texttt{b} \texttt{a} \texttt{b} \texttt{a} \mapsto \texttt{a} \texttt{b} \texttt{a} \texttt{b} \texttt{a}^{-1}$. If $\Phi \colon G \to \mathcal{A}$ is a letter-quasimorphism then we define $\tilde{\Phi} \colon G \to \mathcal{A}$ via $\tilde{\Phi}(g) := \widetilde{\Phi(g)}$. \begin{prop} \label{prop:every letter-qm induces well behaved} If $\Phi \colon G \to \mathcal{A}$ is a letter-quasimorphism then $\tilde \Phi \colon G \to \mathcal{A}$ is a well-behaved letter-quasimorphism, called the \emph{associated} well-behaved letter-quasimorphism. \end{prop} \begin{proof} As $w \mapsto \tilde{w}$ commutes with taking inverses, if $\Phi$ is alternating then so is $\tilde{\Phi}$. In what follows we will use the following easy to check claim. \begin{claim} Let $(x_1,x_2,x_3)$ be an arbitrary triple obtained from $(y_1,y_2,y_3)$ by applying a sequence of the equivalences $(i)$ and $(ii)$ of Definition \ref{defn:equivalent triples}. Then $(\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \sim (\tilde{y}_1,\tilde{y}_2,\tilde{y}_3)$. In this case we say that the triples $(x_1,x_2,x_3)$ and $(y_1,y_2,y_3)$ are \emph{equivalent up to rotation and inverses}. \end{claim} Let $g,h \in G$. We wish to show that $(\tilde{\Phi}(g), \tilde{\Phi}(h), \tilde{\Phi}(gh)^{-1})$ is a letter-thin triple or degenerate, i.e. equivalent to $(w, w^{-1}, e)$ for some $w \in \mathcal{A}$. If $(\Phi(g), \Phi(h), \Phi(gh)^{-1})$ is equivalent up to rotation and inverses to $(u_1,u_2,u_3)$ the above claim implies that it suffices to check that $(\tilde{u}_1,\tilde{u}_2,\tilde{u}_3)$ is either letter-thin or equivalent to $(w, w^{-1}, e)$. First suppose that $g,h$ are as in Case (\ref{defn:letter-qm:thin}) of Definition \ref{defn:letter quasihomomorphism} i.e. $\Phi(g) \Phi(h) \Phi(gh)^{-1} = e$. 
If one of $\Phi(g)$, $\Phi(h)$ and $\Phi(gh)$ is trivial then the other two elements are mutually inverse. Hence, up to rotation and taking inverses we may assume that \[ (\Phi(g), \Phi(h), \Phi(gh)^{-1})=(u,u^{-1},e) \] for some $u \in \mathcal{A}$. Thus $(\tilde{u}, \tilde{u}^{-1}, e)$ is degenerate. If none of $\Phi(g)$, $\Phi(h)$ and $\Phi(gh)^{-1}$ are trivial then, as $\Phi$ maps to alternating elements, there are elements $u_1, u_2$ such that $u_1$ ends in a power of $\texttt{a}$ and $u_2$ starts in a power of $\texttt{b}$, such that $(\Phi(g), \Phi(h), \Phi(gh))$ is equivalent up to rotation and taking inverses to $(u_1, u_2, u_3)$ where $u_3 = u_2^{-1} u_1^{-1}$ as a reduced word. Further, write $u_1 = u_1' \texttt{x}$ as a reduced word for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and an appropriate word $u_1' \in \mathcal{A}$. If $u_1'$ is empty, then $\tilde{u}_1=e$. Let $\texttt{z}_2$ be the last letter of $u_2$. Then \[ (\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (e, \texttt{a} u_2 \zeta_e(\texttt{z}_2), \zeta_e(\texttt{z}_2)^{-1} u_2^{-1} \texttt{a}^{-1} ) \] which is equivalent to $(w, w^{-1}, e)$ for $w = \texttt{a} u_2 \zeta_e(\texttt{z}_2)$. If $u_1'$ is non-empty, let $\texttt{z}_1$ be the first letter of $u_1'$ and as before let $\texttt{z}_2$ be the last letter of $u_2$. Then \[ (\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (\zeta_s(\texttt{z}_1) u_1' \texttt{a}^{-1} , \texttt{a} u_2 \zeta_e(\texttt{z}_2), \zeta_e(\texttt{z}_2)^{-1} u_2^{-1} \texttt{x}^{-1} u'^{-1}_1 \zeta_s(\texttt{z}_1)^{-1} ) \] which can be seen to be letter-thin of type $\text{[T1$\at$]}$. This shows that $(\tilde{\Phi}(g), \tilde{\Phi}(h), \tilde{\Phi}(gh)^{-1})$ is letter-thin or degenerate if $\Phi(g) \Phi(h) \Phi(gh)^{-1} = e$. Hence, suppose that $g,h$ are as in Case (\ref{defn:letter-qm:general}) of Definition \ref{defn:letter quasihomomorphism}.
Then $(\Phi(g), \Phi(h), \Phi(gh))$ is equivalent up to rotation and inverses to \[ (u_1,u_2,u_3) = (c_1^{-1} \texttt{x} c_2, c_2^{-1} \texttt{x} c_3, c_3^{-1} \texttt{x}^{-1} c_1) \] for $\texttt{x} \in \{ \texttt{a}, \texttt{b} \}$ where $c_1,c_2,c_3 \in \mathcal{A}$ are \emph{arbitrary}, i.e.\ we do not assume that $c_2$ is non-empty as in Definition \ref{defn:letter-thin}. First, suppose that $\texttt{x} = \texttt{b}$. Define \[ d_i = \begin{cases} c_i \zeta_e(\texttt{z}_i) & \text{ if } c_i \not = e \\ \texttt{a}^{-1} & \text{ else} \end{cases} \] where $\texttt{z}_i$ is the last letter of $c_i$. We then see that \[ (\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (d_1^{-1} \texttt{b} d_2, d_2^{-1} \texttt{b} d_3, d_3^{-1} \texttt{b}^{-1} d_1) \] which is letter-thin of type $\text{[T1$\bt$]}$ as all $d_i$ are non-trivial. Hence, suppose that $\texttt{x} = \texttt{a}$. For what follows, if $c_i$ is non-empty, we will denote by $\texttt{z}_i$ the last letter of $c_i$ and let $d_i$ be the freely reduced word represented by $c_i \zeta_e(\texttt{z}_i)$. Observe that if $c_i$ is non-empty then so is $d_i$.
There are the following cases: \begin{itemize} \item[(i)] $c_1 \not = e$, $c_2 \not = e$, $c_3 \not = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (d_1^{-1} \texttt{a} d_2, d_2^{-1} \texttt{a} d_3, d_3^{-1} \texttt{a}^{-1} d_1) $ \item[(ii)] $c_1 \not = e$, $c_2 \not = e$, $c_3 = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (d_1^{-1} \texttt{a} d_2, d_2^{-1} \texttt{a}^{-1} , \texttt{a} d_1) $ \item[(iii)] $c_1 \not = e$, $c_2 = e$, $c_3 \not = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (d_1^{-1} \texttt{a}^{-1} , \texttt{a} d_3, d_3^{-1} \texttt{a}^{-1} d_1) $ \item[(iv)] $c_1 = e$, $c_2 \not = e$, $c_3 \not = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (\texttt{a} d_2, d_2^{-1} \texttt{a} d_3, d_3^{-1} \texttt{a}^{-1} ) $ \item[(v)] $c_1 \not = e$, $c_2 = e$, $c_3 = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (d_1^{-1} \texttt{a}^{-1}, e , \texttt{a} d_1) $ \item[(vi)] $c_1 = e$, $c_2 \not = e$, $c_3 = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = ( \texttt{a} d_2, d_2^{-1} \texttt{a}^{-1} , e ) $ \item[(vii)] $c_1 = e$, $c_2 = e$, $c_3 \not = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = ( e, \texttt{a} d_3, d_3^{-1} \texttt{a}^{-1} ) $ \item[(viii)] $c_1 = e$, $c_2 = e$, $c_3 = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (e, e, e) $ \end{itemize} and cases $(i)-(iv)$ can be seen to be letter-thin of type $\text{[T1$\at$]}$ and cases $(v)-(viii)$ can be seen to be degenerate. This completes the proof. \end{proof} Both letter-quasimorphisms and well-behaved letter-quasimorphisms are examples of \emph{quasimorphisms} in the sense of Hartnick--Schweitzer \cite{hartnick-schweitzer}; see Subsection \ref{subsec:generalised qm}. Let $\Phi$ be a letter-quasimorphism and let $\bar{\eta} \colon \mathbb{F}_2 \to \mathbb{R}$ be an ordinary homogeneous quasimorphism with defect $D$ which vanishes on the generators $\texttt{a}, \texttt{b}$. We wish to calculate the defect of $\bar{\eta}\circ \Phi$. Fix $g,h \in G$.
If $\Phi(g) \Phi(h) = \Phi(gh)$, then \[ |\bar{\eta} \circ \Phi(g) + \bar{\eta} \circ \Phi(h) - \bar{\eta} \circ \Phi(gh) | \leq D. \] Else, up to rotating the factors we see that \[ (\Phi(g), \Phi(h), \Phi(gh)^{-1}) = (d_1^{-1} \texttt{x} d_2, d_2^{-1} d_3, d_3^{-1} d_1) \] for some appropriate $d_1,d_2,d_3 \in \mathcal{A}$, $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1}, \texttt{b}, \texttt{b}^{-1} \}$. Then, as $\bar{\eta}$ is homogeneous, $\bar{\eta}(d_1^{-1} \texttt{x} d_2) = \bar{\eta}(\texttt{x} d_2 d_1^{-1})$ and hence $|\bar{\eta}(\texttt{x} d_2 d_1^{-1})- \bar{\eta}(d_2 d_1^{-1})| \leq D$ as we assumed that $\bar{\eta}$ vanishes on the generators. Then we may estimate \[ |\bar{\eta} \circ \Phi(g) + \bar{\eta} \circ \Phi(h) + \bar{\eta} \circ \Phi(gh)^{-1} | = |\bar{\eta}(d_1^{-1} \texttt{x} d_2) + \bar{\eta}(d_2^{-1} d_3) + \bar{\eta}(d_3^{-1} d_1)| \leq 4D \] and for the homogenisation $\bar{\phi}$ of $\phi := \bar{\eta} \circ \Phi$ we estimate that $D(\bar{\phi}) \leq 8D$ using that homogenisation at most doubles the defect; see Proposition \ref{prop:defect of homogenisation doubles}. Hence if $\Phi(g) \in \mathbb{F}_2'$ is such that $\Phi(g^n) = w^n$ for some non-trivial $w \in \mathcal{A}$ which also lies in the commutator subgroup $\mathbb{F}_2'$ and $\eta \colon \mathbb{F}_2 \to \mathbb{R}$ is homogeneous and extremal for $\Phi(g)$ with defect $1$ then, by Bavard duality, \[ \mathrm{scl}(g) \geq \frac{\bar{\phi}(g)}{16} \geq \frac{\bar{\eta}(\Phi(g))}{16} = \frac{\mathrm{scl}(\Phi(g))}{8} \] and in particular $\mathrm{scl}(g) \geq 1/16$. This is already a good estimate, but we can do much better; see Theorem \ref{thm:main}. We will see that this notion is much more flexible than homomorphisms. There are groups $G$ such that for every non-trivial element $g \in G'$ there is a letter-quasimorphism $\Phi$ such that $\Phi(g)$ is non-trivial.
This may be possible even if the group $G$ is not residually free, for example if $G$ is a right-angled Artin group; see Section \ref{sec:RAAGs and scl}. \subsection{Main Theorem} We now deduce our main criterion for $1/2$-gaps in $\mathrm{scl}$: \begin{thm} \label{thm:main} Let $G$ be a group and let $g_0 \in G$. Suppose there is a letter-quasimorphism $\Phi \colon G \to \mathcal{A}$ such that $\Phi(g_0)$ is non-trivial and that $\Phi(g_0^n) = \Phi(g_0)^n$ for all $n \in \mathbb{N}$. Then there is an explicit homogeneous quasimorphism $\bar{\phi} \colon G \to \mathbb{R}$ with $D(\bar{\phi}) \leq 1$ such that $\bar{\phi}(g_0) \geq 1$. If $g_0 \in G'$, then $\textrm{scl}(g_0) \geq 1/2$. If $G$ is countable then there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ such that $[\delta^1 \bar{\phi}]=\rho^*\mathrm{eu}^\mathbb{R}_b \in \mathrm{H}^2_b(G,\mathbb{R})$, for $\mathrm{eu}^\mathbb{R}_b$ the real bounded Euler class. \end{thm} In particular, the $\Phi(g_0) \in \mathcal{A}$ of the Theorem has to be alternating and of \emph{even length}, else $\Phi(g_0)^n$ would not be an alternating word. \begin{proof} Let $\Phi \colon G \to \mathcal{A}$ be the letter-quasimorphism as in the theorem and let $\tilde{\Phi} \colon G \to \mathcal{A}$ be the associated well behaved letter-quasimorphism described above. As $\tilde \Phi(g_0)$ is obtained from $\Phi(g_0)$ by just possibly changing the beginning and the end of the word $\Phi(g_0)$, it is easy to see that there are words $c_1, c_2, w \in \mathcal{A}$ such that $\tilde \Phi(g_0^n) = c_1^{-1} w^{n-1} c_2$ as a freely reduced word for all $n \geq 1$. Consider the sequence $\gamma_i$ of maps $\gamma_i \colon \mathcal{A} \to \mathcal{A}$ defined via $\gamma_0 = id$, $\gamma_{2k+1} = (\alpha \circ \beta )^k \circ \alpha$ and $\gamma_{2k} = (\beta \circ \alpha)^k$ and note that $\gamma_i$ is either $\alpha \circ \gamma_{i-1}$ or $\beta \circ \gamma_{i-1}$; see Definition \ref{defn:alpha and beta}. 
Analogously define the sequence $\bar{\gamma}_i \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ of maps via $\bar{\gamma}_0 = id$, $\bar{\gamma}_{2k+1} = (\bar{\alpha} \circ \bar{\beta} )^k \circ \bar{\alpha}$ and $\bar{\gamma}_{2k} = (\bar{\beta} \circ \bar{\alpha})^k$ and note that every $\bar{\gamma}_i$ is either $\bar{\alpha} \circ \bar{\gamma}_{i-1}$ or $\bar{\beta} \circ \bar{\gamma}_{i-1}$; see Definition \ref{defn:maps alpha bar and beta bar}. For every letter-thin triple $(x_1, x_2, x_3)$ also $\gamma_i(x_1, x_2, x_3)$ is letter-thin by multiple applications of Lemma \ref{lemma:alpha keeps thin.}. Furthermore, if $(x_1, x_2, x_3)$ is a degenerate triple as in Definition \ref{defn:well behaved letter quasimorphisms}, then also $\gamma_i(x_1,x_2,x_3)$ is a degenerate triple as $\gamma_i$ satisfies $\gamma_i(x^{-1}) = \gamma_i(x)^{-1}$ for all $x \in \mathcal{A}$. Let $w$ be as above and consider the sequence $\bar{\gamma}_i(w) \in \bar{\mathcal{A}}_0$ of conjugacy classes in $\bar{\mathcal{A}}_0$. By Proposition \ref{prop:alpha on conjugacy classes decreases}, if $\bar{\gamma}_i(w)$ is a non-trivial equivalence class in the commutator subgroup then $\bar{\gamma}_{i+1}(w)$ either is non-trivial and has strictly smaller word-length or $\bar{\gamma}_{i}(w) = \bar{\gamma}_{i+1}(w)$; see also Remark \ref{rmk:on conjugacy classes for acl}. Hence, there are the following cases: \begin{itemize} \item For all $i \in \mathbb{N}$, $\bar{\gamma}_i(w)$ lies in $\mathbb{F}_2'$, the commutator subgroup. Then, there is an $N$ such that $\bar{\gamma}_{N}(w) = \bar{\gamma}_{N+i}(w)$ for all $i \in \mathbb{N}$. Both $\bar{\alpha}$ and $\bar{\beta}$ then fix the class $\bar{\gamma}_N(w)$. By Proposition \ref{prop:alpha on conjugacy classes decreases}, $\bar{\gamma}_N(w)$ may be represented by $[\texttt{a}, \texttt{b}]^k$ for $k \in \mathbb{Z} \backslash \{ 0 \}$. 
Hence, the quasimorphism $\eta_0 = \eta_{\texttt{a} \texttt{b}} - \eta_{\texttt{b} \texttt{a}}$ studied in Example \ref{exmp: extemal brooks quasimorphisms on free group} and Proposition \ref{prop:letter thin triples and two quasimorphisms} satisfies that $|\bar{\eta}_0(\bar{\gamma}_N(w))| \geq 2$. Define $\psi \colon G \to \mathbb{Z}$ via \[ \psi(g) := \begin{cases} \eta_0 \circ \gamma_N \circ \tilde \Phi(g) & \text{ if } \gamma_N \circ \tilde \Phi(g) \not = e \\ 1 & \text{ else} \end{cases} \] and observe that if $\gamma_N \circ \tilde \Phi(g)$ is non-trivial, then $\psi(g^{-1}) = - \psi(g)$. By multiple applications of Proposition \ref{prop:powers of alpha, beta}, we see that there are some elements $d_1, d_2, w' \in \mathcal{A}$ such that $\gamma_N \circ \tilde \Phi(g_0^n) = d_1 w'^{n-K} d_2$ for all $n \geq K$, for some $K \leq N+1$ and $[w'] = \bar{\gamma}_N([w])$. We see that \begin{align*} |\bar{\psi}(g_0)| &= \lim_{n \to \infty} |\psi(g_0^n)|/n \\ &= \lim_{n \to \infty} |\eta_0 \circ \gamma_N \circ \tilde \Phi(g_0^n)| / n \\ &= \lim_{n \to \infty} |\eta_0 (d_1 w'^{n-K} d_2)|/n \\ &= |\bar{\eta_0} (\bar{\gamma}_N([w]))| \geq 2. \end{align*} By multiple applications of Lemma \ref{lemma:alpha keeps thin.} and the fact that $\alpha(w^{-1}) = \alpha(w)^{-1}$, $\beta(w^{-1}) = \beta(w)^{-1}$ and $\alpha(e)=e=\beta(e)$ we see that $\gamma_N \circ \tilde \Phi$ is a well-behaved letter-quasimorphism. Let $g,h \in G$. We wish to compute the defect $| \psi(g) + \psi(h) - \psi(gh) |$. To ease notation define $(x_1,x_2,x_3)$ as the triple \begin{align*} (x_1,x_2,x_3) = (\gamma_N \circ \tilde \Phi(g), \gamma_N \circ \tilde \Phi(h), \gamma_N \circ \tilde \Phi(gh)^{-1}) \end{align*} which is either letter-thin or degenerate as $\gamma_N \circ \tilde \Phi$ is a well-behaved letter-quasimorphism. If $(x_1,x_2,x_3)$ is letter-thin then none of its components $x_i$ are empty.
Hence \begin{align*} |\psi(g)+\psi(h)-\psi(gh)| &= |\psi(g)+\psi(h)+\psi(h^{-1} g^{-1})| \\ &= |\eta_0(x_1)+ \eta_0(x_2) + \eta_0(x_3)| \\ &= 1 \end{align*} by Proposition \ref{prop:letter thin triples and two quasimorphisms}. Suppose that $(x_1,x_2,x_3)$ is degenerate. Then one may see that $(x_1,x_2,x_3)$ equals $(v, v^{-1}, e)$, $(v,e, v^{-1})$ or $(e, v, v^{-1})$ for some $v \in \mathcal{A}$. Using that $-\eta_0(v) = \eta_0(v^{-1})$ for $e \not = v \in \mathcal{A}$ we see that two terms of $\psi(g) + \psi(h) - \psi(gh)$ cancel and the remaining term is $\pm 1$. Hence, $|\psi(g) + \psi(h) - \psi(gh)| =1$. Finally, if $(x_1,x_2,x_3) = (e,e,e)$ then $\psi(g) + \psi(h) - \psi(gh) = 1$. In particular we see that for any $g,h \in G$, $\psi(g) + \psi(h) - \psi(gh) \in \{ 1, -1 \}$, so $\psi$ is a quasimorphism. Moreover, by possibly changing the sign of $\psi$ we may assume that $\bar{\psi}(g_0) \geq 2$. \item Otherwise, let $N \in \mathbb{N}$ be the smallest integer such that $\bar{\gamma}_N(w) \not \in \mathbb{F}_2'$. Then $\bar{\gamma}_N(w) \in \mathcal{A}$ is represented by a non-trivial even word which is not in the commutator subgroup. Hence \[ |\eta_\texttt{a}(\bar{\gamma}_N(w))| + |\eta_\texttt{b}(\bar{\gamma}_N(w))| \geq 2 \] where $\eta_\texttt{a} \colon \mathbb{F}_2 \to \mathbb{Z}$ (resp. $\eta_\texttt{b} \colon \mathbb{F}_2 \to \mathbb{Z}$) denotes the homomorphism counting the letter $\texttt{a}$ (resp. $\texttt{b}$). Observe that homomorphisms are already homogenised. There is some $\eta = \eta_\texttt{x} + \eta_\texttt{y}$ where $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$, $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$ such that $\eta(\bar{\gamma}_N(w)) \geq 2$. As before, define $\psi \colon G \to \mathbb{Z}$ via \[ \psi(g) := \begin{cases} \eta\circ \gamma_N \circ \tilde \Phi(g) & \text{ if } \gamma_N \circ \tilde \Phi(g) \not = e \\ 1 & \text{ else}. \end{cases} \] By a similar argument as above we see that $\bar{\psi}(g_0) \geq 2$.
Again, the triple \begin{align*} (x_1,x_2,x_3) = (\gamma_N \circ \tilde \Phi(g), \gamma_N \circ \tilde \Phi(h), \gamma_N \circ \tilde \Phi(h^{-1} g^{-1})) \end{align*} is either letter-thin or degenerate. By the same argument as in the previous case and using Proposition \ref{prop: letter thin triples and homomorphisms} we conclude that for any $g,h \in G$, $|\psi(g) + \psi(h) - \psi(gh)|=1$, so $\psi$ is a quasimorphism. In particular we see that for any $g,h \in G$, $\psi(g) + \psi(h) - \psi(gh) \in \{ 1, -1 \}$. \end{itemize} In both cases, set \[ \phi(g) := \frac{\psi(g)+1}{2}. \] Then we see that, for any $g,h \in G$, \[ \delta^1 \phi(g,h) = \phi(g) + \phi(h) - \phi(gh) = \frac{\psi(g) + \psi(h) - \psi(gh)+1}{2} \in \{ 0, 1 \}. \] Hence, by Theorem \ref{thm:ghys} due to Ghys (see also \cite{ghys}), there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ on the circle such that $\rho^*\mathrm{eu}_b = [\delta^1 \phi] \in \mathrm{H}^2_b(G,\mathbb{Z})$ and hence $\rho^*\mathrm{eu}^\mathbb{R}_b = [\delta^1 \bar{\phi}] \in \mathrm{H}^2_b(G, \mathbb{R})$. Here, $\mathrm{eu}_b$ (resp. $\mathrm{eu}_b^\mathbb{R}$) denotes the (real) bounded Euler class. Moreover, we observe that $\bar{\phi}(g) = \bar{\psi}(g)/2$, for $\bar{\phi}$ the homogenisation of $\phi$. Furthermore, as $D(\psi) = 1$ we estimate by Proposition \ref{prop:defect of homogenisation doubles} that $D(\bar{\psi}) \leq 2$ and hence $D(\bar{\phi}) \leq 1$. We conclude that there is a quasimorphism $\phi \colon G \to \mathbb{R}$ with homogenisation $\bar{\phi}$ such that $D(\bar{\phi}) \leq 1$, $\bar{\phi}(g_0) \geq 1$. If $G$ is countable then there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ with $[\delta^1 \phi] = \rho^*\mathrm{eu}_b^\mathbb{R} \in \mathrm{H}^2_b(G,\mathbb{R})$ where $\mathrm{eu}_b^\mathbb{R}$ is the real bounded Euler class. 
\end{proof} Applying Theorem \ref{thm:main} to Example \ref{exmp:letter quasimorphisms on free group} we recover that in every residually free group $G$, every non-trivial element $g \in G'$ has stable commutator length at least $1/2$. This gap is realised by a quasimorphism induced by a circle action, which was not previously known. As said in the introduction we think of letter-quasimorphisms as simplifications of elements. Sometimes information about $w$ cannot be recovered from $\Phi(w)$. For example, for the word $w = \texttt{a} \texttt{b} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b}^{-3} \texttt{a}^{-1} \texttt{b}^3$, we may compute\footnote{These calculations are done with \texttt{scallop}, see \cite{scallop}.} $\mathrm{scl}(w) = 3/4$ but $\mathrm{scl}(\Phi(w)) = 1/2$. This example may be generalised: Pick an alternating word $w \in \mathcal{A}$ that starts and ends in a power of $\texttt{b}$. Then $[\texttt{a}, w] \in \mathcal{A}$ and $\mathrm{scl}([\texttt{a}, w]) = 1/2$. Then for any choice of words $v_1, v_2 \in \mathbb{F}_2$ such that $\Phi(v_1) = w$, $\Phi(v_2) = w^{-1}$ and such that $v = \texttt{a} v_1 \texttt{a}^{-1} v_2 \in \mathbb{F}_2'$ we have that $\Phi(v) = [\texttt{a}, w]$. However, $\mathrm{scl}(v)$ is experimentally arbitrarily large. \begin{rmk} \label{rmk:quasimorphisms are pullback of hs qm} As pointed out in the proof, all of the $\gamma_i \circ \tilde \Phi$ are well-behaved letter-quasimorphisms for any $i \in \mathbb{N}$. The quasimorphisms $\psi$ defined in the proof are then pullbacks of the quasimorphism $\eta_0 = \eta_{\texttt{a} \texttt{b}} - \eta_{\texttt{b} \texttt{a}}$ or of homomorphisms $\eta = \eta_\texttt{x} + \eta_\texttt{y}$ via these well-behaved letter-quasimorphisms $\gamma_i \circ \tilde{\Phi} \colon G \to \mathcal{A} \subset \mathbb{F}_2$. 
\end{rmk} \begin{rmk} \label{rmk:criterion for gaps} In light of Theorem \ref{thm:Bavards duality}, a criterion for groups to have the optimal $\mathrm{scl}$-gap of $1/2$ may hence be as follows: \begin{center} \emph{Let $G$ be a non-abelian group. If for every non-trivial element $g \in G'$ there is a letter-quasimorphism $\Phi \colon G \to \mathcal{A}$ such that $\Phi(g^n) = \Phi(g)^n$ with $\Phi(g)$ non-trivial, then $G$ has a gap of $1/2$ in stable commutator length.} \end{center} By Example \ref{exmp:letter quasimorphisms on free group} residually free groups have this property, and the criterion has some qualitative similarities to being residually free. We will later see that non-residually free groups, like right-angled Artin groups, also have this property; see Section \ref{sec:RAAGs and scl}. \end{rmk} \section{Left Orders and Left-Relatively Convex Subgroups} \label{sec:Left orders and convex subgroups} For what follows we will use the notation and conventions of \cite{convexsub}. We further emphasise that nothing in this section is original work. An order $\prec$ on a set $\mathcal{X}$ is a subset of $\mathcal{X} \times \mathcal{X}$, where we stress that a pair $(x,y) \in \mathcal{X} \times \mathcal{X}$ lies in this subset by writing $x \prec y$. Furthermore, the following holds: \begin{itemize} \item For all $x, y \in \mathcal{X}$ either $x \prec y$ or $y \prec x$. We have $x \prec y$ and $y \prec x$ if and only if $x = y$. \item For all $x, y, z \in \mathcal{X}$ such that $x \prec y$ and $y \prec z$ we have $x \prec z$. \end{itemize} A set $\mathcal{X}$ with a left group action has a \emph{$G$-invariant order} if for all $g \in G$, $x_1,x_2 \in \mathcal{X}$, $x_1 \prec x_2$ implies that $g.x_1 \prec g.x_2$. A group $G$ is said to be \emph{left orderable} if the set $G$ has a $G$-invariant order with respect to its left action on itself. 
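As a toy illustration of the two order axioms and of $G$-invariance, the lexicographic order on $\mathbb{Z}^2$ is invariant under the left action of $\mathbb{Z}^2$ on itself. The following sketch verifies this on a finite sample of points; the encoding of group elements as integer pairs and the sample range are ad hoc choices made only for illustration.

```python
import itertools

def prec(x, y):
    """Lexicographic order on Z^2; Python compares tuples lexicographically."""
    return x <= y

def act(g, x):
    """Left action of Z^2 on itself by addition."""
    return (g[0] + x[0], g[1] + x[1])

pts = list(itertools.product(range(-2, 3), repeat=2))

# totality, and the convention that x prec y and y prec x iff x = y
for x, y in itertools.product(pts, repeat=2):
    assert prec(x, y) or prec(y, x)
    assert (prec(x, y) and prec(y, x)) == (x == y)

# transitivity
for x, y, z in itertools.product(pts, repeat=3):
    if prec(x, y) and prec(y, z):
        assert prec(x, z)

# G-invariance: x prec y implies g.x prec g.y
for g, x, y in itertools.product(pts, repeat=3):
    if prec(x, y):
        assert prec(act(g, x), act(g, y))

print("lexicographic order on Z^2 passed all sampled checks")
```

The same sketch, with `prec` comparing only the first coordinates of coset representatives, illustrates a $\mathbb{Z}^2$-invariant order on the coset space $\mathbb{Z}^2/(\{0\} \times \mathbb{Z})$ discussed below.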
A subgroup $H < G$ is said to be \emph{left relatively convex} in $G$ if the $G$-set $G/H$ has some $G$-invariant order. Note that this definition is valid even if $G$ itself is \emph{not} left orderable. If $G$ itself is left orderable, then this is equivalent to the following: There is an order $\prec$ on $G$ such that for every $h_1, h_2 \in H$ and $g \in G$ with $h_1 \prec g \prec h_2$ we may conclude $g \in H$. In this case we simply say that $H$ is convex in $G$. As $e \in H$, this means that $H$ is a neighbourhood of $e$. It is not hard to see that being left relatively convex is transitive: \begin{prop} \label{prop:left-relatively convex is transitive} \footnote{See Section 2 of \cite{convexsub}.} Let $K < H < G$ be groups. Then $G/K$ is $G$-orderable such that $H/K$ is convex if and only if $G/H$ is $G$-orderable and $H/K$ is $H$-orderable. \end{prop} An easy example of a pair $H < G$ such that $H$ is left relatively convex in $G$ is $\mathbb{Z} < \mathbb{Z}^2$ embedded in the second coordinate via the standard lexicographic order. Similarly, every subgroup $G < \mathbb{Z} \times G$ embedded via the second coordinate is left relatively convex, for an arbitrary group $G$. Every generator of a non-abelian free group generates a left relatively convex subgroup in the total group; see \cite{DH}. In fact, it is shown in \cite{convexsub} that each maximal cyclic subgroup of a right-angled Artin group is left relatively convex. We wish to state the main theorem of \cite{convexsub}. For this let $\mathrm{T}$ denote an \emph{oriented} simplicial tree, with vertices $\mathrm{V}(\mathrm{T})$ and edges $\mathrm{E}(\mathrm{T})$ and two maps $\iota, \tau \colon \mathrm{E}(\mathrm{T}) \to \mathrm{V}(\mathrm{T})$ assigning to each oriented edge its initial and terminal vertex respectively. Suppose that $G$ acts on $\mathrm{T}$ and denote by $G_v$ (resp. $G_e$) the stabilisers of a vertex $v \in \mathrm{V}(\mathrm{T})$ (resp. an edge $e \in \mathrm{E}(\mathrm{T})$). 
Note that stabilisers of an edge $e$ naturally embed into $G_{\iota(e)}$ and $G_{\tau(e)}$. \begin{thm} \footnote{Theorem 14 of \cite{convexsub}.} Suppose that $\mathrm{T}$ is a left $G$-tree such that, for each $\mathrm{T}$-edge $e$, $G_e$ is left relatively convex in $G_{\iota(e)}$ and in $G_{\tau(e)}$. Then, for each $v \in \mathrm{V}(\mathrm{T})$, $G_v$ is left relatively convex in $G$. Moreover, if there exists some $v \in \mathrm{V}(\mathrm{T})$ such that $G_v$ is left orderable, then $G$ is left orderable. \end{thm} We deduce the following corollary; see Example 19 of \cite{convexsub}, using Bass--Serre theory. \begin{corr} \label{corr:orders of amalgamations} Let $A, B$ and $C$ be groups, let $\kappa_A \colon C \hookrightarrow A$ and $\kappa_B \colon C \hookrightarrow B$ be injections and let $G = A \star_C B$ be the corresponding amalgamated free product (see Section \ref{sec:amalgamation}). If $\kappa_A(C)$ is left relatively convex in $A$ and $\kappa_B(C)$ is left relatively convex in $B$, then $A$ and $B$ are left relatively convex in $G$. \end{corr} Let $H<G$ be a left relatively convex subgroup and let $\prec$ be a $G$-invariant order of $G/H$. We define the \emph{sign-function} $\textrm{sign} \colon G \to \{ -1, 0, 1 \}$ on representatives $g \in G$ of cosets in $G/H$ via \[ \textrm{sign}(g) = \begin{cases} +1 & \text{ if } gH \succ H \\ 0 & \text{ if } g \in H \\ -1 & \text{ if } gH \prec H. \end{cases} \] \begin{prop} \label{prop:orders well defined} Let $H < G$ be a left relatively convex subgroup and let $\prec$ be the $G$-invariant order of $G/H$. Then the sign-function with respect to $\prec$ on elements in $G$ is invariant under left and right multiplication by elements of $H$. That is, for every $g \in G \smallsetminus H$ and for every $h \in H$, $\textrm{sign}(h g) = \textrm{sign}(g) = \textrm{sign}(g h)$. \end{prop} \begin{proof} Clearly $\textrm{sign}(gh) = \textrm{sign}(g)$ as both $g$ and $gh$ define the same coset. 
On the other hand, if $h g H \succ H$ then, multiplying on the left by $h^{-1} \in H$, we get $g H \succ H$; similarly, if $h g H \prec H$ then $g H \prec H$, so $\textrm{sign}(hg) = \textrm{sign}(g)$. \end{proof} \section{Amalgamated Free Products} \label{sec:amalgamation} Let $A, B, C$ be groups and let $\kappa_A \colon C \hookrightarrow A$, $\kappa_B \colon C \hookrightarrow B$ be injections. The \emph{amalgamated free product} $G = A \star_C B$ with respect to $\kappa_A$ and $\kappa_B$ is the group defined via \[ G = A \star_C B = A \star B / \langle \langle \kappa_A(c)^{-1} \kappa_B(c) \mid c \in C \rangle \rangle. \] It is a well-known fact that the homomorphism $A \to A \star_C B$ (resp. $B \to A \star_C B$) defined by mapping $a \in A$ (resp. $b \in B$) to the corresponding element $a \in G$ (resp. $b \in G$) is \emph{injective} and that $C$ embeds in $G$ via these injections. See \cite{serre} for a reference. Every element $g \in G \smallsetminus C$ may be written as a product \[ g = d_1 \cdots d_k \] such that all of the $d_i$ are either in $A \smallsetminus \kappa_A(C)$ or in $B \smallsetminus \kappa_B(C)$ and alternate between both. Furthermore, for any other such expression \[ g = d'_1 \cdots d'_{k'} \] one may deduce that $k'=k$ and that there are elements $c_i \in C$, $i \in \{ 1, \ldots, k-1 \}$, such that $d'_1 = d_1 c_1$, $d'_i = c_{i-1}^{-1} d_i c_i$ and $d'_k = c_{k-1}^{-1} d_k$. For what follows, let $\prec_A$ (resp. $\prec_B)$ be a left order on $A/\kappa_A(C)$ (resp. $B/\kappa_B(C)$) and let $\textrm{sign}_A$ (resp. $\textrm{sign}_B$) be its sign on $A$ (resp. $B$). We define the map $\Phi \colon G \to \mathcal{A}$ as follows: If $g \in C$ set $\Phi(g) = e$. Else let $g = d_1 \cdots d_k$ be the normal form described above. 
Then, set \[ \Phi(g) = \prod_{i=1}^k \Phi(d_i) \] where we define \[ \Phi(d_i) = \begin{cases} \texttt{a}^{ \textrm{sign}_A(d_i)} & \text{ if } d_i \in A \smallsetminus \kappa_A(C) \\ \texttt{b}^{ \textrm{sign}_B(d_i)} & \text{ if } d_i \in B \smallsetminus \kappa_B(C) \end{cases} \] and we note that $\Phi$ is well defined. To see this, let $d'_1 \cdots d'_k$ be another normal form for $g$ and let $c_i \in C$ for $i \in \{0, \ldots, k \}$ be such that $d'_i = c_{i-1}^{-1} d_i c_i$ with $c_0=c_{k}=e$. Then \[ \textrm{sign}(d_i) = \textrm{sign}(c_{i-1}^{-1} d_i) = \textrm{sign}(c_{i-1}^{-1} d_i c_i) = \textrm{sign}(d'_i) \] by Proposition \ref{prop:orders well defined}, where ``$\textrm{sign}$'' denotes either ``$\textrm{sign}_A$'' or ``$\textrm{sign}_B$''. We claim that: \begin{lemma} \label{lemma:amalgamated yields letter-quasimorphism} Let $G = A \star_C B$ and $\Phi \colon G \to \mathcal{A}$ be as above. Then $\Phi$ is a letter-quasimorphism. \end{lemma} We will prove this by giving another description of $\Phi$ in terms of paths in the Bass--Serre tree associated to the amalgamated free product $G = A \star_C B$: Let $\mathrm{T}$ be the tree with vertex set $\mathrm{V} (\mathrm{T}) = \{ g A \mid g \in G \} \sqcup \{ g B \mid g \in G \}$ and oriented edges \[ \mathrm{E}(\mathrm{T}) = \{ (g A, g B) \mid g \in G \} \sqcup \{ (g B, g A) \mid g \in G \} \subset \mathrm{V}(\mathrm{T}) \times \mathrm{V}(\mathrm{T}). \] We define $\iota, \tau \colon \mathrm{E}(\mathrm{T}) \to \mathrm{V}(\mathrm{T})$ via $\iota((g A, g B)) = g A$, $\tau((g A, g B))= g B$ and similarly, $\iota(g B, g A) = g B$, $\tau(g B, g A)= g A$. Moreover, we set $(g A, g B)^{-1} = (g B, g A)$ and $(g B, g A)^{-1} = (g A, g B)$. It is well-known that $\mathrm{T}$ is indeed a connected tree. $G$ acts on $\mathrm{T}$ by left multiplication. 
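In the simplest instance $A = \langle \texttt{a} \rangle \cong \mathbb{Z}$, $B = \langle \texttt{b} \rangle \cong \mathbb{Z}$ and $C$ trivial, so that $G \cong \mathbb{F}_2$, the normal form is just the syllable decomposition and $\textrm{sign}_A$, $\textrm{sign}_B$ are the ordinary signs of the exponents. Here is a minimal sketch of $\Phi$ in this case; the encoding of words as lists of syllables is an illustrative choice.

```python
def phi(word):
    """Phi for G = F_2 = Z * Z: a word is given in normal form as a list of
    syllables (letter, exponent) with alternating letters and non-zero
    exponents; each syllable x**k is sent to x**sign(k)."""
    for (l1, _), (l2, _) in zip(word, word[1:]):
        assert l1 != l2, "syllables must alternate"
    out = []
    for letter, exp in word:
        assert letter in ("a", "b") and exp != 0
        out.append((letter, 1 if exp > 0 else -1))
    return out

# Phi(a^2 b^-3 a) = a b^-1 a, an alternating word in the alphabet A
print(phi([("a", 2), ("b", -3), ("a", 1)]))
```

By construction the output always has exponents $\pm 1$ and the same alternating letter pattern as the input.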
We have that $\mathrm{Stab}_G(g A) = g A g^{-1} < G$, respectively $\mathrm{Stab}_G(h B) = h B h^{-1} < G$, $\mathrm{Stab}_G(g A, g B) = g C g^{-1}$ and $\mathrm{Stab}_G(g B, g A) = g C g^{-1}$. A \emph{reduced path of edges} is a sequence $\wp = (e_1, \ldots, e_n)$, $e_i \in \mathrm{E}(\mathrm{T})$, such that $\tau(e_i)=\iota(e_{i+1})$ for every $i \in \{ 1, \ldots, n-1 \}$, without backtracking. We call $n$ the \emph{length of the path}. For what follows, $\mathcal{P}$ will be the set of all paths of edges. We define the following map $\Xi \colon \mathcal{P} \to \mathcal{A}$ assigning an alternating word to each path of edges. Let $\wp \in \mathcal{P}$. If $\wp$ has length $1$, then set $\Xi(\wp) :=e$. Else, suppose that $\wp$ has length $2$, i.e. $\wp = (e_1,e_2)$. Suppose that $e_1 = (g_1 A, g_1 B)$ and $e_2 = (g_2 B, g_2 A)$ and note that $g_1 B = g_2 B$. In particular, $g_1^{-1} g_2 \in B$. Set $\Xi(\wp)=\Xi((e_1,e_2)) = \texttt{b}^{\textrm{sign}_B(g_1^{-1} g_2)}$. Similarly, if $e_1 = (g_1 B, g_1 A)$ and $e_2 = (g_2 A, g_2 B)$ note that $g_1 A = g_2 A$ and set $\Xi(\wp) = \Xi((e_1,e_2)) = \texttt{a}^{\textrm{sign}_A(g_1^{-1} g_2)}$. Finally, for an arbitrary path $\wp = (e_1, \ldots, e_n)$ set $\Xi(\wp) = \Xi(e_1,e_2) \cdot \Xi(e_2, e_3) \cdots \Xi(e_{n-2}, e_{n-1})\cdot \Xi(e_{n-1}, e_n)$. Note that $\Xi$ is well defined. To see this, note that the stabiliser of any edge $(g A, g B)$ (resp. $(g B, g A)$) is $g C g^{-1}$. Hence, if $(g A, g B) = (g' A, g' B)$ (resp. $(g B, g A) = (g' B, g' A)$) there is a $c \in C$ such that $g c = g'$. If $(e_1,e_2)$ is a path of edges such that without loss of generality $e_1 = (g_1 A, g_1 B)= (g'_1 A, g'_1 B)$ and $e_2 = (g_2 B, g_2 A)=(g'_2 B, g'_2 A)$ then there are $c_1,c_2 \in C$ such that $g_1 = g'_1 c_1$ and $g_2 = g'_2 c_2$. Hence \[ \textrm{sign}_B(g_1^{-1} g_2) = \textrm{sign}_B(c_1^{-1} {g'_1}^{-1} g'_2 c_2) = \textrm{sign}_B({g'_1}^{-1} g'_2) \] by Proposition \ref{prop:orders well defined}. 
Define the \emph{inverse of a path} $\wp = (e_1, \ldots, e_n)$ as $\wp^{-1} := (e_n^{-1}, \ldots, e_1^{-1} )$. We see that $\Xi(\wp^{-1}) = \Xi(\wp)^{-1}$ using that $\textrm{sign}(g^{-1}) = - \textrm{sign}(g)$. We collect some further properties of $\Xi$. We note that if $\wp \in \mathcal{P}$ is a path then so is $^g \wp$, where $^g \wp$ denotes the image of $\wp$ under the action of $g \in G$. \begin{prop} \label{prop:properties of xi} $\Xi \colon \mathcal{P} \to \mathcal{A}$ has the following properties: \begin{itemize} \item[(i)] For any $\wp \in \mathcal{P}$ and $g \in G$ we have $\Xi(^g \wp) = \Xi(\wp)$. \item[(ii)] Let $\wp_1, \wp_2$ be two paths of edges such that the last edge in $\wp_1$ is $e_1$, the first edge of $\wp_2$ is $e_2$ such that $\tau(e_1)=\iota(e_2)$ and such that $e_1 \not = e_2^{-1}$. Then $\Xi(\wp_1 \cdot \wp_2) = \Xi(\wp_1) \Xi(e_1, e_2) \Xi(\wp_2)$ as reduced words, where $\wp_1 \cdot \wp_2$ denotes the concatenation of paths. \item[(iii)] Let $g \in G$ and let $\wp(g)$ be the unique path of edges from one of the edges $\{ (A,B), (B, A) \}$ to one of the edges $\{ (g A, g B), (g B, g A) \}$. Then $\Xi(\wp(g)) = \Phi(g)$, for $\Phi$ as above. \end{itemize} \end{prop} \begin{proof} To see $(i)$, note that for any path $(e_1, e_2)$ with $e_1 = (g_1 A, g_1 B)$ and $e_2 = (g_2 B, g_2 A)$ we have \[ \Xi(e_1, e_2) = \texttt{b}^{\textrm{sign}(g_1^{-1} g_2)} = \texttt{b}^{\textrm{sign}(g_1^{-1} g^{-1} g g_2)} = \Xi(^g(e_1,e_2)) \] and the same argument holds for paths with $e_1=(g_1 B, g_1 A)$ and $e_2 = (g_2 A, g_2 B)$. Point $(ii)$ is immediate from the definition. To see $(iii)$, without loss of generality assume that the normal form of $g$ is $g = a_1 b_1 \cdots a_k b_k$. Then \[ \wp(g) = (B, A),(a_1 A, a_1 B),(a_1 b_1 B, a_1 b_1 A), \ldots, (g B, g A) \] and comparing $\Xi(\wp(g))$ with $\Phi(g)$ yields $(iii)$. \end{proof} We can now prove Lemma \ref{lemma:amalgamated yields letter-quasimorphism}: \begin{proof} Let $g,h \in G$. 
First, suppose that the midpoints of \begin{align} \label{equ:midpoint} \{ (A,B), (B,A) \} \text{, } \{ (gA,gB), (gB,gA) \} \text{ and } \{ (ghA,ghB), (ghB,ghA) \} \end{align} lie on a common geodesic segment in $\mathrm{T}$. If the midpoint of $\{ (gA,gB), (gB,gA) \}$ lies in the middle of this segment then there are paths $\wp_1$ and $\wp_2$ such that $\wp(g) = \wp_1 \cdot e$, $^g \wp(h) = e \cdot \wp_2$ and $\wp(gh) = \wp_1 \cdot e \cdot \wp_2$ for $e$ either $(gA, gB)$ or $(gB, gA)$. We see that in this case $\Xi(\wp_1 \cdot e) \cdot \Xi(e \cdot \wp_2) = \Xi(\wp_1 \cdot e \cdot \wp_2)$ as reduced words in $\mathcal{A}$ and hence $\Phi(g) \Phi(h) = \Phi(gh)$. Analogously we see that $\Phi(g) \Phi(h) = \Phi(gh)$ when the midpoint of $\{ (A,B), (B,A) \}$ or $\{ (ghA,ghB), (ghB,ghA) \}$ lies in the middle of this segment. Hence in this case $\Phi$ and $g,h \in G$ are as in $(1)$ of Definition \ref{defn:letter quasihomomorphism}. Now suppose that the midpoints in (\ref{equ:midpoint}) do not lie on a common geodesic segment. Then there are non-trivial paths $\wp_1, \wp_2, \wp_3 \in \mathcal{P}$ with initial edges $e_1, e_2, e_3$ satisfying $\iota(e_1)=\iota(e_2)=\iota(e_3)$ and $e_i \not = e_j$ for $i \not = j$ such that \[ \wp(g) = \wp_1^{-1} \cdot \wp_2 \text{ , } ^g \wp(h) = \wp_2^{-1} \cdot \wp_3 \text{ , and } ^{gh} \wp((gh)^{-1}) = \wp_3^{-1} \cdot \wp_1. \] By Proposition \ref{prop:properties of xi} we infer that \begin{align*} \Phi(g) &= c_1^{-1} \Xi(e_1^{-1}, e_2) c_2 \\ \Phi(h) &= c_2^{-1} \Xi(e_2^{-1}, e_3) c_3 \\ \Phi(gh)^{-1} &= c_3^{-1} \Xi(e_3^{-1}, e_1) c_1 \end{align*} for $c_i = \Xi(\wp_i)$, $i \in \{1,2,3 \}$. Without loss of generality assume that $e_i = (g_i A, g_i B)$; the case $e_i = (g_i B, g_i A)$ is analogous. 
Then \begin{align*} \Phi(g) &= c_1^{-1} \texttt{x}_1 c_2 \\ \Phi(h) &= c_2^{-1} \texttt{x}_2 c_3 \\ \Phi(gh)^{-1} &= c_3^{-1} \texttt{x}_3 c_1 \end{align*} where \[ \texttt{x}_1= \texttt{b}^{\textrm{sign}_B(g_1^{-1} g_2)} \text{, } \texttt{x}_2 = \texttt{b}^{\textrm{sign}_B(g_2^{-1} g_3)} \text{, and } \texttt{x}_3 = \texttt{b}^{\textrm{sign}_B(g_3^{-1} g_1)}. \] We claim that $\textrm{sign}_B(g_1^{-1} g_2) + \textrm{sign}_B(g_2^{-1} g_3) + \textrm{sign}_B(g_3^{-1} g_1) \in \{ -1, +1 \}$. To see this, note that each of the signs is either $+1$ or $-1$, as the edges $e_i$ were assumed to be distinct. Suppose that $\textrm{sign}_B(g_1^{-1} g_2)= \textrm{sign}_B(g_2^{-1} g_3)= \textrm{sign}_B(g_3^{-1} g_1)=1$. Then $g_1^{-1} g_2 C \succ C$, hence $g_3^{-1} g_2 C = (g_3^{-1} g_1) g_1^{-1} g_2 C \succ g_3^{-1} g_1 C \succ C$, so $\textrm{sign}_B(g_3^{-1} g_2) = 1$ and hence $\textrm{sign}_B(g_2^{-1} g_3)=-1$, a contradiction. Similarly, not all signs can be negative. Hence indeed $\textrm{sign}_B(g_1^{-1} g_2) + \textrm{sign}_B(g_2^{-1} g_3) + \textrm{sign}_B(g_3^{-1} g_1) \in \{ -1, +1 \}$ and so $\texttt{x}_1 \texttt{x}_2 \texttt{x}_3 \in \{ \texttt{b}, \texttt{b}^{-1} \}$. This shows that $\Phi$ is as in $(2)$ of Definition \ref{defn:letter quasihomomorphism}, hence $\Phi$ is a letter-quasimorphism. \end{proof} \begin{thm} \label{thm:amalgamation} Let $A, B, C$ be groups and $\kappa_A \colon C \hookrightarrow A$, $\kappa_B \colon C \hookrightarrow B$ be injections such that both $\kappa_A(C)$ and $\kappa_B(C)$ are left relatively convex subgroups of $A$ and $B$ respectively. Let $G = A \star_C B$ be the amalgamated free product for this data. Then for every element $g_0 \in G$ which does not conjugate into $A$ or $B$, there is a homogeneous quasimorphism $\bar{\phi} \colon G \to \mathbb{R}$ such that $\bar{\phi}(g_0) \geq 1$, $D(\bar{\phi}) \leq 1$ and $\bar{\phi}$ vanishes on $A$ and $B$. If $g_0 \in G'$, then $\mathrm{scl}(g_0) \geq 1/2$. 
If $G$ is countable then there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ such that $[\delta^1 \bar{\phi}]=\rho^*\mathrm{eu}^\mathbb{R}_b \in \mathrm{H}^2_b(G,\mathbb{R})$, for $\mathrm{eu}^\mathbb{R}_b$ the real bounded Euler class. \end{thm} \begin{rmk} \label{rmk:chen-heuer} The methods developed in this paper may be modified to obtain similar gap results for HNN-extensions and graphs of groups, as well as gap results for certain one-relator groups. A generalisation of this and direct proofs of these results using both quasimorphisms and surface mappings will appear in the forthcoming preprint \cite{chen_heuer}. \end{rmk} The existence of a uniform gap was known before; see \cite{calegari_fujiwara} and Subsection \ref{subsec:spectral gaps of scl}. \begin{proof} Let $g_0 \in G$ be as in the Theorem. Then, as $g_0$ does not conjugate into $A$ or $B$, we may conjugate $g_0$ by an element $g_1 \in G$ such that \[ g' = g_1 g_0 g_1^{-1} = a_1 b_1 \cdots a_k b_k \] for \emph{all of} $a_i \in A \smallsetminus \kappa_A(C)$ and $b_i \in B \smallsetminus \kappa_B(C)$. It follows that $\Phi(g')=w$ is a non-empty alternating word of even length and that $\Phi({g'}^n) = w^n$ for $n \in \mathbb{N}$. By Theorem \ref{thm:main} there is a homogeneous quasimorphism $\bar{\phi} \colon G \to \mathbb{R}$ with $D(\bar{\phi}) \leq 1$ and $1 \leq \bar{\phi}(g_0) = \bar{\phi}(g')$, using that homogeneous quasimorphisms are invariant under conjugation. If $G$ is countable then this quasimorphism $\bar{\phi}$ is moreover induced by a circle action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$. \end{proof} \section{Right-Angled Artin Groups} \label{sec:RAAGs and scl} In this section all graphs will be simplicial, i.e. they do not contain multiple edges between two vertices or loops. Let $\Gamma$ be a finite simplicial graph with vertices $\mathrm{V}(\Gamma)$ and edges $\mathrm{E}(\Gamma)$. 
Given a subset $\Lambda \subset \mathrm{V}(\Gamma)$, the \emph{full subgraph on $\Lambda$ in $\Gamma$} is the graph with vertices $\Lambda$ where two elements $v,w \in \Lambda$ are connected by an edge if and only if they are connected in $\Gamma$. For a vertex $v \in \Gamma$, the \emph{link of $v$} is the full subgraph on the set $\{w \mid (v,w) \in \mathrm{E}(\Gamma) \}$ in $\Gamma$, denoted by $\textrm{Lk}(v)$. The \emph{closed star} is the full subgraph on $\textrm{Lk}(v) \cup \{ v \}$ in $\Gamma$, denoted by $\textrm{St}(v)$. The \emph{right-angled Artin group} or \emph{RAAG} on $\Gamma$ is the group $\mathrm{A}(\Gamma)$ with group presentation \[ \mathrm{A}(\Gamma) = \langle \mathrm{V}(\Gamma) \mid [v, w]; (v,w) \in \mathrm{E}(\Gamma) \rangle. \] A word $w$ in the generators $\mathrm{V}(\Gamma)$ representing an element $[w] \in \mathrm{A}(\Gamma)$ is called \emph{reduced} if it has minimal word length among all words representing $[w]$. A word $w$ is said to be \emph{cyclically reduced} if it has minimal word length among all of its conjugates. The \emph{support} of an element $g \in \mathrm{A}(\Gamma)$ is the set of vertices that appear in a reduced word representing $g$. It is well-known that the support is well-defined. Let $\Gamma$ be a finite simplicial graph, let $\mathrm{A}(\Gamma)$ be the right-angled Artin group of $\Gamma$ and let $v \in \mathrm{V}(\Gamma)$. Then $\mathrm{A}(\Gamma)$ can be thought of as an amalgamated free product of $\mathrm{A}(\textrm{St}(v))$ and $\mathrm{A}(\Gamma \backslash \{ v \} )$ where the common subgroup is $\mathrm{A}(\textrm{Lk}(v))$, i.e. \[ \mathrm{A}(\Gamma) = \mathrm{A}(\textrm{St}(v)) \star_{\mathrm{A}(\textrm{Lk}(v))} \mathrm{A}(\Gamma \backslash \{v \}). \] This will be used both in the proof of Theorem \ref{thm:raags and scl} and for induction arguments. \begin{prop} \label{prop: convex subgroups of raartin groups} (Section 4 of \cite{convexsub}) Let $\Lambda \subset \Gamma$ be a full subgraph of $\Gamma$. 
Then $\mathrm{A}(\Lambda) < \mathrm{A}(\Gamma)$, induced by the embedding, is a left relatively convex subgroup. \end{prop} \begin{proof} We follow the proof of \cite{convexsub}. We may induct on the following statement: For any $\Gamma$ of size at most $k$ and every full subgraph $\Lambda \subset \Gamma$, $\mathrm{A}(\Lambda)$ is left relatively convex in $\mathrm{A}(\Gamma)$. For $k=2$ this is just the case of free abelian and non-abelian free groups mentioned before. Assume the statement is true for all $n \leq k$. Let $\Gamma$ be a graph with $k+1$ vertices and let $\Lambda \subset \Gamma$ be a full subgraph. If $\Lambda = \Gamma$ there is nothing to show. Else pick $v \in \mathrm{V}(\Gamma) \backslash \mathrm{V}(\Lambda)$ and set $\Gamma'$ to be the full subgraph in $\Gamma$ on the vertices $\mathrm{V}(\Gamma) \backslash \{ v \}$. Hence $\Lambda \subset \Gamma' \subset \Gamma$ with $\Gamma'$ of size $k$. We wish to show that $\mathrm{A}(\Gamma') < \mathrm{A}(\Gamma)$ is a left relatively convex subgroup. Consider the amalgamation \[ \mathrm{A}(\Gamma) = \mathrm{A}(\textrm{St}(v)) \star_{\mathrm{A}(\textrm{Lk}(v))} \mathrm{A}(\Gamma'). \] By induction, $\mathrm{A}(\textrm{Lk}(v)) < \mathrm{A}(\Gamma')$ is a left relatively convex subgroup. Also $\mathrm{A}(\textrm{Lk}(v)) < \mathrm{A}(\textrm{St}(v))$ is a left relatively convex subgroup as $\mathrm{A}(\textrm{St}(v)) = \langle v \rangle \times \mathrm{A}(\textrm{Lk}(v))$. We may use Corollary \ref{corr:orders of amalgamations} to see that $\mathrm{A}(\Gamma') < \mathrm{A}(\Gamma)$ is a left relatively convex subgroup. By the induction hypothesis, $\mathrm{A}(\Lambda) < \mathrm{A}(\Gamma')$ is a left relatively convex subgroup, and by transitivity $\mathrm{A}(\Lambda) < \mathrm{A}(\Gamma)$ is a left relatively convex subgroup. 
\end{proof} We deduce: \begin{thm} \label{thm:quasimorphisms on raags} Let $g_0 \in \mathrm{A}(\Gamma)$ be an element of a right-angled Artin group $\mathrm{A}(\Gamma)$ such that $g_0$ does not conjugate into the subgroup generated by a clique of $\Gamma$. Then there is a homogeneous quasimorphism $\bar{\phi}$ which vanishes on the generators $\mathrm{V}(\Gamma)$ such that $\bar{\phi}(g_0) \geq 1$ and $D(\bar{\phi}) \leq 1$. Moreover, there is an action $\rho \colon \mathrm{A}(\Gamma) \to \mathrm{Homeo}^+(S^1)$ such that $[\delta^1 \bar{\phi}]=\rho^*\mathrm{eu}^\mathbb{R}_b \in \mathrm{H}^2_b(\mathrm{A}(\Gamma),\mathbb{R})$, for $\mathrm{eu}^\mathbb{R}_b$ the real bounded Euler class. \end{thm} Observe that no non-trivial element in the commutator subgroup of a right-angled Artin group conjugates into the subgroup generated by a clique. An application of Bavard's Duality Theorem \ref{thm:Bavards duality} yields: \begin{thm} \label{thm:raags and scl} Let $g_0$ be a non-trivial element in the commutator subgroup of a right-angled Artin group. Then $\mathrm{scl}(g_0) \geq 1/2$. This bound is sharp. \end{thm} \begin{proof} (of Theorem \ref{thm:quasimorphisms on raags}) Let $g_0 \in \mathrm{A}(\Gamma)$ be such an element. We may suppose that $g_0$ is cyclically reduced, as homogeneous quasimorphisms are invariant under conjugation. Choose a vertex $v$ in the support of $g_0$ such that there is another vertex $w$ in the support of $g_0$ which is non-adjacent to $v$. Such a vertex exists as $g_0$ does not conjugate into a clique. Write $\mathrm{A}(\Gamma)$ as \[ \mathrm{A}(\Gamma) = \mathrm{A}(\textrm{St}(v)) \star_{\mathrm{A}(\textrm{Lk}(v))} \mathrm{A}(\Gamma \backslash \{v \}) \] and observe that $g_0$ does not conjugate into any factor of this amalgamation as both $v$ and $w$ are in the support of $g_0$. By Proposition \ref{prop: convex subgroups of raartin groups}, both $\mathrm{A}(\textrm{Lk}(v)) < \mathrm{A}(\textrm{St}(v))$ and $\mathrm{A}(\textrm{Lk}(v)) < \mathrm{A}(\Gamma \backslash \{v \})$ are left relatively convex subgroups. 
We conclude using Theorem \ref{thm:amalgamation}. Commutators in $\mathrm{A}(\Gamma)$ have $\mathrm{scl}$ at most $1/2$. Hence this bound is sharp. \end{proof} \bibliographystyle{alpha}
Zacharias Joachim Cleve, born 3 December 1820 in Rantasalmi, died 20 January 1900 in Veckelax, was a Finnish educator and philosopher. Cleve became a docent in philosophy at the University of Helsinki in 1848, and worked as a lecturer in philosophy, later in natural science, at the gymnasium in Kuopio from 1851 to 1860. He was professor of pedagogy and didactics at the University of Helsinki from 1862 to 1882. Cleve was a leading figure in the development of the school system in Finland during this period, and represented the Estate of the Clergy at the Diets of 1872–1882. Among other works, he published two philosophical academic dissertations in Latin (1848–1850), Försök till lärobok i psykologi (3rd edition 1854–1871, also published in Finnish in 1869), Skolan, pedagogiskt utkast (1861), and Grunddrag till skolpedagogik (1884). Sources: Svensk uppslagsbok. Lund 1931.
\section{Introduction} In physics, formal simplicity is often a reliable guide to the significance of a result. The concept of weak measurement, due to Aharonov and his coworkers \cite{AharonovRohrlich05,AAV88}, derives some of its appeal from the formal simplicity of its basic formulae. One can extend the basic concept to a sequence of weak measurements carried out at a succession of points during the evolution of a system \cite{MJP07}, but then the formula relating pointer positions to weak values turns out to be not quite so simple, particularly if one allows arbitrary initial conditions for the measuring system. I show here that the complications largely disappear if one takes the cumulants of expected values of pointer positions; these are related in a formally satisfying way to weak values, and this form is preserved under all measurement conditions. The goal of weak measurement is to obtain information about a quantum system given both an initial state $\ket{\psi_i}$ and a final, post-selected state $\ket{\psi_f}$. Since weak measurement causes only a small disturbance to the system, the measurement result can reflect both the initial and final states. It can therefore give richer information than a conventional (strong) measurement, including in particular the results of all possible strong measurements \cite{OB05,Consortium99}. To carry out the measurement, a measuring device is coupled to the system in such a way that the system is only slightly perturbed; this can be achieved by having a small coupling constant $g$. After the interaction, the pointer's position $q$ is measured (or possibly some other pointer observable; e.g. its momentum $p$). Suppose that, following the standard von Neumann paradigm, \cite{vonNeumann55}, the interaction between measuring device and system is taken to be $H_{int}=g\delta(t)pA$, where $p$ is the momentum of a pointer and the delta function indicates an impulsive interaction at time $t$. 
It can be shown \cite{AAV88} that the expectation of the pointer position, ignoring terms of order $g^2$ or higher, is \begin{align}\label{qclassic} \langle q \rangle=g Re A_w, \end{align} where $A_w$ is the {\em weak value} of the observable $A$ given by \begin{align} A_w=\frac{\bra{\psi_f}A\ket{\psi_i}}{\braket{\psi_f}{\psi_i}}. \end{align} As can be seen, (\ref{qclassic}) has an appealing simplicity, relating the pointer shift directly to the weak value. However, this formula only holds under the rather special assumption that the initial pointer wavefunction $\phi$ is a gaussian, or, more generally, is real and has zero mean. When $\phi$ is a completely general wavefunction, i.e. is allowed to take complex values and have any mean value \cite{J07,MJP07}, equation (\ref{qclassic}) is replaced by \begin{align}\label{complex-version} \langle q \rangle=\langle q \rangle_i+gRe A_w+gIm A_w\left(\langle pq+qp \rangle_i-2 \langle q \rangle_i \langle p \rangle_i \right), \end{align} where, for any pointer variable $x$, $\langle x \rangle_i$ denotes the initial expected value $\bra{\phi}x\ket{\phi}$ of $x$; so for instance $\langle q \rangle_i$ and $\langle p \rangle_i$ are the means of the initial pointer position and momentum, respectively. (Again, this formula ignores terms of order $g^2$ or higher.) Equation (\ref{complex-version}) seems to have lost the simplicity of (\ref{qclassic}), but we can rewrite it as \begin{align}\label{firstxi} \langle q \rangle =\langle q \rangle_i +gRe(\xi A_w), \end{align} where \begin{align} \label{xi1} \xi=-2i\left(\langle qp \rangle_i-\langle q \rangle_i \langle p \rangle_i \right), \end{align} and equation (\ref{firstxi}) is then closer to the form of (\ref{qclassic}). As will become clear, this is part of a general pattern. One can also weakly measure several observables, $A_1, \ldots, A_n$, in succession \cite{MJP07}. 
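The single-measurement formula above is easy to check numerically before turning to sequences. The sketch below simulates a qubit coupled to a one-dimensional pointer whose initial profile is a real, zero-mean Gaussian, so that $\xi = 1$ and (\ref{firstxi}) reduces to (\ref{qclassic}); the pre- and post-selected states and the observable are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative pre/post-selection on a qubit; A = sigma_z (eigenvalues +1, -1).
psi_i = np.array([1.0, 1.0]) / np.sqrt(2.0)
psi_f = np.array([np.cos(0.6), -np.sin(0.6)])
A = np.array([[1.0, 0.0], [0.0, -1.0]])

Aw = (psi_f.conj() @ A @ psi_i) / (psi_f.conj() @ psi_i)  # weak value

g = 1e-3
q = np.linspace(-10.0, 10.0, 4001)
phi = np.exp(-q**2 / 2.0)  # real, zero-mean Gaussian pointer profile

# exp(-i g p A) shifts the pointer by g*a in the eigenbranch A|v> = a|v>;
# post-selection leaves the (unnormalised) pointer state built below.
vals, vecs = np.linalg.eigh(A)
pointer = np.zeros_like(q, dtype=complex)
for a, v in zip(vals, vecs.T):
    amp = (psi_f.conj() @ v) * (v.conj() @ psi_i)
    pointer += amp * np.exp(-(q - g * a)**2 / 2.0)

prob = np.abs(pointer)**2
q_mean = np.sum(q * prob) / np.sum(prob)

print(q_mean, g * Aw.real)  # the two numbers agree up to O(g^2) corrections
```

With this choice of states the weak value lies well outside the eigenvalue range $[-1,1]$ of the observable, yet the post-selected pointer mean still tracks $g\,\mathrm{Re}\,A_w$.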
Here one couples pointers at several locations and times during the evolution of the system, taking the coupling constant $g_k$ at site $k$ to be small. One then measures each pointer, and takes the product of the positions $q_k$ of the pointers. For two observables, and in the special case where the initial pointer distributions are real and have zero mean, e.g. a gaussian, one finds \cite{MJP07} \begin{align} \label{ABmean} \langle q_1q_2 \rangle=\frac{g_1g_2}{2}\ Re \left[ (A_2,A_1)_w+(A_1)_w\overline{(A_2)}_w \right], \end{align} ignoring terms in higher powers of $g_1$ and $g_2$. Here $(A_2,A_1)_w$ is the {\em sequential weak value} defined by \begin{align} \label{ABweakvalue} (A_2,A_1)_w=\frac{\bra{\psi_f}WA_2VA_1U\ket{\psi_i}}{\bra{\psi_f}WVU\ket{\psi_i}}, \end{align} where $U$ is a unitary taking the system from the initial state $\ket{\psi_i}$ to the first weak measurement, $V$ describes the evolution between the two measurements, and $W$ takes the system to the final state. (Note the reverse order of operators in $(A_2,A_1)$, which reflects the order in which they are applied.) If we drop the assumption about the special initial form of the pointer distribution and allow an arbitrary $\phi$, then the counterpart of (\ref{ABmean}) becomes extremely complicated: see Appendix, equation \ref{horrible}. Even the comparatively simple formula (\ref{ABmean}) is not quite ideal. By analogy with (\ref{qclassic}) we would hope for a formula of the form $\langle q_1q_2 \rangle \propto Re (A_2,A_1)_w$, but there is an extra term $(A_1)_w\overline{(A_2)}_w$. What we seek, therefore, is a relationship that has some of the formal simplicity of (\ref{qclassic}) and furthermore preserves its form for all measurement conditions. It turns out that this is possible if we take the {\em cumulant} of the expectations of pointer positions. 
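For concreteness, the sequential weak value defined above can be evaluated directly in a small example; the qubit states, observables and intermediate unitaries below are illustrative choices. In the trivial case $U=V=W=\mathbb{1}$ and $A_1=A_2=\sigma_z$ the definition gives $\bra{\psi_f}\sigma_z^2\ket{\psi_i}/\braket{\psi_f}{\psi_i} = 1$, which the sketch confirms.

```python
import numpy as np

def seq_weak_value(psi_f, W, A2, V, A1, U, psi_i):
    """Sequential weak value (A2, A1)_w per the definition above:
    <psi_f| W A2 V A1 U |psi_i> / <psi_f| W V U |psi_i>."""
    num = psi_f.conj() @ W @ A2 @ V @ A1 @ U @ psi_i
    den = psi_f.conj() @ W @ V @ U @ psi_i
    return num / den

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])                 # sigma_z
sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # sigma_x

psi_i = np.array([1.0, 1.0]) / np.sqrt(2.0)
psi_f = np.array([1.0, 0.0])

# trivial evolution: sigma_z^2 = 1, so the sequential weak value is exactly 1
print(seq_weak_value(psi_f, I2, sz, I2, sz, I2, psi_i))

# a rotation V acting between the two weak measurements
theta = 0.3
V = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])
print(seq_weak_value(psi_f, I2, sx, V, sz, I2, psi_i))
```

In general the sequential weak value is complex and is not determined by the single weak values $(A_1)_w$ and $(A_2)_w$ alone.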
As we shall see in the next section, this is a certain sum of products of joint expectations of subsets of the $q_i$, which we denote by $\langle q_1 \ldots q_n \rangle^c$. For a set of observables, we can define a formally equivalent expression using sequential weak values, which we denote by $(A_n, \ldots , A_1)^c_w$. Then the claim is that, up to order $n$ in the coupling constants $g_k$ (assumed to be all of the same approximate order of magnitude): \begin{align} \label{cumulant-equation} \langle q_1 \ldots q_n\rangle^c=g_1 \ldots g_n Re \left\{ \xi (A_n, \ldots , A_1)_w^c\right\}, \end{align} where $\xi$ is a factor dependent on the initial wavefunctions for each pointer. Equation (\ref{cumulant-equation}) holds for any initial pointer wavefunction, though different wavefunctions produce different values of $\xi$. The remarkable thing is that all the complexity is packed into this one number, rather than exploding into a multiplicity of terms, as in (\ref{horrible}). Note also that (\ref{firstxi}) has essentially the same form as (\ref{cumulant-equation}) since, in the case $n=1$, $(A)^c_w=A_w$. However, there is an extra term $\langle q \rangle_i$ in (\ref{firstxi}); this arises because the cumulant for $n=1$ is anomalous in that its terms do not sum to zero. \section{Cumulants} Given a collection of random variables, such as the pointer positions $q_i$, the cumulant $\langle q_1 \ldots q_n \rangle^c$ is a polynomial in the expectations of subsets of these variables \cite{KendallStuart77,Royer83}; it has the property that it vanishes whenever the set of variables $q_i$ can be divided into two independent subsets. One can say that the cumulant, in a certain sense, picks out the maximal correlation involving all of the variables. We introduce some notation to define the cumulant. Let $x$ be a subset of the integers $\{1, \ldots, n\}$.
We write $\prod_x q$ for $\prod_{i=1}^{|x|} q_{x(i)}$, where $|x|$ is the size of $x$ and the indices of the $q$'s in the product run over all the integers $x(i)$ in $x$. Then the cumulant is given by \begin{align}\label{cumulant} \langle q_1 \ldots q_n \rangle^c=\sum_{b=\{b_1,\ldots, b_k\}}a_k\prod_{j=1}^k \left\langle \prod_{b_j}q \right\rangle, \end{align} where $b=\{b_1,\ldots, b_k\}$ runs over all partitions of the integers $\{1, \ldots, n\}$ and the coefficient $a_k$ is given by \begin{align}\label{coefficients} a_k=(k-1)!(-1)^{k-1}. \end{align} For $n=1$ we have $\langle q \rangle^c=\langle q \rangle$, and for $n=2$ \begin{align}\label{q2} \langle q_1 q_2 \rangle^c=\langle q_1q_2 \rangle-\langle q_1 \rangle \langle q_2 \rangle. \end{align} There is an inverse operation for the cumulant \cite{ZZXY06,Royer83}: \begin{proposition}\label{anti} \begin{align}\label{anticumulant} \langle q_1 \ldots q_n \rangle=\sum_{b=\{b_1,\ldots, b_k\}}\prod_{j=1}^k \left\langle \prod_{b_j}q \right\rangle^c. \end{align} \end{proposition} \begin{proof} To see that this equation holds, we must show that the term $\prod_{j=1}^k\langle \prod_{b_j} q \rangle$ obtained by expanding the right-hand side is zero unless $b$ is the partition consisting of the single set $\{1, \ldots, n\}$. Replacing each subset $b_j$ by the integer $j$, this is equivalent to $\sum a_{k_1} \ldots a_{k_r}=0$, where the sum is over all partitions of $\{1, \ldots, k\}$ by subsets of sizes $k_1, \ldots, k_r$ and the $a_{k}$'s are given by (\ref{coefficients}). In this sum we distinguish partitions with distinct integers; e.g. $\{1,2\},\{3,4\}$ and $\{1,3\},\{2,4\}$. There are $\binom{k}{k_1 \ldots k_r}(l_1! \ldots l_k!)^{-1}$ such distinct partitions with subset sizes $k_1 \ldots k_r$, where $l_i$ is the number of $k$'s equal to $i$, so our sum may be rewritten as $k! \sum (-1)^{k_1-1} \ldots (-1)^{k_r-1} (l_1! \ldots l_k! 
k_1 \ldots k_r)^{-1}$, where the sum is now over partitions in the standard sense \cite{Apostol76}. This is $k!$ times the coefficient of $x^k$ in \begin{align} &\left(1+x+\frac{x^2}{2!}+\ldots \right)\left(1+(-x^2/2)+\frac{(-x^2/2)^2}{2!}+\ldots \right)\left(1+(x^3/3)+\frac{(x^3/3)^2}{2!}+\ldots \right)\ldots\\ &=e^{x-x^2/2+x^3/3-\ldots}= e^{\log(1+x)}=1+x. \end{align} Thus the sum is zero except for $k=1$, which corresponds to the single-set partition $b$. \end{proof} \begin{definition} If $\{1, \ldots, n\}$ can be written as the disjoint union of two subsets $S_1$ and $S_2$, we say the variables corresponding to these subsets are independent if \begin{align}\label{indep} \langle \prod_{S_1^\prime}q \prod_{S_2^\prime} q\rangle=\langle \prod_{S_1^\prime} q\rangle \langle \prod_{S_2^\prime} q\rangle, \end{align} for any subsets $S_i^\prime \subseteq S_i$. \end{definition} We now prove the characteristic property of cumulants: \begin{proposition}\label{indep-lemma} The cumulant vanishes if its arguments can be divided into two independent subsets. \end{proposition} \begin{proof} For $n=2$ this follows at once from (\ref{q2}) and (\ref{indep}), and we continue by induction. From (\ref{anticumulant}) and the inductive assumption for $n-1$, we have \begin{align} \langle q_1 \ldots q_n \rangle=\langle q_1 \ldots q_n \rangle^c+\sum_{b=\{b_1,\ldots, b_k\} \subset S_1}\prod_{j=1}^k \left\langle \prod_{b_j}q \right\rangle^c \sum_{c=\{c_1,\ldots, c_l\} \subset S_2}\prod_{j=1}^l \left\langle \prod_{c_j}q \right\rangle^c. \end{align} This holds because any term on the right-hand side of (\ref{anticumulant}) vanishes when any subset of the partition $b$ includes elements of both $S_1$ and $S_2$. Using (\ref{anticumulant}) again, this implies \begin{align} \langle q_1 \ldots q_n \rangle=\langle q_1 \ldots q_n \rangle^c+\langle \prod_{S_1} q\rangle \langle \prod_{S_2} q\rangle, \end{align} and by independence, $\langle q_1 \ldots q_n \rangle^c=0$.
Thus the inductive assumption holds for $n$. \end{proof} In fact, the coefficients $a_k$ in (\ref{cumulant}) are uniquely determined to have the form (\ref{coefficients}) by the requirement that the cumulant vanishes when the variables form two independent subsets \cite{Percus75,Simon79}. For $n=2$, the cumulant (\ref{q2}) is just the covariance, $\langle q_1 q_2 \rangle^c=\langle (q_1-\langle q_1 \rangle)(q_2-\langle q_2 \rangle) \rangle$, and the same is true for $n=3$, namely $\langle q_1 q_2q_3 \rangle^c=\langle (q_1-\langle q_1 \rangle)(q_2 -\langle q_2 \rangle)(q_3 -\langle q_3 \rangle) \rangle$. For $n=4$, however, there is a surprise. The covariance is given by \begin{align} \label{4covariance} \langle \prod_{i=1}^4 (q_i-\langle q_i \rangle) \rangle=\langle q_1q_2q_3q_4 \rangle -\sum \langle q_iq_jq_k \rangle\langle q_l \rangle+\sum \langle q_iq_j \rangle\langle q_k \rangle\langle q_l \rangle-3\langle q_1\rangle\langle q_2\rangle\langle q_3 \rangle\langle q_4 \rangle, \end{align} where the sums include all distinct combinations of indices, but the cumulant is \begin{align} \label{4cumulant} \langle q_1q_2q_3q_4\rangle^c=\langle q_1q_2q_3q_4 \rangle -\sum \langle q_iq_jq_k \rangle\langle q_l \rangle-\sum \langle q_iq_j \rangle\langle q_kq_l \rangle+2\sum \langle q_iq_j \rangle\langle q_k \rangle\langle q_l \rangle-6\langle q_1\rangle\langle q_2\rangle\langle q_3 \rangle\langle q_4 \rangle, \end{align} which includes terms like $\langle q_1q_2 \rangle\langle q_3q_4 \rangle$ that do not occur in the covariance. Note that, if the subsets $\{1,2\}$ and $\{3,4\}$ are independent, the covariance does not vanish, since independence implies we can write the first term in (\ref{4covariance}) as $\langle q_1q_2q_3q_4 \rangle=\langle q_1q_2 \rangle \langle q_3q_4 \rangle$ and there is no cancelling term. However, as we have seen, the cumulant does contain such a term, and it is a pleasant exercise to check that the whole cumulant vanishes. 
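The partition formula (\ref{cumulant}) and the independence property are easy to check numerically. The following Python sketch (ours, purely for illustration; the moment values are arbitrary choices) implements the cumulant as a sum over set partitions, takes a moment functional that factorises over the subsets $\{1,2\}$ and $\{3,4\}$, and computes the covariance by direct expansion:

```python
from itertools import combinations
from math import factorial

def partitions(s):
    """Generate all set partitions of the list s."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for part in partitions(rest):
        for i in range(len(part)):           # put `first` into an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part               # or into a new singleton block

def cumulant(indices, moment):
    """Joint cumulant via the partition sum with a_k = (k-1)!(-1)^(k-1)."""
    total = 0.0
    for part in partitions(list(indices)):
        term = factorial(len(part) - 1) * (-1) ** (len(part) - 1)
        for block in part:
            term *= moment(frozenset(block))
        total += term
    return total

# Moments factorising over {1,2} and {3,4} (two independent subsets):
pair12 = {frozenset(): 1, frozenset({1}): 1, frozenset({2}): 2, frozenset({1, 2}): 5}
pair34 = {frozenset(): 1, frozenset({3}): 1, frozenset({4}): 1, frozenset({3, 4}): 2}

def moment(s):
    return pair12[s & {1, 2}] * pair34[s & {3, 4}]

def covariance4():
    """<(q_1-<q_1>)(q_2-<q_2>)(q_3-<q_3>)(q_4-<q_4>)> by direct expansion."""
    total = 0.0
    for r in range(5):
        for subset in combinations([1, 2, 3, 4], r):
            rest = [i for i in [1, 2, 3, 4] if i not in subset]
            term = (-1) ** len(rest) * moment(frozenset(subset))
            for i in rest:
                term *= moment(frozenset({i}))
            total += term
    return total

kappa4 = cumulant([1, 2, 3, 4], moment)
```

With these moments the fourth cumulant comes out as zero, while the covariance equals the product of the two pair covariances, $3 \times 1 = 3$, in agreement with the discussion above.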
\section{Sequential weak values and cumulants} To carry out a sequential weak measurement, one starts a system in an initial state $\ket{\psi_i}$, then weakly couples pointers at several times $t_k$ during the evolution of the system, and finally post-selects the system state $\ket{\psi_f}$. One then measures the pointers and takes the product of the values obtained from these pointer measurements. It is assumed that one can repeat the whole process many times to obtain the expectation of the product of pointer values. If one measures pointer positions $q_k$, for instance, one can estimate $\langle q_1 \ldots q_n \rangle$, but one could also measure the momenta of the pointers to estimate $\langle p_1 \ldots p_n \rangle$. If the coupling for the $k$th pointer is given by $H_{int}=g_k\delta(t-t_k)p_kA_k$, and if the individual initial pointer wavefunctions are gaussian, or, more generally, are real with zero mean, then it turns out \cite{MJP07} that these expectations can be expressed in terms of sequential weak values of order $n$ or less. Here the sequential weak value of order $n$, $(A_n, \ldots A_1)_w$, is defined by \begin{align} \label{general-weakvalue} (A_n, \ldots A_1)_w=\frac{\bra{\psi_f}U_{n+1}A_nU_n \ldots A_1U_1\ket{\psi_i}}{\bra{\psi_f}U_{n+1} \ldots U_1\ket{\psi_i}}, \end{align} where $U_i$ defines the evolution of the system between the measurements of $A_{i-1}$ and $A_i$. When the $A_k$ are projectors, $A_k=\proj{x_k}$, we can write the sequential weak value as \cite{MJP07} \begin{align} \label{amplitude} (A_n, \ldots A_1)_w=\frac{\bra{\psi_f}U_{n+1}\ket{x_n}\ \bra{x_n}U_n\ket{x_{n-1}} \ldots \bra{x_1}U_1\ket{\psi_i}}{\sum_y \bra{\psi_f}U_{n+1}\ket{y_n}\ \bra{y_n}U_n\ket{y_{n-1}} \ldots \bra{y_1}U_1\ket{\psi_i}}=\frac{\mbox{amplitude}(x)}{\sum_y \mbox{amplitude}(y)}, \end{align} which shows that, in this case, the weak value has a natural interpretation as the amplitude for following the path defined by the $x_k$.
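The equality of (\ref{general-weakvalue}) and (\ref{amplitude}) for projectors amounts to the statement that summing the path amplitudes over a complete basis at each intermediate time reproduces the transition amplitude in the denominator. This is easily confirmed numerically; in the sketch below (ours, for illustration) the states, unitaries and the chosen path are generated at random or fixed arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3                                       # Hilbert-space dimension

def haar_unitary(d):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def rand_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

psi_i, psi_f = rand_state(d), rand_state(d)
U1, U2, U3 = (haar_unitary(d) for _ in range(3))
basis = np.eye(d)
x1, x2 = 0, 2                               # the chosen path (x_1, x_2)
P1, P2 = np.outer(basis[x1], basis[x1]), np.outer(basis[x2], basis[x2])

# Operator form of the sequential weak value, with A_k = |x_k><x_k|:
wv_operator = (psi_f.conj() @ U3 @ P2 @ U2 @ P1 @ U1 @ psi_i) \
              / (psi_f.conj() @ U3 @ U2 @ U1 @ psi_i)

# Amplitude form: amplitude(x) / sum over all paths y of amplitude(y).
def amplitude(y1, y2):
    return (psi_f.conj() @ U3 @ basis[y2]) * (basis[y2] @ U2 @ basis[y1]) \
           * (basis[y1] @ U1 @ psi_i)

wv_amplitude = amplitude(x1, x2) / sum(amplitude(y1, y2)
                                       for y1 in range(d) for y2 in range(d))
```

The two values agree to machine precision, since the sum over complete bases collapses to the operator product in the denominator.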
Figure \ref{cumulantfig} shows an example taken from \cite{MJP07} where the path (labelled by '1' and '2' successively) is a route taken by a photon through a pair of interferometers, starting by injecting the photon at the top left (with state $\ket{\psi_i}$) and ending with post-selection by detection at the bottom right (with final state $\ket{\psi_f}$). \begin{figure}[hbtp] \centerline{\epsfig{file=amplitude.eps,width=0.5\textwidth}} \caption{\label{cumulantfig}A photon is injected at the top left and passes through a pair of interferometers; weak measurements are made at the points labelled '1' and '2', and the photon is finally detected at the bottom right.} \end{figure} In the last section, the cumulant was defined for expectations of products of variables. One can define the cumulant for other entities by formal analogy; for instance for density matrices \cite{ZZXY06}, or hypergraphs \cite{Royer83}. We can do the same for sequential weak values, defining the cumulant by (\ref{cumulant}) with $\langle \prod_{b_j}q\rangle$ replaced by $(\overleftarrow{A_{b_j(|b_j|)}, \ldots A_{b_j(1)}})_w$, where the arrow indicates that the indices, which run over the subset $b_j$, are arranged in ascending order from right to left. For example, for $n=1$, $(A)^c_w=A_w$, and for $n=4$ \begin{align}\label{4weak} (A_4,A_3,A_2,A_1)^c_w&=(A_4,A_3,A_2,A_1)_w-\sum (\overleftarrow{A_i,A_j,A_k})_w(A_l)_w-\sum (\overleftarrow{A_i,A_j})_w(\overleftarrow{A_k,A_l})_w\\ \nonumber &+2\sum (\overleftarrow{A_i,A_j})_w(A_k)_w(A_l)_w-6(A_1)_w(A_2)_w(A_3)_w(A_4)_w. \end{align} There is a notion of independence that parallels (\ref{indep}): given a disjoint partition $S_1 \cup S_2=\{1, \ldots, n\}$ such that \begin{align} (\overleftarrow{A_{S_1^\prime \cup S_2^\prime}})_w=(\overleftarrow{A_{S_1^\prime}})_w(\overleftarrow{A_{S_2^\prime}})_w, \end{align} for any subsets $S_i^\prime \subseteq S_i$, we say the observables labelled by the two subsets are {\em weakly independent}. There is then an analogue of Proposition \ref{indep-lemma}: \begin{lemma} The cumulant $(A_n, \ldots, A_1)_w^c$ vanishes if the $A_k$ are weakly independent for some subsets $S_1$, $S_2$.
\end{lemma} As an example of this, if one is given a bipartite system $\mathcal{H}^A \otimes \mathcal{H}^B$, and initial and final states that factorise as $\ket{\psi_i}=\ket{\psi_i}^A \otimes \ket{\psi_i}^B$ and $\ket{\psi_f}=\ket{\psi_f}^A \otimes \ket{\psi_f}^B$, then observables on the $A$- and $B$-parts of the system are clearly weakly independent. Another class of examples comes from what one might describe as a ``bottleneck'' construction, where, at some point the evolution of the system is divided into two parts by a one-dimensional projector (the bottleneck) and its complement, and the post-selection excludes the complementary part. Then, if all the measurements before the projector belong to $S_1$ and all those after the projector belong to $S_2$, the two sets are weakly independent. This follows because we can write \begin{align*} (\overleftarrow{A_{S_1^\prime \cup S_2^\prime}})_w &=\frac{\bra{\psi_f}U_{n+1}A_n \ldots U_{k+1}A_kW_k\proj{\psi_b}V_kA_{k-1} \ldots A_1U_1\ket{\psi_i}} {\bra{\psi_f}U_{n+1} \ldots U_{k+1}W_k\proj{\psi_b}V_k \ldots U_1\ket{\psi_i}} \\ &=\frac{\bra{\psi_f}U_{n+1}A_n \ldots U_{k+1}A_kW_k\proj{\psi_b}V_k \ldots U_1\ket{\psi_i}} {\bra{\psi_f}U_{n+1} \ldots U_{k+1}W_k\proj{\psi_b}V_k \ldots U_1\ket{\psi_i}} \ \ \frac{\bra{\psi_f}U_{n+1} \ldots U_{k+1}W_k\proj{\psi_b}V_kA_{k-1} \ldots A_1U_1\ket{\psi_i}} {\bra{\psi_f}U_{n+1} \ldots U_{k+1}W_k\proj{\psi_b}V_k \ldots U_1\ket{\psi_i}}\\ &=(\overleftarrow{A_{S_1^\prime}})_w(\overleftarrow{A_{S_2^\prime}})_w, \end{align*} where $W_k\proj{\psi_b}V_k$ is the part of $U_k$ lying in the post-selected subspace. As an illustration of this, suppose we add a connecting link (Figure \ref{bottleneckfig}, ``$L$'') between the two interferometers in Figure \ref{cumulantfig}, so $\proj{\psi_b}$, the bottleneck, is the projection onto $L$, and post-selection discards the part of the wavefunction corresponding to the path $L^\prime$. 
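The factorisation above is easily confirmed numerically. The sketch below (ours, for illustration; the states, observables and unitaries are generated at random) places a rank-one bottleneck between the two measurements and checks that the sequential weak value factorises into the product of the individual weak values:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4

def haar_unitary(d):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def rand_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def rand_herm(d):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (z + z.conj().T) / 2

psi_i, psi_f, psi_b = rand_state(d), rand_state(d), rand_state(d)
A1, A2 = rand_herm(d), rand_herm(d)        # A1 before, A2 after the bottleneck
V, W = haar_unitary(d), haar_unitary(d)    # evolution before/after the bottleneck
B = W @ np.outer(psi_b, psi_b.conj()) @ V  # evolution through the bottleneck

den = psi_f.conj() @ B @ psi_i
seq_w = (psi_f.conj() @ A2 @ B @ A1 @ psi_i) / den   # (A2, A1)_w
w1 = (psi_f.conj() @ B @ A1 @ psi_i) / den           # (A1)_w
w2 = (psi_f.conj() @ A2 @ B @ psi_i) / den           # (A2)_w
```

The agreement $(A_2,A_1)_w=(A_2)_w(A_1)_w$ is exact here, reflecting the algebraic cancellation displayed above rather than any special choice of states.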
Then measurements at `1' and `2' are weakly independent; in fact $(A_1)_w=1/2$, $(A_2)_w=1/2$ and $(A_2,A_1)_w=1/4$. Note that the same measurements are {\em not} independent in the double interferometer of Figure \ref{cumulantfig}, where $(A_1)_w=0$, $(A_2)_w=0$, and yet, surprisingly, $(A_2,A_1)_w=-1/2$, \cite{MJP07}. \begin{figure}[hbtp] \centerline{\epsfig{file=bottleneck.eps,width=0.6\textwidth}} \caption{\label{bottleneckfig}} \end{figure} \section{The main theorem}\label{theorem-section} Consider $n$ system observables $A_1, \ldots, A_n$. Suppose $s_k$, for $k=1, \dots, n$, are observables of the $k$th pointer, namely Hermitian functions $s_k(q_k,p_k)$ of pointer position $q_k$ and momentum $p_k$, and the interaction Hamiltonian for the weak measurement of system observable $A_k$ is $H_k=g_ks_kA_k$, where $g_k$ is a small coupling constant (all $g_k$ being assumed of the same order of magnitude $g$). Suppose further that the pointer observables $r_k$ are measured after the coupling. Let $\phi_k$ be the $k$-th pointer's initial wave-function. For any variable $x_k$ associated to the $k$-th pointer, write $\langle x_k \rangle_i$ for $\bra{\phi_k}x_k \ket{\phi_k}$. We are now almost ready to state the main theorem, but first need to clarify the measurement procedure. When we evaluate expectations of products of the $r_k$ for different sets of pointers, for instance when we evaluate $\langle r_1 r_2\rangle$, we have a choice. We could either couple the entire set of $n$ pointers and then select the data for pointers 1 and 2 to get $\langle r_1 r_2\rangle$. Or we could carry out an experiment in which we couple just pointers 1 and 2 to give $\langle r_1 r_2\rangle$. These procedures give different answers. For instance, if we couple three pointers and measure pointers 1 and 2 to get $\langle r_1 r_2\rangle$, in addition to the terms in $g_1$, $g_2$ and $g_1g_2$ we also get terms in $g_2g_3$ and $g_1g_3$ involving the observable $A_3$. 
This means we get a different cumulant $\langle r_1 \ldots r_n \rangle^c$, depending on the procedure used. In what follows, we regard each expectation as being evaluated in a separate experiment, with only the relevant pointers coupled. It will be shown elsewhere that, with the alternative definition, the theorem still holds but with a different value of the constant $\xi$. \begin{theorem}[Cumulant theorem]\label{main-theorem} For $n \ge 2$, for any pointer observables $r_k$ and $s_k$, and for any initial pointer wavefunctions $\phi_k$, up to total order $n$ in the $g_k$, \begin{align}\label{main-result} \langle r_1 \ldots r_n \rangle^c= g_1 \ldots g_n Re \left\{ \xi (A_n, \ldots, A_1)^c_w\right\}, \end{align} where $\xi$ (sometimes written more explicitly as $\xi_{r_1 \ldots r_n}$) is given by \begin{align} \label{xi} \xi=2(-i)^n \left( \prod_{k=1}^n \langle r_ks_k \rangle_i-\prod_{k=1}^n \langle r_k \rangle_i \langle s_k \rangle_i \right). \end{align} For $n=1$ the same result holds, but with the extra term $\langle r \rangle_i$: \begin{align} \label{main-result1} \langle r \rangle =\langle r \rangle_i +g Re (\xi A_w). \end{align} \end{theorem} \begin{proof} We use the methods of \cite{MJP07} to calculate the expectations of products of pointer variables for sequential weak measurements. Let the initial and final states of the system be $\ket{\psi_i}$ and $\ket{\psi_f}$, respectively. Consider some subset $b=\{b_1, \ldots, b_\kappa\}$ of $\{1, \ldots, n\}$, with $b_1 \le b_2 \le \ldots \le b_\kappa$. 
The state of the system and the pointers $b_1, \ldots, b_\kappa$ after the coupling of those pointers is \begin{align}\label{totalstate} \Psi_{\mathcal{S},\mathcal{M}}=U_{n+1}\dots U_{b_\kappa+1} e^{-ig_{b_\kappa}s_{b_\kappa}A_{b_\kappa}}U_{b_\kappa} \ldots e^{-ig_{b_1}s_{b_1}A_{b_1}}U_{b_1}\dots U_1 \ket{\psi_i}\phi_{b_1}(r_{b_1}) \ldots \phi_{b_\kappa}(r_{b_\kappa}), \end{align} and following post-selection by the system state $\ket{\psi_f}$, the state of the pointers is \begin{align} \label{bigstate} \Psi_\mathcal{M}=\bra{\psi_f}U_{n+1}\dots U_{b_\kappa+1} e^{-ig_{b_\kappa}s_{b_\kappa}A_{b_\kappa}}U_{b_\kappa} \ldots e^{-ig_{b_1}s_{b_1}A_{b_1}}U_{b_1}\dots U_1\ket{\psi_i}\phi_{b_1}(r_{b_1}) \ldots \phi_{b_\kappa}(r_{b_\kappa}). \end{align} Expanding each exponential, we have \begin{align} \label{expectation} \langle r_{b_1} \ldots r_{b_\kappa} \rangle &=\frac{\int \overline{\Psi}_\mathcal{M} r_{b_1} \ldots r_{b_\kappa} \Psi_\mathcal{M} dr_{b_1} \ldots dr_{b_\kappa}}{\int |\Psi_\mathcal{M}|^2 dr_{b_1} \ldots dr_{b_\kappa}},\\ \label{sumratio}&=\frac{\sum_{i_1 \ldots i_n \in b\ ;\ j_1 \ldots j_n \in b} \ \alpha_{i_1 \ldots i_n}\overline{\alpha}_{j_1 \ldots j_n}u^{b_1}_{i_{b_1}j_{b_1}} \ldots u^{b_\kappa}_{i_{b_\kappa}j_{b_\kappa}}}{\sum_{i_1 \ldots i_n \in b\ ;\ j_1 \ldots j_n \in b} \ \alpha_{i_1 \ldots i_n}\overline{\alpha}_{j_1 \ldots j_n}v^{b_1}_{i_{b_1}j_{b_1}} \ldots v^{b_\kappa}_{i_{b_\kappa}j_{b_\kappa}}}, \end{align} where $i_k \ge 0$ are integers, $i_1, \ldots, i_n \in b$ means that $i_l=0$ for $l \notin b$, and \begin{align} \label{alpha} \alpha_{i_1 \ldots i_n}&=\left(\prod_{k=1}^n g_k^{i_k}\right)\ (A^{i_n}_n, \ldots A^{i_1}_1)_w, \\ \label{u} u^k_{l m}&=\int (m!)^{-1}\overline{(-is_k)^{m}\phi_k(r_k)}r_k(l!)^{-1}(-is_k)^{l}\phi_k(r_k)dr_k,\\ \label{v} v^k_{l m}&=\int (m!)^{-1}\overline{(-is_k)^{m}\phi_k(r_k)}(l!)^{-1}(-is_k)^{l}\phi_k(r_k)dr_k. 
\end{align} Let us write (\ref{sumratio}) as \begin{align} \langle r_{b_1} \ldots r_{b_\kappa} \rangle =\frac{\sum_{{\bf i} \in b, {\bf j} \in b} x_{{\bf i};{\bf j}}}{\sum_{{\bf i} \in b, {\bf j} \in b} y_{{\bf i};{\bf j}}}, \end{align} where \begin{align} \label{x} x_{{\bf i};{\bf j}}&=\alpha_{i_1 \ldots i_n}\overline{\alpha}_{j_1 \ldots j_n}u^{b_1}_{i_{b_1}j_{b_1}} \ldots u^{b_\kappa}_{i_{b_\kappa}j_{b_\kappa}},\\ \label{y} y_{{\bf k};{\bf l}}&=\alpha_{k_1 \ldots k_n}\overline{\alpha}_{l_1 \ldots l_n}v^{b_1}_{k_{b_1}l_{b_1}} \ldots v^{b_\kappa}_{k_{b_\kappa}l_{b_\kappa}}, \end{align} and ${\bf i}$ denotes the index set $\{i_1 \ldots i_n\}$, etc. Define \begin{align} X_b=\sum_{{\bf i} \in b, {\bf j} \in b} x_{{\bf i};{\bf j}},\ \ Y_b=\sum_{{\bf i} \in b, {\bf j} \in b} y_{{\bf i};{\bf j}}. \end{align} Then \begin{align}\label{CXY1} \langle r_1, \ldots , r_n \rangle^c&=\sum_{b_1, \ldots , b_k} (k-1)!(-1)^{k-1} \prod_{l=1}^k \langle r_{b_l(1)} \ldots r_{b_l(|b_l|)} \rangle \\ \label{CXY2} &=\sum_{b_1, \ldots , b_k} (k-1)!(-1)^{k-1} \prod_{l=1}^k \frac{X_{b_l}}{Y_{b_l}}. \end{align} Set $\mathcal{Y}=\prod_{b \subset \{1,\ldots , n\}} Y_b$, where $b$ in the product ranges over all distinct subsets of the integers $\{1,\ldots , n\}$. Then $\mathcal{Y} \langle r_1 \ldots r_n \rangle^c$ is an (infinite) weighted sum of terms \begin{align} \label{z} z_\mathcal{I}=(x_{{\bf i}(1);{\bf j}(1)} \ldots x_{{\bf i}(m);{\bf j}(m)})(y_{{\bf k}(1);{\bf l}(1)} \ldots y_{{\bf k}(m^\prime);{\bf l}(m^\prime)}), \end{align} where \begin{align}\label{index} \mathcal{I} &=\mathcal{I}_i\cup \mathcal{I}_j \cup \mathcal{I}_k \cup \mathcal{I}_l\\ &=\{{\bf i}(1), \ldots, {\bf i}(m)\} \cup \{{\bf j}(1), \ldots, {\bf j}(m)\} \cup \{{\bf k}(1), \ldots, {\bf k}(m^\prime)\} \cup \{{\bf l}(1), \ldots, {\bf l}(m^\prime)\} \nonumber \end{align} denotes the set of all the index sets that occur in $z_\mathcal{I}$.
The strategy is to show that, when the size of the index set $\mathcal{I}$ is less than $n$, the coefficient of $z_\mathcal{I}$ vanishes; by (\ref{alpha}) this implies that all coefficients of order less than $n$ in $g$ vanish. We then look at the index sets of size $n$, corresponding to terms of order $g^n$, and show that the relevant terms sum up to the right-hand side of (\ref{main-result}). But if $\mathcal{Y} \langle r_1 \ldots r_n \rangle^c=g^nx+O(g^{n+1})$ for some x, then we also have $\langle r_1 \ldots r_n \rangle^c=g^nx+O(g^{n+1})$, since $\mathcal{Y}=1+O(g)$. Let $b=\{b_1, \ldots, b_s\}$ be a partition of $\{1, \ldots, n\}$. We say that $b$ is a {\em valid} partition for $\mathcal{I}$ if \begin{enumerate}[(i)] \item For each $r$ with $1 \le r \le m$, ${\bf i}(r) + {\bf j}(r) \in b_l$, for some $b_l$, and we can associate a distinct $b_l$ to each $r$. (Here ${\bf i} + {\bf j}$ means the index set $\{i_1+j_1, \ldots i_n+j_n\}$.) \item For each $r$ with $1 \le r \le m^\prime$, ${\bf k}(r) + {\bf l}(r) \in S$, for some subset $S \subset \{1,\ldots , n\}$ that is not in the partition $b$, i.e. for which $S \ne b_l$ for any $l$, and we can associate a distinct $S$ to each $r$. Let $\gamma(\mathcal{I},b)$ be the number of ways of associating a subset $S$ to each $r$. \end{enumerate} \begin{lemma} \label{vanishing} The coefficient of $z_\mathcal{I}$ in $\mathcal{Y} \langle r_1 \ldots r_n \rangle^c$ is zero if all the index sets in $\mathcal{I}$ have a zero at some position $r$. \end{lemma} \begin{proof} If we expand $\mathcal{Y} \langle r_1 \ldots r_n \rangle^c$ using (\ref{CXY2}), each term in this expansion is associated with a partition $b$ of $\{1, \ldots, n\}$. Let $b$ be a valid partition for $\mathcal{I}$, and let $c=\{c_1, \ldots, c_s\}$ denote the partition derived from $b$ by removing $r$ from the subset $b_l$ that contains it, and deleting that subset if it contains only $r$. 
Then the following partitions include $b$ and are all valid: \begin{align}\label{cancel} &c^{(1)}=\{(rc_1),c_2, \ldots ,c_s\}\\\nonumber &c^{(2)}=\{c_1,(rc_2), \ldots ,c_s\}\\\nonumber &\ldots\ldots\ldots \\\nonumber &c^{(s)}=\{c_1,c_2, \ldots ,(rc_s)\}\\\nonumber &\\\nonumber &c^{(s+1)}=\{r,c_1,c_2, \ldots ,c_s\}. \end{align} Each partition $c^{(i)}$, for $1 \le i \le s+1$, contributes $\gamma(\mathcal{I},b)$ to the coefficient of $z_\mathcal{I}$ in $\mathcal{Y} \prod_{l=1}^k X_{c^{(i)}}/Y_{c^{(i)}}$, and since this term has coefficient $(s-1)!(-1)^{(s-1)}$ in (\ref{CXY2}) for partitions $c^{(1)}, c^{(2)}, \ldots c^{(s)}$, and $s!(-1)^s$ for $c^{(s+1)}$, the sum of all contributions is zero. \end{proof} From equations (\ref{alpha}) and (\ref{index}), the power of $g$ in the term $z_\mathcal{I}$ is $|\mathcal{I}|=|\mathcal{I}_i|+|\mathcal{I}_j|+|\mathcal{I}_k|+|\mathcal{I}_l|$. This, together with the preceding Lemma, implies that the lowest order non-vanishing terms in $\mathcal{Y} \langle r_1 \ldots r_n \rangle^c$ are $z_\mathcal{I}$'s that have a '1' occurring once and once only in each position; we call these {\em complete lowest-degree} terms. \begin{lemma} \label{one-index-set} The coefficient of a complete lowest-degree term $z_\mathcal{I}$ in $\mathcal{Y} \langle r_1 \ldots r_n \rangle^c$ is zero unless only one of the four classes of indices in $\mathcal{I}$, viz. $\mathcal{I}_i$, $\mathcal{I}_j$, $\mathcal{I}_k$ or $\mathcal{I}_l$, has non-zero terms. \end{lemma} \begin{proof} Consider first the case where the indices in $\mathcal{I}_j$ and $\mathcal{I}_l$ are zero, and where both $\mathcal{I}_i$ and $\mathcal{I}_k$ have some non-zero indices. Let $b=\{b_1, \ldots, b_r\}$ be the partition whose subsets consist of the non-zero positions in index sets ${\bf i}(t)$ in $\mathcal{I}_i$, and let $c=\{c_1, \ldots, c_s\}$ be some partition of the remaining integers in $\{1, \ldots, n\}$. Suppose $s \le r$.
Then we can construct a set of partitions by mixing $b$ and $c$; these have the form \begin{align} \label{mix} d^{(w)}=\{c_{i_1}, \dots, c_{i_t},(x_1b_1), \ldots, (x_rb_r)\}, \end{align} where each $x_i$ is either empty or consists of some $c_i$, and all the subsets $c_i$ are present once only in the partition. If any $d^{(w)}$ is valid, all the other mixtures will also be valid. Furthermore, the set of all valid partitions can be decomposed into non-overlapping subsets of mixtures obtained in this way. Any mixture $d^{(w)}$ gives the same value of $\gamma(\mathcal{I},d^{(w)})$, which we denote simply by $\gamma$; so to show that all the contributions to the coefficient of $z_\mathcal{I}$ cancel, we have only to sum over all the mixtures, weighting a partition with $t$ subsets by $(t-1)!(-1)^{t-1}$. This gives \begin{align*} \mbox{Coefficient of }z_\mathcal{I}&=\gamma \sum_{i=0}^s (s+r-i-1)!(-1)^{s+r-i-1}\binom{s}{i}\binom{r}{i}i!\\ &=\gamma (-1)^{s+r-1}s! \sum (s+r-i-1) \ldots (s-i+1) \binom{r}{i}(-1)^i,\\ &=\gamma (-1)^{s+r-1}s! \frac{\partial^{r-1}}{\partial x^{r-1}} \left\{x^{s-1}(x-1)^r \right\} \vert_{x=1}=0. \end{align*} The above argument applies equally well to the situation where $\mathcal{I}_i$ and $\mathcal{I}_l$ both have some non-zero indices and indices in $\mathcal{I}_j$ and $\mathcal{I}_k$ are zero. If the non-zero indices are present in $\mathcal{I}_i$ and $\mathcal{I}_j$, we can take any valid partition $a=\{a_1, \ldots, a_r\}$ and divide each subset $a_k$ into two subsets $b_k$ and $c_k$ with the indices from $\mathcal{I}_i$ in $b_k$ and those from $\mathcal{I}_j$ in $c_k$. All the mixtures of type (\ref{mix}) are valid, and they include the original partition $a$. By the above argument, the coefficients of $z_\mathcal{I}$ arising from them sum to zero. Other combinations of indices are dealt with similarly.
Note that, for $n=4$ and for the index sets $(1,1,0,0) \in \mathcal{I}_i$ and $(0,0,1,1) \in \mathcal{I}_j$, the ``mixture'' argument shows that the coefficient of $z_\mathcal{I}$ coming from $\langle r_1r_2r_3r_4\rangle$ cancels that coming from $\langle r_1r_2 \rangle \langle r_3r_4\rangle$ to give zero. This cancellation occurs with the cumulant (\ref{4cumulant}), but not with the covariance (\ref{4covariance}), where the term $\langle r_1r_2 \rangle \langle r_3r_4\rangle$ is absent. \end{proof} The only terms that need to be considered, therefore, are complete lowest-degree terms with non-zero indices only in one of the sets $\mathcal{I}_i$, $\mathcal{I}_j$, $\mathcal{I}_k$ and $\mathcal{I}_l$. It is easy to calculate the coefficients one gets for such terms. Consider the case of $\mathcal{I}_i$. We only need to consider the single partition $b$ whose subsets are the index sets of $\mathcal{I}_i$. For this partition, by (\ref{z}), (\ref{x}) and (\ref{y}), \begin{align} z_\mathcal{I}=\prod_{e=1}^t \alpha_{{\bf i}(e)}\prod_{k=1}^n u^k_{1,0}v^k_{0,0}=g_1 \ldots g_n\prod_{e=1}^t \left(A_{{\bf i}(e)(|{\bf i}(e)|)}, \ldots ,A_{{\bf i}(e)(1)}\right)_w\prod_{k=1}^n (-i\langle r_ks_k \rangle_i). \end{align} From (\ref{CXY2}), $z_\mathcal{I}$ appears in $\mathcal{Y} \langle r_1 \ldots r_n \rangle^c$ with a coefficient $(t-1)!(-1)^{t-1}$. So, summing over all $z_\mathcal{I}$ with indices in $\mathcal{I}_i$, one obtains $g_1 \ldots g_n(A_n, \ldots, A_1)^c_w\prod_{k=1}^n (-i\langle r_ks_k \rangle_i)$. Similarly, from (\ref{alpha}), (\ref{u}) and (\ref{v}), summing over the $z_\mathcal{I}$ with indices in $\mathcal{I}_j$ gives the complex conjugate of $g_1 \ldots g_n (A_n, \ldots, A_1)^c_w\prod_{k=1}^n (-i\langle r_ks_k \rangle_i)$. Thus $\mathcal{I}_i$ and $\mathcal{I}_j$ together give $g_1 \ldots g_n\, Re \left\{2\prod_{k=1}^n (-i\langle r_ks_k \rangle_i)\,(A_n, \ldots, A_1)^c_w \right\}$.
This corresponds to (\ref{main-result}), but with only the first half of $\xi$ as defined by (\ref{xi}). The rest of $\xi$ comes from the index sets $\mathcal{I}_k$ and $\mathcal{I}_l$. However, the sum of the coefficients of $z_\mathcal{I}$ for the same index set in $\mathcal{I}_i$ and $\mathcal{I}_k$ is zero. This is true because, for any complete lowest-degree index set, the sum of coefficients for all $z_\mathcal{I}$ with the indices divided in any manner between $\mathcal{I}_i$ and $\mathcal{I}_k$ is zero, being the number of ways of obtaining that index set from $\mathcal{Y}$ times $\sum_{t=1}^n (t-1)(-1)^{t-1}$. But by Lemma \ref{one-index-set}, the coefficient of $z_\mathcal{I}$ is zero unless the index set comes wholly from $\mathcal{I}_i$ or $\mathcal{I}_k$. Now (\ref{z}), (\ref{x}) and (\ref{y}) tell us that, for an index set in $\mathcal{I}_k$, \begin{align} z_\mathcal{I}=\prod_{e=1}^t \alpha_{{\bf i}(e)}\prod_{k=1}^n u^k_{0,0}v^k_{1,0}=g_1 \ldots g_n\prod_{e=1}^t \left(A_{{\bf i}(e)(|{\bf i}(e)|)}, \ldots ,A_{{\bf i}(e)(1)}\right)_w\prod_{k=1}^n (-i\langle r_k \rangle_i\langle s_k \rangle_i), \end{align} and from the above argument, this appears in $\mathcal{Y} \langle r_1 \ldots r_n \rangle^c$ with coefficient $-(t-1)!(-1)^{t-1}$. Again, the index sets in $\mathcal{I}_l$ give the complex conjugate of those in $\mathcal{I}_k$. Thus we obtain the remaining half of $\xi$, which proves (\ref{main-result}) for $n \ge 2$. For $n=1$ the constant terms (of order zero in $g$) in $\mathcal{Y} \langle r \rangle$ do not vanish, but the proof goes through if we consider $\mathcal{Y}(\langle r \rangle-\langle r \rangle_i)$ instead. \end{proof} \section{Exploring the theorem} Consider first the simplest case, where $n=1$ and $r=q$. We take $H_{int}=g \delta(t) pA$ throughout this section, so $s=p$.
Then (\ref{main-result1}) and (\ref{xi}) give \begin{align}\label{qmean} \langle q \rangle =\langle q \rangle_i +gRe(\xi_q A_w)\qquad \mbox{with} \qquad \xi_q=-2i\left(\langle qp \rangle_i-\langle q \rangle_i \langle p \rangle_i \right), \end{align} which we have already seen as equations (\ref{firstxi}) and (\ref{xi1}). If we measure the pointer momentum, so $r=p$, we find \begin{align}\label{pmean} \langle p \rangle =\langle p\rangle_i+gRe(\xi_p A_w) \qquad \mbox{with} \qquad \xi_p=-2i(\langle p^2\rangle_i-\langle p\rangle_i^2), \end{align} which is equivalent to the result obtained in \cite{J07}. For two variables, our theorem for $r_1=q_1$, $r_2=q_2$ is \begin{align}\label{qq} \langle q_1q_2 \rangle^c=g_1g_2 Re (\xi_{qq} (A_2,A_1)^c_w), \end{align} with \begin{align}\label{xiqq} \xi_{qq}=2(\langle q_1\rangle_i\langle p_1\rangle_i\langle q_2\rangle_i\langle p_2\rangle_i-\langle q_1p_1\rangle_i\langle q_2p_2\rangle_i). \end{align} The calculations in the Appendix allow one to check (\ref{qq}) and (\ref{xiqq}) by explicit evaluation; see (\ref{explicit}). Note in passing that, if one writes $\Delta q_1=\sqrt{\langle (q_1- \langle q_1 \rangle)^2\rangle}$, and similarly $\Delta q_2$, the Cauchy-Schwarz inequality \begin{align*} \{\langle q_1 q_2 \rangle^c \}^2=\left\{\langle (q_1-\langle q_1 \rangle)(q_2-\langle q_2 \rangle)\rangle\right\}^2 \le \langle (q_1- \langle q_1 \rangle)^2\rangle \langle (q_2- \langle q_2 \rangle)^2\rangle \end{align*} implies a Heisenberg-type inequality \begin{align*} \Delta q_1 \Delta q_2 \ge g_1g_2 Re \{ \xi_{qq} (A_2,A_1)^c_w\}, \end{align*} relating the pointer noise distributions of two weak measurements carried out at different times during the evolution of the system.
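For a gaussian initial pointer these $\xi$'s can be evaluated directly: with $\phi$ real and of zero mean one has $\langle qp \rangle_i=i/2$, so that $\xi_q=1$ and $\xi_{qq}=1/2$. A quick numerical confirmation (ours, for illustration), taking $\sigma=1$ and representing $p$ as $-i\,d/dq$ on a position grid:

```python
import numpy as np

q = np.linspace(-25.0, 25.0, 5001)
dq = q[1] - q[0]
sigma = 1.0
phi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-q**2 / (4 * sigma**2))

def expect(left, right):                     # <left|right> on the grid
    return np.sum(np.conj(left) * right) * dq

p_phi = -1j * np.gradient(phi, dq)           # p acting on phi
mq = expect(phi, q * phi)                    # <q>_i   (= 0 here)
mp = expect(phi, p_phi)                      # <p>_i   (= 0 here)
mqp = expect(phi, q * p_phi)                 # <qp>_i  (= i/2 here)
mp2 = expect(p_phi, p_phi)                   # <p^2>_i (= 1/(4 sigma^2))

xi_q = -2j * (mqp - mq * mp)                 # -> 1
xi_p = -2j * (mp2 - mp**2)                   # -> -i/2 for sigma = 1
xi_qq = 2 * (mq * mp * mq * mp - mqp * mqp)  # two identical pointers -> 1/2
```

Since $\xi_p=-i/2$ here, equation (\ref{pmean}) gives $\langle p \rangle=(g/2)Im A_w$ for this pointer: the momentum shift reads out the imaginary part of the weak value.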
When one or both of the $q_k$ in (\ref{qq}) is replaced by the pointer momentum $p_k$, we get \begin{align} \label{qp} \langle q_1 p_2 \rangle^c&=g_1g_2 Re \left( \xi_{qp} (A_2,A_1)^c_w\right),\\ \label{pp} \langle p_1 p_2 \rangle^c&=g_1g_2 Re \left( \xi_{pp} (A_2,A_1)^c_w\right), \end{align} with \begin{align} \label{xiqp} \xi_{qp}&=-2\left(\langle q_1p_1\rangle_i\langle p_2^2\rangle_i-\langle q_1\rangle_i\langle p_1\rangle_i \langle p_2\rangle_i^2 \right),\\ \label{xipp} \xi_{pp}&=-2\left(\langle p_1^2\rangle_i\langle p_2^2\rangle_i-\langle p_1\rangle_i^2 \langle p_2\rangle_i^2 \right). \end{align} Consider now the special case where $\phi$ is real with zero mean. Then the very complicated expression for $\langle q_1q_2 \rangle$ in (\ref{horrible}) reduces to \begin{align} \label{2q} \langle q_1q_2 \rangle=\frac{g_1g_2}{2}\ Re \left[ (A_2,A_1)_w+(A_1)_w({\bar A}_2)_w \right], \end{align} as shown in \cite{MJP07}. Two further examples from \cite{MJP07} are \begin{align} \label{3q} \langle q_1q_2q_3 \rangle&=\frac{g_1g_2g_3}{4}\ Re \left[(A_3,A_2,A_1)_w+(A_3,A_2)_w({\bar A}_1)_w+(A_3,A_1)_w({\bar A}_2)_w+(A_2,A_1)_w({\bar A}_3)_w \right],\\ \label{4q} \langle q_1q_2q_3q_4 \rangle&=\frac{g_1g_2g_3g_4}{8}\ Re \left[ (A_4,A_3,A_2,A_1)_w+(A_4,A_3,A_2)_w({\bar A}_1)_w+\ldots +(A_4,A_3)_w(\overline{A_2,A_1})_w+\ldots \right]. \end{align} We can use these formulae to calculate the cumulant $\langle q_1 \ldots q_n \rangle^c$, and thus check Theorem \ref{main-theorem} for this special class of wavefunctions $\phi$. Each formula contains on the right-hand side a leading sequential weak value, but there are also extra terms, such as $(A_1)_w({\bar A}_2)_w$ in (\ref{2q}) and $(A_2,A_1)_w({\bar A}_3)_w$ in (\ref{3q}). All these extra terms are eliminated when the cumulant is calculated, and we are left with (\ref{main-result}) with $\xi_{q_1 \ldots q_n}=(1/2)^{n-1}$. This gratifying simplification depends on the fact that the cumulant is a sum over all partitions. 
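The partition structure of the cumulant can be made concrete. The sketch below (illustrative, with names of our own choosing) evaluates $\langle x_1\ldots x_n\rangle^c=\sum_{\pi}(-1)^{|\pi|-1}(|\pi|-1)!\prod_{B\in\pi}\langle\prod_{i\in B}x_i\rangle$ from sample moments, recovering the covariance for $n=2$ and vanishing on independent variables:

```python
import numpy as np
from math import factorial

def partitions(items):
    """Yield every partition of the list `items` as a list of blocks."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):          # put `first` into an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part              # or into a block of its own

def joint_cumulant(X):
    """<x_1 ... x_n>^c from an (n, samples) array of equally weighted samples."""
    X = np.asarray(X, dtype=float)
    total = 0.0
    for part in partitions(list(range(len(X)))):
        term = (-1.0) ** (len(part) - 1) * factorial(len(part) - 1)
        for block in part:
            term *= np.mean(np.prod(X[block], axis=0))
        total += term
    return total

# n = 2 reduces to the covariance ...
x = np.array([1.0, 2.0, 3.0, 4.0])
print(joint_cumulant([x, x]), np.var(x))    # both give the variance
# ... and the cumulant of variables from independent sets vanishes:
u = np.array([0.0, 0.0, 1.0, 1.0])          # two independent fair bits,
v = np.array([0.0, 1.0, 0.0, 1.0])          # sampled on the full product grid
print(joint_cumulant([u, u, v]))
```

The second property is exactly what eliminates the "extra terms" above, since those come from products over disjoint subsets of the pointers.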
For instance, it does not occur if one uses the covariance instead of the cumulant. To see this, look at the case $n=4$: The term $\langle q_1q_2q_3q_4 \rangle$ in $Cov(q_1,q_2,q_3,q_4)$, the covariance of pointer positions, gives rise via (\ref{4q}) to weak value terms like $(A_4,A_3)_w(\overline{A_2,A_1})_w$. However, (\ref{4covariance}) together with (\ref{2q}), (\ref{3q}) and (\ref{4q}) show that $Cov(q_1,q_2,q_3,q_4)$ has no other terms that generate any multiple of $(A_4,A_3)_w(\overline{A_2,A_1})_w$, and consequently this weak value expression cannot be cancelled and must be present in $Cov(q_1,q_2,q_3,q_4)$. This means that there cannot be any equation relating $Cov(q_1,q_2,q_3,q_4)$ and $Cov(A_4,A_3,A_2,A_1)_w$. This negative conclusion does not apply to the cumulant $\langle q_1q_2q_3q_4 \rangle^c$, as this includes terms such as $\langle q_1q_2 \rangle\langle q_3q_4 \rangle$; see (\ref{4cumulant}). \section{Simultaneous weak measurement} We have treated the interactions between each pointer and the system individually, the Hamiltonian for the $k$'th pointer and system being $H_k=g_k \delta(t-t_k) s_kA_k$, but of course we can equivalently describe the interaction between all the pointers and the system by $H= \sum_k g_k \delta(t-t_k) s_kA_k$. For sequential measurements we implicitly assume that all the times $t_k$ are distinct. However, the limiting case where there is no evolution between coupling of the pointers and all the $t_k$'s are equal is of interest, and is the {\em simultaneous} weak measurement considered in \cite{RS04,R04,LR05}. In this case, the state of the pointers after post-selection is given by \begin{align} \label{simul-bigstate} \Psi_\mathcal{M}=\bra{\psi_f}e^{-i(g_1s_1A_1+ \ldots +g_ns_nA_n)} \ket{\psi_i}\phi_1(r_1) \ldots \phi_n(r_n). 
\end{align} The exponential $e^{-i(g_1s_1A_1+ \ldots +g_ns_nA_n)}$ here differs from the sequential expression $e^{-ig_ns_nA_n} \ldots e^{-ig_1s_1A_1}$ in (\ref{bigstate}) in that each term in the expansion of the latter appears with the operators in a specific order, viz. the arrow order $\leftarrow$ as in (\ref{4weak}), whereas in the expansion of the former the same term is replaced by a symmetrised sum over all orderings of operators. For instance, for arbitrary operators $X$, $Y$ and $Z$, the third degree terms in $e^Xe^Ye^Z$ include $X^3/3!$, $X^2Y/2!$ and $XYZ$, whose counterparts in $e^{(X+Y+Z)}$ are, respectively, $X^3/3!$, $\{X^2Y+XYX+YX^2\}/3!$ and $\{XYZ+XZY+YXZ+YZX+ZXY+ZYX\}/3!$. Apart from this symmetrisation, the calculations in Section \ref{theorem-section} can be carried through unchanged for simultaneous measurement. Thus if we replace the sequential weak value by the {\em simultaneous weak value} \cite{RS04,R04,LR05} \begin{align} \label{swv} (A_{i_k}, \ldots, A_{i_1})_{ws}=\frac{1}{k!}\sum_{\pi \in S_k} \left(A_{i_{\pi(k)}}, \ldots, A_{i_{\pi(1)}} \right)_w, \end{align} where the sum on the right-hand side includes all possible orders of applying the operators, we obtain a version of Theorem \ref{main-theorem} for simultaneous weak measurement: \begin{align} \langle r_1 \ldots r_n \rangle^c= g_1 \ldots g_n Re \left\{ \xi (A_n, \ldots, A_1)^c_{ws}\right\}. \end{align} Likewise, relations such as (\ref{2q}), (\ref{3q}), etc., hold with simultaneous weak values in place of the sequential weak values; indeed, these relations were first proved for simultaneous measurement \cite{RS04,R04}. \begin{figure}[hbtp] \centerline{\epsfig{file=alternate3.eps,width=0.5\textwidth}} \caption{\label{alternate}} \end{figure} From (\ref{swv}) we see that, when the operators $A_k$ all commute, the sequential and simultaneous weak values coincide. 
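Definition (\ref{swv}) is easy to probe numerically. The sketch below (illustrative qubit operators and pre/post-selected states of our own choosing) averages sequential weak values over all operator orderings and confirms that the two notions coincide for commuting observables but differ otherwise:

```python
import numpy as np
from itertools import permutations
from math import factorial

def seq_weak_value(ops, psi_i, psi_f):
    """Sequential weak value (A_n, ..., A_1)_w; ops = [A_1, ..., A_n] in time order."""
    v = psi_i.astype(complex)
    for A in ops:                          # A_1 acts first, A_n last
        v = A @ v
    return (psi_f.conj() @ v) / (psi_f.conj() @ psi_i)

def simul_weak_value(ops, psi_i, psi_f):
    """Simultaneous weak value: average of sequential ones over all orderings."""
    return sum(seq_weak_value(p, psi_i, psi_f)
               for p in permutations(ops)) / factorial(len(ops))

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi_i = np.array([1.0, 1.0]) / np.sqrt(2.0)    # illustrative pre-selection
psi_f = np.array([1.0, 0.5])                   # illustrative post-selection

# Commuting observables: the two weak values coincide.
d1 = np.diag([1.0, -1.0]).astype(complex)
d2 = np.diag([2.0, 0.0]).astype(complex)
print(seq_weak_value([d1, d2], psi_i, psi_f), simul_weak_value([d1, d2], psi_i, psi_f))

# Non-commuting observables: they differ.
print(seq_weak_value([sx, sz], psi_i, psi_f), simul_weak_value([sx, sz], psi_i, psi_f))
```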
One important instance of this arises when the operators $A_k$ are applied to distinct subsystems, as in the case of the simultaneous weak measurements of the electron and positron in Hardy's paradox \cite{Hardy92,ABPRT01}. When the operators do not commute, the meaning of simultaneous weak measurement is not so obvious. One possible physical interpretation follows from the well-known formula \begin{align} \label{formula} e^{X+Y}=\lim_{N \to \infty}(e^{X/N}e^{Y/N})^N \end{align} and its analogues for more operators. Suppose two pointers, one for $A_1$ and one for $A_2$, are coupled alternately in a sequence of $N$ short intervals (Figure \ref{alternate}, top diagram) with coupling strength $g_k/N$ for each interval. This is an enlarged sense of sequential weak measurement \cite{MJP07} in which the same pointer is used repeatedly, coherently preserving its state between couplings. The state after post-selection is \begin{align}\label{2simul} \Psi_\mathcal{M}=\bra{\psi_f}\left(e^{-i\left(\frac{g_2}{N}\right)s_2A_2}e^{-i\left(\frac{g_1}{N}\right)s_1A_1}\right)^N \ket{\psi_i}\phi_1(r_1)\phi_2(r_2). \end{align} From (\ref{formula}) we deduce that \begin{align} \Psi_\mathcal{M} \approx \bra{\psi_f}e^{-i(g_2s_2A_2+g_1s_1A_1)}\ket{\psi_i}\phi_1(r_1)\phi_2(r_2). \end{align} This picture readily extends to more operators $A_k$. One can also simulate a simultaneous measurement by averaging the results of a set of sequential measurements with the operators in all orders; in effect, one carries out a set of experiments that implement the averaging in (\ref{swv}). There is then no single act that counts as simultaneous measurement, but weak measurement in any case relies on averaging many repeats of experiments in order to extract the signal from the noise. In a certain sense, therefore, sequential measurement includes and extends the concept of simultaneous measurement. 
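The limit (\ref{formula}) behind this alternating-coupling picture can be observed directly for small matrices. A minimal sketch (the matrices are our own illustrative choice; the matrix exponential is a truncated Taylor series, adequate for matrices of small norm):

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small ||M||)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

X = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # [X, Y] != 0
Y = np.array([[0.0, 0.0], [1.0, 0.0]], dtype=complex)
target = expm(X + Y)

errors = []
for N in (1, 10, 100, 1000):
    step = expm(X / N) @ expm(Y / N)
    approx = np.linalg.matrix_power(step, N)    # (e^{X/N} e^{Y/N})^N
    errors.append(np.linalg.norm(approx - target))
print(errors)    # shrinks roughly like 1/N
```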
However, if we wish to accomplish simultaneous measurement in a single act, then we need a broader concept of weak measurement where pointers can be re-used; indeed, we can go further, and consider generalised weak coupling between one time-evolving system and another, followed by measurement of the second system. However, even in this case, the measurement results can be expressed algebraically in terms of the sequential weak values of the first system \cite{MJP07}. \section{Lowering operators} Lundeen and Resch \cite{LR05} showed that, for a gaussian initial pointer wavefunction, if one defines an operator $a_{LR}$ by \begin{align*} a_{LR}=\langle p^2 \rangle_i^{1/2} \left(q+\frac{ip}{2\langle p^2 \rangle_i}\right), \end{align*} then the relationship \begin{align*} \langle a_{LR} \rangle=g \langle p^2 \rangle_i^{1/2} A_w \end{align*} holds. They argued that $a_{LR}$ can be interpreted physically as a lowering operator, carrying the pointer from its first excited state $\ket{1}$, in number state notation, to the gaussian state $\ket{0}$ (despite the fact that the pointer is not actually in a harmonic potential). Although $a_{LR}$ is not an observable, $\langle a_{LR} \rangle$ can be regarded as a prescription for combining expectations of pointer position and momentum to get the weak value. If instead of $a_{LR}$ one takes \begin{align} \label{gaussian} a=q+\frac{ip}{2\langle p^2 \rangle_i}, \end{align} then the even simpler relationship \begin{align} \label{LR1} \langle a \rangle=g A_w, \end{align} holds. We refer to $a$ as a generalised lowering operator. Lundeen and Resch also extended their lowering operator concept to simultaneous weak measurement of several observables $A_k$. Rephrased in terms of our generalised lowering operators $a_k$ defined by (\ref{gaussian}), their finding \cite{LR05} can be stated as \begin{align}\label{simul} \langle a_1 \ldots a_n \rangle=g_1 \ldots g_n(A_1 \ldots A_n)_{ws}. \end{align} This is of interest for two reasons. 
First, the entire simultaneous weak value appears on the right-hand side, not just its real part; and second, the ``extra terms'' in the simultaneous analogues of (\ref{2q}), (\ref{3q}) and (\ref{4q}) have disappeared. The lowering operator seems to relate directly to weak values. We can generalise these ideas in two ways. First, we extend them from simultaneous to sequential weak measurements. Secondly, instead of assuming the initial pointer wavefunction is a gaussian, we allow it to be arbitrary; we do this by defining a generalised lowering operator \begin{align}\label{lowering} a=q+i \frac{p}{\eta}, \qquad \mbox{with} \qquad \eta=-i{\overline \xi}_p/{\overline \xi}_q. \end{align} For a gaussian $\phi$, $\eta=2\langle p^2 \rangle_i$, so the above definition reduces to (\ref{gaussian}) in this case. In general, however, $\phi$ will not be annihilated by $a$ and is therefore not the number state $\ket{0}$ (this state is a gaussian with complex variance $\eta^{-1}$). Nonetheless, there is an analogue of Theorem \ref{main-theorem} in which the whole sequential weak value, rather than its real part, appears: \begin{theorem}[Cumulant theorem for lowering operators]\label{lowering-theorem} For $n>1$ \begin{align}\label{nlowering} \langle a_1 \ldots a_n \rangle^c = g_1 \ldots g_n \ \vartheta \ (A_n, \ldots, A_1)_w^c, \end{align} where $\vartheta$ is given by \begin{align}\label{constant} \vartheta=\sum_{(i_1, \ldots, i_n) \in \{0,1\}^n} \ \frac{(-1)^{\sum i_j} \xi_{r_{i_1} \ldots r_{i_n}} \left( {\overline \xi}_{r_{1-i_1}} \ldots {\overline \xi}_{r_{1-i_n}} \right) }{2\left({\overline{ \xi}}_{p_1} \ldots {\overline{\xi}}_{p_n} \right)}. \end{align} For $n=1$ the same result holds, but with the extra term $\langle a \rangle_i$: \begin{align}\label{1lowering} \langle a \rangle = \langle a \rangle_i +\vartheta g A_w. \end{align} \end{theorem} \begin{proof} Put $r_0=q$, $r_1=p$. 
Then \begin{align*} \langle a_1 \ldots a_n \rangle^c &=\langle (q_1+i p_1/\eta_1) \ldots (q_n+i p_n/\eta_n) \rangle^c,\\ &=\sum_{(i_1, \ldots, i_n) \in \{0,1\}^n} \ \frac{(-1)^{\sum i_j}\langle r_{i_1} \ldots r_{i_n} \rangle^c \left( {\overline \xi}_{r_{1-i_1}} \ldots {\overline \xi}_{r_{1-i_n}} \right) }{\left({\overline \xi}_{p_1} \ldots {\overline \xi}_{p_n} \right)},\\ &=g_1 \ldots g_n \left[ \vartheta (A_n, \ldots, A_1)^c_w + \varpi (\overline{A_n, \ldots, A_1})^c_w \right], \end{align*} where we used Theorem \ref{main-theorem} to get the last line, and where $\vartheta$ is given by (\ref{constant}) and $\varpi$ by \begin{align*} \varpi=\sum_{(i_1, \ldots, i_n) \in \{0,1\}^n} \ \frac{(-1)^{\sum i_j} {\overline \xi}_{r_{i_1} \ldots r_{i_n}} \left( {\overline \xi}_{r_{1-i_1}} \ldots {\overline \xi}_{r_{1-i_n}} \right) }{2\left({\overline \xi}_{p_1} \ldots {\overline \xi}_{p_n} \right)}; \end{align*} (note the bar over ${\overline \xi}_{r_{i_1} \ldots r_{i_n}}$ that is absent in the definition of $\vartheta$ by (\ref{constant})). We want to prove $\varpi=0$, and to do this it suffices to prove that the complex conjugate of the numerator is zero, i.e. \begin{align*} \varpi^\prime=\sum_{(i_1, \ldots, i_n) \in \{0,1\}^n} \ (-1)^{\sum i_j} \xi_{r_{i_1} \ldots r_{i_n}} \left( \xi_{r_{1-i_1}} \ldots \xi_{r_{1-i_n}} \right)=0. \end{align*} Let $\mu_k=\langle q_ks_k \rangle_i$, $\nu_k=\langle q_k \rangle_i \langle s_k \rangle_i$, $\rho_k=\langle p_ks_k \rangle_i$, $\tau_k=\langle p_k \rangle_i \langle s_k \rangle_i$ (we avoid the letters $a_k$, which already denote the lowering operators). Using the definition of $\xi$ in (\ref{xi}), the above equation can be written \begin{align*} \varpi^\prime/(2^{n+1}(-1)^n)&=\prod_{k=1}^n \left\{ \mu_k(\rho_k-\tau_k)-\rho_k(\mu_k-\nu_k)\right\} -\prod_{k=1}^n \left\{ \nu_k(\rho_k-\tau_k)-\tau_k(\mu_k-\nu_k)\right\}\\ &=\prod (\nu_k\rho_k-\mu_k\tau_k)-\prod (\nu_k\rho_k-\mu_k\tau_k) = 0. \end{align*} \end{proof} Suppose the interaction Hamiltonian has the standard von Neumann form $H_{int}=gpA$, so $s=p$ in the definition of $\xi$ by equation (\ref{xi}). 
Then for $n=1$, since $\overline{ \xi}_p=-\xi_p$ and $\overline{\langle qp \rangle}_i=\langle pq \rangle_i$, $\vartheta=\tfrac{1}{2}(\xi_q+\overline {\xi}_q)=(-i)(\langle qp \rangle_i-\langle pq \rangle_i)=1$, so we get the even simpler result \begin{align}\label{simple} \langle a \rangle=\langle a \rangle_i+gA_w. \end{align} This is valid for all initial pointer wavefunctions, and therefore extends Lundeen and Resch's equation (\ref{LR1}). It seems almost too simple: there is no factor corresponding to $\xi$ in equation (\ref{qmean}). However, a dependency on the initial pointer wavefunction is of course built into the definition of $a$ through $\eta$. For $n>1$ it is no longer true that $\vartheta=1$, even with the standard interaction Hamiltonian. However, if in addition $\langle p \rangle_i=0$, then \begin{align*} \vartheta=(-i)^n\prod_{k=1}^n\left(\langle q_kp_k \rangle_i- \langle p_kq_k \rangle_i \right)=(-i)^n(i)^n=1. \end{align*} Thus $\langle a_1 \ldots a_n \rangle^c = g_1 \ldots g_n (A_n, \ldots, A_1)_w^c$ for all $n$. Applying the inverse operation for the cumulant, given by Proposition \ref{anti}, we deduce: \begin{corollary} If $\langle p \rangle_i=0$, e.g. if the initial pointer wavefunction $\phi$ is real, then for $n>1$ \begin{align}\label{nice} \langle a_1 \ldots a_n \rangle=g_1 \ldots g_n (A_n, \ldots, A_1)_w. \end{align} \end{corollary} This is the sequential weak value version of the result for simultaneous measurements, (\ref{simul}), but is more general than the gaussian case treated in \cite{LR05}. We might be tempted to try to repeat the above argument for pointer positions $q_k$ instead of the lowering operators $a_k$ by applying the anti-cumulant to both sides of (\ref{main-result}). This fails, however, because of the need to take the real part of the weak values; in fact, this is one way of seeing where the extra terms come from in (\ref{2q}), (\ref{3q}) and (\ref{4q}) and their higher analogues. 
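Equation (\ref{simple}) can be checked numerically along the same lines as before. The sketch below (an illustrative setup of our own: gaussian pointer, qubit with $A=\sigma_z$, and a complex post-selection so that $A_w$ has an imaginary part) forms $\langle a\rangle=\langle q\rangle+i\langle p\rangle/\eta$ with $\eta=2\langle p^2\rangle_i$ and compares it with $gA_w$:

```python
import numpy as np

# Gaussian pointer, H_int = g*delta(t)*p*A, hbar = 1; here <a>_i = 0.
sigma, g = 1.0, 0.01
q = np.linspace(-10.0, 10.0, 4001)
dq = q[1] - q[0]

def phi(x):
    return (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))

# Complex post-selection gives a complex weak value (illustrative states).
psi_i = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
psi_f = np.array([1.0, 0.5j], dtype=complex)
c = psi_f.conj() * psi_i
A_w = (c[0] - c[1]) / (c[0] + c[1])            # here 0.6 + 0.8i

Phi = c[0] * phi(q - g) + c[1] * phi(q + g)    # post-selected pointer state
norm = np.sum(np.abs(Phi) ** 2) * dq
q_mean = np.sum(q * np.abs(Phi) ** 2) * dq / norm
p_mean = np.real(np.sum(Phi.conj() * (-1j) * np.gradient(Phi, dq)) * dq) / norm

eta = 2.0 * (1.0 / (4.0 * sigma**2))           # 2<p^2>_i for this gaussian
a_mean = q_mean + 1j * p_mean / eta
print(a_mean, g * A_w)                         # whole complex weak value appears
```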
Note also that (\ref{nice}) does not hold for general $\phi$, since then different subsets of indices may have different values of $\vartheta$. \section{Discussion} The procedure for sequential weak measurement involves coupling pointers at several stages during the evolution of the system, measuring the position (or some other observable) of each pointer, and then multiplying the measured values together. In \cite{MJP07} it was argued that we would really like to measure the product of the values of the operators $A_1, \ldots, A_n$, and that this corresponds to the sequential weak value $(A_n, \ldots, A_1)_w$. Multiplication of the values of pointer observables is the best we can do to achieve this goal. However, this brings along extra terms, such as $(A_1)_w({\bar A}_2)_w$ in (\ref{2q}), which are an artefact of this method of extracting information. From this perspective, the cumulant extracts the information we really want. In \cite{MJP07}, a somewhat idealised measuring device was being considered, where the initial pointer wavefunction is real and has zero mean. When the pointer distribution is allowed to be arbitrary, the expressions for $\langle q_1 \ldots q_n \rangle$ become wildly complicated (see for instance (\ref{horrible})). Yet the cumulant of these terms condenses into the succinct equation (\ref{main-result}) with all the complexity hidden away in the one number $\xi$. Why does the cumulant have this property? Recall that the cumulant vanishes when its variables belong to two independent sets. The product of the pointer positions $q_1, \ldots, q_n$ will include terms that come from products of disjoint subsets of these pointer positions, and the cumulant of these terms will be sent to zero, by Lemma \ref{indep-lemma}. 
For instance, with $n=2$, the pointers are deflected in proportion to their individual weak values, according to (\ref{firstxi}), and the cumulant subtracts this component leaving only the component that arises from the $O(g^2)$-influence of the weak measurement of $A_1$ on that of $A_2$. The subtraction of this component corresponds to the subtraction of the term $(A_1)_w({\bar A}_2)_w$ from (\ref{2q}). In general, the cumulant of pointer positions singles out the maximal correlation involving all the $q_i$, and the theorem tells us that this is directly related to the corresponding ``maximal correlation'' of sequential weak values, $(A_n, \ldots, A_1)_w^c$, which involves all the operators. In fact, the theorem tells us something stronger: that it does not matter what pointer observable $r(p,q)$ we measure, e.g. position, momentum, or some Hermitian combination of them, and that likewise the coupling of the pointer with the system can be via a Hamiltonian $H_{int}=gs(p,q)A$ with any Hermitian $s(p,q)$. Different choices of $r$ and $s$ lead only to a different multiplicative constant $\xi$ in front of $(A_n, \ldots, A_1)_w^c$ in (\ref{main-result}). We always extract the same function of sequential weak values, $(A_n, \ldots, A_1)_w^c$, from the system. This argues both for the fundamental character of sequential weak values and also for the key role played by their cumulants. \section{Acknowledgements} I am indebted to J. {\AA}berg for many discussions and for comments on drafts of this paper; I thank him particularly for putting me on the track of cumulants. I also thank A. Botero, P. Davies, R. Jozsa, R. Koenig and S. Popescu for helpful comments. A preliminary version of this work was presented at a workshop on ``Weak Values and Weak Measurement'' at Arizona State University in June 2007, under the aegis of the Center for Fundamental Concepts in Science, directed by P. Davies.
\section{Introduction} Fractal interpolation techniques and methodologies have been considered extensively over the last decades in order to obtain approximation results for immensely complex geometries such as fractal sets or sets modelled as fractal sets. Common to all these techniques is the existence or the approximate assumption of an underlying self-referential structure of the (approximate) fractal set. For many applications, the (approximate) fractal set is the graph of a function defined on an appropriate subset of a, say, Banach space with values in another Banach space. In other words, one tries to solve the Fractal Interpolation Problem: Given a bounded subset $\mathsf{X}$ of a Banach space $\mathsf{E}$ and a Banach space $\mathsf{F}$, construct a \emph{global} function $\psi:\mathsf{X} = \coprod\limits_{i=1}^n \mathsf{X}_i\to\mathsf{F}$ belonging to some prescribed function space $\mathscr{F}:= \mathscr{F}(\mathsf{X},\mathsf{F})$ satisfying $n$ functional equations of the form \[ \psi (h_i (x)) = q_i (x) + s_i (x) \psi (x), \quad\text{on $\mathsf{X}$ and for $i\in \mathbb{N}_n$}. \] The functions $h_i$ are assumed to partition $\mathsf{X}$ (linearly or nonlinearly) into disjoint subsets $\mathsf{X}_i = h_i(\mathsf{X})$, and the functions $q_i$ and $s_i$ are appropriately chosen. We remark that, if such a global solution exists, then it is pieced together in a prescribed manner from copies of itself on the subsets $\mathsf{X}_i$. This latter property then defines or expresses the self-referential nature of $\psi$ and communicates the fact that the graph of $\psi$ is in general an object of immense geometric complexity, i.e., a fractal set. 
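In the classical setting $\mathsf{X}=[x_0,x_N]\subset\mathbb{R}$ with affine $h_i$, affine $q_i$ and constant $s_i$, a solution of these functional equations can be generated as the attractor of an IFS acting on graph points. A small sketch (interpolation data and vertical scalings are our own illustrative choice):

```python
# Affine fractal interpolation: maps W_i(x, y) = (a_i x + b_i, c_i x + s_i y + d_i)
# chosen so that W_i sends the endpoints (x_0,y_0), (x_N,y_N)
# to (x_{i-1},y_{i-1}), (x_i,y_i).
xs = [0.0, 0.5, 1.0]          # illustrative interpolation nodes
ys = [0.0, 1.0, 0.0]
s = [0.3, 0.3]                # vertical scalings, |s_i| < 1

x0, xN, y0, yN = xs[0], xs[-1], ys[0], ys[-1]
maps = []
for i in range(1, len(xs)):
    a = (xs[i] - xs[i - 1]) / (xN - x0)
    b = (xN * xs[i - 1] - x0 * xs[i]) / (xN - x0)
    c = (ys[i] - ys[i - 1] - s[i - 1] * (yN - y0)) / (xN - x0)
    d = (xN * ys[i - 1] - x0 * ys[i] - s[i - 1] * (xN * y0 - x0 * yN)) / (xN - x0)
    maps.append((a, b, c, d, s[i - 1]))

points = list(zip(xs, ys))    # start from the data and iterate the graph IFS
for _ in range(10):
    points = [(a * x + b, c * x + sc * y + d)
              for (a, b, c, d, sc) in maps for (x, y) in points]

# The interpolation nodes stay on the attractor, i.e. psi(x_i) = y_i.
node_err = max(min(abs(x - xi) + abs(y - yi) for (x, y) in points)
               for (xi, yi) in zip(xs, ys))
print(len(points), node_err)
```

The resulting point cloud approximates the graph of the self-referential interpolant; the endpoint conditions guarantee that the functional equations hold at the nodes.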
The afore-mentioned fractal interpolation problem has been investigated in numerous publications and for different function spaces $\mathscr{F}$ where the domains of $f\in \mathscr{F}$ were one-dimensional, $n$-dimensional, or infinite-dimensional. For specific applications though, $\mathsf{X}$ is usually a compact or half-open bounded interval of the numerical Banach space $\mathbb{R}$ with values in $\mathsf{F}$ or even just $\mathbb{R}^m$, $m\in\mathbb{N}$. However, in many situations especially in physics and particularly electrodynamics, complex (approximately) self-referential structures may arise over domains which are curves or more generally (Banach) submanifolds. In this paper, we initiate the investigation in this latter setting and introduce fractal interpolation where the domain $\mathsf{X}$ is the trace of a regular curve $\gamma :I \to \mathsf{E}$ in a Banach space. (For a proper choice of the interval $I$ the trace is then a $C^1$ Banach submanifold of codimension one.) The set-up presented here will then indicate the general approach to fractal interpolation over (Banach) submanifolds and will be left to a follow-up paper. The contents of this paper are organized as follows. In Section 2, iterated function systems are defined and the notation and terminology for the remainder of the paper introduced. In the next section, some remarks about the code space naturally associated with an IFS are presented. Section 4 reviews some concepts from the theory of curves in Banach spaces highlighting those results that are needed in this paper. In Section 5, the novel concept of fractal interpolation over curves in Banach spaces is introduced and an appropriate function space $\mathscr{F}$, namely a certain Lipschitz algebra, chosen. The main result, namely the existence of a unique solution to the fractal interpolation problem over curves, is proven in this section as well. 
In the next section, a relation between the unique solution of the interpolation problem and the underlying IFS and its code space is exhibited. The final section lists some future research directions. \section{Iterated Function Systems} Let $\mathsf{E}:=(\mathsf{E}, \n{\cdot}_\mathsf{E})$ and $\mathsf{F}:=(\mathsf{F}, \n{\cdot}_\mathsf{F})$ be Banach spaces (over $\mathbb{R}$ or $\mathbb{C}$). For a map $f: \mathsf{E} \to \mathsf{F}$, we define the Lipschitz constant associated with $f$ by \[ L (f) = \sup_{x,y\in\mathsf{E},\, x \neq y} \frac{\n{f(x)-f(y)}_\mathsf{F}}{\n{x-y}_\mathsf{E}}. \] A map $f$ is called \emph{Lipschitz} or a \emph{Lipschitz function} if $L (f) < + \infty$ and a \emph{contraction} if $L (f) < 1$. \begin{definition} Let $(\mathsf{E},\n{\cdot}_\mathsf{E})$ be a Banach space and $\mathcal{F}$ a finite set of functions $\mathsf{E}\to\mathsf{E}$. Then, the pair $(\mathsf{E},\mathcal{F})$ is called an iterated function system (IFS) on $\mathsf{E}$. If all maps $f\in\mathcal{F}$ are contractions, then $(\mathsf{E},\mathcal{F})$ is called a \emph{contractive} IFS. \end{definition} For a more general definition of IFS and related topics, we refer the interested reader to \cite{BWL14}. \begin{remark} Throughout this paper, we deal with contractive IFSs and therefore we drop the adjective ``contractive.'' \end{remark} With the finite set of contractions $\mathcal{F}$ on $\mathsf{E}$, one associates a set-valued operator, again denoted by $\mathcal{F}$, acting on the hyperspace ${\mathscr{H}}(\mathsf{E})$ of nonempty compact subsets of $\mathsf{E}$: \[ \mathcal{F} (E) := \bigcup_{f\in \mathcal{F}} f (E),\qquad E\in \mathscr{H}(\mathsf{E}). 
\] The hyperspace ${\mathscr{H}}(\mathsf{E})$ endowed with the Hausdorff-Pompeiu metric $d_{\mathscr{H}}$ defined by \[ d_{\mathscr{H}} (S_1, S_2) := \max\{d(S_1, S_2), d(S_2, S_1)\}, \] where $d(S_1,S_2) := \sup\limits_{x\in S_1} d(x, S_2) := \sup\limits_{x\in S_1}\inf\limits_{y\in S_2} \n{x-y}_\mathsf{E}$, becomes a metric space. It is a known fact that the completeness of $\mathsf{E}$ implies the completeness of $({\mathscr{H}}(\mathsf{E}), d_{\mathscr{H}})$ as a metric space. Moreover, it can be shown that the set-valued operator $\mathcal{F}$ is contractive on the complete metric space $({\mathscr{H}}(\mathsf{E}), d_{\mathscr{H}})$ with Lipschitz constant $L(\mathcal{F}) = \max\{L (f) : f\in \mathcal{F}\} < 1$ if all $f\in \mathcal{F}$ are contractions. In this case and by the Banach Fixed Point Theorem, $\mathcal{F}$ has a unique fixed point in $\mathscr{H}(\mathsf{E})$. This fixed point is called the \emph{attractor} of or the \emph{fractal (set)} generated by the IFS $(\mathsf{E},\mathcal{F})$. The attractor or fractal $F$ satisfies the self-referential equation \begin{equation}\label{fixedpoint} F = \mathcal{F} (F) = \bigcup_{f\in\mathcal{F}} f (F), \end{equation} i.e., $F$ is made up of a finite number of images of itself. Eqn. \eqref{fixedpoint} reflects the fractal nature of $F$ showing that it is as an object of immense geometric complexity. The proof of the Banach Fixed Point Theorem also shows that the fractal $F$ can be obtained iteratively via the following procedure: Choose an arbitrary $F_0\in {\mathscr{H}}(\mathsf{E})$ and define \begin{equation}\label{F} F_n := \mathcal{F} (F_{n-1}),\qquad n\in\mathbb{N}. \end{equation} Then $F =\lim\limits_{n\to\infty} F_n$, where the limit is taken with respect to the Hausdorff-Pompeiu metric $d_{\mathscr{H}}$. 
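The iteration \eqref{F} can be run directly for a concrete IFS. The sketch below does so for the Cantor IFS $f_1(x)=x/3$, $f_2(x)=x/3+2/3$ (an illustrative choice), with finite point sets standing in for elements of $\mathscr{H}(\mathbb{R})$, and exhibits the geometric contraction of successive Hausdorff-Pompeiu distances:

```python
import numpy as np

def hausdorff(S1, S2):
    """Hausdorff-Pompeiu distance between two finite subsets of R."""
    S1, S2 = np.asarray(S1, dtype=float), np.asarray(S2, dtype=float)
    D = np.abs(S1[:, None] - S2[None, :])
    return max(D.min(axis=1).max(), D.min(axis=0).max())

maps = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0]   # Cantor IFS, L = 1/3

F = np.array([0.0])          # arbitrary F_0 in H(R)
dists = []
for n in range(10):
    F_next = np.unique(np.concatenate([f(F) for f in maps]))
    dists.append(hausdorff(F_next, F))
    F = F_next

print(dists[:4])             # successive distances shrink by the factor L = 1/3
```

In line with the Banach Fixed Point Theorem, $d_{\mathscr{H}}(F_{n+1},F_n)$ decays geometrically with ratio $L(\mathcal{F})=1/3$, so the $F_n$ form a Cauchy sequence converging to the Cantor set.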
For more details about IFSs and fractals and their properties, we refer the interested reader to the large literature on these topics and list only two references \cite{B12,M16} which are closely related to the present exposition. \section{Some Remarks about IFSs and Their Code Space} In this short section, we give a brief overview of the important concept of code space associated with an IFS. As a reference, we offer \cite{B12,BBHV16}. Let $(\mathsf{E}, \mathcal{F})$ be an IFS with $\#\mathcal{F} = n$. With an IFS one associates the code space $\mathbb{N}_n^\infty$, i.e., the compact space of all infinite sequences of the form $\boldsymbol{i} := (i_1 i_2 i_3\ldots )$ whose elements are from $\mathbb{N}_n$. The set $\mathbb{N}_n^\infty$ is endowed with a metric $\delta$ defined by \[ \delta (\boldsymbol{i},\boldsymbol{j}) := 2^{-k}, \] where $k$ is the least integer such that $i_k\neq j_k$, and $\delta(\boldsymbol{i},\boldsymbol{i}) := 0$. This makes $(\mathbb{N}_n^\infty,\delta)$ into a compact metric space. Define maps \begin{gather*} s_i:\mathbb{N}_n^\infty\to\mathbb{N}_n^\infty,\\ s_i(\boldsymbol{j}) := i\,\boldsymbol{j}. \end{gather*} Then each $s_i$ is a contraction with Lipschitz constant $L(s_i) = \frac12$ and a homeomorphism onto its image. In other words, the pair $(\mathbb{N}_n^\infty, s)$ with $s := \{s_i : i\in \mathbb{N}_n\}$ is an IFS referred to as the code space IFS. For an IFS $(\mathsf{E}, \mathcal{F})$ with maps $\mathcal{F} = \{f_1, \ldots, f_n\}$ and attractor $F$, define the code map \begin{gather*} \pi : \mathbb{N}_n^\infty\to F,\\ \pi(\boldsymbol{i}) := \lim_{k\to\infty} f_{i_1}\circ f_{i_2}\circ \cdots \circ f_{i_k} (x), \end{gather*} for a fixed $x\in F$ and all $\boldsymbol{i} = i_1 i_2 \ldots \in \mathbb{N}_n^\infty$. It is known that for contractive IFSs this limit exists and is independent of $x\in F$. Furthermore, $\pi$ is continuous and surjective. 
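Truncating the composition gives a practical approximation of the code map. For the Cantor IFS $f_1(x)=x/3$, $f_2(x)=x/3+2/3$ (an illustrative choice; indices written $0$ and $1$ below rather than $1$ and $2$), $\pi$ sends a code to the point of the Cantor set whose ternary digits are $0$ or $2$ according to the code:

```python
maps = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0]   # Cantor IFS

def pi_map(code, x0=0.0):
    """Truncated code map: f_{i_1} o f_{i_2} o ... o f_{i_k} (x0)."""
    x = x0
    for i in reversed(code):      # innermost map f_{i_k} is applied first
        x = maps[i](x)
    return x

print(pi_map([0] * 50))           # code 000...  -> 0.0
print(pi_map([1] * 50))           # code 111...  -> 1.0 (up to 3^-50)
print(pi_map([1] + [0] * 50))     # code 1000... -> 2/3

# pi is well defined: the truncation forgets the seed x0 at rate 3^-k ...
d_seed = abs(pi_map([0, 1] * 25, x0=0.0) - pi_map([0, 1] * 25, x0=1.0))
# ... and continuous: codes close in delta map to nearby points.
d_cont = abs(pi_map([0, 1] * 25) - pi_map([0, 1] * 10 + [0] * 30))
print(d_seed, d_cont)
```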
If \begin{gather*} S:\mathbb{N}_n^\infty\to\mathbb{N}_n^\infty,\\ S (i_1 i_2 i_3 \ldots) := (i_2 i_3 \ldots) \end{gather*} denotes the left shift operator on $\mathbb{N}_n^\infty$, then $\pi$ is a conjugation between the code space IFS $(\mathbb{N}_n^\infty, s)$ and the IFS $(\mathsf{E}, \mathcal{F})$ in the sense that \[ \pi\circ s_i = f_i\circ\pi\quad\text{and}\quad \pi\circ S \in \mathcal{F}^{-1}\circ \pi, \] where $\mathcal{F}^{-1}(E) := \bigcup\limits_{i\in\mathbb{N}_n} f_i^{-1} (E)$, $E\in \mathscr{H}(\mathsf{E})$. In order to introduce the concept of fractal transformation, we require one more item, namely, a section of the code map $\pi$. A mapping $\sigma: F\to \mathbb{N}_n^\infty$ is called a section of $\pi$ if $\pi\circ\sigma = \mathrm{id\,}_F$. Now suppose two IFSs $(\mathsf{E}, \mathcal{F}_1)$ and $(\mathsf{E}, \mathcal{F}_2)$ with $\#\mathcal{F}_1 = \#\mathcal{F}_2$ are given. Denote by $F_1$ and $F_2$, respectively, their attractors. The fractal transformations $\mathfrak{F}_{12} : F_1 \to F_2$ and $\mathfrak{F}_{21} : F_2 \to F_1$ are defined by \[ \mathfrak{F}_{12} := \pi_2\circ\sigma_1\quad\text{and}\quad \mathfrak{F}_{21} := \pi_1\circ\sigma_2. \] Their respective actions are best seen by the commutative diagram below. \[ \begin{tikzcd} F_1 \arrow[dr, shift left=.5ex, "\sigma_1"] \arrow[rr, shift left=0.5ex,"\mathfrak{F}_{12}"] \arrow[rr, leftarrow, shift right=0.5ex, swap, "\mathfrak{F}_{21}"] && F_2 \arrow[dl, shift left=.5ex, "\sigma_2"]\\ & \mathbb{N}_n^\infty \arrow[ul, shift left=.5ex, "\pi_1"] \arrow[ur, shift left=.5ex, "\pi_2"] \end{tikzcd} \] If $\mathfrak{F}_{12}$ is a homeomorphism then it is called a fractal homeomorphism and thus $\mathfrak{F}_{21} = (\mathfrak{F}_{12})^{-1}$. \section{Curves in Banach Spaces} In this section, we briefly introduce the concept of a curve in a normed linear space, in particular, a Banach space. 
\begin{definition} Let $(\mathsf{V}, \n{\cdot}_\mathsf{V})$ be a normed linear space over $\mathbb{R}$. Let $I\subset\mathbb{R}$ be an interval and $\gamma:I\to \mathsf{V}$ a continuous mapping. Then the pair $(I, \gamma)$ is called a curve in $\mathsf{V}$. The family of all curves on a given interval $I$ is denoted by $\Gamma(I, \mathsf{V})$. \end{definition} Note that if $\mathsf{V}$ is finite dimensional, i.e., isomorphic to $\mathbb{R}^m$, for some $m\in \mathbb{N}$, then a curve can be interpreted as a vector-valued function $\gamma: I\to\mathbb{R}^m$, \begin{equation}\label{vecval} \gamma(t) = \begin{pmatrix} \gamma_1 (t)\\ \vdots \\ \gamma_m (t) \end{pmatrix}, \end{equation} with each $\gamma_i:I\to\mathbb{R}$. When the interval $I$ is understood, we simply write $\gamma$ instead of $(I,\gamma)$. We need to distinguish between the curve $\gamma$ as a mapping $I\to\mathsf{E}$ and the trace of $\gamma$, i.e., the image set $\abs{\gamma} :=\gamma(I)\subset\mathsf{E}$. \begin{definition} Let $(\mathsf{E}, \n{\cdot}_\mathsf{E})$ be a real Banach space and $I\subset\mathbb{R}$ an open interval. Suppose that $\gamma:I \to \mathsf{E}$ is a curve and $t_0\in I$. The curve $\gamma$ is called differentiable at $t_0$ if there exists a linear mapping $v:\mathbb{R}\to\mathsf{E}$ such that \[ \lim_{h\to 0} \frac{\n{\gamma(t_0+h) - \gamma(t_0) - v(h)}_\mathsf{E}}{h} = 0. \] \end{definition} We note that if a curve $\gamma:I\to\mathsf{E}$ is differentiable at a point $t_0$ then the derivative is uniquely determined. The linear space \[ L(\mathbb{R}, \mathsf{E}) := \left\{v:\mathbb{R}\to\mathsf{E} : \text{$v$ is linear}\right\} \] is isomorphic to $\mathsf{E}$ and, as every linear mapping $v:\mathbb{R}\to\mathsf{E}$ is continuous, $L(\mathbb{R}, \mathsf{E})$ is naturally a Banach space. The isomorphism between the Banach spaces $\mathsf{E}$ and $L(\mathbb{R}, \mathsf{E})$ allows us to identify the linear mapping $v:\mathbb{R}\to\mathsf{E}$ with the element $v(1)\in \mathsf{E}$. 
The vector $v(1)$ is called the derivative of the curve $\gamma$ at $t_0$ and is denoted by $\gamma'(t_0)$. \begin{definition} Let $(\mathsf{E}, \n{\cdot}_\mathsf{E})$ be a real Banach space and $I\subset\mathbb{R}$ an interval. Further, let $\gamma:I\to\mathsf{E}$ be a curve. $\gamma$ is called continuously differentiable on $I$ if \begin{enumerate} \item $\gamma$ is differentiable at every point $t\in\Int(I) $, the interior of $I$; \item the mapping \[ I\to L(\mathbb{R},\mathsf{E}), \quad t\mapsto \gamma'(t) \] is continuous on $\Int(I)$ and extends continuously to $I$. \end{enumerate} The family of all continuously differentiable curves $\gamma:I\to\mathsf{E}$ is denoted by $\Gamma^1(I,\mathsf{E})$. A continuously differentiable curve $\gamma\in \Gamma^1 (I,\mathsf{E})$ is called \emph{regular} if $\gamma'\neq 0$ on $I$. \end{definition} Now consider the case of a Banach algebra $\mathsf{A}:=(\mathsf{A}, +, \cdot)$, i.e., when $(\mathsf{A}, +)$ is a vector space with a norm $\n{\cdot}$ and endowed with a product $\cdot:\mathsf{A}\times\mathsf{A}\to\mathsf{A}$ satisfying \begin{enumerate} \item $(\mathsf{A}, +, \n{\cdot})$ is a Banach space; \item $(\mathsf{A}, +, \cdot)$ is an associative $\mathbb{R}$-algebra; \item $\forall\,a_1,a_2\in \mathsf{A}: \n{a_1\cdot a_2} \leq \n{a_1}\n{a_2}$. \end{enumerate} A Banach algebra $\mathsf{A}$ is called \emph{unital} if it has a multiplicative identity. \begin{example}\label{ex1} For a unital Banach algebra $\mathsf{A}$, we obtain for any $a\in \mathsf{A}$ a curve $\gamma : I \to \mathsf{A}$ defined by $t\mapsto \exp(t a)$, where $\exp$ denotes the exponential function in the Banach algebra $\mathsf{A}$. As a special case, we consider a finite dimensional Banach algebra $\mathsf{A}$ such as $\mathbb{K}^m$ with $\mathbb{K} \in \{\mathbb{R},\mathbb{C}\}$, for some $m\in \mathbb{N}$, endowed with some norm. 
Then, $\Hom (\mathsf{A},\mathsf{A})$, the set of linear self-maps $\mathsf{A}\to\mathsf{A}$ with multiplication defined as composition of maps, is a Banach algebra when equipped with the operator norm and, for each $L\in \Hom(\mathsf{A},\mathsf{A})$, we have curves $\gamma: I\to \Hom(\mathsf{A},\mathsf{A})$, $t\mapsto \exp(t L)$, yielding curves $I\to \mathsf{A}$, $t\mapsto \exp(t L)(v)$, for each $v\in \mathsf{A}$. \end{example} For our later purposes, we require the following \emph{Generalized Mean Value Theorem for Curves}. \begin{theorem}\label{thm3.5} Assume that the curve $\gamma:[a,b]\to\mathsf{E}$ is differentiable on $(a,b)$ and $\{\gamma'(t) : t\in (a,b)\}$ is bounded. Define \[ \n{\gamma'}_\mathsf{E} := \sup\{\n{\gamma'(t)}_\mathsf{E} : t\in (a,b)\}. \] Then, \begin{equation}\label{mwt} \n{\gamma(b) - \gamma(a)}_\mathsf{E} \leq \n{\gamma'}_\mathsf{E} (b-a). \end{equation} \end{theorem} \section{Fractal Interpolation over Curves}\label{sec5} We briefly recall some basics from fractal interpolation theory and the theory of fractal functions. To this end, let $\mathsf{U}$ and $\mathsf{V}$ be open subsets of Banach spaces $\mathsf{E}$ and $\mathsf{F}$, respectively. A mapping $f:\mathsf{U}\to\mathsf{V}$ is called a \emph{$C^1$-diffeomorphism} if $f$ is a bijection, Fr\'echet differentiable on $\mathsf{U}$, and the inverse of $f$, $f^{-1}$, is Fr\'echet differentiable on $\mathsf{V}$. In a similar way, one defines higher order diffeomorphisms $\mathsf{U}\to\mathsf{V}$. The collection of all $C^\alpha$-diffeomorphisms from $\mathsf{U}$ to $\mathsf{V}$ with $\alpha\in \mathbb{N}_0\cup\{\infty\}$ is denoted by $\Diff^\alpha (\mathsf{U},\mathsf{V})$. In case $\mathsf{U} := \mathsf{E} := \mathsf{F}$, we simply write $\Diff^\alpha (\mathsf{E})$. 
If $\mathsf{X}\subset\mathsf{E}$ then $f:\mathsf{X}\to\mathsf{F}$ is called a $C^\alpha$-diffeomorphism on $\mathsf{X}$, in symbols $f\in \Diff^\alpha(\mathsf{X},\mathsf{F})$, if there exists an open $\mathsf{U}\subset\mathsf{E}$ with $\mathsf{X}\subset\mathsf{U}$ and a $C^\alpha$-diffeomorphism $g:\mathsf{U}\to\mathsf{F}$ such that $f = g\lvert_\mathsf{X}$. \subsection{Fractal Interpolation in General} Let $\mathsf{X}$ be a nonempty bounded subset of a Banach space $\mathsf{E}$. Suppose we are given a finite family \[ h:= \{h_i\in \Diff^\alpha (\mathsf{X}) : i = 1, \ldots, n\} \] of $C^\alpha$-diffeomorphisms generating a partition $\Pi(h)$ of $\mathsf{X}$ in the sense that \begin{equation}\label{c1} \mathsf{X} = \coprod_{i=1}^n h_i(\mathsf{X}), \end{equation} where $\coprod$ denotes the disjoint union of sets. For simplicity, we set $\mathsf{X}_i := h_i(\mathsf{X})$.\\ Recall that a mapping $f:\mathsf{E}\to\mathsf{F}$ is called \emph{affine} if $f - f(0)$ is linear. \begin{definition} A partition $\Pi(h)$ of $\mathsf{X}$ is called \emph{nonlinear} if the maps generating $\Pi(h)$ are not affine; otherwise, it is called \emph{linear}. \end{definition} In the case of a nonlinear partition, we may express \eqref{c1} also as follows: The IFS $(\mathsf{X}, h)$ has as its attractor $\mathsf{X}$ which consists of finitely many disjoint \emph{nonlinear} images of itself. Note that this extends and generalizes known methodologies. For an introduction to fractal interpolation over nonlinear partitions and related topics, we refer the interested reader to \cite{M22}. In the following, we consider the partition functions $h_i$ as being either linear or nonlinear. A more precise specification will be provided when necessary. 
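To make the notion of a nonlinear partition concrete, the following toy computation (our own illustrative choice of maps, not taken from the text) checks that two non-affine increasing $C^1$-diffeomorphisms tile $X = [0,1]$ with a single contact point:

```python
import numpy as np

# A toy nonlinear partition of X = [0,1] (our own illustrative choice):
#   h_1(x) = (x + x^2)/4        maps [0,1] diffeomorphically onto [0, 1/2],
#   h_2(x) = 1/2 + (x + x^2)/4  maps [0,1] diffeomorphically onto [1/2, 1].
# Neither map is affine (h_i'' = 1/2 != 0), and their images meet only
# at the contact point 1/2, so the images of X under h_1 and h_2 tile X.
h1 = lambda x: (x + x**2) / 4
h2 = lambda x: 0.5 + (x + x**2) / 4
dh = lambda x: (1 + 2 * x) / 4          # common derivative, > 0 on [0,1]

x = np.linspace(0.0, 1.0, 1001)
print(bool(np.all(dh(x) > 0)))                 # True: both maps strictly increasing
print((h1(0.0), h1(1.0), h2(0.0), h2(1.0)))    # (0.0, 0.5, 0.5, 1.0)
```

Replacing $h_1, h_2$ by the affine maps $x\mapsto x/2$ and $x\mapsto (x+1)/2$ recovers a linear partition of $[0,1]$.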
\\ One of the goals of fractal interpolation is the construction of a \emph{global} function \begin{equation}\label{psipart} \psi:\mathsf{X} = \coprod\limits_{i=1}^n \mathsf{X}_i\to\mathsf{F} \end{equation} belonging to some prescribed function space $\mathscr{F}$ and satisfying $n$ functional equations of the form \begin{equation}\label{psieq} \psi (h_i (x)) = q_i (x) + s_i (x) \psi (x), \quad\text{$x\in\mathsf{X}$, $i\in \mathbb{N}_n$}, \end{equation} where for each $i\in\mathbb{N}_n$, $q_i\in\mathscr{F}$ and $s_i$ is chosen such that $s_i\cdot\psi\in\mathscr{F}$. In other words, the global solution is pieced together in a prescribed manner from copies of itself on the subsets $\mathsf{X}_i = h_i(\mathsf{X})$ defined by the partition $\Pi(h)$. We refer to \eqref{psipart} and \eqref{psieq} as the \emph{fractal interpolation problem} and to $\psi$ as a \emph{fractal function}. The latter were first constructed in \cite{B86} in the context of interpolating a given set of real-valued data in $\mathbb{R}^2$. One possible way to solve the fractal interpolation problem is to consider \eqref{psieq} as the fixed point equation for an associated affine operator acting on an appropriately defined or prescribed function space. \subsection{Partitioning Regular Curves in Banach Spaces} Let $\gamma\in \Gamma^1 (I,\mathsf{E})$ be a regular curve where $I := [0,1]\subset\mathbb{R}$. The choice $I = [0,1]$ is only a mild restriction as any other compact interval $[a,b]$ with $a < b$ can be obtained from $I$ via the orientation-preserving diffeomorphism $t\mapsto (b-a) t + a$. Note that as $\gamma\in \Gamma^1 (I,\mathsf{E})$, $|\gamma| = \gamma(I)$ is compact in $\mathsf{E}$ and thus (totally) bounded. In particular, $\n{\gamma'}$ is bounded and Theorem \ref{thm3.5} is applicable. \begin{remark} Strictly speaking, a curve is an equivalence class of parametrized curves. The trace of a curve, however, is uniquely defined, as equivalently parametrized curves have the same trace. 
Moreover, the terms initial and terminal point of a curve are independent of its parametrization and so is its length. In what follows, we will always use the arc length parametrization of a curve \begin{gather*} \sigma : I \to [0, \ell(\gamma)],\\ \sigma(t) := \int_0^t \n{\gamma'(\tau)}\, d\tau, \end{gather*} where $\ell(\gamma)$ denotes the (finite) length of $\gamma$. \end{remark} Suppose we generate a partition of $|\gamma|\in\mathscr{H}(\mathsf{E})$ by means of finitely many (prescribed) set-valued partition functions $H_i: \mathscr{H}(\mathsf{E})\to \mathscr{H}(\mathsf{E})$ which are $C^\alpha$-diffeomorphisms such that \begin{gather} H_i(|\gamma|) =: |\gamma_i|, \quad i\in \mathbb{N}_n,\\ H_1(\{\gamma(0)\}) = \{\gamma(0)\} =: p_0, \quad H_n(\{\gamma(1)\}) = \{\gamma(1)\} =: p_n,\\ H_i (\{\gamma(0)\}) =: p_{i-1} \in |\gamma_i|,\quad H_i (\{\gamma(1)\}) =: p_{i} = H_{i+1} (\{\gamma(0)\})\in |\gamma_i|\cap|\gamma_{i+1}|, \quad i\in \mathbb{N}_{n-1}.\label{eq4.6} \end{gather} Note that \eqref{eq4.6} means that the intersection between adjacent traces is one-elemental: \[ |\gamma_i| \cap |\gamma_{i+1}| = \{p_i\},\quad i\in \mathbb{N}_{n-1}. \] As $\gamma\in \Gamma^1 (I,\mathsf{E})$ and regular, the mapping $t\mapsto \sigma(t) = \int\limits_0^t \n{\gamma'(\tau)}\, d\tau$ is a $C^1$-diffeomor\-phism. Hence, to every $p_i$, $i\in \mathbb{N}_{n-1}$, corresponds a unique $\sigma_i\in [0,\ell(\gamma)]$ and therefore a unique $t_i\in I$. Setting $t_0 := 0$ and $t_n := 1$, the set $\{t_j : j \in \mathbb{N}_{n}\cup\{0\}\}$ defines a partition of $I$ into subintervals $[t_j, t_{j+1}]$ with $|\gamma_{j+1}| = \gamma([t_{j}, t_{j+1}])$. Two adjacent subintervals of the above form have only the point $t_{j+1}$ in common thus allowing $I$ to be written as the disjoint union \begin{equation}\label{eq4.7} I = \coprod_{i=1}^{n-1} [t_{i-1},t_i) \sqcup [t_{n-1},t_n] =: \coprod_{i=1}^n I_i. 
\end{equation} Conversely, we could partition the interval $I$ via finitely many (linear or nonlinear) partition functions $h_i :I\to I_i$ into subintervals $I_i :=[t_{i-1},t_i]$ generating a partition of $I$ of the form \eqref{eq4.7}. As $|\gamma| = \gamma(I)$ and $|\gamma_i| = \gamma(I_i)$ we obtain \begin{align*} (\gamma\circ h_i)(I) = \gamma(h_i (I)) = \gamma(I_i) = |\gamma_i| = H_i (|\gamma|) = H_i (\gamma(I)) = (H_i\circ\gamma)(I), \end{align*} and, thus, \begin{equation}\label{eq4.8} \gamma\circ h_i = H_i\circ\gamma, \quad i\in\mathbb{N}_n. \end{equation} Hence, for a regular $\gamma\in \Gamma^1 (I,\mathsf{E})$, \begin{equation}\label{gammapart} |\gamma| = \bigcup_{i=1}^n |\gamma\circ h_i| =: \bigcup_{i=1}^n |\gamma_i|, \end{equation} with $|\gamma_i| \cap |\gamma_{i+1}|$ being one-elemental. We can obtain a disjoint partition of $|\gamma|$ by using \[ |\gamma| = \coprod_{i=1}^{n-1} |\gamma_i|\setminus\{p_i\} \sqcup |\gamma_n| \] Clearly, in each of the two cases, $|\gamma_i|$ is compact in $\mathsf{E}$ and thus bounded. \subsection{The Fractal Interpolation Problem for Curves} In this subsection, we pose and solve the fractal interpolation problem for regular curves in Banach spaces. 
More precisely, given the interval $I = [0,1]$, a regular $\gamma\in \Gamma^1 (I,\mathsf{E})$, and a partition $\Pi(h)$ of $|\gamma|$ of the form \eqref{gammapart}, we would like to construct a unique function \begin{equation}\label{psigamma} \psi: |\gamma| = \coprod\limits_{i=1}^n |\gamma_i|\to\mathsf{F} \end{equation} belonging to some prescribed function space $\mathscr{F}$ and satisfying $n$ functional equations of the form \begin{equation}\label{psieqgamma} \psi (\gamma_i) = q_i\circ\gamma + (s_i \circ\gamma)\cdot (\psi\circ\gamma), \quad i\in \mathbb{N}_n, \end{equation} where for each $i\in\mathbb{N}_n$, $q_i\in\mathscr{F}$ and $s_i$ is chosen such that $s_i\cdot\psi\in\mathscr{F}$, i.e., the global solution of \eqref{psieqgamma} is pieced together in a prescribed manner from copies of itself on the subcurves $|\gamma_i| $ defined by the partition $\Pi(h)$. The unique solution of \eqref{psieqgamma} will be termed a \emph{fractal function on $\gamma$}. Let $(\mathsf{X}, d)$ be a metric space and let $\Lip (\mathsf{X}, \mathsf{F})$ consist of all bounded Lipschitz functions $\mathsf{X}\to\mathsf{F}$ endowed with the norm \begin{equation} \n{f} := \n{f}_\infty + L(f). \end{equation} Here, $\n{\cdot}_\infty$ denotes the supremum norm on continuous functions $\mathsf{X}\to\mathsf{F}$ and $L(f)$ the smallest Lipschitz constant of $f$. If $\mathsf{F}$ is a (unital, commutative) Banach algebra then, under this norm and the point-wise product, $\Lip(\mathsf{X},\mathsf{F})$ becomes a (unital, commutative) Banach algebra; it is an example of a \emph{Lipschitz algebra}. (Cf., for instance, \cite{J70,Sh63,W18}.) In what follows, we regard $|\gamma|$ as a (compact) subset of the complete metric space $(\mathsf{E}, d)$ where $d$ is the metric induced by the norm $\n{\cdot}_\mathsf{E}$. The natural function space $\mathscr{F}$ to consider here is then the Lipschitz algebra $\Lip (|\gamma|,\mathsf{A})$ with $\mathsf{A}$ a Banach algebra. On the Lipschitz algebra $\Lip (|\gamma|,\mathsf{A})$, we define an affine operator $T$, called a Read-Bajraktarevi\'c (RB) operator, as follows. 
Given any $f\in\Lip (|\gamma|,\mathsf{A})$, set \begin{equation}\label{eq4.12} Tf (\gamma_i) := q_i (\gamma) + s_i(\gamma)\cdot f(\gamma), \quad i\in \mathbb{N}_n, \end{equation} where $q_i, s_i\in \Lip (|\gamma|,\mathsf{A})$. Equivalently, we may express \eqref{eq4.12} in the form \begin{equation} T (f\circ\gamma) = \sum_{i=1}^n (q_i\circ\gamma)\circ h_i^{-1} \,1\hspace{-0.23em}\mathrm{l}_{I_i} + \sum_{i=1}^n (s_i\circ\gamma)\circ h_i^{-1}\cdot (f\circ\gamma)\circ h_i^{-1} \,1\hspace{-0.23em}\mathrm{l}_{I_i}. \end{equation} Moreover, we require that the following compatibility (join-up) conditions hold: \begin{gather} T f(p_0) = f(p_0), \quad T f(p_n) = f(p_n),\label{eq4.14}\\ T f(p_i-) = T f(p_i+), \;\; i \in \mathbb{N}_{n-1}.\label{eq4.15} \end{gather} Eqs. \eqref{eq4.14} and \eqref{eq4.15} can also be expressed in the form \begin{gather} f(p_0) = \frac{q_1(p_0)}{1-s_1(p_0)}, \quad f(p_n) = \frac{q_n(p_n)}{1-s_n(p_n)},\\ q_i(p_n) + \frac{s_i(p_n)\cdot q_n(p_n)}{1-s_n(p_n)} = q_{i+1}(p_0) + \frac{s_{i+1}(p_0)\cdot q_1(p_0)}{1-s_1(p_0)}, \;\; i \in \mathbb{N}_{n-1}. \end{gather} Clearly, $Tf\in \Lip (|\gamma|,\mathsf{A})$, i.e., $Tf$ is bounded and continuous, and $\n{Tf} < \infty$. \begin{theorem}\label{thm4.3} The RB operator defined in \eqref{eq4.12} and satisfying the compatibility conditions \eqref{eq4.14} and \eqref{eq4.15} is contractive on the Lipschitz algebra $\Lip(|\gamma|, \mathsf{A})$ provided that \begin{equation} L(T) := \max\left\{\max_{i\in \mathbb{N}_n} \n{s_i}_\infty, \max_{i\in\mathbb{N}_n} L(s_i) \cdot \n{\gamma'}_{\mathsf{E}}\right\} < 1. \end{equation} \end{theorem} \begin{proof} It suffices to consider the linear part $T-T(0)$ of $T$. For this purpose, we set $h := f - g$, for $f,g\in \Lip (|\gamma|,\mathsf{A})$. Now, let $t\in I$. 
Then, there exists an $i\in \mathbb{N}_n$ such that $t\in I_i$ and thus \begin{align*} \n{T h(\gamma(t))}_\mathsf{A} \leq \n{s_i(\gamma\circ h_i^{-1}(t))}_\mathsf{A}\cdot \n{h(\gamma\circ h_i^{-1}(t))}_\mathsf{A}. \end{align*} Taking the supremum over $t$ yields \begin{equation}\label{estimate1} \n{T h}_\infty \leq \max_{i\in \mathbb{N}_n}\n{s_i}_\infty\cdot \n{h}_\infty. \end{equation} Furthermore, for $s,t\in I$, \begin{align*} \n{T h(\gamma(t)) - T h(\gamma(s))}_\mathsf{A} & \leq \n{s_i(\gamma(t)) - s_i(\gamma(s))}_\mathsf{A} \, \n{h(\gamma(t)) - h(\gamma(s))}_\mathsf{A}\\ & \leq L(s_i) \n{\gamma(t) - \gamma(s)}_\mathsf{E} \, L(h) \n{\gamma(t) - \gamma(s)}_\mathsf{E}\\ & \leq \max_{i\in \mathbb{N}_n} L(s_i) \,\n{\gamma'}_\mathsf{E}\, L(h)\,\n{\gamma(t) - \gamma(s)}_\mathsf{E}, \end{align*} where we used Theorem \ref{thm3.5} to obtain the last inequality. Hence, \begin{equation}\label{estimate2} L(T h) \leq \max_{i\in \mathbb{N}_n} L(s_i) \,\n{\gamma'}_\mathsf{E}\,L(h). \end{equation} The inequalities \eqref{estimate1} and \eqref{estimate2} now yield the claim. \end{proof} Theorem \ref{thm4.3} yields the following immediate corollary. \begin{corollary} The RB operator defined in \eqref{eq4.12} and satisfying the compatibility conditions \eqref{eq4.14} and \eqref{eq4.15} has a unique fixed point $\psi\in \Lip(|\gamma|,\mathsf{A})$. This fixed point obeys the self-referential equation \[ \psi (\gamma_i) = q_i (\gamma) + s_i(\gamma)\cdot \psi(\gamma), \quad i\in \mathbb{N}_n, \] and thus is the unique solution to the fractal interpolation problem \eqref{psigamma} and \eqref{psieqgamma} in the present setting. Moreover, $\psi : |\gamma|\to\mathsf{A}$ can be obtained by the iterative procedure \begin{gather*} \psi_0 \in \Lip(|\gamma|,\mathsf{A})\text{ chosen arbitrarily};\\ \psi_k := T\psi_{k-1}, \;k\in \mathbb{N}, \end{gather*} with \[ \psi = \lim_{k\to\infty} \psi_k, \] where the limit is taken in the norm topology of $\Lip(|\gamma|, \mathsf{A})$. 
Furthermore, one has the error estimate \[ \n{\psi - \psi_k} \leq \frac{L(T)^k}{1-L(T)}\,\n{T\psi_0 - \psi_0},\quad k\in \mathbb{N}. \] \end{corollary} \begin{proof} The statements are straightforward consequences of the Banach Fixed Point Theorem and its constructive proof. \end{proof} \begin{definition} The unique fixed point $\psi$ of the RB operator \eqref{eq4.12} is called an $\mathsf{A}$-valued fractal function over the curve $\gamma$. \end{definition} The next examples show the flexibility and versatility of the new concept introduced above. \begin{example} In the case $\mathsf{E}:= \mathbb{R}^m$, we can regard a curve $\gamma:I\to\mathbb{R}^m$ as the vector-valued function \eqref{vecval}. Considering $\mathbb{R}^m$ also as a metric space $(\mathbb{R}^m,d)$ with the metric $d$ given by $d(x,y) = \n{x-y} = \sqrt{\sum\limits_{j=1}^m (x_j-y_j)^2}$ and taking for $\mathsf{A}$ the commutative unital Banach algebra $\mathbb{R}^k$ with norm \[ \n{x}_\mathsf{A} := \max_{l\in \mathbb{N}_k} |x_l|,\quad x = (x_1,\ldots, x_k)^\top, \] where ${}^\top$ denotes the transpose, and multiplication defined by \[ x\cdot y := (x_1 y_1, \ldots, x_k y_k)^\top, \] the elements of the Lipschitz algebra $\Lip(|\gamma|, \mathbb{R}^k)$ are then vector-valued functions of $m$ variables of the form \begin{gather*} f: |\gamma|\subset\mathbb{R}^m\to \mathbb{R}^k,\\ f(\gamma(t)) = \begin{pmatrix} f_1(\gamma_1(t),\ldots, \gamma_m(t))\\ \vdots \\f_k(\gamma_1(t),\ldots, \gamma_m(t)) \end{pmatrix}, \end{gather*} where $f_l:\mathbb{R}^m\to\mathbb{R}$, $l\in \mathbb{N}_k$. \end{example} \begin{example} Here we take again $\mathsf{E} := \mathbb{R}^m$ but for $\mathsf{A}$ we choose $\mathbb{R}^{k\times k}$, the collection of all $k\times k$-matrices with entries in $\mathbb{R}$. 
As a norm on $\mathbb{R}^{k\times k}$ we select a submultiplicative matrix norm such as, for instance, a Schatten $p$-norm, \[ \n{A}_p := \left(\sum_{l=1}^k \sigma_l^p (A)\right)^{1/p}, \quad A\in \mathbb{R}^{k\times k}, \;\; p\in [1,\infty], \] where the $\sigma_l$ denote the singular values of $A$. Multiplication is the usual multiplication of matrices. Then $\mathbb{R}^{k\times k}$ becomes a unital noncommutative Banach algebra. In this setting, the elements of the Lipschitz algebra $\Lip(|\gamma|, \mathbb{R}^{k\times k})$ are matrix-valued functions of the form \begin{gather*} f: |\gamma|\subset\mathbb{R}^m\to \mathbb{R}^{k\times k},\\ f(\gamma(t)) = \begin{pmatrix} f_{11}(\gamma(t)) & \cdots & f_{1k}(\gamma(t))\\ \vdots & \ddots & \vdots\\ f_{k1}(\gamma(t)) & \cdots & f_{kk}(\gamma(t)) \end{pmatrix}, \end{gather*} with $f_{ij}:\mathbb{R}^m\to\mathbb{R}$, $i,j\in \mathbb{N}_k$. \end{example} \section{Relation to Iterated Function Systems} In this section, the relation between the graph $G(\psi)$ of the fixed point $\psi$ of the operator $T$ and the attractor of the associated contractive IFS is described. To this end, note that there are two IFSs whose attractors are the interval $I$ and the trace $|\gamma|$ of a curve $\gamma:I\to\mathsf{E}$, respectively. The former is given by $(I, h)$ with $h:=\{h_i : i\in \mathbb{N}_n\}$ and the latter by $(|\gamma|, H)$ where $H:=\{H_i : i\in \mathbb{N}_n\}$. Both IFSs have the same code space, namely, $\mathbb{N}_n^\infty$, and as \eqref{eq4.8} holds, we see that the fractal transformation $\mathfrak{F} : I \to |\gamma|$ is indeed a homeomorphism. In other words, for the purposes of this section, it does not matter which IFS we consider for the first variable in $G(\psi)$. In order to keep the presentation simple, we choose the IFS $(|\gamma|, H)$ and leave the other case to the interested reader. 
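As a small numerical aside, the conjugacy $\gamma\circ h_i = H_i\circ\gamma$ of \eqref{eq4.8} can be checked for a concrete curve. The half circle and the angle-halving maps below are our own illustrative choices and are not objects defined in the text:

```python
import numpy as np

# Hypothetical sketch: the half circle gamma(t) = (cos(pi t), sin(pi t))
# in E = R^2, with linear partition functions h_1(t) = t/2, h_2(t) = (t+1)/2.
# On the trace, the corresponding partition maps H_i act by angle halving;
# their closed forms follow from the half-angle identities (theta in [0, pi],
# so the second coordinate is >= 0).  We check gamma o h_i = H_i o gamma.
gamma = lambda t: np.array([np.cos(np.pi * t), np.sin(np.pi * t)])

h1 = lambda t: t / 2
h2 = lambda t: (t + 1) / 2
H1 = lambda p: np.array([np.sqrt((1 + p[0]) / 2),  np.sqrt((1 - p[0]) / 2)])
H2 = lambda p: np.array([-np.sqrt((1 - p[0]) / 2), np.sqrt((1 + p[0]) / 2)])

err = max(np.linalg.norm(gamma(h(t)) - H(gamma(t)))
          for h, H in [(h1, H1), (h2, H2)]
          for t in np.linspace(0.0, 1.0, 101))
print(err < 1e-12)   # True: the diagram gamma o h_i = H_i o gamma commutes
```

The two image arcs meet only in the single point $\gamma(1/2) = (0,1)$, matching the one-elemental intersection required of adjacent traces.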
Next, consider the product Banach space $\mathsf{E}\times\mathsf{A}$ endowed with the product norm $\n{\cdot}_\mathsf{E} + \n{\cdot}_\mathsf{A}$. For $|\gamma|\in\mathscr{H}(\mathsf{E})$ and $\mathsf{Y}\in\mathscr{H}(\mathsf{A})$, let mappings $w_i:|\gamma|\times\mathsf{Y}\to|\gamma|\times\mathsf{A}$ be defined by \begin{equation}\label{wn} w_i (x, y) := (H_i (x), q_i (x) + s_i(x)\cdot y), \quad i\in\mathbb{N}_n, \end{equation} and required to map into $|\gamma|\times\mathsf{Y}$. (By Proposition 27 in \cite{M16}, this is always possible.) Here, $q_i$ and $s_i$ are defined as in Section \ref{sec5}. Note that the $H_i$ are defined on the compact subset $|\gamma|$ of $\mathsf{E}$ and that, for $i\neq j$, the intersection $H_i(|\gamma|)\cap H_j(|\gamma|)$ is at most one-elemental, being nonempty precisely for adjacent indices. However, we insist that $H$ is \emph{non-overlapping} with respect to the IFS $(|\gamma|, H)$ in the sense of \cite[Definition 2.4]{BBHV16} and that at the contact points $H_i(|\gamma|)\cap H_{i+1}(|\gamma|)$ the compatibility conditions \eqref{eq4.14} and \eqref{eq4.15} hold. This will guarantee the existence of a solution $\psi$ (cf. also \cite{SB17}). To facilitate notation, introduce the mappings \begin{gather*} v_i : |\gamma|\times\mathsf{Y}\to\mathsf{A},\\ v_i(x,y) := q_i(x) + s_i(x)\cdot y,\quad i\in \mathbb{N}_n. \end{gather*} We remark that for a fixed $x\in |\gamma|$, arbitrary $y_1,y_2\in \mathsf{Y}$, and $i\in \mathbb{N}_n$, \begin{align*} \n{v_i(x,y_1) - v_i(x,y_2)}_\mathsf{A} &= \n{s_i(x) y_1 - s_i(x) y_2}_\mathsf{A}\\ & \leq \n{s_i}_\infty \n{y_1-y_2}_\mathsf{A} \leq L(T) \n{y_1-y_2}_\mathsf{A}, \end{align*} i.e., $v_i$ is uniformly contractive in the second variable. 
Moreover, for a fixed $y\in \mathsf{Y}$, arbitrary $x_1, x_2\in |\gamma|$, and $i\in \mathbb{N}_n$, $v_i$ is also uniformly Lipschitz continuous in the first variable: \begin{align*} \n{v_i(x_1, y) - v_i(x_2, y)}_\mathsf{A} &\leq \n{q_i(x_1)-q_i(x_2)}_\mathsf{A} + \n{y}_\mathsf{A} \n{s_i(x_1)-s_i(x_2)}_\mathsf{A}\\ & \leq \left(L(q_i) + \n{y}_\mathsf{A} L(s_i) \right) \n{x_1-x_2}_\mathsf{E}. \end{align*} As $y\in \mathsf{Y}\in \mathscr{H}(\mathsf{A})$, there exists a nonnegative constant $c_\mathsf{Y}$ such that $\n{y}_\mathsf{A} \leq c_\mathsf{Y}$. Hence, \[ \n{v_i(x_1, y) - v_i(x_2, y)}_\mathsf{A} \leq\left(L(q_i) + c_\mathsf{Y} L(s_i) \right) \n{x_1-x_2}_\mathsf{E} \leq \lambda\, \n{x_1-x_2}_\mathsf{E},\quad \lambda := \max_{i\in\mathbb{N}_n}\left(L(q_i) + c_\mathsf{Y} L(s_i)\right), \] proving the claim. Let $\overline{L} (H) := \max\{L(H_i) : i\in\mathbb{N}_n\}$ and let $\vartheta := \frac{1-\overline{L} (H)}{2\lambda}$. Then the mapping $\n{\cdot}_\vartheta : \mathsf{E}\times\mathsf{A}\to \mathbb{R}_0^+$ given by \[ \n{\cdot}_\vartheta := \n{\cdot}_\mathsf{E} + \vartheta\,\n{\cdot}_\mathsf{A} \] is a norm on $\mathsf{E}\times\mathsf{A}$ compatible with the product topology on $\mathsf{E}\times \mathsf{A}$. The next theorem can also be found in \cite{BHM14} in a more general setting. Here, we adapt it for our purposes. \begin{theorem}\label{thm3.1} Set $\mathcal{W} := \{w_1, \ldots, w_n\}$. Then, $\mathcal{F} := (|\gamma|\times\mathsf{Y}, \mathcal{W})$ is a contractive IFS with respect to the norm $\n{\cdot}_\vartheta$ and the graph $G(\psi)$ of the solution to the fractal interpolation problem \begin{equation}\label{eq3.8} \psi(\gamma_i) = q_i(\gamma)+s_i(\gamma)\cdot\psi(\gamma),\quad i\in\mathbb{N}_n, \end{equation} is the unique attractor of the IFS $\mathcal{F}$. 
Furthermore, if $T$ is the RB operator \begin{gather}\label{eq3.9a} T: \Lip(|\gamma|,\mathsf{A})\to\Lip(|\gamma|,\mathsf{A}),\nonumber\\ f\mapsto q_i\circ\gamma\circ h_i^{-1} + (s_i\circ\gamma\circ h_i^{-1})\cdot (f\circ\gamma\circ h_i^{-1}),\;\;\text{on $\mathsf{X}_i$},\; i\in\mathbb{N}_n, \end{gather} associated with the fractal interpolation problem \eqref{eq3.8} then \begin{equation}\label{GW} G(T f) = \mathcal{F} (G(f)),\quad\forall\,f\in \Lip(|\gamma|,\mathsf{A}), \end{equation} where $\mathcal{F}$ denotes the set-valued operator \eqref{fixedpoint}. \end{theorem} \begin{proof} See \cite[Theorem 3.7]{BHM14} adapted to the current setting. \end{proof} The above statements are tantamount to the commutativity of the diagram below. \begin{equation}\label{diagram} \begin{CD} |\gamma|\times \mathsf{A} @>\mathcal{F}>> |\gamma|\times \mathsf{A}\\ @AAGA @AAGA\\ \Lip(|\gamma|,\mathsf{A}) @>T>> \Lip(|\gamma|,\mathsf{A}) \end{CD} \end{equation} \vskip 10pt\noindent where $G$ is the mapping $\Lip(|\gamma|,\mathsf{A})\ni g\mapsto G(g) = \{(x, g(x)) : x\in |\gamma|\}\subset \mathsf{E}\times \mathsf{A}$. On the other hand, suppose that $\mathcal{F} = (|\gamma|\times\mathsf{A}, w_1, w_2, \ldots, w_n)$ is an IFS whose mappings $w_i$ are of the form \eqref{wn}, where the functions $H_i$ are linear or nonlinear partition functions $H_i:|\gamma |\to|\gamma_i|$ with non-overlapping attractor $|\gamma|$ and contact points $H_i(|\gamma|)\cap H_{i+1}(|\gamma|)$, and assume further that the mappings $v_i$ are uniformly Lipschitz continuous in the first variable and uniformly contractive in the second variable. Then we can associate with the IFS $\mathcal{F}$ an RB operator $T$ of the form \eqref{eq3.9a} and thus a fractal interpolation problem of the form \eqref{eq3.8} with appropriate compatibility conditions at the contact points. The attractor $A$ of $\mathcal{F}$ is then the graph $G(\psi)$ of the solution $\psi$ of \eqref{eq3.8}, respectively, the fixed point of $T$. 
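As a purely numerical illustration of the iterative procedure in the corollary to Theorem \ref{thm4.3}, consider the simplest classical situation $\mathsf{E} = \mathsf{A} = \mathbb{R}$ with $|\gamma| = [0,1]$, affine $q_i$, and a constant scaling factor $s$; all concrete choices below are ours and are not prescribed by the text:

```python
import numpy as np

# Toy instance (our own choices): the classical affine fractal interpolation
# function through (0,0), (1/2,1), (1,0) with constant vertical scaling s,
# |s| < 1.  With h_1(x) = x/2, h_2(x) = (x+1)/2 on I = [0,1], the RB
# operator acts on grid samples f[k] ~ psi(k/N) via (Tf)(h_i(x)) = q_i(x) + s f(x).
s = 0.3
N = 2**10                       # number of dyadic grid intervals (even)
t = np.linspace(0.0, 1.0, N + 1)
q1 = lambda u: u                # q_1(u) = u,     so w_1: (0,0)->(0,0), (1,0)->(1/2,1)
q2 = lambda u: 1.0 - u          # q_2(u) = 1 - u, so w_2: (0,0)->(1/2,1), (1,0)->(1,0)

def T(f):
    """One application of the RB operator to grid values of f."""
    g = np.empty_like(f)
    g[: N // 2 + 1] = q1(t[::2]) + s * f[::2]   # branch i = 1, x in [0, 1/2]
    g[N // 2 :]     = q2(t[::2]) + s * f[::2]   # branch i = 2, x in [1/2, 1]
    return g

psi = np.zeros(N + 1)           # arbitrary psi_0
for _ in range(60):             # error decays like L(T)^k with L(T) = |s| here
    psi = T(psi)

print(psi[0], psi[N // 2], psi[N])              # 0.0 1.0 0.0  (interpolation)
print(np.max(np.abs(T(psi) - psi)) < 1e-12)     # True: psi is a numerical fixed point
```

Since the $s_i$ are constant, $L(s_i) = 0$ and the contractivity condition of Theorem \ref{thm4.3} reduces to $|s| < 1$. In line with Theorem \ref{thm3.1}, a chaos-game run on the maps $w_i(x,y) = (h_i(x),\, q_i(x) + s\,y)$ would be expected to trace out the same graph $G(\psi)$.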
\section{Future Research Directions} The above approach may be generalized along several directions. \begin{itemize} \item Instead of considering curves in Banach spaces, one may look in general at Banach or even Fr\'echet submanifolds. \item The Lipschitz algebras $\Lip (X, \mathsf{F})$ could be replaced by the space of bounded $\mathsf{F}$-valued Lipschitz functions on $X$ which vanish at a given base point $x_0\in X$. The corresponding Lipschitz algebras are then denoted by $\Lip_0(X,\mathsf{F})$. \end{itemize} We leave it to the interested reader to pursue these research directions. \bibliographystyle{amsalpha}
\subsection{INTRODUCTION} \stepcounter{section} \vglue 0.1cm \baselineskip=18pt \elevenrm \hspace*{\parindent} CP violation is described by phases appearing in the Kobayashi-Maskawa matrix\cite{KM} in the standard theory of quarks and leptons. The CP violating phases are introduced only when the number of quark-lepton generations is equal to or greater than three. In other words, the reason why we have CP violation in nature is that we have three generations of quarks and leptons. The CP violating phases are partially determined by experimental data in the neutral K-meson system. The prediction for the neutron electric dipole moment\cite{NEDM} based on the Kobayashi-Maskawa CP violating phases (KM phases) is extremely small and is well below the experimental upper bound\cite{NEDE}. Thus the standard theory with Kobayashi-Maskawa CP violation is consistent with the present experimental situation.\\ \hspace*{\parindent}The KM phases are introduced as free parameters in the standard theory. From the point of view of the fundamental theory of quarks and leptons this situation is not satisfactory, and we would like to know the theoretical origin of the KM phases describing the CP violation.\\ \hspace*{\parindent}One of the possibilities to explain the KM phases by a more fundamental origin is to introduce complex vacuum expectation values for the Higgs field as discussed by Weinberg more than a decade ago\cite{SCP}. In this approach it is required to have at least three Higgs doublets in order to interpret the full KM phases. This mechanism suggests that the spontaneous electroweak symmetry breaking has something to do with the origin of the CP violation.\\ \hspace*{\parindent}Pushing forward this idea we are naturally led to the composite Higgs models where the Higgs field is replaced by a composite system of fundamental fermions. 
There are a variety of composite Higgs models including the technicolor model\cite{TECH}, top-condensation model\cite{TOPC,TCOL}, fourth-generation model\cite{FG} and color-sextet quark model\cite{SQ}. In the composite Higgs models the CP violation may occur if a complex vacuum expectation value results for the composite field $\bar{\psi}\psi$ with fundamental fermion $\psi$. The realization of such a circumstance was suggested long ago by Dashen in another context\cite{ODCP}.\\ \hspace*{\parindent}The idea of Dashen will be recapitulated in the next section and will be applied straightforwardly to the composite Higgs models. Eichten, Lane and Preskill\cite{DCP1} have adopted Dashen's idea in the technicolor model to elucidate the mechanism of the dynamical CP violation. In their paper the general framework of generating the dynamical CP violation was presented and some physical consequences were pointed out. Later Goldstein\cite{DCP2} reconsidered the problem and constructed a model of the dynamical CP violation with two quark and techniquark doublets. This model, however, fails to give rise to the CP-violating phase unless one introduces extra leptons or assumes the existence of strong CP violation in the technicolor sector.\\ \hspace*{\parindent}In the present paper we would like to construct some simple examples of the dynamical CP violation in the composite Higgs models. In our models we assume the presence of two flavors of up(down)-type extra fundamental quarks and three flavors of up(down)-type ordinary quarks. We start with the Lagrangian with flavor symmetry (i.e. all fermions massless) in which a nonvanishing vacuum expectation value develops for the composite field $\bar{\psi}\psi$ with $\psi$ the fundamental fermion. To this Lagrangian we add flavor-symmetry breaking terms to realize the quark mass hierarchy. We consider transformations which mix the flavors of quarks. 
We find a special solution for the transformations which gives the true vacuum with the proper direction. According to this special solution the CP violating terms are generated in the flavor-symmetry breaking part of the Lagrangian.\\ \hspace*{\parindent}The main purpose of our argument is to show the usefulness of the Dashen mechanism for the dynamical CP violation in a transparent way. Our model is too simple to explain the KM phases practically and should be elaborated to reproduce the standard theory as a low-energy effective theory. If our model has something to do with nature, it has to be consistent with the existing experimental observations. Thus we calculate the contribution of our model to the electric dipole moment of the neutron and the $\varepsilon$-parameter in K decays. Both quantities are found to be consistent with the experimental data if the cut-off $\Lambda$ existing in the model is larger than $800$ TeV, which is consistent with the cut-off set by the FCNC restriction\cite{FCNC}.\\ \hspace*{\parindent}It should be remarked that any model of spontaneous CP violation suffers from the cosmological domain wall problem. In the present paper we are interested in constructing simple examples of the dynamical CP violation and we tentatively circumvent the problem by assuming that the dynamical CP violation takes place before the inflation period. 
\vglue 0.6cm \newpage \stepcounter{subsection} \subsection{DASHEN MECHANISM IN COMPOSITE HIGGS MODELS} \stepcounter{section} \vglue 0.1cm \hspace*{\parindent}In the present section we briefly review the Dashen mechanism of spontaneous CP violation and its application to the composite Higgs models.\\ \hspace*{\parindent}We start with the Lagrangian ${\cal L}_0$ symmetric under the flavor group \begin{equation} G_{F}=\prod_{\rho}U_{V}(n_{\rho})\otimes U_{A}(n_{\rho})\ , \end{equation} \noindent where $n_{\rho}$ is the number of quark flavors belonging to the irreducible representation $\rho$ in the underlying gauge group and $U_V$ ($U_A$) is the unitary group associated with the vector (axialvector) currents. Here by the term ``quark'' we mean the ordinary quarks as well as the fermions needed to generate the composite Higgs field. The quark fields included in the Lagrangian ${\cal L}_0$ are all massless to guarantee the underlying gauge symmetry and the flavor symmetry.\\ \hspace*{\parindent}We assume that the flavor symmetry $G_F$ is broken dynamically by the presence of the nonvanishing vacuum expectation value for the composite field made of certain quark fields, \begin{equation} \langle \bar{\psi}\psi \rangle \neq 0\ . \end{equation} Here we have chosen the vacuum for which \begin{equation} \langle \bar{\psi}\gamma_{5}\psi \rangle = 0\ . \end{equation} The quarks acquire masses according to the dynamical breaking of the flavor symmetry $G_F$. The remaining flavor symmetry, if any, will be denoted by $S_F$. As is well known, the vacuum satisfying Eqs.$(2.2)$ and $(2.3)$ is not unique and thus we have degenerate vacua in $G_{F}/S_{F}$. These degenerate vacua point in arbitrary directions in $G_{F}/S_{F}$.\\ \hspace*{\parindent}Now we add to ${\cal L}_0$ the term $\cal L'$ which explicitly breaks the flavor symmetry $G_F$. We assume that $\cal L'$ is CP-invariant and $S_F$-symmetric. 
The degeneracy of the vacua mentioned above is now resolved in the system described by the total Lagrangian \begin{equation} {\cal L}={\cal L}_0+{\cal L'}\ . \end{equation} The direction of the vacuum thus determined, however, does not necessarily guarantee the conditions $(2.2)$ and $(2.3)$. Hence we need to make a transformation on the fields to recover the conditions \begin{equation} {\psi}'=U\psi\ , \end{equation} \noindent with $U$ the transformation belonging to $G_F$. By this transformation the form of the symmetry breaking term $\cal L'$ will be modified so that CP violating terms in general show up in $\cal L'$. We will call this mechanism of the spontaneous CP violation\cite{ODCP} the Dashen mechanism. In the following we would like to apply the Dashen mechanism to the case of the composite Higgs models.\\ \hspace*{\parindent}In the standard electroweak theory the Higgs fields are introduced as elementary scalar fields. Accordingly the Higgs mass, Higgs self-coupling constant and Higgs-fermion Yukawa-coupling constants are all arbitrary parameters. In the composite Higgs model the Higgs particle appears as a composite system of some fundamental fermions and some of the parameters in the standard electroweak theory are predictable in principle. 
The Lagrangian corresponding to this model may be given by \begin{equation} {\cal L}_0={\cal L}_{QCD}+{\cal L}_{EW}+{\cal L}_{DYN}\ , \end{equation} \noindent where ${\cal L}_{QCD}$ is the ordinary QCD Lagrangian for quarks, ${\cal L}_{EW}$ is the electroweak Lagrangian without Higgs fields and ${\cal L}_{DYN}$ is the dynamical term which is assumed to be responsible for generating the composite Higgs system as a bound state (this term may be thought of as a low-energy effective Lagrangian stemming from the more fundamental Lagrangian).\\ \hspace*{\parindent}The Higgs particle appears as a bound state of the fundamental fermions $\psi$ and the bound state is assumed to generate a condensation, \begin{equation} \langle \bar{\psi}\psi \rangle \neq 0\ . \end{equation} \noindent The fundamental fermions as well as the ordinary quarks acquire a mass according to the condensation. The mass of the fundamental fermions should be of the order of the weak scale in order to guarantee that the resulting effective theory be the standard electroweak theory.\\ \hspace*{\parindent}In the technicolor model\cite{TECH} the fundamental fermion is the techniquark, in the top-condensation model\cite{TOPC} it is the top quark with mass close to the weak scale, in the fourth-generation model\cite{FG} it is the heavy quark in the assumed fourth generation and in the color-sextet model\cite{SQ} it is the quark in the sextet representation of the color SU(3).\\ \hspace*{\parindent}In the following we would like to present simple models of the dynamical CP violation in the composite Higgs models. \vglue 0.6cm \setcounter{equation}{0} \stepcounter{subsection} \stepcounter{subsection} \subsection{SIMPLE MODELS OF DYNAMICAL CP VIOLATION} \stepcounter{section} {\elevenbf\noindent A. General formalism} \vglue 0.3cm Here we first present a general argument in constructing simple models of the dynamical CP violation in the composite Higgs models. 
We consider $n_{\rho}$ flavors of fundamental quarks in the representation $\rho$ of the color SU(3) or other symmetry group (we call this symmetry governing the fundamental quarks the symmetry S) and $n_3$ flavors of ordinary quarks in the triplet representation of the color SU(3). The fundamental quarks may or may not have a color degree of freedom.\\ \hspace*{\parindent}We will discuss transformations which mix the flavors of the fundamental and ordinary quarks among themselves. Since this transformation has to conserve charges, the mixing occurs only among the up-type (or down-type) fundamental and ordinary quarks. For simplicity we consider only up-type fundamental and ordinary quarks.\\ \hspace*{\parindent}According to Goldstein's analysis\cite{DCP2} one finds that two flavors of the fundamental and ordinary quarks are not sufficient to realize the Dashen mechanism. Hence we try a model with 2 flavors of the up-type fundamental quarks $Q$ and 3 flavors of the up-type ordinary quarks $q$: \begin{equation} Q=(U,C)\ ,\mbox{\hspace{1cm}} \hbox{\raisebox{.2ex}{$q$}}=(u,c,t)\ . \end{equation} \noindent We assume that $Q$ belongs to the N-plet of the fundamental symmetry S and $q$ belongs to the color triplet. It is understood that our model equally applies to the system of the down-type quarks \begin{equation} Q=(D,S)\ ,\mbox{\hspace{0.9cm}}\hbox{\raisebox{.2ex}{$q$}}=(d,s,b)\ . \end{equation} \noindent In the following by the term ``quark'' we generically mean both fundamental and ordinary quarks.\\ \hspace*{\parindent}As a $G_F$ breaking Hamiltonian density $\cal H'$ we take the following four-fermion terms \begin{eqnarray} {\cal H'}=-{\cal L'}=\!\!\! & & \!\!\!{G^{Q}}_{\alpha \beta} \bar{Q}_{\scriptscriptstyle L}\tau^{\alpha}Q_{\scriptscriptstyle R}\bar{Q}_{\scriptscriptstyle R} \tau^{\beta}Q_{\scriptscriptstyle L} \nonumber \\ \!\!\! 
& + & \!\!\!{G^{Qq}}_{\alpha \beta} \bar{Q}_{\scriptscriptstyle L}\tau^{\alpha}Q_{\scriptscriptstyle R}\bar{\hbox{\raisebox{.2ex}{$q$}}}_{\scriptscriptstyle R} \lambda^{\beta}\hbox{\raisebox{.2ex}{$q$}}_{\scriptscriptstyle L} +h.c. \nonumber \\ \!\!\! & + & \!\!\!{G^{q}}_{\alpha \beta} \bar{\hbox{\raisebox{.2ex}{$q$}}}_{\scriptscriptstyle L}\lambda^{\alpha}\hbox{\raisebox{.2ex}{$q$}}_{\scriptscriptstyle R}\bar{\hbox{\raisebox{.2ex}{$q$}}}_{\scriptscriptstyle R} \lambda^{\beta}\hbox{\raisebox{.2ex}{$q$}}_{\scriptscriptstyle L}\ , \end{eqnarray} \newpage \noindent where ${G^{Q}}_{\alpha \beta}$ , ${G^{Qq}}_{\alpha \beta}$ and ${G^{q}}_{\alpha \beta}$ are coupling parameters among fundamental quarks $Q$ and ordinary quarks $\hbox{\raisebox{.2ex}{$q$}}$ which depend on indices $\alpha$ and $\beta$ of the flavor SU(2) and SU(3) matrices $\tau^{\alpha}$ $(\alpha =1,2,3)$ and $\lambda^{\alpha}$ $(\alpha =1,2,\ldots ,8)$ respectively. In Eq.$(3.3)$ the fundamental symmetry indices and color indices are suppressed and are understood to be contracted between adjoining quarks. There would be other possibilities of contracting these indices. We, however, confine ourselves to the case of Eq.$(3.3)$.\\ \hspace*{\parindent}We require the CP invariance and hermiticity of the Lagrangian $(3.3)$. 
We then have \begin{eqnarray} & & {G^{Q}}_{\alpha \beta}{\tau^{\alpha}}_{rr'} {\tau^{\beta}}_{ss'} = {G^{Q}}_{\beta \alpha}{\tau^{\alpha}}_{r'r} {\tau^{\beta}}_{s's} = ({G^{Q}}_{\beta \alpha}{\tau^{\alpha}}_{r'r} {\tau^{\beta}}_{s's})^{\ast} \ , \nonumber \\ & & {G^{Qq}}_{\alpha \beta}{\tau^{\alpha}}_{rr'} {\lambda^{\beta}}_{ss'} = ({G^{Qq}}_{\alpha \beta}{\tau^{\alpha}}_{rr'} {\lambda^{\beta}}_{ss'})^{\ast}\ , \nonumber \\ & & {G^{q}}_{\alpha \beta}{\lambda^{\alpha}}_{rr'} {\lambda^{\beta}}_{ss'} = {G^{q}}_{\beta \alpha}{\lambda^{\alpha}}_{r'r} {\lambda^{\beta}}_{s's} = ({G^{q}}_{\beta \alpha}{\lambda^{\alpha}}_{r'r} {\lambda^{\beta}}_{s's})^{\ast}\ , \end{eqnarray} \noindent where indices $r, r', s, s'$ represent flavors of $Q$ and $\hbox{\raisebox{.2ex}{$q$}}$, i.e. $U, C, u, c, t.$\\ \hspace*{\parindent}Our first task is to find the correct vacuum under the Lagrangian \begin{equation} {\cal L}={\cal L}_0+{\cal L'}\ , \end{equation} \noindent where ${\cal L}_0$ is the Lagrangian given by Eq.$(2.6)$ and ${\cal L'}$ is given by Eq.$(3.3)$. Let us denote by $|\bar{0}\rangle$ the ground state (vacuum) for a system governed by the Lagrangian $(3.5)$ and by $|0\rangle$ the ground state for ${\cal L}_0$ which is invariant under $S_F$. To find the ground state $|\bar{0}\rangle$ we try to minimize the energy \begin{equation} E(W)=\langle \bar{0}\, |\, {\cal H'}\, |\, \bar{0}\rangle =\langle 0\, |\, W^{\scriptscriptstyle \dag}{\cal H'}W\, |\, 0\rangle\ , \end{equation} \noindent by suitably choosing the transformation $W$ in $G_F$. 
The transformation $W$ is induced by the transformation $U$ of fermion fields $Q$ and $\hbox{\raisebox{.2ex}{$q$}}$: \begin{equation} Q'_{\scriptscriptstyle L,\scriptscriptstyle R}={U^{Q}}_{\scriptscriptstyle L,\scriptscriptstyle R}Q_{\scriptscriptstyle L,\scriptscriptstyle R}\ , \mbox{\hspace{1cm}} \hbox{\raisebox{.2ex}{$q$}}'_{\scriptscriptstyle L,\scriptscriptstyle R}={U^{q}}_{\scriptscriptstyle L,\scriptscriptstyle R} \hbox{\raisebox{.2ex}{$q$}}_{\scriptscriptstyle L,\scriptscriptstyle R}\ , \end{equation} \noindent where $U^{Q}_{\scriptscriptstyle L,\scriptscriptstyle R}$ is the transformation belonging to the left-handed(right-handed) flavor SU(2) for fundamental quarks $Q$ and $U^{q}_{\scriptscriptstyle L,\scriptscriptstyle R}$ belonging to the SU(3) for ordinary quarks $q$. The transformation $W$ is a function of these fermion transformations: \begin{equation} W=W(U)\ , \end{equation} \noindent where we represent generically by $U$ the transformations ${U^{Q}}_{\scriptscriptstyle L,\scriptscriptstyle R}$ and ${U^{q}}_{\scriptscriptstyle L,\scriptscriptstyle R}$. We find \newpage \begin{eqnarray} E(W)= \!\! & & \!\!\!\!\!\!\!\!{G^{Q}}_{\alpha \beta} \langle 0\, |\, \bar{Q}_{\scriptscriptstyle L}{U^{Q}}_{\scriptscriptstyle L}^{\scriptscriptstyle \dag} \tau^{\alpha}{U^{Q}}_{\scriptscriptstyle R}Q_{\scriptscriptstyle R} \bar{Q}_{\scriptscriptstyle R}{U^{Q}}_{\scriptscriptstyle R}^{\scriptscriptstyle \dag} \tau^{\beta}{U^{Q}}_{\scriptscriptstyle L}Q_{\scriptscriptstyle L}\, |\, 0\rangle \nonumber \\ \!\! & + & \!\!\!{G^{Qq}}_{\alpha \beta} \langle 0\, |\, \bar{Q}_{\scriptscriptstyle L}{U^{Q}}_{\scriptscriptstyle L}^{\scriptscriptstyle \dag} \tau^{\alpha}{U^{Q}}_{\scriptscriptstyle R}Q_{\scriptscriptstyle R} \bar{\hbox{\raisebox{.2ex}{$q$}}}_{\scriptscriptstyle R}{U^{q}}_{\scriptscriptstyle R}^{\scriptscriptstyle \dag} \lambda^{\beta}{U^{q}}_{\scriptscriptstyle L}\hbox{\raisebox{.2ex}{$q$}}_{\scriptscriptstyle L} +h.c.\, |\, 0\rangle \nonumber \\ \!\! 
& + & \!\!\!{G^{q}}_{\alpha \beta} \langle 0\, |\, \bar{\hbox{\raisebox{.2ex}{$q$}}}_{\scriptscriptstyle L}{U^q}_{\scriptscriptstyle L}^{\scriptscriptstyle \dag} \lambda^{\alpha}{U^q}_{\scriptscriptstyle R}\hbox{\raisebox{.2ex}{$q$}}_{\scriptscriptstyle R} \bar{\hbox{\raisebox{.2ex}{$q$}}}_{\scriptscriptstyle R}{U^q}_{\scriptscriptstyle R}^{\scriptscriptstyle \dag} \lambda^{\beta}{U^q}_{\scriptscriptstyle L}\hbox{\raisebox{.2ex}{$q$}}_{\scriptscriptstyle L}\, |\, 0\rangle\ . \end{eqnarray} \hspace*{\parindent}Since the state $|0\rangle$ is invariant under $S_F$, we may express the following amplitudes as given below: \begin{eqnarray} & & \langle 0\, |\, \bar{Q}_{\scriptscriptstyle L r}Q_{\scriptscriptstyle R r'}\, |\, 0\rangle ={\Delta^Q}_{0}\delta_{rr'}\ , \; \langle 0\, |\, \bar{\hbox{\raisebox{.2ex}{$q$}}}_{\scriptscriptstyle L r}\hbox{\raisebox{.2ex}{$q$}}_{\scriptscriptstyle R r'}\, |\, 0\rangle ={\Delta^q}_{0}\delta_{rr'}\ , \nonumber \\ & & \langle 0\, |\, \bar{Q}_{\scriptscriptstyle L r}Q_{\scriptscriptstyle R r'} \bar{Q}_{\scriptscriptstyle R s}Q_{\scriptscriptstyle L s'}\, |\, 0\rangle = {\Delta^Q}\delta_{rr'}\delta_{ss'}+{\Delta^{\prime Q}} \delta_{rs'}\delta_{r's}\ , \nonumber \\ & & \langle 0\, |\, \bar{Q}_{\scriptscriptstyle L r}Q_{\scriptscriptstyle R r'} \bar{\hbox{\raisebox{.2ex}{$q$}}}_{\scriptscriptstyle R s}\hbox{\raisebox{.2ex}{$q$}}_{\scriptscriptstyle L s'}\, |\, 0\rangle = {\Delta^{Qq}}\delta_{rr'}\delta_{ss'}, \nonumber \\ & & \langle 0\, |\, \bar{\hbox{\raisebox{.2ex}{$q$}}}_{\scriptscriptstyle L r}\hbox{\raisebox{.2ex}{$q$}}_{\scriptscriptstyle R r'} \bar{\hbox{\raisebox{.2ex}{$q$}}}_{\scriptscriptstyle R s}\hbox{\raisebox{.2ex}{$q$}}_{\scriptscriptstyle L s'}\, |\, 0\rangle = {\Delta^q}\delta_{rr'}\delta_{ss'}+{\Delta^{\prime q}} \delta_{rs'}\delta_{r's}\ , \end{eqnarray} \noindent where parameters $\Delta$ are chosen to be real. After some algebra we obtain \begin{eqnarray} E(W)= \!\! 
& & \!\!\!\!\!\!\!\!{g^{Q}}_{\alpha\beta} \mbox{Tr}[\, U^{Q}\tau^{\alpha}] \mbox{Tr}[\, \tau^{\beta} U^{Q\scriptscriptstyle \dag}] \nonumber \\ \!\! & + & \!\!\!r{g^{Qq}}_{\alpha\beta} \mbox{Tr}[\, U^{Q}\tau^{\alpha}] \mbox{Tr}[\, \lambda^{\beta} U^{q\scriptscriptstyle \dag}]+h.c. \nonumber \\ \!\! & + & \!\!\!r^2{g^{q}}_{\alpha\beta} \mbox{Tr}[\, U^{q}\lambda^{\alpha}] \mbox{Tr}[\, \lambda^{\beta} U^{q\scriptscriptstyle \dag}]\ , \end{eqnarray} \noindent where matrices $U^Q$ and $U^q$ and parameters ${g^{Q}}_{\alpha\beta}$ , ${g^{Qq}}_{\alpha\beta}$ , ${g^{q}}_{\alpha\beta}$ and $r$ are given by the following relations: \begin{equation} U^{Q}={U^Q}_{\scriptscriptstyle R}{U^Q}_{\scriptscriptstyle L}^{\scriptscriptstyle \dag}\ , \mbox{\hspace{1cm}} U^{q}={U^q}_{\scriptscriptstyle R}{U^q}_{\scriptscriptstyle L}^{\scriptscriptstyle \dag}\ , \end{equation} \begin{eqnarray} {g^{Q}}_{\alpha\beta}={G^{Q}}_{\alpha\beta} \Delta^{Q}\ ,\; \!\!\!\! & r & \!\!\!\!\!{g^{Qq}}_{\alpha\beta}= {G^{Qq}}_{\alpha\beta}\Delta^{Qq}\ ,\; r^2{g^{q}}_{\alpha\beta}={G^{q}}_{\alpha\beta} \Delta^{q}\ ,\\ & r & \!\!\!=\frac{\langle\, \bar{\hbox{\raisebox{.2ex}{$q$}}}\hbox{\raisebox{.2ex}{$q$}}\, \rangle} {\langle \bar{Q}Q\rangle} \ . \end{eqnarray} Here we introduced the parameter $r$ in order to show explicitly the relative size of the three kinds of parameters ${G^{Q}}_{\alpha \beta}$ , ${G^{Qq}}_{\alpha \beta}$ and ${G^{q}}_{\alpha \beta}$. The parameter $r$ is the ratio of the ordinary and fundamental mass scales\cite{TECH} and its size is assumed to be \begin{equation} r \sim \left(\frac{1\, \mbox{GeV}}{1\, \mbox{TeV}}\right)^3 =10^{-9}\ . \end{equation} Our task is to minimize $E(W)$ given in Eq.$(3.11)$ by changing $U$ and find the solution for $U$. With $U$ determined in this procedure we rewrite ${\cal L'}$ to see whether CP violating terms are generated in ${\cal L'}$. \newpage {\elevenbf\noindent B. 
Special solutions} \vglue 0.3cm We would like to find the general solution for $U$ to minimize $E(W)$ in Eq.$(3.11)$. It is, however, quite complicated to obtain the general solution and we shall confine ourselves to some special solutions to this problem.\\ \hspace*{\parindent}We first consider the following specialization, \begin{eqnarray} & & {g^Q}_{00}={g^Q}_{33}\; (\equiv g^Q)<0\ ,\nonumber \\ & & 3{g^{Qq}}_{00}=-\sqrt{3}\ {g^{Qq}}_{08} =\sqrt{3}\ {g^{Qq}}_{30}=2{g^{Qq}}_{38} \; (\equiv g^{Qq})>0\ ,\nonumber \\ & & {g^{Q}}_{\alpha \beta}=0,\; {g^{Qq}}_{\alpha \beta}= 0\; : \mbox{ otherwise}\ . \end{eqnarray} \noindent Parametrizing the matrix elements of matrices $U^Q$ and $U^q$ by \begin{eqnarray} & & {U^{Q}}_{i j}={u^{Q}}_{i j}\exp (i{\theta^{Q}}_{ij})\; \mbox{\hspace{2ex}with }\; i,j=1,2\nonumber \\ & & {U^{q}}_{i j}={u^{q}}_{i j}\exp (i{\theta^{q}}_{ij})\; \mbox{\hspace{4ex}with }\; i,j=1,2,3 \end{eqnarray} \noindent where $u$'s and $\theta$'s are real constants constrained by the unitarity of $U$, we obtain \begin{eqnarray} E(W) \!\! & = & \!\!2g^Q\{({u^{Q}}_{11})^2+({u^{Q}}_{22})^2\} \nonumber \\ & & \!\!\!\!+2rg^{Qq}\left\{\frac{\sqrt{3}}{2} {u^Q}_{11}{u^q}_{11} \cos({\theta^Q}_{11}-{\theta^q}_{11}) \right. \nonumber \\ & & \mbox{\hspace{5ex}} +\frac{\sqrt{3}}{2}{u^Q}_{11}{u^q}_{22} \cos({\theta^Q}_{11}-{\theta^q}_{22}) \nonumber \\ & & \mbox{\hspace{5ex}} +{u^Q}_{11}{u^q}_{33} \cos({\theta^Q}_{11}-{\theta^q}_{33}) \nonumber \\ & & \mbox{\hspace{5ex}} -\frac{\sqrt{3}}{2}{u^Q}_{22}{u^q}_{11} \cos({\theta^Q}_{22}-{\theta^q}_{11}) \nonumber \\ & & \mbox{\hspace{5ex}} -\frac{\sqrt{3}}{2}{u^Q}_{22}{u^q}_{22} \cos({\theta^Q}_{22}-{\theta^q}_{22}) \nonumber \\ & & \mbox{\hspace{5ex}} +{u^Q}_{22}{u^q}_{33} \cos({\theta^Q}_{22}-{\theta^q}_{33}) \Biggr\} \nonumber \\ & & \!\!\!\!+\mbox{O}(r^2)\ . \end{eqnarray} \noindent In deriving Eq.$(3.18)$ we kept only the terms up to the first order of the small number $r$. 
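The passage from the trace formula $(3.11)$ to the cosine form $(3.18)$ can be checked numerically. The sketch below (our own code, not part of the original derivation) assumes $\tau^0$ and $\lambda^0$ denote the $2\times 2$ and $3\times 3$ unit matrices (a convention consistent with $E^0$ in Eq.$(3.20)$), takes illustrative values $g^Q=-1$, $g^{Qq}=+1$, restricts $U^Q$ and $U^q$ to diagonal phases, and drops the O($r^2$) term:

```python
import numpy as np

# tau^0, tau^3 and lambda^0, lambda^8 (diagonal generators; unit-matrix
# normalization of tau^0, lambda^0 is an assumption matching Eq.(3.20))
tau = {0: np.eye(2), 3: np.diag([1.0, -1.0])}
lam = {0: np.eye(3), 8: np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)}
GQ, GQq = -1.0, 1.0   # illustrative values of g^Q < 0 and g^{Qq} > 0

def E_trace(thQ, thq, r=1e-3):
    """E(W) from the trace formula (3.11) with the coupling choice (3.16)."""
    UQ = np.diag(np.exp(1j * np.asarray(thQ)))
    Uq = np.diag(np.exp(1j * np.asarray(thq)))
    EQ = sum(GQ * np.trace(UQ @ tau[a]) * np.trace(tau[a] @ UQ.conj().T)
             for a in (0, 3))
    # nonzero g^{Qq}_{ab} per Eq.(3.16): 3g_00 = -sqrt3 g_08 = sqrt3 g_30 = 2 g_38
    g = {(0, 0): GQq / 3, (0, 8): -GQq / np.sqrt(3.0),
         (3, 0): GQq / np.sqrt(3.0), (3, 8): GQq / 2}
    EQq = sum(c * np.trace(UQ @ tau[a]) * np.trace(lam[b] @ Uq.conj().T)
              for (a, b), c in g.items())
    return (EQ + r * (EQq + EQq.conjugate())).real   # "+h.c." of Eq.(3.11)

def E_cos(thQ, thq, r=1e-3):
    """E(W) from the cosine form (3.18) with all u's equal to unity."""
    (a, b), (x, y, z) = thQ, thq
    s = np.sqrt(3.0) / 2.0
    br = (s * np.cos(a - x) + s * np.cos(a - y) + np.cos(a - z)
          - s * np.cos(b - x) - s * np.cos(b - y) + np.cos(b - z))
    return 4.0 * GQ + 2.0 * r * GQq * br

rng = np.random.default_rng(0)
for _ in range(10):
    thQ, thq = rng.uniform(0, 2*np.pi, 2), rng.uniform(0, 2*np.pi, 3)
    assert abs(E_trace(thQ, thq) - E_cos(thQ, thq)) < 1e-9
```

The agreement for random phases confirms that the coupling choice $(3.16)$ indeed reproduces the coefficients $\pm\sqrt{3}/2$ and $+1$ appearing in Eq.$(3.18)$.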
We expand parameters $u$'s and $\theta$'s in powers of $r$ and look for the minimum of the energy $E(W)$ to the first order of $r$: \begin{eqnarray} & & E=E^0+E^1r+\mbox{O}(r^2)\ ,\nonumber \\ & & u=u^0+u^1r+\mbox{O}(r^2)\ ,\nonumber \\ & & \theta =\theta^0+\theta^1r+\mbox{O}(r^2)\ , \end{eqnarray} \noindent where we have omitted the suffixes $i$ and $j$ and the superscripts $Q$ and $q$ in the parameters $u$ and $\theta$. After some algebra we find \begin{equation} E^0 = 2g^Q\{({u^{Q0}}_{11})^2+({u^{Q0}}_{22})^2\}\ , \end{equation} \begin{eqnarray} E^1 & = & \!\!4g^Q({u^{Q0}}_{11}{u^{Q1}}_{11} +{u^{Q0}}_{22}{u^{Q1}}_{22}) \nonumber \\ & & \!\!\!\!+2g^{Qq}\left\{\frac{\sqrt{3}}{2} {u^{Q0}}_{11}{u^{q0}}_{11} \cos({\theta^{Q0}}_{11}-{\theta^{q0}}_{11}) \right. \nonumber \\ & & \mbox{\hspace{5ex}} +\frac{\sqrt{3}}{2}{u^{Q0}}_{11}{u^{q0}}_{22} \cos({\theta^{Q0}}_{11}-{\theta^{q0}}_{22}) \nonumber \\ & & \mbox{\hspace{5ex}} +{u^{Q0}}_{11}{u^{q0}}_{33} \cos({\theta^{Q0}}_{11}-{\theta^{q0}}_{33}) \nonumber \\ & & \mbox{\hspace{5ex}} -\frac{\sqrt{3}}{2}{u^{Q0}}_{22}{u^{q0}}_{11} \cos({\theta^{Q0}}_{22}-{\theta^{q0}}_{11}) \nonumber \\ & & \mbox{\hspace{5ex}} -\frac{\sqrt{3}}{2}{u^{Q0}}_{22}{u^{q0}}_{22} \cos({\theta^{Q0}}_{22}-{\theta^{q0}}_{22}) \nonumber \\ & & \mbox{\hspace{5ex}} +{u^{Q0}}_{22}{u^{q0}}_{33} \cos({\theta^{Q0}}_{22}-{\theta^{q0}}_{33}) \Biggr\}\ . \end{eqnarray} \noindent Since we chose $g^Q<0$, ${u^{Q0}}_{11}$ and ${u^{Q0}}_{22}$ may be taken to be unity according to Eq.$(3.20)$. We find the following set of parameters $u$'s and $\theta$'s to minimize $E^1$. 
\begin{eqnarray} & & {u^{Q0}}_{11}={u^{Q0}}_{22}=1\ , \nonumber \\ & & {u^{Q1}}_{11}={u^{Q1}}_{22}=0\ , \nonumber \\ & & {u^{q0}}_{11}={u^{q0}}_{22}={u^{q0}}_{33}=1\ , \nonumber \\ & & {\theta^{Q0}}_{11}=\theta\pm \frac{\pi}{3}\ ,\; {\theta^{Q0}}_{22}=\theta\mp \frac{\pi}{3}\ ,\; \mbox{\hspace{13ex}}\pmod{2\pi} \nonumber \\ & & {\theta^{q0}}_{11}=\theta\mp \frac{\pi}{2}\ ,\; {\theta^{q0}}_{22}=\theta\mp \frac{\pi}{2}\ ,\; {\theta^{q0}}_{33}=\theta\pm \pi \ ,\; \mbox{\hspace{1ex}}\pmod{2\pi} \end{eqnarray} \noindent where $\theta$ is the free parameter. Thus the transformation matrices $U^Q$ and $U^q$ are given by \begin{eqnarray} U^Q & \!\!\!\! = {U^Q}_{\scriptscriptstyle R}{U^Q}_{\scriptscriptstyle L}^{\scriptscriptstyle \dag} = & \!\!\! e^{i\theta} \left( \begin{array}{cc} e^{\pm i\frac{\pi}{3}} & 0 \\ 0 & e^{\mp i\frac{\pi}{3}} \end{array} \right)\ , \nonumber \\ U^q & \!\!\!\! = {U^q}_{\scriptscriptstyle R}{U^q}_{\scriptscriptstyle L}^{\scriptscriptstyle \dag} = & \!\!\!\! e^{i\theta} \left( \begin{array}{ccc} e^{\mp i\frac{\pi}{2}} & 0 & 0 \\ 0 & e^{\mp i\frac{\pi}{2}} & 0 \\ 0 & 0 & e^{\pm i\pi} \end{array} \right)\ . \end{eqnarray} \noindent In the present paper we would like to construct a model without the strong CP violation and so we set \begin{equation} \theta = 0 \ . \end{equation} \newpage We redefine the fields $Q$ and $\hbox{\raisebox{.2ex}{$q$}}$ in such a way that \begin{eqnarray} Q'_{\scriptscriptstyle L}=Q_{\scriptscriptstyle L}\ , & \mbox{\hspace{0.7cm}} & Q'_{\scriptscriptstyle R}={U^Q}_{\scriptscriptstyle L}{U^Q}_{\scriptscriptstyle R}^{\scriptscriptstyle \dag}Q_{\scriptscriptstyle R}\ , \nonumber \\ q'_{\scriptscriptstyle L}=q_{\scriptscriptstyle L}\ , & \mbox{\hspace{0.7cm}} & q'_{\scriptscriptstyle R}={U^q}_{\scriptscriptstyle L}{U^q}_{\scriptscriptstyle R}^{\scriptscriptstyle \dag}q_{\scriptscriptstyle R}\ . 
\end{eqnarray} \noindent so that the following conditions are recovered, \begin{eqnarray} & & \langle 0\, |\, \bar{Q'}_{\scriptscriptstyle L r} Q'_{\scriptscriptstyle R r'}\, |\, 0\rangle ={\Delta^Q}_{0}\delta_{rr'}\ , \nonumber \\ & & \langle 0\, |\, \bar{\hbox{\raisebox{.2ex}{$q$}}'}_{\scriptscriptstyle L r} \hbox{\raisebox{.2ex}{$q$}}'_{\scriptscriptstyle R r'}\, |\, 0\rangle ={\Delta^q}_{0}\delta_{rr'}\ . \end{eqnarray} \noindent Expressed by the new fields $Q'$ and $\hbox{\raisebox{.2ex}{$q$}}'$ the Hamiltonian density $\cal H'$ takes the form \begin{eqnarray} {\cal H'}=\!\!\! & & \!\!\!\!\!\!-{\cal L'}={G^{Q}}_{\alpha \beta} \bar{Q'}_{\scriptscriptstyle L}\tau^{\alpha}{U^Q}_{\scriptscriptstyle R} {U^Q}_{\scriptscriptstyle L}^{\scriptscriptstyle \dag}Q'_{\scriptscriptstyle R} \bar{Q'}_{\scriptscriptstyle R}{U^Q}_{\scriptscriptstyle L}{U^Q}_{\scriptscriptstyle R}^{\scriptscriptstyle \dag} \tau^{\beta}Q'_{\scriptscriptstyle L} \nonumber \\ \!\!\! & + & \!\!\!{G^{Qq}}_{\alpha \beta} \bar{Q'}_{\scriptscriptstyle L}\tau^{\alpha}{U^Q}_{\scriptscriptstyle R} {U^Q}_{\scriptscriptstyle L}^{\scriptscriptstyle \dag}Q'_{\scriptscriptstyle R} \bar{\hbox{\raisebox{.2ex}{$q$}}'}_{\scriptscriptstyle R}{U^q}_{\scriptscriptstyle L}{U^q}_{\scriptscriptstyle R}^{\scriptscriptstyle \dag} \lambda^{\beta}\hbox{\raisebox{.2ex}{$q$}}'_{\scriptscriptstyle L} +h.c. \nonumber \\ \!\!\! & + & \!\!\!{G^{q}}_{\alpha \beta} \bar{\hbox{\raisebox{.2ex}{$q$}}'}_{\scriptscriptstyle L}\lambda^{\alpha}{U^q}_{\scriptscriptstyle R} {U^q}_{\scriptscriptstyle L}^{\scriptscriptstyle \dag}\hbox{\raisebox{.2ex}{$q$}}'_{\scriptscriptstyle R} \bar{\hbox{\raisebox{.2ex}{$q$}}'}_{\scriptscriptstyle R}{U^q}_{\scriptscriptstyle L}{U^q}_{\scriptscriptstyle R}^{\scriptscriptstyle \dag} \lambda^{\beta}\hbox{\raisebox{.2ex}{$q$}}'_{\scriptscriptstyle L}\ . \end{eqnarray} \noindent We find in Eq.$(3.27)$ that the second and third terms in general violate CP since these two terms cannot be made real. 
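The phase assignment in Eq.$(3.22)$ that led to this Hamiltonian can be checked numerically. With the $u$'s set to unity the braced sum in Eq.$(3.21)$ reduces to six cosines; minimizing over the $\theta^{q0}$'s analytically leaves a profile in the single relative phase $d={\theta^{Q0}}_{11}-{\theta^{Q0}}_{22}$, which can be scanned. A sketch of this check (our own numpy code, not part of the original derivation):

```python
import numpy as np

# Braced sum of Eq.(3.21) with all u's equal to unity:
# (a, b) = theta^{Q0}_{11,22}, (x, y, z) = theta^{q0}_{11,22,33}.
def brace(a, b, x, y, z):
    s = np.sqrt(3.0) / 2.0
    return (s * np.cos(a - x) + s * np.cos(a - y) + np.cos(a - z)
            - s * np.cos(b - x) - s * np.cos(b - y) + np.cos(b - z))

# Phases quoted in Eq.(3.22) (upper signs, theta = 0):
val_paper = brace(np.pi/3, -np.pi/3, -np.pi/2, -np.pi/2, np.pi)

# Minimizing over x, y, z for fixed a, b gives
#   -sqrt(3)|e^{ia} - e^{ib}| - |e^{ia} + e^{ib}|  =  profile(d),  d = a - b:
d = np.linspace(0.0, 2.0 * np.pi, 200001)
profile = -2.0 * np.sqrt(3.0) * np.abs(np.sin(d / 2)) - 2.0 * np.abs(np.cos(d / 2))
min_val, d_star = profile.min(), d[profile.argmin()]

assert abs(val_paper + 4.0) < 1e-9            # the quoted phases reach -4
assert abs(min_val + 4.0) < 1e-6              # ... which is the global minimum
assert abs(d_star - 2.0 * np.pi / 3.0) < 1e-3 # at d = 2 pi/3, as in Eq.(3.22)
```

Since $g^{Qq}>0$, the $g^{Qq}$ part of $E^1$ is minimized at the most negative value of the braced sum, giving $E^1_{\rm min}=-8g^{Qq}$ at the phases of Eq.$(3.22)$.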
To see this situation more explicitly we rewrite Eq.$(3.27)$ in the following form, \begin{eqnarray} {\cal H'}=\!\!\! & & \!\!\!2{G^{Q}} (\bar{U'}_{\scriptscriptstyle L}U'_{\scriptscriptstyle R}\bar{U'}_{\scriptscriptstyle R}U'_{\scriptscriptstyle L} +\bar{C'}_{\scriptscriptstyle L}C'_{\scriptscriptstyle R}\bar{C'}_{\scriptscriptstyle R}C'_{\scriptscriptstyle L}) \nonumber \\ \!\!\! & + & \!\!\!{G^{Qq}}\biggl[ \frac{\sqrt{3}}{2}(e^{i\frac{5}{6}\pi} \bar{U'}_{\scriptscriptstyle L}U'_{\scriptscriptstyle R} -e^{i\frac{1}{6}\pi}\bar{C'}_{\scriptscriptstyle L}C'_{\scriptscriptstyle R}) (\bar{u'}_{\scriptscriptstyle R}u'_{\scriptscriptstyle L}+\bar{c'}_{\scriptscriptstyle R}c'_{\scriptscriptstyle L}) \nonumber \\ & & \mbox{\hspace*{4ex}}+(e^{-i\frac{2}{3}\pi} \bar{U'}_{\scriptscriptstyle L}U'_{\scriptscriptstyle R} +e^{-i\frac{4}{3}\pi}\bar{C'}_{\scriptscriptstyle L}C'_{\scriptscriptstyle R}) (\bar{t'}_{\scriptscriptstyle R}t'_{\scriptscriptstyle L}) \biggr] +h.c. \nonumber \\ \!\!\! & + & \!\!\!\frac{4}{3}{G^{q}}_{88} \bar{t'}_{\scriptscriptstyle L}t'_{\scriptscriptstyle R}\bar{t'}_{\scriptscriptstyle R}t'_{\scriptscriptstyle L}+\cdots \nonumber \\ \!\!\! & + & \!\!\!({G^{q}}_{44}-{G^{q}}_{55})e^{i\frac{3}{2}\pi} \bar{u'}_{\scriptscriptstyle L}t'_{\scriptscriptstyle R}\bar{u'}_{\scriptscriptstyle R}t'_{\scriptscriptstyle L}+\cdots\; . \end{eqnarray} \noindent We now clearly observe that the new expression of the Hamiltonian density $\cal H'$ includes CP violating terms. In fact the second and fourth terms in $\cal H'$ manifestly violate CP. On the other hand the other terms do not violate CP.\\ \hspace*{\parindent} It may be interesting to see the form of the up-type quark mass matrix $(M_{ij})$ in the present specific model. After some algebra we find \begin{equation} (M_{ij})=2g^{Qq} \left( \begin{array}{ccc} -3/4 & 0 & 0 \\ 0 & -3/4 & 0 \\ 0 & 0 & -1/2 \end{array} \right)\ . 
\end{equation} \noindent Obviously the form $(3.29)$ does not properly reproduce the quark mass hierarchy and so our model is not realistic. \newpage We next consider the following specific choice of our coupling parameters, \begin{eqnarray} & & {g^Q}_{00}={g^Q}_{33}\; (\equiv g^Q)<0\ ,\nonumber \\ & & \sqrt{3}{g^{Qq}}_{00}=-\ {g^{Qq}}_{08} =\sqrt{3}\ {g^{Qq}}_{30}=2\ {g^{Qq}}_{38} \; (\equiv g^{Qq})>0\ ,\nonumber \\ & & {g^{Q}}_{\alpha \beta}=0,\; {g^{Qq}}_{\alpha \beta} =0\; : \mbox{ otherwise}\ . \end{eqnarray} \noindent Following the same procedure as before we find the solution \begin{eqnarray} U^Q & \!\!\!\! = {U^Q}_{\scriptscriptstyle R}{U^Q}_{\scriptscriptstyle L}^{\scriptscriptstyle \dag} = & \!\!\! e^{i\theta} \left( \begin{array}{cc} e^{\pm i\frac{\pi}{4}} & 0 \\ 0 & e^{\mp i\frac{\pi}{4}} \end{array} \right)\ , \nonumber \\ U^q & \!\!\!\! = {U^q}_{\scriptscriptstyle R}{U^q}_{\scriptscriptstyle L}^{\scriptscriptstyle \dag} = & \!\!\!\! e^{i\theta} \left( \begin{array}{ccc} e^{\mp i\frac{\pi}{2}} & 0 & 0 \\ 0 & e^{\mp i\frac{\pi}{2}} & 0 \\ 0 & 0 & e^{\pm i\pi} \end{array} \right)\ . \end{eqnarray} \noindent The up-type quark mass matrix corresponding to the above solution reads \begin{equation} (M_{ij})=2g^{Qq} \left( \begin{array}{ccc} -\sqrt{6}/4 & 0 & 0 \\ 0 & -\sqrt{6}/4 & 0 \\ 0 & 0 & -\sqrt{6}/2 \end{array} \right)\ . \end{equation} We observe that in this case the top quark is heavier than the up and charm quarks. \vglue 0.4cm {\elevenbf\noindent C. Models} \vglue 0.3cm We found the CP violating interaction Lagrangian as a result of special solutions of the minimum $E(W)$ condition. Thus we succeeded in constructing a simple model of the dynamical CP violation. In deriving the model we made some simplifying assumptions. This simplification made the model far from explaining the real situation in the standard theory. For example our model Hamiltonian does not reproduce the KM matrix correctly. 
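Both special solutions admit a quick internal consistency check: with $\theta=0$ the matrices of Eqs.$(3.23)$ and $(3.31)$ should be special unitary (no overall axial phase, consistent with the absence of strong CP), and the mass matrices $(3.29)$ and $(3.32)$ should reproduce the orderings stated above. A minimal numpy sketch (upper-sign choices assumed for illustration):

```python
import numpy as np

# U^Q of Eq.(3.23), U^Q of Eq.(3.31) and the common U^q; upper signs, theta = 0
UQ1 = np.diag(np.exp(1j * np.array([np.pi/3, -np.pi/3])))
UQ2 = np.diag(np.exp(1j * np.array([np.pi/4, -np.pi/4])))
Uq  = np.diag(np.exp(1j * np.array([-np.pi/2, -np.pi/2, np.pi])))

for U in (UQ1, UQ2, Uq):
    n = U.shape[0]
    assert np.allclose(U @ U.conj().T, np.eye(n))   # unitary
    assert abs(np.linalg.det(U) - 1.0) < 1e-12      # det = 1: no net axial phase

# Mass matrices (3.29) and (3.32), in units of 2 g^{Qq}:
M1 = np.diag([-3/4, -3/4, -1/2])
M2 = np.diag([-np.sqrt(6)/4, -np.sqrt(6)/4, -np.sqrt(6)/2])
assert abs(M1[2, 2]) < abs(M1[0, 0])  # first choice: top lighter (no hierarchy)
assert abs(M2[2, 2]) > abs(M2[0, 0])  # second choice: top heavier than up, charm
```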
In order to get the full KM matrix we have to relax our assumptions and minimize $E(W)$ with the full expression of the transformation matrix $U$ (we have to drop the assumption that $U$ is a diagonal matrix). This attempt will be made in a separate work. We are, however, interested in estimating physical effects in low-energy phenomena which are predicted by the Hamiltonian. Such an estimate may help in examining whether our model serves as a prototype of the real theory of the dynamical CP violation for the standard theory.\\ \hspace*{\parindent}The system of quarks we assumed consists of the up-type 2-flavor fundamental quarks $Q$ and 3-flavor ordinary quarks $q$ as shown in Eq.$(3.1)$. We have not yet specified the symmetry group S to which the fundamental quarks $Q$ belong.\\ \hspace*{\parindent}A natural possibility is to identify the symmetry group S with the technicolor SU(N). In this case the fundamental quark $Q$ is the techniquark\cite{TECH} belonging to the N-dimensional fundamental representation of the technicolor SU(N).\\ \hspace*{\parindent}Another possibility is to identify the symmetry group S with the color SU(3). In this case the fundamental quark $Q$ is the color-sextet quark\cite{SQ} belonging to the 6-dimensional representation of the color SU(3).\\ \hspace*{\parindent}These two possibilities fit the previous argument quite well and constitute two practical models of the dynamical CP violation.\\ \hspace*{\parindent}It is also possible to identify the fundamental quarks $Q$ with the top quark in the top-condensation model\cite{TOPC} (or in the top-color model\cite{TCOL}). In this case, however, we are not able to get the nontrivial CP violating phase within our framework.\\ \hspace*{\parindent}Yet another possibility is to identify the fundamental fermion $Q$ with the quark in the assumed fourth generation\cite{FG}. 
In this case again it is impossible to obtain the nontrivial CP violating phase in our approach.\\ \hspace*{\parindent}In the following application we have in mind the technicolor model as well as the color-sextet quark model. \vglue 1cm \setcounter{equation}{0} \stepcounter{subsection} \stepcounter{subsection} \stepcounter{subsection} \subsection{LOW ENERGY EFFECTS} \stepcounter{section} \hspace*{\parindent}In our simple model introduced in the last section the KM matrix is real and diagonal. This is because we have taken a particular choice for the $G_F$ breaking Lagrangian $\cal L'$ and have neglected the higher order terms in $r$. Starting with a more general assumption we could have obtained the KM matrix with off-diagonal elements and complex phases.\\ \hspace*{\parindent}In the present section we consider possible low-energy effects originating from the model Lagrangian $(3.27)$. By this analysis we will be able to compare the low-energy CP-violating effects of dynamical origin with those of the standard origin of CP violation (i.e. through the KM phase).\\ \hspace*{\parindent}Our Hamiltonian reads \begin{equation} H= H_{0}+H'_{cons}+H'_{viol}\ , \end{equation} \noindent where $H_{0}$ is the Hamiltonian derived from the Lagrangian ${\cal L}_{0}$, $H'_{cons}$ is the CP conserving part of the Hamiltonian defined by integrating Eq.$(3.27)$ over the space variables and $H'_{viol}$ is the CP violating part. In the following we consider two typical low-energy effects derived from the Hamiltonian $(4.1)$. \vglue 0.4cm {\elevenbf\noindent A. Neutron electric dipole moment} \vglue 0.3cm Since the Lagrangian $\cal L'$ includes the energy scale at which the four-fermion interactions are induced from the more fundamental gauge theory, it is expected that our estimate of the neutron electric dipole moment depends on this energy scale. This means that this fundamental energy scale, i.e. 
the cut-off parameter $\Lambda$, may be constrained by the experimental information on the neutron electric dipole moment.\\ \hspace*{\parindent}We estimate the size of the contribution to the neutron electric dipole moment coming from our CP-violating Lagrangian $\cal L'$ given in Eq.$(3.27)$.\\ \hspace*{\parindent}The neutron electric dipole moment $d_n$ is given in terms of the quark dipole moments $d_u$ and $d_d$ in the naive quark model such that \begin{equation} d_n=\frac{4d_d-d_u}{3}\ . \end{equation} \noindent The electric dipole moment of quarks is calculated through the following term in the quark electromagnetic form factor at zero momentum transfer, \begin{equation} -d_q\bar{u}\sigma_{\mu \nu}\gamma_{5}q^{\nu}u\ , \end{equation} \noindent where the suffix $\hbox{\raisebox{.2ex}{$q$}}$ represents the u or d quark, $\hbox{\raisebox{.2ex}{$q$}}^{\nu}$ is the momentum transfer for quarks (momentum carried by the virtual photon) and $u$ is the Dirac spinor for quark $\hbox{\raisebox{.2ex}{$q$}}$.\\ \hspace*{\parindent}We start with the Lagrangian $\cal L'$ given in Eq.$(3.27)$. For later calculational convenience we introduce auxiliary fields $\phi$ and use the following effective Lagrangian instead of the four-fermion type Lagrangian $(3.27)$: \begin{equation} {\cal L'}=-\bar{\psi} \left( \phi^{\scriptscriptstyle \dag}\frac{1-\gamma_5}{2} +\phi\frac{1+\gamma_5}{2} \right) \psi +{G^{-1}}\phi^{\scriptscriptstyle \dag}\phi\ . \end{equation} \noindent The use of the above auxiliary-field Lagrangian makes it easier to classify the relevant Feynman diagrams contributing to the quark electric dipole moment and to perform the higher-order loop calculations.\\ \hspace*{\parindent}At one-loop level the diagrams shown in Fig.1 contribute to the quark electromagnetic form factor. 
\newpage \vspace*{2cm} \begin{center} Fig.1a\hspace{7cm}Fig.1b \vglue 0.3cm { \tenrm\baselineskip=12pt \noindent Fig.1 One-loop diagrams for the electromagnetic vertex function of quarks\\ represented by the use of auxiliary field $\phi$.} \vglue 1.5cm \end{center} \noindent As is easily seen the diagram in Fig.1a has no tensor structure corresponding to the electric dipole moment. The contribution of Fig.1b to the electric dipole moment is found to vanish. Thus there is no one-loop contribution to the quark electric dipole moment.\\ \hspace*{\parindent}We next examine the two-loop contribution to the quark electric dipole moment. The relevant diagrams are shown in Fig.2. \vspace{2cm} \begin{center} Fig.2a\hspace{7cm}Fig.2b \end{center} \vspace{2cm} \begin{center} Fig.2c \vglue 0.3cm {\rightskip=3pc \leftskip=3pc \tenrm\baselineskip=12pt \noindent Fig.2 Two-loop diagrams for the electromagnetic vertex function of quarks.} \vglue 1.5cm \end{center} The Feynman amplitudes corresponding to these diagrams are in general quartically divergent. The quartically divergent part of the amplitudes, however, has no tensor structure of the electric dipole moment and hence the leading contribution of these diagrams to the quark electric dipole moment is quadratically divergent. As is seen by direct calculations, the diagrams in Figs.2a and 2b have no quadratically divergent contribution to the quark electric dipole moment. The reason for this is that the helicity of the quark flips three times in these diagrams. Accordingly the leading quadratic divergence exists only in the diagram in Fig.2c. 
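The chirality counting invoked here (``the helicity of the quark flips three times'') rests on two standard gamma-matrix facts: a vector vertex $\gamma^{\mu}$ preserves chirality, while the dipole structure $\sigma_{\mu\nu}\gamma_5$ of Eq.$(4.3)$, like a mass insertion, flips it exactly once. These facts can be verified numerically; the sketch below assumes the Dirac representation (a convention of ours, not fixed by the text):

```python
import numpy as np

I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac representation of the gamma matrices
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
g5 = 1j * gam[0] @ gam[1] @ gam[2] @ gam[3]
PL, PR = (np.eye(4) - g5) / 2, (np.eye(4) + g5) / 2   # chirality projectors

for mu in range(4):
    # gamma^mu anticommutes with gamma5: vector vertices preserve chirality
    assert np.allclose(gam[mu] @ g5 + g5 @ gam[mu], 0)
    assert np.allclose(PL @ gam[mu], gam[mu] @ PR)
    for nu in range(4):
        # sigma_{mu nu} (hence sigma_{mu nu} gamma5) commutes with gamma5,
        # so the dipole operator in psi-bar ... psi flips chirality once,
        # exactly like a mass term
        sig = 0.5j * (gam[mu] @ gam[nu] - gam[nu] @ gam[mu])
        assert np.allclose(sig @ g5 - g5 @ sig, 0)
```

An amplitude in which the chirality flips three times can therefore match the one-flip dipole structure only with the help of additional mass insertions, which lowers its degree of divergence; this is the counting behind the absence of the quadratic divergence in Figs.2a and 2b.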
In the following we will calculate the quadratically divergent part of the Feynman amplitude corresponding to the diagram in Fig.2c.\\ \hspace*{\parindent}The Feynman amplitude $F$ corresponding to the diagram in Fig.2c reads \begin{eqnarray} F & = & \sum_{i,j,k}\int\frac{d^4\!p}{(2\pi)^4i} \frac{d^4\!p'}{(2\pi)^4i} \biggl[G^q_{jiuk}G^q_{kjiu} \nonumber \\ & & \mbox{\hspace{19ex}}\times\biggl\{\frac{1+\gamma_5}{2} \frac{1}{m_i-\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/_1+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/'} \frac{1-\gamma_5}{2} \frac{1}{m_j-\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/_1+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/'} \nonumber \\ & & \mbox{\hspace{21ex}}\times\frac{1-\gamma_5}{2} \frac{1}{m_k-\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/_1+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/} (Q_ke\gamma_{\mu}) \frac{1}{m_k-\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/_2+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/} \frac{1+\gamma_5}{2} \nonumber \\ & & \mbox{\hspace{21ex}}+\frac{1+\gamma_5}{2} \frac{1}{m_i-\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/_1+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/'} \frac{1-\gamma_5}{2} \frac{1}{m_j-\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/_1+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/'} (Q_je\gamma_{\mu}) \nonumber \\ & & \mbox{\hspace{21ex}}\times\frac{1}{m_j-\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/_2+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/'} \frac{1-\gamma_5}{2} \frac{1}{m_k-\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/_2+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/} \frac{1+\gamma_5}{2} \nonumber \\ & & \mbox{\hspace{21ex}}+\frac{1+\gamma_5}{2} 
\frac{1}{m_i-\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/_1+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/'} (Q_ie\gamma_{\mu}) \frac{1}{m_j-\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/_2+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/'} \frac{1-\gamma_5}{2} \nonumber \\ & & \mbox{\hspace{21ex}}\times\frac{1}{m_j-\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/_2+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/'} \frac{1-\gamma_5}{2} \frac{1}{m_k-\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/_2+\hbox{\raisebox{.2ex}{$p$}}\mbox{\hspace{-0.9ex}}/} \frac{1+\gamma_5}{2}\biggr\} \nonumber \\ & & \mbox{\hspace{17ex}}+G^q_{ukji}G^q_{iukj} \biggl\{\gamma_5 \rightarrow -\gamma_5\biggr\} \biggr]\ , \end{eqnarray} \noindent where $p_1$ $(p_2)$ is the momentum of the incoming (outgoing) quark. Here in Eq.$(4.5)$ the charge $Q_j$ is equal to $2/3$ for up-type quarks, i.e. $j = u, c, t,$ and is equal to $-1/3$ for down-type quarks, i.e. $j = d, s, b$. 
By extracting the quadratically divergent part $F^{div}$ of Eq.$(4.5)$ we obtain \begin{equation} F^{div}=\frac{2\Lambda^2}{(4\pi)^4}\sum_{i,j,k}Qe \mbox{Im}\{G^q_{jiuk}G^q_{kjiu}\} m_j(iA_jk_{\mu}\gamma_5 -B_j\sigma_{\mu\nu}\hbox{\raisebox{.2ex}{$q$}}^{\nu}\gamma_5)\ , \end{equation} \noindent where $A_j$ and $B_j$ are given by \begin{eqnarray} A_j & = & 2\left(\ln\frac{\Lambda^2}{m_j^2}-1\right) \nonumber \\ & + & \!\!\!\!\int^{1}_{0}d\!x\int^{1}_{0}d\!y \int^{1-y}_{0}d\!z \biggl[\left\{\frac{4}{x}(2-3y)+(3-5y)\right\} \ln\left|\frac{x(1-x)+1-y-z}{1-y-z}\right| \nonumber \\ & & \mbox{\hspace{18ex}} -\frac{3x(1-x)(1-2y)}{x(1-x)+1-y-z} \nonumber \\ & & \mbox{\hspace{18ex}} +\frac{1}{2}\frac{x(1-x)}{\{x(1-x)+1-y-z\}^2} \biggl\{x(1-x)\left(3(3-7y)-\frac{8}{x}(2-3y)\right) \nonumber \\ & & \mbox{\hspace{38ex}} +(1-y-z)\left(2(3-7y) -\frac{6}{x}(2-3y)\right)\biggr\}\biggl]\ , \nonumber \\ B_j & = & \int^{1}_{0}d\!x\int^{1}_{0}d\!y \int^{1-y}_{0}d\!z\frac{1}{x} \biggl(4\ln\left|\frac{x(1-x)+1-y-z}{1-y-z}\right| \nonumber \\ & & \mbox{\hspace{22ex}} -\frac{x(1-x)\{4x(1-x)+3(1-y-z)\}} {\{x(1-x)+1-y-z\}^2}\biggr)\ . \end{eqnarray} \noindent After some algebra we derive the following formula for the quadratically divergent part of the electric dipole moment of the up-quark $d_u$ \begin{equation} d_u=\frac{2}{3}e\frac{\Lambda^2}{(4\pi)^4}\sum_{i,j,k} \mbox{Im}\{G^q_{jiuk}G^q_{kjiu}\}m_j(A_j+B_j)\ . \end{equation} Performing the integration in Eq.$(4.7)$ we finally find the explicit expression for the quadratically divergent part of the up-quark electric dipole moment, \begin{equation} d_u=\frac{2}{3}e\frac{2\Lambda^2}{(4\pi)^4}\sum_{i,j,k} \mbox{Im}\{G^q_{jiuk}G^q_{kjiu}\}m_j \left[2\ln \frac{\Lambda^2}{m_j^2} -2.57\right]\ . \end{equation} \hspace*{\parindent}Apparently the dominant contribution in the above formula to the up-quark dipole moment comes from the top-quark intermediate state. 
Keeping only the top-quark contribution to Eq.$(4.9)$ and taking into account that \begin{equation} \mbox{Im}\{G^q_{jiuk}G^q_{kjiu}\}\sim \frac{g^4}{4\Lambda^4} \sim \frac{(2\pi)^2}{\Lambda^4}\ , \end{equation} \noindent we find \begin{equation} d_u=e\frac{m_t}{48\pi^2 \Lambda^2} \left[4\ln \frac{\Lambda}{m_t}-2.57\right]\ . \end{equation} Since $d_u \gg d_d$, we find that $d_n = d_u/3$. Assuming that $m_t$ = 140 GeV and taking into account the experimental upper bound on the neutron electric dipole moment\cite{NEDE}, we realize that the effective cut-off of the loop integral should satisfy \begin{equation} \Lambda > 800\; \mbox{TeV}\ . \end{equation} \hspace*{\parindent}The above lower bound for the cut-off $\Lambda$ is of the same order as the one set by the FCNC restriction\cite{FCNC}. If we use the value of $\Lambda$ set by the FCNC restriction, which will be described in Eq.$(4.22)$, and calculate $d_n$ through Eq.$(4.11)$, we find $d_n \sim 5\times 10^{-27}\; e$cm. This prediction is much smaller than the present experimental bound. In the standard model with the KM phase the neutron electric dipole moment has been calculated and found to be extremely small\cite{NEDM}. Our results $(4.11)$ and $(4.12)$ guarantee this property of the standard model. \vglue 0.4cm {\elevenbf\noindent B. K-meson system} \vglue 0.3cm The only known experimental information on CP violation comes from the K-meson decays. 
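\hspace*{\parindent}The order of magnitude behind the bound $(4.12)$ can be reproduced by a few lines of arithmetic. The sketch below (the helper name is ours; the only external inputs are $m_t = 140$ GeV as assumed in the text and the conversion $1\;\mbox{GeV}^{-1} = \hbar c \simeq 1.973\times 10^{-14}\;$cm) evaluates $d_n = d_u/3$ from Eq.$(4.11)$:

```python
import math

# Numerical check of Eqs. (4.11)-(4.12).
HBARC_GEV_CM = 1.973e-14  # conversion: 1 GeV^-1 = hbar*c in cm
M_T = 140.0               # top-quark mass in GeV, as assumed in the text

def neutron_edm(cutoff_gev):
    """d_n = d_u / 3 in units of e*cm, with d_u taken from Eq. (4.11)."""
    d_u = M_T / (48.0 * math.pi**2 * cutoff_gev**2) \
          * (4.0 * math.log(cutoff_gev / M_T) - 2.57)  # in units of e/GeV
    return d_u / 3.0 * HBARC_GEV_CM

# At the quoted lower bound Lambda = 800 TeV the prediction is of the
# order of the experimental limit ~1e-25 e*cm, which is how Eq. (4.12)
# arises.
print(neutron_edm(800.0e3))
```

For $\Lambda = 800$ TeV this gives $d_n \approx 10^{-25}\; e$cm, i.e. the prediction saturates the experimental bound, so larger values of $\Lambda$ are required.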
In this subsection we discuss the $\varepsilon$-parameter which is determined by measuring the charge asymmetry in the semileptonic decay of the $\mbox{K}^0$ meson and the $2\pi$ decay of the $\mbox{K}^0_{L}$ meson.\\ \hspace*{\parindent}$\mbox{K}^0$-meson states $|\mbox{K}^{0}_{L}\rangle$ and $|\mbox{K}^{0}_{S}\rangle$ are defined as \begin{equation} \left\{ \begin{array}{c} \displaystyle |\mbox{K}^{0}_{L}\rangle = \frac{1}{\sqrt{2(1+|\varepsilon|^2)}} \left\{(1+\varepsilon)\,|\mbox{K}^0\rangle +(1-\varepsilon)\,|\bar{\mbox{K}}^0\rangle \right\} \\ \displaystyle |\mbox{K}^{0}_{S}\rangle = \frac{1}{\sqrt{2(1+|\varepsilon|^2)}} \left\{(1+\varepsilon)\,|\mbox{K}^0\rangle -(1-\varepsilon)\,|\bar{\mbox{K}}^0\rangle \right\} \end{array} \right. \ , \end{equation} \begin{displaymath} (\; |\mbox{K}^0\rangle = -\, \mbox{\raisebox{-0.1ex}{CP}}\, |\bar{\mbox{K}}^0 \rangle\; )\ , \end{displaymath} \noindent With the non-vanishing $\varepsilon$ the K-meson mass eigenstates are different from the eigenstates of CP. We have \begin{equation} \varepsilon = \frac{\langle\mbox{K}^0|\, H\, |\bar{\mbox{K}}^0\rangle^{\frac{1}{2}} -\langle\bar{\mbox{K}}^0|\, H\, |\mbox{K}^0\rangle^{\frac{1}{2}}} {\langle\mbox{K}^0|\, H\, |\bar{\mbox{K}}^0\rangle^{\frac{1}{2}} +\langle\bar{\mbox{K}}^0|\, H\, |\mbox{K}^0\rangle^{\frac{1}{2}}} \simeq \frac{\langle\mbox{K}^0|\, H'_{viol}\, |\bar{\mbox{K}}^0\rangle} {\langle\mbox{K}^0|\, (H_{0} +H'_{cons})\, |\bar{\mbox{K}}^0\rangle}\ . \end{equation} Here we require that \begin{equation} |\langle\mbox{K}^0|\, (H_{0} +H'_{cons})\, |\bar{\mbox{K}}^0\rangle | \gg |\langle\mbox{K}^0|\, H'_{viol}\,|\bar{\mbox{K}}^0 \rangle | \ . 
\end{equation} The Hamiltonian $H'_{viol}$ contains the following term, \begin{equation} i\, \mbox{Im}(G)\int \!\!d^{3}\!x\, \bar{s}_{\scriptscriptstyle L} d_{\scriptscriptstyle R}\bar{s}_{\scriptscriptstyle R}d_{\scriptscriptstyle L}+h.c.\ , \end{equation} where $G$ is the corresponding four-fermion coupling constant and s and d represent the s and d-quark fields. Although in our model Im$(G)$ vanishes, we here consider the more general case in which Im$(G)\neq 0$. Using the PCAC relation (it should be remembered that a specific choice of the contraction of color indices is made in Eq.$(3.3)$), we find \begin{equation} \langle\mbox{K}^0|\bar{s}_{\scriptscriptstyle L}d_{\scriptscriptstyle R} \bar{s}_{\scriptscriptstyle R}d_{\scriptscriptstyle L}|\bar{\mbox{K}}^0\rangle =-\frac{B_{K}(\mu )f_{K}^{2}m_{K}^{4}}{2(m_s+m_d)^2}\ , \end{equation} where $B_{K}(\mu)$ is the so-called B-parameter, $f_{K}$ is the K-meson decay constant, and $m_{K}$, $m_{d}$ and $m_{s}$ are the masses of the K-meson, d-quark and s-quark respectively. After some calculation, we obtain \begin{equation} \langle\mbox{K}^0|H'_{viol}|\bar{\mbox{K}}^0\rangle \simeq -i\, \frac{\mbox{Im}(G)B_{K}(\mu )f_{K}^{2}m_{K}^{3}} {4(m_{d}+m_{s})^2} \langle\mbox{K}^0|{\mbox{K}}^0\rangle\ . \end{equation} We see by definition \begin{equation} \langle\mbox{K}^0|\, (H_{0}+H'_{cons})\, |\bar{\mbox{K}}^0\rangle \simeq \frac{1}{2}\left(\Delta M-\frac{i}{2}\Delta \Gamma \right) \langle\mbox{K}^0|{\mbox{K}}^0\rangle\ , \end{equation} where $\Delta M$ and $\Delta \Gamma$ are the $\mbox{K}_L-\mbox{K}_S$ differences of the mass and decay width respectively. Accordingly we obtain \begin{equation} \varepsilon \simeq \frac{-i\, \mbox{Im}(G)B_{K}(\mu)f_{K}^{2}m_{K}^{3}} {2\left(\Delta M-\frac{i}{2}\Delta \Gamma \right) (m_{d}+m_{s})^2}\ . \end{equation} \noindent By inserting experimental data in Eq.$(4.20)$ we find \begin{equation} \mbox{Im}(G)\sim 10^{-9}\; \mbox{TeV}^{-2}\ . 
\end{equation} \noindent The above result $(4.21)$ is about $10^2$ times smaller than the bound obtained from the FCNC restriction\cite{FCNC}, \begin{equation} \mbox{Re}(G)< 10^{-7}\; \mbox{TeV}^{-2}\ . \end{equation} \vglue 1.0cm \stepcounter{subsection} \stepcounter{subsection} \stepcounter{subsection} \stepcounter{subsection} \subsection{CONCLUSION} \stepcounter{section} \vglue 0.1cm \hspace*{\parindent}Applying Dashen's mechanism to the composite Higgs models, we succeeded in finding simple models of dynamical CP violation. Although our models have to be further elaborated to explain the actual KM phase, they represent an essential ingredient of the dynamical CP violation in the standard model and may be thought of as prototype models which accommodate the CP violation in the standard model.\\ \hspace*{\parindent}In order to see whether our model could be in conformity with experimental situations, we examined low-energy consequences of our model. By estimating the $\varepsilon$-parameter in the neutral K-meson decays and the neutron electric dipole moment, we derived the lower bound on the cut-off parameter using the available experimental information. The cut-off parameter signals, at the scale determined by the low-energy data, the existence of a deeper theory for which our model is an effective theory. The lower bound we obtained is consistent with the one required by the constraint on the flavor-changing neutral current.\\ \hspace*{\parindent}Although our model is a simple toy model for the dynamical CP violation, it may be elaborated to fully account for the CP violation in the standard model. The investigation in this direction is in progress. \vglue 1.0cm \subsection*{ACKNOWLEDGEMENTS} \vglue 0.1cm \hspace*{\parindent}The authors would like to thank T.~Onogi for discussions and suggestions and K.~Yamawaki for useful information. \vglue 0.6cm \newpage \subsection*{REFERENCES} \vglue 0.1cm
Q: How to configure a server for Ruby on Rails 3 for many users, where each user has his own set of gems? Could somebody help me, please? I would use Passenger + nginx to run Rails, but how to do that if each user wants to have his own set of gems? A: Use RVM; with a per-user RVM installation, each user gets his own gem directory (and per-application gemsets).
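A sketch of that setup with Passenger under nginx — one vhost per user, each pointing at a wrapper from that user's own RVM installation (user names, Ruby version, and paths here are illustrative only, and per-vhost `passenger_ruby` requires a Passenger version that supports it):

```nginx
server {
    listen 80;
    server_name alice.example.com;
    root /home/alice/app/public;
    passenger_enabled on;
    # Wrapper generated with: rvm wrapper ruby-1.9.2@aliceapp passenger
    # It pins this vhost to alice's Ruby and her "aliceapp" gemset.
    passenger_ruby /home/alice/.rvm/wrappers/ruby-1.9.2@aliceapp/ruby;
}
```

Each user then manages gems independently inside his own `~/.rvm`, e.g. with `rvm gemset create <name>` followed by `gem install`.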
Earned Media Coverage for the Chicago Crime Commission - John Pastuovic Communications, Inc. JPC garnered tons of earned media coverage for the Chicago Crime Commission during the Tri-State Gang Summit. Below is a story that ran on CBS2 Chicago. ABC7, NBC5, WGN, Univision, Crain's Chicago Business, and the Associated Press all covered the event. The Chicago Crime Commission retains JPC for their unique ability to generate earned media in the highly competitive Chicago media market. Greetings Mr. Pastuovic, I read about your work with the Chicago Crime Commission and gangs. Well done! I delivered mail into a housing project for 2 years. On the 1st of the month the line of young mothers waiting for me was long. For 20 minutes I gave out welfare checks, food stamps, Medicaid cards and letters from prison (only letters written in pencil). Then I would drive down the street to the next complex; no welfare checks, no letters from prison. Theory: Crime = education level of the mother (+ absentee fathers). Before I was a mailman I was a CPA on La Salle St; a lot of digging and searching. Recently, I dug and searched the backgrounds of the men killed in confrontations with police (Laquan McDonald, Freddie Gray, Michael Brown...); over 90% had teen mothers who dropped out of school. Sadly, the next young man to meet tragedy in a confrontation with police will have a teen mother who dropped out. The Welfare Reform Act of 1996 (vetoed twice) had its pluses and minuses. The rolls were reduced 60%. However, 4 million are still on the rolls. A teen mother under 18 is required to stay in school to collect her check ($150-$300 depending on the state). Because of generational welfare history these 4 million are the most difficult to change. The welfare check is not enough to keep them in school. The poorest of the poor are getting poorer. I believe the government should pay teen mothers enough to stay in school (free onsite daycare + attendance bonus + graduation bonus). Education is the answer. 
I hope you will consider these recommendations in your gang and crime work. Thank you for your time.
The inertialess limit of particle sedimentation modeled by the Vlasov-Stokes equations

Richard M. Höfer (University of Bonn, Institute for Applied Mathematics, Endenicher Allee 60, 53115 Bonn, Germany. Email: hoefer@iam.uni-bonn.de, Phone: +49 228 735602)

July 16, 2019

###### Abstract

We study the Vlasov-Stokes equations which macroscopically model the sedimentation of a cloud of particles in a fluid, where particle inertia are taken into account but fluid inertia are assumed to be negligible. We consider the limit when the inertia of the particles tends to zero, and obtain convergence of the dynamics to the solution of an associated inertialess system of equations. This system coincides with the model that can be derived as the homogenization limit of the microscopic inertialess dynamics.

## 1 Introduction

We consider the sedimentation of a cloud of identical spherical particles suspended in a fluid subject to gravitation. It is assumed that the suspension is sufficiently dilute such that collisions of particles do not play a role. Furthermore, we neglect inertial forces of the fluid, i.e., the fluid is modeled by a Stokes equation, but particle inertia are taken into account. These assumptions are justified if the Reynolds number is much smaller than the Stokes number, which is the case for very small particles in gases. We refer to [Koc90] for the details of the microscopic model and a discussion about the regime of validity.

Let a nonnegative function $f(t,x,v)$ describe the number density of particles at time $t$ and position $x$ with velocity $v$. We denote the position density and current by

$$\rho(t,x) := \int_{\mathbb{R}^3} f(t,x,v) \, dv, \tag{1}$$
$$j(t,x) := \rho(t,x)\bar{V}(t,x) := \int_{\mathbb{R}^3} f(t,x,v) \, v \, dv. \tag{2}$$

Here, the mean velocity $\bar{V}$ is defined to be zero in the set $\{\rho = 0\}$.
As a model for the macroscopic dynamics, we consider the so-called Vlasov-Stokes equations, a Vlasov equation for the particles coupled with Brinkman equations for the fluid,

$$\partial_t f + v \cdot \nabla_x f + \lambda \operatorname{div}_v \Big( \hat{g} f + \tfrac{9}{2}\gamma (u - v) f \Big) = 0, \qquad f(0,\cdot,\cdot) = f_0, \tag{3}$$
$$-\Delta u + \nabla p + 6\pi\gamma \rho (u - \bar{V}) = 0, \qquad \operatorname{div} u = 0.$$

Here, $u$ and $p$ are the fluid velocity and pressure respectively, with $\hat{g}$ being the rescaled gravitational acceleration, and $\lambda$ and $\gamma$ are constants that will be discussed below. The first equation expresses that the forces acting on the particles are the gravitation and the drag exerted by the fluid. The Brinkman equations are Stokes equations with a force term that arises from the same drag.

A rigorous derivation of these macroscopic equations from the microscopic dynamics has not been achieved yet; a formal derivation can be found in [Koc90]. In the quasi-static case, the Brinkman equations have been established in [Al90a], [DGR08]. Using this, the Vlasov-Stokes equations (3) can be formally derived from the microscopic dynamics after non-dimensionalizing. The constants $\lambda$ and $\gamma$ are given by

$$\lambda = \frac{\mu^2}{\rho_p(\rho_p - \rho_f)\phi^2 |g| L^3}, \qquad \gamma = \frac{\phi L^2}{R^2}, \tag{4}$$

where $\mu$ is the fluid viscosity, $\rho_p$ and $\rho_f$ are the particle and fluid mass density respectively, $\phi$ is the volume fraction of the particles, $L$ is the diameter of the cloud of particles, and $R$ the radius of the particles. The constant $\gamma$ determines the interaction strength between fluid and particles. The Stokes number, which determines the strength of the inertial forces, is proportional to $1/(\lambda\gamma)$. For definiteness, we assume $\gamma$ to be of order one, such that the Stokes number is of order $1/\lambda$. Then, the larger $\lambda$, the less important inertial effects become. For a more detailed discussion of these parameters as well as a formal derivation of the system (3), we refer to [Hof16].

For equations similar to (3), global well-posedness has been proven in [Ham98] and [BDGM09].
In [Jab00], the author considers the inertialess limit of the system, where the fluid velocity in (5) is replaced by a force term that is given by a convolution operator which is more regular than the Stokes convolution operator. In [Gou01], similar limits are studied for a one dimensional model without gravity and including inertial forces on the fluid. In [GP04], the authors consider limits of high and low inertia of the system of a Vlasov equation without gravity and with a given random fluid velocity field. Similar systems that include Brownian motion of the particles and their limits have been studied among others in [CP83], [GJV04a], [GJV04b], [CG06], and [GHMZ10].

### 1.1 Main result

We are interested in the limit $\lambda \to \infty$, which corresponds to inertialess particles. For the ease of notation we drop all the other constants and consider the system

$$\partial_t f + v \cdot \nabla_x f + \lambda \operatorname{div}_v \big( g f + (u - v) f \big) = 0, \qquad f(0,\cdot,\cdot) = f_0, \tag{5}$$
$$-\Delta u + \nabla p + \rho (u - \bar{V}) = 0, \qquad \operatorname{div} u = 0.$$

For inertialess particles, the following macroscopic system has been proven in [Hof16] to be the homogenization limit of many small particles:

$$\partial_t \rho_* + (g + u_*) \cdot \nabla \rho_* = 0, \qquad \rho_*(0,\cdot) = \rho_0 := \int_{\mathbb{R}^3} f_0 \, dv, \tag{6}$$
$$-\Delta u_* + \nabla p = g \rho_*, \qquad \operatorname{div} u_* = 0.$$

Moreover, well-posedness of this system has been proven in [Hof16].

In these equations, particles are described by their position density only, because their velocity is the sum of the fluid velocity and the constant $g$, which is the direct effect due to gravitation.

The main result of this paper is the following theorem.

###### Theorem 1.1.

Assume $f_0 \in C^1(\mathbb{R}^3 \times \mathbb{R}^3)$ is nonnegative and compactly supported. Then, for every $\lambda > 0$, there exists a unique solution $f_\lambda$ to (5). Let $\rho_*$ be the unique solution to (6). Then, for all $T > 0$, $\alpha \in (0,1)$, and all $t \in (0,T)$,

$$\rho_\lambda \to \rho_* \quad \text{in } C^{0,\alpha}((0,T) \times \mathbb{R}^3), \tag{7}$$
$$u_\lambda \to u_* \quad \text{in } L^\infty((t,T); W^{1,\infty}(\mathbb{R}^3)) \text{ and in } L^1((0,T); W^{1,\infty}(\mathbb{R}^3)). \tag{8}$$

Formally, for large values of $\lambda$, the first equation in (5) forces the particles to attain the velocity $u + g$, i.e., the density concentrates around $v = u + g$. Using that and integrating the first equation in (5) in $v$ leads to the first equation in (6). Moreover, $\bar{V}$ in the fluid equation in (5) can formally be replaced by $u + g$, which leads to the fluid equation in (6).

Formally, the adjustment of the particle velocities described above happens in times of order $1/\lambda$. In fact, the process is more complicated as the fluid velocity changes very fast in this time scale as well. In other words, there is a boundary layer of width $1/\lambda$ at time zero for the convergence of the fluid (and particle) velocity. This is the reason why the convergence can only hold uniformly on time intervals $(t,T)$ with $t > 0$ as stated in the theorem. The particles, however, do not move significantly in times of order $1/\lambda$. Thus, there is no boundary layer in the convergence $\rho_\lambda \to \rho_*$.

### 1.2 Idea of the proof

We introduce the kinetic energy of the particles

$$E(t) := \int_{\mathbb{R}^3 \times \mathbb{R}^3} |v|^2 f \, dx \, dv.$$

Using the Vlasov-Stokes equations (5) yields the following energy identities for the fluid velocity and the particle energy (cf. Lemma 2.1 and Lemma 2.2):

$$\|\nabla u\|_{L^2(\mathbb{R}^3)}^2 + \|u\|_{L^2_\rho}^2 = (u,j)_{L^2(\mathbb{R}^3)} \le \|\bar{V}\|_{L^2_\rho}^2 \le E, \tag{9}$$
$$\frac{1}{2} \frac{d}{dt} E = \lambda \Big( g \cdot \int_{\mathbb{R}^3} j \, dx - \int_{\mathbb{R}^3 \times \mathbb{R}^3} (u - v)^2 f \, dx \, dv - \|\nabla u\|_{L^2(\mathbb{R}^3)}^2 \Big). \tag{10}$$

Here and in the following, the weighted $L^p$-norm is defined by

$$\|h\|_{L^p_\rho}^p := \int_{\mathbb{R}^3} |h|^p \rho \, dx.$$

As expected, equation (10) shows that there is loss of energy due to friction (friction between the particles and the fluid as well as friction inside of the fluid), but the gravity pumps energy into the system (if we assume $g \cdot \int_{\mathbb{R}^3} j \, dx > 0$, which at least after some time should be the case).
Note that the Vlasov-Stokes equations (5) also imply that the mass of the particles is conserved.

To analyze solutions to the Vlasov equation in (5), we look at the characteristic curves starting at time $t$ at position $(x,v)$, where $Z$ denotes the value of the solution along the characteristic curve:

$$\partial_s X = V, \qquad X(t,t,x,v) = x, \tag{11}$$
$$\partial_s V = \lambda \big( g + u(s,X) - V(s,t,x,v) \big), \qquad V(t,t,x,v) = v,$$
$$\partial_s Z = 3\lambda Z, \qquad Z(t,t,x,v) = f(t,x,v).$$

By the standard theory, any sufficiently regular solution is of the form

$$f(t,x,v) = e^{3\lambda t} f_0\big( X(0,t,x,v), V(0,t,x,v) \big). \tag{12}$$

Using the characteristics as well as estimates based on the energy identities (9) and (10) and regularity theory of Stokes equations, we prove global well-posedness of the Vlasov-Stokes equations (5) for compactly supported initial data $f_0$. A similar approach based on an analysis of the characteristics has been used to prove existence of solutions to the Vlasov-Poisson equations in [BD85], [Pfa92], and [Sch91] (see also [Gla96]). From the PDE point of view, the electrostatic potential appearing in the Vlasov-Poisson equation is similar to the fluid velocity in the Vlasov-Stokes equations. However, in the Vlasov-Poisson equations, the force acting on the particles is the gradient of the electrostatic potential, whereas in the Vlasov-Stokes equations, only the fluid velocity itself contributes. This makes it possible to prove existence (and also uniqueness) in a much simpler way for the Vlasov-Stokes equations.

In order to prove the convergence in Theorem 1.1, the starting point is integrating the characteristics, which yields

$$V(t,0,x,v) - V(0,0,x,v) = \lambda \Big( \int_0^t u_\lambda(s, X(s,0,x,v)) + g \, ds + X(0,0,x,v) - X(t,0,x,v) \Big). \tag{13}$$

Thus,

$$\Big| X(t,0,x,v) - x - \int_0^t u_\lambda(s, X(s,0,x,v)) + g \, ds \Big| \le \frac{|V(t,0,x,v) - v|}{\lambda}. \tag{14}$$

Therefore, provided the speed of the particles does not blow up, we see that for large values of $\lambda$ the particles are almost transported by the fluid plus the gravity. Clearly, this is also what happens for solutions to the limit inertialess equations (6).

In order to show that $u_\lambda$ is close to $u_*$, we introduce a fluid velocity $\tilde{u}_\lambda$, which can be viewed as intermediate between $u_\lambda$ and $u_*$, by

$$-\Delta \tilde{u}_\lambda + \nabla p_\lambda = g \rho_\lambda, \qquad \operatorname{div} \tilde{u}_\lambda = 0. \tag{15}$$

In order to prove smallness of $u_\lambda - \tilde{u}_\lambda$, one needs estimates on $f_\lambda$ and $E$ that are uniform in $\lambda$, which are more difficult to obtain than those that we use in the proof of well-posedness. Indeed, in view of the energy identity for the particles (10), any naive estimate based on that equation will blow up as $\lambda \to \infty$. However, as the first term is linear in the velocity and the other terms (which have a good sign) are quadratic, the energy cannot exceed a certain value as long as the particle density is not too concentrated (cf. Lemma 3.2). In other words, if the energy is high enough, the quadratic friction terms will prevail over the linear gravitation terms and therefore prevent the energy from increasing further. However, if concentrations of the particle density occur, the particles essentially fall down like one small and heavy particle, leading to large velocities. Indeed, the terminal velocity of a spherical particle of radius $R$ in a Stokes fluid at rest is

$$V = \frac{2}{9} \frac{\rho_p - \rho_f}{\mu} g R^2.$$

In order to rule out such concentration effects, we use again the representation of $f$ in (12) obtained from the characteristics. Indeed, computing $\rho_\lambda$ by taking the integral over $v$ in (12), we can show that the prefactor $e^{3\lambda t}$ in that formula is canceled due to concentration of $f$ in velocity space in regions of size $e^{-\lambda t}$, as long as we control $\nabla u$ in a suitable way (cf. Lemma 3.4).
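The relaxation mechanism behind (11), (13), and (14) is easy to see numerically: for a frozen smooth velocity field, the characteristic ODE drives $V$ to $u(X)+g$ on a time scale $1/\lambda$. A minimal one-dimensional sketch (the field $u$, the value of $g$, and all parameter values are illustrative only, not taken from the paper):

```python
import math

# Characteristic system (11) with a frozen 1D model field u(x);
# explicit Euler in time, with g = -1 in nondimensional units.
def simulate(lam, t_end=1.0, dt=1e-4, x0=0.0, v0=5.0, g=-1.0):
    u = lambda x: math.sin(x)          # illustrative smooth fluid field
    x, v = x0, v0
    t = 0.0
    while t < t_end:
        x += dt * v
        v += dt * lam * (g + u(x) - v)  # drag relaxes V towards u(X) + g
        t += dt
    return x, v, u(x)

# For large lambda the particle velocity locks onto u(X) + g, which is
# the transport law of the inertialess limit (6); the residual is of
# order 1/lambda.
x, v, ux = simulate(lam=200.0)
print(abs(v - (ux - 1.0)))
```

Increasing `lam` shrinks the residual further, mirroring the boundary layer of width $1/\lambda$ discussed after Theorem 1.1.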
As $\nabla u$ is controlled by $E$ due to the energy identity (9), this enables us to get uniform estimates for the energy $E$, the support radius $Q$, and the fluid velocity $u_\lambda$ for small times.

It turns out that also estimates on derivatives of the characteristics are needed to prove smallness of $u_\lambda - \tilde{u}_\lambda$. These are provided by a more detailed analysis of the characteristics.

### 1.3 Plan of the paper

The rest of the paper is organized as follows.

In Section 2, we prove global well-posedness of the Vlasov-Stokes equations (5), based on energy estimates, analysis of the characteristics, and a fixed point argument.

In Section 3, we derive a priori estimates that are uniform in $\lambda$ for small times by analyzing the characteristics more carefully. In particular we prove and use that the supports of the solutions concentrate in the space of velocities.

In Section 4.1, we use the a priori estimates proven in Section 3 to show that the fluid velocity $u_\lambda$ is close to the intermediate fluid velocity $\tilde{u}_\lambda$ defined in (15) as $\lambda \to \infty$. In Section 4.2, we prove the assertion of the main result, Theorem 1.1, up to times where we have uniform a priori estimates. This follows from compactness due to the a priori estimates and convergence of averages of $f_\lambda$ on small cubes, which we prove using again the characteristic equations. In Section 4.3, we finish the proof of the main result, Theorem 1.1, by extending the a priori estimates from Section 3 to arbitrary times. This is done by using both the a priori estimates and the convergence for small times.

## 2 Global well-posedness of the Vlasov-Stokes equations

In this section, we write $C$ for any constant that depends only on the initial datum. Any additional dependencies are denoted by arguments of $C$, e.g. $C(t,\lambda)$ is a constant that depends only on $t$, $\lambda$, and the initial datum. We use the convention that $C$ is monotone in all its arguments.

### 2.1 Estimates for the fluid velocity

###### Lemma 2.1.

Let $g \in L^1 \cap L^\infty(\mathbb{R}^3 \times \mathbb{R}^3)$ be nonnegative, and assume $Q > 0$ is such that $\operatorname{supp} g \subset \overline{B_Q(0)} \subset \mathbb{R}^3 \times \mathbb{R}^3$.
Let

$$\rho(x) := \int_{\mathbb{R}^3} g(x,v) \, dv, \tag{16}$$
$$j(x) := \rho \bar{V} := \int_{\mathbb{R}^3} g(x,v) \, v \, dv, \tag{17}$$
$$E := \int_{\mathbb{R}^3 \times \mathbb{R}^3} g(x,v) |v|^2 \, dx \, dv. \tag{18}$$

Then there exists a unique weak solution $u$ to the Brinkman equation

$$-\Delta u + \nabla p + \rho u = j. \tag{19}$$

Moreover,

$$\|\nabla u\|_{L^2(\mathbb{R}^3)}^2 + \|u\|_{L^2_\rho(\mathbb{R}^3)}^2 = (u,j)_{L^2(\mathbb{R}^3)} \le \|\bar{V}\|_{L^2_\rho(\mathbb{R}^3)}^2 \le E, \tag{20}$$
$$\|u\|_{L^\infty(\mathbb{R}^3)} \le C\big( \|g\|_{L^\infty}, \|g\|_{L^1}, E \big)(1 + Q), \tag{21}$$
$$\|u\|_{W^{1,\infty}(\mathbb{R}^3)} \le C(Q,E) \|g\|_{L^\infty}. \tag{22}$$

###### Proof.

Existence and uniqueness of weak solutions follows from the Lax-Milgram theorem.

In the following, we write $\|\cdot\|_p$ instead of $\|\cdot\|_{L^p(\mathbb{R}^3)}$ and $\|\cdot\|_{L^p_\rho}$ instead of $\|\cdot\|_{L^p_\rho(\mathbb{R}^3)}$. Testing the Brinkman equation with $u$ itself yields

$$\|\nabla u\|_2^2 + \|u\|_{L^2_\rho}^2 = (j,u)_{L^2(\mathbb{R}^3)} \le \|u\|_{L^2_\rho} \|\bar{V}\|_{L^2_\rho}. \tag{23}$$

By the Cauchy-Schwarz inequality

$$\bar{V}^2 \rho = \frac{\big( \int_{\mathbb{R}^3} g(x,v) \, v \, dv \big)^2}{\int_{\mathbb{R}^3} g(x,v) \, dv} \le \int_{\mathbb{R}^3} g(x,v) \, v^2 \, dv. \tag{24}$$

Hence,

$$\|u\|_{L^2_\rho}^2 \le \|\bar{V}\|_{L^2_\rho}^2 \le E.$$

Using again (23) yields (20). Using the critical Sobolev embedding, we have

$$\|u\|_6^2 \le C \|\nabla u\|_2^2 \le CE. \tag{25}$$

Moreover, we can use this Sobolev inequality in (20) to get

$$\|u\|_6^2 \le C \|u\|_6 \|j\|_{6/5}.$$

Using the definition of $j$ yields $\|j\|_{6/5} \le C(Q) \|g\|_\infty$ and therefore

$$\|\nabla u\|_2 + \|u\|_6 \le C(Q) \|g\|_\infty. \tag{26}$$

Standard regularity theory for the Stokes equation (see [Ga11]) implies

$$\|\nabla^2 u\|_q \le C \|\rho u\|_q + C \|j\|_q \tag{27}$$

for all $q \in (1,\infty)$.
In order to prove (22), we use (27) and (25) to get

$$\|\nabla^2 u\|_6 \le C \|\rho u\|_6 + C \|j\|_6 \le C \|\rho\|_\infty \|u\|_6 + C \|j\|_6 \le C(E,Q) \|g\|_\infty.$$

Hence, by Sobolev embedding and (26),

$$\|\nabla u\|_\infty \le C \|\nabla^2 u\|_6 + C \|\nabla u\|_2 \le C(E,Q) \|g\|_\infty,$$

and similarly for $\|u\|_\infty$.

It remains to prove (21). Let $R > 0$. Then,

$$\rho = \int_{\mathbb{R}^3} g \, dv \le \int_{\{|v| \le R\}} g \, dv + R^{-2} \int_{\{|v| > R\}} |v|^2 g \, dv \le C R^3 \|g\|_\infty + C R^{-2} \int_{\{|v| > R\}} |v|^2 g \, dv. \tag{28}$$

We choose

$$R = \Big( \int_{\mathbb{R}^3} |v|^2 g \, dv \Big)^{1/5} \|g\|_\infty^{-1/5}.$$

Thus,

$$\rho \le C \|g\|_\infty^{2/5} \Big( \int_{\mathbb{R}^3} |v|^2 g \, dv \Big)^{3/5},$$

and therefore,

$$\|\rho\|_{5/3} \le C \|g\|_\infty^{2/5} E^{3/5}. \tag{29}$$

Moreover, by definition of $Q$, (29) implies for all $p \in [1, 5/3]$,

$$\|j\|_p \le Q \|\rho\|_p \le C(\|g\|_\infty, \|g\|_1, E) Q. \tag{30}$$

Sobolev and Hölder's inequality imply

$$\|u\|_{10} \le C \|\nabla^2 u\|_{30/23} \le C \|\rho\|_{5/3} \|u\|_6 + C \|j\|_{30/23} \le C(\|g\|_\infty, \|g\|_1, E)(1 + Q),$$

where we used (25), (29), and (30). Now, we can repeat the argument, using this improved estimate for $u$ in (27). This yields

$$\|u\|_{30} \le C(\|g\|_\infty, \|g\|_1, E)(1 + Q).$$

Using again (27) yields

$$\|\nabla^2 u\|_{30/19} \le C(\|g\|_\infty, \|g\|_1, E)(1 + Q).$$

As $30/19 > 3/2$, we can apply Sobolev embedding to get

$$\|u\|_\infty \le C \|\nabla^2 u\|_{30/19} + C \|u\|_6 \le C(\|g\|_\infty, \|g\|_1, E)(1 + Q),$$

which finishes the proof of (21). ∎

### 2.2 A priori estimates for the particle density

###### Lemma 2.2.

Let $\lambda > 0$ and $T > 0$, and let $Q_0$ be minimal such that $\operatorname{supp} f_0 \subset \overline{B_{Q_0}(0)}$. Assume $f$ is a solution to (5) with $f(0,\cdot,\cdot) = f_0$. Then, $f$ is compactly supported on $[0,T]$. Let $Q(t)$ be minimal such that $\operatorname{supp} f(t,\cdot,\cdot) \subset \overline{B_{Q(t)}(0)}$. Furthermore, define

$$E(t) := \int_{\mathbb{R}^3 \times \mathbb{R}^3} |v|^2 f \, dx \, dv. \tag{31}$$

Then,

$$\|f(t,\cdot,\cdot)\|_{L^\infty(\mathbb{R}^3 \times \mathbb{R}^3)} = e^{3\lambda t} \|f_0\|_{L^\infty(\mathbb{R}^3 \times \mathbb{R}^3)}, \tag{32}$$
$$\|\rho\|_1 = 1, \tag{33}$$
$$\partial_t E = 2\lambda \Big( g \cdot \int_{\mathbb{R}^3} j \, dx - \int_{\mathbb{R}^3 \times \mathbb{R}^3} (u - v)^2 f \, dx \, dv - \|\nabla u\|_{L^2(\mathbb{R}^3)}^2 \Big) \tag{34}$$
$$\le 2\lambda \Big( C E^{\frac{1}{2}} - \int_{\mathbb{R}^3 \times \mathbb{R}^3} (v - \bar{V})^2 f \, dx \, dv - \|u - \bar{V}\|_{L^2_\rho(\mathbb{R}^3)}^2 - \|\nabla u\|_{L^2(\mathbb{R}^3)}^2 \Big), \tag{35}$$
$$E(t) \le C(1 + (\lambda t)^2), \tag{36}$$
$$Q(t) \le C(t, \lambda). \tag{37}$$

###### Proof.

By the regularity assumptions on $f_0$ and $u$, the characteristics in (11) are well defined and (12) holds. This shows that the support of $f$ remains uniformly bounded on compact time intervals.

The exponential growth of the $L^\infty$-norm in (32) follows from the characteristic equations, as we have seen in (12).

Mass conservation (33) follows directly from integrating the Vlasov equation (5).

We multiply the Vlasov equation by $|v|^2$ and integrate to find

$$\partial_t E = 2 \int_{\mathbb{R}^3 \times \mathbb{R}^3} v \cdot \lambda (g + u - v) f \, dx \, dv \tag{38}$$
$$= 2\lambda \Big( g \cdot \int_{\mathbb{R}^3 \times \mathbb{R}^3} v f \, dx \, dv - \int_{\mathbb{R}^3 \times \mathbb{R}^3} (u - v)^2 f \, dx \, dv + \int_{\mathbb{R}^3 \times \mathbb{R}^3} u \cdot (u - v) f \, dx \, dv \Big)$$
$$= 2\lambda \Big( g \cdot \int_{\mathbb{R}^3} j \, dx - \int_{\mathbb{R}^3 \times \mathbb{R}^3} (u - v)^2 f \, dx \, dv - \|\nabla u\|_{L^2(\mathbb{R}^3)}^2 \Big).$$

This yields the identity (34). By the Cauchy-Schwarz inequality

$$\int_{\mathbb{R}^3} |j| \, dx \le \int_{\mathbb{R}^3 \times \mathbb{R}^3} |v| f \, dv \, dx \le \|\rho\|_{L^1(\mathbb{R}^3)}^{1/2} E^{1/2}. \tag{39}$$

Moreover, by definition of $\bar{V}$ in (2),

$$\int_{\mathbb{R}^3 \times \mathbb{R}^3} (u - v)^2 f \, dx \, dv = \int_{\mathbb{R}^3 \times \mathbb{R}^3} \big( (v - \bar{V})^2 + (\bar{V} - u)^2 - 2 (v - \bar{V})(\bar{V} - u) \big) f \, dx \, dv = \int_{\mathbb{R}^3 \times \mathbb{R}^3} (v - \bar{V})^2 f \, dx \, dv + \|u - \bar{V}\|_{L^2_\rho(\mathbb{R}^3)}^2. \tag{40}$$

Using (39) and (40) shows (35).

In particular,

$$\partial_t E \le C \lambda E^{1/2}.$$

This proves (36) by a comparison principle for ODEs.

The characteristic equation for $V$ in (11) implies

$$|V(t,0,x,v)| = \Big| e^{-\lambda t} \Big( v + \lambda \int_0^t e^{\lambda s} \big( g + u(s, X(s,0,x,v)) \big) \, ds \Big) \Big| \le e^{-\lambda t} |v| + |g| + \int_0^t \|u(s,\cdot)\|_{L^\infty(\mathbb{R}^3)} \, ds.$$

Thus, for all $(x,v) \in \operatorname{supp} f_0$, we get by Lemma 2.1, (32), (33), and (36)

$$|V(t,0,x,v)| \le Q_0 + 1 + C\big( \|f\|_{L^\infty((0,t) \times \mathbb{R}^3 \times \mathbb{R}^3)}, \|E\|_{L^\infty(0,t)} \big) \int_0^t (1 + Q(s)) \, ds \tag{41}$$
$$\le C + C(\lambda t) \int_0^t (1 + Q(s)) \, ds.$$

By the equation for $X$, we get for all $(x,v) \in \operatorname{supp} f_0$

$$|X(t,0,x,v)| \le Q_0 + \int_0^t |V(s,0,x,v)| \, ds \le Q_0 + t C(\lambda t) \int_0^t (1 + Q(s)) \, ds. \tag{42}$$

Hence,

$$Q(t) \le \sup_{(x,v) \in \operatorname{supp} f_0} |(X(t,0,x,v), V(t,0,x,v))| \le C + (1 + t) C(\lambda t) \int_0^t (1 + Q(s)) \, ds.$$

Gronwall's inequality yields (37). ∎

### 2.3 Well-posedness by the Banach fixed point theorem

###### Proposition 2.3.

Let $f_0 \in C^1(\mathbb{R}^3 \times \mathbb{R}^3)$ with compact support. Then, for all $T > 0$, there exists a unique solution $f$ to (5) with $f(0,\cdot,\cdot) = f_0$.

###### Proof.

We want to prove existence of solutions using the Banach fixed point theorem. Let $T > 0$. We define the metric space, where we want to prove contractiveness,

$$Y := \Big\{ h \in L^\infty((0,T) \times \mathbb{R}^3 \times \mathbb{R}^3) : h \ge 0, \ \|h(t,\cdot)\|_{L^1(\mathbb{R}^3 \times \mathbb{R}^3)} = \|f_0\|_{L^1(\mathbb{R}^3 \times \mathbb{R}^3)}, \tag{43}$$
$$\int_{\mathbb{R}^3 \times \mathbb{R}^3} (1 + |v|^2) h \, dx \, dv \le E_1, \ \operatorname{supp} h \subset [0,T] \times \overline{B_{Q_1}(0)} \Big\}. \tag{44}$$

Then, $Y$, equipped with the $L^\infty$-metric, is a complete metric space. Let $g_1, g_2 \in Y$.
For , we define to be the solution to\n\n \u2212\u0394ui+\u2207p=\u222b\u221e0\u222bR3(v\u2212ui)hidv.\n\nWe define the characteristics analogously to (11) by\n\n \u2202s(Xi,Vi)(s,t,x,v) =(Vi(s,t,x,v),g+ui(s,Xi(s,t,x,v))\u2212Vi(s,t,x,v)), (45) (Xi,Vi)(t,t,x,v)=(x,v). (46)\n\nThen, the solutions to the equation\n\n \u2202tfi+v\u22c5\u2207xfi+\u03bbdivv(gfi+(ui\u2212v)fi)=0, (47)\n\nwith initial datum is given by\n\n fi(t,x,v)=e3\u03bbtf0((Xi,Vi)(0,t,x,v)), (48)\n\nand . We estimate\n\n |f1(t,x,v)\u2212f2(t,x,v)|\u2264e3\u03bbt\u2225\u2207f0\u2225L\u221e(R3\u00d7R3)|(X1,V1)(0,t,x,v)\u2212(X2,V2)(0,t,x,v)|. (49)\n\nFurthermore, writing instead of and similar for , we have for all\n\n |(X1,V1)(s)\u2212(X2,V2)(s)| (50) \u2264\u222bts|(V1(\u03c4)\u2212V2(\u03c4),\u03bb(u1(\u03c4,X1(\u03c4))\u2212u2(\u03c4,X2(\u03c4))\u2212V1(\u03c4)+V2(\u03c4)))|d\u03c4 (51) \u2264\u222bts|V1(\u03c4)\u2212V2(\u03c4)|+\u2225\u2207u1(\u03c4,\u22c5)\u2225L\u221e(R3)|X1(\u03c4)\u2212X2(\u03c4)|+\u2225u1(\u03c4,\u22c5)\u2212u2(\u03c4,\u22c5)\u2225L\u221e(R3)d\u03c4 (52) \u2264C(X,Q1,E1)V\u222bts|(X1,V1)(\u03c4)\u2212(X2,V2)(\u03c4)|d\u03c4+C(Q1,E1)(t\u2212s)\u2225g1\u2212g2\u2225L\u221e(R3), (53)\n\nwhere we used Lemma 2.1. Gronwall\u2019s inequality implies\n\n |(X1,V1)(t)\u2212(X2,V2)(t)|\u2264C(Q1,E1)t\u2225g1\u2212g2\u2225L\u221e(R3)exp(C(Q1,E1)\u2225g1\u2225L\u221e(R3)t).\n\nInserting this in (49) yields\n\n \u2225f1\u2212f2\u2225L\u221e((0,T)\u00d7R3\u00d7R3) (54) \u2264Te3TC(Q1,E1)\u2225\u2207f0\u2225L\u221e(R3)\u2225g1\u2212g2\u2225L\u221e(R3)exp(C(Q1,E1)T\u2225g1\u2225L\u221e(R3))\n\nFor , consider . Then, for all , equation (54) implies that there exists such that the mapping is contractive. We have to check that implies . First,\n\n \u2225f(t,\u22c5,\u22c5)\u2225L1(R3)=\u2225f0\u2225L1(R3) (55)\n\nfollows from the equation. 
Moreover, for any , equation (48) implies that we can choose sufficiently small such that\n\n \u2225f\u2225L\u221e((0,T)\u00d7R3\u00d7R3)=\u2225f0\u2225L\u221e(R3\u00d7R3)e3\u03bbT\u2264L. (56)\n\nFurthermore, we have\n\n \u2202t\u222bR3\u222bR3|v|2fdxdv =2\u222bR3\u222bR3v\u22c5(g+u\u2212v)fdxdv (57) \u22642(|g|+\u2225u\u2225L\u221e(R3))\u222bR3\u222bR3(1+|v|2)fdxdv. (58)\n\nHence, using mass conservation, equation (55),\n\n \u2202t\u222bR3\u00d7R3(1+|v|2)fdxdv\u2264(|g|+\u2225u\u2225L\u221e(R3))\u222bR3\u00d7R3(1+|v|2)fdxdv.\n\nTherefore, Lemma 2.1 and Gronwall\u2019s inequality imply\n\n \u222bR3\u00d7R3(1+|v|2)fdxdv\u2264\u222bR3\u00d7R3(1+|v|2)f0dvdxexp(C(Q1,E1)Lt). (59)\n\nThus, for any , we can choose small enough such that for all .\n\nFinally, we need to control the support of . To do this, we follow the same argument as in the last part of the proof of Lemma 2.2 to get\n\n Q(t)\u2264Q0+(1+t)\u222bt0C(L,E1,Q1)ds\u2264Q0+(1+t)tC(L,E1,Q1).\n\nAgain, for any , we can choose small enough such that for all .\n\nTherefore, by the Banach fixed point theorem, we get local in time existence of solutions to (5). Global existence follows directly from the a priori estimates in Lemma 2.2, since these ensure that all the relevant quantities for the fixed point argument do not blow up in finite time.\n\nSince with uniform compact support, higher regularity of follows from taking derivatives in the Brinkman equations in (5) and using regularity theory for Stokes equations similar as in the proof of Lemma 2.1. \u220e\n\n## 3 Uniform estimates on \u03c1\u03bb and u\u03bb\n\nIn the following, we assume that is the solution to the Vlasov-Stokes equations (5) for some and some compactly supported initial datum . In this section we want to derive a priori estimates for these solutions that do not depend on . This is why we cannot use the a priori estimates derived in Lemma 2.2. 
However, the drawback of the estimates that we prove in this section is that they allow for blow-up in finite time. This is also why they are not suitable in the proof of global well-posedness, that we showed in the previous section. Later, we will use the limit equation in order to show that the estimates derived here allow for uniform estimates for arbitrary times.\n\nAgain, we denote by any constant, which only depends on and may change from line to line.\n\n### 3.1 Estimates for the fluid velocity\n\nIn this subsection we show that the fluid velocity as well as the particle velocity is controlled by , uniformly in , which means that high velocities can only occur if particles concentrate in position space. This also implies control on the particle positions and velocities\n\nThe proof is based on the energy identity from Lemma 2.2, equation (34), and the subsequent estimate (35). The idea is to estimate the sum of the quadratic terms in that expression, which have a negative sign, by from below. The following Lemma, which is a general observation on weighted -spaces, shows why such an estimate is true if is not too large.\n\nHaving shown this estimate, the quadratic terms in (35) dominate the linear term, which has been estimated by . 
This leads to control of uniformly in .

###### Lemma 3.1.

There exists a constant , such that for all nonnegative , and ,

 ∥∇w∥²_{L²(R³)} + ∥w − h∥²_{L²_σ(R³)} ≥ c₀ min{∥σ∥⁻¹_L
Dieudonne Wilfried Seyi Ntsengue (born 23 January 1998) is a Cameroonian professional boxer. As an amateur he competed in the men's middleweight event at the 2016 Summer Olympics. He was defeated by Egypt's Hosam Bakr Abdin in the round of 16. He was the flag bearer for Cameroon for both the Parade of Nations during the opening ceremony and the closing ceremony. He competed at the 2020 Summer Olympics in the men's middleweight event.
Reveille Coffee's post on Columbus in North Beach is one of our favorite weekend haunts. We love the overall chill environment and the dry flower arrangements. Love the latte art as well as their cappuccinos. We also love a cup of simple black coffee with the toad in the hole egg sandwich with bacon, fried egg, braised kale and leek with gruyere for $9.00. We would definitely try the homemade biscuits, Inna pluot jam and butter or the homemade peanut butter and pluot jelly sandwich sometime.
Written and presented for the 10th Anniversary of the New England Public Radio Arts and Humanities Award. Written in honor of NEPR, my co-award recipients, all independent media and those who create and struggle for justice. and isn't a revolution 360 degrees? a new way to see? the lips of our empathy. are a revolution of human grace. Copyright, Magdalena Gómez, 2018. May not be reproduced in any form without the express written consent of the author. I've been called a provocateur-always by people I respect. It has been meant as an affirmation and compliment, and that is how I receive it. To be provocative is a necessary component in the creation of art. If not to move people, then what? I don't create to be liked, I create to provoke thought, to evoke visceral response and ultimately to inspire positive action for social change.
define(['src/util/datatraversing', 'src/util/actionmanager'], function(Traversing, ActionManager) {

    var allScripts = [];
    var variableFilters;

    function setVarFilter(name, element, filter) {
        var self = this;
        if (!filter) {
            self.getRepositoryData().set(name, element);
            return;
        }
        require([filter], function(filterFunction) {
            self.getRepositoryData().set(name, filterFunction(element));
        });
    }

    function setVar(name, element, jpath, filter) {
        var self = this;
        if (!jpath || !element.getChild) {
            setVarFilter.call(self, name, element, filter);
            return;
        }
        self.repositoryData.set(name, null);
        element.getChild(jpath).done(function(returned) {
            setVarFilter.call(self, name, returned, filter);
        });
    }

    function setHighlight(element, value) {
        if (!(element._highlight instanceof Array)) {
            return;
        }
        this.repositoryHighlights.set(element._highlight, value);
    }

    function setHighlightId(id, value) {
        this.repositoryHighlights.set(id, value);
    }

    return {
        getRepositoryData: function() { return this.repositoryData; },
        setRepositoryData: function(repo) { this.repositoryData = repo; },

        getRepositoryHighlights: function() { return this.repositoryHighlights; },
        setRepositoryHighlights: function(repo) { this.repositoryHighlights = repo; },

        getRepositoryActions: function() { return this.repositoryActions; },
        setRepositoryActions: function(repo) { this.repositoryActions = repo; },

        setVar: setVar,
        setVariable: setVar,

        resetVariables: function() { this.repositoryData.reset(); },

        getVar: function(name) {
            var data = this.repositoryData.get(name);
            if (data && data[1]) {
                return data[1];
            }
        },

        listenHighlight: function() {
            if (!Array.isArray(arguments[0]._highlight)) {
                return;
            }
            arguments[0] = arguments[0]._highlight;
            this.repositoryHighlights.listen.apply(this.repositoryHighlights, arguments);
        },

        killHighlight: function() {
            this.repositoryHighlights.kill.apply(this.repositoryHighlights, arguments);
        },

        highlight: setHighlight,
        highlightId: setHighlightId,
        setHighlight: setHighlight,

        doAction: function(key, value) { this.repositoryActions.set(key, value); },
        executeAction: function(name, value) { ActionManager.execute(name, value); },

        getAllFilters: function() { return variableFilters; },
        setAllFilters: function(filters) { variableFilters = filters; }
    };
});
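To illustrate the value convention that `getVar` relies on (it returns `data[1]`, i.e. the second slot of whatever entry the repository stores), here is a minimal stand-alone sketch. The `Repository` stub and the `[timestamp, value]` pair shape are assumptions made for the example; in the real application the repository objects are injected by the host framework via `setRepositoryData`.

```javascript
// Hypothetical minimal repository: entries stored as [timestamp, value]
// pairs, matching the data[1] access that getVar performs. This stub is
// NOT part of the module above; the real repositories are injected by
// the surrounding framework.
function Repository() {
    this.store = {};
}
Repository.prototype.set = function(name, value) {
    this.store[name] = [Date.now(), value];
};
Repository.prototype.get = function(name) {
    return this.store[name];
};

// A tiny consumer mimicking the module's getVar logic.
var api = {
    repositoryData: new Repository(),
    getVar: function(name) {
        var data = this.repositoryData.get(name);
        if (data && data[1]) {
            return data[1]; // value lives in slot 1 of the pair
        }
    }
};

api.repositoryData.set('molecule', { mw: 18.02 });
console.log(api.getVar('molecule').mw); // prints 18.02
```

Run under Node, this prints `18.02`; `getVar` returns `undefined` for unset names because the guard `data && data[1]` fails.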
St Mary's University warmly welcomes students from Israel. Below you will find lots of useful information for Israeli applicants but if you still have more questions, please feel free to email us. Israeli students who are offered a place to study at St Mary's will automatically be considered for one of St Mary's International Student Scholarships – these are available to new, full-time undergraduate and postgraduate taught degree students in any subject area. St Mary's staff regularly engage with potential students from Israel. If you have any questions about studying at St Mary's or how to go about making an application, please do not hesitate to email us. London has a long-standing Israeli community and there are many organisations that bring the community together through social events, sports and cultural activities, such as the annual Israeli festival held in London 'TLV in LDN'. The Anglo-Israel Association was founded to promote friendship and understanding between the people of the two countries and also organises a number of events throughout the year. You can also find a wide range of Israeli restaurants and food shops in and around London – many of these are listed on the TimeOut website. The Embassy of Israel in London has useful information for nationals from Israel living in the UK.
\section{Introduction}
It was shown years ago that the supersymmetric generalization of the bosonic {CP(N-1)} sigma model is automatically an $\mathcal {N}\!=\!(2,2)$ supersymmetric theory, by virtue of the fact that the target space {CP(N-1)} is a K\"{a}hler manifold \cite{4}. Recently, Edalati and Tong suggested and built an $\mathcal{N}\!=\!(0,2)$ {CP(N-1)} heterotic sigma model by studying the low-energy dynamics of vortex strings in $\mathcal{N}\!=\!1$ four-dimensional gauge theories \cite{5}. Later, Shifman and Yung formulated a geometric representation for this heterotic model with the {CP(N-1)} target space for the bosonic fields and an extra right-handed fermion which couples to the fermion fields of the $\mathcal{N}\!=\!(2,2)$ CP(N-1) model \cite{1}, and thus breaks the supersymmetry down to $\mathcal{N}\!=\!(0,2)$. In this paper, we follow the geometric formulation to discuss the fermionic zero modes of the $\mathcal{N}\!=\!(0,2)$ heterotic CP(1) sigma model in the instanton background. The nontrivial homotopy group structure of the target space CP(1) allows for a bosonic instanton background of the form of a holomorphic function $\phi=\rho/(z-z_{0})$, where $\rho$ and $z_{0}$ are the instanton size and center, respectively. Under such a background, fermionic zero modes can be obtained by acting with the supercharges and superconformal charges on the instanton solution. Since the heterotic deformation of the standard {CP(1)} sigma model preserves only half of the four original supercharges, the supersymmetries and superconformal symmetries are partly broken. The remaining fermionic generators are not sufficient to generate all fermionic zero modes from the instanton background. Therefore, we will find the zero modes by solving the Dirac equations. In Section 2, we first calculate the chiral anomaly to obtain the number of zero modes. In Section 3, we explicitly solve the Dirac equations for the zero modes and use the result of Section 2 to pick out the linearly independent modes.
In the last Section we generalize this result to {CP(N-1)} heterotic sigma models.\\ \section{Chiral Anomaly} We start from the chiral anomaly of the standard CP(1) sigma model. The notations are in accordance with the reference \cite{1}. The Lagrangian of CP(1) is given as follows: \begin{eqnarray} \mathcal{L}&=&\frac{2}{g_{0}^2\chi^2}[\partial_{\mu}\phi^{\dagger}\partial^{\mu}\phi+i\bar{\psi}\gamma^{\mu}(\partial_{\mu}\psi-\frac{2}{\chi}\phi^{\dagger}\partial_{\mu}\phi\psi)+\frac{1}{\chi^2}(\bar{\psi}\psi)^2]\nonumber\\ &=&\frac{2}{g_{0}^2\chi^2}\partial_{\mu}\phi^{\dagger}\partial^{\mu}\phi+\frac{2i}{g_{0}^2}\bar{\xi}\gamma^{\mu}(\partial_{\mu}\xi-iA_{\mu}\xi)+\frac{2}{g_{0}^2}(\bar{\xi}\xi)^2 \end{eqnarray} where $\psi=(\psi_R, \psi_L)^T$, $\xi\equiv\psi/\chi$ and $A_{\mu}\equiv (i/\chi)\phi^{\dagger}\overleftrightarrow{\partial_{\mu}}\phi$ for convenience to calculate the chiral anomaly. It has a chiral symmetry and thus an axial current: \begin{eqnarray} j^{\mu }_5=\frac{2}{g_{0}^2\chi^2}\bar{\psi}\gamma^{\mu}\gamma_{5}\psi=\frac{2}{g_{0}^2}\bar{\xi}\gamma^{\mu}\gamma_{5}\xi \end{eqnarray} The chiral anomaly is well-known\cite{3} as: \begin{eqnarray} \partial_{\mu}j^{\mu }_5=\frac{1}{\pi}\epsilon^{\mu\nu}\partial_{\mu}A_{\nu}=\frac{2i}{\pi}\epsilon^{\mu\nu}\frac{\partial_{\mu}\phi^{\dagger}\partial_{\nu}\phi}{\chi^2}\,. \end{eqnarray} This anomaly term corresponds to the tadpole diagram:\\ \hspace*{5.5cm}\includegraphics[scale=1]{A4.eps} Now we are going to consider the chiral anomaly of heterotic {CP(1)} sigma model. The Lagrangian of the heterotic model for bilinear fermion terms is deformed from Eq.\,(1) by adding terms with an extra right-handed fermion field $\zeta_{R}$ \cite{1}. Since only bilinear terms of fermions need to be considered when calculating one loop diagrams, we just write them up: \begin{eqnarray} \mathcal {L}_{biferm.}&=&\zeta_{R}^{\dagger}i\partial_{L}\zeta_{R}+[\gamma g_{0}^{2}\zeta_{R}G(i\partial_{L}\phi^{\dagger})\psi_{R}+h.c.] 
+\frac{2i}{g_{0}^2\chi^2}\bar{\psi}\gamma^{\mu}(\partial_{\mu}\psi-\frac{2}{\chi}\phi^{\dagger}\partial_{\mu}\phi\psi)\nonumber\\ &=&i\bar{\zeta}\gamma^{\mu}\frac{1+\gamma_5}{2}\partial_{\mu}\zeta+(\bar{\zeta}\gamma^{\mu}\frac{1+\gamma_5}{2}B_{\mu}\xi+h.c.)+ \frac{2i}{g_{0}^2}\bar{\xi}\gamma^{\mu}(\partial_{\mu}\xi-iA_{\mu}\xi) \end{eqnarray} where we define $\zeta=(\zeta^{\dagger}_R, 0)^T=(1+\gamma_{5})\zeta/2$ and $B_{\mu}=2i\gamma\partial_{\mu}\phi^{\dagger}/\chi^2$. It is easy to see $\gamma_{5}\zeta=\zeta$ and the Lagrangian is invariant under $\psi\rightarrow e^{i\alpha\gamma_5}\psi$ and $\zeta\rightarrow e^{i\alpha\gamma_5}\zeta$. Therefore the corresponding axial current is: \begin{eqnarray} j^{\mu}_5=\frac{2}{g_{0}^2}\bar{\xi}\gamma^{\mu}\gamma_{5}\xi+\bar{\zeta}\gamma^{\mu}\frac{1+\gamma_5}{2}\zeta \end{eqnarray} We will show that the chiral anomaly of the heterotic model will not be corrected by the extra $\zeta$ terms, and thus prove that the number of zero modes is the same as the standard {CP(1)} model. Actually, to calculate this anomaly one needs to consider diagrams as follows:\\ \includegraphics[scale=1]{A2.eps}\\ The diagrams B and C cancel with each other because both of them are finite, and thus free to shift integral variables. The diagrams D and E also cancel each other. It can be proved by using momentum $k^{\mu}$ times the sum of these two diagrams, $i.e.$\\ \includegraphics[scale=1]{A3.eps}\\ By multiplying $k^\mu$, the original converged digrams D and E become two divergent parts respectively. However they are all logarithmically divergent, and thus still free to shift integral variables to prove the cancellation. Therefore the anomaly of heterotic case is still only from the tadpole diagram, and is the same as in the standard CP(1) case, see equation (3).\\ \section{Fermionic Zero Modes} For the standard CP(1) model, $\psi$ has four zero modes, under the instanton background $\phi=\rho/(z-z_0)$\cite{2}. 
Since the chiral anomaly is not corrected in the heterotic case, the number of zero modes should be still four. We now proceed to derive the Dirac equations for $\psi$ and $\zeta$ from the heterotic Lagrangian and Wick-rotate them to Euclidean space by $x^0\rightarrow -ix^2$. We only keep linear terms in the equations of motion and solve them. We will see that the solutions are also satisfied with the equations of motion when we add nonlinear terms. Since we have already shown that the number of independent zero modes is four, the solutions solved in the equations of motions up to linear terms are all zero modes. From Eq.\ (4) and instanton background $\bar{\partial}\phi=0$, in spinor components we have: \begin{eqnarray} \delta\zeta^{\dagger}_R&\Longrightarrow&\bar{\partial}\zeta_{R}=0\\ \delta\zeta_{R}&\Longrightarrow&\bar{\partial}\zeta^{\dagger}_R+2\gamma\frac{\bar{\partial}\phi^{\dagger}}{\chi^2}\psi_{R}=0\\ \delta\psi^{\dagger}_{L}&\Longrightarrow&\partial\psi_{L}-2\frac{\phi^{\dagger}\partial\phi}{\chi}\psi_{L}=0\\ \delta\psi^{\dagger}_{R}&\Longrightarrow&\bar{\partial}\psi_{R}=0\\ \delta\psi_{R}&\Longrightarrow&\gamma g_{0}^{2}\zeta_{R}\bar{\partial}\phi^{\dagger}-\bar{\partial}\psi^{\dagger}_R+\frac{2\phi\bar{\partial}\phi^{\dagger}}{\chi}\psi^{\dagger}_{R}=0\\ \delta\psi_{L}&\Longrightarrow&\partial\psi^{\dagger}_{L}=0 \end{eqnarray} Notice that Eqs.\ (8) (9) and (11) are the same as in the standard CP(1) case. Therefore the solutions to these three equations should be the same as before. They give all four zero modes. Since we have argued that the total number of zero modes is four, there is no extra zero modes raised from field $\psi^{\dagger}_R$, although the equation of motion of $\psi^{\dagger}_R$\ (10) is deformed by the $\gamma$ term. 
Therefore we have: \begin{eqnarray} &&\psi_{R}^{(1)}=\frac{\rho}{(z-z_{0})^{2}}\,, \ \ \ \ \psi_{L}^{(1)}=0\,, \ \ \ \ \psi^{\dagger(1)}_R=0\,, \ \ \ \ \psi^{\dagger(1)}_L=0\,,\\ &&\psi_{R}^{(2)}=\frac{\rho}{(z-z_{0})}\,, \ \ \ \ \ \ \psi_{L}^{(2)}=0\,, \ \ \ \ \psi^{\dagger(2)}_R=0\,, \ \ \ \ \psi^{\dagger(2)}_L=0\,,\\ &&\psi_{R}^{(3)}=0\,, \ \ \ \ \psi_{L}^{(3)}=0\,, \ \ \ \ \psi^{\dagger(3)}_R=0\,, \ \ \ \ \psi^{\dagger(3)}_L=\frac{\bar{\rho}}{(\bar{z}-\bar{z_0})^2}\,,\\ &&\psi_{R}^{(4)}=0\,, \ \ \ \ \psi_{L}^{(4)}=0\,, \ \ \ \ \psi^{\dagger(4)}_R=0\,, \ \ \ \ \psi^{\dagger(4)}_L=\frac{\bar{\rho}}{(\bar{z}-\bar{z_0})^2}\,, \end{eqnarray} By the same argument, fermionic field $\zeta$ and $\zeta^{\dagger}$ cannot provide extra zero modes. From Eqs.\ (6) (7) and (10), we get: \begin{eqnarray} \zeta^{(1,2,3,4)}_{R}=0\,, \ \ \ \ \zeta^{\dagger(1,2,3,4)}_R=-2\gamma\,\frac{\phi^{\dagger}}{\chi}\,\psi_R^{(1,2,3,4)} \end{eqnarray} At first sight, the solutions seem inconsistent with the anomaly calculation, which gives the number of the difference between left and right handed fermions under instanton background, $i.e.$ \begin{eqnarray} &&\int d^{2}x\partial_{\mu}j^{\mu }_5=\int d^{2}x\frac{2i}{\pi}\epsilon^{\mu\nu}\frac{\partial_{\mu}\phi^{\dagger}\partial_{\nu}\phi}{\chi^2}=4 \end{eqnarray} From solutions (12)-(15), It seems that the first two solutions are right handed fermions while the last two are left handed, thus the number of the difference is zero. However, because we wick-rotated to Euclidean space to perform the calculation, in which the complex conjugate is not a well-defined involution for chiral fermions, the field $\bar{\psi}$ is thus totally independent of $\psi$. Moreover in Euclidean space, the Lagrangian is $SO(2)$ invariant. Therefore the field $\bar{\psi}$ transformed under $SO(2)$ is the same as $\psi^{\dagger}$. 
Bearing this in mind, we can be back to the Lagrangian (4): \begin{eqnarray} \mathcal {L}_{\psi}&=&\frac{2i}{g_{0}^2\chi^2}\bar{\psi}\gamma^{\mu}(\partial_{\mu}\psi-\frac{2}{\chi}\phi^{\dagger}\partial_{\mu}\phi\psi)\nonumber\\ &=&\frac{2i}{g_{0}^2\chi^2}\bar{\psi}_L\gamma^{\mu}(\partial_{\mu}\psi_R-\frac{2}{\chi}\phi^{\dagger}\partial_{\mu}\phi\psi_R)+\frac{2i}{g_{0}^2\chi^2}\bar{\psi}_R\gamma^{\mu}(\partial_{\mu}\psi_L-\frac{2}{\chi}\phi^{\dagger}\partial_{\mu}\phi\psi_L) \end{eqnarray} Now it is clear that, when wick-rotated to Euclidean space, the field $\psi_R^\dagger$ and $\psi_L^\dagger$ we solved in Eqs.\ (10) and (11) are in fact the fields $\bar{\psi}_L$ and $\bar{\psi}_R$ respectively, while the field $\bar{\psi}$ plays the same role as $\psi^{\dagger}$ in Euclidean space. Therefore the four solutions are all right handed fermions. It is consistent with the anomaly calculation. As for the field $\zeta$, solution (16) shows that the extra field $\zeta$ mixes with field $\psi$, however it is linearly dependent of field $\psi$, and thus does not produce extra zero modes as what we also expected from the result of the chiral anomaly calculation (17). At last, one more worthy to mention is the solutions of field $\psi$. They are of the same form as in the standard {CP(1)} zero modes. It means that not only the number of zero modes does not change, the zero modes of $\psi$ also receive no correction in the heterotic model. In fact it is easy to understand this result from equation (8) and (9). Although the supersymmetries are partly broken by deformation in the heterotic case, the classical equation of motion of $\psi$ up to linear terms are still $\mathcal{N}=(2,2)$ supersymmetic invariant. Therefore in this sense the zero modes of $\psi$ still can be generated by supercharges and superconformal charges from the instanton background, even though half of the symmetries are broken in the heterotic model. 
In addition, one can check that these solutions are satisfied with the equations of motion when adding nonlinear terms of fermions.\\ \section{Heterotic CP(N-1) Sigma Model:} Our conclusion to the heterotic {CP(1)} model is that the zero modes of $\psi$ are the same as in the standard {CP(1)} case, while the extra fermion $\zeta$ does not provide extra zero modes. This result can be simply generalized to the heterotic {CP(N-1)} model. For the heterotic {CP(N-1)} case, the Lagrangian containing bilinear terms of fermions is\cite{1}: \begin{eqnarray} \mathcal {L}&=&\zeta_{R}^{\dagger}i\partial_{L}\zeta_{R}+[\gamma g_{0}^{2}\zeta_{R}G_{i\bar{j}}(i\partial_{L}\phi^{j\dagger})\psi_{R}^{i}+h.c.] +iG_{i\bar{j}}\bar{\psi}^{\bar{j}}\gamma^{\mu}(\partial_{\mu}\psi^{i}+\Gamma^{i}_{lk}\partial_{\mu}\phi^{l}\psi^{k}) \end{eqnarray} Hence, the equations of motion up to linear terms under instanton backgroud $\bar{\partial}\phi^{i}=0$ are: \begin{eqnarray} \delta\zeta^{\dagger}_{R}&\Longrightarrow&\bar{\partial}\zeta_{R}=0\\ \delta\zeta_{R}&\Longrightarrow&\bar{\partial}\zeta^{\dagger}_R+\gamma g_{0}^{2}G_{i\bar{j}}\bar{\partial}\phi^{j\dagger}\psi_{R}^{i}=0\\ \delta\psi_{L}^{\dagger\bar{j}}&\Longrightarrow&\partial\psi_{L}^{i}+\Gamma^{i}_{kl}\partial\phi^{l}\psi_{L}^{k}=0\\ \delta\psi_{R}^{\dagger\bar{j}}&\Longrightarrow&\bar{\partial}\psi_{R}^{i}=0\\ \delta\psi_{L}^{j}&\Longrightarrow&\gamma g_{0}^{2}\zeta_{R}\bar{\partial}\phi^{\dagger\bar{i}}-\bar{\partial}\psi^{\dagger\bar{i}}_R-\Gamma^{\bar{i}}_{\bar{k}\bar{l}}\bar{\partial}\phi^{\dagger\bar{l}}\psi_{L}^{\dagger\bar{k}}=0\\ \delta\psi_{R}^{j}&\Longrightarrow&\partial\psi_{L}^{\dagger\bar{i}}=0 \end{eqnarray} It is not hard to see the solutions are: \begin{eqnarray} &&\psi_{R}^{i(1)}=\frac{\rho^{i}}{(z-z^{i}_{0})^{2}}\,, \ \ \ \ \psi_{L}^{i(1)}=0\,, \ \ \ \ \psi^{\dagger\bar{i}(1)}_R=0\,, \ \ \ \ \psi^{\dagger\bar{i}(1)}_L=0\,,\\ &&\psi_{R}^{i(2)}=\frac{\rho^{i}}{(z-z^{i}_{0})}\,, \ \ \ \ \ \ \psi_{L}^{i(2)}=0\,, \ \ \ \ 
\psi^{\dagger\bar{i}(2)}_R=0\,, \ \ \ \ \psi^{\dagger\bar{i}(2)}_L=0\,,\\ &&\psi_{R}^{i(3)}=0\,, \ \ \ \ \psi_{L}^{i(3)}=0\,, \ \ \ \ \psi^{\dagger\bar{i}(3)}_R=0\,, \ \ \ \ \psi^{\dagger\bar{i}(3)}_L=\frac{\bar{\rho^{i}}}{(\bar{z}-\bar{z_0^{i}})^2}\,,\\ &&\psi_{R}^{i(4)}=0\,, \ \ \ \ \psi_{L}^{i(4)}=0\,, \ \ \ \ \psi^{\dagger\bar{i}(4)}_R=0\,, \ \ \ \ \psi^{\dagger\bar{i}(4)}_L=\frac{\bar{\rho^{i}}}{(\bar{z}-\bar{z_0^{i}})^2}\,,\\ &&\zeta_R=0\,,\\ &&\zeta_R^{\dagger(1,2,3,4)}=-\gamma g_0^2\,\frac{\partial K}{\partial\phi^{i}}\,\psi_{R}^{{i}(1,2,3,4)} \end{eqnarray} where $K$ is the K\"{a}hler potential. From these solutions we see that the number of zero modes in the heterotic {CP(N-1)} model is still $4(N-1)$, while $\zeta$ can be linearly expressed by the zero modes of $\psi$ and thus not extra zero mode.\\ \section*{Acknowledgements} I am grateful for M. Shifman who led me to the subject, and A. Vainshtein for very useful and fruitful discussions and reviewing the final version of the manuscript. I also want to thank M. Sasseville for proofreading the text.\\
//g++-5 --std=c++11 -g -o algo_dp_number_of_1_2_steps algo_dp_number_of_1_2_steps.cc

/**
 * @file  Climbing stairs
 * @brief Find out #distinct ways to climb n steps, 1-2 at a time
 */
// https://leetcode.com/problems/climbing-stairs/

#include <iostream>   /* std::cout */
#include <algorithm>  /* std::max  */
using namespace std;

/**
 * You are climbing a stair case. It takes n steps to reach to the top.
 * Each time you can either climb 1 or 2 steps. In how many distinct ways
 * can you climb to the top ?
 */

/* Quick recursive factorial implementation. */
static int fact(int n) {
   return (n == 1 || n == 0) ? 1 : n * fact(n - 1);
}

/**
 * DP based solution:
 * This question alludes to Fibonacci series expansion.
 * When n <= 0 the number of ways we can climb is 0.
 * When n = 1, there is exactly 1 way to climb the stairs.
 * When n = 2, there are exactly 2 ways to climb the stairs.
 * When n > 2, the number of ways to reach n is the number
 * of ways to reach n-1 + number of ways to reach n-2 pos.
 * This is because n-1 and n-2 do not overlap (1 step from
 * n-1 to reach n and 2 steps from n-2 to reach n).
 * Time Complexity = O(n), Space Complexity = O(1)
 */
int climbStairsFibonacci(int n) {
   int n_1 = 1;
   int n_2 = 0, fib = 0;
   /* Calculate n from n-1 and n-2 values */
   for(int i = 0; i < n; i++) {
      fib = n_1 + n_2;  /* #ways to reach here = n-1 + n-2 */
      n_2 = n_1;        /* n-2 takes previous cycle value  */
      n_1 = fib;        /* get n-1 ready for next cycle    */
   }
   return fib;
}

/**
 * Permutation calculation solution:
 * Calculate the number of 2's and number of 1's in set (n)
 * and then total the sum of all possible arrangements with
 * summation_over_all_two_cnt ( (n!)/(one_cnt! * two_cnt!) )
 * Note, factorial grows fast, so above calc might overflow
 */
int climbStairsPermutation(int n) {
   int num_twos = 0, res = 0;
   if(n == 0 || n == 1 || n == 2) return n;
   for(int i = n; i >= 2; i -= 2, num_twos++); /* #2's in inp */
   res = 1; /* Cnt all one outcome separately. */
   /* Loop over all other outcomes varying #2s to get to next *
    * outcome & calculate the #arrangements of each outcome   */
   for(int i = 1; i <= num_twos; i++) {
      int twos = i;              /* #2s in this outcome       */
      int ones = n - (twos * 2); /* remaining #1s in outcome  */
      int num_arrange;
      /* Calculate #arrangements of this outcome with formula *
       * = (#1+#2)! / (#1! #2!)                               */
      num_arrange = (fact(twos + ones) / (fact(twos) * fact(ones)));
      res += num_arrange;
   }
   return res;
}

const int staircase_size = 12;

int main() {
   for(int i = 0; i <= staircase_size; ++i) {
      auto ans_fib  = climbStairsFibonacci(i);
      auto ans_perm = climbStairsPermutation(i);
      if(ans_fib != ans_perm) {
         cout << "Error: for i = " << i << " Fib mode = " << ans_fib
              << " Perm mode = " << ans_perm << endl;
         return -1;
      }
      else
         cout << "#ways to climb " << i << " stairs = " << ans_fib << endl;
   }
   return 0;
}
{ "redpajama_set_name": "RedPajamaGithub" }
3,508
Leader of Islamic Ummah and Oppressed Imam Khamenei: Iran May Not Continue N. Deal Under Sanctions Supreme Leader of Islamic Ummah and Oppressed Imam Ayatollah Seyed Ali Khamenei once again reiterated that Tehran would never take part in any negotiations over its missile and regional power, and warned that Europeans should comply with his earlier stated conditions or Iran would discard the nuclear deal. Addressing thousands of Iranian people and officials at a ceremony at the Mausoleum of the late founder of the Islamic Republic, Imam Khomeini, in Southern Tehran on Monday, the Iranian Leader said the EU should avoid miscalculations that Iran would continue fulfilling its undertakings under the Joint Comprehensive Plan of Action (JCPOA) if the European states do not stand against the US sanctions in practice. "The words uttered by some European states indicate that they expect the Iranian nation to both agree to comply with the nuclear deal undertakings and live under sanction. I tell the European states that have undigested thoughts in their minds that this dream of yours will not come true," Ayatollah Khamenei said. "The Iranian nation will not tolerate to be under both sanctions and nuclear restrictions," he reiterated. Ayatollah Khamenei further dismissed the US-led moves to cause misperception among Iranians that discarding the nuclear deal would end in war, saying some wrongly mention that Iran's opposition to continued compliance with "an impaired version of the JCPOA would result in war, but this is a lie," and a part of the enemies' psychological war against Iran.
He said Tehran would never agree to negotiate over its power components, especially its missile and regional power, and warned that Iran has developed such a robust missile power that it would retaliate against any enemy strike with ten-fold missile attacks. The Leader further warned the EU to avoid playing with Iran over the nuclear deal, and ordered the country's nuclear officials to take the needed measures to be completely prepared for any decision by the Islamic Republic. "The Atomic Energy Organization of Iran is duty-bound to prepare the ground for achieving 190 thousand SWUs (Separative Work Units of uranium enrichment), but within the framework of the JCPOA for the time being," he said in warning remarks, implying that if the EU proves disloyalty to its undertakings Tehran will resume nuclear fuel operations to the maximum levels that it needs to fuel its power plants. "The Atomic Energy Organization of Iran should also take the introductory measures for the execution of the President's order from tomorrow," the Iranian Leader added. His remarks came as Iran and the EU are in talks on how to preserve the nuclear deal after the US discarded the agreement in a blatant violation of its endorsed undertakings and despite international warnings, and as time is ticking for the Europeans to declare their final decision on how they intend to ensure Iran of their compliance with their duties under the nuclear deal. US President Donald Trump announced on May 8 that the US would no longer remain part of the JCPOA and promised to re-impose the highest level of economic sanctions against Iran in response to Tehran's development of its nuclear program. Almost two weeks later, the Iranian Supreme Leader declared five prerequisites for the EU to keep Tehran under the nuclear deal, underlining that the three European nations are necessitated to prove their trustworthiness after their disloyalty to a similar deal in 2005.
Addressing a meeting with former and present Iranian officials here in Tehran on May 24, the Iranian Leader recalled the defiance of undertakings by Britain, France and Germany back in 2005 when Iran embarked on a voluntary implementation of the additional protocol to the NPT and halted nuclear activities, and said the European trio "showed a major disloyalty; they made a promise in 2004-2005, but defied. Now they should prove that today they won't show the dishonesty and disloyalty of that day". Ayatollah Khamenei asked officials not to repeat a single mistake twice, and cautioned, "Europe has displayed that it accompanies the US in most sensitive cases." He further blasted the Europeans for keeping mum about the US' frequent violations of the 2015 nuclear deal in the last two years, and said "the EU is required to make up for this silence". Ayatollah Khamenei reminded that the US has violated UN Security Council Resolution 2231 that accompanies the nuclear deal, and said hence "Europe should issue a resolution against the US violation" of the agreement. He said "the EU should also undertake to avoid a discussion of Iran's missile and regional power". The Iranian Leader reminded that the nuclear talks were aimed at the removal of the sanctions, "many of which were not lifted, while they have been recently threatening to revive the sanctions despite the emphasis of the UN Security Council resolution" to the contrary. The Leader further underlined that the EU should also pledge "to take action against any kind of sanction against the Islamic Republic and stand against the US sanctions on the basis of a clear-cut position". "Europe is also needed to ensure Iran's full crude sales," he said, and explained, "In case the Americans manage to strike a blow at our oil sales, (we) should be able to sell whatever volume of oil that we want. The Europeans should compensate (for the loss in crude sales) in a guaranteed manner and buy Iran's crude."
Ayatollah Khamenei also underscored that the EU banks should also ensure trade with Iran, and said, "We don't want a fight with these three countries, but (we) don't trust them either because of their past record." He further warned that "if Europeans delay in complying with our demands, then Iran would preserve its right for resuming nuclear activities". "Once we observe that the JCPOA (Joint Comprehensive Plan of Action) produces no fruit, then one way would be reviving the operations that have been shut down." After Trump's declaration, the Iranian government issued a statement, calling the US withdrawal as "unlawful". The statement underlined Iran's prerequisites for continuing the deal with the five world powers after the US pullout of the agreement. "Iran, as a country that has remained committed to its legal obligations, will pursue the US Government's decision to withdraw from the JCPOA as provided by the mechanisms and provisions of the accord, and if the US withdrawal is not fully compensated and the full interests of the Iranian people are not met and guaranteed – as stated in the accord and as outlined by Iran's Leader on 9 May – it will exercise its legal right to take whatever reciprocal measures it deems expedient. Other parties to the JCPOA, and especially its three European signatories, must take necessary action to safeguard the accord and to implement their commitments – which they proved incapable of fully performing even while the US was nominally a party to the deal, due to the obstructions by the Trump Administration – and to proceed from giving pledges to taking practical action without any preconditions," it said. "None of the provisions or timeframes within the JCPOA, which were the subject of twelve years of negotiations, are negotiable in any manner. 
The US, which has through its meddling and erroneous policies ignited extremism, terrorism, destruction, war and child killing in our region, is in no position to issue any diktat about the Islamic Republic of Iran's lawful presence within its own region nor its effective support for the peoples of Syria and Iraq in their endeavor to fight extremists. The US and its allies, which through their support for the regime of Saddam Hussein, including equipping it with chemical weapons and the most advanced military equipment while blocking Iran's access to any means of defense victimized the Iranian people for eight years, and currently turning our region into a powder keg through their sale of hundreds of billions of dollars of useless advanced weaponry devouring the financial resources of the region, are in no position to impose restrictions on the Islamic Republic of Iran's lawful means of defense, including defensive ballistic missiles which have been designed to carry conventional weapons based on the bitter experiences of the war with the regime of Saddam Hussein. Indeed, such efforts explicitly violate the principles of international law, and the Islamic Republic of Iran's legitimate right to self-defense under Article 51 of the United Nations Charter," it added. "As announced by the President of the Islamic Republic of Iran on 8 May, the Foreign Minister has been tasked with the duty of taking the necessary measures to obtain required guarantees from the remaining parties to the JCPOA as well as Iran's other economic partners, and to immediately report the results of this mission. Meanwhile, the President of the Atomic Energy Organization of Iran has been tasked with taking all necessary steps in preparation for Iran to pursue industrial-scale enrichment without any restrictions, using the results of the latest research and development of Iran's brave nuclear scientists." 
"The people of Iran will with calm and confidence continue their path towards progress and development and the Government of the Islamic Republic of Iran has foreseen all necessary measures to facilitate this under any circumstances," the statement continued. "The Islamic Republic of Iran, as a secure and powerful state, which derives its security and economic development from within, relying on the prudent participation and resilience of its brave and civilized people, seeks constructive and dignified engagement with the world, and as shown by its implementation of the JCPOA despite the United States' continuous violations, is a trustworthy and committed partner for all who are prepared to cooperate on the basis of shared interests and mutual respect," it reiterated. Also, Secretary of Iran's Supreme National Security Council (SNSC) Ali Shamkhani underlined last week that his country would never renegotiate the 2015 nuclear deal, adding that Tehran would revise cooperation with the Europeans if they failed to defend the interests of both sides. "The international nuclear agreement has specified how Iran's nuclear program should continue and the closed case of Iran's nuclear negotiations will not reopen in any conditions," Shamkhani said in response to US Secretary of State Mike Pompeo's earlier remarks against Iran's nuclear program. Pompeo gave a speech at the Heritage Foundation in Washington: "After the Deal: A New Iran Strategy." Over 26 minutes, Pompeo articulated a strategy that can best be summarized as, "Do everything we say, or we will crush you", days after the US withdrawal from the 2015 nuclear deal. 
Meantime, Shamkhani described the negotiations between Iran and the European states on implementation of the nuclear deal after the US withdrawal as a test for Europe to prove their independence against the US dictated policies, and said if they cannot or do not want to defend their and the Islamic Republic's interests against the illegal and irrational approaches of Trump then "continued cooperation with them is useless". He referred to the US officials' allegations against Iran's peaceful nuclear activities, and said Washington which has used atomic bombs against the Japanese people and has equipped the child-killer Israeli regime with such weapons, is not competent to opine on Iran's nuclear program. Also, Head of the Atomic Energy Organization of Iran (AEOI) Ali Akbar Salehi said last week that Tehran is capable of and prepared to increase the amount and level of its uranium enrichment. "We have done the needed preparations in different fields to return to the pre-nuclear deal era, but we hope never to reach that point," Salehi told reporters. "If we decide to develop certain operations, including enrichment, we will be ready to increase production or (level of) enrichment, for instance," he added.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,432
Sweden sent 31 athletes to the 1978 European Athletics Championships which took place 29 August–3 September 1978 in Prague. Sweden won one medal at the Championships.

Medalists

References

Nations at the 1978 European Athletics Championships
Sweden at the European Athletics Championships
1978 in Sweden
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,750
{"url":"http:\/\/www.maplesoft.com\/support\/help\/Maple\/view.aspx?path=combstruct\/gfeqns","text":"combstruct - Maple Programming Help\n\nHome : Support : Online Help : Mathematics : Discrete Mathematics : Combinatorics : Combinatorial Structures : combstruct\/gfeqns\n\ncombstruct\n\n gfeqns\n find the system of generating function equations associated with a grammar\n\n Calling Sequence gfeqns(spec, typ, var, tags)\n\nParameters\n\n spec - combinatorial specification typ - labeling type; 'labeled'\u00a0or 'unlabeled' var - variable to use in the generating functions tags - (optional) list of lists; each list contains a variable tag followed by the Epsilon\u00a0nonterminal name(s) associated with that tag\n\nDescription\n\n \u2022 The gfeqns\u00a0function returns a list of the generating function equations which count the objects described in the specification.\n Each nonterminal has an associated equation which uses the nonterminal name as the name of the equation. For example, $A\\left(z\\right)$\u00a0would be the generating function for $A$. The equation is in terms of the other nonterminals.\n \u2022 If the objects are labeled, the exponential generating functions are produced. If the objects are unlabeled, ordinary generating functions are used.\n \u2022 Objects can be marked (tagged) by forming the product of the object with a named Epsilon\u00a0and associating a tag with that Epsilon.\n If the tag is a variable name, the resulting generating function equations have an extra variable that marks that object. The same tag can be associated with more than one Epsilon\u00a0name. 
The tag does not need to be a variable.\n \u2022 For information on how to write specifications, see combstruct\u00a0and combstruct[specification]\n\nExamples\n\n > $\\mathrm{with}\\left(\\mathrm{combstruct}\\right):$\n\nAn example of a labeled binary tree.\n\n > $\\mathrm{tree}\u2254\\left\\{T=\\mathrm{Union}\\left(L,\\mathrm{Prod}\\left(N,T,T\\right)\\right),L=\\mathrm{Atom},N=\\mathrm{Atom}\\right\\}:$\n > $\\mathrm{gfeqns}\\left(\\mathrm{tree},\\mathrm{labeled},z\\right)$\n $\\left[{L}{}\\left({z}\\right){=}{z}{,}{N}{}\\left({z}\\right){=}{z}{,}{T}{}\\left({z}\\right){=}{z}{+}{z}{}{{T}{}\\left({z}\\right)}^{{2}}\\right]$ (1)\n\nMark the leaf nodes of an unlabeled general tree.\n\n > $\\mathrm{tree1}\u2254\\left\\{T=\\mathrm{Union}\\left(L,\\mathrm{Prod}\\left(N,\\mathrm{Set}\\left(T\\right)\\right)\\right),L=\\mathrm{Prod}\\left(\\mathrm{leaf},\\mathrm{Atom}\\right),\\mathrm{leaf}=\\mathrm{\u0395},N=\\mathrm{Atom}\\right\\}:$\n > $\\mathrm{gfeqns}\\left(\\mathrm{tree1},\\mathrm{unlabeled},z,\\left[\\left[u,\\mathrm{leaf}\\right]\\right]\\right)$\n $\\left[{L}{}\\left({z}{,}{u}\\right){=}{u}{}{z}{,}{N}{}\\left({z}{,}{u}\\right){=}{z}{,}{T}{}\\left({z}{,}{u}\\right){=}{L}{}\\left({z}{,}{u}\\right){+}{z}{}{{\u2147}}^{{\\sum }_{{{j}}_{{1}}{=}{1}}^{{\\mathrm{\u221e}}}\\phantom{\\rule[-0.0ex]{5.0px}{0.0ex}}\\frac{{T}{}\\left({{z}}^{{{j}}_{{1}}}{,}{{u}}^{{{j}}_{{1}}}\\right)}{{{j}}_{{1}}}}{,}{\\mathrm{leaf}}{}\\left({z}{,}{u}\\right){=}{u}\\right]$ (2)\n\nAssociate the value 2 with the leaves in this labeled general tree.\n\n > $\\mathrm{gfeqns}\\left(\\mathrm{tree1},\\mathrm{labeled},z,\\left[\\left[2,\\mathrm{leaf}\\right]\\right]\\right)$\n $\\left[{L}{}\\left({z}\\right){=}{2}{}{z}{,}{N}{}\\left({z}\\right){=}{z}{,}{T}{}\\left({z}\\right){=}{L}{}\\left({z}\\right){+}{z}{}{{\u2147}}^{{T}{}\\left({z}\\right)}{,}{\\mathrm{leaf}}{}\\left({z}\\right){=}{2}\\right]$ (3)","date":"2017-03-26 18:54:27","metadata":"{\"extraction_info\": {\"found_math\": true, 
\"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 11, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4868433177471161, \"perplexity\": 1128.6138720998051}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-13\/segments\/1490218189245.97\/warc\/CC-MAIN-20170322212949-00574-ip-10-233-31-227.ec2.internal.warc.gz\"}"}
null
null
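The first example in the Maple help page above returns the functional equation T(z) = z + z·T(z)² for labeled binary trees. Outside Maple, the series coefficients of such an equation can be recovered by fixed-point iteration on truncated power series; a small pure-Python sketch (my own illustration, not part of the Maple documentation):

```python
def gf_coeffs(order):
    """Iterate T(z) = z + z*T(z)^2 on truncated power series.

    Returns the coefficient list [t0, t1, ..., t_order]; each pass of the
    iteration fixes at least one more low-order coefficient.
    """
    T = [0] * (order + 1)
    for _ in range(order + 1):
        # Square the current series, truncating beyond `order`.
        sq = [0] * (order + 1)
        for i, a in enumerate(T):
            for j, b in enumerate(T):
                if a and b and i + j <= order:
                    sq[i + j] += a * b
        # T_new = z + z * T^2: shift T^2 up by one power and add the z term.
        T = [0] + sq[:order]
        T[1] += 1
    return T

print(gf_coeffs(7))  # [0, 1, 0, 1, 0, 2, 0, 5]: Catalan numbers at odd powers
```

The nonzero coefficients 1, 1, 2, 5 are the Catalan numbers, as expected for binary trees counted by internal plus leaf nodes.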
SRW1 Pre-War Independent Operators in Renfrewshire A Fleet History of Pre-War Independent Operators in Renfrewshire The Publications Team are pleased to announce the first publication in a new series featuring the history of smaller operators. Unlike previous publications in the PGL series (which we aim to complete in due course with PGL4 - Bristol), the new series will feature all known operators in a County and will include the full history of all vehicles listed. The first volume will cover all operators in Renfrewshire who started operating before 1940, but where an operator is included in the history, that operator's history is brought up to the current date to ensure completeness. Further Fleet Histories of smaller operators are in various stages of production, but the team would welcome any offers to produce histories in other areas. If you feel able to help, please contact the Publications Manager either at the usual Leroy House address or via email to fred.ward@psvcircle.org.uk. A further innovation, starting with the Renfrewshire Fleet History is the addition of ISBN numbers to all Fleet Histories, Chassis and Body Lists. This will give us additional marketing opportunities and make the search for our publications much easier in literary catalogues. Published March 2012 Cover price £12.00 Product Code: SRW1
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
621
Q: Infinite product of random matrices Given a $2 \times 2$ matrix $$A(k) = \begin{pmatrix} a_{1,1}(k) & a_{1,2}(k)\\ a_{2,1}(k) & a_{2,2}(k) \end{pmatrix}$$ where $a_{m,n}(k) \sim \mathcal{N}(0,\,\sigma^{2})$ are Gaussian distributed random variables, and the product: $$P(N) = \prod_{k=1}^N A(k)$$ If we have $$d := \lim_{N\to\infty} \det\left( P(N) \right)$$ is it possible to prove that $d \lt \infty$? A: Let's define the following: $$ d_N = \det\left(\prod_{k=1}^N A(k)\right) = \prod_{k=1}^N \det(A(k))=\prod_{k=1}^N a_{11}(k)a_{22}(k)-a_{21}(k)a_{12}(k) $$ where $a_{ij}(k)\sim\mathcal{N}(0,\sigma^2)$. So then the expected value is given by: $$ \mathbb{E}[d_N]=0=:\mu_N $$ Next, define $$ \hat{S}_1(k) = \hat{a}_{11}(k)\hat{a}_{22}(k),\;\;\hat{S}_2(k) = \hat{a}_{21}(k)\hat{a}_{12}(k) $$ where $\hat{a}_{ij}(k)\sim\mathcal{N}(0,1)$. So, the variance is written: \begin{align} \mathbb{V}[d_N] &= \mathbb{E}[d_N^2]= \mathbb{E}\left[ \prod_{k=1}^N (a_{11}(k)a_{22}(k)-a_{21}(k)a_{12}(k))^2 \right]\\ &=\prod_{k=1}^N \mathbb{E}\left[ (a_{11}(k)a_{22}(k)-a_{21}(k)a_{12}(k))^2 \right]\\ &= \prod_{k=1}^N \mathbb{E}\left[ (\sigma^2\hat{S}_1(k)-\sigma^2\hat{S}_2(k))^2 \right]\\ &= \sigma^{4N}\prod_{k=1}^N \mathbb{E}\left[ (\hat{S}_1(k)-\hat{S}_2(k))^2 \right] \end{align} Now for some random variable algebra. First, note that each $\hat{S}_i$ is a product of two independent standard normals. The identity $ab = \frac{1}{4}\left((a+b)^2-(a-b)^2\right)$, together with $(a+b)/\sqrt{2},\,(a-b)/\sqrt{2}\sim\mathcal{N}(0,1)$ independent, writes each product as half the difference of two independent chi-squared variables: $$ \hat{S}_1(k) = \tfrac{1}{2}(X_1 - Y_1),\;\; \hat{S}_2(k) = \tfrac{1}{2}(X_2 - Y_2),\;\; X_i, Y_i\sim\chi^2_1 \text{ independent} $$ Next, define $\mathfrak{D}_k= \hat{S}_1(k) - \hat{S}_2(k) = \tfrac{1}{2}\left((X_1+Y_2)-(X_2+Y_1)\right)$, i.e. half the difference of two independent $\chi^2_2$ variables. The difference between two chi-squared variables (e.g. here, or A note on gamma difference distributions by Bernhard Klar) follows a variance-gamma (generalized Laplace) distribution, and for two independent $\chi^2_2$ variables the difference is exactly Laplace with scale $2$, so: $$ \mathfrak{D}_k\sim\mathrm{Laplace}(0,1) $$ This also tells us that: \begin{align} \mathbb{E}[\mathfrak{D}_k] &= 0 \\ \mathbb{V}[\mathfrak{D}_k] &= 2b^2 = 2 \end{align} So now we can complete the computation: $$ \mathbb{V}[d_N]=\sigma^{4N}\prod_{k=1}^N \mathbb{E}\left[ \mathfrak{D}_k^2 \right] = \sigma^{4N}\prod_{k=1}^N (\mathbb{V}[\mathfrak{D}_k] + \mathbb{E}[\mathfrak{D}_k]^2) = 2^N\sigma^{4N} =: \varsigma^2_N $$ Hopefully I didn't make any arithmetic mistakes. Anyway, woah, so that is potentially a very large variance. Clearly it depends heavily on the value of $\sigma$. But anyway, ignoring the limit, we can bound our target using the Chebyshev inequality: $$ P(|d_N-\mu_N|\geq \varsigma_N\kappa) \leq \frac{1}{\kappa^2}\;\;\;\implies\;\;\; P(|d_N|\geq 2^{N/2}\sigma^{2N}\kappa) \leq \frac{1}{\kappa^2}$$ Maybe there is a better concentration inequality. But, if we denote $\sigma = \hat{\sigma}/2^{1/4}$, then at least what this tells us is that: if $\hat{\sigma}<1$, the variance $\varsigma^2_N=\hat{\sigma}^{4N}\to 0$, so the probability that $d_N$ stays away from zero is essentially zero.
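The per-factor second moment, and hence the variance of the product, is easy to sanity-check by simulation; a short Monte Carlo sketch in Python (my addition, using only the standard library, with $\sigma = 1$ and $N = 3$):

```python
import random

def det2(rng):
    # Determinant of a 2x2 matrix with iid N(0, 1) entries.
    a, b, c, d = (rng.gauss(0.0, 1.0) for _ in range(4))
    return a * d - b * c

def det_product(rng, N):
    # One sample of d_N: product of N independent 2x2 determinants.
    p = 1.0
    for _ in range(N):
        p *= det2(rng)
    return p

rng = random.Random(12345)
N, trials = 3, 200_000
second_moment = sum(det_product(rng, N) ** 2 for _ in range(trials)) / trials
print(second_moment)  # close to 2**N = 8 when sigma = 1
```

Each single determinant has second moment E[(ad - bc)^2] = Var(ad) + Var(bc) = 2, so the product's second moment is 2^N; the estimate should land near 8 up to Monte Carlo error.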
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,084
{"url":"https:\/\/www.lessonplanet.com\/teachers\/quiz-9-newtons-method","text":"# Quiz 9: Newton's Method\n\nIn this Newton's method learning exercise, students estimate the value of a solution and use a differential to estimate the value of a root. This one-page learning exercise contains two problems.","date":"2017-07-20 15:29:01","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8605504035949707, \"perplexity\": 1630.720217479195}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-30\/segments\/1500549423222.65\/warc\/CC-MAIN-20170720141821-20170720161821-00113.warc.gz\"}"}
null
null
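The worksheet above exercises Newton's method, which refines a root estimate via x_{k+1} = x_k - f(x_k)/f'(x_k). A minimal Python sketch of the iteration (my own illustration, not the worksheet's content):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Estimate a root of f using Newton's method from initial guess x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Estimate sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # ~1.4142135623730951
```

Starting from x0 = 1, the iterates 1.5, 1.4167, 1.41422, ... converge quadratically to sqrt(2).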
No, you are unable to make a partial payment. You can make one payment online and this must be for the entire outstanding balance of your holiday. To make a partial payment, please call our Customer Contact Centre on 0345 355 5111 (calls charged at local rate). An association is required to accept a partial payment made by a delinquent owner, notwithstanding whether the payment is sufficient to cover the total amount of delinquent assessments, late fees, interest, and collection costs owed by the owner to the association at the time the payment is made. Definition of partial payment: a payment amount that is less than the due amount, is a part payment for unfinished work, or is an installment payment. This Business Process assumes an invoice with document number 72526689 for the amount of 508.00 HKD to be present on vendor account 10560003, for which a partial payment of 150 HKD has to be made; another 2 down payment requests for 80.00 HKD each already exist in the account. Solved: How do I show partial payments applied to invoices? 18/03/2015 ... can only mean "Enclosed is a check for $100 in partial payment of my bill." Please feel free to find some reliable evidence that "Here is a payment against my bill" can ever mean "Here is final (or full) payment of my bill." The total of the partial amounts entered for each split item determines the difference amount, which in turn is used to calculate the partial payment amount. Input checks prevent overpayment and ensure that the entire partial payment amount entered is distributed.
Scheduled Partial Payments (For Distribution to Travelers) It's always a good idea to pay off your electric, cell phone, and credit card bills (to name a few) before they're due. The amount of payment can be the full amount stated on the invoice for the sale, or a partial amount. Payment receipts can be useful for both the seller and the buyer, and are an additional document used in communication in the sales process. When a payment must be made for a partial period, for example if the start or end date of the lease does not coincide with the start or end of a payment period, users can specify how to calculate the partial payment.
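Calculating a payment for a partial period usually means pro-rating by the fraction of the period actually covered. A hypothetical Python sketch (the function name and the simple day-count convention are my assumptions, not taken from any particular billing system):

```python
from datetime import date

def prorated_payment(full_amount, period_start, period_end, pay_start, pay_end):
    """Pro-rate a periodic payment by the fraction of days covered (inclusive)."""
    period_days = (period_end - period_start).days + 1
    covered_days = (min(pay_end, period_end) - max(pay_start, period_start)).days + 1
    return round(full_amount * covered_days / period_days, 2)

# Lease payment of 900.00 per period, but the lease starts mid-period:
print(prorated_payment(900.00, date(2018, 6, 1), date(2018, 6, 30),
                       date(2018, 6, 16), date(2018, 6, 30)))  # 450.0
```

Real systems offer several conventions (actual days, 30-day months, etc.); the inclusive actual-day count above is just one choice.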
{ "redpajama_set_name": "RedPajamaC4" }
3,898
{"url":"https:\/\/www.zbmath.org\/authors\/?q=ai%3Akarniadakis.george-em","text":"Compute Distance To:\n Documents Indexed: 317 Publications since 1986, including 7 Books 6 Contributions as Editor Co-Authors: 210 Co-Authors with 317 Joint Publications 5,609 Co-Co-Authors\nall top 5\n\n### Co-Authors\n\n 6 single-authored 16 Venturi, Daniele 16 Wan, Xiaoliang 16 Zhang, Zhongqiang 15 Mao, Zhiping 14 Sherwin, Spencer J. 14 Zayernouri, Mohsen 13 Li, Zhen 12 Perdikaris, Paris G. 12 Xiu, Dongbin 11 Triantafyllou, Michael S. 11 Zeng, Fanhai 10 Meng, Xuhui 10 Raissi, Maziar 10 Wang, Zhi Cheng 9 Caswell, Bruce 9 Kirby, Robert M. II 9 Lu, Lu 9 Lucor, Didier 9 Song, Fangying 9 Su, Chau-Hsing 8 Dong, Suchuan 8 Lin, Guang 8 Maxey, Martin R. 7 Deng, Mingge 7 Evangelinos, Constantinos 7 Grinberg, Leopold 7 Lomtev, Igor 7 Orszag, Steven Alan 7 Pang, Guofei 7 Rozovskii, Boris L. 7 Yu, Yue 6 Cho, Heyrim 6 Choi, Minseok 6 Fedosov, Dmitry A. 6 Giannakouros, John G. 6 Jagtap, Ameya D. 6 Kharazmi, Ehsan 6 Sirisup, Sirod 6 Zhang, Dongkun 5 Babaee, Hessam 5 Bian, Xin 5 Cai, Wei 5 Constantinides, Yiannis 5 Henderson, Ronald D. 5 Ma, Xia 5 Sidilkover, David 5 Warburton, Timothy C. E. 5 Zheng, Xiaoning 4 Baek, Hyoungsu 4 Bittencourt, Marco L\u00facio 4 Cai, Shengze 4 Cao, Wanrong 4 Guo, Ling 4 Lei, Huan 4 Patera, Anthony T. 4 Sapsis, Themistoklis P. 4 Symeonidis, Vasileios 4 Tang, Yuhang 4 Xu, Jin 4 Yang, Xiu 4 Yin, Minglang 3 Ainsworth, Mark 3 Beskok, Ali 3 Chryssostomidis, Chryssostomos 3 Foo, Jasmine 3 Gulian, Mamikon 3 Kawaguchi, Kenji 3 Kevrekidis, Ioannis George 3 Kim, Changho 3 Rockwell, Donald 3 Shu, Chi-Wang 3 Tang, Yifa 3 Tretyakov, Michael V. 3 Triantafyllou, George S. 3 Yazdani, Alireza 3 Zaki, Tamer A. 3 Zheng, Mengdi 2 Batcho, Paul F. 2 Blumers, Ansel L. 2 Bourguet, R\u00e9mi 2 Burrage, Kevin 2 Chen, Xiaoli 2 Cockburn, Bernardo 2 D\u2019Elia, Marta 2 Du, Yiqing 2 Duan, Jinqiao 2 Fan, Dixia 2 Goswami, Somdatta 2 Jin, Pengzhan 2 Kaiktsis, Lambros 2 Karamanos, G.-S. 
2 Li, Xuejin 2 Lischke, Anna 2 Luo, Xian 2 Newman, David J. 2 Ouyang, Jie 2 Parks, Michael L. 2 Pivkin, Igor V. 2 R\u00f8nquist, Einar M. 2 Shin, Yeonjong ...and 110 more Co-Authors\nall top 5\n\n### Serials\n\n 97 Journal of Computational Physics 40 Computer Methods in Applied Mechanics and Engineering 38 SIAM Journal on Scientific Computing 31 Journal of Fluid Mechanics 8 Communications in Computational Physics 7 International Journal for Numerical Methods in Engineering 7 Journal of Scientific Computing 6 Proceedings of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 4 SIAM Journal on Numerical Analysis 3 International Journal for Numerical Methods in Fluids 3 Applied Numerical Mathematics 3 Computational Mechanics 3 Numerical Mathematics and Scientific Computation 2 Computers and Fluids 2 International Journal of Heat and Mass Transfer 2 Neural Networks 2 Proceedings of the National Academy of Sciences of the United States of America 2 Fractional Calculus & Applied Analysis 2 European Journal of Mechanics. B. Fluids 2 Multiscale Modeling & Simulation 2 Proceedings of the Royal Society of London. A. Mathematical, Physical and Engineering Sciences 1 Computers & Mathematics with Applications 1 Computers and Structures 1 Journal of Engineering Mathematics 1 Journal of Statistical Physics 1 Physics of Fluids, A 1 Fluid Dynamics Research 1 Theoretical and Computational Fluid Dynamics 1 Chaos, Solitons and Fractals 1 Bulletin of the Polish Academy of Sciences. Technical Sciences 1 Numerische Mathematik 1 Physica D 1 European Journal of Applied Mathematics 1 Journal of Non-Newtonian Fluid Mechanics 1 SIAM Review 1 Physics of Fluids 1 Electronic Journal of Differential Equations (EJDE) 1 Parallel Algorithms and Applications 1 Philosophical Transactions of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 1 Flow, Turbulence and Combustion 1 HERMIS-$$\\mu\\pi$$. 
Hellenic European Research on Mathematics and Informatics Science 1 Applied Mathematical Sciences 1 Interdisciplinary Applied Mathematics 1 Lecture Notes in Computational Science and Engineering 1 Science 1 International Journal for Uncertainty Quantification 1 Journal of Theoretical Biology 1 Transactions of Mathematics and its Applications 1 Handbook of Fractional Calculus with Applications 1 Communications on Applied Mathematics and Computation\nall top 5\n\n### Fields\n\n 174 Fluid mechanics\u00a0(76-XX) 144 Numerical analysis\u00a0(65-XX) 79 Partial differential equations\u00a0(35-XX) 55 Probability theory and stochastic processes\u00a0(60-XX) 39 Computer science\u00a0(68-XX) 37 Mechanics of deformable solids\u00a0(74-XX) 25 Ordinary differential equations\u00a0(34-XX) 22 Biology and other natural sciences\u00a0(92-XX) 17 Statistics\u00a0(62-XX) 11 Statistical mechanics, structure of matter\u00a0(82-XX) 10 Geophysics\u00a0(86-XX) 9 Approximations and expansions\u00a0(41-XX) 8 Dynamical systems and ergodic theory\u00a0(37-XX) 6 General and overarching topics; collections\u00a0(00-XX) 6 Classical thermodynamics, heat transfer\u00a0(80-XX) 5 Real functions\u00a0(26-XX) 4 Optics, electromagnetic theory\u00a0(78-XX) 3 Mechanics of particles and systems\u00a0(70-XX) 1 Linear and multilinear algebra; matrix theory\u00a0(15-XX) 1 Special functions\u00a0(33-XX) 1 Integral equations\u00a0(45-XX) 1 Calculus of variations and optimal control; optimization\u00a0(49-XX) 1 Convex and discrete geometry\u00a0(52-XX) 1 Quantum theory\u00a0(81-XX)\n\n### Citations contained in zbMATH Open\n\n276 Publications have been cited 8,534 times in 4,962 Documents Cited by Year\nThe Wiener\u2013Askey polynomial chaos for stochastic differential equations.\u00a0Zbl\u00a01014.65004\n2002\nHigh-order splitting methods for the incompressible Navier-Stokes equations.\u00a0Zbl\u00a00738.76050\nKarniadakis, George Em; Israeli, Moshe; Orszag, Steven A.\n1991\nSpectral\/$$hp$$ element methods for 
computational fluid dynamics. 2nd ed.\u00a0Zbl\u00a01116.76002\nKarniadakis, George Em; Sherwin, Spencer J.\n2005\nPhysics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.\u00a0Zbl\u00a01415.68175\nRaissi, M.; Perdikaris, P.; Karniadakis, G. E.\n2019\nThe development of discontinuous Galerkin methods.\u00a0Zbl\u00a00989.76045\nCockburn, Bernardo; Karniadakis, George E.; Shu, Chi-Wang\n2000\nModeling uncertainty in flow simulations via generalized polynomial chaos.\u00a0Zbl\u00a01047.76111\n2003\nSpectral\/hp element methods for CFD.\u00a0Zbl\u00a00954.76001\nKarniadakis, George Em; Sherwin, Spencer J.\n1999\nAn adaptive multi-element generalized polynomial chaos method for stochastic differential equations.\u00a0Zbl\u00a01078.65008\n2005\nMicroflows and nanoflows. Fundamentals and simulation. Foreword by Chih-Ming Ho.\u00a0Zbl\u00a01115.76003\nKarniadakis, George; Beskok, Ali; Aluru, Narayan\n2005\nMulti-element generalized polynomial chaos for arbitrary probability measures.\u00a0Zbl\u00a01128.65009\n2006\nModeling uncertainty in steady state diffusion problems via generalized polynomial chaos.\u00a0Zbl\u00a01016.65001\n2002\nFractional Sturm-Liouville eigen-problems: theory and numerical approximation.\u00a0Zbl\u00a01349.34095\n2013\nLow-dimensional models for complex geometry flows: Application to grooved channels and circular cylinders.\u00a0Zbl\u00a00746.76021\nDeane, A. E.; Kevrekidis, I. G.; Karniadakis, G. E.; Orszag, S. A.\n1991\nFractional spectral collocation method.\u00a0Zbl\u00a01294.65097\n2014\nHidden physics models: machine learning of nonlinear partial differential equations.\u00a0Zbl\u00a01381.68248\n2018\nSpectral\/$$hp$$ element methods for computational fluid dynamics. Reprint of the 2nd hardback ed. 
(2005).\u00a0Zbl\u00a01256.76003\nKarniadakis, George Em; Sherwin, Spencer J.\n2013\nA spectral vanishing viscosity method for large eddy simulations.\u00a0Zbl\u00a00984.76036\nKaramanos, G. S.; Karniadakis, G. E.\n2000\nExponentially accurate spectral and spectral element methods for fractional ODEs.\u00a0Zbl\u00a01349.65257\n2014\nFractional spectral collocation methods for linear and nonlinear variable order FPDEs.\u00a0Zbl\u00a01349.65531\n2015\nA semi-Lagrangian high-order method for Navier-Stokes equations.\u00a0Zbl\u00a01028.76026\n2001\nSecond-order approximations for variable order fractional derivatives: algorithms and applications.\u00a0Zbl\u00a01349.65092\nZhao, Xuan; Sun, Zhi-zhong; Karniadakis, George Em\n2015\nThe multi-element probabilistic collocation method (ME-PCM): Error analysis and applications.\u00a0Zbl\u00a01153.65008\nFoo, Jasmine; Wan, Xiaoliang; Karniadakis, George Em\n2008\nA spectral viscosity method for correcting the long-term behavior of POD models.\u00a0Zbl\u00a01136.76412\n2004\nMulti-element probabilistic collocation method in high dimensions.\u00a0Zbl\u00a01181.65014\n2010\nMachine learning of linear differential equations using Gaussian processes.\u00a0Zbl\u00a01380.68339\nRaissi, Maziar; Perdikaris, Paris; Karniadakis, George Em\n2017\nA triangular spectral element method; applications to the incompressible Navier-Stokes equations.\u00a0Zbl\u00a01075.76621\nSherwin, S. J.; Karniadakis, G. 
E.\n1995\nThree-dimensional dynamics and transition to turbulence in the wake of bluff objects.\u00a0Zbl\u00a00754.76043\nKarniadakis, George Em; Triantafyllou, George S.\n1992\nA low-dimensional model for simulating three-dimensional cylinder flow.\u00a0Zbl\u00a01001.76043\n2002\nA generalized spectral collocation method with tunable accuracy for variable-order fractional differential equations.\u00a0Zbl\u00a01339.65197\nZeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em\n2015\nInferring solutions of differential equations using noisy multi-fidelity data.\u00a0Zbl\u00a01382.65229\nRaissi, Maziar; Perdikaris, Paris; Karniadakis, George Em\n2017\nA new triangular and tetrahedral basis for high-order $$(hp)$$ finite element methods.\u00a0Zbl\u00a00837.73075\nSherwin, Spencer J.; Karniadakis, George Em.\n1995\nA unified Petrov-Galerkin spectral method for fractional PDEs.\u00a0Zbl\u00a01425.65127\nZayernouri, Mohsen; Ainsworth, Mark; Karniadakis, George Em.\n2015\nLong-term behavior of polynomial chaos in stochastic flow simulations.\u00a0Zbl\u00a01123.76058\n2006\nDe-aliasing on non-uniform grids: algorithms and applications.\u00a0Zbl\u00a01161.76534\nKirby, Robert M.; Karniadakis, George Em\n2003\nDiscontinuous Galerkin methods. Theory, computation and applications. 1st international symposium on DGM, Newport, RI, USA, May 24\u201326, 1999.\u00a0Zbl\u00a00935.00043\n2000\nDiscontinuous spectral element methods for time- and space-fractional advection equations.\u00a0Zbl\u00a01304.35757\n2014\nNumerical Gaussian processes for time-dependent and nonlinear partial differential equations.\u00a0Zbl\u00a01386.65030\nRaissi, Maziar; Perdikaris, Paris; Karniadakis, George Em\n2018\nReweighted $$\\ell_1$$ minimization method for stochastic elliptic differential equations.\u00a0Zbl\u00a01349.60113\n2013\nWhat is the fractional Laplacian? 
A comparative review with new results.\u00a0Zbl\u00a01453.35179\nLischke, Anna; Pang, Guofei; Gulian, Mamikon; Song, Fangying; Glusa, Christian; Zheng, Xiaoning; Mao, Zhiping; Cai, Wei; Meerschaert, Mark M.; Ainsworth, Mark; Karniadakis, George Em\n2020\nFrequency selection and asymptotic states in laminar wakes.\u00a0Zbl\u00a00659.76043\nKarniadakis, George Em; Triantafyllou, George S.\n1989\nDynamics and low-dimensionality of a turbulent near wake.\u00a0Zbl\u00a00987.76041\nMa, X.; Karamanos, G.-S.; Karniadakis, G. E.\n2000\nA discontinuous Galerkin ALE method for compressible viscous flows in moving domains.\u00a0Zbl\u00a00956.76046\nLomtev, I.; Kirby, R. M.; Karniadakis, G. E.\n1999\nA spectral method (of exponential convergence) for singular solutions of the diffusion equation with general two-sided fractional derivative.\u00a0Zbl\u00a01422.65428\n2018\nMicro flows. Fundamentals and simulation.\u00a0Zbl\u00a00998.76002\n2002\nTime-dependent generalized polynomial chaos.\u00a0Zbl\u00a01201.65216\nGerritsma, Marc; van der Steen, Jan-Bart; Vos, Peter; Karniadakis, George\n2010\nOnset of three-dimensionality, equilibria, and early transition in flow over a backward-facing step.\u00a0Zbl\u00a00728.76057\nKaiktsis, Lambros; Karniadakis, George Em; Orszag, Steven A.\n1991\nAdaptive ANOVA decomposition of stochastic incompressible and compressible flows.\u00a0Zbl\u00a01408.76428\nYang, Xiu; Choi, Minseok; Lin, Guang; Karniadakis, George Em\n2012\nA new stochastic approach to transient heat conduction modeling with uncertainty.\u00a0Zbl\u00a01038.80003\n2003\nTetrahedral spectral elements for CFD.\u00a0Zbl\u00a00857.76044\nSherwin, S. J.; Karniadakis, G. E.\n1995\nA combined direct numerical simulation\u2013particle image velocimetry study of the turbulent near wake.\u00a0Zbl\u00a01177.76156\nDong, S.; Karniadakis, G. E.; Ekmekci, A.; Rockwell, D.\n2006\nA discontinuous Galerkin method for the viscous MHD equations.\u00a0Zbl\u00a00954.76051\nWarburton, T. 
C.; Karniadakis, G. E.\n1999\nBeyond Wiener-Askey expansions: handling arbitrary PDFs.\u00a0Zbl\u00a01102.65006\n2006\nDeepXDE: a deep learning library for solving differential equations.\u00a0Zbl\u00a01459.65002\nLu, Lu; Meng, Xuhui; Mao, Zhiping; Karniadakis, George Em\n2021\nFPINNs: fractional physics-informed neural networks.\u00a0Zbl\u00a01420.35459\nPang, Guofei; Lu, Lu; Karniadakis, George Em\n2019\nSpectral polynomial chaos solutions of the stochastic advection equation.\u00a0Zbl\u00a01001.76084\nJardak, M.; Su, C.-H.; Karniadakis, G. E.\n2002\nTempered fractional Sturm-Liouville eigenproblems.\u00a0Zbl\u00a01323.34012\nZayernouri, Mohsen; Ainsworth, Mark; Karniadakis, George Em\n2015\nA robust and accurate outflow boundary condition for incompressible flow simulations on severely-truncated unbounded domains.\u00a0Zbl\u00a01349.76569\nDong, S.; Karniadakis, G. E.; Chryssostomidis, C.\n2014\nImplicit-explicit difference schemes for nonlinear fractional differential equations with nonsmooth solutions.\u00a0Zbl\u00a01355.65104\nCao, Wanrong; Zeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em\n2016\nFast difference schemes for solving high-dimensional time-fractional subdiffusion equations.\u00a0Zbl\u00a01352.65278\nZeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em\n2016\nMechanisms of transverse motions in turbulent wall flows.\u00a0Zbl\u00a01039.76028\n2003\nA fractional phase-field model for two-phase flows with tunable sharpness: algorithms and simulations.\u00a0Zbl\u00a01423.76102\nSong, Fangying; Xu, Chuanju; Karniadakis, George Em\n2016\nPetrov-Galerkin and spectral collocation methods for distributed order differential equations.\u00a0Zbl\u00a01367.65113\nKharazmi, Ehsan; Zayernouri, Mohsen; Karniadakis, George Em\n2017\nA generalized spectral collocation method with tunable accuracy for fractional differential equations with end-point singularities.\u00a0Zbl\u00a01431.65193\nZeng, Fanhai; Mao, Zhiping; Karniadakis, George Em\n2017\nA direct 
numerical simulation study of flow past a freely vibrating cable.\u00a0Zbl\u00a00901.76062\nNewman, David J.; Karniadakis, George Em\n1997\nUnstructured spectral element methods for simulation of turbulent flows.\u00a0Zbl\u00a00840.76070\nHenderson, Ronald D.; Karniadakis, George Em\n1995\nSpectral element-Fourier methods for incompressible turbulent flows.\u00a0Zbl\u00a00722.76053\n1990\nPhysics-informed neural networks for high-speed flows.\u00a0Zbl\u00a01442.76092\nMao, Zhiping; Jagtap, Ameya D.; Karniadakis, George Em\n2020\nSpectral and discontinuous spectral element methods for fractional delay equations.\u00a0Zbl\u00a01314.34159\nZayernouri, Mohsen; Cao, Wanrong; Zhang, Zhongqiang; Karniadakis, George Em\n2014\nUnsteadiness and convective instabilities in two-dimensional flow over a backward-facing step.\u00a0Zbl\u00a00875.76111\nKaiktsis, Lambros; Karniadakis, George Em; Orszag, Steven A.\n1996\nNonlinear information fusion algorithms for data-efficient multi-fidelity modelling.\u00a0Zbl\u00a01407.62252\nPerdikaris, P.; Raissi, M.; Damianou, A.; Lawrence, N. D.; Karniadakis, G. E.\n2017\nEquation-free\/Galerkin-free POD-assisted computation of incompressible flows.\u00a0Zbl\u00a01213.76146\nSirisup, Sirod; Karniadakis, George Em; Xiu, Dongbin; Kevrekidis, Ioannis G.\n2005\nDrag reduction in wall-bounded turbulence via a transverse travelling wave.\u00a0Zbl\u00a01112.76373\nDu, Yiqing; Symeonidis, V.; Karniadakis, G. E.\n2002\nTetrahedral $$hp$$ finite elements: Algorithms and flow simulations.\u00a0Zbl\u00a00847.76038\nSherwin, S. J.; Karniadakis, G. E.\n1996\nSpectral\/hp methods for viscous compressible flows on unstructured 2D meshes.\u00a0Zbl\u00a00929.76095\nLomtev, I.; Quillen, C. B.; Karniadakis, G. 
E.\n1998\nOptimal error estimates of spectral Petrov-Galerkin and collocation methods for initial value problems of fractional differential equations.\u00a0Zbl\u00a01326.65100\nZhang, Zhongqiang; Zeng, Fanhai; Karniadakis, George Em\n2015\nNumerical methods for stochastic partial differential equations with white noise.\u00a0Zbl\u00a01380.65021\n2017\nSecond-order numerical methods for multi-term fractional differential equations: smooth and non-smooth solutions.\u00a0Zbl\u00a01439.65081\nZeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em\n2017\nSystematic coarse-graining of spectrin-level red blood cell models.\u00a0Zbl\u00a01231.74311\nFedosov, Dmitry A.; Caswell, Bruce; Karniadakis, George Em\n2010\nDynamics and flow structures in the turbulent wake of rigid and flexible cylinders subject to vortex-induced vibrations.\u00a0Zbl\u00a00983.76029\n1999\nGeneralized polynomial chaos and random oscillators.\u00a0Zbl\u00a01060.70515\nLucor, D.; Su, C.-H.; Karniadakis, G. E.\n2004\nConservative physics-informed neural networks on discrete domains for conservation laws: applications to forward and inverse problems.\u00a0Zbl\u00a01442.92002\nJagtap, Ameya D.; Kharazmi, Ehsan; Karniadakis, George Em\n2020\nSpectral element simulations of laminar and turbulent flows in complex geometries.\u00a0Zbl\u00a00678.76050\n1989\nBasis functions for triangular and quadrilateral high-order elements.\u00a0Zbl\u00a00930.35016\nWarburton, T. C.; Sherwin, S. J.; Karniadakis, G. 
E.\n1999\nA new method to impose no-slip boundary conditions in dissipative particle dynamics.\u00a0Zbl\u00a01177.76325\nPivkin, Igor V.; Karniadakis, George Em\n2005\nA discontinuous Galerkin method for the Navier-Stokes equations.\u00a0Zbl\u00a00951.76041\n1999\nQuantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems.\u00a0Zbl\u00a01454.65008\nZhang, Dongkun; Lu, Lu; Guo, Ling; Karniadakis, George Em\n2019\nExact PDF equations and closure approximations for advective-reactive transport.\u00a0Zbl\u00a01349.35068\nVenturi, D.; Tartakovsky, D. M.; Tartakovsky, A. M.; Karniadakis, G. E.\n2013\nA convergence study of a new partitioned fluid-structure interaction algorithm based on fictitious mass and damping.\u00a0Zbl\u00a01426.76496\n2012\nDeep learning of vortex-induced vibrations.\u00a0Zbl\u00a01415.76177\nRaissi, Maziar; Wang, Zhicheng; Triantafyllou, Michael S.; Karniadakis, George Em\n2019\nStability and accuracy of periodic flow solutions obtained by a POD-penalty method.\u00a0Zbl\u00a01070.35024\n2005\nFractional modeling of viscoelasticity in 3D cerebral arteries and aneurysms.\u00a0Zbl\u00a01415.74049\nYu, Yue; Perdikaris, Paris; Karniadakis, George Em\n2016\nConvolutionless Nakajima-Zwanzig equations for stochastic analysis in nonlinear dynamical systems.\u00a0Zbl\u00a01371.60119\n2014\nComputing fractional Laplacians on complex-geometry domains: algorithms and simulations.\u00a0Zbl\u00a01380.65357\nSong, Fangying; Xu, Chuanju; Karniadakis, George Em\n2017\nGalerkin and discontinuous Galerkin spectral\/$$hp$$ methods.\u00a0Zbl\u00a00924.76078\nWarburton, T. C.; Lomtev, I.; Du, Y.; Sherwin, S. J.; Karniadakis, G. 
E.\n1999\nA composite neural network that learns from multi-fidelity data: application to function approximation and inverse PDE problems.\u00a0Zbl\u00a01454.76006\n2020\nA Petrov-Galerkin spectral method of linear complexity for fractional multiterm ODEs on the half line.\u00a0Zbl\u00a01367.65108\nLischke, Anna; Zayernouri, Mohsen; Karniadakis, George Em\n2017\nHidden fluid mechanics: Learning velocity and pressure fields from flow visualizations.\u00a0Zbl\u00a01478.76057\nRaissi, Maziar; Yazdani, Alireza; Karniadakis, George Em.\n2020\nA computable evolution equation for the joint response-excitation probability density function of stochastic dynamical systems.\u00a0Zbl\u00a01365.92023\nVenturi, D.; Sapsis, T. P.; Cho, H.; Karniadakis, G. E.\n2012\nGappy data and reconstruction procedures for flow past a cylinder.\u00a0Zbl\u00a01065.76159\n2004\nSpectral element methods for elliptic problems in nonsmooth domains.\u00a0Zbl\u00a00844.65082\n1995\nDeepXDE: a deep learning library for solving differential equations.\u00a0Zbl\u00a01459.65002\nLu, Lu; Meng, Xuhui; Mao, Zhiping; Karniadakis, George Em\n2021\nhp-VPINNs: variational physics-informed neural networks with domain decomposition.\u00a0Zbl\u00a007338008\nKharazmi, Ehsan; Zhang, Zhongqiang; Karniadakis, George E. 
M.\n2021\nTwo-point stress-strain-rate correlation structure and non-local eddy viscosity in turbulent flows.\u00a0Zbl\u00a01461.76181\nClark Di Leoni, Patricio; Zaki, Tamer A.; Karniadakis, George; Meneveau, Charles\n2021\nNon-invasive inference of thrombus material properties with physics-informed neural networks.\u00a0Zbl\u00a007340448\nYin, Minglang; Zheng, Xiaoning; Humphrey, Jay D.; Karniadakis, George Em\n2021\nTowards a unified theory of fractional and nonlocal vector calculus.\u00a0Zbl\u00a007443850\nD'Elia, Marta; Gulian, Mamikon; Olson, Hayley; Karniadakis, George Em\n2021\nFlow over an espresso cup: inferring 3-D velocity and pressure fields from tomographic background oriented Schlieren via physics-informed neural networks.\u00a0Zbl\u00a01461.76417\nCai, Shengze; Wang, Zhicheng; Fuest, Frederik; Jeon, Young Jin; Gray, Callum; Karniadakis, George Em\n2021\nSolving inverse stochastic problems from discrete particle observations using the Fokker-Planck equation and physics-informed neural networks.\u00a0Zbl\u00a01480.35377\nChen, Xiaoli; Yang, Liu; Duan, Jinqiao; Karniadakis, George Em\n2021\nMultiscale parareal algorithm for long-time mesoscopic simulations of microvascular blood flow in zebrafish.\u00a0Zbl\u00a01479.76123\nBlumers, Ansel L.; Yin, Minglang; Nakajima, Hiroyuki; Hasegawa, Yosuke; Li, Zhen; Karniadakis, George Em\n2021\nLearning and meta-learning of stochastic advection-diffusion-reaction systems from sparse measurements.\u00a0Zbl\u00a007441294\nChen, Xiaoli; Duan, Jinqiao; Karniadakis, George Em\n2021\nWhat is the fractional Laplacian? 
A comparative review with new results.\u00a0Zbl\u00a01453.35179\nLischke, Anna; Pang, Guofei; Gulian, Mamikon; Song, Fangying; Glusa, Christian; Zheng, Xiaoning; Mao, Zhiping; Cai, Wei; Meerschaert, Mark M.; Ainsworth, Mark; Karniadakis, George Em\n2020\nPhysics-informed neural networks for high-speed flows.\u00a0Zbl\u00a01442.76092\nMao, Zhiping; Jagtap, Ameya D.; Karniadakis, George Em\n2020\nConservative physics-informed neural networks on discrete domains for conservation laws: applications to forward and inverse problems.\u00a0Zbl\u00a01442.92002\nJagtap, Ameya D.; Kharazmi, Ehsan; Karniadakis, George Em\n2020\nA composite neural network that learns from multi-fidelity data: application to function approximation and inverse PDE problems.\u00a0Zbl\u00a01454.76006\n2020\nHidden fluid mechanics: Learning velocity and pressure fields from flow visualizations.\u00a0Zbl\u00a01478.76057\nRaissi, Maziar; Yazdani, Alireza; Karniadakis, George Em.\n2020\nAdaptive activation functions accelerate convergence in deep and physics-informed neural networks.\u00a0Zbl\u00a01453.68165\nJagtap, Ameya D.; Kawaguchi, Kenji; Karniadakis, George Em\n2020\nLearning in modal space: solving time-dependent stochastic PDEs using physics-informed neural networks.\u00a0Zbl\u00a01440.60067\nZhang, Dongkun; Guo, Ling; Karniadakis, George Em\n2020\nExtended physics-informed neural networks (XPINNs): a generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations.\u00a0Zbl\u00a007419158\nJagtap, Ameya D.; Karniadakis, George Em\n2020\nPhysics-informed generative adversarial networks for stochastic differential equations.\u00a0Zbl\u00a01440.60065\nYang, Liu; Zhang, Dongkun; Karniadakis, George Em\n2020\nOn the convergence of physics informed neural networks for linear second-order elliptic and parabolic type PDEs.\u00a0Zbl\u00a01473.65349\nShin, Yeonjong; Darbon, J\u00e9r\u00f4me; Karniadakis, George Em\n2020\nPPINN: parareal 
physics-informed neural network for time-dependent PDEs.\u00a0Zbl\u00a007337119\nMeng, Xuhui; Li, Zhen; Zhang, Dongkun; Karniadakis, George Em\n2020\nA stabilized semi-implicit Fourier spectral method for nonlinear space-fractional reaction-diffusion equations.\u00a0Zbl\u00a01453.65370\nZhang, Hui; Jiang, Xiaoyun; Zeng, Fanhai; Karniadakis, George Em\n2020\nLocally adaptive activation functions with slope recovery for deep and physics-informed neural networks.\u00a0Zbl\u00a01472.68175\nJagtap, Ameya D.; Kawaguchi, Kenji; Karniadakis, George Em\n2020\nSympnets: intrinsic structure-preserving symplectic networks for identifying Hamiltonian systems.\u00a0Zbl\u00a01475.68316\nJin, Pengzhan; Zhang, Zhen; Zhu, Aiqing; Tang, Yifa; Karniadakis, George Em\n2020\nA fast solver for spectral elements applied to fractional differential equations using hierarchical matrix approximation.\u00a0Zbl\u00a01442.65144\nLi, Xianjuan; Mao, Zhiping; Wang, Nan; Song, Fangying; Wang, Hong; Karniadakis, George Em\n2020\nQuantifying the generalization error in deep learning in terms of data distribution and neural network smoothness.\u00a0Zbl\u00a01475.68315\nJin, Pengzhan; Lu, Lu; Tang, Yifa; Karniadakis, George Em\n2020\nDying ReLU and initialization: theory and numerical examples.\u00a0Zbl\u00a007419148\nLu, Lu; Shin, Yeonjong; Su, Yanhui; Karniadakis, George Em\n2020\nPhysics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.\u00a0Zbl\u00a01415.68175\nRaissi, M.; Perdikaris, P.; Karniadakis, G. 
E.\n2019\nFPINNs: fractional physics-informed neural networks.\u00a0Zbl\u00a01420.35459\nPang, Guofei; Lu, Lu; Karniadakis, George Em\n2019\nQuantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems.\u00a0Zbl\u00a01454.65008\nZhang, Dongkun; Lu, Lu; Guo, Ling; Karniadakis, George Em\n2019\nDeep learning of vortex-induced vibrations.\u00a0Zbl\u00a01415.76177\nRaissi, Maziar; Wang, Zhicheng; Triantafyllou, Michael S.; Karniadakis, George Em\n2019\nEfficient multistep methods for tempered fractional calculus: algorithms and simulations.\u00a0Zbl\u00a007099350\nGuo, Ling; Zeng, Fanhai; Turner, Ian; Burrage, Kevin; Karniadakis, George Em\n2019\nMulti-domain spectral collocation method for variable-order nonlinear fractional differential equations.\u00a0Zbl\u00a01440.65258\nZhao, Tinggang; Mao, Zhiping; Karniadakis, George Em\n2019\nMachine learning of space-fractional differential equations.\u00a0Zbl\u00a01419.35209\nGulian, Mamikon; Raissi, Maziar; Perdikaris, Paris; Karniadakis, George\n2019\nNeural-net-induced Gaussian process regression for function approximation and PDE solution.\u00a0Zbl\u00a01451.68242\nPang, Guofei; Yang, Liu; Karniadakis, George Em\n2019\nDiscovering a universal variable-order fractional model for turbulent Couette flow using a physics-informed neural network.\u00a0Zbl\u00a01434.76053\nMehta, Pavan Pranjivan; Pang, Guofei; Song, Fangying; Karniadakis, George Em\n2019\nAn entropy-viscosity large eddy simulation study of turbulent flow in a flexible pipe.\u00a0Zbl\u00a01415.76369\nWang, Zhicheng; Triantafyllou, Michael S.; Constantinides, Yiannis; Karniadakis, George Em\n2019\nFractional Gray-Scott model: well-posedness, discretization, and simulations.\u00a0Zbl\u00a01440.35344\nWang, Tingting; Song, Fangying; Wang, Hong; Karniadakis, George Em\n2019\nFractional magneto-hydrodynamics: algorithms and applications.\u00a0Zbl\u00a01416.76127\n2019\nA spectral penalty method for two-sided 
fractional differential equations with General boundary conditions.\u00a0Zbl\u00a007099316\nWang, Nan; Mao, Zhiping; Huang, Chengming; Karniadakis, George Em\n2019\nSupervised parallel-in-time algorithm for long-time Lagrangian simulations of stochastic dynamics: application to hydrodynamics.\u00a0Zbl\u00a01452.76172\nBlumers, Ansel L.; Li, Zhen; Karniadakis, George Em\n2019\nMapping the properties of the vortex-induced vibrations of flexible cylinders in uniform oncoming flow.\u00a0Zbl\u00a01430.76091\nFan, Dixia; Wang, Zhicheng; Triantafyllou, Michael S.; Karniadakis, George Em\n2019\nNonlocal flocking dynamics: learning the fractional order of PDEs from particle simulations.\u00a0Zbl\u00a01449.35447\nMao, Zhiping; Li, Zhen; Karniadakis, George Em\n2019\nNumerical methods.\u00a0Zbl\u00a01410.65001\n2019\nA Computational Mechanics special issue on: Data-driven modeling and simulation \u2013 theory, methods, and applications.\u00a0Zbl\u00a01476.00073\n2019\nA stabilized phase-field method for two-phase flow at high Reynolds number and large density\/viscosity ratio.\u00a0Zbl\u00a01453.76154\nWang, Zhicheng; Dong, Suchuan; Triantafyllou, Michael S.; Constantinides, Yiannis; Karniadakis, George Em\n2019\nOne-dimensional modeling of fractional flow reserve in coronary artery disease: uncertainty quantification and Bayesian optimization.\u00a0Zbl\u00a01441.76150\nYin, Minglang; Yazdani, Alireza; Karniadakis, George Em\n2019\nHidden physics models: machine learning of nonlinear partial differential equations.\u00a0Zbl\u00a01381.68248\n2018\nNumerical Gaussian processes for time-dependent and nonlinear partial differential equations.\u00a0Zbl\u00a01386.65030\nRaissi, Maziar; Perdikaris, Paris; Karniadakis, George Em\n2018\nA spectral method (of exponential convergence) for singular solutions of the diffusion equation with general two-sided fractional derivative.\u00a0Zbl\u00a01422.65428\n2018\nA Riesz basis Galerkin method for the tempered fractional 
Laplacian.\u00a0Zbl\u00a01402.65120\nZhang, Zhijiang; Deng, Weihua; Karniadakis, George Em\n2018\nA partitioned coupling framework for peridynamics and classical theory: analysis and simulations.\u00a0Zbl\u00a01440.74045\nYu, Yue; Bargos, Fabiano F.; You, Huaiqian; Parks, Michael L.; Bittencourt, Marco L.; Karniadakis, George E.\n2018\nA new class of semi-implicit methods with linear complexity for nonlinear fractional differential equations.\u00a0Zbl\u00a01404.65105\nZeng, Fanhai; Turner, Ian; Burrage, Kevin; Karniadakis, George Em\n2018\nA dissipative particle dynamics method for arbitrarily complex geometries.\u00a0Zbl\u00a01380.76123\nLi, Zhen; Bian, Xin; Tang, Yu-Hang; Karniadakis, George Em\n2018\nA computational stochastic methodology for the design of random meta-materials under geometric constraints.\u00a0Zbl\u00a01398.60073\nTsantili, Ivi C.; Cho, Min Hyung; Cai, Wei; Karniadakis, George Em\n2018\nMulti-fidelity optimization of super-cavitating hydrofoils.\u00a0Zbl\u00a01439.74256\nBonfiglio, L.; Perdikaris, P.; Brizzolara, S.; Karniadakis, G. 
E.\n2018\nStochastic domain decomposition via moment minimization.\u00a0Zbl\u00a01391.60167\nZhang, Dongkun; Babaee, Hessam; Karniadakis, George Em\n2018\nA spectral-element\/Fourier smoothed profile method for large-eddy simulations of complex VIV problems.\u00a0Zbl\u00a01410.76320\nWang, Zhicheng; Triantafyllou, Michael S.; Constantinides, Yiannis; Karniadakis, George Em\n2018\nActive learning of constitutive relation from mesoscopic dynamics for macroscopic modeling of non-Newtonian flows.\u00a0Zbl\u00a01392.76058\nZhao, Lifei; Li, Zhen; Caswell, Bruce; Ouyang, Jie; Karniadakis, George Em\n2018\nAnalytical and computational studies of correlations of hydrodynamic fluctuations in shear flow.\u00a0Zbl\u00a007414373\nBian, Xin; Deng, Mingge; Karniadakis, George Em\n2018\nMachine learning of linear differential equations using Gaussian processes.\u00a0Zbl\u00a01380.68339\nRaissi, Maziar; Perdikaris, Paris; Karniadakis, George Em\n2017\nInferring solutions of differential equations using noisy multi-fidelity data.\u00a0Zbl\u00a01382.65229\nRaissi, Maziar; Perdikaris, Paris; Karniadakis, George Em\n2017\nPetrov-Galerkin and spectral collocation methods for distributed order differential equations.\u00a0Zbl\u00a01367.65113\nKharazmi, Ehsan; Zayernouri, Mohsen; Karniadakis, George Em\n2017\nA generalized spectral collocation method with tunable accuracy for fractional differential equations with end-point singularities.\u00a0Zbl\u00a01431.65193\nZeng, Fanhai; Mao, Zhiping; Karniadakis, George Em\n2017\nNonlinear information fusion algorithms for data-efficient multi-fidelity modelling.\u00a0Zbl\u00a01407.62252\nPerdikaris, P.; Raissi, M.; Damianou, A.; Lawrence, N. D.; Karniadakis, G. 
E.\n2017\nNumerical methods for stochastic partial differential equations with white noise.\u00a0Zbl\u00a01380.65021\n2017\nSecond-order numerical methods for multi-term fractional differential equations: smooth and non-smooth solutions.\u00a0Zbl\u00a01439.65081\nZeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em\n2017\nComputing fractional Laplacians on complex-geometry domains: algorithms and simulations.\u00a0Zbl\u00a01380.65357\nSong, Fangying; Xu, Chuanju; Karniadakis, George Em\n2017\nA Petrov-Galerkin spectral method of linear complexity for fractional multiterm ODEs on the half line.\u00a0Zbl\u00a01367.65108\nLischke, Anna; Zayernouri, Mohsen; Karniadakis, George Em\n2017\nMulti-fidelity Gaussian process regression for prediction of random fields.\u00a0Zbl\u00a01419.62272\nParussini, L.; Venturi, D.; Perdikaris, P.; Karniadakis, G. E.\n2017\nDiscovering variable fractional orders of advection-dispersion equations from field data using multi-fidelity Bayesian optimization.\u00a0Zbl\u00a01419.62499\nPang, Guofei; Perdikaris, Paris; Cai, Wei; Karniadakis, George Em\n2017\nA Petrov-Galerkin spectral element method for fractional elliptic problems.\u00a0Zbl\u00a01439.65205\nKharazmi, Ehsan; Zayernouri, Mohsen; Karniadakis, George Em\n2017\nAdaptive finite element method for fractional differential equations using hierarchical matrices.\u00a0Zbl\u00a01439.65091\nZhao, Xuan; Hu, Xiaozhe; Cai, Wei; Karniadakis, George Em\n2017\nA robust bi-orthogonal\/dynamically-orthogonal method using the covariance pseudo-inverse with application to stochastic flow problems.\u00a0Zbl\u00a01380.76093\nBabaee, Hessam; Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em\n2017\nA tunable finite difference method for fractional differential equations with non-smooth solutions.\u00a0Zbl\u00a01439.65082\nChen, Xuejuan; Zeng, Fanhai; Karniadakis, George Em\n2017\nFractional Burgers equation with nonlinear non-locality: spectral vanishing viscosity and local discontinuous 
Galerkin methods.\u00a0Zbl\u00a01380.65280\n2017\nDissipative particle dynamics: foundation, evolution, implementation, and applications.\u00a0Zbl\u00a01387.35496\nLi, Z.; Bian, X.; Li, X.; Deng, M.; Tang, Y.-H.; Caswell, B.; Karniadakis, George E.\n2017\nNumerical methods for high-dimensional kinetic equations.\u00a0Zbl\u00a01404.65117\nCho, Heyrim; Venturi, Daniele; Karniadakis, George Em\n2017\nEfficient two-dimensional simulations of the fractional Szabo equation with different time-stepping schemes.\u00a0Zbl\u00a01412.65091\nSong, Fangying; Zeng, Fanhai; Cai, Wei; Chen, Wen; Karniadakis, George Em\n2017\nImplicit-explicit difference schemes for nonlinear fractional differential equations with nonsmooth solutions.\u00a0Zbl\u00a01355.65104\nCao, Wanrong; Zeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em\n2016\nFast difference schemes for solving high-dimensional time-fractional subdiffusion equations.\u00a0Zbl\u00a01352.65278\nZeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em\n2016\nA fractional phase-field model for two-phase flows with tunable sharpness: algorithms and simulations.\u00a0Zbl\u00a01423.76102\nSong, Fangying; Xu, Chuanju; Karniadakis, George Em\n2016\nFractional modeling of viscoelasticity in 3D cerebral arteries and aneurysms.\u00a0Zbl\u00a01415.74049\nYu, Yue; Perdikaris, Paris; Karniadakis, George Em\n2016\nMultifidelity information fusion algorithms for high-dimensional systems and massive data sets.\u00a0Zbl\u00a01342.62110\nPerdikaris, Paris; Venturi, Daniele; Karniadakis, George Em\n2016\nNumerical methods for high-dimensional probability density function equations.\u00a0Zbl\u00a01349.65046\nCho, H.; Venturi, D.; Karniadakis, G. 
E.\n2016\nStrong and weak convergence order of finite element methods for stochastic PDEs with spatial white noise.\u00a0Zbl\u00a01357.65013\nZhang, Zhongqiang; Rozovskii, Boris; Karniadakis, George Em.\n2016\nFractional-order uniaxial visco-elasto-plastic models for structural analysis.\u00a0Zbl\u00a01439.74077\nSuzuki, J. L.; Zayernouri, M.; Bittencourt, M. L.; Karniadakis, G. E.\n2016\nMulti-fidelity modelling of mixed convection based on experimental correlations and numerical simulations.\u00a0Zbl\u00a01383.76438\nBabaee, H.; Perdikaris, P.; Chryssostomidis, C.; Karniadakis, G. E.\n2016\nFlow in complex domains simulated by dissipative particle dynamics driven by geometry-specific body-forces.\u00a0Zbl\u00a01349.76069\nYazdani, Alireza; Deng, Mingge; Caswell, Bruce; Karniadakis, George Em\n2016\nFractional spectral collocation methods for linear and nonlinear variable order FPDEs.\u00a0Zbl\u00a01349.65531\n2015\nSecond-order approximations for variable order fractional derivatives: algorithms and applications.\u00a0Zbl\u00a01349.65092\nZhao, Xuan; Sun, Zhi-zhong; Karniadakis, George Em\n2015\nA generalized spectral collocation method with tunable accuracy for variable-order fractional differential equations.\u00a0Zbl\u00a01339.65197\nZeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em\n2015\nA unified Petrov-Galerkin spectral method for fractional PDEs.\u00a0Zbl\u00a01425.65127\nZayernouri, Mohsen; Ainsworth, Mark; Karniadakis, George Em.\n2015\nTempered fractional Sturm-Liouville eigenproblems.\u00a0Zbl\u00a01323.34012\nZayernouri, Mohsen; Ainsworth, Mark; Karniadakis, George Em\n2015\nOptimal error estimates of spectral Petrov-Galerkin and collocation methods for initial value problems of fractional differential equations.\u00a0Zbl\u00a01326.65100\nZhang, Zhongqiang; Zeng, Fanhai; Karniadakis, George Em\n2015\nTime-splitting schemes for fractional differential equations I: Smooth solutions.\u00a0Zbl\u00a01320.65106\nCao, Wanrong; Zhang, Zhongqiang; 
Karniadakis, George Em\n2015\nNumerical methods for stochastic delay differential equations via the Wong-Zakai approximation.\u00a0Zbl\u00a01329.60233\nCao, Wanrong; Zhang, Zhongqiang; Karniadakis, George Em\n2015\nMulti-resolution flow simulations by smoothed particle hydrodynamics via domain decomposition.\u00a0Zbl\u00a01349.76663\nBian, Xin; Li, Zhen; Karniadakis, George Em\n2015\nSpecial issue on \u201cFractional PDEs: theory, numerics, and applications\u201d.\u00a0Zbl\u00a01349.35004\n2015\nMultiscale universal interface: a concurrent framework for coupling heterogeneous solvers.\u00a0Zbl\u00a01349.76738\nTang, Yu-Hang; Kudo, Shuhei; Bian, Xin; Li, Zhen; Karniadakis, George Em\n2015\nNumerical methods for SPDEs with tempered stable processes.\u00a0Zbl\u00a01320.65020\n2015\n...and 176 more Documents\nall top 5\n\n### Cited by 7,186 Authors\n\n 192 Karniadakis, George Em 70 Sherwin, Spencer J. 48 Lin, Guang 46 Wang, Hong 39 Xiu, Dongbin 32 Zheng, Xiangcheng 31 Thompson, Mark Christopher 29 Doostan, Alireza 28 Narayan, Akil C. 26 Shen, Jie 25 Dehghan Takht Fooladi, Mehdi 25 Le Ma\u00eetre, Olivier P. 23 Ghanem, Roger G. 23 Venturi, Daniele 22 Schwab, Christoph 22 Shu, Chi-Wang 22 Zhou, Tao 21 Hourigan, Kerry 21 Perdikaris, Paris G. 21 Zabaras, Nicholas J. 20 Blackburn, Hugh M. 20 Dumbser, Michael 20 Hesthaven, Jan S. 20 Kopriva, David Alan 20 Nobile, Fabio 20 Rozza, Gianluigi 19 Tartakovsky, Daniel M. 18 Lucor, Didier 18 Mao, Zhiping 18 Xu, Chuanju 18 Zaky, Mahmoud A. 18 Zayernouri, Mohsen 17 Abbaszadeh, Mostafa 17 Jin, Shi 17 Knio, Omar M. 17 Sagaut, Pierre 17 Wan, Xiaoliang 17 Wang, Lilian 16 Dong, Suchuan 16 Guo, Ben-Yu 16 Jakeman, John Davis 16 Najm, Habib N. 16 Noack, Bernd R. 16 Nordstr\u00f6m, Jan 16 Quarteroni, Alfio M. 16 Zhang, Dongxiao 15 Giraldo, Francis X. 15 Iaccarino, Gianluca 15 Li, Zhen 15 Liu, Fawang 15 Warburton, Timothy 15 Zhang, Zhimin 14 Baccouch, Mahboub 14 Boscheri, Walter 14 Brunton, Steven L. 
14 Kronbichler, Martin 14 Machado, Jos\u00e9 Ant\u00f3nio Tenreiro 14 Mishra, Siddhartha 14 Pasquetti, Richard 14 Sheard, Gregory John 14 Tartakovsky, Alexandre M. 14 Xing, Yulong 14 Yang, Xiu 14 Zeng, Fanhai 14 Zhang, Zhongqiang 13 Ainsworth, Mark 13 Bourguet, R\u00e9mi 13 Elman, Howard C. 13 Ferrer, Esteban 13 Guo, Ling 13 Jiang, Xiaoyun 13 Luo, Hong 13 Pulch, Roland 13 Reddy, Junuthula Narasimha 13 Soize, Christian 13 Triantafyllou, Michael S. 12 Cheng, Liang 12 Congedo, Pietro Marco 12 Deng, Weihua 12 Gerritsma, Marc I. 12 Gunzburger, Max D. 12 Kutz, J. Nathan 12 Marzouk, Youssef M. 12 Matthies, Hermann Georg 12 Maxey, Martin R. 12 Nouy, Anthony 12 Trask, Nathaniel A. 12 Turner, Ian William 12 Xu, Yan 12 Yu, Yue 11 Abdelkawy, Mohamed A. 11 B\u0103leanu, Dumitru I. 11 Cort\u00e9s L\u00f3pez, Juan Carlos 11 Dawson, Clint N. 11 Fischer, Paul F. 11 Gassner, Gregor J. 11 Hao, Zhaopeng 11 Leweke, Thomas 11 Li, Dongfang 11 Li, Hong ...and 7,086 more Authors\nall top 5\n\n### Cited in 327 Serials\n\n 975 Journal of Computational Physics 429 Computer Methods in Applied Mechanics and Engineering 283 Journal of Fluid Mechanics 277 Journal of Scientific Computing 273 Computers and Fluids 149 SIAM Journal on Scientific Computing 127 Physics of Fluids 114 Applied Numerical Mathematics 108 Computers & Mathematics with Applications 100 Journal of Computational and Applied Mathematics 83 Applied Mathematics and Computation 76 International Journal for Numerical Methods in Engineering 69 SIAM\/ASA Journal on Uncertainty Quantification 66 Applied Mathematical Modelling 65 International Journal for Numerical Methods in Fluids 52 Mathematics of Computation 49 Communications in Computational Physics 47 European Journal of Mechanics. B. 
Fluids 45 Computational Mechanics 43 Mathematics and Computers in Simulation 39 Numerical Algorithms 38 International Journal of Computer Mathematics 33 Numerische Mathematik 32 SIAM Journal on Numerical Analysis 32 Computational and Applied Mathematics 31 Communications in Nonlinear Science and Numerical Simulation 30 Computational Geosciences 29 Fractional Calculus & Applied Analysis 27 Physica D 27 Advances in Computational Mathematics 26 International Journal of Heat and Mass Transfer 25 Mathematical Problems in Engineering 25 International Journal of Computational Fluid Dynamics 23 Chaos, Solitons and Fractals 23 Engineering Analysis with Boundary Elements 22 Communications on Applied Mathematics and Computation 21 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 19 Journal of Engineering Mathematics 19 Journal of Mathematical Analysis and Applications 19 Nonlinear Dynamics 18 Advances in Difference Equations 17 Archives of Computational Methods in Engineering 17 Multiscale Modeling & Simulation 15 BIT 15 International Journal of Computational Methods 15 Proceedings of the Royal Society of London. A. Mathematical, Physical and Engineering Sciences 14 Applied Mathematics Letters 13 Journal of Statistical Physics 12 Numerical Methods for Partial Differential Equations 11 Acta Mechanica 11 Theoretical and Computational Fluid Dynamics 11 Mathematical and Computer Modelling 11 European Journal of Mechanics. A. Solids 10 Computer Physics Communications 10 Automatica 10 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 10 Mathematical Modelling of Natural Phenomena 10 East Asian Journal on Applied Mathematics 9 Computational Methods in Applied Mathematics 9 Discrete and Continuous Dynamical Systems. Series S 9 Advances in Applied Mathematics and Mechanics 9 Journal of Theoretical Biology 9 AMM. Applied Mathematics and Mechanics. 
(English Edition) 8 SIAM Review 8 Chaos 8 Journal of Turbulence 8 Journal of Applied Mathematics and Computing 8 Journal of Computational Acoustics 8 Acta Mechanica Sinica 8 Advances in Mathematical Physics 8 International Journal of Applied and Computational Mathematics 7 Journal of Mathematical Physics 7 Meccanica 7 International Journal of Numerical Methods for Heat & Fluid Flow 7 Combustion Theory and Modelling 7 Discrete and Continuous Dynamical Systems. Series B 7 SIAM Journal on Applied Dynamical Systems 7 International Journal of Numerical Analysis and Modeling 7 Statistics and Computing 6 ZAMP. Zeitschrift f\u00fcr angewandte Mathematik und Physik 6 SIAM Journal on Control and Optimization 6 Continuum Mechanics and Thermodynamics 6 Abstract and Applied Analysis 6 Journal of Numerical Mathematics 6 Acta Numerica 6 Communications in Applied and Industrial Mathematics 6 Stochastic and Partial Differential Equations. Analysis and Computations 6 Journal of Function Spaces 6 Results in Applied Mathematics 5 Journal of Mathematical Biology 5 Mathematical Methods in the Applied Sciences 5 Physics Letters. A 5 Wave Motion 5 Calcolo 5 Applied Mathematics and Mechanics. (English Edition) 5 European Journal of Applied Mathematics 5 Journal of Non-Newtonian Fluid Mechanics 5 Archive of Applied Mechanics 5 Fractals 5 ZAMM. 
Zeitschrift f\u00fcr Angewandte Mathematik und Mechanik ...and 227 more Serials\nall top 5\n\n### Cited in 46 Fields\n\n 2,882 Numerical analysis\u00a0(65-XX) 2,191 Fluid mechanics\u00a0(76-XX) 1,406 Partial differential equations\u00a0(35-XX) 493 Mechanics of deformable solids\u00a0(74-XX) 465 Probability theory and stochastic processes\u00a0(60-XX) 284 Computer science\u00a0(68-XX) 261 Statistics\u00a0(62-XX) 254 Ordinary differential equations\u00a0(34-XX) 228 Biology and other natural sciences\u00a0(92-XX) 203 Real functions\u00a0(26-XX) 134 Statistical mechanics, structure of matter\u00a0(82-XX) 121 Dynamical systems and ergodic theory\u00a0(37-XX) 116 Classical thermodynamics, heat transfer\u00a0(80-XX) 112 Approximations and expansions\u00a0(41-XX) 104 Systems theory; control\u00a0(93-XX) 100 Optics, electromagnetic theory\u00a0(78-XX) 100 Geophysics\u00a0(86-XX) 76 Calculus of variations and optimal control; optimization\u00a0(49-XX) 57 Operations research, mathematical programming\u00a0(90-XX) 56 Special functions\u00a0(33-XX) 53 Integral equations\u00a0(45-XX) 44 Mechanics of particles and systems\u00a0(70-XX) 33 Game theory, economics, finance, and other social and behavioral sciences\u00a0(91-XX) 32 Harmonic analysis on Euclidean spaces\u00a0(42-XX) 32 Information and communication theory, circuits\u00a0(94-XX) 30 Linear and multilinear algebra; matrix theory\u00a0(15-XX) 29 Operator theory\u00a0(47-XX) 17 Functional analysis\u00a0(46-XX) 16 Quantum theory\u00a0(81-XX) 14 Global analysis, analysis on manifolds\u00a0(58-XX) 11 Difference and functional equations\u00a0(39-XX) 10 Integral transforms, operational calculus\u00a0(44-XX) 8 General and overarching topics; collections\u00a0(00-XX) 6 Functions of a complex variable\u00a0(30-XX) 5 Combinatorics\u00a0(05-XX) 5 Differential geometry\u00a0(53-XX) 5 Astronomy and astrophysics\u00a0(85-XX) 4 Number theory\u00a0(11-XX) 3 Measure and integration\u00a0(28-XX) 2 History and biography\u00a0(01-XX) 2 
Relativity and gravitational theory\u00a0(83-XX) 2 Mathematics education\u00a0(97-XX) 1 Mathematical logic and foundations\u00a0(03-XX) 1 Potential theory\u00a0(31-XX) 1 Convex and discrete geometry\u00a0(52-XX) 1 General topology\u00a0(54-XX)\n\n### Wikidata Timeline\n\nThe data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.","date":"2022-05-29 03:07:04","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5294930934906006, \"perplexity\": 13540.146990095758}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652663035797.93\/warc\/CC-MAIN-20220529011010-20220529041010-00525.warc.gz\"}"}
Text copyright © 2017 by Ashley Blom Illustrations copyright © 2017 by Lucy Engelman All rights reserved. Except as authorized under U.S. copyright law, no part of this book may be reproduced in any form without written permission from the publisher. Library of Congress Cataloging in Publication Number: 2016941165 ISBN 978-1-59474-921-6 Ebook ISBN 978-1-59474-922-3 Ebook design adapted from printed book designed by Andie Reid Production management by John J. McGurk Quirk Books 215 Church Street Philadelphia, PA 19106 quirkbooks.com v4.1 _Cover_ _Title Page_ _Copyright_ _Dedication_ _Introduction_ **TRICKY TECHNIQUES** How to Eat a Whole Fish How to Eat a Lobster How to Eat Crawfish How to Eat Raw Oysters How to Eat Escargots How to Eat Bugs How to Eat Asparagus How to Eat an Artichoke How to Slice an Avocado How to Open a Coconut How to Pick Out Ripe Fruit How to Eat Edamame How to Eat Kohlrabi How to Slice a Mango How to Eat a Papaya How to Eat a Pomegranate How to Eat a Rambutan How to Eat Kumquats How to Go Nuts How to Eat Durian How to Carve a Chicken How to Eat a Quail How to Eat Pigs' Feet How to Eat a Pig's Head **ETIQUETTE ENIGMAS** How to Use the Correct Fork How to Use Chopsticks How to Taste Cheese How to Eat Noodles How to Sip Soup How to Hold a Wineglass How to Taste Wine How to Make a Toast How to Drink Tea How to Use Bread as a Utensil How to Eat Sushi How to Tip How to Decide Who Pays the Bill How to Order from the Menu How to Excuse Yourself from the Table **FOODIE FIXES** How to Eat Something Spicy How to Eat Something Messy How to Pace Yourself When Drinking How to Stay Vegetarian at a Barbecue How to Stick to Your Diet at a Party How to Fix Bad Breath How to Handle Beans How to Taste Something You Hate How to Recover from a Tongue Burn How to Send Food Back How to Stop Yourself from Choking _Resources_ _About the Author_ _About the Illustrator_ _Index_ _Acknowledgments_ # INTRODUCTION Imagine this: You're in polite company—perhaps a business 
lunch, a fancy dinner party, or meeting your future spouse's parents—and suddenly there's a whole fish on the plate in front of you. Or maybe the cooked foot of a pig. Or maybe the only utensil offered is a pair of chopsticks or a loaf of bread. Are you prepared? Do you know what to do? Or do you excuse yourself, escape to the coat check, dig out your phone, and frantically research how to properly, and politely, consume your unexpected dinner? Honestly...do you even know how to excuse yourself from the table? There are etiquette rules for that! Where is that coat check ticket, anyway? Stop. Put the phone away. You got this. The following tips and tricks will explain, step by step, how to navigate these and many other edible enigmas with poise and grace. Whether your dinner is staring back at you or it's your turn to set the table, you'll know just what to do and how to do it. **EAT A WHOLE FISH** Does your dinner have a face? Follow these steps and you won't regret the staring contest. The flavor punch of a whole fish is more than worth it. **YOU WILL NEED** Whole, cooked fish Knife (ideally a fillet knife, but any small sharp knife will do) Fork or spoon STEP 1 If the fish is small, like a sardine, the bones are edible and you can eat it whole. Something like pickled herring can also be eaten whole because the vinegar in which the fish is stored softens the bones. If confronted with a larger fish, proceed to step 2. STEP 2 With the knife, make a cut just behind the head, parallel to the gills, just deep enough that the knife hits the spine. Make a second cut just in front of the tail, again not all the way through the body, stopping at the spine. STEP 3 Make a cut parallel to the belly, right below the spine. STEP 4 Use a knife or fork to gently push the meat off the ribs from the spine down to the belly. STEP 5 Remove the backbone by lifting the tail and pulling the spine up and off the body. Use the knife to extract any visible bones left in the meat. 
STEP 6 You should now have a boneless fillet on your plate. Slice off the head and tail and discard them, or scoop out any meaty bits you find in the head. Serious seafood foodies highly recommend fish cheeks. Alternative Method: Properly cooked fish will flake off the bone. If you don't mind a mess, you can simply use a fork to remove the meat from the body. **FISH ALERT!** Whole fish is popular in many cultures and restaurants. You're most likely to encounter it in a stew, encased in a salt crust, or grilled. Luckily, menus usually indicate that a fish will be served whole. "It was always the biggest fish I caught that got away." EUGENE FIELD **EAT A LOBSTER** Once considered a poor man's food, lobster now graces the tables of the rich, the famous, and the vacationing masses. Here's how to disassemble this bottom-dweller with ease. **YOU WILL NEED** 1 whole, cooked lobster Lobster cracker, nutcracker, or scissors (this may not be necessary if the shells are soft) Bin for discarded shells Plastic bib with a lobster on it (required attire) Lobster fork (optional) Butter for dipping (optional) STEP 1 Locate the edible portions of the lobster: STEP 2 To remove the tail, grasp it with your hand, gently twist, and pull it off the body. STEP 3 Pull the flippers off the end of the tail. Push your finger or lobster fork through the opening, working the tail meat out the other end. Discard empty shells in the bin. STEP 4 Find the digestive tract: the dark gray line running through the tail. It is safe to eat, but it doesn't taste very good. Remove it. STEP 5 Twist off the claws and knuckles. Pull the thumb off each claw and discard it; the meat should be left in one piece on the rest of the claw. STEP 6 Use the lobster cracker to crack off the top third of the claw. Pull out the meat with the lobster fork. Dip meat into butter. Consume ravenously. You may stop here or join the legions of serious lobster eaters and continue on to the body of the beast. 
STEP 7 Peel the outer shell off the body with your hands; it should come off fairly easily. STEP 8 There is a bit of soft white meat between the inner gills of the lobster. Use a pick or small fork to gently pull it out. STEP 9 Next, look for the bright red roe, which will be present in female lobsters. This is considered a delicacy by lobster lovers. STEP 10 Finally, if you feel like taking a chance, scoop out the green tomalley, aka the liver. Because the liver filters out toxins, eating it is not always recommended, but lobster aficionados claim it's the best part. You can eat it by the spoonful or spread it on crostini. **LOBSTER ALERT!** Lobster is most common in coastal areas, where it is available fresh. You are most likely to encounter a Maine lobster, the type with large claws and tail, as described here. Rock lobsters, which can be found in warmer waters, do not have large claws and typically only their tail is served. Lobsters are cooked by steaming or boiling, and occasionally they're stuffed with a mixture of bread crumbs and seafood (typically crab). **EAT CRAWFISH** These small, tasty crustaceans, sometimes called mudbugs, look like miniature lobsters. They are often boiled with a hefty pour of Cajun spices. If your Southern host dumps a pile of them in front of you, don't fret. Shelling them is easier than you'd think. **YOU WILL NEED** Crawfish Bucket for discarded shells Butter or remoulade for dipping (optional) STEP 1 With one hand, pinch the crawfish by its head. With the other, pinch the spot where the tail meets the body. STEP 2 Twist the tail and pull it away from the body. Optional step for the non-squeamish: suck the juices from the head, and then discard it. (See "Crawfish Alert!".) STEP 3 Twist off the tail flippers. Devein the crawfish (its dark, stringy digestive tract should come off easily). STEP 4 Gently use your thumbs and forefingers to work off the first two or three rings of the wider part of the shell. 
Pull out the exposed tail meat. Dip the meat into butter (if desired), and enjoy. If the crawfish is relatively big, you can eat the claw meat as well. Crack the shell with your hands or by biting it in half. Use a small fork or a piece of shell to scoop out the meat. **CRAWFISH ALERT!** Crawfish live in fresh water. You're certain to find them in the southern United States, especially Louisiana. The juices from the head could very well be a pure shot of Cajun spices if your host is a fan of spicy foods. Skip this step if you can't handle the heat. **EAT RAW OYSTERS** Yes, oysters are alive when you eat them. They will be served with their shells cut open and nestled on a bed of ice with condiments on the side. They should be wet, with a bit of liquid in the shell. Master this skill and you're sure to impress your friends. **YOU WILL NEED** Oysters on the half shell Oyster fork Condiments such as lemon wedges, horseradish, hot sauce, or shallot vinegar (optional) STEP 1 Order some oysters. They are typically served in singles, by the dozen, or by the half dozen. STEP 2 Use a fork to gently pry the meat from the shell, being careful not to spill the juices. STEP 3 For your first taste, skip the condiments. You'll want to experience the full flavor of the oyster before you decide if it needs a little boost. STEP 4 Lift the shell to your lips and tip the oyster and its juices into your mouth. STEP 5 Experts and foodies agree that you should chew the oyster a few times to release all the flavor. If you're not big on strange textures, you may prefer to swallow the whole mouthful in one go. **OYSTER ALERT!** You may have heard the old rule to "only eat oysters in months that contain an 'r,'" but in modern times, thanks to year-round farming, oysters are safe to eat anytime. However, due to oysters' natural spawning cycle, you will find the tastiest morsels in the spring. If buying oysters to shell at home, pick ones that are heavy for their size. 
**EAT ESCARGOTS** Enjoying these spiral-shelled delicacies is a sure-fire way to feel very cultured and French. **YOU WILL NEED** Escargots (cooked snails) Escargot tongs Escargot fork Butter or sauce for dipping (optional) STEP 1 With your nondominant hand, use the tongs to pick up a single snail shell. STEP 2 With your dominant hand, use the fork to pierce the meat and pull it out of the shell. STEP 3 Dip the snail in the provided butter or sauce (if desired). Eat the entire mouthful at once. Do not nibble it! Escargots will be served on a snail plate, which has little wells for the shells. The butter or sauce will be either directly on the snail or served on the side for dipping. If escargot tongs and special fork are not provided, use a napkin (never your bare hands!) to pick up the shell and a regular fork to pull out the snail. **ESCARGOT ALERT!** Escargots are available fresh year-round. You're most likely to be served ones that have been boiled and then baked or broiled. Take the opportunity to taste the history—snails have been on the menu for millennia. The ancient Romans loved them, and evidence of people eating snails has been found in prehistoric sites. **EAT BUGS** The edible insects you'll most likely encounter in the U.S. are prepared ants and crickets, but in other countries you may find yourself with a plate full of beetles or tarantulas. Go for it! It's just protein, right? **YOU WILL NEED** Prepared bugs, such as ants, crickets, beetles, or tarantulas STEP 1 Remove sharp extremities such as spiny legs, stingers, or fangs. STEP 2 If the bug is alive, bite off the head firmly and quickly. This will ensure that the bug does not struggle in your mouth. STEP 3 Eat the bug whole, and chew well. Preparation is everything. If the idea of crunching on a live cricket gives you the heebie-jeebies, seek out cooked bugs that are flavored with ingredients you enjoy. For example, look for chocolate-covered ants or seasoned earthworms. 
If you want to start with a low-stakes taste, try cricket flour, a common ingredient in high-protein diets. **BUG ALERT!** People have been eating insects for thousands of years, and they're an everyday food around the world, especially in tropical countries. Though they don't tend to get a lot of menu space in most Western restaurants, bugs are a wonderful source of protein, vitamins, and minerals. The environmental footprint of insect farming is smaller than that of large-animal agriculture. Plus, they taste good! Entomophagy ( _noun_ ): the practice of eating insects **EAT ASPARAGUS** You have two options for consuming this vegetable: using your fingers or your cutlery. Follow your host's lead. **YOU WILL NEED** Cooked asparagus spears Knife Fork Dipping sauce Plate for discarded stalk bases (optional) **The Modern Way** If you are served long spears that have not been cut prior to serving, you may wonder exactly how to consume them without looking like a cow chewing cud. STEP 1 Pierce the spear with a fork to hold it in place. STEP 2 Use a knife to gently cut the spear into bite-sized pieces. If the base of the spear is tough and does not yield to gentle pressure, set it aside. Don't eat it. STEP 3 Pierce one piece with the fork and dip or drag it in the provided sauce. Eat. **The Old Way** Victorian high-society etiquette stated that asparagus is a finger food. If you are served spears sans silverware or are engaging in time travel, here's how to proceed. STEP 1 Pick up the spears, one at a time, by the base. Use only your fingertips. STEP 2 Dip a spear into the provided sauce, and eat from top to bottom. (It goes without saying that Victorian ladies would not approve of double dipping.) STEP 3 Discard the tough base onto a separate plate. **EAT AN ARTICHOKE** Some say the artichoke is the perfect excuse to consume a bowl of butter free from criticism. Others say, "How the heck do you eat this thing?" 
**YOU WILL NEED** A fully cooked artichoke A small bowl of butter seasoned with lemon or aioli STEP 1 Check the artichoke for sharp barbs at the top of each leaf. Typically, these are removed prior to cooking, but if they are still intact, proceed with caution. STEP 2 Starting at the bottom of the artichoke bulb, gently pull off a leaf. If the artichoke has been cooked properly, the leaf should come off easily. STEP 3 Hold the top of the leaf and dip the bottom two-thirds in butter. STEP 4 Don't eat the whole leaf—it will be too tough to chew. Instead, use your teeth to gently scrape the soft flesh off the leaf. Discard the rest of the leaf. STEP 5 Repeat with the rest of the leaves. Inside the artichoke are underdeveloped barbs. Scrape these out with a spoon or fork and discard. STEP 6 Eventually, you will reach the heart of the artichoke, where the leaves are soft. Eat the heart in full, with a hefty dab of butter, of course. **SLICE AN AVOCADO** Creamy and rich, with a mild nutty taste, the humble avocado is a true thing of beauty and a nutritional powerhouse, packed with vitamin K, fiber, folate, and healthy fats. **YOU WILL NEED** Avocado Sharp knife Cutting board or countertop Spoon (optional) STEP 1 With a sharp, nonserrated knife, cut lengthwise into the avocado. You will feel the knife stop at the pit. Keeping the knife in place, hold the avocado in your palm and slowly turn it around the knife in a circular motion, cutting the avocado in half around the pit. STEP 2 Remove the knife and twist the halves in opposite directions. They will come apart easily, and the pit will be in one of the halves. Place the pit half on a cutting board or countertop. Quickly hit the center of the pit with the heel of the knife. The pit should come out easily on the knife. If it resists, press the knife into the pit as shown. Pick up the avocado half in your nonknife hand and carefully twist the fruit while holding the knife steady. 
STEP 3 Cut the flesh into slices or a cross-hatch pattern. STEP 4 Turn the skin inside out, and the flesh should fall out. If any pieces stick to the skin, use a spoon to gently scoop them out. To check if an avocado is ripe, use a fingertip or fingernail to gently pop off the stem and look inside. If the flesh is light green, it is ready to eat. The fruit will yield when you squeeze it gently. If the stem doesn't come off with light pressure, the avocado is not ripe. A few ways to ripen an avocado: 1. Set it on a sunny windowsill. It will ripen over the next few days. 2. Put it in a brown paper bag with an apple. The bag will trap the ethylene gas given off by the apple, which speeds ripening. 3. Bury it in flour inside a brown paper bag. It should ripen within 3 days; check it daily. If you plan to reuse the flour, wash and dry the avocado before burying it; after removing the avocado, sift the flour. **AVOCADO ALERT!** Some avocados—like the Fuerte variety—have smooth green skin, and others—looking at you, Hass—are darker and more roughly textured. Presumably that's how they got the now-outdated nickname "alligator pears." Want to grow your own avocado? Pierce the sides of the seed (pit) with toothpicks and suspend it over water, submerging it about ½ inch. Within a few weeks, the plant will begin to root. Once a small green plant has emerged from the top, it's ready to plant! In the United States, avocado consumption peaks over Super Bowl weekend. The Hass Avocado Board estimates that about 278 million were consumed during the week before the 2016 game. **OPEN A COCONUT** Reenact your favorite stranded-on-a-desert-island movie from the comfort of your own home. **YOU WILL NEED** Coconut Skewer Towel or dishcloth Mallet Kitchen knife Butter knife Paring knife STEP 1 Locate the "eyes" of the coconut—the three dots on the top—and poke until you find two that give to gentle pressure. Push them in with the skewer. 
STEP 2 Drain the liquid into a bowl or jar and chill. (Coconut water is a great source of electrolytes and tastes good, too.) STEP 3 Hold the coconut in one hand with the towel. Using the mallet, firmly tap it on all sides while turning, until the shell cracks. STEP 4 Use the kitchen knife to pry the two halves apart. STEP 5 Use the butter knife to slice the white flesh and separate it from the shell. Use a paring knife to peel the brown skin from the flesh. Serve. How to Open a Coconut in the Wilderness: So long as you have a sharp knife or machete, you can open a coconut. Locate the seam between the eyes and use the knife to chop right into the seam. The coconut should come apart and the meat can be scooped out as above. If you want to reserve the juice, chop off just the top of the coconut and drain before cutting further. **COCONUT ALERT!** You're likely to find fresh coconuts year-round, but especially from October through December. Choose a heavy one. You should be able to hear the liquid inside when you shake it, but the eyes shouldn't be leaking. "For a moment, or a second, the pinched expressions of the cynical, world-weary, throat-cutting, miserable bastards we've all had to become disappears, when we're confronted with something as simple as a plate of food." ANTHONY BOURDAIN IN _KITCHEN CONFIDENTIAL_ **PICK OUT RIPE FRUIT** Follow these tips to choose perfect, ready-to-eat fruit every time! Berries: Look for bold, brightly colored fruit. If white or green spots are visible, the berry was picked before it reached peak ripeness and should be avoided. Overripe berries may be mushy or brown. Stone fruits, mangoes, and avocados: Hold the fruit and gently press it with your thumb. If the fruit gives slightly, it is ripe enough to eat. If it is too hard, it is not yet ripe. If it is too soft or the skin breaks, it is too ripe. Melons: Knock your knuckles on the fruit. If the sound is low and hollow, it is ripe. 
If it is shallow and hard, the fruit is not yet ripe. Ripe melons should give only slightly when pressed; if the melon gives easily, it is overripe. Bananas and plantains: A banana with bright yellow skin is ready to be eaten; some people prefer a fruit with a few brown spots. Plantains can be eaten at every stage of the ripening process, from green to very dark brown or black. Always try to buy fruit that is in season and native to your area. Fruits that have to travel long distances are typically picked before peak ripeness and may not ripen sufficiently in the store. **EAT EDAMAME** These fresh green soybeans are usually served in their fuzzy pods, with anything from a sprinkle of salt to more complex seasonings. Get ready to use your hands. **YOU WILL NEED** Cooked edamame pods Napkin Bowl for discarded empty pods STEP 1 Roll up your sleeves, if necessary, and keep a napkin nearby to wipe your fingers after eating. STEP 2 Hold an edamame pod with your fingertips, placing your fingers on either side of the seam at the top of the pod. STEP 3 Bite down gently on the pod to open it, and use your teeth to pop the beans into your mouth. If the pods are flavored, suck off a bit of the seasoning. STEP 4 Discard the empty pod. **EDAMAME ALERT!** High in fiber and protein, edamame make a wonderful snack or appetizer. They're sold ready to eat or raw, in which case you'll want to steam them in the pod and chill them. "Kids are now eating things like edamame and sushi. I didn't know what shiitake mushrooms were when I was 10—most kids today do." EMERIL LAGASSE, QUOTED IN _FOOD & WINE_ **EAT KOHLRABI** A farmers market staple, this green root vegetable tastes similar to celery root, broccoli, and cabbage. **YOU WILL NEED** Kohlrabi root Sharp knife Kitchen shears Vegetable peeler (optional) Mandoline slicer (optional) STEP 1 Using the sharp knife, make cuts across the top and bottom of the bulb. Snip off the greens with the kitchen shears. (Save these. 
They are delicious when sautéed.) STEP 2 Using the knife or peeler, peel off the tough outer skin. Discard the skin. STEP 3 Stand the bulb on the flat bottom and slice it in half from top to bottom. Cut each piece in half so you have 4 quarters. STEP 4 Use the tip of the knife to cut out the tough core. Discard it. STEP 5 Cut the quarters into matchsticks or discs, or slice thinly with a mandoline (especially if using the kohlrabi in a salad). STEP 6 Use the pieces to make slaws and salads, or cook them as you would any root vegetable. **KOHLRABI ALERT!** If you receive a farm share basket or CSA, you're likely encountering this weird-looking member of the cabbage family on a regular basis. They're easy to grow and mature in less than 12 weeks, so many farmers take advantage of the hardy bulb, which can be eaten raw or cooked. The leaves are edible, too, and taste like turnip greens. **SLICE A MANGO** Mangoes have a wide, flat, centrally located seed that can be challenging to remove if you don't know what you're doing. **YOU WILL NEED** Mango Sharp nonserrated knife Spoon (optional) STEP 1 Locate the small bump on the mango, also called its eye. The seed is behind the bump, so you'll cut around it. Place the fruit on a cutting board with the bump facing up. STEP 2 Insert the knife blade into the mango about a half inch to one side of the bump. Cut down along the center to the bottom, slicing off one side. If the knife hits the seed, continue cutting a bit closer to the skin. Repeat on the other side of the mango. You should have three pieces: two skin-on slices and the middle containing the seed. STEP 3 Scrape out the flesh with a large spoon. Alternatively, you can cut a checkerboard pattern into the fruit and turn the skin inside out over a plate. With a knife or spoon, scrape off the flesh that remains around the seed. Don't peel the mango before cutting. Keeping the skin on the fruit until the very end cuts down on mess.
**MANGO ALERT!** Mangoes are harvested twice per year, so you're likely to find them in stores year-round. Ripe mangoes will be mostly red and yellow and give slightly when pressed. These juicy tropical fruits are good eaten fresh, made into smoothies or desserts, or used in salsas. **EAT A PAPAYA** Enjoy a taste of the tropics by dissecting this exotic fruit. **YOU WILL NEED** Papaya Cutting board Sharp nonserrated knife Metal spoon Melon baller (optional) STEP 1 Place the papaya on the cutting board and use the knife to slice off about a half inch from each end. Be gentle; a ripe papaya's flesh is very soft. STEP 2 Slice the papaya in half lengthwise. STEP 3 Use a spoon to scoop out the seeds and the strands they're attached to. STEP 4 Cut the papaya into smaller sections and carefully remove the skin with the knife. Alternatively, you can use a melon baller to scoop the flesh out of the skin. **PAPAYA ALERT!** If you don't live in a tropical area and you're not on a cruise, you're probably more likely to encounter papaya flavoring or papaya juice rather than fresh papaya. But don't let that stop you; the juice is a delicious source of vitamin C. And if you do have access to the fruit, dig in. It's juicy and sweet. "There is no love sincerer than the love of food." GEORGE BERNARD SHAW **EAT A POMEGRANATE** In Greek mythology, Persephone ate six pomegranate seeds and was stuck in Hades every winter. Perhaps if she'd known how to prepare the fruit herself, she could have avoided that whole mess. **YOU WILL NEED** Pomegranate Sharp knife Bowl of water large enough to submerge the pomegranate STEP 1 With the knife, slice about a half inch off both ends of the pomegranate. STEP 2 Make vertical cuts in the skin just deep enough to break through the skin. STEP 3 Submerge the fruit in the bowl of water and pull it apart at the cuts. STEP 4 Pull the seeds out of the white pith with your fingers.
The seeds will sink to the bottom of the bowl and the pith will float to the top. STEP 5 Discard the pith and strain the seeds from the water. Eat the seeds whole. To freeze pomegranate seeds (also called arils), first let them dry thoroughly. Then spread them on a baking sheet and place the sheet in the freezer. When the seeds are completely frozen, put them in an airtight bag, seal tightly, and return to freezer until ready to use. **POMEGRANATE ALERT!** Pomegranate pros: Its sweet-tart flavor is unparalleled. The fruit—protected by a leathery skin—is generally available in the United States between August and December. It's high in vitamin C, vitamin K, folate, and dietary fiber. Pomegranate juice is also commercially available and is an easy way to enjoy the fruit's taste without the work of cutting and cleaning. Pomegranate con: Be forewarned...pomegranate stains. **EAT A RAMBUTAN** Rambutans may look like tiny torture devices, but fear not! The spikes won't hurt you, and the fruit inside is soft and sweet. **YOU WILL NEED** A rambutan Small paring knife STEP 1 Use the paring knife to gently slice about halfway around the center of the fruit, pressing down just hard enough to break through the skin. STEP 2 Gently pinch the fruit on either side and pull the halves apart. The white globe should pop out easily. STEP 3 Use the paring knife to cut into the center of the fruit. Pry out and discard the seed. Pop the fruit into your mouth and enjoy! If removing the seed with a knife is too much work, you can eat the fruit whole and work the seed out in your mouth. In polite company, however, the knife method is preferred. **RAMBUTAN ALERT!** Rambutans are a common snack in Asia, where they are typically eaten plain. Their taste and texture are similar to lychees, but a bit more sour. The flesh is used to make jellies, salads, and ice cream. 
Rambutans don't ripen after being picked, so make sure that the fruit you select is brightly colored (typically red, but there are some yellow varieties) and not leaking juice. **EAT KUMQUATS** Kumquats are a small, sour citrus fruit cultivated in the United States, China, and Japan. They look something like miniature oranges, but you can eat the skin, which tastes sweet. **YOU WILL NEED** Fresh kumquats STEP 1 Don't peel the fruit; there's no need. This should go without saying, but do wash them. Always wash the skin of any fruit you eat or cut through. STEP 2 Pop a kumquat into your mouth, skin and all. STEP 3 Chew, swallow, and enjoy. **KUMQUAT ALERT!** Fresh kumquats—easiest to come across between November and June—will be golden orange. The fruit should be firm and the skin unblemished. You may also encounter kumquats that have been pickled, candied, sliced into a salad, used as a garnish, or cooked into marmalades or jellies. Keep an eye out for the limequat, a tart, green-yellow cross between the kumquat and the key lime. "I hate people who are not serious about meals. It is so shallow of them." OSCAR WILDE IN _THE IMPORTANCE OF BEING EARNEST_ **GO NUTS** Nuts are a portable, filling snack, but which ones require a nutcracker? Which are sold toasted? Here's what to know. **YOU WILL NEED** Assorted nuts Nutcracker Hammer Nutpick Bowl for discarded shells **Almonds** These teardrop-shaped nuts are the most widely cultivated in the world; they are typically available raw or toasted, either whole, sliced, or in pieces. To peel, blanch in boiling water, pat dry, and gently squeeze nuts out of their brown skins with your fingers. **Brazil Nuts** A staple of the mixed-nuts medley, Brazil nuts are big, sweet, and rich and can be eaten raw or roasted. You can buy them shelled or unshelled. To shell, crack with a nutcracker, but note that the shell is extremely thick, so make sure your nutcracker is up to the task! One pound unshelled is equivalent to 3¼ cups shelled. 
**Cashews** Because their shells are toxic, these C-shaped nuts are usually sold shelled. Whether raw or toasted, they are delicious eaten out of hand. They're also a popular ingredient in Indian, Chinese, Thai, and other recipes. **Chestnuts** Technically chestnuts can be eaten raw, but you're more likely to run across them roasted, candied, boiled, or incorporated into a recipe. Their bitter skins and hard shells will already have been peeled, or they will be served with the shell cracked open enough that the meat can be pulled out. **Hazelnuts** Also called filberts, these sweet round nuts are a favorite for baking. They're also found raw in nut mixes, toasted and sprinkled over vegetables, and of course in Nutella, the chocolate-hazelnut spread. **Macadamia Nuts** Macadamias have very hard shells, so they're most often sold shelled, either raw or roasted. They're large, white, sweet, and rich. Eat them out of hand or use them in baking recipes. **Peanuts** Yes, they are technically legumes. But you'll find peanuts sold with the nuts, shelled or unshelled, roasted or raw. To shell, hold one between both thumbs and first fingers. Pull apart the halves and gently squeeze each side until the nut pops out. Rub peanuts between your fingers to remove the skins. One pound unshelled peanuts is equivalent to 2⅔ cups shelled. **Pecans** Most famous for starring in rich pies, pecans are delicious in a range of sweet and savory dishes, not to mention raw as a snack. To shell, first sort and discard any that rattle when you shake them. Boil them in a large pot of water for 10 to 15 minutes. Drain and let cool. Place a nut between the handles of a nutcracker and squeeze gently until the shell cracks. Rotate it and squeeze again. Repeat until the shell comes off. Clean nuts carefully, and let them dry in a strainer for 24 hours before eating or cooking with them. One pound unshelled is equivalent to 2 cups shelled. 
**Pistachios** These delicately flavored pale-green nuts are a holiday staple. Pistachios are typically sold roasted. After roasting, the shell should be slightly open; use your fingers to pry apart the two halves and eat the meat inside. A completely closed shell is a sign that the nutmeat is immature; discard it. If the shell is open but not open wide enough for you to pry apart, take a discarded shell from another pistachio, push it into the opening, and turn it like a key. The halves should come right apart. One pound unshelled is equivalent to 3¼ to 4 cups shelled. **Walnuts** You can buy walnuts in their shells or shelled. They're delicious in baked goods or toasted and then sprinkled over salads. If you find them unshelled, get ready for a challenge—they're notoriously tough. Place them in hot water and let them soak for 24 hours to soften them. Place one, pointed end up, on a hard surface and hit the point with a hammer. Then pull the shell apart. Use a nutpick to loosen the nut from the inside of the shell and pop it out. Heads up: walnut hulls will stain your hands black. Consider wearing gloves, and cover your clothing, work surface, nearby children, and anything else you might touch while working. One pound unshelled is equivalent to 2 cups shelled. **Toasted Nuts** Toasting nuts gives them a deeper, roasted flavor. Arrange nuts in a single layer on an ungreased baking pan or rimmed baking sheet. Bake in a 350°F oven for 5 to 10 minutes, stirring occasionally and watching carefully to make sure they don't burn. When they become fragrant, use a spatula to transfer the nuts from the baking pan to a bowl or plate to cool. Even a little extra time on the hot pan could cause them to burn. Nuts in their shells are typically much less expensive than their shelled counterparts. The extra weight of the shell rarely cancels out the savings. **Frozen Nuts** Freeze nuts, shelled or unshelled, that you won't eat or use right away to keep them fresher longer.
Here's how long they will last in the freezer:

**NUT** | **FREEZER LIFE**
---|---
Almonds | 9 months
Brazil nuts | 9 months
Cashews | 9 months
Chestnuts | 9–12 months
Hazelnuts | 9 months
Macadamia nuts | 9 months
Peanuts | 6 months
Pecans | 12 months
Pistachios | 12 months
Walnuts | 12 months

**NUTS ALERT!** A few rules apply to all nuts: Buy them as fresh as possible. Unshelled nuts should have no cracks or holes and be heavy for their size. Shelled nuts should look uniform in color and size. Store nuts in an airtight container in a cool place to keep them from going rancid. **EAT DURIAN** This huge, brownish-green fruit native to Malaysia is known throughout Southeast Asia as the "king of the fruits." It is covered in spikes and has been said to smell like anything from gasoline to rotting meat. But its rich, creamy texture and sweet, buttery taste make the experience worthwhile. **YOU WILL NEED** Durian Large, sharp knife Cutting board Spoon STEP 1 Place the durian stem side down on the cutting board. STEP 2 With a large knife, make a deep cut about halfway from the top down into the fruit, creating a hole large enough to stick your fingers into. STEP 3 Using your hands, pull apart the skin to reveal the creamy yellowish fruit inside. It should break in half. If the skin is too hard to pull apart, cut it a bit more or ask a strong friend to assist you. STEP 4 Scoop out the fruit pods with a spoon and enjoy! Be careful to avoid the hard brown pits. **DURIAN ALERT!** To describe durian as huge is not an exaggeration. This fruit can weigh up to 10 pounds! Look for it in Asian markets in the United States. In Southeast Asia, durian is sold in grocery stores and by street vendors. Wherever you buy fresh durian, don't transport it on public transportation or in other enclosed spaces.
"They said that if you could hold your nose until the fruit was in your mouth a sacred joy would suffuse you from head to foot that would make you oblivious to the smell of the rind, but that if your grip slipped and you caught the smell of the rind before the fruit was in your mouth, you would faint." MARK TWAIN **CARVE A CHICKEN** "Would you like to carve?" they asked. "Sure," you said. "Can I take that back?" you thought when the chicken came out of the oven. Don't panic. Here's what to do. **YOU WILL NEED** Whole roasted chicken Large cutting board Sharp carving knife Large serving plate STEP 1 Place the chicken on the cutting board, with the breast (the large curved side) facing up. If it has just come out of the oven, let it rest for at least 10 minutes. STEP 2 Remove the legs: Hold a leg with one hand. Hold the knife in the other hand and position the blade where the thigh meets the breast. Wiggle the knife slightly to find the joint where the thigh is attached. Press the knife hard on the joint to separate the leg from the body. Repeat with the other leg. STEP 3 Separate the drumsticks and thighs: Locate the joint that connects the thigh to the drumstick. Press hard on the joint with the knife to separate them. Repeat with the other leg. Transfer the drumsticks and thighs to a serving plate. STEP 4 Remove the wings: Locate the joint connecting one wing to the body and press on it with the knife to cut off the wing; repeat on the other side. Transfer the wings to the serving plate. STEP 5 Remove the breasts: Cut down the center of the chicken, stopping when you feel resistance from the breast plate. Slide the knife horizontally down either side, cutting each breast off in one long motion. Slice meat to the desired thickness and transfer to the serving plate. Don't throw away the bones! Place them in a large stock pot with celery, carrots, onions, garlic, parsley, peppercorns, and just enough water to cover everything. 
Bring to a boil over high heat, lower heat to medium-low, and let simmer for 2 hours. Strain out the solids and refrigerate the liquid overnight. The next day, use a spoon to skim off the fat. Store the stock in the freezer until you need it. **EAT A QUAIL** Quail is perfect for anyone who has dreamed of eating a whole chicken in one sitting. **YOU WILL NEED** Whole cooked quail or other small game bird Dinner plate Sharp knife Fork Carve as you would a chicken (see here) on your dinner plate instead of a large cutting board. In short: STEP 1 Place the quail on the dinner plate and hold it steady with the fork in the center of the breast. STEP 2 Remove the legs and wings by cutting through the joints with the knife. STEP 3 Cut down the center of the breast, using the breast plate as your guide. STEP 4 Make a vertical cut down either side of the breast to remove the breast meat. **QUAIL ALERT!** Quail is typically served whole, occasionally wrapped in bacon or prosciutto. If you're served quail at a fancy dinner party, proper etiquette is to use a knife and fork to get as many morsels off the bones as possible. At a more casual event, however, eating quail with your hands is perfectly acceptable. Game hens are slightly bigger, and the same rules apply (though the knife-and-fork method is easier with these larger birds). **EAT PIGS' FEET** If you can eat a hot dog, you can eat a pig's foot. **YOU WILL NEED** Cooked pig's foot Knife and fork, or your hands Think of a pig's foot like a bigger, porkier chicken wing. If cooked properly, the meat should fall off the bone easily. There are no standard etiquette rules about how to eat pigs' feet, so assess your company and select your strategy: • Silverware strategy: Use the fork and knife to work the meat off the bone and eat it. • By-hand strategy: Pick up the entire foot and bite the meat off the bone. **PIGS' FEET ALERT!** Pigs' feet are full of tendons and bones, which lend plenty of flavor to the meat. 
Slow cooking is best for this cut, and you're most likely to encounter pigs' feet braised, boiled, pickled, or in a stew. If they are served whole, the hooves will be attached. In the United Kingdom, pigs' feet are known as trotters. "Fools make feasts, and wise men eat them." BENJAMIN FRANKLIN **EAT A PIG'S HEAD** Go whole hog and dine on this lesser-used part of the pig. **YOU WILL NEED** Whole cooked pig's head Large cutting board Damp towel (optional) Fork Sturdy, sharp boning knife STEP 1 The head will likely be served either split (cheek side up) or whole (chin on the plate). Nearly every part of the head is edible, including the tongue, snout, and eyes, so extracting the meat can begin anywhere. The biggest pieces are the cheeks and the areas around the jaw, eyes, and ears. Place the head on a cutting board (covered with a damp towel if desired). STEP 2 Cut off the ears where they connect to the head by placing the knife behind each ear and slicing it off. These can be eaten whole. STEP 3 If the pig has been cooked properly, the cheeks can be pulled from the bones with just a fork. Cut through the skin with the knife, starting just below the top of the ear, where the fleshy cheek bulges out to the side. Slice the meat off the head. STEP 4 Use the knife to slice off sections of meat from under each eye. Use a fork to remove pieces the knife can't reach. If you're feeling adventurous, stick the knife into the side of the eye socket on an angle and dig out the eyeball. STEP 5 Make a few shallow incisions around the snout and peel off the thick skin. The snout skin is edible but may be tough. Scrape off the surrounding meat with the fork. STEP 6 Turn the head upside down. Scoop the meat out of the jaw with the fork, using the knife to cut away any sections stuck to the bone. Pull the jaw away from the rest of the skull, and cut out the tongue. Slice the tongue and eat it. STEP 7 Look over the rest of the skull and work out any remaining bits of meat. 
Use the knife and fork as much as you can to extract the meat, but to get into all the nooks and crannies you will likely have to use your fingers. Keep a napkin nearby and try to minimize mess. **PIG'S HEAD ALERT!** While eating the pig's head is traditional in many cultures, it's rare to see a full pig's head on a restaurant menu (if you do, consider yourself lucky and order it immediately). A more common offering is headcheese, a jellied mixture made with bits of meat and gelatin pulled from the pig's head. **USE THE CORRECT FORK** Navigate a formal table setting with ease. **YOU WILL NEED** Table setting Table settings are usually arranged according to the courses of a meal, with the utensils to be used first located on the outermost edges. When in doubt, work from the outside in. Here are some dishes you might encounter, depending on the foods being served and how formal the meal is. Dinner plate: Front and center. Bread and butter plate: A smaller plate placed above the forks, on the top left of the place setting. Napkin: Find it on top of or to the left of the dinner plate when you arrive. Lay it across your lap after you sit down so that it's close at hand when you need it. Salad fork: This small fork has shorter tines than those on a dinner fork. It is on the far left of the dinner plate. Fish fork: If a fish course will be served, this fork, which is smaller than a dinner fork and has wider tines, will be located between the salad and dinner forks. Dinner fork: The dinner fork has longer tines. It is closest to the left side of the dinner plate. Dessert fork: The dessert fork may be distributed when the dessert course is served or set above the dinner plate, below the dessert spoon, at the start of the meal. Cocktail fork: Also called a seafood fork or oyster fork, this small, three-tined utensil is used for eating small appetizers, especially seafood.
It's placed to the right of the soup spoon, and sometimes the tines are placed in the bowl of the soup spoon with the handle angled off to the right. Soup spoon: Wider and rounder than a dinner spoon, it is on the far right of the dinner plate. Dinner spoon: The dinner spoon is narrower than a soup spoon and comes to a gentle point. It is the spoon closest to the dinner plate on the right side. Dessert spoon: If the dessert spoon isn't passed out with dessert, look for it above the dinner plate directly above the dessert fork. Butter knife: This small spreader is placed on top of the bread and butter plate, with its handle to the right. Dinner knife: This is closest to the right side of the plate, with the blade facing the plate. Fish knife: Find this smaller, wider knife with a flat blade to the right of the dinner knife and to the left of the spoons. It's convenient for lifting pieces to your fork, and it comes in handy if you're eating a whole fish (see here). Salad knife: If your salad requires a knife, this utensil will be placed between the fish knife and the spoons. Glasses: Both wineglasses and water glasses are placed above the knife and spoons on the top right of the place setting. Soup bowl: If soup is being served, this bowl will be placed on top of the dinner plate. "Traditionally, a luncheon is a lunch that takes an eon." _MISS MANNERS' GUIDE TO EXCRUCIATINGLY CORRECT BEHAVIOR_ **USE CHOPSTICKS** With a little practice, you'll never need to opt for Western utensils. **YOU WILL NEED** Pair of chopsticks Food to practice on STEP 1 Place one chopstick in the space between your thumb and forefinger. Rest it on your middle finger to keep it in place. STEP 2 Hold the second chopstick like a pencil between the tips of your forefinger and thumb. Tap the tips of the chopsticks on the table to make sure they're aligned. STEP 3 Move only the top chopstick, keeping the other stick stationary. Use the tips of the chopsticks to pick up food. 
(Note: If you're eating soup with chopsticks, it is acceptable to bring the bowl to your lips and use the chopsticks to guide bits of food from the bowl to your mouth.) **ETIQUETTE NOTE** Never cross your chopsticks or stick them upright in your food—both are symbols of death. Also, never rub chopsticks together. Only cheap sticks will splinter, and to assume yours will is in poor taste. **TASTE CHEESE** Whether served as finger foods, light snacks, or a formal course following a meal, cheeses can be surprisingly complex. **YOU WILL NEED** A variety of cheeses on a cheese plate A cheese knife or small knife Plain crackers for palate cleansing (optional) Begin with the softer cheeses that crumble easily off the block or are spreadable; these tend to be mild in flavor. Save harder and sharper cheeses (ones that hold their shape when cut) for the end of your tasting; otherwise, their flavor may overpower anything you taste after them. If the cheese has come straight from the fridge, allow it to warm to room temperature before tasting. Cold can dull cheese's flavor. Sight is the first way to learn to appreciate and differentiate cheeses. Look at the cheese: What is it cased in? Is it soft or hard? Are there cracks or crystals in the surface? Notice the color and consistency of the rind and the cheese within it, known as the pâte. Next, smell the cheese: Break off a piece with your fingers and bring it to your nose. Take a deep whiff and try to pick up on the flavors. This will help you taste the cheese fully, for smell contributes largely to the experience of tasting. Finally, put the cheese into your mouth. Chew slowly to release the full flavor, and breathe while you chew. Pay attention; flavors you notice at the beginning of the bite may be different from what you experience at the end.
Although you might be tempted to eat the cheese on a cracker, you can fully enjoy the flavor of each cheese if you eat it alone and use crackers as a palate cleanser between tastings. **Cheese Cheat Sheet** Here are some terms you might encounter at a cheese tasting. If you eat enough of the cheese, you'll know this lingo in no time.

**TYPE OF CHEESE** | **DEFINITION** | **EXAMPLE**
---|---|---
Fresh | A cheese that has a lot of moisture. These tend to be creamy and spreadable and have a short shelf life. | Ricotta, Chèvre
Soft-ripened | A very soft cheese that has been ripened from the outside in. It's likely to have an edible white rind. | Brie, Camembert
Semisoft | A smooth, moist cheese that will likely be creamy and have little or no rind. | Havarti, Monterey Jack
Firm/hard | A dense cheese with a consistency ranging from fudgelike to solid. These have less moisture than softer cheeses. | Gouda, Cheddar, Parmesan
Blue | A cheese with a blue mold growing through the inside. | Gorgonzola, Stilton

**CHEESE ALERT!** Cheese is an amazing food. It is made by letting milk, usually cow's milk or goat's milk, thicken and separate into solid and liquid parts, which are known as curds and whey. The equation to make cheese is basically dairy solids plus time. There are two major types: fresh cheese and ripened cheese. Fresh cheese is made after the milk separates into curds and whey. Ripened cheese is left to age and develop flavors. Compare a fresh cheese, such as cream cheese or ricotta, to an aged cheese with sharper, nuttier tones, such as gruyère. The longer the cheese ages, the more complex the flavor. Molds are sometimes added for further depth of flavor, such as with pungent blue or creamy brie cheese. "How can you govern a country which has two hundred and forty-six varieties of cheese?" CHARLES DE GAULLE **EAT NOODLES** You know that classic scene from _Lady and the Tramp_ with the adorable dogs and the spaghetti kiss? Nope, don't do that.
**YOU WILL NEED** A prepared noodle dish Fork Spoon STEP 1 Catch a few noodles on the tines of a fork; don't take an entire forkful. Use a spoon to guide the noodles onto the fork. STEP 2 Twirl the fork against the bowl of the spoon so the noodles wind around the fork in a nest shape. STEP 3 Bring the fork to your mouth and eat the noodles in one bite. If you're eating an Asian noodle dish, use chopsticks to bring the noodles to your mouth. It's acceptable to slurp them up, but don't be noisy. "It's the sense of what family is at the dinner table. It was the joy of knowing Mother was in the kitchen making our favorite dish. I wish more people would do this and recall the joy of life." PAUL PRUDHOMME **SIP SOUP** If nothing else, remember this: Do. Not. Slurp. **YOU WILL NEED** Crackers (optional) Bowl of soup Soup spoon STEP 1 If you're eating soup with crackers, drop them into the bowl one at a time. STEP 2 Using your spoon, scoop the soup away from you, toward the center of the table. This prevents drips from falling onto your lap. STEP 3 Gently bring the spoon to your lips. Sip from the side of the spoon, not the tip. STEP 4 Do not slurp. We can't emphasize this enough. STEP 5 Repeat steps 2–4. To get the last bit of liquid from the near-empty bowl, tilt the bowl away from you and use the spoon to scoop up what's left. STEP 6 When you're finished not slurping, place your spoon on the plate underneath the bowl or in the bowl if there isn't a plate. **TECHNIQUE VARIATIONS** ♦ To eat French onion soup, use the edge of your spoon to cut off bits of the bread and cheese, and eat each spoonful in one full bite. ♦ If the soup has large pieces of food in it, such as meatballs or matzo balls, gently cut them with a knife or the edge of your spoon before consuming them. ♦ Asian soups and noodle bowls are served with chopsticks and occasionally a wide spoon. 
If a spoon is provided, use the chopsticks to place noodles and solids in the bowl of the spoon with a bit of broth, and bring the spoon to your mouth to eat. Otherwise, bring the bowl to your lips to drink the liquid and use the chopsticks to pick out and eat bits of food. "Good manners: the noise you don't make when you're eating soup." BENNETT CERF **HOLD A WINEGLASS** How you hold a wineglass is the key to consuming reds and whites at the optimal temperatures. It's also the key to at least looking like you know what you're doing. **YOU WILL NEED** A glass of delicious wine Your hands Red Wine: Cup the bowl of the glass in your hand. White Wine: Hold the glass by the stem, pinched between your index finger and thumb. Why: Your hands are warm. If you hold the bowl of the glass, the heat from your hand will transfer to the wine and warm it up. Typically, white wine is served cold and red wine is served closer to room temperature, hence the different approaches for holding each type. To clean a wineglass, cup the bowl of the glass in your hand. Use a stemware brush with foam bristles and a little dish detergent to clean the inside and rim of the glass. Rinse carefully with hot water. Turn the glass upside down and let it air-dry. If you have crystal glasses and they start to develop a film, dilute a little white vinegar in water and soak them in it. **TECHNIQUE VARIATIONS** What do you do when someone hands you a glass of Champagne? What about other glasses? ♦ Brandy snifter: Hold the glass by the bowl. Your hand's warmth brings out some of the spirit's aromas. ♦ Champagne flute: Hold the glass by the stem, not the bowl. This allows you to enjoy the bubbly chilled and to see the bubbles moving through the liquid unobstructed by your fingers (or fingerprints). ♦ Cocktail glass, also known as a martini glass: Hold the glass by the stem. Balance the bottom with your other hand if necessary. 
Try to find smaller glasses; if your drink is too big, it will get warm before you finish it. ♦ Stemless glass: If your host serves white wine in a stemless glass, you must consume it as quickly as possible before it warms to room temperature. "Once...in the wilds of Afghanistan, I lost my corkscrew, and we were forced to live on nothing but food and water for days." W. C. FIELDS, _MY LITTLE CHICKADEE_ **TASTE WINE** Resist the urge to down your glass. Instead, show off your wine know-how with these tips. **YOU WILL NEED** Wine Tasting glass Glass of water (optional) Spitting bucket (optional) Expert sommeliers use these steps to distinguish wine down to the year it was bottled. You can use them to remember what types of wine you like best. Note: These tips are for wine tastings at which you'll be trying multiple bottles, but the same rules apply when you're tasting a bottle you've purchased at a restaurant. The person who ordered the wine follows these steps to confirm that it's up to par before sharing it with the table. See: Look at the wine in the glass. What color is it? Deep red? Light pink? Is it transparent? Swirl: Gently swirl the wine and notice how the liquid slides along the inside of the glass. Full-bodied wines, which often have bold flavors, are more viscous and will slide more slowly around the glass. Lighter wines are thinner and will slide quickly around the glass; they have a less powerful but still pleasant taste. Swirling also aerates the wine, releasing the aromatics and the full flavor. Smell: Now that those aromas are swirling, stick your nose into the glass and take a deep breath. Try to pick out the different aromas. Fruit, oak, caramel, vanilla...try to discern the flavors before you taste the wine. Sip: Resist the urge to gulp and instead take a small sip. Move the wine around in your mouth and let it settle on your tongue. See if you detect any of the flavors you smelled in the glass. 
Savor: Now you want to mix some air with the wine in your mouth, so that the flavors become even more apparent. Take a small breath through your lips and move the wine around as you do so—like a little kid blowing bubbles, but backward. This may look funny. Spit (optional): If you will be sampling more than one wine, you may want to spit the wine into the bucket provided after tasting, to ensure that you keep a clear head throughout the experience. **WATCH OUT FOR CORKED WINE!** Does the wine smell off-putting, like wet dog or mold? Send it back immediately! Wine that smells this way is known as "corked wine," which means the cork has been infected with a fungus that taints the wine's flavor. This is why servers often show you the cork when opening a bottle. Don't worry: you'll likely get a refund, whether you bought the bottle at a restaurant or in a store. If water is served, rinse your wineglass and take a small sip before the next tasting so that the different wines' flavors don't mix in the glass or your mouth. If water isn't provided, take a tiny sip of your next wine before fully tasting it to eliminate flavors lingering in your mouth from the previous pour. "Wine is the most healthful and most hygienic of beverages." LOUIS PASTEUR **MAKE A TOAST** You want people to pull out their phones and record your speech because it's memorable, not because you had too many cocktails beforehand... **YOU WILL NEED** A room full of people Glass of champagne, wine, or another drink Butter knife STEP 1 Decide what you're going to say in advance. Keep it short. Write it down and practice it. You may feel silly rehearsing, but doing so helps you know how long the speech will be and helps you memorize it. Keep your audience in mind while you write. For example, avoid off-color jokes in polite company or in groups of people whom you don't know well. STEP 2 Know when you're on. If someone else is hosting, that person makes the first toast.
If you're the first toaster, wait for a lull in activity. You don't want to compete with a full dance floor or dinner service. After most of the room has nearly finished eating is a good time. STEP 3 Stand at your seat and gently tap your glass with a butter knife to get the attention of the room. STEP 4 Give your toast. If you're not a seasoned public speaker, the spotlight can be intimidating. Follow these tips for a smooth and memorable speech: • Position your feet a little farther than shoulder width apart. You might feel weird standing like this, but you won't look weird, and your stance will keep you from swaying or shifting your weight back and forth. In other words, your audience won't get seasick watching you. • Bend your knees slightly. Locking them increases your chances of fainting. • Hold your drink in one hand. Hold your notes or a microphone in the other, or rest your free hand gently on the table or at your side. • Speak loudly, confidently, and more slowly than you think you should. If you're nervous, you're almost certainly talking faster than you realize. • Make eye contact. Look around the room and connect with people, not your feet or your notes. You memorized this, remember? • Don't apologize or make a disclaimer like "I'm not much of a speaker, but..." Just let people enjoy your toast. When they tell you later that they loved it, simply say thank you. • Still have stage fright? Take a deep breath and think about why you're giving the toast. Whether it's for a cause you believe in or your best friend getting married, you agreed to this for a good reason. Let it motivate you. If all else fails, look at the clock. In an hour your speech will be long over and you'll be enjoying that champagne and dancing the night away. STEP 5 End with your glass raised high, and then take a sip. The event and your role in it will dictate the length of your speech. 
As the best man or maid of honor at a wedding, for example, you can speak for a bit longer than if you're simply toasting a relative's nuptials. **TOASTING ALERT!** Some people consider it bad luck to toast with water. Still, toasting with water is more polite than sitting it out. If you're the recipient of the toast (congrats!), stand and acknowledge it gracefully, but don't drink to yourself. **DRINK TEA** You don't have to point your pinkie, we promise. **YOU WILL NEED** Tea Teacup and saucer Teaspoon Milk (optional) Sugar (optional) STEP 1 At a formal tea gathering, the host will pour your tea. If you are served hot water and tea bags or loose leaves in a tea strainer, place the tea in the water and steep for 3 to 5 minutes. STEP 2 Remove tea bag or strainer. Stir in milk and sugar (if desired). Do not let the teaspoon hit the edges of the teacup; this is considered impolite. When you're done stirring, place the teaspoon on the saucer. STEP 3 Hold your teacup with one hand, curling your index finger around the handle and placing your thumb on top of the handle. Rest your middle finger under the bottom curve of the handle, and curl your ring finger and pinkie into your palm. STEP 4 Look down into your teacup as you sip so you don't spill tea on yourself. If the teacups are antiques, you may want to pour the milk in first so the sudden heat of the tea does not crack the cup. **TEA ALERT!** Each culture approaches tea in a different way. These rules apply to a traditional English-style tea party or afternoon tea—a light meal between lunch and dinner where tea is served. If you're unsure what customs are appropriate to a tea situation, look to your host, who will typically lead the service and offer etiquette clues. Also pay attention to other guests, who may be more familiar with the style of tea service. **USE BREAD AS A UTENSIL** In some cultures, bread is the only utensil at the table. 
In Ethiopian cuisine, for example, meals are consumed with pieces of _injera_, a spongy flatbread. Here's the lowdown on replacing silverware with your favorite carbs. **YOU WILL NEED** Bread Soup or sauce-based meal STEP 1 Tear off a bite-sized piece of bread—do not use the whole roll or slice, since that would require double dipping, an etiquette no-no—and hold one end with your fingertips. STEP 2 Scoop the sauce into the bread by pushing it away from you, to ensure that drips fall back onto the plate and not onto you. Take care not to get sauce on your hands. STEP 3 Eat the piece of bread in one bite. **BREAD ALERT!** Even in cultures where bread is considered an appetizer or side dish, it can also be a useful and tasty way to sop up sauce. At traditional Italian meals, for example, bread is served alongside, rather than before, the main course for just that purpose. As always, if you're unsure of the rules, do what your host is doing. "The art of bread making can become a consuming hobby, and no matter how often and how many kinds of bread one has made, there always seems to be something new to learn." JULIA CHILD **EAT SUSHI** In Japan, eating sushi is a practice with an entire set of customs and traditions. What follows are the most important tips for eating in Western sushi joints. **YOU WILL NEED** Sushi Chopsticks Soy sauce (optional) STEP 1 Before ordering, familiarize yourself with the different types of sushi. • Sashimi: Though the word literally means "sliced meat" in Japanese, at most sushi restaurants it refers to slices of raw (or occasionally boiled) fish. These slices are not served with sushi rice. • Nigiri: This refers to a hand-pressed rectangle of sushi rice with a topping like cooked or raw seafood or omelet. • Maki: Fish, vegetables, rice, and other ingredients that are rolled with nori (seaweed) or soy paper and sliced are known as maki. STEP 2 Place your order. Don't ask if the fish is fresh; it's insulting.
Instead, if you don't know what to select, ask the chef for suggestions. You may end up with something that hasn't even made it to the menu yet. STEP 3 Use chopsticks to pick up sashimi pieces. Nigiri and maki may be eaten with chopsticks or your fingers. STEP 4 If you'd like to dip a piece in the provided soy or ponzu sauce, do so gingerly so as not to saturate the sushi. Wasabi should not be mixed into the soy sauce; place a little bit directly on the food. STEP 5 Place the entire piece in your mouth and chew slowly, savoring the fish. If a piece of sushi is too large to eat in one bite, do not put it back on your plate. Take a bite, hold it with your chopsticks or fingers until you finish chewing, and then immediately eat the rest. STEP 6 Eat the provided pickled ginger between sushi servings to cleanse your palate. STEP 7 If you're sharing sushi with others, turn your chopsticks around before picking up a piece from someone else's plate so that you don't touch the food with the end that has been in your mouth. Place it on your plate and then turn the chopsticks back to the original position before eating. STEP 8 If you're sitting at the bar, don't hand money to the chef. Sushi chefs take cleanliness very seriously and will not touch the money. **TIP** In most of the United States, the minimum wage for people who earn tips is between $2 and $3 per hour, and much of that goes to taxes before your server ever sees it. A general guideline is to tip your server 15% to 20% of the total bill. **At a Restaurant** Less than 15%: Not recommended unless service was unacceptable. 15%: Adequate service. 18%: Good service. The server was acceptable and your party is pleased. 20%: Great service. The server gave your party everything you needed in a timely manner. You have no complaints. More than 20%: Excellent service. The server went above and beyond the required duties. At a Bar Typically you should tip a bartender 20% of the total tab, or more if service was exceptional. 
However, don't tip below $1 per beer or wine, or $2 per cocktail, and leave more if the drinks are complicated to make or the bar was busy. At a Coffee Shop Counter service employees are paid a full hourly wage, but if you spot a tip jar at the register, leaving a small tip is welcome—especially if you have a complicated order. A dollar per beverage or item is typical. Delivery and Take-Out These workers also receive hourly wages, but a 10% to 15% tip is appropriate if your order has been prepared correctly and delivered to you hot. It's much faster to calculate a 20% tip in your head than it is to pull out your phone. Look at the total amount of your bill. Move the decimal one place to the left. You remember this from school—that's 10%. Now multiply by 2 to get your tip. **TIPPING ALERT!** Tipping is commonplace in America, but in other places a tip of 10% or less, if anything, is typical. In fact, in many Asian countries tipping is considered rude except for service at a luxury hotel or restaurant. Before you travel to a new country, research the local customs so you don't commit a tipping faux pas. **DECIDE WHO PAYS THE BILL** Learn how to divvy up checks the modern way. **YOU WILL NEED** A dinner date Bill at the end of the meal Money Gone are the days when the gentleman was expected to pay for the lady. Nowadays, the general rule is if you initiated the plans, you will be paying, or at least paying your own share. The rule applies both to one-on-one dates as well as groups, such as if you invite your partner's parents to join the two of you for a meal. If you are the invitee, however, you should always offer to pay for your share, unless it has been explicitly stated that you are being treated. Some people will insist on paying for the entire bill. Thank them for their generosity, and offer to pick up the tab next time. If the costs of your party's meals are fairly similar, it's often easier to split the bill evenly. 
However, if there are a variety of costs and items among members of the party, and if only a few folks ordered drinks or dessert, you may decide to divide the bill individually. If you split the bill and want to pay with credit cards, find out if the restaurant will let you pay with multiple cards. Some have restrictions on the number of ways you can pay. If you're in a business meeting, typically the person who set up the meeting will pay. If you're the client, you'll likely be taken care of. **ORDER FROM THE MENU** You have questions. Fortunately, your server has answers. **YOU WILL NEED** Menu Your server Curiosity In any restaurant worth its salt, the servers will be knowledgeable about everything on the menu, including ingredients and preparation. A good rule of thumb is to order what the restaurant is known for or whatever the chef's recommendations are. If you still can't choose, simply ask, "What would you suggest?" or "What do you order here?" A good server will be happy to note popular items and personal favorites. Some may also be able to request adjustments to suit your needs. Just keep requests within reason and remember how helpful the server was come tipping time. Most restaurants have a website or social media presence, so it's easy to look up the menu before you visit. It may not include daily specials but will give you an idea of what to expect, including price ranges and offerings for special diets. Also worth noting: It may or may not be true, but supposedly diners look at the top right-hand corner of the menu first. You're likely to find the most expensive dishes there. These may be surrounded by items that bring the restaurant a high profit margin. If you're on a budget, read the entire menu before choosing your meal. Before your first visit, check online reviews, but take them with a grain of salt. Often people will post about either excellent or terrible experiences, but nothing in between. Trying to eat healthfully? 
The menu likely won't include nutritional information, but here are some guidelines to keep in mind: • Decide what to order before becoming very hungry or having a drink. • Start with a clear soup or a salad. Ask for dressing on the side. • Skip the bread. Save room for vegetables! • Try to swap out fries for a side of vegetables or salad. It's not always possible, but it's worth asking. • Avoid fried foods. Opt for baked seafood or leaner cuts of meat. • Restaurant portions are often much larger than what you'd serve yourself at home. Before you take a bite, ask for a to-go box and wrap up one-third to one-half of your meal. Then set it aside and take it home for lunch or dinner the next day. Feeling adventurous? Politely ask the server if you may order off menu. If so, request the chef's favorite meal to make. This is especially fun in sushi restaurants. Beware of the "specials" at chain restaurants. If the server offers off-menu items that are not listed in an insert and does not mention they're doing a menu preview, it's likely that the kitchen is trying to get rid of ingredients that are about to go bad. **DATE ALERT!** Occasionally you may find yourself having a meal with someone you don't know well and wanting to make a good impression. Maybe it's a job interview, a date, a potential client, or your favorite celebrity. Don't panic. • First and foremost, order what you'd like to eat. Women don't have to be restricted to salad on a first date, and men do not need to assert their manliness by ordering a steak. And nobody needs to act like a pretentious foodie to be impressive. • Avoid messy foods. You probably spent an hour fretting over the perfect outfit; don't risk dribbling down your front! • Keep alcohol consumption to a minimum. You might be nervous, but try to take a few deep breaths instead of a few quick shots. • If you'll be doing a lot of close talking, avoid garlic, onions, or other foods with a strong smell. 
• Keep a few breath mints handy, just in case. • If you think you'll be nervous, arrive a little early and try to relax. Sit in your car, outside on a bench, or even at the restaurant's bar. (No need to start drinking, of course. A tonic and lime is a beautiful thing.) On a practical level, you won't arrive late or rushed due to unexpected traffic or a wrong turn. And giving yourself a few minutes to gather your thoughts can make you feel much more comfortable. • Try to get to know the other person, who may be just as nervous as you. Fortunately, food is a great conversation starter. With any luck, you'll be enjoying the conversation before long. Have fun! **EXCUSE YOURSELF FROM THE TABLE** Ghosting is not appropriate in polite company. **YOU WILL NEED** Dinner companions Napkin Subtlety STEP 1 To step away from the table temporarily, wait for a pause in conversation; don't interrupt someone. Say "excuse me" or "I'll be right back." Don't explain that you are going to the restroom, to take a call, or whatever else you plan to do. STEP 2 Loosely fold your napkin and leave it to the left of your plate or on your seat—don't crumple it. At a restaurant, you may be given a fresh napkin upon your return. STEP 3 The host placing her napkin on the table is the cue that dinner is over. You may now do the same. Thank your host and say goodbye to the rest of the guests. If you're leaving dinner for good, do so when coffee is served, after dessert has ended. "The shared meal elevates eating from a mechanical process of fueling the body to a ritual of family and community, from the mere animal biology to an act of culture." MICHAEL POLLAN **EAT SOMETHING SPICY** Can't handle the heat? These tips will save you some pain. **YOU WILL NEED** Spicy food Glass of ice water Starchy side dish such as bread, crackers, or potatoes Glass of milk STEP 1 Before taking your first bite, drink a full mouthful of ice water. 
The ice water will numb your mouth, making it easier to consume something potentially painful. (Note: Don't drink ice water _after_ eating spicy food; it will spread the heat around your mouth and only worsen the sting.) STEP 2 Start slow. If possible, first try something slightly above your current tolerance, such as medium salsa rather than mild. Don't attempt the triple-habanero variety right off the bat, especially if you're not used to spicy foods. STEP 3 Take slow, deliberate bites and chew well. Eating quickly will give you less control over the burn. STEP 4 Alternate your tastes with bites of a starchy side dish, which will help absorb the heat. STEP 5 After you finish eating, drink a glass of milk. Dairy products contain a protein that helps soothe your mouth and temper the spice. **SPICE ALERT!** Multiple theories explain why people seek out and brag about eating foods that cause pain. Perhaps we like chilies because they offer health benefits, like lower blood pressure. Or maybe, as professor of psychology Dr. Paul Rozin suggests, it's possible that we tend to enjoy some "benign masochism" with our hot sauce. **Scoville Ratings** If you have trouble with spicy food, avoid anything that advertises a Scoville rating. Named after its creator, Wilbur Scoville, the scale measures the concentration of capsaicin, the substance that makes peppers spicy. A bell pepper is rated 0 Scoville heat units (SHU), and cayenne pepper generally earns between 30,000 and 50,000 SHU. Due to genetic cross-breeding and newly discovered species, the World's Hottest Pepper changes every few years. As of 2016, the reigning champion is the Carolina Reaper, which clocks in at 2.2 million SHU.

**SHU OF COMMON PEPPERS**

| Pepper | SHU |
| --- | --- |
| Habanero | 350,000 |
| Thai | 100,000 |
| Cayenne | 50,000 |
| Serrano | 23,000 |
| Chipotle | 8,000 |
| Jalapeño | 8,000 |
| Poblano | 1,500 |
| Banana | 500 |
| Bell Pepper | 0 |

**EAT SOMETHING MESSY** Enjoying a sloppy joe doesn't mean you have to be a sloppy you.
**YOU WILL NEED** Messy food Fork Knife Napkin Moist towelette STEP 1 Have your napkin at the ready to wipe up spills, whether they're on your face or down your front. Messy foods can be unpredictable. STEP 2 Use a knife and fork whenever possible. They might not make a difference with overstuffed sandwiches or greasy chicken wings, but you can use them to pull the meat off a BBQ rib without getting sauce under your fingernails. STEP 3 If the food must be consumed with your hands, use both of them to hold it. Lean over your plate so that drips land there instead of in your lap. STEP 4 Take small bites. This gives you better control over what you're eating. STEP 5 Use a fork, not your hands, to eat bits of food that have fallen onto your plate. STEP 6 After eating, use the moist towelette to clean your hands and face, if needed. "Barbecue may not be the road to world peace, but it's a start." ANTHONY BOURDAIN **PACE YOURSELF WHEN DRINKING** Enjoy a night out, but stay in control. Bonus: some of these tips may help prevent a hangover, too! • Never drink liquor on an empty stomach. Having a heavy, fatty meal beforehand will slow your body's absorption of alcohol. • Limit consumption to one drink an hour, at most. • Drink with friends, and don't stay glued to your phone the whole time. The more you talk and socialize, the less time you'll have to gulp down your drink. • Stick to beer and wine, which contain less alcohol than mixed drinks and are unaffected by the generous pours of a heavy-handed bartender. • Alternate cups of water with cups of booze to stay hydrated, help pace your liquor consumption, and prevent a hangover. • Just say no to shots. They're a surefire way to let a buzz get away from you. • Pick a drink and stick to it.
Although the old saying "Beer before liquor, never been sicker; liquor before beer, you're in the clear" isn't exactly truthful, you'll have a harder time keeping track of your intake when drinking a variety of cocktails or spirits of different alcohol contents. • Want to make a glass of wine last longer? Pour in a bit of lime soda or sparkling water. Instant spritzer! • Obvious Warning: Don't drink to excess, and never drive drunk, even if you feel okay to drive. Alcohol can take a long time to metabolize and you may think you're more sober than you are. There are plenty of phone apps that will get you a safe ride home. Use them. **MOCKTAIL ALERT!** You might have tons of reasons not to drink alcohol, but you can still enjoy a well-crafted refreshment. Ask the bartender for a cocktail without alcohol, or order a standard like tonic and lime. "A vine bears three grapes, the first of pleasure, the second of drunkenness, and the third of repentance." ANACHARSIS **STAY VEGETARIAN AT A BARBECUE** Because sometimes you must socialize with carnivores. • Bring a dish to share. Doing so will ensure a vegetarian option, and—bonus—you can show the meat eaters that veggie-centric dishes are delicious. • BYO burger—after clearing it with the host—to cook for yourself. With food allergies and special diets becoming more common, it's no longer out of the ordinary to see someone eating their own meal at a social gathering. • Load your plate with side dishes. Even when the main event is a slab of meat, you're likely to find a variety of sides that are vegetarian. Cole slaw, potato salad, and crudités are all safe bets. Be careful about green beans that may be cooked with bacon or saucy sides that may include an animal stock. • Are the hosts close friends of yours? Call them ahead of time and ask what they plan to serve. They may be willing to pick up veggie burgers or another meatless item for you.
They might even modify some of their side dishes—by omitting bacon, using vegetable stock instead of chicken stock, etc. • If all else fails, or if you're truly worried you won't have any options, eat a full meal before the barbecue. **VEGETARIAN ALERT!** Are you hosting a vegetarian or vegan friend at your barbecue? While you cook, keep track of dishes that contain animal products. Read ingredients lists carefully, and watch out for these surprises: ♦ Red food dye: Natural red #4, cochineal, carminic acid, or carmine, which is found in pasta sauce and other red foods, is made from beetles. ♦ Orange juice: Fortified varieties often contain fish oil or lanolin, which is made from wool. Look for OJ without added omega-3s, or squeeze your own. ♦ White sugar: Refined white sugar is processed with cattle bones. ♦ Gelatin: Because it is made from animal products like bones and hooves, gelatin is a no-no for vegans. Unfortunately, that rules out the Jell-O salad, a lot of candies, some yogurts, and marshmallows. ♦ Anchovies: These tiny fish turn up in some surprising places, like Caesar salad dressing and Worcestershire sauce. ♦ Lard: Cake mixes, pie crusts, refried beans, and tortillas may contain lard. ♦ Beer and wine: Some drinks are clarified with fish bladders, and others are vegetarian. ♦ Cheese: Rennet, which is made from sheep stomachs, is an ingredient in some cheeses. "Nothing will benefit human health and increase the chances for survival of life on earth as much as the evolution to a vegetarian diet." ALBERT EINSTEIN **STICK TO YOUR DIET AT A PARTY** It's week 3 of your diet, arguably the roughest stretch to get through, and suddenly you find yourself face-to-face with a festive buffet at a wedding, baby shower, or happy hour. Here's how to stay on the wagon. • Eat before the party so that you're full of diet-friendly food and less likely to fill your plate with calorie-laden hors d'oeuvres. 
• If you're hungry, choose wisely—party staples like fresh fruit, veggie crudités, cheeses, and baked chips and salsa won't kill your diet. Try to make the same choices you would make at home: grilled chicken instead of a bacon-covered burger, vegetables instead of potato chips, fruit instead of cupcakes, and the like. • Grab a smaller plate, and don't go back for seconds. Setting limits is a helpful way to ensure you stick to your diet. Studies show that smaller plates make it easier to stop when you're full. • Alcohol is the downfall of many a diet, so try to avoid it. If you can't help but have an alcoholic beverage, opt for clear liquors with no-sugar mixers, like a vodka soda with a squeeze of lime. Choose a mixed drink that's easy to dilute if you want the same amount of liquor to last longer, and alternate alcoholic drinks with glasses of water. • Don't feel the need to explain. Peer pressure can be intense when people notice you're not taking advantage of the spread. But nobody should have a say in your food choices except you (and in some cases, your doctor). A diet is a personal decision. If someone inquires why you're not partaking in the triple-stuffed, double-stacked potato skins, feel free to simply smile and shrug. • If the hosts are close friends of yours, casually mention ahead of time that you're cutting down on sugar (or whatever your personal dieting tactic is). They'll more than likely be supportive—after all, they're your friends. Maybe that means they'll prepare more vegetable sides or serve fruit with dessert. Maybe it just means they won't shove the cupcakes in your face. Either way, why not have them on your side? • Be positive. You may be tempted to think—or worse, say—things like "Do you know how many calories are in that dip?!" or "Too bad I can't have those cookies." Instead, focus on how delicious the fresh fruit salad is or how amazing your host's homegrown tomatoes taste. 
You'll be happier, and your host will get some well-deserved praise. • Have a good time. What else is going on at this party other than food? Play darts. Look through photos. Get in the pool. Ask for a tour of the backyard. Play with the kids. Get into a conversation with someone you don't know—not about food. You won't have time to dwell on the fried cheese appetizers. • Keep the big picture in mind. You started your diet before you got to this party, and you're a smart decision maker. Remind yourself of your goal when you're tempted to blow it. "My doctor told me to stop having intimate dinners for four. Unless there are three other people." ORSON WELLES **FIX BAD BREATH** The alternative is to breathe through your nose and avoid speaking. • Take preventative measures. The easiest way to fix bad breath is to prevent it from happening. Don't smoke. Brush your teeth twice a day (don't forget to scrape your tongue!) and gargle with mouthwash. Avoid foods like onion and garlic and drinks like coffee, whose odor will linger on your breath. If a food has a sharp smell, it's likely that smell will occupy your mouth, too. • Chew sugarless gum. The sugar in mints and regular gum will feed the bacteria in your mouth, making bad breath even worse. • Drink water. Saliva can help prevent bad breath, and water will encourage saliva production. • Rinse stinky food particles out of your mouth with plain water or soda water. • Is there a sprig of parsley on your plate? Mint leaves in your mojito? Chew on green herbs like these to counter bad breath with a strong but pleasant scent. Just don't grab chives; their oniony odor won't do you any favors. • Nibble some nuts or another hard, abrasive snack. The texture will help loosen food particles that may be causing an odor in your mouth. Not sure if you have bad breath? Sneak off to the bathroom and floss. If the floss smells bad, your breath probably does as well. **HANDLE BEANS** Beans, beans, the musical fruit...you know the rest. 
Here's how to avoid potentially embarrassing bodily reactions to beans or other foods that disagree with you. • Identify and avoid your troublesome foods. Some people have issues with beans, fibrous vegetables, or dairy products. • If the offending food is one you enjoy, you may want to try to build a tolerance to it. Gradually add it to your diet, starting with small portions. • Stash anti-gas medicine in your wallet or purse if you can't or don't want to avoid the food, and take some before you eat it. • Ginger has been said to help calm the digestive tract. Grab a ginger beer or ginger candies if you start to feel symptoms of gas or indigestion. Mint, honey, and cinnamon may also help. • Drink water, which will flush out your digestive tract and help alleviate the gas. **FIBER ALERT!** How did beans get their reputation? They contain fiber and starches that can't be digested entirely by the enzymes in your gut. So bacteria in the intestines help break them down, which leads to flatulence. On the plus side, the fiber in beans is good for digestion, and beans are thought to help keep blood sugar steady, lower cholesterol, and make you feel full longer. **TASTE SOMETHING YOU HATE** Someone lovingly made you a dish...try not to squirm. • Ask for a small portion. Less on your plate means less you have to choke down. • Take a deep breath. Unless you're allergic, this food isn't going to kill you. • Take small bites. If your gag reflex is activated, you won't want a mouthful of food. • Eat with other food or drink. Follow a bite of the offending dish with something more pleasant. If the texture is your concern, take a bite of something with a different texture to mask it. Take a drink to rinse away the flavor. • Hold your nose, if possible. Smell is a large part of taste, so by blocking your nose you'll diminish the flavor. • Tell the truth. We're all adults, right? Everyone has likes and dislikes. 
Feel free to gently inform your host that you don't really care for broccoli. "Sharing food with another human being is an intimate act that should not be indulged in lightly." M. F. K. FISHER **RECOVER FROM A TONGUE BURN** We've all been there: you're super hungry, or you misjudge how hot the soup is, and suddenly your impatience results in a tongue burn. The good news—the sensation is temporary. Try these cooling tricks. • Assess the damage. If you've severely burnt your tongue— for example, if it blisters, turns black, or is numb or seriously painful—seek medical attention. You don't want it to become infected. When in doubt, go to a doctor. If the burn is mild, read on. • Rinse your mouth with cool water, which will temper the heat and wash away food particles that may be clinging to your tongue. • Drink some milk. It will coat your tongue and relieve the burning sensation. • Take an anti-inflammatory painkiller like ibuprofen, which will lessen the pain and inflammation. • Gargle with salt water multiple times a day to promote healing. • Swish with milk of magnesia to promote cell regrowth. Topical anesthetics can help numb the pain while your tongue heals. • During healing, avoid eating acidic or abrasive foods, which can irritate the sores. Be careful with microwaves! Sometimes they heat food unevenly, so your next bite could be hotter than you expect. **BURN ALERT!** Burning mouth syndrome, also called glossodynia, is the sensation of burning in your mouth that has no apparent reason. It's a chronic pain unrelated to the hot meal you're having. **SEND FOOD BACK** Sometimes something just goes wrong. When that something is your meal, don't be afraid to send it back. Be polite, understand that mistakes happen, and you'll be enjoying your meal in no time. STEP 1 Take preventative measures: Do you have a food allergy? Or does the thought of an errant mushroom make you gag? Tell your server before ordering. 
If your allergy is life threatening, call ahead and make sure that the restaurant can accommodate you. Keep in mind that although some kitchens can make changes to dishes, there may be a variety of reasons why your filet mignon simply cannot be divorced from its mushroom sauce. STEP 2 Inspect your plate and take a maximum of three bites. If something is wrong with the dish (overcooked, undercooked, not as ordered) it is acceptable to request a redo. If the issue is a matter of taste (you took a chance on the salmon even though you're not a huge fan of fish), the server may or may not offer you something else. STEP 3 Gently request the attention of your server and explain the error. Keep in mind that your server may not be at fault. STEP 4 At this point, a number of things may happen: • Your server will take the dish back to the kitchen to be fixed. • Your server will have the kitchen remake the dish entirely. • You will be offered a substitute dish. • If your meal can't be fixed sufficiently and an adequate substitute can't be made, you will not be charged for it. STEP 5 Once your new plate arrives, inspect it and take a bite. Has the issue been fixed? If so, great. Enjoy! If not, politely (but this time, perhaps more firmly) inform your server. A manager may stop by the table to discuss the issue with you. STEP 6 Remember to be polite and understanding. And unless you know definitively that the server is wholly at fault, don't let the tip suffer. If you are not charged for your meal, remember to tip the server based on what the total bill would have been had the complimentary portion been included. **STOP YOURSELF FROM CHOKING** Hopefully you'll never need this essential information. STEP 1 Be prepared. Take a class on CPR and the Heimlich maneuver. Watch videos and study techniques. Whether you're a lifeguard or a data analyst, such lifesaving training is useful— you never know what might happen or where. 
If you spend a lot of time with kids, consider getting additional child-focused training, which differs from techniques appropriate for adults. STEP 2 Try to stay calm. It's easy to panic when your airway is suddenly blocked, but you'll be better able to save yourself if you keep a level head. STEP 3 Try to cough. If you can, it's a sign that air is still getting through. You may be able to dislodge the object this way. STEP 4 Get the attention of someone nearby, if possible. If you can't speak or make noise, use whatever is at your disposal to draw attention to yourself. Crossing your hands over your throat is the universal sign for "I'm choking." STEP 5 If no one is around, perform the Heimlich maneuver on yourself. According to the Mayo Clinic: "Call 911 or your local emergency number immediately. Then perform abdominal thrusts to dislodge the item. Place a fist slightly above your navel. Grasp your fist with the other hand and bend over a hard surface—a countertop or chair will do. Shove your fist inward and upward." STEP 6 Continue until the object is dislodged or medical help arrives. **CPR ALERT!** Look for lifesaving classes at your local hospital, community college, or American Red Cross center (redcross.org). Or search online by location for an American Heart Association–approved course (heart.org). Such classes usually last only a couple of hours and are inexpensive. Whatever price you pay will be worth it should you ever need to use these skills. "Be prepared." OFFICIAL SCOUT MOTTO # RESOURCES These books, websites, and apps will help you further research all your adulting needs. HOW TO BEHAVE _The Art of Manliness: Classic Skills and Manners for the Modern Man_ Brett McKay and Kate McKay _Good Manners for Nice People Who Sometimes Say F*ck_ Amy Alkon _How to Be a Perfect Stranger: The Essential Religious Etiquette Handbook_ Stuart M. Matlins and Arthur J. 
Magida _I Like You: Hospitality under the Influence_ Amy Sedaris _Miss Manners' Guide to Excruciatingly Correct Behavior_ Judith Martin and Gloria Kamen _Modern Manners: Tools to Take You to the Top_ Dorothea Johnson _Modern Romance_ Aziz Ansari _Stuff Every Graduate Should Know_ Alyssa Favreau _The Worst Case Scenario Survival Handbook_ David Borgenicht and Joshua Piven HOW TO EAT _The America's Test Kitchen Cooking School Cookbook_ America's Test Kitchen _How to Cook Everything: The Basics_ Mark Bittman BeerAdvocate.com Beer profiles, ratings, and reviews. Eater.com National and city-specific guides to restaurants and food news. Fooducate App (iOS, Android) Scan a barcode at the grocery store for nutritional value and recommended alternatives. OpenTable.com Browse restaurants and make reservations. Humane Eating Project App (iOS, Android) Find vegetarian-friendly restaurants and restaurants with humane meat options. Mixology App (iOS, Android) Enter your ingredients and discover thousands of cocktail recipes. Vivino App (iOS, Android, Windows) Snap a shot of a wine bottle and get reviews and prices. WineGlass App (iOS) Snap a shot of the wine list and get ratings and pairing info. # ABOUT THE AUTHOR Ashley Blom is a Millennial foodie from Hatfield, Massachusetts. She has been writing ever since she could hold a pen or sit at a computer. She currently resides in Austin, Texas, with her fiancé, a rescue dog, and two cats. She enjoys eating (and cooking!) her way through her adopted hometown and writes about it at ForkingUp.com. # ABOUT THE ILLUSTRATOR Lucy Engelman is an illustrator in the most traditional sense of the word. Her intricate line work often nods to her fascination with the natural world. She has a BFA from the University of Michigan and has worked with _Bon Appétit_ , West Elm, Patagonia, and _Runner's World_. Her work has been recognized by the Society for Publication Designers and has been featured on a variety of platforms. 
She is also staff illustrator for _Collective Quarterly_. Lucy puts pen to paper surrounded by her growing collection of house plants in Pittsburgh, Pennsylvania. # INDEX A allergy, food almonds, 1.1, 1.2 animal products in foods arils artichoke Asian soups and noodle dishes, 2.1, 2.2 asparagus avocado, 1.1, 1.2 B bad breath banana pepper bananas beans, _See also_ edamame bell pepper benign masochism theory of spicy food berries bill, how to pay blue cheese, 2.1, 2.2 bowl, soup Brandy snifter Brazil nuts, 1.1, 1.2, 1.3 bread as utensil bread and butter plate brie cheese bugs burned tongue burning mouth syndrome butter knife C camembert cheese Carolina Reaper pepper cashews, 1.1, 1.2 cayenne pepper Champagne flute check, _see_ bill, how to pay cheddar cheese cheese chestnuts, 1.1, 1.2 chèvre cheese chicken, 1.1; homemade stock, 1.2 chipotle pepper choking chopsticks, 2.1, 2.2, 2.3 citrus cocktail fork, 2.1 cocktail glass coconut corked wine CPR crawfish cream cheese cricket flour D date dessert fork dessert spoon diet dinner fork dinner knife dinner plate dinner spoon drinking durian E edamame entomophagy escargots excuse yourself from the table F filberts fish, whole fish fork fish knife flatulence food allergy food you hate, how to eat forks, types of French onion soup fruit, how to pick ripe G game hens glasses, 2.1, 2.2. 
_See also_ wineglass glossodynia gorgonzola cheese gouda cheese gruyère cheese H habanero pepper havarti cheese hazelnuts, 1.1, 1.2 headcheese Heimlich maneuver I _injera_ insects, _see_ bugs J jalapeño pepper K "king of the fruits," knives, types of kohlrabi kumquats L legumes, _see_ beans, edamame, peanuts limequat lobster M macadamia nuts, 1.1, 1.2 maki mango, 1.1, 1.2 martini glass matzo ball soup meatball soup menu, how to order from messy food mocktails Monterey Jack cheese mudbugs N napkin nigiri noodles, 2.1, 2.2 Nutella nuts O order from the menu oysters P papaya Parmesan cheese paying the bill peanuts, 1.1, 1.2 pecans, 1.1, 1.2 pigs' feet pig's head pistachios, 1.1, 1.2 place setting, _see_ table setting plantains plates, types of poblano pepper, 3.1 pomegranate poultry, _see_ chicken, quail Q quail R rambutan raw oysters ricotta cheese S salad fork salad knife sashimi Scoville rating send back food in a restaurant serrano pepper shellfish, _see_ crawfish, lobster, oysters snails soup soup bowl soup spoon soybeans "specials" on the menu spicy food split a restaurant bill spoons, types of stemless wineglass stemware, _see_ glasses, wineglass stilton cheese stone fruit sushi T table setting tea Thai chili pepper tip toast (as speech) tongue burn trotters V vegetarian W walnuts, 1.1, 1.2 whole fish wine, 2.1, 2.2 wineglass, _See also_ glasses World's Hottest Pepper # ACKNOWLEDGMENTS Thanks to my agent, Sally Ekus, who saw the power in a single tweet. Thanks for helping me make my childhood dream of seeing my name in print come true! Thanks, Mom, for encouraging my writing and always supporting me, even when most parents would balk at their kid going to school for creative writing. And thank you to Jane Yolen for giving me an award in writing way back in first grade. You made my little kid ego soar and I haven't looked back since. Thank you to my awesome editor, Tiffany Hill, for guiding me through my first book and generally being awesome. #MillennialPower! 
At Quirk, our strikingly unconventional titles include best-selling fiction, award-winning craft books and cookbooks, irreverent reference guides, wall-enhancing poster books, and plenty of titles in a category all their own (you try to explain _The Resurrectionist_ ). But we're not just book creators, we're also a community of book lovers. Join us for literary pub crawl suggestions, Worst Case Wednesday survival advice, love letters to libraries, plus announcements about contests, giveaways, book release events, and author signings. We're seekers of all thing awesome, and since you are awesome, isn't it time we talked? Stuff Every Cook Should Know Summer Cocktails Breakfast for Dinner How to Behave Scope our site: Quirkbooks.com Be our friend: Facebook.com/​quirkbooks Try our tweets: Twitter.com/​quirkbooks ## Contents 1. Cover 2. Title Page 3. Copyright 4. Dedication 5. Contents 6. Introduction 7. Tricky Techniques 1. How to Eat a Whole Fish 2. How to Eat a Lobster 3. How to Eat Crawfish 4. How to Eat Raw Oysters 5. How to Eat Escargots 6. How to Eat Bugs 7. How to Eat Asparagus 8. How to Eat an Artichoke 9. How to Slice an Avocado 10. How to Open a Coconut 11. How to Pick Out Ripe Fruit 12. How to Eat Edamame 13. How to Eat Kohlrabi 14. How to Slice a Mango 15. How to Eat a Papaya 16. How to Eat a Pomegranate 17. How to Eat a Rambutan 18. How to Eat Kumquats 19. How to Go Nuts 20. How to Eat Durian 21. How to Carve a Chicken 22. How to Eat a Quail 23. How to Eat Pigs' Feet 24. How to Eat a Pig's Head 8. Etiquette Enigmas 1. How to Use the Correct Fork 2. How to Use Chopsticks 3. How to Taste Cheese 4. How to Eat Noodles 5. How to Sip Soup 6. How to Hold a Wineglass 7. How to Taste Wine 8. How to Make a Toast 9. How to Drink Tea 10. How to Use Bread as a Utensil 11. How to Eat Sushi 12. How to Tip 13. How to Decide Who Pays the Bill 14. How to Order from the Menu 15. How to Excuse Yourself from the Table 9. Foodie Fixes 1. How to Eat Something Spicy 2. 
How to Eat Something Messy 3. How to Pace Yourself When Drinking 4. How to Stay Vegetarian at a Barbecue 5. How to Stick to Your Diet at a Party 6. How to Fix Bad Breath 7. How to Handle Beans 8. How to Taste Something You Hate 9. How to Recover from a Tongue Burn 10. How to Send Food Back 11. How to Stop Yourself from Choking 10. Resources 11. About the Author 12. About the Illustrator 13. Index 14. Acknowledgments
import { AsmParser } from '../lib/asm-parser'; import { VcAsmParser } from '../lib/asm-parser-vc'; import { AsmRegex } from '../lib/asmregex'; describe('ASM CL parser', () => { it('should work for error documents', () => { const parser = new VcAsmParser(); const result = parser.process('<Compilation failed>', { directives: true, }); result.asm.should.deep.equal([{ source: null, text: '<Compilation failed>', }]); }); }); describe('ASM regex base class', () => { it('should leave unfiltered lines alone', () => { const line = ' this is a line'; AsmRegex.filterAsmLine(line, {}).should.equal(line); }); it('should use up internal whitespace when asked', () => { AsmRegex.filterAsmLine(' this is a line', {trim: true}).should.equal(' this is a line'); AsmRegex.filterAsmLine('this is a line', {trim: true}).should.equal('this is a line'); }); it('should keep whitespace in strings', () => { AsmRegex.filterAsmLine('equs "this string"', {trim: true}).should.equal('equs "this string"'); AsmRegex.filterAsmLine(' equs "this string"', {trim: true}).should.equal(' equs "this string"'); AsmRegex.filterAsmLine('equs "this \\" string \\""', {trim: true}).should.equal('equs "this \\" string \\""'); }); it('should not get upset by mismatched strings', () => { AsmRegex.filterAsmLine("a \"string 'yeah", {trim: true}).should.equal("a \"string 'yeah"); }); }); describe('ASM parser base class', () => { let parser; const filters = {}; before(() => { parser = new AsmParser(); }); it('should recognize source column numbers', () => { const asm = ` .text .intel_syntax noprefix .file "tmp.cpp" .file 1 "/usr/include" "stdlib.h" .file 2 "/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits" "std_abs.h" .file 3 "/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9" "cstdlib" .file 4 "/usr/lib/llvm-11/lib/clang/11.0.0/include" "stddef.h" .file 5 "/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9" "stdlib.h" .globl main # -- Begin function main .p2align 4, 0x90 
.type main,@function main: # @main .Lfunc_begin0: .file 6 "/home/necto/proj/compiler-explorer" "tmp.cpp" .loc 6 3 0 # tmp.cpp:3:0 .cfi_startproc # %bb.0: # %entry push rbp .cfi_def_cfa_offset 16 .cfi_offset rbp, -16 mov1 rbp, rsp .cfi_def_cfa_register rbp sub rsp, 48 mov2 dword ptr [rbp - 4], 0 .Ltmp0: .loc 6 4 20 prologue_end # tmp.cpp:4:20 mov3 edi, 16 call malloc .loc 6 4 9 is_stmt 0 # tmp.cpp:4:9 mov4 qword ptr [rbp - 16], rax `; const output = parser.process(asm, filters); const push_line = output.asm.find(line => line.text.trim().startsWith('push')); const mov1_line = output.asm.find(line => line.text.trim().startsWith('mov1')); const call_line = output.asm.find(line => line.text.trim().startsWith('call')); const mov4_line = output.asm.find(line => line.text.trim().startsWith('mov4')); push_line.source.should.not.have.ownProperty('column'); mov1_line.source.should.not.have.ownProperty('column'); call_line.source.column.should.equal(20); mov4_line.source.column.should.equal(9); }); it('should parse line numbers when a column is not specified', () => { const asm = ` .section .text .LNDBG_TX: # mark_description "Intel(R) C Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 12.1 Build 20120410"; .file "iccKTGaIssTdIn_" .text ..TXTST0: # -- Begin main # mark_begin; .align 16,0x90 .globl main main: ..B1.1: # Preds ..B1.0 ..___tag_value_main.2: # ..LN0: .file 1 "-" .loc 1 2 is_stmt 1 pushq %rbp #2.12 `; const output = parser.process(asm, filters); const pushq_line = output.asm.find(line => line.text.trim().startsWith('pushq')); pushq_line.source.should.not.have.ownProperty('column'); pushq_line.source.line.should.equal(2); }); });
{ "redpajama_set_name": "RedPajamaGithub" }
259
import React, { Component } from 'react'; import { Row, Col } from 'antd'; import LayoutWrapper from '../../../components/utility/layoutWrapper'; import PageHeader from '../../../components/utility/pageHeader'; import ContentHolder from '../../../components/utility/contentHolder'; import Box from '../../../components/utility/box'; import basicStyle from '../../../config/basicStyle'; import BasicMap from './maps/basic'; // import GeoLocationMap from './maps/geoLoacations'; export default class GoogleMap extends Component { render() { const { rowStyle, colStyle, gutter } = basicStyle; return ( <LayoutWrapper> <PageHeader>Google Map</PageHeader> <Row style={rowStyle} gutter={gutter} justify="start"> <Col md={24} sm={24} xs={24} style={colStyle}> <Box> <ContentHolder> <BasicMap /> </ContentHolder> </Box> </Col> </Row> </LayoutWrapper> ); } }
{ "redpajama_set_name": "RedPajamaGithub" }
4,084
Q: SSRS PBI report refresh error: [Could not load file or assembly 'Microsoft.OData.Core.NetFX35.V7`] I have built a report in PBI Desktop and published to PBI Report Server. The datasource connection type is ODATA. The report will not update on the PBI Report Server. What should I do to make the scheduled refresh work? * *The report runs and refreshes just fine on my computer. *The report was successfully deployed to PBI Report Server and I can interact with the data & visuals just fine on the PBIRS. *The data source connection test says "successful". *According to MSDOCS: Power BI report data sources in Power BI Report Server, the ODATA datasource it supported for "Scheduled Refreshed"; it is not a "Live" connection (like SQL datasource). The problem is when I schedule the report to refresh in Report Manager the refresh fails and the data is not up-to-date with the datasource. The error message: Data source error: Login failed for data source 'Unknown'. The error details button displays this message: SessionID: d363d9d2-d9d3-4838-8241-d2cadcc59c73 [0] -1055784932: File or Folder: Could not load file or assembly 'Microsoft.OData.Core.NetFX35.V7, Version=7.4.0.11102, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.. The exception was raised by the IDbCommand interface. I have attempted to execute the refresh with both a Report refresh and Shared Schedule refresh and I have also tried refreshing the connection via schedule ("Schedule"-- 2:00PM) and manual ("Refresh Now"-- 2:07PM). I conducted a search on my desktop computer C: drive for the file seen in the error message (Microsoft.OData.Core.NetFX35.V7) and I did find the DLL file on my desktop pc in multiple different folder directories (screenshot below). The DLL file is found on my desktop pc :C Drive in multiple locations. The DLL file is not found on the server directory. I don't know much of anything about DLL and Assemblies. 
- C:\Program Files\Microsoft Power BI Desktop\bin\ ... 1 files - C:\Program Files\Microsoft Power BI Desktop RS\bin\ ... 1 files - C:\Program Files (x86)\Microsoft Power BI Desktop RS\bin\ ... 1 files - C:\Windows\assembly\NativeImages_v4.0.30319_32\ ... 2 files - C:\Windows\assembly\NativeImages_v4.0.30319_64\ ... 2 files UPDATE 4/30/2019, DLL installation I tried installing the DLL manually. I then tried installing the full PBI Desktop program. (Update 1) DLL Manual Install I tried installing the DLL manually, but I ran into some command-line errors (regsvr32, regasm). References: Install DLL File in Windows / MsDocs> .NET Framework> Regasm.exe (Update 2) PBI Desktop program install I then tried installing the full PBI Desktop program. The DLL file is now found on the server directories. But the refresh error message persists... : "...Could not load file or assembly 'Microsoft.OData.Core.NetFX35.V7, Version=7.4.0.11102, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies.... The system cannot find the file specified.". Server directories: C:\Program Files\Microsoft Power BI Desktop RS\bin\ ... 1 files C:\Program Files (x86)\Microsoft Power BI Desktop RS\bin\ ... 1 files C:\Program Data\Microsoft\NetFramework\BreadcrumbStore\ ... 2 files ... Microsoft.OData.Core.NetFX35.V7, Culture=neutral, PublicKeyToken=31bf3856ad364e35 ... Microsoft.OData.Core.NetFX35.V7, Version=7.4.0.11102, Culture=neutral, PublicKeyToken=31bf3856ad364e35 C:\Users\All Users\Microsoft\NetFramework\BreadcrumbStore\ ... 1 files ... Microsoft.OData.Core.NetFX35.V7, Culture=neutral, PublicKeyToken=31bf3856ad364e35 ... Microsoft.OData.Core.NetFX35.V7, Version=7.4.0.11102, Culture=neutral, PublicKeyToken=31bf3856ad364e35 C:\Windows\assembly\NativeImages_v4.0.30319_32\ ... 2 files C:\Windows\assembly\NativeImages_v4.0.30319_64\ ... 
2 files REPORT SERVER (screenshots: data source, schedule refresh) DATA SOURCE Here is the configuration of the datasource properties at the report manager URL. http://gcod050/reports/manage/catalogitem/datasources/01-PBI/SSRS%20Datasets DATASOURCE REFRESH ERROR
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,264
{"url":"http:\/\/www.gradesaver.com\/textbooks\/math\/algebra\/algebra-a-combined-approach-4th-edition\/chapter-10-review-page-748\/115","text":"## Algebra: A Combined Approach (4th Edition)\n\n$\\dfrac{xy}{\\sqrt[3]{10xyz}}$\n$\\bf{\\text{Solution Outline:}}$ To rationalize the numerator of the given expression, $\\sqrt[3]{\\dfrac{xy^2}{10z}} ,$ multiply by an expression equal to $1$ which will make the numerator a perfect power of the index. Then use the laws of radicals to simplify the resulting expression. $\\bf{\\text{Solution Details:}}$ Multiplying the given expression by an expression equal to $1$ which will make the numerator a perfect power of the index results to \\begin{array}{l}\\require{cancel} \\sqrt[3]{\\dfrac{xy^2}{10z}\\cdot\\dfrac{xy}{xy}} \\\\\\\\= \\sqrt[3]{\\dfrac{x^3y^3}{10xyz}} .\\end{array} Using the Quotient Rule of radicals which is given by $\\sqrt[n]{\\dfrac{x}{y}}=\\dfrac{\\sqrt[n]{x}}{\\sqrt[n]{y}}{},$ the expression above is equivalent to \\begin{array}{l}\\require{cancel} \\dfrac{\\sqrt[3]{x^3y^3}}{\\sqrt[3]{10xyz}} \\\\\\\\= \\dfrac{\\sqrt[3]{(xy)^3}}{\\sqrt[3]{10xyz}} \\\\\\\\= \\dfrac{xy}{\\sqrt[3]{10xyz}} .\\end{array}","date":"2018-04-19 10:30:38","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.996330976486206, \"perplexity\": 670.3670035637974}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, 
\"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-17\/segments\/1524125936833.6\/warc\/CC-MAIN-20180419091546-20180419111546-00180.warc.gz\"}"}
null
null
Law abiding firearm owners and Shooters have began to organise themselves and will no longer be part of the silent majority. There are even non gun owners or shooters that are becoming aware of the lies and factual errors being hammered home by the anti gun advocates, these people are opening their eyes to the fact that shooters are becoming the victims of having their rights restricted or removed altogether. These non shooters are also becoming aware that rights are a precious thing and that they could easily become the next victims of the pseudo intellectuals who vaguely chase fashionable causes.
{ "redpajama_set_name": "RedPajamaC4" }
4,983
Practice Github This file will be for experimenting with github commands.
{ "redpajama_set_name": "RedPajamaGithub" }
9,489
Q: Adding splashscreen to iphone app in AppDelegate I'm running xcode-4.2 and the project is based for ios5 using storyboards. I've created a single view application , using the template provided by Apple. In the storyboard I the removed the viewcontroller created for me and added a UITabBarController. Next I added a new class MyTabBarController which is a subclass of UITabBarController. Now I want to show a splashscreen before the TabBar appears. So I can do some loading and calculation in the background. I thought AppDelegate.m would be a good place for this. Since that's the place where my rootview get's loaded not ? Or should a show the splashscreen from the rootviewcontroller which is MyTabBarController in my case ? So I created a xib file. I'm surprised you can add .xib files to ios5 storyboard projects. The xib file is called SplashView.xib it has a single view with an image on it. Code in AppDelegate - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { _splashScreen = [[UIViewController alloc] initWithNibName:@"SplashView" bundle:nil]; //_splashScreen is defined as:@property (strong, nonatomic) UIViewController *splashScreen; [_window.rootViewController presentModalViewController:_splashScreen animated:NO]; [self performSelector:@selector(hideSplash) withObject:nil afterDelay:2]; return YES; } The problem is nothing happens. Even if I change the value from 2 to 200. The application starts up as if there is no splashscreen. As you might have noticed I'm still struggling with the design of objective-c and iphone application. I hope a decent answer to my question will bring some clarity to the subject. Thanks in advance! A: Splash screens are built into iOS apps. All you need to do is create a file called Default.png and Default@2x.png (for retina displays) and it will work as a splash screen for when the app launches. You can also set what these images will be in your apps info.plist. 
A: I've dealt with a few clients who wanted to use an animated splash. Though I'm totally against this, following Apple's HIG, those clients just don't understand... Anyway, since - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions; has to return boolean, it's very important not to halt it for anything. Also, since launch time is measured by iOS, if it's taking too long, the app will be terminated by iOS! For this reason, I often use - (void)applicationDidBecomeActive:(UIApplication *)application; with some kind of flag to indicate if it happened at launch or at returning from background mode. Or, you should use a NSTimer or - (void)performSelector:(SEL)aSelector withObject:(id)anArgument afterDelay:(NSTimeInterval)delay; so, didFinishLaunchingWithOptions can return without being blocked for processing your animated splash. This delayed performSelector should be implemented not only for hiding action (like the way you intended it), but also for starting the animation. 
A: If you are using storyboard, you can just add the splash UIImageView to your window.rootViewController.view like this: - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { UIImage *splashImage = [UIImage autoAdjustImageNamed:@"Default.png"]; UIImageView *splashImageView = [[UIImageView alloc] initWithImage:splashImage]; [self.window.rootViewController.view addSubview:splashImageView]; [self.window.rootViewController.view bringSubviewToFront:splashImageView]; [UIView animateWithDuration:1.5f delay:2.0f options:UIViewAnimationOptionCurveEaseInOut animations:^{ splashImageView.alpha = .0f; CGFloat x = -60.0f; CGFloat y = -120.0f; splashImageView.frame = CGRectMake(x, y, splashImageView.frame.size.width-2*x, splashImageView.frame.size.height-2*y); } completion:^(BOOL finished){ if (finished) { [splashImageView removeFromSuperview]; } }]; return YES; } I think the reason why directly just add the UIImageView to window is because iOS will bring the rootViewController.view to front when the default splash will hide. And this will overlap the animation. This means the animation does happen but it's behind the rootViewController. A: I just add an identical image to the launch image to my first view controller and then fade it (or whatever animation you require) - this avoids pausing the app load in the AppDelegate. You need to ensure that the image has the same size and origin as your launch image e.g. to set the image to display on my first view controller which is a tableViewController: UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.tableView.bounds]; imageView.image = [UIImage imageNamed:@"[imagename]"]; [self.tableView addSubview:imageView]; [self.tableView bringSubviewToFront:imageView]; // Fade the image [self fadeView:imageView]; -(void)fadeView:(UIView*)viewToFade { [UIView animateWithDuration:FADE_DURATION animations:^ { viewToFade.alpha = 0.0; } ]; }
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,541
\section{Introduction} Variability at all observed wavelengths is a distinctive characteristic of objects hosting an accreting black hole, in particular AGN. The study of the timing properties of AGN can provide important information about the structure, the physics and the dynamics of the radiating source. In the last years much progress has been made to characterize the AGN variability in the 2--20 keV X-ray range, in particular since the launch of \textit{RXTE} in 1995. On the other hand, the global variability properties of AGN at hardest X-rays above 20 keV have been poorly studied in the past. Some studies of the spectral variability of AGN during different flux states were carried out with \textit{CGRO}/OSSE and \textit{BeppoSAX}/PDS (e.g. \cite{petrucci00}), while a long term monitoring at hard X-rays was not possible due to the relatively small field of view of these instruments and the observation strategy of these satellites. The only exception was the BATSE instrument on board of \textit{CGRO}, that detected a handful of AGN using the Earth occultation technique \cite{harmon04,soldi08}. \textit{INTEGRAL} IBIS/ISGRI \cite{winkler03,lebrun03} and \textit{Swift}/BAT \cite{gehrels04} offer now a unique opportunity to observe a large number of AGN on different time scales in the hard X-ray band above 20 keV. The first instrument is more suited for investigating the spectral variations in bright and well monitored AGN, as already shown, for example, for NGC~4151 \cite{lubinski10} and MCG$-$05$-$23$-$016 \cite{beckmann08}. On the other hand, the BAT provides a database to study the timing properties of a large AGN sample. The results obtained from the first 9 months of \textit{Swift}/BAT observations have been reported for 44 AGN detected at more than $10\sigma$ over this time period \cite{beckmann07}. 
The blazars had been found to show the highest variability, and 30\% of the Seyfert galaxies in the sample showed variability of $> 10\%$ with respect to their average flux. A general trend of increasing variability with absorption had been detected that could be ascribed to two other relations, i.e. the anti-correlation between absorption and luminosity (observed by different X-ray surveys \cite{ueda03,beckmann09}) and between variability and luminosity, observed also in the BAT sample. About 6 years of public \textit{INTEGRAL} data and 5 years of \textit{Swift} data are currently available, providing the possibility to extend this kind of study to a much larger AGN sample and with better statistics. In Fig.~\ref{lc_IC4329A} an example of BAT and ISGRI hard X-ray light curves for the bright Seyfert 1 galaxy IC~4329A is shown. \begin{figure} \includegraphics[width=.45\textwidth,angle=90]{fig1.ps} \caption{15--50 keV \textit{Swift}/BAT (black circles) and 20--60 keV \textit{INTEGRAL}/ISGRI (blue triangles) light curves of the Seyfert 1 galaxy IC~4329A. For better clarity, the light curves are rebinned to 10 and 3 days, respectively.} \label{lc_IC4329A} \end{figure} \section{\textit{Swift}/BAT long-term monitoring} Since November 2004, the instrument BAT \cite{barthelmy05} on board the \textit{Swift} satellite has been observing the sky in the 15--195 keV energy range and, thanks to its large field of view of $\sim 1.4 \rm \, sr$ and to \textit{Swift}'s observing strategy, it has been monitoring a large number of hard X-ray sources \cite{tueller09}. Daily light curves in the 15--50 keV band are provided on the \textit{Swift}/BAT hard X-ray transient monitoring pages\footnote{\emph{http://swift.gsfc.nasa.gov/docs/swift/results/transients}} for 281 AGN. We have downloaded the BAT light curves for these AGN, covering the time from the beginning of the mission up to August 23, 2009.
The light curves have been rebinned to 3-day bins and we selected the 36 AGN for which the average of the detection significance of each point of the light curve is larger than 3$\sigma$. This limit has been chosen after checking that the light curves of random positions in the sky (i.e. background) provided by the BAT team have an average significance of $\leq 2.4 \sigma$ when rebinned to 3 days. Among these 36 AGN, there are 18 blazars, 3 Seyfert 1 galaxies, 3 intermediate-type Seyferts, 8 Seyfert 2 galaxies and 4 radio galaxies. We use this sample to study the hard X-ray variability of AGN, applying some basic variability estimators, such as the normalized excess variance $\sigma^2_{\rm NXS}$ \cite{vaughan03,ponti04}. The 18 Seyfert and radio galaxies show an average (in logarithmic space) amplitude of variations $\langle \sigma^2_{\rm NXS} \rangle = 0.09$, whereas the 18 blazars show on average larger variations, $\langle \sigma^2_{\rm NXS} \rangle = 0.25$ (left panel in Fig.~\ref{histo}). We find a 1\% KS-test probability that the two samples are drawn from the same distribution. The larger variability of blazars at hard X-rays is expected due to the nature of their emission in this energy range, usually believed to be produced in relativistic jets (e.g. \cite{pian08}). According to our preliminary results, there does not seem to be any correlation between the normalized excess variance and either the black hole mass or the Eddington ratio for the AGN in this sample, whereas we confirm the trend observed in Seyfert and radio galaxies of less luminous objects being more variable (right panel in Fig.~\ref{histo}), already reported by Beckmann et al. (2007). For this relation, we find a probability of chance correlation of 2.5\%, when the Circinus galaxy is excluded. This peculiar object is a Compton-thick Seyfert 2 galaxy with high reflection through a thick torus \cite{yang09}, which could explain the low variability observed. 
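As an illustration, the normalized excess variance used above can be computed from a binned light curve as follows (a minimal sketch; the toy fluxes and errors below are illustrative, not BAT data):

```python
import numpy as np

def normalized_excess_variance(flux, flux_err):
    """Normalized excess variance (cf. Vaughan et al. 2003): the
    light-curve variance in excess of the measurement noise,
    normalized by the squared mean flux."""
    flux = np.asarray(flux, dtype=float)
    flux_err = np.asarray(flux_err, dtype=float)
    s2 = flux.var(ddof=1)            # sample variance of the light curve
    mse = np.mean(flux_err ** 2)     # mean square measurement error
    return (s2 - mse) / flux.mean() ** 2

# Toy light curve: a constant source plus Gaussian noise consistent
# with the quoted errors should give sigma^2_NXS close to zero.
rng = np.random.default_rng(0)
lc = 1.0 + rng.normal(0.0, 0.1, size=1000)
print(normalized_excess_variance(lc, np.full(1000, 0.1)))
```

A genuinely variable source yields a positive value, while a non-variable one scatters around zero (and can come out slightly negative due to noise).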
We also confirm the correlation already found for other X-ray selected AGN samples \cite{beckmann09,bianchi09} between the luminosity (averaged over the 5 years of \textit{Swift} observations) and the black hole mass for the 22 AGN with measured masses (Fig.~\ref{MBH_Lx}). We obtain a probability of chance correlation of 0.03\% (1\% when only the 17 Seyfert galaxies are considered) and a relation of the form $L_X \propto M_{\rm BH}^{(0.75 \pm 0.18)}$ ($L_X \propto M_{\rm BH}^{(0.5 \pm 0.2)}$ for Seyfert galaxies only). A proportionality with index lower than 1, as found here, indicates that the more massive objects have either a lower X-ray efficiency or a lower accretion rate than less massive objects. The latter (i.e. sub-Eddington luminosities for the most luminous, high-mass quasars) has been recently observed for a large sample of quasars up to redshift $z = 2$ from the Sloan Digital Sky Survey \cite{steinhardt09}. \begin{figure} \includegraphics[width=.35\textwidth,angle=90]{fig2.ps} \includegraphics[width=.35\textwidth,angle=90]{fig3.ps} \caption{\textit{Left:} Histograms of the 15--50 keV normalized excess variance computed for 36 AGN observed by \textit{Swift}/BAT (Seyfert and radio galaxies in black, blazars in blue). \textit{Right:} Normalized excess variance versus 15--50 keV luminosity for the Seyfert and radio galaxies in our BAT AGN sample. } \label{histo} \end{figure} \begin{figure}[!b] \begin{center} \includegraphics[width=.4\textwidth,angle=90]{fig4.ps} \caption{Correlation between the 15--50 keV luminosity and the central black hole mass for our BAT AGN sample. Seyfert and radio galaxies are indicated with black circles, blazars with blue squares.} \label{MBH_Lx} \end{center} \end{figure} \section{Spectral variability: the example of IC~4329A} For the brightest AGN, accurate spectral variability studies in the hard X-ray band can be carried out with \textit{INTEGRAL} data (e.g. \cite{lubinski10,beckmann08}). 
As an example, the spectral variability of the bright Seyfert 1 galaxy IC~4329A has been investigated in the 20--100 keV band. We have analysed all the \textit{INTEGRAL} IBIS/ISGRI data available at the time of our study, which includes data for an effective exposure time of about 620~ks collected from the beginning of the mission up to February 12, 2008. The data have been analysed using version 8 of the Offline Scientific Analysis (OSA) software. The data can be separated into 5 different periods, spanning from 50 to 200 ks of effective exposure time each, accumulated within 2 to 40 days. The spectra for each period have been extracted using the standard OSA spectral extraction, and 3\% systematic uncertainties have been added. In the left panel of Figure~\ref{spe}, the uncorrelated variations of the 18--60 and 60--100 keV fluxes suggest a spectral change of the source during the 6 years of \textit{INTEGRAL} observations. Indeed, a high-energy cut-off at $E_{\rm c} = 39 \pm 17 \rm \, keV$, a signature of thermal Comptonisation processes, can be found in the August 2003 ISGRI spectrum, whereas no sign of curvature was detectable in July 2003 (right panel of Fig.~\ref{spe}). During the same period (August 2003) the broad component of the 6.4~keV Fe K line has been found to show a hint of variability on hour time scales \cite{demarco09}. This hard X-ray spectral change could indicate an increase of the electron plasma temperature, moving the cut-off towards higher energies, where it was not detectable by ISGRI in the July 2003 observation. Adding data below 20~keV will allow us to fit the combined spectrum with a more physical Comptonisation model, and therefore to more precisely determine the temperature of the Comptonising plasma and constrain the presence and strength of the Compton reflection hump, already observed for this source in the past by \textit{BeppoSAX} \cite{dadina07}. 
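For illustration, the two spectral shapes compared in the fits can be sketched as follows (the photon index and normalization are placeholder values; only the cut-off energy $E_{\rm c} = 39$ keV is taken from the fit):

```python
import numpy as np

# Photon-flux models for the two ISGRI epochs: a simple power law
# (July 2003) and an exponentially cut-off power law (August 2003).
# gamma = 1.9 and K = 1.0 are illustrative values, not fitted parameters.
def powerlaw(E, K=1.0, gamma=1.9):
    return K * E ** (-gamma)

def cutoff_powerlaw(E, K=1.0, gamma=1.9, E_c=39.0):
    return powerlaw(E, K, gamma) * np.exp(-E / E_c)

# The cut-off suppresses the hard end of the 20--100 keV band far more
# strongly than the soft end, which is why it shows up as curvature:
E = np.array([20.0, 60.0, 100.0])
ratio = cutoff_powerlaw(E) / powerlaw(E)
print(ratio)
```

The ratio equals $e^{-E/E_{\rm c}}$, dropping from $\sim 0.6$ at 20 keV to well below 0.1 at 100 keV, so a cut-off at 39 keV is readily distinguishable from a pure power law across the ISGRI band.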
\begin{figure} \hspace{-0.8cm} \includegraphics[width=.43\textwidth,angle=90]{fig5.ps} \includegraphics[width=.53\textwidth,angle=90]{fig6.ps} \caption{\textit{Left:} \textit{INTEGRAL}/ISGRI light curve of IC~4329A in three energy bands. Each data point corresponds to an exposure time from 50 to 200 ks accumulated within 2 to 40 days. \textit{Right:} ISGRI spectra of IC~4329A extracted from the \textit{INTEGRAL} observations of July 2003 (top) and August 2003 (bottom). The spectra are fitted with a simple power law (top) and with an exponentially cut-off power law, with cut-off at $39 \pm 17 \rm \, keV$ (bottom). } \label{spe} \end{figure} \vspace{-0.3cm} \section{Conclusions} We have presented preliminary results of a study investigating the variability properties of AGN at hard X-rays. The 15--50 keV light curves of a sample of 36 AGN out of the 281 monitored by \textit{Swift}/BAT have been analysed. We found a significantly larger variability of the blazar population compared to the Seyfert one, as expected considering the beamed nature of the high-energy emission of blazars. We confirm the trend of increasing hard X-ray variability with decreasing luminosity for the Seyfert sample, and the correlation between the hard X-ray luminosity and the black hole mass for the 22 AGN with known masses \cite{beckmann07,beckmann09}. The latter relation is consistent with more massive black holes accreting at lower Eddington rates than the less massive ones. As an example of spectral variability studies with \textit{INTEGRAL}, we have analysed IBIS/ISGRI data spanning more than 5 years of the bright Seyfert 1 galaxy IC~4329A. Spectral variations have been observed, with a high-energy cut-off being detected in August 2003 whereas no curvature was visible in the July 2003 spectrum. 
Further analysis of the \textit{INTEGRAL} spectra, including also the JEM-X data below 20~keV, will allow us to further investigate this possible change in the plasma electron temperature and the presence of a reflection component. Although preliminary, the results presented here show the potential of \textit{INTEGRAL}/ISGRI and \textit{Swift}/BAT data to study AGN variability above 20~keV. The large amount of \textit{INTEGRAL} data currently available will enable us to extend this kind of study to about 20 Seyfert galaxies, among which MCG~$+$08$-$11$-$11, NGC~4593, NGC~4388, Mrk~509 and NGC~2110, as already done for other AGN \cite{beckmann08,lubinski10}. In the meantime, the BAT monitoring is continuing, providing us with the best-sampled hard X-ray light curves for an increasing sample of AGN. \vspace{-0.3cm} \section*{Acknowledgments} \vspace{-0.3cm} \small{ S.S. acknowledges the Centre National d'Etudes Spatiales (CNES) for financial support. Part of the present work is based on observations with \textit{INTEGRAL}, an ESA project with instruments and science data centre funded by ESA member states with the participation of Russia and the USA. }
import angular from 'angular'; import thunkMiddleware from 'redux-thunk'; import 'ng-redux'; import 'angular-ui-router'; import './app.css'; import { helloWorlds } from '../reducers'; angular.module('app', ['ngRedux', 'ui.router']) .config(config); config.$inject = ['$ngReduxProvider', '$locationProvider', '$stateProvider']; function config($ngReduxProvider, $locationProvider, $stateProvider) { $ngReduxProvider.createStoreWith( { helloWorlds }, [ thunkMiddleware ] ); $locationProvider.html5Mode({ enabled: false, requireBase: false }); $locationProvider.hashPrefix('!'); } require('./home/home.ng1.js'); require('./home/home.ng1.directive.js'); require('./home/home.ng1.controller.js');
Q: @Override doesn't force to use enum annotation in overridden methods

Is there a way to force the compiler to check that I use the same annotations in a child method as in the parent method? If I use @NotNull and String everything works as I expect.

    public static final int FIRST_ENUM = 0;
    public static final int SECOND_ENUM = 1;

    @IntDef(value = { FIRST_ENUM, SECOND_ENUM })
    @Retention(RetentionPolicy.SOURCE)
    public @interface MyEnum {
    }

    class Parent {
        public void method(@MyEnum int a) {
        }
    }

    class Child extends Parent {
        @Override
        public void method(int a) {
            // This is valid, otherwise there is no @MyEnum
        }
    }

This code is valid although I don't repeat all the annotations in the override.

A: To make the compiler check the consistency of your annotations, you must run an annotation processor during compilation. You can do so using the standard javac -processor flag. When run without an annotation processor, the compiler writes most annotations into .class files but does not verify their semantics. You can define your own annotation processor, or use one written by someone else. The Fake Enum Checker of the Checker Framework can verify correct use of your @MyEnum annotation at compile time.
\section{Proof of Eigenvalue Identity} \label{sec:appendixA} \begin{proposition} Let $\mathbf{A} \in \R^{n \times n}$ be any real symmetric matrix. Then $\min_{\mathbf{B} \succeq 0} \|\mathbf{A} - \mathbf{B}\|_F^2 = \sum_{i : \lambda_i(\mathbf{A}) < 0 } \lambda_i^2(\mathbf{A})$. \end{proposition} \begin{proof} Let $g_i$ be the eigenvector associated with $\lambda_i = \lambda_i(\mathbf{A})$. First, setting $\mathbf{B} = \sum_{i : \lambda_i(\mathbf{A}) \geq 0 } \lambda_i g_i g_i^\top$, which is a PSD matrix, we have $\|\mathbf{A} - \mathbf{B}\|_F^2 = \| \sum_{i : \lambda_i(\mathbf{A}) < 0 } \lambda_i g_i g_i^\top \|_F^2 = \sum_{i : \lambda_i(\mathbf{A}) < 0 } \lambda_i^2(\mathbf{A})$, where the second equality follows from the Pythagorean Theorem, which proves that $\min_{\mathbf{B} \succeq 0} \|\mathbf{A} - \mathbf{B}\|_F^2 \leq \sum_{i : \lambda_i(\mathbf{A}) < 0 } \lambda_i^2(\mathbf{A})$. To see the other direction, fix any PSD matrix $\mathbf{B}$, and let $\mathbf{Z} = \mathbf{B} - \mathbf{A}$. Then $\mathbf{Z} + \mathbf{A} \succeq 0$, where $\succeq$ is the L\"{o}wner ordering, thus $\mathbf{Z} \succeq -\mathbf{A}$, which by definition implies that $x^\top \mathbf{Z} x \geq - x^\top \mathbf{A} x$ for all $x \in \R^n$. Then by the Courant-Fischer variational characterization of eigenvalues, we have that $\lambda_i(\mathbf{Z}) \geq -\lambda_i(\mathbf{A})$ for all $i$. In particular, $|\lambda_i(\mathbf{Z})| \geq |\lambda_i(\mathbf{A})|$ for all $i$ such that $\lambda_i(\mathbf{A}) < 0$. Thus $\|\mathbf{Z}\|_F^2 = \sum_i \lambda_i^2(\mathbf{Z}) \geq \sum_{i : \lambda_i(\mathbf{A}) < 0 } \lambda_i^2(\mathbf{Z}) \geq \sum_{i : \lambda_i(\mathbf{A}) < 0 } \lambda_i^2(\mathbf{A})$, which completes the proof. \end{proof} \subsection{Our Contributions} \label{sec:contri} We now introduce our main contributions. Our algorithms for PSD testing randomly sample principal submatrices and check if they are PSD. 
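This sampling strategy can be sketched as follows (a minimal illustration; the submatrix size and number of repetitions below are placeholder choices, not the tuned parameters from our analysis):

```python
import numpy as np

def psd_test_linf(A, eps, trials=20, seed=None):
    """One-sided PSD tester sketch: sample random principal submatrices
    of size on the order of 1/eps and report Not PSD if any of them has
    a negative eigenvalue.  The submatrix size and trial count here are
    illustrative choices, not the tuned parameters from the analysis."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    k = min(n, max(2, int(np.ceil(1.0 / eps))))
    for _ in range(trials):
        S = rng.choice(n, size=k, replace=False)
        sub = A[np.ix_(S, S)]
        if np.linalg.eigvalsh(sub).min() < -1e-12:   # tolerance for round-off
            return "Not PSD", S      # the non-PSD submatrix is a certificate
    return "PSD", None
```

If $\mathbf{A}$ is PSD then every principal submatrix is PSD, so the sketch never errs in that direction; the content of the theorems below is how few sampled entries suffice when $\mathbf{A}$ is far from PSD.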
Thus, all our algorithms have one-sided error; when $\mathbf{A}$ is PSD, they always return \textsf{PSD}, and whenever our algorithms return \textsf{Not PSD}, they output a certificate in the form of a principal submatrix which is not PSD. In what follows, $\omega < 2.373$ is the exponent of matrix multiplication, and $\tilde{O},\tilde{\Omega}$ notation only hides $\log(1/\epsilon)$ factors (and $\log(s)$ factors for Ky-Fan-$s$ and residual error bounds), thus our bounds have no direct dependency on the input size $n$. We first state our result for the $\ell_\infty$ gap problem in its most general form, which is equivalent to Problem \ref{prob:linf} in the special case when $\mathbf{A}$ is symmetric. \smallskip \smallskip \smallskip \noindent \textbf{Theorem \ref{thm:inftymain} }($\ell_\infty$-gap Upper Bound)\textit{ There is a non-adaptive sampling algorithm which, given $\mathbf{A} \in \R^{n \times n} $ with $\|\mathbf{A}\|_\infty \leq 1$ and $\epsilon \in (0,1)$, returns \textsf{PSD} if $x^\top \mathbf{A} x \geq 0$ for all $x \in \R^n$, and with probability $2/3$ returns \textsf{Not PSD} if $x^\top\mathbf{A} x \leq - \epsilon n$ for some unit vector $x \in \R^n$. The algorithm makes $\tilde{O}(1/\epsilon^2)$ queries to the entries of $\mathbf{A}$, and runs in time $\tilde{O}(1/\epsilon^{\omega})$.\smallskip \smallskip \smallskip } We demonstrate that the algorithm of Theorem \ref{thm:inftymain} is optimal up to $\log(1/\epsilon)$ factors, even for adaptive algorithms with two-sided error. 
Formally, we show: \smallskip \smallskip \smallskip \noindent \textbf{Theorem \ref{thm:linftyLB} }($\ell_\infty$-gap Lower Bound)\textit{ Any adaptive or non-adaptive algorithm which solves the PSD testing problem with $\epsilon$-$\ell_\infty$ gap with probability at least $2/3$, even with two-sided error and even if $\mathbf{A}$ is promised to be symmetric, must query $\wt{\Omega}( 1/\epsilon^2)$ entries of $\mathbf{A}$.\smallskip \smallskip \smallskip } Next, we present our algorithm for the $\ell_2^2$-gap problem. Our algorithm crucially relies on first running our tester for the $\ell_\infty$-gap problem, which allows us to demonstrate that if $\mathbf{A}$ is far from PSD in $\ell_2^2$ but close in $\ell_\infty$, then it must be far from any PSD matrix under other notions of distance, such as Schatten norms or residual tail error. \smallskip \smallskip \smallskip \noindent \textbf{Theorem \ref{thm:l2Main} } ($\ell_2^2$-gap Upper Bound)\textit{ There is a non-adaptive sampling algorithm which, given a symmetric matrix $\mathbf{A} \in \R^{n \times n}$ with $\|\mathbf{A}\|_\infty \leq 1$ and $\epsilon \in (0,1)$, returns \textsf{PSD} if $\mathbf{A}$ is PSD, and with probability $2/3$ returns \textsf{Not PSD} if $ \min_{\mathbf{B} \succeq 0}\|\mathbf{A} - \mathbf{B} \|^2_F\geq \epsilon n^2$. The algorithm makes $\tilde{O}(1/\epsilon^4)$ queries to $\mathbf{A}$, and runs in time $\tilde{O}(1/\epsilon^{2\omega})$.\smallskip \smallskip \smallskip } We complement our upper bound by a $\wt{\Omega}( \frac{1}{\epsilon^2})$ lower bound for PSD testing with $\epsilon$-$\ell_2^2$ gap, which holds even for algorithms with two-sided error. Our lower bound demonstrates a separation between the complexity of PSD testing with $\sqrt{\epsilon}$-$\ell_\infty$ gap and PSD testing with $\epsilon$-$\ell_2^2$-gap, showing that the concentration of negative mass in large eigenvalues makes PSD testing a strictly easier problem. 
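Both gap notions are spectral quantities: the $\ell_\infty$ gap measures $-\lambda_{\min}(\mathbf{A})$, while the $\ell_2^2$ gap measures the total squared negative eigenvalue mass, which by the identity of Section \ref{sec:appendixA} equals $\min_{\mathbf{B} \succeq 0}\|\mathbf{A} - \mathbf{B}\|_F^2$. A quick numerical check of this identity (illustrative only, on a small random matrix):

```python
import numpy as np

# The closest PSD matrix to a symmetric A zeroes out the negative
# eigenvalues, so min_{B >= 0} ||A - B||_F^2 equals the sum of the
# squared negative eigenvalues of A.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                          # random symmetric test matrix
lam, V = np.linalg.eigh(A)
B = (V * np.clip(lam, 0.0, None)) @ V.T    # projection onto the PSD cone
dist_sq = np.sum((A - B) ** 2)             # squared Frobenius distance
neg_mass = np.sum(lam[lam < 0.0] ** 2)     # squared negative eigenvalue mass
print(lam.min(), dist_sq, neg_mass)        # dist_sq and neg_mass agree
```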
\smallskip \smallskip \smallskip \noindent \textbf{Theorem \ref{thm:lbmain} }($\ell_2^2$-gap Lower Bound) \textit{ Any non-adaptive algorithm which solves the PSD testing problem with $\epsilon$-$\ell_2^2$ gap with probability at least $2/3$, even with two-sided error, must query $\wt{\Omega}( 1/\epsilon^2)$ entries of $\mathbf{A}$. \smallskip \smallskip \smallskip } Our lower bound is built on discrete hard instances which are ``locally indistinguishable'', in the sense that the distribution of any small set of samples is completely identical between the PSD and $\epsilon$-far cases. At the heart of the lower bound is a key combinatorial Lemma about arrangements of paths on cycle graphs (see discussion in Section \ref{sec:techlb}). Our construction is highly general, and we believe it will likely be useful for proving other lower bounds for matrix and graph property testing problems. Exemplifying the applicability of our construction, we obtain as an immediate corollary a new lower bound for testing the Schatten-$1$ norm of $\mathbf{A}$. Recall that the Schatten-$1$ norm is defined via $\|\mathbf{A}\|_{\mathcal{S}_1} = \sum_i \sigma_i(\mathbf{A})$, where $\sigma_1(\mathbf{A}) \geq \dots \geq \sigma_n(\mathbf{A})$ are the singular values of $\mathbf{A}$. \smallskip \smallskip \smallskip \noindent \textbf{Theorem \ref{thm:schattenlb} } (Schatten-$1$ Lower Bound) \textit{ Fix any $1/\sqrt{n} \leq \epsilon \leq 1$. Then any non-adaptive algorithm in the bounded entry model that distinguishes between \begin{enumerate} \item $\|\mathbf{A}\|_{\mathcal{S}_1} \geq \epsilon n^{1.5}$, \item $\|\mathbf{A}\|_{\mathcal{S}_1} \leq (1-\epsilon_0)\epsilon n^{1.5}$ \end{enumerate} with probability $2/3$, where $\epsilon_0 = 1/\log^{O(1)}(1/\epsilon)$, must make at least $\tilde{\Omega}( 1/\epsilon^4)$ queries to $\mathbf{A}$. 
} \smallskip \smallskip \smallskip Note that one always has $\|\mathbf{A}\|_{\mathcal{S}_1} \leq n^{1.5}$ in the bounded entry model ($\|\mathbf{A}\|_\infty \leq 1$), which accounts for the above scaling. Theorem \ref{thm:schattenlb} extends a lower bound of Balcan et al. \cite{BalcanTesting}, which is $\Omega(n)$ for the special case of $\epsilon ,\epsilon_0= \Theta(1)$. Thus, for the range $\epsilon = \tilde{O}(n^{-1/4})$, our lower bound is an improvement. To the best of our knowledge, Theorem \ref{thm:schattenlb} gives the first $\tilde{\Omega}(n^2)$ sampling lower bound for testing Schatten-$1$ in a non-degenerate range (i.e., for $\|\mathbf{A}\|_{\mathcal{S}_1} > n$). \begin{remark} We note that the lower bound of \cite{BalcanTesting} is stated for a slightly different version of gap (an ``$\epsilon$-$\ell_0$''-gap), where either $\|\mathbf{A}\|_{\mathcal{S}_1} \geq c_1 n^{1.5}$ for a constant $c_1$, or at least $\epsilon n^2$ of the entries of $\mathbf{A}$ must be changed (while respecting $\|\mathbf{A}\|_\infty \leq 1$) so that the Schatten-$1$ is larger than $c_1 n^{1.5}$. However, their lower bound construction itself satisfies the ``Schatten-gap'' version as stated in Theorem \ref{thm:schattenlb}, where either $\|\mathbf{A}\|_{\mathcal{S}_1} \geq c_1 n^{1.5}$, or $\|\mathbf{A}\|_{\mathcal{S}_1} \leq c_2 n^{1.5}$ and $c_1 > c_2$ are constants. From here, it is easy to see that this gap actually implies the $\ell_0$-gap (and this is used to obtain the $\ell_0$-gap lower bound in \cite{BalcanTesting}), since if $\|\mathbf{A}\|_{\mathcal{S}_1} \leq c_2 n^{1.5}$ then for any $\mathbf{E}$ with $\|\mathbf{E}\|_\infty \leq 2$ and $\|\mathbf{E}\|_0 \leq \epsilon n^2$ for a small enough constant $\epsilon < c_2^2$, we have $\|\mathbf{A} + \mathbf{E} \|_{\mathcal{S}_1} \leq \|\mathbf{A}\|_{\mathcal{S}_1} + \|\mathbf{E}\|_{\mathcal{S}_1} \leq n^{1.5} (c_2 + 2 \sqrt{\epsilon} ) < c_1 n^{1.5}$. 
So Theorem \ref{thm:schattenlb} implies a lower bound of $\tilde{\Omega}(1/\epsilon^2)$ for distinguishing $\|\mathbf{A}\|_{\mathcal{S}_1} \geq \sqrt{\epsilon} n^{1.5}$ from the case of needing to change at least $\tilde{\Omega}(\epsilon n^2)$ entries of $\mathbf{A}$ so that $\|\mathbf{A}\|_{\mathcal{S}_1} \geq \sqrt{\epsilon} n^{1.5}$. Thus, our lower bound \textit{also} extends the $\ell_0$-gap version of the results of \cite{BalcanTesting} for the range $\epsilon = \tilde{O}(1/\sqrt{n})$. \end{remark} In addition to Schatten-$1$ testing, the same lower bound construction and techniques from Theorem \ref{thm:lbmain} also result in new lower bounds for testing the Ky-Fan $s$ norm $\|\mathbf{A}\|_{\textsf{KF}(s)} = \sum_{i=1}^s \sigma_i(\mathbf{A})$, as well as the cost of the best rank-$s$ approximation $\|\mathbf{A} - \mathbf{A}_s \|_{F}^2 = \sum_{i> s} \sigma_i^2(\mathbf{A})$, stated below. In the following, $s$ is any value $1 \leq s \leq n/(\text{poly} \log n)$, and $c$ is a fixed constant. \smallskip \smallskip \smallskip \noindent \textbf{Theorem \ref{thm:kflb} }(Ky-Fan Lower Bound) \textit{ Any non-adaptive algorithm in the bounded entry model which distinguishes between \begin{enumerate} \item $\|\mathbf{A}\|_{\textsf{KF}(s)} > \frac{c}{\log s}n$ \item $\|\mathbf{A}\|_{\textsf{KF}(s)} < (1-\epsilon_0) \cdot \frac{c}{\log s}n $ \end{enumerate} with probability $2/3$, where $\epsilon_0 = \Theta(1/\log^2(s))$, must query at least $\tilde{\Omega}(s^2)$ entries of $\mathbf{A}$. 
}\smallskip \smallskip \smallskip \smallskip \smallskip \smallskip \noindent \textbf{Theorem \ref{thm:taillb} }(Residual Error Lower Bound) \textit{ Any non-adaptive algorithm in the bounded entry model which distinguishes between \begin{enumerate} \item $\|\mathbf{A} - \mathbf{A}_s \|_{F}^2 > \frac{c}{s \log s}n $ \item $\|\mathbf{A} - \mathbf{A}_s \|_{F}^2 < (1-\epsilon_0) \cdot \frac{c}{s \log s}n$ \end{enumerate} with probability $2/3$, where $\epsilon_0 = 1/\log^{O(1)}(s)$, must query at least $\tilde{\Omega}(s^2)$ entries of $\mathbf{A}$. }\smallskip \smallskip \smallskip Our lower bound for the Ky-Fan norm complements a Ky-Fan testing lower bound of \cite{li2016tight}, which is $\Omega(n^2/s^2)$ for distinguishing \textbf{1)} $\|\mathbf{A}\|_{KF(s)} < 2.1 s \sqrt{n}$ from \textbf{2)} $\|\mathbf{A}\|_{KF(s)} > 2.4s \sqrt{n}$ when $s = O(\sqrt{n})$. Note their bound decreases with $s$, whereas ours increases, thus the two bounds are incomparable (although they match up to $\log(s)$ factors at $s = \Theta(\sqrt{n})$).\footnote{The bound from \cite{li2016tight} is stated in the sketching model, however the entries of the instance are bounded, thus it also applies to the sampling model considered here.} We also point out that there are (not quite matching) upper bounds for both the problems of Ky-Fan norm and $s$-residual error testing in the bounded entry model, just based on a standard application of the Matrix Bernstein Inequality.\footnote{See Theorem 6.1.1 of \cite{tropp2015introduction}, applied to $S_k = a_{(k)}(a_{(k)})^\top$, where $a_{(k)}$ is the $k$-th row sampled in $\mathbf{A}$; for the case of residual error, one equivalently applies matrix Bernstein inequality to estimate the head $\sum_{i \leq k} \sigma_i^2(\mathbf{A})$. 
These bounds can be tightened via the use of interior Chernoff bounds \cite{gittens2011tail}.} We leave the exact query complexity of these and related testing problems for functions of singular values in the bounded entry model as a subject for future work. \paragraph{A Remark on the $\ell_2^2$-Gap.} We note that there appear to be several key barriers to improving the query complexity of PSD testing with $\ell_2^2$-gap beyond $O(1/\epsilon^4)$, which we briefly discuss here. First, in general, to preserve functions of the squared singular values of $\mathbf{A}$ up to error $\epsilon n^2$, such as $\|\mathbf{A}\|_F^2 = \sum_i \sigma_i^2(\mathbf{A})$ or $\|\mathbf{A}\|_2^2 = \sigma_1^2(\mathbf{A})$, any algorithm which samples a submatrix must make $\Omega(1/\epsilon^4)$ queries (see Lemma \ref{lem:lowerboundAA} for estimating $\sum_{i \leq k} \sigma_i^2$ for any $k$). In other words, detecting $\epsilon n^2$-sized perturbations in the spectrum of a matrix in general requires $\Omega(1/\epsilon^4)$-sized submatrices. This rules out improving the query complexity by detecting the $\epsilon n^2$ negative mass in $\mathbf{A}$ via, for instance, testing if the sum of squares of the top $k=1/\epsilon$ singular values has $\Theta(\epsilon n^2)$ less mass than it should if $\mathbf{A}$ were PSD (even this may require $\Omega(k^2/\epsilon^4)$ queries, see the discussion in Section \ref{sec:techl2}). The key issue at play in the above barrier appears to be the requirement of sampling submatrices. Indeed, notice for the simplest case of $\|\mathbf{A}\|_F^2 $, we can easily estimate $\|\mathbf{A}\|_F^2$ to additive $\epsilon n^2$ via $O(1/\epsilon^2)$ queries to random entries of $\mathbf{A}$. 
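Such an entry-sampling estimator (whose queries need not form a submatrix) can be sketched as follows; the sample-size constant is an illustrative choice, not an optimized one:

```python
import numpy as np

def frob_sq_estimate(A, eps, seed=None):
    """Estimate ||A||_F^2 to additive error ~eps * n^2 from uniformly
    random entries.  With entries bounded by 1 in magnitude, Hoeffding's
    inequality gives O(1/eps^2) samples; the constant 8 is illustrative."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    m = int(np.ceil(8.0 / eps ** 2))       # number of sampled entries
    i = rng.integers(0, n, size=m)
    j = rng.integers(0, n, size=m)
    return n * n * np.mean(A[i, j] ** 2)   # rescaled empirical mean
```

Since each sampled $A_{ij}^2$ lies in $[0,1]$, the rescaled mean concentrates around $\|\mathbf{A}\|_F^2$ regardless of where in the matrix the mass sits, which is exactly the flexibility lost when the queries are constrained to a submatrix.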
On the other hand, if these queries must form a submatrix, then it is easy to see that $\Omega(1/\epsilon^4)$ queries are necessary, simply from the problem of estimating $\|\mathbf{A}\|_F^2$ for a matrix whose rows (or columns) have values determined by a coin flip with bias either equal to $1/2$ or $1/2 + \epsilon$. However, for testing positive semi-definiteness, especially with one-sided error, the requirement of sampling a principal submatrix seems unavoidable. In addition, a typical approach when studying spectral properties of submatrices is to first pass to a random row submatrix $\mathbf{A}_{S \times [n]}$, argue that it preserves the desired property (up to scaling), and then iterate the process on a column submatrix $\mathbf{A}_{S \times T}$. Unfortunately, these types of arguments are not appropriate when dealing with eigenvalues of $\mathbf{A}$, since after passing to the rectangular matrix $\mathbf{A}_{S \times [n]}$, any notion of negativity of the eigenvalues has now been lost. This forces one to argue indirectly about functions of the singular values of $\mathbf{A}_{S \times [n]}$, returning to the original difficulty described above. We leave it as an open problem to determine the exact non-adaptive query complexity of PSD testing with $\ell_2^2$-gap. For a further discussion of these barriers and open problems, see Section \ref{sec:conclusion}. \subsection{Connections to Optimization, Euclidean Metrics and Linear Algebra} We now describe some explicit instances where our algorithms may be useful for testing positive semi-definiteness. We emphasize that in general, the distance between $\mathbf{A}$ and the PSD cone may be too small to verify via our testers. However, when the input matrices satisfy a non-trivial gap from the PSD cone, we can speed up some basic algorithmic primitives. 
The first is testing feasibility of the PSD constraint in a Semi-Definite Program (SDP) with sublinear queries and time, so long as the variable matrix has bounded entries. Importantly, our algorithms also output a separating hyperplane to the PSD cone. \begin{corollary}[Feasibility and Separating Hyperplanes for SDPs] Given an SDP $\mathcal{S}$, let $\mathbf{X} \in \R^{n \times n}$ be a symmetric matrix that violates the PSD constraint for $\mathcal{S}$. Further, suppose $\|\mathbf{X} \|_{\infty} \leq 1$ and $\mathbf{X}$ is $\epsilon n^2$-far in entry-wise $\ell_2^2$ distance to the PSD cone. Then, there exists an algorithm that queries $\widetilde{O}(1/\epsilon^4)$ entries in $\mathbf{X}$ and runs in $\wt{O}(1/\epsilon^{2\omega})$ time, and with probability $9/10$, outputs a vector $\tilde{v}$ such that $\tilde{v}^{T} \mathbf{X} \tilde{v} < 0$. Moreover, if $\lambda_{\min}(\mathbf{X}) < -\epsilon n$, then there is an algorithm yielding the same guarantee, that queries $\widetilde{O}(1/\epsilon^2)$ entries in $\mathbf{X}$ and runs in $\wt{O}(1/\epsilon^{\omega})$ time. \end{corollary} While in the worst case, our algorithm may need to read the whole matrix to exactly test if $\mathbf{X}$ is PSD, there may be applications where relaxing the PSD constraint to the convex set of matrices which are close to the PSD cone in Euclidean distance is acceptable. Moreover, our algorithm may be run as a preliminary step at each iteration of an SDP solver to check if the PSD constraint is badly violated, resulting in speed-ups by avoiding an expensive eigendecomposition of $\mathbf{X}$ whenever our algorithm outputs a separating hyperplane~\cite{vandenberghe1996semidefinite}. Next, we consider the problem of testing whether an arbitrary finite metric $d$ over $n$ points, $x_1, \dots, x_n \in \mathbb{R}^d$, is embeddable into Euclidean space. 
Testing if a metric is Euclidean has a myriad of applications, such as determining whether dimensionality reduction techniques such as Johnson-Lindenstrauss can be used \cite{parnas2003testing}, checking if efficient Euclidean TSP solvers can be applied \cite{arora1998polynomial}, and more recently, computing a low-rank approximation in sublinear time \cite{bakshi2018sublinear,indyk2019sample}. It is well known (Schoenberg's criterion~\cite{dattorro2010convex}) that given a distance matrix $\mathbf{D} \in \R^{n \times n}$ such that $\mathbf{D}_{i,j} = d(x_i, x_j)$, the points are isometrically embeddable into Euclidean space if and only if $\mathbf{G} = \mathbf{1} \cdot \mathbf{D}_{1,*} + \mathbf{D}_{1,*}^\top \cdot \mathbf{1}^\top - \mathbf{D} \succeq 0$, where $ \mathbf{D}_{1,*}$ is the first row of $\mathbf{D}$. Notice that embeddability is scale invariant, allowing one to scale distances to ensure boundedness. Furthermore, since our algorithms sample submatrices and check for non-positive semi-definiteness, the tester need not know this scaling in advance, and gives guarantees for distinguishing definiteness if the necessary gap is satisfied after hypothetically scaling the entries. \begin{corollary}[Euclidean Embeddability of Finite Metrics] Given a finite metric $d$ on $n$ points $\{x_1,\dots,x_n\}$, let $\mathbf{D} \in \R^{n \times n}$ be the corresponding distance matrix, scaled so that $\|\mathbf{D}\|_\infty \leq 1/3$, and let $\mathbf{G}= \mathbf{1}\mathbf{D}_{1,*} + \mathbf{D}_{1,*}^\top\mathbf{1}^\top - \mathbf{D}$. Then if $\min_{\mathbf{B} \succeq 0} \|\mathbf{G} - \mathbf{B}\|_F^2 \geq \epsilon n^2$, there exists an algorithm that queries $\wt{O}(1/\epsilon^4)$ entries in $\mathbf{A}$ and with probability $9/10$, determines the non-embeddability of $\{x_1,\dots,x_n\}$ into Euclidean space. Further, the algorithm runs in time $\wt{O}(1/\epsilon^{2\omega})$. 
\end{corollary} \begin{remark} An intriguing question is to characterize geometric properties of finite metrics based on the $\ell_2^2$-distance of the Schoenberg matrix $\mathbf{G}$ from the PSD cone. For instance, given a finite metric with Schoenberg matrix $\mathbf{G}$ that is close to being PSD in $\ell^2_2$-distance, can we conclude that the metric has a low worst-case or average-case distortion embedding into Euclidean space? \end{remark} \begin{remark} Since rescaling entries to be bounded only affects the gap parameter $\epsilon$, in both of the above cases, so long as the magnitude of the entries in $\mathbf{X},\mathbf{D}$ does not scale with $n$, the running time of our algorithms is still sublinear in the input. \end{remark} Finally, several recent works have focused on obtaining sublinear time algorithms for low-rank approximation when the input matrix is PSD \cite{musco2017sublinear, bakshi2019robust}. However, such algorithms only succeed when the input is PSD or close to PSD (in $\ell_2^2$), and it is unknown how to verify whether these algorithms succeeded in sublinear time. Therefore, our tester can be used as a pre-processing step to determine input instances where the aforementioned algorithms provably will (or will not) succeed. \subsection{Related work} Property testing in the bounded entry model was first considered in \cite{BalcanTesting} to study the query complexity of testing spectral properties of matrices, such as stable rank (the value $\|\mathbf{A}\|_F^2/\|\mathbf{A}\|_2^2)$ and Schatten $p$ norms. A related model, known as the \textit{bounded row model}, where rows instead of entries are required to be bounded, was studied by Li, Wang, and Woodruff \cite{li2014improved}, who gave tight bounds for testing stable rank in this model. 
In addition, the problem of testing the rank of a matrix from a small number of queries has been well studied \cite{parnas2003testing,krauthgamer2003property,li2014improved,barman2016testing}, as well as the problem of estimating the rank via a random submatrix \cite{balcan2011learning,balcan2016noise}. Notice that since rank is not a smooth spectral property, hiding an unbounded value in a single entry of $\mathbf{A}$ cannot drastically alter the rank. Thus, for testing rank, the condition of boundedness is not required. More generally, the bounded entry model is the natural sampling analogue of the \textit{linear sketching model}, where the algorithm gets to choose a matrix $\mathbf{S} \in \R^{t \times n^2}$, where $t$ is the number of ``queries'', and then observes the product $\mathbf{S} \cdot \textsf{vec}(\mathbf{A})$, where $\textsf{vec}(\mathbf{A})$ is the vectorization of $\mathbf{A}$~\cite{li2017embeddings,li2016tight, braverman2018matrix,li2016approximating,li2014improved,li2014sketching,braverman2019schatten,li2019approximating}. The model has important applications to streaming and distributed algorithms. Understanding the query complexity of sketching problems, such as estimating spectral norms and the top singular values \cite{andoni2013eigenvalues,li2014sketching,li2016tight}, estimating Schatten and Ky-Fan norms \cite{li2016tight, li2017embeddings,li2016approximating, braverman2019schatten}, estimating $\ell_p$ norms \cite{alon1996space, indyk2006stable, kane2010exact, jayaram2019towards,ben2020framework}, and $\ell_p$ sampling \cite{monemizadeh20101, Jowhari:2011,jayaram2018perfect,jayaram2019weighted}, has been a topic of intense study. For the problem of sketching \emph{eigenvalues} (with their signs), perhaps the most related result is \cite{andoni2013eigenvalues}, which gives point-wise estimates of the top eigenvalues. 
Notice that linear sketching can simulate sampling by setting the rows of $\mathbf{S}$ to be standard basis vectors, however sketching is in general a much stronger query model. Note that to apply a linear sketch, unlike in sampling, one must read all the entries of $\mathbf{A}$, which does not yield sublinear algorithms. A special case of the sketching model is the \textit{matrix-vector product} model, which has been studied extensively in the context of compressed sensing~\cite{candes2006stable, eldar2012compressed} and sparse recovery~\cite{gilbert2010sparse}. Here, one chooses vectors $v_1,\dots,v_k$ and observes the products $\mathbf{A} v_1,\dots,\mathbf{A} v_k$. Like sketching, matrix-vector product queries are a much stronger access model than sampling. Recently, in the matrix-vector product model, Sun et al.\ considered testing various graph and matrix properties~\cite{sun2019querying}, and Han et al.\ considered approximating spectral sums and testing positive semi-definiteness~\cite{han2017approximating}. Lastly, while there has been considerable work on understanding concentration of norms and singular values of random matrices, not as much is known about their \textit{eigenvalues}. Progress in understanding the behavior of singular values of random matrices includes concentration bounds for spectral norms of submatrices \cite{rudelson2007sampling,tropp2008norms}, concentration bounds for extreme singular values \cite{gittens2011tail,tropp2015introduction,vershynin2010introduction,garg2018matrix,kyng2018matrix}, non-commutative Khintchine inequalities for Schatten-$p$ norms \cite{lust1991non,pisier2009remarks,pisier2017non}, as well as Kadison-Singer type discrepancy bounds \cite{marcus2015interlacing,kyng2020four,song2020hyperbolic}. 
These random matrix concentration bounds have resulted in improved algorithms for many fundamental problems, such as low-rank approximation and regression \cite{clarkson2017low, musco2017sublinear, bakshi2018sublinear, indyk2019sample, diao2019optimal} and spectral sparsification \cite{spielman2011spectral,batson2013spectral,andoni2016sketching}. However, in general, understanding the behavior of negative eigenvalues of random matrices and submatrices remains largely an open problem. \subsection{Technical Overview} \input{Techniques} \section{Upper bounds for Schatten and Ky-Fan norm Estimation} In this section, we prove new decay theorems for Schatten and Ky-Fan norms under random restrictions. Our techniques use multiple ideas from functional analysis, such as symmetrization and non-commutative Khintchine inequalities for all $p$ in $(0,2]$. We begin by defining the generalized Ky-Fan norm: \begin{definition} Given an $n \times d$ matrix $\mathbf{A}$, let $\mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$ be the SVD of $\mathbf{A}$. Then, for any $k \in [d]$ and $p \in (0,\infty)$, the $(k,p)$-Ky-Fan norm is given by \[ \|\mathbf{A}\|_{K_{k,p}} = \left(\sum^{k}_{i =1} \sigma^{p}_{i}(\mathbf{A})\right)^{1/p} \] \end{definition} \begin{remark} The $(k,p)$-Ky-Fan norm captures as special cases the Frobenius norm, all Schatten-$p$ norms and the operator norm. \end{remark} We begin by stating our main decay results. We obtain new theorems for the behavior of the $(k,p)$-Ky-Fan norm when $p \in \{1,2\}$. For $p=2$, this captures the Frobenius norm of the best rank-$k$ approximation to $\mathbf{A}$. For $p=1$ and $k=n$, we obtain a decay result for Schatten-$1$. 
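For concreteness, the $(k,p)$-Ky-Fan norm is computed directly from the singular values. The following NumPy sketch (the function name \texttt{ky\_fan\_norm} is our own, not from any library) illustrates the definition and the special cases noted in the remark:

```python
import numpy as np

def ky_fan_norm(A, k, p):
    """(k, p)-Ky-Fan norm: the ell_p norm of the top-k singular values of A."""
    # np.linalg.svd returns singular values sorted in non-increasing order,
    # so the first k entries are the k largest.
    sigma = np.linalg.svd(A, compute_uv=False)
    return float((sigma[:k] ** p).sum() ** (1.0 / p))

# Special cases: (n, 2) recovers the Frobenius norm, (1, 1) the operator norm.
A = np.arange(16, dtype=float).reshape(4, 4)
assert np.isclose(ky_fan_norm(A, 4, 2), np.linalg.norm(A, "fro"))
assert np.isclose(ky_fan_norm(A, 1, 1), np.linalg.norm(A, 2))
```

Taking $k = n$ and $p = 1$ gives the Schatten-$1$ (nuclear) norm, the case the second decay theorem below addresses.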
\begin{theorem}[$(k,2)$-Ky-Fan Decay] \label{thm:top_k_decay} Given an $n \times n$ symmetric matrix $\mathbf{A}$ and $k \in [n]$, let $Q$ be a random subset of $[n]$ in which each index is included independently with probability $\frac{q}{n}$ (so that $\expecf{}{|Q|}= q$), and let $\mathbf{A}_{\mid Q}$ be the matrix obtained by restricting the columns of $\mathbf{A}$ to the subset indexed by $Q$. Then, for $r\in \mathbb{N}$, \begin{equation*} \expecf{r}{\Norm{\mathbf{A}_{\mid Q}}_{K_{k,2}}} = \sqrt{\frac{q}{n}} \Norm{\mathbf{A}}_{K_{k,2}} \pm O\left( \sqrt{k} \log(k)^{1/2} \expecf{r}{ \Norm{\mathbf{A}_{\mid Q} }_{1\to 2} } \right) \end{equation*} \end{theorem} \ainesh{adding the right statement and proof for the above} \ainesh{Next, we obtain a decay theorem for the standard Ky-Fan norm as well as Schatten-$1$. The key techniques we use here are a symmetrization argument for pseudo-norms and the non-commutative Khintchine inequality for $p < 1$. This commentary should go elsewhere. } \begin{theorem}[$(k,1)$-Ky-Fan and Schatten-$1$ Decay] \label{thm:ky_fan_decay} Given an $n \times n$ symmetric matrix $\mathbf{A}$ and $k \in [n]$, let $Q$ be a random subset of $[n]$ in which each index is included independently with probability $\frac{q}{n}$ (so that $\expecf{}{|Q|}= q$), and let $\mathbf{A}_{\mid Q}$ be the matrix obtained by restricting the columns of $\mathbf{A}$ to the subset indexed by $Q$. Then, for $r\in \mathbb{N}$, \begin{equation*} \expecf{r}{\Norm{\mathbf{A}_{\mid Q}}_{K_{k,1}}} = 2\sqrt{\frac{q}{n}} \Norm{\mathbf{A}}_{K_{k,1}} \pm O\left( k \log(k)^{1/2} \expecf{r}{ \Norm{\mathbf{A}_{\mid Q} }_{1\to 2} } \right) \end{equation*} \end{theorem} We begin by recalling the non-commutative Khintchine inequality, originally discovered by Lust-Piquard \cite{lust1986inegalites} and Lust-Piquard and Pisier \cite{lust1991non}. 
We use a sharp bound due to Buchholz \cite{buchholz2001operator}, that was subsequently improved by Oliveira \cite{Oliveira_2010} and restated in Meka and Wigderson (Theorem 5.1): \ainesh{put right def of NCKI from gilles pisier } \begin{theorem}[Non-Commutative Khintchine Inequality] \label{thm:non_commutative_khintchine} Let $\mathbf{A}_{(1)}, \ldots, \mathbf{A}_{(t)}$ be a finite sequence of $n \times n$ PSD matrices and for all $i \in [t]$, let $\eta_i$ be a Rademacher random variable. Then, for any $\Delta > 0$, with probability at least $1- ne^{-\Delta^2}$, \begin{equation*} -\Delta \left(\sum_{i\in[t]} \mathbf{A}_{(i)}\mathbf{A}^{\top}_{(i)} \right)^{1/2} \preceq \sum_{i\in[t]}\eta_i \mathbf{A}_{(i)} \preceq \Delta \left(\sum_{i\in[t]} \mathbf{A}_{(i)}\mathbf{A}^{\top}_{(i)} \right)^{1/2} \end{equation*} \end{theorem} We also require the following simple fact for $L_{p}$-spaces when $p<1$: \begin{fact}[Triangle and Reverse Triangle Inequality] \label{fact:tri} Given vectors $x, y \in \mathbb{R}^d$, for any $p \in (0,1)$, $\| x +y \|_p \leq 2^{1/p- 1}(\|x\|_p + \|y \|_p)$ and $\|x+y\|^p_p = \|x\|^p_p \pm \|y\|^p_p$. \end{fact} Next, we show that classical symmetrization arguments extend to pseudo-norms. \begin{lemma}[Symmetrization for Norms, \cite{} HDP Book] Given $n$ i.i.d. mean-zero random vectors $X_1, X_2, \ldots X_n \in \mathbb{R}^d$, let $\{\eta_i\}_{i\in [n]}$ be $n$ Rademacher random variables. Then, for any norm $\|\cdot\|_{K}$, \[ \expecf{}{\Norm{\sum_{i \in [n]} X_i}_K} \leq 2 \expecf{}{\Norm{\sum_{i \in [n]} \eta_i X_i }_K} \] \end{lemma} Now, we are ready to prove our theorems. \begin{proof}[Proof of Theorem \ref{thm:top_k_decay}] Let $K^{*}$ denote the $(k,2)$-Ky-Fan norm. For all $i \in [n]$, let $\delta_i$ be indicator random variables for $i \in Q$ and thus $\expecf{}{\delta_i} = \delta= q/n$. Let $\mathbf{A}_{i}$ denote the $i$-th column of $\mathbf{A}$. 
Therefore, $\mathbf{A} = \sum_{i \in [n]} e_i \otimes \mathbf{A}_i$ and $\mathbf{A}_{\mid Q} = \sum_{i \in [n]} \delta_i e_i \otimes \mathbf{A}_i$, where $e_i$ is the $i$-th standard basis vector. Let $\mathbf{A} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^T$ be the eigendecomposition of $\mathbf{A}$. Observe, \begin{equation*} \Norm{\mathbf{A} }_{K^*} = \left(\sum_{j \in [k]} \lambda^2_j(\mathbf{A})\right)^{1/2} = \Norm{\mathbf{A}^{\top} \mathbf{A}}^{1/2}_{K_{(k,1)}} = \Norm{\sum_{i \in [n]} \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{(k,1)}} \end{equation*} where the second equality follows from $\mathbf{A}^T \mathbf{A} = \mathbf{U} \mathbf{\Lambda}^2 \mathbf{U}^T$ and the definition of $\|\cdot \|_{K_{(k,1)}}$. Similarly, \begin{equation*} \|\mathbf{A}_{\mid Q}\|_{K^*} = \Norm{\sum_{i \in [n]} \delta_i \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{(k,1)}} \end{equation*} First, we do the analysis for the expectation. We would like to bound $\expecf{}{\Norm{\mathbf{A}_{\mid Q}}_{K^*}}$ as follows \begin{equation} \label{eqn:decay_statement} \begin{split} \expecf{}{\Norm{\mathbf{A}_{\mid Q}}_{K^*}} & = \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i} + \delta \mathbf{A}^{\top}\mathbf{A} }^{1/2}_{K_{k,1}}} \\ & = \Norm{ \delta \mathbf{A}^{\top}\mathbf{A} }^{1/2}_{K_{k,1}} \pm \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{k,1}}}\\ & = \sqrt{\delta} \Norm{\mathbf{A}}_{K^*} \pm \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{k,1}}} \end{split} \end{equation} where the second equality follows from triangle and reverse triangle inequality on the square-root of the Ky-Fan norm. Observe, we get the rate of decay we desire for $\|\mathbf{A}\|_{K^*}$ and it suffices to bound the additive term. 
Using Fact \ref{} we know that $K_{k,1}$ is a norm and it follows from Lemma \ref{} that we can bound the expectation by introducing Rademacher random variables $\eta_i$ and observing that $\eta_i(\delta_i - \delta)$ has the same distribution as $\eta_i \delta_i$ and thus \begin{equation*} \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{k,1}}} \leq 2 \expecf{}{\expecf{\eta}{\Norm{ \sum_{i \in [n]} \eta_i \delta_i\mathbf{A}_{i} \otimes \mathbf{A}_{i}}_{K_{k,1}}}^{1/2}} \end{equation*} Here, we apply Theorem \ref{thm:non_commutative_khintchine} with $\mathbf{A}_{(i)} = \mathbf{A}_i \otimes \mathbf{A}_i$ to obtain that with probability at least $99/100$, over a fixing of $Q$, \begin{equation} \label{eqn:khintchine} \begin{split} \expecf{\eta}{\Norm{ \sum_{i \in [n]} \eta_i\delta_i \mathbf{A}_{i} \otimes \mathbf{A}_{i}}_{K_{k,1}}} & \leq O\left(\sqrt{p }\right) \Norm{\left( \sum_{i \in [n]} \delta_i (\mathbf{A}_{i} \otimes \mathbf{A}_{i})^{\top} \mathbf{A}_{i} \otimes \mathbf{A}_{i} \right)^{1/2} }_{K_{k,1}} \\ & \leq O\left(\sqrt{p } \right) \Norm{\mathbf{A}_{\mid Q}}_{1\to 2} \cdot \Norm{\mathbf{A}_{\mid Q} }_{K_{(k,1)}} \\ & \leq O\left(\sqrt{pk } \right) \Norm{\mathbf{A}_{\mid Q}}_{1\to 2} \cdot \Norm{\mathbf{A}_{\mid Q} }_{K_{(k,2)}} \end{split} \end{equation} where we incur a $\sqrt{k}$ factor relating $K_{(k,1)}$ to $K_{(k,2)}$. Plugging this back into Equation \eqref{eqn:decay_statement}, we get \begin{equation} \label{eqn:top_k_bound_for_algo} \begin{split} \expecf{}{\Norm{\mathbf{A}_{\mid Q}}_{K^*}} & = \sqrt{\delta} \Norm{\mathbf{A}}_{K^*} \pm O\left(\sqrt{pk }\expecf{}{\Norm{\mathbf{A}_{\mid Q}}_{1\to 2}} \right)^{1/2} \cdot \expecf{}{\Norm{\mathbf{A}_{\mid Q} }_{K_{(k,2)}}}^{1/2} \end{split} \end{equation} where we used Cauchy-Schwarz. 
\ainesh{This above bound should suffice for our applications} \ainesh{unfortunately going from the above to a clean bound incurs } Now, we are ready to bound the $r$-th moment. Observe $\expecf{r}{\Norm{\mathbf{A}_{\mid Q}}_{K^*}}^2 = \expecf{}{\Norm{ \sum_{i \in [n]} \delta_i \mathbf{A}_i \otimes \mathbf{A}_i }^{r/2}_{K_{k,1}}}^{2/r}$. Then, \begin{equation} \label{eqn:decay_statement} \begin{split} \expecf{r}{\Norm{\mathbf{A}_{\mid Q}}_{K^*}}^2 & = \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i} + \delta \mathbf{A}^{\top}\mathbf{A} }^{r/2}_{K_{k,1}}}^{2/r} \\ & = \Norm{ \delta \mathbf{A}^{\top}\mathbf{A} }_{K_{k,1}} \pm \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{r/2}_{K_{k,1}}}^{2/r}\\ & = \delta \Norm{\mathbf{A}}^2_{K^*} \pm \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{r/2}_{K_{k,1}}}^{2/r} \end{split} \end{equation} where the second equality follows from triangle and reverse triangle inequality on the Ky-Fan norm then triangle inequality for $\expecf{r/2}{}$. Observe, we get the rate of decay we desire for $\|\mathbf{A}\|_{K^*}$ and it suffices to bound the additive term. 
Since $K_{k,1}$ is a norm, it follows from Lemma \ref{} that we can bound the expectation by introducing Rademacher random variables $\eta_i$ and observing that $\eta_i(\delta_i - \delta)$ has the same distribution as $\eta_i \delta_i$ and thus \begin{equation*} \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{r/2}_{K_{k,1}}} \leq 2 \expecf{}{\expecf{\eta}{\Norm{ \sum_{i \in [n]} \eta_i \delta_i\mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{r/2}_{K_{k,1}}}} \end{equation*} Here, we apply Theorem \ref{thm:non_commutative_khintchine} with $\mathbf{A}_{(i)} = \mathbf{A}_i \otimes \mathbf{A}_i$ to obtain that with probability at least $99/100$, over a fixing of $Q$, \begin{equation} \label{eqn:khintchine} \begin{split} \expecf{\eta}{\Norm{ \sum_{i \in [n]} \eta_i\delta_i \mathbf{A}_{i} \otimes \mathbf{A}_{i}}_{K_{k,1}}} & \leq O\left(\sqrt{p }\right) \Norm{\left( \sum_{i \in [n]} \delta_i (\mathbf{A}_{i} \otimes \mathbf{A}_{i})^{\top} \mathbf{A}_{i} \otimes \mathbf{A}_{i} \right)^{1/2} }_{K_{k,1}} \\ & \leq O\left(\sqrt{p } \right) \Norm{\mathbf{A}_{\mid Q}}_{1\to 2} \cdot \Norm{\mathbf{A}_{\mid Q} }_{K_{(k,1)}} \\ & \leq O\left(\sqrt{pk } \right) \Norm{\mathbf{A}_{\mid Q}}_{1\to 2} \cdot \Norm{\mathbf{A}_{\mid Q} }_{K_{(k,2)}} \end{split} \end{equation} where we incur a $\sqrt{k}$ factor relating $K_{(k,1)}$ to $K_{(k,2)}$. 
Plugging this back into Equation \eqref{eqn:decay_statement}, we get \begin{equation} \label{eqn:second_ky_fan} \begin{split} \expecf{r}{\Norm{\mathbf{A}_{\mid Q}}_{K^*}}^2 & = \delta \Norm{\mathbf{A}}^2_{K^*} \pm O\left(\sqrt{pk }\expecf{r}{\Norm{\mathbf{A}_{\mid Q}}_{1\to 2}} \right) \cdot \expecf{r}{\Norm{\mathbf{A}_{\mid Q} }_{K^*}} \end{split} \end{equation} where we used Cauchy-Schwarz. Observe, $x^2 \leq a x + b$ implies $x \leq \frac{a + \sqrt{a^2 + 4b}}{2} \leq 2a + \sqrt{b}$ and similarly, $x \geq -2a + \sqrt{b}$ and thus from Eqn \eqref{eqn:second_ky_fan}, \begin{equation} \expecf{r}{\Norm{\mathbf{A}_{\mid Q} }_{K^*} } = \sqrt{\delta} \Norm{\mathbf{A}}_{K^*} \pm O\left(\sqrt{pk }\log(n)\right) \expecf{r}{\Norm{\mathbf{A}_{\mid Q} }_{1\to 2}} \end{equation} which completes the proof. \end{proof} \ainesh{This above proof is straightforward and does not require new symm and the variance bound also works out nicely. } Next, we prove the decay theorem for $(k,1)$-Ky-Fan norms. \begin{proof}[Proof of Theorem \ref{thm:ky_fan_decay}] For all $i \in [n]$, let $\delta_i$ be indicator random variables for $i \in Q$ and thus $\expecf{}{\delta_i} = \delta= q/n$. Let $\mathbf{A}_{i}$ denote the $i$-th column of $\mathbf{A}$. Therefore, $\mathbf{A} = \sum_{i \in [n]} e_i \otimes \mathbf{A}_i$ and $\mathbf{A}_{\mid Q} = \sum_{i \in [n]} \delta_i e_i \otimes \mathbf{A}_i$, where $e_i$ is the $i$-th standard basis vector. Let $\mathbf{A} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^T$ be the eigendecomposition of $\mathbf{A}$. 
Observe, \begin{equation*} \Norm{\mathbf{A} }_{K_{k,1}} = \sum_{j \in [k]} |\lambda_j(\mathbf{A})| = \sum_{j \in [k]} \left(\lambda^2_j(\mathbf{A})\right)^{1/2} = \Norm{\mathbf{A}^{\top} \mathbf{A}}^{1/2}_{K_{(k,1/2)}} = \Norm{\sum_{i \in [n]} \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{(k,1/2)}} \end{equation*} where the second to last equality follows from $\mathbf{A}^T \mathbf{A} = \mathbf{U} \mathbf{\Lambda}^2 \mathbf{U}^T$ and the definition of $\|\cdot \|_{K_{(k,1/2)}}$. Similarly, \begin{equation*} \|\mathbf{A}_{\mid Q}\|_{K_{k,1}} = \Norm{\sum_{i \in [n]} \delta_i \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{(k,1/2)}} \end{equation*} We would like to bound $\expecf{}{\Norm{\mathbf{A}_{\mid Q}}_{K_{k,1}}}$. Observe, \begin{equation} \label{eqn:decay_statement} \begin{split} \expecf{}{\Norm{\mathbf{A}_{\mid Q}}_{K_{k,1}}} & = \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i} + \delta \mathbf{A}^{\top}\mathbf{A} }^{1/2}_{K_{k,1/2}}} \\ & = \expecf{}{\Norm{ \delta \mathbf{A}^{\top}\mathbf{A} }^{1/2}_{K_{k,1/2}} \pm \Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i} }^{1/2}_{K_{k,1/2}}}\\ & = \Norm{ \delta \mathbf{A}^{\top}\mathbf{A} }^{1/2}_{K_{k,1/2}} \pm \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{k,1/2}}} \\ & = \sqrt{\delta} \Norm{\mathbf{A}}_{K_{k,1}} \pm \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{k,1/2}}} \end{split} \end{equation} where the second equality follows from triangle and reverse triangle inequality from Fact \ref{fact:tri} and the third follows from linearity of expectation. \ainesh{add a proof of symm.} Observe, we get the rate of decay we desire for $\|\mathbf{A}\|_{K_{k,1}}$ and it suffices to bound the additive term. 
Unfortunately, $K_{k,1/2}$ is not a norm. Nevertheless, using symmetrization, we can bound the expectation by introducing Rademacher random variables $\eta_i$ and observing that $\eta_i(\delta_i - \delta)$ has the same distribution as $\eta_i \delta_i$ and thus \begin{equation*} \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{k,1/2}}} \leq 4 \expecf{}{\expecf{\eta}{\Norm{ \sum_{i \in [n]} \eta_i \delta_i\mathbf{A}_{i} \otimes \mathbf{A}_{i}}_{K_{k,1/2}}}^{1/2}} \end{equation*} Here, we apply Theorem \ref{thm:non_commutative_khintchine} with $\mathbf{A}_{(i)} = \mathbf{A}_i \otimes \mathbf{A}_i$ and $\Delta = \Theta(\sqrt{\log k})$, to obtain that with probability at least $1-1/\text{poly}(k)$, for all $\ell \in [n]$, \begin{equation} \left|\lambda_{\ell}\left( \sum_{i\in[t]} \eta_i \mathbf{A}_{i}\otimes \mathbf{A}_{i} \right)\right| \leq \log(k)^{1/2} \left|\lambda_{\ell}\left( (\mathbf{A} \mathbf{A}^T)^{1/2} \right)\right| \end{equation} Observe, $\mathbf{A}^T \mathbf{A} = \mathbf{U} \mathbf{\Sigma}^2 \mathbf{U}^T$, where $\mathbf{\Sigma}$ is the diagonal matrix of singular values of $\mathbf{A}$, and therefore $(\mathbf{A}^T \mathbf{A})^{1/2}$ has the same singular values as $\mathbf{A}$. 
Then, for a fixing of $Q$, \begin{equation} \label{eqn:khintchine} \begin{split} \expecf{\eta}{\Norm{ \sum_{i \in [n]} \eta_i\delta_i \mathbf{A}_{i} \otimes \mathbf{A}_{i}}_{K_{k,1/2}}} & \leq O\left(\log(k)^{1/2}\right) \Norm{\left( \sum_{i \in [n]} \delta_i (\mathbf{A}_{i} \otimes \mathbf{A}_{i})^{\top} \mathbf{A}_{i} \otimes \mathbf{A}_{i} \right)^{1/2} }_{K_{k,1/2}} \\ & \leq O\left(\log(k)^{1/2}\right) \Norm{\Norm{\mathbf{A}_{\mid Q}}_{1\to 2} \left( \mathbf{A}^T_{\mid Q} \mathbf{A}_{\mid Q} \right)^{1/2} }_{K_{k,1/2}} \\ & \leq O\left(\log(k)^{1/2}\right) \Norm{\mathbf{A}_{\mid Q}}_{1\to 2} \cdot \Norm{\mathbf{A}_{\mid Q} }_{K_{k,1/2}} \\ & \leq O\left(k\log(k)^{1/2}\right) \Norm{\mathbf{A}_{\mid Q}}_{1\to 2} \cdot \Norm{\mathbf{A}_{\mid Q} }_{K_{k,1}} \end{split} \end{equation} where we incur a $k$ factor by relating the $K_{k,1/2}$-norm to the $K_{k,1}$-norm. Plugging this back into Equation \eqref{eqn:decay_statement}, we get \begin{equation} \label{eqn:second_ky_fan} \begin{split} \expecf{}{\Norm{ \mathbf{A}_{\mid Q} }_{K_{k,1}}} & = \sqrt{\delta} \Norm{\mathbf{A}}_{K_{k,1}} \pm O\left(k^{1/2}\log(k)^{1/4}\right) \expecf{}{\Norm{\mathbf{A}_{\mid Q}}_{1\to 2} \cdot \Norm{\mathbf{A}_{\mid Q} }_{K_{k,1}} }^{1/2} \\ & = \sqrt{\delta} \Norm{\mathbf{A}}_{K_{k,1}} \pm O\left(k^{1/2}\log(k)^{1/4}\right) \expecf{}{\Norm{\mathbf{A}_{\mid Q}}_{1\to 2}}^{1/2} \expecf{}{ \Norm{\mathbf{A}_{\mid Q} }_{K_{(k,1)}}}^{1/2} \end{split} \end{equation} where we used Cauchy-Schwarz. Observe, $x \leq b+ a\sqrt{x}$ implies \[ x \leq \frac{2b + a^2 +\sqrt{a^2 (a^2 + 4 b)} }{2} = b + \frac{ (1+ \sqrt{1+ 4b/a^2}) a^2 }{2} \] and similarly, $x \geq b - \frac{ (1+ \sqrt{1+ 4b/a^2}) a^2 }{2}$ and thus from \eqref{eqn:second_ky_fan}, \begin{equation} \expecf{}{\Norm{\mathbf{A}_{\mid Q} }_{K_{k,1}} } = 2\sqrt{\delta} \Norm{\mathbf{A}}_{K_{k,1}} \pm O\left(k\log(k)^{1/2}\right) \expecf{}{\Norm{\mathbf{A}_{\mid Q} }_{1\to 2}} \end{equation} which completes the proof for $r= 1$. 
Next, we bound the moments of the norm of a random submatrix, for $r\geq 2$. Then, using Fact \ref{fact:tri}, \begin{equation} \label{eqn:decay_statement} \begin{split} \expecf{r}{\Norm{\mathbf{A}_{\mid Q}}_{K_{k,1}}}^{2} & = \expecf{}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i} + \delta \mathbf{A}^{\top}\mathbf{A} }^{r/2}_{K_{k,1/2}}}^{2/r} \\ & = \expecf{r/2}{ 2 \Norm{ \delta \mathbf{A}^{\top}\mathbf{A} }_{K_{k,1/2}} \pm 2 \Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i} }_{K_{k,1/2}}}\\ & = 2\Norm{ \delta \mathbf{A}^{\top}\mathbf{A} }_{K_{k,1/2}} \pm 2 \expecf{r/2}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i}}_{K_{k,1/2}}} \\ & = 2\delta \Norm{\mathbf{A}}^2_{K_{k,1}} \pm 2\expecf{r}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{k,1/2}}}^{2} \end{split} \end{equation} Using symmetrization, we can bound the expectation by introducing Rademacher random variables $\eta_i$ and observing that $\eta_i(\delta_i - \delta)$ has the same distribution as $\eta_i \delta_i$ and thus \begin{equation*} \expecf{r}{\Norm{ \sum_{i \in [n]} (\delta_i - \delta) \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{k,1/2}}}^{2} \leq 4 \expecf{r}{\expecf{\eta}{\Norm{ \sum_{i \in [n]} \eta_i \delta_i\mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{k,1/2}}}}^{2} \end{equation*} Here, we apply Theorem \ref{thm:non_commutative_khintchine} with $\mathbf{A}_{(i)} = \mathbf{A}_i \otimes \mathbf{A}_i$ and $\Delta = \Theta(\sqrt{\log k})$, to obtain that for any $p \in (0,2]$, with probability at least $1-1/\text{poly}(k)$, for all $\ell \in [n]$, \begin{equation} \left|\lambda_{\ell}\left( \sum_{i\in[t]} \eta_i \mathbf{A}_{i}\otimes \mathbf{A}_{i} \right)\right|^{p} \leq \log(k)^{p/2} \left|\lambda_{\ell}\left( (\mathbf{A} \mathbf{A}^T)^{1/2} \right)\right|^{p} \end{equation} Observe, $\mathbf{A}^T \mathbf{A} = \mathbf{U} \mathbf{\Sigma}^2 \mathbf{U}^T$, where 
$\mathbf{\Sigma}$ is the diagonal matrix of singular values of $\mathbf{A}$ and therefore $(\mathbf{A}^T \mathbf{A})^{1/2}$ has the same singular values as $\mathbf{A}$. Then, for a fixing of $Q$, setting $p$ above to $1/2$, \begin{equation} \label{eqn:khintchine} \begin{split} \expecf{\eta}{\Norm{ \sum_{i \in [n]} \eta_i\delta_i \mathbf{A}_{i} \otimes \mathbf{A}_{i}}^{1/2}_{K_{k,1/2}}} & \leq O\left(\log(k)^{1/4}\right) \Norm{\left( \sum_{i \in [n]} \delta_i (\mathbf{A}_{i} \otimes \mathbf{A}_{i})^{\top} \mathbf{A}_{i} \otimes \mathbf{A}_{i} \right)^{1/2} }^{1/2}_{K_{k,1/2}} \\ & \leq O\left(\log(k)^{1/4}\right) \Norm{\Norm{\mathbf{A}_{\mid Q}}_{1\to 2} \left( \mathbf{A}^T_{\mid Q} \mathbf{A}_{\mid Q} \right)^{1/2} }^{1/2}_{K_{k,1/2}} \\ & \leq O\left(\log(k)^{1/4}\right) \Norm{\mathbf{A}_{\mid Q}}^{1/2}_{1\to 2} \cdot \Norm{\mathbf{A}_{\mid Q} }^{1/2}_{K_{k,1/2}} \\ & \leq O\left(\sqrt{k} \log(k)^{1/4}\right) \Norm{\mathbf{A}_{\mid Q}}^{1/2}_{1\to 2} \cdot \Norm{\mathbf{A}_{\mid Q} }^{1/2}_{K_{k,1}} \end{split} \end{equation} where we incur a $k$ factor by relating the $(1/2)$-norm to the $1$-norm. Plugging this back into Equation \eqref{eqn:decay_statement}, we get \begin{equation} \label{eqn:second_ky_fan} \begin{split} \expecf{r}{\Norm{ \mathbf{A}_{\mid Q} }_{K_{k,1}}}^{2} & = 2 \delta \Norm{\mathbf{A}}^2_{K_{k,1}} \pm O\left(k\log(k)^{1/2}\right) \expecf{}{\Norm{\mathbf{A}_{\mid Q}}^{r/2}_{1\to 2} \cdot \Norm{\mathbf{A}_{\mid Q} }^{r/2}_{K_{(k,1)}} }^{2/r} \\ & = 2 \delta \Norm{\mathbf{A}}^2_{K_{k,1}} \pm O\left(k\log(k)^{1/2}\right) \expecf{}{\Norm{\mathbf{A}_{\mid Q}}^{r}_{1\to 2} \cdot \Norm{\mathbf{A}_{\mid Q} }^{r}_{K_{(k,1)}} }^{1/r} \\ & = 2\delta \Norm{\mathbf{A}}^2_{K_{k,1}} \pm O\left(k\log(k)^{1/2}\right) \expecf{r}{\Norm{\mathbf{A}_{\mid Q}}_{1\to 2}} \expecf{r}{ \Norm{\mathbf{A}_{\mid Q} }_{K_{(k,1)}}} \end{split} \end{equation} where we used Jensen's inequality in the first equality and Cauchy-Schwarz in the second. 
Observe, $x^2 \leq a x + b$ implies $x \leq \frac{a + \sqrt{a^2 + 4b}}{2} \leq a + \sqrt{b}$ and similarly, $x \geq -a + \sqrt{b}$ and thus from \eqref{eqn:second_ky_fan}, \begin{equation} \expecf{r}{\Norm{\mathbf{A}_{\mid Q} }_{K_{k,1}} } = \sqrt{2\delta} \Norm{\mathbf{A}}_{K_{k,1}} \pm O\left(k\log(k)^{1/2}\right) \expecf{r}{\Norm{\mathbf{A}_{\mid Q} }_{1\to 2}} \end{equation} which completes the proof. \end{proof} \subsection{Testing Norms in the Bounded Entry Model} In this subsection, we describe how to obtain estimators for matrix norms from the general decay theorems we proved above. Algorithmically, we simply sample $q$ rows and columns of our matrix and output the corresponding norm of the sub-matrix as our estimate. \ainesh{Instead of using the main theorem, we should use equation 36 to bound the expectation, and use the main theorem to bound the variance} \section{A One-Sided Tester for Positive Semi-Definiteness with $\ell_2$ Gap} Let $\mathbf{A} \in \R^{n \times n}$ be symmetric with eigenvalues $\lambda_{\max} = \lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n = \lambda_{\min}.$ Let $t \in \{0,1,\dots,n\}$ be such that $\lambda_i < 0$ for all $i \geq t$. In this section, we consider the $\ell_2$ gap. Namely, we are promised that $\|\mathbf{A}\|_\infty \leq 1$, and that one of the two following cases holds: \begin{enumerate} \item $\mathbf{A}$ is PSD. \item $\mathbf{A}$ is $\epsilon$-far from PSD, meaning that $\frac{\sum_{i\geq t} \lambda_i^2}{n^2} = \epsilon$. \end{enumerate} Our goal is to distinguish between the two cases. Our algorithm will query a principal submatrix $\mathbf{A}_{S \times S}$ and return PSD if $\mathbf{A}_{S \times S}$ is PSD, and otherwise it will return not PSD. Since all principal submatrices of PSD matrices are PSD, we only need to show that if $\mathbf{A}$ is $\epsilon$-far from PSD, then we can find a non-PSD principal submatrix $\mathbf{A}_{S \times S}$ with small $|S|$, and hence small query cost $|S|^2$. 
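The sampling procedure just described admits a short sketch. The following Python code is our own illustration (the function name and parameters are hypothetical, not from the formal algorithm): it samples a random principal submatrix and rejects only upon exhibiting a negative eigenvalue, so a PSD input is never rejected.

```python
import numpy as np

def one_sided_psd_test(A, k, trials=10, seed=None, tol=1e-9):
    """One-sided PSD tester: sample principal submatrices A_{S x S} of
    expected size k, and return False only if some sampled submatrix has a
    negative eigenvalue -- a certificate of non-PSD-ness.  Since every
    principal submatrix of a PSD matrix is PSD, a PSD input always passes."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    for _ in range(trials):
        S = np.flatnonzero(rng.random(n) < k / n)  # keep each index w.p. k/n
        if S.size == 0:
            continue
        # eigvalsh is appropriate since A (and hence A_{S x S}) is symmetric.
        if np.linalg.eigvalsh(A[np.ix_(S, S)]).min() < -tol:
            return False
    return True
```

With $k = n$ the test is exact; the point of the analysis below is to bound how small $k$ can be taken when $\mathbf{A}$ is $\epsilon$-far from PSD.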
Thus, in the following, we can focus on the case where $\mathbf{A}$ is $\epsilon$-far from PSD. For notation, let $\alpha \in (\epsilon,1]$ be such that $\|\mathbf{A}\|_F^2 = \alpha n^2$. Note that $\alpha \geq \epsilon$ because we have $\sum_{i} \lambda_i^2 \geq \epsilon n^2$ by assumption. We can write $\mathbf{A} = \sum_{i=1}^n \lambda_i v_i v_i^T$ where $v_1,\dots,v_n \in \R^n$ are the orthonormal eigenvectors for $\mathbf{A}$, and then write $\mathbf{A}$ via a canonical decomposition $\mathbf{A} = \mathbf{A}^+ + \mathbf{A}^-$ where $\mathbf{A}^+ = \sum_{i< t} \lambda_i v_i v_i^T$ and $\mathbf{A}^- = \sum_{i\geq t} \lambda_i v_i v_i^T$. Note by assumption, $\|\mathbf{A}^-\|_F^2 = \epsilon n^2$, and by the Pythagorean theorem $\|\mathbf{A}\|_F^2 = \|\mathbf{A}^+\|_F^2 + \|\mathbf{A}^-\|_F^2$. \begin{lemma}\label{lem:samplefrob} Let $S \subset [n]$ be a random subset of expected size $k = O(\delta^{-1}(\frac{ \sqrt{c_1} }{\epsilon} + \frac{c_0}{\epsilon^2}))$. Fix a symmetric matrix $\mathbf{A} \in \R^{n \times n}$ with $\sum_i \|\mathbf{A}_{i,*}\|_2^4 \leq c_0 n^3$ and $\|\mathbf{A}\|_4^4 \leq c_1 n^2$ for some values $c_0,c_1$. Then with probability $1-\delta$, we have $\frac{n^2}{k^2}\|\mathbf{A}_{S \times S}\|_F^2 = \|\mathbf{A}\|_F^2 \pm \epsilon n^2 $. \end{lemma} \begin{proof} Let $\delta_i$ indicate that we choose $i \in [n]$ to add to $S$. 
Then $\ex{\delta_i} = \frac{k}{n}$, and \[\ex{\sum_{i \neq j} \delta_i \delta_j \mathbf{A}_{i,j}^2 } = \frac{k^2}{n^2} \sum_{i \neq j} \mathbf{A}_{i,j}^2 \] Moreover, \begin{equation} \begin{split} \textbf{Var}\left( \sum_{i \neq j} \delta_i \delta_j \mathbf{A}_{i,j}^2 \right) &\leq \frac{k^4}{n^4} \sum_{4 = |\{i,j,s,r\}|} \mathbf{A}_{i,j}^2 \mathbf{A}_{s,r}^2 + \frac{k^3}{n^3} \sum_{3 = |\{i,j,s,r\}|} \mathbf{A}_{i,j}^2 \mathbf{A}_{s,r}^2 + \frac{k^2}{n^2} \sum_{2 = |\{i,j,s,r\}|} \mathbf{A}_{i,j}^4\\ &\quad - \frac{k^4}{n^4} \left(\sum_{i \neq j} \mathbf{A}_{i,j}^2\right)^2 \\ & \leq \frac{k^3}{n^3} \sum_{3 = |\{i,j,s,r\}|} \mathbf{A}_{i,j}^2 \mathbf{A}_{s,r}^2 + \frac{k^2}{n^2} \|\mathbf{A}\|_4^4\\ & \leq \frac{k^3}{n^3}\sum_{i} 6\mathcal{R}_i^2 + \frac{k^2}{n^2} \|\mathbf{A}\|_4^4\\ \end{split} \end{equation} where $\mathcal{R}_i = \sum_{j} \mathbf{A}_{i,j}^2 = \mathcal{C}_i = \sum_j \mathbf{A}_{j,i}^2$. Thus $\frac{k^3}{n^3}\sum_{i} 6\mathcal{R}_i^2 \leq \frac{k^3}{n^3} \cdot 6 c_0 n^3 = 6 c_0 k^3$. Moreover, note that $\|\mathbf{A}\|_4^4 \leq n^2 c_1$. Thus the variance is bounded by $6 c_0 k^3 + k^2 c_1$. So \[ \pr{ \left| \sum_{i \neq j} \delta_i \delta_j \mathbf{A}_{i,j}^2 - \frac{k^2}{n^2} \sum_{i \neq j} \mathbf{A}_{i,j}^2 \right| > \epsilon k^2 } \leq \frac{6 c_0 k^3 + k^2 c_1}{\epsilon^2 k^4} \] Finally, note that the diagonal can affect $\|\mathbf{A}_{S \times S}\|_F^2$ by at most $c_0 k < \epsilon k^2$, thus we can add this additional $\epsilon k^2$ term into the overall error. \end{proof} \begin{lemma}\label{lem:eigpreserve} Fix any $\epsilon >0$ and $x \in \R^n$ such that $|x^T \mathbf{A} x| > \epsilon_0 n$ for some $\epsilon_0 > \epsilon$. Let $S \subset [n]$ be a random subset of size $|S| =O(\delta^{-1}/\epsilon^2)$. Let $Z = \mathbf{A}_{S \times S}$. Then \[ \bpr{ \left| \frac{n}{k}\|x_S\|_2^{-2} x^T_SZx_S - x^T \mathbf{A} x \right| > \epsilon n } < \delta \] \end{lemma} \begin{proof} First note that $\|x_S\|_2^2 = (1 \pm \epsilon)\frac{k}{n}$. 
The remaining results follow from the first section. \end{proof} \subsection{A $\text{poly}(\log(n)/\epsilon)$-query algorithm} In the following, let $\| \mathbf{A} \|$ denote the standard operator norm of a matrix $\mathbf{A}$, and let $\|\mathbf{A} \|_p = \left( \sum_{i,j} |\mathbf{A}_{i,j}|^p \right)^{1/p}$ denote the $p$-norm of the vectorization of $\mathbf{A}$. Let $\|\mathbf{A}\|_{1 \to 2}$ denote the maximum $\ell_2$ norm of a column of $\mathbf{A}$. Finally, for a random variable $X$ and $p \geq 1$, let $\mathbb{E}_p X = \left(\ex{ X^p}\right)^{1/p}$. \begin{theorem}[Theorem 1 \cite{tropp2008norms}]\label{thm:tropp} Let $\mathbf{A} \in \R^{n \times n}$ be Hermitian, decomposed into diagonal and off-diagonal parts $\mathbf{A} = \mathbf{D} + \mathbf{H} $. Fix $p \geq 2$ and set $q = \max\{ p , 2\log(n)\}$. Then \[ \mathbb{E}_p \|\mathbf{R} \mathbf{A} \mathbf{R} \| \leq C \left[q \mathbb{E}_p \|\mathbf{R} \mathbf{H} \mathbf{R} \|_\infty + \sqrt{\delta q} \mathbb{E}_p \|\mathbf{H} \mathbf{R} \|_{1 \to 2} + \delta \|\mathbf{H} \| \right] + \mathbb{E}_p \|\mathbf{R} \mathbf{D} \mathbf{R} \| \] where $\mathbf{R} $ is a diagonal matrix with diagonal entry $\mathbf{R} _{i,i} = \delta_i$ where $\delta_i$ is Bernoulli with mean $\delta$. \end{theorem} Recall that $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n$ are the eigenvalues of $\mathbf{A}$, and $v_i$ is the $i$-th eigenvector of $\mathbf{A}$, so that $\mathbf{A} v_i = \lambda_i v_i$. Note that we can assume that $\lambda_i \geq - \epsilon^2 n / 1000$ for all $i$, otherwise we have a $\Theta(1/\epsilon^4)$-query algorithm from the first section. \begin{proposition} Fix a Hermitian $\mathbf{A} \in \R^{n \times n}$ with $\|\mathbf{A}\|_\infty \leq 1$. Let $v \in \R^n$ be a unit eigenvector of $\mathbf{A}$ so that $\mathbf{A} v = \lambda v$. 
Then we have the following: \begin{enumerate} \item[(a)] $\lambda \leq \|v\|_1^2$ \item[(b)] $\|v\|_\infty \leq \frac{\sqrt{n}}{|\lambda|}$ \item[(c)] $\|v\|_4^4 \leq\frac{n}{|\lambda|^2}$ \item[(d)] $\|\lambda v v^T\|_4 \leq \sqrt{n}$ \item[(e)] $\|(\lambda v v^T)_{i,*}\|_2 \leq \sqrt{n}$ for any $i \in [n]$. \end{enumerate} \end{proposition} \begin{proof} For $(a)$, we have $\lambda = v^T \mathbf{A} v \leq \sum_{i,j} |v_i| |v_j| = \|v\|_1^2$. For $(b)$, for any $j$ we have $|v_j| = |\lambda^{-1} \langle \mathbf{A}_{j,*}, v \rangle |\leq |\lambda^{-1}| \|\mathbf{A}_{j,*}\|_2 \leq \frac{\sqrt{n}}{|\lambda|}$. Using $(b)$ and the fact that $\|v\|_2 =1$, we know that $\|v\|_4^4$ is maximized by having $\frac{|\lambda|^2}{n}$ coordinates each equal to $\frac{\sqrt{n}}{|\lambda|}$, which gives $\|v\|_4^4 \leq \frac{n}{|\lambda|^2}$, demonstrating $(c)$. For $(d)$, we have $\|\lambda v v^T\|_4^4 = \sum_{i,j} \lambda^4 v_i^4 v_j^4 = \lambda^4 \|v\|_4^8 \leq \lambda^4 \frac{n^2}{|\lambda|^4} \leq n^2$, this time using $(c)$. Finally, for $(e)$, we have $\|(\lambda v v^T)_{i,*}\|_2^2 = \sum_j \lambda^2 v_i^2 v_j^2 = \lambda^2 v_i^2 \leq n$ using $(b)$. \end{proof} Now decompose $\mathbf{A} = \mathbf{B} + \mathbf{L}$ where $\mathbf{B} = \sum_{i : \lambda_i > C \epsilon^2 n} \lambda_i v_i v_i^T$, for some constant $C$ that we will later choose. Notice that $|\{i : \lambda_i > C \epsilon^2 n\}| \leq \frac{1}{(C \epsilon^2)^2}$, since we have $\sum_i \lambda_i^2 \leq n^2$. Note that $|\mathbf{B}_{p,q}| = |\sum_{i : \lambda_i > C \epsilon^2 n} \lambda_i (v_{i})_p(v_{i})_q| \leq \sum_{i : \lambda_i > C \epsilon^2 n}\frac{n}{\lambda_i} \leq \frac{1}{(C \epsilon^2)^3 }$, thus $\|\mathbf{B}\|_\infty \leq \frac{1}{(C\epsilon^2)^3}$. Since $\|\mathbf{A}\|_\infty \leq 1$, we also have $\|\mathbf{L}\|_\infty \leq \frac{1}{(C\epsilon^2)^3} + 1$. 
Moreover, by the prior proposition and the triangle inequality, we have \[\|\mathbf{B}\|_4 \leq \sum_{i : \lambda_i > C \epsilon^2 n} \|\lambda_i v_{i}v_{i}^T \|_4 \leq \frac{\sqrt{n}}{(C \epsilon^2)^2} \] so that $\|\mathbf{B}\|_4^4 \leq \frac{n^2}{(C \epsilon^2)^8}$. Moreover, we have $$\|\mathbf{B}_{j,*}\|_2 \leq \sum_{i : \lambda_i > C \epsilon^2 n} \|\lambda_i (v_i v_i^T)_{j,*}\|_2 \leq \frac{\sqrt{n}}{(C \epsilon^2)^2}$$ So $\sum_j \|\mathbf{B}_{j,*}\|_2^4 \leq \frac{n^3}{(C \epsilon^2)^8}$. Thus, we can apply Lemma \ref{lem:samplefrob} with $c_0 = c_1 = \frac{1}{(C \epsilon^2)^8}$ on the matrix $\mathbf{B}$. Thus setting $k = \Theta(\frac{1}{\epsilon^{18}})$ yields the following: \rajesh{The bound on $\sum_j \|\mathbf{B}_{j,*}\|_2^4 $ can be improved to $n^3$ using that $B,L$ have ortho columns and rows, and must sum to vectors with norm $\sqrt{n}$.} \rajesh{Maybe apply this argument to preserve norm of $L$, then use inverse Lidskii's?} \begin{lemma} Fix any constant $\delta > 0$, and suppose $\mathbf{A}$ is $\epsilon$-far from PSD. Decompose $\mathbf{A} = \mathbf{B} + \mathbf{L}$ as above, and let $S \subset [n]$ be a random subset of expected size $k = \Theta(\frac{1}{\epsilon^{18} })$. Then with probability $1-\delta$, we have simultaneously: \begin{enumerate} \item $\|\mathbf{A}_{S \times S}\|_F^2 = \alpha k^2 \pm \epsilon k^2 / 1000$ \item $\|\mathbf{B}_{S \times S}\|_F^2 \leq (\alpha - \epsilon) k^2 + \epsilon k^2 / 1000$. \end{enumerate} \end{lemma} \noindent We will also need the following proposition. \begin{proposition}[PSD matrices are top heavy]\label{prop:topheavy} Let $\mathbf{D} \in \R^{k \times k}$ be such that $\|\mathbf{D}\|_\infty \leq 1$, and let $\lambda_1(\mathbf{D}) \geq \lambda_2(\mathbf{D}) \geq \dots \geq \lambda_k(\mathbf{D})$ be its eigenvalues. Then if $\mathbf{D}$ is PSD, we have \[\sum_{i \leq t}\lambda_i(\mathbf{D})^2 \geq \|\mathbf{D}\|_F^2 - t^{-1} k^2\] for any $t \in [k]$. 
\end{proposition} \begin{proof} We first show that $\lambda_t(\mathbf{D}) \leq t^{-1} k$. To see this, suppose $\lambda_{t}(\mathbf{D}) > t^{-1} k$. Then because $\mathbf{D}$ is PSD, we would have $\sum_i \lambda_i(\mathbf{D}) > t \cdot t^{-1} k = k$, which contradicts the fact that $\sum_{i} \lambda_i(\mathbf{D}) = \text{Tr}(\mathbf{D}) \leq k$, the latter following from the bound $\|\mathbf{D}\|_\infty \leq 1$. Thus, $\lambda_{t}(\mathbf{D}) \leq t^{-1} k$. Using this and the bound $\sum_{i > t} \lambda_i(\mathbf{D}) \leq \text{Tr}(\mathbf{D}) \leq k$, it follows that the quantity $\sum_{i > t} \lambda_i(\mathbf{D})^2$ is maximized by having $t$ eigenvalues equal to $t^{-1} k$, yielding $\sum_{i > t} \lambda_i(\mathbf{D})^2 \leq t^{-1} k^2 $. Thus, if it were also the case that $\sum_{i \leq t}\lambda_i(\mathbf{D})^2 < \|\mathbf{D}\|_F^2 - t^{-1} k^2$, then we would have $\sum_i \lambda_i(\mathbf{D})^2 < \|\mathbf{D}\|_F^2 - t^{-1} k^2 + t^{-1} k^2 = \|\mathbf{D}\|_F^2$, which is a contradiction because $\sum_i \lambda_i(\mathbf{D})^2 = \|\mathbf{D}\|_F^2$; this completes the proof. \end{proof} \begin{proposition}\label{prop:nottopheavy} Let $\mathbf{D} \in \R^{k \times k}$ be such that $\|\mathbf{D}\|_\infty \leq 1$, and let $\lambda_1(\mathbf{D}) \geq \lambda_2(\mathbf{D}) \geq \dots \geq \lambda_k(\mathbf{D})$ be its eigenvalues. Let $\sigma:[k] \to [k]$ denote the re-ordering of the eigenvalues by magnitude, so that $|\lambda_{\sigma(1)} | \geq |\lambda_{\sigma(2)}| \geq \dots \geq |\lambda_{\sigma(k)}|$. Fix any $t \geq 1$. Suppose $\mathbf{D}$ is at least $(1/t)$-far in $L_2$ from PSD, so that $\sum_{i : \lambda_i(\mathbf{D}) < 0} \lambda_i^2(\mathbf{D}) \geq k^2/t$, and suppose further that $\lambda_k(\mathbf{D}) > - \frac{k}{2t}$. Then we have \[\sum_{i \leq t}\lambda_{\sigma(i)}(\mathbf{D})^2 < \|\mathbf{D}\|_F^2 - t^{-1} k^2/2\] \end{proposition} \begin{proof} Let $W \subseteq [k]$ be the set of values $i \in [k]$ such that $\lambda_{\sigma(i)} < 0$. 
By assumption, $\sum_{i \in W} \lambda_{\sigma(i)}^2 \geq k^2/t$, and moreover $\max_{ i \in W} |\lambda_{\sigma(i)}| \leq \frac{k}{2t}$. Now note that \[\sum_{i \in W} \lambda_{\sigma(i)}^2 = \sum_{i \in W, i \leq t} \lambda_{\sigma(i)}^2 + \sum_{i \in W, i > t} \lambda_{\sigma(i)}^2\] By the $\ell_\infty$ bound on the negative eigenvalues, we have that $ \sum_{i \in W, i \leq t} \lambda_{\sigma(i)}^2 \leq t ( \frac{k}{2t})^2 = k^2 / 4t$. Thus we must have $\sum_{i \in W, i > t} \lambda_{\sigma(i)}^2 \geq (3/4)k^2/t$, giving \begin{equation} \begin{split} \sum_{i > t} \lambda_{\sigma(i)}^2 & \geq \sum_{i \in W, i > t} \lambda_{\sigma(i)}^2 \\ & \geq (3/4)k^2/t \\ \end{split} \end{equation} Since $\|\mathbf{D}\|_F^2 = \sum_{i \leq t} \lambda_{\sigma(i)}^2 + \sum_{i > t} \lambda_{\sigma(i)}^2$, it follows that $ \sum_{i \leq t} \lambda_{\sigma(i)}^2 \leq \|\mathbf{D}\|_F^2 -(3/4)k^2/t$, as needed. \end{proof} \begin{theorem} Let $\mathbf{A} \in \R^{n \times n}$ be a Hermitian matrix with $\|\mathbf{A}\|_\infty \leq 1$ that is $\epsilon$-far from PSD in $L_2$. Then if $S \subset [n]$ is a random subset with expected size $k = \Theta(\frac{\log(n)}{\epsilon^{18} })$, we have that $\mathbf{A}_{S \times S}$ is not PSD with probability at least $2/3$. \end{theorem} \begin{proof} First condition on the prior Lemma holding with probability $1-\delta$ for a sufficiently small constant $\delta$. We now claim that $\|\mathbf{L}_{S \times S}\| \leq \epsilon^2 k/1000$, after setting the constant $C>0$ to be sufficiently small, with probability at least $99/100$. This will follow directly from Theorem \ref{thm:tropp}, setting $\delta = \frac{k}{n}$ with $p=2$. To see this, note that the first term can be bounded by $O(\log(n) \|\mathbf{L}\|_\infty) =O( \log(n) (\|\mathbf{B}\|_\infty +1) ) = O(\frac{\log n}{ \epsilon^6})$. The second term can be bounded by $O(\frac{k}{\sqrt{n}} \sqrt{\log(n)} \|\mathbf{L}\|_\infty ) \leq O(\frac{k}{\sqrt{n}\epsilon^6} \sqrt{\log(n)})$. 
The third term can be bounded by $\frac{k}{n}\|\mathbf{L}\| \leq C \epsilon^2 k $. Since $\mathbf{R} \mathbf{D} \mathbf{R}$ is a diagonal matrix, the final term can be bounded by $O( \|\mathbf{L}\|_\infty) = O( \frac{1}{\epsilon^6})$. Altogether, we have \[ \mathbb{E}_2 \|\mathbf{L}_{S \times S}\| \lesssim \frac{\log(n) }{\epsilon^6} + \frac{k \sqrt{\log(n)} }{n^{1/2}\epsilon^6} + C\epsilon^2 k \] Note that if $k \geq n$ then we are reading the whole matrix anyway, so we may assume $k \leq n$, in which case $k/\sqrt{n} \leq \sqrt{k}$. Setting $k \gtrsim C^{-2} \frac{\log(n)}{\epsilon^{16}}$ large enough, the final term dominates up to any arbitrarily large constant factor, and by Markov's inequality we obtain $\|\mathbf{L}_{S \times S}\| \leq \epsilon^2 k/1000$ with probability $99/100$ after taking $C$ small enough. By the min-max principle, we now have $\lambda_i(\mathbf{A}_{S \times S}) = \lambda_i(\mathbf{B}_{S \times S}) \pm \|\mathbf{L}_{S \times S}\| = \lambda_i(\mathbf{B}_{S \times S}) \pm \epsilon^2 k/1000$ for every $i \in [|S|]$. 
Now set $t= 100/\epsilon$, and observe: \begin{equation} \begin{split} \sum_{i \leq t} \lambda_i(\mathbf{A}_{S \times S})^2 &\leq \sum_{i \leq t} (\lambda_i(\mathbf{B}_{S \times S}) + \epsilon^2 k /1000)^2 \\ &\leq \|\mathbf{B}_{S \times S}\|_F^2 + t \epsilon^2 k^2/1000 + t \epsilon^4 k^2 / 1000^2\\ &\leq \|\mathbf{B}_{S \times S}\|_F^2 + \epsilon k^2/5 \\ \end{split} \end{equation} Now, conditioning on the prior Lemma, we have that $\|\mathbf{B}_{S \times S}\|_F^2 \leq (\alpha - \epsilon)k^2 + \epsilon k^2 / 1000$ and $\|\mathbf{A}_{S \times S}\|_F^2 \geq \alpha k^2 - \epsilon k^2/1000$, thus $\|\mathbf{B}_{S \times S}\|_F^2 \leq \|\mathbf{A}_{S \times S}\|_F^2 - \epsilon k^2(1 -1/500)$, which yields: \begin{equation}\label{eqn:topbound} \begin{split} \sum_{i \leq t} \lambda_i(\mathbf{A}_{S \times S})^2 &\leq \|\mathbf{B}_{S \times S}\|_F^2 + \epsilon k^2 / 5 \\ &< \|\mathbf{A}_{S \times S}\|_F^2 - \frac{3}{4}\epsilon k^2 \\ \end{split} \end{equation} But we can now apply Proposition \ref{prop:topheavy} with the same $t = 100/\epsilon$ as above, using the fact that $|S| < 2k$ with probability at least $99/100$, which implies that if $\mathbf{A}_{S \times S}$ were PSD then we would have $\sum_{i \leq t} \lambda_i(\mathbf{A}_{S \times S})^2 \geq \|\mathbf{A}_{S \times S}\|_F^2 - \epsilon k^2 / 25$. But this contradicts the upper bound of Equation \ref{eqn:topbound}, which completes the proof of the Theorem after union bounding over all conditioned-upon events, each of which held with large constant probability. \end{proof} \begin{theorem} Fix a Hermitian matrix $\mathbf{A} \in \R^{n \times n}$ with $\|\mathbf{A}\|_\infty \leq 1$, and suppose we are given the promise that either $(1)$ $\mathbf{A}$ is PSD or $(2)$ $\mathbf{A}$ is $\epsilon$-far in $L_2$ from PSD, meaning $\sum_{i : \lambda_i(\mathbf{A}) < 0} \lambda_i(\mathbf{A})^2 \geq \epsilon n^2$. 
Then there is an algorithm which makes at most $O(\frac{\log^2(n)}{\epsilon^{36}})$ queries to $\mathbf{A}$ and determines with probability at least $3/4$ whether $\mathbf{A}$ is PSD or $\epsilon$-far in $L_2$ from PSD. \end{theorem} \begin{proof} This follows from the last theorem, since the query complexity of reading $\mathbf{A}_{S \times S}$ is $k^2$. \end{proof} \begin{comment} \begin{algorithm}[!h] \SetKwInOut{Input}{Input} \caption{Test PSDNess \label{alg:L2}} $k \leftarrow O(\frac{1}{\epsilon^{2.5}})$\\ $M \leftarrow O(1/\epsilon)$\\ \For{$i=1,2,\dots,M$}{Sample a random subset $S \subset [n]$ of expected size $\ex{|S|} = k$. \\ Query $Z^i = A_{S \times S}$. \\ \If{$Z^i$ is Not PSD}{ \Return{Not PSD} } } \If{If no non-PSD matrix was found}{ \Return{PSD}} \end{algorithm} Let $Z^+,Z^-$ be the restriction of $A^+,A^-$ to the principle submatrix $S \times S$. Then we can write $Z^+ = \sum_{i> t} \lambda_i (v_i)_S (v_i)^T_S$, $Z^- = \sum_{i\leq t} \lambda_i (v_i)_S (v_i)^T_S$. Therefore $Z^+$ is PSD and $Z^-$ is NSD. We first prove the following anti-concentration Lemma for the Frobenius mass in $A^-$. \begin{lemma} Suppose $x = y + z \in \R^{n}$, where $\|x\|_\infty \leq 1$ and $\langle y,z \rangle = 0$. Then $\|y\|_2^2 \leq n$ and $\|z\|_2^2 \leq n$. Moreover, if we define $y' \in \R^n$ via \[ y_i' = \begin{cases} y_i & \text{ if } |y_i| < 4 \\ 0 & \text{ otherwise} \\ \end{cases} \] and similarly define $z'$, then $\|y'\|_2^2 \geq \frac{\|y\|_2^4}{n}$ and $\|z'\|_2^2 \geq \frac{\|z\|_2^4}{n}$. \end{lemma} \begin{proof} We have $0 \leq \langle y,z \rangle = \langle x-z,z \rangle = \langle x,z \rangle - \|z\|_2^2 \leq \sqrt{n} \|z\|_2- \|z\|_2^2$, thus $\|z\|_2 \leq \sqrt{n}$ as needed, and the remainder follows by symmetry. Now suppose there existed some set $S \subset [n]$ with $|S| < \|z\|_2^2/4$ with $\|z_S\|_2^2 > (1- \frac{\|z\|_2^2}{100n}) \|z\|_2^2$. 
Then $\|z_{-S}\|_2 \leq \frac{\|z\|_2^2}{10n^{1/2}}$, and we have \begin{equation} \begin{split} |z\|_2^2 &= \langle x,z \rangle \\ &= \langle x,z_{S} + z_{-S}\rangle \\ &\leq \sqrt{|S|} \|z\|_2 + \|z_{-S}\|_2 \|x\|_2 \\ & \leq \frac{1}{2}\|z\|_2^2 + \frac{\|z\|_2^2}{10} \\ & < \|z\|_2^2 \\ \end{split} \end{equation} which is a contradiction. It follows that no such set $S$ with both $|S| < \|z\|_2^2/4$ and $\|z_S\|_2^2 > (1- \frac{\|z\|_2^2}{100n}) \|z\|_2^2$ can exist. Now let $S$ be the set of $i \in [n]$ such that $z_i > 4$. Clearly $|S| \leq \|z\|_2^2 / 16$, thus by the above, it follows that $\|z'\|_2^2 = \|z_{-S}\|_2^2 \geq \frac{\|z\|_2^4}{n}$. \end{proof} \begin{corollary}\label{cor:decomp} Decompose $A = A^+ - A^-$ as above. Let $D \in \R^{n \times n}$ be the matrix defined by \[ D_{i,j} = \begin{cases} A^-_{i,j} & \text{ if } |A^-_{i,j}| < 4 \\ 0 & \text{ otherwise} \\ \end{cases} \] Then $\|D\|_2^2 \geq \epsilon^2 n^2$ \end{corollary} \begin{proof} By relationships of $p$-norms, we have $\|D\|_f^2\geq \frac{1}{n}\sum_i\|A_{i,*}^-\|_2^4 \geq \frac{1}{n^2}(\sum_i\|A_{i,*}^-\|_2^2)^2 = n^{-2} \|A^-\|_F^4 \geq \epsilon^2 n^2$. \end{proof} \begin{proposition} If $A$ is PSD, and $S \subseteq [n]$ is a random sample of size $|S| = k =O(1/\epsilon^2)$, then $\|(A_S)_{-t}\|_F^2 \leq \epsilon^2 k^2/5$ with probability $9/10$, where $t = O(1/\epsilon^2)$ and $(A_S)_{-t}$ is $A_S$ projected away from the top $t$ singular values of $A_S$. \end{proposition} \begin{proof} Let $Z_{t}$ and $Z_{-t}$ be the restrictions of $A_{t}$ and $A_{-t}$. First note that $A_{-t}$ can have no singular values larger than $\epsilon^2 n/100$, since this would imply that more than $t$ eigenvectors of $A$ are above this threshold, which would violate the fact that Tr$(A) \leq n$. It follows that $\|A_{-t}\|_F^2 \leq (\epsilon^2/100) n^2$. Thus $\ex{\|Z_{-t}\|_2^2} \leq k + \epsilon^2 k^2/100 \leq \epsilon^2 k^2 / 50$. 
By Markov's, we have $\|Z_{-t}\|_2^2 \leq \epsilon^2 k^2/5$ with probability $9/10$. Since $Z = Z_t + Z_{-t}$, and $Z_t$ has rank at most $t$, projecting away from the top $t$ coordinates can only result in a norm at most $\|Z_{-t}\|_2 \leq \epsilon^2 k^2/5$. \end{proof} \end{comment} \section{PSD Testing with \texorpdfstring{$\ell_2^2$}{L-2-square} Gap} Let $\mathbf{A} \in \R^{n \times n}$ be a symmetric matrix with eigenvalues $\lambda_{\max} = \lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n = \lambda_{\min}.$ In this section, we consider the problem of testing positive semi-definiteness with an $\ell_2^2$ gap. Formally, the problem statement is as follows. \begin{definition}[PSD Testing with $\ell_{2}^2$-Gap.]\label{def:l2} Fix $\epsilon \in (0,1]$ and let $\mathbf{A} \in \R^{n \times n}$ be a symmetric matrix satisfying $\|\mathbf{A}\|_\infty \leq 1$, with the promise that either \begin{itemize} \item \textbf{YES Instance}: $\mathbf{A}$ is PSD. \item \textbf{NO Instance}: $\mathbf{A}$ is $\epsilon$-far from PSD in $\ell_2^2$, meaning that $\min_{\mathbf{B} \succeq 0} \|\mathbf{A} - \mathbf{B}\|_F^2 = \sum_{i: \lambda_i < 0} \lambda_i^2 \geq \epsilon n^2$. \end{itemize} The PSD Testing problem with $\ell_{2}^2$-gap is to design an algorithm which distinguishes these two cases with probability at least $2/3$, using the minimum possible number of queries to the entries of $\mathbf{A}$. \end{definition} Our algorithm for this problem will query a principal submatrix $\mathbf{A}_{S \times S}$ and return PSD if $\mathbf{A}_{S \times S}$ is PSD; otherwise it will return not PSD. Since all principal submatrices of PSD matrices are PSD, we need only show that if $\mathbf{A}$ is $\epsilon$-far from PSD, then we can find a non-PSD principal submatrix of small size. Note again that this implies that our algorithm is one-sided. Thus, in the following, we can focus on the case where $\mathbf{A}$ is $\epsilon$-far from PSD. 
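To fix intuition, the tester just described is simple to state in code. The following sketch is our own illustration (the function names and the numerical eigenvalue tolerance \texttt{tol} are arbitrary choices, not part of the formal algorithm): it samples a uniformly random principal submatrix and checks positive semi-definiteness via the smallest eigenvalue.

```python
import numpy as np

def is_psd(M, tol=1e-10):
    # A symmetric matrix is PSD iff its smallest eigenvalue is
    # nonnegative (up to a small numerical tolerance).
    return np.linalg.eigvalsh(M).min() >= -tol

def psd_test(A, k, rng=np.random.default_rng()):
    """One-sided PSD tester: sample a uniformly random principal
    submatrix A_{S x S} with |S| = k and report whether it is PSD."""
    n = A.shape[0]
    S = rng.choice(n, size=min(k, n), replace=False)
    return is_psd(A[np.ix_(S, S)])
```

Since every principal submatrix of a PSD matrix is PSD, such a tester can only err by declaring a NO instance to be PSD; the analysis in this section bounds the submatrix size needed to avoid this.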
We begin by stating two fundamental observations, which, along with an application of our algorithm from Section \ref{sec:Linfty}, will allow us to reduce the problem of PSD testing with $\ell_2$ gap to the problem of testing certain functions of the \textit{singular values} of $\mathbf{A}$. \begin{proposition}[PSD matrices are top heavy]\label{prop:topheavy} Fix any $n \in \mathbb{N}$, $1 \leq k \leq n$, and $\mathbf{D} \in \R^{n \times n}$. Then if $\mathbf{D}$ is PSD, we have \[\sum_{i > k} \sigma_i(\mathbf{D})^2 \leq \frac{1}{k}\left( \text{Tr}(\mathbf{D})\right)^2\] In particular, if $\mathbf{D}$ has bounded entries $\|\mathbf{D}\|_\infty \leq 1$, we have $\sum_{i > k} \sigma_i(\mathbf{D})^2 \leq \frac{1}{k}n^2$. \end{proposition} \begin{proof} We first show that $\sigma_k(\mathbf{D}) \leq k^{-1} \text{Tr}(\mathbf{D})$. To see this, suppose $\sigma_{k}(\mathbf{D}) > k^{-1} \text{Tr}(\mathbf{D})$. Then the top $k$ singular values alone would contribute $\sum_{i \leq k} \sigma_i(\mathbf{D}) > k \cdot k^{-1} \text{Tr}(\mathbf{D}) = \text{Tr}(\mathbf{D})$, which contradicts the fact that, because $\mathbf{D}$ is PSD, $\sum_i \sigma_i(\mathbf{D}) = \sum_i \lambda_i(\mathbf{D}) = \text{Tr}(\mathbf{D})$. Thus, $\sigma_{i}(\mathbf{D}) \leq k^{-1} \text{Tr}(\mathbf{D})$ for all $i \geq k$. Using this and the bound $\sum_{i > k} \sigma_i(\mathbf{D}) \leq \text{Tr}(\mathbf{D})$, it follows that the quantity $\sum_{i > k} \sigma_i(\mathbf{D})^2$ is maximized by having $k$ singular values equal to $\text{Tr}(\mathbf{D})/k$, yielding $\sum_{i > k} \sigma_i(\mathbf{D})^2 \leq k \cdot (\text{Tr}(\mathbf{D})/k)^2 = k^{-1} (\text{Tr}(\mathbf{D}))^2 $ as needed. \end{proof} \begin{proposition}\label{prop:nottopheavy} Let $\mathbf{D} \in \R^{n \times n}$ be a symmetric matrix such that $\|\mathbf{D}\|_\infty \leq 1$, and let $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_n$ be its singular values. 
Suppose $\mathbf{D}$ is at least $\epsilon$-far in $L_2$ from PSD, so that $\sum_{i : \lambda_i(\mathbf{D}) < 0} \lambda_i^2(\mathbf{D}) \geq \epsilon n^2$, and suppose further that $\min_i \lambda_i(\mathbf{D}) > - \frac{1}{2k} n$ for some $k \geq \frac{2}{\epsilon}$. Then we have \[\sum_{i >k}\sigma_{i}^2(\mathbf{D}) > \frac{\epsilon}{2} n^2\] \end{proposition} \begin{proof} Let $W \subseteq [n]$ be the set of values $i \in [n]$ such that $\lambda_{i} < 0$. Let $W' \subseteq [n]$ be the set of values $i \in [n]$ such that $\sigma_{i} < \frac{1}{2k} n$. Since every negative eigenvalue has magnitude less than $\frac{1}{2k}n$, each contributes its singular value to $W'$, so by assumption $\sum_{i \in W'} \sigma_{i}^2 \geq \sum_{i \in W} \lambda_{i}^2 \geq \epsilon n^2$. Now $\sum_{i \in W'} \sigma_{i}^2 = \sum_{i \in W', i \leq k} \sigma_{i}^2 + \sum_{i \in W', i > k} \sigma_{i}^2$, so by the fact that $\sigma_i \leq \frac{1}{2k} n$ for every $i \in W'$, we have that $ \sum_{i \in W', i \leq k} \sigma_{i}^2 \leq k (n/(2k))^2 = n^2/4k < \epsilon n^2/2$. Thus we must have $\sum_{i \in W', i > k} \sigma_{i}^2 > \epsilon n^2/2 $, giving \begin{equation} \begin{split} \sum_{i > k} \sigma_{i}^2 & \geq \sum_{i \in W', i > k} \sigma_{i}^2 \\ & > \epsilon n^2/2 \\ \end{split} \end{equation} as required. \end{proof} \noindent \subsection{Analysis of the Algorithm} Our analysis will require several tools, beginning with the following interlacing lemma. \begin{lemma}[Dual Lidskii Inequality, \cite{tao2011topics} Chapter 1.3]\label{lem:duallidskii} Let $\mathbf{M}_1,\mathbf{M}_2$ be $n \times n$ symmetric matrices, and fix $1 \leq i_1 < i_2 < \dots < i_k \leq n$. Then we have \[\sum_{j=1}^{k} \lambda_{i_j} (\mathbf{M}_1 + \mathbf{M}_2) \geq \sum_{j=1}^{k}\lambda_{i_j} (\mathbf{M}_1 ) + \sum_{j=1}^{k} \lambda_{n-j+1}(\mathbf{M}_2) \] \end{lemma}\noindent We will also need the following result of Rudelson and Vershynin \cite{rudelson2007sampling} on the decay of spectral norms of random submatrices. 
\begin{proposition}[\cite{rudelson2007sampling}]\label{prop:vershy} Let $\mathbf{A} \in \R^{n \times m}$ be a rank $r$ matrix whose maximum Euclidean row norm is bounded by $M$, in other words $\max_i (\mathbf{A} \mathbf{A}^\top)_{i,i} \leq M^2$. Let $Q \subset [n]$ be a random subset of rows of $\mathbf{A}$ with expected cardinality $q$. Then there is a fixed constant $\kappa \geq 1$ such that \[ \ex{\|\mathbf{A}_{Q \times [m] }\|_2 } \leq \kappa ( \sqrt{q/n} \|\mathbf{A}\|_2 + \sqrt{\log q} M) \] \end{proposition} \noindent Finally, we will need a generalized Matrix Chernoff bound for the interior eigenvalues of sums of random matrices, which was derived by Gittens and Tropp \cite{gittens2011tail}. \begin{theorem}[Interior Eigenvalue Matrix Chernoff, Theorem 4.1 of \cite{gittens2011tail}]\label{thm:matcher} Consider a finite sequence $\{\mathbf{X}_j\}$ of independent, random, positive-semidefinite matrices with dimension $m$, and assume that $\|\mathbf{X}_j\|_2 \leq L$ for some value $L$ almost surely. Given an integer $k \leq m$, define \[ \mu_k = \lambda_k\left(\sum_j \ex{\mathbf{X}_j}\right) \] then we have the tail inequalities \begin{equation} \begin{cases} \; \; \; \; \bpr{ \lambda_k( \sum_j \mathbf{X}_j) \geq (1 +\delta ) \mu_k } \leq (m-k+1) \cdot \left[ \frac{e^\delta}{(1+\delta)^{1+\delta}}\right]^{\mu_k/L} & \text{ for } \delta > 0 \\[12pt] \; \; \; \; \bpr{ \lambda_k( \sum_j \mathbf{X}_j) \leq (1 -\delta ) \mu_k } \leq k \cdot \left[ \frac{e^{-\delta}}{(1-\delta)^{1-\delta}}\right]^{\mu_k/L} & \text{ for } \delta \in [0,1) \\ \end{cases} \end{equation} \end{theorem} \vspace{.2in} \paragraph{The Algorithm.} Our first step is to run the $\ell_\infty$-gap algorithm of Section \ref{sec:Linfty} with $\epsilon_0 = \frac{2}{k}$, where we set $k = \frac{2 \cdot 400^2 \kappa^4}{\epsilon}$ and $\kappa \geq 1$ is the constant in Proposition \ref{prop:vershy}. 
This allows us to assume that $\lambda_i \geq - \epsilon_0 n / 1000 \geq - \frac{1}{2k}n$ for all $i$, otherwise we have a $\wt{O}(1/\epsilon^2)$-query algorithm from Section \ref{sec:Linfty}, and since our target complexity is $\wt{O}(1/\epsilon^4)$, we can safely disregard the cost of running this algorithm in parallel. We begin by demonstrating that the Frobenius norm of $\mathbf{S} \mathbf{A}$ is preserved (up to scaling), where $\mathbf{S}$ is a random row sampling matrix with sufficiently many rows. \begin{proposition}\label{prop:trace} Let $\mathbf{M} \in \R^{m \times m}$. Fix $t \geq 1$, and let $\mathbf{S} \in \R^{t_0 \times m}$ be a row sampling matrix which samples each row of $\mathbf{M}$ independently with probability $p = \frac{t}{m}$, so that $\ex{t_0} = t$. Then we have \[ \ex{\frac{1}{p}\text{Tr}(\mathbf{S} \mathbf{M} \mathbf{S}^\top)} = \sum_i \lambda_i(\mathbf{M}) = \text{Tr}(\mathbf{M})\] and \[ \text{Var}\left(\frac{1}{p}\text{Tr}(\mathbf{S} \mathbf{M} \mathbf{S}^\top)\right)\leq \frac{m}{t}\sum_i \mathbf{M}_{i,i}^2 \] \end{proposition} \begin{proof} For $i \in [m]$, let $\delta_i \in \{0,1\}$ indicate that we sample row $i$. We have $\ex{\frac{1}{p}\text{Tr}(\mathbf{S} \mathbf{M}\mathbf{S}^\top)} = \frac{1}{p} \ex{\sum_{i=1}^m \delta_i \mathbf{M}_{i,i} } = \text{Tr}(\mathbf{M})$. Moreover, \begin{equation} \begin{split} \text{Var}\left(\frac{1}{p}\text{Tr}(\mathbf{S} \mathbf{M} \mathbf{S}^\top)\right)& = \frac{1}{p^2} \ex{\left(\sum_{i=1}^m \delta_i \mathbf{M}_{i,i}\right)^2} - \left(\text{Tr}(\mathbf{M})\right)^2 \\ & = \sum_{i\neq j} \mathbf{M}_{i,i}\mathbf{M}_{j,j} + \frac{1}{p} \sum_i \mathbf{M}_{i,i}^2 - \left(\text{Tr}(\mathbf{M})\right)^2\\ & \leq \frac{1}{p}\sum_i \mathbf{M}_{i,i}^2 \\ \end{split} \end{equation} as stated. \end{proof} We now fix $t = \Theta(\log(1/\epsilon)/\epsilon^2)$, and draw independent row sampling matrices $\mathbf{S}, \mathbf{T}$ with an expected $t$ rows. 
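Before proceeding, we note that Proposition \ref{prop:trace} is easy to sanity-check numerically. The sketch below (again our own illustration, not part of the algorithm) averages the rescaled trace of independently sampled principal submatrices of a random symmetric $\mathbf{M}$ and compares against $\text{Tr}(\mathbf{M})$:

```python
import numpy as np

def sampled_trace(M, t, rng):
    """Rescaled trace (1/p) * Tr(S M S^T), where S keeps each row/column
    of M independently with probability p = t/m."""
    m = M.shape[0]
    p = t / m
    keep = rng.random(m) < p  # Bernoulli(p) indicator per index
    return np.trace(M[np.ix_(keep, keep)]) / p

rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
M = (M + M.T) / 2  # symmetrize
# Average many independent estimates; the estimator is unbiased,
# with variance at most (m/t) * sum_i M_ii^2 per sample.
est = np.mean([sampled_trace(M, 40, rng) for _ in range(2000)])
```

With these parameters, the variance bound of the proposition implies the standard deviation of \texttt{est} is below $1$, so it should land within a few units of $\text{Tr}(M)$.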
Let $S ,T \subset [n]$ be the rows and columns sampled by $\mathbf{S},\mathbf{T}^\top$ respectively. We then compute $\mathbf{Z} = \mathbf{S} \mathbf{A} \mathbf{T}^\top$ with an expected $O(t^2)$ queries. Finally, we query the principal submatrix $\mathbf{A}_{(S \cup T) \times (S \cup T)}$, and test whether $\mathbf{A}_{(S \cup T) \times (S \cup T)}$ is PSD. Clearly if $\mathbf{A}$ is PSD, so is $\mathbf{A}_{(S \cup T) \times (S \cup T)}$, so it suffices to analyze the \texttt{NO} case, which we do in the remainder. \begin{lemma}\label{lem:smallk} Let $\mathbf{A} \in \R^{n \times n}$ be $\epsilon$-far from PSD with $\|\mathbf{A}\|_\infty \leq 1$. Then let $\mathbf{Z} = \mathbf{S} \mathbf{A} \mathbf{T}^\top$ be sampled as described above, so that $\mathbf{Z}$ has an expected $t = \Theta(\log(1/\epsilon)/\epsilon^2)$ rows and columns, where $t$ is scaled by a large enough constant, and let $k = \frac{2 \cdot 400^2 \kappa^4}{\epsilon}$, where $\kappa \geq 1$ is the constant in Proposition \ref{prop:vershy}. Suppose further that $\sigma_{k+1}(\mathbf{A}) \leq 10 n/k$. Then with probability $19/20$, we have \[ \frac{n^2}{t^2}\sum_{i > k} \sigma_i^2(\mathbf{Z}) > \epsilon n^2/16 \] \end{lemma} \begin{proof} Write $\mathbf{A} = \mathbf{U} \Lambda \mathbf{V}^\top$, $\mathbf{A}_{k} = \mathbf{U} \Lambda_{k} \mathbf{V}^\top, \mathbf{A}_{-k} = \mathbf{U} \Lambda_{-k} \mathbf{V}^\top$. Then $\mathbf{A} = \mathbf{A}_{k} + \mathbf{A}_{-k}$, and the rows of $\mathbf{A}_{k}$ are orthogonal to the rows of $\mathbf{A}_{-k}$. Note that this implies that $\|\mathbf{A}_{i,*}\|_2^2 = \|(\mathbf{A}_{k})_{i,*}\|_2^2 + \|(\mathbf{A}_{-k})_{i,*}\|_2^2$ for each $i \in [n]$ by the Pythagorean theorem, and since $\|\mathbf{A}\|_\infty \leq 1$ we have $\|(\mathbf{A}_{-k})_{i,*}\|_2^2 \leq n$. Now set $\mathbf{M}_1 = \mathbf{S} \mathbf{A}_{k} \mathbf{A}_{k}^\top \mathbf{S}^\top$, and $\mathbf{M}_2 = \mathbf{S} \mathbf{A}_{-k} \mathbf{A}_{-k}^\top \mathbf{S}^\top$. 
Notice that $\mathbf{M}_1 + \mathbf{M}_2 = \mathbf{S} (\mathbf{A}_{k} \mathbf{A}_{k}^\top + \mathbf{A}_{-k} \mathbf{A}_{-k}^\top )\mathbf{S}^\top = \mathbf{S} \mathbf{A} \mathbf{A}^\top \mathbf{S}^\top$, using the fact that the rows and columns of $\mathbf{A}_{k}$ are orthogonal to the rows and columns (respectively) of $\mathbf{A}_{-k}$. Let $p = \frac{t}{n}$ be the row sampling probability. Now suppose $\|\mathbf{A}_{-k}\|_F^2 = \alpha n^2$; by Proposition \ref{prop:nottopheavy}, we have $\alpha > \epsilon /2$. By Proposition \ref{prop:trace}, we have $\ex{\text{Tr}(\mathbf{M}_2)/p} = \sum_{i > k} \sigma_i^2(\mathbf{A}) = \alpha n^2 > \epsilon n^2/2$. Moreover, we have \begin{equation} \begin{split} \text{Var}\left(\frac{1}{p}\text{Tr}(\mathbf{M}_2)\right)&\leq \frac{1}{p}\sum_i (\mathbf{A}_{-k} \mathbf{A}_{-k}^\top)_{i,i}^2 \\ &= \frac{1}{p}\sum_i \|(\mathbf{A}_{-k})_{i,*}\|_2^4\\ \end{split} \end{equation} It follows that, since each row satisfies $\|(\mathbf{A}_{-k})_{i,*}\|_2^2 \leq n$ and $\|\mathbf{A}_{-k}\|_F^2 = \alpha n^2$, the quantity $\sum_i \|(\mathbf{A}_{-k})_{i,*}\|_2^4$ is maximized by having $\alpha n$ rows with squared norm equal to $n$. This yields \begin{equation} \begin{split} \text{Var}\left(\frac{1}{p}\text{Tr}(\mathbf{M}_2)\right)& \leq \frac{1}{p} \cdot 2 \alpha n \cdot n^2\\ & \leq 2 \frac{\alpha n^4}{t} \\ & \leq \frac{ \alpha^2}{100^2} n^4 \\ \end{split} \end{equation} where in the last line we used that $t \geq \frac{4\cdot 100^2}{\epsilon} \geq \frac{2 \cdot 100^2}{\alpha}$. Then by Chebyshev's inequality, with probability $99/100$, we have $\frac{1}{p}\text{Tr}(\mathbf{M}_2) > \alpha n^2 - (\alpha/10) n^2 = (9/10) \alpha n^2 \geq (9/20) \epsilon n^2$. Call this event $\mathcal{E}_1$, and condition on it now. 
Next, by Proposition \ref{prop:vershy}, since $\sigma_{k+1}(\mathbf{A}) \leq 10 n/k$ we have $\ex{\|\mathbf{S}\mathbf{A}_{-k}\|_2} \leq \kappa (10\sqrt{tn}/k + \sqrt{2\log(1/\epsilon)}\sqrt{n}) < 20 \kappa \sqrt{tn}/k$. Then by Markov's inequality, we have $\|\mathbf{S}\mathbf{A}_{-k}\|_2^2 = \|\mathbf{M}_2\|_2 \leq 200^2 \kappa^2 t n/k^2$ with probability $99/100$, which we condition on now, and call this event $\mathcal{E}_2$. Then by the Dual Lidskii inequality \ref{lem:duallidskii}, we have \begin{equation} \begin{split} \frac{1}{p}\sum_{j> k} \lambda_j(\mathbf{M}_1 + \mathbf{M}_2) &\geq \frac{1}{p} \left(\sum_{j> k}\lambda_j( \mathbf{M}_2) \right) \\ &\geq \frac{1}{p}( \text{Tr}(\mathbf{M}_2) - k \|\mathbf{M}_2\|_2 ) \\ &\geq (9/20) \epsilon n^2 - 200^2 \kappa^2 n^2/k\\ &\geq \epsilon n^2/4 \\ \end{split} \end{equation} using that $k > \frac{2 \cdot 400^2 \kappa^4}{ \epsilon}$. Now let $\mathbf{W} = \frac{1}{\sqrt{p}}(\mathbf{S}\mathbf{A})^\top$, and note that we took the transpose, so $\mathbf{W}$ has $n$ rows and $t_1$ columns, where $\ex{t_1} = t$. Now by Chernoff bounds, with probability $99/100$ we have $t_1 \leq 2t$; call this event $\mathcal{E}_3$ and condition on it now. The above demonstrates that $\frac{1}{p} \sum_{j > k} \lambda_j(\mathbf{M}_1 + \mathbf{M}_2) = \sum_{j > k}\sigma_j^2(\mathbf{W}) \geq \epsilon n^2 / 4$. Now note that $\sigma_{k+1}(\mathbf{W}) = \frac{1}{\sqrt{p}}\sigma_{k+1}(\mathbf{S}\mathbf{A}_{k} + \mathbf{S} \mathbf{A}_{-k}) \leq \frac{1}{\sqrt{p}} \|\mathbf{S}\mathbf{A}_{-k}\|_2 \leq 200 \kappa n/k$, where we used the Weyl inequality for singular values, namely that for any two matrices $\mathbf{A},\mathbf{B}$ and value $i$, $|\sigma_i(\mathbf{A} + \mathbf{B}) - \sigma_i(\mathbf{A})| \leq \|\mathbf{B}\|_2$, together with the fact that $\mathbf{S}\mathbf{A}_k$ has rank at most $k$, so $\sigma_{k+1}(\mathbf{S}\mathbf{A}_{k}) = 0$. 
Now draw a random row sampling matrix $\mathbf{T}$ with an expected $t$ rows, and write $\mathbf{N}_1 = \mathbf{T} \mathbf{W}_k \mathbf{W}_k^\top \mathbf{T}^\top$ and $\mathbf{N}_2 = \mathbf{T} \mathbf{W}_{-k} \mathbf{W}_{-k}^\top \mathbf{T}^\top$, and note again that $\mathbf{N}_1 + \mathbf{N}_2 = \mathbf{T} \mathbf{W} \mathbf{W}^\top \mathbf{T}^\top$. Moreover, the rows of $\mathbf{W}_k$ live in a subspace orthogonal to the rows of $\mathbf{W}_{-k}$, so again by the Pythagorean theorem and boundedness of the entries in $\mathbf{A}$, we have $\|(\mathbf{W}_{-k})_{i,*}\|_2^2 \leq \frac{1}{p} t_1 \leq 2 n$ for all $i \in [n]$. Then by Proposition \ref{prop:trace}, we have $\ex{\text{Tr}(\mathbf{N}_2)/p} = \|\mathbf{W}_{-k}\|_F^2 = \sum_{j > k} \sigma_j^2(\mathbf{W}) \geq \epsilon n^2 / 4$, and \begin{equation} \begin{split} \text{Var}\left(\frac{1}{p}\text{Tr}(\mathbf{N}_2)\right)& \leq \frac{1}{p}\sum_{i=1}^n \| (\mathbf{W}_{-k})_{i,*}\|_2^4 \\ & \leq \frac{1}{p} \cdot 4 n^3 \\ & \leq \frac{4}{t} n^4 \\ & \leq \frac{\epsilon^2}{100^2} n^4 \\ \end{split} \end{equation} Then by Chebyshev's inequality, with probability $99/100$, we have $\frac{1}{p}\text{Tr}(\mathbf{N}_2) > \epsilon n^2/4 - (\epsilon /10) n^2 \geq \epsilon n^2/8$. Call this event $\mathcal{E}_4$, and condition on it now. Now as shown above, we have $\|\mathbf{W}_{-k}\|_2 \leq 200 \kappa n/k$, thus by Proposition \ref{prop:vershy} we have $\ex{\|\mathbf{T} \mathbf{W}_{-k}\|_2 } \leq \kappa(200 \kappa \sqrt{tn} /k + 4\sqrt{\log(1/\epsilon)}\sqrt{n} ) \leq 400 \kappa^2 \sqrt{tn}/k$, where again we take $t = \Theta(\frac{\log(1/\epsilon)}{\epsilon^2})$ with a large enough constant. 
Then by Markov's inequality, with probability $99/100$ we have $\|\mathbf{N}_2\|_2 \leq 400^2 \kappa^4 tn/k^2$; call this event $\mathcal{E}_5$ and condition on it now. Again by the Dual Lidskii inequality \ref{lem:duallidskii}, we have \begin{equation} \begin{split} \frac{1}{p}\sum_{j> k} \lambda_j(\mathbf{N}_1 + \mathbf{N}_2) &\geq \frac{1}{p} \left(\sum_{j> k}\lambda_j( \mathbf{N}_2) \right) \\ &\geq \frac{1}{p}( \text{Tr}(\mathbf{N}_2) - k \|\mathbf{N}_2\|_2 ) \\ &\geq \epsilon n^2/8 - 400^2 \kappa^4 n^2/k\\ &\geq \epsilon n^2/16 \\ \end{split} \end{equation} using that $k \geq \frac{2 \cdot 400^2 \kappa^4}{\epsilon}$. Note moreover that $$\frac{1}{p}\sum_{j> k} \lambda_j(\mathbf{N}_1 + \mathbf{N}_2) = \frac{1}{p}\sum_{j> k} \sigma_j^2(\mathbf{T} \mathbf{W}) = \frac{1}{p^2}\sum_{j> k} \sigma_j^2(\mathbf{S} \mathbf{A}^\top \mathbf{T}^\top)$$ Using that $\mathbf{A} = \mathbf{A}^\top$, so that $ \mathbf{Z} = \mathbf{S} \mathbf{A}^\top \mathbf{T}^\top$, we conclude that $ \frac{1}{p^2}\sum_{i > k} \sigma_i^2(\mathbf{Z}) = \frac{n^2}{t^2}\sum_{i > k} \sigma_i^2(\mathbf{Z}) > \epsilon n^2/16 $ as desired. Note that we conditioned on $\mathcal{E}_i$ for $i=1,2,3,4,5$, each of which held with probability $99/100$, thus the result holds with probability $19/20$ by a union bound. \end{proof} We will now address the case where $\sigma_k(\mathbf{A}) > 10 n/k$. \begin{lemma}\label{lem:bigk} Let $\mathbf{A} \in \R^{n \times n}$ be $\epsilon$-far from PSD with $\|\mathbf{A}\|_\infty \leq 1$. Then let $\mathbf{Z} = \mathbf{S} \mathbf{A} \mathbf{T}^\top$ be sampled as described above, so that $\mathbf{Z}$ has an expected $t = \Theta(\log(1/\epsilon)/\epsilon^2)$ rows and columns, where $t$ is scaled by a large enough constant, and let $k = \frac{2 \cdot 400^2 \kappa^4}{\epsilon}$, where $\kappa \geq 1$ is the constant in Proposition \ref{prop:vershy}. Suppose further that $\sigma_{k}(\mathbf{A}) > 10 n/k$. 
Then with probability $49/50$, we have \[ \frac{n}{t} \sigma_k(\mathbf{Z}) \geq 8 n/k \] \end{lemma} \begin{proof} The proof is by application of Theorem \ref{thm:matcher} twice. We first generate a random row sampling matrix $\mathbf{S}$ with an expected $t$ rows, and bound $\lambda_k( (\mathbf{S}\mathbf{A})^\top \mathbf{S} \mathbf{A} ) = \sigma_k^2(\mathbf{S} \mathbf{A})$. Let $\mathbf{X}_j \in \R^{n \times n}$ be a random variable such that $\mathbf{X}_j = \mathbf{A}_{(j)}^\top \mathbf{A}_{(j)}$, where $\mathbf{A}_{(j)}$ is the $j$-th row of $\mathbf{A}$ that was sampled in $\mathbf{S}$. Then $\sum_{j} \mathbf{X}_j = (\mathbf{S}\mathbf{A})^\top \mathbf{S} \mathbf{A} $, and $\ex{\sum_j \mathbf{X}_j} = \frac{t}{n}\sum_{i=1}^n \mathbf{A}_i^\top \mathbf{A}_i = \frac{t}{n} \mathbf{A}^\top \mathbf{A}$, where $ \mathbf{A}_i$ is the $i$-th row of $\mathbf{A}$. Moreover, note that $\|\mathbf{X}_j\|_2 \leq \max_i \|\mathbf{A}_{i,*}\|_2^2 \leq n$ for all $j$, by the boundedness of $\mathbf{A}$. Note that $\mu_k = \lambda_k((t/n) \mathbf{A}^\top \mathbf{A}) \geq (t/n) 100 n^2 / k^2 = \frac{100 t n }{k^2}$. Thus by the Interior Matrix Chernoff Bound \ref{thm:matcher}, we have that for some constant $c$: \begin{equation} \begin{split} \bpr{ \lambda_k((\mathbf{S}\mathbf{A})^\top \mathbf{S} \mathbf{A}) \leq .9 \mu_k } &\leq k \cdot c^{\mu_k/L} \\ &\leq k \cdot c^{\frac{100 t n }{k^2 } \cdot \frac{1}{n}} \\ &\leq k \cdot e^{-100 \log(k)} \\ & \leq 1/1000 \end{split} \end{equation} Where we use $t = \Theta(\frac{\log(1/\epsilon)}{\epsilon^2})$ with a large enough constant. Also condition on the fact that $\mathbf{S}$ has at most $2t$ rows, which holds with probability $999/1000$. Call the intersection of the above two events $\mathcal{E}_1$, which holds with probability $99/100$, and condition on it now. Given this, we have $\sigma_k^2(\mathbf{S}\mathbf{A}) \geq \frac{90 tn}{k^2}$.
Now again, let $\mathbf{Y}_j = (\mathbf{S}\mathbf{A})_{(j)} (\mathbf{S}\mathbf{A})_{(j)}^\top$, where $(\mathbf{S}\mathbf{A})_{(j)}$ is the $j$-th column of $\mathbf{S}\mathbf{A}$ sampled by the column sampling matrix $\mathbf{T}$. Let $\mathbf{M} = (\mathbf{S} \mathbf{A})^\top$. Then again we have $\|\mathbf{Y}_j\|_2 \leq 2t$, using that $\mathbf{S}\mathbf{A}$ has at most $2t$ rows, and each entry is bounded by $1$. Moreover, $\sum_j \mathbf{Y}_j = \mathbf{T} \mathbf{M} \mathbf{M}^\top \mathbf{T}^\top$. We also have $\lambda_k(\ex{ \sum_j \mathbf{Y}_j}) = \lambda_k(\frac{t}{n} \mathbf{M} \mathbf{M}^\top ) > \frac{90 t^2}{k^2}$. Applying the Interior Matrix Chernoff Bound again, we have that for some constant $c$: \begin{equation} \begin{split} \bpr{ \lambda_k( \mathbf{T} (\mathbf{S} \mathbf{A})^\top(\mathbf{S} \mathbf{A}) \mathbf{T}^\top ) \leq .9 \mu_k } &\leq k \cdot c^{\mu_k/L} \\ &\leq k \cdot c^{\frac{90 t^2 }{k^2 } \cdot \frac{1}{2t} }\\ &\leq k \cdot e^{-100 \log(k)} \\ & \leq 1/1000 \\ \end{split} \end{equation} Call the above event $\mathcal{E}_2$. Conditioned on $\mathcal{E}_1 \cap \mathcal{E}_2$, which hold together with probability $49/50$, we have that $\sigma_k( \mathbf{S} \mathbf{A} \mathbf{T}^\top ) \geq \sqrt{.9 \cdot \frac{90 t^2}{k^2}} = 9t/k > 8 t/k$. Since $\mathbf{Z} = \mathbf{S} \mathbf{A} \mathbf{T}^\top $, we have $\frac{n}{t} \sigma_k( \mathbf{Z} ) > 8n/k$ as needed. \end{proof} \begin{theorem}\label{thm:l2premain} Let $\mathbf{A} \in \R^{n \times n}$ be $\epsilon$-far from PSD with $\|\mathbf{A}\|_\infty \leq 1$. If $S,T \subset [n]$ are random subsets, each of expected size $t = O(\log(1/\epsilon)/\epsilon^2)$, then with probability $9/10$ the principal submatrix $\mathbf{A}_{(S \cup T) \times (S \cup T)}$ is not PSD. \end{theorem} \begin{proof} First, by Chernoff bounds, with probability $99/100$ we have $|S \cup T| \leq |S| + |T| \leq 4t$, which we call $\mathcal{E}_1$ and condition on now.
First, consider the case that $\sigma_k(\mathbf{A}) \leq 10 n/k$, where $k = \frac{2 \cdot 400^2 \kappa^4}{\epsilon}$. Then by Lemma \ref{lem:smallk}, with probability $19/20$, we have that $\sum_{i >k}\sigma_i^2(\mathbf{A}_{S \times T}) > \epsilon t^2/16$. We first prove the following claim: \begin{claim} Let $\mathbf{Z} \in \R^{n \times m}$ be any matrix, and let $\tilde{\mathbf{Z}}$ be a rectangular submatrix of $\mathbf{Z}$. Let $\mathbf{Z}_k,\tilde{\mathbf{Z}}_k$ be the truncated SVDs of $\mathbf{Z},\tilde{\mathbf{Z}}$ respectively, for any $1 \leq k \leq \min\{n,m\}$. Then we have \[\|\mathbf{Z} - \mathbf{Z}_k\|_F^2 \geq \|\tilde{\mathbf{Z}}- \tilde{\mathbf{Z}}_k\|_F^2 \] \end{claim} \begin{proof} Note that $\|\mathbf{Z} - \mathbf{Z}_k\|_F^2 \geq \|\tilde{\mathbf{Z}} - \mathbf{Z}_k'\|_F^2$, where $\mathbf{Z}_k'$ is the matrix $\mathbf{Z}_k$ restricted to the submatrix containing $\tilde{\mathbf{Z}}$. But $\tilde{\mathbf{Z}}_k$ is the \textit{best} rank-$k$ approximation to $\tilde{\mathbf{Z}}$, so $ \|\tilde{\mathbf{Z}}- \tilde{\mathbf{Z}}_k\|_F^2 = \min_{\mathbf{B} \text{ rank-}k} \|\tilde{\mathbf{Z}}- \mathbf{B}\|_F^2 \leq \|\tilde{\mathbf{Z}} - \mathbf{Z}_k'\|_F^2$, using the fact that a submatrix of a rank-$k$ matrix has rank at most $k$. \end{proof} It follows that $\|\mathbf{A}_{(S \cup T) \times (S \cup T)} - (\mathbf{A}_{(S \cup T) \times (S \cup T)})_k\|_F^2 = \sum_{j > k} \sigma_j^2(\mathbf{A}_{(S \cup T) \times (S \cup T)}) \geq \sum_{j > k} \sigma_j^2(\mathbf{A}_{ S \times T}) > \epsilon t^2/16 \geq \epsilon |S \cup T|^2/256$. But note that if $\mathbf{A}_{(S \cup T) \times (S \cup T)}$ were PSD, then we would have $\sum_{j > k} \sigma_j^2(\mathbf{A}_{(S \cup T) \times (S \cup T)}) \leq \frac{16}{k}t^2$, which is a contradiction since $k = \frac{2 \cdot 400^2 \kappa^4}{\epsilon} > \frac{100^2}{\epsilon}$. Now consider the case that $\sigma_k(\mathbf{A}) > 10 n/k$.
Then by Lemma \ref{lem:bigk}, we have $\sigma_k(\mathbf{A}_{ S \times T}) \geq 8 t/k$ with probability at least $49/50$. Then $\|\mathbf{A}_{ S \times T}\|_{\mathcal{S}_1} \geq \sum_{i=1}^k \sigma_i(\mathbf{A}_{ S \times T}) \geq 8t$. Using the fact that the Schatten norm of a matrix is always at least as large as the Schatten norm of any of its submatrices (this follows from the fact that the singular values of the submatrix are point-wise dominated by those of the larger matrix; see Theorem 1 of \cite{thompson1972principal}), we have $\|\mathbf{A}_{(S \cup T) \times (S \cup T)}\|_{\mathcal{S}_1} \geq 8t$. But note that if $\mathbf{A}_{(S \cup T) \times (S \cup T)}$ were PSD, then we would have $\|\mathbf{A}_{(S \cup T) \times (S \cup T)}\|_{\mathcal{S}_1} = \text{Tr}(\mathbf{A}_{(S \cup T) \times (S \cup T)}) \leq |S \cup T| \leq 4t$, which is a contradiction. This completes the proof of the theorem. \end{proof} \begin{theorem}\label{thm:l2Main} Fix $\mathbf{A} \in \R^{n \times n}$ with $\|\mathbf{A}\|_\infty \leq 1$. There is a non-adaptive sampling algorithm that, with probability $9/10$, correctly distinguishes the case that $\mathbf{A}$ is PSD from the case that $\mathbf{A}$ is $\epsilon$-far from PSD in $\ell_2$, namely that $\sum_{i : \lambda_i(\mathbf{A}) < 0} \frac{ \lambda_i^2(\mathbf{A})}{n^2} \geq \epsilon$. The algorithm queries a total of $O(\frac{\log^2(1/\epsilon)}{\epsilon^4})$ entries of $\mathbf{A}$, and always correctly classifies $\mathbf{A}$ as PSD if $\mathbf{A}$ is indeed PSD. Moreover, the algorithm runs in time $\tilde{O}(1/\epsilon^{2\omega})$, where $\omega< 2.373$ is the exponent of fast matrix multiplication. \end{theorem} \begin{proof} We first apply the algorithm of Section \ref{sec:Linfty} with $\epsilon_0 = \frac{2}{k}$, which as discussed allows us to assume that $\lambda_i \geq - \epsilon_0 n / 1000 \geq - \frac{1}{2k}n$ for all $i$. The cost of doing so is $\wt{\Theta}(1/\epsilon^2)$ queries, and this algorithm also yields one-sided error as desired.
The remainder of the theorem follows directly from Theorem \ref{thm:l2premain}, using that all principal submatrices of PSD matrices are PSD. Finally, for runtime, notice that the main computation is computing the eigenvalues of a $k \times k$ principal submatrix, for $k = \tilde{O}(1/\epsilon^2)$, which can be carried out in time $\tilde{O}(1/\epsilon^{2\omega})$ \cite{demmel2007fast2,banks2019pseudospectral}. \end{proof} \begin{comment} \subsection{Ky-Fan $(s,p)$-Norm} \begin{definition} Given a matrix $\mathbf{D} \in \R^{n \times m}$ with singular values $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_{\min\{m,n\}}$ and $s \leq \min\{m,n\}$, $p \geq 1$, the Ky-Fan $(s,p)$-norm is defined as \[ \|\mathbf{D}\|_{KF(s,p)} = \left(\sum_{i=1}^s \sigma_i^p(\mathbf{D}) \right)^{1/p}\] \end{definition} \begin{proposition} $\|\mathbf{D}\|_{KF(s,p)}$ is a norm for $p \geq 1$. \end{proposition} \begin{proof} Since singular values behave correctly under scalings, it suffices to show the triangle inequality. \begin{equation} \begin{split} \|\mathbf{D} + \mathbf{E}\|_{KF(s,p)} &= \max_{U,V \text{ rank}-s} \| \PP_U (\mathbf{D} + \mathbf{E}) \PP_V \|_{Sh(p)} \\ & \leq \max_{U,V \text{ rank}-s} \left(\| \PP_U \mathbf{D} \PP_V \|_{Sh(p)} + \|\PP_U \mathbf{E} \PP_V \|_{Sh(p)} \right) \\ &\leq \max_{U,V \text{ rank}-s} \| \PP_{U} \mathbf{D} \PP_{V} \|_{Sh(p)} +\max_{U',V' \text{ rank}-s} \| \PP_{U'} \mathbf{E} \PP_{V'} \|_{Sh(p)} \\ & \leq \|\mathbf{D} \|_{KF(s,p)} + \|\mathbf{E} \|_{KF(s,p)} \end{split} \end{equation} Where we used that Schatten $p$ is a norm for $p \geq 1$. \end{proof} \end{comment} \section{PSD Testing with \texorpdfstring{$\ell_\infty$}{L-inf} Gap}\label{sec:Linfty} In this section, we introduce our algorithm for the PSD testing problem with $\ell_\infty$-gap.
As discussed earlier, we consider a more general version of the $\ell_\infty$ gap than the definition presented in Problem \ref{prob:linf}, which allows one to test a notion of positive semi-definiteness which applies to non-symmetric matrices as well. Specifically, we define the \emph{PSD} case as when $x^\top \mathbf{A} x \geq 0$ for all $x \in \R^n$, and the \emph{far} case as when $x^\top \mathbf{A} x < - \epsilon n$ for some unit vector $x$. We note that if $\mathbf{A}$ is symmetric, this definition is equivalent to Problem \ref{prob:linf}. In fact, as we will see shortly, one can always reduce the non-symmetric case to the symmetric case, so this distinction will not matter algorithmically. Formally, we solve the following problem: \begin{definition}[General PSD Testing with $\ell_{\infty}$-Gap.]\label{def:linfty} Fix $\epsilon \in (0,1]$ and let $\mathbf{A} \in \R^{n \times n}$ be any matrix satisfying $\|\mathbf{A}\|_\infty \leq 1$. The goal is to distinguish between the following two cases: \begin{itemize} \item \textbf{YES Instance}: $\mathbf{A}$ satisfies $x^\top \mathbf{A} x \geq 0$, for all $x \in \R^n$. \item \textbf{NO Instance}: There exists a unit vector $x \in \R^n$ such that $x^\top \mathbf{A} x < - \epsilon n$. \end{itemize} with probability at least $2/3$. \end{definition} \paragraph{Reducing to the symmetric case} In the case where $\mathbf{A}$ is symmetric, as in Problem \ref{prob:linf}, the above gap instance can be restated in terms of the minimum eigenvalue of $\mathbf{A}$. Specifically, we are promised that either $\lambda_{\min}(\mathbf{A}) \geq 0$ or $\lambda_{\min}(\mathbf{A}) \leq -\epsilon n$. However, we now observe that one can reduce to the symmetric case with only a factor of $2$ loss in the query complexity, by simply querying the symmetrization $(\mathbf{A} + \mathbf{A}^\top)/2$.
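As a quick numerical sanity check of this reduction (illustrative only, not part of the formal argument; query access is modeled here by indexing a dense numpy array), one can verify that symmetrization leaves every quadratic form unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.uniform(-1.0, 1.0, size=(n, n))  # arbitrary, possibly asymmetric, ||A||_inf <= 1
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                   # unit vector

A_sym = (A + A.T) / 2.0                  # the symmetrization the reduction queries
# x^T A x = x^T ((A + A^T)/2) x for every x:
assert np.isclose(x @ A @ x, x @ A_sym @ x)
```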
First note that for any $x \in \R^{n}$ and any matrix $\mathbf{A} \in \R^{n \times n}$, we have $x^\top \mathbf{A} x = x^\top \mathbf{A}^\top x$, thus for any $x$ we have $x^\top \mathbf{A} x = x^\top \frac{\mathbf{A} + \mathbf{A}^\top}{2}x$. Thus $x^\top \mathbf{A} x \geq 0$ for all $x$ if and only if $x^\top \frac{\mathbf{A} + \mathbf{A}^\top}{2} x \geq 0$ for all $x$, which occurs if and only if the matrix $\frac{\mathbf{A} + \mathbf{A}^\top}{2}$ is PSD. Similarly, we have that $x^\top \mathbf{A} x \leq - \epsilon n$ for some unit vector $x$ if and only if $x^\top \frac{\mathbf{A} + \mathbf{A}^\top}{2} x \leq - \epsilon n$ for some unit vector $x$, which occurs if and only if $\lambda_{\min}(\frac{\mathbf{A} + \mathbf{A}^\top}{2} )\leq - \epsilon n$. Note also that the matrix $\frac{\mathbf{A} + \mathbf{A}^\top}{2} $ has bounded entries $\|\frac{\mathbf{A} + \mathbf{A}^\top}{2} \|_\infty \leq 1$ if $\|\mathbf{A}\|_\infty \leq 1$. Moreover, query access to $\frac{\mathbf{A} + \mathbf{A}^\top}{2} $ can be simulated via query access to $\mathbf{A}$ with a loss of at most a factor of $2$ in the query complexity, by symmetrizing the queries. In fact, our algorithms will not even incur this factor of $2$ loss, since all queries our algorithms make will belong to principal submatrices of $\mathbf{A}$. Thus, in what follows, we can restrict ourselves to the original formulation as specified in Problem \ref{prob:linf}, and assume our input $\mathbf{A}$ is symmetric. The goal of this section is now to prove the following theorem, which demonstrates the existence of an $\tilde{O}(1/\epsilon^2)$ query, one-sided error tester for the above problem. In Section \ref{sec:lb}, we demonstrate that this complexity is optimal (up to $\log(1/\epsilon)$ factors), even for testers with two-sided error (Theorem \ref{thm:linftyLB}).
\begin{theorem}[Query Optimal One-Sided Tester for $\ell_{\infty}$ Gap (see Theorem \ref{thm:inftymain})] There is an algorithm which, given $\mathbf{A}$ with $\|\mathbf{A}\|_\infty \leq 1$ such that either $x^\top \mathbf{A} x \geq 0$ for all $x$ (\texttt{YES} case), or such that $x^\top\mathbf{A} x \leq - \epsilon n$ for some $x \in \R^n$ with $\|x\|_2 \leq 1$ (\texttt{NO} case), distinguishes the two cases with probability at least $3/4$, while making at most $\tilde{O}(\frac{1}{\epsilon^2})$ queries to the entries of $\mathbf{A}$, and runs in time $\tilde{O}(1/\epsilon^{\omega})$, where $\omega < 2.373$ is the exponent of matrix multiplication. Moreover, in the first case, when $x^\top \mathbf{A} x \geq 0$ for all $x$, the algorithm always correctly outputs \texttt{YES}. \end{theorem} \paragraph{Algorithmic Setup} First recall that if $\mathbf{A}$ is PSD, then every \textit{principal} submatrix $\mathbf{A}_{T \times T}$ of $\mathbf{A}$ for $T \subseteq [n]$ is also PSD. Thus, it will suffice to query a collection of principal submatrices $\mathbf{A}_{T_1 \times T_1} , \mathbf{A}_{T_2 \times T_2} , \dots, \mathbf{A}_{T_t \times T_t} $ of $\mathbf{A}$, and return \texttt{Not PSD} if any one of them fails to be PSD. Such an algorithm then naturally has one-sided error, since if $\mathbf{A}$ is PSD it will always return PSD. Thus, in the remainder of the section, it suffices to consider only the \texttt{NO} case, and demonstrate that, if $\mathbf{A}$ satisfies $x^\top\mathbf{A} x \leq - \epsilon n$ for some unit vector $x \in \R^n$, then with good probability at least one of the sampled principal submatrices will fail to be PSD. Moreover, as shown above, it suffices to consider the case where $\mathbf{A}$ is symmetric. In this case, we will fix $x$ to be the eigenvector associated with the smallest eigenvalue of $\mathbf{A}$.
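The sampling template just described can be sketched as follows (the helper names are ours and purely illustrative; query access is modeled by indexing a dense numpy array, and each sampled principal submatrix is checked via its smallest eigenvalue):

```python
import numpy as np

def is_submatrix_psd(A, T, tol=1e-9):
    """Check whether the principal submatrix A_{T x T} is PSD."""
    sub = A[np.ix_(T, T)]
    return np.linalg.eigvalsh(sub).min() >= -tol

def one_sided_psd_tester(A, subset_sizes, rng):
    """Sample one random principal submatrix per requested size; report
    Not PSD as soon as any sampled submatrix fails to be PSD.  The test is
    one-sided: a PSD input is always accepted."""
    n = A.shape[0]
    for k in subset_sizes:
        T = rng.choice(n, size=min(k, n), replace=False)
        if not is_submatrix_psd(A, T):
            return False  # a non-PSD principal submatrix certifies "Not PSD"
    return True           # all sampled submatrices were PSD

rng = np.random.default_rng(0)
n = 100
psd = np.ones((n, n))   # rank-1 all-ones matrix: PSD, with ||.||_inf <= 1
far = -np.ones((n, n))  # x = (1/sqrt(n)) * ones gives x^T A x = -n
assert one_sided_psd_tester(psd, [5, 10, 20], rng)
assert not one_sided_psd_tester(far, [5, 10, 20], rng)
```

Here the subset sizes are placeholders; the analysis below determines how large the sampled submatrices must be as a function of $\epsilon$.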
Thus, in what follows, we can fix $x$ so that $\min_{z \in \R^n : \|z\|_2 = 1} z^\top \mathbf{A} z = x^\top \mathbf{A} x = \lambda_{\min}(\mathbf{A}) = -\epsilon n$. Notice here we define $\epsilon$ to satisfy the \emph{equality} $\lambda_{\min}(\mathbf{A})= -\epsilon n$; however, our algorithm need only know a lower bound $\epsilon_0 < \epsilon$ on $\epsilon$. The reason for this is that the input parameter $\epsilon_0$ will only affect the sizes of the random submatrices being sampled (smaller $\epsilon_0$ increases the size). Thus, an algorithm run with parameter $\epsilon_0$ can be simulated by first running the algorithm with parameter $\epsilon > \epsilon_0$, and then randomly adding rows and columns to the sampled submatrices from the correct marginal distribution. Thus, there is a coupling such that the submatrices chosen by an algorithm with any input $\epsilon_0 < \epsilon$ will always contain the submatrices sampled by an algorithm given the input exactly $\epsilon$, so if the latter algorithm sampled a non-PSD submatrix, so would the former. Thus, for the sake of analysis, we can assume that the value $\epsilon$ is known. Throughout the following section, we will assume $1/\epsilon < c \cdot n$ for some sufficiently small constant $c$. Notice that if this was not the case, we would have $1/\epsilon^2 = \Omega(n^2)$, and we would be permitted to read the entire matrix $\mathbf{A}$, as this is within our target budget of $\tilde{O}(1/\epsilon^2)$. \subsection{Warm-up: an \texorpdfstring{$O(1/\epsilon^3)$}{epsilon cube} algorithm}\label{sec:warmup} We first describe an $O(1/\epsilon^3)$ query algorithm for the problem of PSD testing with $\ell_\infty$-gap. The general approach and results of this algorithm will be needed for the more involved $\wt{O}(1/\epsilon^2)$ query algorithm which we shall develop in the sequel.
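Before the analysis, it may help to see the first moment of the subsampled quadratic form concretely. For $Z = \sum_{i,j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j$ with $\delta_i \sim \text{Bernoulli}(\delta)$, one has $\ex{Z} = \delta^2(y^\top \mathbf{A} y - \sum_i \mathbf{A}_{i,i} y_i^2) + \delta \sum_i \mathbf{A}_{i,i} y_i^2$ (the computation carried out in Proposition \ref{prop:exp} with $\delta = k/n$); the following brute-force enumeration (illustrative only, feasible for small $n$) confirms this closed form:

```python
import itertools
import numpy as np

def expected_form_bruteforce(A, y, delta):
    """Exact E[y_T^T A_{T x T} y_T] over all subsets T of [n], each index
    kept independently with probability delta (exponential in n)."""
    n = A.shape[0]
    total = 0.0
    for mask in itertools.product([0, 1], repeat=n):
        prob = np.prod([delta if m else 1.0 - delta for m in mask])
        idx = [i for i, m in enumerate(mask) if m]
        y_T = y[idx]
        total += prob * (y_T @ A[np.ix_(idx, idx)] @ y_T)
    return total

rng = np.random.default_rng(0)
n, k = 8, 3
A = rng.uniform(-1, 1, (n, n))
y = rng.standard_normal(n)
y /= np.linalg.norm(y)
delta = k / n

diag = float(np.sum(np.diag(A) * y**2))
closed_form = delta**2 * (y @ A @ y - diag) + delta * diag
assert np.isclose(expected_form_bruteforce(A, y, delta), closed_form)
```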
As noted above, it suffices to analyze the NO case, where we have $x^\top \mathbf{A} x = \lambda_{\min}(\mathbf{A}) = - \epsilon n$ for a unit vector $x \in \R^n$. Our goal will be to analyze the random variable $Z = x_T^\top \mathbf{A}_{T \times T} x_T$, where $T \subset [n]$ is a random subset in which each $i \in [n]$ is selected independently with some probability $\delta$. Notice that if $\delta_i \in \{0,1\}$ is an indicator variable indicating whether we sample $i \in T$, then we have $Z = x_T^\top \mathbf{A}_{T \times T} x_T= \sum_{i,j} x_i \mathbf{A}_{i,j} x_j \delta_i \delta_j$. Now, algorithmically, we do not know the vector $x$. However, if we can demonstrate concentration of $Z$, and show that $Z < 0$ for our sampled set $T$, then we can immediately conclude that $\mathbf{A}_{T \times T}$ is not PSD, a fact which \textit{can} be tested. Thus, the analysis will proceed by pretending that we did know $x$, and analyzing the concentration of $x_T^\top \mathbf{A}_{T \times T} x_T$. In the following section, however, we will ultimately analyze the concentration of this random variable with respect to a slightly different vector than $x$. We first remark that we can assume, up to the loss of a constant factor in the value of $\epsilon$, that the diagonal of $\mathbf{A}$ is equal to the identity. \begin{proposition}\label{prop:1diag} We can assume $\mathbf{A}_{i,i} = 1$, for all $i \in [n]$. Specifically, by modifying $\mathbf{A}$ so that $\mathbf{A}_{i,i} = 1$, for all $i \in [n]$, the completeness (PSD) case is preserved and the soundness (not PSD) case is preserved up to a factor of $1/2$ in the parameter $\epsilon$. \end{proposition} \begin{proof} Every time we observe an entry $\mathbf{A}_{i,i}$ we set it equal to $1$. In this new matrix, if $\mathbf{A}$ was PSD to begin with, then $\mathbf{A}$ will still be PSD, since this modification corresponds to adding a non-negative diagonal matrix to $\mathbf{A}$.
If $x^\top\mathbf{A} x \leq - \epsilon n$ originally for some $x \in \R^n$, then $x^\top\mathbf{A} x \leq - \epsilon n + 1$ after this change, since the diagonal contributes at most $\sum_i \mathbf{A}_{i,i} x_i^2 \leq \|x\|_2^2 \leq 1$ to the overall quadratic form. Note this additive term of $1$ is at most $(\epsilon n)/2$ since we can assume $\epsilon = \Omega(1/n)$. \end{proof} We now notice that if $x$ is the eigenvector associated with a sufficiently negative eigenvalue, the $\ell_2$ mass of $x$ cannot be too concentrated. \begin{proposition}\label{prop:spread} Let $\mathbf{A} \in \R^{n \times n}$ be a symmetric matrix with $\lambda_{\min}(\mathbf{A}) = - \epsilon n$, and let $x \in \R^n$ be the (unit) eigenvector associated with $\lambda_{\min}(\mathbf{A})$. Then we have that $\|x\|_\infty \leq \frac{1}{\epsilon\sqrt{ n}}$. \end{proposition} \begin{proof} By Cauchy-Schwarz, for any $i \in [n]$: \[ |\lambda_{\min}| \cdot | x_i| = |\langle \mathbf{A}_{i,*} , x\rangle| \leq \| \mathbf{A}_{i,*} \|_2 \leq \sqrt{n} \] from which the proposition follows using $\lambda_{\min}(\mathbf{A}) = - \epsilon n$. \end{proof} \begin{comment} \begin{proposition}\label{prop:spread} Suppose there exists a unit vector $x \in \R^n$ with $x^\top \mathbf{A} x \leq -\epsilon n$. Then there is a $y \in \R^n$, given by $y = x_S$ for some subset $S \subset [n]$ (where $x_S$ is $x$ with the coordinates not in $S$ set to $0$), which satisfies the following properties: \begin{enumerate} \item $y^\top \mathbf{A} y \leq -\frac{\epsilon n}{2} $ \item $\|y\|_\infty \leq \frac{6}{\epsilon\sqrt{ n}}$ \item $\|y\|_2 \leq 1$ \end{enumerate} \end{proposition} \begin{proof} First note that since $x$ is a unit vector, at most $ \epsilon^2 n/c^2$ of the coordinates $x_i$ of $x$ can be larger than $(c/\epsilon\sqrt{n})$ for any constant $c$. Fix $c$ to be sufficiently large, and let $H \subset [n]$ be the subset $H = \left\{ i \in [n] \; \mid \; |x_i| \geq \frac{c}{\epsilon \sqrt{n}} \right\}$.
Let $S = [n] \setminus H$. We first claim that if we set $y = x_S \in \R^n$, where $x_S$ is $x$ with the coordinates not in $S$ set equal to $0$, then we have $y^\top \mathbf{A} y \leq - \epsilon' n$ for some $\epsilon' = \Theta(\epsilon)$. Note \[ y^\top \mathbf{A} y = x^\top \mathbf{A} x -\left( \sum_{i,j \in H} \mathbf{A}_{i,j} x_{i} x_j\right)- \left(\sum_{i \in S, j \in H} \mathbf{A}_{i,j} x_{i} x_j \right) - \left(\sum_{i \in H, j \in S} \mathbf{A}_{i,j} x_{i} x_j\right) \] We bound the contribution of the last three terms. The first term can be written as $|x^\top_H \mathbf{A}_H x_H| \leq \|\mathbf{A}_H\|_2 \leq \|\mathbf{A}_H\|_F \leq |H| \leq \epsilon^2 n/c$. The second term can be written as $|\langle x_S, \mathbf{A} x_H\rangle| \leq \|x_S\|_2 \|\mathbf{A} x_H\|_2 \leq \|\mathbf{A}_{*,H}\|_2 \leq \|\mathbf{A}_{*,H}\|_F \leq \sqrt{|H| n} \leq \epsilon n/ c$, where we used Cauchy-Schwartz and a bound on the operator norm by the Frobenius, and the last term can be bounded symmetrically by $\epsilon n/c$. Setting $c = 6$ completes the proposition. \end{proof} \end{comment} Recall that our goal is to analyze the random variable $Z = x_T^\top \mathbf{A}_{T \times T} x_T= \sum_{i,j} x_i \mathbf{A}_{i,j} x_j \delta_i \delta_j$. To proceed, we bound the moments of $Z$. Our bound on these moments can be tightened as a function of the \textit{row and column contributions} of the target vector $x$, which we now define. \begin{definition}\label{def:RRCC1} Fix any $y \in \R^n$. Then for any $i \in [n]$, define the total row and column contributions of $i$ as $\mathcal{R}_i(y) = \sum_{j \in [n] \setminus i} y_i \mathbf{A}_{i,j}y_j$ and $\mathcal{C}_i(y) = \sum_{j \in [n] \setminus i} y_j \mathbf{A}_{j,i} y_i$ respectively. \end{definition}\noindent Notice from the above definition, we have $\sum_i \mathcal{R}_i(y) + \mathcal{C}_i(y) = 2\left(y^\top \mathbf{A} y - \sum_i \mathbf{A}_{i,i} y_i^2\right) $. 
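This identity is elementary but index-heavy, so a quick numerical sanity check may be useful; the following sketch (illustrative only, not part of the formal development) verifies it on a random instance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = rng.uniform(-1, 1, (n, n))
y = rng.standard_normal(n)

# R_i(y) = sum_{j != i} y_i A_{i,j} y_j ;  C_i(y) = sum_{j != i} y_j A_{j,i} y_i
R = np.array([y[i] * sum(A[i, j] * y[j] for j in range(n) if j != i) for i in range(n)])
C = np.array([y[i] * sum(y[j] * A[j, i] for j in range(n) if j != i) for i in range(n)])

lhs = (R + C).sum()
rhs = 2 * (y @ A @ y - np.sum(np.diag(A) * y**2))
assert np.isclose(lhs, rhs)
```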
\begin{fact}\label{fact:case1OPT} Let $x \in \R^n$ be the eigenvector associated with $\lambda_{\min}(\mathbf{A})$. Then we have $\mathcal{R}_i(x) + \mathcal{C}_i(x) \leq 0$ for all $i \in [n]$. \end{fact} \begin{proof} Suppose there was an $i$ with $\mathcal{R}_i(x) + \mathcal{C}_i(x) > 0$. Then setting $z = x_{[n] \setminus i}$ we have $z^\top \mathbf{A} z = \langle x , \mathbf{A} x\rangle - (\mathcal{R}_i(x) + \mathcal{C}_i(x)) - \mathbf{A}_{i,i} (x_i)^2$. Recall from Proposition \ref{prop:1diag} that we can assume $\mathbf{A}_{i,i} = 1$ for all $i$; thus it follows that $z^\top \mathbf{A} z < \langle x , \mathbf{A} x\rangle$, which contradicts the optimality of $x$. \end{proof} \noindent We now bound the expectation of the random quadratic form. \begin{proposition}[Expectation Bound]\label{prop:exp} Let $\mathbf{A} \in \R^{n \times n}$ be a matrix with $\|\mathbf{A}\|_\infty \leq 1$, and let $y \in \R^n$ be any vector with $\|y\|_2 \leq 1$ and $y^\top \mathbf{A} y < - \epsilon n$. Let $Z = \sum_{i,j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j$, where $\delta_1,\dots,\delta_n \sim \text{Bernoulli}(\frac{k}{n})$. Then if $k \geq 8/\epsilon$, we have $\ex{Z} \leq - \frac{\epsilon k^2}{ 4n }$. \end{proposition} \begin{proof} Let $c_{i,j} = \mathbf{A}_{i,j}y_i y_j$. First note that for any $i,j \in [n]$, the term $c_{i,j}$ is included in the sum with probability $k/n$ if $i = j$, and with probability $k^2/n^2$ if $i \neq j$.
So \begin{equation} \begin{split} \bex{Z} &= \sum_{i \neq j} \frac{k^2}{n^2} c_{i,j} + \sum_{i \in [n]} \frac{k}{n} c_{i,i}\\ &= \frac{k^2}{n^2} \left( \langle y ,\mathbf{A} y\rangle - \sum_{i \in [n]}\mathbf{A}_{i,i} y_i^2 \right) + \frac{ k}{n}\sum_{i \in [n]}\mathbf{A}_{i,i} y_i^2 \\ & \leq - \frac{\epsilon k^2}{ 2n } + \left(\frac{ k}{n} + \frac{ k^2}{n^2}\right) \sum_{i \in [n]} y_i^2\\ & \leq - \frac{\epsilon k^2}{ 2n } + \frac{2k}{n} \leq - \frac{\epsilon k^2}{ 4n }\\ \end{split} \end{equation} Where in the last inequality, we assume $k \geq 8 /\epsilon$. \end{proof}\noindent Next, we bound the variance of $Z$. We defer the proof of the following Lemma to Section \ref{sec:variance}. \begin{lemma}[Variance Bound]\label{lem:var} Let $\delta_1,\dots,\delta_n \sim \text{Bernoulli}(\frac{k}{n})$. Let $y \in \R^n$ be any vector such that $\|y\|_2 \leq 1, \|y\|_\infty \leq \frac{1}{\epsilon \sqrt{n}}$, and $y^\top \mathbf{A} y = - \epsilon n$, where $\mathbf{A} \in \R^{n \times n}$ satisfies $\|\mathbf{A}\|_\infty \leq 1$. Further suppose that $\mathcal{R}_i(y) + \mathcal{C}_i(y) \leq 0$ for each $i \in [n]$. Then, assuming $k \geq 6/\epsilon$, we have \[\mathbf{Var}\left[\sum_{i,j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \right] \leq O\left(\frac{k^3}{n^2}\right) \] Moreover, if the tighter bound $\|y\|_\infty \leq \frac{\alpha}{\epsilon \sqrt{n}}$ holds for some $\alpha \leq 1$, we have \[\mathbf{Var}\left[\sum_{i,j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j\right] \leq O\left(\frac{k^2}{n^2}+ \frac{\alpha k^3}{n^2} \right) \] \end{lemma} We note that the variance of the random quadratic form can be improved if we have tighter bounds on certain properties of the target vector $y$. We demonstrate this fact in the following Corollary, which we will use in Section \ref{sec:improving}. Note that the assumptions of Corollary \ref{cor:var} differ in several minor ways from those of Lemma \ref{lem:var}. 
For instance, we do not require $k \geq 6/\epsilon$ (we note that this assumption was required only to simplify the expression in Lemma \ref{lem:var}). Also notice that we do not bound the diagonal terms in Corollary \ref{cor:var}. We defer the proof of Corollary \ref{cor:var} to Section \ref{sec:variance}. \begin{corollary}[Tighter Variance Bound]\label{cor:var} Let $\delta_1,\dots,\delta_n \sim \text{Bernoulli}(\frac{k}{n})$. Let $\mathbf{A} \in \R^{n \times n}$ with $\|\mathbf{A}\|_\infty \leq 1$ be any matrix and $y$ a vector such that $|y^\top\mathbf{A} y| \leq c_1\epsilon n$ for some value $c_1>0$, and such that $\|y\|_\infty \leq \frac{\alpha}{\epsilon \sqrt{n}}$ for some $\alpha > 0$. Let $\mathbf{Z} \in \R^n$ be defined by $\mathbf{Z}_i = \mathcal{R}_i(y) + \mathcal{C}_i(y)$ for $i \in [n]$, and suppose we have $\|\mathbf{Z}\|_2^2 \leq c_2 \epsilon n$. Then we have \[\mathbf{Var}\left[\sum_{i\neq j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \right] \leq O\left(\frac{k^2}{n^2} + \frac{c_1^2 k^4 \epsilon^2}{n^2} + \frac{(c_1 +c_2 )\epsilon k^3}{n^2} + \frac{\alpha^2 k^3}{n^2} \right) \] \end{corollary} We now observe that the variance computations from Lemma \ref{lem:var} immediately give rise to an $O(1/\epsilon^3)$ algorithm. \begin{theorem} There is a non-adaptive sampling algorithm which queries $O(\epsilon^{-3})$ entries of $\mathbf{A}$, and distinguishes the case that $\mathbf{A}$ is PSD from the case that $\lambda_{\min}(\mathbf{A}) < - \epsilon n$ with probability $2/3$. \end{theorem} \begin{proof} Let $x \in \R^n$ be the eigenvector associated with $\lambda_{\min}(\mathbf{A}) = - \epsilon n$ (recall that we can assume equality), and let $Z_1,\dots,Z_d$ be independent repetitions of the above process, with $k = 10/\epsilon $ and $d = 3840/\epsilon$. Let $Z = \frac{1}{d}\sum_{i=1}^d Z_i$.
Then $\bvar{Z} \leq \frac{6}{d}\frac{k^3}{n^2}$ by Lemma \ref{lem:var}, where we used the bounds on $\|x\|_\infty$ from Proposition \ref{prop:spread} and the property that $\mathcal{R}_i(x) + \mathcal{C}_i(x) \leq 0$ for all $i$ from Fact \ref{fact:case1OPT} to satisfy the assumptions of Lemma \ref{lem:var}. By Chebyshev's inequality: \begin{equation} \begin{split} \bpr{ Z \geq -\frac{\epsilon k^2}{4n} +\frac{\epsilon k^2}{8n} } &\leq \left(\frac{64 n^2}{\epsilon^2 k^4}\right)\left(\frac{6k^3}{dn^2}\right) \\ & \leq \frac{1}{10 \epsilon k} \\ & \leq \frac{1}{100} \end{split} \end{equation} It follows that with probability $99/100$, the average of the $Z_i$'s will be negative, and thus at least one of the $Z_i$'s must be negative, so the submatrix corresponding to this $Z_i$ is not PSD. The total query complexity is $O(k^2 d) = O(\epsilon^{-3})$. \end{proof} \subsection{Variance Bounds} \label{sec:variance} \input{Variance} \subsection{Improving the complexity to \texorpdfstring{$\tilde{O}(1/\epsilon^{2})$}{epsilon square} }\label{sec:improving} \input{LInfSecondPart} \subsubsection{Case 1: Varied Subsampling and Eigenvector Switching} In this section, we analyze \textbf{Case 1}, which specifies that $x_S \mathbf{A} x_{T_a} + x_{T_a} \mathbf{A} x_S \leq -\frac{\epsilon n }{10\log(1/\epsilon)}$ for some $T_a$ such that $2^a \geq 10^6 \zeta^3$, where $\zeta = \Theta(\log^2(1/\epsilon))$ is chosen with a sufficiently large constant. Recall here that $x \in \R^n$ is the (unit) eigenvector associated with $\lambda_{\min}(\mathbf{A}) = - \epsilon n$. We now fix this value $a$ associated with $T_a$. In order to find a principal submatrix $\mathbf{A}_{T \times T}$ that is not PSD for some sampled subset $T \subset [n]$, we will need to show that $T$ intersects $T_a$ in at least one coordinate.
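As a quick aside on this intersection event: when each index enters $T$ independently with probability $\delta$, we have $\Pr[T \cap T_a \neq \emptyset] = 1 - (1-\delta)^{|T_a|}$, which is a positive constant once $\delta |T_a| = \Omega(1)$. A small simulation with synthetic sizes (illustrative only; only the restriction of the sampling to $T_a$ matters) confirms this:

```python
import numpy as np

rng = np.random.default_rng(0)
size_Ta, delta = 50, 1 / 50   # synthetic: |T_a| = 50, sampling rate 1/50
trials = 20_000

# T intersects T_a iff at least one of the |T_a| independent
# Bernoulli(delta) coin flips restricted to T_a comes up heads.
hits = sum((rng.random(size_Ta) < delta).any() for _ in range(trials))

exact = 1 - (1 - delta) ** size_Ta   # about 0.636 for these sizes
assert abs(hits / trials - exact) < 0.02
```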
As discussed in Section \ref{sec:techlinf}, we will need to switch our analysis from $x$ to a different vector $y$, in order to have $y^\top_T \mathbf{A}_{T \times T} y_T < 0$ with non-negligible probability conditioned on $|T \cap T_a| \geq 1$. To construct the appropriate vector $y$, we will first proceed by proving several propositions which bound how the quadratic form $x^\top \mathbf{A} x$ changes as we modify or remove some of the coordinates of $x$. For the following propositions, notice that by definition of $T_b$, using the fact that $\|x\|_2^2 \leq 1$, we have that $|T_b| \leq \frac{\epsilon n}{100 \cdot 2^{b-1}}$ for any $b \geq 1$, which in particular holds for $b=a$. \begin{proposition}\label{prop:rectangularcontribution} Let $\mathbf{A} \in \R^{n \times n}$ satisfy $\|\mathbf{A}\|_\infty \leq 1$. Let $S,T \subset [n]$, and let $v \in \R^n$ be any vector such that $\|v\|_2 \leq 1$. Then $|v_S^\top \mathbf{A} v_T| \leq \sqrt{|S|\cdot | T|}$ \end{proposition} \begin{proof} We have $|v_S^\top \mathbf{A} v_T| = |\sum_{i \in S} \sum_{j \in T} v_i \mathbf{A}_{i,j}v_j|\leq \sum_{i \in S}|v_i| \sum_{j \in T} |v_j| \leq \sum_{i \in S}|v_i| \|v_T\|_1 \leq \|v_S\|_1\|v_T\|_1\allowbreak \leq \sqrt{|S| |T|}$ as needed. \end{proof} \begin{proposition}\label{prop:rectangularcontribution2} Let $\mathbf{A} \in \R^{n \times m}$ satisfy $\|\mathbf{A}\|_\infty \leq 1$ for any $n,m$, and let $v \in \R^{n}, u \in \R^{m}$ satisfy $\|u\|_2^2, \|v\|_2^2 \leq 1$. Then \[\sum_{j=1}^m \left(\sum_{i=1}^{n} v_i \mathbf{A}_{i,j} u_j \right)^2 \leq n\] \end{proposition} \begin{proof} We have $\left(\sum_{i=1}^{n} v_i \mathbf{A}_{i,j} u_j \right)^2 \leq u_j^2\left(\sum_{i=1}^{n} |v_i| \right)^2 \leq u_j^2 \|v\|_1^2$, so the sum can be bounded by $\sum_{j=1}^m u_j^2 \|v\|_1^2 = \|v\|_1^2 \|u\|_2^2 \leq \|v\|_1^2 \leq n$ as needed. \end{proof} \begin{proposition}\label{prop:Sbound} Let $x$ be as defined above. Then we have $|\langle x_S, \mathbf{A} x_S\rangle| \leq 10 \epsilon n$.
\end{proposition} \begin{proof} Suppose $\langle x_S, \mathbf{A} x_S\rangle= C\epsilon n$ for a value $C$ with $|C|>10$. Note that $|\langle x_{[n] \setminus S}, \mathbf{A} x_{[n] \setminus S} \rangle| \leq \frac{\epsilon n}{100}$ by Proposition \ref{prop:rectangularcontribution}, using that $|[n] \setminus S| = |\cup_{b \geq 1} T_b| \leq \frac{\epsilon n}{100}$ (here we use the fact that at most $\frac{\epsilon n}{100}$ coordinates of a unit vector can have squared value larger than $\frac{100}{\epsilon n}$). If $C>0$, then we must have that $(\langle x_S ,\mathbf{A} x_{[n] \setminus S} \rangle +\langle x_{[n] \setminus S}, \mathbf{A} x_S\rangle ) \leq - ( C +99/100) \epsilon n$ for us to have that $\langle x , \mathbf{A} x\rangle = - \epsilon n$ exactly. Thus if $C$ is positive and larger than $10$, it would follow that by setting $v=x_S/2 + x_{[n] \setminus S}$, we would obtain a vector $v$ with $\|v\|_2 \leq 1$ such that $v$ has smaller quadratic form with $\mathbf{A}$ than $x$, namely with $v^\top \mathbf{A} v \leq - ( C +99/100) \epsilon n /2 + \epsilon C n/4 + n \epsilon / 100 < -\epsilon n$ using that $C > 10$, which contradicts the optimality of $x$ as the eigenvector for $\lambda_{\min}(\mathbf{A})$. Furthermore, if $C < -10$, then $x_S^\top \mathbf{A} x_S < - 10 \epsilon n$, which again contradicts the optimality of $x$. \end{proof} Now recall that the total row and column contributions of $i$ are defined as $\mathcal{R}_i(x) = \sum_{j \in [n] \setminus i} x_i \mathbf{A}_{i,j} x_j$ and $\mathcal{C}_i(x) = \sum_{j \in [n] \setminus i} x_j \mathbf{A}_{j,i} x_i$ respectively. In the remainder of the section, we simply write $\mathcal{R}_i = \mathcal{R}_i(x)$ and $\mathcal{C}_i = \mathcal{C}_i(x)$. We now define the contribution of $i$ within the set $S \subset [n]$. \begin{definition} Let $S \subset [n]$ be as defined above.
Then for any $i \in [n]$, define the row and column contributions of $i$ within $S$ as $\mathcal{R}_{i}^S = \sum_{j \in S \setminus i} x_i \mathbf{A}_{i,j} x_j$ and $\mathcal{C}_{i}^S = \sum_{j \in S \setminus i} x_j \mathbf{A}_{j,i} x_i$ respectively. \end{definition}\noindent Observe that from the above definition, we have $\sum_{i \in T_a} (\mathcal{R}_{i}^S + \mathcal{C}_{i}^S)= x_S^\top \mathbf{A} x_{T_a} + x_{T_a}^\top \mathbf{A} x_S \leq - \frac{\epsilon n}{10 \log(1/\epsilon)}$, where the inequality holds by definition of Case 1. \begin{proposition}\label{propLl1rowcolumnbound} We have $\sum_{i \in S} (\mathcal{R}_{i}^S + \mathcal{C}_{i}^S)^2 \leq 1601 \epsilon n$. \end{proposition} \begin{proof} Let $z^S ,z,z^-\in \R^{|S|}$ be vectors defined for $i \in S$ as $z_i^S = \mathcal{R}_{i}^S+ \mathcal{C}_{i}^S$, $z_i = \mathcal{R}_i + \mathcal{C}_i$, and $z^- = z - z^S$. Notice that our goal is to bound $\|z^S\|_2^2 $, which by the triangle inequality satisfies $\|z^S\|_2^2 \leq 2\left(\|z\|_2^2 + \|z^-\|_2^2\right)$. First note that \begin{equation} \begin{split} \|z^-\|_2^2 &= \sum_{i \in S} \left(\sum_{j \notin S} x_i \mathbf{A}_{i,j} x_j + \sum_{j \notin S} x_j \mathbf{A}_{j,i} x_i \right)^2 \\ & \leq 2 \sum_{i \in S} \left(\sum_{j \notin S} x_i \mathbf{A}_{i,j} x_j \right)^2 + 2 \sum_{i \in S} \left(\sum_{j \notin S} x_j \mathbf{A}_{j,i} x_i \right)^2\\ \end{split} \end{equation} Using that $|[n] \setminus S| < \epsilon n/100$, we have by Proposition \ref{prop:rectangularcontribution2} that $\sum_{i \in S} \left(\sum_{j \notin S} x_i \mathbf{A}_{i,j} x_j \right)^2 \leq \epsilon n / 100$, so $\|z^-\|_2^2 \leq \epsilon n /25$. We now bound $\|z\|_1 = \sum_{i \in S} | \mathcal{R}_i + \mathcal{C}_i|$. By Fact \ref{fact:case1OPT}, we have $\mathcal{R}_i + \mathcal{C}_i \leq 0$ for all $i \in [n]$, which means that $\|z\|_1 \leq \sum_{i \in [n]} | \mathcal{R}_i + \mathcal{C}_i| = |2 \langle x , \mathbf{A} x \rangle - 2\sum_{i \in [n]} \mathbf{A}_{i,i} (x_i)^2| \leq 2 \epsilon n$.
Next, we bound $\|z\|_\infty$. Notice that $|\mathcal{R}_i + \mathcal{C}_i |= 2| x_i \langle \mathbf{A}_{i,*}, x \rangle - \mathbf{A}_{i,i}(x_i)^2 | \leq 2\epsilon n (x_i)^2 + 2\mathbf{A}_{i,i}(x_i)^2 < 4 \epsilon n (x_i)^2 $, using that $\mathbf{A} x = - \epsilon n x$, since $x$ is an eigenvector of $\mathbf{A}$. Since $(x_i)^2 \leq \frac{100}{\epsilon n}$ for $i\in S$, it follows that $\|z\|_\infty \leq 400$ as needed. It follows that $\|z\|_2^2$ is maximized by having $ 2 \epsilon n/400$ coordinates equal to $400$, giving $\|z\|_2^2 \leq (2 \epsilon n/400) \cdot (400)^2 = 800 \epsilon n$. It follows then that $\|z^S\|_2^2 \leq 1601 \epsilon n$ as needed. \end{proof} \paragraph{Eigenvector Setup: } We now define the ``target'' direction $y \in \R^n$ which we will use in our analysis for \textbf{Case 1}. First, we will need the following definitions. Let \[ D_a^p = \left\{ t \in T_a \; : \; -\frac{ 2^{p+1} 2^{a}}{ \log(1/\epsilon)} \leq \mathcal{R}_t^S + \mathcal{C}_t^S \leq -\frac{ 2^p 2^{a}}{ \log(1/\epsilon)} \right\}\] Define the \textit{fill} $\beta$ of $T_a$ as $\beta = 2^{-p}$, where $p \geq 1$ is the smallest value of $p$ such that $-|D_a^p|\cdot \frac{ 2^p 2^{a}}{ \log(1/\epsilon)} \leq - \frac{\epsilon n}{40 \log^2(1/\epsilon)}$. Note that at least one such $p$ with $1 \leq p \leq \log(1/\epsilon)$ must exist. Let $T_a^* = D_a^p$, where $\beta = 2^{-p}$. Observe that $x_S^\top \mathbf{A} x_{T_a^*} + x_{T_a^*}^\top \mathbf{A} x_S \leq- \frac{\epsilon n}{40 \log^2(1/\epsilon)}$. Finally, we define our target ``eigenvector'' $y$ as \begin{equation}\label{eqn:eigen} y = x_S + \zeta\beta \left(2^{-a} \cdot x_{T_a}\right) \end{equation} where $\zeta = \Theta(\log^2(1/\epsilon))$ is as above, and we also define our target submatrix subsampling size as $\lambda = \frac{2000\beta^2 \zeta^2 \log(1/\epsilon)}{2^a \epsilon}$.
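To make the setup above concrete, the following Python sketch (with our own illustrative names, not part of the paper's formal algorithm) computes the fill $\beta$ and the bucket $T_a^*$ from given within-$S$ contributions $\mathcal{R}_t^S + \mathcal{C}_t^S$, builds the target vector $y$ of Equation \eqref{eqn:eigen}, and evaluates the quadratic form on one sub-sampled index set; all inputs (the level set \texttt{T\_a}, the set \texttt{S}, and the array \texttt{contrib}) are assumed to be supplied by the caller.

```python
import math
import random

# Illustrative sketch (hypothetical helper names); contrib[t] plays the
# role of R_t^S + C_t^S for t in T_a.
def fill_and_target(x, S, T_a, contrib, a, eps, zeta):
    n = len(x)
    L = math.log2(1.0 / eps)
    for p in range(1, int(L) + 1):
        lo = -(2 ** (p + 1)) * (2 ** a) / L
        hi = -(2 ** p) * (2 ** a) / L
        D = [t for t in T_a if lo <= contrib[t] <= hi]      # the bucket D_a^p
        # beta = 2^{-p} for the smallest p whose bucket is "heavy" enough
        if len(D) * (2 ** p) * (2 ** a) / L >= eps * n / (40 * L ** 2):
            beta = 2.0 ** (-p)
            y = [0.0] * n          # y = x_S + zeta * beta * 2^{-a} * x_{T_a}
            for i in S:
                y[i] = x[i]
            for i in T_a:
                y[i] = zeta * beta * x[i] / (2 ** a)
            return beta, D, y
    return None

def sampled_quadratic_form(A, y, lam, rng):
    # keep each index independently with probability lam/n, as in the analysis
    n = len(y)
    T = [i for i in range(n) if rng.random() < lam / n]
    return T, sum(y[i] * A[i][j] * y[j] for i in T for j in T)
```

For instance, with $\epsilon = 1/4$ and ten coordinates of $T_a$ all contributing $-3$, the first dyadic bucket already qualifies and the sketch returns $\beta = 1/2$.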
First, we prove that for a random submatrix $\mathbf{A}_{T \times T}$, where each $i \in [n]$ is sampled and added to $T$ with probability $\lambda/n$, we have that $y_T^\top \mathbf{A}_{T \times T} y_T$ is negative in expectation conditioned on $|T \cap T_a^*| \geq 1$. \begin{lemma}\label{lem:conditionalex} Suppose we are in Case 1 with $T_a$ contributing such that $2^a \geq 10^6 \zeta^3$. Let $\delta_i$ be an indicator variable that we sample coordinate $i$, with $\ex{\delta_i} = \frac{\lambda}{n}$ and $\lambda = \frac{2000\beta^2 \zeta^2 \log(1/\epsilon)}{2^a \epsilon}$. Then if $y = x_S + \zeta\beta (2^{-a} x_{T_a})$ where $\zeta \geq 100 \log^2(1/\epsilon)$, and if $t \in T_a^*$, then \[\mathbb{E} \left[ \sum_{i,j \in S \cup \{t\}} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \; \big| \; \delta_t = 1 \right] \leq - \frac{50 \zeta \lambda }{ n} \] \end{lemma} \begin{proof} First observe that $\ex{ \sum_{i \in S} y_i^2 \mathbf{A}_{i,i} \delta_i} \leq \frac{\lambda}{n}\|x\|_2^2 \leq \frac{\lambda}{n}$ since $x$ is a unit vector.
Note that $y_S = x_S$ by construction, so we can use Proposition \ref{prop:Sbound} to bound $|\langle y_S , \mathbf{A} y_S\rangle |$ by $10 \epsilon n$, which gives \begin{equation} \begin{split} \mathbb{E} \left[ \sum_{i,j \in S \cup \{t\}} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \; \big| \; \delta_t = 1 \right] &\leq \frac{\lambda^2}{n^2}\left(\langle y_S , \mathbf{A} y_S \rangle- \sum_{i \in S} y_i^2 \mathbf{A}_{i,i} \right)+ \frac{\lambda}{n} + \frac{\lambda}{n}\sum_{i \in S} (\mathbf{A}_{t,i} + \mathbf{A}_{i,t}) y_i y_t +y_t^2 \\ & \leq \frac{20 \lambda^2 \epsilon }{n} +\frac{\lambda }{n}\left(1 + \frac{ \zeta \beta }{2^{a}} \left(\mathcal{R}_t^S + \mathcal{C}_t^S\right) \right)+ \left(\frac{\zeta \beta}{2^a}\right)^2 \left(\frac{100 \cdot 2^a}{\epsilon n}\right)\\ & \leq \frac{20 \lambda^2 \epsilon }{n} +\frac{\lambda }{n}\left(1 + \frac{ \zeta \beta }{2^{a}} \left(\mathcal{R}_t^S + \mathcal{C}_t^S\right) \right)+ \frac{100\zeta^2 \beta^2}{2^a \epsilon n}\\ \end{split} \end{equation} Now by definition of $t \in T_a^*$, we have $\left(\mathcal{R}_t^S + \mathcal{C}_t^S\right) \leq- \frac{2^a}{\beta \log(1/\epsilon)}$. Thus \begin{equation} \begin{split} \mathbb{E} \left[ \sum_{i,j \in S \cup \{t\}} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \; \big| \; \delta_t = 1 \right]& \leq \frac{20 \lambda^2 \epsilon }{n} +\frac{\lambda }{n}\left(1 -\frac{\zeta }{ \log(1/\epsilon)} \right)+ \frac{ 100\zeta^2 \beta^2}{2^a \epsilon n}\\ \end{split} \end{equation} Setting $\zeta > 100 \log^2(1/\epsilon)$, we first note that $\frac{\lambda }{n}\left(1 -\frac{\zeta }{ \log(1/\epsilon)} \right) \leq -\frac{99\lambda \zeta }{100n \log(1/\epsilon)}$. Since $$\lambda = \frac{2000\beta^2 \zeta^2 \log(1/\epsilon)}{2^a \epsilon} \leq \frac{2000\beta^2 \log(1/\epsilon)}{10^6 \zeta \epsilon } \leq \frac{1}{\zeta \epsilon}$$ it follows that $\frac{20 \lambda^2 \epsilon}{n} \leq \frac{20 \lambda }{n \zeta} \leq \frac{20 \lambda }{n} < \frac{\lambda \zeta }{ 5\log(1/\epsilon) n}$.
Thus $\frac{20 \lambda^2 \epsilon}{n} -\frac{99\lambda \zeta }{100n \log(1/\epsilon)} \leq -\frac{3\lambda \zeta }{4n \log(1/\epsilon)}$. So we can simplify and write \begin{equation} \begin{split} \mathbb{E} \left[ \sum_{i,j \in S \cup \{t\}} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \; \big| \; \delta_t = 1 \right] &\leq -\frac{3\lambda \zeta }{4n \log(1/\epsilon)}+ \frac{100 \zeta^2 \beta^2}{2^a \epsilon n}\\ & =-\frac{1500\beta^2 \zeta^3 }{2^a\epsilon n }+ \frac{100 \zeta^2 \beta^2}{2^a \epsilon n}\\ & \leq -\frac{1400\beta^2 \zeta^3 }{2^a\epsilon n }\\ & \leq - \frac{50 \zeta \lambda }{n} \\ \end{split} \end{equation} as desired. \end{proof} \begin{lemma}\label{lem:Spart} Let $\delta_i$ be an indicator variable with $\ex{\delta_i} = \lambda/n$. Then \[\mathbf{Pr}\left[ \left|\sum_{i , j\in S} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j - \bex{\sum_{i , j\in S} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j} \right| \geq C\frac{\lambda }{n} \right] \leq \frac{1}{50}\] where $C>0$ is some constant. \end{lemma} \begin{proof} We can apply Corollary \ref{cor:var}, where we can set the values of $c_1,c_2$ to be bounded by constants by the results of Propositions \ref{prop:Sbound} and \ref{propLl1rowcolumnbound}, and by definition of the set $S$ we can set $\alpha \leq \sqrt{\epsilon}$ for the $\alpha$ in Corollary \ref{cor:var}, and using that $\lambda \leq O(1/\epsilon)$, we obtain: \[ \mathbf{Var} \left[ \sum_{i \neq j\in S} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \right] \leq \frac{C\lambda^2}{n^2} \] for some constant $C$. Now note that $\ex{ \sum_{i \neq j\in S} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j} \leq \frac{30 \lambda^2\epsilon }{n} \leq O(\frac{\lambda}{n})$, thus by Chebyshev's inequality, with probability $99/100$, we have $|\sum_{i \neq j\in S} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j| \leq O( \frac{ \lambda}{n})$.
Moreover, note that $\sum_{i \in S} \mathbf{A}_{i,i}y_i^2 \delta_i$ can be assumed to be a non-negative random variable using that $\mathbf{A}_{i,i} = 1$; its expectation is at most $\frac{\lambda}{n}$, and by Markov's inequality it is at most $100\lambda/n$ with probability $99/100$. Thus with probability $99/100$ we have $|\sum_{i \in S} y_i^2 \mathbf{A}_{i,i} \delta_i - \ex{\sum_{i \in S} y_i^2 \mathbf{A}_{i,i} \delta_i}| \leq 101 \lambda/n$. By a union bound, we have: \[\mathbf{Pr}\left[ \left|\sum_{i , j\in S} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j - \bex{\sum_{i , j\in S} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j} \right| \geq C\frac{\lambda }{n} \right] \leq \frac{1}{50}\] where $C = 150$. \end{proof} \begin{lemma}\label{lem:tpart} Fix any $t \in T_a^*$. Then \[\mathbf{Pr}\left[ \left|\sum_{i \in S}y_t(y_i \mathbf{A}_{i,t} + \mathbf{A}_{t,i} y_i) \delta_i - \ex{ \sum_{i \in S}y_t(y_i \mathbf{A}_{i,t} + \mathbf{A}_{t,i} y_i) \delta_i} \right|\geq \frac{10\lambda}{n} \right] \leq \frac{1}{100}\] \end{lemma} \begin{proof}By independence of the $\delta_i$'s, \begin{equation} \begin{split} \bvar{ \sum_{i \in S} (y_i \mathbf{A}_{i,t} y_t + y_t \mathbf{A}_{t,i} y_i) \delta_i } & \leq \frac{\lambda}{n} \sum_{i \in S} (y_i \mathbf{A}_{i,t} y_t + y_t \mathbf{A}_{t,i} y_i)^2 \\ & \leq \frac{\lambda}{n} \sum_{i \in S}4 y_t^2 y_i^2 \\ &\leq \frac{4\lambda}{n} (\zeta \beta 2^{-a})^2 \frac{100 \cdot 2^a}{\epsilon n} \\ & = \frac{4\lambda}{n} \left(\frac{100 \zeta^2 \beta^2}{2^a\epsilon n} \right) \\ & \leq \frac{2\lambda^2 }{5n^2} \\ \end{split} \end{equation} Now since $\ex{ \sum_{i \in S}(y_i \mathbf{A}_{i,t} y_t + y_t \mathbf{A}_{t,i} y_i) \delta_i} \leq - \frac{\lambda}{n}\cdot\frac{\zeta}{\log(1/\epsilon)} \leq -\frac{100\lambda}{n}$, the desired result follows by Chebyshev's inequality. \end{proof} \begin{lemma} Fix any $t \in T_a^*$.
Then \[\bpr{ \sum_{i,j \in S \cup \{t\}} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \leq \frac{-25\zeta \lambda}{n} \; \big| \; \delta_t = 1} \geq 24/25\] \end{lemma} \begin{proof} Conditioned on $\delta_t = 1$, we have \begin{equation} \begin{split} &\left| \sum_{i,j \in S \cup \{t\}} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j - \bex{\sum_{i,j \in S \cup \{t\}} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \; \big| \; \delta_t = 1 } \right| \\ & \leq \left|\sum_{i \in S}y_t(y_i \mathbf{A}_{i,t} + \mathbf{A}_{t,i} y_i) \delta_i - \ex{ \sum_{i \in S}y_t(y_i \mathbf{A}_{i,t} + \mathbf{A}_{t,i} y_i) \delta_i} \right| \\ & + \left|\sum_{i , j\in S} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j - \bex{\sum_{i , j\in S} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j} \right| \\ & \leq C \frac{\lambda}{n} \end{split} \end{equation} for some constant $C \leq 200$, where the last bound follows from Lemmas \ref{lem:Spart} and \ref{lem:tpart} with probability $24/25$. Since $\bex{\sum_{i,j \in S \cup \{t\}} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \; \big| \; \delta_t = 1 } \leq - \frac{50 \zeta \lambda }{n}$ by Lemma \ref{lem:conditionalex}, by scaling $\zeta$ by a sufficiently large constant, the result follows. \end{proof} \begin{theorem}\label{thm:case1} Suppose we are in Case 1 with $T_a$ contributing such that $2^a \geq 10^6 \zeta^3$. Then there is an algorithm that queries at most $O(\frac{\log^7(1/\epsilon)}{\epsilon^2})$ entries of $\mathbf{A}$, and finds a principal submatrix of $\mathbf{A}$ which is not PSD with probability at least $9/10$ in the NO case. The algorithm always returns YES on a YES instance. \end{theorem} \begin{proof} By the above, we just need to sample an expected-size $O(\lambda^2)$ submatrix from the conditional distribution of having sampled at least one entry from $T_a^*$.
Since $|T_a^*| \geq \frac{\beta}{40} \cdot \frac{\epsilon n}{2^a \log(1/\epsilon)}$, and since $\lambda = \Theta(\frac{\beta^2 \zeta^2 \log(1/\epsilon)}{2^a \epsilon})$, we see that this requires a total of $k$ samples of expected size $O(\lambda^2)$, where \begin{equation} \begin{split} k &= (n/|T_a^*|) /\lambda \leq \left(\frac{40 \cdot 2^a \log(1/\epsilon)}{ \beta \epsilon }\right) \left(\frac{2^a \epsilon }{2000 \beta^2 \zeta^2 \log(1/\epsilon)} \right) \\ &\leq 10 \frac{2^{2a}}{ \beta^3 \zeta^2 } \end{split} \end{equation} Thus the total complexity is $O(k \lambda^2)$, and we have \begin{equation} \begin{split} k \lambda^2 &\leq 10 \frac{2^{2a}}{ \beta^3 \zeta^2 } \cdot O\left(\frac{\beta^4 \zeta^4 \log^2(1/\epsilon)}{2^{2a} \epsilon^2}\right) \\ &\leq O\left(\frac{\beta \zeta^2 \log^2(1/\epsilon )}{\epsilon^2}\right) \\ & = O\left(\frac{\zeta^2 \log^2(1/\epsilon)}{\epsilon^2}\right) \end{split} \end{equation} where we use the fact that we can set $\zeta = O(\log^2(1/\epsilon))$. Finally, note that we do not know $\beta$ or $2^a$, but we can guess the value of $\lambda$ in powers of $2$, which is at most $O(\frac{\zeta^2}{\epsilon^2})$, and then set $k$ to be the value such that $k \lambda^2$ is within the above allowance. This blows up the complexity by a $\log(1/\epsilon)$ factor to do the guessing. \end{proof} \subsubsection{Case 2: Spread Negative Mass and Main Theorem} In the prior section, we saw that if the quadratic form $x^\top \mathbf{A} x$ satisfies the condition for being in Case 1, we could obtain a $\tilde{O}(1/\epsilon^2)$ query algorithm for finding a principal submatrix $\mathbf{A}_{T \times T}$ such that $y^\top \mathbf{A}_{T \times T} y < 0$ for some vector $y$. Now recall that $S = \{ i \in [n] \; : \; |x_i|^2 \leq \frac{100}{\epsilon n } \}$, and let $T_a = \{ i \in [n] \; : \; \frac{100 \cdot 2^{a-1}}{\epsilon n} \leq |x_i|^2 \leq \frac{ 100 \cdot 2^{a}}{\epsilon n } \}$ for $a \geq 1$.
Recall that the definition of Case 1 was that $x_S^\top \mathbf{A} x_{T_a} + x_{T_a}^\top \mathbf{A} x_S \leq -\epsilon n /(10\log(1/\epsilon))$ for some $2^a \geq 10^6 \zeta^3$. In this section, we demonstrate that if this condition does not hold, then we will also obtain a $\tilde{O}(1/\epsilon^2)$ query algorithm for the problem. Thus, suppose now that we are in Case 2; namely, that $x_S^\top \mathbf{A} x_{T_a} + x_{T_a}^\top \mathbf{A} x_S > -\epsilon n /(10\log(1/\epsilon))$ for all $2^a \geq 10^6 \zeta^3$. Now let $T^+ = \cup_{2^a > 10^6 \zeta^3} T_a$ and let $T^- = \cup_{2^a \leq 10^6 \zeta^3} T_a$. Let $S^* = S \cup T^-$. We now observe an important fact, which states that if we are not in Case 1, then $x_{S^*}^\top \mathbf{A} x_{S^*}$ contributes a substantial fraction of the negative mass of the quadratic form. \begin{fact}\label{fact:case2} Suppose we are in Case 2: meaning that $x_S^\top \mathbf{A} x_{T_a} + x_{T_a}^\top \mathbf{A} x_S > -\epsilon n /(10\log(1/\epsilon))$ for all $2^a \geq 10^6 \zeta^3$. Then we have $x_{S^*}^\top \mathbf{A} x_{S^*} \leq - \epsilon n/2$. \end{fact} \begin{proof} Notice that the Case 2 assumption implies that $x_S^\top\mathbf{A} x_{T^+} + x_{T^+}^\top \mathbf{A} x_S \geq - \epsilon n / 10$, since there are at most $\log(1/\epsilon)$ level sets included in $T^+$ by Proposition \ref{prop:spread}. Note also that $|x_{T^+}^\top \mathbf{A} x_{T^+}| \leq \epsilon n 10^{-6}/ \zeta^3$ and $|x_{T^-}^\top \mathbf{A} x_{T^+} + x_{T^+}^\top \mathbf{A} x_{T^-}| \leq 2\sqrt{|T^-| |T^+|} \leq \epsilon n /100$ by Proposition \ref{prop:rectangularcontribution}.
Thus, since $x^\top\mathbf{A} x \leq - \epsilon n$ to begin with, it follows that we must have \begin{equation} \begin{split} x_{S^*}^\top \mathbf{A} x_{S^*} &= x^\top \mathbf{A} x -\left( x_S^\top\mathbf{A} x_{T^+} + x_{T^+}^\top \mathbf{A} x_S\right) - x_{T^+}^\top \mathbf{A} x_{T^+} - \left(x_{T^-}^\top \mathbf{A} x_{T^+} + x_{T^+}^\top \mathbf{A} x_{T^-} \right)\\ & \leq - \epsilon n +\frac{\epsilon n}{10} + \frac{\epsilon n \cdot 10^{-6}}{\zeta^3} + \frac{\epsilon n}{100} \\ &< - \epsilon n/2 \\ \end{split} \end{equation} \end{proof} We now proceed by analyzing the result of sampling a principal submatrix from the quadratic form $x_{S^*}^\top \mathbf{A} x_{S^*}$, which by the prior fact is already sufficiently negative. Specifically, we will demonstrate that the variance of the standard estimator from Lemma \ref{lem:var}, and specifically Corollary \ref{cor:var}, is already sufficiently small to allow a single randomly chosen $O(1/\epsilon) \times O(1/\epsilon)$ principal submatrix of $\mathbf{A}$ to have negative quadratic form with $x_{S^*}$ with good probability. In order to place a bound on the variance of this estimator and apply Corollary \ref{cor:var}, we will need to bound the row and column contributions of the quadratic form $x_{S^*}^\top \mathbf{A}_{S^* \times S^*}x_{S^*}$, which we now formally define. \begin{definition} For $i \in [n]$, define the row and column contributions of $i$ within $S^*$ as $\mathcal{R}_i^*= \sum_{j \in S^* \setminus i} x_i \mathbf{A}_{i,j} x_j$ and $\mathcal{C}_i^* = \sum_{j \in S^* \setminus i} x_j \mathbf{A}_{j,i} x_i$ respectively.
\end{definition}\noindent Recall that the \textit{total} row and column contributions of $i$ are defined via $\mathcal{R}_i = \sum_{j \in [n] \setminus i} x_i \mathbf{A}_{i,j} x_j$ and $\mathcal{C}_i = \sum_{j \in [n] \setminus i} x_j \mathbf{A}_{j,i} x_i$ respectively, and recall that we have $\mathcal{R}_i + \mathcal{C}_i \leq 0$ for all $i \in [n]$ by Fact \ref{fact:case1OPT}. \begin{proposition}\label{propLl1rowcolumnboundCase2} We have $\sum_{i \in S^*} (\mathcal{R}_i^* + \mathcal{C}_i^*)^2 \leq 10^9 \cdot \zeta^3 \epsilon n$. \end{proposition} \begin{proof} The proof proceeds similarly to Proposition \ref{propLl1rowcolumnbound}. Let $z^* ,z,z^-\in \R^{|S^*|}$ be defined for $i \in S^*$ via $z_i^* = \mathcal{R}_i^* + \mathcal{C}_i^*$, $z_i = \mathcal{R}_i + \mathcal{C}_i$, and $z^- = z - z^*$. Notice that our goal is to bound $\|z^*\|_2^2 $, which by the triangle inequality satisfies $\|z^*\|_2^2 \leq 2\left(\|z\|_2^2 + \|z^-\|_2^2\right)$. First note that \begin{equation} \begin{split} \|z^-\|_2^2 &= \sum_{i \in S^*} \left(\sum_{j \notin S^*} x_i \mathbf{A}_{i,j} x_j + \sum_{j \notin S^*} x_j \mathbf{A}_{j,i} x_i \right)^2 \\ &\leq 2 \sum_{i \in S^*} \left(\sum_{j \notin S^*} x_i \mathbf{A}_{i,j} x_j \right)^2 + 2 \sum_{i \in S^*} \left(\sum_{j \notin S^*} x_j \mathbf{A}_{j,i} x_i \right)^2\\ \end{split} \end{equation} Using that $|[n] \setminus S^*| < \epsilon n/100$, we have by Proposition \ref{prop:rectangularcontribution2} that $\sum_{i \in S^*} \left(\sum_{j \notin S^*} x_i \mathbf{A}_{i,j} x_j \right)^2 \leq \epsilon n / 100$, so $\|z^-\|_2^2 \leq \epsilon n /25$. We now bound $\|z\|_1 = \sum_{i \in S^*} | \mathcal{R}_i + \mathcal{C}_i|$. Recall that we have $\mathcal{R}_i + \mathcal{C}_i \leq 0$ for all $i \in [n]$, which means that $\|z\|_1 \leq \sum_{i \in [n]} | \mathcal{R}_i + \mathcal{C}_i| = |2 \langle x , \mathbf{A} x \rangle - 2\sum_{i \in [n]} \mathbf{A}_{i,i} (x_i)^2| \leq 2 \epsilon n$. Next, we bound $\|z\|_\infty$.
Notice that $|\mathcal{R}_i + \mathcal{C}_i |= 2| x_i \langle \mathbf{A}_{i,*}, x \rangle - \mathbf{A}_{i,i}(x_i)^2| \leq 2\epsilon n (x_i)^2 + 2\mathbf{A}_{i,i}(x_i)^2 < 4 \epsilon n (x_i)^2 $, using that $\mathbf{A} x = - \epsilon n x$, since $x$ is an eigenvector of $\mathbf{A}$. Since $ i\in S^*$, by definition we have $(x_i)^2 \leq \frac{100 \cdot 10^6 \cdot \zeta^3}{\epsilon n}$, thus $\|z\|_\infty \leq 100 \cdot 10^6 \cdot \zeta^3$. It follows that $\|z\|_2^2$ is maximized by having $ 2 \epsilon n/( 10^8 \cdot \zeta^3)$ coordinates equal to $ 10^8 \cdot \zeta^3$, giving $\|z\|_2^2 \leq 2 \epsilon n/( 10^8 \cdot \zeta^3) \cdot ( 10^8 \cdot \zeta^3)^2 = 2 \cdot 10^8 \cdot \zeta^3 \epsilon n$. It follows then that $\|z^*\|_2^2 \leq 10^9 \cdot \zeta^3 \epsilon n$ as needed. \end{proof} \begin{theorem}\label{thm:inftymain} There is an algorithm which, given $\mathbf{A}$ with $\|\mathbf{A}\|_\infty \leq 1$ such that either $x^\top \mathbf{A} x \geq 0$ for all $x \in \R^{n}$ (\texttt{YES} Case), or $x^\top\mathbf{A} x \leq - \epsilon n$ for some $x \in \R^n$ with $\|x\|_2 \leq 1$ (\texttt{NO} Case), distinguishes the two cases with probability $3/4$ using at most $\wt{O}(\frac{1}{\epsilon^2})$ queries, and running in time $\tilde{O}(1/\epsilon^\omega)$, where $\omega< 2.373$ is the exponent of fast matrix multiplication. Moreover, in the \texttt{YES} case, the algorithm always outputs \texttt{YES} (with probability $1$), and in the \texttt{NO} case, the algorithm returns a certificate in the form of a principal submatrix which is not PSD. \end{theorem} \begin{proof} By Theorem \ref{thm:case1} which handles Case 1, we can restrict ourselves to Case 2.
Using Fact \ref{fact:case2} as well as Proposition \ref{propLl1rowcolumnboundCase2}, we can apply Corollary \ref{cor:var} with the vector $y = x_{S^*}$, setting $c_1 = \Theta(1)$ and $c_2 = \Theta(\zeta^3)$, and $\alpha = O(\sqrt{\epsilon} \zeta^3) = O(\sqrt{\epsilon} \log^6(1/\epsilon))$, to obtain that $$\mathbf{Var}[\sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j] \leq O( \log^{12}(1/\epsilon) \frac{k^2}{n^2})$$ where $k = \tilde{\Theta}(1/\epsilon)$. Since by Proposition \ref{prop:exp} and Fact \ref{fact:case2}, we have $\ex{\sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j} \leq \frac{k^2}{4 n^2} \langle x_{S^*},\mathbf{A} x_{S^*}\rangle \leq -\frac{\epsilon k^2}{8n}$, it follows that by repeating the sampling procedure $O(\log^{12}(1/\epsilon))$ times, by Chebyshev's inequality we will have that at least one sample satisfies $\sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \leq - \frac{\epsilon k^2}{4n}$ with probability $99/100$. Now note that this random variable does not take into account the diagonal. Thus, it will suffice to bound the contribution of the random variable $ \sum_{i \in [n]} \delta_i \mathbf{A}_{i,i}(y_i)^2$ by $\tilde{O}((1/\epsilon)/n)$. First observe that $\ex{ \sum_{i \in [n]} \delta_i \mathbf{A}_{i,i}(y_i)^2 } \leq \frac{k}{n}$. The proof proceeds by a simple bucketing argument; let $\Lambda_i = \{j \in S^* \; | \; \frac{2^i}{n} \leq(y_j)^2 \leq \frac{2^{i+1}}{n} \}$, and for a single $k \times k$ sampled submatrix, let $T \subset [n]$ be the set of sampled rows and columns. Note that $\ex{| T \cap \Lambda_i|} \leq k 2^{-i}$, since $|\Lambda_i| \leq 2^{-i} n$. Note also that $|\Lambda_i| = 0$ for every $i$ such that $2^i \geq \frac{10^8 \zeta^3 }{\epsilon}$, by definition of $S^*$ and the fact that $y$ is zero outside of $S^*$. Then by Chernoff bounds, for each $i$ we have $\pr{| T \cap \Lambda_i| > \log(1/\epsilon) \max\{ k 2^{-i} ,1\} } \leq \frac{\epsilon^{10}}{C}$ for a constant $C$ of our choosing.
We can then union bound over all $O(\log(1/\epsilon))$ sets $\Lambda_i$, to obtain \[\sum_{i \in [n]} \delta_i \mathbf{A}_{i,i}(y_i)^2 \leq \sum_{i : 2^i \leq \frac{10^8 \zeta^3 }{\epsilon} } \frac{2^{i+1}}{n} | T \cap \Lambda_i| \leq \sum_{i : 2^i \leq \frac{10^8 \zeta^3 }{\epsilon} } \frac{2}{n}\log(1/\epsilon) \max\{ k , 2^i\}\] with probability at least $ 1-\frac{\epsilon^9}{C}$. Setting $k = \Theta(\log^6(1/\epsilon)/\epsilon)$, and using that $\sum_{i : 2^i \leq 10^8 \zeta^3/\epsilon} 2^i \leq 2 \cdot 10^8 \zeta^3/\epsilon = O(k)$, we have that $\sum_{i \in [n]} \delta_i \mathbf{A}_{i,i}(y_i)^2 = O(\log^2(1/\epsilon)k/n)$. Thus we can condition on $ \sum_{i \in [n]} \delta_i \mathbf{A}_{i,i}(y_i)^2= O(\log^2(1/\epsilon)k/n)$ holding for all $\tilde{O}(1)$ repetitions of sampling a submatrix. Since at least one sampled submatrix satisfies $\sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \leq - \frac{\epsilon k^2}{4n}$, and since $k = \Theta(\log^6(1/\epsilon)/\epsilon)$, this demonstrates that at least one sampled submatrix will satisfy $\sum_{i ,j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j < - \frac{\epsilon k^2}{8n}$ as needed in the \texttt{NO} instance. The resulting query complexity is then $O(\log^{12} (1/\epsilon)k^2) = O(\frac{\log^{24}(1/\epsilon)}{\epsilon^2}) = \wt{O}(\frac{1}{\epsilon^2})$ as desired. Finally, for runtime, notice that the main computation is computing the eigenvalues of a $k \times k$ principal submatrix, for $k = \tilde{O}(1/\epsilon)$, which can be carried out in time $\tilde{O}(1/\epsilon^\omega)$ \cite{demmel2007fast2,banks2019pseudospectral}. \end{proof} \section{$L_0$ gap} Consider the following instance: $A_1 = G^\top G/n$ and $A_2 = G^\top G / n - c g g^\top/n$. Here $G_{i,j},g_k \sim \mathcal{N}(0,1)$ independently, where $c > 10^4$. \begin{lemma} The matrix $A_2$ is $\Omega(n/\log(n))$-far from being PSD in $L_0$.
\end{lemma} \begin{proof} Fix $A_2 = (1/n)(G^\top G - c g g^\top)$, and fix any matrix $B \in \R^{n \times n}$ with $\|B\|_{\infty} \leq 1$ and $\|B\|_0 \leq C n$ for some $C > 0$. Let $v = g/\|g\|_2$. We bound $\langle v, Bv \rangle$. Note that $\|v\|_{\infty} \leq \frac{10 \sqrt{\log(n)}}{\sqrt{n}}$ with good probability. Thus we have \begin{equation} \begin{split} \langle v ,Bv \rangle& \leq \sum_{i,j} |v_i v_j| \delta_{B_{i,j} \neq 0} \\ & \leq \|B\|_0 \frac{100\log(n)}{n}\\ & \leq 100C \log(n) \\ \end{split} \end{equation} Now note that $\|G\|_2 \leq \alpha \sqrt{n}$ for some $\alpha \leq 10$ with good probability. It follows that $v^\top(A_2 + B)v \leq \alpha^2 - c/2 + 100 C \log(n)$. So setting $C < c / (1000\log(n))$, we have $v^\top(A_2 + B)v < 0$, which demonstrates that $A_2 + B$ is not PSD, and completes the proof. \end{proof} \section{Lower bounds}\label{sec:lb} \subsection{Lower Bound for PSD Testing with \texorpdfstring{$\ell_\infty$}{L-infinity} Gap} We begin by demonstrating an $\Omega(1/\epsilon^2)$ lower bound for the problem of testing positive semi-definiteness with an $\ell_\infty$ gap. Our lower bound holds even when the algorithm is allowed to adaptively sample entries of $\mathbf{A}$. \begin{theorem}\label{thm:linftyLB} Any adaptive or non-adaptive algorithm which receives query access to $\mathbf{A} \in \R^{n \times n}$ with $\|\mathbf{A}\|_\infty \leq 1$, and distinguishes with probability at least $2/3$ whether \begin{itemize} \item $\mathbf{A}$ is PSD, or \item $x^\top\mathbf{A} x< - \epsilon n$ for some unit vector $x \in \R^n$ and $\epsilon \in (0,1)$, \end{itemize} must make $\Omega(1/\epsilon^2)$ queries to $\mathbf{A}$. \end{theorem} \begin{proof} We construct two distributions $\mathcal{D}_1,\mathcal{D}_2$ over matrices, and draw the input $A$ from the mixture $(\mathcal{D}_1 + \mathcal{D}_2 )/2$. $\mathcal{D}_1$ is supported on one matrix: the zero matrix $\mathbf{0}^{n \times n}$, which is PSD.
Now set $t = 2 \epsilon^2 n$ and let $B \in \R^{n \times n}$ be the matrix given by \[ \mathbf{B} = \begin{bmatrix} 0 & -\mathbf{1}^{ n - t \times t} \\ - \mathbf{1}^{ t \times n -t} & -\mathbf{1}^{ t\times t} \\ \end{bmatrix} \] where $- \mathbf{1}^{ n\times m} $ is the $n \times m$ matrix consisting of a $-1$ in each entry. Now let $x \in \R^{n}$ be defined by $x_i = 1$ for $i=1,2,\dots,n-t$, and let $x_j = 1/\epsilon$ for $j > n - t$. Then note that $x^\top \mathbf{B} x \leq - \frac{2t(n-t)}{\epsilon} \leq -3\epsilon n^2 \leq - \epsilon n \|x\|_2^2 $ for $\epsilon$ smaller than a fixed constant (which is without loss of generality), using that $\|x\|_2^2 = (n-t) + t/\epsilon^2 \leq 3n$; thus $\mathbf{B}$ is $\epsilon$-far from PSD in $\ell_\infty$ gap. To sample $A \sim \mathcal{D}_2$, we set $\mathbf{A} = \mathbf{P}_\sigma \mathbf{B} \mathbf{P}_{\sigma}^\top$, where $\mathbf{P}_{\sigma}$ is a randomly drawn permutation matrix, namely $\sigma \sim S_n$ uniformly at random. Notice that to distinguish $A \sim \mathcal{D}_1$ from $A \sim \mathcal{D}_2$, the algorithm must read a non-zero entry. By Yao's min-max principle, we can assume that there is a deterministic algorithm that solves the problem with probability $2/3$ over the randomness of the distribution. Fix any $k < 1/(100\epsilon^2)$, and let $s_1,s_2,\dots s_k$ be the adaptive sequence of entries it would sample if $A_{s_i} = 0$ for each $i=1,2,\dots,k$. Then the probability that any of the $s_i$'s lands in a row or a column of $\mathbf{A} = \mathbf{P}_\sigma \mathbf{B} \mathbf{P}_{\sigma}^\top$ with non-zero entries is at most $1/25$. Thus with probability $24/25$ under input from $A \sim \mathcal{D}_2$, the algorithm will output the same value as it would have had $A$ been the all zero matrix. Thus the algorithm succeeds with probability at most $1/2 + 1/50 < 2/3$ when $A$ is drawn from the mixture, demonstrating that $\Omega(1/\epsilon^2)$ samples are required for probability $2/3$ of success. \end{proof} \subsection{Lower Bound for PSD Testing with \texorpdfstring{$\ell_2$}{L-2} Gap} We now present our main lower bound for PSD testing.
Our result relies on the construction of explicit graphs with gaps in their spectrum, which have the property that they are indistinguishable given only a small number of queries to their adjacency matrices. Our lower bound is in fact a general construction, which will also result in lower bounds for testing the Schatten $1$ norm, Ky-Fan norm, and cost of the best rank $k$ approximation. \paragraph{Roadmap} In the following, we will first introduce the notation and theory required for the section, beginning with the notion of subgraph equivalence of matrices. We then construct our hard distributions $\mathcal{D}_1,\mathcal{D}_2$, and prove our main conditional result, Lemma \ref{lem:ifBDthenLB}, which demonstrates a lower bound for these hard distributions conditioned on the existence of certain pairs of subgraph equivalent matrices. Finally, we prove the existence of such matrices, which is carried out in the following Section \ref{sec:CnLemma}. Putting these pieces together, we obtain our main lower bound in Theorem \ref{thm:lbmain}. \paragraph{Preliminaries and Notation} In the following, it will be useful to consider \textit{signed graphs}. A signed graph $\Sigma$ is a pair $(|\Sigma|, s)$, where $|\Sigma| = (V,E)$ is a simple graph, called the \textit{underlying graph}, and $s:E \to \{1,-1\}$ is the \textit{sign function}. We will sometimes abbreviate the signs equivalently as $\{+,-\}$. We will write $E^+,E^-$ to denote the set of positive and negative edges. If $\Sigma$ is a signed graph, we will often write $\Sigma = (V(\Sigma),E(\Sigma))$, where $E(\Sigma)$ is a set of \textit{signed} edges, so $E(\Sigma) \subset \binom{V(\Sigma)}{2} \times \{+,-\}$ with the property that for each $e \in \binom{V(\Sigma)}{2}$, at most one of $(e,+),(e,-)$ is contained in $E(\Sigma)$.
For a signed graph $G$ on $n$ vertices, let $\mathbf{A}_G \in \{1,0,-1\}^{n \times n}$ be its adjacency matrix, where $(\mathbf{A}_G)_{i,j}$ is the sign of the edge $e=(v_i,v_j)$ if $e \in E(G)$, and is $0$ otherwise. For a graph $H$, let $\|H\|$ denote the number of vertices in $H$. For any simple (unsigned) graph $G$, let $\overline{G}$ be the signed graph obtained by having $E^+(\overline{G}) = E(G)$, and $E^-(\overline{G}) = \binom{V}{2} \setminus E(G)$. In other words, $\overline{G}$ is the complete signed graph obtained by adding all the edges in the complement of $G$ with a negative sign, and giving a positive sign to the edges originally in $G$. We remark that the negation of the adjacency matrix of $\overline{G}$ is known as the \textit{Seidel matrix} of $G$. In what follows, we will often not differentiate between a signed graph $G$ and its signed adjacency matrix $\mathbf{A}_G$. For graphs $G,H$, let $G \oplus H$ denote the disjoint union of the two graphs $G,H$. We will assume familiarity with basic group theory. For groups $G,H$, we write $H \leq G$ if $H$ is a subgroup of $G$. For a set $T$, let $2^T$ denote the power set of $T$. Throughout, let $S_n$ denote the symmetric group on $n$ letters. For two signed graphs $\Sigma,H$, let $\mathcal{F}_H(\Sigma) = \{G = (V(\Sigma),E(G)) \; | \; E(G) \subseteq E(\Sigma), G \cong H \}$ be the set of signed subgraphs of $\Sigma$ isomorphic to $H$. For a permutation $\sigma \in S_n$, we write $\mathbf{P}_\sigma \in \R^{n \times n}$ to denote the row permutation matrix associated with $\sigma$. For $k \geq 3$, let $C_k$ denote the cycle graph on $k$ vertices. For signed graphs $G,H$, a signed graph isomorphism (or just isomorphism) is a graph isomorphism that preserves the signs of the edges.
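As a small illustration of the completion $\overline{G}$ described above, one can build its signed adjacency matrix directly; negating the result gives the Seidel matrix of $G$. The helper names below are our own (hypothetical), and edges are given as an arbitrary iterable of vertex pairs:

```python
# Hypothetical helper: builds the signed adjacency matrix of G-bar, where the
# edges of the simple graph G get sign +1 and all non-edges get sign -1
# (diagonal entries are 0, since the underlying graph is simple).
def signed_completion(n, edges):
    E = {frozenset(e) for e in edges}
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i][j] = 1 if frozenset((i, j)) in E else -1
    return A

def seidel_matrix(n, edges):
    # the Seidel matrix of G is the negation of A_{G-bar}
    return [[-x for x in row] for row in signed_completion(n, edges)]
```

For example, for the graph on three vertices with the single edge $\{0,1\}$, the completion has a $+1$ in positions $(0,1)$ and $(1,0)$ and $-1$ on the remaining off-diagonal entries.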
For any set $U \subset [n] \times [n]$ and matrix $\mathbf{A} \in \R^{n \times n}$, we write $\mathbf{A}_U$ to denote the matrix obtained by setting the entries $(\mathbf{A}_U)_{i,j} = \mathbf{A}_{i,j}$ for $(i,j) \in U$, and $(\mathbf{A}_U)_{i,j} = 0$ otherwise. A set $U \subset [n] \times [n]$ is called symmetric if $(i,j) \in U \iff (j , i) \in U$. We call $U$ simple if it does not contain any elements of the form $(i,i)$. We will sometimes refer to a simple symmetric $U$ by the underlying simple undirected graph induced by $U$. \paragraph{Subgraph Equivalence} We now formalize the indistinguishability property which we will require. For matrices $\mathbf{A},\mathbf{B}$, when thought of as adjacency matrices of graphs, this property can be thought of as a more general version of ``local indistinguishability'', in the sense that, for any small subgraph $H$ of $\mathbf{A}$, there is a \textit{unique} subgraph of $\mathbf{B}$ that is isomorphic to $H$. The following definition is more general, in the sense that a subgraph can also have ``zero valued edges'', corresponding to the fact that an algorithm can learn of the non-existence of edges, as well as their existence. \begin{definition}[Sub-graph Equivalence]\label{def:subgraphequiv} Fix any family $\mathcal{U}$ of symmetric subsets $\mathcal{U} = \{U_i\}_i \subseteq 2^{[n] \times [n]}$, and let $\Gamma \leq S_n$ be a subgroup of the symmetric group on $n$ letters. Let $\mathbf{A},\mathbf{B} \in \R^{n \times n}$. Then we say that $\mathbf{A}$ is $(\mathcal{U},\Gamma)$-subgraph isomorphic to $\mathbf{B}$, and write $\mathbf{A} \cong_{\mathcal{U},\Gamma} \mathbf{B}$, if for every $U_i \in \mathcal{U}$ there is a bijection $\psi_i: \Gamma \to \Gamma$ such that \[ \left(\mathbf{P}_{\sigma} \mathbf{A} \mathbf{P}_{\sigma}^T \right)_{U_i} = \left(\mathbf{P}_{\psi_i(\sigma)} \mathbf{B} \mathbf{P}_{\psi_i(\sigma)}^T \right)_{U_i} \] for all $\sigma \in \Gamma$.
If $G,H$ are two signed graphs on $n$ vertices with adjacency matrices $\mathbf{A}_G,\mathbf{A}_H$, then we say that $G$ is $(\mathcal{U},\Gamma)$-subgraph equivalent to $H$, and write $G \cong_{\mathcal{U},\Gamma} H$, if $\mathbf{A}_G \cong_{\mathcal{U},\Gamma} \mathbf{A}_H$. \end{definition} \noindent Note we do not require the $U_i$'s to be simple in the above definition. At times, if $\Gamma = S_n$, then we may omit $\Gamma$ and just write $G \cong_{\mathcal{U}} H$ or $\mathbf{A} \cong_{\mathcal{U}} \mathbf{B}$. \begin{example} Let $G,H$ be arbitrary graphs on $n$ vertices, and let $\mathcal{U} = \{U_1\}$, where $U_1$ is a simple graph consisting of a single edge. Then $G \cong_{\mathcal{U},S_n} H$ if and only if $|E(G)| = |E(H)|$. \end{example} \begin{example} Let $G,H$ be arbitrary graphs on $n$ vertices, and let $\mathcal{U} = \{U_1\}$, where $U_1$ is a triangle on any three vertices. Then $G \cong_{\mathcal{U},S_n} H$ if and only if the number of induced subgraphs on three vertices that are triangles, wedges, and single edges, are each the same in $G$ as in $H$. \end{example} In what follows, we will consider graphs that are $\mathcal{U}$ subgraph isomorphic, for a certain family of classes $\mathcal{U}$, which we now define. In what follows, recall that the matching number $\nu(G)$ of a graph $G$ is the size of a maximum matching in $G$, or equivalently the maximum size of any subset of pairwise vertex disjoint edges in $G$. \begin{definition} For $1 \leq t \leq n$, let $\mathcal{U}^t_n$ be the set of all undirected, possibly non-simple graphs $U_i$ on $n$ vertices, with the property that after removing all self-loops, $U_i$ does not contain any set of $t$ vertex disjoint edges. Equivalently, after removing all self-loops from $U_i$, the matching number $\nu(U_i)$ of $U_i$ is less than $t$.
\end{definition} In other words, $\mathcal{U}^t_n$ is the set of graphs with no set of $t$ pairwise non-adjacent edges $e_1,\dots,e_t$ such that each $e_i$ is not a self loop. Notice by the above definition that $\mathcal{U}^{t}_n \subset \mathcal{U}^{t+1}_n$. We will also need the following definition. \begin{definition} For any $n,m \geq 1$, let $\Gamma_{n,m} \leq S_{nm}$ be the subgroup defined by $\Gamma_{n,m} = \{\sigma \in S_{nm} \; | \; \sigma(i,j) = (\pi(i),j), \; \pi \in S_{n} \}$, where the tuple $(i,j) \in [n] \times [m]$ indexes into $[nm]$ in the natural way. \end{definition} \noindent Notice in particular, if $\mathbf{A} \in \R^{n \times n}$ and $\mathbf{D} \in\R^{m \times m}$, then we have \[\{ \mathbf{P}_{\sigma}(\mathbf{A} \otimes \mathbf{D}) \mathbf{P}_{\sigma}^T \; | \; \sigma \in \Gamma_{n,m}\} =\{ (\mathbf{P}_{\pi} \otimes \mathbb{I}_m) (\mathbf{A} \otimes \mathbf{D}) (\mathbf{P}_{\pi} \otimes \mathbb{I}_m)^T \; | \; \pi \in S_n\}\] Note also by elementary properties of Kronecker products, we have $(\mathbf{P}_{\pi} \otimes \mathbb{I}_m) (\mathbf{A} \otimes \mathbf{D}) (\mathbf{P}_{\pi} \otimes \mathbb{I}_m)^T = ( \mathbf{P}_{\pi} \mathbf{A} \mathbf{P}_{\pi}^T) \otimes \mathbf{D}$. For such a $\sigma \in \Gamma_{n,m}$, we write $\sigma = \pi \otimes \text{id}$, where $\pi \in S_n$. \begin{lemma}\label{len:kron} Fix any $t ,m \geq 1$, and let $\mathbf{A},\mathbf{B} \in \R^{n \times n}$ be matrices with $\mathbf{A} \cong_{\mathcal{U}^t_n,S_n} \mathbf{B}$, where $\mathcal{U}^t_n$ is defined as above, and let $\mathbf{T} \in \R^{m \times m}$ be any matrix. Then $\mathbf{A} \otimes \mathbf{T} \cong_{\mathcal{U}^t_{nm},\Gamma_{n,m}} \mathbf{B} \otimes \mathbf{T}$, where $\Gamma_{n,m} \leq S_{nm}$ is as defined above.\footnote{Note that this fact extends naturally to tensoring with rectangular matrices $\mathbf{T}$.} \end{lemma} \begin{proof} Fix any $U_i' \in \mathcal{U}^t_{nm}$.
Note that every edge of $U_i'$ corresponds to a unique edge of a graph on $n$ vertices. This can be seen as every edge of $U_i'$ is of the form $((i_1,j_1),(i_2,j_2))$ where $i_1,i_2 \in [n], j_1 ,j_2 \in [m]$, which corresponds to the edge $(i_1,i_2) \in [n] \times [n]$. So let $U_i \subset [n] \times [n]$ be the set of all such edges induced by the edges of $U_i'$. Observe, of course, that many distinct edges of $U_i'$ could result in the same edge of $U_i$. We claim that $U_i \in \mathcal{U}^t_n$. Suppose this was not the case, and let $e_1,\dots,e_{t} \in U_i$ be vertex disjoint non-self loop edges, where $e_\ell = (i_\ell,j_\ell)$, $i_\ell \neq j_\ell$. Then for each $\ell \in [t]$, there must be at least one edge $e_\ell' \in U_i'$ such that $e_{\ell}' = ( (i_\ell, a_\ell), (j_\ell,b_\ell)) \in U_i'$, and we can fix $e_{\ell}'$ to be any such edge. Then since each vertex $i_\ell, j_\ell \in [n]$ occurred in at most one edge of $e_1,\dots,e_{t}$ by assumption, it follows that each vertex of $[n] \times [m]$ occurs in at most one of the edges $e_1',\dots,e_{t}'$, so $e_1',\dots,e_t'$ are pairwise vertex disjoint, which contradicts the fact that $U_i' \in \mathcal{U}^t_{nm}$. Now that we have $U_i \in \mathcal{U}^t_n$, since $\mathbf{A} \cong_{\mathcal{U}^t_n,S_n} \mathbf{B}$ we have a bijection $\psi_i:S_n \to S_n$ such that $\left(\mathbf{P}_{\pi} \mathbf{A} \mathbf{P}_{\pi}^T \right)_{U_i} = \left(\mathbf{P}_{\psi_i(\pi)} \mathbf{B} \mathbf{P}_{\psi_i(\pi)}^T \right)_{U_i}$. We now define the mapping $\hat{\psi}_i : \Gamma_{n,m} \to \Gamma_{n,m}$ by $\hat{\psi}_i(\pi \otimes \text{id}) = \psi_i(\pi) \otimes \text{id}$, and show that it satisfies the conditions of Definition \ref{def:subgraphequiv}. Now note that each $\sigma = \pi \otimes \text{id} \in \Gamma_{n,m}$ satisfies $\mathbf{P}_\sigma = \mathbf{P}_{\pi} \otimes \mathbb{I}$, and so $\mathbf{P}_\sigma (\mathbf{A} \otimes \mathbf{T} )\mathbf{P}_\sigma^T = \mathbf{P}_{\pi}\mathbf{A} \mathbf{P}_{\pi}^T \otimes\mathbf{T}$.
We now claim that for any $U_i' \in \mathcal{U}^t_{nm}$, if we construct $U_i \in \mathcal{U}^t_n$ as above, we have that for any matrix $\mathbf{Z} \in \R^{n \times n}$ the non-zero entries of $(\mathbf{Z})_{U_i} \otimes \mathbf{T}$ contain the non-zero entries of $(\mathbf{Z}\otimes \mathbf{T} )_{U_i'}$. As a consequence, if $(\mathbf{Z})_{U_i} \otimes \mathbf{T} = (\mathbf{Y})_{U_i} \otimes \mathbf{T}$ for some other matrix $\mathbf{Y} \in \R^{n \times n}$, we also have $(\mathbf{Z} \otimes \mathbf{T})_{U_i'} = (\mathbf{Y} \otimes \mathbf{T})_{U_i'}$. But the claim in question just follows from the construction of $U_i'$, since for every entry $((i_1,j_1),(i_2,j_2)) \in U_i'$ we added the entry $(i_1,i_2)$ to $U_i$. Now since we have that $\left(\mathbf{P}_{\pi} \mathbf{A} \mathbf{P}_{\pi}^T \right)_{U_i} = \left(\mathbf{P}_{\psi_i(\pi)} \mathbf{B} \mathbf{P}_{\psi_i(\pi)}^T \right)_{U_i}$, we also obtain \[\left(\mathbf{P}_{\pi} \mathbf{A} \mathbf{P}_{\pi}^T \right)_{U_i} \otimes \mathbf{T} = \left(\mathbf{P}_{\psi_i(\pi)}\mathbf{B} \mathbf{P}_{\psi_i(\pi)}^T \right)_{U_i} \otimes \mathbf{T}\] which as just argued implies that \[ \left(\mathbf{P}_{\pi} \mathbf{A} \mathbf{P}_{\pi}^T\otimes \mathbf{T}\right)_{U_i'} = \left(\mathbf{P}_{\psi_i(\pi)} \mathbf{B} \mathbf{P}_{\psi_i(\pi)}^T \otimes \mathbf{T}\right)_{U_i'} \] Since $\left(\mathbf{P}_{\pi} \mathbf{A} \mathbf{P}_{\pi}^T\otimes\mathbf{T} \right)_{U_i'} = \left(\mathbf{P}_{\sigma}( \mathbf{A} \otimes \mathbf{T}) \mathbf{P}_{\sigma}^T \right)_{U_i'}$ and $(\mathbf{P}_{\psi_i(\pi)} \mathbf{B} \mathbf{P}_{\psi_i(\pi)}^T \otimes \mathbf{T})_{U_i'} = (\mathbf{P}_{\hat{\psi}_i(\sigma)}( \mathbf{B} \otimes \mathbf{T}) \mathbf{P}_{\hat{\psi}_i(\sigma)}^T )_{U_i'}$, it follows that $\mathbf{A} \otimes \mathbf{T} \cong_{\mathcal{U}^t_{nm},\Gamma_{n,m}} \mathbf{B} \otimes \mathbf{T}$ as required.
\end{proof} \paragraph{The Hard Instance.} We now describe two distributions, $\mathcal{D}_1,\mathcal{D}_2$, supported on $n \times n$ matrices $\mathbf{A}$ and parameterized by a value $k \geq 1$, such that distinguishing $\mathcal{D}_1$ from $\mathcal{D}_2$ requires $\Omega(k^2)$ samples. The distributions are parameterized by \textit{three} matrices, $(\mathbf{B},\mathbf{D},\mathbf{Z})$, which are promised to satisfy the properties that $\mathbf{B},\mathbf{D} \in \R^{d \times d}$ with $\mathbf{B} \cong_{\mathcal{U}^t_{d} , S_d} \mathbf{D}$ for some $t \leq d$, and $\mathbf{Z} \in \R^{m \times m}$, where $m=n/(dk)$. Also define $\wt{\mathbf{B}} = \mathbf{B} \otimes \mathbf{Z}$, $\wt{\mathbf{D}} = \mathbf{D} \otimes \mathbf{Z}$. We first define $\mathcal{D}_1$. In $\mathcal{D}_1$, we select a random partition of $[n]$ into $L_1,\dots,L_k$, where each $|L_i| = n/k$ exactly. Then for each $i \in [k]$, we select a uniformly random $\sigma_i \in \Gamma_{d,m}$ and set $\mathbf{A}_{L_i \times L_i} = \mathbf{P}_{\sigma_i} \wt{\mathbf{B}} \mathbf{P}_{\sigma_i}^T$, and the remaining elements of $\mathbf{A}$ are set to $0$. In $\mathcal{D}_2$, we perform the same procedure, but set $\mathbf{A}_{L_i \times L_i} = \mathbf{P}_{\sigma_i} \wt{\mathbf{D}} \mathbf{P}_{\sigma_i}^T$. So if $\mathbf{A} \sim \frac{\mathcal{D}_1 + \mathcal{D}_2}{2}$, then $\mathbf{A}$ is block-diagonal, with each block having size $n/k$. We first demonstrate that for any matrices $(\mathbf{B},\mathbf{D},\mathbf{Z})$ satisfying the above properties, distinguishing these distributions requires $\Omega(k^2)$ samples. We assume in the following that $dk$ divides $n$, which will be without loss of generality since we can always embed a smaller instance of the lower bound with size $n'$ such that $ n/2<n-dk\leq n' \leq n$, and such that $dk$ divides $n'$. \begin{lemma}\label{lem:ifBDthenLB} Fix any $1 \leq k ,d\leq n$.
Let $(\mathbf{B},\mathbf{D},\mathbf{Z})$ be any three matrices such that $\mathbf{B},\mathbf{D} \in \R^{d \times d}$, $\mathbf{B} \cong_{\mathcal{U}^t_{d} , S_d} \mathbf{D}$ where $t = \log k$, and $\mathbf{Z} \in \R^{m \times m}$, where $m = n/(dk)$. Then any non-adaptive sampling algorithm which receives $\mathbf{A} \sim \frac{\mathcal{D}_1 + \mathcal{D}_2}{2}$, where the distributions are defined by the tuple $(\mathbf{B},\mathbf{D},\mathbf{Z})$ as above, and distinguishes with probability at least $2/3$ whether $\mathbf{A}$ was drawn from $\mathcal{D}_1$ or $\mathcal{D}_2$, must sample $\Omega(k^2)$ entries of $\mathbf{A}$. \end{lemma} \begin{proof} We show that any algorithm cannot distinguish $\mathcal{D}_1$ from $\mathcal{D}_2$ with probability greater than $2/3$ unless it makes at least $\ell > C\cdot k^2$ queries, for some constant $C>0$. So suppose the algorithm makes at most $C\cdot k^2/100$ queries in expectation and is correct with probability $2/3$. Then by Markov's inequality there is an algorithm that always makes at most $\ell = C k^2$ queries and which is correct with probability $3/5$. By Yao's min-max principle, there is a deterministic algorithm making this many queries which is correct with probability $3/5$ over the distribution $\frac{\mathcal{D}_1 + \mathcal{D}_2}{2}$. So fix this algorithm, which consists of a single subset $U \subset[n] \times [n]$ with $|U| = \ell$. We now generate the randomness used to choose the partition $L_1,\dots,L_k$ of $[n]$. Let $U_i = U \cap (L_i \times L_i) = \{(a,b) \in U \; | \; a,b \in L_i\}$. Let $\mathcal{E}_i$ be the event that $U_i \in \mathcal{U}_{md}^t$. We first bound $\pr{\neg \mathcal{E}_i}$, where the probability is over the choice of the partition $\{L_i\}_{i \in [k]}$. For $\neg \mathcal{E}_i$ to occur, there must be $t$ pairwise vertex disjoint non-self loop edges $e_1,\dots,e_t \in U$ such that $e_{j} = (a_j,b_j)$ and $a_j, b_j \in L_i$. In other words, we must have $2t$ distinct vertices $a_1,b_1,\dots,a_t,b_t \in L_i$.
For a fixed vertex $v \in [n]$, the event $v \in L_i$ occurs with probability $1/k$, and the probability that another vertex $u \in [n]$ also lies in $L_i$, conditioned on $v \in L_i$, is strictly less than $1/k$, as we have $|L_i| = n/k$ exactly. Thus, the probability that all $2t$ vertices are contained in $L_i$ can then be bounded by $\frac{1}{k^{2t}}$. Now there are $\binom{|U|}{t} \leq \ell^t$ possible choices of vertex disjoint edges $e_1,\dots,e_t \in U$ which could result in $\mathcal{E}_i$ failing to hold, thus $\pr{ \mathcal{E}_i } \geq 1 - \frac{\ell^t}{k^{2t}}$ and by a union bound \begin{equation} \begin{split} \pr{\cap_{i=1}^k \mathcal{E}_i } & \geq 1 - \frac{\ell^t}{k^{2t-1}} \\ & \geq 1 - \frac{C^t k^{2t}}{k^{2t-1}} \\ & \geq 1 - C^t k \\ & \geq \frac{99}{100} \\ \end{split} \end{equation} where in the last line, we took $C \leq 1/10$ and used the fact that $t = \log(k)$. Then if $\mathcal{E} = \cap_{i=1}^k \mathcal{E}_i$, we have $\pr{\mathcal{E}} \geq 99/100$, which we condition on now, along with any fixing of the $L_i$'s that satisfies $\mathcal{E}$. Conditioned on this, it follows that $U_i \in \mathcal{U}^t_{md}$ for each $i \in [k]$. Using that $\mathbf{B}\cong_{\mathcal{U}^t_{d} , S_d} \mathbf{D}$, we can apply Lemma \ref{len:kron} to obtain $\mathbf{B} \otimes \mathbf{Z} \cong_{\mathcal{U}^t_{dm},\Gamma_{d,m}} \mathbf{D} \otimes \mathbf{Z}$. Thus, for each $i \in [k]$ we can obtain a bijection $\psi_i:\Gamma_{d,m} \to \Gamma_{d,m}$ such that $(\mathbf{P}_{\sigma_i} \wt{\mathbf{B}} \mathbf{P}_{\sigma_i}^T)_{U_i} = (\mathbf{P}_{\psi_i(\sigma_i)} \wt{\mathbf{D}} \mathbf{P}_{\psi_i(\sigma_i)}^T)_{U_i}$ for each $\sigma_i \in \Gamma_{d,m}$.
Thus we can create a coupling of draws from $\mathcal{D}_1$ with those from $\mathcal{D}_2$ conditioned on $\mathcal{E}$: for any possible draw from the remaining randomness of $\mathcal{D}_1$, which consists only of drawing some $(\sigma_1,\dots,\sigma_k) \in \Gamma_{d,m}^k$ generating a matrix $\mathbf{A}_1$, we have a unique corresponding draw $(\psi_1(\sigma_1),\dots,\psi_k(\sigma_k)) \in \Gamma_{d,m}^k$ of the randomness in $\mathcal{D}_2$ which generates a matrix $\mathbf{A}_2$, such that $(\mathbf{A}_1)_U = (\mathbf{A}_2)_U$. Thus conditioned on $\mathcal{E}$, any algorithm is correct on $\frac{\mathcal{D}_1 + \mathcal{D}_2}{2}$ with probability exactly $1/2$. Since $\mathcal{E}$ occurred with probability at least $99/100$, it follows that the algorithm is correct with probability at most $51/100 < 3/5$, which is a contradiction. Thus we must have $\ell \geq Ck^2 = \Omega(k^2)$ as needed. \end{proof} We are now ready to introduce our construction of the matrices as required in the prior lemma. Recall that for $k \geq 3$, $C_k$ denotes the cycle graph on $k$ vertices. \begin{fact}\label{fact:cycle} Fix any $n \geq 3$. We have $\lambda_{\min}(C_{2n + 1}) = -2 + \Theta(1/n^2)$ and $\lambda_{\min}(C_{n} \oplus C_{n+1}) = -2$. \end{fact} \begin{proof} The eigenvalues of the cycle $C_\ell$ are given by $2\cos(\frac{2 \pi t}{\ell})$ \cite{chung1996lectures} for $t=0,\dots,\ell-1$, which yields the result using the fact that $\cos(\pi (1 + \epsilon)) = -1 + \Theta(\epsilon^2)$ for small $\epsilon$. \end{proof} \begin{proposition} \label{prop:cong} Fix any $n = n_1 + n_2$. For any $t \leq \min \{n_1,n_2\}/4$, we have $C_{n} \cong_{\mathcal{U}_{n}^t, S_{n}} C_{n_1} \oplus C_{n_2}$. \end{proposition} \begin{proof} We begin by fixing any set $U_i \in \mathcal{U}_n^t$.
For any signed graph $\Sigma$ and graph $G$ on $n$ vertices such that the maximum number of vertex disjoint edges in $\Sigma$ is at most $t \leq \min \{n_1,n_2\}/4$, let $\mathcal{H}_{\Sigma}(G) = \{\sigma \in S_n \; | \; \mathbf{P}_{\sigma} \mathbf{A}_{\Sigma} \mathbf{P}_{\sigma}^T = \mathbf{A}_{H} , H \subset G \} $ and let $\mathcal{H}^{-1}_{\Sigma}(G) = \{\sigma \in S_n \; | \; \mathbf{A}_{\Sigma} = \mathbf{P}_{\sigma} \mathbf{A}_{H}\mathbf{P}_{\sigma}^T , H \subset G \} $. By Corollary \ref{cor:CycleEquiv}, we have $|\mathcal{H}_{\Sigma}(\overline{C}_n)| =| \mathcal{H}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2} } )|$ whenever $|\Sigma|$ has no set of at least $\min \{n_1,n_2\}/4$ vertex disjoint edges. Since $S_n$ is a group and has unique inverses, we also have $|\mathcal{H}^{-1}_{\Sigma}(\overline{C}_n)| =|\mathcal{H}_{\Sigma}(\overline{C}_n)| = | \mathcal{H}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2} } )| = | \mathcal{H}_{\Sigma}^{-1}(\overline{C_{n_1} \oplus C_{n_2} } )|$. We now define a function $\psi_i:S_n \to S_n$ such that $(\mathbf{P}_\sigma \mathbf{A}_{\overline{C}_n} \mathbf{P}_{\sigma}^T)_{U_i} =(\mathbf{P}_{\psi_i(\sigma)} \mathbf{A}_{\overline{C_{n_1} \oplus C_{n_2}}} \mathbf{P}_{\psi_i(\sigma)}^T)_{U_i} $ for every $\sigma \in S_n$. Now fix any signed graph $\Sigma$ such that $\Sigma = (\mathbf{P}_\sigma \mathbf{A}_{\overline{C}_n} \mathbf{P}_{\sigma}^T)_{U_i}$ for some $\sigma \in S_n$. Note that the set of $\pi \in S_n$ such that $\Sigma = (\mathbf{P}_\pi \mathbf{A}_{\overline{C}_n} \mathbf{P}_{\pi}^T)_{U_i}$ is precisely $\mathcal{H}^{-1}_{\Sigma}(\overline{C}_n)$. Similarly, the set of $\pi \in S_n$ such that $\Sigma = (\mathbf{P}_\pi \mathbf{A}_{\overline{C_{n_1} \oplus C_{n_2} }} \mathbf{P}_{\pi}^T)_{U_i}$ is precisely $\mathcal{H}^{-1}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2} })$.
Also, by construction of $\mathcal{U}_{n}^t$, we know that the maximum number of vertex disjoint edges in $U_i$, and therefore in $\Sigma$, is at most $t \leq \min \{n_1,n_2\}/4$, so by the above, we know there is a bijection $\psi_{i}^{\Sigma} : \mathcal{H}^{-1}_{\Sigma}(\overline{C}_n) \to \mathcal{H}^{-1}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2} })$ for every such realizable matrix $\Sigma$. Taking $\psi_i(\sigma) = \psi_i^{ (\mathbf{P}_\sigma \mathbf{A}_{\overline{C}_n} \mathbf{P}_{\sigma}^T)_{U_i}}(\sigma)$ satisfies the desired properties for $\overline{C_{n}} \cong_{\mathcal{U}_{n}^t, S_{n}} \overline{C_{n_1} \oplus C_{n_2}}$. Notice that this implies that $C_{n} \cong_{\mathcal{U}_{n}^t, S_{n}} C_{n_1} \oplus C_{n_2}$, since $C_{n}$ and $C_{n_1} \oplus C_{n_2}$ are both obtained from $\overline{C_{n}}$ and $ \overline{C_{n_1} \oplus C_{n_2}}$ by changing every entry with the value $-1$ to $0$. \end{proof} \begin{proposition}\label{prop:LBfinal} Fix any $t > 1$, and set $d_0 = 4t$ and $d = 2d_0+1$. Set $\mathbf{B} = 1/2(\mathbf{A}_{C_{d}} + \lambda \mathbb{I}_{d})$ and $\mathbf{D} = 1/2(\mathbf{A}_{C_{d_0} \oplus C_{d_0 + 1}} + \lambda \mathbb{I}_{d})$, where $\lambda = - 2 \cos( \frac{2 \pi d_0 }{2d_0+1})$. Then we have that $\mathbf{B}$ is PSD, $\lambda_{\min}(\mathbf{D}) < -\delta$ where $\delta = \Theta(1/d^2)$, $\|\mathbf{B}\|_\infty , \|\mathbf{D}\|_\infty \leq 1$, and $\mathbf{B}\cong_{\mathcal{U}^t_{d} , S_{d}} \mathbf{D}$. \end{proposition} \begin{proof} By Proposition \ref{prop:cong}, we know $C_{2d_0+1} \cong_{\mathcal{U}_{2d_0+1}^t, S_{2d_0+1}} C_{d_0} \oplus C_{d_0+1}$, so to show subgraph equivalence it suffices to show that adding $\lambda \mathbb{I}_{2d_0+1}$ to both $C_{2d_0+1}$ and $ C_{d_0} \oplus C_{d_0+1}$ does not affect the fact that they are $(\mathcal{U}^t_{d}, S_{d})$-subgraph equivalent.
But note that this fact is clear, since we have only changed the diagonal, which is still equal to $\lambda/2$ everywhere for both $\mathbf{B},\mathbf{D}$. Namely, for any $\sigma,\pi \in S_{2d_0+1}$ and $i \in [2d_0+1]$ we have $\left(\mathbf{P}_{\sigma} \mathbf{B} \mathbf{P}_{\sigma}^T \right)_{(i,i)} = \left(\mathbf{P}_{\pi} \mathbf{D} \mathbf{P}_{\pi}^T \right)_{(i,i)} = \lambda/2$, thus the subgraph equivalence between $C_{2d_0+1}$ and $ C_{d_0} \oplus C_{d_0+1}$ still holds using the same functions $\psi_i$ as required for $C_{2d_0+1} \cong_{\mathcal{U}_{2d_0+1}^t, S_{2d_0+1}} C_{d_0} \oplus C_{d_0+1}$. Note that the $L_\infty$ bound on the entries follows from the fact that the entries of the adjacency matrices are bounded by $1$ and zero on the diagonal, that $\lambda \leq 2$, and that we scale each matrix down by $1/2$. Next, by Fact \ref{fact:cycle}, we know that $\mathbf{B}$ is PSD and $\lambda_{\min}(\mathbf{D}) =- \Theta(\frac{1}{d^2})$, which holds still after scaling by $1/2$, and completes the proof. \end{proof} We now state our main theorem, which is a direct result of instantiating the general lower bound of Lemma \ref{lem:ifBDthenLB} with the matrices as described above in Proposition \ref{prop:LBfinal}. \begin{theorem}\label{thm:lbmain} Any non-adaptive sampling algorithm which solves with probability at least $2/3$ the PSD testing problem with $\epsilon$-$\ell_2^2$ gap must query at least $\wt{\Omega}( \frac{1}{\epsilon^2})$ entries of the input matrix. \end{theorem} \begin{proof} Set $k = C \frac{1}{\epsilon \log^6(1/\epsilon)}$ for a small enough constant $C>0$. Also set $t = \log k$, $d_0 = 4t$, $d = 2d_0 + 1$, and as before set $m = n/(dk)$.
We first apply Lemma \ref{lem:ifBDthenLB} with $\mathbf{Z} = \mathbf{1}^{m \times m}$, and the matrices $\mathbf{B} = 1/2(\mathbf{A}_{C_{d}} + \lambda \mathbb{I}_{d})$ and $\mathbf{D} = 1/2(\mathbf{A}_{C_{d_0} \oplus C_{d_0+1}} + \lambda \mathbb{I}_{d})$ from Proposition \ref{prop:LBfinal}, where $\lambda = - 2 \cos( \frac{2 \pi d_0 }{2d_0+1})$. Then by Lemma \ref{lem:ifBDthenLB}, using that $\mathbf{B}\cong_{\mathcal{U}^t_{2d_0+1} , S_{2d_0+1}} \mathbf{D}$ via Proposition \ref{prop:LBfinal}, it follows that any non-adaptive sampling algorithm that distinguishes $\mathcal{D}_1$ from $\mathcal{D}_2$ requires $\Omega(k^2)$ samples. We now demonstrate that every instance of $\mathcal{D}_1$ and $\mathcal{D}_2$ satisfies the desired $\ell_2^2$-gap as defined in Problem \ref{prob:l2}. First, since the eigenvalues of the Kronecker product $\mathbf{Y} \otimes \mathbf{Z}$ of any matrices $\mathbf{Y},\mathbf{Z}$ are all pairwise products of eigenvalues of $\mathbf{Y}$ and $\mathbf{Z}$, it follows that $\wt{\mathbf{B}}$ is PSD, as $\mathbf{B}$ is PSD by Proposition \ref{prop:LBfinal} and $\mathbf{1}^{m \times m} = \mathbf{1}^m (\mathbf{1}^m)^T$ is PSD. By the same fact and Proposition \ref{prop:LBfinal}, since $\lambda_1(\mathbf{1}^{m \times m}) = m$, we have that $\lambda_{\min}(\wt{\mathbf{D}}) = -\Theta(m/d^2) = -\Theta(\frac{n}{d^3 k})$. Now note that if $\mathbf{A}_1 \sim \mathcal{D}_1$, then $\mathbf{A}_1$ is a block-diagonal matrix where each block is PSD, thus $\mathbf{A}_1$ is PSD. Note also that if $\mathbf{A}_2 \sim \mathcal{D}_2$, then $\mathbf{A}_2$ is a block-diagonal matrix where each block has an eigenvalue smaller than $-C'\frac{n}{d^3 k}$ for some constant $C' > 0$.
Since the eigenvalues of a block diagonal matrix are the union of the eigenvalues of the blocks, it follows that \begin{equation} \begin{split} \sum_{i: \lambda_i(\mathbf{A}_2) < 0} (\lambda_i(\mathbf{A}_2))^2 &\geq \sum_{i=1}^k \lambda_{\min}(\wt{\mathbf{D}})^2 \\ &\geq k (C' \frac{n}{d^3 k})^2 \\ &= \left( \frac{(C')^2 \log^6 (1/\epsilon) }{C (8\log(\frac{C}{\epsilon \log^6(1/\epsilon)}) +1)^6 }\right) \cdot \epsilon n^2 \\ &\geq \epsilon n^2 \\ \end{split} \end{equation} where the last inequality follows from setting the constant $C = \frac{(C')^2}{100^6}$ so that \begin{equation} \begin{split} \frac{(C')^2 \log^6 (1/\epsilon) }{C\left(8 \log(\frac{C}{\epsilon \log^6(1/\epsilon)}) +1 \right)^6 } &= \frac{100^6 \log^6 (1/\epsilon) }{\left(8 \log(\frac{(C')^2}{ 100^6\epsilon \log^6(1/\epsilon)}) +1 \right)^6 } \\ &\geq \frac{100^6 \log^6 (1/\epsilon) }{\left(16 \log(\frac{1}{\epsilon })\right)^6 } \\ & >1 \\ \end{split} \end{equation} and using that the first inequality above holds whenever $\left(\frac{1}{\epsilon }\right)^{16} \leq \left(\frac{ (C')^2}{3 \cdot 100^6\epsilon \log^6(1/\epsilon)}\right)^{8}$, which is true so long as $\epsilon < C_0$ for some constant $C_0$. Note that if $\epsilon > C_0$, then a lower bound of $\Omega(1) = \Omega(1/\epsilon^2)$ follows from the one heavy eigenvalue $\ell_\infty$ gap lower bound. Thus $\mathbf{A}_1,\mathbf{A}_2$ satisfy the $\epsilon$-$\ell_2^2$ gap property as needed, which completes the proof. \end{proof} \subsubsection{\texorpdfstring{$C_{n_1+ n_2}$}{Cycle n1 plus n2} is Subgraph Equivalent to \texorpdfstring{$C_{n_1} \oplus C_{n_2}$}{Cycle n1 plus Cycle n2} }\label{sec:CnLemma} In this section, we demonstrate the subgraph equivalence of the cycle $C_{n_1+ n_2}$ and the union of cycles $C_{n_1} \oplus C_{n_2}$.
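Before diving into the combinatorics, the spectral side of the pairing (Fact \ref{fact:cycle}) is easy to sanity-check numerically. The following Python/NumPy sketch, which is purely illustrative and not part of the proof, verifies that $\lambda_{\min}(C_{2n+1})$ sits $\Theta(1/n^2)$ above $-2$, while $\lambda_{\min}(C_n \oplus C_{n+1}) = -2$ exactly, since one of $n,n+1$ is even.

```python
import numpy as np

def cycle_adjacency(n):
    """Adjacency matrix of the cycle C_n."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = 1
        A[(i + 1) % n, i] = 1
    return A

n = 50
lam_odd = np.linalg.eigvalsh(cycle_adjacency(2 * n + 1)).min()
# C_n (+) C_{n+1} is the block-diagonal union of the two cycles.
union = np.block([
    [cycle_adjacency(n), np.zeros((n, n + 1))],
    [np.zeros((n + 1, n)), cycle_adjacency(n + 1)],
])
lam_union = np.linalg.eigvalsh(union).min()

# lambda_min(C_{2n+1}) = 2 cos(2 pi n / (2n+1)) = -2 + Theta(1/n^2),
# while the even cycle among C_n, C_{n+1} attains -2 exactly.
assert -2 < lam_odd and abs(lam_odd + 2) < 10 / n**2
assert abs(lam_union + 2) < 1e-9
```

Here the gap $\lambda_{\min}(C_{101}) + 2$ is on the order of $\pi^2/101^2$, matching the Taylor expansion used in the proof of Fact \ref{fact:cycle}.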
In order to refer to edges which are not in the cycles $C_{n_1+ n_2}$ and $C_{n_1} \oplus C_{n_2}$, it will actually be convenient to show that $\overline{C_{n_1+ n_2}}$ is subgraph equivalent to $\overline{C_{n_1} \oplus C_{n_2}}$, where recall that $\overline{G}$ for a simple graph $G$ is the result of adding negative edges to $G$ for each edge $e = (u,v) \notin E(G)$. Equivalently, the adjacency matrix of $\overline{G}$ is the result of replacing the $0$'s on the off-diagonal of $\mathbf{A}_{G}$ with $-1$'s. Notice that, by the definition of subgraph equivalence, it does not matter whether these values are set to $0$ or to $-1$. \paragraph{Overview of the bijection. } We now intuitively describe the bijection of Lemma \ref{lem:cyclesmain}, which demonstrates that for any signed graph $\Sigma$ such that any set of pairwise vertex disjoint edges $\{e_1,\dots,e_k\}$ (i.e. any matching) in $\Sigma$ has size at most $k \leq \min\{n_1,n_2\}/4$, the number of subgraphs of $\overline{C_{n_1 + n_2}}$ isomorphic to $\Sigma$ is the same as the number of subgraphs of $\overline{C_{n_1} \oplus C_{n_2}}$ isomorphic to $\Sigma$. So let $H$ be any subgraph of $\overline{C_{n_1 + n_2}}$ that is isomorphic to $\Sigma$. For simplicity, let $n_1 = n_2 = n$, and suppose $H$ contains only positive edges, so that $H$ is actually a subgraph of the unsigned cycle $C_{2n}$. Since $\Sigma$ has at most $n/4$ edges, $\Sigma \cong H$ must be a collection of disjoint paths. So the problem can be described as an arrangement problem: for each arrangement $H$ of $\Sigma$ in $C_{2n}$, map it to a unique arrangement $H'$ of $\Sigma$ in $C_{n} \oplus C_n$. \begin{figure} \centering \includegraphics[scale = 1.1 ]{gr.pdf} \includegraphics[scale = 1.1 ]{copy_of_graphs-2.pdf} \caption{An illustration of the bijection in Lemma \ref{lem:cyclesmain}, when $H$ only contains positive edges. The three colored paths represent the graph $H$, which must be mapped from $C_{2n}$ to $C_n \oplus C_n$.
Since the paths intersect the edges $(2n,1)$ and $(n,n+1)$ to be cut, we must first swap the last four vertices $\{n-4,\dots,n\}$ and $\{2n - 4, \dots, 2n\}$ of $C_{2n}$ before the two splitting points $n,2n$, and then cut the cycle. Note that four is the smallest number of vertices which can be swapped, without swapping in the middle of a path of $H$. } \label{fig:graph1} \end{figure} We would like to construct such a mapping by ``splitting'' the big cycle $C_{2n}$ into two smaller cycles; see Figure \ref{fig:graph1} for an example. Specifically, we could split the cycle $C_{2n}$ down the middle, cutting the edges $(n,n+1)$ and $(2n,1)$, and instead connecting the first vertex to the $n$-th and the $n+1$-st to the $2n$-th. Now if $H$ does not contain either of the cut edges, then the resulting collection of paths will be an isomorphic copy of $H$ living inside of $C_{n} \oplus C_n$. However, if $H$ does contain such an edge, we cannot cut the cycle here, as the resulting paths inside of $C_n \oplus C_n$ would not be isomorphic to $H$. For example, see Figure \ref{fig:graph1}, where if we just cut the edge between $(n,n+1)$ and rerouted it to $(n,1)$, then the red cycle with $4$ vertices would be disconnected into a cycle of length three, and an isolated vertex. To handle this, before cutting and rerouting the edges $(n,n+1)$ and $(2n,1)$, we first \textit{swap} the last $i$ vertices before the cutting points, for some $i$. Namely, we swap the vertices $(n-i,n-i+1,\dots,n)$ with $(2n - i, 2n - i + 1, \dots ,2n)$ and \textit{then} split the graph at the edges $(n,n+1)$ and $(2n,1)$. For the resulting graphs to be isomorphic, we cannot swap in the middle of a path, thus the value $i$ is chosen as the \textit{smallest} $i \geq 0$ such that the edges $(n-i-1,n-i)$ and $(2n-i-1,2n - i)$ do not exist in any path of $H$.
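The swap step above is simple enough to experiment with directly. The following sketch (illustrative Python for the symmetric case $n_1 = n_2 = n$, with $1$-indexed vertices; it is not part of the proof) implements the permutation exchanging the $i$ vertices ending at each cut point and checks that it is an involution, the property that drives the ``apply the map twice and recover $H$'' argument.

```python
def swap_perm(n, i):
    """Permutation on vertices 1..2n (1-indexed; index 0 unused) that
    swaps the i vertices n-i+1..n with 2n-i+1..2n, fixing all others."""
    sigma = list(range(2 * n + 1))
    for j in range(i):
        sigma[n - j], sigma[2 * n - j] = 2 * n - j, n - j
    return sigma

n, i = 10, 3
sigma = swap_perm(n, i)
# The swap is an involution: applying it twice returns the identity.
assert all(sigma[sigma[v]] == v for v in range(1, 2 * n + 1))
# Exactly the 2*i swapped vertices move; e.g. n <-> 2n are exchanged.
assert sigma[n] == 2 * n and sigma[2 * n] == n
assert sum(sigma[v] != v for v in range(1, 2 * n + 1)) == 2 * i
```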
Moreover, such an $i$ must exist, so long as $H$ has fewer than $\min\{n_1,n_2\}$ edges (the stronger bound of $\min\{n_1,n_2\}/4$ is only needed for the more general case, where negative edges are included). One can show that this mapping is actually an involution; namely, given the collection of paths $H'$ in $C_{n} \oplus C_{n}$ which are obtained from applying the function on $H$, one can similarly find the smallest $i \geq 0$ such that the edges $(n-i-1,n-i)$ and $(2n-i-1,2n - i)$ are not in $H'$, which must in fact be the same value of $i$ used when mapping $H$! Then, by swapping the last $i$ vertices before $n$ and $2n$, and then reconnecting $C_n \oplus C_n$ into a single cycle, one obtains the original graph $H$. From this, demonstrating bijectivity becomes relatively straightforward. Extending this to the case where $H$ is allowed to contain negative edges of $\overline{C_{2n}}$ follows similar steps, albeit with a stronger condition on the choice of $i$. The full proof is now presented below. \begin{lemma}\label{lem:cyclesmain} Fix any $n = n_1 + n_2$. Fix any simple graph $|\Sigma|$, such that any set of vertex disjoint edges $\{e_1,\dots,e_k\}$ in $|\Sigma|$ has size at most $k \leq \min\{n_1,n_2\}/4$, and let $\Sigma = (|\Sigma|,\sigma)$ be any signing of $|\Sigma|$. Let $\mathcal{F}_{\Sigma}(\overline{C_n})$ denote the set of subgraphs of $\overline{C_n}$ isomorphic to $\Sigma$, and similarly define $\mathcal{F}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2}})$. Then we have \[ \left|\mathcal{F}_{\Sigma}(\overline{C_n})\right| = \left|\mathcal{F}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2}})\right| \] \end{lemma} \begin{proof} Order the vertices of the cycle $C_n = \{1,2,\dots,n\}$, which we identify with the vertex set of $C_{n_1} \oplus C_{n_2}$, where $\{1,\dots,n_1\}$ are the vertices of the first cycle $C_{n_1}$ and $\{n_1 + 1, \dots, n\}$ are the vertices of $C_{n_2}$.
We derive a bijection $\varphi: \mathcal{F}_{\Sigma}(\overline{C_n})\to \mathcal{F}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2}})$. We describe a point $\mathbf{X} \in \mathcal{F}_\Sigma(\overline{C_n}) \cup \mathcal{F}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2}})$ by its (signed) adjacency matrix $\mathbf{X} \in \{-1,0,1\}^{n \times n}$. Namely, $\mathbf{X} \in \{-1,0,1\}^{n \times n}$ is any matrix obtained by setting a subset of the entries of $\mathbf{A}_{\overline{C_{n_1 + n_2}}}$ or $\mathbf{A}_{\overline{C_{n_1} \oplus C_{n_2}}} $ equal to $0$, such that the signed graph represented by $\mathbf{X}$ is isomorphic to $\Sigma$. In the following, we will always interpret vertices modularly, so that $v_{n +i} =v_i$ for $i \geq 1$. Thus, we can now think of $\varphi$ as being defined on the subset of the matrices $\{-1,0,1\}^{n \times n}$ given by the adjacency matrices of signed graphs in $\mathcal{F}_{\Sigma}(\overline{C_n})$. In fact, it will be useful to define $\varphi$ on a larger domain. Let $\mathcal{D} \subset \{-1,0,1\}^{n \times n}$ be the set of all adjacency matrices for signed graphs $G$ with the property that any set of vertex disjoint edges $\{e_1,\dots,e_k\}$ in $G$ has size at most $k \leq \min\{n_1,n_2\}/4$. Notice that $\mathcal{D}$ contains both $ \mathcal{F}_{\Sigma}(\overline{C_n})$ and $\mathcal{F}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2}})$. For a given $\mathbf{X} \in \mathcal{D}$, we will define $\varphi(\mathbf{X}) = \mathbf{P}_{\sigma_\mathbf{X}} \mathbf{X} \mathbf{P}_{\sigma_\mathbf{X}}^T$ for some permutation $\sigma_\mathbf{X}$. Since the graph of $ \mathbf{P}_{\sigma_\mathbf{X}} \mathbf{X} \mathbf{P}_{\sigma_\mathbf{X}}^T$ is by definition isomorphic to $\mathbf{X}$, it follows that $ \mathbf{P}_{\sigma_\mathbf{X}} \mathbf{X} \mathbf{P}_{\sigma_\mathbf{X}}^T \in \mathcal{D}$, thus $\varphi$ maps $\mathcal{D}$ into $\mathcal{D}$.
So in order to define the mapping $\varphi(\mathbf{X})$, it suffices to define a function $\phi: \mathcal{D} \to S_n$ mapping into the symmetric group so that $\varphi(\mathbf{X}) = \mathbf{P}_{\phi(\mathbf{X})}\mathbf{X}\mathbf{P}_{\phi(\mathbf{X})}^T$. For $i=0,1,2,\dots,\min\{n_1,n_2\} -1$, define the permutation $\sigma_i \in S_n$ as follows. For $j \in \{1,\dots,n_1 - i\} \cup \{n_1 + 1,\dots,n - i\}$, we set $\sigma_i(j) = j$. If $i > 0$, then for each $0 \leq j < i$, we set $\sigma_i(n_1 - j) = n - j$ and $\sigma_i(n - j) = n_1 - j$. In other words, the function $\sigma_i$ swaps the last $i$ vertices ending at the splitting points $n_1,n$ of the cycle. Notice that $\sigma_i$ is an involution, so $\sigma_i \circ \sigma_i = \text{id}$ and $\sigma_i = \sigma^{-1}_i$. We now define our bijection $\varphi$. For $\mathbf{X} \in \mathcal{D}$, let $i(\mathbf{X})$ be the smallest value of $i \geq 0$ such that $\mathbf{X}_{n_1 - i, n_1 - i + 1} = \mathbf{X}_{n - i, n - i +1} =\mathbf{X}_{n - i, n_1 - i + 1} = \mathbf{X}_{n_1 - i, n - i +1} = 0$. Equivalently, $i(\mathbf{X})$ is the smallest value of $i \geq 0$ such that none of the four edges of the cycle $c_i = (v_{n_1 -i}, v_{n_1 -i+1}, v_{n -i}, v_{n - i + 1})$ exist in $\mathbf{X}$. We then define $\phi(\mathbf{X}) = \sigma_{i(\mathbf{X})} = \sigma_\mathbf{X}$, so that $\varphi(\mathbf{X}) = \mathbf{P}_{\sigma_{i(\mathbf{X})}}\mathbf{X}\mathbf{P}_{\sigma_{i(\mathbf{X})}}^T$. Note that if the maximum number of vertex disjoint edges in $\mathbf{X}$ is at most $\min\{n_1,n_2\}/4$, then $i(\mathbf{X})$ must always exist and is at most $\min\{n_1,n_2\}/2+1$. This can be seen by the fact that for each $i$ with $i < i(\mathbf{X})$, there must be at least one edge with endpoints in the set $\{v_{n_1 - i}, v_{n_1-i+1}, v_{n - i}, v_{n-i+1}\}$, thus for each $i \geq 0$ with $i< i(\mathbf{X})$ we can assign an edge $e_{i}$ with both endpoints in this set, such that $e_{0}, e_{2},e_4, \dots, e_{i(\mathbf{X})-1}$ are vertex disjoint. 
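As an aside, the swap permutations $\sigma_i$ are simple enough to experiment with directly. The following is a minimal sketch (all function names are hypothetical helpers, not from this paper) that constructs $\sigma_i$ for given $n_1,n_2$, using $1$-indexed vertices as in the proof, and checks the involution property claimed above:

```python
# Sketch of the swap permutation sigma_i (hypothetical helper).
# sigma_i swaps the last i vertices ending at the splitting points
# n1 and n = n1 + n2, and fixes every other vertex.
def sigma(n1, n2, i):
    n = n1 + n2
    perm = list(range(n + 1))  # index 0 unused; vertices are 1..n
    for j in range(i):
        perm[n1 - j], perm[n - j] = n - j, n1 - j
    return perm

def compose(p, q):
    """Composition (p o q) of two permutations in list form."""
    return [p[q[v]] for v in range(len(p))]

n1, n2 = 7, 9
identity = list(range(n1 + n2 + 1))
for i in range(min(n1, n2)):
    s = sigma(n1, n2, i)
    # sigma_i is an involution: applying it twice gives the identity
    assert compose(s, s) == identity
print("all sigma_i are involutions")
```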
We must first argue that if $\mathbf{X} \in \mathcal{F}_{\Sigma}(\overline{C_n})$, then $\varphi(\mathbf{X}) \in\mathcal{F}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2}})$, namely that the function maps into the desired co-domain. To do this, we must show that for every $(i,j)$ with $(\mathbf{P}_{\sigma_\mathbf{X}}\mathbf{X}\mathbf{P}_{\sigma_\mathbf{X}}^T)_{i,j} \neq 0$, we have $(\mathbf{P}_{\sigma_\mathbf{X}}\mathbf{X}\mathbf{P}_{\sigma_\mathbf{X}}^T)_{i,j} = (\mathbf{A}_{\overline{C_{n_1} \oplus C_{n_2} }})_{i,j}$. This is equivalent to showing that for any signed edge $e = (v_i,v_j) \in \mathbf{X}$, $v_i,v_j$ are connected in $C_n$ if and only if $v_{\sigma_\mathbf{X}(i)},v_{\sigma_\mathbf{X}(j)}$ are connected in $C_{n_1} \oplus C_{n_2}$. In the proof of this fact, we will only use that $\mathbf{X} \in \mathcal{D}$. So suppose $v_i,v_j$ were connected in $C_n$, and WLOG $j > i$. First suppose that $i \notin \{n,n_1\}$. Then we have $j=i+1$. Since $e = (v_i,v_j) \in \mathbf{X}$ is an edge of the subgraph, we know $i \notin \{ n - i(\mathbf{X}), n_1 - i(\mathbf{X})\}$ by construction of $i(\mathbf{X})$. Thus $(v_{\sigma_\mathbf{X}(i)},v_{\sigma_\mathbf{X}(j)}) = (v_{i'} , v_{i'+1})$ for some $i' \notin \{n_1,n\}$, which is always an edge of $C_{n_1} \oplus C_{n_2}$. If $i = n$, then $j = 1$, and we have $i(\mathbf{X}) > 0$, so $\sigma_\mathbf{X}(i) = n_1$ and $\sigma_\mathbf{X}(j) = 1$, and $(v_{n_1}, v_{1}) $ is an edge of $C_{n_1} \oplus C_{n_2}$. Similarly, if $i = n_1$, then $j = n_1 + 1$, and since again necessarily $i(\mathbf{X})>0$ we have $\sigma_\mathbf{X}(i) = n, \sigma_\mathbf{X}(j) = n_1 + 1$, and $(v_{n},v_{n_1 + 1})$ is an edge of $C_{n_1} \oplus C_{n_2}$. We now consider the case where $(v_i,v_j) \in \mathbf{X}$ is not an edge in $C_n$. Suppose for the sake of contradiction that $(v_{\sigma_\mathbf{X}(i)} ,v_{\sigma_\mathbf{X}(j)})$ is an edge in $C_{n_1} \oplus C_{n_2}$. WLOG, $v_{\sigma_\mathbf{X}(i)},v_{\sigma_\mathbf{X}(j)}$ are in the first cycle $C_{n_1}$. 
We can write $\sigma_\mathbf{X}(i) = i', \sigma_\mathbf{X}(j) = i' + 1$ for some $i' \in \{1,2,\dots,n_1 \}$, where $i'+1$ is interpreted as $1$ if $i' = n_1$. If $i' \leq i(\mathbf{X}) -1$, then both $ i' = i$ and $ i'+1 = i+1 = j$, but $(v_{i}, v_{i+1})$ is also connected in $C_n$, a contradiction. If $i'\geq i(\mathbf{X}) +1$, then $i' = i + n_2$ and $i'+1 = i + n_2 +1$ (where $i + n_2 + 1$ is interpreted modularly as $1$ if $i =n_1$), and again $v_{i+n_2}$ and $v_{i+n_2 + 1}$ are connected in $C_n$, a contradiction. Finally, if $i' = i(\mathbf{X})$, then $i= i'$ and $j = i+ n_2 + 1$, but then we cannot have $(v_i,v_j) \in \mathbf{X}$ by construction of $i(\mathbf{X})$, which completes the proof of the claim that $\varphi$ maps $\mathcal{F}_{\Sigma}(\overline{C_n})$ into $\mathcal{F}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2}})$. We now show that $\varphi$ is injective. To do this, we show that $\varphi(\varphi(\mathbf{X})) = \mathbf{X}$ for any $\mathbf{X} \in \mathcal{D}$ -- namely that $\varphi$ is an involution on $\mathcal{D}$. This can be seen by showing that we always have $i(\mathbf{X}) = i(\varphi(\mathbf{X}))$. To see this, observe that $i(\mathbf{X})$ is defined as the first $i \geq 0$ such that none of the four edges of the cycle $c_i = (v_{n_1 -i}, v_{n_1 -i+1}, v_{n -i}, v_{n - i + 1})$ exist in $\mathbf{X}$. Thus it suffices to show that for each $\min \{n_1,n_2\}-1 > i \geq 0$, the number of edges in $c_i$ is preserved after permuting the vertices by $\sigma_{i(\mathbf{X})}$. To see this, note that if $i(\mathbf{X}) > i$, then $(\sigma_{i(\mathbf{X})}(v_{n_1 -i}),\sigma_{i(\mathbf{X})}( v_{n_1 -i + 1}),\sigma_{i(\mathbf{X})} (v_{n -i}),\sigma_{i(\mathbf{X})}( v_{n - i + 1})) = (v_{n -i}, v_{n -i+1}, v_{n_1 -i}, v_{n_1 - i + 1}) $, which is the same cycle. If $i(\mathbf{X}) < i$, then $\sigma_{i(\mathbf{X})}$ does not move any of the vertices in $c_i$. 
Finally, if $i(\mathbf{X}) = i$, then $(\sigma_{i(\mathbf{X})}(v_{n_1 -i}),\sigma_{i(\mathbf{X})}( v_{n_1 -i + 1}),\sigma_{i(\mathbf{X})} (v_{n -i}),\sigma_{i(\mathbf{X})}( v_{n - i + 1})) = (v_{n_1 -i}, v_{n -i+1}, v_{n -i}, v_{n_1 - i + 1})$, which again is the same cycle $c_i$ (just with the ordering of the vertices reversed). So $\varphi(\varphi(\mathbf{X})) = \mathbf{X}$ for any $\mathbf{X} \in\mathcal{D}$, so in particular $\varphi:\mathcal{F}_{\Sigma}(\overline{C_n}) \to \mathcal{F}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2} }) $ is injective. To show surjectivity, it suffices to show that if $\mathbf{X} \in \mathcal{F}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2}})$ then $\varphi(\mathbf{X}) \in\mathcal{F}_{\Sigma}(\overline{C_{n}})$. Namely, that $\varphi$ can also be defined as a valid function $\varphi: \mathcal{F}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2}}) \to \mathcal{F}_{\Sigma}(\overline{C_{n}})$. Again, this is equivalent to showing that for any signed edge $e = (v_i,v_j) \in \mathbf{X}$, $v_i,v_j$ are connected in $C_{n_1} \oplus C_{n_2}$ if and only if $v_{\sigma_\mathbf{X}(i)},v_{\sigma_\mathbf{X}(j)}$ are connected in $C_{n}$. Since $\sigma_\mathbf{X}$ is an involution, this is the same as asking that for any signed edge $e = (v_i,v_j) \in \mathbf{X}$, $v_{\sigma_\mathbf{X}(\sigma_\mathbf{X}(i))},v_{\sigma_\mathbf{X}(\sigma_\mathbf{X}(j))}$ are connected in $C_{n_1} \oplus C_{n_2}$ if and only if $v_{\sigma_\mathbf{X}(i)},v_{\sigma_\mathbf{X}(j)}$ are connected in $C_{n}$. Setting $i' = \sigma_\mathbf{X}(i), j' =\sigma_\mathbf{X}(j)$, this states that for all signed edges $(v_{i'},v_{j'}) \in \mathbf{P}_{\sigma_\mathbf{X}} \mathbf{X} \mathbf{P}_{\sigma_\mathbf{X}}^T = Y \in \mathcal{D}$, we have that $v_{i'},v_{j'}$ are connected in $C_{n}$ if and only if $v_{\sigma_\mathbf{X}(i')},v_{\sigma_\mathbf{X}(j')}$ are connected in $C_{n_1} \oplus C_{n_2}$. 
But as shown above, we have that $i(\mathbf{X}) = i(\varphi(\mathbf{X}))$, so $\sigma_\mathbf{X} = \sigma_Y$, and then this fact was already proven above for any $Y \in \mathcal{D}$, which completes the proof. \end{proof} Now for any signed graph $\Sigma$ on $n$ vertices, let $\mathbf{A}_{\Sigma}$ be its adjacency matrix. Note that we can equivalently define $\mathcal{F}_{\Sigma}(\overline{C_n}) = \{ H \subseteq \overline{C_n} \; | \; \mathbf{P}_{\sigma} \mathbf{A}_{\Sigma} \mathbf{P}_{\sigma}^T = \mathbf{A}_{H} , \sigma \in S_n \}$. Here $ H \subseteq \overline{C_n}$ means $H$ is a subgraph of $\overline{C_n}$. On the other hand, we may be interested in the potentially much larger set of all possible permutations $\sigma$ such that $\mathbf{P}_{\sigma} \mathbf{A}_{\Sigma} \mathbf{P}_{\sigma}^T = \mathbf{A}_{H}$ for some $H \subset \overline{C_n}$. So define $\mathcal{H}_{\Sigma}(\overline{C_n}) = \{\sigma \; | \; \mathbf{P}_{\sigma} \mathbf{A}_{\Sigma} \mathbf{P}_{\sigma}^T = \mathbf{A}_{H} , H \subset \overline{C_n}, \sigma \in S_n \} $. It is not difficult to show that $|\mathcal{H}_{\Sigma}(\overline{C_n})| = |\text{Aut}(\Sigma)| |\mathcal{F}_{\Sigma}(\overline{C_n})|$, where $\text{Aut}(\Sigma)$ is the set of (signed) graph automorphisms of $\Sigma$. \begin{fact} We have $|\mathcal{H}_{\Sigma}(\overline{C_n})| = |\text{Aut}(\Sigma)| |\mathcal{F}_{\Sigma}(\overline{C_n})|$. \end{fact} \begin{proof} Fix any $H \subset \overline{C_n}$ such that $\mathbf{P}_{\sigma} \mathbf{A}_{\Sigma} \mathbf{P}_{\sigma}^T = \mathbf{A}_{H}$ for some $\sigma \in S_n$. We show that there are exactly $|\text{Aut}(\Sigma)|$ elements $\sigma' \in S_n$ such that $\mathbf{P}_{\sigma'} \mathbf{A}_{\Sigma} \mathbf{P}_{\sigma'}^T = \mathbf{A}_{H}$. By definition, $\text{Aut}(\Sigma)$ is the set of permutations $\pi \in S_n$ with $\mathbf{P}_{\pi} \mathbf{A}_{\Sigma} \mathbf{P}_{\pi}^T = \mathbf{A}_{\Sigma}$. 
For every $\pi \in \text{Aut}(\Sigma)$, we have $\mathbf{P}_{\sigma} \mathbf{P}_{\pi} \mathbf{A}_{\Sigma} \mathbf{P}_{\pi}^T\mathbf{P}_{\sigma}^T =\mathbf{P}_{\sigma \circ \pi } \mathbf{A}_{\Sigma} \mathbf{P}_{\sigma \circ \pi}^T = \mathbf{A}_{H}$, and moreover $|\{ \sigma \circ \pi \; | \; \pi \in \text{Aut}(\Sigma) \}| = |\text{Aut}(\Sigma)|$ since $S_n$ is a group. Now suppose we have some $\lambda \in S_n$ such that $\mathbf{P}_{\lambda} \mathbf{A}_{\Sigma} \mathbf{P}_{\lambda}^T = \mathbf{A}_{H}$ and $\lambda \notin \{ \sigma \circ \pi \; | \; \pi \in \text{Aut}(\Sigma) \}$. Then $\mathbf{P}_{\sigma} \mathbf{A}_{\Sigma} \mathbf{P}_{\sigma}^T = \mathbf{P}_{\lambda} \mathbf{A}_{\Sigma} \mathbf{P}_{\lambda}^T$, so $\mathbf{P}_{\sigma^{-1} \circ \lambda} \mathbf{A}_{\Sigma} \mathbf{P}_{\sigma^{-1} \circ \lambda}^T = \mathbf{A}_{\Sigma} $, which by definition implies that $\sigma^{-1} \circ \lambda =x$ for some $x \in \text{Aut}(\Sigma)$. Thus $ \lambda = \sigma \circ x \in \{ \sigma \circ \pi \; | \; \pi \in \text{Aut}(\Sigma) \}$, which is a contradiction. \end{proof} \begin{corollary}\label{cor:CycleEquiv} Fix any $n = n_1 + n_2$. Fix any simple graph $|\Sigma|$, such that any set of vertex disjoint edges $\{e_1,\dots,e_k\}$ in $|\Sigma|$ has size $k \leq \min\{n_1,n_2\}/4$, and let $\Sigma = (|\Sigma|,\sigma)$ be any signing of $|\Sigma|$. Let $\mathcal{F}_{\Sigma}(\overline{C_n})$ denote the set of subgraphs of $\overline{C_n}$ isomorphic to $|\Sigma|$, and similarly define $\mathcal{F}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2}})$. 
Then we have \[ \left|\mathcal{H}_{\Sigma}(\overline{C_n})\right| = \left|\mathcal{H}_{\Sigma}(\overline{C_{n_1} \oplus C_{n_2}})\right| \] \end{corollary} \subsection{Lower Bounds for Schatten, Ky-Fan, and Tail Error Testing} In this section, we demonstrate how our construction of subgraph equivalent matrices with gaps in their spectrum results in lower bounds for a number of other spectral testing problems via Lemma \ref{lem:ifBDthenLB}. We begin by proving a lower bound for testing Schatten norms. To do this, we must first demonstrate that there is a gap in the Schatten $1$ norm between a cycle and the union of two disjoint cycles. \begin{fact}[Theorem 1 of \cite{knapp2009sines}]\label{fact:cos} Fix any $a ,b \in \R$ with $\sin(b/2) \neq 0$, and any integer $n \geq 1$. Then we have \[\sum_{k=0}^{n-1} \cos(a + kb )= \frac{\sin(\frac{n b}{2})}{ \sin(\frac{b}{2}) }\cos\left(a + \frac{(n-1)b}{2} \right) \] \end{fact} \begin{proposition} Let $d \geq 6$ be any integer divisible by $4$. Then \[ \|C_d\|_{\mathcal{S}_1} = 4 \cdot \frac{\cos \left( \pi/d \right)}{\sin(\pi/d)}\] \end{proposition} \begin{proof} By \cite{chung1996lectures}, for any $d \geq 3$ the eigenvalues of $C_d$ are given by $2 \cdot \cos( \frac{2 \pi j}{d})$ for $j=0,1,\dots,d-1$. Let $a_1 = \lfloor d/4 \rfloor, a_2 = \lfloor 3d/4 \rfloor, a_3 = d - a_2 -1$. \begin{equation} \begin{split} \|C_{d}\|_{\mathcal{S}_1} & =2 \sum_{j=0}^{d-1} \left|\cos\left( \frac{2 \pi j}{d}\right) \right| \\ & =2 \left( \sum_{j=0}^{a_1 } \cos\left( \frac{2 \pi j}{d}\right) - \sum_{j=a_1 + 1}^{a_2 } \cos\left( \frac{2 \pi j}{d} \right)+ \sum_{j=a_2 + 1 }^{d-1} \cos\left( \frac{2 \pi j}{d} \right) \right) \\ & =2 \left( \sum_{j=-a_3}^{a_1 } \cos\left( \frac{2 \pi j}{d}\right) - \sum_{j=a_1 + 1}^{a_2 } \cos\left( \frac{2 \pi j}{d} \right) \right) \\ \end{split} \end{equation} We analyze each term in the above via Fact \ref{fact:cos}. 
Firstly: \begin{equation} \begin{split} \sum_{j=-a_3}^{a_1 } \cos\left( \frac{2 \pi j}{d}\right) &= \sum_{j=0}^{a_1 + a_3 } \cos\left( \frac{2 \pi j}{d} -\frac{2 \pi a_3 }{d}\right) \\ &= \frac{\sin((a_1 + a_3 + 1) \pi/d )}{\sin(\pi/d)} \cos\left( \frac{(a_1 + a_3) \pi}{d} -\frac{2 \pi a_3 }{d}\right)\\ \end{split} \end{equation} Note that if $d$ is divisible by $4$, the above becomes $\cos(\pi/d)/\sin(\pi/d)$. Next, for the second term, we have \begin{equation} \begin{split} \sum_{j=a_1 + 1}^{a_2 } \cos\left( \frac{2 \pi j}{d} \right) &= \sum_{j=0}^{a_2 - a_1 - 1 } \cos\left( \frac{2 \pi j}{d} + \frac{2\pi(a_1 + 1) }{d} \right)\\ &= \frac{\sin((a_2 - a_1 ) \pi/d )}{\sin(\pi/d)} \cos\left( \frac{(a_2 - a_1 - 1) \pi }{d} + \frac{2\pi(a_1 + 1) }{d} \right)\\ \end{split} \end{equation} Again, note that if $d$ is divisible by $4$, the above becomes $-\cos(\pi/d)/\sin(\pi/d)$. Putting these two equations together, we have that \[ \|C_d\|_{\mathcal{S}_1} = 4 \cdot \frac{\cos \left( \pi/d \right)}{\sin(\pi/d)} \] \end{proof} \begin{proposition}\label{prop:schattengap} Fix any $d$ larger than some constant. Then we have \[ \left| \| \mathbf{C}_{8d}\|_{\mathcal{S}_1} - \| \mathbf{C}_{4d} \oplus \mathbf{C}_{4d} \|_{\mathcal{S}_1} \right| \gtrsim \frac{1}{d^3} \] \end{proposition} \begin{proof} By the preceding proposition, we have $\| \mathbf{C}_{d}\|_{\mathcal{S}_1} = 4 \cot( \pi/d)$ for any $d$ divisible by $4$. 
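As a quick numerical sanity check of this closed form (outside the formal proof, and assuming NumPy is available), one can compare the Schatten $1$ norm of the cycle adjacency matrix against $4\cot(\pi/d)$ directly, and confirm that the gap between one long cycle and two half-length cycles is indeed bounded below by $1/d^3$:

```python
import numpy as np

def cycle_adjacency(d):
    """Adjacency matrix of the d-cycle C_d."""
    A = np.zeros((d, d))
    for j in range(d):
        A[j, (j + 1) % d] = A[(j + 1) % d, j] = 1
    return A

def schatten_1(A):
    """Schatten 1 norm of a real symmetric matrix (sum of |eigenvalues|)."""
    return np.abs(np.linalg.eigvalsh(A)).sum()

# Closed form ||C_d||_{S_1} = 4 cos(pi/d)/sin(pi/d) for d divisible by 4
for d in (8, 16, 32, 64):
    closed_form = 4 * np.cos(np.pi / d) / np.sin(np.pi / d)
    assert np.isclose(schatten_1(cycle_adjacency(d)), closed_form)

# Gap between C_{8d} and C_{4d} + C_{4d}: small but at least Omega(1/d^3)
d = 16
gap = schatten_1(cycle_adjacency(8 * d)) - 2 * schatten_1(cycle_adjacency(4 * d))
assert abs(gap) > 1 / d**3
```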
Thus, using the Taylor expansion of the cotangent, $\cot(x) = \frac{1}{x} - \frac{x}{3} - \frac{x^3}{45} + O(x^5)$, we have \begin{equation} \begin{split} \| \mathbf{C}_{8d}\|_{\mathcal{S}_1} &= 4 \left( \frac{8d}{\pi } - \frac{ \pi}{24d} - \frac{\pi^3}{ 45 \cdot 512 \cdot d^3} + O(1/d^5) \right) \end{split} \end{equation} and \begin{equation} \begin{split} \| \mathbf{C}_{4d} \oplus \mathbf{C}_{4d} \|_{\mathcal{S}_1} &=2 \|\mathbf{C}_{4d} \|_{\mathcal{S}_1} \\ & = 4 \left( \frac{8d}{\pi } - \frac{ \pi}{6d} - \frac{\pi^3}{ 45 \cdot 32 \cdot d^3} + O(1/d^5) \right) \end{split} \end{equation} Thus \begin{equation} \begin{split} \left| \| \mathbf{C}_{8d}\|_{\mathcal{S}_1} - \| \mathbf{C}_{4d} \oplus \mathbf{C}_{4d} \|_{\mathcal{S}_1} \right|& = \frac{\pi}{2d} + O(1/d^3) \gtrsim \frac{1}{d^3} \end{split} \end{equation} \end{proof} \begin{theorem}\label{thm:schattenlb} Fix any $\frac{1}{\sqrt{n}} \leq \epsilon \leq 1$. Then given $\mathbf{A} \in \R^{n \times n}$ with $\|\mathbf{A}\|_\infty \leq 1$, any non-adaptive sampling algorithm which distinguishes between the cases \begin{enumerate} \item $\|\mathbf{A}\|_{\mathcal{S}_1} > \epsilon_0 n^{1.5}$ \item $\|\mathbf{A}\|_{\mathcal{S}_1} <\epsilon_0 n^{1.5} - \epsilon n^{1.5} $ \end{enumerate} with probability at least $3/4$, where $\epsilon_0 = \wt{\Theta}(\epsilon)$, must query at least $\tilde{\Omega}(1/\epsilon^4)$ entries of $\mathbf{A}$. \end{theorem} \begin{proof} We use the hard instance $\mathcal{D}_1,\mathcal{D}_2$ as earlier. Set $k = C \frac{1}{\epsilon^2 \log^9(1/\epsilon)}$, $t = \log k$, and $d = 4t$, and $m=n/(dk)$. We instantiate the matrices $(\mathbf{B},\mathbf{D},\mathbf{Z})$ in the hard instance via $\mathbf{B} = \mathbf{C}_{2d}, \mathbf{D} = \mathbf{C}_{d} \oplus \mathbf{C}_d$, and let $\mathbf{Z}_{i,j} = \mathbf{Z}_{j,i} = \delta_{i,j}$ for $i \leq j$, where $\delta_{i,j} \in \{-1,1\}$ are i.i.d. Bernoulli random variables, so that $\mathbf{Z} \in \R^{m \times m}$ is a symmetric random Bernoulli matrix. 
Using the fact that $\|\mathbf{Z}\|_2 \leq O(\sqrt{m})$ with high probability \cite{vershynin2010introduction}, along with the fact that $\|\mathbf{Z}\|_F^2 = m^2$ deterministically, we have that $\|\mathbf{Z}\|_{\mathcal{S}_1} > C_1 m^{1.5}$ with non-zero probability for some constant $C_1 > 0$, as the former two facts imply that $\mathbf{Z}$ has $\Omega(m)$ eigenvalues with magnitude $\Theta(\sqrt{m})$. Thus, we can deterministically fix $\mathbf{Z}$ to be such a matrix with $\{1,-1\}$ entries such that $\|\mathbf{Z}\|_{\mathcal{S}_1} \geq C_1 m^{1.5}$. Given this, we have $\|\wt{\mathbf{B}}\|_{\mathcal{S}_1} = \|\mathbf{B} \otimes \mathbf{Z} \|_{\mathcal{S}_1} = \|\mathbf{B}\|_{\mathcal{S}_1} \cdot \|\mathbf{Z}\|_{\mathcal{S}_1}$, and so by Proposition \ref{prop:schattengap}, we have \[ \left| \|\wt{\mathbf{B}}\|_{\mathcal{S}_1} - \|\wt{\mathbf{D}}\|_{\mathcal{S}_1} \right| \geq C_0\frac{m^{1.5}}{d^3} \] for some absolute constant $C_0 > 0$. Note also that we have $\|\mathbf{B} \|_{\mathcal{S}_1} > \Omega(d)$, where we use the fact that a constant fraction of the eigenvalues $2 \cdot \cos( \frac{2 \pi j}{d})$ for $j=0,1,\dots,d-1$ of $\mathbf{B}$ are $\Omega(1)$. Thus we have $\|\wt{\mathbf{B}}\|_{\mathcal{S}_1} = \Theta(dm^{1.5})$. Now by Proposition \ref{prop:cong}, we obtain that $\mathbf{B} \cong_{\mathcal{U}_{2d}^t, S_{2d} } \mathbf{D}$, and thus $\wt{\mathbf{B}} = \mathbf{B} \otimes \mathbf{Z} \cong_{\mathcal{U}_{2d}^t, \Gamma_{2d,2dm}} \mathbf{D} \otimes \mathbf{Z} = \wt{\mathbf{D}}$ by Lemma \ref{len:kron}. Thus by Lemma \ref{lem:ifBDthenLB}, we have that distinguishing $\mathcal{D}_1$ from $\mathcal{D}_2$ requires $\Omega(k^2) = \tilde{\Omega}(1/\epsilon^4)$ samples for any non-adaptive algorithm. It suffices then to show that if $\mathbf{A}_1 \sim \mathcal{D}_1$ and $\mathbf{A}_2 \sim \mathcal{D}_2$, then we have the desired gap in Schatten norms. 
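The multiplicativity $\|\mathbf{B} \otimes \mathbf{Z}\|_{\mathcal{S}_1} = \|\mathbf{B}\|_{\mathcal{S}_1} \cdot \|\mathbf{Z}\|_{\mathcal{S}_1}$ used above holds because the singular values of a Kronecker product are the pairwise products of the factors' singular values. A minimal numerical illustration (not part of the proof; random symmetric sign matrices stand in for $\mathbf{B}$ and $\mathbf{Z}$, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

def schatten_1(A):
    """Schatten 1 norm (sum of singular values)."""
    return np.linalg.svd(A, compute_uv=False).sum()

def random_sign_symmetric(n):
    """Symmetric matrix with i.i.d. +/-1 entries on the upper triangle."""
    M = np.where(rng.standard_normal((n, n)) >= 0, 1.0, -1.0)
    return np.triu(M) + np.triu(M, 1).T

B = random_sign_symmetric(5)
Z = random_sign_symmetric(7)

# Singular values of B (x) Z are all products sigma_i(B) * sigma_j(Z),
# so the Schatten 1 norm factors exactly.
assert np.isclose(schatten_1(np.kron(B, Z)), schatten_1(B) * schatten_1(Z))
```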
We have \begin{equation} \begin{split} \left| \|\mathbf{A}_1\|_{\mathcal{S}_1} -\| \mathbf{A}_2\|_{\mathcal{S}_1} \right| &\geq \sum_{i=1}^k C_0\frac{m^{1.5}}{d^3} \\ & \geq C_0 \frac{n^{1.5}}{d^{4.5} k^{1/2}} \\ & \geq \epsilon n^{1.5} \\ \end{split} \end{equation} where the last inequality follows for an appropriate choice of the constant $C$, and assuming that $1/\epsilon$ is larger than some constant as in Theorem \ref{thm:lbmain}. Again, if $1/\epsilon$ is not larger than some constant, an $\Omega(1)$ lower bound always applies, since an algorithm must read at least one entry of the matrix to have any advantage. Now note that we also have $\|\mathbf{A}_1\|_{\mathcal{S}_1} = k \|\wt{\mathbf{B}}\|_{\mathcal{S}_1} = \Theta(kdm^{1.5}) = \Theta(n^{1.5} /\sqrt{dk}) = \wt{\Theta}(\epsilon n^{1.5})$ as desired. To complete the proof, we can scale down all the entries of the input matrix by $1/2$, which results in the required bounded entry property, and only changes the gap by a constant factor. \end{proof} We now present our lower bound for testing Ky-Fan norms. Recall that for a matrix $\mathbf{A} \in \R^{n \times n}$ and $1 \leq s \leq n$, the Ky-Fan $s$ norm is defined as $\|\mathbf{A}\|_{KF(s)} = \sum_{i=1}^s \sigma_i(\mathbf{A})$, where $\sigma_i(\mathbf{A})$ is the $i$-th singular value of $\mathbf{A}$. \begin{theorem}\label{thm:kflb} Fix any $1 \leq s \leq n/(\text{poly} \log n)$. 
Then there exists a fixed constant $c > 0$ such that given $\mathbf{A} \in \R^{n \times n}$ with $\|\mathbf{A}\|_\infty \leq 1$, any non-adaptive sampling algorithm which distinguishes between the cases \begin{enumerate} \item $\|\mathbf{A}\|_{KF(s)} > \frac{c}{\log(s)}n $ \item $\|\mathbf{A}\|_{KF(s)} < (1-\epsilon_0) \frac{c}{\log(s)} n $ \end{enumerate} with probability at least $3/4$, where $\epsilon_0 = \Theta(1/\log^2(s))$, must query at least $\tilde{\Omega}(s^2)$ entries of $\mathbf{A}$.\footnote{$\wt{\Omega}$ hides $\log(s)$ factors here.} \end{theorem} \begin{proof} The proof is nearly the same as the usage of the hard instance in Theorem \ref{thm:lbmain}. Set $k=s$, and let $d_0 = \Theta(\log s)$ and $d = 2d_0 + 1$. We apply Lemma \ref{lem:ifBDthenLB} with the hard instance as instantiated with $\mathbf{Z} = \mathbf{1}^{m \times m}$, and the matrices $\mathbf{B} = \mathbf{A}_{C_{d}} - 2\mathbb{I}_{d}$ and $\mathbf{D} = \mathbf{A}_{C_{d_0} \oplus C_{d_0+1}} -2\mathbb{I}_{d}$ (as in Theorem \ref{thm:schattenlb}, the entries can be scaled down by a constant factor at the end to satisfy the bounded entry condition, which changes the stated bounds by only a constant factor). Notice that since the eigenvalues of $C_d$ are given by $2 \cdot \cos( \frac{2 \pi j}{d})$ for $j=0,1,\dots,d-1$ \cite{chung1996lectures}, we have $\lambda_{\min}(\mathbf{A}_{C_{d}}) = 2\cos(\frac{2 \pi d_0}{2d_0 + 1}) = -2 + \Theta(1/\log^2 s)$, $\lambda_{\min}(\mathbf{A}_{C_{d_0} \oplus C_{d_0+1}} ) = -2$, and $\lambda_{\max}(\mathbf{A}_{C_{d}}) = \lambda_{\max}(\mathbf{A}_{C_{d_0} \oplus C_{d_0+1}} ) = 2$. Thus $\|\mathbf{D}\|_2 = 4$ and $\|\mathbf{B}\|_2 = 4 - \Theta(1/d^2)$, and moreover $\|\mathbf{D} \otimes \mathbf{Z}\|_2 = 4m$ and $\|\mathbf{B} \otimes \mathbf{Z}\|_2 = 4m(1 - \Theta(1/\log^2 s))$. Thus if $\mathbf{A}_1 \sim \mathcal{D}_1$ and $\mathbf{A}_2 \sim \mathcal{D}_2$, we have $\|\mathbf{A}_1\|_{KF(s)} > \sum_{i=1}^k 4m = 4km$, and $\|\mathbf{A}_2\|_{KF(s)} < 4km(1 - \Theta(1/\log^2 s))$. The proof then follows from the $\Omega(k^2)$ lower bound for this hard instance via Lemma \ref{lem:ifBDthenLB}. 
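The spectral facts used in this argument are easy to confirm numerically. The following sketch (an illustration outside the formal proof, with $d_0$ chosen arbitrarily and assuming NumPy) checks that the odd cycle $C_{2d_0+1}$ has smallest eigenvalue $-2 + \Theta(1/d^2)$, strictly above the value $-2$ attained by a graph containing an even cycle:

```python
import numpy as np

def cycle_adjacency(d):
    """Adjacency matrix of the d-cycle C_d."""
    A = np.zeros((d, d))
    for j in range(d):
        A[j, (j + 1) % d] = A[(j + 1) % d, j] = 1
    return A

d0 = 20                      # arbitrary; d0 = Theta(log s) in the proof
d = 2 * d0 + 1               # odd cycle length
lam_min = np.linalg.eigvalsh(cycle_adjacency(d)).min()

# Smallest eigenvalue of the odd cycle: 2*cos(2*pi*d0/(2d0+1)),
# which equals -2 + Theta(1/d^2) and in particular is strictly above -2.
assert np.isclose(lam_min, 2 * np.cos(2 * np.pi * d0 / d))
assert -2 < lam_min < -2 + 10 / d**2

# An even cycle attains -2 exactly (here C_{d0} with d0 even).
lam_min_even = np.linalg.eigvalsh(cycle_adjacency(d0)).min()
assert np.isclose(lam_min_even, -2)
```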
\end{proof} We now present our lower bound for testing the magnitude of the $s$-tail $\|\mathbf{A} - \mathbf{A}_s \|_{F}^2$, where $\mathbf{A}_s = \mathbf{U} \Sigma_s \mathbf{V}^T$ is the truncated SVD (the best rank-$s$ approximation to $\mathbf{A}$). Note that $\|\mathbf{A} - \mathbf{A}_s \|_{F}^2 = \sum_{j > s}\sigma_j^2(\mathbf{A})$. \begin{theorem}\label{thm:taillb} Fix any $1 \leq s \leq n/(\text{poly} \log n)$. Then there exists a fixed constant $c> 0$ (independent of $\epsilon$), such that given $\mathbf{A} \in \R^{n \times n}$ with $\|\mathbf{A}\|_\infty \leq 1$, any non-adaptive sampling algorithm which distinguishes between the cases \begin{enumerate} \item $\|\mathbf{A} - \mathbf{A}_s \|_{F}^2 > \frac{c}{\log(s)} \cdot \frac{n^2}{s} $ \item $\|\mathbf{A} - \mathbf{A}_s \|_{F}^2 < (1-\epsilon_0)\cdot \frac{c}{\log(s)}\cdot \frac{n^2}{s}$ \end{enumerate} with probability at least $3/4$, where $\epsilon_0 = \wt{\Theta}(1)$, must query at least $\tilde{\Omega}(s^2)$ entries of $\mathbf{A}$. \end{theorem} \begin{proof} We set $s = k$, and use the same hard instance as in Theorem \ref{thm:kflb} above. Note that if $\mathcal{D}_1,\mathcal{D}_2$ are defined as in Theorem \ref{thm:kflb}, then for $\mathbf{A}_1 \sim \mathcal{D}_1$ and $\mathbf{A}_2 \sim \mathcal{D}_2$ we have $\sum_{i=1}^s \sigma_i^2(\mathbf{A}_1) = s(4m)^2 = 16n^2/(sd^2)$ and $\sum_{i=1}^s \sigma_i^2(\mathbf{A}_2) = 16 n^2/(sd^2)(1-\Theta(1/\log^2 s))$. Now note that we also have $\|\mathbf{A}_1\|_F^2 = \|\mathbf{A}_2\|_F^2 = \Theta(k d m^2) = \Theta(n^2/(ds))$, using that each of the single cycle and the union of two smaller cycles has $d$ edges, so the Frobenius norm of each block is $\Theta(d m^2)$ in both cases. 
Using that $d = \Theta(\log s)$, we have $\|\mathbf{A}_1 - (\mathbf{A}_1)_s \|_{F}^2 = \|\mathbf{A}_1\|_F^2 - 16n^2/(sd^2)$ and $\|\mathbf{A}_2 - (\mathbf{A}_2)_s \|_{F}^2 = \|\mathbf{A}_1\|_F^2 - 16 n^2/(sd^2)(1-\Theta(1/\log^2 s))$. Both tails are of size $\Theta(\frac{n^2}{s \log(s)})$, and they differ by an additive $16n^2/(sd^2) \cdot \Theta(1/\log^2 s) = \wt{\Theta}(\frac{n^2}{s})$ term, which completes the proof after applying Lemma \ref{lem:ifBDthenLB}. \end{proof} \subsection{Lower Bound For Estimating Ky-Fan of \texorpdfstring{$\mathbf{A} \mathbf{A}^T$ }{A A-transpose} via Submatrices} In this section, we demonstrate an $\Omega(1/\epsilon^4)$ query lower bound for algorithms which estimate the quantity $\sum_{i=1}^k \sigma_i^2(\mathbf{A}) = \|\mathbf{A}\mathbf{A}^T \|_{KF(k)}$ for any $k \geq 1$ by querying a sub-matrix. As a special case, the following lemma states that for $\epsilon = \Theta(1/\sqrt{n})$, additive $\epsilon n^2$ approximation of $\|\mathbf{A}\mathbf{A}^T \|_{KF(k)} $ requires one to read a constant fraction of the entire matrix $\mathbf{A}$. \begin{lemma}\label{lem:lowerboundAA} Fix any $1 \leq k \leq n$, and fix any $\frac{100}{\sqrt{n}} \leq \epsilon \leq 1/4$. Any algorithm that queries a submatrix $\mathbf{A}_{S \times T}$ of $\mathbf{A} \in \R^{n \times n}$ with $\|\mathbf{A}\|_\infty \leq 1$ and distinguishes with probability at least $4/5$ between the following two cases: \begin{itemize} \item $\sum_{i=1}^k \sigma_i^2(\mathbf{A}) > n^2 / 2 + \epsilon n^2$. \item $\sum_{i=1}^k \sigma_i^2(\mathbf{A}) \leq n^2 / 2 $ \end{itemize} must make $|S|\cdot |T| = \Omega(1/\epsilon^4)$ queries to the matrix $\mathbf{A}$. \end{lemma} \begin{proof} We design two distributions $\mathcal{D}_1,\mathcal{D}_2$. If $\mathbf{A}_1 \sim \mathcal{D}_1$, we independently set each row of $\mathbf{A}_1$ equal to the all $1's$ vector with probability $p_1 = 1/2 + 2 \epsilon$, and then return either $\mathbf{A} = \mathbf{A}_1$ or $\mathbf{A} = \mathbf{A}_1^T$ with equal probability. 
If $\mathbf{A}_2 \sim \mathcal{D}_2$, we independently set each row of $\mathbf{A}_2$ equal to the all $1's$ vector with probability $p_2 = 1/2 - 2\epsilon$, and then return either $\mathbf{A} = \mathbf{A}_2$ or $\mathbf{A} = \mathbf{A}_2^T$ with equal probability. Our hard instance then draws $\mathbf{A} \sim \frac{\mathcal{D}_1 + \mathcal{D}_2}{2}$ from the mixture. First note that in both cases, we have $\|\mathbf{A}\|_2^2 = \|\mathbf{A}\|_F^2 = \sum_{i=1}^k \sigma_i^2(\mathbf{A})$, since the matrix has rank at most $1$. Since $\epsilon \geq \frac{100}{\sqrt{n}}$, by Chernoff bounds, we have that if $\mathbf{A}_1 \sim \mathcal{D}_1$ then $\sum_{i=1}^k \sigma_i^2(\mathbf{A}) > n^2 / 2 + \epsilon n^2$ with probability at least $99/100$. Similarly, we have that if $\mathbf{A}_2 \sim \mathcal{D}_2$ then $\sum_{i=1}^k \sigma_i^2(\mathbf{A}) \leq n^2/2$ with probability at least $99/100$. Now suppose that such an algorithm sampling $|S|\cdot |T| < \frac{c^2}{ \epsilon^4}$ entries exists, for some constant $c>0$. Then by Yao's min-max principle, there are fixed subsets $S,T \subset [n]$ such that, with probability $9/10$ over the distribution $\frac{\mathcal{D}_1 + \mathcal{D}_2}{2}$, the algorithm correctly distinguishes $\mathcal{D}_1$ from $\mathcal{D}_2$ given only $A_{S \times T}$. Suppose WLOG that $|S| \leq \frac{c}{\epsilon^2}$. Then consider the case only when $\mathbf{A}_1$ or $\mathbf{A}_2$ is returned by either of the distributions, and not their transpose, which occurs with probability at least $1/2$. Then $A_{S \times T}$ is just a set of $|S|$ rows, each of which is either all $0$'s or all $1$'s. Moreover, each row is set to being the all $1$'s row independently with probability $p_1$ in the case of $\mathcal{D}_1$, and $p_2$ in the case of $\mathcal{D}_2$. Thus, by independence across rows, the behavior of the algorithm can be assumed to depend only on the number of rows which are set to $1$. 
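The resulting reduction to distinguishing two binomial row-counts can be sanity-checked numerically. The following stdlib-only sketch (parameters chosen arbitrarily for illustration, not from the paper) computes the total variation distance between $\texttt{Bin}(|S|,p_1)$ and $\texttt{Bin}(|S|,p_2)$ exactly and confirms it is $O(\epsilon\sqrt{|S|})$ in scale:

```python
from math import comb, sqrt

def binom_pmf(n, p, x):
    """Probability mass function of Bin(n, p) at x."""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

def tv_binom(n, p, q):
    """Exact total variation distance between Bin(n,p) and Bin(n,q)."""
    return 0.5 * sum(abs(binom_pmf(n, p, x) - binom_pmf(n, q, x))
                     for x in range(n + 1))

eps = 0.02
S = 100  # number of sampled rows; here eps * sqrt(S) = 0.2
tv = tv_binom(S, 0.5 + 2 * eps, 0.5 - 2 * eps)

# d_TV = O(eps * sqrt(|S|)): small whenever |S| << 1/eps^2,
# so the two cases cannot be reliably distinguished from these rows
assert tv <= 10 * eps * sqrt(S)
```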
Thus, in the case of $\mathcal{D}_1$ the algorithm receives $X_1 \sim \texttt{Bin}(|S|,p_1)$ and in $\mathcal{D}_2$ the algorithm receives $X_2 \sim \texttt{Bin}(|S|,p_2)$. If $d_{TV}(X_1,X_2)$ is the total variation distance between $X_1$ and $X_2$, then by Equation 2.15 of \cite{adell2006exact}, assuming that $\epsilon \sqrt{|S|} $ is smaller than some constant (which can be obtained by setting $c$ small enough), we have \[ d_{TV}(X_1,X_2) \leq O( \epsilon \sqrt{|S|} ) \] which is at most $1/100$ for $c$ a small enough constant. Thus any algorithm can correctly distinguish these two distributions with advantage at most $1/100$. Since we restricted our attention to the event when rows were set and not columns, and since we conditioned on the gap between the norms which held with probability $99/100$, it follows that the algorithm distinguishes $\mathcal{D}_1$ from $\mathcal{D}_2$ with probability at most $1/2 + 1/4 + (2/100) < 4/5$, which completes the proof. \end{proof} \begin{comment} \subsection{Old $O(1/\epsilon^{4/3})$ bound} Consider the following distribution. Split $[n]$ into $k$ random subsets $S_1,\dots,S_k$ where $\ex{|S_i|} = n/k$. Let $v_1,\dots,v_k \sim \{-1,1\}^n$. Consider the block diagonal matrix $\mathbf{B} = \sum_{i=1}^k (v_i)_{S_i} (v_i)_{S_i}^T$. We consider two cases: $\mathbf{A}_1 = \mathbf{I} + (\mathbf{B} - \texttt{diag}(\mathbf{B}))$ or $\mathbf{A}_2 = \mathbf{I} - (\mathbf{B} - \texttt{diag}(\mathbf{B}))$. We prove the following lemma. Before this, we will need the following proposition: \begin{proposition}\label{prop:boundcycles} Let $G= (V,E)$ be any simple undirected graph, and fix any $k \geq 3$. Then the number of $k$-cycles in $G$ is at most $O((2|E|)^{k/2})$. \end{proposition} \begin{proof} Let $\mathcal{T}_k$ be the set of $k$-cycles in $G$, and let $A$ be the adjacency matrix of $G$. It is standard for any $t\geq 1$ that $(A^t)_{i,j}$ is the number of walks of length $t$ starting from vertex $i$ and ending at vertex $j$. Thus $|\mathcal{T}_k| \leq \text{Tr}(A^k)$. 
Note in addition, for any real symmetric matrix $B \in \R^{n \times n}$ we will use the fact that $\text{Tr}(B) = \sum_i \lambda_i(B)$ where $\{\lambda_i(B)\}_{i=1}^n$ are the eigenvalues of $B$, and moreover that $\lambda_i(B^k) = \lambda_i(B)^k$. Now \begin{equation} \begin{split} \text{Tr}(A^k) & = \sum_i \lambda_i(A)^k \\ & \leq \sum_i |\lambda_i(A)|^k \\ & \leq \left(\sum_i |\lambda_i(A)|^2\right)^{k/2} \\ & = \left(\text{Tr}(A^2)\right)^{k/2} \\ & = \left(2|E|\right)^{k/2} \\ \end{split} \end{equation} where in the third line we use that $\|x\|_p \leq \|x\|_q$ for any $x \in \R^n$ and $p \geq q$, in the fourth line we use that $A^2 = AA^T$ is PSD so $ \left(\sum_i |\lambda_i(A)|^2\right)^{k/2} = \left(\sum_i \lambda_i(A)^2\right)^{k/2} = \left(\text{Tr}(A^2)\right)^{k/2}$, and in the final line we use that the number of closed walks of length $2$ from a fixed vertex $v_i$ to itself is deg$(v_i)$, so $\text{Tr}(A^2) = \sum_i \text{deg}(v_i) = 2|E|$. \end{proof} \begin{lemma} Any randomized non-adaptive algorithm which makes an expected $t$ queries to the matrix $A \in \R^{n \times n}$ and distinguishes with probability $4/5$ whether $A = A_1$ or $A = A_2$ must have $t = \Omega(k^{4/3})$. \end{lemma} \begin{proof} We set the input distribution $\mathcal{D} = \frac{A_1 + A_2}{2}$, and suppose $t < (k/2000)^{4/3}$. We can first condition on the fact that the algorithm makes at most $10t$ queries, and solves the problem with probability at least $7/10$, by a Markov bound. We now make the algorithm deterministic by Yao's min-max principle: specifically, there is a subset $T \subset [n] \times [n]$ of size $|T| \leq 10t$ and a deterministic function $f:A_{T} \to \{0,1\}$ such that $f$ correctly distinguishes the two cases with probability $7/10$, where here the randomness is now taken over the choice of $v_i$ and $S_i$. We can now interpret $A$ as a graph with unit edge weights. The query set $T$ queries for the set of edges from some fixed graph $G_T$. 
In other words, the edges $E(G_T) = \{ (i,j) \; | \; (i,j) \in T\}$. On the other hand, let $G_A$ be the graph given by $A$. In other words, $E(G_A) = \{ (i,j) \in S_t | i \neq j, t \in [k] \}$, where $w(i,j) = A_{i,j}$ is the weight of the edge $(i,j)$. The algorithm gets to observe $E(G_T) \cap E(G_A)$. \begin{claim} With probability $99/100$, $E(G_T) \cap E(G_A)$ does not contain any odd cycles, namely the graph $G_T \cap G_A$ is bipartite. \end{claim} \begin{proof} Let $\mathcal{T}_\ell$ be the set of $\ell$-cycles in $E(G_T)$ for any odd integer $\ell \geq 3$. By Proposition \ref{prop:boundcycles}, we have $|\mathcal{T}_\ell| \leq (20t)^{\ell/2}$. Fix any $c \in \mathcal{T}_\ell$. We compute the probability that $c \in E(G_A)$. For this to occur, all $\ell$ of the vertices in $c$ must be contained in the same set $S_j$ for some $j \in [k]$. We have \begin{equation} \begin{split} \pr{c \in E(G_A) } & \leq \sum_{j \in [k]} \pr{c \in S_j} \\ & \leq k \pr{c \in S_j} \\ & \leq k \cdot \frac{1}{k^\ell} \\ \end{split} \end{equation} where the last inequality holds because there are $\ell$ vertices in $c$, each of which is contained in $S_j$ independently with probability $1/k$. By a union bound, the probability that $E(G_T) \cap E(G_A)$ contains an $\ell$-cycle is at most $| \mathcal{T}_{\ell}| \cdot \frac{1}{k^{\ell-1}} \leq \frac{(20t)^{\ell/2}}{k^{\ell-1}} \leq (\frac{1}{100})^{\ell/2} k^{2\ell/3 - \ell + 1}\leq (\frac{1}{10})^\ell$, where in the last step we used $\ell \geq 3$. Union bounding over all $\ell$: \begin{equation} \begin{split} \pr{ (E(G_T) \cap E(G_A)) \text{ contains an odd cycle} } & \leq \sum_{\ell = 3}^\infty | \mathcal{T}_{\ell}| \cdot \frac{1}{k^{\ell-1}}\\ & \leq \sum_{\ell = 3}^\infty (1/10)^\ell \\ & \leq 1/100 \\ \end{split} \end{equation} \end{proof} We now show that conditioned on this claim the probability that $f$ is correct is exactly $1/2$. Let $\mathcal{E}$ be the event that $G_{T \cap A} = G_T \cap G_A$ has no odd cycles. 
Note that $\mathcal{E}$ is a function only of the randomness which determines the sets $\{S_t\}_{t \in [k]}$, and not of whether $A = A_1$ or $A = A_2$, nor of the randomness in the $v_j$'s. So we can now condition on any values of the sets $\{S_t\}_{t \in [k]}$ which satisfy $\mathcal{E}$. Conditioned on this, the graph $G_{T \cap A}$ is bipartite, so we can assign a fixed two-coloring $\varphi :V(G_{T \cap A} ) \to \{0,1\}$ to the graph. We now claim that for any value of $\mathcal{O} = A_T$ drawn from the distribution, we have $\pr{A = A_1 \mid \mathcal{O} = A_T } = \pr{A = A_2 \mid \mathcal{O} = A_T }$. To see this, note that for any realization of $A_T$, there are precisely two values of \dots

For $6 \times 6$ examples, show that you cannot get a subgraph which contains $\geq 4$ vertices with edges connected to them, unless it is $4$ vertices in a single connected subgraph with diameter $\leq 2$ (which must be a star). Any subgraph with $\leq 3$ such vertices is fine. Then deal with stars, which can contain more than $4$ vertices but are fine. So the cases are enumerated depending on how many vertices appear in the subgraph with adjacent edges. \begin{itemize} \item Two or more subgraphs with four vertices total: the only case is two disjoint edges. Cannot happen with fewer than $1/\epsilon^{3/2}$. \item One subgraph with four vertices and diameter $\geq 3$: must contain a length-$3$ path; cannot happen. \item One subgraph with four vertices and diameter $\leq 2$: must be a star on $4$ vertices. This is okay. \item One subgraph with $\leq 3$ vertices: it is either a triangle, a single edge, or a wedge. All are okay. \item More than four vertices: must contain a subset which falls into one of the prior categories. This cannot happen if it is a category that cannot happen. Otherwise, it must be a star on $r > 4$ vertices, which is okay.
\end{itemize} \end{proof} \end{comment}
\newcommand{\new}[1]{{\em #1\/}}
\newcommand{\set}[1]{\{#1\}}
\newcommand{\setof}[2]{\{\,{#1}|~{#2}\,\}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\compl}[1]{\overline{#1}}
\newcommand{\pr}[1]{\text{\bf Pr}\normalfont\lbrack #1 \rbrack}
\newcommand{\ex}[1]{\mathbb{E}\normalfont\lbrack #1 \rbrack}
\newcommand{\bpr}[1]{\text{\bf Pr}\normalfont \Big[#1 \Big]}
\newcommand{\bex}[1]{\mathbb{E}\normalfont \Big[#1 \Big]}
\newcommand{\bvar}[1]{\mathbf{Var}\normalfont \Big(#1 \Big)}
\def\prob#1#2{\mbox{\bf Pr}_{#1}\left[ #2 \right]}
\def\pvec#1#2{\vec{\mbox{P}}^{#1}\left[ #2 \right]}
\def\expec#1#2{{\bf \mathbb{E}}_{#1}[ #2 ]}
\def\expecf#1#2{{\bf \mathbb{E}}_{#1}\left[ #2 \right]}
\def\var#1{\mbox{\bf Var}[ #1 ]}
\def\varf#1{\mbox{\bf Var}\left[ #1 \right]}
\def\setof#1{\left\{#1 \right\}}
\def\sizeof#1{\left|#1 \right|}
\def\trace#1{\mathrm{Tr} \left(#1 \right)}
\def\trs#1{\mathrm{Tr}_{\sigma} \left(#1 \right)}
\def\Norm#1{\left\| #1 \right\|}
\usepackage{multirow}
\usepackage{hhline}
\usepackage{makecell}
\newcommand{\TODO}[1]{ {\color{red} TODO: #1}}\newcommand{\p}{\mathfrak{p}}
\newcommand{\PP}{\mathbb{P}}
\newcommand{\tx}[1]{\text{#1}}
\newcommand{\ttx}[1]{\texttt{#1}}
\newcommand{\wt}[1]{\widetilde{#1}}
\newtheorem{duplicate}{}
\title{Testing Positive Semi-Definiteness via Random Submatrices}
\author{
Ainesh Bakshi\thanks{Ainesh Bakshi and Rajesh Jayaram gratefully acknowledge partial support from the Office of Naval Research (ONR) under grant N00014-18-1-2562 and from the National Science Foundation (NSF) under Grant No. CCF-1815840.}\\
CMU \\
\texttt{abakshi@cs.cmu.edu}
\and
Nadiia Chepurko \\
MIT \\
\texttt{nadiia@mit.edu}
\and
Rajesh Jayaram\footnotemark[1] \\
CMU \\
\texttt{rkjayara@cs.cmu.edu}
}
\begin{document} \maketitle \begin{abstract} We study the problem of testing whether a matrix $\mathbf{A} \in \R^{n \times n}$ with bounded entries ($\|\mathbf{A}\|_\infty \leq 1$) is positive semi-definite (PSD), or $\epsilon$-far in Euclidean distance from the PSD cone, meaning that $\min_{\mathbf{B} \succeq 0} \|\mathbf{A} - \mathbf{B}\|_F^2 > \epsilon n^2$, where $\mathbf{B} \succeq 0$ denotes that $\mathbf{B}$ is PSD.
Our main algorithmic contribution is a non-adaptive tester which distinguishes between these cases using only $\tilde{O}(1/\epsilon^4)$ queries to the entries of $\mathbf{A}$.\footnote{Throughout the paper, $\tilde{O}(\cdot)$ hides $\log(1/\epsilon)$ factors.} If instead of the Euclidean norm we consider distance in the spectral norm, we obtain the ``$\ell_\infty$-gap problem'', where $\mathbf{A}$ is either PSD or satisfies $\min_{\mathbf{B}\succeq 0} \|\mathbf{A} - \mathbf{B}\|_2 > \epsilon n$. For this related problem, we give a $\tilde{O}(1/\epsilon^2)$ query tester, which we show is optimal up to $\log(1/\epsilon)$ factors. Both our testers randomly sample a collection of principal submatrices and check whether these submatrices are PSD. Consequently, our algorithms achieve \textit{one-sided error}: whenever they output that $\mathbf{A}$ is not PSD, they return a certificate that $\mathbf{A}$ has negative eigenvalues. We complement our upper bound for PSD testing with Euclidean norm distance by giving a $\tilde{\Omega}(1/\epsilon^2)$ lower bound for any non-adaptive algorithm. Our lower bound construction is general, and can be used to derive lower bounds for a number of spectral testing problems. As an example of the applicability of our construction, we obtain a new $\tilde{\Omega}(1/\epsilon^4)$ sampling lower bound for testing the Schatten-$1$ norm with an $\epsilon n^{1.5}$ gap, extending a result of Balcan, Li, Woodruff, and Zhang \cite{BalcanTesting}. In addition, our hard instance yields new sampling lower bounds for estimating the Ky-Fan norm and the cost of rank-$k$ approximations, i.e. $\|\mathbf{A} - \mathbf{A}_k\|_F^2 = \sum_{i > k} \sigma_i^2(\mathbf{A})$. \end{abstract} \thispagestyle{empty} \newpage \tableofcontents \thispagestyle{empty} \newpage \pagenumbering{arabic} \section{Introduction} \input{intro.tex} \section{Preliminaries} We now introduce the notation and definitions that will be used consistently throughout the paper.
Additional, specialized notation will be introduced as needed in the respective sections. In particular, our lower bound construction in Section \ref{sec:lb} utilizes several additional pieces of notation, such as those concerning signed graphs, which are introduced at the beginning of that section. \paragraph{Singular Values and Eigenvalues.}We use boldface $\mathbf{A}$ notation to denote matrices. For an $n \times d$ matrix $\mathbf{A}$, let $\sigma_{\max}(\mathbf{A}) = \sigma_1(\mathbf{A}) \geq \sigma_2(\mathbf{A}) \geq \dots \geq \sigma_{\min\{n,d\}}(\mathbf{A}) = \sigma_{\min}(\mathbf{A})$ denote the singular values of $\mathbf{A}$. If $\mathbf{A}$ has rank $r$, let $\mathbf{A} = \mathbf{U}\mathbf{\Sigma} \mathbf{V}^\top$ be its singular value decomposition, where $\mathbf{U} \in \R^{n \times r},\mathbf{V}\in \R^{d \times r}$ have orthonormal columns, and $\mathbf{\Sigma} \in \R^{r \times r} $ is a diagonal matrix with the (non-zero) singular values $\sigma_i$ on the diagonal. We use $\mathbf{\Sigma}_k$ to denote the matrix $\mathbf{\Sigma}$ with all but the $k$ largest singular values set to zero, and $\mathbf{\Sigma}_{-k}$ to denote $\mathbf{\Sigma}$ with all but the $n-k$ smallest singular values set to zero. Let $\mathbf{A}_k = \mathbf{U} \mathbf{\Sigma}_k \mathbf{V}^\top$ and $\mathbf{A}_{-k} = \mathbf{U} \mathbf{\Sigma}_{-k}\mathbf{V}^\top$. The matrix $\mathbf{A}_k$ is referred to as the \textit{truncated SVD} of $\mathbf{A}$, and is the best rank-$k$ approximation to $\mathbf{A}$: $\| \mathbf{A} - \mathbf{A}_k\|_F^2 = \sum_{i > k}\sigma_i^2(\mathbf{A}) = \min_{\mathbf{B} \text{ rank-}k} \|\mathbf{A} - \mathbf{B}\|_F^2$.
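As a quick numerical sanity check of the Eckart--Young identity above (an illustrative sketch, not part of the paper's development), one can verify in NumPy that the truncated SVD achieves exactly the tail error $\sum_{i>k}\sigma_i^2(\mathbf{A})$, and that an arbitrary rank-$k$ matrix does no better:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 8, 6, 2

A = rng.standard_normal((n, d))
U, s, Vt = np.linalg.svd(A, full_matrices=False)  # s is sorted descending

# Truncated SVD: keep only the k largest singular values.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The residual equals the sum of the squared tail singular values.
residual = np.linalg.norm(A - A_k, "fro") ** 2
tail = float(np.sum(s[k:] ** 2))

# A random rank-k matrix cannot beat the truncated SVD (Eckart-Young).
B = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))
rank_k_error = np.linalg.norm(A - B, "fro") ** 2
```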
For the special case when $\mathbf{A} \in \R^{n \times n}$ is symmetric, we use $\mathbf{U} \mathbf{\Lambda} \mathbf{U}^\top$ to denote the Eigenvalue Decomposition of $\mathbf{A}$, where $\lambda_{\max}(\mathbf{A}) = \lambda_1(\mathbf{A}) \geq \lambda_2(\mathbf{A}) \geq \dots \geq \lambda_n(\mathbf{A}) = \lambda_{\min}(\mathbf{A})$ denote the eigenvalues of $\mathbf{A}$. A real symmetric matrix $\mathbf{A}$ is said to be \textit{Positive Semi-Definite} (PSD) if $\lambda_{\min}(\mathbf{A}) \geq 0$, which is equivalent to having $x^\top \mathbf{A} x \geq 0$ for all $x \in \R^n$. We will utilize the Loewner ordering on symmetric matrices. \begin{definition}[Loewner Ordering] For symmetric matrices $\mathbf{B},\mathbf{D}$, we write $\mathbf{B} \succeq \mathbf{D}$ if $\mathbf{B} - \mathbf{D}$ is PSD. \end{definition} \noindent Notice that if $\mathbf{B} \succeq \mathbf{D}$, then by definition $x^\top \mathbf{B} x \geq x^\top \mathbf{D} x$ for all $x \in \R^n$. Thus, by an application of the Courant-Fischer variational principle for eigenvalues, we have that $\lambda_i(\mathbf{B}) \geq \lambda_i(\mathbf{D})$ for all $i \in [n]$. \paragraph{Matrix Norms and Submatrices.} We use the notation $\|\mathbf{A}\|_2 = \sigma_{\max}(\mathbf{A})$ to denote the spectral norm of $\mathbf{A}$, and $\|\mathbf{A}\|_F = (\sum_{i,j} A_{i,j}^2 )^{1/2}= (\sum_{i=1}^n \sigma_i^2(\mathbf{A}))^{1/2}$ to denote the Frobenius norm of $\mathbf{A}$. For $p \geq 1$, we write $\|\mathbf{A}\|_{\mathcal{S}_p} = (\sum_{i=1}^n \sigma_{i}^p(\mathbf{A}))^{1/p}$ to denote the Schatten $p$-norm of $\mathbf{A}$, and $\|\mathbf{A}\|_{\textsf{KF}(p,k)} = (\sum_{i=1}^k \sigma^p_{i}(\mathbf{A}))^{1/p}$ to denote the $(p,k)$-Ky-Fan norm of $\mathbf{A}$. If $p$ is not specified for a Ky-Fan norm, it is assumed to be $1$; namely, $\|\mathbf{A}\|_{\textsf{KF}(k)} = \|\mathbf{A}\|_{\textsf{KF}(1,k)}$.
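These definitions are straightforward to exercise numerically. The following NumPy sketch (illustrative only; the matrix sizes and the rank-one perturbation are arbitrary choices) checks the PSD characterization, the eigenvalue monotonicity implied by the Loewner ordering, and the Schatten/Ky-Fan norm definitions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, p = 6, 3, 3

M = rng.standard_normal((n, n))
B = M @ M.T                    # PSD by construction: x^T B x = ||M^T x||^2 >= 0
psd_ok = bool(np.linalg.eigvalsh(B).min() >= -1e-9)

# Loewner ordering: D = B - v v^T satisfies B >= D (B - D = v v^T is PSD),
# so lambda_i(B) >= lambda_i(D) for every i, by Courant-Fischer.
v = rng.standard_normal(n)
D = B - np.outer(v, v)
lam_B = np.linalg.eigvalsh(B)  # eigenvalues in ascending order
lam_D = np.linalg.eigvalsh(D)
loewner_ok = bool(np.all(lam_B >= lam_D - 1e-9))

# Schatten-p and (p,k)-Ky-Fan norms, computed from the singular values.
s = np.linalg.svd(M, compute_uv=False)   # singular values, descending
schatten_p = float(np.sum(s ** p)) ** (1.0 / p)
ky_fan_pk = float(np.sum(s[:k] ** p)) ** (1.0 / p)
```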
For subsets $S,T\subseteq [n]$, we denote by $\mathbf{A}_{S \times T} \in \R^{|S| \times |T|}$ the matrix $\mathbf{A}$ restricted to the rows in $S$ and the columns in $T$. If $S = T$, then the square submatrix $\mathbf{A}_{S \times T} = \mathbf{A}_{S \times S}$ is called a \textit{principal submatrix} of $\mathbf{A}$. For a vector $x \in \R^n$ and a subset $S \subset [n]$, we write $x_S \in \R^n$ to denote the vector obtained by setting to zero all coordinates $x_i$ with $i \notin S$. Finally, we use the notation $\mathbf{A}_{i,*}$ to denote the $i$-th row of $\mathbf{A}$, and $\mathbf{A}_{*,i}$ to denote the $i$-th column of $\mathbf{A}$. \input{LInfinityGap} \input{L2Gap} \input{LowerBounds} \section{Conclusion}\label{sec:conclusion} In this work, we gave an optimal (up to $\log(1/\epsilon)$ factors) algorithm for testing whether a matrix is PSD or far from the PSD cone in spectral norm distance. In addition, we gave a query-efficient algorithm for testing whether a matrix is PSD or $\epsilon n^2$-far from the PSD cone in $\ell_2^2$ distance. Furthermore, we established a new technique for proving lower bounds based on designing ``subgraph-equivalent'' matrices. We believe that this technique is quite general, as demonstrated by its immediate application to lower bounds for the Schatten-$1$ norm, Ky-Fan norm, and tail error testing. Our construction could also likely be useful for proving lower bounds against testing of \textit{graph properties}, which is a well-studied area \cite{goldreich2010introduction}. We pose the open problem of designing (or demonstrating the non-existence of) additional subgraph-equivalent matrices beyond the cycle graph construction utilized in this work, which have gaps in their spectral or graph-theoretic properties. Additionally, we pose the open problem of determining the exact non-adaptive query complexity of PSD testing with $\ell_2^2$ gap.
As discussed in Section \ref{sec:contri}, there appear to be several key barriers to improving the complexity beyond $O(1/\epsilon^4)$. Indeed, it seems that perhaps the main tool that is lacking is a concentration inequality for the eigenvalues of random principal submatrices. Since most such decay results apply only to norms \cite{tropp2008norms,rudelson2007sampling}, progress in this direction would likely yield important insights into the eigenvalues of random matrices. Finally, we note that the complexity of the testing problems for several matrix norms, specifically the Schatten-$p$ and Ky-Fan norms, is still open in the bounded entry model. In particular, for the Schatten-$1$ norm, to the best of our knowledge no non-trivial algorithms exist even for estimation with additive error $\Theta(n^{1.5})$; thus any improvement would be quite interesting. \begin{comment} In this work, we gave an optimal (up to $\log(1/\epsilon)$ factors) algorithm for testing whether a matrix is PSD or has a large negative eigenvalue ($\ell_\infty$-gap), which is equivalent to the spectral norm distance to the PSD cone. In addition, we gave a query-efficient algorithm for testing whether a matrix is PSD or $\epsilon n^2$-far from the PSD cone in $\ell_2^2$ distance. In addition to testing, our algorithms yield \textit{certificates} that a matrix is not PSD. Furthermore, we established a new technique for proving lower bounds based on designing ``subgraph-equivalent'' matrices. We believe that this technique is quite general, as demonstrated by its immediate application to lower bounds for the Schatten-$1$ norm, Ky-Fan norm, and tail error testing. Our construction could also likely be useful for proving lower bounds against testing of \textit{graph properties}, which is a well-studied area \cite{goldreich2010introduction}.
We pose the open problem of designing (or demonstrating the non-existence of) additional subgraph-equivalent matrices beyond the cycle graph construction in Section \ref{sec:lb}, which have gaps in their spectral or graph-theoretic properties. Additionally, we pose the open problem of determining the exact non-adaptive query complexity of PSD testing with $\ell_2^2$ gap. As discussed in Section \ref{sec:contri}, there appear to be several key barriers to improving the complexity beyond $O(1/\epsilon^4)$. Indeed, it seems that perhaps the main tool that is lacking is a \textit{decay} bound for the eigenvalues of random principal submatrices. Since most such decay results apply only to norms \cite{tropp2008norms,rudelson2007sampling}, progress in this direction would likely result in important insights into the eigenvalues of random matrices. Finally, we note that the complexity of the testing problems for several matrix norms, specifically the Schatten-$p$ and Ky-Fan norms, is still open in the bounded entry model. In particular, for the Schatten-$1$ norm, to the best of our knowledge no non-trivial algorithms exist even for estimation with additive error $\Theta(n^{1.5})$. Improved testing algorithms for these problems in the sampling model could lead to significant speed-ups for statistical learning algorithms, such as those run on recommendation systems and covariance matrices. \end{comment} \section*{Acknowledgements} We thank Erik Waingarten for many useful suggestions, for his close involvement in the early stages of this project, and for feedback on early drafts of this manuscript. We also thank Roie Levin, Ryan O'Donnell, Pedro Paredes, Nicolas Resch, and Goran Zuzic for illuminating discussions related to this project.
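Both of our testers instantiate the same submatrix-sampling template: sample random principal submatrices and report any with a negative eigenvalue as a certificate. The following NumPy sketch illustrates the one-sided structure of such a tester; the submatrix size and repetition count below are illustrative placeholders, not the tuned parameters of our algorithms:

```python
import numpy as np

def psd_test_sketch(A, k, reps, seed=0):
    """Template of a one-sided PSD tester: sample random k x k principal
    submatrices of A and return a certificate (an index set) whenever a
    sampled submatrix has a negative eigenvalue."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    for _ in range(reps):
        S = rng.choice(n, size=min(k, n), replace=False)
        sub = A[np.ix_(S, S)]
        if np.linalg.eigvalsh(sub).min() < -1e-9:
            return False, np.sort(S)   # certificate of non-PSD-ness
    return True, None                  # all sampled submatrices were PSD

# One-sided error: every principal submatrix of a PSD matrix is PSD,
# so the tester never rejects a PSD input.
rng = np.random.default_rng(3)
M = rng.standard_normal((20, 20))
psd_verdict, _ = psd_test_sketch(M @ M.T, k=5, reps=50)

# A matrix far from PSD (here -I) is rejected with a certificate.
far_verdict, certificate = psd_test_sketch(-np.eye(20), k=5, reps=50)
```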
\bibliographystyle{alpha} \section{Multiple Negative Eigenvalues in the Sensing model} Let $A \in \R^{n \times n}$ be a symmetric matrix with eigenvectors $v_1,v_2,\dots,v_m$, for $m \leq n$, and eigenvalues $\lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_k < 0 \leq \lambda_{k+1} \leq \dots \leq \lambda_m$. Note that $\|A\|_F^2 = \sum_{i=1}^m \lambda_i^2$. Now let $\epsilon = \frac{ \sum_{i=1}^k \lambda_i^2}{\|A\|_F^2} \leq 1$, which is a measure of the fraction of $A$ that is negative. We consider the sensing model, where the algorithm fixes sensing matrices $X_1,\dots,X_\ell \in \R^{n \times n}$ and observes $\rho_i = \langle \text{vec}(X_i), \text{vec}(A) \rangle = \text{Tr}(X_i^T A)$. Thus the observation $\rho_i$ is a linear function of the entries of $A$. \begin{theorem} There is an algorithm in the sensing model which uses $\ell = O(1/\epsilon^2)$ observations and tests whether $A$ is PSD or whether $\frac{ \sum_{i=1}^k \lambda_i^2}{\|A\|_F^2} \geq \epsilon$. \end{theorem} \begin{proof} We first describe our sketching matrices $X_1,\dots,X_\ell$. We generate Gaussian vectors $g_1,g_2,\dots,g_r \sim \mathcal{N}(0,\mathbb{I}_n)$, for $r = \Theta(1/\epsilon^2)$. First note that the projection of $g_i$ onto $v_j$ has length $\langle v_j, g_i \rangle = h_{i,j} \sim \mathcal{N}(0,1)$, since $v_j$ is a unit vector. Thus if $h_i \in \R^m$ is the vector such that $(h_{i})_j = h_{i,j}$, we can write $V^T g_i = h_i$, where $V \in \R^{n \times m}$ is the matrix with $j$-th column equal to $v_j$. Since $V^T$ has orthonormal rows, it follows that $h_{i,1},h_{i,2},\dots,h_{i,m}$ are i.i.d. standard Gaussian variables. By the prior paragraph, we can decompose $g_i$ into its projections onto the eigenvectors of $A$ plus an orthogonal part: $g_i = \sum_{j=1}^m h_{i,j} v_j + \beta_i$, where $\beta_i \in \R^n$ is in the null space of $A$.
Thus \[ g_i^T A g_i = \sum_{j=1}^m \lambda_j h_{i,j}^2 \] Moreover, $\bvar{ g_i^T A g_i } = 2\sum_{j} \lambda_j^2 $, which holds since the $h_{i,j}^2$'s are independent, each with variance $2$. We create a sketching matrix $X_i$ for each quadratic form, so that $\langle X_i, A\rangle = g_i^T A g_i$. \end{proof} \section{Spread Eigenvalue PSD testing} \section{Cut Norm} Using Vershynin's expectation upper bound. The cut norm is $\max_{x,y \in \{0,1\}^n} |y^\top A x|$. \begin{theorem} There is an algorithm that gives an $O(1)$ relative error approximation of the cut norm by only looking at $O(n^4/\|A\|_C^2)$ entries of $A$. \end{theorem} \section{Schatten $p$-norm} \section{Operator norm} The operator norm of $A \in \R^{n \times m}$ is given by $\|A\|_2 = \max_{x \in \R^m: \|x\|_2 = 1} \|Ax\|_2$. Note that $\|A\|_2$ can also be written as \[ \|A\|_2 = \max_{y \in \R^n, x \in \R^m, \|x\|_2 = \|y\|_2 = 1} |y^\top A x| \] \begin{proof} We first prove that $\|A\|_2 \leq$ the RHS. To see this, let $x$ be such that $\|Ax\|_2 = \|A\|_2$. Then if we set $y = Ax/\|Ax\|_2$, we have $y^\top Ax = \|Ax\|_2$, which completes the proof of this direction. To see that $\|A\|_2 \geq$ the RHS, fix $x,y$ that maximize $y^\top A x$. For any $x$, the $y$ that maximizes $y^\top A x$ is $y=Ax/\|Ax\|_2$, thus $y^\top A x = \|Ax\|_2 \leq \|A\|_2$ as needed. \end{proof} Fix $x,y$ such that $y^\top Ax = \|A\|_2 $. We now compute the expectation of $y_T^\top A x_S$, where $|T| = |S| = k$ are independent, uniform samples of $[n]$. Let $\delta_i$ indicate that we sample the $i$-th column, and let $\theta_i$ indicate that we sample the $i$-th row, with $\ex{\delta_i} =p, \ex{\theta_i} = q$, such that $pq = \frac{k^2}{n^2}$, where $k = \Theta(1/\epsilon)$. The following is clear.
\begin{proposition} We have \[ \bex{\sum_{i ,j} y_i A_{i,j} x_j\theta_i \delta_j } = pq\left(y^\top Ax\right) = \frac{k^2}{n^2} \|A\|_2 = \frac{\epsilon k^2}{n}\] \end{proposition} Let $c_{i,j} = y_i A_{i,j} x_j$, and define the row and column contributions as $\mathbf{R}_i = \sum_{j \in [n]} c_{i,j}$ and $\mathbf{C}_i = \sum_{j \in [n]} c_{j,i}$. \begin{lemma}\label{lem:spectralvar} Assuming $k = \Omega(1/\epsilon)$, we have \[\mathbf{Var}\left[\sum_{i,j} y_i A_{i,j} x_j \theta_i \delta_j \right] \leq pq + 2\sum_i \left( p q^2 \mathbf{R}_i^2 + p^2 q \mathbf{C}_i^2 \right) \] \end{lemma} \begin{proof} We have \begin{equation} \begin{split} &\mathbf{Var}\left[\sum_{i,j} y_i A_{i,j} x_j \theta_i \delta_j \right] \leq pq\sum_{i , j} c_{i,j}^2 + p q^2\sum_{i } \sum_{j \neq v} c_{i,j} c_{i,v} + p^2 q \sum_j \sum_{i \neq u} c_{i,j} c_{u,j} \\ & + p^2 q^2\sum_{ i \neq u, j \neq v} c_{i,j} c_{u,v} - \left( pq\sum_{i, j} y_i A_{i,j} x_j \right)^2 \\ \end{split} \end{equation} Similar to Proposition \ref{prop:var}, we can cancel the $p^2 q^2\sum_{ i \neq u, j \neq v} c_{i,j} c_{u,v}$ term with the squared expectation, and the remaining terms can be handled by bounding the first three terms. First, note that $\sum_{i , j} c_{i,j}^2 \leq \sum_{i , j} y_i^2 x_j^2 \leq 1$, so $ pq\sum_{i , j} c_{i,j}^2 \leq pq$. Next, $\sum_{i } \sum_{j \neq v} c_{i,j} c_{i,v} \leq \sum_i \mathbf{R}_i^2$, where $\mathbf{R}_i = \sum_{j \in [n]} c_{i,j}$. Similarly, $\sum_j \sum_{i \neq u} c_{i,j} c_{u,j} \leq \sum_i \mathbf{C}_i^2$, thus \[\mathbf{Var}\left[\sum_{i,j} y_i A_{i,j} x_j \theta_i \delta_j \right] \leq pq + 2\sum_i \left( p q^2 \mathbf{R}_i^2 + p^2 q \mathbf{C}_i^2 \right) \] where the extra factor of $2$ comes from absorbing the possibly positive leftover terms from subtracting the squared expectation. \end{proof} We now consider two cases.
First suppose that for every $S,T \subseteq [n]$ with $|S|,|T| \leq \epsilon n/10$, we have $y_{-S} A x_{-T} \geq \epsilon n/ 2$. Call this case $1$. \begin{lemma} Suppose we are in case $1$. Then there is an algorithm that samples $\ell$ $k \times k$ submatrices $Z_1,\dots,Z_{\ell}$ of $A$, where $\ell = O(1)$ and $k = \Theta(1/\epsilon)$, such that $\frac{1}{\ell}\sum_{i=1}^\ell\|Z_i\|_2 \geq \frac{k}{200n} \|A\|_2$, or equivalently $$\frac{200n}{k\ell}\sum_{i=1}^\ell\|Z_i\|_2 \geq \|A\|_2$$ \end{lemma} \begin{proof} We first claim that we can modify $x,y$ so that $y A x \geq \epsilon n/2$ and such that $\sum_i \mathbf{R}_i^2 + \mathbf{C}_i^2 \leq O(\epsilon n)$. By the assumption of the case, we can first remove the sets $S_0 = \{i \in [n] \; | \; |y_i| \geq \frac{10}{\sqrt{ \epsilon n}} \}$ and $T_0 = \{i \in [n] \; | \; |x_i| \geq \frac{10}{\sqrt{ \epsilon n}}\}$, which each have size at most $\epsilon n / 100$. Moreover, we will still have $y_{-S_0} A x_{-T_0} \geq \epsilon n /2$, and $\|x\|_\infty, \|y\|_\infty \leq \frac{10}{\sqrt{ \epsilon n}}$. Afterwards, note that we can assume $\mathbf{R}_i > 0$ (and $\mathbf{C}_i > 0$), since otherwise we could set $y_i$ (or $x_i$) equal to $0$, and only increase $yAx$. Moreover, note that $\sum_i| \mathbf{R}_i| =\sum_i \mathbf{R}_i\leq \epsilon n$ and $\sum_i \mathbf{C}_i \leq \epsilon n$. Now define $S = \{i \in [n] \; | \; |\mathbf{R}_i| \geq 10 \} \cup S_0$, which has size at most $|S| \leq \epsilon n / 10$, and remove the coordinates in $S$ from $y$ (set them equal to zero). Afterwards, we have $\max_i |\mathbf{R}_i| \leq 10$. Next, we can remove $T = \{i \in [n] \; | \; |\mathbf{C}_i| \geq 10 \} \cup T_0$ from $x$, so that $\max_i |\mathbf{C}_i| \leq 10$. Now the second step can increase the value of a given $|\mathbf{R}_i|$, but only by at most $(\epsilon n/10)\|x\|_\infty \|y\|_\infty \leq 10$.
It follows that after these modifications, we have $\max_i |\mathbf{R}_i| \leq 20$ and $\max_i |\mathbf{C}_i| \leq 10$, and moreover $y_{-S} A x_{-T} \geq \epsilon n/2$. Define these to be our new vectors. Putting together the $\ell_\infty$ and $\ell_1$ bounds on the $\mathbf{R}_i$ and $\mathbf{C}_i$, we have $\sum_i \mathbf{R}_i^2 + \mathbf{C}_i^2 \leq 30 \epsilon n$. Now by Lemma \ref{lem:spectralvar}, if we set $p=q=\frac{k}{n}$, it follows that $\bvar{ \sum_{i,j} (y_{-S})_i A (x_{-T})_j \theta_i \delta_j } \leq \frac{k^2}{n^2} + \frac{k^3}{n^3} (30 \epsilon n)$. Since $k=\Theta(1/\epsilon)$, the variance is $O(k^2/n^2)$ while the expectation is $\frac{k^2}{n^2} \|A\|_2 =O(k/n)$, so the average is at least $\frac{k^2}{n^2} \|A\|_2 /2$ with probability $9/10$. By Markov's inequality, the $\ell_2^2$ mass of the sampled coordinates of $x,y$ in each repetition is at most $100\frac{k}{n}$, thus we have that $\frac{1}{\ell}\sum_{i=1}^\ell\|Z_i\|_2 \geq \frac{k}{200n}\|A\|_2$, or $$\frac{200n}{k\ell}\sum_{i=1}^\ell\|Z_i\|_2 \geq \|A\|_2$$ \end{proof} \begin{lemma} Suppose we are not in case $1$. Then there is an algorithm that samples $\ell$ submatrices $Z_1,\dots,Z_{\ell}$ of $A$, each of which has total size $O(\log^2(1/\epsilon)/\epsilon^2)$, with $\ell = O(\log(1/\epsilon))$, such that \[\max_i \|Z_i\|_2 \geq \frac{k}{600 \log(1/\epsilon) n} \|A\|_2\] \end{lemma} \begin{proof} Suppose we are not in case $1$. First note that for any sets $S,T$ of size at most $\epsilon n/10$, the contribution of $y_S A x_T$ is at most $\epsilon n/10$; thus if we are not in case $1$, then wlog there is a set $S$ with $|S| \leq \epsilon n/10$ such that $y_S A x \geq \epsilon n /6$. Next, note that we can assume that $S \subseteq T_a = \{ i \in [n] \; : \; \frac{2^a}{n} \leq |y_i|^2 \leq \frac{2^{a+1}}{n} \}$ for some $1 \leq a \leq \log(1/\epsilon)$, at the cost of weakening the bound to $y_S A x \geq \epsilon n /(24 \log(1/\epsilon))$.
We can now sample a submatrix of $y_S A x$ of size $k^2$, which has expected value at least $\frac{k^2}{n^2} \|A\|_2/(24 \log(1/\epsilon))$. We now consider the variance of this sampling. As before, we can assume $\mathbf{R}_i > 0$ for all $i$. First observe that $\sum_i \mathbf{C}_i^2 \leq \sum_{i } x_i^2 \left(\sum_{j \in S}A_{j,i}y_j \right)^2 \leq\sum_{i } x_i^2 \|y_{S}\|_1^2 \leq |S|$. Moreover, we have $\mathbf{R}_i \leq y_i \|x\|_1 \leq \|y\|_\infty \sqrt{n}$, and $\sum_i |\mathbf{R}_i| \leq \epsilon n$, as well as $\|y\|_\infty^2 \leq \frac{2^{a+1}}{n}$. So we have $\max_i |\mathbf{R}_i|^2 \leq 2^{a+1}$, thus $\sum_i \mathbf{R}_i^2 \leq (\epsilon n)\sqrt{2^{a+1}} \leq 2\epsilon n \sqrt{2^a}$. We can now sample with $p = \frac{1}{|S|}$ and $q = \frac{|S| k^2}{n^2}$, so that $pq = \frac{k^2}{n^2}$. Firstly, note that the $2\sum_i p^2 q\mathbf{C}_i^2$ term in the variance is at most $\frac{k^2}{|S| n^2} |S| = \frac{k^2}{n^2}$, and the term $2 \sum_i pq^2\mathbf{R}_i^2 \leq2 \frac{k^4 |S|}{n^4} ( \epsilon n 2^{a/2}) \leq 2\frac{ \epsilon k^4 |S| 2^{a/2}}{n^3} \leq 2\frac{\epsilon k^4 }{2^{a/2}n^2}$, where the last inequality holds because $|S| \leq |T_a| \leq n/2^a$. Now note that we can assume that $2^{a/2} \geq 1/(100\epsilon)$. If we could not choose such an $a$, then we would have $y_SAx \geq \epsilon n / 12$; but since $S$ has size at most $\epsilon n / 100$, we have $\|y_S\|_2 \leq 1/100$, and we could scale $y$ by a factor of $100$ entrywise and obtain $yAx \geq 10\epsilon n = 10 \|A\|_2$ where $x,y$ are still at most unit vectors, which is impossible. Thus $2 \sum_i pq^2\mathbf{R}_i^2 \leq \frac{2 \epsilon^2 k^4}{n^2 } = \frac{2 \|A\|_2^2 k^2}{n^4}$. Altogether, the variance is at most $2 \frac{k^2}{n^2} + \frac{2 \|A\|_2^2 k^2}{n^4} \leq \frac{4 \|A\|_2^2 k^2}{n^4}$.
Now note that the expectation of the sampling is at least $\frac{k^2}{n^2} \|A\|_2 /(24 \log(1/\epsilon))$, and since $k \geq \log(1/\epsilon)/\epsilon$, it follows that, with probability at least $9/10$, a single sample satisfies $\sum_{i,j}c_{i,j} \theta_i \delta_j \geq \frac{k^2}{n^2} \|A\|_2 /(30 \log(1/\epsilon))$. Moreover, note that the expected $\ell_2^2$ mass of the sampled coordinates of $y$ is $p$, and for $x$ it is $q$. With probability $9/10$, we have both that $\sum_i y_i^2 \theta_i \leq 20p$ and $\sum_i x_i^2 \delta_i \leq 20q$. Thus we can scale each coordinate $y_i$ by $1/\sqrt{20 p}$ and each $x_i$ by $1/\sqrt{20 q}$, which scales the whole quadratic form by $\frac{1}{20 \sqrt{pq}} = \frac{n}{20k}$. Thus we have $\sum_{i,j}c_{i,j} \theta_i \delta_j \geq \frac{k}{n} \|A\|_2 /(600 \log(1/\epsilon))$ after this scaling, with probability $4/5$. It follows that, with constant probability, the spectral norm of the sampled submatrix is at least $\frac{k}{n} \|A\|_2 /(600 \log(1/\epsilon))$, and we must repeat once for each of the $\log(1/\epsilon)$ guesses of the sampling probability $p$.
\end{proof} Putting the two results together, we obtain: \begin{theorem} There is an algorithm that samples $\ell = O(\log(1/\epsilon))$ submatrices $Z_1,\dots,Z_{\ell}$ of $A$, each of which has at most $O(\log^2(1/\epsilon)/\epsilon^2)$ entries, such that \[\max_i \|Z_i\|_2 \geq \frac{k}{600 \log(1/\epsilon) n} \|A\|_2\] \end{theorem} \begin{lemma} We have \[ \bex{\|A_{S \times T}\|_2} \leq O\left(\frac{k}{n} \|A\|_2 + \sqrt{k} \right) \] \end{lemma} It follows that $$\frac{200n}{k\ell}\sum_{i=1}^\ell\|Z_i\|_2 \leq O\left(\|A\|_2\left(1 + \frac{1}{\epsilon\sqrt{k} }\right)\right)$$ So we obtain an estimate $\Psi$ with \[\|A\|_2 \leq \Psi \leq C_1 \|A\|_2\left(1 + \frac{1}{\epsilon\sqrt{k} }\right)\] \end{comment} \section{Sketching $L_2$ Gap} \TODO{Lower bounds for spectral sketching imply lower bounds for sketching the PSD gap -- just test if $\lambda I - A$ is PSD for different $\lambda$. Use a $c\epsilon n $-approximation of the spectral norm for large $c$ to reduce to $O(1)$ choices of $\lambda$, giving a $1/\epsilon^2$ lower bound} \TODO{Can try subtracting off the diagonal. This makes the PSD case not PSD, but still keeps a gap between the cases. Then show that this gap can be estimated.} Let $A \in \R^{n \times n}$ be a symmetric matrix with eigenvectors $v_1,v_2,\dots,v_m$, for $m \leq n$, and eigenvalues $\lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_k < 0 \leq \lambda_{k+1} \leq \dots \leq \lambda_m$. Note that $\|A\|_F^2 = \sum_{i=1}^m \lambda_i^2$. Now let $\epsilon = \frac{ \sum_{i=1}^k \lambda_i^2}{\|A\|_F^2} \leq 1$, which is a measure of the fraction of $A$ that is negative. We consider the sensing model, where the algorithm fixes sensing matrices $X_1,\dots,X_\ell \in \R^{n \times n}$ and observes $\rho_i = \langle \text{vec}(X_i), \text{vec}(A) \rangle = \text{Tr}(X_i^T A)$. Thus the observation $\rho_i$ is a linear function of the entries of $A$. \begin{proposition} Let $A \in \R^{n \times n}$ be any fixed matrix. Then if $G \in \R^{t \times n}$ is an i.i.d.
matrix of Gaussians scaled by $1/\sqrt{t}$, where $t = \Omega(1/\epsilon)$, then \[ \|G\left(A \right)G^T - \frac{\text{Tr}(A) + 10 \alpha \|A\|_F}{t} \mathbb{I}_t\|_F^2 = (1 \pm \epsilon) \|A\|_F^2 \] with probability... \end{proposition} \begin{proof} Write $A = Q \Lambda Q^T$ as the eigendecomposition. Note that $G Q \in \R^{t \times n}$ is again distributed as an i.i.d. matrix of Gaussians scaled by $1/\sqrt{t}$. Thus we can consider the matrix $G\Lambda G^T$. First, if $i \neq j$, we have $\ex{ (G\Lambda G^T)_{i,j}^2}= \ex{ \left( \sum_k G_{i,k} G_{j,k} \lambda_k(A) \right)^2 } = t^{-2}\sum_k \lambda_k(A)^2 = \|A\|_F^2/t^2$. \textbf{Compute Variance of off-diagonal and show concentration}. Next, if $i=j$, $(G \Lambda G^T)_{i,i} = \sum_k G_{i,k}^2 \lambda_k$. Now let $X_k =h_k^2 \lambda_k - \lambda_k$, where $h_k \sim \mathcal{N}(0,1)$, and note $\ex{X_k} = 0$. Note moreover that $|X_k| \leq \|A\|_2 \leq \|A\|_F$ deterministically. Thus by Bernstein's inequality: \begin{equation} \begin{split} \bpr{ |\sum_k h_k^2 \lambda_k- \text{Tr}(A)| > \alpha \|A\|_F } &\leq \exp\left( -\frac{\frac{1}{2}\alpha^2 \|A\|_F^2 }{\sum_{k=1}^n \ex{X_k^2} + \frac{\alpha}{3}\|A\|_F^2 }\right) \\ &\leq \exp\left( -\frac{\frac{1}{2}\alpha^2 \|A\|_F^2 }{2\|A\|_F^2 + \frac{\alpha}{3}\|A\|_F^2 }\right) \\ &\leq \exp\left( -\alpha / 3\right) \\ \end{split} \end{equation} Setting $\alpha = \Theta(\log(t))$, we have that $(GAG^T)_{i,i} =\left( \text{Tr}(A) \pm O(\log(t)) \|A\|_F\right)t^{-1}$ for all $i \in [t]$ with probability $1-1/\text{poly}(t)$.
Since the diagonal is a $1/t$ fraction of the whole matrix, it follows that \[ \|G\left(A \right)G^T - \frac{\text{Tr}(A) + 10 \alpha \|A\|_F}{t} \mathbb{I}_t\|_F^2 = (1 \pm \epsilon) \|A\|_F^2 \] \end{proof} \textbf{Matrix Chernoff for completeness, Bernsteins and losing $L_2$ mass for soundness}\\ \begin{lemma} If $A$ is PSD, we have \[\lambda_{\min}(GAG^T) \geq \sum_i \lambda_i(A) - 10\|A\|_F = \text{Tr}(A) - 10 \|A\|_F \] with probability $1-\exp(-\Omega(\sum_i \lambda_i(A) ))$, where $G \in \R^{t \times n}$ is i.i.d. Gaussian. \end{lemma} \begin{proof} We can write $GAG^T = G\Lambda G^T = \sum_{i=1}^n G_i \lambda_i G_i^T$, where $G_i \in \R^t$ is the $i$-th column of $G$. Let $X_i = G_i \lambda_i G_i^T$. Note that $\ex{\sum_i X_i} = \text{Tr}(A) \mathbb{I}_t$, so $\mu_{\min} = \lambda_{\min} \left(\ex{\sum_i X_i }\right) =\text{Tr}(A)$. Moreover, we can condition on the fact that $\|G_i\|_2 \leq \sqrt{\log(n)t}$ w.h.p.\ for all $i$ (make this formal to use in Chernoff bound). Then by matrix Chernoff, we have: \end{proof} \textbf{Rest of proof:} Compute $GAG^T$; if $A$ is not PSD then $A = A^+ + A^-$ and $\|GAG^T\|_F^2 >\|GA^+G^T\|_F^2$ since both are preserved, so $GAG^T$ is not PSD. \newpage First, we restate a result of \cite{andoni2013eigenvalues}. \begin{theorem}[Theorem 1.3 \cite{andoni2013eigenvalues}] Fix $\epsilon>0$ and let $A \in \R^{n \times m}$ be any matrix with singular values $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_{\min\{m,n\}}$. Then there is a linear sketch of the matrix $A$ using space $O(k\epsilon^{-6}\log(1/\delta))$ from which one can compute a value $R$ such that with probability $1-\delta$ \[ R = (1 \pm \epsilon) \sum_{i > k} \sigma_i^2 \] \end{theorem} \begin{lemma}[Theorem 3.2 \cite{andoni2013eigenvalues}] Fix any symmetric matrix $A \in \R^{n \times n}$, and let $G \in \R^{t \times n}$ be an i.i.d. Gaussian matrix with $t = \Theta(k/\epsilon^2)$.
Then for $i=1,2,\dots,k$, we have \[ \lambda_i(GAG^T) = \lambda_i(A)(1 \pm \epsilon) \pm \lambda_{i+1}(A) \] \end{lemma} \section{Multiple Negative Eigenvalues in the Sensing/Sketching model} Let $A \in \R^{n \times n}$ be a symmetric matrix with eigenvectors $v_1,v_2,\dots,v_m$, for $m \leq n$, and eigenvalues $\lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_k < 0 \leq \lambda_{k+1} \leq \dots \leq \lambda_m$. Note that $\|A\|_F^2 = \sum_{i=1}^m \lambda_i^2$. Now let $\epsilon = \frac{ \sum_{i=1}^k \lambda_i^2}{\|A\|_F^2} \leq 1$, which is a measure of the fraction of $A$ that is negative. We consider the sensing model, where the algorithm fixes sensing matrices $X_1,\dots,X_\ell \in \R^{n \times n}$ and observes $\rho_i = \langle \text{vec}(X_i), \text{vec}(A) \rangle = \text{Tr}(X_i^T A)$. Thus the observation $\rho_i$ is a linear function of the entries of $A$. \begin{lemma} Let $g_1,g_2,\dots,g_k \sim \mathcal{N}(0,1)$ and fix $a \in \R^k$ with $\sum_i |a_i| = k$, and let $Z = \sum_{i=1}^k |a_i| g_i^2$. Then for any constant $C > 0$, there exists a constant $c > 0$ such that \[ \bpr{Z < k(1 - \frac{C}{\sqrt{k}})} \geq c \] \end{lemma} \begin{proof} Note that in the special case $a_i = 1$ for all $i$, the pdf of the variable $Z = \sum_i g_i^2$ (a $\chi^2_k$ random variable) is given by $p(x) = \frac{x^{k/2 - 1}e^{-x/2} } { 2^{k/2} \Gamma(k/2)}$.
Setting $x = k(1-C/\sqrt{k})$, we obtain \TODO{Fix: finish analysis to bound by $\Omega(k^{-1/2})$} \begin{equation} \begin{split} p( k(1-C/\sqrt{k})) &= \frac{(k(1-\frac{C}{\sqrt{k}}))^{k/2 - 1}e^{-k(1-C/\sqrt{k})/2} } { 2^{k/2} \Gamma(k/2)} \\ & = \left(\frac{(k(1-\frac{C}{\sqrt{k}}))}{2 e^{1 - C/\sqrt{k} }}\right)^{k/2} (k(1-\frac{C}{\sqrt{k}}))^{-1} \frac{1}{(k/2-1)!}\\ & \geq \left(\frac{k(1-\frac{C}{\sqrt{k}})}{2 e^{1 - C/\sqrt{k} }}\right)^{k/2} \frac{(\pi k)^{-1/2}}{k(1-\frac{C}{\sqrt{k}})}\left(\frac{e}{k}\right)^{k/2-1} (1 \pm o(1))\\ & = \left(\frac{(1-\frac{C}{\sqrt{k}})}{2e^{ - C/\sqrt{k} }}\right)^{k/2} \frac{(\pi k)^{-1/2}}{k(1-\frac{C}{\sqrt{k}})} \left(\frac{k}{2e}\right) (1 \pm o(1))\\ & = \exp \left(\frac{k}{2}(Ck^{-1/2} - Ck^{-1/2} - C^2k^{-1}/2 + O(k^{-3/2})) \right) \frac{(\pi k)^{-1/2}}{k(1-\frac{C}{\sqrt{k}})} (1 \pm o(1))\\ & = \exp \left(-C^2/4 + O(k^{-1/2}) \right) \frac{(\pi k)^{-1/2}}{k(1-\frac{C}{\sqrt{k}})} (1 \pm o(1))\\ \end{split} \end{equation} \end{proof} \subsubsection{PSD Testing with \texorpdfstring{$\ell_\infty$}{L-inf} Gap}\label{sec:techlinf} Recall in the general statement of the $\ell_\infty$-gap problem, our task is to distinguish between $\mathbf{A} \in \R^{n \times n}$ satisfying $x^\top \mathbf{A} x \geq 0$ for all $ x \in \R^n$, or $x^\top \mathbf{A} x \leq - \epsilon n$ for some unit vector $x \in \R^n$.
Since if $x^\top \mathbf{A} x \geq 0$ for all $ x \in \R^n$ the same holds true for all principal submatrices of $\mathbf{A}$, it suffices to show that in the $\epsilon$-far case we can find a $k \times k$ principal submatrix $\mathbf{A}_{T \times T}$ such that $y^\top \mathbf{A}_{T \times T} y < 0$ for some $y \in \R^{k}$.\footnote{This can be efficiently checked by computing the eigenvalues of $\mathbf{A}_{T \times T} + \mathbf{A}_{T \times T}^\top$.} \paragraph{Warmup: An $O(1/\epsilon^3)$ query algorithm.} Since we know $x^\top \mathbf{A} x \leq - \epsilon n$ for some fixed $x$, one natural approach would be to show that the quadratic form with the \textit{same} vector $x$, projected onto a random subset $T \subset [n]$ of its coordinates, is still negative. Specifically, we would like to show that the quadratic form $\mathcal{Q}_T(x) = x^\top_T\mathbf{A}_{T \times T} x_T$ of $x$ with a random principal submatrix $\mathbf{A}_{T \times T}$, for $T \subset [n]$, will continue to be negative. If $\mathcal{Q}_T(x) < 0$, then clearly $\mathbf{A}_{T \times T}$ is not PSD. Now while our algorithm does not know the target vector $x$, we can still analyze the concentration of the scalar random variable $\mathcal{Q}_T(x)$ over the choice of $T$, and show that it is negative with good probability. \smallskip \smallskip \smallskip \noindent \textbf{Proposition \ref{prop:exp} and Lemma \ref{lem:var} (informal)} \textit{ Suppose $\mathbf{A} \in \R^{n \times n}$ satisfies $\|\mathbf{A}\|_\infty \leq 1$ and $x^\top \mathbf{A} x \leq - \epsilon n$ where $\|x\|_2 \leq 1$. Then if $k \geq 6/\epsilon$, and if $T \subset [n]$ is a random sample of expected size $k$, we have $\ex{\mathcal{Q}_T(x)} \leq - \frac{\epsilon k^2}{4n}$ and $\text{Var}(\mathcal{Q}_T(x)) \leq O(\frac{k^3}{n^2})$.
} \smallskip \smallskip \smallskip By the above Proposition, after setting $k = \Theta(1/\epsilon^2)$, we have that $|\ex{\mathcal{Q}_T(x)}|^2 = \Omega( \text{Var}(\mathcal{Q}_T(x)))$, and so by Chebyshev's inequality, with constant probability we will have $\mathcal{Q}_T(x)< 0$. This results in a $k^2 = O(1/\epsilon^4)$ query tester. To improve the complexity, we could instead set $k=\Theta(1/\epsilon)$ and re-sample $T$ independently $k$ times to reduce the variance. Namely, one can sample sets $T_1,T_2,\dots,T_k$, and analyze $\frac{1}{k}\sum_{i=1}^k\mathcal{Q}_{T_i}(x)$. The variance of this sum goes down to $O(\frac{k^2}{n^2})$, so, again by Chebyshev's inequality, the average of these quadratic forms will be negative with constant probability. If this occurs, then at least one of the quadratic forms must be negative, from which we can conclude that at least one of the $\mathbf{A}_{T_i \times T_i}$ will fail to be PSD, now using only $O(1/\epsilon^3)$ queries. \paragraph{A Family of Hard Instances} One could now hope for an even tighter analysis of the concentration of $\mathcal{Q}_T(x)$, so that $O(1/\epsilon^2)$ total queries would be sufficient. Unfortunately, the situation is not so simple, and in fact the two aforementioned testers are tight in query complexity for the submatrix dimensions they sample. Consider the hard instance $\mathbf{A}$ in the left of Figure \ref{fig:Matrix1}, which is equal to the identity on the diagonal, and is zero elsewhere except for a small subset $S \subset [n]$ of $|S| = \epsilon^2 n$ rows and columns, where we have $\mathbf{A}_{S \times \overline{S}}=\mathbf{A}_{\overline{S}\times S} = - \mathbf{1}$, where $\overline{S}$ is the complement of $S$. Notice that if we set $x_i^2 = 1/(2n)$ for $i \notin S$ and $x_i^2 = 1/(2\epsilon^2 n)$ for $i \in S$, then $x^\top \mathbf{A} x \leq - \epsilon n/4$. However, in order to even see a single entry from $S$, one must sample from at least $\Omega(1/\epsilon^2)$ rows or columns.
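As a quick numerical sanity check, the following sketch constructs the concentrated hard instance just described and verifies the claimed quadratic form bound; the values of $n$ and $\epsilon$ are illustrative, and the snippet is not part of the formal argument.

```python
import numpy as np

# Hard instance (left of Figure 1), with illustrative parameters:
# identity on the diagonal, -1 on S x S-bar and S-bar x S, |S| = eps^2 * n.
n, eps = 400, 0.25
s = int(eps**2 * n)                 # |S| = 25
A = np.eye(n)
A[:s, s:] = -1.0                    # A_{S x S-bar} = -1
A[s:, :s] = -1.0                    # A_{S-bar x S} = -1

# Witness vector: x_i^2 = 1/(2 eps^2 n) on S and 1/(2n) off S.
x = np.empty(n)
x[:s] = 1.0 / np.sqrt(2 * eps**2 * n)
x[s:] = 1.0 / np.sqrt(2 * n)
assert np.linalg.norm(x) <= 1 + 1e-9

quad = x @ A @ x
assert quad <= -eps * n / 4         # x^T A x <= -eps*n/4, as claimed

# Any principal submatrix avoiding S is the identity, hence PSD.
sub = A[np.ix_(range(s, s + 20), range(s, s + 20))]
assert np.all(np.linalg.eigvalsh(sub) >= -1e-12)
```
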
In fact, this instance itself gives rise to an $\Omega(1/\epsilon^2)$ lower bound for any testing algorithm, even for adaptive algorithms (Theorem \ref{thm:linftyLB}). \begin{figure} \centering \includegraphics[scale = .17 ]{Mat1.png} \caption{Hard instances for $\ell_\infty$ testing. On the left, the negative mass is highly concentrated in $|S| = \epsilon^2 n$ rows and columns, and on the right it is more spread out over $|S| = \alpha n$, where $\epsilon^2 \leq \alpha \leq \epsilon$. } \label{fig:Matrix1} \end{figure} The difficulty of the above instance is that the negative mass of $x^\top \mathbf{A} x$ is hidden in only an $\epsilon^2$-fraction of $\mathbf{A}$. On the other hand, since the negative entries are so large and concentrated, one need only sample $O(1)$ entries from a single row $i \in S$ in order for $\mathbf{A}_{T \times T}$ to be non-PSD in the prior example. Thus, an algorithm for such instances would be to sample $O(1/\epsilon^2)$ principal submatrices, each of \textit{constant} size. On the other hand, the set $S$ could also be more spread out; namely, we could have $|S| = \alpha n$ for any $\epsilon^2 \leq \alpha \leq \epsilon$, but where each entry in $\mathbf{A}_{S \times \overline{S}}$ is set to $-\epsilon/\sqrt{\alpha}$ (see the matrix in the right side of Figure \ref{fig:Matrix1}). If instead we define $x_i^2 = 1/(2\alpha n)$ for $i \in S$, we still have $x^\top \mathbf{A} x < - \epsilon n/4$. However, now any submatrix $\mathbf{A}_{T \times T}$ with $|T \cap S| = 1$ must have at least $|T| \geq \alpha/\epsilon^2$ rows and columns, otherwise $\mathbf{A}_{T \times T}$ would be PSD due to the identity on the diagonal. The aforementioned instances suggest the following approach: query matrices at $O(\log \frac{1}{\epsilon} )$ different scales of subsampling.
Specifically, for each $\epsilon^2 \leq \alpha = 2^i \leq \epsilon$, we sample $\tilde{O}(\frac{\epsilon^2}{\alpha^2})$ independent $k \times k$ submatrices, each of size $k = \tilde{O}(\alpha/\epsilon^2)$, giving a total complexity of $\tilde{O}(\frac{1}{\epsilon^2})$. The analysis now proceeds by a complete characterization of the ways in which $x^\top \mathbf{A} x$ can be negative. Specifically, we prove the following: either a substantial fraction of the negative mass is hidden inside of a small set of rows and columns $S$ with $|S| < \epsilon n$, or it is the case that $\text{Var}(\mathcal{Q}_T(x))$ is small enough so that a single $k \times k$ submatrix will already be non-PSD with good probability when $k \gtrsim 1/\epsilon$. Given this classification, it suffices to demonstrate a level of subsampling which will find a non-PSD submatrix when the negative mass is concentrated inside a small set $S$. \paragraph{Eigenvector Switching.} To analyze this case, ideally, one would like to demonstrate that conditioned on $T$ intersecting $S$ at some level of subsampling, we will have $\mathcal{Q}_T(x) < 0$ with good probability. Unfortunately, the approach of analyzing the quadratic form with respect to $x$ will no longer be possible; in fact, $\mathcal{Q}_T(x)$ may never be negative conditioned on $|T \cap S| = 1$ (unless $|T| > 1/\epsilon$, which we cannot afford in this case). The complication arises from the fact that the coordinates of $x_i$ in the small set $S$ can be extremely large, and thus the diagonal contribution of $x_i^2 \mathbf{A}_{i,i}$ will dominate the quadratic form of a small submatrix.
For instance, if $\mathbf{A}_{T \times T}$ is a sample with $k =|T| = O(1)$ which intersects the set $S$ in the leftmost matrix in Figure \ref{fig:Matrix1}, where $x_i = 1/(\epsilon \sqrt{n})$ for $i \in S$ and $x_i = 1/ \sqrt{n}$ otherwise, then $\mathcal{Q}_T(x) \approx k/n -(k/\sqrt{n}) x_i + \mathbf{A}_{i,i} x_i^2$, which is dominated by the diagonal term $\mathbf{A}_{i,i} x_i^2 = 1 /(\epsilon^2n)$. Thus, while $\mathbf{A}_{T \times T}$ itself is not PSD, we have that $\mathcal{Q}_T(x)> 0$. To handle this, we must instead analyze the quadratic form $\mathcal{Q}_T(\cdot)$ with respect to \textit{another} direction $y$. The vector $y$ may not even satisfy $y^\top \mathbf{A} y < 0$; however, conditioned on $|T \cap S| \geq 1$, we will have $\mathcal{Q}_T(y) < 0$ with good probability. Clearly, we must scale down the large coordinates $x_i$ for $i \in S$. However, one cannot scale too low, otherwise the negative contribution of the rows $i \in S$ would become too small. The correct scaling is then a careful balancing act between the contributions of the different portions of $\mathbf{A}_{T \times T}$. Informally, since the $x_i$'s for $i \in S$ make up a $|S|/n$ fraction of all coordinates, they can be as large as $x_i^2 \geq (n/|S|) \cdot (1/n)$. However, inside of the smaller submatrix $\mathbf{A}_{T \times T}$, conditioned on $i \in T$, since $|T|$ is small $x_i$ now makes up a larger $1/|T|$ fraction of the submatrix, thus we should scale down $x_i$ to only be $x_i^2 \approx |T|/n$. With this scaling in mind, we (very roughly) set $y_i^2 = (|S|/n) \cdot (|T|/n)$ if $i \in S$, and set $y_i = x_i$ otherwise. The remaining argument then requires a careful analysis of the contribution of entries of $\mathbf{A}$ outside of $S$ to show that the target vector $y$ indeed satisfies $\mathcal{Q}_T(y) <0$ with good probability conditioned on $T$ intersecting $S$.
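The phenomenon above is easy to see on a tiny example. The following sketch (parameters illustrative, and with the $S$-coordinate rescaled to $\sqrt{|T|/n}$, one convenient choice for this particular instance) exhibits a $3 \times 3$ principal submatrix hitting $S$ which is non-PSD, yet the original witness $x$ fails to certify this, while the rescaled direction $y$ succeeds:

```python
import numpy as np

# A 3x3 principal submatrix from the leftmost instance of Figure 1:
# one row in S (index 0), two rows outside S.
eps, n = 0.1, 10_000
M = np.array([[1.0, -1.0, -1.0],
              [-1.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0]])
assert np.linalg.eigvalsh(M)[0] < 0          # M is not PSD

# Original witness: x_i = 1/(eps sqrt(n)) on S, 1/sqrt(n) off S.
# The diagonal term x_0^2 * M_{0,0} = 1/(eps^2 n) dominates, so Q_T(x) > 0.
x = np.array([1 / (eps * np.sqrt(n)), 1 / np.sqrt(n), 1 / np.sqrt(n)])
assert x @ M @ x > 0

# Eigenvector switching: scale the S-coordinate down to ~sqrt(|T|/n).
y = np.array([np.sqrt(3 / n), 1 / np.sqrt(n), 1 / np.sqrt(n)])
assert y @ M @ y < 0
```
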
\subsubsection{PSD Testing with \texorpdfstring{$\ell_2$}{L-2} Gap}\label{sec:techl2} Recall in the $\ell_2$ gap problem, our task is to distinguish between $\mathbf{A}$ being PSD, and $\mathbf{A}$ being $\epsilon$-far in $\ell_2^2$ distance from any PSD matrix, namely that $\sum_{i : \lambda_i(\mathbf{A}) < 0} \lambda_i^2(\mathbf{A}) > \epsilon n^2$. In what follows, we refer to the quantity $\sum_{i : \lambda_i(\mathbf{A}) < 0} \lambda_i^2(\mathbf{A})$ as the \textit{negative mass} of $\mathbf{A}$. First observe that in the special case that we had a ``large'' negative eigenvalue, say $\lambda_{\min}(\mathbf{A}) = - \sqrt{\epsilon} n$, then by applying our testing algorithm for the $\ell_\infty$-gap, we could find a non-PSD submatrix with only $\tilde{O}(1/\epsilon)$ queries. However, in general the negative mass of $\mathbf{A}$ may be spread out over many smaller eigenvalues. Thus, we cannot hope to apply our earlier approach for the $\ell_\infty$-gap, which preserved the quadratic form $\mathcal{Q}_T(x) = x^\top_T \mathbf{A}_{T \times T} x_T$ with respect to a fixed direction $x$. Instead, our approach will be to show that if $\mathbf{A}$ is $\epsilon$-far from PSD in $\ell_2^2$, then the singular values of $\mathbf{A}$ must be ``far'' from PSD, in some other notion of distance, allowing us to indirectly infer the existence of negative eigenvalues in submatrices. \paragraph{PSD matrices are top-heavy, and a reduction to estimating the tail.} Our first step is to show that if $\mathbf{A} \in \R^{n \times n}$ is PSD, then the $t$-``tail'' of $\mathbf{A}$, defined as $\sum_{i > t} \sigma_i^2(\mathbf{A})$, cannot be too large. This can be derived from the following fact: if $\mathbf{A}$ is PSD then the Schatten-$1$ norm of $\mathbf{A}$ satisfies $ \|\mathbf{A}\|_{\mathcal{S}_1} = \sum_i \sigma_i(\mathbf{A}) = \text{Tr}(\mathbf{A})$, which is at most $n$ if $\|\mathbf{A}\|_\infty \leq 1$.
This simple fact will prove highly useful, since whenever we can demonstrate that the Schatten-$1$ norm of a submatrix $\mathbf{A}_{T \times T}$ is larger than $|T|$, we may immediately conclude that $\mathbf{A}_{T \times T}$ is not PSD. In addition, it implies: \smallskip \noindent \textbf{Proposition \ref{prop:topheavy}} (PSD matrices are top-heavy) \textit{ Fix any $n \in \mathbb{N}$, $1 \leq t \leq n$, and $\mathbf{D} \in \R^{n \times n}$. Then if $\mathbf{D}$ is PSD, we have \[\sum_{i > t} \sigma_i(\mathbf{D})^2 \leq \frac{1}{t}\left( \text{Tr}(\mathbf{D})\right)^2\] In particular, if $\mathbf{D}$ has bounded entries $\|\mathbf{D}\|_\infty \leq 1$, we have $\sum_{i > t} \sigma_i(\mathbf{D})^2 \leq \frac{1}{t}n^2$. }\smallskip On the other hand, suppose that $\mathbf{A}$ is $\epsilon$-far from PSD, and let $t > 10 / \epsilon$. Then if no eigenvalue is smaller than $- \epsilon n/100$, a condition which can be checked with $\tilde{O}(1/\epsilon^2)$ queries by first running our $\ell_\infty$-gap tester, then the negative mass must be spread out, and it must be the case that a substantial fraction of the negative mass of $\mathbf{A}$ is contained in the bottom $n-t$ singular values. Specifically, we must have $\sum_{i > t} \sigma_i(\mathbf{A})^2 > (\epsilon/2) n^2$, whereas any PSD matrix $\mathbf{D}$ would have to satisfy $\sum_{i > t} \sigma_i^2(\mathbf{D}) \leq (\epsilon/10) n^2$ by the above Proposition. Thus, after first running our $\ell_\infty$ tester, it suffices to estimate the tail $\sum_{i > t} \sigma_i^2(\mathbf{A})$. Equivalently, since $\|\mathbf{A}\|_F^2 = \sum_{i } \sigma_i^2(\mathbf{A})$ can be efficiently estimated, it also suffices to estimate the ``head'' $\sum_{i \leq t} \sigma_i^2(\mathbf{A})$ to additive $O(\epsilon n^2)$.
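Proposition \ref{prop:topheavy} can be checked numerically on a sample PSD matrix with entries bounded by $1$ (a Gram matrix of unit vectors; the parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 200, 10

# A full-rank PSD matrix with |entries| <= 1: the Gram matrix of unit vectors.
V = rng.standard_normal((n, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)
D = V @ V.T
assert np.abs(D).max() <= 1 + 1e-9
assert np.all(np.linalg.eigvalsh(D) >= -1e-8)   # PSD

sigma = np.linalg.svd(D, compute_uv=False)      # descending order
tail = np.sum(sigma[t:] ** 2)
assert tail > 0                                  # the tail is nontrivial here
# Top-heaviness: sum_{i>t} sigma_i^2 <= Tr(D)^2 / t <= n^2 / t.
assert tail <= np.trace(D) ** 2 / t
assert tail <= n**2 / t
```

In the $\epsilon$-far case, by contrast, the tail must exceed $(\epsilon/2) n^2$; estimating the tail $\sum_{i>t}\sigma_i^2(\mathbf{A})$ to additive $O(\epsilon n^2)$ therefore suffices to separate the two cases.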
In order to accomplish this, one could utilize tools from random matrix concentration, such as the Matrix Bernstein inequality \cite{tropp2015introduction}, which allows one to estimate each $\sigma_i^2$ to error $\eta n^2 $ by taking a random rectangular $O(1/\eta^2) \times O(1/\eta^2)$ sized submatrix. The error in estimating $\sum_{i \leq t} \sigma_i^2(\mathbf{A})$ is then $t \eta n^2$, thus one needs to set $\eta = O(\epsilon/t)$, giving a $O(1/\epsilon^8)$ tester with two-sided error. Using a careful bucketing analysis on the error, along with the more powerful Interior Matrix Chernoff bounds of Gittens and Tropp \cite{gittens2011tail}, one can improve this to $O(t^2/\epsilon^4) = O(1/\epsilon^6)$. However, substantial improvements on unconditional estimation of $\sum_{i \leq t} \sigma_i^2(\mathbf{A})$ seem unlikely. In fact, we demonstrate that even for $t=1$ (spectral norm estimation), tools such as matrix concentration inequalities which sample submatrices of $\mathbf{A}$ must make $\Omega(1/\epsilon^4)$ queries (Lemma \ref{lem:lowerboundAA}), which rules out, for instance, an $o(t^2/\epsilon^4)$ upper bound for general $t$. Thus, instead of unconditional estimation, our main insight is to demonstrate conditions under which $\sum_{i \leq t} \sigma_i^2(\mathbf{A})$ can be efficiently estimated. When these conditions do not hold, we show that it is because the Schatten-$1$ norm of our sampled submatrix must be too large, from which we can deduce the existence of negative eigenvalues in our query. In the first case, if the $t$-th singular value is not too large, say $\sigma_{t+1}(\mathbf{A}) \leq 10n/t$, we show that the (re-scaled) tail $\frac{n^2}{k^2}\sum_{i > t} \sigma_i^2(\mathbf{A}_{S \times T})$ of a random rectangular matrix, where $|S| = |T| = k = O(1/\epsilon^2)$, approximates the tail of $\mathbf{A}$ to error $O(\epsilon n^2)$.
Our argument relies on splitting $\mathbf{A}$ into head and tail pieces $\mathbf{A} = \mathbf{A}_t + \mathbf{A}_{-t}$, where $\mathbf{A}_t$ is $\mathbf{A}$ projected onto the top-$t$ eigenvectors of $\mathbf{A}$. We demonstrate that the spectral mass of each is preserved after passing to a random row submatrix, and additionally demonstrate that $ \sigma_{\max}(\mathbf{A}_{-t}) = \sigma_{t+1}(\mathbf{A})$ does not grow too much using spectral decay inequalities for random submatrices \cite{rudelson2007sampling}. This forces the spectrum of $(\mathbf{A}_{-t})_{S \times [n]}$ to be well spread out, allowing us to apply interlacing inequalities to demonstrate that after adding $(\mathbf{A}_{t})_{S \times [n]}$ back in, the resulting tail is still sufficiently large, and then iterate the argument when sampling columns to obtain $\mathbf{A}_{S \times T}$. On the other hand, if $\sigma_{t+1}(\mathbf{A})$ is too large, then after moving to a random row submatrix the spectral norm of $\mathbf{A}_{-t}$ can concentrate highly in its top eigenvalues, which can then be absorbed by the top $t$ eigenvalues of $\mathbf{A}_t$, stealing too much mass from the tail. Instead, note that if $\sigma_{t+1}(\mathbf{A}) \geq 10n/t$, then the Schatten norm of $\mathbf{A}$ must be large, namely $\sum_i \sigma_i(\mathbf{A}) > 10 n$, which cannot occur if $\mathbf{A}$ is PSD. We show that by applying Interior Eigenvalue Matrix Chernoff bounds (mentioned above), we can preserve this fact, obtaining $\frac{n}{k}\sigma_{t+1} (\mathbf{A}_{S \times T}) > 10n/t$ with good probability when $k = \Omega(1/\epsilon^2)$. If this is the case, then the Schatten norm of the submatrix will be too large: $\|\mathbf{A}_{S \times T}\|_{\mathcal{S}_1} \geq t (10k/t) > 10k$. To obtain a certificate from this fact, we move to the larger principal submatrix $\mathbf{A}_{(S\cup T) \times (S\cup T)}$, which we show must still have large Schatten norm, from which we can infer the existence of negative eigenvalues. 
Similarly, in the earlier case, we show that the large tail of $\mathbf{A}_{S \times T}$ implies that the principal submatrix $\mathbf{A}_{(S\cup T) \times (S\cup T)}$ also has too large of a tail, meaning it must not be PSD. \subsubsection{Lower Bounds}\label{sec:techlb} As seen above, the distribution of negative mass in the matrix $\mathbf{A}$ plays an important role in the complexity of testing if $\mathbf{A}$ is PSD. Specifically, the problem becomes easier the more concentrated the negative mass is within a few eigenvalues. So in order to avoid an $o(1/\epsilon^2)$ upper bound from the $\ell_\infty$-testing algorithm, our hard instance must have $|\lambda_{\min}(\mathbf{A})| = O( \epsilon n)$ in the $\epsilon$-far case. On the other hand, we cannot allow the negative mass to be extremely spread out, otherwise we would have to add many more \textit{positive} eigenvalues to avoid violating the trace constraint $|\text{Tr}(\mathbf{A})| = |\sum_i \lambda_i(\mathbf{A})| \leq n$ implied by boundedness, creating further spectral differences between the instances. With this in mind, our hard distribution will have $1/\epsilon$ negative eigenvalues, each roughly equal to $\lambda_i(\mathbf{A}) = - \epsilon n$. \paragraph{The Hard Instance.} Our first insight is to construct a discrete instance, with the property that the distribution induced by observing a small sample of the ``meaningful'' entries of $\mathbf{A}$ is \textit{identical} in both cases. Specifically, we construct two distributions: $\mathcal{D}_{\text{YES}}$ and $\mathcal{D}_{\text{NO}}$ over $n \times n$ matrices. In both cases, $\mathbf{A}$ will be block diagonal, with $k$ disjoint blocks $B_1,B_2,\dots,B_k \subset [n]$, each of size $|B_i| = n/k$, for some parameter $k$; we will later set $k=\Theta(1/\epsilon)$, so our target lower bound is $\Omega(k^2)$.
In $\mathcal{D}_{\text{YES}}$, each block $\mathbf{A}_{B_i \times B_i}$ will be PSD, whereas in $\mathcal{D}_{\text{NO}}$ we will have $\lambda_{\min}(\mathbf{A}_{B_i \times B_i}) = - \tilde{\Theta}(n/k) \approx - \epsilon n$. The partition $B_1 \cup B_2 \cup \dots \cup B_k = [n]$ is chosen randomly, so that for any fixed set of samples, only a small fraction of them will be contained inside any block $\mathbf{A}_{B_i \times B_i}$. The diagonal entries will always be fixed to $1$, and all off-diagonal entries are either $\{0,1,-1\}$. The samples $a_1,a_2,\dots,a_s \in [n] \times [n]$ of any algorithm can then be interpreted as a graph $H$ (possibly with self-loops), where for each edge $a_r= (i,j) \in E(H)$, the algorithm learns the value $\mathbf{A}_{i,j} \in \{0,1,-1\}$. Now consider the algorithm which just samples a $t \times t$ principal submatrix $T \subset [n]$, so that $H$ is a $t$-clique. Now in expectation $\ex{|T \cap B_i|} = \frac{t}{k}$ for each $i$; however, by a balls and bins argument, as $t$ approaches $k$ we will obtain some blocks $i$ with $|T \cap B_i| = \Omega(\log k/\log\log k)$. Thus, to fool this query, we must be able to ``fool'' cliques of size roughly $\log k$ within a block $B_i$. On the other hand, an algorithm could find many more entries in a block by lop-sided sampling: for instance, it could sample $k^2$ entries in a single column of $\mathbf{A}$ ($H$ is a $k^2$-star), getting $k$ entries inside a column of a block $\mathbf{A}_{B_i \times B_i}$. Thus we must also fool large star queries. It turns out that the right property to consider is the \textit{matching number} $\nu(H)$ of the query graph $H$, i.e. the size of a maximum matching. Notice for a star $H$, we have $\nu(H) = 1$. We prove (roughly) that if within each block $B_i$ one can ``fool'' every query graph $H$ inside $\mathbf{A}_{B_i \times B_i}$ with matching number $\nu(H) < \ell$, then one obtains a lower bound of $\Omega(k^{\frac{2(\ell -1)}{\ell}})$.
Thus, it will suffice to fool all query graphs $H$ within a block $B_i$ with $\nu(H) \leq \log k$. As a first step towards this, suppose that in $\mathcal{D}_{\text{YES}}$, we set each block independently to $\mathbf{A}_{B_i \times B_i} = v v^\top$, where $v \in \{1,-1\}^{|B_i|}$ is a random sign vector, and in $\mathcal{D}_{\text{NO}}$, we set $\mathbf{A}_{B_i \times B_i} = - v v^\top$ (except we fix the diagonal to be $1$ in both cases). Now notice that the distribution of any individual entry $(\mathbf{A}_{B_i \times B_i})_{a,b}$ is symmetric, and identical in both $\mathcal{D}_{\text{YES}}$ and $\mathcal{D}_{\text{NO}}$. Furthermore, it is not difficult to check that the distribution of a path or star query $H$ inside of $\mathbf{A}_{B_i \times B_i}$ is also identical in both cases. On the other hand, if $H$ contained a \textit{triangle}, then this would not be the case, since in $\mathcal{D}_{\text{YES}}$ one could never have a negative cycle $(x,y,z)$ where $v_x v_y = v_y v_z = v_z v_x = -1$, whereas this could occur in $\mathcal{D}_{\text{NO}}$, since we could have that $-v_x v_y = -v_y v_z = -v_z v_x = -1$. Thus, roughly, to distinguish the distributions $\mathcal{D}_{\text{YES}}$ and $\mathcal{D}_{\text{NO}}$, an algorithm must sample a triangle within one of the blocks $\mathbf{A}_{B_i \times B_i}$, which one can show requires $\Omega(k^{4/3})$ queries, yielding a first lower bound.\footnote{Note that $\nu(H) = 1$ for a triangle $H$, so the $\Omega(k^{2(\ell - 1)/\ell})$ lower bound when $\nu(H) < \ell$ is actually loose here.} \paragraph{Boosting to $\Omega(k^{2})$.} Given the above example, we would now like to construct instances which fool $H$ with larger and larger $\nu(H)$. In fact, our next insight is to have an even simpler structure on $\mathcal{D}_{\text{YES}}$ and $\mathcal{D}_{\text{NO}}$: each of them will be a random permutation of one of two \textit{fixed} matrices $\mathbf{D}_1,\mathbf{D}_2$, respectively.
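The path/star versus triangle distinction in the $\pm vv^\top$ construction above can be verified exhaustively on a small block (the block size of $4$ here is illustrative): off-diagonal path and star observations take the same set of values in both cases, while the product of entries around a triangle has opposite signs.

```python
import numpy as np
from itertools import product

m = 4
def off(B, edges):
    """Off-diagonal observations of B along a fixed set of query edges."""
    return tuple(int(B[i, j]) for (i, j) in edges)

path = [(0, 1), (1, 2)]          # a 2-edge path
star = [(0, 1), (0, 2), (0, 3)]  # a 3-star
tri  = [(0, 1), (1, 2), (2, 0)]  # a triangle

yes_path, no_path = set(), set()
yes_star, no_star = set(), set()
for signs in product([-1, 1], repeat=m):
    v = np.array(signs)
    Y, N = np.outer(v, v), -np.outer(v, v)   # YES block vs NO block
    yes_path.add(off(Y, path)); no_path.add(off(N, path))
    yes_star.add(off(Y, star)); no_star.add(off(N, star))
    # Triangle parity: the product of the three entries is +1 vs -1.
    assert np.prod([Y[i, j] for i, j in tri]) == 1
    assert np.prod([N[i, j] for i, j in tri]) == -1

# Path and star observations take the same set of values in both cases.
assert yes_path == no_path and yes_star == no_star
```
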
We now formalize the ``fooling'' condition we need. For a matrix $\mathbf{B}$ and a query graph $H$, let $(\mathbf{B})_H$ denote the result of setting all entries of $\mathbf{B}$ not in $H$ equal to zero. Then the matrices $\mathbf{D}_1,\mathbf{D}_2$ must have the property that for any graph $H$ with $\nu(H) \leq \log k$, if $\sigma:[m] \to [m]$ is a random permutation and $\mathbf{P}_\sigma \in \R^{m \times m}$ is the row permutation matrix corresponding to $\sigma$, then the distribution of $(\mathbf{P}_\sigma \mathbf{D}_1 \mathbf{P}_\sigma^\top)_H$ is identical to the distribution of $(\mathbf{P}_\sigma \mathbf{D}_2 \mathbf{P}_\sigma^\top)_H$. We call this property $H$-\textit{subgraph equivalence}. This implies that any algorithm which queries the edges in $H$ inside of $\mathbf{P}_\sigma \mathbf{D}_1 \mathbf{P}_\sigma^\top$ or $\mathbf{P}_\sigma \mathbf{D}_2 \mathbf{P}_\sigma^\top$ will be unable to distinguish between them with any advantage. To obtain a lower bound, we must also have a gap between $\lambda_{\min}(\mathbf{D}_1)$ and $\lambda_{\min}(\mathbf{D}_2)$, so that their spectrum can be shifted to make one PSD and the other far. Furthermore, neither $\lambda_{\min}(\mathbf{D}_1)$ nor $\lambda_{\min}(\mathbf{D}_2)$ can be too negative, otherwise by shifting we would lose boundedness of the entries. A priori, it is not even clear that such matrices $\mathbf{D}_1,\mathbf{D}_2$ exist, even for fixed values of $\nu(H)$, such as $\nu(H) = 5$. Our main contribution now is to demonstrate their existence for every $\nu(H)$. Our construction is simple, but perhaps surprisingly so. Both $\mathbf{D}_1,\mathbf{D}_2$ will be adjacency matrices; in the PSD case, we set $\mathbf{D}_1$ to be the cycle graph $C_{2m+1}$ on $2m+1 = \Theta(\log k)$ vertices, and in the $\epsilon$-far case we set $\mathbf{D}_2$ to be the disjoint union of two cycles $C_{m+1} \oplus C_m$.
Since one of $m$ and $m+1$ is even, while $2m+1$ is odd, we will have that $\lambda_{\min}(C_{m+1} \oplus C_m) = -2$, but $\lambda_{\min}(C_{2m+1} ) > -2$.\footnote{To intuitively see why this is true, note that if $m$ is even and $v \in \{-1,1\}^m$ is the vector that assigns opposite signs to adjacent vertices of $C_m$, then we have $C_m v = -2 v$. However, if $m$ is odd, this assignment $v$ is no longer possible.} To show subgraph equivalence, it suffices to show a slightly more general version of the following: for any graph $H$ with $\nu(H) < m/4$, the number of subgraphs of $C_{2m+1}$ isomorphic to $H$ is the same as the number of subgraphs of $C_{m+1} \oplus C_m$ isomorphic to $H$.\footnote{A more general statement is needed since $H$ can also query for edges which do not exist in $C_{2m+1}$.} Note that if $\nu(H) < m/4$, then $H$ is just a disjoint collection of paths. Our proof of this fact is by constructing a bijection from arrangements of $H$ in $C_{2m+1}$ to arrangements of $H$ in $C_{m+1} \oplus C_m$. While this is a seemingly simple property, some care must be taken in designing the bijection. Our mapping involves first ``swapping'' two paths (whose length depends on $H$) in $C_{2m+1}$, before ``splitting'' $C_{2m+1}$ into two cycles of length $m$ and $m+1$. We direct the reader to Section \ref{sec:CnLemma} for further details. \vspace{-.1 in} \paragraph{Amplifying the Gap.} The subgraph equivalence between $C_{2m+1}$ and $C_{m+1} \oplus C_m$ prevents any algorithm from distinguishing between them with a small number of samples; however, the gap in the minimum eigenvalue shrinks at the rate of $\Theta(1/m^2)$. Meaning, if we set $\gamma = |\lambda_{\min}(C_{2m+1})| = 2-\Theta(1/m^2)$, while the matrix $\gamma \mathbf{I} + C_{2m+1}$ is PSD and has constant sized entries, we only have $\lambda_{\min}(\gamma \mathbf{I} + C_{m+1} \oplus C_m) = - \Theta(1/m^2)$, which is not far enough from PSD.
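The eigenvalue facts above are easy to confirm numerically (the value of $m$ is illustrative; recall $\lambda_{\min}(C_{2m+1}) = -2\cos(\pi/(2m+1))$):

```python
import numpy as np

def cycle(n):
    """Adjacency matrix of the n-cycle C_n (n >= 3)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

m = 8
odd = cycle(2 * m + 1)                       # C_{2m+1}
union = np.block([                           # C_{m+1} (+) C_m
    [cycle(m + 1), np.zeros((m + 1, m))],
    [np.zeros((m, m + 1)), cycle(m)],
])

lam_odd = np.linalg.eigvalsh(odd)[0]
lam_union = np.linalg.eigvalsh(union)[0]
assert abs(lam_union + 2) < 1e-9             # an even cycle attains -2
assert lam_odd > -2                          # the odd cycle does not
# The gap is Theta(1/m^2): lam_min(C_{2m+1}) = -2 cos(pi/(2m+1)).
assert abs(lam_odd + 2 * np.cos(np.pi / (2 * m + 1))) < 1e-9
```
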
Instead, recall that we only need $m = \Omega(\log k)$ to fool all $H$ with $\nu(H) \leq \log k$, but the block size which we must fill is much larger: $\mathbf{A}_{B_i \times B_i}$ has size $|B_i| = n/k$. Thus, instead of setting $m = \Theta(n/k)$ and filling all of $\mathbf{A}_{B_i \times B_i}$ with the cycles, we set $m = \Theta(\log k)$, and we amplify the spectral gap by taking the tensor product of the small graphs $C_{2m+1}$ and $C_{m+1} \oplus C_m$ with a large, fixed matrix $\mathbf{M}$, so that $(\gamma \mathbf{I} + C_{2m+1}) \otimes \mathbf{M}$ has $|B_i|$ rows and columns. We prove that taking the tensor product with any fixed $\mathbf{M}$ preserves the subgraph equivalence properties of the original matrices. From here, our lower bounds for testing PSD with $\ell_2$ gap, Schatten norms, Ky Fan norms, and the cost of the best rank-$k$ approximation all follow by a proper choice of $\mathbf{M}$. For PSD testing, we can choose $\mathbf{M} = \mathbf{1}$ to be the all $1$'s matrix, and to amplify the gap in Schatten-$1$ norm, we can choose $\mathbf{M}$ to be a random Rademacher matrix. Since $\mathbf{M} = \mathbf{1}$ is PSD and $\|\mathbf{M}\|_2 = \wt{\Omega}(n/k)$, the gap is amplified to the desired $- \wt{\Omega}(n/k)$. Finally, we remark that to obtain a lower bound for another norm, any matrix $\mathbf{M}$ which is large in that norm may be suitable, so long as the original sub-graph equivalent matrices also have a gap in that norm. We pose it as an interesting open problem to design other pairs of matrices $\mathbf{D}_1,\mathbf{D}_2$ with different spectral gaps which have good sub-graph equivalence properties. \section{Proofs of Variance Bounds from Section \ref{sec:warmup}} In this section, we provide the proofs of the variance bounds in Lemma \ref{lem:var} and Corollary \ref{cor:var}. For convenience, we restate the Lemma and Corollary here before the proofs.
\smallskip \smallskip \smallskip \noindent \textbf{Lemma \ref{lem:var}} \textit{ Let $\delta_1,\dots,\delta_n \sim \text{Bernoulli}(\frac{k}{n})$. Let $y \in \R^n$ be any vector such that $\|y\|_2 \leq 1, \|y\|_\infty \leq \frac{1}{\epsilon \sqrt{n}}$, and $y^\top \mathbf{A} y = - \epsilon n$, where $\mathbf{A} \in \R^{n \times n}$ satisfies $\|\mathbf{A}\|_\infty \leq 1$. Further suppose that $\mathcal{R}_i(y) + \mathcal{C}_i(y) \leq 0$ for each $i \in [n]$. Then, assuming $k \geq 6/\epsilon$, we have \[\mathbf{Var}\left[\sum_{i,j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \right] \leq O\left(\frac{k^3}{n^2}\right) \] Moreover, if the tighter bound $\|y\|_\infty \leq \frac{\alpha}{\epsilon \sqrt{n}}$ holds for some $\alpha \leq 1$, we have \[\mathbf{Var}\left[\sum_{i,j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j\right] \leq O\left(\frac{k^2}{n^2}+ \frac{\alpha k^3}{n^2} \right) \] }\smallskip \smallskip \smallskip \begin{proof} Let $c_{i,j} = \mathbf{A}_{i,j}y_i y_j$. We have \begin{equation} \label{eqn:varbig} \begin{split} &\mathbf{Var}\left[\sum_{i,j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \right] \leq \frac{k}{n}\sum_{i } c_{i,i}^2 + \frac{k^2}{n^2} \sum_{i \neq j} c_{i,j}^2 + \frac{k^2}{n^2} \sum_{i \neq j} c_{i,j} c_{j,i} + \frac{k^2}{n^2 }\sum_{i \neq j } c_{i,i} c_{j,j} + \frac{k^2}{n^2} \sum_{i \neq j}c_{i,i}c_{i,j} \\ +& \frac{k^2}{n^2} \sum_{i \neq j}c_{i,i}c_{j,i} + \frac{k^3}{n^3}\sum_{i\neq j \neq u} c_{i,j} c_{u,j} + \frac{k^3}{n^3}\sum_{j\neq i \neq u} c_{i,j} c_{i,u} + \frac{k^3}{n^3}\sum_{i\neq j \neq u} c_{i,j} c_{j,u}+ \frac{k^3}{n^3}\sum_{j\neq i \neq u} c_{i,j} c_{u,i} \\ & + \frac{2k^3}{n^3} \sum_{i \neq j \neq u} c_{i,i} c_{j,u} + \frac{k^4}{n^4}\sum_{i \neq j \neq v \neq u} c_{i,j} c_{u,v} - \left( \frac{k^2}{n^2}\sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j - \frac{k}{n} \sum_i \mathbf{A}_{i,i} y_i^2 \right)^2 \\ \end{split} \end{equation} We first consider the last term $\frac{k^4}{n^4}\sum_{i \neq j \neq v \neq u} c_{i,j} c_{u,v} = 
\frac{k^4}{n^4}\sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j \sum_{u \neq v \neq i \neq j} y_u \mathbf{A}_{u,v} y_v$. Here $i \neq j \neq v \neq u$ means all $4$ indices are distinct. Note that this term is canceled by a subset of the terms within $ \left( \frac{k^2}{n^2}\sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j - \frac{k}{n} \sum_i \mathbf{A}_{i,i} y_i^2 \right)^2$. Similarly, the term $\frac{k^2}{n^2 }\sum_{i \neq j } c_{i,i} c_{j,j}$ cancels. Moreover, after expanding $ \left( \frac{k^2}{n^2}\sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j - \frac{k}{n} \sum_i \mathbf{A}_{i,i} y_i^2 \right)^2$, every remaining term which does not cancel with another term exactly is equal to another term in the variance above, but with an additional one (or two) factors of $\frac{k}{n}$ attached. Thus, if we can bound the remaining terms in Equation \ref{eqn:varbig} by some value $B$, then an overall variance bound of $2\cdot B$ will follow. We now consider $\mathcal{T} = \left(\sum_{j\neq i \neq u} c_{i,j} c_{i,u} +\sum_{i\neq j \neq u} c_{i,j} c_{u,j} +\sum_{j\neq i \neq u} c_{i,j} c_{u,i} +\sum_{i\neq j \neq u} c_{i,j} c_{j,u} \right)$. We have \[\sum_{i\neq j \neq u} c_{i,j} c_{i,u} = \sum_i \sum_{j \neq i} y_i \mathbf{A}_{i,j} y_j \sum_{u \neq i \neq j} y_i \mathbf{A}_{i,u} y_u\] \[\sum_{i\neq j \neq u} c_{i,j} c_{u,j} = \sum_j \sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j \sum_{u \neq i \neq j} y_u \mathbf{A}_{u,j} y_j\] \[\sum_{j\neq i \neq u} c_{i,j} c_{u,i} = \sum_i \sum_{j \neq i} y_i \mathbf{A}_{i,j} y_j \sum_{u \neq i \neq j} y_u \mathbf{A}_{u,i} y_i\] \[\sum_{i\neq j \neq u} c_{i,j} c_{j,u} = \sum_j \sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j \sum_{u \neq i \neq j} y_j \mathbf{A}_{j,u} y_u\] Now for simplicity, we write $\mathcal{R}_i = \mathcal{R}_i(y)$ and $\mathcal{C}_i = \mathcal{C}_i(y)$ for $i \in [n]$. Then by assumption, we have $\mathcal{R}_i + \mathcal{C}_i \leq 0$ for each $i$, thus $|\sum_i (\mathcal{R}_i + \mathcal{C}_i) | = \sum_i |(\mathcal{R}_i + \mathcal{C}_i)|$. 
Also note that we have $|\sum_i (\mathcal{R}_i + \mathcal{C}_i)| = |2 y^\top \mathbf{A} y -2\sum_i \mathbf{A}_{i,i} y_i^2| \leq 4\epsilon n$. Now observe \[\left|\left( \sum_i\sum_{j \neq i} y_i \mathbf{A}_{i,j} y_j \sum_{u \neq i \neq j} y_i \mathbf{A}_{i,u} y_u\right) - \sum_i\mathcal{R}_i^2 \right| = \sum_i\sum_{u \in [n] \setminus i} y_i^2 \mathbf{A}_{i,u}^2 y_u^2\leq \sum_iy_i^2 \leq 1\] And similarly \[\left|\left(\sum_j \sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j \sum_{u \neq i \neq j} y_u \mathbf{A}_{u,j} y_j\right) -\sum_j\mathcal{C}_j^2 \right| = \sum_j\sum_{u \in [n] \setminus j} y_u^2 \mathbf{A}_{u,j}^2 y_j^2\leq \sum_jy_j^2 \leq 1\] \[\left|\left( \sum_i \sum_{j \neq i} y_i \mathbf{A}_{i,j} y_j \sum_{u \neq i \neq j} y_u \mathbf{A}_{u,i} y_i\right) - \sum_i\mathcal{R}_i\mathcal{C}_i \right| = \sum_i\sum_{u \in [n] \setminus i} y_i^2 \mathbf{A}_{i,u} \mathbf{A}_{u,i} y_u^2\leq \sum_iy_i^2 \leq 1\] \[\left|\left( \sum_j \sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j \sum_{u \neq i \neq j} y_j \mathbf{A}_{j,u} y_u\right) -\sum_j\mathcal{R}_j\mathcal{C}_j \right| = \sum_j\sum_{u \in [n] \setminus j} y_u^2 \mathbf{A}_{u,j} \mathbf{A}_{j,u} y_j^2\leq \sum_jy_j^2 \leq 1\] Taking these four equations together, we obtain $\left| \mathcal{T} - \sum_i (\mathcal{R}_i + \mathcal{C}_i)^2 \right| \leq 4$, so it will suffice to upper bound the value $ \sum_i (\mathcal{R}_i + \mathcal{C}_i)^2$ instead. 
First note that since $|y_i| \leq \frac{1}{\epsilon \sqrt{n}}$ for all $i$, for any $i \in [n]$ we have \[|(\mathcal{R}_i + \mathcal{C}_i)| \leq |\sum_{j \neq i} y_i \mathbf{A}_{i,j} y_j| + |\sum_{j \neq i} y_j \mathbf{A}_{j,i} y_i| \leq \frac{1}{\epsilon \sqrt{n}}\Big(\sum_{j } 2|y_j|\Big) \leq \frac{2}{\epsilon \sqrt{n}} \|y\|_1 \leq \frac{2}{\epsilon} \] Combining this bound with the fact that $\sum_i |(\mathcal{R}_i + \mathcal{C}_i)| \leq 4 \epsilon n$ from earlier, it follows that the sum $\sum_i (\mathcal{R}_i + \mathcal{C}_i)^2$ is maximized by setting $2 \epsilon^2 n$ of the terms $(\mathcal{R}_i + \mathcal{C}_i)$ equal to the largest possible value of $(2/\epsilon)$, so that $\sum_i (\mathcal{R}_i + \mathcal{C}_i)^2 \leq 2 \epsilon^2 n (2/\epsilon)^2 = O(n)$. This yields an upper bound of $\frac{k^3}{n^3}\mathcal{T} = O(\frac{k^3}{n^2})$. Note that, in general, given the bound $\|y\|_\infty \leq \frac{\alpha}{\epsilon \sqrt{n}}$ for some value $\alpha \leq 1$, each term satisfies $|(\mathcal{R}_i + \mathcal{C}_i)| \leq \frac{2\alpha }{\epsilon}$. On the other hand, $\sum_i |(\mathcal{R}_i + \mathcal{C}_i)| \leq 4 \epsilon n$. Thus, once again, $\sum_i |(\mathcal{R}_i + \mathcal{C}_i)|^2$ is maximized by setting $\Theta(\epsilon^2 n/\alpha )$ inner terms equal to $\Theta((\frac{\alpha }{\epsilon})^2)$, giving $\mathcal{T} = O(\alpha n)$ for general $\alpha \leq 1$. Thus, for general $\alpha \leq 1$, we have $\frac{k^3}{n^3}\mathcal{T} = O(\frac{\alpha k^3}{n^2})$. Next, we bound $\frac{k^2}{n^2} \sum_{i \neq j}c_{i,i}c_{i,j} + \frac{k^2}{n^2} \sum_{i \neq j}c_{i,i}c_{j,i}$ by $\frac{k^2}{n^2}\sum_i y_i^2 |\mathcal{R}_i + \mathcal{C}_i| $. 
Similarly to the above, $ |\mathcal{R}_i + \mathcal{C}_i| \leq 2|y_i|\sqrt{n}$, thus altogether we have \begin{equation}\label{eqn:secondorderterms} \frac{k^2}{n^2}\left( \sum_{i \neq j}c_{i,i}c_{i,j} +\sum_{i \neq j}c_{i,i}c_{j,i} \right)\leq \frac{2k^2}{n^2}\sum_i |y_i|^3 \sqrt{n} \end{equation} Using that $\|y\|_\infty \leq \frac{\alpha}{\epsilon \sqrt{n}}$ for $\alpha \leq 1$, and the fact that $\|y\|_2^2 \leq 1$, it follows that $\|y\|_3^3$ is maximized by having $\frac{n \epsilon^2}{\alpha^2}$ terms equal to $\|y\|_\infty \leq\frac{\alpha}{\epsilon \sqrt{n}}$, which gives an upper bound of $\|y\|_3^3 \leq \frac{\alpha}{\epsilon \sqrt{n}}$. Thus, we can bound the right hand side of Equation \ref{eqn:secondorderterms} by $O(\frac{k^2 \alpha}{n^2 \epsilon})$, which is $O(k^3/n^2)$ when $\alpha = 1$ using that $k = \Omega(1/\epsilon)$. Now, we bound $\frac{k^3}{n^3} \sum_{i \neq j \neq u} c_{i,i} c_{j,u} $ by \begin{equation} \begin{split} \frac{k^3}{n^3} \sum_{i \neq j \neq u} c_{i,i} c_{j,u} &\leq \frac{k^3}{n^3} \sum_i y_i^2 \mathbf{A}_{i,i} \sum_{j \neq u \neq i} y_j\mathbf{A}_{j,u} y_u \\ & \leq \frac{k^3}{n^3} \sum_i y_i^2 \mathbf{A}_{i,i} \left(\epsilon n + O(1)\right) \\ &\leq \frac{\epsilon k^3}{n^2} \\ & = O( \frac{k^2}{n^2}) \end{split} \end{equation} Also observe that $\sum_{i ,j } c_{i,j}^2 \leq \sum_{i,j} y_i^2 y_j^2 = \|y\|_2^4 \leq 1$, so $\sum_{i \neq j} c_{i,j}^2 \leq \sum_{i, j} c_{i,j}^2 \leq 1$, and also $\sum_{i \neq j} c_{i,j} c_{j,i} \leq \sum_{i,j} y_i^2 y_j^2 \leq 1$, which bounds their corresponding terms in the variance by $O(k^2/n^2)$. Finally, we must bound the remaining term $\frac{k}{n}\sum_{i } c_{i,i}^2 = \frac{k}{n}\sum_i y_i^4 \mathbf{A}_{i,i}^2 \leq\frac{k}{n} \sum_i y_i^4 $. Note that $|y_i| \leq 1/(\epsilon \sqrt{n})$ for each $i$, and $\|y\|_2 \leq 1$. 
Thus $\sum_i y_i^4$ is maximized when one has $\epsilon^2 n$ terms equal to $1/(\epsilon \sqrt{n})$, and the rest set to $0$. So $\sum_i y_i^4 \leq \epsilon^2 n(\frac{1}{\epsilon \sqrt{n}})^4 \leq \frac{1}{\epsilon^2 n}$. In general, if $\|y\|_\infty \leq \frac{\alpha}{\epsilon \sqrt{n}}$, we have $\sum_i y_i^4 \leq \frac{\epsilon^2 n}{\alpha^2 }(\frac{\alpha}{\epsilon \sqrt{n}})^4 \leq \frac{\alpha^2}{\epsilon^2 n}$. Thus we can bound $\frac{k}{n}\sum_i c_{i,i}^2$ by $O(\frac{k^3 \alpha^2}{n^2})$. Altogether, this gives \begin{equation} \begin{split} &\mathbf{Var}\left[\sum_{i,j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \right] \leq O(\frac{k^2}{n^2}+ \frac{\alpha k^3}{n^2} + \frac{\alpha k^2 }{n^2 \epsilon} + \frac{\alpha^2 k^3}{n^2}) \\ & = O(\frac{k^2}{n^2}+ \frac{\alpha k^3}{n^2} + \frac{\alpha^2 k^3}{n^2})\\ \end{split} \end{equation} which is $O(k^3/n^2)$ in general (for $\alpha \leq 1$), where we assume $k \geq 6/\epsilon$ throughout. \end{proof} \smallskip \smallskip \smallskip \noindent \textbf{Corollary \ref{cor:var}} \textit{ Let $\delta_1,\dots,\delta_n \in \{0,1\}$ be independent indicator random variables with $\ex{\delta_i} = k/n$. Let $\mathbf{A} \in \R^{n \times n}$ with $\|\mathbf{A}\|_\infty \leq 1$ be any matrix and $y$ a vector such that $|y^\top\mathbf{A} y| \leq c_1\epsilon n$ for some value $c_1>0$, and such that $\|y\|_\infty \leq \frac{\alpha}{\epsilon \sqrt{n}}$ for some $\alpha > 0$. Let $\mathbf{Z} \in \R^n$ be defined by $\mathbf{Z}_i = \mathcal{R}_i(y) + \mathcal{C}_i(y)$ for $i \in [n]$, and suppose we have $\|\mathbf{Z}\|_2^2 \leq c_2 \epsilon n$. 
Then we have \[\mathbf{Var}\left[\sum_{i\neq j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \right] \leq O\left(\frac{k^2}{n^2} + \frac{c_1^2 k^4 \epsilon^2}{n^2} + \frac{(c_1 +c_2 )\epsilon k^3}{n^2} + \frac{\alpha^2 k^3}{n^2} \right) \] }\smallskip \smallskip \smallskip \begin{proof} We proceed as in Lemma \ref{lem:var}, except that we may remove the terms with $c_{i,j}$ for $i=j$, yielding \begin{equation} \begin{split} &\mathbf{Var}\left[\sum_{i\neq j} y_i \mathbf{A}_{i,j} y_j \delta_i \delta_j \right] \leq \frac{k^2}{n^2} \sum_{i \neq j} c_{i,j}^2 + \frac{k^2}{n^2} \sum_{i \neq j} c_{i,j} c_{j,i} + \frac{k^3}{n^3}\sum_{i\neq j \neq u} c_{i,j} c_{u,j} \\ & + \frac{k^3}{n^3}\sum_{j\neq i \neq u} c_{i,j} c_{i,u} + \frac{k^3}{n^3}\sum_{i\neq j \neq u} c_{i,j} c_{j,u}+ \frac{k^3}{n^3}\sum_{j\neq i \neq u} c_{i,j} c_{u,i} \\ & + \frac{k^3}{n^3} \sum_{i \neq j \neq u} c_{i,i} c_{j,u} + \frac{k^4}{n^4}\sum_{i \neq j \neq v \neq u} c_{i,j} c_{u,v}- \left( \frac{k^2}{n^2}\sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j \right)^2 \\ \end{split} \end{equation} As in Lemma \ref{lem:var}, we can cancel the term $\frac{k^4}{n^4}\sum_{i \neq j \neq v \neq u} c_{i,j} c_{u,v}$ with a subterm of $-\left( \frac{k^2}{n^2}\sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j \right)^2$, and bound the remaining contribution of $-\left( \frac{k^2}{n^2}\sum_{i \neq j} y_i \mathbf{A}_{i,j} y_j \right)^2$ by individually bounding the other terms in the sum. First, we can similarly bound the last term by $c_1^2 \epsilon^2 k^4/n^2$ as needed. Now when bounding $$\mathcal{T} = \left(\sum_{j\neq i \neq u} c_{i,j} c_{i,u} +\sum_{i\neq j \neq u} c_{i,j} c_{u,j} +\sum_{j\neq i \neq u} c_{i,j} c_{u,i} +\sum_{i\neq j \neq u} c_{i,j} c_{j,u} \right)$$ we first observe that in the proof of Lemma \ref{lem:var}, we only needed a bound on $\|\mathbf{Z}\|_2^2$ to give the bound on $\mathcal{T}$. 
So by assumption, $\|\mathbf{Z}\|_2^2 \leq c_2 \epsilon n$, which gives a total bound of $\frac{c_2 k^3 \epsilon }{n^2}$ on $\frac{k^3}{n^3}\mathcal{T}$. Also, we bound $\frac{k^3}{n^3} \sum_{i \neq j \neq u} c_{i,i} c_{j,u} $ by \begin{equation} \begin{split} \frac{k^3}{n^3} \sum_{i \neq j \neq u} c_{i,i} c_{j,u} &\leq \frac{k^3}{n^3} \sum_i y_i^2 \mathbf{A}_{i,i} \sum_{j \neq u \neq i} y_j\mathbf{A}_{j,u} y_u \\ & \leq \frac{k^3}{n^3} \sum_i y_i^2 \mathbf{A}_{i,i} \left(c_1\epsilon n + O(1)\right) \\ &\leq \frac{\epsilon c_1 k^3}{n^2} \\ \end{split} \end{equation} which is within our desired upper bound. Finally observe that $\sum_{i ,j } c_{i,j}^2 \leq \sum_{i,j} y_i^2 y_j^2 = \|y\|_2^4 \leq 1$, so $\sum_{i \neq j} c_{i,j}^2 \leq \sum_{i, j} c_{i,j}^2 \leq 1$, and also $\sum_{i \neq j} c_{i,j} c_{j,i} \leq \sum_{i,j} y_i^2 y_j^2 \leq 1$, which bounds their corresponding terms in the variance by $O(k^2/n^2)$, which completes the proof. \end{proof}
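The variance expansions in both proofs ultimately rest on the identity $\mathbf{Var}[\sum_{i,j} c_{i,j}\delta_i\delta_j] = \sum_{i,j,u,v} c_{i,j} c_{u,v}\left(\ex{\delta_i \delta_j \delta_u \delta_v} - \ex{\delta_i\delta_j}\ex{\delta_u\delta_v}\right)$, where, by independence, $\ex{\delta_{i_1}\cdots\delta_{i_r}} = (k/n)^{|\{i_1,\dots,i_r\}|}$. This identity can be verified by brute force on a toy instance (a sanity-check sketch only: the small $n$, $k$ and the random $\mathbf{A}$, $y$ are our choices, and the lemma's constraints on $y$ are deliberately not imposed):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
p = k / n
A = rng.choice([-1.0, 1.0], size=(n, n))    # entries bounded by 1 in magnitude
y = rng.normal(size=n)
y /= np.linalg.norm(y)                      # ||y||_2 = 1
c = np.outer(y, y) * A                      # c_{ij} = y_i A_{ij} y_j

# Exact variance of Q = sum_{ij} c_{ij} delta_i delta_j over all 2^n outcomes.
vals, probs = [], []
for delta in itertools.product([0, 1], repeat=n):
    d = np.array(delta, dtype=float)
    probs.append(float(np.prod(np.where(d == 1, p, 1 - p))))
    vals.append(float(d @ c @ d))
vals, probs = np.array(vals), np.array(probs)
mean = probs @ vals
var_exact = probs @ (vals - mean) ** 2

# Moment expansion: E[product of deltas] = p^(number of distinct indices).
E = lambda idx: p ** len(set(idx))
var_moment = sum(
    c[i, j] * c[u, v] * (E((i, j, u, v)) - E((i, j)) * E((u, v)))
    for i in range(n) for j in range(n) for u in range(n) for v in range(n)
)
```

The two computations agree to machine precision; grouping the $(i,j,u,v)$ terms by how many indices coincide recovers exactly the terms of Equation \ref{eqn:varbig}.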
{ "redpajama_set_name": "RedPajamaArXiv" }
9,601
\section{Introduction}\label{introduction} \setcounter{equation}{0} \numberwithin{equation}{section} In the present paper, we consider the equation \begin{equation}\label{1.1} -(r(x)y'(x))'+q(x)y(x)=f(x),\quad x\in \mathbb R \end{equation} where $f\in L_p(\mathbb R),$\ $(L_p(\mathbb R):=L_p),$\ $p\in(1,\infty)$ and \begin{equation}\label{1.2} r>0, \ q\ge0, \ r^{-1}\in L_1^{\loc}(\mathbb R), \ q\in L_1^{\loc}(\mathbb R)\quad \left(r^{-1}:\equiv\frac{1}{r}\right), \end{equation} \begin{equation}\label{1.3} \lim_{|d|\to\infty}\int_{x-d}^x\frac{dt}{r(t)}\cdot\int_{x-d}^x q(t)dt=\infty,\quad x\in\mathbb R. \end{equation} Our general goal consists in finding criteria for compactness of the resolvent of equation \eqref{1.1}. To state the problem more precisely, we need the following definitions and restrictions. Here and in the sequel, by a solution of equation \eqref{1.1}, we mean any function $y$ absolutely continuous together with $ry'$ and satisfying \eqref{1.1} almost everywhere on $\mathbb R.$ We say that equation \eqref{1.1} is correctly solvable in a given space $L_p,$\ $p\in[1,\infty)$ if the following assertions hold (see \cite[Ch.III, \S6, no.2]{1}): \begin{enumerate} \item[I)] for every function $f\in L_p$, there exists a unique solution of \eqref{1.1}, $y\in L_p;$ \item[II)] there exists an absolute constant $c(p)\in(0,\infty)$ such that the solution of \eqref{1.1}, $y\in L_p,$ satisfies the inequality \begin{equation}\label{1.4} \|y\|_p\le c(p)\|f\|_p,\qquad \forall f\in L_p\quad (\|f\|_p:=\|f\|_{L_p}). \end{equation} \end{enumerate} See \cite{2} and \S2 below for precise conditions that guarantee I)--II). In the sequel, for brevity, this is referred to as ``problem I)--II)" or ``question on I)--II)". It is easy to see that the problem I)--II) can be reformulated in different terms (see \cite{2,3}). 
To this end, let us introduce the set $\mathcal D_p$ and the operator $\mathcal L_p:$ $$\mathcal D_p=\{y\in L_p:\ y,\,ry'\in \mathcal A C^{\loc}(\mathbb R),\quad -(ry')'+qy\in L_p\},$$ $$\mathcal L_py=-(ry')'+qy,\quad y\in\mathcal D_p.$$ (Here $\mathcal A C^{\loc}(\mathbb R)$ is the set of functions absolutely continuous on every finite segment.) The linear operator $\mathcal L_p$ is called the maximal Sturm-Liouville operator, and problem I)--II) is obviously equivalent to the problem on existence and boundedness of the operator $\mathcal L_p^{-1}: L_p\to L_p $ (see \cite{3}). We can now give a precise statement of the problem studied in the present paper: \textit{To find minimal additional requirements to \eqref{1.2} and \eqref{1.3} on the functions $r$ and $q$ under which, together with I)--II), the following condition III) also holds (``problem I)--III)" or ``question on I)--III)"):} III) \textit{for a given $p\in(1,\infty)$ the operator $\mathcal L_p^{-1}: L_p\to L_p$ is compact.} The main goal of the present paper is an answer to the question on I)--III). For the reader's convenience we outline the structure of the paper. In \S2 we collect the preliminaries necessary for the exposition; \S3 contains a list of all results of the paper together with comments; \S4 contains the proofs; in \S5 we present examples of applications of our results to a concrete equation; and, finally, \S6 contains the proofs of some technical assertions. \section{Preliminaries} \begin{thm} \cite{4} \label{thm2.1} Suppose that conditions \eqref{1.2} and \begin{equation}\label{2.1} \int_{-\infty}^x q(t)dt>0,\qquad \int_x^\infty q(t)dt>0,\qquad x\in\mathbb R \end{equation} hold. 
Then the equation \begin{equation}\label{2.2} (r(x)z'(x))'=q(x)z(x),\qquad x\in\mathbb R \end{equation} has a fundamental system of solutions (FSS) with the following properties: \begin{equation}\label{2.3} v(x)>0,\ u(x)>0,\quad v'(x)\ge0,\quad u'(x)\le0,\qquad x\in\mathbb R, \end{equation} \begin{equation}\label{2.4} r(x)[v'(x)u(x)-u'(x)v(x)]=1,\qquad x\in\mathbb R, \end{equation} \begin{equation}\label{2.5} \lim_{x\to-\infty}\frac{v(x)}{u(x)}=\lim_{x\to\infty}\frac{u(x)}{v(x)}=0, \end{equation} \begin{equation}\label{2.6} \int_{-\infty}^0\frac{dt}{r(t)u^2(t)}<\infty,\ \int_0^ \infty\frac{dt}{r(t)v^2(t)}<\infty,\ \int_{-\infty}^0\frac{dt}{r(t)v^2(t)}=\int_0^\infty\frac{dt}{r(t)u^2(t)}=\infty. \end{equation} \end{thm} Moreover, properties \eqref{2.3}--\eqref{2.6} determine the FSS $\{u,v\}$ uniquely up to constant mutually inverse factors. \begin{cor} \cite{4} \label{cor2.2} Suppose that conditions \eqref{1.2} and \eqref{2.1} hold. Then equation \eqref{2.2} has no solutions $z\in L_p$ apart from $z\equiv 0.$ \end{cor} The FSS from \thmref{thm2.1} is denoted below by $\{u,v\}$. 
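As a concrete illustration of Theorem \ref{thm2.1} (our own example and normalization, not taken from the paper): for $r\equiv 1$, $q\equiv 1$, equation \eqref{2.2} is $z''=z$, and the FSS can be taken as $u(x)=e^{-x}/\sqrt 2$, $v(x)=e^{x}/\sqrt 2$, which satisfies properties \eqref{2.3}--\eqref{2.6} with $\rho(x)=u(x)v(x)=1/2$. A quick numerical check of \eqref{2.3} and \eqref{2.4}:

```python
import numpy as np

xs = np.linspace(-5.0, 5.0, 201)
sq2 = np.sqrt(2.0)
u, up = np.exp(-xs) / sq2, -np.exp(-xs) / sq2   # u(x) and u'(x)
v, vp = np.exp(xs) / sq2, np.exp(xs) / sq2      # v(x) and v'(x)

# (2.3): v > 0, u > 0, v' >= 0, u' <= 0 on the grid
assert (v > 0).all() and (u > 0).all() and (vp >= 0).all() and (up <= 0).all()
# (2.4) with r = 1: the normalization v'u - u'v = 1
assert np.allclose(vp * u - up * v, 1.0)
# rho(x) = u(x) v(x) is identically 1/2 in this example
assert np.allclose(u * v, 0.5)
```

Here $\int_{-\infty}^0 dt/u^2(t)=\int_0^\infty dt/v^2(t)=1<\infty$, while the two divergent integrals in \eqref{2.6} are immediate, so this pair is indeed the (essentially unique) FSS of Theorem \ref{thm2.1}.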
\begin{thm} \cite{4,5} \label{thm2.3} For the FSS $\{u,v\}$ we have the Davies-Harrell representations \begin{equation}\label{2.7} u(x)=\sqrt{\rho(x)}\exp\left(-\frac{1}{2}\int_{x_0}^x\frac{d\xi}{r(\xi)\rho(\xi)}\right),\quad v(x)=\sqrt{\rho(x)}\exp\left(\frac{1}{2}\int_{x_0}^x\frac{d\xi}{r(\xi)\rho(\xi)}\right) \end{equation} where $x\in\mathbb R,$ $\rho(x)=u(x)v(x),$ $x_0$ is a unique solution of the equation $u(x)=v(x)$ in $\mathbb R.$ Furthermore, for the Green function $G(x,t)$ corresponding to equation \eqref{1.1}: \begin{equation}\label{2.8} G(x,t)=\begin{cases} u(x)v(t),\quad & x\ge t\\ u(t)v(x), \quad & x\le t\end{cases}\end{equation} and for its ``diagonal value" $G(x,t)\big|_{x=t}=\rho(x)$, we have the following representation \eqref{2.9} and equalities \eqref{2.10}: \begin{equation}\label{2.9} G(x,t)=\sqrt{\rho(x)\rho(t)}\exp\left(-\frac{1}{2}\left|\int_x^t\frac{d\xi}{r(\xi)\rho(\xi)}\right|\right),\quad x,t\in\mathbb R,\end{equation} \begin{equation}\label{2.10} \int_{-\infty}^0\frac{d\xi}{r(\xi)\rho(\xi)}=\int_0^\infty\frac{d\xi}{r(\xi)\rho(\xi)}=\infty.\end{equation} \end{thm} \begin{remark}\label{rem2.4} Representations \eqref{2.7} and \eqref{2.8} are given in \cite{5} for $r\equiv1$ and in \cite{4} for $r\not\equiv1.$ See \cite{4} for equalities \eqref{2.10}. Throughout the sequel conditions \eqref{1.2}--\eqref{1.3} are assumed to be satisfied (if not stated otherwise) without special mentioning. \end{remark} \begin{lem} \cite{4} \label{lem2.5} For every given $x\in\mathbb R$ each of the following equations \begin{equation}\label{2.11} \int_{x-d}^x\frac{dt}{r(t)}\cdot \int_{x-d}^xq(t)dt=1,\qquad \int_x^{x+d}\frac{dt}{r(t)}\cdot\int_x^{x+d}q(t)dt=1\end{equation} in $d\ge0$ has a unique finite positive solution. Denote them by $d_1(x)$ and $d_2(x),$ respectively. 
For $x\in\mathbb R$ we introduce the following functions: \begin{equation} \begin{aligned}\label{2.12} \varphi(x)&=\int_{x-d_1(x)}^x\frac{dt}{r(t)},\qquad \psi(x)=\int_x^{x+d_2(x)}\frac{dt}{r(t)},\\ h(x)&=\frac{\varphi(x)\psi(x)}{\varphi(x)+\psi(x)}\ \left(\equiv\left(\int_{x-d_1(x)}^{x+d_2(x)}q(t)dt\right)^{-1}\right). \end{aligned} \end{equation} \end{lem} \begin{thm} \cite{4} \label{thm2.6} For $x\in\mathbb R$ the following inequalities hold: \begin{equation} \begin{aligned}\label{2.13} 2^{-1}v(x)\le(r(x)v'(x))\varphi(x)\le 2v(x)\\ 2^{-1}u(x)\le(r(x)|u'(x)|)\psi(x)\le 2u(x) \end{aligned} \end{equation} \begin{equation}\label{2.14} 2^{-1}h(x)\le\rho(x)\le 2h(x).\end{equation} \end{thm} \begin{cor} \cite{4} \label{cor2.7} Let $r\equiv1.$ For every given $x\in\mathbb R$ consider the following equation: \begin{equation}\label{2.15} d\cdot\int_{x-d}^{x+d}q(t)dt=2 \end{equation} in $d\ge0.$ Equation \eqref{2.15} has a unique finite positive solution. Denote it by $\tilde d(x).$ We have the inequalities: \begin{equation}\label{2.16} 4^{-1}\cdot\tilde d(x)\le \rho(x)\le 3\cdot 2^{-1}\tilde d(x),\qquad x\in\mathbb R.\end{equation} \end{cor} \begin{remark}\label{rem2.8} Two-sided sharp by order a priori estimate of type \eqref{2.13} first appear in \cite{6} (for $r\equiv1$ and under some additional requirements to $q).$ Under conditions \eqref{1.2} and $\inf\limits_{x\in\mathbb R} q(x)>0,$ estimates similar to \eqref{2.13}, with other more complicated auxiliary functions, were given in \cite{7}. Sharp by order estimates of the function $\rho$ were first obtained in \cite{8} (under some additional requirements to $r$ and $q$). Therefore, we call inequalities of such type Otelbaev inequalities. Note that in \cite{8} auxiliary functions more complicated than $h$ and $\tilde d$ were used. The function $\tilde d$ was introduced by M. Otelbaev (see \cite{9}). 
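In concrete cases the function $\tilde d(x)$ of Corollary \ref{cor2.7} is easy to compute numerically: the left-hand side of \eqref{2.15} is nondecreasing in $d$ (since $q\ge0$), so monotone root-finding applies. A sketch (SciPy and the sample weights $q$ are our illustrative choices; for $q\equiv1$, \eqref{2.15} reads $2d^2=2$, so $\tilde d\equiv1$, while for $q(t)=t^2$ and $x=0$ it reads $2d^4/3=2$, so $\tilde d(0)=3^{1/4}$):

```python
from scipy.integrate import quad
from scipy.optimize import brentq

def d_tilde(x, q):
    """Unique d >= 0 with d * int_{x-d}^{x+d} q(t) dt = 2 (Eq. (2.15))."""
    F = lambda d: d * quad(q, x - d, x + d)[0] - 2.0
    hi = 1.0
    while F(hi) < 0:        # F is nondecreasing for q >= 0: double until bracketed
        hi *= 2.0
    return brentq(F, 0.0, hi)

print(d_tilde(0.0, lambda t: 1.0))      # 1.0
print(d_tilde(0.0, lambda t: t * t))    # 3**0.25, approximately 1.316
```

By \eqref{2.16}, such a computation immediately yields two-sided estimates of $\rho(x)$ in the case $r\equiv1$.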
Throughout the sequel we denote by $c, c(p),\dots$ absolute positive constants which are not essential for exposition and may differ even within a single chain of computations. We write $\alpha(x)\asymp\beta(x),$ $x\in (a,b)$ if positive functions $\alpha$ and $\beta$ defined in $(a,b)$ satisfy the inequalities $$c^{-1}\cdot\alpha(x)\le\beta(x)\le c\alpha(x),\qquad x\in (a,b).$$ \end{remark} \begin{lem} \cite{10} \label{lem2.9} For $x\in\mathbb R$ we have the inequality \begin{equation}\label{2.17} r(x)|\rho'(x)|<1. \end{equation} In addition, the inequality $m<1$ where \begin{equation}\label{2.18} m=\sup_{x\in\mathbb R}r(x)|\rho'(x)|\end{equation} holds if and only if $\varphi(x)\asymp \psi(x),$\ $x\in\mathbb R.$\end{lem} We also introduce a new auxiliary function $s$ and the function $d$ already known from \cite{4}. The properties of the functions are similar, and therefore for brevity we present them together. See \cite{4} for the proofs for $d,$ and \S6 below for the proofs for $s.$ \begin{lem} \cite [\S6 below]{4} \label{lem2.10} For every $x\in\mathbb R$ each of the equations \begin{equation}\label{2.19} \int_{x-d}^{x+d}\frac{dt}{r(t)h(t)}=1,\qquad \int_{x-s}^{x+s}\frac{dt}{r(t)\rho(t)}=1 \end{equation} in $d\ge0$ and $s\ge0$ has a unique finite positive solution. Denote the solutions of \eqref{2.19} by $d(x)$ and $s(x)$, respectively. 
The functions $d(x)$ and $s(x)$ are continuous for $x\in\mathbb R.$\end{lem} \begin{lem} \cite [\S6 below]{4} \label{lem2.11} For $x\in\mathbb R$, $t\in[x-\varepsilon d(x),x+\varepsilon d(x)]$ $(t\in[x-\varepsilon s(x),x+\varepsilon s(x)])$ and $\varepsilon\in[0,1]$, we have the inequalities: \begin{equation}\label{2.20} (1-\varepsilon)d(x)\le d(t)\le(1+\varepsilon)d(x),\end{equation} \begin{equation}\label{2.21} ((1-\varepsilon)s(x)\le s(t)\le(1+\varepsilon)s(x)).\end{equation} In addition, we have the equalities: \begin{equation}\label{2.22} \lim_{x\to-\infty}(x+d(x))=-\infty,\qquad \lim_{x\to\infty}(x-d(x))=\infty,\end{equation} \begin{equation}\label{2.23} \Big(\lim_{x\to-\infty}(x+s(x))=-\infty,\quad \lim_{x\to\infty}(x-s(x))=\infty\Big).\end{equation} \end{lem} \begin{defn} \cite{19} \label{defn2.12} Suppose we are given $x\in\mathbb R,$ a positive and continuous function $\varkappa(t)$ for $t\in\mathbb R$, a sequence $\{x_n\}_{n\in\mathbb N'},$ $\mathbb N'=\{\pm1,\pm2,\dots\}.$ Consider segments $\Delta_n=[\Delta_n^-,\Delta_n^+],$ $\Delta_n^{\pm}=x_n\pm\varkappa(x_n).$ We say that the segments $\{\Delta_n\}_{n=1}^\infty\left(\{\Delta_n\}_{n=-\infty}^{-1}\right)$ form an $\mathbb R(x,\varkappa)$-covering of $[x,\infty)$ $((-\infty,x])$ if the following requirements hold: \begin{enumerate}\item[1)] $\Delta_n^+=\Delta_{n+1}^-$ for $n\ge1$\quad $(\Delta_{n-1}^+=\Delta_n^-$ for $n\le-1)$, \item[2)] $\Delta_1^-=x$ $(\Delta_{-1}^+=x),$\quad $\bigcup\limits_{n\ge1}\Delta_n=[x,\infty)$\quad $\Big(\bigcup\limits_{n\le-1}\Delta_n=(-\infty,x]\Big).$ \end{enumerate} \end{defn} \begin{lem} \cite{19} \label{lem2.13} Suppose that for a positive and continuous function $\varkappa(t)$ for $t\in\mathbb R$, we have the relations \begin{equation}\label{2.24} \lim_{t\to\infty}(t-\varkappa(t))=\infty\quad \Big(\lim_{t\to-\infty}(t+\varkappa(t))=-\infty\Big). 
\end{equation} Then for every $x\in\mathbb R$ there is an $\mathbb R(x,\varkappa)$-covering of $[x,\infty)$ (an $\mathbb R(x,\varkappa)$-covering of $(-\infty,x]$). \end{lem} \begin{remark}\label{rem2.14} If for some $x\in\mathbb R$ there exist $\mathbb R(x,\varkappa)$-coverings of both $[x,\infty)$ and $(-\infty,x]$, then their union will be called an $\mathbb R(x,\varkappa)$-covering of $\mathbb R.$ \end{remark} \begin{lem} \cite [\S6 below]{4} \label{lem2.15} For every $x\in \mathbb R$ there exist $\mathbb R(x,d)$- and $\mathbb R(x,s)$-coverings of $\mathbb R.$\end{lem} \begin{remark}\label{rem2.16} Assertions of the type in \lemref{lem2.15} and estimates of the form \eqref{2.20} were introduced by Otelbaev (see \cite{9}).\end{remark} \begin{lem} \cite [\S6 below]{4} \label{lem2.17} Let $x\in\mathbb R,$ $t\in[x-d(x),x+d(x)]$ $(t\in[x-s(x),x+s(x)])$. Then the following inequalities hold: \begin{equation}\label{2.25} \alpha^{-1}v(x)\le v(t)\le\alpha v(x), \qquad \alpha^{-1}u(x)\le u(t)\le\alpha u(x), \end{equation} \begin{equation}\label{2.26} \alpha^{-1}\rho(x)\le \rho(t)\le\alpha \rho(x), \qquad (4\alpha)^{-1}h(x)\le h(t)\le4\alpha h(x). \end{equation} \begin{equation}\label{2.27} \left(\begin{array}{cc} c^{-1}v(x)\le v(t)\le cv(x),\quad c^{-1}u(x)\le u(t)\le cu(x) \\ \\ c^{-1}\rho(x)\le\rho(t)\le c\rho(x) \end{array}\right). \end{equation} Here $\alpha=\exp(2).$ \end{lem} \begin{thm} \cite{2} \label{thm2.18} Suppose that conditions \eqref{1.2} and \eqref{2.1} hold and $p\in(1,\infty).$ Then equation \eqref{1.1} is correctly solvable in $L_p$ if and only if the Green operator $G:L_p\to L_p$ is bounded. In the latter case, for every function $f\in L_p$ the solution $y\in L_p$ of \eqref{1.1} is of the form $y=Gf.$ In particular, $\mathcal L_p^{-1}=G.$ Here (see \eqref{2.8}): \begin{equation} \label{2.29} (Gf)(x)\overset{\text{def}}{=}\int_{-\infty}^\infty G(x,t)f(t)dt,\quad x\in\mathbb R,\quad f\in L_p. 
\end{equation} \end{thm} \begin{remark}\label{rem2.19} If $r^{-1}\notin L_1(-\infty,0)$ and $r^{-1}\notin L_1(0,\infty)$, then condition \eqref{2.1} and, a fortiori, \eqref{1.3} are necessary for correct solvability of equation \eqref{1.1} in $L_p,$ $p\in(1,\infty)$ (see \cite{2}). \end{remark} \begin{lem} \cite{2} \label{lem2.20} Suppose that conditions \eqref{1.2} and \eqref{2.1} hold and $p\in(1,\infty).$ Consider the integral operators \begin{equation}\label{2.30} (G_1f)(x)=u(x)\int_{-\infty}^x v(t)f(t)dt,\qquad x\in\mathbb R, \end{equation} \begin{equation}\label{2.31} (G_2f)(x)=v(x)\int_x^\infty u(t)f(t)dt,\qquad x\in\mathbb R. \end{equation} We have the relations \begin{equation}\label{2.32} G=G_1+G_2,\end{equation} \begin{equation}\label{2.33} \frac{\|G_1\|_{p\to p}+\|G_2\|_{p\to p}}{2}\le\|G\|_{p\to p}\le \|G_1\|_{p\to p}+\|G_2\|_{p\to p}.\end{equation} \end{lem} \begin{thm} \cite{2} \label{thm2.21} Equation \eqref{1.1} is correctly solvable in $L_p$, $p\in(1,\infty)$ if and only if $B<\infty.$ Here \begin{equation}\label{2.34} B\overset{\text{def}}{=}\sup_{x\in\mathbb R} h(x)d(x). \end{equation} Moreover, the following relations hold: \begin{equation}\label{2.35} \|G\|_{p\to p}\asymp\|G_1\|_{p\to p}\asymp\|G_2\|_{p\to p}\asymp B.\end{equation} \end{thm} \begin{thm}\label{thm2.22} Let \eqref{1.2} and \eqref{2.1} be satisfied. Then equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty)$ if and only if $S<\infty.$ Here \begin{equation}\label{2.36} S\overset{\text{def}}{=} \sup_{x\in\mathbb R}(\rho(x)s(x)). \end{equation} \end{thm} \begin{remark}\label{rem2.23} Theorems \ref{thm2.21} and \ref{thm2.22} are proved in the same way because the properties of the functions $d$ and $s,$\ $\rho$ and $h$ are quite analogous (see above). Moreover, the proof of \thmref{thm2.22} is even simpler compared to \thmref{thm2.21} because there is no need to apply estimates \eqref{2.14}. 
In particular, for this reason, in \thmref{thm2.22} instead of condition \eqref{1.3} of \thmref{thm2.21} there appears a weaker condition \eqref{2.1}. Thus, since the proof of \thmref{thm2.22} is reduced to the repetition of the argument from \cite{2}, we do not present it here. \end{remark} \begin{thm} \cite{2, 12} \label{thm2.24} Suppose that the conditions \eqref{1.2} and $r\equiv 1$ hold. Then equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in[1,\infty)$ if and only if there exists $a>0$ such that $m(a)>0$. Here $$m(a)=\inf_{x\in\mathbb R}\int_{x-a}^{x+a}q(t)dt.$$ \end{thm} \begin{thm} \cite{2, 4} \label{thm2.25} For every $p\in(1,\infty)$ equation \eqref{1.1} is correctly solvable in $L_p$ if $\mathcal A>0.$ Here \begin{equation}\label{2.37} \mathcal A= \inf_{x\in\mathbb R}\mathcal A(x),\qquad \mathcal A(x)= \frac{1}{2d(x)}\int_{x-d(x)}^{x+d(x)}q(t)dt.\end{equation} \end{thm} \begin{remark}\label{rem2.26} In contrast to the condition $B<\infty,$ the meaning of the requirement $\mathcal A>0$ is quite obvious: some special Steklov average of the function $q$ must be separated from zero uniformly on the whole axis (see \cite{4}). Moreover, the requirement $\mathcal A>0$ can be viewed as a weakening of the simplest condition $\inf\limits_{x\in\mathbb R}q(x)>0$ guaranteeing correct solvability of \eqref{1.1} in $L_p$, $p\in[1,\infty)$ (see \cite{4,7}). We continue this comment in the next assertion (\thmref{thm2.28}) by defining a meaningful class of equations \eqref{1.1} (see \cite{10}) in which the requirement $B<\infty$ is equivalent to a condition of the form $\mathcal A>0.$ Towards this end, we need a new auxiliary function. \end{remark} \begin{lem} \cite{10, 13} \label{lem2.27} Let $\varphi(x)\asymp \psi(x),$ $x\in\mathbb R.$ For a given $x\in\mathbb R$ consider the equation in $\mu\ge0:$ \begin{equation}\label{2.38} \int_{x-\mu}^{x+\mu}q(t)h(t)dt=1. \end{equation} Equation \eqref{2.38} has at least one positive finite solution. 
Let \begin{equation}\label{2.39} \mu(x)=\inf_{\mu\ge0}\left\{\mu:\int_{x-\mu}^{x+\mu}q(t)h(t)dt=1\right\}. \end{equation} The function $\mu(x)$ is continuous for $x\in\mathbb R$, and, in addition, \begin{equation}\label{2.40} \lim_{x\to-\infty}(x+\mu(x))=-\infty, \qquad \lim_{x\to\infty}(x-\mu(x))=\infty.\end{equation} \end{lem} \begin{thm} \cite{10} \label{thm2.28} Let $\varphi(x)\asymp \psi(x),$ $x\in\mathbb R.$ Then $B<\infty$ if and only if $\tilde{\mathcal A}>0.$ Here \begin{equation}\label{2.41} \tilde{\mathcal A}=\inf_{x\in\mathbb R}\tilde{\mathcal A}(x),\qquad \tilde{\mathcal A}(x)=\frac{1}{2\mu(x)} \int_{x-\mu(x)}^{x+\mu(x)}q(t)dt.\end{equation} \end{thm} \begin{remark}\label{rem2.29} To apply \thmref{thm2.21} to concrete equations, one has to know the auxiliary functions $h$ and $d.$ Usually it is not possible to express these functions through the original coefficients $r$ and $q$ of equation \eqref{1.1}. However, it is easy to see that when studying the value of $B,$ one can replace in an equivalent way the functions $h$ and $d$ with their sharp by order two-sided estimates. In most cases, such inequalities can be obtained using standard tools of local analysis (see, e.g., \cite{4} and a detailed exposition in \cite{10}; one example of obtaining such estimates is given in \S6 below). It is clear that in concrete cases of the question on I)--II), it is particularly convenient to use criteria which either do not use the functions $h$ and $d$ at all, or use, say, only the function $h.$ Such assertions are contained in the following theorem. \end{remark} \begin{thm} \cite{2} \label{thm2.30} Suppose that conditions \eqref{1.2}--\eqref{1.3} hold. 
Then we have the following assertions: {\rm A)} Equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty)$ if any of the following conditions holds: \begin{alignat}{4} & 1)\quad && B_1<\infty, \quad && B_1=\sup_{x\in\mathbb R}B_1(x), \quad && B_1(x)=r(x)h^2(x),\qquad\qquad\qquad\qquad\label{2.42}\\ & 2)\quad && B_2<\infty, \quad && B_2=\sup_{x\in\mathbb R}B_2(x), \quad && B_2(x)=r(x)\varphi(x)\psi(x),\qquad\qquad\qquad\qquad\label{2.43}\\ & 3)\quad && B_3<\infty, \quad && B_3=\sup_{x\in\mathbb R}B_3(x), \quad && B_3(x)= h(x)\cdot|x|,\qquad\qquad\qquad\qquad\label{2.44}\end{alignat} {\rm B)} Suppose that in addition to \eqref{1.2} and \eqref{1.3} the following conditions hold: \begin{equation}\label{2.45} r^{-1}\in L_1,\qquad q\notin L_1(-\infty,0),\qquad q\notin L_1(0,\infty).\end{equation} Then equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty)$ if $\theta<\infty.$ Here $\theta=\sup_{x\in\mathbb R}\theta(x),$ \begin{equation}\label{2.46} \theta(x)=|x|\left(\int_{-\infty}^x\frac{dt}{r(t)}\right)\cdot \left(\int_x^\infty \frac{dt}{r(t)}\right). \end{equation} \end{thm} We also need the following known facts. 
\begin{thm} \cite[Ch.IV, \S8, Theorem 20]{14} \label{thm2.31} Let $p\in(1,\infty).$ The set $\mathcal K\subset L_p$ is precompact if and only if the following conditions hold: \begin{alignat}{2} & 1)\quad && \sup_{f\in\mathcal K}\|f\|_p<\infty,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\label{2.47}\\ & 2)\quad && \lim_{\delta\to0} \sup_{f\in\mathcal K}\sup_{|t|<\delta}\|f(\cdot+t)-f(\cdot)\|_p=0 ,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\label{2.48}\\ & 3)\quad && \lim_{N\to\infty} \sup_{f\in\mathcal K}\int_{|x|\ge N}|f(x)|^pdx=0.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\label{2.49}\end{alignat} \end{thm} Let $\mu,\theta$ be almost everywhere finite measurable positive functions defined in the interval $(a,b),-\infty\le a<b\le\infty.$ We introduce the integral operators \begin{equation}\label{2.50} (Kf)(x)=\mu(x)\int_x^b\theta(t)f(t)dt,\qquad x\in(a,b), \end{equation} \begin{equation}\label{2.51} (\tilde Kf)(x)=\mu(x)\int_a^x\theta(t)f(t)dt,\qquad x\in(a,b). \end{equation} \begin{thm} \cite{15} \cite[Ch.1, \S1.3]{16} \label{thm2.32} For $p\in(1,\infty)$ the operator $K:L_p(a,b)\to L_p(a,b)$ is bounded if and only if $H_p(a,b)<\infty.$ Here $H_p(a,b)=\sup_{x\in(a,b)}H_p(x,a,b),$ \begin{equation}\label{2.52} H_p(x,a,b)=\left[\int_a^x\mu(t)^pdt\right]^{1/p}\cdot\left[\int_x^b\theta(t)^{p'}dt\right]^{1/p'},\quad p'=\frac{p}{p-1}.\end{equation} \end{thm} In addition, the following inequalities hold: \begin{equation}\label{2.53} H_p(a,b)\le\|K\|_{L_p(a,b)\to L_p(a,b)}\le(p)^{1/p}(p')^{1/p'}H_p(a,b). \end{equation} \begin{thm} \cite{15} \cite[Ch.1, \S1.3]{16} \label{thm2.33} For $p\in(1,\infty)$ the operator $\tilde K:L_p(a,b)\to L_p(a,b)$ is bounded if and only if $\tilde H_p(a,b)<\infty.$ Here $\tilde H_p(a,b)=\sup_{x\in(a,b)}\tilde H_p(x,a,b),$ and \begin{equation}\label{2.54} \tilde H_p(x,a,b)=\left[\int_a^x\theta(t)^{p'}dt\right]^{1/p'}\cdot\left[\int_x^b\mu(t)^{p}dt\right]^{1/p},\quad p'=\frac{p}{p-1}.\end{equation} \end{thm} In addition, the following inequalities
hold: \begin{equation}\label{2.55} \tilde H_p(a,b)\le\|\tilde K\|_{L_p(a,b)\to L_p(a,b)}\le(p)^{1/p}(p')^{1/p'}\tilde H_p(a,b). \end{equation} Note that some assertions (mainly of a technical nature) will be given in \S4--\S5 in the course of the exposition. \section{Main Results} Recall that, if conditions I)--II) hold, then $\mathcal L_p^{-1}=G,$ $p\in(1,\infty)$ (see \thmref{thm2.18}). Therefore, in the sequel in the statements of the theorems, we write the operator $G$ instead of the operator $\mathcal L_p^{-1}.$ Our main result is the following theorem. \begin{thm}\label{thm3.1} Let $p\in(1,\infty)$, and suppose that equation \eqref{1.1} is correctly solvable in $L_p.$ Then the operator $G:L_p\to L_p$ is compact if and only if \begin{equation}\label{3.1} \lim_{|x|\to\infty}h(x)d(x)=0. \end{equation} \end{thm} \begin{thm}\label{thm3.2} Suppose that conditions \eqref{1.2} and \eqref{2.1} hold, $p\in(1,\infty),$ and equation \eqref{1.1} is correctly solvable in $L_p.$ Then the operator $G: L_p\to L_p$ is compact if and only if \begin{equation}\label{3.2} \lim_{|x|\to\infty} \rho(x)s(x)=0.\end{equation} \end{thm} \begin{remark}\label{rem3.3} Theorems \ref{thm3.1} and \ref{thm3.2} are related to one another in the same way as Theorems \ref{thm2.21} and \ref{thm2.22} (see Remark \ref{rem2.23}). Therefore, we do not present a proof of \thmref{thm3.2}. \end{remark} \begin{thm}\label{thm3.4} Suppose that condition \eqref{3.1} holds. Then the operator $G: L_2\to L_2$ is compact, self-adjoint, and positive. Its maximal eigenvalue $\lambda$ satisfies the estimates (see \eqref{2.34}): \begin{equation}\label{3.3} c^{-1}B\le \lambda\le cB.
\end{equation} \end{thm} \begin{remark}\label{rem3.5} Theorems \ref{thm3.1} and \ref{thm3.4} were obtained in \cite{13} under an additional requirement $\varphi(x)\asymp\psi(x),$ $x\in\mathbb R.$ The meaning of condition \eqref{3.1} can be clarified ``in terms of the coefficients'' of equation \eqref{1.1} in the same way as is done in Remark \ref{rem2.26} for the interpretation of the condition $B<\infty.$ In particular, in order to expand on \thmref{thm3.1}, we state the following theorem. \end{remark} \begin{thm}\label{thm3.6} \cite{13} Let $\varphi(x)\asymp\psi(x),$\ $x\in\mathbb R,$ $\ p\in(1,\infty),$ and suppose that equation \eqref{1.1} is correctly solvable in $L_p.$ Then the operator $G: L_p\to L_p$ is compact if and only if \begin{equation}\label{3.4} \lim_{|x|\to\infty}\tilde{\mathcal A}(x)=\infty \end{equation} (see \eqref{2.41}).\end{thm} Thus, if $\varphi(x)\asymp\psi(x),$ $x\in\mathbb R$, requirement \eqref{3.1} means that some special Steklov average of the function $q$ must tend to infinity at infinity. We can now present several consequences of \thmref{thm3.1}. Their significance consists in the fact that they allow us to answer questions I)--III) either without using the functions $h$ and $d$ at all, or with the help of only $h$ (see Remark \ref{rem2.29}). \begin{cor}\label{cor3.7} Let $p\in(1,\infty)$ and $\mathcal A>0$ (see \eqref{2.37}). Then the operator $G: L_p\to L_p$ is compact if $\mathcal A(x)\to\infty$ as $|x|\to\infty.$ \end{cor} \begin{cor}\label{cor3.8} Let $p\in(1,\infty)$ and $q(x)\to\infty$ as $|x|\to\infty.$ Then the operator $G:L_p\to L_p$ is compact. \end{cor} \begin{cor}\label{cor3.9} \cite{17} Suppose that conditions \eqref{1.2} hold, $r(x)\equiv1,$ $x\in\mathbb R,$ and $m(a_0)>0$ for some $a_0\in(0,\infty)$ (see \thmref{thm2.24}).
Then the operator $G:L_p\to L_p$ is compact if and only if the Molchanov condition (see \cite{2}) holds: \begin{equation}\label{3.5} \lim_{|x|\to\infty}\int_{x-a}^{x+a}q(t)dt=\infty,\qquad \forall a\in(0,\infty). \end{equation} \end{cor} \begin{cor}\label{cor3.10} Let $p\in(1,\infty)$. Then assertions I)--III) hold if and only if any of the following conditions is satisfied: \begin{alignat}{2} &1)\ && B_1<\infty \ \text{(see \eqref{2.42})},\ r(x)h^2(x)\to 0 \ \text{as}\ |x|\to\infty;\label{3.6}\qquad \qquad\qquad\qquad\\ &2)\ && B_2<\infty \ \text{(see \eqref{2.43})},\ r(x)\varphi(x)\psi(x)\to 0 \ \text{as}\ |x|\to\infty;\label{3.7}\\ &3)\ && B_3<\infty \ \text{(see \eqref{2.44})},\ h(x)\cdot|x|\to 0 \ \text{as}\ |x|\to\infty.\label{3.8} \end{alignat} \end{cor} \begin{cor}\label{cor3.11} Denote \begin{equation}\label{3.9} r_0=\sup_{x\in\mathbb R} r(x),\qquad h_0=\sup_{x\in\mathbb R} h(x). \end{equation} Let $r_0<\infty.$ Then \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty)$ if $h_0<\infty.$ In addition, the operator $G: L_p\to L_p,$ $p\in(1,\infty)$ is compact if $h(x)\to 0$ as $|x|\to\infty.$ \end{cor} \begin{remark}\label{rem3.12} Note that the requirement \begin{equation}\label{3.10} q(x)\to \infty\qquad\text{as}\qquad |x|\to\infty \end{equation} is so strong that the answer to questions I)--III) does not depend on the behaviour (within the framework of \eqref{1.2}) of the function $r.$ In this connection, let us consider the opposite situation and find out under which requirements on the function $r$ the answer to questions I)--III) is positive regardless of the behaviour (within a certain framework) of the function $q$. See Theorems \ref{thm3.13} and \ref{thm3.14} below for possible answers to these questions. We emphasize that these assertions have been obtained from Theorems \ref{thm2.22} and \ref{thm3.2}, where \eqref{1.3} is not used. Therefore, in Theorems \ref{thm3.13} and \ref{thm3.14} the requirements on the function $q$ are weakened to conditions \eqref{1.2} and \eqref{2.1}.
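To illustrate this remark, consider the simplest case $r\equiv1$ (the following elementary example is given here only for orientation and is not taken from the cited works). For $q(x)=1+x^2$ condition \eqref{3.10} holds and, for every $a\in(0,\infty),$
\begin{equation*}
\int_{x-a}^{x+a}q(t)dt=2a\bigl(1+x^2\bigr)+\frac{2a^3}{3}\to\infty\quad\text{as}\quad|x|\to\infty,
\end{equation*}
so that the Molchanov condition \eqref{3.5} is satisfied and, by Corollary \ref{cor3.9}, the operator $G:L_p\to L_p$ is compact. In contrast, for $q\equiv1$ we have
\begin{equation*}
(Gf)(x)=\frac{1}{2}\int_{-\infty}^{\infty}e^{-|x-t|}f(t)dt,
\end{equation*}
a nonzero convolution operator; it is bounded in $L_p$ but, being translation invariant, it is not compact, in accordance with the fact that \eqref{3.5} fails: $\int_{x-a}^{x+a}q(t)dt=2a$ for all $x\in\mathbb R.$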
\end{remark} \begin{thm}\label{thm3.13} Suppose that together with \eqref{1.2} condition \eqref{2.1} holds and $\theta<\infty $ (see \thmref{thm2.30}). Then equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty).$ In addition, the operator $G: L_p\to L_p$, $p\in(1,\infty)$ is compact if $\theta(x)\to0$ as $|x|\to\infty$ (see \eqref{2.46}). \end{thm} \begin{thm}\label{thm3.14} Suppose that conditions \eqref{1.2}, \eqref{2.1} hold and $\nu<\infty $. Here $\nu=\sup\limits_{x\in\mathbb R}\nu(x),$ \begin{equation}\label{3.11} \nu(x)=r(x)\left(\int_{-\infty}^x\frac{dt}{r(t)}\right)^2\cdot \left(\int_x^\infty\frac{dt}{r(t)}\right)^2,\quad x\in\mathbb R.\end{equation} Then equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty).$ If, in addition, $\nu(x)\to0$ as $|x|\to\infty,$ then the operator $G:L_p\to L_p,$ $p\in(1,\infty)$ is compact. \end{thm} \section{Proofs} \renewcommand{\qedsymbol}{} \begin{proof}[Proof of \thmref{thm3.1}] Necessity. Let us check \eqref{3.1} as $x \to\infty.$ (The case $x\to-\infty$ is treated in a similar way.) 
Let $\{\Delta_n\}_{n\in\mathbb N'}$ be an $\mathbb R(0,d)$-covering of $\mathbb R,$\ $F=\{f_n(t)\}_{n\in\mathbb N'}$ and $$f_n(t)=\begin{cases} d(x_n)^{-1/p}&\quad\text{if}\ t\in\Delta_n\\ 0&\quad\text{if}\ t\notin\Delta_n \end{cases}\qquad n\in \mathbb N'.$$ Then $\|f_n\|_p^p=2,$ $n\in\mathbb N'$ and the set $\{Gf_n\}_{n\in\mathbb N'}$ is precompact in $L_p.$ Let $x\in\Delta_n,$ $n\in\mathbb N'.$ In the following relations we apply \eqref{2.14} and \eqref{2.25}--\eqref{2.26}: \begin{align} (Gf_n)(x)&=u(x)\int_{\Delta_n^-}^x v(t)f_n(t)dt+v(x)\int_x^{\Delta_n^+}u(t)f_n(t)dt\nonumber\\ &=\frac{u(x)}{u(x_n)}\rho(x_n)\int_{\Delta_n^-}^x\frac{v(t)}{v(x_n)}\cdot\frac{dt}{d(x_n)^{1/p}} +\frac{v(x)}{v(x_n)}\cdot\rho(x_n)\int_x^{\Delta_n^+}\frac{u(t)}{u(x_n)}\frac{dt}{d(x_n)^{1/p}} \nonumber\\ &\ge c^{-1}\rho(x_n)d(x_n)^{1/p'}\ge c^{-1}h(x_n)d(x_n)^{1/p'},\quad n\in \mathbb N'.\label{4.1} \end{align} By \thmref{thm2.31}, for a given $\varepsilon>0$ there exists $N(\varepsilon)\gg1$ such that $$\sup_{f_k\in F}\int_{|t|\ge N(\varepsilon)}|(Gf_k)(t)|^pdt\le\varepsilon.$$ From the properties of an $\mathbb R(0,d)$-covering of $\mathbb R,$ it follows that there exists $n_0=n_0(\varepsilon)\in\mathbb N=\{1,2,3,\dots\}$ such that $N(\varepsilon)\in\Delta_{n_0}.$ Set $n_1=n_1(\varepsilon)=n_0(\varepsilon)+1.$ Since $N(\varepsilon)\le\Delta_{n_1}^-,$ we have \begin{equation}\label{4.2} \sup_{f_k\in F}\int_{\Delta_{n_1}^-}^\infty|(Gf_k)(t)|^pdt\le\varepsilon.\end{equation} Let $k\ge n_1(\varepsilon)$. Then from \eqref{4.1}--\eqref{4.2}, it follows that $$\varepsilon\ge\int_{\Delta_{n_1}^-}^\infty|(Gf_k)(t)|^pdt\ge\int_{\Delta_k^-}^{\Delta_k^+} |(Gf_k)(t)|^pdt\ge c^{-1}(h(x_k)d(x_k))^p.$$ Therefore, $\lim\limits_{k\to\infty}(h(x_k)d(x_k))=0.$ From the inequalities $$0< h(x)d(x)\le ch(x_n)d(x_n),\qquad x\in\Delta_n,\quad n\in\mathbb N',$$ which follow from \eqref{2.26} and \eqref{2.20} (for $\varepsilon=0$), we now get \eqref{3.1}.
\end{proof} \renewcommand{\qedsymbol}{\openbox} \begin{proof}[Proof of \thmref{thm3.1}] Sufficiency. Assume that the hypotheses of the theorem are satisfied. Then by \thmref{thm2.18} the operator $G: L_p\to L_p$ is bounded, and by \lemref{lem2.20} so are the operators $G_1: L_p\to L_p$ and $G_2: L_p\to L_p.$ Clearly, if $G_1$ and $G_2$ are compact, then so is $G$ (see \eqref{2.32}). Furthermore, compactness of $G_1$ and $G_2$ is checked in the same way, and therefore below we only consider $G_2.$ Let $F=\{f\in L_p:\|f\|_p\le 1\}.$ Compactness of $G_2: L_p\to L_p$ will be established as soon as we check that the set $W=\{g\in L_p:g=G_2f,\ f\in F\}$ is precompact in $L_p.$ Below we show that the set $W$ satisfies conditions 1), 2), and 3) of \thmref{thm2.31}, thus proving \thmref{thm3.1}. \textit{Verification of condition 1)}. The above arguments (together with the definition of the set $F$ and Theorems \ref{thm2.18} and \ref{thm2.21}) imply condition 1): \begin{align*} \sup_{g\in W}\|g\|_p&=\sup_{f\in F}\|G_2f\|_p\le \|G_2\|_{p\to p}\cdot\sup_{f\in F}\|f\|_p\\ &\le \|G_2\|_{p\to p}\le cB<\infty\quad \Rightarrow 1).\end{align*} \textit{Verification of condition 3)}. We need some auxiliary assertions. \begin{lem}\label{lem4.1} \cite{2} Let $x\in\mathbb R$ and let $\{\Delta_n\}_{n\in\mathbb N'}$ be an $\mathbb R(x,d)$-covering of $\mathbb R.$ Then \begin{equation} \begin{aligned}\label{4.3} \int_{\Delta_n^+}^{\Delta_{-1}^+}\frac{d\xi}{r(\xi)h(\xi)}&=|n|-1,\quad \text{if}\quad n\le -1\\ \int_{\Delta_1^-}^{\Delta_n^-}\frac{d\xi}{r(\xi)h(\xi)}&=n-1,\quad\ \ \text{if}\quad n\ge 1.\end{aligned} \end{equation} \end{lem} \begin{lem}\label{lem4.2} Let $B<\infty$ (see \eqref{2.34}), $x\in\mathbb R,$ $p\in(1,\infty)$ and \begin{equation}\label{4.4} \theta_p(x)=\left[\int_{-\infty}^x v(t)^pdt\right]^{1/p}\cdot\left[\int_x^\infty u(\xi)^{p'}d\xi\right]^{1/p'},\quad p'=\frac{p}{p-1}.
\end{equation} Then we have the inequalities \begin{equation}\label{4.5} c^{-1}h(x)d(x)\le\theta_p(x)\le\begin{cases} B^{1/p}\sup\limits_{t\ge x}(h(t)d(t))^{1/p'}, \quad \text{if}\ x\ge 0\\ B^{1/p'}\sup\limits_{t\le x}(h(t)d(t))^{1/p}, \quad \text{if}\ x\le 0\end{cases}\end{equation} \end{lem} \begin{proof} Let $p\in(1,2],$ $\gamma\in(0,1]$ (the number $\gamma$ will be chosen later). Now we apply Theorems \ref{thm2.1} and \ref{thm2.3}: \begin{align} \theta_p(x)&\le\left[\int_{-\infty}^x v(t)^pdt\right]^{1/p}\cdot u(x)^\gamma\cdot\left[\int_x^\infty u(\xi)^{(1-\gamma)p'}d\xi\right]^{1/p'}\nonumber\\ &\le\left[\int_{-\infty}^x\rho(t)^{\gamma p}\cdot v(t)^{(1-\gamma)p}dt\right]^{1/p}\cdot\left[\int_x^\infty u(\xi)^{(1-\gamma)p'}d\xi\right]^{1/p'}\nonumber\\ &=\left[\int_{-\infty}^x\rho(t)^{\frac{1+\gamma}{2}p}\exp\left(-\frac{1-\gamma}{2}p\int_t^x \frac{ds}{r(s)\rho(s)}\right)\cdot\exp\left(\frac{1-\gamma}{2}p\int_{x_0}^x\frac{ds}{r(s)\rho(s)} \right)dt\right]^{1/p}\nonumber\\ &\quad \cdot \left[\int_x^\infty\rho(\xi)^{\frac{1-\gamma}{2}p'}\exp\left(-\frac{1-\gamma}{2}p' \int_x^\xi\frac{ds}{r(s)\rho(s)}\right)\exp\left(-\frac{1-\gamma}{2}p'\int_{x_0}^x\frac{ds} {r(s)\rho(s)}\right)d\xi\right]^{1/p'}\nonumber\\ &=\left[\int_{-\infty}^x\rho(t)^{\frac{1+\gamma}{2}p}\exp\left(-\frac{1-\gamma}{2}p\int_t^x \frac{ds}{r(s)\rho(s)}\right)dt\right]^{1/p}\nonumber\\ &\quad\cdot\left[\int_x^\infty\rho(\xi) ^{\frac{1-\gamma}{2}p'}\exp \left(-\frac{1-\gamma}{2}p'\int_x^\xi\frac{ds}{r(s)\rho(s)}\right)d\xi\right]^{1/p'}.\label{4.6} \end{align} Let $\gamma_1$ be the solution of the equation $$\frac{1+\gamma}{2}p=\frac{1-\gamma}{2}p'\quad\Rightarrow\quad\gamma:=\gamma_1=\frac{p'-p}{p'+p}.$$ For $\gamma=\gamma_1$ inequality \eqref{4.6} takes the form \begin{align} \theta_p(x)&\le\left[\int_{-\infty}^x\rho(t)\exp\left(-(p-1)\int_t^x\frac{ds}{r(s)\rho(s)} \right) dt\right]^{1/p}\nonumber\\ &\quad\cdot \left[\int_x^\infty\rho(\xi)\exp 
\left(-\int_x^\xi\frac{ds}{r(s)\rho(s)}\right)d\xi\right] ^{1/p'}:=(J_1(x))^{1/p}\cdot(J_2(x))^{1/p'}.\label{4.7} \end{align} Let us estimate $J_1(x)$ and $J_2(x)$. We only consider the case $x\ge0$ because the case $x\le0$ is treated in a similar way. Below we use the properties of an $\mathbb R(x,d)$-covering of $\mathbb R,$ inequalities \eqref{2.26} and \eqref{2.14}, and equalities \eqref{4.3}: \begin{align} J_1(x)&=\int_{-\infty}^x\rho(t)\exp\left(-(p-1)\int_t^x\frac{ds}{r(s)\rho(s)}\right)dt\nonumber\\ &=\sum_{n=-\infty}^{-1}\int_{\Delta_n}\rho(t)\exp\left(-(p-1)\int_t^x\frac{ds}{r(s)\rho(s)}\right)dt\nonumber\\ &\le c\sum_{n=-\infty}^{-1} h(x_n)d(x_n)\exp\left(-\frac{p-1}{2}\int_{\Delta_n^+}^{\Delta_{-1}^+} \frac{ds}{r(s)h(s)}\right)\nonumber\\ &\le cB\sum_{n=-\infty}^{-1}\exp\left(-\frac{p-1}{2}(|n|-1)\right)=cB,\label{4.8} \end{align} \begin{align} J_2(x)&=\int_x^\infty\rho(\xi)\exp\left(-\int_x^\xi\frac{ds}{r(s)\rho(s)}\right)d\xi =\sum_{n=1}^\infty\int_{\Delta_n}\rho(\xi)\exp\left(-\int_x^\xi\frac{ds}{r(s)\rho(s)}\right)d\xi\nonumber\\ &\le c\sum_{n=1}^\infty h(x_n)d(x_n)\exp\left(-\frac{1}{2}\int_{\Delta_1^-}^{\Delta_n^-}\frac{ds}{r(s)h(s)}\right)\nonumber\\ &\le c\sup_{t\ge x}(h(t)d(t))\sum_{n=1}^\infty\exp\left(-\frac{n-1}{2}\right)=c \sup_{t\ge x}(h(t)d(t)).\label{4.9} \end{align} Thus, for $p\in(1,2]$, the upper estimate in \eqref{4.5} follows from \eqref{4.8}--\eqref{4.9}. Let $p\in(2,\infty),$ $\gamma\in(0,1]$ (the number $\gamma$ will be chosen later).
Now we apply Theorems \ref{thm2.1} and \ref{thm2.3}: \begin{align} \theta_p(x)&=\left[\int_{-\infty}^x v(t)^pdt\right]^{1/p}\cdot\left[\int_x^\infty u(\xi)^{p'}d\xi\right]^{1/p'}\nonumber\\ &\le\left[\int_{-\infty}^x v(t)^{(1-\gamma)p}dt\right]^{1/p}\cdot v(x)^\gamma\cdot\left[\int_x^\infty u(\xi)^{p'}d\xi\right]^{1/p'}\nonumber\\ &\le\left[\int_{-\infty}^x v(t)^{(1-\gamma)p}dt\right]^{1/p}\cdot\left[\int_x^\infty\rho(\xi)^{\gamma p'}\cdot u(\xi)^{(1-\gamma)p'}d\xi\right]^{1/p'}\nonumber\\ &=\left[\int_{-\infty}^x\rho(t)^{\frac{1-\gamma}{2}p}\cdot\exp\left(-\frac{1-\gamma}{2}p\int_t^x \frac{ds}{r(s)\rho(s)}\right)\cdot\exp\left(\frac{1-\gamma}{2}p\int_{x_0}^x\frac{ds}{r(s)\rho(s)}\right) dt\right]^{1/p}\nonumber\\ &\quad \cdot\left[\int_x^\infty\rho(\xi)^{\frac{1+\gamma}{2}p'}\cdot\exp\left(-\frac{1-\gamma}{2}p'\int_x^\xi \frac{ds}{r(s)\rho(s)}\right)\cdot\exp\left(-\frac{1-\gamma}{2}p'\int_{x_0}^x\frac{ds}{r(s)\rho(s)}\right) d\xi\right]^{1/p'}\nonumber\\ &=\left[\int_{-\infty}^x\rho(t)^{\frac{1-\gamma}{2}p}\exp\left(-\frac{1-\gamma}{2}p\int_t^x\frac{ds} {r(s)\rho(s)}\right)dt\right]^{1/p}\nonumber\\ &\quad \cdot\left[\int_{x}^\infty\rho(\xi)^{\frac{1+\gamma}{2}p'}\exp\left(-\frac{1-\gamma}{2}p'\int_x^\xi\frac{ds} {r(s)\rho(s)}\right)d\xi\right]^{1/p'}.\label{4.10} \end{align} Let now $\gamma$ be the solution $\gamma_2$ of the equation $$\frac{1-\gamma}{2}p=\frac{1+\gamma}{2}p'\quad\Rightarrow\quad \gamma:=\gamma_2=\frac{p-p'}{p+p'}.$$ For $\gamma=\gamma_2$ inequality \eqref{4.10} takes the form \begin{align} \theta_p(x)&\le\left[\int_{-\infty}^x\rho(t)\exp\left(-\int_t^x\frac{ds}{r(s)\rho(s)}\right)dt\right] ^{1/p} \nonumber\\ &\quad\cdot \left[\int_x^\infty\rho(\xi)\exp\left(-(p'-1)\int_x^\xi\frac{ds}{r(s)\rho(s)}\right)d\xi\right]^{1/p'}. \label{4.11} \end{align} That \eqref{4.11} implies the upper estimate in \eqref{4.5} can be proved similarly to the proof of the same estimate from \eqref{4.7}, and therefore we omit the proof.
It remains to obtain the lower estimate in \eqref{4.5}. The following inequality follows from \eqref{2.14} and \eqref{2.26}: \begin{align*} \theta_p(x)&\ge\left[\int_{x-d(x)}^xv(t)^pdt\right]^{1/p}\cdot\left[\int_x^{x+d(x)}u(t)^{p'}dt\right] ^{1/p'} \\ &\ge c^{-1}v(x)d(x)^{1/p}\cdot c^{-1}u(x)d(x)^{1/p'}=c^{-1}\rho(x)d(x)\ge c^{-1}h(x)d(x). \end{align*} \end{proof} \begin{cor}\label{cor4.3} Let $p\in(1,\infty)$ and $B<\infty$ (see \eqref{2.34}). Then $\theta_p(x)\to0$ as $|x|\to\infty$ if and only if condition \eqref{3.1} holds.\end{cor} \begin{proof} This is an immediate consequence of \eqref{4.5}. \end{proof} \begin{cor}\label{cor4.4} Let $p\in(1,\infty)$ and $B<\infty$ (see \eqref{2.34}). Suppose that condition \eqref{3.1} holds, $N\ge 1$ and \begin{equation}\label{4.12} \theta_p^{(+)}(x,N)=\left[\int_N^xv(t)^pdt\right]^{1/p}\cdot\left[\int_x^\infty u(\xi)^{p'}d\xi\right]^{1/p'},\quad x\ge N, \end{equation} \begin{equation}\label{4.13} \theta_p^{(-)}(x,N)=\left[\int_{-\infty}^xv(t)^pdt\right]^{1/p}\cdot\left[\int_x^{-N} u(\xi)^{p'}d\xi\right]^{1/p'},\quad x\le -N. \end{equation} Then \begin{equation}\label{4.14} \theta_p^{(-)}(x,N)\to0,\qquad \theta_p^{(+)}(x,N)\to0\quad\text{as}\quad N\to\infty. \end{equation} \end{cor} \begin{proof} Now we use \eqref{4.5}: \begin{align*} 0&<\theta_p^{(+)}(x,N)\le\sup_{x\ge N}\theta_p^{(+)}(x,N)\le\sup_{x\ge N}\theta_p(x)\nonumber\\ &\le cB^{1/p}\sup_{t\ge N}(h(t)d(t))^{1/p'}\to 0\quad\text{as}\quad N\to\infty\quad\Rightarrow\quad \eqref{4.14}. \end{align*} The first relation of \eqref{4.14} can be checked in a similar way. \end{proof} Let us now check 3).
The following relations are obvious: \begin{align*} \sup_{g\in W}\int_{|x|\ge N}|g(x)|^pdx&=\sup_{f\in F}\int_{|x|\ge N}|(G_2f)(x)|^pdx \\ &\le 2\sup_{f\in F}\max\left\{\int_{-\infty}^{-N}|(G_2f)(x)|^pdx,\ \int_N^\infty|(G_2f)(x)|^pdx\right\}.\end{align*} Denote \begin{equation}\label{4.15} T_1(N)=\sup_{f\in F}\int_{-\infty}^{-N}|(G_2f)(x)|^pdx, \end{equation} \begin{equation}\label{4.16} T_2(N)=\sup_{f\in F}\int^{\infty}_{N}|(G_2f)(x)|^pdx. \end{equation} To prove 3), it is enough to verify that \begin{equation}\label{4.17} T_1(N)\to 0,\qquad T_2(N)\to0\qquad \text{as}\quad N\to\infty. \end{equation} Let us check \eqref{4.17} for $T_2(N).$ Now we use the definition of the set $F,$ \thmref{thm2.32} and Corollary \ref{cor4.4}: \begin{align*} T_2(N)&=\sup_{f\in F}\int_N^\infty|(G_2f)(x)|^pdx\le\|G_2\|^p_{L_p(N,\infty)\to L_p(N,\infty)}\sup_{f\in F}\|f\|_{L_p(N,\infty)}^p\\ &\le c(p)\sup_{x\ge N}\left[\left(\int_N^x v(t)^pdt\right)^{1/p}\cdot\left(\int_x^\infty u(\xi)^{p'}d\xi\right)^{1/p'}\right]^p\cdot\sup_{f\in F}\|f\|_p^p\\ &\le c(p)\sup_{x\ge N}[\theta_p^{(+)}(x,N)]^p\to 0\qquad \text{as}\quad N\to\infty. \end{align*} Let us turn to $T_1(N).$ First consider the value $(G_2f)(x)$ for $x\le-N$ and $f\in F$: \begin{align} (G_2f)(x)&=v(x)\int_x^\infty u(\xi)f(\xi)d\xi=v(x)\int_x^{-N}u(\xi)f(\xi)d\xi+v(x)\int_{-N}^\infty u(\xi)f(\xi)d\xi\nonumber\\ &:=(\tilde P_Nf)(x)+(\hat P_Nf)(x).\label{4.18} \end{align} Here \begin{equation}\label{4.19} (\tilde P_Nf)(x)=v(x)\int_x^{-N}u(\xi)f(\xi)d\xi,\quad x\le -N,\quad f\in F, \end{equation} \begin{equation}\label{4.20} (\hat P_Nf)(x)=v(x)\int^\infty_{-N}u(\xi)f(\xi)d\xi,\quad x\le -N,\quad f\in F.
\end{equation} The following relations are obvious: \begin{align} T_1(N)&= \sup_{f\in F}\int_{-\infty}^{-N}|(G_2f)(x)|^pdx=\sup_{f\in F}\int_{-\infty}^{-N}|(\tilde P_Nf)(x)+(\hat P_Nf)(x)|^pdx\nonumber\\ &\le 2^p\sup_{f\in F}\left[\int_{-\infty}^{-N}|(\tilde P_Nf)(x)|^pdx+\int_{-\infty}^{-N}|(\hat P_Nf)(x)|^pdx\right]\nonumber\\ &\le c(p)\left[\sup_{f\in F}\int_{-\infty}^{-N}|(\tilde P_Nf)(x)|^pdx+\sup_{f\in F}\int_{-\infty}^{-N}|(\hat P_Nf)(x)|^pdx\right]\nonumber\\ &=c(p)[\tilde T_1(N)+\hat T_1(N)].\label{4.21} \end{align} Here \begin{equation}\label{4.22} \tilde T_1(N)=\sup_{f\in F}\int_{-\infty}^{-N}|(\tilde P_Nf)(x)|^pdx,\end{equation} \begin{equation}\label{4.23} \hat T_1(N)=\sup_{f\in F}\int_{-\infty}^{-N}|(\hat P_Nf)(x)|^pdx.\end{equation} Clearly, $T_1(N)$ satisfies \eqref{4.17} if \begin{equation}\label{4.24} \tilde T_1(N)\to0,\qquad \hat T_1(N)\to0\qquad\text{as}\quad N\to\infty.\end{equation} To prove the first relation of \eqref{4.24}, we use the definition of the set $F,$ \thmref{thm2.32} and Corollary \ref{cor4.4}: \begin{align*} \tilde T_1(N)&=\sup_{f\in F}\int_{-\infty}^{-N}|(\tilde P_Nf)(x)|^pdx\le\|\tilde P_N\|^p_{L_p(-\infty,-N)\to L_p(-\infty,-N)}\cdot \sup_{f\in F}\|f\|_{L_p(-\infty,-N)}^p\\ &\le c(p)\sup_{x\le -N}\left[\left(\int_{-\infty}^x v(t)^pdt\right)^{1/p}\cdot\left(\int_x^{-N}u(\xi)^{p'}d\xi\right)^{1/p'}\right]^p\cdot\sup_{f\in F}\|f\|_p^p\\ &\le c(p)\sup_{x\le-N}[\theta_p^{(-)}(x,N)]^p\to0\qquad\text{as}\quad N\to \infty.
\end{align*} To prove the second relation of \eqref{4.24}, we use the definition of the set $F,$ H\"older's inequality and Corollary \ref{cor4.3}: \begin{align*} \hat T_1(N)&=\sup_{f\in F}\int_{-\infty}^{-N}|(\hat P_Nf)(x)|^pdx\le \sup_{f\in F}\left(\int_{-\infty}^{-N}v(x)^pdx\right)\cdot\left(\int_{-N}^\infty u(\xi)|f(\xi)|d\xi\right)^p\\ &\le\left(\int_{-\infty}^{-N}v(x)^pdx\right)\cdot\left(\int_{-N}^\infty u(\xi)^{p'}d\xi\right)^{p/p'}\cdot\sup_{f\in F}\|f\|_{L_p(-N,\infty)}^p\\ &\le \theta_p^p(-N)\to 0\qquad\text{as}\quad N\to\infty. \end{align*} Thus relation \eqref{4.17} holds, and therefore condition 3) is satisfied. \textit{Verification of condition 2)}. According to \eqref{2.32}, it is enough to show that \begin{equation}\label{4.25} \lim_{\delta\to0}\sup_{f\in F}\sup_{|t|\le\delta}\|(G_if)(\cdot+t)-(G_if)(\cdot)\|_p=0,\quad i=1,2.\end{equation} Both equalities of \eqref{4.25} are checked in the same way; therefore, below we only consider the case $i=2.$ Furthermore, equality \eqref{4.25} will be proved as soon as we find $\delta=\delta(\varepsilon)\in(0,1]$ for a given $\varepsilon>0$ such that \begin{equation}\label{4.26} \sup_{f\in F}\sup_{|t|\le\delta}\|(G_2f)(\cdot+t)-(G_2f)(\cdot)\|_p\le \varepsilon.\end{equation} Thus, let $\varepsilon>0$ be given. Fix $N\ge 1$ (the choice of $N$ will be made more precise later).
Then for $f\in F$ we have \begin{align} \|(G_2f)(\cdot+t)&-(G_2f)(\cdot)\|_p^p=\|(G_2f)(\cdot+t)-(G_2f)(\cdot)\|_{L_p(-N,N)}^p\nonumber\\ &\quad +\|(G_2f)(\cdot+t)-(G_2f)(\cdot)\|_{L_p(-\infty,-N)}^p+\|(G_2f)(\cdot+t)-(G_2f)(\cdot)\|_{L_p(N,\infty)}^p \nonumber \\ &\le \|(G_2f)(\cdot+t)-(G_2f)(\cdot)\|_{L_p(-N,N)}^p+2^p\|G_2f\|_{L_p(-\infty,-N+1)}^p \nonumber\\ &\quad +2^p\|G_2f\|_{L_p(N-1,+\infty)}^p.\label{4.27} \end{align} By 3), for the given $\varepsilon>0$ there exists $N_0=N_0(\varepsilon)$ such that $$\sup_{f\in F}\|G_2f\|_{L_p(-\infty,-N_0+1)}^p+\sup_{f\in F}\|G_2f\|_{L_p(N_0-1,\infty)}^p\le \frac{\varepsilon^p}{2^{p+1}}.$$ Therefore, for $N=N_0$ inequality \eqref{4.27} can be continued as follows: \begin{equation}\label{4.28} \|(G_2f)(\cdot+t)-(G_2f)(\cdot)\|_p^p\le\|(G_2f)(\cdot+t)-(G_2f)(\cdot)\|_{L_p(-N_0,N_0)}^p+\frac{\varepsilon ^p}{2}.\end{equation} Throughout the sequel, $|x|\le N_0$ and $|t|\le\delta$ (the number $\delta$ will be chosen later). Let us continue estimate \eqref{4.28}.
We have \begin{align} |(G_2f)(x+t)&-(G_2f)(x)| =\left|v(x+t)\int_{x+t}^\infty u(\xi)f(\xi)d\xi-v(x)\int_x^\infty u(\xi)f(\xi)d\xi\right|\nonumber\\ &\le |v(x+t)-v(x)|\cdot\left|\int_x^\infty u(\xi)f(\xi)d\xi\right|+v(x+t)\left|\int_x^{x+t}u(\xi)f(\xi)d\xi\right|\nonumber\\ &:=(Af)(x,t)+(Bf)(x,t).\label{4.29}\end{align} Here \begin{equation}\label{4.30} (Af)(x,t)=|v(x+t)-v(x)|\cdot\left|\int_x^\infty u(\xi)f(\xi)d\xi\right|,\quad f\in F,\end{equation} \begin{equation}\label{4.31} (Bf)(x,t)= v(x+t) \cdot\left|\int_x^{x+t} u(\xi)f(\xi)d\xi\right|,\quad f\in F.\end{equation} Let us introduce the numbers \begin{equation}\label{4.32} \delta_1=\min_{x\in[-N_0,N_0]}d(x),\qquad \eta=\sup_{x\in[-N_0,N_0]}\sup_{|t|\le\delta} \left|\int_x ^{x+t}\frac{d\xi}{r(\xi)h(\xi)}\right|.\end{equation} From absolute continuity of the Lebesgue integral, it follows that given $\varepsilon>0,$ one can choose $\delta=\delta(\varepsilon)$ so small that the following inequalities hold: \begin{equation}\label{4.33} \delta\le\delta_1,\qquad \eta\le\frac{\varepsilon}{\alpha}.\end{equation} (Here $\alpha$ is a positive number to be chosen later.)
In the following estimate of $(Af)(x,t)$, we use \eqref{4.33}, the equalities (see \cite{10}) \begin{equation}\label{4.34} \frac{v'(x)}{v(x)}=\frac{1+r(x)\rho'(x)}{2r(x)\rho(x)},\qquad \frac{u'(x)}{u(x)}= - \frac{1-r(x)\rho'(x)}{2r(x)\rho(x)},\qquad x\in\mathbb R,\end{equation} and estimates \eqref{2.17}, \eqref{2.25} and \eqref{2.26}: \begin{align} (Af)(x,t)&=|v(x+t)-v(x)|\cdot\left|\int_x^\infty u(\xi)f(\xi)d\xi\right|=\left|\int_x^{x+t}v'(s)ds\right|\cdot\frac{1}{v(x)}|(G_2f)(x)|\nonumber\\ &=\left|\int_x^{x+t}\frac{r(s)v'(s)}{v(s)} \cdot \frac{v(s)}{v(x)}\cdot\frac{ds}{r(s)}\right|\cdot|(G_2f)(x)|\nonumber\\ &\le\left|\int_x^{x+t}\frac{2}{\rho(s)}\cdot e^2\frac{ds}{r(s)}\right|\cdot|(G_2f)(x)|\le c\left|\int_x^{x+t}\frac{ds}{r(s)h(s)}\right|\cdot|(G_2f)(x)|\nonumber\\ &\le \frac{c\varepsilon}{\alpha}\cdot|(G_2f)(x)|.\label{4.35} \end{align} Furthermore, in the estimate of $(Bf)(x,t)$ we use \eqref{2.25}, \eqref{2.26}, \eqref{4.33}, H\"older's inequality and the definition of the set $F:$ \begin{align} (Bf)(x,t)&=v(x+t) \left|\int_x^{x+t} u(\xi)f(\xi)d\xi\right|\nonumber\\ &\le \frac{v(x+t)u(x+t)}{v(x)u(x)}\cdot \rho(x)\left|\int_x^{x+t}\frac{u(\xi)}{u(x)}\cdot \frac{u(x)}{u(x+t)}\cdot |f(\xi)|d\xi\right|\nonumber\\ &\le c\rho(x)\left|\int_x^{x+t}|f(\xi)|d\xi\right|\le c\rho(x)|t|^{1/p'}\cdot\|f\|_p\nonumber\\ &\le c\rho(x)\delta^{1/p'}\le c\big(\max_{|x|\le N_0}\rho(x)\big)\cdot\delta^{1/p'}.\label{4.36} \end{align} The following estimates are derived from \eqref{4.35}, \eqref{4.36}, the definition of the set $F$ and \eqref{2.35}: \begin{align*} |(G_2f)(x+t)-(G_2f)(x)|&\le (Af)(x,t)+(Bf)(x,t)\\ &\le \frac{c\varepsilon}{\alpha}|(G_2f)(x)|+c\big(\max_{|x|\le N_0}\rho(x)\big)\delta^{1/p'}\ \Rightarrow \\ \|(G_2f)(\cdot+t)-(G_2f)(\cdot)\|_{L_p(-N_0,N_0)} &\le \frac{c\varepsilon}{\alpha} \|G_2f\|_p+c\big(\max_{|x|\le N_0}\rho(x)\big)N_0^{1/p} \cdot \delta^{1/p'}\\ &\le \frac{cB}{\alpha}\varepsilon+c\big(\max_{|x|\le
N_0}\rho(x)\big)N_0^{1/p}\delta^{1/p'}. \end{align*} Set $\alpha=2^{1+\frac{1}{p}}\cdot cB$ and, if necessary, choose a smaller $\delta$ so that the following inequality holds: $$c\big(\max_{|x|\le N_0}\rho(x)\big)\cdot N_0^{1/p}\delta^{1/p'}\le\frac{\varepsilon}{2^{1+1/p}}.$$ Then we get the estimates $$\|(G_2f)(\cdot+t)-(G_2f)(\cdot)\|_{L_p(-N_0,N_0 )}\le \frac{\varepsilon}{2^{1/p}}\ \Rightarrow\ \text{(see \eqref{4.28})},$$ $$\|(G_2f)(\cdot+t)-(G_2f)(\cdot)\|_p^p\le \frac{\varepsilon^p}{2 }+\frac{\varepsilon^p}{2}=\varepsilon^p\ \Rightarrow\ \eqref{4.25}\ \Rightarrow 2).$$ The theorem is proved. \end{proof} \renewcommand{\qedsymbol}{} \begin{proof}[Proof of \thmref{thm3.4}] We need the following assertion. \end{proof} \begin{lem}\label{lem4.5} Suppose that condition \eqref{3.1} holds. Then $B<\infty$ (see \eqref{2.34}).\end{lem} \renewcommand{\qedsymbol}{\openbox} \begin{proof} {}From \eqref{3.1} and \eqref{2.14} it follows that $\rho(x)d(x)\to0$ as $|x|\to\infty.$ Hence there is $x_0\gg1$ such that $\rho(x)d(x)\le 1$ for $|x|\ge x_0.$ By \lemref{lem2.10}, the function $\rho(x)d(x)$ is continuous for $x\in\mathbb R$ and is therefore bounded on $[-x_0,x_0]$. Hence $S<\infty$ (see \eqref{2.36}), and therefore $B<\infty$ (see \eqref{2.34}).\end{proof} Let us now go to the assertion of the theorem. Since $G(x,t)=G(t,x)$ for all $t,x\in\mathbb R$ (see \eqref{2.9}), the operator $G:L_2\to L_2$ is symmetric and bounded (see \lemref{lem4.5} and \eqref{2.35}). Hence the operator $G$ is self-adjoint and, by \thmref{thm3.1}, compact. Furthermore, estimates \eqref{3.3} follow from positivity of $G$ which, in turn, will be proved below. Towards this end, we need the following two lemmas. 
\begin{lem}\label{lem4.6} The equalities \begin{equation}\label{4.37} \lim_{|x|\to\infty}\frac{u(x)}{v(x)}\cdot\int_{-\infty}^xv(t)^2dt=0, \end{equation} \begin{equation}\label{4.38} \lim_{|x|\to\infty}\frac{v(x)}{u(x)}\cdot\int^{\infty}_xu(t)^2dt=0 \end{equation} hold if and only if condition \eqref{3.1} is satisfied. \end{lem} \renewcommand{\qedsymbol}{} \begin{proof}[Proof of \lemref{lem4.6}] Necessity. Both equalities are checked in the same way, and therefore below we only consider \eqref{4.38}. Below $x\in\mathbb R,$ and we apply estimates \eqref{2.25} and \eqref{2.14}: \begin{align*} I(x)&\overset{\text{def}}{=}\frac{v(x)}{u(x)}\int_x^\infty u^2(t)dt\ge \frac{v(x)}{u(x)}\cdot\int_x^{x+d(x)}u^2(t)dt\\ &=\frac{v(x)}{u(x)}\int_x^{x+d(x)}\left(\frac{u(t)}{u(x)}\right)^2\cdot u^2(x)dt\ge c^{-1}\rho(x)d(x)\ge c^{-1}h(x)d(x)>0.\end{align*} It remains to refer to \eqref{4.38}. \end{proof} \renewcommand{\qedsymbol}{\openbox} \begin{proof}[Proof of \lemref{lem4.6}] Sufficiency. {}From \eqref{2.7} we obtain the equality \begin{equation}\label{4.39} I(x)=\frac{v(x)}{u(x)}\cdot\int_x^\infty u^2(t)dt=\int_x^\infty \rho(t)\exp\left(-\int_x^t\frac{d\xi}{r(\xi)\rho(\xi)}\right)dt.\end{equation} Let $x\to\infty.$ Below we use \eqref{4.39}, properties of an $\mathbb R(x,d)$-covering of $[x,\infty),$ \eqref{2.26} and \eqref{4.3}: \begin{align*} I(x)&=\sum_{n=1}^\infty\int_{\Delta_n}\rho(t)\exp\left(-\int_x^t\frac{d\xi}{r(\xi)\rho(\xi)}\right)dt\ \le c\sum_{n=1}^\infty\rho(x_n)d(x_n)\exp\left(-\int_{\Delta_1^-}^{\Delta_n^-}\frac{d\xi}{r(\xi)\rho(\xi)}\right)\\ &\le c\sum_{n=1}^\infty h(x_n)d(x_n)\exp\left(-\frac{1}{2}\int_{\Delta_1^-}^{\Delta_n^-}\frac{d\xi}{r(\xi)h(\xi)}\right)\\ &\le c\sup_{t\ge x}(h(t)d(t))\sum_{n=1}^\infty\exp\left(-\frac{n-1}{2}\right)=c\sup_{t\ge x}(h(t)d(t)).\end{align*} The latter inequality and \eqref{3.1} imply \eqref{4.38} (as $x\to\infty).$ Let now $x\to-\infty.$ Fix $\varepsilon>0$ and choose $\ell=\ell(\varepsilon)\gg1$ so that the following estimate holds:
\begin{equation}\label{4.40} 4c_0B \ell^2\cdot\exp\left(-\frac{\ell-1}{2}\right)\le \varepsilon,\qquad c_0=\sum_{k=1}^\infty\exp\left(-\frac{k-1}{2}\right).\end{equation} Consider the segments $\{\Delta_k\}_{k=1}^\ell$ from an $\mathbb R(x,d)$-covering of $[x,\infty).$ Let us show that \begin{equation}\label{4.41} \lim_{x\to-\infty}\Delta_\ell^+=-\infty.\end{equation} Assume the contrary: there exists $c>-\infty$ such that $\Delta_\ell^+\ge c$ as $x\to-\infty.$ Then by \eqref{2.10} and \eqref{4.3}, we have $$\ell=\int_{\Delta_1^-}^{\Delta_\ell^+}\frac{d\xi}{r(\xi)h(\xi)}\ge\int_x^c\frac{d\xi}{r(\xi)h(\xi)} \ge\frac{1}{2}\int_x^c\frac{d\xi}{r(\xi)\rho(\xi)}\to\infty\quad\text{as}\quad x\to-\infty,$$ a contradiction, so \eqref{4.41} is proved. Let us now choose $x_1(\varepsilon)$ and $x_2(\varepsilon)$ so that the following inequalities will hold: \begin{alignat}{2} 4e^2c_0\cdot\ell\cdot h(t)d(t) &\le\varepsilon \quad &&\text{for}\ t\le -x_1(\varepsilon),\label{4.42}\\ \Delta_\ell^+&\le-x_1(\varepsilon)\quad &&\text{for}\ x\le -x_2(\varepsilon).\label{4.43} \end{alignat} Let $x_0=\max\{x_1(\varepsilon),x_2(\varepsilon)\}.$ Below for $x\le -x_0$ we use \eqref{4.39}, properties of an $\mathbb R(x,d)$-covering of $\mathbb R,$ \eqref{2.26}, \eqref{4.42}, \eqref{4.43} and \eqref{4.40}: \begin{align*} I(x)&=\sum_{n=1}^\infty\int_{\Delta_n}\rho(t)\exp\left(-\int_x^t\frac{d\xi}{r(\xi)\rho(\xi)}\right)dt\\ &\le 2e^2\left\{\sum_{n=1}^\ell h(x_n)d(x_n)\exp\left(-\frac{n-1}{2}\right)+\sum_{n=\ell+1}^\infty h(x_n)d(x_n)\exp\left(-\frac{n-1}{2}\right)\right\}\\ &\le 2e^2c_0\sup_{t\le\Delta_\ell^+}(h(t)d(t))+2e^2c_0B\exp\left(-\frac{\ell-1}{2}\right)\le \frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon. \end{align*} The obtained estimates lead to \eqref{4.38}.
\end{proof} \begin{lem}\label{lem4.7} Let $p\in(1,\infty),$ $f\in L_p$ and $y=Gf.$ Then, if condition \eqref{3.1} holds, we have \begin{equation}\label{4.44} \lim_{|x|\to\infty }r(x)y'(x)y(x)=0.\end{equation} \end{lem} \begin{proof} From \eqref{3.1} and \lemref{lem4.5} it follows that $B<\infty.$ Let, for example, $x\to\infty$ (the case $x\to-\infty$ is treated in a similar way). Below we use the definition and properties of the operator $G: L_2\to L_2$ (see \eqref{2.29}), \eqref{2.17}, \eqref{2.30}--\eqref{2.33} and the Schwarz inequality: \begin{align*} r(x)|y'(x)|\cdot|y(x)|&\le (G|f|)(x)\cdot\left[r(x)\left|\frac{d}{dx}(Gf)(x)\right|\right]\\ &\le (G|f|)(x)\cdot\left[\frac{r(x)|u'(x)|}{u(x)}\cdot(G_1|f|)(x)+\frac{r(x)v'(x)}{v(x)}(G_2|f|)(x)\right]\\ &\le \frac{[(G|f|)(x)]^2}{\rho(x)}\le c\frac{[(G_1|f|)(x)]^2 +[(G_2|f|)(x)]^2}{\rho(x)} \\ &\le c\left\{\frac{u(x)}{v(x)}\cdot\int_{-\infty}^x v^2(t)dt+\frac{v(x)}{u(x)}\cdot\int_x^\infty u^2(t)dt\right\}\cdot\|f\|_2^2.\end{align*} It remains to apply \lemref{lem4.6}. \end{proof} Let us now complete the proof of the theorem. Below we assume that $f\in L_2$ and $y:=Gf.$ Then, obviously, $f=\mathcal L_2y$, and we have the relations \begin{align*} \int_{-\infty}^\infty(Gf)(x)\cdot\bar f(x)dx&=\int_{-\infty}^\infty y(x)\overline{(\mathcal L_2y)(x)}dx=\lim_{\substack{ b\to\infty\\ a\to-\infty}} \int_a^by(x)\overline{[-(r(x)y'(x))'+q(x)y(x)]}dx\\ &=\lim_{\substack{ b\to\infty\\ a\to-\infty}}\int_a^by(x)\big[-\big(r(x)\overline{y'(x)}\big)'+q(x)\overline{y(x)}\big]dx\\ &=\lim_{\substack{ b\to\infty\\ a\to-\infty}}\left[-r(x)\overline{y'(x)}y(x)\Big|_a^b+\int_a^b(r(x)|y'(x)|^2+q(x)|y(x)|^2)dx\right]\\ &=\int_{-\infty}^\infty(r(x)|y'(x)|^2+q(x)|y(x)|^2)dx\ge0.
\end{align*} \end{proof} \begin{proof}[Proof of Corollary \ref{cor3.7}] The following relations are based on \thmref{thm2.1}: $$ \left.\begin{array}{ll} \displaystyle{r(x)v'(x)-r(t)v'(t)=\int\limits_t^xq(\xi)v(\xi)d\xi,\quad t\le x\in\mathbb R }\\ \displaystyle{r(t)u'(t)-r(x)u'(x)=\int\limits_x^tq(\xi)u(\xi)d\xi,\quad t\ge x\in\mathbb R}\end{array}\right\}\quad\Rightarrow $$ $$r(x)v'(x)\ge \int_{-\infty}^x q(\xi)v(\xi)d\xi,\quad -r(x)u'(x)\ge \int_x^\infty q(\xi)u(\xi)d\xi\quad\Rightarrow$$ \begin{align*} 1 =r(x)[v'(x)u(x)-u'(x)v(x)]&\ge u(x)\int_{-\infty}^x q(t)v(t)dt+v(x)\int_x^\infty q(t)u(t)dt\\ &=\int_{-\infty}^\infty q(t)G(x,t)dt. \end{align*} Below we continue the last inequality using \eqref{2.9}, \eqref{2.14}, \eqref{2.19} and \eqref{2.26}: \begin{align} 1&\ge\int_{x-d(x)}^{x+d(x)}q(t)G(x,t)dt=\int_{x-d(x)}^{x+d(x)}q(t)\sqrt{\rho(t)\rho(x)}\exp\left(-\frac{1} {2}\left|\int_x^t\frac{d\xi}{r(\xi)\rho(\xi)}\right|\right)dt\nonumber\\ &\ge c^{-1}h(x)\exp\left(-\frac{1}{4}\int_{x-d(x)}^{x+d(x)}\frac{d\xi}{r(\xi)h(\xi)}\right)\cdot \int_{x-d(x)}^{x+d(x)}q(t)dt =c^{-1}h(x)\int_{x-d(x)}^{x+d(x)}q(t)dt.\label{4.45} \end{align} Equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty),$ since $\mathcal A>0$ (see \thmref{thm2.25}), and from \eqref{4.45} it follows that $$c(\mathcal A(x))^{-1}\ge h(x)d(x),\qquad x\in\mathbb R\ \Rightarrow\ \eqref{3.1}.$$ The assertion now follows from \thmref{thm3.1}. \end{proof} \begin{proof}[Proof of Corollary \ref{cor3.8}] Since $q(x)\to\infty$ as $|x|\to\infty,$ condition \eqref{1.3} holds, and therefore all auxiliary functions are defined (see Lemmas \ref{lem2.5} and \ref{lem2.10}).
Let $q(x)\ge 1$ for $|x|\ge x_1.$ Then by \eqref{2.22}, there exists $x_2\gg x_1$ such that for $|x|\ge x_2$ we have \begin{equation}\label{4.46} [x-d(x),x+d(x)]\cap[-x_1,x_1]=\emptyset.\end{equation} Then for $|x|\ge x_2$ from \eqref{4.45} it follows that $$c\ge h(x)\int_{x-d(x)}^{x+d(x)} q(t)dt\ge h(x)\int_{x-d(x)}^{x+d(x)}1\, dt=2h(x)d(x)\ \Rightarrow$$ $$\sup_{|x|\ge x_2}(h(x)d(x))\le 2c<\infty.$$ Since the function $h(x)d(x)$ is bounded on $[-x_2,x_2]$ (see the proof of \lemref{lem4.5}), we have $B<\infty,$ and by \thmref{thm2.21} equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty).$ Further, from \eqref{2.22} it follows that $$c(\mathcal A(x))^{-1}\ge h(x)d(x),\quad |x|\ge x_2;\quad \mathcal A(x)\to\infty\quad \text{as}\quad |x|\to\infty.$$ Hence condition \eqref{3.1} holds, and the assertion of the corollary follows from \thmref{thm3.1}. \end{proof} \begin{proof}[Proof of Corollary \ref{cor3.9}] We need the following fact whose proof is presented for the sake of completeness. \begin{lem}\label{lem4.8} \cite{17} Suppose that conditions \eqref{1.2}--\eqref{1.3} hold and $r\equiv1.$ Then equality \eqref{3.5} holds if and only if $\tilde d(x)\to0$ as $|x|\to\infty$ (see \eqref{2.15}).\end{lem} \renewcommand{\qedsymbol}{} \begin{proof}[Proof of \lemref{lem4.8}] Necessity. Assume the contrary: equality \eqref{3.5} holds but $\tilde d(x)\nrightarrow 0$ as $|x|\to\infty.$ This means that there exist $\varepsilon>0$ and points $\{x_n\}_{n=1}^\infty$ such that $|x_n|\to\infty$ as $n\to\infty$ and $\tilde d(x_n)\ge\varepsilon.$ This implies $$\frac{1}{\varepsilon}\ge\frac{1}{\tilde d(x_n)}=\frac{1}{2}\int_{x_n-\tilde d(x_n)}^{x_n+\tilde d(x_n)}q(t)dt\ge \frac{1}{2}\int_{x_n-\varepsilon}^{x_n+\varepsilon}q(t)dt,\quad n\ge1.$$ Thus equality \eqref{3.5} breaks down for $a=\varepsilon,$ a contradiction. \end{proof} \renewcommand{\qedsymbol}{\openbox} \begin{proof}[Proof of \lemref{lem4.8}] Sufficiency.
If $\tilde d(x)\to0$ as $|x|\to\infty,$ then for any $a\in(0,\infty)$ and for all $|x|\gg1,$ we have $$\frac{1}{2}\int_{x-a}^{x+a}q(t)dt\ge\frac{1}{2}\int_{x-\tilde d(x)}^{x+\tilde d(x)} q(t)dt=\frac{1}{\tilde d(x)}\ \Rightarrow\ \eqref{3.5}.$$ \end{proof} Let us now go to the corollary. For $r\equiv1,$ from \eqref{2.10} and \eqref{2.26} we obtain $$ \left.\begin{array}{ll} \displaystyle{1=\int_{x-d(x)}^{x+d(x)}\frac{dt}{h(t)}\le c\frac{d(x)}{h(x)} }\\ \displaystyle{1=\int_{x-d(x)}^{x+d(x)}\frac{dt}{h(t)}\ge c^{-1}\frac{d(x)}{h(x)}}\end{array}\right\}\quad\Rightarrow\quad h(x)\asymp d(x),\qquad x\in\mathbb R. $$ On the other hand, from \eqref{2.16} and \eqref{2.14}, it follows that $h(x)\asymp \rho(x)\asymp\tilde d(x),$ $x\in\mathbb R.$ Putting this together, we obtain the main relations: $h(x)\asymp d(x)\asymp\tilde d(x),$ $x\in\mathbb R.$ Further, as $m(a_0)>0$ for some $a_0\in(0,\infty)$, we conclude that equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty),$ by \thmref{thm2.24}. We have $h(x)d(x)\to0$ as $|x|\to\infty$ if and only if $\tilde d(x)\to0$ as $|x|\to\infty$ since $h(x)d(x)\asymp\tilde d^2(x),$ $x\in\mathbb R$. The assertion of the corollary now follows from \lemref{lem4.8}. \end{proof} \renewcommand{\qedsymbol}{\openbox} \begin{proof}[Proof of Corollary \ref{cor3.10}] By \thmref{thm2.30}, in all the following cases 1)--3), equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty).$ Let us show that in the same cases condition \eqref{3.1} holds, and thus by \thmref{thm3.1} our assertion will then be proved. 1) Let $x\in\mathbb R,$ $\Delta(x)=[x-d(x),x+d(x)]$.
Below we use the Schwarz inequality and \eqref{2.19}: \begin{align*} 2d(x)&=\int_{\Delta(x)}\sqrt{\frac{r(t)h(t)}{r(t)h(t)}}dt\le \left(\int_{\Delta(x)}r(t)h(t)dt\right)^{1/2}\cdot\left(\int_{\Delta(x)}\frac{dt}{r(t)h(t)}\right)^{1/2} \\ &=\left(\int_{\Delta(x)}r(t)h(t)dt\right)^{1/2}\quad \Rightarrow\end{align*} \begin{equation}\label{4.47} 4d^2(x)\le \int_{\Delta(x)} r(t)h(t)dt,\quad x\in\mathbb R.\end{equation} Let $\eta(x)=\sup_{t\in\Delta(x)}(r(t)h^2(t)).$ From \eqref{2.22} and \eqref{3.6} it follows that $\eta(x)\to0$ as $|x|\to\infty.$ Further, from \eqref{4.47} using \eqref{2.26}, we obtain $$4d^2(x)\le \int_{\Delta(x)}r(t)h^2(t)\cdot\frac{h(x)}{h(t)}\frac{dt}{h(x)}\le c\eta(x)\frac{d(x)}{h(x)}\quad \Rightarrow$$ $$0< h(x)d(x)\le c\eta(x),\qquad x\in\mathbb R\qquad\Rightarrow\qquad\eqref{3.1}.$$ 2) This assertion follows from 1) and \eqref{2.12}, \eqref{3.7} and \eqref{3.6}: $$r(x)h^2(x)=r(x)\varphi(x)\psi(x)\frac{\varphi(x)\psi(x)}{(\varphi(x)+\psi(x))^2}\le r(x)\varphi(x)\psi(x),\quad x\in\mathbb R.$$ 3) From \eqref{2.22} it follows that $d(x)\le |x|$ for all $|x|\gg1$. Hence $0<h(x)d(x)\le h(x)|x|$ for all $|x|\gg1,$ and therefore \eqref{3.1} holds because of \eqref{3.8}. \end{proof} \begin{proof}[Proof of Corollary \ref{cor3.11}] For $x\in\mathbb R,$ according to \eqref{2.19} and \eqref{2.26}, we have $$1=\int_{\Delta(x)}\frac{dt}{r(t)h(t)}\ge \frac{c^{-1}}{h(x)}\int_{\Delta(x)}\frac{dt}{r(t)}\ge \frac{c^{-1}}{r_0}\frac{d(x)}{h(x)}\quad\Rightarrow$$ $$0<h(x)d(x)\le ch^2(x).$$ Then $B\le ch_0^2<\infty,$ and condition \eqref{3.1} holds. The assertion follows from Theorems \ref{thm2.21} and \ref{thm3.1}. \end{proof} \begin{proof}[Proof of \thmref{thm3.13}] Since $\theta(x)\to 0$ as $|x|\to\infty,$ we have $r^{-1}\in L_1(\mathbb R)$ and $\theta<\infty$ (see \eqref{2.46}). We now need the following lemma.
\begin{lem}\label{lem4.9} Suppose that conditions \eqref{1.2} and \eqref{2.1} hold and $r^{-1}\in L_1.$ Then we have the estimate \begin{equation}\label{4.48} \rho(x)\le \tau\int_{-\infty}^x\frac{dt}{r(t)}\cdot\int_x^\infty\frac{dt}{r(t)},\qquad x\in\mathbb R. \end{equation} Here \begin{equation}\label{4.49} \tau=\max\left\{\left(\int_{-\infty}^0\frac{dt}{r(t)}\right)^{-1},\left(\int_0^\infty\frac{dt}{r(t)}\right )^{-1}\right\}.\end{equation} \end{lem} \begin{proof} From \thmref{thm2.1}, it easily follows that \begin{equation}\label{4.50} u(x)=v(x)\int_x^\infty\frac{dt}{r(t)v^2(t)},\quad v(x)=u(x)\int_{-\infty}^x\frac{dt}{r(t)u^2(t)},\qquad x\in\mathbb R. \end{equation} {}From \eqref{4.50} and \eqref{2.3} we now obtain $$\rho(x)=v^2(x)\int_x^\infty\frac{dt}{r(t)v^2(t)}\le\int_x^\infty\frac{dt}{r(t)},\qquad x\in\mathbb R,$$ $$\rho(x)=u^2(x)\int_{-\infty}^x\frac{dt}{r(t)u^2(t)}\le\int_{-\infty}^x\frac{dt}{r(t)},\qquad x\in \mathbb R.$$ Hence \begin{equation}\label{4.51} \rho(x)\le\begin{cases} \displaystyle{\int_x^\infty\frac{dt}{r(t)}},\quad & \text{if}\ x\ge 0\\ \\ \displaystyle{ \int_{-\infty}^x\frac{dt}{r(t)}},\quad & \text{if}\ x\le 0\end{cases}\end{equation} Estimate \eqref{4.48} follows from \eqref{4.51} and \eqref{4.49}. \end{proof} Further, from \eqref{2.23} we conclude that $s(x)\le|x|$ for all $|x|\gg1$, and therefore $$\rho(x)s(x)\le\tau|x|\cdot\int_{-\infty}^x \frac{dt}{r(t)}\cdot\int_x^\infty \frac{dt}{r(t)}=\tau\theta(x),\qquad |x|\gg1.$$ The latter inequality means that $S\le\tau\theta<\infty.$ Hence equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty),$ by \thmref{thm2.22}. If $\theta(x)\to0$ as $|x|\to\infty,$ then condition \eqref{3.2} holds, and the operator $G: L_p\to L_p$, $p\in(1,\infty)$, is compact by \thmref{thm3.2}. \end{proof} \begin{proof}[Proof of \thmref{thm3.14}] Below we follow the scheme of the proof of Corollary \ref{cor3.10}, 1). Let $x\in\mathbb R,$ $\tilde \Delta(x)=[x-s(x),x+s(x)]$ (see \eqref{2.19}).
From the Schwarz inequality and \eqref{2.19}, we get \begin{align*} 2s(x)&=\int_{\tilde \Delta(x)}\sqrt{\frac{r(t)\rho(t)}{r(t)\rho(t)}}dt\le\bigg(\int_{\tilde\Delta(x)}r(t)\rho(t)dt\bigg)^{1/2} \bigg(\int_{\tilde\Delta(x)}\frac{dt}{r(t)\rho(t)}\bigg)^{1/2}\\ &=\bigg(\int_{\tilde\Delta(x)}r(t)\rho(t)dt\bigg)^{1/2},\quad x\in\mathbb R\quad\Rightarrow \end{align*} \begin{equation}\label{4.52} 4s^2(x)\le\int_{\tilde\Delta(x)}r(t)\rho(t)dt,\qquad x\in\mathbb R. \end{equation} Further, since $\nu<\infty,$ we have $r^{-1}\in L_1.$ Therefore by \lemref{lem4.9} we have estimate \eqref{4.48}. This implies the inequality \begin{equation}\label{4.53} r(x)\rho^2(x)\le c\nu(x),\qquad x\in\mathbb R.\end{equation} Since $\nu<\infty,$ from \eqref{4.52}, \eqref{4.53} and \eqref{2.28}, we get \begin{gather} 4s^2(x)\le \int_{\tilde\Delta(x)}r(t)\rho^2(t)\frac{\rho(x)}{\rho(t)}\frac{dt}{\rho(x)}\le c\nu \frac{s(x)}{\rho(x)},\quad x\in\mathbb R\quad \Rightarrow\nonumber\\ s(x)\rho(x)\le c\nu,\quad x\in\mathbb R\quad\Rightarrow\quad S\le c\nu.\label{4.54}\end{gather} From \eqref{4.54} and \thmref{thm2.22} it follows that equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty)$. Let $\nu(x)\to0$ as $|x|\to\infty.$ Then by \eqref{4.53} and \eqref{2.23}, we also have $\tilde\eta(x)\to0$ as $|x|\to\infty,$ where $\tilde\eta(x)=\sup\limits_{t\in\tilde\Delta(x)}r(t)\rho^2(t).$ Hence $$ 4s^2(x)\le\int_{\tilde\Delta(x)}r(t)\rho^2(t)\frac{\rho(x)}{\rho(t)}\frac{dt}{\rho(x)}\le c\tilde\eta(x)\frac{s(x)}{\rho(x)}\quad\Rightarrow $$ $$\rho(x)s(x)\le c\tilde\eta(x),\quad x\in\mathbb R\quad\Rightarrow\quad \lim_{|x|\to\infty}\rho(x)s(x)=0. $$ Thus the operator $G: L_p\to L_p,$ $p\in(1,\infty),$ is compact by \thmref{thm3.2}. \end{proof} \section{Additional assertions. Example} Below we consider equation \eqref{1.1} with coefficients \begin{equation}\label{5.1} r(x)=e^{\alpha|x|},\qquad q(x)=e^{\beta|x|},\qquad x\in\mathbb R\end{equation} where $\alpha$ and $\beta$ are any given real numbers.
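Before proceeding, it is convenient to record two elementary dichotomies for the coefficients \eqref{5.1}; they are used repeatedly in the analysis below (a sketch, obtained by direct integration):

```latex
% Integrability of 1/r: since r(x)=e^{\alpha|x|},
\[
\int_{-\infty}^{\infty}\frac{dx}{r(x)}
  =2\int_{0}^{\infty}e^{-\alpha x}\,dx
  =\begin{cases}\dfrac{2}{\alpha}, & \alpha>0,\\[4pt] \infty, & \alpha\le 0,\end{cases}
\qquad\text{so that}\qquad r^{-1}\in L_1(\mathbb R)\iff\alpha>0.
\]
% Behaviour of q: since q(x)=e^{\beta|x|},
\[
\inf_{x\in\mathbb R}q(x)=\begin{cases}1, & \beta\ge0,\\ 0, & \beta<0,\end{cases}
\qquad\text{and}\qquad
\lim_{|x|\to\infty}q(x)=\infty\iff\beta>0.
\]
```

These two dichotomies are governed precisely by the sign of $\alpha$ and the sign of $\beta$, which is what produces the row and column structure of the table in \thmref{thm5.7} below.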
In what follows, for brevity we refer to it as equation \eqref{5.1}. Our goal in connection with \eqref{5.1} is to obtain for this equation a complete solution of problems I)--III). As mentioned above, to study concrete equations \eqref{1.1}, one needs assertions that allow us to obtain two-sided estimates of the functions $h$ and $d$ which are sharp by order (see Remark \ref{rem2.29}). Below we will see that obtaining such inequalities is a certain technical problem of local analysis. However, the statement of such a problem depends on the properties of the coefficients of equation \eqref{1.1}. Therefore, here we restrict ourselves to considering statements ``sufficient'' for the investigation of \eqref{5.1}. (Cf. \cite{10} where estimates of $h$ and $d$ were obtained for equations \eqref{1.1} with nonsmooth and oscillating coefficients $r$ and~$q.$) The next theorem contains a general method that guarantees obtaining estimates for $h$ and $d.$ Note that this statement is a formalization of certain devices which were first used by Otelbaev for estimating his auxiliary functions (see \cite{9}). \begin{thm}\label{thm5.1} \cite{2} Suppose that conditions \eqref{1.2} and \eqref{1.3} hold. For a given $x\in\mathbb R$ introduce the following functions of $\eta\ge0$: \begin{equation}\label{5.2} F_1(\eta)=\int_{x-\eta}^x\frac{dt}{r(t)}\cdot\int_{x-\eta}^x q(t)dt, \end{equation} \begin{equation}\label{5.3} F_2(\eta)=\int_x ^{x+\eta}\frac{dt}{r(t)}\cdot\int_{x} ^{x+\eta} q(t)dt, \end{equation} \begin{equation}\label{5.4} F_3(\eta)=\int_{x-\eta} ^{x+\eta}\frac{dt}{r(t)h(t)}.
\end{equation} Then the following assertions hold (see Lemmas \ref{lem2.5} and \ref{lem2.10}): \begin{enumerate} \item[1)] the inequality $\eta\ge d_1(x)$ $(0\le\eta\le d_1(x))$ holds if and only if $F_1(\eta)\ge 1$ $(F_1(\eta)\le 1);$ \item[2)] the inequality $\eta\ge d_2(x)$ $(0\le\eta\le d_2(x))$ holds if and only if $F_2(\eta)\ge 1$ $(F_2(\eta)\le 1)$; \item[3)] the inequality $\eta\ge d(x)$ $(0\le\eta\le d(x))$ holds if and only if $F_3(\eta)\ge 1$ $(F_3(\eta)\le 1).$ \end{enumerate} \end{thm} The next theorem is an example of using \thmref{thm5.1}. \begin{thm}\label{thm5.2} Suppose that the following conditions hold: \begin{equation}\label{5.5} r>0,\quad q>0,\quad r\in\mathcal A C^{\loc}(\mathbb R),\quad q\in\mathcal A C^{\loc}(\mathbb R) \end{equation} (here $\mathcal A C^{\loc}(\mathbb R)$ is the set of functions absolutely continuous on every finite interval of the real axis). Let, in addition, $$\varkappa_1(x)\to0,\quad \varkappa_2(x)\to0\qquad\text{as}\quad |x|\to\infty$$ where \begin{equation}\label{5.6} \varkappa_1(x)=r(x)\sup_{|t|\le 80\hat d(x)}\left|\int_x^{x+t}\frac{r'(\xi)}{r^2(\xi)}d\xi\right|,\quad x\in\mathbb R,\end{equation} \begin{equation}\label{5.8} \varkappa_2(x)=\frac{1}{q(x)}\cdot\sup_{|t|\le 80\hat d(x)}\left|\int_x^{x+t}q'(\xi) d\xi\right|,\quad x\in\mathbb R,\end{equation} \begin{equation}\label{5.9} \hat d(x)=\sqrt{\frac{r(x)}{q(x)}},\qquad x\in\mathbb R.
\end{equation} Then for all $|x|\gg1$ each of the equations \eqref{2.11} has a unique finite positive solution $d_1(x)$ and $d_2(x)$, respectively, and we have (see \eqref{2.12}, \eqref{2.19}): \begin{equation}\label{5.10} \lim_{|x|\to\infty}\frac{d_1(x)}{\hat d(x)}=\lim_{|x|\to\infty}\frac{d_2(x)}{\hat d(x)}=1,\end{equation} \begin{equation}\label{5.11} \lim_{|x|\to\infty}\varphi(x)\sqrt{r(x)q(x)}=\lim_{|x|\to\infty}\psi(x) \sqrt{r(x)q(x)}=1,\end{equation} \begin{equation}\label{5.12} \lim_{|x|\to\infty}h(x)\sqrt{r(x)q(x)}=\frac{1}{2}, \end{equation} \begin{equation}\label{5.13} c^{-1}\hat d(x)\le d(x)\le c\hat d(x),\qquad x\in\mathbb R.\end{equation} In addition, $B<\infty$ (see \eqref{2.34}) if and only if $\inf_{x\in\mathbb R} q(x)>0,$ and equality \eqref{3.1} holds if and only if $q(x)\to\infty$ as $|x|\to\infty.$ \end{thm} \renewcommand{\qedsymbol}{} \begin{proof} Both relations \eqref{5.10} are proved in the same way, and therefore we only consider, say, the second equality. Below we use some properties of the function $F_2(\eta).$ It is convenient to list these properties as a separate statement. \end{proof} \begin{lem}\label{lem5.3} Under conditions \eqref{5.5}, the function $F_2(\eta)$ satisfies the following relations: \begin{enumerate}\item[1)] $F_2(\eta)\in\mathcal A C^{\loc}(\mathbb R_+),$\quad $\mathbb R_+=(0,\infty)$; \item[2)] $F_2(\eta)>0$ for $\eta>0;$ \item[3)] $F_2'(\eta)>0$ for $\eta>0.$\end{enumerate}\end{lem} \renewcommand{\qedsymbol}{\openbox} \begin{proof} Property 2) is an obvious consequence of \eqref{5.5}. Further, $$F_2'(\eta)=\frac{1}{r(x+\eta)}\int_x^{x+\eta}q(t)dt+q(x+\eta)\int_x^{x+\eta}\frac{dt}{r(t)}.$$ This equality together with \eqref{5.5} implies properties 1) and 3).
\end{proof} \begin{lem}\label{lem5.4} Let $\eta(x)=\alpha\hat d(x),$ $x\in\mathbb R,$ $\alpha\in(0,80].$ Then we have the inequalities \begin{equation}\label{5.14} \frac{r(x)}{\eta(x)} \left|\int_0^{\eta(x)}\left(\int_x^{x+s}\frac{r'(\xi)}{r^2(\xi)}d\xi\right)ds\right|\le \varkappa_1(x),\quad x\in\mathbb R, \end{equation} \begin{equation}\label{5.15} \frac{1}{q(x)\eta(x)}\left|\int_0^{\eta(x)}\left(\int_x^{x+s}q'(\xi)d\xi\right)ds\right|\le \varkappa_2(x),\quad x\in\mathbb R.\end{equation} \end{lem} \begin{proof} Inequalities \eqref{5.14}--\eqref{5.15} are obvious. For instance, $$ \frac{r(x)}{\eta(x)}\left|\int_0^{\eta(x)}\left(\int_x^{x+s}\frac{r'(\xi)}{r^2(\xi)}d\xi\right)ds\right| \le \frac{r(x)}{\eta(x)}\cdot\eta(x)\sup_{|s|\le 80\hat d(x)}\left|\int_x^{x+s}\frac{r'(\xi)}{r^2(\xi)}d\xi\right|=\varkappa_1(x).$$ Let us now go to \eqref{5.10}. Let $\eta\ge0.$ The following relations are obvious: \begin{align} \int_x^{x+\eta}\frac{d\xi}{r(\xi)}&=\int_0^\eta\frac{ds}{r(x+s)}=\frac{\eta}{r(x)}-\int_0^\eta\left( \int_x^{x+s}\frac{r'(\xi)}{r^2(\xi)}d\xi\right)ds\nonumber\\ &=\frac{\eta}{r(x)}\left[1-\frac{r(x)}{\eta}\int_0^\eta\left(\int_x^{x+s}\frac{r'(\xi)}{r^2(\xi)}d\xi\right) ds\right],\quad x\in\mathbb R,\label{5.16}\end{align} \begin{align} \int_x^{x+\eta}q(t)dt&=\int_0^\eta q(x+s)ds=q(x)\eta+\int_0^\eta\left(\int_x^{x+s} q'(\xi) d\xi\right)ds\nonumber\\ &=q(x)\eta\left[1+\frac{1}{q(x)\eta}\int_0^\eta\left(\int_x^{x+s}q'(\xi)d\xi\right) ds\right],\quad x\in\mathbb R.\label{5.17}\end{align} Denote \begin{equation} \begin{aligned}\label{5.18} \delta(x)=\varkappa_1(x)+\varkappa_2(x),\qquad x \in\mathbb R,\\ \eta(x)=\hat d(x)(1+\delta(x)),\qquad x \in\mathbb R.
\end{aligned} \end{equation} Then for all $|x|\gg1,$ from \eqref{5.18}, \eqref{5.17}, \eqref{5.16}, \eqref{5.14} and \eqref{5.15}, it follows that \begin{align} F_2(\eta(x))&=\int_x^{x+\eta(x)}\frac{dt}{r(t)}\cdot\int_x^{x+\eta(x)}q(t)dt\nonumber\\ &=\eta^2(x)\frac{q(x)}{r(x)}\cdot \left[1-\frac{r(x)}{\eta(x)}\int_0^{\eta(x)}\left(\int_x^{x+s}\frac{r'(\xi)}{r^2(\xi)}d\xi\right)ds\right]\nonumber\\ &\quad \cdot \left[1+\frac{1}{q(x)\eta(x)}\int_0^{\eta(x)}\left(\int_x^{x+s}q'(\xi)d\xi\right)ds\right]\nonumber\\ & \ge(1+\delta(x))^2(1-\varkappa_1(x))(1-\varkappa_2(x))\nonumber\\ &\ge (1+2\delta(x))(1-\delta(x))=1+\delta(x)-2\delta^2(x)\ge 1.\label{5.19} \end{align} Since $F_2(0)=0,$ from \eqref{5.19} and \lemref{lem5.3}, it follows that the equation $F_2(d)=1$ has a unique finite positive solution. Denote it $d_2(x).$ From \eqref{5.19} and \thmref{thm5.1}, we obtain the estimate \begin{equation}\label{5.20} d_2(x)\le \eta(x)=\hat d(x)(1+\delta(x)),\qquad |x|\gg 1.\end{equation} Let now \begin{equation}\label{5.21} \eta(x)=\hat d(x)(1-\delta(x)),\qquad |x|\gg 1.\end{equation} Clearly, $\eta(x)>0$ for all $|x|\gg1.$ The following relations are similar to \eqref{5.19}: \begin{align*} F_2(\eta(x))&=\int_x^{x+\eta(x)}\frac{dt}{r(t)}\cdot \int_x^{x+\eta(x)}q(t)dt\\ &=\eta^2(x)\cdot \frac{q(x)}{r(x)}\left[1-\frac{r(x)}{\eta(x)}\int_0^{\eta(x)}\left(\int_x^{x+s}\frac{r'(\xi)}{r^2(\xi)} d\xi\right)ds\right]\\ &\quad \cdot \left[1+\frac{1}{q(x)\eta(x)}\int_0^{\eta(x)}\left(\int_x^{x+s}q'(\xi)d\xi\right)ds\right]\le(1-\delta(x))^2 (1+\varkappa_1(x))(1+\varkappa_2(x))\\ &=[1-2\delta(x)+\delta^2(x)][1+\varkappa_1(x)+\varkappa_2(x)+\varkappa_1(x)\varkappa_2(x)].
\end{align*} It is easy to see that for all $|x|\gg1$, we have the inequalities \begin{align*} 1-2\delta(x)+\delta^2(x)&\le 1-\frac{5}{3}\delta(x),\\ \varkappa_1(x)\cdot \varkappa_2(x)&\le \frac{\varkappa_1(x)+\varkappa_2(x)}{2}=\frac{\delta(x)}{2},\end{align*} which allow us to continue the estimate: \begin{equation}\label{5.22} F_2(\eta(x))\le\left(1-\frac{5}{3}\delta(x)\right)\left(1+\frac{3}{2}\delta(x)\right)\le 1-\frac{\delta(x)}{6}\le 1.\end{equation} {}From \eqref{5.22} and \thmref{thm5.1} we obtain the inequality \begin{equation}\label{5.23} d_2(x)\ge \eta(x)=\hat d(x)(1-\delta(x)),\qquad |x|\gg 1.\end{equation} From \eqref{5.20} and \eqref{5.23} we obtain \eqref{5.10}. Let us now go to \eqref{5.11}. These equalities are a consequence of \eqref{5.10}. Indeed, as above, we get \begin{align} \psi(x)&=\int_x^{x+d_2(x)}\frac{dt}{r(t)}=\frac{d_2(x)}{r(x)}-\int_0^{d_2(x)}\left(\int_x^{x+s}\frac{ r'(\xi)}{r^2(\xi)}d\xi\right)ds\nonumber\\ &=\frac{d_2(x)}{r(x)}\left[1-\frac{r(x)}{d_2(x)}\int_0^{d_2(x)}\left(\int_x^{x+s}\frac{r'(\xi)}{r^2(\xi)} d\xi\right)ds\right]\nonumber\\ &\Rightarrow\quad \psi(x)\sqrt{r(x)q(x)}=\frac{d_2(x)}{\hat d(x)}\cdot (1+\gamma(x)),\qquad x\in\mathbb R.\label{5.24} \end{align} From \eqref{5.24} it easily follows that $|\gamma(x)|\le\varkappa_1(x)$ for $|x|\gg 1.$ This proves \eqref{5.11} and hence, in view of \eqref{2.12}, also \eqref{5.12}. Let us verify \eqref{5.13}. Let us show that $d(x)\le 80\hat d(x)$ for all $|x|\gg1$. Assume the contrary.
This means that $d(x)>\eta(x)=80\hat d(x)$ for some $|x|\gg1.$ In the following relations, apart from the above assumption, we use \eqref{2.19}, \eqref{2.26}, \eqref{2.22}, \thmref{thm5.1} and the part of the theorem that has already been proved: \begin{align*} 1&=\int_{x-d(x)}^{x+d(x)}\frac{dt}{r(t)h(t)}\ge \frac{1}{4e^2}\cdot \frac{1}{h(x)}\int_{x-d(x)}^{x+d(x)}\frac{dt}{r(t)}\\ &\ge\frac{1}{80}\sqrt{r(x)q(x)}\left[2\frac{\eta(x)}{r(x)}-\int_0^{\eta(x)}\left(\int_x^{x+s} \frac{r'(\xi)}{r^2(\xi)}d\xi\right)ds +\int_0^{\eta(x)}\left(\int_{x-s}^x\frac{r'(\xi)}{r^2(\xi)}d\xi\right)ds\right]\\ &\ge 2\left[1-\frac{1}{2}\cdot\frac{r(x)}{\eta(x)}\int_0^{\eta(x)}\left(\int_x^{x+s}\frac{r'(\xi)}{r^2(\xi)} d\xi\right)ds+\frac{1}{2}\cdot\frac{r(x)}{\eta(x)}\int_0^{\eta(x)}\left(\int_{x-s}^x\frac{r'(\xi)}{r^2(\xi)}d\xi \right)ds\right]\\ &\ge 2(1-\varkappa_1(x))>1. \end{align*} Contradiction. Hence $$d(x)\le 80\hat d(x)\qquad \text{for}\qquad |x|\gg 1.$$ To get the lower estimate of $d(x)$ for $|x|\gg1,$ we use \begin{align} 1&=\int_{x-d(x)}^{x+d(x)}\frac{dt}{r(t)h(t)}\le \frac{4e^2}{h(x)}\int_{x-d(x)}^{x+d(x)}\frac{dt}{r(t)}\nonumber\\ &\le 80\sqrt{r(x)q(x)}\left[2\frac{d(x)}{r(x)}-\int_0^{d(x)}\left(\int_x^{x+s}\frac{r'(\xi)}{r^2(\xi)}d\xi\right)ds +\int_0^{d(x)}\left(\int_{x-s}^x\frac{r'(\xi)}{r^2(\xi)}d\xi\right)ds\right]\nonumber\\ &\le160 \frac{d(x)}{\hat d(x)}(1+\varkappa_1(x))\le320\frac{d(x)}{\hat d(x)}.\label{5.25} \end{align} Hence \begin{equation}\label{5.26} d(x)\ge \frac{\hat d(x)}{320}\qquad\text{for}\qquad |x|\gg 1.\end{equation} Choose $x_0\gg1$ so that for $|x|\ge x_0$ the estimate $d(x)\le80\hat d(x)$ and inequality \eqref{5.26} hold simultaneously.
Let $$f(x)=\frac{d(x)}{\hat d(x)},\qquad x\in [-x_0,x_0].$$ By \lemref{lem2.10}, the function $f(x)$ is positive and continuous on $[-x_0,x_0]$ and therefore attains on this segment a positive minimum $m$ and a finite maximum $M.$ Let $c\gg1$ be such that $$c^{-1}\le\min\left\{\frac{1}{320},m\right\}\le \max\{80,M\}\le c.$$ With such a choice of $c$, taking into account the facts proven above, we obtain \eqref{5.13}. The remaining assertions of the theorem follow from \eqref{5.12}--\eqref{5.13}.\end{proof} We also need the following facts. \begin{thm}\label{thm5.5} \cite[Ch.XI, \S6]{18}. Suppose that conditions \eqref{1.2} and \eqref{2.1} hold, and, in addition, \begin{equation}\label{5.27} \int_{-\infty}^0\frac{dt}{r(t)}=\int_0^\infty\frac{dt}{r(t)}=\infty.\end{equation} Then the relations \begin{equation}\label{5.28} v(x)\to0\qquad\text{as}\qquad x\to-\infty, \qquad u(x)\to0\qquad\text{as}\qquad x\to\infty \end{equation} hold if and only if \begin{equation}\label{5.29} \int_{-\infty}^x q(t)\int_t^x\frac{d\xi}{r(\xi)}dt=\int_x^\infty q(t)\int_x^t\frac{d\xi}{r(\xi)}dt=\infty,\quad x\in \mathbb R.\end{equation} \end{thm} \begin{thm}\label{thm5.6} Suppose that conditions \eqref{1.2}, \eqref{2.1} and \eqref{5.27} hold, and, in addition, equation \eqref{1.1} is correctly solvable in $L_p,$ $p\in(1,\infty).$ Then equalities \eqref{5.29} hold.\end{thm} \begin{proof} The operator $G: L_p\to L_p,$ $p\in (1,\infty),$ is bounded by \thmref{thm2.18}. From \eqref{2.33} it follows that then so is the operator $G_2: L_p\to L_p,$ $p\in(1,\infty)$ (see \eqref{2.31}). Then by \thmref{thm2.32} we have \begin{equation}\label{5.30} \sup_{x\in\mathbb R}\left(\int_{-\infty}^xv(t)^pdt\right)^{1/p}\left(\int_x^\infty u(t)^{p'}dt\right)^{1/p'}<\infty.
\end{equation} Further, by \thmref{thm2.1} there exist the limits $$\lim_{x\to-\infty} v(x)=\varepsilon_1\ge 0,\qquad \lim_{x\to\infty}u(x)=\varepsilon_2\ge0.$$ If here $\varepsilon_1>0$ or $\varepsilon_2>0,$ then \eqref{5.30} does not hold. Hence $\varepsilon_1=\varepsilon_2=0.$ Then \eqref{5.29} holds by \thmref{thm5.5}.\end{proof} Let us now go to equation \eqref{5.1}. Denote by $S_p$ the set of bounded linear operators acting from $L_p$ to $L_p,$ $p\in(1,\infty),$ and by $S_p^{(0)}$ the subset of $S_p$ consisting of the compact operators. Thus writing $G\in S_p$ $(G\in S_p^{(0)})$ we mean that the operator $G: L_p\to L_p$ is bounded (compact). \begin{thm}\label{thm5.7} Let $G$ be the Green operator corresponding to equation \eqref{5.1} (see \eqref{2.29}). Then, regardless of $p\in(1,\infty)$ and depending on the numbers $\alpha$ and $\beta$, the operator $G$ has the properties presented in the following table. \begin{table}[h] \begin{equation}\label{5.31} \begin{tabular}{|c|c|c|c|} \hline $\alpha \ \setminus\ \beta$ & $\beta<0$ & $\beta=0$ & $\beta>0$ \\ \hline $\alpha<0$ & $G\notin S_p$ & $G\in S_p,$\ $G\notin S_p^{(0)}$ & $G\in S_p^{(0)}$ \\ \hline $\alpha=0$ & $G\notin S_p$ & $G\in S_p,$\ $G\notin S_p^{(0)}$ & $G\in S_p^{(0)}$ \\ \hline $\alpha>0$ & $G\in S_p^{(0)} $ & $G\in S_p^{(0)} $ & $G\in S_p^{(0)} $ \\ \hline \end{tabular} \end{equation} \end{table} \end{thm} \begin{proof} Let us enumerate the entries of matrix \eqref{5.31} in the usual way and view them as instances of relations between $\alpha$ and $\beta.$ We move along the rows of the matrix. Since in the case of \eqref{5.1} the functions $r$ and $q$ are even, in all the relations in the sequel, we only consider the case $x\ge0$ $(x\gg1)$. \smallskip \noindent $\underline{\text{Case}\ (1,1)\ (\alpha<0,\ \beta<0)}$ Under conditions $(1,1)$ the hypotheses of \thmref{thm5.6} hold. Therefore, the operator $G:L_p\to L_p,$ $p\in(1,\infty),$ can be bounded only if \eqref{5.29} holds.
In particular, we must have the equality \begin{equation}\label{5.32} \infty=\int_x^\infty e^{\beta t}\left(\int_x^t e^{-\alpha\xi}d\xi\right)dt\ \Rightarrow\ \beta\ge\alpha.\end{equation} Below we consider cases \ a)~$\beta>\alpha$ and \ b)~$\beta=\alpha$ separately. a) Let $\alpha<\beta<0$. In this case the hypotheses of \thmref{thm5.2} hold, and therefore $B=\infty$ because $\inf\limits_{x\in\mathbb R}q(x)=\inf\limits_{x\in\mathbb R} e^{\beta|x|}=0.$ Thus $G\notin S_p$ by Theorems \ref{thm2.21} and \ref{thm2.28}. b) Let $\beta=\alpha<0.$ In this case $r=q,$ and one can compute $h$ and $d$ directly. Thus the equation for $d_2(x)$ is of the form (see \eqref{2.11}) $$1=\int_x^{x+d}e^{-\alpha\xi}d\xi\cdot\int_x^{x+d}e^{\alpha\xi}d\xi=\frac{(e^{|\alpha|d}-1)(1-e^{-|\alpha|d})} {\alpha^2},\quad d\ge0.$$ Hence $d_2(x)=c$ (explicitly, this equation reduces to $2\cosh(|\alpha|d)-2=\alpha^2,$ so that $c=\frac{1}{|\alpha|}\operatorname{arccosh}\left(1+\frac{\alpha^2}{2}\right)$). To find $d_1(x)$, we will first check that $d_1(x)\le x$ for all $x\gg1.$ Indeed, the function $$F(x)=\int_0^x e^{-\alpha\xi}d\xi\cdot\int_0^x e^{\alpha\xi}d\xi=\frac{e^{|\alpha|x}+e^{-|\alpha|x}-2}{\alpha^2}\to\infty$$ as $x\to\infty,$ and therefore $d_1(x)\le x$ for $x\gg1.$ Then equation \eqref{2.11} for $d_1(x)$ is of the form: $$1=\int_{x-d}^xe^{-\alpha\xi}d\xi\cdot\int_{x-d}^x e^{\alpha\xi}d\xi=\frac{(1-e^{-|\alpha|d})(e^{|\alpha|d}-1)}{\alpha^2},\quad d\ge0.$$ Hence $d_1(x)=d_2(x)=c.$ This easily implies the equalities $$\varphi(x)=ce^{|\alpha|\,|x|},\quad\psi(x)=ce^{|\alpha|\,|x|},\quad h(x)=ce^{|\alpha|\,|x|},\quad d(x)=c.$$ Hence $B=\infty$ and $G\notin S_p$, $p\in(1,\infty),$ by Theorems \ref{thm2.21} and \ref{thm2.18}. \smallskip \noindent $\underline{\text{Case}\ (1,2)\ (\alpha<0,\ \beta=0)}$ In this case $q(x)\equiv1,$ and therefore $G\in S_p$ by \thmref{thm2.25}.
We will use \thmref{thm5.2} to answer a more subtle question on the inclusion $G\in S_p^{(0)}.$ It is easy to see that in this case its hypotheses are satisfied, and condition \eqref{3.1} does not hold because $q(x)\nrightarrow \infty$ as $|x|\to\infty.$ Then $G\notin S_p^{(0)}$, $p\in(1,\infty),$ by \thmref{thm3.1}. \smallskip \noindent $\underline{\text{Case}\ (1,3)\ (\alpha<0,\ \beta>0)}$ In this situation conditions \eqref{1.2}--\eqref{1.3} hold and $q(x)\to\infty$ as $|x|\to\infty.$ Then $G\in S_p^{(0)},$ $p\in(1,\infty),$ by Corollary \ref{cor3.8}. \smallskip \noindent $\underline{\text{Case}\ (2,1)\ (\alpha=0,\ \beta<0)}$ Since $r\equiv1$ and $m(a)=0$ for any $a\in(0,\infty)$, we have $G\notin S_p,$ $p\in(1,\infty)$ (see \thmref{thm2.24}). \smallskip \noindent $\underline{\text{Case}\ (2,2)\ (\alpha=\beta=0)}$ Since $r\equiv q\equiv1,$ we have $G\in S_p,$ $p\in(1,\infty),$ by \thmref{thm2.24}, and $G\notin S_p^{(0)}$, $p\in(1,\infty),$ by Corollary \ref{cor3.9}. \smallskip \noindent $\underline{\text{Case}\ (2,3)\ (\alpha=0,\ \beta>0)}$ We have $r\equiv1,$ $q(x)\to\infty$ as $|x|\to\infty.$ Hence $G\in S_p^{(0)},$ $p\in(1,\infty),$ by Corollary \ref{cor3.8} or Corollary \ref{cor3.9}. \smallskip \noindent $\underline{\text{Cases}\ (3,1); (3,2); (3,3)}$ $(\alpha>0$ and $\beta<0;$ $\alpha>0$ and $\beta=0;$ $\alpha>0$ and $\beta>0,$ respectively) All cases are treated in the same way. Clearly, $r^{-1}\in L_1$ and $q>0$. Then $G\in S_p^{(0)}$, $p\in(1,\infty),$ by \thmref{thm3.13} or \thmref{thm3.14}. \end{proof} \section{Proofs of Otelbaev's Lemmas} In this section we present the proofs of Lemmas \ref{lem2.10}, \ref{lem2.11}, \ref{lem2.15} and \ref{lem2.17} for the function $s(x)$ (see \eqref{2.19}). Assertions of such type (except for \lemref{lem2.17}, and with other auxiliary functions) were first applied by Otelbaev, and therefore we call them Otelbaev's Lemmas (see \cite{9}).
\begin{proof}[Proof of \lemref{lem2.10}]
Consider the function
\begin{equation}\label{6.1}
F(\eta)=\int_{x-\eta}^{x+\eta}\frac{dt}{r(t)\rho(t)},\qquad \eta\ge0.\end{equation}
Clearly, the function $F(\eta)$ is continuous for $\eta\in[0,\infty);$ $F(0)=0,$ $F(\infty)=\infty$ (see \eqref{2.10}), and
$$F'(\eta)=\frac{1}{r(x+\eta)\rho(x+\eta)}+\frac{1}{r(x-\eta)\rho(x-\eta)}>0.$$
Therefore the second equation of \eqref{2.19} has a unique finite positive solution. Denote it by $s(x)$, and let us check that the function $s(x),$ $x\in\mathbb R$, is continuous. Towards this end, we show that the following inequality holds:
\begin{equation}\label{6.2}
|s(x+t)-s(x)|\le|t|,\qquad |t|\le s(x),\qquad x\in\mathbb R.
\end{equation}
To check \eqref{6.2}, we have to consider two cases: \ 1)~$t\in[0,s(x)]$\ and \ 2)~$t\in[-s(x),0]$. They are treated in a similar way, and therefore below we only consider Case 1). Thus let $t\in[0,s(x)].$ Then we have the obvious inclusions
$$[x-s(x),x+s(x)]\subseteq [(x+t)-(t+s(x)),(x+t)+(t+s(x))],$$
$$[(x+t)-(s(x)-t),(x+t)+(s(x)-t)]\subseteq[x-s(x),x+s(x)],$$
and therefore the following inequalities hold:
\begin{equation*}
\left.\begin{array}{ll} \displaystyle{ 1=\int_{x-s(x)}^{x+s(x)} \frac{d\xi}{r(\xi)\rho(\xi)} \le \int_{(x+t)-(t+s(x))}^{(x+t)+(t+s(x))} \frac{d\xi}{r(\xi)\rho(\xi)}},\\ \\ \displaystyle{ 1=\int_{x-s(x)}^{x+s(x)} \frac{d\xi}{r(\xi)\rho(\xi)} \ge \int_{(x+t)-(s(x)-t)}^{(x+t)+(s(x)-t)} \frac{d\xi}{r(\xi)\rho(\xi)}} \end{array}\right\}\ \Rightarrow \ \left.\begin{array}{ll} \displaystyle{ s(x+t)\le t+s(x)}\\ \displaystyle{s(x+t)\ge s(x)-t} \end{array}\right\}\ \Rightarrow \ \eqref{6.2}.
\end{equation*}
From \eqref{6.2} it follows that $s(x),$ $x\in\mathbb R$, is continuous.
\end{proof}

\begin{proof}[Proof of \lemref{lem2.11}]
Let us rewrite \eqref{6.2} in a different way:
\begin{equation}\label{6.3}
s(x)-|t|\le s(x+t)\le s(x)+|t|\quad\text{if}\quad |t|\le s(x).\end{equation}
Let $\xi=x+t.$ Then $t=\xi-x$, and in this notation we obtain inequalities equivalent to \eqref{6.3}:
\begin{equation}\label{6.4}
s(x)-|\xi-x|\le s(\xi)\le s(x)+|\xi-x|\quad\text{if}\quad |\xi-x|\le s(x).\end{equation}
Let $\varepsilon\in[0,1]$ and $|\xi-x|\le\varepsilon s(x).$ Then, evidently, $|\xi-x|\le\varepsilon s(x)\le s(x)$, and \eqref{2.21} follows from \eqref{6.4}:
$$s(x)-\varepsilon s(x)\le s(x)-|\xi-x|\le s(\xi)\le s(x)+|\xi-x|\le s(x)+\varepsilon s(x).$$
Further, the equalities \eqref{2.23} are checked in the same way, and therefore below we only consider the second one. We show that $\varliminf\limits_{x\to\infty}(x-s(x))=\infty.$ Assume the contrary. Then there exist a sequence $\{x_n\}_{n=1}^\infty$ such that $x_n\to\infty$ as $n\to\infty$ and a number $c$ such that
$$x_n-s(x_n)\le c<\infty,\qquad n=1,2,\dots$$
From these assumptions we obtain
$$1=\int_{x_n-s(x_n)}^{x_n+s(x_n)}\frac{dt}{r(t)\rho(t)}\ge \int_c^{x_n}\frac{dt}{r(t)\rho(t)}\to\infty\quad\text{as}\quad n\to\infty$$
(see \eqref{2.10}), a contradiction. Hence
$$\varliminf_{x\to\infty}(x-s(x))=\infty \ \Rightarrow\ \infty\le \varliminf_{x\to\infty}(x-s(x))\le \varlimsup_{x\to\infty}(x-s(x))\le\infty\ \Rightarrow$$
$$\varliminf_{x\to\infty}(x-s(x))=\varlimsup_{x\to\infty}(x-s(x))=\infty\ \Rightarrow\ \eqref{2.23}.$$
\end{proof}

\begin{proof}[Proof of \lemref{lem2.15}]
The assertion of the lemma immediately follows from Lemmas \ref{lem2.10}, \ref{lem2.11} and \ref{lem2.13}.
\end{proof}

\begin{proof}[Proof of \lemref{lem2.17}]
Below, for $t\in [x,x+s(x)]$, we use \thmref{thm2.1}, \eqref{4.34}, \eqref{2.17} and \eqref{2.19}:
$$\frac{v'(t)}{v(t)}=\frac{1+r(t)\rho'(t)}{2r(t)\rho(t)}\le\frac{1}{r(t)\rho(t)},\quad t\in[x,x+s(x)]\ \Rightarrow$$
$$\ln\frac{v(x+s(x))}{v(x)}\le\int_x^{x+s(x)}\frac{dt}{r(t)\rho(t)}<\int_{x-s(x)}^{x+s(x)}\frac{dt}{r(t)\rho(t)}=1.$$
Similarly,
$$\ln \frac{v(x)}{v(x-s(x))}\le\int_{x-s(x)}^x\frac{dt}{r(t)\rho(t)}<\int_{x-s(x)}^{x+s(x)}\frac{dt}{r(t)\rho(t)}=1.$$
This gives the inequalities of \eqref{2.27}, for example:
$$e^{-1}\le\frac{v(x-s(x))}{v(x)}\le\frac{v(t)}{v(x)}\le\frac{v(x+s(x))}{v(x)}\le e,\quad |t-x|\le s(x).$$
Inequalities \eqref{2.27} for the function $\rho$ are a consequence of the following inequalities for $u$ and $v$:
$$c^{-1}\le \frac{\rho(t)}{\rho(x)}=\frac{u(t)}{u(x)}\ \frac{v(t)}{v(x)}\le c,\qquad |t-x|\le s(x).$$
\end{proof}
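\smallskip
\noindent\textit{Remark.} For concreteness (a supplementary computation, not part of the original argument), the constant $c=d_1(x)=d_2(x)$ obtained in Case (1,1)b above can be written in closed form. Setting $u=e^{|\alpha|c}$ in the equation $\alpha^2=(e^{|\alpha|c}-1)(1-e^{-|\alpha|c})=(u-1)^2/u$, we get $u^2-(2+\alpha^2)u+1=0$, whose root greater than one is $u=\frac{2+\alpha^2+|\alpha|\sqrt{\alpha^2+4}}{2}$. Hence
$$c=\frac{1}{|\alpha|}\,\ln\frac{2+\alpha^2+|\alpha|\sqrt{\alpha^2+4}}{2}.$$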
Dear Readers: It's quite possible that I am a bit lopsided or biased in my view of how our job could be done better when it comes to reducing injury and fatal collisions. I actually look to your letters and comments to help me balance my view, and to keep public knowledge in perspective. In many cases, readers have made some very wise observations.

As we start the new month, deaths from Pasadena police-reported collisions are up 100 percent over the traffic deaths from last year. I narrowed the field to Pasadena police reports because, if I added the fatalities that occurred on the freeways and were reported to the California Highway Patrol, we would be up almost 200 percent. While we had a low for traffic deaths in Pasadena in 2003 at five, each one of these cases takes on its own personality with us as we investigate them.

There are two areas where I think the law should be changed so as to promote a safer driving environment. The first is the requirement that traffic officers be in distinctively marked vehicles while performing traffic enforcement. The second is the "driving while distracted" legislation that failed in the Legislature.

In 1961, the state Legislature enacted Vehicle Code Section 40800. It requires that an officer who is primarily performing traffic enforcement be in a full and distinctive uniform while operating a distinctively marked vehicle. I have no objection to the uniform requirement, as citizens need to know whom they are being stopped by. The requirement to have a distinctive vehicle needs to be rethought.

Lookouts for street races need only make a quick check to know if an officer is in the area, because our patrol cars are quite distinctive. These races are becoming more frequent and violent, and the death toll is rising. My argument is simple: if any car approaching could be a police car, then racers might be less likely to race as often.
I also like the process of crushing cars seized in street races, but I will settle for a mandated 30-day impound.

I mentioned earlier that Senate Bill 1800 died in committee. This bill, authored by state Sen. Kevin Murray, would have made it illegal to operate a vehicle while being distracted. Distractions are a major contributor to injury accidents in Pasadena. Several studies throughout the country indicate that distractions may contribute to 25 percent to 30 percent or more of all collisions.

When vehicles are backed up behind you, and you did not notice that your turn signal has been on for 11 minutes, is that unsafe? When you turn to your children to tell them to behave, and then have to slam on the brakes to make a panic stop, is that unsafe?

One of the opponents of this bill said its enactment into law would declare open season on drivers for ... driving. Should a driver be cited for speed and for being distracted? In my opinion, it boils down to safety. If you are unable to operate a vehicle safely, you should be cited for it. If there is no fear of getting caught, the problem will proliferate unabated. The fines are the consequence, and a small price to pay compared to the lives lost because someone had to make that very important phone call while they drove through an occupied crosswalk.

I like the idea of declaring open season on unsafe, distracted drivers.
package org.caleydo.view.tourguide.stratomex.s; import static org.caleydo.view.tourguide.stratomex.StratomexRenderStyle.ICON_ACCEPT; import static org.caleydo.view.tourguide.stratomex.StratomexRenderStyle.ICON_ACCEPT_DISABLED; import static org.caleydo.view.tourguide.stratomex.StratomexRenderStyle.ICON_ADD_DEPENDENT; import static org.caleydo.view.tourguide.stratomex.StratomexRenderStyle.ICON_ADD_INDEPENDENT; import static org.caleydo.view.tourguide.stratomex.StratomexRenderStyle.ICON_CANCEL; import static org.caleydo.view.tourguide.stratomex.StratomexRenderStyle.ICON_UNDO; import static org.caleydo.view.tourguide.stratomex.StratomexRenderStyle.ICON_UNDO_DIASBLED; import gleem.linalg.Vec3f; import java.net.URL; import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; import java.util.Collections; import java.util.List; import java.util.Objects; import java.util.Set; import javax.media.opengl.GL2; import javax.media.opengl.GLContext; import org.caleydo.core.data.collection.EDataType; import org.caleydo.core.data.collection.table.Table; import org.caleydo.core.data.datadomain.ATableBasedDataDomain; import org.caleydo.core.data.datadomain.DataDomainManager; import org.caleydo.core.data.datadomain.DataDomainOracle; import org.caleydo.core.data.perspective.table.TablePerspective; import org.caleydo.core.data.perspective.variable.Perspective; import org.caleydo.core.data.selection.SelectionType; import org.caleydo.core.data.virtualarray.VirtualArray; import org.caleydo.core.data.virtualarray.group.Group; import org.caleydo.core.event.AEvent; import org.caleydo.core.event.EventListenerManager.ListenTo; import org.caleydo.core.event.EventPublisher; import org.caleydo.core.manager.GeneralManager; import org.caleydo.core.util.collection.Pair; import org.caleydo.core.util.color.Color; import org.caleydo.core.util.logging.Logger; import org.caleydo.core.view.ViewManager; import org.caleydo.core.view.listener.RemoveTablePerspectiveEvent; import 
org.caleydo.core.view.opengl.canvas.AGLView; import org.caleydo.core.view.opengl.layout.ALayoutRenderer; import org.caleydo.core.view.opengl.layout.ElementLayout; import org.caleydo.core.view.opengl.layout.ElementLayouts; import org.caleydo.core.view.opengl.layout.Row; import org.caleydo.core.view.opengl.layout.util.multiform.MultiFormRenderer; import org.caleydo.core.view.opengl.layout2.GLGraphics; import org.caleydo.core.view.opengl.picking.APickingListener; import org.caleydo.core.view.opengl.picking.IPickingLabelProvider; import org.caleydo.core.view.opengl.picking.IPickingListener; import org.caleydo.core.view.opengl.picking.Pick; import org.caleydo.core.view.opengl.picking.PickingMode; import org.caleydo.core.view.opengl.util.texture.TextureManager; import org.caleydo.datadomain.pathway.PathwayDataDomain; import org.caleydo.datadomain.pathway.data.PathwayTablePerspective; import org.caleydo.datadomain.pathway.graph.PathwayGraph; import org.caleydo.view.stratomex.EEmbeddingID; import org.caleydo.view.stratomex.EPickingType; import org.caleydo.view.stratomex.GLStratomex; import org.caleydo.view.stratomex.addin.IStratomeXAddIn; import org.caleydo.view.stratomex.brick.GLBrick; import org.caleydo.view.stratomex.brick.configurer.ClinicalDataConfigurer; import org.caleydo.view.stratomex.brick.configurer.IBrickConfigurer; import org.caleydo.view.stratomex.brick.configurer.PathwayDataConfigurer; import org.caleydo.view.stratomex.brick.sorting.NoSortingSortingStrategy; import org.caleydo.view.stratomex.column.BlockAdapter; import org.caleydo.view.stratomex.column.BrickColumn; import org.caleydo.view.stratomex.column.BrickColumnManager; import org.caleydo.view.stratomex.column.FrameHighlightRenderer; import org.caleydo.view.tourguide.stratomex.event.AddNewColumnEvent; import org.caleydo.view.tourguide.stratomex.event.HighlightBrickEvent; import org.caleydo.view.tourguide.stratomex.event.UpdateNumericalPreviewEvent; import 
org.caleydo.view.tourguide.stratomex.event.UpdatePathwayPreviewEvent; import org.caleydo.view.tourguide.stratomex.event.UpdateStratificationPreviewEvent; import org.caleydo.view.tourguide.stratomex.event.WizardActionsEvent; import org.caleydo.view.tourguide.stratomex.wizard.WizardElementLayout; import com.google.common.base.Predicate; import com.google.common.collect.Iterables; import com.google.common.collect.Lists; import com.jogamp.opengl.util.texture.Texture; /** * @author Samuel Gratzl * */ public class TourGuideAddin implements IStratomeXAddIn { private static final Logger log = Logger.create(TourGuideAddin.class); private static final Color COLOR_SELECTED = SelectionType.SELECTION.getColor(); private static final Color COLOR_POSSIBLE_SELECTION = Color.NEUTRAL_GREY; private static final String ADD_PICKING_TYPE = "templateAdd"; private static final String ADD_DEPENDENT_PICKING_TYPE = "templateDependentAdd"; private static final String ADD_INDEPENDENT_PICKING_TYPE = "templateInDependentAdd"; private static final String CONFIRM_PICKING_TYPE = "templateConfirm"; private static final String CANCEL_PICKING_TYPE = "templateAbort"; private static final String BACK_PICKING_TYPE = "templateBack"; private GLStratomex stratomex; private AddWizardElement wizard; private int previewIndex; // where // what either an element or a brick private EWizardMode wizardMode; private ElementLayout wizardElement; // wrapper element of the wizard element for layouting private WizardElementLayout wizardElementWrapper; private List<BrickColumn> wizardPreviews = new ArrayList<>(); /** * the current selection and related information */ private ESelectionMode selectionMode = null; private GLBrick selectionCurrent = null; private String hoveredButton = ""; /** * events that has to be triggered one frame later */ private final List<AEvent> delayedEvents = new ArrayList<>(); @Override public void stampTo(GLStratomex stratomeX) { this.stratomex = stratomeX; } @Override public void postDisplay() 
{ for (AEvent event : delayedEvents) EventPublisher.trigger(event); delayedEvents.clear(); } @Override public void renderOptionTrigger(GL2 gl, float x, float y, float w, float h, int id) { if (isWizardActive()) // not more than one at the same time return; renderButton(gl, x, y, w, h, 32, stratomex, ADD_PICKING_TYPE, id, ICON_ADD_INDEPENDENT); } public boolean isWizardActive() { return wizardElement != null || !wizardPreviews.isEmpty(); } public void renderStartButton(GL2 gl, float x, float y, float w, float h, int id) { if (isWizardActive()) // not more than one at the same time return; renderButton(gl, x, y, w, h, 32, stratomex, ADD_PICKING_TYPE, id, ICON_ADD_INDEPENDENT); } public void renderAddDependentButton(GL2 gl, float x, float y, float w, float h, int id) { if (isWizardActive()) // not more than one at the same time return; renderButton(gl, x, y, w, h, 24, stratomex, ADD_DEPENDENT_PICKING_TYPE, id, ICON_ADD_DEPENDENT); } public void renderAddInDependentButton(GL2 gl, float x, float y, float w, float h, int id) { if (isWizardActive()) // not more than one at the same time return; renderButton(gl, x, y, w, h, 24, stratomex, ADD_INDEPENDENT_PICKING_TYPE, id, ICON_ADD_INDEPENDENT); } public void renderConfirmButton(GL2 gl, float x, float y, float w, float h) { boolean disabled = wizardPreviews.isEmpty(); // no preview no accept renderButton(gl, x, y, w, h, 32, stratomex, CONFIRM_PICKING_TYPE, 1, disabled ? ICON_ACCEPT_DISABLED : ICON_ACCEPT); } public void renderCancelButton(GL2 gl, float x, float y, float w, float h) { renderButton(gl, x, y, w, h, 32, stratomex, CANCEL_PICKING_TYPE, 1, ICON_CANCEL); } public void renderBackButton(GL2 gl, float x, float y, float w, float h) { boolean disabled = wizard == null || !wizard.canGoBack(); // check can go back renderButton(gl, x, y, w, h, 32, stratomex, BACK_PICKING_TYPE, 1, disabled ? 
ICON_UNDO_DIASBLED : ICON_UNDO); } private void renderButton(GL2 gl, float x, float y, float w, float h, int diameter, AGLView view, String pickingType, int id, URL texture) { GLGraphics.checkError(gl); boolean isHovered = Objects.equals(hoveredButton, pickingType + (id + 1)); id = view.getPickingManager().getPickingID(view.getID(), pickingType, id + 1); // stratomex.addIDPickingTooltipListener("Add another column", pickingType, pickedObjectID) gl.glPushName(id); final float wi = view.getPixelGLConverter().getGLWidthForPixelWidth(diameter); final float hi = view.getPixelGLConverter().getGLHeightForPixelHeight(diameter); final float xi = x + w * 0.5f - wi * 0.5f; final float yi = y + h * 0.5f - hi * 0.5f; final float z = 1.5f; Vec3f lowerLeftCorner = new Vec3f(xi, yi, z); Vec3f lowerRightCorner = new Vec3f(xi + wi, yi, z); Vec3f upperRightCorner = new Vec3f(xi + wi, yi + hi, z); Vec3f upperLeftCorner = new Vec3f(xi, yi + hi, z); Color col = isHovered ? new Color(0.8f, 0.8f, 0.8f) : new Color(1f, 1f, 1f); gl.glPushAttrib(GL2.GL_TEXTURE_BIT); TextureManager t = view.getTextureManager(); Texture tex = t.get(texture); tex.enable(gl); // gl.glTexEnvi(GL2ES1.GL_TEXTURE_ENV, GL2ES1.GL_TEXTURE_ENV_MODE, GL2ES1.GL_MODULATE); t.renderTexture(gl, tex, lowerLeftCorner, lowerRightCorner, upperRightCorner, upperLeftCorner, col); gl.glPopAttrib(); gl.glPopName(); GLGraphics.checkError(gl); } @Override public void registerPickingListeners() { final Object receiver = TourGuideAddin.this; stratomex.addTypePickingTooltipListener("Add another column at this position", ADD_PICKING_TYPE); stratomex.addTypePickingListener(new IPickingListener() { @Override public void pick(Pick pick) { switch (pick.getPickingMode()) { case CLICKED: log.debug("add new column"); EventPublisher.trigger(new AddNewColumnEvent(pick.getObjectID() - 1).to(receiver).from(this)); break; case MOUSE_OVER: hoveredButton = ADD_PICKING_TYPE + pick.getObjectID(); break; case MOUSE_OUT: hoveredButton = null; break; 
default: break; } } }, ADD_PICKING_TYPE); stratomex.addTypePickingTooltipListener("Add column based on this column's stratification", ADD_DEPENDENT_PICKING_TYPE); stratomex.addTypePickingListener(new IPickingListener() { @Override public void pick(Pick pick) { switch (pick.getPickingMode()) { case CLICKED: log.debug("add new dependent column"); EventPublisher.trigger(new AddNewColumnEvent(pick.getObjectID() - 1, EWizardMode.DEPENDENT).to( receiver).from(this)); break; case MOUSE_OVER: hoveredButton = ADD_DEPENDENT_PICKING_TYPE + pick.getObjectID(); break; case MOUSE_OUT: hoveredButton = null; break; default: break; } } }, ADD_DEPENDENT_PICKING_TYPE); stratomex.addTypePickingTooltipListener("Add stratification column based on this column", ADD_INDEPENDENT_PICKING_TYPE); stratomex.addTypePickingListener(new IPickingListener() { @Override public void pick(Pick pick) { switch (pick.getPickingMode()) { case CLICKED: log.debug("add new independent column"); EventPublisher.trigger(new AddNewColumnEvent(pick.getObjectID() - 1, EWizardMode.INDEPENDENT).to( receiver).from(this)); break; case MOUSE_OVER: hoveredButton = ADD_DEPENDENT_PICKING_TYPE + pick.getObjectID(); break; case MOUSE_OUT: hoveredButton = null; break; default: break; } } }, ADD_INDEPENDENT_PICKING_TYPE); class ActionPickingListener extends APickingListener { private final String pickingType; ActionPickingListener(String pickingType) { this.pickingType = pickingType; } @Override protected void clicked(Pick pick) { log.debug("click on wizardaction: " + pickingType); EventPublisher.trigger(new WizardActionsEvent(pickingType).to(receiver).from(this)); } @Override protected void mouseOut(Pick pick) { hoveredButton = null; } @Override protected void mouseOver(Pick pick) { hoveredButton = pickingType + pick.getObjectID(); } } stratomex.addTypePickingTooltipListener("Confirm the current previewed element", CONFIRM_PICKING_TYPE); stratomex.addTypePickingListener(new ActionPickingListener(CONFIRM_PICKING_TYPE), 
CONFIRM_PICKING_TYPE); stratomex.addTypePickingTooltipListener("Cancel temporary column", CANCEL_PICKING_TYPE); stratomex.addTypePickingListener(new ActionPickingListener(CANCEL_PICKING_TYPE), CANCEL_PICKING_TYPE); stratomex.addTypePickingTooltipListener("Go back", BACK_PICKING_TYPE); stratomex.addTypePickingListener(new ActionPickingListener(BACK_PICKING_TYPE), BACK_PICKING_TYPE); IPickingListener brickPicker = new IPickingListener() { @Override public void pick(Pick pick) { onBrickPick(pick); } }; stratomex.addTypePickingListener(brickPicker, EPickingType.BRICK.name()); stratomex.addTypePickingListener(brickPicker, EPickingType.BRICK_TITLE.name()); stratomex.addTypePickingListener(new IPickingListener() { @Override public void pick(Pick pick) { if (wizard != null) wizard.onPick(pick); } }, AddWizardElement.PICKING_TYPE); stratomex.addTypePickingTooltipListener(new IPickingLabelProvider() { @Override public String getLabel(Pick pick) { if (wizard != null) return wizard.getLabel(pick); return null; } }, AddWizardElement.PICKING_TYPE); } /** * if we pick an brick * * @param pick */ protected void onBrickPick(Pick pick) { // don't need to select if (pick.getPickingMode() != PickingMode.CLICKED || selectionMode == null || wizard == null) return; GLBrick brick = findBick(pick.getObjectID()); if (brick == null) return; boolean isHeader = brick.isHeaderBrick(); if (!isHeader && (selectionMode == ESelectionMode.STRATIFICATION)) return; if (this.selectionCurrent == brick) return; if (this.wizardPreviews.contains(brick.getBrickColumn())) // can't select temporarly return; selectBrick(brick); } private void selectBrick(GLBrick brick) { boolean handled = false; // Replace sampled dimension perspective with full one TablePerspective tablePerspective = brick.getBrickColumn().getTablePerspective(); ATableBasedDataDomain dataDomain = tablePerspective.getDataDomain(); Perspective dimension = dataDomain.getTable().getDefaultDimensionPerspective(false); // if it does not exist, it 
will be created tablePerspective = dataDomain.getTablePerspective(tablePerspective.getRecordPerspective().getPerspectiveID(), dimension.getPerspectiveID()); switch (selectionMode) { case STRATIFICATION: handled = wizard.onSelected(tablePerspective); break; case GROUP: handled = wizard.onSelected(tablePerspective, brick.getTablePerspective().getRecordGroup()); break; } if (handled) { if (this.selectionCurrent != null) { changeHighlight(this.selectionCurrent, COLOR_POSSIBLE_SELECTION); } changeHighlight(brick, COLOR_SELECTED); this.selectionCurrent = brick; stratomex.setDisplayListDirty(); } } private void changeHighlight(GLBrick brick, Color color) { if (brick == null) return; if (brick.isHeaderBrick()) { brick.getBrickColumn().setHighlightColor(color == null ? BrickColumn.REVERT_COLOR : color); } else { ElementLayout layout = brick.getLayout(); if (color == null) layout.clearBackgroundRenderers(FrameHighlightRenderer.class, GLContext.getCurrentGL().getGL2()); else { // select brick by changing highlight for (FrameHighlightRenderer glow : Iterables.filter(layout.getBackgroundRenderer(), FrameHighlightRenderer.class)) { glow.setColor(color); return; } // no yet there add one layout.addBackgroundRenderer(new FrameHighlightRenderer(color, true)); } } } public void selectStratification(Predicate<TablePerspective> filter, boolean autoSelectLeftOfMe) { this.selectionMode = ESelectionMode.STRATIFICATION; // highlight all possibles int index = 0; GLBrick toSelect = null; for (BrickColumn col : stratomex.getBrickColumnManager().getBrickColumns()) { if (filter.apply(col.getTablePerspective())) { changeHighlight(col.getHeaderBrick(), COLOR_POSSIBLE_SELECTION); if (autoSelectLeftOfMe && previewIndex == index) { toSelect = col.getHeaderBrick(); } } index++; } if (toSelect != null) selectBrick(toSelect); repaint(); } public void selectGroup(Predicate<Pair<TablePerspective, Group>> filter, boolean allowSelectAll) { this.selectionMode = ESelectionMode.GROUP; // highlight all 
possibles for (BrickColumn col : stratomex.getBrickColumnManager().getBrickColumns()) { TablePerspective tablePerspective = col.getTablePerspective(); if (allowSelectAll) changeHighlight(col.getHeaderBrick(), COLOR_POSSIBLE_SELECTION); for (GLBrick brick : col.getSegmentBricks()) { if (filter.apply(Pair.make(tablePerspective, brick.getTablePerspective().getRecordGroup()))) changeHighlight(brick, COLOR_POSSIBLE_SELECTION); } } repaint(); } @ListenTo(sendToMe = true) private void onHighlight(HighlightBrickEvent event) { BrickColumnManager manager = stratomex.getBrickColumnManager(); BrickColumn brickColumn = manager.getBrickColumn(event.getStratification()); if (brickColumn == null) return; Color c = event.isHighlight() ? event.getColor() : null; if (event.getGroup() == null) { changeHighlight(brickColumn.getHeaderBrick(), c); } else { Group g = event.getGroup(); for (GLBrick brick : brickColumn.getSegmentBricks()) { if (g.equals(brick.getTablePerspective().getRecordGroup())) { changeHighlight(brick, c); break; } } } } private void repaint() { stratomex.updateLayout(); stratomex.setDisplayListDirty(); } /** * @param brickId * @return */ private GLBrick findBick(int brickId) { for (BrickColumn col : stratomex.getBrickColumnManager().getBrickColumns()) { if (col.getHeaderBrick().getID() == brickId) return col.getHeaderBrick(); for (GLBrick brick : col.getSegmentBricks()) { if (brick.getID() == brickId) return brick; } } return null; } /** * @param index * @param independentOne * @return */ private ElementLayout createTemplateElement(TablePerspective source, boolean independentOne) { createWizard(source, independentOne); ElementLayout l = ElementLayouts.wrap(wizard, 120); l.addBackgroundRenderer(new TemplateHighlightRenderer()); l.addBackgroundRenderer(new WizardActionsLayoutRenderer(stratomex, this)); return l; } private WizardElementLayout layout(ElementLayout body) { return new WizardElementLayout(body); } private void createWizard(TablePerspective source, boolean 
independentOne) { if (source == null) { wizard = AddWizardElementFactory.create(this, stratomex); wizardMode = EWizardMode.GLOBAL; } else if (independentOne) { wizard = AddWizardElementFactory.createIndepenent(this, stratomex, source); wizardMode = EWizardMode.INDEPENDENT; } else { wizard = AddWizardElementFactory.createDependent(this, stratomex, source); wizardMode = EWizardMode.DEPENDENT; } stratomex.registerEventListener(wizard); wizard.prepare(); } @ListenTo(sendToMe = true) private void onAddEmptyColumn(AddNewColumnEvent event) { if (!wizardPreviews.isEmpty() || wizardElement != null) // only one at one time return; int index = 0; TablePerspective source = null; BrickColumnManager brickColumnManager = stratomex.getBrickColumnManager(); switch (event.getMode()) { case DEPENDENT: for (BrickColumn col : brickColumnManager.getBrickColumns()) { if (col.getID() == event.getObjectId()) { source = col.getTablePerspective(); break; } index++; } if (source == null) return; break; case INDEPENDENT: for (BrickColumn col : brickColumnManager.getBrickColumns()) { if (col.getID() == event.getObjectId()) { source = col.getTablePerspective(); index -= 1; // left of break; } index++; } if (source == null) return; break; default: if (event.getObjectId() <= 0) { // left or first index = -1; } else { // right of BrickColumn col = brickColumnManager.getBrickColumnSpacers().get(event.getObjectId()).getLeftDimGroup(); index = col == null ? 
-1 : brickColumnManager.getBrickColumns().indexOf(col); } } previewIndex = index; wizardElement = createTemplateElement(source, event.getMode() == EWizardMode.INDEPENDENT); wizardElementWrapper = layout(wizardElement); stratomex.relayout(); } @ListenTo(sendToMe = true) private void onWizardAction(WizardActionsEvent event) { switch (event.getPickingType()) { case CONFIRM_PICKING_TYPE: if (wizardPreviews.isEmpty()) return; // remove the preview buttons if (!wizardPreviews.isEmpty()) { BrickColumn wizardPreview = wizardPreviews.get(0); final Row layout = wizardPreview.getLayout(); layout.clearForegroundRenderers(AddAttachedLayoutRenderer.class, GLContext.getCurrentGL().getGL2()); layout.clearForegroundRenderers(WizardActionsLayoutRenderer.class, GLContext.getCurrentGL().getGL2()); if (canHaveDependentColumns(wizardPreview)) layout.addForeGroundRenderer(new AddAttachedLayoutRenderer(wizardPreview, this, false)); if (canHaveIndependentColumns(wizardPreview)) layout.addForeGroundRenderer(new AddAttachedLayoutRenderer(wizardPreview, this, true)); } // reset done(true); stratomex.relayout(); break; case CANCEL_PICKING_TYPE: if (wizardMode == EWizardMode.INDEPENDENT && !wizardPreviews.isEmpty()) { // restore the old unstratified dependend one updateDependentBrickColumn(null, wizardPreviews.get(0)); } for (BrickColumn col : wizardPreviews) { stratomex.removeTablePerspective(col.getTablePerspective()); } // reset done(false); stratomex.relayout(); break; case BACK_PICKING_TYPE: if (wizard != null) wizard.goBack(); break; } repaint(); } /** * listens to remove events done via the remove button and check if this was our template * * @param event */ @ListenTo private void onRemoveTablePerspective(RemoveTablePerspectiveEvent event) { if (event.getReceiver() != stratomex) return; // removed my template for (BrickColumn wizardPreview : wizardPreviews) { if (wizardPreview.getTablePerspective() == event.getTablePerspective()) done(false); } } /** * @param brickColumn */ @Override 
public void addedBrickColumn(BrickColumn brickColumn) { if (wizardPreviews.contains(brickColumn)) return; Row layout = brickColumn.getLayout(); if (canHaveDependentColumns(brickColumn)) layout.addForeGroundRenderer(new AddAttachedLayoutRenderer(brickColumn, this, false)); if (canHaveIndependentColumns(brickColumn)) layout.addForeGroundRenderer(new AddAttachedLayoutRenderer(brickColumn, this, true)); } /** * determines whether a given BrickColumn can have dependent ones * * @param brickColumn * @return */ private static boolean canHaveDependentColumns(BrickColumn brickColumn) { IBrickConfigurer b = brickColumn.getBrickConfigurer(); if (b instanceof PathwayDataConfigurer) { return false; } if (b instanceof ClinicalDataConfigurer) { return false; } return true; } private static boolean canHaveIndependentColumns(BrickColumn brickColumn) { IBrickConfigurer b = brickColumn.getBrickConfigurer(); if (b instanceof PathwayDataConfigurer) { return false; // TODO } if (b instanceof ClinicalDataConfigurer) { return b.getBrickSortingStrategy() instanceof NoSortingSortingStrategy; } return false; } /** * */ private void done(boolean confirmed) { selectionMode = null; selectionCurrent = null; // clear highlights for (BrickColumn col : stratomex.getBrickColumnManager().getBrickColumns()) { col.setHighlightColor(BrickColumn.REVERT_COLOR); for (GLBrick brick : col.getSegmentBricks()) { brick.getLayout().clearBackgroundRenderers(FrameHighlightRenderer.class, GLContext.getCurrentGL().getGL2()); } } // cleanup template if (wizardElement != null) cleanupWizardElement(); wizardPreviews.clear(); if (wizard != null) { wizard.done(confirmed); wizard.destroy(GLContext.getCurrent().getGL().getGL2()); wizard = null; } } private void cleanupWizardElement() { wizardElement.setRenderer(null); wizardElementWrapper.destroy(GLContext.getCurrent().getGL().getGL2()); wizardElement = null; wizardElementWrapper = null; } @ListenTo(sendToMe = true) private void 
onUpdatePreview(UpdateStratificationPreviewEvent event) { boolean wasOnTheFly = wizard == null; if (wizard == null) { // no wizard there to handle add a template column on the fly initIntermediateWizard(); wizard = AddWizardElementFactory.createForStratification(this, stratomex); stratomex.registerEventListener(wizard); wizard.prepare(); } wizard.onUpdate(event); if (wasOnTheFly && !wizardPreviews.isEmpty()) onWizardAction(new WizardActionsEvent(CONFIRM_PICKING_TYPE)); } private TablePerspective initIntermediateWizard() { final BrickColumnManager bcm = stratomex.getBrickColumnManager(); BrickColumn selected = bcm.getActiveBrickColumn(); TablePerspective selectedTP = selected == null ? null : selected.getTablePerspective(); wizardMode = EWizardMode.GLOBAL; previewIndex = selected != null ? bcm.indexOfBrickColumn(selected) : bcm.getRightColumnStartIndex() - 1; return selectedTP; } @ListenTo(sendToMe = true) private void onUpdatePreview(UpdatePathwayPreviewEvent event) { boolean wasOnTheFly = wizard == null; if (wizard == null) { // no wizard there to handle add a template column on the fly initIntermediateWizard(); wizard = AddWizardElementFactory.createForPathway(this, stratomex); stratomex.registerEventListener(wizard); wizard.prepare(); } wizard.onUpdate(event); if (wasOnTheFly && !wizardPreviews.isEmpty()) onWizardAction(new WizardActionsEvent(CONFIRM_PICKING_TYPE)); } @ListenTo(sendToMe = true) private void onUpdateNumerical(UpdateNumericalPreviewEvent event) { boolean wasOnTheFly = wizard == null; if (wizard == null) { // no wizard there to handle add a template column on the fly initIntermediateWizard(); wizard = AddWizardElementFactory.createForOther(this, stratomex); stratomex.registerEventListener(wizard); wizard.prepare(); } wizard.onUpdate(event); if (wasOnTheFly && !wizardPreviews.isEmpty()) onWizardAction(new WizardActionsEvent(CONFIRM_PICKING_TYPE)); } /** * @param columns */ @Override public void addColumns(List<BlockAdapter> columns) { if 
(wizardElement == null)
		return;
	if (previewIndex < 0)
		columns.add(0, new BlockAdapter(wizardElementWrapper));
	else {
		int index = previewIndex - stratomex.getBrickColumnManager().getCenterColumnStartIndex();
		index = Math.min(index + 1, columns.size());
		columns.add(index, new BlockAdapter(wizardElementWrapper));
	}
}

/**
 * whether a template column exists or not
 *
 * @return
 */
@Override
public boolean isEmpty() {
	return wizardElement == null;
}

@Override
public boolean canShowDetailBrick() {
	return isEmpty() && (wizard == null);
}

public void replaceTemplate(TablePerspective with, IBrickConfigurer config, boolean extra, Color highlight) {
	List<Pair<Integer, BrickColumn>> added = null;
	final List<TablePerspective> withL = Collections.singletonList(with);
	if (highlight != null)
		delayedEvents.add(new HighlightBrickEvent(with, null, highlight).to(this));
	if (wizardElement != null) {
		assert !extra;
		BrickColumnManager bcm = stratomex.getBrickColumnManager();
		BrickColumn left = previewIndex < 0 ?
null : bcm.getBrickColumns().get(bcm.getCenterColumnStartIndex() + previewIndex);
		added = stratomex.addTablePerspectives(withL, config, left, false);
		if (added.size() > 0) {
			cleanupWizardElement();
			if (wizardMode == EWizardMode.INDEPENDENT) {
				updateDependentBrickColumn(with, added.get(0).getSecond());
			}
			wizardPreviews.add(added.get(0).getSecond());
		} else {
			wizardPreviews.clear();
		}
	} else if (!wizardPreviews.isEmpty()) {
		if (extra) {
			if (wizardPreviews.size() == 1) {
				// add extra
				added = stratomex.addTablePerspectives(withL, config, wizardPreviews.get(0), true);
				if (added.size() > 0) {
					wizardPreviews.add(added.get(0).getSecond());
				}
			} else {
				// update extra
				BrickColumn extraPreview = wizardPreviews.get(0);
				added = stratomex.addTablePerspectives(withL, config, extraPreview, true);
				stratomex.removeTablePerspective(wizardPreviews.get(1).getTablePerspective());
				if (added.size() > 0) {
					wizardPreviews.set(1, added.get(0).getSecond());
				} else {
					wizardPreviews.remove(1);
				}
			}
		} else {
			BrickColumn wizardPreview = wizardPreviews.get(0);
			if (wizardPreview.getTablePerspective().equals(with))
				// replace with itself
				return;
			added = stratomex.addTablePerspectives(withL, config, wizardPreview, true);
			stratomex.removeTablePerspective(wizardPreview.getTablePerspective());
			if (added.size() > 0) {
				if (wizardMode == EWizardMode.INDEPENDENT) {
					updateDependentBrickColumn(with, added.get(0).getSecond());
				}
				wizardPreviews.set(0, added.get(0).getSecond());
			} else {
				wizardPreviews.remove(0);
			}
		}
	} else if (stratomex.isDetailMode()) {
		return;
	} else {
		// create a preview on the fly
		added = stratomex.addTablePerspectives(withL, config, null, true);
		if (added.size() > 0) {
			wizardPreviews.add(added.get(0).getSecond());
		}
	}
	if (added != null && !added.isEmpty()) {
		wizardPreviews.get(0).getLayout()
				.clearForegroundRenderers(ALayoutRenderer.class, GLContext.getCurrentGL().getGL2());
		wizardPreviews.get(0).getLayout().addForeGroundRenderer(new WizardActionsLayoutRenderer(stratomex, this));
	}
}

/**
 *
updates the dependent brick column when an independent brick column is selected for it
 *
 * @param with
 */
private void updateDependentBrickColumn(TablePerspective with, BrickColumn new_) {
	BrickColumnManager bcm = stratomex.getBrickColumnManager();
	int index = bcm.indexOfBrickColumn(new_) + 1;
	if (index <= 0 || index >= bcm.getBrickColumns().size())
		return;
	BrickColumn toUpdate = bcm.getBrickColumns().get(index);
	TablePerspective from = toUpdate.getTablePerspective();
	IBrickConfigurer brickConfigurer = toUpdate.getBrickConfigurer();
	if (brickConfigurer instanceof ClinicalDataConfigurer) {
		ClinicalDataConfigurer configurer;
		TablePerspective to;
		if (with == null) {
			// restore the original one
			configurer = new ClinicalDataConfigurer();
			configurer.setSortingStrategy(new NoSortingSortingStrategy());
			to = from.getDataDomain().getTablePerspective(
					from.getDataDomain().getTable().getDefaultRecordPerspective(false).getPerspectiveID(),
					from.getDimensionPerspective().getPerspectiveID());
		} else {
			to = asPerspective(with.getRecordPerspective(), from);
			configurer = ClinicalDataConfigurer.create(stratomex, with, to);
		}
		List<Pair<Integer, BrickColumn>> pairs = stratomex.addTablePerspectives(Lists.newArrayList(to), configurer,
				new_, true);
		if (with == null && canHaveIndependentColumns(pairs.get(0).getSecond())) {
			// re-add the independent-column buttons
			pairs.get(0).getSecond().getLayout()
					.addForeGroundRenderer(new AddAttachedLayoutRenderer(pairs.get(0).getSecond(), this, true));
		}
		stratomex.removeTablePerspective(from);
	} else if (brickConfigurer instanceof PathwayDataConfigurer) {
		// TODO
		// TablePerspective t = asPerspective(with, pathway);
	}
}

public void replaceTemplate(ALayoutRenderer renderer) {
	if (wizardElement != null) {
		wizardElement.setRenderer(renderer);
		// the element layout does not propagate the size limits on its own
		renderer.setLimits(wizardElement.getSizeScaledX(), wizardElement.getSizeScaledY());
	} else if (!wizardPreviews.isEmpty()) {
		ElementLayout new_ =
ElementLayouts.wrap(renderer, 120);
		BrickColumn wizardPreview = wizardPreviews.get(0);
		previewIndex = stratomex.getBrickColumnManager().indexOfBrickColumn(wizardPreview) - 1;
		for (BrickColumn preview : wizardPreviews)
			stratomex.removeTablePerspective(preview.getTablePerspective());
		wizardPreviews.clear();
		wizardElement = new_;
		wizardElementWrapper = layout(wizardElement);
		new_.addBackgroundRenderer(new TemplateHighlightRenderer());
		new_.addForeGroundRenderer(new WizardActionsLayoutRenderer(stratomex, this));
	} else {
		ElementLayout new_ = ElementLayouts.wrap(renderer, 120);
		wizardElement = new_;
		wizardElementWrapper = layout(wizardElement);
		new_.addBackgroundRenderer(new TemplateHighlightRenderer());
		new_.addForeGroundRenderer(new WizardActionsLayoutRenderer(stratomex, this));
		stratomex.relayout();
	}
}

public void replaceOtherTemplate(Perspective underlying, TablePerspective numerical, boolean extra, Color highlight) {
	TablePerspective t = asPerspective(underlying, numerical);
	TablePerspective underlyingTP = findTablePerspective(underlying);
	if (underlyingTP == null)
		return;
	ClinicalDataConfigurer configurer = ClinicalDataConfigurer.create(stratomex, underlyingTP, t);
	replaceTemplate(t, configurer, extra, highlight);
}

public void replacePathwayTemplate(Perspective underlying, PathwayGraph pathway, boolean extra, Color highlight) {
	if (underlying == null) {
		replaceTemplate(new PrimitivePathwayRenderer(pathway, stratomex));
	} else {
		TablePerspective t = asPerspective(underlying, pathway);
		replaceTemplate(t, new PathwayDataConfigurer(), extra, highlight);
	}
}

private TablePerspective findTablePerspective(Perspective record) {
	for (TablePerspective p : stratomex.getTablePerspectives())
		if (p.getRecordPerspective() == record)
			return p;
	return null;
}

private static TablePerspective asPerspective(Perspective underlying, TablePerspective clinicalVariable) {
	Perspective dim = clinicalVariable.getDimensionPerspective();
	ATableBasedDataDomain dataDomain =
(ATableBasedDataDomain) dim.getDataDomain();
	Perspective rec = null;
	for (String id : dataDomain.getRecordPerspectiveIDs()) {
		Perspective r = dataDomain.getTable().getRecordPerspective(id);
		if (r.getDataDomain().equals(underlying.getDataDomain()) && r.isLabelDefault() == underlying.isLabelDefault()
				&& r.getLabel().equals(underlying.getLabel())) {
			rec = r;
			break;
		}
	}
	if (rec == null) {
		// not found create a new one
		rec = dataDomain.convertForeignPerspective(underlying);
		dataDomain.getTable().registerRecordPerspective(rec);
	}
	return dataDomain.getTablePerspective(rec.getPerspectiveID(), dim.getPerspectiveID(), false);
}

protected static TablePerspective asPerspective(Perspective record, PathwayGraph pathway) {
	PathwayDataDomain pathwayDataDomain = (PathwayDataDomain) DataDomainManager.get().getDataDomainByType(
			PathwayDataDomain.DATA_DOMAIN_TYPE);
	ATableBasedDataDomain dataDomain = (ATableBasedDataDomain) record.getDataDomain();
	Perspective dimension = dataDomain.getTable().getDefaultDimensionPerspective(false);
	for (PathwayTablePerspective p : pathwayDataDomain.getTablePerspectives()) {
		if (p.getPathway().equals(pathway) && p.getRecordPerspective().equals(record)
				&& p.getDimensionPerspective().equals(dimension))
			return p;
	}
	// not found create new one
	PathwayTablePerspective pathwayDimensionGroup = new PathwayTablePerspective(dataDomain, pathwayDataDomain,
			record, dimension, pathway);
	// pathwayDimensionGroup.setPrivate(true);
	pathwayDataDomain.addTablePerspective(pathwayDimensionGroup);
	return pathwayDimensionGroup;
}

public List<TablePerspective> getVisibleTablePerspectives() {
	return stratomex.getTablePerspectives();
}

public ALayoutRenderer createPreviewRenderer(PathwayGraph pathway) {
	return new PrimitivePathwayRenderer(pathway, stratomex);
}

public ALayoutRenderer createPreviewRenderer(TablePerspective tablePerspective) {
	// create a preview similar to the header
	EEmbeddingID embeddingID = selectEmbeddingID(tablePerspective);
	Set<String> remoteRenderedViewIDs
= ViewManager.get().getRemotePlugInViewIDs(GLStratomex.VIEW_TYPE, embeddingID.id());
	MultiFormRenderer multiFormRenderer = new MultiFormRenderer(stratomex, true);
	List<TablePerspective> tablePerspectives = Lists.newArrayList(tablePerspective);
	String brickEventSpace = EventPublisher.INSTANCE.createUniqueEventSpace();
	for (String viewID : remoteRenderedViewIDs) {
		multiFormRenderer.addPluginVisualization(viewID, GLStratomex.VIEW_TYPE, embeddingID.id(),
				tablePerspectives, brickEventSpace);
	}
	return multiFormRenderer;
}

private static EEmbeddingID selectEmbeddingID(TablePerspective tablePerspective) {
	EEmbeddingID embeddingID;
	if (tablePerspective instanceof PathwayTablePerspective)
		embeddingID = EEmbeddingID.PATHWAY_HEADER_BRICK;
	else if (DataDomainOracle.isClinical(tablePerspective.getDataDomain()) && hasIntegers(tablePerspective))
		embeddingID = EEmbeddingID.CLINICAL_HEADER_BRICK;
	else if (DataDomainOracle.isCategoricalDataDomain(tablePerspective.getDataDomain()))
		embeddingID = EEmbeddingID.CATEGORICAL_HEADER_BRICK;
	else
		embeddingID = EEmbeddingID.NUMERICAL_HEADER_BRICK;
	return embeddingID;
}

/**
 * @param tablePerspective
 * @return
 */
private static boolean hasIntegers(TablePerspective tablePerspective) {
	Table table = tablePerspective.getDataDomain().getTable();
	VirtualArray dva = tablePerspective.getDimensionPerspective().getVirtualArray();
	VirtualArray rva = tablePerspective.getRecordPerspective().getVirtualArray();
	if (dva.size() == 0 || rva.size() == 0)
		return false;
	return table.getRawDataType(dva.get(0), rva.get(0)) != EDataType.STRING;
}

@Override
public Collection<? extends String> addEmptyStrings() {
	return Arrays.asList("To add a column showing a dataset", " click the \"+\" button at the top",
			"or use the LineUp or Data-View Integrator view", "",
			"Refer to http://help.caleydo.org for more information.");
}
}
\section*{Introduction}

Measuring the refractive index $n$ of a substance or medium is part of every introductory physics lab. Various approaches to determining this index have been developed over the years, based on the different ways light reflects and transmits in the medium. With the introduction of lasers in basic physics courses, a number of these methods have become accessible to the undergraduate physics laboratory. Several of these techniques are highly accurate and use specific optical equipment, like spectrometers, interferometers, or microscopes. In the laboratory, the method instructors choose to measure the refractive index of a liquid depends on different factors, such as the optical properties of the substance they want to study, or simply the resources at hand, which in many cases are very limited. Among all these methods, the spherical concave mirror filled with a liquid is an excellent alternative for measuring the refractive index of the liquid, especially when no specialized apparatus is available and the accuracy of the measurements is not critical.

In this paper, we present a simple geometrical derivation of the refractive index of a transparent liquid obtained using a spherical concave mirror. This derivation relies mostly on Snell's law and the small-angle approximation. In addition, we use Gaussian optics equations for mirrors and thin lenses to verify the validity of the refractive index equation. The method is based on measurements of the actual and apparent positions of the centre of curvature of the mirror, when it is empty and when it is filled with a liquid, respectively. The laws of reflection and refraction are essential to understanding the physics behind this method. Students measuring indices of refraction of liquids with this technique are gradually introduced to the concepts of reflection and refraction of light, Snell's law, image formation by spherical mirrors, and Gaussian optics.
\section*{Spherical mirrors}

Let us first consider the spherical concave mirror shown in~\fref{Fig:01}. The \textit{optical axis} is the radial line through the centre of the mirror that intersects its surface at the vertex point $V$. Some relevant points on the optical axis are the \textit{centre of curvature} $C$ and the \textit{focal point} $F$. The centre of curvature coincides with the centre of the sphere of which the mirror forms a section. At the focal point, rays parallel to the optical axis and incident on the concave mirror intersect after being reflected by the mirror's surface~\cite{Wilson}.

\begin{figure}[h]
\centering
\includegraphics[clip=true,scale=0.75]{figure01}
\caption{\label{Fig:01}Positions of the focal point and centre of curvature of a spherical concave mirror. Rays 1 and 2, and their respective reflections $1'$ and $2'$ in the mirror, determine the location of the image.}
\end{figure}

The distance $CV$ between the centre of curvature and the vertex is equal to the radius of the sphere and is called the \textit{radius of curvature} $R$. Similarly, the distance $FV$ between the focal point of the mirror and the vertex is known as the \textit{focal length} $f$. In general, when the rays are close to the optical axis---that is, in the \emph{small-angle approximation}---the focal length can be shown to be half of the radius of curvature~\cite{Hecht}:
\begin{equation}\label{Eq:f}
f = \frac{R}{2}.
\end{equation}

The location and nature of the image formed by a spherical mirror can be determined by graphical ray-tracing techniques~\cite{Pedrotti}. To find the conjugate image point $I$ of an object point $O$ located at the centre of curvature $C$, the paths of any two rays leaving $O$ are sufficient~\cite{Suppapittayaporn}. We first use the so-called \textit{parallel ray} $1$ that is incident along a path parallel to the optical axis, strikes the mirror at point $H$, and is reflected through the focal point $F$ as ray $1'$.
Next, we use the so-called \textit{focal ray} $2$ that passes through the focal point and is reflected parallel to the optical axis as ray $2'$. The image point $I$ is formed where the two rays $1'$ and $2'$ intersect. This image is real, inverted, located at the centre of curvature $C$, and has the same size as the object, as shown in~\fref{Fig:01}.

\section*{The experiment}

Measuring the refractive index $n$ of a transparent liquid, like water, using a spherical concave mirror is based on locating the \emph{actual} and \emph{apparent} centres of curvature of the mirror, when it is empty and filled with a thin layer of water, respectively.

\begin{figure}[h]
\centering
\includegraphics[clip=true,scale=0.70]{figure02}
\caption{\label{Fig:02}Schematics of the spherical concave mirror experiment for measuring the refractive index of water. The support holding the lamp is moved up and down until a sharp image of the lamp is formed on the screen.}
\end{figure}

The typical experimental setup (see~\fref{Fig:02}) consists of a mirror, a support with a vertical rod, and a clamp holding a lamp (the source of light) and a screen horizontally~\cite{Rachna}. When there is no water in the mirror, the centre of curvature is located at $C$, and the radius of curvature is $R=CV$. From~\eref{Eq:f}, the focal length of the concave mirror is $f=FV=R/2$. When the mirror is filled with water, the apparent centre of curvature $C'$ moves down, and the apparent radius of curvature becomes $R'=C'V$. The new focal length is given by $f'=F'V=R'/2$.

To obtain the refractive index of water $n_{\mathrm{w}}$, the clamp that holds the lamp and the screen is moved to position $C$ until a sharp image of the lamp is formed by the empty mirror. The radius of curvature $R$ is then measured. Next, the mirror is filled with a thin layer of water, and the lamp and screen are moved down until a new sharp image of the lamp is formed on the screen. This position corresponds to the apparent centre of curvature $C'$.
The new radius of curvature $R'$ is then measured. The refractive index can be obtained by using the equation
\begin{equation}\label{Eq:RR01}
n_{\mathrm{w}} = \frac{R}{R'}.
\end{equation}

\section*{The Snell's law approach}

When the point object $O$ is placed at the centre of curvature $C$, and the mirror is empty (no water has been poured in), we may use the same graphical ray-tracing methods of~\fref{Fig:01} to locate the conjugate image point $I$. Ray $1_{\mathrm{a}}$ leaves point $O$ parallel to the optical axis, strikes the mirror at point $H$, and reflects as ray $1'_{\mathrm{a}}$. This ray intersects the principal axis at the focal point $F$. The focal length is then $f=FV$. The conjugate image point $I$ is formed at the centre of curvature $C$, as illustrated in~\fref{Fig:03}. The image is real, inverted, and has the same size as the object.

\begin{figure}[h]
\centering
\includegraphics[clip=true,scale=0.75]{figure03}
\caption{\label{Fig:03}Formation of the images at the positions of the actual and apparent centres of curvature.}
\end{figure}

Now, when the mirror is filled with a thin layer of water, the magnitude of its focal length decreases, and the apparent centre of curvature $C'$ moves closer to the vertex $V$ of the mirror. Ray $1_{\mathrm{a}}$ leaves object point $O$ parallel to the optical axis, strikes the mirror at point $H$, and then reflects. The reflected ray refracts from water into air, bending away from the surface normal at the point of incidence $P$, since the refractive index of water $n_{\mathrm{w}}$ is larger than that of air $n_{\mathrm{a}}$. The refracted ray $1'_{\mathrm{b}}$ intersects the optical axis at the new focal point $F'$. The new focal length becomes $f' = F'V$. If the object is placed at the apparent centre of curvature $C'$, and labelled object point $O'$, the conjugate image point $I'$ is found at the apparent centre of curvature $C'$. The image is real, inverted, and has the same size as the object (see \fref{Fig:03}).
Using Snell's law at surface point $P$, we get
\begin{equation}\label{Eq:Snell}
n_{\mathrm{w}} \sin\theta = n_{\mathrm{a}} \sin\phi,
\end{equation}
where $n_{\mathrm{w}}$ and $n_{\mathrm{a}}$ are the refractive indices of water and air, respectively. From this point on, we will assume $n_{\mathrm{a}}\approx 1.00$. \Fref{Fig:03} shows that from triangles $FHG$ and $F'H'G'$
\numparts
\begin{eqnarray}
HG =& FH\,\sin\theta \label{Eq:HG01} \\
H'G' =& F'H'\,\sin\phi. \label{Eq:HG02}
\end{eqnarray}
\endnumparts
The backward extension of the refracted ray $1'_{\mathrm{b}}$ strikes the mirror at point $H'$. If we assume that the focal lengths $f$ and $f'$ are large compared to the thickness of the water layer in the mirror, points $G$ and $G'$, as well as $H$ and $H'$, are very close to each other. Therefore, we can make the following approximations for the geometrical lengths:
\begin{eqnarray}\label{Eq:HG03}
\eqalign{
H'G' \approx& HG \\
F'H' \approx& F'H.
}
\end{eqnarray}
Physically, points $H$ and $H'$, as well as $G$ and $G'$, do not coincide but lie very close to each other. Using~\eref{Eq:HG03} in~\eref{Eq:HG02}, we may write
\begin{equation}\label{Eq:HG04}
HG = F'H\,\sin\phi.
\end{equation}
Since the left-hand sides of~\eref{Eq:HG01} and~\eref{Eq:HG04} are the same, we may equate both equations to obtain
\begin{equation}\label{Eq:sines}
FH\,\sin\theta = F'H\,\sin\phi.
\end{equation}
Using~\eref{Eq:Snell} in~\eref{Eq:sines}, we obtain
\begin{equation}\label{Eq:nw01}
\frac{FH}{F'H} = \frac{\sin\phi}{\sin\theta} = n_{\mathrm{w}}.
\end{equation}
If the angles $\theta$ and $\phi$ are small, their sines can be replaced by their tangents (small-angle approximation). Furthermore, the distance $GV$ in~\fref{Fig:03}, the \textit{sagittal depth} of the surface~\cite{Blaker}, is also small, and we may neglect it. Therefore, we can write
\begin{equation}\label{Eq:nw02}
n_{\mathrm{w}} = \frac{\tan\phi}{\tan\theta} = \frac{f}{f'}.
\end{equation}
Finally, using~\eref{Eq:f}, we can express the refractive index of water in terms of the actual and apparent radii of curvature as follows:
\begin{equation}\label{Eq:RR02}
n_{\mathrm{w}} = \frac{R}{R'}.
\end{equation}

\section*{The silvered lens analogue}

An expression for the refractive index of the liquid may also be obtained from the spherical mirror and thin-lens formulas
\numparts
\begin{eqnarray}
\frac{1}{s} + \frac{1}{s'} = - \frac{2}{r} \label{Eq:mirror01} \\
\frac{n_1}{s} + \frac{n_2}{s'} = \frac{n_2 - n_1}{r}, \label{Eq:lens01}
\end{eqnarray}
\endnumparts
where $s$ is the object distance, $s'$ is the image distance, and $r$ is the radius of curvature; $n_1$ and $n_2$ are the refractive indices of glass (liquid) and air, respectively.

\begin{figure}[h]
\centering
\includegraphics[clip=true,scale=0.75]{figure04}
\caption{\label{Fig:04}Schematic of the silvered lens analogue.}
\end{figure}

Consider a thin, plano-convex lens of radius of curvature $R=CV$, silvered on its curved side, as illustrated in~\fref{Fig:04}. For an object at infinity ($s \rightarrow \infty$), incident rays are parallel to the optical axis. After reflection at the curved face, we may use~\eref{Eq:mirror01} to get
\begin{equation}\label{Eq:mirror02}
\frac{1}{\infty} + \frac{1}{s_1} = - \frac{2}{R}.
\end{equation}
Thus, $s_1 = -R/2 = f$. Now, after refraction at the plane face ($r \rightarrow \infty$), and using~\eref{Eq:lens01} with $s=s_1$, $s'=s_2$, $n_1=n_{\mathrm{w}}$, and $n_2=1$, we obtain
\begin{equation}\label{Eq:lens02}
\frac{n_{\mathrm{w}}}{-R/2} + \frac{1}{s_2} = 0.
\end{equation}
Then, $s_2 = R/2n_{\mathrm{w}} = f'$ (see~\fref{Fig:04}). Finally, taking the ratio of the two focal lengths, we get
\begin{equation}\label{Eq:nw03}
\frac{f}{f'} = \frac{R/2}{R/2n_{\mathrm{w}}} = n_{\mathrm{w}},
\end{equation}
which is in accordance with~\eref{Eq:nw02} (see reference~\cite{Pedrotti}).
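As a quick numerical illustration of~\eref{Eq:RR02} (the values here are invented purely for the arithmetic, not measured data), suppose the empty mirror gives a sharp image at $R = 40.0$~cm and, once a thin layer of water is added, at $R' = 30.0$~cm. Then
\[
n_{\mathrm{w}} = \frac{R}{R'} = \frac{40.0~\mathrm{cm}}{30.0~\mathrm{cm}} \approx 1.33,
\]
which is close to the accepted value of about 1.33 for water at room temperature.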
\section*{Conclusions}

The experiment described in this paper gives a simple and effective method of measuring the refractive index of water (or any transparent liquid) using a spherical concave mirror. A valuable pedagogical aspect of this method is that students may observe that a real image of the object is formed at the centre of curvature of the empty mirror when the object is located at the same point. Similarly, a real image is formed at the apparent centre of curvature when the mirror is filled with water, if the object is located at the same place. This provides an easy way of finding the refractive index of the liquid by measuring the ratio between the actual and apparent radii of curvature. In addition, by using Snell's law and the reflection of rays in a concave spherical mirror, students should notice that the images are not only real, but also inverted, and have the same size as the object.

As shown by the Snell's law approach, with the concave mirror method we can obtain an expression for the refractive index of a liquid using merely Snell's law and the small-angle (paraxial) approximation, avoiding complications introduced by the thin-lens equation. In the silvered lens analogue, however, we use the spherical mirror and thin-lens equations to verify the validity of the refractive index equation obtained using Snell's law.

This experiment is suitable for the undergraduate physics laboratory. It is easy to set up and perform. It has been carried out successfully in the lab, providing accurate numerical values of the refractive index of water and other transparent liquids.

\ack
Author (AJ) gratefully acknowledges funding support from RCSA.

\section*{References}
You can use your communication skills to deliver bad news without unduly concerning your audience. Towards the end of the year, organizations present their financial results. Inevitably, for some organizations the financial reports will not be good, and the bad news will need to be presented carefully to avoid upsetting investors and internal staff. Bad news presentations can be a challenge to deliver for leaders who are inexperienced in public speaking, so use these communication skills to help get the job done.

Of the methods outlined here, this is the most ethically appropriate for most business bad news situations. It relies upon presenting some good news while unemotionally delivering the bad news, then reinforcing a positive message. In this technique, you should first present some positive news. This is a very important step, as it starts the presentation on a positive note. You are giving the audience a positive energy, and something to "hold on to" when you share the bad news later in the presentation.

Next, it is time to share the bad news. This should be done in a factual way. Avoid emotive language and an excessive delivery. Simply state what the bad news is and deliver it in a confident manner. You will gain the respect of your audience by not trying to disguise the bad news, whilst not overly alarming them with the poor results.

After delivering the bad news, it is time to lift the optimism levels of the audience by returning to some positive news. Often, this is delivered as an analysis of what was learned from the bad news, such as, "Our profit levels were below expectation; this is due to overproduction of widgets."

The last step in delivering the bad news is to avoid making excuses or apportioning blame. The bad news happened. There is nothing that can be done to change that situation. However, the lessons have been learned and are being applied.
When you can convey this in your presentation, you will earn the plaudits of your audience rather than their condemnation.

This technique is often used when you are at fault for the bad news and you want to justify, to some extent, the reasons for the poor performance. The first step in this technique is to find other problems or sources of bad news that are similar to yours. This could be along the lines of, "Our company profits are down 20%, but generally the market is down 25%", or "While we made a loss this year, our competitors have also lost money". You will want to have a list of references available for you to use. However, you will want to save some for a question and answer session to help you fend off any tricky questions.

After building a case that the bad news you are delivering is in line with wider external factors, you will want to start presenting a positive impression of the work that was done during the year. You do not want to paint a doom and gloom picture. Instead, you want to start lifting the optimism of your audience by highlighting any major achievements or good work that was completed.

Finally, you will want to start presenting a picture of the coming year, highlighting how the good work that has been done provides a foundation for moving forward. You should avoid identifying the pieces of work that were not completed. Instead, paint a picture of hope and a brighter future that can be built from the disappointments of the past year.

This is the traditional politicians' technique for delivering bad news. It involves presenting the bad news in a positive manner. This approach is very hard to do well; you need to be a professional politician. To perform this technique well, you need to know your subject very well. The more facts and statistics you have, the more comfortable you will be in delivering the bad news. The additional information will allow you to select the most appropriate information to "spin" successfully.
Following on from step one, you can ensure your presentation includes ample statistical evidence to support the position you are spinning. The more statistical evidence you provide, the stronger your position will be. This might be obvious, but it requires good preparation to ensure you don't inadvertently introduce facts that weaken your position. In the process of preparing your speech, you will identify a large number of statistics you can utilize in your presentation. Take the time to ensure each statistic that you use in the final version supports your position, and does not detract in any way.

Part of the process of delivering a "spin" presentation involves convincing the audience that you are sufficiently knowledgeable about the topic of the presentation. Only when they have confidence in your knowledge of the subject will they accept the statistics you are sharing in the presentation. If you fail to convey your expertise on the topic, your audience will question the statistics and "spin" you are presenting them.

To sell anything, you need to have a conviction and energy that your audience can relate to. When you deliver your presentation, you should ensure your energy levels remain high to provide the optimism that your audience needs to feel that the position of your company is good.

Inevitably, someone in your audience will recognize that you are only providing a selected view of the position of the company. They may interrogate you during a question and answer session, or provide negative commentary to others about your presentation after you've finished. You need to accept that will happen, and be prepared to defend your position professionally and passionately.

Communicating bad news is not something any of us want to do. Unfortunately, at some point, you will be faced with a situation where you have to deliver unpleasant information to an audience. How the information is packaged and delivered will greatly influence the response of your audience.
Using the techniques above to deliver a bad news presentation, you will enhance your reputation and ultimately your career.

Mark Kyte is a presentation skills expert who coaches clients around Australia, helping them improve their public speaking and presentation skills so they can advance in their careers.