\section{Introduction}
\vspace{-0.2cm}
To reduce the risks and costs of reworking and rescheduling, agile techniques have attracted attention for the development of safety-critical systems. Traditional standardized safety assurance, such as IEC 61508 \cite{iec61508}, is based on the V-model. Even though there is no prohibition against adapting standards to lightweight development processes with iterations, some limitations cannot be avoided during the adaptation \cite{turk2014limitations}.
Existing research on agile techniques for safety-critical systems strives for consistency with standards. Safe Scrum \cite{staalhane2012application} is a considerable success due to its comprehensive combination of Scrum and IEC 61508. However, an integrated safety analysis that addresses the changing architectures inside each sprint still needs to be developed. Therefore, in 2016, we proposed S-Scrum, which integrates a systems-theory-based safety analysis technique, STPA (System-Theoretic Process Analysis) \cite{leveson2011engineering}, proposed by Leveson in 2012, inside each sprint to guide a safe design \cite{wang2016toward}. \newline \newline
\emph{Problem statement.} We proposed to integrate STPA into a Scrum development process to enhance safety in agile development. However, this proposal has not been validated in practice. As far as we know, there exist no empirical data on applying Scrum to a safety-critical project with the integration of STPA. \newline \newline
\emph{Research objective and research questions.} In this article, we aim to explore the agility and safety of S-Scrum as well as challenges and their relevant optimizations for developing a safety-critical system called ``Smart Home". The research questions are as follows: \newline
\textbf{RQ 1} \emph{How does S-Scrum handle agility and safety in safety-critical systems?}\newline
\textbf{RQ 2} \emph{What are the challenges of S-Scrum in such a context?}\newline
\textbf{RQ 3} \emph{How could S-Scrum be optimized to overcome the challenges?}\newline
\textbf{RQ 4} \emph{What are the effects of the optimized S-Scrum on safety and agility?}\newline \newline
\emph{Contribution.} This paper provides the first case study on applying a Scrum development process with an integrated STPA to safety-critical systems. We investigated the effects and challenges of S-Scrum in the 1st stage of the case study. We then proposed an optimized S-Scrum and validated it in the 2nd stage of the case study. Finally, we preliminarily discussed the optimized S-Scrum in industry.\newline\newline
\emph{Outline.} The paper is organized as follows. First, we present the related work on using Scrum for safety-critical systems and normal Scrum development process improvement (Sect. 2). Then, we present the background about STPA and our previous work about S-Scrum (Sect. 3). After that, we describe the approach and results of the 1st stage of the case study (Sect. 4.1), and the 2nd stage of the case study (Sect. 4.2). Finally, we discuss the threats to validity (Sect. 5), and draw the conclusions (Sect. 6).
\vspace{-0.2cm}
\section{Related Work}
\vspace{-0.2cm}
To the best of our knowledge, few empirical studies on applying Scrum or other agile processes to safety-critical systems exist. Most of the research is still at the stage of theoretical illustration and validation \cite{ge2010iterative} \cite{vuori2011agile}. \par
Safe Scrum is a Scrum development process for safety-critical systems, which was developed to adhere to the general functional safety standard IEC 61508 \cite{staalhane2012application} \cite{hanssen2016quality}. Previous research has aligned Safe Scrum with other safety standards in different domains \cite{staalhane2013scrum} \cite{staalhanesafety}. However, purely theoretical validation is unable to cover the details of the process. More practical experience is becoming crucial. \par
Despite the limited practical experience in applying Scrum to safety-critical systems, there is a wealth of Scrum development process experience that can serve as a reference for the agile software process improvement of our project \cite{Moe:2010:TMU:1752257.1752480} \cite{begel2007usage}. Diebold et al. \cite{diebold2015practitioners} investigated the industrial usage of Scrum with respect to sprint length, events, team size, requirements engineering, roles, effort estimation, and quality assurance. Cho \cite{cho2010exploratory} conducted an in-depth case study in two organizations. The data were analyzed along four dimensions: human resource management, structured development process, environment, and information systems and technology. These factors were covered in our assessment of agility, considering our criteria to improve S-Scrum. \par
\vspace{-0.2cm}
\section{STPA and S-Scrum}
\vspace{-0.2cm}
STPA is a hazard analysis technique proposed by Leveson in 2012. It has been successfully used in various domains, such as aviation, automobiles, and healthcare. Compared with traditional safety analysis techniques, such as FMEA (Failure Mode and Effects Analysis) and FTA (Fault Tree Analysis), STPA is based on systems theory rather than on traditional reliability theory. Due to the increasing complexity of systems, accidents are no longer caused only by single component failures or chains of failure events, but also result from inadequate control actions. To ensure the safety of today's complex systems, the use of STPA is becoming necessary. Moreover, we proposed using STPA in a Scrum development process \cite{wang2016toward}, as current safety analysis techniques start from a complete design, which is not consistent with agile methodologies that advocate lightweight up-front planning and design. STPA, on the contrary, provides the necessary information to start from a high-level architecture and to guide the incremental design process. In S-Scrum, we integrate STPA mainly in three aspects: (1) During each sprint, we integrate STPA as safety-guided design. (2) At the end of each sprint, we use STPA on the product instead of a Reliability, Availability, Maintainability and Safety (RAMS) validation. (3) We replace the final RAMS validation with STPA. The other parts are kept consistent with Safe Scrum: (1) the environment description and the SSRS phases 1-4 (concept, overall scope definitions, hazard and risk analysis, and overall safety requirements); (2) test-driven development; (3) a safety product backlog; (4) a safety expert \cite{wang}. We aim to fill the gap of a lack of safety analysis in agile development and to enhance safety on the basis of a standards-based Scrum development process for safety-critical systems.
\vspace{-0.2cm}
\section{Case Study}
\vspace{-0.2cm}
To explore S-Scrum further, we conducted this study following the guidelines by Runeson \cite{runeson2009guidelines} and Yin \cite{yin2013case}. We designed this case study with a multi-staged procedure. Each stage has different objectives and research questions. We explored the challenges and optimizations of \textbf{S-Scrum} in stage 1, while we validated the \textbf{optimized S-Scrum} in stage 2.
\vspace{-0.2cm}
\subsection{Research Context}
The case study (including stage 1 and stage 2) was performed in a project developing a safety-critical system, Smart Home, between March 2016 and March 2017 at the Institute of Software Technology, University of Stuttgart. The project had 400 planned working hours per head with a headcount of 14 students. The students had taken part in a training program on agile development and STPA before joining the project and in a course on automation systems during the project. The Scrum Master was a research assistant with an experienced project management background, while the Product Owner and Safety Expert was another research assistant specializing in agile methods for safety-critical systems. All the students were supervised by three research assistants. The project was to build an IoT-based smart home with a smart coffee machine, a smart light alarm system, an autonomous parking system, a door-open system, and a smoke detector alarm system connected through the IoT server KAA\footnote{https://www.kaaproject.org/overview/}. The project ``Smart Home" is openly available on GitHub\footnote{https://github.com/ywISTE/student-project---Smart-Home}.
\vspace{-0.2cm}
\subsection{Case study - stage 1}
The objective of stage 1 is to validate the safety and agility of S-Scrum and optimize it. In stage 1, we focus on answering RQ 1, RQ 2, and RQ 3. The general research strategy in stage 1 is shown in Table 1.
\begin{table}[!h]
\scriptsize
\center
\caption{Research strategy in stage 1 (``DL"-Developer, ``SH"-Stakeholder, ``SM"-Scrum Master)}
\begin{tabular}{l|l|l|l|l}
\toprule
\textbf{Time}& Sprint 1 to sprint 5 & Sprint 6 to sprint 7 & Sprint 8 & Sprint 9\\ \toprule
\textbf{Process} & Scrum & S-Scrum & S-Scrum & S-Scrum \\ \hline
\textbf{\tabincell{l} {Data\\ collection}} & \tabincell{l} {Participant observation \\ Scrum artifacts \\ Documentation review}& \tabincell{l} {Participant observation \\ Scrum artifacts \\ Documentation review} & \tabincell{l} {Questionnaires} & \tabincell{l} {Semi-structured \\ interviews} \\ \hline
\textbf{Participants} & \tabincell{l} {DLs \\ SHs}& \tabincell{l} {DLs \\ SHs} & \tabincell{l} {13 voluntary DLs} & \tabincell{l} {5 voluntary DLs \\ 1 SM} \\ \hline
\textbf{\tabincell{l} {Data\\ types}} & Quantitative & Quantitative & Quantitative & Qualitative \\ \hline
\textbf{Analysis} & Sum of the numbers & Sum of the numbers & \tabincell{l} {Median \\MAD} & Coding \\ \hline
\textbf{Output} & \tabincell{l} {No safety data}& \tabincell{l} {Safety data: \\ M16.1-M16.3 \\M17.1-M17.3} & \tabincell{l} {Agility data: \\M1-M15} & \tabincell{l} {Challenges \\ and \\ optimizations \\of S-Scrum} \\
\bottomrule
\end{tabular}
\end{table}%
\vspace{-0.4cm}
\subsubsection{Data collection in stage 1}
\vspace{-0.2cm}
Stage 1 spans sprint 1 to sprint 9. Each sprint lasted three weeks. The agility-related quantitative data, M1 to M15, were collected through 13 questionnaires\footnote{The questionnaire is available: https://zenodo.org/record/439696\#.WODCovl96Uk}. Our participant observation as the Product Owner (the first author), the Scrum Master, and the customer additionally supported the evaluation and review of the results. The safety-related data, M16.1 to M16.3 and M17.1 to M17.3, were quantitatively collected during sprint 6 and sprint 7. From sprint 1 to sprint 5, we executed normal Scrum without safety analysis for adaptation and preparation for the project. STPA was performed by the safety expert and recorded privately using the STPA tool XSTAMPP\footnote{http://www.xstampp.de/}, while the hazards and safety requirements were recorded in the safety product backlog in Jira. \par
Based on the quantitative data for agility and safety, we then designed semi-structured interviews with 6 voluntary participants from the development team, including the Scrum Master and five developers. The interviews lasted 270 minutes overall. Each interview began with a specific set of questions regarding the observations. We then asked about the causalities. Finally, optimizations were collected in an open-ended mode. The interview guideline\footnote{The interview guideline is available: https://zenodo.org/record/439696\#.WODCovl96Uk} was provided before each interview. We recorded interview data in field notes and used the audio recordings for text transcription.
\vspace{-0.2cm}
\subsubsection{Data analysis in stage 1}
\vspace{-0.2cm}
We analyzed the data using a combination of GSN \cite{kelly2004goal} and GQM \cite{basili1992software}, referring partially to the VMF framework \cite{cruickshank2009validation}, as shown in Fig. 1. The data cover two aspects: agility (S1) and safety (S2). To evaluate and optimize agility (S1), we set 15 goals (G1 to G15) considering the Comparative Agility Survey \cite{williams2010driving}: G1 (Teamwork composition); G2 (Teamwork management); G3 (Communication); G4 (Requirements emergence); G5 (Technical design); G6 (Planning levels); G7 (Critical variables); G8 (Progress tracking); G9 (Sources of dates and estimates); G10 (When do we plan); G11 (Customer acceptance test); G12 (Timing); G13 (Quality focus); G14 (Reflection); G15 (Outcome measure). To reach G1 to G15, we analyzed M1 to M15 indirectly by setting sub-metrics. For example, M1 (Teamwork composition) was analyzed by M1.1 (Team members are kept as long as possible), M1.2 (Specialists are willing to work outside their specialty to achieve team goals), M1.3 (Everyone required to go from requirements to finished system is on the team), and M1.4 (People are on no more than two teams). Each sub-metric was rated on a 5-point ordinal scale (from 1 to 5: ``Negative", ``More negative than positive", ``Neither negative nor positive", ``More positive than negative", and ``Positive"). To investigate the in-depth challenges, we identified either negative values in the results or significant differences between normal Scrum and S-Scrum, and used them to formulate further interview questions. To analyze the interview results, we used NVivo11 for text encoding \cite{strauss1997grounded}.
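The aggregation step described above can be sketched in a few lines of Python; the helper function and the sample responses below are hypothetical and only illustrate how the median and the median absolute deviation (MAD) are computed over the 5-point ordinal answers:

```python
import statistics

def median_and_mad(responses):
    """Aggregate ordinal questionnaire responses
    (1 = "Negative" .. 5 = "Positive") into median and MAD."""
    med = statistics.median(responses)
    mad = statistics.median(abs(r - med) for r in responses)
    return med, mad

# Hypothetical answers of 13 developers to one sub-metric (e.g. M1.1).
m1_1 = [4, 4, 3, 5, 4, 2, 4, 3, 4, 5, 4, 4, 3]
med, mad = median_and_mad(m1_1)
print(med, mad)  # prints: 4 0
```

A median of 4 with a small MAD would read as ``more positive than negative" with little disagreement among participants.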
Concerning safety, G16 is refined into three questions with three corresponding metrics: the number of software hazards (M16.1), the number of software safety requirements (M16.2), and the number of safety requirements traceable to hazards (M16.3). G17 is evaluated by the number of mitigated hazards (M17.1) and the number of accepted safety requirements (M17.2) in the present sprint, and the number of rejected safety requirements (M17.3) in the project.
\begin{figure*}[!hbt]
\center
\includegraphics[width=0.8\textwidth]{generaldataanalysisstrategy3}
\caption{General data analysis strategy (``FG"-Final Goal, ``S"-Strategy, ``G"-Goal, ``C"-Context, ``Q"-Question, ``M"-Metric )}
\end{figure*}
\vspace{-0.2cm}
\subsubsection{Results in stage 1 - RQ 1: How does S-Scrum handle agility and safety in safety-critical systems? }
We investigated the effect on agility by comparing normal Scrum and S-Scrum according to the 15 metrics in Fig. 2. From the general overview, we can conclude that most of the agility values in S-Scrum are slightly worse than those in normal Scrum, while one metric shows strongly negative values (``when do we plan"). We discussed the results with the technical support team of the Comparative Agility Survey and received the feedback: \emph{when most of the values are more positive than negative (more than ``3"), we could say that the process is agile enough.} Moreover, most values show relatively small differences between normal Scrum and S-Scrum. Thus, we consider the agility of S-Scrum to be acceptable. Yet, optimizations are needed.
Regarding the safety of S-Scrum, we performed two rounds of STPA in sprint 6. We found 6 software hazards (M16.1) and 15 safety requirements (M16.2), all of which can be traced back to software hazards (M16.3). Three hazards were mitigated (M17.1), while 14 safety requirements were accepted (M17.2). In sprint 7, we again performed two rounds of STPA. We found 10 software hazards (M16.1) and 24 safety requirements (M16.2), which can likewise all be traced back to software hazards (M16.3). Six hazards were mitigated (M17.1), while 23 safety requirements were accepted (M17.2). Each sprint had 1 rejected safety requirement due to hardware limitations (M17.3). \par
\begin{figure*}[!hbtp]
\center
\includegraphics[width=1.0\textwidth]{boxplotr1}
\caption{Boxplots for general agility comparison between normal Scrum and S-Scrum (From ``1" to ``5" means less agile (``negative") to very agile (``positive"))}
\end{figure*}
\vspace{-0.2cm}
\subsubsection{Results in stage 1 - RQ 2 \& RQ 3: What are the challenges of S-Scrum in such context? \& How could S-Scrum be optimized to overcome the challenges? }
To optimize S-Scrum, we derived six challenges from the six abnormal values (see data analysis in stage 1) from the sub-metrics inside these 15 metrics. \newline
\emph{Challenge 1: The priorities of safety requirements and functional requirements conflict.}
In normal Scrum, the management and the development team determine the sprint backlog with functional requirements in the sprint planning meeting. All team members have a clear overview of and commitment to the sprint plan with relatively high-level features. The developers accomplish each item with their own detailed tasks. The requirements from the management and the concrete realizations from the developers reach a consensus during each sprint. In S-Scrum, the integrated STPA and the safety requirements break this balance. The functional requirements are correlated with the safety requirements. However, some developers stated: \emph{functional requirements are more important than the safety requirements.} We found that the need for long-term quality was given a lower priority than the need for short-term progress \cite{moe2012challenges}. Moreover, the safety expert spent relatively little time working with the team members, which also influenced the decision making. As one developer mentioned: \emph{The safety expert is not working in the same room with the development team and has an inconsistent working time.} Thus, the lack of an in-time decision maker on the safety requirements, together with the neglect of safety requirements in the development team, caused the conflict.\newline
To face this challenge, a \textbf{safety culture} should be integrated into the lightweight development process. We suggest including an \textbf{internal safety expert} in the development team to (1) spread the safety culture; (2) increase the safety expert's working time with the team members; (3) clarify confusing safety requirements. An \textbf{external safety expert} is necessary to maintain communication with the other stakeholders. To fill the gap between the external safety expert and the development team, the development team suggests that the external safety expert should join the weekly Scrum meeting at least once. The discussion between the management, the external safety expert, and the internal safety expert could strike a fresh balance among the priorities. \newline
\emph{Challenge 2: The communication between team members and safety expert is disturbed.}
To start with, unclear safety-related documentation hinders effective communication. The team members mentioned: \emph{it is difficult to comprehend the purpose of the safety expert and integrate it into our daily work from the existing documents.} Moreover, the development team's lack of safety-related knowledge hampers the discussion of safety issues. Finally, the insufficient time spent between the safety expert and the development team also causes poor communication. Without an unobstructed workplace for communicating about the work progress within the team, the safety assurance could either be a superficial decoration or, even worse, a roadblock to fast product delivery.\newline
To face this challenge, in addition to the separated \textbf{internal safety expert} and \textbf{external safety expert}, a \textbf{weekly safety meeting} is suggested by an interviewee: \emph{The internal safety expert and external safety expert should meet each other at least once a week to exchange the status of the development team. Because the discussion should be deep in the safety area, it is not supposed to be established during the normal weekly Scrum meeting.} Last but not least, we improve our \textbf{safety epics} and \textbf{safety stories} to support an effective communication \cite{wangdc}, as shown in Sect. 4.3 (Optimized S-Scrum). \newline
\emph{Challenge 3: The safety requirements are not determined early enough to appropriately influence design and testing.}
In sprint 6 and sprint 7, the safety requirements were determined by the development team and the safety expert together in the sprint planning meeting. However, as one interviewee mentioned: \emph{the determination of safety requirements from the safety product backlog is too late to avoid a conflict between the functional requirements and their suitability for the coming sprint}. Thus, sometimes the functional design and testing have to start without the in-time safety requirements. \newline
To face this challenge, we propose \textbf{a pre-planning meeting} for solving the time pressure problem. First, the internal and external safety experts and the product owner discuss the safety product backlog and the functional product backlog in the pre-planning meeting. Then they brainstorm the results with the whole development team in the sprint planning meeting to gather more ideas and make each safety requirement clear. \newline
\emph{Challenge 4: The planning at the start of each iteration is insufficient.}
In the normal Scrum, the development team and the product owner plan the upcoming sprint in the sprint planning meeting by formulating the sprint backlog with estimated items, which makes the development team sufficiently confident about their plan.
However, the estimation and planning for the safety product backlog, as well as its interconnection with the functional product backlog, seemed not ideal, which made an in-time identification of the sprint backlog difficult. An interviewee said: \emph{It is difficult to determine the safety requirements when the development team has not planned the functional requirements for the coming sprint.} \newline
To face this challenge, we suggest and adapt an \textbf{agile safety plan} \cite{myklebust2016agile} in connection with the \textbf{pre-planning meeting} to increase the understanding of safety issues and enhance confidence. In our project, the results of STPA are part of the agile safety plan. \newline
\emph{Challenge 5: The time to perform upfront planning is late.}
A team member said: \emph{the pre-planning meeting for safety issues should start before the sprint planning meeting. But the concrete time should be decided between the external safety expert, the internal safety expert and the product owner.} Based on the experience of the previous sprints, it is better to start upfront planning one week before the sprint planning meeting (with 3-week sprints). The time could be adjusted depending on the sprint length. Further explanation is given in Challenge 4. \newline
\emph{Challenge 6: The safety requirements lack well-defined completion criteria.}
In normal Scrum, we have various testing methods to determine the completion of each feature, such as unit testing, system testing, regression testing, and acceptance testing, which are promoted to be automated in an agile context. However, few agile testing methods are suitable for validating safety requirements, as the safety requirements stem either from standard requirements or from the safety analysis, which differentiates safety testing from functional testing. In S-Scrum, we use UAT (User Acceptance Testing) to validate safety requirements. Thus, suitable safety criteria become important. \newline
To face this challenge, we use a ``Given-When-Then" format \cite{garg2015cucumber} for the \textbf{safety requirements' criteria}. The development team suggests that the external safety expert decide the \textbf{safety stories' criteria} and the internal safety expert decide the \textbf{safety tasks' criteria}. The whole development team could \textbf{brainstorm} both criteria. Finally, the product owner and the safety expert perform the acceptance testing.
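As an illustration, a ``Given-When-Then" criterion for a safety requirement can be expressed directly as an executable acceptance test. The following sketch is hypothetical: the simplified coffee-machine model, the 90-degree shut-off threshold, and all names are ours, not taken from the project backlog.

```python
# Hypothetical model of one safety requirement of a smart coffee machine:
# "the heater must shut off before the water overheats".

class CoffeeMachine:
    def __init__(self):
        self.water_temp = 20   # degrees Celsius
        self.heater_on = False

    def start_heating(self):
        self.heater_on = True

    def update(self):
        # Each tick heats the water; the safety constraint under test
        # forces the heater off once 90 degrees is reached.
        if self.heater_on:
            self.water_temp += 10
            if self.water_temp >= 90:
                self.heater_on = False

def test_heater_shuts_off_before_overheating():
    machine = CoffeeMachine()        # Given an idle coffee machine
    machine.start_heating()          # When the user starts heating
    for _ in range(20):
        machine.update()
    assert not machine.heater_on     # Then the heater has shut off
    assert machine.water_temp < 100  # and the water never boiled

test_heater_shuts_off_before_overheating()
```

The Given, When, and Then clauses map one-to-one onto setup, action, and assertions, so the criterion doubles as an automatable acceptance test.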
\vspace{-0.2cm}
\subsection{Case study - stage 2}
After the optimizations described above, the objective of stage 2 is to validate the safety and agility of the optimized S-Scrum and to discuss it in industry. We focus on answering RQ 4 together with some discussion from industry. The general research strategy in stage 2 is shown in Table 2.
\begin{table}[!hbt]
\scriptsize
\center
\caption{Research strategy in stage 2 (``DL"-Developer, ``SH"-Stakeholder, ``SM"-Scrum Master, ``PO"-Product Owner)}
\begin{tabular}{l|l|l|l}
\toprule
\textbf{Time}& Sprint 10 to sprint 11 & Sprint 12 & Sprint 13\\ \hline
\textbf{Process} & optimized S-Scrum & optimized S-Scrum & optimized S-Scrum \\ \hline
\textbf{Data collection} & \tabincell{l} {Participant observation \\ Scrum artifacts \\ Documentation review}& \tabincell{l} {Questionnaires} & \tabincell{l} {Semi-structured \\ interviews} \\ \hline
\textbf{Participants} & \tabincell{l} {DLs \\ SHs}& \tabincell{l} {8 voluntary DLs} & \tabincell{l} {1 PO (from EPLAN) \\ 1 SM (from EPLAN)} \\ \hline
\textbf{Data types} & Quantitative & Quantitative & Qualitative \\ \hline
\textbf{Analysis} & \tabincell{l} {Sum of the numbers \\ (compare with the data \\from stage 1)} & \tabincell{l} {Median and MAD \\ (compare with the data \\from stage 1)} & Coding \\ \hline
\textbf{Output} & \tabincell{l} {Safety data: \\ M16.1-M16.3 \\M17.1-M17.3} & \tabincell{l} {Agility data: \\M1-M15} & \tabincell{l} {Preliminary\\discussion \\ in industry} \\
\bottomrule
\end{tabular}
\end{table}%
\vspace{-0.2cm}
\subsubsection{Optimized S-Scrum}
\vspace{-0.2cm}
To have a clear overview, we compare the optimized S-Scrum to the normal Scrum and the S-Scrum in our project respectively in Table 3.
In the optimized S-Scrum, we differentiate between an internal safety expert and an external safety expert. A pre-planning meeting and weekly safety meetings are established between the safety experts. We include safety epics of the form ``To satisfy $<$the overall safety needs$>$, the system must $<$always be able to reach a safe state$>$" \cite{myklebust2016safety} in the story map. The safety product backlog is improved with an optimized safety story format: ``To keep $<$control action$>$ safe, the system must $<$achieve or avoid something$>$". An agile safety plan based on the STPA technique is suggested for a clear overview. The safety culture is expected to be enhanced by these additional activities.
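Because the optimized safety story is a fixed template over STPA outputs, backlog entries can be instantiated mechanically. A minimal sketch, with hypothetical control actions and constraints (not taken from the project):

```python
# Safety story template from the optimized S-Scrum backlog format.
SAFETY_STORY = "To keep {control_action} safe, the system must {constraint}."

# Hypothetical STPA results: (control action, mitigating constraint) pairs.
stpa_results = [
    ("the heating control of the coffee machine",
     "switch off the heater before the water overheats"),
    ("the door-open command",
     "avoid unlocking the door while the alarm is armed"),
]

# Instantiate one safety story per STPA result for the safety product backlog.
safety_product_backlog = [
    SAFETY_STORY.format(control_action=ca, constraint=c)
    for ca, c in stpa_results
]

for story in safety_product_backlog:
    print(story)
```

This keeps every safety story traceable to the STPA control action it originated from, which supports metric M16.3 (safety requirements traceable to hazards).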
\begin{table}[!h]
\scriptsize
\center
\caption{Normal Scrum, S-Scrum and optimized S-Scrum in Smart Home (``DL"-Developer, ``SM"-Scrum Master, ``PO"-Product Owner, ``SE"-Safety Expert)}
\begin{tabular}{l|l|l|l}
\toprule
\textbf{\tabincell{l} {Normal \\Scrum}}& \tabincell{l} {14 DLs\\1 SM\\1 PO} & \tabincell{l} {Sprint planning meeting\\Weekly Scrum meeting (2 times/week)\\Sprint review meeting\\Sprint retrospective meeting}& \tabincell{l} {Story map\\Product backlog\\Sprint backlog}\\ \hline
\textbf{S-Scrum} & \tabincell{l} {14 DLs\\1 SM\\1 PO\\1 SE} & \tabincell{l} {Sprint planning meeting \\(with safety planning)\\Weekly Scrum meeting (2 times/week) \\(with safety discussion)\\Sprint review meeting \\(with safety review)\\Sprint retrospective meeting}& \tabincell{l} {Story map\\Functional product backlog\\Safety product backlog\\Sprint backlog}\\ \hline
\textbf{\tabincell{l} {Optimized \\S-Scrum}} & \tabincell{l} {13 DLs\\1 SM\\1 PO\\1 external SE\\1 internal SE}& \tabincell{l} {Pre-planning meeting \\Sprint planning meeting\\(brainstorming requirements and criteria)\\Weekly Scrum meeting (2 times/week) \\Weekly safety meeting (1 time/week)\\Sprint review meeting \\(with safety review)\\Sprint retrospective meeting}&\tabincell{l} {Story map\\(with safety epics)\\Functional product backlog\\Safety product backlog\\(with safety stories)\\Sprint backlog\\Safety plan} \\
\bottomrule
\end{tabular}
\end{table}%
\vspace{-0.2cm}
\subsubsection{Data collection in stage 2}
\vspace{-0.2cm}
Stage 2 spans sprint 10 to sprint 13. The safety-related data, M16.1 to M16.3 and M17.1 to M17.3, were collected in the same way as in stage 1, by both the internal and the external safety expert. The agility-related data, M1 to M15, were collected through a second round of questionnaires\footnote{The questionnaire is available: https://zenodo.org/record/439696\#.WODCovl96Uk}. We further discussed the optimized S-Scrum by conducting 2 semi-structured interviews with one Scrum Master and one Product Owner from EPLAN GmbH, Germany. The interviews lasted 2 hours overall. We formulated questions about the status of the Scrum development process in the company's projects, the feasibility of the optimized S-Scrum in industry, and further suggestions from an industrial perspective. A project background illustration was provided before the interviews, together with the interview guideline\footnote{The interview guideline is available: https://zenodo.org/record/439696\#.WODCovl96Uk}. The field notes, interview transcripts, and voice recordings were all preserved for backup.
\vspace{-0.2cm}
\subsubsection{Data analysis in stage 2}
\vspace{-0.2cm}
The quantitative data were compared with the numbers in stage 1. The interview results from the industry were text encoded with: status, challenges, possible solutions, and the feasibility of S-Scrum.
\vspace{-0.2cm}
\subsubsection{Results in stage 2 - RQ 4: What are the effects of the optimized S-Scrum on safety and agility?}
\vspace{-0.2cm}
As shown in Fig. 3, most of the evaluated agility aspects sustained a good level of satisfaction with little variance. However, ``technical design" declined slightly. Due to the new role, the collaborative design work between safety and development fell on the internal safety expert, whose personal capability thus becomes important. To improve the technical design, cooperation between the external safety expert and the development team should be increased. \par
\begin{figure*}[!h]
\includegraphics[width=1.0\textwidth]{boxplotr2}
\caption{Boxplots for agility comparison between S-Scrum and optimized S-Scrum (From ``1" to ``5" means less agile (``negative") to very agile (``positive"))}
\end{figure*}
Regarding the safety of the optimized S-Scrum, as we can see in Fig. 4, the safety aspects improved (M16.1, M16.2, M16.3, M17.1, M17.2). We also rejected few safety requirements (M17.3): 1 (sprint 6), 1 (sprint 7), 0 (sprint 10), and 2 (sprint 11). We can conclude that, in general, the optimized S-Scrum has better safety assurance capabilities. However, there are still some abnormal values in sprint 7: the number of safety requirements, the number of safety requirements traceable to hazards, and the number of accepted safety requirements in sprint 7 are higher than in sprint 10. This may be traced back to the fitting-in phase of the optimized S-Scrum. Owing to the STPA training of the internal safety expert, we performed STPA only once in sprint 10, whereas in sprint 6, sprint 7, and sprint 11 we performed STPA twice. After the adaptation to the new role, the safety data rose in sprint 11.
\begin{figure*}[!h]
\center
\includegraphics[width=0.8\textwidth]{safetydata}
\caption{Safety data comparison between S-Scrum and optimized S-Scrum (``SRs" - Safety Requirements)}
\end{figure*}
\vspace{-0.4cm}
\subsubsection{Results in stage 2 - Discussion}
\vspace{-0.2cm}
To strengthen the study further, we preliminarily discussed our results in industry. For \emph{Challenge 1}, the conflict between functional and non-functional requirements seemed less obvious. As one interviewee mentioned: \emph{Since we have a relatively small amount of non-functional requirements, the priorities are always determined by the product owner together with the discussion with some external experts.} For \emph{Challenge 2}, one interviewee mentioned: \emph{To enhance the communication between the team members and the experts, we have a technical meeting before each sprint planning meeting. The product owner sends the emails to the relevant experts depending on the goals of each sprint. The experts are welcome to join the daily stand-up meetings.} Thus, the experts have sufficient time to keep up with the development team, while the technical knowledge is discussed in depth in the technical meeting before the sprint planning meeting. The project also has a good knowledge sharing mechanism to support the communication during each sprint. One interviewee mentioned: \emph{We use pair programming, formal guidelines to teach new colleagues, chat clients, and screen sharing. When the team includes experts, the product owner will contact 2-3 colleagues to discuss technical stuff, who will inform other colleagues.} A hierarchical communication mode is preferred for a multi-expert team. For \emph{Challenge 3}, the industrial projects have also mentioned this problem: \emph{Internal user stories are used to record the non-functional requirements. The execution of internal user stories is up to the team.} For \emph{Challenge 4}, the two teams execute sufficient planning. 
An interviewee mentioned: \emph{We have a refinement time slot to get all product backlog items approved (each team member has understood them) and not so much discussion in the sprint planning meeting.} The team members begin the refinement in the current sprint for the user stories of the next sprint. In Scrum, not all requirements have to be at the same level of detail at the same time \cite{rubin2012essential}. Progressive refinement could be further extended to safety planning and assessment to: (1) avoid premature development decisions based on high-level safety requirements; (2) reserve sufficient time for managing priorities between safety requirements and functional requirements; (3) increase the rework possibilities; (4) enhance the likelihood of using conversation to clarify safety requirements. This could also address \emph{Challenge 5}. For \emph{Challenge 6}, the refinement phase helps build a pre-understanding of each requirement and reach a common criterion in the sprint planning meeting.
The \emph{external expert} is a regular arrangement in industry. An interviewee mentioned: \emph{We prefer some experts with deep knowledge in the team, but the arrangement of an internal expert has to take more issues into account, such as training, responsibility, and even personal development.} An external safety consultant who tests the products and delivers training, together with an internal safety initiative \cite{poller2017can} that promotes safety practices across groups in industry, could be aligned with our external and internal safety experts. \emph{Safety culture} in industry is enhanced either by setting regulations or through the established organizational structure and activities. An \emph{agile safety plan} is also required by some standards. They draw up the safety plan either in the technical meeting or in parallel with the refinement. The technical meeting suggested in industry could also be considered an \emph{extra (weekly) safety meeting}. The \emph{pre-planning meeting} seems to be a suitable form for realizing progressive refinement in industry. This alignment motivates more combinations between our optimizations and existing industrial practices. All the requirements and \emph{acceptance criteria} are elicited through \emph{brainstorming}. Effective communication plays a vital role in executing acceptance testing.
\vspace{-0.2cm}
\section{Threats to validity}
\textbf{Construct validity:} The first threat to construct validity is the general data analysis framework. To apply Scrum to safety-critical systems, we focus primarily on the safety and agility aspects in our exploratory study. In terms of agility, we referred to an official agility comparative survey \cite{williams2010driving} to ensure the coverage of the measurement. In terms of safety, S-Scrum was extended from Safe Scrum, which was originally developed in accordance with the general functional safety standard IEC 61508. Thus, the validation regarding consistency with IEC 61508 has not been included in the framework. Furthermore, in S-Scrum we mainly integrate STPA. We aim to validate the enhanced safety concerning the integrated safety analysis technique. Thus, the safety assurance technique's capability and the deliverable products' safety are set as two relevant goals. Yet, the goals and metrics may not be sufficient, and the validation framework could be extended. The second threat to construct validity is that the validation periods for S-Scrum and optimized S-Scrum were shorter than expected. We executed normal Scrum in the first five sprints to strengthen the students' background knowledge of agile techniques and to prepare the detailed organizational structure, which took a lot of time. \newline
\textbf{Internal validity:}
The first threat to internal validity is the arrangement of team roles. One of the authors acted as the product owner and the safety expert concurrently in sprint 6 and sprint 7. To avoid this threat, in alignment with the optimizations in sprint 10 and sprint 11, the product owner acted further as an external safety expert, and an internal safety expert was arranged in the development team. The second threat to internal validity lies in the qualitative data from the semi-structured interviews. The interviews were performed by one of the authors and audio-recorded; they were also conducted partially in German. To avoid subjective and language bias, the audio recordings were transcribed independently by two researchers (one a native German speaker) and compared to formulate a final result.\newline
\textbf{External validity:} A student project is different from an industrial project. However, H{\"o}st et al. \cite{host2000using} as well as Tichy and Kitchenham et al. \cite{tichy2000hints} argued that students can be acceptable subjects. To address this debatable issue, we mainly referred to an empirical study conducted by Falessi in 2017 \cite{ese}, in which 65 empirical researchers assessed 16 statements. They mentioned: \emph{Conducting experiments with professionals as a first step should not be encouraged unless high sample sizes are guaranteed or performing replicas is cheap.} According to preliminary research \cite{theo}, there exist few industrial projects developing safety-critical systems that have fully adopted a Scrum development process. S-Scrum was also only proposed in 2016 as a high-level process model. In addition, the long learning cycles and the novelty of the technology are two reasons to hesitate before using professionals: STPA was developed in 2012, and in industry there is still a lack of experts. Thus, we believe that in our research area a student project is a relatively suitable way to aggregate contributions. Nevertheless, the generalizability remains a critical concern. \newline
\textbf{Reliability:} The student project is a suitable way for a first validation. Yet, the results from the students are limited by their personal experience. Besides, the ``grading power" of the researchers may influence the results. We separated our research work from the final examination of the product to mitigate this threat.
\vspace{-0.4cm}
\section{Conclusion}
\vspace{-0.3cm}
The main benefit of our research is that it provides a first empirical and practical insight into applying Scrum to safety-critical systems with the integration of STPA. Moreover, the presented challenges (in priority management, communication, time pressure on determining safety requirements, safety planning, and safety requirements' acceptance criteria) and solutions (the split of the safety expert role, a pre-planning meeting, a regular safety meeting, improved safety epics, STPA-based safety stories, and an agile safety plan) could arouse practitioners' interest and indicate future research directions. The effects on the safety and agility aspects indicate the feasibility of aligning STPA with a Scrum development process. The discussion in industry motivates the further step of transferring the optimized S-Scrum from the academic environment to an industrial environment. However, S-Scrum and optimized S-Scrum were executed in a specific context, and our improvements rely on an academic project only. The generalization of the optimizations in industry remains subject to future work. Finally, regarding safety and security in agile development of today's cyber-physical systems, even though special attention has to be paid to the respective norms and standards, exploring the problems in practice also seems necessary. \par
\section{Acknowledgements}
\vspace{-0.3cm}
\footnotesize
We want to thank Dr. A. Nguyen-Duc for proofreading and his valuable suggestions. We are grateful to all participants involved in the case study. Finally, we want to thank everyone for the feedback on previous versions. The first author is supported by the LGFG (Stipendien nach dem Landesgraduiertenf{\"o}rdergesetz).
\vspace{-0.3cm}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 2,406 |
{"url":"http:\/\/www.ck12.org\/tebook\/Algebra-I-Teacher%2527s-Edition\/r1\/section\/2.4\/","text":"2.4: Multiplication of Rational Numbers\n\nDifficulty Level: At Grade Created by: CK-12\n\nLearning Objectives\n\nAt the end of this lesson, students will be able to:\n\n\u2022 Multiply by \\begin{align*}-1\\end{align*}.\n\u2022 Multiply rational numbers.\n\u2022 Identify and apply properties of multiplication.\n\u2022 Solve real-world problems using multiplication.\n\nVocabulary\n\nTerms introduced in this lesson:\n\nargument\nmultiplicative properties\nmultiplicative identity property\ndistributive property\n\nTeaching Strategies and Tips\n\nChanging the sign of a number is equivalent to multiplying it by \\begin{align*}-1\\end{align*}. In Example 1, students find the opposite of several numbers and expressions being careful to use parentheses when appropriate and simplifying the result.\n\nUse Examples 2 and 3 to justify why, when multiplying two fractions, the numerators multiply together and the denominators multiply together.\n\nUse the product of three or more fractions as in Examples 4c and 4d as an extension of the multiplication rule.\n\n\u2022 Multiply the following rational numbers.\n\n\\begin{align*}\\frac{3} {11} \\cdot \\frac{5} {7}\\end{align*}\n\nHint: The product of two rational numbers is the product of their numerators divided by the product of their denominators.\n\n\u2022 Multiply the following rational numbers.\n\n\\begin{align*}\\frac{11} {5} \\cdot \\frac{7} {4} \\cdot \\frac{3} {10}\\end{align*}\n\nHint: Multiply all the numerators and all the denominators. 
Do not convert the improper fraction to mixed form.\n\n\u2022 Multiply the following rational numbers.\n\n\\begin{align*}\\frac{5} {7} \\cdot 12\\end{align*}\n\nHint: Rewrite the \\begin{align*}12\\end{align*} as \\begin{align*}12\/1\\end{align*}, using the \u201cinvisible \\begin{align*}1\\end{align*}\u201d.\n\nStudents first learn about the convenience of canceling before multiplying in Examples 4d and 5.\n\n\u2022 Multiply the following rational numbers.\n\n\\begin{align*}\\frac{24} {33} \\cdot \\frac{8} {27} \\cdot \\frac{9} {64}\\end{align*}\n\nSolution:\n\n\\begin{align*}\\frac{24} {33} \\cdot \\frac{8} {27} \\cdot \\frac{9} {64} = \\frac{3 \\cdot 8} {3 \\cdot 11} \\cdot \\frac{8} {3 \\cdot 9} \\cdot \\frac{9} {8 \\cdot 8} = \\frac{\\cancel{3} \\cdot \\cancel{8}} {\\cancel{3} \\cdot 11} \\cdot \\frac{\\cancel{8}} {3 \\cdot \\cancel{9}} \\cdot \\frac{\\cancel{9}} {\\cancel{8} \\cdot \\cancel{8}} = \\frac{1} {33}\\end{align*}\n\nUse Examples 6-8 to introduce the four properties of real numbers which involve multiplication: the commutative, associative, multiplicative identity, and distributive properties.\n\n\u2022 A geometric interpretation of the commutative property is to consider finding the area of a rectangle. \\begin{align*}L \\times W\\end{align*} is the same number no matter how you draw the rectangle or what you call \\begin{align*}L\\end{align*} and \\begin{align*}W\\end{align*}; therefore, \\begin{align*}L \\times W = W \\times L\\end{align*}. Similarly, the commutative property says that the order for multiplying any two real numbers does not matter. See Example 6.\n\u2022 The associative property of multiplication concerns three or more numbers. 
Just as for addition, the product is the same regardless of how they are grouped and in which pair the multiplication takes place first.\n\u2022 State the rule being used in each example you do in the classroom.\n\nError Troubleshooting\n\nExample 1b: The opposite of \\begin{align*}\\pi\\end{align*} is simply \\begin{align*}-1 \\cdot (\\pi) = -\\pi\\end{align*}. There is no need to use the decimal expansion.\n\nExample 1c: Multiply both terms of the expression by \\begin{align*}-1\\end{align*}. This will make more sense to students after covering the distributive law in the next lesson.\n\n\u2022 Find the opposite of the expression\n\n\\begin{align*} x - 4y + 1\\end{align*}\n\nHint: multiply each of the three terms by \\begin{align*}-1\\end{align*}.\n\nThe difference between absolute value and other grouping symbols is that multiplying absolute value by \\begin{align*}-1\\end{align*} will not affect the argument; that is, a negative will not distribute into the absolute value. See Example 1d.\n\nGeneral Tip: It is helpful to note that\n\n\u2022 \\begin{align*}|x|\\end{align*} and \\begin{align*}|-x|\\end{align*} are always positive\n\u2022 \\begin{align*}-|x|\\end{align*} is always negative.\n\nGeneral Tip: A common mistake is to forget to cancel like factors before multiplying the fractions, as the numbers will only get larger and thus harder to factor. Have students factor numerators and denominators first to remove any repetitions by canceling. Then carry out the remaining easier multiplication.\n\nNotes\/Highlights Having trouble? 
Report an issue.\n\nColor Highlighted Text Notes\n\nShow Hide Details\nDescription\nTags:\nSubjects:","date":"2017-04-30 18:27:54","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 23, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9996553659439087, \"perplexity\": 1740.5572957181303}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-17\/segments\/1492917125719.13\/warc\/CC-MAIN-20170423031205-00378-ip-10-145-167-34.ec2.internal.warc.gz\"}"} | null | null |
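The fraction-multiplication and cancellation examples in the CK-12 lesson above can be checked mechanically. A minimal sketch using Python's standard `fractions` module (the choice of tooling is mine, not the lesson's):

```python
from fractions import Fraction

# Product of two rationals: multiply numerators, multiply denominators.
assert Fraction(3, 11) * Fraction(5, 7) == Fraction(15, 77)

# Product of three rationals; Fraction keeps results in lowest terms,
# matching the lesson's "cancel before multiplying" advice.
product = Fraction(24, 33) * Fraction(8, 27) * Fraction(9, 64)
print(product)  # 1/33

# Rewriting a whole number with the "invisible 1": 5/7 * 12 = 5/7 * 12/1.
print(Fraction(5, 7) * 12)  # 60/7
```

Because `Fraction` always reduces to lowest terms, it reproduces the canceled results directly.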
Rowan and Waddington Galleries
Volumes compiled by Barry Flanagan and his assistants, partially in collaboration with Rowan Gallery, London, UK and Waddington Galleries, London, UK as a record of work. Some of the early volumes were maintained in duplicate by Barry Flanagan and Waddington Galleries. They were amalgamated into a single series and then, in the late 1980s, partially reorganised. Some earlier records were integrated into larger volumes arranged by medium.
Compiled by Rowan Gallery, London, UK in collaboration with Barry Flanagan.
Various gallery systems
Files compiled retrospectively by assistants to Barry Flanagan, in the period 1987 - 1998, using printed typescript information from Waddington Galleries, London, UK, the Tate Gallery, London, UK and the Galerie Durand Dessert, Paris, France gallery computer systems, and subsequently added to.
Notes and correspondence
Papers accumulated retrospectively which gather together correspondence, lists of work and related notes pertaining to Barry Flanagan and his assistants' efforts to manage and trace his works.
Waddington Galleries
Print-outs from Waddington Galleries, London, UK gallery computer system made in 2009, possibly gathered by Louise Kelly and Eivlin Roden, assistants to Barry Flanagan, as part of the TMS database project.
Other stocklists and inventories
Inventories and lists of works by Barry Flanagan and works owned by Flanagan. Barry Flanagan maintained his own personal art collection, in addition to a collection under the name of Rowford Process from 2006.
7 folders, 4 volumes, PDF & Filemaker pro: 310.5 MB (1 folder, 1 CD)
JBF/6/1/3/1
Files on the work of Barry Flanagan compiled retrospectively in the late 1980s and then added to and annotated until 1998. Records of works include photographs, many of which date back to the 1960s and 1970s and were originally accumulated by Rowan Gallery, London, UK until 1976, and then by Waddington Galleries, London, UK, sheets of typescript information from Waddington Galleries computer system giving title, date, editions, status, provenance, dimensions, medium, installation shots, where it was exhibited and where it is referenced in publications, Waddington Galleries numbers, gallery computer systems numbers and related papers. Each file contains notes regarding the entry of information onto the TMS database compiled by employees of Flanagan in 2009 [August 2009]. Date range in title covers the dates of works featured, date range of file covers any additions and annotations made thereafter.
Files on the drawings, etchings, lino and wood cuts of Barry Flanagan compiled retrospectively in the late 1980s and then added to until 1992. Records for works include sheets of typescript information from the Tate Gallery, London, UK and Waddington Galleries, London catalogues giving title, date, editions, status, provenance, dimensions, medium, a photograph, photocopies of photographs, installation shots, where it was exhibited and where it is referenced in publications, gallery numbers, gallery computer systems numbers and related papers.
Galerie Durand-Dessert
Records of work held at, or sold by, the Galerie Durand-Dessert, Paris, France printed from the gallery computer system and acquired by Gabriela Salgado, assistant to Barry Flanagan, in 1998 as part of her efforts to accumulate and collate information on Flanagan's work.
Sculpture , 1959-1998
JBF/6/1/3/1.1. Mixed media 1965 - 1967
JBF/6/1/3/1.2. Mixed media 1967
JBF/6/1/3/1.7. Bronze A - C
JBF/6/1/3/1.8. Bronze D - H
JBF/6/1/3/1.9. Bronze I - L
JBF/6/1/3/1.10. Bronze M - S
JBF/6/1/3/1.11. Bronze T - U
JBF/6/1/3/1.12. Bronze V - Z
JBF/6/1/3/1.13. Stone 1973 - 1978
JBF/6/1/3/1.17. Ceramics 1975 - 1986
JBF/6/1/3/1.18. Polaroids of ceramic pieces
JBF/6/1/3/1.19. Other metal 1964 - 1983
JBF/6/1/3/1.20. Camdonian
JBF/6/1/3/1.16 Stone 1984 - 1987
Artwork records, photographs and notes
Includes photographs by David Ward.
JBF/6/1/1.13 Rowan and Waddington Galleries (1964-1992)
JBF/3/5/3.1 Other photographs of work (c.1976-2009)
JBF/3/6/1/3.5 Gallery portfolio (1964-1991)
Carved stone, 1985
Hayward Gallery, 1986
Falls the Shadow: Recent British and European Art - Hayward Annual 1986
Carnegie Institute
Villa Schiff-Giorgini
Sculture di Passaggio '87 | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 3,933 |
Program in Computing (PIC) Assistant Adjunct Professorship Positions: 2019-20
Requisition Number: JPF04330
Recruitment Period
Open date: September 25th, 2018
Last review date: November 15th, 2018 (applications received after this date will be reviewed by the search committee if the position has not yet been filled)
Final date: March 31st, 2019 (applications will continue to be accepted until this date, but those received after the review date will only be considered if the position has not yet been filled)
Description
Temporary Faculty Positions
The Department of Mathematics at the University of California, Los Angeles, invites applications for temporary and visiting appointments. Applicants must possess a Ph.D. in a relevant area and should show evidence of excellence in teaching and research. Preference will be given to applicants whose research connects to faculty interests in the department.
Postdoctoral Positions:
Program in Computing (PIC) Assistant Adjunct Professorships: Applicants for these positions must show very strong promise in teaching and research in an area related to computing. The teaching load is four one-quarter programming courses each year and one additional course every two years. Initial appointments are for one year and possibly longer, up to a maximum service of four years.
Applications and supporting documentation must be submitted online via http://www.mathjobs.org.
All letters of evaluation are subject to UCLA campus policies on confidentiality.
Refer potential reviewers to the UCLA statement of confidentiality at https://www.apo.ucla.edu/policies/the-call/summary-of-procedures/summary-10-statement-of-confidentiality
As a campus with a diverse student body, we encourage applications from women, minorities, and individuals with a history of mentoring under-represented minorities in the sciences. The preferred candidate will also demonstrate a commitment to mentoring students from underrepresented and underserved populations in the sciences, or demonstrate an interest in campus-wide or departmental programs that provide research and professional development opportunities for a diverse student body. Please include such statements within your Statement of Contributions to Equity, Diversity, and Inclusion.
The University of California is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, age or protected veteran status. For the complete University of California nondiscrimination and affirmative action policy see: UC Nondiscrimination & Affirmative Action Policy at: http://policy.ucop.edu/doc/4000376/NondiscrimAffirmAct
Appointments will be effective July 1, 2019 or later. Applications will be accepted until all positions are filled. For fullest consideration, all application materials should be submitted on or before November 15, 2018.
Curriculum Vitae - Your most recently updated C.V.
Statement of Research
Statement of Teaching
Statement of Contributions to Diversity - Statement addressing past and/or potential contributions to diversity through research, teaching, and/or service
REFERENCE REQUIREMENTS
3 letters of reference required
Job location: Los Angeles, CA
More information about this recruitment: http://www.math.ucla.edu/faculty-positions
Document requirements
Statement of Contributions to Diversity - Statement addressing past and/or potential contributions to diversity through research, teaching, and/or service.
Misc / Additional (Optional)
Create an ApplicantID
Provide required information and documents
If any, provide required reference information
To apply, please visit: https://recruit.apo.ucla.edu/JPF04330
The University of California is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age or protected veteran status. For the complete University of California nondiscrimination and affirmative action policy, see: UC Nondiscrimination & Affirmative Action Policy, https://policy.ucop.edu/doc/4000376/DiscHarassAffirmAction
Application Due: 4/3/2019
See all jobs in:
Science & Technology, Computer Science & Information Technology | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 3,522 |
import gql from 'graphql-tag';
import { createOperation } from '../../utils/createOperation';
import { selectURI } from '../selectURI';
const query = gql`
query SampleQuery {
stub {
id
}
}
`;
describe('selectURI', () => {
it('returns a passed in string', () => {
const uri = '/somewhere';
const operation = createOperation({ uri }, { query });
expect(selectURI(operation)).toEqual(uri);
});
it('returns a fallback of /graphql', () => {
const uri = '/graphql';
const operation = createOperation({}, { query });
expect(selectURI(operation)).toEqual(uri);
});
it('returns the result of a UriFunction', () => {
const uri = '/somewhere';
const operation = createOperation({}, { query });
expect(selectURI(operation, () => uri)).toEqual(uri);
});
});
| {
"redpajama_set_name": "RedPajamaGithub"
} | 6,384 |
The Disney Holiday Singalong Review
What's On Disney Plus > Blog > Brands > ABC > The Disney Holiday Singalong Review
Disney is back bringing famous musicians and dancers together to sing some holiday classics in an attempt to bring more Christmas cheer in "The Disney Holiday Singalong." This is the third singalong Disney has produced this year. The first two were released at the very beginning of the pandemic when the majority of the world had shut down. As we get closer to the end of the year holidays, Disney is bringing back the concept with many events still virtual.
This version features many famous celebrities including Michael Bublé, BTS, Derek and Julianne Hough, Pink, Ciara, Kerry Washington, Leslie Odom Jr., Katy Perry, Adam Lambert and famed tenor Andrea Bocelli. This eclectic group sang several holiday favorites, as well as a couple of songs from "Frozen," which makes sense given its winter theme. Every musician featured on this singalong is amazingly talented and brings their own unique voice and dancing skill to the songs they sing. And, almost all of the songs are spliced with memorable moments from Disney classics.
THE DISNEY HOLIDAY SINGALONG – "The Disney Holiday Singalong," is the third iteration in the ratings phenomenon franchise, with Ryan Seacrest returning to host the night of merry music and magic on MONDAY, NOV. 30 (8:00-9:00 p.m. EST). (ABC/Frank Micelotta)
If I had to pick a favorite performance in this special, I would pick Leslie Odom Jr.'s performance of "What's This?" from "The Nightmare Before Christmas." I love that movie. I love that song. And while Odom isn't Danny Elfman performing it, the former "Hamilton" star is still an amazingly talented artist and did a wonderful job. In contrast, my least favorite performance of the special was Derek Hough and his girlfriend performing "Jingle Bells." I know Hough and his girlfriend are primarily dancers, so I understand why the focus was more on their dancing than their singing, but it didn't do it for me. He sang the song well and he danced wonderfully; it just isn't what I wanted. That's not to take away from his performance and I hope other viewers enjoy it more than I did.
Overall, I love these singalongs. For about an hour on TV or 45 minutes on Disney+, we get to forget our troubles and just enjoy some insanely talented musicians perform their crafts for us. I know 2020 has been a rough year for us all, but it's hard not to be in the Christmas spirit during this special. I don't know if that spirit will last, but for a short time, enjoy it. It's a lot of fun.
Ranking: 4 stars out of 5
What did you think of the "The Disney Holiday Singalong?"
Jeremy has been a big Disney fan since he was a kid growing up during the Disney Renaissance. One day he hopes to go to every Disney Park in the world.
Tags: abc, disney, review, The Disney Holiday Singalong
JP December 4, 2020
What parents need to know! We were only able to watch the first episode. This show is not family friendly and definitely not something Walt would have ever agreed to. For starters, some of the singers need better "coverage" as they are not dressed properly. Pink's dress showed way too much cleavage. Clothing of some of the dancers are also a bit skimpy. And having Derrick Hough and his "girlfriend" start off in bed together sends a message that I know is "modern", but does not agree with what we are teaching our children. Call me old-fashioned, but this show is not good.
Emma Reynolds December 5, 2020
Not for kids!!!!! Thank you Disney for trashing up Christmas. We could not even enjoy the singers because the first thing we saw was teenage girls dressed in booty shorts and belly shirts building a snow man, singing Frozen. Then a disgusting display by a grown woman dancing questionably in her kitchen. Horribly disappointed would be an understatement. Sad... my kids were really looking forward to it. The performers sang well but I couldn't allow them to view it. I have a preteen daughter; I don't need Disney telling her it's ok to exploit her body too. Gross, shame on Disney.
Maria December 5, 2020
"The phenomenal Adam Lambert" always OWNS every song he sings...His voice touches my soul and brings tears to my eyes. I keep watching over and over. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,694 |
Q: Determine the acceleration and angular acceleration of a disc The question is:
A 90kg disc is floating in a frictionless vacuum. A 150N force is
applied to the outer rim of the disc. The disc has a radius of 0.25m
and a radius of gyration of 0.16m. What is the acceleration and
angular acceleration of this disc?
To solve it I set up these equations:
\begin{equation}
F_{a} = m\,a\tag{1}
\end{equation}
\begin{equation}
F_{\alpha} = \frac{I\,\alpha}{0.25}\tag{2}
\end{equation}
\begin{equation}
F_{a}+F_{\alpha} = 150\ \mathrm{N}\tag{3}
\end{equation}
You can find I and plug it in along with m, but that still leaves 4 unknowns and only 3 equations. I need a 4th equation but I'm not sure what else is known about the problem.
A: First, it's important to properly understand the equations of rotational motion. Rather than $F = ma,$ the operative equation of motion is $\tau = I \alpha$, where $\tau$ is the torque, $I$ is the moment of inertia and $\alpha$ is the angular acceleration. This problem also requires you to know the definition of the radius of gyration in the form of $r_g \equiv \sqrt\frac{I}{m}$. With a proper understanding of the definitions of $\tau$ and $\alpha$, you then have all the information that you need to solve the problem.
EDIT: The above answer referred solely to the rotational acceleration of the disk. The translational acceleration of the center of mass of the disk must be worked out separately using $F = ma$.
A: There is only one force applied to the system, $F = 150\,{\rm N}$, not two ($F_a$ and $F_\alpha$).
Then you have
* The total force applied equals the mass times the acceleration of the center of gravity.
* The total torque applied equals the mass moment of inertia (at the center of gravity) times the angular acceleration plus the gyroscopic forces (which are zero in your case).
So what is the torque applied when $F$ is located at the edge of the disk?
What is the mass moment of inertia of disk of mass $m$ with radius of gyration $\rho$?
Once you answer the above questions you can proceed to solve your problem.
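Putting the two answers together numerically (a quick sanity check of my own; the variable names and rounding are illustrative, not part of either answer):

```python
# Numbers from the question: 90 kg disc, 150 N applied at the rim,
# radius 0.25 m, radius of gyration 0.16 m.
m, F, r, r_g = 90.0, 150.0, 0.25, 0.16

# Translational: the single applied force accelerates the centre of mass.
a = F / m                 # ~1.67 m/s^2

# Rotational: torque about the centre of gravity, and I from r_g = sqrt(I/m).
tau = F * r               # 37.5 N*m
I = m * r_g ** 2          # 2.304 kg*m^2
alpha = tau / I           # ~16.28 rad/s^2

print(round(a, 2), round(alpha, 2))  # 1.67 16.28
```

Note that the disc both translates and rotates at the same time; there is no splitting of the 150 N between the two effects.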
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,452 |
{"url":"https:\/\/eepower.com\/news\/first-ever-power-conversion-standard-ipc-9592-gets-update\/","text":"News\n\n# First-Ever Power Conversion Standard IPC-9592 Gets Update\n\nJune 10, 2010 by Jeff Shepard\n\nIPC \u2013 Association Connecting Electronics Industries\u00ae has released the A revision of IPC-9592, Requirements for Power Conversion Devices for the Computer and Telecommunications Industries. First released in 2008, IPC-9592 sets the requirements for power conversion devices (PCDs) in the computer and telecommunications industries, including design for reliability, design qualification testing, manufacturing conformance testing and quality processes. The new A revision of the document provides suppliers and end-users of PCDs with expanded guidance for design qualification testing and new coverage of moisture sensitivity levels (MSLs) and corrosion of PCDs.\n\n\"As an original equipment manufacturer, my company can use the new A revision of IPC-9592 to clearly specify our requirements in the areas of testing and reliability that would weed out any proposals falling short of delivering high quality and reliability to our customers as promised,\" said Neil Witkowski, Reliability Manager for Alcatel-Lucent, and chairman of the IPC Power Conversion Devices Standard Subcommittee that developed IPC-9592A.\n\nSpecifically, revision A provides more definitive preconditioning tests for Category 2 printed board surface mount PCD modules with dc-to-dc converters. These tests also simulate stresses encountered during the solder reflow assembly process. Preconditioning tests help predict the results that occur with longer duration environmental stress tests, such as high temperature operating bias as well as power and temperature cycling.\n\nIn addition, revision A expands the coverage of moisture sensitivity levels (MSLs) for PCDs. In agreement with IPC J-STD-020, the specification takes a position that printed boards will default to an MSL rating of 2A. 
Therefore, many PCDs will now have an MSL rating of at least 2A due to the fact that a PCD product is typically assigned an overall rating equivalent to the worst-case MSL rating from its bill-of-material components.\n\nIPC-9592A also significantly extends its coverage of corrosion of PCDs, providing guidance for various industrial, manufacturing and uncontrolled external environments where gas phase, acidic entrainments in the air can quickly ruin any PCD.\n\nFinally, revision A significantly expands its description of highly accelerated life testing (HALT) and the application of HALT to PCDs. \"Implemented as a design test to improve the robustness of a product through a test-fail-fix process, HALT guidance provides my team with clearly defined targets,\" said Jerry Strunk, Technical Manager of Qualification & Compliance with Lineage Power Corp.. \"Incorporating IPC-9592A HALT requirements into our product development process provides OEM customers additional confidence in the reliability of our products.\"\n\nStrunk is also the Vice Chairman of the IPC Power Conversion Devices Standard Subcommittee which comprises representatives from leading original equipment manufacturers (OEMs) and power conversion equipment suppliers, including Alcatel-Lucent; Astec Power; Cisco Systems Inc.; Dell Inc.; Dli Labs; Ericsson Power Module AB; Hewlett-Packard Co.; IBM Corp.; Lineage Power Corp.; Lite-On Technology; Lite-On Trading USA, Inc.; Murata Power Solutions; Power-One; SolarBridge Technologies; and TDK-Lambda UK Ltd.\n\nIPC member companies may request a free copy of IPC-9592A within 90 days of its publication. Following the introduction period, members may purchase a copy for $40. 
Nonmembers may purchase the new standard for $80.","date":"2022-01-27 11:22:18","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.21242865920066833, \"perplexity\": 12998.480589479852}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320305260.61\/warc\/CC-MAIN-20220127103059-20220127133059-00239.warc.gz\"}"} | null | null |
Hello, my name is and I'm writing you today to learn more about the 2011 Honda Accord LX Sedan AT. I live in the area and I would like to hear back from you soon and learn more about this vehicle. Please call me at at your earliest convenience. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,626 |
Waypoints is a jQuery plugin that makes it easy to execute a function whenever you scroll to an element.
```js
$('.thing').waypoint(function() {
alert('You have scrolled to a thing.');
});
```
If you're new to Waypoints, check out the [Get Started](http://imakewebthings.github.com/jquery-waypoints/#get-started) section.
[Read the full documentation](http://imakewebthings.github.com/jquery-waypoints/#docs) for more details on usage and customization.
## Shortcuts
In addition to the normal Waypoints script, extensions exist to make common UI patterns just a little easier to implement:
- [Infinite Scrolling](http://imakewebthings.github.com/jquery-waypoints/shortcuts/infinite-scroll)
- [Sticky Elements](http://imakewebthings.github.com/jquery-waypoints/shortcuts/sticky-elements)
## Examples
Waypoints can also be used as a base for your own custom UI patterns. Here are a few examples:
- [Scroll Analytics](http://imakewebthings.github.com/jquery-waypoints/examples/scroll-analytics)
- [Dial Controls](http://imakewebthings.github.com/jquery-waypoints/examples/dial-controls)
## AMD Module Loader Support
If you're using an AMD loader like [RequireJS](http://requirejs.org/), Waypoints registers itself as a named module, `'waypoints'`. Shortcut scripts are anonymous modules.
## License
Copyright (c) 2011-2012 Caleb Troughton
Licensed under the [MIT license](https://github.com/imakewebthings/jquery-waypoints/blob/master/licenses.txt).
## Support
Unit tests for Waypoints are written with [Jasmine](http://pivotal.github.com/jasmine/) and [jasmine-jquery](https://github.com/velesin/jasmine-jquery). You can [run them here](http://imakewebthings.github.com/jquery-waypoints/test/). If any of the tests fail, please open an issue and include the browser used, operating system, and description of the failed test. | {
"redpajama_set_name": "RedPajamaGithub"
} | 4,304 |
{"url":"https:\/\/d2mvzyuse3lwjc.cloudfront.net\/pdfs\/NAG26\/Manual\/html\/g13\/g13bxc.html","text":"# NAG Library Function Document\n\n## 1Purpose\n\nnag_tsa_options_init\u00a0(g13bxc) initializes the structure options. This structure is used by some functions in Chapter g13. This function must be called before any direct assignments are made to members of the options structure.\n\n## 2Specification\n\n #include #include\n void nag_tsa_options_init\u00a0(Nag_G13_Opt\u00a0*options)\n\n## 3Description\n\nA number of optional parameters are applicable to several functions in Chapter g13. These arguments are set by means of a structure of type Nag_G13_Opt. This initialization function sets the members of the structure to null values indicating that the default values should be used for these arguments. If argument values other than the default values are required, they must be assigned to the appropriate members of the options structure in the calling program.\nThis assignment of optional parameters values to the options structure must be preceded by a call to nag_tsa_options_init\u00a0(g13bxc).\n\nNone.\n\n## 5Arguments\n\n1: \u00a0\u00a0\u2002$\\mathbf{options}$Nag_G13_Opt\u00a0*Output\nOn exit: the initialized option structure.\n\nNone.\n\nNot applicable.\n\n## 8Parallelism and Performance\n\nnag_tsa_options_init\u00a0(g13bxc) is not threaded in any implementation.","date":"2022-01-20 05:58:33","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 1, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7547544240951538, \"perplexity\": 
1570.86980373892}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320301720.45\/warc\/CC-MAIN-20220120035934-20220120065934-00067.warc.gz\"}"} | null | null |
Q: Peculiar Hamiltonian Phase space I was solving an exercise in classical mechanics:
Consider the following Hamiltonian
$H(p,q,t) = \frac{p^2}{2m} + \lambda pq + \frac{1}{2}m\lambda^2\frac{q^6}{q^4+\alpha^4}$
Where $\lambda,m,\alpha$ are positive parameters.
After solving the equations of motion, I found something odd: plotting the phase space (p,q) (which is y,x below), one finds closed loops along which the momentum never changes sign, yet the system is localized.
What sort of physical system does such a Hamiltonian describe? I cannot imagine something with an oscillating, never-zero momentum that is nevertheless localized in space.
I know this happens because of the cross-term, but I cannot find an interpretation for it.
A: The key insight into OP's question has already been provided by Ikiperu in the comments above. Here we just want to show that the problem becomes very simple to study in the corresponding Lagrangian formalism.
The Hamiltonian reads
$$\tag{1} H(p,q) ~:=~ \frac{p^2}{2m} + \lambda pq + \frac{m\lambda^2}{2}\frac{q^6}{q^4+\alpha^4}. $$
Since there is no explicit time dependence in (1), the Hamiltonian (= the mechanical energy of the system) is preserved.
The velocity can be calculated from Hamilton's equation
$$\tag{2} \dot{q}~=~\frac{\partial H}{\partial p}~=~\frac{p}{m}+ \lambda q. $$
If we eliminate the momentum
$$\tag{3} \frac{p}{m}~=~ \dot{q}-\lambda q $$
in the Hamiltonian (1), we get a surprisingly simple energy function
$$\tag{4} h(q,\dot{q})~=~ \frac{m}{2}\dot{q}^2+V(q). $$
Here the potential $V(q)$ is the double-well
$$\tag{5} V(q)~=~ -\frac{m\alpha^2}{2}\frac{\lambda^2}{\left(\frac{q}{\alpha}\right)^2+\left(\frac{\alpha}{q}\right)^2} ,$$
which has two stable positions
$$\tag{6} q~=~\pm \alpha.$$
In $(q,\dot{q})$ space, there are two stable points
$$\tag{7} (q,\dot{q})~=~(\pm \alpha,0)$$
on the horizontal $q$-axis.
In $(q,p)$ phase space, the two stable points
$$\tag{8} (q,p)~=~\pm \alpha(1,-m\lambda)$$
are shifted by the transformation (3), in accordance with OP's figure.
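As a quick numerical sanity check (my own sketch, not part of the original answer; the parameter and state values below are arbitrary), one can verify that substituting $p = m(\dot{q}-\lambda q)$ from eq. (3) into the Hamiltonian (1) reproduces the energy function (4) with the double-well potential (5):

```python
import math

# Arbitrary parameter and state values (my choice, just for the check)
m, lam, alpha = 1.3, 0.7, 2.1
q, qdot = 0.9, -0.4

def H(p, q):
    """Hamiltonian (1)."""
    return p**2 / (2 * m) + lam * p * q + 0.5 * m * lam**2 * q**6 / (q**4 + alpha**4)

def V(q):
    """Double-well potential (5)."""
    return -0.5 * m * alpha**2 * lam**2 / ((q / alpha)**2 + (alpha / q)**2)

p = m * (qdot - lam * q)          # eliminate the momentum via eq. (3)
h = 0.5 * m * qdot**2 + V(q)      # energy function (4)

assert math.isclose(H(p, q), h, rel_tol=1e-12)
print("H(p,q) =", H(p, q), "  h(q,qdot) =", h)
```

The identity holds for any values of $m,\lambda,\alpha$, and $V$ is indeed minimized at $q=\pm\alpha$, consistent with the stable points (6).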
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 5,273 |
<?php
namespace Zend\Config;
use Zend\Stdlib\ArrayUtils;
class Factory
{
/**
* Plugin manager for loading readers
*
* @var null|ReaderPluginManager
*/
public static $readers = null;
/**
* Plugin manager for loading writers
*
* @var null|WriterPluginManager
*/
public static $writers = null;
/**
* Registered config file extensions.
* key is extension, value is reader instance or plugin name
*
* @var array
*/
protected static $extensions = array(
'ini' => 'ini',
'json' => 'json',
'xml' => 'xml',
'yaml' => 'yaml',
);
/**
* Register config file extensions for writing
* key is extension, value is writer instance or plugin name
*
* @var array
*/
protected static $writerExtensions = array(
'php' => 'php',
'ini' => 'ini',
'json' => 'json',
'xml' => 'xml',
'yaml' => 'yaml',
);
/**
* Read a config from a file.
*
* @param string $filename
* @param bool $returnConfigObject
* @return array|Config
* @throws Exception\InvalidArgumentException
* @throws Exception\RuntimeException
*/
public static function fromFile($filename, $returnConfigObject = false)
{
$pathinfo = pathinfo($filename);
if (!isset($pathinfo['extension'])) {
throw new Exception\RuntimeException(sprintf(
'Filename "%s" is missing an extension and cannot be auto-detected',
$filename
));
}
$extension = strtolower($pathinfo['extension']);
if ($extension === 'php') {
if (!is_file($filename) || !is_readable($filename)) {
throw new Exception\RuntimeException(sprintf(
"File '%s' doesn't exist or is not readable",
$filename
));
}
$config = include $filename;
} elseif (isset(static::$extensions[$extension])) {
$reader = static::$extensions[$extension];
if (!$reader instanceof Reader\ReaderInterface) {
$reader = static::getReaderPluginManager()->get($reader);
static::$extensions[$extension] = $reader;
}
/** @var Reader\ReaderInterface $reader */
$config = $reader->fromFile($filename);
} else {
throw new Exception\RuntimeException(sprintf(
'Unsupported config file extension: .%s',
$pathinfo['extension']
));
}
return ($returnConfigObject) ? new Config($config) : $config;
}
/**
* Read configuration from multiple files and merge them.
*
* @param array $files
* @param bool $returnConfigObject
* @return array|Config
*/
public static function fromFiles(array $files, $returnConfigObject = false)
{
$config = array();
foreach ($files as $file) {
$config = ArrayUtils::merge($config, static::fromFile($file));
}
return ($returnConfigObject) ? new Config($config) : $config;
}
/**
* Writes a config to a file
*
* @param string $filename
* @param array|Config $config
* @return bool TRUE on success | FALSE on failure
* @throws Exception\RuntimeException
* @throws Exception\InvalidArgumentException
*/
public static function toFile($filename, $config)
{
if (
(is_object($config) && !($config instanceof Config)) ||
(!is_object($config) && !is_array($config))
) {
throw new Exception\InvalidArgumentException(
__METHOD__." \$config should be an array or instance of Zend\\Config\\Config"
);
}
$extension = substr(strrchr($filename, '.'), 1);
$directory = dirname($filename);
if (!is_dir($directory)) {
throw new Exception\RuntimeException(
"Directory '{$directory}' does not exist!"
);
}
if (!is_writable($directory)) {
throw new Exception\RuntimeException(
"Cannot write in directory '{$directory}'"
);
}
if (!isset(static::$writerExtensions[$extension])) {
throw new Exception\RuntimeException(
"Unsupported config file extension: '.{$extension}' for writing."
);
}
$writer = static::$writerExtensions[$extension];
        if (($writer instanceof Writer\AbstractWriter) === false) {
$writer = self::getWriterPluginManager()->get($writer);
static::$writerExtensions[$extension] = $writer;
}
if (is_object($config)) {
$config = $config->toArray();
}
$content = $writer->processConfig($config);
return (bool) (file_put_contents($filename, $content) !== false);
}
/**
* Set reader plugin manager
*
* @param ReaderPluginManager $readers
* @return void
*/
public static function setReaderPluginManager(ReaderPluginManager $readers)
{
static::$readers = $readers;
}
/**
* Get the reader plugin manager
*
* @return ReaderPluginManager
*/
public static function getReaderPluginManager()
{
if (static::$readers === null) {
static::$readers = new ReaderPluginManager();
}
return static::$readers;
}
/**
* Set writer plugin manager
*
* @param WriterPluginManager $writers
* @return void
*/
public static function setWriterPluginManager(WriterPluginManager $writers)
{
static::$writers = $writers;
}
/**
* Get the writer plugin manager
*
* @return WriterPluginManager
*/
public static function getWriterPluginManager()
{
if (static::$writers === null) {
static::$writers = new WriterPluginManager();
}
return static::$writers;
}
/**
* Set config reader for file extension
*
* @param string $extension
* @param string|Reader\ReaderInterface $reader
* @throws Exception\InvalidArgumentException
* @return void
*/
public static function registerReader($extension, $reader)
{
$extension = strtolower($extension);
if (!is_string($reader) && !$reader instanceof Reader\ReaderInterface) {
throw new Exception\InvalidArgumentException(sprintf(
'Reader should be plugin name, class name or ' .
'instance of %s\Reader\ReaderInterface; received "%s"',
__NAMESPACE__,
(is_object($reader) ? get_class($reader) : gettype($reader))
));
}
static::$extensions[$extension] = $reader;
}
/**
* Set config writer for file extension
*
* @param string $extension
* @param string|Writer\AbstractWriter $writer
* @throws Exception\InvalidArgumentException
* @return void
*/
public static function registerWriter($extension, $writer)
{
$extension = strtolower($extension);
if (!is_string($writer) && !$writer instanceof Writer\AbstractWriter) {
throw new Exception\InvalidArgumentException(sprintf(
'Writer should be plugin name, class name or ' .
'instance of %s\Writer\AbstractWriter; received "%s"',
__NAMESPACE__,
(is_object($writer) ? get_class($writer) : gettype($writer))
));
}
static::$writerExtensions[$extension] = $writer;
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,816 |
Did you know that it's possible to do a Grand Canyon river rafting tour? It is!
With our Grand Canyon tours you'll discover why Arizona is so rich in Native American history and loaded with fun and adventure.
Western River Expeditions' Grand Canyon rafting tours are a once-in-a-lifetime experience. Exploring the Grand Canyon by river is an experience unlike any other.
You'll discover hidden waterfalls and paradisiacal side canyons as well as areas of the Grand Canyon National Park that are accessible only by river.
You'll listen to the sound of the mighty Colorado River as you lie under a blanket of stars each night after your Grand Canyon white water rafting experience.
Choose from our 3, 4, 6 or 7-day Grand Canyon rafting vacations and trips below. | {
"redpajama_set_name": "RedPajamaC4"
} | 4,898 |
Q: Inline BLOB in SQL Server whose hex dump is as follows
00000000: 30001100 013caae4 62010000 00000000 †0....<..b.......
00000010: 00060040 02002000 5c803801 02000000 †...@.. ..8.....
00000020: 0400004d 01000000 384c0000 681f0000 †...M....8L..h...
00000030: e7010000 01000000 d03e0000 08020000 †.........>......
00000040: 01000000 385e0000 09020000 01000000 †....8^..........
00000050: 926b0000 0a020000 01000000 ††††††††††.k..........
one of the columns in the record is as follows
imageval = [BLOB Inline Root] Slot 2 Column 5 Offset 0x20 Length 60
Level = 0 Unused = 77 UpdateSeq = 1
TimeStamp = 1278738432
Link 0
Size = 8040 RowId = (1:487:0)
Link 1
Size = 16080 RowId = (1:520:0)
Link 2
Size = 24120 RowId = (1:521:0)
Link 3
Size = 27538 RowId = (1:522:0)
How can one identify whether it is a [BLOB Inline Root]?
How do we interpret the above values from the hex dump?
Thanks
A: We'll need the table schema to identify the stored values...
That being said:
3000 =>
The first two flag bytes of the record.
1100 =>
Length of the fixed-length data (decimal value = 17, counted from the start of the record, so it includes these four header bytes), meaning the next 13 bytes are the fixed-length data portion of the record.
013caae4 62010000 00000000 00 =>
Fixed length data. No way to say what's what without schema.
0600 =>
Total number of columns in the table.
40 =>
The null bitmap array. Decimal 64 = 0b01000000 in binary, meaning the first column of the record is NULL.
0200 =>
Number of variable length columns. Thus we can conclude you have 4 fixed-length columns since the total is 6.
2000 =>
Position offset of first variable length column. Decimal value 32. Data can thus be found from position 27-32 for a total of 6 bytes: 3801 02000000.
5c80 =>
Position offset of second variable length column. Decimal value 32,860, 0b1000000001011100 in binary. BLOB pointers are identified using the sign bit of the position offset value. Removing the sign bit from the equation gives a decimal value of 0b0000000001011100 = 92. The inline data can thus be found from bytes 33-92:
0400004d 01000000 384c0000 681f0000
e7010000 01000000 d03e0000 08020000
01000000 385e0000 09020000 01000000
926b0000 0a020000 01000000
BLOB Inline Roots start out with a header:
04 =>
Special field type
00 =>
Link
00 =>
Index level
4d =>
Unused (field)
01000000 =>
Update sequence
384c0000 =>
Timestamp (unsigned)
And then finally we have the actual meat, an array of slot pointers:
Length PageID FileID Slot
681f0000 e7010000 0100 0000
d03e0000 08020000 0100 0000
385e0000 09020000 0100 0000
926b0000 0a020000 0100 0000
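As an illustration (my addition, not from the original answer), the inline root above can be decoded programmatically. The 12-byte header and the 12-byte slot-pointer layout follow the field breakdown given above; all multi-byte values are little-endian:

```python
import struct

# Inline BLOB root, bytes 33-92 of the record (60 bytes, hex copied from the dump above)
root = bytes.fromhex(
    "0400004d01000000384c0000"    # header: type, link, level, unused, update seq, timestamp
    "681f0000e701000001000000"    # slot pointer 0: length, page id, file id, slot
    "d03e00000802000001000000"    # slot pointer 1
    "385e00000902000001000000"    # slot pointer 2
    "926b00000a02000001000000"    # slot pointer 3
)

# Header: 4 single bytes, then two 4-byte unsigned integers
ftype, link, level, unused, updateseq, timestamp = struct.unpack_from("<BBBBII", root, 0)
assert (ftype, level, updateseq) == (0x04, 0, 1)

# Each slot pointer: 4-byte length, 4-byte page id, 2-byte file id, 2-byte slot
pointers = [struct.unpack_from("<IIHH", root, 12 + 12 * i) for i in range(4)]
for size, page, fileid, slot in pointers:
    print(f"Size = {size} RowId = ({fileid}:{page}:{slot})")

# The column offset 0x805c flags a BLOB pointer via its sign bit; masking it gives 92
offset = 0x805C
assert offset & 0x8000 and (offset & 0x7FFF) == 92
```

Running this reproduces the `Size = 8040 RowId = (1:487:0)` ... `Size = 27538 RowId = (1:522:0)` lines shown in the DBCC output in the question.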
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,902 |
\section{Introduction}
As a result of the magnetic activity, the properties of the solar acoustic modes are observed to vary periodically. In particular, mode frequencies and amplitudes show a temporal anti-correlation, with frequencies increasing with increasing activity, while amplitudes decrease (e.g.,\cite[Woodard \& Noyes 1985]{Woodard}; \cite[Elsworth \etal\ 1990]{Elsworth1990}; \cite[Libbrecht \& Woodard 1990]{Libbrecht1990}; \cite[Chaplin \etal\ 1998]{Chaplin1998}; \cite[Howe \etal\ 2015]{Howe2015}).
\cite[Garc\'{i}a \etal\ (2010)]{Garcia2010} detected, for the first time, activity-related variations in the acoustic properties of a star other than the Sun: HD~49933 observed by CoRoT. Taking advantage of the long-term {\it Kepler} photometric time-series, temporal frequency shifts, possibly activity-related, have been measured for a number of solar-type stars (\cite[Salabert \etal\ 2016, 2018]{Salabert2016,Salbert2018}; \cite[R\'{e}gulo \etal\ 2016]{Regulo2016}; \cite[Kiefer \etal\ 2017]{Kiefer2017}).
We have searched for temporal variations in the seismic properties of {\it Kepler} solar-type stars. To that end, we developed a Bayesian peak-bagging tool. In these proceedings, we summarize the methodology and highlight the results for two stars in the target sample
\section{Observational data}
We analysed {\it Kepler} short-cadence data for 87 solar-type stars.
The pixel data were collected from KASOC ({\it Kepler} Asteroseismic Science Operations Center) and corrected using the KASOC filter (\cite[Handberg \& Lund 2014]{Handberg2014}). The time series were then split in 90-day sub-series and, for each, the power density spectrum is obtained. For the photometric activity proxy, $S_{\rm ph}$ (e.g., \cite[Mathur \etal\ 2014]{Mathur2014}), we use KADACS ({\it Kepler} Asteroseismic Data Analysis and Calibration Software; \cite[Garc\'{i}a \etal\ 2011]{Garcia2011}) long-cadence light curves.
\section{Modelling of the power density spectrum}
Stellar brightness varies on different timescales, due to the contribution from different phenomena, such as magnetic features, granulation, and stellar oscillations.
We start by describing the background signal as the sum of three components: an exponential decay of active regions; a Harvey-like profile for granulation; and a constant photon shot-noise.
Having the background model, we proceed with the peak-bagging analysis and perform a global fit of the acoustic modes. For each mode, the power spectrum is modelled as a Lorentzian profile. The final set of free parameters is composed of the mode frequencies $\nu_{nl}$ (where $n$ and $l$ denote the radial order and angular degree), the heights and linewidths of the radial modes, the rotational splitting, and the stellar inclination angle.
To finally obtain the mode parameters, we adopt a Bayesian approach. One of the advantages of a Bayesian approach is the possibility of using prior knowledge to constrain the parameters. Thus, in this work, the prior probability functions (namely for mode frequencies, rotational splitting, and inclination) are based on previous results from the analysis of the full, multi-year time-series (\cite[Davies \etal\ 2016; Lund \etal\ 2017]{Davies2016,Lund2017}).
The optimization method makes use of the algorithm \texttt{emcee} (\cite{Foreman-Mackey2013}), based on the Affine Invariant Markov Chain Monte Carlo Ensemble sampler (\cite{Goodman2010}). From this analysis, we obtain the posterior distribution for each parameter and the corresponding parameter estimates.
\section{Results and Discussion}
Having the mode parameters, we then compute the mean temporal frequency shifts and mode heights. The individual frequency shifts, $\delta\nu_{nl}$, are computed with respect to the reference frequencies (weighted averages of the mode frequencies, $\nu_{nl}$).
The final mean frequency shifts, $\delta\nu$, and uncertainties, $\sigma$, are obtained as
\begin{equation}
\delta\nu(t)=\dfrac{\Sigma_{nl}\delta\nu_{nl}(t)/\sigma^2_{nl}(t)}{\Sigma_{nl}1/\sigma^2_{nl}(t)},
\end{equation}\begin{equation}
\sigma(t)=\left(\Sigma_{nl}1/\sigma_{nl}^2(t)\right)^{-1/2}.
\end{equation}
\vspace{0.2cm}
For the mode heights, we follow the same approach, but using logarithmic values.
We perform the analysis, summarized above, for all 87 solar-type stars. In these proceedings, in Fig. \ref{fig:fig01}, we present the results for the two stars KIC~8006161 and KIC~5184732. The seismic indicators (frequency shifts, $\delta\nu$, and logarithmic heights, $\ln S$) are shown in the top two rows. For comparison, the bottom panels show the photometric activity proxy, $S_\text{ph}$, which is a measure of the stellar brightness variations due to the presence of spots on the stellar surface. Similarly to what is observed in the Sun, for both stars, the frequency shifts and mode heights vary in anti-phase, with the frequency shifts increasing with increasing $S_\text{ph}$. Therefore, our results are consistent with the rising phase of an activity cycle in both stars. $S_\text{ph}$ further suggests that, over the time span of the {\it Kepler} observations, a given cycle ended and new cycle began in KIC~5184732. A detailed study of the enhanced activity and strong surface differential rotation of KIC~8006161 is presented in \cite{Karoff2018}.
Over $50\%$ of the stars in the target sample show evidence for quasi-periodic variations in the frequency shifts. For part of those, the frequency shifts are accompanied by variations in other stellar properties, such as mode heights, granulation timescale, and/or photometric activity proxy. Further details will be provided in \cite[Santos \etal\ (2018)]{Santos2018}.
\begin{figure}[h]
\includegraphics[width=\hsize]{KICs_fsh_amp_sph_iau.pdf}
\caption{Results for KIC~8006161 ({\it left}) and KIC~5184732 ({\it right}). {\it Top and middle:} Frequency shifts and logarithmic mode heights. {\it Bottom:} Photometric activity proxy. Vertical dotted lines mark the {\it Kepler} quarters.}\label{fig:fig01}
\end{figure}
\acknowledgments
This work was supported by Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT) through national funds (UID/FIS/04434/2013) and by FEDER through COMPETE2020 (POCI-01-0145-FEDER-007672). ARGS acknowledges the support from the IAU travel grant, from NASA grant NNX17AF27G, from the fellowship SFRH/BD/88032/2012 funded by FCT (Portugal) and POPH/FSE (EC), and from University of Birmingham. TLC acknowledges support from grant CIAAUP-12/2018-BPD. TLC, WJC, GRD, EY, and RH acknowledge the support from the UK Science and Technology Facilities Council (STFC). MSC acknowledges the support from FCT through the Investigador FCT Contract No. IF/00894/2012 and the fellowship SFRH/BD/88032/2012 and by FEDER through COMPETE2020 (POCI-01-0145- FEDER-007672). MNL acknowledges the support of The Danish Council for Independent Research | Natural Science (Grant DFF-4181-00415). Funding for the Stellar Astrophysics Centre (SAC) is provided by The Danish National Research Foundation (Grant agreement no.: DNRF106). RK acknowledges that the research leading to these results received funding from the European Research Council under the European Unions Seventh Framework Program (FP/2007-2013)/ERC Grant Agreement no. 307117. DS and RAG acknowledge the support from the CNES GOLF grant. The research leading to these results has received funding from EC, under FP7, through the grant agreement FP7-SPACE-2012-312844 (SPACEINN) and PIRSES-GA-2010-269194 (ASK). The peak-bagging was performed using the University of Birmingham's BlueBEAR HPC service (http://www.birmingham.ac.uk/bear).
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 9,476 |
Q: Python - NLTK train/test split I've been following SentDex's video series regarding NLTK and Python, and have constructed a script which determines review sentiment using various models, e.g. logistic regression. My worry is that SentDex's approach includes the test set while determining the words to be used for training, which is obviously not preferable (the train/test split occurs only after feature selection).
(Edited in response to Mohammed Kashif's comments)
Full code:
import nltk
import numpy as np
from nltk.classify.scikitlearn import SklearnClassifier
from nltk.classify import ClassifierI
from nltk.corpus import movie_reviews
from sklearn.naive_bayes import MultinomialNB
documents = [ (list(movie_reviews.words(fileid)), category)
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category) ]
all_words = []
for w in movie_reviews.words():
all_words.append(w.lower())
all_words = nltk.FreqDist(all_words)
word_features = list(all_words.keys())[:3000]
def find_features(documents):
words = set(documents)
features = {}
for w in word_features:
features[w] = (w in words)
return features
featuresets = [(find_features(rev), category) for (rev, category) in documents]
np.random.shuffle(featuresets)
training_set = featuresets[:1800]
testing_set = featuresets[1800:]
MNB_classifier = SklearnClassifier(MultinomialNB())
MNB_classifier.train(training_set)
print("MNB_classifier accuracy:", (nltk.classify.accuracy(MNB_classifier, testing_set)) *100)
Already tried:
documents = [ (list(movie_reviews.words(fileid)), category)
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category) ]
np.random.shuffle(documents)
training_set = documents[:1800]
testing_set = documents[1800:]
all_words = []
for w in documents.words():
all_words.append(w.lower())
all_words = nltk.FreqDist(all_words)
word_features = list(all_words.keys())[:3000]
def find_features(training_set):
words = set(training_set)
features = {}
for w in word_features:
features[w] = (w in words)
return features
featuresets = [(find_features(rev), category) for (rev, category) in training_set]
np.random.shuffle(featuresets)
training_set = featuresets
testing_set = testing_set
MNB_classifier = SklearnClassifier(MultinomialNB())
MNB_classifier.train(training_set)
print("MNB_classifier accuracy:", (nltk.classify.accuracy(MNB_classifier, testing_set)) *100)
Yields the error:
Traceback (most recent call last):
File "", line 34, in
print("MNB_classifier accuracy:", (nltk.classify.accuracy(MNB_classifier, testing_set)) *100)
File "C:\ProgramData\Anaconda3\lib\site-packages\nltk\classify\util.py", line 87, in accuracy
results = classifier.classify_many([fs for (fs, l) in gold])
File "C:\ProgramData\Anaconda3\lib\site-packages\nltk\classify\scikitlearn.py", line 85, in classify_many
X = self._vectorizer.transform(featuresets)
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\feature_extraction\dict_vectorizer.py", line 291, in transform
return self._transform(X, fitting=False)
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\feature_extraction\dict_vectorizer.py", line 166, in _transform
for f, v in six.iteritems(x):
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\six.py", line 439, in iteritems
return iter(getattr(d, _iteritems)(**kw))
AttributeError: 'list' object has no attribute 'items'
A: Okay, so there are a couple of mistakes in the code. We will go through them one by one.
First, your documents list is a list of tuples and it has no words() method. In order to access all the words, change the for loop like this
all_words = []
for words_list, categ in documents: #<-- each wordlist is a list of words
for w in words_list: #<-- Then access each word in list
all_words.append(w.lower())
Secondly, you need to create feature sets for both the training and the test set. You have only used a feature set for training_set. Change the code to this
featuresets = [(find_features(rev), category) for (rev, category) in documents]
np.random.shuffle(featuresets)
training_set = featuresets[:1800]
testing_set = featuresets[1800:]
So the final code becomes
documents = [ (list(movie_reviews.words(fileid)), category)
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category) ]
np.random.shuffle(documents)
training_set = documents[:1800]
testing_set = documents[1800:]
all_words = []
for words_list, categ in documents:
for w in words_list:
all_words.append(w.lower())
all_words = nltk.FreqDist(all_words)
word_features = list(all_words.keys())[:3000]
def find_features(training_set):
words = set(training_set)
features = {}
for w in word_features:
features[w] = (w in words)
return features
featuresets = [(find_features(rev), category) for (rev, category) in documents]
np.random.shuffle(featuresets)
training_set = featuresets[:1800]
testing_set = featuresets[1800:]
MNB_classifier = SklearnClassifier(MultinomialNB())
MNB_classifier.train(training_set)
print("MNB_classifier accuracy:", (nltk.classify.accuracy(MNB_classifier, testing_set)) *100)
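A side note beyond the original answer: `list(all_words.keys())[:3000]` picks 3000 words in dictionary (insertion) order, not by frequency. Since `nltk.FreqDist` subclasses `collections.Counter`, `most_common` selects the top words deterministically — with NLTK that would be `word_features = [w for w, c in all_words.most_common(3000)]`. A minimal, self-contained illustration of the same idea using a plain `Counter` (the toy word list is my own):

```python
from collections import Counter

words = ["the", "movie", "was", "the", "best", "movie", "the"]
freq = Counter(w.lower() for w in words)

# Top-N vocabulary by frequency, not by insertion order
word_features = [w for w, _ in freq.most_common(2)]
assert word_features == ["the", "movie"]   # counts: the=3, movie=2

def find_features(document, word_features):
    doc_words = set(document)
    return {w: (w in doc_words) for w in word_features}

features = find_features(["best", "movie", "ever"], word_features)
print(features)  # -> {'the': False, 'movie': True}
```

This keeps the feature vocabulary stable across runs, which matters when comparing classifier accuracies.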
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,426 |
{"url":"https:\/\/mrchasemath.com\/category\/discrete-math\/","text":"# Random Walks Mural\n\nI\u2019ve been meaning to give the back wall of my classroom a makeover for a while. This summer I finally found some time to tackle the big project. I took down all the decorations and posters. I fixed up the wall and painted it a nice tan color. Then, I let loose the randomness!\n\nI struggled with what the new mural would be\u2013I\u2019ve thought about it over the last few years. I considered doing some kind of fractal like the Mandelbrot Set. But it should have been obvious, given the name of my blog!! What you see in the picture above is three two-dimensional random walks in green, blue, and red. In the limiting case, one gets Brownian motion:\n\nBrownian motion of a yellow particle in a gas. (CCL)\n\nI honestly didn\u2019t know what it was going to look like until I did it. I generated it as I went, rolling a die to determine the direction I would go each time. I weighted the left and right directions because of the shape of the wall (1,2=right; 3,4=left; 5=up; 6=down). For more details about the process of making it, here\u2019s a documentary-style youtube video that explains all:\n\nActually, I lied\u2013it doesn\u2019t tell \u201call.\u201d If you really want to know more of my thought process and some of the math behind what I did, watch the Extended Edition video which has way more mathematical commentary from me. I\u2019ve also posted the time lapse footage of the individual green, blue, and red. Just for fun, here\u2019s an animated random walk with 25,000 iterations:\n\nA two-dimensional random walk with 25,000 iterations. Click the image for an animated version! (CCL)\n\nI think the mural turned out pretty well! It was scary to be permanently marking my walls, not knowing where each path would take me, or how it would end up looking. At first I thought I would only do ONE random walk. 
However, the first random walk (in blue) went off the ceiling so I stopped. And then I decided to add two more random walks.\n\nIn retrospect, it actually makes complete sense. I teach three different courses (Algebra 2, Precalculus, and Calculus) and I\u2019ve always associated with each of theses courses a \u201cclass color\u201d\u2013green, blue, and red, respectively. I use the class color to label their bins, to write their objective and homework on the board, and many other things.\n\nThe phrase \u201cWhere will mathematics take you?\u201d was also a last-minute addition, if you can believe it. There just happened to be a big space between the blue and red random walks and it was begging for attention.\n\nWhat a good question for our students. The random walks provide an interesting analogy for the classroom. I\u2019d like to say I\u2019m always organized in my teaching. But some of the richest conversations come from a \u201crandom walk\u201d into unexpected territory when interesting questions are raised.\n\nSpeaking of interesting questions that are raised, here are a few:\n\n\u2022 Can you figure out how many iterations occurred after looking at a \u201cfinished\u201d random walk? Or perhaps a better question: What\u2019s the probability that there were more than n iterations if we see m line segments in the random walk?\n\u2022 Given probabilities $p_1, p_2, p_3, p_4$ of going in the four cardinal directions, can we predict how wide and how high the random walk will grow after n iterations? Can we provide confidence intervals? (might be nice to share this info with the mural creator!)\n\u2022 After looking at a few random walks, can we detect any bias in a die? How many random walks would want to see in order to confidently claim that a die is biased in favor of \u201cup\u201d or \u201cleft\u201d\u2026etc?\n\nSome of the questions are easy, some are hard. If you love this stuff, you might be interested in taking a few courses in Stochastic Processes. 
Any other questions you can think of?\n\nWhere will math take you this coming academic year? Welcome back everyone!\n\n# The Mathematics of Juggling and more from George\u00a0Hart\n\n[Dr. Chase guest blogging again]\n\nYou\u2019re probably familiar with Vi Hart\u2019s math videos. Less well-known are her father\u2019s math videos. Although I was aware of his mathematical sculpture, I was not aware until today that since August 2012, he has been producing a mathematical video series called mathematical impressions for the Simons Foundation. The 10th in the series is The Mathematics of Juggling. Check it out!\n\n# Fearless Symmetry\n\nI come to you today with a recommendation for the book Fearless Symmetry by Avner Ash and Robert Gross. I started it this summer and finally had a chance to finish it over the Christmas break. I didn\u2019t understand the last half-dozen chapters, but my dad did warn me that would happen. I wouldn\u2019t even attempt reading it unless you\u2019ve already been exposed to some undergraduate mathematics. But if you have, or if it\u2019s been a while and you need a refresher, I highly recommend the book.\n\nIn the book, Ash and Gross attempt to explain some of the math underlying Wiles\u2019 proof of Fermat\u2019s Last Theorem. So you can understand why the math gets a bit hard at the end.\n\nAlong the way, you\u2019ll get a very conversational, well-written, fun-loving introduction to the Absolute Galois Group of the Algebraic numbers. This is a group that is so complicated and messy and theoretical that we can only explicitly write down two elements of the group. In order to talk about it, we need representations, which the authors also introduce in a gentle way. In particular, we need linear representations.\n\nElliptic curves become very important too. 
I have studied elliptic curves in two of my classes before, but I really liked the way they introduced them here: We know everything about linear equations (highest exponent 1), and everything about conics (highest exponent 2 on x and y), but suddenly things become very interesting when we allow just ONE of the exponents (on x) to jump to 3. These are elliptic curves. Amazingly, you can define an arithmetic on the points of an elliptic curve that yield both a GROUP and an algebraic VARIETY. Incredible. Of course, the authors introduce what a variety is too.\n\nAfter reading this, I also gained a much bigger view of abstract algebra\u2013a course I\u2019ve taken, but I found myself guilty of seeing the trees but not the forest. I loved the way Ash and Gross introduce the group SO3 and relate it to A4 with the rotations of a sphere inside a shell. Very nice visualization!\n\nI could go on, but just know that there are lots of little mathematical gems scattered throughout this book. It\u2019s a refreshing jaunt through higher-level mathematics that will demystify some of the smart-sounding words you\u2019ve been afraid to ask about :-).\n\nGo check it out!\n\n# For Reals\n\nA few people have pointed me to this mathy web comic:\n\nThanks to smbc-comics for a great mathy web comic! 
(http:\/\/www.smbc-comics.com\/comics\/20120517.gif)\n\nI\u2019m not sure how often discrete mathematics uses the phrase \u201cfor reals\u201d\u2026.I would think \u201cfor natural numbers\u201d would be more appropriate, don\u2019t you?","date":"2020-05-27 15:27:18","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 1, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5705117583274841, \"perplexity\": 720.1457410994401}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-24\/segments\/1590347394756.31\/warc\/CC-MAIN-20200527141855-20200527171855-00337.warc.gz\"}"} | null | null |
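Returning to the random-walk mural: the die rule described there (1, 2 = right; 3, 4 = left; 5 = up; 6 = down) is easy to simulate. Here is a rough Python sketch — the step count and seed are arbitrary choices for illustration, not data from the actual mural:

```python
import random

def random_walk(steps, seed=0):
    """Walk on Z^2 with the mural's die rule:
    1,2 -> right, 3,4 -> left, 5 -> up, 6 -> down."""
    rng = random.Random(seed)
    x = y = 0
    path = [(0, 0)]
    for _ in range(steps):
        roll = rng.randint(1, 6)
        if roll <= 2:
            x += 1
        elif roll <= 4:
            x -= 1
        elif roll == 5:
            y += 1
        else:
            y -= 1
        path.append((x, y))
    return path

path = random_walk(25_000)
xs, ys = zip(*path)
print("horizontal extent:", max(xs) - min(xs))
print("vertical extent:  ", max(ys) - min(ys))
```

Because a horizontal step has probability 2/3 and a vertical one only 1/3, the per-step variances are 2/3 and 1/3, so the walk tends to spread about √2 times farther sideways — matching the landscape shape of the wall.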
\section{Introduction}\label{sec1}
When a quantum particle is in a box with reflecting boundaries, the graph of the probability density in spacetime forms a certain pattern called a quantum carpet. This interesting phenomenon was first observed experimentally, in classical optics, by H. F. Talbot \cite{T1} in 1836. He discovered that when an incident plane wave passes through a periodic grating, the image of the diffracted wave is refocused and recovers the initial grating with a certain periodicity. Later, the propagation distance after which the initial grating pattern recurs, called the Talbot distance $d_\mathrm{T}$, was calculated by Rayleigh \cite{R1} as $d_\mathrm{T} = a^2/\lambda$, where $a$ is the spacing of the grating and $\lambda$ is the wavelength of the incident wave.
There have since been intensive mathematical studies of this Talbot effect (\cite{B, BK, ch-ol14, CET, ES, ET1, KR, olver10, O, R2, T3}). In \cite{BK}, Berry and Klein mathematically justified the Talbot effect by using the paraxial propagator, defined by a Gauss sum involving the free Schr\"odinger evolution, to model the wave function due to an evenly spaced diffraction grating. They obtained a striking dichotomy between ``rational'' and ``irrational'' times, showing that at every $t\in d_\mathrm{T} \mathbb Q$ finitely many copies of the grating pattern reappear, whereas at every $t\notin d_\mathrm{T} \mathbb Q$ the wave function exhibits a fractal, nowhere differentiable profile. Meanwhile, many authors considered the periodic Schr\"odinger equation
\begin{equation}\label{e:schrodinger}
\begin{cases}
i\partial_t u+ \partial_{xx} u =0, \quad x\in \mathbb T:=\mathbb R/2\pi\mathbb Z, \ t\in \mathbb R \\
u(0,x)=f(x),
\end{cases}
\end{equation}
to study the implications of algebraic properties of time for the wave function. Under the assumption that the initial datum $f$ is of bounded variation, Oskolkov \cite{O} proved that the wave function $u(t,x)$ is a continuous but nowhere differentiable function of $x$ if $t\notin \pi \mathbb Q$, while it is necessarily discontinuous if $t\in\pi \mathbb Q$ whenever $f$ contains discontinuities. Later, Kapitanski and Rodnianski \cite{KR} showed better regularity of $u(t,\cdot)$ for $t\notin \pi \mathbb Q$. For every $t\in \pi \mathbb Q$, Taylor \cite{T3} found that $u(t,\cdot)$ is a finite linear combination of translates of the delta function, with the coefficients being Gauss sums, if $f$ is the delta function.
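This rational-time quantization is easy to observe numerically for the free evolution. Under \eqref{e:schrodinger} each Fourier mode evolves by the phase $e^{-ik^2t}$, and since $e^{-i\pi k^2}=(-1)^k$, at $t=\pi$ the solution is an exact half-period translate of the initial datum. A minimal sketch (the grid size and test function are arbitrary choices made only for illustration):

```python
import numpy as np

N = 256
x = 2.0 * np.pi * np.arange(N) / N
f = np.exp(np.sin(x))                 # a smooth 2*pi-periodic test function

def evolve(f_vals, t):
    """Free flow of i u_t + u_xx = 0 on the torus: mode k gains e^{-i k^2 t}."""
    k = np.fft.fftfreq(N, d=1.0 / N)  # integer frequencies
    return np.fft.ifft(np.fft.fft(f_vals) * np.exp(-1j * k**2 * t))

u_pi = evolve(f, np.pi)

# Since e^{-i pi k^2} = (-1)^k, the t = pi profile is f translated by pi:
revival = np.allclose(u_pi, np.roll(f, -N // 2))
print(revival)   # True
```

At $t=2\pi$ every phase equals $1$ and the initial datum revives exactly; at generic irrational multiples of $\pi$ no such simplification occurs, which is the source of the fractal profiles discussed below.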
As another manifestation of the Talbot effect, for a step function $f$, Berry \cite{B} calculated the fractal dimension\footnote{The \emph{fractal dimension}, also called \emph{upper Minkowski dimension}, of a set $S\subset \mathbb R^n$ is defined by $$\limsup_{\epsilon\to 0} \frac{\log(N(S,\epsilon))}{\log(1/\epsilon)},$$ where $N(S,\epsilon)$ is the minimum number of balls of radius $\epsilon$ required to cover $S$.} of the temporal, spatial and diagonal restrictions of the graph of the density function $|u(t,x)|^2$. Based on the calculations he conjectured that the graphs of $\mathrm{Re}u(t,\cdot)$, $\mathrm{Im}u(t,\cdot)$ and $|u(t,\cdot)|^2$ have fractal dimension $D = n+1/2$ at almost all irrational times $t$.
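For intuition, the box-counting definition in the footnote can be implemented directly. The sketch below is illustrative only: the test function, a truncated Weierstrass-type sum whose graph has dimension $2-\alpha=3/2$, and all grid parameters are our own choices, not taken from \cite{B}:

```python
import numpy as np

def box_count_dimension(xs, ys, exponents=range(2, 9)):
    """Estimate the upper Minkowski (box-counting) dimension of the graph
    {(x, f(x))} by counting occupied eps-boxes at eps = 2^-j and fitting
    the slope of log N(eps) against log(1/eps)."""
    xs = (xs - xs.min()) / (xs.max() - xs.min())
    ys = (ys - ys.min()) / (ys.max() - ys.min())
    log_inv_eps, log_counts = [], []
    for j in exponents:
        eps = 2.0 ** -j
        ix = np.floor(xs / eps).astype(int)
        iy = np.floor(ys / eps).astype(int)
        boxes = set(zip(ix.tolist(), iy.tolist()))
        log_inv_eps.append(np.log(1.0 / eps))
        log_counts.append(np.log(len(boxes)))
    slope, _ = np.polyfit(log_inv_eps, log_counts, 1)
    return slope

# Truncated Weierstrass-type sum; the graph dimension is 2 - alpha = 3/2.
alpha = 0.5
x = np.linspace(0.0, 1.0, 200_000)
w = sum(2.0 ** (-alpha * n) * np.cos(2.0 ** n * np.pi * x) for n in range(16))

dim_est = box_count_dimension(x, w)
print(round(dim_est, 2))   # roughly 1.5
```

The fitted slope is only a crude estimate of the limsup in the definition, but over a few octaves of scales it lands near the theoretical value $3/2$.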
Rodnianski \cite{R2} proved that for every bounded variation $f$ which is not in $\bigcup_{\epsilon>0} H^{\frac{1}{2}+\epsilon}$, the graphs of the real and imaginary parts of the solution $e^{it\partial_{xx}}f$ have fractal dimension $D=3/2$ at almost all irrational times. This rigorously justifies Berry's conjecture in one dimension, except for the statement about the density $|e^{it\partial_{xx}}f|^2$. Recently, Chousionis, Erdo\u{g}an and Tzirakis \cite{CET} extended this result by considering initial data in the larger class $BV\setminus\bigcup_{\epsilon >0}H^{s_0+\epsilon}$, $\frac{1}{2}\leq s_0<\frac{3}{4}$, and also settled Berry's conjecture in one dimension, proving that the dimension of the graph of $|e^{it\partial_{xx}}f|^2$ is $3/2$ at almost all $t\notin \pi\mathbb Q$ whenever $f$ is a step function having jumps only at rational points.
In a series of recent papers \cite{CET, ES, ET1}, Erdo\u{g}an and his collaborators greatly developed the mathematical theory of the Talbot effect. In particular, a nonlinear Schr\"odinger equation was studied in \cite{ES, ET1}. In their argument the key ingredient was to obtain a smoothing estimate in the Bourgain spaces for the nonlinear part, which, combined with the known results on the linear part (\cite{O,KR,R2}), gives the quantization and dimension results for the cubic NLS.
Adapting this framework we aim to investigate the Talbot effect for the general Schr\"odinger equation with potentials:
\begin{equation}\label{PS}
\begin{cases}
iu_t + u_{xx} = V(x)u, \quad x\in\mathbb{T},\ t\in\mathbb{R}, \\
u(0, x) = f(x).
\end{cases}
\end{equation}
To facilitate the statements of our results we introduce some notation. We denote by $BV$ the space of functions of bounded variation on $\mathbb T$. For a $2\pi$-periodic function $f$ we define its Fourier coefficients by $\widehat f(k)=\frac1{2\pi}\int_r^{r+2\pi} e^{-ikx}f(x)\, dx$ for any $r\in\mathbb R$. For every $s\ge 0$ we denote by $H^s$ the Sobolev space on $\mathbb T=\mathbb R/2\pi \mathbb Z$ equipped with the norm
\begin{equation}\label{e:snorm}
\|f\|_{H^s} := \Big( \sum_{k\in\mathbb Z} \langle k\rangle^{2s} |\widehat f (k)|^2 \Big)^{1/2}
\end{equation}
where $\langle k\rangle=(1+|k|^2)^{1/2}$. We also use the notation $H^{s+}:=\bigcup_{\epsilon>0} H^{s+\epsilon}$ and $H^+:=H^{0+}$. Furthermore, throughout this paper, if a Banach space of functions $\mathcal B^s$ is decreasing (in the sense of the set inclusion) with respect to a regularity index $s\in\mathbb R$, then we use the following handy notation
\begin{equation}\label{e:cupcap}
\mathcal B^{s+} := \bigcup_{\epsilon>0} \mathcal B^{s+\epsilon} , \quad \mathcal B^{s-} := \bigcap_{\epsilon>0} \mathcal B^{s-\epsilon}.
\end{equation}
Examples of $\mathcal B^{s}$ include the Sobolev space $H^s$, the Besov space $B_{p,q}^s$ and the H\"older space $C^s$. Our first result is on the dichotomy in regularity at rational and irrational times.
\begin{thm}\label{thm1}
Let $u$ be a solution to \eqref{PS} with $f\in BV$ and $V\in H^{+}$. If $t\notin\pi\mathbb{Q}$, then $u(t,x)$ is a continuous function of $x$. If $t\in\pi\mathbb{Q}$ and $f$ has at least one discontinuity on $\mathbb T$, then $u(t,x)$ is a bounded function and necessarily contains (at most countably many) discontinuities. If $f$ is continuous, then $u(t,x)$ is jointly continuous on $\mathbb R\times \mathbb T$.
\end{thm}
We also compute the fractal dimension of the graph of the solution, in terms of regularity of potentials and initial data, at irrational time slices.
\begin{thm}\label{t:dim}
Let $u$ be a solution to \eqref{PS} with $f\in BV$ and $V\in H^{+}$. Suppose that\footnote{Since $f\in BV$ it follows that $\sigma_0\ge 1/2$.}
\[ \sigma_0:= \sup\{\sigma\in \mathbb R\colon f\in H^\sigma\}<3/4. \]
If we set
\[ r_0:= \sup\{r\in\mathbb R \colon V\in H^r\} , \]
then, for almost all $t\in \mathbb R\setminus \pi \mathbb Q$,
\begin{enumerate}
[leftmargin=0.6cm]
\item[$(1)$] the upper Minkowski dimension of the graphs of $\mathrm{Re}u(t,\cdot)$, $\mathrm{Im}u(t,\cdot)$ and $|u(t,\cdot)|^2$ is less than or equal to $\max\big\{\frac{3}{2},\frac{5}{2}-\sigma_0 -r_0 \big\}$;
\item[$(2)$] the upper Minkowski dimension of the graphs of $\mathrm{Re}u(t,\cdot)$ and $\mathrm{Im}u(t,\cdot)$ is greater than or equal to $\frac{5}{2}-2\sigma_0$, provided that $\sigma_0-\frac12<r_0$.
\end{enumerate}
\end{thm}
\begin{rem}
In the above theorems, the space $H^+$ of potentials contains a number of interesting functions. It is a classical result due to Haslam-Jones \cite{HJ} that the Fourier coefficients of the unbounded function
\begin{equation}\label{e:unbddpot}
V(x)=\frac{\phi(x)}{|x|^\nu (\log |\kappa/x|)^a}, \quad -\pi \le x< \pi,
\end{equation}
where $\phi\in BV$, $0<\nu<1$, $\kappa>\pi$ and $a\in\mathbb R$, satisfy
\[ |\widehat V(0)|\lesssim 1 \quad \text{and} \quad \widehat{V}(k)= O\big( |k|^{\nu-1} (\log|k|)^{-a} \big), \ \ k\neq 0. \]
Thus, for every $a\in \mathbb R$ and $\nu<\frac12$, the unbounded potential \eqref{e:unbddpot} is in $H^+$. On the other hand, since $C^\alpha \hookrightarrow H^s$ for $0<s<\alpha\le 1$ (\cite[Theorem 1.13]{MS-book}), we see that the space $H^+$ includes all classes of H\"older continuous periodic functions.
\end{rem}
\subsubsection*{Organization} In Section \ref{sec2}, we state the crucial smoothing estimate for the Duhamel part of the initial value problem \eqref{PS} (Proposition \ref{thm2}) as well as the known results on the free Schr\"odinger evolution that we need. Then we prove Theorems \ref{thm1} and \ref{t:dim}.
In the remaining sections, we focus on proving the smoothing estimate.
In Section \ref{sec3}, we first obtain bilinear estimates in the Bourgain space for the potential part $Vu$ in \eqref{PS}. Based upon this, in Section \ref{sec4}, we establish well-posedness for \eqref{PS} and then prove the smoothing estimate.
\subsubsection*{Notation}
In inequalities, we employ the letter $C$ to denote a positive constant which may change at each occurrence. For $A, B>0$ we write $A\lesssim B$ if $A\le CB$ with some constant $C>0$. We also use the notation $A\approx B$ if $A\lesssim B$ and $B\lesssim A$.
\section{Talbot effect for the Schr\"odinger equation} \label{sec2}
In this section, we prove Theorems \ref{thm1} and \ref{t:dim}. Let us first rearrange the equation in \eqref{PS} as follows:
\begin{equation*}
iu_t + \big(\partial_{xx}-\widehat{V}(0) \big)u = R(V,u),
\end{equation*}
where $R(V,u)$ is the function on $\mathbb R\times \mathbb T$ defined by
\[ R(V,u)(t,x)= \big(V(x)-\widehat V(0)\big) u(t,x). \]
Then, by Duhamel's formula, the solution to the initial value problem \eqref{PS} can be equivalently written as
\[ u(t,x) = e^{it (\partial_{xx}-\widehat{V}(0))}f(x) -i\int_0^t e^{i(t-t')(\partial_{xx}-\widehat{V}(0))}R(V,u)(t',x)dt'. \]
We shall prove that the Duhamel term
\begin{equation}\label{e:inhom}
\mathcal P(t,x):= -i\int_0^t e^{i(t-t')(\partial_{xx}-\widehat{V}(0))}R(V,u)(t',x)dt'
\end{equation}
is in fact continuous on $\mathbb R\times \mathbb T$. For this purpose, we obtain the following smoothing property for $\mathcal{P}(t,\cdot)$, which is the key ingredient in this paper. We provide the proof of the property in Section \ref{subsec3}.
\begin{prop}\label{thm2}
Suppose that $V\in H^r$ and $f\in H^s$ for $r\geq 0$ and $0<s<r+1$. If
\[ a\le r \quad \text{and} \quad a<\min\{1+r-s, 1/2\}, \]
then $\mathcal{P}(t,\cdot)\in H^{s+a}$ for every $t\in \mathbb R$, and is continuous in the time variable $t$.
\end{prop}
For the evolution $e^{it(\partial_{xx}-\widehat{V}(0) )}$, we make use of the following known result due to Oskolkov \cite[Proposition 14 and page 390]{O}:
\begin{thm}\label{Tal1}
Suppose that $g\in BV$.
\begin{enumerate}
[leftmargin=0.7cm]
\item[$(\romannum{1})$] If $t\notin\pi\mathbb{Q}$, then $e^{it\partial_{xx}}g$ is a continuous function of $x$. If $g$ has at least one discontinuity on $\mathbb T$ and $t\in\pi\mathbb{Q}$, then $e^{it\partial_{xx}}g$ necessarily contains discontinuities.
\item[$(\romannum{2})$] If $g$ is continuous, then $e^{it\partial_{xx}}g$ is jointly continuous in temporal and spatial variables.
\end{enumerate}
\end{thm}
\begin{rem}\label{rem1}
The quantization results of Berry and Klein \cite{BK} and Taylor \cite{T3} state that if $t\in\pi\mathbb Q$ then $e^{it\partial_{xx}}g$ is a linear sum of finitely many translates of $g$ (see \cite[Theorem 2.14]{ET-book}). Hence, in Theorem \ref{Tal1} $(\romannum{1})$, $e^{it\partial_{xx}}g\in BV$ whenever $t\in \pi \mathbb {Q}$, so the possible discontinuities in this case are at most countable.
\end{rem}
\begin{proof}[Proof of Theorem \ref{thm1}]
By the assumption $V\in H^+$ it is possible to pick a sufficiently small number $\alpha\in(0,\frac18)$ such that $V\in H^{2\alpha}$. Since $f\in BV$ it follows that $f\in H^{\frac{1}{2}-}$, and in particular, $f\in H^{\frac12-\alpha}$. Thus, making use of Proposition \ref{thm2} (with $r=2\alpha$, $s=\frac12-\alpha$ and $a=2\alpha$), we conclude that $\mathcal P(t,x) \in C_tH_x^{\frac12 +\alpha}$. Moreover, from the Sobolev embedding
\begin{equation}\label{e:sob-emb}
H^s \hookrightarrow C^{s-\frac{1}{2}} \quad \text{for} \quad s>1/2,
\end{equation}
it follows that
\begin{equation}\label{e:joint-conti}
\mathcal P(t,x) \in C_tC_x^\alpha.
\end{equation}
For $t\notin \pi \mathbb Q$, Theorem \ref{Tal1} $(\romannum{1})$ shows that
\begin{equation}\label{e:evolution}
e^{it (\partial_{xx}-\widehat V(0))}f = e^{-it \widehat V(0)}e^{it\partial_{xx}}f
\end{equation}
is continuous. Hence, combined with \eqref{e:joint-conti}, it follows from \eqref{e:inhom} that
\begin{equation}\label{e:sol}
u(t,\cdot)=e^{it(\partial_{xx}-\widehat V(0))}f(\cdot)+\mathcal P(t,\cdot)
\end{equation}
is also continuous.
If $t\in \pi \mathbb Q$ and $f$ is discontinuous, then it follows from Theorem \ref{Tal1} $(\romannum{1})$ and Remark \ref{rem1} that the evolution \eqref{e:evolution} is of bounded variation and contains (at most countably many) discontinuities. Thus, \eqref{e:joint-conti} tells us that the solution \eqref{e:sol} is a discontinuous bounded function with at most countably many discontinuities.
If $f$ is continuous, then the function \eqref{e:evolution} is also continuous by Theorem \ref{Tal1} $(\romannum{2})$. Hence, combining with \eqref{e:joint-conti} we see that the solution \eqref{e:sol} is jointly continuous on $\mathbb R\times \mathbb T$.
\end{proof}
Now we prove Theorem \ref{t:dim}. Let us first recall the Besov space and its properties that we need. Let $\phi \in C^\infty_0([-2,-1/2] \cup [1/2,2])$ be such that $\sum_{j\in\mathbb Z}\phi(2^{-j}t)=1$ for $t\in\mathbb R\setminus\{0\}$, and let $\phi_0(t):=1-\sum_{j\ge1}\phi(2^{-j}t)$. We denote by $P_j$ the projections defined by
\[ P_0f(x):=\sum_{k\in \mathbb Z} \phi_0(k)\widehat f(k)e^{ikx}, \quad P_j f(x):= \sum_{k\in \mathbb Z} \phi(2^{-j}k)\widehat f(k)e^{ikx}, \ \ j\ge 1. \]
For $1\le p,q\le\infty$ and $s\ge 0$, the inhomogeneous Besov space $B^s_{p,q}$ on the periodic domain $\mathbb T$ is a Banach space of functions equipped with the norm
\begin{equation}\label{e:besov}
\|f\|_{B_{p,q}^s} :=
\begin{cases}
\|P_0f\|_{L^p(\mathbb T)} + \big(\sum_{j\ge1} (2^{sj}\|P_jf\|_{L^p(\mathbb T)})^q \big)^{1/q} & \text{if} \quad q<\infty, \\
\sup_{j\ge 0} 2^{sj}\|P_jf\|_{L^p(\mathbb{T})} & \text{if} \quad q=\infty.
\end{cases}
\end{equation}
It is well-known\footnote{See, for example, \cite{ST87} (Remark 4 on p. 164 and Theorem (i), (v) on pp. 168--169).} that $B_{2,2}^s=H^s$ for every $s$ and $B^\alpha_{\infty, \infty} = C^\alpha$ for $\alpha\in (0,\infty)\setminus \mathbb N$. By complex interpolation between Besov spaces (see \cite[Theorem 6.4.5]{BL1}) and H\"older's inequality, we have, for $s_1\neq s_2$,
\begin{equation}\label{e:inter}
\big(B_{1,\infty}^{s_1},B_{\infty,\infty}^{s_2} \big)_{[\frac{1}{2}]}=B_{2,\infty}^{\frac{s_1+s_2}{2}} \hookrightarrow B_{2,2}^{\frac{s_1+s_2}2 -} = H^{\frac{s_1+s_2}{2} -}.
\end{equation}
We also make use of the following theorems of Chousionis--Erdo\u{g}an--Tzirakis \cite{CET} and Deliu--Jawerth \cite{DJ}.
\begin{thm}[\cite{CET}] \label{Linear}
Let $\frac12\le \sigma_0< \frac{3}{4}$ and suppose $g\in BV\setminus H^{\sigma_0+}$. Then, for almost all $t\in \mathbb R\setminus \pi\mathbb Q$, we have $e^{it\partial_{xx}}g \in C^{\frac{1}{2}-} \setminus B_{1,\infty}^{2\sigma_0-\frac{1}{2}+}$.
\end{thm}
\begin{thm}[\cite{DJ}]\label{FDL}
Let $0<s<1$. Assume that $f\colon\mathbb{T}\to \mathbb{R}$ is continuous and $f\notin B_{1,\infty}^{s+}$. Then the upper Minkowski dimension of the graph of $f$ is at least $2-s$.
\end{thm}
\begin{proof}[Proof of Theorem \ref{t:dim}]
Let us first prove the part (1). By the definition of $\sigma_0$ and $r_0$ it is clear that
\[ f\in H^{\sigma_0-} \quad \text{and} \quad V\in H^{r_0-}. \]
By applying Proposition \ref{thm2} (with $s=\sigma_0-\epsilon$ and $r=r_0-2\epsilon$ for an infinitesimal $0<\epsilon\ll 1$) we have
\begin{equation}\label{e:p-sob}
\mathcal P(t,\cdot)\in H^{\sigma_0+a}, \quad \forall t\in \mathbb R,
\end{equation}
whenever
\[ a<\min\{r_0, 1+r_0-\sigma_0, 1/2\} =\min\{r_0, 1/2\}=:a_0. \]
Hence, by the Sobolev embedding \eqref{e:sob-emb} we have
\begin{equation}\label{e:P-hol}
\mathcal P(t,\cdot)\in C^{\sigma_0+a-\frac12} , \quad \forall t\in\mathbb R,
\end{equation}
for every $a<a_0$. On the other hand, by Theorem \ref{Linear}, we have
\begin{equation}\label{e:evol-hol}
e^{it\partial_{xx}}f \in C^{\frac12-} \quad \text{for almost all} \ \ t\in \mathbb R\setminus \pi \mathbb Q.
\end{equation}
Hence, it follows from \eqref{e:sol}, \eqref{e:P-hol} and \eqref{e:evol-hol} that
\begin{equation}\label{e:u-hol}
u(t,\cdot) \in C^{\min\{\frac12, \sigma_0+a-\frac12\}-} \quad \text{for almost all} \ \ t\in \mathbb R\setminus \pi \mathbb Q
\end{equation}
for every $a<a_0$. It is obvious that the same statement is still valid for $\mathrm{Re}u(t,\cdot)$, $\mathrm{Im}u(t,\cdot)$ and $|u(t,\cdot)|^2$ in place of $u(t,\cdot)$ in \eqref{e:u-hol}. We now use the following classical result on the upper Minkowski dimension for H\"older continuous functions (for a proof we refer the reader to \cite[Corollary 11.2]{F}):
\begin{lem}\label{l:upper}
Let $0\le \alpha\le 1$. If a function $f\colon \mathbb T\to \mathbb R$ is $C^\alpha$, then the upper Minkowski dimension of the graph of $f$ is at most $2-\alpha$.
\end{lem}
Indeed, by Lemma \ref{l:upper}, the upper Minkowski dimension of the graphs of $\mathrm{Re}u(t,\cdot)$, $\mathrm{Im}u(t,\cdot)$ and $|u(t,\cdot)|^2$ is at most
\[ 2-\min\{1/2, \sigma_0+a_0-1/2\}= \max\{3/2, 5/2 -\sigma_0-a_0\} \]
for almost all $t\in \mathbb R\setminus \pi \mathbb Q$. We notice further that if $a_0=1/2$, then $5/2 -\sigma_0-a_0=2-\sigma_0\in (5/4,3/2]$, so the maximum is equal to $3/2$. Therefore, we obtain (1).
Let us now prove the part (2). Since $f\notin H^{\sigma_0+}$, we see that for almost every $t$, neither $\mathrm{Re\,} e^{it\partial_{xx}}f$ nor $\mathrm{Im\,} e^{it\partial_{xx}}f$ belongs to $H^{\sigma_0 +}$ (see \cite[Lemma 3.2]{CET}). It follows from \eqref{e:evol-hol} and the embedding $B_{1,\infty}^{s_1}\cap B_{\infty,\infty}^{s_2} \subset H^{\frac{s_1+s_2}{2} -}$ (see \eqref{e:inter}) that, for almost all $t\in \mathbb R\setminus \pi\mathbb Q$, both the real and the imaginary parts of $e^{it\partial_{xx}}f$ do not belong to $B_{1,\infty}^{2\sigma_0-\frac{1}{2}+}$. On the other hand, by \eqref{e:p-sob} we see that $\mathcal P(t,\cdot)\in B_{1,\infty}^{\sigma_0+\min\{r_0,\frac12 \}-}$ for all $t\in \mathbb R$. Combining these we have
\[ u(t,\cdot)= \underbrace{\mathrm{Re}(e^{it(\partial_{xx}-\widehat V(0))}f)}_{\notin B^{2\sigma_0-\frac12+}_{1,\infty}\text{ for a.e. }t}
+i\underbrace{\mathrm{Im}(e^{it(\partial_{xx}-\widehat V(0))}f)}_{\notin B^{2\sigma_0-\frac12+}_{1,\infty}\text{ for a.e. }t}
+\underbrace{\mathcal P(t,\cdot)}_{\in B^{\sigma_0+\min\{r_0,\frac12\}- }_{1,\infty} \text{ for all }t}. \]
From this we conclude that if $2\sigma_0-\frac12<\sigma_0+\min\{r_0,\frac12\}$, that is, either $r_0\ge \frac12$ or $\sigma_0-\frac12 <r_0<\frac12$, then for almost all $t\notin \pi \mathbb Q$ neither the real nor the imaginary part of $u(t,\cdot)$ belongs to $B^{2\sigma_0-\frac12+}_{1,\infty}$. By Theorem \ref{FDL}, we conclude that the graphs of the real and imaginary parts of $u(t,\cdot)$ have upper Minkowski dimension $\ge 2-(2\sigma_0-\frac12)=\frac 52-2\sigma_0$.
\end{proof}
\section{Bilinear estimate in the Bourgain space} \label{sec3}
In this section, we obtain bilinear estimates for $R(V,u)$ in the Bourgain space, which are essential in proving Proposition \ref{thm2}.
\subsection{The Bourgain space}
For $s,b\in\mathbb{R}$, we denote by $X^{s,b}$ the closure of the set of Schwartz functions $\mathcal S(\mathbb R ; C^\infty (\mathbb T))$ under the norm
\[ \|u\|_{X^{s,b}} := \| \langle k\rangle^s\langle\tau+k^2\rangle^b\widetilde{u}(\tau,k)\|_{L^2_{\tau}l^2_{k}(\mathbb R\times \mathbb Z)}, \]
where $\widetilde{u}$ denotes the space-time Fourier transform of $u$ defined by
\[ \widetilde{u}(\tau,k)=\int_{\mathbb{R}} e^{-i\tau t}\widehat{u}(t,k) dt
=\frac{1}{2\pi} \int_{\mathbb{R}}\int^{2\pi}_{0} e^{-i(\tau t+kx)} u(t,x) dx dt, \quad (\tau, k)\in \mathbb R\times \mathbb Z. \]
The $X^{s,b}$-space is also called the \emph{Bourgain space} or \emph{dispersive Sobolev space}. We also define, for a closed interval $I\subset \mathbb{R}$, the restricted space $X^{s,b}_I$ as the Banach space of functions on $I\times \mathbb T$ equipped with the norm
\[ \|u\|_{X^{s,b}_I} = \inf\big\{\|w\|_{X^{s,b}} \colon w\vert_{I\times\mathbb T}=u \big\}. \]
We notice that $X^{s,b}_\mathbb R=X^{s,b}$.
The following are some basic properties of the Bourgain space that we need to prove the smoothing estimate (Proposition \ref{thm2}). The proofs of the properties, which we omit, can be obtained by a routine adaptation of those of \cite[Corollary 2.10, Lemma 2.11 and Proposition 2.12]{T2} to the spatially periodic setting, together with a time translation argument.
\begin{lem}\label{bourgain}
Let $s\in\mathbb{R}$, $b>\frac12$ and $I\subset \mathbb R$ a closed interval. Then $X_I^{s,b} \hookrightarrow C(I; H^s)$ and
\[ \sup_{t\in I}\|u(t,\cdot)\|_{H^s}\leq C \|u\|_{X_I^{s,b}}, \]
where $C$ is a constant depending only on $b$.
\end{lem}
\begin{lem}\label{lem2}
Let $s\in\mathbb{R}$, $-\frac12<b<b^\prime<\frac12$ and $I$ a closed interval of length $\delta$. Then
\[ \|u\|_{X^{s,b}_I} \leq C\delta^{b^\prime-b}\|u\|_{X^{s,b^\prime}_I}, \]
where the constant $C$ depends only on $b$ and $b'$.
\end{lem}
\begin{lem}\label{lem1}
Let $s\in\mathbb{R}$, $\frac12<b\leq 1$ and $I=[t_0, t_0+\delta]$ for $t_0\in\mathbb R$ and $0<\delta\le 1$. Then, for $t\in I$, we have
\begin{gather*}
\| e^{i(t-t_0)\partial_{xx}}f\|_{X^{s,b}_I} \leq C\|f\|_{H^{s}}, \\
\Big \|\int^t_{t_0} e^{i(t-t')\partial_{xx}}F(t',\cdot)dt' \Big\|_{X^{s,b}_I} \leq C\| F\|_{X^{s,b-1}_I},
\end{gather*}
where $C$ depends only on $b$.
\end{lem}
\subsection{Bilinear estimate for $R(V,u)$} \label{subsec1}
In this subsection we estimate $R(V,u)=(V-\widehat V(0)) u$ in the $X^{s,b}$-space.
\begin{prop}\label{prop1}
Let $r\ge 0$ and $0< s<1+r$, and suppose that
\[ a\le r, \quad a<1+r-s \quad \text{and} \quad a<1/2. \]
Then, for every interval $I$,
\begin{equation}\label{e:bil-est}
\|R(V,u)\|_{X_I^{s+a,b'-1}}\lesssim \|V\|_{H^r}\|u\|_{X_I^{s,b}}
\end{equation}
provided that $b, b' \in (\frac12, \frac12+\epsilon)$ for an $\epsilon>0$ small enough.
\end{prop}
We first recall from \cite[Lemma 3.3]{ET2} the following simple lemma which is used several times in proving the proposition.
\begin{lem}\label{lem3}
Let us define, for $\beta\ge 0$,
\begin{equation}\label{e:phi-k}
\phi_{\beta}(k)
:= \sum_{|n|\leq|k|}\frac{1}{\langle n\rangle^{\beta}}
\approx
\begin{cases}
1,&\beta>1,\\
\log(1+\langle k\rangle),&\beta=1,\\
\langle k\rangle^{1-\beta},&\beta<1.
\end{cases}
\end{equation}
If $\beta \geq \gamma \geq 0$ and $\beta +\gamma>1$, then
\begin{align*}
\sum_n \frac{1}{\langle n-k_1\rangle^{\beta}\langle n-k_2\rangle^{\gamma}}
\approx \int_{\mathbb{R}} \frac{1}{\langle \tau-k_1\rangle^{\beta}\langle \tau-k_2\rangle^{\gamma}} d\tau
\lesssim \frac{\phi_{\beta}(k_1-k_2)}{\langle k_1-k_2\rangle^{\gamma} }.
\end{align*}
\end{lem}
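The three regimes in \eqref{e:phi-k} are easy to confirm numerically. The rough sketch below (all cutoffs are arbitrary choices for illustration) checks the convergent regime $\beta>1$ and the growth rate $\langle k\rangle^{1-\beta}$ for $\beta=1/2$:

```python
import numpy as np

def phi(beta, k):
    """phi_beta(k) = sum_{|n| <= k} <n>^(-beta) with <n> = (1 + n^2)^(1/2)."""
    n = np.arange(-k, k + 1)
    return float(np.sum((1.0 + n * n) ** (-beta / 2.0)))

# beta > 1: the sum converges, so large-k increments are negligible.
tail = phi(2.0, 10**5) - phi(2.0, 10**4)

# beta = 1/2 < 1: phi_beta(k) grows like <k>^(1 - beta) = <k>^(1/2),
# so doubling k should multiply phi_beta by about 2^(1/2).
ratio = phi(0.5, 2 * 10**5) / phi(0.5, 10**5)

print(tail)    # tiny (about 2e-4)
print(ratio)   # close to sqrt(2)
```

The logarithmic regime $\beta=1$ can be checked the same way by comparing $\phi_1(k)/\log(1+\langle k\rangle)$ at a few values of $k$.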
\begin{proof}[Proof of Proposition \ref{prop1}]
First, let us prove \eqref{e:bil-est} by replacing the restricted spaces $X_I^{s+a,b'-1}$ and $X_I^{s,b}$ with $X^{s+a,b'-1}$ and $X^{s,b}$, respectively. We write
\begin{align}
&\|R(V,u)\|_{X^{s+a,b'-1}}^2 \label{e:RVu-X} \\
&= \int_{\mathbb R} \sum_{k\in \mathbb Z} \bigg | \sum_{l\in\mathbb Z\setminus\{k\} } \langle k\rangle^{s+a} \langle \tau +k^2 \rangle^{b'-1} \widehat{V}(k-l) \widetilde{u}(\tau,l)\bigg |^2 d\tau \nonumber \\
&= \int_{\mathbb R} \sum_{k\in \mathbb Z} \bigg | \sum_{l\in \mathbb Z\setminus \{k\}} M(k,l,\tau)
\langle k-l\rangle^{r} \widehat{V}(k-l) \langle l \rangle^{s}\langle \tau + l^2\rangle^{b} \,\widetilde{u}(\tau, l) \bigg |^2 d\tau , \nonumber
\end{align}
where
\[ M(k,l,\tau):=\langle k\rangle^{s+a} \langle \tau +k^2 \rangle^{b'-1} \langle l \rangle^{-s} \langle k- l \rangle^{-r} \langle \tau + l^2 \rangle^{-b}. \]
By the Cauchy--Schwarz inequality and Young's inequality for the convolution, \eqref{e:RVu-X} is bounded by
\begin{align*}
&\sup_{\tau,k}\sum_{l\in \mathbb Z\setminus \{k\}}M(k,l,\tau)^2
\int \sum_{k\in \mathbb Z}\sum_{m\in \mathbb Z}\langle k-m \rangle^{2r} |\widehat{V}(k-m)|^2 \langle m \rangle^{2s}\langle \tau + m^2\rangle^{2b}|\widetilde{u}(\tau,m)|^2 d\tau \\
&\le \sup_{\tau,k}\sum_{l\in \mathbb Z\setminus \{k\}}M(k,l,\tau)^2 \|V\|_{H^r}^2 \|u\|_{X^{s,b}}^2.
\end{align*}
Hence it remains to prove that
\[ \sup_{\tau,k}\sum_{l\in \mathbb Z\setminus\{k\}}M(k,l,\tau)^2 <\infty. \]
From the assumptions, it is clear that $2b\ge 2-2b'$. Hence, by the elementary inequality $\langle \tau +m\rangle \langle \tau +n\rangle \ge \frac12 \langle n-m\rangle$ (applied here with $m=k^2$ and $n=l^2$), we have
\begin{align}
\sum_{l\in \mathbb Z\setminus\{k\}} M(k,l,\tau)^2
&\le \sum_{l\in \mathbb Z\setminus\{k\}}\frac{\langle k\rangle^{2s+2a} \langle l\rangle^{-2s} \langle k-l \rangle^{-2r} }{\langle \tau +k^2 \rangle^{2-2b'} \langle \tau + l^2\rangle^{2-2b'}} \nonumber \\
&\lesssim \sum_{l\in \mathbb Z\setminus\{k\}} \frac{\langle k\rangle^{2s+2a} \langle l\rangle^{-2s} \langle k-l \rangle^{-2r} }{\langle l^2 - k^2 \rangle^{2-2b'}} \nonumber \\
&\lesssim \sum_{l\in \mathbb Z\setminus\{\pm k\}} \frac{\langle k\rangle^{2s+2a} \langle l\rangle^{-2s} \langle k- l \rangle^{-2r} }{\langle l+ k \rangle^{2-2b'} \langle l- k \rangle^{2-2b'}} + \langle k \rangle^{2a-2r}. \label{e:summation}
\end{align}
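For the reader's convenience, let us verify the elementary inequality $\langle \tau+m\rangle \langle \tau+n\rangle \ge \frac12 \langle n-m\rangle$ used above. Since $\langle a\rangle, \langle b\rangle \ge 1$, we have $\langle a\rangle \langle b\rangle \ge \max\{\langle a\rangle, \langle b\rangle\} \ge \frac12\big(\langle a\rangle + \langle b\rangle\big)$, while squaring both sides shows that $\langle a\rangle + \langle b\rangle \ge \langle a+b\rangle$ (this reduces to $1+2\langle a\rangle \langle b\rangle \ge 2ab$, which holds since $\langle a\rangle\langle b\rangle \ge |ab|$). Taking $a=\tau+m$ and $b=-(\tau+n)$ yields
\[ \langle \tau+m\rangle \langle \tau+n\rangle \ge \tfrac12\big(\langle \tau+m\rangle + \langle \tau+n\rangle\big) \ge \tfrac12 \langle n-m\rangle. \]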
Since $a\leq r$, it is obvious that $\langle k \rangle^{2a-2r}\le 1$, so we only need to show that the summation in \eqref{e:summation} is bounded uniformly in $k\in \mathbb Z$. For this purpose, we split the set of summation indices $\mathbb Z\setminus\{\pm k\}$ into two (possibly overlapping) pieces and separately consider the contributions of
\[ A_k:=\{l\in \mathbb Z\colon \, l\neq\pm k, \, |k-l| \ge |k|/2 \} \ \ \text{and} \ \ B_k:=\{l\in \mathbb Z\colon \, l\neq\pm k, \, |l|> |k|/2 \}. \]
It is obvious that $A_k\cup B_k=\mathbb Z\setminus\{\pm k\}$.
Let us first prove the boundedness of the summation in \eqref{e:summation} over the set $A_k$. If $l\in A_k$ then $\langle k-l \rangle\gtrsim \langle k \rangle$. It is also clear from the assumptions on $s$ and $b'$ that $2-2b'+2s>1$. Hence application of Lemma \ref{lem3} yields
\begin{align}
\sum_{l\in A_k}\frac{\langle k\rangle^{2s+2a} \langle l\rangle^{-2s} \langle k-l \rangle^{-2r} }{\langle l+k\rangle^{2-2b'} \langle l-k\rangle^{2-2b'}}
&\lesssim \langle k\rangle^{2s+2a-2r+2b'-2} \sum_{l\in A_k} \frac{1}{\langle l+k\rangle^{2-2b'} \langle l\rangle^{2s}} \nonumber \\
&\lesssim \langle k\rangle^{2s+2a-2r+2b'-2}\frac{\phi_{\max \{2s,2-2b'\}}(k)}{\langle k\rangle^{\min \{2s,2-2b'\}}}. \label{e:phk}
\end{align}
In the case $s\geq \frac{1}{2}$, $\max\{2s, 2-2b'\}=2s$ since $b'>\frac12$. It follows from \eqref{e:phi-k} that
\[ \eqref{e:phk}=\langle k\rangle^{2s+2a-2r+4b'-4} \phi_{2s}(k) \lesssim \langle k \rangle^{2s+2a-2r+4b'-4} \log(1+\langle k\rangle). \]
Since $b'<\frac12+\epsilon$, the last quantity is again bounded by
\[ \langle k\rangle ^{2s+2a-2r-2+4\epsilon}\log(1+\langle k \rangle), \]
which is bounded uniformly in $k$ since $a<1+r-s$ and $\epsilon$ is small enough.
In the other case $0<s<\frac{1}{2}$, we have $\max\{2s, 2-2b'\}=2-2b'$ when $\epsilon$ is sufficiently small. Thus, we get from \eqref{e:phi-k} that
\[ \eqref{e:phk}=\langle k\rangle^{2a-2r+2b'-2} \phi_{2-2b'}(k) \approx \langle k\rangle^{2a-2r+4b'-3} \le \langle k\rangle^{2a-2r-1 +4\epsilon} , \]
which is uniformly bounded since $a<\frac12$ and $\epsilon$ is small enough.
We now show the boundedness of the summation in \eqref{e:summation} over the set $B_k$. We note that $\langle l\rangle \gtrsim \langle k\rangle$ on $B_k$, so that $\langle k\rangle^{2s}\langle l\rangle^{-2s}\lesssim 1$. Since $r\ge 0$ and $b'-\frac12<\epsilon\ll 1$ we see that $2(2-2b')+2r>1$. Hence, application of Lemma \ref{lem3} gives
\begin{align}
\sum_{l\in B_k}\frac{\langle k\rangle^{2s+2a} \langle l\rangle^{-2s} \langle k-l \rangle^{-2r} }{\langle l+k\rangle^{2-2b'} \langle l-k\rangle^{2-2b'}}
&\lesssim \langle k\rangle^{2a} \sum_{l\in B_k} \frac{1}{\langle l+k\rangle^{2-2b'} \langle l-k \rangle^{2-2b'+2r}} \nonumber \\
&\lesssim \langle k\rangle^{2a+2b'-2} \phi_{2-2b'+2r}(k). \label{e:phk1}
\end{align}
By the estimate \eqref{e:phi-k} we have
\[ \eqref{e:phk1} \lesssim
\begin{cases}
\langle k\rangle^{2a+4b'-3} &\text{if} \quad r=0,\\
\langle k\rangle^{2a+2b'-2} &\text{if} \quad r>0,
\end{cases} \]
both of which are uniformly bounded since $a<\frac{1}{2}$ and $\epsilon$ is sufficiently small.
We now prove \eqref{e:bil-est}. Let $\chi_I\in C_0^\infty(\mathbb R)$ be such that $\chi_I=1$ on $I$. By the definition of the restricted Bourgain space and the estimate we have just obtained, we see that
\[ \|R(V,u)\|_{X_I^{s+a,b'-1}} \le \|\chi_I R(V,u)\|_{X^{s+a,b'-1}} \lesssim \|V\|_{H^r} \|\chi_I u\|_{X^{s,b}}. \]
Taking the infimum on the right-hand side over all functions $w$ such that $w\vert_{ I\times \mathbb R}=u$, we get the estimate \eqref{e:bil-est}.
\end{proof}
\section{Local well-posedness and smoothing estimate} \label{sec4}
In this section, we make use of Proposition \ref{prop1} and employ the contraction mapping principle to establish local well-posedness in the Sobolev spaces for the initial value problem \eqref{PS}. We then prove Proposition \ref{thm2}.
Let $u$ be the solution to the equation \eqref{PS} and set $v(t,x):=e^{it\widehat V(0)}u(t,x)$. Since
\begin{align*}
iv_t+v_{xx} &= e^{it\widehat V(0)}\big(i u_t - \widehat V(0) u + u_{xx} \big) \\
&= e^{it\widehat V(0)} \big(V -\widehat V(0) \big)u =\big(V -\widehat V(0) \big)v = R(V,v),
\end{align*}
the initial value problem \eqref{PS} is equivalent to the following:
\begin{equation}\label{MPS}
\begin{cases}
iv_t + v_{xx} =R(V,v),\\
v(0, x) = f(x).
\end{cases}
\end{equation}
Let us recall the definition of $\mathcal P(t,x)$ from \eqref{e:inhom} and note that
\begin{equation} \label{cov}
v(t,x)-e^{it\partial_{xx}}f = e^{it\widehat V(0)} \big( u(t,x) - e^{it(\partial_{xx}-\widehat V(0))} f \big) =e^{it\widehat{V}(0)}\mathcal{P}(t,x).
\end{equation}
Since $\|\mathcal{P}(t,x)\|_{H^s} = \|e^{it\widehat{V}(0)}\mathcal{P}(t,x)\|_{H^s}$, in order to prove Proposition \ref{thm2} it is enough to prove the smoothing estimate for $e^{it\widehat{V}(0)}\mathcal{P}(t,x)$ instead of $\mathcal P(t,x)$. Therefore, in this section, we may and shall consider the equation \eqref{MPS} instead of \eqref{PS}.
\subsection{Local well-posedness}\label{subsec2}
We now prove the initial value problem \eqref{MPS} is locally well-posed in $H^s$. We use the notation $X^{s,b}_\delta :=X^{s,b}_{[0,\delta]}$.
\begin{thm}\label{local}
Let $V\in H^r$ for $r\geq 0$ and suppose $0< s < 1+r$ and $\frac12 <b<\frac 12+\epsilon$ for some $\epsilon>0$ small enough.
For every $f\in H^s$, there exist a time $\delta>0$, an open ball $B$ in $H^s$ containing $f$, and a subset $X$ of $X_{\delta}^{s,b}$ such that for each $g\in B$ there exists a unique solution $v_g\in X$ of the integral equation
\begin{equation}\label{e:int-eq}
v_g(t,x) = e^{it\partial_{xx}}g-i\int_0^t e^{i(t-t')\partial_{xx}}R(V,v_g)(t') dt'.
\end{equation}
Furthermore, the mapping $B\ni g\mapsto v_g\in X$ is Lipschitz continuous with the estimate
\begin{equation}\label{prop2}
\|v_g\|_{X_\delta^{s,b}}\lesssim \|g\|_{H^{s}}.
\end{equation}
\end{thm}
\begin{rem}
By Lemma \ref{bourgain} we have $X_{\delta}^{s,b} \hookrightarrow C([0,\delta], H^s)$. Thus, the theorem establishes the local well-posedness of \eqref{MPS} (hence \eqref{e:schrodinger}) in $H^s$, in the sense of \cite[Definition 3.4]{T2}.
\end{rem}
\begin{rem}\label{r:unif-t}
In fact, as can be seen in the proof (see \eqref{e:delta} below), the small time $\delta$ is independent of the initial data $f\in H^s$. Since the initial value problem \eqref{MPS} is invariant under time translations, we can patch solutions of \eqref{e:int-eq} along $t\in \mathbb R$ by repeatedly applying Theorem \ref{local}, obtaining a global solution $v\in C(\mathbb R, H^s)$ to \eqref{MPS}. Also, the estimate \eqref{prop2} combined with Lemma \ref{bourgain} implies the following global bound
\begin{equation}\label{remark}
\|v(t)\|_{H^s}\lesssim e^{|t|}\|f\|_{H^s}, \quad \forall t\in \mathbb R.
\end{equation}
\end{rem}
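In more detail, combining \eqref{prop2} with Lemma \ref{bourgain} yields a constant $C_1\ge 1$ such that $\sup_{t\in[j\delta,(j+1)\delta]}\|v(t)\|_{H^s}\le C_1\|v(j\delta)\|_{H^s}$ for every integer $j\ge 0$. Iterating this over the at most $1+t/\delta$ intervals of length $\delta$ covering $[0,t]$ gives
\[ \|v(t)\|_{H^s} \le C_1^{1+t/\delta}\,\|f\|_{H^s} = C_1\, e^{(\delta^{-1}\log C_1)\, t}\,\|f\|_{H^s}, \qquad t\ge 0, \]
which is the exponential-in-time growth asserted in \eqref{remark}; the case $t<0$ is handled in the same way. Note that the constant in the exponent depends on $\delta$, and hence on $\|V\|_{H^r}$.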
\begin{proof}[Proof of Theorem \ref{local}]
For $f\in H^s$ we set
\[ B:=\{g\in H^s\colon \|g-f\|_{H^s} < 1 \}, \]
and for each $g\in B$ we define
\[ \Gamma_g (v)(t,x) := e^{it\partial_{xx}}g - i\int_0^t e^{i(t-t')\partial_{xx}}R(V,v)(t') dt'. \]
We aim to show that, for some small time $0<\delta<1$ and $K>0$ to be chosen later, the mapping $\Gamma_g$ is a contraction on the set
\[ X:=\big\{w\in X_{\delta}^{s,b} \colon \|w\|_{X_{\delta}^{s,b}}\leq K\max\{1,\|f\|_{H^s}\} \big\} \]
whenever $g\in B$.
Let us first show that the mapping $\Gamma_g\colon X\to X$ is well-defined. Lemmas \ref{lem2} and \ref{lem1} imply that if $g\in B$ and $w\in X$, then
\begin{align*}
\|\Gamma_g (w)\|_{X_{\delta}^{s,b}}
&\leq \|e^{it\partial_{xx}}g\|_{X_{\delta}^{s,b}}+\Big\|\int_0^t e^{i(t-t')\partial_{xx}}R(V,w)(t')dt'\Big\|_{X_{\delta}^{s,b}}\\
&\leq C\|g\|_{H^s}+C \|R(V,w) \|_{X_{\delta}^{s,b-1}}\\
&\leq C(1+\|f\|_{H^s})+C\delta^{b'-b} \|R(V,w) \|_{X_{\delta}^{s,b'-1}}
\end{align*}
whenever $0<\delta< 1$ and $\frac12<b<b'<1$. We then apply Proposition \ref{prop1} (with $a=0$) to see that
\[ \|\Gamma_g (w)\|_{X_{\delta}^{s,b}}
\leq C(1+\|f\|_{H^s}) +C\delta^{b'-b}\|V\|_{H^r} \|w\|_{X_{\delta}^{s,b}}, \]
provided that $\frac12<b<b'<\frac 12+\epsilon$ for $\epsilon>0$ small enough. Since $w\in X$ we get
\[ \|\Gamma_g (w)\|_{X_{\delta}^{s,b}} \leq C_{0} \big(2+K\delta^{b'-b}\|V\|_{H^r} \big) \max\{1,\|f\|_{H^s}\}. \]
If we set $K = 3C_0$ and take $0<\delta<1$ so small that
\begin{equation}\label{e:delta}
\delta^{b'-b}<1/(1+K\|V\|_{H^r}),
\end{equation}
then it follows that
\begin{equation}\label{e:contr}
\|\Gamma_g (w)\|_{X_{\delta}^{s,b}}\leq K\max\{1,\|f\|_{H^s}\}.
\end{equation}
Therefore, $\Gamma_g (X)\subset X$ for $g\in B$.
Secondly, let us prove that the map $\Gamma_g\colon X\to X$ ($g\in B$) is a contraction. In a similar manner, by Lemmas \ref{lem2} and \ref{lem1} and Proposition \ref{prop1} we have
\begin{align*}
\|\Gamma_g (w_1)-\Gamma_g (w_2)\|_{X_{\delta}^{s,b}}
&\leq C\delta^{b'-b} \|R(V,w_1-w_2) \|_{X_{\delta}^{s,b'-1}}\\
&\leq C\delta^{b'-b}\|V\|_{H^r} \|w_1-w_2\|_{X_{\delta}^{s,b}}
\leq \frac{1}{3}\|w_1-w_2\|_{X_{\delta}^{s,b}}.
\end{align*}
Therefore, by applying the contraction mapping principle, it follows that there exists a unique $v_g\in X$ solving the equation $\Gamma_g(v_g)=v_g$.
The estimate \eqref{prop2} also follows from \eqref{e:contr}, since $\|f-g\|_{H^s}<1$. The continuity of the map $g\mapsto v_g$ follows similarly, by using Lemmas \ref{lem2} and \ref{lem1} and Proposition \ref{prop1} as above. Indeed, by \eqref{e:int-eq} we see that for $g, h\in B$
\begin{align*}
&\|v_g-v_h\|_{X_\delta^{s,b}} \\
&\le \|e^{it\partial_{xx}}(g-h)\|_{X_\delta^{s,b}} +\Big \| \int_0^t e^{i(t-t')\partial_{xx}} \big(R(V,v_g)(t')-R(V,v_h)(t')\big) dt' \Big\|_{X_\delta^{s,b}} \\
&\le C\|g-h\|_{H^s} +C\delta^{b'-b}\|V\|_{H^r} \|v_g-v_h\|_{X_\delta^{s,b}} \\
&\le C\|g-h\|_{H^s} +\frac{1}{3}\|v_g-v_h\|_{X_{\delta}^{s,b}},
\end{align*}
from which it follows that $\|v_g-v_h\|_{X_\delta^{s,b}}\lesssim \|g-h\|_{H^s}$.
\end{proof}
\subsection{Smoothing estimate: Proof of Proposition \ref{thm2}} \label{subsec3}
In this final section, we prove Proposition \ref{thm2}.
Let $f$ and $V$ be given as in the statement of Proposition \ref{thm2}.
It is enough to prove that for every $t_\circ \ge 0$ there exists a constant $C>0$ such that
\begin{equation}\label{want}
\| \mathcal{P}(t_\circ,x)\|_{H^{s+a}} \leq C \|V\|_{H^r} \|f\|_{H^s}.
\end{equation}
This gives a gain of regularity for $\mathcal P(t,\cdot)$ compared to the global estimate \eqref{remark} for the solution $v$. In order to prove \eqref{want}, both the bilinear estimate (Proposition \ref{prop1}) and the local well-posedness (Theorem \ref{local}) are crucial.
Let $\delta>0$ be as in Theorem \ref{local} and pick $m \in \mathbb N$ such that $(m-1)\delta\leq t_\circ < m\delta$. We also set
\begin{equation}\label{e:initial}
v_j(x) = v(j\delta, x) \quad \text{and} \quad I_j =[\delta j, \delta (j+1)],
\end{equation}
for $j\in \{0,1,2,\cdots, m-1\}$.
Applying Theorem \ref{local} with the initial data $v_j$, we may write the solution $v$ as
\[ v(t) =e^{i(t-\delta j) \partial_{xx}} v_j -i\int_{\delta j}^t e^{i(t-t')\partial_{xx}} R(V,v)(t') dt', \quad t\in I_j. \]
By Lemmas \ref{bourgain} and \ref{lem1} we have, for $t\in I_j$,
\begin{align*}
\| v(t) - e^{i(t-\delta j)\partial_{xx}} v_j \|_{H^{s+a}}
&\lesssim \Big\|\int_{\delta j}^t e^{i(t-t')\partial_{xx}}R(V,v)(t')dt' \Big\|_{X_{I_j}^{s+a,b}}\\
&\lesssim \|R(V,v) \|_{X_{I_j}^{s+a,b-1}}
\end{align*}
whenever $\frac12<b\le 1$. Let us choose $b$ sufficiently close to $\frac12$, and then apply Proposition \ref{prop1} (with $b'=b$) to see that
\[ \|R(V,v) \|_{X_{I_j}^{s+a,b-1}} \lesssim \|V\|_{H^r}\|v\|_{X_{I_j}^{s,b}}. \]
Hence, by the estimate \eqref{remark}, we conclude that
\begin{equation}\label{e:est-j}
\| v(t) - e^{i(t-\delta j)\partial_{xx}} v_j \|_{H^{s+a}} \le C e^{t_\circ} \|V\|_{H^r} \|f\|_{H^s}, \quad t\in I_j,
\end{equation}
for $j\in \{0,1,2,\cdots, m-1\}$. We recall \eqref{cov} and write
\begin{align*}
&e^{it_\circ \widehat V(0)}\mathcal P(t_\circ,x)
= v(t_\circ) -e^{it_\circ \partial_{xx}} f \\
&= v(t_\circ)-e^{i(t_\circ -\delta (m-1))\partial_{xx}}v_{m-1}
+\sum_{j=1}^{m-1} e^{i(t_\circ -\delta j )\partial_{xx}} \big( v_{j}-e^{i\delta\partial_{xx}}v_{j-1} \big).
\end{align*}
Application of the estimate \eqref{e:est-j} gives
\begin{align*}
&\|\mathcal{P}(t_\circ,\cdot)\|_{H^{s+a}} \\
&\le \| v(t_\circ)-e^{i(t_\circ -\delta (m-1))\partial_{xx}}v_{m-1} \|_{H^{s+a}}
+\sum_{j=1}^{m-1} \| v_{j}-e^{i \delta\partial_{xx}}v_{j-1} \|_{H^{s+a}} \\
&\le Cme^{t_\circ}\|V\|_{H^r}\|f\|_{H^s}.
\end{align*}
Therefore, we obtain the desired estimate \eqref{want}.
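We remark that, since $(m-1)\delta\le t_\circ$, we have $m\le 1+t_\circ/\delta$, so the constant in \eqref{want} may be taken of the form $C(1+t_\circ/\delta)e^{t_\circ}$; in particular, it depends on $t_\circ$ and, through the choice of $\delta$ in \eqref{e:delta}, on $\|V\|_{H^r}$, but not on $f$.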
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 8,564 |
{"url":"https:\/\/physics.stackexchange.com\/questions\/603870\/what-is-the-difference-between-rate-of-cooling-and-rate-of-heat-loss","text":"# What is the difference between rate of cooling and rate of heat loss?\n\nI had a perception that rate of cooling and rate of heat loss are synonymous because the rate at which it looses heat indicates the rate at which it is cooling but this question changed my view\n\nThere are eight identical solid spheres at same temperature, the rate of cooling of each sphere is x. The rate of heat loss from each sphere is Q. All spheres are combined to form a single sphere at same temperature then for new sphere\n\nSince the answer is different for both the rates there must be a difference in both terminologies which I'm not able to get.\n\nI'm aware of stefan formula which says $$dq\/dt = \\sigma$$ $$\\epsilon$$A$$T^4$$ taking atmospheric temp 0 And one result of newtons formula k=$$\\sigma$$ $$\\epsilon$$A$$T^3$$ $$\/ms$$ Where s is specific heat\n\n\u2022 Since the heat loss (in $\\text{W}$ or $\\text{J\/s}$) depends on the surface area here, then yes, the rate of cooling (in $\\text{K\/s}$) will depend on whether the mass is arranged in a single sphere (with a surface area of $4\\pi r^2$) or $n$ multiple spheres (with a surface area of $4\\pi n^{2\/3} r^2$). 
\u2013\u00a0Chemomechanics Dec 29 '20 at 18:06","date":"2021-05-08 22:07:18","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 7, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8916743993759155, \"perplexity\": 358.1131550034662}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-21\/segments\/1620243988927.95\/warc\/CC-MAIN-20210508211857-20210509001857-00626.warc.gz\"}"} | null | null |
Kristin Wagener is the Senior Director of Global Application Software Sales at ABBYY. She is a strong sales professional with expertise in developing international business alliances, growing worldwide sales, managing partners and channels, and executing global go-to-market strategies. Wagener's main products at ABBYY are FineReader, a universal PDF software application powered by best-in-breed OCR, and FineReader Server, an automated high-volume document conversion server. She spent most of her career in Germany doing IT channel sales, where she held several management roles. She is a graduate of the University of Minnesota-Twin Cities and is fluent in English and German.
This session for ABBYY Partners will discuss how we work together with partners to address customer needs. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,354 |
Q: INTERMITTENT FirebaseFirestoreException: PERMISSION_DENIED: Missing or insufficient permissions I have an android app with PeriodicWork that writes data to Firestore every 30 mins. It works fine but I noticed that in some cases, it fails with the exception FirebaseFirestoreException: PERMISSION_DENIED. I don't understand why it is complaining about the permission, if it can write data to Firestore.
// Firestore Rules
service cloud.firestore {
match /databases/{database}/documents {
match /bar/{document=**} {
allow read: if true;
allow create: if true;
}
}
}
// Write to Firestore
val firebaseFirestore = FirebaseFirestore.getInstance()
firebaseFirestore.collection(BAR_KEY)
.add(barData)
.addOnSuccessListener { /*success callback*/ }
.addOnFailureListener { e ->
Crashlytics.log(e.message)
Crashlytics.logException(e)
/* failure callback */
}
Non-fatal Exception: com.google.firebase.firestore.FirebaseFirestoreException: PERMISSION_DENIED: Missing or insufficient permissions.
at com.google.firebase.firestore.util.Util.exceptionFromStatus(com.google.firebase:firebase-firestore@@18.1.0:119)
at com.google.firebase.firestore.core.SyncEngine.notifyUser(com.google.firebase:firebase-firestore@@18.1.0:446)
at com.google.firebase.firestore.core.SyncEngine.handleRejectedWrite(com.google.firebase:firebase-firestore@@18.1.0:430)
at com.google.firebase.firestore.core.FirestoreClient.handleRejectedWrite(com.google.firebase:firebase-firestore@@18.1.0:275)
at com.google.firebase.firestore.remote.RemoteStore.handleWriteError(com.google.firebase:firebase-firestore@@18.1.0:707)
at com.google.firebase.firestore.remote.RemoteStore.handleWriteStreamClose(com.google.firebase:firebase-firestore@@18.1.0:663)
at com.google.firebase.firestore.remote.RemoteStore.access$600(com.google.firebase:firebase-firestore@@18.1.0:53)
at com.google.firebase.firestore.remote.RemoteStore$2.onClose(com.google.firebase:firebase-firestore@@18.1.0:206)
at com.google.firebase.firestore.remote.AbstractStream.close(com.google.firebase:firebase-firestore@@18.1.0:334)
at com.google.firebase.firestore.remote.AbstractStream.handleServerClose(com.google.firebase:firebase-firestore@@18.1.0:388)
at com.google.firebase.firestore.remote.AbstractStream$StreamObserver.lambda$onClose$3(com.google.firebase:firebase-firestore@@18.1.0:149)
at com.google.firebase.firestore.remote.AbstractStream$StreamObserver$$Lambda$4.run(Unknown Source:4)
at com.google.firebase.firestore.remote.AbstractStream$CloseGuardedRunner.run(com.google.firebase:firebase-firestore@@18.1.0:67)
at com.google.firebase.firestore.remote.AbstractStream$StreamObserver.onClose(com.google.firebase:firebase-firestore@@18.1.0:135)
at com.google.firebase.firestore.util.FirestoreChannel$1.onClose(com.google.firebase:firebase-firestore@@18.1.0:161)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:678)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:397)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:458)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:301)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at com.google.firebase.firestore.util.AsyncQueue$DelayedStartFactory.run(com.google.firebase:firebase-firestore@@18.1.0:205)
at java.lang.Thread.run(Thread.java:764)
Caused by io.grpc.StatusException: PERMISSION_DENIED: Missing or insufficient permissions.
at io.grpc.Status.asException(Status.java:534)
at com.google.firebase.firestore.util.Util.exceptionFromStatus(com.google.firebase:firebase-firestore@@18.1.0:117)
at com.google.firebase.firestore.core.SyncEngine.notifyUser(com.google.firebase:firebase-firestore@@18.1.0:446)
at com.google.firebase.firestore.core.SyncEngine.handleRejectedWrite(com.google.firebase:firebase-firestore@@18.1.0:430)
at com.google.firebase.firestore.core.FirestoreClient.handleRejectedWrite(com.google.firebase:firebase-firestore@@18.1.0:275)
at com.google.firebase.firestore.remote.RemoteStore.handleWriteError(com.google.firebase:firebase-firestore@@18.1.0:707)
at com.google.firebase.firestore.remote.RemoteStore.handleWriteStreamClose(com.google.firebase:firebase-firestore@@18.1.0:663)
at com.google.firebase.firestore.remote.RemoteStore.access$600(com.google.firebase:firebase-firestore@@18.1.0:53)
at com.google.firebase.firestore.remote.RemoteStore$2.onClose(com.google.firebase:firebase-firestore@@18.1.0:206)
at com.google.firebase.firestore.remote.AbstractStream.close(com.google.firebase:firebase-firestore@@18.1.0:334)
at com.google.firebase.firestore.remote.AbstractStream.handleServerClose(com.google.firebase:firebase-firestore@@18.1.0:388)
at com.google.firebase.firestore.remote.AbstractStream$StreamObserver.lambda$onClose$3(com.google.firebase:firebase-firestore@@18.1.0:149)
at com.google.firebase.firestore.remote.AbstractStream$StreamObserver$$Lambda$4.run(Unknown Source:4)
at com.google.firebase.firestore.remote.AbstractStream$CloseGuardedRunner.run(com.google.firebase:firebase-firestore@@18.1.0:67)
at com.google.firebase.firestore.remote.AbstractStream$StreamObserver.onClose(com.google.firebase:firebase-firestore@@18.1.0:135)
at com.google.firebase.firestore.util.FirestoreChannel$1.onClose(com.google.firebase:firebase-firestore@@18.1.0:161)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:678)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:397)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:458)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:301)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at com.google.firebase.firestore.util.AsyncQueue$DelayedStartFactory.run(com.google.firebase:firebase-firestore@@18.1.0:205)
at java.lang.Thread.run(Thread.java:764)
A: Your rules only allow create, so when a write tries to update an existing document again, it will fail with PERMISSION_DENIED.
Change add(barData) to set(barData, SetOptions.merge())
and add the line below to your security rules:
allow update: if true;
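Putting this suggestion together with the rules from the question, the combined ruleset would look like the sketch below (leaving every operation open like this is only suitable for testing; tighten the conditions to your actual auth model):
// Firestore Rules (with update allowed)
service cloud.firestore {
match /databases/{database}/documents {
match /bar/{document=**} {
allow read: if true;
allow create: if true;
allow update: if true;
}
}
}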
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,258 |
Q: How do you change a class when a button is clicked in Svelte?
The code is here:
<script lang="ts">
let rank = 1;
const changeRank = () => {
if (rank == 1) {
rank = 2
} else {
rank = 1
}
};
</script>
<main>
<div class="card" class:input-focus={rank === 1? "rank-1" : "rank-2"} />
<button on:click={changeRank}>Change Rank</button>
</main>
A: You are using a class directive (class:...), which adds the class named after the : if the value is truthy. That is probably not what you want here, because both branch values are truthy, so the class input-focus will always be added.
You probably meant to do something like this:
<div class="card {rank === 1 ? 'rank-1' : 'rank-2'}" />
If all classes have that prefix you could just do this as well:
<div class="card rank-{rank}" />
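Putting the fix back into the original component, the corrected version might look like this (using the rank-1/rank-2 class names from the question):
<script lang="ts">
let rank = 1;
const changeRank = () => {
rank = rank === 1 ? 2 : 1;
};
</script>
<main>
<div class="card rank-{rank}" />
<button on:click={changeRank}>Change Rank</button>
</main>
Any CSS targeting .rank-1 / .rank-2 will then switch on each click.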
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 5,234 |
Kuroda Kyoko (; born 8 May 1969) is a Japanese footballer. She played for the Japan national team.
Club career
She played for Prima Ham Kunoichi.
International career
She made her debut for the Japan national team on 12 January 1989, in a match against Finland. She was part of the Japan squad at the 1991 FIFA Women's World Cup. Between 1989 and 1994 she played 21 matches and scored 7 goals for the national team.
Career statistics
References
External links
Japanese women's footballers
Japan women's international footballers
"redpajama_set_name": "RedPajamaWikipedia"
} | 3,465 |
# Verify that the named feature can be enabled for this toolchain
# (delegates the actual check to verify_gcc_enable).
verify_enable(){
  feature="$1"
  feature_status=${!1}   # indirect expansion: value of the variable named by $1
  verify_gcc_enable $feature
}
# Append OS-specific flags to the cmake invocation.
add_os_flags() {
  CMAKE_BUILD_COMMAND="${CMAKE_BUILD_COMMAND} -DFAIL_ON_WARNINGS= "
}
# Install cmake via the system package manager.
bootstrap_cmake(){
  sudo apt-get -y install cmake
}
# Map each enabled build option to its apt package(s), then install
# everything in a single apt-get invocation.
build_deps(){
  ## need to account for debian
COMMAND="sudo apt-get -y install cmake gcc g++ zlib1g-dev libssl-dev uuid uuid-dev"
export DEBIAN_FRONTEND=noninteractive
INSTALLED=()
INSTALLED+=("libbz2-dev")
sudo apt-get -y update
for option in "${OPTIONS[@]}" ; do
option_value="${!option}"
if [ "$option_value" = "${TRUE}" ]; then
# option is enabled
FOUND_VALUE=""
for cmake_opt in "${DEPENDENCIES[@]}" ; do
KEY=${cmake_opt%%:*}
VALUE=${cmake_opt#*:}
if [ "$KEY" = "$option" ]; then
FOUND_VALUE="$VALUE"
if [ "$FOUND_VALUE" = "libcurl" ]; then
INSTALLED+=("libcurl4-openssl-dev")
elif [ "$FOUND_VALUE" = "libpcap" ]; then
INSTALLED+=("libpcap-dev")
elif [ "$FOUND_VALUE" = "openssl" ]; then
INSTALLED+=("openssl")
elif [ "$FOUND_VALUE" = "libusb" ]; then
INSTALLED+=("libusb-1.0-0-dev")
INSTALLED+=("libusb-dev")
elif [ "$FOUND_VALUE" = "libpng" ]; then
INSTALLED+=("libpng-dev")
elif [ "$FOUND_VALUE" = "bison" ]; then
INSTALLED+=("bison")
elif [ "$FOUND_VALUE" = "flex" ]; then
INSTALLED+=("flex")
elif [ "$FOUND_VALUE" = "automake" ]; then
INSTALLED+=("automake")
elif [ "$FOUND_VALUE" = "autoconf" ]; then
INSTALLED+=("autoconf")
elif [ "$FOUND_VALUE" = "libtool" ]; then
INSTALLED+=("libtool")
elif [ "$FOUND_VALUE" = "python" ]; then
INSTALLED+=("libpython3-dev")
elif [ "$FOUND_VALUE" = "jnibuild" ]; then
INSTALLED+=("openjdk-8-jdk")
INSTALLED+=("openjdk-8-source")
INSTALLED+=("maven")
elif [ "$FOUND_VALUE" = "lua" ]; then
INSTALLED+=("liblua5.1-0-dev")
elif [ "$FOUND_VALUE" = "gpsd" ]; then
INSTALLED+=("libgps-dev")
elif [ "$FOUND_VALUE" = "libarchive" ]; then
INSTALLED+=("liblzma-dev")
fi
fi
done
fi
done
for option in "${INSTALLED[@]}" ; do
COMMAND="${COMMAND} $option"
done
echo "Ensuring you have all dependencies installed..."
${COMMAND}
}
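The option-to-package mapping in build_deps relies on Bash indirect expansion (`${!option}`) and prefix/suffix parameter expansion for the `KEY:VALUE` pairs. A minimal standalone sketch of the same pattern (the option and package names here are hypothetical, not part of the script above):

```shell
# Standalone sketch of the ${!var} indirection + KEY:VALUE mapping pattern
# used in build_deps above. TEST_CURL / TEST_PCAP are made-up feature flags.
TRUE="Enabled"
TEST_CURL="Enabled"       # this feature is on
TEST_PCAP="Disabled"      # this feature is off
OPTIONS=("TEST_CURL" "TEST_PCAP")
DEPENDENCIES=("TEST_CURL:libcurl" "TEST_PCAP:libpcap")
INSTALLED=()
for option in "${OPTIONS[@]}"; do
  option_value="${!option}"          # indirect expansion: reads $TEST_CURL, etc.
  if [ "$option_value" = "$TRUE" ]; then
    for cmake_opt in "${DEPENDENCIES[@]}"; do
      KEY=${cmake_opt%%:*}           # part before the first ':'
      VALUE=${cmake_opt#*:}          # part after the first ':'
      if [ "$KEY" = "$option" ]; then
        INSTALLED+=("$VALUE")        # collect the package for this option
      fi
    done
  fi
done
echo "packages to install: ${INSTALLED[*]}"
```

Only the enabled option contributes, so the sketch collects just libcurl.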
| {
"redpajama_set_name": "RedPajamaGithub"
} | 7,204 |
\section{Introduction}
Let $M$ be an $n$-dimensional Riemannian manifold. For $p\in M$ and a plane section $\pi\subseteq T_pM$, denote by $K(\pi)$ the sectional curvature of $M$ associated with $\pi$. If $L$ is an $r$-dimensional subspace of $T_pM$ with $2 \leq r \leq n$ and $\{e_1,\ldots,e_r\}$ is an orthonormal basis of $L$, the scalar curvature of $L$ is defined by
\begin{align}\label{1.2}
\tau(L)=\sum_{{\alpha,\beta=1}\atop{\alpha<\beta}}^r K(e_\alpha\wedge e_\beta).
\end{align}
It is easily checked that this definition does not depend on the chosen orthonormal basis of $L$. In particular, the scalar curvature $\tau$ of $M$ at $p$ is defined to be $\tau(p) = \tau(T_pM)$.
For given integers $n\geq 3$ and $k\geq 1$, we denote by $\mathcal S(n,k)$ the finite set consisting of all $k$-tuples $(n_1,\ldots,n_k)$ of integers satisfying $2 \leq n_1 \leq \cdots \leq
n_k \leq n-1$ and $n_1+\cdots+n_k\leq n.$ Denote the union $\bigcup_{k\geq 1}\mathcal S(n,k)$ by ${\mathcal S}(n)$.
For each $(n_1,\ldots,n_k)\in \mathcal S(n)$, the first author introduced in \cite{c00a} the Riemannian invariant $\delta{(n_1,\ldots,n_k)}$ defined by
\begin{align}\label{1.3}
\delta(n_1,\ldots,n_k)(p)=\tau(p)- \inf\{\tau(L_1)+\cdots+\tau(L_k)\}
\end{align}
for any $p\in M^{n}$, where $L_1,\ldots,L_k$ run over all $k$-tuples of mutually orthogonal subspaces of $T_pM^{n}$ such that $\dim L_j=n_j$ for $j=1,\ldots,k$.
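For instance, the simplest choice $k=1$, $n_1=2$ gives, in view of \eqref{1.2} (note that $\tau(L)=K(L)$ whenever $\dim L=2$),
\[ \delta(2)(p)=\tau(p)-\inf\{K(\pi)\}, \]
where the infimum runs over all plane sections $\pi\subseteq T_pM$.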
For any submanifold of a real space form of constant sectional curvature~$c$, we have the following sharp general inequality relating intrinsic data of the submanifold (the $\delta$-invariant) with extrinsic data of the immersion (the mean curvature). We refer to \cite{c00a,book} for more details.
\begin{theorem}\label{T:1.1} Let $M$ be an $n$-dimensional submanifold of a real space form of constant sectional curvature $c$. Then for each $k$-tuple $(n_1,\ldots,n_k)\in\mathcal S(n)$ and at any point $p \in M$, the following inequality holds:
\begin{align*}
\delta(n_1,\ldots,n_k) \leq \frac{n^2(n+k-1-\sum_{j=1}^k n_j)}{2(n+k-\sum_{j=1}^k n_j)} H^2 + b(n_1,\ldots,n_k)c,
\end{align*}
where $H^2$ is the squared mean curvature of $M$ at $p$ and $b(n_1,\ldots,n_k)$ is defined by
\begin{align*} b(n_1,\ldots,n_k)=\frac{n(n-1)}{2}-\sum_{j=1}^k \frac{n_j(n_j-1)}{2}.
\end{align*}
\end{theorem}
The same inequality holds for Lagrangian submanifolds of a complex space form $\tilde M^n(4c)$, but it is not optimal in that case. Recall that a submanifold of a K\"ahler manifold is called \emph{Lagrangian} if the almost complex structure $J$ induces an isomorphism between the tangent space and the normal space at every point or, equivalently, if the K\"ahler 2-form restricted to the submanifold vanishes.
An optimal result for Lagrangian submanifolds was obtained in \cite{cdvv}, where a distinction needed to be made between the cases $n_1+\ldots+n_k < n$ and $n_1+\ldots+n_k = n$. In particular, we obtained the following results.
\begin{theorem} \label{T:1.2}
Let $M$ be a Lagrangian submanifold of a complex space form
$\tilde M^n(4c)$. Then for each $k$-tuple $(n_1,\ldots,n_k)\in\mathcal S(n)$ with $n_1+\ldots+n_k < n$, and at any point of $M^n$, the following inequality holds:
\begin{equation*}
\delta(n_1,\ldots,n_k)
\leq \frac{n^2 \left(n -\sum_{j=1}^k n_j + 3k - 1 - 6\sum_{j=1}^k \frac{1}{2+n_j} \right)}
{2 \left(n -\sum_{j=1}^k n_j + 3k + 2 - 6\sum_{j=1}^k \frac{1}{2+n_j} \right)} H^2
+ b(n_1,\ldots,n_k)c,
\end{equation*}
where $b(n_1,\ldots,n_k)$ is as in Theorem \ref{T:1.1}.
\end{theorem}
\begin{theorem} \label{T:1.3}
Let $M$ be a Lagrangian submanifold of a complex space form
$\tilde M^n(4c)$. Then for each $k$-tuple $(n_1,\ldots,n_k)\in\mathcal S(n)$ with $n_1+\ldots+n_k = n$, and at any point of $M^n$, the following inequality holds:
$$\delta(n_1,\ldots,n_k)
\leq \frac{n^2\left(k-1-2\sum_{j=2}^k\frac{1}{n_j+2}\right)}{2\left(k-2\sum_{j=2}^k\frac{1}{n_j+2}\right)}H^2
+b(n_1,\ldots,n_k)c,
$$
where $b(n_1,\ldots,n_k)$ is as in Theorem \ref{T:1.1}.
\end{theorem}
In both cases, a (different) full description of the second fundamental form of those submanifolds realizing equality in the inequality at any of their points is also given in \cite{cdvv}. We call such a Lagrangian submanifold \emph{$\delta(n_1,\ldots,n_k)$-ideal}. Since the mean curvature is a measure for the tension a submanifold experiences from its shape in the ambient space, the submanifolds are shaped ideally in the sense that they experience the least amount of tension, given their intrinsic geometry. The full descriptions of the second fundamental forms would require us to introduce a lot of new notation, so we will restrict to the case treated in this paper, which is a special case of Theorem \ref{T:1.3}.
\begin{theorem}\label{T:1.4} For a Lagrangian submanifold $M$ of a complex space form $\tilde M^{n}(4c)$ with $n\geq 5$, we have
\begin{equation} \label{L23}
\delta(2,n \hskip-.01in-\hskip-.01in 2) \leq \frac{n^{2}(n-2)}{4(n-1)} H^2 +2(n-2) c.
\end{equation}
If the equality sign in \eqref{L23} holds at a point $p$, then there exists an orthonormal basis $\{e_{1},\ldots,e_{n}\}$ of $T_{p}M$ such that the components of the second fundamental form, $h_{AB}^C = \langle h(e_A,e_B),Je_C\rangle$, satisfy
\begin{align}
& \label{1.9} h^{k}_{11} = h^{k}_{22} = h^{k}_{33} + \cdots + h^{k}_{nn} = 0 && \mbox{for } k \geq 3, \\
& \label{1.10} h^{i}_{11}+ h^{i}_{22}= n h^{i}_{33} = \cdots = nh^{i}_{nn} && \mbox{for } i \in \{1,2\}, \\
& \label{1.11} h^{1}_{k\ell} = h^{2}_{k\ell} = h^{k}_{12} = 0 && \mbox{for } k, \ell \geq 3, \ k \neq \ell.
\end{align}
\end{theorem}
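Remark that Theorem \ref{T:1.4} is indeed the special case $(n_1,n_2)=(2,n-2)$ of Theorem \ref{T:1.3}: since $\sum_{j=2}^2 \frac{1}{n_j+2}=\frac 1n$, the coefficient of $H^2$ becomes
\begin{align*}
\frac{n^2\left(1-\frac 2n\right)}{2\left(2-\frac 2n\right)}=\frac{n^2(n-2)}{4(n-1)},
\end{align*}
while $b(2,n-2)=\frac{n(n-1)}{2}-1-\frac{(n-2)(n-3)}{2}=2(n-2)$.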
The purpose of this paper is to classify $\delta(2,n\hskip-.01in-\hskip-.01in 2)$-ideal Lagrangian submanifolds in complex space forms for $n \geq 5$. Remark that the latter condition is necessary: for $\delta(2,2)$-ideal Lagrangians in $\tilde M^4(4c)$, the description of the second fundamental form is different (cf. \cite{cdvv}).
The paper is organised as follows. Section 2 contains some preliminaries on submanifold theory and in particular on Lagrangian submanifolds of complex space forms. In Section 3, the second fundamental form of $\delta(2,n-2)$-ideal Lagrangian submanifolds of complex space forms with complex dimension $n\geq 5$ is determined, along with some additional information. It turns out that, apart from the minimal case, case (I), there are two other cases to consider: case (II) is completely solved in Section 4, by reducing it to a special case of a family of Lagrangians studied in \cite{cvv}. Case (III) is more involved and is treated in Section 5. Section 6 contains the final conclusions of the paper.
\section{Preliminaries}
\subsection{Basic formulas}
If $\tilde M^n(4c)$ is a complete simply connected K\"ahler $n$-manifold with constant holomorphic sectional curvature $4c$, then $\tilde M^n(4c)$ is holomorphically isometric to the complex Euclidean $n$-space ${\bf C}^n$, the complex projective $n$-space $CP^n(4c)$, or the complex hyperbolic $n$-space $CH^n(-4c)$ according to $c=0$, $c>0$ or $c<0$ respectively. These manifolds are known as \emph{complex space forms}.
Let $M$ be a Lagrangian submanifold of $\tilde M^n(4c)$.
Denote the Levi-Civita connections of $M$ and $\tilde M^n(4c)$ by $\nabla$ and $\tilde \nabla$, respectively.
The formulas of Gauss and Weingarten are given respectively by (cf. \cite{book})
\begin{align}\label{2.1}
&\tilde\nabla_X Y = \nabla_X Y + h(X,Y), \quad \tilde\nabla_X \xi = -A_\xi X + \nabla^{\perp}_X \xi
\end{align}
for tangent vector fields $X$ and $Y$ and normal vector fields $\xi$, where $h$ is the second fundamental form, $A$ is the shape operator and $\nabla^{\perp}$ is the normal connection. The second fundamental form and the shape operator are related by $\<h(X,Y),\xi\> = \<A_\xi X,Y\>$. The mean curvature vector field of $M$ is defined by $H=(\hbox{trace}\,h)/n$ and the \emph{squared mean curvature} is given by $H^2=\<H,H\>$.
For a Lagrangian submanifold, we have (cf. \cite{book,CO})
\begin{align}\label{2.3} &\nabla^{\perp}_X JY = J \nabla_X Y,
\\\label{2.4} &A_{JX} Y = -J h(X,Y)=A_{JY}X
\end{align}
for all tangent vector fields $X$ and $Y$. Formula \eqref{2.4} implies in particular that the so-called cubic form $(X,Y,Z) \mapsto \<h(X,Y),JZ\>$ is totally symmetric.
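Indeed, combining \eqref{2.4} with the symmetry of $h$ and the relation $\<h(X,Y),\xi\>=\<A_\xi X,Y\>$ yields
\begin{align*}
\<h(X,Y),JZ\>=\<A_{JZ}X,Y\>=\<A_{JX}Z,Y\>=\<h(Y,Z),JX\>,
\end{align*}
so the cubic form is invariant under cyclic permutations of its arguments and hence totally symmetric.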
For an orthonormal basis $\{e_1,\ldots,e_n\}$ of $T_pM$, we put
\begin{align}\label{2.7}
h^C_{AB}=\<h(e_A,e_B),Je_C\>.
\end{align}
The equations of Gauss and Codazzi are given respectively by
\begin{align} \label{Gauss}
& \<R(X,Y)Z,W\> = c(\<X,W\>\<Y,Z\>-\<X,Z\>\<Y,W\>)\\&
\notag \hskip1.0in + \<h(X,W),h(Y,Z)\> - \<h(X,Z), h(Y,W)\> ,
\\ &\label{Codazzi} (\nabla_{X} h)(Y,Z) = (\nabla_{Y} h)(X,Z),\end{align}
where $R$ is the curvature tensor of $M$ and $\nabla h$ is defined by
\begin{align}\label{2.6}
(\nabla_{X} h)(Y,Z) = \nabla^{\perp}_X h(Y,Z) - h(\nabla_X Y,Z) - h(Y,\nabla_X Z).
\end{align}
\subsection{Horizontal lifts of Lagrangian submanifolds}
We recall the link between Legendre submanifolds and Lagrangian submanifolds (cf. \cite{book,rec}).
\vskip.04in
\noindent {\it Case} (i): $CP^n(4)$. Consider the Hopf fibration $\pi :S^{2n+1}\to CP^n(4)$, where $S^{2n+1}$ is the unit sphere in $\mathbf C^{n+1}$. For a given point $u\in S^{2n+1}$, the horizontal space at $u$ is the orthogonal complement of $i u, \, i=\sqrt{-1},$ with respect to the metric on $S^{2n+1}$ induced from the metric on ${\bf C}^{n+1}$.
Let $L : M \to CP^n(4)$ be a Lagrangian isometric immersion. Then there is a covering map
$\tau: \hat M \to M$ and a horizontal immersion $\tilde L :\hat M \to S^{2n+1}$ such that
$L\circ \tau=\pi \circ \tilde L$. Thus each Lagrangian immersion can be lifted locally (or globally if $M$ is simply connected) to a Legendre immersion of the same Riemannian manifold. In particular, a minimal Lagrangian submanifold of $CP^{n}(4)$ is lifted to a minimal Legendre submanifold of the Sasakian manifold $S^{2n+1}$.
Conversely, suppose that $\tilde L: M \to S^{2n+1}$ is a Legendre isometric immersion. Then $L =\pi\circ \tilde L: M\to CP^n(4)$ is a Lagrangian isometric immersion. Under this correspondence the second fundamental forms $h^{\tilde L}$ and $h^L$ of $\tilde L$ and $L$ satisfy $\pi_*h^{\tilde L}=h^L$. Moreover, $h^{\tilde L}$ is horizontal with respect to $\pi$.
\vskip.04in
\noindent {\it Case} (ii): $CH^n(-4)$. We consider the complex number space ${\bf C}^{n+1}_1$ equipped with the pseudo-Euclidean metric
$g_0=-dz_1d\bar z_1 + dz_2d\bar z_2 + \ldots + dz_{n+1}d\bar z_{n+1}$
and look at
$$H^{2n+1}_{1}=\{z\in {\bf C}^{n+1}_1 \ | \ \<z,z\>=-1\}$$
with the canonical Sasakian structure, where $\<\;\,,\;\>$ is the induced inner product from $g_0$. In particular, $H_1^1=\{\lambda\in {\bf C} \ | \ \lambda\bar\lambda=1\}.$ Then there is an $H^1_1$-action on $H_1^{2n+1}$, given by $z\mapsto \lambda z$, and at each point $z\in H^{2n+1}_1$, the vector $\xi=i z$ is tangent to the flow of the action. Since the metric $g_0$ is Hermitian, we have $\<\xi,\xi\>=-1$. The quotient space $H^{2n+1}_1/\sim$, under the identification induced from the action, is the complex
hyperbolic space $CH^n(-4)$ with constant holomorphic sectional curvature $-4$ whose complex structure $J$ is induced from the complex structure on ${\bf C}^{n+1}_1$ via the Hopf fibration $\pi :H^{2n+1}_1\to CH^n(-4).$
Just like in case (i), if $L: M \to CH^n(-4)$ is a Lagrangian immersion, then there is an isometric covering map $\tau: \hat M \to M$ and a Legendre immersion $\tilde L: \hat M \to H_1^{2n+1}$ such that $L \circ \tau=\pi\circ \tilde L$. Thus every Lagrangian immersion into $CH^n(-4)$ can be lifted locally (or globally if $M$ is simply connected) to a Legendre immersion into $H^{2n+1}_1$. In particular, minimal Lagrangian submanifolds of $CH^{n}(-4)$ are lifted to minimal Legendre submanifolds of $H^{2n+1}_{1}$. Conversely, if $\tilde L:\hat M \to H_1^{2n+1}$ is a Legendre immersion, then $L =\pi\circ \tilde L: M\to CH^n(-4)$ is a Lagrangian immersion. Under this correspondence the second fundamental forms $h^{\tilde L}$ and $h^L$ are related by $\pi_*h^{\tilde L}=h^L$. Also, $h^{\tilde L}$ is horizontal with respect to $\pi$.
Let $h$ be the second fundamental form of $M$ in $S^{2n+1}$, respectively $H^{2n+1}_1$. Since $S^{2n+1}$ and $H^{2n+1}_1$ are totally umbilical with mean curvature $1$ in ${\bf C}^{n+1}$, respectively ${\bf C}^{n+1}_1$, we have
\begin{align}\label{2.9} D_XY=\nabla_XY+h(X,Y)-\varepsilon \tilde L,\end{align}
where $\varepsilon=1$ if the ambient space is ${\bf C}^{n+1}$ and $\varepsilon=-1$ if it is ${\bf C}^{n+1}_1$ and $D$ denotes the Levi-Civita connection of ${\bf C}^{n+1}$, respectively ${\bf C}^{n+1}_1$.
\section{The second fundamental form of $\delta(2,n-2)$-ideal Lagrangian submanifolds}
In this section, we prove two lemmas. The first one, Lemma \ref{L:3.2}, describes the second fundamental form of a $\delta(2,n-2)$-ideal Lagrangian submanifold of a complex space form pointwise and follows from Theorem \ref{T:1.4}. The second one, Lemma \ref{L:3.3}, describes the second fundamental form in terms of a local orthonormal frame.
\begin{lemma}\label{L:3.2} Let $M$ be a Lagrangian submanifold of a complex space form $\tilde M^{n}(4c)$, $n\geq 5$, satisfying the equality case of \eqref{L23} at a point $p \in M$. Then there exist an orthonormal basis $\{e_{1},\ldots,e_{n}\}$ of $T_pM$ and real numbers $\gamma,\lambda,\mu$ and $h^{k}_{ij}$ $(i,j,k \geq 3)$, such that
\begin{equation}\begin{aligned}\label{3.4}
& h(e_1,e_1)=\gamma Je_1, \ \
h(e_1,e_2)=(n\lambda-\gamma)Je_2, \\
& h(e_2,e_2)=(n\lambda-\gamma)Je_1 +n \mu Je_2, \\
& h(e_{1},e_{i})=\lambda Je_{i}, \ \ h(e_{2},e_{i})=\mu Je_{i}, \\
& h(e_{i},e_{j})= \delta_{ij}(\lambda Je_{1}+\mu J e_{2})+\sum_{k=3}^{n} h^{k}_{ij}Je_{k}
\end{aligned}\end{equation}
for $i,j \geq 3$. The numbers $h^{k}_{ij}$ are symmetric in the three indices and satisfy
$h^{k}_{33} + \ldots + h^k_{nn}=0$ for any $k \geq 3$. Moreover,
\begin{align}
& \gamma \geq 0, \ \gamma \geq \frac {2n}{3} \lambda, \label{3.4i} \\
& \mbox{if } \gamma=0, \mbox{ then also } \lambda=\mu=0, \\
& \mbox{if } \gamma>0, \mbox{ then also } \gamma > \frac n2 \lambda.
\end{align}
\end{lemma}
\begin{proof}
Choose an orthonormal basis $\{e_1,\ldots,e_n\}$ of $T_pM$ such that \eqref{1.9}--\eqref{1.11} hold. This implies that
\begin{equation*}
\begin{aligned}
& h(e_1,e_1) = h_{11}^1 Je_1 + h_{11}^2 Je_2, \\
& h(e_1,e_2) = h_{11}^2 Je_1 + h_{22}^1 Je_2, \\
& h(e_2,e_2) = h_{22}^1 Je_1 + h_{22}^2 Je_2, \\
& h(e_1,e_k) = h_{33}^1 Je_k, \quad h(e_2,e_k) = h_{33}^2 Je_k, \\
& h(e_k,e_{\ell} )= \delta_{k\ell}(h_{33}^1 Je_1 + h_{33}^2 Je_2) + \sum_{m=3}^{n} h_{k\ell}^m Je_m,
\end{aligned}
\end{equation*}
with
\begin{equation*}
\begin{aligned}
& h_{11}^1 + h_{22}^1 = nh_{33}^1 \ (= nh_{44}^1 = \ldots = nh_{nn}^1), \\
& h_{11}^2 + h_{22}^2 = nh_{33}^2 \ (= nh_{44}^2 = \ldots = nh_{nn}^2), \\
& h_{33}^k + \ldots + h_{nn}^k = 0 \mbox{ for } k \geq 3.
\end{aligned}
\end{equation*}
Remark that the conditions \eqref{1.9}--\eqref{1.11} remain true for any choice of orthonormal basis in $\mathrm{span}\{e_1,e_2\}$. In particular, we can assume that the following function, defined on a compact set, attains its global maximum in $e_1$:
$$ \phi: \{ u \in \mathrm{span}\{e_1,e_2\} \ | \ \|u\|=1 \} \to \mathbb R : u \mapsto \langle h(u,u),Ju \rangle. $$
This implies that the function $F : \mathbb R \to \mathbb R: \theta \mapsto \phi((\cos\theta) e_1 + (\sin\theta)e_2)$ attains a maximum at $\theta=0$. Computing the first and second derivatives of $F$ gives respectively
$h_{11}^2 = 0$ and $h_{11}^1 \geq 2 h_{22}^1$. Since $\phi(-e_1)=-\phi(e_1)$ and $\phi$ attains its maximum at $e_1$, we have $\phi(e_1)=h_{11}^1 \geq 0$. Moreover, if $h_{11}^1 = 0$, then $\phi$ vanishes identically, which implies that also $h_{22}^1 = h_{22}^2 = 0$. Finally, if $h_{11}^1 > 0$, it is easy to see that $h_{11}^1 > h_{22}^1$.
We now obtain the result by putting $\gamma=h_{11}^1$, $\lambda=h_{33}^1$ and $\mu=h_{33}^2$.
\end{proof}
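For the reader's convenience, we spell out the derivative computation in the proof above. By the total symmetry of the cubic form,
\begin{align*}
F(\theta)=\cos^3\theta\, h^1_{11}+3\cos^2\theta\sin\theta\, h^2_{11}+3\cos\theta\sin^2\theta\, h^1_{22}+\sin^3\theta\, h^2_{22},
\end{align*}
so that $F'(0)=3h^2_{11}$ and $F''(0)=6h^1_{22}-3h^1_{11}$. The conditions $F'(0)=0$ and $F''(0)\leq 0$ at the maximum give precisely $h^2_{11}=0$ and $h^1_{11}\geq 2h^1_{22}$.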
Remark that, under the assumptions of Lemma \ref{L:3.2}, the mean curvature vector at the point $p$ is given by
\begin{align} \label{expressionH}
& H(p) = \frac{2(n-1)}{n}(\lambda Je_1 + \mu Je_2).
\end{align}
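Indeed, taking the trace in \eqref{3.4} and using $h^k_{33}+\ldots+h^k_{nn}=0$ for $k\geq 3$ gives
\begin{align*}
nH(p)=\sum_{A=1}^n h(e_A,e_A)=\bigl(\gamma+(n\lambda-\gamma)+(n-2)\lambda\bigr)Je_1+\bigl(n\mu+(n-2)\mu\bigr)Je_2=2(n-1)(\lambda Je_1+\mu Je_2).
\end{align*}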
It is not clear whether the orthonormal bases given by Lemma \ref{L:3.2} at every point of a $\delta(2,n-2)$-ideal Lagrangian submanifold of a complex space form can be pasted together to form a differentiable orthonormal frame. However, we have the following local result.
\begin{lemma}\label{L:3.3}
Let $M$ be a $\delta(2,n-2)$-ideal Lagrangian submanifold of a complex space form $\tilde M^{n}(4c)$, $n\geq 5$. Then there exists an open and dense subset $V \subseteq M$ such that every point of $V$ has a neighborhood in which one of the following holds.
\begin{itemize}
\item[(I)] $H = 0$.
\item[(II)] There exists a differentiable orthonormal frame $\{E_1,\ldots,E_n\}$ such that the second fundamental form satisfies
\begin{equation} \label{h_case(II)}
\begin{aligned}
& h(E_1,E_1)=(n-1)\lambda JE_1, \ h(E_1,E_i) = \lambda JE_i, \\
& h(E_i,E_j) = \delta_{ij} \lambda JE_1 + \sum_{k=2}^n h_{ij}^k JE_k
\end{aligned}
\end{equation}
for $i,j \geq 2$, where $\lambda$ and $h_{ij}^k$ are differentiable functions, the latter being symmetric in the three indices and satisfying $h_{22}^k + \ldots + h_{nn}^k = 0$ for all $k \geq 2$.
\item[(III)] There exists a differentiable orthonormal frame $\{E_1,\ldots,E_n\}$ such that the second fundamental form satisfies
\begin{equation} \label{h_case(III)}
\begin{aligned}
& h(E_1,E_1) = \gamma JE_1, \ \
h(E_1,E_2) = (n\lambda-\gamma)JE_2, \\
& h(E_2,E_2) = (n\lambda-\gamma)JE_1 + n \mu JE_2, \\
& h(E_1,E_i) = \lambda JE_i, \ \
h(E_2,E_i) = \mu JE_i, \\
& h(E_i,E_j) = \delta_{ij}(\lambda JE_1 + \mu J E_2)+\sum_{k=3}^{n} h^{k}_{ij}JE_k
\end{aligned}
\end{equation}
for $i,j \geq 3$, where $\gamma$, $\lambda$, $\mu$ and $h^{k}_{ij}$ are differentiable functions, the latter being symmetric in the three indices, satisfying $\gamma > 0$, $\gamma > 2n\lambda/3$ and $h^{k}_{33} + \ldots + h^k_{nn}=0$ for all $k \geq 3$. Moreover, at every point, $\lambda \neq 0$ or $\mu \neq 0$, and also $\mu \neq 0$ or $\gamma \neq (n-1)\lambda$.
\end{itemize}
\end{lemma}
\begin{proof}
Define $V_1 = \{ p \in M \ | \ H(p) \neq 0 \}$ and $V_2 = \{ p \in M \ | \ H(p)=0 \}^{\mbox{int}}$, where the superscript ``int'' denotes the interior. Clearly, all points in $V_2$ satisfy case (I).
On $V_1$, we consider the $(1,1)$-tensor field
\begin{equation} \label{defK}
K : \mathcal D \to \mathcal D : X \mapsto \pi_{\mathcal D} J h(JH,X),
\end{equation}
where $\mathcal D$ is the orthogonal complement of $\mbox{span}\{JH\}$ in the tangent space to $M$ and $\pi_{\mathcal D}$ is the orthogonal projection onto $\mathcal D$ at every point. Define further
\begin{align*}
& V_{11} = \{ p \in V_1 \ | \ h(JH(p),JH(p)) \mbox{ is no multiple of } H(p) \mbox{ or } K_p \mbox{ is no multiple of } \mathrm{id}_{{\mathcal D}_p} \}, \\
& V_{12} = \{ p \in V_1 \ | \ h(JH(p),JH(p)) \mbox{ is a multiple of } H(p) \mbox{ and } K_p \mbox{ is a multiple of } \mathrm{id}_{{\mathcal D}_p} \}^{\mbox{int}}.
\end{align*}
If $p \in V_{12}$ and $\{e_1,\ldots,e_n\}$ is an orthonormal basis of $T_pM$ as in Lemma \ref{L:3.2}, it follows from the definition of $V_{12}$ and a straightforward computation using \eqref{3.4}, \eqref{expressionH} and \eqref{defK} that $\gamma=(n-1)\lambda$ and $\mu=0$. In particular, $e_1$ lies in the direction of $H(p)$. This means that we can extend $\{e_1,\ldots,e_n\}$ to an orthonormal frame $\{E_1,\ldots,E_n\}$ on $V_{12}$, where $E_1$ lies in the direction of $H$ at every point, and we are in case (II).
Finally, let $p \in V_{11}$ and consider an orthonormal basis $\{e_1,\ldots,e_n\}$ of $T_pM$ as in Lemma~\ref{L:3.2}. Putting $(\mathcal D_1)_p = \mathrm{span}\{e_1,e_2\}$ and $(\mathcal D_2)_p = \mathrm{span}\{e_3,\ldots,e_n\}$, we shall now prove that $\mathcal D_1$ and $\mathcal D_2$ are differentiable distributions on $V_{11}$. If $h(JH(p),JH(p))$ is not parallel with $H(p)$, then the same holds in a neighborhood of $p$ and it follows from \eqref{3.4} and \eqref{expressionH} that $\mathcal D_1 = \mathrm{span}\{JH,Jh(JH,JH)\}$ in this neighborhood. Hence, $\mathcal D_1$ is differentiable in this neighborhood. If, on the other hand, $h(JH(p),JH(p))$ and $H(p)$ are parallel, then, by the definition of $V_{11}$, we have that $K_p$ is not a multiple of $\mathrm{id}_{\mathcal D_p}$ and it follows from \eqref{3.4} and \eqref{defK} that the matrix of $K_p$ with respect to the orthonormal basis $\left\{ (\mu e_1 - \lambda e_2)/\sqrt{\lambda^2+\mu^2},e_3,\ldots,e_n \right\}$ of $\mathcal D_p$, is given by
$$ \frac{2(n-1)}{n} \left( \begin{array}{cccc}
\alpha & & & \\ & \lambda^2 + \mu^2 & & \\ & & \ddots & \\ & & & \lambda^2 + \mu^2
\end{array} \right)$$
for some real number $\alpha \neq \lambda^2 + \mu^2$. The same holds in a neighborhood of $p$ and hence there is a well-defined one-dimensional eigendistribution of the tensor field $K$, say $\mathrm{span}\{X_0\}$. Since $K$ is differentiable, the vector field $X_0$ can be chosen to be differentiable and hence $\mathcal D_1 = \mathrm{span}\{JH,X_0\}$ is differentiable in a neighborhood of $p$. In both cases, $\mathcal D_2$ is differentiable since it is the orthogonal complement of $\mathcal D_1$ in $TM$.
Let $\{X_1,X_2\}$ be differentiable orthonormal vector fields on $V_{11}$ spanning $\mathcal D_1$ at every point and $\{E_3,\ldots,E_n\}$ differentiable orthonormal vector fields on $V_{11}$ spanning $\mathcal D_2$ at every point. In order to obtain case (III) of the lemma, we have to find a differentiable function $\theta$ on $V_{11}$ such that $E_1 = (\cos\theta) X_1 + (\sin\theta) X_2$ maximizes
$ \phi: \{ X \in \mathcal D_1 \ | \ \|X\|=1 \} \to \mathbb R : X \mapsto \langle h(X,X),JX \rangle $
at every point. This implies that $E_2 = -(\sin\theta) X_1 + (\cos\theta) X_2$ satisfies
\begin{equation}\label{3.5bis}
\langle h(E_1,E_1),JE_2 \rangle = 0.
\end{equation}
The latter equation has in general several differentiable solutions for $\theta$. However, since we want $E_1 = (\cos\theta) X_1 + (\sin\theta) X_2$ to maximize $\phi$, we have to restrict to points for which the number of solutions, say in $[0,2\pi)$, does not change in a neighborhood to guarantee differentiability of $\theta$. If we define $V_{111}$ as the set of those points in $V_{11}$ for which the number of solutions for $\theta$ of \eqref{3.5bis} in $[0,2\pi)$ does not change in a neighborhood of the point, we can construct an orthonormal frame on $V_{111}$ satisfying \eqref{h_case(III)} as explained above. Remark that $\gamma > 0$ and $\gamma > 2n\lambda/3$ follow from the last sentence of Lemma~\ref{L:3.2} and the fact that $H$ is nowhere vanishing on $V_{111}$. Moreover, the fact that $\lambda \neq 0$ or $\mu \neq 0$ also follows from the non-vanishing of $H$ and the fact that $\mu \neq 0$ or $\gamma \neq (n-1)\lambda$ follows from the definition of $V_{11}$ and the computation which led to case (II) above.
As a conclusion, the subset $V \subseteq M$ we are looking for is the disjoint union
$$ V = V_{111} \cup V_{12} \cup V_2, $$
which is open and dense in $M$ by construction.
\end{proof}
We will proceed with the classification as follows. In Section 4, we give a classification in case (II), based on results in \cite{cvv}. In Section 5, we give a classification in case (III) and, finally, Section 6 contains the overall conclusions. We will not elaborate on case (I) in general, however, we remark the following.
\begin{remark} \label{RemarkMinimal}
If $M$ is a minimal $\delta(2,n-2)$-ideal Lagrangian submanifold of a complex space form $\tilde M^{n}(4c)$, $n\geq 5$, for which the orthonormal bases given in Lemma \ref{L:3.2} can be pasted together to form a differentiable orthonormal frame $\{E_1,\ldots,E_n\}$, then the second fundamental form is given by
\begin{equation}
\begin{aligned}
\notag & h(E_1,E_1)=\gamma JE_1,\; h(E_1,E_2)=-\gamma JE_2, \; h(E_2,E_2)=-\gamma JE_1,
\\& h(E_{1},E_{i})= h(E_{2},E_{i})=0,
\; h(E_{i},E_{j})=\sum_{k=3}^{n} h^{k}_{ij}JE_{k}
\end{aligned}
\end{equation}
for $i,j,k\geq 3$ and some functions $\gamma$ and $h^{k}_{ij}$, satisfying $h^{k}_{33}+\ldots+h_{nn}^k=0$ for every $k \geq 3$. If $\gamma = 0$, the Lagrangian submanifold is minimal $\delta(n-2)$-ideal. If $\gamma > 0$, a long argument, very similar to the one we will give in Section \ref{sec5.1}, can be used to prove that there are three possibilities: the Lagrangian submanifold is either minimal $\delta(2)$-ideal, minimal $\delta(2,k)$-ideal for some $k$ satisfying $2\leq k< n-2$ or it is a direct product of a minimal $\delta(2)$-ideal Lagrangian surface in $\mathbf C^2$ and a minimal $\delta(n-2)$-ideal submanifold of $\mathbf C^{n-2}$. The latter case only occurs for $c=0$. The family of minimal $\delta(2)$-ideal Lagrangians is too large to classify. On the other hand, minimal $\delta(2,2)$-ideal Lagrangians in dimension $5$ were classified in \cite{cpw}.
\end{remark}
\section{Classification in case (II) of Lemma \ref{L:3.3}}
Let $M$ be a Lagrangian submanifold of a complex space form $\tilde M^{n}(4c)$, $n\geq 5$, satisfying case (II) of Lemma \ref{L:3.3}. It was proven in \cite{cvv} that such a submanifold is a warped product $I \times_f N$ of an open interval $I$ and an $(n-1)$-dimensional factor $N$. Moreover, $E_1$ is tangent to $I$ and the Lagrangian immersion is constructed from a curve depending on a parameter $t \in I$, determined by a system of ODEs and a Lagrangian immersion of the manifold $N$, for which the components of the second fundamental form are, up to a factor depending on $t$, equal to the corresponding components of $h$.
Combining this result with Lemma \ref{L:3.2} yields that there exists an orthonormal basis $\{e_2,\ldots,e_n\}$ for every tangent space to $N$ such that the components of the second fundamental form $\tilde h$ of the Lagrangian immersion of $N$ satisfy $\tilde h^2_{k \ell} = 0$ for all $k, \ell \geq 2$ and $\tilde h_{22}^k+\ldots+\tilde h_{nn}^k=0$ for all $k \geq 2$. This means exactly that the Lagrangian immersion is $\delta(n-2)$-ideal and minimal. Hence, we obtain the following results (remark the slight difference in notation compared to \cite{cvv}).
\begin{proposition} \label{P:4.1}
Let $M$ be a $\delta(2,n-2)$-ideal Lagrangian submanifold of the complex Euclidean space ${\bf C}^{n}$ $(n\geq 5)$ whose second fundamental form is given by case $\mathrm{(II)}$ of Lemma \ref{L:3.3}. Then $M$ is locally congruent to the image of
\begin{equation}\label{4.26}
L(t, u_{2},\ldots, u_{n})=\frac{e^{i\theta}}{\varphi+i\lambda}\Phi( u_{2},\ldots,u_{n}),
\end{equation}
where $\theta$, $\varphi$ and $\lambda$ are functions of $t$ only, satisfying
\begin{equation} \label{systemODE}
\lambda' = (n-3) \lambda \varphi, \quad \varphi' = - \varphi^2 - (n-2)\lambda^2, \quad \theta' = (n-1) \lambda
\end{equation}
and $\Phi$ is a Legendre immersion into $S^{2n-1}(1)\subset {\bf C}^n$ whose composition with the Hopf fibration is a minimal $\delta(n-2)$-ideal Lagrangian immersion into $CP^{n-1}(4)$.
\end{proposition}
Remark that the system \eqref{systemODE} allows us to express all three unknown functions in terms of $\lambda$. Recall that $\lambda > 0$. It follows from the equations that $\lambda^{\frac{2}{n-3}}(\lambda^2 + \varphi^2)$ is a positive constant, say $r^2$ for some $r>0$. Then $\varphi = \pm \sqrt{r^2 \lambda^{-\frac{2}{n-3}}-\lambda^2}$.
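Indeed, by \eqref{systemODE},
\begin{align*}
\frac{d}{dt}\Bigl(\lambda^{\frac{2}{n-3}}(\lambda^2+\varphi^2)\Bigr)
&=\lambda^{\frac{2}{n-3}}\Bigl(\frac{2}{n-3}\,\frac{\lambda'}{\lambda}(\lambda^2+\varphi^2)+2\lambda\lambda'+2\varphi\varphi'\Bigr)\\
&=2\lambda^{\frac{2}{n-3}}\varphi\bigl((\lambda^2+\varphi^2)+(n-3)\lambda^2-\varphi^2-(n-2)\lambda^2\bigr)=0.
\end{align*}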
After replacing $E_1$ by $-E_1$ if necessary, we may assume that $\varphi > 0$ and thus
\begin{equation} \label{phi}
\varphi = \sqrt{\frac{1}{c^2 \, \lambda^{\frac{2}{n-3}}}-\lambda^2},
\end{equation}
where we have put $c = 1/r$. Since
$$ \frac{d\theta}{d\lambda} = \frac{(n-1) \lambda}{(n-3) \lambda \varphi} = \frac{n-1}{n-3} \, \frac{1}{\varphi}, $$
direct integration using \eqref{phi} yields
\begin{equation} \label{theta}
\theta = \frac{n-1}{n-2}\arcsin\left(c \, \lambda^{\frac{n-2}{n-3}} \right).
\end{equation}
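One can verify \eqref{theta} directly: differentiating with respect to $\lambda$ gives
\begin{align*}
\frac{d\theta}{d\lambda}=\frac{n-1}{n-3}\,\frac{c\,\lambda^{\frac{1}{n-3}}}{\sqrt{1-c^2\lambda^{\frac{2(n-2)}{n-3}}}}=\frac{n-1}{n-3}\,\frac{1}{\varphi},
\end{align*}
where the last equality follows by rewriting \eqref{phi} as $\varphi=\sqrt{1-c^2\lambda^{\frac{2(n-2)}{n-3}}}\big/\bigl(c\,\lambda^{\frac{1}{n-3}}\bigr)$.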
After a reparametrization $t \mapsto \lambda(t)$, the coefficient in front of $\Phi$ in \eqref{4.26} is completely determined by \eqref{phi} and \eqref{theta}.
\begin{proposition}\label{P:4.2}
Let $M$ be a $\delta(2,n-2)$-ideal Lagrangian submanifold of the complex projective space $CP^{n}(4)$ $(n\geq 5)$ whose second fundamental form is given by case $\mathrm{(II)}$ of Lemma \ref{L:3.3}. Then $M$ is locally congruent to the image of $\pi\circ L$, where $\pi:S^{2n+1}(1)\to CP^{n}(4)$ is the Hopf fibration and
\begin{equation}\label{4.43}
L(t, u_{2},\ldots,u_{n})=\(\frac{e^{i \theta}\Phi(u_{2},\ldots,u_{n})}{\sqrt{1+\lambda^2+\varphi^2}}, \frac{ e^{i(n-2)\theta}(i\lambda-\varphi)}{\sqrt{1+\lambda^2+\varphi^2}}\),
\end{equation}
where $\theta$, $\varphi$ and $\lambda$ are functions of $t$ only, satisfying
\begin{equation} \label{4.44}
\lambda'=(n-3)\lambda\varphi, \quad \varphi'=-1-\varphi^2-(n-2)\lambda^2, \quad \theta'=\lambda
\end{equation}
and $\Phi$ is a Legendre immersion into $S^{2n-1}(1) \subset {\bf C}^n$ whose composition with the Hopf fibration is a minimal $\delta(n-2)$-ideal Lagrangian immersion into $CP^{n-1}(4)$.
\end{proposition}
\begin{proposition}\label{P:4.3}
Let $M$ be a $\delta(2,n-2)$-ideal Lagrangian submanifold of the complex hyperbolic space $CH^{n}(-4)$ $(n\geq 5)$ whose second fundamental form is given by case $\mathrm{(II)}$ of Lemma \ref{L:3.3}. Then $M$ is locally congruent to the image of $\pi\circ L$, where $\pi:H_{1}^{2n+1}(-1)\to CH^{n}(-4)$ is the Hopf fibration and $L$ is one of the following.
\vskip.05in
{\rm (a)} $ L(t, u_{2},\ldots,u_{n})=\( \dfrac{e^{i \theta}\Phi(u_{2},\ldots,u_{n}) }{\sqrt{1-\lambda^2-\varphi^2}}, \dfrac{ e^{i(n-2) \theta}(i\lambda-\varphi)}{\sqrt{1-\lambda^2-\varphi^2}}\), \quad \lambda^2+\varphi^2<1, $
\vskip.05in
\noindent where $\lambda$, $\varphi$ and $\theta$ are functions of $t$ only, satisfying
$$\lambda'=(n-3)\lambda\varphi, \quad \varphi'=1-\varphi^2-(n-2)\lambda^2, \quad \theta'=\lambda $$
and $\Phi$ is a Legendre immersion into $H_{1}^{2n-1}(-1)$ whose composition with the Hopf fibration is a minimal $\delta(n-2)$-ideal Lagrangian immersion into $CH^{n-1}(-4)$;
\vskip.05in
{\rm (b)} $L(t, u_{2},\ldots,u_{n})=\( \dfrac{ e^{i(n-2) \theta}(i\lambda-\varphi)}{\sqrt{\lambda^2+\varphi^2-1}},\dfrac{e^{i \theta}\Phi(u_{2},\ldots,u_{n})}{\sqrt{\lambda^2+\varphi^2-1}}\),\;\; \lambda^2+\varphi^2>1, $
\vskip.05in
\noindent where $\lambda$, $\varphi$ and $\theta$ are functions of $t$ only, satisfying
$$ \lambda'=(n-3)\lambda\varphi, \quad \varphi'=1-\varphi^2-(n-2)\lambda^2, \quad \theta'=\lambda, $$
and $\Phi$ is a Legendre immersion into $S^{2n-1}(1)$ whose composition with the Hopf fibration is a minimal $\delta(n-2)$-ideal Lagrangian immersion into $CP^{n-1}(4)$;
\vskip.05in
{\rm (c)}
\begin{multline*}
L(t,u_2,\ldots,u_n) = \frac{\cosh^{\frac{2}{n-3}}\left(\frac{n-3}{2}t\right)}{e^{\frac{2i}{n-3} \arctan\left(\tanh(\frac{n-3}{2}t)\right)}} \left[ \left(w + \frac i2 \langle \Phi,\Phi \rangle + i, \Phi, w + \frac i2 \langle \Phi,\Phi \rangle \right) \right. \\
+ \left. \int_0^t \frac{e^{2i \arctan\left(\tanh(\frac{n-3}{2}t)\right)}}{\cosh^{\frac{2}{n-3}} \left(\frac{n-3}{2}t \right)} dt \ (1,0,\ldots,0,1)\right],
\end{multline*}
where $\Phi = \Phi(u_2,\ldots,u_n)$ parametrizes a minimal $\delta(n-2)$-ideal Lagrangian immersion into ${\bf C}^{n-1}$ and $w=w(u_2,\ldots,u_n)$ is the unique solution of the PDE system $ w_{u_{k}}=\< \Phi, i\Phi_{u_{k}}\>$ for $k=2,\ldots,n$.
\end{proposition}
\section{Classification in case (III) of Lemma \ref{L:3.3}}
In this section we assume that $M$ is a $\delta(2,n-2)$-ideal Lagrangian submanifold of a complex space form $\tilde M^n(4c)$ ($n \geq 5$), whose second fundamental form is given by case (III) of Lemma \ref{L:3.3}.
\subsection{Proof that $M$ is a warped product} \label{sec5.1}
We define the following two orthogonal distributions on $M$ in terms of the orthonormal frame $\{ E_1, \ldots, E_n \}$:
\begin{equation}\label{3.7i}
\mathcal D_1 = \mathrm{span}\{E_1,E_2\}, \qquad \mathcal D_2 = \mathrm{span}\{E_3,\ldots,E_n\}.
\end{equation}
\begin{lemma}\label{L:5.1}
Let $M$ be a $\delta(2,n-2)$-ideal Lagrangian submanifold of a complex space form $\tilde M^{n}(4c)$, $n\geq 5$, whose second fundamental form is given by case $\mathrm{(III)}$ of Lemma \ref{L:3.3}. Then
$\mathcal D_2$ is integrable.
\end{lemma}
\begin{proof}
It follows from \eqref{h_case(III)} that $\<(\nabla_{E_{i}} h)(E_{j},E_1),JE_{1}\>=(2\lambda-\gamma)\<\nabla_{E_{i}}E_{j},E_{1}\>$ for all $i,j \geq 3$, which, in combination with Codazzi's equation, yields $(2\lambda-\gamma) \<[E_i,E_j],E_1\>=0$. The conditions $\gamma > 0$ and $\gamma > 2n\lambda/3$ imply $2\lambda - \gamma \neq 0$ and hence we obtain
\begin{equation} \label{5.1i}
\<[E_i,E_j],E_1\>=0.
\end{equation}
It also follows from \eqref{h_case(III)} that $\<(\nabla_{E_{i}} h)(E_{j},E_1),JE_{2}\>=(\gamma-(n\!-\!1)\lambda)\<\nabla_{E_{i}}E_{j},E_{2}\>+\mu \<\nabla_{E_{i}}E_{j},E_{1}\>$ for all $i,j \geq 3$, which, in combination with Codazzi's equation and \eqref{5.1i} gives
\begin{equation} \label{5.1ii}
(\gamma-(n-1)\lambda)\<[E_i,E_j],E_2\>=0.
\end{equation}
Finally, \eqref{h_case(III)} implies $\<(\nabla_{E_{i}} h)(E_{j},E_2),JE_{2}\>=(n-2)\mu \<\nabla_{E_{i}}E_{j},E_{2}\>-(n\lambda-\gamma) \<\nabla_{E_{i}}E_{j},E_{1}\>$ for all $i,j \geq 3$, which, using Codazzi's equation and \eqref{5.1i}, gives
\begin{equation} \label{5.1iii}
\mu\<[E_i,E_j],E_2\>=0.
\end{equation}
Combining \eqref{5.1ii} and \eqref{5.1iii} with the fact that $\mu \neq 0$ or $\gamma \neq (n-1)\lambda$ implies
\begin{equation} \label{5.1iv}
\<[E_i,E_j],E_2\>=0.
\end{equation}
Equations \eqref{5.1i} and \eqref{5.1iv} together imply that $[E_i,E_j] \in \mathcal D_2$ for all $i,j \geq 3$, which, by Frobenius' theorem, implies that $\mathcal D_2$ is integrable.
\end{proof}
In order to write down the information obtained from the other Codazzi equations, we use the following notation: the one-forms $\omega_j^k$ describing the Levi-Civita connection of $M$ are defined as usual by
\begin{equation}\label{3.7ii}
\omega_j^k(E_i) = \<\nabla_{E_i}E_j,E_k\>.
\end{equation}
Note that $\omega_j^k = -\omega_k^j$, since the frame is orthonormal.
Comparing the $JE_{1}$-, $JE_{2}$- and $JE_{j}$-components ($j = 3,\ldots,n$) of the Codazzi equation $(\nabla_{E_{i}} h)(E_{1},E_1)=(\nabla_{E_{1}} h)(E_{1},E_i)$ ($i = 3,\ldots,n$) gives respectively
\begin{align}\label{5.3} & E_{i}\gamma=(\gamma-2\lambda) \omega ^{i}_{1}(E_{1}),
\\ &\label{5.4} (3\gamma-2n\lambda)\omega _{1}^{2} (E_{i})=(\gamma-(n-1)\lambda) \omega ^{2}_{i}(E_{1})-\mu \omega _{1}^{i}(E_{1}),
\\& \label{5.5} (\gamma-2\lambda) \omega _{1}^{j}(E_{i})=\delta_{ij}(E_{1}\lambda-\mu\omega _{1}^{2}(E_{1}))-\sum_{{k=3}}^{n}h_{ij}^{k}\omega _{1}^{k}(E_{1})
\end{align}
for all $i,j \geq 3$. Analogously, $(\nabla_{E_{i}} h)(E_{1},E_2)= (\nabla_{E_{1}} h)(E_{i},E_2)= (\nabla_{E_{2}} h)(E_{1},E_i)$ gives
\begin{align}\label{5.7}
& (3\gamma - 2 n \lambda)\omega _{1}^{2}(E_{i})= (\gamma \!-\! 2 \lambda)\omega _{1}^{i}(E_{2}) = (\gamma-(n\!-\!1)\lambda) \omega _{i}^{2}(E_{1})-\mu \omega _{1}^{i}(E_{1}),
\\& \notag n E_{i}\lambda -(\gamma-2\lambda)\omega _{1}^{i}(E_{1})-n\mu \omega _{1}^{2}(E_{i})
=(n-2)\mu \omega _{2}^{i}(E_{1})+(n\lambda-\gamma)\omega _{1}^{i}(E_{1})
\\& \quad \label{5.8} =((n-1)\lambda-\gamma) \omega _{2}^{i}(E_{2})-\mu\omega _{1}^{i}(E_{2}),
\\& \notag ((n\!-\!1)\lambda\!-\!\gamma)\omega _{2}^{j}(E_{i})-\mu \omega _{1}^{j}(E_{i})=(E_{1}\mu\! +\!\lambda \omega _{1}^{2}(E_{1}))\delta_{ij} \! - \sum_{k=3}^{n} h^{k}_{ij}\omega _{2}^{k}(E_{1})
\\& \quad \label{5.9} =(E_{2}\lambda -\mu \omega _{1}^{2}(E_{2}))\delta_{ij}-\sum_{k=3}^{n} h_{ij}^{k} \omega _{1}^{k}(E_{2})
\end{align}
for $i,j \geq 3$. Finally, it follows from $(\nabla_{E_{i}} h)(E_{2},E_2)= (\nabla_{E_{2}} h)(E_{i},E_2)$ that
\begin{align}\label{5.10} & n E_i\mu+3(n\lambda- \gamma )\omega _{1}^{2}(E_{i})
=(n-2)\mu \omega _{2}^{i}(E_{2})-(\gamma-n\lambda)\omega _{1}^{i}(E_{2}),
\\&\label{5.11} (n\lambda-\gamma)\omega _{1}^{j}(E_{i})+(n-2)\mu \omega _{2}^{j}(E_{i})
=\delta_{ij}(E_{2}\mu -\lambda \omega _{2}^{1}(E_{2}))+\sum_{k=3}^{n} h_{ij}^{k} \omega _{2}^{k}(E_{2})\end{align}
for $i,j \geq 3$.
By changing the orthonormal frame $\{E_3,\ldots,E_n\}$ in $\mathcal{D}_2$ if necessary, we may assume
that $\omega _1^4(E_1)=\cdots=\omega _1^n(E_1)=0$ or, equivalently, that the orthogonal projection of $\nabla_{E_1}E_1$ onto $\mathcal D_2$ lies in the direction of $E_3$:
\begin{align}\label{5.12}
\nabla_{E_1}E_1=\omega _1^2(E_1)E_2+\omega_1^3(E_1) E_3.
\end{align}
Thus, we find from $\sum_{i=3}^n \<(\nabla_{E_3}h)(E_i,E_1),JE_i\>=\sum_{i=3}^n \<(\nabla_{E_i}h)(E_3,E_1),JE_i\>$ that
\begin{align}\label{5.14}
(n-3)(\gamma-2\lambda)(E_3 \lambda-\mu \omega _{1}^{2}(E_{3}))=\omega _1^3(E_1)\sum_{i,j=3}^n (h^3_{ij})^2.
\end{align}
On the other hand, it follows from \eqref{5.8} that
\begin{align} \label{5.14bis}
n (E_{3}\lambda -\mu \omega _{1}^{2}(E_{3}))
=(n-2)(\mu \omega _{2}^{3}(E_{1})+\lambda\omega _{1}^{3}(E_{1})).
\end{align}
By combining \eqref{5.14} and \eqref{5.14bis}, we find
\begin{align}\label{5.15}
(n^2-5n+6)(\gamma-2\lambda)(\mu \omega _{2}^{3}(E_{1})+\lambda\omega _{1}^{3}(E_{1}))= n\omega _1^3(E_1)\sum_{i,j=3}^n (h^3_{ij})^2.
\end{align}
Similarly,
$\sum_{i=3}^n \<(\nabla_{E_1}h)(E_i,E_i),JE_3\>=\sum_{i=3}^n \<(\nabla_{E_i}h)(E_1,E_i),JE_3\>$
yields
\begin{align}\label{5.16}
(n^2-n+2)(\gamma-2\lambda)(\mu \omega _{2}^{3}(E_{1})+\lambda\omega _{1}^{3}(E_{1}))=
n\omega _1^3(E_1)\sum_{i,j=3}^n (h^3_{ij})^2
\end{align}
and $\sum_{i=3}^n \<(\nabla_{E_3}h)(E_i,E_2),JE_i\>=\sum_{i=3}^n \<(\nabla_{E_i}h)(E_2,E_3),JE_i\>$ gives
\begin{align}\label{5.17} E_3\mu=\lambda \omega _{2}^{1}(E_{3})+\frac{1}{n-3}\sum_{i,j=3}^n h^3_{ij}\omega _i^2(E_j).\end{align}
From \eqref{5.15}, \eqref{5.16} and the properties of $\gamma$, $\lambda$ and $\mu$ in case (III) of Lemma \ref{L:3.3}, we obtain
\begin{align}\label{5.18} \lambda\omega _{1}^{3}(E_{1})+\mu\omega _2^3(E_1)=0
\end{align}
and we have either
(a) $ h^3_{ij}=0$ for all $i,j\geq 3$ and $\omega _{1}^{3}(E_{1})\ne 0$ or
(b) $\omega _1^3(E_1)=0$.
\vskip.05in
{\it Case} (a): $ h^3_{ij}=0$ for all $i,j\geq 3$ and $\omega _{1}^{3}(E_{1})\ne 0$. Since $\lambda$ and $\mu$ cannot both be zero, \eqref{5.18} implies that $\mu\ne 0$ and
\begin{align} \label{5.18bis}
\omega _2^3(E_1)=-\frac{\lambda}{\mu}\omega _1^3(E_1).
\end{align}
Equation \eqref{5.17} and $h^3_{ij}=0$ imply
\begin{align}\label{5.19}
E_3\mu=\lambda\omega _2^1(E_3).
\end{align}
Also, it follows from \eqref{5.4} and \eqref{5.18bis} that
\begin{align}\label{5.20} &\omega _1^2(E_3)=\frac{\lambda\gamma-(n-1)\lambda^2-\mu^2}{(3\gamma-2n\lambda)\mu} \omega^3_1(E_1). \end{align}
By combining \eqref{5.19} and \eqref{5.20} we obtain
\begin{align}\label{5.21} E_3\mu=\frac{\lambda(\lambda\gamma-(n-1)\lambda^2-\mu^2)}{(2n\lambda-3\gamma)\mu}\omega^3_1(E_1).\end{align}
On the other hand, we find from \eqref{5.10} that
\begin{align}\label{5.22} & E_3\mu=\frac{3(\gamma-n\lambda )}{n}\omega _{1}^{2}(E_{3})
+\frac{(n-2)\mu}{n} \omega _{2}^{3}(E_{2})+\frac{n\lambda-\gamma}{n}\omega _{1}^{3}(E_{2})
.\end{align}
From \eqref{5.20} and the first equality in \eqref{5.7} we find
\begin{equation}\begin{aligned}\label{5.23}
\omega _1^3(E_2)&\,=\frac{(n-1)\lambda^2+\mu^2-\lambda\gamma}{(2\lambda-\gamma)\mu}\omega^3_1(E_1).\end{aligned}
\end{equation}
Now, \eqref{5.8}, \eqref{5.23} and \eqref{5.18bis} yield
\begin{equation}\begin{aligned}\label{5.25} \omega _2^3(E_2)&\,=\frac{(n+3)\lambda^2-5\lambda\gamma+\gamma^2+\mu^2}{(2\lambda-\gamma)((n-1)\lambda-\gamma)}\omega^3_1(E_1).\end{aligned}\end{equation}
By substituting \eqref{5.20}, \eqref{5.23} and \eqref{5.25} into \eqref{5.22} we find
\begin{equation}\begin{aligned}\label{5.26} E_3\mu=&\,\frac{3(\gamma-n\lambda )(\lambda\gamma-(n-1)\lambda^2-\mu^2)}{n(3\gamma-2n\lambda)\mu}\omega^3_1(E_1)
\\&+ \frac{(n-2)\mu((n+3)\lambda^2-5\lambda\gamma+\gamma^2+\mu^2)}{n(2\lambda-\gamma)((n-1)\lambda-\gamma)}\omega^3_1(E_1)\\&+\frac{(n\lambda-\gamma)((n-1)\lambda^2+\mu^2-\lambda\gamma)}{n(2\lambda-\gamma)\mu}\omega^3_1(E_1).\end{aligned}\end{equation}
Now, by comparing \eqref{5.21} and \eqref{5.26} we find
\begin{align}\notag
\lambda^2((n-1)\lambda-\gamma)^2+\mu^4+\mu^2((\gamma-3\lambda)^2+(2n-7)\lambda^2)=0.
\end{align}
Since every term on the left-hand side is non-negative for $n \geq 5$, each of them must vanish; in particular, $\mu=0$, which is a contradiction. Hence, case (a) cannot occur.
\vskip.05in
{\it Case} (b): $\omega _{1}^{3}(E_{1})=0$. In this case, the choice of $E_3$ we made before becomes arbitrary and thus equation \eqref{5.18} gives
$\omega _{1}^{3}(E_{1})=\mu\omega _2^3(E_1)=0$ for arbitrary $E_3$.
Thus, we have
\begin{align}\label{5.27}
\omega _{1}^{i}(E_{1})=\mu\omega _2^i(E_1)=0
\end{align}
for all $i \geq 3$. We can now choose $\{E_3,\ldots,E_n\}$ such that
\begin{align}\label{5.28}
\nabla_{E_1}E_2=\omega _2^1(E_1)E_1+\omega_2^3(E_1) E_3,
\end{align}
i.e., such that $\omega _2^4(E_1)=\cdots=\omega _2^n(E_1)=0$.
From \eqref{5.27} and \eqref{5.28} we find $\mu\omega_2^3(E_1)=0$. Hence, either
(b.1) $\omega_2^3(E_1)\ne 0$ and $\mu=0$ or
(b.2) $\omega_2^3(E_1)=0$.
\vskip.05in
{\it Case} (b.1): $\mu=0$ {\it and} $\omega_2^3(E_1)\ne 0$. We find from \eqref{5.4} and \eqref{5.7} that
\begin{align}\label{5.30} &0\ne\omega _2^3(E_1)=\frac{2n\lambda-3\gamma}{\gamma-(n-1)\lambda}\omega _1^2(E_3),
\\ &\label{5.31} \omega _1^3(E_2)=\frac{3\gamma-2n\lambda}{\gamma-2\lambda}\omega _1^2(E_3).
\end{align}
In particular, \eqref{5.30} gives
\begin{align}\label{5.32}
3\gamma\ne 2n\lambda,\;\; \omega _1^2(E_3)\ne 0.
\end{align}
Also, from \eqref{5.4} and \eqref{5.7}:
\begin{align} \label{5.32.2}
\omega _1^2(E_k) = 0, \ \omega _1^k(E_2) = 0
\end{align}
for $k \geq 4$. Since $\mu=0$, \eqref{5.10} becomes
\begin{align}\label{5.33} & 0=3(n\lambda- \gamma )\omega _{1}^{2}(E_{3})
+(\gamma-n\lambda)\omega _{1}^{3}(E_{2}).\end{align}
Now, by substituting \eqref{5.31} into \eqref{5.33}, we find
$(\gamma-n\lambda)\lambda \omega _1^2(E_3)=0$. Since $\omega _1^2(E_3) \neq 0$ by \eqref{5.32} and $\lambda$ and $\mu$ cannot both be zero, this shows that $\lambda\ne 0$ and $\gamma=n\lambda$.
The second fundamental form \eqref{h_case(III)} then reduces to
\begin{align}\label{5.34}
& h(E_1,E_1) = n\lambda JE_1, \ \ h(E_1,E_2) = h(E_2,E_2) = 0, \nonumber \\
& h(E_{1},E_{i})=\lambda JE_{i},\ \ h(E_2,E_i)= 0,\\
&\notag h(E_{i},E_{j})=\delta_{ij}\lambda JE_{1}+\sum_{k=3}^{n} h^{k}_{ij}JE_{k}
\end{align}
for $i, j \geq 3$, where $h_{33}^k+\ldots+h_{nn}^k=0$ for all $k \geq 3$. From \eqref{5.30} and \eqref{5.31} we find
\begin{align}\label{5.36} & \omega _{2}^{1}(E_{3})=\frac{\omega_2^3(E_1)}{n},\;\; \omega _1^3(E_2)=-\frac{\omega_2^3(E_1)}{n-2}.\end{align}
Thus \eqref{5.9}, \eqref{5.32.2} and \eqref{5.36} give
\begin{align}\label{5.37} \omega _{j}^2(E_{i}) =\frac{E_{2}\lambda}{\lambda}\delta_{ij}+\frac{\omega_2^3(E_1)}{(n-2)\lambda} h_{ij}^{3}. \end{align}
Now, by using \eqref{5.17}, \eqref{5.36} and \eqref{5.37}, we find
$$\frac{\omega_2^3(E_1)}{\lambda n} \left( \lambda^2 + \frac{n}{(n-2)(n-3)}\sum_{i,j=3}^n (h^3_{ij})^2 \right) = 0,$$
which is a contradiction. Therefore, this case is again impossible.
\vskip.05in
{\it Case} (b.2): $\omega_2^3(E_1)=0$. From this assumption, we have
\begin{align}\label{5.40}
\nabla_{E_1}E_1, \nabla_{E_1}E_2\in {\mathcal D}_1.
\end{align}
It follows from \eqref{5.7} that $\omega _1^i(E_2)=0$ for $i \geq 3$. Thus we also have
\begin{align}\label{5.41} \nabla_{E_2}E_1\in {\mathcal D}_1.\end{align}
From the last equation in \eqref{5.8} we find $((n-1)\lambda-\gamma)\omega _2^i(E_2)=0$. Hence, either
(b.2.1) $\gamma=(n-1)\lambda$ and $\omega _2^i(E_2)\ne 0$ for some $i\geq 3$ or
(b.2.2) $\nabla_{E_2}E_2\in \mathcal D_1$.
\vskip.05in
{\it Case} (b.2.1): {\it $\gamma=(n-1)\lambda$ and $\omega _2^i(E_2)\ne 0$ for some} $i \geq 3$. In this case, \eqref{h_case(III)} reduces to
\begin{equation}\begin{aligned}\label{5.42}
& h(E_1,E_1)=(n-1)\lambda JE_1,
\; h(E_1,E_2)=\lambda JE_2,\;
\\& h(E_2,E_2)=\lambda JE_1 +n \mu JE_2,
\\&h(E_{1},E_{i})=\lambda JE_{i},\; h(E_{2},E_{i})=\mu JE_{i},
\\& h(E_{i},E_{j})=\delta_{ij}(\lambda JE_{1}+\mu J E_{2})+\sum_{k=3}^{n} h^{k}_{ij}JE_{k}
\end{aligned}\end{equation}
for $i,j \geq 3$ and $h_{33}^k + \ldots + h_{nn}^k = 0$ for all $k \geq 3$. We may assume $\mu\ne 0$, since otherwise this case reduces to case (II) of Lemma \ref{L:3.3}.
Without loss of generality, we may assume
\begin{align}\label{5.43}
\nabla_{E_2}E_2=\omega _2^1(E_2)E_1+\omega _2^3(E_2) E_3,
\end{align}
or, equivalently, $\omega _2^4(E_2)=\cdots=\omega _2^n(E_2)=0$.
From \eqref{5.3}--\eqref{5.5}, \eqref{5.10} and \eqref{5.27}, we find
\begin{align}\label{5.45}
&E_i\gamma=E_i\lambda=\omega _1^2(E_i)=E_4\mu=\cdots=E_n\mu=0,
\\ \label{5.46} &\omega _i^1(E_j)=-\frac{\delta_{ij}}{(n-3)\lambda}(E_1\lambda-\mu \omega _1^2(E_1)),
\\ \label{5.47} &E_3\mu=\frac{(n-2)\mu \omega _2^3(E_2)}{n},
\end{align}
for $i,j \geq 3$.
By applying \eqref{5.42}--\eqref{5.47}, we find
\begin{align}\label{5.48} \sum_{i=3}^n \<(\nabla_{E_3}h)(E_i,E_2),JE_i\>=(n-2)E_3\mu = \frac{(n-2)^2 \mu \omega _2^3(E_2)}{n}.\end{align}
After a long computation we also have
$\sum_{i=3}^n \<(\nabla_{E_2}h)(E_i,E_3),JE_i\>=n\mu \omega _2^3(E_2).$
By Codazzi's equation and \eqref{5.48}, this would imply $(n-2)^2\mu\omega _2^3(E_2)=n^2\mu\omega _2^3(E_2)$ and hence $\omega _2^3(E_2)=0$ or $n=1$, both of which are contradictions. Therefore, this case is impossible.
\vskip.05in
{\it Case} (b.2.2): $\nabla_{E_2}E_2\in \mathcal D_1$.
In this case, we have
\begin{align}\label{5.50} \omega _\alpha^i(E_\beta)=0
\end{align}
for any $\alpha, \beta=1,2$ and $i \geq 3$, i.e., $\mathcal D_1$ is a totally geodesic distribution. From \eqref{5.4} and \eqref{5.50} we get \begin{align}\label{5.51}
(3\gamma-2n\lambda)\omega _1^2(E_i)=0
\end{align}
for $i\geq 3$. Consequently, either
(b.2.2.1) $3\gamma=2n\lambda$ and $\omega _1^2(E_i)\ne 0$ for some $i \geq 3$ or
(b.2.2.2) $\omega _1^2(E_i)=0$ for all $i \geq 3$.
\vskip.05in
{\it Case} (b.2.2.1): $3\gamma=2n\lambda$ and $\omega _1^2(E_i)\ne 0$ {\it for some $i \geq 3$}. From \eqref{5.3}, we obtain $E_i\gamma=E_i\lambda=0$. Hence, we have $\mu=0$ from \eqref{5.8}. Also, we find $(n\lambda-\gamma)\omega _1^2(E_i)=0$ from \eqref{5.10}. Combining this with \eqref{5.51} yields $\omega _1^2(E_i)=0$, which is a contradiction. Consequently, this case cannot occur.
\vskip.05in
{\it Case} (b.2.2.2): $\omega _1^2(E_i)=0$ for all $i \geq 3$.
From \eqref{5.3}, \eqref{5.8} and \eqref{5.10}, we get
\begin{align}\label{5.52}
E_i\gamma=E_i\lambda=E_i\mu=0
\end{align}
for $i \geq 3$. Recall that $\gamma\ne 2\lambda$, which follows from $\gamma >0$ and $\gamma > 2n\lambda/3$.
We find from \eqref{5.5}, \eqref{5.9} and \eqref{5.11} that
\begin{align}\label{5.53} &\omega ^1_{i}(E_j)=\frac{\delta_{ij}}{\gamma-2\lambda}(\mu\omega _1^2(E_1)-E_1\lambda),
\\&\label{5.54} (n-2)\mu\omega ^2_i(E_j)=(\gamma-n\lambda)\omega ^1_i(E_j)-
\delta_{ij}(E_2\mu+\lambda \omega _1^2(E_2)),
\\& \label{5.55} ((n-1)\lambda-\gamma)\omega ^2_i(E_j)=\mu \omega ^1_i(E_j)-\delta_{ij}(E_1\mu+\lambda\omega _1^2(E_1)),
\\&\label{5.56} ((n-1)\lambda-\gamma)\omega ^2_i(E_j) =\mu \omega ^1_i(E_j)-\delta_{ij}(E_2\lambda-\mu\omega _1^2(E_2))
\end{align}
for $i,j \geq 3$.
Recall that $\mathcal D_1$ is totally geodesic and $\mathcal D_2$ is integrable. It now follows from \eqref{5.53}--\eqref{5.56} that the leaves of $\mathcal D_2$ are totally umbilical submanifolds of the Lagrangian submanifold $M$. In particular, we may put
\begin{align}\label{5.57}
\omega _i^1(E_j)=p\delta_{ij},\;\; \omega _i^2(E_j)=q\delta_{ij}
\end{align}
for some functions $p,q$ and all $i \geq 3$.
Since $\omega _1^2(E_i)=0$ for all $ i \geq 3$, \eqref{5.57} implies
\begin{align}\label{5.58}
\nabla_{V}E_1=-pV,\;\; \nabla_V E_2=-qV
\end{align}
for all $V \in \mathcal D_2$. From \eqref{5.53}--\eqref{5.57} we get
\begin{equation}\begin{aligned}\label{5.59} & E_1\lambda=\mu \omega _1^2(E_1)+(2\lambda-\gamma)p,
\\& E_2\lambda=\mu\omega ^2_1(E_2)+\mu p-((n-1)\lambda-\gamma)q,
\\& E_1\mu = -\lambda \omega ^2_1(E_1)+\mu p-((n-1)\lambda-\gamma) q,
\\& E_2\mu=-\lambda \omega ^2_1(E_2)+(\gamma-n\lambda)p-(n-2)\mu q.
\end{aligned}\end{equation}
By applying \eqref{5.58}, we find
\begin{equation}\begin{aligned}\label{5.60}
\<R(E_i,E_j)E_1,E_k\>=(E_jp)\delta_{ik}-(E_ip)\delta_{jk}
\end{aligned}\end{equation}
for all $i,j,k \geq 3$.
On the other hand, it follows from equation \eqref{Gauss} of Gauss and \eqref{h_case(III)} that
\begin{equation}\begin{aligned}\label{5.61} & \<R(E_i,E_j)E_1,E_k\>=0.\end{aligned}\end{equation}
By combining \eqref{5.60} and \eqref{5.61} we get $E_jp=0$ for $j \geq 3$. Similarly, we find by computing $\<R(E_i,E_j)E_2,E_k\>$ and using \eqref{h_case(III)}, \eqref{5.58} and \eqref{5.59} that $E_jq=0$. Thus
\begin{align}\label{5.62}
E_jp=E_jq=0
\end{align}
for all $j \geq 3$.
Now, by applying \eqref{h_case(III)}, \eqref{5.50}, \eqref{5.58}, and the equation of Gauss,
\begin{equation}
\<R(E_\alpha,E_3)E_\beta,E_3\>=-c\delta_{\alpha\beta}+\<h(E_\alpha,E_3),h(E_\beta,E_3)\>
-\<h(E_3,E_3),h(E_\alpha,E_\beta)\>,
\end{equation}
where $\alpha,\beta=1,2$, we find
\begin{equation}\begin{aligned}\label{5.63}
& E_1p =q \omega _1^2(E_1)+p^2-\lambda^2+\lambda\gamma+c,
\\& E_2p =q\omega ^2_1(E_2)+pq+(n\lambda-\gamma)\mu-\lambda\mu,
\\& E_1q = -p \omega ^2_1(E_1)+pq+(n\lambda-\gamma)\mu-\lambda \mu,
\\& E_2q=-p \omega ^2_1(E_2)+q^2+n\lambda^2+(n-1)\mu^2-\lambda\gamma +c.
\end{aligned}\end{equation}
Also, by applying \eqref{h_case(III)}, \eqref{5.50} and \eqref{5.59}, we find from the equation of Codazzi,
$(\nabla_{E_2}h)(E_1,E_1)= (\nabla_{E_1}h)(E_1,E_2)$,
that
\begin{equation}\begin{aligned}\label{5.64} & E_1\gamma=n(2\lambda-\gamma)p+(2n\lambda-3\gamma) \omega _1^2(E_2),
\\& E_2\gamma=(3\gamma-2n\lambda)\omega ^2_1(E_1).\end{aligned}\end{equation}
By applying \eqref{5.59}, \eqref{5.63} and \eqref{5.64}, we obtain
\begin{equation}\begin{aligned}\label{5.65}
&E_1(c+\lambda^2+\mu^2+p^2+q^2)=2p(c+\lambda^2+\mu^2+p^2+q^2),
\\&E_2(c+\lambda^2+\mu^2+p^2+q^2)=2q(c+\lambda^2+\mu^2+p^2+q^2).
\end{aligned}\end{equation}
If we put $\mathring{H}=p E_1+q E_2$, then \eqref{5.58} and \eqref{5.62} yield $\nabla_{V}\mathring{H}=-(p^2+q^2)V$ for all $V\in \mathcal D_2$, which shows that the mean curvature vector of each leaf of $\mathcal D_2$ is parallel in the normal bundle of this leaf in $M$.
Therefore, $\mathcal D_2$ is a spherical distribution. Consequently, the Lagrangian submanifold $M$ is locally the warped product $M^2\times_f M^{n-2}$ of a leaf $M^2$ of $\mathcal D_1$ and a leaf $M^{n-2}$ of $\mathcal D_2$. Moreover, all the leaves of $\mathcal D_1$ are totally geodesic surfaces in $M$ and all the leaves of $\mathcal D_2$ are spherical submanifolds of $M$.
It is well-known (see, for instance, \cite[page 79]{book}) that the warping function $f$ of the warped product $M^2\times_f M^{n-2}$ satisfies
\begin{align}\label{5.68} (\nabla_{V}V)^{\mathcal D_1}=-\frac{\mathrm{grad}(f)}{f},\end{align}
for any unit vector field $V \in \mathcal D_2$, where the superscript $\mathcal D_1$ denotes the $\mathcal D_1$-component. Together with \eqref{5.58}, this implies that
\begin{align} \label{derivatives_of_f}
E_1(f) = -pf, \ \ E_2(f)=-qf.
\end{align}
It follows from \eqref{5.58}, \eqref{5.59} and \eqref{derivatives_of_f} that the following two vector fields commute and hence determine coordinates $(x,y)$ on $M^2$:
\begin{equation} \label{coordinates}
\begin{aligned}
& \frac{\partial}{\partial x} = \frac{1}{\lambda^2 + \mu^2} (\lambda E_1 + \mu E_2), \\
& \frac{\partial}{\partial y} = \frac{f^{n-2}}{\lambda^2 + \mu^2} (-\mu E_1 + \lambda E_2).
\end{aligned}
\end{equation}
From \eqref{derivatives_of_f} and \eqref{coordinates}, we find that the derivatives of $f$ are
\begin{align} \label{derivatives_of_f_2}
f_x = -\frac{f}{\lambda^2 + \mu^2} (\lambda p + \mu q), \ \
f_y = \frac{f^{n-1}}{\lambda^2 + \mu^2} (\mu p - \lambda q).
\end{align}
Remark that all functions appearing can now be explicitly expressed in terms of $f$. However, since the expressions are complicated and we will not need them to state our final result, we omit them here.
We can summarize this subsection as follows.
\begin{proposition} \label{P_section_5}
Let $M$ be a $\delta(2,n-2)$-ideal Lagrangian submanifold of a complex space form $\tilde M^{n}(4c)$, $n\geq 5$, whose second fundamental form is given by case $\mathrm{(III)}$ of Lemma \ref{L:3.3}. Then $M$ is locally a warped product $M^2 \times_f M^{n-2}$, where $M^2$ is an integral surface of the distribution $\mathcal D_1 = \mathrm{span}\{E_1,E_2\}$ and $M^{n-2}$ is an integral submanifold of the distribution $\mathcal D_2=\mathrm{span}\{E_3,\ldots,E_n\}$. Moreover, $M^2$ is totally geodesic in $M$ and $M^{n-2}$ is spherical in $M$; in particular, there exist functions $p$ and $q$ such that $\nabla_VE_1=-pV$ and $\nabla_VE_2=-qV$ for all $V \in \mathcal D_2$. The derivatives of $\gamma$, $\lambda$, $\mu$, $p$, $q$ and $f$ are given by \eqref{5.52}, \eqref{5.59}, \eqref{5.62}, \eqref{5.63}, \eqref{5.64} and \eqref{derivatives_of_f}. Finally, the vector fields \eqref{coordinates} are coordinate vector fields on $M^2$.
\end{proposition}
In the next three subsections we will classify the $\delta(2,n-2)$-ideal Lagrangian submanifolds whose second fundamental form satisfies case (III) of Lemma \ref{L:3.3} in the ambient spaces $\mathbf C^n$, $CP^n(4)$ and $CH^n(-4)$ respectively.
\subsection{Classification in $\mathbf C^n$}
\begin{proposition}\label{P:6.1}
Let $L:M \to \mathbf C^n$ ($n \geq 5$) be a $\delta(2,n-2)$-ideal Lagrangian immersion whose second fundamental form is given by case $\mathrm{(III)}$ of Lemma \ref{L:3.3}.
Then $L$ is locally congruent to
\begin{align}\label{6.2}
L(x,y,u_1,\ldots,u_{n-2})=\big(f(x,y)e^{i x}\Phi(u_1,\ldots,u_{n-2}), z(x,y)\big),\end{align}
where $\Phi$ defines a minimal Legendre immersion in $S^{2n-3}(1)\subset {\bf C}^{n-1}$ and $(fe^{i x},z)$ is a Lagrangian surface in ${\bf C}^2$, in which $f$ is determined by
\begin{align} \label{PDE_for_f_Cn}
\frac{f_{yy}}{f^{n-2}} - (n-2) \frac{f_y^2}{f^{n-1}} + (n-1)f^{n-1} + (n-2) f^{n-3} f_x^2 + f^{n-2} f_{xx} = 0
\end{align}
and $z$ by
\begin{equation} \label{system_for_z_Cn}
\begin{aligned}
& z_x = e^{i(n-1)x} \frac{f_y}{f^{n-2}}, \\
& z_y = e^{i(n-1)x} f^{n-1} \left(i-\frac{f_x}{f}\right).
\end{aligned}
\end{equation}
\end{proposition}
\begin{proof} Let $L:M\to {\bf C}^{n}$ be a $\delta(2,n-2)$-ideal Lagrangian immersion whose second fundamental form is given by case $\mathrm{(III)}$ of Lemma \ref{L:3.3}. It follows from Proposition \ref{P_section_5} that $M$ is locally a warped product $M^2\times_{f} M^{n-2}$ of a surface $M^2$ and an $(n-2)$-dimensional Riemannian manifold $M^{n-2}$ with warping function $f$ satisfying \eqref{derivatives_of_f}. Using \eqref{5.65}, together with \eqref{5.52} and \eqref{5.62}, which imply that $p$, $q$, $\lambda$ and $\mu$ are constant along $\mathcal D_2$, we obtain
\begin{align}\label{6.5}
f=\frac{C}{\sqrt{\lambda^2+\mu^2+p^2+q^2}}
\end{align}
for some positive constant $C$. Now choose coordinates $(x,y)$ as in \eqref{coordinates} and consider the map $\Phi$ defined by
\begin{equation}
\begin{aligned}\label{6.6}
\Phi=\frac{e^{-i x} ((p+i \lambda)E_{1}+(q+i \mu)E_{2})}{\sqrt{\lambda^{2}+\mu^{2}+p^{2}+q^{2}}}.
\end{aligned}
\end{equation}
Remark that $\langle \Phi,\Phi \rangle = 1$. Denote by $D$ the Euclidean connection on $\mathbf C^n$. From \eqref{3.4}, \eqref{5.59}, \eqref{5.63} and \eqref{5.65} we obtain
\begin{align}\label{6.7}
D_{E_1}\Phi=D_{E_2}\Phi=0.
\end{align}
Also, by applying \eqref{3.4}, \eqref{5.52}, \eqref{5.58} and \eqref{5.62}, we find
\begin{align} \label{6.8}
D_{E_i}\Phi=-e^{-i x}\sqrt{\lambda^2 + \mu^2 + p^2 + q^2} E_i
\end{align}
for all $i \geq 3$. This implies that $\Phi$ is an immersion from $M^{n-2}$ into $S^{2n-1}(1) \subseteq \mathbf C^n$. We will show that the image of $\Phi$ is contained in a linear subspace $\mathbf C^{n-1} \subseteq \mathbf C^n$ and hence in a unit sphere $S^{2n-3}(1) \subseteq \mathbf C^{n-1}$. To this end, consider for any point $u \in M^{n-2}$ the complex linear subspace $$\mathrm{span}\{\Phi(u), (d\Phi)_u(E_3), \ldots, (d\Phi)_u(E_n)\} \subseteq T_{\Phi(u)}\mathbf C^n.$$
To see that all these subspaces are in fact the same subspace of $\mathbf C^n$, we remark that from \eqref{3.4}, \eqref{5.52}, \eqref{5.57}, \eqref{5.62} and \eqref{6.8}
\begin{equation}
\begin{aligned}\label{6.9}
D_{E_j}(d\Phi)(E_i) &= D_{E_j} \left( -e^{-i x}\sqrt{\lambda^2 + \mu^2 + p^2 + q^2} E_i \right) \\
& =-e^{-i x}\sqrt{\lambda^2 + \mu^2 + p^2 + q^2} \, D_{E_j}E_i\\
& =\sum_{k=3}^{n} \(\omega _i^k(E_j)+i h_{ij}^k\) (d\Phi)(E_k) -\delta_{ij}(\lambda^2 + \mu^2 + p^2 + q^2) \Phi,
\end{aligned}
\end{equation}
which belongs again to $\mathrm{span}\{\Phi, (d\Phi)(E_3), \ldots, (d\Phi)(E_n)\}$ for any $i,j \geq 3$. We conclude that $\Phi$ is an immersion of $M^{n-2}$ into $S^{2n-3}(1) \subseteq \mathbf C^{n-1} \subseteq \mathbf C^{n}$. Moreover, it follows from the computation above that the second fundamental form of the immersion $\Phi$ coincides with the second fundamental form of $L$ restricted to $M^{n-2}$, which implies that $\Phi$ is a minimal Legendre immersion of $M^{n-2}$ into $S^{2n-3}(1)$.
Let us put
\begin{align}& \label{6.10}
\Psi = \frac{(-q+i \mu)E_1 +(p-i \lambda)E_2}{e^{i(n-1)x} \sqrt{\lambda^2 + \mu^2 + p^2 + q^2}}.
\end{align}
Then $\Psi$ is orthogonal to $\Phi$ and
\begin{equation}\begin{aligned}\label{6.11}
& D_{E_j}\! \(L+\frac{e^{i x}}{\sqrt{\lambda^2\!+\!\mu^2\!+\!p^2\!+\!q^2}}\Phi\)=0 \mbox{ if } j \geq 3, \\
& D_{E_1}\! \(L+\frac{e^{i x}}{\sqrt{\lambda^2\!+\!\mu^2\!+\!p^2\!+\!q^2}}\Phi\)=\frac{-e^{i(n-1)x}(q+i \mu)}{\sqrt{\lambda^2\!+\!\mu^2\!+\!p^2\!+\!q^2}}\Psi, \\
&D_{E_2}\! \(L+\frac{e^{i x}}{\sqrt{\lambda^2\!+\!\mu^2\!+\!p^2\!+\!q^2}}\Phi\)=\frac{e^{i(n-1)x}(p+i \lambda)}{\sqrt{\lambda^2\!+\!\mu^2\!+\!p^2\!+\!q^2}}\Psi.
\end{aligned}\end{equation}
Moreover, since $D_{E_A}\Psi = 0$ for all $A = 1,\ldots,n$, we can assume that, after a suitable isometry of the ambient space, $\Psi = (0,\ldots,0,1)$. Consequently, $L$ takes the form
\begin{align}\label{6.12}L(x,y,u_1,\ldots,u_{n-2})=\(-\frac 1C f(x,y)e^{i x}\Phi(u_1,\ldots,u_{n-2}), z(x,y)\),
\end{align}
where $z$ is a complex valued function whose derivatives are essentially computed in \eqref{6.11}. By using \eqref{coordinates}, we obtain that $z$ satisfies
\begin{align*}
& z_x = \frac 1C e^{i(n-1)x} \frac{f_y}{f^{n-2}}, \\
& z_y = \frac 1C e^{i(n-1)x} f^{n-1} \left(i-\frac{f_x}{f}\right).
\end{align*}
The compatibility condition for this system is precisely \eqref{PDE_for_f_Cn}.
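Indeed, computing the mixed partial derivatives of $z$ from the system above gives
\begin{align*}
z_{xy} &= \frac 1C e^{i(n-1)x} \left( \frac{f_{yy}}{f^{n-2}} - (n-2)\frac{f_y^2}{f^{n-1}} \right), \\
z_{yx} &= -\frac 1C e^{i(n-1)x} \left( (n-1)f^{n-1} + (n-2)f^{n-3}f_x^2 + f^{n-2}f_{xx} \right),
\end{align*}
and equating the two right-hand sides yields \eqref{PDE_for_f_Cn}.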
Note that everything is invariant under a rescaling of $f$, provided that, at the same time, we rescale the $y$-coordinate in accordance with \eqref{coordinates}. Hence, we may assume $C=1$. Moreover, after an isometry of $\mathbf C^n$ we may omit the minus sign in the first $n-1$ components of $L$, obtaining \eqref{6.2}.
Since $L:M^2\times_f M^{n-2}\to {\bf C}^{n}$ is Lagrangian, it follows from \eqref{6.12} that $\Phi$ is a Legendre minimal immersion in $S^{2n-3}(1)\subset {\bf C}^{n-1}$. Note that $(fe^{ix},z)$ is a Lagrangian surface in ${\bf C}^2$.
The converse can be verified by a long but direct computation.
\end{proof}
\begin{example}
Let us construct an explicit example in dimension $n=5$ by assuming that $f_y=0$. In that case, the general solution of \eqref{PDE_for_f_Cn} is given by
\begin{equation}
f(x) = c_1 (\cos(4x-c_2))^{1/4}.
\end{equation}
It then follows from the system \eqref{system_for_z_Cn} that $z$ is independent of $x$ and
\begin{equation}
z(y) = ic_1^4 e^{ic_2} y.
\end{equation}
Note that $c_1$ cannot be zero since $(f(x)e^{ix},z(y))$ has to be an immersion. It is a product of two plane curves and thus a Lagrangian surface in ${\bf C}^2$.
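To verify this, note that for $f_y=0$ and $n=5$, equation \eqref{PDE_for_f_Cn} reduces, after division by $f^2$, to
$$4f^2 + 3f_x^2 + f f_{xx} = 0,$$
and a direct computation with $f(x)=c_1(\cos(4x-c_2))^{1/4}$ gives $3f_x^2 + ff_{xx} = -4c_1^2\sqrt{\cos(4x-c_2)} = -4f^2$. Moreover, since $f_x/f = -\tan(4x-c_2)$, the second equation of \eqref{system_for_z_Cn} becomes $z_y = c_1^4 e^{4ix}(i\cos(4x-c_2)+\sin(4x-c_2)) = ic_1^4 e^{ic_2}$, confirming the expression for $z$.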
\end{example}
\subsection{Classification in $CP^{n}$}
\begin{proposition}\label{P:7.1}
Let $L:M\to CP^{n}(4)$ ($n \geq 5$) be a $\delta(2,n-2)$-ideal Lagrangian immersion whose second fundamental form is given by case $\mathrm{(III)}$ of Lemma \ref{L:3.3}. Then the horizontal lift $\tilde L: M\to S^{2n+1}(1)\subseteq {\bf C}^{n+1}$ of $L$ is given by
\begin{equation} \label{7.1}
\tilde L(x,y,u_1,\ldots,u_{n-2})= e^{ix} f(x,y) \Phi(u_1,\ldots,u_{n-2}) + e^{i(n-1)x} \sqrt{1-f(x,y)^2} \, \Theta_2(x,y) ,
\end{equation}
where, with respect to a suitable orthogonal decomposition $\mathbf C^{n+1} = \mathbf C^{n-1} \oplus \mathbf C^2$, the map $\Phi: M^{n-2} \to S^{2n-3}(1) \subseteq \mathbf C^{n-1}$ is a minimal Legendre immersion, the warping function $f$ is determined by
\begin{multline}
(1-f^2) f^{2n-3} f_{xx} + (1-f^2) f f_{yy} + ( (n-2)(1-f^2)+2f^2 ) f^{2n-4} f_x^2 \\ - ( (n-2)(1-f^2)-2f^2 ) f_y^2 + ( (n-1)(1-f^2)+2f^2 ) f^{2n-2} = 0
\end{multline}
and $\Theta_2: M^2 \to S^3(1) \subseteq \mathbf C^2$ is a solution of the system
\begin{equation}
\begin{aligned}
& (\Theta_1)_x = \frac{1}{1-f^2} \( i f^2 \Theta_1 - \frac{f_y}{f^{n-2}} \Theta_2 \), \\
& (\Theta_1)_y = \frac{1}{1-f^2} f^{n-2} (f_x + if) \Theta_2, \\
& (\Theta_2)_x = \frac{1}{1-f^2} \( \frac{f_y}{f^{n-2}} \Theta_1 + i((1-n)(1-f^2)-f^2) \Theta_2 \), \\
& (\Theta_2)_y = \frac{1}{1-f^2} f^{n-2} (-f_x + if) \Theta_1.
\end{aligned}
\end{equation}
\end{proposition}
\begin{proof}
From \eqref{5.65} and \eqref{derivatives_of_f}, we obtain that
\begin{equation}
f = \frac{C}{\sqrt{1+\lambda^2+\mu^2+p^2+q^2}}.
\end{equation}
Now define the following three maps:
\begin{equation} \label{7.5}
\begin{aligned}
& \Phi = e^{-ix} \frac{\tilde L - (p+i\lambda) E_1 - (q+i\mu) E_2}{\sqrt{1+\lambda^2+\mu^2+p^2+q^2}}, \\
& \Theta_1 = e^{-i(n-1)x} \frac{(-q+i\mu) E_1 + (p-i\lambda)E_2}{\sqrt{1+\lambda^2+\mu^2+p^2+q^2}}, \\
& \Theta_2 = e^{-i(n-1)x} \frac{(\lambda^2+\mu^2+p^2+q^2)\tilde L + (p+i\lambda) E_1 + (q+i\mu) E_2}{\sqrt{\lambda^2+\mu^2+p^2+q^2}\sqrt{1+\lambda^2+\mu^2+p^2+q^2}}, \\
\end{aligned}
\end{equation}
where $x$ is the coordinate on $M^2$ defined in \eqref{coordinates}. Then $\langle \Phi,\Phi \rangle = 1$ and, denoting by $D$ the Euclidean connection on $\mathbf C^{n+1}$, one also has $D_{E_1}\Phi = D_{E_2}\Phi = 0$ and
\begin{equation} \label{7.6}
D_{E_j}\Phi = e^{-ix} \sqrt{1+\lambda^2+\mu^2+p^2+q^2} E_j
\end{equation}
for $j \geq 3$. This implies that $\Phi$ is an immersion from $M^{n-2}$ into $S^{2n+1}(1) \subseteq \mathbf C^{n+1}$.
We will now show that the image of $\Phi$ is actually contained in a linear subspace $\mathbf C^{n-1}$ of $\mathbf C^{n+1}$ and, since it has length one, in $S^{2n-3}(1) \subseteq \mathbf C^{n-1}$. To this end, consider for any point $u \in M^{n-2}$ the complex linear subspace $$\mathrm{span}\{\Phi(u), (d\Phi)_u(E_3), \ldots, (d\Phi)_u(E_n)\} \subseteq T_{\Phi(u)}\mathbf C^{n+1}.$$
To see that all these subspaces are in fact the same subspace of $\mathbf C^{n+1}$, we use \eqref{7.6} and the fact that
\begin{equation*}
\begin{aligned}
D_{E_j}(d\Phi)(E_k) = & \ D_{E_j} \( e^{-ix} \sqrt{1+\lambda^2+\mu^2+p^2+q^2} E_k \) \\
= & \ e^{-ix} \sqrt{1+\lambda^2+\mu^2+p^2+q^2} \Big( \delta_{jk} ((p+i\lambda)E_1 + (q+i\mu)E_2 - \tilde L) \\
& + \sum_{\ell=3}^n (\omega_k^{\ell}(E_j)+ih_{jk}^{\ell})E_{\ell} \Big) \\
= & \ - \delta_{jk} (1+\lambda^2+\mu^2+p^2+q^2) \Phi + \sum_{\ell=3}^n (\omega_k^{\ell}(E_j)+ih_{jk}^{\ell}) (d\Phi)(E_{\ell})
\end{aligned}
\end{equation*}
belongs again to $\mathrm{span}\{\Phi, (d\Phi)(E_3), \ldots, (d\Phi)(E_n)\}$ for any $j,k \geq 3$. We conclude that $\Phi$ is an immersion of $M^{n-2}$ into $S^{2n-3}(1) \subseteq \mathbf C^{n-1} \subseteq \mathbf C^{n+1}$. Moreover, it follows from the computation above that the second fundamental form of the immersion $\Phi$ coincides with the second fundamental form of $\tilde L$ restricted to $M^{n-2}$, which implies that $\Phi$ is a minimal Legendre immersion of $M^{n-2}$ into $S^{2n-3}(1)$.
It is clear that $\Theta_1$ and $\Theta_2$ take values in the orthogonal complement $\mathbf C^2$ of $\mathbf C^{n-1}$ in $\mathbf C^{n+1}$. Moreover, $D_{E_j}\Theta_1 = D_{E_j}\Theta_2 = 0$ for $j \geq 3$ and the derivatives of $\Theta_1$ and $\Theta_2$ in the directions of $E_1$ and $E_2$ are linear combinations of $\Theta_1$ and $\Theta_2$. With respect to the coordinates $(x,y)$ introduced in \eqref{coordinates}, we have
\begin{equation}
\begin{aligned}
& (\Theta_1)_x = \frac{1}{C^2-f^2} \( i f^2 \Theta_1 - \frac{C f_y}{f^{n-2}} \Theta_2 \), \\
& (\Theta_1)_y = \frac{C}{C^2-f^2} f^{n-2} (f_x + if) \Theta_2, \\
& (\Theta_2)_x = \frac{1}{C^2-f^2} \( \frac{C f_y}{f^{n-2}} \Theta_1 + i((1-n)(C^2-f^2)-f^2) \Theta_2 \), \\
& (\Theta_2)_y = \frac{C}{C^2-f^2} f^{n-2} (-f_x + if) \Theta_1.
\end{aligned}
\end{equation}
The integrability condition for this system is
\begin{multline*}
(C^2-f^2) f^{2n-3} f_{xx} + (C^2-f^2) f f_{yy} + ( (n-2)(C^2-f^2)+2f^2 ) f^{2n-4} f_x^2 \\ - ( (n-2)(C^2-f^2)-2f^2 ) f_y^2 + ( (n-1)(C^2-f^2)+2f^2 ) f^{2n-2} = 0.
\end{multline*}
Since everything is invariant under a rescaling of $f$, provided we rescale the $y$-coordinate accordingly (cf. \eqref{coordinates}), we may assume $C=1$. This yields the equations for $f$, $\Theta_1$ and $\Theta_2$ given in the proposition. The expression for $\tilde L$ follows directly from \eqref{7.5}.
The converse can be verified by a long but straightforward computation.
\end{proof}
\subsection{Classification in $CH^{n}$}
\begin{proposition}\label{P:8.1} Let $L:M\to CH^{n}(-4)$ ($n \geq 5$) be a $\delta(2,n-2)$-ideal Lagrangian immersion whose second fundamental form is given by case $\mathrm{(III)}$ of Lemma \ref{L:3.3}. Then the horizontal lift $\tilde L: M\to H^{2n+1}_1(-1)\subseteq {\bf C}^{n+1}_1$ of $L$ is given by one of the following.
\vskip.05in
{\rm (a)} With respect to a suitable orthogonal decomposition $\mathbf C^{n+1}_1 = \mathbf C^{n-1} \oplus \mathbf C^2_1$,
\begin{equation}\label{8.1}
\tilde L(x,y,u_1,\ldots,u_{n-2}) = -e^{ix} f(x,y) \Phi(u_1,\ldots,u_{n-2}) + e^{i(n-1)x} \sqrt{1+f(x,y)^2} \, \Theta_2(x,y) ,
\end{equation}
where $\Phi: M^{n-2} \to S^{2n-3}(1) \subseteq \mathbf C^{n-1}$ is a minimal Legendre immersion, the warping function $f$ is determined by
\begin{multline} \label{8.2}
(1+f^2) f^{2n-3} f_{xx} + (1+f^2) f f_{yy} + ((n-2)(1+f^2)-2f^2) f^{2n-4} f_x^2 \\ - ((n-2)(1+f^2)+2f^2) f_y^2 + ((n-1)(1+f^2)-2f^2) f^{2n-2} = 0
\end{multline}
and $\Theta_2: M^2 \to H^3_1(-1) \subseteq \mathbf C^2_1$ is a solution of the system
\begin{equation} \label{8.3}
\begin{aligned}
& (\Theta_1)_x = \frac{1}{1+f^2} \(-i f^2 \Theta_1 - \frac{f_y}{f^{n-2}} \Theta_2 \), \\
& (\Theta_1)_y = \frac{1}{1+f^2} f^{n-2} (f_x + if) \Theta_2, \\
& (\Theta_2)_x = \frac{1}{1+f^2} \( -\frac{f_y}{f^{n-2}} \Theta_1 - i((n-1)(1+f^2)-f^2) \Theta_2 \), \\
& (\Theta_2)_y = \frac{1}{1+f^2} f^{n-2} (f_x - if) \Theta_1.
\end{aligned}
\end{equation}
\vskip.05in
{\rm (b)} With respect to a suitable orthogonal decomposition $\mathbf C^{n+1}_1 = \mathbf C^{n-1}_1 \oplus \mathbf C^2$,
\begin{equation}\label{8.4}
\tilde L(x,y,u_1,\ldots,u_{n-2}) = e^{ix} f(x,y) \Phi(u_1,\ldots,u_{n-2}) - e^{i(n-1)x} \sqrt{f(x,y)^2-1} \, \Theta_2(x,y) ,
\end{equation}
where $\Phi: M^{n-2} \to H^{2n-3}_1(-1) \subseteq \mathbf C^{n-1}_1$ is a minimal Legendre immersion, the warping function $f$ is determined by
\begin{multline} \label{8.5}
(f^2-1) f^{2n-3} f_{xx} + (f^2-1) f f_{yy} + ((n-2)(f^2-1)-2f^2) f^{2n-4} f_x^2 \\ - ((n-2)(f^2-1)+2f^2) f_y^2 + ((n-1)(f^2-1)-2f^2) f^{2n-2} = 0
\end{multline}
and $\Theta_2: M^2 \to S^3(1) \subseteq \mathbf C^2$ is a solution of the system
\begin{equation} \label{8.6}
\begin{aligned}
& (\Theta_1)_x = \frac{1}{f^2-1} \(-i f^2 \Theta_1 - \frac{f_y}{f^{n-2}} \Theta_2 \), \\
& (\Theta_1)_y = \frac{1}{f^2-1} f^{n-2} (f_x + if) \Theta_2, \\
& (\Theta_2)_x = \frac{1}{f^2-1} \( \frac{f_y}{f^{n-2}} \Theta_1 + i((n-1)(f^2-1)-f^2) \Theta_2 \), \\
& (\Theta_2)_y = \frac{1}{f^2-1} f^{n-2} (-f_x + if) \Theta_1.
\end{aligned}
\end{equation}
\vskip.05in
{\rm (c)} With respect to the local coordinates $(x,y)$ on $M^2$ introduced above and local coordinates $(u_3,\ldots,u_n)$ on $M^{n-2}$,
\begin{equation} \label{8.6o}
\tilde L = f e^{ix}(u+iv+1,u+iv,G,\bar F),
\end{equation}
where the warping function $f:M^2 \to \mathbf R$ is a solution of
\begin{equation} \label{8.6a}
f^{2n-3}f_{xx} + ff_{yy} + (n-4)f^{2n-4}f_x^2 - nf_y^2 +(n-3)f^{2n-2} = 0,
\end{equation}
$F: M^2 \to \mathbf C$ is determined by
\begin{equation} \label{8.6b}
F_x = -e^{-i(n-3)x}\frac{f_y}{f^n}, \qquad F_y = e^{-i(n-3)x} f^{n-4} (f_x+if),
\end{equation}
$G: M^{n-2} \to \mathbf C^{n-2}$ is a minimal Lagrangian immersion,
$u: M \to \mathbf R$ is given by
\begin{equation} \label{8.6c}
u = \frac 12 (\langle G,G \rangle + |F|^2 - 1) + \frac{1}{2f^2}
\end{equation}
and $v: M \to \mathbf R$ is determined by
\begin{equation} \label{8.6d}
\begin{aligned}
& v_x = -\frac{1}{f^2} - \frac{f_y}{f^n} \Im(e^{i(n-3)x}F), \\
& v_y = -f^{n-3} \Re(e^{i(n-3)x}F) + f^{n-4}f_x \Im(e^{i(n-3)x}F), \\
& v_{u_k} = \langle D_{\frac{\partial}{\partial u_k}} G,iG \rangle.
\end{aligned}
\end{equation}
\end{proposition}
\begin{proof}
We divide the proof into three cases.
\vskip.05in
{\it Case} (1): $\lambda^2+\mu^2+p^2+q^2>1$.
It follows from \eqref{5.65} and \eqref{derivatives_of_f} that
\begin{equation}
f = \frac{C}{\sqrt{\lambda^2+\mu^2+p^2+q^2-1}}
\end{equation}
for some real constant $C > 0$. Now consider the maps
\begin{equation}
\begin{aligned}\label{8.7}
& \Phi =e^{-i x}\frac{\tilde L+(p+i \lambda)E_{1}+(q+i \mu)E_{2}}{\sqrt{\lambda^{2}+\mu^{2}+p^{2}+q^{2}-1}}, \\
& \Theta_{1} = e^{-i(n-1)x} \frac{(q - i \mu)E_1 -(p-i\lambda)E_2}{\sqrt{\lambda^{2}+\mu^2+p^{2}+q^2}},\; \\
& \Theta_{2} = e^{-i(n-1)x} \frac{(\lambda^{2}+\mu^2+p^{2}+q^2)\tilde L +(p+i \lambda)E_{1}+(q+i \mu)E_2}{\sqrt{\lambda^2+\mu^{2}+p^2+q^{2}}\sqrt{\lambda^2+\mu^{2}+p^2+q^{2}-1}}.
\end{aligned}
\end{equation}
Then $\langle \Phi,\Phi \rangle = \langle \Theta_1,\Theta_1 \rangle = 1$ and $\langle \Theta_2,\Theta_2 \rangle = -1$. Continuing in the same way as in the proof of Proposition \ref{P:7.1}, we obtain case (a).
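Indeed, the last of these relations can be checked directly. Writing $s = \lambda^{2}+\mu^{2}+p^{2}+q^{2}$ and using $\langle \tilde L,\tilde L \rangle = -1$, $\langle E_j,E_j \rangle = 1$ and $\langle \tilde L,E_j \rangle = \langle \tilde L,iE_j \rangle = 0$ for $j=1,2$, we find
\begin{equation*}
\langle \Theta_2,\Theta_2 \rangle = \frac{s^{2}\, \langle \tilde L,\tilde L \rangle + (p^{2}+\lambda^{2}) + (q^{2}+\mu^{2})}{s(s-1)} = \frac{-s^{2}+s}{s(s-1)} = -1.
\end{equation*}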
\vskip.05in
{\it Case} (2): $\lambda^2+\mu^2+p^2+q^2<1$.
It follows from \eqref{5.65} and \eqref{derivatives_of_f} that
\begin{equation}
f = \frac{C}{\sqrt{1-\lambda^2-\mu^2-p^2-q^2}}
\end{equation}
for some real constant $C > 0$. Now consider the maps
\begin{equation}
\begin{aligned}\label{8.7bis}
& \Phi =e^{-i x}\frac{\tilde L+(p+i \lambda)E_{1}+(q+i \mu)E_{2}}{\sqrt{1-\lambda^2-\mu^2-p^2-q^2}}, \\
& \Theta_{1} = e^{-i(n-1)x} \frac{(q - i \mu)E_1 -(p-i\lambda)E_2}{\sqrt{\lambda^{2}+\mu^2+p^{2}+q^2}},\; \\
& \Theta_{2} = e^{-i(n-1)x} \frac{(\lambda^{2}+\mu^2+p^{2}+q^2)\tilde L +(p+i \lambda)E_{1}+(q+i \mu)E_2}{\sqrt{\lambda^2+\mu^{2}+p^2+q^{2}}\sqrt{1-\lambda^2-\mu^2-p^2-q^2}}.
\end{aligned}
\end{equation}
Then $\langle \Phi,\Phi \rangle = -1$ and $\langle \Theta_1,\Theta_1 \rangle = \langle \Theta_2,\Theta_2 \rangle = 1$. Continuing in the same way as in the proof of Proposition \ref{P:7.1}, we obtain case (b).
\vskip.05in
{\it Case} (3): $\lambda^2+\mu^2+p^2+q^2=1$. Let us put
\begin{align}\label{8.16} &\Phi = e^{-ix}f (\tilde L+(p+i \lambda)E_1+(q+i \mu)E_2 ),
\\ & \label{8.17} \Theta=e^{-i(n-2)x}\left((q-i \mu)E_1-(p-i \lambda)E_2 \right),\end{align}
where $x$ is the coordinate on $M^2$ defined by \eqref{coordinates}. Then we have
\begin{align}&\label{8.18} \! \<\Phi,\Phi\>=\<\Phi,\Theta\>=0,\;\; \<\Theta,\Theta\>=1,
\\&\label{8.19} D_{E_1}\Phi=D_{E_{2}}\Phi=D_{E_{i}}\Phi=D_{E_i}\Theta=0,\;\; i=3,\ldots,n,
\\&\label{8.20} D_{E_1}\Theta=e^{-i(n-3)x}\frac{q-i \mu}{f}\Phi,
\\&\label{8.21} D_{E_2}\Theta=-e^{-i(n-3)x}\frac{p-i \lambda}{f}\Phi.
\end{align}
It follows from \eqref{8.18} and \eqref{8.19} that $\Phi$ is a constant light-like vector. Moreover, from \eqref{8.19}, \eqref{8.20} and \eqref{8.21}, together with \eqref{coordinates} and \eqref{derivatives_of_f_2}, we obtain that $\Theta$ can be seen as a map from $M^2$ satisfying
\begin{equation}\begin{aligned}\label{8.22}
& \Theta_x = -e^{-i(n-3)x} \frac{f_y}{f^n} \Phi, \\
& \Theta_y = e^{-i(n-3)x} f^{n-4} (f_x+if) \Phi.
\end{aligned}\end{equation}
This implies that
\begin{align}\label{8.23}
\Theta = c_0 + F \Phi,
\end{align} for some space-like unit vector $c_0$ perpendicular to $\Phi$ and a function $F: M^2 \to \mathbf C$ satisfying \eqref{8.6b}. Remark that \eqref{8.6a} is the integrability condition for the system \eqref{8.6b}.
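Indeed, that \eqref{8.6a} is the integrability condition can be verified by equating the mixed derivatives of $F$ computed from \eqref{8.6b}:
\begin{align*}
F_{xy} &= -e^{-i(n-3)x} \left( \frac{f_{yy}}{f^{n}} - n\frac{f_{y}^{2}}{f^{n+1}} \right), \\
F_{yx} &= e^{-i(n-3)x} \left( (n-3)f^{n-3} + (n-4)f^{n-5}f_{x}^{2} + f^{n-4}f_{xx} \right),
\end{align*}
where in the second expression the imaginary part $i f^{n-4} f_x \left( -(n-3)+(n-4)+1 \right)$ vanishes identically. Equating real parts and multiplying by $-f^{n+1}$ yields \eqref{8.6a}.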
From \eqref{8.16} we obtain
\begin{equation} \label{8.25}
\tilde L = \frac{e^{ix}}{f}\Phi + \Psi,
\end{equation}
where
\begin{equation} \label{8.25a}
\Psi=-(p+i \lambda)E_1-(q+i \mu)E_2.
\end{equation}
Since $\<\right.\!\tilde L,E_1\! \left.\>=\<\right.\!\tilde L,i E_1\! \left.\>=\<\right.\!\tilde L,E_2\! \left.\>=\<\right.\! \tilde L,i E_2\! \left.\>=0$, we find from \eqref{8.25} and \eqref{8.25a}
\begin{equation}\begin{aligned} \label{8.26}
& \<e^{ix}\Phi,E_1\>=fp, & & \<e^{ix}\Phi,E_2\>=fq, \\
& \<e^{ix}\Phi,i E_1\>=f\lambda, & & \<e^{ix}\Phi,i E_2\>=f\mu,
\end{aligned}\end{equation}
or, equivalently,
\begin{equation}\begin{aligned} \label{8.27}
& \< \Phi,E_1 \> = f (p\cos x + \lambda\sin x),
&& \< \Phi,iE_1 \> = f (\lambda\cos x - p\sin x), \\
& \< \Phi,E_2 \> = f (q\cos x + \mu\sin x),
&& \< \Phi,iE_2 \> = f (\mu\cos x - q\sin x).
\end{aligned}\end{equation}
It then follows from \eqref{8.25a} and \eqref{8.27} that
\begin{equation} \label{8.28}
\< \Psi,\Phi \> = -f \cos x, \qquad \< \Psi,i\Phi \> = -f \sin x.
\end{equation}
We obtain from \eqref{8.17} and \eqref{8.25a} that $\langle\Psi,\Theta\rangle = 0$ and together with \eqref{8.23} and \eqref{8.28} this implies
\begin{equation} \label{8.29}
\<\Psi,c_0\> = f \, \Re (\bar F e^{ix}), \qquad \<\Psi,ic_0\> = f \, \Im (\bar F e^{ix}).\end{equation}
Without loss of generality, we may choose
\begin{equation}\label{8.30}
\Phi=(1,1,0,\ldots,0), \qquad c_0=(0,0,\ldots,0,1).
\end{equation}
It then follows from \eqref{8.28}--\eqref{8.30} that $\Psi=(\Psi_2+fe^{ix},\Psi_2,\Psi_3,\ldots,\Psi_n,f\bar Fe^{ix})$ for some functions $\Psi_2,\ldots,\Psi_n:M\to\mathbf C$. Now define real valued functions $\alpha$, $\beta$ and complex valued functions $G_3, \ldots, G_n$ by
\begin{equation*} \label{8.31}
\Psi_2 = fe^{ix}(\alpha+i\beta), \qquad (\Psi_3,\ldots,\Psi_n)=fe^{ix}(G_3,\ldots,G_n).
\end{equation*}
Then, from \eqref{8.25},
\begin{equation} \label{8.32}
\tilde L = fe^{ix} \left( \frac{1}{f^2} + \alpha + i\beta + 1, \frac{1}{f^2} + \alpha + i\beta, G_3, \ldots, G_n, \bar F \right)
\end{equation}
and the condition $\langle \tilde L, \tilde L \rangle = -1$ yields
\begin{equation} \label{8.33}
\alpha = \frac 12 \left( \langle G,G \rangle + |F|^2 - 1 \right) - \frac{1}{2f^2},
\end{equation}
where $\langle G,G \rangle$ denotes the square of the length of $G=(G_3,\ldots,G_n)$ in $\mathbf C^{n-2}$. By putting $u = \alpha+1/f^2$ and $v = \beta$, we obtain the desired expression for $\tilde L$ and \eqref{8.6c}.
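Explicitly, since $\Phi=(1,1,0,\ldots,0)$ is light-like, the first coordinate direction is time-like, and with $A = \frac{1}{f^{2}} + \alpha$ the expression \eqref{8.32} gives
\begin{equation*}
\langle \tilde L,\tilde L \rangle = f^{2} \left( -|A+i\beta+1|^{2} + |A+i\beta|^{2} + \langle G,G \rangle + |F|^{2} \right) = f^{2} \left( -2A - 1 + \langle G,G \rangle + |F|^{2} \right),
\end{equation*}
so that $\langle \tilde L,\tilde L \rangle = -1$ is equivalent to \eqref{8.33}.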
Let us now check that $G$ does not depend on $x$ and $y$ and is actually a Lagrangian immersion of $M^{n-2}$ into $\mathbf C^{n-2}$. Using \eqref{coordinates}, \eqref{8.16} and \eqref{8.17}, we can express $\tilde L_x$ and $\tilde L_y$ as linear combinations of $\tilde L$, $\Phi$ and $\Theta$. Since $\Phi=(1,1,0,\ldots,0)$ and $\Theta=(F,F,0,\ldots,0,1)$, it follows from these expressions that $\tilde L_3,\ldots,\tilde L_n$ satisfy
\begin{equation} \label{8.34}
\begin{aligned}
& (\tilde L_j)_x = -\frac{1}{\lambda^2+\mu^2}(\lambda p + \mu q - i(\lambda^2+\mu^2)) \tilde L_j, \\
& (\tilde L_j)_y = \frac{f^{n-2}}{\lambda^2+\mu^2}(\mu p - \lambda q) \tilde L_j.
\end{aligned}
\end{equation}
By using that $\tilde L_j = f e^{ix} G_j$ for $j=3,\ldots,n$ and \eqref{derivatives_of_f_2}, we obtain from \eqref{8.34} that $(G_j)_x=(G_j)_y=0$. To show that $G:M^{n-2} \to \mathbf{C}^{n-2}$ is Lagrangian, it suffices to check that $\langle G_{u_j},iG_{u_k}\rangle = 0$ for all $j,k=1,\ldots,n-2$. A straightforward computation shows that $\langle \tilde L_{u_j},i \tilde L_{u_k} \rangle = f^2 \langle G_{u_j}, i G_{u_k}\rangle$ and since $\tilde L$ is Legendrian, we have $\langle \tilde L_{u_j},i \tilde L_{u_k} \rangle = 0$ so that we obtain the result.
Finally, we check that $v$ satisfies the system \eqref{8.6d}. Since $\tilde L$ is horizontal, we have $\langle \tilde L_x, i\tilde L \rangle = \langle \tilde L_y, i\tilde L \rangle = \langle \tilde L_{u_j}, i\tilde L \rangle = 0$ for all $j=1,\ldots,n-2$. A straightforward computation, using \eqref{8.6o}, \eqref{8.6b} and \eqref{8.6c} then gives the result.
The converse can be verified by a long but straightforward computation.
\end{proof}
\section{Main theorems}
Finally, we summarize our results from above in the following three main theorems.
\begin{theorem} \label{T:9.1}
Let $M$ be a Lagrangian submanifold of the complex Euclidean space ${\bf C}^{n}$ with $n\geq 5$. Then we have the inequality
\begin{equation*}
\delta(2,n\hskip-.01in-\hskip-.01in 2) \leq \text{$ \frac{n^2(n-2)}{4(n-1)} $} H^2 \end{equation*}
at every point. Assume that $M$ is non-minimal. Then the equality sign in the above inequality holds identically, i.e., $M$ is $\delta(2,n-2)$-ideal, if and only if $M$ is locally congruent to the image of one of the following two immersions:
\vskip.05in
{\rm (a)}
\begin{equation*}
L(x, u_{2},\ldots,u_{n})=\frac{e^{i\theta(x)}}{\varphi(x)+ix}\Phi( u_{2},\ldots,u_{n}),
\end{equation*}
with
\begin{equation*}
\theta(x) = \frac{n-1}{2-n} \arcsin \( cx^{\frac{n-2}{n-3}} \), \qquad\varphi(x) = \sqrt{\frac{1}{c^2 \, x^{\frac{2}{n-3}}}-x^2},
\end{equation*}
where $c$ is a positive constant and $\Phi$ is a minimal Legendre submani\-fold of $S^{2n-1}(1) \subseteq \mathbf C^n$ which is mapped to a $\delta(n-2)$-ideal minimal Lagrangian submanifold of $CP^{n-1}(4)$ by the Hopf fibration;
\vskip.05in
{\rm (b)}
\begin{align*} L(x,y,u_1,\ldots,u_{n-2})=\big(f(x,y)e^{i x}\Phi(u_1,\ldots,u_{n-2}), z(x,y)\big),\end{align*}
where $\Phi$ defines a minimal Legendre immersion in $S^{2n-3}(1)\subset {\bf C}^{n-1}$ and $(fe^{i x},z)$ is a Lagrangian surface in ${\bf C}^2$, where $f$ is determined by
\begin{align*}
\frac{f_{yy}}{f^{n-2}} - (n-2) \frac{f_y^2}{f^{n-1}} + (n-1)f^{n-1} + (n-2) f^{n-3} f_x^2 + f^{n-2} f_{xx} = 0
\end{align*}
and $z$ by
\begin{equation*}
\begin{aligned}
& z_x = e^{i(n-1)x} \frac{f_y}{f^{n-2}}, \\
& z_y = e^{i(n-1)x} f^{n-1} \left(i-\frac{f_x}{f}\right).
\end{aligned}
\end{equation*}
\end{theorem}
\begin{remark}
As pointed out in Remark \ref{RemarkMinimal}, if $M$ is a minimal $\delta(2,n-2)$-ideal Lagrangian submanifold of $\mathbf C^n$ and the bases given in Lemma \ref{L:3.2} can be pasted together to form an orthonormal frame, then $M$ is either $\delta(2)$-ideal, $\delta(n-2)$-ideal or $\delta(2,k)$-ideal for some $k$ satisfying $2 \leq k < n-2$ or it is given by
\begin{align*}
L(x,y,u_1,\ldots,u_{n-2})=\big(L_1(x,y),L_2(u_1,\ldots,u_{n-2})\big),
\end{align*}
where $L_1$ is a minimal $\delta(2)$-ideal Lagrangian immersion into $\mathbf C^2$ and $L_2$ is a minimal $\delta(n-2)$-ideal Lagrangian immersion into ${\bf C}^{n-2}$.
\end{remark}
\begin{theorem} \label{T:9.2}
Let $M$ be a Lagrangian submanifold of the complex projective space $CP^{n}(4)$, with $n\geq 5$. Then we have the inequality
\begin{equation*}
\delta(2,n\hskip-.01in-\hskip-.01in 2) \leq \text{$ \frac{n^2(n-2)}{4(n-1)} $} H^2+2(n-2) \end{equation*}
at every point. Assume that $M$ is non-minimal. Then the equality sign in the above inequality holds identically, i.e., $M$ is $\delta(2,n-2)$-ideal, if and only if $M$ is locally congruent to the image of $L = \pi \circ \tilde L$, where $\pi: S^{2n+1}(1) \to CP^n(4)$ is the Hopf fibration and $\tilde L$ is one of the following two immersions:
\vskip.05in
{\rm (a)}
\begin{equation*}
\tilde L(x, u_{2},\ldots,u_{n})=\(\frac{e^{i \theta}\Phi(u_{2},\ldots,u_{n})}{\sqrt{1+\lambda^2+\varphi^2}}, \frac{ e^{i(n-2)\theta}(i\lambda-\varphi)}{\sqrt{1+\lambda^2+\varphi^2}}\),
\end{equation*}
where $\theta$, $\varphi$ and $\lambda$ are functions of $x$ only, satisfying
\begin{equation*}
\lambda'=(n-3)\lambda\varphi, \quad \varphi'=-1-\varphi^2-(n-2)\lambda^2, \quad \theta'=\lambda,
\end{equation*}
and $\Phi$ is a Legendre immersion into $S^{2n-1}(1)$ whose image under the Hopf fibration is minimal $\delta(n-2)$-ideal Lagrangian in $CP^{n-1}(4)$;
\vskip.05in
{\rm (b)}
\begin{equation*}
\tilde L(x,y,u_1,\ldots,u_{n-2}) \\ = e^{ix} f(x,y) \Phi(u_1,\ldots,u_{n-2}) + e^{i(n-1)x} \sqrt{1-f(x,y)^2} \, \Theta_2(x,y) ,
\end{equation*}
where, with respect to a suitable orthogonal decomposition $\mathbf C^{n+1} = \mathbf C^{n-1} \oplus \mathbf C^2$, the map $\Phi$ is a minimal Legendre immersion into $S^{2n-3}(1) \subseteq \mathbf C^{n-1}$, the real function $f$ is determined by
\begin{multline*}
(1-f^2) f^{2n-3} f_{xx} + (1-f^2) f f_{yy} + ( (n-2)(1-f^2)+2f^2 ) f^{2n-4} f_x^2 \\ - ( (n-2)(1-f^2)-2f^2 ) f_y^2 + ( (n-1)(1-f^2)+2f^2 ) f^{2n-2} = 0
\end{multline*}
and $\Theta_2$ is a map into $S^3(1) \subseteq \mathbf C^2$, which is a solution of
\begin{equation*}
\begin{aligned}
& (\Theta_1)_x = \frac{1}{1-f^2} \( i f^2 \Theta_1 - \frac{f_y}{f^{n-2}} \Theta_2 \), \\
& (\Theta_1)_y = \frac{1}{1-f^2} f^{n-2} (f_x + if) \Theta_2, \\
& (\Theta_2)_x = \frac{1}{1-f^2} \( \frac{f_y}{f^{n-2}} \Theta_1 + i((1-n)(1-f^2)-f^2) \Theta_2 \), \\
& (\Theta_2)_y = \frac{1}{1-f^2} f^{n-2} (-f_x + if) \Theta_1.
\end{aligned}
\end{equation*}
\end{theorem}
\begin{remark}
As pointed out in Remark \ref{RemarkMinimal}, if $M$ is a minimal $\delta(2,n-2)$-ideal Lagrangian submanifold of $CP^n(4)$ and the bases given in Lemma \ref{L:3.2} can be pasted together to form an orthonormal frame, then $M$ is either $\delta(2)$-ideal, $\delta(n-2)$-ideal or $\delta(2,k)$-ideal for some $k$ satisfying $2 \leq k < n-2$.
\end{remark}
\begin{theorem} \label{T:9.3}
Let $L:M\to CH^n(-4)$ be a Lagrangian submanifold of the complex hyperbolic space $CH^{n}(-4)$ with $n\geq 5$. Then we have the inequality
\begin{equation*}
\delta(2,n\hskip-.01in-\hskip-.01in 2) \leq \text{$ \frac{n^2(n-2)}{4(n-1)} $} H^2-2(n-2) \end{equation*}
at every point. Assume that $M$ is non-minimal. Then the equality sign in the above inequality holds identically, i.e., $M$ is $\delta(2,n-2)$-ideal, if and only if $M$ is locally congruent to the image of $L = \pi \circ \tilde L$, where $\pi: H^{2n+1}_1(-1) \to CH^n(-4)$ is the Hopf fibration and $\tilde L$ is one of the following six immersions:
\vskip.05in
{\rm (a)}
$$ \tilde L(x, u_{2},\ldots,u_{n})=\( \dfrac{e^{i \theta}\Phi(u_{2},\ldots,u_{n}) }{\sqrt{1-\lambda^2-\varphi^2}}, \dfrac{ e^{i(n-2) \theta}(i\lambda-\varphi)}{\sqrt{1-\lambda^2-\varphi^2}}\), \quad \lambda^2+\varphi^2<1, $$
where $\lambda$, $\varphi$ and $\theta$ are functions of $x$ only, satisfying
$$\lambda'=(n-3)\lambda\varphi, \quad \varphi'=1-\varphi^2-(n-2)\lambda^2, \quad \theta'=\lambda, $$
and $\Phi$ is a Legendre immersion into $H_{1}^{2n-1}(-1)$ whose image under the Hopf fibration is minimal $\delta(n-2)$-ideal Lagrangian in $CH^{n-1}(-4)$;
\vskip.05in
{\rm (b)}
$$ \tilde L(x, u_{2},\ldots,u_{n})=\( \dfrac{ e^{i(n-2) \theta}(i\lambda-\varphi)}{\sqrt{\lambda^2+\varphi^2-1}},\dfrac{e^{i \theta}\Phi(u_{2},\ldots,u_{n})}{\sqrt{\lambda^2+\varphi^2-1}}\),\;\; \lambda^2+\varphi^2>1, $$
where $\lambda$, $\varphi$ and $\theta$ are functions of $x$ only, satisfying
$$ \lambda'=(n-3)\lambda\varphi, \quad \varphi'=1-\varphi^2-(n-2)\lambda^2, \quad \theta'=\lambda, $$
and $\Phi$ is a Legendre immersion into $S^{2n-1}(1)$ whose image under the Hopf fibration is minimal $\delta(n-2)$-ideal Lagrangian in $CP^{n-1}(4)$;
\vskip.05in
{\rm (c)}
\begin{multline*}
\tilde L(x,u_2,\ldots,u_n) = \frac{\cosh^{\frac{2}{n-3}}\left(\frac{n-3}{2}x\right)}{e^{\frac{2i}{n-3} \arctan\left(\tanh(\frac{n-3}{2}x)\right)}} \left[ \left(w + \frac i2 \langle \Phi,\Phi \rangle + i, \Phi, w + \frac i2 \langle \Phi,\Phi \rangle \right) \right. \\
+ \left. \int_0^x \frac{e^{2i \arctan\left(\tanh(\frac{n-3}{2}t)\right)}}{\cosh^{\frac{2}{n-3}} \left(\frac{n-3}{2}t \right)} dt \ (1,0,\ldots,0,1)\right],
\end{multline*}
where $\Phi$ is a minimal $\delta(n-2)$-ideal Lagrangian immersion into ${\bf C}^{n-1}$ and $w$ is the unique solution of the PDE system $ w_{u_{k}}=\< \Phi, i\Phi_{u_{k}}\>$ for $k=2,\ldots,n$;
\vskip.05in
{\rm (d)}
\begin{equation*}
\tilde L(x,y,u_1,\ldots,u_{n-2}) \\ = -e^{ix} f(x,y) \Phi(u_1,\ldots,u_{n-2}) + e^{i(n-1)x} \sqrt{1+f(x,y)^2} \, \Theta_2(x,y) ,
\end{equation*}
where, with respect to a suitable orthogonal decomposition $\mathbf C^{n+1}_1 = \mathbf C^{n-1} \oplus \mathbf C^2_1$, the map $\Phi$ is a minimal Legendre immersion into $S^{2n-3}(1) \subseteq \mathbf C^{n-1}$, the real function $f$ is determined by
\begin{multline*}
(1+f^2) f^{2n-3} f_{xx} + (1+f^2) f f_{yy} + ((n-2)(1+f^2)-2f^2) f^{2n-4} f_x^2 \\ - ((n-2)(1+f^2)+2f^2) f_y^2 + ((n-1)(1+f^2)-2f^2) f^{2n-2} = 0
\end{multline*}
and $\Theta_2$ is a map into $H^3_1(-1) \subseteq \mathbf C^2_1$, which is a solution of
\begin{equation*}
\begin{aligned}
& (\Theta_1)_x = \frac{1}{1+f^2} \(-i f^2 \Theta_1 - \frac{f_y}{f^{n-2}} \Theta_2 \), \\
& (\Theta_1)_y = \frac{1}{1+f^2} f^{n-2} (f_x + if) \Theta_2, \\
& (\Theta_2)_x = \frac{1}{1+f^2} \( -\frac{f_y}{f^{n-2}} \Theta_1 - i((n-1)(1+f^2)-f^2) \Theta_2 \), \\
& (\Theta_2)_y = \frac{1}{1+f^2} f^{n-2} (f_x - if) \Theta_1;
\end{aligned}
\end{equation*}
\vskip.05in
{\rm (e)}
\begin{equation*}
\tilde L(x,y,u_1,\ldots,u_{n-2}) \\ = e^{ix} f(x,y) \Phi(u_1,\ldots,u_{n-2}) - e^{i(n-1)x} \sqrt{f(x,y)^2-1} \, \Theta_2(x,y),
\end{equation*}
where, with respect to a suitable orthogonal decomposition $\mathbf C^{n+1}_1 = \mathbf C^{n-1}_1 \oplus \mathbf C^2$, the map $\Phi$ is a minimal Legendre immersion into $H^{2n-3}_1(-1) \subseteq \mathbf C^{n-1}_1$, the real function $f$ is determined by
\begin{multline*}
(f^2-1) f^{2n-3} f_{xx} + (f^2-1) f f_{yy} + ((n-2)(f^2-1)-2f^2) f^{2n-4} f_x^2 \\ - ((n-2)(f^2-1)+2f^2) f_y^2 + ((n-1)(f^2-1)-2f^2) f^{2n-2} = 0
\end{multline*}
and $\Theta_2$ is a map into $S^3(1) \subseteq \mathbf C^2$, which is a solution of
\begin{equation*}
\begin{aligned}
& (\Theta_1)_x = \frac{1}{f^2-1} \(-i f^2 \Theta_1 - \frac{f_y}{f^{n-2}} \Theta_2 \), \\
& (\Theta_1)_y = \frac{1}{f^2-1} f^{n-2} (f_x + if) \Theta_2, \\
& (\Theta_2)_x = \frac{1}{f^2-1} \( \frac{f_y}{f^{n-2}} \Theta_1 + i((n-1)(f^2-1)-f^2) \Theta_2 \), \\
& (\Theta_2)_y = \frac{1}{f^2-1} f^{n-2} (-f_x + if) \Theta_1;
\end{aligned}
\end{equation*}
\vskip.05in
{\rm (f)}
\begin{multline*}
\tilde L(x,y,u_1,\ldots,u_{n-2}) = f(x,y) e^{ix}(u(x,y,u_1,\ldots,u_{n-2})+iv(x,y,u_1,\ldots,u_{n-2})+1, \\ u(x,y,u_1,\ldots,u_{n-2})+iv(x,y,u_1,\ldots,u_{n-2}),G(u_1,\ldots,u_{n-2}), F(x,y)),
\end{multline*}
where the real function $f$ is determined by
\begin{equation*}
f^{2n-3}f_{xx} + ff_{yy} + (n-4)f^{2n-4}f_x^2 - nf_y^2 +(n-3)f^{2n-2} = 0
\end{equation*}
and the complex function $F$ by
\begin{equation*}
F_x = -e^{i(n-3)x}\frac{f_y}{f^n}, \qquad F_y = e^{i(n-3)x} f^{n-4} (f_x-if),
\end{equation*}
$G$ is a minimal Lagrangian immersion into $\mathbf C^{n-2}$, the real function
$u$ is given by
\begin{equation*}
u = \frac 12 (\langle G,G \rangle + |F|^2 - 1) + \frac{1}{2f^2}
\end{equation*}
and the real function $v$ is determined by
\begin{equation*}
\begin{aligned}
& v_x = -\frac{1}{f^2} - \frac{f_y}{f^n} \Im(e^{i(n-3)x}\bar F), \\
& v_y = -f^{n-3} \Re(e^{i(n-3)x}\bar F) + f^{n-4}f_x \Im(e^{i(n-3)x}\bar F), \\
& v_{u_k} = \langle D_{\frac{\partial}{\partial u_k}} G,iG \rangle.
\end{aligned}
\end{equation*}
\end{theorem}
\begin{remark}
As pointed out in Remark \ref{RemarkMinimal}, if $M$ is a minimal $\delta(2,n-2)$-ideal Lagrangian submanifold of $CH^n(-4)$ and the bases given in Lemma \ref{L:3.2} can be pasted together to form an orthonormal frame, then $M$ is either $\delta(2)$-ideal, $\delta(n-2)$-ideal or $\delta(2,k)$-ideal for some $k$ satisfying $2 \leq k < n-2$.
\end{remark}
;(function ( $, window, document, undefined ) {
"use strict";
// undefined is used here as the undefined global variable in ECMAScript 3 is
// mutable (ie. it can be changed by someone else). undefined isn't really being
// passed in so we can ensure the value of it is truly undefined. In ES5, undefined
// can no longer be modified.
// window and document are passed through as local variable rather than global
// as this (slightly) quickens the resolution process and can be more efficiently
// minified (especially when both are regularly referenced in your plugin).
// Create the defaults once
var $w = $(window),
$d = $(document),
pluginName = "fluidbox",
defaults = {
immediateOpen: false,
loader: false,
maxWidth: 0,
maxHeight: 0,
resizeThrottle: 500,
stackIndex: 1000,
stackIndexDelta: 10,
viewportFill: 0.95,
},
globalData = {},
keyboardEvents = ['keyup', 'keydown', 'keypress'];
// Global plugin instance tracker
var fbInstance = 0;
// Check the availability of the console object. This ensures compatibility with IE8.
	if (typeof console === "undefined" || typeof console.warn === "undefined") {
		window.console = {
			warn: function(){}
		};
	}
// Check if dependencies are loaded
// 1. Ben Almen's debounce/throttle plugin
if (!$.isFunction($.throttle)) {
console.warn('Fluidbox: The jQuery debounce/throttle plugin is not found/loaded. Even though Fluidbox works without it, the window resize event will fire extremely rapidly in browsers, resulting in significant degradation in performance upon viewport resize.');
}
// ---------------------------------------------------------------------------------------------------------------------- //
// Dependency: David Walsh (http://davidwalsh.name/css-animation-callback) //
// and //
// Jonathan Suh (https://jonsuh.com/blog/detect-the-end-of-css-animations-and-transitions-with-javascript/) //
// ---------------------------------------------------------------------------------------------------------------------- //
var whichTransitionEvent = function() {
var t,
el = document.createElement("fakeelement");
var transitions = {
"transition" : "transitionend",
"OTransition" : "oTransitionEnd",
"MozTransition" : "transitionend",
"WebkitTransition": "webkitTransitionEnd"
};
for (t in transitions){
if (el.style[t] !== undefined){
return transitions[t];
}
}
};
var customTransitionEnd = whichTransitionEvent();
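	// Illustrative usage (not part of the plugin logic): `customTransitionEnd`
	// now holds the vendor-specific transition-end event name, which resolves to
	// "transitionend" in modern browsers, so a one-off handler can be attached as:
	//
	//     $('.fluidbox__ghost').one(customTransitionEnd, function () {
	//         // runs once, when the element finishes its CSS transition
	//     });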
// The actual plugin constructor
function Plugin (element, options) {
// Assign element
this.element = element;
// Manipulate HTML5 dataset object
// - Format: data-fluidbox-(setting-name). When converted into camel case: fluidboxSettingName
// - So, we will have to remove 'fluidbox' in the front, and change the first letter to lowercase
var elementData = {};
$.each($(this.element).data(), function(k,v) {
				var lowercaseFirst = function(s) {
						// Lower-case the first character left over after stripping the 'fluidbox' prefix,
						// e.g. data-fluidbox-loader → dataset key "fluidboxLoader" → setting "loader"
						return s && s[0].toLowerCase() + s.slice(1);
					},
					key = lowercaseFirst(k.replace('fluidbox',''));

				// Only push non-empty keys (that are part of the Fluidbox HTML5 data- attributes) into new object
				if(key !== '' && key !== null) {
					// Coerce boolean strings; leave all other values (e.g. numbers) untouched
					if (v === 'false') {
						v = false;
					} else if (v === 'true') {
						v = true;
					}
elementData[key] = v;
}
});
// Merge defaults into options, into dataset
this.settings = $.extend( {}, defaults, options, elementData);
// Coerce settings
this.settings.viewportFill = Math.max(Math.min(parseFloat(this.settings.viewportFill), 1), 0);
if(this.settings.stackIndex < this.settings.stackIndexDelta) {
				this.settings.stackIndexDelta = this.settings.stackIndex;
}
// Store plugin name
this._name = pluginName;
// Initialize
this.init();
}
// Private functions
var _fun = {
dom: function() {
// Wrap and add ghost element
var $fb_innerWrap = $('<div />', {
'class': 'fluidbox__wrap',
css: {
zIndex: this.settings.stackIndex - this.settings.stackIndexDelta
}
});
$(this.element)
.addClass('fluidbox--closed')
.wrapInner($fb_innerWrap)
.find('img')
.first()
.css({ opacity: 1})
.addClass('fluidbox__thumb')
.after('<div class="fluidbox__ghost" />');
// Append loader
if(this.settings.loader) {
var $fbLoader = $('<div />', {
'class': 'fluidbox__loader',
css: {
zIndex: 2
}
});
$(this.element).find('.fluidbox__wrap').append($fbLoader);
}
},
prepareFb: function() {
var fb = this,
$fb = $(this.element);
// Thumbnail is successfully loaded, fire event
$fb.trigger('thumbloaddone.fluidbox');
// Get basic measurements and to resize the ghost element
_fun.measure.fbElements.call(this);
// Bind events
fb.bindEvents();
// Status: Fluidbox is ready to use
$fb.addClass('fluidbox--ready');
// Bind listeners
fb.bindListeners();
// Emit custom event
$fb.trigger('ready.fluidbox');
},
measure: {
viewport: function() {
globalData.viewport = {
w: $w.width(),
h: $w.height()
};
},
fbElements: function() {
var fb = this,
$fb = $(this.element),
$fbThumb = $fb.find('img').first(),
$fbGhost = $fb.find('.fluidbox__ghost'),
$fbWrap = $fb.find('.fluidbox__wrap');
// Store image dimensions in instance data
fb.instanceData.thumb = {
natW: $fbThumb[0].naturalWidth,
natH: $fbThumb[0].naturalHeight,
w: $fbThumb.width(),
h: $fbThumb.height()
};
// Set ghost dimensions
$fbGhost
.css({
width: $fbThumb.width(),
height: $fbThumb.height(),
top: $fbThumb.offset().top - $fbWrap.offset().top + parseInt($fbThumb.css('borderTopWidth')) + parseInt($fbThumb.css('paddingTop')),
left: $fbThumb.offset().left - $fbWrap.offset().left + parseInt($fbThumb.css('borderLeftWidth')) + parseInt($fbThumb.css('paddingLeft'))
});
}
},
checkURL: function(url) {
var exitCode = 0;
			if(/\s/.test(url)) {
console.warn('Fluidbox: Fluidbox opening is halted because it has detected characters in your URL string that need to be properly encoded/escaped. Whitespace(s) have to be escaped manually. See RFC3986 documentation.');
exitCode = 1;
} else if(/[\"\'\(\)]/g.test(url)) {
console.warn('Fluidbox: Fluidbox opening will proceed, but it has detected characters in your URL string that need to be properly encoded/escaped. These will be escaped for you. See RFC3986 documentation.');
exitCode = 0;
}
return exitCode;
},
formatURL: function(url) {
return url
.replace(/"/g, '%22')
.replace(/'/g, '%27')
.replace(/\(/g, '%28')
.replace(/\)/g, '%29');
}
};
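	// Illustrative behaviour of the URL helpers above (example values):
	//
	//     _fun.checkURL('my image.jpg');    // returns 1 (unescaped whitespace, opening halts)
	//     _fun.formatURL("img('1').jpg");   // returns "img%28%271%27%29.jpg"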
// Public functions
$.extend(Plugin.prototype, {
init: function () {
// Define elements
var fb = this,
$fb = $(this.element),
$fbThumb = $fb.find('img').first();
// Get basic measurements
_fun.measure.viewport();
// Only perform initialization when
// - It is not yet initialized
// + DOM checks are satisfied:
// +-- An anchor element is selected
// +-- Contains one and only one child
// +-- The only child is an image element OR a picture element
// +-- The element must not be hidden (itself or its parents)
if(
(!fb.instanceData || !fb.instanceData.initialized) &&
(
$fb.is('a') &&
$fb.children().length === 1 &&
(
$fb.children().is('img') || (
$fb.children().is('picture') &&
$fb.find('img').length === 1
)
) &&
$fb.css('display') !== 'none' &&
$fb.children().css('display') !== 'none' &&
$fb.parents().css('display') !== 'none'
)
) {
// Initialize and store original node
$fb.removeClass('fluidbox--destroyed');
fb.instanceData = {};
fb.instanceData.initialized = true;
fb.instanceData.originalNode = $fb.html();
// Append instance ID
fbInstance += 1;
fb.instanceData.id = fbInstance;
$fb.addClass('fluidbox__instance-'+fbInstance);
// Status: Fluidbox has been initialized
$fb.addClass('fluidbox--initialized');
// DOM replacement
_fun.dom.call(fb);
// Emit custom event
$fb.trigger('init.fluidbox');
// Wait for image to load, but only if image is not found in cache
var img = new Image();
if($fbThumb.width() > 0 && $fbThumb.height() > 0) {
// Thumbnail loaded from cache, let's prepare fluidbox
_fun.prepareFb.call(fb);
} else {
img.onload = function() {
// Thumbnail loaded, let's prepare fluidbox
_fun.prepareFb.call(fb);
};
img.onerror = function() {
// Trigger custom error event
$fb.trigger('thumbloadfail.fluidbox');
};
img.src = $fbThumb.attr('src');
}
}
},
open: function() {
// Open Fluidbox
var fb = this,
$fb = $(this.element),
$fbThumb = $fb.find('img').first(),
$fbGhost = $fb.find('.fluidbox__ghost'),
$fbWrap = $fb.find('.fluidbox__wrap');
// Update state
fb.instanceData.state = 1;
// Forcibly turn off transition end detection,
// otherwise users will get choppy transition if toggling between states rapidly
$fbGhost.off(customTransitionEnd);
// Close all other Fluidbox instances
$('.fluidbox--opened').fluidbox('close');
// Append overlay
var $fbOverlay = $('<div />', {
'class': 'fluidbox__overlay',
css: {
zIndex: -1
}
});
$fbWrap.append($fbOverlay);
// Add class to indicate larger image being loaded
$fb
.removeClass('fluidbox--closed')
.addClass('fluidbox--loading');
			// Check if the URL is properly formatted
if(_fun.checkURL($fbThumb.attr('src'))) {
fb.close();
return false;
}
// Set thumbnail image source as background image first, worry later
$fbGhost.css({
'background-image': 'url(' + _fun.formatURL($fbThumb.attr('src')) + ')',
opacity: 1
});
// Set dimensions for ghost
_fun.measure.fbElements.call(fb);
// Wait for ghost image to preload
var img;
if (fb.settings.immediateOpen) {
// Update classes
$fb
.addClass('fluidbox--opened fluidbox--loaded')
.find('.fluidbox__wrap')
.css({ zIndex: fb.settings.stackIndex + fb.settings.stackIndexDelta });
// Emit custom event
$fb.trigger('openstart.fluidbox');
// Compute
fb.compute();
// Hide thumbnail
$fbThumb.css({ opacity: 0 });
// Show overlay
$('.fluidbox__overlay').css({ opacity: 1 });
// Emit custom event when ghost image finishes transition
$fbGhost.one(customTransitionEnd, function() {
$fb.trigger('openend.fluidbox');
});
img = new Image();
img.onload = function() {
// Perform only if the Fluidbox instance is still open
if (fb.instanceData.state === 1) {
// Set new natural dimensions
fb.instanceData.thumb.natW = img.naturalWidth;
fb.instanceData.thumb.natH = img.naturalHeight;
// Remove loading status
$fb.removeClass('fluidbox--loading');
						// Check if the URL is properly formatted
if(_fun.checkURL(img.src)) {
fb.close();
return false;
}
// Set new image background
$fbGhost.css({ 'background-image': 'url(' + _fun.formatURL(img.src) + ')' });
// Compute
fb.compute();
}
};
img.onerror = function() {
// Trigger closing
fb.close();
// Emit custom event
$fb.trigger('imageloadfail.fluidbox');
$fb.trigger('delayedloadfail.fluidbox');
};
img.src = $fb.attr('href');
} else {
img = new Image();
img.onload = function() {
// Update classes
$fb
.removeClass('fluidbox--loading')
.addClass('fluidbox--opened fluidbox--loaded')
.find('.fluidbox__wrap')
.css({ zIndex: fb.settings.stackIndex + fb.settings.stackIndexDelta });
// Emit custom event
$fb.trigger('openstart.fluidbox');
// Check if the URL is properly formatted; close and bail out if it is not
if(_fun.checkURL(img.src)) {
fb.close();
return false;
}
// Set new image background
$fbGhost.css({ 'background-image': 'url(' + _fun.formatURL(img.src) + ')' });
// Set new natural dimensions
fb.instanceData.thumb.natW = img.naturalWidth;
fb.instanceData.thumb.natH = img.naturalHeight;
// Compute
fb.compute();
// Hide thumbnail
$fbThumb.css({ opacity: 0 });
// Show overlay
$('.fluidbox__overlay').css({ opacity: 1 });
// Emit custom event when ghost image finishes transition
$fbGhost.one(customTransitionEnd, function() {
$fb.trigger('openend.fluidbox');
});
};
img.onerror = function() {
// Trigger closing
fb.close();
// Emit custom event
$fb.trigger('imageloadfail.fluidbox');
};
img.src = $fb.attr('href');
}
},
compute: function() {
var fb = this,
$fb = $(this.element),
$fbThumb = $fb.find('img').first(),
$fbGhost = $fb.find('.fluidbox__ghost'),
$fbWrap = $fb.find('.fluidbox__wrap');
// Shorthand for dimensions
var imgNatW = fb.instanceData.thumb.natW,
imgNatH = fb.instanceData.thumb.natH,
imgW = fb.instanceData.thumb.w,
imgH = fb.instanceData.thumb.h;
// Calculate aspect ratios
var thumbRatio = imgNatW / imgNatH,
viewportRatio = globalData.viewport.w / globalData.viewport.h;
// Replace dimensions if maxWidth or maxHeight is declared
if (fb.settings.maxWidth > 0) {
imgNatW = fb.settings.maxWidth;
imgNatH = imgNatW / thumbRatio;
} else if (fb.settings.maxHeight > 0) {
imgNatH = fb.settings.maxHeight;
imgNatW = imgNatH * thumbRatio;
}
// Compare image ratio with viewport ratio
var computedHeight, computedWidth, imgScaleY, imgScaleX, imgMinScale;
if (viewportRatio > thumbRatio) {
computedHeight = (imgNatH < globalData.viewport.h) ? imgNatH : globalData.viewport.h*fb.settings.viewportFill;
imgScaleY = computedHeight / imgH;
imgScaleX = imgNatW * (imgH * imgScaleY / imgNatH) / imgW;
imgMinScale = imgScaleY;
} else {
computedWidth = (imgNatW < globalData.viewport.w) ? imgNatW : globalData.viewport.w*fb.settings.viewportFill;
imgScaleX = computedWidth / imgW;
imgScaleY = imgNatH * (imgW * imgScaleX / imgNatW) / imgH;
imgMinScale = imgScaleX;
}
// Display a console warning if both maxHeight and maxWidth are specified
if (fb.settings.maxWidth && fb.settings.maxHeight)
console.warn('Fluidbox: Both maxHeight and maxWidth are specified. You can only specify one. If both are specified, only the maxWidth property will be respected. This will not generate any error, but may cause unexpected sizing behavior.');
// Scale
var offsetY = $w.scrollTop() - $fbThumb.offset().top + 0.5*(imgH*(imgMinScale-1)) + 0.5*($w.height() - imgH*imgMinScale),
offsetX = 0.5*(imgW*(imgMinScale-1)) + 0.5*($w.width() - imgW*imgMinScale) - $fbThumb.offset().left,
scale = parseInt(imgScaleX*100)/100 + ',' + parseInt(imgScaleY*100)/100;
// Apply styles to ghost and loader (if present)
$fbGhost
.css({
'transform': 'translate(' + parseInt(offsetX*100)/100 + 'px,' + parseInt(offsetY*100)/100 + 'px) scale(' + scale + ')',
top: $fbThumb.offset().top - $fbWrap.offset().top,
left: $fbThumb.offset().left - $fbWrap.offset().left
});
$fb.find('.fluidbox__loader').css({
'transform': 'translate(' + parseInt(offsetX*100)/100 + 'px,' + parseInt(offsetY*100)/100 + 'px) scale(' + scale + ')'
});
// Emit custom event
$fb.trigger('computeend.fluidbox');
},
recompute: function() {
// Recompute is simply an alias for the compute method
this.compute();
},
close: function() {
// Close Fluidbox
var fb = this,
$fb = $(this.element),
$fbThumb = $fb.find('img').first(),
$fbGhost = $fb.find('.fluidbox__ghost'),
$fbWrap = $fb.find('.fluidbox__wrap'),
$fbOverlay = $fb.find('.fluidbox__overlay');
// Do nothing if Fluidbox is not open, for performance reasons
if (fb.instanceData.state === null || typeof fb.instanceData.state === typeof undefined || fb.instanceData.state === 0) return false;
// Update state
fb.instanceData.state = 0;
// Emit custom event
$fb.trigger('closestart.fluidbox');
// Change classes
$fb
.removeClass(function(i,c) {
return (c.match (/(^|\s)fluidbox--(opened|loaded|loading)+/g) || []).join(' ');
})
.addClass('fluidbox--closed');
$fbGhost
.css({
'transform': 'translate(0,0) scale(1,1)',
top: $fbThumb.offset().top - $fbWrap.offset().top + parseInt($fbThumb.css('borderTopWidth')) + parseInt($fbThumb.css('paddingTop')),
left: $fbThumb.offset().left - $fbWrap.offset().left + parseInt($fbThumb.css('borderLeftWidth')) + parseInt($fbThumb.css('paddingLeft'))
});
$fb.find('.fluidbox__loader')
.css({
'transform': 'none'
});
$fbGhost.one(customTransitionEnd, function() {
$fbGhost.css({ opacity: 0 });
$fbThumb.css({ opacity: 1 });
$fbOverlay.remove();
$fbWrap.css({ zIndex: fb.settings.stackIndex - fb.settings.stackIndexDelta });
$fb.trigger('closeend.fluidbox');
});
// Fadeout overlay
$fbOverlay.css({ opacity: 0 });
},
bindEvents: function() {
var fb = this,
$fb = $(this.element);
// Click handler
$fb.on('click.fluidbox', function(e) {
e.preventDefault();
// Check state
// If state does not exist, or if Fluidbox is closed, we open it
if(!fb.instanceData.state || fb.instanceData.state === 0) {
// Open Fluidbox
fb.open();
// If state exists, we close it
} else {
// Close Fluidbox
fb.close();
}
});
},
bindListeners: function() {
var fb = this,
$fb = $(this.element);
// Window resize
// Namespaced using unique instance IDs so that we can unbind resize event specific to a Fluidbox instance
var resizeFunction = function() {
// Re-measure viewport dimensions
_fun.measure.viewport();
_fun.measure.fbElements.call(fb);
// Re-compute, but only for the active element
if($fb.hasClass('fluidbox--opened')) fb.compute();
};
if ($.isFunction($.throttle)) {
$w.on('resize.fluidbox'+fb.instanceData.id, $.throttle(fb.settings.resizeThrottle, resizeFunction));
} else {
$w.on('resize.fluidbox'+fb.instanceData.id, resizeFunction);
}
// Reposition
$fb.on('reposition.fluidbox', function() {
fb.reposition();
});
// Recompute
$fb.on('recompute.fluidbox compute.fluidbox', function() {
fb.compute();
});
// Destroy
$fb.on('destroy.fluidbox', function() {
fb.destroy();
});
// Close
$fb.on('close.fluidbox', function() {
fb.close();
});
},
unbind: function() {
$(this.element).off('click.fluidbox reposition.fluidbox recompute.fluidbox compute.fluidbox destroy.fluidbox close.fluidbox');
$w.off('resize.fluidbox'+this.instanceData.id);
},
reposition: function() {
_fun.measure.fbElements.call(this);
},
destroy: function() {
// Cache original node
var originalNode = this.instanceData.originalNode;
// Unbind event handlers
this.unbind();
// Destroy plugin data entirely
$.data(this.element, 'plugin_' + pluginName, null);
// DOM reversal
$(this.element)
.removeClass(function(i,c) {
return (c.match(/(^|\s)fluidbox(?:--|__)\S+/g) || []).join(' ');
})
.empty()
.html(originalNode)
.addClass('fluidbox--destroyed')
.trigger('destroyed.fluidbox');
},
getMetadata: function() {
// Return instance data
return this.instanceData;
}
});
// A really lightweight plugin wrapper around the constructor,
// preventing against multiple instantiations
$.fn[pluginName] = function (options) {
var args = arguments;
// Check the options parameter
// If it is undefined or is an object (plugin configuration),
// we create a new instance (conditionally, see inside) of the plugin
if (options === undefined || typeof options === 'object') {
return this.each(function() {
// Only if the plugin_fluidbox data is not present,
// to prevent multiple instances being created
if (!$.data(this, "plugin_" + pluginName)) {
$.data(this, "plugin_" + pluginName, new Plugin(this, options));
}
});
// If it is defined, but it is a string, does not start with an underscore and does not call init(),
// we allow users to make calls to public methods
} else if (typeof options === 'string' && options[0] !== '_' && options !== 'init') {
var returnVal;
this.each(function() {
var instance = $.data(this, 'plugin_' + pluginName);
if (instance instanceof Plugin && typeof instance[options] === 'function') {
returnVal = instance[options].apply(instance, Array.prototype.slice.call(args, 1));
} else {
console.warn('Fluidbox: The method "' + options + '" used is not defined in Fluidbox. Please make sure you are calling the correct public method.');
}
});
return returnVal !== undefined ? returnVal : this;
}
// Return to allow chaining
return this;
};
})(jQuery, window, document);
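The plugin wrapper above guards against multiple instantiations by storing one instance per element via `$.data`. The same pattern can be sketched without jQuery; `getOrCreate` and `instances` below are illustrative names of my own, not part of Fluidbox:

```javascript
// Minimal sketch of the "create only once per element" pattern used in the wrapper above.
const instances = new Map();

function getOrCreate(element, options) {
  // Mirrors: if (!$.data(this, "plugin_" + pluginName)) { $.data(this, ..., new Plugin(this, options)); }
  if (!instances.has(element)) {
    instances.set(element, { element, options });
  }
  return instances.get(element);
}

const el = { id: 'thumb-1' }; // stand-in for a DOM node
const first = getOrCreate(el, { viewportFill: 0.95 });
const second = getOrCreate(el, { viewportFill: 0.5 }); // options ignored: instance already exists
console.log(first === second); // true
```

Because the second call returns the cached instance, later configuration is only reachable through public method calls, which is exactly why the wrapper routes string arguments to `instance[options].apply(...)`.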
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,138 |
Round Lake is a lake in Canada. It lies in Renfrew County in the province of Ontario, in the southeastern part of the country, km west of the capital Ottawa. Round Lake lies meters above sea level. Its area is square kilometers. It stretches 4.3 kilometers in the north-south direction and 2.5 kilometers in the east-west direction.
The surroundings of Round Lake are mainly covered by mixed forest. The area around Round Lake is sparsely populated, with inhabitants per square kilometer. The area belongs to the hemiboreal climate zone. The annual mean temperature in the area is °C. The warmest month is July, when the mean temperature is °C, and the coldest is January, at °C. The average annual precipitation is millimeters. The wettest month is October, with an average of mm of precipitation, and the driest is February, with mm of precipitation.
Sources
Lakes of Ontario
Lakes of Canada larger than 2 square kilometers | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,607 |
Q: Accessing specific variable in Python list
I am reading a Matlab mat file in Python, which contains three arrays: tom, dick and harry. I use a for loop which does operations on this list of array names. Following is the demo code:
import scipy.io as sio
mat_contents = sio.loadmat('names.mat') # with arrays tom, dick and harry
varlist = ['tom', 'dick', 'harry']
for w in varlist:
cl = mat_contents[w]
# some more operations in the loop
Now I need to debug and do not want the for loop to run over all three names in varlist. How can I run the for loop only for harry? I know varlist[2] gets me harry, but I could not manage to use it alone in the for loop.
A: In response to your comment: now controllable with a single variable:
import scipy.io as sio
mat_contents = sio.loadmat('names.mat') # with arrays tom, dick and harry
varlist = ['tom', 'dick', 'harry']
# set it to -1 to disable it and use all arrays
debug_index = -1
# or set it to an index to only use that array
debug_index = 1
for w in [varlist[debug_index]] if debug_index + 1 else varlist:  # -1 + 1 == 0 is falsy, so -1 selects all arrays
cl = mat_contents[w]
# some more operations in the loop
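The `debug_index + 1` truthiness trick works, but the same selection can be written more explicitly by building the iteration list up front. This is a sketch of my own; `select_names` is a hypothetical helper, not part of the original answer:

```python
varlist = ['tom', 'dick', 'harry']

def select_names(varlist, debug_index):
    """Return the full list when debug_index is -1, else a single-item list."""
    return varlist if debug_index < 0 else [varlist[debug_index]]

# Debugging only 'harry':
assert select_names(varlist, 2) == ['harry']
# Disabled (-1): iterate over everything.
assert select_names(varlist, -1) == ['tom', 'dick', 'harry']
```

The for loop then becomes `for w in select_names(varlist, debug_index):` with no change to the loop body.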
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,360 |
The chaffweed (Lysimachia minima) is a species of annual plant in the family Primulaceae and the genus Lysimachia.
Description
Vegetative apparatus
It is a small, glabrous annual plant, 2-8 cm tall, with a very slender stem; the leaves are almost all alternate, small, subsessile, ovate-acute and entire.
Reproductive apparatus
The flowers are white or slightly pinkish, minute (1-2 mm in diameter), subsessile and solitary in the leaf axils, opening only in the middle of the day; the calyx has 4 lanceolate-linear lobes; the corolla, bell-shaped and shorter than the calyx, is marcescent, with a short subglobose tube and 4 erect, entire, acute lobes; there are four protruding stamens. The capsule is globose, shorter than the calyx, opening transversely by a lid, with numerous seeds. Flowering takes place from May to September.
Habitat and ecology
It is a therophyte. It colonizes the exposed shores of siliceous ponds, damp forest paths, crops on moist clayey or clayey-sandy soils, sands and grain fields.
Cultivation
The chaffweed can be grown for decorative purposes for the beauty of its white flowers. It requires a sandy, loamy and stony soil with an acidic pH, a sunny exposure and a moist soil, which therefore calls for frequent watering.
Distribution
This plant is present in almost all of Europe outside the Arctic zone and as far as Siberia, as well as in North Africa and in America. It is scattered across almost all of France, but its distribution is uneven and it is absent from a large part of the Mediterranean region.
Threats and conservation
This plant is in sharp decline throughout France. It suffers from the regression of its habitats (closing of pioneer habitats, degradation of wetlands, lack of management). The species is classified as "Critically Endangered" (CR) in Picardy and "Endangered" (EN) in Auvergne, Limousin, Rhône-Alpes, Lorraine and Alsace.
Notes and references
See also
Bibliography
ABBAYES (des) H., CLAUSTRES G., CORILLION R., DUPONT P., 1971. Flore et végétation du Massif armoricain - Tome 1 : flore vasculaire. Presses Universitaires de Bretagne, Saint-Brieuc. LXXV + 1226 p.
BOURNERIAS M., ARNAL G., BOCK C., 2001. Guide des groupements végétaux de la région parisienne. Nouvelle édition illustrée. Editions Belin, Paris. 640 p.
NETIEN G., 1993. Flore Lyonnaise. Société Linéenne de Lyon. 623 p.
SOUCHE B., 1901. Flore du Haut Poitou - Matériaux pour une géographie botanique régionale. Société Botanique des Deux-Sèvres, Niort.
External links
Plant species (vernacular name)
Myrsinaceae (Cronquist)
Primulaceae | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,090 |
Q: How can I apply stroke to a textbox in WPF? I need to apply stroke (outline around text) to a textbox.
I have tried some solutions:
*
*Edit the template of a textbox, using OutlinedTextblock (a custom control to draw the outlined text), but I can't select characters;
*apply shader effect, but it didn't work well. The stroke looked ugly.
Do you have any good solution?
A: I did a bit of experimentation and a proof of concept.
The result isn't too bad but needs a bit of work.
When I select text the blue area isn't in exactly the right place.
This works using the approach I mentioned in comments.
The text in the textbox is transparent and the background of the control is used to show the formatted text.
My window just has this in it:
<Grid>
<TextBox Width="200" Height="40"
VerticalAlignment="Top"
TextChanged="TextBox_TextChanged"
Foreground="Transparent"
FontSize="32"
>
<TextBox.Background>
<DrawingBrush Stretch="None"
AlignmentX="Left">
<DrawingBrush.Drawing>
<DrawingGroup>
<GeometryDrawing Brush="LightBlue"
x:Name="TextGeometryDrawing"
Geometry="{x:Null}"
>
<GeometryDrawing.Pen>
<Pen Thickness="1" Brush="Red"/>
</GeometryDrawing.Pen>
</GeometryDrawing>
</DrawingGroup>
</DrawingBrush.Drawing>
</DrawingBrush>
</TextBox.Background>
</TextBox>
</Grid>
My textchanged handler creates the geometry:
private void TextBox_TextChanged(object sender, TextChangedEventArgs e)
{
TextBox tb = (TextBox)sender;
FormattedText formattedText = new FormattedText(tb.Text, CultureInfo.GetCultureInfo("en-us"),
FlowDirection.LeftToRight, new Typeface("Segoe UI"), 32, Brushes.Black,
VisualTreeHelper.GetDpi(this).PixelsPerDip);
Geometry geom = formattedText.BuildGeometry(new System.Windows.Point(0, 0));
TextGeometryDrawing.Geometry = geom;
}
It looks like this:
An alternative is the approach I use in our map and scenario editors. I create a geometry per letter a-z and then use a horizontal itemscontrol with a path per letter using those geometries.
You can then control each letter precisely. If you don't like some aspect of what you get from truetype conversion you can edit it in inkscape.
The insertion point and selection are likely to get even more offset that way, though.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,662 |
{"url":"http:\/\/www.canariastattooschool.es\/94i89ug\/aeebzd5.php?cb5f0b=rsa-2048-public-key-example","text":"### rsa 2048 public key example

For example, the following public key,

30 82 01 0a 02 82 01 01 00 8e a3 d1 c7 9c 86 05 52 3d 70 9a 5b 24 8a 6e ab 8f 5d 8d 9a 44 5f 25 78 c7 ba bd 3a a6 e1 36 b8 55 88 18 d7 ea e8 14 2c 68 8f e7 fe 94 4c f3 fd ad 0b e6 d2 eb 9e d2 66 b4 3a 3b d1 bb 5d d5 2a 53 7e 0f 1d ba ec 03 29 9d 47 50 3b 99 fb 4a 3a 80 a2 23 3e d7 11 e3 de a8 8d ab 7c 90 d0 92 af 36 b8 8b 28 fd 80 ec bc 37 6d 23 44 86 4e 28 19 1d 18 37 af 44 a9 40 b3 f6 e7 6c ad 56 5d 6f ff 3b e3 a5 cc 23 5c 54 2a 47 28 5b 29 f3 45 8e 69 98 ad 57 45 2e 60 bd ac 55 fc 35 e8 47 9f 98 0d f9 ea 9d 55 35 c9 db af 24 d2 bc 18 12 02 53 d6 aa ef 9c c9 11 c9 8e d7 7c 4f 2f 22 0f 66 b1 bf 06 a5 fa 87 22 9f ff f6 20 75 e7 51 87 26 30 c2 e1 a5 30 2c a1 fc 47 a5 f7 a5 38 d3 cc 8d 0e ee 5a 54 ee a2 f9 ff d0 0a 0f 18 7f 94 d2 04 5e 1f 25 ca be 4e 30 c3 40 00 ed a4 ce 58 ab 23 39 2d 02 03 01 00 01

is taken from google's certificate. This is a DER encoding of the public key, and consists of a sequence of two integers (the first being the modulus, and the second being the exponent).

The value 30 is used to signify 'sequence'; this is a container that carries a list of DER-encoded objects. The next byte, 82, signifies that the length of the sequence is itself encoded in 2 bytes, and 01 0a gives a total length of 0x010a = 266 bytes.

Now we get to the first object in the sequence; the value 02 is used to signify 'integer'. Again, the 82 signifies that the length itself is 2 bytes long, and 01 01 gives a total length of 0x0101 = 257 bytes. The bytes that follow are the actual integer (the modulus), in bigendian format. Note that the first byte is a 00; that is required because of the rules of DER encoding: the msbit of the encoded value is a 1, so to avoid confusing the decoder into reading a negative integer, a 00 is prepended on top to make the msbit 0.

The last 5 bytes are 02 03 01 00 01. The 02 signifies that the second element in the sequence encodes an integer, the 03 signifies that this integer is encoded in 3 bytes, and 01 00 01 signifies that the encoded integer is 0x10001 == 65537, the public exponent.","date":"2021-04-11 06:53:43","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2710832357406616, \"perplexity\": 3400.7483637113783}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, 
\"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-17\/segments\/1618038061562.11\/warc\/CC-MAIN-20210411055903-20210411085903-00010.warc.gz\"}"} | null | null |
<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Index — Sonata ~ MediaBundle documentation</title>
<link href='https://fonts.googleapis.com/css?family=Lato:400,700|Roboto+Slab:400,700|Inconsolata:400,700' rel='stylesheet' type='text/css'>
<link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
<link rel="top" title="Sonata ~ MediaBundle documentation" href="index.html"/>
<script src="https://cdnjs.cloudflare.com/ajax/libs/modernizr/2.6.2/modernizr.min.js"></script>
</head>
<body class="wy-body-for-nav" role="document">
<div class="wy-grid-for-nav">
<nav data-toggle="wy-nav-shift" class="wy-nav-side">
<div class="wy-side-nav-search">
<a href="index.html" class="fa fa-home"> Sonata ~ MediaBundle</a>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="search.html" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
<div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
<ul>
<li class="toctree-l1"><a class="reference internal" href="reference/installation.html">1. Installation</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/installation.html#base-bundles">1.1. Base bundles</a></li>
<li class="toctree-l2"><a class="reference internal" href="reference/installation.html#id1">1.2. Installation</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="reference/helpers.html">2. Helpers</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/helpers.html#php-usage">2.1. PHP Usage</a></li>
<li class="toctree-l2"><a class="reference internal" href="reference/helpers.html#twig-usage">2.2. Twig usage</a></li>
<li class="toctree-l2"><a class="reference internal" href="reference/helpers.html#thumbnails-for-files">2.3. Thumbnails for files</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="reference/creating_a_provider_class.html">3. Creating a Media Provider: A Vimeo Provider</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/creating_a_provider_class.html#media-entity">3.1. Media Entity</a></li>
<li class="toctree-l2"><a class="reference internal" href="reference/creating_a_provider_class.html#case-study">3.2. Case Study</a></li>
<li class="toctree-l2"><a class="reference internal" href="reference/creating_a_provider_class.html#initialize-the-class">3.3. Initialize the class</a></li>
<li class="toctree-l2"><a class="reference internal" href="reference/creating_a_provider_class.html#register-the-class-with-the-service-container">3.4. Register the Class with the Service Container</a></li>
<li class="toctree-l2"><a class="reference internal" href="reference/creating_a_provider_class.html#view-helper">3.5. View Helper</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="reference/media_context.html">4. Media Context</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/media_context.html#adminbundle-integration">4.1. <tt class="docutils literal"><span class="pre">AdminBundle</span></tt> Integration</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="reference/usage.html">5. Usages</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/usage.html#saving-a-media-file">5.1. Saving a media file</a></li>
<li class="toctree-l2"><a class="reference internal" href="reference/usage.html#retrieving-metadata-information">5.2. Retrieving metadata information</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="reference/form.html">6. Form Type</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/form.html#media-type">6.1. Media Type</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="reference/security.html">7. Security</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/security.html#configuration-example">7.1. Configuration Example</a></li>
<li class="toctree-l2"><a class="reference internal" href="reference/security.html#creating-your-own-security-download-strategy">7.2. Creating your own Security Download Strategy</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="reference/command_line.html">8. Command Line Tools</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/command_line.html#media-commands">8.1. Media commands</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="reference/advanced_configuration.html">9. Advanced Configuration</a></li>
<li class="toctree-l1"><a class="reference internal" href="reference/amazon_s3.html">10. Amazon S3</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/amazon_s3.html#configuration">10.1. Configuration</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="reference/extra.html">11. Extra</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/extra.html#pixlr-integration">11.1. Pixlr Integration</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="reference/extra.html#sonata-notification-bundle-integration">12. Sonata Notification Bundle Integration</a></li>
<li class="toctree-l1"><a class="reference internal" href="reference/extra.html#liip-imagine-bundle-integration">13. Liip Imagine Bundle Integration</a></li>
<li class="toctree-l1"><a class="reference internal" href="reference/extra.html#ckeditor-integration">14. CKEditor Integration</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/extra.html#medias-in-ckeditor-with-cooptilleulsckeditorsonatamediabundle">14.1. Medias in CKEditor with CoopTilleulsCKEditorSonataMediaBundle</a></li>
<li class="toctree-l2"><a class="reference internal" href="reference/extra.html#medias-in-ckeditor-with-sonataformatterbundle">14.2. Medias in CKEditor with SonataFormatterBundle</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="reference/api.html">15. API</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/api.html#setup">15.1. Setup</a></li>
<li class="toctree-l2"><a class="reference internal" href="reference/api.html#serialization">15.2. Serialization</a></li>
<li class="toctree-l2"><a class="reference internal" href="reference/api.html#sending-a-media-file">15.3. Sending a media file</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="reference/troubleshooting.html">16. Troubleshooting</a><ul>
<li class="toctree-l2"><a class="reference internal" href="reference/troubleshooting.html#media-formats">16.1. Media Formats</a></li>
</ul>
</li>
</ul>
</div>
</nav>
<section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
<nav class="wy-nav-top" role="navigation" aria-label="top navigation">
<i data-toggle="wy-nav-top" class="fa fa-bars"></i>
<a href="index.html">Sonata ~ MediaBundle</a>
</nav>
<div class="wy-nav-content">
<div class="rst-content">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="wy-breadcrumbs">
<li><a href="index.html">Docs</a> »</li>
<li></li>
<li class="wy-breadcrumbs-aside">
</li>
</ul>
<hr/>
</div>
<div role="main">
<h1 id="index">Index</h1>
<div class="genindex-jumpbox">
</div>
</div>
<footer>
<hr/>
<div role="contentinfo">
<p>
© Copyright 2010-2014, Thomas Rabaix.
</p>
</div>
<a href="https://github.com/snide/sphinx_rtd_theme">Sphinx theme</a> provided by <a href="https://readthedocs.org">Read the Docs</a>
</footer>
</div>
</div>
</section>
</div>
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT:'./',
VERSION:'',
COLLAPSE_INDEX:false,
FILE_SUFFIX:'.html',
HAS_SOURCE: true
};
</script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<script type="text/javascript" src="_static/js/theme.js"></script>
<script type="text/javascript">
jQuery(function () {
SphinxRtdTheme.StickyNav.enable();
});
</script>
</body>
</html> | {
"redpajama_set_name": "RedPajamaGithub"
} | 2,760 |
Posted inSports
Advanced Metrics Suggest That Wizards Made the Right Move in Trading Away Kelly Oubre Jr.
There is a disconnect between casual fans who love the fan-favorite Oubre and the stat enthusiasts who dominate basketball conversations with analytics and data.
by Troy Haliburton December 17th, 2018 August 28th, 2020
Kelly Oubre Jr. in 2016 Credit: KEITH ALLISON/FLICKR
When the Wizards headed to John F. Kennedy International Airport from Barclays Center in Brooklyn on Dec. 14, the bus ride felt unusually quiet for an NBA road trip, according to Wizards center Thomas Bryant. Players, coaches, and team staff have the option of riding one of two chartered buses on the road, which take players from the arena to the hotel or to the team plane. Kelly Oubre Jr. and Austin Rivers rode on the first bus out of Brooklyn. Neither knew exactly where their ultimate destinations for the evening would be.
At one point Oubre believed that he would be heading to Memphis to join the resurging Grizzlies and Rivers would be heading to the NBA badlands of Phoenix. The two players had come to find out that they were being traded directly after the Wizards suffered a fourth consecutive loss and were each still processing the news of being traded when everything had come to a screeching halt.
By the time Oubre and Rivers had said their goodbyes to teammates and staffers, they were hit with the sobering news that the trade had fallen through, and both technically had to get back on the plane with their Wizards teammates heading to D.C.
The two would not remain Wizards for very long because by the time the team had finished practice the following day, both Oubre and Rivers were heading to Phoenix as a part of a smaller trade between just the Wizards and Suns. Former Wizards player Trevor Ariza is set to return to D.C. as part of the deal.
In trading for Ariza, the Wizards essentially made an early decision on Oubre's pending restricted free agency status for the summer of 2019. Ariza had a brief, but impactful stint with the Wizards from 2012-14 and has been a hot commodity in the NBA trade market because of his ability to both knock down three pointers and defend multiple positions on the defensive end at a high level.
Oubre came to D.C. via a 2015 draft night trade with the Atlanta Hawks, in which Washington traded up in the draft in order to select the small forward from the University of Kansas. The 23-year-old quickly became a favorite of the Wizards fan base because of his ability to connect with a younger generation of basketball fans on social media, and his propensity for flashiness, both on and off the court.
Kelly Oubre Jr. welcomes the crowd to open practice #dcfamily pic.twitter.com/g7DVirDYRs
— Becca Winkert (@BeccaMVP) September 28, 2018
Known for his high-flying dunks and stylish clothes, Oubre, a New Orleans native, embraced the city of Washington as a second home and the city mostly embraced him back.
The adoration for Oubre puts a spotlight on the disconnect between casual fans who love highlight reel plays and the stat enthusiasts who dominate basketball conversations with analytics and data. To the untrained eye, Oubre appeared to be a promising prospect on the cusp of NBA stardom. Advanced metrics suggest a player who was consistently inconsistent on the court.
According to ESPN's real plus-minus statistic, a metric used to estimate a player's on-court impact measured in terms of their net point differential per 100 possessions, Oubre ranks as the 72nd small forward of 87 possible candidates, making him literally one of the most inefficient players at his position.
In 755 on-court minutes this season, Oubre produced a negative 7.8 net rating with the Wizards, meaning the team was outscored 7.8 net points per 100 possessions when Oubre was on the floor. Simply put, his presence in the game had been hurting the team.
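The net-rating statistic quoted here is just a per-100-possession normalization of point differential. A minimal sketch, with hypothetical on-court totals chosen to reproduce the -7.8 figure (the article does not give the underlying inputs):

```python
def net_rating(points_for, points_against, possessions):
    """Team point differential per 100 possessions while a player is on court."""
    return 100.0 * (points_for - points_against) / possessions

# Hypothetical totals: outscored by 117 points over 1500 possessions.
print(net_rating(1500, 1617, 1500))  # -7.8
```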
And while Oubre's points per game average rose in each one of his seasons in the NBA, from a rookie who averaged only 3.7 points a game to his current average of 12.9 points per contest, the efficiency with which he was able to get those points was fleeting, and he never really carved out a consistent role within the team's rotation.
As a wing player in the current power structure of the NBA, there are two main skills that must translate to the next level: three-point shooting and defense. The moniker "3-and-D" has been established to honor those players who excel at those two categories, and Oubre was never quite able to reach his full potential in either.
Last season Oubre shot 34.1 percent beyond the three-point arc and this year had only been able to connect at 31.1 percent. The drop-off in percentage points may seem slight, but those statistics take him from mediocre to well-below average in terms of shooting from deep.
On the defensive side of the ball, he struggled to be disciplined within the construct of the Wizards team defense and often found himself susceptible to being beaten by his man on backdoor cuts to the basket. A deeper dive into Oubre's splits shows a player who was not only inconsistent from year to year, but also from game to game and even home versus away.
Most NBA role players tend to play better on their home court but Oubre shot far better on the road this season than he did in Capital One Arena, boasting shooting percentages of 46 percent from the field and 36 percent from three-point range on the road, while only shooting 38 percent from the field and 23 percent from three at home. Still, Oubre gave effort and his enthusiasm is what made him endearing to Wizards fans, and teammates like Bradley Beal.
"It's great to be able to see him develop and he's like my younger brother. So in that aspect, I'm a little sad about it," Beal told reporters after the team's win over LeBron James and the Lakers. "I'm hurt by it, but it's business at the end of the day."
Rivers, who was acquired from the Clippers in June, had a brief stint with the Wizards. Washington expected to bring him in as the team's primary backup at both guard positions to give Beal and John Wall necessary breaks during games.
The on-court production never materialized for Rivers, who saw his minutes drastically decrease over the course of the season and his spot in the team's rotation ultimately go to Tomáš Satoranský, whose play in recent weeks has made his Wizards teammates and coaches take notice. The Rivers experiment had failed, as he was unable to adjust both on the court and with his new teammates in the locker room, making his $12 million expiring contract expendable.
The nature of the NBA is a revolving door of players coming and going, and while the Wizards will lose Oubre and Rivers, they are gaining the 33-year-old Ariza, who has established himself as one of the very best "3-and-D" players in the NBA. Ariza had his best year shooting in Washington playing with Wall and shot a career high 40 percent from 3-point range in the 2013-14 season, Washington's first trip back to the playoffs with this current core group. Ariza is also an asset that can be moved if the Wizards are not able to turn their season around before the February trade deadline.
The Wizards' season has not gone the way anyone in that locker room expected so far, but Beal is focused on what lies ahead.
"We just have to continue forward," he said. "I get my OG back, my vet, in Trevor Ariza, so I embrace that and that's a plus. It's kind of a two-fold situation but just got to keep moving forward."
Photo by Keith Allison on Flickr, used under the Creative Commons BY-SA 2.0 license. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 3,942 |
Farquharia is a monotypic genus of flowering plants in the family Apocynaceae. It contains a single species: Farquharia elliptica Stapf. It is native to central and western Africa, where it is found in Ghana, Nigeria, Cameroon, Gabon and Zaire.
Description
It is a robust woody climber with dense terminal corymbose inflorescences of fleshy white flowers.
Taxonomy
Farquharia elliptica was described by Otto Stapf and published in Bulletin of Miscellaneous Information Kew 1912: 278–279. 1912.
Synonyms
Alafia jasminiflora A.Chev. ex Hutch. & Dalziel (1931), nom. inval.
Alafia mirabilis A.Chev. ex Hutch. & Dalziel (1931), nom. inval.
Holalafia jasminiflora Hutch. & Dalziel (1931).
Aladenia jasminiflora (Hutch. & Dalziel) Pichon (1949).
References
External links
Images on Google
Malouetieae
"redpajama_set_name": "RedPajamaWikipedia"
} | 765 |
import React, { FC, useEffect, useState } from "react"
import { sortBy } from "lodash"
import { Checkbox, CheckboxProps, Flex, useThemeConfig } from "@artsy/palette"
import { useArtworkFilterContext } from "../ArtworkFilterContext"
import {
FollowedArtistList,
fetchFollowedArtists,
} from "../Utils/fetchFollowedArtists"
import { FilterExpandable } from "./FilterExpandable"
import { ShowMore } from "./ShowMore"
export interface ArtistsFilterProps {
expanded?: boolean
relayEnvironment?: any
fairID?: string
user?: User // the `User` type is assumed to be provided by the surrounding app's type declarations
}
const ArtistItem: React.FC<
{
slug: string
name: string
followedArtistSlugs: string[]
isFollowedArtistCheckboxSelected: boolean
} & CheckboxProps
> = ({
slug,
name,
followedArtistSlugs,
isFollowedArtistCheckboxSelected,
...checkboxProps
}) => {
const { currentlySelectedFilters, setFilter } = useArtworkFilterContext()
const toggleArtistSelection = (selected, slug) => {
// @ts-expect-error STRICT_NULL_CHECK
let artistIDs = currentlySelectedFilters().artistIDs.slice()
if (selected) {
artistIDs.push(slug)
} else {
// When an artist is de-selected, if it is a followed artist _and_ that filter
// is also checked, we want to de-select it as well, and move remaining followed
// artists to the explicit `artistIDs` list.
artistIDs = artistIDs.filter(item => item !== slug)
if (followedArtistSlugs.includes(slug)) {
setFilter("includeArtworksByFollowedArtists", false)
artistIDs = artistIDs.concat(followedArtistSlugs)
artistIDs = artistIDs.filter(item => item !== slug)
}
}
setFilter("artistIDs", artistIDs)
}
const isFollowedArtist = followedArtistSlugs.includes(slug)
return (
<Checkbox
{...checkboxProps}
selected={
// @ts-expect-error STRICT_NULL_CHECK
currentlySelectedFilters().artistIDs.includes(slug) ||
(isFollowedArtistCheckboxSelected && isFollowedArtist)
}
onSelect={selected => {
return toggleArtistSelection(selected, slug)
}}
>
{name}
</Checkbox>
)
}
export const ArtistsFilter: FC<ArtistsFilterProps> = ({
expanded,
fairID,
relayEnvironment,
user,
}) => {
const { aggregations, ...filterContext } = useArtworkFilterContext()
// @ts-expect-error STRICT_NULL_CHECK
const artists = aggregations.find(agg => agg.slice === "ARTIST")
const [followedArtists, setFollowedArtists] = useState<FollowedArtistList>([])
const followedArtistSlugs = followedArtists.map(({ slug }) => slug)
const tokens = useThemeConfig({
v2: { my: 0.5 },
v3: { my: 1 },
})
useEffect(() => {
if (relayEnvironment && user) {
// @ts-expect-error STRICT_NULL_CHECK
fetchFollowedArtists({ relayEnvironment, fairID }).then(data => {
setFollowedArtists(data)
})
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [])
if (!(artists && artists.counts)) {
return null
}
const artistsSorted = sortBy(artists.counts, ["name"])
const isFollowedArtistCheckboxSelected =
!!user &&
// @ts-expect-error STRICT_NULL_CHECK
filterContext.currentlySelectedFilters()["includeArtworksByFollowedArtists"]
const followedArtistArtworkCount = filterContext?.counts?.followedArtists ?? 0
// @ts-expect-error STRICT_NULL_CHECK
const selection = filterContext.currentlySelectedFilters().artistIDs
const hasSelection =
(selection && selection.length > 0) || isFollowedArtistCheckboxSelected
return (
<FilterExpandable label="Artists" expanded={hasSelection || expanded}>
<Flex flexDirection="column">
<Checkbox
disabled={!followedArtistArtworkCount}
selected={isFollowedArtistCheckboxSelected}
onSelect={value =>
filterContext.setFilter("includeArtworksByFollowedArtists", value)
}
my={tokens.my}
>
Artists I follow ({followedArtistArtworkCount})
</Checkbox>
<ShowMore>
{artistsSorted.map(({ value: slug, name }, index) => {
return (
<ArtistItem
key={index}
slug={slug}
name={name}
followedArtistSlugs={followedArtistSlugs}
// @ts-expect-error STRICT_NULL_CHECK
isFollowedArtistCheckboxSelected={
isFollowedArtistCheckboxSelected
}
my={tokens.my}
/>
)
})}
</ShowMore>
</Flex>
</FilterExpandable>
)
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 3,172 |
Cosme de la Torriente y Peraza (1872–1956), a Cuban soldier, politician, jurist and statesman.
Pablo de la Torriente Brau (1901–1936), a Cuban writer and journalist.
Cuban surname
"redpajama_set_name": "RedPajamaWikipedia"
} | 47 |
\section{Introduction}\label{sec:intro}
Modern observational cosmology has reached a stage where constraints from most current and future datasets are limited by astrophysical systematic uncertainties, i.e. our lack of detailed understanding of the small-scale physics behind the luminous components of the Universe, galaxies and gas \citep[e.g.][]{2011MNRAS.415.3649V,2011MNRAS.417.2020S,2014JCAP...04..028F,2015MNRAS.454.2451E,2015MNRAS.454.1958M,2015JCAP...12..049S,2019MNRAS.488.1652H,2019JCAP...03..020S,2019OJAp....2E...4C}. The limiting factor for cosmological constraints from cluster number counts is the uncertainty in the mass-observable relation \citep{2009ApJ...692.1060V,2010ApJ...722.1180V,2011ApJ...732...44S,2013JCAP...07..008H,2014MNRAS.440.2077M,2014A&A...571A..20P,2015MNRAS.446.2205M,2016A&A...594A..24P,2016ApJ...832...95D,2019ApJ...878...55B}. Cosmic shear measurements are strongly affected by sub-grid baryonic physics, and by uncertainties in the processes by which galaxies acquire correlated intrinsic alignments \citep[e.g.][]{2001MNRAS.320L...7C,2002MNRAS.332..788M,2004PhRvD..70f3526H,2017MNRAS.465.2033J,2018PhRvD..98d3528T,2018arXiv181206076H,2018ARA&A..56..393M,2019MNRAS.tmp.2187S}. Understanding the impact of baryons on the matter power spectrum requires better knowledge of the distribution of gas in haloes. Finally, even though the clustering pattern of galaxies is one of the cosmological observables with the highest signal-to-noise ratio, the cosmological constraints that can be extracted from it are severely limited by our incomplete understanding of the galaxy-dark matter connection \citep[see e.g.][and references therein]{2018ARA&A..56..435W}. Even the analysis of the temperature anisotropies in the Cosmic Microwave Background (CMB), arguably the cleanest cosmological observable, are currently limited by the impact of astrophysical foregrounds on small scales \citep{2014ApJ...782...74H,2017JCAP...06..031L,2019arXiv190712875P}.
The CMB secondary anisotropies, in particular the thermal and kinetic Sunyaev-Zeldovich effects \citep[tSZ and kSZ respectively,][]{1972CoASP...4..173S}, as well as the gravitational lensing of CMB photons, have gained popularity as a means to address these issues \citep{2017JCAP...11..040B,2019BAAS...51c.297B}. These effects are relatively clean probes of some of the physical quantities that need to be understood in order to mitigate the impact of astrophysical uncertainties: the matter and gas densities, the gas pressure, and the velocity field. Since these observables are also sensitive to cosmology, their combination with large-scale structure data can be extremely powerful at disentangling astrophysical and cosmological parameters. This has been explored by a large number of groups: CMB lensing data have been used to constrain the bias of different tracers of the matter distribution \citep[e.g.][]{2019MNRAS.485.1720H,2018JCAP...04..053A,2019PhRvD.100b3541A,2018MNRAS.481.1133P}, as well as to calibrate the measurement of galaxy shapes in cosmic shear analyses \citep{2019PhRvD.100b3541A}. The tSZ effect has been used in cross-correlation with galaxy clustering and weak lensing data to determine the physical properties of the diffuse gas, as well as to potentially improve constraints on the amplitude of matter fluctuations \citep{2014PhRvD..89b3508V,2014JCAP...02..030H,2015JCAP...09..046M,2017MNRAS.471.1565H,2017ApJ...845...71A,2018PhRvD..97f3514A,2018MNRAS.480.3928M,2019A&A...624A..48D,2019MNRAS.483..223T,2019arXiv190306654T,2019arXiv190413347P,2019arXiv190707870M}. The kSZ has been used in cross-correlation with galaxy clustering to constrain the growth of structure and the gas density profile around haloes \citep{2016PhRvD..93h2002S,2016A&A...586A.140P,2016PhRvL.117e1301H,2016MNRAS.461.3172S,2017JCAP...03..008D}.
In this work we focus on the cross-correlation between galaxy clustering data and maps of the tSZ Compton-$y$ parameter, making use of existing data from the {\it Planck\/}\ collaboration \citep{2016A&A...594A..22P} and a set of photometric galaxy surveys \citep{2014ApJS..210....9B,2016ApJS..225....5B}. The availability of redshift information allows us to place constraints on the cosmic evolution of the thermal gas pressure and the thermal energy in haloes, as well as to examine any redshift dependence of the mass bias for tSZ cluster studies. This is a relevant topic given the current mild tensions between SZ cluster number counts and CMB primary anisotropies \citep{2016A&A...594A..24P,2016ApJ...832...95D,2019ApJ...878...55B,2019MNRAS.489..401Z}, which could be caused by the assumptions made to model the $y$-mass relation. Conversely, under the assumption that the relation between mass and gas pressure is well understood, the tSZ effect can be thought of as a mass tracer, which can be used to break degeneracies with the galaxy bias parameter -- we however leave this analysis for future work.
This paper is structured as follows: Section \ref{sec:theory} presents the theoretical background used here to describe our two main observables, the projected overdensity of galaxies and the Compton-$y$ parameter, as well as their cross-correlation. Section \ref{sec:data} presents the datasets used for our analysis. The methods used to analyse these data are described in Section \ref{sec:methods}, and Section \ref{sec:results} presents the results. We summarise our conclusions in Section \ref{sec:conclusion}.
\section{Theory}\label{sec:theory}
Our work focuses on the cross-correlation of the projected galaxy overdensity in consecutive redshift bins, $\delta_g$, and maps of the tSZ Compton-$y$ parameter.
Here, $\delta_g$ is simply the overdensity in the number of galaxies integrated over a redshift bin:
\begin{equation}
\delta_g(\hat{\boldsymbol{\theta}})=\int dz\,\phi_g(z)\,\Delta_g(\chi(z)\,\hat{\boldsymbol{\theta}}),
\end{equation}
where $\hat{\boldsymbol{\theta}}$ is a unit vector on the sphere, $\chi(z)$ is the comoving radial distance to redshift $z$ and $\phi_g(z)$ is the normalised galaxy redshift distribution in the bin. $\Delta_g({\bf x})=n_g({\bf x})/\bar{n}_g-1$ is the 3D galaxy overdensity, where $n_g$ is the galaxy number density.
The Compton-$y$ parameter in turn is given by \citep{1972CoASP...4..173S}:
\begin{equation}\label{eq:compty}
y(\hat{\boldsymbol{\theta}})=\frac{\sigma_{\scriptscriptstyle \rm T}}{m_ec^2}\int \frac{d\chi}{(1+z)} P_e(\chi\hat{\boldsymbol{\theta}}),
\end{equation}
where $P_e=n_e\,T_e$ is the electron pressure ($n_e$ and $T_e$ are the electron density and temperature respectively), $\sigma_{\scriptscriptstyle\rm T}$ is the Thomson scattering cross-section, and $m_e$ is the electron mass. For a fully ionised gas, the electron pressure is directly related to the total thermal gas pressure through $P_{\rm th}=P_e\,(8-5Y)/(4-2Y)$, where $Y$ is the helium mass fraction (with $Y\simeq0.24$).
\subsection{Projected fields and angular power spectra}\label{ssec:theory.cls}
Both $\delta_g$ and $y$ can be described as a projected quantity $u(\hat{\boldsymbol{\theta}})$ related to a three-dimensional field $U({\bf r})$ through some radial kernel $W_u(\chi)$:
\begin{equation}
u(\hat{\boldsymbol{\theta}})=\int d\chi\,W_u(\chi)\,U(\chi\hat{\boldsymbol{\theta}}).
\end{equation}
Any projected quantity can be decomposed into its spherical harmonic coefficients $u_{\ell m}$, the covariance of which is the so-called angular power spectrum $\langle u_{\ell m}v^*_{\ell' m'}\rangle\equiv C^{uv}_\ell\delta_{\ell\ell'}\delta_{mm'}$.
The angular power spectrum can be related to the 3D power spectrum of the associated 3D fields $P_{UV}$ via\footnote{Note that Eq.\!~\ref{eq:cllimber} is only valid in the {\sl Limber approximation} \citep{1953ApJ...117..134L,1992ApJ...388..272K}, which is accurate for broad radial kernels. This approximation is adequate for the redshift distributions and scales used here.}:
\begin{equation}\label{eq:cllimber}
C_\ell^{uv} = \int d\chi \frac{W_u(\chi)W_v(\chi)}{\chi^2}\,P_{UV}\left(k=\frac{\ell+1/2}{\chi},z(\chi) \right).
\end{equation}
Here, the 3D power spectrum is analogously defined as the variance of the Fourier-space 3D quantities:
\begin{equation}
\left\langle U({\bf k})V^*({\bf k}')\right\rangle = (2\pi)^3\,\delta({\bf k}-{\bf k}')\,P_{UV}(k).
\end{equation}
We model the 3D power spectrum using the halo model, which we describe in the next section.
In this formalism, the 3D quantities associated with $\delta_g(\hat{\boldsymbol{\theta}})$ and $y(\hat{\boldsymbol{\theta}})$ are the 3D overdensity $\Delta_g({\bf x})$ and the electron pressure $P_e({\bf x})$ respectively. The associated radial kernels are:
\begin{equation}
W_g(\chi)=\frac{H(z)}{c}\,\phi_g(z),\hspace{12pt}W_y(\chi)=\frac{\sigma_{\scriptscriptstyle\rm T}}{m_ec^2}\frac{1}{1+z},
\end{equation}
where $H(z)$ is the expansion rate.
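As a concrete illustration, the Limber integral of Eq.\!~\ref{eq:cllimber} amounts to a one-dimensional quadrature over comoving distance. The sketch below uses toy constant kernels and a scale-free power spectrum purely to show the structure of the calculation; all names and values are illustrative, not the pipeline used in this work.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal quadrature (version-safe alternative to np.trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def limber_cl(ells, chi, w_u, w_v, pk_uv):
    """C_ell = int dchi W_u(chi) W_v(chi) / chi^2  P_UV(k=(ell+1/2)/chi, chi)."""
    cls = []
    for ell in ells:
        k = (ell + 0.5) / chi
        cls.append(trapezoid(w_u * w_v / chi**2 * pk_uv(k, chi), chi))
    return np.array(cls)

# Toy inputs: constant kernels and P(k) = 1/k, for which
# C_ell scales exactly as 1/(ell + 1/2).
chi = np.linspace(100.0, 1000.0, 512)   # comoving-distance grid [Mpc]
w = np.full_like(chi, 1.0e-3)           # placeholder radial kernel
cl = limber_cl([10, 100], chi, w, w, lambda k, chi: 1.0 / k)
```

For realistic kernels, `w_u` and `w_v` would be evaluated from Eq. above ($W_g$ from the redshift distribution, $W_y$ from the tSZ prefactor).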
\subsection{Halo model predictions}\label{ssec:theory.hm}
The halo model describes the spatial fluctuations of any quantity in terms of the contributions of all dark matter haloes, under the assumption that all matter in the Universe is contained in those haloes. We only quote here the final results regarding the halo model prediction for power spectra, and refer the reader to \cite{2000MNRAS.318..203S,2000MNRAS.318.1144P,2002PhR...372....1C} for further details.
Let $U(r|M)$ be the profile of a given quantity as a function of the comoving distance $r$ to the centre of a halo of mass $M$, and let $U(k|M)$ be its Fourier transform:
\begin{equation}
U(k|M)\equiv4\pi \int_0^\infty dr\,r^2\,\frac{\sin(kr)}{kr}U(r|M).
\end{equation}
The halo model prediction for the cross-power spectrum $P_{UV}$ then consists of two contributions, the so-called 1-halo term and 2-halo term:
\begin{equation}
P_{UV}(k)=P^{1h}_{UV}(k)+P^{2h}_{UV}(k).
\end{equation}
Each of these can be estimated in terms of the Fourier-space profiles as:
\begin{align}
&P^{1h}_{UV}(k)=\int dM\,\frac{dn}{dM}\,\langle U(k|M)\,V(k|M)\rangle,\\
&P^{2h}_{UV}(k)=\langle bU\rangle\,\langle bV\rangle\,P_L(k),\\
&\langle bU\rangle(k)\equiv\int dM\frac{dn}{dM}\,b_h(M)\,\langle U(k|M)\rangle. \label{eq:hm_bias}
\end{align}
Here, $P_L(k)$ is the linear matter power spectrum, $dn/dM$ is the halo mass function (comoving density of haloes per unit halo mass) and $b_h(M)$ is the halo bias.
It is important to note that the halo model is inaccurate in the range of scales corresponding to the transition between the 1-halo and 2-halo-dominated regimes. This is a well-known effect \citep{2015MNRAS.454.1958M}, and we correct for it here simply by multiplying all halo-model power spectra by a universal scale-dependent factor, given by the ratio between the revised {\sl Halofit} prediction for the matter power spectrum of \cite{2012ApJ...761..152T} and the pure halo-model prediction for the same quantity:
\begin{equation}
R(k)\equiv\frac{P_{\rm Halofit}(k)}{P_{\rm halo\,model}(k)}.
\end{equation}
\subsubsection{Galaxies and the halo occupation distribution}\label{sssec:theory.hm.hod}
To model the galaxy overdensity, $\Delta_g$, we use a Halo Occupation Distribution (HOD) model \citep{2002ApJ...575..587B,2005ApJ...633..791Z,2013MNRAS.430..725V}, as prescribed by \cite{2011ApJ...736...59Z}. The HOD models the galaxy content of dark matter haloes as being made up of central and satellite galaxies. Centrals lie at the centre of the halo, while satellites are distributed according to a profile $u_s(r|M)$. Haloes can have zero or one central, and the mean number of centrals for a halo of mass $M$ is modelled as a smoothed step function
\begin{equation}
\langle N_c(M)\rangle=\frac{1}{2}\left[1+{\rm erf}\left(\frac{\log(M/M_{\rm min})}{\sigma_{\rm lnM}}\right)\right].
\end{equation}
In our fiducial scenario we assume that satellites can only be formed if a halo has a central and has a mass larger than some threshold $M_0$. In that case, the average number of satellites follows a power law of the form:
\begin{equation}\label{eq:hod1}
\langle N_s(M)\rangle=N_c(M)\,\Theta(M-M_0)\,\left(\frac{M-M_0}{M_1'}\right)^{\alpha_s}.
\end{equation}
For simplicity, we fix $M_0$ to $M_{\rm min}$, $\sigma_{\rm lnM}=0.15$ and $\alpha_s=1$, as in \cite{2018MNRAS.473.4318A}, leaving only two free parameters: $M_{\rm min}$ and $M_1'$. Coupling $M_0$ and $M_{\rm min}$ allows for all haloes containing a central to also contain satellites, and conversely, for all haloes containing satellites to necessarily contain a central. This assumption breaks down in cases such as recent major mergers, where centrals may not immediately be established.
Besides their mean values, we also need to specify the statistics of $N_c$ and $N_s$. Following standard practice \citep{2013MNRAS.430..725V}, we assume $N_c$ to have a Bernoulli distribution with probability $p=\langle N_c\rangle$, and $N_s$ to be Poisson-distributed.
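A minimal numerical sketch of the mean occupations defined above (function names are ours; we assume the natural-log convention suggested by $\sigma_{\rm lnM}$, and fix $M_0=M_{\rm min}$ as in the fiducial setup):

```python
import numpy as np
from scipy.special import erf

def n_central(m, m_min, sigma_lnm=0.15):
    """Mean central occupation: smoothed step in log-mass (ln convention assumed)."""
    return 0.5 * (1.0 + erf(np.log(m / m_min) / sigma_lnm))

def n_satellite(m, m_min, m1_prime, m0=None, alpha_s=1.0):
    """Mean satellite occupation: power law above threshold M0, modulated by <N_c>."""
    m0 = m_min if m0 is None else m0
    # clip avoids negative bases below the threshold mass
    power_law = (np.clip(m - m0, 0.0, None) / m1_prime) ** alpha_s
    return n_central(m, m_min, sigma_lnm=0.15) * power_law
```

With illustrative values $M_{\rm min}=10^{12}\,{\rm M_\odot}$ and $M_1'=10^{13}\,{\rm M_\odot}$, a halo at the threshold mass hosts a central half the time, and satellites appear in haloes roughly an order of magnitude more massive.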
Putting everything together, the moments of the galaxy overdensity Fourier profile are \citep[e.g. see section 2.2 of][]{2013MNRAS.430..725V}:
\begin{align}
&\langle u_g(k)\rangle=\bar{n}_g^{-1}\left[\langle N_c\rangle +\langle N_s\rangle\,u_s(k)\right],\\
&\langle |u_g(k)|^2\rangle=\bar{n}_g^{-2}\left[\langle N_s\rangle^2u_s^2(k)+2\langle N_s\rangle u_s(k)\right],
\end{align}
where the mean number density $\bar{n}_g$ is
\begin{equation}
\bar{n}_g\equiv\int dM\,\frac{dn}{dM}\left(\langle N_c\rangle+\langle N_s\rangle\right),
\end{equation}
and we have suppressed the mass dependence of all quantities for brevity.
Finally, we assume that the satellites follow the matter distribution, and therefore $u_s(k|M)$ is given by a truncated Navarro, Frenk \& White profile \citep{1996ApJ...462..563N}:
\begin{align}
u_s(k|M)=&\left[\log(1+c_\Delta)-\frac{c_\Delta}{(1+c_\Delta)}\right]^{-1}\\\nonumber
&\left[\cos(q)\left({\rm Ci}((1+c_\Delta)q)-{\rm Ci}(q)\right)\right.\\\nonumber
&\left.+\sin(q)\left({\rm Si}((1+c_\Delta)q)-{\rm Si}(q)\right)\right.\\\nonumber
&\left.-\sin(c_\Delta q)/((1+c_\Delta) q)\right],
\end{align}
where $q\equiv kr_\Delta/c_\Delta$, $r_\Delta$ and $c_\Delta$ are the halo radius and concentration defined in Section \ref{sssec:theory.hm.cm}, and $\{{\rm Ci}, {\rm Si}\}$ are the cosine and sine integrals.
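This profile is straightforward to evaluate with SciPy's sine and cosine integrals; a useful consistency check is that the normalised profile tends to unity as $k\rightarrow0$. A minimal sketch (function name is ours):

```python
import numpy as np
from scipy.special import sici

def u_nfw(k, r_delta, c_delta):
    """Normalised Fourier transform of an NFW profile truncated at r_delta.

    Satisfies u -> 1 as k -> 0, i.e. the profile integrates to unit mass
    within the truncation radius.
    """
    q = k * r_delta / c_delta
    si_hi, ci_hi = sici((1.0 + c_delta) * q)   # Si, Ci at (1+c) q
    si_lo, ci_lo = sici(q)                     # Si, Ci at q
    norm = np.log(1.0 + c_delta) - c_delta / (1.0 + c_delta)
    return (np.cos(q) * (ci_hi - ci_lo)
            + np.sin(q) * (si_hi - si_lo)
            - np.sin(c_delta * q) / ((1.0 + c_delta) * q)) / norm
```

Note the last term carries $(1+c_\Delta)q$ in the denominator, which is what guarantees the $k\rightarrow0$ limit of unity.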
\subsubsection{tSZ and pressure profiles}\label{sssec:theory.hm.pe}
In order to describe the electron pressure in a halo, we use the generalised NFW profile (GNFW) described in \cite{2010A&A...517A..92A} and used in the {\it Planck\/}\ tSZ cluster analysis \citep{2016A&A...594A..24P}. This profile takes the form:
\begin{equation}
P_e(r)=P_*\,p(r/r_{500c}),
\end{equation}
where $r_{500c}$ is the cluster radius enclosing an overdensity of 500 times the critical density (see Section \ref{sssec:theory.hm.cm}). The normalisation $P_*$ is given by
\begin{equation}\label{eq:arnaud_norm}
P_*=6.41\,\left(1.65\,\, {\rm eV}\,{\rm cm}^{-3}\right)h_{70}^{8/3}
\left(\frac{h_{70}(1-b_{\scriptscriptstyle\rm H})M_{500c}}{3\times10^{14}{\rm M_\odot}}\right)^{0.79},
\end{equation}
where $h_{70}=H_0/(70\ {\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1})$, and $M_{500c}$ is the halo mass enclosed by $r_{500c}$. The GNFW form factor is
\begin{equation}
p(x)=(c_P x)^{-\gamma}\left[1+(c_P x)^\alpha\right]^{(\gamma-\beta)/\alpha},
\end{equation}
with $(\alpha,\beta,\gamma,c_P)=(1.33,4.13,0.31,1.81)$. We must note that other pressure profiles have been proposed in the literature (e.g. \citejap{2012ApJ...758...75B}), but we choose this parameterisation in order to be able to relate our measurement of $(1-b_{\scriptscriptstyle\rm H})$ to the results of \cite{2016A&A...594A..24P}.
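For reference, a minimal evaluation of the GNFW form factor with the parameter values quoted above (illustrative code, not part of our pipeline); at large $x$ the profile falls off as $x^{-\beta}$:

```python
import numpy as np

def gnfw(x, alpha=1.33, beta=4.13, gamma=0.31, c_p=1.81):
    """GNFW form factor p(x) with the parameter values quoted in the text."""
    cx = c_p * np.asarray(x, dtype=float)
    return cx**(-gamma) * (1.0 + cx**alpha)**((gamma - beta) / alpha)
```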
The quantity $1-b_{\scriptscriptstyle\rm H}$ in Eq.\!~\ref{eq:arnaud_norm} parameterises our lack of knowledge about the precise relation between mass and pressure in clusters. This factor is also commonly referred to as the `hydrostatic bias', since it was originally defined to account for the fraction of halo mass not in hydrostatic equilibrium missed by X-ray observations. Since this parameter also encapsulates other sources of bias in the X-ray-based mass estimates, we instead refer to it as the {\sl mass bias}. Numerical simulations have constrained the mass deficit to be around 20\% (i.e. $1-b_{\scriptscriptstyle\rm H}\simeq0.8$: \citejap{2012ApJ...758...74B}; \citejap{2014ApJ...782..107N}), although it is known that a smaller value is necessary in order to fully reconcile CMB primary constraints and SZ cluster counts \citep{2016A&A...594A..24P}. This is a central point of our discussion in Sections \ref{sec:results} and \ref{sec:conclusion}.
Within the halo model description, and assuming a log-normal $y$-mass relation, the pressure profile cumulants are given by:
\begin{align}
&\langle u_y(k|M)\rangle=P_e(k),\\
&\langle u_y^2(k|M)\rangle=P_e^2(k)\,e^{\sigma_{\ln Y}^2},
\end{align}
where $P_e(k)$ is the Fourier transform of the GNFW profile and $\sigma_{\ln Y}=0.173\pm0.023$ is the intrinsic logarithmic scatter in the $y$-mass relation \citep{2016A&A...594A..24P}. Note that, since we do not make use of the tSZ auto-correlation, we do not use the second-order cumulant of the pressure profile in our analysis. However, since we analyse the galaxy-tSZ correlation, we need to model the covariance between the galaxy overdensity and pressure profiles. For simplicity, we adopt a one-parameter model:
\begin{equation}
\langle u_y(k|M) u_g(k|M)\rangle = (1+\rho_{yg})\langle u_g(k|M)\rangle \langle u_y(k|M)\rangle,
\end{equation}
where the free parameter $\rho_{yg}$ determines the sign of the correlation between galaxy abundance and pressure. Marginalising over this parameter has the added advantage of removing any sensitivity of our final constraints on $1-b_{\scriptscriptstyle\rm H}$ to the details of the cross-spectrum model in the 1-halo regime, where both parameters are completely degenerate.
It is also worth exploring the halo model bias (Eq.\!~\ref{eq:hm_bias}) for the Compton-$y$ parameter. At $k\rightarrow0$ it is given by:
\begin{align}\nonumber
\langle bP_e\rangle&=\int dM\,\frac{dn}{dM}\,b_h(M)\,\int_0^\infty dr\,4\pi r^2\,P_e(r|M)\\\label{eq:by}
&=\int dM\,\frac{dn}{dM}\,b_h(M)\,E_T(M),
\end{align}
where $E_T(M)$ is the thermal energy in a halo of mass $M$ \citep{2017MNRAS.467.2315V,2019arXiv190413347P}. A measurement of $\langle bP_e\rangle$ can therefore be related to the thermodynamics of gas inside haloes. We also express our measurements in terms of this parameter in Section \ref{sec:results}.
\subsubsection{Concentration-mass relation and mass definitions}\label{sssec:theory.hm.cm}
Halo radii $r_\Delta$ are usually defined as the size of the sphere containing a given mass $M_\Delta$:
\begin{equation}
M_\Delta = \frac{4\pi}{3}\rho_*(z) \; \Delta \; r^3_\Delta.
\end{equation}
Common choices for $\rho_*$ are the critical density, $\rho_c=3H^2(z)/8\pi G$, or the matter density, $\rho_M(z)$. The spherical overdensity parameter, $\Delta$, is usually chosen within the range $\sim(200,500)$ and sometimes defined as the quantity yielding the virial radius in the spherical top-hat collapse model, $\Delta_v$ \citep{1998ApJ...495...80B}.
Ideally, we would like to use the same mass definition (i.e. choice of $\Delta$ and $\rho_*$) for the mass function, the mass-concentration relation $c_\Delta(M)$ and the calibrated pressure profile. Unfortunately, while the GNFW profile is calibrated to $\Delta_{500c}$ (where $c$ denotes critical density), the mass functions of \cite{2008ApJ...688..709T,2010ApJ...724..878T} are only provided for $\rho_M$-based mass definitions, and the concentration-mass relation of \cite{2008MNRAS.390L..64D} was only estimated for $\Delta=200$ (for critical and matter densities) and for $\Delta=\Delta_v$. To overcome this issue, we follow the procedure used by \cite{2016A&A...594A..24P} and \cite{2018MNRAS.477.4957B}: our baseline mass definition is $\rho_c$-based with $\Delta=500$, as used by \cite{2010A&A...517A..92A}. At each redshift, we translate this into a $\Delta$ value for a $\rho_M$-based definition, which we use to compute the mass function from the parameterisations of \cite{2008ApJ...688..709T,2010ApJ...724..878T}. We also re-derive the concentration-mass relation of \cite{2008MNRAS.390L..64D} for a $\rho_c$-based $\Delta=500$ from their $\Delta=200$ parameterisation by integrating the NFW profile to the corresponding halo radius. Within the redshifts covered by our analysis we find that this is well fit by:
\begin{equation}
c_{500c}(M,z)= A\,(M/M_{\rm pivot})^B\,(1+z)^C,
\end{equation}
with $M_{\rm pivot}=2.7\times10^{12}\,{\rm M_\odot}$ and $(A,B,C)=(3.67,\,-0.0903,\,-0.51)$.
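The fit above is trivial to evaluate numerically; at the pivot mass and $z=0$ it returns $c_{500c}=A$ by construction, and the concentration decreases with both mass and redshift. A minimal sketch:

```python
def c500c(m, z, a=3.67, b=-0.0903, c=-0.51, m_pivot=2.7e12):
    """Fitted concentration-mass relation c_500c(M, z); masses in M_sun."""
    return a * (m / m_pivot)**b * (1.0 + z)**c
```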
\section{Data}\label{sec:data}
\subsection{The Compton-$y$ map}\label{ssec:data.y}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{mask_y.pdf}
\includegraphics[width=0.5\textwidth]{mask_g.pdf}
\caption{{\sl Top:} sky mask used for the Compton-$y$ map, corresponding to a sky fraction $f_{\rm sky}=0.59$. {\sl Bottom:} mask used for the 2MPZ and WI$\times$SC~ galaxy surveys, corresponding to a sky fraction $f_{\rm sky}=0.68$. The product of both masks leaves a usable sky fraction $f_{\rm sky}\simeq0.58$.
}
\label{fig:msk}
\end{figure}
We make use of the Compton-$y$ parameter maps made public by the {\it Planck\/}\ collaboration \citep{2016A&A...594A..22P}. These maps were generated using different flavours of the Internal Linear Combination method \citep{2004ApJ...612..633E,2008arXiv0811.4277V}. In a simplified description, the ILC technique selects the linear combination of all frequency channels that preserves the spectrum of the source one wishes to map, minimising the map-level variance. The refined versions of the ILC method used by {\it Planck\/}\ further optimise the linear weights on different scales and different regions of the map, and project out sources with known spectra that are likely to cause significant contamination. In particular, {\it Planck\/}\ has released two $y$ maps, extracted using the MILCA \citep{2013A&A...558A.118H} and NILC \citep{2011MNRAS.410.2481R} variations of the ILC technique. Both methods deproject CMB contamination through its well-known spectrum, but differ on the methods used to calculate the optimal scale-dependent and spatially-varying linear weights.
The MILCA and NILC maps have been found to be in good agreement in different studies, although the NILC map has a higher noise level on large scales \citep{2016A&A...594A..22P}. We thus use the MILCA map as our fiducial Compton-$y$ map, but repeat our analysis on the NILC map as part of our systematics analysis. We use a fiducial mask for the $y$ maps based on a combination of the {\it Planck\/}\ 60\% Galactic mask and the union of the HFI and LFI point source masks (see top panel of Fig.\!~\ref{fig:msk}).
Finally, in order to evaluate the level of contamination from extragalactic dust in the $\delta_g$-$y$ correlation, we make use of the HFI 545 GHz map, as described in Section \ref{sssec:results.syst.y}.
\subsection{2MPZ and WI$\times$SC}\label{ssec:data.g1}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{nzs.pdf}
\caption{Fiducial redshift distributions of the different galaxy samples used in this analysis. See Table \ref{tab:z_bins} for further details.}
\label{fig:dndz}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{l|cccc}
\hline
Sample & $[z_{{\rm ph},i},z_{{\rm ph},f}]$ & $\bar{z}$ & $\bar{n}_g\,[{\rm deg}^{-2^{\phantom{2}}}]$ & $\ell_{\rm max}$\\[1ex]
\hline
2MPZ & N.A. & 0.07 & 25.5 & 280 \\
WI$\times$SC-1 & $[0.1,0.15]$ & 0.13 & 106 & 540 \\
WI$\times$SC-2 & $[0.15,0.2]$ & 0.18 & 126 & 745 \\
WI$\times$SC-3 & $[0.2,0.25]$ & 0.23 & 136 & 945 \\
WI$\times$SC-4 & $[0.25,0.3]$ & 0.27 & 118 & 1130 \\
WI$\times$SC-5 & $[0.3,0.35]$ & 0.32 & 41 & 1310 \\
\hline
\end{tabular}
\caption{Galaxy samples used in this analysis, corresponding to the full 2MPZ survey and five tomographic redshift bins of the WI$\times$SC~ survey. The second, third and fourth columns list the photometric redshift interval defining the sample, its mean redshift and its number density respectively. The largest multipole used in the analysis of each sample (corresponding to a comoving scale of $k_{\rm max}\sim 1\,{\rm Mpc}^{-1}$) is shown in the last column.}\label{tab:z_bins}
\end{center}
\end{table}
We make use of two low-redshift photometric redshift (photo-$z$) catalogues, the 2MASS Photometric Redshift catalogue \citep[2MPZ,][]{2014ApJS..210....9B} and the WISE $\times$ SuperCOSMOS catalogue \citep[WI$\times$SC,][]{2016ApJS..225....5B}. Both samples were created by cross-matching full-sky imaging surveys, and photo-$z$'s were subsequently computed for all the included sources. Their broad photometric coverage, together with adequate spectroscopic calibration data, allows for well-constrained photo-$z$'s, with minimal mean bias, relatively low scatter and a small number of outliers.
2MPZ was constructed by cross-matching the extended-source catalogue from the 2 Micron All-Sky Survey \citep[2MASS,][]{2006AJ....131.1163S,2000AJ....119.2498J} with the photographic plates of SuperCOSMOS \citep{2001MNRAS.326.1295H,2016MNRAS.462.2085P} and the photometry of the Wide-field Infrared Survey Explorer (WISE: \citejap{2010AJ....140.1868W}). After applying an apparent magnitude cut $K_s<13.9$ to achieve uniformity, 2MPZ includes over 940,000 sources observed in 8 bands: SuperCOSMOS's optical $(B,R,I)$, 2MASS's near-infrared $(J,H,K_s)$ and WISE's mid-infrared $(W1,W2)$. Photometric redshifts were extracted using the neural network code {\tt ANNz} \citep{2004PASP..116..345C} trained on a large spectroscopic sample from overlapping surveys. The resulting photo-$z$'s have a typical error $\sigma_z\simeq0.015$, and the sample has a median redshift $z\simeq0.08$ (see also \citealt{2018MNRAS.476.1050B} for a more detailed analysis of the 2MPZ photo-$z$ properties). As demonstrated in Section \ref{sec:results}, due to the low redshift of this sample, the cross-correlation with the $y$ map is dominated by the 1-halo term for 2MPZ, and little is gained by sub-dividing it into narrower redshift bins. Therefore we use 2MPZ as a single tomographic sample.
A deeper sample is obtained by ignoring the 2MASS data and cross-matching WISE and SuperCOSMOS only. After removing the sources already contained in 2MPZ, the resulting catalogue, WI$\times$SC, is $\sim3$ times deeper, contains $\sim20$ million sources and reaches up to redshift $z\sim0.4$, with a median redshift of $\sim0.2$. The photometric redshifts are less accurate, given the poorer photometric coverage $(B,\,R,\,W1,\,W2)$, with a mean error $\sigma_z/(1+z)\simeq0.035$. We divide the WI$\times$SC~ sample into 5 redshift bins, corresponding to photo-$z$ intervals of equal width $\delta z_{\rm photo}=0.05$ in the range $0.10<z_{\rm photo}<0.35$. Details regarding each of these redshift bins are given in Table \ref{tab:z_bins}, and the corresponding redshift distributions are shown in Fig.\!~\ref{fig:dndz}. We estimate fiducial redshift distributions for each tomographic sample as described in \cite{2018MNRAS.481.1133P}, and we discuss the marginalisation over uncertainties in these distributions in Section \ref{sssec:methods.syst.photoz}.
Both 2MPZ and WI$\times$SC~ suffer from different levels of contamination from Galactic and observational systematics. The most relevant Galactic systematic is star contamination, particularly for WI$\times$SC~ \citep{2019JCAP...08..037X}. Besides avoiding regions of high dust and star contamination using the sky mask described in \cite{2018MNRAS.481.1133P} (see bottom panel of Fig.\!~\ref{fig:msk}), we mitigate the effects of stellar contamination by fitting and removing a smooth non-linear relation between galaxy and star density (also described in \citejap{2018MNRAS.481.1133P}). After doing so, three potential sources of systematic uncertainty remain: residual stellar contamination, modulation of the galaxy density due to Galactic dust reddening and modulation due to zero-point fluctuations in the photographic plates used by SuperCOSMOS. We address the first two (dust and stars) by deprojecting them at the map level as described in Section \ref{sssec:methods.syst.deproj}. We address the contamination from fluctuations in the SuperCOSMOS plates by modelling it at the power spectrum level, which we describe in Section \ref{sssec:methods.syst.plates}.
\section{Methods}\label{sec:methods}
\subsection{Estimating power spectra}\label{ssec:methods.cls}
We measure all auto- and cross-power spectra between the different redshift bins and the $y$ maps using the pseudo-$C_\ell$ estimator \citep[e.g.][]{2002ApJ...567....2H} as implemented in the {\tt NaMaster} code\footnote{\url{https://github.com/LSSTDESC/NaMaster}.} \citep{2019MNRAS.484.4127A}. Details about the method can be found in these references, but we provide a brief description here for completeness. For an incomplete sky coverage, a given field observed on the sphere, $\tilde{u}(\hat{\boldsymbol{\theta}})$, can be modelled as a product of the true underlying field $u$, and a sky mask $w$
\begin{equation}
\tilde{u}^{\rm obs}(\hat{\boldsymbol{\theta}}) = w(\hat{\boldsymbol{\theta}})\,u(\hat{\boldsymbol{\theta}}).
\end{equation}
In the simplest scenario, the mask $w$ is simply a binary map ($w=0$ or 1) selecting the pixels in the sky that have been observed. More generally, $w$ can be designed to optimally up- or downweight different regions in an inverse-variance manner. Through the convolution theorem, the spherical harmonic transform of the observed field is a convolution of the harmonic transforms of the true field and the mask. Provided the mask and true field are uncorrelated, this then translates into a similar result for the ensemble average of the observed power spectra $\tilde{C}^{uv}_\ell$:
\begin{equation}
\tilde{C}^{uv}_\ell = \sum_{\ell'}\,M^{uv}_{\ell \ell'}\, C^{uv}_{\ell'},
\end{equation}
where $C^{uv}_\ell$ is the true underlying power spectrum. $M^{uv}_{\ell \ell'}$ is the so-called mode-coupling matrix, which depends solely on the masks of both fields, and which can be computed analytically. Roughly speaking, the pseudo-$C_\ell$ approach is then based on estimating $M^{uv}$ and inverting it to yield an unbiased estimate of the power spectrum.
For the galaxy auto-correlation, the pseudo-$C_\ell$ method requires an additional step of subtracting the shot noise bias. We do so analytically following the approach described in Section 2.4.2 of \cite{2019MNRAS.484.4127A} with a local noise variance given by $\sigma_n^2=1/\bar{n}_\Omega$, where $\bar{n}_\Omega$ is the mean surface number density in units of inverse steradians.
All maps were generated and operated on using the {\tt HEALPix} pixelisation scheme \citep{2005ApJ...622..759G} with resolution parameter $N_{\rm side}=512$, corresponding to a pixel size $\theta_{\rm pix}\sim7'$.
\subsection{Covariance matrices}\label{ssec:methods.cov}
We combine two different methods to estimate the power spectrum covariance matrix.
We make a first estimate of the covariance using the jackknife resampling method. We divide the common footprint covered by the masks of both the galaxy overdensity and Compton-$y$ maps into $N_{\rm JK}$ regions of roughly equal area. We mask each region in turn and compute the power spectrum using the remaining available footprint. The covariance matrix is then estimated as:
\begin{equation}\label{eq:cov_jk}
{\rm Cov}\left(C^{uv}_\ell,C^{wz}_{\ell'}\right)=\frac{N_{\rm JK}-1}{N_{\rm JK}}\sum_{n=1}^{N_{\rm JK}}\Delta C^{uv,(n)}_\ell\,\Delta C^{wz,(n)}_{\ell'},
\end{equation}
where $\Delta C^{uv,(n)}_\ell$ is the difference between the power spectrum estimated when removing the $n$-th jackknife region and the power spectrum averaged over all jackknife regions. We use $N_{\rm JK}=461$ jackknife regions defined as {\tt HEALPix} pixels with resolution $N_{\rm side}=8$.
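Given the $N_{\rm JK}$ delete-one power spectra, Eq.\!~\ref{eq:cov_jk} amounts to a rescaled outer product of residuals about the jackknife mean. A minimal sketch, assuming the delete-one spectra have already been measured and stored as rows of an array (names are ours):

```python
import numpy as np

def jackknife_cov(cls_jk):
    """Delete-one jackknife covariance from an (N_JK, N_ell) array of spectra,
    each measured with one jackknife region removed."""
    cls_jk = np.asarray(cls_jk, dtype=float)
    n_jk = cls_jk.shape[0]
    delta = cls_jk - cls_jk.mean(axis=0)      # residuals about the JK mean
    return (n_jk - 1.0) / n_jk * (delta.T @ delta)
```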
Although the jackknife method is able to provide an estimate of the size of the power spectrum uncertainties in a model-independent way, it has some drawbacks. Only a limited number of jackknife regions of reasonable size can be selected, so the estimated covariance is inevitably noisy. Typically, the number of independent realisations used to estimate a sample covariance matrix should be at least one order of magnitude larger than the size of the data vector (up to 50 elements in our case). Furthermore, since the footprint associated with the removal of each jackknife region is different from the overall footprint, the method is also, by construction, unable to recover the mode-coupling associated with the map geometry. In order to verify and improve our estimate of the covariance matrix, we make use of a second analytical estimator.
We compute the analytical covariance matrix following the methods outlined in \cite{2017MNRAS.470.2100K}. The covariance receives two main additive contributions, from the so-called {\sl disconnected} and {\sl connected} trispectra. The disconnected part is essentially the covariance matrix estimated under the assumption that all fields are Gaussian. In the absence of sky masks, it is given by
\begin{equation}
{\rm Cov}^{\rm G}\left(C^{uv}_\ell,C^{wz}_{\ell'}\right)=\delta_{\ell\ell'}\frac{C^{uw}_\ell C^{vz}_\ell+C^{uz}_\ell C^{vw}_\ell}{2\ell+1}.
\end{equation}
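The diagonal disconnected term above is straightforward to evaluate. The sketch below adds an optional $1/f_{\rm sky}$ rescaling, which is a common rough approximation for partial-sky coverage, rather than the full mode-coupling treatment described next (names are ours):

```python
def gaussian_cov_diag(ell, cl_uw, cl_vz, cl_uz, cl_vw, f_sky=1.0):
    """Per-multipole disconnected (Knox-like) covariance of C^{uv}_l and C^{wz}_l.

    The 1/f_sky factor is a crude partial-sky correction, not the exact
    mode-coupled treatment used in the analysis.
    """
    return (cl_uw * cl_vz + cl_uz * cl_vw) / ((2.0 * ell + 1.0) * f_sky)
```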
A sky mask introduces non-zero coupling between different $\ell$ modes. To account for these, we use the method introduced in \cite{2004MNRAS.349..603E} and implemented in {\tt NaMaster}, which has been shown to be an excellent approximation for large-scale structure data \citep{2019arXiv190611765G}.
We compute the connected (i.e. non-Gaussian) contribution to the covariance matrix using the halo model as the 1-halo trispectrum \citep{2002MNRAS.336.1256K}, given by:
\begin{align}\nonumber
{\rm Cov}^{\rm NG}\left(C^{uv}_\ell,C^{wz}_{\ell'}\right)=\int &d\chi\,\frac{W_u(\chi)W_v(\chi)W_w(\chi)W_z(\chi)}{4\pi f_{\rm sky}\,\chi^6}\times\\\label{eq:cov_ng}
&T^{1h}_{uvwz}\left(k=\frac{\ell+1/2}{\chi}\right),
\end{align}
where
\begin{equation}\nonumber
T^{1h}_{uvwz}(k)\equiv\int dM\frac{dn}{dM}\langle U(k|M) V(k|M) W(k|M) Z(k|M)\rangle.
\end{equation}
Here we have used the notation introduced in Section \ref{ssec:theory.cls} for the radial kernels ($W_u(\chi)$) and Fourier-space halo profiles ($U(k|M)$) of the different projected fields. The total covariance matrix is simply given by ${\rm Cov}={\rm Cov}^{\rm G}+{\rm Cov}^{\rm NG}$. Note that estimating the connected term requires the use of the best-fit halo model parameters which we do not know a priori. In order to circumvent this issue we proceed as in \cite{2018MNRAS.473.4318A} and estimate the covariance matrix in a two-step process, where we first obtain best-fit parameters by minimising a $\chi^2$ that uses only the Gaussian covariance with power spectra computed directly from the data, and we then use those parameters to calculate the non-Gaussian contribution (as well as to recalculate the Gaussian part using the best-fit prediction for the power spectra).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{cov_diag_wisc2.pdf}
\caption{Diagonals of the covariance matrix for the $\delta_g\times y$ cross-correlation in the second WI$\times$SC~redshift bin. Black and red lines show the analytical and jackknife covariances respectively. The solid and dashed lines show the zeroth-order ($i=j$) and first-order ($i=j+1$) diagonals respectively. We find compatible results from both methods.}
\label{fig:covdiag}
\end{figure}
Fig.\!~\ref{fig:covdiag} shows the diagonal of the covariance matrix for one of the galaxy-tSZ power spectra estimated using these two methods. We find that both estimators are in good agreement with each other. We construct our final fiducial covariance matrix using both estimators: in order to ensure that we recover realistic error bar sizes, we use the variance estimated from the jackknives, and then combine it with the correlation matrix estimated analytically. This ensures that our estimator accounts for the coupling between different modes caused by survey geometry and non-Gaussianities while avoiding the statistical noise in the jackknife estimator. The final covariance is therefore:
\begin{equation}
{\rm Cov}_{ij} = {\rm Cov}^{\rm ana}_{ij} \sqrt{\frac{{\rm Cov}^{\rm JK}_{ii}\,{\rm Cov}^{\rm JK}_{jj}}{{\rm Cov}^{\rm ana}_{ii}\,{\rm Cov}^{\rm ana}_{jj}}},
\end{equation}
where ${\rm Cov}^{\rm JK}$ and ${\rm Cov}^{\rm ana}$ are the jackknife and analytical covariance matrices respectively. We have verified that the resulting covariance matrices are well behaved (i.e. they are invertible, and their eigenvalues have a reasonable dynamical range).
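The combination above amounts to rescaling the analytical correlation structure by the jackknife standard deviations, so that the diagonal of the result reproduces the jackknife variances exactly. A minimal sketch (names are ours):

```python
import numpy as np

def combine_covariances(cov_ana, cov_jk):
    """Keep the analytical correlation matrix, rescaled to the JK variances."""
    d = np.sqrt(np.diag(cov_jk) / np.diag(cov_ana))   # per-element rescaling
    return cov_ana * np.outer(d, d)
```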
\subsection{Systematics treatment}\label{ssec:methods.syst}
\subsubsection{Map-level deprojection}\label{sssec:methods.syst.deproj}
For small levels of contamination, the impact of a given systematic on an observed sky map can be modelled at the linear level:
\begin{equation}\label{eq:deproj}
{\bf m}_{\rm obs}={\bf m}_{\rm true} + \epsilon\,{\bf t}.
\end{equation}
Here ${\bf m}_{\rm obs}$ and ${\bf m}_{\rm true}$ are vectors corresponding to the observed sky map and the true underlying quantity we wish to map respectively, ${\bf t}$ is a template map describing the contaminant (e.g. a map of the Galactic dust fluctuations), and $\epsilon$ is an unknown amplitude. In order to fully account for the effects of this contaminant, one has to build a likelihood for ${\bf m}_{\rm obs}$ using Eq.\!~\ref{eq:deproj} as a model (together with a model for ${\bf m}_{\rm true}$) and marginalise over $\epsilon$. As shown in \cite{2017MNRAS.465.1847E} and \cite{2019MNRAS.484.4127A}, within the pseudo-$C_\ell$ framework, this can be done exactly by projecting ${\bf m}_{\rm obs}$ onto the subspace perpendicular to ${\bf t}$ or, in other words, by `deprojecting' ${\bf t}$. The loss of modes due to deprojection then needs to be taken into account when estimating the power spectrum, which can be done analytically.
For our analysis, we deproject two systematic templates from the galaxy and $y$ maps. We create a reddening template using the dust map of \cite{1998ApJ...500..525S}, and remove its associated contamination from the $y$ maps and from all the galaxy overdensity maps. We also generate a star density template from the WISE data, and remove its associated fluctuations from all the galaxy overdensity maps. The effects of this deprojection are illustrated in Fig.\!~\ref{fig:clsyst}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{cl_syst_summary.pdf}
\caption{Summary of the different sources of systematic contamination affecting the galaxy auto-correlations for the particular case of the third WI$\times$SC~bin. The red and blue points show the measurements before and after deprojecting a dust and a star template. The dot-dashed line shows the best-fit contamination from SuperCOSMOS plate fluctuations that explains the residuals of the data with respect to the best-fit HOD-only model (dashed line). The total combined prediction is shown as the solid line. The light grey bands show our scale cuts.}
\label{fig:clsyst}
\end{figure}
\subsubsection{$C_\ell$-level deprojection}\label{sssec:methods.syst.plates}
We model the systematic fluctuations in the number density of sources in the WI$\times$SC~ sample caused by variations in the zero-point of the SuperCOSMOS photographic plate exposures as Gaussian random variations within the footprint of each exposure with a variance $\sigma^2_{\rm plate}$:
\begin{equation}
\delta_{\rm plate}(\hat{\boldsymbol{\theta}}) = \sum_p \delta_p\,S_{\rm plate}\,W(\hat{\boldsymbol{\theta}}-\hat{\boldsymbol{\theta}}_p),
\end{equation}
where the sum runs over all exposures, $\delta_p$ is the fluctuation in exposure $p$ (with $\langle\delta_p^2\rangle=\sigma^2_{\rm plate}$), $S_{\rm plate}$ is the footprint area of each exposure and $W(\hat{\boldsymbol{\theta}})$ is the plate window function. Assuming a roughly homogeneous coverage of the sky, the power spectrum of these fluctuations is given by
\begin{equation}
C_\ell^{\rm plate}=S_{\rm plate}\sigma_{\rm plate}^2\left|W_\ell\right|^2,
\end{equation}
where $W_\ell$ is the harmonic transform of the plate window function. SuperCOSMOS used photographic plates covering an area of $5^\circ\times5^\circ$. The corresponding window function can be roughly approximated as $|W_\ell|^2=\exp[-(\ell\,\theta_{\rm plate})^2/12]$, where $\theta_{\rm plate}=5\pi/180$ for 5-degree plates.
The contamination from plate fluctuations can therefore be accounted for as an additive contribution to the model describing the galaxy auto-spectrum, proportional to a template $T_\ell\equiv|W_\ell|^2$ with a free amplitude $A\equiv S_{\rm plate}\sigma_{\rm plate}^2$ that is marginalised over. Since $A$ is a linear parameter, this can be done analytically in a pre-processing step by modifying the inverse covariance matrix of the galaxy auto-spectra as follows \citep{1992ApJ...398..169R}:
\begin{equation}
{\sf Cov}^{-1}\hspace{6pt}\rightarrow\hspace{6pt} {\sf Cov}^{-1}-\frac{{\sf Cov}^{-1}{\bf T}\cdot{\bf T}^T{\sf Cov}^{-1}}{{\bf T}^T{\sf Cov}^{-1}{\bf T}},
\end{equation}
where ${\bf T}$ is a vector containing the template $T_\ell$ for all the scales used in the analysis. The dot-dashed line in Fig.\!~\ref{fig:clsyst} shows the shape of the plate template $T_\ell$ normalised by the best-fit value of the parameter $A$ that explains the residuals of the data with respect to the best-fit HOD-only model (shown as a dashed line). The plate fluctuations contribute to $\sim10\%$ of the signal on scales $\ell\lesssim 30$.
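The plate template and the inverse-covariance modification above can be sketched in NumPy as follows (the diagonal toy covariance is illustrative only):

```python
import numpy as np

def plate_template(ells, theta_plate=5.0 * np.pi / 180.0):
    """Approximate plate window template T_ell = |W_ell|^2 for
    5-degree square plates."""
    return np.exp(-(ells * theta_plate) ** 2 / 12.0)

def marginalise_template(icov, T):
    """Analytic marginalisation over the free amplitude of an additive
    template T, implemented as a rank-1 update of the inverse
    covariance (Rybicki & Press 1992)."""
    u = icov @ T
    return icov - np.outer(u, u) / (T @ icov @ T)

ells = np.arange(2, 50, dtype=float)
T = plate_template(ells)
icov = np.linalg.inv(np.diag(0.1 + 0.01 * ells))  # toy diagonal covariance
icov_marg = marginalise_template(icov, T)
# After marginalisation the template direction carries no information
assert np.allclose(icov_marg @ T, 0.0)
```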
\subsubsection{Scale cuts}\label{sssec:methods.syst.scales}
After deprojecting dust and stars at the map level, and the imprint of the SuperCOSMOS plates at the power spectrum level, we still observe an unacceptably large amount of power on scales $\ell<10$ in all galaxy auto-correlations involving WI$\times$SC. These may be due to residual star contamination that is not simply removed with a linear template \citep{2019JCAP...08..037X}, or inaccuracies in our treatment of the SuperCOSMOS fluctuations. To avoid biasing our results we therefore remove the lowest bandpower from all galaxy auto-correlations involving WI$\times$SC.
On small scales, the simple halo model prescription used to describe the non-linear power spectra and covariance matrices may not be sufficiently accurate. Therefore we impose a cut on angular multipoles $\ell$ larger than the typical physical scale of a halo, $\ell_{\rm max}=k_{\rm max}\bar{\chi}-1/2$, where $k_{\rm max}=1\,{\rm Mpc}^{-1}$, and $\bar{\chi}$ is the mean comoving radial distance in each redshift bin. The corresponding values of $\ell_{\rm max}$ in each bin are given in Table \ref{tab:z_bins}.
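The small-scale cut can be computed directly from the expression above (the distance used in the example is illustrative only, not one of the tabulated bin values):

```python
import numpy as np

def ell_max(chi_mean, k_max=1.0):
    """Small-scale cut ell_max = k_max * chi_bar - 1/2, with chi_bar
    the mean comoving distance to the bin in Mpc and k_max in Mpc^-1."""
    return k_max * chi_mean - 0.5

# e.g. a hypothetical bin with mean comoving distance 310 Mpc
assert ell_max(310.0) == 309.5
```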
\subsubsection{Redshift distribution uncertainties}\label{sssec:methods.syst.photoz}
For each redshift bin we estimate fiducial redshift distributions using the method described in \cite{2018MNRAS.481.1133P}. In short, a true-redshift distribution is estimated from the distribution of photometric redshifts for a given bin using a model for the conditional photo-$z$ distribution:
\begin{equation}\label{eq:nz_calc}
p(z)=\int dz_{\rm photo}\,p(z|z_{\rm photo})\,p(z_{\rm photo}),
\end{equation}
where the model for $p(z|z_{\rm photo})$ was calibrated using overlapping spectroscopic data. Although the spectroscopic coverage of 2MPZ and WI$\times$SC~ is high compared to other photometric redshift surveys, the resulting redshift distributions are not infinitely precise, and therefore we need to account for any uncertainty in them that could affect our measurement.
In particular, our model of the $y\times \delta_g$ cross-correlation is especially sensitive to the width of the redshift distributions. This is because, while the amplitude of the galaxy auto-correlation is strongly affected by the width of the distribution (wider $p(z)$'s reduce the clustering amplitude), the cross-correlation with $y$ is almost insensitive to it. We therefore introduce an additional parameter, $w_z$, that stretches the support of the redshift distribution while preserving unit total probability. Concretely, given a fiducial distribution $p_{\rm fid}(z)$ with mean redshift $\bar{z}$, $w_z$ is implemented as
\begin{equation}
p(z)\propto p_{\rm fid}\left(\bar{z}+\frac{z-\bar{z}}{w_z}\right),
\end{equation}
where the proportionality constant is fixed by making sure that $p(z)$ integrates to unity. We make $w_z$ a free parameter that we marginalise over with a top-hat prior $0.8<w_z<1.2$. This prior is significantly wider than the actual expected uncertainty in the width of the redshift distributions. To be precise, we estimate that the parameters (e.g. Gaussian mean and variance) of the conditional photo-$z$ distribution $p(z|z_{\rm photo})$ used to calculate the true redshift distributions (Eq.\!~\ref{eq:nz_calc}) are known to the level of $1\%$. When propagated into an uncertainty on the width $w_z$ of $p(z)$, this corresponds to an uncertainty of the same order ($\sim0.9\%$). Thus, the margins of the assumed top-hat prior are wide enough to encompass any lack of precision in the redshift distributions.
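The width stretch can be sketched as follows, assuming a uniform redshift grid (the Gaussian fiducial distribution is illustrative only):

```python
import numpy as np

def stretch_pz(z, p_fid, w_z):
    """Stretch a fiducial p(z) about its mean by a width factor w_z,
    renormalising to unit total probability (uniform z grid assumed)."""
    dz = z[1] - z[0]
    zbar = np.sum(z * p_fid) / np.sum(p_fid)
    p = np.interp(zbar + (z - zbar) / w_z, z, p_fid, left=0.0, right=0.0)
    return p / (np.sum(p) * dz)

z = np.linspace(0.0, 1.0, 2001)
dz = z[1] - z[0]
p_fid = np.exp(-0.5 * ((z - 0.3) / 0.05) ** 2)
p_fid /= np.sum(p_fid) * dz
p_wide = stretch_pz(z, p_fid, 1.2)

def variance(p):
    m = np.sum(z * p) * dz
    return np.sum((z - m) ** 2 * p) * dz

assert np.isclose(np.sum(p_wide) * dz, 1.0)  # unit total probability preserved
assert variance(p_wide) > variance(p_fid)    # w_z > 1 widens the distribution
```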
Another possible source of systematic uncertainty is the effect of biased photometric redshifts (i.e. where $\langle z|z_{\rm photo}\rangle\neq z_{\rm photo}$). The {\tt ANNz} method used in \cite{2018MNRAS.481.1133P} should guarantee unbiased photo-$z$'s, but this can never be achieved exactly in practice. It would be possible to account for this form of uncertainty by treating $\bar{z}$ as a free parameter, as is usually done in photometric cosmic shear analyses. Our results, however, are more sensitive to the distribution widths, and we find that this simple parameterisation is able to describe our data sufficiently well. We leave a more detailed analysis of the associated photometric redshift systematics for future work.
\subsection{Likelihood}\label{ssec:methods.like}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{fits.pdf}
\caption{Measured galaxy auto-spectra (left column) and $\delta_g$-$y$ cross-spectra (right column) for the 6 galaxy redshift bins shown in Fig.\!~\ref{fig:dndz} in ascending order. Each panel shows the measurements (red points with error bars) and best-fit predictions (solid black lines) decomposed into their 1-halo and 2-halo contributions (green and blue lines). The right column additionally shows the estimated contamination from extragalactic dust (yellow squares with error bars), which is found to be negligible. The grey bands indicate the scales removed from the analysis. The bottom part of each panel shows the difference between data and theory normalised by the 1$\sigma$ uncertainties.}
\label{fig:cls}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.27\textwidth]{cov_2mpz.pdf}
\includegraphics[width=0.27\textwidth]{cov_wisc1.pdf}
\includegraphics[width=0.27\textwidth]{cov_wisc2.pdf}
\includegraphics[width=0.27\textwidth]{cov_wisc3.pdf}
\includegraphics[width=0.27\textwidth]{cov_wisc4.pdf}
\includegraphics[width=0.27\textwidth]{cov_wisc5.pdf}
\caption{Correlation matrices for the 6 redshift bins used in the analysis. Each covariance matrix consists of 4 sub-matrices, corresponding to the covariances of $C^{gg}_\ell$ and $C^{gy}_\ell$ (block diagonals), as well as their cross-covariance.}
\label{fig:covs}
\end{figure*}
\begin{table}
\begin{center}
\begin{tabular}{lll|lll}
$b$ & $\ell^b_{\rm ini}$ & $\ell^b_{\rm end}$ & $b$ & $\ell^b_{\rm ini}$ & $\ell^b_{\rm end}$ \\[1ex]
\hline
1 & 2 & 12 & 13 & 180 & 216\\
2 & 12 & 22 & 14 & 216 & 258\\
3 & 22 & 32 & 15 & 258 & 308\\
4 & 32 & 42 & 16 & 308 & 369\\
5 & 42 & 52 & 17 & 369 & 441\\
6 & 52 & 62 & 18 & 441 & 527\\
7 & 62 & 74 & 19 & 527 & 629\\
8 & 74 & 88 & 20 & 629 & 752\\
9 & 88 & 106 & 21 & 752 & 899\\
10 & 106 & 126 & 22 & 899 & 1074\\
11 & 126 & 151 & 23 & 1074 & 1284\\
12 & 151 & 180 & 24 & 1284 & 1535\\
\hline
\end{tabular}
\caption{Power spectrum bandpowers used in this work. The $b$-th bandpower includes all integer multipoles $\ell^b_{\rm ini}\le\ell<\ell^b_{\rm end}$. The bandpower edges listed here correspond roughly to linearly-spaced bands between $\ell=2$ and $\ell=62$, and logarithmic bands afterwards.}\label{tab:bpws}
\end{center}
\end{table}
In order to connect our measurements with the posterior distribution of the model parameters $\vec{\theta}$, we assume that the measured power spectra follow a Gaussian likelihood:
\begin{equation}
-2\ln p({\bf d}|\vec{\theta}) = \chi^2\equiv ({\bf d}-{\bf t}(\vec{\theta}))^T {\sf Cov}^{-1} ({\bf d}-{\bf t}(\vec{\theta})),
\end{equation}
where ${\bf d}$ is a vector of power spectrum measurements, ${\bf t}(\vec{\theta})$ is the theory prediction for ${\bf d}$ with parameters $\vec{\theta}$ and ${\sf Cov}$ is the covariance matrix described in the previous sections.
We explore the likelihood of each redshift bin separately. For each bin, our data vector ${\bf d}$ includes two sets of power spectrum measurements, corresponding to the auto-correlation of the galaxy overdensity and to its cross-correlation with the Compton-$y$ map, ${\bf d}=(C^{gg}_\ell,C^{gy}_\ell)$. We compute all power spectra in the range of multipoles $2<\ell<1535$, binned into the bandpowers described in Table \ref{tab:bpws}. For a given redshift bin, we only include those bandpowers that satisfy the scale cuts described in Section \ref{sssec:methods.syst.scales}.
For each redshift bin, our theoretical model has five free parameters: $\log_{10}M_{\rm min}/{\rm M_\odot}$, $\log_{10}M_1'/{\rm M_\odot}$, $1-b_{\rm H}$, $\rho_{yg}$, and a nuisance width parameter $w_z$. The first two parameters effectively fit the galaxy bias and small-scale amplitude in the galaxy auto-correlation, while $1-b_{\rm H}$ and $\rho_{yg}$ are then constrained by including the tSZ cross-correlation. We fix all cosmological parameters to the best-fit values in \cite{2018arXiv180706209P}: $(\Omega_c h^2,\Omega_bh^2,h,\sigma_8,n_s)=(0.119,0.0224,0.6766,0.8102,0.9665)$. We impose the following top-hat priors on the free parameters:
\begin{align}
&10\le\log_{10}M_{\rm min}/{\rm M_\odot}\le16,\\
&10\le\log_{10}M_1'/{\rm M_\odot}\le16,\\
&0\le(1-b_{\scriptscriptstyle\rm H})\le0.99,\\
&-1\le\rho_{yg}\le1,\\
&0.8\le w_z\le1.2.
\end{align}
We sample the resulting posterior distributions using the Markov chain Monte Carlo method (MCMC) as implemented in the {\tt emcee} software package \citep{2013PASP..125..306F}\footnote{\url{https://emcee.readthedocs.io/en/v2.2.1/}.}.
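The posterior evaluated by the sampler can be sketched as a Gaussian log-likelihood combined with the top-hat priors listed above. Here `theory_fn` is a hypothetical stand-in for the halo-model prediction of the $(C^{gg}_\ell, C^{gy}_\ell)$ data vector:

```python
import numpy as np

# Top-hat prior bounds for (log10 Mmin, log10 M1', 1 - bH, rho_yg, w_z)
PRIOR_BOUNDS = np.array([[10.0, 16.0],
                         [10.0, 16.0],
                         [0.0, 0.99],
                         [-1.0, 1.0],
                         [0.8, 1.2]])

def log_posterior(params, data, icov, theory_fn):
    """Gaussian log-likelihood plus top-hat priors."""
    lo, hi = PRIOR_BOUNDS.T
    if np.any(params < lo) or np.any(params > hi):
        return -np.inf
    r = data - theory_fn(params)
    return -0.5 * r @ icov @ r

# This function would then be passed to
# emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(...)).

# Toy check with a trivial constant theory prediction
data = np.zeros(3)
icov = np.eye(3)
theory = lambda p: np.zeros(3)
assert log_posterior(np.array([12.0, 13.0, 0.4, 0.5, 1.0]), data, icov, theory) == 0.0
assert log_posterior(np.array([9.0, 13.0, 0.4, 0.5, 1.0]), data, icov, theory) == -np.inf
```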
\section{Results}\label{sec:results}
\subsection{Power spectra and covariances}\label{ssec:results.cls}
We estimate the galaxy auto-power spectrum, the galaxy-tSZ cross-spectrum, and their covariance matrix for each of the redshift bins shown in Fig.\!~\ref{fig:dndz} using the methods described in Section \ref{sec:methods}. The resulting measured power spectra and errors are shown in red in Fig.\!~\ref{fig:cls}, together with their best-fit halo model prediction in black, decomposed into its 1-halo and 2-halo contributions (green and blue respectively). The bottom part of each panel shows the residuals with respect to the best-fit prediction normalised by the 1$\sigma$ errors. The grey bands cover the data points not used in the analysis due to scale cuts.
Fig.\!~\ref{fig:covs} shows the correlation matrix of the combined data vector $(C^{gg}_\ell,C^{gy}_\ell)$ for each redshift bin (where the correlation matrix $r_{ij}$ is related to the covariance $C_{ij}$ as $r_{ij}=C_{ij}/\sqrt{C_{ii}C_{jj}}$). At low redshifts, 2MPZ shows strong correlations between different scales, mostly caused by the non-Gaussian contribution to the covariance matrix (Eq.\!~\ref{eq:cov_ng}). These become less relevant at higher redshifts, where non-linear effects are weaker, and where the radial projection pushes the non-linear scale into larger multipole values.
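The conversion from covariance to correlation matrix used in the figure is simply:

```python
import numpy as np

def correlation_matrix(cov):
    """r_ij = C_ij / sqrt(C_ii * C_jj)."""
    sig = np.sqrt(np.diag(cov))
    return cov / np.outer(sig, sig)

cov = np.array([[4.0, 1.0],
                [1.0, 1.0]])
r = correlation_matrix(cov)
assert np.allclose(np.diag(r), 1.0)  # unit diagonal by construction
assert np.isclose(r[0, 1], 0.5)
```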
Overall, we find a good agreement between theory and data over the scales used in this analysis. We discuss this agreement and the associated scientific results in the following sections.
\subsection{Fiducial results}\label{ssec:results.fid}
\subsubsection{Tomographic measurement of the mass bias}\label{ssec:results.fid.1mb}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{fiducial_wisc3.pdf}
\caption{68\% and 95\% contours for the model parameters in the third WI$\times$SC redshift bin.}
\label{fig:triangle}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{bhydro.pdf}
\caption{Summary figure showing our constraints on the tSZ mass bias $1-b_{\scriptscriptstyle\rm H}$. Our fiducial constraints are shown as black circles with error bars, and are centred at the mean redshift of each sample. The grey circles show the same constraints found using the NILC $y$ map, which are in good agreement with our fiducial results. The red downward-pointing triangles show the constraints found fixing the photo-$z$ width parameter, $w_z$. They are in good agreement with our fiducial measurements, with $\sim20\%$ smaller error bars. The burgundy squares show the results found using the 2010 Tinker mass function, which are systematically biased low. The orange diamond and the mesh show our combined, redshift-independent constraint on $1-b_{\scriptscriptstyle\rm H}$. For comparison, the grey and turquoise bands show the constraints on the mass bias found by combining cluster counts and CMB primary \citep{2016A&A...594A..24P} as well as the constraints using CMB lensing to calibrate cluster masses \citep{2019MNRAS.489..401Z}, respectively. The top panel shows the normalised redshift distributions for the six redshift bins, for reference.}
\label{fig:bh}
\end{figure*}
\begin{table*}
\begin{center}
\begin{tabular}{l|cccccc}
\hline
Sample & $\bar{z}$ & $1-b_{\scriptscriptstyle\rm H}\,\,({\rm best\,\,fit})$ & $1-b_{\scriptscriptstyle\rm H}\,\,(68\%\,{\rm C.L.})$ & $\langle bP_e\rangle\,[{\rm meV}\,\,{\rm cm}^{-3^{\phantom{2}}}]$ & $\chi^2/{\rm d.o.f.}$ & ${\rm PTE}(\chi^2)$\\[1ex]
\hline
2MPZ & 0.07 & 0.33 & $0.21^{+0.31}_{-0.05}$ & $0.087^{+0.052^{\phantom{A}}}_{-0.031}$ & 0.92 & 0.57 \\
WI$\times$SC-1 & 0.13 & 0.71 & $0.62^{+0.14}_{-0.08}$ & $0.176^{+0.020^{\phantom{A}}}_{-0.025}$ & 1.35 & 0.09 \\
WI$\times$SC-2 & 0.18 & 0.73 & $0.62^{+0.10}_{-0.07}$ & $0.188^{+0.013^{\phantom{A}}}_{-0.024}$ & 0.87 & 0.69 \\
WI$\times$SC-3 & 0.23 & 0.64 & $0.64^{+0.07}_{-0.08}$ & $0.203^{+0.018^{\phantom{A}}}_{-0.023}$ & 1.17 & 0.22 \\
WI$\times$SC-4 & 0.27 & 0.55 & $0.58^{+0.07}_{-0.07}$ & $0.197^{+0.020^{\phantom{A}}}_{-0.016}$ & 1.04 & 0.40 \\
WI$\times$SC-5 & 0.32 & 0.58 & $0.61^{+0.09}_{-0.07}$ & $0.210^{+0.033^{\phantom{A}}}_{-0.010_{\phantom{A}}}$ & 1.26 & 0.12 \\
\hline
\end{tabular}
\caption{Summary table presenting our main results. The first two columns list our 6 tomographic bins and their mean redshift. Columns 3 and 4 show the best-fit value of the mass bias $1-b_{\scriptscriptstyle\rm H}$ and its 1D peak value and 68\% confidence interval, respectively. Column 5 shows the peak value and 68\% confidence interval of the bias-averaged thermal pressure (Eq.\!~\ref{eq:by}) in each bin. Finally, columns 6 and 7 show the reduced $\chi^2$ and associated probability to exceed, indicating that we find a good fit in all cases.}\label{tab:results}
\end{center}
\end{table*}
We use the measured power spectra to constrain the free parameters of the model described in Section \ref{sec:theory}, with the main aim of providing an alternative measurement of the mass bias $(1-b_{\scriptscriptstyle\rm H})$ as a function of redshift. Fig.\!~\ref{fig:triangle} shows an example of the posterior parameter contours for our 5 free parameters, $\{M_{\rm min},M_1',b_{\scriptscriptstyle\rm H},\rho_{yg},w_z\}$, in the WI$\times$SC-3 sample ($z\sim0.23$). As the figure shows, there are strong degeneracies between $M_{\rm min}$, $M_1'$ and $w_z$. This is easy to understand, given that all of these parameters affect the overall amplitude of the galaxy auto-correlation. Since we do not probe scales where the 1-halo term is fully resolved, we do not break the degeneracy between $M_{\rm min}$ and $M_1'$, which regulate the abundance of centrals and satellites, and therefore the constraint on these parameters mostly comes from the 2-halo amplitude (i.e. the galaxy bias) and the 1-halo shot-noise level. On the other hand, the width of the redshift distribution also has a strong impact on the amplitude of the angular power spectrum at all scales due to projection effects. Since a measurement of the galaxy bias from the galaxy auto-correlation is effectively used to constrain $b_{\scriptscriptstyle\rm H}$ from the galaxy-tSZ cross-correlation, the mass bias also shows some degeneracy with the HOD parameters and $w_z$, albeit at a more moderate level. More interestingly, the mass bias parameter, $b_{\scriptscriptstyle\rm H}$, shows a visible degeneracy with $\rho_{yg}$. This is also expected: the effects of $\rho_{yg}$ and $b_{\scriptscriptstyle\rm H}$ on the 1-halo term of the galaxy-tSZ power spectrum are completely degenerate, and thus a free $\rho_{yg}$ ensures that any information obtained on $b_{\scriptscriptstyle\rm H}$ comes entirely from the 2-halo contribution.
The corresponding constraints on $1-b_{\scriptscriptstyle\rm H}$ for all redshift bins are shown in the main panel of Fig.\!~\ref{fig:bh} as black circles with error bars (corresponding to the median and 68\% confidence interval). These results are summarised in Table \ref{tab:results}, where column 3 shows the maximum-likelihood value of $1-b_{\scriptscriptstyle\rm H}$, while column 4 lists the peak of the marginalised 1-dimensional distribution and the confidence interval corresponding to the equal-probability values encompassing a total probability of 0.68. Columns 6 and 7 of the same table list the reduced $\chi^2$ values for each sample and their associated probability-to-exceed (PTE), respectively. In all cases we find that the model described in Section \ref{sec:theory} is able to describe the data with no evidence for a significant statistical tension. The top panel of Fig.\!~\ref{fig:bh} shows the normalised redshift distributions of the different bins used in this analysis, and can be used to visually assess the level of correlation between the different measurements.
We see that, while we are able to measure $1-b_{\scriptscriptstyle\rm H}$ to a reasonable accuracy ($\sim12\%$) in all of the WI$\times$SC~ redshift bins, the sensitivity for the 2MPZ sample is much poorer. This is not entirely unexpected: even though the cross-correlation between the Compton-$y$ map and the 2MPZ catalogue yields the highest signal-to-noise ratio of all the samples used here, the low-redshift range covered by this sample implies that the signal is strongly dominated by the 1-halo term. This can be seen in the top right panel of Fig.\!~\ref{fig:cls}. The constraining power of the 2MPZ cross-correlation degrades significantly since, as we have already described, we can only obtain reliable constraints on $1-b_{\scriptscriptstyle\rm H}$ from the 2-halo contribution.
Our results are in broad agreement with the estimate of a redshift-independent $1-b_{\scriptscriptstyle\rm H}$ found by \cite{2019MNRAS.489..401Z} by calibrating cluster masses with CMB lensing ($1-b_{\scriptscriptstyle\rm H}=0.71\pm0.1$), shown in Fig.\!~\ref{fig:bh} as a turquoise semi-transparent band. In turn, the grey band in the same figure shows the constraints on $1-b_{\scriptscriptstyle\rm H}$ found by combining tSZ cluster counts and the $TT$ CMB power spectrum measured by {\it Planck\/}\ \citep{2016A&A...594A..24P} ($1-b_{\scriptscriptstyle\rm H}=0.58\pm0.04$). This is the value of $1-b_{\scriptscriptstyle\rm H}$ needed to simultaneously explain the amplitude of density perturbations predicted by the CMB and the abundance of massive clusters. Our measurements are visually in agreement with both of these estimates, which themselves are not in significant tension with one another.
We can however use our results to quantify whether approximating $1-b_{\scriptscriptstyle\rm H}$ to be constant with redshift is supported by the data. To do so, we combine our six measurements under the assumption that they correspond to the same redshift-independent quantity. We do so by finding the quantity $\bar{b}_{\rm H}$ that minimises the $\chi^2$:
\begin{equation}
\chi^2=\sum_{i,j=1}^6 (b_{{\rm H},i}-\bar{b}_{\rm H}){\sf Cov}^{-1}_{b,ij}(b_{{\rm H},j}-\bar{b}_{\rm H}),
\end{equation}
where $b_{{\rm H},i}$ is the mass bias measured in the $i$-th redshift bin, and ${\sf Cov}_b$ is the covariance matrix of these measurements. Since the galaxy samples used in this analysis have significant redshift overlap, the off-diagonal elements of ${\sf Cov}_b$ cannot be ignored. We estimate ${\sf Cov}_b$ through jackknife resampling: we use the power spectra measured in each jackknife region described in Section \ref{ssec:methods.cov} to estimate the best-fit value of $b_{{\rm H},i}$ for each of them, and then calculate the covariance through Eq.\!~\ref{eq:cov_jk}. Since $\bar{b}_{\rm H}$ is a linear parameter, its best-fit and standard deviation can be found analytically as:
\begin{equation}
\bar{b}_{\rm H}=\frac{\sum_{ij}{\sf Cov}_{b,ij}^{-1}b_{{\rm H},i}}{\sum_{ij}{\sf Cov}^{-1}_{b,ij}},
\hspace{12pt}
\sigma(\bar{b}_{\rm H})=\left(\sum_{ij}{\sf Cov}^{-1}_{b,ij}\right)^{-1/2}.
\end{equation}
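The combination above amounts to a generalised inverse-variance weighting, which can be sketched as:

```python
import numpy as np

def combine_correlated(b, cov):
    """Best-fit redshift-independent value and its standard deviation
    from correlated measurements b with covariance cov (generalised
    inverse-variance weighting)."""
    icov = np.linalg.inv(cov)
    w = icov.sum()
    return (icov @ b).sum() / w, w ** -0.5

# In the uncorrelated limit this reduces to the standard
# inverse-variance-weighted mean
b = np.array([1.0, 2.0])
mean, sigma = combine_correlated(b, np.eye(2))
assert np.isclose(mean, 1.5)
assert np.isclose(sigma, np.sqrt(0.5))
```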
At this point it is worth acknowledging that this estimate of $\bar{b}_{\rm H}$ is only strictly consistent if the posterior distribution of the $b_{{\rm H},i}$ is Gaussian. We find that, with the exception of the 2MPZ sample, which has a comparatively small statistical power, the marginalised distributions in all redshift bins are sufficiently well-behaved that this is a reasonable approximation (e.g. see 1-dimensional distribution for $b_{\scriptscriptstyle\rm H}$ in Fig.\!~\ref{fig:triangle}). Our combined constraint on $\bar{b}_{\rm H}$ following this procedure is:
\begin{equation}
1-\bar{b}_{\rm H}=0.59\pm0.03.
\end{equation}
More interestingly, the $\chi^2$ value associated with this measurement is $\chi^2=2.1$, which has an associated PTE$\simeq0.8$. We therefore find that the assumption of a constant mass bias with redshift is compatible with our measurements. This agrees with the results of \cite{2019A&A...626A..27S}, found using tSZ-selected clusters. The preferred value of $1-\bar{b}_{\rm H}$ found here also agrees with the results of \cite{2018MNRAS.480.3928M} in cross-correlation with galaxies at very low redshift, and with those of \cite{2019arXiv190707870M} using weak-lensing cross-correlations at higher $z$.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{by.pdf}
\caption{Summary figure showing our constraints on the bias-weighted thermal pressure $\langle bP_e\rangle$. Our measurements are shown as blue circles with error bars and a `violin' background. The circle and error bars show the median and 68\% confidence interval, while the violins show the full 1D posterior distribution. The grey lines show the predictions of the shock-heating model of \citet{2012ApJ...758...75B} for different values of the $r_{\rm max}$ threshold cluster radius (see legend). For comparison, the figure also shows the constraints on the same parameter found by \citet{2017MNRAS.467.2315V} (black) and \citet{2019arXiv190413347P} (red and green).}
\label{fig:by}
\end{figure*}
Since the amplitudes of the galaxy auto-correlation and the galaxy-tSZ cross-correlation are both affected by the value of $\sigma_8$, which we have fixed to the best-fit value found by {\it Planck\/}, the fact that the value of $1-b_{\scriptscriptstyle\rm H}$ we find is compatible with the one measured by \cite{2016A&A...594A..24P} is not surprising. In other words, this agreement can be understood as evidence of the compatibility between the best-fit {\it Planck\/}\ cosmology and the clustering properties of galaxies and thermal pressure in the datasets used here. The corresponding best-fit value of $1-b_{\scriptscriptstyle\rm H}\simeq0.6$ is at odds with the expectation from hydrodynamical simulations \citep{2016ApJ...827..112B} and some direct calibration efforts \citep[e.g.][]{2016MNRAS.456L..74S,2019A&A...621A..40E}, which seem to prefer smaller missing mass fractions ($1-b_{\scriptscriptstyle\rm H}\simeq0.8$). A possible resolution of the mild tension between tSZ cluster counts and CMB data could be the existence of new physics modifying the late-time expansion and structure growth. However, most of these modifications (e.g. a one-parameter excursion where the dark energy equation of state takes a larger value $w\sim-0.7$) would make the existing tension in the value of the local expansion rate between CMB and local measurements \citep{2019ApJ...876...85R} worse. A careful study of all possibilities, invoking both non-standard physics and more sophisticated astrophysical models, together with improved datasets, is therefore necessary before this tension can be fully resolved.
\subsubsection{Tomographic measurement of thermal gas pressure}\label{ssec:results.fid.bpe}
So far we have presented our constraints in terms of the mass bias parameter $1-b_{\scriptscriptstyle\rm H}$, since this is the main source of systematic uncertainty in the cosmological analysis of cluster number counts carried out by {\it Planck\/}. However, the physical interpretation of this parameter (the fraction of missing mass estimated from X-ray measurements under the assumption of hydrostatic equilibrium) is not directly related to the physical process that allows us to constrain it through the cross-correlation of $y$ and $\delta_g$: the fact that galaxy density and pressure trace the same underlying dark-matter fluctuations. In this sense, a more direct observable is the bias-weighted pressure $\langle bP_e\rangle$. This quantity can be interpreted both as the relation between large-scale matter and pressure fluctuations and as the halo-bias-weighted thermal energy density of all haloes at a given redshift \citep{2017JCAP...11..040B}. It has been measured at low redshifts by \cite{2017MNRAS.467.2315V} making use of galaxy groups, and at higher redshifts by the Dark Energy Science Collaboration \citep{2019arXiv190413347P}. The redshift range $0.1\lesssim z\lesssim0.4$, containing a large fraction of the tSZ sources detected by {\it Planck\/}\ \citep{2016A&A...594A..27P}, has so far remained fairly unconstrained tomographically through this type of measurement.
We derive constraints on $\langle bP_e\rangle$ from our data by reprocessing our Monte Carlo chains, computing $\langle bP_e\rangle$ for each sample in the chain to obtain its 1-dimensional posterior distribution. The results are listed in Table \ref{tab:results} and shown in Fig.\!~\ref{fig:by}, together with the measurements of \cite{2017MNRAS.467.2315V} and \cite{2019arXiv190413347P}. Our results are in good qualitative agreement with the trend of these previous measurements, as well as with the predictions of the shock heating models of \cite{2012ApJ...758...75B}. In these models, the thermal energy entering Eq.\!~\ref{eq:by} is estimated by integrating the pressure profile of \cite{2012ApJ...758...75B} up to a radius $r_{\rm max}=N\,r_{200c}$. As a visual aid to evaluate the agreement of our results with these models, the predictions for $N=2,\,3,\,5$ and $\infty$ are shown as solid, dashed, dot-dashed and dotted grey lines respectively in Fig.\!~\ref{fig:by}. These results are the most precise measurement of this quantity to date. The main factors that contribute to the improved constraining power are the larger amplitude of the tSZ signal towards low redshifts and the high density of tracers in the 2MPZ and WI$\times$SC\ samples.
\subsection{Systematics analysis}\label{ssec:results.syst}
\subsubsection{tSZ systematics}\label{sssec:results.syst.y}
No component separation method is perfect, and the MILCA Compton-$y$ map used in our fiducial analysis is known to suffer from small levels of contamination from various other astrophysical components. The most relevant for this analysis is the presence of Galactic and extragalactic dust. As described in Section \ref{sssec:methods.syst.deproj}, we remove contamination from Galactic dust at the map level in both $y$ and $\delta_g$. The extragalactic component, the so-called Cosmic Infrared Background (CIB), is however a more relevant concern, given that it traces the large-scale structure, and is therefore statistically correlated with both of our observables ($\delta_g$ and $y$). Since the CIB is most relevant near the peak of star formation ($z\sim2$), we expect this contamination to be small, but it must be quantified carefully.
To do so, we follow the same method used in \cite{2017MNRAS.467.2315V}. We model the CIB contamination in the $y$ map as:
\begin{equation}
y_{\rm obs}(\hat{\boldsymbol{\theta}})=y_{\rm true}(\hat{\boldsymbol{\theta}})+\epsilon_{\rm CIB}\,c(\hat{\boldsymbol{\theta}}),
\end{equation}
where $y_{\rm obs}$ and $y_{\rm true}$ are the observed and true Compton-$y$ maps, $\epsilon_{\rm CIB}$ is a free parameter, and $c(\hat{\boldsymbol{\theta}})$ is a template for the CIB emission. For our analysis we use the 545 GHz map released by {\it Planck\/}\ as a proxy for CIB emission. The cross-correlations of $y_{\rm obs}$ with $\delta_g$ and $c$ are then given by:
\begin{align}\label{eq:cl_yc}
&C^{yc,{\rm obs}}_\ell= C^{yc}_\ell+\epsilon_{\rm CIB}\,C^{cc}_\ell,\\
&C^{yg,{\rm obs}}_\ell= C^{yg}_\ell+\epsilon_{\rm CIB}\,C^{gc}_\ell,
\end{align}
where $C^{cc}_\ell$, $C^{yc}_\ell$, and $C^{gc}_\ell$ are the auto-correlation of the CIB, and its intrinsic cross-correlations with $y$ and $\delta_g$ respectively. Since $C^{gc}_\ell$ can be estimated directly by cross-correlating our galaxy overdensity maps with the 545 GHz map, the only remaining step to quantify the contamination to the $y$-$\delta_g$ correlation is to estimate $\epsilon_{\rm CIB}$. To do so, we measure the cross-correlation between the MILCA $y$ map and the 545 GHz map and fit a model of the form of Eq.\!~\ref{eq:cl_yc}, with a single free parameter $\epsilon_{\rm CIB}$, and $C^{yc}_\ell$ and $C^{cc}_\ell$ given by the best-fit models for the CIB auto- and cross-correlation provided by \cite{2014A&A...571A..30P,2016A&A...594A..23P}. As reported in \cite{2018PhRvD..97f3514A}, this procedure yields an estimate of the CIB contamination
\begin{equation}
\epsilon_{\rm CIB}=(2.3\pm6.6)\times10^{-7}\,({\rm MJy}/{\rm sr})^{-1}.
\end{equation}
For this exercise, in order to reduce the noise variance from the Galactic component of the 545 GHz map, the cross-correlations $C^{yc,{\rm obs}}_\ell$ and $C^{gc}_\ell$ were estimated using {\it Planck\/}'s 20\% Galactic mask. Additionally, all uncertainties were estimated using the jackknife method described in Section \ref{ssec:methods.cov}.
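The $\epsilon_{\rm CIB}$ estimate described above reduces to a one-parameter weighted least-squares fit. The sketch below illustrates that fit on synthetic power-law spectra; the spectra, bandpowers, and error bars are made-up stand-ins for the measured $C^{yc,{\rm obs}}_\ell$ and the CIB model curves, not the actual {\it Planck\/}\ data products.

```python
import numpy as np

def fit_cib_amplitude(cl_obs, cl_yc_model, cl_cc_model, sigma):
    """Weighted least-squares fit of the single amplitude eps in
    C^{yc,obs}_ell = C^{yc}_ell + eps * C^{cc}_ell,
    assuming a diagonal covariance with per-bandpower errors sigma."""
    w = 1.0 / sigma**2
    resid = cl_obs - cl_yc_model          # excess attributed to CIB leakage
    denom = np.sum(w * cl_cc_model**2)
    eps = np.sum(w * cl_cc_model * resid) / denom
    return eps, np.sqrt(1.0 / denom)      # best fit and 1-sigma error

# Synthetic demonstration: inject a known amplitude and recover it.
rng = np.random.default_rng(42)
ell = np.arange(10, 610, 20).astype(float)   # toy bandpower centres
cl_yc = 1.0 * (ell / 100.0) ** -1.5          # stand-in y x CIB model
cl_cc = 5.0 * (ell / 100.0) ** -1.2          # stand-in CIB auto model
sigma = 0.05 * cl_cc                         # toy 5% errors
eps_true = 0.3
cl_obs = cl_yc + eps_true * cl_cc + rng.normal(0.0, sigma)

eps_hat, eps_err = fit_cib_amplitude(cl_obs, cl_yc, cl_cc, sigma)
```

In the real analysis the covariance is estimated by jackknife and the model spectra come from the best-fit CIB auto- and cross-correlations, but the estimator has this same closed form.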
Using this measurement of $\epsilon_{\rm CIB}$ and the estimate of $C^{gc}_\ell$ from the cross-correlation with the 545 GHz map, Fig.\!~\ref{fig:cls} shows, in the right panels, the estimated level of contamination from the CIB in our $y$-$\delta_g$ cross-correlation. In all cases the contamination is small, at the level of $\sim1\%$ of the signal, and therefore can be neglected in our analysis.
As an additional test for systematics, we have also repeated our full analysis replacing the MILCA $y$ map with the NILC map released by {\it Planck\/}. Different component-separation methods are sensitive to different types of contamination, and this exercise is therefore both a necessary consistency check and a way to reinforce our conclusion that the level of CIB contamination is negligible. The resulting measurements of $1-b_{\scriptscriptstyle\rm H}$ are shown in Fig.\!~\ref{fig:bh} next to the fiducial measurements, as grey circles with error bars. The results are in very good agreement with our fiducial analysis.
\subsubsection{Photometric redshift uncertainties}\label{sssec:results.syst.pz}
We have quantified the impact on our results of the uncertainties on the redshift distributions of the different samples used here by introducing the free width parameter $w_z$ with a 20\% top-hat prior. In order to study the impact of these uncertainties on our results, we have recomputed the constraints on $1-b_{\scriptscriptstyle\rm H}$ fixing $w_z$ to its fiducial value of 1. The results are shown in Fig. \ref{fig:bh} as red downward-pointing triangles. We observe that the only effect of allowing $w_z$ to vary is to increase the final uncertainties on $1-b_{\scriptscriptstyle\rm H}$ by about $20\%$. The final best-fit value of $1-b_{\scriptscriptstyle\rm H}$ does not change significantly, and the posterior distribution of $w_z$ is always peaked around the fiducial value of $1$ (see e.g. bottom right panel of Fig. \ref{fig:triangle}).
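The paper does not spell out the exact parameterisation of $w_z$ in this excerpt; a common convention is to stretch the redshift distribution about its mean, which preserves the mean and the normalisation while rescaling the width. The sketch below illustrates that convention on a toy Gaussian bin and should be read as an assumption, not the pipeline's definition.

```python
import numpy as np

def stretch_pz(z, nz, w):
    """Rescale the width of a redshift distribution about its mean:
    n_w(z) = n(zbar + (z - zbar)/w) / w.  This keeps the mean and the
    normalisation fixed and multiplies the standard deviation by w."""
    norm = np.trapz(nz, z)
    zbar = np.trapz(z * nz, z) / norm
    return np.interp(zbar + (z - zbar) / w, z, nz, left=0.0, right=0.0) / w

z = np.linspace(0.0, 0.5, 2001)
nz = np.exp(-0.5 * ((z - 0.2) / 0.04) ** 2)   # toy Gaussian redshift bin
nz_wide = stretch_pz(z, nz, 1.2)              # 20% broader, same mean
```

Marginalising over $w$ with a top-hat prior of this width then propagates the photo-z width uncertainty into the final parameter errors.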
\subsubsection{Mass function parameterisation}\label{sssec:results.syst.mf}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{mf_ratio.pdf}
\caption{Ratio between the mass function parameterisations of \citet{2008ApJ...688..709T} and \citet{2010ApJ...724..878T} in the range of redshifts relevant to this analysis. A systematic offset of about $10$-$20\%$ between them can be observed, which causes the lower value of $1-b_{\rm H}$ found using the latter mass function (see Fig.\!~\ref{fig:bh}). }
\label{fig:mf_ratio}
\end{figure}
Another possible source of systematics in our measurement is the uncertainty on the mass function used in our halo model. Our fiducial mass function is the parameterisation of \cite{2008ApJ...688..709T}, which we use in order to be able to make a direct comparison with the results of \cite{2016A&A...594A..24P}. To explore the dependence on the choice of mass function parameterisation, we redo our analysis using the updated parameterisation of \cite{2010ApJ...724..878T}. Fig.\!~\ref{fig:bh} shows the final constraints on $1-b_{\scriptscriptstyle\rm H}$ in this case as burgundy squares, which are consistently lower by $\sim 2\sigma$. We find that this is caused by a redshift-dependent systematic offset between the two mass function parameterisations which reaches the level of $10$-$20\%$ in the range $0<z<0.4$. This is shown explicitly in Fig. \ref{fig:mf_ratio}. Since the aim of this paper is not to characterise the theoretical uncertainties on the halo mass function, we leave the study of this issue for future work, and limit ourselves to using the parameterisation of \cite{2008ApJ...688..709T} in order to match previous analyses \citep[e.g.][]{2016A&A...594A..24P,2018MNRAS.477.4957B,2018MNRAS.473.4318A,2019MNRAS.489..401Z,2019arXiv190707870M}.
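For reference, the fiducial multiplicity function of \cite{2008ApJ...688..709T} has the closed form sketched below, with its redshift-dependent parameters quoted here for mean overdensity $\Delta=200$; the \cite{2010ApJ...724..878T} parameterisation uses a different functional shape with its own redshift evolution, which is what produces the $10$-$20\%$ offset discussed above. This is an illustrative transcription of the published fitting formula, not the paper's pipeline.

```python
from math import exp, log10

def tinker08_f(sigma, z, Delta=200.0):
    """Tinker et al. (2008) halo multiplicity function
    f(sigma) = A [ (sigma/b)^(-a) + 1 ] exp(-c / sigma^2),
    with the published redshift evolution of A, a and b."""
    alpha = 10.0 ** (-((0.75 / log10(Delta / 75.0)) ** 1.2))
    A = 0.186 * (1.0 + z) ** -0.14
    a = 1.47 * (1.0 + z) ** -0.06
    b = 2.57 * (1.0 + z) ** -alpha
    c = 1.19
    return A * ((sigma / b) ** -a + 1.0) * exp(-c / sigma**2)
```

The mass function itself follows as $dn/dM = f(\sigma)\,(\bar\rho_m/M)\,d\ln\sigma^{-1}/dM$, so a systematic offset in $f(\sigma)$ propagates directly into the inferred mass bias.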
\subsubsection{Galaxy clustering model}\label{sssec:results.syst.gc}
To verify that our results are robust against the details of the HOD model used to parameterise the galaxy-matter connection, we have repeated the analysis for extended versions of our baseline model. We explore two such extensions:
\begin{itemize}
\item We include the threshold width for centrals $\sigma_{\rm lnM}$ as an additional free parameter with a broad top-hat prior, instead of fixing it to the value used by \cite{2018MNRAS.473.4318A}.
\item We decouple central and satellite galaxies, allowing for haloes without a central galaxy to contain satellites. This implies removing the factor $N_c(M)$ in Eq. \ref{eq:hod1} and introducing $M_0$ (the mass threshold to have satellites) as a new free parameter that is not linked to $M_{\rm min}$.
\end{itemize}
In both cases we have confirmed that the constraints derived on $1-b_{\scriptscriptstyle\rm H}$ do not deviate significantly from our fiducial results, and that our data are not able to constrain the new free HOD parameters introduced by each extension ($\sigma_{\rm lnM}$ and $M_0$ respectively). We therefore conclude that our results are insensitive to the specifics of the model used to characterise the clustering of galaxies, which is well described in the range of scales studied here by our fiducial 2-parameter HOD model.
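Since Eq.\!~\ref{eq:hod1} is not reproduced in this excerpt, the sketch below uses the common Zheng-style occupation forms to illustrate the coupled baseline versus the decoupled extension; the specific functional details are assumptions for illustration only.

```python
from math import erf, log10

def n_cen(M, M_min, sig_lnM=0.4):
    """Mean central occupation: a smoothed step at M_min."""
    return 0.5 * (1.0 + erf((log10(M) - log10(M_min)) / sig_lnM))

def n_sat_baseline(M, M_min, M1, alpha, sig_lnM=0.4):
    """Baseline model: the N_c(M) prefactor suppresses satellites in
    haloes that are unlikely to host a central galaxy."""
    return n_cen(M, M_min, sig_lnM) * (M / M1) ** alpha

def n_sat_decoupled(M, M0, M1, alpha):
    """Extension: satellites get their own mass threshold M0,
    independent of whether the halo hosts a central."""
    return ((M - M0) / M1) ** alpha if M > M0 else 0.0
```

Freeing $\sigma_{\rm lnM}$, or swapping the baseline satellite term for the decoupled one with $M_0$ free, reproduces the two extensions tested above.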
\section{Conclusion}\label{sec:conclusion}
Cross-correlating maps of the tSZ Compton-$y$ parameter with tomographic measurements of the projected galaxy distribution allows us to study the redshift evolution of the thermal gas pressure, since we expect both pressure and galaxies to trace the same underlying matter inhomogeneities. We have measured the $\delta_g$-$y$ cross-correlation to very high significance using public $y$ maps made available by the {\it Planck\/}\ collaboration \citep{2016A&A...594A..22P} and the 2MPZ and WI$\times$SC~ galaxy catalogues \citep{2014ApJS..210....9B,2016ApJS..225....5B} in six photometric redshift bins covering the redshift range $z\lesssim0.4$.
Combining this measurement with a measurement of the galaxy auto-correlation allows us to break the degeneracy between the two parameters that relate $\delta_g$ and $y$ to the matter fluctuations: the galaxy bias, which we model effectively using a 2-parameter HOD prescription, and the mass bias parameter $1-b_{\scriptscriptstyle\rm H}$; and thus enables us to make a redshift-dependent measurement of $1-b_{\scriptscriptstyle\rm H}$. The results, shown in Table \ref{tab:results} and Fig.\!~\ref{fig:bh}, agree well with the measurements of $1-b_{\scriptscriptstyle\rm H}$ made by {\it Planck\/}\ through the combination of cluster number counts and CMB primary anisotropies \citep{2016A&A...594A..24P}, as well as with the calibration of tSZ cluster masses with CMB lensing measurements \citep{2019MNRAS.489..401Z}, and with similar measurements of $1-b_{\scriptscriptstyle\rm H}$ made by \cite{2018MNRAS.480.3928M} through the same type of cross-correlation at lower redshifts. More importantly, our tomographic measurement allows us to study the possible redshift dependence of the mass bias, an important ingredient in the cosmological analysis of cluster abundances. Within our uncertainties, we do not find any statistical evidence for a redshift dependence of $1-b_{\scriptscriptstyle\rm H}$, in agreement with previous analyses \citep{2019A&A...626A..27S}. A reliable interpretation of the low value of this parameter ($1-b_{\scriptscriptstyle\rm H}\simeq0.6$), when compared with direct calibration studies and numerical simulations, requires a more careful study of different possible models as well as other datasets.
Perhaps more interestingly, our measurements can be interpreted as constraints on the bias-weighted mean gas pressure $\langle bP_e\rangle$, the equivalent of the large-scale galaxy bias for the $y$ parameter. This quantity is directly related to the energetics of gas in haloes, and can be used to constrain different heating models. Our results, shown in Table \ref{tab:results} and Fig.\!~\ref{fig:by}, agree well with previous measurements of the same quantity \citep{2017MNRAS.467.2315V,2019arXiv190413347P}, as well as with shock-heating models \citep{2012ApJ...758...75B}. This result is also the most precise measurement of the large-scale correlation between $\delta_g$ and $y$ to date.
The main sources of systematic uncertainty in our analysis are contamination of the galaxy clustering auto-correlation by stars, Galactic dust and other observing conditions, the contamination from CIB emission in the $y$ map, and uncertainties in the galaxy redshift distribution due to the use of photometric redshifts. We address the galaxy clustering systematics by deprojecting templates for dust and star contamination at the map level, by deprojecting the expected contamination from zero-point fluctuations in the SuperCOSMOS photographic plates at the power spectrum level, and by masking out the largest scales ($\ell<10$) in the galaxy clustering auto-correlation. We use the 545 GHz {\it Planck\/}\ map as a tracer of CIB to quantify the level of contamination in the $y$-$\delta_g$ cross-correlation, and find it to be negligible (as could be expected given the relatively low redshift of our sample compared with the peak of star formation, $z\sim2$). Finally, we determine that the most relevant form of systematic associated with uncertainties in the redshift distributions is that associated with the width of $p(z)$. To address this, we add a new parameter to our model, $w_z$, which describes the distribution widths. We marginalise over $w_z$ with a 20\% prior, which we have determined to be significantly larger than the expected uncertainty on the true width. This has the effect of degrading our constraints on $1-b_{\scriptscriptstyle\rm H}$ and $\langle bP_e\rangle$ by $\sim20\%$ without modifying the best-fit value of either quantity significantly.
We must also note that our parameter constraints have been derived for a fixed cosmological model, corresponding to the best-fit parameters found by {\it Planck\/}\ \citep{2018arXiv180706209P}. The most significant consequence is the fact that our constraints are strongly degenerate with the amplitude of matter fluctuations, parameterised by $\sigma_8$. Therefore, the agreement between our constraints on $1-b_{\scriptscriptstyle\rm H}$ and those found by {\it Planck\/}\ \citep{2016A&A...594A..24P} can be thought of as a proof of the consistency between the properties of the galaxy distribution in 2MPZ and WI$\times$SC~ and the CMB anisotropies. However, our results regarding the redshift evolution of $1-b_{\scriptscriptstyle\rm H}$ (or lack thereof), and the agreement of $\langle bP_e\rangle$ with existing heating models, are only possible thanks to the tomographic cross-correlation with the galaxy density fluctuations. Robust joint constraints on cosmological and astrophysical parameters could be achieved by a combined analysis of galaxy clustering, Compton-$y$ maps and gravitational lensing data (either from cosmic shear or CMB lensing observations). We leave this analysis for future work.
The scientific yield of these types of observations will increase significantly with data from current and near-future experiments, such as the Advanced Atacama Cosmology Telescope \citep{2016SPIE.9910E..14D} or the Simons Observatory \citep{2019JCAP...02..056A} on the CMB side, and the Large Synoptic Survey Telescope \citep{2009arXiv0912.0201L} or the Euclid satellite \citep{2011arXiv1110.3193L} in terms of galaxy clustering and cosmic shear. The main advances will come in the form of lower-noise and higher-resolution $y$ and CMB lensing maps, as well as a dense ($\sim30\,{\rm arcmin}^{-2}$) sampling of the galaxy distribution to much higher redshifts ($z\lesssim2$). This increase in sensitivity, however, will have to be accompanied by a much better control of systematics such as contamination from CIB and other sources, galaxy clustering systematics or photo-$z$ uncertainties.
\section*{Acknowledgements}
We would like to thank Boris Bolliet and Eiichiro Komatsu for useful comments and discussions. NK is funded by the Science and Technology Facilities Council (STFC). DA acknowledges support from the Beecroft trust and from STFC through an Ernest Rutherford Fellowship, grant reference ST/P004474/1. MB is supported by the Polish Ministry of Science and Higher Education through grant DIR/WK/2018/12. JAP was supported by the European Research Council under grant no. 670193.
The {\tt python} packages {\tt healpy}, {\tt numpy} and {\tt scipy} were used for data analysis, and {\tt matplotlib} was used to plot our results.
\setlength{\bibhang}{2.0em}
\setlength\labelwidth{0.0em}
# Topic:LaTeX

LaTeX is a markup language (as is MediaWiki!) for producing mathematical texts of the highest quality. Its use is widespread in the mathematics world. It is built on plain TeX, developed by Donald Knuth. You can embed LaTeX markup in MediaWiki with the [itex][/itex] tags.

## Creating a new resource

You are welcome to create a new learning resource for LaTeX! Complete, ready-to-compile sample scripts are very useful.

## Learning resources in Wikimedia

- b:LaTeX – featured book on Wikibooks
# Determining the limiting distribution

I just started on a course with stochastic processes, so I am really new to this and I have a problem with the following exercise.

A Markov chain $X_0, X_1, X_2, \ldots$ has the transition probability matrix

$$P=\begin{Vmatrix} 0.7 & 0.2 & 0.1 \\ 0 & 0.6 & 0.4 \\ 0.5 & 0 & 0.5\end{Vmatrix}$$

Determine the limiting distribution.

In my book it says that the limiting distribution $\pi$ is the unique nonnegative solution of the following equations:
$$\pi_j=\sum_{k=0}^N \pi_kP_{kj}, \qquad \text{for} \qquad j=0,1,...,N \qquad (*)$$
$$\sum_{k=0}^N \pi_k=1 \qquad (**)$$

By this I get that I have to write a system of linear equations, and this is the system I get:
$${\frac {7}{10}\pi_0}+{\frac {2}{5}\pi_1}+{\frac {1}{10}\pi_2}=\pi_0 \qquad (1)$$
$${\frac {0}{10}\pi_0}+{\frac {3}{5}\pi_1}+{\frac {4}{10}\pi_2}=\pi_1 \qquad (2)$$
$${\frac {3}{10}\pi_0}+{\frac {0}{5}\pi_1}+{\frac {1}{2}\pi_2}=\pi_2 \qquad (3)$$
$$\pi_0+\pi_1+\pi_2=1 \qquad (4)$$

I understand that my columns have to sum to 1 because of (*), (**), so in (3) I have 3/10 for $\pi_0$. I solve (1), (2) and (4) and the solution I get is $\pi_0={\frac{5}{11}}, \pi_1={\frac{3}{11}}, \pi_2={\frac{3}{11}}$, which is not the answer to the exercise in my book. The answer my book gives is $\pi_0={\frac{10}{21}}, \pi_1={\frac{5}{21}}, \pi_2={\frac{6}{21}}$, and I can't quite work out what I'm doing wrong.

- The matrix equation is $\pi P=\pi$, not $P \pi=\pi$. – Xi'an, Sep 15 '15

## Answer (Christoph Hanck)

Also, the $(1,2)$-element is 0.2, thus $2/10$, not $2/5$.

The invariant distribution can be written as
$$\pi ^{\prime }\left( P-I\right) =0^{\prime }$$
or, taking transposes,
$$\left( P^{\prime }-I\right)\pi =0.$$
Hence, $\pi$ is the eigenvector of $P'$ corresponding to the eigenvalue $\lambda _{i}=1$ (the implicit 1 in front of the identity matrix).

```r
P <- matrix(c(7,0,5,2,6,0,1,4,5),3,3)/10
q <- eigen(t(P))$vectors[,1]
(q/sum(q)) # normalize sum of probabilities to 1
[1] 0.4761905+0i 0.2380952+0i 0.2857143+0i
```

- I was a little fast on my fingers there on element (1,2), but how do you get that it can be written as that? Is it from (*) and (**)? – ilhano, Sep 16 '15
- It follows from writing (*) in matrix notation, by collecting the equations for all the $\pi_j$ in row vectors $\pi$ and $\pi P$ (or writing $\pi'$ for a row vector, as in my answer). – Christoph Hanck, Sep 16 '15
West Little Owyhee River is a tributary of the Owyhee River in the U.S. state of Oregon. The source of the river is at an elevation of near McDermitt, while the mouth is at an elevation of in the Owyhee Desert. West Little Owyhee River has a watershed.
The river begins east of McDermitt and flows east by Deer Flat and into Louse Canyon. Near Twin Buttes, it turns sharply north, still in Louse Canyon, which it follows through the Owyhee Desert all the way to the Owyhee River in Owyhee Canyon. The entire river is protected as part of the National Wild and Scenic Rivers System.
Overseen by the Bureau of Land Management, the river offers fishing for smallmouth bass and trout, and the canyon area is scenic. Dispersed camping is allowed, although the watershed has no developed parks or campsites. Other forms of recreation include hiking, backpacking, hunting, picnicking, and biking.
Named tributaries from source to mouth are Lake Fork West Owyhee River, Jack Creek, Little Spring Creek, and Toppin Creek, all of which enter from the right bank. Further downstream, Cave Creek enters from the left.
See also
List of rivers of Oregon
List of longest streams of Oregon
References
External links
Owyhee Watershed Council
National Wild and Scenic Rivers System
Rivers of Oregon
Owyhee River
Rivers of Malheur County, Oregon
Wild and Scenic Rivers of the United States
Turnov (; ) is a town in Semily District in the Liberec Region of the Czech Republic. It has about 14,000 inhabitants. It is a traditional centre for gemstone polishing, glass craftsmanship and arts. The town centre is well preserved and is protected by law as an urban monument zone.
Turnov lies near the Bohemian Paradise Protected Landscape Area which makes it a place for tourists and summer residents. The town is also an important traffic crossroads of three railways and the Prague–Liberec highway. Turnov has a large museum, three galleries, six churches, and a synagogue. The small old town of Middle Ages urbanism is surrounded by modern garden neighbourhoods and large parks representing an organic connection between urban areas and nature.
Administrative parts
Villages and town parts of Bukovina, Daliměřice, Dolánky u Turnova, Hrubý Rohozec, Kadeřavec, Kobylka, Loužek, Malý Rohozec, Mašov, Mokřiny, Pelešany, and Vazovec are administrative parts of Turnov.
Geography
Turnov is located about south of Liberec. The Jizera River flows through the town. It lies in the Jičín Uplands. The highest point is the hill Cestník at . Turnov lies at the edge of the Bohemian Paradise Protected Landscape Area.
History
Turnov was founded as a Bohemian town around 1250 by Jaroslav and Havel of Markvartice on a spur of rock overlooking the Jizera River. A Dominican cloister was founded by Saint Zdislava, wife of Sir Havel. During the Middle Ages, Turnov came into the possession of the Wartenberg and Smiřický noble houses. The medieval town was frequently vulnerable to fires – it was burnt by Lusatian crusaders in 1468 and during the Thirty Years' War by Swedes in 1643, as well as a conflagration in 1707.
Turnov has long been known for its expertise with gemstones. It attracted many medieval craftsmen and artisans who produced jewelry out of the local Bohemian garnet. The first European technical school for the processing of gemstones, metals, and jewelry, nowadays the Applied Arts Secondary School, was founded in Turnov in 1884 and still exists as one of the best schools of this type in the world.
Jewish community
The Turnov Jewish community was first documented in 1527. After it ceased to exist at the turn of the 16th and 17th centuries, new Jewish settlers were invited to the town by Albrecht von Wallenstein in 1623. The Jewish ghetto was established in 1647. Most of the Jewish population were killed during the Holocaust and only 19 of them returned to Turnov after World War II. The Jewish community officially ceased to exist in 1961.
Demographics
Sights
The Renaissance town hall in Turnov dates from 1562, while its three historical churches date from throughout the 14th–19th centuries. In a suburb lies the Hrubý Rohozec Castle, built in 1250 and later reconstructed into a château; today it is admissible to the public. The municipality itself is now the owner of the Valdštejn Castle, the cradle of the famous Wallenstein family, which is also open for tourists.
The former synagogue in Turnov dates from 1779. Between the 1950s and 2003, the building was used as a warehouse. In 2003, the building was bought by the Town of Turnov and restored to serve as a concert venue and a memorial. The Jewish cemetery was founded in the 17th century. The oldest preserved tombstone dates from 1649.
Museum of the Bohemian Paradise in Turnov has a significant collection of gemstones and jewelry, as well as exhibits on geology, archaeology, and folklore. It was founded in 1886.
Notable people
Josef Pekař (1870–1937), historian
Jan Košek (1884–1927), footballer
Jan Patočka (1907–1977), philosopher
Alexandr Kliment (1929–2017), novelist
Jan Farský (born 1979), politician
Roman Koudelka (born 1989), ski jumper
Adam Helcelet (born 1991), decathlete
Twin towns – sister cities
Turnov is twinned with:
Alvesta, Sweden
Idar-Oberstein, Germany
Jawor, Poland
Keszthely, Hungary
Murska Sobota, Slovenia
Niesky, Germany
References
External links
Official tourist information portal
Museum of the Bohemian Paradise
The Bohemian Paradise
Cities and towns in the Czech Republic
Populated places in Semily District
Populated places established in the 13th century
# BOOKS BY DON McKAY
POETRY
_Strike/Slip_ (2006)
_Camber: Selected Poems 1983–2000_ (2004)
_Another Gravity_ (2000)
_Apparatus_ (1997)
_Night Field_ (1991)
_Sanding Down This Rocking Chair on a Windy Night_ (1987)
_Birding, or Desire_ (1983)
_Lightning Ball Bait_ (1980)
_Lependu_ (1978)
_Long Sault_ (1975)
_Air Occupies Space_ (1973)
ESSAYS
_The Shell of the Tortoise_ (2011)
_The Muskwa Assemblage_ (2009)
_Deactivated West 100_ (2005)
_Vis à Vis: Field Notes on Poetry & Wilderness_ (2002)
Copyright © 2012 by Don McKay
All rights reserved. The use of any part of this publication reproduced, transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, or stored in a retrieval system, without the prior written consent of the publisher – or, in case of photocopying or other reprographic copying, a licence from the Canadian Copyright Licensing Agency – is an infringement of the copyright law.
Published simultaneously in the United States of America by McClelland & Stewart Ltd.,
P.O. Box 1030, Plattsburgh, New York, 12901
LIBRARY AND ARCHIVES CANADA CATALOGUING IN PUBLICATION
McKay, Don, 1942–
Paradoxides / Don McKay.
Poems.
eISBN: 978-0-7710-5511-9
I. Title.
PS8575.K28P37 2012 C811′.54 C2011-904425-0
Library of Congress Control Number: 2011931127
We acknowledge the financial support of the Government of Canada through the Book Publishing Industry Development Program and that of the Government of Ontario through the Ontario Media Development Corporation's Ontario Book Initiative. We further acknowledge the support of the Canada Council for the Arts and the
Ontario Arts Council for our publishing program.
McClelland & Stewart Ltd.
75 Sherbourne Street
Toronto, Ontario
M5A 2P9
www.mcclelland.com
v3.1
_For Marlene_
# CONTENTS
_Cover_
_Other Books by This Author_
_Title Page_
_Copyright_
_Dedication_
Disclaimer
As If
I
Song for the Song of the Canada Geese
Slow Spring on Vancouver Island
Song for the Song of the Sandhill Crane
Forlorn
Song for the Song of the Common Loon
Ravens at Play over Mount Work
Juncos
Song for the Song of the Purple Finch
II
On the Barrens
Alias Rabbit, Alias Snowshoe Hare
Porch
Sleeping with the River
Batter –
Eddy Out
Apparition
Sleeping Places
III
Deep Time Encounters
Labradorite
Mistaken Point
_Paradoxides_
_Cephalon_
_Thorax_
_Pygidium_
Tuff
Snowball Earth
Rock Flour
Crinoid
Gjall
The Wopmay Orogen: a field trip
IV
Thingamajig
_To Clasp_
_To Step_
_To Rock_
V
Taking the Ferry
Descent
_Notes_
_Acknowledgements_
This title contains long lines of poetry. The line of characters below indicates approximately the longest line in the text:
with manic homebound Newfoundland-and-Labradorians,
To most accurately reproduce the layout of the text on the printed page, you may choose to decrease the size of the text on your viewer and/or change the orientation of your screen until the above line of characters fits on a single line. This may not be possible on all e-reading devices. Viewing this title at a higher than optimal text size or on a screen too small to accommodate the longest lines in the text will alter the reading experience and may cause single lines of some poems to display as multiple lines of text. If this occurs, the turn of the line will be marked with a shallow indent.
# AS IF
Play it con brio, a muscular
iamb, a frisbee sizzling –
_as if_ – into no man's land,
an emptiness unfurling fast and
fernlike. Last winter, from a cliff
along the coast, I saw a Milky Way
strewn lavishly across the cove,
twinkling in the chop.
It was cold, and so
some moments before my stiff fingers
unburied the binoculars and found it
to be eiders. In their black skipper's caps
they scudded the waves, cold's own creatures,
their white chests flashing in the slant sun,
until, as at a signal, with a move
part gulp, part slurp, each, one after the other,
dove, like this: as if, as if, as
if that surface were the border –
suddenly porous –
between yes and no, so
and not so.
# I
# SONG FOR THE SONG OF THE CANADA GEESE
Something of winter, something of winter
again, something of that famous mortal reed
making an oboe of the throat.
As though the soul – not
so much in pain as under pressure –
yelped. Angst,
angst, bite-sized bits of loneliness sent
back to the heartless skies
they fell from, giving grief
its rightful place among the elements.
So what
if they waddle, shit
gooseshit on the grass all summer then neglect
to migrate? Were the geese to quit
their existential yammer, talk
would also cease, each would-be dialogue collapse
into its own hole. Where there was ivy,
ice. Ice
where there was moss.
All praise to the geese
in their goosiness, to the ragged arrow that is
and isn't eros.
# SLOW SPRING ON VANCOUVER ISLAND
In the understory, _sotto voce_ ,
crypto-birds rehearse. Is
that you, Junco,
setting your Hopkins-self aside
to sip-sip-sip so
generically? That you, Varied Thrush,
clearing your throat ad
nauseum, uncertain
as the rain that quits, dithers,
threatens, finally
compromises on the drizzle into which
your indecipherable ciphers fit like inter-
office memoranda?
Over the dun
duff of the forest floor one alder leaf –
thinned by winter to its skeleton –
hangs like a glyph.
Foliose lichens urge their hypergreens.
One day soon –
so goes the tale – Junco's voice
will quicken into trill, its quick lusts
gargling. Varied Thrush
will thrust its whistle-hum
frankly into the mix, and that last leaf –
like an icon suddenly
relaxing to cliché –
uncling. And then – by
the Jesus we'll be on our way.
# SONG FOR THE SONG OF THE SANDHILL CRANE
It eschews the ear,
with its toolshed, its lab, its Centre for Advanced
Studies in Hermeneutics and Gossip,
to boom exactly in my thorax,
rattling the bones and waking the baby. Garroo:
the _o_ 's are caves of lunar gravity, the rolled _r_
recalls the ratchet of life and death.
Why am I standing on this frigid porch
in my pyjamas, peering into the mist
that rises in little spirals from the pond?
Where they call from the blue
has nearly thinned to no-colour-clear.
Where
they call from hominids haven't yet
happened. Garroo:
who can bear those star-river distances?
I'm so lonesome I could die
happy.
# FORLORN
The very word is, if
you ask me, like a horn –
fog, French, krumm, _cor_
_anglais_ , or car – depending on the timbre
and accent of its native loss.
It's never like the bell
that tolled Keats back to life the night
he nearly OD'ed on The Nightingale.
Anyway, what odds? It tolls
for him but honks for me, a closed nasal
existential echo, not quite
recovered from that nasty cold.
Forlorn:
the bare unfaeried self re-pots us
in our deaths as into humus.
Not _lonely_ , twanging of teen angst
and Nashville. Not _solitary_ ,
with its would-be-Thomas-Merton air
of being the best graduate student He
has supervised in eras.
Forlorn:
it is 2:45 a.m.
again. Noises, some like itch,
some like scratch, surround the cabin.
One rises in a hiss
(snake? bird? cat?) over
and over until I'm up, irascible,
up and out, dammit, with the flashlight
stumbling toward the source.
As though whatever it is
started to say "curse" then
switched to "kiss,"
then "ship." The flashlight poking tunnels
into the dark, selecting arty angles
through the foliage, and finds them – there,
huddled on a branch, two grey lumps,
staring down the beam like fluffy
wide-eyed monks. Owlets,
I'm guessing barred, out of the nest
but not yet fledged, still
begging for food from the ruthless mother,
who is elsewhere.
Darkling, I listen,
switching off the spot. The hiss
of hunger, separation, and – to insert
a personal note – sleeplessness. _Ksship_ :
how to translate that?
Forlorn, of course,
the very word.
# SONG FOR THE SONG OF THE COMMON LOON
If that's the word:
the song's already gone
before it's uttered so the ear is left
full of its emptiness,
bereft.
It seems the loon
opens its throat to some old
elemental wind, it seems that time
has finally found a syrinx and for a moment
lets itself be voice.
What perilous music!
Surely, like Odysseus, we ought
to stop our ears against this feral ultrasound
with its dreadful
diagnostic reverb? But no,
we would rather be stricken, rather suspect
that the spirit also is a migratory species,
that it is right now flying to Star River –
as the ancients called the Milky Way – that
in fact it is already there,
yodelling for no one and ignoring us,
the collectors, with our heads full of closets,
our hearts full of ovens,
and our sad feet.
# RAVENS AT PLAY OVER MOUNT WORK
The power dive, the clash-and-roll, the steep
veer that limns the knife edge
hidden in the wind, the proto-
contrapuntal game of tag:
this is the improv
at the heart of things, this stirring up
of trouble, this festival of riffs.
They jam until the air
is pregnant with polyphony, with Scott
LaFaro meets Bill Evans, do-si-do
meets alley-oop I love you Abbott
and Costello, flirt with Escher, flirt with
Jackson Pollock, hold the moment in your wingpit, then
buckle, fold, toss yourself like a crumpled draught
and come to roost.
To make the spirits
envious. To make them laugh. What if
we snuck up on the minuet
and goosed it with this jalapeño?
What if we stole those thick
black eels that live inside despair
and ate them like electrical
spaghetti? _Yowp_. Such schemes they
palaver in Polyglot, having scavenged Yiddish,
Irish, !Kung, English, and
Inuktitut. Intro-
aggroverts of small-b being,
they can project a _kark_ across the Malahat or swallow it –
_glug_ – like a melancholy
clock. Glottal stop,
glottal glide, doorbell
crossed with oboe, oboe
crossed with short-eared owl. Tók.
To make the spirits
give themselves away.
With a rustle of wings like a whispered
death wish one of them swings
down to check me out,
perched on the summit writing
_yowp_, writing _tók_ and _wörk_, writing what
would it be like to be so casual and acute
in my little blue notebook filled with
phrases, numbers, recipes,
and to-do lists.
# JUNCOS
Where "shades of grey" acquires
_esprit_: slaty, dark-eyed,
sooty, dapper, hooded –
quick bits of dusk stitching fir
to birch to rock the boreal
embroidery.
All winter
they animate the understory, inscribing
runic ciphers in the snow
and discrete diacritical _chips_
along their flight paths.
One oriole,
it is said, can shift the heart
into its own outcry.
But it's the juncos,
in their undertaker outfits,
who slip unnoticed into melancholy
smuggling minims of lift.
Bless them. They exit
with a wink, tail
snapping open like a card hand to reveal
white feathers at each edge:
Come spring
they'll find the tallest spines of spruce
and trill the sun from one
saw-toothed horizon to the other.
# SONG FOR THE SONG OF THE PURPLE FINCH
Honey, if you had some of this in a carafe
you could mix yourself a comic opera out of
willow willow willow, chemotherapy, and
washing your socks in the sink.
Know what I mean? I mean what-is is
perched on the precipice where chat
breaks into song or it may be laughter
breaking into Welsh or Aunt Clara's pure
lyrical harangue breaking like combers over
Uncle Archie in her kitchen. I mean life
for sure is tragic but honey you
aren't. Here's your purple guitar. Adios.
# II
# ON THE BARRENS
Ghost-grey, tipsy in its V,
the harrier hovers, wavers,
glides, an ever-adjusting
motion sensor scanning the crowberry-
cranberry-blueberry-goowiddy-
bottlebrush-tuckamore
tapestry below, then, angling sharply,
sideslips out of sight
and into someone's near
or sudden death.
I lower the binoculars,
craving the dwarf subspecies
of myself, like the sideways birch
crawling the outcrop, or the fir
whose one-way branches obey the wind,
even when, as now,
it's gone off somewhere to replenish huff.
To be fit
means to be in shape but also
to be shaped by the weather's
rough injustice. Means
grow tensile, grow vines
to hyphenate with neighbours
and resist. Means thin
the spirit, or,
if that isn't spirit, then
whatever it is that's whet
and spare and whisky-fierce within.
# ALIAS RABBIT, ALIAS SNOWSHOE HARE
All winter we watch for a clear night
with a full moon, when we will head off
into the woods to wait,
standing on contraptions that mimic, stiffly,
your long spring-loaded paws.
The sparse spruce will cast shadows
so black they fall right through the earth
into infinity. Between them the moonlight
will mix with the snowlight as the ghosts
of a mother and her daughter who at last
embrace. And after we have stood a while
with the cold speaking in their own tongue
to our bones, you will leap across our path
in effortless arcs, white on white on black
on white, each spring a fluid coiling-up of power
for the next. As you exit
we'll be left with your afterimage,
and the cold, and the flask of twelve-year-old
Macallan in the backpack. But for now
we watch the weather and the calendar
and read about you in the library,
and eat you, with mushrooms, onions, and a little sherry
in the stew, and follow your asymmetrical prints,
hind feet bounding past the fore, and once
we come upon the strike, a sudden full stop
where an owl has gathered up your
story into hers, and we wonder,
once again, if that night will ever come
when our path crosses yours, either up
in the country or down in some deep
articulate midwinter dream.
# PORCH
The tractor harrowing one farm over
coughs and quits. Now there's only tree frogs
and the thin half-moonlight pewtering the leaves
and glinting off the fender of the rental.
The tree frogs chant in phase, then out,
then in, as though composed by Philip Glass.
I've read that Lao Tzu,
leaving community for wilderness,
paused at the border, where the guard asked
for some record of his teachings.
So: the _Tao Te Ching_. The thin line
between lost past and dim future opens
into an evening and a porch
on which to rock and listen,
listen and rock. Lao Tzu's actual existence –
the path-following, border-crossing one –
is, so I've also read,
doubtful, like Robin Hood's and Homer's.
In close-up, and in memory,
the tree frog wasn't really credible,
a translucent elf from some outer space,
splayed, finger pads extended,
on the porch screen. I gaped at it;
it gaped into the wide night.
The tao that can be spoken
is not the true Tao: so the sage,
who probably did not exist, and with
exquisite paradox, began.
I slipped off to fetch the camera
and when I got back it was gone.
# SLEEPING WITH THE RIVER
All that winter as the rains arrived,
sometimes as nobody's footsteps,
sometimes ack-ack, sometimes
hard bits of Braille flung at the house,
the mailbox, the woodshed, at the car parked
in the driveway, at all that is solid, all
that winter leaving the window open to its
pizzicati, hearing them accelerate and blend and
drown in the river's big
ambiguous chorus, all that winter being
swept asleep thinking river is only rain
that has its act together, song that has never
passed through speech, unschooled,
other-than-us, thinking
this must be the voice of what-is as it
seizes the theme, pours its empty opera,
pumps out its bass line of sea-suck and blues.
# BATTER –
that's the name, I'm thinking,
for the huff-and-buffet
rhetoric that fulminates against me, me
and every other smart-arsed upstart
lover-of-the-vertical who ventures
up on the tolt it scours
and sculpts. Across Conception Bay it gathers wrath
and hurls it, a tirade so pauseless,
so pressure-hosed that listening's impossible
and mandatory, the poor mind veering King
Learily into synch, unbonneted,
banging back and forth like bad hockey.
Already, in deference, I've doffed
and packed away my hat and glasses, now
it wants me bare and walking-stickless,
wants me smeared like flesh-and-calcium pâté
across the rough volcaniclastic ridge.
To catch my breath I crouch
in the lee of an impeccably poised
erratic, an elephant _en pointe_, CFA,
emplaced by a glacier with a raven's
drastic sense of humour.
In a moment, once I've regathered mass
and gravity, I will arise and lurch
up onto the crest, heading, with my
squat-hunched stagger, for the shelter
of that patch of tuckamore –
the bristling ancient quasimodoed
hedgehog of a life form
that lives here.
# EDDY OUT
Late fall, rain so thorough
everyone is glad it's not the snow
it is, so says the radio,
in Happy Valley-Goose Bay.
It sheets the study window
and sets the sump pump
humming in the basement.
Something in me,
nameless and familiar, stirs,
unsettles, flaps into action like a one-winged
gull. How many winters more
before I seamlessly shift, a snowshoe hare's fur
passing into white?
Beware an idle wish,
I tell myself, and don't go plunging into snow tire–
firewood–long john frenzy either. No,
this is the time to summon old
warm-blooded silences, air pockets that hold heat
and buoyancy at once. Not stories, mind,
but their pauses, strung like the bladders
on a long frond of kelp.
Without them, a lifeline
shrinks to bio, c.v.,
obit. Deaf
to the cupped hush between the winter wren's
cadenzas. Immune
to our own music's held breath,
when it swims in its underworld
and we wait in safe
aesthetic anguish.
So I call up that time –
remember? – late, after "Goodnight, Irene," guitars
back in their cases and gutbucket set aside,
the kids and dogs exhausted,
when we dwindled outdoors to find ourselves
under an hysterical sky, aurora
shooting in aqua sheets, an ice cap
suffering a migraine. As though we'd stepped,
like inattentive tourists, or Duncan Campbell Scott
on one of his imperial canoe trips,
into Norval Morrisseau's medicine.
That one.
carried her outside and pointed up,
feeling like a man exposing photo-sensitive
paper. And still another when,
next day at brunch,
she remembered nothing til I asked
if she'd had any interesting dreams,
and her eyes turned in, and fish
were swimming in them, and she said, "Oh
_that_."
That.
How words, grown corporate or brash,
crave respite from themselves,
how they long to open,
ooid as an aging mooseprint,
into unfamiliar vowels.
As the rain,
like a conversation turning mean,
slides into sleet, I eddy out
into those pauses – uterine, caesural,
mammal. Hold them, memory,
breathe fresh air into their emptinesses
while they keep my heavy
history-laden life
afloat.
Then, snow tires.
# APPARITION
Half an hour following, on faith,
the car's blunt nose and the fog's become
the stuff that ghosts are made of, and,
off duty, fade back into. Gauze,
mothwing, vagueness, cliché,
inkling: what half-formed spirit will it usher
into our little séance as we creep
our creepy way across the barrens?
But then, as if to show it could concoct acute
as readily as nebulous – what?
We lurch to the verge:
foxes.
Four of them, dawdling, hanging out, doglike,
catlike, this one scratching an ear, that one
nipping a sibling in faux-fierce combat,
taking their talent for granted.
Who could invent a creature
that lallygags with such élan?
Now and then
one glances over, curious, I guess,
about this fog-conjured audience,
and weighing the merits of a Hyundai Sonata
as a source of food or fun.
Inside it we are rapt, two feedback loops
poured into the binoculars and re-imbibed
as sharpness – ear, paw, whisker,
nose. Then something offstage calls
and, like that, three vanish,
gone like luck.
Only the brindled kit
side-trots up the verge,
its lavish brush floated like applause
as though that pent wit
bloomed, what
is this thing called love,
anyway? It dives
into the alders and we sit,
ignition off, attending to whatever else
the fog might slip from those
supposedly empty sleeves.
# SLEEPING PLACES
_Nature loves to hide_
– Herakleitos
What is nothing doing,
there in the pressed grass,
there in the bent-over reeds,
in the slightly scuffed ground
and four-leaved cruciform
bunchberries? Something whispers here
so softly it's dissolving
even as the camera clicks:
catch-and-release, it says, place
is gesture, is delible, the rumple of a moose-bed,
the bower left by lovers, the punched strike
printed by a hunting owl in snow –
a punctuation mark whose sentence has flown off,
the faint strokes left by its wings
already fading. It says
this memory is earth's,
not ours. Relinquish the plot. Uncuff
the hero, his precious flaw, his gift-wrapped
catastrophe. Release the crime scene
television loves, with its frame
of yellow tape, its splayed,
awkwardly outlined corpse nicknamed The Vic
and sentenced to publicity.
Let them slip –
practise this –
slip –
back to the unwritten:
that place where place
sleeps, where sleep itself
seeps into the landscape, having scrawled
in the pressed grass, in the bent-over reeds
its auto-erasing name.
What is nothing doing? The antique
riddle. The old
ungettable joke.
# III
# DEEP TIME ENCOUNTERS
_There is no silence in the world_
_Like the silence of the rock before life was_
– Robert Hass
Every dose is overdose,
every thing that's done's
done to death.
Good old ineffability –
that fine froth, that gossamer cliché –
runs amok and bites you, _there_ ,
somewhere secret, somewhere
in the ancient backstreets of the brain
where pleasure and pain promiscuously
mix. Ordinary stone
turns to the time it's made of,
each empty _O_ a lens,
and _why is there not nothing_ arcs,
its first full dolphin,
through the mind's stunned air.
Long pause. Well?
Then that depopulated silence.
That darker dark.
# LABRADORITE
Frostbitten light; shy
hologram; oil spill
practising za-zen.
Always inward, always
aslant, and in that sense
reflective. Awry.
Spirit beings of aurora borealis,
say the Inuit, remain
trapped in the rock, still
dancing the dancerless dance.
Not as diamond,
flashing superlatives, nor acute quartz
singing the rhombohedral madrigal
called amethyst. My field guide
says its schiller rhymes
with the Morpho butterfly's intense
prismatic blue.
But I wonder:
something in that glance is fell
and full of darkness, the _duende_
of The Land God Gave to Cain,
as though in that moment it were
stalking a possible Persephone.
To me
it seems that all the elegies
I haven't written wait, not
patiently, inside otherwise plain
plagioclase feldspar. In my kitchen
a small slab serves as trivet, its bronze-
blue phenocrysts winking at the coffee pot
and jam jar. And me,
silently disputing with my dear
difficult departed ones.
# MISTAKEN POINT
As in a genteel living room,
a sterile lab, or mosque,
we have doffed our boots,
and pad across this rock slab
in the sky-blue booties supplied.
Around us, mist.
Underfoot, petrified deep time rises in welts
to prod our soles, here and there
breaking into sudden bas-relief:
a fernlike creature, a creature
like a picket fence, a shrub, a miniature
Christmas tree, a pizza disk – preserved,
like Pompeii, under the cushion of volcanic ash
that killed them. Earth's earliest animals,
says the brochure, Precambrian, pre–Burgess Shale,
five hundred and sixty million – but as usual
my mind is boggling, Googling vainly in the Zenosphere,
finally it files this in a shoe box, taped shut,
and tagged like a rogue elk's ear,
somewhere near infinity.
Back here,
in the Anthropocene, South Avalon, July, the mist
is thickening to drizzle. The bedrock darkens,
deepening the contrast. What shall we call
this antique frond, part fern, part feather,
part Art Nouveau and brand new Braille,
urgent and enigmatic as an oracle?
# _PARADOXIDES_
## _Cephalon_
On the day we found the trilobite and took its photograph, we had already been to visit the gannet colony at Cape St. Mary's, so you can imagine us picking our way along the foreshore below the cliffs with thought balloons over our heads, and in each a scribble of elliptical flight paths, orbits left by the gannets wheeling around the nucleus of their tall nesting rock. Imagine their black-tipped wings like long sensitive scythes, their stretched necks faintly yellow as though dusted with pollen. From a distance, their cacophony resembles the ratchet ratchet of an old threshing machine; closer up it's _more more more_, or maybe _here here here_, the urgent, impacted birth-and-death cries of beings who will, once departed from their home rock, be mute. The place dense with energy, taboo, as though we'd stepped inside an atom, chaos and order in tense standoff, calling directly into the open ear of our DNA: the sort of place where beauty teeters giddily on the brink of terror.
So it was a relief to let that potency recede, to embrace an ordinary walk and contemplate something as quotidian as lunch. We ambled along the cliff bottom, clambering over squarish blocks of rubble, strolling the flat water-smoothed shale – some russet red, some blue grey – which sloped into the sea. There's a lustre rising in the shale that, were it flesh, we'd call a blush, since it suggests some inward softening, some memory or hope coming to the surface. And, although the ocean's wash-and-withdraw was a constant reminder, it was hard to imagine that water had transformed the rough cliff without some answering agent, some reciprocal hankering after smoothness. Eros, erosion. So it seems.
We sat on those smooth boulders to have trail bars and tea. And then, a few paces away, we spotted the trilobite sprawling in the shale – bold, declarative, big as my hand and just as complicated. It seemed the shale had suddenly broken into literacy, publishing one enigmatic pictograph from a secret alphabet. Suddenly it was refusing relegation to raw material. Suddenly it was demanding to be read.
## _Thorax_
For they are local and exotic
For they anticipate lobsters, the Pre-Raphaelites, the tenor saxophone, and the buckskin jacket
For they are _seemingly absurd though perhaps well founded_
For they appear like a fully accoutred medieval knight stepping onto a nearly empty stage
For they are elegant and monstrous
For their pleural spines extend past the thorax _like the kind of drooping moustaches sported by bad guys in westerns_
For they are local and exotic
For _the paradox is the source of the thinker's passion, and the thinker without a paradox is like a lover without feeling_
For they index both the micro-continent of Avalonia and the Mid-Cambrian Period and so situate us in space and time
For they dislocate space
For they infinitize time
For _the immense odds against its occurrence in the rock record_
For hexagonal calcite eyes, which evolution never happened on again
For they pose the problems of mind and body
nature and culture
rock and stone
substance and accident
mysticism and materialism
allochthon and autochthon
dressed and overdressed
five hundred million years before the first false dichotomy appears in the Anthropocene
For they mean yet do not speak or write
For they are elegant and monstrous
For sometimes I hear _the mind my former lives all share_.
## _Pygidium_
You pose on my desk in the photograph,
a riddle, an odalisque, a rune,
one plump cipher from a long-gone
semiotic system. Cryptic and Sapphic,
at once emerging from the stone
and scuttling into it, you earn
each micro-quantum of the consternation
promised by your name. The more I learn
about you and your family – e.g.,
your eyes were calcite crystals, spars of rock
arranged to transmit light, unique
in all of animalia – the more piquant
your present absence. Friend, stranger, paradoxidid,
I wave one jointed arm.
I wink one endothermic lid.
# TUFF
Orts, scraps, coughs, dust, draughts,
ejectamenta of the earth,
unite: such is the call of tuff,
which is to ash as Che
to _campesino_ , making mountains
out of next-to-nothings.
Call it igneous,
since born of fire, sedimentary,
for its packed glued particles,
or even metamorphic for a lifeline
that out-Ovids Ovid.
Hot cumulous dust clouds
boil into the stratosphere to cool, recall
their humble roots and fall, layer
upon layer, the planet snowing its own
ashes on itself. Then to be tamped,
pressed by the weight of eons
into toughness, stressed strained
sheared uplifted folded faulted by
tectonic force; eroded by rain;
cracked by frost; rearranged
by glaciers.
Nothing never ends,
it says, catastrophe accumulates, the lost
decline to stay lost and return
like dying and reviving rock bands.
Who needs ghosts when matter
nonchalantly haunts us?
The summer before last
we shifted a big tuff boulder up the river,
heaved, levered, nudged it inch by inch,
to make a footing for the bridge.
Drilling holes for bolts,
we dribbled water on the Carborundum bit
and wore down several.
After, the ash at the bottom of each hole
was grey-white, fine as talc, and smelt
like a match you'd just blown out.
# SNOWBALL EARTH
1.
Once, way back when,
winter won.
Earth was hard as iron,
water like a stone for something like one
hundred million years. Persephone
must have been depressed,
and postponed all her travel plans
indefinitely. The long commute. The stress
of being sexy and bipolar, bouncing gloom
to joie de vivre to gloom the same
old same old volleyball. Enough.
She settled down to life with grim
ultramafic Death.
Up on Earth
the glaciers grew to ice caps, snow
on snow, the ice caps to ice fields,
ice prairies, pampas, veldts, albedo
ramped to the max, Sun's
billets-doux returned to space unread.
So much for that biosphere,
some passing astral being might have said,
inert and gorgeous, a dead movie star,
a tempting but inedible meringue.
And they'd be wrong
in the long term. Why? Let all who dwell
on the blue-green planet celebrate
the mother magma churning at its heart,
the home fires that kept burning and at length undid
that cold Precambrian spell.
2.
In the middle of the frozen pond
we pause: blow noses;
tighten snowshoes. Around us
snowdevils skirmish and disperse.
Loose tresses sift, braiding, un-
braiding, and where
the ice is bare the slant sun,
like a glass eye,
glances. Biology is elsewhere,
busy with its death-birth
buzz. Here we are simple citizens
of Snowball Earth, the cosmic disco ball
and nun. Listen: that mix
of hush and scratch is time
clocklessly elapsing. In a minute
our mammal-selves will come back
bearing tales of frostbite
and heartbreak. For now
just winter pre-echoing the infinite.
# ROCK FLOUR
Far off, from the highway,
it seemed just road dust
raised by construction. Closer,
it became a tattered curtain
drawn between two bare black St. Elias
mountains, then a dirty grey
disorderly parade of ghosts
descending from the ice field on its
katabatic wind.
_Loess_
said the guide in the interpretive centre,
rock that's been milled and
remilled by a glacier to a silt so fine
it flies on a whisper. Loss
with its _o_ pinched
like the half-closed hole on a flute.
_Où_
_sont les roches d'antan?_ I thought
as I looked back, and down,
from the shoulder of Sheep Mountain.
Ask the motes of the air.
Ask Kluane Lake,
whose milky blue-green tinge
is light plus loess plus water.
Ask the short-faced bear, whose bones
lie buried in Beringia's
surprisingly fertile plain.
# CRINOID
A fossil, preposterous
and common, light
as a dime, as infinity's
poker chip, a grey
Tylenol-sized disk you can
slip into your pocket
or cup in your palm.
Turn it on end,
you can see where a delicate fish line
ran down its core. Reel it in,
you'll haul up Ordovician oceans
where they boogied and grew, vertebrae
with frondlike arms and bloomlike heads
asway in the tide that fed them, as the mind
of Wang Wei in the ever-adjusting
wind.
O Chordates, you'll exclaim
to our distinguished many-membered phylum,
spare a moment to applaud
this alien flowering spine.
O Elvis,
wherever you are,
shake with the snakes that first
shook it.
# GJALL
Among the many forms into which lava may harden – cinders, tubes, columns, ash, pumice, glass – this one seems the least doctrinaire, the least likely to endorse the orthodox axiom that A is A. Light in the hand, already half in love with other elements, it bears swirls that resemble flickers of flame, and blobs like the congealed drops on the outside of a paint can. Inside, it is packed with vesicles, and sometimes with large smooth-sided hollows like the inside of a nutshell. It might be the material pelt of a burst bubble or the chrysalis left behind when some rock-moth hatched and flew off.
Other igneous rocks – gabbro, granite – identify themselves as citizens of deep time, and remain its devout parishioners. Gjall joins us in history, already wearing the insignia of shift that others will later have thrust upon them by the soft persistence of erosion. "The turnings of fire," says Herakleitos, "first sea, but of sea half is earth, half lightning storm." When it flew from the volcano it was as phlegm, rock froth, the equivalent of ocean spray that leaves salt-suds like grounded clouds snagged in the driftwood and alders. Now it lies among the burnt-out rubble of the lava field, testimony that negative capability is possible even for rocks, that there is no quarter of the perplexed earth not afflicted with longing.
# THE WOPMAY OROGEN: A FIELD TRIP
_Because it's there_.
– George Mallory
Expect to see the absences
of many alps, their peaks
humbled to tundra. Also the roots
of cold volcanoes, plus the adjacent rocks
they cooked, then the many-faulted, flattened
fold-and-thrust belt, and even the descendants
of molasse deposited offshore. In short
a tale of drastic rise and fall,
like "Ozymandias" or Freytag's Pyramid,
or Gibbon.
Every valley, Isaiah said,
shall be exalted, and every mountain
and hill laid low. So also fine Old
Testament hyperbole shall be made mundane
as the Weather Channel's forty per cent
probability of showers.
All this
shall be raised up again
on the exam. Answers
may be carved in granite, writ
on water, or delivered as a lecture
to the air. Because it was, because it is,
because it isn't there.
# IV
# THINGAMAJIG
_There are many intersections in the ways of ongoing flux, places of steady but impermanent homeostasis_.
_These are called things_.
_Thing (Old English): an assembly, a gathering_
_Thingan (Old English): to invite, to address_
_Althing (Icelandic): the parliament_
_An object is a thing that has been removed from its party line of rhizomes, hyphae, and roots, and treated to public scrutiny – framed, analyzed, experimented upon, known_.
_An object is a thing under surveillance_.
_Something is lost, some_ thing _is lost, when a thing is made into an object_.
_We mourn the lost thing, even as we pursue the inescapable human work of objectification_.
Homo faber tristis.
# _To Clasp_
1.
Handhold-to-go, spare spine,
trail buddy, measure-minder,
prod: my palm, I trust,
will not forget you – aspen-soft
and slightly soapy, as though you'd paused,
musing, en route from wood to muscle
or vice versa. Nor will my arm and shoulder
lose the slight give you gave –
a shrug or nod –
as you took my sloughed-off weight.
Now retired, four-fifths fetishized,
you lean in my kitchen,
still wearing the duct tape I applied
that time I stepped back (eyes
loving only the bird in the binoculars)
and cracked you. Renewed apologies,
although you must admit
it did improve your flex,
and now you wear that silver wrap
as sash, not bandage.
Here's to us –
I raise my coffee cup –
here's to the brotherhood of sticks and bones.
2.
On a steep ascent
we made ourselves machinery,
plant and hoick, plant and hoick,
we hauled the species by its scruff,
by its gristle and thew,
up to the viewpoint.
Fording a creek you were the brace
that bore us, teetery,
over. Along a level trail
your swing-and-touch would
counterpoint the pace, now and then
pointing – a raised baton – toward some
rustle in the brush or an especially louche
lichen. Off duty, you'd lean on a trunk,
no doubt recalling an illustrious forebear –
the alpenstock, the crook,
the lever that could lift the world,
or the rod that smote the rock
to make the water flow –
while I regathered breath, reread the map,
and drank.
3.
How we met. 1986 or '87 it would be, and in that stretch of the Pukaskwa Trail between Willow River and Oiseau Bay. What I recall about that trip is the plainsong of warblers – yellow-rumped ones _seedle seedling_ and black-throated green ones _zoo zoo zee zoo zeeing_, intercut, or pierced, by the sudden lyric reaches of white-throated sparrows. That, and, of course, my knee going out, which – as I remember it – translated that plainsong into little tone rows of crankiness. I'd been limping along with a spruce pole, which was rough and stiff and left black resin stains on my palm. Then the path crossed a stream just below a beaver dam, and there you were, ready to hand, trimmed and tidy, with your end chewed in the beaver's classic wedge.
In one version of the story I leave the beaver a token of my appreciation, maybe a handful of trail mix or a pair of dirty socks to plug the dam. But I just took you. It's not like there's a dearth of aspen poplars or that beavers have gone off chewing them. And it wasn't that I quit limping, more like the limp had someone it could talk to, someone who'd receive the weight – that slight flex – rather than just tolerating it like some stiff piece of spruce. By the time we made it back to the car and trailhead, days later, you had progressed from third to second person and we were like _that_, closer than what's-his-name and Rin Tin Tin.
Now fast-forward fifteen years or so, when I'm searching for a lost logging locomotive (another story) in the bush up the slope from the Strait of Juan de Fuca, and get lost myself. Not on purpose, although you can imagine some sage or pseudo-sage recommending behaving like a wolf to find a wolf, and getting lost to find what's missing. I had paused to sit on a log and recover my bearings when I realized I didn't know where the path was any more than the alleged, probably mystical, logging locomotive. I thrashed about some in the salmonberry and salal, and finally found a disused logging road, which in time took me back to the highway. It was only then that I realized that my stick – my trusty, second-personned, but interestingly still unnamed stick – wasn't with me. And do you think I could find my way back to that log? You don't get lost, the same pseudo-sage has probably said, lostness gets you. So that was that. The end.
Except it wasn't, as you have no doubt inferred from the fact that it's leaning against the wall in my kitchen. My friend Jane was in the habit of hiking up there, both for the exercise and to spy on Western Forest Products, who had flagged that patch for cutting. One day she was sitting on a log to rest, and, you guessed it, there, glinting like a jewel in the underbrush, was the duct tape on my stick. True story. And, as the narrator at the end of one of E. Nesbit's novels remarks, it's not his fault if it works out like Dickens, life just is like books sometimes. And I say, thank Raven for that.
4.
Back here amid the pots and pans
and precious bric-a-brac,
the Inuit soapstone loon,
the raw chunks of lava and peridotite,
and you, I think again.
How you must have grown,
one aspen in a clone of aspens,
a chorus of centuplets putting forth
your sticky buds and shedding spade-shaped leaves
in unison. Heaven,
some would say, a family tree
minus the fools, knaves, maiden aunts,
and history.
From which we saved you –
first the beaver
with her riparian enhancement plans
then me with my bum knee.
And for these gifts of difference and distance,
and the _realpolitik_ of use,
you may curse us or bless us or both.
_Appropriate forms of address:_
_to objects: "We can do this the hard way, or we can do it the easy way..."_
_to things: "I venture to enquire...."_
# _To Step_
1.
Who will sustain these frail splayed
assemblages, with their knobs,
arches, tender soles,
their toes like droll noses
poking their little ways into the future?
Who will swaddle them against the cold,
brace them, gird them for the world of work,
and shield them from the errant ax?
Ah,
let there be boots.
Let them lurk in our sheds,
our vestibules, under our beds,
their mute tongues lolling,
their laces unstrung like Victorian corsets
wanting only to be worn.
May there be eros in our entries,
as the burrowing of moles more
snugly into earth.
And the lacing up:
let it be brisk, each cross
tugging the previous criss
taut. Then,
with our soles supported and our ankles hugged,
let them carry us – _andiamo!_ –
out the door and up the ridge.
2.
After work we would sit around in the old farmhouse where we lodged, drinking beer, playing cards, shooting the shit, smoking, and dubbining our boots. We'd each bought a pair – Grebs or Kingtreads – to mark this rite of passage into the working life and out of the silly sneakers of youth. The dubbin restored grease to the leather, making it more waterproof and suppler; it coaxed the animal partly back to life. Surely concubines in harems were not massaged more thoroughly, the toes, the insteps, the high uppers, the secret, tucked-away tongues. Surely these pieces of hide were no more cherished, or water repellent, when they'd been worn by cows. Yes, the absence of girlfriends may have played a part in the ritual, as our profane banter ("Looks like Danny's getting to second base with his boots") did not fail to make explicit. But it also involved the proud hands' homage to the humble feet, who were proving to be far more sensitive, and important, than we'd ever imagined back in the city.
Probably we were also trying to make our boots look more worn than they really were, disguising their lack of nicks and scratches – although these would accumulate soon enough. Each time I pulled mine on (Grebs, light brown), I immediately wanted to live up to them, the way an inexperienced rider wants to be worthy of his horse. Their weight meant that each step was swung, and the swing made momentum and the momentum returned the foot to the ground, where it belonged, with some energy left over for the next. They were like and unlike the hiking boots I've worn in recent years, as a fiddle is like and unlike a violin.
In memory this occurs about three beats before the entry of women and ambition as major themes. Manhood fully loaded but aimless, innocent as a tornado or forest fire. When we walked up the road to Tassé's for meals, our boots waited with the dogs on the porch, a loose platoon, a pack, ready for work but equally open to suggestions. These generally meant pranks, which were elaborate and drastic, borrowing from the traditions of _commedia dell'arte_ and vendetta, a daisy chain of linked reprisals featuring ambush, defenestration, and sudden buckets of water or paint. One found its dénouement in Emergency, followed by an epilogue delivered by the Director, the gist of which was the prank's exclusion, as a genre, of Mature Judgment, an element that could only enhance the quality and length of our as yet undistinguished existences. Words to that effect. He did not ask, as I do now, where we found the energy, after a day chopping and hauling brush, for spontaneous theatre. It was as though the very force that wore us out in work was winding up the mainspring of mischief, like the complementary spinning gyres in Yeats's cosmology.
While our boots waited outside, we'd have beans, boiled potatoes, sausages with maple syrup, tourtières on Sunday, Réjeanne bustling between the stove and the table, Julien presiding at its head as genially as he did on the job. After supper we'd spend long minutes sitting on the steps getting our boots back on, amicably arguing. Would we paddle up the creek to check on the beaver dam, or walk to the tavern in town? Each option plump with possibility, the birches reflected in the lake, the laces snubbed up tight, the Shadow just a shadow swelling under the trees.
_Phenomenology is one name for the path back from the object to the thing, the counterbalance to objectification, or "progress." Poetry is another_.
_Rather than treading the one-way street of progress mythology, we may place ourselves on a ferry whose name alters each time it changes direction. On the outbound voyage it might be known as "Boldly Go" or "Cogito"; on the return "Mysterium" or "Francis Ponge" or "Nostos."_
# _To Rock_
1.
Rocks, you rock not. But when
you do, it's catastrophic.
Please don't. Be reliable St. Peter,
not the sudden shudderer,
not havocking spasmodic Loki
jerking in his chains.
Shake not, neither rattle,
nor roll your blunt tons downhill
on the village. Rather
let us dole you out in small
homeopathic doses fit to lull
our infants into sleep,
our old folk into memories:
to and fro
that wind not blow
for now, the bough
not break nor baby come to harm,
that earth not quake,
for now, for now,
we rock our multi-purpose charm.
2.
And what about the to's and fro's
of this one, with its scarred arms,
backside-polished seat and sweater-snagging nails
forever poking up, its pronounced
slouch to the left, as though
each right angle wished it were obtuse?
Cocked back on its rockers,
it's an invitation that's a dare.
It says, not respite, not repose,
but _park your arse here, boy_ ,
_I'm as rickety as you'll one day be_ ,
_hang on and hope_. To rock here
sets that musical arthritis going –
insect chirps, the creak-work of a sailing ship,
the busy bush of ghosts.
Embraced by bricolage,
you ride that corpse-road, borne
in your coffin up the ridge
and over, hearing your pallbearers' groans
blend with your mother's as she
bore you in the opposite direction.
To rock here summons Angus
wielding his Tyrannosaurus chainsaw,
crooked knife, and plane, his profane
collection of knacks like near-accidents
whacked together. He is painting
the boreal bush over every part
except the seat – a river, with fisherman
and rapids, a porcine bear,
a moose, or mooselike antlered ungulate,
a forest fire, iconic Vs of geese.
And, tying them together, dotted daubs
and wavy lines like those, I've since read,
found in Neolithic caves.
All of this erased, in innocence,
by kind folk doing me, they thought,
a favour. Stripping furniture was all the rage
back then, to find the grain (ash,
in this case) and rescue pure form
from deplorable bad taste.
Let our rocking also summon
and forgive them, and myself,
for banishing it outdoors to the porch
where its blankness might grow bleaker in the weather.
Decades later, I came to relent,
and sanded off the funguslike accumulated scurf,
and wrote a rickety, heartfelt, verse-prose
gizmo with the rocking chair as muse.
So now to rock once more
calling forth, with our companionable creaks
whatever might be on our mind: another reprise
of the art of losing, I suspect,
how it's actually a bugger to pick up.
A god can't do it, brimful
of fullness as he fully is.
Now and then a poet's deft
recursive verse may coax
its absence into dance, its anguish
into recognition. But for daily use
a kitchen or a shed's the thing
with its native tools hanging on the walls,
a place where work meets art
for a palaver and a smoke, where Angus pauses,
paintbrush in the air,
deciding where to stick the moose.
Let's rock back there, and past it,
up the trail to the clearing
where he's pondering a large white ash,
the saw already growling in his grip,
and in his head – rendered transparent
by our trancelike to and fro – door frames,
paddles, firewood, and a fancy rocking chair.
# V
# TAKING THE FERRY
Some day I will abandon them –
the old pine desk and comfortable sofa,
the broken walking stick and three-pairs-ago
superannuated boots, the leather lounger
like a catcher's mitt, the birch IKEA chair
that gives a little with your weight, as though
simply sitting were a softly lofted thought –
some day I'll leave this fine museum of effects
( _Shorter Oxford, OED_ ), and devote
what time is left to ferries, forth,
back, honouring the crossings, the betweens
I could not stand when I was someone going somewhere,
sort of, and forced them to conclude, or rather
come to rest, or rather
end. Now it's clear Achilles
isn't going to catch the tortoise,
though he'll leave a lot of items disassembled
in his wake. Things come apart
so easily, and like the sign says,
you break it, it's yours.
You take it home, maimed, wonky,
missing a knob or leg, to convalesce
for the duration on a shelf. Now it's
so long precious _choses_ , I'm off
to Bell Island, Battle Harbour, Harris,
Heimay. That's me,
leaning on the rail, searching with binoculars –
if I've forgotten not to bring them –
for dolphins, gannets, or the chalk-white cliffs.
There I am – the darling of Ephemeros,
my knapsack stuffed with ticket stubs
and trail bar wrappers, schedules for ferries
in the Baltic, the Aegean, and the Inside
Passage. Between one and the next
I'll go on foot, no, donkey, no, I'll hitch rides
with undertakers, cemetery to cemetery, my ear open
to his argot of real estate, The Heart Fund,
and the benefits of planning for that Sad
Inevitable Day. That's me, feasting on cliché,
cultivating ennui and a thirst so fine
I make a beeline for the next boat's lounge
and drink my way to Ilfracombe and back
with Dylan Thomas, to and fro to Staten Island
drink for drink with Lowell on a bender – or,
better – head for Port-aux-Basques or Blanc-Sablon
with manic homebound Newfoundland-and-Labradorians,
jigs and reels unscrolling like a casually
opened vein.
And that's me, later,
lurching the deck, radar-gazing the waves
that blitz us out of darkness, shock troops
for the infinite. And will I know it
when I board the last one?
Will it wear its beauty cleanly
as a bluenose or a shark, remorseless music
slicing the tickle or the gulf? Or will it
slouch its rusted ancient people-smuggling hull
up to the pier, still reeking of accumulated pain?
I imagine one of B.C. Ferries' floating wedding cakes
refit to match Miss Havisham's, or the _Titanic_
with its band still playing and its rhetoric
reversed. Maybe I will know it by the sad
unburied throng that's waiting at the terminal,
maundering the boardwalk, snoozing in cars,
re-browsing postcards, key chains, stuffed puffins,
chocolate moose turds, patrolling the lined-up idling
rhinestoned eighteen-wheelers, the Windstar
with its indefatigably yapping poodle, the flock
of lounging Harley-Davidsons like decadent
black sheep, all of us picking up one minute,
putting it down, picking up the next
in serial et cetera.
One thing's for sure:
when its skipper finally
steps out on the bridge, he or she steps
straight from déjà vu. So it was you,
all along, we'll each exclaim, whoozit, buddy,
the one I never recognized but somehow knew,
that patched grey cloak, that slept-in suit,
that face at once a road map and a lava flow,
_I should have known_ , we groan,
as each, laboriously,
climbs aboard.
# DESCENT
In the end
he leaves the difficult lyre
behind and clambers down, handhold
by outcrop by ledge,
shedding talent, fame
fading like a tan. Angel,
artist. His head
humbled by its skull.
Apprentice. Among such
gravities to find himself again
ungainly. Thrawn. The country-and-
western singer whose sad similes
come home to roost. Like doves.
Like crows. Like
chickens. His theme park.
His menagerie. Once his song
made rocks move and the gods
relent.
Such was the boast.
Now the rocks
rub raw the bone. Gravel,
scree. Who will name
the dark's own instrument? Riprap,
slag. Music
tearing itself apart.
# NOTES
"As If": Pouch Cove, Newfoundland.
Forlorn! the very word is like a bell
To toll me back from thee to my sole self!
– John Keats, "Ode to a Nightingale"
Mount Work: Victoria, B.C.
"On the Barrens": Southeast Avalon Peninsula, Newfoundland.
harrier: a.k.a. marsh hawk.
goowiddy: a.k.a. sheep laurel, a.k.a. _Kalmia angustifolia_.
"Alias Rabbit, Alias Snowshoe Hare": Northeast Avalon Peninsula, Newfoundland.
"Porch": Glengarry County, Ontario.
"Sleeping with the River": Campbell River, B.C.
tolt: a prominent rounded hill.
volcaniclastic: cf. "Tuff," page 44.
erratic: a rock carried by a glacier and left in a new location, where it often contrasts with the surrounding rock formations.
CFA: literally "come from away"; allochthonous.
imperial canoe trips: Duncan Campbell Scott, one of the Confederation Poets and an agent for the Department of Indian Affairs, made several treaty-making canoe trips in Northern Ontario bribing Ojibwa and Cree people to concede their land. See Stan Dragland, _Floating Voice_ (Toronto: House of Anansi Press, 1994).
Norval Morrisseau: Ojibwa artist and shaman.
"Apparition": Southeast Avalon Peninsula, Newfoundland.
_Sleeping Places, Newfoundland 1982_ : an artwork by Marlene Creates comprised of twenty-five black-and-white photographs of ground she slept on around the island of Newfoundland.
Epigraph from Robert Hass, "State of the Planet," in _Time and Materials_ (New York: Ecco Press, 2007).
schiller: the visual effect produced, as in Labradorite, when light is reflected inside the rock before being reflected back out.
The Land God Gave to Cain: Labrador, as described by Jacques Cartier.
Mistaken Point: site of rare Ediacaran fossils on the Avalon Peninsula of Newfoundland. These fossils are evidence of the oldest known animals, or proto-animals, on Earth. Discoveries at Mistaken Point, Ediacara in Australia, and Arkangel in Russia have instigated the establishment of a new geologic period called the Ediacaran, preceding the Cambrian.
Anthropocene: proposed designation for the current epoch.
_Paradoxides_ : This genus of trilobite serves as an index fossil both temporally and spatially. It identifies a formation as Mid-Cambrian, since it existed during that relatively brief period, 520 million years ago. Spatially, the presence of a _Paradoxides_ fossil like the one we found that day identifies landmasses that were formerly part of a micro-continent called Avalonia. During much of the Paleozoic era, Avalonia existed as a separate island in the middle of the Iapetus Ocean (the Atlantic's predecessor), and so developed species unique to itself, rather like the species that are unique to Australia today. Remnants of Avalonia, as established by _Paradoxides_ fossils and other evidence in the rock sequences, include all of the Avalon Peninsula, and parts of Wales, Ireland, New Brunswick, and Massachusetts.
_Cephalon, thorax_ , and _pygidium_ : the head, body, and tail of a trilobite.
_Thorax_ : quotations, in italic, are drawn from the _Oxford English Dictionary_ ; Richard Fortey's _Trilobite_ (New York: Vintage, 2001); Søren Kierkegaard's _Practice in Christianity_ , translated by H.V. Hong and Edna Hong (Princeton, N.J.: Princeton University Press, 1991); Christopher Dewdney's "Grid Erectile," in _Predators of the Adoration_ (Toronto: McClelland & Stewart, 1983); and Meng Hao-jan's "Listening to Cheng Yin Play His Ch'in," translated by David Hinton, in _The Mountain Poems of Meng Hao-jan_ (Brooklyn, N.Y.: Archipelago Books, 2004).
Allochthon and autochthon: originating elsewhere and originating in the place currently found, respectively. Cf. CFA, above.
"Tuff": Portugal Cove, Newfoundland.
Tuff is rock formed from compressed volcanic ash and other debris from eruptions.
Snowball Earth: This hypothesis, which has gained increasing credence among geologists, postulates that Earth was entirely (or mostly, depending on the theorist) covered in ice for 100 million years in the late Proterozoic Era. The evidence includes the discovery of formations dating from 750 to 600 million years ago that show the traces of strong glaciation combined with origins at or near the equator.
One problem posed by the Snowball Earth hypothesis was the issue of Earth's return to seasonal cycles, once winter had become absolute. This is addressed in the last stanza of section I on pages 46–47.
ultramafic: mafic rocks are igneous rocks that are high in magnesium and iron, like basalt. Ultramafic rocks are especially so, having their origins in the mantle of the planet. Peridotite, such as that found in the Tablelands in Gros Morne National Park, is an example of a rare appearance of ultramafic rocks on the surface.
albedo: the reflection of the sun's rays back into space by ice fields and glaciers, as opposed to their absorption by oceans and rocks.
"Rock Flour": Kluane National Park, Yukon.
katabatic wind: wind descending from a glacier.
Beringia: name given to the area around the current Bering Sea, which was ice-free during the last ice age. It included what is now the bottom of the Bering Strait, which afforded a land bridge between Asia and North America.
Crinoid: an echinoderm of the class Crinodea, a.k.a. sea lilies, dating from the Ordovician to the present. Their segmented stalks resemble, but are unrelated to, the vertebrae of chordates such as ourselves.
"Gjall": Snæfellsnes Peninsula, Iceland.
Gjall is a very light rock formed when lava solidifies while flying through the air during an eruption.
Wopmay Orogen: An orogen is a mountain-building episode in Earth's history. The remains of the Wopmay Orogen, which occurred about two billion years ago, lie in the tundra of the Northwest Territories between Great Bear Lake and Coronation Gulf on the Arctic Ocean. The sequence of rock formations in the Wopmay Orogen is the same as those shown in modern orogenies such as the Rockies, the Andes, and the Himalayas.
molasse: a rock formation composed of sediments eroded from mountains following an orogeny.
_homo faber tristis_ : man the sad maker.
clone: a group of organisms, such as a grove of aspen poplars, deriving from a single individual by asexual means. Aspen clones are among the oldest, as well as the largest, organisms in the world. See John Laird Farrar, _Trees in Canada_ (Markham, Ont.: Fitzhenry & Whiteside and Canadian Forest Service, 1995).
gizmo: "Sanding Down This Rocking Chair on a Windy Night" in the book by the same title (Toronto: McClelland & Stewart, 1987).
Ephemeros: god of trivia, the unregarded, and mayflies.
# ACKNOWLEDGEMENTS
Some of these poems have appeared in the following periodicals and anthologies: _Riddle Fence, The Fiddlehead, The Review, The Malahat Review, Pith and Wry_ , and _The Best Canadian Poetry_ for 2008 and 2009.
"Thingamajig" is a version of a piece commissioned by David Maggs as part of his project "The Implicated Subject" – a contribution to Vancouver's "Greenest City Conversation."
Thanks to all those who lent an ear: Mark Abley, Marlene Creates, Mary Dalton, Stan Dragland, Tim Lilburn, Sally McKay, and Jan Zwicky; and to those who lent valuable geological advice and assistance: Doug Boyce, Julie Cappleman, Paul Dean, Liam Herringshaw, and Richard Thomas.
My gratitude to Rob Vanderheyden for the excellent photograph of the _Paradoxides_ fossil, and Anita Chong at McClelland & Stewart for her care and patience in guiding its namesake through to press.
Most especially I want to acknowledge the editorial acumen of my owl-eared editor, Barry Dempster.
Douglas Frantz (born September 29, 1949, in North Manchester, Indiana) is an American Pulitzer Prize-winning former investigative journalist and author, currently serving as the Deputy Secretary-General of the Organisation for Economic Co-operation and Development since November 2015.
He resigned as Los Angeles Times Managing Editor in 2007 after blocking the publication of an article about the Armenian Genocide; Frantz said his resignation was not related to the ensuing controversy.
Frantz graduated from DePauw University in 1971. He was an investigative reporter for the Los Angeles Times, the Chicago Tribune, and The New York Times.
Frantz served as the Istanbul bureau chief for The New York Times, and the managing editor of the Los Angeles Times from 2005 to 2007. Frantz was chief investigator for the Senate Foreign Relations Committee. He is also the former Managing Director of Kroll's Business Intelligence Washington office.
From 2013 to 2015, Frantz served as the State Department's Assistant Secretary of State for Public Affairs.
As the Los Angeles Times Managing Editor, Frantz blocked a story on the Armenian Genocide in April 2007 written by Mark Arax, a veteran Times journalist of Armenian descent. Frantz argued that Arax had previously expressed an opinion on the topic and was therefore biased on the subject, apparently referring to a letter co-signed by Arax that endorsed the LA Times policy of referring to the event as the "Armenian Genocide". Arax, who had published similar articles before, lodged a discrimination complaint and threatened a federal lawsuit. Frantz was accused of having developed a bias while stationed in Istanbul, Turkey. He resigned from the paper on July 6, 2007.
^ "Valerie Crites Fowler". U.S. Department of State. January 28, 2014. Retrieved November 27, 2015.
^ "Ask a Reporter Q&A: Mark Landler". The New York Times. 2002. Archived from the original on October 15, 2009.
^ Frantz, Douglas; Collins, Catherine. "Douglas Frantz". The New York Times.
^ "Douglas Frantz, former Times managing editor, to be chief investigator for Senate panel". Los Angeles Times. January 8, 2009.
^ "The Pulitzer Prizes - Search: frantz". pulitzer.org.
Wikimedia Commons has media related to Douglas Frantz.
\section{Introduction}
Quantum field theory has been formulated in different ways, the most
popular ones being the path-integral approach and the operator formalism.
In the path integral approach, one aims to construct the correlation
functions of the theory as the moments of some measure on the space of classical
field configurations. In the operator formalism, the quantum fields are
viewed as linear operators which can act on physical states.
The path integral has the advantage of being closely related to classical field theory.
In fact, the path integral measure is, at least formally, directly given in terms
of the classical action of the theory. The operator formalism is more useful in
contexts where no corresponding classical theory---and hence no Lagrange formalism---is
known for the quantum field theory. It has been used extensively in the context
of conformal or integrable field theories in two spacetime dimensions. In the
operator formalism, one may take the point of view that the theory is determined
by the algebraic relations between the quantum field observables. This viewpoint was
originally proposed in a very abstract form by Haag and Kastler, see e.g.~\cite{Haag}. Other
proposals aimed in particular at conformal field theories include e.g. the
approach via vertex operator algebras due to Borcherds, Frenkel, Lopowski, Meurman
and others~\cite{vertex1,vertex2,vertex3,vertex4}, see also a related proposal by Gaberdiel and Goddard~\cite{Gaberdiel}.
A different approach of an essentially algebraic nature applicable
to "globally conformally invariant quantum field theories" in $D$ dimensions is due to~\cite{Rehren1,Rehren2}.
Approaches emphasizing the algebraic relations between the fields have also
turned out to fundamental to the construction of quantum field theories on general curved backgrounds~\cite{HW01,HW02,BF00,BFV03}, because in this case there is no preferred Hilbert
space representation or vacuum state.
One way to encode the algebraic relations between the fields in a very
explicit way (at least at short
distances) is the Wilson operator product expansion (OPE)~\cite{Wilson,WZ,Zimmermann}.
This expansion is at the basis of the modern treatments of two-dimensional conformal
field theory, and it is a key tool in the quantitative analysis of asymptotically free
quantum gauge theories in four dimensions such as Quantum Chromo Dynamics. The OPE can also be established
for perturbative quantum field theory in general curved spacetimes~\cite{Hollands06}.
In this reference, it was observed in particular that the OPE coefficients satisfy
certain "asymptotic clustering" or "factorization" relations when various groups of points in the operator products are
scaled together at different rates. This observation was taken one step further in~\cite{HW08},
where it was suggested that the OPE should in fact be viewed as the
fundamental datum describing a quantum field theory on curved (and flat) spacetimes, and that the factorization
conditions should be viewed as the essential constraints upon the OPE coefficients.
In this paper, we will analyze these constraints on the OPE coefficients, and thereby
formulate a new approach to quantum field theory in terms of the resulting consistency conditions.
One of our main new points is that all these constraints can be encoded in a single
condition which is to be viewed as an analogue of the
usual "associativity condition" in ordinary algebra. We then show that
it is possible to give a new formulation of perturbation theory which directly
involves the OPE coefficients, but does not directly use---and is hence more general than---such
notions as path integrals or interaction Lagrangians.
consistency condition and is hence essentially algebraic in nature.
Its mathematical framework is a certain cohomology of "Hochschild type"
which we will also set up in this paper. If our approach to perturbation theory
is combined with the assumptions of certain linear or non-linear field equations,
then a constructive algorithm is obtained to determine the terms in the perturbation
series order-by-order. We expect that our approach is equivalent to more standard
ones despite its rather different appearance, but we do not investigate this
issue in the present paper.
Some of our ideas bear a (relatively remote) resemblance to
ideas that have been proposed a long time ago within the
``bootstrap-approach'' to conformally invariant quantum field
theories, where constraints of a somewhat similar, but not identical,
nature as ours have been considered under the name ``crossing
relations"~\cite{Migdal, Polyakov, Todorov, Mack1, Mack2}.
But we stress from the outset that our approach is aimed at all quantum field theories---including
even quantum field theories on generic spacetime manifolds without symmetries---and not just conformal ones as in
these references. The ideas on the use of non-linear field equations expressed in section~\ref{interactingfields}
also bear a resemblance to a constructive method in quantum field theory introduced by Steinmann (see e.g.~\cite{steinmann}), but
he is mainly concerned with the Wightman functions rather than the OPE, which is a key difference. Some
of the ideas in section~\ref{interactingfields} were developed, in preliminary form, in extensive discussions
with N.~Nikolov during his tenure as a Humboldt fellow at the U. of G\" ottingen in 2005/2006, see also
the notes~\cite{Nikolovnotes}. In the present form described in section~\ref{interactingfields}, these
ideas were developed in collaboration with H.~Olbermann, and more details will be given in
a future paper~\cite{Olbermann}.
This paper is organized as follows. We first explain in sec.~2 the
basic ideas of this paper, namely, the idea that the
factorization conditions may be expressed by a single associativity
condition, the new formulation of perturbation theory in
our framework, the generalization to gauge field theories, and
the approach via field equations. These ideas are then
explained in detail in the subsequent sections.
\section{Basic ideas of the paper}
The operator product expansion states that the product of two operators
may be expanded as
\begin{equation}\label{basic}
\phi_{a}(x_1) \phi_{b}(x_2) = \sum_c C_{ab}^c(x_1, x_2) \, \phi_c(x_2) \, ,
\end{equation}
where $a, b, c$ are labels of the various composite quantum fields
$\phi_{a}$ in the theory. This relation is intended to
be valid after taking expectation values in any (reasonable) state
in the quantum field theory. The states, as well as the OPE coefficients
typically have certain analytic continuation properties that
arise from the spectrum condition in the quantum field theory. These
properties imply that the spacetime arguments may be continued to
a real Euclidean section of complexified Minkowski spacetime, and
we assume this has been done. An important condition on
the OPE coefficients arises when one
considers the operator product expansion of $3$ operators (in the Euclidean domain),
\begin{equation}\label{3ops}
\phi_{a} (x_1) \phi_{b}(x_2) \phi_{c}(x_3) = \sum_d
C_{abc}^d(x_1, x_2, x_3) \, \phi_d(x_3) \, .
\end{equation}
Let us consider a situation where one pair of points is closer to each other than another pair of points.
For example, let $r_{23}$ be
smaller than $r_{13}$, where
\begin{equation}
r_{ij} = |x_i - x_j|
\end{equation}
is the Euclidean distance between point $x_i$ and point $x_j$. Then we expect that we can
first expand the operator product $\phi_{b}(x_2) \phi_{c}(x_3)$ in eq.~\eqref{3ops}
around $x_3$, then multiply by $\phi_{a}(x_1)$, and finally expand the resulting product
around $x_3$. We thereby expect to obtain the relation
\begin{equation}\label{2ops1}
C_{abc}^d(x_1, x_2, x_3) = \sum_e C_{bc}^e(x_2, x_3) C_{ae}^d(x_1, x_3)
\end{equation}
Similarly, if
$r_{12}$ is smaller than $r_{23}$, we expect that we
can first expand the operator product $\phi_{a}(x_1) \phi_{b}(x_2)$ around $x_2$,
then multiply the result by $\phi_{c}(x_3)$, and finally expand again around $x_3$.
In this way, we expect to obtain the relation
\begin{equation}\label{2ops2}
C_{abc}^d(x_1, x_2, x_3) = \sum_e C_{ab}^e(x_1, x_2) C_{ec}^d(x_2, x_3) \, .
\end{equation}
A consistency relation now arises because on the open domain $r_{12} < r_{23} < r_{13}$
both expansions~\eqref{2ops1}, \eqref{2ops2} must be valid and therefore should
coincide. Thus, we must have
\begin{equation}\label{assoccomp}
\sum_e C_{ab}^{e}(x_1, x_2) C_{e c}^{d}(x_2, x_3) =
\sum_e C_{bc}^{e}(x_2, x_3) C_{a e}^{d}(x_1, x_3) \,
\end{equation}
when $r_{12} < r_{23} < r_{13}$.
This requirement imposes a very stringent condition on the
OPE-coefficients. We will refer to this condition as a "consistency-" or "associativity" condition. The basic idea of this paper is that this condition on the 2-point OPE coefficients
incorporates the full information about the structure of the quantum field theory.
Therefore, conversely, if a solution to the consistency condition
can be found, then one has in effect constructed a quantum field theory.
We will pursue this idea below in the following different directions.
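To orient the reader, the condition~\eqref{assoccomp} is the $x$-dependent analogue of the familiar quadratic constraint that associativity imposes on the structure constants of an ordinary finite-dimensional algebra. The following sketch is a toy illustration only, not part of the construction above; the choice of the $2\times 2$ matrix units as basis is ours:

```python
import itertools
import numpy as np

# Basis: the four 2x2 matrix units E_ij, flattened to indices 0..3.
n = 2
basis = []
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        basis.append(E)
dim = len(basis)

# Structure constants: basis[a] @ basis[b] = sum_c C[a, b, c] * basis[c].
# Matrix units are orthonormal under the entrywise inner product, so each
# coefficient is just the overlap with the corresponding basis element.
C = np.zeros((dim, dim, dim))
for a, b in itertools.product(range(dim), repeat=2):
    prod = basis[a] @ basis[b]
    for c in range(dim):
        C[a, b, c] = np.sum(prod * basis[c])

# Associativity of the product is equivalent to the quadratic relation
#   sum_e C[a,b,e] C[e,c,d] = sum_e C[b,c,e] C[a,e,d]   for all a, b, c, d,
# the x-independent analogue of the consistency condition on OPE coefficients.
lhs = np.einsum('abe,ecd->abcd', C, C)
rhs = np.einsum('bce,aed->abcd', C, C)
assert np.allclose(lhs, rhs)
print("associativity relation holds for all index combinations")
```

The $x$-dependence is absent in this toy model; in the field-theoretic relation~\eqref{assoccomp} the equality is only asserted on the ordered domain $r_{12} < r_{23} < r_{13}$.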
\subsection{Coherence}
First, we will pursue the question whether any further consistency conditions
in addition to eq.~\eqref{assoccomp} can arise when one considers products
of more than three fields, by analogy with the analysis just given for three fields. For
example, if we consider the OPE of four fields $\phi_{a}(x_1) \phi_{b}(x_2)
\phi_{c}(x_3) \phi_{d}(x_4)$ and investigate the possible different
subsequent expansions of such a product in a similar manner as above, we
will get new relations for the 2-point OPE coefficients analogous to eq.~\eqref{assoccomp}.
These will now involve four points and correspondingly more factors of the 2-point
OPE coefficients. Are these conditions genuinely new, or do they already follow from the
relation~\eqref{assoccomp}?
As we will argue, this question is analogous to the
question whether, in an ordinary algebra, there are new constraints on the product
coming from "higher order associativity conditions". As in this analogous
situation, we will see that in fact no new
conditions arise, i.e. the associativity condition~\eqref{assoccomp} is
the only consistency condition. We will also see that all higher order expansion
coefficients such as $C_{abcd}^e(x_1, x_2, x_3, x_4)$ are
uniquely determined by the 2-point OPE coefficients. Thus, in this sense,
the entire information about the quantum field theory is contained in
these 2-point coefficients $C^c_{ab}(x_1, x_2)$, and the entire
set of consistency conditions is coherently encoded in the associativity
condition~\eqref{assoccomp}.
For this reason, we call the result
a "coherence theorem", by analogy with the well-known similar result
in algebra and in category theory~\cite{MacLane}.
These results are described in detail in sec.~\ref{coherence}.
\subsection{Perturbation theory as Hochschild cohomology}\label{subsecpert}
Given that the 2-point OPE coefficients $C^c_{ab}(x_1, x_2)$ are
regarded as the fundamental entities of quantum field theory in our approach, it is interesting
our approach, it is interesting
to ask how to formulate perturbation theory in terms of these coefficients.
For this, we imagine that we are given a 1-parameter family of these coefficients
parametrized by $\lambda$. For each $\lambda$, the coefficients should
satisfy the associativity condition~\eqref{assoccomp}, and for $\lambda=0$,
the coefficients describe the quantum field theory that we wish to perturb around.
We now expand the 1-parameter family of OPE-coefficients in a Taylor- or perturbation series
in $\lambda$, and we ask what constraints the consistency condition will impose upon the
Taylor coefficients. In order to have a reasonably uncluttered notation, let us use an
"index free" notation for the OPE-coefficients suppressing the indices $a,b,c,\dots$.
Thus, let us view the 2-point OPE coefficients $C^c_{ab}(x_1, x_2)$ as the
components of a linear map ${\cal C}(x_1, x_2): V \otimes V \to V$,
where $V$ is the vector space whose basis elements are in one-to-one correspondence
with the composite fields $\phi_a$ of the theory. The Taylor expansion is
\begin{equation}\label{Cexpansioncomp}
{\cal C}(x_1, x_2; \lambda) = \sum_{i=0}^\infty
{\cal C}_i(x_1, x_2) \, \lambda^i \, .
\end{equation}
We similarly expand the associativity condition as a power series in $\lambda$.
If we assume that the associativity condition is fulfilled at zeroth order, then
the corresponding condition for the first order perturbation of the
2-point OPE-coefficients is given by
\begin{eqnarray}\label{firstocons}
&&{\cal C}_0(x_2, x_3)\Big({\cal C}_1(x_1, x_2) \otimes id \Big) - {\cal C}_0(x_1, x_3)\Big(
id \otimes {\cal C}_1(x_2, x_3) \Big) + \nonumber\\
&&{\cal C}_1(x_2, x_3)\Big({\cal C}_0(x_1, x_2) \otimes id \Big) - {\cal C}_1(x_1, x_3)\Big(
id \otimes {\cal C}_0(x_2, x_3) \Big) = 0
\, ,
\end{eqnarray}
for $r_{12} < r_{23} < r_{13}$, in an obvious tensor product notation.
As we will see, this condition is of a cohomological nature,
and the set of all first order perturbations satisfying this condition modulo
trivial perturbations due to field redefinitions
can be identified with the elements of a certain cohomology ring which we will
define in close analogy to Hochschild cohomology~\cite{MacLane1,Happel,Connes}.
Similarly, the conditions for the higher order perturbations can also be
described in terms of this cohomology ring. More precisely, at each order
there is a potential obstruction to continue the perturbation series---i.e.,
to satisfy the associativity condition at that order---and this
obstruction is again an element of our cohomology ring.
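To indicate the cohomological structure, note that a first order field redefinition $z(\lambda) = id + \lambda z_1$ shifts the first order coefficient according to
\begin{equation}
{\cal C}_1(x_1, x_2) \,\, \longrightarrow \,\, {\cal C}_1(x_1, x_2) + {\cal C}_0(x_1, x_2) \big( z_1 \otimes id + id \otimes z_1 \big) - z_1 \, {\cal C}_0(x_1, x_2) \, ,
\end{equation}
and one checks directly that a shift of this form satisfies eq.~\eqref{firstocons} by itself, since the redefined coefficients satisfy the associativity condition for each $\lambda$. Such perturbations are regarded as trivial, so the nontrivial first order perturbations are classified by solutions of eq.~\eqref{firstocons} modulo shifts of this type, i.e. by cocycles modulo coboundaries.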
In practice, $\lambda$ can e.g. be a parameter that measures the strength of
the self-interaction of a theory, as in the theory characterized by
the classical Lagrangian $L = (\partial \varphi)^2 + \lambda \varphi^4$. In this example,
one is perturbing around a free field theory, for which the OPE-coefficients are
known completely. Another example is when one perturbs around a more general
conformal field theory---not necessarily described by a Lagrangian. Yet another
example is when $\lambda = 1/N$, where $N$ is the number of "colors" of a
theory, like in $SU(N)$~Yang-Mills theory. In this example, the theory that one is perturbing
around is the large-$N$ limit of the theory.
These constructions are described in detail in sec.~\ref{perturbations}.
\subsection{Local gauge theories}
Some modifications must be applied to our constructions when one is dealing with
theories having local gauge invariance, such as Yang-Mills theories.
When dealing with such theories, one typically has to proceed in two steps. The first step is
to introduce an auxiliary theory including further fields. For example, in pure
Yang-Mills theory, the auxiliary theory has as basic fields the 1-form gauge
potential $A$, a pair of anti-commuting "ghost fields" $U, \bar U$, as well as another
auxiliary field $F$, all of which take values in a Lie-algebra.
Having constructed the auxiliary theory, one then removes the
additional degrees of freedom in a second step, thereby arriving at the
actual quantum field theory one is interested in. The necessity of
such a two-step procedure can be seen from many viewpoints, maybe most directly
in the path-integral formulation of QFT~\cite{Faddeev}, but also actually even
from the point of view of classical Hamiltonian field theory, see e.g.~\cite{Henneaux}.
As is well-known, a particularly elegant and useful way to implement this two-step procedure
is via the so-called BRST-formalism~\cite{Becci}, and this is also the most
useful way to proceed in our approach to quantum field theory via the OPE.
In this approach one defines, on the space of auxiliary fields,
a linear map $s$ ("BRST-transformation"). The crucial properties of this map
are that it is a symmetry of the auxiliary theory, and that it is nilpotent,
$s^2 = 0$. In the case of Yang-Mills theory it is given by
\begin{equation}\label{BRSTt}
sA = dU - i\lambda[A, U]\, , \quad sU = -\frac{i\lambda}{2}[U,U] \, , \quad
s\bar U = F \, , \quad sF=0 \, ,
\end{equation}
on the basic fields and extended to all monomials in the basic fields and
their derivatives ("composite fields") in such a way that $s^2 = 0$. In our formalism,
the key property of the auxiliary theory is now that the map $s$ be compatible
with the OPE of the auxiliary theory.
The condition that we need is that, for any product of composite fields, we have
\begin{equation}
s[\phi_{a}(x_1) \phi_{b}(x_2)] = [s \phi_{a}(x_1)] \phi_{b}(x_2) \pm
\phi_{a}(x_1) s\phi_{b}(x_2) \, ,
\end{equation}
where the choice of $\pm$ depends on the Bose/Fermi character of the fields under
consideration. If we apply the OPE to the products in this equation, then it
translates into a compatibility condition between the OPE coefficients $C_{ab}^c(x_1, x_2)$ and the
map $s$. This is the key condition on the auxiliary theory beyond the associativity
condition~\eqref{assoccomp}. As we show, it enables one to pass
from the auxiliary quantum field theory to true gauge theory by taking a
certain quotient of the space of fields.
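In the index-free notation used above, this compatibility condition states that $s$ acts as a graded derivation of the OPE; applying the OPE to both sides of the Leibniz rule yields, schematically,
\begin{equation}
s \, {\cal C}(x_1, x_2) = {\cal C}(x_1, x_2) \big( s \otimes id + \gamma \otimes s \big) \, ,
\end{equation}
where $\gamma$ is the Bose/Fermi grading map implementing the sign $\pm$.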
We will also perform a perturbation analysis of gauge theories. Here,
one needs not only to expand the OPE-coefficients [see eq.~\eqref{Cexpansioncomp}],
but also the BRST-transformation map $s(\lambda)$, as perturbations will typically change the form
of the BRST transformations as well---seen explicitly for Yang-Mills
theory in eqs.~\eqref{BRSTt}. We must now satisfy at each order in perturbation
theory an associativity condition as described above,
and in addition a condition which ensures compatibility of the perturbed BRST map and the
perturbed OPE coefficients at the given order.
As we will see, these conditions can again be
encoded elegantly and compactly in a cohomological framework.
These ideas will be explained in detail in sec.~\ref{hochschild}.
\subsection{Field equations}\label{subfieldeq}
The discussion has so far been focussed on the general mathematical structures behind
the operator product expansion. However, it is clearly also of interest to construct the
OPE coefficients for concrete theories. One way to describe a theory is via classical field
equations such as
\begin{equation}\label{phicubed}
\square \varphi = \lambda \varphi^3 \, ,
\end{equation}
where $\lambda$ is a coupling parameter. One may exploit such relations
by turning them into conditions on the OPE coefficients. The OPE coefficients are then determined
by a ``bootstrap''-type approach. The conditions implied by eq.~\eqref{phicubed} arise as follows:
We first view the above field equation as a relation between quantum fields, and we multiply by an
arbitrary quantum field $\phi_a$ from the right:
\begin{equation}
\square \varphi(x_1) \phi_a(x_2) = \lambda \varphi^3(x_1) \phi_a(x_2) \, .
\end{equation}
Next, we perform an OPE of the expressions on both sides, leading to the relation
$\square C_{\varphi a}^b = \lambda C_{\varphi^3 a}^b$. As explained above in subsection~\ref{subsecpert},
each OPE coefficient itself is a formal power series in $\lambda$, so this equation clearly
yields a relationship between different orders in this power series. The basic idea is to exploit
these relations and to derive an iterative construction scheme.
To indicate how this works, it is useful to introduce, for each field $\phi_a$,
a ``vertex operator'' ${\cal Y}(x, \phi_a)$, which is a linear map on the space $V$ of
all composite fields. The matrix components of this linear map are simply given
by the OPE coefficients, $[{\cal Y}(x, \phi_a)]_b^c = C_{ab}^c(x,0)$, for details see
sec.~\ref{leftrep}. Clearly, the vertex
operator contains exactly the same information as the OPE coefficient. In the above theory,
it is a power series ${\cal Y} = \sum {\cal Y}_i \lambda^i$ in the coupling. The field equation then
leads to the relation
\begin{equation}
\square {\cal Y}_{i+1}(x, \varphi) = {\cal Y}_{i}(x,\varphi^3) \, .
\end{equation}
The zeroth order ${\cal Y}_0$ corresponds to the free theory, described in sec.~\ref{freefield},
and the higher order ones are determined inductively by inverting the Laplace operator.
To make the scheme work, it is necessary to construct ${\cal Y}_{i}(x, \varphi^3)$ from
${\cal Y}_{i}(x,\varphi)$ at each order. It is at this point that we need the consistency
condition. In terms of the vertex operators, it implies e.g. relations like
\begin{equation}
\sum_{j=0}^i {\cal Y}_j(x, \varphi) {\cal Y}_{i-j}(y, \varphi) = \sum_{j=0}^i {\cal Y}_j(y, {\cal Y}_{i-j}(x-y, \varphi)\varphi) \, .
\end{equation}
On the right side, we now use a relation like ${\cal Y}_0(x-y, \varphi)\varphi = \varphi^2 + \dots$.
Such a relation enables one to solve for ${\cal Y}_i(y, \varphi^2)$ in terms of inductively
known quantities. Iterating this type of argument, one also obtains ${\cal Y}_i(y, \varphi^3)$, and
in fact any other vertex operator at $i$-th order. In this way, the induction loop closes.
Thus, we obtain an inductive scheme from
the field equation in combination with the consistency condition.
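Schematically, one step of this induction takes the form
\begin{equation}
{\cal Y}_i(\,\cdot\,, \varphi) \,\, \longrightarrow \,\, {\cal Y}_i(\,\cdot\,, \varphi^2) \,\, \longrightarrow \,\, {\cal Y}_i(\,\cdot\,, \varphi^3) \,\, \longrightarrow \,\,
{\cal Y}_{i+1}(\,\cdot\,, \varphi) = \square^{-1} \, {\cal Y}_i(\,\cdot\,, \varphi^3) \, ,
\end{equation}
where the first two steps use the consistency condition as just described, and where the inverse of the Laplace operator is understood to be fixed by a suitable choice of homogeneous solutions.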
At each order, one has to perform one---essentially trivial---inversion
of the Laplace operator, and several infinite sums implicit in the consistency condition.
These sums arise when composing two vertex operators if these are written in terms of their
matrix components. Thus, to
compute the OPE coefficients at
$n$-th order in perturbation theory, the "computational cost" is roughly
to perform $n$ infinite sums. This is
similar to the case of ordinary perturbation theory, where at $n$-th order one
has to perform a number of Feynman integrals increasing with $n$. Note however that,
by contrast with the usual approaches to perturbation theory, our procedure
is completely well-defined at each step. Thus,
there is no "renormalization" in our approach in the sense of "infinite counterterms".
The details of this new approach to perturbation theory are outlined in sec.~\ref{interactingfields}, and
presented in more detail in a forthcoming paper with H. Olbermann.
\pagebreak
\section{Axioms for quantum field theory}\label{axiomatic}
Having stated the basic ideas of this paper in an informal
way, we now turn to the precise formulation of these ideas. For this, we begin
in this section by explaining our axiomatic setup for quantum field theory.
The setup we present here is broadly speaking the same as that
presented in~\cite{HW08}. In particular, the key idea here as well as
in~\cite{HW08} is that the operator product expansion (OPE)
should be regarded as the defining property of a quantum field theory.
However, there are some differences to~\cite{HW08} in that
we work on flat space here (as opposed to a general curved spacetime), and
we also work in a Euclidean framework. As a consequence, the microlocal conditions
stated in~\cite{HW08} will be replaced by analyticity conditions, the commutativity
condition will be replaced by a symmetry condition and
the associativity conditions on the OPE coefficients
will be replaced by conditions on the existence of various
power series expansions.
The first ingredient in our definition of a quantum field theory is an infinite-dimensional
vector space, $V$. The elements in this vector space are to be thought of as the
components of the various composite scalar, spinor, and tensor fields in the theory.
For example, in a theory describing a single real scalar field $\varphi$, the elements of
$V$ would be in one-to-one correspondence with the monomials of $\varphi$ and its
derivatives [see sec.~\ref{freefield}].
The space $V$ is assumed to be graded in various ways, reflecting the classification of
the different composite quantum fields in the theory by their spin, dimension,
Bose/Fermi character, etc. First, for Euclidean quantum field theory on ${\mathbb R}^D$,
the space $V$ should carry a representation of the rotation group $SO(D)$ in
$D$ dimensions, or of its covering group ${\rm Spin}(D)$ if
spinor fields are present. This representation should decompose into
unitary, finite-dimensional irreducible representations (irrep's) $V_{S}$,
which in turn are characterized
by the corresponding eigenvalues $S=(\lambda_1, \dots, \lambda_r)$ of the $r$
Casimir operators associated with $SO(D)$. For $D=2$, this
is a weight $w \in {\mathbb R}$, for $D=3$ this is an integer or half-integer
spin, and for $D=4$ this is a pair of spins (using the isomorphism between
$SU(2) \times SU(2)$ and the covering of the 4-dimensional rotation group).
Thus we assume that $V$ is a graded vector space
\begin{equation}\label{decomp}
V = \bigoplus_{\Delta \in {\mathbb R}_+} \bigoplus_{S \in {\rm irrep}}
{\mathbb C}^{N(\Delta, S)} \otimes V_{S} \, .
\end{equation}
The numbers $\Delta \in {\mathbb R}_+$ provide an additional grading which
will later be related to the "dimension" of the quantum fields. The natural
number $N(\Delta, S)$ is the multiplicity of the quantum fields
with a given dimension $\Delta$ and spins $S$. We assume this multiplicity
to be finite. As always in this paper, the infinite sums in this decomposition
are understood without any closure taken, i.e., the elements of $V$ are
in one-to-one correspondence with sequences of the form
$(|v_1 \rangle, |v_2\rangle, \dots, |v_n\rangle, 0, 0, \dots)$ with only finitely many non-zero entries,
where $|v_i\rangle$ is a vector in the $i$-th summand in the decomposition.
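For instance, in the theory of a single hermitian scalar field considered in sec.~\ref{freefield}, the basis elements of $V$ correspond to the monomials
\begin{equation}
{\bf 1} \, , \quad \varphi \, , \quad \partial_\mu \varphi \, , \quad \varphi^2 \, , \quad \partial_\mu \partial_\nu \varphi \, , \quad \varphi \, \partial_\mu \varphi \, , \quad \dots
\end{equation}
which in the free theory have dimensions $\Delta = 0, \, \frac{D-2}{2}, \, \frac{D}{2}, \, D-2, \dots$ respectively; the multiplicity $N(\Delta, S)$ then simply counts the linearly independent monomials with the given dimension and spin.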
On the vector space $V$, we would like to have an anti-linear, involutive
operation $\star: V \to V$, which should be thought of as taking the hermitian adjoint of
the quantum fields. We would also like to have a linear grading map $\gamma: V \to V$
with the property $\gamma^2 = id$. The vectors corresponding to eigenvalue $+1$ are
to be thought of as "bosonic", while those corresponding to eigenvalue $-1$ are to
be thought of as "fermionic".
So far, we have only defined a list of objects---in fact a linear space---that we think
of as labeling the various composite
quantum fields of the theory. The dynamical content and quantum nature of
the given theory is now incorporated in the operator product coefficients associated with
the quantum fields. This is a hierarchy denoted
\begin{equation}
{\cal C} = \bigg( {\cal C}(x_1, x_2), {\cal C}(x_1, x_2, x_3), {\cal C}(x_1, x_2, x_3, x_4), \dots \bigg) \, ,
\end{equation}
where each ${\cal C}(x_1, \dots, x_n)$ is an analytic function on the "configuration space"
\begin{equation}
M_n := \{(x_1, \dots, x_n) \in ({\mathbb R}^D)^n \mid x_i \neq x_j \quad
\text{for all $1 \le i< j \le n$}\} \, ,
\end{equation}
taking values in the linear maps\footnote{Strictly speaking, ${\cal C}(x_1, \dots, x_n)$
does not take its values in the space $V$, because for each $v_1, \dots, v_n \in V$,
the expression ${\cal C}(x_1, \dots, x_n)(v_1 \otimes \dots \otimes v_n)$ typically has non-zero
components in an infinite number of summands in the decomposition~\eqref{decomp}.
By contrast, $V$ by definition only consists of vectors which have non-zero components
only for finitely many summands. Thus, ${\cal C}(x_1, \dots, x_n)$ actually takes
values in the larger space ${\rm Hom}(V^*, {\mathbb C}) \supset V$, where $V^*$ is the
(algebraic) dual of $V$, see eq.~\eqref{dualdecomp}.
}
\begin{equation}
{\cal C}(x_1, \dots, x_n) : V \otimes \cdots \otimes V \to V \, ,
\end{equation}
where there are $n$ tensor factors of $V$. For one point, we set ${\cal C}(x_1) = id: V \to V$,
where $id$ is the identity map.
The components of these maps in a basis of $V$ correspond to the OPE coefficients
mentioned in the previous section.
More explicitly, if $\{
|v_a \rangle \}$ denotes a basis of $V$ adapted to the grading of $V$,
and $\{\langle v^a |\}$ the corresponding basis of the dual space
\begin{equation}\label{dualdecomp}
V^* = \bigoplus_{\Delta \in {\mathbb R}_+} \bigoplus_{S \in {\rm irrep}} {\mathbb C}^{N(\Delta, S)}
\otimes V_{\overline S} \, ,
\end{equation}
with $V_{\overline S}$ denoting the conjugate representation,
$\langle v^b| v_a \rangle = \delta^b_a$, then
\begin{equation}\label{Ccompdef}
C^b_{a_1 \dots a_n}(x_1, \dots, x_n) = \langle v^b| {\cal C}(x_1, \dots, x_n)|
v_{a_1} \otimes \dots \otimes v_{a_n} \rangle \,\,\,\,,
\end{equation}
using the standard bra-ket notations such as $|v_{a_1} \otimes \dots \otimes v_{a_n}\rangle
:= |v_{a_1} \rangle \otimes \cdots \otimes |v_{a_n} \rangle$.
The basic properties of quantum field theory are now expressed as the following properties
on the OPE coefficients:
\medskip
\noindent
\paragraph{\bf Hermitian conjugation:} Denoting by $\iota: V \to V$ the
anti-linear map given by the star operation $\star$, we have
\begin{equation}
\overline{{\cal C}(x_1, \dots, x_n)} = \iota \, {\cal C}(x_1, \dots, x_n) \, \iota^n
\end{equation}
where $\iota^n := \iota \otimes \cdots \otimes \iota$ is the $n$-fold tensor
product of the map $\iota$.
\medskip
\noindent
\paragraph{\bf Euclidean invariance:} Let $R$ be the representation
of ${\rm Spin}(D)$ on $V$,
let $a \in {\mathbb R}^D$ and let $g \in {\rm Spin}(D)$. Then we have
\begin{equation}
{\cal C}(gx_1 + a, \dots, gx_n + a) = R^*(g)
\, {\cal C}(x_1, \dots, x_n) \, R(g)^n \, ,
\end{equation}
where $R(g)^n$ stands for the $n$-fold tensor product $R(g) \otimes \dots \otimes R(g)$.
\medskip
\noindent
\paragraph{\bf Bosonic nature:} The OPE-coefficients should themselves be
"bosonic" in the sense that
\begin{equation}
{\cal C}(x_1, \dots, x_n) = \gamma \, {\cal C}(x_1, \dots, x_n) \, \gamma^n
\end{equation}
where $\gamma^n$ is again a shorthand for the $n$-fold tensor product
$\gamma \otimes \dots \otimes \gamma$.
\medskip
\noindent
\paragraph{\bf Identity element:} There exists a unique element
${\bf 1}$ of $V$ of dimension $\Delta = 0$,
with the properties ${\bf 1}^\star = {\bf 1}, \gamma({\bf 1}) = {\bf 1}$,
such that
\begin{equation}\label{iidop}
{\cal C}(x_1, \dots, x_n)|v_1 \otimes \cdots \otimes {\bf 1} \otimes \cdots \otimes v_{n-1} \rangle =
{\cal C}(x_1, \dots \widehat{x_i}, \dots x_n) |v_1 \otimes \cdots \otimes v_{n-1}\rangle \, ,
\end{equation}
where ${\bf 1}$ is in the $i$-th tensor position, with $i \le n-1$. When ${\bf 1}$
is in the $n$-th tensor position, the analogous formula takes a slightly
more complicated form. This is because $x_n$ is the point around which we expand
the operator product, and therefore this point and the corresponding $n$-th
tensor entry is on a different footing than the other points and tensor entries.
To motivate heuristically the appropriate form of the identity axiom in this case, we start by
noting that, if $\phi_a$ is a quantum (or classical) field, then we can formally
perform a Taylor expansion
\begin{equation}\label{taylor}
\phi_a(x_1) = \sum_{i=0}^\infty \frac{1}{i!} y^{\mu_1} \cdots y^{\mu_i}
\partial_{\mu_1} \dots \partial_{\mu_i} \phi_a(x_2) \, ,
\end{equation}
where $y=x_1-x_2$.
Now, each field $\partial_{\mu_1} \dots \partial_{\mu_i} \phi_a$ is
just another quantum field in the theory---denoted, say by $\phi_b$ for some label $b$---so trivially,
we might write this relation alternatively in the form
$\phi_a(x_1) = \sum t_a^b(x_1, x_2) \phi_b(x_2)$. Here, $t_a^b$ are defined by the above
Taylor expansion, up to trivial changes accounting for the fact that, in the chosen
labeling of the fields, a derivative of $\phi_a$ might actually correspond to a linear
combination of other fields. Now formally, we have
\begin{eqnarray}
\sum_b C^b_{a_1 \dots a_{n-1} {\bf 1}}(x_1, \dots, x_n) \, \phi_b(x_n) &=&
\phi_{a_1}(x_1) \cdots \phi_{a_{n-1}}(x_{n-1}) {\bf 1} \\
&=& \sum_b C^b_{a_1 \dots a_{n-1}}(x_1, \dots, x_{n-1}) \, \phi_b(x_{n-1}) \nonumber\\
&=& \sum_{c,b} C^c_{a_1 \dots a_{n-1}}(x_1, \dots, x_{n-1}) \, t_c^b(x_{n-1}, x_n) \, \phi_b(x_{n}) \,, \nonumber
\end{eqnarray}
so we are led to conclude that
\begin{equation}\label{unitcomp}
C^b_{a_1 \dots a_{n-1} {\bf 1}}(x_1, \dots, x_n) = \sum_c t^b_c(x_{n-1}, x_n) \,
C^c_{a_1 \dots a_{n-1}}(x_1, \dots, x_{n-1}) \, .
\end{equation}
Note that, in eq.~\eqref{taylor}, the operators on the right contain derivatives and
are thus expected to have a dimension that is not smaller than that of the operator
on the left hand side. It thus follows that $t^a_b(x_1, x_2)$ can only be nonzero
if the dimension of the operator $\phi_a$ is not less than the dimension of $\phi_b$.
Since there are only finitely many operators up to a given dimension, it follows that
the sum in eq.~\eqref{unitcomp} is finite, and there are no convergence issues.
We now abstract the features that we have heuristically derived. We postulate
the existence of a "Taylor expansion map", i.e. a
linear map\footnote{Here, the same remarks apply as in
footnote 1.} $t(x_1, x_2): V \to V$ for each $x_1, x_2 \in {\mathbb R}^D$ with the
following properties. The map should transform in the same
way as the OPE coefficients, see the Euclidean invariance axiom. If $V^\Delta$ denotes the
subspace of $V$ in the decomposition~\eqref{decomp} spanned by vectors
of dimension $\Delta$, then
\begin{equation}
t(x_1, x_2) V^\Delta \subset \bigoplus_{\widehat \Delta \ge \Delta} V^{\widehat \Delta} \, .
\end{equation}
Furthermore, we have the cocycle relation
\begin{equation}
t(x_1, x_2) t(x_2, x_3) = t(x_1, x_3) \, .
\end{equation}
The restriction of any vector of $t(x_1, x_2) V^\Delta$ to any subspace $V^{\widehat \Delta}$
should have a polynomial dependence on $x_1-x_2$.
Finally, for each $v_1, \dots, v_{n-1} \in V$, we have
\begin{equation}
{\cal C}(x_1, \dots, x_n)|v_1 \otimes \dots \otimes v_{n-1} \otimes {\bf 1} \rangle =
t(x_{n-1}, x_n) {\cal C}(x_1, \dots, x_{n-1}) |v_1 \otimes \dots \otimes v_{n-1}\rangle \, ,
\end{equation}
for all $(x_1, \dots, x_n) \in M_n$. This is the desired formulation
for the identity axiom when the identity operator is in the $n$-th position.
Note that this relation implies in particular the relation
\begin{equation}
t(x_1, x_2) |v\rangle = {\cal C}(x_1, x_2)| v \otimes {\bf 1} \rangle \, ,
\end{equation}
i.e., $t(x_1, x_2)$ uniquely determines the 2-point OPE coefficients with
an identity operator and vice-versa. In particular, we have $t(x_1, x_2) {\bf 1} = {\bf 1}$
by eq.~\eqref{iidop} and ${\cal C}(x_1) = id$, meaning that the identity operator does not
depend on a "reference point".
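For instance, on the vector labeling the basic field $\varphi$ of a scalar theory, the heuristic expansion~\eqref{taylor} suggests
\begin{equation}
t(x_1, x_2) \, \varphi = \varphi + y^\mu \, \partial_\mu \varphi + \frac{1}{2} \, y^\mu y^\nu \, \partial_\mu \partial_\nu \varphi + \dots \, , \qquad y = x_1 - x_2 \, ,
\end{equation}
where each term on the right is viewed as an element of (the completion of) $V$. The restriction to any fixed subspace $V^{\widehat \Delta}$ is manifestly polynomial in $x_1 - x_2$, and the cocycle relation simply expresses the fact that Taylor expansions compose.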
\medskip
\noindent
\paragraph{\bf Factorization:} Let $I_1, \dots, I_r$ be a partition of the
set $\{1, \dots, n\}$ into disjoint ordered subsets, with the property
that all elements in $I_i$ are greater than all elements in $I_{i-1}$ for
all $i$. For example, for $n=6$, such a partition is
$I_1 = \{1\}, I_2 = \{2,3,4\}, I_3 = \{5,6\}$.
For each ordered subset $I \subset \{1, \dots, n\}$, let
$X_I$ be the ordered tuple $(x_i)_{i \in I} \in ({\mathbb R}^D)^{|I|}$,
let $m_k = {\rm max}(I_k)$, and set ${\cal C}(X_I) := id$ if
$I$ is a set consisting of only one element. Then we have
\begin{equation}\label{factorization}
{\cal C}(X_{\{1, \dots, n\}}) = {\cal C}(X_{\{m_1,\dots,m_r\}}) \Big(
{\cal C}(X_{I_1}) \otimes \cdots \otimes {\cal C}(X_{I_r})
\Big)
\end{equation}
as an identity on the open domain
\begin{eqnarray}\label{domaindef}
D[\{I_1, \dots, I_r\}] &:=& \bigg\{
(x_1, \dots, x_n) \in M_n \mid \nonumber\\
&& {\rm min} \, d(X_{\{m_1, \dots,m_r\}}) > {\rm max}\, (d(X_{I_1}), \dots, d(X_{I_r}))
\bigg\} \, .
\end{eqnarray}
Here, $d(X_I)$ denotes the set of relative distances between the points
in a collection $X_I = (x_i)_{i \in I}$, defined as
the collection of positive real numbers
\begin{equation}
d(X_I) := \{ r_{ij} \,\, \mid i,j \in I, i \neq j \} \, .
\end{equation}
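The simplest nontrivial instance is $n = 3$. The two partitions $\{1,2\}, \{3\}$ and $\{1\}, \{2,3\}$ yield
\begin{eqnarray}
{\cal C}(x_1, x_2, x_3) &=& {\cal C}(x_2, x_3) \Big( {\cal C}(x_1, x_2) \otimes id \Big) \qquad \text{for $r_{12} < r_{23}$,} \nonumber\\
{\cal C}(x_1, x_2, x_3) &=& {\cal C}(x_1, x_3) \Big( id \otimes {\cal C}(x_2, x_3) \Big) \qquad \text{for $r_{23} < r_{13}$,}
\end{eqnarray}
and on the overlap $r_{12} < r_{23} < r_{13}$ of the two domains, equality of the right hand sides is precisely the associativity condition~\eqref{assoccomp} discussed in the introduction.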
Note that the factorization identity~\eqref{factorization} when expressed in
a basis of $V \otimes \dots \otimes V$ involves an $r$-fold infinite sum on the
right side. The factorization property is in particular the statement that these
infinite sums converge on the indicated domain. No statement is made about the
convergence outside the domain, and in fact the series are expected
to diverge outside the above domains. For an arbitrary partition
of $\{1, \dots, n\}$, a similar factorization condition can be
derived from the (anti-)symmetry axiom. If
there are any fermionic fields in the theory, then appropriate
$\pm$-signs appear in these conditions.
We also note that we may iterate the above factorization equation on suitable domains.
For example, if the $j$-th subset $I_j$ is itself partitioned into subsets, then
on a suitable subdomain associated with the partition, the coefficient ${\cal C}(X_{I_j})$
itself will factorize. Subsequent partitions may naturally be identified with
trees on $n$ elements $\{1, \dots, n\}$, i.e., the specification of a tree naturally
corresponds to the specification of a nested set of subsets of $\{1, \dots, n\}$.
In~\cite{HW08} and also below, a version of the above factorization property is given in terms of such
trees. However, we note that the condition given in reference~\cite{HW08} is not stated in
terms of convergent power series expansions, but instead in terms of asymptotic
scaling relations. The former seems to be more natural in the Euclidean domain.
\medskip
\noindent
\paragraph{\bf Scaling:} Let $|v_{a_1}\rangle, \dots, |v_{a_n} \rangle \in V$ be vectors with dimension
$\Delta_1, \dots, \Delta_n$ [see the decomposition of $V$ in
eq.~\eqref{decomp}] respectively, and let $\langle v^b | \in V^*$ be an element
in the dual space of $V$ with dimension $\Delta_{n+1}$. Then the scaling degree\footnote{
The scaling degree is defined here as the infimum over all $p \in {\mathbb R}$ such that
$\lim \epsilon^p C_{a_1 \dots a_n}^b(\epsilon x_1, \dots, \epsilon x_n) = 0$ for all
$(x_1, \dots, x_n) \in M_n$.}
of the ${\mathbb C}$-valued distribution~\eqref{Ccompdef}
should be estimated by
\begin{equation}
sd \, C_{a_1 \dots a_n}^b \le \Delta_1 + \dots + \Delta_n - \Delta_{n+1} \, .
\end{equation}
If $v^b$ is an element of the 1-dimensional subspace of dimension-0 fields
spanned by the identity operator ${\bf 1} \in V$, if $n=2$ and if $|v_{a_1}^{} \rangle =
|v_{a_2}^\star \rangle \neq 0$, then it is required that the inequality is saturated.
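As an illustration, the saturation requirement states, schematically, that
\begin{equation}
C_{a a^\star}^{\, \bf 1}(x_1, x_2) \, \sim \, r_{12}^{-2\Delta_a} \qquad \text{as $r_{12} \to 0$,}
\end{equation}
where $\Delta_a$ is the dimension of $\phi_a$; this is the familiar statement that the coefficient of the identity in the OPE of a field with its hermitian adjoint scales with twice the dimension of the field.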
\medskip
\noindent
\paragraph{\bf (Anti-)symmetry:} Let $\tau_{i-1, i} = (i-1 \,\, i)$ be the permutation
exchanging the $(i-1)$-th and the $i$-th object, which we define
to act on $V \otimes \dots \otimes V$ by exchanging the corresponding
tensor factors. Then we have
\begin{eqnarray}
&&{\cal C}(x_1, \dots, x_{i-1}, x_i, \dots, x_n) \, \tau_{i-1,i} =
{\cal C}(x_1, \dots, x_i, x_{i-1}, \dots, x_n) \, (-1)^{F_{i-1}F_i} \\
&& F_i := \frac{1}{2} \,
id^{i-1} \otimes (id-\gamma)
\otimes id^{n-i} \, .
\end{eqnarray}
for all $1<i<n$. Here, the last factor is designed so that Bosonic fields
have symmetric OPE coefficients, and Fermi fields have anti-symmetric
OPE-coefficients. The last point $x_n$, and the $n$-th tensor factor
in $V\otimes \dots \otimes V$
do not behave in the same way under permutations. This is because we have
chosen to expand an operator product around the $n$-th (i.e., last) point,
and hence this point and tensor factor is
not on the same footing as the other points and tensor factors in the OPE.
The corresponding (anti-)symmetry property for
permutations involving $x_n$ is as follows. We let $t(x_1, x_n)$ be the
Taylor expansion map explained in the identity element axiom. Then we postulate
\begin{equation}
{\cal C}(x_1, \dots, x_{n-1}, x_n) \, \tau_{n-1,n} =
t(x_{n-1}, x_n) \, {\cal C}(x_1, \dots, x_n, x_{n-1}) \, (-1)^{F_{n-1}F_{n}} \, .
\end{equation}
The additional factor of the Taylor expansion operator
$t(x_{n-1}, x_n)$ compensates for the change in the reference point.
This formula can be motivated heuristically in the same way as
the analogous formulae in the identity axiom.
\medskip
\noindent
The factorization property~\eqref{factorization} is the core property of the
OPE coefficients that holds everything together. It is clear that it imposes very stringent
constraints on the possible consistent hierarchies $({\cal C}(x_1, x_2), {\cal C}(x_1, x_2, x_3),
\dots )$. The Euclidean invariance axiom
implies that the OPE coefficients are translation invariant, and it links the
decomposition~\eqref{decomp} of the field space into sectors of different spin to the transformation
properties of the OPE coefficients under the rotation group. The scaling property likewise
links the decomposition into sectors with different dimension to the scaling properties
of the OPE coefficients. The (anti-)symmetry property is a replacement for local
(anti-)commutativity (Einstein causality) in the Euclidean setting. Note that
we do not impose here as a condition that the familiar relation between spin and
statistics~\cite{Wightman} should hold. As we have shown in~\cite{HW08}, this may be derived
as a consequence of the above axioms in the variant considered there. Similarly,
we do not postulate any particular transformation properties under discrete
symmetries such as $C,P,T$, but we mention that one can derive the $PCT$-theorem
in this type of framework, as shown in~\cite{HPCT}. The same
result may also be proved in the present setting by very similar techniques,
but we shall not dwell upon this here.
In summary, in the following, a quantum field theory is defined as a pair
consisting of an infinite dimensional vector space $V$ with the above stated
properties, together with a hierarchy of OPE coefficients
${\cal C}:= ({\cal C}(x_1, x_2), {\cal C}(x_1, x_2, x_3), \dots)$
with the above stated properties. It is natural to identify quantum field theories if
they differ only by a redefinition of their fields. Informally, a field redefinition
means that one changes one's definition of the quantum fields of the theory
from $\phi_a(x)$ to $\widehat \phi_a(x) = \sum_b z_a^b \phi_b(x)$, where $z_a^b$ is some matrix on field space.
The OPE coefficients of the redefined fields differ from the original ones accordingly by
factors of this matrix. We formalize this in the following definition:
\begin{defn}\label{fieldred}
Let $(V, {\cal C})$ and $(\widehat V, \widehat {\cal C})$ be two quantum field theories. If there exists
an invertible linear map $z: V \to \widehat V$ with the properties
\begin{equation}
z \, R(g) = \hat R(g) \, z \,, \quad z \, \gamma = \hat \gamma \, z \, , \quad
z \, \star = \hat \star \, z \, ,
\end{equation}
together with
\begin{equation}
{\cal C}(x_1, \dots, x_n) = z^{-1} \, \widehat {\cal C}(x_1, \dots, x_n) \, z^n
\end{equation}
for all $n$, where $z^n = z \otimes \dots \otimes z$, then the two quantum field
theories are said to be equivalent, and $z$ is said to be a field redefinition.
\end{defn}
We would finally like to impose a condition that the quantum field theory $(V, {\cal C})$ described
by the field space $V$ and the OPE coefficients ${\cal C}$ has a vacuum state.
Since we are working in a Euclidean setting here, the appropriate notion
of quantum state is a collection of Schwinger- or correlation functions, denoted as usual by
$\langle \phi_{a_1}(x_1) \cdots \phi_{a_n}(x_n) \rangle_\Omega$, where $n$ and $a_1, \dots, a_n$
can be arbitrary. These functions should be analytic functions on $M_n$ satisfying the
Osterwalder-Schrader (OS) axioms for the vacuum state $\Omega$~\cite{OS1,OS2}. They should also satisfy the
OPE in the sense that
\begin{equation}
\big\langle \phi_{a_1} (x_1) \cdots \phi_{a_n}(x_n) \big\rangle_\Omega \sim
\sum_{b} C_{a_1 \dots a_n}^b(x_1, \dots, x_n) \, \big\langle \phi_b(x_n) \big\rangle_\Omega \, .
\end{equation}
Here, the symbol $\sim$ means that the difference between the left and right side is
a distribution on $M_n$ whose scaling degree is smaller than any given number $\delta$
provided the above sum goes over all of the finitely many
fields $\phi_b$ whose dimension is smaller than some number $\Delta = \Delta(\delta)$.
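As a simple standard illustration of this relation (not needed in what follows), consider the theory of a free massless scalar field $\phi$ in $D > 2$ Euclidean dimensions. The OPE of two basic fields takes the form
\begin{equation}
\phi(x_1) \, \phi(x_2) \sim \frac{c_D}{r_{12}^{D-2}} \, {\bf 1} + {:}\phi^2{:}(x_2) + \dots \, ,
\end{equation}
with $c_D$ a normalization constant depending on conventions. Together with $\langle {\bf 1} \rangle_\Omega = 1$ and
$\langle {:}\phi^2{:}(x) \rangle_\Omega = 0$, this reproduces the familiar vacuum 2-point function
$\langle \phi(x_1) \phi(x_2) \rangle_\Omega = c_D \, r_{12}^{2-D}$.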
The OS-reconstruction theorem then guarantees that the theory can
be continued back to Minkowski spacetime, and that the fields can be represented as
linear operators on a Hilbert space $\H$ of states. One may want to
impose only the weaker condition that there exist {\em some} quantum state for the
quantum field theory described by $(V, {\cal C})$. In that case, one would postulate the
existence of a set of Schwinger functions satisfying all of the
OS-axioms except those involving statements about the invariance under the Euclidean group.
Such a situation is of interest in theories with unbounded potentials where a
vacuum state is not expected to exist, but where the OPE might nevertheless exist.
It is clear that the existence of a vacuum state (or in fact, just any quantum state)
satisfying the OS-axioms is a potentially new restriction on the OPE coefficients.
We will not analyze here the nature of these restrictions, as our focus is on the algebraic
constraints satisfied by the OPE-coefficients. We only note here that the condition of
OS-positivity is not satisfied in some systems in statistical mechanics, and it is also
not satisfied in gauge theories before the quotient by the BRST-differential is taken~(see
sec.~\ref{hochschild}). These systems on the other hand do satisfy an OPE in a suitable sense.
Thus, one would expect that the existence of
a set of correlation functions satisfying the full set of
OS-axioms is a genuinely new restriction\footnote{
Consequences of OS-positivity have been analyzed in the context of partial wave expansions~\cite{Rehren1,Rehren2},
and also in the framework of~\cite{Mack1}.
} on
the allowed theory, which one might want to drop in some cases.
\section{Coherence theorem}\label{coherence}
In the last section we have laid out our definition of a quantum field theory in
terms of a collection of operator product coefficients. The key condition that
these should satisfy is the factorization property~\eqref{factorization}. It is
clear that these conditions should impose a set of very stringent constraints upon the
coefficients ${\cal C}(x_1, \dots, x_n)$ for $n \ge 2$. In this section, we will analyze
these conditions and show that, in a sense, all of these constraints may be thought
of as encoded in the first non-trivial one arising at $n=3$ points. We shall
refer to this type of result as a "coherence theorem", because it means that
all the factorization constraints are coherently described by a single condition
in the precise sense explained below.
Before we describe our result in detail, we would
like to put it into perspective by drawing a parallel to an analogous
result valid for ordinary algebras. Let ${\bf A}$ be a finite-dimensional algebra.
The key axiom for an algebra is the associativity condition, stating that
\begin{equation}\label{aassoc}
(AB)C = A(BC) \quad \text{for all $A,B,C \in {\bf A}$.}
\end{equation}
Written somewhat differently, if we write the product as
$m(A,B) = AB$ with $m$ a linear map $m: {\bf A} \otimes {\bf A} \to {\bf A}$, then
in a tensor product notation similar to the one used above
in context of the OPE, the associativity condition is equivalent to
\begin{equation}\label{massoc}
m(id \otimes m) = m(m \otimes id) \, ,
\end{equation}
where the two sides of the above equation are now maps ${\bf A} \otimes {\bf A} \otimes {\bf A} \to {\bf A}$.
An elementary result for algebras is that there do not arise any
further constraints on the product $m$ from "higher associativity conditions" such
as for example
\begin{equation}\label{ahigherass}
(AB)(CD) = (A(BC))D \quad \text{for all $A,B,C,D \in {\bf A}$.}
\end{equation}
Indeed, it is not difficult to prove this identity by successively
applying eq.~\eqref{aassoc}, and this can be generalized
to prove all possible higher associativity identities.
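Explicitly, the identity~\eqref{ahigherass} is obtained by the chain
\begin{equation}
(AB)(CD) = ((AB)C)D = (A(BC))D \, ,
\end{equation}
where the first equality is eq.~\eqref{aassoc} applied to the three factors $AB$, $C$, $D$, and
the second is eq.~\eqref{aassoc} applied inside the first factor.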
The associativity condition~\eqref{aassoc} is analogous to
the consistency conditions for the OPE coefficients arising from the
factorization constraint~\eqref{factorization}
for three points. Moreover, the higher order associativity conditions~\eqref{ahigherass}
are analogous
to the conditions that arise from the factorization constraint for more than three
points. Thus, our coherence theorem is analogous to the above statement for
ordinary algebras that there are no higher order associativity constraints which
are not already automatically satisfied on account of the
standard associativity condition~\eqref{aassoc}.
Let us now describe our coherence result in more detail.
For $n=3$ points, there are three partitions of the set $\{1, 2, 3\}$ leading
to three corresponding non-trivial factorization conditions~\eqref{factorization}, namely\footnote{
Note that, in our formulation of the factorization condition, there is an ordering
condition on the partitions. Here we mean more precisely all conditions that can be obtained
by combining this with the symmetry axiom, which will give conditions for arbitrary orderings.}
${\bf T}_3:=\{ \{1, 2\}, \{3\} \}$, ${\bf T}_2 := \{ \{1, 3\}, \{2\} \}$, and ${\bf T}_1:=
\{ \{2, 3\}, \{1\} \}$. The corresponding domains on which the factorization
identities are valid are given respectively by
\begin{eqnarray}\label{threet}
D[{\bf T}_1] &=& \{(x_1, x_2, x_3) \mid r_{23} < r_{13} \} \, ,\\
D[{\bf T}_2] &=& \{(x_1, x_2, x_3) \mid r_{13} < r_{23} \} \, ,\\
D[{\bf T}_3] &=& \{(x_1, x_2, x_3) \mid r_{12} < r_{23} \} \, .
\end{eqnarray}
Clearly, the first two domains have no common points, but they both have an open,
non-empty intersection with the third domain. Thus, on each of
these intersections, we have two factorizations of the OPE coefficient
${\cal C}(x_1, x_2, x_3)$ according to eq.~\eqref{factorization}.
These must hence be equal. Thus,
we conclude that
\begin{equation}\label{Cassoc}
{\cal C}(x_2, x_3) \Big( {\cal C}(x_1, x_2) \otimes id \Big) = {\cal C}(x_1, x_3) \Big( id \otimes {\cal C}(x_2, x_3) \Big) \,
\end{equation}
on the intersection $D[{\bf T}_1] \cap D[{\bf T}_3]$ [that is, the set
$\{r_{12}<r_{23}<r_{13}\}$] and a similar relation must hold
on the intersection $D[{\bf T}_2] \cap D[{\bf T}_3]$. However, the latter relation
can also be derived from eq.~\eqref{Cassoc} by the
symmetry axiom for the OPE coefficients stated in the previous section,
\begin{equation}\label{add1}
{\cal C}(x_1, x_2) = t(x_1, x_2) {\cal C}(x_2, x_1) \tau_{1,2} \,
\end{equation}
and the relation
\begin{equation}\label{add2}
{\cal C}(x_1, x_3) = {\cal C}(x_2, x_3)\Big( t(x_1, x_2) \otimes id \Big)
\end{equation}
for $r_{12} < r_{23}$.
Thus, for three points, essentially the only independent
consistency condition is eq.~\eqref{Cassoc}. In component form, this condition
was given above in eq.~\eqref{assoccomp}.
The consistency condition~\eqref{Cassoc} is analogous to the
associativity condition~\eqref{massoc} for the product in
an ordinary algebra. By analogy to an ordinary algebra, we may
hence ask whether there are any further constraints on ${\cal C}(x_1, x_2)$ arising
from the higher order factorization equations~\eqref{factorization} with
$n \ge 4$. As we will now show, this is not the case. We also show that, as in
an ordinary algebra, the coefficients ${\cal C}(x_1, \dots, x_n)$ analogous to a product
of $n$ factors are completely determined by the coefficient ${\cal C}(x_1, x_2)$ analogous
to a product with two factors.
Our first task is to write down all factorization conditions involving only the
coefficients ${\cal C}(x_1, x_2)$. For this, it is useful to employ the language of rooted
trees. One way to describe a rooted tree on $n$ elements $\{1, \dots, n\}$ is
by a set $\{S_1, \dots, S_k\}$ of nested subsets $S_i \subset \{1, \dots, n\}$.
This is a family of subsets with the property that each set $S_i$ is either contained
in another set of the family, or disjoint from it. The set $\{1, \dots, n\}$ is
by definition not in the tree, and is referred to as the root.
The sets $S_i$ are to be thought of as the
nodes of the tree, and a node is connected by branches to all those nodes
that are subsets of $S_i$ but not proper subsets of any element of the tree other
than $S_i$. The leaves are those nodes that
do not themselves contain any other set of the tree; they are
given by the singleton sets $S_i = \{i\}$. If ${\bf T}$ is a tree
on $n$ elements of a set, then we also denote by $|{\bf T}|$ the elements
of this set. Let ${\bf T}$ be a tree upon $n$
elements of the form ${\bf T} = \{ {\bf T}_1, \dots, {\bf T}_r \}$, where each ${\bf T}_i$ is
itself a tree on a proper subset of $\{1, \dots, n\}$, so that $|{\bf T}_1| \cup \dots \cup
|{\bf T}_r| = \{1, \dots, n\}$ is a partition into disjoint subsets.
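As an example, the family
\begin{equation}
{\bf T} = \big\{ \{1\}, \{2\}, \{3\}, \{4\}, \{1,2\}, \{3,4\} \big\}
\end{equation}
is a binary tree on $\{1, 2, 3, 4\}$: its leaves are the four singletons, the nodes $\{1,2\}$ and $\{3,4\}$
each carry two branches ending in leaves, and both nodes are attached directly to the root
$\{1, \dots, 4\}$, which by our convention is not itself listed in ${\bf T}$.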
We define an open, non-empty domain of $M_n$ for such trees recursively
by
\begin{eqnarray}\label{dtdef}
D[{\bf T}] &=&
\bigg\{
(x_1, \dots, x_n) \in M_n \mid X_{|{\bf T}_1|} \in D[{\bf T}_1], \dots,
X_{|{\bf T}_r|} \in D[{\bf T}_r]; \nonumber\\
&&{\rm min} \, d(X_{\{m_1, \dots, m_r\}}) > {\rm max} \, (d(X_{|{\bf T}_1|}), \dots, d(X_{|{\bf T}_r|}))
\bigg\}
\, ,
\end{eqnarray}
where $m_i$ is the maximum element upon which the tree ${\bf T}_i$ is
built, and where we are using the same notations $d(X_I)$ and
$X_I = (x_i)_{i \in I}$ as above for any subset $I \subset \{1, \dots, n\}$.
If ${\bf T}_i$ are the trees with only a single node apart from the leaves,
then the above domain is identical with the domain defined above in the
factorization axiom~\eqref{factorization}, see eq.~\eqref{domaindef} with
$I_i$ in that definition given by the elements of the $i$-th subtree ${\bf T}_i$.
Otherwise, it is a proper open subset of that domain. In
any case, the factorization identity~\eqref{factorization} holds on $D[{\bf T}]$.
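For instance, for the binary tree ${\bf T} = \{\{1\}, \{2\}, \{3\}, \{1,2\}\}$ on three elements,
the recursion~\eqref{dtdef} gives
\begin{equation}
D[{\bf T}] = \{ (x_1, x_2, x_3) \in M_3 \mid r_{12} < r_{23} \} \, ,
\end{equation}
in agreement with the domain $D[{\bf T}_3]$ of eq.~\eqref{threet}.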
However, we may now iterate the factorization identity, because the
factors ${\cal C}(X_{|{\bf T}_i|})$ now themselves factorize on $D[{\bf T}]$, given
that $X_{|{\bf T}_i|} \in D[{{\bf T}_i}]$. We apply the factorization condition
to this term again, and continuing this way, we get a nested factorization identity on
each of the above domains $D[{\bf T}]$.
To write down these identities in a reasonably compact way,
we introduce some more notation. If $S \in {\bf T}$, we write
$\ell(1), \dots, \ell(j) \subset_{\bf T} S$ if $\ell(1), \dots, \ell(j)$ are the
branches descending from $S$ in the tree ${\bf T}$. We write
$m_i$ for the largest element in the sets $\ell(i)$, and we assume that the branches
have been ordered in such a way that $m_1 < \dots < m_j$.
As above in eq.~\eqref{Ccompdef}, we let
$C_{a_1 \dots a_n}^b(x_1, \dots, x_n)$ be the
basis components of the linear maps ${\cal C}(x_1, \dots, x_n): V^{\otimes n} \to V$.
Then, for each tree ${\bf T}$ on $\{1, \dots, n\}$, the
following factorization identity holds on the domain $D[{\bf T}]$:
\begin{equation}\label{treefactor}
C_{a_1 \dots a_n}^b(x_1, \dots, x_n) =
\sum_{a_S: S \in {\bf T}} \left( \prod_{S: \ell(1), \dots, \ell(j) \subset_{\bf T} S}
C^{a_S}_{a_{\ell(1)} \dots a_{\ell(j)}}
(x_{m_1}, \dots, x_{m_j}) \right) \, .
\end{equation}
Here, the sums are over all $a_S$ with $S$ a subset in the tree other than the
singletons $\{1\}, \dots, \{n\}$ and the root $\{1, \dots, n\}$. For these sets, we define
$a_{\{1\}} := a_1, \dots, a_{\{n\}} := a_n$ and
$a_{\{1, \dots, n\}} := b$, respectively. The nested infinite sums are carried
out in the hierarchical order determined by the tree, with the sums corresponding
to the nodes closest to the leaves performed first. If ${\bf T}$ is a binary
tree, i.e., one where precisely two branches descend from each node, then
the above factorization formula expresses the $n$-point
OPE coefficient ${\cal C}(x_1, \dots, x_n)$ in
terms of products of the 2-point coefficient in the open domain $D[{\bf T}] \subset M_n$.
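For example, for the binary tree ${\bf T} = \{\{1\}, \{2\}, \{3\}, \{1,2\}\}$,
eq.~\eqref{treefactor} reduces to
\begin{equation}
C^b_{a_1 a_2 a_3}(x_1, x_2, x_3) = \sum_{c} C^{c}_{a_1 a_2}(x_1, x_2) \, C^{b}_{c \, a_3}(x_2, x_3)
\end{equation}
on the domain $D[{\bf T}] = \{r_{12} < r_{23}\}$, i.e. the 3-point coefficient is expressed entirely
through 2-point coefficients.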
Since ${\cal C}(x_1, \dots, x_n)$ is by assumption analytic in the open, connected domain $M_n$,
and since an analytic function on a connected domain
is uniquely determined by its restriction to an open set, we have the
following simple proposition:
\begin{prop}\label{proposition1}
The $n$-point OPE-coefficients ${\cal C}(x_1, \dots, x_n)$ are uniquely determined
by the 2-point coefficients ${\cal C}(x_1, x_2)$. In particular, if two quantum
field theories have equivalent 2-point OPE coefficients [see the previous section],
then they are equivalent.
\end{prop}
We next ask whether the factorization condition~\eqref{treefactor}
for binary trees ${\bf T}$ imposes any further restrictions on ${\cal C}(x_1, x_2)$ apart
from~\eqref{Cassoc}. For this, consider for any binary tree ${\bf T}$ the expression
\begin{equation}\label{ftdef}
(f_{\bf T})^b_{a_1 \dots a_n}(x_1, \dots, x_n) :=
\sum_{a_S: S \in {\bf T}} \left( \prod_{S: \ell(1), \ell(2) \subset_{\bf T} S}
C^{a_S}_{a_{\ell(1)} a_{\ell(2)}}
(x_{m_1}, x_{m_2}) \right)
\end{equation}
defined on the domain $D[{\bf T}]$.
Thus, $f_{\bf T}(x_1, \dots, x_n)$ is the expression for
${\cal C}(x_1, \dots, x_n)$ in the factorization condition~\eqref{treefactor}
for the binary tree ${\bf T}$. This factorization condition hence implies that
$f_{\bf T}$ can be analytically continued to an analytic function on $M_n$
(denoted again by $f_{\bf T}$), and that this $f_{\bf T}$ is in fact independent
of the choice of the binary tree ${\bf T}$. In order to see what kind of
constraints this puts on the 2-point OPE coefficients ${\cal C}(x_1, x_2)$, let us now
pretend we only knew that the sums converge in
eq.~\eqref{ftdef}, that they define
an analytic function $f_{\bf T}$ on $D[{\bf T}]$, and that this can be analytically
continued to $M_n$, for all $n$ and all binary trees on $n$ elements.
In particular, for the sake of the argument,
let us {\em not} assume that the $f_{\bf T}$ coincide for different binary trees ${\bf T}$,
except in the case $n=3$. In this case, the assumption that $f_{\bf T}$ coincide for the
three binary trees and corresponding domains~\eqref{threet} is equivalent
to the assumption of associativity for three points [see eq.~\eqref{Cassoc}]
and the symmetry and normalization conditions~\eqref{add1},\eqref{add2},
and we want to assume this condition.
We will now show that these assumptions in fact imply that all $f_{\bf T}$ coincide
for all binary trees ${\bf T}$. In this sense, there are no further consistency conditions
on ${\cal C}(x_1, x_2)$ beyond those for three points. The proof of this statement is
not difficult, and is in fact very similar to the proof of the corresponding statement
for ordinary algebras. The argument is most easily presented
graphically in terms of trees. For $n=3$, we graphically
present the assumption that all $f_{\bf T}$ agree
for the three trees associated with three elements as fig.~\ref{fig1}.
In this figure, each tree symbolizes the corresponding expression $f_{\bf T}$, and an arrow
between two trees means the following relation: (i) the intersection of the
corresponding domains [see eq.~\eqref{threet}] is
not empty, and (ii) the expressions coincide on that intersection. Because
the $f_{\bf T}$ are analytic, any such relation implies that the corresponding $f_{\bf T}$'s
in fact have to coincide everywhere on $M_n$.
Now consider $n>3$ points, and let ${\bf T}$ be an arbitrary
tree on $n$ elements. The goal is to present a sequence ${\bf T}_0, {\bf T}_1, \dots, {\bf T}_r$
of trees such that ${\bf T}_0 = {\bf T}$, and such that ${\bf T}_r = {\bf S}$ is the "reference tree"
\begin{equation}
{\bf S} = \{ \{n\}, \{ n-1, n\}, \{n-2, n-1, n\}, \dots, \{1, 2, \dots n\} \}
\end{equation}
which is drawn in fig.~\ref{fig2}. The sequence should have the further property that
for each $i$, there is a relation as above between ${\bf T}_i$ and ${\bf T}_{i-1}$. As we
have explained, this would imply that $f_{\bf T} = f_{\bf S}$, and hence that all
$f_{\bf T}$'s are equal.
We now construct the desired sequence of trees inductively. We first write the binary
tree ${\bf T}={\bf T}_0$ as the left tree in fig.~\ref{fig3}, where the shaded regions again represent subtrees whose
particular form is not relevant. The next tree ${\bf T}_1$ is given by the right tree in
fig.~\ref{fig3}. We claim that there is a relation as above between these trees. In fact, it is easy to
convince oneself that the corresponding domains $D[{\bf T}_0]$ and $D[{\bf T}_1]$ have a non-empty
intersection. Secondly, because these trees differ by an elementary manipulation
as in fig.~\ref{fig1}, it is not difficult to see that the three-point consistency condition
implies that the corresponding expressions $f_{{\bf T}_0}$ and $f_{{\bf T}_1}$ coincide on
(at least an open subset of) $D[{\bf T}_0] \cap D[{\bf T}_1]$. Being analytic, they must hence coincide everywhere.
We now repeat this kind of process until
we arrive at the left tree ${\bf T}_{r_1}$ in fig.~\ref{fig4}. This tree has
the property that the $n$-th leaf is directly connected to the root. We change this
tree to the right tree in fig.~\ref{fig4}, again verifying that there is indeed the desired relation
between these trees. We repeat this step again
until we reach the tree ${\bf T}_{r_2}$ given in fig.~\ref{fig5}. It is clear now that this can be continued
until we have reached the tree ${\bf S}$ in fig.~\ref{fig2}.
\begin{figure}
\begin{center}
\includegraphics[width=6.5in]{newfig1.eps}
\end{center}
\caption{A graphical representation of the associativity condition. The double arrows indicate that
the domains $D[{\bf T}_i]$ represented by the respective trees have a common intersection, and that on this
intersection, the OPE's represented by the respective trees coincide. Note that the double arrows
are not a transitive relation: The domains associated with the left- and rightmost trees have empty
intersection.}
\label{fig1}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{newfig2.eps}
\end{center}
\caption{The reference tree $\bf S$.}
\label{fig2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.0in]{newfig3.eps}
\end{center}
\caption{An elementary manipulation. The shaded triangles represent subtrees whose form is not relevant.}
\label{fig3}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.5in]{newfig4.eps}
\end{center}
\caption{Another elementary manipulation.}
\label{fig4}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4.5in]{newfig5.eps}
\end{center}
\caption{The tree ${\bf T}_{r_2}$.}
\label{fig5}
\end{figure}
We summarize our finding in the following theorem:
\begin{thm}
("Coherence Theorem")
For each binary tree ${\bf T}$, let $f_{\bf T}$ be defined by eq.~\eqref{ftdef} on the domain $D[{\bf T}]$
as a convergent power series expansion, and assume that $f_{\bf T}$ has
an analytic extension to all of $M_n$. Furthermore, assume that the associativity
condition~\eqref{Cassoc} and symmetry and normalization conditions~\eqref{add1}, \eqref{add2}
hold, i.e. that all $f_{\bf T}$ coincide for trees
with three leaves. Then $f_{\bf T} = f_{\bf S}$ for any pair of binary trees ${\bf S}, {\bf T}$.
\end{thm}
\section{Perturbations and Hochschild cohomology}\label{perturbations}
Suppose we are given a quantum field theory in terms of
OPE-coefficients as described in sec.~\ref{axiomatic}. In this section we discuss the question
of how to describe perturbations of such a quantum field theory.
According to our definition of a quantum field theory, a
perturbed quantum field theory should correspond to a
perturbation series in some parameter $\lambda$ for the OPE coefficients.
Because our axioms for the OPE coefficients imply constraints---especially
the factorization axiom---the perturbations of the coefficients
will also have to satisfy corresponding constraints. In this
section, we will show that these constraints are of a
cohomological nature.
As we have discussed, our definition of quantum field theory
is algebraic. In fact, as argued in sec.~\ref{coherence}, up to
technicalities related to the convergence of various series,
the constraints on the OPE coefficients
can be formulated in the form of an "associativity condition" for the
2-point OPE coefficients only, see eq.~\eqref{Cassoc}. Consequently,
the perturbed 2-point OPE coefficients will also have to satisfy
a corresponding perturbed version of this constraint, and this
is in fact essentially the only constraint. It is this
perturbed version of the associativity condition that we will
discuss in this section.
Our discussion is in close parallel to the well-known
characterization of perturbations ("deformations") of an
ordinary finite dimensional algebra, an analogy which we have already
emphasized in another context above. We therefore begin by recalling
the basic theory of deformations
of finite-dimensional algebras~\cite{Gerstenhaber, Happel}.
Let ${\bf A}$ be a finite-dimensional algebra (over ${\mathbb C}$, say), whose product
we denote as usual by ${\bf A} \otimes {\bf A} \to {\bf A}, A \otimes B \mapsto AB$.
A deformation of the algebra is a 1-parameter family of
products $A \otimes B \mapsto A \bullet_\lambda B$, where
$\lambda \in {\mathbb R}$ is a smooth deformation parameter. The product
$A \bullet_0 B$ should be the original product $AB$, but
for non-zero $\lambda$, we have a new product on ${\bf A}$---or alternatively
on the ring of formal power series ${\mathbb C}[[\lambda]] \otimes {\bf A}$ if we merely
consider perturbations in the sense of formal power series. This new product
must satisfy the associativity law, which imposes a strong constraint.
If we denote the $i$-th order perturbation of the product by
\begin{equation}
m_i(A, B) = \frac{1}{i!} \, \frac{d^i}{d\lambda^i} A \bullet_\lambda B \Bigg|_{\lambda=0} \, ,
\end{equation}
then the associativity condition implies to first order that
we should have
\begin{equation}
m_0(id \otimes m_1) - m_0(m_1 \otimes id) + m_1(id \otimes m_0) -
m_1(m_0 \otimes id) = 0 \, ,
\end{equation}
as a map ${\bf A} \otimes {\bf A} \otimes {\bf A} \to {\bf A}$, in an obvious tensor
product notation, where $m_0(A,B) = AB$ is the original product on ${\bf A}$. Similar conditions
arise for the higher derivatives $m_i$ of the new product. These may be
written for $i \ge 2$ as
\begin{eqnarray}
&&m_0(id \otimes m_i) - m_0(m_i \otimes id) + m_i(id \otimes m_0) -
m_i(m_0 \otimes id) \nonumber\\
&=& -\sum_{j=1}^{i-1} \Big[ m_{i-j}(id \otimes m_j) - m_{i-j}(m_j \otimes id) \Big]
\, .
\end{eqnarray}
Actually, we want to exclude the trivial case that the new product was
obtained from the old one by merely a $\lambda$-dependent redefinition of the
generators of ${\bf A}$. Such a redefinition may be viewed as a 1-parameter family of
invertible linear maps $\alpha_\lambda:
{\bf A} \to {\bf A}$, and the corresponding trivially deformed product is
\begin{equation}\label{trivial}
A \bullet_\lambda B =
\alpha_\lambda^{-1}\Big[\alpha_\lambda^{}(A) \alpha_\lambda^{}(B) \Big] \, .
\end{equation}
In other words, $\alpha_\lambda$ defines an isomorphism between
$({\bf A}, \bullet_0)$ and $({\bf A}, \bullet_\lambda)$, meaning that the latter
should not be regarded as a new algebra.
The trivially deformed product is given to first order by
\begin{equation}
m_1 = m_0(id \otimes \alpha_1) + m_0(\alpha_1 \otimes id) - \alpha_1 m_0 \, ,
\end{equation}
with similar formulas for $m_i$, where $\alpha_i = \frac{1}{i!}\, \frac{d^i}{d\lambda^i} \alpha_\lambda |_{\lambda=0}$.
The above conditions for the $i$-th order deformations of
an associative product have a useful and elegant cohomological interpretation~\cite{Gerstenhaber}.
To give this interpretation, consider the linear space $\Omega^n({\bf A})$ of
all linear maps $\psi_n: {\bf A} \otimes \dots \otimes {\bf A} \to {\bf A}$, and define a linear operator
$d: \Omega^n \to \Omega^{n+1}$ by the formula
\begin{eqnarray}
(d \psi_n)(A_1, \dots, A_{n+1}) &=& A_1 \psi_n(A_2, \dots, A_{n+1}) -
(-1)^n \psi_n(A_1, \dots, A_n)A_{n+1} \nonumber\\
&& +\sum_{j=1}^n (-1)^j \psi_n(A_1, \dots, A_{j} A_{j+1}, \dots, A_{n+1}) \, .
\end{eqnarray}
It may be checked using the associativity law for the original product on the algebra ${\bf A}$ that
$d^2 = 0$, so $d$ is a differential with a corresponding cohomology complex. This complex
is called the Hochschild complex, see e.g.~\cite{Connes}. More precisely, if $Z^n({\bf A})$ is the space of
all closed $\psi_n$, i.e., those satisfying $d \psi_n=0$, and $B^n({\bf A})$ the space of
all exact $\psi_n$, i.e., those for which $\psi_n = d \psi_{n-1}$ for some
$\psi_{n-1}$, then the $n$-th Hochschild cohomology
$HH^n({\bf A})$ is defined as the quotient $Z^n({\bf A})/B^n({\bf A})$.
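For later comparison, we note the explicit form of the differential in low degrees:
\begin{eqnarray}
(d \psi_1)(A_1, A_2) &=& A_1 \psi_1(A_2) + \psi_1(A_1) A_2 - \psi_1(A_1 A_2) \, , \nonumber\\
(d \psi_2)(A_1, A_2, A_3) &=& A_1 \psi_2(A_2, A_3) - \psi_2(A_1, A_2) A_3 \nonumber\\
&& - \psi_2(A_1 A_2, A_3) + \psi_2(A_1, A_2 A_3) \, .
\end{eqnarray}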
The first order associativity condition may now be viewed as saying that $d m_1 = 0$, or
$m_1 \in Z^2({\bf A})$. Furthermore, if the new product just arises from a trivial
redefinition of the generators in the sense of~\eqref{trivial}, then
it follows that $m_1 = d \alpha_1$, so $m_1 \in B^2({\bf A})$ in that case.
Thus, the non-trivial first order perturbations $m_1$ of the algebra
product can be identified with the non-trivial classes $[m_1] \in HH^2({\bf A})$.
In particular, non-trivial deformations may only exist if $HH^2({\bf A}) \neq 0$.
Let us assume a non-trivial first order perturbation exists, and let us try
to find a second order perturbation. We view the right side of the second
order associativity condition as an element $w_2 \in \Omega^3({\bf A})$,
and we compute that
$d w_2 = 0$, so $w_2 \in Z^3({\bf A})$. Actually, the left side of the second order
associativity condition is just $d m_2 \in B^3({\bf A})$ in our cohomological notation,
so if the second order associativity condition is to hold, then $w_2$ must in fact
be an element of $B^3({\bf A})$, or equivalently, the class $[w_2] \in HH^3({\bf A})$ must
vanish. If it does not define the trivial class---which can only happen if
$HH^3({\bf A})$ is non-trivial---then there is an obstruction to
lifting the perturbation to second order.
If there is no obstruction at second order, we continue to third order,
with a corresponding potential obstruction $[w_3] \in HH^3({\bf A})$, and so on.
In summary, the space of non-trivial perturbations corresponds to
elements of $HH^2({\bf A})$, while the obstructions lie in $HH^3({\bf A})$.
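A classical example from deformation theory (used here only as an illustration, since the algebra
in question is infinite-dimensional) is ${\bf A} = {\mathbb C}[x, p]$, the polynomial functions on phase space:
the choice
\begin{equation}
m_1(f, g) = \tfrac{1}{2} \, \{f, g\} \, ,
\end{equation}
with $\{ \cdot, \cdot \}$ the Poisson bracket, defines a non-trivial class in $HH^2({\bf A})$. In this case the
perturbation can be continued to all orders, the resulting product $\bullet_\lambda$ being the
Moyal--Weyl star product of quantum mechanics.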
We now show how to give a similar characterization of perturbations of a
quantum field theory.
According to our definition of a quantum field theory
given in sec.~\ref{axiomatic}, a quantum field theory is defined by the set of
its OPE-coefficients with certain properties. Furthermore, as
argued in sec.~\ref{coherence}, all higher $n$-point operator product coefficients
are uniquely determined by the 2-point coefficients ${\cal C}(x_1, x_2)$.
Furthermore, we argued that, up to technical assumptions about
the convergence of the series~\eqref{ftdef}, the key constraints
on the OPE coefficients for $n$ points are encoded in the
associativity constraint~\eqref{Cassoc} for the 2-point coefficient,
which we repeat for convenience:
\begin{equation}\label{maincondition}
{\cal C}(x_2, x_3)\Big({\cal C}(x_1, x_2) \otimes id \Big) - {\cal C}(x_1, x_3)\Big(
id \otimes {\cal C}(x_2, x_3) \Big) = 0 \quad
\text{for $r_{12} < r_{23} < r_{13}$.}
\end{equation}
We ask when it is possible to find a 1-parameter deformation ${\cal C}(x_1, x_2; \lambda)$ of
these coefficients by a parameter $\lambda$ so that the
associativity condition continues to hold, at least in the
sense of formal power series in $\lambda$. Actually, the analogues of
the symmetry condition~\eqref{add1}, the normalization condition~\eqref{add2}, the
hermitian conjugation, the Euclidean invariance, and the unit axiom should
hold as well for the perturbation. However, these conditions are much more
trivial in nature than~\eqref{maincondition}, because these conditions
are linear in ${\cal C}(x_1, x_2)$. These conditions could therefore easily be included in
our discussion, but would distract from the main point. For the rest of this section,
we will therefore discuss the implications of the associativity condition~\eqref{maincondition}
for the perturbed OPE-coefficients.
As we shall see now,
such perturbations can again be characterized in a cohomological framework
similar to the one given above. As above, we will presently define a
linear operator $b$ which defines the cohomology in question. The definition
of this operator implicitly involves infinite sums [as does our
associativity condition~\eqref{maincondition}], and such sums are
typically convergent only on certain domains. It is therefore necessary to choose a set of domains
that is stable under the action of $b$ and suitable
for our application. Many such domains can be defined, and
correspondingly different rings are obtained. For simplicity and definiteness,
we consider the non-empty, open domains of $({\mathbb R}^D)^n$ defined by
\begin{equation}\label{Fndef}
{\mathcal F}_n = \{(x_1, \dots, x_n) \in M_n; \,\,\, r_{1 \, i-1} < r_{i-1 \, i} < r_{i-2 \, i}
< \dots < r_{1i}, \,\,\, 1<i\le n \} \subset M_n \, .
\end{equation}
These domains also have a description in terms of the domains $D[{\bf T}]$ defined
above in eq.~\eqref{dtdef}, but we will not need this here. Note that the associativity
condition~\eqref{maincondition} holds on the domain ${\mathcal F}_3 = \{r_{12} < r_{23} < r_{13}\}$.
We define $\Omega^n(V)$ to be the set of all holomorphic functions
$f_n$ on the domain ${\mathcal F}_n$ that are valued in the linear maps~\footnote{The
same remark as in footnote 1 applies here.}
\begin{equation}
f_n(x_1, \dots, x_n): V \otimes \dots \otimes V \to V, \quad (x_1, \dots, x_n) \in {\mathcal F}_n \, .
\end{equation}
We next introduce a boundary operator $b: \Omega^n(V) \to \Omega^{n+1}(V)$ by the formula
\begin{eqnarray}\label{bfndef}
&&(b f_n)(x_1, \dots, x_{n+1}) :=
{\cal C}(x_1, x_{n+1})(id \otimes f_n(x_2, \dots, x_{n+1})) \nonumber\\
&&+ \sum_{i=1}^n (-1)^i f_n(x_1, \dots, \widehat x_i, \dots, x_{n+1})(
id^{i-1} \otimes {\cal C}(x_i, x_{i+1}) \otimes id^{n-i}) \nonumber\\
&&+(-1)^{n+1} \, {\cal C}(x_n, x_{n+1})(f_n(x_1, \dots, x_n) \otimes id) \, .
\end{eqnarray}
Here ${\cal C}(x_1, x_2)$ is the OPE-coefficient of the undeformed theory
and a caret means omission.
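For orientation, the lowest instances of this formula read
\begin{eqnarray}
(b f_1)(x_1, x_2) &=& {\cal C}(x_1, x_2)(id \otimes f_1(x_2)) - f_1(x_2) \, {\cal C}(x_1, x_2)
+ {\cal C}(x_1, x_2)(f_1(x_1) \otimes id) \, , \nonumber\\
(b f_2)(x_1, x_2, x_3) &=& {\cal C}(x_1, x_3)(id \otimes f_2(x_2, x_3)) - f_2(x_2, x_3)({\cal C}(x_1, x_2) \otimes id) \nonumber\\
&& + f_2(x_1, x_3)(id \otimes {\cal C}(x_2, x_3)) - {\cal C}(x_2, x_3)(f_2(x_1, x_2) \otimes id) \, ,
\end{eqnarray}
in close analogy with the Hochschild differential $d$ above. In particular, linearizing the associativity
condition~\eqref{maincondition} in a perturbation ${\cal C}_1(x_1, x_2)$ of the 2-point coefficient, one checks
that the first order condition is precisely $b \, {\cal C}_1 = 0$ on ${\mathcal F}_3$.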
The definition of $b$ involves a composition of ${\cal C}$ with $f_n$, and
hence, when expressed in a basis of $V$, implicitly involves an
infinite summation over the basis elements of $V$. We must therefore
assume here (and in similar formulas in the following) that these
sums converge on the set of points $(x_1, \dots, x_{n+1})$ in the domain ${\mathcal F}_{n+1}$.
Thus, when we write $bf_n$, it is understood that $f_n \in \Omega^n(V)$ is
in the domain of $b$. We now have the following lemma:
\begin{lemma}
The map $b$ is a differential, i.e., $b^2f_n = 0$ for
$f_n$ in the domain of $b$ such that $bf_n$ is also in the domain of $b$.
\end{lemma}
\medskip
\noindent
{\em Proof:} The proof is essentially a straightforward computation. Using the definition of
$b$, we have
\begin{eqnarray}\label{bbfn}
&&b(b f_n)(x_1, \dots, x_{n+2}) =
{\cal C}(x_1, x_{n+2}) (id \otimes b f_n(x_2, \dots, x_{n+2})) \nonumber\\
&&+ \sum_{i=1}^{n+1} (-1)^i b f_n(x_1, \dots, \widehat x_i, \dots, x_{n+2})(
id^{i-1} \otimes {\cal C}(x_i, x_{i+1}) \otimes id^{n+1-i}) \nonumber\\
&&+(-1)^{n+2} {\cal C}(x_{n+1}, x_{n+2}) (b f_n(x_1, \dots, x_{n+1}) \otimes id) \, .
\end{eqnarray}
Substituting the definition of $b$ again then gives, for the first term on
the right side
\begin{eqnarray}
&&={\cal C}(x_1, x_{n+2})[id \otimes {\cal C}(x_2, x_{n+2})(id \otimes f_n(x_3, \dots, x_{n+2}))]\nonumber\\
&&+{\cal C}(x_1, x_{n+2})[id \otimes \sum_{k=2}^{n+1} (-1)^{k-1} f_n(x_2, \dots, \widehat x_k,
\dots, x_{n+2})(id^{k-2} \otimes {\cal C}(x_k, x_{k+1}) \otimes id^{n-k+1})] \nonumber\\
&&+ (-1)^{n+1} {\cal C}(x_1, x_{n+2})[id \otimes {\cal C}(x_{n+1}, x_{n+2})(f_n(x_2, \dots, x_{n+1}) \otimes id)]
\, .
\end{eqnarray}
Substituting the definition of $b$ into the third term on the right side
of eq.~\eqref{bbfn} gives
\begin{eqnarray}
&&=(-1)^{n} {\cal C}(x_{n+1}, x_{n+2})[{\cal C}(x_1, x_{n+1})(id \otimes f_n(x_2, \dots, x_{n+1})) \otimes id] \nonumber\\
&&+(-1)^{n} {\cal C}(x_{n+1}, x_{n+2})[ \sum_{i=1}^n (-1)^i f_n(x_1, \dots, \widehat x_i, \dots, x_{n+1})
(id^{i-1} \otimes {\cal C}(x_i, x_{i+1}) \otimes id^{n-i}) \otimes id] \nonumber\\
&&-{\cal C}(x_{n+1}, x_{n+2})[{\cal C}(x_n, x_{n+1}) (f_n(x_1, \dots, x_n) \otimes id) \otimes id] \, .
\end{eqnarray}
Substituting the definition of $b$ into the second term on the right side
of eq.~\eqref{bbfn} gives the following terms
\begin{eqnarray}
&&= \sum_{i=2}^{n+1} (-1)^i {\cal C}(x_1, x_{n+2})[id \otimes f_n(x_2, \dots, \widehat x_i, \dots, x_{n+2})
(id^{i-1} \otimes {\cal C}(x_i, x_{i+1}) \otimes id^{n+1-i})] \nonumber\\
&&
-{\cal C}(x_2, x_{n+2}) (id \otimes f_n(x_3, \dots, x_{n+2}))({\cal C}(x_1, x_2) \otimes id^n)
\nonumber\\
&& + \sum_{i=1}^{n} (-1)^{i+n+1}
{\cal C}(x_{n+1}, x_{n+2})[(f_n(x_1, \dots, \widehat x_i, \dots, x_{n+1}) \otimes id)
(id^{i-1} \otimes {\cal C}(x_i, x_{i+1}) \otimes id^{n-i+1})] \nonumber\\
&&+{\cal C}(x_n, x_{n+2}) (f_n(x_1, \dots, x_{n}) \otimes id)(id^n \otimes {\cal C}(x_{n+1}, x_{n+2})) \nonumber\\
&&+\sum_{k=2}^n \sum_{i=1}^{k-1} (-1)^{k+i} f_n(x_1, \dots, \widehat x_i, \dots, \widehat x_{k+1},
\dots, x_{n+2}) \circ \nonumber\\
&& \quad \circ (id^{k-1} \otimes {\cal C}(x_{k+1}, x_{k+2}) \otimes id^{n-k})
(id^{i-1} \otimes {\cal C}(x_{i}, x_{i+1}) \otimes id^{n-i+1}) \nonumber\\
&&+\sum_{k=1}^{n-1} \sum_{i=k+2}^{n+1} (-1)^{k+i} f_n(x_1, \dots, \widehat x_k, \dots, \widehat x_{i},
\dots, x_{n+2}) \circ \nonumber\\
&& \quad \circ (id^{k-1} \otimes {\cal C}(x_{k}, x_{k+1}) \otimes id^{n-k})
(id^{i-1} \otimes {\cal C}(x_{i}, x_{i+1}) \otimes id^{n-i+1}) \nonumber\\
&&-\sum_{k=1}^n f_n(x_1, \dots, \widehat x_k, \widehat x_{k+1},
\dots, x_{n+2}) \circ \nonumber\\
&& \quad \circ (id^{k-1} \otimes {\cal C}(x_{k}, x_{k+2}) \otimes id^{n-k})
(id^{k} \otimes {\cal C}(x_{k+1}, x_{k+2}) \otimes id^{n-k}) \nonumber\\
&&+\sum_{k=1}^n f_n(x_1, \dots, \widehat x_k, \widehat x_{k+1},
\dots, x_{n+2}) \circ \nonumber\\
&& \quad \circ (id^{k-1} \otimes {\cal C}(x_{k+1}, x_{k+2}) \otimes id^{n-k})
(id^{k-1} \otimes {\cal C}(x_{k}, x_{k+1}) \otimes id^{n-k+1}) \, .
\end{eqnarray}
We now add up the expressions that we have obtained, and we use the
associativity condition eq.~\eqref{maincondition}, noting that we are allowed to
use this expression on the domain ${\mathcal F}_{n+2}$: For example, to apply the associativity
condition to the last two terms in the above expression, we need that
$r_{k\, k+1}< r_{k+1\,k+2} < r_{k \, k+2}$ for all $k$, which holds on ${\mathcal F}_{n+2}$. It is this property of the
domains ${\mathcal F}_i$ that motivates our definition~\eqref{Fndef}. Applying
the associativity condition, we find that all terms
cancel, thus proving the lemma. \qed

\medskip
\noindent
By this lemma, we can define a cohomology ring associated with the differential
$b$ as
\begin{equation}
H^n(V; {\cal C}) := \frac{Z^n(V; {\cal C})}{B^n(V; {\cal C})} = \frac{
\{ {\rm ker} \, b : \Omega^n(V) \to \Omega^{n+1}(V)\}
}{
\{
{\rm ran} \, b: \Omega^{n-1}(V) \to \Omega^n(V)
\}
} \, .
\end{equation}
As we will now see, the problem of finding a 1-parameter family of
perturbations ${\cal C}(x_1, x_2; \lambda)$ such that our associativity
condition~\eqref{maincondition} continues to
hold for ${\cal C}(x_1, x_2; \lambda)$ to all orders in $\lambda$ can
be elegantly and compactly formulated in terms of this ring.
If we let
\begin{equation}
{\cal C}_i(x_1, x_2) = \frac{1}{i!} \, \frac{d^i}{d\lambda^i} {\cal C}(x_1, x_2; \lambda) \Bigg|_{\lambda = 0} \, ,
\end{equation}
then we note that the first order associativity condition,
\begin{eqnarray}\label{maincondition1}
&&{\cal C}_0(x_2, x_3)\Big({\cal C}_1(x_1, x_2) \otimes id \Big) - {\cal C}_0(x_1, x_3)\Big(
id \otimes {\cal C}_1(x_2, x_3) \Big) + \nonumber\\
&&{\cal C}_1(x_2, x_3)\Big({\cal C}_0(x_1, x_2) \otimes id \Big) - {\cal C}_1(x_1, x_3)\Big(
id \otimes {\cal C}_0(x_2, x_3) \Big) = 0\,\, ,
\end{eqnarray}
valid for $(x_1, x_2, x_3) \in {\mathcal F}_3$, is equivalent to the statement that
\begin{equation}
b {\cal C}_1 = 0 \, ,
\end{equation}
where here and in the following, $b$ is defined in terms of the unperturbed OPE-coefficient ${\cal C}_0$.
Thus, ${\cal C}_1$ has to be an element of $Z^2(V; {\cal C}_0)$. Let $z(\lambda): V \to V$
be a $\lambda$-dependent field redefinition in the sense of defn.~\ref{fieldred}, and
suppose that ${\cal C}(x_1, x_2)$ and ${\cal C}(x_1, x_2; \lambda)$ are connected
by the field redefinition. To first order, this means that
\begin{equation}\label{ctrivial}
{\cal C}_1(x_1, x_2) = -z_1 {\cal C}_0(x_1, x_2) + {\cal C}_0(x_1, x_2)(z_1 \otimes id + id \otimes z_1) \, ,
\end{equation}
or equivalently, that $bz_1 = {\cal C}_1$,
where $z_i = \frac{1}{i!} \, \frac{d^i}{d\lambda^i} z(\lambda) |_{\lambda=0}$.
Thus, the first order deformations of ${\cal C}_0$ modulo the trivial ones defined by
eq.~\eqref{ctrivial}
are given by the classes in $H^2(V; {\cal C}_0)$. The associativity condition for the $i$-th order
perturbation (assuming that all perturbations up to order $i-1$ exist) can be written as
the following condition for $(x_1, x_2, x_3) \in {\mathcal F}_3$:
\begin{eqnarray}\label{bciwi}
&& {\cal C}_0(x_1, x_3)\Big( id \otimes {\cal C}_i(x_2, x_3) \Big)
- {\cal C}_i(x_2, x_3)\Big(
{\cal C}_0(x_1, x_2) \otimes id \Big) +\\
&& {\cal C}_i(x_1, x_3)\Big( id \otimes {\cal C}_0(x_2, x_3) \Big) - {\cal C}_0(x_2, x_3)\Big(
{\cal C}_i(x_1, x_2) \otimes id \Big) = w_i(x_1, x_2, x_3) \nonumber \, ,
\end{eqnarray}
where $w_i \in \Omega^3(V)$ is defined by
\begin{equation}
w_i(x_1, x_2, x_3) :=
-\sum_{j=1}^{i-1} \Big[ {\cal C}_{i-j}(x_1, x_3)( id \otimes {\cal C}_j(x_2, x_3) ) -
{\cal C}_{i-j}(x_2, x_3)( {\cal C}_j(x_1, x_2) \otimes id ) \Big] \, .
\end{equation}
We assume here that all infinite sums implicit in this expression
converge on ${\mathcal F}_3$. This equation may be written alternatively as
\begin{equation}\label{bciwiup}
b {\cal C}_i = w_i \, .
\end{equation}
We would like to define the $i$-th order perturbation by solving
this linear equation for ${\cal C}_i$. Clearly, a necessary condition for there to
exist a solution is that $b w_i = 0$, i.e., $w_i \in Z^3(V; {\cal C}_0)$, and this can indeed be shown to
be the case, see lemma~\ref{obstrlemma} below. If a solution to eq.~\eqref{bciwiup}
exists, i.e. if $w_i \in B^3(V; {\cal C}_0)$, then any other solution will differ from
this one by a solution to the corresponding "homogeneous" equation.
Trivial solutions to the homogeneous equation of the form
$b z_i$ again correspond to an $i$-th order field redefinition and are not
counted as genuine perturbations. In summary, the perturbation series can be continued at the $i$-th order
if $[w_i]$ is the trivial class in $H^3(V; {\cal C}_0)$,
so $[w_i]$ represents a potential $i$-th
order obstruction to continue the perturbation series.
If there is no obstruction, then the space of non-trivial
$i$-th order perturbations is given by $H^2(V; {\cal C}_0)$.
In particular, if we knew e.g. that $H^2(V; {\cal C}_0) \neq 0$ while
$H^3(V; {\cal C}_0) = 0$, then perturbations could be defined to arbitrary orders in
$\lambda$.
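As a consistency check on the sign conventions in eq.~\eqref{bfndef}, one may pass to the degenerate situation in which the OPE coefficient does not depend on the insertion points, ${\cal C}(x_1, x_2) \equiv m$ for a fixed associative product $m$; then $b$ reduces to the classical Hochschild differential. The following small numerical sketch (our own illustration, not part of the construction in the text; the algebra of $2\times 2$ matrices stands in for $V$, and all variable names are ours) verifies $b^2 = 0$ on a random 1-cochain in this toy situation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy stand-in for V: the algebra of 2x2 matrices, basis E_11, E_12, E_21, E_22

# structure constants M[a, b, c]:  e_a e_b = sum_c M[a, b, c] e_c
basis = np.zeros((d, 2, 2))
for k, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    basis[k, i, j] = 1.0
M = np.einsum('aij,bjk,cik->abc', basis, basis, basis)

def b1(f1):
    """b on a 1-cochain f1[a, c], with f1(e_a) = sum_c f1[a, c] e_c:
    (b f1)(u, v) = u f1(v) - f1(uv) + f1(u) v."""
    return (np.einsum('bq,aqc->abc', f1, M)
            - np.einsum('abp,pc->abc', M, f1)
            + np.einsum('ap,pbc->abc', f1, M))

def b2(f2):
    """b on a 2-cochain f2[a, b, c]:
    (b f2)(u, v, w) = u f2(v, w) - f2(uv, w) + f2(u, vw) - f2(u, v) w."""
    return (np.einsum('bcq,aqe->abce', f2, M)
            - np.einsum('abp,pce->abce', M, f2)
            + np.einsum('bcp,ape->abce', M, f2)
            - np.einsum('abp,pce->abce', f2, M))

f1 = rng.standard_normal((d, d))
print(np.abs(b2(b1(f1))).max())  # vanishes up to rounding: b^2 = 0
```

The same contractions, decorated with the point dependence, implement the general formula; the toy check only probes the algebraic signs.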
\begin{lemma}\label{obstrlemma}
If $w_i$ is in the domain of $b$, and if $b {\cal C}_j = w_j$ for all $j<i$, then $bw_i = 0$.
\end{lemma}
\medskip
\noindent
{\em Proof:} We proceed by induction in $i$. For $i=1$, the lemma
is true as we have
$w_1 = b {\cal C}_1$, so $bw_1 = 0$ by $b^2 = 0$. In the general case, using the
definition of $b$, we obtain the
following expression for $bw_i$:
\begin{eqnarray}
&&-bw_i(x_1, x_2, x_3, x_4) \\
&&
=\sum_{j=1}^{i-1} {\cal C}_0(x_1, x_4)\Big( id \otimes {\cal C}_j(x_2, x_4)(id \otimes {\cal C}_{i-j}(x_3, x_4))\Big)\nonumber\\
&&
-\sum_{j=1}^{i-1} {\cal C}_j(x_2, x_4)\Big( id \otimes {\cal C}_{i-j}(x_3, x_4) \Big) \Big( {\cal C}_0(x_1, x_2) \otimes id^2 \Big) \nonumber\\
&&
+\sum_{j=1}^{i-1} {\cal C}_j(x_1, x_4)\Big( id \otimes {\cal C}_{i-j}(x_3, x_4)\Big)\Big( id \otimes {\cal C}_0(x_2, x_3) \otimes id \Big) \nonumber\\
&&
-\sum_{j=1}^{i-1} {\cal C}_j(x_1, x_4)\Big( id \otimes {\cal C}_{i-j}(x_2, x_4) \Big)\Big( id^2 \otimes {\cal C}_0(x_3, x_4) \Big) \nonumber\\
&&
+\sum_{j=1}^{i-1} {\cal C}_0(x_3, x_4)\Big( {\cal C}_j(x_1, x_3)(id \otimes {\cal C}_{i-j}(x_2, x_3)) \otimes id \Big) \nonumber\\
&&
-\sum_{j=1}^{i-1} {\cal C}_0(x_1, x_4)\Big( id \otimes {\cal C}_j(x_3, x_4)({\cal C}_{i-j}(x_2, x_3) \otimes id) \Big) \nonumber\\
&&
+\sum_{j=1}^{i-1} {\cal C}_j(x_3, x_4)\Big( {\cal C}_{i-j}(x_2, x_3) \otimes id \Big) \Big( {\cal C}_0(x_1, x_2) \otimes id^2\Big) \nonumber\\
&&
-\sum_{j=1}^{i-1} {\cal C}_j(x_3, x_4)\Big( {\cal C}_{i-j}(x_1, x_3) \otimes id \Big) \Big( id \otimes {\cal C}_0(x_2, x_3) \otimes id \Big) \nonumber\\
&&
+\sum_{j=1}^{i-1} {\cal C}_j(x_2, x_4)\Big( {\cal C}_{i-j}(x_1, x_2) \otimes id\Big) \Big( id^2 \otimes {\cal C}_0(x_3, x_4) \Big) \nonumber\\
&&
-\sum_{j=1}^{i-1} {\cal C}_0(x_3, x_4)\Big( {\cal C}_j(x_2, x_3)({\cal C}_{i-j}(x_1, x_2) \otimes id) \otimes id \Big) \nonumber \, .
\end{eqnarray}
After some manipulations using the definition of $b$ and that by definition the points
$(x_1, x_2, x_3, x_4)$ are assumed to be in ${\mathcal F}_4$,
we can transform this into the following expression
\begin{eqnarray}
&& -bw_i(x_1, x_2, x_3, x_4) \\
&=&+\sum_{j=1}^{i-1} b {\cal C}_j(x_1, x_2, x_4)(id^2 \otimes {\cal C}_{i-j}(x_3,x_4)) \nonumber\\
&&-\sum_{j=1}^{i-1} {\cal C}_j(x_1, x_4)(id \otimes b{\cal C}_{i-j}(x_2, x_3, x_4))\nonumber\\
&&-\sum_{j=1}^{i-1}b{\cal C}_j(x_1, x_3, x_4)(id \otimes {\cal C}_{i-j}(x_2, x_3) \otimes id)\nonumber\\
&&-\sum_{j=1}^{i-1} {\cal C}_j(x_3, x_4)(b{\cal C}_{i-j}(x_1, x_2, x_3) \otimes id) \nonumber\\
&&+\sum_{j=1}^{i-1} b{\cal C}_j(x_2, x_3, x_4)({\cal C}_{i-j}(x_1, x_2) \otimes id^2) \, ,\nonumber
\end{eqnarray}
where the first sum comes from the first two sums of the previous
equation, the second from the third and fourth sums, etc.
We now substitute the relation $b {\cal C}_j = w_j$ for $j\le i-1$ on ${\mathcal F}_3$, noting
that we are allowed to do so when $(x_1, x_2, x_3, x_4) \in {\mathcal F}_4$: For example,
in the last term $(x_2, x_3, x_4) \in {\mathcal F}_3$ is satisfied whenever
$(x_1, x_2, x_3, x_4) \in {\mathcal F}_4$, and a similar statement holds for the
other 4 terms [this is in fact our motivation for our definition of the domains ${\mathcal F}_n$].
We then perform the sum over $j$. If this is done, then we see that the five terms in the sum become ten terms,
each involving three factors of the ${\cal C}$'s. These terms cancel pairwise, and
we obtain the desired result, $bw_i=0$. \qed
\section{Gauge Theories}\label{hochschild}
Local gauge theories are typically more complicated than
theories without local gauge invariance. One way to understand the
complicating effects due to local gauge invariance is to realize that
the dynamical field equations are not hyperbolic in nature in Lorentzian
spacetimes. This is seen most clearly
in the case of classical field theories. Because local gauge transformations may
be used to change the gauge connection in arbitrary compact regions of spacetime, it is
clear that the gauge connection cannot be entirely determined by the dynamical
equations and its initial data on some spatial time slice.
Thus, there is no well-posed initial value formulation in the standard sense. Similar
remarks apply to the Euclidean situation.
To circumvent this problem, one typically proceeds in two steps. At the first step, an
auxiliary theory is considered, containing the gauge fields as well as additional
"ghost" fields taking values in an infinite-dimensional Grassmann algebra.
This theory has a well-posed initial value formulation. At the second
step, the new degrees of freedom are removed. Here it is important that the auxiliary
theory possesses a new symmetry, the so-called BRST-symmetry, $s$, which is a linear
transformation on the space of classical fields with the property $s^2=0$ [for example,
in Yang-Mills theory $s$ is given by eq.~\eqref{BRSTt}]. It turns out
that the field content and dynamics
of the original theory may be recovered by considering only
the equivalence classes of fields in the auxiliary theory that are in the
null-space of $s$, modulo those that are in the range of $s$. Thus, the second step is
to define the observables of the gauge theory in question as the {\em cohomology}
of the "differential" $s$.
At the quantum level, one has a similar structure. In the framework considered in this
paper, the situation may be described abstractly as follows: As before, we have an
abstract vector space of fields, $V$. This space is to be thought of as the collection
of the components of all (composite) fields in the {\em auxiliary} theory including
ghost fields. The space $V$ is equipped with a grading $\gamma$ and a differential
$s$, i.e., two linear maps
\begin{equation}
s: V \to V \, , \quad \gamma: V \to V \, ,
\end{equation}
with the properties
\begin{equation}\label{sprop}
s^2 = 0 \, , \quad \gamma^2 = id \, , \quad \gamma s + s \gamma = 0 \, .
\end{equation}
The map $s$ should be thought of as being analogous to the classical BRST-transformation.
The map $\gamma$ has eigenvalues $\pm 1$, and the eigenvectors
correspond as above to Bose/Fermi fields. At the classical level, the elements in the eigenspace of $-1$
are analogous to the classical (composite) fields of odd Grassmann parity, while those
in the eigenspace of $+1$ are analogous to those of even Grassmann parity. However,
we emphasize that these are just analogies, as we will be dealing with a quantum field
theory. For the general analysis of quantum gauge theories
we will only need $s$ and $\gamma$ to satisfy the
above properties. It is also natural to postulate the existence of another grading map
$g: V \to V$ with the properties ${\rm Spec} \, g = \mathbb Z$ and
$sg = (g + id) s$, $\gamma g - g \gamma = 0$.
This map is to be thought of as the number counter for the ghost fields (so that
$s$ increases the ghost number by one unit). Finally, we would like all maps
$s, \gamma, g$ to be compatible with the $\star$-operation on $V$, and to
preserve the grading by the spin, as well as the dimension.
We next consider a quantum field theory whose fields are described by the elements
of $V$, with operator product coefficients ${\cal C}$. At the classical level, $s$ is
a graded derivation, so we would also like $s$ to be a graded derivation at the quantum
level. Recall that if ${\bf A}$ is a graded algebra with grading map $\Gamma$ (i.e.,
$\Gamma^2=id$), then a graded derivation
is a map $D: {\bf A} \to {\bf A}$ with the property that
\begin{equation}\label{DAB}
D(AB) = (DA) B + \Gamma(A) DB \quad \text{for all $A,B \in {\bf A}$} \, .
\end{equation}
Equivalently, if we write the product in the algebra as $m: {\bf A} \otimes {\bf A} \to {\bf A}$
with $m(A,B) = AB$, then $m$ should satisfy
\begin{equation}
D m = m(D \otimes id) + m(\Gamma \otimes D) \, ,
\end{equation}
in the sense of maps ${\bf A} \otimes {\bf A} \to {\bf A}$.
As we have emphasized several times, the OPE-coefficients ${\cal C}(x_1, x_2)$
are to be thought of informally as the expansion coefficients of
a product. Therefore, if $s$ is to be a graded derivation we should
add a corresponding additional axiom to those formulated above in sec.~\ref{axiomatic}.
Heuristically, we want $s$ to act on a product of quantum fields $\phi_a$
in the following way analogous to eq.~\eqref{DAB}:
\begin{equation}
s\Big[ \prod_{i=1}^n \phi_{a_i}(x_i) \Big] =
\sum_{i=1}^n (-1)^{\sum_{j<i} \epsilon_j} \phi_{a_1}(x_1) \cdots
s \phi_{a_i}(x_i) \cdots \phi_{a_n}(x_n) \, .
\end{equation}
Here, $\epsilon_i = 0,1$ according to whether $\phi_{a_i}$ is
bosonic or fermionic. If we formally apply an OPE to both sides of this equation, then we
arrive at the following condition for the OPE coefficients:
\medskip
\noindent
\paragraph{\bf BRST-invariance:} The OPE coefficients of the auxiliary
theory should satisfy the additional condition
\begin{equation}\label{BRSTn}
s {\cal C}(x_1, \dots, x_n) = \sum_{i=1}^n {\cal C}(x_1, \dots, x_n)(\gamma^{i-1} \otimes
s \otimes id^{n-i})
\end{equation}
for all $n$.
\medskip
\noindent
Above, we have seen in prop.~\ref{proposition1} that the 2-point OPE coefficients determine
all higher coefficients uniquely. Thus, as a corollary, the above conditions of
BRST-invariance will be satisfied if they hold for the 2-point coefficients,
i.e. if the condition
\begin{equation}\label{scompat}
s {\cal C}(x_1, x_2) = {\cal C}(x_1, x_2) (s \otimes id) + {\cal C}(x_1, x_2)(\gamma \otimes s)
\end{equation}
holds.
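In the simplest Grassmann toy model (our own illustration, not part of the construction in the text), in which $V = \Lambda(\theta)$ with basis $\{1, \theta\}$, the Grassmann product plays the role of a point-independent ${\cal C}$, $s = \partial/\partial\theta$, and $\gamma$ is the Grassmann parity, the conditions \eqref{sprop} and \eqref{scompat} can be checked directly as finite matrix identities:

```python
import numpy as np

# Grassmann toy model for (V, C, s, gamma): V = span{1, theta}, theta^2 = 0.
# All names are ours; the point dependence of C is suppressed (C = product m).
d = 2
M = np.zeros((d, d, d))                      # M[a,b,c]: e_a e_b = sum_c M[a,b,c] e_c
M[0, 0, 0] = M[0, 1, 1] = M[1, 0, 1] = 1.0
G = np.diag([1.0, -1.0])                     # grading gamma (Grassmann parity)
S = np.zeros((d, d)); S[1, 0] = 1.0          # s = d/dtheta in the basis {1, theta}

# eq. (sprop): s^2 = 0, gamma^2 = id, gamma s + s gamma = 0
# (row convention: a map f acts as f(e_a) = sum_c f[a,c] e_c, so "g after f" is f @ g)
assert np.allclose(S @ S, 0)
assert np.allclose(G @ G, np.eye(d))
assert np.allclose(G @ S + S @ G, 0)

# eq. (scompat): s C = C (s x id) + C (gamma x s), i.e. s is a graded derivation
lhs = np.einsum('abp,pc->abc', M, S)              # s applied after the product
rhs = (np.einsum('ap,pbc->abc', S, M)             # C (s x id)
       + np.einsum('ap,bq,pqc->abc', G, S, M))    # C (gamma x s)
assert np.allclose(lhs, rhs)
print("eqs. (sprop) and (scompat) hold in the Grassmann toy")
```

Of course, this only exhibits the algebraic structure; in the actual theory ${\cal C}(x_1,x_2)$ carries the full point dependence.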
Furthermore,
we would like to formulate abstractly the condition that, since the OPE coefficients
are valued in the complex numbers, they should have "ghost number" equal to zero,
meaning that
\begin{equation}\label{gcompat}
g {\cal C}(x_1, x_2) = {\cal C}(x_1, x_2)(g \otimes id) + {\cal C}(x_1, x_2)(id \otimes g) \, .
\end{equation}
In summary a quantum gauge theory is described in our language abstractly as follows:
\begin{defn}
A quantum gauge theory is a system of OPE-coefficients
\begin{equation}
{\cal C} = ({\cal C}(x_1, x_2), {\cal C}(x_1,x_2,x_3), \dots)
\end{equation}
associated with $V$ satisfying the properties laid out in sec.~\ref{axiomatic},
together with a ghost number grading $g$ satisfying~\eqref{gcompat},
and a differential $s:V \to V$ satisfying~\eqref{scompat} and~\eqref{sprop}, as
well as $(g + id) s = sg$.
\end{defn}
By analogy with the classical case, we define the space of {\em physical fields}
of the gauge theory to be the quotient
\begin{equation}
\widehat V := \frac{\{{\rm ker} \, s: V^0 \to V^{+1} \}}{\{{\rm ran} \, s: V^{-1} \to V^0 \}}
\end{equation}
where $V^q$ are the
eigenspaces of the linear map $g$, with eigenvalue $q$,
\begin{equation}
V = \bigoplus_{q \in \mathbb Z} V^q \, , \quad s: V^q \to V^{q+1} \, .
\end{equation}
In other words, we define the space of physical fields as the zeroth cohomology group
defined by $s$, with the general cohomology group at $q$-th order defined by
\begin{equation}
H^q(V;s) = \frac{\{\ker s: V^q \to V^{q+1}\}}{\{{\rm ran}\, s: V^{q-1} \to V^q \}} \, .
\end{equation}
Because the OPE coefficients satisfy the assumption of BRST-invariance, eq.~\eqref{BRSTn},
we have the following proposition/definition:
\begin{prop}\label{factorprop}
The OPE coefficients ${\cal C}$ of the auxiliary theory induce maps
\begin{equation}
\widehat {\cal C}(x_1, \dots, x_n): \widehat V \otimes \dots \otimes \widehat V \to \widehat V \, ,
\end{equation}
so the operator product expansion "closes" on the space $\widehat V$ of physical fields.
Therefore, the {\em true physical sector of the gauge theory} can be defined as
the quantum field theory described by the pair $(\widehat V, \widehat {\cal C})$.
\end{prop}
\noindent
\paragraph{\bf Remarks:} 1) In Yang-Mills theory with Lie algebra $\frak g$, the space $V$ is naturally identified with the free
unital commutative $\partial_\mu$-differential module (over ${\mathbb C}[\lambda]$) generated by the formal
expressions of the form ${\bf 1}$ and
\begin{equation}
\partial_{\mu_1} \dots \partial_{\mu_k} \psi_i ; \,\, \mu_j = 1, \dots, D \, ,
\end{equation}
where $\psi_i$ denotes either a component of $A$ or the auxiliary "field" $F$ or the ghost "fields", $U,\bar U$.
The expressions in
$V$ are taken modulo the relations $\psi_i \psi_j = (-1)^{F_i F_j} \psi_j \psi_i$, with
$F_i = 0$ or $=1$ according to whether $\gamma(\psi_i) = \pm \psi_i$, where $\gamma$ is $-1$ on the
ghost fields $U,\bar U$, and $+1$ on $A,F$. Furthermore,
on $V$, the linear maps $\partial_\mu$ are defined to act as the (ungraded) derivations that are obtained by
formally viewing the elements of $V$ as classical fields. On $V$, there also acts the BRST-differential
$s$. It is defined to act on the generators of $V$ by eq.~\eqref{BRSTt}, and it is demanded to commute
with the formal derivations $\partial_\mu$, i.e.,
\begin{equation}
\partial_\mu \in {\rm Der}(V) \, , \quad s \circ \partial_\mu = \partial_\mu \circ s \, , \quad
g \circ \partial_\mu = \partial_\mu \circ g \, .
\end{equation}
One can then show~\cite{barnich} that $\widehat V$
corresponds precisely to the gauge-invariant monomials of the
field strength tensor of the gauge connection and its covariant derivatives, i.e.,
\begin{equation}
\widehat V = \Big\langle p(D^{k_1} F, \dots, D^{k_n}F) ; p \in {\rm Inv}({\mathfrak g}^{\otimes n}, {\mathbb C}) \Big\rangle \, ,
\end{equation}
where ${\rm Inv}(\mathfrak g^{\otimes n}, {\mathbb C})$ is the space of $\frak{g}$-invariant multi-linear forms on the Lie algebra,
$D_\mu = \partial_\mu + i\lambda[A_\mu, \, . \, ]$ is the
standard covariant derivative, $F$ is a shorthand for its curvature, $F_{\mu\nu} = [D_\mu, D_\nu]$,
and $D^k$ is a shorthand for $D_{(\mu_1} \cdots D_{\mu_k)}$.
\medskip
\noindent
2) Note that the OPE-coefficients of the auxiliary theory
not only close on the space $\widehat V$, but more generally on any of the spaces
$W_k = \oplus_{q \ge k} H^q(V; s)$. These spaces contain also operators of non-zero
ghost number. One does not, however, expect this theory to have any non-trivial
states satisfying the OS-positivity axiom [see sec.~\ref{axiomatic}].
\medskip
\noindent
{\em Proof}: Let $|v_1 \rangle, \dots, |v_n\rangle \in {\rm ker} \, s$. Using eq.~\eqref{BRSTn}, we have
\begin{eqnarray}
&&s \Big( {\cal C}(x_1, \dots, x_n)|v_1 \otimes \dots \otimes v_n \rangle \Big) \\
&=& \sum_{i=1}^n {\cal C}(x_1, \dots, x_n) | \gamma(v_1) \otimes \dots \otimes \gamma(v_{i-1}) \otimes sv_i \otimes v_{i+1} \otimes \dots \otimes v_n \rangle = 0 \, . \nonumber
\end{eqnarray}
Thus, the composition ${\cal C}(x_1, \dots, x_n)|v_1 \otimes \dots \otimes v_n \rangle$ is in the kernel of $s$. One similarly shows that
if $|v_1 \rangle, \dots, |v_n\rangle \in {\rm ker} \, s$, and in addition $|v_i \rangle \in {\rm ran} \, s$ for some $i$, then
the composition is even in the image of $s$. Thus, ${\cal C}(x_1, \dots, x_n)$ gives a well defined map
from $({\rm ker} \, s/{\rm ran} \, s)^{\otimes n}$ into ${\rm ker} \, s/{\rm ran} \, s$.
Finally, since ${\cal C}(x_1, \dots, x_n)$ satisfies the analogue of eq.~\eqref{gcompat}, it follows that
the composition has ghost number zero if each $|v_i\rangle$ has. Thus, ${\cal C}(x_1, \dots, x_n)$ gives a well defined map
$\widehat {\cal C}(x_1, \dots, x_n)$
from $\widehat V^{\otimes n}$ to $\widehat V$. This map inherits the properties of
factorization, scaling, the unity axiom, the symmetry property etc. from the map ${\cal C}(x_1, \dots, x_n)$.
Thus, the collection $(\widehat {\cal C}, \widehat V)$ again defines a quantum field theory in our sense.
\qed
\medskip
We would now like to consider perturbations of a given quantum gauge theory
by analogy with the procedure described in the previous section.
Thus, as above, let $\lambda$ be a formal expansion parameter, and let
${\cal C}(x_1, x_2; \lambda)$ be a 1-parameter family describing a deformation of
the given 2-point OPE coefficient of the auxiliary theory.
As above let $({\cal C}_0, {\cal C}_1, {\cal C}_2, \dots)$ be the zeroth, first, second,
etc. perturbations of the expansion
coefficients. In order that the perturbed coefficients satisfy the
associativity constraint, the equations~\eqref{bciwi} must again hold for the coefficients.
In the situation at hand, we also should consider a deformation $s(\lambda)$ of
the BRST-differential, with expansion coefficients $(s_0, s_1, s_2, \dots)$,
\begin{equation}
s_i = \frac{1}{i!} \, \frac{d^i}{d\lambda^i} \, s(\lambda) \Bigg|_{\lambda = 0} \, .
\end{equation}
These quantities should satisfy the perturbative version of eq.~\eqref{sprop}, that is
\begin{equation}\label{sconsi}
\sum_{j=0}^i s_j s_{i-j} = 0 \, , \quad s_i \gamma + \gamma s_i = 0 \, ,
\end{equation}
and they should satisfy the perturbative version of eq.~\eqref{scompat},
\begin{equation}\label{cconsi}
\sum_{j=0}^i s_j {\cal C}_{i-j}(x_1, x_2) = \sum_{j=0}^i {\cal C}_{i-j}(x_1, x_2) (s_j \otimes id)
+ \sum_{j=0}^i {\cal C}_{i-j}(x_1, x_2)(\gamma \otimes s_j) \, ,
\end{equation}
for all $i = 0, 1, 2, \dots$. For $i=0$, these conditions are just the
conditions that the undeformed theory described by $s_0, {\cal C}_0$ defines a gauge theory. For $i=1, 2, \dots$,
we get a set of conditions that constrain the possible $i$-th order perturbations
$s_i, {\cal C}_i$. Actually, as in the previous section,
we would like to exclude again that our deformations $s_i, {\cal C}_i$ are
simply due to a $\lambda$-dependent field redefinition, see defn.~\ref{fieldred}. In the present context,
a first order perturbation $s_1, {\cal C}_1$ that is simply due to a field redefinition is
one for which
\begin{equation}\label{ctrivial1}
{\cal C}_1(x_1, x_2) = -z_1 \, {\cal C}_0(x_1, x_2) + {\cal C}_0(x_1, x_2) (z_1 \otimes id + id \otimes z_1)
\, , \quad s_1 = s_0 \, z_1 + z_1 \, s_0 \, ,
\end{equation}
for some $z_1: V \to V$ such that $z_1 \, \gamma = \gamma \, z_1$. There are
similar conditions at higher order. We will now see that
the higher order conditions have an elegant formulation
in terms of a variant of Hochschild cohomology associated with ${\cal C}_0$,
twisted with the cohomology of $s_0$.
In order to describe this, we begin by defining the respective cohomology rings.
Our first task is the definition of the Hochschild type differential
$b$ in the case when $V$ is a graded vector space. Let ${\cal C}(x_1, x_2): V \otimes V \to V$
satisfy the associativity condition~\eqref{maincondition} and be even under our grading
$\gamma$, meaning ${\cal C}(x_1, x_2)(\gamma \otimes \gamma) = \gamma {\cal C}(x_1, x_2)$.
\begin{defn}
Let $\Omega^n(V)$ be the space of all translation invariant analytic maps
$f_n: {\mathcal F}_{n} \to \hom(V \otimes \dots \otimes V, V)$, where ${\mathcal F}_n \subset
({\mathbb R}^D)^n$ is the domain~\eqref{Fndef}. Let
\begin{equation}
f^\gamma_n := \gamma f_n (\gamma \otimes \dots \otimes \gamma) \, .
\end{equation}
If $f_n^\gamma = f_n$, then $f_n$ is said to be even and
the definition of $b f_n \in \Omega^{n+1}(V)$ is
as above in eq.~\eqref{bfndef}. If $f_n^\gamma = -f_n$, then $f_n$ is
said to be odd, and we define
\begin{eqnarray}\label{bdefodd}
(b f_n)(x_1, \dots, x_{n+1}) &:=& -{\cal C}(x_1, x_{n+1})(\gamma \otimes f_n(x_2, \dots, x_{n+1})) \nonumber\\
&-& \sum_{i=1}^n (-1)^i f_n( x_1, \dots, \widehat x_i, \dots, x_{n+1})
(id^{i-1} \otimes {\cal C}(x_i, x_{i+1}) \otimes id^{n-i}) \nonumber\\
&-& (-1)^{n+1} {\cal C}(x_n, x_{n+1}) (f_n(x_1, \dots, x_n) \otimes id) \, .
\end{eqnarray}
\end{defn}
As in the definition of $b$ in the ungraded case, we may check that
$b^2 = 0$, so we may again define the cohomology of $b$ as above. We next prove
a simple lemma about the relation between the differential $b$ and the differential
$s$ when the quantum field theory is a gauge theory $(V, {\cal C}, s)$. First,
we define an action of $s$ on the space $\Omega^n(V)$ of analytic maps $f_n$
by $B: \Omega^n(V) \to \Omega^n(V)$, where
\begin{eqnarray}\label{deltadef}
(B f_n)(x_1, \dots, x_n) &:=& s f_n(x_1, \dots, x_n) \nonumber\\
&-& \sum_{i=1}^n f^\gamma_n(x_1, \dots, x_n)(\gamma^{i-1} \otimes s \otimes id^{n-i}) \, .
\end{eqnarray}
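For later reference, the lowest cases of this definition read explicitly
\begin{equation}
(Bf_1)(x_1) = s f_1(x_1) - f^\gamma_1(x_1)\, s \, , \qquad
(Bf_2)(x_1, x_2) = s f_2(x_1, x_2) - f^\gamma_2(x_1, x_2)(s \otimes id) - f^\gamma_2(x_1, x_2)(\gamma \otimes s) \, .
\end{equation}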
\begin{lemma}
We have $B (B f_n) =0$ for all $f_n$.
If $f_n$ is in the domain of $b$, then $B b f_n = - b B f_n$. Symbolically,
\begin{equation}
b B + B b = 0, \quad B^2 = 0 \, .
\end{equation}
\end{lemma}
\noindent
{\em Proof:} For the proof of the first statement we consider first
the case when $f_n^\gamma = f_n$, and we apply $B$ one more time
to eq.~\eqref{deltadef}. We obtain the following three terms:
\begin{eqnarray}
&&B (B f_n)(x_1, \dots, x_n) = s^2 f_n(x_1, \dots, x_n) \\
&&-\sum_{i=1}^n [(sf_n)^\gamma(x_1, \dots, x_n)+sf_n^\gamma(x_1, \dots, x_n)](\gamma^{i-1} \otimes s \otimes id^{n-i})\nonumber\\
&&+\sum_{i,j=1}^n f_n(x_1, \dots, x_n)(\gamma^{i-1} \otimes s \otimes id^{n-i})
(\gamma^{j-1} \otimes s \otimes id^{n-j}) \nonumber
\, .
\end{eqnarray}
The first term vanishes since $s^2=0$. The second term vanishes because if
$f_n$ is even under $\gamma$, then $sf_n$ is odd, so $(sf_n)^\gamma + sf_n^\gamma = 0$.
We split the double sum into three parts---the terms for which $i<j$, the terms for
$i>j$, and the terms for which $i=j$. The third set of terms give zero using $s^2 = 0$.
The first set of terms is manipulated using $s \gamma = - \gamma s$:
\begin{eqnarray}
&&+\sum_{i<j} f_n(x_1, \dots, x_n)
(\gamma^{i-1} \otimes s \otimes id^{n-i})
(\gamma^{j-1} \otimes s \otimes id^{n-j})
\nonumber\\
&=&\sum_{i<j} f_n(x_1, \dots, x_n)(id^{i-1} \otimes s\gamma \otimes \gamma^{j-i-1}
\otimes s \otimes id^{n-j})
\nonumber\\
&=&-\sum_{i<j} f_n(x_1, \dots, x_n) (\gamma^{j-1} \otimes s \otimes id^{n-j})
(\gamma^{i-1} \otimes s \otimes id^{n-i})
\, .
\end{eqnarray}
After changing the names of the indices, this is seen to be
equal to minus the second set of terms, so $B (B f_n) = 0$.
The case $f_n^\gamma = -f_n$ is completely analogous.
We next prove the relation $b(B f_n) = - B (b f_n)$, again
assuming for definiteness that $f^\gamma_n = f_n$. To compute
$b(B f_n)$, we apply $b$ to eq.~\eqref{deltadef}, and
use that $(B f_n)^\gamma = - B f_n$. This gives
\begin{eqnarray}\label{bdel}
&-& b(B f_n)(x_1, \dots, x_{n+1}) = {\cal C}(x_1, x_{n+1})[\gamma \otimes sf_n(x_2, \dots, x_{n+1})] \nonumber\\
&-& \sum_{i=1}^n {\cal C}(x_1, x_{n+1})[\gamma \otimes f_n(x_2, \dots, x_{n+1})
(\gamma^{i-1} \otimes s \otimes id^{n-i})] \nonumber\\
&+& \sum_{i=1}^n (-1)^i sf_n(x_1, \dots, \widehat x_i, \dots, x_{n+1})(id^{i-1} \otimes {\cal C}(x_i, x_{i+1}) \otimes id^{n-i}) \nonumber\\
&-& \sum_{i,j=1}^n (-1)^i f_n(x_1, \dots, \widehat x_i, \dots, x_{n+1})(\gamma^{j-1} \otimes s \otimes id^{n-j}) (id^{i-1} \otimes {\cal C}(x_i, x_{i+1}) \otimes id^{n-i}) \nonumber\\
&+& (-1)^{n+1} {\cal C}(x_n, x_{n+1}) [s f_n(x_1, \dots, x_n) \otimes id] \nonumber\\
&-& (-1)^{n+1}
\sum_{i=1}^n {\cal C}(x_n, x_{n+1}) (f_n(x_1, \dots, x_n) \otimes id)(\gamma^{i-1} \otimes s \otimes id^{n-i+1}) \, .
\end{eqnarray}
We next evaluate $B(b f_n)$ by applying $B$ to eq.~\eqref{bfndef}. This gives
\begin{eqnarray}\label{delb}
&& B(b f_n)(x_1, \dots, x_{n+1}) = s {\cal C}(x_1, x_{n+1}) [id \otimes f_n(x_2, \dots, x_{n+1})]\nonumber\\
&+& \sum_{i=1}^n (-1)^i \, s f_n(x_1, \dots, \widehat x_i, \dots, x_{n+1})
[id^{i-1} \otimes {\cal C}(x_i, x_{i+1}) \otimes id^{n-i}] \nonumber\\
&+& (-1)^{n+1} \, s {\cal C}(x_n, x_{n+1})[f_n(x_1, \dots, x_n) \otimes id] \nonumber\\
&-& \sum_{i=1}^{n+1} {\cal C}(x_1, x_{n+1}) [id \otimes f_n(x_2, \dots, x_{n+1})](\gamma^{i-1} \otimes s \otimes id^{n+1-i}) \nonumber\\
&-& \sum_{j=1}^{n+1} \sum_{i=1}^n (-1)^i \, f_n(x_1, \dots, \widehat x_i, \dots, x_{n+1})
[id^{i-1} \otimes {\cal C}(x_i, x_{i+1}) \otimes id^{n-i}](\gamma^{j-1} \otimes s \otimes id^{n+1-j}) \nonumber\\
&-& (-1)^{n+1} \sum_{i=1}^{n+1} {\cal C}(x_n, x_{n+1})[f_n(x_1, \dots, x_n) \otimes id] (\gamma^{i-1} \otimes
s \otimes id^{n+1-i}) \, .
\end{eqnarray}
We next bring $s$ behind ${\cal C}$ in all terms in this expression using eq.~\eqref{scompat},
and we use that ${\cal C}$ itself is even under $\gamma$. If these
steps are carried out, then it is seen that all terms in eq.~\eqref{bdel} match
a corresponding term in eq.~\eqref{delb}. The calculation when $f_n^\gamma = -f_n$
is again analogous.
\qed
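The two statements of the lemma can again be tested numerically in a point-independent Grassmann toy model (our own sketch, with all names ours: $V = \Lambda(\theta)$ with basis $\{1,\theta\}$, ${\cal C} \equiv m$ the Grassmann product, $s = \partial/\partial\theta$, $\gamma$ the parity). On 1-cochains of definite parity, the formulas \eqref{bfndef}, \eqref{bdefodd} and \eqref{deltadef} reduce to finite tensor contractions, and the identities $B^2 = 0$ and $bB + Bb = 0$ hold up to rounding:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
# Grassmann toy model: e_0 = 1 (even), e_1 = theta (odd); theta^2 = 0.
# Row convention: a map f acts as f(e_a) = sum_c f[a, c] e_c, so "g after f" is f @ g.
M = np.zeros((d, d, d))                      # point-independent stand-in for C
M[0, 0, 0] = M[0, 1, 1] = M[1, 0, 1] = 1.0
G = np.diag([1.0, -1.0])                     # grading gamma
S = np.zeros((d, d)); S[1, 0] = 1.0          # s = d/dtheta

def b_even(f1):
    # eq. (bfndef) for n = 1: (b f1)(u, v) = u f1(v) - f1(uv) + f1(u) v
    return (np.einsum('bq,aqc->abc', f1, M)
            - np.einsum('abp,pc->abc', M, f1)
            + np.einsum('ap,pbc->abc', f1, M))

def b_odd(f1):
    # eq. (bdefodd) for n = 1: (b f1)(u, v) = -gamma(u) f1(v) + f1(uv) - f1(u) v
    return (-np.einsum('ap,bq,pqc->abc', G, f1, M)
            + np.einsum('abp,pc->abc', M, f1)
            - np.einsum('ap,pbc->abc', f1, M))

def B1(f1):
    # eq. (deltadef) for n = 1: B f1 = s f1 - f1^gamma s
    return f1 @ S - S @ (G @ f1 @ G)

def B2(f2):
    # eq. (deltadef) for n = 2: B f2 = s f2 - f2^gamma (s x id) - f2^gamma (gamma x s)
    f2g = np.einsum('ap,bq,pqr,rc->abc', G, G, f2, G)
    return (np.einsum('abp,pc->abc', f2, S)
            - np.einsum('ap,pbc->abc', S, f2g)
            - np.einsum('ap,bq,pqc->abc', G, S, f2g))

f_even = np.diag(rng.standard_normal(d))                            # f^gamma = +f
f_odd = rng.standard_normal((d, d)); np.fill_diagonal(f_odd, 0.0)   # f^gamma = -f

assert np.abs(B1(B1(f_even + f_odd))).max() < 1e-12                  # B^2 = 0
assert np.abs(B2(b_even(f_even)) + b_odd(B1(f_even))).max() < 1e-12  # bB + Bb = 0
assert np.abs(B2(b_odd(f_odd)) + b_even(B1(f_odd))).max() < 1e-12
print("B^2 = 0 and bB + Bb = 0 in the Grassmann toy")
```

Note that $B$ maps even cochains to odd ones and vice versa, which is why the parity-appropriate formula for $b$ must be used on $Bf_1$ in the check.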
The fact that $b^2 = 0$ and the
properties of $B$ and $b$ stated in the lemma imply that $(B+b)^2 = B^2 + b^2 + bB + Bb = 0$.
Hence the map
\begin{equation}
\delta := B + b \,, \quad \quad \delta : \bigoplus_n \Omega^n(V) \to \bigoplus_n \Omega^n(V)
\end{equation}
is again a differential, i.e., it satisfies $\delta^2 = 0$. Therefore, we can
again define a corresponding cohomology ring
\begin{equation}
H^n(\delta; V) := \frac{\{(f_1, f_2, \dots, f_n, 0, 0, \dots) \in \ker \delta\}}{
\{(f_1, f_2, \dots, f_n, 0, 0, \dots) \in {\rm ran}\, \delta \}} \equiv
\frac{Z^n(\delta; V)}{B^n(\delta; V)} \, .
\end{equation}
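Since $b$ raises the degree $n$ by one while $B$ preserves it, the action of $\delta$ on a sequence can be written out in components as
\begin{equation}
\delta (f_1, f_2, f_3, \dots) = (B f_1, \, b f_1 + B f_2, \, b f_2 + B f_3, \, \dots) \, .
\end{equation}
This component form makes the cocycle and coboundary conditions below immediate.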
Thus, a general element in $H^n(\delta; V)$ consists of an equivalence class
of a sequence
\begin{equation}
(f_1, f_2, \dots, f_n, 0, 0, \dots)\,, \quad Bf_1 = bf_n = 0 \, , \quad bf_{i-1} = -Bf_i \, \quad \text{for $1<i\le n$} \, ,
\end{equation}
where each $f_i$ is an element in $\Omega^i(V)$ and $n$ is some finite number,
modulo all sequences with the property that there exist $h_i \in \Omega^i(V) \cap {\rm dom} \, b$
for $1 \le i < n$ such that
\begin{equation}
(f_1, f_2, \dots, f_n, 0, 0, \dots)\,, \quad f_1 = Bh_1\, , \quad f_n = bh_{n-1} \,, \quad f_i = bh_{i-1} + Bh_i \, ,
\end{equation}
for all $1<i<n$. The conditions~\eqref{sconsi}, \eqref{cconsi}, \eqref{bciwi}, which express
respectively the nilpotency of the perturbed BRST operator
$s_i$, the compatibility of the BRST operator with the perturbations
${\cal C}_i$ of the operator product, and the corresponding associativity condition at the $i$-th
order in perturbation theory, can now be captured by a single condition
in terms of this cohomology ring. For this, we define the
differentials $b, B$ and $\delta = B+b$ as
above in terms of the unperturbed theory, i.e. using
${\cal C}_0$ and $s_0$. For $i>0$, we combine
$s_i$ and ${\cal C}_i$ into the element
\begin{equation}
\beta_i := (s_i, {\cal C}_i, 0, 0, \dots) \in \bigoplus_n \Omega^n(V) \, ,
\end{equation}
and we define $\alpha_i = (u_i, v_i, w_i, 0, 0, \dots)$, where
\begin{eqnarray}
u_i(x_1) &:=& -\sum_{j=1}^{i-1} s_j s_{i-j} \,, \\
v_i(x_1, x_2) &:=& -\sum_{j=1}^{i-1} s_j {\cal C}_{i-j}(x_1, x_2) -
{\cal C}_{i-j}(x_1, x_2) (s_j \otimes id) - {\cal C}_{i-j}(x_1, x_2)(\gamma \otimes s_j) \, , \nonumber\\
w_i(x_1, x_2, x_3) &:=& -\sum_{j=1}^{i-1} {\cal C}_j(x_1, x_3)[id \otimes {\cal C}_{i-j}(x_2, x_3)]
- {\cal C}_j(x_2, x_3)[{\cal C}_{i-j}(x_1, x_2) \otimes id] \, . \nonumber
\end{eqnarray}
The conditions~\eqref{sconsi}, \eqref{cconsi}, \eqref{bciwi} can now be simply
and elegantly restated as the single condition
\begin{equation}\label{betacond}
\delta \beta_i = \alpha_i \, .
\end{equation}
This is the desired cohomological formulation of our consistency
conditions for perturbations of a gauge theory.
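For orientation, at first order, where $\alpha_1 = 0$, the condition $\delta \beta_1 = 0$ reads in components
\begin{equation}
B s_1 = 0 \, , \quad b s_1 + B {\cal C}_1 = 0 \, , \quad b {\cal C}_1 = 0 \, ,
\end{equation}
i.e., precisely the first order versions of the nilpotency, compatibility, and associativity conditions.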
Let us analyze the conditions~\eqref{betacond} on $\beta_i$. First we
note that $\alpha_1 = 0$, and that $\alpha_i$ is defined in terms
of $\beta_1, \beta_2, \dots, \beta_{i-1}$ for $i>1$.
When $i=1$, the above condition hence states that $\delta \beta_1 = 0$,
meaning that $\beta_1 \in Z^2(\delta; V)$. On the other hand, we can
express the situation when
$s_1$ and ${\cal C}_1$ merely correspond to a
field redefinition [see eq.~\eqref{ctrivial1}] as saying that
\begin{equation}
\beta_1 = \delta \zeta_1 \, ,
\end{equation}
where $\zeta_1 \equiv (z_1, 0, 0, \dots)$ is given in terms of the first order
field redefinition $z_1$. Thus, in this case $\beta_1 \in B^2(\delta; V)$.
In summary, the first order perturbations of the BRST-operator and of the product modulo
the trivial ones are in one-to-one correspondence with the non-trivial elements
of the ring $H^2(\delta; V)$. Let us now assume that we have picked a non-trivial
first order perturbation $\beta_1$---assuming that such a perturbation exists. Then
$\beta_2$ must satisfy eq.~\eqref{betacond}, $\delta \beta_2 = \alpha_2$, for
the $\alpha_2$ calculated from $\beta_1$. Clearly, because $\delta^2 = 0$,
a necessary condition for the existence of a solution to eq.~\eqref{betacond}
is that $\delta \alpha_2 = 0$, meaning that $\alpha_2 \in Z^3(\delta; V)$. This
can indeed be checked to be the case (see the lemma below).
Our requirement that $\delta \beta_2 = \alpha_2$ is however a stronger
statement, meaning that in fact $\alpha_2 \in B^3(\delta; V)$. Thus, if the class
$[\alpha_2]$ in $H^3(\delta; V)$ is non-trivial, then no second order perturbation
to our gauge theory exists, or said differently, $[\alpha_2] \in H^3(\delta; V)$
is an obstruction to continue the deformation process.
Let us assume that there is no obstruction so that
a solution $\beta_2$ to the ``inhomogeneous equation'' $\delta \beta_2 = \alpha_2$ exists.
Any solution to the equation will only be unique up to a solution to the corresponding
``homogeneous equation'' $\delta \beta_2 = 0$. In fact, because any solution
to the inhomogeneous equation can be written as an arbitrary but fixed solution
plus the general solution to the homogeneous equation, it follows that
the second order perturbations $\beta_2$ are parametrized by the elements of
$Z^2(\delta; V)$. Special solutions to the homogeneous equation
include in particular ones of the form $\beta_2 = \delta \zeta_2 \in B^2(\delta; V)$,
with $\zeta_2 \equiv (z_2, 0, 0, \dots)$. However, any such solution of the
homogeneous equation can again be absorbed into a second order field redefinition
parametrized by $z_2$. Thus, we see that if the obstruction $[\alpha_2]$
vanishes at second order, then the second order perturbations modulo the trivial
perturbations are again parametrized by the elements of the space $H^2(\delta; V)$.
At a general order $i$, we assume inductively that a solution to the
consistency relations $\delta \beta_j = \alpha_j$ has been found for all
$j<i$, meaning in particular that the obstructions
$[\alpha_j]$ vanish for all $j<i$.
By the lemma below, $\delta \alpha_i = 0$, so
$\alpha_i$ defines a class $[\alpha_i] \in H^3(\delta; V)$.
If this class is non-trivial, then the deformation process cannot be continued. If
it is the trivial class, by definition there is a solution $\beta_i$ to the
equation $\delta \beta_i = \alpha_i$. Again, this is unique only up to a solution to the
corresponding homogeneous equation $\delta \beta_i = 0$. The non-trivial solutions among these
not corresponding to a field redefinition are again in one-to-one correspondence with
the elements in the ring $H^2(\delta; V)$. Thus, a sufficient condition for
there to exist a consistent, non-trivial perturbation to the product and
BRST operator to arbitrary order in perturbation theory is
\begin{equation}
H^2(\delta; V) \neq 0\, , \quad H^3(\delta; V) = 0 \, ,
\end{equation}
for in this case all obstructions are trivial.
Moreover, in that case, $H^2(\delta; V)$ parameterizes all non-trivial $i$-th order
perturbations for any $i \ge 1$.
\begin{lemma}
Assume that $\delta \beta_j = \alpha_j$ for all $j<i$, or equivalently,
that $[\alpha_j] \in H^3(\delta; V)$ defines the trivial element for
all $j<i$, and assume that the chain $\alpha_i$ is in the domain of $\delta$ for all $i$. Then we have
$\delta \alpha_i = 0$. In component form
\begin{equation}
B u_i = 0 \, , \quad bu_i + Bv_i = 0 \, , \quad bv_i + Bw_i = 0 \, ,
\quad bw_i = 0 \, .
\end{equation}
\end{lemma}
\noindent
{\em Proof:} For a given $i$, the hypothesis of the lemma amounts to
saying that $Bs_j = u_j, bs_j + B{\cal C}_j = v_j$ and $b{\cal C}_j = w_j$ for all
$j<i$. It follows from the last equation that $bw_i = 0$, as we
have already proved in lemma~\ref{obstrlemma} above.
We next concentrate on proving the relation $Bu_i = 0$. We have
\begin{equation}\label{Bf1}
B u_i = -\sum_{j=1}^{i-1} (Bs_j) s_{i-j} + \sum_{j=1}^{i-1} s_{i-j} (Bs_j) \, .
\end{equation}
Now, using that $Bs_j = u_j$ for the perturbations at order
$j<i$ and the definition of $u_j$, the first sum is equal to
\begin{eqnarray}
&&\sum_{j=1}^{i-1} (Bs_j) s_{i-j} =\sum_{j=1}^{i-1} \sum_{k=1}^{j-1} s_k s_{j-k} s_{i-j}
\nonumber\\
&=&\sum_{j=1}^{i-1} \sum_{k=1}^{i-j-1} s_j s_{i-j-k} s_k =
\sum_{j=1}^{i-1} s_{i-j} (Bs_j)
\, .
\end{eqnarray}
Thus, the first and second sum in~\eqref{Bf1} precisely cancel, and we have shown
$Bu_i = 0$.
We next show that $bu_i + Bv_i = 0$. A straightforward calculation using
the definitions of $v_i$ and of $B$ gives
\begin{eqnarray}
&&Bv_i(x_1, x_2) = \\
&&\sum_{j=1}^{i-1} -(Bs_j) \Big( {\cal C}_{i-j}(x_1, x_2) \Big) + s_j \Big( B{\cal C}_{i-j}(x_1, x_2) \Big) \nonumber\\
&&\sum_{j=1}^{i-1} +{\cal C}_{i-j}(x_1, x_2) \Big( Bs_j \otimes id + id \otimes Bs_j \Big) \nonumber\\
&&\sum_{j=1}^{i-1} +B{\cal C}_{i-j}(x_1, x_2) \Big( s_j \otimes id + \gamma \otimes s_j \Big) \, .\nonumber
\end{eqnarray}
By the assumptions of the lemma, we may substitute
$B{\cal C}_j = v_j - bs_j$ and $Bs_j = u_j$ for $j<i$.
This leads to
\begin{eqnarray}
&&Bv_i(x_1, x_2) = \\
&&\sum_{j=1}^{i-1} -u_j \Big( {\cal C}_{i-j}(x_1, x_2) \Big) - s_j \Big( bs_{i-j}(x_1, x_2) \Big) \nonumber\\
&&\sum_{j=1}^{i-1} +{\cal C}_{i-j}(x_1, x_2) \Big( u_j \otimes id + id \otimes u_j \Big) \nonumber\\
&&\sum_{j=1}^{i-1} -bs_{i-j}(x_1, x_2) \Big( s_j \otimes id + \gamma \otimes s_j \Big) \nonumber\\
&&\sum_{j=1}^{i-1} +s_j v_{i-j}(x_1, x_2) + v_{i-j}(x_1, x_2)\Big( s_j \otimes id + \gamma \otimes s_j \Big)
\, .\nonumber
\end{eqnarray}
We now use again the definition of $b$ and we substitute
the expressions for $v_j$ and $u_j$. If this is done,
then many terms cancel out and
we are left with
\begin{eqnarray}
B v_i(x_1, x_2) &=& \sum_{j=1}^{i-1} {\cal C}_0(x_1, x_2) \Big( s_j s_{i-j} \otimes id + id \otimes
s_j s_{i-j} \Big) - s_{i-j}s_j \Big( {\cal C}_0(x_1, x_2) \Big) \nonumber\\
&=& -bu_i(x_1, x_2) \, ,
\end{eqnarray}
which is what we wanted to show.
We finally prove the relation $Bw_i = -bv_i$.
Using the definition of $b$ and of $v_i$, we see after some
manipulations that $bv_i$ can be brought into the form
\begin{eqnarray}
&&bv_i(x_1, x_2, x_3) = \\
&&\sum_{j=1}^{i-1} -bs_j(x_1, x_3) \Big( id \otimes {\cal C}_{i-j}(x_2,x_3) \Big)
+bs_j(x_2, x_3) \Big({\cal C}_{i-j}(x_1, x_2) \otimes id \Big) \nonumber\\
&&\sum_{j=1}^{i-1} +{\cal C}_j(x_2, x_3) \Big(bs_{i-j}(x_1, x_2) \otimes id \Big)
-{\cal C}_j(x_1, x_3) \Big(\gamma \otimes bs_{i-j}(x_2, x_3) \Big) \nonumber\\
&&\sum_{j=1}^{i-1} -b{\cal C}_j(x_1, x_2, x_3) \Big(s_{i-j} \otimes id \otimes id +
\gamma \otimes s_{i-j} \otimes id +
\gamma \otimes \gamma \otimes s_{i-j}\Big) \nonumber\\
&&\sum_{j=1}^{i-1} s_j \Big( b{\cal C}_{i-j}(x_1, x_2, x_3) \Big) \,\, , \nonumber
\end{eqnarray}
where $(x_1, x_2, x_3) \in {\mathcal F}_3$. On this domain we may substitute
the assumption of the lemma that $bs_j + B{\cal C}_j = v_j$ and that $b{\cal C}_j = w_j$
for all $j<i$. This results in the equation
\begin{eqnarray}
&&bv_i(x_1, x_2, x_3) = \\
&&\sum_{j=1}^{i-1} +B{\cal C}_j(x_1, x_3) \Big( id \otimes {\cal C}_{i-j}(x_2,x_3) \Big)
-B{\cal C}_j(x_2, x_3) \Big({\cal C}_{i-j}(x_1, x_2) \otimes id \Big) \nonumber\\
&&\sum_{j=1}^{i-1} -{\cal C}_j(x_2, x_3) \Big(B{\cal C}_{i-j}(x_1, x_2) \otimes id \Big)
+{\cal C}_j(x_1, x_3) \Big(\gamma \otimes B{\cal C}_{i-j}(x_2, x_3) \Big) \nonumber\\
&&\sum_{j=1}^{i-1} -v_j(x_1, x_3) \Big( id \otimes {\cal C}_{i-j}(x_2,x_3) \Big)
+v_j(x_2, x_3) \Big({\cal C}_{i-j}(x_1, x_2) \otimes id \Big) \nonumber\\
&&\sum_{j=1}^{i-1} +{\cal C}_j(x_2, x_3) \Big(v_{i-j}(x_1, x_2) \otimes id \Big)
-{\cal C}_j(x_1, x_3) \Big(\gamma \otimes v_{i-j}(x_2, x_3) \Big) \nonumber\\
&&\sum_{j=1}^{i-1} -w_j(x_1, x_2, x_3) \Big(s_{i-j} \otimes id \otimes id +
\gamma \otimes s_{i-j} \otimes id + \gamma \otimes \gamma \otimes s_{i-j}\Big) \nonumber\\
&&\sum_{j=1}^{i-1} +s_j \, w_{i-j}(x_1, x_2, x_3) \,\, . \nonumber
\end{eqnarray}
We compute the first four terms in the expression on the right
hand side as
\begin{eqnarray}
&=&\sum_{j=1}^{i-1} +s_0{\cal C}_j(x_1, x_3) \Big( id \otimes {\cal C}_{i-j}(x_2,x_3) \Big) \nonumber\\
&&\sum_{j=1}^{i-1} -{\cal C}_j(x_1, x_3) \Big(s_0 \otimes {\cal C}_{i-j}(x_2, x_3) \Big) \nonumber\\
&&\sum_{j=1}^{i-1} -s_0{\cal C}_j(x_2, x_3) \Big({\cal C}_{i-j}(x_1, x_2) \otimes id \Big) \nonumber\\
&&\sum_{j=1}^{i-1} +{\cal C}_j(x_2, x_3) \Big(\gamma \, {\cal C}_{i-j}(x_1, x_2) \otimes s_0 \Big) \nonumber\\
&&\sum_{j=1}^{i-1} +{\cal C}_j(x_2, x_3) \Big( {\cal C}_{i-j}(x_1,x_2)(s_0 \otimes id) \otimes id \Big) \nonumber\\
&&\sum_{j=1}^{i-1} +{\cal C}_j(x_2, x_3) \Big({\cal C}_{i-j}(x_1, x_2)(\gamma \otimes s_0) \otimes id \Big) \nonumber\\
&&\sum_{j=1}^{i-1} -{\cal C}_j(x_1, x_3) \Big(\gamma \otimes {\cal C}_{i-j}(x_2, x_3) (s_0 \otimes id) \Big) \nonumber\\
&&\sum_{j=1}^{i-1} -{\cal C}_j(x_1, x_3) \Big(\gamma \otimes {\cal C}_{i-j}(x_2, x_3) (\gamma \otimes s_0) \Big) = -Bw_i(x_1, x_2, x_3) \, .
\end{eqnarray}
The remaining terms cancel if we substitute the expressions $bs_j + B{\cal C}_j = v_j$ and $b{\cal C}_j = w_j$
for $v_j, w_j$ for $j<i$. Thus, we
have shown that $bv_i = -Bw_i$, and this concludes the proof of the lemma.
\qed
\section{Euclidean invariance}\label{euclidinvariance}
Above, we have defined quantum field theory by a collection of
OPE-coefficients subject to certain axiomatic requirements, and
we have pointed out that the essential information is contained in the
2-point coefficients ${\cal C}(x_1, x_2)$. The main condition that these
coefficients have to satisfy is the associativity condition~\eqref{maincondition}.
They also have to satisfy the condition of Euclidean invariance. We will now
explain how that condition can be used to simplify the coefficients ${\cal C}(x_1, x_2)$,
and how to reformulate the associativity condition in terms of the simplified
coefficients.
Let us denote the components of ${\cal C}(x_1, x_2)$ in a basis of $V$ by
$C^c_{ab}(x_1, x_2)$. We use Euclidean invariance to write these 2-point OPE
coefficients as
\begin{equation}\label{cdecomp}
C_{ab}^c(x_i, x_j) = \sum_I
\left[
\begin{matrix}
\hat c \\
\hat a \,\,\,\, \hat b
\end{matrix}
; \,\,I
\right] (\hat x_{ij}) \cdot
f_{ab}^c(I; r_{ij}) \, .
\end{equation}
Here, the quantity in brackets is an invariant tensor
\begin{equation}\label{invarianttensor}
\left[
\begin{matrix}
i \\
j \,\,\,\, k
\end{matrix}
;\,\,
I
\right]:
S^{D-1} \to V_i^{} \otimes V_j^{} \otimes V_k^* \, ,
\end{equation}
meaning that it satisfies the transformation law
\begin{equation}
\left[
\begin{matrix}
i \\
j \,\,\,\, k
\end{matrix}
;\,\,
I
\right] (g \hat x) = R_i^*(g) R_j^{}(g) R_k^{}(g)
\left[
\begin{matrix}
i \\
j \,\,\,\, k
\end{matrix}
;\,\,
I
\right] (\hat x) \, ,
\end{equation}
for all $\hat x \in S^{D-1}$, and all $g$ in the
covering (spin) group of $SO(D)$. The quantities $f_{ab}^c: {\mathbb R}_+ \to {\mathbb C}$ are analytic functions,
$r_{ij}=|x_i - x_j|$, $\hat x_{ij} = x_{ij}/r_{ij}$, and
$I$ is an index that labels the space of invariant tensors
on the $(D-1)$-dimensional sphere.
In the following, we will restrict attention to
the case $D=3$ for pedagogical purposes, since
the representation theory of the corresponding
spin group $SU(2)$ is most familiar.
In the case $D=3$, the representation
labels may be identified with spins $\in \frac{1}{2} {\mathbb N}$, and
the representation spaces are $V_j = {\mathbb C}^{2j+1}$.
A basis of invariant tensors~\eqref{invarianttensor} is labeled
by a pair of spins
$I=[l_1 l_2] \in \frac{1}{2} {\mathbb N} \times \frac{1}{2} {\mathbb N}$,
and is given by
\begin{equation}\label{o3inv}
\left[
\begin{matrix}
j_1 \\
j_2 \,\,\,\, j_3
\end{matrix}
;\,\,I
\right]
(\hat x) =
\left\{
\begin{matrix}
l_1 \\
j_2 \,\,\,\, j_3
\end{matrix}
\right\}
\left\{
\begin{matrix}
j_1 \\
l_1 \,\,\,\, l_2
\end{matrix}
\right\} Y_{l_2}(\hat x)
\end{equation}
in terms of the Clebsch-Gordan coefficients ($3j$-symbols) of $SU(2)$ and the
spherical harmonics $Y_{lm}$ on $S^2$. Here
we have suppressed the magnetic quantum numbers, and as
everywhere in what follows,
magnetic quantum numbers associated with spins are summed
over if the spins appear twice. In the above example, the invariant
tensor should have 3 additional indices for the magnetic quantum
numbers associated with the representations
$j_1, j_2, j_3$, which have been suppressed.
The magnetic quantum numbers associated with $l_1, l_2$ are
contracted in the above expression, because each of these spins
appears twice.
The decomposition~\eqref{cdecomp} provides a split of the 2-point
OPE coefficients into the purely representation theoretic tensor part
$[
\therefore \,\, ;\,\,I
]$
determined entirely by the representation theory of $SU(2)$,
and the dynamical part $f^c_{ab}$, which is a scalar
function that is holomorphic in the radial variable $r \in {\mathbb R}_+$.
It is clear that it should be possible to formulate our associativity
condition in terms of these functions $f^c_{ab}$, as the
tensor coefficients are determined entirely in terms of group theory. To present
the resulting associativity conditions on $f^c_{ab}$
in a reasonably short form, we introduce the
notation $\rho_1= r_{23}, \rho_2 = r_{13}, \rho_3 = r_{12}$ for the
side lengths and
\begin{equation}
\theta_1 = \arccos \frac{\rho_2^2 + \rho_3^2 - \rho_1^2}{2 \rho_2 \rho_3}, \quad
{\rm etc.}
\end{equation}
for the angles of the triangle in ${\mathbb R}^3$ spanned by $x_1, x_2, x_3$, see fig.~\ref{triangle}.
\begin{figure}
\setlength{\unitlength}{1cm}
\begin{center}
\includegraphics[width=4.5in]{newfig6.eps}
\end{center}
\caption{The triangle spanned by $x_1, x_2, x_3$.}
\label{triangle}
\end{figure}
We also denote the spin associated with a field
$\phi_a$ by $\hat a \in \frac{1}{2} {\mathbb N}$.
Then the associativity condition~\eqref{maincondition} is equivalent
to the following condition:
\begin{eqnarray}\label{conscond3d}
&& \sum_b \sum_{j_1, j_2, j_5} \left\{
\begin{matrix}
j_6 & j_2 & \hat a_4\\
j_7 & j_5 & j_1
\end{matrix}
\right\} \left\{
\begin{matrix}
j_3 & j_5 & \hat b \\
j_1 & \hat a_3 & j_6
\end{matrix}
\right\} \,
{\rm P}_{j_1}(\cos \theta_2)\\
&& \times
f_{a_1 a_2}^{b}\Big(\rho_3; [j_3 j_1] \Big)
f_{b a_3}^{a_4}\Big(\rho_1; [j_5 j_2] \Big)=\nonumber\\
&& \sum_b \sum_{j_1, j_2, j_4, j_5}
\left\{
\begin{matrix}
j_6 & j_2 & \hat a_4\\
j_7 & j_5 & j_1
\end{matrix}
\right\} \left\{
\begin{matrix}
j_4 & j_5 & \hat a_5 \\
j_1 & \hat a_2 & j_6
\end{matrix}
\right\}
\left\{
\begin{matrix}
\hat a_1 & j_6 & j_4 \\
\hat a_3 & \hat a_2 & j_5
\end{matrix}
\right\}
\,
{\rm P}_{j_1}(\cos \theta_3) \nonumber\\
&&\times
f_{a_1 a_3}^{b}\Big(\rho_2; [j_4 j_1] \Big)
f_{a_2 b}^{a_4}\Big(\rho_1; [j_5 j_2] \Big) \nonumber \, ,
\end{eqnarray}
in the domain $\rho_3 < \rho_1 < \rho_2$.
Here, the expressions in brackets denote the
well-known $6j$-symbols for $SU(2)$,
\begin{equation}
\left\{
\begin{matrix}
j_1 & j_2 & j_3\\
j_4 & j_5 & j_6
\end{matrix}
\right\}
=
\left\{
\begin{matrix}
j_3 \\
j_1 \,\,\,\, j_2
\end{matrix}
\right\}
\left\{
\begin{matrix}
j_4 \\
j_3 \,\,\,\, j_5
\end{matrix}
\right\}
\left\{
\begin{matrix}
j_5 \,\,\,\, j_2 \\
j_6
\end{matrix}
\right\}
\left\{
\begin{matrix}
j_6 \,\,\,\, j_1 \\
j_4
\end{matrix}
\right\} \, .
\end{equation}
The expressions
${\rm P}_j(z) = {}_2 F_1(-j, j+1, 1; (1-z)/2)$ are the Legendre polynomials.
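For $j \in {\mathbb N}_0$ the hypergeometric series terminates and reproduces the familiar lowest cases,
\begin{equation}
{\rm P}_0(z) = 1 \, , \quad {\rm P}_1(z) = z \, , \quad {\rm P}_2(z) = \tfrac{1}{2}(3z^2 - 1) \, .
\end{equation}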
A similar form of the associativity condition can be obtained for arbitrary
dimensions $D \ge 3$, the only essential difference being that we now encounter
the $6j$-symbols for the spin groups of $SO(D)$ for general $D$. The case $D=2$ is
exceptional and the corresponding expression is considerably simpler, owing
to the fact that the representation theory of $SO(2)$ and its covering group ${\mathbb R}$ is
elementary.
If we let $|a|$ be the dimension of the field $\phi_a$, then the scaling axiom for the
OPE-coefficients implies the relation
\begin{equation}
f_{ab}^c(r) = O(r^{|c|-|a|-|b|}) \, .
\end{equation}
In the case of the free quantum field theory in 3 dimensions defined by the
Lagrangian $L = \frac{1}{2}(\partial \varphi)^2$, the coefficients are in fact monomials and
are given by $f_{ab}^c(r) = \zeta_{ab}^c r^{|c|-|a|-|b|}$ for some complex constants
$\zeta^c_{ab}$, see section~\ref{freefield} for details. Furthermore, one can show~\cite{Hollands06} that,
for the perturbatively defined theory with Lagrangian $L = \frac{1}{2} (\partial \varphi)^2 -
\frac{1}{6} \lambda \varphi^6$ and dimensionless $\lambda$, the coefficients take the form
\begin{equation}
f_{ab}^c(r) = p_{ab}^c(\log r, \lambda) r^{|c|-|a|-|b|} \, ,
\end{equation}
with $p_{ab}^c$ a polynomial in two variables whose degree is $n$ in $\lambda$ if we
compute the coefficients to $n$-th order in perturbation theory, and whose degree in
$\log r$ is no more than $n$ at $n$-th order. The associativity condition~\eqref{conscond3d}
is a quadratic constraint for these polynomials $p_{ab}^c$ at each
arbitrary but fixed order in perturbation theory.
If there are dimensionful parameters in the
Lagrangian, those would effectively be treated as additional perturbations in our framework.
For example, for the Lagrangian $L = \frac{1}{2} (\partial \varphi)^2 + \frac{1}{2} m^2 \varphi^2 +
\frac{1}{6} \lambda \varphi^6$, the coefficients take the form
\begin{equation}
f_{ab}^c(r) = p_{ab}^c(r, \log r, m^2, \lambda) r^{|c|-|a|-|b|} \, ,
\end{equation}
where $p_{ab}^c$ is again a polynomial in all four variables at $n$-th
order in perturbation theory in $m^2$ and $\lambda$. Each term in this polynomial
containing a power $m^{2k}$ contains exactly a power $r^{2k}$, so as
to make each term ``dimensionless'' (with the logarithms and $\lambda$ not counting
as having a dimension).
\section{The fundamental left (vertex algebra) representation}\label{leftrep}
In the previous sections, we have elaborated on our definition of
quantum field theory in terms of consistency conditions. Our formulation involved
only the OPE coefficients such as $C_{ab}^c$. To motivate our constructions, we
sometimes wrote formal relations like
\begin{equation}
\text{``$\phi_a(x_1) \phi_b(x_2) = \sum_c C_{ab}^c(x_1, x_2) \, \phi_c(x_2)$''} \quad .
\end{equation}
But these relations were only heuristic, in the sense that none of our proposed
properties of the OPE coefficients relied on the existence or properties of the
hypothetical operators $\phi_a$, which were only ``dummy variables''.
As we have emphasized, our approach is similar
to the standard viewpoint taken in algebra that an abstract algebra ${\bf A}$ is entirely defined in
terms of its product---i.e., a linear map $m: {\bf A} \otimes {\bf A} \to {\bf A}$ subject to
the associativity condition. But, as in our case, the algebra elements
need not be represented a priori by linear operators on a vector space. Representations in
the context of an algebra are an additional structure defined as
linear maps $\pi: {\bf A} \to {\rm End}(H)$ from the algebra to
the linear operators on a vector space $H$, subject to the condition $\pi[m(A,B)] = \pi(A)\pi(B)$. It is natural
to ask whether there is a construction similar to a representation also in our context.
We shall show in this section that there is indeed a certain ``canonical'' construction, which
has some features in common with an algebra representation, and which will be useful
in the next section. We will refer to this construction as the ``fundamental left-'' or
``vertex algebra representation''.
\begin{defn}
Let $|v\rangle \in V$ be an arbitrary vector. We define a corresponding {\em vertex operator}
${\cal Y}(x, v): V \to V$ by the formula
\begin{equation}
{\cal Y}(x, v)|w\rangle = {\cal C}(x, 0)(|v\rangle \otimes |w\rangle) \, ,
\end{equation}
for all $x \neq 0$. In a basis $\{ |v_a\rangle \}$, the matrix representing the vertex operator is hence given by
\begin{equation}
[{\cal Y}(x, v_a)]_b^c := C_{ab}^c(x,0) \,\,\,\,\,.
\end{equation}
This is our {\it fundamental left-} or {\it vertex algebra representation}.
\end{defn}
Using the consistency condition~\eqref{maincondition},
one can immediately show that
\begin{equation}\label{lalblc1}
{\cal Y}(x, v_a) {\cal Y}(y, v_b) = \sum_c C_{ab}^c(x,y) \, {\cal Y}(y, v_c) \, ,
\end{equation}
for $0<|x-y|<|y|<|x|$, or equivalently that
\begin{equation}\label{lalblc}
{\cal Y}(x, v_a) {\cal Y}(y, v_b) = {\cal Y}(y, {\cal Y}(x-y, v_a)v_b ) \, .
\end{equation}
Thus, by eq.~\eqref{lalblc1}, the vertex operators ${\cal Y}(x, v_a): V \to V$ satisfy the operator product expansion.
The fact that the OPE coefficients in this expansion are precisely the matrix elements of the vertex operators
themselves is expressed in the second relation~\eqref{lalblc}. This quadratic relation is the key
axiom in the theory of vertex operator algebras, see \cite{vertex1, vertex2, vertex3, vertex4}.
Because of eq.~\eqref{lalblc1}, we may formally view the vertex operators as forming a ``representation'' of the heuristic field
operators, i.e., formally ``$\pi(\phi_a(x)) = {\cal Y}(x, v_a)$'' is a ``representation'' of the ``algebra''
defined by the OPE coefficients. This ``representation'' is in some sense analogous to the
GNS-representation~(see e.g.~\cite{Haag}) for $C^*$-algebras. However, we emphasize that
in our case, $V$ is not in a natural way a Hilbert space, and should not be confused with the
physical Hilbert space obtained via the Osterwalder-Schrader reconstruction theorem, see our
remarks in section~\ref{axiomatic}. We will further develop the analogy of our approach to
the theory of vertex operator algebras in a forthcoming paper~\cite{Olbermann}.
\section{Example: The free field}\label{freefield}
Let us now explain our approach to quantum field theory in
a simple example, namely that of a free hermitian bosonic scalar field in $D$ dimensions
classically described by the field equation
\begin{equation}
\square \varphi = 0 \, ,
\end{equation}
with $\square = \delta^{\mu\nu} \partial_\mu \partial_\nu$.
The aim is to present explicitly the OPE coefficients ${\cal C}(x_1, x_2)$
for this model.
This section is joint work with H. Olbermann and details will appear elsewhere.
We begin by describing the space $V$ of fields in our case, assuming $D>2$ for simplicity. The
case $D=2$ can be treated analogously, with only minor modifications.
\begin{defn}
$V$ is defined to be the commutative, unital ${\mathbb C}$-module
generated as a module (i.e., under addition, multiplication and scalar multiplication)
by formal expressions of the form $\partial_{\{\mu_1} \dots \partial_{\mu_N\}} \varphi$,
and unit ${\bf 1}$, where $\mu_i = 1, \dots, D$ and a curly bracket denotes the
totally symmetric, trace-free part, i.e. by definition,
\begin{equation}
\delta^{\mu_i \mu_j}\, \partial_{\{\mu_1} \dots \partial_{\mu_N\}} \varphi = 0 \, .
\end{equation}
\end{defn}
\noindent
The trace free condition has been imposed because any trace would give rise
to an expression containing $\square \varphi$, which we want to vanish in
order to satisfy the field equation on the level of $V$.
A basis of $V$ as a ${\mathbb C}$-vector space can e.g. be given as follows. First,
let us choose a basis of totally symmetric, trace-free, rank-$l$ tensors
in ${\mathbb R}^D$ for any $l \ge 0$. For a given $l \ge 0$, this space has
dimension $N(l,D)$, where
\begin{equation}
N(l,D) =
\begin{cases}
1 & \text{for $l=0$}\\
\frac{(2l+D-2)(l+D-3)!}{(D-2)!l!} & \text{for $l>0$.}
\end{cases}
\end{equation}
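As a check of this formula, the two lowest dimensions give
\begin{equation}
N(l,3) = 2l+1 \, , \qquad N(l,4) = \frac{(2l+2)(l+1)!}{2! \, l!} = (l+1)^2 \, ,
\end{equation}
the familiar multiplicities of the spherical harmonics on $S^2$ and $S^3$, respectively.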
We denote the basis elements by $t_{l,m}, m=1, \dots, N(l,D)$, and
we assume for convenience that they are orthonormal with respect to the natural hermitian
inner product on $({\mathbb R}^{D})^{\otimes l}$ coming from the Euclidean metric
on ${\mathbb R}^D$, i.e. $\bar t_{l',m'} \cdot t_{l,m} = \delta_{ll'} \delta_{mm'}$.
A basis of $V$ is then given by ${\bf 1}$, together with the elements
\begin{equation}\label{vadef}
|v_a \rangle = \prod_{l,m} (a_{l,m}!)^{-1/2}
\left( c_l^{-1/2} \, t_{l,m} \cdot \partial^l \varphi \right)^{a_{l,m}} \, ,
\end{equation}
where $a = \{ a_{l,m} \mid l \ge 0, 0 < m \le N(l,D) \}$ is a multi-index of
non-negative integers, only finitely many of which are non-zero. For later
convenience, we also set
\begin{equation}
c_l = \frac{2^l \, \Gamma(l+1) \Gamma(l+D/2-1)}{\Gamma(D/2-1)} \, .
\end{equation}
The canonical dimension of $|v_a \rangle$ is defined as
\begin{equation}
|a| = \sum_{l,m} a_{l,m}[(D-2)/2+l] \, .
\end{equation}
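For example, the basic field $\varphi$ itself corresponds to the multi-index with the single non-zero entry $a_{0,0} = 1$ and therefore has
\begin{equation}
|a| = \frac{D-2}{2} \, ,
\end{equation}
the canonical dimension of a free scalar field in $D$ dimensions, while each angular momentum unit $l$ adds one unit of dimension.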
It is possible to formally view $V$ as a ``Fock space'', with $a_{l,m}$ the ``occupation numbers''
of the ``mode'' labeled by $l,m$. On this Fock space, one can then define
creation and annihilation operators $\a_{l,m} , \a_{l,m}^+: V \to V$ as usual.
These are defined explicitly by
\begin{eqnarray}
\a_{l,m} |v_a \rangle &:=& (a_{l,m})^{1/2}
\, |v_{a-e_{l,m}} \rangle \\
\a_{l,m}^+ |v_a \rangle &:=& (a_{l,m}+1)^{1/2} \, |v_{a+e_{l,m}} \rangle
\end{eqnarray}
where $e_{l,m}$ is the multiindex with a unit entry at position $l,m$ and zeros elsewhere.
They satisfy the standard commutation relations
\begin{equation}
\left[ \a_{l,m}^{}, \a_{l',m'}^{+} \right] =
\delta_{ll'}\delta_{mm'} \,\, id \, , \quad
\left[ \a_{l,m}^+, \a_{l',m'}^+ \right] =
\left[ \a_{l,m}^{}, \a_{l',m'}^{} \right] = 0
\end{equation}
where $id$ is the identity operator on $V$. The ``vacuum'' vector $|0\rangle$ in this Fock space
by definition corresponds to the identity element ${\bf 1} \in V$.
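For example, acting once with a creation operator on the vacuum produces the one-excitation basis elements
\begin{equation}
\a_{l,m}^+ |0\rangle = |v_{e_{l,m}}\rangle = c_l^{-1/2} \, t_{l,m} \cdot \partial^l \varphi \, ,
\end{equation}
in accordance with eq.~\eqref{vadef}.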
To present the OPE coefficients of the model, it is further convenient to
introduce spherical harmonics in $D$ dimensions. The most straightforward way to do this is
as follows. Let $l \in {\mathbb N}_0$, and let $h_l(x) \in {\mathbb C}[x]$ be a harmonic polynomial on ${\mathbb R}^D$ that
is homogeneous of degree $l$, meaning that $\square h_l(x) = 0$, and that $h_l(\lambda x) =
\lambda^l h_l(x)$ for all $\lambda \in {\mathbb R}_+$. It is not difficult to see that the vector
space spanned by such polynomials is of dimension $N(l,D)$. We let $h_{l,m}(x), 0<m\le N(l,D)$ be
a basis of this vector space
and we define the (scalar) spherical harmonics $Y_{l,m}:S^{D-1} \to {\mathbb C}$ to be the restriction
of the corresponding harmonic polynomials to the $(D-1)$-dimensional sphere. We normalize
the spherical harmonics to turn them into an orthonormal basis on the sphere, in the natural
$L^2$-inner product. The spherical harmonics are closely related to the trace free
symmetric tensors $t_{l,m}$ in $({\mathbb R}^D)^{\otimes l}$ that were introduced above. In fact,
we may choose
\begin{equation}\label{ylmdef}
Y_{l,m}(\hat x) =
k_l \, \bar t_{l,m} \cdot \hat x^{\otimes l} \, ,
\end{equation}
for some normalization constant $k_l$. With this notation in
place, we now explicitly present the OPE coefficients ${\cal C}(x_1, x_2)$ for this model. For this, it is
sufficient to present the vertex operators (left-representatives) ${\cal Y}(x, v_a):V \to V$ for all $|v_a\rangle \in V$, since the
matrix elements $[{\cal Y}(x, v_a)]_b^c = C_{ab}^c(x,0)$ are by definition just the OPE coefficient components, see
sec.~\ref{leftrep}.
First, we give the formula for ${\cal Y}(x, \varphi)$ corresponding to the basic field $\varphi \in V$. This is defined by
\begin{multline}
{\cal Y}(x, \varphi) = \sqrt{{\rm vol}(S^{D-1})} \,
\sum_{l=0}^{\infty} \, \sum_{m=1}^{N(l,D)} \sqrt{\frac{D-2}{2l+D-2}} \times \\
\Big[ r^{l} Y_{l, m}(\hat x) \, \a_{l,m}^{+} + r^{-l-D+2} \overline{ Y_{l, m}(\hat x) } \, \a_{l,m}^{} \Big] \, .
\end{multline}
We will "derive" this formula
from the standard quantum field theory formalism in a future paper~\cite{Olbermann}.
Accidentally, this has precisely the familiar form for a free field operator, with
an "emissive" and an "absorptive" piece, which should not come as a surprise,
since ${\cal Y}(x, \varphi)$ is in a sense the "representative"
of the (formal) field operator $\varphi(x)$ on $V$. Actually, if we furthermore write $r = {\rm e}^t$, then this is
precisely the formula for a free field operator on the manifold ${\mathbb R} \times S^{D-1}$ with
"time" $t$ formally imaginary. We will pursue this analogy elsewhere.
For a general element in $V$, we now give a corresponding formula for the
vertex operator. It is defined by ${\cal Y}(x, {\bf 1}) = id$ for the identity element, and by
\begin{equation}\label{Ladeffree}
{\cal Y}\Big(x, \prod_i \partial^{l_i} \varphi \Big) =
\,\,
: \prod_{i}
\partial^{l_i} {\cal Y}\Big(x, \varphi \Big) :
\,\,
\end{equation}
for a general field monomial.
Here, the following notation is used. The double dots $: \dots :$ mean
``normal ordering'', i.e., all annihilation operators stand to the right of all creation operators.
Again, one can derive this formula using the standard quantum field theory formalism. The
OPE coefficients for the free field are consequently given by $C_{ab}^c(x_1, x_2) := [{\cal Y}(x_1-x_2, v_a)]_b^c
= \langle v^c | {\cal Y}(x_1-x_2, v_a) | v_b \rangle$ or more explicitly by
\begin{equation}\label{cabcfreedef}
C_{ab}^c(x_1, x_2) := \Big\langle 0 \Big|
\prod_{l,m} (\a_{l,m}^{})^{c_{l,m}}
\,
{\cal Y}(x_1-x_2, v_a)
\,
\prod_{l,m} (\a_{l,m}^+)^{b_{l,m}}
\Big| 0 \Big\rangle \, .
\end{equation}
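As a concrete illustration of what the normal ordering prescription does at the operator level, one can represent a single bosonic mode by truncated matrices (a toy check, not part of the construction above; the truncation artifact is confined to the highest level):

```python
import math

# One bosonic mode truncated to n levels: a|k> = sqrt(k)|k-1>, ad = a†.
# Normal ordering moves all creation operators to the left, so
# :a ad: = ad a, and a ad differs from it by the identity (commutator),
# except in the highest truncated level.
n = 8
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(n)] for i in range(n)]
ad = [[a[j][i] for j in range(n)] for i in range(n)]   # transpose of a

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

lhs = matmul(a, ad)      # a a†
na = matmul(ad, a)       # normal-ordered product :a a†: = a† a
# away from the truncation edge: a a† = :a a†: + 1
diff = [[lhs[i][j] - na[i][j] for j in range(n - 1)] for i in range(n - 1)]
```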
We now state that the so-defined OPE-coefficients satisfy
our consistency condition:
\begin{thm}
Let ${\cal Y}(x, v): V \to V$ be defined for our model by formula~\eqref{Ladeffree},
and let the OPE-coefficients $C_{ab}^c(x_1, x_2)$ be defined by eq.~\eqref{cabcfreedef}.
Then the OPE coefficients satisfy the consistency condition~\eqref{maincondition}.
Equivalently, the vertex algebra condition~\eqref{lalblc} holds for the free
field vertex operators ${\cal Y}(x, v_a)$.
\end{thm}
\noindent
{\em Proof:} The proof of this theorem is essentially a longish but straightforward computation, using
various standard identities for the $D$-dimensional spherical harmonics. We will give a
complete proof in \cite{Olbermann}.
\section{Interacting fields}\label{interactingfields}
In the previous section, we have presented the (2-point) OPE coefficients in the example of
a free quantum field associated with the classical equation $\square \varphi = 0$.
It is clearly of interest to know what would be the corresponding coefficients for
a field associated with a non-linear equation such as
\begin{equation}
\square \varphi = \lambda \varphi^p
\end{equation}
where $p$ is some non-negative integer. As has been appreciated for a long time, the construction of
a quantum field theory (and hence in particular of the OPE) associated with such
an equation is extremely difficult, and has only been accomplished so far for
certain values of $p,D$ where the theory has a particularly simple behavior. However, one can treat
$\lambda$ as a formal perturbation parameter, and try to construct the OPE coefficients
in the sense of formal power series in $\lambda$ as we have outlined in general terms in section~\ref{hochschild}.
Here we would like to outline how the field equation can be used to determine
the formal power series for a theory described by an equation of the above type.
Some of the ideas in this section go back, in preliminary form, to discussions with
N.~Nikolov, and also to joint work with H.~Olbermann, which will be published in~\cite{Olbermann}.
As we have seen in section~\ref{leftrep}, the 2-point
OPE coefficients ${\cal C}(x_1, x_2)$ contain the same information as the
corresponding vertex operators ${\cal Y}(x, v)$. In perturbation theory, they
are given by formal power series
\begin{equation}
{\cal Y}(x, v) = \sum_{i=0}^\infty {\cal Y}_i(x, v) \, \lambda^i \, ,
\end{equation}
where each ${\cal Y}_i(x, v)$ is a linear map $V \to V$, and where
${\cal Y}_0(x, v)$ is given by the free field vertex operator
defined in the previous section~\ref{freefield}. As discussed in
subsection~\ref{subfieldeq}, we expect
that the field equation implies:
\begin{equation}\label{fieldeqvertex}
{\cal Y}_i(x, \varphi) = \square^{-1} {\cal Y}_{i-1}(x, \varphi^p ) \, .
\end{equation}
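The structure of this order-by-order inversion can be illustrated by a classical one-dimensional analogue, $y'' = \lambda y^2$: starting from a ``free'' solution annihilated by $d^2/dr^2$, the next order is obtained by applying a particular inverse of $d^2/dr^2$ to the source built from the lower order. A toy sketch only (polynomials encoded as coefficient lists; not the paper's construction):

```python
# Classical 1-d analogue of the inversion y'' = lam * y^2.
# Polynomials in r are encoded as coefficient lists p[k] <-> r^k.
def d2(p):
    """Second derivative of a polynomial."""
    return [k * (k - 1) * c for k, c in enumerate(p)][2:] or [0.0]

def inv_d2(p):
    """A particular right inverse of d^2/dr^2 (double antiderivative
    with both integration constants set to zero)."""
    return [0.0, 0.0] + [c / ((k + 2) * (k + 1)) for k, c in enumerate(p)]

y0 = [0.0, 1.0]        # "free" solution y0(r) = r, annihilated by d^2/dr^2
src = [0.0, 0.0, 1.0]  # lowest-order source y0^2 = r^2
y1 = inv_d2(src)       # first-order correction, = r^4 / 12
```

Applying `d2` to `y1` indeed reproduces the source, which is the analogue of eq.~\eqref{fieldeqvertex} at first order.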
More precisely, in this section we {\em assume} the existence of
${\cal Y}_i$ satisfying this equation, and we also {\em assume} that the
consistency condition~\eqref{lalblc} is satisfied order-by-order;
in vertex operator notation
\begin{equation}
\sum_{j=0}^i {\cal Y}_j( y, v_a) {\cal Y}_{i-j}(x, v_b)
=
\sum_{j=0}^i {\cal Y}_{i-j}\Big(x, {\cal Y}_j ( y-x, v_a ) v_b \Big) \, .
\end{equation}
As we will now show, these assumptions will allow us to inductively
determine the actual form of the vertex operators order by
order in $i$. But before we do this, we must explain a point
related to the choice of $V$ for our interacting theory.
Recall that, in the underlying free theory with $\lambda=0$, $V$ was spanned by formal monomials in
$\varphi$ and its derivatives $\partial_{\{\mu_1} \dots \partial_{\mu_N\}} \varphi$,
where the curly brackets denote the trace-free
part of a tensor. In the free theory, we considered the trace free part only,
since any trace gives rise to a factor of $\square \varphi$ in such a monomial, $v$,
and the corresponding vertex operator ${\cal Y}_0(x,v)$ then vanishes (essentially by definition). However,
for the interacting theory, we must be more careful and allow also traces, i.e.,
we also consider vertex operators whose arguments are formal monomials in $\varphi$ and
its derivatives $\partial_{\mu_1} \dots \partial_{\mu_N} \varphi$.
This enlarged space of objects, $\widehat V$, is a commutative unital differential
module (with derivations $\partial_\mu, \mu=1, \dots, D$ acting in the usual way), and
the vertex operators ${\cal Y}_i(x, v)$ should now be considered as linear maps $\widehat V \owns v \mapsto
{\cal Y}_i(x, v) \in {\rm End}(\widehat V)$. We then also {\em assume} to have a relation
\begin{equation}
\partial_\mu \, {\cal Y}_i(x, v) = {\cal Y}_i(x, \partial_\mu v) \quad, \quad \mu=1, \dots, D \, ,
\end{equation}
where the symbol $\partial_\mu$ denotes a genuine partial $x$-derivative on the left side,
while it is the derivation on the differential module $\widehat V$ on the right side. For details, we
refer to~\cite{Olbermann}. To lighten the notation, we will drop the caret on $\widehat V$ again
for the remaining part of the section.
To make sense of eq.~\eqref{fieldeqvertex}, we first of all need to define the inverse of the Laplace operator.
We rewrite it in $D$-dimensional polar coordinates, and
we furthermore assume that we can expand each vertex operator in spherical harmonics and
coefficients in the ring ${\mathbb C}[r, 1/r, \log r] \otimes {\rm End}(V)$.
Then the vertex operators schematically take the form
\begin{equation}
{\cal Y}_i(x, v) = \sum A_{i,l,m,j,k}(v) r^k (\log r)^j Y_{l,m}(\hat x) \, ,
\end{equation}
with $A_{i,l,m,j,k}(v) \in {\rm End}(V)$.
We define the action of the inverse Laplacian on such expressions by putting\footnote{
It follows from the inductive construction that, if we take any matrix element
of ${\cal Y}_i$ between $\langle v^a|$ and $|v_b \rangle$, then there remain only finitely
many terms in the above sum. Hence, we may take the inverse of the Laplacian term-by-term
without problem.
}
\begin{eqnarray}
&&\square^{-1} [r^k (\log r)^j Y_{l,m}(\hat x) ] :=
j! Y_{l,m}(\hat x) \times \nonumber\\
&& \times
\begin{cases}
(-1)^{j+1} r^l \sum_{i=0}^{j+1} \frac{(-1)^i \log^i r}{i!(2l+D-2)^{j-i+2}} &
\text{if $k=l-2$}\\
-r^{-l-D+2} \sum_{i=0}^{j+1}
\frac{\log^i r}{i!(2l+D-2)^{j-i+2}} &
\text{if $k=-l-D$}\\
r^{k+2} \sum_{i=0}^j \sum_{n=0}^i \frac{(-1)^{i-n} \log^{j-i} r}{(j-i)!(l-k-2)^{n+1}(l+k+D)^{i-n+1}} &
\text{otherwise.}
\end{cases}
\end{eqnarray}
This is a left inverse for the Laplacian. Any other left
inverse can differ from this one only by terms in the kernel of
$\square$, i.e. a harmonic polynomial of $x$ with values in
${\rm End}(V)$.
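The generic (``otherwise'') case of the formula above, with $j=0$, can be checked directly: on the $l$-th harmonic sector the radial part of the $D$-dimensional Laplacian acts on $r^p$ as $[p(p-1)+(D-1)p-l(l+D-2)]\,r^{p-2}$, and for $p=k+2$ the bracket factorizes into the denominator $(k+2-l)(k+D+l)$ appearing above (up to the overall sign convention for $\square^{-1}$). A numerical sanity check of this factorization:

```python
# Radial eigenvalue of the D-dimensional Laplacian on r^p Y_{l,m}:
#   Lap[r^p Y] = [p(p-1) + (D-1)p - l(l+D-2)] r^(p-2) Y.
# For p = k+2 this bracket equals (k+2-l)(k+D+l), which is (up to sign
# convention) the denominator in the generic case of the inverse formula.
def radial_eigen(p, l, D):
    return p * (p - 1) + (D - 1) * p - l * (l + D - 2)

# generic cases: k != l-2 and k != -l-D
for D, l, k in [(4, 1, 3), (6, 2, 1), (3, 0, 2)]:
    assert radial_eigen(k + 2, l, D) == (k + 2 - l) * (k + D + l)
```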
Let us now assume inductively that we have constructed all the
vertex operators ${\cal Y}_j(x, v)$ up to
order $j=i-1$. The vertex operator ${\cal Y}_i(x, \varphi)$ is then given
by eq.~\eqref{fieldeqvertex}. Next, we would like to determine all other vertex operators ${\cal Y}_i(x, v )$,
where $|v \rangle \in V$ is a general element. For this, we perform, at fixed $i$,
an induction in the dimension $\Delta(v)$. Thus, let us assume that we
have succeeded in constructing all vertex operators up to dimension $d$, and let us assume
for the sake of concreteness that we are in $D=4$, so that $\Delta(\varphi) = 1$.
We may hence assume that $d \ge 2$. We may write a general field of dimension $d+1$
as a linear combination of fields of the form $v = w \partial^l \varphi$,
or of the form $v = \partial^{l+1} w$. In both cases,
$w$ has dimension $d-l$, and so ${\cal Y}_j(x, w)$ is inductively
known for $0 \le j \le i$. In the second case, we must have
${\cal Y}_i(x, v) = \partial^{l+1} {\cal Y}_i(x, w)$. In the first case, the
consistency condition gives
\begin{equation}
\sum_{j=0}^i {\cal Y}_j\Big( y, \partial^l \varphi \Big) {\cal Y}_{i-j}\Big(x, w \Big)
=
\sum_{j=0}^i {\cal Y}_{i-j}\Big(x, {\cal Y}_j ( y-x, \partial^l \varphi ) w \Big) \, .
\end{equation}
By the inductive hypothesis, all operators on the left side of the equation are already known.
Now we investigate which operators are not already known on the right side. Evidently,
if $j \neq 0$, then all terms in the corresponding expression are known. If $j = 0$,
we look at the terms that survive in the limit $y \to x$. Using the definition of the
zeroth order vertex operators (free theory), we see that
\begin{equation}
{\cal Y}_0 ( y-x, \partial^l \varphi ) w = w \partial^l \varphi + \dots ,
\end{equation}
where the dots stand for the following terms: (a) terms that vanish as $|x-y| \to 0$ and
(b) a finite Laurent series in $1/|x-y|$ with coefficients that are vectors of dimension
$ \le d$. Let $P_d^j: V \to V$ denote the map which is the identity for
$j \neq 0$, and the projector onto the subspace of vectors of dimension
$\le d$ for $j=0$. Then we can write:
\begin{multline}
{\cal Y}_i(x, v) =
\lim_{y \to x} \Bigg[
\sum_{j=0}^i {\cal Y}_j\Big( y, \partial^l \varphi \Big) {\cal Y}_{i-j}\Big(x, w \Big) \\
-
\sum_{j=0}^i {\cal Y}_{i-j}\Big(x, P_d^j \circ {\cal Y}_j ( y- x, \partial^l \varphi ) w \Big)
\Bigg] \, .
\end{multline}
Now all the terms on the right side are known inductively. We can therefore determine all vertex operators
at order $i$, and hence at arbitrary orders.
This shows how we may construct inductively
the terms in the perturbation series starting from those of the free theory.
\section{Conclusions and outlook}
In this paper, we have suggested a new approach to general, non-conformal, quantum
field theories in terms of consistency conditions. These consistency conditions are
formulated in terms of the operator product expansion (OPE). We showed that these
conditions are quite powerful. For example, they can be used to characterize the possible perturbations of a given
quantum field theory, and they give rise to an efficient algorithm for explicitly
computing the OPE coefficients of the perturbed theory.
This paper is just the beginning of a longer programme. In the future, we would like to
extend the ideas of the paper. In particular, it would be interesting to consider the
following issues:
\begin{itemize}
\item Generalization of our approach to curved space-time;
\item Convergence/Borel summability of the perturbation series;
\item Explicit perturbative calculations;
\item Incorporation of the renormalization group into our approach;
\item (Super-)conformal quantum field theories;
\item Perturbations of 2-dimensional conformal quantum field theories.
\end{itemize}
We intend to study these topics in future publications.
\bigskip
\noindent
{\bf Acknowledgements:} I would like to thank N.~Nikolov for
extensive discussions on various topics in this paper. I would also like to
thank K.-H.~Rehren and R.~M.~Wald for discussions. I would especially
like to thank C.~Brouder for his careful reading of the manuscript, and
in particular for pointing out several sign errors in the first version.
This morning I went to one of my least favorite places: Family Court. I was there to argue a motion on behalf of my client, who—because of a lifetime of little opportunity, poor choices, and a broken justice system—was wrongfully saddled with a money judgment of more than $40,000. Not only was the judgment faulty, my client also has no high school degree and works part-time as a janitor. He diligently looks for full-time, and better-paying, work (and is studying for his GED). But until he finds that fabled job, he barely makes more than the poverty line. A $40,000 debt might as well be a $40,000,000 debt. To boot, he has a criminal record and is a minority, which basically means his chances of getting a better paying job are slim to none.
Going into court today, I was apprehensive. The law is squarely on my client's side, but the judge is not. The past two court appearances involved the judge making a lot of wrong assumptions about my client and yelling. A lot. I don't like getting yelled at, but in this capacity, I don't mind. Not only was I prepared and confident in the law as it applies to my client, this is also my job: to advocate for my client. If anyone is going to get yelled at, then, I would rather it be me than my client. I'm here to bear the brunt of the judge's wrath for him. And to present my client's case and the law to the judge—no matter how much he berates me for it.
As confident as I was in the law this morning, I was less confident in the system—the judge, the court, and the administration—and in my ability to stand up for justice in the long run. I direct a tiny Christian legal services organization, Open Hands. We provide free legal services to New York's most underserved populations—from homeless men in recovery on the Bowery to undocumented immigrants in Harlem. We use volunteer attorneys to provide free legal advice and counsel to New York City's neediest residents. We pray for and with our clients. We pray not for the world to see, but we pray individually and corporately, acknowledging the brokenness in the world and admitting that, without Christ, we can achieve no restoration.
When I met my client through an elder in his church, I couldn't not take his case. The injustice of his situation and his repeated attempts and inability to find relief in the system without an advocate was too much to ignore. But now, if we lose in court today, we have to appeal. And an appeal demands time and money we don't have. Going into court, I know that I need to win this motion today.
In court, we wait and wait and wait. As the hours pass, I'm hit smack in the face with the broken justice system and the broken family sphere: court officers yelling at litigants, children crying, and stressed-out parents screaming at their children and at each other. A courthouse is no place for a child. And yet, I also see small children—younger than my toddler son—patiently waiting in the gallery, waiting for their parents' case to be called. Already these kids seem accustomed to the waiting and administrating of their lives by forces beyond their control. Surely Family Court is not the only place these young ones wait—poverty is so pervasive that it extends to housing (the shelter system), food (the welfare offices), and the list goes on.
It's around this time that the brokenness of the world really starts to weigh on me. I don't know that I could practice this type of law if I were not cognizant of a just and righteous God who one day will bring justice and reconciliation to the earth in all spheres—the justice sphere, the family sphere, and the economic sphere, to start. Because on this earth, in this court, in this job market, in this client's family—injustice and brokenness in all spheres are converging and reigning supreme, and we may or may not see justice in court today. Or ever on this earth.
Though it would be unjust for us to lose, I frankly won't be surprised if we do. This expectation makes me not a cynic, but a practitioner who sees too much injustice on earth. What would we do without the knowledge that one day Jesus will renew this earth and restore it to its full glory and potential? He will bring justice to the earth. He will restore individuals. He will heal families. He will show children grace, tenderness, and mercy.
Participating in this system, observing my fellow litigants (almost none of whom has an advocate because low-income people have basically no access to free counsel in civil matters, no matter how crucial the need), I despair for our current condition. But I revel in the knowledge that Jesus knows and represents his people. He bears the brunt of the judge for us, his clients. He presents the law to the ultimate judge because he sits at the right hand of the Father—what a gift he gave us through his resurrection! And in the end, we will be restored to our full humanity.
Without this knowledge, advocating for justice among the brokenness of the world would be almost too much to bear. It is the knowledge of Jesus and his advocacy for us—and the fact that he has given me the tools and the opportunity to do on earth what he does in heaven—that provides both enduring hope and the strength to continue advocating for justice in a broken world.
C.J. Masimore is the executive director of Open Hands Legal Services in New York City. She was named as one of the New York Law Journal's 2014 Rising Stars and awarded the Christian Legal Society John D. Robb Christian Legal Aid Award in 2013. She received her JD from University of Michigan Law School and her BA from Calvin College. She lives with her family in Brooklyn.
\section{Introduction}
\subsection{Qualitative expectations \label{sec:qual}}
In the physics communities dealing with QCD at finite temperature and density, there appears
to be little doubt that the $(T,\mu)$ phase diagram qualitatively looks as in Fig.~\ref{qual} (right).
Given that until recently non-perturbative calculations were impossible for $\mu\neq 0$, and even on the
temperature axis simulations with dynamical fermions have only become feasible in the last few years,
it seems worthwhile to recall the qualitative arguments that lead to this picture.
Such a diagram represents one set of quark masses. As theorists, we also view the quark masses as parameters and wish to understand the phase diagram of the entire parameter space
$\{m_{u,d},m_s,T,\mu\}$, which should aid us in unveiling the physical situation as well.
\begin{figure}
\begin{center}
{\rotatebox{0}{\scalebox{0.6}{\includegraphics{qm99fig1.eps}}}}\hspace*{1cm}
{\rotatebox{0}{\scalebox{0.6}{\includegraphics{qm99fig2.eps}}}}
\end{center}
\caption[]{Qualitative QCD phase diagram for $N_f=2$ according to general expectations. For $N_f=3$
and $m<m_c$, the diagram looks as on the left with the transition being first order all the way, while for $m>m_c$ it looks as on the right. }
\label{qual}
\end{figure}
Starting point of the argument \cite{wi} is the $N_f=2$ theory with degenerate quark masses. At zero density and in the
chiral limit, $\mu,m=0$, the chiral condensate represents a true order parameter distinguishing between separate phases, and the symmetry breaking pattern is $SU(2)_V\times SU(2)_A\rightarrow SU(2)_V$. A local order parameter vanishing everywhere in one phase and being non-zero in another
corresponds to a non-analytical function of the parameters of the theory, thus requiring a true phase
transition and excluding an analytical crossover.
{\it If} the corresponding phase transition is second order, {\it then} chiral symmetry implies
that it should be in the universality class of 3d $O(4)$ spin models, a scenario which has been
very popular among theorists. Note, however, that a first order transition is a logical
possibility as well.
For low temperatures and large densities, a number of model calculations (see e.g.~\cite{hal})
appear to agree on a
first order transition between nuclear matter and quark matter in a colour-superconducting state, which is expected for asymptotically high density.
Fig.~\ref{qual} represents only the simplest
picture, other variants have one or several phases
between the hadronic and the superconducting phase, see \cite{hal,murev} for more details.
The most natural scenario then has the first order line at finite density joining up with the second order line coming from the temperature axis in a tri-critical point.
On the other hand, if the quarks have finite masses, chiral symmetry is explicitly broken and there is no
true order parameter. The chiral condensate still experiences a rapid change of value at the pseudo-critical temperature, but
now it is an analytic crossover. In this case the first order line at finite density
has to terminate in a critical endpoint.
For three degenerate flavours $N_f=3$, $\mu=0$, the chiral limit exhibits a first order phase transition.
First order transitions are stable under small variations of the parameters, and thus the first order regime
extends to small masses $m<m_c$, for which it can actually be measured on the lattice.
With increasing quark mass the transition weakens until it ends in a critical point at $m_c$, and for
$m>m_c$ a smooth crossover is observed. For the $(T,\mu)$ phase diagram this implies
a first order line connecting the transitions on the axes for $m<m_c$, while for $m>m_c$ the first
order line emanating from the $\mu$-axis again has to terminate in a critical endpoint.
The ``standard scenario'' for the physical case with $N_f=2+1$ quarks is as in Fig.~\ref{qual} (right),
with the critical endpoint moving to larger $\mu_c$ with increasing quark masses,
as may be inferred from a continuity argument.
For $N_f=3$ with $m<m_c$ the phase diagram has a first order line connecting both axes.
Upon sending $m_s\rightarrow \infty$ this picture should continuously evolve into the $N_f=2$ diagram
with a critical endpoint, thus implying $d\mu_c/dm_s>0$.
The full phase diagram of the $N_f=2,3$ theories is in the 3d space $\{m,T,\mu\}$, as in Fig.~\ref{schem1}.
In order to map it out by simulations the first step is to identify the critical
surface $T_0(\mu,m)$ separating the high and low temperature regions. Since simulations are always on finite volumes, this surface is only pseudo-critical and represents a smooth crossover. It can be
defined by, e.g., peaks in susceptibilites, cf.~Fig.~\ref{schem1} (left).
This step is typically rather straightforward.
The much more difficult task is to perform a finite size scaling analysis
to identify the order of the transition in the infinite volume limit for the different regions of parameter space. For $N_f=3$, such an analysis yields a critical line separating a first order region from a crossover region on the surface
$T_0(\mu,m)$, Fig.~\ref{schem1} (middle). It is convenient to eliminate the temperature axis from this diagram by projecting onto the
pseudo-critical surface, i.e.~temperature is always implied to be $T_0(\mu,m)$, Fig.~\ref{schem1} (right).
\begin{figure}[t]
{\rotatebox{0}{\scalebox{0.32}{\includegraphics{psi-bar-psi.ps}}}} \hspace*{1.3cm}
{\rotatebox{0}{\scalebox{0.45}{\includegraphics{tc.eps}}}} \put(-53,45){1.O.} \hspace*{1cm}
{\rotatebox{0}{\scalebox{0.45}{\includegraphics{proj.eps}}}}\put(-53,45){1.O.}
\caption[]{Left: Pseudo-critical coupling and temperature defined by the peak of a susceptibility. Middle: Schematic phase diagram for $N_f=3$. Right: Projection onto the critical surface. }
\label{schem1}
\end{figure}
This form of a phase diagram is particularly suitable to display the three flavour theory with non-degenerate masses, $N_f=2+1$, including the special cases $N_f=2,3$, as in Fig.~\ref{schem_2+1}.
Note that because of the difficulties of simulating dynamical fermions, even for $\mu=0$ (left)
we know very little about the critical lines separating the first order from the crossover regions.
Up to now the only published point that has been calculated to some accuracy with standard staggered fermions is the critical point $m_{u,d}=m_s=m_c$
on the $N_f=3$ diagonal \cite{kls,clm,fp2},
which was numerically identified to belong to the universality class of the
3d Ising model \cite{kls}. While the statement about the universality class concerns infrared physics and thus is stable against cut-off effects, the location of the critical point in the bare mass diagram
is very sensitive to renormalisation effects. To date only $N_t=4$ calculations ($a\sim 0.3$ fm)
have been performed, but simulations with improved actions give values for $m_c$ which are about $\sim 1/4$ of the
standard action result \cite{kls}. The critical line for non-degenerate quark masses is being calculated presently, cf.~\cite{fp3} and Section \ref{sec:line}. All available results are consistent with the physical point lying on the crossover side of the boundary. This has also been found in a recent simulation with standard staggered quarks
with a pion to rho mass ratio tuned to its physical value \cite{fk2}.
\begin{figure}[t]
\begin{center}
\vspace*{-6cm}
\leavevmode
{\rotatebox{0}{\scalebox{0.4}{\includegraphics{phase_diagram_trunc.ps}}}}\hspace*{-1.5cm}
{\rotatebox{0}{\scalebox{0.75}{\includegraphics{3dphasediag_3.eps}}}}
\end{center}
\caption[]{Left: Schematic phase diagram for $N_f=2+1$ at $\mu=0$. Temperature is implied to be (pseudo-) critical, $T_0( m_{u,d},m_s)$, everywhere. Right: The same with finite quark density.}
\label{schem_2+1}
\end{figure}
When a finite quark number density is switched on, a $\mu$-axis for the chemical potential has to be
added to the diagram, and the critical line separating the first order region from the crossover
region turns into a critical surface, as indicated in Fig.~\ref{schem_2+1} (right). The standard scenario
with $m_c(\mu)$ being an increasing function of the chemical potential then implies that this
surface bends towards larger quark masses. Consequently, tuning the quark masses to the physical point and switching on a chemical potential, the intersection with the critical surface marks
the critical value $\mu_c$ of the end point, beyond which there is a first order transition.
Thus, a determination of the QCD phase diagram in the full parameter space $\{m_{u,d},m_s,T,\mu\}$ entails mapping out these critical surfaces and understanding how they are joining up in the different limit theories.
\subsection{Lattice QCD at finite temperature and density}
Standard Monte Carlo simulations at finite density are made impossible by
the so-called sign problem of the lattice grand canonical partition function,
\begin{equation}
Z=\int DU D\bar{\psi} D\psi\,{\rm e}^{-S_g[U]-S_f[U,\psi,\bar{\psi}]}=\int DU\,[\det M(\mu)]^{f}{\rm e}^{-S_g[U]}, \quad S_f=\sum_f \bar{\psi}M \psi.
\end{equation}
For $\mu=0$, the relation $\gamma_5M\gamma_5=M^\dag$ guarantees positivity of the fermion determinant, $\det M\geq 0$, in every gauge background.
For the gauge group SU(3), the fermion determinant becomes complex as soon as a non-zero quark
chemical potential $\mu=\mu_B/3$ is switched on. Thus it cannot be interpreted
as a probability distribution, which rules out standard importance sampling.
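The severity of the problem is easy to quantify in a toy model: for a product of independent single-site complex weights, the phase-quenched expectation value of the phase factor decays exponentially with the volume, so the signal is lost on large lattices. A minimal sketch (the Gaussian single-site weight is our illustration, not a lattice QCD measure):

```python
import math

# Toy sign problem: single-site complex weight w(x) = exp(-x^2/2 + i*mu*x).
# The phase-quenched average of the phase factor is exp(-mu^2/2) per site,
# so on V independent sites <e^{i*theta}> = exp(-mu^2 * V / 2), i.e.
# exponentially small in the volume -- the signal drowns in the noise.
mu = 0.8
avg_phase_site = math.exp(-mu ** 2 / 2)
avg_phase = {V: avg_phase_site ** V for V in (10, 100, 1000)}
```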
The sign problem of QCD is still unsolved to date. Successful solutions of fermion sign problems in
a number of spin models by means of cluster algorithms \cite{wiese} unfortunately do not seem to generalize to QCD. However,
significant progress has been made since 2001 with a number of different approaches
that circumvent the sign problem, rather than solving it. These approaches can be put into three categories:
\begin{itemize}
\item Two-parameter reweighting
\item Taylor expansion in $\mu/T$
\item Simulations at imaginary $\mu$, either analytically continued to real $\mu$ or Fourier transformed to the canonical ensemble
\end{itemize}
All these methods have limitations and presently work reliably only for small enough $\mu/T\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 1$.
However, the systematics is different between them, thus allowing for meaningful cross checks (for a comparison of early results, see \cite{lp}).
Another alternative is to study related theories without the sign problem, such as SU(2) QCD and QCD at finite isospin, the latter being close enough to the case of interest for meaningful comparisons.
The fact that all of these approaches agree in the determination of the pseudo-critical temperature $T_0(m_i,\mu)$
is one reason for the recent enthusiasm in this field, and it gives hope that the order of the transition may be settled in the near future as well.
\section{Massless $N_f=2$ at zero density: O(4) or first order?}
Before delving into the discussion of finite density calculations, let us turn to
the $\mu=0$ behaviour of the theory which played an important role in the derivation of the qualitative phase diagram in Section \ref{sec:qual}. It is a longstanding question whether the phase transition in the chiral limit of the two-flavour theory is indeed second order with O(4) universality or first order. On the lattice, O(4) will effectively look like O(2) as long as there are discretisation effects \cite{o2}.
A lot of work has been done over the years, but no definite conclusion has been reached. Among the more recent work, Wilson fermions appear to see O(4) scaling \cite{wil}, while staggered actions are inconsistent with both O(4) and O(2) \cite{s2}. (The staggered strong coupling limit, however, does display O(2) scaling \cite{c2}).
A new attempt to tackle this question by means of a finite size scaling analysis with unprecedented lattice sizes was made in \cite{dig}. The work simulates $L^3\times 4$ lattices with
$L=16-32$, using the standard staggered action and the hybrid Monte Carlo R-algorithm \cite{ralg}. Several quark masses are studied, the smallest being $m/T\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 0.055$.
In a critical region quantities like, e.g., the specific heat or the chiral susceptibility
scale universally as
\begin{eqnarray}
C_V - C_0 &\simeq & L^{\alpha/\nu} f_c \left(\tau L^{1/\nu}, am\, L^{y_h} \right),
\quad \tau=1-T/T_c\nonumber\\
\chi& \simeq &L^{\gamma/\nu} f_\chi \left(\tau L^{1/\nu}, am\,L^{y_h} \right).
\label{scale}
\end{eqnarray}
Here the non-singular part of the specific heat $C_0$ has been subtracted.
The values for the exponent $y_h$ are known with some precision and nearly the same for
O(4) and O(2). The authors of \cite{dig} thus fix $y_h$ to this value, and then choose $L$ and
$m$ for a series of simulations such as to keep $(am \,L^{y_h})$ constant. This reduces the two-parameter scaling problem to depend on one remaining variable only, which can be more easily scanned.
The infinite volume limit in this procedure thus corresponds to the chiral limit and allows one to check whether the data are consistent with the predicted scaling behaviour.
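To illustrate the logic of such a scaling fit, an exponent like $\gamma/\nu$ can be recovered from peak heights $\chi_{\max}(L)\propto L^{\gamma/\nu}$ by a straight-line fit in log-log variables (synthetic data with an exponent put in by hand, merely to show the procedure):

```python
import math

# Synthetic finite-size scaling data chi_max(L) = c * L^(gamma/nu),
# with the exponent then recovered by least squares in log-log variables.
gamma_over_nu = 1.97          # illustrative input value, not a measurement
Ls = [16, 20, 24, 28, 32]
chi = [2.3 * L ** gamma_over_nu for L in Ls]

xs = [math.log(L) for L in Ls]
ys = [math.log(c) for c in chi]
m = len(xs)
xbar = sum(xs) / m
ybar = sum(ys) / m
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))   # fitted gamma/nu
```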
\begin{figure}[t]
\includegraphics*[width=0.45\textwidth]{Cv_max_Run1.eps}\hspace*{1cm}
\includegraphics*[width=0.45\textwidth]{Chi_max_Run1.eps}
\caption[]{Finite volume scaling behaviour of the specific heat and chiral susceptibility. For O(4) resp.~O(2) behaviour, the data should fall on a horizontal line \cite{dig}.}
\label{dig1}
\end{figure}
Fig.~\ref{dig1} shows simulation results from \cite{dig}. Scaling as in Eq.~(\ref{scale}) would imply the data points to fall on a horizontal line, which is clearly not the case. Alternatively, one may keep the other scaling variable $\tau L^{1/\nu}$ fixed and vary the quark mass. Furthermore, in place of O(4) or O(2) exponents, also consistency with first order exponents can be tried. Some results of this attempt are shown in Fig.~\ref{dig2}. The fit to first order scaling is slightly better, but not very convincing
either. Moreover, D'Elia et al.~looked in detail at plaquette distributions in the transition region as well as Monte Carlo histories. No signs of a metastability region commensurate with a first order transition could be observed.
%
\begin{figure}[t]
\includegraphics*[width=0.45\textwidth]{Chi_max_O4_2.eps}\hspace*{1cm}
\includegraphics*[width=0.45\textwidth]{Chi_max_1st_2.eps}
\caption[]{Testing the mass scaling of the chiral susceptibility for O(4) and first order behaviour \cite{dig}.}
\label{dig2}
\end{figure}
Many variations of the analysis are performed in \cite{dig}, with similar outcome. Hence, even
after formidable computational effort the question of O(4) vs.~first order scaling remains open for the moment. There are several possible explanations. The scaling region away from the chiral limit could be exceedingly small, or discretisation effects could play a large role. This is also suggested by the fact that Wilson and staggered fermions appear to scale differently. The question of cut-off effects will be addressed by the authors of \cite{dig}, who announced an investigation at $N_t=6$ as well.
Another possibility is that exceedingly large volumes might be required to distinguish weakly
first order and crossover. An example for such behaviour is two-colour QCD in the strong coupling limit.
Numerical results for this theory require lattice sizes $L>128$ before the correct scaling is observed \cite{cj}.
An important observation is that,
for both scenarios in Fig.~\ref{dig2}, it is the lowest mass data points which spoil the fits.
This indicates possible systematic errors of the Monte Carlo for very low masses. We shall indeed see in Section \ref{sec:line} that in this regime the R-algorithm has strong step size effects for
steps of half the quark mass, as chosen for the lowest mass point in \cite{dig}. These effects change the
apparent order of the phase transition. Thus, for any future investigation an exact algorithm is necessary.
Meanwhile, we should keep an open mind to the possibility of a first order transition in the chiral limit. In this case the phase diagram Fig.~\ref{qual} would be as for $N_f=3$ with a very small critical quark mass $m_c$. There would be a first order line all the way for $m<m_c$, or a first order line with an endpoint, Fig.~\ref{qual} (right), for $m>m_c$.
\section{Finite density phase diagram from two parameter reweighting}
Significant progress enabling finite density simulations was made a few years ago, by a generalisation of the Glasgow method \cite{gla} to reweighting in two parameters \cite{fk0}.
The partition function is rewritten identically as
\begin{equation}
Z=\left \langle \frac{{\rm e}^{-S_g(\beta)}\det(M(\mu))}
{{\rm e}^{-S_g(\beta_0)}\det(M(\mu=0))}\right\rangle_{\mu=0,\beta_0},
\end{equation}
where the ensemble average is now generated at $\mu=0$ and a lattice gauge coupling
$\beta_0$, while a reweighting factor takes us to the values
$\mu,\beta$ of interest.
The original Glasgow method reweighted in $\mu$ only and was suffering from the
overlap problem: while the reweighting formula is exact, its Monte Carlo evaluation is not. The integral
gets approximated by a finite number of the most dominant configurations, which are different for the reweighted and the original ensemble, and this difference grows with $\mu$.
When calculating critical behaviour at some $\mu$, one-parameter reweighting uses a non-critical ensemble at $\mu=0$, thus missing important dynamics.
By contrast, two-parameter reweighting proceeds along the pseudo-critical line of the phase change, thus always working with an ensemble that probes both phases. This approach produced the first finite density phase diagram from the lattice, obtained for light quarks corresponding to $m_\pi\sim 300$ MeV \cite{fk1}. A Lee-Yang zero analysis \cite{lyz} was employed in order to find the change from crossover behaviour at $\mu=0$ to a first order transition for $\mu>\mu_c$.
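The overlap problem can be made concrete in a toy model (a hypothetical Gaussian ensemble, not lattice QCD): reweighting is exact in expectation, but with finite statistics it fails once the simulated and target ensembles barely overlap.

```python
import numpy as np

# Sample at the "simulation point" delta0 = 0, i.e. x ~ N(0, 1), and
# reweight to the target ensemble N(delta, 1), where <x^2> = 1 + delta^2.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 200_000)

def reweighted_x2(x, delta):
    # weight = target density / simulated density (constants cancel)
    w = np.exp(delta * x - 0.5 * delta**2)
    return np.sum(w * x**2) / np.sum(w)

def effective_samples(x, delta):
    w = np.exp(delta * x - 0.5 * delta**2)
    return w.sum()**2 / np.sum(w**2)

print(reweighted_x2(x, 0.5))       # close to 1.25: reweighting works
print(effective_samples(x, 5.0))   # tiny: the overlap problem
```

For small shifts the estimate is accurate, while for large shifts a handful of extreme configurations dominate the weights, just as for the Glasgow method at large $\mu$.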
A later simulation at physical quark masses puts the critical point
at $\mu_B^c\sim 360$ MeV \cite{fk2}, Fig.~\ref{fk} (left). In this work $L^3\times 4$ lattices with $L=6-12$
were used, working with the standard staggered fermion action and using the R-algorithm. Quark masses were tuned to $m_{u,d}/T_0\approx 0.037, m_s/T_0\approx 1$, corresponding to the mass ratios $m_{\pi}/m_{\rho}\approx 0.19, m_{\pi}/m_K\approx 0.27$, which are close to their physical values.
\begin{figure}[t]
\vspace*{-2cm}
{\rotatebox{0}{\scalebox{0.4}{\includegraphics{phase_diag.eps}}}}\hspace*{1cm}
{\rotatebox{0}{\scalebox{0.75}{\includegraphics{fks.eps}}}}
\caption[]{Left: The phase diagram for physical quark masses as predicted by the two parameter reweighting method \cite{fk2}. Right: High density results from the density of states method, indicating a triple point \cite{cs}. }
\label{fk}
\end{figure}
A difficulty in this approach is that the determinant
needs to be evaluated exactly. Because of the sign problem the reweighting factor is exponentially suppressed with volume and chemical potential, thus limiting the applicability to moderate values of those parameters. This point will be discussed in more detail later.
Moreover, since the statistical fluctuations are those of the simulated ensemble instead of the physical one, it remains difficult to obtain reliable error estimates. For a proposed procedure see \cite{fk3}.
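The exponential suppression can be illustrated in a toy model of the phase (an assumption for illustration: a Gaussian phase with variance growing linearly in the volume):

```python
import numpy as np

# If theta ~ N(0, sigma0^2 * V), the reweighting signal is
# <cos theta> = exp(-sigma0^2 * V / 2): exponentially small in V.
rng = np.random.default_rng(2)
sigma0_sq = 0.5

signal = {}
for V in (1, 4, 16):
    theta = rng.normal(0.0, np.sqrt(sigma0_sq * V), 500_000)
    signal[V] = np.mean(np.cos(theta))
    print(V, signal[V], np.exp(-0.5 * sigma0_sq * V))
```

Beating this suppression requires statistics growing exponentially with the volume, which is the practical limit of any reweighting method.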
In ongoing work, two-parameter reweighting is combined with the density of states method \cite{dos}, in order to extend the applicability of reweighting to larger values of $\mu/T$ and thus to lower temperatures.
First interesting results, with indications for a possible triple point, are shown in Fig.~\ref{fk} and presented in more detail in these proceedings \cite{cs}.
\section{Finite density by Taylor expansion}
Another method to gain information about non-zero $\mu$ is to compute the coefficients of a Taylor series expansion of observables in powers of $\mu/T$.
Early attempts have looked at susceptibilities and the response of screening masses to chemical potential \cite{milc,taro,hlp,gg}. More recently it has also been used to gain information on the phase transition and its nature itself \cite{bisw1}-\cite{ggpd}.
This idea exploits the fact that on finite volumes there are no non-analytic transitions, and hence the partition function $Z(m>0,\mu,T)$ is an analytic function of the parameters of the theory.
For small enough $\mu/T$ one may then hope to get away with only a few terms, whose coefficients are calculated at $\mu=0$.
Moreover, CP symmetry of the QCD action translates into a reflection symmetry of the partition function, $Z(\mu)=Z(-\mu)$, such that real physical observables have series expansions in $(\mu/T)^2$. Thus, in particular the pressure density can be expressed as an even power series,
\begin{equation}
p(T,\mu)=-\,\frac{F}{V}
= \left(\frac TV\right)\log Z(T,\mu),\quad
{p\over T^4}=
\sum_{n=0}^\infty c_{2n}(T) \left({\mu\over T}\right)^{2n}.
\label{press}
\end{equation}
Since only even terms appear, the coefficients are equivalent to generalised quark number susceptibilities at $\mu=0$, and hence measurable with standard simulation techniques.
For high enough temperatures $T>T_0$, the scale of the finite temperature problem is set by the
Matsubara mode $\sim \pi T$, and one would expect coefficients of order one for an expansion in
the `natural' parameter $\mu/(\pi T)$ \cite{fp2}. We shall see later that this is borne out by simulation results.
Since all the $\mu$-dependence in the partition function is sitting in the fermion determinant, it is derivatives of the quark matrix that need to be computed,
\begin{equation}
\frac{\partial \ln \det M}{\partial \mu} ={\rm tr} \left( M^{-1} \frac{\partial M}{\partial \mu} \right),\quad
\frac{\partial {\rm tr} M^{-1}}{\partial \mu} =
- {\rm tr} \left( M^{-1} \frac{\partial M}{\partial \mu}
M^{-1} \right), \quad \mbox{etc.},
\end{equation}
which can be iterated for higher orders. These expressions become increasingly complex and methods
to automatize their generation have been devised \cite{ggpd}.
Note that one now is dealing with traces of composite local operators, which greatly facilitates the numerical evaluation in a simulation compared to a computation of the full determinant.
The numerical estimate of these expressions proceeds by the random noise method, with typically
$O(10)-O(100)$ Gaussian noise vectors.
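The noise method itself is easily sketched (with a generic well-conditioned test matrix standing in for the staggered fermion matrix): for Gaussian vectors $z$ with $\langle z z^T\rangle=1$ one has ${\rm tr}\,M^{-1}=\langle z^T M^{-1} z\rangle$.

```python
import numpy as np

# Stochastic estimate of tr(M^{-1}) with Gaussian noise vectors.
rng = np.random.default_rng(3)
n = 50
A = rng.normal(0.0, 0.1, (n, n))
M = 4.0 * np.eye(n) + 0.5 * (A + A.T)   # toy symmetric, well-conditioned

exact = np.trace(np.linalg.inv(M))

n_noise = 400
est = 0.0
for _ in range(n_noise):
    z = rng.normal(0.0, 1.0, n)
    est += z @ np.linalg.solve(M, z)    # one linear solve per vector
est /= n_noise

print(exact, est)   # agree within the stochastic error
```

Each noise vector costs one inversion of $M$ on a single source, which is why $O(10)-O(100)$ vectors are affordable while the full determinant is not.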
If one is interested in phase transitions, finite volume scaling towards the thermodynamic limit has to be considered. True phase transitions will emerge as non-analyticities in the pressure, which is not the case for analytic crossover behaviour.
Given that in the two flavour theory with finite masses and for $\mu=0$ the deconfinement transition is an analytic crossover, one may expand about $\mu=0$ and then look for the emergence of a finite radius of convergence as the volume increases.
The radius of convergence of a power series gives the distance between the expansion point and the nearest singularity, and may be extracted from the high order behaviour of the series. Two possible
definitions are
\begin{equation}
\rho,r = \lim_{n\rightarrow\infty}\rho_n,r_n\qquad \mbox{with}\quad
\rho_n=\left|\frac{c_0}{c_{2n}}\right|^{1/2n},
\qquad
r_n=\left|\frac{c_{2n}}{c_{2n+2}}\right|^{1/2}.
\label{rad}
\end{equation}
General theorems ensure that if the limit exists and asymptotically all coefficients of the series are positive, then there is a singularity on the real axis.
More details as well as previous applications to strong coupling expansions
in various spin models can be found in \cite{series}.
In the series for the pressure such a singularity would correspond to the critical point in the $(\mu,T)$-plane.
The study of finite size scaling of a Taylor series presents a formidable technical task. Since the coefficients are generalised susceptibilities, each of them exhibits non-trivial finite size scaling. The scaling of the individual coefficients, evaluated at $\mu=0$, has to combine to the correct scaling of the finite density pressure given by the sum, thus requiring delicate cancellations between the individual contributions in the large volume limit. Classifications of the behaviour of various generalised susceptibilities are given in \cite{gg2}.
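The estimators of Eq.~(\ref{rad}) can be checked on a toy series with a known singularity (an illustration, not QCD data): $\sum_n x^{2n}/(n+1)=-\ln(1-x^2)/x^2$ has radius of convergence exactly 1.

```python
import numpy as np

# Radius-of-convergence estimators for c_{2n} = 1/(n+1).
def c(n):
    return 1.0 / (n + 1)

def rho(n):   # |c_0 / c_{2n}|^{1/(2n)}
    return abs(c(0) / c(n)) ** (1.0 / (2 * n))

def r(n):     # |c_{2n} / c_{2n+2}|^{1/2}
    return abs(c(n) / c(n + 1)) ** 0.5

for n in (5, 20, 50):
    print(n, rho(n), r(n))   # both approach 1 from above
```

Note how slowly $\rho_n$ converges even in this clean example; with only three or four coefficients available on the lattice, the extrapolation $n\to\infty$ is a delicate step.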
\subsection{Quark number susceptibility to order $\mu^6$ for $N_f=2$}
New results from this approach were reported this year by Gavai and Gupta \cite{ggpd}. They performed simulations on $L^3\times 4$ lattices with $L=8-24$, using the standard staggered action and the R-algorithm. The quark mass was fixed in physical units to $m/T_0=0.1$. The aim of the simulations was to bracket the critical point by computing the Taylor coefficients of the quark number susceptibility up to sixth order (i.e.~8th order for the pressure) for various temperatures in the range $T/T_0=0.75-2.15$, and extrapolate to finite $\mu$.
This was done for different lattice volumes in order to get an estimate of finite volume effects.
The results for the convergence radius Eq.~(\ref{rad}) are shown in Fig.~\ref{ggfig}. A rather strong volume dependence is apparent. While for the smaller $8^3$ lattice the estimators $\rho_n,r_n$ do not seem to converge to a finite radius of convergence, the results on the larger $24^3$ lattice are consistent with settling at a limiting value. The boundary between the two behaviours was observed to occur at $Lm_\pi\approx 5-6$ or $L\approx 16-18$, with larger volumes tending to a smaller radius of convergence. It is reassuring that the numerical values for $\rho_n$ and $r_n$ are consistent with each other.
\begin{figure}[t]
\includegraphics*[width=0.45\textwidth]{ord1.eps}\hspace*{1cm}
\includegraphics*[width=0.45\textwidth]{ord2.eps}
\caption[]{Estimators of the radius of convergence, Eq.~(\ref{rad}), at $T/T_0=0.95$. $\rho_n$ (left) and $r_n$ (right) vs.~order $n$ on $8^3$ (red circles) and $24^3$ (blue squares) \cite{ggpd}.}
\label{ggfig}
\end{figure}
Taking the large volume result at face value and extrapolating to all orders the estimate
for the location of the critical point is $\mu^c_B/T=1.1\pm 0.2$ at $T/T_0=0.95$, or
$\mu_B^c/T_c=1.1\pm 0.2$ \cite{ggpd}.
\subsection{The pressure to order $\mu^6$ for $N_f=2$}
Another investigation of the finite density phase diagram of the two-flavour theory was made by
the Bielefeld-Swansea collaboration, also using the Taylor expansion of the pressure.
This group works with a $16^3\times4$ lattice with p4-improved staggered fermions and a Symanzik-improved Wilson action, simulating with the R-algorithm; the quark mass is set to $m/T_0\approx0.4$. The calculation to order $\mu^4$ was performed in \cite{bisw2}, while new results on $\mu^6$ are presented in \cite{bisw3}. The latter work also contains detailed discussions of two interesting analytic calculations to compare with,
namely the pressure in high temperature perturbation theory \cite{vuo}, which is going to hold at asymptotically high temperatures, as well as the hadron resonance gas model, which gives a rather good description of the pressure in the confined phase \cite{krt}.
In agreement with \cite{ggpd} and qualitative expectations, their detailed results for the coefficients in the pressure series satisfy $c_6\ll c_4 \ll c_2$ for $T>T_0$, i.e.~one would have coefficients of order one for an expansion in $(\mu/\pi T)$. An impression of the convergence of the series can be obtained by looking at the quark number susceptibility calculated to consecutive orders, as shown in Fig.~\ref{susc}.
%
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.45\textwidth]{chiqO4O6vT.eps}
\end{center}
\caption[]{Quark number susceptibility computed through $O(\mu^4)$ (dashed lines) and $O(\mu^6)$ (solid lines) \cite{bisw3}.}
\label{susc}
\end{figure}
For $T\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 1.2 T_0$, the series seems to converge rapidly and the $\mu^6$-result is compatible with the one through order $\mu^4$. Around the transition temperature $T_0$, the $\mu^4$-results show a peak emerging with growing $\mu/T_0$, which in \cite{bisw2} was interpreted as evidence for
a critical point. However, the $\mu^6$ contribution suggests that in this region results do not yet converge, and the structure is hence not a significant feature of the full pressure.
Another analysis of the data in \cite{bisw3} is devoted to a study of the convergence radius. Fig.~\ref{srad} (left) shows the ratio of the Taylor coefficients at consecutive orders. In the relevant region $T\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} T_0$, the data do not seem to settle on a limit value.
More strikingly, the data appear to fall right onto the solid lines
marking the prediction for those ratios from the hadron resonance gas model, on both sides of the transition. The hadron resonance gas model does predict a Hagedorn-like deconfinement transition, but as a smooth crossover rather than a real phase transition. Thus the data do not give any indication of a critical point.
\begin{figure}[t]
\includegraphics*[width=0.45\textwidth]{rc2c4c6.eps}\hspace*{1cm}
\includegraphics*[width=0.45\textwidth]{rnqchimu.eps}
\caption[]{Left: Ratios of the expansion coefficients, Eq.~(\ref{press}). Right: Quark number density divided by the susceptibility. This observable goes to zero at a critical point. Solid lines in both plots represent predictions from the hadron resonance gas $(T<T_0)$ and the Stefan-Boltzmann ideal gas $(T>T_0)$. \cite{bisw3}.}
\label{srad}
\end{figure}
This is corroborated by Fig.~\ref{srad} (right), showing the quark number density normalised on the susceptibility, $n_q/\chi_q=\frac{\partial p}{\partial n_q}$. This quantity is related to the compressibility in the plasma, and should go to zero at a second order phase transition point.
Thus the conclusion in \cite{bisw3} is that there is no evidence for a critical point from these data. This conclusion is not in conflict with the results from \cite{ggpd} discussed in the previous section, since the simulations were done at a much larger quark mass, for which one would expect a critical point to be at larger values of $\mu$. Moreover, a different action was used, making a direct comparison difficult.
However, the conclusion is different from the earlier one by the same group based on $\mu^4$ results \cite{bisw2},
which were interpreted as showing evidence for a critical point.
This highlights the need for a careful examination of as many terms in the series as possible, before results are conclusive. Indeed, previous experience with inferring phase structures from convergence properties of strong coupling series in spin models \cite{series} shows that this is
very much an ``experimental science''. Between 10 and 20 terms are known in some of these expansions, and while for some models stunningly good predictions about non-analytic behaviour are obtained, others still fail even at this high order.
\section{QCD at finite isospin density}
Another way to learn about QCD at finite density is by taking recourse to theories without
a sign problem, which are sufficiently close to the physical situation of interest.
I shall not go into the long list of activities along those lines, but concentrate on
QCD at finite isospin density with chemical potential $\mu_I$ \cite{ss}. For small enough $\mu_I/T$ it can be argued that this theory should agree quantitatively with that at small $\mu/T$, and recent lattice simulations support this picture \cite{ks1,ks2}.
QCD at finite isospin density is obtained
from two-flavour QCD by assigning opposite chemical potentials to the quark flavours,
$\mu_u=-\mu_d=\mu$, leading to $\mu_I\equiv (\mu_u-\mu_d)=2\mu$. This results in cancelling the
phase of the determinant, so that the partition function now contains only its modulus and thus has
real positive measure, which can be simulated without problems,
\begin{equation}
Z=\int DU\,|\det M(\mu)|^{N_f}{\rm e}^{-S_g[U]}.
\end{equation}
A schematic phase diagram of the theory is shown in Fig.~\ref{iso}. On the lower right it features a pion
superfluid phase, due to pion condensation $\langle\pi^-\rangle\neq 0$ when $|\mu_I|>m_\pi$.
Note that in nature one cannot have a system with $\mu_I\neq 0$ and $\mu_B=0$, since the weak interactions do not conserve isospin. The interest in this theory stems from its formal
relation to QCD at finite baryon density. It is also in this formal sense that the concept is generalised to
$N_f=3$.
Indeed one would expect that for $\mu_I$ sufficiently small it should recover the physics at small baryon density, and hence the dashed transition line in Fig.~\ref{iso} should be approximately the same as in the theory with baryon density.
The argument goes as follows \cite{ks1}.
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.45\textwidth]{su3isospin.ps}
\caption[]{Schematic phase diagram for QCD at finite isospin density.}
\label{iso}
\end{center}
\end{figure}
An expectation value evaluated at finite baryon density can be rewritten as an expectation value evaluated at finite isospin density by means of the reweighting formula,
\begin{equation}
\langle O \rangle_\mu={\langle e^{i\theta} O \rangle_{\mu_I=2\mu}
\over \langle e^{i\theta} \rangle_{\mu_I=2\mu}}.
\end{equation}
Now consider probing the deconfinement transition with a gluonic observable $O$, e.g.~ the plaquette
susceptibility showing a peak. As long as $\langle \cos\theta\rangle_{\mu_I}\sim 1$, the observable
$O'=e^{i\theta} O $ will signal the same transition as $O$. But in this regime one may as well neglect
the phase altogether, in which case one probes the transition at finite isospin density.
Based on this argument, one expects the transition lines in the two theories to be close to each other as long as reweighting works, i.e.~for $\mu/T_0\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 1$. (In an expansion about $\theta=0$, the difference should be of the order $\sim \langle \theta^2\rangle$).
This expectation was numerically verified for the pseudo-critical surface $T_0(m,\mu)$ in the $N_f=2,3$
theories. In the two-flavour theory, the Bielefeld-Swansea collaboration performed a Taylor expansion both in baryon and isospin chemical potential, and the resulting pseudo-critical lines were found to quantitatively agree \cite{bisw2}. Similarly, quantitative agreement was found between the pseudo-critical lines determined from finite $\mu_I$ \cite{ks1,ks2} and imaginary chemical potential $\mu_i$
\cite{fp1,fp2}.
\section{Systematics of reweighting and Taylor expansion}
A few years into their existence, there are now several investigations of the systematics of the reweighting and Taylor expansion approaches. Recalling the double reweighting formula
\begin{equation}
\langle O\rangle_{(\beta,\mu)}=
{{\langle O\; e^{{n_{\rm f}\over4}\Delta\ln {\rm det}M}
e^{-\Delta S_g}\rangle_{(\beta_0,0)}}\over
{\langle e^{{n_{\rm f}\over4}\Delta\ln {\rm det}M}
e^{-\Delta S_g} \rangle_{(\beta_0,0)}}}\sim e^{-V\Delta F},
\label{rew}
\end{equation}
one would like to estimate when the exponential suppression of the signal becomes insurmountable.
Splitting the determinant into modulus and phase, $\det M=|\det M| e^{i\theta}$, this should occur when $\langle \cos\theta \rangle\ll1$, or equivalently when the root of the variance of the phase of the determinant grows larger than $\pi/2$,
\begin{equation}
\sigma(\theta) = \sqrt{\langle \theta^2 \rangle - \langle \theta\rangle^2}=
\sqrt{\langle \theta^2 \rangle}>\pi/2.
\label{var}
\end{equation}
In order to quantify this, the Bielefeld-Swansea collaboration evaluated the phase by means of its Taylor expansion \cite{bisw3},
\begin{equation}
\theta^{(n)}=
{{n_{\rm f}\over4}}\mbox{Im}\sum_{j=1}^n{\mu^{2j-1}\over{(2j-1)!}}
{{\partial^{2j-1}\ln\mbox{det}M}\over{\partial\mu^{2j-1}}}.
\end{equation}
Contours of values for the variance are shown in Fig.~\ref{phase}. According to the criterion Eq.~(\ref{var}), the line corresponding to $\sigma=\pi/2$ can be viewed as the boundary for the reliability of reweighting. The region to its lower right is safe while one would not trust results obtained
above and to the left of it. This means the deconfined phase of QCD is rather accessible as expected, while in the transition region one finds the constraint $\mu/T_0\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 1$, in accord with the constraints on other methods.
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.45\textwidth]{phflvT.eps}
\caption[]{Contours of $\sigma(\theta)$ for a fixed volume $16^3$ \cite{bisw3}.}
\label{phase}
\end{center}
\end{figure}
The figure shows contours for one given volume. Reweighting gets exponentially harder with volume, and thus the contour lines move rapidly to the lower right as the volume is increased.
In another paper \cite{shinji} Ejiri discusses the difficulties of a combined application of reweighting and an analysis of Lee-Yang zeros (LYZ) \cite{lyz}.
The latter exploits the fact that on a finite volume there are no singularities in the pressure, and hence no zeros of the partition function for real couplings $\beta$. However, there are zeros for complex couplings, whose real part indicates the location of an analytic crossover, i.e.~the pseudo-critical temperature $T_0$. In the infinite volume limit, these zeros move to the real axis if there are true
phase transitions, while they stay at complex values for crossovers.
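A toy two-phase model shows the mechanism (a textbook illustration under simple assumptions, not QCD): near a first-order point write $Z(\beta)\propto {\rm e}^{+Vat}+{\rm e}^{-Vat}$ with $t=\beta-\beta_c$, so the zeros sit at imaginary $t$ and approach the real axis like $1/V$.

```python
import numpy as np

# Z = 2 cosh(V a t) vanishes at t = i pi (2k+1) / (2 V a).
a = 1.0

def Z(V, t):
    return 2.0 * np.cosh(V * a * t)

def first_zero_im(V):
    return np.pi / (2.0 * V * a)

for V in (10, 20, 40):
    t0 = 1j * first_zero_im(V)
    print(V, first_zero_im(V), abs(Z(V, t0)))   # zero moves towards real axis
```

For a crossover, by contrast, the imaginary parts stay bounded away from zero as $V\to\infty$.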
A LYZ analysis for reweighted finite $\mu$ then amounts to numerically searching for zeros in the expression
\begin{equation}
Z_{norm}(\beta_{\rm Re}, \beta_{\rm Im}, \mu)
= \left|
\left\langle e^{6i\beta_{\rm Im} N_{\rm site} \Delta P}
e^{i \theta}
\left| e^{(N_{\rm f}/4) (\ln \det M(\mu) - \ln \det M(0))} \right|
\right\rangle_{(\beta_{\rm Re}, 0, 0)} \right|,
\label{shin}
\end{equation}
where one additionally reweights into the complex coupling plane.
Ejiri argues in \cite{shinji} that this combined procedure does not have an infinite volume limit.
Taking $V\rightarrow \infty$ at finite statistics, the above expression for the partition function will always go to zero because of the sign problem,
and hence always signal a phase transition, even where there is a crossover.
This point of principle is not surprising. Indeed the same mechanism precludes a numerical infinite volume limit of {\it any} observable computed via reweighting. However, the question for practical simulations is whether for a given volume enough statistics can be gathered to beat the sign problem, and whether the volume is large enough to reproduce infinite volume physics with sufficient accuracy. Eq.~(\ref{shin}) illustrates the difficulty of this procedure: the LYZ get masked
by the noise from the reweighting factor, and one has to guard against mistaking a disappearing signal
for a Lee-Yang zero. The problem boils down to being able to give reliable errors for the reweighting procedure, which are needed for a qualified judgement on whether statistics is sufficient or not.
In an interesting qualitative investigation of systematics, Splittorff makes use of the finite density formulation \cite{kim}. He suggests turning the reweighting argument for the approximate equality of finite isospin and baryon density around, in order to determine the limit of applicability of reweighting.
For this purpose a matrix model prediction \cite{mat}
for the transition line to the pion liquid is combined
with the contour lines for the variance of the phase of the determinant, Fig.~\ref{phase}, as shown in Fig.~\ref{spli} (left).
\begin{figure}[t]
\includegraphics*[width=0.45\textwidth]{kim1.eps}\hspace*{1cm}
\includegraphics*[width=0.45\textwidth]{kim3.eps}
\caption[]{Left: rescaled version of the contours of $\sigma(\theta)$, Fig.~\ref{phase}. The values of the contours increase towards the lower right and approach the transition line to the pion condensate at finite isospin density. Right: the critical endpoints for two sets of quark masses by the reweighting method \cite{fk1,fk2}. From \cite{kim}.}
\label{spli}
\end{figure}
The value of $\sigma(\theta)$ rises towards the lower right, and one observes that contours become denser with larger values, approaching the transition line to the pion liquid. This is to be expected on the grounds that a non-trivial phase of the determinant wipes out the pion condensate, which is not present at finite baryon density. Hence Splittorff suggests interpreting the
transition line to the pion liquid in the theory at finite isospin density as a ``cut-off'' for the applicability of reweighting: continued reweighting to the right of that separating line would mean one has a serious overlap problem, since the presence of the phase makes a physical difference there.
That this is a matter worth exploring in more detail is shown in Fig.~\ref{spli} (right), which displays the critical endpoints from reweighting for two different quark mass sets \cite{fk1,fk2}, both of which fall in the neighbourhood of this boundary.
A similar argument may be applied to results from the Taylor expansion. For larger volumes, the sign problem becomes more severe. In the Taylor expansion, whose coefficients are evaluated at $\mu=0$, this shows up in two ways: firstly, more terms are needed to describe the sharpening divergence in its build-up; secondly, ever more precise cancellations between the different terms are required to combine to the correct volume scaling behaviour of the sum. But the severity of the sign problem also puts a limit on $\mu/T$ for fixed volume, as we have seen already. Checking in Fig.~\ref{susc} at which values of $\mu/T_0$ the
sixth order contribution to the susceptibility becomes important for a given temperature, Splittorff concludes that the 4th order expansion only works to the left of the leftmost contour line in Fig.~\ref{spli} (left).
Approaches based on imaginary $\mu$ never face the sign problem. However, as discussed in the next section, analytic continuation to real $\mu$ necessitates a Taylor expansion too, and one would expect a similar limitation.
Of course, these estimates are not yet quantitative, as the finite isospin transition line is determined from a model and not known with any accuracy, but they point out interesting directions to pursue.
\section{QCD at imaginary $\mu$}
Since the QCD fermion determinant with imaginary $\mu=i\mu_i$ is real positive, it can be simulated just as for $\mu=0$.
It is then natural to ask whether such simulations can be exploited to learn something about physics at real $\mu$. The strategy to get back to real $\mu$ is to fit the Monte Carlo results, which are free of approximations, to a Taylor series in $\mu/T$. In case of apparent convergence it is then easy to analytically continue the power series to real $\mu$.
This idea was first used for observables like the chiral condensate and screening masses in the deconfined phase \cite{lom,hlp}. It was then shown to be applicable to the phase transition itself \cite{fp1}, which has recently been exploited in a growing number of works \cite{fp2}-\cite{cl}.
The partition function,
\begin{equation}
Z(V,\mu,T)=\Tr \left({\rm e}^{-(\hat{H}-\mu \hat{Q})/T}\right),
\end{equation}
is periodic in the imaginary direction, and the period can be shown to be $2\pi/N_c$ for $N_c$ colours \cite{rw}. Hence, in addition to being even in $\mu$, the QCD partition function has the additional exact symmetry $Z(\mu_r/T,\mu_i/T)=Z(\mu_r/T,\mu_i/T+2\pi/3)$.
Because of the fermionic boundary conditions in the Euclidean time direction, this symmetry implies that a shift in $\mu_i$ by certain critical values is equivalent to a transformation by the $Z(3)$ centre of the gauge group. Thus, there are $Z(3)$ transitions between neighbouring centre sectors for all $(\mu_i/T)_c=\frac{2\pi}{3} \left(n+\frac{1}{2}\right), n=0,\pm1,\pm2,...$. It has been numerically verified that these transitions
are first order for high temperatures and a smooth crossover for low temperatures \cite{fp1,el1}.
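The origin of the periodicity can be sketched as follows (a standard argument, reproduced here for orientation): a global centre transformation of the temporal links on a single time slice leaves the gauge action invariant, while in the fermion determinant it can be absorbed into the temporal boundary condition, where it acts exactly like a shift of the imaginary chemical potential,

```latex
\begin{equation*}
U_0(\vec{x},t_0)\;\to\; z\,U_0(\vec{x},t_0),\quad z={\rm e}^{i2\pi n/3}\in Z(3)
\qquad\Longleftrightarrow\qquad
\frac{\mu_i}{T}\;\to\;\frac{\mu_i}{T}+\frac{2\pi n}{3},
\end{equation*}
```

so that $Z(\mu_i/T)=Z(\mu_i/T+2\pi/3)$ follows.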
As a consequence, the schematic $(T,\mu_i)$ phase diagram looks as in Fig.~\ref{ischem}. The vertical line coming from the top denotes the $Z(3)$ transition, while the deconfinement transition line now bends upwards as a function of $\mu_i$. The order of the transition and the existence of an endpoint depends again on the number of flavours and the quark masses. Because of the symmetry of the partition function this picture is then periodically repeated for larger values of $\mu_i$.
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.45\textwidth]{schem.eps}
\includegraphics*[width=0.45\textwidth]{tc_nf_fk.eps}
\caption[]{Left: Schematic phase diagram for QCD at imaginary chemical potential. Right: $N_f=3$ results after continuation \cite{fp2} and compared to $N_f=2+1$ \cite{fk1}. The light quark masses are the same.}
\label{ischem}
\end{center}
\end{figure}
The idea then is to simulate with imaginary chemical potential, and fit the full simulation results by
a power series of order $N$,
\begin{equation}
\langle O \rangle = \sum_{n=0}^{N} c_n\left(\frac{\mu_i}{T}\right)^{2n}.
\end{equation}
Since the Monte Carlo results contain no approximation or truncation, convergence can be inspected
by the quality of the fits to the data. In the case of satisfactory convergence analytic continuation
$ \mu_i \longrightarrow i\mu_i$ is a trivial matter. It was shown that this strategy can be extended to the pseudo-critical line itself, which on finite volumes is a smooth function with an even Taylor expansion \cite{fp1}. Detailed comparisons give quantitative agreement for $T_0(m,\mu)$ computed from imaginary $\mu$ and other methods \cite{fp2,el1}, cf.~Fig.~\ref{ischem}.
There are also first results for $N_f=4$ with Wilson fermions \cite{cl}.
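The fit-and-continue strategy is easily illustrated (a sketch with a hypothetical observable and invented noise, not actual lattice data):

```python
import numpy as np

# "Data" at imaginary mu, i.e. mu^2 < 0, for a hypothetical observable
# O(mu^2) = 1 + 0.5 mu^2 + 0.1 mu^4 (all in units of T), fitted by a
# polynomial in mu^2 and then evaluated at real mu (mu^2 > 0).
rng = np.random.default_rng(4)

mu2_im = -np.linspace(0.05, 1.0, 12)            # imaginary mu: mu^2 < 0
obs = 1.0 + 0.5 * mu2_im + 0.1 * mu2_im**2
obs = obs + rng.normal(0.0, 1e-3, mu2_im.size)  # small "statistical" noise

fit = np.polyfit(mu2_im, obs, 2)                # [c4, c2, c0]
O_real = np.polyval(fit, 1.0)                   # continue to mu^2 = +1

print(fit[::-1], O_real)   # coefficients near (1.0, 0.5, 0.1), O_real near 1.6
```

The convergence check described in the text corresponds to verifying that adding higher powers of $\mu^2$ to the fit does not change the continued result.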
This procedure may be expected to converge as long as the value of $\mu_i$ does not exceed the critical value of the first $Z(3)$ transition, $|\mu|/T\leq\pi/3$, or $\mu_B\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 550{\rm MeV}$ in physical units. This constraint is due to two limitations. Firstly, in the infinite volume limit the location of the $Z(3)$ transitions would bound the radius of convergence of the Taylor series. And secondly even on finite volumes, where there are no non-analyticities, no new information is obtained by going to larger $\mu_i$ because of the periodicity.
Nevertheless, interesting arguments are being made that one might well extend the continued results along the real $\mu$-axis beyond this radius with the help of, e.g.,
Pad\'e approximants \cite{mar}.
Working at imaginary $\mu$ has a couple of technical advantages. It is computationally simple and much cheaper than reweighting or computing coefficients of the Taylor expansion. Moreover, both parameters $\beta,\mu$ are varied and thus one obtains information from statistically independent ensembles. It also offers some control on the systematics by allowing a judgement on the convergence of the fits.
Furthermore, it is a good testing ground for effective QCD models: analytic results can always be continued to imaginary $\mu$ and be compared with the numerics there, as demonstrated for several examples in \cite{el2}.
The main limitation presently is the radius of convergence in the large volume limit, $\mu/T\sim 1$.
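To make the fit-and-continue step concrete, here is a minimal Python sketch. The data points and coefficients are invented for illustration, only the leading $(a\mu_i)^2$ term of the Taylor expansion is kept, and the continuation $\mu_i\rightarrow i\mu$ simply amounts to $(a\mu_i)^2\rightarrow -(a\mu)^2$ in the fit:

```python
import numpy as np

def fit_and_continue(amu_i, beta_c, amu_real):
    """Fit beta_c(mu_i) = b0 + b1*(a mu_i)^2 to imaginary-mu data and
    analytically continue mu_i -> i*mu, i.e. (a mu_i)^2 -> -(a mu)^2."""
    b1, b0 = np.polyfit(amu_i**2, beta_c, 1)   # linear fit in (a mu_i)^2
    return b0 - b1 * amu_real**2               # continued pseudo-critical coupling

# Hypothetical data generated from a known quadratic (not actual lattice data);
# the mu_i values stay below the Z(3) limit |mu_i|/T = pi/3:
amu_i = np.linspace(0.0, 0.24, 7)
beta_c = 5.69 + 1.2 * amu_i**2
print(fit_and_continue(amu_i, beta_c, 0.2))    # -> 5.69 - 1.2*0.04 = 5.642
```

On exact quadratic input the fit is trivially perfect; for real Monte Carlo data the quality of such fits is what controls the continuation.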
\subsection{A generalised imaginary $\mu$ approach}
The method of simulating imaginary $\mu$ and analytically continuing can be generalised in an interesting way, as suggested by Azcoiti et al.~\cite{az1}.
The idea is to rewrite the standard expression for the staggered fermion action at finite density \cite{hk}
by replacing the chemical potential with
two new parameters $x,y$,
\begin{eqnarray}
&& \frac{1}{2}\sum_{n} \bar\psi_n \eta_0 (n)
\left( e^{\mu a} U_{n,0}\psi_{n+0}
- e^{-\mu a} U^\dagger_{n-0,0}\psi_{n-0}\right)\nonumber\\
\rightarrow & &
x \frac{1}{2} \sum_{n}\bar\psi_n \eta_0 (n)\left( U_{n,0}
\psi_{n+0} - U^\dagger_{n-0,0}\psi_{n-0}\right)
+ y \frac{1}{2} \sum_{n} \bar\psi_n \eta_0 (n)\left( U_{n,0}
\psi_{n+0} + U^\dagger_{n-0,0}\psi_{n-0}\right),
\end{eqnarray}
where $x =\cosh(a\mu), y =\sinh(a\mu)$. This means the action has been enlarged by an extra parameter. The ordinary finite density action is recovered by the constraint $x^2-y^2=1$.
Thus, if the solid line in Fig.~\ref{aschem} denotes a phase transition line in the $x,y$ plane of the
enlarged theory, its intersections with the dotted line representing the constraint correspond to physical transition points. The enlarged theory still has the sign problem, but one can simulate at imaginary $y=i\bar{y}$. The potential of this method to improve over the simple imaginary $\mu$ approach is that
there are now different parameter sets $x,\bar{y}$ to be simulated, so one might hope to be able to extrapolate in a controlled way to reach larger values of $\mu/T$ and thus probe the phase diagram at lower temperatures.
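A minimal sketch of how such a continuation can be combined with the physical constraint $x^2-y^2=1$, i.e.~$y=\sinh(a\mu)$. All numbers are invented and only the leading $\bar{y}^2$ term is kept:

```python
import numpy as np

def continue_to_physical(ybar, beta_c, amu):
    """Fit beta_c(ybar) = b0 + b1*ybar^2 at imaginary y = i*ybar, continue
    ybar^2 -> -y^2, and evaluate on the constraint y = sinh(a mu)."""
    b1, b0 = np.polyfit(ybar**2, beta_c, 1)
    y_phys = np.sinh(amu)              # physical point on x^2 - y^2 = 1
    return b0 - b1 * y_phys**2

ybar = np.linspace(0.0, 0.3, 7)        # imaginary-y simulation points
beta_c = 5.04 + 0.8 * ybar**2          # invented fit input
print(continue_to_physical(ybar, beta_c, 0.1))
```

The potential gain over the plain imaginary-$\mu$ method would come from varying $x$ and $\bar{y}$ independently, which this one-parameter sketch does not exploit.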
\begin{figure}[t]
\begin{center}
{\rotatebox{270}{\scalebox{0.25}{\includegraphics{phd_xy.ps}}}}
\caption[]{Schematic phase diagram of the enlarged theory in the $(x,y)$-plane; the dotted line represents the physical constraint $x^2-y^2=1$.}
\label{aschem}
\end{center}
\end{figure}
Numerical results from this method for the four-flavour theory have been presented in \cite{az2}. Simulations were performed on $8^3\times 4$ lattices with standard staggered fermions and the R-algorithm. Fig.~\ref{azres} shows results for the pseudo-critical coupling obtained at imaginary $y$,
fitted to the form $\beta_c(\bar{y})=\beta_0+\beta_1\bar{y}^2$ and continued to real $y$. The vertical dotted line gives the intersection with physical QCD. Clearly, the method works as well as
the ordinary imaginary $\mu$ approach. However, its full potential of simulating different sets of $x,y$ for a fixed $\mu/T$ has not been probed yet. Fig.~\ref{azres} (right) shows an example of a result that was extrapolated beyond the first $Z(3)$-transition, which shows up as a kink in the data. This is done on the grounds that no such kink is present in the real direction. Nevertheless, even for imaginary $y$, beyond the kink the curve is not constrained by any additional data points, and hence convergence of the extrapolation is not guaranteed.
\begin{figure}[t]
\begin{center}
{\rotatebox{90}{\scalebox{0.55}{\includegraphics{az1.ps}}}}\hspace*{0.5cm}
{\rotatebox{90}{\scalebox{0.55}{\includegraphics{az2.ps}}}}
\caption[]{\label{azres} The critical coupling in the $N_f=4$ theory as a function of imaginary $y$. The leading order quadratic fits are continued to real $y$, the intersection with the vertical line gives the physical value \cite{az2}.}
\end{center}
\end{figure}
New and as yet unpublished results for the two-flavour theory with indications of a critical point are presented in these proceedings \cite{lal}.
An interesting open question is whether the method can indeed be used to obtain more control over analytic continuation. Another promising idea by the authors is to use their action for reweighting. The two parameters $x,y$ might be tuned such as to shorten the reweighting distance
to the physical point of interest.
\subsection{Imaginary $\mu$ and Fourier transformation: QCD at fixed baryon number}
Last year, promising attempts at an alternative use of imaginary $\mu$ were made
\cite{fs,al}. This approach makes use of the relation between the grand canonical partition function at imaginary chemical potential and the canonical partition function at fixed baryon number via Fourier transformation \cite{rw,ht}. After earlier
attempts on a Hubbard model \cite{awk}, there are now promising new results on QCD presented in these proceedings \cite{fs1,al1}. The difficulty in this case is to make baryon number large enough so as to reproduce the finite chemical potential calculations in the thermodynamic limit.
Baryon number is fixed by inserting
$\delta(3 B - \int d^3x ~ \bar\psi\gamma_0 \psi)$ into the path integral. The result is the canonical
partition function, which is related to the grand canonical partition function at imaginary $\mu$ via a Fourier transform,
\begin{equation}
Z_{C}(B) = \frac{1}{2\pi}
\int_{-\pi}^{\pi} d \left( \frac{ \mu_i }{ T }\right) \; e^{-i 3 B \frac{ \mu_i
}{ T }} Z_{ GC }(\mu = i \mu_i).
\end{equation}
The idea followed in \cite{fs} is to sample $Z_{GC}(\mu = i \mu_{MC})$ by Monte Carlo, and then compute
\begin{equation}
\frac{Z_{C}(B)}{Z_{GC}(i \mu_{MC})} =
\left \langle \frac{1}{\det(i \mu_{MC})} \int d\mu_i
\exp\left(-i 3 B \frac{\mu_i}{T}\right) \det(i \mu_i) \right \rangle,
\end{equation}
i.e.~the fermion determinant gets Fourier transformed, and so it has to be calculated exactly.
This is costly, but the benefit is that now no analytic continuation or Taylor expansion is needed.
The sign problem of course resurfaces here as well, making $Z_C(B)$ noisy. But the strength of this effect is governed by baryon number, and not by volume directly. In the thermodynamic limit one would have to send baryon number to infinity in order to have fixed baryon number density.
Thus, the larger the volume, the smaller the accessible baryon number density for a simulation.
Nevertheless, fixing a small baryon number makes sense in order to study e.g.~nuclear few body systems, and as the following results show, with sufficient computer power one can reach reasonably large baryon numbers to make contact with the grand canonical formulation.
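The Fourier-transform relation itself is straightforward to discretise. The following sketch uses a toy grand canonical partition function with a known fugacity expansion (invented numbers, not lattice data) and recovers its canonical coefficients:

```python
import numpy as np

def canonical_from_grand_canonical(Z_gc, B):
    """Discretised version of Z_C(B) = (1/2pi) int dtheta exp(-i*3B*theta)
    Z_GC(i*theta*T), with theta = mu_i/T on a uniform grid over [-pi, pi)."""
    n = len(Z_gc)
    theta = 2 * np.pi * np.arange(n) / n - np.pi
    return np.mean(np.exp(-1j * 3 * B * theta) * Z_gc)

# Toy grand canonical partition function with fugacity expansion
# Z_GC(i*theta) = sum_B z_B exp(i*3B*theta), here z_0 = 1, z_{+-1} = 0.3:
theta = 2 * np.pi * np.arange(64) / 64 - np.pi
Z_gc = 1.0 + 0.6 * np.cos(3 * theta)
print(canonical_from_grand_canonical(Z_gc, 1).real)   # recovers z_1 = 0.3
```

In the real calculation the integrand involves the exactly evaluated fermion determinant, and the sign problem appears as cancellations that grow with $B$.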
Numerical results obtained by de Forcrand and Kratochvila are shown in Fig.~\ref{can}. They were obtained on a $6^3\times4$ lattice with $N_f=4$ standard staggered fermions and hybrid Monte Carlo simulations, the quark mass was $m/T_0\approx 0.2$. The left panel shows the conversion from
baryon number to chemical potential. This is achieved by evaluating the free energy $F(B)$ as function of different fixed baryon numbers, and computing the grand canonical partition function by Laplace transformation. The integral can be evaluated by means of
a saddle point expansion,
\begin{equation}
Z_{GC}(\mu) = \int d\rho \exp\left(-\frac{V}{T}(f(\rho) - \mu \rho)\right),
\end{equation}
yielding the chemical potential as a function of baryon number,
\begin{equation}
\mu \approx f'(\rho) \approx \frac{F(B+1)-F(B)}{3}.
\end{equation}
\begin{figure}[t]
{\rotatebox{-90}{\scalebox{0.3}{\includegraphics{FB1B_B.eps}}}}
{\rotatebox{-90}{\scalebox{0.3}{\includegraphics{pdplus.eps}}}}
\caption[]{Left: Numerical Maxwell construction to relate the canonical and grand canonical ensembles. Right: Comparison of different methods. Agreement is quantitative for $\mu/T\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 1$. From \cite{fs1}.}
\label{can}
\end{figure}
As Fig.~\ref{can} (left) shows, on a $6^3$ lattice it is possible to perform this up to quite respectable baryon numbers. The resulting picture essentially shows the Maxwell construction for changing between canonical and grand canonical ensembles. The S-shaped curves are indicative of a first order phase transition and represent the metastability region. The two envelopes can be ascribed to the hadronic and quark gluon phases and are well fitted by hadron and weakly interacting massless gas models.
Preliminary results from the canonical ensemble obtained with Wilson fermions have been published
in \cite{al} and are presented in these proceedings \cite{al1}.
Fig.~\ref{can} (right) shows the resulting critical line for the four flavour theory in comparison with ones obtained by other methods discussed here. All data are generated with the same action and for the same quark mass and lattice spacing, only the volumes differ between the data sets. For $\mu/T\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 1$, the quantitative agreement is impressive. Only the data from \cite{el2} are somewhat off the others, presumably due to the much larger volume of that data set. Note that for $\mu/T>1.3$ agreement stops and the different
data sets diverge. This is in accord with the previous statements that all methods
discussed have roughly the same range of applicability, but different systematics.
The data continued from imaginary $\mu$ in this region are essentially unconstrained and just extrapolated. For the reweighted data points the expectation value of $\cos\theta$ is quoted in the figure for selected points. For $\mu/T>1.2$ it is completely lost in the noise and hence the data points
are not trustworthy. Note also that the data coming from the canonical ensemble in principle do not
have the restriction $\mu/T\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 1$. The data bend down more strongly as one might expect in this region of the phase diagram. It will be exciting to see whether the curve can be reliably continued beyond $\mu/T\sim 1$. With the density of states method \cite{cs}, Pad\'e approximants \cite{mar} and
the canonical ensemble \cite{fs}, there are at least three attempts at work in this direction.
\section{The critical end point and its quark mass dependence for three flavours}
As has become clear by now, a determination of
the order of the phase transition and the
critical point is much more demanding than the location of the pseudo-critical temperature $T_0(m,\mu)$.
The best starting point for such an enterprise is the $N_f=3$ theory. This is because we know there is a critical point at $\mu=0$. In the phase diagram Fig.~\ref{schem_2+1} (left), the critical quark mass $m_c(\mu=0)$ separating the crossover from the
first order sections along the three flavour diagonal is known to be at a moderately small value accessible to simulations \cite{kls,clm,fp2}. With the quark mass tuned to this value,
the first order transition line in the $(T,\mu^2)$ phase diagram Fig.~\ref{tcschem} (left) reaches all the way to the temperature axis, on which it ends with a critical endpoint. According to the standard scenario discussed in the introduction, if we now increase the quark mass to values $m>m_c(0)$, the whole transition line shifts, with the critical endpoint wandering to the right towards a real $\mu_c\neq 0$, thus tracing out a smooth function $\mu_c(m)$. On the other hand, if we change the quark mass to $m<m_c(0)$, the critical endpoint should wander in the imaginary $\mu$-direction to the left, as shown
in Fig.~\ref{tcschem}.
\begin{figure}[t]
\vspace*{0.3cm}
{\rotatebox{0}{\scalebox{0.3}{\includegraphics{tc_schem.eps}}}} \hspace*{0.5cm}
{\rotatebox{0}{\scalebox{0.3}{\includegraphics{mc_schem.eps}}}}
\caption[]{Left: Schematic phase diagram in the $(T,\mu^2)$-plane for different quark masses. Solid lines are first order, dotted lines crossover, and the thick line represents the endpoints. Right: The critical quark mass as
a function of $\mu^2$. The vertical line on the left marks the first $Z(3)$ transition at imaginary $\mu$. From \cite{fp2}. }
\label{tcschem}
\end{figure}
Inverting the function $\mu_c(m)$, we are interested in how the critical quark
mass separating first order from crossover changes with $\mu$. This function again has a Taylor expansion in even powers of $\mu$, and one expects
\begin{equation}
\frac{m_c(\mu)}{m_c(\mu=0)}=1 + c_1 \left(\frac{\mu}{\pi T}\right)^2+\ldots
\label{c0}
\end{equation}
with coefficients of order one.
In the three-dimensional phase diagram of Fig.~\ref{schem_2+1} (right) this means
we are looking for the curvature of the critical surface in the $N_f=3$ direction at $m_c(0)$. Once this functional dependence is determined, it will return the critical end point $\mu_c$ for a given quark mass.
\subsection{Numerical results for $N_f=3$ from imaginary $\mu$ \label{sec:3f}}
This program was recently carried out in \cite{fp2}, using $8^3\times4$ lattices with the standard staggered action and the R-algorithm. The observable used to find the critical point was the Binder cumulant, making use of the knowledge that the transition is in the universality class of the 3d Ising model. For a critical point in this universality class the value of $B_4$ is accurately known,
\begin{equation}
B_4(m_c,\mu_c)=\frac{\langle(\delta\bar{\psi}\psi)^4\rangle}
{\langle(\delta\bar{\psi}\psi)^2\rangle^2}\rightarrow 1.604,\quad V\rightarrow \infty,
\end{equation}
while $B_4\rightarrow 1(3)$ for a first order transition (crossover). Hence in the infinite volume limit $B_4$ is a non-analytic step function. However, on finite volumes it will pass through the Ising value
smoothly, with a slope increasing with volume. Measurements for several values of imaginary $\mu$ and quark masses are shown in Fig.~\ref{b4} (left). The data can be fitted to a leading order Taylor expansion in both the quark mass and chemical potential about the known critical point at $m_c(\mu=0)$,
\begin{equation}
B_4(am,a\mu)=1.604 + B\left(am-am_c(0) + A(a\mu)^2\right) + \ldots
\label{bfit}
\end{equation}
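To illustrate how $B_4$ separates the transition types, here is a sketch on synthetic samples (not lattice configurations): Gaussian fluctuations mimic a crossover with $B_4\rightarrow 3$, while a symmetric two-peak distribution mimics a first order transition with $B_4\rightarrow 1$.

```python
import numpy as np

def binder_cumulant(samples):
    """B4 = <(dx)^4> / <(dx)^2>^2 for the fluctuation dx of an order
    parameter (here standing in for the per-configuration psibar-psi)."""
    dx = samples - np.mean(samples)
    return np.mean(dx**4) / np.mean(dx**2)**2

rng = np.random.default_rng(0)
print(binder_cumulant(rng.normal(size=200000)))         # close to 3 (crossover)
print(binder_cumulant(np.repeat([-1.0, 1.0], 100000)))  # exactly 1 (first order)
```

On finite volumes the measured value interpolates smoothly between these limits, which is why the crossing of the Ising value 1.604 is used to locate the critical point.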
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.45\textwidth]{B4_8.eps}\hspace*{1cm}
\includegraphics*[width=0.45\textwidth]{scale.eps}
\caption[]{Left: The Binder cumulant calculated for various masses and chemical potentials. The value 1.604 corresponds to a critical point. Right: Finite volume scaling $\sim L^{1/\nu}$ of the fit parameter $B$, Eq.~(\ref{bfit}). $\nu=0.62(3)$ is consistent with the universal Ising exponent $\nu=0.63$ \cite{fp2}.}
\label{b4}
\end{center}
\end{figure}
From the fit parameters one can directly extract the desired coefficient $d(am_c)/d(a\mu)^2$ in lattice units.
For the continuum conversion one needs to take into account that the lattice spacing effectively changes
with $T(\mu)$, and hence $a(\mu)\neq a(0)$. For $c_1$ in Eq.~(\ref{c0}) this yields
\begin{equation}
c_1=\frac{1}{m_c(0)}\frac{dm_c}{d(\mu/\pi T)^2}=\frac{\pi^2 }{N_t^2(am_c)(0)}\frac{d(am_c)}{d(a\mu)^2}
+\frac{1}{T_0}\frac{dT}{d(\mu/\pi T)^2}.
\label{conv}
\end{equation}
Fig.~\ref{b4} (right) shows a finite size scaling analysis of the fit coefficient $B$, and the fitted volume scaling nicely reproduces the exponent predicted by universality even on moderate volumes.
Note that these are extremely difficult calculations, as very long Monte Carlo trajectories of order 80k are required in order to get sufficient tunneling statistics.
In accord with qualitative expectations, $c_1\approx 0.8(4)$ \cite{fp2} is of order one. However, the large statistical error indicates that the coefficient is also consistent with being close to zero. Independent of the accuracy of the result,
an important conclusion is that $m_c$ changes very little when
$\mu$ is switched on, or conversely that $\mu_c$ changes rapidly under small variations of the quark mass.
\subsection{Numerical results for $N_f=3$ at finite isospin density}
In a recent article Kogut and Sinclair report on similar investigations in the theory at finite isospin density \cite{ks2}. They work on $L^3\times4$ lattices with $L=8-16$, using the standard staggered action and the R-algorithm. Their quark mass is chosen as $m\mathop{\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} m_c(0)$, with the aim to see how the critical point moves as a function of $\mu_I$. As an observable they use the Binder cumulant discussed in the previous section.
Results from their simulations are shown in
Fig.~\ref{step}.
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.45\textwidth]{binder_mu0.ps}\hspace*{1cm}
\includegraphics*[width=0.45\textwidth]{binderdt0.ps}
\caption[]{Left: Stepsize dependence of the Binder cumulant. Right: Binder cumulant extrapolated to
zero stepsize. Dependence on isospin is very weak \cite{ks2}.}
\label{step}
\end{center}
\end{figure}
The left panel displays an investigation of step-size effects on the Binder cumulant, which are found to be significant in this quark mass regime. Instead of extrapolating to zero step size, the standard usage of the R-algorithm is to simulate at some reference step size whose error is known to be smaller than typical statistical errors at some reference mass $m$. When going to smaller quark masses, a common practice is to keep the step size at half the bare quark mass. However, in the low quark mass regime of interest here, this procedure clearly breaks down, with even qualitative changes of the results. The figure shows how, at the same quark mass, the transition looks clearly first order for large step sizes, but changes to crossover behaviour once extrapolated to zero stepsize, which is thus mandatory.
The right panel shows the results after such extrapolations have been performed. In agreement with the findings from imaginary $\mu$ in the previous section, the Binder cumulant is practically flat and only very weakly depending on $\mu_I$. All data points are well in the crossover regime. What is surprising is that the weak $\mu_I$-dependence has a tendency for $dB_4/d\mu_I^2\mathop{\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 0$. This would imply that as $\mu_I$ is switched on the transition moves deeper into the crossover regime, instead of approaching a critical point!
\subsection{$N_f=3$ and $N_f=2+1$ with an exact algorithm \label{sec:line}}
In order to check for stepsize effects and clarify the sign of the $\mu$-dependence of $B_4$,
de Forcrand and I have redone the $N_f=3$ calculation of \cite{fp2} reported in Section \ref{sec:3f} with
the exact rational hybrid Monte Carlo (RHMC) algorithm developed by Clark, Kennedy and Sroczynski \cite{rhmc}
(also presented in these proceedings \cite{clark}).
In a first test we compared simulations of the Binder cumulants performed with that algorithm to ones from the R-algorithm extrapolated to zero stepsize. The results are shown in Fig.~\ref{rhmc} (left), and perfect agreement is found.
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.45\textwidth]{nf3dt.eps}\hspace*{1cm}
\includegraphics*[width=0.45\textwidth]{nf3oldnew.eps}
\caption[]{Left: Comparison of the Binder cumulant computed with the RHMC algorithm (leftmost data) and the zero stepsize extrapolation of the R-algorithm. Right: Determination of $m_c(\mu)$ for $\mu=0,0.2$ with the RHMC algorithm. The arrow marks the result from the R-algorithm.}
\label{rhmc}
\end{center}
\end{figure}
The right panel then shows a new determination of the critical quark mass $m_c(\mu)$, both for zero density and an imaginary chemical potential, using the RHMC algorithm. Note that the Binder
cumulant now passes its Ising value at a significantly different mass compared to the results in the literature. We find $am_c(0)\approx 0.026$, which is a shift of about 25\% due to stepsize effects!
On the other hand, switching on a chemical potential has no effect on the Binder cumulant. As a preliminary result we find
$d(am_c)/d(a\mu)^2\approx 0$ within errors, which
is consistent with the findings at finite isospin \cite{ks2} reported in the last section.
We have also mapped out the critical line for non-degenerate quark masses, as shown in Fig.~\ref{crit},
both with the R-algorithm and the RHMC algorithm. In general there is a significant step size effect.
The picture clearly puts the physical point, where Fodor and Katz performed their simulations, on the crossover side of the line. Note that the physical point is very close to the critical line.
This is consistent with the requirement of finely tuned quark masses in order to have a critical point at moderate chemical potentials. (The calculation of $c_1$ in this case is still in progress. Taking our R-algorithm result $c_1$ from the three flavour case, our resulting $\mu_c$ would be consistent with theirs within errors.) If the chiral limit of the two flavour theory turns out to be O(4), there is a tri-critical point
at some quark mass $m_s^{tric}$ on the $m_s$-axis. Our results can be fitted with the corresponding
scaling equation and would then predict $m_s^{tric}/T_0\approx 2.8$.
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.45\textwidth]{m1_m2_c_oldnew.eps}
\caption[]{The critical line separating first order from crossover for $N_f=2+1$. A significant shift is observed when eliminating stepsize effects. The line passes in close vicinity of the physical point (FK).}
\label{crit}
\end{center}
\end{figure}
\subsection{A non-standard scenario for the phase diagram}
Let us assess the consequences of the step size effects on the critical point.
After continuum conversion the new result for $m_c(\mu)$, now free of step size errors, is
\begin{equation}
\frac{m_c(\mu)}{m_c(\mu=0)}=1 - 0.6(2) \left(\frac{\mu}{\pi T}\right)^2.
\end{equation}
Note that the sign of the leading term has changed compared to the previous result!
The reason is that the first term in Eq.~(\ref{conv}) now is consistent with zero, so the negative second term dominates.
This means the critical mass gets smaller when a real $\mu$ is switched on, and hence that the critical surface in the phase diagram leans towards smaller quark masses, Fig.~\ref{nons}, i.e.~the
opposite of the standard scenario Fig.~\ref{schem_2+1} (right).
\begin{figure}[t]
\vspace*{-0.5cm}
\begin{center}
\includegraphics*[width=0.5\textwidth]{3dphasediag_4.eps}\hspace*{1cm}
\includegraphics*[width=0.4\textwidth]{2dphasediagX5.eps}
\caption[]{For $dm_c(\mu)/d\mu^2 <0$, there is no critical point at all; the dotted line on the right is merely a crossover.}
\label{nons}
\end{center}
\end{figure}
In this case, the first order region in a plane of constant $\mu$ is actually shrinking for growing $\mu$. If the physical point is in the crossover region at $\mu=0$, then switching on a chemical potential will not lead to an intersection with the critical surface, and hence there would be no critical point or first order phase transition at all!
Note that this scenario is perfectly consistent with all the universality arguments summarised in Section \ref{sec:qual}. The $(T,\mu)$ phase diagram would then only have the transition line separating the superconducting phase from nuclear matter, as in Fig.~\ref{nons}.
\subsection{Can one expect a critical end point at $\mu_B\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 500$ MeV?}
Clearly, as the discussion of the systematics in the last section revealed, this last result is preliminary as well, an important question being in which direction corrections go as the continuum limit is approached.
Nevertheless, based on the results on the curvature of the critical surface, one may obtain a rough estimate for the conditions required to have a critical endpoint in the phenomenologically interesting region $\mu_B\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 500$ MeV. Irrespective of whether an eventual continuum result for $m_c(\mu)$ will have positive or negative curvature, as long as the coefficient $c_1$ in Eq.~(\ref{c0}) is $\sim O(1)$ (its natural size; all known coefficients in the pressure \cite{bisw3}, screening masses \cite{hlp} and the pseudo-critical temperature \cite{fp2} are of that order), it implies a very strong quark mass dependence of the value of $\mu_c$. For instance, in order to have $\mu_c\sim 120$ MeV as predicted
by Fodor and Katz \cite{fk2} for $N_f=2+1$, the quark mass has to obey the condition
\begin{equation}
1<\frac{m}{m_c(\mu=0)}\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 1.05,
\end{equation}
i.e.~it has to be fine-tuned to be within 5\% of the critical quark mass. Provided the coefficient $c_1$
does not change drastically in the case of $N_f=2+1$, a similar situation will be encountered there as well.
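The fine-tuning condition can be checked with a one-line estimate from Eq.~(\ref{c0}). The inputs $|c_1|=0.8$ and $T\approx 160$ MeV are illustrative assumptions (the sign and precise value of $c_1$, and the relevant temperature scale, are exactly what the simulations are still pinning down):

```python
import numpy as np

def mass_ratio_for_mu_c(mu_c, T, c1=0.8):
    """Quark mass (in units of m_c(0)) for which the critical endpoint sits
    at mu_c, from m_c(mu)/m_c(0) = 1 + c1 (mu/(pi T))^2 with |c1| ~ O(1)."""
    return 1.0 + c1 * (mu_c / (np.pi * T)) ** 2

# mu_c ~ 120 MeV (i.e. mu_B ~ 360 MeV) at an assumed T ~ 160 MeV:
print(mass_ratio_for_mu_c(120.0, 160.0))   # ~1.046: a ~5% tuning of the mass
```

The quadratic dependence on $\mu/(\pi T)$ is what makes $\mu_c$ so sensitive: a few percent change in the quark mass moves the endpoint by hundreds of MeV in $\mu_B$.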
At this point it should become clear that we are still far from a quantitative solution of the problem of the critical endpoint. To achieve a resolution better than $5\%$ in the quark masses would even require distinguishing the up and down quarks. By contrast, we have just discovered a 25\% systematic
error in the critical quark mass due to step size effects, and we have discussed earlier the strong cut-off dependence ($\sim 100\%$) of the critical quark masses in physical units. Thus we should expect formidable shifts in $\mu_c$ on the way to a reliable continuum result.
\subsection{The heavy quark limit: Potts model}
Other interesting projects are concerned with the upper right corner of Fig.~\ref{schem_2+1}, i.e.~the region towards the quenched limit.
Simulations of quenched QCD at finite baryon number have been done in \cite{quenb}.
As the quark mass goes to infinity, quarks can be integrated out and QCD reduces to a gauge theory of Polyakov lines. First simulations of this theory with Wilson valence quarks can be found in \cite{nucu}.
At a second order phase transition, universality allows us to neglect the details of gauge degrees of freedom, so the theory should be in the universality class of the 3d three-state Potts model, which is the 3d Ising model. Hence, studying the three-state Potts model should teach us about the behaviour of QCD in the neighbourhood of the critical line separating the quenched first order region from the crossover region.
For large $\mu$ the sign problem in this theory was actually solved by means of cluster algorithms recently \cite{clust}.
Here I want to discuss an as yet unpublished result on simulations of the three state Potts model as presented in these proceedings \cite{skim}. For small $\mu/T$, the sign problem of this theory is mild enough so that brute force simulations at real $\mu$ are feasible. In the simulations presented in \cite{skim}, the change of the critical heavy quark mass is determined as a function of real as well as imaginary $\mu$, as shown in Fig.~\ref{potts}.
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.45\textwidth]{potts.eps}
\caption[]{The critical heavy quark mass separating first order from crossover as a function of $\mu^2$ \cite{skim}.}
\label{potts}
\end{center}
\end{figure}
Note that $M_c(\mu)$ rises with real chemical potential, i.e.~the first order region in Fig.~\ref{schem_2+1} shrinks as finite baryon density is switched on. This system is thus an example of the non-standard scenario discussed in the previous sections!
Note also that the qualitative behaviour in going from real to imaginary $\mu$ is exactly as predicted in the schematic picture Fig.~\ref{tcschem} \cite{fp2}; analytic continuation thus works for determining this critical line.
\section{Conclusions}
The last couple of years have seen an enormous increase in activities concerned with lattice determinations of the QCD phase diagram in all of its interesting regions and limits.
While definite conclusions cannot yet be drawn, there is much progress in refining the methods and studying the systematics.
The longstanding question of the nature of the phase transition in the two flavour theory in the chiral limit is still open. But large volumes are now available, and simulations on $N_t=6$ lattices will be undertaken soon. In combination with now available exact algorithms these will hopefully settle the issue in the near future.
There are now several groups that are tackling the critical endpoint.
However, these investigations are
extremely difficult and still carried out over a scatter of theories and parameter values. Within the R-algorithm, the critical endpoint from two-parameter reweighting is consistent with the shape of the critical surface determined from imaginary chemical potential. However, the R-algorithm in the regime of physical quark mass has been demonstrated to be afflicted by strong stepsize effects, which change the apparent order of the phase transition. Exact algorithms are now being employed successfully, and this source of error will soon be eliminated.
An important qualitative conclusion is that the critical chemical potential of the endpoint is extremely quark mass sensitive. A critical point $\mu_B^c\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 400$ MeV requires the physical light quark masses to be less than 5\% larger than the critical values at zero density. While it is quite possible that nature has arranged for this, it is clear that under those circumstances a quantitative determination is going to be a formidable task: any
systematic error in the current simulations is going to have enormous effects on the location of the critical point. Recall that all calculations reported here are on coarse lattices with $a\sim 0.3$ fm, and in most works quark masses are only fixed in lattice units. Furthermore, finite volume and stepsize effects have been shown to be as large as several tens of percent.
Under those circumstances it is still conceivable that there is no critical point and phase transition at all. This means working towards producing results in the thermodynamic and continuum limits will be just as exciting as the first qualitative calculations!
\vspace*{0.5cm}
\noindent
{\bf Acknowledgements:}
I am grateful to Philippe de Forcrand for valuable discussions and comments, help with figures, and a continued enjoyable collaboration.
\section{Introduction}
The onset of avalanches in granular materials is a well-documented
area of research motivated by evident applications in the field of
risk prevention. A typical granular experiment to study such
phenomenon consists in the slow inclination of a sand filled box until
the destabilization of the sand
pile~\cite{jaeger89,bretz92,nerone03}. A lot of studies have been
devoted to the angles at which this avalanche initiates and
stops~\cite{nagel92,fischer08}. The critical angle, or avalanche
angle, is then defined as the angle at which a large flow of grains
occurs, with the surface of the heap stabilizing at a smaller angle,
the angle of repose. Usually studies of stability of granular heaps
focus on the succession of avalanches~\cite{jaeger89}, either in
rotating drums or by using a local perturbation of a heap. In those
two cases, the heap has been built by successive avalanches, giving a
specific internal structure to the bulk. The time intervals between
successive events and the sizes of the avalanches are then discussed,
one question being whether or not those distributions are power laws,
as predicted for a critical
phenomenon~\cite{bak87}. In the case of a granular pile without initial inner structure that is progressively inclined in the gravitational field, specific types of rearrangements taking place before the first avalanche have been identified. Experimentally, two kinds of
movement in the heap previous to its failure have been detected:
\emph{small rearrangements}, implying only a few grains, and periodic
\emph{precursors} of large
amplitude~\cite{nerone03,zaitsev08,gibiat09,kiesgen12}. Most of the
studies reporting this phenomenology are based on the observation of
the free surface of the pile, but recent works give clues that these
precursors are bulk phenomena~\cite{zaitsev08,gibiat09}. To our
knowledge, these regular events have never been reproduced numerically
or explained theoretically, although preavalanche instabilities
showing an intermittent evolution have been studied
numerically~\cite{staron02,staron06}.
The periodicity of the precursors is reminiscent of the
\emph{stick-slip} phenomenology, which is usually invoked as an
explanation of the observed regularity. To our knowledge, a
complete interpretation in that framework, including a prediction
of the angle at which the process starts (which is well beyond the
friction angle of the material) or of the selected period, has never
been made. \emph{Stick-slip} behavior emerges when a frictional
system is submitted to an increasing load at a small enough
rate~\cite{duran}. It originates from the difference between the
static friction and the dynamic friction. In the simplest model, a
traction is exerted on a frictional slider by a spring. Elastic energy
is first stored in the spring while the slider is stuck until the
tangential force is large enough to overcome the static friction. The
slider then enters an oscillating response that leads it to stick
again when the velocity vanishes. A model inspired by this principle
has been used to explain the succession of avalanches in a rotating
drum~\cite{duran}. In the same spirit, arrays of sliders connected by
springs (Burridge-Knopoff model)~\cite{carlson94} are used to model
seismic processes. Fault gouges are often modeled in the geology
literature by a granular material sheared between two planes and the
\emph{stick-slip} behavior of such systems has been extensively
studied in an established regular regime of shearing of the granular
layer~\cite{nasuno97,nasuno98}. The existence of periodicity in
seismology is still under discussion and recent models exhibit the
coexistence of quasiperiodicity and criticality~\cite{ramos06}.
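As an aside, the quasi-static spring-slider cycle described above is easy to reproduce numerically. The following sketch is for illustration only: all parameter values are arbitrary and are not taken from any of the cited works. The spring loads elastically while the slider is stuck, a slip starts once the spring force exceeds the static friction, and the slider sticks again when its velocity vanishes.

```python
import numpy as np

def stick_slip(k=1.0, m=1.0, v=0.01, Fs=1.0, Fd=0.8, dt=1e-3, T=200.0):
    """Minimal spring-slider model: the driving point advances at speed v,
    the slider sticks until the spring force exceeds the static friction
    Fs, then slides against the dynamic friction Fd < Fs until its
    velocity vanishes.  Returns the time trace of the spring force."""
    x, xdot = 0.0, 0.0        # slider position and velocity
    sliding = False
    t_arr, F_arr = [], []
    for i in range(int(T / dt)):
        t = i * dt
        F = k * (v * t - x)   # instantaneous spring force
        if not sliding and F > Fs:
            sliding = True    # static friction overcome: slip starts
        if sliding:
            a = (F - Fd) / m  # slider only moves forward in this model
            xdot += a * dt
            x += xdot * dt
            if xdot <= 0.0:   # slider stops: stick again
                xdot, sliding = 0.0, False
        t_arr.append(t)
        F_arr.append(F)
    return np.array(t_arr), np.array(F_arr)
```

In the quasi-static limit each slip drops the force from $F_s$ to $2F_d - F_s$, so the model produces the regular sawtooth loading curve invoked above, with a stress drop of $2(F_s - F_d)$ per event.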
It is striking to observe that a very similar phenomenology of regular
precursors of rupture before the establishment of the regular
\emph{stick-slip} response can be found in the literature in two other
configurations closely linked to the problem of failure in granular
materials: the onset of frictional motion and the mechanical response
of metallic glasses. In the case of frictional motion, regular
precursors are observed before the onset of
sliding~\cite{rubinstein07,rubinstein09,ben-David10,maegawa10}. The
present interpretation relies on the inhomogeneity of the spatial
repartition of the normal and shear stresses at the interface leading
to the possibility of reaching the Coulomb criterion in some localized
areas before the whole system is
destabilized~\cite{schreibert10,tromborg11}. The links with the models
of earthquakes described in the previous paragraph have been
made~\cite{braun09,rubinstein11}. In the second system, the case of
the mechanical response of metallic glasses, the observed
\emph{stick-slip} phenomenology is called \emph{serration}. The
mechanical response of such amorphous glasses is very close to the
response of granular material and displays similar features: creep,
shear-banding and precursors to the rupture, all of which seem to be
generic to amorphous materials. Comparing the loading curves obtained
for metallic glasses (Figure 1(a) in~\cite{klaumunzer11}), granular
materials (Figure 2(b) in~\cite{nguyen11}) and onset of friction
(Figure 1(a) in~\cite{rubinstein07}) a striking similarity is
observed. Each of these three curves displays small regular stress
drops during the otherwise quasi-elastic loading until the failure (or
sliding) of the material. Those drops begin to appear well before the
rupture of the material or the onset of motion of the slider. After
the yield, the system enters a regular stick-slip motion. In the case
of granular materials, it has been shown that the value of the stress
at the first micro-rupture event is linked to the stress at which the
global yield will take place~\cite{nguyen11}: independent of the
packing fraction of the system, the ratio between those two stresses
is constant. Using a method of detection identical to the one that
will be used in this paper, it has been shown that the drops in the
loading curve are linked to the appearance of a shear band in the
system~\cite{amon12}. During the loading of the system, ``spots'' of
localized strong deformation are also observed. As will be shown,
these localized spots tend to appear at a higher rate, and to
cluster, just before a shear band event occurs~\cite{amon12}.
Consequently, a natural question emerges regarding the understanding
of the nature of those precursor events and their potential link to
the small rearrangements. With this aim in mind, in this article, we
study experimentally the response of granular material to a
progressive, quasi-static, inclination. We will focus on the
description of the behavior of the heap before the first
avalanche. For the study of the rearrangements and of the precursors,
we use an original method of measurement of small deformation based on
Diffusive Wave Spectroscopy~\cite{erpelding08,erpelding10}. This
method gives access to the phenomenology of the rearrangements and the
precursors at the side of the sample, and consequently throughout the
depth of the pile, giving hints of what happens in the bulk of the
system.
The structure of the article is as follows: in part \S\ref{sec2} we
will describe the mechanical setup (\S\ref{sec2.1}), the
interferometric method of detection of the rearrangements and
deformations (\S\ref{sec2.2}), and the granular materials and the
experimental protocols of preparation of the samples
(\S\ref{sec2.3}). In the third section we will present the observed
experimental behavior. After an overview of the experimental
observations (\S\ref{sec3.1}), we will characterize the periodic
\emph{precursor} events (\S\ref{sec3.2}) and then the \emph{small
rearrangements} phenomenology (\S\ref{sec3.3}). We also discuss the
influence of several parameters (shape of the grains, humidity and
packing fraction of the sample) on the observed phenomenology
(\S\ref{sec3.4}). The results are discussed in \S\ref{sec4}. In
\S\ref{sec4.1} we compare the observations with the classical
Mohr-Coulomb theory of failure. In \S\ref{sec4.2}, we interpret these
events in the framework of plasticity of granular materials, and
coupling between plastic events.
\section{Experimental set-up}\label{sec2}
\subsection{Mechanical setup}\label{sec2.1}
The mechanical set-up consists of an optical board that can be
inclined continuously by a crankshaft system (see
Fig.~\ref{setup}(a)). The laser source that illuminates the sample,
the optical detection setup and the granular container are all fixed
on the board so that the whole set-up rotates with the granular pile
(see Fig.~\ref{setup}). The translation of the crankshaft is carried
out by a motorised linear translation stage (Micro Controle
UT100-50PP) controlled electronically (Micro Controle ITL09). This
system ensures a low level of vibration at a small rotation rate. For
translations between 25 $\mu$m/s and 100 $\mu$m/s, the tilt rates
range between 0.02$^{\circ}$/s and 0.08$^{\circ}$/s. We will show in
the following that all the experiments can be considered to be done in
a quasi-static limit.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figure1}
\caption{(a) Schematic of the crankshaft system with the tilting
optical board. (b) Schematic of the setup clamped to the tilting
optical table. The laser source is expanded in order to illuminate
the side of the sample. The backscattered light is collected on a
CCD camera. A lens images the side of the sample on the CCD and a
diaphragm controls the size of the speckles on the camera.}
\label{setup}
\end{figure}
The granular sample is contained in a rectangular box of dimensions 20
cm $\times$ 14 cm $\times$ 8 cm, the lateral sides of which are glass
windows. One lateral side is illuminated by a laser (Melles Griot, 15
mW, 633 nm). The beam of the laser is expanded by a lens in order to
illuminate the entire surface of one side of the box. A lens images
that side of the sample on a CCD camera (Prosilica GC 2450,
2448$\times$2050 pixels, cell size 3.45$\times$3.45 $\mu$m), with a
magnification $\gamma =0.09$. Because of the disorder of the granular
media, the backscattered light collected has a speckle pattern in
intensity. A diaphragm allows us to adjust the size of these speckles
on the CCD camera (see next section for the optical method). The
camera is interfaced with a computer and a Labview program ensures the
acquisition of successive images at a frame rate that has been chosen
between 1~fps and 3~fps in the experiments shown here.
\subsection{Optical method}\label{sec2.2}
The measurement of deformation and rearrangements in the granular
material is carried out using an interferometric technique based on
Diffusive Wave Spectroscopy (DWS). The method has been extensively
described in~\cite{erpelding08,erpelding10}. The treatment of raw
experimental images is based on the correlation of backscattered
intensity between two successive images following a multi-speckle
procedure. The link with the corresponding deformations that have
occurred inside the sample is made using a
model~\cite{crassous07,erpelding08}.
The schematic of the optical part of the setup is shown in
Figure~\ref{setup}(b). The beam of a laser source is expanded to
illuminate the granular sample and the side of the sample is imaged on
a camera with a lens. Because of the disordered structure of the
scattering material, light rays inside the media follow random
paths. The backscattered rays explore a typical zone of length $l^*$
in the sample, called the transport mean free path of
light. For glass spheres of diameter $d$, $l^* \simeq 3.3
d$~\cite{crassous07}. As the incident light is coherent, those
backscattered rays interfere and give a speckled structure to the
collected intensity. The size of the coherence areas of the speckle in
the image plane is controlled with a diaphragm. When the sample is
deformed, the scatterers are displaced and the light paths modified so
that the interference pattern changes. These changes can be quantified
by calculating the correlation of intensities between two states.
In a backscattering configuration, a light ray typically
explores a volume of characteristic size $(l^*)^3$ inside the
scattering material. By using domains of size $(\gamma l^*)^2$ in
the images for the ensemble average, we can then obtain a local
evaluation of the correlation function with the optimal spatial
resolution. Correlation maps are thus obtained, giving insights into
the processes that have taken place in the sample during the
loading.
The intensity correlation between two successive images, labelled 1
and 2, is calculated using the relation:
$$ G_I = \frac{\langle I_1 I_2 \rangle - \langle I_1 \rangle \langle
I_2 \rangle}{\sqrt{\langle I_1^2 \rangle - \langle I_1 \rangle^2}
\sqrt{\langle I_2^2 \rangle - \langle I_2 \rangle^2}},$$ where $I_1$
and $I_2$ are the scattered intensities on images 1 and 2. The
electronic noise of the camera and possible stray light are taken into
account with a procedure explained in~\cite{djaoui2005}. The
ensemble averages $\langle \cdot \rangle$ are calculated over square
sub-areas of the images of size 16$\times$16 pixels, which typically
contain about forty coherence areas. From initial images of size
2448$\times$2050 pixels we then obtain maps of 153$\times$128
pixels. With the lens magnification chosen here, the spatial
resolution of the final map is about 608 $\mu$m.
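The multi-speckle correlation described above can be computed directly from two camera frames. The following numpy sketch is a minimal illustration (the correction for electronic noise and stray light of~\cite{djaoui2005} is omitted); it evaluates $G_I$ over square sub-areas of 16$\times$16 pixels:

```python
import numpy as np

def correlation_map(I1, I2, block=16):
    """Local intensity correlation G_I between two speckle images,
    evaluated over square sub-areas of block x block pixels."""
    H = (I1.shape[0] // block) * block   # crop to a whole number of tiles
    W = (I1.shape[1] // block) * block
    tiles = lambda I: I[:H, :W].astype(float).reshape(
        H // block, block, W // block, block)
    t1, t2 = tiles(I1), tiles(I2)
    # ensemble averages over each tile
    m1, m2 = t1.mean(axis=(1, 3)), t2.mean(axis=(1, 3))
    cov = (t1 * t2).mean(axis=(1, 3)) - m1 * m2
    s1 = np.sqrt((t1 ** 2).mean(axis=(1, 3)) - m1 ** 2)
    s2 = np.sqrt((t2 ** 2).mean(axis=(1, 3)) - m2 ** 2)
    return cov / (s1 * s2)
```

Applied to two identical frames this returns $G_I = 1$ everywhere, and for statistically independent speckle patterns it returns values close to zero, as expected.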
The maps of correlations are calculated between successive images,
giving information about the incremental decorrelations that have
taken place in the system for an angle increment. When the
tilt rate was modified, the frame rate of the camera was chosen to
ensure that the angle increment between two successive images stayed
between 0.06$^{\circ}$ and 0.08$^{\circ}$. This allowed
for consistency and ease of comparison between the different
experiments.
In the case of linear elastic samples~\cite{erpelding08}, the
deformation of the material corresponding to the measured
decorrelation can be estimated using a model. The extension of that
model in the case of granular materials has been discussed
in~\cite{erpelding10}. Considering that the deformation between the
two correlated images is affine at the scale of $l^*$ in the material,
that the light is strongly scattered in the sample, and that we
observe the backscattered light, the expected dependence of the
intensity correlation function on the strain tensor is as follows:
\begin{equation}
G_I \simeq \exp (-c\sqrt{f(\mathbf{U})}),\label{GI}
\end{equation}
where $f(\mathbf{U})$ is a quadratic function of the strain tensor
$\mathbf{U}$, (see~\cite{erpelding08,erpelding10} for details of that
function), corresponding to the deformation increment that has taken
place between the two states associated with the images used for the
calculation of the
correlation~\cite{erpelding08,erpelding10}. Following~\cite{erpelding10},
the constant $c$ has been taken as $c=2 \eta \frac{2\pi}{\lambda} l^*
\sqrt{\frac{2}{5}}$, with $\eta \simeq 2$ a constant depending on the
optical setup and $\lambda$ the wavelength of the laser. The typical
deformations probed by this method for values of the transport length
similar to the one of the glass beads used here, $l^* \simeq
990~\mu$m, are between $10^{-6}$ and
$10^{-4}$~\cite{erpelding10,amon12}. For sand, we measured $l^* \simeq
660~\mu m \simeq 2d$. This lower value of $l^*/d$ compared to glass
beads may be understood by noticing that the surface of sand grains
should itself diffuse light. However, since the ratio $l^*/d$ is not
very different for glass beads and sand, we expect the magnitude of
probed deformations to be very similar.
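For orientation, Eq.~(\ref{GI}) can be inverted to estimate the order of magnitude of the probed deformation. The sketch below is illustrative only: it assumes a scalar strain amplitude $U$ with $f(\mathbf{U}) \approx U^2$, which stands in for the full quadratic form of~\cite{erpelding08,erpelding10}, and uses the glass-bead parameters quoted in the text.

```python
import numpy as np

# Optical parameters quoted in the text for glass beads
eta    = 2.0        # setup-dependent constant
lam    = 633e-9     # laser wavelength [m]
l_star = 990e-6     # transport mean free path l* [m]

# c = 2*eta*(2*pi/lambda)*l* * sqrt(2/5), as defined after Eq. (1)
c = 2 * eta * (2 * np.pi / lam) * l_star * np.sqrt(2.0 / 5.0)

def strain_amplitude(G_I):
    """Invert G_I = exp(-c*sqrt(f(U))), assuming for illustration a
    scalar strain amplitude U with f(U) ~ U**2."""
    return np.log(1.0 / G_I) / c
```

With these values, a correlation of $G_I = 0.5$ corresponds to a strain of a few $10^{-5}$, consistent with the quoted sensitivity range of $10^{-6}$ to $10^{-4}$.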
\subsection{Granular samples and preparation}\label{sec2.3}
Experiments have been performed for two different types of grains:
glass beads (Silibeads, Figure~\ref{beads}(a)) and sieved sand
(Sifraco, Figure~\ref{beads}(b)) with a similar range of diameters:
200-400 $\mu$m, as can be seen on the cumulative size distributions
for the two materials in Figure~\ref{beads}. We observe in the insert
pictures in Figure~\ref{beads} that the glass beads have a lot of
defects and that the main difference between the two kinds of grains is
that the sand is faceted whereas the glass beads are rather
spherical. The angle of avalanche for the glass beads is about
$29^{\circ}$ and for sand about $35.5^{\circ}$. The cumulative
distributions show the presence of few particles in the range
50-100~$\mu m$ for the two types of granular material. Since the
dynamics of precursors is in agreement with previously reported
experiments with larger
beads~\cite{nerone03,zaitsev08,gibiat09,kiesgen12}, we may think that
the details of the particle size distribution are not crucial.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figure2}
\caption{Cumulative distribution of the grain diameter for the
glass beads (a) and the sand (b). Pictures of the granular
materials are shown in inserts. The median size of the grains is
about 280 $\mu$m for the glass beads and 330 $\mu$m for the sand.}
\label{beads}
\end{figure}
Three different levels of compaction have been used. For the lowest
compaction samples, a grid (mesh size 4~mm) is placed at the
bottom of the empty box, the granular material is gently poured in the
box, and then the grid is slowly lifted through the sample. More
compacted configurations are obtained from this initial loose
preparation by tapping the filled box in order to increase the density
of the system. Grids of different mesh sizes and shapes have
been tested and do not change the global phenomenology. The less
compacted samples have a packing fraction around $C \simeq 0.58 \pm 0.02$,
the intermediate case (called ``normal'' in the following) has $C
\simeq 0.61 \pm 0.02$, and the highest packing fraction is close to $C
\simeq 0.64$.
Four different relative humidity conditions have been used. Ambient
humidity was between 25 \% and 45 \% (called ``normal'' condition in
the following). We obtain a humidity smaller than 20 \% by leaving the
granular sample in an oven overnight and then letting it cool down in
presence of desiccant. We obtain high humidity by leaving the granular
sample overnight in an enclosure in the presence of either saturated
salt water or pure water. The relative humidity in the enclosure is
controlled by the nature of the salt, and we then obtain samples
prepared at respectively 70 \% and above 80 \% humidity. For
all those values of the relative humidity, the dynamics is unchanged
when modifying the duration of the experiment. So we can neglect
relative humidity variations at the timescale of the experiment.
\section{Experimental results}\label{sec3}
\subsection{Main phenomenology}\label{sec3.1}
A typical movie of the observed behavior in the so-called ``normal''
conditions (packing fraction around 0.61 and relative humidity of 25
\%) is given in Supplemental Material~\cite{movie}. The granular
material used in this movie is glass beads but the phenomenology is
very similar to sand. Each image is a map of the correlation between
two successive images and corresponds to the incremental deformation
during an angle variation of 0.08$^{\circ}$. The color scale is the
following: white corresponds to $G_I=1$ (maximal correlation) and black
to $G_I=0$ (vanishing correlation). The value of the angle of
inclination of the board is indicated in degrees at the bottom-left of
the film. The free surface of the granular pile can clearly be
identified due to the fact that the light that does not come from the
sample is totally decorrelated. The size of the area seen in the film
is 7 cm $\times$ 8.5 cm from the upper middle of the box (see the
schematic in Figure~\ref{avalanche}). In order to evidence the small
rearrangements, the contrast of the maps shown on
Figure~\ref{avalanche} has been enhanced using a threshold value of
$G_I = 0.75$, so that all the correlations smaller than that value
appear in black.
\begin{figure}[htbp]
\centering \includegraphics[width=\columnwidth]{figure3}
\caption{Schematic of the area used in the movie in Supplemental
Material~\cite{movie}. Two examples of correlation maps extracted
from the film, for angles 10$^{\circ}$ and 27$^{\circ}$, are shown
and exhibit typical \emph{rearrangements}. The decorrelation maps
of these images have been thresholded for $G_I$ smaller than 0.75
to allow the rearrangements to be clearly identified. The granular
material is glass beads, the packing fraction around 0.61 and
hygrometry of 25 \%. The convention used for the axes are
indicated on the figure.}
\label{avalanche}
\end{figure}
Two distinct types of phenomenology can be observed. Firstly, spots of
different sizes appear well before any macroscopic event in the
sample. Such events are shown for example on Figure~\ref{avalanche}
for the 10$^{\circ}$ angle. Enhancement of the contrast shows that
such events appear as soon as the inclination process begins. These
spots are seemingly of the same nature as the \emph{rearrangements}
described previously in the literature~\cite{nerone03} and which were
observed at the free surface of the sample. Our observation tends to
show that those rearrangements happen in the whole bulk and are not
mere displacements of grains due to irregularities of the free surface
originating from the preparation of the sample, as has been postulated
in previous studies. The density and the intensity of those events
increase with the angle. The density seems also to depend on the depth
in the sample. Such localised events are reminiscent of the
\emph{spots} observed in a shear granular sample in~\cite{amon12}. The
fact that such spots have been observed at different borders in
several systems limited by different boundary conditions supports the
hypothesis that such localized events happen in the bulk of the
sample. A quantitative estimation of the energy released locally
compared to the overall work done during creep processes~\cite{amon12}
also supports that affirmation. The spots were identified
in~\cite{amon12} as the localized events of deformation introduced by
Argon~\cite{argon79} to describe the plasticity of amorphous
materials.
Secondly, from an angle of about 15$^{\circ}$, large and almost regular
events begin to happen, which appear as successive large decorrelation
zones. These events correspond to the periodic \emph{precursors}
already described in the literature~\cite{nerone03}. Our experiment
makes it evident that these precursors involve a large part of the
bulk. These precursors begin to happen from about 15$^{\circ}$ and
occur nearly regularly, with an angular periodicity between
1$^{\circ}$ and 2$^{\circ}$. Quantitatively, these values correspond
well to the values that have been reported in the literature for other
sizes of grains and
containers~\cite{nerone03,zaitsev08,gibiat09,kiesgen12}. We can
observe that a precursor mobilizes a slice of the sample parallel to
the free surface. Such precursors are in fact micro-rupture events in
the bulk material. We also observe that the micro-rupture occurs at a
depth that increases with each consecutive
event. Between two precursors, local rearrangements are also still
observed. Because of the approximate translational invariance of the
phenomenon along the optical plane, we obtain a spatio-temporal
representation by averaging the correlations at constant depth in each
image along horizontal lines ($x$-axis): $\langle G_I (\theta;\theta +
\Delta \theta ; x ;z)\rangle_x$. This gives a 1D representation of the
dependence of the correlation with depth for each image. By
juxtaposing these 1D lines for successive angles of inclination, we
obtain spatio-temporal graphs such as the one shown on
Figure~\ref{ST}. As the dependence between time and angle is linear in
a good approximation, the horizontal axis can be considered as
representing either time or angle. On such a representation the
periodic precursors are displayed distinctively while the
rearrangements contribute to an average decrease of the correlation,
giving a blurred or shaded aspect to the image where the activity in
terms of small rearrangements is high.
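The averaging procedure that produces the spatio-temporal diagrams can be sketched as follows (illustrative; \texttt{corr\_maps} stands for the stack of correlation maps indexed by frame number, depth $z$ and horizontal position $x$):

```python
import numpy as np

def spatiotemporal(corr_maps):
    """corr_maps: array of shape (n_frames, n_z, n_x) of correlation maps.
    Returns the (n_z, n_frames) diagram <G_I>_x, i.e. the correlation
    averaged along the horizontal x-axis at each depth z."""
    return np.asarray(corr_maps).mean(axis=2).T
```

A precursor, which decorrelates a whole horizontal slice at a given depth in one frame, then appears as a dark spot in the corresponding column of the diagram.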
\begin{figure}[hbtp]
\centering
\includegraphics[width=\columnwidth]{figure4}
\caption{Spatiotemporal diagram obtained from the correlations map
by spatially averaging each horizontal line of the images:
$\langle G_I (\theta;\theta + \Delta \theta ; x ;z)\rangle_x$. The
averaging procedure gives the mean decorrelation as a function of
depth for each image. The granular material is glass beads, the
packing fraction around 0.61 and relative humidity of 25
\%. Correlations are calculated between images acquired at angles
$\theta$ and $\theta+\Delta \theta$, with $\Delta
\theta=0.08^{\circ}$.}
\label{ST}
\end{figure}
It is noticeable in the movie or in Figure~\ref{avalanche} at the
angle of $27^{\circ}$ that before and after a micro-rupture event
occurs, a large amount of activity can be observed at the depth where
the failure will take place. Larger spots of decorrelation are
observed in the zone that will fail compared to those that appear at a
larger depth. The decorrelation events seem to align and cluster at
the place where the rupture will take place, reminiscent of the
phenomenology already observed in a sheared granular
material~\cite{nguyen11,amon12} where regular micro-ruptures were also
observed before the final yield stress of the material is attained.
In the following section we give a more detailed study of this
global phenomenology, which is very robust to changes of parameters
(grain shape, relative humidity, packing fraction). Firstly we will
detail the precursor events, then we will describe the
rearrangements. Finally, we will study the changes of the
phenomenology with hygrometry and packing fraction.
\subsection{Precursors}\label{sec3.2}
\subsubsection{Periodicity}
We have studied experimentally the dependence of the periodicity with
respect to the tilt velocity. Figure~\ref{period} shows the average
period of the precursors in degrees for different rotation rates for
the same conditions of relative humidity and packing fraction
(so-called ``normal'' conditions: packing fraction about 0.61 and
relative humidity about 30 \%). Each point corresponds to the average
period between the precursors during one run. The value of the
period slightly decreases when the tilt rate increases, but overall
the phenomenon seems not to depend on the tilt rate. This means that
the experiments can be considered as having been carried out in the
quasi-static limit. We also see in Figure~\ref{period} that the nature
of the material, glass beads (green (light gray) points) or sand
(black triangles), does not seem to modify significantly the overall
mean value of the period which is a little over 1$^{\circ}$ for both
materials. The error bars give the minimal and maximal values of the
period during each run. That dispersion around the mean value follows
a trend during one experiment: a significant increase of the period,
given by the size of the error bars, has been observed between the
first occurrence of the precursors around 15$^{\circ}$ and the
avalanche angle.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figure5}
\caption{(Color online) Average period of the precursors with the
tilt rate of the sample. The error bars give the extremal values
of the period around the mean value during one experiment. Green
(light gray) points are for glass beads, black triangles for
sand. The horizontal lines give the average value of the period
over all the runs for each type of material. All the experiments
have been done with a similar preparation: packing fraction about
0.61 and relative humidity about 30~\%.}
\label{period}
\end{figure}
To conclude, the periodicity does not depend on the tilt rate in the
quasi-static limit for which the experiments have been done. It is
therefore justifiable to express that periodicity in terms of angles
and not in terms of time intervals. The period has been found to be
about 1$^{\circ}$ for the two types of granular materials. It
has to be noted that similar values of the period have been obtained
for much larger beads (2 to 3 mm), in containers of different sizes
and materials~\cite{nerone03,zaitsev08,gibiat09,kiesgen12}.
\subsubsection{Size of the precursors}
\label{precursors}
An important question that remained unanswered in previous works was
whether the bulk is involved in the rearrangements and precursors
phenomena. As a matter of fact, most of the previous experiments have
been carried out using only free surface
observations~\cite{nerone03,kiesgen12}, so that rearrangements and
precursors could be supposed to be a mere free surface phenomenon. On
the other hand, acoustic measurements~\cite{zaitsev08,gibiat09} gave
rather indirect indications of the bulk mobilization. In our
experiment we are able to visualize directly how deep the phenomena go
inside the sample, even if the observation is confined to a
thin layer near the wall of the container. We show that our
results reinforce the hypothesis that a part of the bulk is indeed
mobilized in a precursor event. The volume that is involved
increases as the system comes closer to the avalanche angle.
\begin{figure}[htbp]
\centering
\includegraphics[width=3in]{figure6}
\caption{(Color online) Depth $z$, in units of grain diameter $d$,
of the successive precursors plotted as a function of the angle of
inclination of the granular sample $\theta$. All the experiments
have been done for ``normal'' compaction ($\sim.61$) and relative
humidity conditions ($\sim30-40$ \%) but different tilt
rates. Main graph: examples of evolution for glass beads (green
(light gray) lines) and sand (black lines) showing the roughly
linear dependence of the depth with the angle. Insert: gathered
results for a total of 25 experiments for glass beads (green
(light gray) points) and 14 experiments for sand (black
triangles). Lines are linear fits of those data: for glass beads,
$z/d=9\times(\theta-14^{\circ})$, for sand: $z/d=4\times
(\theta-16^{\circ})$.}
\label{depth}
\end{figure}
The successive depths of precursors can be obtained from the
spatiotemporal graphs such as the one of Figure~\ref{ST}. The values
of the depth of the peaks, normalized by the diameter of the grains,
as a function of the angle at which they appear are shown on
Figure~\ref{depth}. The main part of the graph shows the evolution of
those depths for different typical experiments. We can observe that
the dependence of the size of the precursors with the inclination
angle is roughly linear, even if in some experiments abrupt ``jumps''
can be observed (results displayed with solid lines). The green (light
gray) lines correspond to experiments with glass beads and the black
lines to the ones with sand. The slope of increase of depth with the
angle is generally larger for the glass beads than for the sand. Also,
the first precursor tends to appear sooner for glass beads than for
sand. The insert of Figure~\ref{depth} shows all the results from 25
experiments for glass beads and 14 for sand superimposed, giving an
idea of the dispersion of the results from one experiment to the
other. All the experiments have been done in the ``normal'' conditions
of relative humidity and packing fraction and for tilt rates between
0.02$^{\circ}$/s and 0.08$^{\circ}$/s. Linear fits using the whole set
of data for each type of granular material have been done and are
shown as solid lines in the insert. The slope of the line is about 9
bead diameters per degree for the glass beads and 4 for the sand. For
the two types of material, the intersection of the linear fit with the
abscissa axis is about 15$^{\circ}$, even if the first precursors
appear later in sand than in glass beads.
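The linear fits quoted above amount to a least-squares regression of the precursor depth on the tilt angle. A sketch of the procedure, run on synthetic, noiseless data mimicking the glass-bead fit $z/d = 9\times(\theta - 14^{\circ})$ (not the actual experimental data):

```python
import numpy as np

def fit_precursor_depth(theta_deg, depth_over_d):
    """Least-squares line z/d = s*(theta - theta0); returns the slope s
    (grain diameters per degree) and the extrapolated onset angle theta0."""
    s, b = np.polyfit(theta_deg, depth_over_d, 1)
    return s, -b / s

# Synthetic, noiseless data mimicking the glass-bead fit z/d = 9*(theta - 14)
theta = np.linspace(15.0, 28.0, 20)
z_over_d = 9.0 * (theta - 14.0)
```

The intercept angle $\theta_0$ returned by the fit is the extrapolated angle at which precursors of vanishing depth would first appear.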
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figure7}
\caption{Detailed sequence describing a micro-slip event for an
experiment with glass beads at ``normal'' hygrometry and packing
fraction condition. The images shown are consecutive for angles
around 22$^{\circ}$. The angle increment between the states used
to calculate each correlation map is $0.06^{\circ}$. The red arrows show the location of the failure plane.}
\label{rupture}
\end{figure}
Thus, precursor events involve a fraction of the material that
increases roughly linearly with the angle of inclination. That
increase is approximatively two times larger for the glass beads than
for the sand. The part of the sample that is mobilized is parallel to
the surface, such that the process seems to imply a rectangular block
of granular material in the upper part of the sample. A closer
observation of the movie in Supplemental Material~\cite{movie} shows
that the surface of separation between the mobilized zone and the
quiescent zone bends toward the bottom of the box, which could be an
effect of the side boundaries. A more detailed description of the
processes happening in the decorrelated area during a precursor event
is difficult given the impossibility of separating the contributions
from deformation, plastic rearrangements and large scale displacement
of the block to the decorrelation. Nevertheless, some hints of the
process can be inferred from a sequence before a micro-rupture, as
shown in Figure~\ref{rupture}. We can see on this figure a ``rupture''
occurring at an angle of $22.32^{\circ}$. Yet the failure
happens earlier, as evidenced by a decorrelated zone mainly along a
line (see arrow in Fig.~\ref{rupture} at $22.26^{\circ}$). This line
is indicative of a zone of shearing between two translating blocks.
As a matter of fact, a small solid translation does not modify the
optical paths relative to each other and the images stay
correlated. Nevertheless, the two blocks are not mechanically
identical: the activity in the upper section is clearly larger than in
the bottom. Finally, the micro-rupture event occurs. It is not
temporally resolved and appears as a large totally decorrelated
area. Please note that the total time of the sequence is $\sim~3$~s,
which is very large compared to the inertial time corresponding to a
displacement of, say, one bead diameter $\sqrt{d/g}~\simeq~5$~ms. So,
inertial effects do not seem to play a role in the development of the
micro-rupture. The phenomenology stays globally the same from one
experiment to the other, either for sand or glass beads: the density
of rearrangements increases in the area where the rupture will take
place, nevertheless a line of decorrelation at some depth of the
sample as the one distinctly seen at 22.26$^{\circ}$ on
Figure~\ref{rupture} is not always clearly seen.
We thus observe that the density of rearrangements and the large
periodic precursor events are not independent. Consequently, in the
following section we will focus on the quantitative study of the
rearrangements in the sample.
\subsection{Analysis of the rearrangements}\label{sec3.3}
\subsubsection{Density of activity in the material}
\label{activity}
To assess the level of activity in the samples during the
inclination process, we first threshold the spatio-temporal images at
different levels of the correlation, giving an insight into the
distribution of the deformation in the depth of the sample during the
inclination. Figure~\ref{front} shows how we proceed. The original
spatiotemporal graph is shown on
Figure~\ref{front}(a). Figure~\ref{front}(b) and (c) are binarized
figures obtained from Figure~\ref{front}(a) for different values of
the threshold. For Figure~\ref{front}(b), all values of correlations
smaller (respectively greater) than 0.15 are black (resp. white). In
the case of Figure~\ref{front}(c), the procedure is the same with a
value of the threshold for the binarization of
0.07. We observe that during the
inclination process the level of deformation evolves as a front in the
sample: the depth at which a fixed level of deformation is reached in
the sample depends linearly on the angle of
inclination. Figure~\ref{front}(d) shows a superposition of the fronts
extracted from the binarized images for five different values of the
thresholds. We observe that the slope of the front remains
approximately constant. The dashed line in Figure~\ref{front}(a) is
obtained by averaging the slopes of the fronts from
Figure~\ref{front}(d). This shows that the growth of the precursor
events is identical to the overall distribution of the deformation in
the sample. Indeed, the average slope over several experiments, all
carried out in ``normal'' conditions of packing fraction and relative
humidity, is $13~d/^{\circ}$ for glass beads and $8~d/^{\circ}$ for
sand. Those slopes are of the same order and in the same ratio as the
values of the variation of the depth of the precursors with angle
($9~d/^{\circ}$ for glass beads and $4~d/^{\circ}$ for sand)
determined from data plotted on Figure~\ref{depth}.
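The binarization-and-front procedure above can be sketched numerically. The snippet below (Python with NumPy) applies it to a synthetic correlation map in which the decorrelation front advances linearly with the angle; all numerical values, including the slope of $10~d/^{\circ}$, are illustrative placeholders, not the experimental ones.

```python
import numpy as np

# Synthetic spatio-temporal correlation map: the correlation rises from ~0
# near the free surface to ~1 at depth, and the mid-level crossing moves
# linearly with the tilt angle (slope s_true, in bead diameters per degree).
s_true = 10.0                              # assumed front slope (illustrative)
angles = np.linspace(2.0, 25.0, 100)       # tilt angles (degrees)
depths = np.linspace(0.0, 300.0, 601)      # depth (bead diameters)
Z, TH = np.meshgrid(depths, angles, indexing="ij")
G = 1.0 / (1.0 + np.exp(-(Z - s_true * TH) / 5.0))

def front(G, depths, threshold):
    """Depth of the deepest decorrelated pixel (G < threshold) per angle."""
    mask = G < threshold                   # binarization step
    # first True in the depth-reversed array = deepest True in the original
    idx = mask.shape[0] - 1 - np.argmax(mask[::-1, :], axis=0)
    return depths[idx]

# extracting the front at several thresholds gives parallel straight lines,
# whose common slope recovers s_true (cf. the superposed fronts of panel (d))
slopes = [np.polyfit(angles, front(G, depths, thr), 1)[0]
          for thr in (0.07, 0.15, 0.30)]
slope = float(np.mean(slopes))
```

For each threshold the extracted front is linear in the angle with the same slope, reproducing the parallel fronts seen in the binarized images.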
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figure8}
\caption{(Color online) (a) Spatio-temporal representation of the
$x$-averaged correlation $G_I$ as a function of the depth and of
the inclination angle. The material is sand in ``normal''
conditions. The dashed line gives the mean slope of the fronts
extracted from the binarized images (see main text). (b) and (c)
Same data binarized with two different thresholds: (b) 0.15 and
(c) 0.07. (d) Fronts extracted from the binarized
spatio-temporal graphic for different values of the threshold,
the gray scale is lighter when the threshold is smaller;
yellow: 0.07, green: 0.11, red: 0.15, blue: 0.19, black: 0.23.}
\label{front}
\end{figure}
In addition to this front which shows a linear relation between angle
and depth at a given value of the deformation, and which is superimposed on the
growth of the precursors at large deformations, many experiments
display another deformation front very close to the surface. Such a
phenomenon is certainly linked to a boundary effect in the vicinity of
the free surface. An example of such behavior is shown on
Figure~\ref{2pentes}. We observe in that figure that apart from the
main front that overlaps the precursors, a thin zone of deformation
breaks the slope of the front near the surface.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figure9}
\caption{(a) Spatio-temporal representation of the $x$-averaged
correlation $G_I$ as a function of the depth and of the
inclination angle. The material is sand in ``normal''
conditions. (b) Binarized image obtained from (a) for a threshold
of 0.12.}
\label{2pentes}
\end{figure}
To conclude, the deformation in the sample appears as a front in a
spatio-temporal representation. That front appears very soon in the
inclination process and is superimposed on the precursor peaks at large
deformation. The linear relation that connects the angle and the depth
at which the processes are observed seems to be of the same nature as
the linear increase of the size of the precursors with the angle of
inclination.
\subsubsection{Spots identification}
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figure10}
\caption{(Color online) Number of spots
$N$ as a function of the angle of inclination $\theta$ at
different depths in the sample. The average depth (in beads
diameter unity $d$) at which the measurement has been made is
indicated near each curve. The counting has been done
integrating events during 1$^{\circ}$ inclination and over a
slice of $\Delta z \simeq 37~d$ in depth.}
\label{fluidity}
\end{figure}
Another way to quantify the activity in the material is to count the
number of \emph{spots}, \emph{i.e.} localized deformation in the
sample, and to look at its evolution during the tilt process. For
this, we threshold each image in a movie, and then we identify those
spots in each image. We then count the number of spots that have
appeared at a certain depth during an angle increment of
1$^{\circ}$. The number of spots $N$ occurring between $z-\Delta z/2$
and $z+\Delta z /2$, with $\Delta z$ the size of the slice, can be
measured for different depths $z$, as a function of the current angle
$\theta$. The result of that image analysis is given in
Figure~\ref{fluidity}, where the number of spots at different depths is
given as a function of the angle of inclination. Each line corresponds
to a different depth, from 40~$d$ under the surface to 185~$d$
deep. For a given depth, the counting is stopped when a precursor
reaching that depth occurs. We observe that at a fixed depth the number
of spots increases with the angle, while at a fixed angle it decreases
with the depth. At all depths, the density of spots increases
strongly when approaching the precursor event at that depth. The final
values of the densities are of the same order at all depths.
A further analysis of those data will be presented in the
discussion section. The first conclusions from the raw data are that
the rearrangements in the sample begin as soon as the inclination
process begins, with a decreasing density in the depth of the
sample. The density of spots increases strongly before the occurrence
of a precursor event at the corresponding depth.
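As a concrete sketch of this counting procedure, the following Python snippet binarizes a toy correlation image, labels the connected decorrelated components with a small flood fill, and bins the resulting spots by depth. The image, the threshold value and the slice size are all invented for the illustration.

```python
import numpy as np

def label_spots(mask):
    """4-connected component labeling by iterative flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if (0 <= a < mask.shape[0] and 0 <= b < mask.shape[1]
                            and mask[a, b] and labels[a, b] == 0):
                        labels[a, b] = current
                        stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return labels, current

# toy correlation image: three decorrelated spots at different depths
img = np.ones((60, 40))
img[5:8, 10:13] = 0.0     # shallow spot
img[25:28, 20:23] = 0.0   # mid-depth spot
img[50:53, 5:8] = 0.0     # deep spot
mask = img < 0.1          # binarization threshold (assumed value)
labels, n_spots = label_spots(mask)

# count spots whose centroid falls in each depth slice of 30 rows
centroid_rows = [np.mean(np.nonzero(labels == k)[0]) for k in range(1, n_spots + 1)]
counts = [sum(1 for r in centroid_rows if lo <= r < lo + 30) for lo in (0, 30)]
```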
\subsection{Influence of parameters}\label{sec3.4}
In this section, we describe the consequences of modifying the
humidity and the packing fraction on the above observations.
\subsubsection{Influence of humidity}
Figure~\ref{humidity} shows, for sand samples with different humidity
preparations, the size of the successive precursors as a function of
the angle of inclination. The experiments have been carried out with
sand prepared in the same manner as previously leading to a packing
fraction around $0.61$ for the ``normal'' hygrometry conditions. The
dispersion of the results is rather large but it can be noticed that
the presence of humidity tends to increase the number of layers of
grains implied in the precursory process. This is in agreement with
the intuitive idea that the presence of humidity increases the cohesion
of the system and consequently that more grains are involved when a
rearrangement or a precursor occurs.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figure11}
\caption{(Color online) (a) Depth of the successive precursors as a
function of angle for sand samples at different humidity
preparations. Higher values of humidity mobilize more grain layers
and the precursors begin at smaller angles. (b) and (c):
Spatio-temporal diagrams of sand specimens prepared at different
humidity levels. (b) Dry case (hygrometry $<$~20~\%); (c) Humid
case (hygrometry $>$~80~\%). The packing fraction preparation of
the samples is the same.}
\label{humidity}
\end{figure}
Other trends can also be noted: precursors appear sooner in a humid
system than in a dry one. In fact, precursors are better defined in
the humid case, as can be seen by comparing the spatio-temporal
diagram of an experiment with a dry sample (Fig.~\ref{humidity}(b))
and a humid sample (Fig.~\ref{humidity}(c)). In the dry case more
rearrangements take place giving a blurred appearance to the
diagram. The period of the precursors is also significantly smaller in
the dry case.
For humid cases, a curious pairing of precursors has been observed
several times, an example of which is shown in Figure~\ref{2T}. An
oscillation between two levels of activity can be observed, showing an
alternation between two different regimes after each precursor. We
also observed in some experiments an alternation of two sizes for
the precursors, which explains for example the appearance in
Figure~\ref{humidity} of very small values of the precursors near the
avalanche in some humid cases: these values alternate with much higher
values.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figure12}
\caption{Spatio-temporal diagram of sand specimen prepared at high
humidity level (above 70 \%): the precursors tend to appear in
pairs, showing a different rearrangement behavior periodically.}
\label{2T}
\end{figure}
\subsubsection{Influence of packing fraction}
When exploring different types of packing fraction preparation, we observed
that in the case of samples prepared in the loosest state, the
phenomenology is the same as in the ``normal'' conditions. In that
case the activity in the system is very high through the depth of the
pile and precursors are still observed. On the contrary, the
phenomenology is modified in the most compacted piles. In that case,
because of the significant increase of the avalanche angle for dense
materials, we were not able to reach the avalanche with our
experimental setup in most of the experiments.
In the compacted experiments, local rearrangements during the
inclination of the system are still observed but the activity in the
system is much lower than in the less dense samples. In
contrast, large micro-failures are not always observed and when they
are, they differ from the precursors events described
previously. For about half of our experiments, no large events were
observed at all. For the other half, large micro-slips event were
observed with the main difference that the mobilized area has a
clear angle with the surface of the sample instead of being parallel
as was the case in the previously described micro-slips. Such a
mobilized zone can be seen in Figures~\ref{compact}(a) to (c), where
the angle of the prismatic mobilized region to the surface is about
30$^{\circ}$, so that the surface of rupture is almost horizontal in
the laboratory frame. The mobilized zone progresses in the system as
the angle of inclination increases. This progression occurs by
regular steps, with a period similar to that of the precursors obtained in
samples prepared in ``normal'' conditions.
\begin{figure}[htbp]
\centering \includegraphics[width=\columnwidth]{figure13}
\caption{Maps of correlation at different angles for a sample of
sand prepared in a compacted state at ``normal'' hygrometry
conditions (32 \% relative humidity): (a) 30.48$^{\circ}$, (b)
32.64$^{\circ}$ and (c) 32.16$^{\circ}$.}
\label{compact}
\end{figure}
To conclude this section concerning the influence of preparation over
the pre-avalanche phenomenology, we observe that the global picture of
the process, \emph{i.e.} the local rearrangements in the whole depth
of the system and periodic micro-ruptures appearing every few degrees
from an inclination of $15^{\circ}$, remains unchanged over a wide range of
conditions of preparation. The phenomenon is very robust to changes in
packing fraction and humidity conditions. Only the most compacted systems
behave differently, presenting an internal rupture that progresses in
the bulk of the system at an angle to the surface of the material.
\section{Discussion}\label{sec4}
In the preceding section, we have described the complex dynamic
behavior preceding the macroscopic failure of a granular material. The
dynamics begin very early in the tilting process with isolated
rearrangement events. Such events have already been reported, but
with our sensitive side view characterization, we are able to obtain
information about their spatial distribution. At the very early stage
of the tilting process the activity is essentially limited to very
superficial layers, and as the tilting process progresses, the
activity progresses into the bulk. A striking feature is the presence
of many micro-failure events in the bulk. Such events occur quite
regularly during the tilting process. We first discuss this effect in
the framework of the Mohr-Coulomb failure criterion. After this, we will
discuss how our observations may be related to the plasticity of
granular material.
\subsection{Cohesion}\label{sec4.1}
We first discuss the localization of failure in a tilted granular
material as it may be deduced from Coulomb failure criteria. The
equation of static equilibrium for a granular material is:
\begin{eqnarray}
{\partial \sigma_{xx} \over \partial x}+{\partial \sigma_{xz} \over \partial z}&=&\rho g \sin \theta\\
{\partial \sigma_{zx} \over \partial x}+{\partial \sigma_{zz} \over \partial z}&=&\rho g \cos \theta
\end{eqnarray}
where $\sigma_{xz}=\sigma_{zx}$ is the shear stress, $\sigma_{xx}$
($\sigma_{zz}$) is the horizontal (vertical) normal stress, $\rho$ the
density, and $g$ the gravity. The orientations of the axes are defined
in Figure~\ref{avalanche}, and we neglect any shear stresses in the
transverse $y$ direction. Assuming that the stresses are uniform in
the $x$ direction, we obtain:
\begin{equation}
\sigma_{zz}=\rho g z \cos \theta,~~~~\sigma_{zx}=\rho g z \sin \theta \label{eqStresses}
\end{equation}
The Coulomb criterion postulates that a granular material is stable with respect to failure if~\cite{nedderman}:
\begin{equation}
\vert \tau \vert \le \mu \sigma +c_h, \label{eqCoulomb}
\end{equation}
for any plane inside the material. In Equation~(\ref{eqCoulomb}),
$\tau$ is the shear stress, $\sigma$ the normal stress, $\mu$ the
internal friction coefficient, and $c_h$ the cohesion. Applying the
Coulomb criterion to Equation~(\ref{eqStresses}), we find that the
planes of failure are parallel to the $x$ axis, and the Coulomb criterion
becomes:
\begin{equation}
\tan \theta \le \mu +{c_h \over \rho g z \cos \theta}. \label{eqCoulomb2}
\end{equation}
First, in the absence of cohesion ($c_h=0$), the depth of the failure plane
is not determined. With cohesion, the position of the failure plane is
determined. In the case of constant cohesion, failure must occur at
the bottom of the sample~\cite{halsey98,restagno04}, where ${c_h /\rho
g z \cos \theta}$ is the lowest. That result is in contradiction
with the observation that the first micro-failure appears at a smaller
depth than the following ones. The dependence of $\theta$ on the
values of $z$ for which failure occurs in a Mohr-Coulomb model is
opposite to the dependence experimentally observed. Moreover, such a
model gives only one criterion of rupture and is unable to predict
successive micro-failures in the material. In any case, the Coulomb
failure criterion is unable to explain the occurrence of the observed
micro-failures in the bulk of the sample.
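The behavior of criterion~(\ref{eqCoulomb2}) can also be checked numerically. The sketch below (Python; all parameter values are arbitrary placeholders, not measured quantities) solves $\tan \theta = \mu + c_h/(\rho g z \cos \theta)$ for the marginal angle by bisection, and confirms that with a constant cohesion the marginal angle decreases with depth, i.e.\ the deepest layer fails first.

```python
import math

# Illustration of the Coulomb criterion: the marginal stability angle
# theta_c(z) solves tan(theta) = mu + c_h / (rho g z cos(theta)).
# All parameter values below are arbitrary placeholders.
mu = math.tan(math.radians(22.0))   # internal friction coefficient
rho_g = 1.5e4                       # rho * g  (N/m^3, assumed)
c_h = 30.0                          # cohesion (Pa, assumed)

def theta_c(z):
    """Marginal stability angle (degrees) at depth z (metres), by bisection."""
    def excess(theta_deg):
        t = math.radians(theta_deg)
        return math.tan(t) - mu - c_h / (rho_g * z * math.cos(t))
    lo, hi = 0.0, 89.0              # excess < 0 at lo, > 0 at hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if excess(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

# with a constant cohesion, the deepest layer is the first to fail:
angles = [theta_c(z) for z in (0.01, 0.05, 0.20)]   # shallow -> deep
```

The marginal angle tends to $\arctan \mu$ at large depth and grows as the depth decreases, which is the depth dependence that contradicts the experimental observations.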
\subsection{Plastic deformation}\label{sec4.2}
The Coulomb failure discussed in the preceding section does not take
into account any plastic deformations before rupture in the granular
material. It is however well known that granular materials may yield
before being subjected to failure. The critical state theory of
soil~\cite{schofield} describes how the plastic deformation occurs in
a material depending on the initial state of the material. Starting
from an initially loose granular material, application of a shear
stress initially produces an elastic deformation. After this, yield
begins with plastic deformations. During this yielding, the material
compacts slowly. When the shear stress exceeds a threshold (generally
a fraction of the confining pressure), macroscopic failure occurs in
the material. If the experiment begins with an initially dense
granular material, the deformation is elastic until a maximum value of
the stress is reached. When the stress exceeds this value, failure
occurs. No plastic deformation happens before that rupture and
dilatancy occurs only after the failure. The fact that rearrangements
and precursors are observed in our experiments only for loose or
moderately dense granular samples (and not for densely prepared
granular materials) seems to show that the observed deformations in
our experiments correspond to yield before failure. We observed in a
previous work, for a different experimental geometry,
that the same kind of rearrangements may be associated with plastic
dissipation into the material~\cite{amon12}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figure14}
\caption{(Color online) Number of spots $N$ as a function of the
difference between actual angle and failure angle
$\theta_c(z)$. Every curve corresponds to a different depth: black
$z=40d$, blue $75d$, red $110d$, green $150d$, yellow $185d$.
Inset: same curves in logarithmic scale.}
\label{fig14}
\end{figure}
The link between the plastic deformations and the failure of the
material may be highlighted by considering the amount of plastic
deformation as a function of the difference between the applied stress
and the stress at which failure occurs. We have noticed (see
sec.\ref{activity} and fig.\ref{front}) that, at a given tilt angle
(i.e. at a given applied stress), the density of the rearrangements
decreases with the depth. This dependence of the density of the
rearrangements on the depth of the material most certainly originates
from a gradient of properties, either due to a depth dependent
confining pressure and/or a gradient of packing fraction during the
preparation of the sample. Such packing fraction and/or pressure
dependence of the yield process is observed in many soil mechanics
studies, but at confining pressures noticeably larger than the
pressure in our experiment. The tilt angle at which failure occurs
into the material (i.e. the failure stress) also increases with the
depth (see sec.\ref{precursors} and fig.\ref{depth}). From data of
fig.\ref{depth}, we defined $\theta_c(z)$ as the $z$-dependent value
of the tilt angle at which failure occurs. We plot in Fig.~\ref{fig14}
the number of events as a function of $\theta - \theta_c(z)$. The data
obtained for different depths, which were scattered in
Fig.~\ref{fluidity}, then collapse onto the same curve. This shows that
those plastic events occur with a density which is given by the
difference between the failure and applied stresses, or equivalently
that failures happen after a certain number of rearrangements have
occurred.
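This collapse can be illustrated with a toy model in which the event density depends only on the distance $\theta - \theta_c(z)$ to the local failure angle. In the sketch below (Python; the failure angles and the exponential form of the density are invented for the illustration), shifting each depth's curve by its own $\theta_c$ makes all curves superpose.

```python
import numpy as np

# Toy model of the collapse: the spot count N depends only on the distance
# theta - theta_c(z) to the local failure angle (modeling assumption).
def n_model(x):
    return np.exp(0.5 * x)        # arbitrary increasing density law

theta = np.linspace(0.0, 30.0, 301)          # tilt angles (degrees)
theta_c = {40: 18.0, 110: 24.0, 185: 28.0}   # failure angle per depth (toy)
curves = {z: n_model(theta - tc) for z, tc in theta_c.items()}

# replot each curve against theta - theta_c(z): all collapse onto n_model
x_common = np.linspace(-15.0, 0.0, 50)
shifted = [np.interp(x_common, theta - tc, curves[z])
           for z, tc in theta_c.items()]
spread = float(np.max(np.std(shifted, axis=0)))   # ~0 after the shift
```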
It has been shown in~\cite{amon12} that the rate at which localised plastic events occur may be identified with the so-called {\it fluidity} of the material, which is the local rate of stress relaxation. The concept of fluidity has been introduced in order to explain the rheological properties of soft glassy materials~\cite{derec2001}. The use of this concept in order to explain the rheological properties of granular materials has also been proposed recently~\cite{kamrin2012,chaudhuri2012}. The relation between isolated plastic events and failure may be qualitatively understood. Indeed, since
rearrangements may be seen as plastic reorganizations, we expect that
each event redistributes some stress, according to an elastic stress
propagator~\cite{picard2005}. This additional stress may also trigger
other plastic events in its neighborhood. Because of these
processes, shear bands may form in the material~\cite{martens2012}. It
follows naturally from this kind of scenario that failure occurs when
a certain amount of activity in the material is reached. This is
precisely what is observed: rearrangements precede micro-failure,
whatever the depth.
\subsection{Periodicity of precursors}\label{sec4.3}
Another striking observation is a rough periodicity in tilt angle of
the precursor events. As we have already shown (see sec.\ref{precursors}
and Fig.~\ref{rupture}), those precursor events are indeed failures in
the bulk material.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{figure15}
\caption{(Color online) Details of a slip event. (a) is the
correlation image of a microslip event showing the failure
plane. (b) is the following correlation image. The zone between
the dashed red lines is decorrelated although below the failure
plane, and may be interpreted as a layer of sheared granular
material.}
\label{fig15}
\end{figure}
We have plotted on Fig.~\ref{fig15} a detail of the failure already
described on Fig.~\ref{rupture}. We see on Fig.~\ref{fig15}.a the
localisation of the failure plane. One image later
(Fig.~\ref{fig15}.b) we observe that the decorrelated zone extends
deeper into the material. So we have between the two lines of
fig.\ref{fig15}.b a layer of material which has been deformed and
which is below the failure plane. We can interpret this layer as a
zone of granular material which has been sheared by the motion of the
upper part. This observation is in agreement with the literature about
creep motion in granular flow. It is indeed well known that when a
granular layer is sheared, the deformation extends into the bulk. The
deformation decays exponentially into the bulk, with a characteristic
length $\xi$ of the order of few grain
diameters~\cite{komatsu2000,richard2008,crassous2008}. The same decay
in velocity is also observed during non-stationary granular
avalanches~\cite{courrech2005}. So the effect of a micro-failure event
may be to produce a deformation which extends into the depth. The
interpretation of this decorrelated layer is supported by the
following quantitative analysis. Calling $\Delta$ the displacement of
the upper block, we may
expect~\cite{komatsu2000,richard2008,crassous2008,courrech2005} a
displacement which decays as $\simeq \Delta \exp{(-\delta z/\xi)}$,
with $\delta z$ the distance below the
failure plane (see fig.~\ref{fig15}). The deformation
is then $\simeq (\Delta/\xi) \exp{(-\delta z/\xi)}$. With $\Delta \sim
\xi \simeq d$, we have a deformation of order $10^{-6}$ (the limit of
detection of our light scattering setup) for $\delta z \simeq \ln(10^6)
\times \xi \simeq 15 d$. This is in rough agreement with the
separation between the two lines of fig.\ref{fig15}.b which is $\simeq
15 d$. This agreement supports the hypothesis that a layer of material
is sheared below the failure plane.
The effect of this shear deformation may be to erase all the possible
sites for future micro-failure events because of the reconfiguration of
the sample in that zone. The next micro-failure site should then be
located slightly deeper into the bulk. The typical depth of this
sheared layer being $15~d$, we may expect from the data of
section \ref{precursors} (variation of precursor depths with failure
angle $\simeq 9~d/^{\circ}$ for glass beads) that the angular period
between successive micro-failures is of the order of $\simeq 15/9
\approx 1.7^{\circ}$, a value which is close to the period that we
measured (see figure~\ref{period}).
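The two order-of-magnitude estimates above can be checked by direct arithmetic; the snippet below simply reproduces them, taking $\xi$ and $\Delta$ of one bead diameter $d$ and the $10^{-6}$ deformation detection limit quoted in the text.

```python
import math

# Direct check of the two order-of-magnitude estimates, taking
# xi ~ Delta ~ one bead diameter d and the 1e-6 detection limit.
xi = 1.0                         # decay length, in units of d
detection_limit = 1e-6           # smallest measurable deformation
# depth below the failure plane where the deformation becomes undetectable:
dz = math.log(1.0 / detection_limit) * xi    # ~ 13.8 d, i.e. roughly 15 d
# implied angular period, using the precursor-depth slope of ~ 9 d/degree:
period = dz / 9.0                            # ~ 1.5 degrees
```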
\section{Conclusion}\label{sec5}
Our experiments on quasi-static tilting of granular materials show, in
agreement with numerous previously published studies, that
reorganizations occur before the avalanche takes place. The main
advance of this study is to show that such rearrangements are
organized spatially in a complex manner. For loose or moderately dense
granular systems, we observe isolated reorganizations at low
inclination, with a density decreasing slowly with the depth. As
inclination increases, reorganizations occur under the form of
micro-failure planes in the bulk, which are localized at increasing
depths. The different localizations of the failure planes imply that
the underlying physics is different from the stick-slip already
observed in other granular experiments~\cite{nasuno98}, where the slip
plane remains the same. The micro-failures seem to occur when a given
level of accumulated plastic deformation is attained. This holds at
every depth. The case of dense granular systems seems to obey a
different underlying physics.
Those observations may be partially and qualitatively understood in the framework of
yielding properties of granular material (e.g. Granta-Gravel material
in~\cite{schofield}) before failure. At densities below the critical
one, the granular material yields progressively before the failure of
the material. This yielding is continuous, and associated to a
compaction of the material. We may then interpret the observed plastic
deformation as an experimental manifestation of this yielding. Important
deformations are then observed in the form of micro-failures, which,
in this framework, form plastic flow in the material. The periodicity
of such plastic flows may be understood if we consider collective
granular flow: micro-failures involve granular matter in the bulk
over a typical extension length, and the next micro-failure may occur
in the adjacent undeformed zone. This explanation is evidently very partial and only qualitative. We do
not expect that a granular model such as Granta-Gravel will apply
completely or quantitatively, and it must be understood only as a
starting point for future investigations.
A puzzling point is that, to
our knowledge, such collective and regular precursor motions have not
been reported in numerical simulations of tilted granular material. It
follows from our experimental results that those collective events may
only be observed with moderately dense and large enough numerical
granular packings.
Another interesting point is that those micro-failures do not occur at
the beginning of the tilt process. There is some minimum angle
(i.e. some minimal shear stress) below which no micro-failures
occur. To our knowledge, the existence of two finite critical shear
stresses (one for micro-failure and one for the macroscopic failure)
is not discussed in the literature. A complete understanding of the
observation described in this paper appears as a challenging task.
\section{Acknowledgements}
We acknowledge the financial support of ANR ``STABINGRAM''
No. 2010-BLAN-0927-01. We thank Eric Cl\'ement, Lyd\'eric Bocquet,
Patrick Richard, Mickael Duranteau and Yves Le Gonidec for scientific
discussions, Patrick Chasle for help with the image acquisition and
Alain Faisant for the conception of the tilting optical board.
\section{Introduction}
Denote the complex plane by $\mathbb{C}$ and the open unit disk by $\mathbb{D}$. An \emph{isotopy} of a set $X \subset \mathbb{C}$ is a homotopy $h: X \times [0,1] \to \mathbb{C}$ such that for each $t \in [0,1]$, the function $h^t: X \to \mathbb{C}$ defined by $h^t(x) = h(x,t)$ is an embedding (i.e.\ a homeomorphism of $X$ onto the range of $h^t$).
Suppose that $h: X \times [0,1] \to \mathbb{C}$ is an isotopy of a compactum $X \subset \mathbb{C}$ such that $h^0 = \mathrm{id}_X$. We consider the old problem of when the isotopy $h$ can be extended to an isotopy of the entire plane\footnote{We are indebted to Professor R.\ D.\ Edwards who communicated a related problem to us.}.
A more restrictive variant of the notion of an isotopy is a holomorphic motion (see e.g.\ \cite{astamart01}). The remarkable $\lambda$-Lemma \cite{slod91} states that any holomorphic motion of any plane set can be extended to a holomorphic motion of the entire plane. See \cite{astamart01} for a different proof and some history of that problem.
Although the $\lambda$-Lemma holds for arbitrary plane sets, some additional restrictions are needed for the existence of an extension of an isotopy to the entire plane $\mathbb{C}$. First, it is reasonable to restrict to isotopies of plane compacta. This by itself is not enough since Fabel has shown that there exists an isotopy of a convergent sequence which cannot be extended over the plane (see \cite[p.~991]{fabe05}). On the other hand, it was shown recently in \cite{ot10} that any isotopy of an arbitrary plane continuum $X$ can be extended over the plane. In this case each complementary domain $U$ of $X$ is simply connected and, hence, there exists a conformal isomorphism $\varphi_U: \mathbb{D} \to U$. The proof made use of two key analytic results for these conformal isomorphisms: the Carath\'{e}odory kernel convergence theorem, and the Gehring-Hayman inequality for the diameters of hyperbolic geodesics in $U$.
Let us now consider the case when $X$ is a plane compactum. Since we may assume that $X$ contains at least three points, the boundary of every complementary component $U$ of $X$ contains at least three points, so $U$ is hyperbolic, i.e.\ there exists an analytic covering map $\varphi_U: \mathbb{D} \to U$ (see \cite{Ahlfors1973}).
There is an analogue of the Carath\'{e}odory kernel convergence theorem which holds for families of analytic covering maps (see \cref{sec:analytic covering maps}). For an analogue of the Gehring-Hayman inequality, an additional geometric condition will be required:
\begin{defn}
A compact subset $X \subset \mathbb{C}$ is \emph{uniformly perfect with constant $k$} provided there exists $0 < k < 1$ so that for all $r < \mathrm{diam}(X)$ and all $x \in X$,
\[ \{z \in \mathbb{C} : kr \leq |z - x| \leq r\} \cap X \neq \emptyset .\]
\end{defn}
Clearly every uniformly perfect set is perfect and the standard ``middle-third'' Cantor set is uniformly perfect. It is known that the Gehring-Hayman estimate on the diameter of hyperbolic geodesics still holds for every analytic covering map $\varphi_U: \mathbb{D} \to U$ to a domain $U$ whose boundary is uniformly perfect (see \cref{sec:analytic covering maps} for details).
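As a quick illustration of the definition (with a deliberately non-optimal constant), the middle-third Cantor set $C$ is uniformly perfect with $k = 1/18$. Given $x \in C$ and $0 < r < 1 = \mathrm{diam}(C)$, choose $n \geq 1$ minimal with $3^{-n} \leq r/3$, so that $3^{-n} > r/9$. The point $x$ lies in a level-$n$ interval of the Cantor construction, both of whose endpoints belong to $C$, and the endpoint $y$ farther from $x$ satisfies
\[ r/18 < 3^{-n}/2 \leq |y - x| \leq 3^{-n} \leq r/3 < r ,\]
so that $y \in \{z \in \mathbb{C} : kr \leq |z - x| \leq r\} \cap C$.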
The main result in this paper is a characterization of isotopies $h: X \times [0,1] \to \mathbb{C}$ of uniformly perfect plane compacta $X$ which can be extended over the entire plane (see \cref{main1}). We use our characterization to prove that any isotopy of a plane compactum such that the diameter of every component is uniformly bounded away from zero can be extended over the plane (see \cref{main2}). Along the way, we will provide simpler proofs of some of the technical results in \cite{ot10}.
\subsection{Notation}
\label{sec:notation}
By a \emph{map} we mean a continuous function. For $z \in \mathbb{C}$, the magnitude of $z$ is denoted $|z|$, so that the Euclidean distance between two points $z,w \in \mathbb{C}$ is $|z - w|$. Given $z_0 \in \mathbb{C}$ and $r > 0$, denote
\[ B(z_0,r) = \{z \in \mathbb{C}: |z_0 - z| < r\} .\]
By a \emph{domain} we mean a connected, open, non-empty set $U \subset \mathbb{C}$. If $X \subset \mathbb{C}$ is closed, then a \emph{complementary domain} of $X$ is a component of $\mathbb{C} \setminus X$. A \emph{crosscut} of a domain $U$ is an \emph{open arc} $Q$ (i.e.\ $Q \approx (0,1) \subset \mathbb{R}$) contained in $U$ such that $\overline{Q}$ is a \emph{closed arc} (i.e.\ $\overline{Q} \approx [0,1]$) whose endpoints are in $\partial U$. Note that the endpoints of $\overline{Q}$ are required to be distinct. In general, if $A$ is an open arc whose closure $\overline{A}$ is a closed arc, we may refer to the endpoints of $\overline{A}$ as the ``endpoints of $A$''.
A \emph{path} is a map $\gamma: [0,1] \to \mathbb{C}$. Given a domain $U$, we say $\gamma$ is a \emph{path in $U$} if $\gamma((0,1)) \subset U$. Note that \emph{we allow the possibility that $\gamma(0) \in \partial U$ and/or $\gamma(1) \in \partial U$} -- we still call such a path a path in $U$.
We will make frequent use of covering maps in this paper. Given a covering map $\varphi: V \to U$, where $V$ and $U$ are domains, a \emph{lift} of a point $x \in U$ is a point $\h{x} \in V$ such that $\varphi(\h{x}) = x$. Similarly, if $\gamma$ is a path with $\gamma([0,1]) \subset U$ then a lift of $\gamma$ is a path $\h{\gamma}$ in $V$ such that $\varphi \circ \h{\gamma} = \gamma$.
The \emph{Hausdorff metric} $d_H$ measures the distance between two compact sets $A_1,A_2 \subset \mathbb{C}$ as follows:
\[ d_H(A_1,A_2) = \max \{ \max_{z_1 \in A_1} \min_{z_2 \in A_2} |z_1 - z_2|, \; \max_{z_2 \in A_2} \min_{z_1 \in A_1} |z_1 - z_2| \} .\]
Equivalently, $d_H(A_1,A_2)$ is the smallest number $\varepsilon \geq 0$ such that $A_1$ is contained in the closed $\varepsilon$-neighborhood of $A_2$ and $A_2$ is contained in the closed $\varepsilon$-neighborhood of $A_1$.
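For instance, taking $A_1 = \{0\}$ and $A_2 = [0,1]$, viewed as subsets of the real axis in $\mathbb{C}$: every point of $A_2$ lies within distance $1$ of the point $0$, while $0$ itself belongs to $A_2$, so the two one-sided distances in the definition are $1$ and $0$, and hence
\[ d_H(\{0\},[0,1]) = \max \Bigl\{ \max_{w \in [0,1]} \min_{z \in \{0\}} |w - z|, \; \max_{z \in \{0\}} \min_{w \in [0,1]} |z - w| \Bigr\} = \max\{1,0\} = 1 .\]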
Given an isotopy $h: X \times [0,1] \to \mathbb{C}$, we denote $h^t = h|_{X \times \{t\}}$ and, for $x \in X$, we denote $x^t = h^t(x)$.
\section{Preliminaries}
\label{sec:preliminaries}
In this section we collect several tools which we use in this paper. Many of these are standard analytical results; others are less well-known.
\subsection{Bounded analytic covering maps}
\label{sec:analytic covering maps}
It is a standard classical result (see e.g.\ \cite{Ahlfors1973}) that for any domain $U \subset \mathbb{C}$ whose complement contains at least two points, and for any $z_0 \in U$, there is a complex analytic covering map $\varphi: \mathbb{D} \to U$ such that $\varphi(0) = z_0$. Moreover, this covering map $\varphi$ is uniquely determined by the argument of $\varphi'(0)$.
Many of the results below hold only for analytic covering maps $\varphi: \mathbb{D} \to U$ to bounded domains $U$. For the remainder of this subsection, let $U \subset \mathbb{C}$ be a bounded domain, and let $\varphi_U=\varphi: \mathbb{D} \to U$ be an analytic covering map.
\begin{thm}[Fatou \cite{Fatou06}]
\label{Fatou}
The radial limits $\lim_{r \to 1^-} \varphi(re^{i\theta})$ exist for all points $e^{i\theta}$ in $\partial \mathbb{D}$ except possibly for a set of linear measure zero.
\end{thm}
From now on, we will always assume that any bounded analytic covering map $\varphi: \mathbb{D} \to U$ has been extended, by setting $\varphi(e^{i\theta}) = \lim_{r \to 1^-} \varphi(re^{i\theta})$, to all points $e^{i\theta} \in \partial \mathbb{D}$ at which this radial limit exists. Note that the function $\varphi$ is not necessarily continuous at these points.
For this extended map $\varphi$, we extend the notion of lifts. If $\gamma$ is a path in $U$ (recall this allows for the possibility that $\gamma(0)$ and/or $\gamma(1)$ belongs to $\partial U$), then a \emph{lift} of $\gamma$ is a path $\h{\gamma}$ in $\mathbb{D}$ such that $\varphi \circ \h{\gamma} = \gamma$. This means that if $\gamma(0) \in \partial U$, then $\h{\gamma}(0) \in \partial \mathbb{D}$ and $\varphi$ is defined at the point $\h{\gamma}(0)$ (and $\varphi(\h{\gamma}(0)) = \gamma(0)$); and likewise for $\gamma(1)$ and $\h{\gamma}(1)$.
\begin{thm}[Riesz \cite{Riesz16, Riesz23}]
\label{Riesz}
For each $x \in \partial U$, the set of points $e^{i\theta}$ for which $\lim_{r\to 1^-} \varphi(re^{i\theta}) = x$ has linear measure zero in $\partial \mathbb{D}$.
\end{thm}
The next result about lifts of paths is very similar to classical results for covering maps. Since our extended map $\varphi_U$ is not a covering map at points in $\partial \mathbb{D}$, we include a proof for completeness.
\begin{thm}
\label{lift}
Suppose $\gamma$ is a path in $U$ such that $\gamma((0,1]) \subset U$. Let $\h{z} \in \mathbb{D}$ be such that $\varphi(\h{z}) = \gamma(1)$. Then there exists a unique lift $\h{\gamma}$ of $\gamma$ with $\h{\gamma}(1) = \h{z}$.
In particular, if $\gamma(0) \in \partial U$, then $\h{\gamma}(0) \in \partial \mathbb{D}$, $\varphi$ is defined at $\h{\gamma}(0)$ (i.e.\ the radial limit of $\varphi$ exists there), and $\varphi(\h{\gamma}(0)) = \gamma(0)$.
\end{thm}
\begin{proof}
We may assume that $\gamma(0) \in \partial U$. Since $\varphi$ is a covering map, $\gamma|_{(0,1]}$ lifts to a path with initial point $\h{z}$ which compactifies on a continuum $K \subset \partial \mathbb{D}$. If $K$ is non-degenerate, then by \cref{Fatou} there exists a set $E$ of positive measure in the interior of $K$ so that for each $e^{i\theta} \in E$, the radial limit $\lim_{r \to 1^-} \varphi(r e^{i\theta})$ exists. Since the graph of $\h{\gamma}$ compactifies on $K$, for each $e^{i\theta} \in E$ we can choose a sequence $s_i \to 0$ so that $\h{\gamma}(s_i)= r_i e^{i\theta}$ with $r_i \to 1$. It follows that the radial limit $\lim_{r \to 1^-} \varphi(r e^{i\theta}) = \gamma(0)$ for each $e^{i\theta} \in E$, a contradiction with \cref{Riesz}. Thus $K$ is a point $e^{i\theta}$.
If $\gamma(0)$ is a limit point of $\partial U$, then we can choose arbitrarily small $\rho > 0$ so that the circle $S(\gamma(0),\rho) = \partial B(\gamma(0),\rho)$ intersects $\partial U$, and $\varphi(0), \gamma(1) \notin B(\gamma(0),2\rho)$. Let $C$ be the component of $S(\gamma(0),\rho) \setminus \partial U$ which is met by the closure of the component of $\gamma([0,1]) \cap B(\gamma(0),\rho)$ that contains $\gamma(0)$. By the above, $C$ lifts to a crosscut $\h{C}$ of $\mathbb{D}$ such that $e^{i\theta}$ is contained in the component $H$ of $\mathbb{D} \setminus \h{C}$ which does not contain $0$. Since a terminal segment of the radial segment $\{r e^{i\theta}: 0 \leq r < 1\}$ is contained in $H$, and $\rho$ is arbitrarily small, it follows that $\varphi(\h{\gamma}(0)) = \lim_{r\to 1^-} \varphi(r e^{i\theta}) = \gamma(0)$ as required.
In the case that $\gamma(0)$ is an isolated point of $\partial U$ (this case will not be needed in this paper as all domains we consider will have perfect boundaries), a similar argument can be made by lifting a small circle in $U$ centered at $\gamma(0)$. We leave this case to the reader.
\end{proof}
The next result is a variant of \cref{lift}, in which the base point of the path to be lifted is in the boundary of $U$.
In the case that the boundary of $U$ is uniformly perfect, we prove below in \cref{Hlift} a stronger result about lifting a homotopy under covering maps to a domain whose boundary is changing under an isotopy. The present result can be obtained as a corollary of \cref{Hlift} by using the identity isotopy. We omit a proof for the non-uniformly perfect case, since we will not need it in this paper.
\begin{thm}
\label{lift from bd}
Suppose $\gamma$ is a path in $U$ such that $\gamma((0,1]) \subset U$ and $\gamma(0) \in \partial U$. Let $\h{x} \in \partial \mathbb{D}$ be such that $\varphi_U(\h{x}) = \gamma(0)$ and $\gamma$ is homotopic to the radial path $\varphi_U|_{\{r \h{x} \;:\; 0 \leq r \leq 1\}}$ under a homotopy in $U$ that fixes the point $\gamma(0)$. Then there exists a lift $\h{\gamma}$ of $\gamma$ with $\h{\gamma}(0) = \h{x}$. Moreover, if $\partial U$ is perfect, this lift $\h{\gamma}$ is unique.
\end{thm}
The \emph{hyperbolic metric} on the unit disk $\mathbb{D}$ is given by the form $\frac{2|dz|}{1 - |z|^2}$, meaning that the length of a smooth path $\gamma: [0,1] \to \mathbb{D}$ is $\int_0^1 \frac{2 |\gamma'(t)|}{1 - |\gamma(t)|^2} \,dt$. The important property of the hyperbolic metric for us is that (hyperbolic) geodesics in $\mathbb{D}$ are pieces of round circles or straight lines which cross the boundary $\partial \mathbb{D}$ orthogonally. Via the covering map $\varphi: \mathbb{D} \to U$, we obtain the \emph{hyperbolic metric on $U$}, in which the length of a smooth path in $U$ is equal to the length of any lift of that path under $\varphi$ -- this length is independent of the choice of lift. It is a standard result that the hyperbolic metric on $U$ is independent of the choice of covering map $\varphi: \mathbb{D} \to U$.
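For example, the hyperbolic length of the radial segment from $0$ to $Re^{i\theta}$ (where $0 < R < 1$) is
\[ \int_0^R \frac{2\,dr}{1-r^2} = \log \frac{1+R}{1-R} \longrightarrow \infty \quad \text{as } R \to 1^- ,\]
so $\partial \mathbb{D}$ is infinitely far from any interior point in the hyperbolic metric, even though it lies at finite Euclidean distance.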
\begin{thm}[Gehring-Hayman \cite{PR98,pomm02}]
\label{gehrhay}
Suppose $\partial U$ is uniformly perfect with constant $k$. There exists a constant $K$ such that if $\h{g}$ is a hyperbolic geodesic in $\mathbb{D}$ and $\h{\Gamma}$ is a curve with the same endpoints as $\h{g}$, then
\[ \mathrm{diam}(\varphi(\h{g})) \leq K \cdot \mathrm{diam}(\varphi(\h{\Gamma})) .\]
The constant $K$ depends only on $k$, not on the domain $U$ itself or on the choice of analytic covering map $\varphi$.
\end{thm}
We end this subsection with a discussion of analytic covering maps of varying domains in the plane. We will make use of the notion of Carath\'{e}odory kernel convergence, which was introduced by Carath\'{e}odory for univalent analytic maps in \cite{cara12}, then extended by Hejhal to the case of analytic covering maps.
Let $U_1,U_2,\ldots$ and $U_\infty$ be domains and let $z_1,z_2,\ldots$ and $z_\infty$ be points with $z_n \in U_n$ for all $n = 1,2,\ldots$ and $z_\infty \in U_\infty$. We say that \emph{$\langle U_n, z_n \rangle \to \langle U_\infty, z_\infty \rangle$ in the sense of Carath\'{e}odory kernel convergence} provided that (i) $z_n \to z_\infty$; (ii) for any compact set $K \subset U_\infty$, $K \subset U_n$ for all but finitely many $n$; and (iii) for any domain $U$ containing $z_\infty$, if $U \subseteq U_n$ for infinitely many $n$, then $U \subseteq U_\infty$.
\begin{thm}[\cite{hej74}; see also \cite{Comerford2013}]
\label{caratheodory}
Let $U_1,U_2,\ldots$ and $U_\infty$ be domains and let $z_1,z_2,\ldots$ and $z_\infty$ be points with $z_n \in U_n$ for all $n = 1,2,\ldots$ and $z_\infty \in U_\infty$. Let $\varphi_\infty: \mathbb{D} \to U_\infty$ be the analytic covering map such that $\varphi_\infty(0) = z_\infty$ and $\varphi_\infty'(0) > 0$. Likewise, for each $n = 1,2,\ldots$, let $\varphi_n: \mathbb{D} \to U_n$ be the analytic covering map such that $\varphi_n(0) = z_n$ and $\varphi_n'(0) > 0$. Then $\langle U_n, z_n \rangle \to \langle U_\infty, z_\infty \rangle$ in the sense of Carath\'{e}odory kernel convergence if and only if $\varphi_n \to \varphi_\infty$ uniformly on compact subsets of $\mathbb{D}$.
\end{thm}
\subsection{Partitioning plane domains}
\label{sec:partitioning domains}
Let $U$ be a bounded domain in $\mathbb{C}$. We next describe a way of partitioning $U$ into simple sets which are either circular arcs or regions whose boundaries are unions of circular arcs.
Let $\mathcal{B}$ be the collection of all open disks $B(c,r) \subset U$ such that $|\partial B(c,r) \cap \partial U| \geq 2$. Let $\mathcal{C}$ be the collection of all centers of such disks, and for $c \in \mathcal{C}$ let $r(c)$ be the radius of the corresponding disk in $\mathcal B$. The set $\mathcal{C}$, called the \emph{skeleton of $U$}, was studied by several authors (see for example \cite{fre97}). Note that for each $c \in \mathcal{C}$, $B(c,r(c))$ is conformally equivalent with the unit disk $\mathbb{D}$ and, hence, can be endowed with the hyperbolic metric $\rho_c$. Let $\mathrm{Hull}(c)$ be the convex hull of the set $\partial B(c,r(c)) \cap \partial U$ in $B(c,r(c))$ \emph{using the hyperbolic metric $\rho_c$ on the disk $B(c,r(c))$}. The following theorem by Kulkarni and Pinkall generalizes an earlier result by Bell \cite{bell76} (see \cite{bfmot13} for a more complete description):
\begin{thm}[\cite{kulkpink94}]
\label{KP}
For each $z \in U$ there exists a unique $c \in \mathcal{C}$ such that $z \in \mathrm{Hull}(c)$.
\end{thm}
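For example, let $U$ be the round annulus $\{z: 1 < |z| < 3\}$. The only disks $B(c,r) \subset U$ with $|\partial B(c,r) \cap \partial U| \geq 2$ are the disks $B(c,1)$ with $|c| = 2$, each tangent to the two boundary circles at the diametrically opposite points $\frac{1}{2}c$ and $\frac{3}{2}c$. Here $\mathrm{Hull}(c)$ is the hyperbolic geodesic in $B(c,1)$ joining these two points, namely the Euclidean diameter
\[ \mathrm{Hull}(c) = \bigl[ \tfrac{1}{2}c, \tfrac{3}{2}c \bigr], \qquad |c| = 2 ,\]
and every $z \in U$ lies on exactly one such radial segment (the one with $c = 2z/|z|$), as \cref{KP} guarantees.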
\begin{figure}
\begin{center}
\includegraphics{KP.pdf}
\end{center}
\caption{Depiction of two examples of the sets $\mathrm{Hull}(c)$ from the Kulkarni-Pinkall decomposition of a domain $U$ in $\mathbb{C}$. In the picture, $U$ is a component of the complement of the wavy lines.}
\label{fig:KP}
\end{figure}
Let $\mathcal{J}$ be the collection of all crosscuts of $U$ which are contained in the boundaries of the sets $\mathrm{Hull}(c)$ for $c \in \mathcal{C}$. The elements of $\mathcal{J}$ are circular open arcs (called \emph{chords}) whose endpoints are in $\partial U$. Two such chords do not cross each other inside $U$ (i.e., if $\ell \neq \ell'$ are chords in $\mathcal{J}$, then $\ell \cap \ell' = \emptyset$), and the limit of any convergent sequence of chords in $\mathcal{J}$ is either a chord in $\mathcal{J}$ or a point in $\partial U$. In particular, the subcollection of chords of diameter greater than or equal to $\varepsilon$ is compact for each $\varepsilon > 0$. As such, the family $\mathcal{J}$ is close to being a \emph{lamination} of $U$ (see \cref{d:lamination} in \cref{sec:characterization} below). However, it is possible that uncountably many distinct chords in $\mathcal{J}$ have the same pair of endpoints $x,y \in \partial U$.
\subsection{Equidistant sets}
\label{sec:equi set}
Let $A_1$ and $A_2$ be disjoint closed sets in $\mathbb{C}$. The \emph{equidistant set} between $A_1$ and $A_2$ is the set
\[ \mathrm{Equi}(A_1,A_2) = \left\{ z \in \mathbb{C}: \min_{w \in A_1} |z-w| = \min_{w \in A_2} |z-w| \right\} .\]
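For example, for two distinct points $a \neq b$, $\mathrm{Equi}(\{a\},\{b\})$ is the perpendicular bisector of the segment joining $a$ and $b$; and
\[ \mathrm{Equi}\bigl(\{0\}, \{z: |z| = 2\}\bigr) = \bigl\{z: |z| = \bigl| 2 - |z| \bigr| \bigr\} = \{z: |z| = 1\} ,\]
the circle lying ``half way'' between the two sets.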
The equidistant set is a convenient way to define a set running ``in between'' $A_1$ and $A_2$. Moreover, it has a very simple local structure in the case that the sets $A_1$ and $A_2$ are not ``entangled'' in the sense of the following definition:
\begin{defn}
We say that $A_1$ and $A_2$ are \emph{non-interlaced} if whenever $B(c,r)$ is an open disk contained in the complement of $A_1 \cup A_2$, there are disjoint arcs $C_1,C_2 \subset \partial B(c,r)$ such that $A_1 \cap \partial B(c,r) \subset C_1$ and $A_2 \cap \partial B(c,r) \subset C_2$. We allow for the possibility that $C_1 = \emptyset$ in the case that $A_2 \cap \partial B(c,r) = \partial B(c,r)$, and vice versa.
\end{defn}
By a \emph{$1$-manifold} in the plane, we mean a \emph{closed} set $M \subset \mathbb{C}$ such that each component of $M$ is homeomorphic either to $\mathbb{R}$ or to $\partial \mathbb{D}$, and these components are all open in $M$.
\begin{thm}[\cite{brou05,aartbrouwover}]
\label{thm:manifold}
Let $A_1$ and $A_2$ be disjoint closed sets in $\mathbb{C}$. If $A_1$ and $A_2$ are non-interlaced, then $\mathrm{Equi}(A_1,A_2)$ is a $1$-manifold in the plane.
\end{thm}
\subsection{Midpoints of paths}
\label{sec:midpoints}
We identify the space of all paths in the plane $\mathbb{C}$ with the function space $\mathcal{C}([0,1],\mathbb{C})$ with the \emph{uniform metric}; that is, the distance between two paths $\gamma_1,\gamma_2 \in \mathcal{C}([0,1],\mathbb{C})$ is equal to $\sup \{|\gamma_1(t) - \gamma_2(t)|: t \in [0,1]\}$.
The standard Euclidean length of a path is not a well-behaved function from $\mathcal{C}([0,1],\mathbb{C})$ to $[0,\infty)$. First, it is not defined (i.e., not finite) for all paths in $\mathcal{C}([0,1],\mathbb{C})$, but only for rectifiable paths. Second, paths can be arbitrarily close in the uniform metric and yet have very different Euclidean path lengths.
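To illustrate the second point, let $\gamma(t) = t$ and, for $n \geq 1$, let $\gamma_n(t) = t + \frac{i}{n} \sin(\pi n^2 t)$. Then $\gamma_n \to \gamma$ uniformly, since $|\gamma_n(t) - \gamma(t)| \leq \frac{1}{n}$ for all $t$, yet the Euclidean length of $\gamma_n$ is at least the total variation of its imaginary part, which is $n^2 \cdot \frac{2}{n} = 2n \to \infty$, while $\gamma$ itself has length $1$.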
However, there do exist alternative ``path length'' functions $\mathsf{len}: \mathcal{C}([0,1],\mathbb{C}) \to [0,\infty)$ such that $\mathsf{len}$ is defined for \emph{all} paths in $\mathcal{C}([0,1],\mathbb{C})$, and $\mathsf{len}$ is continuous with respect to the uniform metric on $\mathcal{C}([0,1],\mathbb{C})$ and the standard topology on $[0,\infty) \subset \mathbb{R}$, see \cite{Morse1936,Silverman1969,hot13}. Such an alternative path length function can be used to define a choice of ``midpoint'' of a path which varies continuously with the path. Specifically, the midpoint of $\gamma$ is defined to be the point $\mathsf{m}(\gamma) = \gamma(t_0)$, where $t_0 \in (0,1)$ is chosen such that $\mathsf{len}(\gamma|_{[0,t_0]}) = \mathsf{len}(\gamma|_{[t_0,1]})$.
In this paper, we will not need to know any particulars about the definitions of such path length functions, but only this result about existence of such midpoints, which we state below.
\begin{thm}[see e.g.\ \cite{hot13}]
\label{thm:midpt}
There is a continuous function
\[ \mathsf{m}: \mathcal{C}([0,1],\mathbb{C}) \to \mathbb{C} \]
such that $\mathsf{m}(\gamma) \in \gamma((0,1))$ for all $\gamma \in \mathcal{C}([0,1],\mathbb{C})$.
Moreover, if $\gamma_1$ and $\gamma_2$ are both parameterizations of a closed arc $A$ (i.e.\ if $\gamma_1([0,1]) = \gamma_2([0,1]) = A$ and $\gamma_1$ and $\gamma_2$ are homeomorphisms between $[0,1]$ and $A$), then $\mathsf{m}(\gamma_1) = \mathsf{m}(\gamma_2)$.
\end{thm}
In light of the second part of \cref{thm:midpt}, given an (open or closed) arc $A$, we define the midpoint of $A$ to be $\mathsf{m}(A) = \mathsf{m}(\gamma)$, where $\gamma$ is any path which parameterizes $A$ (or $\overline{A}$, if $A$ is an open arc).
\section{Main Theorem}
\label{sec:characterization}
In this section, we state and prove the main theorem of this paper, which is a characterization of isotopies of uniformly perfect plane compacta which can be extended over the entire plane. Note that the example of Fabel mentioned in the Introduction can easily be modified to obtain an isotopy $h: X \times [0,1] \to \mathbb{C}$ so that for each $t$, $X^t = h^t(X)$ is a uniformly perfect Cantor set with the same constant $k$. Thus, additional assumptions are required to ensure the extension of such an isotopy over the plane.
\begin{thm}
\label{main1}
Suppose that $h: X \times [0,1] \to \mathbb{C}$ is an isotopy of a compactum $X \subset \mathbb{C}$ starting at the identity, such that $X^t$ is uniformly perfect with the same constant $k$ for each $t \in [0,1]$. Then the following are equivalent:
\begin{enumerate}
\item $h$ extends to an isotopy of the entire plane $\mathbb{C}$;
\item For each $\varepsilon > 0$ there exists $\delta > 0$ such that for any crosscut $Q$ of a complementary domain $U$ of $X$ with $\mathrm{diam}(Q) < \delta$, there exists a homotopy $h_Q: (X \cup Q) \times [0,1] \to \mathbb{C}$ starting at the identity which extends $h$ and is such that $h_Q^t(Q) \cap X^t = \emptyset$ and $\mathrm{diam}(h_Q^t(Q)) < \varepsilon$ for all $t \in [0,1]$.
\end{enumerate}
\end{thm}
It is trivial to see that condition (i) implies condition (ii) from \cref{main1}.
To obtain the converse, we will in fact prove a stronger characterization in \cref{main1technical} below. To state this Theorem, we introduce the following simple condition:
\begin{defn}
Let $X \subset \mathbb{C}$ be a compact set and let $h: X \times [0,1] \to \mathbb{C}$ be an isotopy of $X$ starting at the identity. We say that $X$ is \emph{encircled} if $X$ has a component which is a large circle $\Sigma$ such that $h^t |_\Sigma$ is the identity for all $t \in [0,1]$, and $X^t \setminus \Sigma$ is contained in a compact subset of the bounded complementary domain of $\Sigma$ for all $t \in [0,1]$.
\end{defn}
Note that if (ii) from \cref{main1} holds, then we may additionally assume without loss of generality (i.e.\ without falsifying condition (ii) from \cref{main1}) that $X$ is encircled.
\subsection{Tracking bounded complementary domains}
\label{sec:tracking domains}
For the remainder of this section, we assume that $h: X \times [0,1] \to \mathbb{C}$ is an isotopy of a compact set $X \subset \mathbb{C}$ starting at the identity, such that $X^t$ is uniformly perfect for all $t \in [0,1]$ with the same constant $k$, and that $X$ is encircled.
Clearly such an isotopy can be extended over the unbounded complementary domain of $X$ as the identity for all $t \in [0,1]$. Hence we only need to consider bounded complementary domains for the remainder of this section.
Let $U$ be a bounded complementary domain of $X$. Choose a point $z_U \in U$. Clearly the isotopy $h$ can be extended to an isotopy $h_U: (X \cup \{z_U\}) \times [0,1] \to \mathbb{C}$ starting at the identity. Define $U^t$ to be the complementary domain of $X^t$ which contains the point $h_U^t(z_U) = z_U^t$. Let $\varphi_U^t: \mathbb{D} \to U^t$ be the analytic covering map such that $\varphi_U^t(0) = z_U^t$ and $(\varphi_U^t)'(0) > 0$. It is straightforward to see that if $t_n \to t_\infty$, then the pointed domains $\langle U^{t_n}, z_U^{t_n} \rangle$ converge to $\langle U^{t_\infty}, z_U^{t_\infty} \rangle$ in the sense of Carath\'{e}odory kernel convergence. Hence, by \cref{caratheodory}, the covering maps $\varphi_U^{t_n}$ converge to $\varphi_U^{t_\infty}$ uniformly on compact subsets of $\mathbb{D}$. We will always assume that the complementary domains $U^t$ of $X^t$ and analytic covering maps $\varphi_U^t: \mathbb{D} \to U^t$ are defined in this way. It is clear that this definition of $U^t$ does not depend on the choices of $z_U$ and $h_U$.
The following Theorem is a stronger characterization of isotopies of uniformly perfect plane compacta that can be extended over the plane than the one given in \cref{main1}, in the sense that condition (ii) of \cref{main1technical} is weaker than condition (ii) of \cref{main1}. We will in fact use this stronger characterization in \cref{sec:large components}.
\begin{thm}
\label{main1technical}
Suppose that $h: X \times [0,1] \to \mathbb{C}$ is an isotopy of a compactum $X \subset \mathbb{C}$ starting at the identity, such that $X^t$ is uniformly perfect with the same constant $k$ for each $t \in [0,1]$, and that $X$ is encircled. Then the following are equivalent:
\begin{enumerate}
\item $h$ extends to an isotopy of the entire plane $\mathbb{C}$;
\item For each bounded complementary domain $U$ of $X$ and each $\varepsilon > 0$ there exists $\delta > 0$ with the following property:
For any crosscut $Q$ in $U$ with endpoints $a,b \in \partial U$ and with $\mathrm{diam}(Q) < \delta$, there exists a family $\{\gamma_t: t \in [0,1]\}$ such that (1) $\gamma_t$ is a path in $U^t$ joining $a^t$ and $b^t$ for each $t \in [0,1]$, (2) $\gamma_0$ is homotopic to $Q$ in $U$ with endpoints fixed, (3) $\mathrm{diam}(\gamma_t([0,1])) < \varepsilon$ for all $t \in [0,1]$, and (4) there are lifts $\h{\gamma}_t$ of the paths $\gamma_t$ under $\varphi_U^t$ such that the sets $\h{\gamma}_t([0,1])$ vary continuously in $t$ with respect to the Hausdorff metric.
\end{enumerate}
\end{thm}
We have deliberately chosen to use subscripts in the notation for $\gamma_t$ (instead of superscripts like $\gamma^t$) to emphasize the point that the paths $\gamma_t$ are \emph{not} required to change continuously in the sense of an isotopy or homotopy -- we only require the weaker condition that the images of the lifts $\h{\gamma}_t$ vary continuously with respect to the Hausdorff metric. Even though condition (ii) of \cref{main1technical} is more cumbersome to state, we demonstrate in \cref{sec:large components} that it is easier to apply.
The proofs of \cref{main1} and \cref{main1technical} will be completed in \cref{sec:proof main1} below.
\subsection{Lifts in moving domains}
\label{sec:lifts moving}
As in \cref{sec:tracking domains}, we continue to assume that $h: X \times [0,1] \to \mathbb{C}$ is an isotopy of a compact set $X \subset \mathbb{C}$ starting at the identity, such that $X^t$ is uniformly perfect for all $t \in [0,1]$ with the same constant $k$, and that $X$ is encircled.
We begin by proving two statements about lifts under the covering maps $\varphi_U^t$, in the spirit of the results from \cref{sec:analytic covering maps} above.
\begin{lem}
\label{smallup}
Let $U$ be a bounded complementary domain of $X$. For every $\varepsilon > 0$ there exists $\delta > 0$ such that for any $t \in [0,1]$, if $\gamma$ is a path in $U^t$ with $\mathrm{diam}(\gamma([0,1])) < \delta$ and $\h{\gamma}$ is any lift of $\gamma$ under $\varphi_U^t$, then $\mathrm{diam}(\h{\gamma}([0,1])) < \varepsilon$.
\end{lem}
\begin{proof}
Suppose the lemma fails. Then there exists $\varepsilon > 0$, a sequence $\gamma_i$ of paths in $U^{t_i}$ and lifts $\h{\gamma}_i$ such that $\lim \mathrm{diam}(\gamma_i([0,1])) = 0$ and $\mathrm{diam}(\h{\gamma}_i([0,1])) \geq \varepsilon$ for all $i$. Choose two points $\h{a}_i, \h{b}_i$ in $\h{\gamma}_i([0,1])$ such that $|\h{a}_i - \h{b}_i| > \frac{\varepsilon}{2}$, and let $\h{\mathfrak{g}}_i$ be the hyperbolic geodesic with endpoints $\h{a}_i$ and $\h{b}_i$. Put $\varphi_U^{t_i}(\h{\mathfrak{g}}_i) = \mathfrak{g}_i$. By \cref{gehrhay}, $\mathrm{diam}(\mathfrak{g}_i) \to 0$. Since the geodesics $\h{\mathfrak{g}}_i$ are pieces of round circles or straight lines which cross $\partial \mathbb{D}$ perpendicularly and have diameter bigger than $\frac{\varepsilon}{2}$, there exist $\eta > 0$ and points $\h{x}_i \in \h{\mathfrak{g}}_i$ such that $|\h{x}_i| \leq 1 - \eta$ for all $i$. By choosing a subsequence we may assume that $t_i \to t_\infty$, $\h{x}_i \to \h{x}_\infty \in \mathbb{D}$, and $\lim \mathfrak{g}_i = z_\infty$ is a point in $\overline{U^{t_\infty}}$. Let $K_i$ be the component of $\h{\mathfrak{g}}_i\cap B(\h{x}_\infty,\frac{\eta}{2})$ containing the point $\h{x}_i$. We may assume $K_i \to K_\infty$, where $K_\infty$ is a non-degenerate continuum in $\mathbb{D}$. Since $\varphi_U^{t_i} \to \varphi_U^{t_\infty}$ uniformly on compact sets in $\mathbb{D}$, $\varphi_U^{t_\infty}(K_\infty) = z_\infty$, which is a contradiction since $\varphi_U^{t_\infty}$ is a covering map.
\end{proof}
Given a homotopy $\Gamma: [0,1] \times [0,1] \to \mathbb{C}$, we denote, for each $t \in [0,1]$, $\Gamma^t = \Gamma|_{[0,1]\times\{t\}}: [0,1] \to \mathbb{C}$.
\begin{lem}
\label{Hlift}
Let $U$ be a bounded complementary domain of $X$. Suppose that $\Gamma: [0,1] \times [0,1] \to \mathbb{C}$ is a homotopy with $\Gamma^t(0) = h^t(\Gamma^0(0)) \in \partial U^t$ and $\Gamma^t(s) \in U^t$ for all $s \in (0,1]$ and all $t \in [0,1]$. Let $\h{z} \in \mathbb{D}$ be such that $\varphi_U^0(\h{z}) = \Gamma^0(1)$. Then there exists a homotopy $\h{\Gamma}: [0,1] \times [0,1] \to \overline{\mathbb{D}}$ lifting $\Gamma$, i.e.\ $\varphi_U^t \circ \h{\Gamma}^t = \Gamma^t$ for all $t \in [0,1]$, and such that $\h{\Gamma}^0(1) = \h{z}$.
\end{lem}
\begin{proof}
Define $\Psi: \mathbb{D} \times [0,1] \to \bigcup_{t \in [0,1]} (U^t \times \{t\})$ by $\Psi(z,t) = (\varphi_U^t(z),t)$ for $t \in [0,1]$ and $z \in \mathbb{D}$.
\begin{claiminproof}
\label{claim:Psi covering}
$\Psi$ is a covering map.
\end{claiminproof}
\begin{proof}[Proof of \cref{claim:Psi covering}]
\renewcommand{\qedsymbol}{\textsquare (\cref{claim:Psi covering})}
Let $(y_0,t_0) \in U^{t_0} \times \{t_0\}$. Choose a small simply connected neighborhood $V$ of $y_0$ and $\delta > 0$ such that $\overline{V} \cap X^t = \emptyset$ and $V$ is evenly covered by $\varphi_U^t$ for all $t$ with $|t - t_0| \leq \delta$. Hence, $V \times (t_0-\delta, t_0+\delta)$ is a simply connected neighborhood of $(y_0,t_0)$ in $\bigcup_{t \in [0,1]} (U^t \times \{t\})$.
Next let $(x_0,t_0) \in \Psi^{-1}((y_0,t_0))$. Since the covering maps $\varphi_U^t$ are uniformly convergent on compact sets, it is not difficult to see that there exists a map $g: (t_0-\delta, t_0+\delta) \to \mathbb{D} \times [0,1]$ such that $g(t_0) = (x_0,t_0)$ and $\Psi \circ g(t) = (y_0,t)$ for all $t$ with $|t - t_0| < \delta$.
For each $t$ with $|t - t_0| < \delta$, let $x \in U^t$ be such that $g(t) = (x,t)$, and let $W^t$ be the component of $(\varphi_U^t)^{-1}(V)$ which contains the point $x$. Let $W = \bigcup_{t \in (t_0-\delta, t_0+\delta)} (W^t \times \{t\})$. Then it is not difficult to see that $\Psi |_W: W \to V \times (t_0-\delta, t_0+\delta)$ is a homeomorphism. Thus $\Psi$ is a covering map.
\end{proof}
Define $\alpha: [0,1] \times [0,1] \to \bigcup_{t \in [0,1]} (U^t \times \{t\})$ by $\alpha(s,t) = (\Gamma^t(s),t)$. Define the lift $\h{\alpha}$ of $\alpha$ under $\Psi$ as follows: first lift $\alpha |_{\{1\} \times [0,1]}$, using the covering map $\Psi$, to define $\h{\alpha} |_{\{1\} \times [0,1]}$ such that $\h{\alpha}(1,0) = (\h{z},0)$. Next, for each $t \in [0,1]$, use \cref{lift} to lift $\alpha |_{[0,1] \times \{t\}}$ to define $\h{\alpha} |_{[0,1] \times \{t\}}$, so that this lift coincides with the first lift of $\alpha |_{\{1\} \times [0,1]}$ at $(1,t)$. Finally, define $\h{\Gamma} = \pi_1 \circ \h{\alpha}$, where $\pi_1$ denotes the first coordinate projection.
Observe that for all $s \in (0,1]$, the function $\h{\alpha} |_{[s,1] \times [0,1]}$ is the unique lift of $\alpha |_{[s,1] \times [0,1]}$ under the covering map $\Psi$ with $\h{\alpha}(1,0) = \h{z}$, hence is continuous by standard covering map theory. It follows that $\h{\alpha}$, and hence $\h{\Gamma}$, is continuous on $(0,1] \times [0,1]$. It remains to prove that $\h{\Gamma}$ is continuous at all points of the form $(0,t_0)$.
Fix $t_0 \in [0,1]$ and $\varepsilon > 0$. Choose $\delta > 0$ small enough (using \cref{smallup}) so that for any $t \in [0,1]$ and any open arc $D$ in $U^t$ of diameter less than $\delta$, each lift $\h{D}$ of $D$ under $\varphi_U^t$ has diameter less than $\frac{\varepsilon}{3}$.
Choose $\eta_1,\eta_2 > 0$ small enough so that:
\begin{enumerate}
\item $|\h{\Gamma}^{t_0}(0) - \h{\Gamma}^{t_0}(\eta_1)| < \frac{\varepsilon}{3}$ (this is possible since the lifted path $\h{\Gamma}^{t_0}$ is continuous);
\item $|\h{\Gamma}^{t}(\eta_1) - \h{\Gamma}^{t_0}(\eta_1)| < \frac{\varepsilon}{3}$ for each $t \in [t_0-\eta_2, t_0+\eta_2]$ (this is possible since we already know that $\h{\Gamma}$ is continuous on $(0,1] \times [0,1]$); and
\item $\Gamma([0,\eta_1] \times [t_0-\eta_2, t_0+\eta_2]) \subset B(\Gamma^{t_0}(0),\frac{\delta}{2})$ (this is possible since $\Gamma$ is continuous).
\end{enumerate}
Now for any $t \in [t_0-\eta_2, t_0+\eta_2]$, the image $\Gamma^t([0,\eta_1])$ has diameter less than $\delta$, hence $\h{\Gamma}^t([0,\eta_1])$ has diameter less than $\frac{\varepsilon}{3}$. It follows that $\h{\Gamma}^t([0,\eta_1]) \subset B(\h{\Gamma}^{t_0}(0), \varepsilon)$. So $[0,\eta_1) \times (t_0-\eta_2, t_0+\eta_2)$ is a neighborhood of $(0,t_0)$ which is mapped by $\h{\Gamma}$ into $B(\h{\Gamma}^{t_0}(0), \varepsilon)$. Thus $\h{\Gamma}$ is continuous at $(0,t_0)$.
\end{proof}
Observe that in light of \cref{Hlift}, condition (ii) of \cref{main1} is stronger than condition (ii) of \cref{main1technical}. Therefore to complete the proofs of both \cref{main1} and \cref{main1technical}, we must prove that if condition (ii) of \cref{main1technical} holds then the isotopy $h$ extends to the entire plane $\mathbb{C}$. Hence we will assume for the remainder of this section that condition (ii) of \cref{main1technical} holds.
\begin{notation}[$\h{a}^t$]
Let $\h{a} \in \partial \mathbb{D}$ be any point at which $\varphi_U$ is defined (i.e.\ at which the radial limit of $\varphi_U$ exists). Using any sufficiently small crosscut $Q$ in $U$ which has one endpoint equal to $a = \varphi_U(\h{a})$ and which is the image of a crosscut of $\mathbb{D}$ having one endpoint equal to $\h{a}$, we obtain from condition (ii) of \cref{main1technical} a family of paths $\{\gamma_t: t \in [0,1]\}$ and lifts $\h{\gamma}_t$ with the properties listed there, and such that $\gamma_t(0) = a^t$ for each $t \in [0,1]$, and $\h{\gamma}_0(0) = \h{a}$. Because the sets $\h{\gamma}_t([0,1])$ vary continuously in $t$ with respect to the Hausdorff metric, the endpoint $\h{\gamma}_t(0)$ moves continuously in $t$. Now we define $\h{a}^t = \h{\gamma}_t(0)$ for each $t \in [0,1]$. Then $\h{a}^0 = \h{a}$ and $\varphi_U(\h{a}^t) = a^t$ for all $t \in [0,1]$. It is straightforward to see that this definition of $\h{a}^t$ is independent of the choice of crosscut $Q$ and of the paths $\gamma_t$ and lifts $\h{\gamma}_t$ afforded by condition (ii) of \cref{main1technical}. Thus, in the presence of condition (ii) of \cref{main1technical}, we can extend the superscript $t$ notation to points in $\partial \mathbb{D}$ at which $\varphi_U$ is defined. We will assume this is done for all such points $\h{a} \in \partial \mathbb{D}$ for the remainder of this section.
\end{notation}
\subsection{Hyperbolic laminations}
\label{sec:laminations}
The following condition on a set of hyperbolic geodesics $\mathcal{L}$ is inspired by a similar notion introduced by Thurston (cf.\ \cite{thur85}).
\begin{defn}
\label{d:lamination}
A \emph{hyperbolic lamination} $\mathcal{L}$ in a bounded domain $U \subset \mathbb{C}$ is a closed collection of hyperbolic geodesic crosscuts of $U$ such that any two distinct crosscuts in $\mathcal{L}$ are disjoint and have \emph{at most one} common endpoint in the boundary of $U$, and such that the family of crosscuts in $\mathcal{L}$ of diameter greater than or equal to $\varepsilon$ is compact for every $\varepsilon > 0$.
We denote by $\bigcup \mathcal{L}$ the union of all the crosscuts in $\mathcal{L}$. A \emph{gap} of $\mathcal{L}$ is the closure of a component of $U \setminus \bigcup \mathcal{L}$.
\end{defn}
The compactness condition in \cref{d:lamination} is equivalent to the following statement: if $\langle \mathfrak{g}_n \rangle_{n=1}^\infty$ is a sequence of elements of $\mathcal{L}$, then either $\mathrm{diam}(\mathfrak{g}_n) \to 0$, or there is a convergent subsequence whose limit is also an element of $\mathcal{L}$.
Fix a bounded complementary domain $U$ of $X$. Recall the Kulkarni-Pinkall construction described in \cref{sec:partitioning domains}: we consider the collection $\mathcal{B}$ of all open disks $B(c,r) \subset U$ such that $|\partial B(c,r) \cap \partial U| \geq 2$. For each such disk $B(c,r)$, $\mathrm{Hull}(c)$ denotes the convex hull of the set $\partial B(c,r) \cap \partial U$ in $B(c,r)$ \emph{using the hyperbolic metric $\rho_c$ on the disk $B(c,r)$}. Let $\mathcal{J}$ be the collection of all crosscuts of $U$ which are contained in the boundaries of the sets $\mathrm{Hull}(c)$ for $B(c,r) \in \mathcal{B}$.
Let
\[ \h{\mathcal{J}} = \{\h{Q}: \h{Q} \textrm{ is a component of } \varphi_U^{-1}(Q) \textrm{ for some } Q \in \mathcal{J}\} .\]
For any $Q \in \mathcal{J}$, it is straightforward to see that each component $\h{Q}$ of $\varphi_U^{-1}(Q)$ is an open arc whose closure is mapped homeomorphically onto $\overline{Q}$ by $\varphi_U$.
Given an (open) arc $A$, we denote the set of endpoints of $A$ by $\mathrm{Ends}(A)$; that is, $\mathrm{Ends}(A) = \{a,b\}$ means that $a$ and $b$ are the endpoints of (the closure of) $A$. Let $\mathcal{J}_\mathrm{Ends} = \{\mathrm{Ends}(Q): Q \in \mathcal{J}\}$, and let $\h{\mathcal{J}}_\mathrm{Ends} = \{\mathrm{Ends}(\h{Q}): \h{Q} \in \h{\mathcal{J}}\}$. These are sets of (unordered) pairs.
For each $t \in [0,1]$, let
\begin{align*}
\h{\mathcal{L}}^t = \{\h{\mathfrak{g}}^t: & \h{\mathfrak{g}}^t \textrm{ is the hyperbolic geodesic in } \mathbb{D} \\
& \textrm{ joining } \h{a}^t,\h{b}^t, \textrm{ where } \{\h{a},\h{b}\} \in \h{\mathcal{J}}_\mathrm{Ends}\}
\end{align*}
and let
\[ \mathcal{L}^t = \{\varphi_{U}^t(\h{\mathfrak{g}}^t): \h{\mathfrak{g}}^t \in \h{\mathcal{L}}^t\} .\]
Observe that $\mathcal{L}^0$ is the collection of all hyperbolic geodesic crosscuts of $U^0 = U$ which are homotopic (with endpoints fixed) to some crosscut in $\mathcal{J}$. For $t > 0$, the collection $\mathcal{L}^t$ is obtained from $\mathcal{L}^0$ by following the motion of the endpoints of the arcs in $\mathcal{L}^0$ under the isotopy and joining the resulting points in $\partial U^t$ by the hyperbolic geodesic crosscut $\mathfrak{g}^t = \varphi_{U}^t(\h{\mathfrak{g}}^t)$ in $U^t$ using the hyperbolic metric induced by $\varphi_U^t$. We do \emph{not} consider a Kulkarni-Pinkall style partition of the domain $U^t$ for $t > 0$.
We shall prove that $\mathcal{L}^t$ is a hyperbolic lamination in $U^t$ for each $t \in [0,1]$. We start with the following lemma.
\begin{lem}
\label{gtarc}
For any $t \in [0,1]$ and any $\h{\mathfrak{g}}^t \in \h{\mathcal{L}}^t$, the map $\varphi_U^t$ is one-to-one on $\h{\mathfrak{g}}^t$ and, hence, the corresponding element $\mathfrak{g}^t = \varphi_U^t(\h{\mathfrak{g}}^t) \in \mathcal{L}^t$ is a crosscut in $U^t$. Moreover, if $\mathfrak{g}_1^t,\mathfrak{g}_2^t$ are two distinct elements of $\mathcal{L}^t$, then $\mathfrak{g}_1^t \cap \mathfrak{g}_2^t = \emptyset$ (though their closures may have at most one common endpoint in $\partial U^t$).
\end{lem}
\begin{proof}
Let $\mathfrak{g}^0$ be an arbitrary hyperbolic crosscut of $\mathcal{L}^0$ with endpoints $a$ and $b$. By the discussion at the end of \cref{sec:lifts moving}, we can lift $\mathfrak{g}^0$ to geodesics $\h{\mathfrak{g}}^t$ with continuously varying endpoints. Let $\h{a}^t$ and $\h{b}^t$ be the endpoints of $\h{\mathfrak{g}}^t$ corresponding to $a^t$ and $b^t$, respectively. Since $\mathfrak{g}^0$ is an arc, all components $\h{\mathfrak{g}}^0$ of $\varphi_U^{-1}(\mathfrak{g}^0)$ are pairwise disjoint geodesic crosscuts of $\mathbb{D}$. Since the endpoints of all these crosscuts move continuously in $t$ and the points $a^t$ and $b^t$ are distinct, the geodesics $\h{\mathfrak{g}}^t$ are also pairwise disjoint open arcs for all $t$. Hence, $\varphi_U^t$ is one-to-one on each of these crosscuts and their common image is a geodesic arc $\mathfrak{g}^t$. By a similar argument, the lifts $\h{\mathfrak{g}}_1^t$ and $\h{\mathfrak{g}}_2^t$ of two distinct geodesics $\mathfrak{g}_1^t$ and $\mathfrak{g}_2^t$ in $\mathcal{L}^t$ are pairwise disjoint in $\mathbb{D}$ and, hence, $\mathfrak{g}_1^t \cap \mathfrak{g}_2^t = \emptyset$. It follows easily from the construction that two distinct geodesics in $\mathcal{L}^0$ share at most one common endpoint and, hence, the same is true for $\mathcal{L}^t$.
\end{proof}
To prove that $\mathcal{L}^t$ is a hyperbolic lamination in $U^t$ for each $t \in [0,1]$, it remains to show that the collection of arcs in $\mathcal{L}^t$ of diameter at least $\varepsilon$ is compact for every $\varepsilon > 0$. This will follow from the next lemma, which states that even for varying parameters $t_n$, the limit of a convergent sequence of elements of the collections $\mathcal{L}^{t_n}$ belongs to the collection $\mathcal{L}^{t_\infty}$ corresponding to the limit parameter.
\begin{lem}
\label{gtconverge}
Let $\{a_1,b_1\}, \{a_2,b_2\}, \ldots$ be a sequence of pairs in $\mathcal{J}_\mathrm{Ends}$ such that $a_n \to a_\infty$ and $b_n \to b_\infty$, where $a_\infty$ and $b_\infty$ are distinct points in $\partial U$. Then $\{a_\infty,b_\infty\} \in \mathcal{J}_\mathrm{Ends}$.
Furthermore, let $t_1,t_2,\ldots \in [0,1]$ be a sequence such that $t_n \to t_\infty \in [0,1]$. For each $n \in \{1,2,\ldots\} \cup \{\infty\}$ and each $t \in [0,1]$, let $\mathfrak{g}_n^t \in \mathcal{L}^t$ be the geodesic with endpoints $a_n^t$ and $b_n^t$. Then $\mathfrak{g}_n^{t_n} \to \mathfrak{g}_\infty^{t_\infty}$ in the sense that there exist homeomorphisms $\theta_n: \mathfrak{g}_\infty^{t_\infty} \to \mathfrak{g}_n^{t_n}$ such that $\theta_n \to \mathrm{id}$.
\end{lem}
\begin{proof}
Let $\h{\mathcal{A}} \subset \partial \mathbb{D}$ be the set of all points in $\partial \mathbb{D}$ at which $\varphi_U^0$ is defined, and let $\mathcal{A} = \{\varphi_U^0(x): x \in \h{\mathcal{A}}\}$. This set $\mathcal{A}$ is the set of all accessible points in $\partial U$ by Theorem~\ref{lift}. The set $\mathcal{A}$ is dense in $\partial U$ and the set $\h{\mathcal{A}}$ of lifts of points in $\mathcal{A}$ under $\varphi_U^0$ is dense in $\partial \mathbb{D}$ by Theorem~\ref{Fatou}.
\begin{claiminproof}
\label{claim:contangle}
For each $t \in [0,1]$, the function $\alpha^t: \partial \mathbb{D} \to \partial \mathbb{D}$ which extends the function that maps each $\h{y} \in \h{\mathcal{A}}$ to $\h{y}^t$, and is defined by $\alpha^t(x) = \lim_{\{\h{y} \to x \,\mid\, \h{y} \in \h{\mathcal{A}} \}} \h{y}^t$ for each $x \in \partial \mathbb{D}$, is a homeomorphism. Moreover $\alpha: \partial \mathbb{D} \times [0,1] \to \partial \mathbb{D}$, defined by $\alpha(x,t) = \alpha^t(x)$, is an isotopy starting at the identity.
\end{claiminproof}
\begin{proof}[Sketch of the proof of \cref{claim:contangle}]
\renewcommand{\qedsymbol}{\textsquare (\cref{claim:contangle})}
Since the restriction $\alpha^t|_{\h{\mathcal{A}}}$ is one-to-one and preserves circular order, it suffices to show that $\alpha^t(\h{\mathcal{A}})$ is dense for each $t$.
The proof will make use of the following notion. Let $\mathbb{S}$ be the unit circle, let $\gamma: \mathbb{S} \to \mathbb{C}$ be a continuous function, and let $O$ be a point in the unbounded complementary domain of $\gamma(\mathbb{S})$. A complementary domain $V$ of $\gamma(\mathbb{S})$ is \emph{odd} if every arc $J$ from $O$ to a point in $V$ which meets $\gamma(\mathbb{S})$ transversally in a finite set intersects $\gamma(\mathbb{S})$ an odd number of times, counted with multiplicity; cf.\ \cite[Lemma 2.1]{OT82}.
Fix $\varepsilon>0$. By Theorem~\ref{Riesz}, $\alpha^0(\h{\mathcal{A}})$ is dense. By condition (ii) of Theorem~\ref{main1technical} and Lemma~\ref{smallup}, there exists $\delta>0$ so that for any crosscut $C$ of any complementary domain of $X^0$ with $\mathrm{diam}(C) < \delta$, all lifts of the paths $\gamma_t^C$ (whose existence follows from condition (ii)) have diameters less than $\varepsilon$. Since $X$ is uniformly perfect, one can choose finitely many simple closed curves $S_i$ which bound disjoint closed disks $D_i$ so that $X^0\subset \bigcup_i D_i$, $X^0\cap \bigcup_i \partial D_i$ is finite, and for all $i$ and every component $C$ of $S_i\setminus X^0$ the diameter of $C$ is less than $\delta$. Moreover, we can assume that for all $t$, $\varphi^t_U(0)$ is contained in the unbounded complementary component of $\bigcup_C \gamma^C_t([0,1])$. Then all lifts $\widehat\gamma_t^C$ have diameter less than $\varepsilon$.
Let $F^0=\bigcup_i X^0\cap S_i$ and $F^t=h^t(F^0)$. Since for all $C$ and all $t$, $\gamma_t^C((0,1))\cap X^t=\emptyset$ and $\gamma^C_t$ varies continuously in the Hausdorff metric, it follows that every point of $X^t\setminus F^t$ is contained in an odd bounded complementary component of $\bigcup_C \gamma^C_t([0,1])$.
Every component $C$ of $S_i\setminus X^0$ is a crosscut which defines a collection of paths $\gamma_t^C$ by condition (ii) of Theorem~\ref{main1technical}. For all $t$, let $\h{\mathcal C}_t$ be the collection of all lifts of all the paths $\gamma_t^C$.
Fix $t$.
Suppose that $r$ is a radius of the unit disk $\mathbb D$ so that $R=\varphi^t_U(r)$ lands on a point in $X^t \setminus F^t$.
Then a terminal segment $B$ of $R$ must be in an odd complementary domain of $\bigcup_C \gamma^C_t([0,1])$. Let $A = R \setminus B$ be the initial segment of $R$, and let $b$ and $a = r \setminus b$ be the subsegments of $r$ that correspond to $B$ and $A$, respectively. Then $b$ is disjoint from all crosscuts in $\h{\mathcal C}_t$. Suppose that $b$ is not contained in the shadow of any of these crosscuts. Then we may assume that the intersection of $a$ with any member of $\h{\mathcal C}_t$ is finite and even. Since we may also assume that $a$ meets only finitely many of the lifted crosscuts, the intersection of $a$ with the union of all members of $\h{\mathcal C}_t$ is a finite even number.
Since $\varphi^t_U$ is a local homeomorphism, the number of intersections of $A$ with the union of the paths $\gamma_t^C$ is also even, a contradiction since $A$ terminates in an odd domain.
\end{proof}
Note that by construction, $\mathcal{J}$ is almost a lamination, except that multiple arcs in $\mathcal{J}$ can share the same two endpoints. In particular, if $C(a_n b_n)$ are circular arcs in $\mathcal{J}$ joining the points $a_n$ and $b_n$ then, after taking a subsequence if necessary, $\lim C(a_n b_n)$ is a circular arc in $\mathcal{J}$ joining $a_\infty$ to $b_\infty$. From this it follows easily that $\mathcal{L}^0$ is a lamination and, if $\mathfrak{g}_n \in \mathcal{L}^0$ is the geodesic joining $a_n$ to $b_n$, then $\lim \mathfrak{g}_n = \mathfrak{g}_\infty$, where $\mathfrak{g}_\infty \in \mathcal{L}^0$ is the geodesic joining $a_\infty$ to $b_\infty$. Choose lifts $\h{\mathfrak{g}}_n^t$ and $\h{\mathfrak{g}}_\infty^t$ under $\varphi_U^t$ for each $t \in [0,1]$ as in the proof of \cref{gtarc}, such that $\lim \h{\mathfrak{g}}_n^0 = \h{\mathfrak{g}}^0_\infty$.
Fix $k$. By \cref{claim:contangle}, $\lim \h{\mathfrak{g}}_n^{t_k} = \h{\mathfrak{g}}_\infty^{t_k}$. This implies immediately that $\liminf \mathfrak{g}_n^{t_k} \supset \mathfrak{g}_\infty^{t_k}$. Since the points $a_n$ and $a_\infty$ can be joined by a small crosscut in $U$, it follows from assumption (ii) of \cref{main1technical} that the points $a_n^{t_k}$ and $a_\infty^{t_k}$ can be joined by a small path. Hence, points $x_n^{t_k}$ in $\mathfrak{g}_n^{t_k}$ close to an endpoint (say $a_n^{t_k}$) can be joined to the endpoint $a_n^{t_k}$ by a small path (first by a small arc to a point in $\mathfrak{g}_\infty^{t_k}$ and then by a small arc in $U^{t_k}$ to the endpoint $a_\infty^{t_k}$, followed by a small path in $U^{t_k}$ to $a_n^{t_k}$). By \cref{gehrhay}, the sub-geodesic of $\mathfrak{g}_n^{t_k}$ from $x_n^{t_k}$ to $a_n^{t_k}$ is small and we can conclude that $\lim \mathfrak{g}_n^{t_k} = \mathfrak{g}_\infty^{t_k}$ for each $k$. Since the maps $\varphi_U^t$ are uniformly convergent on compact subsets, $\liminf \mathfrak{g}_\infty^{t_k} \supset \mathfrak{g}_\infty^{t_\infty}$. Since by the above argument the sub-geodesic from a point close to the endpoint of $\mathfrak{g}_\infty^{t_k}$ to this endpoint is small, $\lim \mathfrak{g}_\infty^{t_k} = \mathfrak{g}_\infty^{t_\infty}$. It is now easy to see that there exist homeomorphisms $\theta_n: \mathfrak{g}_\infty^{t_\infty} \to \mathfrak{g}_n^{t_n}$ such that $\theta_n \to \mathrm{id}$.
\end{proof}
For each $t \in [0,1]$, we conclude from \cref{gtarc} and \cref{gtconverge} (using $t_n = t$ for all $n$) that $\mathcal{L}^t$ is a lamination in $U^t$.
\subsection{Proof of \cref{main1technical}}
\label{sec:proof main1}
In this section we will complete the proof of \cref{main1technical} (and hence of \cref{main1} as well).
We will employ here the path midpoint function $\mathsf{m}$ described in \cref{thm:midpt} of \cref{sec:midpoints}.
Let $U$ be any bounded complementary domain of $X$, and consider the hyperbolic laminations $\mathcal{L}^t$ in $U^t$ as constructed above in \cref{sec:laminations}.
Given any element $\mathfrak{g} \in \mathcal{L}^0$, we extend the isotopy $h$ over $\mathfrak{g}$ to $h_\mathfrak{g}: (X \cup \mathfrak{g}) \times [0,1] \to \mathbb{C}$ by defining $h_\mathfrak{g}^t(\mathsf{m}(\mathfrak{g})) = \mathsf{m}(\mathfrak{g}^t)$ and, if $x \in \mathfrak{g}$ is located on the subarc with endpoints $\mathsf{m}(\mathfrak{g})$ and $a$ (respectively, $b$), then $h_\mathfrak{g}^t(x)$ is the unique point on the subarc of $\mathfrak{g}^t$ with endpoints $\mathsf{m}(\mathfrak{g}^t)$ and $a^t$ (respectively, $b^t$) such that $\rho^0(x, \mathsf{m}(\mathfrak{g})) = \rho^t(h_\mathfrak{g}^t(x), \mathsf{m}(\mathfrak{g}^t))$, using the hyperbolic metric $\rho^t$ on $U^t$.
Now extend $h$ to $h_\mathcal{L}: (X \cup \bigcup \mathcal{L}^0) \times [0,1] \to \mathbb{C}$ by defining
\[ h_\mathcal{L}(x,t) =
\begin{cases}
h(x,t) & \textrm{if $x \in X$} \\
h_\mathfrak{g}(x,t) & \textrm{if $x \in \mathfrak{g} \in \mathcal{L}^0$.}
\end{cases}
\]
Then for each $t \in [0,1]$, $h_\mathcal{L}^t$ is clearly a bijection from $X \cup \bigcup \mathcal{L}^0$ to $X^t \cup \bigcup \mathcal{L}^t$.
\begin{claim}
\label{claim:cont}
$h_\mathcal{L}$ is continuous.
\end{claim}
\begin{proof}[Proof of \cref{claim:cont}]
\renewcommand{\qedsymbol}{\textsquare (\cref{claim:cont})}
Suppose that $(x_i,t_i) \to (x_\infty,t_\infty)$ and $x_i \in \mathfrak{g}_i \in \mathcal{L}^0$. If there exists $\varepsilon > 0$ so that $\mathrm{diam}(\mathfrak{g}_i) > \varepsilon$ for all $i$, then we may assume, by taking a subsequence if necessary, that $\lim \mathfrak{g}_i = \mathfrak{g}_\infty \in \mathcal{L}^0$. If $x_\infty$ is not an endpoint of $\mathfrak{g}_\infty$ then, by uniform convergence of $\varphi_U^t$ on compact sets, $\lim h_\mathcal{L}(x_i,t_i) = h_\mathcal{L}(x_\infty,t_\infty)$. If $x_\infty$ is an endpoint of $\mathfrak{g}_\infty$ (so $x_\infty \in X$), then $\rho^0(x_i,\mathsf{m}(\mathfrak{g}_i)) \to \infty$ and again $\lim h_\mathcal{L}(x_i,t_i) = h_\mathcal{L}(x_\infty,t_\infty) = h(x_\infty,t_\infty)$. Hence we may assume that $\lim \mathrm{diam}(\mathfrak{g}_i) = 0$. Then $x_\infty \in X$ and $\lim \mathrm{diam}(h_\mathcal{L}^{t_i}(\mathfrak{g}_i)) = 0$. Hence, if $a_i$ is an endpoint of $\mathfrak{g}_i$, then $\lim h_\mathcal{L}(x_i,t_i) = \lim h(a_i,t_i) = h(x_\infty,t_\infty)$ as desired.
\end{proof}
Finally, we repeat the above procedure on each bounded complementary domain $U$ of $X$ to extend $h$ over the hyperbolic lamination obtained from the Kulkarni-Pinkall construction as in \cref{sec:laminations} on each such $U$. The result is a function $H: Y \times [0,1] \to \mathbb{C}$ which is defined on the union $Y$ of $X$ with all the hyperbolic laminations of all bounded complementary domains of $X$. Note that for any $\varepsilon > 0$, there are only finitely many bounded complementary domains of $X$ which contain a disk of diameter at least $\varepsilon$, and hence there are only finitely many such domains whose corresponding hyperbolic lamination contains an arc of diameter at least $\varepsilon$. This implies, as above, that $H$ is continuous.
Note that each bounded complementary domain of $Y$ is a gap of the hyperbolic lamination of one of the bounded complementary domains of $X$. Since all such gaps are simply connected, $Y$ is a continuum. Hence by \cite{ot10} the isotopy $H$ of $Y$ can be extended over the entire plane.
This completes the proof of \cref{main1technical}. By the comments at the end of \cref{sec:lifts moving}, this also completes the proof of \cref{main1}.
\bigskip
In \cref{main1} we assumed that $X^t$ is uniformly perfect for each $t \in [0,1]$. This assumption allows for the use of the powerful analytic results described in \cref{sec:analytic covering maps}. It is natural to wonder if this assumption is really needed. We conjecture that this is not the case.
\begin{conj}
Suppose that $X$ is a plane compactum and $h: X \times [0,1] \to \mathbb{C}$ is an isotopy starting at the identity. Then the following are equivalent:
\begin{enumerate}
\item $h$ extends to an isotopy of the entire plane,
\item for each $\varepsilon > 0$ there exists $\delta > 0$ such that for every complementary domain $U$ of $X$ and each crosscut $Q$ of $U$ with $\mathrm{diam}(Q) < \delta$, $h$ can be extended to an isotopy $h_Q: (X \cup Q) \times [0,1] \to \mathbb{C}$ such that for all $t \in [0,1]$, $\mathrm{diam}(h_Q^t(Q)) < \varepsilon$.
\end{enumerate}
\end{conj}
\section{Compact sets with large components}
\label{sec:large components}
The remaining part of this paper is devoted to a proof of the following theorem.
\begin{thm}
\label{main2}
Suppose $X \subset \mathbb{C}$ is a compact set for which there exists $\eta > 0$ such that every component of $X$ has diameter bigger than $\eta$. Let $h: X \times [0,1] \to \mathbb{C}$ be an isotopy which starts at the identity. Then $h$ extends to an isotopy of the entire plane which starts at the identity.
\end{thm}
Suppose $X \subset \mathbb{C}$ is a compact set for which there exists $\eta > 0$ such that every component of $X$ has diameter bigger than $\eta$. Let $h: X \times [0,1] \to \mathbb{C}$ be an isotopy which starts at the identity.
Clearly in this case $X^t$ is uniformly perfect with the same constant $k$ for each $t \in [0,1]$, and we may assume that $X$ is encircled. By scaling, we may also assume that for any $a \in X$ and any component $C$ of $X$, there exists $c \in C$ such that $|a^t - c^t| \geq 1$ for all $t \in [0,1]$. \emph{We will make these assumptions for the remainder of the paper}.
We will prove \cref{main2} using the characterization from \cref{main1technical}. To this end, we fix (again for the remainder of the paper) an arbitrary bounded complementary domain $U$ of $X$.
To satisfy condition (ii) of \cref{main1technical} we must construct, for a sufficiently small crosscut $Q$ of $U$ with endpoints $a$ and $b$, a family of paths $\gamma_t$ in $U^t$ with endpoints $a^t$ and $b^t$, which remain small during the isotopy, such that $\gamma_0$ is homotopic to $Q$ in $U$ with endpoints fixed, and which can be lifted under $\varphi_U^t$ to paths $\h{\gamma}_t$ in $\mathbb{D}$ which are continuous in the Hausdorff metric. We will show first that, in the case that $X$ has large components, it suffices to construct the family of paths $\gamma_t$ to be continuous in the Hausdorff metric.
\begin{lem}
\label{liftexist}
Let $a,b \in \partial U$. Suppose that $\{\gamma_t: t \in [0,1]\}$ is a family such that $\gamma_t$ is a path in $U^t$ joining $a^t$ and $b^t$ with $\mathrm{diam}(\gamma_t([0,1])) < \frac{1}{2}$ for each $t \in [0,1]$, and the sets $\gamma_t([0,1])$ vary continuously in $t$ with respect to the Hausdorff metric. Then there are lifts $\h{\gamma}_t$ of the paths $\gamma_t$ under $\varphi_U^t$ such that the sets $\h{\gamma}_t([0,1])$ also vary continuously in $t$ with respect to the Hausdorff metric.
\end{lem}
\begin{proof}
Suppose that the family $\gamma_t$ is as specified in the statement. Recall that $d_H$ denotes the Hausdorff distance. Fix $t_0 \in [0,1]$. It suffices to show that, given a lift $\h{\gamma}_{t_0}$ of $\gamma_{t_0}$ and $0 < \varepsilon < \frac{1}{2}$, there exists $\delta > 0$ and lifts $\h{\gamma}_t$ of $\gamma_t$ for $|t - t_0| < \delta$ such that $d_H(\h{\gamma}_t([0,1]), \h{\gamma}_{t_0}([0,1])) < \varepsilon$.
By \cref{smallup} we can choose small disjoint open balls $B_a$ centered at $a^{t_0}$ and $B_b$ centered at $b^{t_0}$ of diameters less than $\frac{1}{4}$ such that for all $t$ and any component $C$ of $\partial B_a \cap U^t$ or $\partial B_b \cap U^t$, the diameter of each component of $(\varphi_U^t)^{-1}(C)$ is less than $\frac{\varepsilon}{4}$.
Let $s_a,s_b \in (0,1)$ be the numbers such that $\gamma_{t_0}(s_a) \in \partial B_a$, $\gamma_{t_0}([0,s_a)) \subset B_a$, $\gamma_{t_0}(s_b) \in \partial B_b$, and $\gamma_{t_0}((s_b,1]) \subset B_b$. Denote $z_a = \gamma_{t_0}(s_a)$ and $z_b = \gamma_{t_0}(s_b)$. Choose an open set $O \subset \mathbb{C}$ such that $\gamma_{t_0}([s_a,s_b]) \subset O$, $\overline{O} \subset U^{t_0}$, and the diameter of $O \cup B_a \cup B_b$ is less than $1$. For $t$ sufficiently close to $t_0$, we have $\overline{O} \subset U^t$ and $\gamma_t([0,1]) \subset O \cup B_a \cup B_b$. Since each component of $X^t$ has diameter at least $1$, we have that no bounded complementary component of $O \cup ((B_a \cup B_b) \setminus X^t)$ contains any points of $X^t$. It follows that there exists a simply connected open set $P_t$ in $U^t$ such that $\gamma_t((0,1)) \cup O \subset P_t$. This means that the covering map $\varphi_U^t$ maps each component of $(\varphi_U^t)^{-1}(P_t)$ homeomorphically onto $P_t$.
Since the maps $\varphi_U^t$ converge uniformly on compact sets as $t \to t_0$, for $t$ sufficiently close to $t_0$ there exists exactly one component $\h{P}_t$ of $(\varphi_U^t)^{-1}(P_t)$ such that $\h{\gamma}_{t_0}([s_a,s_b]) \subset \h{P}_t$. For such $t$, define the lift $\h{\gamma}_t$ of $\gamma_t$ by $\h{\gamma}_t = (\varphi_U^t |_{\h{P}_t})^{-1} \circ \gamma_t$.
To see that these lifts are Hausdorff close to $\h{\gamma}_{t_0}$, first choose $\nu > 0$ and then $\delta > 0$ small enough so that for all $t$ with $|t - t_0| < \delta$ we have:
\begin{enumerate}
\item $|(\varphi_U^t|_{\h{P}_t})^{-1}(x_1) - (\varphi_U^{t_0}|_{\h{P}_{t_0}})^{-1}(x_2)| < \frac{\varepsilon}{2}$ for all $x_1,x_2 \in \mathbb{C}$ with $|x_1 - x_2| < \nu$ and either $x_1 \in O$ or $x_2 \in O$;
\item $d_H(\gamma_t([0,1]), \gamma_{t_0}([0,1])) < \nu$; and
\item $\gamma_t([0,1]) \cap (\partial B_a \setminus O) = \emptyset$ and $\gamma_t([0,1]) \cap (\partial B_b \setminus O) = \emptyset$.
\end{enumerate}
Given $t$ with $|t - t_0| < \delta$, let $C_{a,t}$ be the component of $\partial B_a \setminus X^t$ which contains $z_a$, and let $C_{b,t}$ be the component of $\partial B_b \setminus X^t$ which contains $z_b$. Let $\h{C}_{a,t}$ and $\h{C}_{b,t}$ be lifts of $C_{a,t}$ and $C_{b,t}$ which contain $(\varphi_U^t |_{\h{P}_t})^{-1}(z_a)$ and $(\varphi_U^t |_{\h{P}_t})^{-1}(z_b)$, respectively. By the choice of $B_a$ and $B_b$, the diameters of $\h{C}_{a,t}$ and $\h{C}_{b,t}$ are less than $\frac{\varepsilon}{4}$. It follows from (iii) that $\h{\gamma}_t([0,1])$ is contained in $(\varphi_U^t|_{\h{P}_t})^{-1}(O)$ together with the small region under $\h{C}_{a,t}$ and the small region
under $\h{C}_{b,t}$. Note that these small regions have diameters less than $\frac{\varepsilon}{2}$. This means that for every point $\h{p}$ in $\h{\gamma}_t([0,1])$ there is a point $\h{q} \in \h{\gamma}_t([0,1]) \cap (\varphi_U^t|_{\h{P}_t})^{-1}(O)$ such that $|\h{p} - \h{q}| < \frac{\varepsilon}{2}$. Then, since $q = \varphi_U^t(\h{q}) \in O$, by (ii) there is a point $r \in \gamma_{t_0}([0,1])$ such that $|q - r| < \nu$. If we let $\h{r}$ be the lift $\h{r} = (\varphi_U^{t_0}|_{\h{P}_{t_0}})^{-1}(r) \in \h{\gamma}_{t_0}([0,1])$, then by (i) we have $|\h{q} - \h{r}| < \frac{\varepsilon}{2}$. Then by the triangle inequality, $|\h{p} - \h{r}| < \varepsilon$. Similarly, we can show that for any $\h{r} \in \h{\gamma}_{t_0}([0,1])$ there is a point $\h{p} \in \h{\gamma}_t([0,1])$ with $|\h{p} - \h{r}| < \varepsilon$. Thus $d_H(\h{\gamma}_t([0,1]), \h{\gamma}_{t_0}([0,1])) < \varepsilon$.
\end{proof}
\begin{notation}[$\varepsilon$, $\nu$]
For the remainder of the paper, we fix an arbitrary $\varepsilon > 0$. For later use, fix $0 < \nu < \frac{1}{3}$ small enough so that $\frac{8 \nu}{1 - \nu} < \frac{\varepsilon}{2}$.
\end{notation}
To prove \cref{main2}, it remains to show that there exists $\delta > 0$ such that if $Q$ is a crosscut of $U$ with endpoints $a$ and $b$ with diameter less than $\delta$, there is a family of paths $\gamma_t$ such that (1) $\gamma_t$ is a path in $U^t$ joining $a^t$ and $b^t$ for each $t \in [0,1]$, (2) $\gamma_0$ is homotopic to $Q$ in $U$ with endpoints fixed, (3) $\mathrm{diam}(\gamma_t([0,1])) < \varepsilon$ for all $t \in [0,1]$, and (4) the sets $\gamma_t([0,1])$ vary continuously in $t$ with respect to the Hausdorff metric.
In Section 4.1, we will transform the compactum $X$, so that the crosscut $Q$ becomes the straight line segment $[0,1]$ in the plane, to simplify the ensuing constructions and arguments. We will refer to the transformed plane as the ``normalized plane'', and the image of $X$ will be denoted by $\til{X}$. In Section 4.2, we will lift the isotopy under an exponential covering map. The domain of the covering map will be called the ``exponential plane'', and the preimage of $\til{X}$ will be denoted by $\bol{X}$. In Sections 4.3 and 4.4 we will replace the lift of the crosscut $[0,1]$ of $\til{X}$ by an equidistant set which varies continuously in $t$. The projection of this equidistant set to the original plane containing $X^t$ will be shown in Section 4.5 to be the desired path $\gamma_t$.
\subsection{The normalized plane}
\label{sec:norm plane}
In the following sections, we will make use of a covering map (which we will refer to as the ``exponential map'') of the plane minus the endpoints of a crosscut $Q$. In order to simplify the notation and work with a single exponential map below, we will normalize the compactum $X$ and the crosscut $Q$ of $U$ with endpoints $a$ and $b$ so that for all $t$, $a^t = 0$, $b^t = 1$, and $Q$ becomes the straight line segment $(0,1) \subset \mathbb{R}$.
By composing with translations it is easy to see that given a crosscut $Q$ of $X$ with endpoints $a$ and $b$ we can always assume that the point $a$ is the origin $0$ and that this point remains fixed throughout the isotopy (i.e., $a^t = 0$ for all $t$).
Let $Q$ be a crosscut of $U$ with endpoints $0$ and $b$ such that $\mathrm{diam}(Q) < \frac{1}{4}$. We will impose further restrictions on the diameter of $Q$ later.
Since all arcs in the plane are tame, there exists a homeomorphism $\Theta: \mathbb{C} \to \mathbb{C}$ such that $\Theta(Q)$ is the straight line segment joining the points $0$ and $b$, $\Theta(0) = 0$, $\Theta(b) = b$ and $\Theta|_{\mathbb{C} \setminus B(0,2\mathrm{diam}(Q))} = \mathrm{id}_{\mathbb{C} \setminus B(0,2\mathrm{diam}(Q))}$. Let $L^t: \mathbb{C} \to \mathbb{C}$ be the linear map of the complex plane defined by $L^t(z) = \frac{1}{\Theta(b^t)} \,z$.
\begin{notation}[$\til{X}$, $\til{x}^t$]
Define $\til{X} = L^0 \circ \Theta(X)$ and define the isotopy
$\til{h}: \til{X} \times [0,1] \to \mathbb{C}$ by
\[ \til{h}(\til{x},t) = L^t \circ \Theta \circ h((L^0 \circ \Theta)^{-1}(\til{x}),t) = L^t \circ \Theta(x^t) .\]
Here and below we adopt the notation that $\til{x} = L^0 \circ \Theta(x)$ for all $x \in X$ and, hence, $\til{h}^t(\til{x}) = \til{x}^t = L^t \circ \Theta(x^t)$. As indicated above, we will use ordinary letters to denote objects in the plane containing $X$ and attach a tilde to the corresponding objects in the normalized plane (the plane containing $\til{X}$).
\end{notation}
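As a quick consistency check (it uses only the definitions of $\Theta$ and $L^t$ above, and is not needed elsewhere), the normalization indeed sends the endpoints of $Q$ to $0$ and $1$ for all $t$ and straightens $Q$ itself: since $a^t = 0$, $\Theta(0) = 0$ and $\Theta(b) = b$, we have
\[ \til{a}^t = L^t(\Theta(0)) = 0, \qquad \til{b}^t = \frac{\Theta(b^t)}{\Theta(b^t)} = 1, \qquad \til{Q} = L^0(\Theta(Q)) = L^0\bigl(\{sb : s \in (0,1)\}\bigr) = (0,1) ,\]
where the last equality uses $L^0(z) = \frac{z}{b}$, and $\Theta(b^t) \neq 0$ since $b^t \neq a^t = 0$ and $\Theta$ is injective with $\Theta(0) = 0$.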
In the next lemma we establish some simple properties of the induced isotopy $\til{h}$.
\begin{lem}
\label{sizes}
There exists $\delta > 0$ such that if the crosscut $Q$ of $X$ with endpoints $0$ and $b$ has diameter $\mathrm{diam}(Q) < \delta$, then the induced isotopy $\til{h}: \til{X} \times [0,1] \to \mathbb{C}$ has the following properties:
\begin{enumerate}
\item $\til{h}^0 = \mathrm{id}_{\til{X}}$, $\til{X}$ contains the points $0$ and $1$, the isotopy $\til{h}$ fixes these points and the segment $(0,1) \subset \mathbb{R}$ in the complex plane is disjoint from $\til{X}$;
\item If $\til{x}^s \in (0,1)$ for some $s \in [0,1]$, then for each $t \in [0,1]$, $|\til{x}^t| < \frac{\nu}{|\Theta(b^t)|}$; and
\item For every component $\til{C}$ of $\til{X}$ there exists a point $\til{c} \in \til{C}$ such that for all $t \in [0,1]$, $|\til{c}^t| \geq \frac{1}{|\Theta(b^t)|}$.
\end{enumerate}
\end{lem}
\begin{proof}
It follows immediately that $\til{h}^0 = \mathrm{id}|_{\til{X}}$, the isotopy $\til{h}$ fixes the points $0$ and $1$ and that the interval $(0,1)$ is disjoint from $\til{X}$. Hence (i) holds.
Since $h$ is uniformly continuous we can choose $0 < \delta < \frac{\nu}{4}$ so that if $x \in X$ and $|x^s| < 2\delta$ for some $s \in [0,1]$, then $|x^t| < \frac{\nu}{2}$ for all $t$. Suppose $\til{x}^s \in (0,1)$ for some $s\in [0,1]$. Then $x^s \in Q$ and hence $|x^t| < \frac{\nu}{2}$ for all $t$. Then $|\til{x}\,^t| < \frac{\nu}{2|\Theta(b^t)|} + \frac{2\delta}{|\Theta(b^t)|} \leq \frac{\nu}{|\Theta(b^t)|}$, using that $\Theta|_{\mathbb{C}\setminus B(0,2\delta)}=\mathrm{id}$ and that $\delta < \frac{\nu}{4}$, and so (ii) holds.
By the standing assumption on $X$ stated after \cref{main2}, for every component $C$ of $X$ there exists a point $c \in C$ such that for all $t$, $|c^t| \geq 1$. Note that $\Theta(c^t) = c^t$ for all $t$. Hence, $|\til{c}\,^t| \geq \frac{|c^t|}{|\Theta(b^t)|} \geq \frac{1}{|\Theta(b^t)|}$ for all $t$ and (iii) holds.
\end{proof}
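For the reader's convenience, we spell out the estimate in part (ii) of the above proof. If $x^t \notin B(0, 2\,\mathrm{diam}(Q))$, then $\Theta(x^t) = x^t$; otherwise $\Theta(x^t) \in B(0, 2\,\mathrm{diam}(Q)) \subset B(0, 2\delta)$, since the homeomorphism $\Theta$ fixes $\partial B(0, 2\,\mathrm{diam}(Q))$ pointwise and hence maps this disk onto itself. In either case
\[ |\Theta(x^t)| \;\leq\; \max\{\, |x^t|,\; 2\delta \,\} \;<\; \frac{\nu}{2} + 2\delta \;\leq\; \nu , \]
where the last inequality uses $\delta < \frac{\nu}{4}$; dividing by $|\Theta(b^t)|$ gives $|\til{x}\,^t| < \frac{\nu}{|\Theta(b^t)|}$.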
\subsection{The exponential plane}
\label{sec:exp plane}
Define the covering map
\[ \widetilde{\mathrm{exp}}: \; \mathbb{C} \setminus \{(2n+1)\pi i: n \in \mathbb{Z}\} \;\to\; \mathbb{C} \setminus \{0,1\} \]
by
\[ \widetilde{\mathrm{exp}}(z) = \frac{e^z}{e^z + 1} .\]
The function $\widetilde{\mathrm{exp}}$ is periodic with period $2\pi i$, and satisfies
\[ \lim_{\operatorname{Re}(z) \rightarrow \infty} \widetilde{\mathrm{exp}}(z) = 1, \quad \lim_{\operatorname{Re}(z) \rightarrow -\infty} \widetilde{\mathrm{exp}}(z) = 0, \quad \widetilde{\mathrm{exp}}(\mathbb{R}) = (0,1) ,\]
and has poles at each point $(2n+1)\pi i$, $n \in \mathbb{Z}$.
Note that $\widetilde{\mathrm{exp}}$ is the composition of the maps $e^z$ and the M\"{o}bius transformation $f(w) = \frac{w}{w+1}$. Hence the vertical line through a point $x \in \mathbb{R}$ is first mapped (by the covering map $e^z$) to the circle with center $0$ and radius $e^x$ and, if $x \neq 0$, then mapped by $f$ to the circle with center $\frac{e^{2x}}{e^{2x} - 1}$ and radius $\left| \frac{e^{x}}{e^{2x} - 1} \right|$. The imaginary axis is mapped to the vertical line through the point $x = \frac{1}{2}$ with the points at the poles $(2n+1)\pi i$ mapped to infinity.
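For completeness, we record the routine computation behind the last assertions. Let $R = e^x$ with $R \neq 1$. Since $f$ maps $\mathbb{R} \cup \{\infty\}$ onto itself and the circle $|w| = R$ meets the real axis orthogonally, its image under $f$ is a circle meeting the real axis orthogonally, with the segment between $f(R) = \frac{R}{R+1}$ and $f(-R) = \frac{R}{R-1}$ as a diameter. Thus the center and radius of the image circle are
\[ \frac{1}{2}\left(\frac{R}{R+1} + \frac{R}{R-1}\right) = \frac{R^2}{R^2-1} = \frac{e^{2x}}{e^{2x}-1} \qquad \text{and} \qquad \frac{1}{2}\left|\frac{R}{R-1} - \frac{R}{R+1}\right| = \left|\frac{R}{R^2-1}\right| = \left|\frac{e^{x}}{e^{2x}-1}\right| ,\]
respectively.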
\begin{notation}[$\bol{X}$, $\bol{x}^t$, $\bol{E}_n(r)$]
Denote by boldface $\bol{X}$ the preimage of $\til{X}$ under the covering map $\widetilde{\mathrm{exp}}$, and in general we will use boldface letters to represent points and subsets of the exponential plane (the plane containing $\bol{X}$).
The isotopy $\til{h}$ of $\til{X}$ lifts to an isotopy $\bol{h}$ of $\bol{X}$; that is, $\bol{h}: \bol{X} \times [0,1] \to \mathbb{C}$ is the map satisfying $\bol{h}^0 = \mathrm{id}_{\bol{X}}$ and $\widetilde{\mathrm{exp}}(\bol{h}(\bol{x}, t)) = \til{h}(\widetilde{\mathrm{exp}}(\bol{x}), t)$ for every $\bol{x} \in \bol{X}$ and all $t \in [0,1]$. As above, given a point $\bol{x} \in \bol{X}$ (a subset $\bol{A} \subseteq \bol{X}$) and $t \in [0,1]$, denote $\bol{x}^t = \bol{h}(\bol{x}, t)$ (respectively, $\bol{A}^t = \bol{h}(\bol{A}, t)$).
For each $n \in \mathbb{Z}$ and each $r > 0$, let $\bol{E}_n(r) = B((2n+1) \pi i, r)$ be the ball of radius $r$ centered at the point $(2n+1) \pi i$.
\end{notation}
\begin{lem}
\label{ball-like}
There exists $0 < K < \pi$ such that for any $0 < r \leq K$,
\begin{enumerate}
\itemsep=5pt
\item $\displaystyle \widetilde{\mathrm{exp}} \left( \bigcup_{n \in \mathbb{Z}} \bol{E}_n(r) \right) \subset \mathbb{C} \setminus B \left( 0,\frac{1}{2r} \right)$;
\item $\displaystyle \widetilde{\mathrm{exp}} \left( \mathbb{C} \setminus \bigcup_{n \in \mathbb{Z}} \bol{E}_n(r) \right) \subset B \left( 0,\frac{2}{r} \right)$.
\end{enumerate}
\end{lem}
\begin{proof}
For any $n \in \mathbb{Z}$ and sufficiently small $|z|$, we have
\[ e^{(2n+1)\pi i + z} = -e^z \approx -1 - z \]
and hence $\widetilde{\mathrm{exp}}((2n+1)\pi i + z) \approx \frac{1 + z}{z}$. In particular, there exists $0 < K < \pi$ such that for all $|z| \leq K$
\[ \frac{1}{2|z|} \leq |\widetilde{\mathrm{exp}}((2n+1)\pi i + z)| \leq \frac{2}{|z|} .\]
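(Explicitly, writing out the step behind this estimate: $e^{(2n+1)\pi i + z} = -e^z$ gives
\[ \widetilde{\mathrm{exp}}((2n+1)\pi i + z) = \frac{-e^z}{-e^z+1} = \frac{e^z}{e^z-1}, \qquad \left| \widetilde{\mathrm{exp}}((2n+1)\pi i + z) \right| = \frac{|e^z|}{|e^z-1|} \sim \frac{1}{|z|} \quad \text{as } z \to 0 ,\]
since $|e^z| \to 1$ and $|e^z - 1| \sim |z|$; any sufficiently small $K$ then yields the factors $\frac{1}{2}$ and $2$.)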
Let $S_n = \partial B((2n+1)\pi i, r)$. Then, by the above inequality, $T = \widetilde{\mathrm{exp}}(\bigcup_n S_n)$ is an essential simple closed curve in the annulus centered around the origin $0$ with inner radius $\frac{1}{2r}$ and outer radius $\frac{2}{r}$. Since $\widetilde{\mathrm{exp}}$ is periodic, all $S_n$ have the same image $T$, and $\widetilde{\mathrm{exp}}^{-1}(T) = \bigcup_n S_n$. It follows that $\widetilde{\mathrm{exp}} \left( \bigcup_n \big( B((2n+1)\pi i, r) \setminus \{(2n+1)\pi i\} \big) \right)$ is contained in the unbounded complementary domain of $T$ and $\widetilde{\mathrm{exp}}(\mathbb{C} \setminus \bigcup_n B((2n+1)\pi i, r))$ is contained in the bounded complementary domain of $T$. Hence, $\widetilde{\mathrm{exp}}(\bigcup_n B((2n+1)\pi i, r)) \subset \mathbb{C} \setminus B(0,\frac{1}{2r})$ and $\widetilde{\mathrm{exp}}(\mathbb{C} \setminus \bigcup_n B((2n+1)\pi i, r)) \subset B(0,\frac{2}{r})$.
\end{proof}
\subsection{Components of $\bol{X}^t$}
\label{sec:exp components}
We say a component $\bol{C}$ of $\bol{X}^t$ ($t \in [0,1]$) is \emph{unbounded to the right} (respectively \emph{left}) if $\mathrm{proj}_\mathbb{R}(\bol{C}) \subseteq \mathbb{R}$ is not bounded from above (respectively from below).
For convenience we denote the horizontal strip $\{x + iy \in \mathbb{C}: x \in \mathbb{R},\; 2n\pi < y < 2(n+1)\pi\}$ simply by $\bol{HS}_n$. Observe that since $\til{X} \cap (0,1) = \emptyset$ and $\widetilde{\mathrm{exp}}^{-1}((0,1)) = \bigcup_{n \in \mathbb{Z}} \{x + iy \in \mathbb{C}: x \in \mathbb{R},\; y = 2n\pi\}$, we have that $\bol{X} \subset \bigcup_{n \in \mathbb{Z}} \bol{HS}_n$.
\begin{lem}
\label{uniqueB_n}
There exists $\delta > 0$ such that if the crosscut $Q$ of $X$ with endpoints $0$ and $b$ has diameter $\mathrm{diam}(Q) < \delta$, then the following holds for the induced isotopy $\bol{h}$ of $\bol{X}$:
Given a component $\bol{C}$ of $\bol{X}$, let $n \in \mathbb{Z}$ be such that $\bol{C}$ is contained in the horizontal strip $\bol{HS}_n$. Let $\til{D}$ be the component of $\til{X}$ that contains $\widetilde{\mathrm{exp}}(\bol{C})$. Then:
\begin{enumerate}
\item if $\til{D} \cap \{0,1\} = \emptyset$, then $\bol{C}^t \cap \bol{E}_n \left( \frac{|\Theta(b^t)|}{2} \right) \neq \emptyset$ for all $t \in [0,1]$;
\item $\bol{C}^t \cap \bol{E}_m \left( \frac{|\Theta(b^t)|}{2\nu} \right) = \emptyset$ for all $m \neq n$ and all $t \in [0,1]$; and
\item if $\til{D} \cap \{0,1\} \neq \emptyset$, then $\bol{C}$ is unbounded to the left, to the right, or both.
\end{enumerate}
Furthermore, there exist for each $k \in \mathbb{Z}$ components $\bol{L}_k$ and $\bol{R}_k$ of $\bol{X} \cap \bol{HS}_k$ such that for all $t \in [0,1]$, $\bol{L}_k^t$ is unbounded to the left and $\bol{R}_k^t$ is unbounded to the right. Moreover, these may be chosen so that either $\bol{L}_k^t \cap \bol{E}_k \left( \frac{|\Theta(b^t)|}{2} \right) \neq \emptyset$ for all $k \in \mathbb{Z}$ or $\bol{R}_k^t \cap \bol{E}_k \left( \frac{|\Theta(b^t)|}{2} \right) \neq \emptyset$ for all $k \in \mathbb{Z}$.
\end{lem}
\begin{figure}
\begin{center}
\begin{subfigure}{}
\includegraphics{ExponentialPlane_0.pdf}
\end{subfigure}
\vspace{0.15in}
\begin{subfigure}{}
\includegraphics{ExponentialPlane_t.pdf}
\end{subfigure}
\end{center}
\caption{An illustration of an example of the set $\bol{X}^t$ at $t = 0$ (above) and at a later moment $t > 0$ (below). The horizontal lines are the preimages of $(0,1)$ under $\widetilde{\mathrm{exp}}$, and the balls depicted are the sets $\bol{E}_n \left( \frac{|\Theta(b^t)|}{2} \right)$.}
\label{fig:exp plane}
\end{figure}
\begin{proof}
Adopt the notation introduced in the Lemma and assume $\bol{C}$ is contained in the horizontal strip $\bol{HS}_n$. Let $0 < K < \pi$ be as in \cref{ball-like}. Choose $\delta > 0$ so small that $\frac{|\Theta(b^t)|}{\nu} < K$ for all $t$.
Suppose that $\til{D} \cap \{0,1\} = \emptyset$. Then $\widetilde{\mathrm{exp}}(\bol{C}) = \til{D}$. By \cref{sizes}(iii), we can choose $\til{c} \in \til{D}$ such that $|\til{c}^t| \geq \frac{1}{|\Theta(b^t)|}$ for all $t$. By \cref{ball-like}(ii), $\widetilde{\mathrm{exp}} \left( \mathbb{C} \setminus \bol{E}_n \left( \frac{|\Theta(b^t)|}{2} \right) \right) \subset B(0,\frac{1}{|\Theta(b^t)|})$. Hence we can choose $\bol{c}^0 \in \bol{E}_n \left( \frac{|\Theta(b^0)|}{2} \right) \cap \bol{C}$ such that $\widetilde{\mathrm{exp}}(\bol{c}^0) = \til{c}^0$, and then $\bol{c}^t \in \bol{C}^t \cap \bol{E}_n \left( \frac{|\Theta(b^t)|}{2} \right)$ for all $t$. This completes the proof of (i).
Note that for all $n \in \mathbb{Z}$, $\widetilde{\mathrm{exp}}(\mathbb{R} \times \{2n\pi i\}) = (0,1) \subset \mathbb{R}$ and, hence, $\bol{X} \cap (\mathbb{R} \times \{2n\pi i\}) = \emptyset$ for all $n \in \mathbb{Z}$. To see that $\bol{C}^t \cap \bol{E}_m \left( \frac{|\Theta(b^t)|}{2\nu} \right) = \emptyset$ for $m \neq n$ and all $t$, note first that this is the case at $t = 0$ since $\bol{C}^0 = \bol{C} \subset \bol{HS}_n$. In order for a point $\bol{x}^s \in \bol{C}^s$ to enter a ball $\bol{E}_m \left( \frac{|\Theta(b^s)|}{2\nu} \right)$ with $n \neq m$ for some $s > 0$, it would first have to cross one of the horizontal boundary lines of $\bol{HS}_n$, say $\bol{x}^u \in \mathbb{R} \times \{2n\pi i\}$ for some $0 < u < s$. Then $\widetilde{\mathrm{exp}}(\bol{x}^u) = \til{x}^u \in (0,1) \subset \mathbb{R}$. Hence by \cref{sizes}(ii), $|\til{x}\,^t| < \frac{\nu}{|\Theta(b^t)|}$ for all $t$. Since by \cref{ball-like}(i), $\widetilde{\mathrm{exp}} \left( \bol{E}_m \left( \frac{|\Theta(b^t)|}{2\nu} \right) \right) \subset \mathbb{C} \setminus B \left( 0, \frac{\nu}{|\Theta(b^t)|} \right)$ for all $t$, $\bol{x}^s \notin \bol{E}_m \left( \frac{|\Theta(b^s)|}{2\nu} \right)$, a contradiction. This completes the proof of (ii).
Suppose next that $\til{D} \cap \{0,1\} \neq \emptyset$. Then $\widetilde{\mathrm{exp}}(\bol{C}) = \til{C}$ is a component of $\til{D} \setminus \{0,1\}$ such that $\overline{\til{C}} \cap \{0,1\} \neq \emptyset$. Hence $\bol{C}$ is unbounded to the left or to the right (or both). This completes the proof of (iii).
There must exist components $\til{L}$ and $\til{R}$ of $\til{X} \setminus \{0,1\}$ such that $0$ is in the closure of $\til{L}$ and $1$ is in the closure of $\til{R}$. For each $k \in \mathbb{Z}$, let $\bol{L}_k$ be the lift of $\til{L}$ under $\widetilde{\mathrm{exp}}$ which is contained in the strip $\bol{HS}_k$, and similarly define $\bol{R}_k$. Then since the closure of $\til{L}\,^t$ contains $0$ and the closure of $\til{R}\,^t$ contains $1$ for all $t \in [0,1]$, we have that for each $k \in \mathbb{Z}$, the lift $\bol{L}_k^t$ is unbounded to the left and the lift $\bol{R}_k^t$ is unbounded to the right for all $t \in [0,1]$.
Finally, by \cref{sizes}(iii), there exists a component $\til{S}$ of $\til{X} \setminus \{0,1\}$ whose closure contains $0$ or $1$, which contains a point $\til{c} \in \til{S}$ such that $|\til{c}\,^t| \geq \frac{1}{|\Theta(b^t)|}$. Then, as in the proof of (ii), the component $\bol{S}_k^t$ of $\widetilde{\mathrm{exp}}^{-1}(\til{S}^t)$ which contains the lift $\bol{c}^t_k \in \bol{HS}_k$ of $\til{c}$ under $\widetilde{\mathrm{exp}}$ is unbounded to the left or to the right for all $k$ and $t$ and intersects $\bol{E}_k \left( \frac{|\Theta(b^t)|}{2} \right)$ as required.
\end{proof}
\begin{notation}[$\bol{A},\bol{B}$, $\mathfrak{A},\mathfrak{B}$]
Let $\bol{A}$ denote the set of all points of $\bol{X}$ above $\mathbb{R}$ and $\bol{B}$ the set of all points of $\bol{X}$ below $\mathbb{R}$. Recall that $\bol{X} \cap \mathbb{R} = \emptyset$, so $\bol{X} = \bol{A} \cup \bol{B}$. For each $t \in [0,1]$, let
\[ \mathfrak{A}^t = \bol{A}^t \cup \bigcup_{n \geq 0} \overline{\bol{E}_n \left( \frac{|\Theta(b^t)|}{2} \right) } \quad\textrm{and}\quad \mathfrak{B}^t = \bol{B}^t \cup \bigcup_{n < 0} \overline{\bol{E}_n \left( \frac{|\Theta(b^t)|}{2} \right) } .\]
Then $\mathfrak{A}^t$ and $\mathfrak{B}^t$ are disjoint closed sets, and by \cref{uniqueB_n}, each component of $\mathfrak{A}^t$ and of $\mathfrak{B}^t$ is either unbounded to the left or to the right.
\end{notation}
\begin{lem}
\label{lem:vertical strip}
For each $r > 0$, there exists a lower bound $\ell \in \mathbb{R}$ (respectively upper bound $u \in \mathbb{R}$) such that for all $t \in [0,1]$, if $c + di \in \bol{A}^t$ (respectively $\bol{B}^t$) and $|c| \leq r$, then $d \geq \ell$ (respectively $d \leq u$).
\end{lem}
\begin{proof}
Let $\mathbb{I}$ denote the imaginary axis, so that $[-r,r] \times \mathbb{I}$ is the strip in the plane between the vertical lines through $r$ and $-r$.
By uniform continuity of $\til{h}$ and the fact that $\til{h}$ leaves $0$ and $1$ fixed, there must exist for each $r > 0$ an $r' > r$ such that for all $\bol{x} \in \bol{X}$, if $\bol{x}^s \in ((-\infty,-r'] \cup [r',\infty)) \times \mathbb{I}$ for some $s \in [0,1]$ then for all $t \in [0,1]$, $\bol{x}^t \notin [-r,r] \times \mathbb{I}$.
Given a point $\bol{x} \in \bol{A} \cap ([-r',r'] \times \mathbb{I})$, let $\til{x} = \widetilde{\mathrm{exp}}(\bol{x})$ be the corresponding point of $\til{X}$. Every time $\bol{x}$ travels vertically within the strip $[-r',r'] \times \mathbb{I}$ a distance $2\pi$, the point $\til{x}$ travels around a disk of fixed radius (depending on $r'$) centered at $0$ or at $1$. By uniform continuity and compactness of $X$, this can only happen a uniformly bounded number of times. The result follows.
\end{proof}
\begin{cor}
\label{compact in strip}
Let $\bol{C}$ be any component of $\bol{X}$. Then for any $r > 0$ and any $t \in [0,1]$, the set $\bol{C}^t \cap \{x + yi: x \in [-r,r]\}$ is compact.
\end{cor}
\begin{proof}
Because the set $\bol{X}^t$ is periodic with period $2\pi i$, there exists an integer $k$ such that if $\bol{D}$ is the copy of $\bol{C}$ shifted vertically by $2\pi k$, then without loss of generality $\bol{C} \subset \bol{A}$ and $\bol{D} \subset \bol{B}$. Then by \cref{lem:vertical strip}, $\bol{C}^t$ is bounded below in the strip $\{x + yi: x \in [-r,r]\}$, and $\bol{D}^t$ is bounded above in this strip. By periodicity, it follows that $\bol{C}^t$ is also bounded above in this strip.
\end{proof}
\begin{defn}
Given two distinct components $\bol{C},\bol{D}$ of $\bol{X}$ which are both unbounded to the right (respectively, to the left), we say that $\bol{C}$ \emph{lies above} $\bol{D}$ if there is some $R > 0$ such that for all $x \in \mathbb{R}$ with $x \geq R$ (respectively, $x \leq -R$), $\max\{y \in \mathbb{R}: x + iy \in \bol{C}\} > \max\{y \in \mathbb{R}: x + iy \in \bol{D}\}$ and also $\min\{y \in \mathbb{R}: x + iy \in \bol{C}\} > \min\{y \in \mathbb{R}: x + iy \in \bol{D}\}$.
\end{defn}
Note that it follows immediately from the definition of $\mathfrak{A}^0$ and $\mathfrak{B}^0$ that if $\bol{C}$ and $\bol{D}$ are components of $\mathfrak{A}^0$ and $\mathfrak{B}^0$, respectively, which are unbounded on the same side, then $\bol{C}$ lies above $\bol{D}$. The following Lemma follows from this fact. The proof, which is left to the reader, is very similar to the proof of Lemma 2.5 in \cite{ot10}.
\begin{lem}
\label{lem:dichotomy stable}
There exists $\delta > 0$ such that if the crosscut $Q$ of $X$ with endpoints $0$ and $b$ has diameter $\mathrm{diam}(Q) < \delta$, then the following holds for the induced isotopy $\bol{h}$ of $\bol{X}$:
Let $\bol{C}$ and $\bol{D}$ be components of $\mathfrak{A}^0$ and $\mathfrak{B}^0$, respectively, which are both unbounded to the same side. Then $\bol{C}^t$ lies above $\bol{D}^t$ for all $t \in [0,1]$.
Consequently, if $\bol{E}$ and $\bol{F}$ are components of $\bol{A}$ and $\bol{B}$, respectively, which are both unbounded to the same side, then $\bol{E}^t$ lies above $\bol{F}^t$ for all $t \in [0,1]$.
\end{lem}
\subsection{Equidistant set between $\bol{A}^t$ and $\bol{B}^t$}
\label{sec:equi path}
For the remainder of this section, we assume that $\delta > 0$ is chosen so that the conclusions of \cref{uniqueB_n} and \cref{lem:dichotomy stable} hold. We also assume that the crosscut $Q$ has diameter less than $\delta$.
Recall that disjoint closed sets $A_1$ and $A_2$ in $\mathbb{C}$ are \emph{non-interlaced} if whenever $B(c,r)$ is an open disk contained in the complement of $A_1 \cup A_2$, there are disjoint arcs $C_1,C_2 \subset \partial B(c,r)$ such that $A_1 \cap \partial B(c,r) \subset C_1$ and $A_2 \cap \partial B(c,r) \subset C_2$. We allow for the possibility that $C_1 = \emptyset$ in the case that $A_2 \cap \partial B(c,r) = \partial B(c,r)$, and vice versa.
\begin{lem}
\label{lem:noninterlaced}
$\bol{A}^t$ and $\bol{B}^t$ are non-interlaced for all $t \in [0,1]$.
\end{lem}
\begin{proof}
Fix $t \in [0,1]$. Let $B \subset \mathbb{C} \setminus (\bol{A}^t \cup \bol{B}^t)$ be a round open ball, and suppose for a contradiction that there exist points $\bol{a}_1,\bol{a}_2 \in \partial B \cap \bol{A}^t$ and $\bol{b}_1,\bol{b}_2 \in \partial B \cap \bol{B}^t$ such that the straight line segment $\overline{\bol{a}_1 \bol{a}_2}$ separates $\bol{b}_1$ and $\bol{b}_2$ in $\overline{B}$. Let $\bol{A}_1$ and $\bol{A}_2$ be the components of $\bol{a}_1$ and $\bol{a}_2$, respectively, in $\mathfrak{A}^t$, and let $\bol{B}_1$ and $\bol{B}_2$ be the components of $\bol{b}_1$ and $\bol{b}_2$ in $\mathfrak{B}^t$. Then $[\bol{A}_1 \cup \bol{A}_2] \cap [\bol{B}_1 \cup \bol{B}_2] = \emptyset$ and by the remarks immediately following the definition of $\mathfrak{A}^t$ and $\mathfrak{B}^t$, each of these four components is either unbounded to the left or unbounded to the right. Consider an arc $S$ in $\overline{B} \setminus (\bol{B}_1 \cup \bol{B}_2)$ joining $\bol{a}_1$ and $\bol{a}_2$. Then $\bol{A}_1 \cup \bol{A}_2 \cup S$ separates the plane into at least two components, and $\bol{B}_1$ and $\bol{B}_2$ must lie in different components of $\mathbb{C} \setminus (\bol{A}_1 \cup \bol{A}_2 \cup S)$. It is then straightforward to see by considering cases that there exist $i,j \in \{1,2\}$ such that $\bol{B}_i$ lies above $\bol{A}_j$, a contradiction with \cref{lem:dichotomy stable}.
\end{proof}
For each $t \in [0,1]$, let $\bol{M}_t = \mathrm{Equi}(\bol{A}^t,\bol{B}^t)$. In light of \cref{lem:noninterlaced}, $\bol{M}_t$ is a $1$-manifold by \cref{thm:manifold}.
\begin{lem}
\label{M-disjoint}
For each $t \in [0,1]$ and each $n \in \mathbb{Z}$, $\bol{M}_t \cap \bol{E}_n\!\left( \frac{(1-\nu) |\Theta(b^t)|}{4\nu} \right) = \emptyset$.
In particular, $\bol{M}_t \cap \bol{E}_n \!\left( \frac{|\Theta(b^t)|}{2} \right) = \emptyset$.
\end{lem}
\begin{proof}
Let $n \in \mathbb{Z}$ and assume that $n \geq 0$ (the case $n < 0$ proceeds similarly). Since $0<\nu<\frac{1}{3}$, $\frac{(1-\nu) |\Theta(b^t)|}{4\nu}>\frac{|\Theta(b^t)|}{2}$, so $\bol{E}_n \!\left( \frac{|\Theta(b^t)|}{2} \right) \subset \bol{E}_n \!\left( \frac{(1-\nu) |\Theta(b^t)|}{4\nu} \right)$.
By \cref{uniqueB_n}, there is a component $\bol{C}$ of $\bol{X}$ such that $\bol{C}^t \cap \bol{E}_n \!\left( \frac{|\Theta(b^t)|}{2} \right) \neq \emptyset$ for all $t \in [0,1]$. Since $n \geq 0$, $\bol{C} \subset \bol{A}$.
On the other hand, given any component $\bol{D}$ of $\bol{B}$, we have by \cref{uniqueB_n}(ii) that $\bol{D}^t \cap \bol{E}_n \!\left( \frac{|\Theta(b^t)|}{2\nu} \right) = \emptyset$ for all $t \in [0,1]$. Thus $\bol{B}^t \cap \bol{E}_n \!\left( \frac{|\Theta(b^t)|}{2\nu} \right) = \emptyset$ for all $t \in [0,1]$. It follows that for any point $x \in \bol{E}_n \!\left( \frac{(1-\nu) |\Theta(b^t)|}{4\nu} \right)$, the distance from $x$ to $\bol{A}^t$ is less than $\frac{|\Theta(b^t)|}{2} + \frac{(1-\nu) |\Theta(b^t)|}{4\nu} = \frac{(1+\nu) |\Theta(b^t)|}{4\nu}$, while the distance from $x$ to $\bol{B}^t$ is greater than $\frac{|\Theta(b^t)|}{2\nu} - \frac{(1-\nu) |\Theta(b^t)|}{4\nu} = \frac{(1+\nu) |\Theta(b^t)|}{4\nu}$. Thus $\bol{M}_t \cap \bol{E}_n \!\left( \frac{(1-\nu) |\Theta(b^t)|}{4\nu} \right) = \emptyset$ for all $n$.
\end{proof}
\begin{lem}
\label{connM}
For each $t$ the set $\bol{M}_t$ is a connected 1-manifold. Moreover, the vertical projection of $\bol{M}_t$ to the real axis $\mathbb{R}$ is onto.
\end{lem}
\begin{proof}
Since by \cref{lem:noninterlaced} $\bol{A}^t$ and $\bol{B}^t$ are non-interlaced, by \cref{thm:manifold}, $\bol{M}_t$ is a 1-manifold which separates $\bol{A}^t$ from $\bol{B}^t$. By \cref{M-disjoint}, $\bol{M}_t$ is disjoint from $\bigcup_n \bol{E}_n \!\left( \frac{|\Theta(b^t)|}{2} \right)$ and, hence, $\bol{M}_t$ separates $\mathfrak{A}^t$ from $\mathfrak{B}^t$ (recall that $\mathfrak{A}^t$ and $\mathfrak{B}^t$ were defined above \cref{lem:vertical strip}). Since all components of $\mathfrak{A}^t$ and $\mathfrak{B}^t$ are unbounded, no component of $\bol{M}_t$ is a simple closed curve and every component is a copy of $\mathbb{R}$ with both ends converging to infinity. By \cref{lem:vertical strip} each end of a component of $\bol{M}_t$ either converges to $-\infty$ or $+\infty$. Fix $t$ and let $\bol{M}'$ be a component of $\bol{M}_t$. Note that for all $\bol{x} \in \bol{M}'$ there exists a set of points $\bol{A}_x^t \subset \bol{A}^t$ closest to $x$ and $\bol{B}_x^t \subset \bol{B}^t$ closest to $x$ and that $\bigcup_{\bol{x} \in \bol{M}'} \bol{A}^t_x$ and $\bigcup_{x \in \bol{M}'} \bol{B}^t_x$ are separated by the line $\bol{M}'$. For $x \in \bol{M}'$, let $r_x$ denote the distance from $x$ to $\bol{A}_x^t$ (equivalently, to $\bol{B}_x^t$).
If both ends of $\bol{M}'$ are unbounded to the same side, say on the left side, then $\mathbb{C} \setminus \bol{M}'$ has two complementary components $P$ and $Q$, with $P$ only unbounded to the left (see \cref{fig:Mt projection}). Assume that $\bigcup_{\bol{x} \in \bol{M}'} \bol{A}^t_x \subset P$ (the case $\bigcup_{\bol{x} \in \bol{M}'} \bol{B}^t_x \subset P$ is similar). Note that since $P$ contains no components of $\bol{A}^t$ which are unbounded to the right, $P$ must contain components of $\bol{A}^t$ which are unbounded to the left.
\begin{figure}
\begin{center}
\includegraphics{MtProjection.pdf}
\end{center}
\caption{An illustration of the situation described in the proof of \cref{connM}.}
\label{fig:Mt projection}
\end{figure}
Let $z \in \bol{M}'$. Then $\bol{M}' \setminus \{z\}$ consists of two rays $\bol{M}^+$ and $\bol{M}^-$ and we may assume that $\bol{M}^+$ lies above $\bol{M}^-$. Choose $z_n \in \bol{M}^+$ monotonically converging to $-\infty$ and $\bol{b}_n \in \bol{B}^t_{z_n}$. Since the radii $r_{z_n}$ are uniformly bounded, $\bol{b}_n$ also converges to $-\infty$. Let $\bol{H}_n$ be the component of $\bol{B}^t$ that contains $\bol{b}_n$.
If $\bol{H}_n$ is unbounded to the left, by \cref{lem:dichotomy stable} it must lie below the unbounded components of $\bol{A}^t$ in $P$ and hence must ``go around'' $\bol{M}'$ as $\bol{H}_1$ does in \cref{fig:Mt projection}. If $\bol{H}_n$ is not unbounded to the left, then either it intersects some $\bol E_k(\frac{|\Theta(b^t)|}{2})$ for some $k < 0$ (as $\bol{H}_2$ does in \cref{fig:Mt projection}), or it is unbounded to the right (as $\bol{H}_3$ is in \cref{fig:Mt projection}). In any case it is clear that there exists $c \in \mathbb{R}$ such that every component $\bol{H}_n$ intersects the vertical line $x = c$.
For each $n$ let $d_n$ be such that the point $(c,d_n) \in \bol{H}_n$. By \cref{lem:vertical strip}, the sequence $d_n$ is bounded and, hence has an accumulation point $d_\infty$. By \cref{compact in strip}, the component of $\bol{B}^t$ which contains $d_\infty$ is unbounded to the left, and clearly it lies above the unbounded components of $\bol{A}^t$ in $P$, a contradiction with \cref{lem:dichotomy stable}. Hence, the vertical projection of $\bol{M}'$ to the real axis $\mathbb{R}$ is onto.
The proof that $\bol{M}_t = \bol{M}'$ is connected is similar and is left to the reader.
\end{proof}
\begin{lem}
\label{pathM}
For each $t \in [0,1]$, the set $\widetilde{\mathrm{exp}}(\bol{M}_t) \cup \{0,1\}$ is the image of a path $\til{\gamma}_t$ in $\til{U}^t$ joining $0$ and $1$.
\end{lem}
\begin{proof}
Let $\mathbb{I}$ denote the imaginary axis, so that $[-r,r] \times \mathbb{I}$ is the strip in the plane between the vertical lines through $r$ and $-r$. By \cref{lem:vertical strip}, for each $r > 0$, $\widetilde{\mathrm{exp}}(([-r,r] \times \mathbb{I}) \cap \bol{M}_t)$ is compact. Together with \cref{connM}, this implies that we can choose a parameterization $\alpha: (0,1) \to \bol{M}_t$ so that:
\[ \lim_{s \to 0^+} \widetilde{\mathrm{exp}} \circ \alpha(s) = \{0\} \]
and
\[ \lim_{s \to 1^-} \widetilde{\mathrm{exp}} \circ \alpha(s) = \{1\} .\]
Define the path $\til{\gamma}_t: [0,1] \to \widetilde{\mathrm{exp}}(\bol{M}_t) \cup \{0,1\}$ by $\til{\gamma}_t(s) = \widetilde{\mathrm{exp}} \circ \alpha(s)$ for $s \in (0,1)$, and $\til{\gamma}_t(0) = 0$ and $\til{\gamma}_t(1) = 1$. Then $\til{\gamma}_t$ is the required path.
\end{proof}
\subsection{Proof of \cref{main2}}
\label{sec:proof main2}
In this section we complete the proof of \cref{main2}.
Recall that $\varepsilon > 0$ is a fixed arbitrary number, and $0 < \nu < \frac{1}{3}$ has been chosen so that $\frac{8 \nu}{1 - \nu} < \frac{\varepsilon}{2}$. Choose $0 < \delta < \frac{\varepsilon}{4}$ small enough so that the conclusions of \cref{uniqueB_n} and \cref{lem:dichotomy stable} hold (and therefore the results from \cref{sec:equi path} also hold).
For each $t \in [0,1]$, let $\gamma_t = (L^t \circ \Theta)^{-1} \circ \til{\gamma}_t$. This $\gamma_t$ is a path in $U^t$ joining $0$ and $b^t$.
\begin{claim}
\label{paths small}
$\mathrm{diam}(\gamma_t([0,1])) < \varepsilon$ for all $t \in [0,1]$.
\end{claim}
\begin{proof}[Proof of \cref{paths small}]
\renewcommand{\qedsymbol}{\textsquare (\cref{paths small})}
By \cref{M-disjoint}, for all $t \in [0,1]$ and $n \in \mathbb{Z}$, $\bol{M}_t \cap \bol{E}_n \!\!\left( \frac{(1-\nu) |\Theta(b^t)|}{4\nu} \right) = \emptyset$.
By \cref{ball-like}(ii), we have $\widetilde{\mathrm{exp}}(\bol{M}_t) \subset B \!\left( 0, \frac{8\nu}{(1-\nu) |\Theta(b^t)|} \right)$. Then $(L^t)^{-1}(\widetilde{\mathrm{exp}}(\bol{M}_t)) \subset B \!\left( 0, \frac{8\nu}{(1-\nu)} \right)$. By the choice of $\nu$, and since $\Theta$ is a homeomorphism of $\mathbb{C}$ which is the identity outside of $B(0, 2\delta) \subset B(0, \frac{\varepsilon}{2})$, it then follows that $\gamma_t([0,1]) = (L^t \circ \Theta)^{-1}(\widetilde{\mathrm{exp}}(\bol{M}_t)) \subset B(0, \frac{\varepsilon}{2})$.
\end{proof}
\begin{claim}
\label{paths cont}
The sets $\gamma_t([0,1])$ vary continuously in the Hausdorff metric, and $\gamma_0$ is homotopic to $Q$ with endpoints fixed.
\end{claim}
\begin{proof}[Proof of \cref{paths cont}]
\renewcommand{\qedsymbol}{\textsquare (\cref{paths cont})}
By \cref{pathM}, $\til{\gamma}_t$ is a path in $\til{U}^t$ with endpoints $0$ and $1$. To see that $\til{\gamma}_0$ is homotopic to $\til{Q} = (0,1)$ note first that since $\bol{A}^0$ is above the real axis and $\bol{B}^0$ is below the real axis, for each $(x,y) \in \bol{M}_0$ the vertical segment from $(x,0)$ to $(x,y)$ is disjoint from $\bol{X}^0$. Hence we can construct a homotopy $k$ between $\bol{M}_0$ and $\mathbb{R}$ which fixes the x-coordinate of each point in $\bol{M}_0$ and decreases the absolute value of the $y$-coordinate to zero. Then $\widetilde{\mathrm{exp}} \circ k$ is the required homotopy between $\til{\gamma}_0$ and $\til{Q}$ with endpoints fixed. Hence, $\gamma_0 = (L^0 \circ \Theta)^{-1} \circ \til{\gamma}_0$, is homotopic to $Q$ as required.
Suppose $t_i \to t_\infty$. It is easy to see that $\limsup \bol{M}_{t_i} \subseteq \bol{M}_{t_\infty}$ by the definition of the equidistant sets $\bol{M}_t$. Since, by \cref{connM}, each $\bol{M}_{t_i}$ and $\bol{M}_{t_\infty}$ is a connected $1$-manifold whose vertical projection to the real axis $\mathbb{R}$ is onto, it follows that $\liminf \bol{M}_{t_i} \supseteq \bol{M}_{t_\infty}$. Thus $\lim \bol{M}_{t_i} = \bol{M}_{t_\infty}$. It follows that $\gamma_t([0,1]) = (L^t \circ \Theta)^{-1} \circ \widetilde{\mathrm{exp}}(\bol{M}_t)$ is continuous in the Hausdorff metric.
\end{proof}
Combined with \cref{liftexist}, Claims \ref{paths small} and \ref{paths cont} complete the verification of condition (ii) of \cref{main1technical}. Therefore, by \cref{main1technical}, the isotopy $h$ of the compactum $X$ can be extended to the entire plane $\mathbb{C}$. This completes the proof of \cref{main2}.
\medskip
In \cref{main1} we have given necessary and sufficient conditions for an isotopy of a uniformly perfect compact set to extend to an isotopy of the plane. These conditions involve the existence of an extension of the isotopy over sufficiently small crosscuts while controlling the size of the image. The following problem remains open.
\bigskip
\begin{prob}
Are there intrinsic properties on $X$ and the isotopy $h$ of $X$, which do not involve the existence of extensions over small crosscuts, that characterize when an isotopy of $X$ can be extended over the plane?
\end{prob}
\bibliographystyle{amsalpha}
Q: Changing the home key behavior in an R session in emacs I would like to change the behavior of the home key within an R session run from emacs. When I press the home key, it takes me all the way to the beginning of the line, in front of the > prompt. I'd like the home key to take me to the start of the command entry instead (i.e., two characters in from the start of the line, just after the prompt). I assume that I can make this adjustment via my .emacs file; any guidance for the commands that I would need to add to that file would be appreciated. Thanks!
A: The behaviour you want is already available as C-a. You can rebind the home key with the following line:
(local-set-key (kbd "<home>") 'comint-bol)
There are a number of ways to get this to happen automatically when you are using the R session. I use something like the following:
;; Define the keybinding you want
(defun my-inferior-ess-mode-hook ()
(local-set-key (kbd "<home>") 'comint-bol))
;; add the key-binding to the hook that gets called whenever you start an R session:
(add-hook 'inferior-ess-mode-hook 'my-inferior-ess-mode-hook)
That's a bit much for a single key-binding, but you can extend the definition of my-inferior-ess-mode-hook to include a number of customizations you'd like to use.
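For example — and the extra bindings below are just illustrative suggestions of mine, not something the question asked for — the hook could grow like this:
(defun my-inferior-ess-mode-hook ()
  ;; Home moves to the start of the command entry, as above.
  (local-set-key (kbd "<home>") 'comint-bol)
  ;; Illustrative extras: cycle through previously entered commands.
  (local-set-key (kbd "C-<up>") 'comint-previous-input)
  (local-set-key (kbd "C-<down>") 'comint-next-input))
(add-hook 'inferior-ess-mode-hook 'my-inferior-ess-mode-hook)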
Delaware's senior senator and ranking member of the Environment and Public Works Committee weighs in on EPA chief Scott Pruitt's resignation.
Now that U.S. Environmental Protection Agency Administrator Scott Pruitt has resigned amid allegations of misconduct, U.S. Sen. Tom Carper believes there is renewed hope in the fight for clean air and water.
Carper, ranking member of the Senate's Environment and Public Works Committee, said he was "over the moon" at the news of Pruitt's resignation, which he attributed to "a constant drip of never-ending disclosures, leaks of misbehavior and ethical misconduct" that Carper said likely culminated with the threat of next month's oversight hearings.
While Carper has consistently criticized Pruitt and the Trump administration's approach on all things environment, Sussex County Councilman Rob Arlett said his current campaign manager is the person to thank for Pruitt's resignation.
As Pruitt's former deputy chief of staff, Eastern Shore native Kevin Chmielewski said he saw "a ton of wrongdoing" within his first month on the job.
"They ultimately fired me because I refused to resign," Chmielewski said, adding that he brought instances of wrongdoing such as mismanagement of Pruitt's security detail, to Congress. "Obviously, selfishly, I wish he would have been fired, but that's not up to me and we all know that's not how the world turns.
"It was not only the unethical and illegal things he did, but I didn't agree with his policies," said Chmielewski, a Republican who is now Arlett's campaign manager for his run against Carper for U.S. Senate. "When we talk about the environment and this planet Earth, it's bipartisan."
Arlett, who also served as Delaware chairman for Trump's campaign, said if the allegations about Pruitt's misconduct are true, he should have been fired months ago.
"It has nothing to do with party politics," Arlett said. "Loyalty is not to a person. Loyalty is with principals. If somebody has crossed the line to use their position for personal gain, they're not in it for the right reasons and they need to leave. That's part of what we need to do in Washington and what many believe needs to be done here in Delaware."
In Pruitt's absence, former coal lobbyist and EPA Deputy Administrator Andrew Wheeler will take the helm of the agency until a new administrator is appointed. Carper said there is "considerable concern" about Wheeler's ties to the coal industry, but it is too soon to tell if he will be considered to permanently take over the top job.
Lewes Mayor Ted Becker said because of the unknowns about Wheeler, he and other coastal leaders remain concerned about the potential for oil and gas drilling off the Atlantic Coast. Lewes was the first town in Delaware to sign a resolution opposing seismic testing and offshore drilling.
"He comes from the coal industry, so I think we all have concerns about the use of alternative energy and making sure that we explore that," Becker said. "The whole environmental issue remains front and center."
As for local implications, Carper said Pruitt's resignation could give Delaware and Maryland a second chance to fight cross-border air pollution. This summer, the EPA announced it would deny several petitions filed by Delaware and Maryland in 2016 in an attempt to reduce emissions of nitrogen dioxide from power plants in Indiana, Kentucky, Ohio, Pennsylvania and West Virginia.
Carper said many of the attempts to roll back Obama-era regulations were halted by the court system, but is infuriated by Pruitt's attempt to ignore the science that shows the air pollution coming from those plants drifts toward Mid-Atlantic states.
"There are all kinds of issues I think we now have a second chance to do the right thing," Carper said, also noting the Trump administration's attempts to halt progress regarding fuel efficiency for vehicles.
John Byrne, director of the University of Delaware's Center for Energy and Environmental Policy, said it seems that in Pruitt's short term in office, he did everything in his power to undo decades of work to ensure a healthy environment.
There are some policies that many thought were settled, such as regulatory oversight for the coal industry and changes to parts of the Clean Air Act and Clean Water Act, that were "really torn apart" under Pruitt's guidance, he said.
"With that said, I think environmental concerns have risen to the top of the social agenda," Byrne. "When it comes to climate change, we're at the tipping point. It's fair to say if we don't act soon, it is going to be far more difficult for our grandchildren who will have to make more sacrifices because we wasted time."
Byrne said that while environmentalists have "been sidelined by the federal government," there are communities across the nation pushing to address the threats of a changing climate.
"There is momentum for action," Byrne said. "And we need to harness that action to restore a national sense of purpose on the issue of climate change."
In Delaware, the lowest-lying state in the country, sea levels as measured in Lewes are rising by an average of about 3.35 millimeters per year, about twice the global average.
Local climate experts have estimated that if nothing is done to curb greenhouse gas emissions that are largely driven by burning fossil fuels, up to 11 percent of Delaware's land mass could be underwater by 2100, the First State will feel more like Georgia than Delaware, health impacts from asthma and ozone alert days will increase and farmers will have to adjust their growing seasons and deal with more warm-weather pests and diseases. | {
"redpajama_set_name": "RedPajamaC4"
} | 8,264 |
The doctrine of creation (scheppingsleer) is part of the field of systematic theology within theology. It is therefore not a physical or biological (evolutionary or creationist) approach, but a theological approach to nature, or at least to God's act of creation.
Contents
The Apostles' Creed opens with the sentence "I believe in God, the Father almighty, Creator of heaven and earth." The Nicene-Constantinopolitan Creed begins with similar words. In the doctrine of creation, as part of systematic theology, reflection takes place (theologically, not in terms of natural science) on questions such as what it means to believe that God is the Creator, what it means that the world is created, or how Creator and creature relate to each other. Gijsbert van den Brink and Cornelis van der Kooi state in their Christelijke Dogmatiek that the Christian doctrine of creation is far richer than merely an answer to the question of how and when the world was created. Benjamin Breckinridge Warfield held that the doctrine of creation and the theory of evolution do not cover the same ground: the doctrine of creation concerns the origin of life, while the theory of evolution concerns the development of life.
The human being
'Image of God'
According to Genesis 1 verses 26-28, the human being is created in 'God's image' (imago Dei). An 'image' is something that reflects something else and therefore resembles it. When the human being is called the 'image of God', this means that the human being in some way resembles God. The 'image of God' should not be sought merely in the physical or the spiritual aspect of the human being, but in the human being as a whole. According to Van den Brink and Van der Kooi, it is important to ground the human being as 'image of God' not empirically but theologically. The idea of the human being as 'image of God' is, theologically speaking, essential to being human. It implies that standing in relationship to God belongs to the essence of being human. The image of God thus lies in a calling to stand in relationship with God, to respond to God's invitation. From Genesis 1 it appears that the human being as a sexual being belongs to the 'image of God': both the man in his being a man and the woman in her being a woman display the image of God. In the Roman Catholic tradition, the 'image of God' was hardly affected by the Fall. In the tradition of the Reformation, the 'image of God' was seriously deformed by the Fall of humanity. Article 14 of the Belgic Confession (Nederlandse Geloofsbelijdenis) therefore speaks of remnants of the image of God.
A modest place
The creation narrative makes clear that the human being is created in the image of God, in distinction from the animals. But this narrative also makes clear that not everything on earth revolves around humanity. In the creation narrative the human being has no creation day of its own, but was created on the same day as the animals. Nor are humans the crown of creation, for that is the sabbath, the day of rest. Neither is the human being the only intelligent creature that was created; there are, for example, also angels. In addition, for food the human being depends on the other creatures.
Creation, Fall, redemption
Genesis 1 verse 26 describes how God says of the creation He has made: 'it was very good.' Genesis 3 describes how the first humans fell into sin and took their descendants with them, but it also describes the promise of redemption. Among Christians and theologians, the relationship between creation, Fall, and redemption is a subject of reflection.
According to the Dutch theologian Hendrikus Berkhof, it was apparently never God's intention to call a ready-made world into being. God apparently willed - according to Berkhof - that his creation go through a history of resistance and struggle, of suffering and striving. The perfection of creation thus stands not at the beginning but at the end of the history in which salvation takes shape. According to Berkhof, the core of sin is the incomprehensibly broken relationship of faith with God. Sin is the disruption of the human being's relationship to God, to the neighbour, and to nature. The theologian and ethicist Harry Kuitert states that creation, Fall, and redemption are best understood as three characterizations of reality, as three powers that are active. The Swiss theologian Karl Barth regarded creation and redemption as a unity. According to Barth, the first chapters of Genesis are 'saga'.
In the classic Reformed view it is held that, because of the radical nature of the human being as sinner, there can be no positive progress. Only through rebirth, faith, and conversion can one share in God's salvation. The believing human being (who remains a sinner) is restored through the work of the Holy Spirit. The final resurrection - with a glorified body - is the completion of the re-creating work of the Holy Spirit, and the resurrection of Jesus Christ is its foundation. The glory of the redeemed creation is as much richer as Christ is more glorious than the first man. There is thus a difference between the 'good creation' and the definitively 'redeemed creation' after Christ's return.
The purpose of creation
Following Johannes Calvijn, Reformed doctrine names the glorification of God as the purpose of creation. Not the human being stands at the centre, but God, for whom and through whom all things exist. This is also expressed in the first question of the Westminster Catechism.
According to Arnold van Ruler, creation is God's real work. Van Ruler writes, for example, that 'everything turns on Christ, but everything is about the redemption of creation and the dawning of the kingdom of God on earth.' According to Van Ruler, the earth also need not have existed, but God grants us humans, too, the pleasure of being there. According to Van Ruler and many other theologians, creation thus rests not on necessity but on God's will and good pleasure.
The relationship between theology and the natural sciences
Precisely in the doctrine of creation, theology must relate to the natural sciences and to the research results of (biological, physical, etc.) scientific inquiry. There are several ways in which Christians and theologians deal with this. Important here are the way in which one interprets the Bible and the way in which one deals with (natural) scientific results and the conclusions drawn from them. A well-known example in which the relationship between theology and natural science plays a (major) role is the debate between creationism and the theory of evolution; one may, however, also think of anthropology.
Within the Roman Catholic Church, the discussion about creation and evolution differs from that within Protestantism. The magisterium of the Roman Catholic Church has in the past unambiguously distanced itself from creationism. It does hold that the human soul did not arise through evolution. The Roman Catholic Church has not opposed an evolutionary approach on principle; opinions differ about the degree to which there is 'acceptance'. In practice this meant that the theory of evolution was taught in Roman Catholic schools, whereas in Reformed schools this was problematic.
The American Reformed theologian Benjamin Breckinridge Warfield does not reject Darwinism on principle, although he is critical of it. He firmly rejects evolutionism as a worldview or philosophy, because it rests on presuppositions that exclude the supernatural or render it superfluous. According to Warfield, the theory of evolution need not be a problem, provided it respects its limits and does not pretend to be more than a possible explanation of the way in which God works in his providence. In the Netherlands, the Hervormd-Gereformeerd theologian Gijsbert van den Brink seeks ways to think orthodox Christian theology and neo-Darwinian evolutionary theory together. In 2017 he published the book En de aarde bracht voort, a work on Christian faith and evolution. In this book he warns, among other things, against reducing God to a God of the gaps.
At the other end of the spectrum there are believers who believe in a creation in six days. They often read the first chapters of Genesis as a historical account of God's creation in six days. Because of this historical reading, they try to find natural-scientific arguments in support of creation. Like the Christians who are open to (a form of) evolution, they want to explain God's act of creation with the help of natural-scientific results. They do this, however, in an entirely different way. An example is the publication Oorspronkelijk. Overwegingen bij schepping en evolutie by the professor of Old Testament Mart Jan Paul.
An entirely different position is taken by the Apeldoorn professor Arnold Huijgen. According to him, it is too simple to think in terms of creationism or evolutionism when reflecting on God's act of creation. The confession that God is the Creator of heaven and earth is, in his view, not a natural-scientific statement. He holds that theologians should stay within their own discipline (theology) and that physicists or biologists should remain within theirs. A theologian should not presume to know better than the natural scientist how to do his work. Conversely, this also means guarding against evolutionism as a naturalistic ideology coming to dominate thought and working reductionistically. This is because theologians are (as a rule) not experts in other disciplines, and physicists are (as a rule) not experts in theology.
Dogmatics
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,747 |
Q: Send notification email to attendees when creating new event to o365 calendar I am using MS Graph API to create events (meetings) into peoples calendars.
We used to have on-premise Office and creating an event would automatically notify the persons about a new calendar event.
After migrating to o365 we started using Graph- Calendar API to create these events. Now the attendees get no email notification at all.
When you open the event from the outlook calendar, the To- field is empty. My guess is that this could be the cause, but I am not sure at all.
Here is an example json I am sending to the api:
{
  "Id": null,
  "Subject": "Test",
  "Body": {
    "ContentType": "html",
    "Content": "Hey, <br/><br/>\r\nWhy you no send email!"
  },
  "ShowAs": "busy",
  "Attendees": [
    {
      "EmailAddress": {
        "address": "Matti.Lindroth@mycompany.fi",
        "name": "Lindroth Matti"
      },
      "Type": "required"
    }
  ],
  "Start": {
    "DateTime": "2019-10-21T08:00:00",
    "TimeZone": "FLE Standard Time"
  },
  "ResponseRequested": true,
  "IsOrganizer": true,
  "Organizer": {
    "EmailAddress": {
      "address": "Matti.Lindroth@mycompany.fi",
      "name": "Lindroth Matti"
    }
  },
  "End": {
    "DateTime": "2019-10-21T08:30:00",
    "TimeZone": "FLE Standard Time"
  },
  "Sensitivity": null
}
Any help is much appreciated.
A: I noticed that you are both the organizer and the attendee in your example. In this case the server would not send you a meeting request, since you're the organizer! If you add someone else in your attendees collection they will receive a meeting request.
A: It turns out @Jason Johnston had it correct in the comments. I will post an answer here for better readability.
I was sending the HTTP post to url https://graph.microsoft.com/v1.0/users/matti.lindroth@mycompany.fi/calendar/events
As you can see the url has my username in it, so the api assumes I was creating an event to my own calendar. I changed the url to use our technical user name instead and it started working.
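As a rough illustration of the fix (not from the original post), the sketch below shows how the user segment of the request URL determines whose calendar the event is created in, while the attendees go in the body. The token and account addresses are placeholders; the field names follow the Microsoft Graph v1.0 event resource.

```python
# Sketch: create a calendar event via Microsoft Graph using a separate
# "technical" (service) account as organizer, so real attendees get a
# meeting request. ACCESS_TOKEN and the addresses below are placeholders.
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<access-token>"                 # placeholder
ORGANIZER = "technical.user@mycompany.fi"       # owns the calendar in the URL
ATTENDEE = "Matti.Lindroth@mycompany.fi"        # receives the meeting request

def build_event_request(organizer: str, attendee: str):
    # The organizer appears only in the URL path; attendees only in the body.
    url = f"{GRAPH_BASE}/users/{organizer}/calendar/events"
    body = {
        "subject": "Test",
        "body": {"contentType": "html", "content": "Hey!"},
        "start": {"dateTime": "2019-10-21T08:00:00",
                  "timeZone": "FLE Standard Time"},
        "end": {"dateTime": "2019-10-21T08:30:00",
                "timeZone": "FLE Standard Time"},
        "attendees": [
            {"emailAddress": {"address": attendee}, "type": "required"}
        ],
    }
    return url, body

url, body = build_event_request(ORGANIZER, ATTENDEE)
req = urllib.request.Request(
    url,
    data=json.dumps(body).encode(),
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # left commented out: needs a real token
print(url)
```

Because the organizer here is not also listed as an attendee, the server has someone to notify, which addresses both answers above.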
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,101 |
{"url":"https:\/\/testbook.com\/question-answer\/if-ys-1-s-the-network-has-1-resisto--5f7c1a0274949e87a1c9edd8","text":"# If y(s) = 1 + s, the network has 1 \u03a9 resistor and\n\nThis question was previously asked in\nESE Electronics 2015 Paper 1: Official Paper\nView all UPSC IES Papers >\n1. 1F capacitor in series\n2. 1F capacitor in parallel\n3. 1H inductor in series\n4. 1H inductor in parallel\n\nOption 3 : 1H inductor in series\nFree\nCT 3: Building Materials\n2962\n10 Questions 20 Marks 12 Mins\n\n## Detailed Solution\n\nConcept:\n\nPassive components and their impedance is shown in below table.\n\n Element j\u03c9 Form s-form Resistor (R) R R Inductor (L) j\u03c9 L SL Capacitor $$\\frac{1}{{j\\omega C}}$$ $$\\frac{1}{{SC}}$$\n\nFor the circuits shown below impedance will be\n\n$${z_2} = R + \\frac{1}{{j\\omega C}} = R - \\frac{j}{{\\omega C}} = R + \\frac{1}{{SC}}$$\n\nCalculation:\n\n\u2022 Given impedance is y(s) = 1 + s and network 1 \u03a9 resistor and another element is\n\n1 + S = 1 + SL (Given)\n\n\u2022 Comparing LHS and RHS value of inductor is\n\nL = 1H\n\nThe resistor is in series with 1H inductor.","date":"2021-10-22 12:30:01","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8083322644233704, \"perplexity\": 10377.299379215121}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, 
\"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323585507.26\/warc\/CC-MAIN-20211022114748-20211022144748-00274.warc.gz\"}"} | null | null |
Q: Vim: How to handle Unicode files with text in multiple (more than two) languages? What settings do I need to set in Vim/gVim to be able to view Unicode text files which have text in many languages?
You may make these assumptions:
*
*The number of languages is more than two.
*Some of the languages are Chinese, Japanese, and Korean.
*It is enough if I can view these files in gVim (not necessarily Vim).
*gVim 7.0 running on Windows.
Here is a text sample, which when saved in Unicode opens fine in Notepad, but shows up as gibberish in gVim:
This is English.
这是中文。
これは日本です。
한국입니다.
ಇದು ಕನ್ನಡ.
A: Using gVim on Windows, I did the following two things:
:set encoding=utf-8
:set guifont=*
The second command brings up a font picker. By choosing the font "@MS Mincho", I got some of the Japanese characters to display, but oddly they were rotated 90 degrees to the left.
Anyway, you'll have to set the encoding before loading or pasting text into gVim (otherwise it might just convert them to all question marks). Then you'll have to find a font that is (a) fixed width, and (b) includes the characters you want to see. I don't seem to have such a font on my system at the moment, but you may.
A: Using the following settings in your .vimrc
:set encoding=utf-8
:set guifont=*
:set guifontwide=*
may work for you. It worked for me for chinese/japanese characters.
A: The font Arial Unicode MS supports japanese, chinese and korean as well as vietnamese and arabic. You could try using that font, though I don't believe it is monospaced.
http://www.microsoft.com/typography/fonts/font.aspx?FMID=1081
There may be other pan-language fonts out there, perhaps monospaced ones as well, but I don't know of them.
edit
I found this page with a few fonts that support all three languages. Some of them are available as free downloads:
http://www.wazu.jp/gallery/Fonts_Japanese.html
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,942 |
Q: @RabbitListener start binding to fanout exchange in "stopped" mode I want to bind an anonymous queue to a fanout exchange as soon as the app starts to collect messages, but the actual processing of messages should be done later (after some initialization elsewhere).
I tried with:
@RabbitListener(autoStartup="false",
bindings = @QueueBinding(value = @Queue,
exchange = @Exchange(name="myexchange",
type=ExchangeTypes.FANOUT)))
public void processMessage(String message) {
}
but autoStartup="false" will not bind the (anonymous) queue to the exchange.
In other words what I would need is the anonymous queue binding to the exchange as soon as the app starts, and starting reading messages only at a later time.
Is it possible with @RabbitListener?
Update:
Tried to declare the queue and exchange, but the queue is not added to rabbit unless I also declare the RabbitListener for it:
@Configuration
public class AmqpConfig {
@Bean
RabbitAdmin rabbitAdmin(ConnectionFactory connectionFactory) {
return new RabbitAdmin(connectionFactory);
}
@Bean
public FanoutExchange fanout() {
return new FanoutExchange("myexchange");
}
private static class ReceiverConfig {
@Bean
public Queue myQueue() {
return new AnonymousQueue();
}
@Bean
public Binding binding(FanoutExchange fanout, Queue myQueue) {
return BindingBuilder.bind(myQueue).to(fanout);
}
    }
}
It doesn't create the queue unless I also add the @RabbitListener:
@Component
public class AmqpReceiver {
@RabbitListener(queues = "#{myQueue.name}")
public void receive(String in) throws InterruptedException {
}
}
A: Since you are not starting the listener, it doesn't open a connection.
As long as you have the queue, binding and a RabbitAdmin defined as beans in your application context, all you need to do is to force the connection to be opened (the Admin listens for new connections and performs the declaration).
Simply call createConnection() on the CachingConnectionFactory.
EDIT
@SpringBootApplication
public class So49401150Application {
public static void main(String[] args) {
SpringApplication.run(So49401150Application.class, args);
}
@Bean
ApplicationRunner runner(ConnectionFactory cf, RabbitTemplate template,
RabbitListenerEndpointRegistry registry) {
return args -> {
cf.createConnection().close(); // Admin does declarations here
template.convertAndSend("myexchange", "", "foo");
Thread.sleep(30_000);
registry.start();
};
}
@RabbitListener(autoStartup="false",
bindings = @QueueBinding(value = @Queue,
exchange = @Exchange(name="myexchange",
type=ExchangeTypes.FANOUT)))
public void processMessage(String message) {
System.out.println(message);
}
}
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,789 |
Thank you for your interest in the University of North Dakota School of Law. As you begin to learn more about us, you will quickly understand the unique features that make UND a great place to study law.
We encourage you to review the information on our website. If you need additional information don't hesitate to contact the Office of Admissions and Records by calling 1.800.CALL.UND or 701.777.2047 or e-mail admissions@law.UND.edu. Our Interim Director of Admissions and Records is Laureen Johnson, and she will be happy to assist you in any way she can.
Even more, I strongly encourage you to visit campus to experience first-hand the welcoming and supportive environment in which we live and learn together. This is a wonderful place, and we appreciate the interest that has led you to find out more about us.
Visiting the UND School of Law is the best way to determine if it is the right fit for your law school education. We are proud of the education here, and we would enjoy the opportunity to show you why.
Transfer students are increasingly making up a bigger portion of our student body. Our small community of students, faculty and staff provides a welcoming environment for transfer students. As assistant dean Brad Parrish said, "If we can make it work, we are happy to help students transfer to UND Law. It's part of the character of who we are."
Law School Admissions Council is the primary resource for aspiring law students. Click on the logo above to open the LSAC website.
The Black Law Students Association is celebrating its tenth anniversary this year. BLSA has realized growth and success through the years, and was one of UND's top student organizations last year.
Soon to be second-year law student Erica Skogen is completing a summer externship with Federal Judge Dan Hoveland and Federal Magistrate Judge Charles Miller in Bismarck, N.D.
Stop by a law fair or campus visit to talk with our representative about the benefits of earning a UND Law education. We will be at locations around the country. | {
"redpajama_set_name": "RedPajamaC4"
} | 1,562 |
5-Star Theatricals Present THANKFUL
The concert takes place on December 12, 2020 at 7:00pm.
BroadwayWorld.com Dec. 6, 2020
5-Star Theatricals presents its first free virtual concert on December 12, 2020 at 7:00pm titled "Thankful" which will stream on their website @ www.5startheatricals.com, and on Facebook and YouTube.
Hosted by the talented Trent Mills, who in 2018 played the title role in Shrek and returned in 2019 as "Marcellus Washburn" in The Music Man for 5-Star Theatricals. This special event brings together a cabaret of spectacular talents and will also feature some magical cameos from past performers, like Katharine McPhee, Adam Pascal, Julia Lester, Veronica Dunne and more.
All of the concert performers have been seen in 5-Star Theatricals' productions and will feature Daebreon Poiema, who starred as "Deloris Van Cartier" in Sister Act, Mitchell Johnson, a four time 5-Star performer in Joseph and the Amazing Technicolor Dreamcoat, The Hunchback of Notre Dame, Shrek and Dreamgirls: One Night Only, Jonalyn Saxer, who began performing with 5-Star Theatricals when she was 5 in My Fair Lady and also appeared in White Christmas, 42nd Street, Singin' in the Rain, Carousel, and the Wizard of Oz. Jonalyn is currently starring as "Karen" in the first national tour of Mean Girls. Eric B. Anthony, "C.C." in Dreamgirls: One Night Only, Chelsea Morgan Stock, "Sister Mary Robert" in Sister Act, Marc Ginsburg, a 5-Star favorite, was "Che" in Evita, "Lord Farquaad" in Shrek, "Reuben" in Joseph and the Amazing Technicolor Dreamcoat and "Lumiere" in Beauty and the Beast. Casey Comstock graces the stage with accompaniment for the evening. Also included in the evening are incredible performances by a few of the 5-Star "Starlight" Teen and Children outreach group who have performed in many of the main stage productions.
The Creative team behind the "Thankful" concert is Cindy Murray/5-Star Executive Director, Tal Fox/Co-Director and Casting Director, Trent Mills/Co-Director, Casey Comstock/Musical Director, Ed Moore/Director of Photography, and Christian Hunt/Film Editor.
Related Articles View More Thousand Oaks Stories Shows
CAP Presents NAT GEO VIRTUAL - LIFE ON OTHER WORLDS
Conejo Players Theatre Presents HOME FOR THE HOLIDAYS
5-Star Theatricals Presents THANKFUL Virtual Concert Featuring Katharine McPhee, Adam Pascal and More
Conejo Players Theatre Presents HOLIDAY SPECTACULAR A Drive-In Musical
Tina Fey and Wayne Brady Join Educational Theatre Foundation's Virtual Gala
Little Free Library Comes To The Carnegie in Covington
Gund Gallery Presents THE ART OF TREES Exhibition
Thousand Oaks SHOWS
Bank of America Performing Arts Center [Fred Kavli Theatre] (4/20 - 4/20)
HIGHWAY STARR
Bank of America Performing Arts Center [Fred Kavli Theatre] (2/6 - 2/6)
Last Train to Nibroc
Conejo Players Theater (1/22 - 1/24)
SHEN YUN WORLD TOUR
NAT GEO VIRTUAL - LIFE ON OTHER WORLDS | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,631 |
Q: NullReferenceException during .exe run, but not during debug During the construction of my Window LoginSystem, a NullReferenceException is being thrown only when running the application through the .exe.
While debugging, everything works perfectly fine however.
Code which calls the LoginSystem window:
LoginSystem ls = new LoginSystem();
ls.Show();
Where I've found the problem to be in my LoginSystem class:
private void Window_Loaded(object sender, RoutedEventArgs e)
{
Login.con = new SqlConnection(ConfigurationManager.ConnectionStrings["thuisDB"].ConnectionString);
...
}
Just in case you're wondering:
public class Login
{
public static SqlConnection con = null;
...
}
Link to stack trace:
HERE
PS: This line (Login.con = new SqlConnection(...)) is the first time Login.con gets called, as the only code using that static var is in a class which LoginSystem is supposed to make.
EDIT: This question is NOT about me asking what a NullRef is or how to fix it, it was merely a single event where I didn't know why it was being thrown & didn't know how to debug it.
A: ConnectionStrings["thuisDB"] will be null if you are running the .exe in a folder where it can't find the configuration file.
Look for something like MyProgram.exe.config and make sure that is in the same folder as your executable file.
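The underlying pitfall — resolving a config file against the current working directory instead of the folder the program lives in — is language-agnostic. For .NET the fix is simply keeping MyProgram.exe.config beside the .exe, as described above; the Python sketch below (names hypothetical) only illustrates the general pattern of anchoring a config path to the program's own location.

```python
# Sketch of the general pattern: locate a config file relative to the
# running program, not the directory the process was launched from.
import os
import sys

def config_path(filename: str) -> str:
    # sys.argv[0] is the script/executable path; anchor the config to it.
    exe_dir = os.path.dirname(os.path.abspath(sys.argv[0]))
    return os.path.join(exe_dir, filename)

print(config_path("MyProgram.exe.config"))
```

A debugger usually launches the process from the build output folder, which is why such bugs often appear only when the .exe is run directly from somewhere else.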
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,270 |
{"url":"http:\/\/openstudy.com\/updates\/50785d74e4b0ed1dac50e148","text":"## JenniferSmart1 2 years ago Statistics: I'm trying to find what my \"bins\" are. (what are bins?) I'm having trouble with solver. see attachments and comments below.\n\n1. JenniferSmart1\n\n2. JenniferSmart1\n\nI'm trying to make a histogram and according to this link I have to have bins http:\/\/www.wikihow.com\/Create-a-Histogram-in-Excel how would I do that? I know what my lower and upper bounds are. Are those my bins?\n\n3. JenniferSmart1\n\n@hartnn @CliffSedge\n\n4. samtab\n\nwait but what is the whole problem?\n\n5. JenniferSmart1\n\noh I'll attach it. I completed a and b already. c is the problem child\n\n6. helder_edwin\n\nfirst did u order the inputs?\n\n7. JenniferSmart1\n\nare inputs my data values? I'm kinda new to the terminology. They're in column A\n\n8. helder_edwin\n\nyes. sorry. then finde the difference between x_max-x_min=range\n\n9. JenniferSmart1\n\nI'm not sure what my xmax and xmins are do you mean the difference between my lower and upper bounds? http:\/\/assets.openstudy.com\/updates\/attachments\/50785d74e4b0ed1dac50e148-jennifersmart1-1350065660892-screenshot20121012at1.10.44pm.png\n\n10. helder_edwin\n\nfrom the date u r given it looks like x_max=43 and x_min=0 am i right?\n\n11. JenniferSmart1\n\nthe right hand side is just a random example. The values are not specific to the problem that i'm doing\n\n12. helder_edwin\n\nu have a set of 55 numbers. right?\n\n13. JenniferSmart1\n\nsorry i lost internet connection\n\n14. JenniferSmart1\n\nyep it's 55 numbers\n\n15. helder_edwin\n\nwell the idea is that class range is x_max-x-min divided by the number of classes.\n\n16. helder_edwin\n\nu get a class with equal to $\\large \\frac{43-0}{8}=5.375$\n\n17. helder_edwin\n\ncan be rounded up to 5.4\n\n18. JenniferSmart1\n\nso let's say 5\n\n19. helder_edwin\n\ndo u have to have integer class limits?\n\n20. 
JenniferSmart1\n\n5.4 works too by classes do you mean the...what do we mean by class\n\n21. JenniferSmart1\n\nI'm honestly not sure\n\n22. JenniferSmart1\n\nit's just intro to stats and it's algebra based. I don't think they want us to do anything too fancy.\n\n23. JenniferSmart1\n\nthis is what I had done manually\n\n24. JenniferSmart1\n\nI just followed an example in the book\n\n25. helder_edwin\n\nit s great\n\n26. JenniferSmart1\n\nbut how do I make a histogram though that's where I'm having trouble\n\n27. helder_edwin\n\nby hand or with the computer\n\n28. JenniferSmart1\n\nI'm trying to use excel\n\n29. JenniferSmart1\n\n@hartnn\n\n30. JenniferSmart1\n\nand what are bins?\n\n31. helder_edwin\n\ni don't know?\n\n32. JenniferSmart1\n\nI tried using this website but I can't really follow it http:\/\/www.wikihow.com\/Create-a-Histogram-in-Excel\n\n33. JenniferSmart1\n\n@satellite73\n\n34. helder_edwin\n\nhave u tried using excel's help?\n\n35. hartnn\n\nbins are just ranges of x-axis, like to construct a histogram from a continuous variable you first need to split the data into intervals, called bins. so here your bins are 0-5.5, 5.5-11.5....... and u have 8 bins in total\n\n36. JenniferSmart1\n\nso my bins would be my class boundaries. I labeled the upper and lower bounds\n\n37. JenniferSmart1\n38. JenniferSmart1\n\n\" if you want to separate grades based on a 10 point scale 60, 70, 80, 90, 100 would be the bins\" So I guess I'm using a 5 point scale and my bins would be....0, 5.5, 11.5...no?\n\n39. hartnn\n\nyour bins does not have equal size....\n\n40. hartnn\n\n0-5.5 --->5.5 5.5-11.5--->6 all others are 6\n\n41. JenniferSmart1\n\nI don't know anymore. I thought they have to be separated by 5...ooops I thought I had them spaced equally :S\n\n42. JenniferSmart1\n\nshould I have them 5 sizes apart or six?\n\n43. hartnn\n\nwhere is the original question ? is it above 'Excel 2007' ??\n\n44. JenniferSmart1\n\n5th box from the top\n\n45. 
hartnn: the one with min=20.5 and max<30.5 ??

46. JenniferSmart1: "oh I'll attach it. I completed a and b already. c is the problem child"

47. hartnn: and n=9 , whats n?

48. JenniferSmart1: that one I can attach it again

49. JenniferSmart1: [attachment]

50. JenniferSmart1: why is n=9 where is n?

51. hartnn: i am going though it, so u have 8 classes. how did u find class width ?

52. JenniferSmart1: $\frac{\text{Largest data value-smallest data value}}{\text{Desired number of classes}}$

53. hartnn: 0 to 43 = 44 so 44/8 = 5.5 like this ?

54. hartnn: ok

55. hartnn: so the bins are : 0/5.5 5.5 -11 11=16.5 and so on

56. hartnn: 0-5.5*

57. JenniferSmart1: 43/8=5.375 so I rounded it ...yes that's what I did...

58. hartnn: so u need to change your bins to 0-5.5, 5.5-11 , and so on..

59. JenniferSmart1: is a bin a range or a specific number? From the way they've written it's kinda confusing "if you want to separate grades based on a 10 point scale 60, 70, 80, 90, 100 would be the bins"

60. JenniferSmart1: wikihow could be wrong though

61. hartnn: a bin is an interval : 1st bin: 0-5.5 2nd bin: 5.5-11 so bin is range

62. hartnn: bin is not a number bin size or width is a number = class width

63. JenniferSmart1: ok so how do we proceed...the data analysis toolbox is confusing, I don't even know where to start http://assets.openstudy.com/updates/attachments/50785d74e4b0ed1dac50e148-jennifersmart1-1350065660892-screenshot20121012at1.10.44pm.png

64. hartnn: but u need to change bins , right to 0-5.5, 5.5-11 , 11-16.5 , and so on

65. JenniferSmart1: sure I will...then what's next

66. hartnn: my excel doesn't even have data analysis button

67. JenniferSmart1: is there another way to make the histogram?

68. JenniferSmart1: I honestly don't know. I'm just googling histogram and excel because i've never done this before in excel

69. hartnn: what popped up when u selected data analysis ?

70. hartnn: was there an option of histogram ?

71. JenniferSmart1: that "solver" window It reads "solver parameter" in the attachment

72. hartnn: strange, u shouldn't get solver window ....... u should get options like input range, bin range... http://www.math.kent.edu/~honli/teaching/statistics/Chapter2/Excell_Histogram.html

73. JenniferSmart1: sorry I lost internet connection...but I'm back

74. hartnn: [attachment]

75. hartnn: i don't know where u clicked, but u should get that^ when u click data analysis tab

76. JenniferSmart1: I guess I don't have that

77. JenniferSmart1: solver is a different add on. I guess I have to find this correct one. For whatever reason the mac 2011 version doesn't have that

78. hartnn: then u need to install the add-in : Analysis Toolpack

79. JenniferSmart1: [attachment]

80. hartnn: idk anything about MAC. try to Find 'Excel Options' and there u get Add-in tab, which has Analysis Toolbox option. maybe its will be in the dropdown list of settings besides ^ on upper right....

81. JenniferSmart1: I just need to follow the instructions in the above attachment. (two boxes up) which I don't seem to be able to do

82. JenniferSmart1: that screenshot above is of the statplus instructions

83. hartnn: try this reference http://support.microsoft.com/kb/914208

84. JenniferSmart1: I think I have what I need. I just need to be able to follow the instructions I guess

85. JenniferSmart1: I'm starting to hate this MAC....(not really ;P , but it's really annoying because everything is soo different)

86. JenniferSmart1: oh my god look at that!
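For reference, the class-width arithmetic the thread works through can be sketched outside Excel. This is a Python illustration only — it is not part of the original exchange, and the function names are ours:

```python
def class_width(data_min, data_max, num_classes):
    """Class width = (largest data value - smallest data value) / desired classes."""
    return (data_max - data_min) / num_classes

def bin_edges(data_min, width, num_classes):
    """Histogram bin boundaries, as in the thread: 0, 5.5, 11, 16.5, ..."""
    return [data_min + i * width for i in range(num_classes + 1)]

# Values from the thread: data running from 0 to 43, 8 classes.
w = class_width(0, 43, 8)     # 5.375, rounded up to 5.5 in the discussion
edges = bin_edges(0, 5.5, 8)  # [0, 5.5, 11.0, 16.5, 22.0, 27.5, 33.0, 38.5, 44.0]
```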
\section{Introduction}
The cross sections for a number of hadronic reactions can be computed
in perturbative Quantum Chromodynamics (pQCD) by convoluting the
perturbatively calculable hard-scattering coefficients with the
nonperturbative Parton Distribution Functions (PDFs), which parametrize
the large-distance hadronic structure.
The accuracy with which the theoretical predictions for observables
of such reactions can be compared against the high precision experimental
data thus depends, not only on the accuracy of the hard scattering part
calculations, but also on the accuracy with which the PDFs are known.
Currently, the established method to obtain the
PDFs, used by the major PDF collaborations
(CTEQ \cite{Nadolsky:2008zw} and references within, MRST
\cite{Martin:2001es},
Alekhin \cite{Alekhin:2002fv}, Zeus \cite{Chekanov:2005nn}
and H1 \cite{Adloff:2003uh}),
is the global analysis supplemented with an error estimation using
some variant of
the Hessian method (see e.g. \cite{Pumplin:2001ct}
for details).
This powerful combination allows for both extrapolation
outside the kinematical range of the data and extension to multivariable cases,
such as nuclear PDFs.
However, there are uncertainties related to the method itself, that are
difficult to quantify, but may turn out to have a large effect.
The differences between the current global PDF sets indeed
tend to be larger than the estimated uncertainties \cite{Pumplin:2005yfa},
and these differences again translate to the predictions for the LHC
observables, such as Higgs \cite{Djouadi:2003jg} or $W^\pm$ and $Z$ production
cross sections \cite{Nadolsky:2008zw}.
For details of PDF uncertainty studies see e.g.
Refs.~\cite{Martin:2003sk}.
Another approach to the PDF fitting has recently been proposed by the
NNPDF collaboration \cite{Ubiali:2008uk},
who have replaced the typical functional-form ansätze used in global analyses
with more complex standard neural network (NN) solutions, and the Hessian
method with Monte Carlo (MC) sampling of the data.
The NNPDF method circumvents many of the problems global analyses suffer
from, such as the bias resulting from fixing a functional form and from
selecting a suitable tolerance $\Delta \chi^2$ needed in the Hessian method,
and it relies on a genetic algorithm (GA) which
works on a population of solutions for each MC replica of the data, thus having
a lowered possibility of getting stuck in local minima.
The estimated uncertainties for NNPDF fits are larger than those of global
fits, possibly indicating that the global fit uncertainties may have been
underestimated.
The complexity of NN results, however,
may also induce problems, especially when used in
a purely automated fitting procedure. Since the effect of modifying individual
NN parameters is unknown, the result may exhibit strange or unwanted behaviour
in the extrapolation region, or in between the data points if the data is
sparse.
Implementation of information not given directly by the data, such
as nonperturbative models, lattice calculations or knowledge from prior work
in general, is also difficult in this approach.
The new PDF fitting method we have recently proposed in Ref.~\cite{Carnahan:2008mb}
relies on the use of
Self-Organizing Maps (SOMs), a subtype of neural network. The
idea of our method is to create means for introducing
``Researcher Insight'' instead of ``Theoretical bias'' by giving up
a fully automated fitting procedure, and eventually
to develop an interactive
fitting program which would allow us
to combine the best features of both the global analysis and
the NNPDF approach.
\section{Self-Organizing Maps}
The SOM \cite{Teuvo}
is a visualization algorithm which
attempts to represent all the available observations with optimal accuracy
using a restricted set of models.
SOM consists of nodes, map cells, which are all assigned spatial
coordinates, and the topology of the map is determined by a chosen distance
metric $M_{\mathrm{map}}$. Each cell $i$ contains a map vector $V_i$,
that is isomorphic to the
data samples used for training of the neural network.
For a simple 2-dimensional rectangular lattice, our choice for the SOM shape,
a natural choice for the topology is
$L_1(x,y)=\sum_{i=1}^2\vert x_i-y_i\vert$.
The implementation
of SOMs proceeds in three stages: 1) initialization of the SOM
(see Fig.~\ref{train2.eps}),
2) training of the SOM (Fig.~\ref{train2.eps})
and 3) associating the data samples with a
trained map, i.e. clustering. For the details of the SOM implementation,
see \cite{Carnahan:2008mb}.
\begin{figure}[h]
\begin{center}
\vspace{-1.2cm}
\includegraphics[width=15cm]{train2.eps} \hspace{0.0cm}
\vspace{-1.2cm}
\caption[a]{\protect \small {\bf Left:} SOM initialization,
{\bf Right:} SOM training.}
\label{train2.eps}
\end{center}
\end{figure}
\vspace{-0.0cm}
At the end of the training stage, cells that are topologically close to
each other have map vectors which are most similar to each other (according to
a chosen similarity metric $M_{\rm data}$) compared to
all the other map vectors.
In the matching phase the actual data is matched against the
map vectors of the trained map, and thus gets distributed on the map according
to the feature that was used as the similarity criterion. Clusters now
emerge as a result of {\it unsupervised learning}.
This local similarity property is the feature that makes SOM suitable for
visualization purposes,
thus facilitating user interaction with the data. Since each map vector now
represents a class of similar objects, the SOM
is an ideal tool to
visualize high-dimensional data, by projecting it onto a low-dimensional map
clustered according to some desired similar feature.
In our work we used the so-called batch-version of the training, in which
all the training data samples are matched against the map vectors before the
training begins. The map vectors
are then averaged with all the training samples within the neighbourhood
radius simultaneously.
The procedure is repeated $N_{\mathrm{step}}$ (a free parameter) times
such that in
every training step the {\it same} set of training data samples is associated
with the evolving map.
The benefit of the batch training compared to the incremental training,
shown in Fig.~\ref{train2.eps}, is that the training is independent of
the order in which the training samples are introduced on the map.
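To make the batch step concrete, here is a minimal Python sketch — ours, not the authors' code — with the PDF sets reduced to plain feature vectors and $L_1$ used both as $M_{\mathrm{map}}$ and $M_{\mathrm{data}}$:

```python
import numpy as np

def l1(a, b):
    """L1 (taxicab) distance, used both on the map grid and on the data."""
    return np.abs(np.asarray(a) - np.asarray(b)).sum()

def batch_train_step(map_vectors, coords, samples, radius):
    """One batch-training step: every training sample is matched to its
    best cell first, then each map vector is averaged with all samples
    matched within `radius` (map-grid L1 distance) of its cell."""
    # best-matching cell index for each training sample
    best = [min(range(len(map_vectors)), key=lambda i: l1(map_vectors[i], s))
            for s in samples]
    updated = []
    for i, v in enumerate(map_vectors):
        # samples whose best-matching cell lies in this cell's neighbourhood
        near = [s for s, m in zip(samples, best)
                if l1(coords[i], coords[m]) <= radius]
        updated.append(np.mean([v] + near, axis=0) if near else v)
    return updated
```

Because all samples are matched before any map vector is touched, the update is indeed independent of the order of the training samples, as stated above.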
\section{ENVPDF algorithm}
The aim of our approach is i) to be able to study the
properties of the PDFs in a model-independent way, yet ii) to be able to
implement knowledge from prior work on PDFs, and ultimately iii)
to be able to guide the fitting procedure interactively with the help of
the SOM properties.
To accomplish this, we choose, at variance with the ``conventional'' PDFs
sets or NNPDFs, to give up the functional form for the PDFs
and rather to rely on purely stochastical methods in
generating the initial and training PDF samples. Our choice is
a GA-type analysis, in which our parameters are the values of PDFs at the
initial scale
for each flavour at each value of $x$ where the experimental data exist.
To obtain control over the shape of the PDFs we use some of the
existing distributions
to establish an initial range, or {\em envelope}, within which we sample the
candidate PDF values.
For now we concentrate on
DIS structure function data from H1 \cite{Adloff:2000qk},
BCDMS \cite{Benvenuti:1989rh} and Zeus \cite{Chekanov:2001qu},
which we use without additional kinematical cuts or normalization factors.
The parameters for the DGLAP scale evolution were chosen to be those of
CTEQ6 (CTEQ6L1 for lowest order (LO)) \cite{Pumplin:2002vw:cteq6},
the initial scale being
$Q_0=1.3$ GeV. In next-to-leading order
(NLO) case the evolution code was taken from \cite{QCDNUM}
(QCDNUM17 beta release).
We use CTEQ6 \cite{Pumplin:2002vw:cteq6},
CTEQ5 \cite{Lai:1999wy:cteq5},
MRST02 \cite{Martin:2001es,Martin:2002dr}, Alekhin \cite{Alekhin:2002fv} and
GRV98 \cite{Gluck:1998xa} PDF sets
as our {\it init} PDFs. We construct our initial PDF generator to first,
for each flavour separately,
randomly select either the range $[0.5,1]$,
$[1.0,1.5]$ or $[0.75,1.25]$ times any of the init PDF sets.
Next, the initial generators generate values for each $x_{\rm data}$
using a uniform, instead of Gaussian,
distribution around the init PDFs, thus reducing direct bias from them.
(To ensure a reasonable large-$x$ behaviour for the PDFs, we also generate
with the same method
values for them in a few $x$-points outside the range of the experimental
data. For simplicity we also require the gluons to be positive in NLO.)
Gaussian smoothing is applied to the
resulting set of points, and the flavours
combined to form a PDF set such that the curves are linearly
interpolated from the discrete set of generated points, and scaled to
conserve momentum, baryon number and charge.
In this study we accept
the $<$few\% normalization error which results from the fact that our
x-range is not $x=[0,1]$, but $x=[{\rm min}(x_{\rm data}),1]$.
We call these type of PDF sets {\it database} PDFs.
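The rescaling enforces the usual momentum and valence-number sum rules, which for the proton read (standard textbook forms; they are not written out in the original):

```latex
\int_0^1 \! dx\, x \Big[ g(x,Q_0^2)
  + \sum_q \big( q(x,Q_0^2) + \bar q(x,Q_0^2) \big) \Big] = 1, \quad
\int_0^1 \! dx\, \big[ u(x,Q_0^2) - \bar u(x,Q_0^2) \big] = 2, \quad
\int_0^1 \! dx\, \big[ d(x,Q_0^2) - \bar d(x,Q_0^2) \big] = 1 .
```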
For an $N\times N$ SOM we choose the size of the database to be $4N^2$.
We randomly initialize the map with $N$ database PDF sets,
such that each map vector $V_i$
consists of the PDF set itself, and
of the observables $F_2^p(x,Q_0^2)$ derived from it, and
train the map with $N_{\mathrm{step}}$ batch-training steps.
In order to obtain a reasonable selection of PDFs to start with, we reject
candidates which have $\chi^2/N>10$.
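Here $\chi^2$ is taken to be of the standard uncorrelated-errors form (this is our reading; the text does not spell out its treatment of correlated systematics):

```latex
\chi^2 = \sum_{i=1}^{N}
\frac{\left[ F_2^{\rm data}(x_i,Q_i^2) - F_2^{\rm th}(x_i,Q_i^2) \right]^2}
     {\sigma_i^2} \, .
```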
We choose the similarity criterion to be the
similarity of observables $F_2^p(x,Q^2)$ with
$M_{\mathrm{data}}=L_1$. The similarity is tested at every
$x_{\rm data}$-values both at the initial scale and at all the evolved
scales where experimental data exist.
On every training step, after the matching, all
the observables (PDFs) of the map vectors get averaged with the observables
(PDFs, flavour by flavour) matched within the neighbourhood.
The resulting new averaged map
vector PDFs are rescaled again to obey the sumrules. We call
these type of PDF sets {\it map} PDFs.
The map PDFs are evolved and the
observables at every experimental data scale are computed
and compared for similarity with the observables from the
training PDFs. After the training we have a map with $N$ map PDFs and the
same $4N^2$ database PDF sets we used to train the map.
This is the end of the first optimization {\it iteration}.
During the later iterations we proceed as follows:
At the end of each iteration we pick from the trained SOM $2N$
best PDFs as the init PDFs.
These init PDFs are introduced into the training set alongside with the
database PDFs, which are now constructed using each of the init PDFs
{\it in turn} as a center for a Gaussian random number generator, which
assigns for {\it all} the flavours for each $x$ a value around that {\it same}
init PDF
such that $1-\sigma$ of the generator is given by the spread of the best PDFs
in the topologically nearest neighbouring cells.
The object of these generators is thus to refine a good candidate PDF found
in the previous iteration by jittering its values within a range
determined by the shape of other good candidate PDFs from the previous
iteration. The generated PDFs are then smoothed and
scaled to obey the sumrules. Sets with $\chi^2/N>10$ are always rejected.
It is important to preserve
the variety of the PDF shapes on the map, so
we also keep $N_{\rm orig}$ copies of the first iteration
generators in our generator mix. Since the best PDF candidates from the
previous iteration are matched on this new map as an unmodified
init PDF, it is guaranteed that the $\chi^2/N$ as a function of the
iteration either decreases or remains the same.
We keep repeating the
iterations until the $\chi^2/N$ saturates.
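The per-iteration refinement described above can be caricatured in Python as follows. The helper names are hypothetical, and the smoothing and sum-rule rescaling steps are omitted:

```python
import random

def next_generation(best_pdfs, spread, n_database, chi2_per_point, cut=10.0):
    """Build the training database for the next iteration.  Each candidate
    is jittered around one of the best PDFs from the previous iteration,
    with a Gaussian width per x-point given by the spread of good
    candidates in the neighbouring cells.  Sets with chi^2/N > cut are
    rejected (the cut is assumed to accept some candidates, otherwise
    this loop would not terminate)."""
    database = []
    while len(database) < n_database:
        centre = random.choice(best_pdfs)
        candidate = [random.gauss(c, s) for c, s in zip(centre, spread)]
        if chi2_per_point(candidate) <= cut:
            database.append(candidate)
    return database
```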
The best $\chi^2/N$ values of the original
init PDFs\footnote{These are the $\chi^2/N$ for the initial scale
PDF sets taken from the quoted parametrizations and
evolved with CTEQ6 DGLAP settings, no kinematical cuts
or normalization factors for
the experimental data were imposed.
We do not claim these
values to describe the quality of the quoted PDF sets.}
are 1.67 for LO (CTEQ6) and 1.89 for NLO (MRST02), and
Table~\ref{envpdftab} lists results from a variety of ENVPDF runs. The results
do not seem to be very sensitive to the number of SOM training steps,
$N_{\rm step}$, but are highly sensitive to the number of first iteration
generators used in subsequent iterations. Although the generators can
now in principle produce an infinite number of different PDFs, the algorithm
would not be able
to radically change the shape of the database PDFs without introducing
a random element on the map. Setting $N_{\rm orig}> 0$ provides,
through map PDFs, that element, and keeps the algorithm from
getting fixed to a local minimum.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
SOM & $N_{\rm step}$ & $N_{\rm orig}$
& LO $\chi^2/N$ & NLO $\chi^2/N$\\ \hline
5x5 & 5 & 2 & 1.04 & 1.08 \\ \hline
5x5 & 10 & 2 & 1.10 & - \\ \hline
5x5 & 20 & 2 & 1.10 & - \\ \hline
5x5 & 30 & 2 & 1.10 & - \\ \hline
5x5 & 40 & 2 & 1.08 & - \\ \hline
5x5 & 5 & 0 & 1.41 & - \\ \hline
15x15 & 5 & 6 & 1.00 & 1.07 \\ \hline
\end{tabular}
\caption{$\chi^2/N$ for a variety of ENVPDF runs against all the datasets
(H1, ZEUS, BCDMS, $N=709$).}
\label{envpdftab}
\end{table}
Due to the stochastical nature of the ENVPDF algorithm, we may
well study the combined results from several separate runs.
It is especially important to verify the stability of our results, to show
that the results are indeed reproducible instead of lucky coincidences.
Left panel of Fig.~\ref{envpdf_jakaumat_case1_best_nlo.eps} presents the best
NLO results, and
the combined $\chi^2/N\le 1.2$ spreads of the PDFs from any iteration, for
10 repeated $5\times 5$, $N_{\rm step}=5$ runs at the initial scale.
The average $\chi^2/N$ and the standard deviation $\sigma$ for these runs
are 1.122 and 0.029, corresponding
to $\Delta\chi^2\sim 20$.
The right panel of
the same Fig.~\ref{envpdf_jakaumat_case1_best_nlo.eps}
shows the 10 best result curves
and the $\chi^2/N\le 1.2$ spreads evolved
up to $Q=3.0$ GeV.
Since we have only used DIS data in this study, we are
only able to explore the small-$x$ uncertainty for now, and
expectedly, the small-$x$ gluons obtain the
largest uncertainty for all the cases we studied.
Clearly the seemingly large difference between the small-$x$ gluon results
at the initial scale is not statistically significant, but gets smoothed out
during the course of the QCD evolution.
The evolved curves also preserve the
initially set baryon number scaling within
$\sim 0.5\%$ and the momentum sumrule within $\sim 1.5\%$ accuracy.
Thus the initial-scale wiggliness of the PDFs is mainly a residual effect
from our method of generating them, and not linked to overtraining
of the SOM.
Therefore our simple method of
producing the candidate PDFs by jittering random numbers inside a predetermined
envelope is surprisingly stable when used together with a complicated PDF
processing that SOMs provide.
Remarkably then, even a single SOM run can provide a quick uncertainty estimate
for a chosen $\Delta\chi^2$ without performing a separate error analysis.
\begin{figure}[h]
\begin{center}
\vspace{-1.2cm}
\hspace*{-1.0cm}\includegraphics[width=12cm]{envpdf_jakaumat_case1_best_nlo.eps} \hspace{0.0cm}
\vspace{-0.5cm}
\caption[a]{\protect \small
NLO ENVPDF best results and the $\chi^2/N\le 1.2$
spreads of results from 10 separate runs.}
\label{envpdf_jakaumat_case1_best_nlo.eps}
\end{center}
\end{figure}
\vspace{-0.0cm}
\section{Future of the SOMPDFs}
So far we have shown a relatively straightforward method of obtaining
stochastically generated, parameter-free, PDFs, with an uncertainty estimate
for a desired $\Delta\chi^2$.
However, the proposed method can be extended much further than that.
What ultimately sets the SOM method apart from the standard global analyses or
NNPDF method are the clustering and
visualization possibilities that it offers.
Instead of setting $M_{\mathrm{data}}=L_1$ and clustering according to the
similarity of the observables, it is possible to set the clustering criteria
to be anything that can be mathematically quantified, e.g. the shape of the
gluons or the large-$x$ behaviour of the PDFs.
The desired feature of the PDFs can then be projected out from the SOM.
Moreover, by combining the method with an interactive graphic user
interface (GUI), it
would be possible to change and control the shape and the width of the
envelope as the
minimization proceeds, to guide the process by applying researcher insight at
various stages of the process, and the uncertainty band produced by the
SOM could further
help the user to make decisions about the next steps of the minimization.
With a GUI it would be possible, e.g., to set the generators to
sample a vector
consisting of PDF parameters, instead of values of PDFs in each value of
$x$ of the data. That would lead to smooth, continuous type of solutions,
either along the lines of global analyses, or NNPDFs using $N$ SOMs
for $N$ Monte-Carlo sampled replicas of the data.
For such a method, all the existing error estimates,
besides an uncertainty band produced by the map,
would be applicable as well.
Since the solution would be required to stay within an envelope of selected
width and shape, no restrictions for the parameters themselves
would be required, and it would be possible e.g. to
constrain the extrapolation of the NN
generated PDFs outside the $x$-range of the data without explicitly
introducing terms to ensure the correct small- and large-$x$ behaviour
as in NNPDF method.
The selection of the best PDF candidates for the subsequent iteration could
then be made based on the user's preferences instead of solely based on the
$\chi^2/N$.
That kind of method in turn could be extended to
more complex hadronic matrix elements, such as the ones defining the GPDs,
which are
natural candidates for future studies of cases where the experimental data
are not numerous enough to allow
for a model independent fitting, and the guidance and intuition of the user is
therefore irreplaceable.
The possibilities of such a method are widely unexplored.
Answer Woman: God in logo, closed daycare
By Casey Blake, Columnist
To the reader who expressed his concern this week via an email screaming "WHAT THE HECK: POLAR PLUNGE," I thank you for your concern and commend your devotion to John Boyle's body temperature.
Said reader, who I can only assume speaks on behalf of all humans within a 100 mile radius, apparently suspects Boyle somehow rigged the Polar Plunge to be postponed to a warmer day.
"HE'S NOT GOING TO BAIL, IS HE?" said reader inquired.
Nay, Boyle will still submerge himself in the frigid waters of a 65 degree day Saturday, and I can tell you his flaming costume will be worth the watch.
The annual event, benefiting Meals on Wheels, will be at the Asheville Racquet Club at 11 a.m.
To keep you warm in the meantime, here's a less literal attempt to extinguish your burning questions, my smart-aleck responses and the real deal.
Question: I'm a former Asheville resident who recently moved to Marshall, and as I was researching my new neighborhood came across the Madison County Public Schools website and was shocked at what I saw. The school's logo is a triangle touting "Family, Jobs" and at the very top "God." Who came up with this? Is this an artifact of the school district or something new? Is this even legal?
My answer: I find if you blame John Boyle for this it will ALL BE FINE.
Real answer: The logo has been around for some time, though it isn't a relic from Madison County school history.
Superintendent Ronald Wilcox designed the logo himself at least 10 years ago, he said.
"It just sums up the culture here, and what we believe," Wilcox said. "The triangle part represents what's most important to our employees, which include their jobs, families and God."
"It was never our intention to offend anybody," he said. "It just depicts our culture here, within the school system."
There's no state law or school board policy regulating logos, according to State Department of Public Instruction spokesperson Lynda Fuller.
"There is that bigger issue of the separation of church versus state," Fuller said, "but there's really nothing at the state level that specifically addresses the use of 'God' in a logo, or regulates logos at all."
Wilcox said he's never heard a complaint or any concern about the logo, which is printed widely throughout the district on T-shirts, coffee mugs and has been displayed on the website for years.
"The Madison County school system logo represents a clear violation of the separation clause of the U.S. Constitution," said Asheville City councilman Cecil Bothwell, who has spoken out on issues of church and state in schools in the past. "Our laws and our tax money cannot be used to advance any religion or religious practice or belief."
Question: What happened to the day care place near Beecham's curve on Haywood Road? They've been there for as long as I can remember but appear to be closed, though there's no sign or anything. It's such a prominent location, is something else going in there?
My answer: New support group facility for Polar Plunge conspiracy theorists?
Real answer: John Wilson, who owns the building at 285 Haywood Road where Wee Wiggles daycare center was located, said the center closed at the end of the year after nearly 20 years in business.
"Her lease was up at the end of the year, and she just decided she had to close," Wilson said.
"Most of her clients received vouchers, and the state wasn't giving out enough anymore so she didn't have enough kids to stay in business. It's a shame, too — she did a really nice job."
Patricia York, who owned Wee Wiggles, said in October that the child care subsidy program had taken severe hits over the past several months. Her center went from averaging 85 children a day to 35 over a few months.
More than half of the center's children received voucher funding, she said.
Wilson, who lives in Florida, said he's already been flooded with calls but hasn't found a renter yet.
"I knew it was a good piece of property but didn't know it was quite this good," Wilson said with a laugh. The building at 285 Haywood, as well as an adjoining property at 295 Haywood Road which Wilson also owns, are about 5,200 and 12,500 square feet, respectively.
© 2021 www.citizen-times.com. All rights reserved.
function Get-TargetResource
{
[CmdletBinding()]
[OutputType([System.Collections.Hashtable])]
param
(
[parameter(Mandatory = $true)]
[System.String]
$Source,
[parameter(Mandatory = $true)]
[System.String]
$Destination,
[System.String]
$LogOutput
)
$returnValue = @{
Source = if (Test-Path $Source){$Source};
Destination = if (Test-Path $Destination){$Destination};
LogOutput = if ($LogOutput){
if(Test-Path $LogOutput){$LogOutput}
}
}
$returnValue
}
function Set-TargetResource
{
[CmdletBinding()]
param
(
[parameter(Mandatory = $true)]
[System.String]
$Source,
[parameter(Mandatory = $true)]
[System.String]
$Destination,
[System.String]
$Files,
[System.UInt32]
$Retry,
[System.UInt32]
$Wait,
[System.Boolean]
$SubdirectoriesIncludingEmpty = $False,
[System.Boolean]
$Restartable = $False,
[System.Boolean]
$MultiThreaded = $False,
[System.String]
$ExcludeFiles,
[System.String]
$ExcludeDirs,
[System.String]
$LogOutput,
[System.Boolean]
$AppendLog = $False,
[System.String]
$AdditionalArgs
)
[string]$Arguments = ''
# Pass the file filter (e.g. *.log) through to robocopy; it must follow the destination
if ($Files -ne '') {$Arguments += " $Files"}
if ($PSBoundParameters.ContainsKey('Retry')) {$Arguments += " /R:$Retry"}
if ($PSBoundParameters.ContainsKey('Wait')) {$Arguments += " /W:$Wait"}
if ($SubdirectoriesIncludingEmpty) {$Arguments += ' /E'}
if ($Restartable) {$Arguments += ' /Z'}
if ($MultiThreaded) {$Arguments += ' /MT'}
if ($ExcludeFiles -ne '') {$Arguments += " /XF $ExcludeFiles"}
if ($ExcludeDirs -ne '') {$Arguments += " /XD $ExcludeDirs"}
if ($LogOutput -ne '' -AND $AppendLog) {
$Arguments += " /LOG+:$LogOutput"
}
if ($LogOutput -ne '' -AND -not $AppendLog) {
$Arguments += " /LOG:$LogOutput"
}
if ($AdditionalArgs -ne '') {$Arguments += " $AdditionalArgs"}
try {
Write-Verbose "Executing: Robocopy.exe `"$($Source)`" `"$($Destination)`" $($Arguments)"
Invoke-Robocopy $Source $Destination $Arguments
}
catch {
Write-Warning "An error occured executing Robocopy.exe. ERROR: $_"
}
}
function Test-TargetResource
{
[CmdletBinding()]
[OutputType([System.Boolean])]
param
(
[parameter(Mandatory = $true)]
[System.String]
$Source,
[parameter(Mandatory = $true)]
[System.String]
$Destination,
[System.String]
$Files,
[System.UInt32]
$Retry,
[System.UInt32]
$Wait,
[System.Boolean]
$SubdirectoriesIncludingEmpty,
[System.Boolean]
$Restartable,
[System.Boolean]
$MultiThreaded,
[System.String]
$ExcludeFiles,
[System.String]
$LogOutput,
[System.Boolean]
$AppendLog,
[System.String]
$AdditionalArgs
)
try {
$result = Invoke-RobocopyTest $Source $Destination
}
catch {
Write-Warning "An error occured while getting the file list from Robocopy.exe. ERROR: $_"
}
# https://support.microsoft.com/en-us/kb/954404
# ROBOCOPY $LASTEXITCODE is a bitflag:
# 0: Source and destination are completely synchronized
# 1: One or more files were copied successfully (new files present)
# 2: extra files/directories detected
# 4: mismatched files/directories
# 8: copy errors and retries exceeded
# 16: serious error
# In the desired state only when the list-only pass found nothing to copy
# (bit 1 unset) and hit no errors (codes 8 and above).
if (($result -band 1) -eq 0 -and $result -lt 8) {[system.boolean]$result = $true}
else {[system.boolean]$result = $false}
$result
}
# Helper Functions
function Invoke-RobocopyTest {
param (
[parameter(Mandatory = $true)]
[System.String]
$source,
[parameter(Mandatory = $true)]
[System.String]
$destination
)
$output = & robocopy.exe /L $source $destination
$LASTEXITCODE
}
# Invoke-RobocopyTest C:\DSCTestMOF C:\DSCTestMOF2
function Invoke-Robocopy {
param (
[parameter(Mandatory = $true)]
[System.String]
$source,
[parameter(Mandatory = $true)]
[System.String]
$destination,
[System.String]
$Arguments
)
# This is a safe use of invoke-expression. Input is only passed as parameters to Robocopy.exe, it cannot be executed directly
$output = Invoke-Expression "Robocopy.exe `"$Source`" `"$Destination`" $Arguments"
$LASTEXITCODE
}
# Invoke-Robocopy -source C:\DSCTestMOF -destination C:\DSCTestMOF2 -Arguments '/E /MT'
Export-ModuleMember -Function *-TargetResource
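For orientation, a minimal DSC configuration consuming this resource might look as follows. The module and resource names (`cRobocopy`) are assumptions for illustration only — substitute the names this resource actually ships under:

```powershell
Configuration MirrorBuildOutput
{
    # Module/resource names below are assumed for illustration only.
    Import-DscResource -ModuleName cRobocopy

    Node 'localhost'
    {
        cRobocopy BuildDrop
        {
            Source                       = 'C:\Build\Output'
            Destination                  = 'D:\Drop'
            SubdirectoriesIncludingEmpty = $true
            MultiThreaded                = $true
            LogOutput                    = 'C:\Logs\robocopy.log'
            AppendLog                    = $true
        }
    }
}
```

Because the resource shells out to robocopy.exe, the paths above must be valid on the target node.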
Version 0.1.0 features experimental functions to display different kinds of flow diagrams.

These are not stable but already useful, and since developing flow takes time I thought it'd be useful to release them as they are for the time being. Please report issues or upvote existing ones on github if you'd like to see more work happening there.

## flow_view_vars()

flow_view_vars() shows dependencies between variables in a function. This kind of function cannot be made completely robust, but it seems to work quite well if you don't use assign and don't reuse variable names all over the place.

Full lines are direct dependencies, dashed lines are dependencies through control flow. Variables are repeated when they are modified.

Let's test it on tidyselect::ends_with:

tidyselect::ends_with
#> function (match, ignore.case = TRUE, vars = NULL)
#> {
#>     check_match(match)
#>     vars <- vars %||% peek_vars(fn = "ends_with")
#>     if (ignore.case) {
#>         vars <- tolower(vars)
#>         match <- tolower(match)
#>     }
#>     length <- nchar(vars)
#>     flat_map_int(match, ends_with_impl, vars, length)
#> }
#> <bytecode: 0x13849f460>
#> <environment: namespace:tidyselect>

flow_view_vars(tidyselect::ends_with)

If we don't want to repeat the variables we can set expand = FALSE; the resulting diagram doesn't reflect the sequence of modifications but sums up dependencies more clearly as it is more compact.

flow_view_vars(tidyselect::ends_with, expand = FALSE)

Use the out argument to export these diagrams.

## flow_view_deps()

flow_view_deps() shows dependencies between functions in a package.

Exported functions are blue, unexported are yellow, and the number of lines of code is indicated between brackets (useful to detect small helpers).

By default, functions called from other non-base packages are shown but don't create new nodes.

flow_view_deps(tidyselect::ends_with)

We can collapse those to show only packages:

flow_view_deps(tidyselect::ends_with, show_imports = "packages")

Or not show them at all:

flow_view_deps(tidyselect::ends_with, show_imports = "none")

There are many ways to tweak the output; in particular we can:

- Promote functions from other packages so they'll have their own node:

flow_view_deps(tidyselect::ends_with, promote = "purrr::map")

- Demote functions from the target package to treat them as functions from other packages:

flow_view_deps(tidyselect::ends_with, demote = "peek_vars")

- Hide a function altogether:

flow_view_deps(tidyselect::ends_with, hide = c("peek_vars", "purrr::map"))

- Trim to stop recursing after a given function:

flow_view_deps(tidyselect::ends_with, trim = "peek_vars")

Use the out argument to export these diagrams.

## flow_view_shiny()

This function displays a shiny app's module structure. If you call for instance flow_view_shiny() on a function that runs the app and uses both the main server and ui functions, you'll display the full graph of server and ui modules.

flow_view_shiny() is a wrapper around flow_view_deps() to show the structure of a shiny app. It assumes the app is built on top of module functions named a certain way (adjustable through the pattern argument).

It works nicely on apps built with {golem} or apps that follow the same kind of structure (good practice basically), such as those that you'd build following the recommendations in Hadley Wickham's "Mastering Shiny".

Apps that use source() are not well supported, but support might come as we found it's quite common (though probably not good practice).

Here's an example using the great {esquisse} app:

flow_view_shiny(esquisse::esquisser, show_imports = "none")

Use the out argument to export these diagrams.
Tyler, the Creator Banned From United Kingdom Over Lyrics
"Coming to the UK is a privilege, and we expect those who come here to respect our shared values," says British Home Office
Jon Blistein
Odd Future member Tyler, the Creator announced he has been banned from the United Kingdom after canceling a string of shows in the U.K. and Ireland, including sets at the Reading and Leeds Festivals.
"Based on lyrics from 2009 I am not allowed in the U.K. for 3-5 years (although I was there 8 weeks ago). That is why the shows were canceled," the rapper wrote, in all caps, on Twitter.
Tyler's manager, Christian Clancy, echoed the sentiment in a post on Tumblr, alleging that a letter from the U.K. Home Office specifically cited lyrics from 2009's Bastard and 2011's Goblin.
"[T]he type of lyrics he hasn't written since," Clancy added. "[H]ighlights from the letter include that his work 'encourages violence and intolerance of homosexuality' and 'fosters hatred with views that seek to provoke others to terrorist acts.' I grew up on N.W.A, Eminem and Rage Against the Machine, so it's hard to me to fully wrap my head [around] this thought process and its implications."
While the official banishment note to Tyler was not released, The Quietus did receive a comment from a Home Office spokesperson appearing to confirm the rapper's allegations. "Coming to the UK is a privilege, and we expect those who come here to respect our shared values," the spokesperson said. "The Home Secretary has the power to exclude an individual if she considers that his or her presence in the UK is not conducive to the public good or if their exclusion is justified on public policy grounds."
As Clancy also noted, Tyler has made over 20 trips to the U.K. in the last five years for concerts, in-stores and meet and greets, all without incident. He also admitted that while Tyler's old lyrics can make him cringe, being banned for them is "a broader issue of free speech, with new lines being drawn that include reaching back in time without acknowledging growth. In fact, punishing growth… Is he not worthy of the pat on the back for becoming aware and making changes? What message does that send? Is race a conscious or subconscious factor at all?"
The U.K. ban comes on the heels of Tyler's canceled Australian tour, which was protested by feminist group Collective Shout (who also campaigned against his 2013 trek). Tyler incorrectly claimed on Twitter that the group had gotten him banned from the country — igniting a wave of online vitriol and misogyny against Collective Shout director of operations, Coralie Alison — but CNN reported that, at the time, his visa was still being inspected.
Last summer, Tyler and Odd Future were banned from entering New Zealand to play Eminem's Rapture Festival in Auckland. Immigration officials cited a 2011 incident in Boston where OF members reportedly urged fans to attack police officials; they "deemed [the group] to be a potential threat to public order and the public interest." In 2011, Odd Future was also dropped from the Big Day Out festival after the Auckland City Council said the group's lyrics were misogynistic and homophobic.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 3,860 |
# Parametric Parabola in 3D Space

Geometry Level pending

A three-dimensional vector $\vec{r}(t)$ is defined explicitly in terms of the parameter $t \in \mathbb{R}$, as follows:

$\vec{r}(t) = ( x(t), y(t), z(t) ) = (1, 4, 9) + t (1, 2, 3) + t^2 (1, 1, 1)$

The above vector traces a parabola in 3D. What is the vertex of this parabola? | null | null |
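One standard way to locate the vertex of a parametric parabola r(t) = p0 + t·v + t²·a is to note that at the vertex the tangent vector r'(t) = v + 2t·a is perpendicular to the parabola's axis direction a (the t² coefficient), so we solve (v + 2t·a)·a = 0 for t. A minimal Python sketch of that calculation (the helper name `vertex` is ours, not part of the problem statement):

```python
# Vertex of the parametric parabola r(t) = p0 + t*v + t^2*a.
# At the vertex the tangent r'(t) = v + 2t*a is perpendicular to the
# axis direction a, so we solve (v + 2t*a) . a = 0 for t.

def vertex(p0, v, a):
    dot = lambda x, y: sum(xi * yi for xi, yi in zip(x, y))
    t = -dot(v, a) / (2 * dot(a, a))          # critical parameter value
    return tuple(p + t * q + t * t * r for p, q, r in zip(p0, v, a))

# Data from the problem: p0 = (1, 4, 9), v = (1, 2, 3), a = (1, 1, 1)
print(vertex((1, 4, 9), (1, 2, 3), (1, 1, 1)))  # -> (1.0, 3.0, 7.0)
```

Here (v·a) = 6 and (a·a) = 3, so t = −1 and the vertex is (1, 3, 7).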
Whether you suffer from physical or emotional traumatic events, chronic pain, stress, or anything that disrupts the life you want to live, Balance From Within is your partner in working through these issues.
Find the right solutions to your health and medical needs with services from Balance From Within, now based at Humfeld Chiropractic and Nutrition Center.
At Humfeld Chiropractic and Nutrition Center, we specialize in providing you with a unique approach to helping you reach your specific health goals. We work with you to help recognize the problems and actions that need to be taken to decrease or eliminate potential impacts to your health without the harmful side effects of medication.
Balance From Within is now based at Humfeld Chiropractic and Nutrition Center. | {
"redpajama_set_name": "RedPajamaC4"
} | 1,512 |
Palaephatus amplisaccus is a moth of the family Palaephatidae. It is found in the Valdivian forests of the lake region in southern Argentina and Chile.
The length of the forewings is 6–8 mm for males and 7–7.5 mm for females. Adults have light brown forewings variably marked with white and dark brown streaks. There is a relatively prominent dark brown spot at the middle of the costa. Adults are on wing from November to February, probably in one generation per year.
Etymology
The specific name is derived from Latin amplus (meaning large) and saccus (meaning sack or bag) and refers to the enlarged vinculum-saccus of the male.
References
Moths described in 1986
Palaephatidae
Taxa named by Donald R. Davis (entomologist) | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 4,299 |
Diadochus (from Ancient Greek διάδοχος, "successor") is the name given to figures in ancient Greece who succeeded a king.

More specifically, the Diadochi were the generals and companions of Alexander the Great who fought one another for control of his immense empire after his death in 323 BC. Contesting the royal title and the conquered territories, they gave rise to the main Greek dynasties of the Hellenistic period: the Lagids, founded by Ptolemy; the Seleucids, founded by Seleucus; and the Antigonids, founded by Antigonus the One-Eyed.

The Wars of the Diadochi (321 to 281 BC) traditionally mark the beginning of the Hellenistic period.

Origins of the term

Etymology

The term "diadoque" was introduced into the French language first to designate the generals who fought over Alexander the Great's empire, and then, in the 1932 edition of the Dictionnaire de l'Académie française, in the sense of crown prince of the Kingdom of Greece (under the modern Greek monarchy). The word is borrowed from Ancient Greek διάδοχος, "successor, one who receives the succession of", a deverbal of an Ancient Greek verb meaning "to receive by succession".

Terminology

The Prussian historian Johann Gustav Droysen defined the notion of the "successors of Alexander" in 1836 in The Era of the Diadochi, a work covering the period from 323 to 220 BC. In his History of the Epigoni (1843), he described more precisely the Hellenistic kingdoms of the second generation of the Diadochi, the "heirs" of the first successors, between 277 and 221. Droysen's historical framework has since been carried on by contemporary historians, and the term "diadochus" is now in universal use.

In the same period, the English historian George Grote scarcely mentioned the Diadochi in the first edition of his History of Greece, from the Earliest Period to the Close of the Generation Contemporary with Alexander the Great (12 vols., 1846-1856). In that work he described them as "kings" who took power after Alexander's death. In the 1869 edition, he specified that they were "great officers of Alexander, who after his death divided amongst themselves the empire he had conquered to create their own kingdoms".

Ancient sources

The sources contemporary with the Diadochi survive only in fragments. The History of the Successors of Alexander by Hieronymus of Cardia, a collaborator of Eumenes of Cardia and later of Antigonus the One-Eyed, inspired several historians: Diodorus Siculus (Historical Library), Pompeius Trogus (Philippic Histories, abridged by Justin), Plutarch (the Parallel Lives of Alexander, Eumenes and Demetrius) and Arrian (History of the Succession of Alexander). Finally, the Macedonica of Duris of Samos inspired Pompeius Trogus and Plutarch.

Partition of Alexander's empire

The monarchic nature of power

Under the reign of Philip II, Macedon was a state that did not follow the principles of the Greek city-state. The government was a basileia (monarchy), with the basileus as head of state. His son Alexander received the title of kurios (regent) of Macedon while Philip led his military campaigns. After Philip's assassination in 336 BC, Alexander took control of the kingdom by eliminating his rivals. In the same year, Darius III acceded to the throne of Persia and became the Great King.

Alexander was also, after his father, hegemon (commander-in-chief) of the League of Corinth, which aimed to unite the Greek cities against the Persian Empire. It should be noted that this league of the Hellenes was re-established in 302 BC by the Diadochus Antigonus the One-Eyed and his son Demetrius Poliorcetes.

The role of the Companions

The institution of the Companions (Hetairoi) gave the Macedonian army flexibility in the conduct of military operations and in the distribution of talent. The Companions held no fixed posts, with the exception of the cavalry. They were rather a pool of officers whom Alexander could assign as needs arose. Generally members of the Macedonian nobility (some Greeks among them), many were bound to Alexander personally. Over the course of the conquest, some acquired power and wealth. Alexander's death did not mean the end of their ambitions, at least for the most important among them.

The Babylon agreements

When Alexander the Great died in Babylon in June 323 BC, he left behind an immense empire stretching from Macedon to the Punjab and including, among other regions, Anatolia, the Levant, Egypt, Babylonia, Persia and Bactria.

Dissension among Alexander's generals appeared at once, for he died without a designated heir, even though, according to the authors of the Vulgate tradition, the dying king is said to have handed the royal ring to Perdiccas, second in the hierarchy at the end of the reign. Meleager and the phalanx backed Alexander's half-brother, Philip III Arrhidaeus, while Perdiccas favoured waiting for the birth of the child Alexander had conceived with Roxana. A compromise was reached: Philip III became king jointly with Roxana's child, the future Alexander IV, on the assumption that it would be a boy. Perdiccas became chiliarch (regent) of the "empire", that is, of the territories in Asia. He had Meleager executed, however, and sought to exercise full control over political affairs, which aroused the hostility of some of the generals.

Most of these dignitaries received substantial grants in the form of satrapies. The main appointments were as follows:

Ptolemy received Egypt; Antigonus, Phrygia, Lycia and Pamphylia; Lysimachus, Thrace; Leonnatus, Hellespontine Phrygia; Eumenes of Cardia, Cappadocia; Neoptolemus, Armenia; Peithon, Media. Macedon and Greece remained under the authority of Antipater, whose son Cassander became commander of the hypaspists. Finally, Seleucus obtained command of the Companion cavalry as hipparch.

In the eastern territories, Perdiccas kept the rulers installed by Alexander: Taxiles and Porus in India, Oxyartes in the Paropamisadae, Peucestas in Persis, Sibyrtius in Arachosia and Gedrosia, Stasanor in Aria and Drangiana, Philip in Bactria and Sogdiana, and so on.

The regency of Antipater

Under Alexander's reign, Antipater retained the eminent role he had held since Philip II. When the Macedonian army left for Asia in 334 BC, he was designated regent of Macedon and Greece, with the title of "strategos of Europe". Shortly before his death in 323 BC, Alexander entrusted Craterus with the mission of bringing the veterans back to Macedon and taking Antipater's place, while Antipater was to go to Babylon at the head of fresh troops. Alexander's death, however, called this decision into question. Antipater was confirmed as regent of Macedon and Greece by the Babylon agreements. He formed a kind of "triumvirate" with Perdiccas, chiliarch of the empire, and Craterus, guardian of the kings.

Craterus, guardian of the kings

Craterus was one of Alexander's principal commanders and, together with Hephaestion, one of his closest "advisers". After the mutiny at Opis on the Tigris in 324 BC, Alexander ordered him, in the company of Leonnatus, to bring the veterans back to Macedon. It was in Cilicia that Craterus learned of the king's death, and because of the distance separating him from Babylon he could not take part in the Babylon agreements. Owing to his prestige, he was nonetheless designated guardian (prostates) of the kings Philip III and Alexander IV, respectively Alexander's half-brother and posthumous son. Craterus and Leonnatus crossed to Europe, because the news of Alexander's death had provoked the Greek rebellion of the Lamian War in 322 BC. At the end of this victorious campaign, Antipater gave him his daughter Phila in marriage. Later, alerted by Antigonus the One-Eyed to Perdiccas' "imperial" ambitions, he crossed into Asia Minor with the support of Neoptolemus, who persuaded him to march against Eumenes of Cardia, Perdiccas' strategos. He met his death fighting Eumenes in the spring of 321 BC.

The ambitions of Leonnatus

On Alexander's death, the council of the Somatophylakes and the Philoi (Friends) designated Leonnatus, together with Perdiccas, provisional guardian (prostates) of Roxana's unborn child, the future Alexander IV. Under the Babylon agreements, Leonnatus gave up the prostasia to Craterus. Leonnatus is said to have harboured royal ambitions, legitimised by his kinship with Philip II's mother, Eurydice, and by the promise of a marriage with Cleopatra, Alexander's sister. Although he was supposed to conquer Cappadocia on behalf of Eumenes of Cardia, he answered the appeal of Antipater, shut up in Lamia. Antipater is said to have offered him one of his daughters in marriage, perhaps Eurydice. The Lamian War gave Leonnatus the chance of a military success from which he hoped to profit by replacing Antipater as regent and perhaps proclaiming himself king of Macedon. He therefore diverted a large part of the army intended for the conquest of Cappadocia and intervened to relieve Antipater. But Leonnatus was killed in an engagement beneath the walls of Lamia in 322 BC.

The era of the Diadochi

The suppression of the revolts

The news of Alexander's death inspired a revolt of the Greek cities led by Athens, known as the Lamian War. The Greeks forced Antipater to shut himself up in the fortress of Lamia in Thessaly. He received the support of reinforcements commanded by Leonnatus, who was killed in the fighting. But the war ended only with the arrival of Craterus at the head of the veterans, who defeated the Greeks at the Battle of Crannon in August 322 BC.

At the same time, Peithon, the ambitious satrap of Media, harshly suppressed a revolt of the Greek settlers of Bactria, who were demanding repatriation. Perdiccas and Eumenes of Cardia subdued Cappadocia, never conquered by Alexander, defeating the self-proclaimed king Ariarathes.

The First War of the Diadochi (321-320 BC)

The planned marriage between Perdiccas and Alexander's sister Cleopatra, together with the chiliarch's hegemonic, if not royal, ambitions, aroused the hostility of Antipater, Craterus, Antigonus and Ptolemy, who quickly formed a coalition. The trigger of the war was Ptolemy's seizure, in 322 BC, of Alexander's embalmed body and its transfer to Memphis in Egypt. Although Eumenes of Cardia managed to defeat (and kill) Craterus in Asia Minor at the Battle of the Hellespont, Perdiccas was assassinated on the Nile by his conspiring generals, among them Seleucus, Peithon and Antigenes. Perdiccas' death forced a new distribution of power.

The partition of Triparadisus

Following Perdiccas' death in 321 BC, the Diadochi divided power among themselves at Triparadisus (Syria). The distribution of the satrapies underwent no great changes. Antipater became regent (epimeletes) of the "empire", while the two figurehead kings, Philip III and Alexander IV, were placed in his custody in Macedon. Antigonus was charged with fighting Perdiccas' foremost supporter, Eumenes of Cardia. In effect, Antipater maintained his control over Europe, while Antigonus held a similar position in Asia. Antipater's appointment met only one objection, that of Eurydice, Philip III's ambitious wife.

The death of Antipater and its consequences

In 319 BC Antipater died, and with him the last authority capable of maintaining a semblance of unity in the "empire". Conflicts then followed in quick succession, for he had entrusted his succession to Polyperchon, a general of the "old generation", passing over his own son Cassander. War quickly broke out in Europe between Polyperchon and Cassander, the latter supported by Antigonus and Ptolemy. Polyperchon, who sought to preserve the Argead monarchy, allied himself with Eumenes of Cardia, whom he designated "strategos of Asia". The regent fled to Epirus with the child-king Alexander IV and his mother Roxana. There he joined forces with Olympias, Alexander's mother, and together they attempted to invade Macedon in 317. They met the opposition of Philip III and his wife Eurydice, but the royal couple was defeated: Olympias ordered the assassination of Philip III and drove Eurydice to suicide. In 316 Cassander regained the initiative; he had Olympias executed and definitively secured the regency of Macedon.

Political outcome

The royal accession of the Diadochi

Among the Diadochi, a distinction should be drawn between the generals who never obtained the royal title, most importantly Perdiccas, Antipater, Polyperchon and Eumenes of Cardia, and the satraps who managed to have themselves proclaimed king at the end of the 4th century BC, namely, in chronological order, Antigonus, Ptolemy and Seleucus, founders of the great Hellenistic dynasties (Antigonids, Lagids and Seleucids). Lysimachus, king of Thrace, was for his part defeated by Seleucus and failed to found a dynasty, in the wake of inextricable matrimonial quarrels.

The Wars of the Diadochi, 323 to 281 BC

The Wars of the Diadochi, marked by numerous reversals of alliance, pitted the principal generals of Alexander the Great against one another for the leadership of his immense empire or for control of the territories composing it. They ran from 322 to 281 BC, from the coalition against Perdiccas to the Battle of Corupedium. At Alexander's death, at least nine of his generals contested his inheritance. By 311, at the time of a first peace agreement, only five remained: Cassander in Macedon, Lysimachus in Thrace, Antigonus in Asia Minor and Syria, Seleucus in Babylonia and Persia, and Ptolemy in Egypt. In 301, after Antigonus' defeat at the Battle of Ipsus, his Asian empire was divided between the victors, Seleucus and Lysimachus, opening a period of stabilisation everywhere except Macedon, which experienced a succession of wars for power as well as the threat of the Celtic invasions. Lysimachus' defeat in 281 marks the end of the era of the Diadochi, Seleucus alone having survived to that date.

The main conflicts involving the Diadochi are:
The Lamian War, 323 to 322, between the Macedonians and the allied Greek cities;
The First War of the Diadochi, 322 to 321, between Perdiccas and a coalition;
The Second War of the Diadochi, 319 to 315, between Antigonus the One-Eyed and a coalition;
The Third War of the Diadochi, 314 to 311, between Antigonus and a coalition;
The Babylonian War, 311 to 309, between Antigonus and Seleucus;
The Fourth War of the Diadochi, 308 to 301, between Antigonus and a coalition;
The wars of Demetrius Poliorcetes in Asia Minor and Greece, 296 to 288;
The war for Macedon between Lysimachus and Seleucus, 285 to 281.

The Epigoni: heirs of the Diadochi

The Epigoni were the sons and heirs of the Diadochi: Demetrius Poliorcetes, son of Antigonus the One-Eyed; Ptolemy II, son of Ptolemy; and Antiochus I, son of Seleucus. These rulers durably established the dynasties founded by the Diadochi: the Antigonids in Macedon (definitively installed by Antigonus II Gonatas, son of Demetrius), the Lagids in Egypt and the Seleucids in Asia (Babylonia, Syria, Persia).

The Attalids, who reigned from 282 to 130 BC over the kingdom of Pergamon in Aeolis, were the heirs of Philetaerus, a Macedonian general who had rallied to Lysimachus. Pyrrhus, king of Epirus and (briefly) king of Macedon, who fought the Romans, belongs to the period of the Epigoni without truly being one of them.

The end of the Hellenistic kingdoms

Alexander's immense legacy was gradually extinguished under the blows of the Roman conquest, which began in the 2nd century BC and was completed in 30 BC. The Antigonids were ousted by the Romans in 167 BC after the Battle of Pydna, which closed the Macedonian Wars. The Seleucids, who had had to contend in particular with Parthian incursions, were forced by the Romans to give up control of Asia Minor, before Syria became a Roman province in 64 BC under Pompey. The Lagids long remained masters of Egypt, maintaining a "friendly" relationship with Rome. Egypt was finally annexed by Rome in 30 BC after the defeat of Cleopatra VII and Mark Antony at the Battle of Actium.

Notes and references

See also

Ancient sources

Only non-fragmentary sources are listed here:
Plutarch, Parallel Lives: Eumenes, Demetrius, Pyrrhus.

Related articles

Alexander the Great
Babylon agreements
Triparadisus agreements
Assessment of the reign of Alexander the Great
Epigoni: the sons and heirs of the Diadochi
Wars of the Diadochi
Succession of Alexander the Great

Hellenistic period
Alexander the Great
Nobility in Greece
Military history of the 3rd century BC
Military history of the 4th century BC
Wars involving ancient Macedon | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,536 |
A few years ago, a tax preparation company used "You've got people!" as a tagline. Their TV commercials showed a satisfied client bragging "I've got people," confident and at ease that they had trusted tax experts in their corner. This really resonated with me, having just retired from corporate life to launch an "encore career" of coaching, change management consulting, and teaching. It quickly became clear to me that going solo doesn't mean going it alone. We ALL need help from others.
Corporate life offers infrastructure that is often taken for granted – office space, structure, steady work flow, and a regular paycheck with benefits. Support for payroll, accounting, IT, and HR is all taken care of by the "mother ship" employer. Replacing all those benefits and services required me either to do it myself or to "outsource" (ask for help).
A Team for Success can be much more than just tactical, transactional vendor-client relationships. The Team for Success should also be of strategic importance, by partnering with trusted colleagues who do similar or complementary work for brainstorming, collaboration, and especially for referrals. And, this means you also are on the other's Team for Success, too, adding value for everyone by strengthening and extending networks.
Be curious. Honestly assess your current situation. Who are your "people" now, on your current team?
Reciprocate. Whose success team are you on? Whose team do you want to join? Where can you bring value and be of service? Offer to be on their team.
So, who are your "people"– your Team for Success? | {
"redpajama_set_name": "RedPajamaC4"
} | 2,614 |
RESIDENCE LIFE ARZIGNANO
Between city and landscape
Arzignano, Vicenza
The Life Residences project stems from an idea of improving an important area of the town centre of Arzignano (Vicenza) along the Roggia, a small watercourse that flows through the urban landscape.
The architectural choices were based on enhancing the area around the Roggia to create an attractive urban landscape. The result is perfectly integrated in the urban setting, where its potential as a public park is fully realised. The aim of the project is achieved using contemporary language: coherence in style choices, quality of residential units, a focus on sustainable solutions, clearly defining the social spaces, and a large amount of readily perceived green areas.
Brusarosco Immobiliare srl, Edica srl
11000 sqm
director of the project Flavio Albanese —
Flavio Albanese
founder & partner
Flavio Albanese (1951) is founder and president of ASA studio albanese. A self-taught man whose training did not follow the more usual academic route, he began to show interest in shapes and design as a very young boy, and he gradually added many strings to his bow in all sectors of architecture and design. He has held courses at the École Polytechnique Fédérale in Lausanne and at the Art Institute in Chicago (1980), at Yale University (1983), at the University of Architecture in Delft (2005), at the University of Florida (2006), at the Fundacion Proa de Buenos Aires (2008) and frequently at the most important Italian universities. He has also held two workshops at the international summer school of the Architecture School in Venice in 2009 and 2010. He was a member of the Confindustria Vicenza committee from 1998 to 2001, the Domus Academy Scientific Committee (2004-2005) and the MIart Committee of Honour (2009 and 2010), director of the Officina del Porto di Palermo (2006-2008), vice president of the Andrea Palladio Architecture Firms International Centre (2011-2015) and president of the Fondazione Teatro Comunale Città di Vicenza (2010-2016). From 2007 to 2010 he was asked to head Domus, the prestigious international architecture, design and contemporary art magazine. Together with his brother Franco, he founded ASA studio albanese in Vicenza in 1987, whose projects have appeared in the most important architecture and design magazines. The Neores project was selected for the Mies van der Rohe Foundation European Union Prize for Contemporary Architecture, and ASA studio albanese took part in Venice's Architecture Biennial in 2004 and 2006. Flavio is an avid reader and bibliophile (his library, which is open to the rest of the firm, contains more than 15,000 volumes) and he is a connoisseur and collector of contemporary art.
Franco Albanese
partner & executive director
Franco Albanese (born in Vicenza in 1958) has worked in the world of architecture and design since 1976. He graduated from the Architecture School in Venice in 1986 and the year after he founded ASA studio albanese in Vicenza with his brother Flavio. Since then he has been the firm's CEO and Technical Manager, and this role has led him to playing his part in the creation, development and execution of the most important projects. As designer and operations manager he oversaw: the Faculty of Veterinary Medicine at the University of Padua (1997); "Neores", the production site and headquarters of Sinv Spa in Schio, Vicenza, (selected for the Mies van der Rohe Foundation European Union Prize for Contemporary Architecture in 2003); the project for the Town Hall of the Municipality of Grumolo delle Abbadesse, Vicenza (1999); "Morimondo 17", the industrial reconversion of the Sinv spa premises in Milan (2000); the headquarters of Margraf in Chiampo, in the province of Vicenza (2006); and the redevelopment of the "Cava di Mursia" on Pantelleria (2010). He also supervised the design of the Mestre bypass (2004), the "Rocco Forte Verdura Resort" in Sciacca, in Sicily (2005), the expansion of Pantelleria Airport (2006) and the new Rinascente in Palermo (2007). In recent years, he has increasingly concentrated on reconverting urban industrial areas, which has become a key theme of ASA studio albanese's philosophy.
project administrator Franco Albanese
team Nicola Caputo —
Nicola Caputo
Nicola works on the final and working stages of projects, coordinating operations on site for the professionals who physically create the project and working with clients on their choices to ensure that the firm's vision for the project is in keeping with the clients' wishes. For the firm, Nicola has been involved in projects that include the Rocco Forte Verdura Resort in Sciacca, the Hybrid Tower in Mestre and the Villa Coeur Jolie in Cap d'Antibes, as well as many other projects. Before joining ASA studio albanese, he worked at Studio Altieri in Thiene, in the contract field in workplace interiors with Adotta, and again as a designer with Studio Gabbiani & associati, tackling projects of different sizes, including hospitals, shopping malls, infrastructure and restoration work. He lives in Vicenza with his wife and son. He's a keen pre-master swimmer and when he isn't diving into the firm's design projects, he's diving into swimming pools. One of his favourite books is "The Art of War" by Sun Tzu. One thing he couldn't live without is a pair of well-crafted shoes.
Andrea Garzotto —
Andrea Garzotto
Andrea is an architect who specialises in image rendering of projects and 3D models and joined ASA studio albanese in 2007. Virtual rendering and 3D models are a key aspect of every level in architectural projects, which is why Andrea is involved in all the firm's most important projects. Besides rendering images, he also works in interior design and architectural design. After spending a year and a half in Porto, where he fell in love with black and white images, he graduated in architecture from the IUAV in Venice in 2006. In December 2012, he opened "Incipit", a collective space and laboratory for the visual arts. Andrea is a freelance photographer, and considers himself to be a precursor to selfies and a wannabee biker. Travelling is an integral part of his life and his photography. He loves the Berghain in Berlin and wines that have bubbles.
Simone Matteazzi
Since 2008, the year he joined ASA studio albanese, Simone has followed projects throughout their creation, from feasibility studies to the final stage. He specialises in relations with public bodies and local government, particularly as regards urban and regional planning. Since 2011, he has been working on the complex Lindower 22 project in Berlin, for which he has supervised the retrofit phase and the new construction work. Before joining ASA studio albanese, he worked for Archistudio in Vicenza (from 2001 to 2008) and as a member of the National City Planning Institute (2003-2008). He has also taught CAD at the Pier Giacomo Castiglioni Interior Architecture Institute from 2002 to 2012. He is currently Vice President of the Ordine degli Architetti Paesaggisti Pianificatori della Provincia di Vicenza and in this role he has been a member of national architecture tender commissions. Born and raised in Vicenza, Simone lives here with his wife Giulia, an agronomist, and his daughters Maria and Anna. His favourite word is frontier, and he prefers the beach to the mountains.
Image credits
www.residenzelife.com
asastudioalbanese.com
info@asastudioalbanese.com | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,562 |
(2) if the patent or application for patent is entitled to claim a right of priority under section 119, 365(a), or 365(b) or to claim the benefit of an earlier filing date under section 120, 121, or 365(c), based upon 1 or more prior filed applications for patent, as of the filing date of the earliest such application that describes the subject matter.
35 U.S.C. 102 (pre-AIA) Conditions for patentability; novelty and loss of right to patent.
A claimed design may be rejected under 35 U.S.C. 102 when the invention is anticipated (or is "not novel") over a disclosure that is available as prior art. In design patent applications, the factual inquiry in determining anticipation over a prior art reference is the same as in utility patent applications. That is, the reference "'must be identical in all material respects.'" Hupp v. Siroflex of America Inc., 122 F.3d 1456, 43 USPQ2d 1887 (Fed. Cir. 1997). For anticipation to be found, the claimed design and the prior art design must be substantially the same. Door-Master Corp. v. Yorktowne, Inc., 256 F.3d 1308, 1313, 59 USPQ2d 1472, 1475 (Fed. Cir. 2001) (citing Gorham Mfg. Co. v. White, 81 U.S. 511, 528 (1871)).
In International Seaway Trading Corp. v. Walgreens Corp., 589 F.3d 1233, 1239-40, 93 USPQ2d 1001, 1005 (Fed. Cir. 2009), the Federal Circuit held that the ordinary observer test, the test used for infringement, is "the sole test for anticipation." Under the ordinary observer test, "'if, in the eye of an ordinary observer, giving such attention as a purchaser usually gives, two designs are substantially the same, if the resemblance is such as to deceive such an observer, inducing him to purchase one supposing it to be the other, the first one patented is infringed by the other.'" Gorham, 81 U.S. at 528. In Egyptian Goddess, an en banc panel of the Federal Circuit "characteriz[ed] the ordinary observer as being 'deemed to view the differences between the patented design and the accused product in the context of the prior art.'" Seaway, 589 F.3d at 1239-40, 93 USPQ2d at 1005, quoting Egyptian Goddess Inc. v. Swissa Inc., 543 F.3d 665, 676, 88 USPQ2d 1658, 1666-67 (Fed. Cir. 2008) (en banc). The court also explained that "'when the claimed design is close to the prior art designs, small differences between the accused design and the claimed design are likely to be important to the eye of the hypothetical ordinary observer.'" Id.
The ordinary observer test requires consideration of the design as a whole. See Seaway, 589 F.3d at 1243, 93 USPQ2d at 1008; Egyptian Goddess, 543 F.3d at 677, 88 USPQ2d 1667. In applying the ordinary observer test, "determine whether 'the deception that arises is a result of the similarities in the overall design not of similarities in ornamental features in isolation.'" See Richardson v. Stanley Works Inc., 597 F.3d 1288, 1295, 93 USPQ2d 1937, 1941 (Fed. Cir. 2010), citing Amini Innovation Corp. v. Anthony California Inc., 439 F.3d 1365, 1371, 78 USPQ2d 1147, 1151 (Fed. Cir. 2006) (holding that the overall infringement test is not to be converted to an element-by-element comparison when factoring out the functional aspects of various design elements). See Apple Inc. v. Samsung Elecs. Co., 786 F.3d 983, 998, 114 USPQ2d 1953, 1962 (Fed. Cir. 2015); Ethicon Endo-Surgery, Inc. v. Covidien, Inc., 796 F.3d 1312, 1333, 115 USPQ2d 1880, 1896 (Fed. Cir. 2015); and Sport Dimension, Inc. v. Coleman Co. Inc., 820 F.3d, 1316, 1320-21, 118 USPQ2d 1607, 1609-10 (Fed. Cir. 2016). "The mandated overall comparison is a comparison taking into account significant differences between the two designs, not minor or trivial differences that necessarily exist between any two designs that are not exact copies of one another." Seaway, 589 F.3d at 1243, 93 USPQ2d at 1008. "Just as minor differences between a patented design and an accused article's design cannot, and shall not, prevent a finding of infringement, so too minor differences cannot prevent a finding of anticipation." Id. (internal quotation marks omitted).
Anticipation does not require that the claimed design and the prior art be from analogous arts. In re Glavas, 230 F.2d 447, 450, 109 USPQ 50, 52 (CCPA 1956). "It is true that the use to which an article is to be put has no bearing on its patentability as a design and that if the prior art discloses any article of substantially the same appearance as that of an applicant, it is immaterial what the use of such article is. Accordingly, so far as anticipation by a single prior art disclosure is concerned, there can be no question as to nonanalogous art in design cases." Id. (internal quotation marks omitted).
When a claim is rejected under 35 U.S.C. 102 as being unpatentable over prior art, those features of the design which are functional and/or hidden during end use may not be relied upon to support patentability. See In re Cornwall, 230 F.2d 457, 109 USPQ 57 (CCPA 1956); Jones v. Progress Ind. Inc., 119 USPQ 92 (D. R.I. 1958). Further, in a rejection of a claim under 35 U.S.C. 102, mere differences in functional considerations do not negate a finding of anticipation when determining design patentability. See Black & Decker, Inc. v. Pittway Corp., 636 F. Supp. 1193, 231 USPQ 252 (N.D. Ill. 1986).
It is not necessary for the examiner to cite or apply prior art to show that functional and/or hidden features are old in the art as long as the examiner has properly relied on evidence to support the prima facie lack of ornamentality of these individual features. If applicant wishes to rely on functional or hidden features as a basis for patentability, the same standard for establishing ornamentality under 35 U.S.C. 171 must be applied before these features can be given any patentable weight. See MPEP § 1504.01(c).
In evaluating a statutory bar based on pre-AIA 35 U.S.C. 102(b), the experimental use exception to a statutory bar for public use or sale (see MPEP § 2133.03(e)) does not usually apply for design patents. See In re Mann, 861 F.2d 1581, 8 USPQ2d 2030 (Fed. Cir. 1988). However, Tone Brothers, Inc. v. Sysco Corp., 28 F.3d 1192, 1200, 31 USPQ2d 1321, 1326 (Fed. Cir. 1994) held that "experimentation directed to functional features of a product also containing an ornamental design may negate what otherwise would be considered a public use within the meaning of section 102(b)." See MPEP § 2133.03(e)(6).
Registration of a design abroad is considered to be equivalent to patenting for priority purposes under 35 U.S.C. 119(a)-(d) and for prior art purposes under pre-AIA 35 U.S.C. 102(d), whether or not the foreign grant is published. (See Ex parte Lancaster, 151 USPQ 713 (Bd. App. 1965); Ex parte Marinissen, 155 USPQ 528 (Bd. App. 1966); Appeal No. 239-48, Decided April 30, 1965, 151 USPQ 711 (Bd. App. 1965); Ex parte Appeal decided September 3, 1968, 866 O.G. 16 (Bd. App. 1966).) The basis of this practice is that if the foreign applicant has received the protection offered in the foreign country, no matter what the protection is called ("patent," "Design Registration," etc.), if the United States application is timely filed, a claim for priority will vest. If, on the other hand, the U.S. application is not timely filed, a statutory bar arises under pre-AIA 35 U.S.C. 102(d) as modified by 35 U.S.C. 172. In order for the filing to be timely for priority purposes and to avoid possible statutory bars, the U.S. design patent application must be made within 6 months of the foreign filing. See also MPEP § 1504.10.
The laws of each foreign country vary in one or more respects.
The following table sets forth the dates on which design rights can be enforced in a foreign country (INID Code (24)) and thus are also usable in a pre-AIA 35 U.S.C. 102(d) rejection as modified by 35 U.S.C. 172. It should be noted that in many countries the date of registration or grant is the filing date.
DE-Germany: Date of registration or grant. The industrial design right can be enforced by a court from the date of registration, although it is in force earlier (from the date of filing, as defined by law).
GB-United Kingdom: Date of registration or grant, which is the filing date. Protection arises automatically under the Design Right provision when the design is created. Proof of the date of the design creation needs to be kept in case the design right is challenged. The protection available to designs can be enforced in the courts following the date of grant of the Certificate of Registration, as of the date of registration, which stems from the date of first filing of the design in the UK or, if a priority is claimed under the Convention, in another country.
WO-World Intellectual Property Organization (WIPO): Subject to Rule 14.2 of the Regulations (on defects), the International Bureau enters the international deposit in the International Register on the date on which it has in its possession the application together with the items required: reproductions, samples, or models pursuant to Rule 12, and the prescribed fees.
1Based on information taken from the "Survey of Filing Procedures and Filing Requirements, as well as of Examination Methods and Publication Procedures, Relating to Industrial Designs" as adopted by the PCIPI Executive Coordination Committee of the World Intellectual Property Organization (WIPO) at its fifteenth session on November 25, 1994.
Acknowledgment is made of the application identified in the oath or declaration or application data sheet which was filed more than six months prior to the filing date of the present application. Applicant is reminded that if the application matured into a form of patent protection before the filing date of the present application it would constitute a statutory bar to the issuance of a design patent in the United States under pre-AIA 35 U.S.C. 102(d) in view of pre-AIA 35 U.S.C. 172.
In brackets 1 and 2, insert the name of country where application was filed.
Form paragraphs for use in rejections under 35 U.S.C. 102 and pre-AIA 35 U.S.C. 102 are set forth below.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This form paragraph should be used in any application subject to the first inventor to file provisions of the AIA.
The present application, filed on or after March 16, 2013, is being examined under the pre-AIA first to invent provisions.
This form paragraph should be used in any application filed on or after March 16, 2013, that is subject to the pre-AIA prior art provisions.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103 ) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
1. This form paragraph must be used in all Office Actions when a prior art rejection is made in an application with an actual filing date on or after March 16, 2013 that claims priority to, or the benefit of, an application filed before March 16, 2013.
2. This form paragraph should only be used ONCE in an Office action.
The claim is rejected under 35 U.S.C. 102(a)(1) as being anticipated by [1] because the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
1. In bracket 1, identify the reference applied against the claimed design.
2. For applications with an actual filing date on or after March 16, 2013 that claim priority to, or the benefit of, an application filed before March 16, 2013, this form paragraph must be preceded by form paragraphs 15.10.aia and 15.10.15.
The claim is rejected under pre-AIA 35 U.S.C. 102(a) as being anticipated by [1] because the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country before the invention thereof by the applicant for patent.
2. For applications with an actual filing date on or after March 16, 2013 that claim priority to, or the benefit of, an application filed before March 16, 2013, this form paragraph must be preceded by form paragraph 15.10.15.
The claim is rejected under pre-AIA 35 U.S.C. 102(b) as being anticipated by [1] because the invention was patented or described in a printed publication in this or a foreign country, or in public use or on sale in this country more than one (1) year prior to the application for patent in the United States.
The claim is rejected under pre-AIA 35 U.S.C. 102(c) because the invention has been abandoned.
The claim is rejected under pre-AIA 35 U.S.C. 102(d), as modified by pre-AIA 35 U.S.C. 172, as being anticipated by [1] because the invention was first patented or caused to be patented, or was the subject of an inventor's certificate by the applicant, or the applicant's legal representatives or assigns in a foreign country prior to the date of the application for patent in this country on an application for patent or inventor's certificate filed more than six (6) months before the filing of the application in the United States.
In bracket 1, identify the reference applied against the claimed design.
The claim is rejected under 35 U.S.C. 102(a)(2) as being anticipated by [1] because the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
2. For applications claiming priority to, or the benefit of, an application filed before March 16, 2013, this form paragraph must be preceded by form paragraphs 15.10.aia and 15.10.15.
3. This form paragraph should only be used in an application filed on or after March 16, 2013, where the claims are being examined under 35 U.S.C. 102 /103 as amended by the AIA.
The claim is rejected under pre-AIA 35 U.S.C. 102(e) as being anticipated by [1] because the invention was described in a patented or published application for patent by another filed in the United States before the invention thereof by the applicant for patent.
The claim is rejected under pre-AIA 35 U.S.C. 102(f) because applicant did not himself invent the subject matter sought to be patented.
The claim is rejected under pre-AIA 35 U.S.C. 102(g) because, before the applicant's invention thereof, the invention was made in this country by another who had not abandoned, suppressed or concealed it.
A rejection based on this statutory basis can be made in an application or patent that is examined under the first to file provisions of the AIA if it also contains or contained at any time (1) a claim to an invention having an effective filing date as defined in 35 U.S.C. 100(i) that is before March 16, 2013, or (2) a specific reference under 35 U.S.C. 120, 35 U.S.C. 121, or 35 U.S.C. 365(c) to any patent or application that contains or contained at any time such a claim.
For applications with an actual filing date on or after March 16, 2013 that claim priority to, or the benefit of, an application filed before March 16, 2013, this form paragraph must be preceded by form paragraphs 15.10.aia and 15.10.15.
The claim is rejected under pre-AIA 35 U.S.C. 102(g) because, before the applicant's invention thereof, the invention was made in this country by another who had not abandoned, suppressed or concealed it.
For applications with an actual filing date on or after March 16, 2013, that claim priority to, or the benefit of, an application filed before March 16, 2013, this form paragraph must be preceded by form paragraphs 15.10.fti and 15.10.15.
(a) IN GENERAL.—Whoever invents any new, original, and ornamental design for an article of manufacture may obtain a patent therefor, subject to the conditions and requirements of this title.
(b) APPLICABILITY OF THIS TITLE.—The provisions of this title relating to patents for inventions shall apply to patents for designs, except as otherwise provided.
(c) FILING DATE.—The filing date of an application for patent for design shall be the date on which the specification as prescribed by section 112 and any required drawings are filed.
The present application sets forth incorrect inventorship because [1].
The claim is rejected under 35 U.S.C. 171 and 35 U.S.C. 115 for failing to set forth the correct inventorship for the reasons stated above.
In bracket 1, insert the basis for concluding that the inventorship is incorrect.
1. This form paragraph is to be used ONLY when a rejection under 35 U.S.C. 171 on another basis has been made and the statutory text thereof is already present.
2. This form paragraph must be preceded by form paragraph 15.07.01 for a rejection based on improper inventorship.
The claim is directed to the same invention as that of the claim of commonly assigned copending Application No. [1]. The issue of priority under pre-AIA 35 U.S.C. 102(g) and possibly pre-AIA 35 U.S.C. 102(f) of this single invention must be resolved. Since the U.S. Patent and Trademark Office normally will not institute an interference between applications or a patent and an application of common ownership (see MPEP § 2302), the assignee is required to state which entity is the prior inventor of the conflicting subject matter. A terminal disclaimer has no effect in this situation since the basis for refusing more than one patent is priority of invention under pre-AIA 35 U.S.C. 102(f) or (g) and not an extension of monopoly. Failure to comply with this requirement will result in a holding of abandonment of this application.
The following form paragraph should be included after the form paragraph setting forth the rejection under 35 U.S.C. 102 (a), (b), (d) or (e) to provide an explanation of the applied reference.
The appearance of [1] is substantially the same as that of the claimed design. See e.g., International Seaway Trading Corp. v. Walgreens Corp., 589 F.3d 1233, 1237-38, 1240, 93 USPQ2d 1001 (Fed. Cir. 2009) and MPEP § 1504.02.
1. This paragraph should be included after paragraph 15.11.aia or 15.15.aia to explain the basis of the rejection.
2. In bracket 1, identify the reference applied against the claimed design.
1. This paragraph should be included after paragraph 15.11.fti, 15.12.fti, 15.14.fti or 15.15.fti to explain the basis of the rejection.
The following form paragraphs may be used to reject a claim under pre-AIA 35 U.S.C. 102(e) over an application or patent having an earlier effective U.S. filing date with a common inventor and/or assignee, or that discloses but does not claim the design.
The claim is provisionally rejected under 35 U.S.C. 102(a)(2) as being anticipated by copending Application No. [1] which has a common [2] with the instant application.
Because the copending application names another inventor and has an earlier effective filing date, it would constitute prior art under 35 U.S.C. 102(a)(2), if published under 35 U.S.C. 122(b) or patented. This provisional rejection under 35 U.S.C. 102(a)(2) is based upon a presumption of future publication or patenting of the copending application.
This provisional rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the design in the reference was obtained directly or indirectly from the inventor of this application and is thus not prior art under 35 U.S.C. 102(b)(2)(A); (2) perfecting a claim to priority under 35 U.S.C. 119 that antedates the reference by filing a certified priority document in the application that satisfies the enablement and description requirements of 35 U.S.C. 112(a); (3) perfecting the benefit claim under 35 U.S.C. 120 by filing an application data sheet under 37 CFR 1.76 which contains a specific reference to a prior application in accordance with 37 CFR 1.78 and establishing that the prior application satisfies the enablement and description requirements of 35 U.S.C. 112(a); (4) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B); or (5) providing a statement pursuant to 35 U.S.C. 102(b)(2)(C) that the subject matter disclosed and the claimed invention, not later than the effective filing date of the claimed invention, were owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.
1. This form paragraph is used to provisionally reject over a copending application (utility or design) with an earlier filing date that discloses the claimed invention which has not been patented or published under 35 U.S.C. 122. The copending application must have either a common assignee or at least one common inventor.
2. In bracket 2, insert inventor or assignee.
4. This form paragraph should only be used in an application filed on or after March 16, 2013, where the claims are being examined under 35 U.S.C. 102 /103 as amended by the AIA.
The claim is provisionally rejected under pre-AIA 35 U.S.C. 102(e) as being anticipated by copending Application No. [1] which has a common [2] with the instant application.
Based upon the different inventive entity and the earlier effective U.S. filing date of the copending application, it would constitute prior art under pre-AIA 35 U.S.C. 102(e), if published under 35 U.S.C. 122(b) or patented. This provisional rejection under pre-AIA 35 U.S.C. 102(e) is based upon a presumption of future publication or patenting of the copending application.
Since the design claimed in the present application is not the same invention claimed in the application, the examiner suggests overcoming this provisional rejection in one of the following ways: (A) a showing under 37 CFR 1.132 that the design in the reference was derived from the designer of this application and is thus not the invention "by another;" (B) a showing of a date of invention for the instant application prior to the effective U.S. filing date of the reference under 37 CFR 1.131(a); (C) perfecting a claim to priority under 35 U.S.C. 119 that antedates the reference by filing a certified priority document in the application that satisfies the enablement and description requirements of 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph; or (D) perfecting the benefit claim under 35 U.S.C. 120 by adding a specific reference to the prior filed application in compliance with 37 CFR 1.78 and establishing that the prior application satisfies the enablement and description requirements of 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph. If the application was filed before September 16, 2012, the specific reference must be included in the first sentence(s) of the specification following the title or in an application data sheet; if the application was filed on or after September 16, 2012, the specific reference must be included in an application data sheet.
1. This form paragraph is used to provisionally reject over a copending application (utility or design) with an earlier filing date that discloses (but does not claim) the claimed invention which has not been patented or published under 35 U.S.C. 122. The copending application must have either a common assignee or at least one common inventor.
2. Use pre-AIA 35 U.S.C. 102(e) as amended by the American Inventor's Protection Act (AIPA) (form paragraph 7.12.fti) to determine the reference's prior art date, unless the reference is a U.S. patent issued directly, or indirectly, from an international application which has an international filing date prior to November 29, 2000. Use pre-AIPA 35 U.S.C. 102(e) (form paragraph7.12.01.fti) only if the reference is a U.S. patent issued directly or indirectly from either a national stage of an international application (application under 35 U.S.C. 371 ) which has an international filing date prior to November 29, 2000, or a continuing application claiming benefit under 35 U.S.C. 120, 121, 365(c), or 386(c) to an international application having an international filing date prior to November 29, 2000. See the Examiner Notes for form paragraphs 7.12.fti and 7.12.01.fti to assist in the determination of the reference's pre-AIA or pre-AIPA 35 U.S.C. 102(e) date.
4. For applications with an actual filing date on or after March 16, 2013, that claim priority to, or the benefit of, an application filed before March 16, 2013, this form paragraph must be preceded by form paragraph 15.10.15.
The claim is provisionally rejected under pre-AIA 35 U.S.C. 102(e) as being anticipated by the claim in copending Design Patent Application No. [1] which has a common [2] with the instant application.
Based upon the different inventive entity and the earlier effective U.S. filing date of the copending application, it would constitute prior art under pre-AIA 35 U.S.C. 102(e), if patented. This provisional rejection under pre-AIA 35 U.S.C. 102(e) is based upon a presumption of future patenting of the copending application. The rejection may be overcome by abandoning the earlier-filed copending application.
1. In bracket 2, insert inventor or assignee.
2. This form paragraph must be preceded by form paragraph 15.24.05.fti to notify the applicant that the question of patentability under pre-AIA 35 U.S.C. 102(f) /(g) also exists.
The claim is rejected under 35 U.S.C. 102(a)(2) as being anticipated by patent [1].
Because the patent names another inventor and has an earlier effective filing date, it constitutes prior art under 35 U.S.C. 102(a)(2).
This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the disclosure in the reference was obtained directly or indirectly from the inventor of this application and is thus not prior art under 35 U.S.C. 102(b)(2)(A); (2) perfecting a claim to priority under 35 U.S.C. 119 that antedates the reference by filing a certified priority document in the application that satisfies the enablement and description requirements of 35 U.S.C. 112(a); (3) perfecting the benefit claim under 35 U.S.C. 120 by filing an application data sheet under 37 CFR 1.76 which contains a specific reference to a prior application in accordance with 37 CFR 1.78 and establishing that the prior application satisfies the enablement and description requirements of 35 U.S.C. 112(a); (4) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B); or (5) providing a statement pursuant to 35 U.S.C. 102(b)(2)(C) that the subject matter disclosed and the claimed invention, not later than the effective filing date of the claimed invention, were owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.
1. This form paragraph should be used when the claimed design in the application being examined is disclosed in the drawings of an earlier-filed design or utility patent. When the design claimed in the application being examined is disclosed in the drawings of an earlier-filed design patent, it would most often be in the form of subcombination subject matter, (part or portion of an article), that is patentably distinct from the claim for the design embodied by the combination or whole article. It may also be unclaimed subject matter depicted in broken lines in the earlier-filed application.
2. In bracket 1, insert number of patent.
The claim is rejected under pre-AIA 35 U.S.C. 102 as being anticipated by patent [1].
Based upon the different inventive entity and the earlier effective U.S. filing date of the reference, it constitutes prior art under pre-AIA 35 U.S.C. 102.
Since the design claimed in the present application is not the same invention claimed in patent [2], the examiner suggests overcoming this rejection in one of the following ways: (A) a showing under 37 CFR 1.132 that the design in the reference was derived from the designer of this application and is thus not the invention "by another;" (B) a showing of a date of invention for the instant application prior to the effective U.S. filing date of the reference under 37 CFR 1.131(a); (C) perfecting a claim to priority under 35 U.S.C. 119 that antedates the reference by filing a certified priority document in the application that satisfies the enablement and description requirements of 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph; or (D) perfecting the benefit claim under 35 U.S.C. 120 by adding a specific reference to the prior filed application in compliance with 37 CFR 1.78 and establishing that the prior application satisfies the enablement and description requirements of 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph. If the application was filed before September 16, 2012, the specific reference must be included in the first sentence(s) of the specification following the title or in an application data sheet; if the application was filed on or after September 16, 2012, the specific reference must be included in an application data sheet.
1. This form paragraph should be used when the claimed design in the application being examined is disclosed in the drawings of an earlier-filed design or utility patent but is not claimed therein. When the design claimed in the application being examined is disclosed in the drawings of an earlier-filed design patent, it would most often be in the form of subcombination subject matter, (part or portion of an article), that is patentably distinct from the claim for the design embodied by the combination or whole article. It may also be unclaimed subject matter depicted in broken lines in the earlier-filed application.
2. In brackets 1 and 2, insert number of patent.
The following form paragraphs may be used in a second or subsequent action, where appropriate.
The claim is FINALLY REJECTED under [1] as [2].
1. In bracket 1, insert statutory basis.
2. In bracket 2, insert reasons for rejection.
3. See paragraphs in MPEP Chapter 700, for "Action is Final" and "Advisory after Final" paragraphs. | {
"redpajama_set_name": "RedPajamaC4"
} | 1,026 |
This simply means to get phone numbers (home and surf the web and search online.) If you own your choice and so forth. Houston TX you need it, you pay before the insurance rates and affordable coverage.
Some forms of financial responsibility for your personal wellbeing and that you're involved in those crashes. You can get this coverage can differ from short term coverage, then you have to pay for your car as well. We've seen the niftiest gadgets, features, and more sturdy frames are looked favorably. You can be utilized for a costume too. If you know that cost less. Many car crash fatalities have been driving for a rate that the rate to protect you, as middle aged adults, may not be reflected in your policy, you can start comparing full coverage auto insurance AR, is currently regulated at the premium every month.
So consider how long the person who hits them actually is a MUST for those folks who are not set by lower monthly premium Industrial policy was.
Sometimes you just enter your home when deciding for yourself. Naturally, someone with a solid fact that certain car liability insurance. Everybody knows that you do have the same office just use the company, the answers; you may ask you to switch.
(If you have errors on your coverage to encompass more than three) not. First of all, you need at the seats are correctly answered, it is a dependent in a different amount they collect from you. In fact, with a fly-by-night operation. After you have no insurance company to insurance premiums that they have enough coverage for the repair or replace if an accident was your fault? More than likely your lender is going to and your property is now time to do online. Also ratings can change anytime you like many other companies may be adviseable to ask about fees for paying more. Look over the costs exceed these numbers and not believing there is sometimes made based on their insurance policy to cover the deductible is the savings are enough. So make sure your car it is worth a lot of time and even price.
It would be to international newspapers, aside from getting out of the day, not having current insurance company and you will need to realize is that there are various factors such as anti lock brakes, too. There are a few local insurance agencies and comparing rates. In a position where you can fully understand insurance quotes. However, if you are going to pay? In today's economy one should get a driver's credit report from all the medical and legal defense, if necessary.
Well, we have to break down these payments also. The kinds of insurance companies, so your general travel insurance may not see these Nevada state requirements. Try and draw in more accidents than older people because statistically they are too low to the higher value of antique of vintage model vehicles. Make the effort to save the extra money on your car's mileage. When you are going to be done and you will have to comply to the future. I cannot think of it to return to normal after you buy. A common question is that if there are other things while you are looking for a maximum limit, if you require full coverage. This will provide a proof of this article, you ought to follow my advice is very important when it comes to buying the insurance company either. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,030 |
Q: Loading Text from a file into a Text box I am trying to load a text file into a textbox. The file that is to be loaded is selected from a Listbox UnViewed_Messages however when I try and load the file it does nothing.
The code used to populate the list box is below:
public Quarantine_Messages()
{
InitializeComponent();
//loads in files to the list box on load
DirectoryInfo DirFiles = new DirectoryInfo(@"C:\User\***\Documents\Noogle system\Quarantin_Messages");
FileInfo[] Files = DirFiles.GetFiles("*.txt");
foreach (FileInfo file in Files)
{
UnViewed_Messages.Items.Add(file.Name);
}
}
This is the code I am using to try and load the text file into the textbox Message_Viewer
private void Open_Message_Content_Click(object sender, RoutedEventArgs e)
{
//tries to read in the file to the text box Message_Viewer
string[] Files = Directory.GetFiles(@"C:\User\***\Documents\Noogle system\Quarantin_Messages");
foreach (string file in Files)
{
if (System.IO.Path.GetFileName(file) != UnViewed_Messages.SelectedItems.ToString())
{
using (var viewer = new StreamReader(File.OpenRead(file)))
{
Message_Viewer.Text = viewer.ReadToEnd();
viewer.Dispose();
}
}
}
}
Any help with this would be greatly appreciated, Thanks in advance.
A: Try something like this maybe:
private FileInfo[] files;
private DirectoryInfo directory;
private void Form1_Load(object sender, EventArgs e)
{
directory = new DirectoryInfo(@"C:\Users\smelendez\downloads");
files = directory.GetFiles("*.txt");
foreach (FileInfo file in files)
{
listBox1.Items.Add(file.Name);
}
}
private void listBox1_SelectedIndexChanged(object sender, EventArgs e)
{
var selectedFile = files[listBox1.SelectedIndex];
richTextBox1.Text = "";
richTextBox1.Text = File.ReadAllText(selectedFile.FullName);
}
And then continue from there with whatever logic you need.
A: Instead of your streamreader, just use:
string[] lines = File.ReadAllLines(file);
Message_Viewer.Text = String.Join(Environment.NewLine, lines);
That's all. C#/.NET I/O is really clean if you know the stuff.
Edit: Keep in mind that with really big files, you would still have to use a filestream to read and add line by line, but it really shouldn't be a problem. I mean fill your ram with lines of text and then we talk again, okay?
MSDN File-I/O
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,001 |
{"url":"https:\/\/math.stackexchange.com\/questions\/2835353\/domains-for-which-the-divergence-theorem-holds\/2847085","text":"# Domains for which the divergence theorem holds\n\nIn the book Elliptic partial differential equations of second order written by Gilbarg and Trudinger, I saw the following sentence on page 17 in section 2.4 Green\u2019s Representation:\n\nAs a prelude to existence considerations we derive now some further consequences of the divergence theorem, namely, Green identities. Let $\\Omega$ be a domain for which the divergence theorem holds and let $u$ and $v$ be $C^2(\\bar\\Omega)$ functions.\n\nIt is well known that the divergence theorem holds when $\\Omega$ is a bounded domain with $C^1$ boundary.\n\nAre there any other domain than a bounded one with $C^1$ boundary for which the theorem holds?\n\nI would be grateful if you could give any comment for this question.\n\n\u2022 My (minimal) experience with such questions is you have to dive into geometric measure theory a bit. Francesco Maggi touches on exactly this question in his book on geometric measure theory, titled Sets of Finite Perimeter and Geometric Variational Problems. \u2013\u00a0fourierwho Jun 29 '18 at 2:31\n\u2022 Thanks for your reply. I will check the book. \u2013\u00a004170706 Jun 29 '18 at 2:33\n\u2022 Now that I have the book in front of me you will want to see the synopsis of Part 2 in Maggi's book. Also see: mathoverflow.net\/questions\/253488\/\u2026 \u2013\u00a0fourierwho Jun 29 '18 at 5:50\n\nAs suggested by fourierwho, perhaps the most the natural domains for which the divergence (also called Gauss-Green) theorem holds are the sets of finite perimeter, i.e. Caccioppoli sets, so let's precisely see why.\n\nDefinition 1 ([1], \u00a73.3 p. 143). Let $$\\Omega$$ a Lebesgue measurable set in $$\\mathbb{R}^n$$. 
For any open subset $$G\\subseteq\\mathbb{R}^n$$ the perimeter of $$\\Omega$$ in $$G$$, denoted as $$P(\\Omega,G)$$, is the variation of $$\\chi_\\Omega$$ in $$\\Omega$$ i.e. $$\\begin{split} P(\\Omega,G)&=\\sup\\left\\{\\int_\\Omega \\nabla\\cdot\\varphi\\,\\mathrm{d}x\\,:\\,\\varphi\\in [C_c^1(G)]^n, \\|\\varphi\\|_\\infty\\leq1\\right\\}\\\\ & =| \\nabla \\chi_{\\Omega\\cap G}|=TV(\\Omega,G) \\end{split}\\tag{1}\\label{1}$$ where $$[C_c^1(G)]^n$$ is the set of compact support continuously differentiable vector functions in $$G$$ and $$TV$$ is the total variation of the set function $$\\nabla \\chi_{\\Omega\\cap G}$$.\n\nThe set $$\\Omega$$ is a set of finite perimeter (a Caccioppoli set) in $$G\\subseteq\\mathbb{R}^n$$ if $$P(\\Omega,G)<\\infty$$.\n\n\u2022 If $$G=\\mathbb{R}^n$$, then we can speak of perimeter of $$\\Omega$$ tout court, and denote it as $$P(\\Omega)$$.\n\u2022 If $$P(\\Omega,G^\\prime)<\\infty$$ for every bounded open set $$G^\\prime\\Subset\\mathbb{R}^n$$, $$\\Omega$$ is a set of locally finite perimeter.\n\nWhy definition \\eqref{1} implies a natural extension of the classical divergence (Gauss-Green) theorem? For simplicity lets consider sets of finite perimeter: $$P(\\Omega)<\\infty$$ implies that the distributional derivative of the characteristic function of $$\\Omega$$ is a vector Radon measure whose total variation is the perimeter defined by \\eqref{1}, i.e. $$\\nabla\\chi_\\Omega(\\varphi)=\\int_\\Omega\\nabla\\cdot\\varphi\\,\\mathrm{d}x=\\int_\\Omega \\varphi\\,\\mathrm{d}\\nabla\\chi_\\Omega\\quad \\varphi\\in [C_c^1(\\mathbb{R}^n)]^n\\tag{2}\\label{2}$$ Now the support in the sense of distributions of $$\\nabla\\chi_\\Omega$$ is $$\\subseteq\\partial\\Omega$$ ([2], \u00a71.8 pp. 6-7): to see this note that if $$x\\notin\\partial\\Omega$$, it should belong to an open set $$A\\Subset\\mathbb{R}^n$$ such that either $$A\\Subset\\Omega$$ or $$A\\Subset\\mathbb{R}^n\\setminus\\Omega$$:\n\n1. 
if $$A\\Subset\\Omega$$, then $$\\chi_\\Omega=1$$ on $$A$$ and hence \\eqref{2} is equal to zero for each $$\\varphi\\in [C_c^1(A)]^n$$\n2. if $$A\\Subset\\mathbb{R}^n\\setminus\\Omega$$, then $$\\chi_\\Omega=0$$ on $$A$$ and hence \\eqref{2} is again equal to zero for each $$\\varphi\\in [C_c^1(A)]^n$$\n\nAlso, as a general corollary of (one of the versions of) Radon-Nikodym theorem ([1], \u00a71.1 p. 14) we can apply a polar decomposition to $$\\nabla\\chi_\\Omega$$ and obtain $$\\nabla\\chi_\\Omega=\\nu_\\Omega|\\nabla\\chi_\\Omega|_{TV}\\equiv\\nu_\\Omega|\\nabla\\chi_\\Omega|\\tag{3}\\label{3}$$ where $$\\nu_\\Omega$$ is a $$L^1$$ function taking values on the unit sphere $$\\mathbf{S}^{n-1}\\Subset\\mathbb{R}^n$$, and rewriting \\eqref{2} by using \\eqref{3} we obtain the sought for general divergence (Gauss-Green) theorem $$\\int_\\Omega\\!\\nabla\\cdot \\varphi\\, \\mathrm{d}x =\\int_{\\partial\\Omega} \\!\\varphi\\,\\cdot\\nu_\\Omega\\, \\mathrm{d}|\\nabla\\chi_\\Omega|\\quad\\forall\\varphi\\in [C_c^1(\\mathbb{R}^n)]^n\\tag{4}\\label{4}$$ Note that this result is an almost direct consequence of definition 1 above, with minimal differentiability requirement imposed on the data $$\\varphi$$: it seems to follow directly from the given definition of perimeter \\eqref{2} through the application of general (apparently unrelated) theorems on the structure of measures and distributions, and in this sense it is the most \"natural form\" of the divergence\/Gauss-Green theorem.\n\nFurther notes\n\n\u2022 When $$\\Omega$$ is a smooth bounded domain, \\eqref{4} \"reduces\" the standard divergence (Gauss-Green) theorem.\n\u2022 There are more general statement of the theorem, relaxing further both the conditions on $$\\Omega$$ and on $$\\varphi$$: however they require further, more technical, assumptions and therefore are in some sense \"less natural\".\n\u2022 The notion of perimeter \\eqref{1} was introduced by Ennio De Giorgi by using a gaussian kernel in order to 
\"mollify\" the set $$\\Omega$$. By using De Giorgi's ideas, Calogero Vinti and Emilio Bajada further generalized the notion of perimeter: however I am not aware of a corresponding generalization of the divergence theorem.\n\n[1] Ambrosio, Luigi; Fusco, Nicola; Pallara, Diego (2000), Functions of bounded variation and free discontinuity problems. Oxford Mathematical Monographs, New York and Oxford: The Clarendon Press\/Oxford University Press, New York, pp. xviii+434, ISBN 0-19-850245-1, MR1857292, Zbl 0957.49001.\n\n[2] Giusti, Enrico (1984), Minimal surfaces and functions of bounded variations, Monographs in Mathematics, 80, Basel\u2013Boston\u2013Stuttgart: Birkh\u00e4user Verlag, pp. XII+240, ISBN 978-0-8176-3153-6, MR 0775682, Zbl 0545.49018","date":"2021-07-25 07:10:32","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 50, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9495097398757935, \"perplexity\": 236.4966935381286}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-31\/segments\/1627046151638.93\/warc\/CC-MAIN-20210725045638-20210725075638-00412.warc.gz\"}"} | null | null |
{"url":"https:\/\/www.transtutors.com\/questions\/cornerstone-exercise-14-22-accounting-rate-of-return-eyring-company-invested-7-500-0-1321917.htm","text":"# Cornerstone Exercise 14-22 Accounting Rate of Return Eyring Company invested $7,500,000 in a new... 1 answer below \u00bb Cornerstone Exercise 14-22 Accounting Rate of Return Eyring Company invested$7,500,000 in a new product line. The life cycle of the product is pro- jected\u00a0 to\u00a0 be\u00a0 7\u00a0 years\u00a0 with\u00a0 the\u00a0 following\u00a0\u00a0 net\u00a0 income\u00a0 stream:\u00a0 $300,000,$300,000,\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0$500,000,$900,000, $1,000,000,$2,100,000, and \u00a0$1,200,000. Required: Calculate the ARR. ## 1 Approved Answer 5 Ratings, (9 Votes) ARR = Average Accounting Income \/ Initial Investment Average Accounting Income =... ## Related Questions in Managerial Accounting - Others \u2022 ### Question 1: Case study You are a financial adviser and the following information is an extract of... (Solved) September 22, 2018 invested in her shares. She bought 430 shares in Caltex at$35 a share and 250 shares in Rio Tinto at \\$60 a share. She wants to sell her Rio Tinto shares to lock in her profit and has come to you for advice. David is concerned about his superannuation. He would like to know how much return his\n\nPlease find the attached complete solution of the Case study. The tax rates use for doing the solution is of 2018. Required: A. Calculate David and Chloe\u2019s after-tax income for the year...\n\u2022 ### This is Corporate Accounting's Assignment\n\n(Solved) September 17, 2018\n\nThis is Corporate Accounting's Assignment\n\nHi, Please find the file enclosed herewith. Thanks! --------------------------------------------------------------------------------------...\n\u2022 ### Refer to the data in Exercise 11\u20148. Assume that instead\n\n(Solved) July 12, 2016\n\nRefer to the data in Exercise 11\u20148. 
Assume that instead of producing 4,000 units during the month, the company produced only 3,000 units, using 14,750 pounds of material. (The rest of the material purchased remained in raw materials inventory.) Required: Compute the direct materials price and quanti\n\nNotice in the arrangement beneath that the materials price variance is computed for the whole amount of materials purchased, while the materials quantity variance is computed just for the...\n\u2022 ### Applying Managerial Accounting Concepts to the Service Industry (100 Points) Many of the concepts in...\n\n(Solved) 5 weeks ago\n\nApplying Managerial Accounting Concepts to the Service Industry (100 Points) Many of the concepts in managerial accounting were first developed for the manufacturing environment. Do you think the same concepts, such as variable costs, fixed costs, mixed costs, and job order costing, can also be ap\n\nManagerial accounting is also known as management accounting or cost accounting. Generally, the managerial accounting involves collecting, analyzing the accounting information about the...\n\u2022 ### Assignment 3: Professional Obligations Task Firstly, you are required to devise a series of...\n\n(Solved) 5 weeks ago\n\nAssignment 3: Professional Obligations Task Firstly, you are required to devise a series of interview questions to investigate how ethics and professional behaviour affect individuals in their real work life . 
These interviews can be conducted face to face, by telephone, or (as a last choice","date":"2018-10-23 08:05:03","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.21867865324020386, \"perplexity\": 6079.070580590675}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-43\/segments\/1539583516117.80\/warc\/CC-MAIN-20181023065305-20181023090805-00271.warc.gz\"}"} | null | null |
Edith Beebe Carhart (April 14, 1879 - April 1, 1964) was the City Librarian in Bellingham, Washington, and compiled the "History of Bellingham".
Early life
Edith Beebe Carhart was born on April 14, 1879, in Terre Haute, Indiana, the daughter of Dr. Joseph Carhart (1849-1926) and Ida Beebe Clark (1852-1914).
She graduated from North Dakota State Teachers College and received private training in library work.
Career
Edith Beebe Carhart was the principal of grade schools in Alaska, Oregon and Washington.
She was the Librarian and Manager of the Boarding Department of the State Teacher's College at Mayville, North Dakota for 5 years and city librarian in Bellingham for more than 16 years.
Later in life she entered the real estate and insurance business.
She compiled a History of Bellingham (1926). She wrote The Angora Wool Rabbit: A Manual for the Beginner (Miller & Sutherlen printing Company, 1930).
Personal life
Edith Beebe Carhart moved to Washington in 1916 and lived at 2727 Eldridge, Bellingham, Washington.
She died at the age of 84 on April 1, 1964, in Bellingham, Washington. She was buried in Bayview Cemetery, Bellingham.
References
1879 births
American librarians
American women librarians
1964 deaths
People from Terre Haute, Indiana | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 2,388 |
export { default } from 'ember-data-github/mirage-models/github-commit';
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,638 |
'Toilet to tap' bill waiting for Gov. Scott's signature
Astronauts also drink reclaimed wastewater, but that idea isn't sitting so well with some environmental groups.
Chad Gillis, cgillis@news-press.com Published 2:52 p.m. ET March 30, 2018 | Updated 12:05 p.m. ET March 31, 2018
Cape Coral's canals are being drained by overuse and an historic lack of rain(Photo: Cory O'Donnell | The News-Press)
If it's good enough for astronauts, it's good enough for you.
That idea worked well for Tang in the 1960s, when the powdered orange drink was marketed as the best thirst-quencher in outer space.
Often called "toilet to tap" legislation by critics, a bill that's sitting on Gov. Rick Scott's desk could soon allow utilities and water managers in Florida to store, treat and recycle wastewater in a similar fashion.
"My prediction is he'll let it go to law without signing it so it won't have his fingerprints on it," said Linda Young, with the Clean Water Network. "But he'll still be known as the poopy governor."
Gov. Scott has until April 10 to sign the legislation, but he could choose to veto the bill or simply let it pass unsigned.
Young said she's embarrassed that Florida is turning to these relatively expensive water treatment methods while getting nearly 5 feet of rain a year.
States west of the Mississippi, like Nevada, get only a scant 9.5 inches.
More: South Florida water suppliers face unique challenges
More: What flows through pipes of many colors?
A look at the Lee County Solids Waste Division Andrew West/news-press.com
"They're still discharging freshwater, you have it in Fort Myers and in Cape Coral," Young said of freshwater discharges that flow into the Caloosahatchee River each year. "Florida is the water waster in the world. We use more water than anybody in the world. All you've got to do is drive around and see all the sprinklers going. There has never been a serious concerted effort in this state to conserve water, even when we have drought periods."
Young said she's also concerned that water treatment methods may not remove pharmaceuticals, chemicals and other pollutants.
More: Critics: Poor water management will lead to shortages for some
But water managers and some municipalities say it's perfectly safe and agree with the legislation, saying Florida's growing population must turn to alternative water sources to meet current and future needs.
Brad Baird, Tampa's administrator for public works and utilities services, said reclaimed wastewater could provide water for the growing city for years.
A proposed project there "involves recharge and recovery of up to 50 million gallons a day. If implemented, it would first drought-proof Tampa's water supply, and secondly provide 20 million gallons a day for the Tampa Bay region," Baird said. "It would solve the water supply issues for the Tampa area for decades to come, and we see this bill as something that would help this effort."
More: Florida's water worries prompt look at recycling
The bill would allow utilities to treat and pump treated wastewater into drinking water aquifers to offset water consumed by people, lawns and farms.
Nations in the Middle East and Southeast Asia have used the technology for decades, some providing the vast majority of drinking water through reclamation.
Reclaimed water is used in Cape Coral to provide irrigation water for residents and business owners. Using reclaimed water takes some demand off the city's drinking water supply.
"The water that comes out of an advanced wastewater treatment plant is highly treated and much cleaner than the zones in the aquifer that we're talking about performing the recharge," Baird said. "It's not the exact same water. For our project, we're recharging at 800 to 900 feet and recovery at 300 feet. So you have soil aquifer treatment for 500 feet, and that provides some natural treatment and can metabolize CECs (Contaminants of Emerging Concern) and can remove nutrients such as phosphorus. We expect it to be able to move that."
Contaminants of Emerging Concern include pharmaceuticals and personal care products.
More: Irma aftermath: Ewww. What's that smell?
Baird said he'd be happy to be the first person to drink the reclaimed water, and that it's time to use the technology.
"There's no more cheap water out there, and actually implementing potable reuse is more cost-effective (than some other techniques)," Baird said. "The treated water coming out of the plants is of higher quality than the sea water."
Others argue that the approach is wrong, that the state should focus on restoring the Everglades and other wetland systems.
"This is the wrong solution. The biggest underlying issue it's trying to solve is the aquifer is running out of water, and the aquifer is running out of water is because the River of Grass was dammed off," said Peter Girard, with the non-profit group Bull Sugar (bullsugar.org). "The way to fix it is to reconnect the natural flows that supply the Everglades and recharge the aquifers."
Young said Florida should have focused for decades on conserving water that falls here during the rainy summer season. Instead, much of that water is sent to the Atlantic Ocean and Gulf of Mexico.
"We need to get really serious about water conservation and stop being gluttons," Young said. "And we're supposed to be sharing it with the environment. We're squandering the very thing that makes Florida wonderful."
Connect with this reporter: Chad Gillis on Twitter.
Read or Share this story: https://www.naplesnews.com/story/news/2018/03/30/toilet-tap-bill-waiting-gov-scotts-signature/474034002/ | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 8,822 |
\section{Green Assistant Overview}
In this section we give an overview of the HADAS Green Assistant tool from the user's point of view. We first show how to use it, before presenting the complexity of implementing such a tool.
Suppose Alice is a software developer who wants to develop an energy-efficient application. If she wants to use the HADAS Green Assistant, she must take the following steps, also shown in Figure \ref{fig:assistant}:
\begin{enumerate}
\item Identify energy hotspots in the application requirements. Alice wants to develop a photo gallery application, so she needs to add photos to an album and to reduce the quality of a photo, among other functionality. The HADAS green assistant has previously identified these concerns as energy consuming; for instance, data store (e.g., adding a photo) and file compression (e.g., reducing the quality of a photo).
\item Select the energy-consuming concerns in the HADAS repository. Alice can see the data store and the file compression concerns in the repository of energy-consuming concerns shown by the HADAS green assistant. She should select all the energy-consuming concerns provided by HADAS that she needs to develop the application. Now Alice knows that these concerns may strongly affect the final energy expenditure of her application. The user interface is implemented as a Google Form, which fits our needs very well. This Google Form (see Figure X) has an extensive list of all the concerns (e.g., Store, Communication, Compression or Security) and, although they are fairly intuitive, the form also shows some keywords that could be associated with each of these concerns. For example, the Store concern could be used when the developer finds words like save, upload, add, send, put or write, among others, in the application requirements.
\item Choose the options for every energy-consuming concern selected in Step 2. HADAS will show the application developer a separate form for every selected concern, and in our case Alice has to select the alternatives she wants to use. For instance, the photos could be stored locally and/or remotely, on an external hard drive and/or in the cloud. Depending on this selection, alternatives for other energy-consuming concerns could also be suggested by the assistant. For example, if the developer chooses the cloud remote storage option, this implies that the photos have to be uploaded to a remote server. The Communication concern, which is responsible for sending the file to the cloud, was previously identified by HADAS as another energy-consuming concern. So, in order to make the developer's job easier, HADAS will automatically show a list with the different alternatives of the Communication concern, so that Alice can choose one of them. If Alice chooses the external hard drive as remote storage, this solution could need encryption, which is another energy-consuming concern according to HADAS. Then, alternative security-related concerns will also be shown to the developer.
\item Energy-efficiency analysis of the different alternatives explored by the developer. After Alice has selected the architectural solution she wants for every energy-consuming concern in Step 3, HADAS will help her to be aware of the energy implications of her decisions. The energy consumption of each alternative is expressed by means of energy functions parameterized by variable parameters (e.g., the photo file size). HADAS does not provide the exact consumption in Watts; what Alice needs to know is which alternatives are more energy consuming than others and which ones can be considered greener. This is an iterative process, since Alice can select different options and analyse the impact of each of her decisions on the energy expenditure of the application. Since HADAS takes into account the dependencies between different energy-consuming concerns, Alice will be able to take complex decisions without much effort. For example, Alice can decide not to include the possibility of cloud remote storage for uncompressed files. She was able to make this decision because the expected size of photos is large enough to justify including extra components for compressing and uncompressing the files.
HADAS cannot decide on Alice's behalf, unless she decides not to select any option for some of the energy-consuming concerns.
\item Get the architectural configuration and add other functionality. Finally, HADAS will provide the architectural configuration including the definitive options the developer selected after the analysis in Step 4. For instance, HADAS will provide the configuration of the Store, Compression, Security and Communication concerns with the options chosen by the developer. In this case Alice will have to complete the application with the rest of the functionality considered not relevant for energy efficiency (e.g., photo edition).
\end{enumerate}
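The kind of trade-off analysis described in Step 4 can be sketched as follows. This is a minimal illustration and not HADAS's actual API: the function names, coefficients and the 0.3 compression ratio are invented for the example. It only shows how energy functions parameterized by file size let a developer compare alternatives, such as uploading compressed vs. uncompressed photos.

```python
# Illustrative sketch (hypothetical functions and coefficients, not the
# HADAS API): energy functions for two storage alternatives, parameterized
# by photo file size, as in Step 4 of the assistant workflow.

def energy_local_store(size_mb: float) -> float:
    """Hypothetical energy function for local storage (arbitrary units)."""
    return 0.5 + 0.02 * size_mb  # fixed disk-access cost + per-MB cost

def energy_cloud_store(size_mb: float, compressed: bool = False) -> float:
    """Hypothetical energy function for cloud storage: the upload dominates.
    Compression adds a fixed CPU cost but shrinks the transferred payload."""
    payload = size_mb * 0.3 if compressed else size_mb
    compression_cost = 1.0 if compressed else 0.0
    return 2.0 + 0.15 * payload + compression_cost

def greener(size_mb: float) -> str:
    """Return the less energy-consuming cloud alternative for a given size."""
    plain = energy_cloud_store(size_mb)
    comp = energy_cloud_store(size_mb, compressed=True)
    return "compressed" if comp < plain else "uncompressed"

if __name__ == "__main__":
    # For large photos the compression components pay off, which mirrors
    # Alice's decision to drop cloud storage of uncompressed large files.
    print(greener(1.0))   # small file: compression overhead not worth it
    print(greener(50.0))  # large file: the compressed upload is greener
```

With such per-alternative functions, the assistant can plot consumption against the parameter (here, file size) instead of reporting a single value in Watts.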
\section{Green Assistant Requirements}
The implementation of a green assistance implies addressing the following requirements:
\begin{itemize}
\item \textbf{R1: Identify and model the energy hotspots}. Since developers do not yet have enough knowledge about which concerns could impact power consumption the most, the green assistant should support them in this task. Although recent works propose different architectural tactics to implement concrete energy hotspots, none of them explicitly models the architectural variability of the energy-consuming concerns. Additionally, the energy-consuming concerns associated with every energy hotspot could depend on other energy-consuming concerns. These kinds of dependencies often go unnoticed by developers, who therefore do not include them in the energy analysis. The green assistant should thus automatically include those energy-consuming concerns that depend on the ones already selected by the developer. The variability models and dependencies should also be part of the models stored in the green repository.
\item \textbf{R2: Design the architecture of every valid configuration and the resource consumption of each component}. Currently, if developers want to know the resource consumption of a concrete architectural solution that fits an energy hotspot, they need to specify the architecture manually and calculate the resources needed by each component (for example, with Palladio). Considering that for a given energy hotspot there could be many solutions, designers would not be interested in doing this for every alternative. But without this information it is not possible to guide the developer in selecting the most energy-efficient solution. So, the great challenge here is to provide the developer with a pre-defined architecture for each of the variants of every energy-consuming concern, and an estimation of the consumed resources. The benefit is twofold: (i) the developer knows in advance the resources needed by the energy-consuming concerns of their application, simply by clicking a button; (ii) the architectural design of the selected solution can be reused, becoming part of the final architecture of the application. These models should also be part of the green repository.
\item \textbf{R3: Calculate the energy function of each architectural configuration}. We have already seen that the energy consumed by an application usually depends on input parameters, but the challenge is who is going to define the energy function for a concrete architectural configuration. The only way of doing this is manually, so ideally the green assistant should already provide this information. Since an application usually has to include many energy-consuming concerns, having the energy function of each of them previously calculated will help designers to see the power consumption of the final application and make some corrections to their decisions if necessary. Finally, this information should enrich the models stored in the green repository.
\item \textbf{R4: Implement the user interface of the green assistant tool}. The approach presented in this paper is not viable without a user interface. This interface should show: (i) the list of energy consuming concerns associated with the energy hotspots; (ii) the options available and a mechanism to enable and disable other options (i.e., other energy consuming concerns) according to the dependencies identified in R1; (iii) an energy-efficiency analysis with graphics showing the energy consumption as a function of some input parameters; (iv) an option to generate the architectural configuration corresponding to the selections made by the developer from the energy consumption point of view.
\end{itemize}
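The requirements above suggest a simple shape for a repository entry: a concern with its variants, each variant carrying a pre-defined architecture reference (R2) and a parameterized energy function (R3), plus the dependencies of R1. A minimal sketch in Python follows; all names, paths and coefficients are hypothetical illustrations, not the actual HADAS schema:

```python
from dataclasses import dataclass, field

@dataclass
class Variant:
    """One architectural alternative for an energy consuming concern (R2, R3)."""
    name: str            # e.g. "LAME", "Vorbis"
    architecture: str    # reference to a pre-defined PCM model (R2), invented path
    energy_fn: object    # energy as a function of input parameters (R3)

@dataclass
class Concern:
    """An energy consuming concern with its variability and dependencies (R1)."""
    name: str
    variants: list
    depends_on: list = field(default_factory=list)  # names of required concerns

# Two tiny repository entries with invented linear energy models (Watts per MB).
lame = Variant("LAME", "pcm/compression_lame", lambda mb: 0.01 * mb)
compression = Concern("Compression", [lame])
store = Concern("Store",
                [Variant("Server", "pcm/store_server", lambda mb: 0.02 * mb)],
                depends_on=["Communication"])
```

Under this sketch, resolving R1 amounts to following `depends_on` links, and R3 to evaluating each `energy_fn` over the input range of interest.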
\section{Conclusions}
In this work we have presented the HADAS Green Assistant, a tool that helps application developers to be energy-aware when designing their applications, so as to produce energy-efficient software. In order to develop the green assistant we have built a green repository composed of energy consuming concerns. These concerns represent the different ways of implementing the energy hotspots we have detected in current applications, such as data storage and access, communication, compression and so on.
We have modeled the concerns using variability models to manage the different implementations. The variability models represent both all the possible alternatives for each concern and the dependencies between them. Taking these dependencies into account is essential because we want to offer application developers the possibility to reason about the energy consumption of the whole software architecture: the concerns work together within one application, not in isolation. Therefore, providing the power consumption of every concern through experimental tests alone is not enough. We have also conducted simulations in Palladio to store in the HADAS repository the energy functions associated with every possible configuration of all the concerns working together.
The HADAS green assistant also offers a very intuitive graphical user interface based on forms that application developers fill in to select which energy consuming concerns they want to consider and which alternatives they want to explore.
As future work, we plan to exploit the variability models representing the energy consuming concerns at runtime as well. It is well known that the real energy consumption of an application strongly depends on the data used and on the final usage of the application, so the decisions taken at design time may not be enough to build truly energy-efficient applications. We therefore propose to use variability models at runtime to dynamically reconfigure applications, adapting their concerns to new situations.
\section{The HADAS Green Assistant}
In this section we present the HADAS Green Assistant tool, focusing especially on the HADAS Green Repository. Both are described from the point of view of the software developers who want to use them and from a technical perspective.
Suppose that Alice is a
software developer that wants to develop an energy-efficient
application. We will use a Media Store application, previously
implemented and defined in Palladio, to illustrate our proposal and
the advantages of using our tool for choosing energy efficient
architectures adapted to the requirements.
If Alice wants to use the HADAS Green Assistant, she must follow the
steps shown in Figure \ref{fig:assistant}. \emph{Note that we
describe these steps in italics in order to differentiate
between the user point of view and the implementation details}.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/Assistant.pdf}
\caption{A schema of the HADAS Green Assistant tool}
\label{fig:assistant}
\end{figure}
\subsection{Energy Consuming Concerns}
\emph{The \underline{first step (Figure \ref{fig:assistant}, label 1)} is to identify
energy hotspots in the application requirements. Alice is going to
develop the Media Store application, so she needs to store audio
files on a server and/or to encode these files, among other
functionality}.
The HADAS Green Assistant has previously identified these kinds of
concerns as energy consuming. For instance, \textsf{Store} (e.g., upload an
audio file) and \textsf{Compression} (e.g., encode an audio file). We have
explored current applications in order to identify the main energy
hotspots that recur across many of them. So far
our repository has a list of 10 energy consuming concerns, as can
be seen in Figures \ref{fig:forms} and \ref{fig:repositoryCVL}:
\textsf{Store}, \textsf{Communication}, \textsf{Compression}, \textsf{Security}, \textsf{Data Access},
\textsf{Notification}, \textsf{Synchronization}, \textsf{User Interface}, \textsf{Code Migration} and
\textsf{Fault Tolerance}. Of course, this list will grow as we gather
evidence about other energy hotspots.
\emph{The \underline{second step (Figure \ref{fig:assistant}, label 2)} for Alice is
to select the energy-consuming concerns in the HADAS repository}.
The user interface of the HADAS assistant is implemented as a Google
Form, which fits our needs well. This Google Form (see Figure
\ref{fig:forms}) lists all the concerns and, although they are fairly intuitive,
the form also shows some keywords that could be associated
with each of them. For example, the \textsf{Store} concern applies
when the application requirements contain words
like \textsf{save}, \textsf{upload}, \textsf{add}, \textsf{send}, \textsf{put} or \textsf{write}, among others. So,
application developers can easily identify which concern corresponds
to each of their application hotspots.
\emph{In the requirements of the Media Store, Alice can see words like
storage, upload (corresponding with the \textsf{Store} concern); download, cache
(corresponding with the \textsf{Data Access} concern); users, login, encrypted
(corresponding with the \textsf{Security} concern); encode, compressed
(corresponding with the \textsf{Compression} concern); GUI, interface (corresponding
with the \textsf{User Interface} concern). So, now Alice knows that these
concerns may strongly affect the final energy expenditure of her
application and decides to perform the sustainability analysis offered by the HADAS assistant to compare different architectural configurations and choose the one that best fits her needs. Then, she should select all the energy-consuming
concerns provided by HADAS that she needs to develop the
application. In Figure \ref{fig:forms} it can be seen that she has
selected the \textsf{Store}, \textsf{Compression}, \textsf{Security}, \textsf{Data Access} and \textsf{User
Interface} concerns}.
There is a great deal of variability in the way these concerns can be
designed and implemented (e.g., data can be stored locally or remotely,
there are many encryption algorithms, and different codecs to
compress audio or video files). Additionally, these concerns are not
independent from each other. For instance, several concerns
are related to \textsf{Communication}, such as \textsf{Data
Access}, \textsf{Store}, \textsf{Notification},
\textsf{Synchronization} or \textsf{Code Migration}. This means that
there are dependencies between them and, therefore, the energy
consumption cannot be analyzed in isolation for each concern.
Instead, a whole architecture should be analyzed, in which these
dependencies are explicitly modeled and taken into account. In the
next subsection we show how this variability and these dependencies
are modeled in our approach.
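The automatic inclusion of dependent concerns described above can be sketched as a transitive closure over a dependency graph. The graph below is purely illustrative (in particular, the \textsf{Communication}-requires-\textsf{Security} edge is an invented example, not a statement about the actual HADAS repository):

```python
# Illustrative dependency graph: concern -> concerns it requires.
DEPENDS_ON = {
    "Store": ["Communication"],
    "Data Access": ["Communication"],
    "Communication": ["Security"],   # hypothetical edge, for illustration only
}

def close_selection(selected):
    """Return the selection extended with all transitively required concerns."""
    result, stack = set(selected), list(selected)
    while stack:
        for dep in DEPENDS_ON.get(stack.pop(), []):
            if dep not in result:
                result.add(dep)
                stack.append(dep)
    return result

# Selecting only "Store" pulls in Communication and, transitively, Security,
# even though the developer never checked those boxes in the form.
closure = close_selection({"Store"})
```

This is the mechanism that lets the assistant surface concerns, such as \textsf{Communication}, that the developer did not identify as hotspots.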
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/Forms.pdf}
\caption{HADAS Green Assistant interface: some of the forms}
\label{fig:forms}
\end{figure}
\subsection{Variability Model and Dependencies}
\emph{The \underline{third step for Alice (Figure \ref{fig:assistant}, label 3)} is
to choose the options for every energy-consuming concern selected in the second step.} The HADAS assistant shows the application developer a separate Google
Form for every selected concern, and in our case \emph{Alice has to select
the alternatives she wants to explore in order to later analyze their energy
efficiency.} For instance, storage could be done locally and/or
remotely on an external hard drive, on a server or in the cloud.
Depending on this selection, alternatives for other energy consuming
concerns could also be suggested by the assistant. \emph{The Media Store
application will store the audio files on a \textsf{Server}, so Alice has selected this option for the \textsf{Store}
concern (as can be seen
in Figure \ref{fig:forms})}. This selection implies that the audio files have to be uploaded
to a remote server. The \textsf{Communication} concern, which is
responsible for sending the file, was previously identified by HADAS
as another energy consuming concern and is included in the
repository. Thus, communication is required to upload the files and must be included in the analysis in order to know how much energy will be
consumed by the whole architecture. \emph{Alice, however, was not aware of the dependency between remote storage on a server and communication and, thus, she did not explicitly select the \textsf{Communication} concern during the first step (as can be seen in Figure \ref{fig:forms})}. This is not a problem when using HADAS because, in order to make the developer's job easier, the HADAS assistant automatically shows the options available for the \textsf{Communication}
concern. This is possible because the HADAS repository contains information about the dependencies between the different energy concerns.
We have modeled the variability of the energy consuming concerns and the
dependencies between them using the Common Variability Language
(CVL). CVL allows modelling the variability separately from a base
model (i.e., an architectural model), while both the variability and the base models can be connected
and managed using the same tool. In particular, using the CVL
tools we specify the Variability Model (called a VSpec tree) and the
binding between it and the Base Model. With a VSpec tree we can
specify the common features that must be part of the architectural
solution for a given energy consuming concern, as well as their
variants (e.g., to store data locally or remotely).
Some of the features of this variability model can be seen in Figure
\ref{fig:repositoryCVL}. We have depicted part of the variability
for some concerns related to the Media Store case study; the rest is
omitted for the sake of simplicity and because it is out of the
scope of this paper. Specifically, Figure
\ref{fig:repositoryCVL} shows part of the variability of the
\textsf{Store} concern, the variability of the audio codecs of the
\textsf{Compression} concern and the \textsf{Encryption}
variants of the \textsf{Security} concern. We have highlighted in bold the
selected features that correspond to Alice's checkbox
selections in the forms shown in Figure \ref{fig:forms}.
\emph{Concretely, Alice has selected to store the files in a
\textsf{Server} and to explore four different codecs or algorithms
for the audio compression: \textsf{LAME}, \textsf{Vorbis},
\textsf{jFLAC} and \textsf{JSpeex}}.
Note that \textsf{Communication} is also selected in the variability model of Figure \ref{fig:repositoryCVL} (marked with a
bolder red line). This is because Alice selected the \textsf{Server} feature of the \textsf{Store}
concern, and there is a cross-tree constraint between this feature and the \textsf{Communication} concern that implements the dependency between them. This constraint is called \textsf{Comm} in the figure and is marked in red. It formally defines the dependency as: \texttt{Server implies Communication}.
\begin{figure*}
\includegraphics[width=\textwidth]{Figures/RepositoryCVL.pdf}
\caption{The variability model of the HADAS Green Repository in CVL}
\label{fig:repositoryCVL}
\end{figure*}
With CVL it is possible to automatically generate Resolution
Models after selecting (True or False) a set of choices in the
variability model. These selections are
obtained from the selections in the Google Forms. A Resolution Model
represents one set of alternatives, for every selected concern, that the
developer wants to explore during the analysis of the power
consumption. Every alternative corresponds to a concrete
architecture of the base model. We detail how this
architecture is defined in the next subsection. The benefit of using the HADAS
assistant is that the specification of the variability model,
the generation of the resolution model and the specification
of the alternative software architectures are all completely
transparent to Alice, who only has to deal with the Google Forms.
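Conceptually, generating the resolution models amounts to taking the cross product of the alternatives the developer checked and extending each combination with the concerns required by the cross-tree constraints. The sketch below illustrates this idea with a small subset of Alice's selections; the option lists and the constraint representation are simplifications of what CVL actually does:

```python
from itertools import product

# Alternatives Alice checked in the forms (illustrative subset).
options = {
    "Store": ["Server"],
    "Compression": ["LAME", "Vorbis", "jFLAC", "JSpeex"],
}

# Cross-tree constraints as (feature, required_concern) pairs,
# e.g. the "Comm" constraint: Server implies Communication.
IMPLIES = [("Server", "Communication")]

def resolutions(options):
    """Yield one resolution per combination of selected alternatives."""
    names = list(options)
    for combo in product(*(options[n] for n in names)):
        resolution = dict(zip(names, combo))
        # Add concerns pulled in by implies-constraints.
        for feature, required in IMPLIES:
            if feature in resolution.values():
                resolution.setdefault(required, "default")
        yield resolution

all_res = list(resolutions(options))
# 1 Store option x 4 Compression options = 4 resolutions,
# each extended with the Communication concern.
```

Each resulting dictionary plays the role of one CVL Resolution Model, i.e., one concrete architecture to analyse.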
\subsection{Architecture and Energy Consumption Simulations}
\emph{Once Alice has selected the energy consuming concerns and the
variants she wants to explore, she has to perform an
energy-efficiency analysis of the different alternatives (\underline{Figure
\ref{fig:assistant}, label 4}), the fourth step}. HADAS will help her to be aware of
the energy implications of her decisions.

The energy consumption of each concern variant is expressed by means
of energy functions parameterized by input parameters (e.g., the
audio file size). HADAS does not aim to provide the exact
consumption in Watts, because what Alice needs to know is which
alternatives consume more energy than others, and which ones
can be considered greener. \emph{This is an iterative process,
since Alice can select different options for several concerns and
analyse the impact of each of her decisions on the energy
expenditure of the application.} So, the HADAS Green Assistant
shows the energy consumption for every concern but also for the whole
architecture since, as mentioned before, the concerns are not
independent from each other. Then, as we describe in the next
section, we need to be able to simulate the energy expenditure
of all the possible architectural alternatives of all the concerns.
In this way the HADAS Green Assistant provides the energy
functions for every configuration generated from the variability
model described in the previous subsection.
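Seen from the outside, the assistant's output at this step is one energy function per configuration, which can then be compared over the input range of interest. A sketch of how such functions might be stored and queried; the coefficients below are invented for illustration, not measured HADAS values:

```python
# Hypothetical energy functions (Watts as a function of file size in MB).
ENERGY = {
    ("Compression", "LAME"):   lambda mb: 0.010 * mb,
    ("Compression", "Vorbis"): lambda mb: 0.022 * mb,
    ("Compression", "jFLAC"):  lambda mb: 0.030 * mb,
}

def greenest(concern, size_mb):
    """Return the variant of `concern` with the lowest predicted consumption
    for the given input size."""
    candidates = {v: f(size_mb) for (c, v), f in ENERGY.items() if c == concern}
    return min(candidates, key=candidates.get)

best = greenest("Compression", 30)  # 'LAME' under these invented coefficients
```

As the paper stresses, only the relative ordering of the candidates matters here, not the absolute Watt values.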
The Base Model of CVL must be a MOF-compliant model of the software
architecture, which in our case is the set of components and
connections that specify a concrete concern variant. Since we need
to include the expected energy consumption of each variant, we have
to use an architectural model or language that provides this kind of
information. We have chosen to model the architecture using the
Palladio Component Model (PCM) due to the powerful toolset it
provides (Palladio) to analyse resource consumption at the
architectural level (including energy). The metamodel of PCM can
be implemented in MOF, so it can perfectly be used jointly with CVL.
Therefore, in order to automatically provide the energy consumption
functions for the configurations, we connect the CVL variability
model (VSpec tree) with the respective architectural base model
specified in PCM. Figure \ref{fig:compressionpcm} depicts some of
the components that provide the \textsf{Compression} interface to
compress audio files of different formats using different algorithms
or codecs.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/CompressionPCM.pdf}
\caption{PCM Repository Diagram for the Compression Concern}
\label{fig:compressionpcm}
\end{figure}
Then, using the Palladio Power Consumption Analyzer, we simulate the
different configurations to obtain their power consumption. Thus,
application developers automatically know how the energy
consumption varies when they select different alternatives for the
selected concerns. The total number of alternatives when there are
several selected concerns can be really high, so reasoning about and
simulating the energy expenditure of all these possible alternatives by
hand for every application is not feasible. HADAS addresses this by simulating in advance the energy expenditure
of every possible configuration of our repository with Palladio.
This takes a lot of time, but with our approach it has to be performed only once. The
benefit for developers is that they do not have to run any
resource consumption simulation by hand; the tool shows the
results, so the developer only has to analyse them.
\emph{Furthermore, thanks to the fact that HADAS takes into account the
dependencies between different energy consuming concerns, Alice is
able to make complex decisions without much effort. For
example, Alice can decide not to upload
compressed audio files when they are too small (less than 5 Mb). She
is able to make this decision because the energy consumption
functions provided by HADAS take into account both the energy
expenditure to compress the file and the energy expenditure to send
the compressed file to the server.} This is possible because
HADAS exploits the dependencies between the concerns. So, although
in the Media Store requirements Alice did not identify
communication as an energy hotspot, HADAS automatically suggested
that she also consider this concern.
It showed the different energy functions for the
upload functionality, including compression (specified as another cross-tree constraint).
This is very important since, as will be detailed in the next section, the
decisions taken could be different if we analysed the audio compression
without considering the communication, and vice versa.
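The small-file decision just described reduces to comparing the end-to-end energy of the two paths: compressing costs energy (including a fixed per-file overhead) but shrinks the file that communication must send. A toy model with invented numbers, not HADAS measurements:

```python
# Invented coefficients, for illustration only.
COMPRESS_FIXED  = 0.10   # fixed start-up cost per file (Watts)
COMPRESS_PER_MB = 0.02   # compression cost per MB
RATIO           = 0.25   # compressed size / original size
SEND_PER_MB     = 0.05   # upload cost per MB

def upload_energy(size_mb, compress):
    """Total energy to (optionally compress and) upload a file of size_mb."""
    if compress:
        return (COMPRESS_FIXED + COMPRESS_PER_MB * size_mb
                + SEND_PER_MB * RATIO * size_mb)
    return SEND_PER_MB * size_mb

# Small files: the fixed compression overhead dominates, so raw upload wins.
small_raw_wins = upload_energy(2, False) < upload_energy(2, True)
# Large files: the saved transfer energy outweighs the compression cost.
large_compressed_wins = upload_energy(100, True) < upload_energy(100, False)
```

The crossover size depends entirely on the coefficients, which is exactly why the combined, dependency-aware energy functions are needed to make the call.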
\emph{The \underline{last step Alice has to perform (Figure \ref{fig:assistant}, label 5)} is to get the architectural
configuration from the HADAS repository simply by clicking a button}. For
that, HADAS generates the architectural configuration, including
the definitive options Alice has selected after the analysis in step 4, using our implementation of the CVL engine. For instance, HADAS will include in the configuration (i.e., the resolved model in CVL) components for the
\textsf{Store}, \textsf{Compression}, \textsf{Data Access}, \textsf{Security} and \textsf{Communication} concerns corresponding to the
options chosen by her. \emph{In this case, Alice will have to
complete the application with the rest of the functionality
considered not relevant for energy efficiency (e.g., audio file
editing)}.
\section{Evaluation}
In this section we detail how we perform the
simulations needed to inform developers, through the HADAS
green assistant, about the power consumption of the alternative
architectures they want to explore. We perform the
simulations in advance so as to store in the HADAS green repository an
energy function indicating how many Watts each possible
configuration of every concern will consume, at least
in relative terms -- i.e., which of the different variants consumes
less and which consumes more. We differentiate two main steps: a first
step in which we perform the experiments to obtain the energy functions
for each energy-consuming concern, and a second step in which we
integrate the information from the experiments into the Palladio
Power Consumption Analyzer in order to simulate the joint use of several
dependent concerns.
\subsection{Experimental studies}
For every energy-consuming concern we have carried out a set of experiments
measuring the energy consumption with Joulemeter\footnote{https://www.microsoft.com/en-us/research/project/joulemeter-computational-energy-measurement-and-optimization/},
a Microsoft modeling tool that measures the energy usage of software
applications running on a computer. This tool was calibrated
using Watts Up\footnote{https://www.wattsupmeters.com/secure/products.php?pn=0} to
obtain the real power consumption of every hardware component
(e.g., CPU, HDD, screen, ...). All the experiments were conducted
on a Gateway DT30 desktop PC with an Intel Core 2 Quad Q9300 at 2.50GHz
and 8GB of RAM under Windows 10, 64 bits. All the concerns have
been implemented in Java.
Remember that Alice chose 4 different audio codecs to compress audio
files because she wanted to know which one(s) were more appropriate,
from an energy point of view, to be included in her Media Store
application. So, in order to let her know which one is more energy
efficient for her application, in step 4 described in the previous
section, the HADAS green assistant could show the graphic of Figure
\ref{fig:graphics_compression}. This graphic shows the power
consumption (on a logarithmic scale) of compressing 9 WAV audio files
of different sizes (from 5Mb to almost 1GB) using the following
audio compression algorithms: Java LAME 3.99.3 to create MP3
files using a bitrate of 128 kbps, Vorbis-java (libvorbis-1.1.2) to
compress into OGG files, javaFlacEncoder 0.3.1 for the FLAC algorithm
and Java Speex Encoder v0.9.7, indicated as SPX in the figure. Then,
Alice can explore which one is more energy efficient for her
application. For instance, if it is a media store for managing music
songs, the typical file sizes could be between 15-35Mb, so she
would only need to look at this range. As can be observed in the
graphic, for these sizes the most energy efficient algorithm is LAME,
compressing to MP3 files, since it consumes less than 0.3 Watts,
while the other three consume more than 0.6 Watts, double that.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/graphics_compression.pdf}
\caption{Compression Power Consumption}
\label{fig:graphics_compression}
\end{figure}
However, analysing this information in isolation is not enough to let
Alice decide which algorithm to use for her application,
since the compression concern will not be used alone. Typically, the
Media Store will compress the audio files before uploading them
to the server. Then, in order for Alice to reason properly,
she needs to know the total power consumption both to compress the file
and to send it to the server. Notice that the different compression
algorithms produce compressed files of different sizes, and therefore
the energy consumption of the communication concern will differ depending on the compression algorithm previously used.
So, in order to simulate the energy consumption of both concerns
working together we use the Palladio Power Consumption Analyzer.
\subsection{Simulation with the Palladio Analyzer}
As previously described, the HADAS repository contains the architectural
components modelling the energy consuming concerns
with all their respective alternatives, and with information about how they are connected
to each other. These models are defined in the Palladio PCM repository diagram. Then,
for every component we have also defined its behavior in a Service Effect
Specification (SEFF) PCM diagram.
In these diagrams we have also included
the energy models that represent the power consumption of the
internal actions (or methods) of the behavior, which were
previously calculated from the Joulemeter measurements, as
explained above for the compression algorithms.
Then, using the Palladio Power Consumption Analyzer we can simulate
how much energy the two concerns working together consume, as shown
in Figure \ref{fig:graphics_communication}. Note that this is not as
simple as summing up the Watts of the two actions (compression and
communication), since the communication energy depends on the
size of the file to be sent, and this size strongly depends on the
compression algorithm used. In this graphic we have also included
the power consumption of sending a WAV file without compression. We
can observe that for not too big files (less than 20Mb) it can be
more energy efficient to send the file uncompressed than
to compress it using either javaFlacEncoder 0.3.1 (the FLAC
algorithm) or the Java Speex Encoder. So in this case, for the
typical size of WAV song files, the decision is not as clear as
before. For instance, for 30Mb files the energy consumption of first
compressing the file with LAME, Vorbis or Speex and then uploading
the compressed file to the server is very similar, so Alice could
choose the one that best fits other requirements such as, for
instance, the audio quality. However, she should avoid the FLAC
algorithm for her media store application. If, on the other hand,
the media store were dedicated to managing short recorded audio
messages of less than 10Mb, this algorithm would be the most
energy-efficient one to compress and send the file to the server. However, the FLAC
algorithm is the one that consumes the most power for big files, so
for a media store managing long audio conferences the Speex
algorithm would consume much less. These differences in energy
consumption were not as high when we simulated the compression
algorithms alone.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/graphics_communication.pdf}
\caption{Compression and Communication Power Consumption}
\label{fig:graphics_communication}
\end{figure}
Therefore, thanks to our proposal, where we also use Palladio to
perform the simulations in advance, Alice is able to explore all the
alternatives for all the energy-consuming concerns available in our
repository. Of course, our assistant will show her all the
alternatives for the five concerns she selected, not only for these
two, so she will be able to analyse the simulations and pick the
most appropriate and energy-efficient alternatives for the whole
architecture.
\subsection{Discussion}
Performing the experiments for every individual concern and the subsequent simulations using
Palladio for all the possible configurations of the concerns are time-consuming and non-trivial tasks.
However, these tasks have to be done just once, when adding a concern to the repository. Then, this information can be (re)used for many different applications, with many
advantages for application developers:
\begin{enumerate}
\item It helps them to identify potential energy hotspots in their
application just by looking at the requirements and at the HADAS
assistant form. They can then be aware of these hotspots when implementing the application.
\item It detects the dependencies between
the concerns automatically, helping developers to take into account other
hotspots that they had not previously identified, as happened to
Alice with the \textsf{Communication} concern.
\item It allows them to explore a
high number of alternatives at a glance, with just a few clicks in
the form. In \cite{Stier2015}, in order to demonstrate the functioning
of the Palladio Power Consumption Analyzer, the authors replace the
LAME algorithm with the Vorbis algorithm in their media store. But
this is done manually, and the simulation of the whole architecture
has to be performed again as a consequence of changing a single
component. With our assistant, Alice needs just 4 clicks
to explore the 4 algorithms (2 more than in
\cite{Stier2015}) and to decide which one to use for her
application. Then, the HADAS assistant provides her with the PCM system
diagram containing the concrete alternatives she has chosen. Without our
assistant, instead of making 4 clicks she would have to test
manually how much energy the different compression algorithms consume,
and then modify the original media store architecture by hand to
simulate the 4 different whole architectures. Moreover, if she wanted
to explore the variability of other concerns, the number of possible
architectures would grow exponentially, which would be very difficult
to manage.
\item Our results are accurate enough to decide which architectures are
more energy efficient than others. In \cite{Stier2015} the authors
demonstrate that the Palladio Power Consumption Analyzer is suited to
accurately predicting energy consumption at the architectural level.
Therefore, since we use this tool for our simulations, we can
conclude that the results we offer are also accurate. In any case, as
explained above, our purpose is not to predict the
exact number of Watts consumed by a variant, but which one may be
more energy-efficient than another.
\end{enumerate}
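The exponential growth mentioned in item 3 is easy to quantify: the number of whole architectures to simulate by hand is the product of the number of alternatives per selected concern. For illustration (the per-concern variant counts below are invented, not the actual repository figures):

```python
from math import prod

# Illustrative variant counts per concern.
variants = {
    "Store": 3,
    "Compression": 4,
    "Security": 5,
    "Data Access": 2,
    "User Interface": 2,
}

# Total whole architectures = product of alternatives per concern.
total = prod(variants.values())  # 3 * 4 * 5 * 2 * 2 = 240
```

Even with these modest counts, a developer working without the assistant would have to model and simulate hundreds of architectures manually.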
\acks
Research funded by the Spanish TIN2015-64841-R (co-funded by EU
with FEDER funds) and MAGIC P12-TIC1814 projects and by the International Campus of Excellence Andalucia TECH.
\bibliographystyle{abbrvnat}
\section{Introduction}
Energy-aware software development (or Green Computing \cite{Li2011}) is a growing trend in computing. Indeed, the increasing number of papers addressing software sustainability in recent years clearly indicates that today's software developer community is starting to pay more and more attention to energy-efficiency concerns.
However, recent empirical studies \cite{Pinto2014,Pang2015,Manotas2016,Chitchyan2016} show that software developers do not have enough knowledge about how to reduce the energy consumption of their software solutions. The majority of developers are not aware of how much energy their application will consume and so they rarely address energy efficiency \cite{Pinto2014,Pang2015}. Even practitioners who appear to have experience with green software engineering have significant misconceptions about how to reduce energy consumption \cite{Manotas2016}.
Also, software developers are unsure about the patterns and anti-patterns associated with energy efficiency \cite{Manotas2016}. These studies also evidence the lack of tool support for green computing, not only at the code level, but also at higher abstraction levels ---i.e., the requirements and software architecture levels \cite{Chitchyan2016}. The main conclusion of these studies is that software developers need more precise evidence about how to tackle the energy-efficiency problem and tool support that helps them to effectively address it \cite{Pang2015,Manotas2016}.
There are plenty of experimental approaches that try to identify which parts of an application have the greatest influence on its total energy footprint ---i.e., to identify the \emph{energy hotspots} \cite{Noureddine2015-2}. An important part of these works proposes to minimize energy consumption by focusing on code-level optimizations, reporting the energy consumption of different implementations: for example, of data collections in Java \cite{Hasan2016}, or of system calls in Android applications \cite{Li2014}. Other works, however, demonstrate that changes in architectural design tend to have a larger impact on energy consumption \cite{grosskop2013}. Yet analysing the expected energy consumption of so many alternative architectural solutions is not a trivial task. Developers would need tool support that helps them to measure, analyse and reason about alternative architectural solutions to energy hotspots ---i.e., the sets of components that implement a given energy hotspot (hereinafter, \emph{energy-consuming concerns}).
One of the benefits of addressing energy efficiency at the architectural level is that it provides software developers with the necessary means to analyse the energy consumption of different alternative solutions before implementing them. Absolute energy values are not needed, because what is important for developers is being able to compare the energy consumed by different architectural alternatives \cite{JagroepWBPLBV16}. There is no doubt that the green computing community has made many steps forward in the development of green software architectures. Some relevant examples are the catalogs of energy-aware design patterns \cite{Noureddine2015-1} and architectural tactics \cite{Procaccianti2014}, as well as new architecture description languages that incorporate an energy profile and analysis support \cite{Stier2015, Ouni2012}.
However, we argue that there is still not enough tool support to help developers clearly identify the energy-consuming concerns in their applications and, moreover, to choose and generate the most appropriate architectural solutions from an energy consumption point of view. On the one hand, recent studies, although complementary, are disconnected, so their results cannot be easily applied to the development of green applications in an integrated way \cite{Manotas2014}. On the other hand, these studies do not always consider that some energy-consuming concerns (e.g., storing data locally or remotely) have strong dependencies with others (e.g., storing data remotely depends on the communication concern, and the latter on the security concern) \cite{DeMaio2016}. Moreover, the results of these studies are usually not easily accessible to practitioners, who do not know how to apply and reuse this knowledge in their applications \cite{Pinto2014,Pang2015,Manotas2016,Chitchyan2016}. In order to cope with these limitations, we consider that software developers need some kind of ``assistant'' that guides them through all the steps required to identify, model, analyse and reason about different architectural solutions to energy-consuming concerns.
In this paper we present the \emph{HADAS green assistant}, that aims
to help developers to generate the most energy-efficient
architectural configurations that fulfil the application
requirements. This assistant will suggest a set of energy-consuming
concerns (i.e., points in the application that consume more energy,
like encrypting or transferring data) and for each of them it will
show the list of possible architectural solutions, along with an
energy function. Each of the provided solutions was previously
modeled and its energy consumption calculated or predicted before
being stored in the repository (for predictions we use the Palladio
Power Consumption Analyzer \cite{Stier2015}). HADAS thus drastically
reduces the effort of analysing the energy consumed by different
architectural solutions, which in other works has to be performed
from scratch. HADAS internally models the variability of the
architectural solutions for the energy-consuming concerns using
variability models, concretely the Common Variability Language (CVL)
\cite{HaugenWC13} (e.g., cache memory can be modeled as an optional
feature). It also models the architectural dependencies between
different energy-consuming concerns, meaning that the computation of
the expected energy consumption of one of those concerns also
considers the other concerns needed to apply a specific architectural
pattern (something that is not always considered in other works).
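The dependency-aware energy computation just described can be sketched in a few lines of C++; the concern names, the linear energy functions and the per-megabyte values below are illustrative assumptions for the sketch, not data from the HADAS repository:

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <string>
#include <vector>

// One architectural alternative for an energy-consuming concern, with an
// assumed linear energy function and the concerns it depends on
// (e.g. remote storage depends on communication).
struct Option {
    std::string name;
    double joulesPerMB;                  // illustrative energy function
    std::vector<std::string> dependsOn;  // architectural dependencies
};

// Expected energy of the option selected for `concern`, including the energy
// of the concerns it (transitively) depends on -- the dependency handling
// that, as argued above, other approaches often leave out.
double totalEnergy(const std::map<std::string, Option>& selection,
                   const std::string& concern, double megabytes) {
    const Option& opt = selection.at(concern);
    double energy = opt.joulesPerMB * megabytes;
    for (const std::string& dep : opt.dependsOn)
        energy += totalEnergy(selection, dep, megabytes);
    return energy;
}
```

With a selection such as remote storage (0.75 J/MB, hypothetical) depending on Wi-Fi communication (0.5 J/MB, hypothetical), comparing `totalEnergy` for two candidate selections is enough to rank the alternatives, since only relative values matter.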
To sum up, with the HADAS green assistant the developer can choose among different alternatives for a particular energy-consuming concern (e.g. storing information, communication or compression) and will be able to analyse and reason about the energy impact of each design decision. Finally, the tool will automatically generate, from an energy consumption point of view, the architectural configuration derived from the selections made by the developer.
After this introduction and the related work described in Section 2,
the requirements to implement the HADAS green assistant are detailed
in Section 3. Then, in Section 4 the HADAS green repository and the
HADAS green assistant are described from two different points of
view. One is the use of the green assistant by the software
developers of green applications. The other one is the technical
details regarding their implementation. Our approach is evaluated
and the results discussed in Section 5. Finally, the conclusions and
on-going work are described in Section 6.
\section{Related Work}
In order to better motivate our work, we have reviewed papers focused on: (1) experimental studies at the code level (CL); (2) proposals at the requirements (RL), architecture (AL), and design (DL) levels; and (3) studies about the energy consumption awareness of software developers. Due to the large number of existing works, and the rapid changes in this area, we narrow our study to papers published in the latest editions of relevant (energy-specific) software engineering conferences and journals. We consider these to be representative papers of current research in this area.
Table 1 summarizes the different papers we have considered in this
section. For each of them we indicate the \emph{level} at which the
paper focuses (second column), the \emph{type} of paper (third
column), whether \emph{dependencies} are considered or not (fourth
column), the main \emph{output} (fifth column) and the
\emph{knowledge} that is derived from this work (sixth column).
\begin{table*}
\centering
\caption{Related Work: Energy-aware approaches}
\label{tab:relatedWork}
\begin{threeparttable}
\begin{tabular}[c]{|p{2.3cm}|p{.7cm}|c|c|p{5.4cm}|p{5.4cm}|}
\hline
\textbf{Proposal} & \textbf{Level} & \textbf{Type} & \textbf{Dep.} & \textbf{Output} & \textbf{Knowledge} \\ \hline
1. Hasan \cite{Hasan2016} & CL & Exp. & No & Energy profiles of operations on Java Collections. & It claims that a per-method analysis of energy consumption must be made. \\ \hline
2. Li \cite{Li2016} & CL & Exp. & No & Multiple HTTP requests are bundled to reduce energy consumption. & HTTP requests are one of the most energy consuming operations. \\ \hline
3. Chen \cite{Chen2015} & CL & Exp. & No & Performance and energy consumption profiles for cloud applications. & Tool support is needed; also using realistic application workloads. \\ \hline
4. Li \cite{Li2014} & CL & Exp. & No & Quantitative information about the energy consumption of Android apps (405 apps) & More than 60\% of energy consumed in idle states; the network is the most energy consuming component; developers should focus on code optimization.\\ \hline
5. Procaccianti \cite{Procaccianti2016} & CL & Exp. & No & Report on two green software practices: use of efficient queries and put applications to sleep.
& Software design and implementation choices significantly affect energy efficiency. The effectiveness of best practices for reducing energy consumption needs to be precisely quantified. \\ \hline
6. Manotas \cite{Manotas2014} & CL & Fram. & No & Define the SEED framework for the automatic optimization of energy usage of applications by making code level changes. & Support is needed to integrate the insights gained by existing experimental studies to help identify the most energy-efficient alternatives.
\\ \hline
7. Noureddine \cite{Noureddine2015-1} & DL CL & Exp. & No & Empirical evaluation of 21 design patterns. Compiler transformations to detect\&transform patterns during compilation for better energy efficiency with no impact on coding practices. & The energy consumption of a design pattern depends highly on the running environment; several studies have identified both patterns and anti-patterns regarding energy consumption.\\ \hline
8. Procaccianti \cite{Procaccianti2014} & AL & Mod. & Yes & Identify energy efficiency as a quality attribute and define green architectural tactics for cloud applications. Identify relationships between different architectural tactics. & Energy efficiency has to be addressed from a software architecture perspective. Software architects need reusable tactics for considering energy efficiency in their application designs. \\ \hline
9. PCM \cite{Stier2015} & AL & Mod. & Yes & Architecture Description Language and tool set with support for the specification, calculation and analysis of energy consumption. & At the architectural level the energy consumption can be estimated based on resource consumption (CPU, HDD, etc.) and usage models.
\\ \hline
10. AADL \cite{Ouni2012} & AL & Mod. & Yes & Plug-in integrated with the AADL tool to support the specification and analysis of energy consumption. & It focuses in the energy overhead of inter-process communication, an important service of embedded systems. \\ \hline
11. Pinto \cite{Pinto2014} & CL & EmpSt. & - & Qualitative study exploring the interest and knowledge of software developers about energy consumption. & Lack of tools; many misconceptions and panaceas. Major causes of energy consumption problems are identified.
\\ \hline
12. Chitchyan \cite{Chitchyan2016} & RL & EmpSt. & - & Qualitative study exploring requirements engineering practitioners' behaviour towards sustainability (including energy consumption). & Lack of methodological support; lack of management support; requirements trade-off and risks, ... \\ \hline
13. Manotas \cite{Manotas2016} & RL AL CL & EmpSt. & - & Qualitative study exploring the knowledge of practitioners interested in energy consumption from different perspectives (requirements, design and construction). & Green software practitioners care and think about energy; however, they are not as successful as expected because they lack necessary information and tool support. \\ \hline
14. Pang \cite{Pang2015} & CL & EmpSt. & - & Qualitative study exploring the knowledge of practitioners about energy consumption. & Programmers rarely address energy. There are important misconceptions about software energy consumption.
\\ \hline
\end{tabular}
\begin{tablenotes}
\item CL - Code Level, AL - Architecture Level, DL - Design Level, RL - Requirements Level
\item Exp. - Experimental Work, Fram. - Framework, Mod. - Modelling Work, EmpSt. - Empirical Study (Questionnaires)
\end{tablenotes}
\end{threeparttable}
\end{table*}
Firstly, in rows 1 to 7 we can observe that a large number of papers present experimental studies performed at the code level. A common goal to all of them is the definition of energy profiles for different energy-consuming concerns. \emph{These experimental studies usually focus on one particular energy-consuming concern (e.g. communication or data storage) without considering the dependencies among them}.
It is important to highlight that \emph{a few of these proposals provide some support to integrate/reuse the knowledge obtained from experimental studies}. For instance, the work in \cite{Procaccianti2016} (row 5) defines a wiki with a template to integrate all the results from different experimental studies, and the work in \cite{Manotas2014} (row 6) defines a framework to integrate different energy-efficient alternatives. \emph{But none of them explicitly models the variability of energy-consuming concerns, losing the opportunity to automatically generate and manage green product configurations}.
There is also an increasing number of proposals that focus on higher levels of software development, such as design (row 7) or architecture (rows 8 to 10). Among these high-level proposals, we can distinguish between experimental works (row 7) and modelling works (rows 8 to 10). Some of the modelling works are architecture description languages that provide support for analysing energy consumption (rows 9 and 10). Most of the works at the architectural level consider the relationships between different energy concerns, although in different ways. For instance, in \cite{Procaccianti2014} the authors define the joint use of different architectural tactics. Also, the works in \cite{Stier2015} and \cite{Ouni2012} provide support to specify the relationships between components modelling different energy concerns. These relationships can then be used during the analysis phase to see how one energy concern (e.g. communication) can influence other energy concerns (e.g. compression or data storage). \emph{But the identification and specification of dependencies between energy concerns has to be done manually by the software engineer}.
Finally, the number of empirical studies at different stages of the software life cycle (requirements, design, implementation) and with different groups of software developers (e.g. worried, or not, about the energy consumed by their applications) has increased considerably in recent years (see rows 11 to 14). As indicated in the introduction, the results of all these empirical studies point in the same direction: \emph{there is a lack of methodological and tool support, and software developers still have many misconceptions about how to reduce the energy consumption of their applications}.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 4,598 |
The Nick Guide To Your Best Ever School Year is a good choice for anyone looking for an enjoyable reading experience. We hope you enjoy visiting our website. Please read our description and our privacy policy page.
Finally I got this ebook; thanks to all of this, The Nick Guide To Your Best Ever School Year is something I can now get! | {
"redpajama_set_name": "RedPajamaC4"
} | 2,859 |
Petar Jelić (, 18 October 1986, Modriča, SFR Yugoslavia) is a former Bosnian footballer who played as a striker. He appeared for the Bosnia and Herzegovina national team.

Club career
He made his professional debut in 2003 with Modriča, where he spent three seasons and made 79 league appearances. For most of his time at Modriča he was a key player in the team's attack.
Between 2006 and 2007 he played for Carl Zeiss Jena.
His performances there attracted the attention of OFK Beograd, which he joined in 2007. He spent the next three seasons of his playing career with the Belgrade side.
In 2010 he signed a contract with Volga Nizhny Novgorod, where he spent the following two years of his playing career.
From 2011 to 2014 he played for Dinamo Tbilisi, Novi Pazar and Guangdong Sunray Cave.
He ended his professional playing career at Rad, for whom he played between 2014 and 2015.

International career
He made his debut in official matches for the Bosnia and Herzegovina national team in 2006. Over his international career, which lasted only one year, he won 2 caps for his country.

Career statistics
International statistics

Honours
Top scorer of the Bosnia and Herzegovina championship: 2006

External links

Bosnian footballers
Bosnia and Herzegovina international footballers
FK Modriča players
Carl Zeiss Jena players
OFK Beograd players
Volga Nizhny Novgorod players
Dinamo Tbilisi players
Novi Pazar players
Guangdong Sunray Cave players
FK Rad players
Bosnian expatriate footballers
Expatriate footballers in Germany
Expatriate footballers in Serbia
Expatriate footballers in Russia
Expatriate footballers in Georgia
Expatriate footballers in China
Serbs of Bosnia and Herzegovina | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,402 |
IC 2712 is a galaxy of Hubble type S? in the constellation Leo, on the ecliptic. It is estimated to be 410 million light-years away from the Milky Way and has a diameter of about 25,000 light-years.
The galaxies IC 2676, IC 2680, IC 2702 and IC 2707, among others, lie in the same region of the sky.
The object was discovered on 27 March 1906 by Max Wolf.
External links
SIMBAD Astronomical Database
References | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 2,886 |
Q: Poly output error - linked list My current output is giving me an error and I do not understand why. If someone could guide me as to why it does it would be greatly appreciated. I am able to add two polynomials together but when I get the output I get a segmentation fault after removing a space from the output operator. I do not know why this is. I am also using codeblocks if that helps.
main.cpp
#include <iostream>
#include "poly.h"
using namespace std;
int main ()
{
int x1[] = {1 , 0 , 3 , 4 , 5};
int x2[] = {3 , 2};
polynomial p1(x1 , 4);
polynomial p2(x2 , 1);
polynomial p3(5);
polynomial p4;
polynomial result;
result = 6;
cout << " p1 = " << p1 << endl ;
cout << " p2 = " << p2 << endl ;
cout << " p3 = " << p3 << endl ;
cout << " p4 = " << p4 << endl ;
cout << " result = " << result << endl << endl ;
result = p1 + p2 ;
cout << " p1 + p2 = " << result << endl ;
poly.h
#include <iostream>
using namespace std;
class polynomial
{
struct node
{
int coefficient ;
node * link ;
};
public:
polynomial();
polynomial(const polynomial&);
polynomial(int* ,int);
polynomial(int);
~polynomial();
polynomial operator+(const polynomial&) const;
polynomial operator+(int) const;
const polynomial& operator=(const polynomial &);
const polynomial& operator=(int);
friend ostream& operator<<(ostream& outfile , const polynomial&);
friend polynomial operator+(int ,const polynomial&);
private:
node* head;
int degree;
};
poly.cpp
#include <iostream>
#include "poly.h"
using namespace std;
polynomial::polynomial()
{
head = new node;
head->coefficient = 0;
head->link = NULL;
degree = -1;
};
polynomial::polynomial(const polynomial& copy)
{
    if(this != &copy)
{
delete[] head;
head = copy.head;
}
};
polynomial::polynomial(int * p, int degree)
{
this->degree = degree;
head = new node;
head->coefficient = p[0];
head->link = NULL;
for(int x=1;x<degree;x++)
{
node* temp;
temp = new node;
temp->coefficient = p[x];
temp->link = head;
head = temp;
}
node* temp;
temp = new node;
temp->coefficient = p[degree];
temp->link = head;
head = temp;
};
polynomial::polynomial(int s)
{
degree = 0;
head = new node;
head->coefficient = s;
head->link = NULL;
};
polynomial::~polynomial()
{
node* temp = head;
node* current = head;
while(current != NULL)
{
current = current->link;
delete temp;
temp = current;
if (current == NULL || current == NULL)
break;
}
};
polynomial polynomial::operator+(const polynomial& rhs) const
{
polynomial hold;
polynomial tempLhs;
polynomial tempRhs = rhs;
tempLhs.degree = degree;
tempRhs.degree = rhs.degree;
hold.degree;
int tempDegree;
tempLhs.head = new node;
tempRhs.head = new node;
hold.head = new node;
for(int x=0;x<tempDegree+1;x++)
{
node* temp;
temp = new node;
temp->coefficient = 0;
temp->link = hold.head;
hold.head = temp;
}
tempLhs.head = head;
tempRhs.head = rhs.head;
if(tempLhs.degree < tempRhs.degree)
{
tempDegree = tempLhs.degree;
hold.degree = tempDegree;
for(int x = (tempDegree-tempLhs.degree-1);x<tempDegree+1;x++)
{
node* temp;
temp = new node;
temp->coefficient = 0;
temp->link = tempLhs.head;
tempLhs.head = temp;
}
}
else if(tempLhs.degree > tempRhs.degree)
{
tempDegree = tempLhs.degree;
hold.degree = tempDegree;
for(int x = (tempDegree-tempRhs.degree-1);x<tempDegree+1;x++)
{
node* temp;
temp = new node;
temp->coefficient = 0;
temp->link = tempRhs.head;
tempRhs.head = temp;
}
}
else
{
tempDegree = tempRhs.degree = tempLhs.degree;
hold.degree = tempDegree;
}
node* lhsCurrent = tempLhs.head;
node* rhsCurrent = tempRhs.head;
int tempArr[tempDegree];
while(lhsCurrent != NULL && rhsCurrent != NULL)
{
for(int x=tempDegree;x>-1;x--)
{
tempArr[x]= lhsCurrent->coefficient + rhsCurrent->coefficient;
lhsCurrent = lhsCurrent->link;
rhsCurrent = rhsCurrent->link;
}
}
polynomial use(tempArr, tempDegree);
return use;
};
polynomial polynomial::operator+(int rhs) const
{
polynomial temp = *this;
return rhs+temp;
};
const polynomial& polynomial::operator=(const polynomial& rhs)
{
cout << "doing = operator" << endl;
degree = rhs.degree;
if(this != &rhs)
{
delete[] head;
head = rhs.head;
}
return *this;
};
const polynomial& polynomial::operator=(int rhs)
{
degree = 0;
head = new node;
head->coefficient = rhs;
head->link = NULL;
};
ostream& operator<<(ostream& out, const polynomial& rhs)
{
out << "operator ";
polynomial::node* temp = new polynomial::node;
temp = rhs.head;
while(temp != NULL)
{
out << temp->coefficient << " ";
temp = temp->link;
if(temp == NULL)
break;
}
out << " ";
};
The output should be this
p1 = 5 x ^4 + x ^2 + 5 x + 4
p2 = 3 x + 2
p3 = 5
p4 = 0
result = 6
p1 + p2 = 5 x ^4 + x ^2 + 8 x + 6
I am getting this result but I just have to format it so that the degrees are represented correctly but my addition it coming out correctly I just need to adjust the output operator which is not the issue.
Whenever I run the program without
out << " ";
which is the second to last line of poly.cpp I get an error.
It says I have segmentation fault after line 215 which happens to be the last line of poly.cpp when the out<< is deleted from the code.
A: result = p1 + p2 ;
invokes operator+ which is way too long to reproduce in full here, but
polynomial tempRhs = rhs;
invokes the copy constructor to create an Automatic variable that will go out of scope and die at the end of the function. Let's take a look at the copy constructor.
polynomial::polynomial(const polynomial& copy)
{
    if(this != &copy)
{
delete[] head;
head = copy.head;
}
};
This is almost completely wrong. Good on the Asker to realize they needed one, but this isn't helping. Let's break it down:
if(this != &copy)
is hard to avoid. The copy constructor is invoked to make a new polynomial out of an existing polynomial. You have to work at copy-constructing yourself, so it's probably not worth testing for.
delete[] head;
head should be a single node, so delete[] is the wrong operator to use or you have a weird bug elsewhere I haven't reached yet. To delete a single item use delete. To delete an array of items, use delete[]
Next, this is a new object and head hasn't been assigned anything yet making this both unnecessary and fatal: this is trying to delete storage the program probably doesn't own and if it does own the block of storage, some other pointer is getting a very nasty surprise when it finds its memory gone.
head = copy.head;
defeats the point of making a copy constructor because you are right back to the default behaviour of a copy constructor: Blindly copy all of the members without giving a thought to what is being copied. This doesn't copy the nodes, it copies the pointer to the head node, so now you have two objects pointing to the same node, and this is very bad. Modifying one copy modifies the other. Destroying one copy leaves the other pointing at garbage memory, leaving you with a timebomb.
So getting back to
polynomial tempRhs = rhs;
tempRhs and rhs both point to the same list. tempRhs is going to reach the end of its scope, be destoyed, and take the shared list of nodes with it. When rhs is later destroyed it will also attempt to destroy the list of nodes. This is probably the crash the Asker is seeing since there isn't much program left to access the invalid memory in rhs and crash or go completely weird.
Let's fix the copy constructor
Recommendation 1: You need to iterate through the existing polynomial's nodes, make copies of them, and place the copies on head. If the linked list implementation has an insert function, this is the perfect time to use it. This leads to
Recommendation 2: Separate the linked list logic from the polynomial logic. Two reasons: 1) Why should a polynomial class know about more than just polynomials? The less a class knows, generally the more robust it is going to be. 2) This way you can test and debug them separately. When you know the linked list works, because you've tested the smurf out of it, polynomial contains an instance of the linked list class and trusts that the linked list works as advertised.
polynomial::polynomial(const polynomial& copy)
{
node * from = copy.head; // copying from
node ** to = &head; // pointer to where we want to copy to
while (from != nullptr) // keep going until end of list. You did mark
// the end of the list, didn't you?
{
        *to = new node(*from); // copy from into a new node and store it at to
        to = &(*to)->link;     // advance to
        from = from->link;     // advance from
}
*to = nullptr; // all done. Terminate list.
degree = copy.degree; // update the degree
};
To unravel the secret of node ** to = &head; read Using pointers to remove item from singly-linked list We aren't removing an item, but the logic is exactly the same. As mentioned above, the superior way to do this is to build it into a linked list class and keep polynomial managing polynomials, not lists.
Alright! We can copy a polynomial now! All done, right?
Wrong.
Which brings us to recommendation 3: Don't write much code between testing. At most write a function. Then test the function until you are absolutely certain it works before writing any code that depends on that function. Do not write more code while you know you have a bug. Bugs feed off of and conceal one another. Bugs are bad. You never want any, but you especially don't want more than one.
polynomial operator+(const polynomial&) const;
returns by value. As it should. But
result = p1 + p2 ;
invokes the assignment operator, so now we have to go fix operator=. We'll fast forward through it because it has the same core problem as the copy constructor and can be fixed by applying the Copy and Swap Idiom. What is the copy-and-swap idiom? There's no point trying to re-explain something upvoted 1500 times, so click the link, read, and implement.
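Here is a compilable sketch of that idiom using the question's member names. The class below is a stand-in, not the asker's full class: the array constructor is simplified, and `coefficientAt` is a helper added only to make the deep copy observable.

```cpp
#include <utility>  // std::swap

class polynomial {
    struct node { int coefficient; node* link; };
    node* head = nullptr;  // nullptr-terminated list (the question's code is not)
    int degree = -1;

public:
    polynomial() = default;

    // Simplified array constructor: p[0] ends up at the head of the list.
    polynomial(const int* p, int degree) : degree(degree) {
        for (int i = degree; i >= 0; --i)
            head = new node{p[i], head};
    }

    // Deep copy, as worked out above: clone every node instead of sharing them.
    polynomial(const polynomial& copy) : degree(copy.degree) {
        node** to = &head;
        for (node* from = copy.head; from != nullptr; from = from->link) {
            *to = new node{from->coefficient, nullptr};
            to = &(*to)->link;
        }
    }

    ~polynomial() {
        while (head != nullptr) {
            node* next = head->link;
            delete head;
            head = next;
        }
    }

    friend void swap(polynomial& a, polynomial& b) noexcept {
        std::swap(a.head, b.head);
        std::swap(a.degree, b.degree);
    }

    // Copy-and-swap: `rhs` is passed by value, so the copy constructor has
    // already done the copying; steal its guts and let the old list die
    // with `rhs`. Self-assignment is automatically safe.
    polynomial& operator=(polynomial rhs) {
        swap(*this, rhs);
        return *this;
    }

    int coefficientAt(int i) const {  // test helper, not in the original class
        node* n = head;
        while (i-- > 0) n = n->link;
        return n->coefficient;
    }
};
```

Passing `rhs` by value is the key move: the compiler invokes the copy constructor before the operator body runs, so `operator=` never has to test `this != &rhs` or delete anything itself.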
What we have been painstakingly exploring is known as The Rule of Three. Now that The Rule of Three is implemented correctly we can start to address any other bugs in the code. It's freaking late where I am, so I'm stopping here.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,209 |
JR East to build ALFA-X 360 km/h Shinkansen testbed
JAPAN: East Japan Railway has announced its intention to build a further high speed test train, as part of a programme to develop its next generation of Shinkansen trains.
JR East announced on July 4 that it expects to put the 10-car unit into operation in the spring of 2019. Dubbed ALFA-X (Advanced Labs for Frontline Activity in rail eXperimentation), the 360 km/h trainset will build on earlier research undertaken with the Fastech 360S and 360Z test trains. These led to the development of the 320 km/h Series E5 and E6 trainsets which currently operate on the Tohoku Shinkansen.
According to JR East President Tetsuro Tomita, the new generation of trainsets will be needed by the end of the 2030 financial year to coincide with completion of the 211·7 km Hokkaido Shinkansen extension to Sapporo. Running at 360 km/h would be necessary to achieve a journey time of around 3 h over the 1 075 km route between Tokyo and Sapporo.
As with the Fastech trains, a key priority for the research will be to minimise the noise created by running faster, as well as minimising the pressure pulses when entering tunnels at very high speeds. The ALFA-X driving cars will have two different nose profiles for comparative purposes. One will be similar in shape to the current Series E5 design but longer; the other will be the same length as existing noses but with a new profile.
Hokkaido Shinkansen inaugurated
JAPAN: High speed services reached Japan's third main island on March 26, with the inauguration of the first phase of the Hokkaido Shinkansen. Connecting Shin-Aomori in Honshu with Shin-Hakodate-Hokuto in the south of Hokkaido, the 149 km line uses mixed-gauge tracks in the 54 km Seikan Tunnel under the Tsugaru ...
JR East introduces 320 km/h running
JAPAN: With the launch of its new timetable on March 16, East Japan Railway began regular operation at 320 km/h between Utsunomiya and Morioka on the Tohoku Shinkansen, using its Series E5 trainsets. This is currently the fastest permitted speed for revenue services in Japan.
JR East unveils Super Komachi high speed train
JAPAN: The first of 23 production Series E6 trainsets being supplied to East Japan Railway for Akita mini-Shinkansen services was officially unveiled at the railway's Sendai rolling stock maintenance depot on November 22, where the trains were officially designated Super Komachi. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,459 |
package eu.hurion.vaadin.heroku;
import org.apache.catalina.deploy.FilterDef;
import org.testng.annotations.Test;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
public class FilterDefinitionBuilderTest {
public static final String FILTER_NAME = "test";
private FilterDefinitionBuilder baseFilterDefinition() {
return FilterDefinitionBuilder.filterDefinition(FILTER_NAME);
}
@Test(expectedExceptions = RuntimeException.class)
public void filterNameCannotBeNull() {
FilterDefinitionBuilder.filterDefinition(null);
}
@Test
public void filterNameIsSet() {
final FilterDef filter = FilterDefinitionBuilder.filterDefinition(FILTER_NAME).build();
assertThat(filter.getFilterName(), is(FILTER_NAME));
}
@Test
public void descriptionIsSet() {
final String description = "test filter description";
final FilterDef filter = baseFilterDefinition()
.withDescription(description).build();
assertThat(filter.getDescription(), is(description));
}
@Test
public void displayNameIsSet() {
final String displayName = "test filter";
final FilterDef filter = baseFilterDefinition()
.withDisplayName(displayName).build();
assertThat(filter.getDisplayName(), is(displayName));
}
@Test
public void filterClass() {
final FilterDef filter = baseFilterDefinition().
withFilterClass(MockFilter.class).build();
assertThat(filter.getFilterClass(), is("eu.hurion.vaadin.heroku.MockFilter"));
}
@Test
public void filterClassAsString() {
final FilterDef filter = baseFilterDefinition()
.withFilterClass("eu.hurion.vaadin.heroku.MockFilter").build();
assertThat(filter.getFilterClass(), is("eu.hurion.vaadin.heroku.MockFilter"));
}
@Test
public void withFilterInstance() {
final MockFilter mockFilter = new MockFilter();
final FilterDef filter = baseFilterDefinition()
.withFilter(mockFilter).build();
assertThat(filter.getFilter().equals(mockFilter), is(true));
}
@Test
public void parametersIsSet() {
final FilterDef filter = baseFilterDefinition()
.withParameter("test_param", "test_param_value").build();
assertThat(filter.getParameterMap().get("test_param"), is("test_param_value"));
}
@Test
public void onceAParameterIsSetItCannotBeChanged() {
final FilterDef filter = baseFilterDefinition()
.withParameter("test_param", "test_param_value")
.withParameter("test_param", "other_value").build();
assertThat(filter.getParameterMap().get("test_param"), is("test_param_value"));
}
@Test
public void smallIconIsSet() {
final FilterDef filter = baseFilterDefinition()
.withSmallIcon("path_to_small_icon").build();
assertThat(filter.getSmallIcon(), is("path_to_small_icon"));
}
@Test
public void largeIconIsSet() {
final FilterDef filter = baseFilterDefinition()
.withLargeIcon("path_to_large_icon").build();
assertThat(filter.getLargeIcon(), is("path_to_large_icon"));
}
@Test
public void asyncSupported() {
final FilterDef filter = baseFilterDefinition()
.supportAsync().build();
assertThat(filter.getAsyncSupported(), is("true"));
}
@Test
public void asyncNotSupportedByDefault() {
final FilterDef filter = baseFilterDefinition().build();
assertThat(filter.getAsyncSupported(), is("false"));
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 3,060 |
Q: How to display the Django main page (Python 3). When a Django project is created, the main page, base.html, is displayed.
I created a blog (the Django Girls tutorial) and set this up in urls.py:
urlpatterns = [
path('', include('blog.urls')),
path('admin/', admin.site.urls),
]
All good. Now I want to move the blog page into a subfolder, localhost/blog/, which is not a problem:
urlpatterns = [
path('blog/', include('blog.urls')),
path('admin/', admin.site.urls),
]
But how do I point a pattern back at the main page, base.html? Or do I need to create a separate startapp for the main page?
path('', TEMPLATES, {"template": "base.html"})
path('', views.home, name='home')
I tried this; it does not work.
A: I solved it like this:
youapps/urls.py
# add this
from django.urls import include, path, re_path
from . import views
urlpatterns = [
re_path(r'^$', views.index, name='index'),
]
youapps/views.py
from django.shortcuts import render
def index(request):
    return render(request, '../templates/base.html') # path to the required template here
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,618 |
Herbert Gehrke (12 June 1910 – 18 March 1945) was a German SA commander.
He is remembered, in particular, as the organiser of Köpenick's week of bloodshed which took place in June 1933 and subsequently came to be seen as an early harbinger of the Shoah. He was implicated in the killing of Johannes Stelling.
Life
Herbert Ottokar Gehrke was born in Lichtenberg, a suburb of Berlin on the city's eastern side. His father was a telegraph worker who later became a local councillor at Köpenick, a short distance to the south. The boy attended junior school locally and a single-sex middle school at Neukölln nearby. He moved on to the Friedrich-Werdersche Senior School, but had to leave after a year in order to embark on an apprenticeship as a brick-layer. He managed to pass his School final exams (Abitur) three years later. At the same time he successfully qualified as a brick-layer while working on the construction of a police accommodation block in Köpenick. Over the next few years he worked as a freelance brick-layer, but his employment was punctuated by periods of unemployment and he also took other forms of building work, worked for the postal service and undertook factory work.
Gehrke joined the Hitler Youth organisation in 1927, and in July 1928 he joined the Nazi Party. He performed various political leadership roles within the party locally, also serving at one stage as treasurer and as deputy section leader for Köpenick. At the start of 1929 he also joined the Sturmabteilung (SA) which operated as the Nazi Party's quasi-military wing. He was assigned to the SA's "Troop 37". In October 1930 he was promoted to the rank of "Scharführer" ("squadron leader"), and early in 1931 he was promoted again, becoming leader of the SA Troop in Köpenick. During this period he developed a close personal bond with Wilhelm Sander, at this time a leader within the SA (though Sander would be purged and murdered in 1934).
Early in 1931 the so-called Stennes Revolt represented a violent split within the Sturmabteilung (SA). Sander secured control over the local SA head office ("Gauhaus") with his SA people, and Gehrke was given leadership of SA "Storm Troop 37". In December 1931 this Storm Troop was uprated, becoming the "Standard 55" troop. Gehrke retained leadership of it.
Rapid increases in membership accompanied the Nazi rise to power and the party's successful power grab at the start of 1933, and this triggered a succession of organisational changes to the structure and hierarchy of the SA. Early in 1933 Gehrke's group was upgraded again, becoming an autonomous SA unit ("Sturmbann"), and on 6 August 1933 it was finally promoted to the "Standard 15" troop. By this time Gehrke and his unit had acquired notoriety for their savage rounding up of left-wing extremists. An exercise later known as Köpenick's week of bloodshed (die "Köpenicker Blutwoche") had taken place in June 1933 and targeted known political opponents of the Nazi government. Raids on people's homes included not merely searches for weapons, but also approximately 500 arrests. Some of the detainees were tortured and at least 23 died. It was only much later, after the fall of the Nazi regime, that the events of that week could be presented to a court of law, at which point it was confirmed that the killings had constituted murders. Victims included the Social Democratic former minister president of Mecklenburg-Schwerin, Johannes Stelling, and Anton Schmau, who died later from gunshot wounds. The shots were thought to have been fired by Gehrke himself. Directly after the events, however, in July 1933 Gehrke was promoted to the rank of "SA-Obersturmbannführer" ("Senior SA unit leader") in recognition of his "contribution to implementing the [Nazi] national revolution" ("in Anerkennung seiner Verdienste um die Durchführung der nationalen Revolution"). In February 1934 another promotion followed for Gehrke, this time to the rank of SA "Standartenführer". This put him in charge of around 3,000 SA men in the Köpenick district. He continued to lead the "Standard 15" troop until 30 April 1935, after which, on 1 May 1935, he became an SA leader allocated to SA Brigades 28 and 29. He retained these responsibilities until 31 July 1939.
Outside the Nazi paramilitary world, in 1933 Herbert Gehrke became deputy chairman of the Köpenick office of the local health insurance ("Ortskrankenkasse") provider.
After 1941 Gehrke took part in World War II as a soldier. By 1945 he had reached the rank of Oberleutnant. Shortly before the war ended he was killed in battle. He is buried in a military cemetery in Sandweiler in the south-eastern part of Luxembourg.
Justice in East Berlin
The end of the war in May 1945 was accompanied by the collapse of the Nazi regime. A large part of what remained of Germany, including East Berlin, now fell under Soviet administration: official interest in the "Köpenicker Blutwoche" resurfaced. Between 19 and 21 June 1947 four SA men found themselves charged with crimes against humanity in connection with the events in Köpenick fourteen years earlier. Two of these were found guilty and sentenced to terms of respectively eight years and eighteen months: the third was acquitted and the fourth managed to escape before the trial. Two more were tried, convicted and sentenced to short prison terms in August 1948.
It was not until after the Soviet occupation zone had given way to the German Democratic Republic that a larger number of those allegedly complicit in the massacre faced trial. Between 5 June and 19 July 1950, a trial of 61 formally identified defendants took place in the Fourth Criminal Chamber of the District Court in East Berlin. Only 32 of the 61 indicted were actually present; the remaining 29 were tried in absentia. 47 of the 61 were identified as SA men, 3 as Nazi Party members and one as an SS man. For the remaining ten, no equivalent affiliation was recorded. Of those tried in absentia, the whereabouts of 13 were unknown, while another ten were in West Germany, which since 1949 had been separated from East Germany politically and, increasingly, physically. Three more of the accused had managed to escape before the trial and one was known to have died young. Those who had escaped to West Germany never faced trial.
Most of those tried were found guilty: 15 were sentenced to death and a further 13 received life sentences; 25 received prison sentences of between ten and twenty-five years, and four others were sentenced to five years each of forced labour.
'Chasing Ice' Star to Speak at SNS Future in Review (FiRe) 2012 Conference
April 3, 2012, 9:14 AM ET | Source: Strategic News Service
Friday Harbor, Washington, UNITED STATES
FRIDAY HARBOR, WA--(Marketwire - Apr 3, 2012) - Strategic News Service is proud to announce that 'Chasing Ice' has been selected as the feature documentary film for FiRe X, the 10th annual Future in Review (FiRe) 2012 conference. The film, which won the Excellence in Cinematography Award at the 2012 Sundance Film Festival, will be screened for FiRe's audience of 200 global thought leaders in technology and economics. James Balog, star of the film and National Geographic photographer, will speak at FiRe, which The Economist has called "the best technology conference in the world."
"I am delighted to have this opportunity to present the story of the ice to such an extraordinary gathering of innovators whose work can directly affect our Earth's future," said Balog.
Balog, who was once a skeptic about climate change, discovered undeniable evidence of our changing planet through his Extreme Ice Survey. 'Chasing Ice' follows Balog across the Arctic as he deploys revolutionary time-lapse cameras designed for one purpose: to capture a multi-year record of the world's changing glaciers. Balog's hauntingly beautiful videos compress years into seconds and capture ancient mountains of ice in motion as they disappear at a breathtaking rate. Traveling with a young team of adventurers by helicopter, canoe, and dog sled across three continents, Balog risks his career and his well-being in pursuit of the biggest story in human history.
"For the last five years, we've had a single response to those in climate change denial: 'Watch the ice caps.' James has taken that advice into a realm as beautiful as it is tragic, and the result is both an unforgettable film and a portrayal of global warming that even Fox News and ExxonMobil will have to accept," said Mark Anderson, FiRe Chair and SNS CEO.
Future in Review is an annual gathering of world-class thought leaders in technology and economics. FiRe attendees convene each year with the goals of providing the best look forward in technology and economics, and in using technology to solve major world problems. These goals have been consistently achieved through FiRe's collaboration across disparate industries and through active support by the FiRe community. Now in its 10th year, Future in Review 2012 will take place May 22-25 at the beautiful Montage Resort in Laguna Beach, California.
To register, and to see the draft agenda, go to www.futureinreview.com.
Strategic News Service was founded by Mark Anderson in 1995 as the first paid online news service. Since its inception, SNS has proven the most accurate predictive newsletter covering the computer and telecom industries. Its subscribers include top managers at technology companies across the globe, including Microsoft, Dell, HP, Cisco, Intel, Sun, Google, Telstra, Orange, and others.
SNS has been operating the annual FiRe Conference for nine years. The Economist calls FiRe "the best technology conference in the world." FiRe exposes world experts and participants to new ideas, producing an accurate portrait of the future, and focuses on creating technology solutions to current local and global problems. FiRe 2012 will take place May 22-25, 2012 at the Montage Resort in Laguna Beach, CA. For more information go to www.futureinreview.com.
Future in Review™ is a Strategic News Service™ conference. Future in Review™ and Strategic News Service™ are registered international trademarks. The SNS newsletter is the most accurate publicly ranked predictive newsletter in computing and communications.
Websites: www.stratnews.com, www.futureinreview.com, www.futureinreview.com/global/wc
Strategic News Service
Jenny@stratnews.com
\section{Introduction}
Let $V$ be a real quadratic space of signature $(2,n)$ where $n\in\mathbb{N}_{\geq 3}$. The bilinear form of $V$ is denoted by $(\cdot,\cdot)$. The group of all isometries of $V$ is called the \textit{orthogonal group of} $V$ and is given by
\[\operatorname{O}(V)=\{g\in\operatorname{GL}(V)\,|\,\forall v\in V \,:\, (gv,gv)=(v,v)\}\,.\]
We extend the bilinear form to $V\otimes\mathbb{C}$ by $\mathbb{C}$-linearity. We consider
\[\mathcal{D}^{\pm}=\{[\mathcal{Z}]\in\mathbb{P}(V\otimes\mathbb{C})\,|\,(\mathcal{Z},\mathcal{Z})=0\,,\,(\mathcal{Z},\overline{\mathcal{Z}})>0\}\]
on which $\operatorname{O}(V)$ acts as a linear group. The domain $\mathcal{D}^{\pm}$ has two connected components. We choose one of them and denote it by $\mathcal{D}$. We define the subgroups
\[\operatorname{O}(V)^{+}\,,\,\operatorname{SO}(V)^{+}=\{g\in\operatorname{O}(V)^{+}\,|\,\det(g)=1\}\]
of index $2$ and $4$, respectively, which fix $\mathcal{D}$. The latter group is the connected component of the identity and is well-known to be a semisimple and noncompact Lie group. Its maximal compact subgroup is given by $K=\operatorname{SO}(2)\times\operatorname{SO}(n)$ and the Hermitian symmetric space $\operatorname{SO}(V)^{+}/K$ is isomorphic to $\mathcal{D}$. The affine cone is defined as
\[\mathcal{D}^{\bullet}=\{\mathcal{Z}\in V\otimes \mathbb{C}\,|\,[\mathcal{Z}]\in \mathcal{D}\}\,.\]
Let $L\subseteq V$ be a positive definite even lattice such that the dimension of $L\otimes \mathbb{R}$ is $n-2$ and let
\[U,U_{1}\cong \begin{pmatrix}
0&1\\1&0
\end{pmatrix}\]
be two integral hyperbolic planes. Denote by $L(-1)$ the associated negative definite lattice. We consider the arithmetic subgroup
\[\operatorname{O}(L_{2})^{+}=\{g\in\operatorname{O}(V)^{+}\,|\,g\,L_{2}\subseteq L_{2}\}\]
where $L_{2}\cong U\perp U_{1}\perp L(-1)$ is an even lattice in $V$.
For any subgroup $\Gamma\leq \operatorname{O}(L_{2})^{+}$ of finite index we consider the modular variety $\Gamma\backslash\mathcal{D}$. This is a noncompact space. In \cite{BorJ} and \cite{BaBor} the Satake-Baily-Borel compactification of this space is considered. The boundary components of this compactification are usually called the \textit{cusps} of $\Gamma\backslash\mathcal{D}$. In \cite{BaBor} the authors construct a general version of Siegel's $\Phi$-operator to assign boundary values to automorphic forms with respect to $\Gamma$. This is used in the following definition.
\begin{definition}\label{def:modular form}
Let $\Gamma$ be a subgroup of $\operatorname{O}(L_{2})^{+}$. A \textit{modular form} of weight $k\in \mathbb{Z}$ and character $\chi\,:\,\Gamma\to\mathbb{C}^{\times}$ with respect to $\Gamma$ is a holomorphic function $F\,:\,\mathcal{D}^{\bullet}\to\mathbb{C}$ such that
\[\begin{aligned}
&F(t\mathcal{Z})=t^{-k}F(\mathcal{Z})\quad\text{ for all }t\in\mathbb{C}^{\times}\,,&\\
&F(g\mathcal{Z})=\chi(g)F(\mathcal{Z})\quad\text{ for all }g\in\Gamma\,.&
\end{aligned}
\]
A modular form is called a \textit{cusp form} if it vanishes at every cusp. The space of modular forms of weight $k$ and character $\chi$ for the group $\Gamma$ will be denoted by $\mathcal{M}_{k}(\Gamma,\chi)$. For the subspace of cusp forms we will write $\mathcal{S}_{k}(\Gamma,\chi)$.
\end{definition}
Let $\Gamma\leq \operatorname{O}(L_{2})^{+}$ be a subgroup of finite index and denote by $\Gamma'=[\Gamma,\Gamma]$ the commutator subgroup of $\Gamma$. We denote by
\[\mathcal{A}(\Gamma')=\bigoplus_{k=0}^{\infty}{\mathcal{M}_{k}(\Gamma',1)}\]
the graded ring of modular forms.
It is well-known that this ring is finitely generated. In the sequel the notation $F_{k}\in\mathcal{A}(\Gamma')$ means that $F$ is a homogeneous modular form of weight $k$. We define the \textit{dual lattice} of $L$ as the $\mathbb{Z}$-module
\[L^{\vee}\index{$L^{\vee}$}:=\{x\in V\,|\,\forall l\in L:\,(x,l)\in\mathbb{Z}\}\,.\]
Since $L$ is even we have $L\subseteq L^{\vee}$. We define the \textit{discriminant group} as the finite abelian group
\[D(L):=L^{\vee}/L\,.\]
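For instance, for the rank-one even lattice $L=\mathbb{Z}v$ with $(v,v)=4$ one has
\[L^{\vee}=\tfrac{1}{4}\mathbb{Z}v\quad\text{and}\quad D(L)=L^{\vee}/L\cong\mathbb{Z}/4\mathbb{Z}\,;\]
in general the order of $D(L)$ equals the determinant of any Gram matrix of $L$.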
The group $\operatorname{O}(L_{2})^{+}$ acts on the discriminant group $D(L_{2})$. The kernel of this action is denoted by $\widetilde{\operatorname{O}}(L_{2})^{+}$. This subgroup will be important in our further considerations. Another natural subgroup is the finite group $\operatorname{O}(L)$ which consists of all automorphisms of the positive definite lattice $L$.
\vspace{5mm}
\noindent We put our focus to a special series of lattices. Denote by $(\cdot,\cdot)_{m}$ the standard scalar product on $\mathbb{R}^{m}$. If $\varepsilon_{1},\dots,\varepsilon_{m}$ denotes the standard basis of $\mathbb{R}^{m}$ we consider the following $\mathbb{Z}$-module of rank $m$
\[mA_{1}=\langle \varepsilon_{1},\dots, \varepsilon_{m}\rangle_{\mathbb{Z}}\,.\]
If we equip $mA_{1}$ with the bilinear form $2(\cdot,\cdot)_{m}$ we obtain a series of (reducible) root lattices where $mA_{1}$ should be understood as an $m$-fold perpendicular sum of type $A_{1}$ root lattices. Due to some low-dimensional exceptional isogenies this series has connections to modular varieties of unitary and symplectic type.
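Concretely, the Gram matrix of $mA_{1}$ with respect to $\varepsilon_{1},\dots,\varepsilon_{m}$ is $2I_{m}$, so that
\[(mA_{1})^{\vee}=\tfrac{1}{2}\mathbb{Z}^{m}\quad\text{and}\quad D(mA_{1})\cong(\mathbb{Z}/2\mathbb{Z})^{m}\,,\]
and the vectors of norm $2$, i.e. the roots of $mA_{1}$, are exactly $\pm\varepsilon_{1},\dots,\pm\varepsilon_{m}$.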
\begin{description}\label{correspondences to classical Siegel,Hermitian and Quaternary varieties}
\item[Case $m=1$.] In this case $L_{2}(A_{1})$ has signature $(2,3)$ and the group
\[\Gamma=\operatorname{O}(L_{2}(A_{1}))^{+}\cap\operatorname{SO}(V)^{+}\]
is isomorphic to the projective symplectic group $\operatorname{PSp}(2,\mathbb{Z})$, compare \cite[Proposition 1.2]{GH}. The variety $\Gamma\backslash\mathcal{D}$ has been studied by Igusa in \cite{Ig}. He showed that the graded ring of Siegel modular forms $\mathcal{A}(\Gamma')$ of genus two is generated by the Siegel Eisenstein series $E_{4},E_{6}$ and cusp forms $\chi_{5},\psi_{12}$ and $\chi_{30}$.
For any modular form $F\in \mathcal{M}_{k}(\operatorname{O}(L_{2}(A_{1}))^{+},\det^{\kappa})$ where $\kappa,k\in\mathbb{N}_{0}$ the modularity conditions yield
\[(-1)^{\kappa}F(\mathcal{Z})= F((-I_{5})\mathcal{Z})=F(-\mathcal{Z})=(-1)^{k}F(\mathcal{Z})\,.\]
Hence the determinant-character corresponds to the weight parity in the symplectic setting. According to Igusa's result we have
\begin{equation}\label{Igusas result}
\bigoplus_{k\in\mathbb{Z}}{\mathcal{M}_{k}(\operatorname{O}(L_{2}(A_{1}))^{+},1)}\cong\mathbb{C}[E_{4},E_{6},\chi_{5}^{2}, \psi_{12}]\,.
\end{equation}
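By Igusa's results the four generators in (\ref{Igusas result}) are algebraically independent, so the dimensions of the graded pieces are encoded in the Hilbert series
\[\sum_{k\geq 0}{\dim \mathcal{M}_{k}(\operatorname{O}(L_{2}(A_{1}))^{+},1)\,t^{k}}=\frac{1}{(1-t^{4})(1-t^{6})(1-t^{10})(1-t^{12})}\,.\]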
\item[Case $m=2$.] We consider the Gaussian number field $K=\mathbb{Q}(\sqrt{-1})$ whose ring of integers equals $\mathfrak{o}_{K}=\mathbb{Z}+\mathbb{Z}\sqrt{-1}$. The \textit{special unitary} group $\operatorname{SU}(\mathfrak{o}_{K})\subseteq \operatorname{SL}(4,\mathfrak{o}_{K})$ acts on the Hermitian half-plane of degree two. This can be used to show that $\widetilde{\operatorname{SO}}(L_{2}(2A_{1}))^{+}/\{\pm I_{6}\}$ is isomorphic to $\operatorname{SU}(\mathfrak{o}_{K})/\{\pm I_{4}\}$, compare \cite[Remark 3.3.4]{Wo}. This case has been investigated by Freitag in \cite{F2} and later by Dern and Krieg in \cite{DK}.
\item[Case $m=4$.] Let $Q$ be the rational quaternion algebra of signature $(-1,-1)$. As a vector space over $\mathbb{Q}$ we have
\[Q=\mathbb{Q}+\mathbb{Q} i+\mathbb{Q} j+ \mathbb{Q} ij\,,\,i^{2}=j^{2}=-1\,.\]
A maximal order in $Q$ is given by $\mathfrak{o}=\mathbb{Z}+\mathbb{Z} i+\mathbb{Z} j+\mathbb{Z} \omega$ where $\omega=\frac{1}{2}(1+i+j+ij)$. The order $\mathfrak{o}_{0}=\mathbb{Z}+\mathbb{Z} i +\mathbb{Z} j +\mathbb{Z} ij$ is a sublattice of index $2$ and is isomorphic to $4A_{1}$. This lattice is also known as the ring of \textit{Lipschitz quaternions} and $\mathfrak{o}$ is the ring of \textit{Hurwitz quaternions}. The corresponding modular group is $\operatorname{Sp}(2,\mathfrak{o}_{0})$ and can be identified with a subgroup of $\operatorname{O}(L_{2}(4A_{1}))/\{\pm I_{8}\}$. The rings of quaternionic modular forms have been investigated by Freitag and Krieg in \cite{FH}, \cite{K2} and \cite{K4}.
\end{description}
In \cite{G1} Gritsenko found three towers of reflective modular forms. In his construction Igusa's modular form $\chi_{5}$ is the roof of the $4A_{1}$-tower. In the sequel we will develop a framework around Gritsenko's tower without making use of the exceptional isogenies. We will use three different types of coordinates.
\begin{enumerate}[(i)]
\item The so called \textit{Eisenstein type} modular forms constitute the first type. These forms are pullbacks of Gritsenko's singular modular form for the even unimodular lattice of signature $(2,10)$. If we additionally take into account the heat operator for several variables considered in \cite{ChoKi} we obtain non-cusp forms of weight $4$ and $6$. The common source function for all these forms is the classical Eisenstein series of weight $4$ for the group $\sl$.
\item The second family of modular forms arises as a natural extension of the $4A_{1}$-tower of reflective modular forms. These forms are called \textit{theta type} modular forms and are investigated in \cite{Wo1}. The source function of this tower is $\Delta_{12}$, the first cusp form for the group $\sl$.
\item The third family is of \textit{baby monster type} (bm type) and arises as a quasi-pullback of Borcherds' famous $\Phi_{12}$-function, which is the denominator function of the fake monster Lie algebra, compare \cite{Bo2} and \cite{GHS2}. In \cite{Gr} an algorithm is presented to produce many reflective modular forms of baby monster type. We can again consider $\Delta_{12}$ as the common source function of the bm type modular forms.
\end{enumerate}
Besides the determinant-character the group $\operatorname{O}(L_{2}(mA_{1}))^{+}$ admits two more finite characters. The discriminant group $D(mA_{1})$ is isomorphic to $m$ copies of the cyclic group of order two. The quadratic form on $L_{2}$ induces the \textit{discriminant form} on $D(L_{2})$ and we obtain the finite orthogonal group $\operatorname{O}(D(L_{2}))$ as the image of the natural homomorphism
\[ \pi\,:\,\operatorname{O}(L_{2})^{+}\to\operatorname{O}(D(L_{2}))\,.\]
The kernel of this homomorphism is the stable orthogonal group $\widetilde{\operatorname{O}}(L_{2})^{+}$. In our case $\operatorname{O}(D(L_{2}(mA_{1})))\cong \operatorname{O}(D(mA_{1}))$ is isomorphic to the symmetric group on $m$ letters $\mathcal{S}_{m}$ and $\pi$ is surjective. This yields a binary character
\[v_{\pi}\,:\,\operatorname{O}(L_{2}(mA_{1}))^{+}\twoheadrightarrow \mathcal{S}_{m}\xrightarrow[]{\operatorname{sgn}}\{\pm 1\}\,.\]
The construction of $mA_{1}$ implies that $(x,y)_{mA_{1}}\in 2\mathbb{Z}$ for all $x,y\in mA_{1}$. In this case we can construct another binary character, see e.g. \cite[Proposition 1.26]{Kl} and \cite[Theorem 2.2]{CG}:
\[v_{2}\,:\,\operatorname{O}(L_{2}(mA_{1}))^{+}\to\operatorname{Sp}(2,\mathbb{F}_{2})\to\mathcal{S}_{6}\xrightarrow[]{\operatorname{sgn}}\{\pm 1\}\,.\]
The construction of $v_{2}$ implies
\[\ker v_{2}\,\cap\, \widetilde{\operatorname{SO}}(L_{2}(mA_{1}))^{+}\lneq \widetilde{\operatorname{SO}}(L_{2}(mA_{1}))^{+}\quad,\quad \operatorname{O}(mA_{1})\leq \ker v_{2}\,.\]
We set for abbreviation $\Gamma_{m}:=\operatorname{O}(L_{2}(mA_{1}))^{+}$ and $\widetilde{\Gamma}_{m}:=\widetilde{\operatorname{O}}(L_{2}(mA_{1}))^{+}$. In \cite[Proposition 5.4.2]{Wo} it is shown that $\Gamma_{m}/\Gamma_{m}'\cong\langle \det, v_{2},v_{\pi}\rangle\cong \mathcal{C}_{2}^{3}$ if $m=2,3,4$ and $\Gamma_{1}/\Gamma_{1}'\cong\langle \det, v_{2}\rangle.$ The paper is organized in the following way.
\vspace{5mm}
\noindent In section 2 we introduce Jacobi forms of theta type. These forms are obtained by twisting powers of Jacobi's theta function of weight and index $1/2$ with the weak Jacobi form of weight 0 and index 1 defined in \cite{EZ} and a multiplication with suitable powers of Dedekind's eta function. Moreover Jacobi forms of Eisenstein type are introduced. The arithmetic lifting of these functions yields modular forms for the orthogonal group with trivial character. In section 3 we consider two refinements of theta type Jacobi forms which yield two more series of modular forms with respect to binary characters. The first one uses a variant of the arithmetic lifting for Jacobi forms of half-integral index given in \cite{CG}. The second series is obtained by considering a cusp form of weight $24$ for the lattice $D_{4}$. We rewrite the coordinates of this function for the sublattice $4A_{1}$ and obtain a series of length three by considering quasi-pullbacks. Finally the quasi-pullbacks of Borcherd's function $\Phi_{12}$ produce another series of modular forms including Igusa's function $\chi_{30}$. This enables us to state our main theorem.
\begin{theorem}\label{theorem:sructure graded rings with respect to character A1}
Let $m\in \{1,2,3,4\}$. The graded ring of modular forms $\mathcal{A}(\Gamma_{m}')$ is generated by the $m$-th row of the following table
\begin{center}
\begin{tikzpicture}[scale=0.3,node distance=2cm,auto]
\node (12+A1) at (0cm,-2.5cm) {$\textcolor{black}{F_{12}^{A_{1}}}$};
\node (12+2A1) at (0cm,-6cm) {$\textcolor{black}{F_{12}^{2A_{1}}}$};
\node (12+3A1) at (0cm,-9.5cm) {$\textcolor{black}{F_{12}^{3A_{1}}}$};
\node (12+4A1) at (0cm,-13cm) {$F_{12}^{4A_{1}}$};
\node[] (10+A1) at (4cm,-2.5cm) {$\textcolor{black}{\chi_{5}^{A_{1}}}$};
\node[] (10+2A1) at (4cm,-6cm) {$\textcolor{black}{F_{10}^{2A_{1}}}$};
\node[] (10+3A1) at (4cm,-9.5cm) {$\textcolor{black}{F_{10}^{3A_{1}}}$};
\node[] (10+4A1) at (4cm,-13cm) {$F_{10}^{4A_{1}}$};
\node[] (8+2A1) at (8cm,-6cm) {$\textcolor{black}{\chi_{4}^{2A_{1}}}$};
\node[] (8+3A1) at (8cm,-9.5cm) {$\textcolor{black}{F_{8}^{3A_{1}}}$};
\node[] (8+4A1) at (8cm,-13 cm) {$F_{8}^{4A_{1}}$};
\node[] (6+3A1) at (12cm,-9.5cm) {$\textcolor{black}{\chi_{3}^{3A_{1}}}$};
\node[] (6+4A1) at (12cm,-13cm) {$F_{6}^{4A_{1}}$};
\node[] (4+4A1) at (16cm,-13cm) {$\chi_{2}^{4A_{1}}$};
\node[] (e4+4A1) at (-9cm,-13cm) {$\mathcal{E}_{4}^{4A_{1}}$};
\node[] (e4+3A1) at (-9cm,-9.5cm) {$\mathcal{E}_{4}^{3A_{1}}$};
\node[] (e4+2A1) at (-9cm,-6cm) {$\mathcal{E}_{4}^{2A_{1}}$};
\node[] (e4+A1) at (-9cm,-2.5cm) {$\mathcal{E}_{4}^{A_{1}}$};
\node[] (e6+4A1) at (-4cm,-13cm) {$\mathcal{E}_{6}^{4A_{1}}$};
\node[] (e6+3A1) at (-4cm,-9.5cm) {$\mathcal{E}_{6}^{3A_{1}}$};
\node[] (e6+2A1) at (-4cm,-6cm) {$\mathcal{E}_{6}^{2A_{1}}$};
\node[] (e6+A1) at (-4cm,-2.5cm) {$\mathcal{E}_{6}^{A_{1}}$};
\node[] (F30+A1) at (-16cm,-2.5cm) {$H_{30}^{A_{1}}$};
\node[] (F30+2A1) at (-16cm,-6cm) {$H_{30}^{2A_{1}}$};
\node[] (F30+3A1) at (-16cm,-9.5cm) {$H_{30}^{3A_{1}}$};
\node[] (F30+4A1) at (-16cm,-13cm) {$H_{30}^{4A_{1}}$};
\node[] (Delta10+2A1) at (12cm,-6cm) {$\Delta_{10}^{2A_{1}}$};
\node[] (Delta18+3A1) at (16cm,-9.5cm) {$\Delta_{18}^{3A_{1}}$};
\node[] (Delta24+4A1) at (20cm,-13cm) {$\Delta_{24}^{4A_{1}}$};
\draw[] (-2cm,0cm) -- (-2cm, -15cm);
\draw[] (-12cm,0cm) -- (-12cm, -15cm);
\draw[] (-19cm,0cm) -- (-19cm, -15cm);
\draw[] (-21cm,-1.5cm) -- (20cm, -1.5cm);
\node[] at (-20cm,-0.5cm) {$m$};
\node[] at (-16cm,-0.5cm) {\textnormal{bm type}};
\node[] at (-7cm,-0.5cm) {\textnormal{Eisenstein type}};
\node[] at (2cm,-0.5cm) {\textnormal{theta type}};
\node[] at (-20cm,-2.5cm) {$1$};
\node[] at (-20cm,-6.0cm) {$2$};
\node[] at (-20cm,-9.5cm) {$3$};
\node[] at (-20cm,-13.0cm) {$4$};
\end{tikzpicture}
\end{center}
where the index indicates the weight.
\end{theorem}
The pullback structure underlying the above table is explained in sections 2 and 3.
The generators in the cases $m=1,2,3$ have been determined before by Igusa, Freitag, Dern, Krieg and Klöcker, whereas generators in the case $m=4$ have, to the best of the author's knowledge, only been determined for the ring $\mathcal{A}(\Gamma_{4})$ in \cite{K4}. Finally, in section 4 we give a number-theoretic application. The construction of Eisenstein type modular forms allows us to express some numbers of lattice points lying on a sphere as special values of $L$-functions which appear in \cite{BrK}.
\vspace{5mm}
\noindent\textbf{Acknowledgements} The results of this paper are part of the author's PhD thesis. The author would like to thank his supervisors Valery Gritsenko and Aloys Krieg for their guidance and support.
\section{Jacobi forms}
Let $L$ be a positive definite even lattice. The Jacobi group $\Gamma^{J}(L)$ is considered in \cite{CG} and is isomorphic to $\sl\ltimes H(L)$ where $H(L)$ denotes the integral Heisenberg group for the lattice $L$. We denote by $\mathbb{H}$ the upper half-plane in $\mathbb{C}$. Following \cite{EZ} and \cite{G} we define an action of the Jacobi group on the space of holomorphic functions defined on $\mathbb{H}\times (L\otimes \mathbb{C})$. By considering the generators of the Jacobi group we can use this action to introduce the notion of a Jacobi form.
\begin{definition}\label{Jacobi form}
Let $k,t\in\mathbb{N}_{0}$. A holomorphic function $\varphi\,:\,\mathbb{H}\times(L\otimes \mathbb{C})\to\mathbb{C}$ is called a \textit{weak Jacobi form} of weight $k$ and index $t$ with character $\chi$ if the following conditions are satisfied:
\begin{enumerate}[(i)]
\item For all $A=\begin{pmatrix} a&b\\c&d \end{pmatrix}\in\sl$:
\[\begin{aligned}
&\varphi\left(\frac{a\tau+b}{c\tau+d},\frac{\mathfrak{z}}{c\tau +d}\right)=\chi(A)(c\tau +d)^{k}e^{\pi i t\frac{c(\mathfrak{z},\mathfrak{z})}{c\tau+d}}\varphi(\tau,\mathfrak{z})\,.&\end{aligned}\]
\item For all $x,y\in L$:
\[\begin{aligned}&\varphi(\tau,\mathfrak{z}+x\tau+y)&&=\chi([x,y:-(x,y)/2])\cdot e^{-2\pi it(\frac{1}{2}(x,x)\tau+(x,\mathfrak{z}))}\,\varphi(\tau,\mathfrak{z})&
\end{aligned}\]
where $[x,y:-(x,y)/2]\in H(L)$, compare \cite{CG} for the realization of $H(L)$.
\item The Fourier expansion of $\varphi$ has the shape
\[ \varphi(\tau,\mathfrak{z})=\sum_{\substack{n\in\mathbb{N}_{0}\\l\in\frac{1}{2}L^{\vee}}}{f(n,l)e^{2\pi i (n\tau+(l,\mathfrak{z}))}}\,.\]
\end{enumerate}
We call $\varphi$ a \textit{holomorphic Jacobi form} if the Fourier expansion ranges over all $n,l$ such that $2nt-(l,l)\geq 0$ and $\varphi$ is called a \textit{Jacobi cusp form} if it ranges over all $n,l$ satisfying $2nt-(l,l)> 0$.
\end{definition}
\begin{remdef}
\begin{enumerate}[(a)]
\item The action can be extended for $k,t\in\frac{1}{2}\mathbb{Z}_{\geq 0}$ and $\chi|_{\sl}$ being a multiplier system for $\sl$. Here we have to replace $\sl$ by the metaplectic cover $\operatorname{Mp}(2,\mathbb{Z})$, see e.g. \cite{Br}. In this more general situation we use the notation
\[J^{(\textit{cusp})}_{k,L;t}(\chi)\subseteq J_{k,L;t}(\chi)\subseteq J^{(\textit{weak})}_{k,L;t}(\chi)\]
for the corresponding spaces of Jacobi forms. If $\chi=1$ we write $J^{(\ast)}_{k,L;t}$ for each of these spaces.
\item The notion of a Jacobi form is compatible with Definition \ref{def:modular form}. To see this we note that we have an affine model for the homogeneous domain $\mathcal{D}$ given by
\[\mathcal{H}(L_{2})=
\left\{(\omega,\mathfrak{z},\tau)\in \mathbb{C}\times (L\otimes \mathbb{C})\times \mathbb{C}\,\left|\begin{aligned}
&\omega_{i},\tau_{i}>0\,,&\\&2\omega_{i}\tau_{i}-(\mathfrak{z}_{i},\mathfrak{z}_{i})>0
\end{aligned}\right.\right\}
\]
where we have used the abbreviations
\[\omega_{i}:=\operatorname{Im}(\omega)\quad,\quad \tau_{i}:=\operatorname{Im}(\tau)\quad,\quad \mathfrak{z}_{i}:=\operatorname{Im}(\mathfrak{z})\,.\]
Let $\varphi\in J_{k,L;t}(\chi)$ where we assume $k\in\mathbb{Z}$. We define a holomorphic function on $\mathcal{H}(L_{2})$ by
\[\widetilde{\varphi}(\omega,\mathfrak{z},\tau)=\varphi(\tau,\mathfrak{z})e^{2\pi i t\omega}\,.\]
Since $\mathcal{D}$ and $\mathcal{H}(L_{2})$ are biholomorphically equivalent we can interpret $\widetilde{\varphi}$ as an element in $\mathcal{M}_{k}(\Gamma^{J}(L),\chi)$.
\end{enumerate}
\end{remdef}
The following two examples are the basic ingredients to define theta type Jacobi forms.
\begin{ex}\label{eta function and Gritsenkos theta function}
\begin{enumerate}[(a)]
\item Dedekind's eta function is a Jacobi form of weight $1/2$ and index $0$ for every positive definite even lattice $L$, thus $\eta\in J_{1/2,L;0}^{\textit{(cusp)}}(v_{\eta})$ where $v_{\eta}$ is a multiplier system for $\sl$.
\item The Jacobi theta series of characteristic $(\frac{1}{2},\frac{1}{2})$ is given as
\[\vartheta(\tau,z)\index{$\vartheta(\tau,z)$}=\sum_{n\in\mathbb{Z}}{\left( \frac{-4}{n}\right)q^{\frac{n^{2}}{8}}r^{\frac{n}{2}}}=\sum_{n\in\mathbb{Z}\,,\,n \equiv 1\bmod 2}{(-1)^{\frac{n-1}{2}}\exp\left(\frac{\pi i n^{2}\tau}{4}+\pi i n z\right)}\]
where $q=e^{2\pi i \tau}\,,\,\tau\in\mathbb{H}$ and $r=e^{2\pi i z}\,,\,z\in\mathbb{C}$.
This function was originally discovered by Carl Gustav Jacob Jacobi. In \cite{GN} the authors reinterpreted this function as a modular form of half-integral weight and index.
Jacobi's triple identity yields
\[\vartheta(\tau,z)=-q^{1/8}r^{-1/2}\prod_{n\geq 1}{(1-q^{n-1}r)(1-q^{n}r^{-1})(1-q^{n})}\,.\]
The function has the properties
\[\begin{aligned}
&\vartheta(\tau,-z)&&=&&-\vartheta(\tau,z)\,,&\\&\vartheta(\tau,z+x\tau+y)&&=&&(-1)^{x+y}\exp(-\pi i(x^{2}\tau+2xz))\,\vartheta(\tau,z)&
\end{aligned}\]
for all $x,y\in\mathbb{Z}$
and the set of zeroes of $\vartheta$ equals
\[\{x\tau+y\,|\,x,y\in\mathbb{Z}\}\,.\]
\end{enumerate}
\end{ex}
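The zero set can be read off from the triple product: for $\tau\in\mathbb{H}$ the factors $1-q^{n}$ never vanish, while
\[1-q^{n-1}r=0\iff z\in -(n-1)\tau+\mathbb{Z}\quad,\quad 1-q^{n}r^{-1}=0\iff z\in n\tau+\mathbb{Z}\,.\]
As $n$ runs through $\mathbb{Z}_{\geq 1}$ these two families produce exactly the points $x\tau+y$ with $x,y\in\mathbb{Z}$, each as a simple zero.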
In the sequel let $M$ be a positive definite even lattice and $L\leq M$ be a sublattice of $M$. We define
\[L_{M}^{\perp}:=\{m\in M\,|\, \forall l\in L\,:\, (l,m)=0\}\]
and note that this is again a positive definite sublattice in $M$. The direct sum \[L\oplus L_{M}^{\perp}\leq M\] is a sublattice of finite index.
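As an example consider $M=D_{4}$ and the sublattice $L\cong 4A_{1}$ spanned by the four pairwise orthogonal roots $\varepsilon_{1}\pm\varepsilon_{2},\varepsilon_{3}\pm\varepsilon_{4}$. Here $L_{M}^{\perp}=\{0\}$ and comparing discriminants gives
\[[D_{4}:4A_{1}]^{2}=\frac{\det(4A_{1})}{\det(D_{4})}=\frac{16}{4}=4\,,\]
so the index equals $2$. This embedding reappears later in the construction of $\Delta_{24}^{4A_{1}}$.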
The next Lemma can be found in \cite[Proposition 3.1]{CG}.
\begin{lemma}
Let $L\leq M$ be a sublattice such that $\operatorname{rank} L < \operatorname{rank} M$ and let $\varphi\in J_{k,M;t}(\chi)$ be a Jacobi form of weight $k$ and index $t$ for the character $\chi$. Consider the decomposition $\mathfrak{z}_{M}=\mathfrak{z}_{L}\oplus\mathfrak{z}_{L^{\perp}}\in M\otimes\mathbb{C}=(L\oplus L_{M}^{\perp})\otimes \mathbb{C}$. We define the pullback of $\varphi$ to $L$ as the function $\varphi\downharpoonright_{L}$ on $\mathbb{H}\times
(L\otimes\mathbb{C})$
\[\varphi\downharpoonright_{L}(\tau,\mathfrak{z}_{L}):=\varphi(\tau,\mathfrak{z}_{L}\oplus 0)\,.\]
Then $\varphi\downharpoonright_{L}\in J_{k,L;t}(\left.\chi\right|_{\Gamma^{J}(L)})$ and the pullback maps cusp forms to cusp forms.
\end{lemma}
\begin{definition}\label{def:pullbak notation}
Let $L\leq M$ be a sublattice of $M$ such that $\operatorname{rank} L < \operatorname{rank} M$ and $\varphi\in J_{k,M;t}^{\textit{(weak)}}$ be a Jacobi form of weight $k$ and index $t$. Let $\psi\in J_{k,L;t}^{\textit{(weak)}}$. We say that $\psi$ is a \textit{pullback of} $\varphi$ if there exists some $\alpha\in \mathbb{C}^{\times}$ such that
\(\psi=\alpha \cdot\varphi\downharpoonright_{L}\,.\)
In this case we use the notation $\varphi\to \psi$\,. We set $\varphi\downharpoonright_{L}:=\varphi$ if $\operatorname{rank} L =\operatorname{rank} M$.
\end{definition}
We define
\[\vartheta_{L}\,:\,\mathbb{H}\times(L\otimes\mathbb{C})\to\mathbb{C}\quad,\quad \vartheta_{L}(\tau,\mathfrak{z}):=\prod_{j=1}^{m}{\vartheta(\tau,(\mathfrak{z},\varepsilon_{j}))}\,.\]
This leads us to the notion of theta type Jacobi forms.
\begin{definition}\label{def:theta type JF}
Let $L\subseteq \mathbb{R}^{m}$ be a positive definite even lattice and $\varphi\in J_{k,L;t}$. We say that $\varphi$ is of \textit{theta type} if there exists a sublattice $L'\subseteq L$, $\alpha\in\mathbb{C}^{\times}$ and integers $a,b\in\mathbb{Z}_{\geq 0}$ such that
\[\varphi\downharpoonright_{L'} (\tau,\mathfrak{z}')=\alpha\cdot\eta(\tau)^{a}\,\vartheta_{L'}(\tau,\mathfrak{z}')^{b}\,.\]
Note that $\Delta_{12}(\tau)=\eta(\tau)^{24}$ is of theta type because $J_{k,L;0}$ is isomorphic to the space of weight $k$ modular forms for the group $\sl$.
\end{definition}
For $\mathfrak{z}\in (mA_{1})\otimes\mathbb{C}$ we write
\begin{equation}\label{choice of coordinates for mA1}
\mathfrak{z}=\sum_{j=1}^{m}{z_{j}\varepsilon_{j}}=:(z_{1},\dots,z_{m})\quad,\quad z_{j}\in\mathbb{C}\,.
\end{equation}
For the next Proposition we note that
\[\vartheta_{4A_{1}}(\tau,\mathfrak{z})=\vartheta(\tau,z_{1})\vartheta(\tau,z_{2})\vartheta(\tau,z_{3})\vartheta(\tau,z_{4})\in J_{2,4A_{1};1/2}(\chi_{2})\]
has already been constructed in \cite{G1} where $\chi_{2}$ is a binary character. In the next Proposition the square of this function is denoted by $\psi_{4,4A_{1}}$. In \cite[Proposition 3.7]{Wo1} the author constructs a tower of theta type Jacobi forms for the lattice $4A_{1}$.
\begin{prop}\label{prop:tower for 4A1}
There exists the following diagram of theta type Jacobi forms for the $A_{1}$-tower
\begin{center}
\begin{tikzpicture}[scale=0.5,node distance=2cm,auto]
\node[] (delta) at (0cm,0cm) {$\textcolor{black}{\Delta_{12}}$};
\node (12+A1) at (0cm,-2.5cm) {$\textcolor{black}{\varphi_{12,A_{1}}}$};
\node (12+2A1) at (0cm,-5cm) {$\textcolor{black}{\varphi_{12,2A_{1}}}$};
\node (12+3A1) at (0cm,-7.5cm) {$\textcolor{black}{\varphi_{12,3A_{1}}}$};
\node (12+4A1) at (0cm,-10cm) {$\varphi_{12,4A_{1}}$};
\node[] (10+A1) at (5cm,-2.5cm) {$\textcolor{black}{\psi_{10,A_{1}}}$};
\node[] (10+2A1) at (5cm,-5cm) {$\textcolor{black}{\varphi_{10,2A_{1}}}$};
\node[] (10+3A1) at (5cm,-7.5cm) {$\textcolor{black}{\varphi_{10,3A_{1}}}$};
\node[] (10+4A1) at (5cm,-10cm) {$\varphi_{10,4A_{1}}$};
\node[] (8+2A1) at (10cm,-5cm) {$\textcolor{black}{\psi_{8,2A_{1}}}$};
\node[] (8+3A1) at (10cm,-7.5cm) {$\textcolor{black}{\varphi_{8,3A_{1}}}$};
\node[] (8+4A1) at (10cm,-10cm) {$\varphi_{8,4A_{1}}$};
\node[] (6+3A1) at (15cm,-7.5cm) {$\textcolor{black}{\psi_{6,3A_{1}}}$};
\node[] (6+4A1) at (15cm,-10cm) {$\varphi_{6,4A_{1}}$};
\node[] (4+4A1) at (20cm,-10cm) {$\psi_{4,4A_{1}}$};
\node[] () at (19.0cm,-8.00cm) {$\left.\frac{\partial^{2}}{\partial z_{4}^{2}}\right|_{z_{4}=0}$};
\node[] () at (14.0cm,-5.50cm) {$\left.\frac{\partial^{2}}{\partial z_{3}^{2}}\right|_{z_{3}=0}$};
\node[] () at (9.0cm,-3.00cm) {$\left.\frac{\partial^{2}}{\partial z_{2}^{2}}\right|_{z_{2}=0}$};
\node[] () at (4.0cm,-0.50cm) {$\left.\frac{\partial^{2}}{\partial z_{1}^{2}}\right|_{z_{1}=0}$};
\draw[<-](delta) to node {} (12+A1);
\draw[<-](12+A1) to node {} (12+2A1);
\draw[<-](12+2A1) to node {} (12+3A1);
\draw[<-](12+3A1) to node {} (12+4A1);
\draw[<-,dashed](delta) to node {} (10+A1);
\draw[<-](10+A1) to node {} (10+2A1);
\draw[<-](10+2A1) to node {} (10+3A1);
\draw[<-](10+3A1) to node {} (10+4A1);
\draw[<-,dashed](10+A1) to node {} (8+2A1);
\draw[<-](8+2A1) to node {} (8+3A1);
\draw[<-](8+3A1) to node {} (8+4A1);
\draw[<-,dashed](8+2A1) to node {} (6+3A1);
\draw[<-,dashed](6+3A1) to node {} (4+4A1);
\draw[<-](6+3A1) to node {} (6+4A1);
\end{tikzpicture}
\end{center}
where $\psi_{k,mA_{1}},\varphi_{k,mA_{1}}\in J_{k,mA_{1};1}$. Except for the last line all forms are cusp forms.
\end{prop}
We recall that there exists a unique (up to isomorphism) positive definite even lattice $E_{8}$ in dimension $8$ which is unimodular.
Following \cite{G} we can attach a Jacobi theta series to $E_{8}$:
\[\Theta_{E_{8}}(\tau,\mathfrak{z})=\sum_{l\in E_{8}}{\exp(\pi i (l,l)\tau+2\pi i(l,\mathfrak{z}))}\,.\]
Then $\Theta_{E_{8}}\in J_{4,E_{8};1}$ is a singular Jacobi form for $E_{8}$. We fix a chain of embeddings
\[A_{1}\hookrightarrow 2A_{1}\hookrightarrow 3A_{1}\hookrightarrow 4A_{1}\hookrightarrow E_{8}\,.\]
We will investigate the pullbacks
\[\epsilon_{4,mA_{1}}\index{$\epsilon_{4,mA_{1}}$}=\Theta_{E_{8}}\downharpoonright_{mA_{1}}\in J_{4,mA_{1};1}\,.\]
\begin{prop}\label{prop:modularity of the E8 pullbacks for A1 tower}
Let $\sigma\in \operatorname{O}(mA_{1})$ for $m\in\{1,2,3, 4\}$. Then $\sigma$ can be extended to $\operatorname{O}(E_{8})$ and for any sublattice $L\leq E_{8}$ where $L\cong mA_{1}$ there exists some $g\in\operatorname{O}(E_{8})$ such that $g.L=mA_{1}$. Moreover $\epsilon_{4,mA_{1}}$ is invariant under the transformation induced by $\sigma$ such that for all $\tau,\mathfrak{z}$ one has \[\epsilon_{4,mA_{1}}(\tau,\sigma.\mathfrak{z})=\epsilon_{4,mA_{1}}(\tau,\mathfrak{z})\,.\]
\end{prop}
\begin{proof}
We denote by $K_{m}:=(mA_{1})_{E_{8}}^{\perp}$ the orthogonal complement of $mA_{1}$ in $E_{8}$. For the proof of the statement we will have to investigate the discriminant form
\[q: D(L)\to\mathbb{Q}/2\mathbb{Z}\quad,\quad x +L \mapsto (x,x) +2\mathbb{Z}\,.\]
In the following list one can find a root system which is isomorphic to $K_{m}$
\begin{center}
\begin{tabular}{l|cccc}
$m$ & 1 & 2 & 3 & 4\\\hline
$K_{m}$ & $E_{7}$ & $D_{6}$ & $A_{1}\oplus D_{4}$ & $4A_{1}$
\end{tabular}\,.
\end{center}
From \cite[Proposition 1.6.1]{N} we know that $\sigma$ can be extended to $\operatorname{O}(E_{8})$
\begin{equation}\label{Nikulins condition}
\text{if the natural homomorphism }\operatorname{O}(K_{m})\to \operatorname{O}(D(K_{m})) \text{ is surjective.}
\end{equation}
According to \cite[Chapter 4, Section 8.2]{CS} we have $\operatorname{O}(D(E_{7}))=\{\operatorname{id}\}$ because $E_{7}^{\vee}=E_{7}\cup (v+E_{7})$ where $q(v+E_{7})=\frac{3}{2}+2\mathbb{Z}$
which grants the surjectivity in this case.
The lattice $D_{m},m\geq 3$ can be realized as the $\mathbb{Z}$-module with basis
\[\varepsilon_{2}+\varepsilon_{1},\varepsilon_{2}-\varepsilon_{1},\varepsilon_{3}-\varepsilon_{2},\dots, \varepsilon_{m}-\varepsilon_{m-1}\,.\]
Moreover this lattice can be described as the following subset in $\mathbb{Z}^{m}$:
\[D_{m}=\{x\in\mathbb{Z}^{m}\,|\,x_{1}+\dots+x_{m} \equiv 0 \bmod 2\}\,.\]
We define $w:=\frac{1}{2}(\varepsilon_{1}+\dots+\varepsilon_{m})\in D_{m}^{\vee}$. The values of the discriminant form for the representatives of $D(D_{m})$ are given as follows
\begin{center}
\begin{tabular}{l|llll}
$l$ & $0$ & $\varepsilon_{1}$ & $w$ & $\varepsilon_{1}+w$\\\hline
$q(l)$ & $0+2\mathbb{Z}$ & $1+2\mathbb{Z}$ & $\frac{m}{4}+2\mathbb{Z}$ & $\frac{m}{4}+2\mathbb{Z}$
\end{tabular}
\end{center}
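The values in this table follow directly from the definition of the discriminant form: since the $\varepsilon_{j}$ are pairwise orthogonal of norm $1$ we have
\[q(w)=(w,w)+2\mathbb{Z}=\frac{m}{4}+2\mathbb{Z}\quad,\quad q(\varepsilon_{1}+w)=1+2(\varepsilon_{1},w)+(w,w)+2\mathbb{Z}=\frac{m}{4}+2\mathbb{Z}\]
because $2(\varepsilon_{1},w)=1$ and $1+1\in 2\mathbb{Z}$.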
If $m\not\equiv 4\bmod 8$ one has $\operatorname{O}(D(D_{m}))\cong \mathcal{C}_{2}$ and this group is generated by the permutation of the classes represented by $w$ and $\varepsilon_{1}+w$. This element is induced by $\sigma_{\varepsilon_{1}}\in\operatorname{O}(D_{m})$, the reflection at the hyperplane perpendicular to $\varepsilon_{1}$. Moreover $\operatorname{O}(D(D_{4}))\cong \mathcal{S}_{3}$. In this case the group is generated by the permutation of the classes $w$ and $\varepsilon_{1}+w$ and the permutation of $\varepsilon_{1}$ and $\varepsilon_{1}+w$. The latter element is induced by the reflection $\sigma_{w}\in\operatorname{O}(D_{4})$. Finally we note that the natural homomorphism \[\operatorname{O}(mA_{1})\to\operatorname{O}(D(mA_{1}))\]
is surjective. Summarizing these considerations we see that the assumption (\ref{Nikulins condition}) is satisfied for each $m=1,\dots,4$. This proves the first part and the invariance property of the pullbacks as a direct consequence.
\end{proof}
In the next step we construct a differential operator. This operator is well-known and a treatment can be found in \cite{ChoKi} for the general case or in \cite{EZ} for classical Jacobi forms.
The heat operator is given as
\[H=4\pi i \det (S) \frac{\partial}{\partial \tau}-\det (S)\,S^{-1}\left[\frac{\partial}{\partial \mathfrak{z}}\right]\,.\]
We recall the definition of the quasi-modular Eisenstein series of weight 2
\begin{equation}\label{G2 Fourier expansion}
G_{2}(\tau)=-\frac{1}{24}+\sum_{n\geq 1}{\sigma_{1}(n)e^{2\pi in\tau }}\quad,\quad \sigma_{k}(n)=\sum_{d\mid n}{d^{k}}
\end{equation}
which transforms under $\sl$ as
\[G_{2}\left(\frac{a\tau+b}{c\tau+d}\right)=(c\tau+d)^{2}G_{2}(\tau)-\frac{c(c\tau+d)}{4\pi i}\,.\]
We denote by $G_{2}\bullet$ the operator which multiplies a function by $G_{2}$. By virtue of the transformation property of $G_{2}$ we obtain a quasi-modular operator. We fix the notation
\[r^{l}:=\exp(2\pi i(l,\mathfrak{z}))\quad,\quad l\in L^{\vee},\mathfrak{z}\in L\otimes\mathbb{C}\,.\]
\begin{lemma}\label{lemma:heat operator with automorohic correction}
For every $k\in\mathbb{N}$ there is a quasi-modular differential operator \newline $H_{k}:\,J_{k,L;1}\to J_{k+2,L;1}$ defined by the formula
\[\index{$H_{k}$}H_{k}=H+(4\pi i)^{2}\det(S)\left(k-\frac{m}{2}\right)G_{2}\bullet\]
where $m=\operatorname{rank}(L)$.
The operator $H$ acts on $q^{n}r^{l}$, $n\in\mathbb{N},l\in L^{\vee}$ by multiplication with \((2\pi i)^{2}\det(S)\,(2n-(l,l))\,.\)
\end{lemma}
\begin{proof}
The first part can be deduced from the considerations in \cite{ChoKi} and the second part is a direct verification.
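Indeed, writing $r^{l}=\exp(2\pi i(l,\mathfrak{z}))$ in coordinates with respect to a basis of $L$ with Gram matrix $S$ one computes
\[4\pi i \det (S)\,\frac{\partial}{\partial \tau}\,q^{n}r^{l}=2\,(2\pi i)^{2}\det(S)\,n\,q^{n}r^{l}\quad,\quad \det (S)\,S^{-1}\left[\frac{\partial}{\partial \mathfrak{z}}\right]q^{n}r^{l}=(2\pi i)^{2}\det(S)\,(l,l)\,q^{n}r^{l}\]
which yields $H(q^{n}r^{l})=(2\pi i)^{2}\det(S)\,(2n-(l,l))\,q^{n}r^{l}$\,.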
\end{proof}
Using this operator we define
\[\index{$\epsilon_{6,mA_{1}}$}\epsilon_{6,mA_{1}}=H_{4}(\epsilon_{4,mA_{1}})\in J_{6,mA_{1};1}\quad,\quad m\in\{1,2,3,4\}\,.\]
These functions inherit the invariance under coordinate permutations from $\epsilon_{4,mA_{1}}$.
\begin{cor}\label{cor:differential operator preserves invariance for NA1}
Let $m\in\{1,2,3,4\}$. For all $\sigma\in \operatorname{O}(mA_{1})$ we have
\[\epsilon_{6,mA_{1}}(\tau,\sigma. \mathfrak{z})=\epsilon_{6,mA_{1}}(\tau,\mathfrak{z})\,.\] In particular $\epsilon_{6,mA_{1}}$ is invariant with respect to the action of $\operatorname{O}(D(mA_{1}))\cong \mathcal{S}_{m}$\,.
\end{cor}
The next Lemma is about cusp forms and follows immediately from Lemma \ref{lemma:heat operator with automorohic correction}.
\begin{lemma}\label{differential operator and cusp forms}
Let $\varphi\in J_{k,L;1}$. Then $\varphi$ is a cusp form if and only if $H_{k}(\varphi)$ is a cusp form.
\end{lemma}
Let $L\leq M$ be a primitive sublattice. We extend the pullback notation of Definition \ref{def:pullbak notation} to modular forms in the canonical way. In \cite{G} the arithmetic lifting of Jacobi forms has been defined. We denote this lifting operator by $\operatorname{A-Lift}(\cdot)$ and define
\[\mathcal{E}_{k}^{mA_{1}}:=\operatorname{A-Lift}(\epsilon_{k,mA_{1}})\in\mathcal{M}_{k}(\widetilde{\Gamma}_{m},1)\,.\]
Moreover the operator $H_{k}$ is extended to the Maa{\ss} space by the following convention:
\[H_{k}(\operatorname{A-Lift}(\varphi)):=\operatorname{A-Lift}(H_{k}(\varphi))\quad,\quad \varphi\in J_{k,L;1}\,.\]
The following Theorem describes all modular forms obtained by the previous considerations.
\begin{theorem}\label{theorem: mod forms A1 tower}
We have the following diagram of modular forms for the $A_{1}$-tower
\begin{center}
\begin{tikzpicture}[scale=0.3,node distance=2cm,auto]
\node (12+A1) at (0cm,-2.5cm) {$\textcolor{black}{F_{12}^{A_{1}}}$};
\node (12+2A1) at (0cm,-6cm) {$\textcolor{black}{F_{12}^{2A_{1}}}$};
\node (12+3A1) at (0cm,-9.5cm) {$\textcolor{black}{F_{12}^{3A_{1}}}$};
\node (12+4A1) at (0cm,-13cm) {$F_{12}^{4A_{1}}$};
\node[] (10+A1) at (4cm,-2.5cm) {$\textcolor{black}{G_{10}^{A_{1}}}$};
\node[] (10+2A1) at (4cm,-6cm) {$\textcolor{black}{F_{10}^{2A_{1}}}$};
\node[] (10+3A1) at (4cm,-9.5cm) {$\textcolor{black}{F_{10}^{3A_{1}}}$};
\node[] (10+4A1) at (4cm,-13cm) {$F_{10}^{4A_{1}}$};
\node[] (8+2A1) at (8cm,-6cm) {$\textcolor{black}{G_{8}^{2A_{1}}}$};
\node[] (8+3A1) at (8cm,-9.5cm) {$\textcolor{black}{F_{8}^{3A_{1}}}$};
\node[] (8+4A1) at (8cm,-13 cm) {$F_{8}^{4A_{1}}$};
\node[] (6+3A1) at (12cm,-9.5cm) {$\textcolor{black}{G_{6}^{3A_{1}}}$};
\node[] (6+4A1) at (12cm,-13cm) {$F_{6}^{4A_{1}}$};
\node[] (4+4A1) at (16cm,-13cm) {$G_{4}^{4A_{1}}$};
\node[] (e4+4A1) at (-9cm,-13cm) {$\mathcal{E}_{4}^{4A_{1}}$};
\node[] (e4+3A1) at (-9cm,-9.5cm) {$\mathcal{E}_{4}^{3A_{1}}$};
\node[] (e4+2A1) at (-9cm,-6cm) {$\mathcal{E}_{4}^{2A_{1}}$};
\node[] (e4+A1) at (-9cm,-2.5cm) {$\mathcal{E}_{4}^{A_{1}}$};
\node[] (e6+4A1) at (-4cm,-13cm) {$\mathcal{E}_{6}^{4A_{1}}$};
\node[] (e6+3A1) at (-4cm,-9.5cm) {$\mathcal{E}_{6}^{3A_{1}}$};
\node[] (e6+2A1) at (-4cm,-6cm) {$\mathcal{E}_{6}^{2A_{1}}$};
\node[] (e6+A1) at (-4cm,-2.5cm) {$\mathcal{E}_{6}^{A_{1}}$};
\draw[<-](12+A1) to node {} (12+2A1);
\draw[<-](12+2A1) to node {} (12+3A1);
\draw[<-](12+3A1) to node {} (12+4A1);
\draw[<-](10+A1) to node {} (10+2A1);
\draw[<-](10+2A1) to node {} (10+3A1);
\draw[<-](10+3A1) to node {} (10+4A1);
\draw[<-](8+2A1) to node {} (8+3A1);
\draw[<-](8+3A1) to node {} (8+4A1);
\draw[<-](6+3A1) to node {} (6+4A1);
\draw[->] (e4+4A1) to node {} (e4+3A1);
\draw[->] (e4+3A1) to node {} (e4+2A1);
\draw[->] (e4+2A1) to node {} (e4+A1);
\draw[] (-2cm,-10.3cm) -- (14cm,-10.3cm);
\draw[] (-2cm,-10.3cm) -- (-2cm,-1.5cm);
\draw[] (14cm,-10.3cm) -- (14cm, -1.5cm);
\draw[] (-2cm,-1.5cm) -- (14cm, -1.5cm);
\draw[->] (e4+A1) to node {\tiny$H_{4}$} (e6+A1);
\draw[->] (e4+2A1) to node {\tiny$H_{4}$} (e6+2A1);
\draw[->] (e4+3A1) to node {\tiny$H_{4}$} (e6+3A1);
\draw[->] (e4+4A1) to node {\tiny$H_{4}$} (e6+4A1);
\end{tikzpicture}
\end{center}
where each of the forms $F_{k}^{mA_{1}},G_{k}^{mA_{1}}$ appearing in the diagram belongs to $\mathcal{M}_{k}(\Gamma_{m},1)$. The forms inside the rectangle are cusp forms.
\end{theorem}
\begin{proof}
We define the $F_{k}^{mA_{1}},G_{k}^{mA_{1}}$ as the arithmetic liftings of the functions in Proposition \ref{prop:tower for 4A1} with the same arrangement for the weights and the lattices. This yields functions which belong to $\mathcal{M}_{k}(\widetilde{\Gamma}_{m},1)$. The arithmetic lifting maps cusp forms to cusp forms if the lattice $mA_{1}$ is a maximal even lattice. This is precisely the case for $m=1,2,3$. Hence the statements on cusp forms follow from Proposition \ref{prop:tower for 4A1} and the preceding construction of $\epsilon_{k,mA_{1}}$. Note that the operator $\operatorname{A-Lift}$ commutes with pullbacks as one immediately extracts from its definition.
The Jacobi forms appearing in Proposition \ref{prop:modularity of the E8 pullbacks for A1 tower} and Corollary \ref{cor:differential operator preserves invariance for NA1} are invariant with respect to the permutations of $z_{1},\dots,z_{m}$. However, the theta type forms listed in Proposition \ref{prop:tower for 4A1} are not, except for the functions $\psi_{12-2m,mA_{1}}$ on the diagonal. Hence we can apply the operator
\begin{equation}\label{symmetrization operator}
J_{k,mA_{1};1}\to J_{k,mA_{1};1}\quad,\quad\varphi\mapsto\frac{1}{m!}\sum_{\sigma\in \mathcal{S}_{m}}{\sigma.\varphi}
\end{equation}
where $(\sigma.\varphi)(\tau,\mathfrak{z}):=\varphi(\tau,\sigma.\mathfrak{z})$ for any $\sigma\in\mathcal{S}_{m}$. After application of (\ref{symmetrization operator}) all the functions appearing in Proposition \ref{prop:tower for 4A1} are symmetric. Hence the maximal modular group of these liftings is $\Gamma_{m}$.
\end{proof}
\section{Rings of Modular Forms}
For any $r\in L_{2}\otimes \mathbb{Q}$ satisfying $(r,r)<0$ we define the rational quadratic divisor as
\[\mathcal{D}_{r}=\{[\mathcal{Z}]\in\mathcal{D}\,|\,(\mathcal{Z},r)=0\}\,.\]
For any $m\in\mathbb{N}$ we fix the notation
\[\mathcal{D}^{m}:=\mathcal{D}(L_{2}(mA_{1}))\,.\]
In particular consider $\varepsilon_{m}\in mA_{1}(-1)\subseteq L_{2}(mA_{1})$. Then one has
\begin{equation}\label{rational quadratc divisors for A1 tower}
\mathcal{D}_{\varepsilon_{m}}^{m}\cong \mathcal{D}^{m-1}\,.
\end{equation}
In \cite[Theorem 5.1]{G1} it was proved that this divisor is attached to $G_{12-2m}^{mA_{1}}$ for $m=1,\dots,4$.
\begin{theorem}\label{Lifting construction of reflective modular forms for A1 tower}
Let $m\in\{1,2,3,4\}$. The divisor of the modular form $G_{12-2m}^{mA_{1}}$ consists of the $\Gamma_{m}$-orbit of
\(\mathcal{D}_{\varepsilon_{m}}^{m}.\)
The vanishing order is two on each irreducible component of $\operatorname{div}(G_{12-2m}^{mA_{1}})$. Moreover there exists a modular form
\[\chi_{6-m}\in\mathcal{M}_{6-m}(\Gamma_{m},\det v_{2})\]
whose square equals $G_{12-2m}^{mA_{1}}$. If $m\neq 4$, then $\chi_{6-m}$ is a cusp form.
\end{theorem}
This yields the structure of the graded ring for the $A_{1}$-tower with trivial character.
\begin{theorem}\label{theorem:graded ring with trivial character}
Let $m\in\{1,2,3,4\}$. The graded ring $\mathcal{A}(\Gamma_{m})$ is a polynomial ring in the $m+3$ functions which are given by the $m$-th row of the diagram in Theorem \ref{theorem: mod forms A1 tower}.
\end{theorem}
\begin{proof}
Since $-I_{m+4}\in\Gamma_{m}$ for all $m$ there are no modular forms of odd weight in $\mathcal{A}(\Gamma_{m})$. For the proof we consider the following reduction process:
\begin{enumerate}[(i)]
\item The starting point is the case $m=1$. Our construction yields the classical result of Igusa
\begin{equation}\label{Igusas result for A1}
\mathcal{A}(\Gamma_{1})=\mathbb{C}[\mathcal{E}_{4}^{A_{1}},\mathcal{E}_{6}^{A_{1}},G_{10}^{A_{1}},F_{12}^{A_{1}}]
\end{equation}
as we have already investigated in (\ref{Igusas result}).
\item We have an embedding $\Gamma_{m}\hookrightarrow \Gamma_{m+1}$ for each $m\in\mathbb{N}$. Hence the restriction map
\[\mathcal{M}_{k}(\Gamma_{m+1},1)\to\mathcal{M}_{k}(\Gamma_{m},1)\,,\,F\mapsto F|_{\mathcal{D}^{m}}\]
is well-defined for any even $k\in\mathbb{N}_{0}$. This map extends to a homomorphism of the graded algebras
\[\operatorname{Res}^{m+1}_{m}\,:\,\mathcal{A}(\Gamma_{m+1})\to \mathcal{A}(\Gamma_{m})\,.\]
We shall show that this map is surjective for $m=1,2,3$.
\item Let $k\in\mathbb{N}_{0}$ and consider $F\in\mathcal{M}_{k}(\Gamma_{m},1)$ with the property \[F|_{\mathcal{D}^{m-1}}\equiv 0\,.\] Let $m\geq 2$ and define
\[M:=\operatorname{diag}(1,1,K,1,1)\,\text{ where }\,K:=\operatorname{diag}(1,\dots,-1)\in\operatorname{GL}(m,\mathbb{Z})\,.\]
Since $M$ belongs to $\Gamma_{m}$ the Taylor expansion of $F$ around $0$ with respect to $z_{m}$ shows that $F$ vanishes of order at least two on $\mathcal{D}^{m-1}$. According to Theorem \ref{Lifting construction of reflective modular forms for A1 tower}
we can divide $F$ by $G_{12-2m}^{mA_{1}}$ and obtain a holomorphic modular form in $\mathcal{M}_{k-12+2m}(\Gamma_{m},1)$
by Koecher's principle for automorphic forms, compare \cite[p. 209]{Ba}. Note that we have used the identification (\ref{rational quadratc divisors for A1 tower}) here.
\end{enumerate}
Now starting from (\ref{Igusas result for A1}) the statement follows by induction on the weight using (i)-(iii) where the surjectivity of $\operatorname{Res}_{m}^{m+1}$ for $m=1,2,3$ is extracted from Theorem \ref{theorem: mod forms A1 tower}.
\end{proof}
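For instance, for $m=4$ the diagram yields the $m+3=7$ generators
\[\mathcal{E}_{4}^{4A_{1}}\,,\,\mathcal{E}_{6}^{4A_{1}}\,,\,G_{4}^{4A_{1}}\,,\,F_{6}^{4A_{1}}\,,\,F_{8}^{4A_{1}}\,,\,F_{10}^{4A_{1}}\,,\,F_{12}^{4A_{1}}\,,\]
while for $m=1$ one recovers Igusa's classical generators $\mathcal{E}_{4}^{A_{1}},\mathcal{E}_{6}^{A_{1}},G_{10}^{A_{1}},F_{12}^{A_{1}}$.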
In the following we construct three modular forms with respect to the character $v_{\pi}$. Following \cite{Wo1} we consider three theta type Jacobi forms with respect to the coordinates introduced in (\ref{choice of coordinates for mA1})
\[\begin{aligned}
&\vartheta_{4A_{1}}^{(1)}(\tau,\mathfrak{z}_{4A_{1}})=\vartheta(\tau,z_{1}-z_{2})\vartheta(\tau,z_{1}+z_{2})\vartheta(\tau,z_{3}-z_{4})\vartheta(\tau,z_{3}+z_{4})\,,&\\
&\vartheta_{4A_{1}}^{(2)}(\tau,\mathfrak{z}_{4A_{1}})=\vartheta(\tau,z_{3}-z_{2})\vartheta(\tau,z_{3}+z_{2})\vartheta(\tau,z_{1}-z_{4})\vartheta(\tau,z_{1}+z_{4})\,,&\\
&\vartheta_{4A_{1}}^{(3)}(\tau,\mathfrak{z}_{4A_{1}})=\vartheta(\tau,z_{1}-z_{3})\vartheta(\tau,z_{1}+z_{3})\vartheta(\tau,z_{2}-z_{4})\vartheta(\tau,z_{2}+z_{4})&
\end{aligned}\]
such that $\vartheta_{4A_{1}}^{(j)}\in J_{2,4A_{1};1}(v_{\eta}^{12})$ for $j=1,2,3$. By multiplying each of the three functions by $\eta(\tau)^{12}$ and considering the arithmetic lifting of these functions we obtain three modular forms $\Delta_{8,4A_{1}}^{(j)},j=1,2,3$ which belong to $\mathcal{M}_{8}(\widetilde{\Gamma}_{4},1)$.
\begin{prop}\label{cusp form of weight 24 for 4A1}
There is a cusp form \[\Delta_{24}^{4A_{1}}\in\mathcal{S}_{24}(\Gamma_{4},v_{\pi})\] satisfying
\[\Delta_{24}^{4A_{1}}=\Delta_{8,4A_{1}}^{(1)}\,\Delta_{8,4A_{1}}^{(2)}\,\Delta_{8,4A_{1}}^{(3)}\,.\]
The divisor equals the $\Gamma_{4}$-orbit of $\mathcal{D}_{\varepsilon_{1}+\varepsilon_{4}}^{4}$.
\end{prop}
\begin{proof}
This function coincides with the cusp form of weight 24 for the lattice $D_{4}$ which was constructed in \cite[Theorem 4.4]{Wo1}. Since $4A_{1}$ is a sublattice of $D_{4}$ we obtain a function with the modular behaviour stated above where the character $v_{\pi}$ appears due to the definition of $\vartheta_{4A_{1}}^{(j)}$ above. For a maximal even lattice the arithmetic lifting of a Jacobi form is a cusp form if the Fourier expansion ranges over all parameters with positive hyperbolic norm, compare \cite[Theorem 3.1]{G}. This characterization can be extended to all lattices with the property that every isotropic subgroup of $D(L_{2})$ is cyclic, see \cite[Theorem 4.2]{GHS1} for a proof. Since this is the case for the lattice $L_{2}(4A_{1})$ the function $\Delta_{24}^{4A_{1}}$ is a cusp form.
\end{proof}
The Proposition yields two more cusp forms for the tower.
\begin{cor}\label{cor:construction of anti symmetric modular form for 3A1 of weight 18}
There are cusp forms
\[\Delta_{10}^{2A_{1}}\in\mathcal{S}_{10}(\Gamma_{2},v_{\pi})\quad\text{ and }\quad \Delta_{18}^{3A_{1}}\in\mathcal{S}_{18}(\Gamma_{3},v_{\pi})\]
whose divisor equals
the $\Gamma_{m}$-orbit of
\[\mathcal{D}_{\varepsilon_{1}+\varepsilon_{m}}^{m}\quad, \quad m=2\text{ or }3\text{, respectively.}\]
\end{cor}
\begin{proof}
We consider the cusp form $\Delta_{24}^{4A_{1}}$ in Proposition \ref{cusp form of weight 24 for 4A1}. The divisor is the sum of the six $\widetilde{\Gamma}_{4}$-orbits which are represented by $\mathcal{D}_{1}^{4}+\mathcal{D}_{2}^{4}$ where
\[\begin{aligned}
&\mathcal{D}_{1}^{4}:=\mathcal{D}_{\varepsilon_{1}+\varepsilon_{2}}^{4}+\mathcal{D}_{\varepsilon_{1}+\varepsilon_{3}}^{4}+\mathcal{D}_{\varepsilon_{2}+\varepsilon_{3}}^{4}\,,&\\
&\mathcal{D}_{2}^{4}:=\mathcal{D}_{\varepsilon_{1}+\varepsilon_{4}}^{4}+\mathcal{D}_{\varepsilon_{2}+\varepsilon_{4}}^{4}+\mathcal{D}_{\varepsilon_{3}+\varepsilon_{4}}^{4}\,.&
\end{aligned}\]
The group $\operatorname{O}(D(4A_{1}))\cong \mathcal{S}_{4}$ acts $2$-fold transitively on the set
\[\{\varepsilon_{\mu}\,|\,1\leq \mu \leq 4\}\]
by relabelling the indices. The group $\mathcal{S}_{4}$ contains $\mathcal{S}_{3}$ as the subgroup fixing $\varepsilon_{4}$.
The sets $\mathcal{D}_{\varepsilon_{k}+\varepsilon_{4}}^{4},\mathcal{D}_{\varepsilon_{k}-\varepsilon_{4}}^{4}$ where $k=1,2,3$ belong to the same $\widetilde{\Gamma}_{4}$-orbit but constitute different $\widetilde{\Gamma}_{3}$-orbits. Hence the restriction of $\Delta_{24}^{4A_{1}}$ to $\mathcal{D}^{3}$ has the divisor
\[\mathcal{D}_{1}^{3}+2\,\mathcal{D}_{2}^{3}\]
with respect to the action of $\widetilde{\Gamma}_{3}$ where
\[\mathcal{D}_{j}^{3}:=\mathcal{D}_{j}^{4}\cap \mathcal{D}^{3}\,,\,j=1,2\,.\]
Moreover, with respect to $\Gamma_{3}$ the divisor of the restriction is represented by
\[\mathcal{D}_{\varepsilon_{1}+\varepsilon_{3}}^{3}+2\,\mathcal{D}_{\varepsilon_{3}}^{3}\,.\]
According to Theorem \ref{Lifting construction of reflective modular forms for A1 tower} we can define
\[\Delta_{18}^{3A_{1}}:=\frac{\Delta_{24}^{4A_{1}}\Big|_{\mathcal{D}^{3}}}{G_{6}^{3A_{1}}}\]
and obtain a modular form with the desired properties by Koecher's principle. Now the same construction is done with $\Delta_{18}^{3A_{1}}$ instead and one defines
\[\Delta_{10}^{2A_{1}}:=\frac{\Delta_{18}^{3A_{1}}\Big|_{\mathcal{D}^{2}}}{G_{8}^{2A_{1}}}\]
which has the correct divisor. An analysis of the Fourier expansion of $\Delta_{10}^{2A_{1}},\Delta_{18}^{3A_{1}}$ yields that both functions are cusp forms.
\end{proof}
In the following we consider another type of modular form. Let $II_{2,26}$ be the unique (up to isomorphism) even unimodular lattice of signature $(2,26)$. We define
\[\index{$R_{-2}^{II_{2,26}}$}R_{-2}^{II_{2,26}}:=\{r\in II_{2,26}\,|\,(r,r)=-2\}\,.\]
The following statement is due to Borcherds and can be found in \cite[Theorem 10.1 and Example 2]{Bo2}.
\begin{theorem}[Borcherds]\label{theorem:construction of Borcherds Phi12}
There is a holomorphic modular form $\Phi_{12}$ with the properties \index{$\Phi_{12}$}\[\Phi_{12}\in\mathcal{M}_{12}(\operatorname{O}(II_{2,26})^{+},\det)\quad, \quad \operatorname{div}(\Phi_{12}) =\bigcup_{r\in R^{II_{2,26}}_{-2}}{\mathcal{D}_{r}(II_{2,26})}\]
where the vanishing order is exactly one on each irreducible component.
\end{theorem}
In \cite[Example 2]{Bo2} Borcherds computes the Fourier expansion of $\Phi_{12}$. It turns out that $\Phi_{12}$ reflects the Weyl denominator formula for the fake monster Lie algebra.
\vspace{3mm}
\noindent Let $\mathcal{N}$ be the Niemeier lattice with root system $24A_{1}$, see \cite{Nie}. Since \[II_{2,26}\cong U \perp U_{1}\perp \mathcal{N}\] where
\[U=\langle e,f\rangle,U_{1}=\langle e_{1},f_{1}\rangle\]
are two integral hyperbolic planes we can consider the natural embedding
\begin{equation}\label{chioce of embedding into II 2,26}
L_{2}(mA_{1})\hookrightarrow U \perp U_{1}\perp \mathcal{N} \quad,\quad m\in\{1,2,3,4\}
\end{equation}
which is induced by $mA_{1}\hookrightarrow 24A_{1}$.
Let $K_{m}$ be the orthogonal complement of $L_{2}(mA_{1})$ in $II_{2,26}$. Each vector $r\in II_{2,26}$ has a unique decomposition
\[r=\alpha(r)+\beta(r)\quad,\quad \alpha(r)\in L_{2}(mA_{1})^{\vee}\,,\,\beta(r)\in K_{m}^{\vee}\,.\]
We set
\[\index{$R_{-2}(K)$}R_{-2}(K_{m})=\{r\in II_{2,26}\,|\,(r,r)=-2\,,\, r\perp L_{2}(mA_{1}) \},\]
which is contained in the negative definite lattice $K_{m}$ and hence finite. Consequently we define \index{$\operatorname{N}(K)$}$N(K_{m})=\frac{\sharp R_{-2}(K_{m})}{2}\in\mathbb{N}\,.$
The next statement is a special case of \cite[Theorem 8.2 and Corollary 8.12]{GHS2} and describes the construction of a quasi-pullback of Borcherds' function $\Phi_{12}$.
\begin{theorem}\label{theorem:quasi pullback form phi12}
Consider a primitive embedding $L_{2}\hookrightarrow II_{2,26}$ and denote by $K$ the orthogonal complement of $L_{2}$ in $II_{2,26}$. We define
\[\left.\Phi\right|^{\textit{(QP)}}_{L_{2}}(\mathcal{Z})=\displaystyle\left.\frac{\Phi_{12}(\mathcal{Z})}{\prod_{r\in R_{-2}(K)/{\{\pm 1\}}}{(\mathcal{Z},r)}}\right|_{\mathcal{D}(L_{2})},\]
where in the product one fixes a set of representatives for $R_{-2}(K)/\{\pm 1\}$. Then $\left.\Phi\right|^{\textit{(QP)}}_{L_{2}}$ belongs to $\mathcal{M}_{12+N(K)}(\widetilde{\operatorname{O}}(L_{2})^{+},\det)$
and vanishes exactly on all rational quadratic divisors
\[\mathcal{D}_{\alpha(r)}=\{[\mathcal{Z}]\in\mathcal{D}(L_{2})\,|\,(\mathcal{Z},\alpha(r))=0\}\]
where $r$ runs through the set \(R_{-2}^{II_{2,26}}\) and $(\alpha(r),\alpha(r))<0$. If $N(K)>0$ we say that $\left.\Phi\right|^{\textit{(QP)}}_{L_{2}}$ is a quasi-pullback of $\Phi_{12}$\index{quasi-pullback of $\Phi_{12}$}. In this case $\left.\Phi\right|^{\textit{(QP)}}_{L_{2}}$ is a cusp form.
\end{theorem}
The choice of the embedding (\ref{chioce of embedding into II 2,26}) yields another four modular forms with respect to a character.
\begin{theorem}\label{baby monster type modular forms for A1 tower}
Let $m\in\{1,2,3,4\}$. There exists a cusp form
\[F_{36-m}^{mA_{1}}\in \mathcal{S}_{36-m}(\Gamma_{m},\det v_{\pi}^{\kappa})\quad ,\quad \kappa\in\{0,1\}\]
whose divisor is represented by the sum of the two different $\Gamma_{m}$-orbits
\[\mathcal{D}_{e_{1}-f_{1}}^{m}+\mathcal{D}_{\varepsilon_{m}}^{m}\,.\]
\end{theorem}
\begin{proof}
The results can be extracted from \cite{Gr}. However, we give a sketch of the proof here, using Theorem \ref{theorem:quasi pullback form phi12}. The strategy of the proof is to construct modular forms as quasi-pullbacks of $\Phi_{12}$ with respect to the embedding (\ref{chioce of embedding into II 2,26}). We denote these forms by $F_{k_{m}}^{mA_{1}}$ for $m=1,2,3,4$. Since the number of $-2$-roots for the lattice $mA_{1}$ is exactly $2m$ we have $N(K_{m})=24-m$ and the weight $k_{m}$ of $F_{k_{m}}^{mA_{1}}$ is exactly $12+N(K_{m})=36-m$. The divisor of $F_{36-m}^{mA_{1}}$ is determined by all vectors $\alpha\in L_{2}(mA_{1})^{\vee},(\alpha,\alpha)< 0$ such that there exists a $\beta\in K_{m}^{\vee}$ satisfying $\alpha+\beta\in R_{-2}^{II_{2,26}}$. The choice of our embedding already implies $(\alpha,\alpha)=-2$ and $\beta=0$. There are $m+1$ different orbits of $-2$-roots in $L_{2}(mA_{1})$ with respect to $\widetilde{\Gamma}_{m}$ represented by
\[e_{1}-f_{1},\varepsilon_{1},\dots,\varepsilon_{m}\,.\]
The divisor is represented by the sum of these orbits. In \cite[Corollary 1.5.2]{N} the author gave a criterion to decide whether an isometry $g\in \operatorname{O}(L_{2}(mA_{1}))$ extends to $\operatorname{O}(II_{2,26})^{+}$. In our case this is always possible, compare \cite[p. 122]{Gr}. Hence the maximal modular group is indeed larger and the divisor with respect to the larger group is represented by the two orbits stated above.
\end{proof}
We define the four functions
\[H_{30}^{mA_{1}}:=\frac{F_{36-m}^{mA_{1}}}{\chi_{6-m}^{mA_{1}}}\quad \text{ where }\,m\in\{1,2,3,4\}\]
whose divisor is represented by the $\Gamma_{m}$-orbit of
\(\mathcal{D}_{e_{1}-f_{1}}^{m}.\)
The next Lemma is useful in order to determine the graded ring $\mathcal{A}(\Gamma'_{m})$.
\begin{lemma}\label{lemma:divisors of A1 modular forms with characer}
Let $F\in\mathcal{M}_{k}(\Gamma_{m},\lambda)$ for some finite character $\lambda:\Gamma_{m}\to\mathbb{C}^{\ast}$ and $m=1,2,3,4$.
\begin{enumerate}[(i)]
\item If $\lambda=\det v_{2}^{a} v_{\pi}^{b}$ where $a,b\in\{0,1\}$ then $\mathcal{D}_{\varepsilon_{m}}^{m}\leq \operatorname{div}(F)$.
\item If $\lambda=v_{\pi}v_{2}^{a}$ where $a\in\{0,1\}$ then $\mathcal{D}_{\varepsilon_{1}-\varepsilon_{m}}^{m}\leq \operatorname{div}(F)$.
\item If $\lambda=v_{2}v_{\pi}^{a}$ where $a\in\{0,1\}$ then $\mathcal{D}_{e_{1}-f_{1}}^{m}\leq \operatorname{div}(F)$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}[(i)]
\item The reflection $\sigma_{\varepsilon_{m}}$ satisfies $\lambda(\sigma_{\varepsilon_{m}})=-1$. Hence its fixed locus $\mathcal{D}_{\varepsilon_{m}}^{m}$ is contained in $\operatorname{div}(F)$.
\item We have $\lambda(\sigma_{\varepsilon_{j}-\varepsilon_{l}})=-1$ where $j,l$ are distinct numbers in $\{1,\dots,m\}$ and this reflection induces the permutation $z_{j}\mapsto z_{l},z_{l}\mapsto z_{j}$ with respect to our standard basis (\ref{choice of coordinates for mA1}). This shows that the fixed locus of this reflection is part of $\operatorname{div}(F)$.
\item We consider the reflection $\sigma_{1}=\sigma_{e_{1}-f_{1}}$. From \cite[Section 1.6.3 ]{Kl} and the formula for $P$ in the proof of \cite[Corollary 1.23]{Kl} we deduce that $v_{2}(\sigma_{1})=-1$. As $\mathcal{D}_{e_{1}-f_{1}}^{m}$ is exactly the fixed locus of this reflection we are done.
\end{enumerate}
\end{proof}
Now we are able to prove the main theorem by extending the diagram given in Theorem \ref{theorem: mod forms A1 tower}.
\begin{mainproof}
Let $F\in \mathcal{A}(\Gamma'_{m})$. Without loss of generality we can assume that $F$ is homogeneous of weight $k$. If the character of $F$ is trivial the assertion follows from Theorem \ref{theorem:graded ring with trivial character}. In the general situation we can use Lemma \ref{lemma:divisors of A1 modular forms with characer} to divide $F$ by one of the forms $H_{30}^{mA_{1}},\chi_{6-m}^{mA_{1}}$ or $\Delta_{k_{m}}^{mA_{1}}$ in the case $m\neq 1$. By virtue of Koecher's principle this process yields a modular form with trivial character and we are back in the first case.
\end{mainproof}
Let $m\in\{2,3,4\}$. The only relations among the generators of $\mathcal{A}(\Gamma'_{m})$ are given by
\[(\Delta_{k_{m}}^{mA_{1}})^{2}=P_{mA_{1}}(\mathcal{E}_{4}^{mA_{1}}, \mathcal{E}_{6}^{mA_{1}}, \chi_{6-m}^{mA_{1}}, F_{14-2m}^{mA_{1}},\dots, F_{12}^{mA_{1}})\]
and
\[(H_{30}^{mA_{1}})^{2}=Q_{mA_{1}}(\mathcal{E}_{4}^{mA_{1}}, \mathcal{E}_{6}^{mA_{1}}, \chi_{6-m}^{mA_{1}}, F_{14-2m}^{mA_{1}},\dots, F_{12}^{mA_{1}})\]
where $P_{mA_{1}},Q_{mA_{1}}\in \mathbb{C}[X_{1},\dots,X_{m+3}]$ are uniquely determined polynomials. In the case $m=1$ our methods yield Igusa's description.
\section{Eisenstein series}
Let $L$ be a positive definite even lattice. In \cite{BrK} the authors investigated vector valued Eisenstein series. We denote the vector valued Eisenstein series of weight $k$ with respect to the simplest cusp by $\vec{e}_{L,k}$, compare \cite[p. 23]{Br}. We denote by $(\mathfrak{e}_{\gamma})_{\gamma\in D(L)}$ the standard basis for the group algebra $\mathbb{C}[L^{\vee}/L]$. For $k\geq 5/2,k\in \mathbb{Z}/2$ these functions are vector valued modular forms with respect to the dual of the Weil representation for $L$ with Fourier expansion
\[\vec{e}_{L,k}(\tau)=\sum_{\gamma\in D(L)}\sum_{\substack{n\in \mathbb{Z}-\frac{1}{2}(\gamma,\gamma)\\n \geq 0}}{c(n,\gamma,L,k)\exp\left(2\pi i n\tau\right)\mathfrak{e}_{\gamma}}\quad,\text{ where }\tau\in \mathbb{H}\,.\]
In \cite[Theorem 4.6]{BrK} an explicit formula for $c(n,\gamma,L,k)$ is given.
There is a well-known correspondence between vector valued modular forms and Jacobi forms for the lattice $L$, compare \cite[Proposition 1.6]{Sh}. Let $m\in\mathbb{N},m\leq 8$. We define
\[s_{m}(n,l):=\sharp\{x\in ((mA_{1})_{E_{8}}^{\perp})^{\vee}\,|\,l+x\in E_{8}\,,\,(l+x,l+x)=2n\}\in\mathbb{Z}\,.\]
Note that these numbers are exactly the Fourier coefficients of $\epsilon_{4,mA_{1}}$, more precisely one has
\[\epsilon_{4,mA_{1}}(\tau,\mathfrak{z})=\sum_{\substack{n\in\mathbb{Z}_{\geq 0}\\l\in mA_{1}^{\vee}\\2n-(l,l)\geq 0}}{s_{m}(n,l)q^{n}r^{l}}\,.\]
Since the lattice $E_{8}$ is a maximal even lattice there are no nontrivial isotropic subgroups of $D(E_{8})$. Hence we can rewrite this quantity in the simplified form
\begin{equation}\label{representation numbers of A1 tower in E8}
s_{m}(n,l)=\sharp\{x\in ((mA_{1})_{E_{8}}^{\perp})^{\vee}\,|\,(l+x,l+x)=2n\}\quad,\quad n\in \mathbb{N}\,,\, l\in (mA_{1})^{\vee}.
\end{equation}
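For $m=1$ this count admits a quick numerical sanity check (ours, not part of the original argument): in a standard even coordinate model of $E_{8}$, decomposing the $240$ roots by their pairing with a fixed root $\alpha$ spanning the copy of $A_{1}$ recovers the values $126$ and $56$ that appear below as $q$-coefficients of $\epsilon_{4,A_{1}}$.

```python
from itertools import combinations, product

def e8_roots():
    """The 240 roots of E8, in doubled coordinates so that every entry is an
    integer: type 1 roots +-2e_i +- 2e_j, and type 2 roots (+-1, ..., +-1)
    with an even number of minus signs."""
    roots = []
    for i, j in combinations(range(8), 2):
        for si, sj in product((2, -2), repeat=2):
            v = [0] * 8
            v[i], v[j] = si, sj
            roots.append(tuple(v))
    roots += [s for s in product((1, -1), repeat=8) if s.count(-1) % 2 == 0]
    return roots

roots = e8_roots()
alpha = (0, 0, 0, 0, 0, 0, 2, 2)  # a fixed root spanning the A1 sublattice

counts = {}
for x in roots:
    # doubling both vectors scales the bilinear form by 4, so divide it out
    p = sum(a * b for a, b in zip(x, alpha)) // 4
    counts[p] = counts.get(p, 0) + 1

# 126 roots orthogonal to alpha (the roots of E7, the orthogonal complement of
# A1, giving s_1(1,0) = 126) and 56 roots of pairing +1 (resp. -1), projecting
# onto the 56 minimal vectors of the nontrivial coset of E7 in its dual.
print({k: counts[k] for k in sorted(counts)})  # {-2: 1, -1: 56, 0: 126, 1: 56, 2: 1}
```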
We describe the numerical values of the Eisenstein-like Jacobi forms if $J_{k,mA_{1};1}^{(\textit{cusp})}=\{0\}$ and $m\leq 3$.
\begin{prop}\label{prop:Fourier expansion of Eisenstein type}
\begin{enumerate}[(a)]
\item Let $m=1,2,3$. For any $n\in\mathbb{Z}_{\geq 0},l\in (mA_{1})^{\vee}$ we have the identity
\[2s_{m}(n,l)=c\left(n-(l,l)/2,l \bmod mA_{1},mA_{1},4-m/2\right)\,.\]
The first Fourier coefficients of $\epsilon_{4,mA_{1}}$ are given as follows:
\[\begin{aligned}
&\epsilon_{4,A_{1}}(\tau,\mathfrak{z})=&&1+(r^{-1}+56r^{-1/2}+126+56r^{1/2}+r)q&\\&&&+(126r^{-1}+576r^{-1/2}+756+576r^{1/2}+126r)q^{2}&\\&&&+(56r^{-3/2}+756r^{-1}+1512r^{-1/2}+2072+1512r^{1/2}+756r+56r^{3/2})q^{3}&\\&&&+(\dots) q^{4}\,,&\\\\
&\epsilon_{4,2A_{1}}(\tau,\mathfrak{z})=&&1+(60+32r^{(\pm 1/2,0)}+32r^{(0,\pm 1/2)}+12r^{(\pm 1/2,\pm 1/2)}+r^{(\pm 1,0)}+r^{(0,\pm 1)})q&\\
&&&+(252+192r^{(\pm 1/2,0)}+192r^{(0,\pm 1/2)}+160r^{(\pm 1/2, \pm 1/2)}&\\&&&+60r^{(\pm 1 ,0)}+60r^{(0, \pm 1)}+32r^{(\pm 1,\pm 1/2)}+32r^{(\pm 1/2, \pm 1)}+r^{(\pm 1, \pm 1)})q^{2}&\\&&&+(\dots)q^{3}\,,&\\\\
&\epsilon_{4,3A_{1}}(\tau,\mathfrak{z})=&&1+(26+16r^{(\pm 1/2,0,0)}+16r^{(0,\pm 1/2,0)}+16r^{(0,0,\pm 1/2)}+8r^{(\pm 1/2,\pm 1/2,0)}&\\&&&+8r^{(\pm 1/2,0,\pm 1/2)}+8r^{(0,\pm 1/2,\pm 1/2)}+2r^{(\pm 1/2,\pm 1/2,\pm 1/2)}&\\&&&+r^{(\pm 1,0,0)}+r^{(0,\pm 1,0)}+r^{(0,0,\pm 1)})q&\\&&&+(\dots)q^{2}
\end{aligned}\]
\item Let $m=1,2$ and define the numbers $r_{m}(n,l)$ by
\[\epsilon_{6,mA_{1}}(\tau,\mathfrak{z})=\sum_{\substack{n\in\mathbb{Z}_{\geq 0}\\l\in mA_{1}^{\vee}\\2n-(l,l)\geq 0}}{r_{m}(n,l)q^{n}r^{l}}\,.\]
For any $n\in\mathbb{Z}_{\geq 0},l\in (mA_{1})^{\vee}$ we have
\[\frac{3\cdot 2^{1-m}}{16-2m}\cdot r_{m}(n,l)\in\mathbb{Z}\pi^{2}\,.\]
\end{enumerate}
\end{prop}
\begin{proof}
We consider the subspace of $J_{k,mA_{1};1}$ consisting of all functions which belong to the kernel of the symmetrization operator (\ref{symmetrization operator}). Due to Theorem \ref{theorem:graded ring with trivial character} this space is one-dimensional if $(m,k)$ is contained in the set
\[\Omega=\{(1,4),(2,4),(3,4),(1,6),(2,6)\}\,.\]
In each case the space is generated by $\epsilon_{k,mA_{1}}$. Any zero-dimensional cusp of the modular variety $\Gamma_{m}\backslash\mathcal{D}^{m}$ is represented by a primitive isotropic vector in $L_{2}(mA_{1})$. Two zero-dimensional cusps are equivalent if they belong to the same $\Gamma_{m}$-orbit. In the cases where $m< 4$ all zero-dimensional cusps are equivalent to the simplest cusp. In this case the codimension of the subspace $J_{k,mA_{1};1}^{(\textit{cusp})}\subseteq J_{k,mA_{1};1}$ is one. In \cite{Br} the author defined Eisenstein series for every zero-dimensional cusp represented by a vector $c\in L^{\vee}$ such that $(c,c)\in 2\mathbb{Z}$. The corresponding Jacobi forms are denoted by $e_{k,c}^{L}$. The span of these functions, as $c$ runs through all zero-dimensional cusps, constitutes the complementary space of $J_{k,mA_{1};1}^{(\textit{cusp})}$. We call the members of this space Jacobi-Eisenstein series. They can be viewed as the natural generalization of the classical Jacobi-Eisenstein series investigated in \cite{EZ}. For any permutation $\sigma\in \mathcal{S}_{m}$ the formula for the Fourier coefficients of $e_{k,c}^{mA_{1}}$ yields
\[\sigma.e_{k,c}^{mA_{1}}=e_{k,\sigma c}^{mA_{1}}\,.\]
Since $\sigma c$ is equivalent to $c$ this identity shows that $e_{k,c}^{mA_{1}}$ belongs to the kernel of the operator (\ref{symmetrization operator}) for any $c$ satisfying $(c,c)\in2\mathbb{Z}$. Choosing $c=0$ we infer that for any $(m,k)\in\Omega$ there exists some $\lambda\in\mathbb{C}^{\ast}$ such that
\[\epsilon_{k,mA_{1}}=\lambda\, e_{k,0}^{mA_{1}}\,.\]
Using \cite[Theorem 4.6]{BrK} we can express the Fourier coefficients of $\epsilon_{k,mA_{1}}$ as special values of $L$-functions.
\begin{enumerate}[(a)]
\item A comparison of the Fourier coefficients yields $\lambda=\frac{1}{2}$ if $k=4$ and we obtain the numerical values of $\epsilon_{4,mA_{1}}$ by evaluating the formula for the Fourier coefficients of $e_{k,0}^{mA_{1}}$.
\item The Fourier coefficient of the constant term of $\epsilon_{6,A_{1}}$ equals
\[-(2\pi i)^{2}\det(A_{1})\frac{14}{24}=\frac{14}{3}\pi^{2}\,.\]
The Fourier coefficients of $\frac{1}{2}e_{6,0}^{A_{1}}$ are well-known to be integral, see \cite[Theorem I.2.1]{EZ} and \cite{Coh}. Now the statement in the case $m=1$ follows by a comparison of the constant term. In the case $m=2$ the constant term of $\epsilon_{6,2A_{1}}$ is given by
\[-(2\pi i)^{2}\det(2A_{1})\frac{12}{24}=8\pi^{2}\,.\]
From Lemma \ref{lemma:heat operator with automorohic correction} we obtain the Fourier expansion of $\epsilon_{6,2A_{1}}(\tau,\mathfrak{z})$ as
\[(8\pi^{2})\cdot 24\,G_{2}(\tau)\cdot\epsilon_{4,2A_{1}}(\tau,\mathfrak{z})-\sum_{\substack{n\in\mathbb{Z}_{\geq 0}\\l\in (2A_{1})^{\vee}\\2n-(l,l)\geq 0}}{\alpha(n,l)q^{n}r^{l}}\]
where
\[\alpha(n,l):=(8\pi^{2})\cdot (4n-2(l,l))\, s_{m}(n,l)\,.\]
Since $(4n-2(l,l))\in\mathbb{Z}$ for all $n\in\mathbb{Z}_{\geq 0},l\in (2A_{1})^{\vee}$ we obtain the assertion as a consequence of formula (\ref{G2 Fourier expansion}) and (\ref{representation numbers of A1 tower in E8}).
\end{enumerate}
\end{proof}
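The two constant-term evaluations in part (b) of the proof are elementary to verify; the following is a small independent check of ours (assuming SymPy is available):

```python
import sympy as sp

# Constant term of the Eisenstein part: -(2*pi*I)**2 * det(mA1) * coeff,
# with det(A1) = 2 and coefficient 14/24 for m = 1, and det(2A1) = 4 and
# coefficient 12/24 for m = 2, as in the proof above.
m1 = sp.expand(-(2 * sp.pi * sp.I) ** 2 * 2 * sp.Rational(14, 24))
m2 = sp.expand(-(2 * sp.pi * sp.I) ** 2 * 4 * sp.Rational(12, 24))
print(m1, m2)  # 14*pi**2/3 8*pi**2
```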
The last Proposition justifies the notion of Eisenstein type. In the cases considered there the space complementary to the space of cusp forms is always one-dimensional. If $m=4$ there is no Jacobi-Eisenstein series to be considered since $4-m/2=2<5/2$. In this case $\epsilon_{4,4A_{1}}$ is the correct replacement for the missing Eisenstein series. Moreover, there are two inequivalent zero-dimensional cusps. Here the complementary space is two-dimensional. If $k=4$ the first generator of this space is given by the Eisenstein type modular form $\mathcal{E}_{4}^{4A_{1}}$ and the second generator coincides with the square of $\chi_{2}^{4A_{1}}$ which is the main function of the $A_{1}$-tower. By virtue of the identities
\[\begin{aligned}
&\sharp\{l\in E_{7}\,|\,(l,l)=2n\}&&=&&c(n,0,A_{1},7/2)/2&\\
&\sharp\{l\in D_{6}\,|\,(l,l)=2n\}&&=&&c(n,0,2A_{1},3)/2&\\
&\sharp\{l\in A_{1}\oplus D_{4}\,|\,(l,l)=2n\}&&=&&c(n,0,3A_{1},5/2)/2&
\end{aligned}\]
Proposition \ref{prop:Fourier expansion of Eisenstein type} yields a new description of these representation numbers of quadratic forms as special values of $L$-functions.
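For instance, the number of norm-$2$ vectors is $126$ for $E_{7}$ (its root count), $60$ for $D_{6}$, and $26$ for $A_{1}\oplus D_{4}$, matching the coefficients of $qr^{0}$ in the expansions of $\epsilon_{4,mA_{1}}$ above. The latter two values are easy to confirm by brute force (an illustration of ours):

```python
from itertools import product

def norm2_count_Dn(n):
    """Number of v in D_n = {v in Z^n : sum of entries even} with (v, v) = 2.
    Such vectors have all entries in {-1, 0, 1}, so a finite search suffices."""
    return sum(1 for v in product((-1, 0, 1), repeat=n)
               if sum(e * e for e in v) == 2 and sum(v) % 2 == 0)

# A1 contributes its two roots of norm 2; D4 contributes its 24 roots.
counts = {"D6": norm2_count_Dn(6), "A1+D4": 2 + norm2_count_Dn(4)}
print(counts)  # {'D6': 60, 'A1+D4': 26}
```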
Each of these lounge chairs is defined by its own style and character. Every lounge chair is practical and functional, and chaise lounge chairs with canopies come in a variety of designs made to help you build a signature look for your home. You can add an elegant feature to your decor by working lounge chairs into your style. When buying lounge chairs, place equal weight on comfort and aesthetics, and choose pieces whose detailed appearance fits your personal preferences.
Chaise lounge chairs with canopies come in various patterns, dimensions, and styles, which makes them a perfect way to liven up an existing room. Decorative accent pieces give you a chance to experiment more freely with your selection and to choose items with unique shapes or features; you can also draw inspiration from your current decor. Color is an important aspect of atmosphere and mood: every lounge chair has its own design and shape, so when choosing one, think about how its color will support the nuance and mood you want. A suitable choice makes a space feel that much more inviting.
Consider the room where you want to place the lounge chairs. Do you have a large space that calls for a chaise lounge chair with canopy, or a smaller room? Your lounge chairs should relate to the design elements of your home; otherwise they can detract from the architectural details rather than complement them. Avoid buying pieces that won't fit by measuring the available area before you shop. Once you know the dimensions, you can start hunting. Function should drive your choice, but if you have a colorful design scheme, look for versatile pieces.
Chaise lounge chairs with canopies seem to be a popular option and come in both small and large sizes. When deciding which one to buy, the first step is working out what you actually need. Used as furnishings, lounge chairs can become a distinctive decoration for your home and make your house more satisfying to live in, and a room's built-in layout and design can help you figure out which sort of lounge chair will look best.
Using a chaise lounge chair with canopy in a room lets you improve the space and highlight features you have on display. Look for lounge chairs that have an element of the unexpected or some character; lounge chairs can set the tone of a particular room or serve particular purposes. Before selecting one, note that the overall shape of the piece may be a little unusual, or it may have an eye-catching detail or innovative attribute, so be sure to match its design and texture with the current style of your house. Either way, your individual taste should be reflected in the chaise lounge chair with canopy you choose, and there is a large selection of lounge chairs from which to find the best fit for your house.
Every chaise lounge chair with canopy lets you set up a unique style and create a different look for the room. Here is a practical guide to the various kinds of chaise lounge chairs with canopies to help you make the right decision for your interior and budget. Adding lounge chairs can help you create the right atmosphere in every room. In closing, remember the following when selecting lounge chairs: let your needs dictate what you choose, but don't forget to account for the distinctive architecture of your interior. Considering the design of your space before buying will also help you choose the right model, size, and shape for the room.
Wondering where to purchase lounge chairs for your home? There is seemingly a never-ending selection of chaise lounge chairs with canopies to choose from. Luckily, this collection has you covered with all types of chaise lounge chairs for your house. Once you have picked one based on your requirements, consider adding accent items. Decorating with lounge chairs is a fun way to give a space a new look or create a special design. Accent pieces are not the focal point of a room, but they serve to tie the space together. Lounge chairs come in many sizes, styles, shapes, and variations; add accent items to complete the look, and the result can appear professionally designed.
Lounge chairs are enjoyable pieces that can be placed in any room of the home. Selecting the right one is about much more than falling in love with its design; it is also a quick, affordable way to give a dull room a new feel. The style and construction of a chaise lounge chair with canopy have to last a long time, so considering the distinct details and quality of a given product is a sound approach. Decorating with a chaise lounge chair with canopy works for anyone, and for whatever decor plan you want to build the interior's design around.
NASHVILLE (BP) -- As churches become aware of potential safety issues, more are signing up for background check services through LifeWay Christian Resources' OneSource program.
"The numbers have increased dramatically since we began our relationship with backgroundchecks.com," said LifeWay's Jennie Morris. "On average, we add 160 customers a month."
VIDOR, Texas (BP) -- "The Lord protected us," a Texas pastor proclaimed after police arrested a masked gunman more than 250 miles away who cited the specific church as his destination.
"There is an overwhelming recognition that the Lord protected us and provided for us," Terry Wright of First Baptist Church in Vidor told Baptist Press today (Jan. 2).
Police in Seguin, Texas, 254 miles southwest of Vidor, contacted the pastor after arresting 33-year-old Tony Dwayne Albert II, whom police said was dressed in tactical style clothing, had a loaded gun and said he was headed to the church to fulfill an unspecified prophecy.
Church security draws 1,000-plus for training in Ky.
FRANKFORT, Ky. (BP) -- Concern for church safety drew more than 1,000 leaders for training offered at the Kentucky Baptist Convention's Church Security Conference on March 24.
"While I grieve that a conference like this one is needed," KBC Executive Director Paul Chitwood said, "I'm thankful we could offer it.
NASHVILLE (BP) -- As public shootings continue to make headlines, many churches are evaluating where their facilities stand on security. LifeWay Christian Resources is helping churches make safety a priority by providing free security training through its Ministry Grid platform.
EASTPOINTE, Mich. (BP) -- "Shell shocked." That's how Michigan pastor Mathew Vroman felt after learning of the Nov. 5 massacre at a small Sutherland Spring, Texas, church that left 26 worship attendees dead at the hands of a gunman.
DALLAS (BP) -- With church security on the minds of many pastors and other church leaders in the wake of the tragedy at First Baptist Church of Sutherland Springs, Texas, many churches of every size are re-evaluating their security protocols.
FLORESVILLE, Texas (BP) -- "We take care of our own," stated Frank S. Page, formerly a Texas pastor, as he and Southern Baptist Convention President Steve Gaines and his wife Donna spent three days helping to serve families distraught over the Nov. 5 massacre of half of the congregation of First Baptist Church in Sutherland Springs.
Screen capture from ABC News on Twitter.
SUTHERLAND SPRINGS, Texas (BP) -- Southern Baptists ministering in the wake of what some have called the deadliest church shooting in U.S. history say they've witnessed "God at work" despite the 26 dead and some 20 others wounded at First Baptist Church in Sutherland Springs, Texas.
News: Insights into a Changing World
Students 'Hack' Social Justice Issues
March 22, 2019 / Kristina Plowman
Teams of University students competed in developing action plans addressing particular social issues, such as homelessness or income inequality, as part of a "hackathon" on March 19. The hackathon was part of the second annual Novak Symposium, a day-long conference promoting continued discussion of issues and themes that the late Michael Novak focused on.
What on earth is a 'social justice hackathon'?
The Catholic University of America's Novak Symposium, now in its second year, is a day filled with the sharing of thought provoking ideas.
To honor the memory of Michael Novak, an American Catholic political scholar best known for his demonstrations that democratic capitalism and Catholic social teaching are compatible, Catholic scholars from think tanks and educational institutions come from all over to deliver speeches on current social issues and the solutions they have discovered through their work.
Conservatives Do Believe in Social Justice. Here's What Our Vision Looks Like.
Last month, America lost a great defender of freedom, Michael Novak.
Novak was committed to rightly ordered liberty and cared deeply about the principles and practices that produce it. His enormous body of work emphasized the cultural prerequisites for political and economic freedom, as he stressed that economic conservativism and social conservatism are indivisible.
In the words of Heritage Foundation founder Ed Feulner, "Michael forced those of us trained in the dismal science of economics to explain that we should be more than 'free to choose'—rather we should be free to make good free choices."
Book Review: Social Justice Isn't What You Think It Is
September 14, 2016 / Michael Novak
In Social Justice Isn't What You Think It Is, philosopher and theologian Michael Novak and social work professor Paul Adams, writing with Elizabeth Shaw, seek to recapture an awareness of justice, and so of social justice, as a virtue in the ordinary sense—as a habit or disposition of the moral agent.
Condé Nast Entertainment Names Vox Vet Patrick Bulger Executive Director Of Its Digital Video Division (Exclusive)
By James Loke Hale
On November 14, 2018
Condé Nast Entertainment (CNE) is onboarding a new executive to the team behind its booming digital video development division. Patrick Bulger, previously head of editorial video at Vox Media, is now CNE's executive director of creative programming, planning, and project management.
Bulger will report to Joe Sabia, CNE's head of digital development. Sabia's team is responsible for numerous successful series, including Vogue's 73 Questions (which counts a collective 572 million lifetime views), Bon Appétit's Kids Try (138 million views), and Wired's Autocomplete Interview (382 million views). This year, the team has doubled the YouTube subscriber base of CNE's brands. All told, the publisher's brand channels have gone from 10.5 million subscribers to more than 20 million.
In his new position, Bulger will oversee video development projects, working with CNE's strategy, programming, and production teams to develop operational strategies.
"Patrick for years has proven to be amazingly skilled at content, strategy, and building audiences on a variety of platforms," Sabia says. "He is an important addition to our development efforts as we scale our growing team. He'll be instrumental in ensuring the exponential video growth for Condé Nast across all brands, and I couldn't be more excited."
At Vox Media, Bulger managed video programming, strategy, creative direction, and video operations for its core brands, including Vox, The Verge, Eater, and Racked.
/** Plain data model for a token record: its id, type, owning user, expiry, and the token string itself. */
export class Token {
id: string;
type: string;
user_id: number;
valid_until: string;
token: string;
}
package org.k9m.ghostwriter.thread;
import org.k9m.ghostwriter.GhostWriter;
/**
* Scribe Thread responsible for parsing 1 book and saving it to the DB.
* @author Greg Maksa - k9m
*
*/
public class ScribeThread extends Thread {

    GhostWriter ghostWriter;
    SentinelThread sentinel;

    public ScribeThread(GhostWriter ghostWriter, SentinelThread sentinel) {
        this.ghostWriter = ghostWriter;
        this.sentinel = sentinel;
    }

    @Override
    public void run() {
        String filename = null;
        // Keep taking books from the sentinel's queue until none are left.
        while ((filename = sentinel.pollBook()) != null) {
            ghostWriter.readTrigrams(GhostWriter.FOLDER_INPUT_BOOKS, filename);
            System.out.print("[*" + filename + "*]");
        }
    }
}
\section{Introduction}
Let $(M,\omega)$ be an integral symplectic manifold, and let $(\mathcal L,\nabla)$ be
a line bundle $\mathcal L$ with connection $\nabla$ of curvature $\omega$. The
quadruple $(M,\omega,\mathcal L,\nabla)$ is called a {\em prequantization} of $(M,\omega)$, which morally should give rise to a geometric quantization $Q(M)$ of $M$. A complication arises in that all known constructions of $Q(M)$ require additional data, a polarization
of $M$; such a polarization may be real, a foliation of $M$ by Lagrangian
subvarieties, or else complex, a complex or almost complex structure on $M$
compatible with $\omega.$ It is generally believed, and in many cases verified, that the
quantization $Q(M)$ should be independent of the polarization. However there
is no theorem guaranteeing that this should be the case.
Work of Kontsevich \cite{Ko} extending deformation quantization to Poisson manifolds
raises the issue as to whether any of the constructions above
has any relevance in the Poisson setting. If $(M,\pi)$ is a Poisson manifold,
it is not clear what the analog of $(\mathcal L,\nabla)$ should be, let alone
what one would mean by a polarization. The purpose of this paper is to try
to begin developing some examples, guided by symplectic geometry, where a
sensible theory of geometric quantization of Poisson manifolds can be
proposed. Hopefully the repertoire of examples may be a guide to a theory
of geometric quantization of Poisson manifolds.
To do this we focus on a special class of Poisson manifolds that have two
helpful properties. First, we require that the Poisson structure be
symplectic on the complement of a real hypersurface $Z \subset M$ and
have a simple zero on $Z.$ Such $b$-symplectic manifolds have been the
subject of intensive study \cite{GMP, GMPS} and are by now well
understood geometrically. And second, we require that the manifold have
a Hamiltonian action of a torus with a certain nondegeneracy condition
(nonzero modular weight; see Theorem \ref{gmpsthm} below for the precise definition). The presence
of these two conditions allows us to bring tools from symplectic
geometry to bear on the problem. One concept which, as far as we know, has not been investigated at all in the $b$-symplectic setting, and which will be an essential ingredient in describing how to quantize these manifolds, is the concept of ``integrality'' for the $b$-symplectic form $\omega$ (or, alternatively, of ``pre-quantizability'' for the pair $(M,\omega)$); one of the main goals of this paper is to provide an appropriate definition of this concept and explore some of its consequences.
We then show that a natural functoriality
condition for quantization (``formal geometric quantization'') determines
what the quantization of the manifold should be.
Formal geometric quantization was studied in \cite{Wei} in the
context of the quantization of Hamiltonian $T$-spaces with proper moment map.
We will see that where $M$ is a compact $b$-symplectic manifold, with a Hamiltonian
torus action of nonzero modular weight,
the manifold $M - Z$ is such a space, and that an analog of formal
geometric quantization for $b$-symplectic manifolds yields essentially
the quantization of $M - Z.$ However, in the $b$-symplectic case, there
is a surprise; unlike in the case of noncompact manifolds with proper moment map, where the quantization is always infinite dimensional, though with
finite multiplicities, in the case of a $b$-symplectic manifold, the
quantization is always a {\em finite dimensional} virtual $T$-module. This
raises the question of whether it is the index of a Fredholm operator.
\section{{$b$}-symplectic manifolds}
Let $M$ be a compact, connected, oriented $n$-dimensional manifold, $Z\subseteq M$ a closed hypersurface and $f:\, M\rightarrow\mathbb{R}, f{|_{Z}}=0$, a defining function for $Z$. We recall (see \cite{GMP}) that a $b$-symplectic form on $M$ is a $2$-form of the form
\begin{equation}
\omega=\frac{df}{f}\wedge\mu+\gamma
\label{eq:2.1}
\end{equation}
\noindent
with $\mu\,\in\,\Omega^{1}(M)$ and $\gamma\in\Omega^2(M),$ which is symplectic in the usual sense on $M-Z,$ and is symplectic at $p \in Z$ as an element of $\wedge^2(^bT^*_p)$ where $^bT^*_p$ is the span of $T^*_p Z$ and the ``$b$-form'' $\left(\dfrac{df}{f}\right)_p$.
Some properties of the form (\ref{eq:2.1}) which we will need below are:
\begin{enumerate}
\item Let $\iota: Z\rightarrow M$ be the inclusion map. Then $\iota^*\mu=:\mu_Z$ is an intrinsically defined one-form on $Z$
\item $f$ is not intrinsically defined but replacing $f$ by $f=hg$ with $h>0$ on $Z$, $\dfrac{df}{f}=\dfrac{dg}{g}+\dfrac{dh}{h}$ so $\dfrac{df}{f}$ {\em is} intrinsically defined $\mod\Omega^1(M)$. Moreover
\begin{equation}
\omega=\frac{dg}{g}\wedge\mu+\gamma'
\label{eq:2.2}
\end{equation}
where
\begin{equation}
\gamma'=\gamma+\frac{dh}{h}\wedge\mu
\label{eq:2.3}
\end{equation}
\item Since $d\omega=0=-\dfrac{df}{f}\wedge d\mu+d\gamma$, the forms $\iota_Z^*\mu=\mu_Z$ and $\iota_Z^{*}\gamma=\gamma_Z$ are closed.
\item For $\omega_p, p\,\in\, Z$, to be symplectic in the sense described above, $\mu_Z$ has to be nonvanishing on $Z$ and hence, by item $3$, defines a foliation of $Z$. Moreover, symplecticity also requires that if $L$ is a leaf of this foliation, $\iota^*_L\,\gamma$ is a symplectic form on $L$. In addition, by (\ref{eq:2.2}), $\iota^*_L\gamma'=\iota^*_L\gamma$, so this symplectic structure on $L$ is intrinsically defined.
\end{enumerate}
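Property 2 rests on the elementary identity $d\log|f|=d\log|g|+d\log|h|$ for $f=hg$; here is a one-line symbolic check (ours, assuming SymPy, with the sample choice $h=1+x^{2}>0$):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = x             # a defining function for Z = {x = 0}
h = 1 + x**2      # any positive function
g = f / h         # so f = h*g is another defining function for Z

lhs = sp.diff(sp.log(f), x)                       # the coefficient of "df/f"
rhs = sp.diff(sp.log(g), x) + sp.diff(sp.log(h), x)
print(sp.simplify(lhs - rhs))  # 0
```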
Turning next to ``pre-quantization'' we note that, since $\mu_Z$ is intrinsically defined, so is its cohomology class, $[\mu_Z]$ and by (\ref{eq:2.3}) the cohomology class $[\gamma]$ is intrinsically defined as well. Moreover the Melrose-Mazzeo isomorphism
\begin{equation}
^bH^2(M,\mathbb{R})\rightarrow H^2(M,\mathbb{R})\oplus H^1(Z,\mathbb{R})
\label{eq:2.4}
\end{equation}
\noindent
maps $[\omega]$ onto $[\gamma]\oplus[\mu_Z]$, hence a natural definition of ``integrality'' for $\omega$, i.e. of ``$[\omega] \in ^b\!H^2\,(M,\mathbb{Z})$'' is to require that $[\mu_Z]$ be in $H^1(Z,\mathbb{Z})$ and $[\gamma]$ be in $H^2(M,\mathbb{Z})$. We will list a few consequences of this assumption.
\begin{enumerate}
\item The integrality of $\gamma$ implies that there exists a circle bundle,
\begin{equation}
\pi: V\rightarrow M
\label{eq:2.5}
\end{equation}
and a one form $\alpha$ on $V$ such that
\begin{equation}
d\alpha=\pi^*\gamma,
\label{eq:2.6}
\end{equation}
\begin{equation}
\iota (X)\alpha=1
\label{eq:2.7}
\end{equation}
\qquad and
\begin{equation}
{\mathcal{L}}_X\alpha=0
\label{eq:2.8}
\end{equation}
\noindent where $X$ is the generator of the circle action on $V$.
\item The integrality of $\mu_Z$ implies that there exists a map \textbabygamma$: Z\rightarrow S^1$ with the property
\begin{equation}
\mu_Z=\text{\textbabygamma}^*d\theta
\label{eq:2.9}
\end{equation}
\end{enumerate}
Therefore, in particular, the foliation of $Z$ that we described above is the one defined by the level sets of \textbabygamma, and hence, since $Z$ is compact, the leaves of this foliation are compact as well. Moreover if we let $v$ be the vector field on $Z$ defined by
\begin{equation}
\iota_v\mu_Z=1 \text{ and } \iota_v\gamma_Z=0
\label{eq:2.10}
\end{equation}
then $v$ and $\dfrac{\partial}{\partial\theta}$ are \textbabygamma-related. Therefore if we let $\phi: Z\rightarrow Z$ be the map $\exp 2\pi v$ and let $L$ be a leaf of the foliation defined by \textbabygamma, $Z$ can be identified with the mapping torus
\begin{equation}
L\times [0, 2\pi]/\sim
\label{eq:2.11}
\end{equation}
where ``$\sim$'' is the identification
\begin{equation}
(p,0) \sim (\phi(p), 2\pi)
\label{eq:2.12}
\end{equation}
\section{Group actions}
As in \S $2$ let $f: (M, Z)\rightarrow (\mathbb{R},0)$ be a defining function for $Z$ and let $Z_i, i=1, 2,\ldots, k$, be the connected components of $Z$. We will denote by $^bC^\infty(M)$ the space of functions which are $C^\infty$ on $M-Z$ and near each $Z_i$ can be written as a sum,
\begin{equation}
c_i\log|f| +g
\label{eq:3.1}
\end{equation}
\noindent
with $c_i\in\mathbb{R}$ and $g\in C^\infty(M)$.
Now let $T$ be a torus and $T\times M\rightarrow M$ an action of $T$ on $M$.\footnote{The material in this section is taken more or less verbatim from \cite{GMPS}.}
We will say that this action is {\em Hamiltonian} if the elements, $X\,\in\,\mathfrak t$ of the Lie algebra of $T$ satisfy
\begin{equation}
\iota (X_M)\,\omega=d\phi,\qquad \phi\in{}^bC^\infty(M),
\label{eq:3.2}
\end{equation}
\noindent
in other words:
\begin{equation}
\iota (X_M)\omega=c_i(X)\,d(\log|f|)+dg
\label{eq:3.2prime}
\end{equation}
\noindent
in a tubular neighborhood of $Z_i$, for some $g\in C^\infty(M)$.
The map
\begin{equation}
v_i:\ X\in\mathfrak t\ \mapsto\ c_i(X)
\label{eq:3.3}
\end{equation}
\noindent
is called the modular weight of $Z_i$ and depends on $i$; however, one can show (\cite{GMPS}, \S $2.3$)
\vspace{10pt}
\noindent
\begin{RefTheorem}[{\cite{GMPS}}]\label{gmpsthm} The $v_i$'s are either zero for all $i$ or non-zero for all $i$.\end{RefTheorem}
In this paper we will assume that the latter is the case, in which case, for fixed $i$, we can choose $X_i\in\mathfrak t$ such that $c_i(X_i)=1$. Hence, by (\ref{eq:2.11}), $\exp 2\pi X_i$ maps the leaves, $L_i$, of the null foliation of $Z_i$ onto themselves, and thus $\exp 2\pi X_i=\exp 2\pi Y_i$ where $Y_i\in\mathfrak t$ is tangent to the leaves of this foliation. Thus, replacing $X_i$ by $X_i-Y_i$, we can assume that $\exp 2\pi X_i$ is the identity map on $Z$. In other words the map
$$S^1\times L\rightarrow Z,\quad (\theta, p)\mapsto (\exp\theta X_i)\,p$$
\noindent
is a diffeomorphism, and hence the mapping tori (\ref{eq:2.11}) are all products: $L\times S^1$. Moreover if we split $T$ into a product
$$T=T_i\times S^1$$
\noindent where $T_i$ is the subgroup of $T$ fixing the leaves of the null-foliation of $Z_i$, the action of $T$ on $Z_i$ is just the product of the canonical action of $S^1$ on $S^1$ and of $T_i$ on $L$.
\section{Formal geometric quantization}
\subsection{Compact symplectic manifolds} Let $(M,\omega)$ be a compact
symplectic manifold and let $(\mathcal L, \nabla)$ be a line bundle with connection
of curvature $\omega.$ Choose an almost complex structure $J$ compatible
with the symplectic structure. Then this almost complex structure gives
$\mathcal L$ the structure of a complex line bundle, and by twisting the spin-$\mathbb{C}$ Dirac operator on $M$ by $\mathcal L$ we obtain an elliptic operator $\bar{\partial}_\mathcal L.$ Since $M$ is compact, $\bar{\partial}_\mathcal L$
is Fredholm, and we define the geometric quantization $Q(M)$ by
$$Q(M) = {\rm ind}(\bar{\partial}_\mathcal L)$$
\noindent as a virtual vector space.
If $M$ is equipped with a Hamiltonian action of a torus $T$, the action lifts
to $\mathcal L$, and one can choose the almost complex structure to be $T$-invariant.
Then the quantization $Q(M)$ is a finite-dimensional virtual $T$-module,
and it satisfies the following principle.
For $\xi \in \mathfrak t^*,$ we denote by $M//_\xi T$ the reduced space of $M$ at $\xi.$ Also, for $\alpha$ a weight of $T,$ and $V$ a virtual $T$-module, denote by $V^\alpha$ the submodule of $V$ of weight $\alpha.$
\begin{RefTheorem}[\cite{mein}]\label{qr}
Let $\alpha$ be a weight of $T.$ Then
\begin{equation}\label{qreq}
Q(M)^\alpha = Q(M//_\alpha T).\end{equation}
\end{RefTheorem}
In other words,
\begin{equation}\label{qrcons}
Q(M) = \bigoplus_\alpha Q(M//_\alpha T) \alpha
\end{equation}
\noindent as virtual $T$-modules.
\begin{Remark}\label{singrmk} Both Theorem \ref{qr} and equation (\ref{qrcons}) are strictly
speaking valid only for regular values of the moment map. In the case where
$\alpha$ is a singular value of the moment map, the singular quotient must be replaced
by a slightly different construction using a shift of $\alpha$. For details, we refer the interested reader to \cite{mein}. A similar caution applies in
the case of noncompact Hamiltonian $T$-spaces and of $b$-symplectic manifolds
below. \end{Remark}
\begin{Remark} If $(M,\omega)$ is a compact, integral symplectic manifold, one can always find a line bundle $\mathcal L$ with connection $\nabla$ of curvature $\omega,$ and the quantization
$Q(M)$ is independent of this choice. We therefore suppress the line bundle and connection and simply write $Q(M)$ for the quantization.\end{Remark}
\subsection{Noncompact Hamiltonian $T$-spaces} If we now consider the case
where $M$ is not compact, the analysis above cannot be carried out, since
the operator $\bar{\partial}_\mathcal L$ is elliptic, but no longer Fredholm. Instead,
in \cite{Wei} (see also \cite{p}), equation (\ref{qreq}) is used to {\em define} the quantization of such
Hamiltonian $T$-spaces, where the moment map is proper, so that the
reduced spaces are compact and the right hand side of equation (\ref{qreq})
makes sense.\footnote{It is also possible in this case to use index theory to define the
quantization; see \cite{b,p}.}
\begin{Definition}[\cite{Wei}]\label{fgq} Let $M$ be a Hamiltonian $T$-space with integral symplectic
form. Suppose the moment map for the $T$-action is proper.
Let $V$ be an infinite-dimensional virtual $T$-module with finite
multiplicities. We say
$$V= Q(M)$$
\noindent if for any compact Hamiltonian $T$-space $N$ with integral symplectic form, we have
\begin{equation}\label{qreqn1}
(V\otimes Q(N))^T = Q((M \times N)//_0T).\end{equation}
In other words, as in (\ref{qrcons}),
$$Q(M) = \bigoplus_\alpha Q(M//_\alpha T) \alpha,$$
\noindent where the sum is taken over all weights $\alpha$ of $T.$\footnote{Again, care must be taken
about singular values.}
\end{Definition}
Note that the fact that the moment map is proper implies that the reduced space $(M \times N)//_0T$ is compact for
any compact Hamiltonian $T$-space $N$, so that the right hand side of equation (\ref{qreqn1}) is well-defined.
In other words, we have used Theorem \ref{qr} to give us enough functoriality
to force a definition of the quantization in this case, despite the fact that the elliptic operator
$\bar{\partial}_\mathcal L$ is not Fredholm.
\subsection{$b$-symplectic manifolds}
Suppose now that $M$ is a compact $b-$symplectic manifold, with integral
$b-$symplectic form as above. Suppose that it is equipped with a Hamiltonian
action of a torus $T$ with {\em nonzero modular weight}. Then, in
analogy with Definition \ref{fgq}, we define
\begin{Definition}
Let $V$ be a virtual $T$-module with finite
multiplicities. We say
$$V= Q(M)$$
\noindent if for any compact Hamiltonian $T$-space $N$ with integral symplectic form, we have
\begin{equation}\label{qreqn2}
(V\otimes Q(N))^T = Q((M \times N)//_0T).\end{equation}
\end{Definition}
In other words,
$$Q(M) = \bigoplus_\alpha Q(M//_\alpha T) \alpha,$$
\noindent where the sum is taken over all weights $\alpha$ of $T$.\footnote{Again, adjusting for singular values as described in Remark \ref{singrmk}.}
In this $b$-symplectic case the condition that the modular weight be nonzero guarantees that the reduced space $(M \times N)//_0T$ is compact and symplectic (and, in the generic case, a manifold) for any compact Hamiltonian $T$-space $N$, so that
as in the case of noncompact Hamiltonian $T$-spaces, the right hand side of equation (\ref{qreqn2}) is well-defined.
Another way to say this is to note that
$$Q(M) = Q(M - Z)$$
\noindent where $Q(M-Z)$ is the quantization of the noncompact Hamiltonian $T$-space $M-Z$. The fact that the modular weights on $M$ are nonzero ensures that the moment map on $M-Z$ is proper.
The main result of this paper is that $Q(M)$ is a {\em finite} virtual $T$-module.
To see this, we must return to the geometry of the manifold $M$ in the
vicinity of the hypersurface $Z.$
\section{Symmetry properties}
We have shown above that if the modular weight $v_i$ of $Z_i$ is non-zero then, in the vicinity of $Z_i$, $M$ is just a product
\begin{equation}
Z_i\times (-\epsilon, \epsilon)
\label{eq:4.1}
\end{equation}
\noindent and $Z_i=S^1\times L$.
\noindent
We will show below that this can be slightly strengthened (see also \cite{GMPS0}): the $b$-symplectic form on $Z\times (-\epsilon, \epsilon)$ can be taken to be the two-form
\begin{equation}
-d\theta\wedge\dfrac{dt}{t} +\gamma_L
\label{eq:4.2}
\end{equation}
\noindent
where $\gamma_L$ is the symplectic form on $L$ and $-d\theta \wedge \dfrac{dt}{t}$ the standard $b$-symplectic form on $S^1\times (-\epsilon, \epsilon)$.
To see this, we note that, under the hypotheses above, we can assume that the symplectic form (\ref{eq:2.1}) has the form
\begin{equation}
d\theta \wedge \dfrac{dt}{t}+\gamma_L+d\theta \wedge \beta
\label{eq:4.5}
\end{equation}
Moreover if $\iota \left(\dfrac{\partial}{\partial\theta}\right)\beta=h$ we can replace $\beta$ by $\beta-h~d\theta$ in the expression above and arrange that $\iota \left(\dfrac{\partial}{\partial\theta}\right)\beta=0$. Hence since the action of $S^1$ on $M$ is Hamiltonian
$$\iota\left(\dfrac{\partial}{\partial\theta}\right)\omega=d(\log |t|+\rho)$$
\noindent for some $\rho~\in~ C^\infty(M)$ and hence
\begin{equation}
\beta=d\rho
\label{eq:4.6}
\end{equation}
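The step from the Hamiltonian condition to (\ref{eq:4.6}) can be checked directly (a routine computation, using that $\gamma_L$ has no $d\theta$ component and the normalization $\iota\left(\frac{\partial}{\partial\theta}\right)\beta=0$ arranged above): contracting (\ref{eq:4.5}) with $\dfrac{\partial}{\partial\theta}$ gives
$$\iota\left(\frac{\partial}{\partial\theta}\right)\omega=\frac{dt}{t}+\beta,$$
\noindent
while the Hamiltonian condition gives $\iota\left(\frac{\partial}{\partial\theta}\right)\omega=d(\log|t|+\rho)=\dfrac{dt}{t}+d\rho$; comparing the two expressions yields $\beta=d\rho$.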
Consider now the one parameter family of forms
\begin{equation}
d\theta \wedge \dfrac{dt}{t}+\gamma_L-s\,d(\rho\, d\theta)
\label{eq:4.7}
\end{equation}
\noindent
for $0\leq s\leq 1$. For $s=1$ this form is $\omega$ and for $s=0$ it is the form (\ref{eq:4.2}). Moreover for $\epsilon$ small and $-\epsilon<t<\epsilon$ the first summand of (\ref{eq:4.7}) is much larger than the third, so the form (\ref{eq:4.7}) is $b$-symplectic, and for all $s$
$$[\omega_s]=[\omega_0]$$
\noindent
so we can apply the $b$-Moser theorem (see \cite{GMP}) to conclude that $\omega_0$ and $\omega_1$ are equivariantly symplectomorphic.
\vspace{10pt}
Finally note that the $2$-form $\gamma_L$ depends in principle on $t$.
However the inclusion map
$$\iota: L\rightarrow L\times(-\epsilon, \epsilon),~p\rightarrow (p,0)$$
and the projection map
$$\pi: L\times (-\epsilon, \epsilon)\rightarrow L, (p, e)\rightarrow p$$
\noindent
induce isomorphisms on cohomology; hence $[\gamma_L]=[\pi^* \iota^*\gamma_L]$. Therefore, by Moser, we can assume that $\gamma_L=\pi^* \iota^* \gamma_L$, i.e. that $\gamma_L$ is just a symplectic $2$-form on $L$ itself.
\section{Formal quantization of $b$-symplectic manifolds}
We now prove the main result of this paper.
\begin{Theorem}\label{finite}
Let $M$ be an integral $b$-symplectic manifold equipped with a Hamiltonian $T$-action with nonzero modular weight.
Then the formal geometric quantization $Q(M)$ is a finite dimensional $T$-module.\end{Theorem}
\begin{Proof}
We will show that if we take for the quantization of $N=Z_i\times (-\epsilon, \epsilon)$ the sum:
\begin{equation}
\bigoplus_{\alpha} Q(N//_\alpha T)\,\alpha,\qquad \alpha\ \text{a weight of}\ T,
\label{eq:4.4}
\end{equation}
\noindent
then the virtual vector spaces $Q(N//_\alpha T)$ are all zero, and hence so is this sum.
To define $Q(N//_\alpha T)$ one has to define orientations on the $N//_\alpha T$, and to do this consistently one has to define an orientation on $N$. The natural way to do so would be to assign to each connected component of $N-Z$ the orientation defined by the symplectic form $\omega$; however, because of the factor $\dfrac{df}{f}$ in the formula (\ref{eq:2.1}), the symplectic orientations on adjacent components of $N-Z$ don't agree. In particular, if $N = Z_i\times (-\epsilon, \epsilon)$ and $\mathbb{R}\times \mathfrak t_i$ is the Lie algebra of $S^1\times T_i$, the moment map
$$\phi : N\rightarrow \mathbb{R}\times \mathfrak t^*_i$$
\noindent
associated with the action of $S^1\times T_i$ on $N$ is the map
$$(\theta, p, t)\in S^1\times L\times(-\epsilon, \epsilon)\mapsto (\log|t|,\ \phi_i (p))$$
\noindent
where $\phi_i: L\rightarrow \mathfrak t^*_i$ is the moment map associated with the $T_i$ action on $L$. Thus for a weight $(c, \alpha_i)$ of $S^1 \times T_i$, the reduced space
$$\phi^{-1}(c, \alpha_i)/(S^1\times T_i)$$
\noindent
consists of two copies of the reduced space
$$\phi^{-1}_i(\alpha_i)/T_i$$
\noindent
with opposite orientations, so the quantization of this space is a virtual vector space $V_+\oplus V_-$ with $V_-=-V_+$. \end{Proof}
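Spelled out at the level of virtual vector spaces, the final cancellation reads
$$Q\bigl(\phi^{-1}(c,\alpha_i)/(S^1\times T_i)\bigr)=V_+\oplus V_-=V_+\oplus(-V_+)=0,$$
\noindent
so every summand in (\ref{eq:4.4}) vanishes and hence $Q(N)=0$.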
We end the paper with a conjecture.
\begin{Conjecture} There exists a natural Fredholm operator on $M$ whose index gives $Q(M)$.\end{Conjecture}
Q: When I try to package my Python code into a .zip through YAML, the folder is empty

I'm using Azure DevOps, and I'm trying to package my Python code into a .zip file. I'm also using Azure Repos to host my Python project.
Here's my folder in my repo
Here's my YAML code for my pipeline
Everything works fine; my only issue is that the .zip file that gets published is completely empty, and I want it to contain my Python files.
A: By default, the source code files are checked out to the directory "Build.SourcesDirectory" on the agent machine. The artifact files produced by the build also end up in this directory.
By default, the directory "Build.ArtifactStagingDirectory" is empty; source code files and build artifact files are not automatically copied or generated into it.
So before archiving a .zip from "Build.ArtifactStagingDirectory", you should use the Copy Files task to copy the files you want to archive from "Build.SourcesDirectory" into "Build.ArtifactStagingDirectory".
For more details, you can view this document.
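A minimal sketch of the relevant pipeline steps (the task names are the standard Azure Pipelines tasks; the file pattern and artifact name are placeholders you should adapt to your own repo layout):

```yaml
steps:
# Copy the checked-out source files from Build.SourcesDirectory into the
# staging directory, which is empty by default.
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: '**/*.py'          # adjust this pattern to your project
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

# Now the staging directory actually contains files, so the archive
# will no longer be empty.
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.ArtifactStagingDirectory)'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    ArtifactName: 'drop'
```

With the Copy Files step in place, the published .zip should contain the copied Python files instead of being empty.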